Amazon Bedrock AgentCore Preview: A Solutions Architect’s First Look at the New Agentic AI Platform
By Anand Joshi, Solutions Architect, Rackspace Technology
Introduction
As organizations increasingly explore integrating AI agents into their workflows, Amazon Web Services (AWS) has introduced Amazon Bedrock AgentCore, a new agentic AI platform currently available in preview. I had the opportunity to test the platform hands-on with a Rackspace client and want to share my impressions and key findings for fellow solutions architects and developers considering this emerging technology.
What is Amazon Bedrock AgentCore?
Amazon Bedrock AgentCore is a comprehensive platform for deploying and operating highly capable AI agents securely and at scale. It takes a code-first approach, offering tight integration with AWS services while simplifying the complex orchestration typically required for agent-based applications. Because it is built on open foundations, agents remain agnostic to both the agent framework and the underlying reasoning models.
Core platform components
Amazon Bedrock AgentCore is built around the following modular components that work together to provide a complete agentic AI platform.
- AgentCore Runtime: The execution environment where agents run and process requests; it can also run Model Context Protocol (MCP) servers at scale. It is a purpose-built runtime for AI agents that supports asynchronous processing and long-running agents, with sessions of up to eight hours that can be extended by configuration.
- AgentCore Gateway:
- Enables existing AWS Lambda functions and OpenAPI-based endpoints to act as MCP tools.
- Tightly integrates with AgentCore Identity to secure MCP calls with standard authentication/authorization protocols, such as OAuth 2.0/OpenID Connect.
- Enables dynamic updates to available tools that agents can invoke.
- Provides semantic tool selection, reducing the size of the context sent to the reasoning model.
- AgentCore Identity: A comprehensive identity and credential management service designed specifically for AI agents that:
- Manages authentication and authorization for agents and MCP servers deployed to AgentCore Runtime.
- Manages inbound and outbound authentication/authorization to and from the AgentCore Gateway and gateway targets.
- Uses standardized OAuth 2.0 flows and API keys.
- AgentCore Memory: Provides sophisticated short-term and long-term memory capabilities.
- AgentCore Observability: Offers monitoring, logging and performance insights using OpenTelemetry (OTel).
- AgentCore tools: Includes built-in capabilities, like code interpreter and browser tools.
- AgentCore starter toolkit: Enables developers to deploy local AI agents to AgentCore with zero infrastructure setup, significantly reducing the barrier to entry for getting started on the platform.
- AgentCore SDK: Currently available only in Python. Like other AWS service SDKs, the AgentCore SDK can be used to create AgentCore resources imperatively. Support for provisioning AgentCore resources through CloudFormation, CDK or Terraform is not yet available; until it is, the SDK is the solution.
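To make the code-first model concrete, a minimal agent entrypoint might look like the sketch below. The handler logic is plain Python; the commented registration lines assume the preview SDK's BedrockAgentCoreApp interface, so treat those names as assumptions rather than a definitive API.

```python
# Minimal sketch of an AgentCore agent entrypoint.
# The handler itself is plain Python and framework-agnostic.
def handler(payload: dict) -> dict:
    """Process one /invocations request and return a JSON-serializable result."""
    prompt = payload.get("prompt", "")
    # A real agent would call a reasoning model here; we echo for illustration.
    return {"result": f"echo: {prompt}"}

# Registration with the preview Python SDK (names are assumptions based on
# the preview documentation; requires the bedrock_agentcore package):
# from bedrock_agentcore.runtime import BedrockAgentCoreApp
# app = BedrockAgentCoreApp()
# app.entrypoint(handler)
# app.run()
```

The point of keeping the handler free of SDK imports is that the same function can be unit tested locally before it is ever deployed to the Runtime.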
Key features and capabilities
Code-first development with open foundations
One of AgentCore's standout features is its emphasis on a code-first methodology built on open foundations. This approach will feel familiar to developers already working within the AWS ecosystem, allowing them to leverage existing skills and workflows. The platform integrates seamlessly with the Strands Agents framework, providing a cohesive development experience.
Importantly, AgentCore's open foundation architecture means that agents built with popular frameworks, including LangChain, LangGraph and CrewAI, can be deployed to the AgentCore runtime without modification. This framework flexibility eliminates vendor lock-in concerns and allows teams to use their preferred development tools and methodologies.
Multi-provider model support
Beyond framework flexibility, AgentCore demonstrates true vendor neutrality by supporting models from multiple providers, not just Amazon Bedrock. During my tests, I confirmed support for models from OpenAI, Anthropic, Hugging Face and even local deployments through Ollama. This multi-provider approach gives organizations the freedom to choose the best models for their specific use cases while maintaining a consistent deployment and management experience through the AgentCore platform.
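To make the multi-provider point concrete, here is a minimal, framework-neutral sketch of selecting a model by configuration rather than code. The provider names and model identifiers are illustrative stand-ins, not an AgentCore API.

```python
# Illustrative provider-neutral model selection; identifiers are stand-ins.
MODEL_CONFIGS = {
    "bedrock": {"model_id": "anthropic.claude-3-5-sonnet-20240620-v1:0"},
    "openai": {"model_id": "gpt-4o"},
    "ollama": {"model_id": "llama3", "host": "http://localhost:11434"},
}

def resolve_model(provider: str) -> dict:
    """Return the model configuration for a provider, or raise for unknown ones."""
    try:
        return MODEL_CONFIGS[provider]
    except KeyError:
        raise ValueError(f"unknown provider: {provider}") from None
```

Centralizing the choice this way is what lets a team swap providers for cost or capability reasons without touching agent logic.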
AgentCore Gateway and protocol management
The MCP gateway functionality impressed me during testing. This component allows you to transform existing Lambda functions and OpenAPI endpoints into MCP tools without significant refactoring.
What's particularly noteworthy is the dynamic nature of this integration. When new tools are added to the gateway, they become immediately available to agents without requiring restarts or code changes. The AgentCore Gateway serves as more than just a protocol translator. It acts as the central nervous system for tool and service integration, handling routing and API management across the platform.
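Under the hood, that dynamic discovery is ordinary MCP JSON-RPC traffic. The sketch below builds the two messages involved; tools/list and tools/call are the standard MCP methods, while the tool name and arguments shown are hypothetical examples of a Lambda-backed tool.

```python
def jsonrpc_request(method: str, params: dict, req_id: int = 1) -> dict:
    """Build a JSON-RPC 2.0 request envelope as used by MCP."""
    return {"jsonrpc": "2.0", "id": req_id, "method": method, "params": params}

# Discover whatever tools the gateway currently exposes (no restart needed):
list_tools = jsonrpc_request("tools/list", {})

# Invoke one of them; this tool name and its arguments are hypothetical:
call_tool = jsonrpc_request(
    "tools/call",
    {"name": "get_order_status", "arguments": {"order_id": "1234"}},
    req_id=2,
)
```

Because agents re-issue tools/list as needed, a tool registered with the gateway moments ago shows up in the very next discovery call.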
AgentCore Memory
AgentCore's memory component goes beyond simple conversation history. The platform maintains both short-term conversations and long-term memories, creating a personalized experience for end users. The memory system uses namespaces to organize different types of data, including:
- Short-term memory: Stores recent conversation context with support for top-K retrieval and semantic filtering.
- Long-term memory: Captures user preferences, conversation summaries and insights extracted asynchronously.
This dual-layer approach helps minimize latency and costs while maintaining rich contextual awareness. Context is brought in through API calls to AgentCore Memory and can be wired up as agent hooks or agent tools using a framework-native mechanism. For example, the Strands Agents framework lets you associate agent lifecycle hooks with Python functions that call the AgentCore APIs. When using AgentCore Memory as a tool, the Strands Agents package (strands_tools.memory) provides tools that can be configured at agent initialization.
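To illustrate what top-K retrieval over short-term memory does conceptually, here is a toy scorer that uses naive word overlap in place of real embeddings. AgentCore's actual semantic filtering is served by the Memory APIs; this is purely a mental model.

```python
def top_k_turns(query: str, turns: list[str], k: int = 2) -> list[str]:
    """Rank stored conversation turns by naive word overlap with the query.
    (A stand-in for semantic similarity; real systems use embeddings.)"""
    q = set(query.lower().split())
    return sorted(
        turns, key=lambda t: len(q & set(t.lower().split())), reverse=True
    )[:k]

turns = [
    "user prefers concise answers",
    "we discussed the quarterly sales report",
    "user asked about sales forecasts for Q3",
]
relevant = top_k_turns("what did we discuss about sales", turns, k=2)
```

Only the two sales-related turns come back, which is the latency and cost win: the reasoning model sees the relevant slice of history instead of the whole transcript.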
Streamlined CLI tooling and development experience
The AgentCore CLI significantly simplifies the agent development lifecycle. During my testing, I found it handled several critical tasks seamlessly, including:
- Automated Dockerfile generation
- Agent IAM role configuration with policy
- ECR registry creation and management
- Image building and pushing to ECR
- Local testing capabilities
This tooling abstracts away much of the infrastructure complexity, allowing developers to focus on agent logic rather than deployment mechanics. The combination of the CLI tools and the starter toolkit creates an exceptionally smooth development experience from local development to cloud deployment.
AgentCore Identity and security integration
The AgentCore Identity component integrates well with OAuth 2.0 compatible identity servers, including enterprise solutions like Ping Identity. This enterprise-ready approach to authentication and authorization is crucial for production deployments in regulated industries, providing centralized security management across all agent interactions and tool calling.
Architecture and runtime behavior
Unified deployment model
An interesting architectural decision is how AgentCore treats agent and MCP server deployments. The deployment process is nearly identical for both, differentiated only by a protocol flag in the CLI. This consistency simplifies operations and reduces the learning curve for teams managing multiple component types.
Runtime invocation patterns
At runtime, agents are invoked via the /invocations endpoint. MCP servers use the same endpoint but with an important distinction — all MCP JSON-RPC messages are proxied to the MCP endpoint of the running server. This design provides a consistent interface while maintaining protocol-specific functionality.
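That shared-endpoint design can be pictured as a simple dispatcher: if the inbound body is a JSON-RPC envelope, proxy it to the MCP server; otherwise treat it as a direct agent invocation. The sketch below is my mental model of the observed behavior, not AgentCore's actual implementation.

```python
def dispatch(body: dict) -> str:
    """Route an /invocations request body: JSON-RPC envelopes go to the MCP
    server, anything else is treated as a direct agent invocation.
    (Illustrative model of the observed behavior, not AgentCore internals.)"""
    if body.get("jsonrpc") == "2.0":
        return "proxy-to-mcp"
    return "invoke-agent"
```

One endpoint with protocol-aware routing is what keeps the deployment and invocation experience identical for agents and MCP servers.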
Memory communication patterns
Agent communication with AgentCore memory follows a synchronous pattern for immediate needs, with data written to short-term memory when agents return responses to users. The system then processes insights, like user preferences and conversation summaries, asynchronously into long-term memory, balancing responsiveness with comprehensive context retention.
Current limitations and considerations
A2A server support
Currently, AgentCore does not support hosting agent-to-agent (A2A) servers. This limitation may impact architectures requiring complex multi-agent interactions, though this could change as the platform evolves. If an architecture absolutely requires an A2A server, it can be deployed to Amazon Elastic Container Service (Amazon ECS) instead.
Memory visibility challenges
One area needing improvement is memory content visibility within the AgentCore Memory component. The platform currently lacks good CLI or console-based methods to inspect memory contents. Developers must use the SDK and make direct API calls to retrieve this data, which can complicate debugging and monitoring.
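For debugging, a small helper that pages through memory records via the data-plane client takes some of the sting out of this. The operation and parameter names below (list_memory_records, memoryId, namespace, nextToken) are assumptions based on the preview API, so verify them against the current SDK; the client is injected so the sketch stays testable without AWS credentials.

```python
def dump_memory_records(client, memory_id: str, namespace: str) -> list:
    """Page through long-term memory records for inspection.
    Operation/parameter names are assumptions based on the preview data-plane
    API, e.g. client = boto3.client("bedrock-agentcore")."""
    records, token = [], None
    while True:
        kwargs = {"memoryId": memory_id, "namespace": namespace}
        if token:
            kwargs["nextToken"] = token
        resp = client.list_memory_records(**kwargs)
        records.extend(resp.get("memoryRecords", []))
        token = resp.get("nextToken")
        if not token:
            return records
```

Until the console catches up, a utility like this is the practical way to see what the asynchronous extraction has actually written to long-term memory.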
Observability maturity
While AgentCore includes an observability component, the depth and richness of monitoring capabilities will be crucial for production deployments. A significant strength of the platform is that all major components of AgentCore support OTel, providing comprehensive observability for AI agent workflows. This integration allows you to trace, monitor and debug your agents and related resources with data automatically emitted in a standardized, OTel-compatible format.
This OTel support means organizations can integrate AgentCore monitoring into their existing observability infrastructure, whether they use commercial tools like Datadog and New Relic or open-source solutions like Jaeger and Prometheus. The standardized format ensures that AgentCore fits naturally into established monitoring workflows rather than requiring separate, siloed observability tools.
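Because the telemetry is standard OTel, pointing workloads at an existing collector is typically just the usual OTLP exporter environment variables defined by the OpenTelemetry specification; the endpoint value below is a placeholder for your own collector.

```shell
# Standard OpenTelemetry exporter settings (values are placeholders):
export OTEL_SERVICE_NAME=my-agent
export OTEL_EXPORTER_OTLP_ENDPOINT=http://localhost:4318
export OTEL_EXPORTER_OTLP_PROTOCOL=http/protobuf
```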
Infrastructure-as-code limitations
As a preview service, AgentCore currently lacks support for popular infrastructure-as-code (IaC) tools, including AWS CDK, CloudFormation and Terraform. This limitation means that organizations with established IaC practices cannot yet integrate AgentCore provisioning into their existing deployment pipelines.
However, AWS has indicated that support for these tools will be added as the service transitions from preview to general availability. In the interim, the Amazon AgentCore Python SDK provides programmatic access for resource provisioning, offering a bridge solution for organizations that need automated deployment capabilities.
Practical implications for enterprise adoption
Framework and model flexibility
The open foundation approach provides significant strategic value for enterprise adoption. Organizations can:
- Preserve existing investments: Teams already using LangChain, LangGraph or CrewAI can migrate to AgentCore without rewriting their agents.
- Avoid model vendor lock-in: The ability to use models from OpenAI, Anthropic, Hugging Face, Ollama and Amazon Bedrock provides the flexibility to optimize for cost, performance or specific capabilities.
- Maintain development consistency: Different teams can continue using their preferred frameworks while standardizing on AgentCore for deployment and operations.
Developer experience
The code-first approach and comprehensive CLI tooling create a positive developer experience, especially for teams already familiar with AWS services. The ability to leverage existing Lambda functions through the MCP gateway provides a clear migration path for organizations with established serverless architectures.
The open foundation architecture further enhances the developer experience by eliminating the need to learn platform-specific frameworks or be constrained to specific model providers.
The intelligent memory management, with its semantic filtering and top-K retrieval capabilities, shows promise for optimizing both latency and costs — critical factors for production agent deployments at scale.
Enterprise readiness
The OAuth integration and namespace-based memory organization indicate Amazon's focus on enterprise requirements. The comprehensive OTel support across all major components is particularly noteworthy for enterprise adoption, as it allows seamless integration with existing observability toolchains and eliminates the need for proprietary monitoring solutions.
However, the current limitations around memory visibility, combined with the lack of IaC support, may require additional planning for production operations. Organizations should factor in the preview status when planning deployments, particularly those with strict IaC governance requirements. The Python SDK provides a programmatic alternative for now, but teams may want to wait for full CDK/CloudFormation/Terraform support if IaC integration is critical to their deployment strategy.
Looking forward
Amazon Bedrock AgentCore represents a solid foundation for agentic AI applications, particularly for organizations already invested in the AWS ecosystem. The platform's code-first approach, combined with its sophisticated memory management and streamlined deployment tooling, addresses many common pain points in agent development.
However, as with any preview service, there are areas for improvement and considerations for enterprise adoption. Enhanced visibility into memory contents, expanded A2A server support, and the planned addition of IaC support will be important milestones as the service matures toward general availability.
The current reliance on the Python SDK for programmatic provisioning, while functional, may limit adoption among organizations with strict IaC governance. The promised addition of CDK, CloudFormation and Terraform support will be crucial for broader enterprise acceptance.
For organizations evaluating agentic AI platforms, AgentCore warrants serious consideration, especially if you're looking for tight AWS integration and have existing investments in Lambda and other AWS services. The MCP gateway functionality alone could provide significant value for organizations looking to enhance existing APIs with agent capabilities.
As the platform continues to evolve through its preview phase, I'll be monitoring its development closely and plan to share additional insights as new capabilities become available.

