Elastic Agent Builder is a set of capabilities for creating data-driven AI agents directly in Elasticsearch. In previous posts of the series, we demonstrated how to equip custom agents with tools to perform complex tasks and how to provide them with custom instructions to guide their behavior.
But what if you want to use your custom agents with the applications and productivity tools you already rely on?
That's where the Agent-to-Agent (A2A) protocol comes in. A2A is an open standard for interoperability, allowing agents from different platforms to communicate and collaborate. And we’ve built it directly into the Elastic Agent Builder.
Today, we're going to show you how to take a custom agent you've built and expose it to other services, specifically Gemini Enterprise (formerly Agentspace).
The power of open standards: why A2A matters
In the blog post Your first Elastic Agent, we showed how to build custom agents, such as a Financial Assistant agent with secure access to your market data. But its value is limited if you can't make its insights available in other environments, like Gemini Enterprise, without rebuilding your work.
This challenge of interoperability is what holds agentic AI back. Agents need a common language to communicate across platforms, which is precisely the role of the A2A protocol. It provides a standard communication layer that not only lets you interact with your agent directly, but also unlocks a future where specialized agents across your organization can collaborate and share insights.
To make this possible, the Elastic Agent Builder natively supports the A2A protocol through two standard endpoints for all your agents:
- The Agent Card endpoint (`GET {your-kibana-url}/api/agent_builder/a2a/{agentId}.json`) - This acts as your custom agent's business card. It provides metadata about your agent (name, description, capabilities, etc.) to any A2A-compatible service.
- The A2A Protocol endpoint (`POST {your-kibana-url}/api/agent_builder/a2a/{agentId}`) - This is the communication channel. Other agents send their requests here, and your agent processes them and returns a response, all following the A2A protocol specification.
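The two endpoints above differ only in the trailing `.json` suffix. As a quick sketch, here is how you might build both URLs programmatically before wiring them into a client (the Kibana URL and agent id below are placeholders, not real values):

```python
def agent_card_url(kibana_url: str, agent_id: str) -> str:
    """Agent Card endpoint: metadata about the agent (GET)."""
    return f"{kibana_url}/api/agent_builder/a2a/{agent_id}.json"

def a2a_endpoint(kibana_url: str, agent_id: str) -> str:
    """A2A protocol endpoint: where other agents POST their requests."""
    return f"{kibana_url}/api/agent_builder/a2a/{agent_id}"

# Hypothetical deployment URL and the agent id used later in this post
KIBANA_URL = "https://my-deployment.kb.example.com"
print(agent_card_url(KIBANA_URL, "financial_assistant"))
print(a2a_endpoint(KIBANA_URL, "financial_assistant"))
```

Fetching the Agent Card URL with an HTTP client (and valid credentials) returns the agent's metadata as JSON, which is exactly what A2A-compatible services read during discovery.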
Test your agent with the A2A inspector
Before connecting our agent to a production system, it's good to check that it's communicating correctly. The easiest way to do this is with the A2A Inspector, a tool designed specifically for testing and debugging A2A integrations.
Getting the inspector running is straightforward. You can clone the a2a-inspector repository and follow the README instructions to run the application. Once started, the UI is available by default at http://localhost:5001/.
To connect the A2A Inspector to your agent, you'll need to provide two key pieces of information:
- Agent Card URL: This is the endpoint that describes your agent. For the Financial Assistant agent from our previous post, this URL would be `{your-kibana-url}/api/agent_builder/a2a/financial_assistant.json`.
- Authentication Header: We'll use a standard API key for authentication.
Once you enter these details in the inspector's UI, you can connect and begin chatting with your agent immediately.
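If you created your API key in raw `id`/`secret` form, the header value Elasticsearch and Kibana expect is the `ApiKey` scheme with `base64("id:secret")`. A small sketch of building that header (the id and secret below are made-up placeholders):

```python
import base64

def api_key_header(api_key_id: str, api_key_secret: str) -> dict:
    """Build the Authorization header for an Elasticsearch API key.

    Elasticsearch API keys are sent as base64("id:secret") under the
    ApiKey scheme; the Create API key response also returns this value
    pre-encoded in its "encoded" field, which you can use directly.
    """
    token = base64.b64encode(
        f"{api_key_id}:{api_key_secret}".encode()
    ).decode()
    return {"Authorization": f"ApiKey {token}"}

# Placeholder credentials for illustration only
print(api_key_header("my_key_id", "my_key_secret"))
```

Paste the resulting `Authorization: ApiKey ...` value into the inspector's authentication field.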

This simple validation gives us the confidence that our agent is configured correctly and is ready for the next step.
Go live! Your custom agent in Gemini Enterprise
Now for the exciting part: bringing our custom financial advisor agent to life within Gemini Enterprise (formerly Agentspace). This integration is powered by the Elastic AI Agent, which is available on the Google Cloud Marketplace.
Once connected, Gemini Enterprise uses the A2A protocol to communicate directly with your agent. This is where the true power of interoperability shines: users can now access the deep, data-driven insights from your custom Elasticsearch agent without ever leaving their familiar environment. You can see your custom Elastic Agent in the agents list:

Imagine a user in Gemini Enterprise asking:
"I'm worried about market sentiment. Can you show me which of our clients are most at risk from bad news?"
Behind the scenes, Gemini Enterprise routes this query through the A2A protocol to your custom Elastic agent. Your agent then uses its specialized tools to query your data, formulate an answer, and send it back. For the end user, the experience is seamless.
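Under the hood, that routing is a JSON-RPC 2.0 call to the A2A protocol endpoint. As a sketch, here is roughly what a `message/send` request body looks like per the A2A specification (the helper name and the ids generated here are ours; only the envelope shape follows the spec):

```python
import uuid

def build_a2a_request(user_text: str) -> dict:
    """Assemble a JSON-RPC 2.0 envelope for A2A's message/send method."""
    return {
        "jsonrpc": "2.0",
        "id": str(uuid.uuid4()),          # request id for correlating the response
        "method": "message/send",
        "params": {
            "message": {
                "role": "user",
                "messageId": str(uuid.uuid4()),
                # A2A messages are composed of typed parts; plain text here
                "parts": [{"kind": "text", "text": user_text}],
            }
        },
    }

payload = build_a2a_request(
    "Which of our clients are most at risk from bad news?"
)
print(payload["method"])
```

POSTing this payload (with your API key header) to the A2A protocol endpoint returns the agent's answer in the JSON-RPC response, which is exactly the exchange the inspector let us observe earlier.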

And it doesn't stop here! The answer retrieved by the Elastic agent can now serve as context for follow-up questions, which may trigger a different specialized agent (e.g., your investment platform's agent adjusting exposure to listed companies). All without leaving your search bar.
With your Elastic agents deployed to Gemini Enterprise via A2A, you can unify access, orchestration, and workflows, removing friction between AI, search, and enterprise systems by offering a single UI where users talk to their data and tools, all in context. For users, that means less tool-switching and more intuitive, capable AI assistants. For organizations, it means coherent governance, scalability, and interoperability built in.
Your turn to build
You now have the tools to make your Elastic Agents available anywhere. By leveraging the open A2A protocol, you can extend the reach of your custom, data-aware agents.
In this post, we walked you through the key steps:
- Exposing your agent via the A2A Agent Card and Protocol endpoints.
- Testing the connection with the A2A Inspector.
- Integrating your agent live into an external service like Google's Gemini Enterprise.
Your agents no longer need to be isolated. We can’t wait to see the powerful, interconnected systems you create. Happy building!
The easiest way to get started is with your Elastic Cloud free trial on Google Cloud Marketplace.
Level up your skills with our on-demand webinars: Agentic RAG using Elasticsearch and Intro to MCP with Elasticsearch MCP Server.
You can also take advantage of Elastic’s gen AI capabilities now by starting a free cloud trial or running Elasticsearch locally.