There are already tons of blog posts and vlogs floating around about agent-based systems, protocols, and AI orchestration. Most of them go deep into the theory or stick to flashy diagrams — but leave developers wondering: "Okay, but how do I actually use this?"
In this post, I’ll take a different approach.
Instead of just talking about what the Google A2A Protocol is, I’ll walk you through how to build a real system with it — step by step — using working Python examples. You’ll see how agents can talk to each other, how they collaborate to solve a task like travel planning, and how you can plug this into a local LLM like Ollama.
Google's Agent-to-Agent (A2A) protocol is an open specification that allows AI agents to:
Expose capabilities (like "get weather" or "search the web"),
Advertise themselves on a shared network,
And receive and handle tasks using standard HTTP-based JSON APIs.
Think of it like giving each agent its own little resume, email address, and inbox — so that other agents can contact it and ask for help.
If you’re curious how the protocol works under the hood, please refer to the official protocol specification and the excellent hands-on tutorial from the Google Cloud community: Getting Started with Google A2A – A Hands-On Tutorial
The official GitHub repository provides protocol definitions and good examples to get started, but for Python beginners it isn't straightforward to dive into right away, especially if you're not comfortable setting up everything from scratch. That's why this blog post takes a human-friendly, practical approach, showing you exactly what to write and how it works, one step at a time.
Therefore, I explored several third-party community implementations that help reduce boilerplate code and offer a much smoother entry point into working with A2A agents. These tools allow you to focus more on logic and behavior rather than wiring up the protocol manually.
On GitHub, I found two popular high-level Python libraries that stand out:
🧩 python-a2a — a simple abstraction layer that makes agent creation and communication intuitive.
🔄 a2a-server — another community package that wraps the A2A protocol and integrates nicely with FastAPI and LangChain.
These libraries are great for getting your hands dirty quickly, especially if you're building practical use cases like agent orchestration or workflows. And that's exactly what we'll use in the upcoming examples.
Of the two, python-a2a appears to be the more advanced option. It requires minimal external dependencies, making it easier to use in lightweight setups or small projects. With python-a2a, you can spin up agents and connect them with just a few lines of code, perfect for quick experimentation or educational use cases.
Before we jump into coding, let’s first understand in plain language what the Google Agent-to-Agent (A2A) protocol is all about. Think of it as a set of rules that let AI agents talk to each other smoothly over the web.
Here are the main building blocks you need to know:
Every agent has a public "business card" — a small file (usually found at /.well-known/agent.json) that tells the world:
What the agent can do (its skills),
Where to find it (the URL),
How to talk to it (authentication and more).
Clients read this card to discover and connect to agents.
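As a quick sketch of that discovery step: fetching a card is just an HTTP GET against the well-known path. The helper names below are my own, and the printed fields are illustrative; only the /.well-known/agent.json path comes from the protocol.

```python
import json
import urllib.request

def agent_card_url(base_url: str) -> str:
    """Build the well-known discovery URL for an A2A agent."""
    return base_url.rstrip("/") + "/.well-known/agent.json"

def fetch_agent_card(base_url: str) -> dict:
    """Download and parse an agent's public card."""
    with urllib.request.urlopen(agent_card_url(base_url)) as resp:
        return json.load(resp)

if __name__ == "__main__":
    # Assumes an A2A agent is already serving its card on port 8001.
    card = fetch_agent_card("http://localhost:8001")
    print(card.get("name"), "-", card.get("description"))
```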
This is the agent that listens for requests. It exposes an HTTP API following the A2A protocol. You can think of it like a smart bot that's ready to accept tasks and do something useful — like fetching weather, searching the web, or generating text.
This is any app (or another agent) that sends tasks to the A2A server. It knows how to talk to the agent using the protocol, and it’s the one that says: “Hey agent, I need you to do this for me.”
A task is the actual "job" or "command" you want the agent to perform. It could be anything — "What's the weather in Paris?" or "Summarize this article." Each task has a unique ID and goes through stages like:
submitted → just sent
working → the agent is doing its job
input-required → the agent is waiting for more input
completed → all done
failed or canceled → something went wrong or it was stopped
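In code, you mostly care about whether a task is still in flight or already done. Here is a tiny sketch using the state names above as plain strings (the python-a2a library wraps them in a TaskState enum, as you'll see in the examples later):

```python
# Terminal states: once a task reaches one of these, it will not change again.
TERMINAL_STATES = {"completed", "failed", "canceled"}

def is_finished(state: str) -> bool:
    """True once a task has reached a terminal state."""
    return state in TERMINAL_STATES

# A typical happy path: submitted -> working -> completed
for state in ["submitted", "working", "completed"]:
    print(state, "finished?", is_finished(state))
```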
Messages are the back-and-forth communication between the client and the agent during a task. Each message has a role:
"user" → from the client
"agent" → from the agent
Messages carry actual content (called "Parts") — like text, images, or structured data — that help the agent understand what to do.
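To make the two roles concrete, here are two illustrative message shapes as Python dictionaries. The exact JSON layout can vary between protocol versions; the field names here simply mirror the handle_task examples used later in this post.

```python
# Illustrative message shapes (field names mirror the examples later in this post).
user_message = {
    "role": "user",  # from the client
    "content": {"type": "text", "text": "What's the weather in Paris?"},
}
agent_message = {
    "role": "agent",  # from the agent
    "content": {"type": "text", "text": "Please ask about weather in a specific location."},
}
print(user_message["role"], "->", agent_message["role"])
```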
While the A2A Protocol provides several strengths that make it well-suited for building agent-based systems, it also comes with certain drawbacks.
JSON-RPC schema: Provides a standardized and lightweight communication format between agents.
Agent marketplace and discovery: Enables agents to find and interact with one another dynamically.
Agent card concept: Offers a structured way to define agent capabilities and metadata.
Built-in authentication: Ensures secure interactions from the ground up.
Support for MCP (Model Context Protocol): Enhances compatibility with other agent-based systems and tooling.
Complex codebase: Difficult to read and understand, especially for beginners.
Challenging testing process: Writing tests for agent interactions can be non-trivial.
Multi-agent coordination issues: Managing communication and logic across multiple agents is often error-prone.
Library conflicts: Integration can be problematic due to clashes with various Python libraries.
One more thing to clarify: there’s no competition between A2A and the MCP (Model Context Protocol). They serve different but complementary purposes:
MCP is great for low-level operations like accessing local files, querying databases, or running tools and functions on the same machine.
A2A, on the other hand, shines in communication and collaboration between different AI agents, even if they live on separate servers.
In real-world AI applications, you often need both:
Use MCP when an agent needs to use a tool.
Use A2A when multiple agents need to collaborate as a team.
In the next section, we’ll walk through building a real-world example: a Travel Planner AI that uses multiple A2A agents to provide a personalized itinerary.
To put everything we've learned into action, let’s walk through a hands-on example of how to build a Travel Planner AI using the A2A protocol.
We’re planning a short holiday trip. We want an AI assistant (an A2A agent server) that can help us decide where to go and what to do based on the weather.
Here's the idea:
We create a Travel Planner Agent Server using A2A protocol.
This server doesn't do all the work by itself—it talks to two other agents:
🌤 Weather Agent – fetches real-time weather forecast using the OpenWeather API.
🔍 Brave Search Agent – uses Brave Search API to recommend activities depending on the weather (indoor for bad weather, outdoor for good weather).
By combining these two sources, the travel planner can give smart, personalized suggestions for your trip.
The user asks the Travel Planner Agent to plan a trip to Paris.
The Planner agent first asks the Weather Agent: “What’s the weather like in Paris?”
Depending on whether it’s “clear” or “rainy,” the Planner decides what kind of activity is best (outdoor or indoor).
Then it calls the Brave Search Agent with that context: “Show me outdoor activities in Paris.”
After the Travel Planner Agent gathers weather information and activity suggestions from the other agents, the final step is to turn all that data into a friendly and helpful summary. This is where a local LLM (Large Language Model) comes in.
Instead of sending your private travel data to a third-party API, we’ll run a local LLM, such as one powered by Ollama, to generate the final travel itinerary.
In the next section, we’ll walk through step-by-step how to:
Create the Travel Planner A2A server
Implement the Weather Agent
Implement the Brave Search Agent
Connect them using the python-a2a library
Use an LLM to summarize everything
This approach shows how real AI agents can collaborate using A2A, each doing its job well—and together building something smarter.
Local LLM reference: Ollama with any LLM like llama3.2
UV: a modern Python project & package manager
Venv
OPENWEATHER API KEY
BRAVE API KEY
# Clone the repository
git clone https://github.com/themanojdesai/python-a2a.git
cd python-a2a
# Create a virtual environment and install development dependencies
uv venv
source .venv/bin/activate # On Windows: .venv\Scripts\activate
uv pip install -e ".[dev]"
Create a new Python file named WeatherAgent.py and add the following code:
# uv run WeatherAgent.py
from python_a2a import A2AServer, skill, agent, run_server, TaskStatus, TaskState
import os
import requests
import logging

@agent(
    name="Weather Agent",
    description="Provides weather information",
    version="1.0.0",
    url="https://zzz.example.com"
)
class WeatherAgent(A2AServer):

    @skill(
        name="Get Weather",
        description="Get current weather for a location",
        tags=["weather", "forecast"],
        examples="I am a weather agent for getting weather forecast from Open weather"
    )
    def get_weather(self, location):
        """Get real weather for a location using OpenWeatherMap API."""
        api_key = os.getenv("OPENWEATHER_API_KEY")
        if not api_key:
            return "Weather service not available (missing API key)."
        try:
            url = (
                f"https://api.openweathermap.org/data/2.5/weather?"
                f"q={location}&units=imperial&appid={api_key}"
            )
            logging.debug(f"Request URL: {url}")  # Log the full request URL
            response = requests.get(url, timeout=5)
            response.raise_for_status()
            logging.debug(f"Response Status Code: {response.status_code}")  # Log status code
            logging.debug(f"Response Text: {response.text}")  # Log raw response text
            data = response.json()
            temp = data["main"]["temp"]
            description = data["weather"][0]["description"]
            city_name = data["name"]
            logging.debug(f"Parsed Data: Temp = {temp}, Description = {description}, City = {city_name}")
            return f"The weather in {city_name} is {description} with a temperature of {temp}°F."
        except requests.RequestException as e:
            return f"Error fetching weather: {e}"
        except (KeyError, TypeError):
            return "Could not parse weather data."

    def handle_task(self, task):
        # Extract location from message
        message_data = task.message or {}
        content = message_data.get("content", {})
        text = content.get("text", "") if isinstance(content, dict) else ""
        if "weather" in text.lower() and "in" in text.lower():
            location = text.split("in", 1)[1].strip().rstrip("?.")
            # Get weather and create response
            weather_text = self.get_weather(location)
            task.artifacts = [{
                "parts": [{"type": "text", "text": weather_text}]
            }]
            task.status = TaskStatus(state=TaskState.COMPLETED)
        else:
            task.status = TaskStatus(
                state=TaskState.INPUT_REQUIRED,
                message={"role": "agent", "content": {"type": "text",
                         "text": "Please ask about weather in a specific location."}}
            )
        return task

# Run the server
if __name__ == "__main__":
    agent = WeatherAgent(google_a2a_compatible=True)
    run_server(agent, port=8001, debug=True)
The Python script above implements a simple Weather Agent using the python-a2a library. The agent follows the Google A2A protocol and exposes a skill to provide real-time weather forecasts using the OpenWeatherMap API.
Registers itself as an A2A-compliant agent named "Weather Agent".
Exposes one skill: get_weather(location), which fetches the current weather for the location provided.
Responds to natural language prompts like "What's the weather in Paris?".
If the query is valid, it fetches the temperature and description (e.g., "clear sky, 76°F").
If the input is unclear, it politely asks for clarification.
Uses an environment variable (OPENWEATHER_API_KEY) to access the weather API.
Logs key debugging info like full request URLs and response data.
Runs locally on port 8001 using run_server() with Google A2A compatibility enabled.
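Once the server is running, you can poke it from a minimal client script. This is only a sketch: it assumes the Weather Agent is already listening on port 8001 and reuses the same A2AClient.ask() call the travel planner uses later; the weather_question helper is purely illustrative.

```python
def weather_question(city: str) -> str:
    """Phrase the prompt so handle_task can find the city after the word "in"."""
    return f"What's the weather in {city}?"

if __name__ == "__main__":
    # Assumes the Weather Agent above is already running on port 8001.
    from python_a2a import A2AClient  # third-party import, only needed to actually call the agent
    client = A2AClient("http://localhost:8001")
    print(client.ask(weather_question("Paris")))
```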
Create a new Python file named BraveSearchAgent.py and add the following code:
# brave_search_agent.py
from python_a2a import A2AServer, skill, agent, run_server, TaskStatus, TaskState
import os
import requests
import logging

@agent(
    name="Brave Search Agent",
    description="Performs internet search using Brave Search API",
    version="1.0.0",
    url="https://yourdomain.com"
)
class BraveSearchAgent(A2AServer):

    @skill(
        name="Search Internet",
        description="Perform a web search using Brave Search API",
        tags=["search", "internet", "brave"],
        examples="Search 'must visit places in utah in may'"
    )
    def search(self, query: str):
        """Perform a search using the Brave Search API."""
        api_key = os.getenv("BRAVE_API_KEY")
        if not api_key:
            return "Search service not available (missing Brave API key)."
        headers = {
            "Accept": "application/json",
            "X-Subscription-Token": api_key,
        }
        url = "https://api.search.brave.com/res/v1/web/search"
        params = {"q": query, "count": 5}
        try:
            response = requests.get(url, headers=headers, params=params, timeout=5)
            response.raise_for_status()
            data = response.json()
            results = data.get("web", {}).get("results", [])
            if not results:
                return "No search results found."
            summary = "\n".join(
                [f"- {r.get('title')}: {r.get('url')}" for r in results]
            )
            return f"Top results for '{query}':\n{summary}"
        except requests.RequestException as e:
            logging.error(f"Error during Brave search: {e}")
            return f"Search failed: {e}"
        except Exception as e:
            return f"Unexpected error: {e}"

    def handle_task(self, task):
        # Extract the search query from the incoming message
        message_data = task.message or {}
        content = message_data.get("content", {})
        text = content.get("text", "") if isinstance(content, dict) else ""
        if text.strip():
            query = text.strip()
            result = self.search(query)
            task.artifacts = [{
                "parts": [{"type": "text", "text": result}]
            }]
            task.status = TaskStatus(state=TaskState.COMPLETED)
        else:
            task.status = TaskStatus(
                state=TaskState.INPUT_REQUIRED,
                message={"role": "agent", "content": {"type": "text",
                         "text": "Please provide a search query."}}
            )
        return task

if __name__ == "__main__":
    agent = BraveSearchAgent(google_a2a_compatible=True)
    run_server(agent, port=8002, debug=True)
This Python script sets up a web search agent that uses the Brave Search API and follows the Google A2A protocol, built with the python-a2a library.
Registers itself as a Brave Search Agent with a skill called search(query).
Listens for natural language prompts (like "top museums in Paris") and uses Brave Search to return relevant results.
Extracts and formats the top 5 search results as a clean, readable list with titles and URLs.
If no query is provided, it prompts the user for clarification.
Uses an environment variable (BRAVE_API_KEY) to authenticate with Brave's API.
Returns results in a user-friendly bullet-point format.
Logs and handles API errors gracefully.
Runs locally on port 8002 as a fully compatible A2A agent.
Create a new Python file named local_llm.py and add the following code:
# local_llm.py
from python_a2a import run_server
from python_a2a.langchain import to_a2a_server
from langchain_ollama.llms import OllamaLLM
import threading
import signal
import sys

# Create a LangChain LLM
# llm = ChatOpenAI(model="gpt-3.5-turbo", temperature=0)
llm = OllamaLLM(model="llama3.2:latest")

# Convert the LLM to an A2A server
llm_server = to_a2a_server(llm)

def main():
    # Run the LLM agent server in a background thread
    llm_thread = threading.Thread(
        target=lambda: run_server(llm_server, port=5001),
        daemon=True
    )
    llm_thread.start()
    # Wait here until Ctrl+C
    try:
        print("Servers are running. Press Ctrl+C to stop.")
        signal.pause()  # Wait for signals (not available on Windows)
    except KeyboardInterrupt:
        print("\nStopping servers...")
        sys.exit(0)

if __name__ == "__main__":
    main()
This script wraps a local LLM (LLaMA 3.2) as an A2A-compatible agent server using the python-a2a and langchain libraries.
Uses LangChain's OllamaLLM to load a local LLaMA 3.2 model.
Converts the LLM into an A2A agent using to_a2a_server(), allowing it to respond to messages like any other A2A agent.
Runs the agent on port 5001 in a background thread, so it doesn't block your main app.
Keeps the agent alive until manually stopped with Ctrl+C.
Fully A2A-compatible: this LLM can now talk to other agents like Weather or BraveSearch.
Requires the Ollama backend running with LLaMA 3.2 pulled locally.
Runs independently, making it easy to plug into larger agent workflows.
Note that to run the code above, you need to install langchain-ollama with the following command:
uv pip install langchain-ollama
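Before wiring the LLM agent into the planner, you can smoke-test it with a minimal client. This is a sketch, assuming local_llm.py is already running on port 5001; the trip_summary_prompt helper is illustrative, but it builds the same kind of prompt the travel planner sends in the next section.

```python
def trip_summary_prompt(forecast: str, activities: str) -> str:
    """Build a summary prompt of the same shape the travel planner uses later."""
    return (
        "You are a travel assistant. Based on the weather forecast "
        f"result {forecast} and the recommendations [{activities}], "
        "suggest a few must-see attractions."
    )

if __name__ == "__main__":
    # Assumes local_llm.py is already running on port 5001.
    from python_a2a import A2AClient  # third-party import, only needed to actually call the agent
    client = A2AClient("http://localhost:5001")
    print(client.ask(trip_summary_prompt("clear sky, 78°F", "Louvre, Seine cruise")))
```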
Create a new Python file named Travel_Planner_Agent.py and add the following code:
# Travel_Planner_Agent.py
from python_a2a import AgentNetwork, A2AClient
import asyncio

async def main():
    # Create an agent network
    network = AgentNetwork(name="Travel Assistant Network")

    # Add agents to the network
    network.add("weather", "http://localhost:8001")
    network.add("search", "http://localhost:8002")

    # List all available agents
    print("\nAvailable Agents:")
    for agent_info in network.list_agents():
        print(f"- {agent_info['name']}: {agent_info['description']}")

    # Create an LLM client
    llm_client = A2AClient("http://localhost:5001")

    params = {
        "destination": "Paris",
        "travel_dates": "June 21-25"
    }

    # Ask the weather agent for the forecast
    weather_agent = network.get_agent("weather")
    forecast = weather_agent.ask(f"What's the weather in {params['destination']}?")
    print("Weather forecast: " + forecast)

    # Pick indoor or outdoor activities based on the forecast
    search_agent = network.get_agent("search")
    if "sunny" in forecast.lower() or "clear" in forecast.lower():
        activities = search_agent.ask(f"Recommend outdoor activities in {params['destination']}")
    else:
        activities = search_agent.ask(f"Recommend indoor activities in {params['destination']}")

    # Make a summary of the plan
    prompt = (
        f"You are a travel assistant. Based on the weather forecast result {forecast} "
        f"and the recommendations [{activities}], suggest me a few must-see attractions "
        f"on date {params['travel_dates']}."
    )
    print(f"Prompt: {prompt}")
    llm_result = llm_client.ask(prompt)
    print(f"LLM response: {llm_result}")

if __name__ == "__main__":
    asyncio.run(main())
This async Python script demonstrates how to orchestrate multiple A2A agents to build a smart travel planning assistant using weather data, search results, and a local LLM for summarization.
Create an Agent Network using python-a2a's AgentNetwork.
Register two agents:
weather: Queries the OpenWeather API.
search: Uses the Brave Search API to suggest places.
Query the weather agent for a given destination (e.g., Paris).
Based on the forecast, ask the search agent to recommend either indoor or outdoor activities.
Compose a trip summary prompt.
Send it to a local LLM agent (LLaMA 3.2) running on port 5001 via A2AClient.
Print the final summarized travel plan from the LLM.
Shows multi-agent collaboration using the A2A protocol.
Uses real APIs and conditions to dynamically shape the query flow.
Demonstrates the power of combining tool agents with language models.
Now that we’ve built all the components of our A2A-based travel planner, it’s time to run the agents and see them work together in action.
Here’s the step-by-step execution order:
If you haven’t already, pull and run the LLaMA 3.2 model with the following command:
ollama run llama3.2
🔁 Leave this running — it serves the model for local inference via the langchain_ollama integration.
uv run WeatherAgent.py
Make sure your OPENWEATHER_API_KEY is available in your environment. The agent will be available on port 8001.
Here is a console output:
$ uv run WeatherAgent.py
Starting A2A server on http://0.0.0.0:8001/a2a
Google A2A compatibility: Enabled
* Serving Flask app 'python_a2a.server.http'
* Debug mode: on
WARNING: This is a development server. Do not use it in a production deployment. Use a production WSGI server instead.
* Running on all addresses (0.0.0.0)
* Running on http://127.0.0.1:8001
* Running on http://10.7.0.5:8001
Press CTRL+C to quit
uv run BraveSearchAgent.py
Ensure BRAVE_API_KEY is set correctly via export BRAVE_API_KEY=YOUR_API_KEY. The agent will be available on port 8002.
uv run local_llm.py
This wraps the Ollama LLM in an A2A-compatible agent server running at http://localhost:5001.
uv run Travel_Planner_Agent.py
This will:
Fetch weather forecast from the Weather Agent.
Get indoor/outdoor attractions via Brave Search Agent.
Ask the LLM Agent to summarize the trip.
If everything goes well, you should get output similar to the example below:
Available Agents:
- weather: Provides weather information
- search: Performs internet search using Brave Search API
Weather forecast: The weather in Paris is clear sky with a temperature of 78.82°F.
Prompt: You are a travel assistant. Based on the weather forecast result The weather in Paris is clear sky with a temperature of 78.82°F. and the recommendations [Top results for 'Recommend outdoor activities in Paris':
- THE 10 BEST Outdoor Activities in Paris (Updated 2025) - Tripadvisor: https://www.tripadvisor.com/Attractions-g187147-Activities-c61-Paris_Ile_de_France.html
- 18+ of the Best Outdoor Activities in Paris | INSPIRELLE: https://inspirelle.com/18-best-outdoor-activities-paris/
- Outdoor activities in Paris • Paris je t'aime - Tourist office: https://parisjetaime.com/eng/article/activities-paris-outdoors-a1092
- Paris outdoors • Paris je t'aime - Tourist office: https://parisjetaime.com/eng/discover-paris/paris-by-theme/paris-outdoors-i105
- Outdoor Activities in Paris: Book Your Outdoor Experiences Online • Come to Paris: https://www.cometoparis.com/outdoor-activities-paris-c9000705], suggest me a few must-see attractions on date June 21-25.
LLM response: What a perfect time to visit Paris! With clear skies and pleasant temperatures (78.82°F) from June 21st to 25th, you'll have an ideal opportunity to enjoy the city's outdoor activities. Based on the recommendations provided, here are a few must-see attractions for your consideration:
1. **Explore the Luxembourg Gardens** (Jardin du Luxembourg): A beautiful green oasis in the heart of Paris, perfect for picnics, strolls, or people-watching. This 23-hectare park is ideal for a sunny day like June 21st.
2. **Walk along the Seine River**: Take a leisurely walk along the Seine, enjoying the city's scenic views, street performers, and historic landmarks like Notre-Dame Cathedral (currently under renovation). You can also rent a boat and enjoy a relaxing river cruise.
3. **Visit the Tuileries Garden** (Jardin des Tuileries): Another picturesque garden in Paris, offering beautiful flowers, fountains, and sculptures. It's a great spot to relax, take photos, or simply enjoy the fresh air.
4. **Rent a bike and ride along the Canal Saint-Martin**: This charming canal offers a peaceful escape from the city bustle. Rent a bike and explore the picturesque streets, cafes, and boutiques along the way.
5. **Visit the Eiffel Tower** (Tour Eiffel): While you can't climb to the top due to safety restrictions during peak season, you can still take in the breathtaking views of the city from the first or second floor. Don't forget your camera!
Additional recommendations:
- Take a stroll through the historic Montmartre neighborhood and visit the Sacré-Cœur Basilica.
- Visit the Sainte-Chapelle (also known as the "Pearl of the Crown Jewels") for its stunning stained-glass windows.
Remember to check the opening hours, ticket prices, and any COVID-19 protocols before visiting these attractions. Enjoy your time in Paris!
So, we walked through the process of building a simple yet powerful AI agent network using the A2A (Agent-to-Agent) protocol. Our goal was to design a travel assistant that can dynamically:
Check weather conditions using a dedicated Weather Agent.
Search for location-specific attractions (indoor or outdoor) using a Brave Search Agent.
Summarize the entire trip plan using a Local LLM (LLaMA 3.2 via Ollama).
Along the way, we learned:
How to define A2A-compliant agents using the python-a2a library.
How to build and register skills using decorators like @skill.
How to run each agent as a separate HTTP service.
How to use AgentNetwork and A2AClient to orchestrate interactions between agents.
How to wrap a local LLM (e.g., LLaMA 3.2) in an A2A-compatible interface to support natural language summarization.
This example lays the groundwork for scalable, modular AI applications where agents collaborate to perform complex tasks — a key principle in the future of agent-based software systems.
Special thanks and full credit go to the author and contributors of the python-a2a library, whose work made this seamless agent communication framework possible.