
mcp_a2a

Exploration of MCP and A2A protocol


Documentation

Introduction

Together, the A2A and MCP protocols could enable advanced AI ecosystems in which agents equipped with MCP (providing context and tools) collaborate via A2A. Here is a simple example: consider an MCP-powered agent wrapped in A2A, named PDF_extract_and_translate_agent, that does the following:

  • Internal protocol: Uses MCP to connect its underlying LLM to its tools.
  • Task: Reads the PDF and translates its content into English.
  • Key tool: Has a specific tool (read_pdf_tool) that ingests a PDF file (given a path) and extracts its text content.
  • A2A role: Acts as a service provider, offering its PDF reading and translation capability to other agents.
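
To make the tool bullet concrete, here is a minimal sketch of what such an MCP tool server could look like, assuming the official MCP Python SDK (FastMCP) and pypdf; the actual mcp_pdf_server.py in this repository may be wired differently.

# Hypothetical sketch of an MCP server exposing read_pdf_tool.
# Assumes the `mcp` Python SDK (FastMCP) and `pypdf`; not the repo's exact code.
from mcp.server.fastmcp import FastMCP
from pypdf import PdfReader

mcp = FastMCP("pdf-tools")

@mcp.tool()
def read_pdf_tool(path: str) -> str:
    """Extract the raw text content of the PDF at the given path."""
    reader = PdfReader(path)
    return "\n".join(page.extract_text() or "" for page in reader.pages)

if __name__ == "__main__":
    mcp.run()  # stdio transport by default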

Scenario: Extracting & Translating Information from a PDF

sequenceDiagram
    participant Client as A2A Client
    participant Server as A2A Server
    participant Agent as LangGraph Agent
    participant Tool as MCP tool

    Client->>Server: Send task with PDF-reading query
    Server->>Agent: Forward query to the PDF_extract_and_translate_agent

    alt Complete Information
        Agent->>Tool: Call read_pdf_tool
        Tool->>Agent: Return content of file
        Agent->>Server: Translate to English 
        Server->>Client: Respond with translated content
    else Incomplete Information
        Agent->>Server: Request additional input
        Server->>Client: Set state to "input-required"
        Client->>Server: Send additional information
        Server->>Agent: Forward additional info
        Agent->>Tool: Call read_pdf_tool
        Tool->>Agent: Return content of file
        Agent->>Server: Translate to English
        Server->>Client: Respond with translated content
    end
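
At the wire level, each client-to-server arrow above is a JSON-RPC call over HTTP. The snippet below is an illustration of roughly what a task submission could look like, not the repository's client code; the method and field names follow the early A2A draft and are assumptions that may vary between protocol versions.

# Illustrative A2A task submission over plain HTTP (the repo uses google-a2a-cli).
# Method name and params shape are assumptions based on the early A2A JSON-RPC draft.
import uuid
import requests

payload = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tasks/send",
    "params": {
        "id": str(uuid.uuid4()),  # task id
        "message": {
            "role": "user",
            "parts": [{"type": "text", "text": 'Parse the pdf "sample.pdf"'}],
        },
    },
}

response = requests.post("http://localhost:10002", json=payload, timeout=120)
print(response.json())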

Prerequisites

  • I used uv as the package manager to set up this project on MS Windows 11. The A2A libraries we'll be using require Python >= 3.12, which uv can install if you don't already have a matching version.
    Check your version:
echo 'import sys; print(sys.version)' | uv run -

If you see something similar to the following, you are ready to proceed!

3.12.7 | packaged by Anaconda, Inc. | (main, Oct  4 2024, 13:17:27) [MSC v.1929 64 bit (AMD64)]
  • Valid OpenAI credentials

Setup and Running

  • Git clone the repository
git clone https://github.com/hrushikesh-dhumal/mcp_a2a
  • Navigate to the mcp-a2a directory
cd .\mcp_a2a\mcp-a2a\
  • Create an environment file (.env) with your API key and the model ID (e.g., "gpt-4.1-nano"):
OPENAI_API_KEY="your_api_key_here"
OPENAI_CHAT_MODEL_ID="your-model-id"

Optionally, you can add tracing using LangSmith. It's pretty awesome for breaking down the LLM calls.

LANGSMITH_TRACING=true
LANGSMITH_ENDPOINT="https://api.smith.langchain.com"
LANGSMITH_API_KEY="your_api_key_here"
LANGSMITH_PROJECT="your-project-id"
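
For reference, here is a minimal sketch of how these variables might be consumed at startup, assuming python-dotenv and langchain-openai; the actual wiring inside mcp-a2a may differ.

# Hypothetical config loading; not necessarily how the repo initializes its LLM.
import os
from dotenv import load_dotenv
from langchain_openai import ChatOpenAI

load_dotenv()  # reads .env from the current directory

llm = ChatOpenAI(
    model=os.environ["OPENAI_CHAT_MODEL_ID"],
    api_key=os.environ["OPENAI_API_KEY"],
)
# If the LANGSMITH_* variables are set, LangChain's tracing integration
# picks them up from the environment automatically.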
  • Set up the Python Environment:
uv python pin 3.12
uv venv
source .venv/bin/activate

On Windows: .\.venv\Scripts\activate

  • Run the tool:
python .\mcp_pdf_server.py

This does not print anything in the terminal.

  • In a separate terminal, run the agent:
uv run mcp-a2a

The output should look something like this.

INFO:     Started server process [20840]
INFO:     Waiting for application startup.
INFO:     Application startup complete.
INFO:     Uvicorn running on http://localhost:10002 (Press CTRL+C to quit)
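
As a quick sanity check, you can fetch the agent card that A2A clients use for discovery; this assumes the server publishes it at the standard well-known path.

# Hypothetical check; assumes the standard A2A agent card location.
import requests

card = requests.get("http://localhost:10002/.well-known/agent.json", timeout=10)
print(card.json())  # agent name, description, skills, capabilities, ...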
  • In a separate terminal, run the A2A client:
uv run google-a2a-cli --agent http://localhost:10002

You can then send messages to your server by typing them and pressing Enter.

=========  starting a new task ========

What do you want to send to the agent? (:q or quit to exit): Parse the pdf  "sample.pdf"

If everything is working correctly, you'll see this in the response:

"message":{"role":"agent","parts":[{"type":"text","text":"Thank you very much"}]}...

References

  1. The A2A tutorial is an amazing resource for understanding and setup.
  2. blpapi-a2a is a more complex implementation of MCP with A2A.

Future Work

  1. Try an alternative to langchain-mcp because of the error that forces me to declare the agent where it is used.
  2. Build a more complex structure of agents created with different libraries and explore their interactions.
  3. Explore other implementations such as mcpdoc.

Author

Hrushikesh Dhumal

  • GitHub: hrushikesh-dhumal
  • Medium: hrushikesh.dhumal


