
calculator_mcp

MCP tool for calculator

Repository Info

  • Stars: 0
  • Forks: 0
  • Watchers: 0
  • Issues: 0
  • Language: Python
  • License: MIT License


Calculator MCP Server

A simple MCP server exposing basic arithmetic (add, subtract, multiply, divide), mathematical expression evaluation, statistics (mean, median, mode, standard deviation, variance), and numerical calculus (integration, differentiation) as ADK FunctionTools over stdio and SSE.
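The statistical and calculus tools map onto standard numerical routines. As a rough illustration only (this is not the server's actual code; the function names, the central-difference derivative, and the trapezoidal integral are assumptions), they can be sketched in plain Python:

```python
import statistics

# Statistical operations, delegated to the standard library.
def mean(xs): return statistics.fmean(xs)
def median(xs): return statistics.median(xs)
def mode(xs): return statistics.mode(xs)
def stdev(xs): return statistics.stdev(xs)
def variance(xs): return statistics.variance(xs)

def differentiate(f, x, h=1e-6):
    """Numerical derivative of f at x via the central difference."""
    return (f(x + h) - f(x - h)) / (2 * h)

def integrate(f, a, b, n=10_000):
    """Numerical integral of f over [a, b] via the trapezoidal rule."""
    step = (b - a) / n
    total = 0.5 * (f(a) + f(b))
    for i in range(1, n):
        total += f(a + i * step)
    return total * step

print(mean([1, 2, 3, 4]))                             # 2.5
print(round(differentiate(lambda x: x ** 2, 3.0), 6))  # ~6.0
print(round(integrate(lambda x: x ** 2, 0.0, 1.0), 4)) # ~0.3333
```

The actual server may use different step sizes or algorithms; the point is only what each tool computes.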

Prerequisites

  • Python 3.13+
  • uv

Installation

  1. Clone or download this repository.

  2. Create a virtual environment:

    uv venv
    source .venv/bin/activate
    
  3. In a terminal, install dependencies:

    uv sync
    

Running the Server

Start the MCP server using either transport:

via STDIO

uv run stdio_server.py

via SSE

uv run sse_server.py

You should see:

Launching Calculator MCP Server...

Running the Client

  1. Open a new terminal tab and activate the environment: source .venv/bin/activate
  2. cd path/to/your/calculator_agent_litellm
  3. Create a .env file, add your API key, and adjust the model appropriately:
     3a. e.g. GEMINI_API_KEY, or MY_OPENAI_API_KEY with GOOGLE_GENAI_USE_VERTEXAI=false
     3b. For all available models, refer to https://docs.litellm.ai/docs/providers/gemini
  4. Run the agent from that directory:

    adk web ../

Using from an ADK Agent

Below is an example Python client that connects to this MCP server and invokes calculator operations:

import asyncio

from google.adk.tools.mcp_tool.mcp_toolset import MCPToolset, StdioServerParameters

async def main():
    # Launch the server as a subprocess and connect to it over stdio
    tools, exit_stack = await MCPToolset.from_server(
        connection_params=StdioServerParameters(
            command="python3",
            args=["/absolute/path/to/stdio_server.py"],
        )
    )

    # Pick the "add" tool and invoke it
    add_tool = next(t for t in tools if t.name == "add")
    result = await add_tool.run_async(args={"a": 5, "b": 3}, tool_context=None)
    print(result)

    # Close the connection
    await exit_stack.aclose()

if __name__ == "__main__":
    asyncio.run(main())

Replace /absolute/path/to/stdio_server.py with the correct path on your system.
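Rather than hand-editing the absolute path, you can compute it at runtime. A small convenience sketch (it assumes you launch the client from the calculator_mcp checkout, where stdio_server.py lives):

```python
from pathlib import Path

# Build an absolute path to stdio_server.py from the current working
# directory, so the snippet above needs no hand-edited path.
server_path = str((Path.cwd() / "stdio_server.py").resolve())
print(server_path)
```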

Listing Available Tools

You can also list available tools using any MCP-capable client. In Python:

tools, stack = await MCPToolset.from_server(
    connection_params=StdioServerParameters(
        command="python3",
        args=["/absolute/path/to/stdio_server.py"]
    )
)
print([t.name for t in tools])  # e.g. ["add", "subtract", "multiply", "divide", ...]
await stack.aclose()

Cline MCP Configuration

If using Cline MCP, add the following configuration:

{
  "mcpServers": {
    "calculator": {
      "type": "stdio",
      "command": "uv",
      "args": [
        "--directory",
        "<absolute_path>/calculator_mcp/",
        "run",
        "stdio_server.py"
      ]
    }
  }
}
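Because some MCP clients fail silently on malformed configuration, it can be worth sanity-checking the JSON before pasting it in. A quick standalone check (the <absolute_path> placeholder is kept verbatim, as in the snippet above):

```python
import json

config_text = """
{
  "mcpServers": {
    "calculator": {
      "type": "stdio",
      "command": "uv",
      "args": ["--directory", "<absolute_path>/calculator_mcp/", "run", "stdio_server.py"]
    }
  }
}
"""

# json.loads raises json.JSONDecodeError on any syntax error.
config = json.loads(config_text)
server = config["mcpServers"]["calculator"]
assert server["type"] == "stdio" and server["command"] == "uv"
print("config OK, entrypoint:", server["args"][-1])
```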

Using calculator_agent with Gemini (via adk web SSE Server)

This section explains how to connect the calculator_agent (configured for Gemini) to the SSE-based MCP server using adk web.

  1. Create a new folder for your ADK agent, e.g., calculator_agent. (This folder should already exist if you cloned the repository).

  2. Ensure calculator_agent/agent.py is configured for Gemini (e.g., model="gemini-2.0-flash").

  3. Create or verify the .env file inside the calculator_agent folder with the following content for Gemini:

    GOOGLE_GENAI_USE_VERTEXAI=FALSE
    GOOGLE_API_KEY=PASTE_YOUR_ACTUAL_API_KEY_HERE
    

    Note: Replace PASTE_YOUR_ACTUAL_API_KEY_HERE with your actual Google API key.

  4. Start the SSE server from the calculator_mcp root directory:

    uv run sse_server.py
    
  5. In another terminal, navigate to your calculator_agent folder and run adk web pointing to its parent directory:

    cd path/to/your/calculator_agent
    adk web ../
    

    For example, if calculator_agent is inside calculator_mcp and you are currently in calculator_mcp/calculator_agent, you would run adk web ../. If calculator_agent is a sibling of calculator_mcp, run adk web ../calculator_agent from within calculator_mcp, or adjust the path accordingly. The key point is that adk web must be pointed at the directory containing the agent's folder.

Your browser will open the ADK web UI, allowing you to interact with the calculator tools using the Gemini-powered calculator_agent.

Using calculator_agent_litellm with Other LLMs (e.g., OpenAI)

This section describes how to run the calculator_agent_litellm example agent using non-Gemini models like OpenAI, leveraging LiteLLM.

Configuring calculator_agent_litellm for OpenAI

The calculator_agent_litellm uses LiteLLM, which simplifies using various LLM providers, including OpenAI.

  1. Set Environment Variables for OpenAI: Create or update a .env file in the root of your calculator_agent_litellm project folder. For OpenAI, you primarily need to set:

    # Required: Your OpenAI API key
    OPENAI_API_KEY="your_openai_api_key_here"
    
    # Optional: LiteLLM allows specifying the model directly in the agent code
    # (as done in the example agent.py: model="openai/gpt-4.1-nano")
    # or via other environment variables if your agent code is set up to read them.
    # If agent.py uses a generic "MODEL" env var for LiteLLM model string:
    # MODEL="openai/gpt-4"
    

    Replace your_openai_api_key_here with your actual OpenAI API key. LiteLLM will automatically use this for OpenAI calls if the model string in agent.py (e.g., openai/gpt-4.1-nano) indicates an OpenAI model.

  2. Ensure Agent Code Specifies an OpenAI Model: Verify that agent.py within calculator_agent_litellm uses an OpenAI model string recognized by LiteLLM (e.g., "openai/o4-mini" or "openai/gpt-4.1-nano"). The example uses model="openai/gpt-4.1-nano".

    # In calculator_agent_litellm/agent.py
    # root_agent = LlmAgent(
    #     model=LiteLlm(
    #         model="openai/gpt-4.1-nano", # This specifies an OpenAI model via LiteLLM
    #         api_key=os.getenv("MY_OPENAI_API_KEY"), # Ensure this matches your .env key if different
    #     ),
    # ...
    # )
    

    Note: The example agent.py for calculator_agent_litellm uses os.getenv("MY_OPENAI_API_KEY"). Ensure your .env file for calculator_agent_litellm uses MY_OPENAI_API_KEY or update the agent.py to use OPENAI_API_KEY. For consistency with common practices, using OPENAI_API_KEY in both the .env file and os.getenv() is recommended.
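Until the key names are unified, one way to tolerate either name is a small fallback when reading the environment (a hypothetical helper, not part of the repo):

```python
import os

def openai_api_key() -> str:
    """Return the OpenAI key from either env var, preferring OPENAI_API_KEY."""
    key = os.getenv("OPENAI_API_KEY") or os.getenv("MY_OPENAI_API_KEY")
    if not key:
        raise RuntimeError("Set OPENAI_API_KEY (or MY_OPENAI_API_KEY) in your .env")
    return key

# Demo: only the legacy name is set.
os.environ.pop("OPENAI_API_KEY", None)
os.environ["MY_OPENAI_API_KEY"] = "sk-demo"
print(openai_api_key())  # sk-demo
```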

  3. Start the SSE Server (if not already running): In your calculator_mcp project root:

    uv run sse_server.py
    
  4. Run the ADK Web UI for calculator_agent_litellm: In another terminal, navigate to your calculator_agent_litellm folder and run adk web pointing to its parent directory or the agent directory itself as appropriate:

    cd path/to/your/calculator_agent_litellm
    adk web ../
    

    The agent will use LiteLLM to proxy requests to the specified OpenAI model using your API key.

Running Tests

This project uses pytest for testing.

  1. From the project root directory (calculator_mcp), run:

    uv run pytest tests
    

    This will discover and run all tests in the tests/test_calculator.py file.
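The repository's actual tests are not reproduced here; as a sketch of the pattern, pytest tests for the arithmetic tools would look roughly like this (add and divide below are stand-ins, not the server's real implementations):

```python
# Hypothetical stand-ins for the server's arithmetic tools.
def add(a: float, b: float) -> float:
    return a + b

def divide(a: float, b: float) -> float:
    if b == 0:
        raise ZeroDivisionError("division by zero")
    return a / b

# pytest discovers any function named test_*; each uses bare asserts.
def test_add():
    assert add(2, 3) == 5
    assert add(-1, 1) == 0

def test_divide():
    assert divide(10, 4) == 2.5
```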

Demo Videos

Here are three demonstration videos showing different ways to use the Calculator MCP Server:

1. Basic Addition Operation

https://github.com/user-attachments/assets/e65a9074-2bed-4647-af80-97d8c193e90b

This video demonstrates the basic addition operation using the MCP server, showing how to add two numbers together.

2. Statistical Mean Calculation

https://github.com/user-attachments/assets/0a9d7410-b5bf-465b-8d61-7dab1d5f1701

Watch how to calculate the statistical mean of a dataset using the calculator's statistical functions.

3. Mathematical Expression Evaluation

https://github.com/user-attachments/assets/6939b92a-0d66-4205-aea7-291114a258f3

Learn how to evaluate complex mathematical expressions using the calculator's expression evaluation capabilities.
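Expression evaluation is the riskiest of these tool surfaces if implemented with a bare eval(). How this server implements it is not shown here; a common safe approach, sketched under that assumption, is to walk Python's ast and whitelist arithmetic operators:

```python
import ast
import operator

# Whitelisted operators; anything outside this table is rejected.
_OPS = {
    ast.Add: operator.add,
    ast.Sub: operator.sub,
    ast.Mult: operator.mul,
    ast.Div: operator.truediv,
    ast.Pow: operator.pow,
    ast.USub: operator.neg,
}

def safe_eval(expr: str) -> float:
    """Evaluate a purely arithmetic expression without calling eval()."""
    def walk(node):
        if isinstance(node, ast.Expression):
            return walk(node.body)
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        if isinstance(node, ast.BinOp) and type(node.op) in _OPS:
            return _OPS[type(node.op)](walk(node.left), walk(node.right))
        if isinstance(node, ast.UnaryOp) and type(node.op) in _OPS:
            return _OPS[type(node.op)](walk(node.operand))
        raise ValueError(f"unsupported expression: {ast.dump(node)}")
    return walk(ast.parse(expr, mode="eval"))

print(safe_eval("2 * (3 + 4) ** 2"))  # 98
```

Names, calls, and attribute access never match the whitelist, so something like __import__('os') raises ValueError instead of executing.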

References

  1. MCP SSE example:

    • https://github.com/google/adk-python/blob/main/contributing/samples/mcp_sse_agent/filesystem_server.py
  2. MCP STDIO example:

    • https://github.com/google/adk-python/blob/main/contributing/samples/mcp_stdio_server_agent/agent.py
  3. Setting up runtime keys

    • https://docs.litellm.ai/docs/set_keys

Quick Start

  1. Clone the repository:

    git clone https://github.com/kimyu-ng/calculator_mcp

  2. Install dependencies:

    cd calculator_mcp
    uv sync

  3. Follow the documentation above for running the server, agents, and tests.

Repository Details

  • Owner: kimyu-ng
  • Repo: calculator_mcp
  • Language: Python
  • License: MIT License
  • Last fetched: 8/10/2025
