
MCP Chatbot
A chatbot that can perform calculations using an MCP server and optionally use OpenAI's API for more advanced responses.
Features
- Multiple Backend Support: Switch between OpenAI's GPT models, Google Gemini, and local models
- MCP Integration: Connect to MCP servers for specialized tools and resources
- Environment Configuration: Easy configuration through environment variables
- Rate Limiting: Built-in rate limiting to prevent abuse
- Logging: Comprehensive logging for debugging and monitoring
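As a mental model for the rate-limiting feature, a sliding-window limiter can be sketched like this (a hypothetical illustration, not this project's actual code):

```python
import time
from collections import deque

class RateLimiter:
    """Allow at most `max_requests` calls per `window_seconds` window."""

    def __init__(self, max_requests: int, window_seconds: float):
        self.max_requests = max_requests
        self.window_seconds = window_seconds
        self.timestamps = deque()  # monotonic times of recent requests

    def allow(self) -> bool:
        now = time.monotonic()
        # Drop timestamps that have fallen outside the window
        while self.timestamps and now - self.timestamps[0] > self.window_seconds:
            self.timestamps.popleft()
        if len(self.timestamps) < self.max_requests:
            self.timestamps.append(now)
            return True
        return False

limiter = RateLimiter(max_requests=2, window_seconds=60)
print(limiter.allow())  # True
print(limiter.allow())  # True
print(limiter.allow())  # False (limit reached)
```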
Prerequisites
- Python 3.8+
- OpenAI API key (for GPT models)
- Google API key (for Gemini models)
- MCP server (for MCP tools)
Installation
1. Clone the repository:

   ```bash
   git clone <repository-url>
   cd mcp_chatbot
   ```

2. Create and activate a virtual environment:

   ```bash
   # On Windows
   python -m venv venv
   .\venv\Scripts\activate

   # On macOS/Linux
   python3 -m venv venv
   source venv/bin/activate
   ```

3. Install dependencies:

   ```bash
   pip install -r requirements.txt
   ```

4. Copy the example environment file and update it with your settings:

   ```bash
   # On Windows
   copy .env.example .env

   # On macOS/Linux
   cp .env.example .env
   ```

   Edit the `.env` file and add your API keys (OpenAI and/or Google Gemini).
Configuration
Edit the `.env` file to configure the application:
API Keys
- `OPENAI_API_KEY`: Your OpenAI API key (required for GPT models)
- `GOOGLE_API_KEY`: Your Google API key (required for Gemini models)
Backend Selection
- `LLM_BACKEND`: Set to `openai`, `gemini`, or `local` (default: `openai`)
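Resolving the backend from this variable can be sketched as follows (a hypothetical helper for illustration, not the project's actual `config.py`):

```python
import os

def get_backend() -> str:
    """Read LLM_BACKEND from the environment, defaulting to 'openai',
    and reject values the chatbot does not support."""
    backend = os.getenv("LLM_BACKEND", "openai").lower()
    if backend not in {"openai", "gemini", "local"}:
        raise ValueError(f"Unsupported LLM_BACKEND: {backend!r}")
    return backend

os.environ["LLM_BACKEND"] = "gemini"
print(get_backend())  # gemini
```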
Gemini Settings (when using Gemini backend)
- `GEMINI_MODEL`: Model to use (default: `gemini-1.5-flash`)
- `GEMINI_TEMPERATURE`: Controls randomness (0.0 to 1.0, default: 0.7)
- `GEMINI_MAX_TOKENS`: Maximum tokens in response (default: 2048)
MCP Configuration
- `MCP_SERVER_URL`: URL of your MCP server (default: `http://localhost:6789`)
Rate Limiting
- `RATE_LIMIT_REQUESTS`: Maximum number of requests per time window
- `RATE_LIMIT_SECONDS`: Time window for rate limiting in seconds
Logging
- `LOG_LEVEL`: Logging level (DEBUG, INFO, WARNING, ERROR, CRITICAL)
- `LOG_FILE`: Path to the log file
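Putting it together, a filled-in `.env` might look like the following. All values are placeholders; the rate-limit numbers and log file path in particular are illustrative, not documented defaults.

```
# Backend selection: openai, gemini, or local
LLM_BACKEND=openai

# API keys
OPENAI_API_KEY=sk-your-key-here
GOOGLE_API_KEY=your-google-key-here

# Gemini settings
GEMINI_MODEL=gemini-1.5-flash
GEMINI_TEMPERATURE=0.7
GEMINI_MAX_TOKENS=2048

# MCP server
MCP_SERVER_URL=http://localhost:6789

# Rate limiting (illustrative values)
RATE_LIMIT_REQUESTS=60
RATE_LIMIT_SECONDS=60

# Logging (illustrative path)
LOG_LEVEL=INFO
LOG_FILE=chatbot.log
```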
Running the Application
1. Start the MCP server in a separate terminal:

   ```bash
   python server.py
   ```

2. In another terminal, start the chatbot:

   ```bash
   python app.py
   ```

3. Open your web browser and navigate to `http://localhost:7860`.
Usage
- Type your message in the input box and press Enter or click Send.
- The chatbot will respond based on the selected backend.
- Use the dropdown to switch between the configured backends (OpenAI, Gemini, or local).
Examples
- "What is 5 plus 3?"
- "Add 10 and 20"
- "Hello! How are you?"
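To show how requests like the first two examples differ from ordinary chat, here is a small, hypothetical intent-parsing sketch; the real app delegates this kind of work to the MCP server and the selected LLM backend:

```python
import re

def parse_addition(message: str):
    """Pull two numbers out of an addition request ('5 plus 3',
    'Add 10 and 20', '2 + 2'); return their sum, or None if the
    message is not an addition request."""
    m = re.search(
        r"(-?\d+(?:\.\d+)?)\s*(?:plus|and|\+)\s*(-?\d+(?:\.\d+)?)",
        message,
        re.IGNORECASE,
    )
    if not m:
        return None
    return float(m.group(1)) + float(m.group(2))

print(parse_addition("What is 5 plus 3?"))    # 8.0
print(parse_addition("Add 10 and 20"))        # 30.0
print(parse_addition("Hello! How are you?"))  # None
```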
Project Structure
- `app.py`: Main application with Gradio interface
- `server.py`: MCP server implementation
- `config.py`: Configuration settings
- `requirements.txt`: Python dependencies
- `.env.example`: Example environment variables
- `README.md`: This file
License
MIT License