
Telegram MCP Bot
An AI-powered Telegram bot for software development assistance. This bot uses Model Context Protocol (MCP) servers to provide help with software design, coding, and deployment tasks.
Features
- Software Design: Get comprehensive software architecture designs
- Code Implementation: Generate code based on design specifications
- Deployment Planning: Create deployment plans for various platforms
- Multiple Model Support: Connect to different AI models through MCP servers
  - Supports the Gemini API with free-tier rate limiting
  - MCP server protocol for other AI models
- Task Management: Track and manage multiple ongoing tasks
- Standardized IO Interface: Consistent communication with all AI providers
Project Structure
The project follows a modular architecture:
```
telegram_mcp_bot/
├── config/                     # Configuration management
├── src/                        # Source code
│   ├── agent/                  # AI agent functionality
│   │   ├── models.py           # Data models for tasks
│   │   ├── task_manager.py     # Task execution engine
│   │   └── software_agent.py   # Software development agent
│   ├── core/                   # Core application components
│   │   └── config.py           # Configuration loading
│   ├── mcp/                    # Model Context Protocol client
│   │   ├── client.py           # MCP server client
│   │   ├── manager.py          # Multiple server management
│   │   ├── models.py           # Data models for MCP
│   │   ├── providers/          # Model-specific providers
│   │   └── stdio/              # Standardized IO interface
│   │       ├── interface.py    # Base interface definition
│   │       ├── adapter.py      # Adapter for legacy providers
│   │       ├── providers.py    # Provider implementations
│   │       └── factory.py      # Provider factory
│   ├── telegram/               # Telegram bot interface
│   │   ├── bot.py              # Main bot class
│   │   ├── handlers/           # Command handlers
│   │   └── utils.py            # Telegram utilities
│   ├── utils/                  # Shared utilities
│   │   └── logger.py           # Logging setup
│   └── main.py                 # Application entry point
├── .env                        # Environment variables (not tracked by git)
├── .env.example                # Example environment variables
├── requirements.txt            # Python dependencies
└── run.py                      # Run script
```
Setup Instructions
1. Clone the repository:

   ```bash
   git clone https://github.com/your-username/telegram_mcp_bot.git
   cd telegram_mcp_bot
   ```

2. Create a virtual environment:

   ```bash
   python -m venv venv
   source venv/bin/activate  # On Windows, use: venv\Scripts\activate
   ```

3. Install dependencies:

   ```bash
   pip install -r requirements.txt
   ```

4. Create a configuration file:

   ```bash
   cp .env.example .env
   # Edit .env with your settings
   ```

5. Set up a Telegram bot:
   - Talk to BotFather on Telegram to create a new bot
   - Get your bot token and add it to the `.env` file

6. Configure MCP servers:
   - Add MCP server URLs, API keys, and available models to the `.env` file
Running the Bot
Start the bot:

```bash
python run.py
```
Usage
Once the bot is running, you can interact with it on Telegram:
- `/start` - Start the bot
- `/help` - Show the help message
- `/design <requirements>` - Create a software design
- `/code <language>, <framework> - <description>` - Generate code
- `/deploy <platform> - <description>` - Create deployment instructions
- `/task [task_id]` - Show task status or list recent tasks
- `/cancel [task_id]` - Cancel a running task
- `/models` - List available AI models
- `/servers` - Show status of all connected MCP servers
- `/server_info [server_name]` - Show detailed information about a specific server
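For example (the request text below is purely illustrative):

```
/design A REST API for tracking personal reading lists
/code python, fastapi - CRUD endpoints for the reading-list design above
/deploy docker - containerize the bot and run it with docker compose
```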
Environment Variables
The bot uses the following environment variables:
```bash
# Telegram configuration
TELEGRAM_BOT_TOKEN=your_telegram_bot_token_here
ALLOWED_CHAT_IDS=123456789,987654321  # Optional, restrict access to specific users

# MCP server configuration
MCP_SERVER_URL=http://localhost:5000
MCP_API_KEY=your_mcp_api_key_here
MCP_MODELS=gpt-4-turbo,claude-3-opus
MCP_PROVIDER=

# Gemini API configuration
GEMINI_API_KEY=your_gemini_api_key_here
GEMINI_RPM_LIMIT=60        # Requests per minute (free tier limit)
GEMINI_RPD_LIMIT=120       # Requests per day (free tier limit)
GEMINI_CONCURRENT_LIMIT=5  # Maximum concurrent requests

# Agent configuration
DEFAULT_MODEL=gemini-pro
MAX_TOKENS=8192
TEMPERATURE=0.7
SYSTEM_PROMPT=You are an AI assistant with expertise in software design, coding, deployment, and planning.

# Task configuration
MAX_ITERATIONS=10
TASK_TIMEOUT=600  # In seconds
```
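The Gemini limits above are enforced by the bot automatically. As a rough illustration of how a limiter honoring all three settings can work (this sketch is not the project's actual implementation; `GeminiRateLimiter` is a hypothetical name):

```python
import asyncio
import time


class GeminiRateLimiter:
    """Illustrative limiter enforcing RPM, RPD, and concurrency caps."""

    def __init__(self, rpm: int = 60, rpd: int = 120, concurrent: int = 5):
        self.rpm, self.rpd = rpm, rpd
        self.minute_window: list[float] = []  # timestamps of recent requests
        self.day_window: list[float] = []
        self.semaphore = asyncio.Semaphore(concurrent)

    async def acquire(self) -> None:
        await self.semaphore.acquire()
        now = time.monotonic()
        # Drop timestamps that have fallen out of the rolling windows
        self.minute_window = [t for t in self.minute_window if now - t < 60]
        self.day_window = [t for t in self.day_window if now - t < 86400]
        if len(self.minute_window) >= self.rpm or len(self.day_window) >= self.rpd:
            self.semaphore.release()
            raise RuntimeError("Gemini free-tier rate limit reached")
        self.minute_window.append(now)
        self.day_window.append(now)

    def release(self) -> None:
        self.semaphore.release()
```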
Standardized IO Interface
The MCP module includes a standardized IO (stdio) interface that provides a consistent way to communicate with all AI model providers. This abstraction layer ensures that adding new providers or switching between them is seamless.
Key Components
- MCPStdio Interface: Base abstract class defining the common interface for all providers
- IOMessage/IORequest/IOResponse: Standardized data models for communication
- Provider Implementations: Concrete implementations for different AI providers
- Factory Pattern: Easy creation of appropriate providers based on configuration
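The concrete definitions live under `src/mcp/stdio/`; judging from the usage example below, the data models look roughly like the following sketch (field defaults and the `IOChoice` wrapper are assumptions):

```python
from dataclasses import dataclass, field


@dataclass
class IOMessage:
    role: str  # "system", "user", or "assistant"
    content: str


@dataclass
class IORequest:
    model: str
    messages: list[IOMessage]
    max_tokens: int = 1024
    temperature: float = 0.7


@dataclass
class IOChoice:  # hypothetical wrapper, implied by response.choices[0].message
    message: IOMessage


@dataclass
class IOResponse:
    choices: list[IOChoice] = field(default_factory=list)
```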
Using the Interface
To use the standardized interface in your code:
```python
from src.mcp.stdio import StdioProviderFactory, IOMessage, IORequest

# Create a provider using the factory
provider = StdioProviderFactory.create_provider(
    provider_name="gemini",  # or None for default MCP
    url="https://api.example.com",
    api_key="your-api-key",
    models=["model-1", "model-2"]
)

# Create a request
messages = [
    IOMessage(role="system", content="You are a helpful assistant"),
    IOMessage(role="user", content="Hello, how are you?")
]
request = IORequest(
    model="model-1",
    messages=messages,
    max_tokens=1024,
    temperature=0.7
)

# Generate a response
response = await provider.generate(request)
print(response.choices[0].message.content)

# Or stream a response
async for chunk in provider.generate_stream(request):
    print(chunk.choices[0].message.content, end="")
```
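Since `generate` and `generate_stream` are coroutines, the snippet above has to run inside an async function, e.g.:

```python
import asyncio

async def main() -> None:
    response = await provider.generate(request)
    print(response.choices[0].message.content)

asyncio.run(main())
```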
Adding a New Provider
To add a new AI provider:

1. Create a new provider class in `src/mcp/stdio/providers.py` that implements the `MCPStdio` interface
2. Add the provider to the `StdioProviderFactory` class
3. Update the client initialization in `src/mcp/client.py` if needed
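As a sketch, a minimal provider might look like this (the import path follows the project layout above; the exact abstract methods required by `MCPStdio` are an assumption based on the usage example):

```python
from src.mcp.stdio.interface import MCPStdio  # path assumed from the project layout


class MyHTTPProvider(MCPStdio):
    """Hypothetical provider that forwards requests to a custom HTTP model API."""

    def __init__(self, url: str, api_key: str, models: list[str]):
        self.url = url
        self.api_key = api_key
        self.models = models

    async def generate(self, request):
        # Translate the IORequest into the backend's wire format,
        # call the API, and map the reply back into an IOResponse.
        raise NotImplementedError

    async def generate_stream(self, request):
        # Like generate(), but yield partial IOResponse chunks as they arrive.
        raise NotImplementedError
        yield  # makes this an async generator
```

The new class would then be registered in `StdioProviderFactory` so it can be selected by `provider_name`.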
Getting Started with Gemini
To quickly test the Gemini API integration:
1. Set up your API key:

   ```bash
   # Create a .env file
   cp .env.example .env
   # Edit .env and add your Gemini API key
   ```

2. Run the test script:

   ```bash
   python test_gemini.py
   ```

3. Set Gemini as your default model by adding or changing the following line in your `.env` file:

   ```
   DEFAULT_MODEL=gemini-pro
   ```
The system will automatically handle:
- Rate limiting (60 RPM, 120 RPD)
- Message format conversion
- Response parsing
- Error handling
When running the full bot, any tasks will use Gemini by default if you've set up the API key.
License
MIT
MCP Server Integration
The bot has been enhanced with advanced MCP server integration capabilities, allowing it to connect to and manage multiple AI model providers simultaneously:
Features
- Multiple Provider Support: Connect to OpenAI, Anthropic, Gemini, and custom MCP servers
- Dynamic Server Discovery: Automatically discover and add new MCP servers
- Health Monitoring: Continuously check server health and availability
- Load Balancing: Intelligently route requests to the most suitable server
- Fallback Mechanisms: Automatically switch to alternate servers if one fails
- Server Management: View and manage servers through Telegram commands
Telegram Commands for Server Management
- `/servers` - List all connected MCP servers with their status
- `/server_info [server_name]` - Show detailed information about a specific server
Supported Providers
- OpenAI API
  - Compatible with OpenAI's ChatGPT models and compatible APIs
  - Full streaming support
  - Usage tracking and rate limiting
- Anthropic API
  - Support for Claude models
  - API version compatibility
  - Response streaming
- Gemini API
  - Google's Gemini Pro and Vision models
  - Free-tier rate limiting support
- Custom MCP Servers
  - Support for self-hosted model servers
  - Compatible with the standard MCP protocol
Example Integrations
The bot includes example configurations for integrating with:
- LM Studio: Local inference server for open-source models
- Hugging Face Inference API: Access to thousands of open models
- Ollama: Local model server for efficient inference
- Perplexity AI: Specialized search and answer models
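For example, a local Ollama server could be registered with the `MCP_<NAME>_*` variables described in the next section (all values below are illustrative; 11434 is Ollama's default port, and the `openai` provider type assumes Ollama's OpenAI-compatible endpoint):

```bash
MCP_OLLAMA_URL=http://localhost:11434
MCP_OLLAMA_API_KEY=unused  # local Ollama doesn't require a key
MCP_OLLAMA_MODELS=llama3,mistral
MCP_OLLAMA_PROVIDER=openai
MCP_OLLAMA_PRIORITY=2
```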
Adding a New MCP Server
To add a new MCP server, add the following to your .env file:
```bash
MCP_YOUR_SERVER_API_KEY=your_api_key_here
MCP_YOUR_SERVER_URL=https://your-server-url.com
MCP_YOUR_SERVER_MODELS=model1,model2,model3
MCP_YOUR_SERVER_PROVIDER=provider_type  # openai, anthropic, etc.
MCP_YOUR_SERVER_PRIORITY=1              # Higher number = higher priority
```
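A minimal sketch of how priority-based routing with fallback can work (illustrative only; the real logic presumably lives in `src/mcp/manager.py`):

```python
from dataclasses import dataclass


@dataclass
class ServerEntry:
    name: str
    priority: int         # from MCP_<NAME>_PRIORITY
    healthy: bool = True  # updated by the health monitor


def pick_server(servers: list[ServerEntry]) -> ServerEntry:
    """Return the highest-priority healthy server, falling back down the list."""
    for server in sorted(servers, key=lambda s: s.priority, reverse=True):
        if server.healthy:
            return server
    raise RuntimeError("No healthy MCP server available")
```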
You can also enable dynamic server discovery:
```bash
MCP_ENABLE_DISCOVERY=true
MCP_DISCOVERY_URLS=https://example.com/mcp-servers.json
MCP_DISCOVERY_INTERVAL=3600  # Every hour
```
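The discovery file format isn't documented here, but a discovery URL would plausibly return a JSON list of server entries along these lines (hypothetical shape, mirroring the per-server variables above):

```json
[
  {
    "name": "community-server",
    "url": "https://mcp.example.org",
    "models": ["model-a", "model-b"],
    "provider": "openai",
    "priority": 1
  }
]
```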
Contributing
Contributions are welcome! Please feel free to submit a Pull Request.