agent-orchestra

MCP Server by timonharz (public)

A flexible and extensible open-source framework for creating general-purpose AI agents capable of solving a wide variety of tasks.

Repository Info

  • Stars: 0
  • Forks: 0
  • Watchers: 0
  • Issues: 3
  • Language: Python
  • License: MIT License

About This Server

A flexible and extensible open-source framework for creating general-purpose AI agents capable of solving a wide variety of tasks.

Model Context Protocol (MCP): this server can be integrated with MCP-compatible AI applications to supply additional context and capabilities to the model.

Documentation

English | 中文 | 한국어 | 日本語

License: MIT

👋 Agent Orchestra

Agent Orchestra is an open-source framework for creating powerful, general-purpose AI agents that can solve a wide variety of tasks through orchestration of different capabilities and models.

Our goal is to provide a flexible, extensible architecture that enables developers to build agents that can:

  • Work with multiple LLM providers (OpenAI, Google, Anthropic, and more)
  • Use tools and APIs to accomplish complex tasks
  • Navigate and interact with web content
  • Execute code and terminal commands
  • Adapt to different environments and contexts

Whether you're building a personal assistant, a research tool, or a specialized automation agent, Agent Orchestra provides the foundation for creating intelligent, capable AI systems.

Project Demo

Features

  • Multi-Model Support: Use GPT-4, Claude, Gemini, and more
  • Tool Integration: Built-in support for various tools and APIs
  • Browser Automation: Navigate and interact with web content
  • Planning Capabilities: Generate and execute complex plans
  • Flexible Architecture: Extend with custom tools and components
  • Memory Management: Efficient handling of context and history

Installation

We provide two installation methods. Method 2 (using uv) is recommended for faster installation and better dependency management.

Method 1: Using conda

  1. Create a new conda environment:
conda create -n agent_orchestra python=3.12
conda activate agent_orchestra
  2. Clone the repository:
git clone https://github.com/timonharz/agent-orchestra.git
cd agent-orchestra
  3. Install dependencies:
pip install -r requirements.txt

Method 2: Using uv (Recommended)

  1. Install uv (a fast Python package installer and resolver):
curl -LsSf https://astral.sh/uv/install.sh | sh
  2. Clone the repository:
git clone https://github.com/timonharz/agent-orchestra.git
cd agent-orchestra
  3. Create a new virtual environment and activate it:
uv venv --python 3.12
source .venv/bin/activate  # On Unix/macOS
# Or on Windows:
# .venv\Scripts\activate
  4. Install dependencies:
uv pip install -r requirements.txt

Browser Automation Tool (Optional)

If you plan to use the browser automation features, install the Playwright browsers after installing the Python dependencies:

playwright install
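
To confirm the Playwright installation works, a minimal sketch like the following uses Playwright's Python sync API to open a page headlessly and print its title (the URL is just an arbitrary test target):

# Quick sanity check that Playwright and its Chromium build are installed.
from playwright.sync_api import sync_playwright

with sync_playwright() as p:
    browser = p.chromium.launch(headless=True)  # requires `playwright install` to have run
    page = browser.new_page()
    page.goto("https://example.com")            # arbitrary test URL
    print(page.title())                         # expect "Example Domain"
    browser.close()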

Configuration

Agent Orchestra requires configuration for the LLM APIs it uses. Follow these steps to set up your configuration:

  1. Create a config.toml file in the config directory (you can copy from the example):
cp config/config.example.toml config/config.toml
  2. Edit config/config.toml to add your API keys and customize settings:
# Global LLM configuration
[llm]
model = "gpt-4o"
base_url = "https://api.openai.com/v1"
api_key = "sk-..."  # Replace with your actual API key
max_tokens = 4096
temperature = 0.0

# Optional configuration for specific LLM models
[llm.vision]
model = "gpt-4o"
base_url = "https://api.openai.com/v1"
api_key = "sk-..."  # Replace with your actual API key
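
Agent Orchestra reads this file at startup. If you want to double-check your settings by hand, a minimal sketch like the one below parses the file with Python's standard tomllib (available in Python 3.11+, so it works in the 3.12 environments created above); how Agent Orchestra itself consumes the file may differ.

# Sketch: validate config/config.toml manually (requires Python 3.11+ for tomllib).
import tomllib

with open("config/config.toml", "rb") as f:   # tomllib requires binary mode
    config = tomllib.load(f)

llm = config["llm"]
print(llm["model"], llm["base_url"])          # confirm the values you expect
assert llm["api_key"] != "sk-...", "Replace the placeholder API key"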

Using Google Gemini Models

To use Google Gemini models, copy the Gemini configuration example:

cp config/config.example-gemini.toml config/config.toml

Then edit config/config.toml to add your Google API key:

# Global LLM configuration for Google Gemini
[llm]
model = "gemini-2.0-flash-001"                                          # The LLM model to use
base_url = "https://generativelanguage.googleapis.com/v1beta/openai/"   # API endpoint URL
api_key = "YOUR_API_KEY"                                                # Your API key
temperature = 0.0                                                       # Controls randomness
max_tokens = 8096                                                       # Maximum number of tokens in the response
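
Because this configuration goes through Google's OpenAI-compatible endpoint, you can sanity-check the key and base URL independently of Agent Orchestra with the openai Python package. This is only a sketch; the openai package is assumed to be installed separately and is not required by Agent Orchestra itself.

# Sketch: verify the Gemini key and endpoint via the OpenAI-compatible API.
from openai import OpenAI

client = OpenAI(
    base_url="https://generativelanguage.googleapis.com/v1beta/openai/",
    api_key="YOUR_API_KEY",                     # same key as in config.toml
)
response = client.chat.completions.create(
    model="gemini-2.0-flash-001",
    messages=[{"role": "user", "content": "Say hello in one word."}],
)
print(response.choices[0].message.content)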

Quick Start

Run Agent Orchestra with a single command:

python main.py

Then enter your prompt in the terminal!

To run the MCP tool version:

python run_mcp.py

To run the multi-agent version:

python run_flow.py

How to Contribute

We welcome any contributions to help improve Agent Orchestra! Feel free to submit issues or pull requests for:

  • Bug fixes and improvements
  • New features and tools
  • Documentation updates
  • Performance optimizations

Before submitting a pull request, please use the pre-commit tool to check your changes. Run pre-commit run --all-files to execute the checks.

License

Agent Orchestra is released under the MIT License. See the LICENSE file for details.

Acknowledgments

This project builds upon and draws inspiration from various open-source projects in the AI agent space. We extend our gratitude to all contributors of these projects for their valuable work.

Special thanks to anthropic-computer-use and browser-use for providing foundational capabilities.

Deployment Options

Option 1: Direct Execution

Run the server directly, but note that port 80 requires root privileges:

sudo python3 server.py

Alternatively, use a higher port number:

PORT=8080 python3 server.py
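
Once the server is running, a quick probe of the /health endpoint (listed under API Endpoints below) confirms it is responding. This sketch uses only the standard library and assumes the server is listening on localhost:8080, as in the command above:

# Quick liveness probe against the /health endpoint.
import urllib.request

with urllib.request.urlopen("http://localhost:8080/health", timeout=5) as resp:
    print(resp.status, resp.read().decode())   # expect HTTP 200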

Option 2: Systemd Service

Use the provided deployment script:

sudo ./deploy.sh

This creates and starts a systemd service running on port 80.

Option 3: Docker Compose

The easiest way to run the server without root privileges:

docker-compose up -d

Option 4: Nginx with HTTPS (Best for Production)

For a production environment, use Nginx as a reverse proxy with proper SSL certificates:

  1. Run the deployment script with your domain name:
sudo ./deploy-with-nginx.sh your-domain.com

This script:

  • Installs Nginx and Certbot
  • Configures the Agent Orchestra app to run on port 8080
  • Sets up Nginx as a reverse proxy
  • Obtains and configures SSL certificates via Let's Encrypt
  • Configures HTTP to HTTPS redirection

Your API will be accessible at https://your-domain.com.

Manual Setup

If you prefer to set up Nginx manually:

  1. Run the server on a non-privileged port:
PORT=8080 python3 server.py
  2. Install Nginx:
sudo apt-get install nginx
  3. Use our provided Nginx config:
sudo cp nginx.conf /etc/nginx/sites-available/agent-orchestra
  4. Edit the config file to match your domain:
sudo nano /etc/nginx/sites-available/agent-orchestra
  5. Enable the site and get SSL certificates:
sudo ln -s /etc/nginx/sites-available/agent-orchestra /etc/nginx/sites-enabled/
sudo certbot --nginx -d your-domain.com
sudo systemctl restart nginx

API Endpoints

  • GET /health - Health check
  • GET /api/models - List available models
  • POST /api/agent/run - Run agent with messages
  • POST /api/chat/completions - Legacy chat completion API
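
The request and response schemas for these endpoints are not documented here, so treat the following as a sketch only: it posts an OpenAI-style messages array to /api/agent/run using the standard library and assumes the server from the deployment section is listening on localhost:8080.

# Sketch: call POST /api/agent/run with an OpenAI-style message list.
# The payload shape is an assumption; check the server code for the actual schema.
import json
import urllib.request

payload = {"messages": [{"role": "user", "content": "Summarize the repository README."}]}
req = urllib.request.Request(
    "http://localhost:8080/api/agent/run",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
    method="POST",
)
with urllib.request.urlopen(req, timeout=120) as resp:
    print(json.loads(resp.read().decode()))   # assumes a JSON response body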

Quick Start

  1. Clone the repository:
git clone https://github.com/timonharz/agent-orchestra
  2. Install dependencies:
cd agent-orchestra
pip install -r requirements.txt
  3. Follow the documentation: check the repository's README.md file for detailed installation and usage instructions.

Repository Details

  • Owner: timonharz
  • Repo: agent-orchestra
  • Language: Python
  • License: MIT License
  • Last fetched: 8/10/2025

Recommended MCP Servers

  • 💬 Discord MCP: Enable AI assistants to seamlessly interact with Discord servers, channels, and messages. (Tags: integrations, discord, chat)
  • 🔗 Knit MCP: Connect AI agents to 200+ SaaS applications and automate workflows. (Tags: integrations, automation, saas)
  • 🕷️ Apify MCP Server: Deploy and interact with Apify actors for web scraping and data extraction. (Tags: apify, crawler, data)
  • 🌐 BrowserStack MCP: BrowserStack MCP Server for automated testing across multiple browsers. (Tags: testing, qa, browsers)
  • Zapier MCP: A Zapier server that provides automation capabilities for various apps. (Tags: zapier, automation)