
Ollama MCP Server
A comprehensive Model Context Protocol (MCP) server for Ollama integration with advanced features including script management, multi-agent workflows, and process leak prevention.
🌟 Features
- 🔄 Async Job Management: Execute long-running tasks in the background
- 📝 Script Templates: Create reusable prompt templates with variable substitution
- 🤖 Fast-Agent Integration: Multi-agent workflows (chain, parallel, router, evaluator)
- 🛡️ Process Leak Prevention: Proper cleanup and resource management
- 📊 Comprehensive Monitoring: Job tracking, status monitoring, and output management
- 🎯 Built-in Prompts: Interactive guidance templates for common tasks
- ⚡ Multiple Model Support: Work with any locally installed Ollama model
🚀 Quick Start
Prerequisites
- Python 3.8+ with uv package manager
- Ollama installed and running
- Claude Desktop for MCP integration
Installation
- Set up the environment (be advised: this README was revised by a less-than-conscientious AI):

```bash
git clone https://github.com/angrysky56/ollama-mcp-server
cd ollama-mcp-server
uv venv --python 3.12 --seed
source .venv/bin/activate
uv add "mcp[cli]" python-dotenv
```
- Configure Claude Desktop: copy the configuration from example_claude_desktop_config.json (not example_of_bad_ai_gen_mcp_config_do_not_use.json; don't, lol) to your Claude Desktop config file:
  - Linux: `~/.config/Claude/claude_desktop_config.json`
  - macOS: `~/Library/Application Support/Claude/claude_desktop_config.json`
  - Windows: `%APPDATA%\Claude\claude_desktop_config.json`
- Update the paths in the config to match your system
- Restart Claude Desktop
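For reference, here is a minimal sketch of what the server entry can look like. The shape (mcpServers, command, args) is the standard Claude Desktop format, but the uv-based launch command below mirrors the development command later in this README and is an assumption; prefer the shipped example_claude_desktop_config.json.

```json
{
  "mcpServers": {
    "ollama-mcp-server": {
      "command": "uv",
      "args": [
        "--directory", "/path/to/ollama-mcp-server",
        "run", "python", "-m", "ollama_mcp_server.server"
      ]
    }
  }
}
```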
🛠️ Available Tools
Core Operations
- `list_ollama_models` - Show all available Ollama models
- `run_ollama_prompt` - Execute prompts with any model (sync/async)
- `get_job_status` - Check job completion status
- `list_jobs` - View all running and completed jobs
- `cancel_job` - Stop running jobs
Script Management
- `save_script` - Create reusable prompt templates
- `list_scripts` - View saved templates
- `get_script` - Read template content
- `run_script` - Execute templates with variables
Fast-Agent Workflows
- `create_fastagent_script` - Single-agent scripts
- `create_fastagent_workflow` - Multi-agent workflows
- `run_fastagent_script` - Execute agent workflows
- `list_fastagent_scripts` - View available workflows
System Integration
- `run_bash_command` - Execute system commands safely
- `run_workflow` - Multi-step workflow execution
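To make the tool surface concrete, here is a minimal Python client sketch using the official mcp SDK's stdio client. The tool names come from the lists above, but the argument names (model, prompt) are assumptions about this server's schema; check the tool descriptions the server exposes for the real parameters.

```python
import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client


async def main() -> None:
    # Launch the server over stdio, the same way Claude Desktop does.
    params = StdioServerParameters(
        command="uv",
        args=["--directory", "/path/to/ollama-mcp-server",
              "run", "python", "-m", "ollama_mcp_server.server"],
    )
    async with stdio_client(params) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()

            # Core operation: enumerate locally installed models.
            models = await session.call_tool("list_ollama_models", {})
            print(models)

            # Argument names here are assumed, not taken from the server's schema.
            result = await session.call_tool(
                "run_ollama_prompt",
                {"model": "llama3.2", "prompt": "Say hello in one sentence."},
            )
            print(result)


asyncio.run(main())
```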
📖 Built-in Prompts
Interactive prompts to guide common tasks:
- `ollama_guide` - Interactive user guide
- `ollama_run_prompt` - Simple prompt execution
- `model_comparison` - Compare multiple models
- `fast_agent_workflow` - Multi-agent workflows
- `script_executor` - Template execution
- `batch_processing` - Multiple prompt processing
- `iterative_refinement` - Content improvement workflows
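Prompts are fetched through MCP's prompt API rather than call_tool. A fragment reusing the session from the client sketch above (prompt names come from the list; any arguments are assumptions):

```python
# Fetch the interactive guide. Built-in prompts return message templates
# for the client to use; they do not execute anything themselves.
guide = await session.get_prompt("ollama_guide")
for message in guide.messages:
    print(message.role, message.content)
```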
📁 Directory Structure
```
ollama-mcp-server/
├── src/ollama_mcp_server/
│   └── server.py            # Main server code
├── outputs/                 # Generated output files
├── scripts/                 # Saved script templates
├── workflows/               # Workflow definitions
├── fast-agent-scripts/      # Fast-agent Python scripts
├── prompts/                 # Usage guides
│   ├── tool_usage_guide.md
│   ├── prompt_templates_guide.md
│   └── setup_guide.md
├── example_mcp_config.json  # Claude Desktop config
└── README.md
```
🔧 Development
Run Development Server
```bash
cd ollama-mcp-server
uv run python -m ollama_mcp_server.server
```
Debug with MCP Inspector
```bash
mcp dev src/ollama_mcp_server/server.py
```
🛡️ Process Management
The server includes comprehensive process leak prevention:
- Signal Handling: Proper SIGTERM/SIGINT handling
- Background Task Tracking: All async tasks monitored
- Resource Cleanup: Automatic process termination
- Memory Management: Prevents accumulation of zombie processes
Monitor health with:
```bash
ps aux | grep mcp | wc -l  # Should show <10 processes
```
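The real implementation lives in src/ollama_mcp_server/server.py; the sketch below only illustrates the general asyncio pattern the list above describes (track every background task, cancel them all on SIGTERM/SIGINT), not the server's actual code:

```python
import asyncio
import signal

# Strong references keep background tasks from being garbage-collected mid-flight.
background_tasks: set[asyncio.Task] = set()


def track(task: asyncio.Task) -> None:
    background_tasks.add(task)
    task.add_done_callback(background_tasks.discard)  # drop finished tasks


async def shutdown() -> None:
    # Cancel everything we started, then wait for the cancellations to land,
    # so no task or subprocess outlives the server.
    for task in list(background_tasks):
        task.cancel()
    await asyncio.gather(*background_tasks, return_exceptions=True)


def install_signal_handlers(loop: asyncio.AbstractEventLoop) -> None:
    for sig in (signal.SIGTERM, signal.SIGINT):
        loop.add_signal_handler(sig, lambda: asyncio.create_task(shutdown()))
```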
📊 Usage Examples
Simple Prompt Execution
1. Use "ollama_run_prompt" prompt in Claude
2. Specify model and prompt text
3. Get immediate results
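Step 3 applies to synchronous runs; in async mode the result arrives through the job tools instead. A hedged fragment of the polling loop, reusing the session from the client sketch above (the async flag, the job_id field, and the status text are all assumptions about this server's schema):

```python
import asyncio
import json

# Assumed: run_ollama_prompt accepts an async flag and returns a job id.
submit = await session.call_tool(
    "run_ollama_prompt",
    {"model": "llama3.2", "prompt": "Write a haiku.", "async": True},
)
job_id = json.loads(submit.content[0].text)["job_id"]  # field name assumed

# Poll until the background job finishes; cancel_job would stop it early.
while True:
    status = await session.call_tool("get_job_status", {"job_id": job_id})
    if "completed" in status.content[0].text:  # status format assumed
        break
    await asyncio.sleep(2)
```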
Multi-Agent Workflow
1. Use "fast_agent_workflow" prompt
2. Choose workflow type (chain/parallel/router/evaluator)
3. Define agents and initial prompt
4. Monitor execution
Script Templates
1. Create template with save_script
2. Use variables: {variable_name}
3. Execute with run_script
4. Pass JSON variables object
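The {variable_name} convention matches Python's str.format placeholders, so the substitution step can be pictured like this (an illustration of the convention only, not the server's code; the template text is made up):

```python
import json

# A template as save_script might store it, with {variable_name} placeholders.
template = "Review the following {language} code and suggest fixes:\n{code}"

# run_script receives the variables as a JSON object.
variables = json.loads('{"language": "Python", "code": "print(42)"}')

print(template.format(**variables))
```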
🚨 Troubleshooting
- Model not found: use `list_ollama_models` for exact names
- Connection issues: start Ollama with `ollama serve`
- High process count: the server now prevents leaks automatically
- Job stuck: use `cancel_job` to stop problematic tasks
🤝 Contributing
- Follow the MCP Python SDK development guidelines
- Use proper type hints and docstrings
- Test all new features thoroughly
- Ensure process cleanup in all code paths
📄 License
This project follows the same license terms as the MCP Python SDK.
🙏 Acknowledgments
Built on the Model Context Protocol and Ollama with process management patterns from MCP best practices.
Ready to get started? Check prompts/setup_guide.md for detailed installation instructions!