compiledwithproblems / mcp

MCP Server (public)

A Spring Boot-based service that provides a unified interface for interacting with multiple large language models.

Repository Info

Stars: 0
Forks: 0
Watchers: 0
Issues: 0
Language: Kotlin
License: -

About This Server

A Spring Boot-based service that provides a unified interface for interacting with multiple large language models.

Model Context Protocol (MCP) - This server can be integrated with AI applications to provide additional context and capabilities, enabling enhanced AI interactions and functionality.

Documentation

MCP (Model Context Protocol) LLM Service

Overview

MCP is a Spring Boot-based service that provides a flexible and extensible interface for interacting with various Large Language Models (LLMs). The service is designed to support multiple LLM providers (like OpenAI, Anthropic, Ollama, etc.) through a unified API, making it easy to switch between different LLM backends without changing your application code.

Key Features

  • Multi-Provider Support: Built-in support for multiple LLM providers:
    • Ollama (local deployment)
    • OpenAI
    • Anthropic
    • Together.ai
    • Custom providers
  • Unified API: Consistent interface regardless of the underlying LLM provider
  • Tool Integration: Extensible tool system for adding custom capabilities
  • Coroutine Support: Built with Kotlin coroutines for efficient async operations
  • Spring Boot Integration: Leverages Spring Boot's robust ecosystem

Architecture

Core Components

  1. LLM Client Layer

    • LlmClient interface: Defines the contract for LLM interactions
    • GenericLlmClient: Implementation handling different provider-specific details
  2. Service Layer

    • LlmService: Manages chat interactions and tool integration
    • ToolService: Handles tool execution and lifecycle
  3. Tool System

    • ToolRegistry: Central registry for available tools
    • Tool: Represents executable functions that LLMs can use
    • ToolExecutor: Interface for implementing tool behavior
  4. Configuration

    • Provider-specific configurations
    • Environment-based configuration
    • Flexible API endpoint configuration
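
A minimal Kotlin sketch of the layering described above is shown below. The names mirror the components listed; the method signatures and constructor parameters are assumptions for illustration, not the repository's actual definitions.

// Contract for LLM interactions (signature assumed for illustration).
interface LlmClient {
    suspend fun chat(message: String, temperature: Double = 0.7, maxTokens: Int = 1000): String
}

// One generic implementation handles provider-specific details (endpoint,
// auth header, payload shape) instead of one class per provider.
class GenericLlmClient(
    private val baseUrl: String,
    private val apiKey: String?,
    private val modelName: String
) : LlmClient {
    override suspend fun chat(message: String, temperature: Double, maxTokens: Int): String {
        TODO("Build the provider-specific request against $baseUrl for model $modelName")
    }
}

// The service layer sits on top of the client and adds tool handling.
class LlmService(private val client: LlmClient) {
    suspend fun chat(message: String): String = client.chat(message)
}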

Setup and Configuration

Prerequisites

  • Java 17 or higher
  • Maven
  • (Optional) Local Ollama installation for local LLM support

Installation

  1. Clone the repository:
git clone [repository-url]
cd mcp
  2. Configure environment variables:
# For OpenAI
export OPENAI_API_KEY=your_key_here

# For Anthropic
export ANTHROPIC_API_KEY=your_key_here

# For Together.ai
export TOGETHER_API_KEY=your_key_here

# For custom provider
export LLM_BASE_URL=your_url_here
export LLM_API_KEY=your_key_here
export LLM_MODEL_NAME=your_model_here
  3. Build the project:
mvn clean install
  4. Run the application:
mvn spring-boot:run

Quick Start with Ollama

Get up and running in under 5 minutes:

  1. Install Ollama locally following instructions at Ollama.ai
  2. Clone and build MCP:
git clone [repository-url]
cd mcp
mvn clean install
  3. Start the service:
export LLM_PROVIDER=OLLAMA
mvn spring-boot:run
  4. Test with a simple request:
curl -X POST http://localhost:8080/api/chat/simple \
  -H "Content-Type: text/plain" \
  -d "What is the meaning of life?"

Docker Deployment

Prerequisites

  • Docker
  • Docker Compose
  • At least 12GB of available RAM (for running both MCP and Ollama)
  • At least 2GB of RAM (for standalone MCP)

Deployment Options

Option 1: Full Stack with Ollama (Local LLM)

This option starts both MCP and Ollama services, providing a completely local LLM solution:

  1. Clone the repository:
git clone [repository-url]
cd mcp
  2. Start the services:
docker-compose up -d

Option 2: Standalone MCP (Cloud LLM)

This option starts only the MCP service, configured to use cloud LLM providers:

  1. Clone the repository:
git clone [repository-url]
cd mcp
  2. Create a .env file with your API keys:
# .env
OPENAI_API_KEY=your_key_here
# ANTHROPIC_API_KEY=your_key_here
# TOGETHER_API_KEY=your_key_here
  3. Start the standalone service:
docker-compose -f docker-compose.standalone.yml up -d
  4. To use a different provider, set the appropriate environment variables in your .env file and update the LLM_PROVIDER in docker-compose.standalone.yml.

Checking Service Status

For either deployment option:

  1. Check the status:
# For full stack
docker-compose ps

# For standalone
docker-compose -f docker-compose.standalone.yml ps
  2. View logs:
# For full stack
docker-compose logs -f

# For standalone
docker-compose -f docker-compose.standalone.yml logs -f
  3. Test the service:
curl -X POST http://localhost:8080/api/chat/simple \
  -H "Content-Type: text/plain" \
  -d "What is the meaning of life?"

Docker Configuration

The Docker setup includes:

  • Multi-stage build for optimal image size
  • Non-root user for security
  • Health checks for both services
  • Resource limits to prevent container issues
  • Persistent volume for Ollama models
  • Bridge network for service communication

Resource Requirements

  • MCP Service:

    • Minimum: 1GB RAM
    • Recommended: 2GB RAM
  • Ollama Service:

    • Minimum: 4GB RAM
    • Recommended: 8GB RAM

Customizing Docker Setup

  1. Modify resource limits in docker-compose.yml:
deploy:
  resources:
    limits:
      memory: 2G
    reservations:
      memory: 1G
  2. Change ports in docker-compose.yml:
ports:
  - "custom_port:8080"  # For MCP
  - "custom_port:11434" # For Ollama
  3. Add custom environment variables:
environment:
  - CUSTOM_VAR=value

Usage

Basic Chat Endpoint

POST /api/chat
Content-Type: application/json

{
    "message": "Your message here",
    "temperature": 0.7,
    "maxTokens": 1000
}

Simple Chat Endpoint

POST /api/chat/simple
Content-Type: text/plain

Your message here
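
For example, the simple endpoint can be called from Kotlin with the JDK's built-in HTTP client. This is a sketch mirroring the curl examples in this README; it assumes the service is running on localhost:8080.

import java.net.URI
import java.net.http.HttpClient
import java.net.http.HttpRequest
import java.net.http.HttpResponse

fun main() {
    val client = HttpClient.newHttpClient()
    // Plain-text request to the simple chat endpoint, as in the curl examples.
    val request = HttpRequest.newBuilder()
        .uri(URI.create("http://localhost:8080/api/chat/simple"))
        .header("Content-Type", "text/plain")
        .POST(HttpRequest.BodyPublishers.ofString("What is the meaning of life?"))
        .build()
    val response = client.send(request, HttpResponse.BodyHandlers.ofString())
    println(response.body())   // JSON body with role and content fields
}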

Provider Configuration

Ollama (Local)

LLM_PROVIDER=OLLAMA
# No API key needed for Ollama

Important Ollama Setup Steps

  1. First start the services:
docker-compose up -d
  2. Pull the required model (after services are running):
# Connect to the Ollama container
docker exec mcp-ollama-1 ollama pull llama3.2
  3. Verify the model is available:
docker exec mcp-ollama-1 ollama list
  4. Test the service:
curl -X POST http://localhost:8080/api/chat/simple \
  -H "Content-Type: text/plain" \
  -d "What is the meaning of life?"

Using Different Ollama Models

  • To use a different model, update the modelName in LlmProviderConfig.kt
  • Available models can be found at Ollama Model Library
  • Example models:
    • llama3.2 (default)
    • mistral
    • codellama
    • llama2
    • neural-chat

Troubleshooting Ollama

  1. Model Not Found

    # List available models
    docker exec mcp-ollama-1 ollama list
    # Pull missing model
    docker exec mcp-ollama-1 ollama pull <model_name>
    
  2. Connection Issues

    • Ensure Ollama container is running: docker-compose ps
    • Check Ollama logs: docker-compose logs ollama
    • Verify network connectivity: docker network inspect mcp-network
  3. Performance Issues

    • Adjust memory limits in docker-compose.yml
    • Monitor resource usage: docker stats

OpenAI

LLM_PROVIDER=OPENAI
OPENAI_API_KEY=your_key_here

Anthropic

LLM_PROVIDER=ANTHROPIC
ANTHROPIC_API_KEY=your_key_here

Together.ai

LLM_PROVIDER=TOGETHER_AI
TOGETHER_API_KEY=your_key_here

Tool System

The MCP includes a powerful tool system that allows LLMs to execute custom functions. Tools are registered with the ToolRegistry and can be invoked during chat sessions.

Creating a Custom Tool

  1. Define your tool:
data class MyTool(
    override val name: String = "myTool",
    override val description: String = "Description of what the tool does",
    override val schema: String = """{"type": "object", "properties": {...}}"""
) : Tool

class MyToolExecutor : ToolExecutor {
    override suspend fun execute(input: Map<String, Any>): ToolResult {
        // Tool implementation: read arguments from input, run the tool,
        // and return a ToolResult describing the outcome
        TODO("Implement the tool's behaviour here")
    }
}
  2. Register the tool:
toolRegistry.registerTool(MyTool(), MyToolExecutor())

Development

Project Structure

src/
├── main/
│   └── kotlin/
│       └── com/
│           └── mcp/
│               ├── controller/    # REST endpoints
│               ├── llm/          # LLM integration
│               ├── model/        # Data models
│               ├── registry/     # Tool registry
│               └── service/      # Business logic
└── test/
    └── kotlin/
        └── com/
            └── mcp/
                └── tests/        # Test cases

Adding a New Provider

  1. Add the provider to LlmProvider enum
  2. Implement provider configuration in LlmProviders
  3. Add provider-specific logic in GenericLlmClient
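
As a rough Kotlin illustration of these three steps (the enum values below match the providers documented in this README, but the field names and helper function are assumptions, not the repository's actual code):

// 1. Extend the provider enum with the new backend.
enum class LlmProvider { OLLAMA, OPENAI, ANTHROPIC, TOGETHER_AI, MY_PROVIDER }

// 2. Describe how to reach it (field names are assumptions).
data class LlmProviderConfig(
    val provider: LlmProvider,
    val baseUrl: String,
    val apiKey: String?,
    val modelName: String
)

// 3. Add provider-specific wiring where requests are built, e.g. a default
//    endpoint per provider (hypothetical helper, shown only for illustration).
fun defaultBaseUrl(provider: LlmProvider): String = when (provider) {
    LlmProvider.OLLAMA -> "http://localhost:11434"
    LlmProvider.OPENAI -> "https://api.openai.com/v1"
    LlmProvider.ANTHROPIC -> "https://api.anthropic.com/v1"
    LlmProvider.TOGETHER_AI -> "https://api.together.xyz/v1"
    LlmProvider.MY_PROVIDER -> System.getenv("LLM_BASE_URL") ?: error("LLM_BASE_URL not set")
}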

Future Updates

The following features are planned for upcoming releases:

  1. Infrastructure as Code (IaC) Implementation

    • Converting configuration to YAML format
    • Improved configuration management
    • Better deployment automation
  2. Enhanced Tool System

    • Adding more built-in tools
    • Improved tool discovery and documentation
    • Enhanced tool execution pipeline
  3. File Upload and Analysis

    • Support for file uploads
    • Document analysis and querying
    • Context-aware responses based on uploaded content

Contributing

  1. Fork the repository
  2. Create a feature branch
  3. Commit your changes
  4. Push to the branch
  5. Create a Pull Request

Author

Brandon Tate

License

MIT License

Copyright (c) 2025 Tate Industries LLC

Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.

Support

Please don't contact me.

Security Considerations

API Key Management

  • Never commit API keys to version control
  • Use environment variables or secure secret management systems
  • Consider using AWS Secrets Manager, HashiCorp Vault, or similar for production deployments
  • Rotate API keys regularly
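
A tiny Kotlin illustration of the environment-variable approach (generic, not this repository's configuration code; the variable name matches those documented in this README):

// Fail fast at startup if the key for the selected provider is missing,
// rather than hard-coding credentials in source or config files.
fun requiredKey(name: String): String =
    System.getenv(name) ?: error("$name must be set for the selected LLM_PROVIDER")

val openAiKey = requiredKey("OPENAI_API_KEY")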

Rate Limiting

  • Be aware of provider-specific rate limits
  • Implement appropriate retry mechanisms
  • Monitor API usage to avoid unexpected costs

Data Privacy

  • Be cautious with sensitive data sent to LLM providers
  • Consider using Ollama for sensitive local deployments
  • Review provider privacy policies and data handling practices

Provider Version Compatibility

| Provider | API Version | Supported Models | Notes |
|----------|-------------|------------------|-------|
| OpenAI | v1 | gpt-3.5-turbo, gpt-4 | Supports function calling |
| Anthropic | v1 | claude-3-opus-20240229 | Best for complex reasoning |
| Together.ai | v1 | mistral-7b-instruct | Good balance of speed/quality |
| Ollama | v1 | llama2, mistral, others | Local deployment, no API key needed |

Troubleshooting

Common Issues

  1. Connection Refused to Ollama

    • Ensure Ollama is running locally
    • Check if the default port (11434) is available
    • Verify firewall settings
  2. API Key Issues

    • Verify environment variables are set correctly
    • Check for trailing spaces in API keys
    • Ensure proper permissions for the API key
  3. Memory Issues

    • Adjust JVM heap size if needed
    • Monitor memory usage with large conversations
    • Consider implementing conversation pruning
  4. Rate Limiting

    • Implement exponential backoff
    • Monitor response headers for rate limit info
    • Consider using multiple API keys for high-volume deployments
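
For the backoff recommendation above, a generic Kotlin coroutine helper might look like this (a sketch only, not code from this repository):

import kotlinx.coroutines.delay

// Retry a suspending call with exponentially growing delays between attempts.
suspend fun <T> withBackoff(
    maxAttempts: Int = 5,
    initialDelayMs: Long = 500,
    block: suspend () -> T
): T {
    var delayMs = initialDelayMs
    repeat(maxAttempts - 1) {
        try {
            return block()
        } catch (e: Exception) {
            delay(delayMs)   // wait before retrying
            delayMs *= 2     // double the wait each time
        }
    }
    return block()           // final attempt; failures propagate to the caller
}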

Logging

Enable debug logging by adding to application.properties:

logging.level.com.mcp=DEBUG

API Endpoints

Chat Endpoints

  1. Simple Chat
POST /api/chat/simple
Content-Type: text/plain

Your message here
  2. Advanced Chat
POST /api/chat
Content-Type: application/json

{
    "message": "Your message here",
    "temperature": 0.7,    // Controls randomness (0.0 to 1.0)
    "maxTokens": 1000      // Maximum response length
}

Response Format

{
    "role": "assistant",
    "content": "The response from the LLM..."
}
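
In Kotlin terms, those payloads map onto two simple data classes (names assumed for illustration; the repository's own model classes may differ):

data class ChatRequest(
    val message: String,
    val temperature: Double = 0.7,  // controls randomness (0.0 to 1.0)
    val maxTokens: Int = 1000       // maximum response length
)

data class ChatResponse(
    val role: String,    // e.g. "assistant"
    val content: String  // the LLM's reply
)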

Environment Variables

| Variable | Description | Required | Default |
|----------|-------------|----------|---------|
| LLM_PROVIDER | Provider to use (OLLAMA, OPENAI, etc.) | Yes | OLLAMA |
| OLLAMA_HOST | Hostname for Ollama service | No | ollama |
| OPENAI_API_KEY | OpenAI API key | Only for OpenAI | - |
| ANTHROPIC_API_KEY | Anthropic API key | Only for Anthropic | - |
| TOGETHER_API_KEY | Together.ai API key | Only for Together.ai | - |

Quick Start

  1. Clone the repository:
git clone https://github.com/compiledwithproblems/mcp
  2. Install dependencies and build:
cd mcp
mvn clean install
  3. Follow the documentation:
Check the repository's README.md file for specific installation and usage instructions.

Repository Details

Owner: compiledwithproblems
Repo: mcp
Language: Kotlin
License: -
Last fetched: 8/10/2025
