
Memory Alpha
A Model Context Protocol (MCP) server that provides code context retrieval and index updating for LLM agents.
Documentation
Memory Alpha - Model Context Protocol Server
Memory Alpha is a Model Context Protocol (MCP) server that gives LLM agents the ability to retrieve relevant code context and update the code index, using the FastMCP library.
Features
- query_context: Retrieve bounded "evidence-packs" of code chunks that best answer a prompt
- index_update: Push incremental code changes to keep the underlying index fresh
- Built with FastMCP for minimal boilerplate and automatic SSE support
- Optimized for LLM agent interactions
Installation
Prerequisites
- Python 3.10 or higher
- FastMCP (installed from GitHub)
- Ollama for embeddings
- Qdrant for vector storage
Setup
# Clone the repository
git clone https://github.com/your-org/memory-alpha.git
cd memory-alpha
# Create a virtual environment
uv venv
# Activate the virtual environment
source .venv/bin/activate
# Install dependencies
uv pip install git+https://github.com/jlowin/fastmcp.git
uv pip install -e .
# Ensure Ollama is running and has the required model
./ensure_ollama.py
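ensure_ollama.py itself is not reproduced in this document, but conceptually the check is simple: ask Ollama which models it has installed and pull the embedding model if it is missing. The sketch below illustrates that flow; the OLLAMA_URL and EMBED_MODEL defaults are taken from the Configuration section further down, and everything else is an assumption rather than the script's actual code.

#!/usr/bin/env python3
"""Illustrative sketch only -- the real ensure_ollama.py may differ."""
import os
import sys

import requests

OLLAMA_URL = os.environ.get("OLLAMA_URL", "http://localhost:11434")
EMBED_MODEL = os.environ.get("EMBED_MODEL", "mxbai-embed-large")


def main() -> int:
    # Is the Ollama server reachable at all?
    try:
        tags = requests.get(f"{OLLAMA_URL}/api/tags", timeout=5).json()
    except requests.RequestException as exc:
        print(f"Ollama not reachable at {OLLAMA_URL}: {exc}")
        return 1

    # Ollama reports installed models with tags, e.g. "mxbai-embed-large:latest".
    installed = {m["name"].split(":")[0] for m in tags.get("models", [])}
    if EMBED_MODEL in installed:
        print(f"Model {EMBED_MODEL} is already available.")
        return 0

    # Pull the model; /api/pull streams progress as JSON lines.
    print(f"Pulling {EMBED_MODEL} ...")
    resp = requests.post(f"{OLLAMA_URL}/api/pull", json={"name": EMBED_MODEL}, stream=True)
    resp.raise_for_status()
    for _ in resp.iter_lines():
        pass  # drain the progress stream
    print("Done.")
    return 0


if __name__ == "__main__":
    sys.exit(main())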
Usage
Starting the server
# HTTP/SSE mode
./run.py
# stdio mode (for use with modelcontextprotocol/inspector)
./run_stdio.py
# Or using fastmcp directly
fastmcp run --port 58765 --transport sse server.py # HTTP/SSE mode
fastmcp run --transport stdio server.py # stdio mode
The HTTP server will start on http://0.0.0.0:58765 by default.
Validating the server
The project includes several validation scripts:
# Validate both HTTP and stdio modes
./validate_all.py
# Validate only HTTP mode
./simple_validation.py --start
# Validate only stdio mode
./validate_stdio.py
API Tools
- query_context: Retrieve relevant code chunks for a prompt
- index_update: Update the code index with new file content
- resource://health: Health check endpoint
- resource://docs/chunking: Documentation about the chunking process
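As a rough illustration of how this API surface fits together, the sketch below registers a query_context tool, an index_update tool, and a health resource with FastMCP, using Ollama for embeddings and Qdrant for vector search. The collection name, payload fields, and function signatures are illustrative assumptions, not the actual server.py implementation.

# Illustrative sketch only -- not the actual server.py. Assumes the default
# Ollama/Qdrant URLs and the mxbai-embed-large model documented below.
import uuid

import requests
from fastmcp import FastMCP
from qdrant_client import QdrantClient
from qdrant_client.models import PointStruct

OLLAMA_URL = "http://localhost:11434"
QDRANT_URL = "http://localhost:6333"
EMBED_MODEL = "mxbai-embed-large"
COLLECTION = "production_code_chunks"  # hypothetical collection name

mcp = FastMCP("memory-alpha")
qdrant = QdrantClient(url=QDRANT_URL)


def embed(text: str) -> list[float]:
    # Ask Ollama for an embedding vector for the given text.
    resp = requests.post(
        f"{OLLAMA_URL}/api/embeddings",
        json={"model": EMBED_MODEL, "prompt": text},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["embedding"]


@mcp.tool()
def query_context(prompt: str, max_chunks: int = 10) -> list[dict]:
    """Return the code chunks whose embeddings best match the prompt."""
    hits = qdrant.search(
        collection_name=COLLECTION,
        query_vector=embed(prompt),
        limit=max_chunks,
    )
    return [hit.payload for hit in hits]


@mcp.tool()
def index_update(path: str, content: str) -> str:
    """Re-embed a file's content and upsert it into the index."""
    point = PointStruct(
        id=str(uuid.uuid5(uuid.NAMESPACE_URL, path)),  # stable id per path
        vector=embed(content),
        payload={"path": path, "content": content},
    )
    qdrant.upsert(collection_name=COLLECTION, points=[point])
    return f"indexed {path}"


@mcp.resource("resource://health")
def health() -> dict:
    """Simple liveness check for clients."""
    return {"status": "ok"}


if __name__ == "__main__":
    mcp.run()  # stdio by default; the fastmcp CLI can select SSE instead

FastMCP derives the tool schemas from the Python type hints and docstrings, which is what keeps the boilerplate minimal.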
Command Line Tools
When installed as a package, Memory Alpha provides several command-line tools:
- memory-alpha: Start the MCP server
- memory-alpha-ensure-ollama: Check if Ollama is running and has the required model
- memory-alpha-debug-settings: Display all current settings and their sources
These tools can be used with command-line arguments. For example:
# Start the server on a specific port
memory-alpha --port 8000
# Check for a different Ollama model
memory-alpha-ensure-ollama --model mxbai-embed-large-v2
# Show settings while overriding environment variables
OLLAMA_URL=http://other-server:11434 memory-alpha-debug-settings
Run any command with --help to see all available options.
Example queries
# Using curl to query the health endpoint
curl http://localhost:9876/resources/resource://health
# Using the MCP Inspector to test the server
npx @modelcontextprotocol/inspector --cli http://localhost:9876 --method tools/list
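For programmatic access from Python, the official mcp SDK can talk to the server over SSE. The snippet below is a sketch under the assumption that the server is running on its default port (58765) with the standard /sse path; adjust both to match your setup.

# Illustrative client sketch using the official "mcp" Python SDK over SSE.
import asyncio

from mcp import ClientSession
from mcp.client.sse import sse_client


async def main() -> None:
    async with sse_client("http://localhost:58765/sse") as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()

            # List the tools the server exposes (query_context, index_update)
            tools = await session.list_tools()
            print([tool.name for tool in tools.tools])

            # Ask for an evidence-pack of code chunks relevant to a prompt
            result = await session.call_tool(
                "query_context", {"prompt": "how is the index updated?"}
            )
            print(result.content)


asyncio.run(main())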
Project Structure
- src/memory_alpha/server.py: Main MCP server implementation
- src/memory_alpha/settings.py: Settings configuration using pydantic_settings
- src/memory_alpha/params.py: Input parameter schemas
- .env: Environment configuration (see below)
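params.py is not shown in this document; a plausible shape for those schemas, assuming pydantic models that mirror the two tools described above, would be something like:

# Hypothetical sketch of src/memory_alpha/params.py -- field names are assumptions.
from pydantic import BaseModel, Field


class QueryContextParams(BaseModel):
    """Input for the query_context tool."""
    prompt: str = Field(description="Natural-language question about the codebase")
    max_chunks: int = Field(default=10, description="Upper bound on returned chunks")


class IndexUpdateParams(BaseModel):
    """Input for the index_update tool."""
    path: str = Field(description="Repository-relative file path")
    content: str = Field(description="New file content to (re)index")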
Configuration
Memory Alpha uses environment variables for configuration, which can be set directly or through a .env file.
- Copy the example configuration file:
  cp .env.example .env
- Edit the .env file with your settings:
  # Server settings
  QDRANT_URL=http://localhost:6333
  OLLAMA_URL=http://localhost:11434
  # Model settings
  EMBED_MODEL=mxbai-embed-large
  # Collection names
  COLLECTION_PREFIX=production_
- Install Ollama from ollama.ai and pull the embedding model:
  ollama pull mxbai-embed-large
  Alternatively, if you've installed the package, use the bundled command:
  memory-alpha-ensure-ollama
  Or run the script directly:
  ./ensure_ollama.py
  All settings have sensible defaults and will only be overridden by environment variables if provided.
- To view your current settings configuration, use one of the following.
  If you've installed the package:
  memory-alpha-debug-settings
  Or run the script directly:
  python debug_settings.py
  This will show all settings and their sources (default, environment variable, or .env file).
  You can also set environment variables directly when running the command:
  QDRANT_URL=http://another-server:6333 memory-alpha-debug-settings
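Under the hood, settings.py reads these variables with pydantic_settings. A minimal sketch of that pattern, assuming field names that mirror the variables above and defaults taken from the example .env, looks like this:

# Minimal pydantic_settings sketch matching the variables above (illustrative).
from pydantic_settings import BaseSettings, SettingsConfigDict


class Settings(BaseSettings):
    # Defaults here mirror the example .env above; environment variables and
    # .env entries override them when present.
    model_config = SettingsConfigDict(env_file=".env")

    qdrant_url: str = "http://localhost:6333"
    ollama_url: str = "http://localhost:11434"
    embed_model: str = "mxbai-embed-large"
    collection_prefix: str = "production_"


settings = Settings()
print(settings.model_dump())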
Development
Running tests
Memory Alpha tests require:
- A running Qdrant server (default: http://localhost:6333)
- A running Ollama server with mxbai-embed-large (default: http://localhost:11434)
To check if the required services are running:
./tests/check_services.py
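check_services.py is not reproduced here; in essence it only needs to confirm that both endpoints respond, along the lines of this sketch (URLs are the defaults listed above):

# Illustrative service check -- the real tests/check_services.py may differ.
import sys

import requests

SERVICES = {
    "Qdrant": "http://localhost:6333/collections",  # lists collections if Qdrant is up
    "Ollama": "http://localhost:11434/api/tags",    # lists installed models if Ollama is up
}

ok = True
for name, url in SERVICES.items():
    try:
        requests.get(url, timeout=5).raise_for_status()
        print(f"{name}: OK")
    except requests.RequestException as exc:
        print(f"{name}: NOT AVAILABLE ({exc})")
        ok = False

sys.exit(0 if ok else 1)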
To run all tests:
./run_tests.py
Or use pytest directly:
# Run all tests
pytest
# Run only integration tests
pytest -m integration
# Run a specific test file
pytest tests/test_memory.py
Linting and type checking
ruff check .
mypy .
Quick Start
# Clone the repository
git clone https://github.com/aleksclark/memory-alpha
cd memory-alpha
# Install dependencies
uv pip install git+https://github.com/jlowin/fastmcp.git
uv pip install -e .
Follow the documentation
See the README.md sections above for detailed installation and usage instructions.