
Perplexity MCP
A Python-based implementation of a Model Context Protocol (MCP) server for Perplexity.
Perplexity MCP Server
This is a Python implementation of a Model Context Protocol (MCP) server for Perplexity. The server is built with FastAPI and follows the MCP specification.
Features
- Implements the core MCP endpoints:
  - `/v1/models` - List available models registered in the local MCP server
  - `/v1/models/{model_id}` - Get information for a locally registered model
  - `/v1/models/{model_id}/chat` - Chat with a model (this endpoint actually calls the Perplexity API)
  - `/v1/models/{model_id}/complete` - Text completion (demo implementation)
- Additional utility endpoints:
  - `/perplexity-models` - List the known models available in the Perplexity API
  - `/api-key-test` - Test whether your Perplexity API key is properly configured
  - `/server-info` - Get information about the server
  - `/health` - Health check endpoint
- Ready for deployment
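The core endpoints above can be exercised with a minimal client. This is a sketch: the paths come from this README, but the request and response JSON shapes are assumptions (a typical chat-style payload), so adjust them to whatever main.py actually expects.

```python
import json
import urllib.request

BASE_URL = "http://localhost:5000"  # adjust if the server picked another port

def chat_url(model_id: str, base: str = BASE_URL) -> str:
    """Build the chat endpoint URL for a registered model (path from this README)."""
    return f"{base}/v1/models/{model_id}/chat"

def chat(model_id: str, message: str) -> dict:
    """POST one user message to the chat endpoint; the payload shape is assumed."""
    payload = json.dumps(
        {"messages": [{"role": "user", "content": message}]}
    ).encode("utf-8")
    req = urllib.request.Request(
        chat_url(model_id),
        data=payload,
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

# Example (requires a running server):
#   chat("sonar", "What is the Model Context Protocol?")
```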
Running the Server
Locally
- Create a `.env` file in the project root with your Perplexity API key:

  ```
  PERPLEXITY_API_KEY=your_actual_key_here
  ```

- Install dependencies:

  ```bash
  pip install -r requirements.txt
  ```

- Start the server:

  ```bash
  python main.py
  ```
The server will start on http://0.0.0.0:5000 (or the next available port).
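Once the server is up, the utility endpoints make a quick smoke test easy. A minimal sketch, assuming the default host and port; `/health` and `/api-key-test` are the paths listed in the Features section.

```python
import json
import urllib.request

BASE = "http://localhost:5000"  # the server may fall back to the next free port

def endpoint(path: str, base: str = BASE) -> str:
    """Join the base URL with an endpoint path such as /health."""
    return base + path

def get_json(path: str) -> dict:
    """GET a utility endpoint and decode the JSON body."""
    with urllib.request.urlopen(endpoint(path)) as resp:
        return json.load(resp)

# With the server running:
#   get_json("/health")        # liveness check
#   get_json("/api-key-test")  # confirms PERPLEXITY_API_KEY is configured
```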
Docker Deployment
- Build the Docker image:

  ```bash
  docker build -t perplexity-mcp .
  ```

- Run the container, passing your `.env` file:

  ```bash
  docker run --env-file .env -p 5003:5000 perplexity-mcp
  ```

- The API will be available at http://localhost:5003
- You can use any available port on your host (replace 5003 as needed)
Note:

- `.env` is included in `.gitignore` and `.dockerignore` to keep your API key secure.
- Never commit your `.env` file to GitHub.
Render Deployment
Render can automatically build and deploy this project from GitHub using the included Dockerfile.
- Push your code to GitHub.
- In the Render dashboard, create a new Web Service:
  - Select "Docker" as the environment.
  - Connect your GitHub repo and select the branch (e.g., `main`).
- Set your environment variable `PERPLEXITY_API_KEY` in the Render dashboard (Settings > Environment).
- Choose your desired port (default is 5000).
- Deploy! Render will build and run your Docker container.
Security: Do not rely on a `.env` file in the repo for Render. Always set secrets in the Render dashboard.
API Documentation
Once the server is running, you can access the API documentation at /docs (e.g., http://0.0.0.0:5000/docs).
Model Registry
The server includes a simple in-memory model registry. In a production environment, you'd want to replace this with a more sophisticated registry system.
Adding Your Own Models
To add your own models, update the MODEL_REGISTRY dictionary in main.py with your model configurations.
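For illustration, a hypothetical entry in that dictionary might look like the following; the actual field names used by main.py may differ, so treat this as a sketch of the shape, not the real registry.

```python
# Hypothetical shape of the in-memory registry; check main.py for the real fields.
MODEL_REGISTRY = {
    "sonar": {
        "id": "sonar",
        "name": "Sonar",
        "description": "Perplexity's flagship model with strong reasoning",
    },
    # Register your own model by adding another entry:
    "my-custom-model": {
        "id": "my-custom-model",
        "name": "My Custom Model",
        "description": "Example of a locally registered model",
    },
}
```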
Supported Models
The server supports the following Perplexity models:
- `sonar` - Perplexity's flagship model with strong reasoning
- `sonar-small` - Smaller, faster version of Sonar
- `sonar-medium` - Medium-sized version of Sonar
- `sonar-pro` - Pro version of Sonar with enhanced capabilities
- `sonar-deep-research` - Specialized for in-depth research tasks
- `sonar-reasoning-pro` - Advanced reasoning capabilities with enhanced logic
- `codellama-70b` - Specialized for code generation
- `mixtral-8x7b` - From Mistral AI, good for general tasks
- `mistral-7b` - Fast and efficient model from Mistral AI
To use a specific model, simply call /v1/models/{model_id}/chat with the model ID.
Supported Model Parameters
The chat endpoint supports the following parameters from the Perplexity API:
- `max_tokens`: Maximum number of tokens to generate
- `temperature`: Sampling temperature between 0 and 2
- `top_p`: Nucleus sampling parameter between 0 and 1
- `top_k`: Top-k sampling parameter
- `presence_penalty`: Presence penalty between -2 and 2
- `frequency_penalty`: Frequency penalty between -2 and 2
- `stop`: Stop sequences that cause the model to stop generating
- `repetition_penalty`: Repetition penalty for token generation
- `logprobs`: Whether to return log probabilities of the output tokens
- `stream`: Whether to stream the response
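As a sketch, the documented ranges can be enforced client-side before a request is sent. The parameter names and bounds below are the ones listed above; the validation helper itself is illustrative, not code from the server.

```python
def build_chat_payload(messages: list, **params) -> dict:
    """Validate the documented Perplexity parameters and build a chat payload.

    Bounds are the ones stated in this README; the helper is an
    illustrative sketch, not part of the server implementation.
    """
    bounds = {
        "temperature": (0, 2),
        "top_p": (0, 1),
        "presence_penalty": (-2, 2),
        "frequency_penalty": (-2, 2),
    }
    allowed = set(bounds) | {
        "max_tokens", "top_k", "stop",
        "repetition_penalty", "logprobs", "stream",
    }
    for name, value in params.items():
        if name not in allowed:
            raise ValueError(f"unsupported parameter: {name}")
        if name in bounds:
            lo, hi = bounds[name]
            if not lo <= value <= hi:
                raise ValueError(f"{name} must be between {lo} and {hi}")
    return {"messages": messages, **params}
```

For example, `build_chat_payload([{"role": "user", "content": "hi"}], temperature=0.7, max_tokens=256)` returns a dict ready to send to the chat endpoint, while an out-of-range `temperature=3.0` raises a `ValueError`.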
Quick Start
- Clone the repository:

  ```bash
  git clone https://github.com/howardjong/perplexity-mcp
  ```

- Install dependencies (this is a Python project, so use pip):

  ```bash
  cd perplexity-mcp
  pip install -r requirements.txt
  ```

- Follow the documentation: check the repository's README.md file for specific installation and usage instructions.