
📊 MCP Experiments
This is a lightweight repo for experimenting with tool-augmented LLM interactions using Model Context Protocol (MCP)–style architecture patterns.
Currently, the code implements a stateless command execution interface where a language model can request shell commands to be run inside a persistent Docker container. Each command specifies the desired working directory, and the execution is fully isolated (no shell memory, environment persistence, or session tracking).
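Conceptually, each stateless call is a single `docker exec` invocation. A minimal sketch (the container name `shared-box` and the returned fields follow this README's description, but the helper itself is illustrative, not the repo's actual code):

```python
import subprocess

def build_argv(command: str, workdir: str, container: str = "shared-box") -> list[str]:
    """Assemble the `docker exec` argv for one isolated command."""
    return ["docker", "exec", "--workdir", workdir, container, "bash", "-c", command]

def run_in_container(command: str, workdir: str) -> dict:
    """Run one command in the shared container; no shell state survives the call."""
    result = subprocess.run(build_argv(command, workdir), capture_output=True, text=True)
    return {"stdout": result.stdout, "stderr": result.stderr, "exit_code": result.returncode}
```

Because each call passes `--workdir` explicitly, there is nothing like `cd` persistence between calls.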
Project Structure
```
mcp-experiments/
├── mcp-stdio/   # Tool definitions and REPL-like loop with OpenAI client
├── notes/       # Idea sketches and miscellaneous logs
├── .vscode/     # Optional editor config
└── Makefile     # Common container lifecycle commands
```
Usage
1. Build and start the container
```
make build   # Build the container image (llm-lite)
make run     # Run it in the background as 'shared-box'
```
2. Launch the REPL loop
```
make repl
```
This starts a local loop using OpenAI's `o4-mini` model. The model is prompted with tool definitions that let it invoke shell commands through a `run_in_container(command, workdir)` interface. Each tool call is executed via `docker exec`.
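The dispatch side of that loop can be sketched roughly as follows (illustrative names, not the repo's actual code; the chat-completion call to `o4-mini` is elided, and `executor` stands in for the `docker exec` wrapper):

```python
import json

# Tool schema advertised to the model (mirrors the schema documented below)
TOOLS = [{
    "type": "function",
    "function": {
        "name": "run_in_container",
        "description": "Execute a bash command in the container at a specific directory.",
        "parameters": {
            "type": "object",
            "properties": {
                "command": {"type": "string"},
                "workdir": {"type": "string"},
            },
            "required": ["command", "workdir"],
        },
    },
}]

def handle_tool_call(name: str, arguments_json: str, executor) -> str:
    """Decode one model tool call and return a JSON string for the tool message."""
    if name != "run_in_container":
        return json.dumps({"error": f"unknown tool: {name}"})
    args = json.loads(arguments_json)
    return json.dumps(executor(args["command"], args["workdir"]))
```

In the actual loop, each returned string would be appended to the conversation as a `role: "tool"` message before the next model call.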
3. Attach manually (optional)
```
make exec
```
You can manually inspect or test the container by attaching a live shell session.
⚙️ Tool Schema: run_in_container
```json
{
  "name": "run_in_container",
  "description": "Execute a bash command in the container at a specific directory.",
  "parameters": {
    "command": "string",  // e.g., 'ls -la', 'python script.py'
    "workdir": "string"   // e.g., '/home/sandbox', '/opt/app'
  }
}
```
The tool returns:
- `stdout`: Standard output of the command
- `stderr`: Standard error output
- `exit_code`: Process exit code (0 for success)
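The same three fields can be reproduced locally with `subprocess` (no Docker required), which is a quick sanity check on the semantics:

```python
import subprocess

# Emulate a tool result locally: stdout, stderr, and a nonzero exit code
r = subprocess.run(["bash", "-c", "echo out; echo err >&2; exit 3"],
                   capture_output=True, text=True)
result = {"stdout": r.stdout, "stderr": r.stderr, "exit_code": r.returncode}
# result["exit_code"] is 3; "out\n" went to stdout, "err\n" to stderr
```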
💡 Notes
- This is not a persistent shell or stateful REPL. Each tool call is independent and must include the full desired context.
- Intended as groundwork for building richer agent workflows with long-term memory or session awareness.
- Currently uses OpenAI's `o4-mini` model with tool calling enabled.
- MCP-specific protocols are not implemented yet, but the architecture is designed to support them.
🛠️ Requirements
- Python 3.10+
- Docker
- OpenAI Python client (`pip install openai`)
- A valid `OPENAI_API_KEY` in your environment
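The client install and key setup amount to (shell sketch; the key value is a placeholder for your own key):

```shell
pip install openai
export OPENAI_API_KEY="sk-..."   # placeholder; substitute your real key
```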
Quick Start
```
git clone https://github.com/phyde19/mcp-experiments
cd mcp-experiments
pip install openai
```
Then follow the Usage steps above to build the container and launch the REPL.