
Repo for Model Context Protocol (MCP) related notes and experimentation code!

Repository Info

Stars: 0 · Forks: 0 · Watchers: 0 · Issues: 0
Language: Python · License: -


Documentation

📊 MCP Experiments

This is a lightweight repo for experimenting with tool-augmented LLM interactions using Model Context Protocol (MCP)–style architecture patterns.

Currently, the code implements a stateless command execution interface where a language model can request shell commands to be run inside a persistent Docker container. Each command specifies the desired working directory, and the execution is fully isolated (no shell memory, environment persistence, or session tracking).

Project Structure

mcp-experiments/
├── mcp-stdio/         # Tool definitions and REPL-like loop with OpenAI client
├── notes/             # Idea sketches and miscellaneous logs
├── .vscode/           # Optional editor config
├── Makefile           # Common container lifecycle commands

Usage

1. Build and start the container

make build   # Build the container image (llm-lite)
make run     # Run it in the background as 'shared-box'

2. Launch the REPL loop

make repl

This starts a local loop using the o4-mini OpenAI model. The model will be prompted with tool definitions that allow it to invoke shell commands using a run_in_container(command, workdir) interface. Each tool call is executed using docker exec.
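The dispatch step of such a loop can be sketched as follows. This is an illustrative reconstruction, not the actual code in mcp-stdio/: the `TOOLS` schema mirrors the tool definition described in this README, and `repl_turn` assumes the OpenAI Chat Completions tool-calling API. The `run_tool` callback stands in for the docker-exec wrapper.

```python
import json

# Hedged sketch of one turn of the REPL loop. The real loop in mcp-stdio/
# may differ; names here (TOOLS, repl_turn, run_tool) are illustrative.
TOOLS = [
    {
        "type": "function",
        "function": {
            "name": "run_in_container",
            "description": "Execute a bash command in the container at a specific directory.",
            "parameters": {
                "type": "object",
                "properties": {
                    "command": {"type": "string"},
                    "workdir": {"type": "string"},
                },
                "required": ["command", "workdir"],
            },
        },
    }
]

def repl_turn(client, messages, run_tool):
    """Send messages, execute any requested tool calls, append their results."""
    # `client` is an openai.OpenAI instance, created by the caller so this
    # module stays importable without the openai package installed.
    response = client.chat.completions.create(
        model="o4-mini", messages=messages, tools=TOOLS
    )
    msg = response.choices[0].message
    messages.append(msg)
    for call in msg.tool_calls or []:
        args = json.loads(call.function.arguments)
        result = run_tool(**args)  # e.g. run_in_container(command=..., workdir=...)
        messages.append(
            {"role": "tool", "tool_call_id": call.id, "content": json.dumps(result)}
        )
    return messages
```

Because each tool result is serialized back into the message list, the model sees the full stdout/stderr of every command it requested on the next turn.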

3. Attach manually (optional)

make exec

This attaches a live shell session, letting you inspect or test the container by hand.


⚙️ Tool Schema: run_in_container

{
  "name": "run_in_container",
  "description": "Execute a bash command in the container at a specific directory.",
  "parameters": {
    "command": "string", // e.g., 'ls -la', 'python script.py'
    "workdir": "string"  // e.g., '/home/sandbox', '/opt/app'
  }
}

The tool returns:

  • stdout: Standard output of the command
  • stderr: Standard error output
  • exit_code: Process exit code (0 for success)
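A minimal handler matching this schema might look like the sketch below, assuming the container started by `make run` is named `shared-box` as described above. The `to_result` helper name is made up for illustration.

```python
import subprocess

def to_result(proc: subprocess.CompletedProcess) -> dict:
    """Shape a finished process into the tool's return payload."""
    return {"stdout": proc.stdout, "stderr": proc.stderr, "exit_code": proc.returncode}

def run_in_container(command: str, workdir: str, container: str = "shared-box") -> dict:
    """Run one stateless bash command via `docker exec`; nothing persists between calls."""
    proc = subprocess.run(
        ["docker", "exec", "--workdir", workdir, container, "bash", "-c", command],
        capture_output=True,
        text=True,
    )
    return to_result(proc)
```

Note that each call spawns a fresh `bash -c`, which is what makes the interface stateless: a `cd` or exported variable in one call has no effect on the next.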

💡 Notes

  • This is not a persistent shell or stateful REPL. Each tool call is independent and must include the full desired context.
  • Intended as groundwork for building richer agent workflows with long-term memory or session awareness.
  • Currently uses OpenAI's o4-mini model with tool calling enabled.
  • MCP-specific protocols are not implemented yet, but the architecture is designed to support them.

🛠️ Requirements

  • Python 3.10+
  • Docker
  • OpenAI Python client (pip install openai)
  • A valid OPENAI_API_KEY in your environment
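The requirements above can be verified with a small preflight check like this sketch (the `check_prereqs` function is not part of the repo):

```python
import os
import shutil
import sys

def check_prereqs() -> list:
    """Return a human-readable message for each missing prerequisite."""
    missing = []
    if sys.version_info < (3, 10):
        missing.append("Python 3.10+ is required")
    if shutil.which("docker") is None:
        missing.append("docker CLI not found on PATH")
    try:
        import openai  # noqa: F401
    except ImportError:
        missing.append("openai package not installed (pip install openai)")
    if not os.environ.get("OPENAI_API_KEY"):
        missing.append("OPENAI_API_KEY is not set")
    return missing

if __name__ == "__main__":
    for problem in check_prereqs():
        print("missing:", problem)
```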

Quick Start

1. Clone the repository

git clone https://github.com/phyde19/mcp-experiments

2. Install dependencies

cd mcp-experiments
pip install openai

3. Follow the documentation

Check the repository's README.md file for specific installation and usage instructions.

Repository Details

Owner: phyde19
Repo: mcp-experiments
Language: Python
License: -
Last fetched: 8/10/2025
