
ollama_strands
AWS Strands Agents with Ollama Examples
A collection of example scripts demonstrating how to build intelligent agents using the Strands framework with local Ollama language models. These scripts showcase different agent capabilities from basic calculations to real-time AWS documentation queries.
🚀 Overview
This repository contains three progressively advanced examples of Strands Agents:
- Calculator Agent - Basic mathematical computation capabilities
- Interactive Conversational Agent - General-purpose question answering
- AWS Documentation Agent - Real-time AWS documentation queries via MCP
All examples use local Ollama models for privacy and control, with comprehensive documentation and error handling.
📋 Prerequisites
System Requirements
- Python 3.8+
- Ollama installed and running locally
- uvx (for AWS documentation script)
Ollama Setup
- Install Ollama from ollama.ai
- Pull a compatible model:
ollama pull llama2  # or: ollama pull mistral
- Ensure Ollama is running:
ollama serve
Python Dependencies
All scripts use inline dependency management. Dependencies are automatically installed when running with uv:
uv run script_name.py
⚙️ Configuration
Create a .env file in the repository root:
STRANDS_OLLAMA_HOST=localhost
STRANDS_OLLAMA_MODEL=llama2
Environment Variables:
- STRANDS_OLLAMA_HOST: Hostname where Ollama is running (default: localhost)
- STRANDS_OLLAMA_MODEL: Name of the Ollama model to use
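As a sketch of how these variables might be consumed, a small helper can resolve the Ollama base URL with the documented localhost default (the function name `build_ollama_host` is illustrative, not part of the scripts; 11434 is Ollama's default port):

```python
import os
from typing import Optional


def build_ollama_host(hostname: Optional[str] = None) -> str:
    """Resolve the Ollama base URL, defaulting to localhost on port 11434."""
    host = hostname or os.getenv("STRANDS_OLLAMA_HOST") or "localhost"
    return f"http://{host}:11434"
```

For example, `build_ollama_host("192.168.1.10")` returns `http://192.168.1.10:11434`, while calling it with no argument falls back to the environment variable and then to localhost.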
🧮 Script 1: Calculator Agent
File: calculator_agent.py
A Strands Agent equipped with calculator tools for mathematical computations.
Features
- Integration with calculator tool
- Mathematical problem solving
- Step-by-step calculation explanations
Usage
uv run calculator_agent.py
Example Query
The script automatically asks: "What is the square root of 1764?"
Expected output: Detailed explanation showing the calculation process and result (42).
Key Components
- Agent: Core Strands agent with calculator tool
- OllamaModel: Local language model integration
- calculator: Pre-built mathematical computation tool
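A minimal sketch of how these components might fit together, assuming the strands, strands-tools, and python-dotenv packages; exact import paths can vary between SDK versions, so check calculator_agent.py itself:

```python
import os

from dotenv import load_dotenv
from strands import Agent
from strands.models.ollama import OllamaModel
from strands_tools import calculator

# Read STRANDS_OLLAMA_HOST / STRANDS_OLLAMA_MODEL from .env
load_dotenv()
ollama_model = OllamaModel(
    host=f"http://{os.getenv('STRANDS_OLLAMA_HOST', 'localhost')}:11434",
    model_id=os.getenv("STRANDS_OLLAMA_MODEL", "llama2"),
)

# Equip the agent with the pre-built calculator tool and run the example query
agent = Agent(tools=[calculator], model=ollama_model)
agent("What is the square root of 1764?")
```

Running this requires a local Ollama instance serving the configured model.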
💬 Script 2: Interactive Conversational Agent
File: interactive_agent.py
A general-purpose conversational agent for open-ended discussions and questions.
Features
- Interactive command-line interface
- General knowledge question answering
- No specialized tools - pure conversation
- Default topic about Agentic AI
Usage
uv run interactive_agent.py
Example Interaction
Enter a topic to query the LLM about: What is machine learning?
Or press Enter for the default topic: "Tell me about Agentic AI"
Key Components
- Agent: Basic conversational agent without tools
- OllamaModel: Local language model for responses
- Interactive user input with sensible defaults
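The input-with-default pattern can be sketched as a small pure helper (the function name `topic_or_default` is illustrative; the script may inline this logic):

```python
DEFAULT_TOPIC = "Tell me about Agentic AI"


def topic_or_default(raw_input: str, default: str = DEFAULT_TOPIC) -> str:
    """Fall back to the default topic when the user just presses Enter."""
    topic = raw_input.strip()
    return topic if topic else default
```

A typical call site would be `topic_or_default(input("Enter a topic to query the LLM about: "))`, which hands either the user's topic or the Agentic AI default to the agent.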
📚 Script 3: AWS Documentation Agent
File: aws_mcp_agent.py
An advanced agent that provides real-time access to official AWS documentation using Model Context Protocol (MCP).
Features
- Real-time AWS documentation queries
- Official AWS Labs MCP server integration
- Markdown-formatted responses
- Up-to-date service information
Prerequisites
- uvx installed:
curl -LsSf https://astral.sh/uv/install.sh | sh
- Internet connection for the MCP server
Usage
uv run aws_mcp_agent.py
Example Queries
Ask a question about aws documentation: How do I configure S3 buckets for static websites?
Or press Enter for default: "Tell me about Amazon Bedrock and how to use it with Python, provide the output in Markdown format"
Key Components
- Agent: Strands agent with AWS documentation tools
- MCPClient: Model Context Protocol client
- stdio_client: Communication with the AWS documentation server
- Real-time documentation access via awslabs.aws-documentation-mcp-server
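A hedged sketch of the MCP wiring, assuming the mcp and strands packages; the import paths follow the common Strands MCP integration pattern but may differ from aws_mcp_agent.py:

```python
import os

from dotenv import load_dotenv
from mcp import StdioServerParameters, stdio_client
from strands import Agent
from strands.models.ollama import OllamaModel
from strands.tools.mcp import MCPClient

load_dotenv()
ollama_model = OllamaModel(
    host=f"http://{os.getenv('STRANDS_OLLAMA_HOST', 'localhost')}:11434",
    model_id=os.getenv("STRANDS_OLLAMA_MODEL", "llama2"),
)

# Launch the AWS documentation MCP server as a subprocess over stdio via uvx
mcp_client = MCPClient(lambda: stdio_client(
    StdioServerParameters(
        command="uvx",
        args=["awslabs.aws-documentation-mcp-server@latest"],
    )
))

# The client is a context manager: its tools are only valid inside the with block
with mcp_client:
    tools = mcp_client.list_tools_sync()
    agent = Agent(tools=tools, model=ollama_model)
    agent("Tell me about Amazon Bedrock and how to use it with Python")
```

The context-manager pattern matters here: the MCP server process is started on entry and torn down on exit, so the agent must be created and queried inside the with block.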
🔧 Architecture
Common Components
All scripts share these core architectural elements:
# Environment setup
load_dotenv()
ollama_host = f"http://{os.getenv('STRANDS_OLLAMA_HOST')}:11434"

# Model initialization
ollama_model = OllamaModel(
    host=ollama_host,
    model_id=os.getenv('STRANDS_OLLAMA_MODEL')
)

# Agent creation
agent = Agent(tools=[...], model=ollama_model)
Progression of Complexity
- Calculator Agent: Basic tool integration
- Interactive Agent: User interaction patterns
- AWS MCP Agent: External service integration via MCP
🛠️ Development
Running Scripts
Each script can be executed directly:
# Using uv (recommended)
uv run script_name.py
# Or with traditional Python (after installing dependencies)
python script_name.py
Error Handling
All scripts include comprehensive error handling for:
- Missing environment variables
- Ollama connection issues
- MCP server connectivity (AWS script)
- Invalid user inputs
Code Structure
Each script follows a consistent pattern:
- Environment setup and validation
- Model/client initialization
- User interaction (where applicable)
- Query processing and response handling
- Graceful error management
📖 Learning Path
Beginner: Calculator Agent
Start here to understand:
- Basic Strands Agent setup
- Tool integration concepts
- Ollama model configuration
Intermediate: Interactive Agent
Builds upon basics with:
- User interaction patterns
- Input validation and defaults
- Conversational AI without tools
Advanced: AWS MCP Agent
Demonstrates complex integrations:
- Model Context Protocol usage
- External service integration
- Real-time data access
- Context manager patterns
🔍 Troubleshooting
Common Issues
Ollama Connection Errors:
# Check if Ollama is running
curl http://localhost:11434/api/version
# Start Ollama if needed
ollama serve
Missing Model Errors:
# List available models
ollama list
# Pull required model
ollama pull llama2
AWS MCP Server Issues (AWS script only):
# Ensure uvx is installed
uvx --version
# Test MCP server availability
uvx awslabs.aws-documentation-mcp-server@latest --help
Environment Variable Issues:
- Verify the .env file exists and contains the required variables
- Check file permissions and syntax
- Ensure no extra spaces or quotes around values
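These checks can be automated with a small helper that reports missing or blank variables before the scripts touch Ollama (the function name `missing_env_vars` is illustrative and not part of the repository):

```python
REQUIRED_VARS = ("STRANDS_OLLAMA_HOST", "STRANDS_OLLAMA_MODEL")


def missing_env_vars(env: dict) -> list:
    """Return the names of required variables that are absent or blank."""
    return [name for name in REQUIRED_VARS if not str(env.get(name, "")).strip()]
```

Calling it as `missing_env_vars(dict(os.environ))` after `load_dotenv()` gives a list you can print as a clear error message instead of failing later with a connection error.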
Debug Mode
Add debugging to any script by modifying the agent call:
response = agent(query, debug=True) # If supported
Or add verbose logging:
import logging
logging.basicConfig(level=logging.DEBUG)
📝 Contributing
- Fork the repository
- Create a feature branch
- Add comprehensive docstrings to new functions
- Include error handling and examples
- Test with multiple Ollama models
- Submit a pull request
📄 License
This project is licensed under the MIT License - see the LICENSE file for details.
🙏 Acknowledgments
- Strands for the agent framework
- Ollama for local language model hosting
- AWS Labs for the MCP documentation server
- Astral for uvx and uv tools
🔗 Related Resources
- Strands Documentation
- Ollama Documentation
- Model Context Protocol
- AWS Documentation MCP Server
Quick Start
- Clone the repository:
git clone https://github.com/rapidarchitect/ollama_strands
cd ollama_strands
- Run any script; uv installs the inline dependencies automatically:
uv run calculator_agent.py