jezweb / n8n — MCP Server (public)

n8n Workflows Agents Templates and Ideas

Repository Info

- Stars: 2
- Forks: 1
- Watchers: 2
- Issues: 0
- Language: TypeScript
- License: -

About This Server

n8n Workflows Agents Templates and Ideas

Model Context Protocol (MCP) - This server can be integrated with AI applications to provide additional context and capabilities, enabling enhanced AI interactions and functionality.

Documentation

🤖 n8n AI Agent Microservices: Building Modular AI with MCP

Welcome! This repository documents the journey and patterns for building modular AI agents as callable microservices using n8n, exposed via the Model Context Protocol (MCP). The goal is to simplify client-side development, optimize token usage, and create reusable, maintainable AI capabilities.

✨ Overview

In modern AI application development, especially with agentic systems, managing complexity, tool orchestration, and LLM context can become challenging and costly. This project explores a powerful pattern:

  • n8n as the Agent Backend: Leverage n8n's visual workflow builder, robust credential management, and extensive node ecosystem to create the core logic of your AI agents.
  • MCP for Inter-Agent Communication: Use n8n's MCP Server Trigger to expose an entire AI agent (or a specific capability) as a single, callable tool over an SSE (Server-Sent Events) endpoint.
  • Simplified Client Interaction: External clients (e.g., coding environments like Smithery's Roo code, VS Code extensions, other AI agents, or traditional applications) can interact with these n8n-hosted agents by sending simple, natural language queries.

This approach shifts the heavy lifting of tool selection, multi-step reasoning, and API interactions to the n8n server, making clients leaner and more focused.
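To make the "leaner client" idea concrete, here is a minimal Python sketch of the client side. The endpoint URL is a placeholder, and the `{"input": {"query": ...}}` payload shape follows the case study later in this document; the SSE parsing covers only the simple `data:` line case.

```python
import json
from typing import Optional

# Placeholder -- replace with your n8n MCP Server Trigger's SSE endpoint URL.
N8N_MCP_URL = "https://your-n8n.example.com/mcp/abc123/sse"

def build_query_payload(query: str) -> str:
    """Build the simple natural-language request body the n8n agent expects."""
    return json.dumps({"input": {"query": query}})

def parse_sse_data(line: str) -> Optional[dict]:
    """Extract the JSON payload from a single SSE 'data:' line, if any."""
    if line.startswith("data:"):
        return json.loads(line[len("data:"):].strip())
    return None  # comments, keep-alives, and blank lines carry no payload
```

Note that the client never sees system prompts, tool schemas, or intermediate reasoning steps; those all stay on the n8n side.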

🚀 Core Concepts

  • Main Workflow (MCP Server):
    • Uses the MCP Server Trigger node to create a public SSE endpoint.
    • Connects a Tool Workflow node to this trigger. This node defines the single tool the MCP server offers to clients (e.g., call_my_ai_agent).
  • Sub-Workflow (AI Agent Logic):
    • The Tool Workflow node in the main workflow calls this sub-workflow.
    • This is where your actual AI agent resides, typically including:
      • An Execute Workflow Trigger to receive inputs (like a user's query).
      • An AI Agent node (e.g., Langchain Agent in n8n) configured with an LLM (like Google Gemini, Claude, or OpenAI models).
      • A clear System Prompt guiding the agent's behavior and tool usage.
      • One or more Tools connected to the agent (these can be n8n's built-in nodes, HTTP requests, or MCP Client nodes calling other services like Context7).
      • Memory (e.g., Buffer Window Memory, often scoped using {{ $execution.id }} for stateless per-request context).
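The per-request memory scoping in the last bullet can be illustrated with a small Python sketch. The class below is not n8n code; it just mirrors what keying a buffer-window memory by `{{ $execution.id }}` achieves: each execution gets an isolated, size-limited conversation buffer.

```python
# Sketch of per-execution memory scoping, mirroring n8n's
# {{ $execution.id }} pattern: every workflow execution gets its own
# isolated buffer, so concurrent requests never share state.

class ScopedBufferMemory:
    def __init__(self, window_size: int = 5):
        self.window_size = window_size
        self._buffers: dict = {}

    def append(self, execution_id: str, message: str) -> None:
        buf = self._buffers.setdefault(execution_id, [])
        buf.append(message)
        # Keep only the most recent `window_size` messages (buffer window).
        del buf[:-self.window_size]

    def get(self, execution_id: str) -> list:
        return self._buffers.get(execution_id, [])
```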

💡 Key Benefits

  • Reduced Client-Side Complexity: Clients send high-level requests; n8n handles the intricate orchestration.
  • Optimized Token Usage: Detailed system prompts, tool descriptions, and intermediate conversational steps are managed within n8n, significantly reducing the token load on the client-side LLM (if the client is also an LLM).
  • Centralized Logic & Maintenance: Update your agent's LLM, tools, or prompts in one place (n8n) without needing to redeploy or modify clients.
  • Modularity & Reusability: Build a suite of specialized AI "microservice" agents in n8n, each callable as a simple tool. This is foundational for creating sophisticated, multi-agent systems.
  • Cost-Effectiveness: Use efficient LLMs (e.g., Gemini Flash, Claude Haiku) for n8n agent orchestration, potentially reducing overall operational costs.
  • Leverage n8n's Power: Utilize n8n's visual builder, error handling, scheduling, and vast integration library.

📚 Case Study: Context7 Documentation Agent

A practical example of this pattern is an n8n agent that fetches software library documentation using Context7:

  1. Client Request: {"input": {"query": "What is flexbox in Tailwind CSS?"}} to the n8n MCP SSE endpoint.
  2. Main n8n Workflow:
    • Context7 MCP Server Trigger receives the request.
    • call_context7_ai_agent (Tool Workflow node) is invoked.
  3. AI Agent Sub-Workflow:
    • Context7 Workflow Start receives the query.
    • Context7 AI Agent (powered by Gemini Flash):
      • System Prompt: Instructs it to first resolve library names, then get docs.
      • Tools:
        • context7-resolve-library-id (MCP Client): Converts "Tailwind CSS" to its Context7 ID.
        • context7-get-library-docs (MCP Client): Fetches docs for "flexbox" using the resolved ID.
      • Memory: Simple Memory using {{ $execution.id }}.
    • The agent processes the query, calls the tools sequentially, and formulates an answer.
  4. Response: The answer is streamed back to the client via SSE.
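The sequential tool flow in steps 2–4 can be sketched with stub tools. The functions below are stand-ins for the two MCP Client tools; the library ID and return strings are made up for illustration, not real Context7 responses.

```python
# Stub versions of the two Context7 tools, showing the order in which
# the agent's system prompt makes it call them. Values are illustrative.

def resolve_library_id(library_name: str) -> str:
    # Stands in for the context7-resolve-library-id MCP Client tool.
    known = {"Tailwind CSS": "/example/tailwindcss"}  # made-up ID
    return known[library_name]

def get_library_docs(library_id: str, topic: str) -> str:
    # Stands in for the context7-get-library-docs MCP Client tool.
    return f"Docs for '{topic}' in {library_id}"

def answer_query(library_name: str, topic: str) -> str:
    # The system prompt enforces this order:
    # first resolve the library ID, then fetch the docs with it.
    library_id = resolve_library_id(library_name)
    return get_library_docs(library_id, topic)
```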

🔗 Link to n8n Workflow JSON for Context7 Agent

📺 Link to YouTube Video Explaining This Setup (Using your actual video link)

🛠️ Building Your Own n8n Agent Microservice: A Plan

To create your own agent using this pattern, consider the following steps:

  1. Define Agent's Purpose: Clearly state what the agent will do and what problem it solves.
  2. Identify Inputs & Outputs: What information does it need from the client? What will it return?
  3. Select LLM: Choose an LLM suitable for orchestration and the complexity of the task.
  4. Design Internal Tools:
    • List the specific actions the agent needs to perform (e.g., API calls, database lookups, calculations).
    • For each action, create a corresponding "tool" in the n8n sub-workflow (using HTTP Request, MCP Client, or other n8n nodes).
    • Define clear descriptions and parameter schemas for each tool.
  5. Craft the System Prompt: Write a detailed system prompt for the AI Agent node, instructing it on its role, how to use its tools, and the desired output format.
  6. Configure Memory: Decide if conversational memory is needed and how it should be scoped.
  7. Structure n8n Workflows:
    • Sub-Workflow (Agent Core): Execute Workflow Trigger -> AI Agent (with LLM, Tools, Memory) -> (Handle Output).
    • Main Workflow (MCP Server): MCP Server Trigger -> Tool Workflow (points to sub-workflow, defines client-facing tool name, description, and input schema).
  8. Develop Client Configuration: Define how clients will discover and call your n8n MCP endpoint (see example below).
  9. Test Thoroughly: Test the sub-workflow independently, then the entire flow via the MCP endpoint using a client like Postman, a Python script, or Roo code.
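As an illustration of step 4 above, a client-facing tool description and parameter schema might look like the sketch below. The tool name and fields are hypothetical; in n8n you supply the same information through the Tool Workflow node's settings, and JSON Schema is the conventional format for describing tool parameters to an LLM.

```python
import json

# Hypothetical definition for the single client-facing tool exposed by
# the MCP Server Trigger. Field names follow common MCP tool conventions.
TOOL_DEFINITION = {
    "name": "call_my_ai_agent",
    "description": (
        "Answers natural-language questions by orchestrating internal "
        "tools on the n8n server. Pass the user's question verbatim."
    ),
    "inputSchema": {
        "type": "object",
        "properties": {
            "query": {
                "type": "string",
                "description": "The user's natural-language question.",
            }
        },
        "required": ["query"],
    },
}

def validate_input(args: dict) -> bool:
    """Minimal check that a call supplies every required parameter."""
    required = TOOL_DEFINITION["inputSchema"]["required"]
    return all(key in args for key in required)
```

A precise `description` matters as much as the schema: it is the only guidance the calling LLM has for deciding when and how to invoke the tool.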

📄 See Also: PLANNING_AN_N8N_AGENT.md (You'll create this file next)

📝 Key Learnings & Best Practices

  • Clarity is Crucial: System prompts and tool descriptions for the AI agent must be precise and unambiguous.
  • Explicit Parameter Passing: Data doesn't automatically flow between the main workflow and sub-workflow; it must be explicitly mapped in the Tool Workflow node.
  • JSON for Tool Parameters: When MCP Client tools (or other tools needing structured input) are driven by an AI Agent and set to "Defined automatically by the model", ensure the parameter description prompts the AI to generate the entire JSON object, including examples.
  • Session IDs for Memory: Use {{ $execution.id }} in the Agent and Memory nodes for reliable, per-request context isolation.
  • JSON Validation: Always validate JSON manually or with tools when facing "invalid JSON" errors, especially when copy-pasting or dealing with dynamic content. Look out for stray characters or incorrect escaping.
  • Security: While long, random URLs offer basic obscurity, consider implementing proper authentication (like Bearer Tokens configured in the n8n MCP Trigger) for production environments.
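For the security point above, a bearer-token request to the endpoint might look like the following sketch. The URL and token are placeholders; the `Authorization: Bearer` header format is standard HTTP, and the network call itself is shown only as a comment.

```python
# Sketch of an authenticated call to an n8n MCP endpoint protected by
# bearer-token auth. URL and token below are placeholders.

def build_headers(token: str) -> dict:
    return {
        "Authorization": f"Bearer {token}",
        "Accept": "text/event-stream",  # the MCP Server Trigger streams via SSE
    }

# Usage (requires the `requests` package; not executed here):
# import requests
# resp = requests.post(
#     "https://your-n8n.example.com/mcp/abc123/sse",
#     headers=build_headers("YOUR_SECRET_TOKEN"),
#     json={"input": {"query": "What is flexbox in Tailwind CSS?"}},
#     stream=True,
# )
```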

💡 Using Templates for AI-Assisted Workflow Generation

To accelerate the creation of new n8n agent microservices, this repository will include templates that can be used as starting points or as prompts for AI assistants (like this one!) to help generate:

  • n8n node configurations.
  • System prompts.
  • Tool descriptions.
  • And more!

See the /templates directory for examples. (You'll create these next)

🤝 Contributing & Feedback

This is an evolving exploration. Feedback, contributions, and examples of your own n8n agent microservices are highly welcome! Please open an issue or a pull request.

📜 License

This project and its documentation are licensed under the MIT License.

Quick Start

1. Clone the repository

   ```bash
   git clone https://github.com/jezweb/n8n
   ```

2. Install dependencies

   ```bash
   cd n8n
   npm install
   ```

3. Follow the documentation

   Check the repository's README.md file for specific installation and usage instructions.

Repository Details

- Owner: jezweb
- Repo: n8n
- Language: TypeScript
- License: -
- Last fetched: 8/10/2025
