
langdb samples

Comprehensive examples and framework integrations for LangDB, demonstrating its capabilities across a variety of scenarios.

Repository Info

  • Stars: 3
  • Forks: 1
  • Watchers: 3
  • Issues: 1
  • Language: TypeScript
  • License: -

About This Server

Comprehensive examples and framework integrations for LangDB, demonstrating its capabilities across a variety of scenarios.

Model Context Protocol (MCP) - This server can be integrated with AI applications to provide additional context and capabilities, enabling enhanced AI interactions and functionality.

Documentation

LangDB Samples

This repository provides comprehensive examples and integrations for LangDB, demonstrating its capabilities across various frameworks and use cases.

Getting Started

Follow these steps to set up and run the examples in this repository.

1. Set Up Your LangDB Credentials

To use these examples, you need credentials from LangDB. This involves signing up, creating a project, and generating an API key.

  • Sign Up: If you don't have an account, sign up on the LangDB platform.
  • Create a Project: Once logged in, create a new project. Each project has a unique Project ID.
  • Generate an API Key: Navigate to the settings and generate a new API key. This will be your API Key.

2. Configure Your Environment

Most examples in this repository use a .env file to manage environment variables. You'll need to set the following:

# Create a .env file in the root of the example you want to run
# examples/crewai/report-writing-agent/.env

LANGDB_API_BASE_URL="https://api.us-east-1.langdb.ai"
LANGDB_API_KEY="xxxxxxxxxxxxxxxxxxxxxxxx"
LANGDB_PROJECT_ID="xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
  • LANGDB_API_BASE_URL: The base endpoint for the LangDB AI Gateway.
  • LANGDB_API_KEY: The API key you generated in the previous step.
  • LANGDB_PROJECT_ID: The Project ID from your LangDB project.
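Before running an example, it can help to confirm all three variables are actually set. The helper below is purely illustrative (it is not part of the samples); it fails fast with a clear message when a variable is missing:

```python
import os

# The three variables every example in this repository expects.
REQUIRED_VARS = ("LANGDB_API_BASE_URL", "LANGDB_API_KEY", "LANGDB_PROJECT_ID")

def load_langdb_config() -> dict:
    """Read the LangDB settings from the environment, raising if any are missing."""
    missing = [name for name in REQUIRED_VARS if not os.environ.get(name)]
    if missing:
        raise RuntimeError(f"Missing environment variables: {', '.join(missing)}")
    return {name: os.environ[name] for name in REQUIRED_VARS}
```

Calling `load_langdb_config()` after `load_dotenv()` surfaces configuration gaps immediately, instead of as opaque authentication errors later in a run.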

3. Review Example-Specific Requirements

Many examples, especially those in the "Featured Samples" section, require additional setup in the LangDB UI. Always check the README.md file inside the specific example's directory for detailed instructions.

Common requirements include:

  • Configuring Models: Some examples use Virtual Models to attach tools or apply specific routing logic. To configure them, navigate to your project on the LangDB platform and select the Models tab. There, you can either select from a list of pre-configured models or create a new Virtual Model by clicking the "New Model" button.
  • Configuring MCPs: Advanced examples may use MCPs to connect to external services. To set one up, navigate to your project and select the MCP Servers tab. There, you can select from a list of managed MCPs to compose a new Virtual MCP Server for your application.
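Because the examples call Virtual Models through an OpenAI-compatible client, a request to one looks like any other chat completion, just with the Virtual Model's name in the `model` field. The sketch below (stdlib only, message content made up) shows the shape of such a request body; the model name is the one used in the travel agent example:

```python
import json

def build_chat_request(model: str, user_message: str) -> str:
    """Assemble an OpenAI-style chat completion payload as a JSON string."""
    payload = {
        "model": model,  # e.g. a Virtual Model such as "langdb/travel-recommender"
        "messages": [{"role": "user", "content": user_message}],
    }
    return json.dumps(payload)

request_body = build_chat_request(
    "langdb/travel-recommender", "Beach ideas for December?"
)
```

Any tools or routing logic attached to the Virtual Model in the LangDB UI are applied server-side, so the client payload stays unchanged when you reconfigure the model.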

4. Install Dependencies and Run the Example

Once you've completed the environment and any example-specific setup, you can install the dependencies and run the application.

# Navigate to an example directory
cd examples/openai/travel-agent

# Install required packages
pip install -r requirements.txt

# Run the application
python app.py

Featured Samples

Here are some of our most recent and powerful examples:

CrewAI: Report Writing Agent

[Image: CrewAI Report Writing Agent Trace]

Public Thread: https://app.langdb.ai/sharing/threads/3becbfed-a1be-ae84-ea3c-4942867a3e22

  • Path: examples/crewai/report-writing-agent
  • Architecture: A multi-agent system using CrewAI that researches a topic and writes a comprehensive report. It consists of a researcher, analyst, and writer agent working in sequence.

To integrate with LangDB, you first call pylangdb.crewai.init() and then manually configure your LLM instances to send requests through the LangDB gateway.

# main.py
import os
from pylangdb.crewai import init
from crewai import LLM
from dotenv import load_dotenv

load_dotenv()
init()

# Configure the LLM instance to use LangDB credentials
# and pass any additional tracing headers.
api_key = os.environ.get("LANGDB_API_KEY")
api_base = os.environ.get("LANGDB_API_BASE_URL")
project_id = os.environ.get("LANGDB_PROJECT_ID")

# Base LLM configuration
llm = LLM(
    model="openai/gpt-4o-mini",
    api_key=api_key,
    base_url=api_base,
    extra_headers={
        "x-project-id": project_id
    }
)

OpenAI SDK: Travel Agent

[Image: OpenAI Travel Agent Trace]

Public Thread: https://app.langdb.ai/sharing/threads/43cfa16f-042e-44ca-ad21-06f52afeca39

  • Path: examples/openai/travel-agent
  • Architecture: A multi-agent workflow using the OpenAI Agents SDK. It features a 4-agent pipeline (Query Router, Booking Specialist, Travel Recommendation Specialist, and Reply Agent) to handle complex travel queries. LangDB provides end-to-end tracing, dynamic tool integration via Virtual Models, and centralized model management.

1. Initialize Tracing

In your application entry point (app.py), call pylangdb.openai.init() before importing the OpenAI and Agents libraries. This single call patches the OpenAI library so that all subsequent operations are traced automatically.

# app.py
from dotenv import load_dotenv
from pylangdb.openai import init

# Load environment variables and initialize tracing
load_dotenv()
init()

2. Configure the Client and Agents

pylangdb automatically configures the AsyncOpenAI client using your environment variables. You can then define agents using LangDB Virtual Models, which allows you to attach tools (like web search) and guardrails in the LangDB UI without changing your code.

# app.py
from openai import AsyncOpenAI
from agents import Agent, set_default_openai_client, OpenAIChatCompletionsModel

# Client is automatically configured by pylangdb.init()
client = AsyncOpenAI()
set_default_openai_client(client, use_for_tracing=True)

def get_model(model_name):
    return OpenAIChatCompletionsModel(model=model_name, openai_client=client)

# Define agents using virtual models from LangDB
travel_recommendation_agent = Agent(
    name="Travel Recommendation Specialist",
    model=get_model("langdb/travel-recommender") # A virtual model with search tools
)

query_router_agent = Agent(
    name="Query Router",
    model=get_model("langdb/query-router"), # A virtual model for routing
    tools=[travel_recommendation_agent.as_tool()]
)

3. Run the Workflow with Tracing

To link all steps of a session into a single trace, generate a unique group_id and pass it to the Runner.

# app.py
import asyncio
import uuid
from agents import Runner, RunConfig

async def main():
    # A unique group_id links all steps in this session's trace
    group_id = str(uuid.uuid4())

    response = await Runner.run(
        query_router_agent,
        input="I want to go to a sunny beach destination in December.",
        run_config=RunConfig(group_id=group_id)
    )

asyncio.run(main())
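The role of group_id can be seen in isolation: every step executed within a session carries the same identifier, which is what lets LangDB stitch independent agent runs into one trace. This toy sketch (not LangDB code; the step names echo the travel agent pipeline) illustrates the pattern:

```python
import uuid

def run_session(steps):
    """Tag each pipeline step with a shared session identifier."""
    group_id = str(uuid.uuid4())  # one id for the whole session
    return [{"step": name, "group_id": group_id} for name in steps]

events = run_session(["Query Router", "Booking Specialist", "Reply Agent"])
```

A fresh UUID per session keeps traces from different users or conversations separate while grouping everything within one conversation together.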

Google ADK: Web Search Agent

[Image: Google ADK Web Search Agent Trace]

Public Thread: https://app.langdb.ai/sharing/threads/b6ddc154-33a0-403f-948c-9a559d93445a

  • Path: examples/google-adk/web-search-agent
  • Architecture: A two-step sequential agent system (Critic Agent, Reviser Agent) built with Google ADK. The Critic Agent conducts web searches and analyzes information, and the Reviser Agent synthesizes the findings into a structured answer.

Initialize LangDB tracing before importing any google.adk modules.

# web-search/agent.py
from pylangdb.adk import init

# Initialize LangDB tracing before importing any ADK modules
init()

from google.adk.agents import SequentialAgent
# ... rest of the agent setup

Google ADK: Travel Concierge

[Image: Google ADK Travel Concierge Trace]

Public Thread: https://app.langdb.ai/sharing/threads/8425e068-77de-4f41-8aa9-d1111fc7d2b7

  • Path: examples/google-adk/travel-concierge
  • Architecture: A hierarchical agent system where a main root_agent orchestrates a team of specialized sub-agents (Inspiration, Planning, Booking, etc.) to handle a complete travel journey.

Initialize LangDB tracing before importing any google.adk modules.

# travel_concierge/agent.py
from pylangdb.adk import init

# Initialize LangDB tracing before importing any ADK modules
init()

from google.adk.agents import Agent
# ... rest of the agent setup

All Examples

Basic Integration

| Framework | Example | Path |
| --- | --- | --- |
| OpenAI API | Simple Integration | examples/basic.py |

Framework Integrations

| Framework | Example | Path |
| --- | --- | --- |
| LangChain | Basic Integration | examples/langchain/langchain-basic |
| LangChain | Multi-agent Setup | examples/langchain/langchain-multi-agent |
| LangChain | RAG-agent Setup | examples/langchain/langchain-rag-bot |
| LangChain | LangGraph Tracing | examples/langchain/langgraph-tracing |
| CrewAI | Basic Implementation | examples/crewai/crewai-basic |
| CrewAI | Multi-agent Orchestration | examples/crewai/crewai-multi-agent |
| CrewAI | Report Writing Agent | examples/crewai/report-writing-agent |
| CrewAI | Basic Tracing | examples/crewai/crewai-tracing |
| Agno | Basic Tracing | examples/agno/agno-basic |
| LlamaIndex | Basic Integration | examples/llamaindex/llamaindex-basic |
| Google ADK | Web Search Agent | examples/google-adk/web-search-agent |
| Google ADK | Travel Concierge | examples/google-adk/travel-concierge |
| Google ADK | Multi-tool Agent | examples/google-adk/multi-tool-agent |
| OpenAI Agents SDK | Customer Support Agent | examples/openai/customer-support |
| OpenAI Agents SDK | Travel Agent | examples/openai/travel-agent |
| OpenAI Agents SDK | Basic Tracing | examples/openai/openai-agents-tracing |
| Mem0 | Memory System Integration | examples/mem0 |
| Vercel AI SDK | JavaScript/Node.js Implementation | examples/vercel |
| Supabase | Database Integration | examples/supabase |
| Rasa | Conversational AI Integration | examples/rasa |

Feature Examples

| Feature | Example | Path |
| --- | --- | --- |
| Routing | Basic Setup | examples/routing/routing-basic |
| Routing | Multi-agent Setup | examples/routing/routing-multi-agent |
| Evaluation | Model Evaluation & Cost Analysis | examples/evaluation |

MCP Examples

| Example | Description | Path |
| --- | --- | --- |
| MCP Support | Model Provider Integration | examples/mcp/mcp-support.ipynb |
| Cafe Dashboard | Next.js with MCP Integration | examples/mcp/cafe-dashboard |
| Server Actions Demo | Next.js Server Actions with MCP | examples/mcp/nextjs-server-actions-demo |
| SvelteKit Integration | SvelteKit MCP Sample | examples/mcp/sveltekit-mcp-sample |
| Smithery | MCP Smithery Sample | examples/mcp/smithery |

Key Features

🚀 High Performance

  • Built in Rust for maximum speed and reliability
  • Seamless integration with any framework (Langchain, Vercel AI SDK, CrewAI, etc.)
  • Integrate with any MCP servers (https://docs.langdb.ai/ai-gateway/features/mcp-support)

📊 Enterprise Ready

  • Comprehensive usage analytics and cost tracking
  • Rate limiting and cost control
  • Advanced routing, load balancing and failover
  • Evaluations

🔒 Data Control

  • Full ownership of your LLM usage data
  • Detailed logging and tracing

Looking for More? Try Our Hosted & Enterprise Solutions

🌟 Hosted Version

  • Get started in minutes with our fully managed solution
  • Zero infrastructure management
  • Automatic updates and maintenance
  • Pay-as-you-go pricing

💼 Enterprise Version

  • Enhanced features for large-scale deployments
  • Advanced team management and access controls
  • Custom security guardrails and compliance features
  • Intuitive monitoring dashboard
  • Priority support and SLA guarantees
  • Custom deployment options

Contact our team to learn more about enterprise solutions.

Built for Developers

LangDB's AI Gateway is designed with developers in mind, focusing on providing a practical and streamlined experience for integrating LLMs into your workflows. Whether you're building a new AI-powered application or enhancing existing systems, LangDB makes it easier to manage and scale your LLM implementations.

Support

For more information and support, visit our documentation.

Quick Start

1. Clone the repository

git clone https://github.com/langdb/langdb-samples

2. Install dependencies

cd langdb-samples
npm install

3. Follow the documentation

Check the repository's README.md file for specific installation and usage instructions.

Repository Details

  • Owner: langdb
  • Repo: langdb-samples
  • Language: TypeScript
  • License: -
  • Last fetched: 8/10/2025
