
# AVA - Galactic StreamHub: A Multimodal, Multi-Agent AI Assistant

Hackathon Project for the Agent Development Kit Hackathon with Google Cloud #adkhackathon

Welcome to the Galactic StreamHub, powered by AVA (Advanced Visual Assistant)! This project showcases a sophisticated multi-agent AI system built using Google's Agent Development Kit (ADK). AVA can interact via text, voice, and live video, understand complex user goals, perceive the user's environment, and orchestrate tasks using specialized tools and delegated agents.

- Live App: https://galatic-streamhub-140457946058.us-central1.run.app/
- Devpost: https://devpost.com/software/galactic-streamhub
- Blog Post: https://medium.com/@James_Masciano/ava-building-a-glimpse-of-i-o-2025s-agentic-multimodal-future-with-google-s-adk-for-bddbaac17d3c

*Galactic StreamHub UI (screenshot)*
## Table of Contents
- Features
- Architecture Overview
- Tech Stack
- Setup & Installation
- Prerequisites
- Clone Repository
- Environment Configuration
- Running the Application Locally
- Running MCP Servers
- Deployment (Example: Google Cloud Run)
- Project Structure
- Key Learnings & Workarounds
- Future Enhancements
- Contributing
- License
- Acknowledgements
## Features
- Multimodal Interaction: Communicate with AVA via text, voice, and live video stream.
- Visual Understanding: AVA can analyze objects and elements from your live webcam feed.
- Multi-Agent System:
  - A Root Agent (multimodal Gemini Flash) acts as the primary interface.
  - A ProactiveContextOrchestratorAgent (custom `BaseAgent`) manages the core logic for proactive and reactive assistance. It delegates to:
    - EnvironmentalMonitorAgent (`LlmAgent`): Analyzes visual context and user hints to identify proactive opportunities.
    - ContextualPrecomputationAgent (`LlmAgent`): If a proactive opportunity is identified, this agent formulates suggestions and pre-fetches relevant information using tools.
    - ReactiveTaskDelegatorAgent (`LlmAgent`): Handles explicit user tasks or executes actions based on accepted proactive suggestions.
    - PubMedRAGAgent (`LlmAgent`): Orchestrates biomedical research by querying a local PubMed database, performing live web searches, querying the ClinicalTrials.gov API, and querying the OpenFDA API for drug adverse events. It synthesizes this information and can add new web-found articles to the local knowledge base.
    - VisualizationAgent (`LlmAgent`): Creates various data visualizations (e.g., bar charts, line graphs) using data retrieved from other agents.
- Proactive Assistance: AVA can anticipate user needs based on visual context and general queries, offering timely suggestions.
- Tool Integration (MCP): Leverages Model Context Protocol (MCP) tools for:
  - Cocktail recipes
  - Weather information
  - Google Maps functionalities (geocoding, place search)
- Agent as a Tool: A dedicated `GoogleSearchAgent` (wrapped as an `AgentTool`) is available to other agents for general information retrieval (see the sketch after this list).
- Knowledge Augmentation: Can ingest new biomedical articles found via web search into its local knowledge base (MongoDB & BigQuery).
- Real-time Streaming: Bidirectional streaming of audio and text using WebSockets.
- Dynamic UI: A futuristic, dark-themed web interface with 3D animated elements and a space-themed background.
- Comprehensive Accessibility Suite: Provides workflows for visual (scene description, OCR), auditory (speech sentiment, sound recognition), and cognitive (text simplification) assistance.
- Emotional Intelligence & Empathetic Response: AVA analyzes facial expressions, vocal tone, and language to understand the user's emotional state, detect incongruence, and adapt its responses to be more helpful and empathetic.
- Secure by Design: Integrates Google Cloud's Model Armor to proactively sanitize user prompts against injection, jailbreaking, and other potential attacks.
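
To illustrate the "Agent as a Tool" pattern, here is a minimal sketch of wrapping a search agent as an `AgentTool` with the ADK. This is not the project's actual `agent_config.py`; the instructions and model choice are illustrative:

```python
from google.adk.agents import LlmAgent
from google.adk.tools import google_search
from google.adk.tools.agent_tool import AgentTool

# A dedicated search agent: its only job is to run web searches.
search_agent = LlmAgent(
    name="GoogleSearchAgent",
    model="gemini-2.0-flash",
    instruction="Answer the request by searching the web and summarizing results.",
    tools=[google_search],
)

# Wrap it so other agents can invoke it like any other tool.
google_search_agent_tool = AgentTool(agent=search_agent)

# Any agent that lists this tool can now delegate searches to it.
root_agent = LlmAgent(
    name="mcp_streaming_assistant",
    model="gemini-2.0-flash",
    instruction="Use GoogleSearchAgent when you need fresh web information.",
    tools=[google_search_agent_tool],
)
```

Wrapping the search agent this way also sidesteps the ADK restriction that the built-in `google_search` tool lives in an agent by itself.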
## Architecture Overview

### Privacy & Security by Design
AVA's architecture was built with user privacy as a core principle. We believe that for a personal AI companion to be truly trusted, it must handle sensitive data responsibly.
- Local-First Emotional Analysis: The most sensitive biometric data (your facial expressions and vocal tone) is processed locally on the device running the server. The `EmotionalSynthesizerAgent` uses local libraries (DeepFace, `transformers`) for this analysis, so your raw video and audio data for emotional context are not sent to external cloud services, ensuring a high degree of privacy (see the sketch after this list).
- Secure Authenticated Sessions: All interactions with AVA are protected by Firebase Authentication. The backend verifies a secure token for every WebSocket connection, ensuring that only the authenticated user can access their session. The user's unique Firebase ID (`uid`) is used to namespace all session data and memory, preventing data crossover.
- Purpose-Driven Cloud Interaction: Data is only sent to external cloud APIs when a specific, user-initiated tool requires it. For example, an image frame is sent to the Google Vision API only when the user explicitly asks AVA to read text via the OCR tool. The system avoids continuous, indiscriminate data streaming to third-party services.
- Decoupled Transient Data: Real-time data, like the most recent audio chunk for vocal analysis, is held in a transient, in-memory store (`shared_state.py`) that is cleared when the session ends. This minimizes the data footprint and separates ephemeral data from long-term conversational memory.
- Proactive Prompt Sanitization: All user input is routed through Google Cloud's Model Armor before being processed by the LLM. This acts as a critical security gateway, using specialized templates to detect and block malicious inputs like prompt injection and jailbreaking attempts, protecting the integrity of the agent's operations.
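
As a concrete illustration of the local-first approach, the snippet below analyzes facial emotion entirely on the host using DeepFace. It is a minimal sketch, not the project's `EmotionalSynthesizerAgent` itself, and the frame path is illustrative:

```python
from deepface import DeepFace

def analyze_facial_emotion(frame_path: str) -> dict:
    """Run facial emotion analysis locally; no frame bytes leave the machine."""
    results = DeepFace.analyze(
        img_path=frame_path,
        actions=["emotion"],
        enforce_detection=False,  # don't raise if no face is found in the frame
    )
    # Recent DeepFace versions return one result dict per detected face.
    face = results[0]
    return {
        "dominant_emotion": face["dominant_emotion"],
        "scores": face["emotion"],  # per-emotion confidence scores
    }

print(analyze_facial_emotion("last_video_frame.jpg"))
```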
AVA employs a multi-agent architecture orchestrated by the Google Agent Development Kit:
- Client (Web Browser): Provides the user interface for text, voice (Web Audio API), and video (WebRTC/HTML5 Media) input. Communicates with the backend via WebSockets.
- Backend (FastAPI Server):
  - Manages WebSocket connections.
  - Hosts the ADK `Runner` and `SessionService`.
  - Initializes and manages `MCPToolset`s for external services.
  - Defines and orchestrates the ADK agents.
- Root Agent (`mcp_streaming_assistant`):
  - An `LlmAgent` (Gemini Flash Multimodal).
  - Receives multimodal input from the client.
  - Performs initial analysis and decides to either:
    - Use one of its directly available MCP tools (via `MCPToolset`), or
    - Delegate to the `ProactiveContextOrchestrator` tool (which wraps the `ProactiveContextOrchestratorAgent`).
- ProactiveContextOrchestratorAgent & its Sub-Agents:
  - This custom agent orchestrates the proactive/reactive flow through:
    - `EnvironmentalMonitorAgent`: Identifies contextual keywords from visual input and user hints.
    - `ContextualPrecomputationAgent`: If a proactive context is identified, this agent formulates a suggestion and can use tools (including the `GoogleSearchAgentTool` or MCP tools like CocktailDB) to pre-fetch data.
    - `ReactiveTaskDelegatorAgent`: Handles direct user requests or tasks following an accepted proactive suggestion. It can use MCP tools or the `GoogleSearchAgentTool` as needed.
  - If General Research Synthesis: Delegates to `MasterResearchSynthesizer`, which runs the Research & Synthesis Workflow below sequentially.
  - Emotional Intelligence Suite:
    - `EmotionalSynthesizerAgent`: A parallel agent that runs specialists for facial, vocal, and text emotion analysis. It computes an "incongruence score" to detect when a user's words don't match their non-verbal cues, enabling AVA to respond more empathetically and effectively (see the sketch after this list).
  - Research & Synthesis Workflow:
    - Runs `DataGatheringAndConnectionAgent`: This involves parallel searches by `ResearchOrchestratorAgent` (across PubMed, web, local clinical trials DB, OpenFDA), followed by `KeyInsightExtractorAgent` and `TrialConnectorAgent`.
    - Runs `ParallelSynthesisAgent`: This involves parallel work by `TextSynthesizerAgent` (generates text, can propose ingestion via `IngestionRouterAgentTool`), `ChartProducerAgent` (uses `VisualizationAgentTool`), and `ImageEvidenceProducerAgent` (uses `MultimodalEvidenceAgentTool` to find and prepare medical image URLs).
    - Runs `FinalReportAggregatorAgent`: Combines all synthesized parts into the final response.
    - Runs `VisualizationAgent`: If the user requests a visualization, this agent is triggered to create a chart from the provided data.
- Specialized Search Agents:
  - `GoogleSearchAgentTool`: An `LlmAgent` for general web searches, available as a tool.
  - `ClinicalTrialsSearchAgent`: An `LlmAgent` for querying the live ClinicalTrials.gov API.
  - `OpenFDASearchAgent`: An `LlmAgent` for querying the live OpenFDA API for drug adverse events.
- MCP Servers: External processes (Stdio servers for CocktailDB, Weather, Google Maps) that provide tool functionalities via the Model Context Protocol.
- Research Workflow:
  - The `MasterResearchSynthesizer` is a sequential agent that manages a deep research pipeline. It coordinates parallel data gathering, insight extraction, and the generation of a final report composed of text, charts, and images from multiple synthesis agents.
- Accessibility Workflow:
  - The `AccessibilityOrchestratorAgent` is triggered by accessibility-related queries (e.g., "what do you see?", "read this label").
  - It analyzes the user's intent and delegates to one of two specialist agents:
    - `SceneDescriberAgent`: Uses the list of visually identified items from the session state to construct a natural-language description of the environment.
    - `TextReaderAgent`: Uses a dedicated tool (`perform_ocr_on_last_frame`) that calls the Google Cloud Vision API to accurately extract and read text from the live video feed.
- Firebase Authentication:
  - The frontend uses the Firebase client SDK to handle Google Sign-In and retrieve a secure ID token.
  - The backend uses the Firebase Admin SDK to verify this token upon a WebSocket connection attempt, ensuring only authenticated users can interact with the agent. The user's Firebase UID is used as the session ID.
- Auditory Accessibility Workflow:
  - The `AuditoryAssistanceOrchestratorAgent` is triggered by auditory-related queries (e.g., "how do I sound?", "what's that noise?").
  - It analyzes the user's intent and delegates to one of two specialist agents:
    - `AudioSentimentAgent`: Uses the Google Cloud Natural Language API to analyze the sentiment of the user's speech.
    - `SoundRecognitionAgent`: Identifies and reports significant ambient sounds from the user's environment.
- Cognitive Accessibility Workflow:
  - The `CognitiveAssistanceOrchestratorAgent` is triggered when a user asks to simplify text.
  - The root agent first uses the `set_text_for_simplification` tool to store the target text.
  - The orchestrator then delegates to the `TextSimplificationAgent`, which rephrases the stored text in simple, clear language.
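
The README does not specify how the incongruence score is computed; the sketch below is one plausible, purely illustrative formulation that compares the per-modality emotion distributions produced by the specialists. All names and the distance metric are assumptions:

```python
def incongruence_score(
    facial: dict[str, float],
    vocal: dict[str, float],
    text: dict[str, float],
) -> float:
    """Illustrative metric: mean pairwise L1 distance between the emotion
    probability distributions from each modality specialist. 0.0 means all
    modalities agree; higher values suggest a mismatch (e.g., positive words
    delivered with a distressed tone)."""
    labels = set(facial) | set(vocal) | set(text)

    def l1(a: dict[str, float], b: dict[str, float]) -> float:
        return sum(abs(a.get(k, 0.0) - b.get(k, 0.0)) for k in labels)

    pairs = [(facial, vocal), (facial, text), (vocal, text)]
    return sum(l1(a, b) for a, b in pairs) / len(pairs)

# Words say "happy" while face and voice disagree -> elevated score.
score = incongruence_score(
    facial={"sad": 0.7, "happy": 0.3},
    vocal={"sad": 0.6, "happy": 0.4},
    text={"happy": 0.9, "sad": 0.1},
)
```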
### Firebase Authentication

Galactic StreamHub now implements robust user authentication using Firebase, ensuring that only verified users can connect to the agent backend and interact with AVA.

#### Frontend (Client-Side - `static/js/app.js`)
- Firebase Initialization:
  - The Firebase app is initialized using your project's `firebaseConfig` (ensure your actual API key and other details are securely managed, e.g., via environment variables for deployment; the provided `app.js` has placeholders).
- Google Sign-In:
  - Users can sign in using their Google accounts via a "Sign In with Google" button.
  - Firebase Authentication (`signInWithPopup` with `GoogleAuthProvider`) handles the OAuth flow.
- Authentication State Management:
  - An `onAuthStateChanged` listener monitors the user's sign-in status.
  - If Signed In:
    - The user's display name and avatar are shown.
    - An ID token is retrieved from Firebase using `user.getIdToken(true)`. This token is a secure credential.
    - The WebSocket connection to the backend (`/ws`) is established, and the ID token is passed as a query parameter (`?token=<ID_TOKEN>`).
    - UI elements (message input, send button, audio/video controls) are enabled.
  - If Signed Out:
    - A login gate is displayed, prompting the user to sign in.
    - UI elements are disabled.
    - Any existing WebSocket connection is closed.
- Sign-Out:
  - A "Sign Out" button allows users to end their session.
#### Backend (Server-Side - `main.py`)
- Firebase Admin SDK Initialization:
  - The Firebase Admin SDK is initialized during the FastAPI application's startup (`app_lifespan`).
  - It uses Application Default Credentials, which is suitable for Google Cloud environments (e.g., Cloud Run, Cloud Workstations). For local development, ensure you've run `gcloud auth application-default login`.
  - The `projectId` is configured during initialization.
- Token Verification in WebSocket Endpoint:
  - The `/ws` WebSocket endpoint now requires a `token` query parameter.
  - When a client attempts to connect, the backend uses `auth.verify_id_token(token)` to verify the received ID token with Firebase's authentication servers.
  - If Verification Succeeds:
    - The connection is accepted.
    - The `uid` (unique user ID) extracted from the decoded token is used as the `session_id` for the ADK agent session. This securely links the agent's activity to the authenticated user.
  - If Verification Fails (e.g., token is invalid, expired, or tampered with):
    - An error is logged.
    - The WebSocket connection is refused (closed with code 1011).
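
A minimal sketch of this auth gate (the real `/ws` endpoint in `main.py` also wires up the ADK runner and streaming; error handling is simplified here):

```python
import firebase_admin
from firebase_admin import auth
from fastapi import FastAPI, WebSocket, Query

app = FastAPI()
firebase_admin.initialize_app()  # uses Application Default Credentials

@app.websocket("/ws")
async def ws_endpoint(websocket: WebSocket, token: str = Query(...)):
    await websocket.accept()
    try:
        decoded = auth.verify_id_token(token)  # raises if invalid or expired
    except Exception:
        # Refuse the session, as described above.
        await websocket.close(code=1011)
        return
    session_id = decoded["uid"]  # the Firebase UID doubles as the ADK session ID
    # ... hand the socket and session_id over to the ADK Runner ...
```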
#### Security Benefits
- Secure Access: Only authenticated users can establish a WebSocket connection and interact with the agent.
- User Identification: The backend can reliably identify users based on their Firebase UID, allowing for potential future features like personalized experiences or data storage.
- Standardized Authentication: Leverages Google's robust and secure Firebase Authentication platform.
### Visual Accessibility 👓
To better serve visually impaired users, AVA includes a dedicated accessibility workflow. It can:
- Describe the Scene: Provide a natural, narrative description of the user's surroundings based on what the camera sees.
- Read Text Aloud: Accurately read text from objects in the real world (like labels or documents) using Google's Cloud Vision API for high-precision Optical Character Recognition (OCR).
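
For reference, here is roughly what an OCR helper like `perform_ocr_on_last_frame` could look like with the Cloud Vision client library. This is a hedged sketch; the project's actual tool may differ:

```python
from google.cloud import vision

def perform_ocr(frame_bytes: bytes) -> str:
    """Extract text from a single video frame using Cloud Vision OCR."""
    client = vision.ImageAnnotatorClient()
    response = client.text_detection(image=vision.Image(content=frame_bytes))
    if response.error.message:
        raise RuntimeError(response.error.message)
    annotations = response.text_annotations
    # The first annotation contains the full detected text block.
    return annotations[0].description if annotations else ""
```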
### Auditory Accessibility 👂
To enhance auditory awareness and provide valuable feedback, AVA includes a dedicated auditory accessibility workflow. It can:
- Analyze Speech Sentiment: Provide real-time feedback on vocal tone by analyzing transcribed speech with the Google Cloud Natural Language API.
- Recognize Ambient Sounds: Identify and report significant background noises (like a doorbell or alarm) to improve the user's situational awareness.
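
A minimal sketch of the sentiment step, assuming the speech has already been transcribed upstream (the project's `AudioSentimentAgent` presumably wraps something similar; the function name is illustrative):

```python
from google.cloud import language_v1

def analyze_speech_sentiment(transcript: str) -> dict:
    """Score the sentiment of transcribed speech with the Natural Language API."""
    client = language_v1.LanguageServiceClient()
    document = language_v1.Document(
        content=transcript,
        type_=language_v1.Document.Type.PLAIN_TEXT,
    )
    sentiment = client.analyze_sentiment(
        request={"document": document}
    ).document_sentiment
    # score: -1.0 (negative) .. 1.0 (positive); magnitude: overall strength
    return {"score": sentiment.score, "magnitude": sentiment.magnitude}
```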
### Cognitive Accessibility

To assist users who may benefit from simplified information, AVA includes a cognitive accessibility workflow.

#### Text Simplification
- Functionality: Users can request to simplify a block of text by asking "can you make this easier to read?" or "explain this to me simply".
- Underlying Technology: The `CognitiveAssistanceOrchestratorAgent` manages this task. The root agent first captures the text to be simplified using a dedicated tool. The orchestrator then delegates to a `TextSimplificationAgent`, which uses Gemini to rephrase the content in clear, easy-to-understand language, removing jargon and complex sentence structures.
- User Benefit: Makes complex information more accessible, aiding comprehension for a wider range of users.
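
For illustration, an agent of this kind can be declared in a few lines with the ADK. This is a sketch, not the project's actual definition; the model choice, instruction text, and session-key name are assumptions:

```python
from google.adk.agents import LlmAgent

text_simplification_agent = LlmAgent(
    name="TextSimplificationAgent",
    model="gemini-2.0-flash",
    description="Rephrases stored text in simple, clear language.",
    instruction=(
        "Read the text stored under the 'text_for_simplification' session key "  # key name is illustrative
        "and rewrite it in short, plain sentences. Remove jargon, define any "
        "term you must keep, and preserve the original meaning."
    ),
)
```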
*Architecture Diagram*

## Tech Stack
- AI Framework: Google Agent Development Kit (ADK) for Python
- LLM: Google Gemini (Flash for streaming/multimodal, Pro for some agent logic during development)
- Backend: Python, FastAPI, Uvicorn
- Frontend: HTML, CSS, JavaScript (with Vanta.js for background animation)
- Real-time Communication: WebSockets
- External Tools: MCP (Model Context Protocol) for CocktailDB, Weather, Google Maps
- Deployment: Google Cloud Run, Docker
- Dependency Management/Runner (Local): `uv`
## Setup & Installation

### Prerequisites
- Python 3.11 or 3.12 (Python 3.13 is not yet supported by all ML dependencies)
- `uv` (recommended for faster virtual environment and package management: `pip install uv`)
- Node.js and `npx` (if you intend to run the Google Maps MCP server locally via ADK's `StdioServerParameters`)
- Access to a Google Cloud Platform project with:
  - Vertex AI API enabled.
  - Secret Manager API enabled (if using it for API keys).
- Google Cloud CLI (`gcloud`) configured and authenticated.
- A `.env` file configured with your Google Cloud Project ID, location, and potentially API keys (see `.env.example`).
### Clone Repository

```bash
git clone https://github.com/surfiniaburger/galactic-streamhub
cd galactic-streamhub
```
### Environment Configuration

1. Create a virtual environment (using `uv`):

   ```bash
   uv venv
   source .venv/bin/activate      # On macOS/Linux
   # .venv\Scripts\activate.bat   # On Windows CMD
   # .venv\Scripts\Activate.ps1   # On Windows PowerShell
   ```

2. Install dependencies:

   ```bash
   uv sync
   ```

3. Set up your `.env` file: Copy `.env.example` to `.env` and fill in your details:

   ```bash
   # For Vertex AI
   GOOGLE_GENAI_USE_VERTEXAI=TRUE
   GOOGLE_CLOUD_PROJECT="your-gcp-project-id"
   GOOGLE_CLOUD_LOCATION="us-central1"  # Or your preferred region

   # For Google Maps API Key via Secret Manager (optional, used by main.py)
   # GOOGLE_MAPS_API_KEY_SECRET_NAME="your-secret-name-for-maps-key"

   # If NOT using Secret Manager, and your Maps MCP server expects GOOGLE_MAPS_API_KEY directly:
   # GOOGLE_MAPS_API_KEY="your-actual-maps-api-key"
   ```

   Note: `main.py` is configured to fetch the Maps API key from Secret Manager. If you provide it directly as `GOOGLE_MAPS_API_KEY`, you'll need to adjust `main.py` or ensure the Maps MCP server consumes this variable.
## Running the Application Locally

1. Ensure MCP Servers are Ready:
   - The `main.py` application will attempt to start the Weather and Cocktail MCP servers using `StdioServerParameters`. Ensure `mcp_server/weather_server.py` and `mcp_server/cocktail.py` are executable and their dependencies are met.
   - The Google Maps MCP server (if configured in `main.py`) will also be started via `StdioServerParameters` (requires `npx`). A sketch of this wiring appears at the end of this section.

2. Start the FastAPI Application: Navigate to the project root directory (where `main.py` is located) in your terminal and run:

   ```bash
   uv run uvicorn main:app --reload
   ```

   The application will typically be available at `http://127.0.0.1:8000`.

3. Access the Web Interface: Open your browser and go to `http://127.0.0.1:8000`. For a deep-research demo, try a query like: "Synthesize the latest research on the diagnosis and treatment of non-small cell lung cancer. Find connections to ongoing clinical trials and show me a visual example of a lung nodule from a CT scan."
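
As referenced in step 1, the MCP wiring in `main.py` looks roughly like the sketch below. Exact class names and import paths vary across ADK versions, and the commands and file paths here are assumptions based on the project layout:

```python
from google.adk.tools.mcp_tool.mcp_toolset import MCPToolset, StdioServerParameters

# Each toolset launches its MCP server as a stdio subprocess and exposes
# that server's tools to ADK agents.
weather_toolset = MCPToolset(
    connection_params=StdioServerParameters(
        command="python3",
        args=["./mcp_server/weather_server.py"],
    )
)

maps_toolset = MCPToolset(
    connection_params=StdioServerParameters(
        command="npx",
        args=["-y", "@modelcontextprotocol/server-google-maps"],
        # main.py fetches this key from Secret Manager at startup.
        env={"GOOGLE_MAPS_API_KEY": "<key fetched from Secret Manager>"},
    )
)
```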
## Agent Evaluation

This project uses the ADK's evaluation framework (`adk eval`) to test the performance and correctness of the agents against predefined conversation scenarios (eval sets). This ensures that as the agent's logic evolves, its behavior remains consistent and correct.

### Running Evaluations

This project contains evaluation sets for multiple agents to test specific capabilities in isolation.

To evaluate the primary `main_agent`'s conversational and tool-use abilities:

```bash
adk eval main_agent main_agent/cocktailsEval.evalset.json
```
This command will:
- Load the `root_agent` defined in the `main_agent` module.
- Run the conversation turns defined in `main_agent/cocktailsEval.evalset.json`.
- Compare the agent's actual tool calls and final responses against the expected "golden" responses defined in the eval set.
- Generate a detailed result file in `main_agent/.adk/eval_history/`.

The evaluation results help track metrics like `tool_trajectory_avg_score` (did the agent call the right tools?) and `response_match_score` (how similar was the agent's text response to the expected one?).
## Running MCP Servers

This application is configured to start the Weather, Cocktail, and (if an API key is available) Google Maps MCP servers as subprocesses using ADK's `StdioServerParameters`.

- Weather Server: `mcp_server/weather_server.py`
- Cocktail Server: `mcp_server/cocktail.py`
- Google Maps Server: Uses `npx -y @modelcontextprotocol/server-google-maps` (requires Node.js/`npx` and a `GOOGLE_MAPS_API_KEY` environment variable available to it, which `main.py` attempts to provide from Secret Manager).

If you encounter issues with these starting automatically, you may need to run them manually in separate terminals before starting `main.py`, and adjust `main.py` to connect to them as existing services if necessary.
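
If you want to see the shape of such a server, here is a minimal stdio MCP server sketch using the Python MCP SDK's FastMCP helper. It is illustrative; the real `mcp_server/weather_server.py` may be structured differently:

```python
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("weather")

@mcp.tool()
def get_forecast(city: str) -> str:
    """Return a short weather forecast for the given city."""
    # Illustrative stub; a real server would call a weather API here.
    return f"Forecast for {city}: sunny, 22C"

if __name__ == "__main__":
    # Stdio transport matches how StdioServerParameters launches the server.
    mcp.run(transport="stdio")
```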
## Deployment (Example: Google Cloud Run)

To deploy this application to Google Cloud Run:

1. Create a `Dockerfile` (if not already present):

   ```dockerfile
   # Use an official Python runtime as a parent image
   FROM python:3.11-slim

   # Set the working directory in the container
   WORKDIR /app

   # Install uv first for faster dependency installation
   RUN pip install uv

   # Copy the requirements file into the container at /app
   COPY requirements.txt .

   # Install any needed packages specified in requirements.txt using uv
   RUN uv pip install --system --no-cache-dir -r requirements.txt

   # Copy the rest of the application code into the container at /app
   COPY . .

   # Cloud Run supplies the PORT environment variable; an EXPOSE line is not
   # strictly needed. Use the shell form of CMD so ${PORT:-8080} is expanded
   # at runtime (the exec/JSON form would pass the literal string through).
   CMD uv run uvicorn main:app --host 0.0.0.0 --port ${PORT:-8080}
   ```

2. Set Environment Variables for Deployment: In your Cloud Shell or local terminal (with the `gcloud` CLI configured):

   ```bash
   export SERVICE_NAME='galatic-streamhub'  # Or your preferred service name
   export LOCATION='us-central1'            # Or your preferred region
   export PROJECT_ID='silver-455021'        # Replace with your Project ID
   ```

3. Deploy to Cloud Run: Ensure you are in the project's root directory (where the `Dockerfile` is):

   ```bash
   gcloud run deploy $SERVICE_NAME \
     --source . \
     --region $LOCATION \
     --project $PROJECT_ID \
     --memory 4Gi \
     --cpu 2 \
     --concurrency 80 \
     --allow-unauthenticated \
     --set-env-vars="GOOGLE_CLOUD_PROJECT=$PROJECT_ID,GOOGLE_MAPS_API_KEY_SECRET_NAME=your-secret-name-for-maps-key"
   # Add other necessary env vars.
   # Ensure the service account for Cloud Run has access to Secret Manager if using it.
   ```

   - Adjust `--memory`, `--cpu`, and `--concurrency` as needed.
   - `--allow-unauthenticated` makes the service publicly accessible; remove it if you need authentication.
   - Use `--set-env-vars` to pass environment variables required by your application (like the Secret Manager name for the Maps API key).
   - The service account running the Cloud Run instance will need permission to access Secret Manager secrets if you're using that feature.

   Or simply run:
   ```bash
   gcloud run deploy $SERVICE_NAME \
     --source . \
     --region $LOCATION \
     --project $PROJECT_ID \
     --memory 4G \
     --allow-unauthenticated
   ```
   To export the deployed service's configuration, or to manage an equivalent Kubernetes deployment:

   ```bash
   gcloud run services describe galatic-streamhub --format export --region us-central1 --project silver-455021 > galatic-streamhub-cloudrun.yaml
   kubectl apply -f kubernetes-deployment.yaml -n default
   kubectl get pods -n default -w
   kubectl rollout status deployment/galatic-streamhub -n default
   kubectl logs deployment/galatic-streamhub -n default -f
   ```
On successful deployment, you will be given a URL for the Cloud Run service.
## Project Structure

```
.
├── .env.example                 # Example environment variables
├── .venv/                       # Virtual environment (if created with uv venv)
├── Dockerfile                   # For containerization
├── README.md                    # This file
├── agent_config.py              # Defines ADK agents, tools, and their configurations
├── proactive_agents.py          # Defines the ProactiveContextOrchestratorAgent and its sub-agents
├── google_search_agent/         # Directory for the Google Search agent
│   └── agent.py
├── pubmed_pipeline.py           # Logic for querying PubMed and ingesting articles
├── ingest_clinical_trials.py    # Logic for ingesting clinical trial data
├── ingest_multimodal_data.py    # Logic for finding similar medical images
├── openfda_pipeline.py          # Logic for querying OpenFDA API
├── clinical_trials_pipeline.py  # Logic for querying ClinicalTrials.gov API
├── main.py                      # FastAPI application, WebSocket endpoints, ADK runner setup
├── mcp_server/                  # Directory for local MCP server scripts
│   ├── cocktail.py              # MCP server for cocktail recipes
│   └── weather_server.py
├── tools/                       # Directory for agent tools
│   └── chart_tool.py            # Tool for creating data visualizations
├── web_utils.py                 # Utility for fetching web article text
├── requirements.txt             # Python dependencies
└── static/                      # Frontend assets
    ├── css/
    │   └── style.css
    ├── js/
    │   ├── app.js               # Main client-side JavaScript
    │   ├── audio-player.js      # AudioWorklet for playback
    │   └── audio-recorder.js    # AudioWorklet for recording
    └── index.html               # Main HTML page
```
## Key Learnings & Workarounds
During development, especially when integrating various ADK components for a streaming multimodal experience, a few workarounds and insights were key:
- Patching ADK Tools (`.func` attribute):
  - Both `MCPTool` and `AgentTool` instances needed to be "patched" by adding a `.func` attribute that pointed to their respective `run_async` methods. This was necessary because the ADK's internal tool execution flow (in `google/adk/flows/llm_flows/functions.py`) appeared to expect this attribute for certain tool types in streaming scenarios.
- `AgentTool` Argument Handling (`KeyError: 'request'`):
  - The `AgentTool.run_async` method (when wrapping an agent like `ProactiveContextOrchestratorAgent`) expects its input arguments from the LLM to be bundled under a single key, `args['request']`.
  - To resolve this, the Root Agent was instructed to call the `ProactiveContextOrchestrator` tool by passing a single argument named `request`, whose value is a JSON string containing the actual parameters (`user_goal`, `seen_items`). The orchestrator then accesses these from session state, which the Root Agent populates. Both workarounds are sketched after this list.
  - These findings and workarounds were detailed in GitHub Issue #1084 on google/adk-python.
- Secure WebSockets (`wss://`): Ensured client-side JavaScript uses `wss://` when the application is deployed over HTTPS, to prevent mixed-content errors.
- Model Selection for Stability & Capability: Using `gemini-2.0-flash` for most agents for speed and cost-effectiveness, while ensuring the `EnvironmentalMonitorAgent` uses a multimodal model if it is directly processing image data (though in the current flow the Root Agent performs the primary visual analysis).
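
The two workarounds above, condensed into a sketch (illustrative; variable names are not from the project's source):

```python
import json

# Workaround 1: give MCPTool/AgentTool instances the .func attribute that the
# streaming tool-execution flow expects, pointing at run_async.
for tool in all_tools:  # all_tools: the MCPTool/AgentTool instances in play
    if not hasattr(tool, "func"):
        tool.func = tool.run_async

# Workaround 2: the Root Agent calls the orchestrator tool with a single
# 'request' argument whose value is a JSON string of the real parameters.
request_arg = json.dumps({
    "user_goal": "mix a cocktail with what's on my desk",
    "seen_items": ["shaker", "lime", "rum"],
})
# The LLM is instructed to emit: ProactiveContextOrchestrator(request=request_arg),
# while the orchestrator reads user_goal/seen_items from session state.
```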
## Future Enhancements
- More sophisticated visual understanding (e.g., fine-grained object attribute recognition).
- More robust proactive trigger logic (e.g., using confidence scores or a dedicated decision-making agent).
- Enhanced error handling and feedback in the UI.
- Persistent session storage for longer-term memory.
- Integration of more diverse tools and agents.
- Refined Vanta.js background with more dynamic elements (e.g., subtle nebulae).
## Contributing

This project was developed for the ADK Hackathon. While contributions are not actively sought at the moment, feel free to fork the repository, explore, and adapt the concepts for your own projects! If you find bugs or have significant improvement suggestions related to the ADK usage patterns demonstrated here, please raise an issue on this GitHub repo or (if applicable) on the official google/adk-python repository.
## License

This project is licensed under the MIT License - see the `LICENSE.md` file for details.
## Acknowledgements
- The Google Agent Development Kit team for providing the framework.
- The FastAPI and Uvicorn communities.
- Vanta.js for the cool animated background.
- Participants and organizers of the #adkhackathon.
The lord is my strength.