
mcp_virtual_assistant_avatar

Integrates the MCP virtual assistant animation system, enabling AI-controlled animated avatars to enhance the user interaction experience.

Repository Info

Stars: 0
Forks: 0
Watchers: 0
Issues: 0
Language: JavaScript
License: -

About This Server

Integrates the MCP virtual assistant animation system, enabling AI-controlled animated avatars to enhance the user interaction experience.

Model Context Protocol (MCP) - This server can be integrated with AI applications to provide additional context and capabilities, enabling enhanced AI interactions and functionality.

Documentation

MCP Avatar Animation System Integration Guide

This guide explains how to integrate the MCP (Model Context Protocol) avatar animation system with any AI client. The system allows an AI to control an animated avatar during conversations with users, expressing emotions and reactions that enhance the interaction experience.

System Components

The MCP Avatar Animation System consists of four main components:

  1. MCP Server - The central server that manages connections and routes animation commands
  2. AI Client - Integration code that allows AI systems to control the avatar
  3. Avatar Renderer - The frontend component that displays and animates the avatar
  4. NLP Module - Natural language processing to help the AI determine appropriate animations

System Architecture

┌─────────────┐     WebSocket    ┌─────────────┐      WebSocket      ┌─────────────┐
│             │     Protocol     │             │      Protocol       │             │
│   AI Client ├────────────────►│  MCP Server ├───────────────────►│   Renderer   │
│             │                  │             │                     │             │
└─────────────┘                  └─────────────┘                     └─────────────┘
       │                                                                    │
       │                                                                    │
       │                                                                    │
       │                              ┌─────────────┐                       │
       │                              │             │                       │
       └─────────────────────────────►│ NLP Module  │◄──────────────────────┘
                                      │             │
                                      └─────────────┘

Installation

  1. Set up the MCP Server

    • Install Node.js and npm
    • Create a new directory for the server
    • Copy the mcp-server.js file to this directory
    • Create a package.json file with the required dependencies:
      {
        "name": "mcp-avatar-server",
        "version": "1.0.0",
        "main": "mcp-server.js",
        "dependencies": {
          "express": "^4.17.1",
          "ws": "^8.2.3"
        },
        "scripts": {
          "start": "node mcp-server.js"
        }
      }
      
    • Run npm install to install dependencies
    • Create a public directory for the renderer files
  2. Set up the Renderer

    • Create the following files in the public directory:
      • index.html (HTML structure)
      • styles.css (CSS styles)
      • renderer.js (JavaScript for the renderer)
    • These files should contain the code provided in the avatar-renderer.js artifact
  3. Integrate the AI Client

    • Copy the ai-client.js file to your AI system
    • Install required dependencies for your AI client
    • Optionally, copy the nlp-module.js for advanced interaction capabilities
  4. Integrate the Broadcast Bridge

    • Update the MCP server with the code from broadcast-bridge.js
    • This enables communication between the server and multiple renderers
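Since the broadcast-bridge.js file itself is not reproduced here, the sketch below shows the core idea under stated assumptions: the function name, and the convention that the server keeps a collection of renderer sockets, are illustrative rather than taken from the repository. It forwards one command object to every renderer whose socket is still open.

```javascript
// Hypothetical sketch of the broadcast logic (names assumed, not from the repo).
// Forwards an animation command from the AI client to every connected renderer.
function broadcastToRenderers(renderers, message) {
  const payload = JSON.stringify(message);
  let delivered = 0;
  for (const ws of renderers) {
    // readyState 1 === OPEN in both the browser WebSocket API and the 'ws' package
    if (ws.readyState === 1) {
      ws.send(payload);
      delivered++;
    }
  }
  return delivered;
}
```

Checking `readyState` before each send avoids throwing on sockets that have disconnected but not yet been pruned from the collection.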

MCP Protocol

The MCP Server uses a WebSocket-based protocol with JSON messages. Here's a quick reference:

Authentication

{
  "command": "authenticate",
  "apiKey": "your_api_key"
}

Animation Control

{
  "command": "animate",
  "animation": "happy",
  "duration": 5000
}

Custom Message

{
  "command": "customMessage",
  "message": "Hello! I'm processing your request."
}

Status Check

{
  "command": "status"
}

Reset Avatar

{
  "command": "reset"
}
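All of the messages above share the same shape: a `command` field plus command-specific fields, serialized as JSON. A small helper (hypothetical, not part of the repo) can frame any of them consistently:

```javascript
// Hypothetical helper for framing MCP protocol messages as JSON strings.
function buildCommand(command, fields = {}) {
  return JSON.stringify({ command, ...fields });
}

// Frames matching the reference above:
const authFrame = buildCommand('authenticate', { apiKey: 'your_api_key' });
const animateFrame = buildCommand('animate', { animation: 'happy', duration: 5000 });
const statusFrame = buildCommand('status');
```

Each resulting string can be passed directly to `WebSocket.send()`.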

Available Animations

The system provides these standard animations:

Animation      Description
happy          Display a happy expression with raised eyebrows and a smile
sad            Display a sad expression with lowered eyebrows and a frown
thinking       Show a thinking expression with one raised eyebrow and a slight head tilt
surprised      Display a surprised expression with raised eyebrows and an open mouth
speaking       Animate the avatar as if it is speaking (mouth movements)
listening      Show the avatar in a listening pose with a slight head tilt and attentive expression
idle           Default idle animation with occasional blinking and subtle movements
greeting       Wave hand and smile as a greeting animation
confused       Display confusion with furrowed brows and a head tilt
acknowledging  Simple nod animation to acknowledge user input
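Because the server rejects unknown animation names (see Troubleshooting below), clients may want to validate names before sending. A minimal sketch, assuming the standard animation list above:

```javascript
// The standard animation names from the table above.
const AVAILABLE_ANIMATIONS = [
  'happy', 'sad', 'thinking', 'surprised', 'speaking',
  'listening', 'idle', 'greeting', 'confused', 'acknowledging'
];

// Returns true if the name is a known animation, false otherwise.
function isValidAnimation(name) {
  return AVAILABLE_ANIMATIONS.includes(name);
}
```

Validating client-side gives an immediate error instead of a silent no-op on the renderer.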

Integrating with Different AI Systems

For LLM-based AI Systems

For AIs built on large language models (GPT-4, Claude, LLaMA, etc.), the integration works best by:

  1. Setting up the MCP Server as a service the AI can connect to
  2. Using the AI Client code in a wrapper around the AI's API
  3. Analyzing both user input and AI responses with the NLP Module
  4. Using the classification to determine appropriate animations

Example integration with a generic LLM API:

const AvatarAIClient = require('./ai-client');
const AvatarNLPModule = require('./nlp-module');
const { callLLMAPI } = require('./your-llm-api-wrapper');

// Setup avatar client
const avatarClient = new AvatarAIClient('ws://your-mcp-server.com', 'your_api_key');
avatarClient.connect();

// Setup NLP module
const nlpModule = new AvatarNLPModule(avatarClient);

// Handle user input
async function handleUserMessage(userMessage) {
  // Process user input for animation
  nlpModule.processUserInput(userMessage);
  
  // Call the LLM API to get a response
  const aiResponse = await callLLMAPI(userMessage);
  
  // Process AI response for animation
  nlpModule.processAIResponse(aiResponse);
  
  return aiResponse;
}

For Custom AI Systems

For custom AI systems, you can integrate more deeply by:

  1. Using the AI Client as a direct component of your AI system
  2. Triggering animations based on internal AI state transitions
  3. Using context-aware animation selection based on conversation flow
  4. Implementing custom animation sequences for specific scenarios
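The second point, triggering animations from internal state transitions, can be sketched as a simple lookup table. The state names and the mapping below are illustrative assumptions, not part of the repo:

```javascript
// Hypothetical mapping from internal AI states to avatar animations.
const STATE_ANIMATIONS = {
  processing: 'thinking',
  responding: 'speaking',
  awaiting_input: 'listening',
  error: 'confused'
};

// Trigger an animation whenever the AI enters a new state;
// unknown states fall back to the idle animation.
function onStateTransition(avatarClient, newState) {
  const animation = STATE_ANIMATIONS[newState] ?? 'idle';
  avatarClient.animate(animation);
  return animation;
}
```

Keeping the mapping in one table makes it easy to tune which expressions accompany which states without touching the state machine itself.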

Customizing the Avatar

The avatar's appearance and animations can be customized by:

  1. Modifying the CSS in styles.css to change the avatar's appearance
  2. Adding new animation classes for additional expressions
  3. Updating the animation handling in renderer.js
  4. Expanding the available animations list in the MCP server
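One way point 3 might look inside renderer.js is a registry that maps animation names to CSS classes. The function and property names here are assumptions for illustration; the actual renderer.js may be structured differently:

```javascript
// Hypothetical animation registry for the renderer (names assumed).
const animationRegistry = {
  happy: { cssClass: 'avatar-happy', defaultDuration: 3000 }
};

// Register a new animation backed by a CSS class defined in styles.css.
function registerAnimation(name, cssClass, defaultDuration = 3000) {
  animationRegistry[name] = { cssClass, defaultDuration };
}

// Apply a registered animation by swapping the avatar element's class.
function applyAnimation(avatarElement, name) {
  const entry = animationRegistry[name];
  if (!entry) return false;
  avatarElement.className = `avatar ${entry.cssClass}`;
  return true;
}
```

With this shape, adding an expression is two steps: define the CSS animation class, then call `registerAnimation` once.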

Advanced Usage

Animation Sequences

You can create more complex animation sequences by chaining animations:

// In your AI client code
async function performGreeting() {
  await avatarClient.animate('greeting', 2000);
  await avatarClient.animate('happy', 1000);
  await avatarClient.animate('speaking');
  avatarClient.setCustomMessage('Hello! How can I help you today?');
}

Context-Aware Animations

For more sophisticated interactions, track conversation context:

// In your AI system
const conversationContext = {
  topic: 'technical_support',
  userFrustrationLevel: 0,
  problemSolved: false
};

// Adjust animations based on context
function updateAnimation() {
  if (conversationContext.userFrustrationLevel > 3) {
    avatarClient.animate('acknowledging');
    avatarClient.setCustomMessage("I understand this is frustrating. Let's try a different approach.");
  } else if (conversationContext.problemSolved) {
    avatarClient.animate('happy');
    avatarClient.setCustomMessage("Great! I'm glad we solved your problem!");
  }
}

Troubleshooting

  • Connection Issues: Ensure the WebSocket server URL is correct and accessible
  • Animation Not Showing: Check browser console for errors in the renderer
  • Server Errors: Check the MCP server logs for connection problems
  • Missing Animations: Verify the animation name is in the available animations list

Security Considerations

  • The MCP Server should validate API keys properly in production
  • Use secure WebSocket connections (WSS) in production environments
  • Implement rate limiting to prevent animation command flooding
  • Sanitize user input before displaying in custom messages
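For the last point, a minimal HTML-escaping sketch is enough to prevent user-supplied text from injecting markup when a custom message is rendered in the browser:

```javascript
// Escape HTML-significant characters before a message reaches the renderer DOM.
function sanitizeMessage(text) {
  return String(text)
    .replace(/&/g, '&amp;')   // must run first so later entities aren't double-escaped
    .replace(/</g, '&lt;')
    .replace(/>/g, '&gt;')
    .replace(/"/g, '&quot;')
    .replace(/'/g, '&#39;');
}
```

On the renderer side, assigning messages via `textContent` instead of `innerHTML` provides the same protection without explicit escaping.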

Performance Optimization

  • Use lightweight CSS animations for better performance on low-end devices
  • Batch animation commands when possible to reduce WebSocket traffic
  • Use animation timeouts to avoid unnecessary state changes
  • Consider using requestAnimationFrame for smoother animations
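Batching can be sketched as a small queue that collects commands and sends them in a single WebSocket frame. Note this is a hypothetical extension: the protocol reference above has no `batch` command, so the MCP server would need to be extended to unpack one:

```javascript
// Hypothetical client-side batcher; assumes the server accepts a 'batch' command.
function createCommandBatcher(send) {
  const queue = [];
  return {
    // Queue a command object instead of sending it immediately.
    enqueue(command) { queue.push(command); },
    // Send everything queued so far as one frame; returns how many were sent.
    flush() {
      if (queue.length === 0) return 0;
      const count = queue.length;
      send(JSON.stringify({ command: 'batch', commands: queue.splice(0) }));
      return count;
    }
  };
}
```

Calling `flush()` once per animation "beat" (for example, per AI response) keeps WebSocket traffic to one frame per turn.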

By following this guide, you can integrate expressive avatar animations into any AI system, enhancing the user experience and making AI interactions more engaging and human-like.

Quick Start

1. Clone the repository

   git clone https://github.com/washyu/mcp_virtual_assistant_avatar

2. Install dependencies

   cd mcp_virtual_assistant_avatar
   npm install

3. Follow the documentation

   Check the repository's README.md file for specific installation and usage instructions.

Repository Details

Owner: washyu
Repo: mcp_virtual_assistant_avatar
Language: JavaScript
License: -
Last fetched: 8/10/2025
