
LLM Chat Interface#

Learn how to create an AI-powered chat interface using n8n with large language models for intelligent conversations and automated responses.

🎯 What You'll Build#

An AI chat interface that:

  • Handles user messages via web interface or messaging platforms
  • Integrates with various LLM providers (OpenAI, Gemini, etc.)
  • Maintains conversation context and history
  • Provides intelligent, contextual responses
  • Supports custom prompts and system instructions

πŸ“‹ Requirements#

  • API access to LLM provider (OpenAI, Google Gemini, etc.)
  • n8n instance running
  • Web server or messaging platform for user interface
  • Basic understanding of APIs and webhooks

πŸ”§ Workflow Overview#

Key Components#

  1. Message Trigger - Receives user messages
  2. Context Manager - Maintains conversation history
  3. LLM Integration - Calls AI model for responses
  4. Response Processor - Formats and delivers responses
  5. History Storage - Saves conversation logs

πŸ“ Step-by-Step Guide#

1. Choose Your LLM Provider#

OpenAI (GPT)#

  1. Get API Key from OpenAI platform
  2. Set up billing and review pricing
  3. Choose a model (e.g. gpt-3.5-turbo, gpt-4)

Google Gemini#

  1. Get API Key from Google AI Studio
  2. Enable Gemini API in Google Cloud Console
  3. Configure quota limits and monitoring

2. Set Up Message Trigger#

Web Form Interface#

  1. Add Form Trigger node
  2. Configure form fields:
     • User message (required)
     • User ID/session (hidden field)
     • Context options (optional)

Telegram Bot#

  1. Add Telegram Trigger node
  2. Configure bot token
  3. Set up message handling

Webhook Endpoint#

  1. Add Webhook node
  2. Configure unique endpoint path
  3. Set response mode

3. Configure LLM Integration#

OpenAI Integration#

// OpenAI API Call Configuration
const openaiConfig = {
  model: "gpt-3.5-turbo",
  messages: [
    {
      role: "system",
      content: "You are a helpful AI assistant. Be concise and helpful."
    },
    {
      role: "user",
      content: userMessage
    }
  ],
  temperature: 0.7,
  max_tokens: 500
};
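In an n8n Code node (or any Node.js 18+ environment with global `fetch`), the config above can be sent to OpenAI's Chat Completions endpoint roughly like this. A minimal sketch — the endpoint URL is OpenAI's documented one, but retries and error handling are left out, and the helper names are ours:

```javascript
// Build the HTTP request for the OpenAI Chat Completions endpoint
function buildOpenAIRequest(config, apiKey) {
  return {
    url: "https://api.openai.com/v1/chat/completions",
    options: {
      method: "POST",
      headers: {
        "Content-Type": "application/json",
        Authorization: `Bearer ${apiKey}`
      },
      body: JSON.stringify(config)
    }
  };
}

async function callOpenAI(config, apiKey) {
  const { url, options } = buildOpenAIRequest(config, apiKey);
  const res = await fetch(url, options);
  if (!res.ok) throw new Error(`OpenAI API error: ${res.status}`);
  const data = await res.json();
  // The assistant's reply lives in choices[0].message.content
  return data.choices[0].message.content;
}
```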

Gemini Integration#

// Gemini API Call Configuration
const geminiConfig = {
  contents: [{
    parts: [{
      text: userMessage
    }]
  }],
  generationConfig: {
    temperature: 0.7,
    maxOutputTokens: 500
  }
};
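The Gemini config is sent to the `generateContent` REST endpoint. A sketch along the same lines — the model name and API version vary by account and release, so check Google's current docs before relying on the URL below:

```javascript
// Build the HTTP request for Gemini's generateContent endpoint
function buildGeminiRequest(config, apiKey, model = "gemini-pro") {
  return {
    url: `https://generativelanguage.googleapis.com/v1beta/models/${model}:generateContent?key=${apiKey}`,
    options: {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify(config)
    }
  };
}

async function callGemini(config, apiKey) {
  const { url, options } = buildGeminiRequest(config, apiKey);
  const res = await fetch(url, options);
  if (!res.ok) throw new Error(`Gemini API error: ${res.status}`);
  const data = await res.json();
  // The reply text lives in candidates[0].content.parts[0].text
  return data.candidates[0].content.parts[0].text;
}
```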

4. Manage Conversation Context#

Session-based Context#

// Maintain conversation history
function updateContext(userMessage, aiResponse, context, userId) {
  if (!context) {
    context = {
      conversation: [],
      user_id: userId,
      session_start: new Date().toISOString()
    };
  }

  // Add messages to conversation history
  context.conversation.push({
    role: "user",
    content: userMessage,
    timestamp: new Date().toISOString()
  });

  context.conversation.push({
    role: "assistant",
    content: aiResponse,
    timestamp: new Date().toISOString()
  });

  // Limit conversation history (keep last 10 exchanges)
  if (context.conversation.length > 20) {
    context.conversation = context.conversation.slice(-20);
  }

  return context;
}

Context Injection#

// Add relevant context to each message
function buildPromptWithContext(userMessage, context) {
  const systemPrompt = `You are a helpful AI assistant.
  Current conversation context:
  ${context.conversation.slice(-6).map(msg => `${msg.role}: ${msg.content}`).join('\n')}

  Please respond to the latest user message while maintaining conversation continuity.`;

  return {
    system: systemPrompt,
    user: userMessage
  };
}
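With chat-style APIs, it is usually better to pass the history as a structured messages array rather than inlining it into the system prompt. A sketch, assuming `context.conversation` uses the `{role, content, timestamp}` shape from the previous step:

```javascript
// Convert stored conversation history into an OpenAI-style messages array
function buildMessages(userMessage, context, systemPrompt = "You are a helpful AI assistant.") {
  const history = (context?.conversation || [])
    .slice(-6) // last 3 exchanges
    .map(({ role, content }) => ({ role, content })); // drop timestamps

  return [
    { role: "system", content: systemPrompt },
    ...history,
    { role: "user", content: userMessage }
  ];
}
```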

πŸ€– Advanced Chat Features#

Custom Instructions and Personas#

  1. System Prompts

    const personas = {
      professional: "You are a professional business assistant. Be formal, concise, and helpful.",
      creative: "You are a creative writing assistant. Be imaginative, inspiring, and supportive.",
      technical: "You are a technical expert. Provide detailed, accurate information with examples.",
      casual: "You are a friendly chat companion. Be casual, engaging, and humorous."
    };
    
    const selectedPersona = personas[context.persona] || personas.professional;
    

  2. Dynamic Context Switching

    // Detect conversation intent and adjust persona
    if (userMessage.toLowerCase().includes('help me code')) {
      context.persona = 'technical';
    } else if (userMessage.toLowerCase().includes('story') ||
               userMessage.toLowerCase().includes('creative')) {
      context.persona = 'creative';
    }
    

Multi-turn Conversations#

  1. Follow-up Questions

    // Generate follow-up questions based on context
    function generateFollowUp(response, context) {
      const followUpPatterns = {
        'code': [
          "Would you like me to explain any part of this code?",
          "Do you need help implementing this in a specific language?"
        ],
        'explanation': [
          "Would you like more details on any particular aspect?",
          "Do you have any specific questions about this topic?"
        ]
      };
    
      const category = categorizeResponse(response);
      const suggestions = followUpPatterns[category] || [];
      return suggestions[Math.floor(Math.random() * suggestions.length)];
    }
    

  2. Conversation State Management

    // Track conversation state and intent
    function updateConversationState(message, context) {
      const intent = detectIntent(message);
    
      context.state = {
        ...context.state,
        current_intent: intent,
        pending_follow_up: needsFollowUp(intent),
        last_topic: extractTopic(message),
        conversation_depth: (context.state?.conversation_depth || 0) + 1
      };
    
      return context;
    }
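The helpers `detectIntent` and `needsFollowUp` referenced above are not defined in this tutorial. A minimal keyword-based sketch (the intent labels here are our own — a production system would use an LLM or a classifier):

```javascript
// Crude keyword-based intent detection
function detectIntent(message) {
  const m = message.toLowerCase();
  if (/\b(how|why|what|explain)\b/.test(m)) return "question";
  if (/\b(write|create|generate|make)\b/.test(m)) return "request";
  if (/\b(thanks|thank you|bye)\b/.test(m)) return "closing";
  return "statement";
}

function needsFollowUp(intent) {
  // Questions and requests usually invite a follow-up; closings do not
  return intent === "question" || intent === "request";
}
```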
    

πŸ”Œ Integration Examples#

Customer Support Chatbot#

  1. Knowledge Base Integration

    // Search knowledge base before responding
    async function getKnowledgeBaseAnswer(query) {
      const searchResults = await searchKnowledgeBase(query);
    
      if (searchResults.length > 0 && searchResults[0].relevance > 0.8) {
        return {
          use_kb: true,
          answer: searchResults[0].content,
          confidence: searchResults[0].relevance
        };
      }
    
      return { use_kb: false };
    }
    
    // Enhance LLM prompt with knowledge base info
    const kbInfo = await getKnowledgeBaseAnswer(userMessage);
    const enhancedPrompt = kbInfo.use_kb
      ? `Based on our knowledge base: ${kbInfo.answer}\n\nUser question: ${userMessage}`
      : userMessage;
    

  2. Human Handoff

    // Detect when human intervention is needed
    function needsHumanHandoff(userMessage, aiResponse, context) {
      const handoffTriggers = [
        'speak to human',
        'real person',
        'representative',
        'complaint',
        'angry',
        'frustrated'
      ];
    
      return handoffTriggers.some(trigger =>
        userMessage.toLowerCase().includes(trigger)
      ) || context.state?.conversation_depth > 10;
    }
    

Educational Tutor#

  1. Adaptive Learning

    // Track learning progress and adapt responses
    function trackLearningProgress(response, topic, difficulty) {
      const progress = {
        topic: topic,
        difficulty: difficulty,
        comprehension_score: analyzeComprehension(response),
        timestamp: new Date().toISOString()
      };
    
      // Adjust future responses based on progress
      if (progress.comprehension_score < 0.6) {
        return {
          simplify: true,
          provide_examples: true,
          check_understanding: true
        };
      } else if (progress.comprehension_score > 0.8) {
        return {
          increase_difficulty: true,
          introduce_advanced_concepts: true
        };
      }
    }
    

  2. Interactive Exercises

    // Generate practice exercises based on topic
    function generateExercise(topic, difficulty) {
      const exercisePrompt = `
      Create a ${difficulty} level exercise about ${topic}.
      Include:
      1. A clear problem statement
      2. Step-by-step hints
      3. The correct solution
      4. Explanation of key concepts
      `;
    
      return generateLLMResponse(exercisePrompt);
    }
    

πŸ“Š Analytics and Monitoring#

Conversation Analytics#

  1. User Engagement Metrics

    // Track conversation metrics
    function trackConversationMetrics(conversation) {
      return {
        duration: calculateConversationDuration(conversation),
        message_count: conversation.conversation.length,
        user_satisfaction: analyzeUserSatisfaction(conversation),
        topic_distribution: analyzeTopics(conversation),
        response_quality: scoreResponseQuality(conversation)
      };
    }
    

  2. Performance Monitoring

    // Monitor LLM performance
    const performanceMetrics = {
      response_time: Date.now() - requestStartTime,
      token_usage: response.usage.total_tokens,
      model_used: model,
      api_cost: calculateCost(response.usage),
      error_rate: trackErrors()
    };
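The `calculateCost` helper above is not defined in this tutorial. A sketch of one way to implement it — the rates below are placeholders, so look up your provider's current per-token pricing before using them:

```javascript
// Example per-1K-token rates in USD (placeholders, not current prices)
const PRICING_PER_1K_TOKENS = {
  "gpt-3.5-turbo": { prompt: 0.0005, completion: 0.0015 }
};

function calculateCost(usage, model = "gpt-3.5-turbo") {
  const rates = PRICING_PER_1K_TOKENS[model];
  if (!rates) return null; // unknown model: no estimate available
  return (
    (usage.prompt_tokens / 1000) * rates.prompt +
    (usage.completion_tokens / 1000) * rates.completion
  );
}
```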
    

Quality Assurance#

  1. Response Quality Scoring
    // Evaluate AI response quality
    function scoreResponseQuality(userMessage, aiResponse) {
      const factors = {
        relevance: calculateRelevance(userMessage, aiResponse),
        completeness: checkCompleteness(userMessage, aiResponse),
        clarity: assessClarity(aiResponse),
        accuracy: verifyAccuracy(aiResponse),
        helpfulness: measureHelpfulness(aiResponse)
      };
    
      return {
        overall_score: Object.values(factors).reduce((a, b) => a + b) / Object.keys(factors).length,
        ...factors
      };
    }
    

🎨 User Interface Options#

Web Chat Interface#

  1. HTML Chat Widget

    <div id="chat-container">
      <div id="chat-messages"></div>
      <div id="chat-input-container">
        <input type="text" id="user-input" placeholder="Type your message...">
        <button onclick="sendMessage()">Send</button>
      </div>
    </div>
    
    <script>
      async function sendMessage() {
        const input = document.getElementById('user-input');
        const message = input.value.trim();
    
        if (message) {
          // Display user message
          addMessage('user', message);
    
          // Send to n8n webhook
          const response = await fetch('/webhook/chat', {
            method: 'POST',
            headers: { 'Content-Type': 'application/json' },
            body: JSON.stringify({ message: message })
          });
    
          const aiResponse = await response.json();
          addMessage('assistant', aiResponse.message);
    
          input.value = '';
        }
      }
    </script>
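The widget above calls `addMessage()` but never defines it. A minimal version — note that escaping matters here, because model output rendered as raw HTML is an XSS risk:

```javascript
// Escape HTML-significant characters so model output cannot inject markup
function escapeHtml(text) {
  return text
    .replace(/&/g, "&amp;")
    .replace(/</g, "&lt;")
    .replace(/>/g, "&gt;");
}

// Build the markup for a single chat bubble (class names are ours)
function renderMessageHtml(role, text) {
  return `<div class="message ${role}">${escapeHtml(text)}</div>`;
}

function addMessage(role, text) {
  const container = document.getElementById("chat-messages");
  container.insertAdjacentHTML("beforeend", renderMessageHtml(role, text));
  container.scrollTop = container.scrollHeight; // keep newest message visible
}
```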
    

  2. Advanced Features
     • Typing indicators
     • Message timestamps
     • File upload support
     • Conversation history
     • User preferences

Mobile App Integration#

  1. React Native Chat
    const ChatScreen = () => {
      const [messages, setMessages] = useState([]);
      const [inputText, setInputText] = useState('');
    
      const sendMessage = async () => {
        const userMessage = { text: inputText, user: true };
        setMessages([...messages, userMessage]);
    
        try {
          const response = await fetch(n8nWebhookUrl, {
            method: 'POST',
            body: JSON.stringify({ message: inputText })
          });
    
          const aiResponse = await response.json();
          setMessages(prev => [...prev, { text: aiResponse.message, user: false }]);
        } catch (error) {
          console.error('Error sending message:', error);
        }
    
        setInputText('');
      };
    
      return (
        <FlatList
          data={messages}
          renderItem={({ item }) => (
            <MessageBubble message={item.text} isUser={item.user} />
          )}
          keyExtractor={(item, index) => index.toString()}
          ListFooterComponent={
            <View style={styles.inputContainer}>
              <TextInput
                value={inputText}
                onChangeText={setInputText}
                placeholder="Type a message..."
              />
              <Button title="Send" onPress={sendMessage} />
            </View>
          }
        />
      );
    };
    

πŸ§ͺ Testing and Optimization#

Response Quality Testing#

  1. Test Scenarios

    const testCases = [
      {
        input: "Explain quantum computing in simple terms",
        expected_topics: ["quantum", "computing", "simple explanation"],
        min_length: 100,
        max_length: 500
      },
      {
        input: "Write a Python function to sort a list",
        expected_code: true,
        expected_language: "python"
      }
    ];
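A sketch of checking a live response against one of the test cases above. The topic check is plain substring search and the code check a rough keyword heuristic — real evaluation pipelines usually use an LLM judge or embeddings:

```javascript
// Validate a model response against a declarative test case
function checkResponse(testCase, responseText) {
  const text = responseText.toLowerCase();
  const problems = [];

  for (const topic of testCase.expected_topics || []) {
    if (!text.includes(topic.toLowerCase())) {
      problems.push(`missing topic: ${topic}`);
    }
  }
  if (testCase.min_length && responseText.length < testCase.min_length) {
    problems.push("response too short");
  }
  if (testCase.max_length && responseText.length > testCase.max_length) {
    problems.push("response too long");
  }
  // Very rough code check: look for common code keywords
  if (testCase.expected_code && !/\b(def|function|class|return)\b/.test(text)) {
    problems.push("expected code in the response");
  }
  return { passed: problems.length === 0, problems };
}
```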
    

  2. Performance Testing
     • Response time benchmarks
     • Token usage optimization
     • Cost analysis per interaction
     • Scalability testing

πŸ” Troubleshooting#

Common Issues#

API Rate Limits
  • Implement proper rate limiting
  • Use request queuing
  • Monitor usage metrics
  • Upgrade API plan if needed
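One simple way to stay under a provider's requests-per-minute cap is a client-side sliding-window limiter — a sketch, not a full queue (the function names are ours):

```javascript
// Sliding-window rate limiter: allow at most maxRequests per windowMs
function createRateLimiter(maxRequests, windowMs) {
  const timestamps = [];
  return function allowRequest(now = Date.now()) {
    // Drop timestamps that have left the window
    while (timestamps.length && now - timestamps[0] >= windowMs) {
      timestamps.shift();
    }
    if (timestamps.length < maxRequests) {
      timestamps.push(now);
      return true;
    }
    return false; // caller should queue or retry later
  };
}
```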

Poor Response Quality
  • Refine system prompts
  • Add better context management
  • Implement feedback mechanisms
  • Use temperature tuning

High Latency
  • Optimize API calls
  • Use streaming responses
  • Implement caching
  • Consider edge deployment

Context Loss
  • Improve conversation state management
  • Use vector databases for long-term memory
  • Implement context compression
  • Run regular context cleanup
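One simple form of context compression is keeping only the most recent messages that fit a size budget. The sketch below budgets by character count; the same shape works with a real tokenizer:

```javascript
// Keep the newest messages that fit within a character budget
function compressContext(conversation, maxChars) {
  const kept = [];
  let used = 0;
  // Walk backwards so the newest messages survive
  for (let i = conversation.length - 1; i >= 0; i--) {
    const len = conversation[i].content.length;
    if (used + len > maxChars) break;
    kept.unshift(conversation[i]);
    used += len;
  }
  return kept;
}
```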

πŸ›‘οΈ Security and Privacy#

Data Protection#

  1. User Privacy
     • Anonymize user data
     • Implement data retention policies
     • Secure data transmission
     • Obtain proper consent

  2. Content Filtering

    // Implement content moderation
    function moderateContent(text) {
      const prohibitedContent = [
        'hate speech',
        'violence',
        'illegal activities',
        'personal data requests'
      ];
    
      return prohibitedContent.some(content =>
        text.toLowerCase().includes(content)
      );
    }
    

API Security#

  1. Access Control
     • Secure API key management
     • Implement authentication
     • Rate limiting per user
     • Monitor for abuse

  2. Input Validation
     • Sanitize user inputs
     • Validate message length
     • Filter malicious content
     • Prevent prompt injection
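A sketch of basic input validation before a message reaches the LLM. Pattern matching like this only catches naive prompt injection — treat it as one layer of defense, not a complete one:

```javascript
// Validate and sanitize a user message before forwarding it to the LLM
function validateUserInput(message, maxLength = 2000) {
  if (typeof message !== "string" || message.trim().length === 0) {
    return { ok: false, reason: "empty message" };
  }
  if (message.length > maxLength) {
    return { ok: false, reason: "message too long" };
  }
  // Naive prompt-injection patterns; a real filter needs far more
  const injectionPatterns = [
    /ignore (all )?previous instructions/i,
    /you are now/i,
    /system prompt/i
  ];
  if (injectionPatterns.some(p => p.test(message))) {
    return { ok: false, reason: "possible prompt injection" };
  }
  // Strip control characters before forwarding
  return { ok: true, message: message.replace(/[\u0000-\u001f\u007f]/g, " ").trim() };
}
```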

πŸŽ“ Advanced Features#

Multimodal Capabilities#

  1. Image Analysis
     • Process uploaded images
     • Generate image descriptions
     • Answer questions about images
     • Create visual content

  2. Voice Integration
     • Speech-to-text input
     • Text-to-speech responses
     • Voice command recognition
     • Real-time voice chat

Personalization#

  1. User Profiling

    // Build user profiles for personalization
    function updateUserProfile(userId, message, response) {
      const profile = getUserProfile(userId);
    
      profile.interests = updateInterests(profile.interests, message);
      profile.communication_style = analyzeStyle(message, response);
      profile.knowledge_level = assessKnowledgeLevel(response);
      profile.preferred_topics = extractTopics(message);
    
      saveUserProfile(userId, profile);
    }
    

  2. Adaptive Responses
     • Learn user preferences
     • Adjust response style
     • Remember past interactions
     • Predict user needs


Related Tutorials:
  • Form Submission - Learn about form handling
  • Email Integration - Email notification setup

Resources:
  • OpenAI API Documentation
  • Google Gemini API
  • n8n AI Integration Guide