Theory
30 minutes
Lesson 1/3

AI Nodes Overview

An overview of the AI nodes in n8n - OpenAI, Claude, local LLMs

🧠 AI Nodes in n8n

n8n provides a range of AI nodes for integrating LLMs into workflows. This lesson introduces those nodes and how to use them.

Available AI Nodes

Diagram
graph TD
    AI[AI in n8n] --> LLM[LLM Nodes]
    AI --> Agent[Agent Nodes]
    AI --> Memory[Memory Nodes]
    AI --> Tools[Tool Nodes]
    
    LLM --> OpenAI
    LLM --> Claude[Anthropic Claude]
    LLM --> Ollama[Ollama/Local]
    
    Agent --> Basic[Basic Agent]
    Agent --> Conv[Conversational]
    Agent --> Tools2[Tools Agent]

LLM Nodes

1. OpenAI Node

Setup:

  1. Add OpenAI node
  2. Connect OpenAI credential (API key)
  3. Select model

Configuration:

JavaScript
// Model options
- gpt-4o (latest, best)
- gpt-4o-mini (fast, cheap)
- gpt-4-turbo (powerful)
- gpt-3.5-turbo (budget)

// Parameters
Temperature: 0.7  // 0-2, creativity
Max Tokens: 1000  // Response length
Top P: 1          // Nucleus sampling
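These node settings map onto fields of the underlying Chat Completions request. A minimal sketch of the payload shape the node assembles (field names follow the OpenAI API; the message text is illustrative):

```javascript
// Sketch of the Chat Completions request body the OpenAI node assembles.
// Field names follow the OpenAI API; the message text is illustrative.
const request = {
  model: "gpt-4o-mini",   // model selected in the node
  temperature: 0.7,       // 0-2: higher = more creative
  max_tokens: 1000,       // cap on response length
  top_p: 1,               // nucleus sampling threshold
  messages: [
    { role: "system", content: "You are a helpful assistant" },
    { role: "user", content: "Explain AI in simple terms" },
  ],
};

console.log(request.messages.length); // 2
```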

Example Usage:

JavaScript
// Input
{
  "messages": [
    { "role": "system", "content": "You are a helpful assistant" },
    { "role": "user", "content": "Explain AI in simple terms" }
  ]
}

// Output
{
  "content": "AI, or Artificial Intelligence, is like teaching computers to think...",
  "model": "gpt-4o-mini",
  "usage": { "total_tokens": 150 }
}

2. Anthropic Claude Node

Setup:

  1. Add Anthropic node
  2. Add Claude API key
  3. Configure model

Models:

  • claude-3-5-sonnet (best balance)
  • claude-3-opus (most powerful)
  • claude-3-haiku (fastest)

Configuration:

JavaScript
Model: claude-3-5-sonnet-20241022
Max Tokens: 4096
Temperature: 0.7
System Prompt: "You are a Vietnamese customer support agent..."
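For reference, Anthropic's Messages API shapes the request slightly differently from OpenAI's: the system prompt is a top-level field rather than a message with a `system` role. A minimal sketch (the user message here is illustrative):

```javascript
// Sketch of an Anthropic Messages API request body.
// Unlike OpenAI, the system prompt is a top-level field, not a message role.
const request = {
  model: "claude-3-5-sonnet-20241022",
  max_tokens: 4096,
  temperature: 0.7,
  system: "You are a Vietnamese customer support agent...",
  messages: [
    { role: "user", content: "My order hasn't arrived yet." }, // illustrative
  ],
};

console.log(Object.keys(request).length); // 5
```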

3. Ollama (Local LLMs)

Run LLMs locally:

Setup:

  1. Install Ollama
  2. Pull model: ollama pull llama2
  3. Configure n8n Ollama node

Popular Models:

  • llama2 (general purpose)
  • codellama (code generation)
  • mistral (efficient)
  • phi (small but capable)

JavaScript
// Ollama node config
Base URL: http://localhost:11434
Model: llama2
Keep Alive: 5m
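Under the hood, the node talks to Ollama's local REST API. A sketch of the request it builds, assuming the default `/api/generate` endpoint (nothing is actually sent here, so Ollama does not need to be running):

```javascript
// Build the HTTP request the Ollama node sends to the local server.
// Endpoint and fields follow Ollama's REST API (/api/generate).
function buildOllamaRequest(prompt, model = "llama2") {
  return {
    url: "http://localhost:11434/api/generate",
    body: {
      model,
      prompt,
      stream: false,     // return one JSON response instead of a stream
      keep_alive: "5m",  // keep the model loaded between calls
    },
  };
}

const req = buildOllamaRequest("Summarize n8n in one sentence");
console.log(req.body.model); // llama2
```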

First AI Workflow

Simple Text Generation

Diagram
graph LR
    T[Manual Trigger] --> I[Input Data]
    I --> O[OpenAI Node]
    O --> R[Output Result]

Step 1: Manual Trigger

  • Click to start workflow

Step 2: Set Node (Input)

JavaScript
{
  "prompt": "Write a product description for: {{ $json.product_name }}",
  "product_name": "Vietnamese Coffee Filter Set"
}

Step 3: OpenAI Node

JavaScript
// Messages configuration
[
  {
    "role": "system",
    "content": "You are a marketing copywriter specializing in Vietnamese products."
  },
  {
    "role": "user",
    "content": "{{ $json.prompt }}"
  }
]

Step 4: Output

  • Generated product description

Prompt Templates

Using Expressions

JavaScript
// Dynamic prompts with n8n expressions
System: "You are an assistant for {{ $env.COMPANY_NAME }}"
User: "Help customer {{ $json.customer_name }} with: {{ $json.question }}"

Template Node

JavaScript
// Code node for complex prompts
const customerData = $input.first().json;

const prompt = `
You are a customer support agent.

Customer Information:
- Name: ${customerData.name}
- Account Type: ${customerData.accountType}
- Previous Issues: ${customerData.issueCount}

Current Request:
${customerData.message}

Guidelines:
- Be polite and professional
- Refer to customer by name
- If VIP customer (Premium account), offer priority support
`;

return [{ json: { prompt } }];

Handling Responses

Extract Content

JavaScript
// OpenAI output structure
{
  "message": {
    "content": "Generated text here...",
    "role": "assistant"
  },
  "usage": {
    "prompt_tokens": 50,
    "completion_tokens": 100,
    "total_tokens": 150
  }
}

// Access in next node
{{ $json.message.content }}

Parse Structured Output

JavaScript
// Request JSON output
System: "Always respond in JSON format with keys: summary, sentiment, action_items"

// Parse response
const response = $input.first().json.message.content;

try {
  const parsed = JSON.parse(response);
  return [{ json: parsed }];
} catch (e) {
  return [{ json: { error: "Failed to parse JSON", raw: response } }];
}
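One common failure mode: even when told to respond in JSON, models sometimes wrap the output in markdown code fences, which breaks `JSON.parse`. A small helper that strips fences before parsing (`parseModelJson` is a name chosen here, not an n8n built-in):

```javascript
// Models sometimes wrap JSON in ```json fences; strip them before parsing.
// parseModelJson is a helper name chosen for this sketch, not an n8n built-in.
function parseModelJson(text) {
  const cleaned = text
    .replace(/^```(?:json)?\s*/i, "")  // leading fence, with optional "json" tag
    .replace(/```\s*$/, "")            // trailing fence
    .trim();
  try {
    return { ok: true, data: JSON.parse(cleaned) };
  } catch (e) {
    return { ok: false, error: "Failed to parse JSON", raw: text };
  }
}

const r = parseModelJson('```json\n{"sentiment":"positive"}\n```');
console.log(r.ok); // true
```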

Cost Tracking

Monitor Token Usage

JavaScript
// After OpenAI node
const usage = $input.first().json.usage;

// Calculate cost (GPT-4o-mini pricing)
const inputCost = (usage.prompt_tokens / 1000) * 0.00015;
const outputCost = (usage.completion_tokens / 1000) * 0.0006;
const totalCost = inputCost + outputCost;

// Log to database or tracking service
return [{
  json: {
    ...$input.first().json,
    cost: {
      input_tokens: usage.prompt_tokens,
      output_tokens: usage.completion_tokens,
      total_cost_usd: totalCost.toFixed(6)
    }
  }
}];

Cost Optimization

JavaScript
// Strategy 1: Use smaller models for simple tasks
const task = $json.task_type;
const model = task === "simple_classification"
  ? "gpt-3.5-turbo"
  : "gpt-4o-mini";

// Strategy 2: Limit max tokens
Max Tokens: 500 // Don't allow long responses if not needed

// Strategy 3: Cache similar requests
// Check Redis/database for cached response before calling API

Error Handling

JavaScript
// Wrap AI calls in error handling
// (openAICall() and sleep() are placeholders for your actual call and delay helper)
try {
  const result = await openAICall();
  return [{ json: { success: true, result } }];
} catch (error) {
  if (error.code === 'rate_limit_exceeded') {
    // Wait and retry
    await sleep(5000);
    return [{ json: { retry: true } }];
  }

  if (error.code === 'context_length_exceeded') {
    // Truncate input
    return [{ json: { error: "Input too long", suggestion: "Shorten prompt" } }];
  }

  return [{ json: { error: error.message } }];
}
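The rate-limit branch above can be generalized into a retry helper with exponential backoff. A sketch under the assumption that rate-limit errors carry `code === "rate_limit_exceeded"` (as in the block above); `callFn` is whatever function performs your AI call:

```javascript
// Retry a call with exponential backoff on rate-limit errors.
// callFn and the error shape are placeholders for your actual AI call.
async function withRetry(callFn, maxAttempts = 3, baseDelayMs = 1000) {
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    try {
      return await callFn();
    } catch (error) {
      const retryable = error.code === "rate_limit_exceeded";
      if (!retryable || attempt === maxAttempts) throw error;
      // Exponential backoff: 1s, 2s, 4s, ... (with the default base delay)
      await new Promise((r) => setTimeout(r, baseDelayMs * 2 ** (attempt - 1)));
    }
  }
}
```

Non-retryable errors (such as `context_length_exceeded`) are rethrown immediately so the workflow can handle them as in the block above.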

Best Practices

AI Node Tips
  1. Start simple - Test prompts before adding complexity
  2. Use system prompts - Set clear context
  3. Validate outputs - Check and parse responses
  4. Track costs - Monitor token usage
  5. Handle errors - Rate limits, timeouts
  6. Cache when possible - Same input = same output

Practice Exercise

Hands-on Exercise

Build Content Generator Workflow:

  1. Input: Blog topic
  2. OpenAI: Generate outline
  3. OpenAI: Write each section
  4. Combine into full article
  5. Track total tokens/cost

Target: Automated blog writing pipeline
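Steps 4-5 of the exercise can be sketched as a Code node that merges the generated sections and totals the token usage. The `sections` array below is mock data shaped like OpenAI node output items, not real model output:

```javascript
// Merge generated sections into one article and total the token usage.
// `sections` is mock data shaped like OpenAI node output items.
const sections = [
  { json: { message: { content: "Intro..." }, usage: { total_tokens: 120 } } },
  { json: { message: { content: "Body..." }, usage: { total_tokens: 340 } } },
];

const article = sections.map((s) => s.json.message.content).join("\n\n");
const totalTokens = sections.reduce(
  (sum, s) => sum + s.json.usage.total_tokens,
  0
);

// In an n8n Code node you would end with:
// return [{ json: { article, totalTokens } }];
console.log(totalTokens); // 460
```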


Next

Next lesson: OpenAI Setup - a deep dive into OpenAI configuration.