🧠 AI Nodes in n8n
n8n provides many AI nodes for integrating LLMs into workflows. This lesson introduces the available nodes and how to use them.
Available AI Nodes
Diagram
graph TD
AI[AI in n8n] --> LLM[LLM Nodes]
AI --> Agent[Agent Nodes]
AI --> Memory[Memory Nodes]
AI --> Tools[Tool Nodes]
LLM --> OpenAI
LLM --> Claude[Anthropic Claude]
LLM --> Ollama[Ollama/Local]
Agent --> Basic[Basic Agent]
Agent --> Conv[Conversational]
Agent --> Tools2[Tools Agent]
LLM Nodes
1. OpenAI Node
Setup:
- Add OpenAI node
- Connect OpenAI credential (API key)
- Select model
Configuration:
JavaScript
// Model options
- gpt-4o (latest, best)
- gpt-4o-mini (fast, cheap)
- gpt-4-turbo (powerful)
- gpt-3.5-turbo (budget)

// Parameters
Temperature: 0.7   // 0-2, creativity
Max Tokens: 1000   // Response length
Top P: 1           // Nucleus sampling
Example Usage:
JavaScript
// Input
{
  "messages": [
    { "role": "system", "content": "You are a helpful assistant" },
    { "role": "user", "content": "Explain AI in simple terms" }
  ]
}

// Output
{
  "content": "AI, or Artificial Intelligence, is like teaching computers to think...",
  "model": "gpt-4o-mini",
  "usage": { "total_tokens": 150 }
}
2. Anthropic Claude Node
Setup:
- Add Anthropic node
- Add Claude API key
- Configure model
Models:
- claude-3-5-sonnet (best balance)
- claude-3-opus (most powerful)
- claude-3-haiku (fastest)
Configuration:
JavaScript
Model: claude-3-5-sonnet-20241022
Max Tokens: 4096
Temperature: 0.7
System Prompt: "You are a Vietnamese customer support agent..."
3. Ollama (Local LLMs)
Run LLMs locally:
Setup:
- Install Ollama
- Pull a model: ollama pull llama2
- Configure the n8n Ollama node (see the quick check below to confirm the model responds)
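Before configuring the node, you can confirm the local model responds by calling Ollama's HTTP API directly. A minimal sketch for Node.js 18+ (assumes the default port 11434 and the llama2 model pulled above; inside n8n, an HTTP Request node pointed at the same endpoint does the same job):
JavaScript
// Quick check that the local Ollama server answers (Node.js 18+, built-in fetch)
async function checkOllama() {
  const response = await fetch("http://localhost:11434/api/generate", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      model: "llama2",
      prompt: "Say hello in one short sentence.",
      stream: false          // return one JSON object instead of a stream
    })
  });

  const data = await response.json();
  console.log(data.response); // Ollama returns the generated text in "response"
}

checkOllama();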
Popular Models:
- llama2 (general purpose)
- codellama (code generation)
- mistral (efficient)
- phi (small but capable)
JavaScript
// Ollama node config
Base URL: http://localhost:11434
Model: llama2
Keep Alive: 5m
First AI Workflow
Simple Text Generation
Diagram
graph LR
T[Manual Trigger] --> I[Input Data]
I --> O[OpenAI Node]
O --> R[Output Result]
Step 1: Manual Trigger
- Click to start workflow
Step 2: Set Node (Input)
JavaScript
1{2 "prompt": "Write a product description for: {{ $json.product_name }}",3 "product_name": "Vietnamese Coffee Filter Set"4}Step 3: OpenAI Node
JavaScript
// Messages configuration
[
  {
    "role": "system",
    "content": "You are a marketing copywriter specializing in Vietnamese products."
  },
  {
    "role": "user",
    "content": "{{ $json.prompt }}"
  }
]
Step 4: Output
- Generated product description
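For example, a Code node placed right after the OpenAI node could pick up the generated text and shape it for whatever comes next. A minimal sketch (the exact output field can vary between node versions, so inspect your node's actual output first):
JavaScript
// Read the generated description from the OpenAI node output
// (field path assumed; check the output structure of your node version)
const aiOutput = $input.first().json;
const description = aiOutput.message.content;

return [{
  json: {
    product_name: "Vietnamese Coffee Filter Set",
    description: description.trim()
  }
}];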
Prompt Templates
Using Expressions
JavaScript
// Dynamic prompts with n8n expressions
System: "You are an assistant for {{ $env.COMPANY_NAME }}"
User: "Help customer {{ $json.customer_name }} with: {{ $json.question }}"
Template Node
JavaScript
// Code node for complex prompts
const customerData = $input.first().json;

const prompt = `
You are a customer support agent.

Customer Information:
- Name: ${customerData.name}
- Account Type: ${customerData.accountType}
- Previous Issues: ${customerData.issueCount}

Current Request:
${customerData.message}

Guidelines:
- Be polite and professional
- Refer to customer by name
- If VIP customer (Premium account), offer priority support
`;

return [{ json: { prompt } }];
Handling Responses
Extract Content
JavaScript
// OpenAI output structure
{
  "message": {
    "content": "Generated text here...",
    "role": "assistant"
  },
  "usage": {
    "prompt_tokens": 50,
    "completion_tokens": 100,
    "total_tokens": 150
  }
}

// Access in next node
{{ $json.message.content }}
Parse Structured Output
JavaScript
// Request JSON output
System: "Always respond in JSON format with keys: summary, sentiment, action_items"

// Parse response
const response = $input.first().json.message.content;

try {
  const parsed = JSON.parse(response);
  return [{ json: parsed }];
} catch (e) {
  return [{ json: { error: "Failed to parse JSON", raw: response } }];
}
Cost Tracking
Monitor Token Usage
JavaScript
// After the OpenAI node
const usage = $input.first().json.usage;

// Calculate cost (GPT-4o-mini pricing)
const inputCost = (usage.prompt_tokens / 1000) * 0.00015;
const outputCost = (usage.completion_tokens / 1000) * 0.0006;
const totalCost = inputCost + outputCost;

// Log to database or tracking service
return [{
  json: {
    ...$input.first().json,
    cost: {
      input_tokens: usage.prompt_tokens,
      output_tokens: usage.completion_tokens,
      total_cost_usd: totalCost.toFixed(6)
    }
  }
}];
Cost Optimization
JavaScript
// Strategy 1: Use smaller models for simple tasks
const task = $json.task_type;
const model = task === "simple_classification"
  ? "gpt-3.5-turbo"
  : "gpt-4o-mini";

// Strategy 2: Limit max tokens
Max Tokens: 500 // Don't allow long responses if not needed

// Strategy 3: Cache similar requests
// Check Redis/database for cached response before calling API
Error Handling
JavaScript
// Wrap AI calls in error handling
// (openAICall is a placeholder for your actual HTTP Request / OpenAI call)
const sleep = (ms) => new Promise((resolve) => setTimeout(resolve, ms));

try {
  const result = await openAICall();
  return [{ json: { success: true, result } }];
} catch (error) {
  if (error.code === 'rate_limit_exceeded') {
    // Wait and retry
    await sleep(5000);
    return [{ json: { retry: true } }];
  }

  if (error.code === 'context_length_exceeded') {
    // Truncate input
    return [{ json: { error: "Input too long", suggestion: "Shorten prompt" } }];
  }

  return [{ json: { error: error.message } }];
}
Best Practices
AI Node Tips
- Start simple - Test prompts before adding complexity
- Use system prompts - Set clear context
- Validate outputs - Check and parse responses
- Track costs - Monitor token usage
- Handle errors - Rate limits, timeouts
- Cache when possible - Same input = same output (see the sketch below)
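A minimal caching sketch for a Code node, using n8n workflow static data as the cache (note that static data only persists between production executions, and the cache key scheme here is a simple assumption; swap in Redis or a database for anything beyond small-scale use):
JavaScript
// Cache prompt -> response pairs in workflow static data
const staticData = $getWorkflowStaticData('global');
staticData.aiCache = staticData.aiCache || {};

const prompt = $json.prompt;
const cacheKey = prompt.trim().toLowerCase();

if (staticData.aiCache[cacheKey]) {
  // Cache hit: skip the API call entirely
  return [{ json: { response: staticData.aiCache[cacheKey], cached: true } }];
}

// Cache miss: pass the prompt through, let a later node call the model,
// then write the result back into staticData.aiCache[cacheKey]
return [{ json: { prompt, cached: false } }];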
Practice Exercise
Hands-on Exercise
Build Content Generator Workflow:
- Input: Blog topic
- OpenAI: Generate outline
- OpenAI: Write each section
- Combine into full article (see the sketch after this list)
- Track total tokens/cost
Target: Automated blog writing pipeline
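A possible starting point for the combine step, assuming each generated section arrives as a separate item with hypothetical title, section_text, and usage fields (adjust the field names to match your actual OpenAI node output):
JavaScript
// Merge all incoming section items into one article and sum token usage
const items = $input.all();

let article = "";
let totalTokens = 0;

for (const item of items) {
  const { title, section_text, usage } = item.json;
  article += `## ${title}\n\n${section_text}\n\n`;
  totalTokens += usage?.total_tokens ?? 0;
}

return [{
  json: {
    article: article.trim(),
    total_tokens: totalTokens
  }
}];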
Next
Next lesson: OpenAI Setup - a deep dive into OpenAI configuration.
