🗄️ Database Setup & Optimization
PostgreSQL is the recommended database for production n8n. This article covers detailed setup, backup strategies, and performance optimization.
Why PostgreSQL?
Comparison:
Text
                SQLITE          POSTGRESQL
────────────────────────────────────────────────
Performance     Good for dev    Production-ready
Concurrency     Limited         Excellent
Scalability     Single file     Horizontal OK
Backup          Copy file       Advanced tools
Replication     None            Built-in
Max Size        ~140TB          Unlimited
Best For        Development     Production

VERDICT: Always use PostgreSQL for production!
PostgreSQL Installation
Via Docker (Recommended):
yaml
# docker-compose.yml
services:
  postgres:
    image: postgres:15-alpine
    container_name: n8n-postgres
    restart: unless-stopped
    environment:
      POSTGRES_USER: n8n
      POSTGRES_PASSWORD: ${POSTGRES_PASSWORD}
      POSTGRES_DB: n8n
    volumes:
      - postgres_data:/var/lib/postgresql/data
      - ./init.sql:/docker-entrypoint-initdb.d/init.sql:ro
    ports:
      - "5432:5432"  # Remove in production
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U n8n"]
      interval: 10s
      timeout: 5s
      retries: 5

volumes:
  postgres_data:
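To connect n8n to this database, point its DB_* environment variables at the postgres service. A minimal sketch using n8n's PostgreSQL settings (values must match the compose file above):
Bash
# .env-style settings for the n8n service
DB_TYPE=postgresdb
DB_POSTGRESDB_HOST=postgres          # service name from docker-compose.yml
DB_POSTGRESDB_PORT=5432
DB_POSTGRESDB_DATABASE=n8n
DB_POSTGRESDB_USER=n8n
DB_POSTGRESDB_PASSWORD=${POSTGRES_PASSWORD}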
Initialization Script:
SQL
-- init.sql
-- Runs on first startup only

-- Create extension for UUID support
CREATE EXTENSION IF NOT EXISTS "uuid-ossp";

-- Create additional indexes for performance
-- (n8n creates its own, but these help)

-- Optional: Create read-only user for reporting
CREATE USER n8n_readonly WITH PASSWORD 'readonly_password';
GRANT CONNECT ON DATABASE n8n TO n8n_readonly;
GRANT USAGE ON SCHEMA public TO n8n_readonly;
GRANT SELECT ON ALL TABLES IN SCHEMA public TO n8n_readonly;
ALTER DEFAULT PRIVILEGES IN SCHEMA public GRANT SELECT ON TABLES TO n8n_readonly;
Via Package Manager (Bare Metal):
Bash
# Ubuntu/Debian
sudo apt update
sudo apt install postgresql postgresql-contrib

# Start service
sudo systemctl start postgresql
sudo systemctl enable postgresql

# Create user and database
sudo -u postgres psql << EOF
CREATE USER n8n WITH PASSWORD 'secure_password';
CREATE DATABASE n8n OWNER n8n;
GRANT ALL PRIVILEGES ON DATABASE n8n TO n8n;
EOF
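To confirm the user and database were created correctly, a quick connection test (assuming password auth is allowed in pg_hba.conf):
Bash
# Should report: database "n8n", user "n8n"
psql -h localhost -U n8n -d n8n -c '\conninfo'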
PostgreSQL Configuration
Recommended postgresql.conf Settings:
ini
# /var/lib/postgresql/data/postgresql.conf

# Memory Settings (Adjust based on RAM)
# For 4GB RAM server:
shared_buffers = 1GB                  # 25% of RAM
effective_cache_size = 3GB            # 75% of RAM
maintenance_work_mem = 256MB
work_mem = 64MB

# Connections
max_connections = 100

# Write Ahead Log
wal_buffers = 64MB
min_wal_size = 1GB
max_wal_size = 4GB
checkpoint_completion_target = 0.9

# Query Planner
random_page_cost = 1.1                # For SSD
effective_io_concurrency = 200        # For SSD

# Logging
log_min_duration_statement = 1000     # Log slow queries (ms)
log_line_prefix = '%t [%p]: [%l-1] user=%u,db=%d '
Docker PostgreSQL Config:
yaml
services:
  postgres:
    image: postgres:15-alpine
    command:
      - "postgres"
      - "-c"
      - "shared_buffers=256MB"
      - "-c"
      - "effective_cache_size=768MB"
      - "-c"
      - "maintenance_work_mem=128MB"
      - "-c"
      - "checkpoint_completion_target=0.9"
      - "-c"
      - "wal_buffers=16MB"
      - "-c"
      - "default_statistics_target=100"
      - "-c"
      - "random_page_cost=1.1"
      - "-c"
      - "effective_io_concurrency=200"
      - "-c"
      - "min_wal_size=1GB"
      - "-c"
      - "max_wal_size=4GB"
    # ... rest of config
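After recreating the container, it is worth confirming that the flags actually took effect. A minimal check:
Bash
# Recreate postgres so the new command flags apply
docker compose up -d postgres

# Should print 256MB with the config above
docker exec n8n-postgres psql -U n8n -d n8n -c 'SHOW shared_buffers;'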
n8n Database Schema
Main Tables:
SQL
-- Core tables created by n8n:

workflow_entity       -- Workflow definitions
├── id
├── name
├── active
├── nodes (JSON)
├── connections (JSON)
└── settings

credentials_entity    -- Encrypted credentials
├── id
├── name
├── type
├── data (encrypted)
└── nodesAccess

execution_entity      -- Execution history
├── id
├── workflowId
├── finished
├── mode
├── startedAt
├── stoppedAt
├── workflowData (JSON)
└── data (JSON, can be large!)

webhook_entity        -- Registered webhooks
├── webhookPath
├── method
├── node
└── workflowId
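You can inspect these tables on a live instance; a quick look (container name from the compose file above):
Bash
# List n8n's tables inside the running container
docker exec n8n-postgres psql -U n8n -d n8n -c '\dt'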
Database Size Monitoring:
SQL
-- Check total database size
SELECT pg_size_pretty(pg_database_size('n8n'));

-- Check table sizes
SELECT
    relname AS table_name,
    pg_size_pretty(pg_total_relation_size(relid)) AS total_size,
    pg_size_pretty(pg_relation_size(relid)) AS data_size,
    pg_size_pretty(pg_indexes_size(relid)) AS index_size
FROM pg_catalog.pg_statio_user_tables
ORDER BY pg_total_relation_size(relid) DESC;

-- execution_entity is usually largest!
Backup Strategies
Method 1: pg_dump (Logical):
Bash
#!/bin/bash
# backup.sh

DATE=$(date +%Y%m%d_%H%M%S)
BACKUP_DIR=/backups
DB_HOST=postgres
DB_USER=n8n
DB_NAME=n8n

# Create backup
PGPASSWORD=$POSTGRES_PASSWORD pg_dump \
  -h $DB_HOST \
  -U $DB_USER \
  -d $DB_NAME \
  -F c \
  -f "$BACKUP_DIR/n8n_$DATE.dump"

# Keep only last 7 days
find $BACKUP_DIR -name "n8n_*.dump" -mtime +7 -delete

echo "Backup complete: n8n_$DATE.dump"
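A backup you have never read back is not a real backup. A quick integrity check lists the dump's table of contents without restoring anything (the filename is an example):
Bash
# A corrupt custom-format dump fails here immediately
pg_restore --list /backups/n8n_20240115_020000.dump | head -n 20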
Method 2: pg_basebackup (Physical):
Bash
#!/bin/bash
# For full cluster backup (larger, faster restore)
# Note: replication_user must exist with the REPLICATION privilege
# Flags: -Ft = tar format, -z = gzip, -P = show progress

DATE=$(date +%Y%m%d_%H%M%S)
BACKUP_DIR=/backups/base

pg_basebackup \
  -h postgres \
  -U replication_user \
  -D "$BACKUP_DIR/n8n_$DATE" \
  -Ft \
  -z \
  -P
Method 3: Docker Volume Backup:
Bash
#!/bin/bash
# Backup entire volume

# Stop n8n (optional, for consistency)
docker compose stop n8n

# Backup volume
docker run --rm \
  -v n8n_postgres_data:/source:ro \
  -v $(pwd)/backups:/backup \
  alpine tar czf /backup/postgres_$(date +%Y%m%d).tar.gz -C /source .

# Restart
docker compose start n8n
Automated Backup with Cron:
Bash
# crontab -e

# Daily backup at 2 AM
0 2 * * * /opt/n8n/backup.sh >> /var/log/n8n-backup.log 2>&1

# Weekly full backup at 3 AM Sunday
0 3 * * 0 /opt/n8n/full-backup.sh >> /var/log/n8n-backup.log 2>&1
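Cron failures are silent by default, so consider wrapping the job so a failed backup raises an alert. A sketch; the webhook URL is a placeholder:
Bash
#!/bin/bash
# backup-with-alert.sh
if ! /opt/n8n/backup.sh; then
  # Notify any alerting endpoint you have (placeholder URL)
  curl -fsS -X POST "https://example.com/hooks/backup-failed" \
    -d "n8n database backup failed at $(date)"
fi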
Docker Backup Service:
yaml
services:
  backup:
    image: postgres:15-alpine
    volumes:
      - ./backups:/backups
      - ./backup.sh:/backup.sh:ro
    environment:
      - POSTGRES_PASSWORD=${POSTGRES_PASSWORD}
    entrypoint: /bin/sh
    command: >
      -c "while true; do
      sleep 86400;
      /backup.sh;
      done"
    depends_on:
      - postgres
Restore Procedures
From pg_dump:
Bash
# Restore to existing database
PGPASSWORD=$POSTGRES_PASSWORD pg_restore \
  -h postgres \
  -U n8n \
  -d n8n \
  -c \
  --if-exists \
  /backups/n8n_20240115.dump

# Or to new database
createdb -h postgres -U n8n n8n_restored
pg_restore -h postgres -U n8n -d n8n_restored /backups/n8n_20240115.dump
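After a restore, do a quick sanity check that the core tables came back. A minimal sketch:
Bash
# Expect non-zero counts on a real instance
PGPASSWORD=$POSTGRES_PASSWORD psql -h postgres -U n8n -d n8n \
  -c 'SELECT count(*) AS workflows FROM workflow_entity;' \
  -c 'SELECT count(*) AS credentials FROM credentials_entity;'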
From Volume Backup:
Bash
# Stop services
docker compose down

# Remove old volume
docker volume rm n8n_postgres_data

# Create new volume
docker volume create n8n_postgres_data

# Restore
docker run --rm \
  -v n8n_postgres_data:/target \
  -v $(pwd)/backups:/backup:ro \
  alpine sh -c "cd /target && tar xzf /backup/postgres_20240115.tar.gz"

# Start services
docker compose up -d
Performance Optimization
Execution Data Pruning:
Bash
# n8n environment variables
EXECUTIONS_DATA_PRUNE=true
EXECUTIONS_DATA_MAX_AGE=168                  # 7 days in hours
EXECUTIONS_DATA_SAVE_ON_ERROR=all
EXECUTIONS_DATA_SAVE_ON_SUCCESS=none         # Saves space!
EXECUTIONS_DATA_SAVE_MANUAL_EXECUTIONS=true
Manual Cleanup:
SQL
-- Delete old executions (BE CAREFUL!)
DELETE FROM execution_entity
WHERE "stoppedAt" < NOW() - INTERVAL '30 days';

-- Vacuum to reclaim space
-- (VACUUM FULL takes an exclusive lock; run during a maintenance window)
VACUUM FULL execution_entity;

-- Reindex
REINDEX TABLE execution_entity;
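On a large execution_entity, one huge DELETE holds locks and bloats the WAL; deleting in batches is gentler. A sketch (the 10000 batch size is an arbitrary choice):
Bash
#!/bin/bash
# prune-executions.sh: delete old executions in batches
while true; do
  DELETED=$(PGPASSWORD=$POSTGRES_PASSWORD psql -h postgres -U n8n -d n8n -t -A -c \
    "WITH doomed AS (
       DELETE FROM execution_entity
       WHERE id IN (
         SELECT id FROM execution_entity
         WHERE \"stoppedAt\" < NOW() - INTERVAL '30 days'
         LIMIT 10000
       )
       RETURNING 1
     )
     SELECT count(*) FROM doomed;")
  echo "Deleted $DELETED rows"
  [ "$DELETED" -lt 10000 ] && break
done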
Performance Queries:
SQL
-- Check slow queries (enable pg_stat_statements)
SELECT
    query,
    calls,
    mean_exec_time,
    total_exec_time
FROM pg_stat_statements
ORDER BY total_exec_time DESC
LIMIT 10;

-- Check index usage
SELECT
    schemaname,
    relname,
    indexrelname,
    idx_scan,
    idx_tup_read,
    idx_tup_fetch
FROM pg_stat_user_indexes
WHERE schemaname = 'public'
ORDER BY idx_scan DESC;

-- Check table bloat
SELECT
    schemaname,
    tablename,
    pg_size_pretty(pg_total_relation_size(schemaname || '.' || tablename)) AS size
FROM pg_tables
WHERE schemaname = 'public'
ORDER BY pg_total_relation_size(schemaname || '.' || tablename) DESC;
Adding Indexes:
SQL
-- Custom indexes if needed (check before adding)
CREATE INDEX IF NOT EXISTS idx_execution_started
ON execution_entity ("startedAt" DESC);

CREATE INDEX IF NOT EXISTS idx_execution_workflow
ON execution_entity ("workflowId", "startedAt" DESC);

CREATE INDEX IF NOT EXISTS idx_execution_status
ON execution_entity ("finished", "startedAt" DESC);
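Before keeping a new index, check that the planner actually uses it. A sketch (the workflow id is a placeholder):
Bash
# The plan should show an Index Scan on idx_execution_workflow
PGPASSWORD=$POSTGRES_PASSWORD psql -h postgres -U n8n -d n8n -c \
  "EXPLAIN ANALYZE
   SELECT id FROM execution_entity
   WHERE \"workflowId\" = 'some-workflow-id'
   ORDER BY \"startedAt\" DESC
   LIMIT 20;"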
Monitoring
Health Check Script:
Bash
#!/bin/bash
# db-health.sh

PGPASSWORD=$POSTGRES_PASSWORD psql -h postgres -U n8n -d n8n << EOF
-- Connection count
SELECT count(*) as active_connections FROM pg_stat_activity;

-- Database size
SELECT pg_size_pretty(pg_database_size('n8n')) as db_size;

-- Execution count last 24h
SELECT count(*) as executions_24h
FROM execution_entity
WHERE "startedAt" > NOW() - INTERVAL '24 hours';

-- Oldest execution
SELECT MIN("startedAt") as oldest_execution FROM execution_entity;
EOF
Prometheus Metrics:
yaml
# Add postgres_exporter
services:
  postgres-exporter:
    image: prometheuscommunity/postgres-exporter
    environment:
      DATA_SOURCE_NAME: "postgresql://n8n:${POSTGRES_PASSWORD}@postgres:5432/n8n?sslmode=disable"
    ports:
      - "9187:9187"
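To verify the exporter can reach PostgreSQL, query its metrics endpoint directly; pg_up should report 1:
Bash
# postgres_exporter serves metrics on port 9187
curl -s http://localhost:9187/metrics | grep '^pg_up'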
High Availability
Simple Replication Setup:
yaml
# docker-compose.ha.yml
services:
  postgres-primary:
    image: postgres:15-alpine
    environment:
      POSTGRES_USER: n8n
      POSTGRES_PASSWORD: ${POSTGRES_PASSWORD}
      POSTGRES_DB: n8n
    command:
      - "postgres"
      - "-c"
      - "wal_level=replica"
      - "-c"
      - "max_wal_senders=3"
      - "-c"
      - "max_replication_slots=3"
    volumes:
      - postgres_primary:/var/lib/postgresql/data

  postgres-replica:
    image: postgres:15-alpine
    environment:
      POSTGRES_USER: n8n
      POSTGRES_PASSWORD: ${POSTGRES_PASSWORD}
    depends_on:
      - postgres-primary
    # Setup replication (more config needed)
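The "(more config needed)" part boils down to three things: a replication role on the primary, a pg_hba.conf entry allowing replication connections, and a replica data directory seeded with pg_basebackup. A minimal sketch of the first and last steps (role, slot, and container names are illustrative):
Bash
# 1) On the primary: create a replication role and a physical slot
docker exec -i <primary-container> psql -U n8n -d n8n << EOF
CREATE ROLE replication_user WITH REPLICATION LOGIN PASSWORD 'replica_password';
SELECT pg_create_physical_replication_slot('replica_slot');
EOF

# 2) Seed the replica's empty data directory from the primary
pg_basebackup -h postgres-primary -U replication_user \
  -D /var/lib/postgresql/data -R --slot=replica_slot
# -R writes standby.signal + connection settings so the node starts as a standby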
Practice Exercises
Database Challenge
Set up a production database:
- Deploy PostgreSQL with Docker Compose
- Configure performance settings
- Create a backup script
- Test the backup/restore cycle
- Set up an automated daily backup
- Monitor database size
Never lose your workflows! 💾
Key Takeaways
Remember
- 🗄️ PostgreSQL for production, SQLite for dev only
- 💾 Back up daily - Test restores regularly
- 🧹 Prune executions - They grow fast!
- 📊 Monitor size - execution_entity grows quickly
- ⚡ Tune settings - Based on your RAM
Up Next
Next up: SSL & Domain Setup - secure n8n with HTTPS and a custom domain.
