Lesson 8: Performance Tuning NGINX

A lesson on Performance Tuning in Nginx - optimizing worker processes and connections, keepalive, buffers, timeouts, gzip compression, sendfile, tcp_nopush/nodelay, and the open file cache. Covers monitoring, benchmarking, and best practices to maximize performance in high-traffic production environments.

1. Worker Processes and Worker Connections

1.1. Worker Processes

Worker processes are the processes that handle the actual connections and requests.

Basic configuration:

# nginx.conf

# Set the number of worker processes
worker_processes auto;  # Recommended: auto-detects CPU cores

# Or set manually:
# worker_processes 4;   # 4 worker processes
# worker_processes 8;   # 8 worker processes

events {
    worker_connections 1024;
}

http {
    # ...
}

Worker processes explained:

Master Process
├── Worker Process 1 → Handles connections
├── Worker Process 2 → Handles connections
├── Worker Process 3 → Handles connections
└── Worker Process 4 → Handles connections

Best practice:
worker_processes = number of CPU cores

Check CPU cores:

# Linux
nproc
# Or
lscpu | grep "^CPU(s):"
# Or
cat /proc/cpuinfo | grep processor | wc -l

# macOS
sysctl -n hw.ncpu

Example configurations:

# Server with 4 CPU cores
worker_processes 4;

# Server with 8 CPU cores
worker_processes 8;

# Auto-detect (recommended)
worker_processes auto;

1.2. Worker Connections

Worker connections determines how many connections each worker can handle.

events {
    # Connections per worker
    worker_connections 1024;  # Default
    
    # Or higher for high-traffic sites
    # worker_connections 2048;
    # worker_connections 4096;
}

# Total connections = worker_processes × worker_connections
# Example: 4 workers × 1024 connections = 4,096 total connections

Calculate total capacity:

Total connections = worker_processes × worker_connections

Examples:
- 4 workers × 1024 = 4,096 connections
- 8 workers × 2048 = 16,384 connections
- 16 workers × 4096 = 65,536 connections
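The capacity formula above can be sketched as a quick shell calculation (the worker count would normally come from `nproc`; 8 is a stand-in value here):

```shell
#!/bin/sh
# Worked example of: total = worker_processes × worker_connections
workers=8          # stand-in for $(nproc)
connections=4096   # value of worker_connections
echo "Total capacity: $((workers * connections)) connections"
# → Total capacity: 32768 connections
```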

Practical example:

# High-traffic configuration
worker_processes auto;  # 8 cores = 8 workers

events {
    worker_connections 4096;
    # Total: 8 × 4096 = 32,768 connections
    
    # Use epoll on Linux (efficient event method)
    use epoll;
    
    # Accept multiple connections at once
    multi_accept on;
}

1.3. Event Methods

events {
    # Linux - epoll (recommended)
    use epoll;
    
    # FreeBSD - kqueue
    # use kqueue;
    
    # macOS - kqueue
    # use kqueue;
    
    # Windows - no need to specify (IOCP is used automatically)
}

Event method comparison:

Linux:
- epoll: Efficient, scalable (recommended)
- poll: Basic, less efficient
- select: Oldest, least efficient

BSD/macOS:
- kqueue: Most efficient

Windows:
- Uses IOCP automatically

1.4. Multi Accept

events {
    worker_connections 4096;
    
    # Accept multiple connections at once
    multi_accept on;  # Default: off
}

# on: Worker accepts all new connections at once
# off: Worker accepts one connection at a time

# Recommendation: Enable for high-traffic sites

1.5. System Limits

Check system limits:

# Current limits
ulimit -n

# System-wide limit
cat /proc/sys/fs/file-max

# Per-user limit
cat /etc/security/limits.conf

Increase limits:

# Temporary (current session)
ulimit -n 65536

# Permanent - edit /etc/security/limits.conf
sudo nano /etc/security/limits.conf

# Add:
nginx soft nofile 65536
nginx hard nofile 65536
* soft nofile 65536
* hard nofile 65536

System-wide file limit:

# Edit /etc/sysctl.conf
sudo nano /etc/sysctl.conf

# Add:
fs.file-max = 2097152

# Apply changes
sudo sysctl -p

Nginx systemd service limits:

# Edit systemd service
sudo systemctl edit nginx

# Add:
[Service]
LimitNOFILE=65536

# Reload systemd
sudo systemctl daemon-reload
sudo systemctl restart nginx
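After raising the limits, it is worth verifying what a process actually sees. A minimal Linux-specific check via /proc (shown against the current shell; in practice substitute an nginx worker PID, e.g. from `pgrep -f 'nginx: worker'`):

```shell
#!/bin/sh
# Read the effective open-file limit from /proc.
# /proc/self refers to the current process; use /proc/<worker-pid>
# to inspect a running nginx worker instead.
grep 'Max open files' /proc/self/limits
```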

1.6. Complete Worker Configuration

# Main context
user nginx;
worker_processes auto;
worker_rlimit_nofile 65535;  # Max open files per worker
error_log /var/log/nginx/error.log warn;
pid /var/run/nginx.pid;

# Worker priority (nice value: -20 to 19, lower = higher priority)
# worker_priority -10;  # Higher priority (use with caution)

# CPU affinity (bind workers to specific cores)
# worker_cpu_affinity auto;

events {
    worker_connections 4096;
    use epoll;
    multi_accept on;
}

http {
    # HTTP configuration...
}

2. Keepalive Connections

Keepalive connections stay open for reuse, reducing the overhead of establishing new connections.

2.1. Client Keepalive

http {
    # Keepalive timeout (seconds)
    keepalive_timeout 65;  # Default: 75s
    
    # Max requests per connection
    keepalive_requests 100;  # Default: 100
    
    # Disable keepalive for specific browsers (legacy)
    # keepalive_disable msie6;
    
    server {
        listen 80;
        # Inherits keepalive settings
    }
}

Keepalive timeout values:

# Short timeout (conserve resources)
keepalive_timeout 30;

# Medium timeout (balanced)
keepalive_timeout 65;

# Long timeout (persistent connections)
keepalive_timeout 120;

# Disable keepalive
keepalive_timeout 0;

Keepalive requests:

# Allow 100 requests per connection
keepalive_requests 100;

# Higher value for API servers
keepalive_requests 1000;

# Lower value to force connection refresh
keepalive_requests 50;

2.2. Upstream Keepalive

Keepalive connections to backend servers.

upstream backend {
    server backend1.example.com:8080;
    server backend2.example.com:8080;
    
    # Keep 32 idle connections to upstream
    keepalive 32;
    
    # Keepalive timeout
    keepalive_timeout 60s;
    
    # Max requests per connection
    keepalive_requests 100;
}

server {
    listen 80;
    
    location / {
        proxy_pass http://backend;
        
        # Required for upstream keepalive
        proxy_http_version 1.1;
        proxy_set_header Connection "";
        
        # Timeouts
        proxy_connect_timeout 60s;
        proxy_send_timeout 60s;
        proxy_read_timeout 60s;
    }
}

Upstream keepalive sizing:

# Small pool (low traffic)
keepalive 8;

# Medium pool (moderate traffic)
keepalive 32;

# Large pool (high traffic)
keepalive 128;

# Very large pool (very high traffic)
keepalive 256;

# Calculation:
# keepalive = (peak requests per second) / (requests per connection)
# Example: 1000 rps / 100 req/conn = 10 connections needed
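The sizing rule above can be turned into a quick calculation (the traffic figures are hypothetical placeholders, not measured values):

```shell
#!/bin/sh
# Rough upstream keepalive sizing per the formula above.
# peak_rps and req_per_conn are example traffic figures.
peak_rps=1000
req_per_conn=100
echo "keepalive $((peak_rps / req_per_conn));"
# → keepalive 10;
```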

2.3. Complete Keepalive Configuration

http {
    # Client keepalive
    keepalive_timeout 65;
    keepalive_requests 100;
    
    # Upstream with keepalive
    upstream api_backend {
        server api1.example.com:8080;
        server api2.example.com:8080;
        server api3.example.com:8080;
        
        keepalive 64;
        keepalive_timeout 60s;
        keepalive_requests 1000;
    }
    
    server {
        listen 80;
        server_name example.com;
        
        location /api/ {
            proxy_pass http://api_backend/;
            
            # Enable upstream keepalive
            proxy_http_version 1.1;
            proxy_set_header Connection "";
            
            # Headers
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        }
    }
}

3. Buffer and Timeout Optimization

3.1. Client Buffers

http {
    # Client body buffer
    client_body_buffer_size 128k;  # Default: 8k|16k
    client_max_body_size 20M;      # Default: 1m
    
    # Client header buffer
    client_header_buffer_size 1k;  # Default: 1k
    large_client_header_buffers 4 8k;  # Default: 4 8k
    
    # Client body in temp file
    client_body_temp_path /var/cache/nginx/client_temp 1 2;
}

Buffer sizes explained:

# Small files/requests
client_body_buffer_size 16k;
client_max_body_size 1M;

# Medium files/requests (recommended)
client_body_buffer_size 128k;
client_max_body_size 20M;

# Large files/uploads
client_body_buffer_size 256k;
client_max_body_size 100M;

# Very large files (video uploads)
client_body_buffer_size 512k;
client_max_body_size 1G;

3.2. Proxy Buffers

http {
    server {
        location / {
            proxy_pass http://backend;
            
            # Enable buffering
            proxy_buffering on;  # Default: on
            
            # Buffer size for response headers
            proxy_buffer_size 4k;  # Default: 4k|8k
            
            # Number and size of buffers for response body
            proxy_buffers 8 4k;  # Default: 8 4k|8k
            
            # Max size of buffers busy sending to client
            proxy_busy_buffers_size 8k;  # Default: 8k|16k
            
            # Max size of data buffered from upstream
            proxy_max_temp_file_size 1024m;  # Default: 1024m
            
            # Size of chunks when writing to temp file
            proxy_temp_file_write_size 8k;  # Default: 8k|16k
        }
    }
}

Buffer sizing recommendations:

# Small responses (API, JSON)
proxy_buffer_size 4k;
proxy_buffers 8 4k;
proxy_busy_buffers_size 8k;

# Medium responses (HTML pages)
proxy_buffer_size 8k;
proxy_buffers 16 8k;
proxy_busy_buffers_size 16k;

# Large responses (images, files)
proxy_buffer_size 16k;
proxy_buffers 32 16k;
proxy_busy_buffers_size 32k;

# Disable buffering for streaming
proxy_buffering off;

3.3. FastCGI Buffers

http {
    server {
        location ~ \.php$ {
            fastcgi_pass unix:/var/run/php/php8.1-fpm.sock;
            
            # FastCGI buffering
            fastcgi_buffering on;
            
            # Buffer size for headers
            fastcgi_buffer_size 16k;  # Default: 4k|8k
            
            # Number and size of buffers
            fastcgi_buffers 16 16k;  # Default: 8 4k|8k
            
            # Busy buffers
            fastcgi_busy_buffers_size 32k;
            
            # Temp file settings
            fastcgi_max_temp_file_size 1024m;
            fastcgi_temp_file_write_size 16k;
        }
    }
}

3.4. Timeouts

http {
    # Client timeouts
    client_body_timeout 12s;     # Default: 60s
    client_header_timeout 12s;   # Default: 60s
    send_timeout 10s;            # Default: 60s
    
    # Keepalive timeout
    keepalive_timeout 65s;       # Default: 75s
    
    server {
        location / {
            proxy_pass http://backend;
            
            # Proxy timeouts
            proxy_connect_timeout 60s;   # Default: 60s
            proxy_send_timeout 60s;      # Default: 60s
            proxy_read_timeout 60s;      # Default: 60s
        }
        
        location ~ \.php$ {
            fastcgi_pass unix:/var/run/php/php8.1-fpm.sock;
            
            # FastCGI timeouts
            fastcgi_connect_timeout 60s;
            fastcgi_send_timeout 60s;
            fastcgi_read_timeout 60s;
        }
    }
}

Timeout recommendations:

# Fast API (quick responses)
proxy_connect_timeout 5s;
proxy_send_timeout 10s;
proxy_read_timeout 10s;

# Normal web application
proxy_connect_timeout 60s;
proxy_send_timeout 60s;
proxy_read_timeout 60s;

# Long-running processes
proxy_connect_timeout 10s;
proxy_send_timeout 300s;
proxy_read_timeout 300s;

# File uploads
client_body_timeout 300s;
proxy_read_timeout 300s;

3.5. Complete Buffer Configuration

http {
    # Client settings
    client_body_buffer_size 128k;
    client_max_body_size 20M;
    client_header_buffer_size 1k;
    large_client_header_buffers 4 8k;
    client_body_timeout 60s;
    client_header_timeout 60s;
    send_timeout 60s;
    
    # Keepalive
    keepalive_timeout 65s;
    keepalive_requests 100;
    
    # Upstream with optimized buffers
    upstream backend {
        server backend1.example.com:8080;
        server backend2.example.com:8080;
        keepalive 32;
    }
    
    server {
        listen 80;
        server_name example.com;
        
        location / {
            proxy_pass http://backend;
            
            # HTTP version for keepalive
            proxy_http_version 1.1;
            proxy_set_header Connection "";
            
            # Buffering
            proxy_buffering on;
            proxy_buffer_size 8k;
            proxy_buffers 16 8k;
            proxy_busy_buffers_size 16k;
            
            # Timeouts
            proxy_connect_timeout 60s;
            proxy_send_timeout 60s;
            proxy_read_timeout 60s;
            
            # Headers
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        }
        
        # PHP with optimized buffers
        location ~ \.php$ {
            fastcgi_pass unix:/var/run/php/php8.1-fpm.sock;
            include fastcgi_params;
            fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
            
            fastcgi_buffering on;
            fastcgi_buffer_size 16k;
            fastcgi_buffers 16 16k;
            fastcgi_busy_buffers_size 32k;
            
            fastcgi_connect_timeout 60s;
            fastcgi_send_timeout 180s;
            fastcgi_read_timeout 180s;
        }
    }
}

4. Gzip Compression

Gzip compression reduces bandwidth usage and speeds up page loads.

4.1. Basic Gzip Configuration

http {
    # Enable gzip
    gzip on;
    
    # Compression level (1-9, 6 is recommended balance)
    gzip_comp_level 6;
    
    # Minimum file size to compress
    gzip_min_length 1000;  # bytes
    
    # Also compress responses to requests coming through proxies
    gzip_proxied any;
    
    # Add Vary: Accept-Encoding header
    gzip_vary on;
    
    # Disable for IE6
    gzip_disable "msie6";
}

4.2. Gzip Types

http {
    gzip on;
    gzip_comp_level 6;
    
    # Specify MIME types to compress
    gzip_types
        text/plain
        text/css
        text/xml
        text/javascript
        application/json
        application/javascript
        application/xml
        application/xml+rss
        application/xhtml+xml
        application/x-font-ttf
        application/x-font-opentype
        application/vnd.ms-fontobject
        image/svg+xml
        image/x-icon
        application/rss+xml
        application/atom+xml;
    
    # text/html is always compressed by default
}

IMPORTANT: Do not compress already-compressed formats (jpg, png, gif):

# DON'T compress these (already compressed)
# image/jpeg
# image/png
# image/gif
# video/mp4
# application/zip
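The point is easy to demonstrate locally: gzipping incompressible data only adds container overhead, so the output grows. A small sketch, using random bytes as a stand-in for an already-compressed file (assumes a Unix-like system with /dev/urandom):

```shell
#!/bin/sh
# Random bytes stand in for a JPEG or ZIP (incompressible input).
head -c 100000 /dev/urandom > sample.bin
gzip -kf sample.bin
# The .gz ends up larger: deflate falls back to stored blocks,
# and the gzip header/trailer adds extra bytes on top.
wc -c sample.bin sample.bin.gz
```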

4.3. Compression Levels

# Level 1 - Fastest, least compression
gzip_comp_level 1;

# Level 4 - Good balance (fast)
gzip_comp_level 4;

# Level 6 - Recommended balance
gzip_comp_level 6;

# Level 9 - Maximum compression (slowest, high CPU)
gzip_comp_level 9;

# Benchmark:
# Level 1: ~70% compression, very fast
# Level 6: ~80% compression, balanced
# Level 9: ~82% compression, slow (not worth it)

4.4. Gzip Buffers

http {
    gzip on;
    
    # Buffers for compression
    gzip_buffers 16 8k;  # 16 buffers of 8k each
    
    # HTTP version (1.1 for proxied content)
    gzip_http_version 1.1;
}

4.5. Complete Gzip Configuration

http {
    # Gzip settings
    gzip on;
    gzip_vary on;
    gzip_proxied any;
    gzip_comp_level 6;
    gzip_min_length 1000;
    gzip_disable "msie6";
    gzip_http_version 1.1;
    gzip_buffers 16 8k;
    
    gzip_types
        text/plain
        text/css
        text/xml
        text/javascript
        text/x-component
        application/json
        application/javascript
        application/x-javascript
        application/xml
        application/xml+rss
        application/xhtml+xml
        application/rss+xml
        application/atom+xml
        application/vnd.ms-fontobject
        application/x-font-ttf
        application/x-font-opentype
        font/truetype
        font/opentype
        image/svg+xml
        image/x-icon;
    
    server {
        listen 80;
        server_name example.com;
        
        location / {
            root /var/www/html;
        }
        
        # Static files already compressed - don't compress
        location ~* \.(jpg|jpeg|png|gif|ico|mp4|pdf|zip)$ {
            gzip off;
            expires 1y;
            add_header Cache-Control "public, immutable";
        }
    }
}

4.6. Pre-compressed Files (gzip_static)

# Serve pre-compressed .gz files if available
http {
    server {
        listen 80;
        root /var/www/html;
        
        location / {
            # Try .gz file first, then original
            gzip_static on;  # Requires ngx_http_gzip_static_module
        }
    }
}

# Pre-compress files:
# gzip -k file.css    # Creates file.css.gz
# gzip -k file.js     # Creates file.js.gz

Build script to pre-compress assets:

#!/bin/bash
# precompress.sh - Pre-compress static assets

WWW_DIR="/var/www/html"

find "$WWW_DIR" -type f \( -name '*.css' -o -name '*.js' -o -name '*.html' -o -name '*.xml' \) -exec gzip -k -9 {} \;

echo "Pre-compression complete!"

5. Sendfile và tcp_nopush

5.1. Sendfile

Sendfile lets Nginx send files directly from disk to the network socket without copying them through user space.

http {
    # Enable sendfile (highly recommended)
    sendfile on;
    
    # Or disable (testing/debugging)
    # sendfile off;
}

How sendfile works:

Without sendfile:
Disk → Kernel → User Space (Nginx) → Kernel → Network
(2 context switches, data copied twice)

With sendfile:
Disk → Kernel → Network
(Direct transfer, no extra copies)

5.2. tcp_nopush

tcp_nopush sends the HTTP response headers and the file content in the same packet.

http {
    sendfile on;
    tcp_nopush on;  # Use with sendfile
    
    # tcp_nopush works only when sendfile is on
}

tcp_nopush benefits:

Without tcp_nopush:
Packet 1: HTTP headers
Packet 2: File chunk 1
Packet 3: File chunk 2
...

With tcp_nopush:
Packet 1: HTTP headers + File chunk 1
Packet 2: File chunk 2
...
(Fewer packets, better efficiency)

5.3. tcp_nodelay

tcp_nodelay disables Nagle's algorithm (good for keepalive connections).

http {
    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;  # Enable for keepalive
    
    keepalive_timeout 65;
}

When to use:

# Static files - use tcp_nopush
location /static/ {
    sendfile on;
    tcp_nopush on;
}

# Dynamic content / API - use tcp_nodelay
location /api/ {
    tcp_nodelay on;
    proxy_pass http://backend;
}

# Both for general use (recommended)
http {
    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
}

5.4. Complete Sendfile Configuration

http {
    # File transfer optimization
    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    
    # Sendfile max chunk
    sendfile_max_chunk 512k;  # Default: 0 (unlimited)
    
    server {
        listen 80;
        server_name example.com;
        
        # Static files
        location /static/ {
            root /var/www;
            
            # Optimize for static files
            sendfile on;
            tcp_nopush on;
            
            expires 1y;
            add_header Cache-Control "public, immutable";
        }
        
        # Dynamic content
        location / {
            proxy_pass http://backend;
            
            # Optimize for dynamic content
            tcp_nodelay on;
        }
    }
}

6. Open File Cache

The open file cache stores file descriptors and metadata, reducing the number of open/close operations on files.

6.1. Basic Open File Cache

http {
    # Enable open file cache
    open_file_cache max=10000 inactive=60s;
    
    # Validate cache every 30s
    open_file_cache_valid 30s;
    
    # Minimum uses before caching
    open_file_cache_min_uses 2;
    
    # Cache errors (file not found)
    open_file_cache_errors on;
}

6.2. Cache Parameters

http {
    # max=N: Maximum cached entries
    # inactive=T: Remove entry if not accessed for T time
    open_file_cache max=10000 inactive=60s;
    
    # Revalidate cache every 30s
    open_file_cache_valid 30s;
    
    # Cache entry after 2 uses
    open_file_cache_min_uses 2;
    
    # Cache file not found errors
    open_file_cache_errors on;
}

Parameter explanation:

# Small cache (low traffic)
open_file_cache max=1000 inactive=30s;
open_file_cache_valid 30s;
open_file_cache_min_uses 2;

# Medium cache (moderate traffic)
open_file_cache max=10000 inactive=60s;
open_file_cache_valid 60s;
open_file_cache_min_uses 2;

# Large cache (high traffic)
open_file_cache max=50000 inactive=120s;
open_file_cache_valid 60s;
open_file_cache_min_uses 1;

6.3. What Gets Cached?

Open file cache stores:
- File descriptors (handles)
- File sizes
- Modification times
- Directory existence
- File lookup errors (not found)

Does NOT cache:
- File content (use proxy_cache for that)

6.4. Complete Open File Cache Configuration

http {
    # Open file cache
    open_file_cache max=10000 inactive=60s;
    open_file_cache_valid 30s;
    open_file_cache_min_uses 2;
    open_file_cache_errors on;
    
    server {
        listen 80;
        server_name example.com;
        root /var/www/html;
        
        # Static files benefit most from open file cache
        location /static/ {
            # Inherits open_file_cache from http
            expires 1y;
            add_header Cache-Control "public, immutable";
        }
        
        location / {
            try_files $uri $uri/ =404;
        }
    }
}

7. Complete Performance Configuration

7.1. Optimal nginx.conf

# /etc/nginx/nginx.conf

user nginx;
worker_processes auto;
worker_rlimit_nofile 65535;
error_log /var/log/nginx/error.log warn;
pid /var/run/nginx.pid;

events {
    worker_connections 4096;
    use epoll;
    multi_accept on;
}

http {
    include /etc/nginx/mime.types;
    default_type application/octet-stream;
    
    # Logging
    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';
    
    access_log /var/log/nginx/access.log main;
    
    # Basic settings
    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    server_tokens off;
    
    # Timeouts
    client_body_timeout 12s;
    client_header_timeout 12s;
    send_timeout 10s;
    keepalive_timeout 65s;
    keepalive_requests 100;
    
    # Buffer sizes
    client_body_buffer_size 128k;
    client_max_body_size 20M;
    client_header_buffer_size 1k;
    large_client_header_buffers 4 8k;
    
    # Open file cache
    open_file_cache max=10000 inactive=60s;
    open_file_cache_valid 30s;
    open_file_cache_min_uses 2;
    open_file_cache_errors on;
    
    # Gzip compression
    gzip on;
    gzip_vary on;
    gzip_proxied any;
    gzip_comp_level 6;
    gzip_min_length 1000;
    gzip_disable "msie6";
    gzip_types
        text/plain
        text/css
        text/xml
        text/javascript
        application/json
        application/javascript
        application/xml
        application/xml+rss
        font/truetype
        font/opentype
        image/svg+xml;
    
    # Rate limiting zones
    limit_req_zone $binary_remote_addr zone=general:10m rate=10r/s;
    limit_conn_zone $binary_remote_addr zone=addr:10m;
    
    # Include configs
    include /etc/nginx/conf.d/*.conf;
    include /etc/nginx/sites-enabled/*;
}

7.2. Optimized Site Configuration

# /etc/nginx/sites-available/example.com

# Upstream with keepalive
upstream backend {
    least_conn;
    
    server backend1.example.com:8080 max_fails=3 fail_timeout=30s;
    server backend2.example.com:8080 max_fails=3 fail_timeout=30s;
    server backend3.example.com:8080 max_fails=3 fail_timeout=30s;
    
    keepalive 64;
    keepalive_timeout 60s;
    keepalive_requests 1000;
}

server {
    listen 80;
    listen [::]:80;
    server_name example.com www.example.com;
    return 301 https://example.com$request_uri;
}

server {
    listen 443 ssl http2;
    listen [::]:443 ssl http2;
    server_name example.com;
    
    root /var/www/example.com/public;
    index index.html index.htm;
    
    # SSL configuration
    ssl_certificate /etc/letsencrypt/live/example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;
    ssl_protocols TLSv1.2 TLSv1.3;
    ssl_session_cache shared:SSL:10m;
    ssl_session_timeout 10m;
    ssl_stapling on;
    ssl_stapling_verify on;
    
    # Security headers
    add_header Strict-Transport-Security "max-age=31536000; includeSubDomains; preload" always;
    add_header X-Frame-Options "SAMEORIGIN" always;
    add_header X-Content-Type-Options "nosniff" always;
    
    # Rate limiting
    limit_req zone=general burst=20 nodelay;
    limit_conn addr 10;
    
    # Main location
    location / {
        try_files $uri $uri/ =404;
    }
    
    # API backend with optimized settings
    location /api/ {
        proxy_pass http://backend/;
        
        # HTTP version
        proxy_http_version 1.1;
        proxy_set_header Connection "";
        
        # Headers
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        
        # Buffering
        proxy_buffering on;
        proxy_buffer_size 8k;
        proxy_buffers 16 8k;
        proxy_busy_buffers_size 16k;
        
        # Timeouts
        proxy_connect_timeout 60s;
        proxy_send_timeout 60s;
        proxy_read_timeout 60s;
        
        # Caching (the zone must be defined via proxy_cache_path ... keys_zone=api_cache:10m in the http block)
        proxy_cache api_cache;
        proxy_cache_valid 200 5m;
        proxy_cache_use_stale error timeout updating;
        proxy_cache_lock on;
        
        add_header X-Cache-Status $upstream_cache_status;
    }
    
    # Static assets - highly optimized
    location ~* \.(jpg|jpeg|png|gif|ico|svg|webp)$ {
        expires 1y;
        add_header Cache-Control "public, immutable";
        access_log off;
        
        # Open file cache helps here
        sendfile on;
        tcp_nopush on;
    }
    
    location ~* \.(css|js)$ {
        expires 1M;
        add_header Cache-Control "public";
        access_log off;
        
        sendfile on;
        tcp_nopush on;
        gzip_static on;  # If pre-compressed files exist
    }
    
    location ~* \.(woff|woff2|ttf|eot)$ {
        expires 1y;
        add_header Cache-Control "public";
        add_header Access-Control-Allow-Origin "*";
        access_log off;
    }
    
    # Deny hidden files
    location ~ /\. {
        deny all;
        access_log off;
        log_not_found off;
    }
}

8. Monitoring and Benchmarking

8.1. Nginx Status Module

server {
    listen 8080;
    server_name localhost;
    
    location /nginx_status {
        stub_status;
        access_log off;
        allow 127.0.0.1;
        allow 10.0.0.0/8;
        deny all;
    }
}

Check status:

curl http://localhost:8080/nginx_status

# Output:
# Active connections: 291
# server accepts handled requests
#  16630948 16630948 31070465
# Reading: 6 Writing: 179 Waiting: 106

Explanation:

  • Active connections: Current open connections
  • server accepts: Total accepted connections
  • handled: Total handled connections
  • requests: Total requests
  • Reading: Reading request headers
  • Writing: Writing response to clients
  • Waiting: Keep-alive connections (idle)
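One derived metric worth watching: accepts minus handled is the number of dropped connections (non-zero means workers ran out of connection slots). A small parsing sketch over the sample output above (in practice, replace the STATUS variable by piping in `curl -s http://localhost:8080/nginx_status`):

```shell
#!/bin/sh
# Parse the third line of stub_status output and compute drops.
# STATUS holds the sample output from above; use a live curl in practice.
STATUS='Active connections: 291
server accepts handled requests
 16630948 16630948 31070465
Reading: 6 Writing: 179 Waiting: 106'

set -- $(printf '%s\n' "$STATUS" | sed -n 3p)
echo "accepts=$1 handled=$2 requests=$3 dropped=$(($1 - $2))"
# → accepts=16630948 handled=16630948 requests=31070465 dropped=0
```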

8.2. Benchmarking Tools

Apache Bench (ab):

# Simple benchmark
ab -n 1000 -c 10 http://example.com/

# With keepalive
ab -n 1000 -c 10 -k http://example.com/

# POST request
ab -n 1000 -c 10 -p data.json -T application/json http://example.com/api/

# Parameters:
# -n: Total requests
# -c: Concurrent requests
# -k: Enable keepalive
# -p: POST data file
# -T: Content-Type header

wrk (Modern alternative):

# Install wrk
sudo apt install wrk

# Basic benchmark
wrk -t4 -c100 -d30s http://example.com/

# With custom script
wrk -t4 -c100 -d30s -s post.lua http://example.com/api/

# Parameters:
# -t: Threads
# -c: Connections
# -d: Duration
# -s: Lua script

siege:

# Install siege
sudo apt install siege

# Benchmark
siege -c 10 -t 30s http://example.com/

# From URL file
siege -c 10 -t 30s -f urls.txt

# Parameters:
# -c: Concurrent users
# -t: Duration
# -f: URL file

8.3. Performance Metrics

Monitor script:

#!/bin/bash
# monitor_nginx.sh

while true; do
    clear
    echo "==================================="
    echo "Nginx Performance Monitor"
    echo "==================================="
    echo "Time: $(date)"
    echo ""
    
    # Nginx status
    echo "Nginx Status:"
    curl -s http://localhost:8080/nginx_status
    echo ""
    
    # Worker processes
    echo "Worker Processes:"
    ps aux | grep nginx | grep -v grep
    echo ""
    
    # Memory usage
    echo "Memory Usage:"
    ps aux | grep nginx | awk '{sum+=$6} END {print "Total: " sum/1024 " MB"}'
    echo ""
    
    # Open files (nginx master process; lsof -i :80 would count sockets, not files)
    echo "Open Files:"
    lsof -p "$(pgrep nginx | head -1)" | wc -l
    echo ""
    
    # Connections
    echo "TCP Connections:"
    netstat -an | grep :80 | wc -l
    echo ""
    
    sleep 5
done

8.4. System-level Monitoring

# CPU usage
top -b -n 1 | grep nginx

# Memory usage
ps aux | grep nginx

# Network connections
netstat -an | grep :80 | wc -l

# Open files per process
lsof -p $(pgrep nginx | head -1) | wc -l

# System load
uptime

# Disk I/O
iostat -x 1

# Network I/O
iftop

8.5. Log Analysis

Parse access log for metrics:

#!/bin/bash
# analyze_logs.sh

LOG_FILE="/var/log/nginx/access.log"

echo "Nginx Log Analysis"
echo "=================="

# Total requests
echo "Total requests: $(wc -l < $LOG_FILE)"

# Requests in the last minute (approximate lexicographic match on the timestamp)
echo "Requests in the last minute:"
awk -v date="$(date -d '1 minute ago' '+%d/%b/%Y:%H:%M')" \
    '$4 > "["date' $LOG_FILE | wc -l

# Top 10 URLs
echo -e "\nTop 10 URLs:"
awk '{print $7}' $LOG_FILE | sort | uniq -c | sort -rn | head -10

# Top 10 IPs
echo -e "\nTop 10 IPs:"
awk '{print $1}' $LOG_FILE | sort | uniq -c | sort -rn | head -10

# Status code distribution
echo -e "\nStatus Codes:"
awk '{print $9}' $LOG_FILE | sort | uniq -c | sort -rn

# Average response time (assumes $request_time is logged as the last field)
echo -e "\nAverage Response Time:"
awk '{print $NF}' $LOG_FILE | \
    awk '{sum+=$1; count++} END {print sum/count " seconds"}'

9. Troubleshooting Performance Issues

9.1. High CPU Usage

Diagnosis:

# Check worker CPU
top -b -n 1 | grep nginx

# Check processes
ps aux | grep nginx | grep -v grep

# Detailed CPU per worker
pidstat -p $(pgrep nginx | tr '\n' ',') 1

Common causes:

  1. Too many workers
  2. High compression level
  3. Complex regex/rewrite rules
  4. SSL/TLS overhead

Solutions:

# Reduce workers if too many
worker_processes auto;  # Instead of manual high number

# Lower compression level
gzip_comp_level 4;  # Instead of 9

# Optimize regex
location ~ \.php$ {  # Simple regex
    # Instead of: location ~* ^/([a-z]+)/([0-9]+)\.php$
}

# SSL session cache
ssl_session_cache shared:SSL:10m;

9.2. High Memory Usage

Diagnosis:

# Memory per process
ps aux | grep nginx

# Total memory
ps aux | grep nginx | awk '{sum+=$6} END {print sum/1024 " MB"}'

# Check buffers
sudo nginx -T | grep buffer

Solutions:

# Reduce buffer sizes
client_body_buffer_size 128k;  # Instead of 1M
proxy_buffers 8 4k;  # Instead of 32 16k

# Reduce workers if needed
worker_processes 4;  # Instead of auto on 32-core machine

# Limit connections per worker
worker_connections 2048;  # Instead of 10000

9.3. Too Many Open Files

Diagnosis:

# Check limit
ulimit -n

# Check usage
lsof -p $(pgrep nginx | head -1) | wc -l

# Check system limit
cat /proc/sys/fs/file-max

Solutions:

# Increase limits
sudo nano /etc/security/limits.conf
# Add:
nginx soft nofile 65536
nginx hard nofile 65536

# Nginx config
worker_rlimit_nofile 65535;

9.4. Slow Response Times

Diagnosis:

# Check upstream response times
tail -f /var/log/nginx/access.log | grep -oP 'upstream_response_time=\K[^ ]+'

# Test backend directly
time curl http://backend:8080/

# Check network
ping backend
traceroute backend

Solutions:

# Increase timeouts if backend is slow
proxy_read_timeout 180s;
proxy_connect_timeout 10s;

# Enable caching
proxy_cache my_cache;
proxy_cache_valid 200 10m;

# Use stale content
proxy_cache_use_stale error timeout updating;

9.5. Connection Refused Errors

Diagnosis:

# Check worker connections
curl http://localhost:8080/nginx_status

# Check limits
ulimit -n

# Check backend connectivity
telnet backend 8080

Solutions:

# Increase worker connections
events {
    worker_connections 4096;  # Increase from 1024
}

# Increase system limits
# See "Too Many Open Files" section

# Add more workers
worker_processes auto;

10. Best Practices Summary

10.1. Worker Configuration

# Optimal settings
worker_processes auto;
worker_rlimit_nofile 65535;

events {
    worker_connections 4096;
    use epoll;
    multi_accept on;
}

10.2. Keepalive

# Client keepalive
keepalive_timeout 65s;
keepalive_requests 100;

# Upstream keepalive
upstream backend {
    server backend1:8080;
    keepalive 64;
}

# Enable in proxy
proxy_http_version 1.1;
proxy_set_header Connection "";

10.3. Buffers and Timeouts

# Reasonable defaults
client_body_buffer_size 128k;
client_max_body_size 20M;

proxy_buffering on;
proxy_buffer_size 8k;
proxy_buffers 16 8k;

proxy_connect_timeout 60s;
proxy_read_timeout 60s;

10.4. Compression

gzip on;
gzip_vary on;
gzip_comp_level 6;
gzip_min_length 1000;
gzip_types text/plain text/css application/json application/javascript;

10.5. File Operations

sendfile on;
tcp_nopush on;
tcp_nodelay on;

open_file_cache max=10000 inactive=60s;
open_file_cache_valid 30s;
open_file_cache_min_uses 2;

10.6. Monitoring

# Enable stub_status
server {
    listen 8080;
    location /nginx_status {
        stub_status;
        allow 127.0.0.1;
        deny all;
    }
}

# Regular monitoring
# - CPU and memory usage
# - Connection counts
# - Response times
# - Error rates

11. Hands-on Exercises

Exercise 1: Worker Optimization

  1. Check the number of CPU cores on the server
  2. Configure worker_processes and worker_connections
  3. Benchmark before and after
  4. Compare results

Exercise 2: Buffer Tuning

  1. Set up a backend application
  2. Test with the default buffers
  3. Optimize buffer sizes
  4. Measure the performance improvement

Exercise 3: Compression Testing

  1. Disable gzip
  2. Benchmark response size and time
  3. Enable gzip at level 6
  4. Compare compression ratio and speed

Exercise 4: Keepalive Impact

  1. Test with keepalive off
  2. Enable keepalive
  3. Add upstream keepalive
  4. Benchmark the connection overhead

Exercise 5: Complete Optimization

  1. Start with the default Nginx config
  2. Apply all the optimizations from this lesson
  3. Run comprehensive benchmarks
  4. Document the performance gains

Exercise 6: Load Testing

  1. Set up a test environment
  2. Use ab or wrk to load test
  3. Monitor system resources
  4. Identify bottlenecks
  5. Apply optimizations
  6. Re-test

Summary

In this lesson, you learned:

  • ✅ Worker processes and connections optimization
  • ✅ Keepalive connections configuration
  • ✅ Buffer and timeout tuning
  • ✅ Gzip compression setup
  • ✅ Sendfile, tcp_nopush, tcp_nodelay
  • ✅ Open file cache configuration
  • ✅ Performance monitoring and benchmarking
  • ✅ Troubleshooting common issues

Achievable performance gains:

  • 2-5x throughput increase with proper worker config
  • 20-50% bandwidth reduction with gzip
  • 30-70% faster file serving with sendfile + cache
  • Significant latency reduction with keepalive

Next lesson: We will explore Security - rate limiting, IP blocking, authentication, WAF integration, DDoS protection, and secure headers to protect an Nginx server in a production environment.
