A Practical Guide to Nginx: Configuration, SSL, and Debugging

Whether you’re deploying a Node.js API, a Python backend, or a static React app, Nginx is likely part of your stack. This guide covers the practical configurations that work in production environments—along with common pitfalls and how to avoid them.

This isn’t a comprehensive manual. It’s a focused reference covering real-world configs, common mistakes, and essential debugging commands.

Why Nginx Instead of Apache?

Both are capable web servers, but Nginx handles modern workloads more efficiently in most cases.

Apache's traditional MPMs dedicate a process (prefork) or thread (worker) to each connection. This works fine for moderate traffic, but under heavy load (thousands of concurrent connections), memory usage spikes significantly.

Nginx uses an event-driven architecture. Each worker process handles thousands of connections concurrently with minimal memory overhead.

For most modern deployments—especially when proxying to Node, Python, or Go backends—Nginx is the better fit.

Installation (The Easy Part)

On Ubuntu/Debian:

sudo apt update
sudo apt install nginx

Check if it’s running:

systemctl status nginx

If you see active (running), hit your server’s IP in a browser. You should see the “Welcome to Nginx” page.

First gotcha: If you see “Connection refused,” check your firewall:

sudo ufw allow 'Nginx Full'
sudo ufw status
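
Not sure which profiles your ufw install exposes? You can list them; 'Nginx Full' opens ports 80 and 443, while 'Nginx HTTP' opens only 80:

sudo ufw app list
sudo ufw app info 'Nginx Full'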

Understanding the Config Structure

Here’s the directory structure to keep in mind:

/etc/nginx/
├── nginx.conf              # Global settings (workers, logging). Rarely touch this.
├── sites-available/        # Where you WRITE configs
├── sites-enabled/          # Where Nginx READS configs (symlinks)
└── snippets/               # Reusable config chunks

The workflow:

  1. Create config in sites-available/
  2. Symlink to sites-enabled/
  3. Test and reload

Common mistake: Editing files directly in sites-enabled/. Always edit in sites-available/ and symlink. This keeps configurations clean and reversible.

Use Case 1: Serving a Static Site

You’ve built a React/Vue app. It’s sitting in /var/www/myapp/dist. Here’s the config:

Create /etc/nginx/sites-available/myapp:

server {
    listen 80;
    server_name myapp.com www.myapp.com;

    root /var/www/myapp/dist;
    index index.html;

    # Handle client-side routing (React Router, Vue Router)
    location / {
        try_files $uri $uri/ /index.html;
    }

    # Cache static assets aggressively
    location ~* \.(js|css|png|jpg|jpeg|gif|ico|svg|woff|woff2)$ {
        expires 1y;
        add_header Cache-Control "public, immutable";
    }

    # Don't log favicon requests (reduces noise)
    location = /favicon.ico {
        log_not_found off;
        access_log off;
    }
}

What’s happening:

  • try_files $uri $uri/ /index.html — This is crucial for SPAs. If the URL doesn’t match a real file, serve index.html and let your JS router handle it. Without this, refreshing /dashboard gives you a 404.
  • The caching block tells browsers to cache assets for a year. Your bundler adds hashes to filenames anyway, so this is safe.
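
A quick way to confirm both behaviors from the command line, assuming the site is already enabled and your build produced a hashed bundle (the paths below are placeholders; substitute real ones):

# A deep link should return 200 with index.html, not 404
curl -I http://myapp.com/dashboard

# A hashed asset should come back with the long-lived Cache-Control header
curl -I http://myapp.com/assets/main.abc123.js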

Use Case 2: Reverse Proxy for Node/Python/Go

This is the most common Nginx configuration in modern stacks.

You have a Node.js app running on port 3000. You don’t want users hitting example.com:3000. You want clean URLs on port 80/443 with Nginx handling the front door.

Create /etc/nginx/sites-available/api:

server {
    listen 80;
    server_name api.example.com;

    location / {
        proxy_pass http://127.0.0.1:3000;
        proxy_http_version 1.1;
        
        # WebSocket support
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        
        # Pass real client info to your app
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        
        proxy_cache_bypass $http_upgrade;
        
        # Timeout settings (adjust based on your app)
        proxy_connect_timeout 60s;
        proxy_send_timeout 60s;
        proxy_read_timeout 60s;
    }
}

Why those headers matter:

Without X-Real-IP and X-Forwarded-For, your application logs will show 127.0.0.1 for every request. Useless for debugging or rate limiting.
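
One refinement to the WebSocket lines above: hard-coding Connection 'upgrade' sends that header on every proxied request, even plain HTTP ones. The pattern from the official Nginx WebSocket proxying docs uses a map in the http context (nginx.conf) so the header is only set when the client actually asks to upgrade:

# In the http block:
map $http_upgrade $connection_upgrade {
    default upgrade;
    ''      close;
}

Then use proxy_set_header Connection $connection_upgrade; in the location block instead of the literal string.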

Trailing slash gotcha:

# Inside a block like "location /api/ { ... }", these behave DIFFERENTLY:
proxy_pass http://127.0.0.1:3000;   # Request: /api/users → Backend: /api/users
proxy_pass http://127.0.0.1:3000/;  # Request: /api/users → Backend: /users

When proxy_pass includes a URI part (even just a trailing /), Nginx replaces the matched location prefix with it; without one, the original URI is passed through unchanged. This subtle difference is a common source of debugging headaches.

Use Case 3: File Uploads (Don’t Skip This)

Default Nginx config limits uploads to 1MB. Your users will see a cryptic 413 Request Entity Too Large error.

Add this to your server block:

server {
    # ... other config ...
    
    client_max_body_size 50M;  # Adjust as needed
    
    # For large uploads, also consider:
    client_body_timeout 120s;
    client_body_buffer_size 128k;
}

This one bites in production precisely because uploads larger than 1MB rarely get tested during development.
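
An easy way to catch it before users do is to push an oversized file through the proxy yourself; the endpoint and filename here are placeholders for whatever your app exposes:

# Create a ~60MB dummy file and POST it; expect 413 until client_max_body_size is raised
dd if=/dev/zero of=/tmp/bigfile.bin bs=1M count=60
curl -i -X POST -F "file=@/tmp/bigfile.bin" https://example.com/upload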

SSL Setup with Certbot

HTTPS is essential for production deployments. Here’s a quick setup using Certbot:

sudo apt install certbot python3-certbot-nginx
sudo certbot --nginx -d example.com -d www.example.com

Certbot automatically:

  • Gets certificates from Let’s Encrypt
  • Modifies your Nginx config to use them
  • Sets up auto-renewal

Verify auto-renewal works:

sudo certbot renew --dry-run
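
On most Ubuntu installs renewal is driven by a systemd timer (named certbot.timer for the apt package, snap.certbot.renew.timer for the snap), so you can also confirm it's scheduled:

systemctl list-timers | grep certbot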

What your config looks like after Certbot:

server {
    listen 80;
    server_name example.com www.example.com;
    return 301 https://$host$request_uri;  # Force HTTPS ($host keeps whichever hostname was requested)
}

server {
    listen 443 ssl;
    server_name example.com www.example.com;

    ssl_certificate /etc/letsencrypt/live/example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;
    include /etc/letsencrypt/options-ssl-nginx.conf;
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem;

    # ... rest of your config ...
}
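
To double-check which certificate is actually being served (handy after a renewal), you can ask Certbot what it manages and compare it with what the live endpoint presents:

sudo certbot certificates

echo | openssl s_client -connect example.com:443 -servername example.com 2>/dev/null \
  | openssl x509 -noout -dates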

Enabling Your Config (The Symlink Dance)

# Create symlink
sudo ln -s /etc/nginx/sites-available/myapp /etc/nginx/sites-enabled/

# ALWAYS test before reloading
sudo nginx -t

# If syntax is OK, reload (not restart - keeps connections alive)
sudo systemctl reload nginx

Remove the default site (common source of conflicts):

sudo rm /etc/nginx/sites-enabled/default

Forgetting to remove the default site is a frequent cause of “why is my site showing the wrong content” issues.

When Things Break: Debugging Commands

Essential commands for troubleshooting Nginx issues:

# Check config syntax
sudo nginx -t

# Dump the FULL computed config (super useful)
sudo nginx -T

# Watch error logs in real-time
sudo tail -f /var/log/nginx/error.log

# Watch access logs
sudo tail -f /var/log/nginx/access.log

# Check what's listening on what port
sudo ss -tlnp | grep nginx

# Check Nginx process status
ps aux | grep nginx
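
If the error log is quiet but Nginx refuses to start at all, systemd's journal usually has the reason (bad certificate path, port already in use, and so on):

sudo journalctl -u nginx --since "1 hour ago" --no-pager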

Common Errors and Fixes

502 Bad Gateway

Your backend isn’t running or Nginx can’t reach it.

# Is your app actually running?
sudo ss -tlnp | grep 3000

# Check if it's a socket permission issue
ls -la /var/run/yourapp.sock
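
It also helps to bypass Nginx entirely and hit the backend directly from the server; if this fails too, the problem is the app rather than the proxy (port 3000 as in the example above):

curl -i http://127.0.0.1:3000/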

504 Gateway Timeout

The backend is responding too slowly. Increase timeouts:

proxy_connect_timeout 300s;
proxy_send_timeout 300s;
proxy_read_timeout 300s;

Note: This is a temporary fix. Investigate and optimize the slow backend response.

403 Forbidden

Usually permissions. On Debian/Ubuntu, Nginx runs as www-data:

# Check ownership
ls -la /var/www/myapp

# Fix it
sudo chown -R www-data:www-data /var/www/myapp
sudo chmod -R 755 /var/www/myapp
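
Remember that every parent directory also needs execute (traverse) permission for www-data; namei walks the whole path and shows where access breaks down:

namei -l /var/www/myapp/dist/index.html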

Connection Refused

# Is Nginx running?
systemctl status nginx

# Firewall blocking?
sudo ufw status

# SELinux issues? (CentOS/RHEL)
sudo setsebool -P httpd_can_network_connect 1

Quick Reference: Useful Snippets

Redirect www to non-www (or vice versa)

server {
    listen 80;
    server_name www.example.com;
    return 301 https://example.com$request_uri;
}
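
Note that this block only catches plain-HTTP requests. Someone who types https://www.example.com directly lands on port 443, so you also want a small SSL server block for the www name, reusing the Certbot paths (fine as long as the certificate covers both names):

server {
    listen 443 ssl;
    server_name www.example.com;

    ssl_certificate /etc/letsencrypt/live/example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;

    return 301 https://example.com$request_uri;
}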

Custom 404 Page

error_page 404 /404.html;
location = /404.html {
    root /var/www/myapp/errors;
    internal;
}

Block Bad Bots

if ($http_user_agent ~* (scrapy|curl|wget|python-requests)) {
    return 403;
}

Basic Rate Limiting

In nginx.conf (http block):

limit_req_zone $binary_remote_addr zone=one:10m rate=10r/s;

In your server block:

location /api/ {
    limit_req zone=one burst=20 nodelay;
    proxy_pass http://127.0.0.1:3000;
}
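
By default, requests rejected by limit_req get a 503. Many APIs prefer to signal 429 Too Many Requests instead, which is a one-line change in the same location (or server) block:

limit_req_status 429;
limit_req_log_level warn;   # log rejections at warn instead of error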

Gzip Compression

gzip on;
gzip_vary on;
gzip_min_length 1024;
gzip_types text/plain text/css text/xml text/javascript application/javascript application/json application/xml+rss;
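
To verify compression is actually kicking in, request a text asset with an explicit Accept-Encoding header and look for Content-Encoding: gzip in the response (the URL is a placeholder; pick any text/CSS/JS file larger than 1KB):

curl -s -H "Accept-Encoding: gzip" -o /dev/null -D - https://example.com/assets/main.css | grep -i content-encoding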

Wrapping Up

Nginx isn’t complicated—it just has a lot of surface area. The core use cases are straightforward:

  1. Static site? Use root and try_files
  2. Backend app? Use proxy_pass
  3. Always test with nginx -t before reloading
  4. Always set up SSL with Certbot
  5. When things break, tail -f /var/log/nginx/error.log is your best debugging tool

The configs above cover the majority of production deployments. Use them as a starting point, adapt them to your needs, and test thoroughly.

Questions or suggestions? Drop a comment below.