
How to Set Up Nginx as a Reverse Proxy — Route Traffic to Any Backend

Here's the situation I kept running into: I'd have three different apps running on the same server — one on port 3000, one on port 5000, one on port 8000. To access any of them, users would need to know the port number. That's fine for development, but it's a mess in production.

Nginx as a reverse proxy solves this cleanly. Traffic comes in on port 443 (HTTPS), and Nginx decides where to send it based on the domain name or URL path. api.yourdomain.com goes to one backend, app.yourdomain.com goes to another. From the outside, it all looks like one tidy server.

This guide covers the full range of configurations I actually use: basic proxying, path-based routing, load balancing, WebSocket support, and response caching.

I run this on Tencent Cloud Lighthouse with Ubuntu 22.04. Nginx as a reverse proxy adds negligible overhead — it's fast enough that you won't notice it in your response times. The reason this setup works well on Lighthouse specifically: you can run multiple backend services on different internal ports, with Nginx routing public traffic to each one by domain or path — all on a single server at a single flat monthly cost. The console-level firewall lets you lock down all backend service ports while keeping only 80/443 public, without complex UFW rule chains.


Table of Contents

  1. Why Use a Reverse Proxy?
  2. Prerequisites
  3. Part 1 — Basic Reverse Proxy Setup
  4. Part 2 — Domain-Based Routing (Multiple Apps)
  5. Part 3 — Path-Based Routing
  6. Part 4 — Load Balancing Across Multiple Backends
  7. Part 5 — WebSocket Support
  8. Part 6 — Proxy Caching
  9. Part 7 — Security Headers and Rate Limiting
  10. Part 8 — HTTPS Termination
  11. The Gotcha: X-Forwarded-For and Real IP
  12. Proxy Configuration Reference
  13. Troubleshooting

Key Takeaways

  • Always pass Host, X-Real-IP, and X-Forwarded-For headers to the backend
  • WebSocket proxying requires Upgrade and Connection headers plus proxy_http_version 1.1
  • Test every config change with sudo nginx -t before reloading
  • proxy_cache significantly reduces backend load for cacheable responses
  • Trailing slash on proxy_pass affects how URLs are passed to the backend

Frequently Asked Questions {#faq}

What is a reverse proxy and why use Nginx for it?
A reverse proxy receives requests from clients and forwards them to backend services. Nginx handles SSL termination, load balancing, caching, and routing — all before requests reach your application.

What headers should I always pass in an Nginx proxy configuration?
At minimum: proxy_set_header Host $host, proxy_set_header X-Real-IP $remote_addr, and proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for. These let your backend see the real client IP.

How do I proxy WebSocket connections through Nginx?
Add proxy_set_header Upgrade $http_upgrade and proxy_set_header Connection "upgrade" to the location block. Also set proxy_http_version 1.1.

What does proxy_pass http://localhost:3000/ vs proxy_pass http://localhost:3000 do differently?
The trailing slash matters for path handling. Without a trailing slash on proxy_pass, the full URI (including the location prefix) is passed. With a trailing slash, the prefix is stripped. Test both and check your application's URL behavior.

How do I cache static assets at the Nginx proxy level?
Use proxy_cache_path to define a cache zone, then add proxy_cache ZONE_NAME and proxy_cache_valid 200 1d; to your location block.

Why Use a Reverse Proxy? {#why}

Running applications directly on port 80/443 means:

  • Only one app can use each port
  • The app must handle SSL itself
  • No centralized rate limiting or caching

With Nginx as a reverse proxy:

  • Multiple apps share port 80/443 — Nginx routes by domain or path
  • SSL terminates at Nginx — backends talk plain HTTP internally
  • Static files served at full speed — Nginx bypasses the backend entirely
  • Load balancing — distribute traffic across multiple backend instances
  • Caching — cache backend responses to reduce load
  • Security — rate limiting, IP blocking, and header management in one place

Prerequisites {#prerequisites}

Requirement              | Notes
Cloud server             | Tencent Cloud Lighthouse, Ubuntu 22.04
Nginx installed          | sudo apt install nginx
At least one backend app | Running on a local port (3000, 8000, etc.)

Part 1 — Basic Reverse Proxy Setup {#part-1}

The simplest case: forward all traffic from a domain to a local port.

sudo nano /etc/nginx/sites-available/myapp
server {
    listen 80;
    server_name myapp.com www.myapp.com;

    location / {
        proxy_pass http://127.0.0.1:3000;
        proxy_http_version 1.1;

        # Standard proxy headers — always include these
        proxy_set_header Host              $host;
        proxy_set_header X-Real-IP         $remote_addr;
        proxy_set_header X-Forwarded-For   $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;

        # WebSocket support
        proxy_set_header Upgrade           $http_upgrade;
        proxy_set_header Connection        'upgrade';

        proxy_cache_bypass $http_upgrade;
        proxy_read_timeout 300s;
    }
}
sudo ln -s /etc/nginx/sites-available/myapp /etc/nginx/sites-enabled/
sudo nginx -t && sudo systemctl reload nginx
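Before pointing real traffic at the proxy, it helps to confirm what the backend actually receives. This stdlib-only Python sketch stands in for a backend on an ephemeral port and echoes request headers back as JSON; the in-process request simulates what Nginx forwards after applying the proxy_set_header directives above (the IP and header values are made-up examples):

```python
import http.server
import json
import threading
import urllib.request

class EchoHandler(http.server.BaseHTTPRequestHandler):
    """Throwaway backend: replies with the request headers it saw, as JSON."""
    def do_GET(self):
        body = json.dumps({k.lower(): v for k, v in self.headers.items()}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # silence per-request logging
        pass

server = http.server.HTTPServer(("127.0.0.1", 0), EchoHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

# Simulate the request Nginx would forward after proxy_set_header:
req = urllib.request.Request(
    f"http://127.0.0.1:{server.server_port}/",
    headers={
        "X-Real-IP": "203.0.113.9",
        "X-Forwarded-For": "203.0.113.9",
        "X-Forwarded-Proto": "https",
    },
)
seen = json.loads(urllib.request.urlopen(req).read())
server.shutdown()
print(seen["x-real-ip"], seen["x-forwarded-proto"])
```

Running this echo backend on port 3000 behind the real Nginx config and hitting it with curl shows the same forwarded headers arriving.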

Part 2 — Domain-Based Routing (Multiple Apps) {#part-2}

Route different domains to different backend apps on the same server:

# App 1: api.myapp.com → Node.js on port 3000
sudo nano /etc/nginx/sites-available/api.myapp.com
server {
    listen 80;
    server_name api.myapp.com;

    location / {
        proxy_pass http://127.0.0.1:3000;
        include /etc/nginx/proxy_params;
    }
}
# App 2: dashboard.myapp.com → Python on port 5000
sudo nano /etc/nginx/sites-available/dashboard.myapp.com
server {
    listen 80;
    server_name dashboard.myapp.com;

    location / {
        proxy_pass http://127.0.0.1:5000;
        include /etc/nginx/proxy_params;
    }
}

Create a shared proxy_params file to avoid repeating headers (Ubuntu's nginx package already ships a minimal /etc/nginx/proxy_params — this replaces it with a fuller version):

sudo nano /etc/nginx/proxy_params
proxy_http_version 1.1;
proxy_set_header Host              $host;
proxy_set_header X-Real-IP         $remote_addr;
proxy_set_header X-Forwarded-For   $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_set_header Upgrade           $http_upgrade;
proxy_set_header Connection        'upgrade';
proxy_cache_bypass $http_upgrade;
proxy_read_timeout 300s;

Then in each config: include /etc/nginx/proxy_params;


Part 3 — Path-Based Routing {#part-3}

Route different URL paths to different backends — useful for microservices or splitting API from frontend:

server {
    listen 80;
    server_name myapp.com;

    # Frontend (React/Vue static files)
    location / {
        root /var/www/myapp/frontend;
        index index.html;
        try_files $uri $uri/ /index.html;
    }

    # API routes → Node.js backend
    location /api/ {
        proxy_pass http://127.0.0.1:3000/;
        include /etc/nginx/proxy_params;
    }

    # WebSocket endpoint
    location /ws/ {
        proxy_pass http://127.0.0.1:3001/;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }

    # Admin panel → separate Python app
    location /admin/ {
        proxy_pass http://127.0.0.1:8000/;
        include /etc/nginx/proxy_params;
    }
}

Note on trailing slashes: proxy_pass http://127.0.0.1:3000/ (with trailing slash) strips the location prefix from the URI before passing. proxy_pass http://127.0.0.1:3000 (no trailing slash) passes the original URI including the location prefix. This matters for path-based routing — usually you want the trailing slash.
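The rewriting rule can be modeled in a few lines of Python. This is a sketch of the behavior for plain prefix locations only (regex locations and rewrite directives follow different rules):

```python
from urllib.parse import urlsplit

def upstream_uri(location: str, proxy_pass: str, request_uri: str) -> str:
    """Model of how Nginx builds the upstream URI for a prefix location."""
    path = urlsplit(proxy_pass).path
    if path:
        # proxy_pass has a URI part (e.g. a trailing "/"): the matched
        # location prefix is replaced with that path
        return path + request_uri[len(location):]
    # no URI part: the original request URI is passed through unchanged
    return request_uri

print(upstream_uri("/api/", "http://127.0.0.1:3000/", "/api/users"))  # → /users
print(upstream_uri("/api/", "http://127.0.0.1:3000", "/api/users"))   # → /api/users
```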


Part 4 — Load Balancing Across Multiple Backends {#part-4}

Distribute traffic across multiple instances of the same app:

upstream myapp_backend {
    # Default: round-robin
    server 127.0.0.1:3000;
    server 127.0.0.1:3001;
    server 127.0.0.1:3002;
}

server {
    listen 80;
    server_name myapp.com;

    location / {
        proxy_pass http://myapp_backend;
        include /etc/nginx/proxy_params;
    }
}

Load balancing algorithms

upstream myapp_backend {
    # Least connections (best for long-running requests)
    least_conn;

    server 127.0.0.1:3000;
    server 127.0.0.1:3001;
}
upstream myapp_backend {
    # IP hash (sticky sessions — same client always goes to same backend)
    ip_hash;

    server 127.0.0.1:3000;
    server 127.0.0.1:3001;
}

Marking a backend as backup

upstream myapp_backend {
    server 127.0.0.1:3000;
    server 127.0.0.1:3001;
    server 127.0.0.1:3002 backup;  # Only used if primary servers are down
}
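The strategies above are easy to model. A Python sketch using the example server list (like Nginx's ip_hash, only the first three octets of an IPv4 address count, though the hash function here is illustrative, not Nginx's; least_conn needs live connection counts, which this omits):

```python
import itertools
import zlib

SERVERS = ["127.0.0.1:3000", "127.0.0.1:3001", "127.0.0.1:3002"]

# Round-robin: each request goes to the next server in turn
rr = itertools.cycle(SERVERS)

def ip_hash(client_ip: str, servers=SERVERS) -> str:
    """Sticky choice: the same client IP always maps to the same backend.
    Only the first three octets of an IPv4 address are hashed."""
    key = ".".join(client_ip.split(".")[:3]).encode()
    return servers[zlib.crc32(key) % len(servers)]

print([next(rr) for _ in range(4)])  # fourth pick wraps back to the first server
print(ip_hash("203.0.113.7") == ip_hash("203.0.113.99"))  # same /24 → same backend
```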

Part 5 — WebSocket Support {#part-5}

WebSocket connections require specific headers to upgrade from HTTP to WS:

server {
    listen 80;
    server_name ws.myapp.com;

    location / {
        proxy_pass http://127.0.0.1:3000;
        proxy_http_version 1.1;

        # Required for WebSocket upgrade
        proxy_set_header Upgrade    $http_upgrade;
        proxy_set_header Connection "upgrade";

        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;

        # Long timeout for persistent WebSocket connections
        proxy_read_timeout  3600s;
        proxy_send_timeout  3600s;
    }
}

For an app that serves both HTTP and WebSocket on the same port, this configuration handles both: for ordinary requests $http_upgrade is empty, so the Upgrade header forwarded to the backend is empty and the request is treated as plain HTTP.
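This is visible from the backend's side: an upgrade is only attempted when the forwarded request carries Upgrade: websocket. A small Python check a backend might perform (illustrative only — real WebSocket frameworks do this for you):

```python
def is_websocket_upgrade(headers: dict[str, str]) -> bool:
    """True when the forwarded request asks for a WebSocket upgrade.
    For ordinary requests Nginx forwards an empty Upgrade header
    (since $http_upgrade is empty), so this returns False."""
    tokens = [t.strip().lower() for t in headers.get("connection", "").split(",")]
    return "upgrade" in tokens and headers.get("upgrade", "").lower() == "websocket"

print(is_websocket_upgrade({"connection": "Upgrade", "upgrade": "websocket"}))  # True
print(is_websocket_upgrade({"connection": "keep-alive"}))                       # False
```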


Part 6 — Proxy Caching {#part-6}

Cache backend responses in Nginx to reduce load and improve response times for repeated requests:

Add to /etc/nginx/nginx.conf inside the http {} block:

# Define cache zone: 10MB keys, 100MB data, 60 minutes inactive
proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=my_cache:10m max_size=100m inactive=60m use_temp_path=off;

In your server config:

server {
    listen 80;
    server_name myapp.com;

    location / {
        proxy_pass http://127.0.0.1:3000;
        include /etc/nginx/proxy_params;

        # Enable caching
        proxy_cache            my_cache;
        proxy_cache_valid      200 302  10m;
        proxy_cache_valid      404      1m;
        proxy_cache_use_stale  error timeout updating http_500 http_502 http_503 http_504;

        # Cache key includes request method and URI
        proxy_cache_key "$request_method$host$request_uri";

        # Add cache status header for debugging
        add_header X-Cache-Status $upstream_cache_status;
    }

    # Don't cache API endpoints with auth
    location /api/user/ {
        proxy_pass http://127.0.0.1:3000;
        proxy_no_cache 1;
        proxy_cache_bypass 1;
    }
}

Create the cache directory:

sudo mkdir -p /var/cache/nginx
sudo chown www-data:www-data /var/cache/nginx
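The caching directives map onto a simple key → (body, expiry) store. A Python model of proxy_cache_key plus proxy_cache_valid, with timestamps passed explicitly so the example is deterministic (real proxy_cache also honors Cache-Control and Expires headers, which this ignores):

```python
TTL_BY_STATUS = {200: 600, 302: 600, 404: 60}  # seconds, mirrors proxy_cache_valid

class ProxyCache:
    def __init__(self, ttl_by_status):
        self.ttl = ttl_by_status
        self.store = {}

    @staticmethod
    def key(method: str, host: str, uri: str) -> str:
        # Mirrors: proxy_cache_key "$request_method$host$request_uri"
        return f"{method}{host}{uri}"

    def put(self, key, status, body, now):
        ttl = self.ttl.get(status)
        if ttl is not None:            # statuses without a TTL are not stored
            self.store[key] = (body, now + ttl)

    def get(self, key, now):
        entry = self.store.get(key)
        if entry and now < entry[1]:
            return entry[0]            # X-Cache-Status: HIT
        return None                    # MISS or EXPIRED → go to the backend

cache = ProxyCache(TTL_BY_STATUS)
k = ProxyCache.key("GET", "myapp.com", "/products")
cache.put(k, 200, b"<html>...</html>", now=0)
print(cache.get(k, now=599) is not None)  # still fresh
print(cache.get(k, now=601) is None)      # past the 10-minute TTL
```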

Part 7 — Security Headers and Rate Limiting {#part-7}

Rate limiting

Prevent abuse by limiting request rates per IP:

# In http {} block (nginx.conf):
limit_req_zone $binary_remote_addr zone=api_limit:10m rate=10r/s;

# In server {} block:
location /api/ {
    limit_req zone=api_limit burst=20 nodelay;
    proxy_pass http://127.0.0.1:3000;
}

This allows 10 requests/second with a burst of 20.
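limit_req implements a leaky bucket: each request adds one unit, the bucket drains at the configured rate, and anything that would push the level past the burst is rejected. A simplified Python model with explicit timestamps (real Nginx also distinguishes delayed queuing from nodelay, which this collapses):

```python
class LeakyBucket:
    """Simplified model of: limit_req zone=... rate=R burst=B nodelay."""
    def __init__(self, rate: float, burst: int):
        self.rate = rate      # requests per second that drain away
        self.burst = burst    # extra requests tolerated above the rate
        self.excess = 0.0
        self.last = 0.0

    def allow(self, now: float) -> bool:
        # Drain the bucket for the time elapsed since the last request
        self.excess = max(0.0, self.excess - (now - self.last) * self.rate)
        self.last = now
        if self.excess + 1.0 > self.burst + 1.0:
            return False      # rejected (Nginx answers 503 by default)
        self.excess += 1.0
        return True

bucket = LeakyBucket(rate=10, burst=20)
allowed = sum(bucket.allow(0.0) for _ in range(30))  # 30 simultaneous requests
print(allowed)  # the in-rate request plus the burst of 20 pass; the rest fail
```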

Security headers

server {
    # ...

    add_header X-Frame-Options           "SAMEORIGIN" always;
    add_header X-Content-Type-Options    "nosniff" always;
    add_header X-XSS-Protection          "1; mode=block" always;  # legacy; modern browsers ignore it
    add_header Referrer-Policy           "no-referrer-when-downgrade" always;
    add_header Content-Security-Policy   "default-src 'self' https:" always;
    add_header Strict-Transport-Security "max-age=31536000; includeSubDomains" always;
}

Part 8 — HTTPS Termination {#part-8}

After running Certbot, Nginx handles HTTPS termination and the backend receives plain HTTP:

server {
    listen 443 ssl http2;
    server_name myapp.com;

    ssl_certificate     /etc/letsencrypt/live/myapp.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/myapp.com/privkey.pem;
    include             /etc/letsencrypt/options-ssl-nginx.conf;
    ssl_dhparam         /etc/letsencrypt/ssl-dhparams.pem;

    location / {
        proxy_pass http://127.0.0.1:3000;
        include /etc/nginx/proxy_params;
    }
}

server {
    listen 80;
    server_name myapp.com;
    return 301 https://$host$request_uri;
}

The backend app at 127.0.0.1:3000 doesn't need to handle SSL at all. It receives plain HTTP from Nginx.


The Gotcha: X-Forwarded-For and Real IP {#gotcha}

When an app behind Nginx logs the client IP, it sees 127.0.0.1 (Nginx's local address) instead of the real client IP. This breaks IP-based logging, rate limiting, and geo-detection in your app.

The fix: set the X-Forwarded-For and X-Real-IP headers in Nginx (already in the proxy_params file above), and configure your app to trust them.

For Node.js (Express):

app.set('trust proxy', 1);
// Now req.ip returns the real client IP from X-Forwarded-For

For Python (Flask):

from werkzeug.middleware.proxy_fix import ProxyFix
app.wsgi_app = ProxyFix(app.wsgi_app, x_for=1, x_proto=1, x_host=1)

Also configure Nginx to log the forwarded client IP. Define the format in the http {} block and apply it with an access_log directive:

log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                '$status $body_bytes_sent "$http_referer" '
                '"$http_user_agent" "$http_x_forwarded_for"';
access_log /var/log/nginx/access.log main;

Proxy Configuration Reference {#reference}

# Complete proxy location block template
location / {
    proxy_pass http://127.0.0.1:3000;

    # HTTP version
    proxy_http_version 1.1;

    # Headers
    proxy_set_header Host              $host;
    proxy_set_header X-Real-IP         $remote_addr;
    proxy_set_header X-Forwarded-For   $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;

    # WebSocket support
    proxy_set_header Upgrade    $http_upgrade;
    proxy_set_header Connection 'upgrade';

    # Timeouts
    proxy_connect_timeout 60s;
    proxy_send_timeout    300s;
    proxy_read_timeout    300s;

    # Buffer settings
    proxy_buffering          on;
    proxy_buffer_size        4k;
    proxy_buffers            8 4k;

    # Cache bypass
    proxy_cache_bypass $http_upgrade;
}

Troubleshooting {#troubleshooting}

Issue                     | Likely Cause                            | Fix
Connection refused        | Service not running or wrong port       | Check systemctl status SERVICE and verify firewall rules
Permission denied         | Wrong file ownership or permissions     | Check file ownership with ls -la and use chown/chmod to fix
502 Bad Gateway           | Backend service not running             | Restart the backend service; check logs with journalctl -u SERVICE
SSL certificate error     | Certificate expired or domain mismatch  | Run sudo certbot renew and verify domain DNS points to server IP
Service not starting      | Config error or missing dependency      | Check logs with journalctl -u SERVICE -n 50 for the specific error
Out of disk space         | Logs or data accumulation               | Run df -h to identify usage; clean logs or attach CBS storage
High memory usage         | Too many processes or memory leak       | Check with htop; consider upgrading the instance plan if consistently high
Firewall blocking traffic | Port not open in UFW or Lighthouse console | Open the port in the Lighthouse console firewall AND sudo ufw allow PORT

Set up your reverse proxy:
👉 Tencent Cloud Lighthouse — Ubuntu VPS with Nginx
👉 View current pricing and promotions
👉 Explore all active deals and offers