
How to Set Up a Node.js API with PostgreSQL on a VPS — A Complete Production Guide

I built my first Node.js + PostgreSQL API and ran it with node server.js in a terminal. Worked great, right up until I closed the terminal.

That's the start of the "production setup" learning curve. The app needs to keep running after you disconnect. It needs to restart if it crashes. Connections to the database need pooling, not a new connection per request. Environment variables shouldn't be hardcoded. HTTPS needs to be configured. The list keeps growing.

This guide covers all of it in one go: a realistic task management API to demonstrate the concepts, PM2 for process management, a connection pool, Nginx reverse proxy, and HTTPS via Let's Encrypt.

I run this on Tencent Cloud Lighthouse with Ubuntu 22.04. Tip: select the Node.js application image when creating your instance — Node.js is pre-installed and you skip the NVM setup steps. The 2 vCPU / 4 GB RAM plan handles a production Node.js + PostgreSQL stack comfortably. Running both the API and database on the same Lighthouse instance keeps latency near zero (localhost communication), and the snapshot feature captures both the application and database state in one backup operation.


Table of Contents

  1. What We're Building
  2. Prerequisites
  3. Part 1 — Server Setup
  4. Part 2 — Install Node.js
  5. Part 3 — Install and Configure PostgreSQL
  6. Part 4 — Create the Node.js API
  7. Part 5 — Database Connection and Migrations
  8. Part 6 — Run with PM2
  9. Part 7 — Configure Nginx
  10. Part 8 — Enable HTTPS
  11. Part 9 — Auto-Deploy from GitHub
  12. The Gotcha: Database Connection Pooling
  13. API Testing Reference

Key Takeaways

  • Use the Lighthouse Node.js application image to skip NVM installation
  • Always use connection pooling (pg Pool) — never create a new DB connection per request
  • Parameterized queries ($1, $2) prevent SQL injection — never concatenate user input
  • Store database credentials in environment variables, not in code
  • PM2 manages the Node.js process; Nginx handles HTTPS and routing

Frequently Asked Questions {#faq}

Should I run Node.js and PostgreSQL on the same server?
For small to medium applications, yes. Running both on the same server keeps database latency near zero (localhost communication) and simplifies backups. Separate servers are warranted only when database load is high enough to need dedicated resources.

What is connection pooling and why does it matter?
Each new database connection carries setup overhead (TCP handshake, authentication, and a new PostgreSQL backend process). Connection pooling reuses existing connections instead of opening a new one for each request. The pg library's Pool class handles this; a typical pool size is 5–20 connections.

How do I handle database connection failures gracefully?
Use a retry mechanism for transient connection errors, implement a health check that includes a database ping, and set a connection timeout on the pool (connectionTimeoutMillis in pg) so failed connection attempts fail fast instead of hanging.
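A minimal sketch of that retry idea in plain Node.js. withRetry is a name chosen here, not part of pg; with the pool from this guide you would pass it something like () => pool.query('SELECT 1'):

```javascript
// Sketch: retry an async database operation on transient connection errors.
// `withRetry` is a hypothetical helper; pass it e.g. () => pool.query('SELECT 1').
async function withRetry(runQuery, attempts = 3, baseDelayMs = 500) {
    for (let attempt = 1; attempt <= attempts; attempt++) {
        try {
            return await runQuery();
        } catch (err) {
            // Only connection-level errors are worth retrying; SQL errors are not.
            const transient = ['ECONNREFUSED', 'ECONNRESET', 'ETIMEDOUT'].includes(err.code);
            if (!transient || attempt === attempts) throw err;
            await new Promise((r) => setTimeout(r, baseDelayMs * attempt)); // linear backoff
        }
    }
}
```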

Should I use an ORM like Prisma or Sequelize, or raw SQL?
For complex applications, an ORM provides type safety and migration management. For simpler APIs, raw SQL with parameterized queries (via pg Pool) offers better control and performance. Both approaches are valid.

How do I run database migrations in production?
Use a migration tool like db-migrate or Prisma Migrate. Run migrations as part of your deployment process, before restarting the application. Always test migrations on a staging environment first.

What We're Building {#what}

Client → HTTPS → Nginx → Node.js API (PM2) → PostgreSQL

A REST API with:

  • Express.js for routing
  • pg (node-postgres) for database access
  • JWT authentication
  • Environment-based configuration
  • PM2 for process management
  • Nginx reverse proxy with HTTPS

Prerequisites {#prerequisites}

Requirement      Notes
Cloud server     Tencent Cloud Lighthouse, Ubuntu 22.04
Node.js 20 LTS   Installed via NVM (skipped if you chose the Node.js application image)
A domain name    For HTTPS; optional for testing

Part 1 — Server Setup {#part-1}

ssh ubuntu@YOUR_SERVER_IP
sudo apt update && sudo apt upgrade -y
sudo apt install -y git curl nginx

sudo ufw allow ssh
sudo ufw allow 'Nginx Full'
sudo ufw enable

Part 2 — Install Node.js {#part-2}

curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.39.7/install.sh | bash
export NVM_DIR="$HOME/.nvm" && source "$NVM_DIR/nvm.sh"

nvm install 20
nvm use 20
nvm alias default 20

npm install -g pm2

Part 3 — Install and Configure PostgreSQL {#part-3}

sudo apt install -y postgresql postgresql-contrib
sudo systemctl start postgresql
sudo systemctl enable postgresql

Create the database and application user:

sudo -u postgres psql
-- Create database
CREATE DATABASE taskapi;

-- Create application user with limited privileges
CREATE USER taskapi_user WITH PASSWORD 'use_a_strong_password_here';

-- Grant database access
GRANT ALL PRIVILEGES ON DATABASE taskapi TO taskapi_user;

-- Connect to the database and grant schema privileges
\c taskapi
GRANT ALL ON SCHEMA public TO taskapi_user;

\q

Verify the connection works:

psql -U taskapi_user -d taskapi -h localhost
# Enter password
# Should connect successfully
# \q to exit

Part 4 — Create the Node.js API {#part-4}

mkdir -p ~/apps/taskapi && cd ~/apps/taskapi
npm init -y

# Install dependencies
npm install express pg dotenv bcryptjs jsonwebtoken cors helmet
npm install --save-dev nodemon

Create .env:

nano .env
NODE_ENV=production
PORT=3000

# Database
DB_HOST=localhost
DB_PORT=5432
DB_NAME=taskapi
DB_USER=taskapi_user
DB_PASSWORD=your_password_here

# JWT
JWT_SECRET=generate-a-long-random-secret-here
JWT_EXPIRES_IN=7d
chmod 600 .env
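Before wiring these variables into server.js, a fail-fast check at startup catches a missing or empty variable early. A sketch (missingEnv is a name chosen here; the variable names match the .env above):

```javascript
// Sketch: fail fast at startup when required environment variables are missing.
// Call this after require('dotenv').config() has populated process.env.
function missingEnv(env, required) {
    return required.filter((name) => !env[name]); // names that are unset or empty
}

// Usage in server.js, before creating the pool:
// const missing = missingEnv(process.env,
//     ['DB_HOST', 'DB_NAME', 'DB_USER', 'DB_PASSWORD', 'JWT_SECRET']);
// if (missing.length) {
//     console.error(`Missing environment variables: ${missing.join(', ')}`);
//     process.exit(1);
// }
```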

Create server.js:

require('dotenv').config();
const express = require('express');
const cors = require('cors');
const helmet = require('helmet');
const { Pool } = require('pg');

const app = express();
const PORT = process.env.PORT || 3000;

// Middleware
app.use(helmet());
app.use(cors());
app.use(express.json());

// Database connection pool
const pool = new Pool({
    host:     process.env.DB_HOST,
    port:     process.env.DB_PORT,
    database: process.env.DB_NAME,
    user:     process.env.DB_USER,
    password: process.env.DB_PASSWORD,
    max: 10,             // Maximum pool size
    idleTimeoutMillis: 30000,
    connectionTimeoutMillis: 2000,
});

// Health check
app.get('/health', async (req, res) => {
    try {
        await pool.query('SELECT 1');
        res.json({ status: 'ok', database: 'connected' });
    } catch (err) {
        res.status(500).json({ status: 'error', database: 'disconnected' });
    }
});

// Tasks routes
app.get('/tasks', async (req, res) => {
    try {
        const result = await pool.query(
            'SELECT * FROM tasks ORDER BY created_at DESC'
        );
        res.json(result.rows);
    } catch (err) {
        res.status(500).json({ error: err.message });
    }
});

app.post('/tasks', async (req, res) => {
    const { title, description } = req.body;
    if (!title) return res.status(400).json({ error: 'Title is required' });

    try {
        const result = await pool.query(
            'INSERT INTO tasks (title, description) VALUES ($1, $2) RETURNING *',
            [title, description]
        );
        res.status(201).json(result.rows[0]);
    } catch (err) {
        res.status(500).json({ error: err.message });
    }
});

app.delete('/tasks/:id', async (req, res) => {
    try {
        const result = await pool.query(
            'DELETE FROM tasks WHERE id = $1 RETURNING *',
            [req.params.id]
        );
        if (result.rows.length === 0) {
            return res.status(404).json({ error: 'Task not found' });
        }
        res.json({ message: 'Task deleted', task: result.rows[0] });
    } catch (err) {
        res.status(500).json({ error: err.message });
    }
});

// Start server
app.listen(PORT, () => {
    console.log(`API running on port ${PORT}`);
});

module.exports = app;

Part 5 — Database Connection and Migrations {#part-5}

Create the initial database schema:

mkdir -p ~/apps/taskapi/migrations
nano ~/apps/taskapi/migrations/001_create_tasks.sql
CREATE TABLE IF NOT EXISTS tasks (
    id          SERIAL PRIMARY KEY,
    title       VARCHAR(255) NOT NULL,
    description TEXT,
    completed   BOOLEAN DEFAULT false,
    created_at  TIMESTAMP DEFAULT NOW(),
    updated_at  TIMESTAMP DEFAULT NOW()
);

-- Index for faster sorting
CREATE INDEX idx_tasks_created_at ON tasks(created_at DESC);

Run the migration:

psql -U taskapi_user -d taskapi -h localhost -f ~/apps/taskapi/migrations/001_create_tasks.sql

Test a database insert:

psql -U taskapi_user -d taskapi -h localhost -c \
  "INSERT INTO tasks (title) VALUES ('Test task') RETURNING *;"

Part 6 — Run with PM2 {#part-6}

cd ~/apps/taskapi
pm2 start server.js --name "taskapi"

# Verify it's running
pm2 status
pm2 logs taskapi

# Test locally
curl http://localhost:3000/health
# {"status":"ok","database":"connected"}

# Set up boot persistence
pm2 startup
# Run the command PM2 outputs
pm2 save

Part 7 — Configure Nginx {#part-7}

sudo nano /etc/nginx/sites-available/taskapi
server {
    listen 80;
    server_name api.yourdomain.com;

    access_log /var/log/nginx/taskapi_access.log;
    error_log  /var/log/nginx/taskapi_error.log;

    location / {
        proxy_pass http://127.0.0.1:3000;
        proxy_http_version 1.1;

        proxy_set_header Host              $host;
        proxy_set_header X-Real-IP         $remote_addr;
        proxy_set_header X-Forwarded-For   $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;

        proxy_connect_timeout 60s;
        proxy_read_timeout    60s;
    }

    # Rate limiting for API endpoints
    location /tasks {
        limit_req zone=api_limit burst=20 nodelay;
        proxy_pass http://127.0.0.1:3000;
        proxy_http_version 1.1;

        proxy_set_header Host              $host;
        proxy_set_header X-Real-IP         $remote_addr;
        proxy_set_header X-Forwarded-For   $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}

Add the rate-limit zone to /etc/nginx/nginx.conf inside the http {} block:

limit_req_zone $binary_remote_addr zone=api_limit:10m rate=10r/s;

Then enable the site and reload:

sudo ln -s /etc/nginx/sites-available/taskapi /etc/nginx/sites-enabled/
sudo nginx -t && sudo systemctl reload nginx

Part 8 — Enable HTTPS {#part-8}

sudo apt install -y certbot python3-certbot-nginx
sudo certbot --nginx -d api.yourdomain.com

Test the API over HTTPS:

curl https://api.yourdomain.com/health

Part 9 — Auto-Deploy from GitHub {#part-9}

Create ~/deploy-taskapi.sh:

#!/bin/bash
set -e
cd ~/apps/taskapi
git pull origin main
npm install --omit=dev
pm2 reload taskapi --update-env
echo "Deploy complete: $(date)"
chmod +x ~/deploy-taskapi.sh

GitHub Actions workflow:

name: Deploy Task API

on:
  push:
    branches: [main]

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - name: Deploy via SSH
        uses: appleboy/ssh-action@v1.0.0
        with:
          host: ${{ secrets.SERVER_IP }}
          username: ubuntu
          key: ${{ secrets.SSH_PRIVATE_KEY }}
          script: ~/deploy-taskapi.sh

The Gotcha: Database Connection Pooling {#gotcha}

A common Node.js + PostgreSQL mistake: creating a new database connection for every request instead of using a connection pool.

Wrong:

// Creates new connection on every request
const client = new Client({ ... });
await client.connect();
const result = await client.query('SELECT ...');
await client.end();

Correct:

// Create pool once at startup, reuse for all requests
const pool = new Pool({ max: 10, ... });
const result = await pool.query('SELECT ...');
// Pool manages connections automatically

The Pool from pg maintains a set of reusable connections. Creating a new Client per request opens and closes a database connection each time — much slower and a potential source of connection exhaustion under load.

Also set max in the Pool config to something reasonable. PostgreSQL defaults to 100 max connections. With max: 10 and 3 Node.js workers, that's up to 30 connections from your app — well within limits.


API Testing Reference {#testing}

# Health check
curl https://api.yourdomain.com/health

# Get all tasks
curl https://api.yourdomain.com/tasks

# Create a task
curl -X POST https://api.yourdomain.com/tasks \
  -H "Content-Type: application/json" \
  -d '{"title": "My first task", "description": "Test the API"}'

# Delete a task
curl -X DELETE https://api.yourdomain.com/tasks/1

Troubleshooting {#troubleshooting}

  • Connection refused: service not running or wrong port. Check systemctl status SERVICE and verify firewall rules.
  • Permission denied: wrong file ownership or permissions. Check ownership with ls -la; fix with chown/chmod.
  • 502 Bad Gateway: the Node.js backend is not running. Restart it with pm2 restart taskapi and check pm2 logs taskapi.
  • SSL certificate error: certificate expired or domain mismatch. Run sudo certbot renew and verify the domain's DNS points to the server IP.
  • Service not starting: config error or missing dependency. Check journalctl -u SERVICE -n 50 for the specific error.
  • Out of disk space: log or data accumulation. Run df -h to identify usage; clean logs or attach CBS storage.
  • High memory usage: too many processes or a memory leak. Check with htop; consider upgrading the instance plan if consistently high.
  • Firewall blocking traffic: port not open in UFW or the Lighthouse console. Open the port in the Lighthouse console firewall AND run sudo ufw allow PORT.

Build your API today:
👉 Tencent Cloud Lighthouse — Node.js + PostgreSQL ready