About two years ago I was running several small projects on separate hosting plans — a WordPress site, a personal API, an Uptime Kuma monitoring dashboard, a Ghost blog. Each had its own monthly cost and its own management overhead.
One weekend I consolidated everything onto a single cloud server using Docker. Total cost: $6/month. All projects running simultaneously, each isolated in its own container.
The key insight is that Docker makes running multiple applications on one server practical. Each app lives in its own container with its own dependencies, so there are no conflicts. Adding a new app means creating a docker-compose.yml file and running one command.
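To make that concrete, here's about the smallest useful `docker-compose.yml` — a hypothetical test service (`traefik/whoami`, a tiny HTTP echo server) you could use to check the workflow end to end:

```yaml
services:
  whoami:
    image: traefik/whoami      # tiny HTTP server that echoes request details
    restart: unless-stopped
    ports:
      - "8000:80"              # host port 8000 maps to container port 80
```

Run `docker compose up -d` in the same directory and it's live on port 8000.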
Here's the full setup.
I run this on Tencent Cloud Lighthouse. The fastest path: Lighthouse has a pre-built Docker CE application image. When creating a new instance, select Application Image → Docker CE and Docker is already installed and running when the server provisions — no manual installation needed, no PATH setup, no reboot. The server is ready to `docker run` in under 2 minutes. Lighthouse also includes OrcaTerm, a browser-based terminal in the control panel — I use it to manage containers from any device without a local SSH client. The console-level firewall makes it easy to open container ports without editing OS firewall rules.
Key Takeaways
- Select the Docker CE application image on Lighthouse to skip manual installation
- `restart: unless-stopped` keeps containers running across server reboots
- Use named volumes (not the container filesystem) for any data you want to keep
- `docker compose up -d` starts all services defined in `docker-compose.yml`
- One server running Docker can host 5–10 separate apps without conflicts
Before Docker, deploying a new app to a server meant installing runtimes and dependencies by hand, and hoping they didn't conflict with whatever was already running there.
The "works on my machine" problem is real. I once spent four hours debugging a production issue that turned out to be a Node.js version mismatch between my laptop (v18) and the server (v16). With Docker, the container you build locally is the exact same thing that runs on the server. Same OS layer, same runtime, same dependencies. That particular class of problem just disappears.
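Pinning the runtime in the image is what kills that class of bug. A minimal sketch of what this looks like in a Dockerfile (the app layout here is illustrative, not the API from later in this post):

```dockerfile
# The tag pins the exact Node version everywhere this image runs
FROM node:18-alpine
WORKDIR /app
COPY package*.json ./
RUN npm ci --omit=dev
COPY . .
CMD ["node", "server.js"]
```

Whatever version the tag names is what runs — on your laptop, on the server, everywhere.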
With Docker Compose, it collapses to two steps: define the stack in one file (`docker-compose.yml`) and start it with one command (`docker compose up -d`).

| What | Notes |
|---|---|
| A Tencent Cloud Lighthouse server | Ubuntu 22.04 LTS |
| SSH access or OrcaTerm | The browser terminal is surprisingly useful |
| Basic Linux comfort | cd, nano, running commands |
| A domain (optional) | Only needed for HTTPS with real domain names |
Cost: Lighthouse starts at ~$5–6/month. Check current new-user promotions — there's usually a discount for new accounts.
Sign in, go to Lighthouse → New
Image: Choose one of two paths:
⚡ Fastest (Recommended): Click Application Images → select Docker CE
Docker is already installed and running when the server starts up. Skip Part 2 entirely — go straight to Part 3.
Alternative: Select System Images → Ubuntu 22.04 LTS if you want a clean OS and will install Docker manually in Part 2.
Plan: Depends on how many containers you want to run:
| Plan | RAM | Good for |
|---|---|---|
| Starter | 2 GB | Learning, 2–3 light containers |
| Basic | 4 GB | 3–5 containers, real workloads |
| Standard | 8 GB | Many services, production |
I started on Starter and moved to Basic when I added the PostgreSQL database. The Starter plan genuinely handles 2–3 small apps fine.
Region: Closest to your users
Open these firewall ports (instance → Firewall → Add Rule):
| Port | Protocol | What |
|---|---|---|
| 22 | TCP | SSH |
| 80 | TCP | HTTP |
| 443 | TCP | HTTPS |
Skip this if you chose the Docker CE application image in Part 1. Docker is already installed and running. Confirm with `docker --version` and proceed to Part 3.
SSH in (or open OrcaTerm from the Lighthouse console):
ssh ubuntu@YOUR_SERVER_IP
The official Docker install script handles everything:
curl -fsSL https://get.docker.com | sudo sh
That script adds Docker's repo, installs Docker Engine, containerd, and the Compose plugin. Then:
# Run Docker without sudo
sudo usermod -aG docker $USER
newgrp docker
# Verify
docker --version # Docker version 26.x.x
docker compose version # Docker Compose version v2.x.x
# Start-on-boot
sudo systemctl enable docker
Run the classic test:
docker run hello-world
If you see "Hello from Docker!" you're good to go.
Let's start simple. We'll serve a static HTML page using Nginx in a container.
mkdir -p ~/apps/mysite && cd ~/apps/mysite
cat > index.html << 'EOF'
<!DOCTYPE html>
<html>
<head><title>Running on Docker</title></head>
<body>
<h1>It works.</h1>
<p>Served by Nginx in a Docker container on Tencent Cloud Lighthouse.</p>
</body>
</html>
EOF
docker run -d \
--name mysite \
--restart unless-stopped \
-p 8080:80 \
-v $(pwd)/index.html:/usr/share/nginx/html/index.html:ro \
nginx:alpine
Visit http://YOUR_SERVER_IP:8080 — your page is live.
What that command does:
| Flag | What it means |
|---|---|
| `-d` | Run in background |
| `--restart unless-stopped` | Auto-restart on crash or reboot |
| `-p 8080:80` | Host port 8080 → container port 80 |
| `-v $(pwd)/index.html:...` | Mount your file into the container |
| `nginx:alpine` | Use the lightweight Alpine-based Nginx image |
That's the mental model for Docker: your files live on the host, the container provides the runtime environment. They're separate.
Single containers are fine. Where Docker gets genuinely powerful is multi-service apps. Here's a Node.js API that talks to a PostgreSQL database — defined in one file.
mkdir -p ~/apps/nodeapi && cd ~/apps/nodeapi
Create docker-compose.yml:
version: '3.8'
services:
db:
image: postgres:16-alpine
container_name: nodeapi_db
restart: unless-stopped
environment:
POSTGRES_DB: myapp
POSTGRES_USER: appuser
POSTGRES_PASSWORD: ${DB_PASSWORD}
volumes:
- postgres_data:/var/lib/postgresql/data
# No ports: section here — DB not accessible from outside, only from other containers
api:
image: node:20-alpine
container_name: nodeapi_app
restart: unless-stopped
working_dir: /app
volumes:
- ./app:/app
ports:
- "3000:3000"
environment:
NODE_ENV: production
DATABASE_URL: postgresql://appuser:${DB_PASSWORD}@db:5432/myapp
PORT: 3000
depends_on:
- db
command: sh -c "npm install && node server.js"
volumes:
postgres_data:
Create the .env file (keep this out of git):
echo "DB_PASSWORD=choose_a_strong_password_here" > .env
chmod 600 .env
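Rather than inventing a password, you can generate one — `openssl` ships with Ubuntu, so this works out of the box:

```shell
# Generate a random password and write the .env file in one step
printf 'DB_PASSWORD=%s\n' "$(openssl rand -base64 24)" > .env
chmod 600 .env
```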
docker compose up -d
# Check both services are running
docker compose ps
# Tail logs from both services
docker compose logs -f
The database and API start, the API connects to the database using the hostname db (Docker's internal DNS resolves container names automatically), and the API is accessible on port 3000. The database is not accessible from outside at all — only other containers on the same Docker network can reach it.
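You can sanity-check both claims once the stack is up (a quick sketch; `getent` comes from BusyBox in the Alpine-based image):

```shell
# Internal DNS: the api container resolves the service name "db"
docker compose exec api getent hosts db

# Isolation: Postgres publishes no host port, so nothing listens on 5432
ss -ltn | grep 5432 || echo "5432 not listening on the host, as intended"
```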
This is the right way to structure production apps.
Uptime Kuma is a self-hosted status monitoring dashboard. It pings your sites and services, and sends you alerts if something goes down. I run it on the same server as everything else it monitors — which is slightly meta, but practical.
mkdir -p ~/apps/uptime-kuma && cd ~/apps/uptime-kuma
docker-compose.yml:
version: '3.8'
services:
uptime-kuma:
image: louislam/uptime-kuma:1
container_name: uptime-kuma
restart: unless-stopped
ports:
- "3001:3001"
volumes:
- uptime-kuma_data:/app/data
environment:
- TZ=America/New_York
volumes:
uptime-kuma_data:
docker compose up -d
Open port 3001 in the Lighthouse firewall, then visit http://YOUR_SERVER_IP:3001 to complete the setup wizard.
You can monitor HTTP/HTTPS sites, TCP ports, DNS resolution, ping — basically anything. Alert channels include email, Telegram, Slack, Discord, and webhooks.
At this point I had apps running on ports 8080, 3000, and 3001. That works, but it requires opening lots of firewall ports and the URLs look terrible. The solution: a reverse proxy that sits on port 80/443 and routes traffic to the right container based on the domain name.
Nginx Proxy Manager gives you a web UI to manage proxy rules. I use this because I add new services occasionally and don't want to manually edit Nginx configs.
mkdir -p ~/apps/proxy && cd ~/apps/proxy
docker-compose.yml:
version: '3.8'
services:
nginx-proxy-manager:
image: jc21/nginx-proxy-manager:latest
container_name: proxy
restart: unless-stopped
ports:
- "80:80"
- "443:443"
- "81:81" # Admin UI
volumes:
- ./data:/data
- ./letsencrypt:/etc/letsencrypt
docker compose up -d
Access the admin panel at http://YOUR_SERVER_IP:81:

- Default login: `admin@example.com` / `changeme` — you'll be prompted to set real credentials on first login

To add a new app:

1. Add a Proxy Host for `api.yourdomain.com`
2. Forward it to `YOUR_SERVER_IP:3000`
3. Request a Let's Encrypt certificate on the SSL tab

That's it — https://api.yourdomain.com now routes to your Node API with a valid HTTPS cert, and you can close port 3000 in the firewall.
The commands I actually use regularly:
# See what's running
docker ps
# See everything including stopped containers
docker ps -a
# Live logs for a container
docker logs -f uptime-kuma
# Get a shell inside a running container (for debugging)
docker exec -it uptime-kuma sh
# Check resource usage
docker stats
# Update all containers to latest images
docker compose pull && docker compose up -d
# Clean up old images and stopped containers
docker system prune
# See how much disk Docker is using
docker system df
Updating an app is the same two commands every time:

cd ~/apps/uptime-kuma
docker compose pull # Pull the latest image
docker compose up -d # Recreate the container with the new image
The volume data persists. The new container picks up right where the old one left off.
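Since all state lives in named volumes, backing an app up means archiving its volume. A common pattern is to tar it from a throwaway container (a sketch — Compose prefixes volume names with the project directory, so check `docker volume ls` for the exact name first):

```shell
# Find the exact volume name (Compose adds a project-name prefix)
docker volume ls | grep uptime-kuma

# Archive the volume's contents via a temporary Alpine container
docker run --rm \
  -v uptime-kuma_uptime-kuma_data:/data:ro \
  -v "$(pwd)":/backup \
  alpine tar czf "/backup/uptime-kuma-$(date +%F).tar.gz" -C /data .
```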
This bears repeating. In your docker-compose.yml, if a service doesn't need to be publicly accessible, don't add a ports: section. The database container in Part 4 has no public ports — it's only reachable by other containers on the same internal Docker network. This is correct.
# Wrong: exposes PostgreSQL to the internet
db:
image: postgres:16
ports:
- "5432:5432" # Don't do this
# Correct: accessible only to other containers
db:
image: postgres:16
# No ports section
Keep secrets in `.env` files, not in compose files:

# Wrong
environment:
DB_PASSWORD: mypassword123
# Correct
environment:
DB_PASSWORD: ${DB_PASSWORD} # Reads from .env
Add .env to .gitignore so it never accidentally gets committed.
Optional but handy: Watchtower can update your containers automatically:

docker run -d \
--name watchtower \
--restart unless-stopped \
-v /var/run/docker.sock:/var/run/docker.sock \
containrrr/watchtower \
--cleanup \
--schedule "0 0 4 * * *"
This checks for updated images every day at 4 AM and recreates containers with new images automatically. The --cleanup flag removes old images after updating. I set this up once and haven't thought about image updates since.
When I first set up the Node.js + PostgreSQL stack from Part 4, the API container kept restarting. The logs said "connection refused" when trying to connect to the database.
I triple-checked the DATABASE_URL. The container name was right. The port was right.
The problem: depends_on in Docker Compose only waits for the container to start, not for the database to be ready to accept connections. PostgreSQL takes a few seconds to initialize. My Node app was trying to connect before PostgreSQL was ready, failing, and crashing.
The fix was adding a retry loop in my Node app's startup code — retry the database connection a few times with a delay before giving up. Something like:
async function connectWithRetry(maxAttempts = 5, delayMs = 5000) {
  for (let i = 1; i <= maxAttempts; i++) {
    try {
      await db.connect();
      console.log('Database connected');
      return;
    } catch (err) {
      if (i === maxAttempts) {
        throw new Error(`Could not connect to database after ${maxAttempts} attempts`);
      }
      console.log(`DB connection attempt ${i} failed, retrying in ${delayMs / 1000}s...`);
      await new Promise(r => setTimeout(r, delayMs));
    }
  }
}
This is good practice in any distributed system — don't assume dependencies are ready immediately. But it's easy to forget when you're just getting started with Docker Compose.
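Compose can also solve this at the orchestration level: give the database a healthcheck and make the API wait for it to pass. A sketch using the service names from Part 4:

```yaml
services:
  db:
    image: postgres:16-alpine
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U appuser -d myapp"]
      interval: 5s
      timeout: 3s
      retries: 5
  api:
    depends_on:
      db:
        condition: service_healthy   # wait for the healthcheck, not just container start
```

The retry loop is still worth keeping — networks can hiccup after startup too.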
Here's what I currently run on my single Lighthouse instance:
| App | Container | Port (internal) |
|---|---|---|
| WordPress blog | wordpress + mysql | 8001 |
| Personal Node.js API | node-api + postgres | 3000 |
| Uptime Kuma monitoring | uptime-kuma | 3001 |
| Ghost blog | ghost + mysql | 2368 |
| Nginx Proxy Manager | proxy | 80/443 (public) |
All of these are behind Nginx Proxy Manager, so only port 80 and 443 are exposed publicly. Every app has its own subdomain and HTTPS certificate.
Total monthly cost: $6 for the server, $0 for software (all open source).
| | Before (4 separate services) | After (Docker on one VPS) |
|---|---|---|
| Monthly cost | ~$40 | $6 |
| Apps running | 4 | 5 |
| Deployment process | SSH into each host, run scripts | cd ~/apps/X && docker compose pull && docker compose up -d |
| SSL management | Each host separately | Nginx Proxy Manager, centralized |
| Backups | Inconsistent | Daily Lighthouse snapshot + per-app volumes |
If you're managing multiple small applications and paying for separate hosting for each, Docker on a single VPS is almost certainly the right consolidation move.
| Issue | Likely Cause | Fix |
|---|---|---|
| Connection refused | Container not running or wrong port mapping | Check `docker ps`, verify the `-p host:container` mapping and firewall rules |
| Permission denied | Wrong ownership on a mounted volume or file | Check ownership with `ls -la` and fix with `chown`/`chmod` on the host path |
| 502 Bad Gateway | Backend container not running | Restart it; check `docker logs CONTAINER` for the reason |
| SSL certificate error | Certificate expired or domain mismatch | Renew or reissue the cert in Nginx Proxy Manager and verify the domain's DNS points to the server IP |
| Container not starting | Config error or missing dependency | Check `docker logs CONTAINER` for the specific error |
| Out of disk space | Old images and logs accumulating | Run `docker system df` and `docker system prune`; clean logs or attach CBS storage |
| High memory usage | Too many containers or a memory leak | Check `docker stats`; consider upgrading the instance plan if consistently high |
| Firewall blocking traffic | Port not open in UFW or the Lighthouse console | Open the port in the Lighthouse console firewall AND `sudo ufw allow PORT` |
Should I use the Docker CE application image or install manually?
Use the Docker CE image — it's pre-installed and ready when the server provisions. Manual installation is only needed for existing servers.
How many containers can run on one server?
Depends on each container's resources. A 4 GB RAM server runs 5–8 typical apps simultaneously. Monitor with docker stats.
What is Docker Compose used for?
It defines multi-container apps in one YAML file. Start your entire stack with docker compose up -d.
How do I keep containers running after a server reboot?
Add restart: unless-stopped to each service in docker-compose.yml.
How do I update a running container?
Pull the new image: docker compose pull, then restart: docker compose up -d.
Set up your Docker server today:
👉 Tencent Cloud Lighthouse — Docker-ready VPS
👉 Check current pricing and launch promotions
👉 Explore all active deals and offers