I use PM2 for Node.js apps and it works great. But I also run Python scripts, a Go binary, and a Ruby job processor on the same server. None of those play nicely with PM2.
Supervisor manages all of them with one consistent interface. Any process that can run in the foreground gets managed the same way: automatic restart on crash, startup on boot, logs captured and rotated, multiple instances if needed. Language agnostic.
The environment variable handling is where most people get tripped up on first setup — Supervisor doesn't automatically inherit your shell environment, so secrets and API keys need to be explicitly configured. I'll cover that clearly.
I use Supervisor to keep a Python background worker running on one of my servers. The worker processes a job queue continuously, and before Supervisor, I'd occasionally find it had silently died hours earlier. With Supervisor, if it dies, it's back up in under a second.
This guide targets Tencent Cloud Lighthouse instances running Ubuntu 22.04, though Supervisor works on any Linux distribution. The OrcaTerm browser terminal in the Lighthouse console pairs well with Supervisor: you can run supervisorctl status or tail process logs from any browser when you need to check on a background process quickly. The snapshot feature is also useful before adding new Supervisor-managed services, since a pre-change backup lets you restore the previous working configuration if a new service conflicts with an existing one.
Key Takeaways

When you run a background process on a server, three things can go wrong: it crashes and nothing restarts it, the server reboots and it never comes back, and its output vanishes because nothing captures the logs.

Supervisor handles all three: automatic restart on crash, start on boot, and captured, rotated logs.
Supervisor works with any process that can run in the foreground and writes to stdout/stderr. It's language and framework agnostic.
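As a concrete illustration, here is a minimal sketch of what such a foreground process might look like (process_job and the queue are hypothetical stand-ins for real work). The traits that matter to Supervisor are staying in the foreground and writing to stdout:

```python
import sys

def process_job(job):
    """Stand-in for real work: here, just double the payload."""
    return job * 2

def main(queue, max_jobs=None):
    """Process jobs in the foreground (max_jobs caps the loop for testing)."""
    handled = 0
    for job in queue:
        result = process_job(job)
        # Log to stdout; Supervisor captures this into stdout_logfile.
        print(f"processed {job!r} -> {result!r}")
        sys.stdout.flush()  # flush so log lines appear in the file promptly
        handled += 1
        if max_jobs is not None and handled >= max_jobs:
            break
    return handled
```

Notice there is no daemonization, no PID file, and no log-file handling in the worker itself; Supervisor takes care of all of that.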
sudo apt update
sudo apt install -y supervisor
sudo systemctl enable supervisor
sudo systemctl start supervisor
sudo systemctl status supervisor
supervisorctl status
# Should show "No config files found inside directory /etc/supervisor/conf.d"
# (normal on a fresh install with no programs configured yet)
The main Supervisor config is at /etc/supervisor/supervisord.conf. Individual program configs go in /etc/supervisor/conf.d/.
Each program you want Supervisor to manage gets its own .conf file in /etc/supervisor/conf.d/.
Example — a Python worker:
sudo nano /etc/supervisor/conf.d/myworker.conf
[program:myworker]
command=/usr/bin/python3 /opt/myapp/worker.py
directory=/opt/myapp
user=ubuntu
autostart=true
autorestart=true
startretries=3
stderr_logfile=/var/log/supervisor/myworker-error.log
stdout_logfile=/var/log/supervisor/myworker.log
stdout_logfile_maxbytes=10MB
stdout_logfile_backups=5
environment=PYTHONUNBUFFERED="1",DB_URL="postgresql://localhost/myapp"
sudo supervisorctl reread
sudo supervisorctl update
reread loads the new config without applying it. update starts/stops programs based on config changes.
sudo supervisorctl status myworker
# myworker RUNNING pid 12345, uptime 0:00:05
[program:myapp]
# The command to run (full path)
command=/usr/bin/node /opt/myapp/server.js
# Working directory for the process
directory=/opt/myapp
# User to run as
user=ubuntu
# Start automatically when supervisord starts
autostart=true
# Restart if the process exits (including crashes)
autorestart=true
# Consider a process "successfully started" after running this many seconds
startsecs=5
# Number of times to retry starting before giving up
startretries=3
# Exit codes considered "expected" (default: 0)
# With autorestart=unexpected, only exits with codes NOT listed here trigger a restart
exitcodes=0,2
# How many seconds to wait before force-killing a process that won't stop
stopwaitsecs=10
# Log configuration
stdout_logfile=/var/log/supervisor/myapp.log
stdout_logfile_maxbytes=50MB
stdout_logfile_backups=10
stderr_logfile=/var/log/supervisor/myapp-error.log
stderr_logfile_maxbytes=10MB
stderr_logfile_backups=5
# Environment variables (KEY="value" pairs, comma-separated)
environment=NODE_ENV="production",PORT="3000"
# Signal to use when stopping (default: TERM)
stopsignal=TERM
# Priority — lower numbers start first
priority=100
Run multiple related processes together:
[group:myapp]
programs=myapp-web,myapp-worker,myapp-scheduler
priority=100
Then control them as a group:
supervisorctl start myapp:*
supervisorctl stop myapp:*
supervisorctl is the command-line interface for interacting with Supervisor.
# Show all programs and their status
sudo supervisorctl status
# Start a specific program
sudo supervisorctl start myworker
# Stop a program
sudo supervisorctl stop myworker
# Restart a program
sudo supervisorctl restart myworker
# Reload config (after editing .conf files)
sudo supervisorctl reread
sudo supervisorctl update
# Or reload everything (restarts supervisord and ALL managed programs):
sudo supervisorctl reload
# View recent log output for a program
sudo supervisorctl tail myworker
# Follow live log output
sudo supervisorctl tail -f myworker
# Interactive shell
sudo supervisorctl
Running sudo supervisorctl without arguments opens an interactive shell:
supervisor> status
myworker RUNNING pid 12345, uptime 2:34:10
myapp-web RUNNING pid 12346, uptime 2:34:10
supervisor> restart myworker
myworker: stopped
myworker: started
supervisor> quit
[program:api]
command=/usr/bin/node /opt/api/server.js
directory=/opt/api
user=deploy
autostart=true
autorestart=true
environment=NODE_ENV="production",PORT="3000",DB_URL="%(ENV_DATABASE_URL)s"
stdout_logfile=/var/log/supervisor/api.log
stderr_logfile=/var/log/supervisor/api-error.log
stdout_logfile_maxbytes=50MB
stdout_logfile_backups=10
[program:celery-worker]
command=/opt/myapp/venv/bin/celery -A myapp worker --loglevel=info --concurrency=4
directory=/opt/myapp
user=ubuntu
numprocs=1
autostart=true
autorestart=true
startsecs=10
stopwaitsecs=30
stopsignal=QUIT
stdout_logfile=/var/log/supervisor/celery-worker.log
stderr_logfile=/var/log/supervisor/celery-worker-error.log
stdout_logfile_maxbytes=100MB
stdout_logfile_backups=5
[program:mygoapp]
command=/opt/goapp/bin/myserver -port 8080 -config /opt/goapp/config.yaml
directory=/opt/goapp
user=ubuntu
autostart=true
autorestart=true
stdout_logfile=/var/log/supervisor/goapp.log
stderr_logfile=/var/log/supervisor/goapp-error.log
Run 4 worker processes automatically:
[program:worker]
command=/usr/bin/python3 /opt/myapp/worker.py --id %(process_num)02d
directory=/opt/myapp
user=ubuntu
numprocs=4
process_name=%(program_name)s-%(process_num)02d
autostart=true
autorestart=true
stdout_logfile=/var/log/supervisor/worker-%(process_num)02d.log
This creates worker-00, worker-01, worker-02, worker-03 — four independent worker processes.
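On the worker side, each instance can read that --id flag to decide which shard of the work it owns. A minimal sketch of the argument handling in worker.py (the flag name matches the config above; how you use the id is up to you):

```python
import argparse

def parse_worker_id(argv=None):
    """Parse the --id flag that Supervisor fills in via %(process_num)02d."""
    parser = argparse.ArgumentParser(description="queue worker")
    parser.add_argument("--id", type=int, required=True,
                        help="worker index assigned by Supervisor")
    args = parser.parse_args(argv)
    return args.id

# worker-02 is launched as: /usr/bin/python3 /opt/myapp/worker.py --id 02
```

Using type=int means the zero-padded "02" from %(process_num)02d arrives as a plain integer 2.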
[program:laravel-queue]
command=/usr/bin/php /var/www/myapp/artisan queue:work --sleep=3 --tries=3 --max-time=3600
directory=/var/www/myapp
user=www-data
numprocs=2
process_name=%(program_name)s-%(process_num)02d
autostart=true
autorestart=true
stdout_logfile=/var/log/supervisor/laravel-queue.log
stderr_logfile=/var/log/supervisor/laravel-queue-error.log
Supervisor includes a built-in web UI on port 9001. Enable it in /etc/supervisor/supervisord.conf:
[inet_http_server]
port=127.0.0.1:9001
username=admin
password=yourpassword
Reload:
sudo supervisorctl reload
Access via SSH tunnel:
ssh -L 9001:localhost:9001 ubuntu@YOUR_SERVER_IP
Open http://localhost:9001 to see all programs, start/stop them from the browser, and view logs.
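The same port also serves Supervisor's XML-RPC API, so you can script control actions from Python's standard library instead of shelling out to supervisorctl. A sketch, assuming the [inet_http_server] credentials above (adjust host and credentials to yours; nothing is contacted until a method is called):

```python
from xmlrpc.client import ServerProxy

# Credentials and port match the [inet_http_server] section.
server = ServerProxy("http://admin:yourpassword@localhost:9001/RPC2")

def report(proxy):
    """Return (name, statename) for every Supervisor-managed process."""
    return [(p["name"], p["statename"])
            for p in proxy.supervisor.getAllProcessInfo()]

# Other documented calls in the supervisor namespace:
#   proxy.supervisor.startProcess("myworker")
#   proxy.supervisor.stopProcess("myworker")
```

This is handy for health-check scripts or dashboards that need process state without parsing supervisorctl output.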
My Python worker was configured and Supervisor showed it as RUNNING, but the actual job processing wasn't happening. The logs showed the script started, but no jobs were being picked up from the queue.
The issue: my Python script needed environment variables (REDIS_URL, DATABASE_URL) that I'd stored in a .env file. The environment= directive in Supervisor config doesn't read .env files — I needed to list each variable explicitly.
I'd written:
environment=REDIS_URL="redis://localhost:6379"
But the actual Redis URL required a password containing special characters: redis://:p@ssword@localhost:6379. The @ inside the password must be URL-encoded as %40, and I hadn't quoted or escaped the value properly.

The fix: quote each value, URL-encode special characters inside the URL itself, and double any literal % sign, since Supervisor's config parser treats % as the start of an expression like %(program_name)s:

environment=REDIS_URL="redis://:p%%40ssword@localhost:6379",DATABASE_URL="postgresql://user:p%%40ss@localhost/mydb"

Alternatively, store the variables in a separate shell file and source it:
command=/bin/bash -c "source /opt/myapp/.env && python3 /opt/myapp/worker.py"
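To see the sourcing trick in isolation, here is a self-contained demo using a throwaway file at /tmp/demo.env (path and values are hypothetical); the bash -c line has the same shape as the command= above:

```shell
# A .env file that bash can source: plain `export KEY=value` lines.
cat > /tmp/demo.env <<'EOF'
export REDIS_URL='redis://:p@ssword@localhost:6379'
EOF

# Same shape as the Supervisor command= line, pointed at the demo file:
/bin/bash -c "source /tmp/demo.env && python3 -c 'import os; print(os.environ[\"REDIS_URL\"])'"
```

Because bash handles the quoting, the special characters reach the process untouched and no Supervisor-level escaping is needed.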
| Issue | Likely Cause | Fix |
|---|---|---|
| Program shows FATAL | Process crashed immediately | Check logs: supervisorctl tail myapp |
| Program shows BACKOFF | Failing to start (retry loop) | Check command path and permissions |
| Config changes not applied | Not reloaded | Run supervisorctl reread && supervisorctl update |
| Permission denied | Wrong user | Check user= matches actual process requirements |
| Environment vars not available | Not in environment= | Add each var explicitly to environment= |
| Logs filling up disk | Log rotation not set | Add stdout_logfile_maxbytes and stdout_logfile_backups |
| Process starts but immediately stops | startsecs too low | Increase to give process time to initialize |
| supervisorctl connection refused | Supervisord not running | sudo systemctl start supervisor |
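For the environment-variable row in particular, a cheap safeguard is to have the worker fail fast when required variables are missing, instead of starting "successfully" and silently doing nothing. A sketch (variable names are examples; use your own):

```python
import os

REQUIRED_VARS = ("REDIS_URL", "DATABASE_URL")  # example names

def check_environment(env=None):
    """Exit immediately, with a clear message, if required variables are missing."""
    env = os.environ if env is None else env
    missing = [name for name in REQUIRED_VARS if not env.get(name)]
    if missing:
        raise SystemExit(f"missing required environment variables: {missing}")
```

An immediate exit shows up as BACKOFF or FATAL in supervisorctl status, which is far easier to notice than a worker that stays RUNNING while processing nothing.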
✅ What you can now do:

- Install Supervisor and define programs in /etc/supervisor/conf.d/
- Control processes with supervisorctl (start, stop, restart, tail logs)
- Scale workers with numprocs

Supervisor is one of those tools you set up once and largely forget about, until you look at the uptime and realize your process has been running continuously for 47 days without a single manual restart.
When should I use Supervisor instead of managing Docker directly?
If your workloads already run as containers, Docker's own restart policies (for example --restart unless-stopped) cover crash recovery and start-on-boot. Supervisor is the better fit when you run plain host processes in several languages and want one consistent interface for all of them.
Is Supervisor suitable for production use?
For small to medium deployments, yes. Large-scale production typically uses orchestration platforms like Kubernetes. Supervisor is well-suited for individual developers, small teams, and homelab environments.
👉 Get started with Tencent Cloud Lighthouse
👉 View current pricing and launch promotions
👉 Explore all active deals and offers