I used to do server maintenance manually. Every Sunday morning: clean the logs, check the disk usage, verify backups ran. It was a routine I could keep up with until I started managing more servers and more tasks.
Cron is the tool that makes those recurring tasks happen automatically — no reminders, no manual steps, no forgetting. Database backup at 2am, log cleanup every Sunday, disk usage check every hour. Set it up once and the server handles it.
The one thing that catches almost everyone is the PATH issue: cron runs with a minimal environment where your usual commands might not be found. I'll make this explicit so you don't spend an afternoon debugging a script that works perfectly when you run it manually.
Cron is built into every Linux system. It's been running scheduled tasks reliably for decades. Once you understand the syntax, setting up automated tasks takes about two minutes per job. I'll also cover systemd timers as a modern alternative for more complex scheduling needs.
The server in this guide is a Tencent Cloud Lighthouse instance running Ubuntu 22.04. Cron works identically on any Linux distribution. The OrcaTerm browser terminal built into the Lighthouse console is particularly convenient for cron setup — you can edit crontab entries, test scripts, and check syslog for cron activity from any browser without a local SSH client. The snapshot feature is also worth using before setting up complex cron schedules — a pre-automation backup means you can roll back if a cleanup job removes something it shouldn't have.
The cron daemon (crond) runs continuously in the background. Every minute it wakes up, checks crontab files and /etc/cron.* directories, and runs any jobs whose schedule matches the current time.
Each user has their own crontab. There's also a system-wide crontab at /etc/crontab and scheduled directories:
- /etc/cron.hourly/ — scripts run every hour
- /etc/cron.daily/ — scripts run daily
- /etc/cron.weekly/ — scripts run weekly
- /etc/cron.monthly/ — scripts run monthly

For most purposes, user crontabs are the right choice.
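One quirk of those directories: on Debian/Ubuntu they are processed by run-parts, which skips filenames containing a dot — which is why the scripts there have no .sh extension. A quick sketch of that rule, using a temp directory instead of /etc/cron.daily:

```shell
# run-parts skips filenames containing a dot (Debian/Ubuntu behavior),
# so "cleanup-tmp" would run daily but "cleanup.sh" would be ignored.
DIR=$(mktemp -d)
printf '#!/bin/sh\necho daily task ran\n' > "$DIR/cleanup-tmp"
printf '#!/bin/sh\necho never runs\n' > "$DIR/cleanup.sh"
chmod +x "$DIR"/*
# --test lists what run-parts would execute, without running anything
run-parts --test "$DIR"
```

Only the extension-free script is listed, confirming the naming rule before you rely on it.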
A cron job line has five schedule fields followed by the command to run:
```
* * * * * command-to-run
│ │ │ │ │
│ │ │ │ └─── Day of week (0-7, where 0 and 7 = Sunday)
│ │ │ └───── Month (1-12)
│ │ └─────── Day of month (1-31)
│ └───────── Hour (0-23)
└─────────── Minute (0-59)
```
| Schedule | Crontab Expression |
|---|---|
| Every minute | `* * * * *` |
| Every 5 minutes | `*/5 * * * *` |
| Every hour (at :00) | `0 * * * *` |
| Every day at 2:30am | `30 2 * * *` |
| Every Monday at 9am | `0 9 * * 1` |
| First day of month at midnight | `0 0 1 * *` |
| Every weekday at 8am | `0 8 * * 1-5` |
| Every 15 minutes | `*/15 * * * *` |
Quick reference tool: crontab.guru — paste any cron expression to see what it means in plain English.
Instead of five time fields, you can use:
| Keyword | Equivalent |
|---|---|
| `@reboot` | Run once at system startup |
| `@hourly` | `0 * * * *` |
| `@daily` | `0 0 * * *` |
| `@weekly` | `0 0 * * 0` |
| `@monthly` | `0 0 1 * *` |
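In a crontab these keywords replace the five schedule fields entirely. A sketch of what that looks like (the script paths are placeholders):

```
@reboot  /opt/myapp/start.sh
@daily   /usr/local/bin/backup-db.sh
@weekly  /usr/local/bin/clean-logs.sh
```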
crontab -e
This opens your user's crontab in the default editor (usually nano). Add jobs here.
First time? You'll be asked which editor to use. Pick nano (option 1) if you're not sure.
crontab -l
Lists your current jobs without opening an editor.
crontab -r
Use with caution — this deletes everything.
Edit another user's crontab (requires root):
sudo crontab -u username -e
Edit /etc/crontab for system-wide jobs — this format includes a username field:
30 2 * * * root /usr/local/bin/backup.sh
# Edit crontab
crontab -e
# Add this line — backup at 2am every night
0 2 * * * /usr/local/bin/backup-db.sh >> /var/log/backup.log 2>&1
Create /usr/local/bin/backup-db.sh:
```shell
#!/bin/bash
DATE=$(date +%Y-%m-%d)
BACKUP_DIR="/opt/backups/mysql"
DB_NAME="myapp"
DB_USER="backup_user"
DB_PASS="yourpassword"   # better: store credentials in ~/.my.cnf with chmod 600

mkdir -p "$BACKUP_DIR"

# Create backup
mysqldump -u "$DB_USER" -p"$DB_PASS" "$DB_NAME" | gzip > "$BACKUP_DIR/backup-$DATE.sql.gz"

# Delete backups older than 30 days
find "$BACKUP_DIR" -name "*.sql.gz" -mtime +30 -delete

echo "$(date): Backup completed - backup-$DATE.sql.gz"
```
chmod +x /usr/local/bin/backup-db.sh
Certbot installs its own cron job or systemd timer automatically. Verify it's there:
sudo systemctl status certbot.timer
# or
sudo crontab -l | grep certbot
If not present, add it to /etc/crontab manually (note the username field in this format):
# Run twice daily (recommended by Let's Encrypt)
0 0,12 * * * root certbot renew --quiet
# Delete logs older than 14 days every Sunday at 3am
0 3 * * 0 find /var/log/myapp -name "*.log" -mtime +14 -delete
# Run analytics report at 6am every Monday
0 6 * * 1 /usr/bin/python3 /opt/myapp/scripts/weekly_report.py >> /var/log/weekly_report.log 2>&1
# Check disk every hour, alert if over 85% full
0 * * * * /usr/local/bin/check-disk.sh
Create /usr/local/bin/check-disk.sh:
```shell
#!/bin/bash
THRESHOLD=85
# Field 5 of df's second output line is the usage percentage; strip the % sign
USAGE=$(df / | awk 'NR==2 {print $5}' | tr -d '%')

if [ "$USAGE" -gt "$THRESHOLD" ]; then
    # Requires a working mail setup (e.g. the mailutils package plus an MTA)
    echo "WARNING: Disk usage is ${USAGE}% on $(hostname)" | \
        mail -s "Disk Alert: $(hostname)" you@example.com
fi
```
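To see exactly what the df-parsing pipeline in that script extracts, you can run it against a captured sample of df output (the numbers below are made up for illustration):

```shell
# Sample df output (values are illustrative)
SAMPLE='Filesystem     1K-blocks     Used Available Use% Mounted on
/dev/vda1       41152736 35401352   5734600  87% /'

# Same pipeline as the script: take line 2, field 5, strip the % sign
USAGE=$(printf '%s\n' "$SAMPLE" | awk 'NR==2 {print $5}' | tr -d '%')
echo "$USAGE"   # 87
```

The result is a bare integer, which is what makes the numeric `-gt` comparison in the script valid.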
For a quick way to restart a service if it stops:
# Check every 5 minutes if myapp is running
*/5 * * * * systemctl is-active --quiet myapp || systemctl start myapp
Note: For production use, PM2 or Supervisor (next guide) is more appropriate.
# Start a custom script on reboot
@reboot /opt/myapp/start.sh >> /var/log/startup.log 2>&1
Cron runs with a minimal environment — $PATH doesn't include your user's full path. Use absolute paths for everything:
# Wrong — 'python3' may not be found
*/5 * * * * python3 /opt/script.py
# Correct — full path
*/5 * * * * /usr/bin/python3 /opt/script.py
Find the path of any command:
which python3
# /usr/bin/python3
which node
# /usr/bin/node
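An alternative to spelling out absolute paths in every entry is setting PATH once at the top of the crontab itself. A sketch (adjust the directories to your system):

```
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
*/5 * * * * python3 /opt/script.py
```

Absolute paths are still the more explicit habit, but a crontab-level PATH keeps long crontabs readable.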
By default, cron emails output to the local mail system, which most VPS setups don't deliver. Redirect output explicitly:
# Discard all output
0 2 * * * /usr/local/bin/backup.sh > /dev/null 2>&1
# Save output to log file
0 2 * * * /usr/local/bin/backup.sh >> /var/log/backup.log 2>&1
# Save stdout and stderr separately
0 2 * * * /usr/local/bin/backup.sh >> /var/log/backup.log 2>> /var/log/backup-errors.log
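These redirection operators behave the same in any shell, so you can verify the stdout/stderr split before trusting it in a cron entry:

```shell
# Write one line to stdout and one to stderr, routed to separate files
log=$(mktemp)
errlog=$(mktemp)
{ echo "backup ok"; echo "backup warning" >&2; } >> "$log" 2>> "$errlog"
cat "$log"       # backup ok
cat "$errlog"    # backup warning
```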
At the top of your crontab:
# Disable all email from cron
MAILTO=""
# Or send to a specific address
MAILTO="you@example.com"
Run scripts manually first to confirm they work:
bash /usr/local/bin/backup-db.sh
echo "Exit code: $?"
Exit code 0 = success. Non-zero = error.
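Exit codes are also what `&&` and `||` chaining in cron entries keys off. A quick demonstration:

```shell
true;  echo "true exits with $?"    # 0 = success
false; echo "false exits with $?"   # 1 = failure

# || runs the right-hand side only when the left side fails
false || echo "fallback ran"

# && runs the right-hand side only when the left side succeeds
true  && echo "next step ran"
```

This is the mechanism behind patterns like pinging a monitoring service only after a backup succeeds.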
Prevent overlapping runs of the same job using flock:
# Only one instance at a time
0 * * * * flock -n /tmp/myjob.lock /usr/local/bin/long-running-job.sh
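You can watch `flock -n` reject a second instance while the first still holds the lock. A sketch, with `sleep` standing in for a long-running job:

```shell
LOCK=/tmp/flock-demo.lock

# Hold the lock for 2 seconds in the background
flock -n "$LOCK" sleep 2 &
sleep 0.5   # give the background job time to grab the lock

# A second -n (non-blocking) attempt fails immediately instead of waiting
if flock -n "$LOCK" true; then
    echo "lock acquired"
else
    echo "lock busy: another instance is running"
fi
wait
```

Without `-n`, the second invocation would block until the first finished, which in cron can pile up waiting jobs.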
Systemd timers are a more modern approach with better logging and dependency management.
/etc/systemd/system/backup.service:
```ini
[Unit]
Description=Database backup

[Service]
Type=oneshot
ExecStart=/usr/local/bin/backup-db.sh
User=ubuntu
```
/etc/systemd/system/backup.timer:
```ini
[Unit]
Description=Run database backup daily at 2am

[Timer]
OnCalendar=*-*-* 02:00:00
Persistent=true

[Install]
WantedBy=timers.target
```
sudo systemctl daemon-reload
sudo systemctl enable backup.timer
sudo systemctl start backup.timer
sudo systemctl status backup.timer
# See all active timers
systemctl list-timers
# View logs from the service
journalctl -u backup.service
# View syslog for cron activity
grep CRON /var/log/syslog | tail -20
# Or with journalctl
journalctl -t CRON --since "1 hour ago"
If you're writing to log files (recommended), check them:
tail -f /var/log/backup.log
For critical jobs, use a monitoring service like Healthchecks.io (free tier available):
# Send a ping after each successful run
0 2 * * * /usr/local/bin/backup.sh && curl -fsS https://hc-ping.com/YOUR-UUID > /dev/null
If the ping doesn't arrive on schedule, you get an alert.
I set up a cron job to run a Node.js script, but it kept failing silently. The crontab entry looked correct, and running the script manually worked fine.
The issue: my Node.js script used require('dotenv').config() to load environment variables from a .env file using a relative path. When cron ran the script, the working directory was / (or the user's home), not the script's directory — so .env wasn't found, and the database connection failed without obvious error.
Two fixes:
Option 1 — Change to the script's directory first:
0 2 * * * cd /opt/myapp && /usr/bin/node scripts/backup.js >> /var/log/backup.log 2>&1
Option 2 — Use __dirname in Node.js for absolute paths:
require('dotenv').config({ path: path.join(__dirname, '../.env') });
Always test cron jobs with cd in the command if your script depends on relative paths.
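The working-directory trap is easy to reproduce outside cron. A sketch using a temp directory (the .env name mirrors the story above):

```shell
# Create a fake app directory with a .env file
APPDIR=$(mktemp -d)
echo "DB_URL=example" > "$APPDIR/.env"

# Simulate cron's behavior: run from an unrelated working directory
cd /
[ -f .env ] && echo ".env found" || echo ".env not found from $PWD"

# Fix: cd into the app directory first, as in Option 1 above
cd "$APPDIR"
[ -f .env ] && echo ".env found after cd"
```

The relative lookup fails from `/` but succeeds after the `cd`, which is exactly what the `cd /opt/myapp &&` prefix in a crontab entry provides.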
| Issue | Likely Cause | Fix |
|---|---|---|
| Job doesn't run | Syntax error in crontab | Review with crontab -l; check the five schedule fields |
| Command not found | PATH issue | Use full absolute path to commands |
| Script works manually, fails in cron | Working directory or env vars | Add cd /path/to/dir && before command |
| No output/feedback | Output not redirected | Add >> /tmp/test.log 2>&1 to job |
| Job runs but nothing happens | Script permissions | chmod +x /path/to/script.sh |
| @reboot job didn't run | cron not started at boot | sudo systemctl enable cron |
| Time zone mismatch | Cron uses system timezone | Set TZ=America/New_York at top of crontab, or check timedatectl |
✅ What you learned:
- Cron syntax: five schedule fields, plus special keywords (@daily, @reboot, etc.)
- Managing crontabs (crontab -e, -l, -r)
- Redirecting output so jobs don't fail silently
- flock for preventing overlapping job runs
- systemd timers as a more modern alternative

Cron is one of those tools that's simple to learn and saves significant manual effort over time. Set it up once for your recurring tasks and let the server handle them reliably.
What's the difference between plain cron and a dedicated job scheduler?
Cron runs commands on a schedule. Dedicated schedulers add capabilities like visual management, dependency handling, error notifications, and often Docker-native integration.
How do I debug a failing cron job?
Check the execution logs first. Verify the command works when run manually with the same user/environment. Common issues: incorrect paths, missing environment variables, permission problems.
How do I make cron jobs resilient to failures?
Implement retry logic, alert on failures (email/Slack notification), and log output to a file. For critical tasks, consider writing a simple success/failure status to a monitoring endpoint.
What happens if the server restarts — do scheduled tasks continue?
Yes. The cron daemon starts automatically on boot and resumes its schedule, though jobs missed during downtime are skipped. A systemd timer with Persistent=true (as shown in this guide) will run a missed job once at startup.
How do I monitor scheduled tasks?
Check timer status with systemctl list-timers, or grep the syslog for CRON entries. For important tasks, write output to a log file and use a monitoring tool to check for failures.

👉 Get started with Tencent Cloud Lighthouse
👉 View current pricing and launch promotions
👉 Explore all active deals and offers