I'll be honest: I set up database backups on a server for the first time because I'd just had a scare on a different project. Someone ran a query without a WHERE clause and wiped a table. We recovered from a backup, but only because someone had remembered to export it manually a few days earlier.
That was close enough. After that I set up automated backups on every database I manage. It's one of those things where the setup takes an hour and you then forget about it for years — until the one time you need it, and you're very glad you did it.
This guide covers mysqldump for logical backups, cron jobs for automation, binary log-based point-in-time recovery, and the part most guides skip: actually testing that restores work before you need them in production.
I run MySQL on Tencent Cloud Lighthouse. Lighthouse's snapshot feature provides a convenient full-server backup that captures everything — application code, MySQL data, configuration files — in one operation. Database-level backups with
mysqldump give you additional granularity: restore a single table, roll back to a specific point before a query went wrong, or migrate just the database to a different server. Using both together gives you comprehensive coverage: snapshots for fast full-server recovery, mysqldump for surgical database-level recovery.
Key Takeaways
No single backup method covers every failure scenario. I use a layered approach:
| Method | Frequency | What it protects against |
|---|---|---|
| mysqldump (local) | Daily | Accidental data deletion, corruption |
| mysqldump → remote storage | Daily | Local disk failure |
| Binary log backup | Continuous | Point-in-time recovery |
| Lighthouse snapshot | Daily | Full server failure, OS corruption |
The most important rule: test your restores. A backup you've never restored from is a backup you don't know works.
mysqldump is the standard MySQL backup tool. It exports the database as a SQL file you can run to recreate the data.
# Basic dump
mysqldump -u USER -p DATABASE > backup.sql
# Common options for a more complete backup:
#   --single-transaction  consistent snapshot without locking (InnoDB)
#   --routines            include stored procedures and functions
#   --triggers            include triggers
#   --add-drop-table      emit DROP TABLE before each CREATE TABLE
mysqldump \
-u myapp_user \
-p \
--single-transaction \
--routines \
--triggers \
--add-drop-table \
myapp_db > myapp_db_$(date +%Y%m%d_%H%M%S).sql
# Multiple databases in one dump
mysqldump -u root -p --databases db1 db2 db3 > multi_backup.sql
# Everything on the server
mysqldump -u root -p --all-databases --single-transaction > all_databases_$(date +%Y%m%d).sql
# Compress on the fly
mysqldump -u myapp_user -p myapp_db | gzip > myapp_db_$(date +%Y%m%d).sql.gz
Typical compression ratio is 5:1 to 10:1 for database dumps.
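You can check the ratio on your own data. This throwaway sketch measures it on a generated, INSERT-heavy sample file (a stand-in I made up for illustration; point it at a real dump to measure yours):

```shell
# Generate a dummy file that looks like a SQL dump
f=$(mktemp)
for i in $(seq 1 1000); do
  echo "INSERT INTO users VALUES ($i, 'user$i', 'user$i@example.com');" >> "$f"
done
# Compare raw vs gzipped size
orig_bytes=$(wc -c < "$f")
gz_bytes=$(gzip -c "$f" | wc -c)
echo "original: ${orig_bytes} bytes, gzipped: ${gz_bytes} bytes"
echo "ratio: roughly $((orig_bytes / gz_bytes)):1"
rm -f "$f"
```

Repetitive INSERT statements compress very well, which is why piping through gzip is nearly free insurance.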
mkdir -p ~/backups/mysql
nano ~/backup_mysql.sh
#!/bin/bash
# MySQL automated backup script
set -o pipefail  # make the pipeline fail if mysqldump fails, not just gzip
# Configuration
DB_USER="myapp_user"
DB_PASS="your_database_password"  # consider a ~/.my.cnf (chmod 600) instead of a password in the script
DB_NAME="myapp_db"
BACKUP_DIR="$HOME/backups/mysql"
DATE=$(date +%Y%m%d_%H%M%S)
BACKUP_FILE="$BACKUP_DIR/${DB_NAME}_${DATE}.sql.gz"
RETAIN_DAYS=7
# Create backup directory if it doesn't exist
mkdir -p "$BACKUP_DIR"
# Run the backup
mysqldump \
-u "$DB_USER" \
-p"$DB_PASS" \
--single-transaction \
--routines \
--triggers \
"$DB_NAME" | gzip > "$BACKUP_FILE"
# Check if backup succeeded
if [ $? -eq 0 ]; then
echo "$(date): Backup successful: $BACKUP_FILE"
echo " Size: $(du -sh "$BACKUP_FILE" | cut -f1)"
else
echo "$(date): Backup FAILED for $DB_NAME" >&2
exit 1
fi
# Remove backups older than RETAIN_DAYS
find "$BACKUP_DIR" -name "${DB_NAME}_*.sql.gz" -mtime +$RETAIN_DAYS -delete
echo "$(date): Cleaned up backups older than $RETAIN_DAYS days"
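The retention line deserves a sanity check: find -mtime +7 deletes files whose modification time is more than 7 full days old. You can verify that without waiting a week by back-dating timestamps in a scratch directory (a throwaway sketch; GNU touch -d as shipped with Ubuntu):

```shell
# Simulate an old and a recent backup
dir=$(mktemp -d)
RETAIN_DAYS=7
touch -d "10 days ago" "$dir/myapp_db_old.sql.gz"
touch -d "2 days ago"  "$dir/myapp_db_new.sql.gz"
# Same expression as the backup script
find "$dir" -name "myapp_db_*.sql.gz" -mtime +$RETAIN_DAYS -delete
# Record what survived, then clean up
old_survived=$([ -e "$dir/myapp_db_old.sql.gz" ] && echo yes || echo no)
new_survived=$([ -e "$dir/myapp_db_new.sql.gz" ] && echo yes || echo no)
echo "old: $old_survived, new: $new_survived"
rm -rf "$dir"
```

Only the back-dated file should be deleted; the recent one survives.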
chmod +x ~/backup_mysql.sh
# Test it manually first
~/backup_mysql.sh
# Verify the backup was created
ls -lh ~/backups/mysql/
crontab -e
Add:
# Run MySQL backup every day at 3:00 AM
0 3 * * * /home/ubuntu/backup_mysql.sh >> /home/ubuntu/backups/backup.log 2>&1
Check cron is running:
# Check cron service
sudo systemctl status cron
# Monitor backup log
tail -f ~/backups/backup.log
mysql -u myapp_user -p myapp_db < backup.sql
gunzip -c myapp_db_20260420_030000.sql.gz | mysql -u myapp_user -p myapp_db
# Create the new database
mysql -u root -p -e "CREATE DATABASE myapp_db_restored;"
# Restore into it
mysql -u root -p myapp_db_restored < backup.sql
To restore a single table from a full database backup:
# Extract just the CREATE TABLE and INSERT statements for specific_table
grep -n "Table structure for table" backup.sql
# Find the line numbers for the table you need
# Or use sed to extract a specific table
sed -n '/-- Table structure for table `specific_table`/,/-- Table structure for table/p' backup.sql | \
head -n -1 > specific_table_restore.sql
mysql -u myapp_user -p myapp_db < specific_table_restore.sql
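If the sed range looks opaque, here is how it behaves on a tiny synthetic dump (table names here are made up for illustration): it prints from the header of the table you want up to the next table header, and head -n -1 drops that trailing boundary line.

```shell
# Fake two-table dump
dump=$(mktemp)
cat > "$dump" <<'EOF'
-- Table structure for table `users`
CREATE TABLE users (id INT);
INSERT INTO users VALUES (1);
-- Table structure for table `orders`
CREATE TABLE orders (id INT);
EOF
# Extract only the `users` section
extracted=$(sed -n '/-- Table structure for table `users`/,/-- Table structure for table/p' "$dump" | head -n -1)
echo "$extracted"
rm -f "$dump"
```

The output contains only the users table DDL and data; nothing from orders leaks through.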
Local backups protect against data loss but not against server failure or disk issues. Send backups to remote storage.
# Add to backup script after creating the backup
scp "$BACKUP_FILE" user@backup-server:/backups/mysql/
Rclone supports dozens of storage providers:
# Install rclone
curl https://rclone.org/install.sh | sudo bash
# Configure (interactive)
rclone config
# Choose your provider (S3, Google Drive, Backblaze, etc.)
# Upload backup
rclone copy "$BACKUP_FILE" remote:my-backup-bucket/mysql/
# Add to backup script:
rclone copy "$BACKUP_FILE" remote:my-backup-bucket/mysql/ && \
echo "$(date): Uploaded to remote storage"
Binary logs record every write operation. Combined with a full backup, you can restore to any point in time — useful for recovering from "I accidentally ran UPDATE without WHERE."
sudo nano /etc/mysql/mysql.conf.d/mysqld.cnf
[mysqld]
log_bin = /var/log/mysql/mysql-bin.log
binlog_expire_logs_seconds = 604800 # Keep 7 days of logs
max_binlog_size = 100M
sudo systemctl restart mysql
Scenario: full backup from 3 AM, accidental DELETE at 2 PM, need to restore to 1:59 PM.
# 1. Restore the 3 AM backup
mysql -u root -p myapp_db < backup_3am.sql
# 2. Extract the changes between the backup time and 1:59 PM
#    (list files with SHOW BINARY LOGS; the window may span more than one)
sudo mysqlbinlog --start-datetime="2026-04-20 03:00:00" \
--stop-datetime="2026-04-20 13:59:00" \
/var/log/mysql/mysql-bin.000001 > changes.sql
# 3. Apply the changes
mysql -u root -p myapp_db < changes.sql
Tencent Cloud Lighthouse provides server-level snapshots — a complete image of the server disk at a point in time.
Enable automatic snapshots from the snapshot settings for your instance in the Lighthouse console.
This complements database-level backups:
If something goes catastrophically wrong with the server itself (disk failure, OS corruption), you restore from the snapshot. For data-level issues (someone ran DELETE FROM users), you restore from the database backup.
A backup you've never tested is a backup you can't trust.
# Monthly test: restore yesterday's backup to a test database
mysql -u root -p -e "CREATE DATABASE IF NOT EXISTS myapp_db_test;"
gunzip -c ~/backups/mysql/myapp_db_latest.sql.gz | mysql -u root -p myapp_db_test  # substitute your newest timestamped file
# Verify data
mysql -u root -p myapp_db_test -e "SHOW TABLES; SELECT COUNT(*) FROM users;"
# Clean up
mysql -u root -p -e "DROP DATABASE myapp_db_test;"
# Check if the gzip file is valid
gunzip -t ~/backups/mysql/myapp_db_20260420.sql.gz
echo "Exit code: $?" # 0 = valid, non-zero = corrupted
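To see what detection actually looks like, this throwaway demonstration builds a small gzip file and corrupts a copy by truncating it; the intact file passes the integrity test and the truncated one fails:

```shell
# Make a valid gzip file
good=$(mktemp --suffix=.gz)
printf 'CREATE TABLE t (id INT);\n' | gzip > "$good"
# Corrupt a copy by keeping only the first 10 bytes
bad=$(mktemp --suffix=.gz)
head -c 10 "$good" > "$bad"
# Test both; record pass/fail
gunzip -t "$good" 2>/dev/null && good_ok=yes || good_ok=no
gunzip -t "$bad" 2>/dev/null && bad_ok=yes || bad_ok=no
echo "intact: $good_ok, truncated: $bad_ok"
rm -f "$good" "$bad"
```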
# Check file size (should be consistent with previous backups)
ls -lh ~/backups/mysql/ | sort -k5 -h
The scenario: something goes wrong, you go to restore from backup, and discover the backup file is corrupted, the restore command is subtly different from what you remember, or the backup was incomplete.
This is not hypothetical. It happens.
The rule: run a test restore every month (or after any significant schema change). Create a temporary database, restore into it, verify the data looks right, then drop the test database. 10 minutes of work, and you know your backup actually works.
Also: verify your automated backups are actually running. Check the log file:
tail -20 ~/backups/backup.log
And check the backup directory has recent files:
ls -lt ~/backups/mysql/ | head -5
# Should show recent timestamps
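To automate that freshness check, a small function can alert when the newest backup is missing or stale. This is a sketch under assumed paths and thresholds (the directory and 48-hour window are examples); it would suit a second cron job:

```shell
# Print an ALERT and return non-zero if the newest .sql.gz in a
# directory is missing or older than max_hours
check_backup_age() {
  dir="$1"
  max_hours="$2"
  latest=$(ls -t "$dir"/*.sql.gz 2>/dev/null | head -n 1)
  if [ -z "$latest" ]; then
    echo "ALERT: no backups found in $dir" >&2
    return 1
  fi
  if [ -n "$(find "$latest" -mmin +$((max_hours * 60)))" ]; then
    echo "ALERT: newest backup $latest is older than ${max_hours}h" >&2
    return 1
  fi
  echo "OK: $latest"
}
# Example: check_backup_age "$HOME/backups/mysql" 48
```

Wire the non-zero exit status into whatever notification channel you already watch (email from cron, a chat webhook, etc.).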
# Backup
mysqldump -u USER -p DB > backup.sql
mysqldump -u USER -p DB | gzip > backup.sql.gz
mysqldump -u root -p --all-databases > all.sql
# Restore
mysql -u USER -p DB < backup.sql
gunzip -c backup.sql.gz | mysql -u USER -p DB
# List binary logs
mysql -u root -p -e "SHOW BINARY LOGS;"
# View binary log contents
mysqlbinlog /var/log/mysql/mysql-bin.000001
# Check backup file size and date
ls -lh ~/backups/mysql/
du -sh ~/backups/mysql/
# Verify gzip backup integrity
gunzip -t backup.sql.gz
| Issue | Likely Cause | Fix |
|---|---|---|
| Connection refused | Service not running or wrong port | Check systemctl status SERVICE and verify firewall rules |
| Permission denied | Wrong file ownership or permissions | Check file ownership with ls -la and use chown/chmod to fix |
| 502 Bad Gateway | Backend service not running | Restart the backend service; check logs with journalctl -u SERVICE |
| SSL certificate error | Certificate expired or domain mismatch | Run sudo certbot renew and verify domain DNS points to server IP |
| Service not starting | Config error or missing dependency | Check logs with journalctl -u SERVICE -n 50 for specific error |
| Out of disk space | Logs or data accumulation | Run df -h to identify usage; clean logs or attach CBS storage |
| High memory usage | Too many processes or memory leak | Check with htop; consider upgrading instance plan if consistently high |
| Firewall blocking traffic | Port not open in UFW or Lighthouse console | Open port in Lighthouse console firewall AND sudo ufw allow PORT |
How long does it take to set up MySQL backups on a cloud server?
With a Lighthouse application image that bundles MySQL (if available), setup takes 10–30 minutes. Starting from a plain Ubuntu image, installing MySQL and configuring the backup script in this guide typically takes 30–90 minutes.
What server specifications do I need?
Backups add little overhead beyond disk space. As a general rule: start with the minimum recommended spec for MySQL, monitor actual usage (backup files consume disk), and upgrade from the Lighthouse console if needed.
How do I keep MySQL updated?
Follow the update procedure for however MySQL was installed (usually the system package manager). Take a Lighthouse snapshot before any major upgrade as a safety net.
Do I need both snapshots and mysqldump backups?
Yes: use Lighthouse snapshots for full-server recovery and mysqldump for granular, database-level recovery. Together they provide comprehensive coverage.
Protect your data with reliable backups:
👉 Tencent Cloud Lighthouse — Cloud server with snapshot protection
👉 View current pricing and promotions
👉 Explore all active deals and offers