Every CI/CD pipeline that builds Docker images needs somewhere to push them. Public registries work for open-source projects, but for proprietary application images you need a private option.
A self-hosted Docker registry gives you image storage with no pull rate limits, no exposure of proprietary code, and integration with your own infrastructure. The image is available to any server in your network the moment the pipeline pushes it — no external dependency.
Docker's official registry:2 image is the simplest path to a production-ready private registry. Combined with Nginx for HTTPS and htpasswd for authentication, the setup takes about 30 minutes.
I run the private Docker registry on Tencent Cloud Lighthouse. The entry-level plan handles a small-team registry. As your image library grows, CBS cloud disk expansion lets you add storage without migrating the registry or its images. Lighthouse's low latency to other Lighthouse instances in the same region also means push and pull operations between your CI server and production servers are fast — keeping your build pipeline efficient.
Before you start, you'll need:
| Requirement | Details |
|---|---|
| Server | Ubuntu 22.04, 1 GB+ RAM |
| Docker | Installed and running |
| Domain | For HTTPS (required for non-localhost registries) |
| Storage | Plan for image sizes — 20 GB+ recommended |
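The requirements above can be sanity-checked with a short script before starting. A quick sketch; the 20 GB threshold is just the recommendation from the table:

```shell
#!/bin/sh
# Pre-flight check: is Docker installed, and how much disk is free?

if command -v docker >/dev/null 2>&1; then
    echo "docker: $(docker --version)"
else
    echo "docker: NOT INSTALLED - see the install step below"
fi

# Free space (in GB) on the filesystem that will hold /opt/registry
free_gb=$(df -BG --output=avail / | tail -1 | tr -dc '0-9')
echo "free disk space: ${free_gb} GB"
if [ "$free_gb" -lt 20 ]; then
    echo "warning: less than the recommended 20 GB free"
fi
```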
Install Docker if it isn't already present:

```bash
curl -fsSL https://get.docker.com | sh
sudo systemctl enable docker
sudo usermod -aG docker $USER
newgrp docker
```
Create the storage directory and start the registry container:

```bash
sudo mkdir -p /opt/registry/data
sudo chown -R $USER:$USER /opt/registry

docker run -d \
  --name registry \
  --restart=always \
  -p 127.0.0.1:5000:5000 \
  -v /opt/registry/data:/var/lib/registry \
  registry:2
```
We bind to 127.0.0.1:5000 only — Nginx will handle external traffic with TLS.
Verify it's running:
```bash
docker ps
curl http://localhost:5000/v2/
# Returns: {}
```
Install Nginx and Certbot:

```bash
sudo apt install -y nginx certbot python3-certbot-nginx
```
Add a DNS A record:
```
registry.yourdomain.com → YOUR_SERVER_IP
```
```bash
sudo nano /etc/nginx/sites-available/docker-registry
```

```nginx
server {
    listen 80;
    server_name registry.yourdomain.com;

    # Increase body size for large image pushes
    client_max_body_size 2G;

    location / {
        proxy_pass http://localhost:5000;
        proxy_set_header Host $http_host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_read_timeout 900;
    }
}
```
```bash
sudo ln -s /etc/nginx/sites-available/docker-registry /etc/nginx/sites-enabled/
sudo nginx -t
sudo systemctl reload nginx
```

Issue a TLS certificate:

```bash
sudo certbot --nginx -d registry.yourdomain.com
```
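At this point, before authentication is added, the API should answer over HTTPS. A quick check, assuming DNS has propagated and the certificate was issued:

```shell
# Should return an empty JSON body {} with a 200 status
curl -i https://registry.yourdomain.com/v2/

# Confirm the certificate Certbot installed (issuer appears in curl's verbose output)
curl -vI https://registry.yourdomain.com 2>&1 | grep -i "issuer"
```

Once authentication is enabled in the next step, the same `/v2/` request will return a 401 until you pass credentials.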
Without authentication, anyone who reaches your registry can push and pull images. Always add auth before exposing your registry publicly.
```bash
sudo apt install -y apache2-utils
mkdir -p /opt/registry/auth
htpasswd -Bc /opt/registry/auth/htpasswd yourusername
# Prompted for password
```
Add more users by omitting the `-c` flag (which would overwrite the file):

```bash
htpasswd -B /opt/registry/auth/htpasswd anotheruser
```
Stop the existing container:
```bash
docker stop registry && docker rm registry
```
Start with auth enabled:
```bash
docker run -d \
  --name registry \
  --restart=always \
  -p 127.0.0.1:5000:5000 \
  -v /opt/registry/data:/var/lib/registry \
  -v /opt/registry/auth:/auth \
  -e "REGISTRY_AUTH=htpasswd" \
  -e "REGISTRY_AUTH_HTPASSWD_REALM=Registry Realm" \
  -e "REGISTRY_AUTH_HTPASSWD_PATH=/auth/htpasswd" \
  registry:2
```
Log in from any Docker client:

```bash
docker login registry.yourdomain.com
# Username: yourusername
# Password: (your password)
```
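A side note on where `docker login` puts these credentials: by default they land base64-encoded (not encrypted) in `~/.docker/config.json`, so protect that file. The stored value is just `username:password` run through base64:

```shell
# This is the value docker writes into ~/.docker/config.json
echo -n 'yourusername:yourpassword' | base64
# → eW91cnVzZXJuYW1lOnlvdXJwYXNzd29yZA==

# Decoding recovers the plaintext, which is why the file must stay private
echo -n 'eW91cnVzZXJuYW1lOnlvdXJwYXNzd29yZA==' | base64 -d
# → yourusername:yourpassword
```

On shared machines, consider a Docker credential helper so the password isn't stored in plain base64.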
Push a test image:

```bash
# Build or pull an image first
docker pull nginx:alpine

# Tag it for your registry
docker tag nginx:alpine registry.yourdomain.com/nginx:alpine

docker push registry.yourdomain.com/nginx:alpine
```
On any machine logged in to your registry:
```bash
docker pull registry.yourdomain.com/nginx:alpine
```
Query the registry API directly:

```bash
curl -u yourusername:yourpassword https://registry.yourdomain.com/v2/_catalog
# Returns: {"repositories":["nginx"]}

# List tags for a specific image
curl -u yourusername:yourpassword https://registry.yourdomain.com/v2/nginx/tags/list
```
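The same API can drive cleanup. Deleting a tag is a two-step process: fetch the manifest digest, then DELETE by digest. Note this only works if the registry container was started with deletion enabled (`REGISTRY_STORAGE_DELETE_ENABLED=true`), and disk space is only reclaimed after garbage collection. A sketch:

```shell
# 1. Get the manifest digest (returned in the Docker-Content-Digest header)
curl -sI -u yourusername:yourpassword \
  -H "Accept: application/vnd.docker.distribution.manifest.v2+json" \
  https://registry.yourdomain.com/v2/nginx/manifests/alpine \
  | grep -i docker-content-digest

# 2. Delete the manifest by its digest
curl -X DELETE -u yourusername:yourpassword \
  https://registry.yourdomain.com/v2/nginx/manifests/sha256:<digest>

# 3. Reclaim disk space inside the registry container
docker exec registry bin/registry garbage-collect /etc/docker/registry/config.yml
```

The `Accept` header matters: without it the registry returns a schema-v1 digest that won't match what `docker push` stored.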
The registry doesn't include a built-in web UI. Add docker-registry-ui for a visual interface:
Add a docker-compose.yml to /opt/registry:
```yaml
version: '3.8'

services:
  registry:
    image: registry:2
    restart: always
    ports:
      - "127.0.0.1:5000:5000"
    volumes:
      - ./data:/var/lib/registry
      - ./auth:/auth
    environment:
      REGISTRY_AUTH: htpasswd
      REGISTRY_AUTH_HTPASSWD_REALM: Registry Realm
      REGISTRY_AUTH_HTPASSWD_PATH: /auth/htpasswd

  ui:
    image: joxit/docker-registry-ui:latest
    restart: always
    ports:
      - "127.0.0.1:8080:80"
    environment:
      REGISTRY_TITLE: My Private Registry
      NGINX_PROXY_PASS_URL: http://registry:5000
      SINGLE_REGISTRY: "true"
```
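From /opt/registry, replace the standalone container with the compose stack:

```shell
cd /opt/registry
docker stop registry && docker rm registry   # remove the standalone container first
docker compose up -d
docker compose ps                            # registry on 5000, UI on 8080 (localhost only)
```

The existing data and auth directories are reused as-is, so no images or credentials are lost in the switch.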
Create a second Nginx site for the UI at registry-ui.yourdomain.com, pointing to port 8080.
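The UI site can follow the same pattern as the registry's Nginx config. A minimal sketch, where registry-ui.yourdomain.com is a placeholder for your own subdomain (add a DNS A record for it and run Certbot as before):

```shell
sudo tee /etc/nginx/sites-available/docker-registry-ui > /dev/null <<'EOF'
server {
    listen 80;
    server_name registry-ui.yourdomain.com;

    location / {
        proxy_pass http://localhost:8080;
        proxy_set_header Host $http_host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
EOF
sudo ln -s /etc/nginx/sites-available/docker-registry-ui /etc/nginx/sites-enabled/
sudo nginx -t && sudo systemctl reload nginx
sudo certbot --nginx -d registry-ui.yourdomain.com
```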
```yaml
name: Build and Push

on:
  push:
    branches: [main]

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      - name: Log in to private registry
        run: echo "${{ secrets.REGISTRY_PASSWORD }}" | docker login registry.yourdomain.com -u "${{ secrets.REGISTRY_USERNAME }}" --password-stdin

      - name: Build image
        run: docker build -t registry.yourdomain.com/myapp:${{ github.sha }} .

      - name: Push image
        run: docker push registry.yourdomain.com/myapp:${{ github.sha }}

      - name: Deploy to server
        run: |
          ssh ubuntu@YOUR_SERVER_IP "
            docker pull registry.yourdomain.com/myapp:${{ github.sha }}
            docker rm -f myapp || true
            docker run -d --name myapp -p 3000:3000 \
              registry.yourdomain.com/myapp:${{ github.sha }}
          "
```

Note the `docker rm -f myapp || true` step: stopping a container isn't enough, because `docker run --name myapp` fails if a container with that name still exists, even stopped.
Add REGISTRY_USERNAME and REGISTRY_PASSWORD as GitHub Actions secrets.
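If you use the GitHub CLI, the secrets can be added from a terminal instead of the web UI (values here are placeholders):

```shell
# Requires the gh CLI, authenticated and run inside the repo
gh secret set REGISTRY_USERNAME --body "yourusername"
gh secret set REGISTRY_PASSWORD --body "yourpassword"
gh secret list
```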
Pushing large images was failing with a 413 Request Entity Too Large error from Nginx.
The default Nginx client_max_body_size is 1 MB. Docker images can easily be hundreds of megabytes.
The fix: I had added `client_max_body_size 2G` to the Nginx server block, but I'd forgotten to reload Nginx afterwards:

```bash
sudo nginx -t
sudo systemctl reload nginx
```

Large layer pushes can also time out. If you see push timeouts for very large images, the registry itself is fine; the culprit is usually the Nginx proxy timeout. Add to your Nginx `location` block:

```nginx
proxy_read_timeout 900;
proxy_send_timeout 900;
```
| Issue | Likely Cause | Fix |
|---|---|---|
| `unauthorized: authentication required` | Not logged in | `docker login registry.yourdomain.com` |
| `http: server gave HTTP response to HTTPS client` | Client using HTTP | Ensure registry URL starts with `https://` |
| 413 on push | Nginx body limit | Set `client_max_body_size 2G` in Nginx |
| Push times out | Nginx proxy timeout | Add `proxy_read_timeout 900` to Nginx config |
| Registry disk full | Large images accumulating | Delete old image versions via API or UI |
| Can't access registry API | Firewall blocking | Check port 443 is open in UFW and Lighthouse console |
| Image pull fails in deployment | Auth not configured | Add `docker login` step before `docker pull` in deploy scripts |
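For the "registry disk full" row in particular, it helps to see how much the blob store is using and which repositories are the biggest (paths assume the /opt/registry layout from this guide):

```shell
# Total space occupied by image blobs
du -sh /opt/registry/data

# Largest repositories first
du -sh /opt/registry/data/docker/registry/v2/repositories/* | sort -rh | head
```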
✅ What you built: a private, authenticated Docker registry at https://registry.yourdomain.com, served over HTTPS by Nginx.

Once the registry is running, your workflow becomes: build → push to your registry → pull on any server. No pull rate limits, no public exposure of your images.
When should I use a private Docker registry instead of a public one?
Use a private registry when your images contain proprietary code, when Docker Hub pull rate limits slow down your pipeline, or when you want images to stay entirely inside your own infrastructure.
Is a self-hosted Docker registry suitable for production use?
For individual developers and small-to-medium teams, yes. Larger organizations often adopt a full-featured registry such as Harbor or a managed cloud registry for features like vulnerability scanning and replication, but the official registry:2 image is reliable for straightforward push/pull workloads.
How do I back up the registry's data?
Back up /opt/registry/data (the image blobs) and /opt/registry/auth (the credentials file), not the containers themselves. Combine file-level backups with Lighthouse snapshots for full-instance recovery.
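A minimal backup sketch, assuming the /opt/registry layout from this guide; `backup_registry` is a hypothetical helper name:

```shell
#!/bin/sh
# Tar up the registry's data and auth directories into one compressed archive.
backup_registry() {
    src=$1   # registry root, e.g. /opt/registry
    dest=$2  # output archive path
    tar czf "$dest" -C "$src" data auth
}

# Example (guarded so it is a no-op on machines without /opt/registry):
if [ -d /opt/registry/data ]; then
    backup_registry /opt/registry "/tmp/registry-$(date +%Y%m%d).tar.gz"
fi
```

Run it from cron and ship the archive off the server; restoring is just extracting into /opt/registry on the target machine before starting the containers.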
What happens if I need to migrate to a different server?
Copy /opt/registry/data and /opt/registry/auth to the new server, start the same containers there, and point the registry.yourdomain.com DNS record at the new IP. The registry's on-disk storage layout is portable between hosts.
How do I monitor the registry's resource usage?
Use `docker stats` for real-time monitoring, or deploy Netdata or Prometheus + Grafana for historical metrics and alerts.

👉 Get started with Tencent Cloud Lighthouse
👉 View current pricing and launch promotions
👉 Explore all active deals and offers
More from this series: