When your OpenClaw deployment grows beyond a single agent handling a handful of skills, you hit an architectural inflection point. The monolithic approach — one instance, all skills loaded, all channels connected — starts showing cracks: slow restarts, skill conflicts, scaling bottlenecks, and deployment risk.
The solution is the same one that transformed web application development a decade ago: microservice architecture. And OpenClaw's skill-based design is naturally suited for it.
Not every OpenClaw deployment needs microservices. If you're running a personal assistant with 5-10 skills on a single server, the monolithic approach is perfectly fine. Don't over-engineer.
But consider microservices when you start hitting the cracks described above: restarts are slow enough to disrupt live conversations, skills conflict with each other, one workload bottlenecks another, or every deployment puts the whole system at risk.
In a microservice OpenClaw architecture, each major skill or skill group runs as an independent service:
                    ┌─────────────────┐
                    │   API Gateway   │
                    │ (Load Balancer) │
                    └────────┬────────┘
                             │
        ┌────────────────────┼────────────────────┐
        │                    │                    │
┌───────▼───────┐   ┌────────▼────────┐  ┌────────▼────────┐
│   OpenClaw    │   │    OpenClaw     │  │    OpenClaw     │
│  Instance A   │   │   Instance B    │  │   Instance C    │
│  (Customer    │   │    (Trading     │  │    (Content     │
│   Service)    │   │     Skills)     │  │    Pipeline)    │
└───────────────┘   └─────────────────┘  └─────────────────┘
        │                    │                    │
        ▼                    ▼                    ▼
   [WhatsApp]          [Broker APIs]        [CMS / Email]
   [Telegram]          [Market Data]       [Social Media]
Each instance runs on its own Tencent Cloud Lighthouse server, with resources sized to its specific workload. Provision instances through the Tencent Cloud Lighthouse Special Offer.
The API Gateway routes incoming requests to the correct OpenClaw instance based on intent or channel:
For simple setups, an nginx reverse proxy with path-based routing works well. For more complex scenarios, consider a service mesh.
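Routing by intent rather than by path needs a small dispatcher in front of the instances. The sketch below shows one way to do it in Python with Flask; the instance addresses, the request payload shape, and the keyword-based intent check are all illustrative assumptions, not part of OpenClaw or its gateway.

```python
# Hypothetical intent/channel router sketch. Instance URLs, the payload
# shape, and the keyword-based "intent" check are assumptions for the example.
from flask import Flask, jsonify, request
import requests

app = Flask(__name__)

# Backend OpenClaw instances (placeholder addresses).
INSTANCES = {
    "customer_service": "http://instance-a-ip:8080",
    "trading": "http://instance-b-ip:8080",
    "content": "http://instance-c-ip:8080",
}

# Channels that are always owned by one service.
CHANNEL_MAP = {"whatsapp": "customer_service", "telegram": "customer_service"}


def pick_instance(channel: str, text: str) -> str:
    """Route by channel first, then fall back to a naive intent check."""
    if channel in CHANNEL_MAP:
        return CHANNEL_MAP[channel]
    lowered = text.lower()
    if any(word in lowered for word in ("order", "refund", "return")):
        return "customer_service"
    if any(word in lowered for word in ("position", "trade", "portfolio")):
        return "trading"
    return "content"


@app.route("/api/message", methods=["POST"])
def route_message():
    payload = request.get_json(force=True)
    target = pick_instance(payload.get("channel", ""), payload.get("text", ""))
    # Forward the request to the chosen instance and relay its reply.
    resp = requests.post(f"{INSTANCES[target]}/api/message", json=payload, timeout=10)
    return jsonify(resp.json()), resp.status_code
```

In practice this dispatcher sits behind (or replaces) the nginx routing shown later; the point is simply that channel and intent are the routing keys.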
Microservices still need to share some state: user profiles, conversation history, configuration. Keep that in a shared data store, such as a Redis instance or a managed database, rather than inside any single OpenClaw instance.
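As a concrete illustration, here is a minimal sketch of profile storage in a shared Redis instance; the host name, key layout, and fields are assumptions for the example, not an OpenClaw schema.

```python
# Minimal shared-state sketch using Redis hashes; key names and fields
# are placeholders, not an OpenClaw schema.
import json
import redis

store = redis.Redis(host="shared-redis-ip", port=6379, decode_responses=True)


def save_profile(user_id: str, profile: dict) -> None:
    """Any instance can write the profile; all instances see the update."""
    store.hset(f"user:{user_id}", mapping={"profile": json.dumps(profile)})


def load_profile(user_id: str) -> dict:
    raw = store.hget(f"user:{user_id}", "profile")
    return json.loads(raw) if raw else {}
```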
Instead of services calling each other directly (tight coupling), use events:
Instance A (Customer Service) publishes:
→ "order_inquiry_received" event
Instance B (Order Management) subscribes:
→ Processes the inquiry
→ Publishes "order_status_retrieved" event
Instance A receives the event:
→ Formats and sends the response to the customer
This decoupling means services can be updated, restarted, or replaced independently.
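Here is a minimal sketch of that flow over Redis pub/sub (one of the event buses suggested in the setup steps below). The channel names mirror the events above, while the host address and payload shape are assumptions for illustration.

```python
# Event-bus sketch with Redis pub/sub, mirroring the order-inquiry flow above.
# The host address, channel names, and payload shape are assumptions.
import json
import redis

bus = redis.Redis(host="shared-redis-ip", port=6379, decode_responses=True)


def publish(event: str, payload: dict) -> None:
    bus.publish(event, json.dumps(payload))


def run_order_management_worker() -> None:
    """Instance B: react to order inquiries and publish the result."""
    sub = bus.pubsub()
    sub.subscribe("order_inquiry_received")
    for message in sub.listen():
        if message["type"] != "message":
            continue  # skip subscribe confirmations
        inquiry = json.loads(message["data"])
        status = {"order_id": inquiry["order_id"], "status": "shipped"}  # placeholder lookup
        publish("order_status_retrieved", status)


# Instance A publishes when a customer asks about an order, e.g.:
# publish("order_inquiry_received", {"order_id": "A123", "user_id": "u42"})
```

RabbitMQ or a managed queue works the same way: services only have to agree on event names and payloads, never on each other's addresses.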
Each instance has its own deployment pipeline:
# Deploy only the customer service instance
ssh lighthouse-a "cd /opt/openclaw && git pull && docker-compose up -d"
# Trading instance stays untouched
No more "we can't deploy the customer service update because the trading skill is in the middle of a backtest."
Group your skills by domain and scaling requirements:
| Service | Skills | Scaling Need | Uptime Requirement |
|---|---|---|---|
| Customer Service | FAQ, Order Lookup, Returns | High (100+ concurrent) | 99.9% |
| Trading | Strategy Engine, Risk Gate, Order Router | Medium (10-20 concurrent) | 99.99% |
| Content | News Generator, Briefing, Social Posts | Low (batch processing) | 99% |
| Internal Tools | Calendar, Meeting Notes, Monitoring | Low | 95% |
Each service gets its own Lighthouse instance, sized to the scaling and uptime needs listed in the table above.
Deploy OpenClaw on each instance using the one-click deployment guide.
Install Redis or RabbitMQ on a shared instance (or use a managed service). Configure each OpenClaw instance to publish and subscribe to relevant event channels.
Set up nginx or a similar reverse proxy to route traffic:
upstream customer_service {
    server instance-a-ip:8080;
}

upstream trading {
    server instance-b-ip:8080;
}

server {
    location /api/customer/ {
        proxy_pass http://customer_service;
    }

    location /api/trading/ {
        proxy_pass http://trading;
    }
}
Each instance only loads the skills it needs. Follow the Skills guide for installation.
Route each channel to the service that owns it: WhatsApp and Telegram to the customer service instance, broker and market data connections to the trading instance, and CMS, email, and social accounts to the content instance.
With multiple instances, monitoring becomes critical. Each instance should expose a /health endpoint, and the gateway (or a small external poller) checks them continuously.
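A gateway-side poller can be as small as the sketch below; the instance addresses and the assumption that /health returns HTTP 200 when healthy are placeholders.

```python
# Simple health poller sketch; instance addresses and the /health contract
# (HTTP 200 when healthy) are assumptions for the example.
import time
import requests

INSTANCES = {
    "customer_service": "http://instance-a-ip:8080/health",
    "trading": "http://instance-b-ip:8080/health",
    "content": "http://instance-c-ip:8080/health",
}


def check_once() -> dict:
    results = {}
    for name, url in INSTANCES.items():
        try:
            ok = requests.get(url, timeout=5).status_code == 200
        except requests.RequestException:
            ok = False
        results[name] = ok
    return results


if __name__ == "__main__":
    while True:
        for name, healthy in check_once().items():
            if not healthy:
                print(f"ALERT: {name} failed its health check")  # hook up real alerting here
        time.sleep(30)
```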
Microservice architecture adds operational complexity. Be honest about the trade-offs:
| Benefit | Cost |
|---|---|
| Independent scaling | More servers to manage |
| Independent deployment | More complex CI/CD |
| Fault isolation | Network communication overhead |
| Team autonomy | Need for service contracts and API versioning |
For most teams, the sweet spot is 2-3 services, not 10. Start with a clear separation (e.g., customer-facing vs. internal), prove the pattern works, then split further if needed.
The Tencent Cloud Lighthouse Special Offer makes it economical to run multiple instances. Start with two: separate your highest-traffic, most critical workload from everything else. Deploy, monitor, and expand the architecture as your needs grow.
Microservices aren't a destination — they're a tool. Use them when the complexity is justified, and keep things simple when it isn't.