Getting an AI assistant up and running shouldn't take an entire afternoon. With OpenClaw (formerly Clawdbot) and Tencent Cloud Lighthouse, you can go from zero to a fully functional, 24/7 AI agent in under five minutes. No Docker headaches. No dependency rabbit holes. Just a one-click deploy and a handful of configuration steps.
Here's the exact workflow.
OpenClaw is an open-source AI assistant whose entire codebase was generated by AI — a fact that helped it go viral in dev communities. But here's the thing: the official community explicitly warns against deploying it on your primary machine. The agent can execute shell commands, access files, and interact with your system at a deep level. Running it on a personal workstation is a security liability.
The recommended path? A cloud instance: specifically, a lightweight VPS that gives you isolation, uptime, and peace of mind. That's where Tencent Cloud Lighthouse comes in.
Head over to the Tencent Cloud Lighthouse Special Offer page. For new users, instances start at $10.08/year — that's not a typo. The bundle includes compute, storage, and network resources with generous bandwidth.
When choosing your instance, the main thing to weigh is simplicity, and that is Lighthouse's key advantage. Unlike traditional cloud VMs that require VPC configuration, security-group gymnastics, and OS hardening, Lighthouse ships as a pre-packaged, ready-to-run environment. It's built for developers who want to ship, not sysadmins who want to tinker.
During the Lighthouse purchase flow, select:
Application Template → AI Agents → OpenClaw (Clawdbot)
That's it. The system provisions an instance with OpenClaw and all its dependencies pre-installed. No apt-get, no pip install, no version conflicts. The application template handles everything.
If you already have a running Lighthouse instance, you can also reinstall the OS with the OpenClaw template. Just note: reinstallation wipes all data, so snapshot your instance first.
For a complete walkthrough with screenshots, see the official deployment guide.
Once your instance is live, open the Lighthouse console and navigate to the Application Management panel. You'll find a section for model configuration.
OpenClaw supports a wide range of LLMs out of the box. Pick your provider and add its API key in the model configuration section. Done. Your agent now has a brain.
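Before pasting a key into the console, it can save a round trip to sanity-check it locally. Here is a minimal sketch that assumes only that a valid key is non-empty and contains no whitespace; the helper name and the key shape are illustrative, not part of OpenClaw or any provider's API:

```shell
# Illustrative helper: catch the most common copy-paste errors
# (empty keys, stray spaces or newlines) before using a key.
check_api_key() {
  key="$1"
  case "$key" in
    ''|*[[:space:]]*)
      # Empty, or contains whitespace: almost certainly a bad paste
      echo "invalid"
      return 1
      ;;
  esac
  # Mask everything but the last four characters for safe logging
  tail4=$(printf '%s' "$key" | tail -c 4)
  echo "ok (...$tail4)"
}
```

Usage: `check_api_key "$MY_PROVIDER_KEY"` prints a masked confirmation on success, so you never echo the full secret into your shell history or logs.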
OpenClaw communicates through channels — messaging platforms where your bot lives. The console natively supports QQ, WeCom, DingTalk, and Lark. For Telegram, Discord, and WhatsApp, you'll configure via the command line.
To set up Telegram, for example, you'd SSH into your instance and run:
clawdbot onboard
Select QuickStart, choose Telegram (Bot API), paste your BotFather token, and restart. The full Telegram integration guide is available here.
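A surprising number of failed onboardings trace back to a mangled token, so it's worth checking that what you pasted actually looks like a BotFather token before restarting. A minimal sketch, assuming the usual numeric-ID:secret shape of Telegram bot tokens (the function name is illustrative; you can also verify a token end-to-end by calling Telegram's `getMe` Bot API method with it):

```shell
# Illustrative check: Telegram bot tokens are typically a numeric bot ID,
# a colon, and a longer alphanumeric secret (with - and _ allowed).
valid_bot_token() {
  printf '%s' "$1" | grep -Eq '^[0-9]+:[A-Za-z0-9_-]{30,}$'
}
```

Usage: `valid_bot_token "$TOKEN" || echo "token looks wrong"`. This only checks the shape; a well-formed but revoked token will still fail at onboarding time.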
If your instance uses the 2026.1.29+ template, OpenClaw runs as a background daemon by default. For older templates, enable persistence manually:
loginctl enable-linger $(whoami) && export XDG_RUNTIME_DIR=/run/user/$(id -u)
clawdbot daemon install
clawdbot daemon start
clawdbot daemon status
If the status shows running, you're golden. Close your terminal — the agent stays alive.
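If you want extra insurance on an older template, a cron job can poll the status and restart the daemon when it stops. A minimal sketch; the exact wording of `clawdbot daemon status` output is an assumption here, so the string check is factored into a small helper you can adapt once you've seen the real output:

```shell
# Illustrative watchdog: decide whether to restart based on status text.
# In a real cron job you would feed it live output, e.g.:
#   ensure_running "$(clawdbot daemon status)"
# and run `clawdbot daemon start` when it reports "restarting".
ensure_running() {
  status_output="$1"
  case "$status_output" in
    *running*)
      echo "alive"
      ;;
    *)
      echo "restarting"
      ;;
  esac
}
```

For most users this is unnecessary: the daemon install already survives reboots, and the 2026.1.29+ templates handle persistence out of the box.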
The entire flow — from purchasing a Lighthouse instance to chatting with your OpenClaw agent on Telegram — takes less than five minutes if you move with purpose. The one-click application template eliminates the deployment friction that kills most self-hosted AI projects before they even start.
If you're evaluating lightweight cloud options, Tencent Cloud Lighthouse hits a sweet spot: simple setup, high performance, and cost-effective pricing starting at $10.08/year for new users. For an always-on AI agent, that's essentially free.
Stop configuring. Start building.