
OpenClaw Quick Start - 5-Minute Installation Tutorial

Getting an AI assistant up and running shouldn't take an entire afternoon. With OpenClaw (formerly Clawdbot) and Tencent Cloud Lighthouse, you can go from zero to a fully functional, 24/7 AI agent in under five minutes. No Docker headaches. No dependency rabbit holes. Just a one-click deploy and a handful of configuration steps.

Here's the exact workflow.


Why Not Run It Locally?

OpenClaw is an open-source AI assistant whose entire codebase was generated by AI, a fact that helped it go viral in dev communities. But here's the thing: the official community explicitly warns against deploying it on your primary machine. The agent can execute shell commands, access files, and interact with your system at a deep level. Running it on a personal workstation is a security liability.

The recommended path? A cloud instance. Specifically a lightweight VPS that gives you isolation, uptime, and peace of mind. That's where Tencent Cloud Lighthouse comes in.


Step 0: Grab a Lighthouse Instance

Head over to the Tencent Cloud Lighthouse Special Offer page. For new users, instances start at $10.08/year — that's not a typo. The bundle includes compute, storage, and network resources with generous bandwidth.

When choosing your instance:

  • Minimum spec: 2 vCPUs, 2 GB RAM
  • Recommended spec: 2 vCPUs, 4 GB RAM (or higher for heavier workloads)
  • Region: Pick a domestic region if you plan to integrate with QQ or Chinese LLM APIs. Choose an overseas region for Discord, Telegram, or models like GPT and Gemini.

The key advantage of Lighthouse is its simplicity. Unlike traditional cloud VMs that require VPC configuration, security group gymnastics, and OS hardening, Lighthouse ships as a pre-packaged, ready-to-run environment. It's built for developers who want to ship, not sysadmins who want to tinker.


Step 1: Deploy OpenClaw (One Click)

During the Lighthouse purchase flow, select:

Application Template → AI Agents → OpenClaw (Clawdbot)

That's it. The system provisions an instance with OpenClaw and all its dependencies pre-installed. No apt-get, no pip install, no version conflicts. The application template handles everything.

If you already have a running Lighthouse instance, you can also reinstall the OS with the OpenClaw template. Just note: reinstallation wipes all data, so snapshot your instance first.

For a complete walkthrough with screenshots, see the official deployment guide.


Step 2: Configure Your LLM API Key

Once your instance is live, open the Lighthouse console and navigate to the Application Management panel. You'll find a section for model configuration.

OpenClaw supports a wide range of LLMs out of the box:

  • Tencent Hunyuan / Tencent Cloud DeepSeek
  • DeepSeek, Qwen (Tongyi Qianwen), Kimi, Zhipu, Doubao
  • OpenAI GPT, Google Gemini

To add a key:

  1. Go to your model provider's dashboard and generate an API key.
  2. Paste it into the Models → API Key field in the Lighthouse console.
  3. Click Add and Apply.
  4. Wait for the status to flip to "in use".

Done. Your agent now has a brain.
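If the key fails to apply, a quick way to rule out the key itself is a smoke test from any shell. This is a hedged sketch assuming an OpenAI-compatible provider; the DeepSeek endpoint and model name are shown as examples, so substitute your own provider's values:

```shell
# Smoke-test an LLM API key before configuring OpenClaw.
# Assumes an OpenAI-compatible provider; the endpoint and model below
# are DeepSeek's, used as an example -- substitute your provider's.
API_KEY="${DEEPSEEK_API_KEY:-}"
if [ -z "$API_KEY" ]; then
  msg="Set DEEPSEEK_API_KEY before running this check"
  echo "$msg"
else
  # A working key returns a normal chat completion; a bad one returns
  # an authentication error in the JSON body.
  curl -sS https://api.deepseek.com/chat/completions \
    -H "Content-Type: application/json" \
    -H "Authorization: Bearer $API_KEY" \
    -d '{"model": "deepseek-chat", "messages": [{"role": "user", "content": "ping"}], "max_tokens": 5}'
fi
```

If this call succeeds from your instance but the console still shows the key as not in use, the problem is on the configuration side rather than with the provider.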


Step 3: Connect a Channel

OpenClaw communicates through channels — messaging platforms where your bot lives. The console natively supports QQ, WeCom, DingTalk, and Lark. For Telegram, Discord, and WhatsApp, you'll configure via the command line.

To set up Telegram, for example, you'd SSH into your instance and run:

clawdbot onboard

Select QuickStart, choose Telegram (Bot API), paste your BotFather token, and restart. See the official docs for the full Telegram integration guide.
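Before running onboard, you can confirm the BotFather token itself is valid using the standard Telegram Bot API getMe method. The BOT_TOKEN variable below is a placeholder you must export yourself:

```shell
# Check a BotFather token against the Telegram Bot API's getMe method.
# BOT_TOKEN is a placeholder -- export your real token first.
BOT_TOKEN="${BOT_TOKEN:-}"
if [ -z "$BOT_TOKEN" ]; then
  result="no token set"
else
  # A valid token returns {"ok":true,...}; an invalid one {"ok":false,...}
  result=$(curl -sS "https://api.telegram.org/bot${BOT_TOKEN}/getMe")
fi
echo "$result"
```

Checking the token first separates "my token is wrong" from "my channel config is wrong", which saves a round of debugging inside the onboard flow.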


Step 4: Keep It Running 24/7

If your instance uses the 2026.1.29+ template, OpenClaw runs as a background daemon by default. For older templates, enable persistence manually:

# Let user services keep running after logout (requires systemd-logind)
loginctl enable-linger $(whoami) && export XDG_RUNTIME_DIR=/run/user/$(id -u)
# Install, start, and verify the OpenClaw daemon
clawdbot daemon install
clawdbot daemon start
clawdbot daemon status

If the status shows running, you're golden. Close your terminal — the agent stays alive.
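To double-check that lingering actually took effect (that's what keeps the daemon alive after you log out), you can query systemd-logind directly. The guard below simply degrades gracefully on hosts without it:

```shell
# Report whether user lingering is enabled (requires systemd-logind).
if command -v loginctl >/dev/null 2>&1 \
   && status=$(loginctl show-user "$(whoami)" --property=Linger 2>/dev/null); then
  echo "$status"   # Linger=yes means user services survive logout
else
  status="lingering status unavailable (no systemd-logind session?)"
  echo "$status"
fi
```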


Common Pitfalls

  • Don't enable all hooks during onboard. In particular, avoid auto-run scripts and command trace logging unless you know what you're doing: they spike CPU usage and widen the attack surface.
  • Don't expose the WebUI directly. The default setup blocks public IP access to the web panel for good reason. If you need it, tunnel through SSH or configure a reverse proxy with auth.
  • Do use snapshots. Before any major config change, snapshot your instance via the Lighthouse console. It takes seconds and saves hours.
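For the WebUI pitfall above, the usual pattern is an SSH local forward. The sketch below only prints the command rather than opening a connection; the port 8080 and the ubuntu@<instance-ip> login are assumptions, so adjust them to your install:

```shell
# Reach the WebUI over an SSH tunnel instead of exposing a public port.
# WEBUI_PORT and the ubuntu@<instance-ip> login are assumptions -- adjust.
WEBUI_PORT="${WEBUI_PORT:-8080}"
cmd="ssh -N -L ${WEBUI_PORT}:127.0.0.1:${WEBUI_PORT} ubuntu@<instance-ip>"
echo "$cmd"   # then browse http://127.0.0.1:<port> on your local machine
```

With the tunnel up, the panel is only reachable from your own machine, which is exactly the isolation the default setup is trying to preserve.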

Wrapping Up

The entire flow — from purchasing a Lighthouse instance to chatting with your OpenClaw agent on Telegram — takes less than five minutes if you move with purpose. The one-click application template eliminates the deployment friction that kills most self-hosted AI projects before they even start.

If you're evaluating lightweight cloud options, Tencent Cloud Lighthouse hits a sweet spot: simple setup, high performance, and cost-effective pricing starting at $10.08/year for new users. For an always-on AI agent, that's essentially free.

Stop configuring. Start building.