Five minutes. That's all it takes to go from "I've never touched OpenClaw" to "I have an AI agent running 24/7 on a cloud server, connected to my Telegram." I timed it. Twice. Let's do this.
That's it: no Docker knowledge, no Linux expertise, no local installation.
Go to the Tencent Cloud Lighthouse Special Offer:
Select a 2-core / 4 GB instance (the sweet spot for most users). Pick an overseas region if you're connecting to Telegram, Discord, or WhatsApp — it gives you lower latency to their APIs.
Click through the purchase flow. Your instance starts provisioning immediately.
While the instance boots (usually 30–60 seconds), go to the Tencent Cloud console:
You're now SSH'd into your server. No local terminal app needed.
You have two options here:
In the Lighthouse console, go to your instance's "Application Management" tab. You'll see input fields for your LLM API key. Paste it, click "Add and Apply", and wait for the status to show "In Use."
openclaw onboard
# The wizard prompts you to:
# 1. Select your LLM provider (DeepSeek, OpenAI, Gemini, etc.)
# 2. Paste your API key
# 3. Choose your messaging channel
#
# CRITICAL: Never hardcode your API key in any file.
# The wizard handles secure storage automatically.
If you used the CLI wizard, you already selected Telegram in step 3. If you used the visual panel, run the onboard wizard now:
openclaw onboard
# Select "Telegram"
You'll need a Telegram Bot Token. Here's how to get one in 30 seconds:
1. Open Telegram and search for @BotFather.
2. Send /newbot and follow the prompts to name your bot.
3. Copy the token BotFather sends back and paste it into the wizard.

Done. Your OpenClaw instance is now connected to Telegram.
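If you want to sanity-check the token before pasting it in, the official Telegram Bot API has a getMe self-test. A minimal sketch, assuming the token sits in a TELEGRAM_BOT_TOKEN environment variable (a placeholder name for this example):

```shell
# A real bot token looks like "<numeric bot id>:<alphanumeric secret>".
TOKEN="${TELEGRAM_BOT_TOKEN:-}"
if printf '%s' "$TOKEN" | grep -Eq '^[0-9]+:[A-Za-z0-9_-]{30,}$'; then
  # Official Bot API self-test: a valid token returns "ok":true.
  result=$(curl -s "https://api.telegram.org/bot${TOKEN}/getMe")
else
  result="token missing or malformed: message @BotFather and send /newbot"
fi
echo "$result"
```

If getMe comes back with "ok":true and your bot's username, the token is good.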
This is the step that makes your agent truly 24/7. Without it, the agent dies when you close the terminal.
loginctl enable-linger $(whoami) && export XDG_RUNTIME_DIR=/run/user/$(id -u)
openclaw daemon install
openclaw daemon start
openclaw daemon status # Should show "active (running)"
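The enable-linger call matters more than it looks: without lingering, systemd tears down your user services the moment you log out. A quick way to confirm it took effect (a sketch, assuming a systemd host):

```shell
# Lingering keeps user-level services running after logout.
# "Linger=yes" means enable-linger succeeded for this user.
if command -v loginctl >/dev/null 2>&1; then
  linger=$(loginctl show-user "$(whoami)" --property=Linger 2>/dev/null || true)
fi
linger="${linger:-Linger=unknown (no systemd login manager found)}"
echo "$linger"
```

If this prints Linger=no, re-run the enable-linger command from the step above before starting the daemon.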
Close the terminal. Open Telegram. Send your bot a message. It responds. You're live.
In five minutes, you deployed:
OpenClaw comes with the agent-browser skill pre-installed:
"Use your browser to visit [any URL] and tell me what's on the page."
"Create a file called todo.txt with these items: [list your tasks]."
"Check the disk usage on this server."
It'll run df -h and report back.
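Under the hood, that request boils down to something like this sketch: run df -h and flag anything near capacity (the 90% threshold here is an arbitrary pick for illustration):

```shell
# Parse df -h: column 5 is use%, column 6 is the mount point.
full=$(df -h | awk 'NR > 1 && $5 + 0 >= 90 { print $6 " is at " $5 }')
if [ -n "$full" ]; then
  report="warning: $full"
else
  report="all filesystems below 90% usage"
fi
echo "$report"
```
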
Tell it something in one conversation:
"Remember: my favorite color is blue."
Then in a new conversation:
"What's my favorite color?"
It remembers. That's the long-term memory at work.
"Install a skill from ClawHub called [skill-name]."
The Skills guide lists available options.
Want to connect WhatsApp or Discord too? Just run the wizard again:
openclaw onboard
# Select your additional channel
# Paste the relevant API token
Guides for each platform:
| Symptom | Fix |
|---|---|
| Bot doesn't respond on Telegram | Check daemon status: openclaw daemon status |
| "API key invalid" error | Re-run openclaw onboard and re-paste the key |
| Slow responses | Check your LLM provider's status page; try a different model |
| Agent forgets context | Make sure daemon mode is active (not running in foreground) |
For deeper troubleshooting on AlmaLinux 9, see the troubleshooting guide.
Let's be transparent about what this costs monthly:
| Item | Cost |
|---|---|
| Lighthouse instance (2C/4G) | ~$10–25/month |
| LLM API (moderate usage) | ~$5–30/month |
| Total | ~$15–55/month |
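The totals row is just the two line items summed at each end of the range; as a quick sanity check:

```shell
# Low and high ends of each monthly line item, in USD.
instance_low=10; instance_high=25   # Lighthouse 2C/4G
llm_low=5;       llm_high=30        # LLM API, moderate usage
total_low=$((instance_low + llm_low))
total_high=$((instance_high + llm_high))
echo "Total: \$${total_low}-\$${total_high}/month"
```
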
That's less than a single lunch meeting, for a 24/7 AI assistant that handles customer queries, browses the web, manages files, and never takes a day off.
Everything above is reproducible in a single sitting. No prior experience required. The hardest part is deciding which LLM provider to use — and even that's a 30-second decision.
Head to the Tencent Cloud Lighthouse Special Offer:
Your AI worker is waiting. Let's go.