Welcome to deploying Hermes Agent on Lighthouse!
This article is included in the column: Mastering Lighthouse
Don't have a cloud server yet? Click here to enjoy Lighthouse Exclusive Offers and quickly deploy Hermes Agent!
In April 2026, Nous Research officially released the open-source AI Agent project Hermes Agent, which quickly gained widespread attention on GitHub and in the AI community. Unlike coding assistants bound to IDEs or single-API chatbots on the market, Hermes Agent is a true Autonomous Agent: it runs on your server, has persistent memory, and becomes stronger over time. What's even more exciting is that it has a complete self-learning loop: autonomously creating skills, improving skills through use, and recalling memories across sessions, truly becoming "smarter the more you use it."
Hermes Agent has deep connections with the previously popular OpenClaw project; it even includes the built-in hermes claw migrate command, supporting one-click migration of OpenClaw settings, memories, skills, and API keys. Hermes Agent represents a comprehensive evolution in the AI Agent space.
Similar to OpenClaw, Hermes Agent also has full system operation permissions (terminal execution, file read/write, browser automation, etc.), so the official recommendation is to deploy it in an environment isolated from your personal main computer to ensure data security.
Hermes Agent supports Linux, macOS, WSL2, and Android (Termux) platforms, with Linux being the most recommended deployment environment. Deploying Hermes Agent on a cloud server not only achieves secure isolation from your local computer but also enables 24/7 online availability, allowing you to interact with it anytime, anywhere through chat applications like Telegram and Discord.
Note: Hermes Agent currently does not support native Windows environments. Windows users need to install WSL2 first and run it there.
Tencent Cloud Lighthouse is a cloud server product designed for lightweight application scenarios. It doesn't require understanding complex cloud computing concepts and offers cost-effective server packages.
Compared to running on a local computer, deploying Hermes Agent using Lighthouse server has the following advantages:
Go to the Tencent Cloud Lighthouse product purchase page to purchase a server. Recommended configuration: 2 cores / 4 GB memory or higher (2 cores / 2 GB is the minimum that can run Hermes Agent).
Click Buy Now, follow the page guidance to complete payment, and wait about 30 seconds to complete server creation.

Note: Reinstalling the system will erase all data on the server. Please proceed with caution!

If you have an existing idle Lighthouse instance, you can select the Hermes Agent image through "Reinstall System," then configure Hermes Agent following the subsequent steps.
After completing the installation, you need to configure Models and Gateways for Hermes Agent:
Hermes Agent supports a rich variety of model providers, including but not limited to:
| Provider | Provider ID | Required Environment Variable | Description |
|---|---|---|---|
| OpenRouter | openrouter | OPENROUTER_API_KEY | Default provider, can route 200+ models |
| Anthropic (Claude) | anthropic | ANTHROPIC_API_KEY | Direct calls to Claude-series models |
| Google Gemini | gemini | GOOGLE_API_KEY | Google AI Studio |
| DeepSeek | deepseek | DEEPSEEK_API_KEY | DeepSeek |
| Zhipu GLM (z.ai) | zai | GLM_API_KEY | Zhipu AI GLM-series models |
| Kimi / Moonshot | kimi-coding | KIMI_API_KEY | Kimi Code / Moonshot |
| MiniMax | minimax | MINIMAX_API_KEY | MiniMax International |
| MiniMax China | minimax-cn | MINIMAX_CN_API_KEY | MiniMax China version |
| Tongyi Qianwen | alibaba | DASHSCOPE_API_KEY | Alibaba Cloud Bailian platform |
| Hugging Face | huggingface | HF_TOKEN | Hugging Face Inference |
| Custom Endpoint | custom | Optional | Ollama / vLLM / LM Studio, etc. |
Below we'll use DeepSeek as an example to demonstrate step-by-step how to configure the model API Key.
First, go to the DeepSeek Open Platform to register an account and create an API Key.

Log into the server via Tencent Cloud OrcaTerm remote terminal or a third-party SSH tool. Enter the Lighthouse console, find the instance with Hermes Agent deployed, and click Password-free Login to enter the server terminal.

After logging into the server, check if the current user is lighthouse. If not, execute:
su lighthouse
Execute the following commands in sequence to complete the full configuration of the DeepSeek model:
# Set DeepSeek API Key (automatically writes to ~/.hermes/.env)
hermes config set DEEPSEEK_API_KEY replace_with_your_DeepSeekAPIKey
# Set default model to DeepSeek
hermes config set model.default deepseek-chat
# Set inference provider to DeepSeek
hermes config set model.provider deepseek
# Set base_url
hermes config set model.base_url https://api.deepseek.com/v1

After each command executes successfully, a confirmation message similar to ✓ Set ... in ... will be displayed.
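For reference, the first command above should leave a line like the following in ~/.hermes/.env (a sketch of the resulting file; the value shown is a placeholder, not a real key):

```shell
# Illustrative contents of ~/.hermes/.env after `hermes config set DEEPSEEK_API_KEY ...`
# (placeholder value; your real key goes here)
DEEPSEEK_API_KEY=replace_with_your_DeepSeekAPIKey
```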
Tip: You can also use the hermes model command to enter an interactive wizard for selecting models and providers, which is more intuitive.
If you want to use other model service providers, the configuration method is similar to the DeepSeek example aboveโuse the hermes config set command to set the API Key, model name, and provider. Here's a quick configuration reference for common models:
Kimi Code (Moonshot, Recommended Domestic Model):
hermes config set KIMI_API_KEY sk-kimi-xxxxxxxxxxxxxxxx
hermes config set model.default kimi-k2.5
hermes config set model.provider kimi-coding
Zhipu GLM (Domestic Model):
hermes config set GLM_API_KEY your-glm-api-key
hermes config set model.default glm-4-plus
hermes config set model.provider zai
Local Model (Ollama):
hermes config set model.default qwen2.5:32b
hermes config set model.provider ollama
hermes config set model.base_url http://localhost:11434/v1
Tip: When using local models, you need to install Ollama first (curl -fsSL https://ollama.ai/install.sh | sh) and pull the model (ollama pull qwen2.5:32b). It's also recommended to set the OLLAMA_CONTEXT_LENGTH=32768 environment variable for better context support.
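To make that environment variable persist across shell sessions, one option (a sketch assuming a Bash shell; adjust the profile file for your shell) is:

```shell
# Persist the Ollama context length for future logins (Bash assumed)
echo 'export OLLAMA_CONTEXT_LENGTH=32768' >> ~/.bashrc
# Also apply it to the current session
export OLLAMA_CONTEXT_LENGTH=32768
```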
Hermes Agent's Gateway system supports integration with 15+ mainstream messaging platforms, including domestic platforms like WeCom, DingTalk, and Feishu, as well as overseas platforms like Telegram, Discord, Slack, and WhatsApp. A single background process can connect to all configured platforms simultaneously. Feature support (voice, images, files, threads, and streaming replies) varies from platform to platform, with the CLI, WeCom, DingTalk, Feishu/Lark, Telegram, Discord, Slack, WhatsApp, and Signal each supporting a different subset.
Below we'll use Telegram as an example to demonstrate step-by-step how to configure a gateway.
Connecting to Telegram involves two main steps: Creating a Telegram Bot and Configuring the Gateway in Hermes.
Open Telegram and search for @BotFather, then enter the conversation.
Send the /newbot command, and BotFather will guide you through creating a new Bot.
Set two names in sequence: first a display name (e.g., My Hermes Bot), then a username, which must end in bot and be globally unique, e.g., my_hermes_test_bot

User ID is a security measure to ensure only you can chat with the Bot.

Send any message, and it will reply with your User ID (a string of numbers, e.g., 123456789).
Copy this number, as you'll need to enter it during configuration.
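If you want to sanity-check the value before the wizard asks for it, here is a tiny sketch (the ID shown is a made-up example):

```shell
# A Telegram User ID is a plain string of digits; anything else was copied wrong
user_id="123456789"   # example value, not a real ID
case "$user_id" in
  ''|*[!0-9]*) echo "not a valid User ID" ;;
  *)           echo "looks valid" ;;      # prints "looks valid" for the example
esac
```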

Return to your server terminal and execute the following command:
hermes gateway setup

The configuration wizard will ask you for the following information in sequence:
Select Telegram from the platform list.

Paste the Token obtained from BotFather in step one, then press Enter to confirm.

Enter the User ID obtained from @userinfobot in step two, then press Enter to confirm.

Terminal prompt:

Home Channel is used to receive scheduled tasks and notification messages. You have two options:
Skip it for now (you can configure it later by sending /set-home in the Telegram chat). Here we choose to skip.
After skipping the previous step, the terminal will ask:

Enter y and press Enter to set your private chat as the Home Channel.
Seeing the green prompt ✓ Home channel set to 0123456789 indicates successful setup.
Then ✓ Telegram configured! will be displayed, meaning the Telegram channel configuration is complete!

After Telegram configuration is complete, the wizard will continue to ask whether to install as a system background service.

Enter y and press Enter. After installing as a system service, Gateway will run in the background and auto-start on boot.

Enter the username to run the service. If you're using the root account, just enter root.
⚠️ If you press Enter directly the first time, it will prompt ✗ Enter a username., and you need to enter the username manually.

Displaying "System service started" indicates successful installation.
# Check the gateway service status
sudo hermes gateway status --system
# Follow the gateway service logs
journalctl -u hermes-gateway -f
⚠️ Note: Before testing the Bot, make sure you have configured the LLM connection (configure the LLM provider and API Key via hermes setup); otherwise the Bot can receive messages but cannot generate replies.
WeCom is one of the most commonly used enterprise instant messaging tools in China. Hermes Agent natively supports connecting to WeCom via WebSocket, requiring no public IP address or Webhook callback URL, making configuration simple.




In the popup window, click the Manual Creation button.

Scroll to the bottom of the page and click the Create via API Mode button.

Choose to create using "Long Connection" method, and click Click to Obtain in the Secret field under Configuration Method.

Click Save on the page.

After logging into the server, execute the following commands in sequence to set the WeCom Bot ID and Secret:
hermes config set WECOM_BOT_ID replace_with_your_BotID
hermes config set WECOM_SECRET replace_with_your_Secret
After each command executes successfully, you will see a confirmation message like ✓ Set ... in ~/.hermes/.env.

Tip: If you want all WeCom users to interact with the Bot, you can execute hermes config set GATEWAY_ALLOW_ALL_USERS true. If you only want to allow specific users, you can set a whitelist using hermes config set WECOM_ALLOWED_USERS user_id_1,user_id_2.
Register the gateway as a service:
hermes gateway install
Start the gateway:
hermes gateway start

Now you can find the AI Bot you just created in WeCom and send a message to verify that Hermes Agent is working properly.

WeCom Advanced Configuration: You can further configure WeCom access policies in ~/.hermes/config.yaml, such as restricting which groups can use the Bot or setting user whitelists within groups:
platforms:
  wecom:
    enabled: true
    extra:
      dm_policy: "open"          # Direct message policy: open/allowlist/disabled
      group_policy: "allowlist"  # Group chat policy: open/allowlist
      group_allow_from:          # List of allowed group IDs
        - "group_id_1"
Hermes Agent also supports connecting to many other platforms including Telegram, Discord, Slack, DingTalk, Feishu, and more. The simplest way is to use the interactive wizard:
hermes gateway setup
The system will list all supported platforms. Use the arrow keys to select one, then follow the prompts to enter the corresponding Token/credentials to complete the configuration. Below are detailed integration guides for each platform:
Domestic Platforms:
International Platforms:
After configuration is complete, use the following commands to manage the gateway service:
# Run in foreground (suitable for debugging)
hermes gateway
# Start in background
hermes gateway start
# Check running status
hermes gateway status
# Stop the gateway
hermes gateway stop
Tip: A single gateway process can connect to all configured platforms (WeCom, Telegram, Discord, etc.) simultaneously. There's no need to start a separate service for each platform.
After configuring the model, the simplest way to use it is to interact with Hermes Agent directly in the server terminal:
hermes
After launching, you will see a feature-rich TUI (Terminal User Interface) that supports multi-line editing, slash command auto-completion, session history, and more.
Common slash commands include:
| Command | Function |
|---|---|
| /help | View help information |
| /new or /reset | Start a new conversation (reset context) |
| /model [provider:model] | Dynamically switch models |
| /personality [name] | Set the Agent's personality |
| /skills | View and invoke skills |
| /save | Save the current session |
| /retry | Retry the last response |
| /undo | Undo the last operation |
Tip: Press Alt+Enter or Ctrl+J to enter multi-line content, which is useful for pasting code. Press Ctrl+C to interrupt the currently running task.
If you want to continue a previous conversation after exiting, you can use:
hermes --continue
# Or abbreviated
hermes -c
Hermes Agent supports customizing the Agent's personality and tone through the SOUL.md file. Simply edit the ~/.hermes/SOUL.md file:
nano ~/.hermes/SOUL.md
For example:
You are a warm, humorous AI assistant who likes to answer questions concisely.
When answering technical questions, you give a brief conclusion first, then elaborate in detail.
Occasionally you use witty metaphors to make explanations more vivid.
Tip: The SOUL.md file is reloaded with every message, so changes take effect immediately without restarting.
Hermes Agent has a built-in Cron scheduler that supports creating scheduled tasks using natural language. For example, you can tell it directly in chat:
Scheduled tasks will execute automatically and send results to you through the configured channels.
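For instance (the exact phrasing is up to you; the wording below is hypothetical), you might send the Agent a message like:

```
Every Monday at 9:00, run hermes doctor on the server and send me a summary of any issues it finds.
```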
The skill system is one of Hermes Agent's most powerful featuresโthe Agent can automatically create and improve skills, and you can also install ready-made skills from the community:
# Search for skills
hermes skills search kubernetes
# Install a skill
hermes skills install openai/skills/k8s
# View installed skills
hermes skills list
Use /skills or /<skill-name> in conversations to invoke the corresponding skill.
If you encounter any issues, you can run the diagnostic command:
hermes doctor
This command will check the Python environment, dependencies, configuration files, API Keys, directory structure, tool availability, and prompt you about issues that need to be fixed.
Keep Hermes Agent up to date:
hermes update
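Before updating, it can be prudent to back up the state directory first. Here is a minimal sketch, assuming all Hermes state lives under ~/.hermes (the location used by the .env and SOUL.md paths above):

```shell
# Back up ~/.hermes before running `hermes update`
mkdir -p "$HOME/.hermes"   # no-op on a real install; keeps the sketch runnable anywhere
backup="$HOME/hermes-backup-$(date +%Y%m%d).tar.gz"
tar czf "$backup" -C "$HOME" .hermes
ls -lh "$backup"
```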
If you previously used OpenClaw, Hermes Agent supports one-click migration:
hermes claw migrate
This command will automatically import OpenClaw's settings, memories, skills, and API keys.
| Feature | Description |
|---|---|
| Self-Learning Loop | Autonomously creates skills, self-improves during use, cross-session memory recall |
| Multi-Platform Integration | 15+ platforms: Telegram, Discord, Slack, WhatsApp, Signal, etc. |
| 47 Built-in Tools | Terminal, files, browser, code execution, image generation, TTS, etc. |
| MCP Protocol Support | Connect to any MCP server to extend tool capabilities |
| Cron Scheduler | Natural-language scheduled tasks with automatic result delivery |
| Sub-Agent Delegation | Isolated sub-Agents handle complex tasks in parallel |
| Multiple Terminal Backends | Local, Docker, SSH, Modal, Daytona, Singularity |
| Voice Mode | Real-time voice interaction in CLI and messaging platforms |
| SOUL.md Personality | Define the Agent's personality via a file; takes effect in real time |
| Security Mechanisms | Command approval, key masking, container isolation, user whitelist |
Q: What is the relationship between Hermes Agent and OpenClaw?
A: Hermes Agent is a next-generation AI Agent project developed by the Nous Research team. It shares a similar philosophy with OpenClaw (an autonomous AI assistant running in user-owned environments) but has been comprehensively upgraded in architecture, functionality, and extensibility. Hermes Agent supports one-click migration of OpenClaw configurations via hermes claw migrate.
Q: Which operating systems are supported?
A: Linux, macOS, WSL2, and Android (Termux) are supported. Windows native environment is not supported.
Q: What are the minimum server configuration requirements?
A: It can run with a minimum of 2 cores and 2 GB of memory; 2 cores and 4 GB or higher is recommended for a better experience.
Q: How is the model API billed?
A: Hermes Agent itself is open source and free (MIT license), but the model APIs it calls require separate payment. OpenRouter is recommended for flexible top-ups and pay-as-you-go billing.
Q: How do I keep the Bot running in the background?
A: Use the hermes gateway start command to start the gateway as a background service, or use tools like screen / tmux to maintain terminal sessions.
A Discord community has been created, and everyone is welcome to join and explore advanced ways to use Hermes Agent together!
Unlock advanced tips on Discord
Click to join the community
Note: After joining, you can get the latest plugin templates and deployment playbooks
Join WhatsApp / WeCom for dedicated technical support
| Channel | Scan / Click to join |
|---|---|
| WhatsApp Channel | |
| WeCom (Enterprise WeChat) | |