You've deployed OpenClaw, played with the built-in capabilities, and now you want more. The pre-built skills are great, but your business has specific needs that only a custom skill can address. Time to write your own.
This guide covers the full development cycle: planning your skill, writing the code, testing it, debugging common issues, and deploying to production. By the end, you'll have a working custom skill running on your OpenClaw instance.
Before writing code, it helps to understand how skills fit into OpenClaw's architecture. Think of skills as microservices for your AI agent: each one does one thing well, and the agent orchestrates them.
Before writing a single line of code, answer these questions:
What problem does this skill solve? Be specific. "Helps with sales" is vague. "Looks up product pricing and generates custom quotes based on customer segment and order volume" is actionable.
What inputs does it need? User messages, API data, file uploads, database queries?
What outputs does it produce? Text responses, formatted reports, API calls to external systems, file downloads?
What external systems does it connect to? List every API, database, and service the skill needs access to.
What are the failure modes? API timeout, invalid input, authentication failure, rate limiting — plan for each one.
A typical OpenClaw skill follows this structure:
my-custom-skill/
├── manifest.json         # Skill metadata and configuration
├── index.js              # Main skill logic
├── handlers/
│   ├── query.js          # Handle user queries
│   └── action.js         # Handle action requests
├── utils/
│   ├── api-client.js     # External API wrapper
│   └── formatter.js      # Response formatting
├── tests/
│   ├── query.test.js     # Unit tests
│   └── action.test.js
└── README.md
{
  "name": "product-pricing",
  "version": "1.0.0",
  "description": "Looks up product pricing and generates custom quotes",
  "triggers": [
    "price check",
    "generate quote",
    "product pricing",
    "how much does * cost"
  ],
  "permissions": [
    "network",
    "database"
  ],
  "config": {
    "api_endpoint": "",
    "api_key": "",
    "default_currency": "USD"
  }
}
The triggers array defines phrases that activate this skill. OpenClaw's intent router uses these (along with semantic matching) to determine when to invoke your skill.
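To build intuition for how a wildcard trigger like "how much does * cost" could match real messages, here is a minimal sketch of literal-phrase matching. This is an illustration only, not OpenClaw's actual router (which also does semantic matching); the function names are hypothetical.

```javascript
// Hypothetical sketch: compile a wildcard trigger phrase into a regex.
// "*" becomes a wildcard; all other regex metacharacters are escaped.
function triggerToRegex(trigger) {
  const escaped = trigger.replace(/[.+?^${}()|[\]\\]/g, '\\$&');
  return new RegExp(escaped.replace(/\*/g, '.+'), 'i');
}

// A message activates the skill if it matches any trigger phrase.
function matchesTrigger(message, triggers) {
  return triggers.some((t) => triggerToRegex(t).test(message));
}
```

With the manifest above, `matchesTrigger("How much does the X-100 cost?", triggers)` would match the wildcard trigger, while unrelated messages fall through to other skills.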
// index.js — Main skill entry point
const ApiClient = require('./utils/api-client');

class ProductPricingSkill {
  constructor(config) {
    this.apiClient = new ApiClient(config.api_endpoint, config.api_key);
    this.currency = config.default_currency;
  }

  async handleQuery(context) {
    const { message, entities, conversationHistory } = context;

    // Extract product identifier from the message
    const productId = this.extractProductId(entities);
    if (!productId) {
      return {
        response: "Which product are you looking for pricing on? Please share the product name or SKU.",
        expectFollowUp: true
      };
    }

    // Fetch pricing from the external API
    try {
      const pricing = await this.apiClient.getProductPricing(productId);
      return {
        response: this.formatPricingResponse(pricing),
        data: pricing
      };
    } catch (error) {
      return this.handleError(error);
    }
  }

  handleError(error) {
    if (error.code === 'TIMEOUT') {
      return { response: "The pricing system is taking longer than usual. Please try again in a moment." };
    }
    if (error.code === 'NOT_FOUND') {
      return { response: "I couldn't find that product. Could you double-check the name or SKU?" };
    }
    // Log unexpected errors for debugging
    console.error('Unexpected error:', error);
    return { response: "Something went wrong while looking up pricing. Let me connect you with the sales team." };
  }

  // extractProductId and formatPricingResponse omitted here for brevity
}

module.exports = ProductPricingSkill;
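The `extractProductId` helper is left to you. A minimal sketch, shown as a standalone function, might look like the following; the entity field names (`sku`, `product_name`) are assumptions about what your entity extractor produces, not documented OpenClaw behavior.

```javascript
// Sketch of the extractProductId helper referenced above.
// Field names are hypothetical — adapt them to your entity extractor.
function extractProductId(entities) {
  if (entities.sku) return entities.sku.toUpperCase();
  if (entities.product_name) return entities.product_name.trim();
  return null; // triggers the follow-up question in handleQuery
}
```

Returning `null` is what drives the "Which product are you looking for?" follow-up path in `handleQuery`.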
Debugging skills can be tricky because they run within OpenClaw's runtime. Here are the most effective debugging techniques:
Add detailed logging at every decision point:
console.log(`[ProductPricing] Received query: ${message}`);
console.log(`[ProductPricing] Extracted entities: ${JSON.stringify(entities)}`);
console.log(`[ProductPricing] API response: ${JSON.stringify(pricing)}`);
Access logs through your OpenClaw dashboard or directly on your server.
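To keep the `[ProductPricing]` prefix and payload formatting consistent across log lines, a tiny helper can build the line for you. This is a convenience sketch, not an OpenClaw API.

```javascript
// Build a consistently formatted log line for a skill.
// Objects are JSON-stringified; strings pass through unchanged.
function formatLogLine(skillName, stage, payload) {
  const body = typeof payload === 'string' ? payload : JSON.stringify(payload);
  return `[${skillName}] ${stage}: ${body}`;
}

console.log(formatLogLine('ProductPricing', 'Received query', 'What does SKU-12345 cost?'));
console.log(formatLogLine('ProductPricing', 'Extracted entities', { sku: 'SKU-12345' }));
```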
Test your skill logic in isolation, without the OpenClaw runtime:
// tests/query.test.js
const ProductPricingSkill = require('../index');

// testConfig and mockContext are test fixtures defined elsewhere
describe('ProductPricingSkill', () => {
  it('should extract product ID from SKU entity', () => {
    const skill = new ProductPricingSkill(testConfig);
    const productId = skill.extractProductId({ sku: 'SKU-12345' });
    expect(productId).toBe('SKU-12345');
  });

  it('should handle API timeout gracefully', async () => {
    const skill = new ProductPricingSkill(testConfig);
    // Mock the API client to simulate a timeout
    skill.apiClient.getProductPricing = () => Promise.reject({ code: 'TIMEOUT' });
    const result = await skill.handleQuery(mockContext);
    expect(result.response).toContain('taking longer than usual');
  });
});
OpenClaw provides a skill testing mode where you can send messages directly to your skill and see the raw input/output, bypassing the intent router. This is invaluable for separating logic bugs from routing problems. Common issues you'll run into:
Skill not triggering: Your trigger phrases don't match what users actually say. Add more trigger variations and test with natural language inputs.
API authentication failures: Double-check API keys in your manifest config. Ensure the key has the required permissions.
Slow responses: Profile your API calls. If an external API is slow, add caching or timeout handling.
Context lost between turns: Make sure you're properly using the conversationHistory object and returning expectFollowUp: true when you need more information.
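For the timeout-handling case, one approach is to race the API call against a timer that rejects with the same error shape `handleError()` already expects. A minimal sketch, with a hypothetical `withTimeout` helper:

```javascript
// Reject with { code: 'TIMEOUT' } if the wrapped promise takes too long,
// so the existing handleError() branch covers it. The timer is always
// cleared so it doesn't keep the process alive.
function withTimeout(promise, ms) {
  let timer;
  const timeout = new Promise((_, reject) => {
    timer = setTimeout(() => reject({ code: 'TIMEOUT' }), ms);
  });
  return Promise.race([promise, timeout]).finally(() => clearTimeout(timer));
}
```

In `handleQuery`, the API call then becomes `await withTimeout(this.apiClient.getProductPricing(productId), 3000)`, and the rejection flows into the existing `TIMEOUT` branch.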
Once your skill passes testing, deploy it to your production instance. Your OpenClaw instance should be running on reliable infrastructure. Tencent Cloud Lighthouse is the recommended platform; provision through the Tencent Cloud Lighthouse Special Offer for the best combination of performance and cost.
Keep skills focused. One skill should handle one domain. Don't build a Swiss Army knife skill that does everything.
Handle errors gracefully. Users should never see raw error messages or stack traces. Every error path should produce a helpful, human-readable response.
Cache aggressively. If your skill calls external APIs, cache responses where appropriate. This improves response time and reduces API costs.
Version your skills. Use semantic versioning and maintain a changelog. When you update a skill, users should know what changed.
Document your configuration. Every config parameter should have a description, default value, and example. Future-you will thank present-you.
Test with real users early. The gap between developer testing and real-world usage is always larger than expected. Get feedback from actual users as soon as possible.
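The caching advice above can be as simple as an in-memory map with expiry. Here is a minimal TTL-cache sketch; for multi-instance deployments you would want a shared store such as Redis instead.

```javascript
// Minimal in-memory cache with a time-to-live, suitable for caching
// external API responses inside a single skill process.
class TtlCache {
  constructor(ttlMs) {
    this.ttlMs = ttlMs;
    this.entries = new Map();
  }

  get(key) {
    const entry = this.entries.get(key);
    if (!entry) return undefined;
    if (Date.now() > entry.expiresAt) {
      this.entries.delete(key); // expired — evict and miss
      return undefined;
    }
    return entry.value;
  }

  set(key, value) {
    this.entries.set(key, { value, expiresAt: Date.now() + this.ttlMs });
  }
}
```

A pricing skill could check the cache before calling `getProductPricing` and store the result on a miss, cutting both latency and API costs for repeat lookups.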
Once you're comfortable writing basic skills, there's plenty of room to explore more advanced patterns. The Tencent Cloud Lighthouse Special Offer gives you affordable infrastructure to experiment on, and the deployment guide gets you set up quickly. Start building.