The custom Skills development landscape has matured significantly in 2026, with organizations increasingly requiring specialized AI capabilities tailored to their unique business processes and industry requirements. OpenClaw's Skills SDK provides a comprehensive framework for developing, testing, and deploying custom AI plugins that seamlessly integrate with existing workflows. This detailed tutorial guides developers through the complete Skills development lifecycle, from initial concept to production deployment on Tencent Cloud Lighthouse.
Modern Skills development requires a sophisticated development environment optimized for AI plugin creation:
# Deploy optimized Lighthouse development environment
lighthouse deploy-instance \
--template=openclaw-skills-dev-2026 \
--specs=8c16g200s \
--region=ap-singapore \
--dev-tools=comprehensive
# Initialize development environment
openclaw-dev setup-environment \
--sdk-version=latest \
--ide=vscode \
--debugging=advanced \
--testing=comprehensive
# Install Skills development dependencies
npm install -g @openclaw/skills-cli@latest
pip install openclaw-sdk[dev]==2026.3.0
docker pull openclaw/dev-runtime:2026-latest
Development Environment Architecture:
development_stack:
  core_tools:
    - "OpenClaw SDK 2026.3.0"
    - "Skills CLI v4.2.1"
    - "VS Code with OpenClaw Extensions"
    - "Docker Desktop with Skills Runtime"
    - "Redis for local caching"
    - "PostgreSQL for data persistence"
  debugging_tools:
    - "Skills Debugger Pro"
    - "Performance Profiler"
    - "Integration Tester"
    - "API Mock Server"
  testing_framework:
    - "Unit Testing Suite"
    - "Integration Test Runner"
    - "Performance Benchmarking"
    - "Security Vulnerability Scanner"
Optimized IDE setup accelerates Skills development with intelligent code completion and debugging:
// VS Code settings.json for Skills development
{
  "openclaw.skills.autoComplete": true,
  "openclaw.skills.linting": "strict",
  "openclaw.skills.debugging": {
    "breakpoints": "enhanced",
    "variableInspection": "deep",
    "performanceMonitoring": true
  },
  "openclaw.skills.testing": {
    "autoRun": true,
    "coverage": "comprehensive",
    "mockServices": true
  },
  "python.defaultInterpreterPath": "/opt/openclaw/python",
  "python.linting.enabled": true,
  "python.linting.pylintEnabled": true,
  "python.testing.pytestEnabled": true
}
Production-ready Skills follow sophisticated architectural patterns ensuring scalability, maintainability, and performance:
import asyncio
from datetime import datetime

from openclaw_sdk import BaseSkill, SkillDecorator, SkillRegistry
from openclaw_sdk.patterns import AsyncPattern, CachingPattern, SecurityPattern
from openclaw_sdk.monitoring import PerformanceMonitor, HealthChecker
from openclaw_sdk.integration import DatabaseConnector, APIConnector


class AdvancedCustomSkill(BaseSkill):
    """Advanced custom skill implementing enterprise design patterns."""

    def __init__(self, skill_config):
        super().__init__(
            name=skill_config.name,
            version=skill_config.version,
            description=skill_config.description,
            author=skill_config.author,
            license=skill_config.license,
            dependencies=skill_config.dependencies,
        )
        # Initialize core components with dependency injection
        self.data_processor = self._create_data_processor(skill_config)
        self.cache_manager = self._create_cache_manager(skill_config)
        self.security_manager = self._create_security_manager(skill_config)
        self.performance_monitor = PerformanceMonitor(self.name)
        self.health_checker = HealthChecker(self.name)

        # Initialize external connectors
        self.db_connector = DatabaseConnector(skill_config.database_config)
        self.api_connector = APIConnector(skill_config.api_config)

        # Register event handlers
        self._register_lifecycle_handlers()

        # Initialize monitoring and alerting
        self._setup_monitoring_and_alerting()

    def _create_data_processor(self, config):
        """Factory method for data processor creation."""
        processor_type = config.data_processor.type
        if processor_type == "ml_enhanced":
            return MLEnhancedDataProcessor(config.data_processor.ml_config)
        elif processor_type == "stream_processing":
            return StreamDataProcessor(config.data_processor.stream_config)
        else:
            return StandardDataProcessor(config.data_processor.standard_config)

    def _register_lifecycle_handlers(self):
        """Register skill lifecycle event handlers."""
        self.register_handler('skill_activated', self._on_skill_activation)
        self.register_handler('skill_deactivated', self._on_skill_deactivation)
        self.register_handler('skill_error', self._on_skill_error)
        self.register_handler('performance_degradation', self._on_performance_issue)
    @SkillDecorator.action("primary_function")
    @SkillDecorator.rate_limit(requests_per_minute=1000)
    @SkillDecorator.cache(strategy="intelligent", ttl=300)
    @SkillDecorator.monitor_performance()
    @SkillDecorator.validate_input()
    @SkillDecorator.secure_execution()
    async def execute_primary_function(self, input_data, execution_context):
        """Primary skill function with comprehensive decorators and error handling."""
        # Validate execution context and permissions
        await self._validate_execution_context(execution_context)

        # Performance monitoring wraps the whole execution
        with self.performance_monitor.measure_execution() as monitor:
            try:
                # Pre-processing with validation
                validated_input = await self._validate_and_preprocess_input(
                    input_data, execution_context
                )
                # Execute core business logic
                processing_result = await self._execute_core_logic(
                    validated_input, execution_context
                )
                # Post-processing and result formatting
                formatted_result = await self._postprocess_and_format_result(
                    processing_result, execution_context
                )
                # Update performance metrics
                monitor.record_success(formatted_result.quality_metrics)

                # Trigger success events
                await self._trigger_success_events(formatted_result, execution_context)
                return formatted_result
            except SkillValidationError as e:
                monitor.record_validation_error(e)
                raise SkillExecutionException(f"Validation failed: {str(e)}")
            except SkillProcessingError as e:
                monitor.record_processing_error(e)
                await self._handle_processing_error(e, execution_context)
                raise SkillExecutionException(f"Processing failed: {str(e)}")
            except Exception as e:
                monitor.record_unexpected_error(e)
                await self._handle_unexpected_error(e, execution_context)
                raise SkillExecutionException(f"Unexpected error: {str(e)}")
    async def _execute_core_logic(self, validated_input, execution_context):
        """Core business logic implementation - override in specific skills."""
        # Example: multi-step processing pipeline
        pipeline_steps = [
            self._step_data_enrichment,
            self._step_analysis_processing,
            self._step_result_generation,
            self._step_quality_validation,
        ]
        processing_result = validated_input
        for step_function in pipeline_steps:
            processing_result = await step_function(
                processing_result, execution_context
            )
            # Validate intermediate results
            if not await self._validate_intermediate_result(processing_result):
                raise SkillProcessingError(
                    f"Validation failed at step: {step_function.__name__}"
                )
        return processing_result

    async def _step_data_enrichment(self, data, context):
        """Data enrichment step with external API integration."""
        # Check cache for enriched data
        cache_key = self._generate_enrichment_cache_key(data)
        cached_enrichment = await self.cache_manager.get(cache_key)
        if cached_enrichment:
            return self._merge_enrichment_data(data, cached_enrichment)

        # Fetch enrichment data from external APIs
        enrichment_tasks = []
        for api_config in context.enrichment_apis:
            task = self.api_connector.fetch_enrichment_data(
                api_config=api_config,
                input_data=data,
                timeout=context.api_timeout,
            )
            enrichment_tasks.append(task)

        # Execute enrichment tasks concurrently
        enrichment_results = await asyncio.gather(
            *enrichment_tasks, return_exceptions=True
        )

        # Keep only the successful enrichment results
        valid_enrichments = [
            result for result in enrichment_results
            if not isinstance(result, Exception)
        ]

        # Merge enrichment data
        enriched_data = self._merge_enrichment_data(data, valid_enrichments)

        # Cache enriched data
        await self.cache_manager.set(
            cache_key, valid_enrichments, ttl=context.enrichment_cache_ttl
        )
        return enriched_data
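The enrichment step above leans on an async cache with TTL semantics: a `get` that misses once the entry expires, and a `set` that records an expiry. As a minimal, self-contained sketch of that pattern (this `TTLCache` is an illustrative stand-in, not the SDK's actual cache manager):

```python
import asyncio
import time

class TTLCache:
    """Minimal async-friendly TTL cache illustrating the get/set-with-ttl
    pattern used by the enrichment step (not the real SDK implementation)."""

    def __init__(self):
        self._store = {}  # key -> (value, expiry timestamp)

    async def get(self, key):
        entry = self._store.get(key)
        if entry is None:
            return None
        value, expires_at = entry
        if time.monotonic() >= expires_at:
            del self._store[key]  # lazily evict expired entries
            return None
        return value

    async def set(self, key, value, ttl):
        self._store[key] = (value, time.monotonic() + ttl)

async def demo():
    cache = TTLCache()
    await cache.set("enrichment:abc", {"source": "api"}, ttl=0.05)
    hit = await cache.get("enrichment:abc")   # fresh -> returned
    await asyncio.sleep(0.06)
    miss = await cache.get("enrichment:abc")  # expired -> None
    return hit, miss
```

Lazy eviction keeps the sketch simple; a production cache (like the Redis-backed one in the dev stack) would also evict in the background and bound memory.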
Different use cases require specialized architectural patterns optimized for specific scenarios:
class MLEnhancedSkill(AdvancedCustomSkill):
    """Skill with integrated machine learning capabilities."""

    def __init__(self, skill_config):
        super().__init__(skill_config)
        # Initialize ML components
        self.model_manager = MLModelManager(skill_config.ml_config)
        self.feature_extractor = FeatureExtractor(skill_config.feature_config)
        self.prediction_cache = PredictionCache(skill_config.cache_config)
        self.model_monitor = ModelPerformanceMonitor()

    @SkillDecorator.action("ml_prediction")
    async def perform_ml_prediction(self, input_data, prediction_config):
        """Perform an ML prediction with comprehensive monitoring and caching."""
        # Extract features from input data
        features = await self.feature_extractor.extract_features(
            data=input_data,
            feature_config=prediction_config.feature_config,
        )

        # Check prediction cache
        cache_key = self._generate_prediction_cache_key(features)
        cached_prediction = await self.prediction_cache.get_prediction(cache_key)
        if cached_prediction and cached_prediction.is_valid():
            return cached_prediction.result

        # Load the appropriate ML model
        model = await self.model_manager.get_model(
            model_type=prediction_config.model_type,
            version=prediction_config.model_version,
        )

        # Perform prediction with monitoring
        with self.model_monitor.measure_prediction() as monitor:
            prediction_result = await model.predict(
                features=features,
                confidence_threshold=prediction_config.confidence_threshold,
            )
            # Validate prediction quality
            quality_assessment = await self.model_monitor.assess_prediction_quality(
                model=model,
                features=features,
                prediction=prediction_result,
            )
            monitor.record_prediction_quality(quality_assessment.quality_score)

        # Cache high-quality predictions
        if quality_assessment.quality_score > 0.8:
            await self.prediction_cache.store_prediction(
                key=cache_key,
                prediction=prediction_result,
                quality_score=quality_assessment.quality_score,
                ttl=prediction_config.cache_ttl,
            )

        # Log prediction for model improvement
        await self.model_monitor.log_prediction(
            model_id=model.id,
            features=features,
            prediction=prediction_result,
            quality_score=quality_assessment.quality_score,
        )

        return MLPredictionResult(
            prediction=prediction_result.value,
            confidence=prediction_result.confidence,
            model_version=model.version,
            quality_score=quality_assessment.quality_score,
            feature_importance=prediction_result.feature_importance,
        )
class StreamProcessingSkill(AdvancedCustomSkill):
    """Skill optimized for real-time stream processing."""

    def __init__(self, skill_config):
        super().__init__(skill_config)
        # Initialize stream processing components
        self.stream_processor = StreamProcessor(skill_config.stream_config)
        self.window_manager = WindowManager(skill_config.window_config)
        self.aggregator = StreamAggregator(skill_config.aggregation_config)
        self.output_buffer = OutputBuffer(skill_config.buffer_config)

    @SkillDecorator.stream_processor()
    async def process_data_stream(self, data_stream, processing_config):
        """Process a continuous data stream with windowing and aggregation."""
        # Initialize stream processing pipeline
        pipeline = await self.stream_processor.create_pipeline(
            input_stream=data_stream,
            processing_config=processing_config,
        )

        # Process stream with windowing
        async for window in self.window_manager.create_windows(
            stream=pipeline,
            window_config=processing_config.window_config,
        ):
            # Process window data
            window_result = await self._process_stream_window(
                window_data=window,
                processing_config=processing_config,
            )
            # Aggregate results
            aggregated_result = await self.aggregator.aggregate_window_result(
                window_result=window_result,
                aggregation_config=processing_config.aggregation_config,
            )
            # Buffer output for batch delivery
            await self.output_buffer.add_result(
                result=aggregated_result,
                delivery_config=processing_config.delivery_config,
            )
            # Trigger delivery if buffer conditions are met
            if self.output_buffer.should_deliver():
                await self._deliver_buffered_results(processing_config)

    async def _process_stream_window(self, window_data, processing_config):
        """Process an individual stream window."""
        # Apply item-level processing logic
        processed_items = []
        for item in window_data.items:
            processed_item = await self._process_stream_item(
                item=item,
                processing_config=processing_config,
            )
            processed_items.append(processed_item)

        # Apply window-level aggregations
        window_aggregations = await self._calculate_window_aggregations(
            processed_items=processed_items,
            aggregation_config=processing_config.window_aggregations,
        )
        return StreamWindowResult(
            window_id=window_data.window_id,
            processed_items=processed_items,
            aggregations=window_aggregations,
            processing_timestamp=datetime.utcnow(),
        )
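The windowing the stream skill relies on can be illustrated without any SDK machinery. Below is a minimal sketch of count-based tumbling windows over an async stream, using only `asyncio` (the helper names here are illustrative, not part of the OpenClaw API):

```python
import asyncio

async def tumbling_windows(stream, size):
    """Group an async stream into fixed-size (count-based) tumbling windows -
    the simplest form of the windowing used by the skill above."""
    window = []
    async for item in stream:
        window.append(item)
        if len(window) == size:
            yield window
            window = []
    if window:  # flush the final partial window
        yield window

async def demo():
    async def source():
        for i in range(7):
            yield i

    # Aggregate each window of 3 items into its sum: [0,1,2], [3,4,5], [6]
    return [sum(w) async for w in tumbling_windows(source(), 3)]
```

Time-based or sliding windows add bookkeeping (timestamps, overlap), but the produce-window-then-aggregate shape stays the same.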
Production-ready Skills require extensive testing across multiple dimensions:
import asyncio
import random
import time
from unittest.mock import AsyncMock, MagicMock

import pytest

from openclaw_sdk.testing import SkillTestFramework, MockServices


class TestAdvancedCustomSkill:
    """Comprehensive test suite for custom skills."""

    @pytest.fixture
    async def skill_instance(self):
        """Create a skill instance for testing."""
        test_config = SkillConfig(
            name="test_skill",
            version="1.0.0-test",
            description="Test skill instance",
            database_config=MockDatabaseConfig(),
            api_config=MockAPIConfig(),
            cache_config=MockCacheConfig(),
        )
        skill = AdvancedCustomSkill(test_config)
        await skill.initialize()
        yield skill
        await skill.cleanup()

    @pytest.fixture
    def mock_services(self):
        """Set up mock external services."""
        return MockServices({
            'database': MockDatabaseService(),
            'api': MockAPIService(),
            'cache': MockCacheService(),
            'ml_model': MockMLModelService(),
        })
    @pytest.mark.asyncio
    async def test_primary_function_success(self, skill_instance, mock_services):
        """Test successful execution of the primary function."""
        # Arrange
        test_input = {
            "data": "test_data_value",
            "parameters": {"param1": "value1", "param2": "value2"},
        }
        execution_context = ExecutionContext(
            user_id="test_user",
            session_id="test_session",
            permissions=["read", "write"],
            timeout=30,
        )
        # Configure mocks
        mock_services.api.fetch_enrichment_data.return_value = {
            "enrichment": "test_enrichment"
        }

        # Act
        result = await skill_instance.execute_primary_function(
            input_data=test_input,
            execution_context=execution_context,
        )

        # Assert
        assert result.status == "success"
        assert result.quality_score >= 0.9
        assert "processed" in result.data

        # Verify mock interactions
        mock_services.api.fetch_enrichment_data.assert_called_once()
        mock_services.cache.get.assert_called()
    @pytest.mark.asyncio
    async def test_primary_function_validation_error(self, skill_instance):
        """Test handling of validation errors."""
        # Arrange
        invalid_input = {
            "data": None,  # Invalid data
            "parameters": {},
        }
        execution_context = ExecutionContext(
            user_id="test_user",
            session_id="test_session",
            permissions=["read"],
            timeout=30,
        )

        # Act & Assert
        with pytest.raises(SkillExecutionException) as exc_info:
            await skill_instance.execute_primary_function(
                input_data=invalid_input,
                execution_context=execution_context,
            )
        assert "Validation failed" in str(exc_info.value)

    @pytest.mark.asyncio
    async def test_performance_under_load(self, skill_instance, mock_services):
        """Test skill performance under concurrent load."""
        # Arrange
        concurrent_requests = 50
        test_input = {
            "data": "load_test_data",
            "parameters": {"load_test": True},
        }
        execution_context = ExecutionContext(
            user_id="load_test_user",
            session_id="load_test_session",
            permissions=["read", "write"],
            timeout=30,
        )

        # Act
        start_time = time.perf_counter()
        tasks = [
            skill_instance.execute_primary_function(
                input_data=test_input,
                execution_context=execution_context,
            )
            for _ in range(concurrent_requests)
        ]
        results = await asyncio.gather(*tasks, return_exceptions=True)
        execution_time = time.perf_counter() - start_time

        # Assert
        successful_results = [
            r for r in results if not isinstance(r, Exception)
        ]
        assert len(successful_results) >= concurrent_requests * 0.95  # 95% success rate
        assert execution_time < 10.0  # Complete within 10 seconds

        # Verify average response time
        avg_response_time = execution_time / concurrent_requests
        assert avg_response_time < 0.5  # Average response under 500 ms
    @pytest.mark.asyncio
    async def test_caching_behavior(self, skill_instance, mock_services):
        """Test caching functionality and cache hit rates."""
        # Arrange
        test_input = {
            "data": "cache_test_data",
            "parameters": {"cacheable": True},
        }
        execution_context = ExecutionContext(
            user_id="cache_test_user",
            session_id="cache_test_session",
            permissions=["read", "write"],
            timeout=30,
        )

        # Act - first execution (cache miss)
        result1 = await skill_instance.execute_primary_function(
            input_data=test_input,
            execution_context=execution_context,
        )
        # Act - second execution (cache hit)
        result2 = await skill_instance.execute_primary_function(
            input_data=test_input,
            execution_context=execution_context,
        )

        # Assert
        assert result1.data == result2.data

        # Verify the cache was used in the second call
        cache_stats = await skill_instance.cache_manager.get_stats()
        assert cache_stats.hit_rate > 0.0

        # Verify the API was called only once (first execution)
        assert mock_services.api.fetch_enrichment_data.call_count == 1
class TestMLEnhancedSkill:
    """Specialized tests for ML-enhanced skills."""

    @pytest.fixture
    async def ml_skill_instance(self):
        """Create an ML skill instance for testing."""
        ml_config = MLSkillConfig(
            name="test_ml_skill",
            version="1.0.0-test",
            ml_config=MockMLConfig(),
            feature_config=MockFeatureConfig(),
        )
        skill = MLEnhancedSkill(ml_config)
        await skill.initialize()
        yield skill
        await skill.cleanup()

    @pytest.mark.asyncio
    async def test_ml_prediction_accuracy(self, ml_skill_instance):
        """Test ML prediction accuracy and quality."""
        # Arrange
        test_data = {
            "features": {
                "numerical_feature_1": 0.75,
                "numerical_feature_2": 0.25,
                "categorical_feature": "category_a",
            }
        }
        prediction_config = PredictionConfig(
            model_type="classification",
            model_version="latest",
            confidence_threshold=0.8,
        )

        # Act
        prediction_result = await ml_skill_instance.perform_ml_prediction(
            input_data=test_data,
            prediction_config=prediction_config,
        )

        # Assert
        assert prediction_result.confidence >= 0.8
        assert prediction_result.quality_score >= 0.7
        assert prediction_result.prediction is not None
        assert prediction_result.feature_importance is not None

    @pytest.mark.asyncio
    async def test_model_performance_monitoring(self, ml_skill_instance):
        """Test ML model performance monitoring and alerting."""
        # Arrange & Act - collect 100 predictions
        test_predictions = []
        for _ in range(100):
            test_data = {
                "features": {
                    "feature_1": random.uniform(0, 1),
                    "feature_2": random.uniform(0, 1),
                }
            }
            prediction_config = PredictionConfig(
                model_type="regression",
                model_version="latest",
                confidence_threshold=0.7,
            )
            prediction = await ml_skill_instance.perform_ml_prediction(
                input_data=test_data,
                prediction_config=prediction_config,
            )
            test_predictions.append(prediction)

        # Assert
        avg_quality = sum(p.quality_score for p in test_predictions) / len(test_predictions)
        assert avg_quality >= 0.75

        # Verify monitoring data collection
        model_stats = await ml_skill_instance.model_monitor.get_performance_stats()
        assert model_stats.total_predictions == 100
        assert model_stats.average_quality_score >= 0.75
Modern Skills development integrates seamlessly with DevOps workflows:
# .gitlab-ci.yml for Skills development
stages:
  - validate
  - test
  - security_scan
  - build
  - deploy_staging
  - integration_test
  - deploy_production
  - monitor

variables:
  SKILL_NAME: "${CI_PROJECT_NAME}"
  LIGHTHOUSE_REGION: "ap-singapore"
  DOCKER_REGISTRY: "registry.lighthouse.tencentcloud.com"

# Validation stage
validate_skill_manifest:
  stage: validate
  image: openclaw/skills-cli:2026-latest
  script:
    - openclaw-cli validate-manifest --file=skill_manifest.yaml
    - openclaw-cli check-dependencies --skill=${SKILL_NAME}
    - openclaw-cli lint-code --standards=pep8,openclaw-style
  artifacts:
    reports:
      junit: validation-report.xml

# Testing stages
unit_tests:
  stage: test
  image: openclaw/skills-runtime:2026-latest
  services:
    - redis:7-alpine
    - postgres:15-alpine
  script:
    - pip install -r requirements-test.txt
    - pytest tests/unit/ --cov=src/ --cov-report=xml --junitxml=unit-test-report.xml
  coverage: '/TOTAL.*\s+(\d+%)$/'
  artifacts:
    reports:
      junit: unit-test-report.xml
      coverage_report:
        coverage_format: cobertura
        path: coverage.xml
integration_tests:
  stage: test
  image: openclaw/skills-runtime:2026-latest
  services:
    - openclaw/mock-services:latest
  script:
    - openclaw-cli setup-test-environment --integration
    - pytest tests/integration/ --junitxml=integration-test-report.xml
  artifacts:
    reports:
      junit: integration-test-report.xml

performance_tests:
  stage: test
  image: openclaw/skills-runtime:2026-latest
  script:
    - openclaw-cli run-performance-tests --skill=${SKILL_NAME} --duration=5m
    - openclaw-cli generate-performance-report --format=junit
  artifacts:
    reports:
      junit: performance-test-report.xml
      performance: performance-metrics.json

# Security scanning
security_scan:
  stage: security_scan
  image: openclaw/security-scanner:2026-latest
  script:
    - openclaw-cli security-scan --skill=${SKILL_NAME} --comprehensive
    - openclaw-cli vulnerability-check --dependencies --format=json
    - openclaw-cli compliance-check --standards=gdpr,sox,hipaa
  artifacts:
    reports:
      sast: security-scan-report.json
    paths:
      - security-report.html
  allow_failure: false
# Build stage
build_skill_package:
  stage: build
  image: docker:24
  services:
    - docker:24-dind
  script:
    - docker build -t ${DOCKER_REGISTRY}/${SKILL_NAME}:${CI_COMMIT_SHA} .
    - docker tag ${DOCKER_REGISTRY}/${SKILL_NAME}:${CI_COMMIT_SHA} ${DOCKER_REGISTRY}/${SKILL_NAME}:latest
    - docker push ${DOCKER_REGISTRY}/${SKILL_NAME}:${CI_COMMIT_SHA}
    - docker push ${DOCKER_REGISTRY}/${SKILL_NAME}:latest
    - openclaw-cli package-skill --format=container --tag=${CI_COMMIT_SHA}
  artifacts:
    paths:
      - skill-package.tar.gz

# Staging deployment
deploy_to_staging:
  stage: deploy_staging
  image: openclaw/deployment-cli:2026-latest
  script:
    - >
      lighthouse deploy-skill
      --environment=staging
      --skill=${SKILL_NAME}
      --version=${CI_COMMIT_SHA}
      --region=${LIGHTHOUSE_REGION}
    - >
      openclaw-cli verify-deployment
      --environment=staging
      --health-check-timeout=300s
  environment:
    name: staging
    url: https://staging-${SKILL_NAME}.lighthouse.tencentcloud.com
  only:
    - develop
    - main

# Staging integration tests
staging_integration_tests:
  stage: integration_test
  image: openclaw/skills-runtime:2026-latest
  script:
    - >
      openclaw-cli run-integration-tests
      --environment=staging
      --skill=${SKILL_NAME}
      --comprehensive
    - >
      openclaw-cli run-load-tests
      --environment=staging
      --duration=10m
      --concurrent-users=50
  artifacts:
    reports:
      junit: staging-integration-report.xml
      performance: staging-performance-metrics.json
  only:
    - develop
    - main
# Production deployment
deploy_to_production:
  stage: deploy_production
  image: openclaw/deployment-cli:2026-latest
  script:
    - >
      lighthouse deploy-skill
      --environment=production
      --skill=${SKILL_NAME}
      --version=${CI_COMMIT_SHA}
      --region=${LIGHTHOUSE_REGION}
      --deployment-strategy=blue-green
    - >
      openclaw-cli verify-deployment
      --environment=production
      --health-check-timeout=600s
    - >
      openclaw-cli enable-monitoring
      --comprehensive
      --alerting=enabled
  environment:
    name: production
    url: https://${SKILL_NAME}.lighthouse.tencentcloud.com
  when: manual
  only:
    - main

# Post-deployment monitoring
setup_monitoring:
  stage: monitor
  image: openclaw/monitoring-cli:2026-latest
  script:
    - >
      openclaw-cli setup-monitoring
      --skill=${SKILL_NAME}
      --environment=production
      --dashboard=comprehensive
    - >
      openclaw-cli configure-alerts
      --channels=slack,email,webhook
      --thresholds=performance,error-rate,availability
    - >
      openclaw-cli validate-performance
      --baseline=staging
      --tolerance=10%
  only:
    - main
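The deployment jobs above gate promotion on a health check with a timeout (`--health-check-timeout=300s` in staging, `600s` in production). The core of such a gate is a poll-until-healthy loop; here is a minimal sketch of that idea in Python, where the `check` callable and timing parameters are stand-ins rather than `openclaw-cli` internals:

```python
import time

def wait_until_healthy(check, timeout_s, interval_s=1.0, sleep=time.sleep):
    """Poll `check()` until it returns True or `timeout_s` elapses.
    Returns True on success, False on timeout - the pass/fail signal a
    deployment gate like `verify-deployment` would act on."""
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        if check():
            return True
        sleep(interval_s)  # back off before the next probe
    return False

# Simulated deployment that becomes healthy on the third probe
probes = iter([False, False, True])
ok = wait_until_healthy(lambda: next(probes), timeout_s=10,
                        interval_s=0, sleep=lambda s: None)
```

Real gates usually add exponential backoff and distinguish "not ready yet" from "failing", so a blue-green rollout can be rolled back early instead of waiting out the full timeout.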
Skills infrastructure can be managed through declarative configuration:
# Terraform configuration for Skills production deployment
terraform {
  required_providers {
    tencentcloud = {
      source  = "tencentcloudstack/tencentcloud"
      version = "~> 1.81"
    }
  }
}

# Configure Tencent Cloud provider
provider "tencentcloud" {
  region = var.lighthouse_region
}

# Variables
variable "skill_name" {
  description = "Name of the skill to deploy"
  type        = string
}

variable "lighthouse_region" {
  description = "Lighthouse deployment region"
  type        = string
  default     = "ap-singapore"
}

variable "environment" {
  description = "Deployment environment"
  type        = string

  validation {
    condition     = contains(["staging", "production"], var.environment)
    error_message = "Environment must be either 'staging' or 'production'."
  }
}

# Data sources
data "tencentcloud_lighthouse_bundle" "skill_bundle" {
  bundle_type = "GENERAL"

  filter {
    name   = "bundle-id"
    values = ["bundle_8c16g200s_lighthouse"]
  }
}

data "tencentcloud_lighthouse_blueprint" "openclaw_blueprint" {
  blueprint_type = "APP"

  filter {
    name   = "blueprint-name"
    values = ["OpenClaw Skills Runtime 2026"]
  }
}
# Lighthouse instances for Skills deployment
resource "tencentcloud_lighthouse_instance" "skill_primary" {
  instance_name = "${var.skill_name}-${var.environment}-primary"
  bundle_id     = data.tencentcloud_lighthouse_bundle.skill_bundle.bundle_set[0].bundle_id
  blueprint_id  = data.tencentcloud_lighthouse_blueprint.openclaw_blueprint.blueprint_set[0].blueprint_id

  login_configuration {
    auto_generate_password = false
    key_ids                = [tencentcloud_lighthouse_key_pair.skill_keypair.id]
  }

  tags = {
    Environment = var.environment
    Skill       = var.skill_name
    Role        = "primary"
    ManagedBy   = "terraform"
  }
}

resource "tencentcloud_lighthouse_instance" "skill_secondary" {
  count = var.environment == "production" ? 2 : 0

  instance_name = "${var.skill_name}-${var.environment}-secondary-${count.index + 1}"
  bundle_id     = data.tencentcloud_lighthouse_bundle.skill_bundle.bundle_set[0].bundle_id
  blueprint_id  = data.tencentcloud_lighthouse_blueprint.openclaw_blueprint.blueprint_set[0].blueprint_id

  login_configuration {
    auto_generate_password = false
    key_ids                = [tencentcloud_lighthouse_key_pair.skill_keypair.id]
  }

  tags = {
    Environment = var.environment
    Skill       = var.skill_name
    Role        = "secondary"
    ManagedBy   = "terraform"
  }
}

# SSH key pair
resource "tencentcloud_lighthouse_key_pair" "skill_keypair" {
  key_name   = "${var.skill_name}-${var.environment}-keypair"
  public_key = file("~/.ssh/id_rsa.pub")
}

# Firewall rules
resource "tencentcloud_lighthouse_firewall_rule" "skill_api" {
  instance_id = tencentcloud_lighthouse_instance.skill_primary.id

  firewall_rules {
    protocol                  = "TCP"
    port                      = "8080"
    cidr_block                = "0.0.0.0/0"
    action                    = "ACCEPT"
    firewall_rule_description = "Skill API endpoint"
  }

  firewall_rules {
    protocol                  = "TCP"
    port                      = "8443"
    cidr_block                = "0.0.0.0/0"
    action                    = "ACCEPT"
    firewall_rule_description = "Skill HTTPS API endpoint"
  }
}
# Load balancer for production
resource "tencentcloud_clb_instance" "skill_lb" {
  count = var.environment == "production" ? 1 : 0

  network_type = "OPEN"
  clb_name     = "${var.skill_name}-${var.environment}-lb"

  tags = {
    Environment = var.environment
    Skill       = var.skill_name
    ManagedBy   = "terraform"
  }
}

# Outputs
output "primary_instance_ip" {
  description = "Public IP of the primary skill instance"
  value       = tencentcloud_lighthouse_instance.skill_primary.public_addresses[0]
}

output "secondary_instance_ips" {
  description = "Public IPs of secondary skill instances"
  value       = var.environment == "production" ? tencentcloud_lighthouse_instance.skill_secondary[*].public_addresses[0] : []
}

output "skill_endpoint" {
  description = "Skill API endpoint URL"
  value = (
    var.environment == "production" && length(tencentcloud_clb_instance.skill_lb) > 0
    ? "https://${tencentcloud_clb_instance.skill_lb[0].domain}"
    : "https://${tencentcloud_lighthouse_instance.skill_primary.public_addresses[0]}:8443"
  )
}
Production Skills require sophisticated performance optimization techniques:
class SkillPerformanceOptimizer:
    def __init__(self, skill_instance):
        self.skill = skill_instance
        self.profiler = AdvancedProfiler()
        self.optimizer = CodeOptimizer()
        self.cache_optimizer = CacheOptimizer()
        self.resource_optimizer = ResourceOptimizer()

    async def optimize_skill_performance(self):
        """Comprehensive skill performance optimization."""
        # Profile current performance
        performance_baseline = await self.profiler.profile_skill_execution(
            skill=self.skill,
            test_scenarios=self._generate_performance_test_scenarios(),
            duration_minutes=10,
        )

        # Identify optimization opportunities
        optimization_opportunities = await self._identify_optimization_opportunities(
            performance_baseline
        )

        # Apply optimizations with significant expected impact
        optimization_results = []
        for opportunity in optimization_opportunities:
            if opportunity.impact_score <= 0.1:  # Skip low-impact opportunities
                continue
            if opportunity.type == "caching":
                result = await self.cache_optimizer.optimize_caching_strategy(
                    skill=self.skill,
                    caching_config=opportunity.caching_config,
                )
            elif opportunity.type == "algorithm":
                result = await self.optimizer.optimize_algorithm(
                    skill=self.skill,
                    algorithm_config=opportunity.algorithm_config,
                )
            elif opportunity.type == "resource_allocation":
                result = await self.resource_optimizer.optimize_resource_allocation(
                    skill=self.skill,
                    resource_config=opportunity.resource_config,
                )
            else:
                continue  # Unknown opportunity type - nothing to apply
            optimization_results.append(result)

        # Validate optimization results against a fresh profile
        optimized_performance = await self.profiler.profile_skill_execution(
            skill=self.skill,
            test_scenarios=self._generate_performance_test_scenarios(),
            duration_minutes=10,
        )

        # Calculate improvement metrics
        improvement_metrics = self._calculate_improvement_metrics(
            baseline=performance_baseline,
            optimized=optimized_performance,
        )

        return SkillOptimizationResult(
            applied_optimizations=optimization_results,
            performance_improvement=improvement_metrics,
            baseline_performance=performance_baseline,
            optimized_performance=optimized_performance,
        )
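The baseline-versus-optimized comparison at the end of the optimizer reduces to a relative-change calculation per metric. A minimal sketch of what a `_calculate_improvement_metrics` helper might compute, assuming flat dicts of metric values (the metric names here are hypothetical):

```python
def improvement_metrics(baseline, optimized):
    """Relative improvement per metric; positive means the optimized run is
    better. For latency-style metrics (lower is better) this is
    (baseline - optimized) / baseline; for throughput-style metrics
    (higher is better) it is (optimized - baseline) / baseline."""
    lower_is_better = {"p50_latency_ms", "p99_latency_ms", "error_rate"}
    metrics = {}
    for name, base in baseline.items():
        if base == 0 or name not in optimized:
            continue  # avoid division by zero and missing metrics
        opt = optimized[name]
        if name in lower_is_better:
            metrics[name] = (base - opt) / base
        else:
            metrics[name] = (opt - base) / base
    return metrics

baseline = {"p99_latency_ms": 200.0, "throughput_rps": 400.0}
optimized = {"p99_latency_ms": 150.0, "throughput_rps": 500.0}
result = improvement_metrics(baseline, optimized)
# Both metrics improved by 25% relative to the baseline
```

Normalizing against the baseline makes improvements comparable across metrics with different units, which is what a tolerance check like `validate-performance --tolerance=10%` needs.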
Rapid Skills development setup on Lighthouse:
# Deploy development environment with promotional pricing
lighthouse deploy-instance \
--template=openclaw-skills-dev-comprehensive \
--specs=8c16g200s \
--promotion=skills-dev-2026 \
--region=ap-singapore
# Initialize development workspace
openclaw-dev init-workspace \
--skill-name=my_custom_skill \
--template=enterprise \
--integrations=database,api,ml \
--testing=comprehensive
# Start development server with hot reload
openclaw-dev start-server \
--hot-reload=enabled \
--debugging=advanced \
--port=8080 \
--ssl=enabled
Successful Skills development follows proven methodologies: start from a minimal working skill, iterate behind comprehensive automated tests, promote every change through staging before production, and monitor continuously after release.
Custom OpenClaw Skills development enables organizations to create precisely tailored AI solutions that address their unique business requirements. The combination of an advanced development framework, comprehensive testing methodologies, and enterprise-grade deployment infrastructure creates real opportunities for innovation and differentiation.
Tencent Cloud Lighthouse's simple, high-performance, and cost-effective platform provides the ideal foundation for Skills development and deployment. The promotional offerings eliminate financial barriers while providing access to enterprise-grade development and production environments.
Organizations that master custom Skills development gain the ability to rapidly prototype, develop, and deploy AI solutions that perfectly align with their business processes and strategic objectives. The comprehensive tutorial framework ensures successful implementation while maintaining enterprise-grade quality and performance standards.
Start your custom Skills development journey today with the Tencent Cloud Lighthouse Special Offer and unlock the full potential of tailored AI automation.
For comprehensive development resources and advanced tutorials, visit https://www.tencentcloud.com/techpedia/139184 and https://www.tencentcloud.com/techpedia/139672 to master the complete Skills development lifecycle.