Strengths

Last updated: 2023-05-06 17:36:46

    Orchestration Advantages

    Tencent Kubernetes Engine (TKE) is built on Kubernetes, the container cluster management system open-sourced by Google. Building on container technology such as Docker, Kubernetes gives containerized applications a complete set of capabilities, from deployment, execution, and resource scheduling to service discovery and dynamic scaling, making large-scale container clusters much easier to manage (a brief sketch of this declarative model follows the list below).
    Kubernetes brings the following benefits:
    It follows a modular, microservice-oriented design, so the network, storage, scheduling, monitoring, and logging components can be customized as needed through flexible plugins.
    The Kubernetes community serves as an open-source ecosystem for container runtime, network, and storage implementations.
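
    The following is a minimal sketch (not an official TKE example) of the declarative deployment model described above, using the open-source Python client for Kubernetes. It works against any conformant cluster, including a TKE cluster whose kubeconfig has been downloaded locally; the image, names, and replica count are illustrative assumptions.

        # Minimal sketch: declaring a Deployment with the official Python Kubernetes
        # client. Kubernetes schedules the Pods, keeps the requested number of
        # replicas running, and reschedules them if a node fails.
        from kubernetes import client, config

        config.load_kube_config()  # use the local kubeconfig for the target cluster

        deployment = client.V1Deployment(
            api_version="apps/v1",
            kind="Deployment",
            metadata=client.V1ObjectMeta(name="hello-web"),
            spec=client.V1DeploymentSpec(
                replicas=3,  # keep 3 Pods running; failed Pods are replaced automatically
                selector=client.V1LabelSelector(match_labels={"app": "hello-web"}),
                template=client.V1PodTemplateSpec(
                    metadata=client.V1ObjectMeta(labels={"app": "hello-web"}),
                    spec=client.V1PodSpec(
                        containers=[
                            client.V1Container(
                                name="web",
                                image="nginx:1.25",
                                ports=[client.V1ContainerPort(container_port=80)],
                            )
                        ]
                    ),
                ),
            ),
        )

        client.AppsV1Api().create_namespaced_deployment(namespace="default", body=deployment)

    The same Deployment object can later be patched to a different replica count, which is the dynamic scaling capability mentioned above.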

    TKE vs. Customer Self-Built Container Service

    The following compares TKE with a customer self-built container service in each area.

    Ease of use (simplified cluster management)
    TKE: TKE provides large-scale container cluster management, resource scheduling, container orchestration, and code building. It shields the differences between underlying infrastructures and simplifies the management and Ops of distributed applications, so you no longer need to install cluster management software or design fault-tolerant cluster architectures, which eliminates the associated management and scaling work. You only need to start a container cluster and specify the tasks you want to run; TKE handles all of the cluster management, letting you focus on developing Dockerized applications.
    Self-built: With a self-built container management infrastructure, you typically have to work through complex processes such as installing, operating, and scaling your own cluster management software, as well as configuring management systems and monitoring solutions.

    Flexible scalability (flexible cluster management and CLB integration)
    TKE: You can use TKE to flexibly schedule long-running applications and batch jobs, and use its APIs to obtain up-to-date cluster status for easy integration with your own or third-party scheduling applications. TKE is integrated with Cloud Load Balancer (CLB), so traffic can be distributed across multiple containers: you only need to specify the container configuration and the load balancer to use, and TKE automatically adds or removes backend resources for you (see the sketch after this item). TKE also automatically recovers faulty containers, guaranteeing that enough containers are always running to sustain your applications.
    Self-built: You have to decide how to deploy container services manually based on business traffic and health status, which results in poor availability and scalability.

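    As a sketch of the CLB integration described above (reusing the hypothetical "hello-web" Deployment from the earlier example), a standard Kubernetes Service of type LoadBalancer is enough for TKE to provision a load balancer and keep its backends in sync with the Pods; the same API can also be polled for cluster status. Names and ports are illustrative assumptions.

        # Minimal sketch: expose the Pods behind a cloud load balancer and read
        # cluster state through the standard Kubernetes API. On TKE, a Service of
        # type LoadBalancer is fulfilled by a CLB instance.
        from kubernetes import client, config

        config.load_kube_config()

        service = client.V1Service(
            api_version="v1",
            kind="Service",
            metadata=client.V1ObjectMeta(name="hello-web"),
            spec=client.V1ServiceSpec(
                type="LoadBalancer",            # request a cloud load balancer (CLB on TKE)
                selector={"app": "hello-web"},  # distribute traffic across matching Pods
                ports=[client.V1ServicePort(port=80, target_port=80)],
            ),
        )
        client.CoreV1Api().create_namespaced_service(namespace="default", body=service)

        # Up-to-date cluster status, e.g. for a custom or third-party scheduler:
        for pod in client.CoreV1Api().list_namespaced_pod(namespace="default").items:
            print(pod.metadata.name, pod.status.phase)
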
    Security and reliability (secure resource isolation and highly available services)
    TKE: TKE runs inside your own Cloud Virtual Machine (CVM) instances without sharing computing resources with other customers. Your clusters run inside Virtual Private Clouds (VPCs), where you can use your own security groups and network ACLs. These features provide a high level of isolation and help you build highly secure and reliable applications on CVM instances. TKE uses a distributed service architecture to implement automatic failover and fast migration for services, and, together with distributed backend storage for stateful services, keeps services and data highly secure and available (a container-level self-healing sketch follows this item).
    Self-built: Due to kernel issues and imperfect namespaces, self-built container services provide poor isolation between tenants, devices, and kernel modules.

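    The container-level self-healing mentioned above can be sketched with a standard Kubernetes liveness probe: when the health check fails repeatedly, the kubelet restarts the container. This is generic Kubernetes behavior rather than a TKE-specific API; the /healthz path, port, and timings are illustrative assumptions.

        # Minimal sketch: a container spec with a liveness probe, so unhealthy
        # containers are restarted automatically (illustrative values only).
        from kubernetes import client

        container = client.V1Container(
            name="web",
            image="nginx:1.25",
            ports=[client.V1ContainerPort(container_port=80)],
            liveness_probe=client.V1Probe(
                http_get=client.V1HTTPGetAction(path="/healthz", port=80),
                initial_delay_seconds=10,   # allow the process time to start
                period_seconds=15,          # probe every 15 seconds
                failure_threshold=3,        # restart after 3 consecutive failures
            ),
        )
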
    High efficiency (fast image deployment and continuous business integration)
    TKE: TKE runs inside your VPCs, where high-quality BGP networks ensure fast image uploads and downloads and allow large numbers of containers to start within seconds, greatly reducing operational overhead and letting you focus on your business. After code is pushed to GitHub or another code hosting platform, TKE can immediately build, test, package, and integrate it, and deploy the result to pre-release and production environments.
    Self-built: Because of unstable network quality, self-built container services cannot guarantee how efficiently images can be pulled for container creation.

    Low cost (high cost-effectiveness)
    TKE: A TKE managed cluster is more cost-effective than a self-deployed or self-built cluster: you get a highly reliable, stable, and scalable cluster management plane at low cost, with no Ops burden.
    Self-built: You have to invest heavily to build, install, operate, and scale out your own cluster management infrastructure.

    Cloud native (optimized for cloud-native scenarios)
    TKE: TKE offers native nodes, a node type designed specifically for Kubernetes environments. Drawing on Tencent Cloud's experience in managing millions of container cores, native nodes provide highly stable and responsive Kubernetes node management, with kernel optimizations that make them well suited to cloud-native scenarios.
    Self-built: Although Kubernetes shields the underlying infrastructure, you still have to adapt to the underlying architecture during development because the underlying resources cannot be modified.

    Improved efficiency (FinOps implementation)
    TKE: A cloud-native asset management platform offers insights into cost distribution and resource usage from multiple perspectives, such as cost analysis, job scheduling, and fine-grained scheduling, helping you maximize the value of every cost incurred in the cloud.
    Self-built: The available elastic scaling tools are hard to configure and slow to respond, and their visualization capabilities are limited.

    Serverless (serverless deployment)
    TKE: The super node is a new, upgraded Tencent Cloud node type that offers availability-zone-level node capabilities with custom specifications. Similar to a very large CVM instance, a super node simplifies resource management and scaling.
    Self-built: Self-building is complex and resource-intensive, maintaining self-built container services is challenging, and such services cannot be truly serverless.

    TKE Monitoring vs. Customer Self-Built Container Monitoring

    TKE monitoring collects and displays around 30 metrics covering clusters, nodes, services, Pods, and containers, allowing you to check cluster health and create alarms accordingly. More metrics will be available soon. The following compares TKE monitoring with customer self-built container monitoring in each area.

    Complete metrics
    TKE: Approximately 30 metrics are available, covering clusters, nodes, services, containers, and Pods (instances).
    Self-built: Only a few metrics are available, and in-house development is required for more.

    Low construction cost
    TKE: Monitoring is provided as soon as a cluster is created.
    Self-built: Monitoring must be built manually, which can be expensive.

    Low Ops cost
    TKE: Ops is handled by the platform, with guaranteed data accuracy.
    Self-built: Manual Ops is required.

    Low storage cost
    TKE: Data for each metric is retained free of charge for the past three months.
    Self-built: Fees are charged based on the storage size.

    High scalability
    TKE: TKE keeps improving existing metrics and adding new ones.
    Self-built: New metrics have to be developed in-house.

    Alarming
    TKE: Available.
    Self-built: Unavailable.

    Troubleshooting
    TKE: Container logs can be viewed in the console, and web shells let you quickly log in to containers for troubleshooting (see the sketch after this comparison).
    Self-built: You have to log in to containers or servers manually to troubleshoot.

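    As a rough illustration of the troubleshooting row above, the standard Kubernetes API offers a programmatic equivalent of viewing container logs in the console; the Pod name, namespace, container name, and line count below are hypothetical.

        # Minimal sketch: fetch the most recent log lines of one container through
        # the standard Kubernetes API (not a TKE-specific call).
        from kubernetes import client, config

        config.load_kube_config()

        logs = client.CoreV1Api().read_namespaced_pod_log(
            name="hello-web-6d5f9c7b8-abcde",  # hypothetical Pod name
            namespace="default",
            container="web",
            tail_lines=100,                    # only the latest 100 lines
        )
        print(logs)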
    