Why AI-Managed Kubernetes Autoscaling?

Scaling containers, pods, and nodes is a moving target. It is impacted by many factors, such as application usage, run-time architecture, and the underlying infrastructure. The current human-driven approach to controlling all these factors has proven unscalable, tedious, and error-prone. Magalix's AI combines machine learning models with our Kubernetes agent to achieve a continuous balance between resources and performance. (3 minute read)

The Magalix team has built and managed many cloud-based systems, and we saw the opportunities of containers and Kubernetes early on. At the same time, we realized that to maximize the potential of such great tools, intelligence has to be added to how Kubernetes manages containers and the underlying infrastructure.

Many Factors and Dependencies Impacting Performance and Capacity

Balancing performance and capacity is impacted by application usage, run-time dependencies between different services, how pods are arranged and granted resources, and the available infrastructure capacity.

Figure: Challenges managing Kubernetes autoscaling across the whole stack

Kubernetes provides a great abstraction of the different layers, and its relatively clean API model for managing and extending resources makes it easy to integrate with many management tools. However, it lacks the intelligence needed to keep the scalability of all layers and resources continuously aligned with the factors, mentioned above, that impact performance and capacity.

Magalix takes a 360-degree approach that connects all layers inside a Kubernetes cluster. Collecting the right metrics and identifying their significance can't be properly managed by if-then-else rules. It requires a holistic view of metrics that is keenly aware of the run-time architecture, as well as knowledge of the capacity of the underlying infrastructure where applications and services live. We believe that an AI approach that continuously learns and evolves to keep that balance in such fast-moving environments will redefine how engineers interact with their systems.

Too Many Metrics and Complex Autoscaling

There is no shortage of metrics and data collected from applications, containers, and infrastructure. Engineers constantly juggle different metrics to check the performance of applications, containers, and infrastructure before setting scalability rules or directly scaling containers or VMs.

It is easy to fall into either under-allocating resources, leading to Live Site Incidents (LSIs), or over-provisioning, which strains your budget. We believe there is a better way than guessing scalability rules as businesses and operations grow more complex.

Kubernetes provides good components to scale pods (horizontally or vertically) and cluster nodes. While they give more control, they come with some disadvantages: they are quite complex to configure and to make work in harmony, and a continuous review of metrics at different layers is necessary to stay on top of available capacity and performance.
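As an illustration of that configuration burden, even the simplest of these components, the Horizontal Pod Autoscaler, needs a manifest like the sketch below, and it only covers one layer of the stack. Names, replica bounds, and the CPU threshold here are hypothetical; on older clusters the API group may be autoscaling/v2beta2 rather than autoscaling/v2.

```yaml
# Minimal HPA sketch: scale a Deployment on average CPU utilization.
# All names and numbers are illustrative, not recommendations.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web-hpa            # hypothetical HPA name
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web              # hypothetical Deployment to scale
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 70   # target average CPU across pods
```

Note that this manifest says nothing about node capacity: if the cluster runs out of room for new pods, a separate node-scaling mechanism still has to react.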

While Magalix has access to many metrics, it focuses on the core metrics representing an application's KPIs. Magalix's AI approach dynamically weighs the impact of different pods and containers on application performance. The Magalix agent makes kickstarting our solution a breeze: data starts flowing in less than 5 minutes after the agent is installed.

Connect Application KPIs with Containers and Infrastructure

Unless the Kubernetes Cluster Autoscaler (CA) is installed and properly configured, engineers need to combine external and internal tools to monitor and connect container-level activity with infrastructure. For example, for Kubernetes clusters running on AWS, engineers typically use a combination of the Horizontal Pod Autoscaler (HPA) and AWS Auto Scaling groups (ASGs) to manage the scale of pods and nodes. Connecting these separate scalability systems is a challenge.
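To make that wiring concrete, on AWS the Cluster Autoscaler is typically pointed at an ASG through its command-line flags. The fragment below is a sketch of the relevant container spec from a CA Deployment; the image tag, node bounds, and ASG name are hypothetical and must match your own cluster.

```yaml
# Sketch: Cluster Autoscaler container configured against one AWS ASG.
# The HPA scales pods; this separate component scales the nodes under them.
containers:
- name: cluster-autoscaler
  image: registry.k8s.io/autoscaling/cluster-autoscaler:v1.28.0  # hypothetical tag
  command:
  - ./cluster-autoscaler
  - --cloud-provider=aws
  - --nodes=2:10:my-node-asg   # min:max:ASG-name (hypothetical ASG)
```

Keeping these per-layer settings consistent, e.g. HPA replica bounds versus ASG node bounds, is exactly the kind of cross-layer bookkeeping that is easy to get wrong by hand.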

Magalix AI also makes sure that the underlying infrastructure can keep up with the needs of your applications and containers. Magalix supports resource-only or KPI-based (Key Performance Indicator) optimization. In resource-only optimization, Magalix analyzes and optimizes resource usage without targeting a specific metric. In KPI-based optimization, Magalix AI models optimize the application's resources and underlying infrastructure to keep the application within its performance thresholds, focusing infrastructure and container optimization on achieving pre-set business goals.
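In Kubernetes terms, resource-only optimization operates on the standard per-container requests and limits. The fragment below shows the knobs involved; the values are purely illustrative, not Magalix recommendations.

```yaml
# Per-container resource settings inside a pod spec: the values that
# resource-only optimization would tune. Numbers are hypothetical.
resources:
  requests:            # what the scheduler reserves for the container
    cpu: 250m
    memory: 256Mi
  limits:              # hard caps enforced at run time
    cpu: 500m
    memory: 512Mi
```

Setting requests too high wastes capacity across every replica; setting limits too low causes throttling or OOM kills, which is why these values benefit from continuous, data-driven adjustment.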

Focus on the Right Stuff

DevOps engineers deserve better than what current tools offer. Developers and operations engineers should dedicate their time to the important tasks that need their intelligence, not to repetitive and laborious ones. Magalix is built to manage the variables that tie scalability and performance management to your business goals.


What's Next

Get started with just one command and see recommendations in a few hours

Connect Your First Cluster
How Magalix AI Works?

