If you have a Pro subscription, you can turn on Autopilot for the whole cluster or for selected namespaces, based on your preferences. The Autopilot feature is disabled on the free plan. Please visit our subscription plans page to learn more about the features of each plan.
Yes. Note, however, that turning it off while a decision is being executed will not stop that decision from completing.
Yes, you can control the Autopilot at a more granular level.
We'd love to hear your feedback! Please leave your comments and feature requests at our product support forum.
We currently offer a simple subscription model for the Pro plan: you are billed monthly per core, regardless of the number of connected clusters. If you are interested in the Enterprise model, which offers an on-premise version of our AI optimization, please call us for more details.
The Enterprise version of Magalix is a complete end-to-end solution that runs behind your firewalls. Your data doesn't leave your network, and Magalix runs fully isolated.
It is based on a yearly licensing model. Please give us a call for a demo and more details.
For the beta, we provide a best-effort SLA. However, we have a roadmap to exceed the industry-standard SLA using extensive redundancy and AI-powered capacity management algorithms.
Our machine learning algorithms work at multiple levels. They monitor the interactions between your application's containers to draw a conceptual graph of dependencies and create a profile for each container. They also learn from the application's KPIs and how they impact the behavior of the rest of the application. We fuse this knowledge with recurring usage patterns to identify the best scalability and cost-saving decisions for the application. Think of a smart DevOps engineer going through this process every few minutes and applying the resulting adjustments.
No. You can run the Magalix agent in read-only mode. Our prediction models will learn about your application and provide you with many insights into its usage patterns, along with the decisions that would have been taken if the AI had been in control. You can switch it into complete or partial autopilot mode at any time, and switch it back into advisory mode at any time.
All of our clusters' drives, application data, and persistent storage are encrypted. We also encrypt application identification data within these clusters, and we encrypt and scrub any data that could reveal an application's runtime identity or its containers. Applications are isolated in their own virtualized networks, which also makes it impossible for other applications to sniff or read data outside their designated networks.
Magalix monitors applications on a second-by-second basis if you are using the Container-as-a-Service model. We also offer different accuracy and precision tiers to monitor and act on your application's workload patterns: Magalix can track everything from sub-hour workload patterns all the way to monthly patterns.
Auto-scaling groups are static rules for scaling your infrastructure or your containerized application. These rules are based on the developer's understanding of workloads and application behavior. This approach has two disadvantages: (1) auto-scaling rules are designed to anticipate worst-case scenarios and workloads, which means over-provisioned infrastructure most of the time; and (2) they assume static user and application behavior, but every time a new version of a service or container is deployed, its behavior and resource usage change, so auto-scaling rules quickly become obsolete.
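For reference, here is what such a static rule looks like as a Kubernetes HorizontalPodAutoscaler (the deployment name and thresholds below are purely illustrative). The replica bounds and CPU target are fixed by hand, which is exactly the worst-case guesswork described above:

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web                  # hypothetical workload name
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web
  minReplicas: 2             # hand-picked floor
  maxReplicas: 10            # hand-picked worst-case ceiling
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 80   # static threshold, chosen once by a developer
```

Once the workload's behavior shifts, every hand-picked number in this manifest has to be revisited by a human.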
Magalix continuously profiles your containers and services to understand the impact of workloads on the different components of the application. Scalability decisions and rules are continuously evaluated and always correlated with how the application is being used and how its components are using infrastructure resources. Doing this with existing technologies would require a developer to constantly monitor an average of 10 to 30 metrics per application at different times, and to review and recreate scalability rules every 1 to 5 minutes!
The prediction model starts generating predictions from just a few data points. The longer it runs, the higher the model's confidence in the predictions it generates. We display a confidence level with each application to show how confident the model is in its predictions. For short-term predictions, the confidence level typically rises very quickly.
In the ideal case, Magalix prediction models can predict up to 100 points into the future. How far that reaches in time depends on the precision of your metrics. For example, 100 points at 10-second precision gives 100 × 10 seconds = 1,000 seconds, around 16–17 minutes into the future. If you are using lower-precision metrics, such as 5 minutes, you can see further ahead: 100 × 5 = 500 minutes, around 8 hours and 20 minutes.
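The arithmetic above can be sketched in a few lines of Python (the 100-point ceiling is the figure quoted above; the function name is our own):

```python
def prediction_horizon_minutes(points: int, precision_seconds: int) -> float:
    """Prediction horizon = number of predicted points x metric precision."""
    return points * precision_seconds / 60

# 100 points at 10-second precision: 1,000 seconds, roughly 16.7 minutes ahead
ten_second_horizon = prediction_horizon_minutes(100, 10)

# 100 points at 5-minute (300-second) precision: 500 minutes, about 8h20m ahead
five_minute_horizon = prediction_horizon_minutes(100, 300)
```

In short, coarser metrics trade fine-grained detail for a longer look ahead.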
If it is not possible to find a repeatable usage pattern, Magalix AI starts acting in a reactive mode. It can still identify scalability decisions with relatively high precision, but your application may suffer from occasional performance hiccups. We are continuously improving our models to keep a sufficient resource buffer for unexpected spikes in your application until the AI catches up with usage surprises.
Yes, we use SSL with 1024-bit encryption.
No, at this point we do not.
Scalability decisions will kick in 2-4 hours after a cluster is connected. It takes time to identify usage patterns and start generating scalability decisions for your cluster.
Yes. If resources are correctly configured and the container is running optimally, the decision-maker doesn't generate any decisions.
Our AI models currently check resource allocation every hour. This doesn't mean that a scalability decision is generated every hour as well. If the resource allocation matches the current and expected usage patterns, or a decision was already generated by a previous scan, Magalix will not generate any additional decisions.
Some generated decisions are recommendations to set CPU/memory requests and limits. So even if the workload is static and nothing changes, we still create a decision/recommendation to set CPU/memory request and limit values. If pods do not have any of these values set, Magalix AI suggests values for you in the form of decisions. To see which pods don't have limits and requests, go to the resources scatter chart in your cluster's dashboard and click on
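For context, CPU/memory requests and limits are set per container in the pod spec. A container missing the `resources` block below is the kind of pod these decisions target (the container name, image, and values here are illustrative, not recommendations):

```yaml
containers:
- name: web                # hypothetical container name
  image: nginx:1.25
  resources:
    requests:              # baseline the scheduler reserves for the container
      cpu: "250m"
      memory: "256Mi"
    limits:                # hard cap enforced at runtime
      cpu: "500m"
      memory: "512Mi"
```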
Get started with just one command and see recommendations in a few hours
|Connect Your First Cluster|