
MEC or Multi-access Edge Computing
MEC is an evolution of the edge computing idea. It was originally known as Mobile Edge Computing, but has since been renamed Multi-access Edge Computing. The main idea is that some data is processed locally while the rest is passed to the cloud for processing. Devices generate data, that data flows through data pipelines, and applications then apply business logic to it.

There are “System level”, “Host level”, and “Network level” components. Another way of looking at them is as management, cluster/host, and network components.
The system level provides overarching, centrally controlled management of the applications. This allows MEC applications to be deployed and treated as functions operating on data streams. The location of a host or cluster is considered a “Service Domain”. The network is a combination of local and remote elements. Locally, data streams are aggregated and can then be processed by the functions in the local service domain, or forwarded to another service domain for processing. Those other service domains can be in another datacenter or in the cloud.
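To make the local-versus-remote split concrete, here is a minimal sketch of that routing decision. This is illustrative only, not a KPS API: the function names and the list of locally handled topics are assumptions.

```python
# Illustrative sketch: route an aggregated data stream either to a
# function in the local service domain or to a remote service domain.
LOCAL_TOPICS = {"temperature", "vibration"}  # streams this site can handle (assumed)

def route(message: dict) -> None:
    if message["topic"] in LOCAL_TOPICS:
        process_locally(message)             # function in the local service domain
    else:
        forward_to_service_domain(message)   # e.g. another datacenter or the cloud

def process_locally(message: dict) -> None:
    print(f"local processing: {message}")

def forward_to_service_domain(message: dict) -> None:
    print(f"forwarding upstream: {message}")

route({"topic": "temperature", "value_c": 71.3})
```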
The MEC applications might employ some level of AI for inferencing, applying a function to the real-time data as it is collected. The models those inferencing engines use are trained on large amounts of data, so the data pipeline can continue to another service domain that has the machine learning analytics capabilities for continuous training and improvement of the models. As the models are updated, they are pushed back down to the inferencing engines.
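The loop between inferencing at the edge and training upstream can be sketched roughly as below. The threshold "model" and the update mechanism are stand-ins, assumed purely for illustration; in practice the ML service domain would publish a real trained artifact.

```python
# Sketch of the inference loop: apply the current model to real-time
# readings, and swap in a new model when the training service domain
# publishes an update. The lambda "models" are placeholders.
class InferencingEngine:
    def __init__(self, model):
        self.model = model

    def infer(self, reading: float) -> str:
        return self.model(reading)

    def update_model(self, new_model) -> None:
        # Called when the ML service domain pushes a retrained model.
        self.model = new_model

engine = InferencingEngine(lambda t: "anomaly" if t > 70.0 else "normal")
print(engine.infer(69.0))                                    # -> normal
engine.update_model(lambda t: "anomaly" if t > 68.0 else "normal")
print(engine.infer(69.0))                                    # -> anomaly after the update
```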

Here is an example of this MEC architecture in use in a manufacturing context. Sensor data from PLC controllers, SCADA systems, etc. is collected. The data is monitored in real time and custom dashboards are presented locally. Based on the defined business logic, alternate machine/controller profiles may need to be loaded if there are variances in the sensor data. For example, an increase in temperature may cause manufacturing process failures unless it is accounted for or addressed. The inferencing engine could do this automatically, or provide an indication on a dashboard suggesting preventive maintenance to avoid future impact. The output from the inferencing engines, as well as the raw data, can roll up to another service domain for longer retention, analysis, and global management of multiple edge sites.
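A sketch of that business logic might look like the following. The baseline, thresholds, profile name, and emit() helper are all hypothetical; the point is the pattern: react locally, and roll the raw data up regardless.

```python
# Hypothetical business logic for the manufacturing example above.
BASELINE_TEMP_C = 60.0
PROFILE_LIMIT = 5.0   # variance that triggers an alternate controller profile
WARN_LIMIT = 3.0      # variance that triggers a preventive-maintenance alert

def on_sensor_reading(temp_c: float) -> None:
    variance = temp_c - BASELINE_TEMP_C
    if variance > PROFILE_LIMIT:
        emit("controller", {"action": "load_profile", "profile": "high-temp"})
    elif variance > WARN_LIMIT:
        emit("dashboard", {"alert": "temperature drift; schedule preventive maintenance"})
    # Raw readings roll up to another service domain either way.
    emit("upstream", {"temp_c": temp_c})

def emit(channel: str, payload: dict) -> None:
    print(channel, payload)

on_sensor_reading(66.4)  # loads the alternate profile and forwards the reading
```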

So how does Nutanix do this?
Well, there are always a number of ways and components involved, but the simplest answer is Nutanix Karbon Platform Services, or KPS.
KPS provides a simple management plane (delivered as SaaS) in which you specify your service domains, data sources, functions, business logic, and machine learning models. It works across different clouds and on-prem, with virtual machines, Kubernetes/containers, and even bare metal. It's the glue for complex application architectures, both cloud native and traditional.
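To make "functions" concrete: in KPS, a function is a small script attached to a data pipeline. The sketch below follows the main(ctx, msg) / ctx.send() pattern used by the KPS (formerly Xi IoT) Python runtime, but treat the exact signature as an assumption and confirm it against the current docs; the "site" field is purely illustrative.

```python
import json

# Sketch of a KPS data pipeline function (Python runtime).
# main(ctx, msg) / ctx.send() follow the KPS / Xi IoT pattern;
# verify the exact runtime contract against the current docs.
def main(ctx, msg):
    payload = json.loads(msg)       # msg arrives from the pipeline's input
    payload["site"] = "edge-01"     # illustrative enrichment step
    ctx.send(json.dumps(payload))   # forward to the next pipeline stage
```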
Some neat aspects of KPS:
- KPS can provide management of Kubernetes clusters running in Nutanix Karbon, AWS, Azure, GCP, and others.
- GPUs are supported for inferencing at the edge.
- A Terraform provider is available to automate the deployment of a KPS service domain in the public cloud.
- KPS can manage service domains running on non-Nutanix VMware ESXi hosts.
Here is a graphic that provides a high-level overview.

If you want to dive a bit deeper into KPS, have a look at these articles and links:
- KPS Solution Brief
- Blog – Introducing KPS
- Blog – KPS Overview
- Blog – From there to here, from here to there, Containers are everywhere!
- Blog – Ingress and Load Balancing in Karbon Platform Services
- Blog – Data Interfaces on Nutanix Karbon Platform Services
- Tutorial – Deploy a Stateful MySQL Application on Karbon Platform Services
- Tutorial – Deploy a Kubernetes Application on Karbon Platform Services
- Product page – https://www.nutanix.com/products/karbon/platform-services