Tanzu Mission Control is part of VMware Tanzu. VMware Tanzu is a family of products and services for modernizing your applications and infrastructure with a common goal: deliver better software to production, continuously.
You can read more about the Tanzu portfolio here.
In this blog post I will focus on the #Manage part of Tanzu with Tanzu Mission Control (TMC): a centralized management platform for consistently operating and securing Kubernetes infrastructure and modern applications across teams and clouds.
Why do we need it?
Tanzu Mission Control helps organizations overcome the challenge of managing a fleet of Kubernetes clusters on-premises, in the cloud, and from multiple vendors. More and more customers are leveraging services in multiple public clouds and, due to the laws of physics, they run their Kubernetes workloads close to those services. As a result, these customers find themselves operating many Kubernetes clusters of different vendors and flavors: cloud-managed offerings such as EKS, AKS, and GKE, and on-premises distributions such as TKG, PKS, OpenShift, and more. Each cluster then becomes a snowflake that the organization needs to operate; the infrastructure, identity providers, policies, security, monitoring mechanisms, and more each require a huge effort across public and private clouds, cloud-managed and self-managed offerings, and different vendors. At a time when organizations developing software are looking to reduce waste and automate operations, such a complicated landscape works against that goal. With Tanzu Mission Control, we give our customers the ability to operate Kubernetes across a diverse landscape of vendors and flavors in aggregate.
In a way, we are abstracting the complexities and allowing our customers to focus on the most important thing in Kubernetes — the applications and services that run on it.
Main screen of Tanzu Mission Control
Built on Open-Source
Using Kubernetes to Manage Kubernetes
Tanzu Mission Control (TMC) represents the continuation of our strategy of embracing open-source. With Tanzu Mission Control, we are leveraging open-source projects for operations including the management of Kubernetes itself. We leverage ClusterAPI for the cluster life cycle management, Velero for backup and restore, Sonobuoy for conformance tests, Contour for ingress, and Kubernetes itself.
What can we manage with it?
Tanzu Mission Control managed clusters are divided into two groups:
- Provisioned clusters – Kubernetes clusters that were provisioned by Tanzu Mission Control and whose lifecycle is fully managed by it. This category includes self-managed clusters on vSphere, AWS, and Azure. The mechanism for deploying and managing the cluster life cycle is ClusterAPI; you can read more about ClusterAPI here.
- Attached clusters – clusters that are attached to Tanzu Mission Control for operations management. We can attach and manage any conformant Kubernetes cluster, that is, a cluster that conforms to the community's best practices. Read more about conformant clusters here. Any Kubernetes flavor, whether self-managed or cloud-managed, can be attached to Tanzu Mission Control and have its operations managed, including EKS, AKS, GKE, PKS, Rancher, OpenShift (4.x), and TKG.
Tanzu Mission Control – Main Capabilities
Tanzu Mission Control manages the operations on attached clusters. In our example, we will attach a Microsoft AKS cluster to Tanzu Mission Control.
We will need to add the cluster to a cluster group and provide a name that is descriptive of the cluster.
We can create labels on any cluster or object inside Tanzu Mission Control; these labels can be used to drive automation and orchestration processes. Once we finish the registration process, Tanzu Mission Control outputs a YAML file that we need to apply to the cluster. This attaches the cluster to the service.
Once we apply the YAML file, all the objects required to control the cluster are created on it, and a registration call is sent to Tanzu Mission Control.
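In practice, the attach step is a single kubectl apply of the generated manifest. The sketch below illustrates the kind of objects such a manifest creates; it is a simplified, hypothetical example, not the actual file Tanzu Mission Control generates (the real manifest is unique per cluster and contains more objects):

```yaml
# Illustrative sketch only -- the real attach manifest is generated
# by Tanzu Mission Control for each cluster.
apiVersion: v1
kind: Namespace
metadata:
  name: vmware-system-tmc        # namespace the TMC agents run in
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: extension-manager        # illustrative agent name
  namespace: vmware-system-tmc
spec:
  replicas: 1
  selector:
    matchLabels:
      app: extension-manager
  template:
    metadata:
      labels:
        app: extension-manager
    spec:
      containers:
      - name: extension-manager
        image: example.registry/tmc/agent:latest  # illustrative image
```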
This is how it looks when a cluster is registered and healthy.
With Tanzu Mission Control, we can deploy self-managed Kubernetes clusters with an “easy” button on vSphere*, AWS and Azure* IaaS services (*roadmap). There are two options to deploy a cluster:
- Development cluster – Single control plane node in a single availability zone.
- Production cluster — Three control plane nodes in multiple availability zones.
The second step is to choose the number and type of workers for that cluster:
*the cluster in the example is created on AWS
At this stage, the cluster is deployed; we can see the EC2 instances that were created in the region where the user has permission to provision clusters:
In this example, there are several clusters provisioned in the same region, as seen in the central control plane:
In the picture above, the cluster “openso-aws-cluster” is running Kubernetes version 1.16.4. The info message attached to it indicates that we can upgrade the cluster to a newer version.
By clicking the Upgrade button, Tanzu Mission Control initiates an upgrade process. Production-type clusters are upgraded in a canary (seamless) fashion with no downtime for workloads; development-type clusters will experience some downtime during the upgrade because they have a single control plane node.
We can also increase or reduce the number of workers in the node pool tab of the cluster management view.
Changing the number of worker nodes resizes the cluster.
We can manage the operations of both provisioned and attached clusters. In the operations pane we can see the health state of the cluster components; CPU and memory consumption; the number of workloads, namespaces, and nodes and their respective health states; the version and infrastructure provider of the cluster; and more.
We can also see the health of the Tanzu Mission Control components.
We can view the Namespaces, Deployments, ReplicaSets, and Pods that are running, as well as their source YAML.
Identity and Access
Managing identity and access across many clusters is a hard task. With Tanzu Mission Control, we can operate this aspect in aggregate, reducing complexity and increasing the security of the entire Kubernetes environment. We can see the permission hierarchy for a cluster group and change the direct access policy on a group of clusters; this allows us to grant various DevOps teams access to the clusters they need without having to visit each cluster separately.
To enable a user or a team to access the cluster group, we need to create a direct access policy and choose the type of permission:
Adding the user to the direct access policy will create the related Kubernetes objects on those clusters.
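Under the hood, a direct access policy translates into standard Kubernetes RBAC objects on each cluster in the group. The sketch below is a hypothetical illustration of such a binding; the names, the user, and the exact object layout are assumptions, since the real objects are created and managed by Tanzu Mission Control itself:

```yaml
# Illustrative only -- Tanzu Mission Control creates and manages
# the real RBAC objects; nothing here is applied by hand.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: tmc-devops-team-edit            # illustrative name
subjects:
- kind: User
  name: devops-lead@tanzuworld.com      # illustrative user
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: edit                            # built-in Kubernetes role
  apiGroup: rbac.authorization.k8s.io
```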
Network policies, image registry policies, and *pod security policies enable you to create consistent policies across any managed cluster, regardless of vendor or cloud.
In this example, you can see the configuration of an image registry policy that allows the user to pull images only from harbor.tanzuworld.com and not from docker.io, reducing exposure to vulnerable images, and the application of that policy to a cluster group.
If we try to pull an image from a different image repository, the pod won’t be scheduled, and kubectl describe on the pod will show the error message:
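A simple way to exercise the policy is to apply a pod spec that references a disallowed registry. The manifest below is a hypothetical test case, assuming the registry policy from the example above is in effect:

```yaml
# Hypothetical test pod: its image comes from docker.io, which the
# example registry policy does not allow.
apiVersion: v1
kind: Pod
metadata:
  name: registry-policy-test
spec:
  containers:
  - name: nginx
    # docker.io is not in the allowed registry list, so this pod is
    # blocked; pointing the image at harbor.tanzuworld.com instead
    # would satisfy the policy.
    image: docker.io/library/nginx:latest
```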
Conformance and compliance checks
With the Sonobuoy plugin in Tanzu Mission Control, we can inspect any cluster that is managed by Tanzu Mission Control.
Deep inspection is the overall conformance test for a Kubernetes cluster; the key message about conformance is:
“In order to better serve these goals, the Kubernetes community (under the aegis of the CNCF) runs a Kubernetes Software Conformance Certification program. All vendors are invited to submit conformance testing results for review and certification by the CNCF, which formally certifies conformant implementations.”
To initiate an inspection, create a “New Inspection” task:
Once the inspection is finished, we can check the results by clicking the “View Inspection” link:
As you can see, there are 278 different parameters checked as part of the conformance test. These conformance tests embody the best practices developed by the open-source community. You can also create your own checks, tailored to your organization’s policies, by leveraging the Sonobuoy open-source project’s plugins.
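Custom checks use Sonobuoy's plugin format: a plugin definition names a driver and a container that runs the checks and reports results. Below is a minimal sketch; the plugin name, image, and command are hypothetical, stand-ins for whatever checks your organization's policies require:

```yaml
# Hypothetical custom plugin definition in Sonobuoy's plugin format.
sonobuoy-config:
  driver: Job                     # run once as a Job (vs. a DaemonSet on every node)
  plugin-name: org-policy-check   # hypothetical plugin name
  result-format: junit
spec:
  name: plugin
  image: example.registry/org-policy-check:latest   # hypothetical check image
  command: ["./run-checks.sh"]                      # script that emits the results
```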
A Workspace is a multi-cluster, multi-namespace object, and is one of the main logical constructs in Tanzu Mission Control that allows us to further abstract Kubernetes operations. With workspaces we can group several namespaces from any cluster anywhere and apply identity policies, image registry policies, network policies and backup and restore policies in aggregate.
This is where we go up the stack from the cluster to address the application side of things. We create a workspace for an application that may span multiple clusters. There are many reasons customers need to deploy a multi-cluster application, such as tenancy or the separation of workload types or duties. While clusters should not depend on other shared application clusters, they should stay consistent on aspects such as security, identity, and more; workspaces help achieve that consistency.
In the example below, we have created a workspace for the dev team of the “Hello” app on two EKS clusters, one for dev and one for prod. Now we can give the dev team access to those namespaces on both clusters:
Or, as in the previous example, we can make sure the dev team pulls images only from the organization’s internal registry, where the organization maintains the images and scans them for vulnerabilities (see ‘*.harbor.tanzuworld.com’ below).
With Tanzu Mission Control, we can create a single pane of glass, and leverage Kubernetes and the open-source ecosystem around it to operate and manage any conformant Kubernetes cluster. Tanzu Mission Control and the workspace construct allow us to switch into the application context and manage applications across multi-cluster environments, while providing operational consistency.
Tanzu Mission Control allows you to operate and manage cloud-native applications across any vendor, with Kubernetes as the common ground.
Oren Penso is a technical lead for Emerging Solutions and Modern Application Platforms at VMware. Twitter: @Openso, Email: openso@VMware.com