This document describes the architecture of a9s Kubernetes and the available plans.
The a9s Kubernetes data service offers both clustered and single-instance plans.
The basic a9s Kubernetes plan offers a plain, single Kubernetes instance on demand, and the user can manage the service with the kubectl CLI tool.
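As a sketch of the kind of workload a user could deploy with kubectl, a minimal Deployment manifest might look like the following (the resource name and image are illustrative examples, not part of the service):

```yaml
# demo-deployment.yml -- applied with: kubectl apply -f demo-deployment.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo            # hypothetical name
spec:
  replicas: 2
  selector:
    matchLabels:
      app: demo
  template:
    metadata:
      labels:
        app: demo
    spec:
      containers:
        - name: demo
          image: nginx:1.25   # illustrative image
          ports:
            - containerPort: 80
```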
The Kubernetes plans also use the NGINX Ingress Controller as the ingress controller; it acts as the gateway between the external network and the local private virtual network, which is only accessible inside the deployment. The plans integrate the Linkerd service mesh for observable, reliable, and secure handling of service-to-service communication.
The masters keep data in sync using an etcd cluster and create a virtual network using flannel for container-to-container communication; the gateway of this virtual network is the ingress controller.
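To illustrate the role of the ingress controller as the gateway into the private virtual network, a hypothetical Ingress resource routing an external hostname to an internal Service could look like this (hostname and Service name are assumptions for the example):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: demo
spec:
  ingressClassName: nginx       # handled by the NGINX Ingress Controller
  rules:
    - host: demo.example.com    # hypothetical external hostname
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: demo      # internal Service, reachable only on the private virtual network
                port:
                  number: 80
```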
The kube-apiserver provides a frontend for etcd, which shares the state across all nodes of the a9s Kubernetes deployment. The kube-controller-manager manages the whole lifecycle of the containers, including the replication controller, endpoints controller, namespace controller, and serviceaccounts controller; its communication goes through the kube-apiserver. The kube-scheduler is responsible for making sure new containers get allocated to the best possible node. It also communicates with the kube-apiserver.
Current plans vary the number of masters from 1 to 3, and the master nodes can scale both horizontally and vertically.
Each process running on a master handles the cluster workload that executes on the worker nodes. Each worker node executes a Docker daemon, which is controlled by the kubelet. Docker controls the containers, which are represented as part of a Pod, while the kubelet communicates with the masters via the kube-apiserver and controls the lifecycle of the Docker containers on the hosting node.
kube-proxy provides networking services. It handles network proxying between the Services and the external network. We leverage this service for the ingress controller, which is described in the routing section below.
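The traffic that kube-proxy routes is defined by Service objects; a minimal ClusterIP Service (names are illustrative assumptions) that kube-proxy would load-balance across the Pods of a deployment might look like this:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: demo
spec:
  type: ClusterIP       # virtual IP programmed by kube-proxy on every worker node
  selector:
    app: demo           # hypothetical Pod label selecting the target Pods
  ports:
    - port: 80          # port exposed by the Service
      targetPort: 80    # port on the target Pods
```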
The number of workers in the current plans varies from 1 to 5, but it is possible to scale both vertically and horizontally.
a9s Kubernetes routing works the same way as described in the a9s Router Architecture document until the request hits the ingress controller, which then forwards the connection to the correct target Pod using the virtual network created between the worker nodes.
The a9s Router only routes HTTPS connections and will redirect all HTTP connections to HTTPS.
a9s Kubernetes uses internal tenant isolation based on the a9s Node Guard. The tenant isolation only allows incoming requests from the cluster nodes themselves (master and worker nodes) and from the a9s Router, which handles incoming requests from the outside.
The table below shows which ports are isolated on the Kubernetes cluster nodes: