a9s Kubernetes Architecture
This document describes the architecture of a9s Kubernetes and the different plans.
The a9s Kubernetes data service offers instances with and without Rancher, in both clustered and single plans.
The basic a9s Kubernetes plan offers a plain, single Kubernetes cluster on demand; the user manages the service with the kubectl CLI tool.
The plans with Rancher include clustered and non-clustered variants, which can be managed either through the Rancher dashboard or with the kubectl CLI.
The Rancher and Kubernetes plans also use the NGINX Ingress Controller as the ingress controller; it acts as the gateway between the external network and the local private virtual network, which is only accessible inside the deployment. The plans integrate the Linkerd service mesh for observable, reliable, and secure handling of service-to-service communication.
The masters keep data in sync using an etcd cluster and create a virtual network with flannel for container-to-container communication; the gateway of this virtual network is the ingress controller.
The kube-apiserver provides a frontend for etcd, which shares the state across all nodes of the a9s Kubernetes deployment.
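The pattern described here — control-plane components sharing cluster state only through a single API frontend, with etcd as the backing store — can be illustrated with a toy in-memory sketch. This is purely illustrative; the names and the watch mechanism below are simplifications, and the real API server and etcd are vastly more capable.

```python
# Illustrative sketch only: a toy "apiserver" fronting a key-value store,
# mimicking how control-plane components share state through one frontend.
class ToyAPIServer:
    def __init__(self):
        self._store = {}     # stands in for etcd
        self._watchers = []  # components watching for state changes

    def put(self, key, value):
        self._store[key] = value
        for callback in self._watchers:
            callback(key, value)  # notify watchers, like an etcd watch

    def get(self, key):
        return self._store.get(key)

    def watch(self, callback):
        self._watchers.append(callback)


api = ToyAPIServer()
seen = []
api.watch(lambda k, v: seen.append((k, v)))  # a "controller" watching state
api.put("pods/web-1", {"node": "worker-0"})  # another component writes state
print(api.get("pods/web-1"))                 # -> {'node': 'worker-0'}
print(seen)                                  # -> [('pods/web-1', {'node': 'worker-0'})]
```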
The kube-controller-manager manages the whole lifecycle of the containers, running the replication controller, endpoints controller, namespace controller, and serviceaccounts controller. It communicates with the cluster through the kube-apiserver.
The kube-scheduler is responsible for making sure each new container gets allocated to the best possible node. It also communicates with the cluster through the kube-apiserver.
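As a concrete illustration of "allocated in the best possible node", here is a minimal scoring loop in the spirit of a scheduler. The scoring by free CPU alone is a hypothetical simplification; the real kube-scheduler filters and scores nodes on many more criteria.

```python
# Illustrative only: pick the node with the most free capacity,
# a drastically simplified stand-in for kube-scheduler's filter/score phases.
def schedule(pod_request, nodes):
    """nodes: dict of node name -> free CPU (millicores)."""
    candidates = {n: free for n, free in nodes.items() if free >= pod_request}
    if not candidates:
        return None  # no node fits: the pod stays unschedulable
    return max(candidates, key=candidates.get)  # "best" = most free CPU


nodes = {"worker-0": 500, "worker-1": 1500, "worker-2": 800}
print(schedule(250, nodes))   # -> worker-1
print(schedule(2000, nodes))  # -> None
```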
Current plans vary the number of masters from 1 to 3; the master nodes can also be scaled horizontally and vertically.
Each process running on a master handles the cluster workload that executes on the worker nodes. Each worker node executes a Docker daemon, which is controlled by the kubelet. Docker controls the containers, which are represented as part of a Pod, while the kubelet communicates with the masters via the kube-apiserver and controls the lifecycle of the Docker containers on the hosting node.
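The control loop on a worker — compare the Pods assigned to the node against the containers actually running, then start or stop containers to close the gap — can be sketched as follows. This is illustrative only; the real kubelet also handles probes, volumes, images, and much more.

```python
# Illustrative reconciliation loop: desired state (from the apiserver)
# vs. actual state (from the container runtime).
def reconcile(desired, actual):
    """Return the (start, stop) sets needed to make actual match desired."""
    to_start = desired - actual  # Pods scheduled here but not running yet
    to_stop = actual - desired   # containers running with no matching Pod
    return to_start, to_stop


desired = {"web-1", "web-2", "cache-1"}  # hypothetical Pod names
actual = {"web-1", "old-job"}
start, stop = reconcile(desired, actual)
print(sorted(start))  # -> ['cache-1', 'web-2']
print(sorted(stop))   # -> ['old-job']
```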
kube-proxy provides networking services. It handles network proxies between the services and the external network. We leverage this service for the ingress controller, which is detailed below.
The number of workers in the current plans varies from 1 to 5, but it is possible to scale both vertically and horizontally.
a9s Kubernetes routing works the same way as described in the a9s Router Architecture document until it hits the ingress controller; from there, the connection is forwarded to the correct target Pod (the Rancher Dashboard or a Workload Pod) using the virtual network created between the worker nodes.
The a9s Router only routes HTTPS connections and will redirect all HTTP requests to HTTPS.
After spinning up an a9s Kubernetes service instance with Rancher, a domain is generated; it is accessible via either cf service or a service key. It is possible to specify the domain when creating the service instance.
When accessing this domain, the a9s Router compares it, finds the matching deployment, and forwards the connection to a worker node on port 31390; from there it is forwarded to the ingress controller via a Kubernetes NodePort service.
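The routing step described above — match the incoming domain to a deployment, pick a worker node, forward to port 31390 — can be sketched as follows. The registry structure, domain names, and addresses here are hypothetical, not the actual a9s Router implementation.

```python
# Hypothetical sketch of the routing step: match the request's Host header
# to a deployment, then forward to one of its workers on the node port.
NODE_PORT = 31390  # port forwarded on to the ingress controller

deployments = {  # made-up registry: instance domain -> worker node IPs
    "d123456.a9s.example": ["10.0.1.10", "10.0.1.11"],
}


def route(host):
    for domain, workers in deployments.items():
        # Match the main domain and any workload sub-domain of it.
        if host == domain or host.endswith("." + domain):
            return (workers[0], NODE_PORT)  # a real router would load-balance
    return None


print(route("d123456.a9s.example"))        # -> ('10.0.1.10', 31390)
print(route("myapp.d123456.a9s.example"))  # -> ('10.0.1.10', 31390)
print(route("unknown.example.com"))        # -> None
```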
The main domain is configured to target the Rancher Pod, and it has the following format:
When the ingress controller receives the connection, it forwards it to the Rancher Pod. For the Workload Pods, it is possible to specify a sub-domain for the Pod, following the format:
In this example,
myapp-domain.a9s.de is the route for a workload Pod. The Rancher domain for this
example would be
The connection between the ingress controller and the Pod uses the internal virtual network created with flannel between the worker nodes.
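The ingress controller's dispatch — the main domain to the Rancher Pod, sub-domains to Workload Pods — amounts to a host-to-backend lookup, sketched below. This is illustrative only: in practice this is expressed as NGINX Ingress rules, and every name here besides myapp-domain.a9s.de (taken from the example above) is made up.

```python
# Illustrative host-based dispatch inside the ingress controller.
# Backend names and the main domain are hypothetical.
MAIN_DOMAIN = "example-instance.a9s.de"  # made-up main (Rancher) domain

ingress_rules = {
    MAIN_DOMAIN: "rancher-pod",
    "myapp-domain.a9s.de": "myapp-workload-pod",  # workload route from the text
}


def dispatch(host):
    # Unknown hosts fall through to a default backend, as in NGINX Ingress.
    return ingress_rules.get(host, "default-backend-404")


print(dispatch(MAIN_DOMAIN))            # -> rancher-pod
print(dispatch("myapp-domain.a9s.de"))  # -> myapp-workload-pod
print(dispatch("other.a9s.de"))         # -> default-backend-404
```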
a9s Kubernetes uses an internal Tenant Isolation based on the a9s Node Guard. The Tenant Isolation only allows incoming requests from the cluster nodes themselves (master and worker nodes) and from the a9s Router, which handles incoming requests from the outside.
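The Tenant Isolation rule amounts to a source-address allowlist, sketched here. This is illustrative only: the a9s Node Guard enforces this at the node level rather than in application code, and all addresses below are made-up examples.

```python
# Illustrative allowlist check mirroring the Tenant Isolation rule.
# All addresses are hypothetical examples.
CLUSTER_NODES = {"10.0.1.10", "10.0.1.11", "10.0.2.10"}  # masters & workers
ROUTERS = {"10.0.0.5"}                                   # a9s Router instances


def allowed(source_ip):
    """Accept traffic only from cluster nodes or the a9s Router."""
    return source_ip in CLUSTER_NODES or source_ip in ROUTERS


print(allowed("10.0.1.11"))    # -> True  (worker node)
print(allowed("10.0.0.5"))     # -> True  (a9s Router)
print(allowed("203.0.113.7"))  # -> False (arbitrary external host)
```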
The table below shows which ports are isolated on the Kubernetes cluster nodes: