a9s Kubernetes Architecture

This document describes the architecture of a9s Kubernetes and the different plans.

Overview

The a9s Kubernetes data service offers instances with and without Rancher, in both clustered and single plans.

The basic a9s Kubernetes plan offers pure, single Kubernetes on demand; the user manages the service with the kubectl CLI tool.
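
For example, assuming the kubeconfig file has been obtained from the service instance's credentials (the file name below is a placeholder), a first connection could look like this:

    # Point kubectl at the kubeconfig obtained from the service credentials
    # (the file name is a placeholder).
    export KUBECONFIG=./a9s-kubernetes-kubeconfig.yml

    # Verify access and list the cluster nodes.
    kubectl cluster-info
    kubectl get nodes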

The plans with Rancher include clustered and non-clustered variants, which can be managed either through the Rancher dashboard or the kubectl CLI.

The Rancher and Kubernetes plans also use the NGINX Ingress Controller as the ingress controller; it acts as the gateway between the external network and the private virtual network that is only accessible inside the deployment. The plans also integrate the Linkerd service mesh for observable, reliable, and secure handling of service-to-service communication.
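
As a rough sketch, the following kubectl commands can be used to locate the ingress controller and Linkerd components; the namespaces they run in are not specified here and may differ per deployment:

    # Find the NGINX Ingress Controller and Linkerd Pods across all namespaces.
    kubectl get pods --all-namespaces | grep -E 'ingress|linkerd'

    # Show the service that exposes the ingress controller.
    kubectl get svc --all-namespaces | grep -i ingress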

a9s Kubernetes Cluster

The masters keep their data in sync using an etcd cluster and create a virtual network with flannel for container-to-container communication; the ingress controller is the gateway of this virtual network. The kube-apiserver provides a frontend for etcd, which shares the cluster state across all nodes of the a9s Kubernetes deployment. The kube-controller-manager manages the whole lifecycle of the containers, running the replication controller, endpoints controller, namespace controller, and serviceaccounts controller; its communication goes through the kube-apiserver.

The kube-scheduler is responsible for making sure each new container gets allocated to the best possible node. It also communicates through the kube-apiserver.
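
A quick way to see these control plane components and to confirm the kube-apiserver is reachable is sketched below; whether the components are visible as Pods in the kube-system namespace depends on how the deployment runs them, so treat that as an assumption:

    # List the control plane components if they are visible as Pods in kube-system.
    kubectl -n kube-system get pods

    # Every kubectl call goes through the kube-apiserver; this checks its health endpoint.
    kubectl get --raw /healthz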

Current plans vary the number of masters from 1 to 3, but this number of nodes can be scaled both horizontally and vertically.

a9s Kubernetes Cluster - Masters Communication

The processes running on the masters manage the workload that executes on the worker nodes. Each worker node runs a Docker daemon, which is controlled by the kubelet process. Docker controls the containers, which are represented as part of a Pod, while the kubelet process communicates with the masters via the kube-apiserver and controls the lifecycle of the Docker containers on its node.
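
The node and Pod views below illustrate this relationship using standard kubectl output:

    # Show each node together with its container runtime and kubelet version.
    kubectl get nodes -o wide

    # Show all Pods together with the node that hosts them.
    kubectl get pods --all-namespaces -o wide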

The kube-proxy provides networking services; it handles the network proxying between the services and the external network. The ingress controller makes use of this component, as detailed in the Routing section.
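
A minimal sketch for checking that kube-proxy runs on the nodes; the assumption here is that it is visible as a Pod named kube-proxy in the kube-system namespace, which may differ per deployment:

    # List the kube-proxy Pods and the nodes they run on.
    kubectl -n kube-system get pods -o wide | grep kube-proxy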

a9s Kubernetes Cluster - Master-Worker Communication

The number of workers in the current plans varies from 1 to 5, but it is possible to scale both vertically and horizontally.

Routing

a9s Kubernetes routing works the same way as described in the a9s Router Architecture document until the connection reaches the ingress controller, which then forwards it to the correct target Pod (the Rancher dashboard or a workload Pod) using the virtual network created between the worker nodes.

The a9s Router only routes HTTPS connections and automatically redirects all HTTP traffic to HTTPS.
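
For example, a plain HTTP request against a (placeholder) instance domain should be answered with a redirect to the HTTPS endpoint:

    # The hostname is a placeholder; the response is expected to be a
    # redirect (e.g. 301) to the corresponding HTTPS URL.
    curl -I http://myapp-domain.a9s.de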

Rancher

After spinning up an a9s Kubernetes service instance with Rancher, a domain is generated; it can be retrieved either via cf service or a service key. It is also possible to specify the domain when creating the instance.
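
A hedged example using the cf CLI is shown below; the service name, plan name, instance name, and the "domain" parameter key are assumptions and may differ in your marketplace:

    # Create an instance with Rancher and (optionally) a custom domain.
    cf create-service a9s-kubernetes kubernetes-rancher my-k8s -c '{"domain": "example.a9s.de"}'

    # Expose the generated domain and credentials through a service key.
    cf create-service-key my-k8s my-key
    cf service-key my-k8s my-key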

When this domain is accessed, the a9s Router matches it against the known deployments, finds the correct one, and forwards the connection to a worker node on port 31390, where it is then forwarded to the ingress controller via kube-proxy.
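
To see which service is exposed on that NodePort, a lookup like the following can be used (sketch only):

    # Find the service behind NodePort 31390 (the ingress controller, per the description above).
    kubectl get svc --all-namespaces | grep 31390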

The main domain is configured to target the Rancher Pod and has the format rancher-<domain>. When the ingress controller receives the connection, it forwards it to the Rancher Pod. For workload Pods, it is possible to specify a subdomain for the Pod following the format *<domain>.

Kubernetes Routing

In this example, myapp-domain.a9s.de is the route for a workload Pod. The Rancher domain for this example would be rancher-domain.a9s.de.
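
A workload route like this can be created with an Ingress resource; the sketch below uses kubectl create ingress with placeholder names for the Ingress object, the backing Service, and its port:

    # Hypothetical Ingress routing myapp-domain.a9s.de to the workload behind myapp-svc:80.
    kubectl create ingress myapp \
      --class=nginx \
      --rule="myapp-domain.a9s.de/*=myapp-svc:80"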

The connection between the ingress controller and the Pod uses an internal virtual network which is configured as 10.200.0.0/16.

Tenant Isolation

a9s Kubernetes uses an internal Tenant Isolation based on the a9s Node Guard. The Tenant Isolation only allows incoming requests from the cluster nodes themselves (Master and Worker nodes) and from the a9s Router, which handles incoming requests from the outside.

The table below shows which ports are isolated on the Kubernetes cluster nodes:

Node       Port
Master     2379
Master     2380
Master     8443
Master     10251
Master     10252
Master     10257
Master     10259
Worker     10250
Worker     10256
Worker     31380
Worker     31390