a9s Kubernetes Architecture

This document describes the architecture of a9s Kubernetes and its different service plans.

Overview

The a9s Kubernetes data service offers both clustered and single-node plans.

The basic a9s Kubernetes plan offers a plain, single-node Kubernetes instance on demand, and the user can manage the service with the kubectl CLI tool.
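
For example, once the instance's credentials have been obtained, the cluster can be inspected with standard kubectl commands; the kubeconfig path below is a placeholder for illustration:

    # Point kubectl at the credentials provided by the service instance
    # (~/a9s-kubernetes.kubeconfig is an illustrative path).
    export KUBECONFIG=~/a9s-kubernetes.kubeconfig

    # Verify connectivity and list the nodes of the instance.
    kubectl cluster-info
    kubectl get nodes -o wide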

The Kubernetes plans also use the NGINX Ingress Controller as the ingress controller; it acts as the gateway between the external network and the local private virtual network that is only accessible inside the deployment. The plans also integrate the Linkerd service mesh for observable, reliable, and secure handling of service-to-service communication.
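
As a minimal sketch, assuming a Linkerd 2.x installation with automatic proxy injection, a namespace can opt its workloads into the mesh with the standard injection annotation (the namespace and deployment names below are placeholders):

    # Annotate a namespace so Linkerd injects its sidecar proxy into
    # newly created Pods (namespace "demo" is illustrative).
    kubectl create namespace demo
    kubectl annotate namespace demo linkerd.io/inject=enabled

    # Workloads created in this namespace are now meshed, so their
    # service-to-service traffic is observed and secured by Linkerd.
    kubectl create deployment web --image=nginx --namespace demo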

a9s Kubernetes Cluster

The masters keep their data in sync using an etcd cluster and create a virtual network using flannel for container-to-container communication; the gateway of this virtual network is the ingress controller. The kube-apiserver provides the frontend to etcd, which shares the state across all nodes of the a9s Kubernetes deployment. The kube-controller-manager runs the core control loops of the cluster, including the replication controller, endpoints controller, namespace controller, and serviceaccounts controller. Its communication goes through the kube-apiserver.
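
This division of labor can be observed by querying the kube-apiserver for the health of the other control plane components; a minimal sketch:

    # The kube-apiserver reports the health of the scheduler, the
    # controller-manager and the etcd cluster behind it.
    # (componentstatuses is deprecated in newer Kubernetes releases,
    # but it illustrates the control plane layout well.)
    kubectl get componentstatuses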

The kube-scheduler is responsible for making sure that each new container (Pod) gets allocated to the best possible node. It also communicates through the kube-apiserver.
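
The scheduler's placement decision is driven by the Pod spec, most notably its resource requests. A minimal sketch, with placeholder names and image:

    # Create a Pod whose resource requests the kube-scheduler uses to
    # pick the best-fitting worker node.
    kubectl apply -f - <<EOF
    apiVersion: v1
    kind: Pod
    metadata:
      name: scheduling-demo
    spec:
      containers:
      - name: app
        image: nginx
        resources:
          requests:
            cpu: "250m"
            memory: "128Mi"
    EOF

    # The Events section shows which node the scheduler assigned.
    kubectl describe pod scheduling-demo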

Current plans vary the number of masters from 1 to 3, but the number of nodes can be scaled both horizontally and vertically.

a9s Kubernetes Cluster - Masters Communication

Each process running on the masters manages the workload that executes on the worker nodes. Each worker node runs a Docker daemon which is controlled by the kubelet process. Docker controls the containers, which are represented as part of a Pod, while the kubelet process communicates with the masters via the kube-apiserver and controls the lifecycle of the Docker containers on its node.
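
This chain (kube-apiserver, kubelet, Docker daemon, container) can be inspected per node; the node name below is a placeholder:

    # Show the kubelet's view of a worker node, including its container
    # runtime version and the Pods it currently hosts.
    kubectl describe node worker-node-0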

The kube-proxy provides networking services. It maintains the network rules that proxy traffic between the Services and the external network. We leverage this service for the ingress controller, which is detailed in the Routing section.
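
For instance, a Service of type NodePort is realized by kube-proxy rules on every node. A sketch with illustrative names; the nodePort value is only an example and may already be claimed by the ingress controller (see the port table below):

    # Expose a workload on a fixed NodePort that kube-proxy serves on
    # every worker node (names and port number are illustrative).
    kubectl apply -f - <<EOF
    apiVersion: v1
    kind: Service
    metadata:
      name: web-nodeport
    spec:
      type: NodePort
      selector:
        app: web
      ports:
      - port: 80
        targetPort: 8080
        nodePort: 31380
    EOF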

a9s Kubernetes Cluster - Master-Worker Communication

The number of workers in the current plans varies from 1 to 5, but it is possible to scale both vertically and horizontally.

Routing

a9s Kubernetes routing works the same way as described in the a9s Router Architecture document until a request hits the ingress controller, which then forwards the connection to the correct target Pod using the virtual network created between the worker nodes.

The a9s Router only routes HTTPS connections and automatically redirects all HTTP traffic to HTTPS.
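
Assuming the NGINX Ingress Controller described above, an application is attached to this routing chain with a standard Ingress resource. The hostname and Service names below are placeholders, and the Ingress API version may differ on older Kubernetes releases:

    # Route requests for a hostname to a backend Service via the
    # NGINX Ingress Controller (host and Service are placeholders).
    kubectl apply -f - <<EOF
    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: web-ingress
    spec:
      ingressClassName: nginx
      rules:
      - host: web.example.com
        http:
          paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: web
                port:
                  number: 80
    EOF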

Tenant Isolation

a9s Kubernetes uses internal Tenant Isolation based on the a9s Node Guard. The Tenant Isolation only allows incoming requests from the cluster nodes themselves (master and worker nodes) and from the a9s Router, which handles incoming requests from the outside.

The table below shows which ports are isolated on the Kubernetes cluster nodes:

Node      Port
------    -----
Master    2379
Master    2380
Master    8443
Master    10251
Master    10252
Master    10257
Master    10259
Worker    10250
Worker    10256
Worker    31380
Worker    31390