a9s Harbor Architecture
This document describes the architecture of a9s Harbor across different plans.
Overview
The a9s Harbor provides Harbor service instances on demand.
Each a9s Harbor deployment contains an a9s PostgreSQL service and an a9s Redis service, which can be either a cluster or a single-node setup, depending on the plan. It can also have one or more harbor-app instances to provide high availability. Please check our documentation to learn more about a9s PostgreSQL and a9s Redis; here we only go into detail on how a9s Harbor uses these services.
a9s Harbor stores container images on AWS S3 instead of the file system, making them available across the harbor-app nodes. A bucket is created with the configured prefix (CredHub variable: /harbor_storage_provider_bucket_prefix). When an instance is deleted, the harbor-spi deletes the bucket belonging to that a9s Harbor instance.
Metadata about the state is stored either in the PostgreSQL database (e.g., projects, users) or, in the case of temporary job metadata for the job service, in the Redis cluster.
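For illustration only, the following is a minimal sketch of the kind of storage section the embedded Docker Registry uses for S3; the region, credential placeholders and the bucket naming scheme are assumptions, not the exact values used by a9s Harbor:

```yaml
# Illustrative sketch of a registry config.yml storage section.
# Bucket name composition, region and credential placeholders are assumptions.
storage:
  s3:
    region: eu-central-1
    bucket: <harbor_storage_provider_bucket_prefix>-<service-instance-guid>
    accesskey: <aws-access-key>
    secretkey: <aws-secret-key>
```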
Besides the Harbor processes, a9s Harbor includes by default Clair for vulnerability scanning, Notary for content trust (signing container images), and ChartMuseum as a chart repository.
The a9s Harbor deployment runs several containers using docker-compose: starting the harbor process runs docker-compose up, while stopping it runs docker-compose down. The following containers are run to provide the Harbor service:
```
Name                Command                          State          Ports
------------------------------------------------------------------------------------------------------------
chartmuseum         /docker-entrypoint.sh            Up (healthy)   9999/tcp
clair               /docker-entrypoint.sh            Up (healthy)   6060/tcp, 6061/tcp
harbor-core         /harbor/start.sh                 Up (healthy)
harbor-db           /entrypoint.sh postgres          Up (healthy)   5432/tcp
harbor-jobservice   /harbor/start.sh                 Up
harbor-log          /bin/sh -c /usr/local/bin/ ...   Up (healthy)   127.0.0.1:1514->10514/tcp
harbor-portal       nginx -g daemon off;             Up (healthy)   80/tcp
nginx               nginx -g daemon off;             Up (healthy)   0.0.0.0:443->443/tcp, 0.0.0.0:4443->4443/tcp,
                                                                    0.0.0.0:80->80/tcp
notary-server       /bin/server-start.sh             Up
notary-signer       /bin/signer-start.sh             Up
redis               docker-entrypoint.sh redis ...   Up             6379/tcp
registry            /entrypoint.sh /etc/regist ...   Up (healthy)   5000/tcp
registryctl         /harbor/start.sh                 Up (healthy)
```
The redis and harbor-db containers are currently not used.
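As a sketch of what this looks like on a harbor-app node, the containers can be listed and the Harbor process started or stopped with docker-compose; the working directory below is an assumption and may differ in an actual deployment:

```sh
# Illustrative only; the docker-compose project directory is an assumption.
cd /var/vcap/store/harbor

docker-compose ps      # list the containers shown above
docker-compose up -d   # start the Harbor containers
docker-compose down    # stop and remove the Harbor containers
```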
If you want to know more about the Harbor architecture itself, take a look at their official repository.
Routing
The SPI can be configured to generate certificates signed by the router's (e.g., gorouter) certificate authority. This means that the certificates provided by service keys can be used to verify connections over the external route while every certificate remains unique. You can learn more about service instance certificate generation here.
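As an illustrative check, the certificate presented on the external route can be inspected with openssl; the host below is a placeholder for the uri taken from a service key:

```sh
# Inspect the certificate presented on the external route (placeholder host).
HOST=<service-instance-guid>.<domain>.<top-level-domain>
openssl s_client -connect "$HOST:443" -servername "$HOST" </dev/null \
  | openssl x509 -noout -subject -issuer
```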
The harbor-app announces its route using the route_registrar process, which uses NATS to communicate with the Cloud Foundry Router. The route is announced periodically, every route_registrar.routes[].registration_interval (default: 20s).
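A minimal sketch of such a route_registrar configuration is shown below; the route name, port and URI are placeholders and assumptions, only the registration_interval reflects the default mentioned above:

```yaml
# Illustrative route_registrar configuration sketch (name and port are assumptions).
route_registrar:
  routes:
  - name: harbor
    registration_interval: 20s   # default heartbeat interval
    port: 443
    uris:
    - <service-instance-guid>.<domain>.<top-level-domain>
```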
It is possible to access the Harbor dashboard and API using the uri of a service key (cf create-service-key).
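For example, a service key can be created and read with the Cloud Foundry CLI to obtain the uri; the service instance and key names are placeholders:

```sh
# Create and read a service key (placeholder names).
cf create-service-key my-harbor my-harbor-key
cf service-key my-harbor my-harbor-key   # the returned credentials contain the uri
```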
When accessing the route, the request hits the Cloud Foundry Router (gorouter), which forwards the connection to the harbor-app. Inside the harbor-app, the connection hits the docker-proxy process, which is responsible for forwarding the request to the nginx container (on ports 80, 443 and 4443), which in turn forwards the connection to the appropriate node. When accessing the Harbor dashboard, for example, the connections are forwarded to the harbor-portal on the container port 443 inside the virtual network 172.18.0.0/16.
External outgoing connections have the source address of every packet changed to the harbor-app address. Incoming connections have the destination address changed to 172.18.0.2/16, which is the gateway of the containers' virtual network, as soon as the connections hit the harbor-app. The packets are manipulated and routed using netfilter and the docker-proxy.
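The following is a rough sketch of how these NAT rules can be inspected on the harbor-app; the example rule in the comment is illustrative and the actual rules created by Docker will differ:

```sh
# Inspect the NAT rules maintained by Docker/netfilter on the harbor-app (illustrative).
iptables -t nat -L DOCKER -n

# Example of the kind of DNAT rule involved (illustrative, not an actual rule dump):
# DNAT  tcp  --  0.0.0.0/0  0.0.0.0/0  tcp dpt:443 to:172.18.0.2:443
```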
The Cloud Foundry Router removes the route if the route_registrar of a harbor-app node fails to send heartbeats, and the connections are balanced across the remaining working nodes.
Additional Routes
It is possible to introduce additional routes that refer to the a9s Harbor service instance.
By default, the base URI of an a9s Harbor service instance has the form:
<service-instance-guid>.<domain>.<top-level-domain>
Each additional route adds a URI of the following form:
<service-instance-guid>.<route-name>.<domain>.<top-level-domain>
These additional routes lead to the same service instance as the base URI. They can be used, for example, with a load balancer to limit access to the instance for different users.
The additional routes can be configured through the harbor-spi.additional-routes property of the harbor-spi job. The property is a list of strings. Each string represents an additional route name, which results in an additional URI as explained above.
Important: This feature does not work with a custom public_host.
Example
a9s Harbor SPI manifest:
```yaml
...
- name: harbor-spi
  properties:
    ...
    harbor-spi:
      additional-routes:
      - route-one
      - route-two
    ...
  release: harbor-spi
...
```
This results in the following additional routes:
<service-instance-guid>.route-one.<domain>.<top-level-domain>
<service-instance-guid>.route-two.<domain>.<top-level-domain>