Harbor Registry Storage

In this version of the a9s Harbor Data Service, the registry uses Amazon S3 as its storage backend. This means that container images are stored in an Amazon S3 bucket.

Configuring S3 for the a9s Harbor Data Service

For each service instance deployment, most of the storage-related information, such as the credentials, comes from the SPI. It is therefore the platform operator's role to provide this information to the a9s Harbor deployment.

The information to be provided consists of the Amazon Access Key ID, the Amazon Secret Key, an Amazon region and a prefix for the S3 buckets that are generated for each service instance deployment. Each new instance's S3 bucket is automatically configured with default server-side encryption.

The names of the variables that must be provided at deployment time are harbor_storage_provider_s3_accesskey, harbor_storage_provider_s3_secretkey, harbor_storage_provider_s3_region and harbor_storage_provider_bucket_prefix.

Consequently, the bucket name for each deployment has the format <harbor_storage_provider_bucket_prefix><service_instance_deployment_name>.
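As a sketch, assuming a hypothetical prefix and deployment name (the real values come from your deployment), the resulting bucket name is composed like this:

```shell
# Hypothetical example values; the real ones come from your deployment.
harbor_storage_provider_bucket_prefix="a9s-harbor-"
service_instance_deployment_name="d67890abc"

# The bucket name is simply the prefix followed by the deployment name.
bucket_name="${harbor_storage_provider_bucket_prefix}${service_instance_deployment_name}"
echo "${bucket_name}"
# a9s-harbor-d67890abc
```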

Configuring Aliyun OSS for the a9s Harbor Data Service

To configure the a9s Harbor Data Service to use Aliyun OSS as its storage provider, use the following ops file:

- type: remove
  path: /instance_groups/name=spi/jobs/name=harbor-spi/properties/harbor-spi/storage/s3
- type: replace
  path: /instance_groups/name=spi/jobs/name=harbor-spi/properties/harbor-spi/storage
  value:
    oss:
      access_key_id: ((/harbor_storage_oss_access_key_id))
      access_key_secret: ((/harbor_storage_oss_access_key_secret))
      endpoint: ((/harbor_storage_oss_endpoint))
      region: ((/harbor_storage_oss_region))
      bucket_prefix: ((/harbor_storage_oss_bucket_prefix))

For a reference of the available regions and OSS endpoints, see the Aliyun OSS documentation.
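After the ((…)) placeholders are interpolated from your variable store, the resulting storage properties might look like the fragment below. The values shown are placeholders for illustration; only the Frankfurt region/endpoint pair is a real Aliyun OSS example, and the credentials must of course come from your own account:

```yaml
harbor-spi:
  storage:
    oss:
      access_key_id: EXAMPLE_ACCESS_KEY_ID
      access_key_secret: EXAMPLE_ACCESS_KEY_SECRET
      endpoint: oss-eu-central-1.aliyuncs.com
      region: oss-eu-central-1
      bucket_prefix: a9s-harbor-
```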

Migrating From anynines-deployment v9.1 to v10

The a9s Harbor service instance provides a script that encrypts the existing Amazon S3 buckets.

BOSH SSH into the a9s Harbor SPI VM and change to the superuser with sudo -i.

The script reads its input from a set of environment variables:

HARBOR_BROKER_USERNAME - a9s Harbor Service Broker username
HARBOR_BROKER_PASSWORD - a9s Harbor Service Broker password
HARBOR_BROKER_URL - a9s Harbor Service Broker URL

Please export them as environment variables.
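For example, with placeholder values (not real credentials; substitute the broker credentials of your deployment):

```shell
# Placeholder credentials for illustration only.
export HARBOR_BROKER_USERNAME="broker-admin"
export HARBOR_BROKER_PASSWORD="broker-secret"
export HARBOR_BROKER_URL="https://harbor-broker.example.com:3000"
```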

Next, export the following environment variables:

$ export PATH=/var/vcap/packages/ruby-2.4-r5/bin:$PATH
$ export HOME=/home/vcap
$ export bundle_cmd=/var/vcap/packages/ruby-2.4-r5/bin/bundle

To execute the script, use the following commands:

$ cd /var/vcap/packages/harbor-spi/
$ bundle exec ruby /var/vcap/jobs/harbor-spi/helpers/encrypt_existing_buckets.rb

The script traverses all service instances known to the a9s Service Broker, regardless of their current state, and sets the default encryption on every S3 bucket.
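The default server-side encryption it applies corresponds to an S3 bucket-encryption configuration along these lines. Note that the choice of AES256 (SSE-S3) here is an assumption for illustration; the script may configure a different SSE algorithm:

```json
{
  "Rules": [
    {
      "ApplyServerSideEncryptionByDefault": {
        "SSEAlgorithm": "AES256"
      }
    }
  ]
}
```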