
a9s-pg Major Upgrade

Up until anynines-deployment version v51.0.0, a9s-pg uses the a9s PostgreSQL 11 BOSH release, which requires the Bionic stemcell. This document describes the steps necessary to safely upgrade a9s-pg to a9s PostgreSQL 15 using the Jammy stemcell.

tip

We recommend executing a test run in a development environment, using a clone of the production environment, to better prepare for the upgrade: the procedure involves downtime that depends on the data stored in the cluster and therefore varies from case to case.

Upgrade Limitations

The upgrade to a9s PostgreSQL 15 is only supported when upgrading from a9s PostgreSQL 11 version 3.0.0 or later.

Make sure the cluster is healthy before executing the upgrade.
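For example, you can use the BOSH CLI to verify that all instances of the deployment and their monitored processes are reported as running before starting:

$ bosh -d a9s-pg instances --ps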

Description

The upgrade procedure takes place in the pre-start step of the bootstrap node update with the following steps:

  • Initialize the new data directory (/var/vcap/store/postgresql15) with initdb.
  • Ensure there is a valid primary.
  • Discard the current data in the old data dir (/var/vcap/store/postgresql11).
  • Clone the data from the current primary using pg_basebackup to ensure the node is up-to-date.
  • Block all nodes in the cluster. See Downtime for more information.
  • Prepare the data directory for upgrade.
    • Includes ensuring the node is up-to-date after the cluster is blocked.
  • Validate and upgrade the data using pg_upgrade with --link.
  • Reindex all databases. See Reindex All Databases for more information.

Then, the node is initialized as primary as usual, since it has just been bootstrapped, resulting in a new cluster running the new version and containing all the data. The remaining nodes are then updated by discarding their current data and initializing them as standbys following the new primary on the bootstrap node.

The steps described above require both PostgreSQL binary versions, so the new a9s PostgreSQL BOSH release contains the PostgreSQL binaries from the old version.
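For illustration only, the sequence above corresponds roughly to the following commands; the binary paths and the primary address placeholder are assumptions, and the actual pre-start script in the BOSH release handles many more details:

# Rough sketch of the bootstrap node upgrade, not the actual pre-start script
$ /var/vcap/packages/postgresql15/bin/initdb -D /var/vcap/store/postgresql15
$ /var/vcap/packages/postgresql11/bin/pg_basebackup -h <current-primary> -D /var/vcap/store/postgresql11
$ /var/vcap/packages/postgresql15/bin/pg_upgrade --link \
    --old-datadir /var/vcap/store/postgresql11 --new-datadir /var/vcap/store/postgresql15 \
    --old-bindir /var/vcap/packages/postgresql11/bin --new-bindir /var/vcap/packages/postgresql15/bin
$ /var/vcap/packages/postgresql15/bin/reindexdb --all --jobs "$(nproc)"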

danger

When BOSH is updating the bootstrap node of the a9s-pg deployment with the new a9s PostgreSQL 15, the pre-start script detects that a major upgrade is happening based on the PG_VERSION file of the old data directory (/var/vcap/store/postgresql11); the existence of this file is necessary for the upgrade procedure to succeed.

Any attempt to upgrade the cluster with an empty bootstrap node will cause data loss, as the upgrade procedure will not be executed.
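Before deploying the new release, you can therefore confirm on the bootstrap node (for example via bosh ssh) that the old data directory is populated and that the PG_VERSION file exists and contains 11:

$ cat /var/vcap/store/postgresql11/PG_VERSION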

Downtime

The in-place major version upgrade procedure has an inherent downtime due to:

  • Bootstrapping a new cluster: after cloning finishes on the bootstrap node, any data written to the primary of the cluster is no longer replicated to the bootstrap node and would be discarded during the upgrade. To avoid losing this data, the part of the cluster still running the old version is blocked once the bootstrap node finishes cloning. The PostgreSQL process is started one more time after the old part of the cluster is blocked to ensure the bootstrap node is up-to-date with the primary. Connections are only accepted once the bootstrap node is ready to accept connections again.
  • The way pg_upgrade manipulates the local data to provide compatibility with the new version: this tool requires the PostgreSQL process to be stopped. The number of databases, amount of data, and disk throughput affect how long this operation takes.
  • Stemcell upgrade: the operating system update changes glibc, which can invalidate existing indexes and lead to data corruption and data loss; therefore, a reindexing operation must be executed on all existing databases. The number of databases, indexes, rows in tables, and the resources available in the machine affect how long this operation takes. See Reindex All Databases for more information.
  • Not enough quorum when bootstrapping the cluster in the new version: a9s PostgreSQL comes with PgGuard, a component that watches the cluster and blocks unhealthy nodes to prevent data loss. When the bootstrap node finishes upgrading, it has no replica, and to avoid exposing new changes to a cluster where new data is not redundant, PgGuard blocks the bootstrap node until the second node starts following the new primary. This behavior can be configured. See Unblock Bootstrap Node on Major Upgrade for more information.

We recommend measuring the downtime while staging the upgrade so you can prepare a more accurate maintenance window.

Preparing to Upgrade

Some steps are necessary to ensure a safe upgrade and the ability to recover in case of a disaster. This section describes some aspects to take into consideration.

Increase the Persistent Disk

During the execution of pg_upgrade, data is copied from the old data directory (/var/vcap/store/postgresql11) to the new data directory (/var/vcap/store/postgresql15) using hard links, which significantly reduces additional space usage during the upgrade. However, we recommend watching the storage usage when validating the upgrade in the staging environment, paying special attention to the a9s Parachute limit.
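For example, the following commands show the usage of the persistent disk and of both data directories on a node; run them while the staging upgrade is in progress to estimate the peak usage:

$ df -h /var/vcap/store
$ du -sh /var/vcap/store/postgresql11 /var/vcap/store/postgresql15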

Create a Backup

Before going ahead with the upgrade, we recommend creating a backup in case disaster recovery is necessary.

Using the a9s Backup Framework

Please ensure that your a9s-pg deployment contains a properly configured a9s Backup Agent; this approach therefore does not work if the ops/a9s-pg-without-backup.yml Ops File is applied.

Before triggering a backup, you need to check that it was configured correctly. The manifest of the a9s-pg deployment should show that the backup-agent and postgresql-backup-endpoint jobs are installed.
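One way to check this is to grep the deployed manifest for these jobs using the BOSH CLI, for example:

$ bosh -d a9s-pg manifest | grep -E 'backup-agent|postgresql-backup-endpoint'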

After checking that everything is fine, you can trigger a backup. Instructions on how to do that can be found in the a9s-pg Backup documentation.

Even though a backup taken with a9s PostgreSQL 11 is not ensured to be compatible with the a9s PostgreSQL 15 version, it is possible to reset the cluster, roll back to the old version, and restore the backup. To know more about this, see the Disaster Recovery section below.

Using pg_dumpall (Manual Backup)

Another option is to create a local dump file, to be restored in case of a disaster, with pg_dumpall by using the following command:

$ pg_dumpall --clean --oids --no-password --if-exists --quote-all-identifiers --host=<primary-addr> --port=5432 \
--username=admin_user -f a9s-pg.sql

The password for this user is on the CredHub variable a9s_pg_admin_password. It should be under the path /<bosh-directory-name>/a9s-pg/a9s_pg_admin_password.

If the password does not seem to be working, try to set the PGPASSWORD environment variable.
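For example, you can look up the password in CredHub and export it before running pg_dumpall; the placeholder in the second command must be replaced with the actual value:

$ credhub get -n /<bosh-directory-name>/a9s-pg/a9s_pg_admin_password
$ export PGPASSWORD='<a9s_pg_admin_password-value>'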

Even though a backup taken with a9s PostgreSQL 11 is not compatible with the a9s PostgreSQL 15 version, it is possible to reset the cluster, roll back to the old version, and restore the backup. To know more about this, see the Disaster Recovery section below.

Customize The Upgrade

Unblock Bootstrap Node on Major Upgrade

In this section, we explain how to configure PgGuard to alter its behavior and unblock the bootstrap node so that it starts accepting connections as soon as it starts, without waiting for a quorum, therefore decreasing the downtime.

danger

Enabling this feature, however, leaves any data written before the second node starts replicating from the bootstrap node in a vulnerable state: there is only one copy of the new changes, so losing the data on the bootstrap node due to any outage results in data loss.

PgGuard can be configured to unblock the bootstrap node as soon as it starts with the following Ops File:

- type: replace
  path: /instance_groups/name=pg/jobs/name=postgresql-ha/properties/postgresql-info-webservice/pg_guard?/unblock_bootstrap_major_upgrade?
  value: true

The default value of the postgresql-info-webservice.pg_guard.unblock_bootstrap_major_upgrade property is false, and nodes are unblocked only when there are enough standby nodes following the primary, which is the behavior described in the Description section.
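If you decide to enable this option, save the snippet above to an Ops File (the file name below is only an example) and pass it with an additional -o flag to the upgrade deployment described in the Run Upgrade section:

$ bosh deploy -d a9s-pg a9s-pg/a9s-pg.yml -l config/iaas-config.yml \
-o a9s-pg/ops/upgrade-to-postgresql15.yml -o unblock-bootstrap-major-upgrade.yml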

Reindex All Databases

During the upgrade of a9s-pg to a9s PostgreSQL 15, the stemcell will be upgraded as well, and the cluster will start using the Ubuntu Jammy stemcell, with a newer version of the operating system. This version ships a glibc that includes the collation changes introduced in glibc 2.28, which can invalidate the indexes of databases created with the previous version. See Locale data changes for more information.

The major upgrade procedure includes a step to automatically reindex all databases in a9s-pg to address this issue. This step happens after the data is upgraded. The PostgreSQL process starts with a special configuration tailored for the execution of the reindexdb command, listening only on localhost.

It is possible to skip the reindex step and decrease the downtime with the following Ops File:

- type: replace
  path: /instance_groups/name=pg/jobs/name=postgresql-ha/properties/postgresql-info-webservice/major_upgrade?/reindex?/enabled?
  value: false

danger

We strongly advise not to skip this step, as skipping it can lead to data corruption.

During this step, the reindexdb command is executed with --jobs set to the number of CPUs available on the machine.
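This corresponds roughly to the command below; the exact options and the user are assumptions for illustration, since the release runs this step internally against the process listening on localhost:

$ reindexdb --all --jobs "$(nproc)" --host 127.0.0.1 --port 5432 --username admin_user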

The PostgreSQL process is started with the following configuration by default:

  • max_connections: (4 * number of CPUs). Default minimum value for this property is 128.
  • max_worker_processes: number of CPUs.
  • max_parallel_workers: number of CPUs.
  • max_parallel_maintenance_workers: number of CPUs.
  • max_parallel_workers_per_gather: (number of CPUs / 4). Default minimum value for this property is 2.
  • maintenance_work_mem: (50% of the total memory / (max_worker_processes * max_parallel_workers_per_gather)). Default minimum value for this property is 64MB.
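For illustration, on a node with 8 CPUs and 16 GB of memory this results in max_connections = 128 (the minimum, since 4 * 8 = 32 is below it), max_worker_processes = max_parallel_workers = max_parallel_maintenance_workers = 8, max_parallel_workers_per_gather = 2 (8 / 4), and maintenance_work_mem = 512MB (50% of 16 GB divided by 8 * 2 = 16).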

It is possible to override these values and provide any PostgreSQL configuration with the following Ops File:

- type: replace
  path: /instance_groups/name=pg/jobs/name=postgresql-ha/properties/postgresql-info-webservice/major_upgrade?/reindex?/pg_ctl_start_opts?
  value:
    max_connections: 128
    max_worker_processes: 64
    max_parallel_workers: 64
    max_parallel_maintenance_workers: 64
    max_parallel_workers_per_gather: 4
    maintenance_work_mem: 128MB

Any configuration supported by the -c flag of the pg_ctl command's --options parameter can be overwritten by extending the Ops File above.

The optimal execution of the reindexdb command depends on the amount of data stored, so it makes sense to observe the resource usage and attempt to change these values to decrease the reindex operation time, and therefore decrease the downtime itself.

Run Upgrade

The upgrade takes place in two main steps when deploying the new version of the a9s-pg BOSH deployment:

  1. Update the a9s-pg deployment to the most recent a9s PostgreSQL 11 version. This is currently the default version used in the a9s-pg/a9s-pg.yml manifest of the most recent anynines-deployment version. See Deploy a9s-pg for more information.

  2. Apply the a9s PostgreSQL 15 BOSH release to the a9s-pg deployment by deploying it again while applying the a9s-pg/ops/upgrade-to-postgresql15.yml Ops File. The major upgrade will happen during this step. See the command below for an example:

$ bosh deploy -d a9s-pg a9s-pg/a9s-pg.yml -l config/iaas-config.yml -o a9s-pg/ops/upgrade-to-postgresql15.yml

After the upgrade is applied successfully, the a9s-pg cluster will be ready for use. If it fails, please refer to the Disaster Recovery section below.

Disaster Recovery

In case the upgrade procedure fails, the cluster can be recovered as follows:

  1. Clean up all nodes in the cluster:
$ monit stop all
$ rm -rf /var/vcap/store/postgresql15*
tip

If the update fails while updating the bootstrap node, you can execute the cleanup on the bootstrap node only.

  2. Deploy the previous version of your a9s-pg manifest to roll back to version 11.

  3. If data was lost for any reason, restore the backup taken at the beginning of the upgrade, and try to upgrade to the new version again as described in the Run Upgrade section.

Restore a Backup

Using the a9s Backup Framework

See the Restore a Backup documentation for further information.

Restore a Manual Backup

Restore the local dump file created in the Using pg_dumpall (Manual Backup) section with the command below:

$ psql --host=<primary-addr> --username=admin_user --port=5432 --dbname=postgres < a9s-pg.sql

It is expected to see some errors related to the repmgr database when restoring the dump file, since repmgrd is using that database and it already contains the correct information. In case of any other issue, see the a9s PostgreSQL Cluster Recovery documentation.