Version: 50.0.0

Migration

This document describes, from the Application Developer's perspective, options to migrate data from an a9s PostgreSQL Service Instance running an older PostgreSQL version to an a9s PostgreSQL Service Instance running a higher version.

Dump and Restore From Origin Instance to New Instance

This feature is only available for migrating to a9s PostgreSQL 11 and newer.

If you have an existing a9s PostgreSQL Service Instance, you can create a copy of its data by creating a new a9s Service Instance with the custom parameter copy_from.

In the default case, the custom parameter copy_from requires an object with the following keys:

  • host: The hostname of the existing instance.
  • username: The username of the existing instance.
  • password: The password of the existing instance.

The values for the required keys can be derived from any service binding (either cf service-key or from cf env APP_NAME when bound to an a9s Service Instance).
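As a sketch, the required values could be pulled out of the binding JSON like this. The binding content below is a hypothetical example; in practice you would feed in the output of cf service-key (a tool such as jq would be the more robust choice if available):

```shell
# Hypothetical binding JSON, shaped like the output of `cf service-key old-instance my-key`
# (values are placeholders, not real credentials)
binding='{"host":"rgd25db46-psql-master-alias.node.dc1.consul.dsf2","username":"a9s9d374017060a7d4573bc61baf714500f4d948631","password":"a9se2fdcf291672e4555293b13c4b7119ba2f77312f","name":"d25db46"}'

# Extract a top-level string field from the single-line JSON with sed
get() { echo "$binding" | sed -n "s/.*\"$1\":\"\([^\"]*\)\".*/\1/p"; }

host=$(get host)
username=$(get username)
password=$(get password)

echo "copy_from host: $host"
```

The three extracted values map directly onto the host, username, and password keys of the copy_from object.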

When the parameter role_privileges with CREATEDB has been used and additional databases exist next to the main database, the copy_from parameter also requires the key main_database, which specifies the main database on the origin a9s Service Instance. The main database name can be derived from the name field of the binding information.

danger

Be aware that when using additional databases, only the main_database is guaranteed to be fully migrated; you must take responsibility for handling manually created databases yourself during the migration.
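In that case, the `-c` payload passed to cf create-service might look like the following sketch (hostname, credentials, and database name are placeholders; the main_database value corresponds to the name field of the binding):

```json
{
  "copy_from": {
    "host": "rgd25db46-psql-master-alias.node.dc1.consul.dsf2",
    "username": "a9s9d374017060a7d4573bc61baf714500f4d948631",
    "password": "a9se2fdcf291672e4555293b13c4b7119ba2f77312f",
    "main_database": "d25db46"
  }
}
```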

An example call that creates a new a9s Service Instance and initializes its database with data from another a9s Service Instance looks like this:

cf create-service a9s-postgresql postgresql-single-small new-instance-name -c '{"copy_from": {"host": "rgd25db46-psql-master-alias.node.dc1.consul.dsf2", "username": "a9s9d374017060a7d4573bc61baf714500f4d948631", "password": "a9se2fdcf291672e4555293b13c4b7119ba2f77312f"} }'

The create-service call validates whether the given credentials are correct. In the case of multiple databases, the presence of a value for main_database is also validated.

The databases postgres, template0 and template1 will not be copied over.

Behind the scenes, the new a9s Service Instance will copy over the data using pg_dump and pg_restore calls. The pg_dump call receives the command line arguments -O -x --quote-all-identifiers --lock-wait-timeout=300s --verbose --format=custom.
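Spelled out, the dump side corresponds roughly to a command line like the one assembled below. This is a sketch only: the service runs the dump internally, the host, user, and database names are placeholders, and the exact pg_restore flags are not documented here.

```shell
# The pg_dump flags listed above, assembled into a full (hypothetical) invocation
DUMP_FLAGS="-O -x --quote-all-identifiers --lock-wait-timeout=300s --verbose --format=custom"
cmd="pg_dump $DUMP_FLAGS --host=old-host --username=olduser --dbname=mydb --file=mydb.dump"
echo "$cmd"
```

The custom format (`--format=custom`) is what allows the data to be restored selectively and in parallel with pg_restore on the new instance.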

If an error occurs while the new a9s Service Instance copies the data from the old a9s Service Instance, Cloud Foundry will only report that the instance failed. To receive more information about why the copy process failed, it is recommended to specify a syslog endpoint via custom parameter and receive the logs of the copy process. The copy process reports progress and error lines containing the text copy_from.

The credentials from the origin a9s Service Instance that are stored on the new a9s Service Instance will disappear the next time the instance is updated.

Migrating to a9s PostgreSQL 15 and Above

With PostgreSQL 15, the creation permission was removed from the public schema for regular users, and the owner was changed to the pg_database_owner role. Therefore, users under the cfuser role can no longer create tables or other objects on the public schema. Due to this change, a new schema called shared was introduced to ensure the same behavior as before.

When copy_from is used to migrate from an older version of PostgreSQL to version 15 or greater, all data from the public schema will be automatically migrated to the new shared schema. This is done by renaming the restored schema public to shared.

Caveats

  • This feature is only supported for copying data from a9s Service Instances whose major version is older than or equal to that of the new a9s Service Instance. For example, 11 -> 13, 13 -> 15 and 11 -> 11 are supported. Backwards migration, such as 13 -> 11, is not supported by this feature. This is due to limitations in the PostgreSQL pg_dump utility.
  • This feature only supports instances where PITR is disabled.
  • It is recommended to try out the copy_from procedure and to validate all expected data has been restored before doing the real switch.
  • If possible, this feature should be executed at a time when the database is under low load.
  • This feature has been tested with example feature sets for the following supported extensions: 'postgis', 'uuid-ossp', 'ltree', 'pgcrypto', 'citext', 'vector'. Your experience may differ when using advanced features of these extensions.
  • When migrating data, you might need more space than the old a9s Service Instance currently requires because WAL files occupy additional space. For this reason, we recommend always copying to a single instance first, and then updating from single to cluster. Pay special attention to this when using continuous archiving, since uploading WAL files takes a long time compared to WAL files being consumed by replication.
  • All objects in all databases being copied MUST be owned by the cfuser role.

Keeping Track of the Migration Progress

caution

This monitoring approach is given for informational purposes only. Please keep in mind that the total processing time and the pace of progress might fluctuate depending on various factors, such as:

  • Your environment.
  • Your Service Plan.
  • The size of the records.
  • The presence of indices and constraints.

You can estimate the current progress by using the metric_pg_database_size metric on the target a9s PostgreSQL Service Instance; by observing its values over time, you can extrapolate an approximate total processing time.

For example, assume that you have a 100GB database. You check the progress after 10 minutes and see that 120MB have been copied, and after 20 minutes, 250MB. From this you can make an initial estimate of the duration of the whole process by comparing the current size to the known size of the source a9s PostgreSQL Service Instance's original database, recalculating when the process slows down due to expensive transactions (e.g., REINDEX).
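The arithmetic behind such an estimate can be sketched as follows, using the example numbers from above (this is a naive linear extrapolation; as the text notes, the real rate fluctuates and the estimate should be recalculated as new samples arrive):

```shell
# Rough extrapolation from two progress samples
total_mb=102400          # known source database size: 100GB
t1=10; size1=120         # after 10 minutes: 120MB copied
t2=20; size2=250         # after 20 minutes: 250MB copied

rate=$(( (size2 - size1) / (t2 - t1) ))    # copy rate between samples, MB/min
eta=$(( t2 + (total_mb - size2) / rate ))  # naive total-duration estimate, minutes

echo "rate: ${rate} MB/min, estimated total: ${eta} minutes"
```

With these numbers the rate works out to 13 MB/min, so a full copy of the remaining data would take on the order of days rather than hours, which is exactly the kind of early signal this monitoring approach is meant to provide.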