Version: 67.0.0

Forking and Migration

This document describes how to do data migration between two a9s Messaging Service Instances. We present two approaches:

  1. the first one uses the Disaster Recovery feature to create a fork of an existing Service Instance,
  2. the second one uses different commands via the RabbitMQ CLI to migrate the data.
Known Limitations

These methods only migrate the queue and exchange structures on the server, not the messages contained in them.

Migrate a9s Messaging Using the Forking Feature

Follow the steps described in Fork a Service Instance to create a fork of an existing a9s Messaging Service Instance.

Migrate a9s Messaging Manually

In this scenario, we will use an SSH tunnel to get the RabbitMQ server definitions from an a9s Messaging Service Instance and then restore them on a fresh a9s Messaging Service Instance, using a similar process.

Migration Steps

Create New a9s Messaging Service Instance

Create a new and empty a9s Messaging Service Instance that will be the target for the data migration from an existing a9s Messaging Service Instance.

Note
  • Do not insert any data on the new Service Instance before the migration and verification are done.
  • Create the new Service Instance with capabilities equal to or greater than the original.

Create an SSH Tunnel

Create a reverse tunnel

The goal is to give the Application Developer access to the a9s Messaging Service Instance. To achieve this, it is necessary to create SSH tunnels to the origin a9s Messaging Service Instance.

In the end, we want to achieve the following scenario:


      /----> * RabbitMQ origin *                        /--------> * RabbitMQ destination *
     /                                                 /
* CF Application * (via SSH Tunnel)            * CF Application * (via SSH Tunnel)            [Infrastructure network]
------/-----------------------------------------------/----------------------------------------------------------------
     /                                                 /                                      [Developer network]
    /                                                 /
* Developer Machine *                          * Developer Machine *

It is possible to access any of the a9s Data Services locally. That means you can connect with a local client to the service for any purpose such as debugging. Cloud Foundry (CF) provides a smart way to create SSH forward tunnels via a pushed application. For more information about this feature, see the Accessing Apps with SSH section of the CF documentation.

First of all, an application must be bound to the Service Instance. For more information, see Bind an application to a Service Instance.

note

cf ssh support must be enabled in the platform. Ask your Platform Operator if you are not sure.

Follow the section Create a Tunnel to The Service to create the reverse tunnel.

Create RabbitMQ Administrator Credentials

You will need a user with administrative privileges to do the tasks required for the migration. In both Service Instances, create a rabbitmq user with the administrator role with the following command:

cf create-service-key my-messaging-service my-key -c '{"roles": ["administrator", "management"]}'

You should use these credentials to run all the commands on RabbitMQ instances described in the next sections.

Export Definitions From Old Instance

To export and import RabbitMQ definitions files on a9s Messaging services, you can use one of two methods: downloading the files directly through the HTTP API, or using the Backup Manager to create a backup that contains the Service Instance definitions and then restoring this backup on the new instance.

To get the definitions file through the HTTP API, make sure the SSH tunnel to the old instance is set up as described in the previous section, and run the following command from your local environment:

cf ssh <APP_NAME> -L 127.0.0.1:15672:<OLD_SERVICE_HOSTNAME>:15672

Then you can use curl to download the definitions file, passing the administrator credentials created in the previous step:

$ curl -o definitions.defs -u <SERVICE_ADMIN_USERNAME>:<SERVICE_ADMIN_PASSWORD> -X GET http://localhost:15672/api/definitions

# for ssl instances
$ curl -k -o definitions.defs -u <SERVICE_ADMIN_USERNAME>:<SERVICE_ADMIN_PASSWORD> -X GET https://localhost:15672/api/definitions
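Before importing, it can help to sanity-check that the downloaded export is well-formed JSON. A minimal sketch, using a stand-in file in place of a real export:

```shell
# Stand-in for a downloaded definitions file (the real one comes from the
# curl call above); the content here is illustrative only
echo '{"queues": [], "exchanges": [], "bindings": []}' > definitions.defs

# Sanity-check that the export is well-formed JSON before importing it
python3 -m json.tool definitions.defs > /dev/null && echo "definitions file is valid JSON"
```

A corrupted or truncated download will fail this check immediately, which is cheaper to catch here than during the import step.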

If, for some reason, you cannot access the HTTP API, you can use the Backup Manager to get the definitions file. Access the Service Dashboard of the Service Instance you want to migrate from; you can find the dashboard URL with this command:

$ cf service <SERVICE_NAME> | grep dashboard

Make sure you have set an encryption password, created a backup using the dashboard, and downloaded the backup to your local machine.

Use the password set up before to decrypt the backup and write its contents to a file:

cat <BACKUP_FILE> | openssl enc -aes256 -md md5 -d -pass 'pass:<BACKUP_PASSWORD>' | gunzip -c > definitions.defs
caution

This definitions file does NOT contain any message data, only queues and exchanges definitions.
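The decrypt pipeline above can be exercised locally with dummy data to confirm the command is correct before running it against a real backup. This sketch creates its own encrypted file, mirroring the documented pipeline in reverse (gzip, then AES-256 with an MD5-derived key); 'example' stands in for the real backup password:

```shell
# Create dummy "definitions" and encrypt them the way the restore pipeline
# expects (gzip first, then openssl enc); 'example' is a stand-in password
echo '{"queues": []}' | gzip -c \
  | openssl enc -aes256 -md md5 -pass 'pass:example' > backup.enc

# The documented restore pipeline: decrypt, then gunzip
cat backup.enc | openssl enc -aes256 -md md5 -d -pass 'pass:example' \
  | gunzip -c > definitions.defs

cat definitions.defs
```

If the password or cipher options do not match, openssl fails with a "bad decrypt" error instead of producing the JSON file.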

Import Definitions to New Instance

To import the queue definitions to the new instance, set up the SSH tunnel to the new instance:

cf ssh <APP_NAME> -L 127.0.0.1:15672:<NEW_SERVICE_HOSTNAME>:15672

Then you can use curl to import the definitions to the new instance like this:

$ curl --header 'Content-Type: application/json' -u <SERVICE_ADMIN_USERNAME>:<SERVICE_ADMIN_PASSWORD> -X POST -T definitions.defs http://localhost:15672/api/definitions

# for ssl instances
$ curl -k --header 'Content-Type: application/json' -u <SERVICE_ADMIN_USERNAME>:<SERVICE_ADMIN_PASSWORD> -X POST -T definitions.defs https://localhost:15672/api/definitions

With this step done, both Service Instances have the same structure, and we can proceed to the application migration.

Migrate Producer Applications

You can now switch your producers to use the new Service Instance. How to do this depends on your application setup; reconfigure your load balancer or your producer applications accordingly.

Migrate Consumer Applications

Once the queues in the old instance are almost empty, you can stop the consumers. If message ordering is important to you, wait until the consumers have finished draining the queues on the old instance. When the queues are empty, reconfigure the consumers as you did for the producers and restart them. At this point everything is migrated to the new instance.
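Whether a queue has drained can be checked via the HTTP API's queue endpoint. A sketch, using a sample response in place of a live API call ("orders" is a hypothetical queue name):

```shell
# Sample of the JSON returned by GET /api/queues/<encoded-vhost>/<queue-name>
# (stand-in for a live call such as:
#   curl -u <admin-user>:<admin-password> http://localhost:15672/api/queues/%2F/orders)
response='{"name": "orders", "messages": 0, "consumers": 2}'

# Report whether the queue has drained based on its message count
python3 -c '
import json, sys
q = json.loads(sys.argv[1])
print("drained" if q["messages"] == 0 else "%d messages left" % q["messages"])
' "$response"
```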

Decommission Old Instance

The last step is to decommission the old Service Instance. Your migration is now complete.

Create a Fork of a Service Instance

The procedure of forking a Service Instance involves creating a backup of a Service Instance and restoring it to a different Service Instance.

Having two Service Instances is a prerequisite for the process:

cf services
Output
Getting services in org system / space test as admin...

name         service           plan                      bound apps   last operation
messaging1   a9s-messaging40   messaging-cluster-small   bindingo     create succeeded
messaging2   a9s-messaging40   messaging-cluster-small                create succeeded

Fork an a9s Messaging Service Instance Using the Disaster Recovery Feature

To fork an existing a9s Messaging Service Instance from a specific backup, you can follow the instructions in Fork a Service Instance.

caution

As of the current release, this approach for forking is only tested when forking to a new instance that uses the same GA version.

Fork an a9s Messaging Service Instance Manually

This approach has additional prerequisites regarding command line tools:

  • BASH (or some other shell)
  • cat
  • openssl
  • python (see below for the version required)
  • node (tested with v6.11.0)

Open the service dashboard of the Service Instance you want to fork. We use messaging1 for this example. You can find the dashboard URL like this:

cf service messaging1
Output
Showing info of service messaging1 in org system / space test as admin...

name: messaging1
service: a9s-messaging40
bound apps: bindingo
tags:
plan: messaging-cluster-small
description: This is a service creating and managing dedicated Messaging Service Instances, powered by the anynines Service Framework
documentation:
dashboard: https://a9s-messaging-dashboard.de.a9s.eu/service-instances/950cb675-3ed9-4613-8bb6-b2d618391d2f

[...]

Make sure you set an encryption password for the backups using the Service Instance dashboard. Create a backup using the dashboard and download it to your local machine. The filename will be something like racsd92baee-1522222422893. Decrypt the backup and write its contents to a file:

cat racsd92baee-1522222422893 | openssl enc -aes256 -md md5 -d -pass 'pass:mytopsecretpassword' | gunzip -c > backup_settings.json

Create a tunnel to the admin interface (as explained in a previous chapter) of the RabbitMQ instance that will be the fork of the original instance. In this example, messaging1 is the original instance and messaging2 the fork. The tunnel is needed to get the matching version of the rabbitmqadmin script as well as to restore the backed up data. We assume you're using something like this to set up the tunnel:

cf ssh someapp -L 127.0.0.1:15672:racsd92baee.service.dc1.a9ssvc:15672

racsd92baee.service.dc1.a9ssvc is the host of the messaging2 Service Instance.

Go to http://127.0.0.1:15672/cli/ to download the rabbitmqadmin tool. rabbitmqadmin is a python script. On that page you'll find information about the python version required.

Download a copy of the backup script restore_queues.js. Make sure to chmod u+x the script.

Restore the backed up queues using the restore script, e.g.:

./restore_queues.js $(which python) ~/Downloads/rabbitmqadmin 127.0.0.1 15672 a9s-brk-usr-xxxxxxxx xxxxxxxyyyyyyyyyzzzzzzzzz ./backup_settings.json

If you are restoring an SSL instance, you will need to make a small change to the restore script:

 args.push("-p")
 args.push(process.argv[7])

+args.push("-s")
+
 args.push("declare")

After this change to the script, you can restore your SSL instance exactly as a non-SSL one:

./restore_queues.js $(which python) ~/Downloads/rabbitmqadmin 127.0.0.1 15672 a9s-brk-usr-xxxxxxxx xxxxxxxyyyyyyyyyzzzzzzzzz ./backup_settings.json

Migrate messages using the Shovel plugin

While the previous migration options only migrate queue definitions, there is a way to manually migrate a Service Instance's messages into any forked/restored Service Instance. The message migration can be accomplished using the Shovel plugin.

It is recommended to use dynamic shovels. More information about dynamic shovels is available here. Unlike static shovels, dynamic shovels are configured using [runtime parameters](https://www.rabbitmq.com/docs/parameters). They can be started and stopped at any time, including programmatically.

To use shovels, the Shovel plugin and the Shovel management plugin must be enabled on both a9s Messaging instances:

cf update-service my-messaging-service \
-c '{ "plugins": ["rabbitmq_shovel", "rabbitmq_shovel_management"] }'

Furthermore, to create shovels via the HTTP API, a user with the administrator role must be created.

cf create-service-key my-messaging-service my-key -c '{"roles": ["administrator"]}'

Also, an app must be created and bound to the Data Service. Then, the required port for the HTTP API can be tunneled to localhost. More information is available here.

cf ssh a9s-messaging-app -L 15672:<hostname-instance>:15672

With a shovel, all messages can be moved from a certain queue to a specific target queue on a different RabbitMQ cluster. While the shovel is active, all messages that are still in the queue and all future messages added to it will be moved to the other queue. A shovel can be created by the following command:

curl --insecure -u <administrator-user-name>:<administrator-user-password> -X PUT https://localhost:15672/api/parameters/shovel/<encoded-vhost>/<shovel-name> \
  -H "content-type: application/json" \
  -d @- <<EOF
{
  "value": {
    "src-protocol": "amqp091",
    "src-uri": "amqps://<administrator-user-name>:<administrator-user-password>@<hostname-instance>:5671/<encoded-vhost>?cacertfile=/var/vcap/jobs/rabbitmq/config/cacert.pem",
    "src-queue": "<source_queue_name>",
    "dest-protocol": "amqp091",
    "dest-uri": "amqps://<administrator-user-name>:<administrator-user-password>@<hostname-instance>:5671/<encoded-vhost>?cacertfile=/var/vcap/jobs/rabbitmq/config/cacert.pem",
    "dest-queue": "<target_queue_name>"
  }
}
EOF
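Because a malformed body makes the PUT fail with an HTTP error that can be hard to diagnose, it can be worth writing the parameter body to a file and validating it first. A sketch with placeholder hosts, credentials, and queue names:

```shell
# Shovel parameter body with placeholder values (hosts, credentials and the
# "orders" queue name are illustrative, not real endpoints)
cat > shovel.json <<'EOF'
{
  "value": {
    "src-protocol": "amqp091",
    "src-uri": "amqp://user:pass@old-host:5672/%2F",
    "src-queue": "orders",
    "dest-protocol": "amqp091",
    "dest-uri": "amqp://user:pass@new-host:5672/%2F",
    "dest-queue": "orders"
  }
}
EOF

# Validate the body before handing it to curl (with: -d @shovel.json)
python3 -m json.tool shovel.json > /dev/null && echo "shovel body is valid JSON"
```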

::: caution

  • To ensure that TLS is properly used, the location of the cacert file needs to be added as an argument to the URL. The default path on the RabbitMQ VM is cacertfile=/var/vcap/jobs/rabbitmq/config/cacert.pem.
  • The vhost must be specified for both the source and target queues. This vhost must be encoded after the port, before specifying any additional parameters in the URL (e.g. cacertfile). The default vhost / must be encoded as %2F. More information on encoding the default vhost is available in the RabbitMQ HTTP API Reference.

:::
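The vhost encoding rule above is plain URL (percent) encoding, so any URL-encoding helper produces the right value. A sketch using Python's urllib:

```shell
# URL-encode a vhost name for use in HTTP API paths; the default vhost "/"
# must become %2F
encoded=$(python3 -c 'import urllib.parse; print(urllib.parse.quote("/", safe=""))')
echo "$encoded"
```

The same helper works for non-default vhost names containing other reserved characters.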

The status of all shovels can be checked using the following command:

curl --insecure -u <administrator-user-name>:<administrator-user-password> -X GET https://localhost:15672/api/shovels/

The following command can be used to get more information about a particular queue in an a9s Messaging instance:

curl --insecure -u <administrator-user-name>:<administrator-user-password> -X GET https://localhost:15672/api/queues/<encoded-vhost>/<queue-name>
note

It is also possible to create shovels or view their status within the Management Dashboard. The shovel related settings are accessible inside the Admin Tab. More information about the Management Dashboard is available here.

The following command can be used to delete a shovel:

curl --insecure -u <administrator-user-name>:<administrator-user-password> -X DELETE https://localhost:15672/api/parameters/shovel/<encoded-vhost>/<shovel-name>
note

It is possible to use shovels to move messages between different queues or nodes within the same cluster. However, this is not recommended due to the significant overhead involved.

For moving messages between different queues, use a fanout exchange with appropriate bindings. For moving messages between different nodes in a cluster, use quorum queues.

However, to move messages between different vhosts, shovels should be used as vhosts are isolated logical environments that live within the same physical cluster.