CloudFerro cloud region migration tips for CREODIAS

In this article, we will focus on a few important aspects of migration from older CREODIAS regions (WAW3-1 or CF2) to the new ones, such as WAW3-2, FRA1-2, and WAW4-1:

What We Are Going To Cover

  • Prepare a migration toolbox

  • Perform virtual machine migration

  • Copy data volumes

  • Reconnect EO Data access at new regions

Prerequisites

No. 1 Account

You need a CREODIAS hosting account with access to the Horizon interface: https://horizon.cloudferro.com.

No. 2 Credentials

You need to obtain appropriate credentials to be able to configure EO data access: How to get credentials used for accessing EODATA on a cloud VM on CREODIAS.

No. 3 The migration process

The VM migration process is described in detail in the CREODIAS documentation: OpenStack instance migration using command line on CREODIAS.

No. 4 rsync

You should know how to transfer and synchronize files using rsync: How to Upload and Synchronise Files with SCP / RSYNC?

No. 5 OpenStack CLI client

You need to have the OpenStack CLI client installed. One of the following articles should help:

To use the OpenStack CLI client to control the CREODIAS cloud, you need to prove your identity: How to activate OpenStack CLI access to CREODIAS cloud using one- or two-factor authentication

No. 6 Virtual machine

You need a virtual machine running Ubuntu 22.04. Other operating systems might also work but may require adjusting the commands. The virtual machine must have a floating IP attached.

The following articles cover how to create a virtual machine with a floating IP:

Migration Toolbox

To perform migration effectively, you need access to:

  • Horizon GUI (Prerequisite No. 1) as well as

  • OpenStack command line client (Prerequisite No. 5).

We also recommend creating a virtual machine at the destination region, dedicated to running migration commands (Prerequisite No. 6).

This instance should have the OpenStack command line interface client installed. Additionally, RC files or Application Credentials to access both source and destination cloud regions should be copied here. This instance can be based on a low-performance and low-cost flavor such as eo1.small or even eo1.xsmall.
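
As a rough sketch, the toolbox VM could be prepared as follows on Ubuntu 22.04; the RC file names source-region-openrc.sh and destination-region-openrc.sh are placeholders for the files downloaded from Horizon for each region:

# Install the OpenStack CLI client on the toolbox VM (Ubuntu 22.04)
sudo apt update
sudo apt install -y python3-openstackclient

# Keep one RC file (or Application Credential file) per region, downloaded from Horizon.
# Load the credentials of the region you currently want to operate on:
source source-region-openrc.sh         # placeholder name for the source region RC file
openstack image list                   # quick check that the CLI can reach the cloud

source destination-region-openrc.sh    # placeholder name for the destination region RC file
openstack image list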

The reasons for keeping this type of instance during migration are:

  • You can execute time-consuming tasks in batch mode.

  • If you attach a volume for storing instance images, the entire transfer of large files takes place within the high-performance internal CloudFerro infrastructure, without involving your Internet access (see the sketch below).
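
For example, assuming the toolbox VM is named migration-toolbox and the new volume appears as /dev/sdb (names and size below are only illustrative), such a transfer volume could be created and attached like this:

# With destination-region credentials loaded on the toolbox VM:
openstack volume create --size 200 migration-transfer-volume
openstack server add volume migration-toolbox migration-transfer-volume

# On the toolbox VM itself: create a file system on the new device and mount it
# (check with lsblk which device name the volume actually received)
sudo mkfs.ext4 /dev/sdb
sudo mkdir -p /mnt/transfer
sudo mount /dev/sdb /mnt/transfer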

VM Migration

The VM migration process is described in detail in Prerequisite No. 3.

It is worth mentioning a few aspects:

  1. The instance should be shut down before creating an image.

  2. Before shutting down the instance, consider whether your workflow requires stopping the machine manually according to specific requirements, for example a fixed order of shutting down services. If so, do not shut down the instance from the Horizon GUI or the command line before properly stopping all your services.

  3. Analyze or test how your software on this instance will behave when started without mounted volumes. If this causes issues, consider disabling autostart before shutting the instance down and creating the image.

  4. If your instance has attached volumes, note the exact order of attachments and the device names used. To that end, use the command

openstack volume list

and, on the instance, the lsblk and mount commands. This will make recreating the volume attachments significantly easier at the destination region (see the sketch below).
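
One possible way to record this information before the shutdown is sketched below; SOURCE_INSTANCE_NAME and the service unit name are placeholders:

# From a machine with the OpenStack CLI configured for the source region:
openstack volume list
openstack server show SOURCE_INSTANCE_NAME

# On the source instance itself, record devices, file systems and mount points:
lsblk -o NAME,SIZE,FSTYPE,MOUNTPOINT | tee ~/volume-layout.txt
mount | grep '^/dev' | tee -a ~/volume-layout.txt

# If your software cannot start without the volumes (see point 3 above), disable its autostart first:
sudo systemctl disable my-data-service.service    # hypothetical unit name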

Volumes migration

There are two possible ways of migrating volumes: one using the rsync command and the other using the dd command.

For technical guidance on how to use rsync, see Prerequisite No. 4.

Using rsync
  • Create identical volumes at the destination.

  • Attach them to the migrated instance.

  • Recreate partitions and file systems exactly as in the source region.

  • Copy the data using the rsync command (see the sketch below).
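
A minimal sketch of this path, assuming the source data lives under /data, the new volume appears as /dev/sdb on the migrated instance and holds a single ext4 file system (all names, sizes and paths below are placeholders):

# On the toolbox VM, with destination-region credentials loaded:
openstack volume create --size 100 data-volume-copy
openstack server add volume MIGRATED_INSTANCE data-volume-copy

# On the migrated instance: recreate the file system and mount it
sudo mkfs.ext4 /dev/sdb
sudo mkdir -p /mnt/newdata
sudo mount /dev/sdb /mnt/newdata

# On the source instance: copy the data over SSH; --rsync-path lets the remote side write as root
sudo rsync -aHAX --numeric-ids --info=progress2 \
  -e "ssh -i ~/.ssh/destination_private_key" \
  --rsync-path="sudo rsync" \
  /data/ eouser@DESTINATION_IP:/mnt/newdata/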

Pros

The rsync command synchronizes all data and verifies the transfer.

Cons

Transferring larger volumes with this method may be time-consuming.

Using dd
  • Create identical volumes at the destination and attach them to the migrated instance without recreating partitions and file systems.

  • Verify the following:
    - The source machine has Internet access.
    - The destination has a floating IP and is accessible via SSH.
    - The private key to access the destination is copied to the source directory $HOME/.ssh/.

  • Unmount both source and destination volumes.

  • Execute the following command on the source machine:

sudo dd if=VOLUME_DEVICE_AT_SOURCE bs=10M conv=fsync status=progress | gzip -c -9 | \
ssh -i .ssh/DESTINATION_PRIVATE_KEY eouser@DESTINATION_IP 'gzip -d | sudo dd of=VOLUME_DEVICE_AT_DESTINATION bs=10M'

For large volumes, this is usually much faster than using rsync.

If volumes are attached as /dev/sdb on both machines, the command would look like:

sudo dd if=/dev/sdb bs=10M conv=fsync status=progress | gzip -c -9 | \
ssh -i .ssh/destination_private_key eouser@Destination_ip 'gzip -d | sudo dd of=/dev/sdb bs=10M'
  • After successful execution of this command, check whether the entire partition table was copied by running lsblk on the destination instance. You should see exactly the same partitions as on the source volume.

  • Finally, mount partitions at the same points as at the source.
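
Optionally, before mounting, the copy can be verified end to end by comparing checksums of both devices, assuming the destination volume has exactly the same size as the source one (this reads the whole volume, so it may take a while; /dev/sdb is again only an example):

# On the source instance:
sudo sha256sum /dev/sdb
# On the destination instance:
sudo sha256sum /dev/sdb
# The two checksums should be identical.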

How to update EO Data mounting when a VM was migrated from CF2 or WAW3-1 to WAW3-2 or FRA1-2

New cloud regions such as WAW3-2 and FRA1-2, or any future ones, will have EO Data access configured differently than the older CF2 or WAW3-1 regions.

Differences between these regions

  • Different endpoint names: data.cloudferro.com at CF2 and WAW3-1 versus eodata.cloudferro.com at the newer regions (see the mount unit files below).

  • S3fs authorization is required at WAW3-2, WAW4-1 and FRA1-2 regions.

    This policy will be continued for any new region provided by CloudFerro.

    Credentials for s3fs are automatically created for VMs in WAW3-2, FRA1-2 or WAW4-1 clouds. For technical details, see Prerequisite No. 2.

In the next part of this article, we will use migration from CF2 to WAW3-2 as an example. The main difference is in the file /etc/systemd/system/eodata.mount.

Content of /etc/systemd/system/eodata.mount created with VM at CF2

[Unit]
Before=remote-fs.target

[Mount]
Where=/eodata
What=s3fs#DIAS
Type=fuse
Options=noauto,_netdev,allow_other,use_path_request_style,uid=0,umask=0222,mp_umask=0222,mp_umask=0222,multipart_size=50,gid=0,url=http://data.cloudferro.com,max_stat_cache_size=60000,list_object_max_keys=10000

[Install]
WantedBy=multi-user.target

Content of /etc/systemd/system/eodata.mount created with VM at WAW3-2

[Unit]
Before=remote-fs.target
After=dynamic-vendor-call.service
Requires=network-online.target

[Mount]
Where=/eodata
What=s3fs#eodata
Type=fuse
Options=_netdev,allow_other,use_path_request_style,uid=0,umask=0222,mp_umask=0222,mp_umask=0222,multipart_size=50,gid=0,url=https://eodata.cloudferro.com,passwd_file=/etc/passwd-s3fs-eodata,max_stat_cache_size=60000,list_object_max_keys=10000,sigv2

[Install]
WantedBy=multi-user.target

Preparation to update EO Data mount

Before we start the reconfiguration of EO Data access, it is important to verify that networks are properly configured at the destination project:

  1. Execute the command:

openstack network list

You should get an output table containing a minimum of 3 networks:

“external”
YOUR_PROJECT_NAME
“eodata”
  2. Execute the command:

openstack subnet list

You should get an output table containing a minimum of 4 subnets:

YOUR_PROJECT_NAME
“eodata1-subnet”
“eodata2-subnet”
“eodata3-subnet”
  3. After the creation of the migrated instance, check whether it was added to the necessary networks:

openstack server show -c addresses INSTANCE_NAME

You should get an output table containing the following networks:

YOUR_PROJECT_NAME
“eodata”
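
If the migrated instance turns out not to be attached to the eodata network, it can usually be added afterwards with the OpenStack CLI; INSTANCE_NAME below is a placeholder:

openstack server add network INSTANCE_NAME eodata
# Verify the result:
openstack server show -c addresses INSTANCE_NAME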

Manual procedure for EOData mount update

  1. Log in via SSH to the migrated VM.

  2. Edit the file containing the EOData mount configuration:

cd /etc/systemd/system
sudo YOUR_EDITOR_OF_CHOICE eodata.mount
  3. Replace the CF2 content of this file with the WAW3-2 content shown above in the “Differences between these regions” section.

  4. Save this file.

  5. Execute

cd /etc
  6. Execute

curl http://169.254.169.254/openstack/latest/vendor_data2.json \
| jq .nova.vmconfig.mountpoints[0]

and save or note the values of:

  • s3_access_key
  • s3_secret_key
  7. Execute

sudo YOUR_EDITOR_OF_CHOICE passwd-s3fs-eodata
  8. Paste the saved values here in the following format:

s3_access_key:s3_secret_key
  9. Execute

sudo chmod go-rwx passwd-s3fs-eodata
  10. Activate EOData access by restarting the VM:

sudo reboot
  11. Execute tests verifying that your services using EOData work properly.

After taking those steps, the VM migrated from CF2 should be able to access EOData as before the migration.
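
A quick way to check that the mount is active again after the reboot could look like this:

systemctl status eodata.mount
df -h /eodata
ls /eodata | head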

EOData mount update automation

This entire procedure can be automated by executing a script remotely with SSH:

  • Save the script listed below to a file named eodata_mount_update.sh.

  • Execute the command:

ssh -i .ssh/YOUR_PRIVATE_KEY -t eouser@IP_OF_VM "sudo bash -s" < eodata_mount_update.sh

Automation Script

This is the script to be saved as eodata_mount_update.sh and executed remotely via SSH:

#!/bin/bash
# Overwrite /etc/systemd/system/eodata.mount with the configuration used at WAW3-2, FRA1-2 and WAW4-1
cat <<EOF > /etc/systemd/system/eodata.mount
[Unit]
Before=remote-fs.target
After=dynamic-vendor-call.service
Requires=network-online.target

[Mount]
Where=/eodata
What=s3fs#eodata
Type=fuse
Options=_netdev,allow_other,use_path_request_style,uid=0,umask=0222,mp_umask=0222,mp_umask=0222,multipart_size=50,gid=0,url=https://eodata.cloudferro.com,passwd_file=/etc/passwd-s3fs-eodata,max_stat_cache_size=60000,list_object_max_keys=10000,sigv2

[Install]
WantedBy=multi-user.target
EOF

# Read the s3fs credentials provided by the cloud through the metadata service
S3_ACCESS=$(curl -s http://169.254.169.254/openstack/latest/vendor_data2.json | jq -r '.nova.vmconfig.mountpoints[0].s3_access_key')
S3_SECRET=$(curl -s http://169.254.169.254/openstack/latest/vendor_data2.json | jq -r '.nova.vmconfig.mountpoints[0].s3_secret_key')

# Store them in the password file referenced by the mount unit and restrict its permissions
echo "$S3_ACCESS":"$S3_SECRET" > /etc/passwd-s3fs-eodata
chmod go-rwx /etc/passwd-s3fs-eodata

# Wait briefly, then reboot the VM to activate the new EOData mount
sleep 30s

reboot