DCT data backup, recovery, and migration

This method is only applicable to Kubernetes and OpenShift.

  • For Kubernetes, use the kubectl command prefix.

  • For OpenShift, use the oc command prefix. 

The following directions guide you through a Data Control Tower (DCT) backup, recovery, and migration. They can be used for a “lift and shift” or a “blue-green” deployment. However, the backup and recovery steps are standardized and can be applied in one-off scenarios as well.

This page refers to the two DCT servers as initial (source) and destination (target). 

Example deployment scenarios

An example of a “lift and shift” deployment could be:

  1. Back up the running initial server pre-upgrade.

  2. Upgrade the initial server to the desired version and confirm functionality.

  3. Back up the initial server post-upgrade.

  4. Install a new destination server at the same version as the upgraded initial server.

  5. Restore the initial post-upgrade backup to the destination server.

  6. Restart the destination services and confirm functionality.

  7. Shut down the initial server or maintain it for further testing.

An example of a “blue-green” deployment could be:

  1. Back up the running initial server.

  2. Install a new destination server at the same version as the initial server.

  3. Restore the initial backup to the destination server.

  4. Restart the destination services and confirm functionality.

  5. Upgrade the destination server to the newer version and confirm functionality.

  6. Redirect traffic to the destination server.

  7. Shut down the initial server or maintain it for a future blue-green deployment.

Prerequisites

  1. The initial (source) DCT server is up and running.

    1. Referred to as srv_source in any CLI commands.

  2. The destination (target) DCT server is installed in a separate Kubernetes cluster. 

    1. It must be the same version as the initial DCT server at the time the backup is taken.

    2. Referred to as srv_target in any CLI commands.

  3. Ability to share backup files from initial to destination environments.

  4. Sufficient access to perform various kubectl commands on both the initial and destination clusters.
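Before starting, it can help to confirm that the pods on both clusters are visible with your current access. The following is a minimal sketch, not part of the official procedure: the context names (src, tgt) and namespaces are placeholders for your own, and the KUBECTL variable is only an added convenience so the command can be dry-run with KUBECTL=echo.

```shell
#!/usr/bin/env bash
# Sketch: confirm pods are visible in both clusters before starting.
# Assumptions: "src"/"tgt" contexts and the namespace names are placeholders.
KUBECTL="${KUBECTL:-kubectl}"   # set KUBECTL=echo for a dry run

check_pods() {
  local context="$1" namespace="$2"
  "$KUBECTL" --context "$context" get pods --namespace "$namespace"
}

# Usage (against real clusters):
#   check_pods src srv-source-namespace
#   check_pods tgt srv-target-namespace
```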

Backup instructions

Ensure the initial DCT server is running so the backup completes successfully. Then run the following commands:

CODE
kubectl cp <srv_source-gateway-pod>:/data gateway_data --namespace <srv_source-namespace>
CODE
kubectl cp <srv_source-masking-pod>:/data masking_data --namespace <srv_source-namespace>
CODE
kubectl cp <srv_source-virtualization-app-pod>:/data virtualization_app_data --namespace <srv_source-namespace>
CODE
kubectl exec -it <srv_source-database-pod> --namespace <srv_source-namespace> -- pg_dumpall -U postgres > postgres_db_all.sql

This creates four artifacts: the gateway_data, masking_data, and virtualization_app_data directories, and the postgres_db_all.sql file:

  • gateway_data is the gateway pod’s persistent volume containing encryption keys and various other configuration information.

  • masking_data is the masking pod’s persistent volume containing various configuration information.

  • virtualization_app_data is the virtualization-app pod’s persistent volume containing various configuration information.

  • postgres_db_all.sql is a complete database backup.
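The four backup steps above can also be wrapped in one small script. This is a sketch under assumptions, not the official procedure: the pod and namespace names are placeholders for your srv_source values, the KUBECTL variable is only an added hook for dry runs (KUBECTL=echo), and the dump step uses -i (no TTY) so the SQL streams cleanly into the local file.

```shell
#!/usr/bin/env bash
# Sketch: run all four DCT backup steps in one pass.
# Assumptions: pod names below are placeholders for your srv_source pods.
KUBECTL="${KUBECTL:-kubectl}"   # set KUBECTL=echo for a dry run

backup_dct() {
  local ns="$1"   # srv_source namespace
  "$KUBECTL" cp srv-source-gateway-pod:/data gateway_data --namespace "$ns"
  "$KUBECTL" cp srv-source-masking-pod:/data masking_data --namespace "$ns"
  "$KUBECTL" cp srv-source-virtualization-app-pod:/data virtualization_app_data --namespace "$ns"
  # -i (no TTY) keeps the dump free of terminal control characters
  "$KUBECTL" exec -i srv-source-database-pod --namespace "$ns" \
    -- pg_dumpall -U postgres > postgres_db_all.sql
}

# Usage: backup_dct srv-source-namespace
```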

Restore instructions

Ensure the destination DCT server is running so the restore completes successfully. In addition, make the postgres_db_all.sql, gateway_data, masking_data, and virtualization_app_data files available to the destination cluster for the subsequent steps.

Then, run the following commands:

CODE
kubectl cp gateway_data <srv_target-namespace>/<srv_target-gateway-pod>:/data
CODE
kubectl cp masking_data <srv_target-namespace>/<srv_target-masking-pod>:/data
CODE
kubectl cp virtualization_app_data <srv_target-namespace>/<srv_target-virtualization-app-pod>:/data
CODE
kubectl cp postgres_db_all.sql <srv_target-namespace>/<srv_target-database-pod>:/tmp
CODE
for i in app bookmarks data-library jobs masking virtualization; do
   kubectl exec -it <srv_target-database-pod> --namespace <srv_target-namespace> -- psql -U postgres -c "drop database \"$i\" with (FORCE)"
done
CODE
kubectl exec -it <srv_target-database-pod> --namespace <srv_target-namespace> -- psql -U postgres -f /tmp/postgres_db_all.sql
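To confirm the restore landed before restarting the pods, you can list the databases the destination Postgres now contains. This is a sketch, not part of the official procedure: the pod and namespace arguments are placeholders, and the KUBECTL variable is only an added hook for dry runs (KUBECTL=echo).

```shell
#!/usr/bin/env bash
# Sketch: list non-template databases on the destination Postgres pod
# to confirm the restore succeeded. Pod/namespace names are placeholders.
KUBECTL="${KUBECTL:-kubectl}"   # set KUBECTL=echo for a dry run

list_dct_databases() {
  local pod="$1" ns="$2"
  "$KUBECTL" exec -i "$pod" --namespace "$ns" -- \
    psql -U postgres -At -c "SELECT datname FROM pg_database WHERE NOT datistemplate"
}

# Usage: list_dct_databases srv-target-database-pod srv-target-namespace
```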

Finally, delete and restart the DCT pods:

CODE
for i in $(kubectl get pods --namespace <srv_target-namespace> | awk '{print $1}' | grep -v jobs-cleaner | egrep "gateway|data-library|jobs|data-bookmarks|masking|virtualization-app"); do
   kubectl delete pod $i -n <srv_target-namespace>
done

After deleting the pods, Kubernetes automatically recreates them, and they pick up the restored database and gateway volume data.
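Rather than polling by hand, you can wait for the recreated pods to report Ready. This is a sketch under assumptions: the namespace is a placeholder, it waits on all pods in that namespace, and the KUBECTL variable is only an added hook for dry runs (KUBECTL=echo).

```shell
#!/usr/bin/env bash
# Sketch: block until the recreated DCT pods report Ready.
# Assumption: the namespace argument is a placeholder for srv_target's.
KUBECTL="${KUBECTL:-kubectl}"   # set KUBECTL=echo for a dry run

wait_for_dct() {
  local ns="$1"
  "$KUBECTL" wait --for=condition=Ready pods --all \
    --namespace "$ns" --timeout=300s
}

# Usage: wait_for_dct srv-target-namespace
```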

Additional environment configuration

The Helm chart’s values.yaml contains information specific to your environment, such as certificates, hostname, or resource limits. You can update this information before or after the migration process. The standard installation and configuration process can be followed to update these values.
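Applying an updated values.yaml typically means re-running the Helm upgrade. This is a sketch, not the official procedure: the release name (dct-services), chart path, and namespace are placeholders for your installation, and the HELM variable is only an added hook for dry runs (HELM=echo).

```shell
#!/usr/bin/env bash
# Sketch: re-apply environment-specific values via Helm.
# Assumptions: release name, chart path, and namespace are placeholders.
HELM="${HELM:-helm}"   # set HELM=echo for a dry run

apply_values() {
  local release="$1" chart="$2" ns="$3"
  "$HELM" upgrade "$release" "$chart" -f values.yaml --namespace "$ns"
}

# Usage: apply_values dct-services ./dct-chart srv-target-namespace
```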
