DCT data backup, recovery, and migration
This method is only applicable to Kubernetes and OpenShift.
For Kubernetes, use the kubectl command prefix.
For OpenShift, use the oc command prefix.
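For example, the same pod listing works with either prefix (the <dct-namespace> placeholder below is illustrative):
kubectl get pods --namespace <dct-namespace>
oc get pods --namespace <dct-namespace>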
The following directions guide you through a Data Control Tower (DCT) backup, recovery, and migration. They can be used for a “lift and shift” or a “blue-green” deployment. The backup and recovery steps themselves are standardized and can also be applied in one-off scenarios.
This page refers to the two DCT servers as initial (source) and destination (target).
Example deployment scenarios
An example of a “lift and shift” deployment could be:
Back up the running initial server pre-upgrade.
Upgrade the initial server to the desired version and confirm functionality.
Back up the initial server post-upgrade.
Install a new destination server at the same version as the post-upgrade initial server.
Restore the initial post-upgrade backup to the destination server.
Restart the destination services and confirm functionality.
Shut down the initial server or maintain it for further testing.
An example of a “blue-green” deployment could be:
Back up the running initial server.
Install a new destination server at the same version as the initial server.
Restore the initial backup to the destination server.
Restart the destination services and confirm functionality.
Upgrade the destination server to the newer version and confirm functionality.
Redirect traffic to the destination server.
Shut down the initial server or maintain it for a future blue-green deployment.
Prerequisites
The initial (source) DCT server is up and running. It is referred to as srv_source in any CLI commands.
The destination (target) DCT server is installed in a separate Kubernetes cluster. It is the same version as the initial DCT server at the time the backup is taken. It is referred to as srv_target in any CLI commands.
The ability to share backup files from the initial to the destination environment.
Sufficient access to perform various kubectl commands on both the initial and destination clusters.
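As a quick sanity check, kubectl auth can-i can confirm that your account may run commands in and copy files from the pods (kubectl cp is implemented on top of exec); this verification is a suggestion, not part of the documented procedure:
kubectl auth can-i create pods/exec --namespace <srv_source-namespace>
kubectl auth can-i delete pods --namespace <srv_target-namespace>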
Backup instructions
Ensure the initial DCT server is running so that the backup succeeds. Then, run the following commands:
kubectl cp <srv_source-gateway-pod>:/data gateway_data --namespace <srv_source-namespace>
kubectl cp <srv_source-masking-pod>:/data masking_data --namespace <srv_source-namespace>
kubectl cp <srv_source-virtualization-app-pod>:/data virtualization_app_data --namespace <srv_source-namespace>
kubectl exec -it <srv_source-database-pod> --namespace <srv_source-namespace> -- pg_dumpall -U postgres > postgres_db_all.sql
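The <srv_source-...-pod> placeholders must be replaced with actual pod names. One way to look them up, assuming the pod names contain the component substrings used elsewhere on this page, is:
kubectl get pods --namespace <srv_source-namespace> | egrep "gateway|masking|virtualization-app|database"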
This will create four files: gateway_data, masking_data, virtualization_app_data, and postgres_db_all.sql:
gateway_data is the gateway pod’s persistent volume containing encryption keys and various other configuration information.
masking_data is the masking pod’s persistent volume containing various configuration information.
virtualization_app_data is the virtualization-app pod’s persistent volume containing various configuration information.
postgres_db_all.sql is a complete database backup.
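Before moving on, it can be worth confirming that the copies are present and non-empty; this check is a suggestion rather than part of the documented procedure:
du -sh gateway_data masking_data virtualization_app_data postgres_db_all.sql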
Restore instructions
Ensure the destination DCT server is running so that the restore succeeds. In addition, make the postgres_db_all.sql, gateway_data, masking_data, and virtualization_app_data files available to the destination cluster for the subsequent steps.
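How the files are transferred depends on your environment. For example, if a host with access to the destination cluster is reachable over SSH, an rsync along these lines would work (the user, host, and target directory here are hypothetical):
rsync -avz gateway_data masking_data virtualization_app_data postgres_db_all.sql user@target-host:/tmp/dct-backup/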
Then, run the following commands:
kubectl cp gateway_data <srv_target-namespace>/<srv_target-gateway-pod>:/data
kubectl cp masking_data <srv_target-namespace>/<srv_target-masking-pod>:/data
kubectl cp virtualization_app_data <srv_target-namespace>/<srv_target-virtualization-app-pod>:/data
kubectl cp postgres_db_all.sql <srv_target-namespace>/<srv_target-database-pod>:/tmp
for i in app bookmarks data-library jobs masking virtualization
do
  kubectl exec -it <srv_target-database-pod> --namespace <srv_target-namespace> -- psql -U postgres -c "drop database \"$i\" with (FORCE)"
done
kubectl exec -it <srv_target-database-pod> --namespace <srv_target-namespace> -- psql -U postgres -f /tmp/postgres_db_all.sql
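To confirm the restore landed, you can list the databases on the destination and check that the six databases dropped above are present again; this verification is an addition to the documented steps:
kubectl exec -it <srv_target-database-pod> --namespace <srv_target-namespace> -- psql -U postgres -c "\l"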
Finally, delete and restart the DCT pods:
for i in $(kubectl get pods --namespace <srv_target-namespace> | awk '{print $1}' | grep -v jobs-cleaner | egrep "gateway|data-library|jobs|data-bookmarks|masking|virtualization-app")
do
  kubectl delete pod $i -n <srv_target-namespace>
done
After deleting the pods, Kubernetes will automatically recreate them, and the new pods will pick up the restored database backup and gateway volume data.
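You can watch the replacement pods return to a Running state before confirming functionality:
kubectl get pods --namespace <srv_target-namespace> --watch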
Additional environment configuration
The Helm chart’s values.yaml contains information specific to your environment, such as certificates, hostname, or resource limits. You can update this information before or after the migration process. Follow the standard installation and configuration process to update these values.
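For example, updated values are typically applied with a helm upgrade; the release and chart names below are placeholders for whatever your installation uses:
helm upgrade <release-name> <chart-path> -f values.yaml --namespace <srv_target-namespace>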