Failover steps for object configuration if you are using local authentication for object
Use the following steps on a protocol node in the secondary cluster to fail over the object configuration data when you are using local authentication for object.
- Cluster hostname
  - Specifies the DNS value that returns a CES IP address from the pool of addresses in the secondary cluster.
- Object database node
  - Specifies the CES IP address that is configured to run the postgresql-obj database for object services. You can find this value as the address designated as the object_database_node in the output of the mmces address list command:
    mmces address list
    Address        Node      Group   Attribute
    --------------------------------------------------------------
    10.0.100.115   vwnode3   none    object_database_node
    10.0.100.116   vwnode3   none    object_singleton_node
The following object steps need to be run on the node that is designated as object_database_node in the secondary cluster. Running these object steps on that node makes sure that the postgresql-obj and Keystone servers can connect during this configuration process.
- Run the following command to stop the object protocol services:
mmces service stop OBJ --all
- Make two changes in the preserved Cluster Configuration
Repository (CCR) configuration to update it for the DR environment:
- Edit the keystone.conf file to change the database connection address to the object_database_node of the secondary cluster.
  Change this:
  [database]
  connection = postgresql://keystone:password@192.168.56.1/keystone
  to this:
  [database]
  connection = postgresql://keystone:password@192.168.1.3/keystone
Note: If the mmcesdr command is used to save the protocol cluster configuration, then the preserved copy of the keystone.conf file is at the following location:
CES_shared_root_mount_point/.async_dr/Failover_Config/Object_Config/latest/ccr_files/keystone.conf
You can edit the file directly to make this change or use the openstack-config command. Retrieve the current value by using the get option, and then update it by using the set option:
openstack-config --get keystone.conf database connection
postgresql://keystone:passw0rd@192.168.56.1/keystone
openstack-config --set keystone.conf database connection \
postgresql://keystone:passw0rd@192.168.1.3/keystone
openstack-config --get keystone.conf database connection
postgresql://keystone:passw0rd@192.168.1.3/keystone
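If neither openstack-config nor an interactive editor is convenient, the same change can be scripted. The following is a minimal sketch that rewrites the connection address in a scratch copy of keystone.conf; the file path and both IP addresses are example values, not the ones from your clusters.

```shell
# Create a scratch copy of the [database] section (example content only).
cat > /tmp/keystone_demo.conf <<'EOF'
[database]
connection = postgresql://keystone:passw0rd@192.168.56.1/keystone
EOF

# Point the connection string at the secondary cluster's
# object_database_node address (example: 192.168.1.3).
sed -i 's|@192\.168\.56\.1/|@192.168.1.3/|' /tmp/keystone_demo.conf

# Confirm the change.
grep '^connection' /tmp/keystone_demo.conf
```

Run the same substitution against the preserved copy of keystone.conf only after verifying it on a scratch copy, because an incorrect connection string prevents Keystone from reaching Postgres.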
- The ks_dns_name variable is one of the object-related variables that was originally preserved from CCR. Modify the value of this variable to the cluster hostname of the secondary cluster.
  Note: If the mmcesdr command is used to save the protocol cluster configuration, then the preserved copy of the ks_dns_name variable is a line in the following file:
  CES_shared_root_mount_point/.async_dr/Failover_Config/Object_Config/latest/ccr_vars/file_for_ccr_variables.txt
  Change the value of the variable in this preserved copy of the file.
- [Optional] If spectrum-scale-localRegion.conf exists from CCR, change the cluster hostname and cluster ID properties to the cluster hostname and cluster ID as shown in the output of the mmlscluster command.
- Restore the Postgres database information to the shared root directory. The directory must be cleaned out before the archive is restored. You can do this by running commands similar to the following, assuming that the directory was saved as a .tar file or .zip file when it was backed up:
- Run the following command to delete the old Postgres data:
rm -rf <shared_root_location>/object/keystone/*
- Run the following command to verify that the shared root directory is empty:
ls <shared_root_location>/object/keystone
- Run the following command to restore the current Postgres database:
tar xzf <tar_file_name>.gz -C <shared_root_location>
- Run the following command to delete the process status file from the primary:
rm -rf <shared_root_location>/object/keystone/postmaster.pid
- Run the following command to list the Postgres files:
ls <shared_root_location>/object/keystone
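The clean-and-restore sequence above can be rehearsed safely against scratch directories before it is run on the real shared root. In this sketch, /tmp/shared_root_demo stands in for the shared root mount point and the archive name is invented:

```shell
set -e
root=/tmp/shared_root_demo

# Simulate a backed-up Postgres directory and its archive
# (stand-ins for the real shared root and backup file).
mkdir -p "$root/object/keystone"
echo old > "$root/object/keystone/postmaster.pid"
tar czf /tmp/keystone_backup.tar.gz -C "$root" object

# 1. Delete the old Postgres data and verify the directory is empty.
rm -rf "$root/object/keystone"/*
ls "$root/object/keystone"

# 2. Restore the archive into the shared root.
tar xzf /tmp/keystone_backup.tar.gz -C "$root"

# 3. Remove the process status file carried over from the primary.
rm -rf "$root/object/keystone/postmaster.pid"
ls "$root/object/keystone"
```

Deleting postmaster.pid matters: a stale PID file from the primary can prevent postgresql-obj from starting on the secondary.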
- Restore the object configuration CCR files except objRingVersion, including keystone.conf with the modification for object_database_node, with a command similar to the following:
  mmccr fput <file> <location>/<file>
- If object policies are present, restore the object policy-related CCR files.
- Restore the object configuration CCR file objRingVersion.
- Restore the object configuration CCR variables, including ks_dns_name
with the modification for cluster hostname, with a command similar to the following:
mmccr vput name value
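When many configuration files were preserved to a single directory, the fput calls can be looped. The following dry-run sketch only prints the mmccr commands it would run; the directory path and file names are examples. Remove the echo to execute the commands:

```shell
# Example directory of preserved CCR files (hypothetical names).
cfgdir=/tmp/ccr_files_demo
mkdir -p "$cfgdir"
touch "$cfgdir/keystone.conf" "$cfgdir/objRingVersion" "$cfgdir/swift.conf"

for f in "$cfgdir"/*; do
  name=$(basename "$f")
  # objRingVersion is restored in a later step, so skip it here.
  [ "$name" = "objRingVersion" ] && continue
  echo mmccr fput "$name" "$f"
done | tee /tmp/ccr_fput_cmds.txt
```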
- Run the following commands to start the Postgres database and verify that it is running:
  systemctl start postgresql-obj
  sleep 5
  systemctl status postgresql-obj
  These commands generate output similar to the following:
  postgresql-obj.service - postgresql-obj database server
     Loaded: loaded (/etc/systemd/system/postgresql-obj.service; disabled)
     Active: active (running) since Thu 2015-05-28 19:00:17 EDT; 20h ago
- Run the following command to load openrc definitions:
source /root/openrc
- Run the following command to list the value of the api_v3 pipeline. Save the output of this command into the variable <savedAPI_V3Pipeline>:
  mmobj config list --ccrfile keystone-paste.ini --section pipeline:api_v3 --property pipeline
  If the value of the api_v3 pipeline does not contain the string admin_token_auth, do the following:
  - Make a note that the api_v3 pipeline had to be updated.
  - Generate the new value of the api_v3 pipeline by inserting the string admin_token_auth directly after the string token_auth in the saved copy of the api_v3 pipeline.
  - Run the following command to change the api_v3 pipeline value, where <newAPI_V3Pipeline> is the updated value from the previous step:
    mmobj config change --ccrfile keystone-paste.ini --section pipeline:api_v3 --property pipeline --value <newAPI_V3Pipeline>
  - Run the following command to list the updated api_v3 pipeline value and verify that the string admin_token_auth is present:
    mmobj config list --ccrfile keystone-paste.ini --section pipeline:api_v3 --property pipeline
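The string insertion described above can be done with sed rather than by hand. This sketch uses an example pipeline value, not necessarily the one in your keystone-paste.ini:

```shell
# Example api_v3 pipeline value saved from mmobj config list
# (your actual pipeline may differ).
saved='healthcheck cors sizelimit url_normalize request_id build_auth_context token_auth json_body ec2_extension_v3 s3_extension service_v3'

# Insert admin_token_auth directly after token_auth.
new=$(printf '%s' "$saved" | sed 's/token_auth/token_auth admin_token_auth/')
printf '%s\n' "$new" | tee /tmp/api_v3_pipeline.txt
```

The resulting value is what you pass as <newAPI_V3Pipeline> to mmobj config change.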
- Save the restored CCR copy of the keystone.conf file to the local location by using a command similar to the following:
  mmccr fget keystone.conf /etc/keystone/keystone.conf
  Also, update the owner and the group of this file by using the following command:
  chown keystone:keystone /etc/keystone/keystone.conf
  Note: If SSL is enabled, SSL certificates need to be in place when you save the keystone.conf file from another cluster.
- If the DEFAULT admin_token is set, run the following command to save its current value:
openstack-config --get /etc/keystone/keystone.conf DEFAULT admin_token
If the command returns a value, save it. The value needs to be restored later.
- In the keystone.conf file, run the following command to set
admin_token to ADMIN by using
openstack-config:
openstack-config --set /etc/keystone/keystone.conf DEFAULT admin_token ADMIN
- Run the following commands to set the required environment variables:
  export OS_TOKEN=ADMIN      # The value from admin_token
  export OS_URL="http://127.0.0.1:35357/v3"
- Run the following command to start Keystone services and get the list of endpoint
definitions:
systemctl start httpd
sleep 5
openstack endpoint list
These commands generate output similar to the following:
+-----------+------+--------------+--------------+---------+-----------+--------------------------------------------------------------+
| ID        |Region| Service Name | Service Type | Enabled | Interface | URL                                                          |
+-----------+------+--------------+--------------+---------+-----------+--------------------------------------------------------------+
| c36e..9da5| None | keystone     | identity     | True    | public    | http://specscaleswift.example.com:5000/                      |
| f4d6..b040| None | keystone     | identity     | True    | internal  | http://specscaleswift.example.com:35357/                     |
| d390..0bf6| None | keystone     | identity     | True    | admin     | http://specscaleswift.example.com:35357/                     |
| 2e63..f023| None | swift        | object-store | True    | public    | http://specscaleswift.example.com:8080/v1/AUTH_%(tenant_id)s |
| cd37..9597| None | swift        | object-store | True    | internal  | http://specscaleswift.example.com:8080/v1/AUTH_%(tenant_id)s |
| a349..58ef| None | swift        | object-store | True    | admin     | http://specscaleswift.example.com:8080                       |
+-----------+------+--------------+--------------+---------+-----------+--------------------------------------------------------------+
- Update the hostname in the URL value of the endpoint definitions from the endpoint list. The values in the endpoint table might have the cluster hostname (such as ces1) from the primary system. They need to be updated to the cluster hostname in the DR environment. In some environments, the cluster hostname is the same between the primary and secondary clusters. If that is the case, skip this step.
- Delete the existing endpoints with the incorrect cluster hostname. For each of the
endpoints, use the ID value to delete the endpoint. Run the following command to delete each of the
six endpoints:
openstack endpoint delete e149
- Run the following commands to re-create the endpoints with the cluster hostname of the
secondary cluster:
The CHN variable in the following commands is the cluster hostname for the secondary cluster.
openstack endpoint create identity public "http://$CHN:5000/v3"
openstack endpoint create identity internal "http://$CHN:35357/v3"
openstack endpoint create identity admin "http://$CHN:35357/v3"
openstack endpoint create object-store public "http://$CHN:8080/v1/AUTH_%(tenant_id)s"
openstack endpoint create object-store internal "http://$CHN:8080/v1/AUTH_%(tenant_id)s"
openstack endpoint create object-store admin "http://$CHN:8080"
- Run the following command to verify that the endpoints are now using the correct
cluster:
openstack endpoint list
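The six create commands can also be generated from a small table, which makes it harder to mistype one of the URLs. This dry-run sketch only prints the commands; the CHN value is an example hostname, and you would remove the echo to execute them:

```shell
# Example cluster hostname for the secondary cluster.
CHN=ces2.example.com

# service type, interface, and URL template for each endpoint;
# HOST is replaced with the cluster hostname.
for spec in \
  'identity public http://HOST:5000/v3' \
  'identity internal http://HOST:35357/v3' \
  'identity admin http://HOST:35357/v3' \
  'object-store public http://HOST:8080/v1/AUTH_%(tenant_id)s' \
  'object-store internal http://HOST:8080/v1/AUTH_%(tenant_id)s' \
  'object-store admin http://HOST:8080'
do
  set -- $spec
  url=$(printf '%s' "$3" | sed "s/HOST/$CHN/")
  echo openstack endpoint create "$1" "$2" "$url"
done | tee /tmp/endpoint_cmds.txt
```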
- If the api_v3 pipeline had to be updated previously, run the following command to return it to its original value, where <savedAPI_V3Pipeline> is the value of the api_v3 pipeline that you saved earlier:
  mmobj config change --ccrfile keystone-paste.ini --section pipeline:api_v3 --property pipeline --value <savedAPI_V3Pipeline>
- Depending on whether a value for the DEFAULT admin_token was previously set, do one of the following:
  - If a value for the DEFAULT admin_token was previously set, run the following command to reset it to that value:
    openstack-config --set /etc/keystone/keystone.conf DEFAULT admin_token ${currentAdminToken}
  - If there was no previous value for the DEFAULT admin_token, run the following command to delete the value that was set to convert the endpoint definitions:
    openstack-config --del /etc/keystone/keystone.conf DEFAULT admin_token
- Run the following commands to stop the services that were manually started earlier and clean up:
  systemctl stop httpd
  systemctl stop postgresql-obj
  unset OS_URL
  unset OS_TOKEN
- Run the following command to start the object protocol services:
mmces service start OBJ --all
Failover steps for object configuration if you are not using local authentication
Use the following steps on a protocol node in the secondary cluster to fail over the object configuration data when you are not using local authentication for object.
- Stop the object protocol services using the following command:
mmces service stop OBJ --all
- Restore the object configuration CCR files except objRingVersion with a command similar to the following:
mmccr fput <file> <location>/<file>
- If object policies are present, restore the object policy-related CCR files (that is, the *.builder and *.ring.gz files).
- Restore the object configuration CCR file objRingVersion.
- Restore the object configuration CCR variables with a command similar to the
following:
mmccr vput name value
- Start the object protocol services using the following command:
mmces service start OBJ --all