Topology changes (add or drop members)

A member topology refers to the members that are part of a Db2® instance. In partitioned database environments and in Db2 pureScale® Feature environments, the members in the member topology are those listed in the db2nodes.cfg configuration file. For a Db2 Enterprise Server Edition instance, the member topology contains a single member with member identifier 0.

Starting in Version 10.5, you can add members online, without having to bring down the entire Db2 pureScale cluster. In addition, you can specify a member identifier when you add a member.

Adding new members online

As in previous releases, you must add a member to a Db2 pureScale instance from a host that is already part of the instance, either a member host or a cluster caching facility (CF) host.

You add the new member by using the db2iupdt command. When you add a member to a host that already runs one or more members, the other members can be in either the STARTED or the STOPPED state. However, when you add a member to a CF host, as in previous releases, the CF must be stopped. Stop the CF by using the db2stop command and specifying the CF identifier.
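As an illustrative sketch, the sequence might look like the following. The instance name db2inst1, the host name hostB, and the netname hostB-ib0 are placeholders; verify the db2iupdt options against the command reference for your release and platform.

```shell
# If you are adding the member to a CF host, stop that CF first.
# 128 is the identifier of the primary CF in a default configuration.
db2stop cf 128

# From a host that is already part of the instance, add a member
# on host hostB, using hostB-ib0 as its cluster interconnect netname.
db2iupdt -add -m hostB -mnet hostB-ib0 db2inst1
```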

Note: As part of planning to add new members, ensure that the CF has enough worker threads to accommodate the new members. To check the current value, run db2 get database manager configuration | grep CF_NUM_WORKERS
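For example, to check and, if necessary, raise the number of CF worker threads; the value 8 is purely illustrative and must be sized for your configuration:

```shell
# Display the current CF_NUM_WORKERS setting
db2 get database manager configuration | grep -i cf_num_workers

# Increase the number of CF worker threads (8 is an illustrative value)
db2 update dbm cfg using CF_NUM_WORKERS 8
```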

After the new member is added, the instance is in the same state that it was in before the db2iupdt command was issued: members and CFs are not explicitly started or stopped as part of the add operation. In addition, after an add member operation (either online or offline), a recoverable database is not placed in backup pending state. Therefore, starting in Db2 10.5, you are not required to take a full offline database backup before the database is usable. Because the db2iupdt command does not start the newly added member, you can first make any member-specific database or database manager configuration changes that are required; if no member-specific configuration changes are required, the new member can be started immediately. In either case, after a new member is added, you must start it manually by using the db2start command.
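For example, assuming the new member was added with member identifier 2 (a placeholder):

```shell
# Start only the newly added member, after any member-specific
# configuration changes have been made
db2start member 2
```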

If you use member subsets and a subset is defined for the database alias, you must add the new member to the subset manually. After you add the new member to the subset, clients are directed to that member. Adding a member to a subset is dynamic and takes effect immediately. If you use workload balancing (WLB) without a subset, clients are directed to the new member immediately.
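Member subsets are managed through the WLM stored procedures. As a sketch, adding a hypothetical member 2 to a hypothetical subset named SUBSET1 might look like the following; verify the WLM_ALTER_MEMBER_SUBSET procedure signature for your release:

```shell
# Add member 2 to the subset; the change takes effect immediately
db2 "CALL SYSPROC.WLM_ALTER_MEMBER_SUBSET('SUBSET1', NULL, '( ADD 2 )')"
```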

For a database to be usable on the new member, the database must be activated on the new member for the first time. This activation can be done either:
  • Implicitly by initiating a first user connection to the database on the new member with the CONNECT TO command or the RESTART DATABASE command, or
  • Explicitly by issuing the ACTIVATE DATABASE command.
After the database is activated on the newly added member, a database or table space rollforward operation can be issued. After the newly added member is started, clients can connect automatically to the new member.
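For example, issued on the newly added member (the database name mydb is a placeholder):

```shell
# Explicit activation
db2 activate database mydb

# Or implicit activation through a first connection
db2 connect to mydb
```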
These operations are restricted during an online add member operation:
  • To maintain quorum and to allow for the change of the resource model, N/2 + 1 hosts must be currently online, where N is the total number of hosts in the cluster (including CFs and members).
  • While a db2iupdt -add -m operation is in progress, a global start or stop of the instance fails with an error message.
  • When you run a db2iupdt -add -m operation, these database operations are not compatible with the instance level addition of a member:
    • CATALOG DATABASE command
    • CREATE DATABASE command
    • DROP DATABASE command
    • BACKUP DATABASE command (database or table space level)
    • RESTORE DATABASE command (database or table space level)
    • RECOVER DATABASE command
    • SET WRITE SUSPEND command (I/O is suspended)
    If any of these commands is run while a db2iupdt -add -m operation is in progress, an error is returned.
These operations are restricted during a first database activation on a newly added member. The first database activation on a newly added member can be done either:
  • Implicitly by a CONNECT TO command or a RESTART DATABASE command, or
  • Explicitly by an ACTIVATE DATABASE command.
The first database activation on a newly added member is not compatible with these commands:
  • Online database backup (BACKUP DATABASE command)
  • Online table space backup
  • Online table space restore
If one of these incompatible commands is in progress, the database activation attempt results in an error. If the first database activation on a newly added member is in progress, these commands are blocked until the activation completes.

Topology changes that require a database backup

Some topology changes make transaction logs and table space backups that are created before the topology change incompatible with the database after the topology change:
  • A drop member operation.
  • A restore of a Db2 pureScale database to a Db2 pureScale instance where the source member topology includes members that are not included in the target member topology.
  • A restore of a Db2 Enterprise Server Edition database to a Db2 pureScale instance.
The rollforward utility cannot roll forward through these topology changes. After one of these topology changes occurs, the database backup pending state is set when the database is activated. To ensure the recoverability of the database, you must take a full or incremental database backup before the database can be activated again.
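For example, to clear backup pending state after such a topology change (the database name mydb and the target path /backups are placeholders):

```shell
# Full database backup
db2 backup database mydb to /backups

# Or an incremental backup, if incremental backups are enabled
# (TRACKMOD set to YES and a prior full backup exists)
db2 backup database mydb incremental to /backups
```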

For non-recoverable databases, you do not need to take a database backup after a topology change. (Rollforward is not supported for non-recoverable databases.)

Topology changes in a geographically dispersed Db2 pureScale cluster (GDPC)

All of the information in the preceding sections also applies to GDPC. In addition, in a GDPC, Db2 automatically performs various self-verifications to ensure that the cluster is in a healthy state. For example, one validation verifies that the number of nodes (members and CFs) at each non-tiebreaker site is the same, so that if a site fails, the surviving site together with the tiebreaker host can acquire operational quorum and continue to function. As a result, when you add members to a GDPC cluster, add two members, one at each non-tiebreaker site.

After you add the first new member to one site in a GDPC, an alert is generated indicating that an unequal number of nodes was detected at the two sites. This alert is normal and can be ignored. After the second member is added to the other site, the alert is cleared automatically.