Gateway peering

A gateway peering instance defines peer group members for the gateway cluster.

Use of independent versus shared instances

Although each gateway peering instance generally has the same peer group members, you must assign independent instances for some purposes, while other purposes can share an instance. Each defined instance must use different monitoring ports and local ports.
  • You must assign a different instance for each of the following purposes.
    • API Connect Gateway Service
    • API probe
    • GatewayScript rate limiting
    • Internal token store for the API security token manager
    • External token store for the API security token manager
  • You can assign the same instance for the following purposes when both are peer-based.
    • API rate limiting
    • API subscriptions
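The assignment rules above can be sketched as a small validation routine. This is a minimal illustration, not a DataPower interface: the purpose keys, instance names, and port values are hypothetical examples.

```python
# Purposes that must each use an independent gateway peering instance.
# (API rate limiting and API subscriptions may share one when both are
# peer-based, so they are not listed here.)
EXCLUSIVE_PURPOSES = {
    "apic-gw-service", "api-probe", "gws-rate-limit",
    "internal-token-store", "external-token-store",
}

def validate_assignments(assignments, ports):
    """assignments: purpose -> instance name.
    ports: instance name -> (local port, monitor port).
    Returns a list of rule violations."""
    errors = []
    # Purposes that need independent instances must not share one.
    seen = {}
    for purpose, instance in assignments.items():
        if purpose in EXCLUSIVE_PURPOSES:
            if instance in seen:
                errors.append(
                    f"{purpose} shares instance {instance!r} with {seen[instance]}")
            else:
                seen[instance] = purpose
    # Each defined instance must use different local and monitor ports.
    used = set()
    for instance, pair in ports.items():
        for port in pair:
            if port in used:
                errors.append(f"port {port} reused by {instance!r}")
            used.add(port)
    return errors
```

For example, assigning both the API Connect Gateway Service and the API probe to the same instance produces a violation, while sharing one instance between API rate limiting and API subscriptions does not.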

The DataPower® Gateway provides the default-gateway-peering instance, which is disabled by default.

Compatibility mode-based differences

Depending on the compatibility mode for API Connect integration, the gateway peering instance or the gateway peering manager synchronizes data. For more information about the compatibility mode, see Configuring the API Connect Gateway Service.
  • When in V5 compatibility mode, you set a single gateway peering instance to synchronize distributed state and configuration data for the following purposes.
    • API Connect Gateway Service
    • API rate limiting
    • API subscriptions
    If you do not want to persist data across a restart, you can store the data in memory.
  • When not in V5 compatibility mode, you use the default gateway peering manager, which is the only supported value. By default, the default gateway peering manager is disabled. In the gateway peering manager, set which gateway peering instances to use.

Available instances

When not in V5 compatibility mode, you can configure and use the following gateway peering instances.
API Connect Gateway Service
This gateway peering instance synchronizes distributed state and configuration data across peer group members. If you do not want to persist data across a restart, you can store the data in memory.
API rate limiting
This gateway peering instance manages count limits, rate limits, and burst limits across peer group members. These rate limits are defined by the API collection, API plans, API operation rate limits, and assembly rate limit actions. This gateway peering instance is separate from the GatewayScript rate limiting instance. This gateway peering instance can be peer-based or cluster-based. If you do not want to persist data across a restart, you can store the data in memory.
API subscriptions
This gateway peering instance manages subscribers across peer group members. If you do not want to persist data across a restart, you can store the data in memory.
Internal token store for the API security token manager
This gateway peering instance manages and stores internal OAuth token data across peer group members. This gateway peering instance is part of the configuration for the API security token manager. You must define a persistence location that persists data across a restart.
External token store for the API security token manager
This gateway peering instance stores and manages the responses from external OAuth token management services across peer group members. The external store includes OAuth token data for native tokens that are managed by an external token management service. This gateway peering instance is part of the configuration for the API security token manager. If you do not want to persist data across a restart, you can store the data in memory.
API probe
This gateway peering instance synchronizes distributed API probe data across peer group members. If you do not want to persist data across a restart, you can store the data in memory.
GatewayScript rate limiting
This gateway peering instance synchronizes the keys for rate thresholds, counters, and concurrent transactions across peer group members. These rate limits are defined by APIs in the GatewayScript ratelimit module that an assembly GatewayScript action calls. This gateway peering instance is separate from the API rate limiting instance. If you do not want to persist data across a restart, you can store the data in memory.

Configuration and environmental requirements

Members in the peer group must meet the following conditions.

Identical configuration
Have identical configurations, which include the following settings.
  • The same monitor port, which does not apply to cluster-based peering
  • The same local port
  • The same TLS configuration, which includes the following properties
    • TLS mode
    • Identification credentials or the deprecated key and certificate aliases
    • Validation credentials
  • The same persistence location
    • When memory, data does not persist across a restart.
    • When local file store or RAID, data persists across a restart.
    Attention: The DataPower local: directory is not a secure option. For optimal security, set the persistence location to the RAID volume. When the RAID volume is not available, set the persistence location to memory.
Key and certificate restrictions
Keys and certificates are restricted to PEM and PKCS #12 formats.
Clock synchronization
Be clock-synchronized by using the NTP service. To set up the NTP service, see Managing the NTP service.
System name
Have a unique system name, as defined in System Settings. To set, see Defining information specific to the DataPower Gateway.
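The identical-configuration and unique-name requirements above can be expressed as a simple consistency check across members. This is a hypothetical sketch: the field names are illustrative, not actual DataPower property names.

```python
# Settings that must match on every member of the peer group.
REQUIRED_MATCH = (
    "local_port", "monitor_port", "tls_mode",
    "identification_credentials", "validation_credentials",
    "persistence_location",
)

def config_mismatches(members):
    """members: list of (system_name, config_dict) pairs.
    Returns descriptions of requirement violations."""
    problems = []
    # Each member must have a unique system name.
    names = [name for name, _ in members]
    if len(names) != len(set(names)):
        problems.append("system names must be unique")
    # All members must share identical values for the required settings.
    for field in REQUIRED_MATCH:
        values = {cfg[field] for _, cfg in members}
        if len(values) > 1:
            problems.append(f"{field} differs across members")
    return problems
```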

Data synchronization and failover

Gateway peering requires a quorum of at least three peer members. In peer-based or cluster-based gateway peering, one member is the primary member and the other members are replicas, or secondary members.

When a member joins the peer group, data is synchronized across the peer group so that each member has identical data.

When the primary member is operational, data from each accepted request is saved to the primary member first. Then, the primary member synchronizes the data to all replicas so that every peer has the same copy of the data.

When the primary member becomes unavailable, the following situations occur.
  • After failover, quota enforcement continues across the peer group without impact.
  • Before failover, during the 10-second timeout for the primary member, the following behaviors occur.
    • Replicas cannot process requests.
    • When the data storage of all peers is in memory, the data on the primary member is empty after it restarts. The replicas then synchronize with the resumed primary member, so the originally stored data is lost.

When a replica becomes unavailable and is automatically restarted, it synchronizes with the primary member. In this case, the originally stored data remains consistent with the primary member.

Failover detection is as follows.
  • The failure is detected when two replicas agree that the primary member is unreachable after a timeout of 10 seconds. Therefore, at least three members must be configured to support failover.
  • More than half of all members must be reachable.
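The detection rules above can be combined into one decision function. This is a minimal sketch of the stated conditions, not DataPower's internal algorithm; the function and parameter names are hypothetical.

```python
FAILOVER_TIMEOUT_S = 10  # primary must be unreachable for this long

def should_fail_over(total_members, reachable_members,
                     replicas_reporting_primary_down, unreachable_for_s):
    """Return True when the peer group fails over to a new primary."""
    if unreachable_for_s < FAILOVER_TIMEOUT_S:
        return False  # still within the 10-second detection timeout
    if replicas_reporting_primary_down < 2:
        return False  # two replicas must agree the primary is unreachable
    if reachable_members * 2 <= total_members:
        return False  # more than half of all members must be reachable
    return True
```

The two-replica agreement condition is why at least three members must be configured: with fewer members, two replicas can never agree that the primary is down.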