Configuring your HTTP topology
Learn about how you can integrate IBM® Integration Bus into your HTTP
topology, and the consequences of different configurations for load-balancing,
failover, and administration.
The configuration of
IBM Integration Bus in
each scenario described in this topic is closely connected to your
choice of HTTP listener. You can choose between broker-wide listeners
and integration server (embedded) listeners. To understand the differences
between these listeners, how to configure them, and how ports are
allocated to them, see
HTTP listeners.
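As a quick orientation, the choice between the two listener types is made per integration server. The following commands are a sketch only: the broker name IB10NODE and integration server name "default" are placeholders, and you should check the mqsireportproperties and mqsichangeproperties command reference for your product version.

```shell
# Report whether HTTP nodes in this integration server use the
# embedded listener (placeholder names: IB10NODE, default)
mqsireportproperties IB10NODE -e default -o ExecutionGroup -n httpNodesUseEmbeddedListener

# Route HTTP nodes in this integration server through its embedded listener
mqsichangeproperties IB10NODE -e default -o ExecutionGroup -n httpNodesUseEmbeddedListener -v true

# Revert to the broker-wide listener
mqsichangeproperties IB10NODE -e default -o ExecutionGroup -n httpNodesUseEmbeddedListener -v false
```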
Learn about common HTTP topologies, and how to configure
IBM Integration Bus in each case. Understand the
implications of your choice of topology and broker configuration.
The following scenarios are listed in order of increasing complexity.
- Scenario one: single machine, single broker; ease of use is the
highest priority; load-balancing is relatively important; machine
failover and high throughput are not high priorities
This scenario uses
a broker-wide listener, listening on port 7080, and all HTTP communication
from the clients is processed by that listener. Load-balancing is
achieved by having additional instances of message flows within integration
servers, and by using the same URL in message flows across integration
servers.
Strengths:
- Simplicity of administration and web service discovery: all inbound
requests (and any responses) are routed through a single port for
HTTP (and a second for HTTPS, if required).
- Load-balancing across message flows and integration servers: a request
to a specific web service (URL) can be processed by any message flow
registered to handle that URL. The message flows (for example, MF1
and MF2) can be deployed to separate integration servers (ExGp1 and ExGp2),
and the requests are load balanced between them. As usual, within
each integration server you can deploy additional instances of each flow,
as required. This configuration offers a scalable load-balanced solution
with some degree of failover; if one integration server fails, the others
continue to process the workload while the first integration server restarts.
Weaknesses:
- Failover: there is a single point of failure (broker or machine).
If machine failover is implemented, it is complicated: the secondary
machine must take over the IP address of the primary machine.
- Activity partitioning: there is no partitioning between activities
managed by the broker.
- Throughput: a single listener handles all HTTP and all HTTPS messages
sent through two ports on the broker. This single point of processing
and error handling can cause bottlenecks if high message throughput
is required.
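If you adopt this scenario, the two ports that the broker-wide listener uses can be set with mqsichangeproperties. This is a sketch, not a complete procedure (IB10NODE is a placeholder broker name, and the listener must be restarted for the change to take effect); check the command reference for your product version.

```shell
# Broker-wide listener: set the HTTP port (7080 is the default)
mqsichangeproperties IB10NODE -b httplistener -o HTTPConnector -n port -v 7080

# Broker-wide listener: set the HTTPS port (7083 is the default)
mqsichangeproperties IB10NODE -b httplistener -o HTTPSConnector -n port -v 7083
```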
- Scenario two: single machine, single broker; high throughput is
the highest priority; machine failover and load-balancing are not
high priorities
This scenario uses integration
server listeners to improve message throughput.
Strengths:
- High throughput: you can deploy message flows to different integration
servers so that the HTTP (or HTTPS) messages can be handled by multiple
listeners on multiple ports to meet high throughput requirements
- Simple configuration: these listeners communicate directly with
the HTTP transport network; no intermediate queues are required
- Activity partitioning: activities managed by the broker are partitioned
into separate integration servers
Weaknesses:
- Failover: there is a single point of failure (integration server,
broker, or machine). If machine failover is implemented, it is complicated,
and involves the secondary machine taking over the IP address of the
primary machine
- You must include both the input and the reply nodes in the same
message flow, or deploy separate message flows to the same integration
server, so that they use the same listener; matching input and reply
messages must be processed by the same port
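For this scenario, each integration server that must handle HTTP traffic on its own port needs its embedded listener enabled; you can also pin each listener to a known port rather than relying on the automatically assigned one (by default, embedded listeners are allocated ports from a range starting at 7800). A sketch, with placeholder broker and integration server names; verify the property names against the documentation for your product version:

```shell
# Enable the embedded listener in each integration server (EG1, EG2
# and IB10NODE are placeholder names)
mqsichangeproperties IB10NODE -e EG1 -o ExecutionGroup -n httpNodesUseEmbeddedListener -v true
mqsichangeproperties IB10NODE -e EG2 -o ExecutionGroup -n httpNodesUseEmbeddedListener -v true

# Optionally fix each embedded listener's HTTP port so that clients
# and firewalls can rely on it
mqsichangeproperties IB10NODE -e EG1 -o HTTPConnector -n explicitlySetPortNumber -v 7801
mqsichangeproperties IB10NODE -e EG2 -o HTTPConnector -n explicitlySetPortNumber -v 7802
```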
- Scenario three: multiple machines, multiple brokers; failover is
the highest priority; load-balancing and security are also important
This scenario uses
broker-wide listeners. There is also a server that acts as a load
balancer and a network dispatcher, thus simplifying the client interface.
The configuration of message flows and integration servers is replicated
across multiple brokers, and the load-balancing server is configured
to manage high availability cluster multiprocessing (HACMP™) across the brokers.
Strengths:
- Load balancing: having a server that acts as a network dispatcher
and load balancer provides additional load-balancing capability to
this scenario
- Failover: the distribution of brokers across systems allows for
both machine and broker failover
- Simplified client configuration: the load-balancing server provides
a single point of contact for clients
Weaknesses:
- Complexity: the scenario is complex, although the complexity can
be hidden from clients and managed in centralized locations