IBM Security Information Queue FAQ

Question & Answer


Question

Q1. Where do I get the latest version of IBM Security Information Queue (ISIQ)?

Q2. From a conceptual point of view, is ISIQ the same as ISIGADI, namely, an interface component between ISIM and IGI?

Q3. Even though ISIQ is composed of multiple Docker containers, could we deploy it on a single node? For example, if we have a non-HA IGI that's used for recertification, can we continue to run it like that along with a single-node ISIQ?

Q4. What are the minimum requirements of a server that hosts a single-node ISIQ?

Q5. If we already have a Docker infrastructure, can we add ISIQ to it if the infrastructure meets the minimum hardware/software requirements?

Q6. To build up an HA solution, can we simply double our single-node ISIQ?

Q7. Are any special steps required to create a swarm cluster?

Q8. I see that ISIQ uses Kafka. We already have a Kafka-based environment. Can we plug ISIQ into that environment?

Q9. Are there restrictions on which OIDC provider we can leverage with ISIQ?

Q10. If we have no OIDC provider, can we use IGI to satisfy ISIQ's OIDC requirement?

Q11. I see that ISIQ has "pause" and "resume" buttons for its queues. Is there a way to reset? For example, if I want to reinitialize my IGI database in a test environment, how would I also delete relevant Kafka topics that ISIQ had populated?

Q12. Can I purge (or drop) ISIQ and restart from scratch with a sync between ISIM and IGI, even if both are partially in sync?

Q13. After ISIQ has loaded ISIM data into IGI, how do I know my two products are fully synchronized?

Q14. Given Docker's flexibility, could we adopt an elastic computing style where we add a server during initial ISIM-to-IGI load to speed up the process, and then later remove the server during "steady-state" operations between ISIM and IGI?

Q15. Won't the Kafka queues in ISIQ grow over time? If so, what should our maintenance procedure be? Should we erase messages after a while?

Q16. In the Kafka streams model, if I lose a consumer and it has to be re-created, does everything get replayed from the start? Example scenario: A user is created in January, and deleted in March. It's now June and I have to add or reset a consumer. If the replay logic steps through the queue, won't it re-add the deleted user and then remove the user again?

Q17. If there are different streams for different ISIM objects, are the streams sync'd? In other words, if I create an Org container, and then create a person in that container, it will reach ISIQ in two different streams. How do I ensure one arrives before the other?

Q18. Are there recommendations for managing ISIQ's log growth and guarding against excessive disk space usage?

Answer

A1. The latest version of ISIQ can always be found on its starter kit web page: https://www.ibm.com/support/pages/ibm-security-information-queue-starter-kit

A2. Not in the sense of being a specialized point-to-point solution like ISIGADI. ISIQ is a general-purpose data exchange broker. It consists of several services running in separate containers in a cluster of Docker nodes. One of ISIQ's first use cases is to exchange data between ISIM and IGI, but that's just one use case.

A3. Yes, you can execute all ISIQ containers on a swarm with one Docker host. However, you risk loss of availability if the host fails.

A4. Docker CE 18.03 running on CentOS, Debian, Fedora, or Ubuntu on x86_64/amd64 hardware, with 8 GB RAM, 2 vCPUs, and 25 GB of free disk space per node (a single-node deployment needs more RAM and disk space). Docker EE supports additional Linux distributions: CentOS, Oracle Linux, RHEL, SLES, and Ubuntu. See https://docs.docker.com/ee/supported-platforms/#on-premises - Note: ISIQ runs on Linux only. Docker EE for Windows exists, but it is not currently an ISIQ-supported platform.

A5. There are challenges to this approach. For example, if you use the ISIQ logging stack, it includes logspout, which reads in the Docker logs of all containers on the host and sends them to logstash. If you're running many other containers, it could create a logging bottleneck. There are configuration options that might help, but for now we suggest a dedicated Docker server instance for ISIQ.

A6. No; simply doubling to two nodes is not recommended. Specify an odd number of hosts (which is why the ISIQ-supplied YAML files assume a three-node cluster), because the distributed coordination algorithms that ISIQ services rely on use majority elections, and a two-node cluster cannot form a majority if either node fails.

A7. Run the command `docker swarm init` to initialize the swarm, and then run `docker swarm join` to add extra hosts. Some ISIQ services need to maintain state via Docker volumes. In a cluster, you must either pin those services to a particular host, or else set up a shared file system so that state data can be read from whichever node the service starts on. For more information about this subject, see the "Stateful Services" section of the ISIQ Deployment Guide.
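A minimal sketch of the cluster-creation steps follows; the IP address is a placeholder, and the worker token is printed for you by `docker swarm init`:

```shell
# On the first (manager) node: initialize the swarm.
# --advertise-addr is the address other nodes use to reach this manager.
docker swarm init --advertise-addr 10.0.0.1

# The init output prints a join command containing a worker token.
# To display that command again later, run on the manager:
docker swarm join-token worker

# On each additional host, paste the printed join command, e.g.:
docker swarm join --token <worker-token> 10.0.0.1:2377

# Back on the manager, confirm that all nodes joined the swarm:
docker node ls
```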

A8. There are configuration assumptions in ISIQ's use of Kafka that might not match your existing environment.

Examples:

a) ISIQ uses a naming convention for isolating topics by product. ISIQ topic names could end up clashing with your topic names.

b) Because ISIQ relies on the isolation provided by Docker's network fabric, it does not use TLS between stack layers, which might be inconsistent with your security configuration.

The upshot: This capability is probably out of scope. If you want to exchange data with existing Kafka deployments, we suggest you wait until ISIQ delivers replication technology.

A9. In principle, ISIQ can use any provider that supports OIDC. We tested quite a few in-house, but not all. Since every provider we tried (IBM w3id, BlueID, Google, IGI, ISAM, Cloud Identity) worked, we expect others should work too.

A10. Yes, you can use the IGI admin provider to authenticate IGI administrators to ISIQ. If feasible, we recommend ISAM or Cloud Identity for enterprise-level authentication. For more information about this subject, see the "OpenID" section of the ISIQ Deployment Guide.

A11. In ISIQ, when you delete a configured product such as IGI, any topic data associated with that product also gets deleted.
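Deleting the configured product through the ISIQ console is the supported cleanup path. If you want to inspect leftover topics manually in a test environment, Kafka's standard `kafka-topics` tool can help. A minimal sketch, assuming you run it inside the Kafka broker container and that the broker listens on localhost:9092 (older Kafka versions use `--zookeeper` instead of `--bootstrap-server`); the topic name is a placeholder:

```shell
# List all topics to see which ISIQ-created topics remain.
kafka-topics.sh --bootstrap-server localhost:9092 --list

# Delete a specific leftover topic. Only do this in a test
# environment; console-driven deletion is the supported path.
kafka-topics.sh --bootstrap-server localhost:9092 \
  --delete --topic isim.example.topic
```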

A12. Each subscription definition on the ISIQ product dashboard page offers a "Reprocess" button that lets you replay one or more source topics from the beginning. This option can be used to ensure your producer and consumer are in sync. For more information, see the "Reprocessing Topics" section of the ISIQ User's Guide.

A13. ISIQ offers a validation tool to automate synchronization checking. For more information, see "Appendix H: Optional ISIQ Tools" in the ISIQ User's Guide.

A14. Yes, this elastic computing style should work, provided you take care not to lose state for the stateful services (Zookeeper, Kafka broker, time-series database) when you remove a node. In other words, make sure that replication has had time to create data replicas before you bring down all copies, and don't remove a node that holds the only copy of some data. Stateless service replicas move easily among the remaining nodes. Again, refer to the "Stateful Services" section of the ISIQ Deployment Guide.

A15. The default retention policy is "log compaction" (see http://cloudurable.com/blog/kafka-architecture-log-compaction/index.html for an explanation). Simply put, after a sufficient time period, only the last state of each object remains in the queue. For state data, this behavior makes sense because at some future time you might want to replay the data or introduce a new consumer/subscriber. For event-oriented data such as changelog data, you can use Kafka's command-line tools to configure a time-based retention policy instead. We are considering adding APIs and console features in ISIQ to enact replays, set retention policies, and so on.

If consumers need a chance to see every state of an object, you can set a minimum compaction lag (this isn't necessary for the IGI connector); otherwise, consumers see at least the last state of each object. After a configurable retention period (24 hours by default), Kafka "tombstones" are removed and no reference to the deleted object remains.
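As a sketch of the command-line approach to time-based retention, assuming Kafka's standard `kafka-configs` tool, a hypothetical changelog topic name, and a broker reachable at localhost:9092 (older Kafka versions use `--zookeeper` instead of `--bootstrap-server`):

```shell
# Hypothetical example: switch a changelog topic from compaction to
# time-based retention of 7 days (604800000 ms). Topic name and
# broker address are assumptions; substitute your actual values.
kafka-configs.sh --bootstrap-server localhost:9092 \
  --entity-type topics --entity-name isim.changelog.example \
  --alter --add-config cleanup.policy=delete,retention.ms=604800000

# Verify the topic's current configuration overrides.
kafka-configs.sh --bootstrap-server localhost:9092 \
  --entity-type topics --entity-name isim.changelog.example --describe
```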

A16. As previously mentioned, under the default log compaction policy, only the last reference to each object is retained. In the example cited, after sufficient time, nothing would be replayed once the object gets compacted. Even tombstones are cleared after all other references are compacted and the retention period (default 24 hours) elapses. Note: depending on volume of activity, other references might get compacted somewhat unpredictably up to a time limit set by Kafka's log.roll configuration (7 days by default). The process happens whenever Kafka goes through partition segment rotation, which occurs when a time or segment size limit is reached.

A17. The ISIQ connector waits before it delivers to IGI an "incomplete" object, specifically, one that's missing data the object depends on. Eventually (as determined by the CONNECT_DEPENDENCY_WAIT_TIMEOUT environment variable value in ISIQ's connect-stack.yml file), if the dependencies still aren't resolved, ISIQ delivers the object to IGI with a stub for the missing data. In that situation, you can implement rules in IGI to attempt a graceful handling of the missing data.

A18. The ISIQ-supplied YAML files contain "logging:" definitions to control the number and size of Docker log files per ISIQ stack. These definitions take precedence over any system-wide Docker log rotation settings that you configured. For more information, see the "Docker Log Rotation" section of the ISIQ Deployment Guide.
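For reference, a Docker Compose "logging:" definition of the kind found in the ISIQ YAML files looks like the following sketch; the service name and the limits shown are illustrative, not ISIQ's actual settings:

```yaml
services:
  example-service:
    image: example/image:latest
    logging:
      driver: "json-file"
      options:
        max-size: "10m"   # rotate when a log file reaches 10 MB
        max-file: "5"     # keep at most 5 rotated files per container
```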


Product Synonym

ISIQ

Document Information

Modified date:
10 August 2020

UID

ibm10733903