z/OS Communications Server: IP Sockets Application Programming Interface Guide and Reference


Access to shared storage areas in a concurrent server program


If two tasks access the same storage area, you need full control over its use unless the storage is read-only. If the storage area is used to pass parameters between the tasks, you must serialize access to that shared resource (the storage area).

In an MVS™ environment, you can use MVS latching services or traditional enqueue and dequeue system calls to serialize access to the shared resource. For MVS latching services, use the ISGLOBT and ISGLREL callable services. In assembler, use the ENQ and DEQ macros for the enqueue and dequeue functions.
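
For example, a minimal assembler sketch of this pattern might look like the following. The qname and rname values (APPLSHR and PARMAREA) are illustrative; any literal values work, as long as every task that shares the storage area uses the same ones.

* Serialize an update to a shared storage area (names are illustrative).
         ENQ   (SHRQNAME,SHRRNAME,E,8,STEP)  Exclusive, step-wide request
* ... update the shared storage area here ...
         DEQ   (SHRQNAME,SHRRNAME,8,STEP)    Release the resource
* Data definitions used above:
SHRQNAME DC    CL8'APPLSHR '                 Queue (major) name
SHRRNAME DC    CL8'PARMAREA'                 Resource (minor) name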

Figure 1 illustrates access to a shared storage area.

Figure 1. Serialized access to a shared storage area
The diagram shows how Task 1 and Task 2 serialize access to a shared storage area.

The following steps describe this process.

  1. At time t1, Task 1 issues a serialize request by means of an enqueue call. On this call it passes two character fields that uniquely identify the resource in question. The literal values of these two fields do not matter, but every task that accesses this storage area must use the same values. Because no other task has issued an enqueue for the resource, Task 1 gets access to it and proceeds to modify the storage area.
  2. At time t2, Task 2 needs to access the same storage area and issues an enqueue call with the same resource names that Task 1 used. Because Task 1 holds the resource, Task 2 is placed in a wait and stays there until Task 1 releases the resource.
  3. At time t3, Task 1 releases the resource with a dequeue call, and Task 2 is immediately taken out of its wait and begins to modify the shared storage area.
  4. At time t4, Task 2 has finished updating the shared storage area and releases the resource with a dequeue call. (In this example, we assumed that serialized access is needed only when the tasks update information in the shared storage area.)
There are situations in which this assumption does not suffice. If you use a storage area to pass parameters to some kind of service task inside your address space, such as a task that maintains a log or trace, you must ensure that the service task has read the information and acted on it before another task in your address space passes new information to the service task through the same storage area. This is illustrated in Figure 2.
Figure 2. Synchronized use of a common service task
The diagram shows how Task 1 and Task 2 synchronize their use of a common service task.

The following steps describe how the tasks synchronize their use of a common service task; an assembler sketch follows the steps.

  1. At time t1, Task 1 gains access to the common storage area; holding this resource implicitly gives it exclusive use of the service task in question.
  2. At time t2, Task 2 also needs to use the service task, but it is placed into a wait because Task 1 already holds the resource.
  3. At time t3, Task 1 has finished placing values into the common storage area and signals the service task to start processing them. This is done with a POST system call. Immediately following this call, Task 1 enters a wait, where it stays until the service task has completed its processing. The service task starts, processes the data in the common storage area, and prints it.
  4. At time t4, the service task has finished its work and signals Task 1 that it can continue; the service task then enters a new wait for the next work request.
  5. At time t5, Task 1 releases the lock it obtained at time t1, and Task 2 is immediately taken out of its wait. Task 2 then fills its values into the common storage area before posting the same service task to process the new request.
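
A minimal assembler sketch of this exchange might look like the following. The ECB names, the resource names, and the overall program structure (WORKECB, REPLYECB, SRVQNAME, SRVRNAME, SRVLOOP) are assumptions for illustration; the WAIT and POST macros supply the wait and post functions described in the steps.

* Requesting task (Task 1): serialize the common area, pass the
* request, and wait for the service task to finish with it.
         ENQ   (SRVQNAME,SRVRNAME,E,8,STEP)  Gain exclusive use (t1)
* ... place parameter values into the common storage area ...
         XC    REPLYECB,REPLYECB             Clear the reply ECB
         POST  WORKECB                       Signal the service task (t3)
         WAIT  ECB=REPLYECB                  Wait until it has finished
         DEQ   (SRVQNAME,SRVRNAME,8,STEP)    Release the area (t5)
*
* Service task: wait for work, process it, and post the reply.
SRVLOOP  WAIT  ECB=WORKECB                   Wait for a work request
         XC    WORKECB,WORKECB               Reset for the next request
* ... process the data in the common storage area ...
         POST  REPLYECB                      Let the requester continue (t4)
         B     SRVLOOP                       Wait for new work
*
WORKECB  DC    F'0'                          Posted by requesting tasks
REPLYECB DC    F'0'                          Posted by the service task
SRVQNAME DC    CL8'APPLSRV '                 Queue (major) name
SRVRNAME DC    CL8'COMMAREA'                 Resource (minor) name

Because the requesting task holds the enqueue until after the reply is posted, the service task is never handed a new request while it is still processing the previous one.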

This technique is relatively simple. It can be made more complicated, and more efficient, by using internal request queues so the requesting task does not need to wait for the service task to complete the active request.
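
For example, a hypothetical queued variation might chain a request element onto a queue anchor with compare-and-swap and post the service task without waiting, along the following lines. QHEAD, REQELEM, REQNEXT, and WORKECB are assumed names, and the chain shown is a simple last-in, first-out list that the service task would later drain (and reorder if needed).

* Add a new request element to the service task's queue and post the
* service task; the requesting task does not wait for completion.
* REQNEXT is the forward-chain field within REQELEM.
         LA    2,REQELEM           Address of the new request element
         L     3,QHEAD             Current head of the request queue
ADDLOOP  ST    3,REQNEXT           Chain the current head off the new element
         CS    3,2,QHEAD           Try to make the new element the head
         BNE   ADDLOOP             Another task changed QHEAD; retry
         POST  WORKECB             Wake the service task and continue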

When you use the enqueue system call, you have the option of testing whether a resource is available. In some situations, you might choose to do this to avoid waiting at a particular point in your processing, so that you can divert to other actions when the resource is not available.
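
One way to code such a conditional request is with the RET parameter of the ENQ macro; the following is a hedged sketch that reuses the illustrative names from the earlier example. RET=USE obtains the resource only if it is immediately available and otherwise returns with a nonzero return code instead of placing the task in a wait.

* Request the resource only if it is immediately available.
         ENQ   (SHRQNAME,SHRRNAME,E,8,STEP),RET=USE
         LTR   15,15                Zero return code: resource obtained
         BNZ   NOTAVAIL             Nonzero: resource is busy
* ... update the shared storage area ...
         DEQ   (SHRQNAME,SHRRNAME,8,STEP)    Release the resource
         B     DONE
NOTAVAIL DS    0H
* ... divert to other actions and try again later ...
DONE     DS    0H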





Copyright IBM Corporation 1990, 2014