I typically start many logical screens when using ISPF. I reckon most system programmers do the same and tend to use the same set of logical screens for their sessions. Until now we have had to start each screen manually: ISPF allows a user to have up to 32 logical screens, but there has been no automation for creating them at ISPF startup.
Now, with z/OS V2R1, ISPF allows you to define a set of logical screens that are automatically started when ISPF is invoked.
To enable this support you must be at z/OS V2R1. When you start ISPF, you specify the name of a variable on the ISPF start command. You can either:
Define your own variable
Use the default variable ZSTART
The variable must contain the identifier ISPF, followed by the command delimiter, followed by the command stack used to start the logical screens.
For example, if I choose a variable name of MYSTART, I could define the variable and assign values along the following lines:
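As an illustration only (your command stack will differ, and the default semicolon is assumed as the command delimiter), MYSTART could be assigned a value such as:

ISPF;3.4;START 6;START SDSF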
The name of the variable is specified as an option on the ISPF or ISPSTART command.
If a variable name is not specified with ISPF or ISPSTART, the default profile variable ZSTART is used for the initial command stack.
If ZSTART is not found or does not contain the ISPF identifier, ISPF starts normally.
You can add a variable or modify ZSTART from Dialog Test -> Variables (option 7.3).
For example, here I'll update the ZSTART variable to start the following screens: DSLIST, SDSF and the Command Shell, and then switch back to DSLIST.
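If you prefer to script the update rather than use Dialog Test, a minimal REXX sketch along these lines should work; it must run under ISPF, and the command stack is just an illustration (it assumes SDSF is started with the SDSF command and that the Command Shell is option 6):

/* REXX - set the ZSTART profile variable (illustrative sketch) */
ADDRESS ISPEXEC
ZSTART = "ISPF;3.4;START SDSF;START 6;SWAP 1" /* DSLIST, SDSF, Command Shell, swap back */
"VPUT (ZSTART) PROFILE"                       /* save it in the ISPF profile pool      */
SAY 'ZSTART is now:' ZSTART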
Now when I start ISPF from the TSO READY screen (specifying no variable on the start command) using just the ISPF command, all the logical screens defined in the ZSTART variable are started.
You can bypass the startup of any logical screens defined in the ZSTART variable by using the new BASIC keyword when starting ISPF.
New XALL command
I almost forgot about the new XALL command (thanks to Yves Colliard for the suggestion to include this new feature). Take a look at the comments section to see how Yves has started ISPF sessions using REXX.
At the end of the day you're ready to log off ISPF and end all of your logical screens. This could take many keystrokes and, being the lazy sysprog that I am, a bit of patience. However, with z/OS V2R1 there's a handy new command, XALL, which will attempt to terminate all of the ISPF logical screens for you.
New =XALL command provided to help terminate all logical screens with one command.
=X command is propagated to every logical session to terminate each application that supports =X
If =X is not supported, the termination process halts on that logical screen
Once that logical screen is terminated, =XALL processing continues for each remaining logical screen
So if I have several logical screens open and I want a fast exit, I type =XALL on the command line, like so:
With a bit of luck all the screens support the =X command, and I am dumped back out to TSO.
For more information on System z and the z/OS operating system see the following IBM Redbooks publications:
This week's guest blogger is Ravi Kumar. Ravi is a Senior Managing Consultant at IBM (Analytics Platform, North American Lab Services). Ravi is a Distinguished IT Specialist (Open Group certified) with more than 23 years of IT experience. He has a Masters degree in Business Administration (MBA) from the University of Nebraska, Lincoln. He has contributed to seven other Redbooks publications in the areas of Database, Analytics Accelerator and Information Management tools. His social profile can be
viewed at: http://www.linkedin.com/in/ravikalyanasundaram
IBM SPSS Modeler is a powerful analytic tool that supports all phases of the data analytics process, including data preparation, model building, deployment, and model maintenance. You can leverage SPSS Modeler to build analytical models, which can be used in statistical analysis, data mining and machine learning. Data scientists can work with the user-friendly SPSS Modeler client interface to access mainframe data with the same ease as data from any other platform they are accustomed to. SPSS Modeler can also take advantage of in-database transformation and in-database modeling using IBM DB2 Analytics Accelerator for z/OS (IDAA) as the data analytics hub on z/OS.
Until recently, z Systems did not offer an efficient solution for complex mathematical processing. So, in the past, you may have resorted to offloading operational data (that is, a snapshot from a prior point in time) from z Systems to a distributed platform in order to implement machine learning, and those solutions often produced obsolete and unreliable results in addition to unwanted security exposures.
Now, with IBM DB2 Analytics Accelerator, you can enable machine learning on your OLTP applications that produce and consume z Systems data, simultaneously accelerating the execution of data transformation and analytical modeling processes with the power and performance of the MPP (Massively Parallel Processing) architecture of the IBM Netezza appliance. All without offloading data from z Systems to distributed environments (which, by the way, also eliminates a potential data breach exposure).
In-transaction scoring using the predictive models created with the above approach can scale with your DB2 for z/OS transactional environment. This is accomplished through in-database scoring using the SPSS Scoring Adapter for DB2 for z/OS, which performs real-time scoring on your predictive models to quickly reveal what's interesting in your data. When the predictive model is published in SPSS, the Scoring Adapter for DB2 for z/OS uses PACK/UNPACK functions for efficient parameter passing and can create an SQL statement with the HUMSPSS.SCORE_COMPONENT UDF. This generated SQL statement can be embedded in your OLTP application. The other popular alternative is to generate the scoring model in the open-standard PMML (Predictive Model Markup Language) format. The score can then be combined with your business rules to make real-time decisions on your DB2 for z/OS data from within your mainframe applications. You may also use a vendor tool called Zementis that uses the generated PMML to implement in-application scoring in CICS and Java applications accessing DB2 for z/OS.
The above approach easily equips your OLTP and batch applications accessing mainframe data with early machine learning capability, learning hidden patterns in your operational data using mathematical modeling algorithms that are readily available with IDAA (as INZA stored procedures that run entirely on the Accelerator). With IDAA V5.1, you can utilize five major predictive analytics algorithms: K-Means, Naive Bayes, Decision Tree, Regression Tree, and Two-step.
Unsupervised learning algorithms like K-Means and Two-step use descriptive statistics to analyze the natural patterns and relationships that occur within your operational data on DB2 for z/OS. Unsupervised learning models can identify clusters of similar records and/or relationships between different fields within an accelerated DB2 for z/OS table. For example, the K-Means and Two-step clustering algorithms (available through stored procedures such as INZA.KMEANS and INZA.TWOSTEP) can enable machine learning in areas like market segmentation, geostatistics, market basket analysis (by association learning) and so on.
Supervised learning uses historic/training data to construct decision trees, and the constructed tree is then used to predict future values. Classification can be used to identify which group or type a new record being inserted into your DB2 for z/OS table belongs to, based on the values of its key fields. Regression can be used to predict future values for a given field based on past historic values. Algorithms like Naive Bayes, Decision Tree, and Regression Tree can be used to solve classification and regression problems. Thus, predictive models using supervised learning algorithms (available through stored procedures like INZA.DECTREE, INZA.REGTREE, and INZA.NAIVEBAYES) can be used to predict whether a customer will buy or leave, credit card fraud, up-selling opportunities, voters' responsiveness to different types of election campaigns and so on.
Summary: Neuroscientists say that pattern recognition and emotional tagging help humans with quick decision making. Algorithms are a big part of machine learning, and these algorithms can aid executives with more evidence-based decision making using hot operational data on z/OS. Executives can now combine modern machines' processing power with their own ingenuity to avoid the flawed decisions that are sometimes caused by emotional tagging.
The IBM z13, like its predecessors, is designed from the chip level up to support data processing. This includes a strong, fast I/O infrastructure, cache on the chip to bring data close to processing power, security and compression capabilities of the coprocessors and I/O features, and the 99.999% data availability design of the coupling technologies.
The figure below shows ten easy steps for implementing an I/O configuration for your z13. The numbered steps are described after the figure.
1. a. When planning to migrate to a z13, the IBM Technical Support team can help you define a configuration design that meets your needs. The configuration is then used during the ordering process.
b. The IBM order for the configuration is created and passed to the manufacturing process.
c. The manufacturing process creates a configuration file that is stored at the IBM Resource Link website. This configuration file describes the hardware being ordered. This data is available for download by the client installation team.
d. A New Order report is created that shows the configuration summary of what is being ordered along with the Customer Control Number (CCN). The CCN can be used to retrieve the CFReport (a data file that contains a listing of the hardware configuration and changes for a central processor complex (CPC)) from Resource Link.
Make sure that you have the current PSP Bucket installed. Also, run the SMP/E report with fix category (FIXCAT) exceptions to determine whether any Program Temporary Fixes (PTFs) must be applied. Ensure that you have the most current physical channel ID (PCHID) report and CCN from your IBM service representative. Have extra cables (fiber optic and copper) available just in case some get damaged as they are being relocated.
When you plan your configuration, consider this information:
– Naming standards
– FICON switch and port redundancy
– Adequate I/O paths to your devices for performance
– OSA Channel Path Identifier (CHPID) configuration for network and console communications
– Coupling facility connections internally and to other systems.
Because the z13 server does not support attachment to the IBM Sysplex Timer, you must consider how the z13 will receive its time source. A z13 cannot join a Coordinated Timing Network (CTN) that includes a z10 or earlier server as a member. Because the z10 was the last server that supported IBM Sysplex Timer (9037) connectivity, the z13 cannot be configured as a member of a mixed CTN; it can only join an STP-only CTN. When you are planning to replace a z196 or zEC12 with a new z13, plan the replacement of channels that are not supported on the z13. You must carefully plan how to replace them, for instance replacing ISC-3 with HCA3-O, or with ICA SR for connectivity between a z13 and another z13. You might need to increase CF storage size when you replace a z196 or zEC12 with a z13. Coupling Facility Control Code (CFCC) level 20 requirements may be different from CFCC level 19 and earlier. Use the CFSizer Tool to get the new CF storage requirements.
The existing z196 or zEC12 I/O configuration is used as a starting point for using Hardware Configuration Definition (HCD). The z196 or zEC12 production input/output definition file (IODF) is used as input to HCD to create a work IODF that becomes the base of the new z13 configuration. When the new z13 configuration is added and the obsolete hardware is deleted, a validated version of the configuration is saved in a z13 validated work IODF.
5. Check
a. From the validated work IODF, create a file that contains the z13 IOCP statements. This IOCP statements file is transferred to the workstation used for the CHPID Mapping Tool (CMT). Hardware Configuration Manager (HCM) can also be used here to transfer the IOCP deck to and from the CMT.
b. The configuration file that is created by the IBM manufacturing process in step 1d is downloaded from Resource Link to the CMT workstation. The CHPID Mapping Tool (CMT) uses the input data from the files to map logical channels to physical ones on the new z13 hardware. You might have to make decisions in response to situations such as these:
– Resolving situations in which the limitations of the purchased hardware cause a single point of failure (SPoF). You might need to purchase more hardware to resolve some SPoF situations.
– Prioritizing certain hardware items over others.
c. After the CMT processing finishes, the IOCP statements contain the physical channels to logical channels assignment that is based on the actual purchased hardware configuration. The CHPID Mapping Tool (CMT) also creates configuration reports to be used by the IBM service representative and the installation team. The file that contains the updated IOCP statements created by the CMT, which now contains the physical channels assignment, is transferred to the host system.
d. Use HCD, the validated work IODF file created in step 5a, and the IOCP statements updated by the CMT to apply the physical channel assignments created by the CMT to the configuration data in the work IODF.
After the physical channel data is migrated into the work IODF, a z13 production IODF is created and the final IOCP statements can be generated. The installation team uses the configuration data from the z13 production IODF when the final power-on reset is done, yielding a z13 with an I/O configuration ready to be used.
IODFs that are modifying existing configurations can be tested in most cases to verify that the IODF is making the intended changes.
8. Available
a. If you are upgrading an existing z196 or zEC12, you might be able to use HCD to write an IOCDS to your system in preparation for the upgrade. If you can write an IOCDS to your current system in preparation for the upgrade, do so and let the IBM service representative know which IOCDS to use.
b. If the z196 or zEC12 is not network connected to the CPC where HCD is running, or if you are not upgrading or cannot write an IOCDS in preparation for the upgrade, use HCD to produce an IOCP input file. Download this input file to a USB flash drive.
The new production IODF can be applied to the z13 in these ways:
– Using the power-on reset process
– Using the Dynamic IODF Activate process
Communicating new and changed configurations to operations and the appropriate users and departments is important.
So you’ve built a killer application. It’s useful. It’s novel. It’s clever. Surely it’s going to be a huge success – fame and fortune await. Or…. do they?
As IBM Distinguished Engineer Frank De Gilio tells it at the SHARE Orlando conference, usefulness, cleverness and novelty in today’s market are not enough. There are three other important factors to consider:
Is it fast?
Is it efficient?
Is it easy to use?
Never has this applied more than it does to today's world, and the role that mainframe applications play in that world. We have many big monolithic mainframe applications, and they all live in the data center, with the mainframe servers in that data center as the aggregation point. But those days are gone. The days of terminals, and even laptops, are disappearing. Mobile is the new aggregation point, and time from development to production has moved from months to days.
This new world is the API economy. Monolithic applications are broken down into smaller pieces – functionality that you can call – known as services. This is the microservices architecture at work – the approach of designing applications as collections of smaller, independent services.
Frank De Gilio describes how this new business programming model can be divided into two roles:
Hardcores – These are the people who understand how the mainframe systems work
Scripters – These are the people who quickly assemble applications from available services, without needing to know the systems behind them
It is these Scripters that the services of today need to appeal to. Scripters don't care about platforms; they care about how fast, efficient, and easy to use your service is.
Unleashing current business applications as services provides big advantages, putting existing capabilities into the hands of new users. Combining the cloud service model with z/OS Parallel Sysplex is a winner!
There are times when you have no MCS consoles available and your only option is to use the HMC systems console. Unfortunately, the systems console is not very intuitive to use and doesn't present an interface similar to a traditional MCS console. z/OS Version 2 Release 1 comes to the rescue with support for the HMC integrated 3270 console, offering us a new type of MCS-like console called the HMCS. Let's take a look at this new HMCS console type.
The HMCS console is available during IPL and before and after SMCS availability
OSA-ICC attached MCS consoles not needed to bridge gap between NIP and SMCS availability
No additional H/W needed
The HMCS interface is identical to the operator interface on NIP, MCS and SMCS consoles.
Eliminates operator learning curve
Eliminates potential errors caused by unfamiliar and seldom used interfaces
The HMCS console can be used in emergency situations
VARY command not used to activate the console
Operator at HMC drags the Integrated 3270 icon to the LPAR icon and presses the “enter” key
There are many advantages to using the HMCS over the HMC systems console. The following table highlights a few of these.
Here are the general attributes and some characteristics for the HMCS.
Only one per z/OS image
Screen size is 43 rows by 80 columns
Supports seven colors and extended highlighting (reverse video, blinking, underscore) controlled by MPFLSTxx Parmlib specifications
As with SMCS consoles, SD (status display) and MS (message stream) modes are NOT supported
As with MCS consoles, messages will be displayed on the HMCS console if the operator has not yet logged on and LOGON is REQUIRED or AUTO
Commands will be rejected until a LOGON has been accepted
LOGON(OPTIONAL) can be specified for the HMCS console
The following commands are NOT supported for HMCS consoles:
VARY CN(consolename),ONLINE[,SYSTEM] [,FORCE]
Methods to delete the HMCS definition:
Use the IEARELCN utility
A VARY CN(consolename),OFFLINE or a RESET CN(consolename) command must first be issued to release the console name ENQ.
Use the SETCON DELETE command
HMCS console must be offline
As with MCS, if the HMCS console has LOGON(REQUIRED), but has not been logged on and is in the SYNCHDEST list, synchronous WTOR processing will NOT display the message on the HMCS console.
We do not delay the potential action of the synchronous WTOR by displaying it on a console that is not logged on when it is required to be online.
If the “Integrated 3270 console” icon on the HMC is attached to a z/OS LPAR image and the z/OS is IPLed (LOAD profile activated) then:
z/OS will use the HMCS console for NIP messages
No CONSOLxx definitions are needed for the NIP usage
Once NIP is over, the HMCS console can no longer be used if there are no CONSOLxx definitions for it
System selection of the console to use during NIP:
HMCS (Integrated 3270 console)
Devices defined in the IODF via NIPCON specifications
If an HMCS CONSOLxx definition exists and
the console was not used at NIP
the console is placed in “standby” (I'll discuss the new MCS console standby state in the next blog entry)
the console was used at NIP
the console becomes active as a z/OS operator console
Identical to NIP → MCS processing
If an HMCS CONSOLxx definition does not exist, the HMCS cannot be used after NIP (see the illustrative CONSOLxx sketch below)
To activate an HMCS console:
Attach the Integrated 3270 icon to the z/OS LPAR icon
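For illustration only (the console name and AUTH value are placeholders, and your installation's CONSOLxx conventions apply), an HMCS console definition in the CONSOLxx parmlib member could look something like this:

CONSOLE DEVNUM(HMCS)
        NAME(HMCS01)
        AUTH(MASTER)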
The TSO/E REXX external function call LISTDSI (list data set information) is used to retrieve information about a data set's attributes. z/OS V2R1 introduces some enhancements to the function.
Today I'll give you some updates on the following LISTDSI enhancements:
More information provided on multi-volume data sets
New multivol or nomultivol keyword
More information provided on multi-volume data sets
LISTDSI now provides information on all volumes of a multi-volume data set, not just the first.
The number of volumes is returned in variable SYSNUMVOLS and the list of volumes in SYSVOLUMES.
SYSNUMVOLS The number of volumes for a multi-volume dataset.
SYSVOLUMES List of volumes, up to SYSNUMVOLS, delimited by a space. Up to 412 characters. The first six characters will always match SYSVOLUME.
The existing SYSVOLUME returns the name of the first volume.
Here's an example:
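A minimal REXX sketch of the idea (the data set name is a placeholder; the SAY statements simply echo the new variables):

/* REXX - list the volumes of a multi-volume data set */
lrc = LISTDSI("'HLQ.MULTIVOL.DATA'")   /* placeholder data set name */
SAY 'First volume :' SYSVOLUME
SAY 'Volume count :' SYSNUMVOLS
SAY 'All volumes  :' SYSVOLUMES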
MULTIVOL or NOMULTIVOL keyword
The new multivol or nomultivol argument indicates whether size calculations should include all volumes when a data set’s allocation includes more than one volume.
If the argument multivol is passed to the LISTDSI function then:
variables that include size calculations will include the statistics for all volumes rather than just the first one.
If multivol is not specified on the function (the default) or nomultivol is specified then:
the calculations are done only for the first volume.
Here's an example comparing nomultivol (the default) with multivol:
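The following REXX sketch illustrates the difference (the data set name is a placeholder; SYSALLOC and SYSUSED are shown as examples of the size-related variables affected):

/* REXX - size figures with and without the MULTIVOL keyword */
dsn = "'HLQ.MULTIVOL.DATA'"            /* placeholder data set name  */
lrc = LISTDSI(dsn)                     /* default: first volume only */
SAY 'First volume only - allocated:' SYSALLOC 'used:' SYSUSED
lrc = LISTDSI(dsn 'MULTIVOL')          /* statistics for all volumes */
SAY 'All volumes       - allocated:' SYSALLOC 'used:' SYSUSED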
As you can see, with multivol the size-related variables include the allocation statistics for all volumes rather than just the first.
For PDSE data sets:
The new SYSALLOCPAGES variable returns the number of pages allocated to a PDSE
The new SYSUSEDPERCENT variable returns the percentage of pages used.
These variables, along with the existing SYSUSEDPAGES, are always returned for PDSE data sets.
SYSALLOCPAGES Indicates the number of pages allocated to a PDSE
SYSUSEDPERCENT Indicates a percentage of pages used out of the pages allocated for a PDSE. A number from 0 to 100, rounded down to the nearest integer value.
Here's an example:
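A minimal REXX sketch (the PDSE name is a placeholder):

/* REXX - page usage for a PDSE */
lrc = LISTDSI("'HLQ.EXAMPLE.PDSE'")    /* placeholder PDSE name */
SAY 'Pages allocated:' SYSALLOCPAGES
SAY 'Pages used     :' SYSUSEDPAGES
SAY 'Percent used   :' SYSUSEDPERCENT'%'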
If available, LISTDSI will return additional information about a data set created on an EAV (extended address volume) volume.
For a data set created on an EAV volume (EATTR=OPT), LISTDSI will return:
dataset creation time in the variable SYSCREATETIME
the creating jobname in the variable SYSCREATEJOB
the creating step name in the variable SYSCREATESTEP
SYSCREATETIME Indicates the time a data set was created, in the format hh:mm:ss, where hh is the hour, mm is the minute, and ss is the second. This variable is only set for EAV data sets and can be used together with the SYSCREATE variable to determine the date and time when a data set was created.
SYSCREATEJOB For EAV data sets, if available, indicates the name of the job that created the data set.
SYSCREATESTEP For EAV data sets, if available, indicates the name of the job step that created the data set.
So, for example, assuming the listed data set was created by a job through JCL submission and allocated on an EAV volume:
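A minimal REXX sketch of what that might look like (the data set name is a placeholder; SYSCREATE is the existing creation-date variable):

/* REXX - creation details for a data set on an EAV volume (EATTR=OPT) */
lrc = LISTDSI("'HLQ.EAV.DATA'")        /* placeholder data set name */
SAY 'Created on  :' SYSCREATE 'at' SYSCREATETIME
SAY 'Creating job:' SYSCREATEJOB ' step:' SYSCREATESTEP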
For more information on System z and the z/OS operating system see the following IBM Redbooks publications:
What sets z Systems apart is the combination of versatile connectivity options, the use of open standards, and the separation of data processing from I/O operations.
Data is produced on an unprecedented scale, and access to that data must be quick and safe, and must guarantee integrity. Data is an organization's most valuable asset in the digital age. As the growth continues, it is essential that the technology improves to keep the organization competitive.
The most efficient IT infrastructures that handle today's workloads usually have well balanced systems with superior data processing and I/O capabilities that are responsive and reliable. Such I/O capabilities have been a standard component of the IBM mainframe architecture since it was first introduced with the IBM S/360 in 1964. Over the past 50+ years I/O technologies have advanced significantly, and so have the mainframe I/O capabilities: from the original parallel channels, where I/O devices were connected directly using two copper cables (a bus cable, which carried one byte of information each way, and a tag cable, which indicated the meaning of the data on the bus cable), to today's Fibre Connection (FICON®), where optical transmitters, Fibre Channel switches/directors, and fiber-optic cables transport data at link rates of 2 Gbps, 4 Gbps, 8 Gbps, or 16 Gbps.
In addition, z Systems platforms have a unique channel subsystem that delivers high I/O bandwidth. The channel subsystem (CSS) was added to the IBM mainframe architecture to provide a pipeline through which data can be exchanged between systems or between a system and external devices via storage area networks and local area networks. The CSS is the channel path management layer that enables communication to and from system memory and peripheral devices at very fast rates.
The z Systems platforms have dedicated system assist processors (SAPs) in addition to the general purpose and specialty processors. I/O requests are handled by the SAPs, freeing up the general purpose and specialty processors to do other work. The IBM z13 offers up to 24 SAPs, supporting millions of I/O operations per second. This is also possible because all I/O features offered on the z Systems platforms offload some of the I/O operations to the hardware, using licensed internal code (LIC). The result is a significant improvement in both latency and bandwidth for transporting data.
In the z13, I/O features are plugged into an industry-standard PCIe I/O drawer with PCIe Gen3 interconnects, delivering throughput of 16 GBps to and from the I/O features. Different types of I/O features are available for each channel or link type, and they can be installed or replaced concurrently, so there is no disruption to the production environment. Each PCIe I/O drawer can support up to 320 FICON channels, which provide unmatched bandwidth to back-end storage systems, while up to 96 OSA-Express ports allow for direct high-speed Ethernet connectivity.
Other I/O features used for system-to-system communications are Integrated Coupling Adapter (ICA SR) and 10GbE RoCE Express (for IBM z/OS®-to-z/OS communications), while HiperSockets technology can be used for communications between logical partitions within the z Systems platform.
If you would like to learn more about z Systems connectivity options, typical uses, coexistence, and relative merits of the available I/O features, go to:
Flash Express introduces SSD technology to the IBM zEnterprise EC12 server, implemented by using Flash SSDs mounted in PCIe Flash Express feature cards. This blog entry provides a brief overview and an example of allocating Flash Express storage to a z/OS partition.
Flash Express Overview
Flash Express is an innovative solution available on zEC12 designed to help improve availability and performance to provide a higher level of quality of service. It is designed to automatically improve availability for key workloads at critical processing times, and improve access time for critical business z/OS workloads. It can also reduce latency time during diagnostic collection (dump operations).
Flash memory is a non-volatile computer storage technology. It was introduced on the market decades ago. Flash memory is commonly used today in memory cards, USB flash drives, solid-state drives (SSDs), and similar products for general storage and transfer of data. Until recently, the high cost per gigabyte and limited capacity of SSDs restricted deployment of these drives to specific applications. Recent advances in SSD technology and economies of scale have driven down the cost of SSDs, making them a viable storage option for I/O intensive enterprise applications.
An SSD, sometimes called a solid-state disk or electronic disk, is a data storage device that uses integrated circuit assemblies as memory to store data persistently. SSD technology uses electronic interfaces compatible with traditional block I/O hard disk drives. SSDs do not employ any moving mechanical components. This characteristic distinguishes them from traditional magnetic disks such as hard disk drives (HDDs), which are electromechanical devices that contain spinning disks and movable read/write heads. With no seek time or rotational delays, SSDs can deliver substantially better I/O performance than HDDs. Flash SSDs demonstrate latencies that are 10 - 50 times lower than the fastest HDDs, often enabling dramatically improved I/O response times.
For additional technical details on Flash Express see the following IBM Redbooks Publications:
Assignment of Flash Express storage to LPARs is done through the HMC or Support Element using the Manage Flash Allocation panel (from the Configuration task). This panel allows the user to allocate Flash Express storage in increments of 16GB to any partition defined in IOCDS A0, A1, A2 and A3. An example of the panel is shown in the following image:
Allocating Flash Express storage to partitions
There are a few points to be aware of when allocating Flash Express storage to a partition:
When an allocation is first defined, you will need to set the initial and maximum allocation in 16GB increments.
A new allocation will be detected by an active z/OS LPAR but no SCM memory will be varied online to z/OS.
A configured SCM allocation will be brought online to the z/OS image assigned to the partition at IPL time, unless the z/OS image is configured not to do so.
z/OS will allow more memory to be configured online, up to the maximum GB defined in this panel.
Any extra memory varied online or offline by z/OS will be reflected in the allocated GB in this panel if refreshed.
Minimum amounts are allocated from the available pool, so they cannot be overallocated.
Maximum amounts can be overallocated up to 5696GB.
Maximum amounts must be greater than or equal to the initial amounts.
To allocate Flash Express storage to a partition, select the add allocation action from the Select Action drop-down menu on the Manage Flash Allocation panel. The New Flash Allocation panel is displayed, as shown in the following image:
Enter the partition manually using the New option, or select the partition using the Use Existing option and drop-down menu. Only partitions that do not have an existing allocation are shown in the Use Existing drop-down menu. The New Flash Allocation panel cannot be used to modify an already existing allocation for a partition. Enter the allocation values for the specified partition name.
The allocations fields are:
Initial (GB) = Enter the initial Flash allocation to be used for the logical partition.
Maximum (GB) = Enter the maximum Flash allocation to be used for the logical partition.
Storage increment (GB) = Displays the Flash increment value.
Available (GB) = Displays the amount of Flash memory currently available
In the following example, we will allocate Flash Express storage for the active z/OS image named SC76 in the partition named A03 (Channel Subsystem 0, Multiple Image Facility ID 3). The RSM Enablement Offering support is installed. From SC76, we issue the MVS command D IPLINFO,PAGESCM, with the results shown below.
IEE255I SYSTEM PARAMETER 'PAGESCM': NOT_SPECIFIED
Since no PAGESCM parameter was specified, this indicates that the default value of ALL is used (reserves all SCM for paging at IPL time). In other words, if a Flash Express storage allocation is defined for the LPAR and PAGESCM=ALL is specified (or left to default), then at IPL time the initial amount of Flash Express storage specified will be used automatically by z/OS for paging. Likewise, if a specific amount is specified, this amount will be made available for paging.
However, if a Flash Express allocation is added when a z/OS image is active, then z/OS will detect the allocation but does not automatically vary the storage online for use. The CONFIG SCM MVS operator command has to be issued to vary the Flash Express storage online for use as paging space for the z/OS LPAR.
When you dynamically allocate and configure SCM storage online to z/OS, z/OS will start using SCM for local paging. You will still see percentages for the local page datasets. If you IPL, you will see that z/OS primarily uses SCM for local paging, so your local page datasets should be 0% or a lot lower than SCM storage.
To add a Flash Storage allocation for partition A03, from the Manage Flash Allocation task, we select an existing partition and then select the add allocation action from the Select action drop down menu. The New Flash Allocation Panel appears. From the Use Existing drop down menu we select partition A03 and enter Initial and Maximum amounts of 16GB, as shown in the following image:
Click OK to define the allocation. A pop-up panel will appear to indicate that the allocation was successfully added, as shown in the following image:
The new allocation will be displayed on the Manage Flash Allocation Partitions table, as shown in the following image:
When the allocation is defined through the HMC or SE for an active z/OS image, the allocation will be detected. On the z/OS image’s (SC76) console the message IAR034I is displayed:
IAR034I ADDITIONAL STORAGE-CLASS MEMORY DETECTED
From SC76, we issue the enhanced D ASM and D M commands to display Flash Express SCM related information and status. Each command's result is displayed below:
IEE200I 15.50.35 DISPLAY ASM 451
TYPE FULL STAT DEV DATASET NAME
PLPA 100% FULL 8136 PAGE.SC76.PLPA
COMMON 23% OK 8136 PAGE.SC76.COMMON
LOCAL 0% OK 8136 PAGE.SC76.LOCAL1
PAGEDEL COMMAND IS NOT ACTIVE
IEE174I 15.50.55 DISPLAY M 455
STORAGE-CLASS MEMORY STATUS
SCM INCREMENT SIZE IS 16G
IEE174I 15.51.04 DISPLAY M 457
STORAGE-CLASS MEMORY STATUS - INCREMENT DETAIL
ONLINE: 0G OFFLINE-AVAILABLE: 16G PENDING OFFLINE: 0G
SCM INCREMENT SIZE IS 16G
From these commands we see that 16GB of Flash Express storage is available (defined) but not in use (offline-available).
To vary the storage online to our example LPAR, we issue the CONFIG SCM(xxG),ONLINE command, as shown below along with results. The amount of storage configured online must be specified according to the supported increment size. From the displays above we see the supported increment size is 16G. Since we’ve only allocated 16G for this z/OS image we vary the entire (16G) amount online.
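In our case the command issued was the following (CF is the accepted abbreviation for CONFIG; the completion message shown is the one typically issued):

CF SCM(16G),ONLINE
IEE712I CONFIG   PROCESSING COMPLETE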
We again issue the D ASM and D M commands to display the status of the Flash Express storage and see that the 16GB initial value is now online and available.
IEE200I 16.07.10 DISPLAY ASM 500
TYPE FULL STAT DEV DATASET NAME
PLPA 100% FULL 8136 PAGE.SC76.PLPA
COMMON 23% OK 8136 PAGE.SC76.COMMON
LOCAL 0% OK 8136 PAGE.SC76.LOCAL1
SCM 0% OK N/A N/A
PAGEDEL COMMAND IS NOT ACTIVE
IEE207I 16.07.22 DISPLAY ASM 502
STATUS FULL SIZE USED IN-ERROR
IN-USE 0% 4,194,304 0 0
IEE174I 16.07.32 DISPLAY M 504
STORAGE-CLASS MEMORY STATUS
0% IN USE
SCM INCREMENT SIZE IS 16G
IEE174I 16.07.45 DISPLAY M 511
STORAGE-CLASS MEMORY STATUS - INCREMENT DETAIL
ADDRESS IN USE STATUS
0G 0% ONLINE
ONLINE: 16G OFFLINE-AVAILABLE: 0G PENDING OFFLINE: 0G
0% IN USE
SCM INCREMENT SIZE IS 16G
Prior to adding the Flash Express allocation to this z/OS image (SC76), at IPL time the following message was issued, indicating that although PAGESCM=ALL was specified (left to default in our example), no SCM (Flash Express storage) was available to be brought online:
IAR031I USE OF STORAGE-CLASS MEMORY FOR PAGING IS ENABLED - PAGESCM=ALL ,
Subsequent to adding the Flash Express allocation, at the next IPL, the following message is issued, indicating that PAGESCM=ALL was specified (left to default in our example) and all available (initial amount specified for the allocation) storage is brought online.
IAR031I USE OF STORAGE-CLASS MEMORY FOR PAGING IS ENABLED - PAGESCM=ALL
Today, we’re delighted to share the latest member of the IBM z Systems family: the IBM z13s. We think you will like it. A lot.
The z13s delivers many exciting possibilities over its predecessor, the IBM zBC12.
The short list includes:
Accelerated data and transaction serving
Integrated analytics for insight
Access to the API economy
An agile application development and operations environment
Efficient, scalable, and secure cloud services
End-to-end security for data and transactions
The high levels of virtualization provide options for cloud deployment to assist with such areas as application development and testing. The hypervisor is key for virtualization and the z13s supports both hardware and software hypervisors (PR/SM, KVM, and z/VM).
The underlying architecture has expanded to enable new solutions, such as integrated analytics, that bring valuable opportunities for your business while supporting existing applications.
N10: 1 CPC drawer with max. of 10 customizable PUs
N20: 1 or 2 CPC drawers with max. of 20 customizable PUs
Up to 4 TB
The z13s offers 26 capacity levels (times 6 CPs) for 156 settings. Plus, models are offered for either all-IFL or all-ICF configurations.
Analytical vector processing
Redesigned larger caches
Enhanced accelerators for data compression and cryptography
For enterprises aiming to move their IT infrastructures into closer alignment with their business plan, the z13s offers unparalleled levels of flexibility through virtualization, analytical insight, and security. Enterprise-wide agility will help you embrace the challenges of the exploding on-demand digital age.
Integration and connectivity in a large enterprise are hefty concerns. Lack of either of these components can disable systems from 24x7 functionality – and affect your bottom line. Adding a level of complexity are the enterprise-wide applications and data stores that also require integration and connectivity. Application developers, architects, systems integrators, and systems programmers face these complexities daily.
As IBM® IMS™ 13 is introduced, and with the delivery of IMS Open Database Manager (ODBM) and JDBC™ type-2 and type-4 drivers, connectivity is extended still further, allowing for new integration possibilities.
Three things to know about IMS:
IMS 13 offers enterprise-wide IT integration of your systems, based on z/OS® hosted solutions (IBM System z®) and multi-tiered platforms with distributed computing encompassing IMS.
IMS delivers the lowest cost transaction and hierarchical database management system for mission critical online transaction processing (OLTP).
IMS can address your need for proven performance, data integrity, and workload distribution, for enterprise-wide implementation.
Point of interest: In a recent study, IMS 13 processed 100K transactions per second (tps).
Additionally, many components are available for IMS-based environments that allow connectivity from distributed and mainframe-centric applications to IMS and its databases, and also from mainframe IMS applications to call out to obtain information from various data stores.
The IMS environment: The IMS environment can seamlessly integrate your computing infrastructure (hardware, software, and the related integration services) with vital business applications. IMS is:
• Integrated
• Open
• Virtualized
• Autonomic computing
The technologies and products that support IMS connectivity are:
• Java EE Connector Architecture (JCA)
• Java Message Service (JMS)
• Web services and SOAP
• IMS TMRA
• WebSphere MQ and IMS Bridge
• IBM Business Process Manager
The components of IMS, and the technologies and products that support IMS connectivity, are explained in detail in IMS Integration and Connectivity Across the Enterprise (see link below).
Total cost of ownership (TCO): IT architects and application design and development teams have many opportunities available related to modernization and enterprise integration challenges. The key challenge is developing solutions that combine stability, high availability, capacity, and performance, while simplifying application development and reducing the total cost of ownership (TCO).
IMS ODBM, IMS Open Transaction Manager Access (OTMA), and IMS Enterprise Suite integrate with other IBM software to create interconnected, intelligent systems that redefine the enterprise, lower your TCO, and deliver the qualities of service that you demand.
Join us: Our view is that integration and connectivity is an exciting journey. Join us.
Today’s blog posting is from an IBM Redbooks publication project that was led by Paolo Bruni, an IMS Information Management Project Leader at the International Technical Support Organization (ITSO) based in the Silicon Valley Lab. He has authored several IBM Redbooks publications about IMS and related tools. During his many years with IBM, in development and in the field, Paolo has worked mostly on database systems.
IMS Integration and Connectivity Across the Enterprise
Richard Lewis is an IBM employee who is part of the IBM Washington System Center, Advanced Technical Support Organization supporting z/VM and Linux on System z.
In August of this year IBM announced the acquisition of CSL International, a leading provider of virtualization management technology. As part of the sales enablement strategy for this new IBM technology, a team at the International Technical Support Organization (ITSO) in Poughkeepsie, NY, is beginning work to produce a Redbooks publication designed to help customers implement this new technology, formerly called CSL-WAVE and now named IBM Wave for z/VM. IBM Wave is a provisioning and productivity management solution for simplifying the control and usage of virtual Linux servers running on the IBM z/VM operating system.
This is the first week of the project, so the team has been spending time building a table of contents for the book, and becoming familiar with the technology. Wednesday we installed IBM Wave in the first member of a four member z/VM 6.3 Single System Image cluster.
We began by executing the z/VM and Directory Maintenance Facility (DIRMAINT) commands necessary to prepare for the installation of CSL-WAVE. Our environment had much of the preliminary setup already done, such as configuration of the z/VM System Management API (SMAPI), and configuration of the DIRMAINT EXTENT CONTROL file. In addition, the Linux virtual machine to host the IBM Wave knowledge base, web server and background task scheduler was already created for us. The Linux distribution was SUSE and the level was SLES 11 SP2.
Once all of the preliminary steps were completed we proceeded to install the IBM Wave rpm file. This went very smoothly and lasted only a couple of minutes. At the end of the rpm install we had the web server, knowledge base and background task scheduler up and running.
The next step was to point a browser at the IP address for the WAVESRV virtual machine (running the knowledge base, web server, and background task scheduler) to download the Java web start GUI client. The web page displayed provides a link for the GUI client; however, it also provides a really convenient tool to test whether or not the z/VM SMAPI is properly configured. This is a capability that many z/VM customers have requested for quite some time. When configuring the z/VM SMAPI there isn’t any convenient method of checking that all is running properly. The tool from IBM Wave provides just that capability.
As can be seen in Figure 2, the tool will connect to the z/VM System Management API and test the network connectivity as well as the capability to execute API requests that use the DIRMAINT facility.
After running the test tool, we invoked the CSL-WAVE GUI. The GUI prompts for the definition of a userid and password that will become a super user for further configuration. From this point we needed to configure the license, and define the processor and systems for CSL-WAVE to manage. Details of that process are best left for another installment. Even with plenty of time to discuss what was happening we were completely installed in less than 3 hours. Quite remarkable.
Qi Ye is a System z Senior IT Specialist working in GMU ATS as a System z architect in China. He has 8+ years of experience in System z technical support. He mainly focuses on System z hardware, z/OS, DB2 for z/OS, z/VM and Linux on System z. He currently provides pre-sales System z technical support to GMTs, including GCG.
This is my second time in Poughkeepsie working on an IBM Redbooks residency. I really enjoyed the residency, working with a group of smart people from different parts of the world. It was a wonderful experience.
In this residency, I was working on a new IBM Redbooks project named Set up Linux on IBM System z for Production. From the name of this Redbooks publication, you might already have guessed the contents of this book. Yes! In this book, we discuss all the considerations for moving business applications into production on the Linux on System z platform. It includes considerations of hardware planning, storage planning, Linux system planning, performance, software planning, accounting, zCloud, checklists, and so on.
We also provided several suggested Linux architectures based on our experiences worldwide. Let me give you a couple of examples to help you understand what we were working on for this Redbooks publication. For example, based on the experiences of our worldwide clients, we introduced several of the most common Linux on System z architectures. In each architecture, we discussed characteristics such as reliability, availability and serviceability (RAS), network, management, maintenance, and so on. Readers can reference our suggested architectures based on their own requirements.
As another example, Linux on System z can support different I/O channel types (FICON, zHPF and FCP) and corresponding disk types (ECKD and SCSI). We compared the different channel and disk types to help our readers understand their characteristics and differences. We hope we can help our readers select the most suitable architecture and understand the factors that need to be considered before putting business applications into production. We also provided some checklists in the book to assist our readers in planning the move to the Linux on System z platform.
In this residency, we had five people from different countries, including U.S., Brazil and China. They are JB Mills, Livio Sousa, David Sousa and Saulo Silva. Of course, we have our ITSO project leader Lydia Parziale. It's really a good experience to work with those smart people. They are really awesome. We worked together to discuss our client experiences, the book outlines and technical details. We were sitting around in a meeting room to design the scenarios, discussing the technical practice and even debating the most appropriate wording of a statement. I believe that is the beauty of a Redbooks residency. People from different countries with different backgrounds and expertise working together to contribute their knowledge, experiences and learn from each other as well. During the residency, we worked together; we ate together; we played together on the weekends. Having worked with those guys for four weeks, we’ve also built good friendships. I really like those guys! This is another benefit we have in the residency. And that friendship will continue in my life.
Poughkeepsie is a beautiful place. I love this place very much. It has beautiful seasons, especially autumn. Alongside the IBM office and hotel is the Hudson River. During the weekend, we could see beautiful scenes along the river banks or from the bridges, including a pedestrian bridge.
I'm not sure if this is the longest pedestrian walkway in the world, but I'm sure it is the longest pedestrian bridge I've ever walked across.
On one of the weekends, I accompanied my residency team members on a visit to a famous cake shop - Carlo's Bake Shop in New Jersey. The taste of the Red Velvet was so wonderful that I cannot forget it. Here is the Red Velvet.
There are lots of places to visit in New York City. Times Square, the Statue of Liberty, Rockefeller Center, Wall Street, 5th Avenue, the Empire State Building and Central Park are definitely the landmarks of the city. I also think the museums in New York City are among the most attractive places to visit. With an IBM badge, you can visit some museums for free. If you like visiting museums, you might spend a whole day in one; they really deserve a visit. We also visited an aircraft carrier museum, which is also well worth a visit.
I also went to the West Point Museum. It’s about 35 miles away from the Residence Inn hotel where I stayed. I really enjoyed the views when I was driving to West Point. I also recommend visiting that place.
Oh, wait, I almost forgot to mention another exciting experience in Poughkeepsie. That is shopping! Yes.... One of the biggest outlet malls on the east coast of North America - Woodbury Commons - is about 35 miles away from the IBM Poughkeepsie office. I can say you definitely can find what you want there. Shopping paradise? It is! I bought lots of stuff for my family and friends. Explore it for yourself!
If you want me to summarize the experience of a Redbooks residency, I would say "Work hard, play hard!" You can contribute there; you can learn much there; you can make good friends there; you can play hard there.
Did you know that all IT vendor products must go through rigorous qualification testing before they are supported on IBM z Systems® platforms?
To acquire qualification, vendors obtain licensed IBM patents, intellectual property, and know-how. This licensing provides vendors access to the proprietary IBM protocols and applications that are used on z Systems platforms.
In this post we will look at the criteria and processes required to successfully complete the IBM z Systems Geographically Dispersed Parallel Sysplex™ (GDPS®) qualification testing for Dense Wavelength Division Multiplexing (DWDM) vendors.
GDPS is an enterprise-wide continuous availability (CA) and disaster recovery (DR) solution that can manage recovery from planned and unplanned outages across distributed servers and z Systems platforms. GDPS can be configured in either a single site or in a multi-site configuration. It is designed to manage remote copy configuration between storage subsystems, automate Parallel Sysplex operational tasks, and affect failure recovery.
GDPS qualification testing for z Systems platforms is conducted at the IBM Vendor Solutions Connectivity (VSC) Lab in Poughkeepsie, New York.
IBM proprietary software and microcode utility test suites form a part of the GDPS qualification tests. They drive the various GDPS components and protocols to the full data rate of each link type that is transported by the DWDM equipment. This level of testing ensures that the maximum channel utilization is achieved and tested to levels well beyond typical client environments.
The test suites are used for verification of z Systems architecture functionality. The functionality test suites must be completely error free to be considered successful; all components within the complex are subject to this standard. Any errors detected during this testing are captured and analyzed by the test suites.
The test suites are also used for verification of z Systems architecture recovery by creating various fault and error conditions. The recovery tests check for the correct detection of a fault or error condition by the attached subsystems, and ensure that the recovery adheres to the z Systems architecture.
The figure below depicts the environment that is used for DWDM vendor qualification testing.
The IBM Redbooks team recently published IBM Redpapers related to z Systems qualified DWDM vendor products for GDPS solutions with Server Time Protocol (STP). The papers describe the z Systems qualification process and the applicable environments, protocols, and topologies that were tested.
The team members that wrote the papers are subject matter experts from the IBM VSC Lab: Pasquale Catalano and Andrew Crimmins, and Bill White (IBM Redbooks team leader).
For more information about IBM Redpapers related to z Systems qualified DWDM vendor products, go to:
Eduardo Simoes Franco is a Consulting IT Specialist. His focus is on supporting and promoting IT solutions on the IBM System z platform. He has fifteen years of experience with IT solutions and Linux, starting at IBM as a Linux on System z Specialist. He has CLA and LPI certifications.
Many people today ask whether the mainframe -- now called IBM System z -- is ready for new technologies such as Cloud and Big Data. Only a little analysis is needed to see that Linux on System z is a perfect match for both Cloud and Big Data. Now 50 years young, this incredible machine -- or, better said, "technology" -- gives us surprises day after day. Even the most skeptical IT managers can see the value of Linux on System z.
But what makes the System z so fantastic, what makes such a big difference? Here's a short list:
The fastest I/O in the industry - the CPUs and fast I/O are able to process many terabytes of data per second
HiperSockets, a virtually unlimited-bandwidth network that uses memory to communicate between virtual servers
A proactive management environment
Flexibility brought by the zBX hybrid solution
The list goes on and on, but those are some of my favorites. In addition to these characteristics, System z has something truly innovative: the ability to run Linux.
In the early 2000's, the mainframe was joined with the most promising operating system on the market - Linux. Together, they have evolved and learned to work as a team, and today they make an awesome pair! System z processors have improved to a clock speed of 4.2 GHz for the zBC12 and 5.5 GHz for the zEC12, making Linux even more comfortable. The Penguin, meanwhile, has brought hundreds of open source applications to System z, making the mainframe even more versatile. With the stability, performance, safety and scalability of System z, and with the efficiency, effectiveness and resilience of Linux, we have an ideal platform to meet the demands of the ever-changing IT world. Whether you're a young IT professional or have years of experience, whether you're interested in Cloud or Big Data or both, Linux on System z will meet your needs.
IBM invests extensively in z/VM, the rare jewel created in the 70's as a virtualization environment. Linux feels right at home on z/VM, as if it were born to work on System z hardware. z/VM is highly optimized and stable because its hypervisor runs in microcode, without overlapping layers of software. But if z/VM is so old, it probably has an old-fashioned interface, right? Wrong! IBM Wave for z/VM, the newest member of the System z family of products, complements z/VM with a graphical and intuitive interface which facilitates the daily management of the IT environment. With IBM Wave, it's easy to create new servers and maintain the health of existing ones.
Here's the big question: Can I save money running Linux on System z from a Total Cost of Ownership (TCO)1 perspective?
If we examine the background of the System z platform, with z/VM and Linux, we see that the number of software licenses required is reduced because System z has an incredible specialized processor to run Linux, known as an Integrated Facility for Linux (IFL), that is equivalent to multiple x86 cores in a single core.
Due to the large number of software products today that are licensed according to the number of cores that are used, you will notice the difference when, for example, the number of software licenses can be reduced from 10 (one per core on your x86 systems) to a single license on System z (1 IFL = 1 core = 1 license).
1 For a more accurate TCO study, contact an IBM representative.
The swapbar is a row on the physical screen displaying point-and-shoot fields that help the user navigate between ISPF logical screens. It is enabled or disabled by issuing the swapbar command from ISPF. It's very useful if you have many logical screens open in ISPF as it allows you to quickly switch from one screen to another.
However, you can easily forget about using the swapbar because the swapbar fields can get “lost” amongst the fields for an ISPF panel.
In z/OS V2R1, a function is provided that allows a user to modify the format of the swapbar display to make it more ‘visible’.
Here's how a standard swapbar display appears:
With z/OS V2R1 the swapbar command is enhanced to display a pop-up panel where the user can specify options for customizing the swapbar format.
Issuing the swapbar / command displays the pop-up panel, which offers some customization options to increase the visibility of the swapbar.
Here's the pop-up panel (Tailor SWAPBAR display).
If the option for showing the swapbar divider line is selected, a divider line is displayed between the ISPF logical screen and the swapbar.
It is possible to specify color and hilite settings for the entire swapbar or for a single swapbar entry.
Entering an S in the update action field causes the entire swapbar to use the specified color and hilite settings.
The settings for the entire swapbar are saved in the user's system profile.
Entering a C in the update action field causes just the swapbar entry for the current logical screen to use the specified color and hilite settings.
The settings for a swapbar entry are only in effect for the current ISPF session.
Entering a D in the update action field clears the color and hilite settings for the swapbar entry for the current logical screen.
The changes to the swapbar settings take effect on exiting the popup panel.
Here's an example of the swapbar with a dividing line, white text and no hilite.
Here's an example with a dividing line, with red text and no hilite.
And finally, my personal favorite, the swapbar with a dividing line, with yellow text and REVERSE hilite.
For more information on System z and the z/OS operating system see the following IBM Redbooks publications: