Modified by JamesPistilli
The IBM z13, like its predecessors, is designed from the chip level up to support data processing. This includes a strong, fast I/O infrastructure, cache on the chip to bring data close to processing power, security and compression capabilities of the coprocessors and I/O features, and the 99.999% data availability design of the coupling technologies.
The figure below shows ten easy steps for implementing an I/O configuration for your z13. The numbered steps are described after the figure.
a. When planning to migrate to a z13, the IBM Technical Support team can help you define a configuration design that meets your needs. The configuration is then used during the ordering process.
b. The IBM order for the configuration is created and passed to the manufacturing process.
c. The manufacturing process creates a configuration file that is stored at the IBM Resource Link website. This configuration file describes the hardware being ordered. This data is available for download by the client installation team.
d. A New Order report is created that shows a configuration summary of what is being ordered, along with the Customer Control Number (CCN). The CCN can be used to retrieve the CFReport (a data file that contains a listing of the hardware configuration and changes for a central processor complex (CPC)) from Resource Link.
Make sure that you have the current PSP Bucket installed. Also, run the SMP/E report with fix category (FIXCAT) exceptions to determine whether any Program Temporary Fixes (PTFs) must be applied. Ensure that you have the most current physical channel ID (PCHID) report and CCN from your IBM service representative. Have extra cables (fiber optic and copper) available just in case some get damaged as they are being relocated.
When you plan your configuration, consider this information:
– Naming standards
– FICON switch and port redundancy
– Adequate I/O paths to your devices for performance
– OSA Channel Path Identifier (CHPID) configuration for network and console communications
– Coupling facility connections internally and to other systems.
Because the z13 server does not support attachment to the IBM Sysplex Timer (9037), you must consider how the z13 will receive its time source. The z10 was the last server to support Sysplex Timer connectivity, so the z13 cannot be configured as a member of a mixed Coordinated Timing Network (CTN) and cannot join a CTN that includes a z10 or earlier server as a member. The z13 can only join an STP-only CTN. When you plan to replace a z196 or zEC12 with a new z13, also plan the replacement of channel types that are not supported on the z13. You must carefully plan how to replace those; for instance, ISC-3 links must be replaced with HCA3-O links, or with ICA SR links for connectivity between two z13 servers. You might also need to increase CF storage size when you replace a z196 or zEC12 with a z13, because Coupling Facility Control Code (CFCC) level 20 requirements can differ from those of CFCC level 19 and earlier. Use the CFSizer tool to determine the new CF storage requirements.
The existing z196 or zEC12 I/O configuration is used as a starting point for using Hardware Configuration Definition (HCD). The z196 or zEC12 production input/output definition file (IODF) is used as input to HCD to create a work IODF that becomes the base of the new z13 configuration. When the new z13 configuration is added and the obsolete hardware is deleted, a validated version of the configuration is saved in a z13 validated work IODF.
a. From the validated work IODF, create a file that contains the z13 IOCP statements. This IOCP statements file is transferred to the workstation used for the CHPID Mapping Tool (CMT). Hardware Configuration Manager (HCM) can also be used here to transfer the IOCP deck to and from the CMT.
b. The configuration file that is created by the IBM manufacturing process in step 1d is downloaded from Resource Link to the CMT workstation. The CHPID Mapping Tool (CMT) uses the input data from the files to map logical channels to physical channels on the new z13 hardware. You might have to make decisions in response to situations such as these:
– Resolving situations in which the limitations of the purchased hardware cause a single point of failure (SPoF). You might need to purchase more hardware to resolve some SPoF situations.
– Prioritizing certain hardware items over others.
c. After the CMT processing finishes, the IOCP statements contain the physical channels to logical channels assignment that is based on the actual purchased hardware configuration. The CHPID Mapping Tool (CMT) also creates configuration reports to be used by the IBM service representative and the installation team. The file that contains the updated IOCP statements created by the CMT, which now contains the physical channels assignment, is transferred to the host system.
d. Use HCD, the validated work IODF file created in step 5a, and the IOCP statements updated by the CMT to apply the physical channel assignments created by the CMT to the configuration data in the work IODF.
After the physical channel data is migrated into the work IODF, a z13 production IODF is created and the final IOCP statements can be generated. The installation team uses the configuration data from the z13 production IODF when the final power-on reset is done, yielding a z13 with an I/O configuration ready to be used.
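As an illustration of the CMT output described above, a CHPID statement in the final IOCP deck might look like the following (the CHPID number, PCHID value, and partition names are hypothetical examples, not from an actual configuration):

   CHPID PATH=(CSS(0,1),50),SHARED,PARTITION=((LP01,LP02),(=)),PCHID=1C4,TYPE=FC

Here, logical CHPID 50 in channel subsystems 0 and 1 is mapped to physical channel ID 1C4; the PCHID keyword is the assignment that the CMT inserts based on the actual purchased hardware.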
IODFs that are modifying existing configurations can be tested in most cases to verify that the IODF is making the intended changes.
a. If you are upgrading an existing z196 or zEC12, you might be able to use HCD to write an IOCDS to your system in preparation for the upgrade. If you can write an IOCDS to your current system in preparation for upgrade, do so and let the IBM service representative know which IOCDS to use.
b. If the z196 or zEC12 is not network connected to the CPC where HCD is running, or if you are not upgrading or cannot write an IOCDS in preparation for the upgrade, use HCD to produce an IOCP input file. Download this input file to a USB flash drive.
The new production IODF can be applied to the z13 in these ways:
– Using the power-on reset process
– Using the Dynamic IODF Activate process
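For the dynamic approach, the new production IODF can be activated with the z/OS ACTIVATE operator command; for example (the IODF suffix 99 is a hypothetical value), a test run followed by the actual activation:

   ACTIVATE IODF=99,TEST
   ACTIVATE IODF=99

The TEST option verifies whether the dynamic change is possible without actually performing it, which is a prudent first step before committing the new configuration.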
Communicating new and changed configurations to operations and the appropriate users and departments is important.
For more information, see IBM z13 Configuration Setup.
Contributed by Daniel Nussbaummueller
There have been many changes in our IT world over the past 25 years that have led to the need for autonomics in our database environments, especially in DB2 on z/OS. But while the solutions are frequently discussed, the question remains: how do you actually implement them?
Each company may have different priorities which dictate the order of the implementation steps. Company A may need to apply intelligence to their reorg utilities as their top priority, while Company B may need to address utility standards because of the impending retirement of the support person for their homegrown DB2 utility generator. Regardless of the order in which you start, IBM provides the software for a comprehensive autonomic environment that addresses the business problems that most companies face: limited expertise, the need for greater application availability, and the need to control costs by moving work to off-peak hours.
Here we will show you how to move from the traditional steps to a modernized autonomic environment by implementing an active strategy for your DB2 maintenance tasks, following these five steps:
Step 1: Collect the metrics and related statistics for utility maintenance
First of all, you have to collect all relevant statistical data on your DB2 objects. This data can be used to filter out objects that are physically disorganized. Your goal is to run maintenance by exception and avoid wasting resources on unnecessary utility runs. IBM provides two DB2 stored procedures that collect statistics about the objects defined in a profile, generate an alert (placed into a table) when the statistics exceed your criteria, and perform the RUNSTATS that your optimizer needs.
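To make this concrete, the real-time statistics that drive this kind of filtering live in the DB2 catalog and can be inspected directly. The following query is a sketch (the database name MYDB and the 25% threshold are hypothetical choices, not values from the source): it lists table spaces whose inserts since the last REORG exceed a quarter of the total rows, a common disorganization indicator:

   SELECT DBNAME, NAME, TOTALROWS, REORGINSERTS
     FROM SYSIBM.SYSTABLESPACESTATS
    WHERE DBNAME = 'MYDB'
      AND TOTALROWS > 0
      AND REORGINSERTS * 4 > TOTALROWS;

In practice, the stored procedures and profiles described above apply this style of exception criteria for you, so hand-written queries like this are mainly useful for spot checks.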
Step 2: Group your objects
Grouping your DB2 objects can be achieved in several ways. DB2 Automation Tool provides a function for object grouping, called Object Profiles, that offers added flexibility. Using Object Profiles, you can include objects on which you want to run utilities, as well as exclude objects that you want the utilities to ignore. Object Profiles are similar to DB2 TEMPLATEs: they allow table spaces and index spaces to be chosen for processing in much the same way.
Step 3: Create exceptions and thresholds for utilities
The next step to implement an active autonomic strategy is to run all your maintenance by exception filtering. The DB2 Automation Tool provides a function called the Exception Profile. This definition contains the conditions under which users want to run utilities. When combined with Object Profiles and Utility Profiles, the Exception Profiles act as a filter against the objects specified in the Object Profile.
Step 4: Build optimized utility JCL and jobs
Before execution, you first have to build the optimized utility JCL and jobs. For this, Job Profiles are used to connect the different profiles created in the DB2 Automation Tool. A Job Profile is the master profile and associates all the other profiles (Utility Profiles, Object Profiles, and Exception Profiles). The combined profiles, headed by the Job Profile, form the basis of a DB2 Automation Tool task. You can submit this task manually or schedule it by using the DB2 administrative task scheduler.
Step 5: Execute the jobs in a predefined maintenance window
Today, a typical maintenance strategy has predefined jobs in a job scheduler. These jobs are run in maintenance windows weekly, monthly, and quarterly. With the Autonomics Framework, you can leverage your own batch scheduler to spawn evaluation jobs, as well as to start the Autonomics Director procedure at any time during your maintenance window.
After following these steps and transforming your passive environment into an active autonomic one, corrective actions are taken automatically by the system: it monitors and analyzes the related metrics to proactively make recommendations and even execute them. These are tasks typically done by a DBA. By automating these basic administration tasks, you give DBAs the freedom to work on tasks of higher business value. More importantly, they no longer rely on old, homegrown processes that are difficult to maintain and to keep current with new DB2 versions.
And how about you: have you already moved from a passive to an active strategy in your environment? What benefits have you seen? Tell us about your experiences with the change process.
For more information see the IBM Redbooks publication Modernize Your DB2 for z/OS Maintenance with Utility Autonomics.
Modified by lydiap
Bill White is an IBM Redbooks Project Leader for z Systems Hardware, Networking, and Connectivity. He works with technical experts from around the globe to produce books, papers, guides, and blogs.
The IBM z Systems platform offers a framework for standards and open source, which are key to making virtualization effective, from creating and managing virtual machines through building and automating a cloud environment.
Kernel-based virtual machine (KVM) is an open source virtualization technology that turns the Linux kernel into an enterprise-class software hypervisor. KVM for IBM z Systems uses hardware virtualization support that is built into the z Systems platform, known as IBM Processor Resource/Systems Manager™ (PR/SM™). This means that KVM for IBM z can do things such as scheduling tasks, dispatching CPUs, managing memory, and interacting with I/O resources (storage and network) within the z Systems platform.
1. What is the importance of KVM for IBM z?
KVM for IBM z uses the common Linux-based tools and interfaces, while taking advantage of the robust scalability, reliability, availability, and high throughput that are inherent to the z Systems platform. And those strengths have been developed and refined on the z Systems platform over several decades.
The z Systems platform also has a long history of providing security for applications and sensitive data in virtual environments. It is the most securable platform in the industry, with security integrated throughout the stack (in hardware, firmware, and software).
In addition, KVM for IBM z is capable of managing and administering multiple virtual machines, which allows thousands of Linux-based workloads to run simultaneously on a single z Systems platform.
2. What is the advantage of using KVM for IBM z?
KVM for IBM z is an easy-to-deploy and simple-to-use hypervisor that integrates virtualization capabilities into the IT infrastructure, including:
- Enabling the sharing of CPU and I/O (storage and networking) resources by virtual machines
- Allowing for the over-commitment of CPU, memory, and swapping of inactive memory
- Supporting live virtual machine relocation (workload migration) with minimal impact
- Permitting dynamic addition and deletion of virtual I/O devices
- Supporting policy-based, goal-oriented performance management and monitoring of virtual CPU resources
3. How do you manage a KVM for IBM z environment?
KVM for IBM z Systems provides standard Linux and KVM interfaces for management and operational control of the environment, such as:
- The command-line interface (CLI) is a common, familiar Linux interface used to issue commands and interact with the KVM hypervisor. The user issues successive commands to change or control the environment.
- Libvirt is open source software that provides low-level virtualization management for KVM and many other hypervisors; its command-line interface, virsh, is used to interact with KVM.
- An open source tool called Nagios can be used to monitor the KVM for IBM z environment.
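As a brief illustration of the virsh interface mentioned above, a few common commands might look like this (the guest name linuxguest1 is a hypothetical example):

   virsh list --all           (list all defined virtual machines and their state)
   virsh start linuxguest1    (boot a defined virtual machine)
   virsh console linuxguest1  (attach to the guest's console)
   virsh shutdown linuxguest1 (request a graceful guest shutdown)

These are standard libvirt commands; the same familiar interface applies on KVM for IBM z as on other Linux platforms.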
4. What is the high-level architecture of KVM for IBM z?
KVM for IBM z runs in a z Systems logical partition (LPAR) and creates virtual machines as Linux processes. The Linux processes use a modified version of another open source module, known as a quick emulator (QEMU). QEMU provides I/O device emulation and device virtualization inside the virtual machine.
The KVM for IBM z Systems kernel provides the core virtualized infrastructure. It can schedule virtual machines on real CPUs and manage their access to real memory. QEMU runs in user space and implements virtual machines by using KVM module functionality.
QEMU virtualizes real storage and network resources for a virtual machine, which in turn uses drivers (virtio_blk and virtio_net) to access these virtualized storage and network resources as shown in Figure 1.
Figure 1. KVM for IBM z Systems reference architecture
5. What are some key design points when designing a KVM for IBM z infrastructure?
With KVM for IBM z Systems, you will need to plan and design the virtualized environments in which you build and run the virtual machines. Things to consider include:
- KVM supports CPU and memory over-commitment, so using Nagios to monitor virtual CPUs and memory usage is important as the virtual machines increase in numbers.
- A common preferred networking practice is to isolate management traffic from user traffic to ensure sensitive data is kept separate and secure.
- Different storage infrastructures and protocols are supported with KVM for IBM z; you will need to design the storage architecture to complement your environment.
- KVM for IBM z provides standard Linux and KVM interfaces for management. The way in which your management tools will interact with the virtualized pool of resources needs to be planned out.
These key design points will help you get started with implementing your KVM for IBM z infrastructure. To learn more, see “Getting Started with KVM for IBM z Systems”, SG24-8332 at: http://www.redbooks.ibm.com/redpieces/abstracts/sg248332.html?Open
Modified by KaranITSO
Flash Express introduces SSD technology to the IBM zEnterprise EC12 server, implemented by using Flash SSDs mounted in PCIe Flash Express feature cards. This blog entry provides a brief overview and an example of allocating Flash Express storage to a z/OS partition.
Flash Express Overview
Flash Express is an innovative solution available on zEC12 designed to help improve availability and performance to provide a higher level of quality of service. It is designed to automatically improve availability for key workloads at critical processing times, and improve access time for critical business z/OS workloads. It can also reduce latency time during diagnostic collection (dump operations).
Flash memory is a non-volatile computer storage technology. It was introduced on the market decades ago. Flash memory is commonly used today in memory cards, USB flash drives, solid-state drives (SSDs), and similar products for general storage and transfer of data. Until recently, the high cost per gigabyte and limited capacity of SSDs restricted deployment of these drives to specific applications. Recent advances in SSD technology and economies of scale have driven down the cost of SSDs, making them a viable storage option for I/O intensive enterprise applications.
An SSD, sometimes called a solid-state disk or electronic disk, is a data storage device that uses integrated circuit assemblies as memory to store data persistently. SSD technology uses electronic interfaces compatible with traditional block I/O hard disk drives. SSDs do not employ any moving mechanical components. This characteristic distinguishes them from traditional magnetic disks such as hard disk drives (HDDs), which are electromechanical devices that contain spinning disks and movable read/write heads. With no seek time or rotational delays, SSDs can deliver substantially better I/O performance than HDDs. Flash SSDs demonstrate latencies that are 10 - 50 times lower than the fastest HDDs, often enabling dramatically improved I/O response times.
For additional technical details on Flash Express see the following IBM Redbooks Publications:
IBM zEnterprise EC12 Technical Guide
IBM zEnterprise EC12 Configuration Setup
Manage Flash Allocation
Assignment of Flash Express storage to LPARs is done through the HMC or Support Element using the Manage Flash Allocation panel (from the Configuration task). This panel allows the user to allocate Flash Express storage in increments of 16GB to any partition defined in IOCDS A0, A1, A2, and A3. An example of the panel is shown in the following image:
Allocating Flash Express storage to partitions
There are a few points to be aware of when allocating Flash Express storage to a partition:
– When an allocation is first defined, you need to set the initial and maximum allocation in 16GB increments.
– A new allocation is detected by an active z/OS LPAR, but no SCM memory is varied online to z/OS.
– A configured SCM allocation is brought online to the z/OS image assigned to the partition at IPL time, unless the z/OS image is configured not to do so.
– z/OS allows more memory to be configured online, up to the maximum GB defined in this panel.
– Any extra memory varied online or offline by z/OS is reflected in the allocated GB in this panel when it is refreshed.
– Minimum amounts are allocated from the available pool, so they cannot be overallocated.
– Maximum amounts can be overallocated, up to 5696GB.
– Maximum amounts must be greater than or equal to the initial amounts.
To allocate Flash Express storage to a partition, select the Add Allocation action from the Select Action drop-down menu on the Manage Flash Allocation panel. The New Flash Allocation panel is displayed, as shown in the following image:
Enter the partition name manually by using the New option, or select the partition by using the Use Existing option and its drop-down menu. Only partitions that do not have an existing allocation are shown in the Use Existing drop-down menu. The New Flash Allocation panel cannot be used to modify an existing allocation for a partition. Enter the allocation values for the specified partition name.
The allocations fields are:
– Initial (GB): Enter the initial Flash allocation to be used for the logical partition.
– Maximum (GB): Enter the maximum Flash allocation to be used for the logical partition.
– Storage increment (GB): Displays the Flash increment value.
– Available (GB): Displays the amount of Flash memory currently available.
In the following example, we will allocate Flash Express storage for the active z/OS image named SC76 in the partition named A03 (Channel Subsystem 0, Multiple Image Facility ID 3). The RSM Enablement Offering support is installed. From SC76, we issue the MVS command D IPLINFO,PAGESCM, with the results shown below.
IEE255I SYSTEM PARAMETER 'PAGESCM': NOT_SPECIFIED
Since no PAGESCM parameter was specified, this indicates that the default value of ALL is used (reserves all SCM for paging at IPL time). In other words, if a Flash Express storage allocation is defined for the LPAR and PAGESCM=ALL is specified (or left to default), then at IPL time the initial amount of Flash Express storage specified will be used automatically by z/OS for paging. Likewise, if a specific amount is specified, this amount will be made available for paging.
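For reference, the PAGESCM system parameter is set in the IEASYSxx parmlib member. The forms below summarize the documented values (the 16G figure is just an illustrative size):

   PAGESCM=ALL    (reserve all available SCM for paging at IPL; the default)
   PAGESCM=16G    (reserve a specific amount of SCM for paging)
   PAGESCM=NONE   (do not use SCM for paging at IPL)

In our example no value was coded, so the ALL default applies, as the D IPLINFO,PAGESCM output above confirms.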
However, if a Flash Express allocation is added when a z/OS image is active, then z/OS will detect the allocation but does not automatically vary the storage online for use. The CONFIG SCM MVS operator command has to be issued to vary the Flash Express storage online for use as paging space for the z/OS LPAR.
When you dynamically allocate and configure SCM storage online to z/OS, z/OS will start using SCM for local paging. You will still see percentages for the local page datasets. If you IPL, you will see that z/OS primarily uses SCM for local paging, so your local page datasets should be 0% or a lot lower than SCM storage.
To add a Flash Storage allocation for partition A03, from the Manage Flash Allocation task, we select an existing partition and then select the add allocation action from the Select action drop down menu. The New Flash Allocation Panel appears. From the Use Existing drop down menu we select partition A03 and enter Initial and Maximum amounts of 16GB, as shown in the following image:
Click OK to define the allocation. A panel pop up will appear to indicate that the allocation was successfully added, as shown in the following image:
The new allocation will be displayed on the Manage Flash Allocation Partitions table, as shown in the following image:
When the allocation is defined through the HMC or SE for an active z/OS image, the allocation will be detected. On the z/OS image’s (SC76) console the message IAR034I is displayed:
IAR034I ADDITIONAL STORAGE-CLASS MEMORY DETECTED
From SC76, we issue the enhanced D ASM and D M commands to display Flash Express SCM related information and status. Each command’s result displayed below:
IEE200I 15.50.35 DISPLAY ASM 451
TYPE FULL STAT DEV DATASET NAME
PLPA 100% FULL 8136 PAGE.SC76.PLPA
COMMON 23% OK 8136 PAGE.SC76.COMMON
LOCAL 0% OK 8136 PAGE.SC76.LOCAL1
PAGEDEL COMMAND IS NOT ACTIVE
IEE174I 15.50.55 DISPLAY M 455
STORAGE-CLASS MEMORY STATUS
SCM INCREMENT SIZE IS 16G
IEE174I 15.51.04 DISPLAY M 457
STORAGE-CLASS MEMORY STATUS - INCREMENT DETAIL
ONLINE: 0G OFFLINE-AVAILABLE: 16G PENDING OFFLINE: 0G
SCM INCREMENT SIZE IS 16G
From these commands we see that 16GB of Flash Express storage is available (defined) but not in use (offline-available).
To vary the storage online to our example LPAR, we issue the CONFIG SCM(xxG),ONLINE command, as shown below along with results. The amount of storage configured online must be specified according to the supported increment size. From the displays above we see the supported increment size is 16G. Since we’ve only allocated 16G for this z/OS image we vary the entire (16G) amount online.
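The command entered for our example, using the 16G increment size, was:

   CONFIG SCM(16G),ONLINE

The results follow.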
IEE195I SCM LOCATIONS 0G TO 16G ONLINE
IEE712I CONFIG PROCESSING COMPLETE
We again issue the D ASM and D M commands to display the status of the Flash Express storage and see that the 16GB initial value is now online and available.
IEE200I 16.07.10 DISPLAY ASM 500
TYPE FULL STAT DEV DATASET NAME
PLPA 100% FULL 8136 PAGE.SC76.PLPA
COMMON 23% OK 8136 PAGE.SC76.COMMON
LOCAL 0% OK 8136 PAGE.SC76.LOCAL1
SCM 0% OK N/A N/A
PAGEDEL COMMAND IS NOT ACTIVE
IEE207I 16.07.22 DISPLAY ASM 502
STATUS FULL SIZE USED IN-ERROR
IN-USE 0% 4,194,304 0 0
IEE174I 16.07.32 DISPLAY M 504
STORAGE-CLASS MEMORY STATUS
0% IN USE
SCM INCREMENT SIZE IS 16G
IEE174I 16.07.45 DISPLAY M 511
STORAGE-CLASS MEMORY STATUS - INCREMENT DETAIL
ADDRESS IN USE STATUS
0G 0% ONLINE
ONLINE: 16G OFFLINE-AVAILABLE: 0G PENDING OFFLINE: 0G
0% IN USE
SCM INCREMENT SIZE IS 16G
Prior to adding the Flash Express allocation to this z/OS image (SC76), the following message was issued at IPL time, indicating that although PAGESCM=ALL was specified (left to default in our example), no SCM memory (Flash Express storage) was available to be brought online:
IAR031I USE OF STORAGE-CLASS MEMORY FOR PAGING IS ENABLED - PAGESCM=ALL ,
Subsequent to adding the Flash Express allocation, at the next IPL, the following message is issued, indicating that PAGESCM=ALL was specified (left to default in our example) and all available (initial amount specified for the allocation) storage is brought online.
IAR031I USE OF STORAGE-CLASS MEMORY FOR PAGING IS ENABLED - PAGESCM=ALL
My next residency that will be starting on April 2, 2013 is about how to set up a Linux on System z environment for production. I've been really busy examining and clarifying the objectives for this residency. I wanted to make sure I pulled together a team of people that have real world, live customer experiences.
The aim of this residency is to examine the setup of an LPAR using shared CPUs with memory for a production environment, and another LPAR that shares some CPUs but also has a dedicated one for production. Running in z/VM mode, it can virtualize servers and, based on z/VM shares, prioritize and control their resources. The size of the LPAR or z/VM resources depends on the workload, so I'm hoping to use a WebSphere Application Server environment and a DB2 environment to calculate the resources for Java workloads and the database workload.
I also want to examine the network decisions that need to be made regarding the use of VSWITCHes, shared OSA, or HiperSockets, and the HiperPAV or FCP/SCSI attachment used in conjunction with an SVC storage controller, with their associated performance and throughput expectations.
Some of the topics I would like this team to explore are things like, how do you size the z/VM environment itself? How do you decide on Linux guests and their sizing and what decisions need to be made if you're using a database, a web server and applications - especially Java applications.
I'm hoping that this Redbooks publication will show the power of System z virtualization and the flexibility of sharing resources in a production environment!
Modified by lydiap
Karen Reed is a certified IT systems engineering professional with experience in a broad range of hardware and software architectures. She has 20 years of experience in planning and implementation support of IBM system monitoring, analytics and automation software on System z, supporting clients in the western USA. Her areas of expertise include analyzing client business needs and designing IT solutions, and project management. She presented IBM software solutions for systems management and automation at SHARE, CMG and IBM Cloud conferences.
Today’s applications are built upon an infrastructure of servers, routers, disk storage, and network components. The complexity of this IT infrastructure has an impact on application availability, increasing the likelihood of an outage.
There are few events that impact a company like having an IT outage, and then finding the incident reported across the internet and newspapers. Customers, employees, and suppliers expect to be able to do business with you around the clock, and from around the world. Maintaining high availability in normal day-to-day operations is fundamental for success. To improve the availability of business operations, a risk analysis of critical applications will help identify single points of failure.
A single point of failure (SPOF) exists when a hardware or software component of a system can potentially make an application unavailable to users. Highly available systems tend to avoid a single point of failure by using redundancy in every operation.
Consider the following diagram of an end-to-end application. Users access the application and database servers through the internet, routers, and firewalls. Each piece of the path is required to complete a transaction.
Applications such as this, with no redundancy in hardware and network components, rely on the durability of each piece of hardware. However, hardware does not run forever, and networks can fail due to weather, construction, and load. A risk analysis of the components will identify those most likely to fail, the cost of duplicating each component, and the potential increase in availability. End-to-end applications, common in today's environment, can have long pathways that traverse many pieces of hardware, network, and software. Companies depending on complex IT infrastructure should prepare for small and large disasters.
In every organization, there are critical applications (necessary for daily business) and other applications that, while necessary, can withstand some downtime. The focus for improving availability and IT resiliency should be on those critical applications and their infrastructure first. Increasing availability by avoiding painful outages helps businesses thrive.
Modified by MartinKeen
If your system displays a WTOR (write to operator with reply) message, it waits for someone to enter a reply. What happens if nobody takes care of it because your operators are busy with other business tasks and don't watch the consoles continuously? Your job will be delayed, which can ultimately result in a delay of your business workload.
Especially for synchronous WTORs, an unanswered message can lead to a system outage and a business-critical situation.
z/OS has two answers for such situations: Auto-Reply and enhanced synchronous WTOR notification.
Auto-Reply is a kind of simple automation because the system can answer a WTOR for itself after a given number of seconds or minutes has expired.
Enhanced synchronous WTOR notification is a new way for how z/OS can handle synchronous WTORs.
Previously, a synchronous WTOR was displayed on only one console, which can also be the HMC integrated console, and stayed there indefinitely. The system stops operation, including the continuous update of its status in the COUPLE data set, until the WTOR is replied to. If no one is aware of such a WTOR, the system can be forced out of the sysplex, because Sysplex Failure Management (SFM) can conclude that the system is no longer responsive.
Now, such a WTOR is displayed with a red background to claim an operator's attention. Further, the WTOR jumps to a different console after 2 minutes to claim attention again. The figure below shows the new red-colored console displaying a synchronous WTOR.
Now the operators are able to recognize a synchronous WTOR, find the right console and react to such a WTOR.
Find out more about this feature, and others in z/OS V2.1 in the IBM Redbooks publication Key Functions in z/OS Version 2 Release 1.
Robert Schulz is a Senior IT Specialist with IBM, in Austria. He has 25 years of experience in mainframe system programming in z/OS, z/VM and Linux for System z. Robert is supporting client installations in architecture, implementations and problem determination. His areas of expertise include z/OS Parallel System, z/VM SSI, GDPS and system availability, Communications Server and storage management.
I typically start many logical screens when using ISPF. I reckon most system programmers do the same and tend to use the same set of logical screens for their sessions. Until now, we had to start each screen manually: ISPF allows a user up to 32 logical screens, but there was no automation for creating them at ISPF startup.
Now, with z/OS V2R1, ISPF allows you to define a set of logical screens that are started automatically when ISPF is invoked.
To enable this support you have to be at z/OS V2R1. When you start ISPF, you specify the name of a variable on the ISPF start command. You can either:
Define your own variable
Use the default variable ZSTART
The variable must contain the identifier ISPF, followed by the command delimiter, followed by the command stack used to start the logical screens.
For example, if I choose the variable name MYSTART, I could define the variable and assign the following values:
Variable name: MYSTART
Variable contents: ISPF;2;START 3.4;START 10;START S;LOG;SWAP 1
The name of the variable is specified as an option with the ISPF or ISPSTART command, for example:
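Using the MYSTART variable defined earlier, the start command might look like this (a sketch; verify the exact syntax in the ISPF documentation for your z/OS level):

```
ISPF MYSTART
```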
If a variable name is not specified with ISPF or ISPSTART, the default profile variable ZSTART is used for the initial command stack.
If ZSTART is not found or does not contain the ISPF identifier, ISPF starts normally.
You can add a variable or modify ZSTART from Dialog Test -> Variables (option 7.3).
For example, here I'll update the ZSTART variable to start the following screens: DSLIST, SDSF, and Command Shell, and then switch to DSLIST.
Now when I start ISPF from the TSO READY screen using just ISPF (specifying no variable on the start command), all the logical screens defined in the ZSTART variable are started.
You can bypass the startup of any logical screens defined in the ZSTART variable by using the new BASIC keyword when starting ISPF.
New XALL command
I've almost forgotten about the new XALL command (thanks Yves Colliard for the suggestion to include this new feature). Take a look at the comments section to see how Yves has started ISPF sessions using REXX.
At the end of the day you're ready to log off ISPF and end all of your logical screens. This could take many keystrokes and, for a lazy sysprog like me, a bit of patience. However, with z/OS V2R1 there's a handy new command, XALL, which will attempt to terminate all of the ISPF logical screens for you.
A new =XALL command is provided to help terminate all logical screens with one command.
The =X command is propagated to every logical session to terminate each application that supports =X.
If =X is not supported, the termination process halts on that logical screen.
Once that logical screen is terminated, =XALL processing continues for each remaining logical screen.
So if I have several logical screens open and I want a fast exit, I type =XALL on the command line, like so:
With a bit of luck, all the screens support the =X command and I am dumped back out to TSO.
For more information on System z and the z/OS operating system, see the following IBM Redbooks publications:
ABCs of z/OS System Programming: Volume 1, SG24-6981-02
ABCs of z/OS System Programming: Volume 2, SG24-6982-02
ABCs of z/OS System Programming: Volume 3, SG24-6983-03
ABCs of z/OS System Programming: Volume 4, SG24-6984-00
ABCs of z/OS System Programming: Volume 5, SG24-6985-02
ABCs of z/OS System Programming: Volume 6, SG24-6986-00
ABCs of z/OS System Programming: Volume 7, SG24-6987-01
ABCs of z/OS System Programming: Volume 8, SG24-6988-01
ABCs of z/OS System Programming: Volume 9, SG24-6989-05
ABCs of z/OS System Programming: Volume 10, SG24-6990-04
ABCs of z/OS System Programming: Volume 11, SG24-6327-0
ABCs of z/OS System Programming: Volume 12, SG24-7621-00
ABCs of z/OS System Programming: Volume 13, SG24-7717-01
In the course of an IT career, many of us may have sat at our desks looking at a sluggish application and wondered, "If I increase the amount of memory here or there, will this improve performance?" And, hopefully, your next thoughts would have been about the impact on I/O operations and cost, CPU usage, and transaction response times.
Although the magnitude of these changes can vary widely based on a number of factors, including potential I/Os to be eliminated, resource contention, workload, configuration, and tuning, you should carefully consider whether your environment could benefit from the addition of more memory to your software functions.
Significant performance benefits can be experienced by increasing the amount of memory assigned to various functions in the IBM® z/OS® software stack, operating system, and middleware products. IBM DB2® and IBM MQ buffer pools, dump services, and large page exploitation are just a few of the functions whose ease of use and performance can be improved when more memory is made available to them.
Recently, an IBM Redbooks Redpaper was published that can help you to examine the performance implications of increasing memory in the following areas:
- DB2 buffer pools
- DB2 tuning
- IBM Cognos® Dynamic Cubes
- MDM with larger DB2 buffer pools
- Java heaps and Garbage Collection tuning and Java large page use
- MQ v8 64-bit buffer pool tuning
- Enabling more in-memory use by IBM CICS® without paging
- TCP/IP FTP
- DFSort I/O reduction
- Fixed pages and fixed large pages
Different environments, of course, may experience a wide range of performance benefits, but there does seem to be enough evidence to suggest that configuring more memory could be a positive enhancement for many installations due to reduced I/O rates, improved transaction response times, and, in some cases, reduced CPU time.
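To make the first item in the list concrete: giving a DB2 buffer pool more memory is a single command. A sketch (the pool name and size are illustrative, not a recommendation; PGFIX(YES) additionally page-fixes the pool so it can benefit from large frames):

```
-ALTER BUFFERPOOL(BP1) VPSIZE(200000) PGFIX(YES)
```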
To read more about this and see some examples, read the IBM Redbooks Redpaper:
Richard Lewis is an IBM employee who is part of the IBM Washington System Center, Advanced Technical Support Organization, supporting z/VM and Linux on System z. This is part two of a multi-post series on IBM Wave for z/VM, a systems management GUI for z/VM. This software promises to simplify the task of administering Linux guests running on z/VM. To read part one, see https://ibm.biz/BdDaBv.
Many customers who will be installing and implementing IBM Wave will no doubt have an existing z/VM and Linux for System z environment. In this context the customer most likely will already have “gold master” Linux for System z images that are cloned to create new virtual machines to be used for applications and middleware that run the business. IBM Wave easily integrates into an environment such as this by providing the capability to identify an existing virtual machine as a “prototype”.
Marking a virtual machine in this manner causes IBM Wave to ensure that the virtual machine will not be logged on to the system, and also to create a prototype directory entry to be used when creating clones of this virtual machine.
Before designating a virtual machine in this manner, it is a good idea to execute the “init for wave” process so that all clones created from this base will already have that step completed.
Once this setup work is done, creating a clone from this base is as simple as right-clicking on the prototype icon and selecting “clone from this prototype”.
Before the clone process begins you will have an opportunity to specify the name for the new virtual machine, the z/VM password for the new virtual machine, and network connectivity for the new server.
The information provided will then be used by IBM Wave to create a series of background tasks to complete the clone process. These tasks consist of creating the new virtual machine directory entry, and then copying the minidisks from the base image to the new image. You can follow the progress of these tasks through the log viewer.
When the clone process is completed, you will have a new server that can be activated and populated with applications and or middleware.
One step you might want to do at this point is to specify the group this new virtual machine should be part of. The topic of projects and grouping is quite powerful and will be the subject of another blog on IBM Wave, but in this figure, we are assigning a Site Defined Group named MyClones.
To activate the new virtual machine you simply right click on the icon and select Activate. A window to confirm this and begin the process will be displayed.
With just a few clicks and a few minutes of time, you have provisioned a new server, all without needing to know the many individual z/VM commands behind the process. When your new server is up and running, it will already be ready for management by IBM Wave, so you will be able to gather performance data from it, as well as add resources when needed.
Contributed by Daniel Nussbaummueller
Do you still create utility jobs manually to maintain several objects? Do you think that your maintenance jobs need to run on a predefined frequency? IBM DB2 Automation Tool for z/OS helps you with these challenges.
By combining object, utility, and job profiles, DB2 Automation Tool can reduce routine manual tasks so that you can focus on more complex responsibilities that add more value to your company. Additionally, when using exception profiles with DB2 Automation Tool, you can define in a utility profile when to run a utility against an object in an object profile. You select the conditions from a statistics list in the exception profile.
But instead of talking about the solution itself, we want to give you more information about what these profiles actually are, how they work and how you can use them to create an autonomic infrastructure:
Object profiles allow you to create reusable lists of objects. You can group related objects into one profile, such as all objects for a particular application, objects with similar maintenance requirements, etc. In an object profile, you can include objects on which you want to run utilities, as well as exclude objects that you want the utilities to ignore.
You can create object profiles by using either the IBM Management Console for IMS and DB2 or the ISPF panels in Automation Tool. Here you can see the GUI for creating one in the IBM Management Console:
A utility profile is a collection of one or more utilities and their respective run-time options. Using a technique similar to creating object profiles, we can now create a utility profile to address any particular maintenance requirement. You select the utilities that you want to execute, and "Update Utility" allows you to specify the parameters for each given utility. Once created, a utility profile can be updated at any time to include more utilities or to change the options for a given utility.
The following list shows the utilities and functions that are available:
Exception profiles allow you to define when a utility in a utility profile should be run against an object in an object profile. You select the conditions from a statistics list in the exception profile. The exception profile is placed in the job profile with the object and utility profile. During the job build, exception processing produces a list of accepted objects and a list of rejected objects. When creating utility profiles, you can specify whether the utility is to be executed on the accepted objects, the rejected objects, or both.
There are 184 available selection criteria that we can use to select candidate objects, and we can provide our own criteria through a user exit interface. Ten default exception profiles are supplied; viewing these will give you a good idea of how to create and specify your own based on your site standards:
Job profiles combine the object profiles and utility profiles (and optionally exception profiles) into a set. If no exception profile is included in the job profile, then each utility is run unconditionally on each object on the object list. You can combine multiple object profiles with multiple utility profiles, and can specify the job step order for the generated job. The combined profiles, which are headed by the job profile, form the basis of a DB2 Automation Tool task. You can submit this task manually or schedule it by using the DB2 administration task scheduler or your site’s scheduling software. The job profile will evaluate the exception profile against the objects in the object profile and when a condition is met will generate JCL and Utility statements to perform the tasks specified in the utility profile against the objects that met the condition.
To create a job profile use the ‘C’reate command on the command line:
These profiles help IT staff reduce the time spent thinking about repetitive tasks, and they analyze the environment so that only what is needed runs, and only when it is needed, reducing the CPU utilization of maintenance jobs that do not really need to run in a defined maintenance window. So by combining object, job, exception, and utility profiles with the DB2 Automation Tool, you can make your database environment work more efficiently.
With the addition of the Management Console and the Autonomics Director, you can now exercise not only "passive" autonomics but also start to move into "active" autonomics. The Management Console makes monitoring the current symptoms and automating the suggested actions easy.
And how about you: have you already created an autonomic infrastructure? What were your experiences using these profiles in DB2? Tell us what you learned while working with these products. If you want to see additional material about the process of creating an autonomic infrastructure using DB2, see the IBM Redbooks publication Modernize Your DB2 for z/OS Maintenance with Utility Autonomics.
Data grows all the time. There's nothing you can do about it, so every day we struggle and adopt new technology: new hardware, new software, combining and exploiting new features and enhancements, and data… keeps growing. You can't just say, "Hey, wait a minute there!", because the business is driven by this data, and to be successful, this data needs to be converted into information fast. There's no point talking about terms and definitions here; what you need to know is that z/OS brings you a better way to approach data consolidation. By combining three features, RLS for catalogs, catalog alias constraint relief, and catalogs larger than 4 GB, you will find a good alternative to your current methods.
Before z/OS 1.12, ICF catalogs were limited to 4 GB in size. On reaching this limit, we were supposed to split the catalog and pour the data into a second catalog, then a third, then a fourth, and things started to get a little out of control. A similar thing happened before z/OS 1.13 for aliases, where the limit was between 3,000 and 3,500 aliases per user catalog.
With z/OS 1.13, you might think the problems were resolved by implementing extended addressability for the ICF catalog using a BCS structure, and by turning on the EXTENDEDALIAS feature to increase the limit to 500,000 aliases in the same user catalog. But in fact, it wasn't enough.
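For reference, the alias constraint relief mentioned above is switched on with a catalog address space command; a sketch (verify against the DFSMS Managing Catalogs documentation for your release):

```
F CATALOG,ENABLE(EXTENDEDALIAS)
```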
Larger ICF catalogs mean more entries and more GRS enqueue contention when updating the BCS records. The enqueue wait time significantly impacts the performance of any large catalog, especially during the batch job window. Record level sharing (RLS) for accessing ICF catalogs is now introduced in z/OS V2.1 to meet requirements for better performance and availability. Benchmarks show a clear elapsed time reduction and lower CPU usage when running massive DELETE jobs.
Now it is time to consider these three wonderful catalog enhancements to avoid disruptive and dangerous splitting or reorganization, and to reduce management effort by using fewer user catalogs. Check out the latest IBM Redbooks publication, Implementing Key Functions in z/OS Version 2 Release 1. You will find the use cases and solutions you need!
Guillermo Cosimo is a z/OS System Programmer at Banco Galicia in Buenos Aires, Argentina. He has 7 years of experience in the mainframe architecture. His areas of expertise include z/OS, USS, DFSMS, zFS, REXX, SMP/E, and DR. He holds a graduate degree in Systems Engineering from the Universidad Abierta Interamericana.
Zhou Hui is a Senior IT Specialist from STG Lab Services of IBM China. He joined IBM China in 2005 and has 9 years of experience in the IBM mainframe and storage product field. His areas of expertise include z/OS, z/VM, Parallel Sysplex, System z hardware, DS8000, and GDPS solutions. He has been providing technical service to major banking clients in China, delivering the latest System z hardware and z/OS products.
Contributed by Anja Jessica Paessler
Last weekend I spent quite some time thinking about how to create a blog post to get people as excited about DB2 for z/OS temporal data management as I am. After a while I decided to call an old friend to take a break and maybe receive new input. Somehow we ended up playing an old childhood game in which you put each letter of your name on a different line and then find a word for each letter that describes you. In the end you have a list of attributes describing your personality. So why not try this to describe something technical, such as temporal data management, as well? Here is what I came up with:
- Time-based data management that can help businesses manage the increasing amounts of data and retention requirements
- Enables you to accurately track information and data changes over time.
- Makes it easy to insert, update, delete and query data in the past, present or future by using new and standardized SQL syntax.
- Provides an efficient and cost-effective way to address auditing and compliance requirements.
- Opportunity to have multiple stored versions for every logical row.
- Remembers all past versions of rows in a table. If we are talking about a bank account for example, DB2 for z/OS temporal data management will help you to provide a detailed history of their accounts to your customers – and not by using additional tables with triggers or stored procedures as is current practice.
- Application development, maintenance and management can be simplified.
- Leverage DB2 for z/OS temporal data management to obey regulations and fulfill customer needs, no matter if you are in the insurance, financial, retail, human resources or any other sector.
As you can see, DB2 for z/OS temporal data management provides many ways to help you and your customers successfully face today's business challenges by recording and maintaining ever-increasing amounts of data.
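If you want a feel for what this looks like in practice, here is a minimal, hypothetical sketch of a system-period temporal table in DB2 for z/OS (the table and column names are invented; verify the DDL against the DB2 for z/OS SQL Reference for your version):

```sql
-- Base table with a SYSTEM_TIME period maintained by DB2.
CREATE TABLE account
  (id        INT           NOT NULL,
   balance   DECIMAL(12,2),
   sys_start TIMESTAMP(12) NOT NULL GENERATED ALWAYS AS ROW BEGIN,
   sys_end   TIMESTAMP(12) NOT NULL GENERATED ALWAYS AS ROW END,
   trans_id  TIMESTAMP(12) GENERATED ALWAYS AS TRANSACTION START ID,
   PERIOD SYSTEM_TIME (sys_start, sys_end));

-- History table plus versioning: from now on DB2 keeps every
-- past version of each row automatically -- no triggers needed.
CREATE TABLE account_hist LIKE account;
ALTER TABLE account ADD VERSIONING USE HISTORY TABLE account_hist;

-- Query the balance as it was at a point in the past.
SELECT balance
  FROM account FOR SYSTEM_TIME AS OF TIMESTAMP('2014-01-01-00.00.00')
  WHERE id = 4711;
```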
Eager to learn more about DB2 for z/OS temporal data management? Read the IBM Redbooks publication Managing Ever Increasing Amounts of Data with DB2 for z/OS Using Temporal Data Management, Archive Transparency, and the DB2 Analytics Accelerator.
When running in a virtualized environment, any reasonable administrator tries to reduce the time needed for standard tasks. In the early days of Linux on z/VM, this resulted in a procedure using golden images and cloning. This procedure simplified the deployment of Linux to new z/VM guest systems and has served many administrators well for a long time. However, over time, the Linux systems changed. With the introduction of newer technologies such as systemd on Linux, a number of problems came about that made the once so nifty feature of cloning golden images more and more difficult.
Problem: Make the image golden
During the first bootup, Linux creates unique data in many locations; the number and location depend on the installed software. It requires detailed knowledge of the software used to make sure that all these strings are re-created during the first bootup of the cloned machine.
Unfortunately, the system provides no means to detect the needed changes. However, leaving some of those places not updated can later result in security issues and data corruption in the involved clones. A clone that appears to work at first is not necessarily done right.
This issue is not new; it already existed with SLES 11 and RHEL 6. However, it became worse with the introduction of systemd and its machine ID. It is therefore recommended to move away from deploying clones and to use either automated installation or the imaging software KIWI.
Solution: do not create the unique data in the first place
The actual problem exists only because cloning relies on the configuration of an already booted system. This system is then cleaned up and prepared for the actual cloning process; after cleanup, it is called the "golden image". All of the files needed within the production system are already created during the first startup of this system. The cleanup process must take care to remove all data from the system that should be unique, and this data then has to be re-created during the first bootup of the clone.
The only reliable solution is to avoid creating the unique data in the first place. This means the golden image should never have been booted before new virtual machines are cloned from it. To avoid issues, you may want to use automated installations as described in The Virtualization Cookbook for z/VM 6.3, RHEL 7.1 and SLES 12. However, if you have to rely on ready-built images, creating virtual appliances is the way to go.
This is where the imaging software KIWI steps in.
Instead of creating a golden image to clone, a virtual appliance is created. This virtual appliance is never booted during the image creation process. The deployment of the virtual appliance is very similar to that of a golden image: it is copied to a new disk and given several parameters to finalize its configuration during the first startup.
If your business processes require you to test a ready-built image, this is also possible with a virtual appliance. However, needed changes to the image must be made in the KIWI configuration and will only be available with the next iteration of the virtual appliance image. You don't apply the changes to the live system, but to the configuration of the virtual appliance.
This procedure can simplify automation. For example, to provide an image with all updates installed, you just need to provide the update repositories during image creation. When new updates that you need in your golden image become available, simply repeat the build process and the resulting image will contain all the updates. This also results in more secure systems at redeployment time, compared to deploying the updates only after starting the original image.
With KIWI, the setup of a virtual appliance is defined in a set of configuration files and overlay files for the image. KIWI also allows certain scripts to run during the various steps of image building and during the first bootup of the image. The actual configuration is described in The Virtualization Cookbook for IBM z Systems Volume 3: SUSE Linux Enterprise Server 12, SG24-8890-00. Further information is found at https://doc.opensuse.org/projects/kiwi/doc/
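As a rough illustration of what such an image description looks like, here is a minimal, hypothetical KIWI config.xml sketch (element names follow the legacy KIWI schema; the author, repository URL, and package list are placeholders, not working values):

```xml
<image schemaversion="6.1" name="my-sles12-appliance">
  <description type="system">
    <author>Jane Doe</author>
    <contact>jane@example.com</contact>
    <specification>Minimal SLES 12 virtual appliance</specification>
  </description>
  <preferences>
    <version>1.0.0</version>
    <!-- build a virtual disk image with an ext4 root file system -->
    <type image="vmx" filesystem="ext4"/>
  </preferences>
  <repository type="rpm-md">
    <source path="http://example.com/repo/sles12"/>
  </repository>
  <packages type="image">
    <package name="kernel-default"/>
    <package name="systemd"/>
  </packages>
</image>
```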
Our IBM Redbooks blogger, Berthold Gunreben, is a Build Service Engineer at SUSE in Germany. He has 14 years of professional experience in Linux and is responsible for the administration of the mainframe system at SUSE. Besides his expertise with Linux on z Systems, he is also a Mainframe System Specialist certified by the European Mainframe Academy: http://www.mainframe-academy.de. His areas of expertise include High Availability on Linux, Realtime Linux, Automatic Deployments, Storage Administration on the IBM DS8000®, Virtualization Systems with Xen, KVM, and z/VM, as well as documentation. Berthold has written extensively in many of the SUSE manuals.
Serkan Sahin is a Chief Architect for IBM Strategic Outsourcing Service Delivery in the Middle East and Africa. He has more than 18 years of professional experience in the IT industry. He is an experienced architect, system engineer, and IT consultant who has been developing complex, wall-to-wall, multi-component IT infrastructure solutions. He has worked for IBM since 1999 and has been an IBM and Open Group Certified Architect as a Technology Architect since 2008. He is a certified IBM instructor for Architectural Thinking, Architecting for Performance Engineering, and Technical Leadership College classes given internally at IBM as professional give-back. He has degrees in both computer science and industrial electronics.
It is very enjoyable to talk about System z and cloud, because System z itself was made for cloud 50 years ago; the name "cloud" just wasn't invented until recent years. The System z machine itself is cloud. Today, System z is an important component for any company that wants to build an enterprise-level cloud solution.
The IBM Enterprise Cloud System includes software, storage, and server technologies in one simple, flexible, and secure factory-integrated solution; there is no similar solution on the market. It helps IT organizations deliver cloud services by rapidly deploying a trusted, scalable, OpenStack-based Linux cloud environment with System z qualities of service, and from the start you have the ability to scale up to 6,000 virtual machines with 3 TB of memory in a single footprint, whether for a public, private, or hybrid cloud solution. Nowadays, everybody focuses on how well a cloud solution can be managed, which means the focus is actually on "cloud orchestration". (Cloud orchestration is all about managing the interconnections and interactions among cloud-based and on-premises business units.)
IBM System z provides dynamic provisioning and cloud orchestration for business-critical workloads, with added monitoring, performance and data backup, full virtualization, and cloud management software through IBM Cloud Management Suite for System z.
Additionally, IBM z/VM is an industry-leading and proven hypervisor and is the heart of the Enterprise Cloud System. z/VM provides impressive horizontal and vertical scalability, rapid server provisioning, rock-solid workload isolation, and the ability to virtualize key system resource management.
IBM Wave for z/VM provides an intuitive virtualization management platform for managing the Enterprise Cloud Systems.
The management software layer (cloud orchestration) is only the visible part of the iceberg that is the cloud solution's cost structure, as shown in the figure below. I know discussion of orchestration is necessary, and I am not saying it isn't required, but it does not drive your main cost structure. Your cost is still hidden in the invisible part of the iceberg: your computing power infrastructure architecture.
What does it mean when we say that System z can handle 6,000 virtual machine workloads and 3TB memory requirement in a “single footprint”? It means you can “reduce your hidden costs” dramatically.
Let me do a very simple calculation with a basic assumption: if you require 500 servers in your cloud system, and each virtual server requires at least 2 virtual CPUs (vCPUs) and 6 GB of RAM, your requirement snapshot is basically:
Let's see what happens if you try to size the same requirement in an x86 environment: an x86 bare-metal hypervisor on a single machine can today give a maximum of 64 vCPUs and 1 TB of RAM per server. When you do the simple calculation again, you find that you will need:
It looks very easy. But wait - before coming to any conclusion, see what the real result is when we consider a real architecture for a virtualization solution.
Your first barrier in x86 virtualization is "constraints". Ask yourself this before starting your solution: what are my limitations with x86 virtualization? First, you need to consider best practices, because this platform does not give you the capacity-planning freedom of System z: you cannot overcommit your capacity and utilize it 100%. You must understand the criteria for best performance.
For example, if your workload requires more CPU power (more than 6 vCPUs), best practices dictate that you will not find this power in x86 virtualization because of performance and scalability limitations, which should be a very important consideration in your solution. Oops! Do I need to learn more? YES…
If you require more CPU power, you first need to have a reserve of hardware for a dedicated virtual image, or you need to consider setting aside a physical alternative for your workload.
"Come on," you may say, "how much hidden cost can there be in this infrastructure solution? It is really cheap when I want to buy a single x86 server." You're right when you consider only a single machine, but most likely your infrastructure will grow incrementally and ad hoc. Let's make our calculation a little bit more complex and see what happens:
To reduce the initial hardware investment cost, I choose a two-socket 2U x86 platform with eight cores per socket and a maximum core speed of 2.9 GHz. Assuming two hardware threads per core, and roughly 2 GHz of capacity behind each vCPU, your new calculation will be:
2 sockets x 8 cores x 2 threads = 32 logical processors for each server
32 logical processors x 2.9 GHz = about 92 GHz of CPU power per server
92 GHz / 2 GHz per vCPU = 46 vCPUs that each server can host (without any redundancy or overcommit calculation)
The result: 1,000 vCPUs / 46 per server means 22 servers are now required…
Assume power consumption of 900 watts for each server with a single power supply. For redundancy, I require two power supplies per server, so I need to allow for a 1.8 kW power load for each server. Another assumption, based on my previous experience: this server can consume 10,000 kWh per year under full load, which means 1.14 kWh of energy spent each hour. I also assume the server weighs 30 kg fully configured. Let's say the data center allows 600 kg per square meter, 1 rack = 2 square meters, and 15,000 watts per rack.
Your new calculation will be:
15 kW per rack / 1.8 kW per server = 8 servers per rack
8 servers x 1.14 kWh = 9.12 kWh per rack each hour
You need 3 racks of space (6 square meters)
Hourly you will spend 27.36 kWh of power (without cooling electricity consumption)
At $0.25 per kWh, you need $6.84 per hour for the servers' electricity alone. Your yearly electricity cost will be about $60,000 USD for only 22 servers!
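The rack and power figures above can be double-checked with a short script. The inputs are the article's assumptions; note that the hourly total counts three full racks (24 server slots), while counting only the 22 actual servers gives a slightly lower figure:

```python
import math

# Figures taken from the article's assumptions.
servers = 22
connected_kw = 1.8          # 2 x 900 W power supplies per server
draw_kwh_per_hour = 1.14    # ~10,000 kWh/year at full load
rack_budget_kw = 15.0
rate_usd_per_kwh = 0.25

servers_per_rack = int(rack_budget_kw // connected_kw)   # 8 servers fit in a rack
racks = math.ceil(servers / servers_per_rack)            # 3 racks needed

# Rounding up to full racks: 3 racks x 8 servers x 1.14 kWh.
hourly_kwh_rounded = racks * servers_per_rack * draw_kwh_per_hour  # 27.36 kWh
# Counting only the 22 real servers:
hourly_kwh_actual = servers * draw_kwh_per_hour                    # 25.08 kWh

yearly_cost = hourly_kwh_rounded * rate_usd_per_kwh * 24 * 365     # ~ $59,918
print(servers_per_rack, racks, hourly_kwh_rounded, round(yearly_cost))
```

At $0.25 per kWh, the full-rack figure works out to about $6.84 per hour and roughly $60,000 per year, matching the article's ballpark.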
Why did I do this calculation?
A single zEC12 system spends about 5.5 kWh per hour under the same load, and only a 3.16-square-meter footprint is needed. That means your footprint cost is almost halved and your electricity cost is about 5 times lower per year, in this basic example alone! Real life is more complex than this example, so now I suggest that you think again about your cloud solution.