Before you begin creating and provisioning virtual machines for your business applications, ensure your KVM hypervisor is capable of sustaining these applications. All the virtual machines will rely on the hypervisor's integrity and availability. The three key areas to initially address are:
Protecting data and resources by providing secure tracking, audit trails, alert mechanisms, and reporting
Managing and monitoring resources, and offering interfaces for configuring or modifying virtual machines as needed
Backing up data and executing data recovery aligned to the application's needs.
Protecting data and resources
KVM hypervisor security is critical because it typically has access to all of the virtual machines' resources under its control. If the hypervisor is compromised, an unauthorized user could potentially gain access to confidential data. Good security practices are essential for establishing business trust. How do you do this? Several open source and commercial tools can help effect good security practices and policies. To establish consistency, the same tools available to secure a KVM environment can also secure Linux virtual machines.
Some of the key software components to secure the KVM hypervisor are:
FirewallD for network security
LDAP for centralized authentication
SELinux for access control policies that confine access to data
Linux Audit to provide detailed audit trail information you might not find in the system log.
In addition, there is support for cryptographic hardware in the IBM z Systems platform that can perform DES, TDES, AES, RSA, SHA-1, and SHA-2 cryptographic operations. CP Assist for Cryptographic Functions (CPACF) instructions are available to KVM for IBM z and its Linux virtual machines when the kernel modules are loaded.
Managing and monitoring resources
The Linux ecosystem offers open source and commercial monitoring tools by which the KVM for IBM z resources can be managed and monitored. There are three primary methods that can be used:
The Linux shell in KVM for IBM z is available to handle almost any resource configuration. CPUs can be configured on or off, memory can be enabled or disabled, and storage devices and network interfaces can be added or removed.
Using the IBM z Systems HMC, in either DPM mode or standard PR/SM mode, additional processors or memory can be dynamically added to a logical partition. With DPM mode, additional storage devices and network interfaces can also be added or configured dynamically.
Kimchi’s management interface for KVM is HTTP-based. It allows for KVM network and storage resource management.
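The first of these methods, the Linux shell, relies on standard sysfs interfaces. A minimal sketch follows; the CPU and memory block numbers are illustrative, and the write operations require root, so they are shown commented out:

```shell
#!/bin/sh
# Inspect which CPUs are currently online (standard Linux sysfs interface).
cat /sys/devices/system/cpu/online

# Bring a configured-but-offline CPU online (requires root; cpu2 is an example).
# echo 1 > /sys/devices/system/cpu/cpu2/online

# Likewise, a memory block can be set offline or online (memory8 is an example):
# echo offline > /sys/devices/system/memory/memory8/state
```

The same sysfs pattern extends to storage and network devices, which is why the shell alone can cover most day-to-day resource configuration on the hypervisor.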
KVM for IBM z provides a number of built-in open source monitoring packages, such as Nagios monitoring plug-ins, SNMP agents, standard libvirt APIs, sar, SystemTap, and many more. And if you find that what is provided does not exactly fit your needs, KVM for IBM z Systems also provides an SDK. The SDK has the compilers and development libraries needed to build additional software packages.
Libvirt is an open source library of APIs, along with a daemon and management tools, that is installed with KVM for IBM z. You can create, delete, run, stop, and manage your virtual servers using the virsh command. Besides virsh, there is a graphical tool called Virtual Machine Manager, more commonly known as “virt-manager”. Virt-manager can handle most of the common lifecycle functions of a virtual server, including installation. It also provides basic monitoring, console access, and resource management for the virtual server and some KVM host resources.
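As a sketch of the virsh workflow, the fragment below generates a minimal libvirt domain definition and lists the typical lifecycle commands. The guest name, disk path, and resource sizes are illustrative assumptions, not values from this article:

```shell
#!/bin/sh
# Minimal libvirt domain definition (illustrative values; adjust the disk
# path, memory size, and CPU count for your environment).
cat > demo-guest.xml << 'EOF'
<domain type='kvm'>
  <name>demo-guest</name>
  <memory unit='MiB'>1024</memory>
  <vcpu>2</vcpu>
  <os>
    <type arch='s390x'>hvm</type>
  </os>
  <devices>
    <disk type='file' device='disk'>
      <driver name='qemu' type='qcow2'/>
      <source file='/var/lib/libvirt/images/demo-guest.qcow2'/>
      <target dev='vda' bus='virtio'/>
    </disk>
    <interface type='network'>
      <source network='default'/>
      <model type='virtio'/>
    </interface>
  </devices>
</domain>
EOF

# On the KVM host, the lifecycle is then driven with virsh:
# virsh define demo-guest.xml   # register the virtual server
# virsh start demo-guest        # boot it
# virsh list --all              # show defined and running guests
# virsh shutdown demo-guest     # orderly shutdown
# virsh undefine demo-guest     # remove the definition
```

Virt-manager drives these same libvirt calls through a graphical interface, so either tool can manage guests created by the other.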
Many open source tools are typically included in Linux distributions, and if they are not included you can build them from source. To maintain a consistent approach, choose tools that manage both the KVM hypervisor and its virtual machines.
Backing up data and executing data recovery
A KVM for IBM z environment can be backed up in a number of ways. When designing your backup and recovery strategy, consider the following questions:
Should the virtual machines remain up and running, or must they be shut down during backup and recovery?
How is the disk storage provisioned to the virtual machine?
What is the recovery point objective (RPO)?
What is the recovery time objective (RTO)?
The KVM hypervisor and virtual machine backups can be categorized as:
The core operating system disk needed for boot
The additional storage used to host image files and system logs
Key configuration files such as for networking and virtual machine definitions
There are multiple ways to back up each of these categories. The core operating system disk could, in its most basic form, be backed up via Linux dd commands from another system. You might want to do this right after installation. You could also utilize FlashCopy or disk mirroring technologies to create a consistent point-in-time copy without taking down the KVM hypervisor or virtual machine. Exploiting FlashCopy or similar technology typically requires installing a command line interface program to direct the FlashCopy operation, as well as network connectivity to the console of the storage subsystem.
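A minimal sketch of such a dd-based image backup, using an ordinary file in place of a real disk device (the device names in the comments are examples):

```shell
#!/bin/sh
# Sketch of a raw image-level backup with dd. A small file stands in here for
# a disk device such as /dev/dasdb; on a real system you would run this from
# another system while the hypervisor or guest is down, or against a
# FlashCopy target, to obtain a consistent copy.
dd if=/dev/zero of=guest-disk.img bs=1M count=4 2>/dev/null
echo "guest data" | dd of=guest-disk.img bs=512 seek=1 conv=notrunc 2>/dev/null

# Back up the image, then verify the copy is bit-identical.
dd if=guest-disk.img of=guest-disk.backup bs=1M 2>/dev/null
cmp guest-disk.img guest-disk.backup && echo "backup verified"
```

The verification step matters: a raw copy taken while the disk is in use may complete without error yet still be inconsistent, which is exactly why FlashCopy-style point-in-time copies are preferred for running systems.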
The additional storage used to host image files could also use FlashCopy or disk mirroring, but other options exist as well. A QCOW2 snapshot or an LVM snapshot are examples of options that may help you minimize downtime.
Key configuration files, such as the KVM hypervisor network definitions, Open vSwitch definitions, zipl.conf, zfcp.conf, and others, could be backed up via file-based tools such as rsync. The amount of storage these files take up is relatively small.
It may also be useful to have partition, volume group, LVM, and file system information captured and recorded in the event you need to perform a recovery. This information could be easily gathered on a regular basis and transmitted to a remote archive.
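Both ideas, copying the small configuration files and recording layout information, can be sketched in a few lines of shell. The specific files gathered here are examples; a real script would add zipl.conf, zfcp.conf, network definitions, and the libvirt domain XML as present on the system:

```shell
#!/bin/sh
# Capture storage layout and key configuration files for disaster recovery.
backup_dir=$(mktemp -d)

# Record layout information that is invaluable when rebuilding a system.
df -P  > "$backup_dir/filesystems.txt"
mount  > "$backup_dir/mounts.txt"

# Collect configuration files (rsync works equally well; tar shown here,
# and /etc/hostname is just a placeholder for the real file list).
tar -czf "$backup_dir/etc-config.tar.gz" -C / etc/hostname 2>/dev/null

ls -l "$backup_dir"
# The resulting directory would then be transmitted to a remote archive,
# for example: rsync -a "$backup_dir"/ backupserver:/archive/$(hostname)/
```

Run on a regular schedule (for example, from cron), this gives you a current record of the hypervisor's layout even when the disks themselves are backed up by other means.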
Another option would be to utilize file-level backups, either with open source tools like rsync or commercial tools like IBM Tivoli® Storage Manager (TSM). If a virtual machine were destroyed, one approach might be to provision a new base Linux and restore all the files from the most recent backup, rather than using disk image-level backups and restores.
Planning for backup and recovery also needs to consider the middleware. For example, a database would typically utilize its own utilities to provide backups with minimal or no downtime. A comprehensive backup and recovery strategy typically involves multiple backup methods, and recovery from those backups should be tested regularly.
To help you plan and deploy a successful and effective environment, read Getting Started with KVM for IBM z Systems, SG24-8332, at:
Bill White is an IBM Redbooks Project Leader for IBM z Systems. He works with technical experts from around the globe to produce technical enablement content.
This week's guest blogger is Ravi Kumar. Ravi is a Senior Managing Consultant at IBM (Analytics Platform, North American Lab Services). Ravi is a Distinguished IT Specialist (Open Group certified) with more than 23 years of IT experience. He has a Masters degree in Business Administration (MBA) from the University of Nebraska, Lincoln. He has contributed to seven other IBM Redbooks publications in the areas of databases, Analytics Accelerator, and Information Management tools. His social profile can be viewed at: http://www.linkedin.com/in/ravikalyanasundaram
IBM SPSS Modeler is a powerful analytic tool that supports all phases of the data analytics process, including data preparation, model building, deployment, and model maintenance. You can leverage SPSS Modeler to build analytical models, which can be used in statistical analysis, data mining, and machine learning. Data scientists can work with the user-friendly SPSS Modeler client interface to access mainframe data with the same ease as data from any other platform they are accustomed to. SPSS Modeler can also take advantage of in-database transformation and in-database modeling using IBM DB2 Analytics Accelerator for z/OS (IDAA) as the data analytics hub on z/OS.
Until recently, z Systems did not offer an efficient solution in the area of complex mathematical processing. So, in the past, you may have resorted to offloading operational data (that is, a snapshot from a prior point in time) from z Systems to a distributed platform in order to implement machine learning. Those solutions often produced obsolete and unreliable results, in addition to unwanted security exposures.
Now, with IBM DB2 Analytics Accelerator, you can enable machine learning for your OLTP applications that produce and consume z Systems data, simultaneously accelerating the execution of data transformation and analytical modeling processes with the power and performance of the MPP (Massively Parallel Processing) architecture in the IBM Netezza appliance. All without offloading data from z Systems to distributed environments (which, by the way, also eliminates a potential data breach scenario).
In-transaction scoring using the predictive models created with the above approach can scale with your DB2 for z/OS transactional environment. This is accomplished through in-database scoring using the SPSS Scoring Adapter for DB2 for z/OS, which performs real-time scoring on your predictive models to quickly reveal what's interesting in your data. When the predictive model is published in SPSS, the Scoring Adapter for DB2 for z/OS uses PACK/UNPACK functions for efficient parameter passing and can create an SQL statement with the HUMSPSS.SCORE_COMPONENT UDF. This generated SQL statement can be embedded in your OLTP application. The other popular alternative is to generate the scoring model in the open-standard PMML (Predictive Model Markup Language) format. The score can then be combined with your business rules to make real-time decisions on your DB2 for z/OS data from within your mainframe applications. You may also use a vendor tool called Zementis, which uses the generated PMML to implement in-application scoring in CICS and Java applications accessing DB2 for z/OS.
The above approach enables your OLTP and batch applications that access mainframe data to gain early machine learning capability, learning hidden patterns in your operational data using mathematical modeling algorithms that are readily available with IDAA (as INZA stored procedures that run entirely on the Accelerator). With IDAA V5.1, you can utilize five major predictive analytics algorithms, namely K-Means, Naive Bayes, Decision Tree, Regression Tree, and Two-Step.
Unsupervised learning algorithms like K-Means and Two-Step use descriptive statistics to analyze the natural patterns and relationships that occur within your operational data on DB2 for z/OS. Unsupervised learning models can identify clusters of similar records and relationships between different fields within an accelerated DB2 for z/OS table. For example, the K-Means and Two-Step clustering algorithms (available through stored procedures such as INZA.KMEANS and INZA.TWOSTEP) can enable machine learning in areas like market segmentation, geostatistics, market basket analysis (by association learning), and so on.
Supervised learning uses historic (training) data to construct decision trees, and the constructed tree is then used to predict future values. The classification technique can be used to identify which group or type a new record being inserted into your DB2 for z/OS table belongs to, based on the key characteristic values of its fields. The regression technique can be used to predict future values for a given field based on past historic values. Algorithms like Naive Bayes, Decision Tree, and Regression Tree can be used to solve classification and regression problems. Thus, predictive models using supervised learning algorithms (available through stored procedures like INZA.DECTREE, INZA.REGTREE, and INZA.NAIVEBAYES) can be used to predict whether a customer will buy or leave, credit card fraud, up-selling opportunities, voter responsiveness to different types of election campaigns, and so on.
Summary: Neuroscientists say that pattern recognition and emotional tagging help humans make quick decisions. Algorithms are a big part of machine learning, and these algorithms can aid executives with evidence-based decision making using hot operational data on z/OS. Executives can now combine modern machines' processing power with their own ingenuity to avoid the flawed decisions that are sometimes caused by emotional tagging.
Today, we’re delighted to share the latest member of the IBM z Systems family: the IBM z13s. We think you will like it. A lot.
The z13s delivers many exciting possibilities over its predecessor, the IBM zBC12.
The short list includes:
Accelerated data and transaction serving
Integrated analytics for insight
Access to the API economy
An agile application development and operations environment
Efficient, scalable, and secure cloud services
End-to-end security for data and transactions
The high levels of virtualization provide options for cloud deployment to assist with such areas as application development and testing. The hypervisor is key for virtualization and the z13s supports both hardware and software hypervisors (PR/SM, KVM, and z/VM).
The underlying architecture has expanded to enable new solutions, such as integrated analytics, to bring valuable opportunities to your business while supporting existing applications.
N10: 1 CPC drawer with max. of 10 customizable PUs
N20: 1 or 2 CPC drawers with max. of 20 customizable PUs
Memory: up to 4 TB
The z13s offers 26 capacity levels (times 6 CPs) for 156 settings. Plus, models are offered for either all-IFL or all-ICF configurations.
Analytical vector processing
Redesigned larger caches
Enhanced accelerators for data compression and cryptography
For enterprises aiming to move their IT infrastructures into closer alignment with their business plan, the z13s offers unparalleled levels of flexibility through virtualization, analytical insight, and security. Enterprise-wide agility will help you embrace the challenges of the exploding on-demand digital age.
In the course of an IT career, many of us may have sat at our desks looking at a sluggish application and wondered, "If I increase the amount of memory here or there, will this improve performance?" And, hopefully, your next thoughts would have been about the impact on I/O operations and cost, CPU usage, and transaction response times.
Although the magnitude of these changes can vary widely based on a number of factors, including potential I/Os to be eliminated, resource contention, workload, configuration, and tuning, you should carefully consider whether your environment could benefit from the addition of more memory to your software functions.
Significant performance benefits can be experienced by increasing the amount of memory assigned to various functions in the IBM® z/OS® software stack, operating system, and middleware products. IBM DB2® and IBM MQ buffer pools, dump services, and large page exploitation are just a few of the functions whose ease of use and performance can be improved when more memory is made available to them.
Recently, an IBM Redbooks Redpaper was published that can help you to examine the performance implications of increasing memory in the following areas:
DB2 buffer pools
IBM Cognos® Dynamic Cubes
MDM with larger DB2 buffer pools
Java heaps and Garbage Collection tuning and Java large page use
MQ v8 64-bit buffer pool tuning
Enabling more in-memory use by IBM CICS® without paging
DFSort I/O reduction
Fixed pages and fixed large pages
Different environments, of course, may experience a wide range of performance benefits, but there is enough evidence to suggest that configuring more memory could be a positive enhancement for many installations due to reduced I/O rates, improved transaction response times, and, in some cases, reduced CPU time.
To read more about this and see some examples, read the IBM Redbooks Redpaper:
A year and a half ago, IBM Wave for z/VM came onto the scene to provide a simplified and cost-effective way for companies to harness the consolidation capabilities of the IBM z Systems platform and its ability to host the workloads of tens of thousands of commodity servers. In December 2015, an IBM Redbooks residency started running to make important updates to the IBM Redbooks publication, IBM Wave for z/VM Installation, Implementation, and Exploitation, SG24-8192. IBM Wave Release 2 further expands the capabilities by delivering increased support for Linux distributions and devices, as well as additional enterprise-grade security and performance enhancements.
Some of the updates in this book include instructions on how to do a bare-metal installation of Red Hat Enterprise Linux servers using IBM Wave for z/VM.
Additionally, this IBM Redbooks publication includes a new chapter that describes IBM Wave / BTS parameters that might influence performance and resource usage. This chapter discusses:
The IBM Wave Parameters window
The BTS Manager window
How to restart the Background Task Scheduler (BTS)
How to produce a dump of the BTS
We’ve also included an appendix in this version of the IBM Redbooks publication that includes, among other things, IBM Wave for z/VM flow charts that can assist you in planning, preparation, installation, and setup of your IBM Wave for z/VM system.
To download this IBM Redbooks publication, see:
Bill White is an IBM Redbooks Project Leader for z Systems Hardware, Networking, and Connectivity. He works with technical experts from around the globe to produce books, papers, guides, and blogs.
The IBM z Systems platform offers a framework for standards and open source, which are key to making virtualization effective, from creating and managing virtual machines through building and automating a cloud environment.
Kernel-based virtual machine (KVM) is an open source virtualization technology that turns the Linux kernel into an enterprise-class software hypervisor. KVM for IBM z Systems uses hardware virtualization support that is built into the z Systems platform, known as IBM Processor Resource/Systems Manager™ (PR/SM™). This means that KVM for IBM z can do things such as scheduling tasks, dispatching CPUs, managing memory, and interacting with I/O resources (storage and network) within the z Systems platform.
1. What is the importance of KVM for IBM z?
KVM for IBM z uses the common Linux-based tools and interfaces, while taking advantage of the robust scalability, reliability, availability, and high throughput that are inherent to the z Systems platform. And those strengths have been developed and refined on the z Systems platform over several decades.
The z Systems platform also has a long history of providing security for applications and sensitive data in virtual environments. It is the most securable platform in the industry, with security integrated throughout the stack (in hardware, firmware, and software).
In addition, KVM for IBM z is capable of managing and administering multiple virtual machines, which allows thousands of Linux-based workloads to run simultaneously on a single z Systems platform.
2. What is the advantage of using KVM for IBM z?
KVM for IBM z is an easy-to-deploy and simple-to-use hypervisor that integrates virtualization capabilities into the IT infrastructure, including:
Enabling the sharing of CPU and I/O (storage and networking) resources by virtual machines
Allowing for the over-commitment of CPU and memory, and the swapping of inactive memory
Supporting live virtual machine relocation (workload migration) with minimal impact
Permitting dynamic addition and deletion of virtual I/O devices
Supporting policy-based, goal-oriented performance management and monitoring of virtual CPU resources
3. How do you manage a KVM for IBM z environment?
KVM for IBM z Systems provides standard Linux and KVM interfaces for management and operational control of the environment, such as:
The command line interface (CLI) is a common, familiar Linux interface used to issue commands and interact with the KVM hypervisor. The user issues successive commands to change or control the environment.
Libvirt is open source software that works with KVM and many other hypervisors to provide low-level virtualization capabilities; it interfaces with KVM through a CLI called virsh.
An open source tool called Nagios can be used to monitor the KVM for IBM z environment.
4. What is the high-level architecture of KVM for IBM z?
KVM for IBM z runs in a z Systems logical partition (LPAR) and creates virtual machines as Linux processes. The Linux processes use a modified version of another open source module, known as a quick emulator (QEMU). QEMU provides I/O device emulation and device virtualization inside the virtual machine.
The KVM for IBM z Systems kernel provides the core virtualized infrastructure. It can schedule virtual machines on real CPUs and manage their access to real memory. QEMU runs in a user space and implements virtual machines using KVM module functionality.
QEMU virtualizes real storage and network resources for a virtual machine, which in turn uses drivers (virtio_blk and virtio_net) to access these virtualized storage and network resources as shown in Figure 1.
Figure 1. KVM for IBM z Systems reference architecture
5. What are some key design points when designing a KVM for IBM z infrastructure?
With KVM for IBM z Systems, you will need to plan and design the virtualized environments in which you build and run the virtual machines. Things to consider include:
KVM supports CPU and memory over-commitment, so using Nagios to monitor virtual CPU and memory usage becomes important as the virtual machines increase in number.
A common preferred networking practice is to isolate management traffic from user traffic to ensure sensitive data is kept separate and secure.
Different storage infrastructures and protocols are supported with KVM for IBM z; you will need to design the storage architecture to complement your environment.
KVM for IBM z provides standard Linux and KVM interfaces for management. The way in which your management tools will interact with the virtualized pool of resources needs to be planned out.
The biggest reason to split the books is that it allows us to update each book as new versions come along, instead of waiting. It also allows our resident teams to work more in depth on each volume and provide a deeper dive into its content. Additionally, if you only want to learn more about one of the volumes, you can download just that volume. It's a more streamlined way of getting and finding the content you need, when you need it.
What are your thoughts on going forward with this publication? Should we merge them back together in the next iteration or keep them separate?
And by the way, if you are looking for the previous version of the IBM Redbooks publication, The Virtualization Cookbook for z/VM 6.3, RHEL 6.4, and SLES 11 SP3, you can now find it here:
When running in a virtualized environment, any reasonable administrator tries to reduce the time needed for standard tasks. In the early days of Linux on z/VM, this resulted in a procedure using golden images and cloning. This procedure simplified the deployment of Linux to new z/VM guest systems and has served many administrators well for a long time. However, over time, the Linux systems changed. With the introduction of newer technologies such as systemd on Linux, a number of problems came about that made the once so nifty feature of cloning golden images more and more difficult.
Problem: Make the image golden
During the first bootup, Linux creates unique data in many locations. The number and location of these items depend on the installed software. It requires detailed knowledge of the software in use to make sure that all of these strings are recreated during the first bootup of the cloned machine.
Unfortunately, there is no way to detect all the needed changes from within the system. Leaving some of those places not updated can result in security issues and data corruption in the involved clones later on. A clone that appears to work at first is not necessarily done right.
This issue is not new; it already existed with SLES 11 and RHEL 6. However, it became worse with the introduction of systemd and its machine ID. It is therefore recommended to move away from deploying clones and to use either automated installation or the imaging software KIWI.
Solution: do not create the unique data in the first place
The actual problem exists only because cloning relies on the configuration of a system that has already been booted. This system is then cleaned up and prepared for the actual cloning process; after cleanup, it is called a "golden image". All of the files needed within the production system are already created during the first startup of this system. The cleanup process must take care to remove all data from the system that should be unique, and this data must then be recreated during the first bootup of the clone.
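As a sketch of that cleanup step (which, as the article argues, is exactly the error-prone part), the following stages a fake root tree and removes a few of the usual unique items. The file list is illustrative and far from complete on a real system:

```shell
#!/bin/sh
# Simulate cleaning unique data from a golden image. We stage a fake root
# tree here; on a real system these paths would sit under the mounted
# image root, and the list of unique files depends on the installed software.
root=$(mktemp -d)
mkdir -p "$root/etc/ssh" "$root/var/log"
echo "0123456789abcdef" > "$root/etc/machine-id"
touch "$root/etc/ssh/ssh_host_rsa_key" "$root/var/log/messages"

# Remove per-system unique data so each clone regenerates its own.
truncate -s 0 "$root/etc/machine-id"   # systemd regenerates it on first boot
rm -f "$root"/etc/ssh/ssh_host_*_key*  # sshd regenerates host keys
rm -f "$root"/var/log/*                # logs belong to the master, not clones

ls -l "$root/etc"
```

The difficulty is everything this sketch does not cover: udev rules, application UUIDs, caches, and any software-specific state, which is why the article recommends avoiding the creation of unique data altogether.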
The only reliable solution is to avoid the creation of the unique data in the first place. This means the golden image should never have been booted before cloning new virtual machines. To avoid issues, you may want to use automated installations as described in "The Virtualization Cookbook for z/VM 6.3, RHEL 7.1 and SLES 12". However, if you have to rely on readily built images, the creation of virtual appliances is the way to go.
This is where the imaging software KIWI steps in.
Instead of creating a golden image to clone, a virtual appliance is created. This virtual appliance is never booted during the image creation process. The deployment of the virtual appliance is very similar to that of a golden image: it is copied to a new disk and given several parameters to finalize its configuration during the first startup.
If your business processes require you to test a readily built image, this is also possible with the virtual appliance. However, any needed changes to the image must be made in the KIWI configuration and will only be available in the next iteration of a newly created image of the virtual appliance. You do not apply the changes to the live system, but to the configuration of the virtual appliance.
This procedure can simplify automation. For example, to provide an image with all updates installed, you just need to provide the update repositories during image creation. When new updates become available that you need in your golden image, simply repeat the build process, and the resulting image will contain all the updates. This also results in more secure systems at redeployment time, compared to deploying the updates only after starting the original image.
Our IBM Redbooks blogger, Berthold Gunreben, is a Build Service Engineer at SUSE in Germany. He has 14 years of professional experience in Linux and is responsible for the administration of the mainframe system at SUSE. Besides his expertise with Linux on z Systems, he is also a Mainframe System Specialist certified by the European Mainframe Academy: http://www.mainframe-academy.de. His areas of expertise include High Availability on Linux, Realtime Linux, Automatic Deployments, Storage Administration on the IBM DS8000®, Virtualization Systems with Xen, KVM, and z/VM, as well as documentation. Berthold has written extensively in many of the SUSE manuals.
We would like to introduce you to this exciting new release of the operating system by sharing our IBM Redbooks content with you.
The IBM Redbooks team brought together expertise from around the world to help you discover and explore the potential of z/OS V2R2. Let IBM Redbooks guide you through the opportunities that the new release of the operating system can bring to your business.
We have modularized the content to help you pick and choose the subjects that pique your interest. We suggest you start with The IBM z/OS V2R2: IBM Redbooks Content Guide to understand how we have categorized the topics and the related content.
If you already see the topic you want then download it immediately!
I couldn’t help but notice that there was a lot of talking about me lately. And you all know how it goes when people start talking about you. Rumors come up about what you can do and what you can’t. Sometimes people even seem to know you better than you do yourself. So I figured I can’t let this go any further.
But before I go into detail about why you really should be using me, I want to talk some business. I mean, you all see the increasing importance of analytics in business. You need it for your organization’s success, for your customers’ satisfaction, and for your own decision-making. But data alone is not enough. You can have all the data you want on your customers, but if it takes you too long to use the data to your advantage, you’re at a disadvantage against your competitors. They might leave you behind if you don’t react to the market fast enough. Judging by experience and all the data I’ve seen, we live in a fast world. Processes have to be adjusted to situations; decisions have to be made quickly.
Over the last couple of weeks, I came across the word “agile” over and over again. Sounds like a great concept to me. But most of all it sounds like people realized that two things are essential for your business: knowledge of what is going on, and using this knowledge to react appropriately. What I can do for you is form a self-managing hybrid environment together with DB2 for z/OS to address the needs of business intelligence and analytic processing workloads. I can even do that while continuing to run mission-critical transaction processing and analytical workloads concurrently and efficiently. I leverage the power of zEnterprise, DB2 for z/OS, and Netezza technology, which makes it possible for you to integrate analytic insights into operational processes to drive business-critical analytics, resulting in exceptional business value. Come on, you have to admit this is impressive, isn’t it?
Still not convinced of my capabilities? OK, what if I tell you that I can also do all of the following:
Supporting complex queries on very large volumes of data
Accelerating analytic query response times
Lowering cost of storing, managing and processing historical data
Minimizing the need to have data marts for performance
Reducing capacity requirements on z Systems
Reducing operational cost and risk
In combination with DB2 and z/OS, I can accelerate data-intensive and complex queries in a highly secure and available DB2 for z/OS environment
And I am quite flexible too! I support the Accelerator-shadow table, the Accelerator-Archive table and the Accelerator-only table. You want to know what that means? Well, explaining all of it right now would go beyond the scope of a blog post and I have a date for lunch. But I heard there would be a Redbooks publication to give you all the information you need… Check it out and we will talk about it later!
Last weekend I spent quite some time thinking about how to create a blog post to get people as excited about DB2 for z/OS temporal data management as I am. After a while I decided to call an old friend to take a break and maybe receive new input. Somehow we ended up playing an old childhood game in which you put each letter of your name in a different line and then find a word for each letter that describes you. In the end you have a list of attributes describing your personality. So why not try this to describe something technical, such as temporal data management, as well? Here is what I came up with:
Time-based data management that can help businesses manage the increasing amounts of data and retention requirements
Enables you to accurately track information and data changes over time.
Makes it easy to insert, update, delete and query data in the past, present or future by using new and standardized SQL syntax.
Provides an efficient and cost-effective way to address auditing and compliance requirements.
Opportunity to have multiple stored versions for every logical row.
Remembers all past versions of rows in a table. If we are talking about a bank account, for example, DB2 for z/OS temporal data management will help you provide your customers with a detailed history of their accounts – and without the additional tables, triggers, or stored procedures that are current practice.
Application development, maintenance and management can be simplified.
Leverage DB2 for z/OS temporal data management to obey regulations and fulfill customer needs, no matter if you are in the insurance, financial, retail, human resources or any other sector.
As you can see, DB2 for z/OS temporal data management provides many ways to help you and your customers successfully face today’s business challenges by recording and maintaining ever-increasing amounts of data.
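For readers who like to see the mechanics, the versioning idea behind system-period temporal tables can be sketched in a few lines of Python. This is purely illustrative – it is not DB2 code, and all class, key, and column names are invented:

```python
from dataclasses import dataclass

INFINITY = float("inf")

@dataclass
class Version:
    value: dict        # the row's column values
    begin: float       # system time when this version became current
    end: float = INFINITY  # system time when it was superseded

class TemporalTable:
    """Toy model of a system-period temporal table: updates never
    overwrite; they close the current version and append a new one,
    so every past state of a logical row stays queryable."""
    def __init__(self):
        self.history = {}  # key -> list of Version

    def upsert(self, key, value, now):
        versions = self.history.setdefault(key, [])
        if versions and versions[-1].end == INFINITY:
            versions[-1].end = now        # close the current version
        versions.append(Version(value, now))

    def as_of(self, key, t):
        """Return the row's value as it was at system time t."""
        for v in self.history.get(key, []):
            if v.begin <= t < v.end:
                return v.value
        return None

# A bank-account history, as in the example above:
accounts = TemporalTable()
accounts.upsert("acct-1", {"balance": 100}, now=1)
accounts.upsert("acct-1", {"balance": 250}, now=5)
print(accounts.as_of("acct-1", 3))  # → {'balance': 100}, the state before the update
```

In DB2 for z/OS itself, the equivalent query is standardized SQL (a FOR SYSTEM_TIME AS OF clause); the sketch merely shows why no hand-written triggers or history-maintenance logic is needed.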
So you’ve built a killer application. It’s useful. It’s novel. It’s clever. Surely it’s going to be a huge success – fame and fortune await. Or… do they?
As IBM Distinguished Engineer Frank De Gilio tells it at the SHARE Orlando conference, usefulness, cleverness and novelty in today’s market are not enough. There are three other important factors to consider:
Is it fast?
Is it efficient?
Is it easy to use?
Never has this applied more than in today’s world, and to the role that mainframe applications play in it. We have many big monolithic mainframe applications, and they all live in the data center. The aggregation point is the mainframe servers in that data center. But those days are gone. The days of terminals, and even laptops, are disappearing. Mobile is the new aggregation point, and time from development to production has moved from months to days.
This new world is the API economy. Monolithic applications are broken down into smaller pieces – functionality that you can call – known as services. This is the microservices architecture at work – the approach of designing applications as collections of smaller, independent services.
Frank De Gilio describes how this new business programming model can be divided into two roles:
Hardcores – These are the people who understand how the mainframe systems work
Scripters – These are the people who quickly stitch services together into new applications. It is these Scripters that the services of today need to appeal to. Scripters don’t care about platforms; they care about how fast, efficient, and easy to use your service is.
Unleashing current business applications as services provides big advantages, putting existing capabilities into the hands of new users. Combining the cloud service model with z/OS Parallel Sysplex is a winner!
Do you still create utility jobs manually to maintain several objects? Do you think that your maintenance jobs need to be run on a predefined frequency basis? IBM DB2 Automation Tool for z/OS helps you with these challenges.
By combining object, utility, and job profiles, DB2 Automation Tool can reduce routine manual tasks, freeing you to focus on more complex responsibilities that add more value to your company. Additionally, using exception profiles, you can define in a utility profile when to run a utility against an object in an object profile. You select the conditions from a statistics list in the exception profile.
But instead of talking about the solution itself, we want to give you more information about what these profiles actually are, how they work and how you can use them to create an autonomic infrastructure:
Object profiles allow you to create reusable lists of objects. You can group related objects into one profile, such as all objects for a particular application, objects with similar maintenance requirements, etc. In an object profile, you can include objects on which you want to run utilities, as well as exclude objects that you want the utilities to ignore.
You can create object profiles using either the IBM Management Console for IMS and DB2 or the ISPF panels in DB2 Automation Tool. Here you can see the GUI for creating one in the IBM Management Console:
A utility profile is a collection of one or more utilities and their respective run-time options. Using a technique similar to creating object profiles, we can now create a utility profile to address any particular maintenance requirement. You select the utilities that you want to execute, and “Update Utility” allows you to set the parameters for each utility. Once created, a utility profile can be updated at any time to include more utilities or to change the options for a given utility.
The following list shows the utilities and functions that are available:
Exception profiles allow you to define when a utility in a utility profile should be run against an object in an object profile. You select the conditions from a statistics list in the exception profile. The exception profile is placed in the job profile with the object and utility profile. During the job build, exception processing produces a list of accepted objects and a list of rejected objects. When creating utility profiles, you can specify whether the utility is to be executed on the accepted objects, the rejected objects, or both.
There are 184 available selection criteria that we can use to select candidate objects, and we can provide our own criteria through a user exit interface. There are 10 supplied default exception profiles, and viewing these will give you a good idea of how to create and specify your own based on your site standards:
Job profiles combine the object profiles and utility profiles (and optionally exception profiles) into a set. If no exception profile is included in the job profile, each utility is run unconditionally on each object in the object list. You can combine multiple object profiles with multiple utility profiles, and you can specify the job step order for the generated job. The combined profiles, headed by the job profile, form the basis of a DB2 Automation Tool task. You can submit this task manually or schedule it by using the DB2 administration task scheduler or your site’s scheduling software. During the job build, the exception profile is evaluated against the objects in the object profile and, when a condition is met, JCL and utility statements are generated to perform the tasks specified in the utility profile against the objects that met the condition.
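To make the interplay of the four profile types concrete, here is a small Python sketch of how a job profile's evaluation could work in principle. This is not DB2 Automation Tool code – the profile shapes, metric names, and threshold semantics are all invented for illustration:

```python
# Illustrative sketch: exception rules split the object list into
# accepted/rejected sets, and work items are generated per utility.

def evaluate_exceptions(object_profile, exception_profile, stats):
    """Split objects into accepted (condition met) and rejected lists."""
    accepted, rejected = [], []
    for obj in object_profile:
        obj_stats = stats.get(obj, {})
        if all(obj_stats.get(metric, 0) >= threshold
               for metric, threshold in exception_profile.items()):
            accepted.append(obj)
        else:
            rejected.append(obj)
    return accepted, rejected

def build_job(object_profile, utility_profile, exception_profile, stats):
    """Generate (utility, object) work items, honoring each utility's
    choice of running on accepted objects, rejected objects, or both."""
    accepted, rejected = evaluate_exceptions(object_profile,
                                             exception_profile, stats)
    work = []
    for utility, run_on in utility_profile.items():
        targets = {"ACCEPTED": accepted, "REJECTED": rejected,
                   "BOTH": accepted + rejected}[run_on]
        work.extend((utility, obj) for obj in targets)
    return work

stats = {"DB1.TS1": {"PCT_DISORG": 40}, "DB1.TS2": {"PCT_DISORG": 5}}
job = build_job(object_profile=["DB1.TS1", "DB1.TS2"],
                utility_profile={"REORG": "ACCEPTED", "RUNSTATS": "BOTH"},
                exception_profile={"PCT_DISORG": 30},
                stats=stats)
print(job)  # REORG only on the disorganized table space; RUNSTATS on both
```

The real product draws its conditions from the 184 selection criteria and generates JCL rather than a Python list, but the accepted/rejected split works along these lines.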
To create a job profile use the ‘C’reate command on the command line:
These profiles help IT staff reduce the time spent on repetitive tasks and analyze the environment so that only what is needed runs, and only when it is needed, reducing CPU utilization for maintenance jobs that do not really need to run in a defined maintenance window. So, by combining object, job, exception, and utility profiles with DB2 Automation Tool, you can make your database environment work more efficiently.
With the addition of the Management Console and the Autonomics Director you can now not only exercise “Passive” autonomics but you can start to move into “Active” autonomics. The Management Console makes monitoring the current symptoms and automating the suggested actions easy.
And how about you – have you already created an autonomic infrastructure? What were your experiences using these profiles in DB2? Tell us what you learned while working with these products. If you want to see additional material about the process of creating an autonomic infrastructure using DB2, see the IBM Redbooks publication Modernize Your DB2 for z/OS Maintenance with Utility Autonomics.
There have been many changes in the past 25 years in our IT world that have led to the need for autonomics in our database environment, especially in DB2 on z/OS. But while always talking about the solutions, the question arises – how do you actually implement them?
Each company may have different priorities which dictate the order of the implementation steps. Company A may need to apply intelligence to their reorg utilities as their top priority while Company B may need to address utility standards because of the impending retirement of the support person for their homegrown DB2 utility generator. Regardless of the order in which you start, IBM provides the software for a comprehensive autonomic environment that addresses the business problems that most companies face: limited expertise, greater application availability, or the need to control costs by moving work to off peak hours.
Here we will show you how to move from the traditional steps into a modernized autonomic environment, by implementing an active strategy for your DB2 Maintenance Tasks, following these five steps:
Step 1: Collect the metrics and related statistics for utility maintenance
First of all, you have to collect all relevant statistical data on your DB2 objects. This data can be used to filter out objects that are physically disorganized. Your goal is to run maintenance by exception and avoid wasting resources on utilities that do not need to run. IBM provides two DB2 stored procedures that collect statistics about objects defined in a profile, generate an alert when the statistics exceed your criteria (placing the alert into a table), and perform the RUNSTATS your optimizer needs.
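As a rough illustration of this "maintenance by exception" idea – the real mechanism is the pair of DB2 stored procedures mentioned above; the Python below, including every metric name and threshold, is invented:

```python
# Sketch of step 1: examine collected statistics and raise an alert row
# whenever a metric exceeds its criterion, instead of blindly scheduling
# maintenance for every object.

CRITERIA = {"PCT_UNCLUSTERED": 10, "EXTENTS": 50}  # example thresholds

def collect_alerts(object_stats, criteria=CRITERIA):
    """Return alert rows (object, metric, value, threshold) for every
    statistic that exceeds its criterion."""
    alerts = []
    for obj, stats in object_stats.items():
        for metric, threshold in criteria.items():
            value = stats.get(metric, 0)
            if value > threshold:
                alerts.append((obj, metric, value, threshold))
    return alerts

observed = {
    "PAYROLL.TS01": {"PCT_UNCLUSTERED": 35, "EXTENTS": 12},
    "PAYROLL.TS02": {"PCT_UNCLUSTERED": 3,  "EXTENTS": 8},
}
for alert in collect_alerts(observed):
    print("ALERT:", alert)   # only TS01 needs attention
```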
Step 2: Group your objects
Grouping your DB2 objects can be achieved in several ways. DB2 Automation Tool provides a function for object grouping – called Object Profiles – that offers extra flexibility. Using these Object Profiles, you can include objects on which you want to run utilities, as well as exclude objects that you want the utilities to ignore. Object Profiles are similar to DB2 TEMPLATEs: they allow table spaces and index spaces to be chosen for processing in much the same way.
Step 3: Create exceptions and thresholds for utilities
The next step to implement an active autonomic strategy is to run all your maintenance by exception filtering. The DB2 Automation Tool provides a function called the Exception Profile. This definition contains the conditions under which users want to run utilities. When combined with Object Profiles and Utility Profiles, the Exception Profiles act as a filter against the objects specified in the Object Profile.
Step 4: Build optimized utility JCL and jobs
Before execution, you first have to build the optimized utility JCL and jobs. Here, Job Profiles are used to connect the different profiles that were created in DB2 Automation Tool. A Job Profile is the master profile and ties all the profiles – Utility Profiles, Object Profiles, and Exception Profiles – together. The combined profiles, headed by the Job Profile, form the basis of a DB2 Automation Tool task. You can submit this task manually or schedule it by using the DB2 administration task scheduler.
Step 5: Execute the jobs in a predefined maintenance window
Today, a typical maintenance strategy has predefined jobs in a job scheduler. These jobs are run in maintenance windows weekly, monthly, and quarterly. With the Autonomics Framework, you can leverage your own batch scheduler to spawn evaluation jobs as well as start the Autonomics Director procedure at any time during your maintenance window.
After following these steps and transforming your passive autonomic environment into an active one, corrective actions are taken automatically by the system – it monitors and analyzes the related metrics to proactively make recommendations and even execute them. These are tasks typically done by a DBA. By automating these basic administration tasks, you give DBAs the freedom to work on tasks of higher business value. More importantly, they no longer rely on old, homegrown processes that are difficult to maintain and to keep current with new DB2 versions.
And how about you – have you already moved from a passive to an active strategy in your environment? What benefits have you seen? Tell us about the experience you gained during the change process.
We have just released A Practical Guide to ICF Catalogs. This 90-page IBM® Redbooks® publication is intended to give you a broad understanding of Integrated Catalog Facility (ICF) catalog environments. It includes suggestions for design, planning, and deployment activities that will help you create and maintain a balanced and efficient catalog environment.
Four scenarios are provided to illustrate typical activities that are associated with an organization's catalog requirements.
Please provide us with your feedback on the guide, and let us know if you would like additional topics to be addressed or any other information that may help you.
The world is becoming more digitized and interconnected, which opens the door to emerging threats, leaks, and attacks. The average cost of a security breach in 2013 was $5.4 million (USD)! Not to mention the impact a security breach can have on your company’s reputation and brand. Can your customers trust that their personal data is secure?
Analytics, mobile, social, and cloud computing all have one thing in common: They need a platform that has a deeply integrated security stack.
The IBM z13, IBM’s newest addition to the mainframe, has a rich and proven legacy of delivering a secure infrastructure. It’s built on a set of hardware security capabilities that include multi-state operation modes, storage key isolation, high-speed standards-based encryption, and logical partitions, as well as many other features and benefits. The operating systems that can run on the IBM z13 include z/OS – with System SSL (a component of the z/OS Communications Server) supporting both the SSL and TLS protocols – as well as z/VM and Linux on z Systems.
Back in October 2014, I wrote a blog post announcing a residency to write an IBM Redpaper on End-to-end security using z Systems.
In that blog post, I discussed the reasons why reviewing security in your enterprise was so important. The cost of poor security or even loose security is just too great. In November, I ran the residency and that IBM Redpaper is currently in draft. It provides a high level architectural overview of how to secure your z Systems enterprise from end to end.
The IBM z13, like its predecessors, is designed from the chip level up to support data processing. This includes a strong, fast I/O infrastructure, cache on the chip to bring data close to processing power, security and compression capabilities of the coprocessors and I/O features, and the 99.999% data availability design of the coupling technologies.
The figure below shows ten easy steps for implementing an I/O configuration for your z13. The numbered steps are described after the figure.
a. When planning to migrate to a z13, the IBM Technical Support team can help you define a configuration design that meets your needs. The configuration is then used during the ordering process.
b. The IBM order for the configuration is created and passed to the manufacturing process.
c. The manufacturing process creates a configuration file that is stored at the IBM Resource Link website. This configuration file describes the hardware being ordered. This data is available for download by the client installation team.
d. A New Order report is created that shows the configuration summary of what is being ordered, along with the Customer Control Number (CCN). The CCN can be used to retrieve the CFReport (a data file that contains a listing of the hardware configuration and changes for a central processor complex (CPC)) from Resource Link.
Make sure that you have the current PSP Bucket installed. Also, run the SMP/E report with fix category (FIXCAT) exceptions to determine whether any Program Temporary Fixes (PTFs) must be applied. Ensure that you have the most current physical channel ID (PCHID) report and CCN from your IBM service representative. Have extra cables (fiber optic and copper) available just in case some get damaged as they are being relocated.
When you plan your configuration, consider this information:
– Naming standards
– FICON switch and port redundancy
– Adequate I/O paths to your devices for performance
– OSA Channel Path Identifier (CHPID) configuration for network and console communications
– Coupling facility connections internally and to other systems.
Because the z13 server does not support attachment to the IBM Sysplex Timer, you must consider how the z13 will receive its time source. A z13 cannot join a CTN that includes a z10 or earlier server as a member. Because the z10 was the last server that supported IBM Sysplex Timer (9037) connectivity, the z13 cannot be configured as a member of a mixed CTN; it can only join an STP-only CTN. When you are planning to replace a z196 or zEC12 with a new z13, plan the replacement of channels that are not supported on the z13. You must carefully plan how to replace those – for instance, ISC-3 with HCA3-O, or ICA SR for connectivity between two z13 servers. You might also need to increase CF storage size when you replace a z196 or zEC12 with a z13: Coupling Facility Control Code (CFCC) level 20 requirements may differ from those of CFCC level 19 and earlier. Use the CFSizer tool to get the new CF storage requirements.
The existing z196 or zEC12 I/O configuration is used as a starting point for using Hardware Configuration Definition (HCD). The z196 or zEC12 production input/output definition file (IODF) is used as input to HCD to create a work IODF that becomes the base of the new z13 configuration. When the new z13 configuration is added and the obsolete hardware is deleted, a validated version of the configuration is saved in a z13 validated work IODF.
5. a. From the validated work IODF, create a file that contains the z13 IOCP statements. This IOCP statements file is transferred to the workstation used for the CHPID Mapping Tool (CMT). Hardware Configuration Manager (HCM) can also be used here to transfer the IOCP deck to and from the CMT.
b. The configuration file that is created by the IBM manufacturing process in step 1d is downloaded from Resource Link to the CMT workstation. The CHPID Mapping Tool uses the input data from the files to map logical channels to physical channels on the new z13 hardware. You might have to make decisions in response to the following situations, among others:
– Resolving situations in which the limitations of the purchased hardware cause a single point of failure (SPoF). You might need to purchase more hardware to resolve some SPoF situations.
– Prioritizing certain hardware items over others.
c. After the CMT processing finishes, the IOCP statements contain the physical channels to logical channels assignment that is based on the actual purchased hardware configuration. The CHPID Mapping Tool (CMT) also creates configuration reports to be used by the IBM service representative and the installation team. The file that contains the updated IOCP statements created by the CMT, which now contains the physical channels assignment, is transferred to the host system.
d. Use HCD, the validated work IODF file created in step 5a, and the IOCP statements updated by the CMT to apply the physical channel assignments created by the CMT to the configuration data in the work IODF.
After the physical channel data is migrated into the work IODF, a z13 production IODF is created and the final IOCP statements can be generated. The installation team uses the configuration data from the z13 production IODF when the final power-on reset is done, yielding a z13 with an I/O configuration ready to be used.
IODFs that are modifying existing configurations can be tested in most cases to verify that the IODF is making the intended changes.
8. a. If you are upgrading an existing z196 or zEC12, you might be able to use HCD to write an IOCDS to your system in preparation for the upgrade. If you can, do so and let the IBM service representative know which IOCDS to use.
b. If the z196 or zEC12 is not network connected to the CPC where HCD is running, or if you are not upgrading or cannot write an IOCDS in preparation for the upgrade, use HCD to produce an IOCP input file. Download this input file to a USB flash drive.
The new production IODF can be applied to the z13 in these ways:
– Using the power-on reset process
– Using the Dynamic IODF Activate process
Communicating new and changed configurations to operations and the appropriate users and departments is important.
Runtime Diagnostics is a “point-in-time” diagnostics tool that detects problems that could be contributing to system issues RIGHT NOW, with the goal of finding them in sixty seconds or less. It has been available since V1R12 and, as of V1R13, diagnoses seven types of problems, which are documented in this IBM Redbooks publication: http://www.redbooks.ibm.com/abstracts/sg248070.html?Open.
Is that all? Of course not! We are pleased to announce our eighth diagnostic event: JES2 Health Exceptions. This event is available starting in z/OS V2R1 with APAR OA46531. This will be added to the IBM Redbooks Publication at a later date but we wanted to make you aware of this option as soon as possible.
Runtime Diagnostics gathers information about the JES2 subsystem from the JES2 subsystem interface (SSI). Runtime Diagnostics analyzes the information received, determines a possible corrective action, and presents it to the caller on the system console, in the hardcopy log, and, optionally, in a sequential data set. An example of a possible event appears below:
ACTION: $S TO ENABLE JES2 TO START PROCESSING NEW WORK.
The analysis for this new event is automatically incorporated into the existing processing. After Runtime Diagnostics has been started, issue the command F HZR,ANALYZE to have all eight types of analysis performed. Additional information on this new event will also be available in z/OS V2R2 Problem Management when that publication is released.
Runtime Diagnostics is easy to use and there when you need it. But because it isn’t needed very often, it’s also easy to forget – so always remember the value of these tools, and that you have them almost literally at your fingertips!
Data is growing at an unprecedented rate. With the introduction of mobile technologies and social media into our everyday lives, new sources of data are eagerly captured and stored by organizations to improve their understanding of customer behaviors and needs. Personalization is paramount to remaining competitive and to establishing a unique connection with the customer.
The increasing volumes of data must be properly captured, stored, and managed for multiple processes to gain value and advantage from the data. The increased processing will demand more storage resources, and therefore an increase in capacity is needed to keep up with the ever-increasing amounts of data.
One possible solution is to use software compression algorithms, but these increase CPU utilization, so the savings in storage must be traded off against the costs associated with the increased CPU utilization. An alternative is to use a hardware accelerator that performs data compression and decompression with lower CPU consumption than software algorithms, while still achieving savings in storage occupancy.
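You can see this trade-off for yourself with a small experiment. The sketch below uses Python's zlib – an ordinary software codec standing in for any software compression algorithm, not zEDC – to compare storage savings against CPU time at different compression levels:

```python
import time
import zlib

# Compare software compression levels: higher levels save more storage
# but typically burn more CPU time -- the trade-off zEDC offloads to hardware.
data = b"transaction record 000123 customer 456 amount 0789.00\n" * 20000

for level in (1, 6, 9):
    start = time.process_time()
    compressed = zlib.compress(data, level)
    cpu = time.process_time() - start
    ratio = len(data) / len(compressed)
    print(f"level {level}: ratio {ratio:5.1f}x, CPU {cpu*1000:.1f} ms")
```

Higher levels typically yield a better ratio at a higher CPU cost; a hardware accelerator such as zEDC Express aims to take that CPU cost off the general-purpose processors.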
With the IBM z13, IBM zEC12, and IBM zBC12, a hardware feature called IBM zEnterprise Data Compression (zEDC) is offered. The zEDC Express feature delivers an integrated solution that helps reduce CPU consumption, optimize the performance of compression-related tasks, and enable more efficient use of storage resources, while providing a lower cost of computing and helping to optimize the cross-platform exchange of data.
Data is the fundamental building block for organizations, and a focus on its capture, storage, and management is key for an organization to continue to offer customers a viable and inviting relationship.
The previous four posts in this blog series have touched on the demands of the modern workload generated by the acceptance of mobile devices into many aspects of our lives. We looked at the changes this has brought to organizations and how they might respond to the new order. But while we may have technical solutions available to satiate the demand, what changes are necessary in the capacity planning process?
Mobile devices have created consumer expectations of immediacy in terms of information delivery, ease of use, media quality, and personalization. Fulfilling a request for information requires both Systems of Engagement (SOE) and Systems of Record (SOR). The data necessary to fulfill the request may not reside in one particular organization but across several, and it must be drawn together quickly to present the requested information in an accurate, secure, and timely manner. The consumer is not constrained by device or contact availability (as with ATM locations or call centers), and mobile technologies have empowered consumers to issue multiple requests on a personal mobile device whenever they wish.
One outcome of this empowerment is the change in traditional IT workload patterns. External circumstances may trigger a sudden increase in transaction volumes, collaborations may increase the transaction count or perhaps cause issues if there is a failure in the intertwined processes. Marketing departments may push for more dynamic campaigns in addition to their published plan. The result can lead to erratic workload patterns causing overloads and temporary demands for inadequate IT resources.
The scope of coverage for capacity planners must expand to accommodate the new challenges. The answer lies not just in technology capability but also in the management of the technology which in turn requires an understanding of mapping the business applications of the organization onto business goals, organizational structure, and the IT infrastructure and capability of both the organization and its partners.
The IBM z13 is ideally positioned to meet the challenges. To help produce an effective plan the following areas might be considered:
The value of virtualization
The impact of specialty engines, SMT and SIMD
Real-time analytics
The speed of I/O
Large memory benefits
SOE and SOR integration
The above capabilities will help you form, protect, and manage your capacity plan. However, input to the plan – and a new organization-wide commitment to capacity planning – ought to include the following:
A capacity planning team structure to include IT, the business, and collaborative partners
Close liaison with marketing to establish expectations
Review of internal and external SLAs
Expansion of analytics for business and infrastructure
Automatic resource allocation in response to busy periods
Understanding what resources lie within and outside of the organization
Processes to encourage accountability and understanding in the extended capacity planning team
See IBM Redbooks for more information on the IBM z13 capabilities.
What sets the platform apart is the combination of versatile connectivity options, the use of open standards, and the separation of data processing from I/O operations.
Data is produced on an unprecedented scale and access to that data must be quick, safe and guarantee integrity. Data is an organization's most valuable asset in the digital age. As the growth continues it is essential that the technology improves to keep the organization competitive.
The most efficient IT infrastructures that handle today's workloads usually have well-balanced systems with superior data processing and I/O capabilities that are responsive and reliable. Such I/O capabilities have been a standard component of the IBM mainframe architecture since it was first introduced with the IBM S/360 in 1964. Over the past 50+ years, I/O technologies have advanced significantly, and so have the mainframe I/O capabilities: from the original parallel channels, where I/O devices were connected directly using two copper cables (called the bus cable and the tag cable), to today's Fibre Connection (FICON®), where optical transmitters, Fibre Channel switches/directors, and fiber-optic cables transport data at link rates of 2 Gbps, 4 Gbps, 8 Gbps, or 16 Gbps. A bus cable carried only one byte of information each way, and the tag cable indicated the meaning of the data on the bus cable.
In addition, z Systems platforms have a unique channel subsystem that delivers high I/O bandwidth. The channel subsystem (CSS) was added to the IBM mainframe architecture to provide a pipeline through which data can be exchanged between systems or between a system and external devices via storage area networks and local area networks. The CSS is the channel path management layer that enables communication to and from system memory and peripheral devices at very fast rates.
The z Systems platforms have dedicated system assist processors (SAPs) in addition to the general purpose and specialty processors. I/O requests are handled by the SAPs, freeing up the general purpose and specialty processors to do other work. The IBM z13 offers up to 24 SAPs, supporting millions of I/O operations per second. This is also possible because all I/O features offered on the z Systems platforms offload some of the I/O operations to the hardware, using licensed internal code (LIC). The result is a significant improvement in both latency and bandwidth for transporting data.
In the z13, I/O features are plugged into an industry-standard PCIe I/O drawer with PCIe Gen3 interconnects, delivering throughput of 16 GBps to and from the I/O features. Different types of I/O features are available for each channel or link type, and they can be installed or replaced concurrently, so there is no disruption to the production environment. Each PCIe I/O drawer can support up to 320 FICON channels, which provide unmatched bandwidth to back-end storage systems, while up to 96 OSA-Express ports allow for direct high-speed Ethernet connectivity.
Other I/O features used for system-to-system communications are Integrated Coupling Adapter (ICA SR) and 10GbE RoCE Express (for IBM z/OS®-to-z/OS communications), while HiperSockets technology can be used for communications between logical partitions within the z Systems platform.
If you would like to learn more about z Systems connectivity options, typical uses, coexistence, and relative merits of the available I/O features, go to:
Did you know that all IT vendor products must go through rigorous qualification testing before they are supported on IBM z Systems® platforms?
To acquire qualification, vendors obtain licensed IBM patents, intellectual property, and know-how. This licensing provides vendors access to the proprietary IBM protocols and applications that are used on z Systems platforms.
In this post we will look at the criteria and processes required to successfully complete the IBM z Systems Geographically Dispersed Parallel Sysplex™ (GDPS®) qualification testing for Dense Wavelength Division Multiplexing (DWDM) vendors.
GDPS is an enterprise-wide continuous availability (CA) and disaster recovery (DR) solution that can manage recovery from planned and unplanned outages across distributed servers and z Systems platforms. GDPS can be configured in either a single-site or a multi-site configuration. It is designed to manage remote copy configurations between storage subsystems, automate Parallel Sysplex operational tasks, and effect failure recovery.
GDPS qualification testing for z Systems platforms is conducted at the IBM Vendor Solutions Connectivity (VSC) Lab in Poughkeepsie, New York.
IBM proprietary software and microcode utility test suites form a part of the GDPS qualification tests. They drive the various GDPS components and protocols to the full data rate of each link type that is transported by the DWDM equipment. This level of testing ensures that the maximum channel utilization is achieved and tested to levels well beyond typical client environments.
The test suites are used to verify z Systems architecture functionality. To be considered successful, the functionality test suites must run completely error free; all components within the complex are subject to this standard. Any errors detected during this testing are captured and analyzed by the test suites.
The test suites are also used for verification of z Systems architecture recovery by creating various fault and error conditions. The recovery tests check for the correct detection of a fault or error condition by the attached subsystems, and ensure that the recovery adheres to the z Systems architecture.
The following figure depicts the environment that is used for DWDM vendor qualification testing.
The IBM Redbooks team recently published IBM Redpapers related to z Systems qualified DWDM vendor products for GDPS solutions with Server Time Protocol (STP). The papers describe the z Systems qualification process and the applicable environments, protocols, and topologies that were tested.
The team members that wrote the papers are subject matter experts from the IBM VSC Lab: Pasquale Catalano and Andrew Crimmins, and Bill White (IBM Redbooks team leader).
For more information about IBM Redpapers related to z Systems qualified DWDM vendor products, go to:
The recently announced IBM z13 is positioned to meet the evolving demands of the modern workload while continuing to improve the more traditional and well-established applications. It scales from a small singular deployment instance to serving global organizations.
The IBM z13 is a combination of enhanced technologies brought together in a single footprint. The design takes full advantage of the established foundations of z Systems and builds upon them. New functionality has been added to help with processing data in a more efficient manner than ever before.
In this blog series we are touching on how technologies have changed our lives, and how, in turn, our new expectations are forcing changes onto those technologies. It's a full circle of symbiotic engagement. How things have changed since 2007, and they are still changing, rapidly redefining the way we live, our expectations, and our understandings. The IBM z13 is a product of that understanding; it will meet the new demands and help us move forward with confidence, both as consumers and as organizations.
The key pillars on which our new era stands are:
Concurrent multiple operating systems in one machine
Ensuring availability and scalability
Processing large amounts of data with speed
Embracing open systems standards
Expanding the scope of analytical intelligence
Making use of Simultaneous Multi-threading (SMT), large memory, and Single Instruction Multiple Data (SIMD)
Providing end-to-end security
Mobile devices are accommodated by Systems of Engagement (SOE), which integrate with Systems of Record (SOR) to provide a single solution. The Mobile Enterprise Application Platform (MEAP) offers the ideal environment to develop and maintain mobile device apps, and it communicates with the various components in the SOR for request fulfillment.
The organization's traditional workloads can integrate with mobile devices via the SOE. The advantage of the SOR and SOE approach is that the longer-established routes to the SOR are still viable and require little, if any, change. In terms of TCO and TCA this is a very attractive path, and the reduced disruption helps to ease the risk assessment when moving toward mobile device apps.
As a sample of how analytics might integrate into the whole scenario, consider a case in which a suspicious transaction is identified.
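As a purely illustrative sketch (not an IBM analytics component), a simple rule-based screen of this kind could flag such a transaction; the field names and thresholds below are assumptions made for the example.

```python
# Hypothetical rule-based screen for suspicious transactions.
# Field names and thresholds are illustrative assumptions only.
def is_suspicious(txn, history_avg, limit_multiplier=10.0):
    """Flag a transaction that is far larger than the account's
    historical average, or that originates from an unknown country."""
    if txn["amount"] > history_avg * limit_multiplier:
        return True
    if txn["country"] not in txn.get("known_countries", []):
        return True
    return False

# A 25000.0 transaction against a 500.0 historical average is flagged.
txn = {"amount": 25000.0, "country": "XX", "known_countries": ["US"]}
```

In a real deployment the rules would be replaced or supplemented by scoring models running against the centrally managed data repository described below.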
The advantage of a centrally managed data repository is that it can be referenced by various processes and tools to provide focused advantage through streamlined insight. Content relevance is essential to attract consumer interest and subsequent interaction. Personalization is key in the modern world; it lies at the core of the digital era. Our urge to connect as humans is strong; we are in the social digital era of connections.
The IBM z13 is perfectly positioned to take you and organizations further into this age of connections.
For further reading, click the link to be taken to the IBM Redbooks site to download these two brief but insightful publications.
Analytics is changing the face of business. Technology enablement has brought us to a new era, and the journey continues.
IBM z Systems and, in particular, IBM z13 with its new features and functions enable all of the advantages of improved performance and lowered costs, providing real-time analytics-based decisions.
With the many new features and functions of IBM z13, your organization's workloads will definitely benefit.
One of the new functions of the IBM z13 is the addition of SIMD-MASS and ATLAS libraries for IBM z/OS® and Linux on z Systems: single-instruction, multiple-data (SIMD), Mathematical Acceleration Subsystem (MASS), and Automatically Tuned Linear Algebra Software (ATLAS).
The MASS software libraries offer improved performance over standard mathematical library routines and are thread-safe. A piece of code is thread-safe if it manipulates shared data structures only in a manner that guarantees safe execution by multiple threads at the same time. The MASS for z/OS software libraries include accelerated sets of frequently used mathematical functions for the SIMD libraries. Additionally, you can now bring MASS code running on other platforms onto z/OS.
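To make the thread-safety definition concrete, here is a minimal sketch in Python (the language is chosen for brevity; it is not how the MASS libraries are implemented): the shared value is only touched while holding a lock, so concurrent callers cannot corrupt it.

```python
import threading

class SafeCounter:
    """Thread-safe counter: the shared value is manipulated only
    while holding the lock, matching the definition above."""
    def __init__(self):
        self._value = 0
        self._lock = threading.Lock()

    def increment(self):
        with self._lock:  # serialize access to the shared data
            self._value += 1

    def value(self):
        with self._lock:
            return self._value

def run_threads(counter, threads=4, iterations=10000):
    """Hammer the counter from several threads at once."""
    def worker():
        for _ in range(iterations):
            counter.increment()
    workers = [threading.Thread(target=worker) for _ in range(threads)]
    for w in workers:
        w.start()
    for w in workers:
        w.join()
    return counter.value()
```

Because every access is serialized by the lock, 4 threads each performing 10,000 increments always yield exactly 40,000; without the lock, lost updates would make the result unpredictable.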
You may ask yourself: what does this mean in the realm of analytics? It allows for the construction of richer, more complex analytics models that exploit SIMD to provide better accuracy of insight. Better accuracy of insight means more accurate linkage to customer information across channels. You want to translate that data into something meaningful to your company, and you want to make sure it is accurate!
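As a conceptual sketch of the SIMD programming model (the real work is done in hardware by the z13 vector facility, one instruction across many data lanes; this plain-Python illustration only mimics the idea):

```python
# Conceptual SIMD: one "instruction" (here, a single function call)
# operates on every lane of the vector at once. On z13, the vector
# facility does this in hardware on 128-bit registers.
def simd_multiply_add(acc, a, b):
    """acc[i] += a[i] * b[i] for every lane, 'in one step'."""
    return [acc_i + x * y for acc_i, x, y in zip(acc, a, b)]

# An analytics kernel such as a dot product then becomes a short
# loop over vector-width chunks instead of one element at a time.
def dot(a, b, lanes=4):
    acc = [0.0] * lanes
    for i in range(0, len(a), lanes):
        acc = simd_multiply_add(acc, a[i:i + lanes], b[i:i + lanes])
    return sum(acc)
```

The lane count and chunking here are illustrative; the point is that kernels at the heart of analytics models (dot products, distance computations, scoring functions) map naturally onto SIMD hardware.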
Is there a topic you want to hear about based on analytics? Let me know in the comments below!
The rise in expectations shapes a new landscape for applications development and infrastructure management. The old must be merged with the new in a seamless fashion.
The challenge is to establish a new equilibrium by leveraging the full investment in existing applications while building and integrating the new possibilities and technologies of mobile. However, the challenge is much deeper than that; we must understand its true nature. On the surface we can expect to see the following demands:
Performance – engagement response time is critical
Accurate data and information
A pleasant presentation layer
Easy to use applications
Confidence in mobile security
For these demands to be met, we must delve a little deeper to uncover the hidden demands. The challenges that lie just below the surface are the building blocks of the enterprise. Here we find the need to:
Capture and manage structured and unstructured data
Increase enterprise intelligence
Provide end-to-end security
Integrate consistent collaboration
Improve application availability and stability
Respond to unprecedented shifts in workload
Take advantage of collaboration opportunities for fulfillment
Manage rapidly growing amounts of data
In summary, we need a platform that is reliable, performant, scalable, responsive, and secure. One that has built-in governance to meet the demands and deliver insight so we can make informed decisions.