Enterprise Class Innovation: System z Perspectives
I was listening to a popular weekly technology podcast (hint hint) when they mentioned that Intel has made a chip for a Gateway PC that, when you need to, you can upgrade in place with a $50 upgrade card. According to PC Magazine (here), the card is used to unlock hyper-threading and additional cache. While the first reaction may be, and was on this podcast, 'wait a minute, why aren't they giving me all the capability?', one of the participants quickly added: '...wait a minute, this is a really good idea... think of the cost savings, extending the life of the processor and...'. OK, obviously I paraphrase, but you get the point. Capacity on Demand has come to the desktop.
Reading through the many news items on this, you can see the same initial confusion reflected in the podcast, but others are starting to realize the implications. How long until we have extra processors available to turn on? When do we finally get RAID devices for disk that ensure we don't ever lose things on these sometimes fragile home systems? Will we ever get to the point where our desktops get dynamic snapshots to enable variable leasing schemes? ...or the ability to turn on more power for temporary periods of time?
My peers and I joke about how we have been cheating for decades by knowing about large systems and watching those capabilities stream down to distributed platforms. Now, maybe, we're able to talk about and share some tech insights with our kids or grand-kids!
"All right Grandpa, turn up the computer and turn down the thermostat, the grand-kids are coming!"
In case you have not seen the Fit for Purpose materials from your friendly IBM site or local team calling on your enterprise, there are a number of tools available to help you determine 'best fit' for servers and workload. Based on insight from some external studies, they focus on workloads related to business intelligence and analytics, Web and SOA, traditional transaction, and suites like ERP, CRM, SCM, etc. Best Fit has been around a long time with concepts like Balanced Systems (a nod to Ray Wicks et al), 'loved ones' (and a tip of the hat to Seibo Freisenborg), and constant fiddling with levels and amounts of cache to keep data flowing to the big engines crunching away -- even when those engines were what we would now consider little guys.
Yep, some workloads need more or less in the way of qualities of service (QoS), or non-functional requirements like availability, security, performance and so on. And some are more compute- or I/O-intensive, have shorter or longer transactions, or are spread across infrastructures or limited to running on isolated or even specialized processors. This all makes sense to the technical mind, but it is a good idea to remember the power of inertia, decisions already made, and what one buddy called Fit for Politics. It is tough to make changes when decisions are often not re-examined or justified -- they get cast in stone or aligned with factions and can even be perceived to be linked to career paths. Don't forget that in your consolidation or movement plans for the z196 and zEnterprise we have talked about! And don't forget that part of the power of being able to QUICKLY move workloads across platforms in the new complex is that you can quickly try things out and, over time, learn to trust the idea of decisions not cast in stone -- maybe not doing lots of analysis ahead of time and just freaking trying it!! (Hey, there's an idea....)
On another 'how things have changed' note: I cleaned my office recently and tossed both round and square backup discs galore. Between more reliable, cheaper drives and offsite backup schemes, I realized it had been a while since I did that hours-long data backup and labeling fun time. The other thing I found was a folder of magnetic shapes representing e-business server types. Ten years ago or so, when the idea of creating these e-business infrastructures with customers was new, I'd find a magnetic whiteboard (easier than you think for big companies) and slap these blocks on the wall with new names like web servers, application servers, portals, gateways for voice or files or B2B... and we would plan out the new world of opening up the enterprise to partners, suppliers and customers. Those concepts and server types are commonplace now, so I felt pretty secure in setting them free too.
...but maybe I should think about making a new set of magnet blocks with lines of business or service types, with event categories and collaboration options as discussions for the next wave of Smart systems start getting built?
Looking through the current issue of Mainframe Executive (you are subscribed, right?), I saw a nice interview with some of the Academic Initiative students. I also noted that there will be university program representatives at SHARE this August in Boston to talk with mainframe shop managers (see z Events). The theme continues in IBM Systems Magazine for the Mainframe with the article Educated for Success.
These items made me nostalgic, thinking of Dr. Seuss and "all the wonderful things they'll see!" We old fogies saw virtual storage, MVT, SVS, MVS, and on up to z/OS, and they may see operating systems so many levels of complexity and abstraction above what we have watched that it boggles the mind. We abstracted platforms with middleware running anywhere, and then raised the bar by abstracting run-times, with the evolution from early CSP and VA/GEN to the current Enterprise Generation Language: EGL.
We watched virtualization from basic storage to VM, server consolidation, and federation, and they start by taking steps on the cloud!
On the note of systems evolving, it seems I am hearing about more enterprises looking hard at long-term systems that were built over decades to perform incredibly efficiently but, alas, in many cases (since they are rigid and tightly coupled), when it is time to introduce the change monster, the prospect of 'different' overwhelms them. Projected costs, time, and risks start to look pretty scary. Hey, just remember that the remodeling industry is bigger than new housing construction, and build that value case regardless of how large the 'maintenance' is to your application or infrastructure base.
And don't forget, there are many more options with componentization, messaging, event-based architectures, SOA and web services -- not to mention modernization and transformation strategies and tooling that weren't there just a few years ago. Remember VSE to MVS migrations? How about Y2K? The longer you go without changing, the bigger the bump -- whatever the system.
Just remember: start small (skunk works and prototypes), draw some good pictures (architectural models), bring extra sandwiches (resources)... maintain your perspective (humor).
Or maybe, wait for some magic that we have yet to see!!
Let's look at some more developing trends or patterns... IBM announces the Financial Markets Framework at the annual Financial Services Technology Expo in New York and not only refers to microsecond latencies and millions of transactions, but also builds on existing industry frameworks, adding feeds from disparate sources (think Info stream and stream computing from last year...) adding analytics and process extensions to address risk, regulatory and compliance areas. Do you remember the 20 plus times volume system that was announced about a year ago? Do you think that middleware is evolving still? (Do we need to mention what systems the Financial Industry runs?)
I stumbled on another source I should perhaps have known about called the Dancing Dinosaur blog, pretty interesting reading.
I found it after seeing one of the sessions at Innovate that had a demo to connect System z to smartphones, seeing the redbook, and curious to see if there were any mentions of it on the net. This ties in with a number of announcements to support phones like the recent Android support by collaboration software. Integration of the user to mainframe continues in all kinds of ways.
Well, I know this is a short one, I am off next week and then traveling. Oh, and take a look at the COBOL Cafe hub for the new Rational Developer for z Unit Test feature. It certainly adds some interesting possibilities for managing the test workload on z!
Just got back from Innovate 2010 and I encourage you to view the keynotes; they will fire you up on innovation, systems of systems, and the future -- fer sure!
Take a look at the current IBM Systems Magazine to learn, among other things, how tape is far from dead -- 29.5 billion bits per square inch demonstrated (44x today's capacity)??
The mainframe mag, zJournal, is now at the mainframezone, and the current issue has an article on mashups and Web 2.0 with the mainframe. (have you looked at CICS support for PHP?)
...and IBM has demonstrated a graphene-transistor-based chip at 100 GHz -- as in a single layer of carbon atoms exceeding the cutoff frequencies of silicon chips with the same gate length. IBM is also getting involved in more auto systems with Daimler, and there is a nice retrospective on Disney and IBM (with videos!) here.
So, I ran across a couple of interesting articles recently that, as an architect working with large systems and large enterprises, made me stop and think.
First, in the Financial Times from June 8, a nice bit of thought on outsourcing, governance, and a reminder how key tech is and the importance of managing technology correctly!
Secondly, in the current issue of Strategic Finance (oops, sorry, the specific article is for IMA members only it seems...) there is a good discussion on how to handle Idle Capacity Costs (p. 55). Without going into the detail, the point is that execs at your enterprise are looking at these things.
And... what system helps manage resources to minimize excess capacity, maximize utilization?
... and oh by the way have you looked at moving to or consolidated non-System z systems on z lately?
Or would you rather wait for your manager to ask why not?
Did you miss last month's birthday? Yes, COBOL turned 50 last month. I could blather on about how there are 30 billion COBOL and CICS transactions a day, or how there are still well over 1 million COBOL programmers, or how there may be as many as 5 trillion lines of COBOL code out there running the world's largest institutions. Instead, let's just say Happy Birthday to this common business-oriented language and, like your grandpa, remember to give it some respect!
As a field architect, I have the opportunity to run into all kinds of customers and situations. This means I get to read all kinds of technology. Recently, I had the opportunity to take a closer look at one aspect of virtualization across our server platforms in IBM and found myself very encouraged in the direction we're going. You may recall last year at this time IBM announced, and later followed through on, the acquisition of the Transitive company.
About six months later, IBM announced PowerVM, which provided the capability to consolidate sets of applications across Power systems (AIX, IBM i, and Linux). This included the rapid deployment of workloads in partitions, and even the live transfer of running workloads. There's a lot of detail in the features around resource sharing, the implementation of micro-partitioning (where you can have as many as 10 dynamic logical partitions per processor core), and so on, but the exciting thing is to see the direction of virtualization and how concepts started on System z percolate down across other platforms. I recently heard about the impact at a conference where a video running out of a partition was moved across physical machines live on the conference floor -- who wouldn't like to have seen that?
Remembering how the iSeries was moved to Power systems, learning that Transitive helped move Apple systems across chipsets, and seeing examples everywhere of increased management and utilization of processor resources (such as the recent z/OS enhancement allowing zAAP-eligible workload to run on zIIP engines), it just gets one itching to see the next result of that Transitive virtualization acquisition!
I was reading the October issue of Popular Mechanics when I came across a column by Glenn Harlan Reynolds entitled Ready for Anything. In the article he defines resilience engineering to include the idea of designing and maintaining systems so they have some give: able to offer extra capacity, handle sudden loads, provide plenty of warning when things begin to break down, have backup systems in case they do, and so on. I immediately thought: hey, this sounds a whole lot like what System z has been doing for decades!
I wondered what IBM is doing for this newly coined approach (though it was mentioned that resilience engineering was born as an academic idea in response to the 2003
In case you did not know, IBM Research is the world's largest industrial research organization, with about 3,000 scientists and engineers in eight labs spanning six countries. IBM has produced more research breakthroughs than any other company in the IT industry and has led in U.S. patents for the past 15 years. It's always nice to see that our scientists at IBM Research -- which recently celebrated the 20-year anniversary of moving atoms -- and the engineers across IBM are continuing to stay ahead of the curve in designing our systems!
I had a conversation this morning with a longtime consultant who has worked with 'the mainframe' since at least the 60s and is heavily involved in both SHARE and CMG. We found ourselves shaking our heads at the durability of perceptions that persist in spite of overwhelming and increasing evidence that the System z platform is far and away the most cost-effective platform in the world.
One nice recent example of continuing evidence comes from The Clipper Group, and their newsletter, The Clipper Group Navigator (April 23, 2009), which talks about how well System z fits into upcoming Cloud strategies.
This was on the heels of a session I attended that talked again about consolidation efforts. The results showed energy savings of 80%, space savings of 85%, software savings of 35%, and labor savings of 54% -- while reminding us that average server utilization leaves 85% of capacity unused.
Next, I took a look at the recent announcement letter, which previewed z/VM V6.1 (letter 209-207). Besides a raft of improvements related to storage, networking, and Linux enablement, my eyes perked up (while I was reading not listening), when I saw this:
Running more Linux server images on a single System z server: Considerably more images than are currently supported by the LPAR mode of operation (up to 60 on z10 EC and z10 BC) may be supported with z/VM guest support. These Linux on System z server images can be deployed on standard processors (CPs) or IFL processors. Running multiple Linux images on an IFL-configured z/VM system may not increase the IBM software charges of your existing System z environment. Clients running z/OS, z/VM, TPF, z/TPF, z/VSE, or Linux on System z can add z/VM V6.1 on IFL processors to their environments without increasing IBM software costs on the standard processors (CPs).
Then, at the end, there is a Statement of General Direction that talks about a z/VM Single System Image, whose intent is to allow all z/VM member systems to be managed as one system, across which workloads can be deployed. There is also something called z/VM Live Guest Relocation, aimed at moving a running Linux virtual machine from one single system image member to another.
Wow, way to move the hypervisor along!
It is easy to think of System z as just the hardware, or to focus on the z/OS operating system and forget about the 40-year contribution that VM has made in virtualization, especially over the last few years with consolidating Linux systems.
With each additional step of on demand capabilities, the numbers for total cost of operations improve and the picture of how all these new 'clouds' can actually start to form becomes clearer.
You may remember, we have mentioned Hoplon Infotainment and their massively multiplayer game, Taikodom, which resides on System z and uses Linux and the gaming chip used in the Sony PlayStation (sometimes called the 'gameframe'), a couple of times in this blog. (I think it was a year ago in February, and again this past October.) They now have announced plans to go global and could be hosting as many as half a million users by year-end, with plans including graphic novels, a possible TV show, and other tie-in activities.
The announcement includes a great YouTube video where the Founder and CEO, Tarquinio Teles, talks through the selection of System z and their gaming business model. He discusses the characteristics we are all so familiar with in System z, like availability, security, reliability, creating virtual resources quickly, and running Java workload. However, perhaps the best reason for picking a z-based platform is reflected in his final comment, where he sums up by saying:
“…fun is something people take very seriously, and you don’t want to mess with people when they are having fun.”
Or - In the Club: More Detail
(I wrote a Blog a while ago, and it was suggested I add my more detailed version..so, here you go!)
The big boys on System z platforms have decades of experience tuning and using techniques to optimize mixed workloads and databases. Workload Manager and static plans for DB2 are examples of component functions that are widely used on System z but not always leveraged well during integration with the distributed world. DB2 static plans are used to improve database workload performance, management, and security. Workload Manager assures high utilization, high service levels, and high value on the System z platform. How can distributed applications leverage them for their benefit? Let's take a look at both areas.
Workload Manager looks at transactions, jobs, and even database work, then lets them run and use resources based upon priorities, profiles, and things like intended service levels and velocity through the system. As e-business applications came along, the new transaction manager on the block, WebSphere on z, joined CICS, IMS, and JES2 as a transaction manager and was integrated as a full member of 'the club'. So WebSphere on z workload, and its associated DB2 work, could then be managed, secured, and lined up to get the kinds of resources it needed. (See Say What? for a great overview of 'the good old days', packages, plans and such.)
Distributed Workload Needs a Ticket:
Unfortunately, as things evolved for the e-business applications (and here we mean both Java and .NET workload) off of System z, the kind of integration necessary to achieve the same kinds of benefits sometimes needs a little help. It turns out that when requests are sent to DB2 they tend to show up in the hopper as undifferentiated pieces of work -- unless the Java programmer knows to specify an indicator (as SAP does by specifying differentiating parameters such as the correlation ID). Usually, though, there is no 'handle' for DB2 to differentiate the incoming workload (or for subsystems to handle workload management, monitoring, reporting...). So, by default, everything settles into a low priority and a low level of service. (For DB2 the default service class is 4 -- which is low!)
If you don’t have that ‘handle’, System z can’t prioritize the workload so it gets its fair share competing with all the other workloads running on System z. Without the ‘handle’, you guarantee your incoming distributed work is at the bottom of the bucket!
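As a rough illustration of giving DB2 that 'handle' from the Java side, the JDBC 4.0 standard defines a few client-info properties (ApplicationName, ClientUser, ClientHostname) that drivers can pass along for classification and monitoring. This is just a sketch; the application and user names are invented, and DB2 drivers also have their own driver-specific properties beyond the standard ones:

```java
import java.sql.Connection;
import java.sql.SQLClientInfoException;
import java.util.Properties;

public class WlmHandle {
    // Build the standard JDBC 4.0 client-info properties that give DB2
    // (and downstream, WLM and the monitors) a way to tell this workload
    // apart from everything else arriving in the hopper.
    public static Properties classificationInfo(String appName, String user) {
        Properties info = new Properties();
        info.setProperty("ApplicationName", appName); // the 'handle'
        info.setProperty("ClientUser", user);         // who is driving the work
        return info;
    }

    // Apply the handle to a live connection before sending work to DB2.
    public static void tag(Connection conn, Properties info)
            throws SQLClientInfoException {
        conn.setClientInfo(info);
    }
}
```

Check your driver documentation for which properties actually flow through to the host for classification; the point is simply that the work arrives named rather than anonymous.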
Another large issue affecting overall performance for distributed Java workload using DB2 on the mainframe (and by the way, .NET applications drive this scenario against DB2 more than against any other IBM database) is taking advantage of the first technique mentioned above: the use of static plans.
In the distributed environment where the paradigm is for interpretive rather than compiled programs, it is tough to even think of techniques using a pre-compiled approach as an alternative. However, there are ways to leverage the concept of a ‘static plan’ for DB2 by distributed workloads too.
Welcome to the Party: PureQuery:
PureQuery now enables distributed applications to bind static packages set up ahead of time.
A package, also called an access plan, can be thought of as a precompiled subroutine with the information to line up execution details ahead of time (such as access paths, indexes, etc.). Once created, the packages can be stored in the DB2 catalog and are bound to a program. Note: using packages can also add a layer of abstraction and security to DB2 access.
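To make the precompiled-subroutine idea concrete, here is a toy simulation (nothing like DB2's real internals; every name here is invented) of the difference: bind resolves the execution details once, ahead of time, and every subsequent execution just reuses the stored result:

```java
import java.util.HashMap;
import java.util.Map;

public class PackageCatalog {
    private final Map<String, String> catalog = new HashMap<>();
    private int optimizeCount = 0; // times we "chose an access path"

    // Simulate BIND: work out the access details once, ahead of time,
    // and store the result in the catalog under the package name.
    public void bind(String packageName, String sql) {
        optimizeCount++;
        catalog.put(packageName, "plan(" + sql + ")");
    }

    // Simulate static execution: just look up the prebuilt plan --
    // no per-call optimization cost.
    public String execute(String packageName) {
        return catalog.get(packageName);
    }

    public int getOptimizeCount() {
        return optimizeCount;
    }
}
```

Bind once, execute as many times as you like: the optimization work happens exactly once, which is the whole appeal of static packages over re-optimizing every dynamic request.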
See some good references from developerworks and DB2 Magazine;
These static packages can be created and waiting on System z in the DB2 catalog for the Java applications to 'call' or connect to and use, through a binding process done by the WebSphere distributed admin or programmer. (By the way, this is done through techniques that require far less heavy lifting than an earlier technology called SQLJ employed. See some good tutorials on pureQuery, again in DB2 Magazine and developerWorks.)
Gift Bags for All:
If both approaches are used, distributed Java workloads can then benefit from the serious ($$) performance, security, and management effects of engaging DB2 and Workload Manager, and from using those static packages waiting at the ‘Enterprise Data Hub’ (System z).
But wait a minute.. there’s more…
Okay, that sounds great, but let's stop and think about this. Put another way: what if you could now have the assurance that your database access on the mainframe from distributed applications could have much better, and much more predictable, performance -- like on the level of the service level agreements they use up there?
What if the mainframe land of sometimes 'unknown-ness' for distributed folks was something that could now be counted on to give you fast data from lots of data sources -- federated, services, real time -- and across not just your infrastructure of distributed and centralized, but across partner networks??
Since many application suites are on distributed platforms, how might your plans be affected for BPM, real time intelligence, MDM….? (See the article on MDM and PureQuery on Developerworks).
These are surely some things to go back and look at… don't you think?
As mainframe guys, we know that one of the key strengths of the System z platform with the z/OS operating system is that it is designed to run at 100% utilization. Managing jobs, tasks, and transactions, Workload Manager assigns workload to resources based upon priorities. Now, that means you need a discrete name for each kind of workload. Without a name, there is no 'handle' for Workload Manager to associate things like service levels, priorities, and velocities, so it can figure out whether that piece of workload deserves a given amount of resource and computing capability. There is also a long-established technique on the mainframe that involves compiling programs so everything gets resolved ahead of time, including the database access component -- where SQL statements get converted into access paths using the right indexes, getting to the right databases, etc. This process gets done ahead of time to make actual runtime execution more effective. Okay, that is not too exciting, and most of you are probably well aware of what it means for a program to get compiled ahead of execution time rather than being interpreted dynamically at runtime.
Now let's turn to the distributed world, where one of the paradigms is interpretive rather than precompiled and where (surprise, surprise) many of the technicians supporting the environment did not spend the last few decades of their lives working with the mainframe. It turns out that if you don't know how to specifically classify your workload flowing to DB2, all the distributed workload (whether it is Java or .NET) gets put into a bucket which automatically defaults to a low priority and low allocation of resources and service. If enough care is not paid to differentiate the incoming workload and provide it with the 'handle' we talked about above, all of the subsystems of the mainframe (including your DB2 database, workload manager, security, monitoring and reporting components) can't give your application the attention it deserves. Also, while there have been some techniques to accomplish it, being able to associate or bind static packages to distributed application programs is something that has not been done very much, due both to awareness and to the perceived heavy lifting associated with those techniques (see SQLJ).
While there have been ways to do both of these techniques, distributed and glasshouse folks do not always talk as much as they should, and the techniques were not always particularly easy -- or were just considered 'heavy lifting'. A good example of this was something called SQLJ which, while it enables the use of static SQL, has had very low adoption rates. Now, with the advent of pureQuery (which has been out about a year), distributed workload can take advantage of the concept of static packages (that precompiled effect) and, as a part of the larger Data Studio suite, pureQuery delivers techniques to the distributed technicians to more easily classify their workload as it shows up to DB2 on the mainframe.
If you do utilize these two techniques, not only is this distributed workload finally eligible for Workload Manager to give it whatever level of service it needs, but that workload can benefit from the efficiencies of static packages on the mainframe (since programmers now have an easier way to bind packages to their deployed programs).
Okay, so now we have a ticket to the party and can use a lot of the same mechanisms that COBOL programs or batch jobs have historically used to get their fair share of resources. One of the key things this means is that distributed workload can integrate better with the mainframe. It can more effectively use the data on the enterprise data hub. The kinds of response times available to these distributed applications from the mainframe can be far more predictable, with better performance. There is even a security benefit, since using these static packages means that users are authorized at the package level and there is not a potential exposure of direct database access.
If you are an architect or an IT manager, this also has large implications for designing applications across your whole infrastructure, including applications that work with other enterprises and partners. Do you have applications that need to go to other locations? Or that would like to become real time, federating data from multiple sources, integrated into business process management flows, and relying on applications and information across platforms and networks? Are you starting to look at master data management, greater infrastructure integration with endpoint devices, or folding in new groups of users that just happen to reside on different networks, platforms, or infrastructures? Having a way for your workload to integrate better with enterprise resources, level the playing field for use of resources, and achieve better performance and predictability can be a huge step in an end-to-end systems design.
Oh, and by the way, it should be no surprise that there are also new opportunities for zIIP engine usage.
Large systems, as represented today by System z shops around the world, have always led the way in applied technology across the finance, technology, commerce, and government industries. They have led recent innovation waves, including e-business, On Demand, SOA, and Green technologies. On November 6, IBM's chairman, Sam Palmisano, gave a speech at the Council on Foreign Relations as part of their corporate meetings program. The subject was introduced as proposing an increased infusion of intelligence into decision-making, but the larger issue addressed in this talk is the opportunity for leadership, in the context of the current economic, financial, and political turmoil, and the part technology plays in creating a smarter world.
Building on the concept of the Globally Integrated Enterprise (see the 2006 paper), Sam talks about the window of opportunity leaders have to effect change in the current climate of receptiveness for change. Building on Tom Friedman's observations about a flattened world in dire need of greener and more environmentally responsive approaches (Hot, Flat, and Crowded and The World Is Flat), he suggests the world also needs to be smarter and to accelerate the convergence between digital and physical infrastructures. Suggesting we move towards the future rather than hunkering down, Sam reminds us of the inspirational effects of taking action towards a hopeful future rather than trying to defend the past. (Something System z and large systems understand too well!)
Sam conveys the current urgency by reviewing some arresting facts: how 40 to 70% of energy is wasted in poorly managed power grids, how half of the world doesn't have sanitation facilities and one in five don't have safe water to drink. Or how a large part of the recent financial crisis was due to a lack of mechanisms to track and manage risk, and how huge opportunities exist in managing traffic, supply chains, and especially health care systems. Then he reminds us of our collective technological progress: our planet has almost a billion transistors per person (at a cost of one millionth of a cent each!), 4 billion cell phones, 2 billion Internet users, and 30 billion RFID tags. He shines a light on the pervasive use of sensors, connected and networked to provide a growing base of intelligence and capability, representing a huge potential for problem solving.
In short, an emerging world is revealed that is digitally aware, and intelligent or smart. If growing these capabilities is possible and affordable, and can make a significant difference in our world, then someone will do it. Sam asks: Why shouldn't it be you, your company, your country? (And naturally, IBM has examples of involvement in many areas, from MRI coils to solar capacitors, from traffic and power grid management to risk systems.)
Talking about the importance of this moment in time, Sam summarizes, saying: Everyone has to come out of their lanes. It will take collaboration across government, academia, and industry, with skills that are multidisciplinary and end-to-end. (Hey, a lot like architects!) Digging out of this current situation will require a realization that we need to move forward and create an even smarter world. As one of the audience commented, citing Peter Drucker's 1994 paper The Age of Social Transformation: There will be no "poor" countries. There will only be ignorant countries.
As part of the System z and large systems community, I think we will once again be on the leading edge of this next transformation as we become more instrumented, interconnected, and intelligent; as we become a smarter world.
I had the opportunity recently to talk with Paul Wirth, an IBMer who travels the country in support of DB2 and is a real smart guy (...and not just because he agrees with me on so many things!). I had asked to talk with Paul after reading about a new technology called pureQuery, which is part of the relatively new Data Studio suite. The technology is the result of a cross-brand initiative from IBM Software aimed at the intersection of application programmers and DBAs for effective data access. This technology's goal is to reduce the complexity of JDBC programming and of queries to relational databases, Java collections, and database caches. While there is a lot of detail (see the links below) on how this technology works, there are also a couple of System z implications that Paul shared.
The difference between dynamic and static execution of SQL statements is like the difference between interpreted and compiled execution. If you build static execution packages for DB2, you get a predetermined access path, built for the best performance and predictable execution of the workload. There are also benefits related to security isolation, since the application accesses the package and not the database table directly (think SQL injection, fraud, etc.). Shops that have taken the static SQL approach via the whole mechanism of DB2 packages have long known the associated benefits: not only performance, but also cost, security, monitoring, and consistency. While static SQL was previously available for Java via SQLJ, it was fairly complicated to do and only a limited number of shops did so. pureQuery makes this much easier, regardless of your Java framework or the API you're using.
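To make the interpreted-versus-compiled analogy concrete, here is a minimal sketch of what "dynamic" means from the server's point of view: with plain JDBC, the full SQL text is shipped at run time and DB2 must PREPARE it then, whereas static SQL (via SQLJ or pureQuery) references a package bound ahead of time. The table name EMP is made up, and the recording Connection proxy below is just a stand-in so the sketch runs without a database; it is not how a real driver works.

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Proxy;
import java.sql.Connection;
import java.sql.SQLException;
import java.util.ArrayList;
import java.util.List;

public class DynamicSqlShape {

    // Record every statement string a caller prepares, roughly the way a
    // DB2 server sees raw SQL text arriving from a dynamic JDBC application.
    public static List<String> preparedText = new ArrayList<>();

    static Connection recordingConnection() {
        InvocationHandler h = (proxy, method, args) -> {
            if (method.getName().equals("prepareStatement")) {
                preparedText.add((String) args[0]);
                return null; // a real driver would return a PreparedStatement
            }
            return null;
        };
        return (Connection) Proxy.newProxyInstance(
                Connection.class.getClassLoader(),
                new Class<?>[] { Connection.class }, h);
    }

    public static void main(String[] args) throws SQLException {
        Connection con = recordingConnection();
        // Dynamic SQL: the full text crosses the wire and the access path
        // is chosen at PREPARE time (or found in the dynamic statement cache).
        con.prepareStatement("SELECT NAME FROM EMP WHERE ID = ?");
        System.out.println("server saw: " + preparedText.get(0));
        // Static SQL would instead reference a pre-bound package and section,
        // so the access path was already fixed at BIND time.
    }
}
```

The point of the sketch: anything the server first sees at run time is, by definition, work it could not optimize, secure, or classify ahead of time.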
One of the strengths of System z is its focus on mixed workloads, performance, and cost effectiveness as a platform. Being able to prioritize and manage workloads through the Workload Manager is essential to accomplishing this. Unfortunately, much of the distributed Java transaction traffic coming off the web to DB2 is purely dynamic SQL workload, so z/OS sees it as undifferentiated pieces of work unless the programmer sets the properties on the connection class, which is often not done. If you have a unique package name, you can identify the application program, which means it's easier to monitor and do problem determination, and, especially, that the work is eligible for management via the z/OS Workload Manager (WLM). pureQuery gets you those unique package names as you implement static SQL, giving WLM the ability to assign them to specific service classes and allowing prioritization of the DB2 threads. (Note: it is not just the Java workload coming off the web; Java workloads via WebSphere for z/OS, stored procedures, and CICS Java workload can potentially benefit from pureQuery via the pureQuery Runtime for z/OS.)
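For those who can't yet move to static packages, even plain JDBC offers a way to stop being "undifferentiated work": the standard JDBC 4.0 client-info properties ("ApplicationName", "ClientUser"), which the IBM Data Server driver carries through so monitoring and classification rules can key on them. This is a minimal sketch of that often-skipped step; the application and user names are made up, and the stand-in Connection proxy just records the calls so the sketch runs without a database.

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Proxy;
import java.sql.Connection;
import java.sql.SQLException;
import java.util.HashMap;
import java.util.Map;

public class WorkloadTagging {

    // What a real driver would forward to the server; recorded here instead.
    public static Map<String, String> clientInfo = new HashMap<>();

    // Tag the connection so monitoring and WLM classification can key on
    // the application identity rather than a generic JDBC thread.
    static void tag(Connection con, String appName, String user)
            throws SQLException {
        // Standard JDBC 4.0 client-info property names.
        con.setClientInfo("ApplicationName", appName);
        con.setClientInfo("ClientUser", user);
    }

    static Connection recordingConnection() {
        InvocationHandler h = (proxy, method, args) -> {
            if (method.getName().equals("setClientInfo") && args.length == 2) {
                clientInfo.put((String) args[0], (String) args[1]);
            }
            return null;
        };
        return (Connection) Proxy.newProxyInstance(
                Connection.class.getClassLoader(),
                new Class<?>[] { Connection.class }, h);
    }

    public static void main(String[] args) throws SQLException {
        // "OrderEntryWeb" and "webuser1" are hypothetical identifiers.
        tag(recordingConnection(), "OrderEntryWeb", "webuser1");
        System.out.println(clientInfo.get("ApplicationName"));
    }
}
```

Of course, the pureQuery route is stronger still: a unique package name identifies the work without relying on every programmer remembering to set properties.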
A third interesting area to look at is stored procedures. First, if you have a simple, one-statement procedure that doesn't contain business logic, rules, or network data filtering, and is used just to provide a static plan, then consider using pureQuery and rewriting the SQL statement in the Java application. The pureQuery statement is zIIP eligible, provides the static plan, and avoids the need for the stored procedure. If the Java program is running on System z, it also becomes eligible for zAAP processor use.
Next, we know that DB2 9 gives us the new native stored procedures, which avoid the need for a WLM-managed address space because they run within the DB2 thread, and are zIIP eligible when invoked over TCP/IP with DRDA. So, say you rewrite an external procedure (e.g., one currently written in COBOL) using native SQL PL. The result? A procedure that is more efficient and is zIIP eligible (via DRDA).
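The DDL shape is the giveaway: an external procedure's CREATE PROCEDURE points at a load module via an EXTERNAL NAME clause, while a DB2 9 native SQL procedure carries its SQL PL body inline and has no external program behind it. The sketch below, with a made-up procedure and table name, just holds that kind of DDL as a string to show the difference; check your DB2 9 SQL Reference for the full syntax before relying on any of it.

```java
public class NativeProcedureSketch {

    // Hypothetical native SQL procedure DDL: the logic lives in the SQL PL
    // body itself, so there is no EXTERNAL NAME clause and no COBOL/C
    // load module to manage. Names are illustrative only.
    public static final String NATIVE_DDL =
        "CREATE PROCEDURE ORDERS.GET_STATUS " +
        "(IN ORDER_ID INTEGER, OUT STATUS CHAR(1)) " +
        "LANGUAGE SQL " +
        "BEGIN " +
        "  SELECT ORD_STATUS INTO STATUS " +
        "    FROM ORDERS.ORDER_TABLE WHERE ORD_ID = ORDER_ID; " +
        "END";

    public static void main(String[] args) {
        // An external (e.g. COBOL) procedure would instead carry an
        // EXTERNAL NAME clause naming the program to run.
        System.out.println(NATIVE_DDL.contains("EXTERNAL NAME")
                ? "external-style" : "native-style");
    }
}
```

Running the body inline in the DB2 thread is exactly what makes the native flavor both more efficient and zIIP eligible over DRDA, per the point above.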
As Paul said: “…if you think about it; pureQuery makes Java web applications behave a lot like CICS COBOL applications…” And that, as System z folks know, can be a good thing.
Whew... a lot to think about, but this is a technology to watch that holds the potential for improvements on numerous fronts. Watch this one as it moves in closer on the radar!
...and from DB2 Magazine:
As fall starts showing some early signs here in the Midwest, I took a few days last week for cleanup chores. Today’s entry will be kind of like that, with a couple of quick notes or tidbits.
First, while I have mentioned the Academic Initiative IBM has in relation to System z, I don't think I included any relevant pointers, so here they are for this program, which now has 500 schools participating worldwide:
Oh, and to engage with folks from IBM regarding System z skills, just send a note to: email@example.com.
Next, did anyone see the 'Chrome' browser announcement from Google? I've played with it a little bit, and two things pop out at me, besides the obvious business implications of stepping into the browser and client interface space. First, it is kind of nice to have a quick-start icon for certain sites. But also, the idea that one browser task having problems doesn't bring down all of your sessions says some positive things about their possible awareness of an old System z design concept! Something stumbles but it doesn't take anyone else down with it… sound familiar?
Finally, I heard a couple of items related to System z in the last couple of weeks that you might want to put in your virtual pocket. Did you know there are more CICS transactions every day than searches on the web? (I know, still true!) Another fun fact I heard in a customer teleconference referenced the mechanism by which System z hardware executes instructions in parallel and then compares the results to make sure they come out the same. That is a great example of an availability mechanism that came from a time when they were flipping those little ferrite cores of memory, and it is so deep in the design that we take it for granted, if we stop to think about it at all. It's a detail, but as great coaches say, details build champions. Think of John Wooden starting the first practice by teaching his players how to put on their socks and tie their shoes correctly to avoid blisters… and win basketball championships.
Hi, this is Dave again. I just got back from IBM's SWITA and zITA internal university, where I was part of a team delivering the System z track. (A SWITA is a pre-sales software architect, and a zITA is the same role, specializing in System z platform accounts.) It's always good to step back and spend some time renewing oneself, talking to others who do similar jobs, and, of course, when you deliver sessions, seeing how they are received and gauging where the platform sits with others who may not work with it every day.
One of the insights I brought away from this experience, for those of us who have spent a long time with System z (like when it was the mainframe!), is that 40+ years is a lot of time not only to layer on functionality, but also to lose the reasons why those functions were put in place. As we talked about some basic concepts like virtual storage, partitioning with PR/SM, and the original software virtualization engine z/VM, I was reminded of a friend's report on an interview he had a few years ago.
This friend had traveled to Texas to interview at a large IT installation and was anxious that the interview go well. Toward the end of the day, while touring the data center, his guide stepped aside and gave him the feedback he was looking for. He indicated that the hiring team had been concerned, since my friend was from the North, and in their experience there had been difficulties with other candidates who just didn't seem to fit in culturally. He shared, in a Texas drawl, that while those candidates were qualified and generally good people, part of the reason they 'just didn't get it' was not their fault, as they 'just didn't know any better'.
Experience, context, and exposure. It's not the fault of those who haven't been around System z that they don't remember why functional recovery routines were put in place, exactly how virtual storage works to ensure memory doesn't get stepped on by those who don't belong there, or the context that two-phase commit came out of. If you are one of those who have been around System z long enough to understand its design point and value, take the time, like a good Texan would with an 8-pound brisket, and help others 'get it'.