What can’t you include in these two words: Enterprise and Architecture? Over the past few years there has been a huge emphasis on innovation, and with it, a focus on new business models. Eventually, this leads to line-of-business managers talking to the technical arm of the company about ways to implement the new processes, functions, and capabilities. Whether you call yourself an analyst, project manager, architect, or something else, if you sit between the vision and the reality, then you are involved with modeling and modeling frameworks.
Whose View of the World?
Maybe you use the Zachman Framework, or TOGAF, or Catalysis, or MDD, or Michael Porter, or the Balanced Scorecard. Maybe you go back to the 1960s, with Peter Checkland and Soft Systems Methodology, or are a fan of the Rational Unified Process. There are frameworks that focus on views and perspectives from the standpoint of technology, organizational development, social change, construction, or problem solving. They all fight for a balance between simplicity and the tendency to elaborate one more level. Do you use a telescope or a microscope? Do you crank it up or down one level? Do you bother to include society, culture, and economics?
Can It Fit in Your Head?
Whatever framework your company or customer might be using, keep in mind some simple rules of thumb. First, the point of building models is to communicate. This means you need to consider who is looking at the model, what you leave in and what you elide, and how well your audience can in turn communicate it to someone else. Second, remember the studies showing that our brains can only handle a handful of items at a time (The Magical Number Seven). Third, do not try to get it all on one piece of paper. Think of an architect designing your house, who has a set of plans with each sheet depicting a particular viewpoint: wiring, plumbing, landscaping, etc. For each picture, diagram, artifact, work product, or exhibit, justify its creation by its intended use for this particular project.
All of this means you do not ask your client or organization to deliver your particular framework or model to you on your first day (as I have to confess I tended to do in my early years). Instead, try to tease out the things they use and refer to. Ask what pictures, diagrams, charts, or reference material they find essential when talking about the problem being addressed. Don’t try to vacuum up all of their information through a raid on their file cabinet and a quick peppering of questions that sucks their brains out. Instead, take a lesson from other agents of change in organizational development, consulting, and even psychotherapy, and take time to build collaborative models together. Yep, that’s right: roll up your sleeves, grab a marker, and take more time than you thought you needed to find out ‘what they mean by that’, ‘what happens next’, and ‘who cares about this?’. Keep a simple framework in your head and start with things like financial, logical, and physical aspects. Elaborate from the conceptual to the more specific and concrete as you need. Don’t forget to focus on context and end results as barometers. Oh, and test models with subject matter experts, constituents, end users, and implementers along the way.
How does this relate to System z?
Well, I was cleaning through some files and ran across examples of both systems that were designed with very narrow scopes and those, like enterprise-class System z solutions, that were designed with larger perspectives, and which lasted better over time, scaled better, and ended up costing less in the long run. Naturally, I started to think about what characterized the design of those solutions. Today, when divisions and applications and infrastructures are being called on to be more efficient, and to integrate together more effectively, we are all being asked to evolve a ‘brown field’ solution architecture to its next incarnation. If you are involved in leading that change, please remember that most projects still ‘fail’ against their original objectives, most requirements are gathered incorrectly, and it is incredibly easy to get lost in detail or jargon and fail to communicate effectively across the business-IT chasm.
Where to start…
Some good thought frameworks I have personally found useful, that you may not have run into, include Ellen Gottesdiener (Requirements by Collaboration), David Sibbet (Graphic Facilitation and the Process Methodology of The Grove), and Peter Checkland (Soft Systems Methodology); for business models there is always Michael Porter, Kaplan and Norton of Balanced Scorecard fame, and Eric Helfert for insights into financial frameworks. Look for approaches that organize large, complex systems in other fields -- you might be surprised what can be leveraged.
You may remember five years ago, when CICS hit the 30 billion transactions a day level. Now, RFID tags have hit 30 billion worldwide, and there will soon be 1 trillion intelligent device endpoints; 50 million of these being Wii devices! Part of the Smarter Planet initiative includes expanding the concept of a truly dynamic infrastructure, which is about merging both the digital and physical infrastructures of an enterprise. A couple of weeks ago, IBM continued to add capabilities in this space by announcing new governance models, industry solutions, specialty partner programs, and new hardware and software products.
On the hardware front, there are new options for dealing with the 15 petabytes of data the world now generates every day (eight times the content of all US libraries!), both with the ProtecTIER® Deduplication Appliance (a result of the Diligent acquisition last year), which can reduce stored data by as much as 25:1, and the addition of full disk encryption on the DS8000 to our already proven tape encryption offerings.
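To get a feel for what a 25:1 deduplication ratio means, here is a minimal back-of-the-envelope sketch using the figures above. (The 25:1 figure is a best-case ratio, so treat this as an illustration, not a sizing estimate.)

```java
public class DedupMath {
    public static void main(String[] args) {
        double dailyPetabytes = 15.0; // data the world generates per day
        double dedupRatio = 25.0;     // best-case 25:1 reduction

        // Physical capacity needed to hold one day's data after dedup
        double stored = dailyPetabytes / dedupRatio;

        System.out.printf("%.1f PB shrinks to %.2f PB at 25:1%n",
                          dailyPetabytes, stored);
    }
}
```

In other words, a day's worth of highly redundant data could, at the best-case ratio, fit in a small fraction of the raw capacity.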
Also included were new IBM Service Management Industry Solutions from Tivoli and a new Dynamic Infrastructure Specialty Program, for which the first wave of Business Partners is already gaining certification, including Sirius, Mainline, Vicom, MicroStrategies, Agilysys, and Computer Integrated Engineering System. Check out the new Dynamic Infrastructure Journal too.
With so many institutions starting to step back and look at technology at a truly enterprise level, maybe it is time for us to rename IT to ET? Enterprise Architecture, Enterprise Messaging, and Enterprise Technology -- whatya think?
Now let’s turn to the distributed world, where one of the paradigms is interpretive rather than precompiled and where (surprise, surprise) many of the technicians supporting the environment did not spend the last few decades of their lives working with the mainframe. It turns out that if you don’t specifically classify your workload flowing to DB2, all the distributed workload (whether it is Java or .NET) gets put into a bucket which automatically defaults to a low priority and a low allocation of resources and service. If enough care is not paid to differentiate the incoming workload and provide it with the ‘handle’ we talked about above, the subsystems of the mainframe (including your DB2 database, workload manager, security, monitoring, and reporting components) can’t give your application the attention it deserves. Also, while there have been techniques to accomplish it, associating static packages with distributed application programs is something that has not been done very much, due both to awareness and to the perceived heavy lifting involved.
While there have been ways to do both of these things, distributed and glasshouse folks do not always talk as much as they should, and the techniques were not always particularly easy, did not have a low threshold of entry, or were just considered ‘heavy lifting’. A good example was something called SQLJ which, while it enables the use of static SQL, has had very low adoption rates. Now, with the advent of pureQuery (which has been out about a year), distributed workload can take advantage of the concept of static packages (that precompiled effect), and, as part of the larger Data Studio suite, pureQuery gives distributed technicians techniques to more easily classify their workload as it shows up to DB2 on the mainframe.
If you do use these two techniques, not only is this distributed workload finally eligible for workload manager to give it whatever level of service it needs, but that workload can also benefit from the efficiencies of static packages on the mainframe (since there is now an easier way to bind them to the deployed programs).
Okay, so now we have a ticket to the party and can use a lot of the same mechanisms that COBOL programs and batch jobs have historically used to get their fair share of resources. One of the key implications is that distributed workload can integrate better with the mainframe. It can more effectively use the data on the enterprise data hub. The response times available to these distributed applications from the mainframe can be far more predictable, with better performance. There is even a security benefit, since using these static packages means that users are authorized at the package level and there is no potential exposure from direct database access.
If you are an architect or an IT manager, this also has large implications for designing applications across your whole infrastructure, including applications that work with other enterprises and partners. Do you have applications that need to go to other locations? Or that would like to become real time, federating data from multiple sources, integrated into business process management flows, and relying on applications and information across platforms and networks? Are you starting to look at master data management, greater infrastructure integration with endpoint devices, or folding in new groups of users that just happen to reside on different networks, platforms, or infrastructures? Having a way for your workload to integrate better with enterprise resources, level the playing field for the use of resources, and achieve better performance and predictability can be a huge step in an end-to-end systems design.
Oh, and by the way, it should be no surprise that there are also new opportunities for zIIP engine usage.
You may remember, I mentioned that there were a number of articles in progress on the System z10 server. In the interim, the IBM Systems Journal and the Journal of Research and Development have merged, and those z10 articles are now available in issue #53 at this site. I was going to dedicate this blog to highlighting some of the articles in this issue, but I wanted to take a moment and share with you some of the progress that has happened in the 90 days since IBM’s Sam Palmisano gave his speech at the Council on Foreign Relations back in November on a Smarter Planet; the largest enterprise system we humans deal with.
Since then, an incredible amount of information has been made available both on the general IBM site and through some pretty amazing visibility in the media and Internet channels. On CNN, Sam had his first TV interview with Fareed Zakaria, and on CNBC, we saw him with our new President Obama. If you look at YouTube, there are many new videos related to a Smarter Planet and what it means across industry, business, technology, and society. Smarter utilities, telecommunications, energy, money, retail, infrastructure, and more are being highlighted at the IBM site and elsewhere.
As I walked through these stories of smart power grids, traffic management, new approaches to telecommunications, financial systems, food distribution, and other solutions, I started to realize that this may be not only the largest initiative from IBM in decades, but a real tipping point for technology being applied to the world in a meaningful way. Whether you are an IBM employee, Business Partner, customer, or a citizen of the world, this initiative will affect you, and it is worth starting to understand what it involves and where it is going. I highly recommend taking the time -- I think you will be inspired.
Did anyone see the Game-Frame computer in the latest Popular Mechanics? It is the ‘Game-Frame’ version of a PC from HP, which uses VooDooDNA water cooling. Mmm… I’ll bet you large-systems types thought that the Game-Frame was the Hoplon System z with integrated Cell processors for massively multiplayer use.
Well, it is nice to see water cooling in a PC, since it has been pretty well established, for 4+ decades now, that water cools around 3,500 times more effectively than air. Oh, and by the way, if you have been keeping track of IBM green initiatives like Cool Blue, Big Green, and others, you may have noticed a whole range of cooling techniques that take it to the next stage, including technologies like the Rear Door Heat Exchanger, cooling arteries, Hydro Clustering, and Cold Batteries (i.e. ice), which use advanced thermal exchange techniques.
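That 3,500x figure is easy to sanity-check from standard physical constants: compare how much heat a cubic meter of water absorbs per degree of warming against a cubic meter of air. Here is a quick sketch using approximate textbook room-temperature values (the constants are my assumptions, not from the original article).

```java
public class CoolingRatio {
    public static void main(String[] args) {
        // Approximate textbook values at room temperature
        double waterDensity = 1000.0; // kg/m^3
        double waterCp = 4186.0;      // J/(kg*K), specific heat of water
        double airDensity = 1.2;      // kg/m^3
        double airCp = 1005.0;        // J/(kg*K), specific heat of air

        // Heat absorbed per cubic meter per degree of temperature rise
        double waterVolumetric = waterDensity * waterCp; // ~4.19e6 J/(m^3*K)
        double airVolumetric = airDensity * airCp;       // ~1206 J/(m^3*K)

        double ratio = waterVolumetric / airVolumetric;
        System.out.printf("Water carries ~%.0f times more heat per unit volume%n", ratio);
    }
}
```

The ratio comes out around 3,500, which is where the rule of thumb comes from: water is both far denser than air and has a much higher specific heat.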
With datacenters running out of power, costs escalating daily, and barely over 2 in 10 enterprises having done energy audits, it is time for large installations that have System z to take a look at an area that may not have been investigated in depth for quite a while.
Sometimes a blog is long and sometimes it is short. As we are all busy starting a new year, with the pressures of economic realities and developments in green technology, I thought I would just share this astounding thought from the Business Class z10.
I was looking at some of the collections on Flickr related to z10 and found this entry:
BIG PERFORMANCE, LITTLE ENERGY: Created for mid-sized businesses, the IBM z10 BC simplifies commercial computer operations with "specialty engines" to run popular business and consumer applications (email, website hosting, transaction processing, etc) on one of the world's most trusted and secure computer platforms. IBM co-op student Sean Goldsmith surveys the new z10 BC mainframe in IBM's Poughkeepsie, NY, plant to add an extra 1,000 email users with the energy of a 100 watt light bulb. Goldsmith, a senior at Marist College, anticipates a bright future with the mainframe.
(It referred to the associated announcement launch here.)
Through a great example of teaming, POWER6 engineers worked with z10 engineers to create the breakthrough z10 chip, which benefits from some shared technologies but differentiates itself with a different design point. Some of the common DNA includes IBM’s 65nm silicon-on-insulator technology, large portions of the core pipeline design, the hardware decimal floating-point unit, and building blocks like latches, data flow elements, and SRAMs. (Note: for loads of detail, watch for future issues of the IBM Journal of Research and Development, which already has a number of detailed papers accepted for future publication.)
There are also some important differences, which give the sibling chip its own very distinct personality. With more real estate given to caching (about 20% more, in a multi-tier configuration) and extra on-chip elements dedicated to sparing functions, the z10 extends the tradition of IBM Large Systems processors handling mixed workloads and larger working sets, while adding even more capability for CPU-intensive workloads. (Don’t forget the huge jump to 4.4 GHz quad-core processor chips!)
In addition, there are the cryptographic and compression co-processors, Storage Control chips on each Multi-Chip Module, the new symmetric multiprocessor topology, and larger caching, which includes support for the new 1 MB page frames for z/OS. Also, while the System z9 introduced hardware decimal floating-point instructions in millicode, the z10 extends this to a hardware decimal floating-point unit on each core. Along with changes in z/OS Language Environment designs, and backed by a new open standard for decimal floating-point implementations, this increases accuracy, speed, and capability for widely used commercial and financial applications, and is an excellent example of systems design beyond just increasing the base chip speed.
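Why does decimal arithmetic matter so much for commercial work? Binary floating point cannot represent common decimal fractions like 0.10 exactly, which is unacceptable for money. A short Java sketch makes the accuracy point (Java's `BigDecimal` does the decimal arithmetic in software; the z10's contribution is doing this class of arithmetic in hardware on each core):

```java
import java.math.BigDecimal;

public class DecimalDemo {
    public static void main(String[] args) {
        // Binary floating point cannot represent 0.10 exactly,
        // so repeated additions drift -- a problem for money.
        double binarySum = 0.0;
        for (int i = 0; i < 10; i++) binarySum += 0.10;
        System.out.println(binarySum == 1.0); // false: ten dimes != one dollar

        // Decimal arithmetic keeps cents exact; this is the kind of
        // computation a hardware decimal floating-point unit accelerates.
        BigDecimal decimalSum = BigDecimal.ZERO;
        for (int i = 0; i < 10; i++) {
            decimalSum = decimalSum.add(new BigDecimal("0.10"));
        }
        System.out.println(decimalSum.compareTo(BigDecimal.ONE) == 0); // true
    }
}
```

Moving that decimal math from millicode into a dedicated hardware unit is exactly the "speed plus accuracy" combination financial workloads want.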
So, as in any family, the z10 defines its own role while leveraging the strengths and lessons learned from its sibling. It is easy to just look at chip speed, but the detail of system design is what leads to balanced system throughput, resource optimization, and total value of the platform. There truly is nothing in the world like the z10 system platform, and it should be interesting to see the detail referenced above in those articles as they get published. I’ll keep an eye out for you!
As the year ends, I’ve been thinking about the larger trends in technology that are especially true for System z customers. Here are some of the trends I have observed. Do these make your top 10?
1. The economy drives more customers to revisit System z platform characteristics. New candidates for workload migration (including ISVs and application suites) and platform consolidation are on the table.
2. Environmental concerns are moving more customers to System z, dynamic infrastructure, and IBM Green Planet technologies through executive-driven cost control and green initiatives.
3. Tighter controls and an increased push for systems optimization are driving new assurance, security, governance, and risk management programs, which in turn are driving new looks at System z.
4. The new technologies represented by the System z10 have generated lots of attention, not just from the business-class community, but also from traditionally distributed platform infrastructures.
5. Installations are moving beyond basic portal access to more advanced collaborative process technologies as IT continues to address complex, people-intensive interactions.
6. The ongoing IBM middleware evolution, including acquisitions, integration, and migration across platforms (including System z), has created a more consumable and effective portfolio for clients.
7. Information on Demand technologies have matured and are truly starting to deliver the next generation of enterprise-wide applications supported by all of the enterprise’s information.
8. Large installations are looking hard at modernization and transformation projects to move beyond the connection and exposure of legacy assets toward a more current infrastructure and the use of SOA technologies.
9. New business models, virtualized infrastructures (SaaS), and governance procedures focusing on accountability are raising the importance of industry and true enterprise architecture solutions.
10. The On Demand Operating Environment was seen by many as a visionary milepost in the future. The call for a Smarter Planet, however, is being driven by customers, industry, and societies.
Large systems, as represented today by System z shops around the world, have always led the way in applied technology across the industries of finance, technology, commerce, and government, and have led recent innovation waves including e-business, On Demand, SOA, and green technologies. On November 6, IBM's chairman, Sam Palmisano, gave a speech at the Council on Foreign Relations as part of their corporate meetings program. The subject was introduced as a proposal to infuse more intelligence into decision-making, but the larger issue addressed in the talk is the opportunity for leadership, in the context of the current economic, financial, and political turmoil, in the part technology plays in creating a smarter world.
Building on the concept of the Globally Integrated Enterprise (see the 2006 paper), Sam talks about the window of opportunity leaders have to effect change in the current climate of receptiveness to change. Building on Tom Friedman’s observations about a flattened world in dire need of greener and more environmentally responsive approaches (Hot, Flat, and Crowded and The World Is Flat), he suggests the world also needs to be smarter and to accelerate the convergence between digital and physical infrastructures. Suggesting we move toward the future rather than hunkering down, Sam reminds us of the inspirational effect of taking action toward a hopeful future rather than trying to defend the past. (Something System z and large systems folks understand too well!)
Sam conveys the current urgency by starting with some arresting facts: how 40 to 70% of energy is wasted in poorly managed power grids, how half of the world doesn't have sanitation facilities and one in five don't have safe water to drink, how a large part of the recent financial crisis was due to a lack of mechanisms to track and manage risk, and how huge opportunities exist in managing traffic, supply chains, and especially health care systems. Then he reminds us of our collective technological progress: our planet has almost a billion transistors per person (at a cost of one millionth of a cent each!), 4 billion cell phones, 2 billion Internet users, and 30 billion RFID tags. He shines a light on the pervasive use of sensors, connected and networked to provide a growing base of intelligence and capability, representing a huge future potential for problem solving.
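The transistor figure is striking enough to be worth a quick back-of-the-envelope check using only the numbers quoted above:

```java
public class TransistorMath {
    public static void main(String[] args) {
        double transistorsPerPerson = 1e9; // ~a billion per person
        double centsPerTransistor = 1e-6;  // one millionth of a cent each

        // Total cost per person, converted from cents to dollars
        double dollars = transistorsPerPerson * centsPerTransistor / 100.0;

        System.out.printf("~$%.0f worth of transistors per person%n", dollars);
    }
}
```

A billion transistors at a millionth of a cent apiece works out to about ten dollars per person on the planet, which is what makes pervasive instrumentation economically plausible.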
In short, an emerging world is revealed that is digitally aware and intelligent, or smart. If growing these capabilities is possible and affordable, and can make a significant difference in our world, then someone will do it. Sam asks: why shouldn't it be you, your company, your country? (And naturally, IBM has examples of involvement in many areas, from MRI coils to solar capacitors, from traffic and power grid management to risk systems.)
Talking about the importance of this moment in time, Sam summarizes by saying that everyone has to come out of their lanes. It will take collaboration across government, academia, and industry, with skills that are multidisciplinary and end-to-end. (Hey, a lot like architects!) To dig out of the current situation will require a realization that we need to move forward and create an even smarter world. As one member of the audience commented, citing Peter Drucker's paper (The Age of Social Transformation, 1994):
There will be no "poor" countries. There will only be ignorant countries.
As part of the System z and large systems community, I think we will once again be on the leading edge of this next transformation as we become more instrumented, interconnected, and intelligent; as we become a smarter world.
There are different kinds of consolidation: mergers and acquisitions, temporary partnerships, and even hostile takeovers. As times change, new combinations of people and institutions recombine and shuffle for optimal fit. This has certainly been highlighted for me in the past couple of weeks. First, I saw the financial battlefield of winners and losers, with some institutions disappearing and others being picked up by past competitors in ways we could never have imagined six months ago. Second, I attended a series of sessions on the dozens of software company acquisitions that IBM has folded into its portfolio over the last half-dozen years.
I've heard the process of folding these companies into the IBM world described as a core competency. The informal term I’ve heard is 'blue-washing', and while I can imagine there may be a Harvard Business Review article on the details someday, I can state, as an unofficial observer, that we get better at it each time a company is ‘blue-washed’. Externally, the process delivers tighter, pre-integrated solutions with more functionality for our clients. Doing these integrations of once isolated and disparate pieces can save our clients a huge amount of effort and time; as in an order of magnitude less effort.
This behind-the-scenes consolidation of middleware adds to the consolidation efforts already underway in IT. Workloads from legacy systems, ISVs, and e-business applications are moving in together where once they got to live in their own, sometimes messy, apartments. Platforms are merging for cost efficiency and infrastructure effectiveness, not unlike that house you may have rented with friends at college.
Of course, as we consolidate IT, some of us will have to get used to some different ways of doing things again; like System z’s insistence that you share the common area, work with others, and pick up your garbage!
The theme of IT enablement across multiple fronts, as brought out by several of the speakers at the SOA Summit this summer, keeps coming back to mind. On the one hand, I came across another reference to the lines of COBOL code out there (this one was for 240 billion!), and on the other there was an article in Mainframe Executive on Hoplon Infotainment’s Gameframe and the fact that they are planning on hosting 2,000 users per IFL for their massively multiplayer environment (you know, the one with Cell processors in the System z?).
I was thinking about CICS and the steady progress of web services enablement, which has gotten lots of publicity, and thought of its counterpart: the progress in expanding legacy CICS efficiency through the evolution of ‘threadsafe’ tasks.
CICS has continued to evolve to take advantage of technologies that maintain application integrity while growing with the hardware evolution toward more processors. The process has involved distributing workload across multiple tasks, first with CICS core functions, later with DB2 integration, and now with potential threadsafe use by some MQ and VSAM functions (CICS 3.2). Enabling the multi-tasking of user applications through multi-threading has helped improve performance and utilization while maintaining little things like the consistent and predictable throughput results and workload sequencing that large systems have been so obsessed with for the past few decades.
Why is this important? Well, if you can do this sort of thing safely and improve utilization (there is that old System z focus again), it can mean serious returns; like 5-15% or more. For some of IBM’s larger customers this has translated into saving hundreds of MIPS and millions of dollars.
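To make that concrete, here is a purely hypothetical worked example. Both input figures are my assumptions for illustration (a 10,000 MIPS shop and a $2,000 per-MIPS annual cost), not IBM pricing; only the 5-15% range comes from the text above.

```java
public class ThreadsafeSavings {
    public static void main(String[] args) {
        // Hypothetical figures -- assumptions, not IBM pricing
        int installedMips = 10_000;        // a large shop's capacity
        double costPerMipsYear = 2_000.0;  // assumed annual cost per MIPS

        // Apply the 5-15% utilization-improvement range mentioned above
        double lowSavings  = installedMips * 0.05 * costPerMipsYear;
        double highSavings = installedMips * 0.15 * costPerMipsYear;

        // 500 to 1,500 MIPS reclaimed: hundreds of MIPS, millions of dollars
        System.out.printf("Annual savings: $%,.0f to $%,.0f%n",
                          lowSavings, highSavings);
    }
}
```

Even under conservative assumptions, a 5% reclamation on a large footprint lands in seven figures a year, which is why the "hundreds of MIPS and millions of dollars" claim is plausible.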
Finding the appropriate candidates and implementing the change can get pretty involved, but the impact can be worth it, and identifying candidates is easier than in earlier days thanks to CICS tools such as Performance Analyzer (to find which programs are good candidates), Interdependency Analyzer (to find which are not threadsafe, and which could fit nicely in CICS Explorer), and Configuration Manager (to enforce a threadsafe environment).
Mmmm.. maybe something less sexy than web services, but an area worth looking at, eh?
I had the opportunity recently to talk with Paul Wirth, an IBMer who travels the country in support of DB2 and is a real smart guy (… and not just because he agrees with me on so many things!). I had asked to talk with Paul after reading about a new technology called pureQuery, which is part of the relatively new Data Studio suite. The technology is the result of a cross-brand initiative from IBM Software aimed at the intersection of application programmers and DBAs for effective data access. Its goal is to reduce the complexity of JDBC programming and of queries to relational databases, Java collections, and database caches. While there is a lot of detail (see the links below) on how this technology works, there are also a couple of System z implications that Paul shared.
The difference between dynamic and static execution of SQL statements is like the difference between interpreted and compiled execution. If you build static execution packages for DB2, you get a predetermined access path, built for the best performance and predictable execution of the workload. There are also benefits related to security isolation, since the application accesses the package and not the database table directly (see SQL injection, fraud, etc.). Shops that have used static SQL statements via the mechanism of DB2 packages have long known the associated benefits of not only performance, but also cost, security, monitoring, and consistency. While previously available for Java via SQLJ, this was pretty complicated to do, and only a limited number of shops did so. pureQuery makes it much easier, regardless of your Java framework or the API you're using.
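The two statement shapes can be sketched without a database at all. The first query below builds its SQL text dynamically by string concatenation, so the statement (and thus the access path) changes with every input, and hostile input can rewrite the query; the second is the fixed statement-with-parameter-marker shape that static packaging relies on. (Table and column names here are hypothetical, for illustration only.)

```java
public class SqlShapes {
    public static void main(String[] args) {
        String userInput = "X' OR '1'='1"; // a classic injection payload

        // Dynamic SQL built by concatenation: the statement text varies
        // per request, so it must be re-prepared, and the injected quote
        // turns the WHERE clause into a tautology.
        String dynamicSql =
            "SELECT * FROM ACCOUNTS WHERE OWNER = '" + userInput + "'";
        System.out.println(dynamicSql); // ...OWNER = 'X' OR '1'='1'

        // The static shape: one fixed statement with a parameter marker.
        // Because the text never changes, DB2 can bind it once into a
        // package with a prebuilt access path, and users can be authorized
        // to execute the package rather than access the table directly.
        String staticSql = "SELECT * FROM ACCOUNTS WHERE OWNER = ?";
        System.out.println(staticSql);
    }
}
```

The fixed text is what makes the package-level authorization and the predetermined access path possible; the concatenated version forfeits both.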
One of the strengths of System z is its focus on mixed workloads, performance, and cost effectiveness as a platform. Being able to prioritize and manage workloads through workload manager is essential to accomplishing this. Unfortunately, much of the distributed Java traffic coming off the web to DB2 is dynamic SQL workload only, so z/OS sees it as undifferentiated pieces of work unless the programmer sets the properties for the connection class, which is often not done. If you have a unique package name, you can identify the application program, which means it's easier to monitor and do problem determination, and especially that it is eligible to be managed via the z/OS Workload Manager (WLM). pureQuery gets you those unique package names as you implement static SQL, giving WLM the ability to assign them to specific service classes and allowing prioritization of the DB2 threads. (Note: it is not just the Java workload coming off the web; Java workloads via WebSphere for z/OS, stored procedures, and CICS Java workloads can potentially benefit from pureQuery via the pureQuery Runtime for z/OS.)
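For the "sets the properties for the connection" point, here is a minimal sketch of what identifying the workload looks like on the Java side. The property names shown (`clientProgramName`, `clientApplicationInformation`) are my understanding of the IBM JDBC driver's client-information settings and should be verified against your driver's documentation; the program and application names are hypothetical.

```java
import java.util.Properties;

public class ClientInfoSketch {
    public static void main(String[] args) {
        // Sketch only: property names are assumptions about the DB2 JDBC
        // driver's client-information settings -- verify before relying
        // on them in a real deployment.
        Properties props = new Properties();
        props.setProperty("user", "appuser");
        props.setProperty("password", "secret");

        // Give the workload a 'handle' that WLM classification rules
        // on z/OS can match against a service class.
        props.setProperty("clientProgramName", "ORDERENTRY");
        props.setProperty("clientApplicationInformation", "web-checkout");

        // DriverManager.getConnection(url, props) would pass these along;
        // without them, the work arrives as anonymous dynamic SQL and
        // lands in the default (low-priority) bucket.
        System.out.println(props.getProperty("clientProgramName"));
    }
}
```

The point is that a few lines at connection setup are the difference between a named, manageable piece of work and an anonymous one.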
A third interesting area to look at is stored procedures. First, if you have a simple, one-statement procedure that doesn't contain business logic, rules, or network data filtering, and is used just to provide a static plan, then consider using pureQuery and rewriting the SQL statement in the Java application. The pureQuery statement is zIIP eligible, provides the static plan, and avoids the need for the stored procedure. If the Java program is running on System z, it also becomes eligible for zAAP processor use.
Next, we know that DB2 9 gives us the new native stored procedures, which avoid the WLM-managed address spaces because they run within the caller's thread, and are zIIP eligible when used over TCP/IP with DRDA. So, say you rewrite an external procedure (e.g. one that currently uses COBOL) using native SQL PL. The result? A procedure which is more efficient and is zIIP eligible (DRDA).
As Paul said: “…if you think about it, pureQuery makes Java web applications behave a lot like CICS COBOL applications…” And that, as System z folks know, can be a good thing.
Whew... a lot to think about, but this is a technology to watch that holds the potential for improvements on numerous fronts. Watch this one as it moves in closer on the radar!
It felt like it had been about six months since the announcement of the z10, so I took an informal web sampling of reactions to the z10 announcement from different points of view. Naturally, I found a couple of excellent articles from Mainframe Executive (one from June entitled ‘Green Machines: The System z10 Enterprise Class’, and the other from April: ‘IBM Unveils New System z10: Vital Signs Remain Strong’). I also found a nice entry from an electrical engineering news service in Asia with some comments from IBM z10 designer Charles Webb, who addressed the design challenges that came with the increase in speed to 4.4 GHz. Finally, I scanned the spring publication of the z10 technical Redbooks: IBM System z10 Enterprise Class Technical Introduction (SG24-7515) and IBM System z10 Enterprise Class Technical Guide (SG24-7516), along with loads of consolidation and green-related items!
One of the things that really stood out in the external entries were comments like: ‘The z10 chip is easily the most elegant enhancement in more than a decade’, ‘Rock-Solid Computing for the Next Decade’, and ‘the first ground-up CPU redesign in an IBM mainframe in a decade’. All the articles I read really underscored my initial impressions about how many different areas were changed at once (a huge speed increase, buffer structures, InfiniBand connections, reducing chips from 16 to 7 on the MCM, etc.), but also the commitment to the platform these new capabilities represent.
It's always been fun to see the technical changes, but it seems easier than ever to link them directly to the business behind the workload. Decimal floating-point functions move from millicode (yes, there is a step between microcode and the chip) to the chip itself, for the demands of workloads at financial institutions. Support is added for growing encryption needs through enhanced cryptographic processor functions. Some 50 additional instructions are added, aimed at improving compiled-code efficiencies for the software (e.g., Java, WebSphere, and Linux) that enables growing Internet and application workloads. These are all good examples of the platform evolving to meet a changing world.
Besides the quote ‘long-time assembler programmers will rejoice’, and the fun fact of 20,000 error checkers on the chip, there seemed to be a lot of discussion about the design effort shared between the System z and System p teams. I like the phrase “shared DNA” for their collaboration in areas like the design of memory controllers, floating-point processors, and I/O bus controllers, but also that the z chip is different due to the platform's focus on functions like cryptography, compression, and decimal floating-point capabilities. (Or the different buffer structures for different workloads, the levels of availability mechanisms, and something called local clock gating to reduce power consumption.)
Perhaps the nicest summary I saw was from Bill Carico, president of ACTS (an IBM Premier Business Partner), who wrote at Mainframe Executive: “… and a litany of other advancements, confirming that IBM remains strongly committed to keeping the mainframe on the cutting edge of technology. The one-sentence executive summary of the z10 announcement is simply this: ‘The mainframe still leads the industry in its ability to run mixed workloads, share data, operate consistently at over 90% utilization and near 100% availability at the lowest cost of ownership (TCO) in an impenetrable environment that runs on autopilot.’ No, I’m not saying the mainframe is the best tool for any job. I’m saying it’s the only platform with these unique capabilities”.
Says it pretty well, huh? Let us know what you are hearing about the z10 and its evolving role in the enterprise...
As fall starts showing some early signs here in the Midwest, I took a few days last week for cleanup chores. Today’s entry will be kind of like that, with a couple of quick notes or tidbits.
First, while I have mentioned IBM's Academic Initiative for System z, I don’t think I included any relevant pointers, so here are a few for this program, which now has 500 schools participating worldwide:
There is an extensive online mainframe certificate program that IBM partnered with Marist College to develop: here
There is also a new resume database where students are posting their resumes: here
For a list of schools teaching the mainframe, see: here (then just click on the "Participating Schools" tab).
Oh, and to engage with folks from IBM regarding System z skills, just send a note to: firstname.lastname@example.org.
Next, did anyone see the announcement of Google's 'Chrome' browser? I’ve played with it a little bit, and two things pop out at me, beyond the obvious business implications of Google stepping into the space of browsers and client interfaces. First, it is kind of nice to have a quick-start icon for certain sites; second, the idea that one browser task having problems doesn't bring down all of your sessions says some positive things about their possible awareness of an old System z design concept! Something stumbles, but it doesn’t bring anyone else down with it… sound familiar?
Finally, I heard a couple of items related to System z in the last couple of weeks that you might want to put in your virtual pocket. Did you know there are more CICS transactions every day than searches on the web? (I know, still true!) Another fun fact I heard in a customer teleconference referenced the mechanism of System z hardware executing instructions in parallel and then comparing the results to make sure they come out the same. That is a great example of an availability mechanism that dates from a time when they were flipping those little ferrite cores of memory, and it is so deep in the design that we take it for granted, if we stop to think about it at all. It’s a detail, but as great coaches say, details build champions. Think of John Wooden starting the first practice by teaching his players how to put on their socks and tie their shoes correctly to avoid blisters... and win basketball championships.
Hi, this is Dave again. I just got back from IBM's SWITA and zITA internal University, where I was part of a team delivering the System z track. (A SWITA is a pre-sales software architect, and a zITA is the same role, specializing in System z platform accounts.) It's always good to step back and spend some time renewing oneself, talking to others who do similar jobs, and of course, when you deliver sessions, seeing how they are received and gauging where the platform sits with others who may not work with it every day.
One of the insights I brought away from this experience was a reminder, for those of us who have spent a long time with System z (like when it was just the mainframe!), that 40+ years is a lot of time not only to layer on functionality, but also to lose the reasons why those functions were put in place. As we talked about some basic concepts like virtual storage, partitioning with PR/SM, and the original software virtualization engine, z/VM, I was reminded of a friend’s report on an interview he had a few years ago.
This friend had traveled to Texas to interview at a large IT installation, and was anxious that the interview go well. Towards the end of the day, while touring the data center, his guide stepped aside and gave him the feedback he was looking for. He indicated that the hiring team had been concerned, since my friend was from the north, and in their experience there had been difficulties with other candidates who just didn't seem to fit in culturally. He shared in a Texas drawl that while the candidates were qualified and generally good people, part of the reason they ‘just didn't get it’ was not their fault, as they ‘just didn't know any better’.
Experience, context, and exposure. It's not the fault of those who haven't been around System z that they don't remember why functional recovery routines were put in place, exactly how virtual storage works to ensure memory doesn't get stepped on by those who don't belong there, or the context that two-phase commit came out of. If you are one of those who have been around System z long enough to understand its design point and value, take the time, like a good Texan would with an 8-pound brisket, and help others ‘get it’.