Another Blog: One of my team members asked: 'Why another blog?' It is a good question. We have Mainframe, which has lots of good detail for big iron. We have millions of pages online, in hardcopy, and in e-mail to look at! Still, the channels proliferate, and for a reason. We have moved from phone, face-to-face, and mail to a panoply of social networking tools, channels, and media forms which compete for our attention. Filters, feeds, and federation attempt to address the mixed blessing. We have enterprise systems, distributed systems, and grids in the sky. The complexity is not going away.
Connections make things possible like never before. In this weekend's New York Times, there was an article called 'Can You Become a Creature of New Habits?'. It talked about how we as humans approach problems through analysis, procedure, collaboration, and innovation. Built on the familiar tools of analysis and procedure, it is new connections, combining existing things in new ways, and leveraging existing expertise wherever it lives that get us to the breakthrough approaches called innovation. System z knows about innovation, so it makes sense that new techniques of collaboration are something this community is a part of.
At IMPACT 2008, Sandy Carter announced the new Smart SOA Social Network, which will add Line of Business and Business Analyst communities to the existing Developer and Architect communities that vehicles like developerWorks have done so much for. Social tools will be integrated that include Orkut, Second Life, MySpace, Xing, Twitter, Facebook, and LinkedIn. Add 400 universities participating in the Academic Initiative, where students learn about the evolved enterprise systems, and Destination z, where System z Business Partners provide an amazing array of capability, and this is clearly a new world. (Look! I am not wearing a white shirt and tie! There is no black dial phone in front of me! My inbox is not a physical box! My office is where again?)
So, we are going to add to the mix. We are a group of IBM Architects called zITAs, who have background and focus on System z. Here you will see observations, perspectives, and comments on what we see from our experience across gobs of technology, customers, and you don't want to know how many years. (Let's just say the hair on our heads is graying.) What will we talk about? It could be chargeback or cashflow, workload or method, technology or trends.
Having just put new windows in a 41-year-old house, the benefits of remodeling were top of mind while listening to keynotes from the recent Business Partner Leadership Conference. In one podcast, a CIO was referenced summing up the next few years of IT focus: ‘Modernizing applications, and dealing with the Web 2.0 thing’ (paraphrased). Hmm... easy to say, but how does that really work?
44 years ago, just before the original sashes were put in my humble abode, IBM invested over $5B to launch the general-purpose commercial business machine: the System/360. It was a bold move which has proved to have a large impact. One of the reasons these systems have successfully evolved is an initial design and commitment to enable applications to move forward as technology changes.
Plenty of applications have made technology jumps over the last four decades because of that initial commitment. Today’s leaps involve making applications accessible via the web, enabling them to be a part of new applications and accessible to new customers, markets, and business models. (Oh yeah, and dealing with that Web 2.0 thing too.) This refactoring transformation is referred to as ‘modernization’. (I guess it sounds more businesslike than remodeling...)
It is easy to forget that these kinds of changes have ensured that a majority of the data and transactions that run the world still reside happily, and effectively, in System z houses. A good example of evolved and layered systems fell in my lap, or ear, just days later from another podcast. Entitled Web 2.0 and Wall Street, this discussion provides a great retrospective of IT’s essential role on Wall Street and speculates on possible future use of Web 2.0 technologies. (Yes, there are a few System z platforms on Wall Street!)
Design, architecture, and remodeling... oops, modernization. For those of us with the responsibility of crafting solutions others live with, it’s a good reminder that most projects don’t start with an empty lot, or in a ‘Greenfield’ state,* and that it matters what base you start with when you make changes. Put another way, starting with good materials gives you the option of remodeling down the road.
Oh, and it’s pretty neat that those who live in a System z neighborhood can ‘modernize’ those old structures when the original windows get drafty rather than starting from scratch... isn’t it?
Upcoming blog topics: Thoughts on Chargeback and System z, Good Things We Forget About System z, notes on design Methods, and... your ideas?
* ‘Greenfield’ refers to clean slate projects versus ‘Brownfield’ efforts built on pre-existing structures. See: Eating the IT Elephant
As a one-time naturalist, it's been interesting for me to watch the evolution of energy usage and its increased price over the last few years, particularly in an IT context. Energy as part of an IT budget is a bit of a misnomer. Did you know that fewer than one in four IT departments pay for their energy? Or that fewer than one in five enterprises have done a detailed energy audit?
Let’s review some alarming IT energy growth numbers. For instance, did you know that energy costs have doubled in the last five years and may double again in less than the next three? Or that servers have grown sixfold, power and cooling eightfold, storage sixtyfold, and the administrative costs for these systems an average of fourfold?
I've also seen references that suggest there's low-hanging fruit here with the proper management attention. One figure quoted from the EPA suggested that many enterprises could save in the range of 25 to 55%. Some of the numbers I've seen from IBM Project Green analyses suggest this could be as high as 80%.
These are huge numbers, especially when you add in server utilization rates. Excepting System z (shameless plug: designed to run at 100%, deal with different kinds of workload, etc.), many of these servers run at really low utilizations: less than 10% for Wintel servers, and in the 10-20% range for Unix servers. While part of the problem can be technical limitations of scale, from an organizational perspective many of these servers are prevented by their setup from achieving optimum resource utilization, since they are dedicated to limited workloads and managed by single departments or lines of business.
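To make the utilization point concrete, here is a back-of-the-envelope sketch in Java. The server count and utilization figures are assumptions for illustration only; real consolidation studies weigh peaks, headroom, and workload affinity.

```java
// A rough consolidation-arithmetic sketch, under the assumptions in the
// comments. Not a sizing methodology; just the shape of the argument.
public final class ConsolidationSketch {
    public static void main(String[] args) {
        int servers = 100;               // hypothetical distributed footprint
        double avgUtilization = 0.10;    // ~10%, the low end cited above
        double targetUtilization = 0.90; // a platform designed to run hot

        double deliveredWork = servers * avgUtilization;        // "server units" of real work
        double consolidatedImages = deliveredWork / targetUtilization;

        System.out.printf("%d servers at %.0f%% deliver the work of %.1f servers at %.0f%%%n",
                servers, avgUtilization * 100, consolidatedImages, targetUtilization * 100);
        // Prints: 100 servers at 10% deliver the work of 11.1 servers at 90%
    }
}
```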
So, there are clearly some reasons to focus in this area, and clearly some benefits to be had. It's also pretty clear that if we don't address this issue there could be roadblocks to IT and enterprise progress. Remember, even if you're willing to pay the price, there are areas where there simply is no additional power to be had from the grid!
As a tech guy, it's tempting to talk about some of the new enterprise data center capabilities, the neat new ice battery, and all of the virtualization, consolidation, and optimization capabilities we have to contribute in this area. However, this is another problem that's going to take both business and IT to solve if we are to keep growing and contributing. It is going to take the governance stick to remind everyone what the goals are, to get groups to cooperate and play nice, and to solve the problem with the greater good in mind. It's going to take executive attention, sponsorship, and support.
For those of us in the tech community, let’s be ready with our enterprise and architectural perspective when the call comes from on high and get green! (For fun see green data center man!)
Whew, just back from a week off of hauling stone, trimming trees, and walking dogs. Last time I was discussing topics related to effective use of computing resources by taking a tour of green power issues and the serious inefficiencies represented by low utilization rates on platforms. After that blog entry, I came across a great series by Marlin Maddy, who has run hundreds of Scorpion studies which help enterprises determine the most efficient use of platforms.
-Did you know that the ratio of facilities costs has flipped over? It used to be that 80% were attributable to the ‘mainframe’ and 20% to distributed systems (on average), so naturally, costs tended to get lumped into mainframe for accounting. It is the other way around now, and yet distributed financial models not only tend not to include these costs, but their burden gets shunted onto the System z models.
-What was the reason you last upgraded power and cooling capability? Odds are it was for distributed systems, according to Marlin’s studies.
-What happens if you have good tools to capture resource usage? That’s right: those are the metrics that get put into financial models. System z has had great tools for decades, and often suffers for it when it comes time to build costing models. (Marlin has a great anecdote about the corporate jet getting slammed into mainframe ‘facilities’ costs!!)
-These are just a couple that jumped out at me. There are many more insights, so check these out!
While thinking about costing operations and potential stumbling blocks, I flashed back to the 1980’s, when I was working for a large financial institution. As part of capacity and performance duties, I installed the then-new MICS accounting components, wrote the SAS exits, and reported on systems resource metrics. These activities created input to financial acquisition models and to chargeback processes. Years later, I was surprised through a chance meeting to discover that many of those same pieces I set up were still running unchanged! In talking with my peers, it seems there are plenty of chargeback systems which have not kept up with changes in technology or the supporting facility cost patterns.
Like layers of business processes, or government regulations, these often elaborate systems are well overdue for reengineering and spring cleaning. If not, they can be a serious stumbling block to IT optimization of resources. The misperception persists that mainframe System z resources and platforms are ‘too expensive’. You and I know better. What has your experience been in this area?
A couple of weeks ago, IBM’s Roadrunner supercomputer blew past the petaflop barrier (IBM news and NYTimes article). Having spent so much time in the Kilo, Mega, and Giga ranges during my career, and finally getting comfortable with Tera prefixes, this threw me into one of my little time trips.
As the music for a time shift faded and the fuzzy strings cleared up (like in any good TV flashback), I was talking with a mainframe systems programmer in downtown Detroit, and based on the wide ties, I could see it was the early 80’s. As a proud owner of an aging behemoth System/370 (a 3033 in the single-digit range of MIPS), he was thinking about crossing into two-digit territory, and dreaming of a future of gigaflops.
At that time we were well past basic accounting into complex transactions, just neophytes in distributed processing, and well before the internet. I remember trying not to roll my eyes and wondering: ‘Come on, really, what would we need that kind of power for?’
Well, in 2008, what we (and I say we, since many of these projects are federally funded, and by whose tax dollars?) are using this kind of power for is even more mind-blowing than my fellow gearhead buddies might have envisioned. How about these for starters: nuclear stockpile monitoring, terrorist activity tracking, climate change modeling, and genome analysis...
One thousand trillion... 1,000,000,000,000,000. Fifteen zeroes and my head starts to fuzz out. The names for the next progressions at 18 (Exa), 21 (Zetta), 24 (Yotta), and 27 (Xona) zeroes are waiting. Researchers will get us there by continuing to look at every element in the system, and may use approaches like the recent chip-stacking experiments where water-cooling rivers as thin as a hair flow between stacked condos of computing. They’ll probably also use a mix of specialized processors, as you see in Roadrunner and the current System z. Out of the way, petaflop... here comes the new fraternity: Exa, Zetta, Yotta. I think I’ll get a t-shirt!
Architects are not just techies. As we shepherd solutions from creation to fulfillment, we are concerned with method and design, project realities and technology, but we also need facilitative skills in our toolkit. With projects like SOA and Enterprise Data Centers on the horizon for many, processes to work with groups to elicit information, create plans and make decisions need to be employed effectively by architects. In other words, facilitation processes.
IT has deep roots in the complex merging of businesses, technology, and people. Facilitative disciplines have played a key role, especially in shops where System z and its predecessors lived. For instance, Chuck Morris and Tony Crawford, both of IBM, created the Joint Application Requirements (JAR) and Joint Application Development (JAD) approaches in the 1970’s. We can recognize their facilitative grandchildren in current workshop offerings for System z, SOA, Green Data Centers, and Modernization.
Another great example of the intersection of IT and facilitation is the creation of a leading organization in the field: the International Association of Facilitators (IAF). With contributions from Organizational Development, Education, and other disciplines, this group was started in ‘94 with a very strong contribution from IT participants. Two further examples include the wonderful work of Ellen Gottesdiener in her book Requirements by Collaboration, and Michael Wilkinson, who came from IT and, recognizing the value of facilitation techniques, moved to a full-time focus with The Secrets of Facilitation.
Today, when execs talk about top-of-mind concerns, legacy modernization and collaborative tools are at the top of the list. (See the May 19 blog entry.) More than ever, addressing these areas means collaboration and participation, understanding and buy-in across disparate groups to achieve optimal solutions. As we build our personal skills and experiences as architects, let’s be sure facilitative techniques have their well-deserved place in our toolkit.
While traveling on business, I heard a refrain whose time I thought had passed in reference to System z. No, it was not the one that claims the platform is ‘more expensive’. (There are reams of material and experience to lay that urban myth to rest!) This was a comment bemoaning the lack of new workloads for System z. What??!
Perhaps the biggest door opener for new workloads has been the ability to bring Linux workloads on board through the enabling capabilities of z/VM and IFL engines on System z. Not only does this strategy not add to MLC licensing streams for z/OS, but the engines themselves are priced so compellingly that they provide real financial leverage in workload migration analysis.
As we would expect, System z continues to refine design elements to improve system performance for more kinds of workloads and applications. Every element continues to get refined, from new chipsets and buffer strategies to new connection technologies.
Look at the new range of workloads enabled just by the faster chips on z10. Doubling the chip speed, while only a portion of system performance, does move the line to include compute-intensive workloads that might not have been a good fit for movement or consolidation previously.
While it is easy to overlook, even new instructions are being created to help workloads. In the z10 these include instructions created to address decimal floating point functions, compiler code efficiencies (C++, PL/I, and COBOL), and Java workload enhancements.
ISVs have been busy voting for System z as a platform. Consider: 500 new ISVs joined the System z world last year. According to some of the numbers from our ISV partner community, there are 1300 new applications being built that will run on System z. This is in addition to the 1200 applications already enabled for Linux on System z, and the 4000 or so already available.
No new workloads?! We haven’t even talked about the other specialty engines, the expanding role of System z in dealing with workloads related to energy management, administration, and governance, serving enterprise data in-line, or new applications related to process and analytics that support new business models and could not even have been done a few years ago.
There was an article in the New York Times earlier this year that talked about the ‘mainframe’ and how it has evolved into a whole new creature, unrecognizable from the thing we called ‘mainframe’. As it continues to evolve, the question we have to ask is not what workloads are there for it, but which ones don’t fit, and for how long?
A month or so ago I heard a lecture from someone at Los Alamos who was talking about their efforts with IBM on their latest specialized grid solution. The lecture started by talking about Moore’s law, the progression of smaller and more powerful chipsets, and how, if you keep making things smaller with more power, over time the logical conclusion is a kind of ‘singularity event’ with a bright flash of light. There was a pause while the audience figured out what he was saying (it go boom!), and then the laughter came. The dry delivery was, as a current advertisement says, priceless. But it points out the issue: we can’t assume there is no end to increasingly denser, more powerful chips. There are limits that imply compensating design strategies.
As we in ‘z’ know, strategies for extending systems capacity have been around a long time. There were attached processors and multiprocessors (AP, MP) decades ago. Systems evolved to horizontally add specialized processors for I/O processing (SAP), vector processing, and cryptographic functions. Loads of queuing studies led to carefully designed and balanced buffer hierarchies to feed the engines. ‘Balanced Systems Performance’ was a key phrase that reminded us all that a faster chip without overall system design improvements was pretty ineffectual.
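For the curious, that ‘Balanced Systems Performance’ argument is essentially Amdahl's Law. Here is a minimal Java sketch with made-up fractions showing why a faster chip alone buys so little when the rest of the system isn't keeping pace:

```java
// Amdahl's Law: overall speedup when only one component gets faster.
// The 30% CPU fraction below is an assumption for illustration.
public final class BalancedSystemSketch {
    static double speedup(double cpuFraction, double cpuSpeedup) {
        return 1.0 / ((1.0 - cpuFraction) + cpuFraction / cpuSpeedup);
    }

    public static void main(String[] args) {
        // Assume work spends 30% of its time on the CPU and the rest
        // waiting on memory, I/O, and queues.
        System.out.println(speedup(0.30, 2.0));   // ~1.18x from a 2x faster chip
        System.out.println(speedup(0.30, 100.0)); // ~1.42x even from a 100x chip
    }
}
```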
Recently, while IBM has been visible in grids that are widely, geographically networked (e.g. World Community Grid), System z has been extending the horizontal and parallel processing strategy with the addition of specialty processors like the zIIP, zAAP, IFL, and the ICF engine for the coupling facility. It should be happily noted that some have pointed out the seemingly serendipitous ‘cost engineering’ effects of moving specialized functions to these processors: a serious financial outcome from good technical design.
System z continues to evolve, and looking forward, there are processing challenges in areas like XML, security, and analytics that could benefit from cost and performance improvements. These workloads may be logical candidates to be enfolded into the System z sphere as specialty engines. Of course, as Ian Richardson used to say as the politician Francis Urquhart in the BBC thriller ‘House of Cards’: "You might well think that; I couldn't possibly comment".
Hi, this is Dave again. I just got back from IBM's internal SWITA and zITA University, where I was part of a team delivering the System z track. (A SWITA is a pre-sales software architect, and a zITA is the same specializing in System z platform accounts.) It's always good to step back and spend some time renewing oneself, talking to others who do similar jobs, and of course, when you deliver sessions, seeing how they are received and gauging where the platform sits with others who may not work with it every day.
One of the insights I brought away from this experience is that, for those of us who have spent a long time with System z (like when it was the mainframe!), 40+ years is a lot of time not only to layer functionality, but to lose the layers of reasons why those functions were put in place. As we talked about some basic concepts like virtual storage, partitioning with PR/SM, and the original software virtualization engine z/VM, I was reminded of a friend’s report on an interview he had a few years ago.
This friend had traveled to Texas to interview at a large IT installation, and was anxious that the interview go well. Towards the end of the day, while touring the data center, his guide stepped aside and gave him the feedback he was looking for. He indicated that the hiring team had been concerned since my friend was from the north and in their experience there had been difficulties with other candidates, who just didn't seem to fit in culturally. He shared in a Texas drawl that while the candidates were qualified and generally good people, that part of the reason they ‘just didn't get it’ was not their fault, as they ‘ just didn't know any better’.
Experience, context, and exposure. It's not the fault of those who haven't been around System z that they don't remember why functional recovery routines were put in place, exactly how virtual storage works to ensure memory doesn't get stepped on by those who don't belong there, or the context that two-phase commit came out of. If you are one of those who have been around System z long enough to understand its design point and value, take the time, like a good Texan would with an 8-pound brisket, and help others ‘get it’.
As fall starts showing some early signs here in the Midwest, I took a few days last week for cleanup chores. Today’s entry will be kind of like that, with a couple of quick notes or tidbits.
First, while I have mentioned the Academic Initiative IBM has in relation to System z, I don’t think I included any relevant pointers, so for this program that now has 500 schools participating worldwide:
There is an extensive online mainframe certificate program that IBM has partnered with Marist College to develop; here
There is also a new resume database where students are posting their resumes: here
For a list of schools teaching the mainframe, see: here (Then just click on the "Participating Schools" tab. )
Oh, and to engage with folks from IBM regarding System z skills, just send a note to: email@example.com.
Next, did anyone see the 'Chrome' browser announcement from Google? I’ve played with it a little, and two things pop out at me, besides the obvious business implications of stepping into the space of browser and client interfaces. First, it is kind of nice to have a quick-start icon for certain sites; but also, the idea that a browser task having problems doesn't bring down all of your sessions says some positive things about their possible awareness of an old System z design concept! Something stumbles but it doesn’t bring anyone else down with it... sound familiar?
Finally, I heard a couple of items related to System z in the last couple of weeks that you might want to put in your virtual pocket. Did you know there are more CICS transactions every day than searches on the web? (I know, still true!) Another fun fact I heard in a customer teleconference referenced the mechanism of System z hardware executing instructions in parallel and then comparing them to make sure they come out the same. That is a great example of an availability mechanism that came from a time when they were flipping those little ferrite cores of memory, and is so deep in the design that we take it for granted, if we stop to think about it at all. It’s a detail, but as great coaches say, details build champions. Think of John Wooden starting the first practice by teaching his players how to put on their socks and tie their shoes correctly to avoid blisters... and win basketball championships.
It felt like it's been about six months since the announcement of the z10, so I took an informal web sampling that reflected the z10 announcement from different points of view. Naturally, I found a couple of excellent articles from Mainframe Executive (one from June entitled: Green Machines: The System z10 Enterprise Class, and the other from April: IBM Unveils New System z10: Vital Signs Remain Strong). I also found a nice entry from an electrical engineering news service in Asia with some comments from IBM z10 designer Charles Webb, who addressed the design challenges that came with the increase of speed to 4.4 GHz. Finally, I scanned the spring publication of the z10 technical Redbooks: IBM System z10 Enterprise Class Technical Introduction (SG24-7515) and IBM System z10 Enterprise Class Technical Guide (SG24-7516), along with loads of consolidation and green-related items!
One of the things that really stood out in the external entries was comments like: ‘The z10 chip is easily the most elegant enhancement in more than a decade’, ‘Rock-Solid Computing for the Next Decade’, and ‘the first ground-up CPU redesign in an IBM mainframe in a decade’. All the articles I read really underscored my initial impressions about how many different areas were changed at once (huge speed increase, buffer structures, InfiniBand connections, reducing chips from 16 to 7 on the MCM, etc., etc.), but also the commitment to the platform these new capabilities represent.
It's always been fun to see the technical changes, but it seems easier than ever to link them directly to the business behind the workload. Decimal floating point functions get moved from millicode (yes, there is a step between microcode and the chip) to the chip for the demands of workloads related to financial institutions. Support is added for growing encryption needs through enhanced cryptographic processor functions. An additional 50 instructions are added aimed at improving compiled code efficiencies for the software (e.g. Java, WebSphere, and Linux) that enables growing Internet and application workloads. These are all good examples of the platform evolving for a changing world.
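To see why that decimal floating point move matters to a bank, consider this small Java example. Binary floating point cannot represent most decimal fractions exactly, which is exactly the class of error money math cannot tolerate:

```java
import java.math.BigDecimal;

// Binary floating point vs. decimal arithmetic for money-style math.
public final class DecimalDemo {
    public static void main(String[] args) {
        System.out.println(0.10 + 0.20);   // 0.30000000000000004 (binary float)

        BigDecimal cents = new BigDecimal("0.10").add(new BigDecimal("0.20"));
        System.out.println(cents);         // 0.30 (decimal arithmetic)
    }
}
```

Hardware decimal floating point aims this kind of arithmetic at the chip instead of at software libraries.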
Besides seeing that quote, ‘long-time assembler programmers will rejoice’, and the fun fact of 20,000 error checkers on the chip, there seemed to be a lot of discussion about the design effort between the System z and System p teams. I like the phrase “shared DNA" for their collaboration on areas like the design of memory controllers, floating point processors, and I/O bus controllers, but also that the z chip is different due to the platform's focus on functions like cryptography, compression, and decimal floating point capabilities. (Or the different buffer structures for different workloads, the levels of availability mechanisms, and something called local clock gating to reduce power consumption.)
Perhaps the nicest summary I saw was from Bill Carico, president of ACTS (an IBM Premier Business Partner), who wrote at Mainframe Executive: “… and a litany of other advancements, confirming that IBM remains strongly committed to keeping the mainframe on the cutting edge of technology. The one-sentence executive summary of the z10 announcement is simply this: ‘The mainframe still leads the industry in its ability to run mixed workloads, share data, operate consistently at over 90% utilization and near 100% availability at the lowest cost of ownership (TCO) in an impenetrable environment that runs on autopilot.’ No, I’m not saying the mainframe is the best tool for any job. I’m saying it’s the only platform with these unique capabilities”.
Says it pretty well, huh? Let us know what you are hearing about the z10 and its evolving role in the enterprise...
I had the opportunity recently to talk with Paul Wirth, an IBMer who travels the country in support of DB2 and is a real smart guy (... and not just because he agrees with me on so many things!). I had asked to talk with Paul after some reading about a new technology called pureQuery, which is part of the relatively new Data Studio suite. The technology is the result of a cross-brand initiative from IBM Software aimed at the intersection of application programmers and DBAs for effective data access. Its goal is to reduce the complexity of JDBC programming and queries against relational databases, Java collections, and database caches. While there is a lot of detail (see the links below) on how this technology works, there are also a couple of System z implications that Paul shared.
The difference between dynamic and static execution of SQL statements is like the difference between interpreted and compiled execution. If you build static execution packages for DB2, you get a predetermined access path, built for the best performance and predictable execution of the workload. There are also benefits related to security isolation, since the application accesses the package and not the database table (see SQL injection, fraud, etc.). Those shops who have used the approach of static SQL statements via the whole mechanism of DB2 packages receive the long-known associated benefits not only of performance, but also of cost, security, monitoring, and consistency. While this was previously available for Java via SQLJ, it was pretty complicated to do, and only a limited number of shops did so. pureQuery makes this much easier to do regardless of your Java framework or the API you're using.
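As a rough illustration, here is the dynamic side in plain JDBC, with the pureQuery static-SQL equivalent sketched in comments. The table and column names are made up, and the pureQuery annotation style is sketched from memory and may differ by Data Studio release, so treat it as a shape, not a spec:

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

// Plain JDBC is dynamic SQL: DB2 determines the access path when the
// statement is prepared at run time. (EMP and EMPNO are made-up names.)
public class DynamicLookup {
    public static String lastName(Connection con, int empNo) throws Exception {
        try (PreparedStatement ps =
                con.prepareStatement("SELECT LASTNAME FROM EMP WHERE EMPNO = ?")) {
            ps.setInt(1, empNo);
            try (ResultSet rs = ps.executeQuery()) {
                return rs.next() ? rs.getString(1) : null;
            }
        }
    }
}

// In pureQuery's annotated-method style (sketched from memory), the same
// SQL lives on an interface method, so the tooling can bind it ahead of
// time into a DB2 package with a predetermined access path:
//
//   public interface EmployeeData {
//       @Select(sql = "SELECT LASTNAME FROM EMP WHERE EMPNO = ?")
//       String getLastName(int empNo);
//   }
```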
One of the strengths of System z is its focus on mixed workloads, performance, and cost effectiveness as a platform. Being able to prioritize and manage workloads through Workload Manager is essential to accomplishing this. Unfortunately, much of the distributed Java transaction traffic coming off the web to DB2 is purely dynamic SQL workload, so z/OS sees it as undifferentiated pieces of work unless the programmer sets the properties for the connection class, which is often not done. If you have a unique package name, you can identify the application program, which means it's easier to monitor and do problem determination, and especially that the work is eligible to be managed via the z/OS Workload Manager (WLM). pureQuery gets you those unique package names as you implement static SQL, gives WLM the ability to assign them to specific service classes, and allows prioritization of the DB2 threads. (Note: it is not just the Java workload coming off the web; Java workloads via WebSphere for z/OS, stored procedures, and CICS Java workloads can potentially benefit from pureQuery via the pureQuery Runtime for z/OS.)
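As a minimal sketch of the 'set the connection properties' point: setClientInfo is standard JDBC 4.0, and the property names below are the common DB2 JCC client-info keys, though driver support varies by level, so verify against your own driver. All the values are made up for illustration.

```java
import java.sql.Connection;
import java.sql.SQLException;

// Tagging a connection so DB2 monitors and z/OS WLM classification
// can tell this work apart from anonymous dynamic SQL.
public final class WorkloadTagging {
    public static void tag(Connection con) throws SQLException {
        con.setClientInfo("ApplicationName", "ORDERS-WEB");        // hypothetical app name
        con.setClientInfo("ClientUser", "web-orders");             // hypothetical end user
        con.setClientInfo("ClientAccountingInformation", "LOB=RETAIL");
    }
}
```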
A third interesting area to look at is stored procedures. First, if you have a simple, one-statement procedure that doesn't contain business logic, rules, or network data filtering, and is used just to provide a static plan, then consider using pureQuery and rewriting the SQL statement in the Java application. The pureQuery statement is zIIP-eligible, provides the static plan, and avoids the need for the stored procedure. If the Java program is running on System z, it also becomes eligible for zAAP processor use.
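Here is a rough before-and-after sketch of that rewrite in plain JDBC terms. The procedure, table, and column names are hypothetical, and the pureQuery binding step that makes the 'after' version static is not shown:

```java
import java.sql.CallableStatement;
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.Types;

public final class ProcedureRewriteSketch {

    // Before: a single-statement external procedure wrapped in a CALL.
    // (GET_BALANCE is a made-up procedure name.)
    static void viaProcedure(Connection con, int acct) throws Exception {
        try (CallableStatement cs = con.prepareCall("{CALL GET_BALANCE(?, ?)}")) {
            cs.setInt(1, acct);
            cs.registerOutParameter(2, Types.DECIMAL);
            cs.execute();
        }
    }

    // After: the statement moves into the application, where pureQuery
    // can bind it as static SQL; no stored procedure needed.
    static void inline(Connection con, int acct) throws Exception {
        try (PreparedStatement ps =
                con.prepareStatement("SELECT BALANCE FROM ACCT WHERE ID = ?")) {
            ps.setInt(1, acct);
            try (ResultSet rs = ps.executeQuery()) {
                if (rs.next()) rs.getBigDecimal(1);
            }
        }
    }
}
```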
Next, we know that DB2 9 gives us the new native stored procedures, which avoid the use of WLM-managed address spaces because they run within the caller's thread, and are zIIP-eligible when called over TCP/IP with DRDA. So, say you rewrite an external procedure (e.g. one that currently uses COBOL) using native SQL PL. The result? A procedure which is more efficient and is zIIP-eligible (via DRDA).
As Paul said: “…if you think about it, pureQuery makes Java web applications behave a lot like CICS COBOL applications…” And that, as System z folks know, can be a good thing.
Whew... a lot to think about, but this is a technology that holds the potential for improvements on numerous fronts. Watch this one as it moves in closer on the radar!
The theme of IT enablement across multiple fronts, as brought out by several of the speakers at the SOA Summit this summer, keeps coming back to mind. On the one hand, I came across another reference to the lines of COBOL code out there (this one was for 240 billion!), and on the other there was an article in Mainframe Executive on Hoplon (Hoplon Infotainment’s Gameframe) and the fact that they are planning on hosting 2000 users per IFL for their massively multi-player environment (you know, the one with Cell processors in the System z?).
I was thinking about CICS and the sure progress of web services enablement, which has gotten lots of publicity, and thought of its counterpart: the progress in expanding legacy CICS efficiency through the evolution of ‘threadsafe’ tasks.
CICS has continued to evolve to take advantage of technologies that maintain application integrity while growing with the hardware evolution toward more processors. The process has involved distributing workload across multiple tasks, first with CICS core functions, later with DB2 integration, and now exploitation moves to include potential threadsafe use by some MQ and VSAM functions (CICS 3.2). The enabling of multi-tasking of user applications through multi-threading has helped improve performance and utilization while maintaining little things like the consistent and predictable throughput results and workload sequencing that large systems have been so obsessed with for the past few decades.
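For those who live in Java rather than CICS, here is a plain-Java analogy (not CICS API, and not CICS specifics) of what 'threadsafe' demands of a program before it can safely run concurrently:

```java
import java.util.concurrent.atomic.AtomicLong;

// Analogy only: state shared across concurrent tasks must be protected,
// or throughput gains come at the price of consistency.
public final class ThreadsafeAnalogy {
    private long plainCounter = 0;                            // not safe under concurrency
    private final AtomicLong safeCounter = new AtomicLong();  // safe under concurrency

    public void unsafeIncrement() { plainCounter++; }               // can lose updates
    public void safeIncrement()   { safeCounter.incrementAndGet(); } // stays consistent
}
```

Verifying which programs are like the first counter and which are like the second is exactly the candidate-finding work described below.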
Why is this important? Well, if you can do this sort of thing safely and improve utilization (there is that old System z focus again), it can mean serious returns, like 5-15% or more. For some of IBM’s larger customers this has translated into saving hundreds of MIPS and millions of dollars.
Finding the appropriate candidates and implementing the changes can get pretty involved, but the impact can be worth it, and finding candidates is easier than in earlier implementations via the use of CICS tools such as Performance Analyzer (to find which are good candidates), Interdependency Analyzer (to find which are not threadsafe; it could fit nicely in CICS Explorer), and Configuration Manager (to enforce a threadsafe environment).
Mmmm... maybe something less sexy than web services, but an area worth looking at, eh?
There are different kinds of consolidation: mergers and acquisitions, temporary partnerships, and even hostile takeovers. As times change, new combinations of people and institutions recombine and shuffle for optimal fit. This has certainly been highlighted for me in the past couple of weeks. First, I saw the financial battlefield of winners and losers, with some institutions disappearing and others being picked up by past competitors in ways we could never have imagined six months ago. Second, I attended a series of sessions on the dozens of software company acquisitions that IBM has folded into its portfolio over the last half-dozen years.
I've heard the process of folding these companies into the IBM world described as a core competency. The informal term I’ve heard is 'blue-washing', and while I can imagine there may be a Harvard Business Review article on the details someday, I can state, as an unofficial observer, that we get better at it each time a company is ‘blue-washed’. Externally, the process delivers tighter, pre-integrated solutions with more functionality for our clients. Doing these integrations of once isolated and disparate pieces can save our clients a huge amount of effort and time; as in an order of magnitude less effort.
This behind-the-scenes consolidation of middleware gets added to the consolidation efforts already underway in IT. Workloads from legacy, ISV, and e-business applications are moving in together where once they got to live in their own, sometimes messy, apartments. Platforms are merging for cost efficiency and infrastructure effectiveness, not unlike that house you may have rented with friends at college.
Of course, as we consolidate IT, some of us will have to get used to some different ways of doing things again; like System z’s insistence that you share the common area, work with others, and pick up your garbage!
Large systems, as represented today by System z shops around the world, have always led the way in applied technology across the industries of finance, technology, commerce, and government. They have led recent innovation waves, including e-business, On Demand, SOA, and Green technologies. On November 6, IBM's chairman, Sam Palmisano, gave a speech at the Council on Foreign Relations as part of their corporate meetings program. The subject was introduced as proposing an increased infusion of intelligence into decision-making, but the larger issue addressed in this talk is an opportunity for leadership, in the context of the current economic, financial, and political turmoil, in the part technology plays in creating a smarter world.
Building on the concept of the Globally Integrated Enterprise (see the 2006 paper), Sam talks about the window of opportunity leaders have to effect change in the current climate of receptiveness to change. Building on Tom Friedman’s observations about a flattened world in dire need of greener and more environmentally responsive approaches (Hot, Flat, and Crowded and The World Is Flat), he suggests the world also needs to be smarter, and to accelerate the convergence between digital and physical infrastructures. Suggesting we move towards the future rather than hunkering down, Sam reminds us of the inspirational effects of taking action towards a hopeful future rather than trying to defend the past. (Something System z and large systems understand too well!)
Sam conveys the current urgency by starting with some arresting facts: how 40 to 70% of energy is wasted in poorly managed power grids, how half of the world doesn't have sanitation facilities, and how one in five don't have safe water to drink. Or, how a large part of the recent financial crisis was due to a lack of mechanisms to track and manage risk, and how huge opportunities exist in managing traffic, supply chains, and especially health care systems. Then, he reminds us of our collective technological progress: how our planet has almost a billion transistors per person (at a cost of one millionth of a cent each!), 4 billion cell phones, 2 billion Internet users, and 30 billion RFID tags. He shines a light on the pervasive use of sensors, connected and networked to provide a growing base of intelligence and capability, representing a foundation for problem solving with huge future potential.
In short, an emerging world is revealed that is digitally aware and intelligent, or smart. If growing these capabilities is possible, affordable, and can make a significant difference in our world, then someone will do it. Sam asks: why shouldn't it be you, your company, your country? (And naturally, IBM has examples of involvement in many areas, from MRI coils to solar capacitors, from traffic and power grid management to risk systems.)
Talking about the importance of this moment in time, Sam summarizes by saying that everyone has to come out of their lanes. It will take collaboration across government, academia, and industry, with skills that are multidisciplinary and end-to-end. (Hey, a lot like architects!) To dig out of this current situation will require a realization that we need to move forward and create an even smarter world. As one member of the audience commented, citing Peter Drucker's paper (The Age of Social Transformation, 1994):
There will be no "poor" countries. There will only be ignorant countries.
As part of the System z and large systems community, I think we will once again be on the leading edge of this next transformation as we become more instrumented, interconnected, and intelligent; as we become a smarter world.