Enterprise Class Innovation: System z Perspectives
systemzblogger 2700017BYR 1,732 Views
Wow... a month already! I have seen a number of indicators, including server acquisitions for System z, that suggest the economy may have bottomed out, or at least that pent-up demand has forced large enterprises to make some decisions on acquisitions. Certainly, the zEnterprise has been adopted nicely, as reflected in the Mainframe Zone and in comments floating back from individual customers.
Next week is the annual Rational Software conference: Innovate, and the opportunity to hear Grady Booch, Walker Royce et al share developments (get it? developments?) in the area of Software and System Innovation. Streams range from Application Lifecycle Management, Enterprise Modernization, Application Security and Compliance, to Strategic Planning, and Software and Systems Innovation. Here is your chance to get updated on OSLC, Jazz, and all of the Portfolio pieces that contribute to new business outcomes. As the recent issue of CIO says, IT alignment and IT Value are no longer enough, business outcomes are what matter. (IT Value is Dead, Long Live Business Value -- May 15, 2011 edition)
Oh, and one more thing of interest: The Register reports today that "IBM Guns down Mainframe accelerator in Texas...". So, zIIPs and zAAPs will continue to be used as IBM intended, and under the rules and restrictions that have been set over time.
They sum it up by saying:
"The hired legal guns were blazing, and when the smoke cleared, the US District Court for the Western District of Texas, located in Austin, granted a permanent injunction in favor of IBM in a long-running lawsuit against Neon Enterprise Software, killing the controversial zPrime mainframe acceleration program."
Well... that settles that, I guess... until next time!
The 13 Volume ABCs of z/OS Systems Programming is a classic set, and the release of version 5 of Volume 9, dealing with UNIX Systems Services, seems a good time to highlight the series for those who may not be aware of it. For review, the series is aligned as follows:
Well, I just got off US District Court jury duty and it was quite an interesting view into a different world. As one who had not avoided it but simply had never participated before, you can imagine my little IT head's dialogue: where are the pictures? There is a whiteboard there, why don't they use it? Couldn't they put together a better compiled document of evidence briefs than this? I admit, I do the same thing when I go to the doctor's office and see wall after wall of paper file folders and watch the physician dig through the handwritten notes jammed into a folder. Anyway, if a vacation is a change of scenery, then maybe that is what I had, but it sure did not feel like it!
Did anyone notice the Novell SUSE extended support in the Enterprise z space last week? Novell Offers Industry's Longest Enterprise Linux Support Program. This adds to a more than 10-year presence in the enterprise space -- something easy to forget, I think. (brochure)
How about the Amazon cloud outages and Sony security exposures? Characterized by the headline in the Economist as Break-ins and Breakdowns, it seems we are seeing these sorts of things almost weekly. Again, whether it is cloud or general platform selection, put your enterprise systems hat on and assess the Service Level Agreement section; remember the decades of work to define the qualities of service and non-functional requirements that Enterprise Systems represent when you think of moving workload. Just because you are used to it all being built in and there does not mean it will be in a new environment unless you make sure of it! (Yes, System z and Enterprise Systems work with cloud and are still the most secure....)
A final quick note, and a reflection of what I am sure we all do... as I was looking across the functional portfolio of our acquisitions, I thought to myself: what is Jeff Jonas of SRD doing these days?
Well, it turns out he is speaking at IDUG, infusing his solutions into the ILOG and InfoSphere portfolios, and still thinking hard about privacy and in-stream analytics, applying these new ideas to IBM 'Smarter' solutions. Earlier this year he talked about his G2, or sense-making, project that came out quietly back in January. He talks about it in several places, and there is a nice slide deck on Slideshare here, where he talks about the skunk works G2 project, sense-making, and the larger picture of the work he is doing, with the deck entitled: Confessions of an Architect. This is good stuff around both advancing and controlling the next generation of analytics which, as he puts it, deals with Privacy and Performance, and is Smarter and More Responsible.
Just what you expect from IBM, right?
Good Day! I was going to post a draft I had about how the 3/29/11 issue of the Lex Column in The Financial Times had a nice summary and reference on how IT costs dropped 3.5 percent last year -- while health care went up 6 percent -- and then talk about what we are all seeing in cost control efforts. You know, 10% across the board, except how well Enterprise Systems are doing since they run the world and help implement new business models. (Really, it was all set!) Then, I was going to talk about the recent cloud announcements from a couple of days ago, since we have had a thread about cloud, and talk about how important it is to be careful as you step onto your first cloud: think about the risk, and about who understands and has developed mechanisms for security, backup, availability, etc., versus what could be just a little server in a rack somewhere. But then another thread quietly weaves its way into my head this morning, and I realize it is time to post an entry.
Sometimes large events have quiet taglines along the way. This one is so short, I include it here from announcement 111-078. (First, the overview:)
In Hardware Announcement 110-177, dated July 22, 2010, "IBM® zEnterprise BladeCenter® Extension (zBX)," IBM introduced a new dimension in computing with the announcement of the IBM zEnterprise Server (zEnterprise). This first in the industry offering makes it possible to deploy an integrated hardware platform that brings mainframe and distributed technologies together -- a system that can start to replace individual islands of computing and can work to reduce complexity, lower costs, improve security, and bring applications closer to the data they need.
As part of that announcement we provided a road map for IBM's hybrid capabilities, the delivery of special-purpose workload optimizers and select general-purpose IBM blades. In 2010 we began to deliver, first with our business analytics solution -- IBM Smart Analytics Optimizer -- and then general-purpose POWER7™ blades. In February 2011 we continued with the announcement of the IBM WebSphere® DataPower® XI50 for zEnterprise (DataPower XI50z), a multifunctional appliance for the System z® environment that can be implemented to help provide XML hardware acceleration, and to streamline and secure valuable service-oriented architecture (SOA) applications.
The next step of the road map is to incorporate select IBM System x® technologies, originally targeted for the first half of 2011. The reaction to delivering IBM System x capabilities has been very positive, with our clients also asking that we support Microsoft® Windows®. Therefore, today we are revising our road map to include planned support for Windows on System x as well as a revised schedule for IBM System x blade delivery on the IBM zEnterprise Systems.
...and then the Statement of General Direction:
In the third quarter of 2011, IBM intends to offer select IBM System x blades running Linux® on System x in the IBM zEnterprise BladeCenter Extension Model 002.
In the fourth quarter of 2011, IBM intends to offer select IBM System x blades running Microsoft Windows in the IBM zEnterprise BladeCenter Extension Model 002.
All statements regarding IBM's plans, directions, and intent are subject to change or withdrawal without notice. Any reliance on these statements of general direction is at the relying party's sole risk and will not create liability or obligation for IBM.
Today is one of those days we will look back on as a milestone. Want to control and manage your glass house? Want to control costs and manage complexity? zEnterprise. Let's go!
I had the opportunity recently to visit the zITA Boot Camp and speak with a community of Architects who watch over the mainframe, or Enterprise Systems, world. For those of you who don't know, zITAs are the System z IT Architect community in IBM. These are the guys who advise and guide the largest institutions as they keep the big systems evolving to meet current business initiatives with the appropriate technologies. Now I am not saying that there were not a few missing or grey hairs in residence, but there was a nice mix of what some HR folks might call 'vitality' representatives as well. While this community knows the intricacies of where virtual storage and channel commands started from, they were discussing developments that ranged from best fit and use of workloads across platforms, to futures of zEnterprise capabilities, to guiding IT leaders on strategies for decisions based on Total Cost of Ownership across the full range of use and expenditure categories that real clients experience.
These folks are really the Trusted Advisors we hear about, the technical conscience overseeing change in our Enterprise Clients. While they were being vetted on topics from future extensions to current technologies, they were also looking at application in spaces such as Cloud, Industry specific frameworks and solutions, and the next stage of low level integration across component boundaries using open technologies to accelerate and increase options across technology providers that can give more options for solution innovation.
As at any meeting or conference of this type, the conversations at breaks and meals are often some of the best. These are not individuals who are stuck in the quagmire of detail without a clue to what is going on with the business side at their clients. I heard conversations that included topics such as: how can we better understand industry trends our clients are dealing with, how can we help our customers better link IT and LOB initiatives so they are more effective and valuable once implemented, and what lessons can we leverage from other fields such as negotiation and the legal system to better communicate across disparate technical communities?
Sure, these guys know their availability strategies, but they also know what is going on with Web 3.0 and Linked Data, Economic and Risk Models of the Cloud Infrastructure, and what changes in compliance legislation might mean for reporting strategies and the systems behind them.
Who better to have guiding our largest and best institutions? Thank you zITAs.
z Growing: In the recent z/Journal I noted some impressive numbers for growth as shared by Bob Thomas on the publisher's page. In just the fourth quarter of last year, MIPS grew by an amazing 58% (the highest growth in a decade) and sales grew by 70% in the same period. zEnterprise systems pushed over 450 shipments representing 1.5 million MIPS. Whoa... no editorializing needed there, right?
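Those z/Journal figures invite a quick back-of-the-envelope check. Here is a minimal sketch (my arithmetic, not IBM's or z/Journal's; it assumes "over 450 shipments representing 1.5 million MIPS" means roughly 450 systems totalling about 1,500,000 MIPS):

```python
# Rough arithmetic on the quoted z/Journal figures (illustrative only).
shipments = 450          # "over 450 shipments" -- treated as roughly 450
total_mips = 1_500_000   # "representing 1.5 million MIPS"

# Average installed capacity per zEnterprise shipped in the period.
avg_mips = total_mips / shipments
print(f"Average capacity per zEnterprise shipped: ~{avg_mips:,.0f} MIPS")
```

Which works out to something over 3,300 MIPS per system on average, a useful way to see why no editorializing is needed.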
GUI GUI everywhere... I was talking with some folks in a session about the new interfaces for development (RDz and the IDE vs ISPF and lovely green screens) and the enhancements to sysprog -- sorry, systems programmer -- tools with z/OS Management Facility, and it made me flash back to the late 70s, when we worked with a customer to demo a function where you could actually start to put customized screens in color on ISPF. It was a pretty exciting thing -- that unfortunately went nowhere between the novelty, the customization, and the fact that despite multiple run-throughs ahead of time, the demo ran into (ahem) unforeseen technical difficulties. BUT the point was just the idea of color, something we all agreed would be nice someday. Heck, even the 3290 gas panel display with its pretty orange characters was enough variety to get many speculating. (I mean, come on, color TV only became the broadcast norm around 1967, right?)
..and Smarter everywhere... This all made me think about how we are simultaneously getting further away from our systems while interacting with them more intensely all the time. With phones and iPads with touch screens, the distance and the links in the chain are longer than ever, and yet the response and interaction are greater too. In a matter of minutes I have watched a demo where a CICS transaction gets enabled with EGL and modern tooling from Rational to expose a banking application on an iPhone. Between CICS events, WebSphere process engines, and appliances like DataPower, or ILOG rules engines, the options for dynamic interactions in flight are now astounding. Change a business rule, a process, a partner or supplier, and it is not 2 years of waterfall development and testing; it can be moments. It is not just the intelligent, instrumented devices of a smarter planet that are interconnected; there has been a lot happening back at the old IT shop too! Don't forget those web transactions have at least as many CICS transactions to match them every day (around 30 billion the last time someone looked, a few years ago). And yes, most of the data is still back home in the datacenter for enterprises -- and moving there more daily with private clouds too. So go and play, do real work, whatever.... don't worry, we (z and enterprise systems) got it covered behind the scenes.
As IBM moves to consolidate security solutions, offers Social Business courses for free, or -- of course -- has the Watson machine play on Jeopardy, you see the span of influence grow and expand beyond the traditional boundaries of IT. So it becomes more important to think about how we talk with the folks in these different places. For instance, I was recently at an IBM-sponsored session where the now long-established Eclipse client was the base for a number of Enterprise-level development tools from Rational, and someone not long out of school was watching next to me. Suddenly, I heard: 'Oh look, it looks just like Visual Basic!'
Exactly. We in the large systems world in IBM -- and for that matter the distributed world -- have been trying to get the message out for years about the evolution of tooling and the use of GUI interfaces, perspectives, and attractiveness paired with productivity for the next generation of IT coming along. We talk about skills, the changing of the guard, and the Academic Initiative, and yet I don't think I heard, in any of the materials, someone actually come out and say the perfect thing to link to the next generation's experience with programming!
In Management Speak:
Another example has to do with the aging workforce OR the aging technology challenge that goes on cycle after cycle. Sometimes it seems that this is a new crisis business has not had to deal with -- the whole issue of either old technology or the baby boomers being half of the large systems crowd and depending on what study you look at, the majority of them at least eligible for 'transition from the workforce'. And yet... while reading the current issue of Strategic Finance (the publication of the Institute of Management Accountants) there was a nice article about business as usual for planning the transition of executive leadership. They call it: Succession Planning.
Now, why the heck don't we use that term?? That is all Y2K was, or the first Virtual System, Telecommunication Network, SOA Solution... just business as usual and Succession Planning. For those of us who are in IT, or contribute to the changes around IT, we should remember this: The guys making the decisions are used to and familiar with a whole process around the discipline of succession planning. It fits into existing governance and strategy processes. It is immediately understandable and rings the bell in the cranium. Who is our next leader, the next big thing, zNext (sorry, not the official moniker-- though many of us have heard it!) technology? Use the right phrase or lingo and it is: '...No problem, we know how to deal with that. ...'
Why didn't you say so...?
So, let's try to see if we can get the eyes to light up, the bell to ring, the light bulb shining. Try to find the right phrase to link to the experience and world of your non-IT partners.
After all, it is called Enterprise, that means we need to connect everywhere and with everyone.... Use the right connection phrase and it can be, as the Cowardly Lion said in the Wizard of Oz: "...why didn't you say so?!..."
This headline came across my virtual desk last week, and I don't even remember how I found it, but I rushed immediately to two places: first, the direct link from The Register, and second, IBM news and System z news. This is pretty big stuff, and I will share a couple of key extracts and let the article do the talking:
" The second big change that's coming with the zBX, according to Doris Conti, director of System z marketing at IBM, is that Microsoft's Windows operating system will be supported on the Xeon blade servers inside the zBX complex. IBM has hosted over 300 workshops with mainframe customers discussing the new hybrid system, and customers were not exactly happy that IBM was restricting Linux to Xeon blades and not supporting Windows.
"We heard the feedback and we very much intend to deliver Windows support on zBX," says Conti."
...and a little bit later.....
"You may be wondering why Windows and Linux support on the Xeon blades in the zBX didn't ship back in November with the Power-AIX blades. Jeffrey Frey, the IBM Fellow and System z architect who designed the zEnterprise 196-zBX hybrid, says that the Xeon blades are coming later because IBM's Power-AIX customers were the ones Big Blue felt would take to the hybrid computing model first. (IBM is also fixated on preserving its market share in the Unix racket against resurgent Oracle and HP.) The plan now is to get Linux support on Xeon blades out the door this year, and then add Windows support as soon as possible.
Frey said that IBM was not sure how deeply it would have to get into the operating system or hypervisor code to manage AIX, Linux, or Windows when it started the zBX. So AIX, which IBM has the source code for, and Linux, which is open source, were the easiest places to start. IBM didn't want to get involved with Windows until it knew what it might need from Microsoft in the way of cooperation. "As it turns out, there is very little of that," Frey explained to El Reg, referring to the need to get into Windows code to make the OS work on the hybrid system.
"Frey also let the cat out of the bag on what hypervisor IBM is using on the blades. IBM's own Processor Resource/Systems Manager (PR/SM) type 1 hypervisor and its related z/VM operating system (which can function as a type 2 hypervisor) are used to dice and slice the zEnterprise 196. The company's own PowerVM hypervisor is used on the Power7 blades to carve them up into logical slices and to virtualize I/O on the blades. IBM has chosen a variant of Red Hat's Enterprise Virtualization (RHEV), the commercial-grade implementation of the KVM hypervisor for x64 iron, for the Xeon blades; this tweaked version is known as RHEV-Blue, predictably, and is made to cooperate with IBM's mainframe firmware. PowerVM will support AIX 5.3, 6.1, and 7.1, and Conti says that if mainframe shops want to run the IBM i 7.1 operating system (formerly known as OS/400) on Power blades, IBM will consider it. As for Linux on Power, Frey says there will be a need for it, but that Windows on Xeon blades is more important to get to market given the installed base of machines at mainframe shops."
Responding to client feedback -- there are no excuses now; just let the z196 and zBX flow across the datacenter like an amoeba... Read the article and think about it.
Well, a quick personal note to explain the absence of postings. On Oct 11, I started work and had chest and jaw pain that, long story short, led to a heart bypass and my absence from this blog and work for too much of the fall. It was quite a surprise and a hole to fall into, so welcome back, me!
Some of my main observations from the sidelines include laughing at the new phone with tiles that let you know there is activity on mail or social networking sites (the basic Lotus Notes concept -- which is how old now, 10 to 12 years?), the zEnterprise system becoming available, and continued tech announcements from CMOS Integrated Silicon Nanophotonics (as in 10x density, with electrical and optical devices on the same piece of silicon) to a range of announcements on security, smart solutions, and the cloud.
I noticed the magazine Government Technology had a good article on Modernization, as did z/Journal, and Mainframe Executive had a cover story on the creation of the new zEnterprise systems. In the legacy systems article in Government Technology, there is a good reference to the NASCIO survey in '08 that has the top 2 drivers as changes to business processes and the inability to support LOB requirements. Nice reminder about where to start in an enterprise before any tech change effort, huh?
OK, back with more, gotta ease into things again ya know...
I was listening to a popular weekly technology podcast (hint hint) when they mentioned how Intel has made a chip for a Gateway PC which, when you need to, you can upgrade in place with a $50 upgrade card. According to PC Magazine (here), the card is used to unlock hyper-threading and additional cache. While the first reaction may be, and was on this podcast, 'wait a minute, why aren't they giving me all the capability?', one of the participants quickly added: '..wait a minute, this is a really good idea... think of the cost savings, extending the life of the processor and...'. OK, obviously I paraphrase, but you get the point. Capacity on Demand has come to the desktop.
Reading many of the news items related to this clearly shows the initial confusion reflected in the podcast I listened to, but others are starting to realize the implications. How long until we have extra processors available to turn on? When do we finally get RAID devices for disk that ensure we don't ever lose things on these sometimes fragile home systems? Will we ever get to the point where our desktops get dynamic snapshots to enable variable leasing schemes? ...or the ability to turn on more power for temporary periods of time?
My peers and I joke about how we have been cheating for decades by knowing about large systems and watching those capabilities stream down to distributed platforms. Now, maybe, we're able to talk about and share some tech insights with our kids or grand-kids!
"All right Grandpa, turn up the computer and turn down the thermostat, the grand-kids are coming!"
I am heading for some travel, so I wanted to get my mid-month update done now. One of the interesting things about technology is the lag before announcements start to filter out into the world. It has been about 6 weeks since zEnterprise hit the stage, and I notice through my RSS feed aggregator that suddenly this last week or so there are lots of 'news' posts that, golly gee, IBM has the fastest processor in the z196 at 5.2 GHz -- while one typo'd it at 5.6. While that is great, they usually stop there, since the PC culture, I guess, seems to think that clock speed is all that matters when it comes to systems performance.
They don't tend to continue and say:
(Jennifer Dennis discusses the implications and some of the new offerings related to Application Management, Application Resilience, Security, and Asset and Financial Management. )
Another question -- or class of questions -- is about fit and where to use it: could I use it for... what? Naturally, this leads to a discussion, and as the realization hits I hear comments like:
Back when the largest processor was 1 MIPS (a million instructions per second, versus the 50K MIPS of just the z196 Central Processing Complex...) and a megabyte of memory on the 'mainframe' was a million dollars or so (versus about a thousand-fold less now with memory advancements and the latest announcements), we saw not only the creation of distributed systems and strategies to optimize the effective use of those key resources, but also looked hard at individual users' cycle use. Of particular focus were systems programmers, developers, and 2D or 3D drafting. We know that CAD stations evolved, and each TSO user (sysprog or AD) was looked at very closely, since -- and I remember doing these studies -- each user could easily be linked to, say, a percent or more of CPU usage as recently as the '80s. (Ouch!)
Fast forward to 2010, and the latest evolutionary strategy includes a strong focus on both cycle savings and productivity savings for users. For systems programmers, we have the z/OS Management Facility (which we have mentioned a few times), which is accessible via a browser. For developers on System z, there is Rational Developer for System z (with Java or EGL). Besides being an Eclipse-based IDE, the workstation-based tool does local syntax checking and -- with the new Unit Test feature -- offloads still more cycles. With the focus on effective use of system resources, and the emphasis on maintenance and operational support costs (since they still represent a large portion of IT budgets), these approaches should not be overlooked, since they not only address the concerns of both animate and inanimate resources (people and machines), but do so in an integrated way.
It is also nice to note that these are just part of a strategy to have tooling that is similar across multiple roles and platforms, which addresses the challenges of models that involve extended development infrastructures for globally integrated enterprises with development centers located ...well... anywhere. (See near shore, off shore, on shore, and models that are complicated to manage for shore!) Groan....
..and these are just the first level tools since Rational also has strategies for communication across teams (Rational Team Concert for z ), for modernizing the UI and accessing legacy applications through the tool Host Access Transformation Services, and for advancing SOA services strategies that optimize component reuse by looking at what is out there today via Rational Asset Analyzer .
OK... I won't go off on portfolio reviews here, but I will also mention that you should not overlook one of the most obvious areas for effective use of resources: make sure you are current on software compilers, since Big Blue has quietly been enhancing them on ALL platforms, but certainly System z... and the impact over the last 5 years or so can add up to total improvements that may double performance, or more, across a range of language/subsystem/platform combinations.
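To see why staying current on compilers matters, here is a minimal compound-growth sketch (my illustration, not IBM benchmark data) of how modest per-release gains stack up into that "double or more over 5 years" range; the 15 percent per-release figure is an assumption chosen purely to make the point:

```python
def compounded_speedup(per_release_gain: float, releases: int) -> float:
    """Overall speedup after a series of compiler releases, each improving
    performance by the same fractional amount (simple compound-growth model)."""
    return (1 + per_release_gain) ** releases

# Assumption: roughly 15% improvement per yearly release, five releases.
print(f"Cumulative speedup: {compounded_speedup(0.15, 5):.2f}x")  # about 2x
```

The takeaway is that skipping even a few compiler refreshes can quietly leave a near-doubling of throughput on the table.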
So, don't get caught short with old tooling. Refresh, renew and refurbish. Change things out and up. Your mom may have told you to change out your old under things since you never know who may see them. I say change out your older system things before an audit shows you have been missing out on big savings and you have egg on your face to your management team. (How is that for mixed metaphors?) Oh, and do take time this labor day to take a break and set down that phone and e-mail system!
In case you have not seen the Fit for Purpose materials from your friendly IBM site or the local team calling on your enterprise, there are a number of tools available to help you determine 'best fit' for servers and workload. Based on insight from some external studies, they focus on workloads related to business intelligence and analytics, Web and SOA, traditional transactions, and suites like ERP, CRM, SCM, etc. Best Fit has been around a long time, with concepts like Balanced Systems (a nod to Ray Wicks et al.), 'loved ones' (and a tip of the hat to Seibo Freisenborg), and constant fiddling with levels and amounts of cache to keep data flowing to the big engines crunching away -- even when those engines were what we would now consider little guys.
Yep, some workloads need more or fewer qualities of service (QoS), or non-functional requirements like availability, security, performance, and so on. And some are more compute- or I/O-intensive, shorter or longer transactions, spread across infrastructures or limited to running on isolated or even specialized processors. This all makes sense to the technical mind, but it is a good idea to remember the power of inertia, decisions already made, and what one buddy called Fit for Politics. It is tough to make changes when decisions are often not re-examined or justified -- they get cast in stone, or aligned with factions, and can even be perceived to be linked to career paths. Don't forget that in your consolidation or movement plans for the z196 and zEnterprise we have talked about! And don't forget that part of the power of being able to QUICKLY move workloads across platforms in the new complex is that you can quickly try things out and, over time, learn to trust the idea of decisions not cast in stone -- maybe not doing lots of analysis ahead of time and just freaking trying it! (Hey, there's an idea....)
On another 'how things have changed' note: I cleaned my office recently and tossed both round and square backup discs galore. Between more reliable, cheaper drives and offsite backup schemes, I realized it had been a while since I did that hours-long data backup and labeling fun time. The other thing I found was a folder of magnetic shapes representing e-business server types. Ten years ago or so, when the idea of creating these e-business infrastructures with customers was new, I'd find a magnetic whiteboard (easier than you think for big companies) and slap these blocks on the wall with new names like web servers, application servers, portals, and gateways for voice or files or B2B... and we would plan out the new world of opening up the enterprise to partners, suppliers, and customers. Those concepts and server types are commonplace now, so I felt pretty secure in setting them free too.
...but maybe I should think about making a new set of magnet blocks with lines of business or service types, with event categories and collaboration options as discussions for the next wave of Smart systems start getting built?
Ta-da! The zEnterprise is out!! I admit I hinted a while back, and yes a little bit last time, about continued evolution and here... it... is... the System of Systems, the third dimension of not just making processors with faster engines or specialized function, not just growing the number of processors, but now pulling other platform systems into the z complex.
After going to the teach the teachers session, I took my notes and summarized them for some internal calls with teams, so have been summarizing and netting out, boiling the ocean and relooking at materials. I won’t try to give an announcement here, but just touch on some highlights.
First, realize the vertical integration this reflects: the added dimension of creating the hypervisor of hypervisors (the Unified Resource Manager) on top of the other platforms, so that the zEnterprise system now wraps its arms around them -- and the amount of integration represented here. (I understand our friends at Gartner used the term 'brilliant' in describing the Unified Resource Manager!)
Next, look at the concept of being able to manage, as in Service Level Agreements, not just workload, but security, availability and virtualization targets. (..and the implied amount of monitoring and reporting behind the scenes…)
Then, look at the absolutely killer numbers BOTH in the base complex of the z196 for the kinds of technology improvements we are used to seeing in new z generations, and also in the incredible impact on space, energy, and operating costs compared to an infrastructure before (yesterday!) the zEnterprise complex.
Note that in the zEnterprise BladeCenter Extension (zBX), the POWER7 and Smart Analytics Optimizer blades come first, followed next year by DataPower and System x blades. Take a hard look at what improvements to certain star queries -- which may be 80-fold (or more) -- might do to the concept of how you build your analytics and process systems. And….. if you have looked at the System z Management Facility, look at the new CICS Deployment Assistant, which may reduce administrative time by up to 80%!
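To make that '80-fold' figure concrete, here is a back-of-envelope sketch. The 40-minute baseline is an invented illustration, not a published benchmark:

```python
# Hypothetical arithmetic for an 80x query accelerator.
# The baseline runtime is assumed for illustration only.

def accelerated_runtime(baseline_seconds: float, speedup: float) -> float:
    """Return the runtime after applying a given speedup factor."""
    return baseline_seconds / speedup

baseline = 40 * 60                         # a star query that takes 40 minutes today
after = accelerated_runtime(baseline, 80)  # with an assumed 80x improvement
print(f"{baseline // 60} min -> {after:.0f} s")  # 40 min -> 30 s
```

A query that used to gate an overnight batch window suddenly fits inside an interactive session, and that is what changes the design conversation.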
OK, I am giddy… I promised not to repeat the announcement here. It is just so packed with improvements it will take us all time to fully absorb. So, dive in, we'll talk later.
Looking through the current issue of Mainframe Executive (you are subscribed, right?), I saw a nice interview with some of the Academic Initiative students. I also noted that there will be university program representatives at SHARE in Boston this August to talk with mainframe shop managers (see z Events). The theme continues in IBM Systems Magazine for the Mainframe with the article: Educated for Success.
These items made me nostalgic, thinking of Dr. Seuss and "all the wonderful things they'll see!" We old fogies saw virtual storage, MVT, SVS, MVS, and up to z/OS; they may see operating systems so many levels of complexity and abstraction above what we have watched that it boggles the mind. We abstracted platforms with middleware running anywhere, and then raised the bar by abstracting run-times, with the evolutionary path from early CSP and VA/GEN to the current Enterprise Generation Language: EGL.
We watched virtualization from basic storage to VM, server consolidation, and federation, and they start by taking steps on the cloud!
On the note of systems evolving, I seem to be hearing about more enterprises looking hard at long-term systems that were built over decades to perform incredibly efficiently but, alas, in many cases (since they are rigid and tightly coupled), when it is time to introduce the change monster, the prospect of 'different' overwhelms them. Projected costs, time, and risks start to look pretty scary. Hey, just remember that the remodeling industry is bigger than new housing construction, and build that value case regardless of how large the 'maintenance' is to your application or infrastructure base.
And don't forget, there are many more options with componentization, messaging, event-based architectures, SOA, and web services -- not to mention modernization and transformation strategies and tooling that weren't there just a few years ago. Remember VSE to MVS migrations? How about Y2K? The longer you go without changing, the bigger the bump -- whatever the system.
Just remember: start small (skunk works and prototypes), draw some good pictures (architectural models), bring extra sandwiches (resources).... and maintain your perspective (humor).
Or maybe, wait for some magic that we have yet to see!!
Let's look at some more developing trends or patterns... IBM announces the Financial Markets Framework at the annual Financial Services Technology Expo in New York, and not only refers to microsecond latencies and millions of transactions, but also builds on existing industry frameworks, adding feeds from disparate sources (think InfoSphere Streams and stream computing from last year...), plus analytics and process extensions to address risk, regulatory, and compliance areas. Do you remember the 20-plus-times volume system that was announced about a year ago? Do you think that middleware is evolving still? (Do we need to mention what systems the financial industry runs?)
I stumbled on another source I should perhaps have known about called the Dancing Dinosaur blog, pretty interesting reading.
I found it after seeing one of the sessions at Innovate that had a demo to connect System z to smartphones, seeing the redbook, and curious to see if there were any mentions of it on the net. This ties in with a number of announcements to support phones like the recent Android support by collaboration software. Integration of the user to mainframe continues in all kinds of ways.
Well, I know this is a short one, I am off next week and then traveling. Oh, and take a look at the COBOL Cafe hub for the new Rational Developer for z Unit Test feature. It certainly adds some interesting possibilities for managing the test workload on z!
Just got back from Innovate 2010 and I encourage you to view the Keynotes; they will fire you up on innovation, the future, systems of systems and the future-- fer sure!
Take a look at the current IBM Systems Magazine to learn, among other things, how tape is far from dead -- 29.5 billion bits per square inch demonstrated (44x today's capacity)!
The mainframe mag, zJournal, is now at the mainframezone, and the current issue has an article on mashups and Web 2.0 with the mainframe. (have you looked at CICS support for PHP?)
...and IBM has demonstrated a graphene-transistor-based chip at 100 GHz -- a single layer of carbon atoms exceeding the cutoff frequencies of silicon chips with the same gate length. IBM is also getting involved in more auto systems with Daimler, and there is a nice retrospective on Disney and IBM (with videos!) here.
So, I ran across a couple of interesting articles recently that, as an architect working with large systems and large enterprises, made me stop and think.
First, in the Financial Times from June 8, a nice bit of thought on outsourcing, governance, and a reminder how key tech is and the importance of managing technology correctly!
Secondly, in the current issue of Strategic Finance (oops, sorry the specific article is for IMA members only it seems..) a good discussion on how to handle Idle Capacity Costs (pg 55). Without going into the detail, the point is execs at your enterprise are looking at these things.
And... what system helps manage resources to minimize excess capacity and maximize utilization?
... and oh, by the way, have you looked at moving or consolidating non-System z workloads onto z lately?
Or would you rather wait for your manager to ask why not?
systemzblogger 2700017BYR 2,713 Views
Last time we talked about acquisitions, and then IBM sets up to acquire Sterling Commerce and extends the idea of extended business networks for B2B (a term you don't seem to hear as much as 10 years ago) interactions. Does this affect System z? Gee, let me think, what runs most large enterprises? Who is concerned with process flows, transactions, CRM, SCM, PLM, ERP etc and has more than one partner? With enterprise transformations squeezing costs and increasing speeds, it just might be an interesting one to watch.
On the 'what, IBM is involved in that?' front: Texas A&M works with IBM to speed up drug discoveries to deal with tuberculosis, Big Blue is working with Guangdong Hospital on analytics of treatment efficiencies, Linux hits 10 years on System z, and System z Expo is only a few months out (October 4) and is now called System z Technical University.
(With a slogan of: z can do it!, I can't help but think of Adam Sandler movies with Rob Schneider saying you can do it! )
Has it really been 10 years for Linux on z? (OK, a couple of other nice links for Linux and System z: datasheet, white paper)
Linux has had a pretty large effect, helping consolidation efforts and moving the open movement along...
Another, smaller milestone happened in the last couple of days, as I saw an item in the Financial Times on Google moving only to either Linux or Mac systems.
Who would have thought all those years ago when IBM added Unix Systems Services, it would lead to where we are today?
A sidestep to a z/OS UNIX System Services implementation Redbook, where it notes:
"In 1991, the US Federal Information Processing Standards (FIPS) Document 151 stated that MVS must incorporate support for popular UNIX interfaces. So began the challenge of including UNIX functionality into the MVS operating system. The first implementation was known as OpenEdition (or OE, or OMVS), then it became OS/390 UNIX System Services, and then finally z/OS UNIX System Services, as we know it today." (OpenEdition first came out in MVS/ESA V4R3 in 1994)
Happy Birthday Linux!!
systemzblogger 2700017BYR 4,740 Views
Well, I had some folks wondering when the next System z is coming along, since it was February '08 that the z10 arrived, and we had those nice POWER7 announcements this fall. Naturally, I can't say even if I knew, but I did see a nice counterpoint on the System z skills discussion on gomainframe.com, with a document from Clabby Analytics called The Alleged Mainframe Skills Shortage. It is part of a counterpoint to a recent consultant release on mainframe skills. I will leave you to read it, but note that there are some nice tidbits in it, including IBM's $100M investment in easier management interfaces for the mainframe, some detailed demographics versus anecdotal information on mainframe support, and a few comments like 'silly' and 'daft' suggesting there is no more of a crisis here than for IT skills in general. Of course, add the thousands of new graduates and hundreds of schools flowing from the Academic Initiative, and one wonders what issues are really behind this urban myth.
With the recent bid by SAP to acquire Sybase, I noted in the Financial Times and other sources how IBM is setting aside $20B for acquisitions at a pace that will double the last decade in the next five years. Wow! I maintain a site internally that helps keep track of acquisitions at a high level, guess I will have to ramp it up. And yes, the wheels are spinning for candidates in my brain....
On an unrelated note: while we know the next major phase after services may be cloud, looking back to a push that is still very much alive, I have been noticing how Enterprise Modernization is alive and well, incorporates some aspects of e-business and SOA, and really underscores some potential culture-clash issues between the freewheeling internet boys and the super systems-management System z folks. If you happen to be modernizing a part of the company and IT that has not been touched by that upstart internet, just be extra sure you look at ALL the security implications, and review infrastructure, development, and governance! (Hey, there are some acquisitions that relate to these too... like ISS, AppScan (Watchfire), Ounce Labs, and of course Rational itself, which now has the Rational Developer for System z we have talked about.)
systemzblogger 2700017BYR 2,817 Views
So, what is going on?
Green Chip Stocks ran their view of the tightening of environmental standards, reflected in the IBM suppliers announcement on global sustainability, while IBM acquires Cast Iron Systems for cloud initiatives and the delivery of SaaS, and also now has a follow-on game to INNOV8 with the new CityOne game.
Wow.. you really don't have to work that hard to see the pattern of Big Blue just moving along on its long-stated intention of moving the bar forward on basic beliefs: making a difference for its customers and the world... while being a strong business entity. I mean, hey, let's be good citizens, let's help our suppliers do the things we have been doing on green initiatives, and let's continue to bolster the cloud offerings we are developing! (Yes, I am going to point out the obvious: what systems predominate in the cloud of the future, or on those huge outsourced system floors IBM runs? Hint: Enterprise........Systems!)
There is a new Poughkeepsie green manufacturing system, there are new Business Process components announced at Impact today in Las Vegas, and IBM has signed an agreement with Drawbase software to support the smart building initiatives. (see previous entries on the amount of energy used by buildings and the growth of cities....)
OK, so there are patterns other than development or design ones. Here is one: I just got back from a meeting (yes, I am an internal IBMer architect) where the Enterprise Systems architects from North America got together for some updates -- and to see one another face to face--- of course.
Besides learning about some of the cool stuff coming down the pipe (oh how I wish I could share!!), there was lots of comparing notes on what clients are doing, talking with those with decades of experience with banks, government, health, and manufacturing institutions, and plenty of 'what would you do' chats on behalf of clients.
And the pattern? The big boys are as dedicated to Enterprise Systems as ever, to growing and modernizing them, and to adding workload (as we have talked about with Chordiant, ACI, WAS, Development, Warehouse and others --see Solution Editions).
And, that while thinking of the conversations I had with other IBM'ers, looking at the news, and the stuff coming... well, it all reminded me of the IBM Values:
Well, I almost missed IBM being named the top Security Company by SC Magazine. Naturally, it got me thinking of how the company is pushing ahead on many fronts so the technology is there when clients need it.
Besides the announcements on making plastics from plants, Tivoli has been looking at security needs for SOA and clouds across different environments, and believe me, it is a whole lot more complicated than your typical e-business application from a few years ago!
Across platforms, the internet, and different infrastructures you run into SAML assertions, WS-Trust, Kerberos tickets, certificates, RACF PassTickets, LTPA tokens, keys, and oh yeah, passwords and IDs. Yikes, it gets complicated quickly... I used to think in terms of Tivoli Identity and Access Management (TIM and TAM), but we now need to think about Tivoli Federated Identity Management (TFIM), and a relatively new set of functionality called Tivoli Security Policy Manager (TSPM) to address the life-cycle of policies, credential translation, and identity propagation across an SOA environment. Now, I am certainly not going to go into any detail here; this is more of a heads-up on where to look if you are getting into the next phases of this stuff. (And look at the Redbook too!)
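As a purely illustrative sketch (not TFIM's actual API -- the token names and the mapping table here are hypothetical), credential translation at a trust boundary boils down to mapping whatever token a caller presents onto the credential the next tier expects:

```python
# Hypothetical credential-translation sketch, in the spirit of what
# federated identity products automate. Token names and the mapping
# table are invented examples, not a real product interface.

INBOUND_TO_INTERNAL = {
    "saml_assertion": "ltpa_token",       # web-tier single sign-on
    "kerberos_ticket": "racf_passticket",  # access to a z/OS backend
}

def translate(token_type: str) -> str:
    """Map an inbound token type to the credential the next tier expects."""
    try:
        return INBOUND_TO_INTERNAL[token_type]
    except KeyError:
        # No trust relationship configured for this token type
        raise ValueError(f"no trust mapping for {token_type!r}")

print(translate("saml_assertion"))  # ltpa_token
```

The real products add policy life-cycle, auditing, and identity propagation on top, but the core idea is this table plus the trust decisions behind each row.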
Oh, and take a look at the IBM Security Framework. With the zSecure suite, the ISS contributions with the Proventia suite, the acquisition of Guardium for DB security, and AppScan products for applications (as just a few examples), security is being layered and added by Tivoli security and others ALL through the IT infrastructure.
Hey, guess what? If you take things out of the black box and distribute them all over the place you need more than RACF!
Did you see the entry on IBM being the first to eliminate these PFOS and PFOA compounds from its chip manufacturing processes? The range of announcements from Big Blue seems to be growing, covering a broad sweep of technology, collaborative centers for IT, and even awards from the American Bar Association -- not for patent reform this time, but for legal assistance to non-profits. I was wondering if IBM is getting better at publicizing things in addition to doing new and different things. It made me think of today's post theme: "No one knows unless you tell them".
This started after I visited a client who went on and on about the wonders of virtualization and how being in a rack brought things close together. All true, but how about so close you avoid any network at all (like HiperSockets -- which has been around for years), or running at memory speeds of microseconds (yes, that is one millionth of a second) instead of the potentially milli- or even plain-seconds delay across a distributed infrastructure with systems laid across server farms?
System z's heritage of getting data close to where it is processed, handling huge amounts of data, and sustaining transaction rates of tens of thousands a second has lately been reflected in guidelines that say: get your data and applications back together if one is on and one is off of System z! Our job as large-systems folks includes telling others about this often silent world that sits behind the enterprises that run the world, too, I guess.
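The latency gap is easy to underestimate, so here is a rough numeric sketch (all figures are illustrative assumptions, not measurements of any particular configuration):

```python
# Back-of-envelope latency comparison: an in-box, memory-speed hop
# (HiperSockets-style) versus a request crossing a distributed server farm.
# All numbers are assumed for illustration only.

IN_BOX_US = 1          # ~1 microsecond for a memory-speed hop
NETWORK_HOP_MS = 1.0   # ~1 millisecond per network hop
HOPS = 5               # a request traversing several distributed tiers

distributed_us = NETWORK_HOP_MS * 1000 * HOPS
print(f"in-box: {IN_BOX_US} us, distributed: {distributed_us:.0f} us "
      f"(~{distributed_us / IN_BOX_US:.0f}x slower)")
```

Even with generous assumptions for the network, keeping data and application in the same box wins by three to four orders of magnitude per interaction, and transactional workloads make millions of those interactions.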
No one knows unless you tell them that COBOL (the COmmon Business-Oriented Language) has supported object-oriented programming for years, that high-availability systems like GDPS can run synchronously and asynchronously and can span huge distances in anticipation of disaster, or that innovation keeps moving on many fronts at IBM that are not always very visible. (Like the Innovate 2010 Jam going on right now internally at IBM, or a presentation I saw a bit ago that revealed System z clients grow twice as fast as non-System z ones!)
I had another case recently that falls under the mantle of 'you need to tell or remind folks about things', related to change and technology. For those of us who have been around a bit, it is like a part of our DNA to understand the key concept that IT, applied with custom implementations, can provide true competitive differentiation for clients. In a conversation with some folks, we were reviewing the syndrome where some enterprises either fall so far behind the technology curve, or get so frustrated with IT's ability to deliver, that they abandon huge investments in 'legacy' resources as represented by an application base built over decades.
Now don't get me wrong, sometimes getting new suites of function makes a lot of sense, but sometimes leveraging or modernizing existing assets and processes can be done with strategies that not only take a lot less resource, but are more timely and effective. Think everything from web services to SOA to screen-scraping strategies as part of modernization approaches that can extend differentiation and advantage, rather than throwing everything out and starting over again.
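As a toy sketch of the 'wrap rather than rewrite' idea: expose a fixed-format legacy screen as a modern service function instead of rewriting the application behind it. The screen layout and field positions here are invented for illustration; a real effort would lean on tooling and generated web services rather than hand-rolled scraping.

```python
# Hypothetical example: a thin modern facade over a (simulated)
# 3270-style fixed-format screen row. Layout is invented for illustration.

def parse_legacy_screen(screen: str) -> dict:
    """Scrape fixed-position fields from a simulated legacy screen row."""
    return {
        "account": screen[0:8].strip(),        # cols 1-8: account id
        "balance": float(screen[8:19].strip()), # cols 9-19: balance, right-justified
    }

def balance_service(screen: str) -> dict:
    """What a wrapping web service might return to a modern caller."""
    data = parse_legacy_screen(screen)
    return {"account": data["account"], "balance": data["balance"]}

row = f"{'ACCT0001':<8}{'1234.56':>11}"  # a 19-character fixed-format row
print(balance_service(row))
```

The point is that the decades-old transaction logic keeps running untouched; only this thin translation layer is new, which is why the cost and risk profile differs so much from a rewrite.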
Alert System z clients have legacy capabilities that have been around for decades but are also continually updated and refreshed with these approaches. One example I crossed paths with recently addressed some of the older technology bases like Natural, Adabas, RPG, and the Cool:Gen 4GL. Rational Migration Extensions not only has the capabilities, but also partnerships with specialists who have been in this space for years.
Before you throw anything important out with the bath water... see what can be renovated, updated, modernized. You might find with a little elbow grease you have a classic that works great in the 21st century and is surprisingly valuable!
systemzblogger 2700017BYR 2,975 Views
Consider this an 'extra'. If you have been watching the news lately, here is another great example of IBM depth in large systems. This all started when I did, way back in 1978, and there have been articles galore from the 90's up to now....
systemzblogger 2700017BYR 2,073 Views
OK... Daylight Saving Time kicked in yesterday, even though the vernal equinox is not until next weekend, and green things are shooting up out of the ground (think hundreds of daffodils in the case of my lawn). Light is the theme in my brain as I think of chips that communicate via 'light avalanches', or photon paths rather than copper ones. Are you kidding me? Wow.... Oh, and IBM Research -- with Stanford -- announces environmentally sustainable plastics (or a start). Things seem to be accelerating. Don't forget the game-changing p-series announcements with dramatic increases in parallelism, and now the x86 eX5 servers, which decouple memory from the processors to potentially 'reduce the number of servers needed by half while cutting storage costs 97% and licensing fees by 50%'. (!!)
OK... when I see something with a SAM in it, I think access method: VSAM, ISAM, SAM, even QSAM for the older fuddy-duddies in the crowd. But TSAM? Turns out it is Tivoli Service Automation Manager, which automates requesting, deployment, monitoring, and management of cloud computing services. First it was out for Linux on System z, and now we have it for cloud resources. Things expand in different directions. We have IaaS (Infrastructure as a Service), PaaS (Platform), and of course SaaS (Software). This is a big jump -- what will TSAM handle next....?
I am getting anxious, or at least eager, for System z to do its next revolution now that p and x are out. Will the platform that first started parallelism have some things in store? Will the platform that really got shared storage have some new memory schemes? What about the platform that started and still leads virtualization..... will it extend its arms further and in what direction? Virtualization software at the x86 level is great, but is still learning to walk compared to the mechanisms on System z that underlay consolidation projects and workload management techniques that maintain service levels while pegging utilization rates.
It is spring and a 'young man's fancy turns to thoughts of love', as Tennyson said, but at least for older farts like myself they also (hey, that is also, not instead!) turn to thoughts of how fast technology is turning.
systemzblogger 2700017BYR 1,769 Views
What can’t run on the Mainframe?
OK, we know about System z's heritage for running the big boys, and we have talked about how thousands of applications have been certified over the last few years for Linux on System z.
…. But what about all of that .NET workload out there?
Here is a hint: if you do start consolidation onto VM or Linux on z, don't overlook a key component of the project: monitoring. Take a hard look at Velocity Software, a partner started by ex-IBMers that has worked with IBM since 1988, works with the System z labs, and has been involved in VM Early Support Programs since then too.
Note: Another important virtualization management consideration dealing with multiple platforms is also reflected in the creation of the IBM Systems Software organization. Covering components from Systems Director, VMControl, and Active Energy Manager (to name but a few), the portfolio addresses management, security, energy, availability, virtualization and, of course, operating systems.
For fun: take a look at this video on YouTube: How IBM Software Works -- a nice summary of where IBM software is going.
If you saw this week's POWER7 announcements, you may have had a similar reaction to mine. First, of course, was: whoa... increasing threads nearly 10-fold? Or increasing the chip power, structure, etc., with the middleware all ready to take advantage?? When you look at the details, or if your friendly local IBMer gives you a presentation, look at the performance and cost charts and see the dramatic bend in the curves for POWER7. This is a game changer... really. It will force us all to take a hard look at design points and fit, and sweeping changes could come to certain datacenters this year. Secondly, if you have been around awhile, and remember the design charts for large systems processors over the years and the way System z has dealt with mixed workloads, you may have had a sense of deja vu. I have often said, since starting in large systems in the 70's, that I sometimes felt like I was cheating as I watched large-systems concepts move over to other systems. Well, this is a big one, and I feel as if I have lost some of my crib sheets!
On an unrelated note, I was looking at BPM BlueWorks, the new LotusLive facility for customers to be able to freely model and share their strategies, capabilities, and processes as well as leverage sample models and maps from their industries. As I understand it, you can then flow these models to tools like Business Modeler for more elaboration and the development flow. Powerful stuff indeed. I always thought that SCP stood for systems control program, but I guess it is now: strategy, capability, and process....
Finally, start studying (as I will be) the preview of z/OS V1.12. It appears the future does keep moving in z land, and if we pay attention our crib notes are still there: Predictive Failure Analysis, automatic partitioning, avoiding VSAM data fragmentation, workload-driven provisioning of capacity, extended address volumes coming into their own, and further enhancements to cryptographic processing algorithms. The preview is entitled: Heralding a New Generation of Smart Operating Systems. Boy, those enterprise systems guys -- just when you think you haven't heard 'heuristic' or 'ease of use' in a while, you get a statement like: 'IBM has taken the long-term outlook by simplifying a mainframe system from the inside out and from end to end.' There is a lot associated with this announcement and phase: Tivoli Service Management Center, CICS Explorer, Rational Developer for z, z/OSMF, all the health checks...
Well, better get out of here and start reading!
systemzblogger 2700017BYR 3,890 Views
As I went through my feed aggregators the last few weeks, another software acquisition (National Interest Security Company) showed itself, along with the Smarter Planet exhibit at Disney World. These juxtaposed themselves with articles and blogs about how technology continues to become 'commoditized', how hard it can be to explain the value of z technology (even though it quietly runs the world's major institutions), and a haircut where I actually had a barber ask me: 'Does IBM still make computers?' Ouch.
I have been tracking the software acquisitions (well over 80), along with acquisitions of other companies that do things like process mortgages, automate and move workload, and other work related to systems outsourcing, for about a decade. It has been a fascinating puzzle picture for those who want to watch it form, and the crystallization of the Smarter Planet strategy makes real sense if you are both a tech watcher and a futurist.
While some talk about the cloud as the latest marketing spin (see a recent YouTube video by a sailing aficionado...), others recognize it as the next tier in the 'distributed' model.
The pieces are coming together, and the case studies and examples are starting to pop up from hospitals, cities, power grids, etc. Imagine all of these endpoints of intelligence -- not just in devices, but in instruments, industry, and networks; from PCs and phones to temperature regulators and flow valves, assembly lines and portable diagnostic devices.... This is so big and exciting it hurts the brain a little bit, BUT... that is why it is so important for System z and Enterprise Systems folks to explain what has quietly been going on behind the scenes for the last 40-some years.
We have been building and refining ways to deal with mind freezing amounts of data, processing, and workload balancing. Virtualization, automation and availability strategies, and frameworks have been improving quietly in this world the general population and popular media rarely becomes aware of.
The pieces are coming together, and we get to tell the story, not only about what happened 'in the beginning', but about how 'that is why they were able to make cities livable, find ways to live sustainably, and save the planet'.
And then not say: 'The End'... but instead: 'The Beginning....'
systemzblogger 2700017BYR 2,419 Views
OK, so maybe it is because it is the start of the year and I try to refresh bookmarks, or because I was thinking about talking with a new contact at a System z partner and what sources to point him to... but I found myself doing the waving fingers, doodledy-doo music, and time transition to compare what finding things was like in the 70s versus now.
Announcements: then, we had blue letters for announcements that came in secure shipments, which we would read through at our desks, and then reread for the following week or so. Now, we go online, don't have desks, and may have announcements delivered via RSS feed aggregators or podcast series. Back then we would schedule auditoriums in every even close-to-large city and announce the yearly changes with fanfare, after doing the same with the branch, having stayed up all night learning about them -- with a signature-secured box delivered the night before!
Systems Engineers evolved into Architects and IT Specialists and Business Partner counterparts, and Orange Books became Redbooks. In the branch (the what?) I started at, we had ONE dot matrix printer and a green-screen dumb terminal in a room for all to share, and could not have imagined the social sites for large systems that now include destination z, My developerWorks, the mainframe blog on Typepad, and System z on Facebook! We would never have imagined the need for the Academic Initiative, the SOA Social Network, or that there might be IBM channels on YouTube...
As an architect I, and many of you, have to keep up on executive topics, and so look at magazines for the CIO, CFO, Mainframe Executive, in addition to IBM Research journals like the Systems Journal. I go to the Institute for Business Value, as well as Investors Business Daily, the Financial Times, Wall Street Journal, and the wonderful NY Times Reader.
When I make time, I may use iGoogle, My Yahoo, Feedly, or other aggregation methods to look at news from tech, industry, and partners who build solutions. I have a feed for pictures from Flickr, press releases searched for key words like 'acquisition', and a search widget or gadget for IBM when it pops up out on this interweb in any of 'those tubes'. The information overload and anxiety drive a person to feel he needs these tools, just as business intelligence and information analytics are a needed response at the enterprise level. Without these tools we just would not know what is going on. Things have changed for sure, but fortunately there are ways to not feel as if we are waiting for the newspapers to arrive by ship from the 'continent'....
What are your pet techniques for staying current? Let me know....
systemzblogger 2700017BYR 2,565 Views
So, looking for patterns: have you noticed the increase in Smarter Planet activity involving government entities, power, water, traffic, medical records, etc.? This stuff is building and becoming real, and is not just marketing public relations. Take a look at the press room or the sites related to Smarter Planet for this growing trend!
As part of this entry's miscellany, I noted some announcements related to Solution Editions (for Chordiant's Customer Experience Management solution -- think CRM plus...), a System z Linux Enterprise Server for consolidation strategies, and ran across an entry noting that the top 50 banks all run System z. (We've seen lots of 'top enterprises running z' stats; I had just not seen that particular one before.)
Tech folks are, in my experience, optimists, eager to apply innovation for the betterment of their enterprise and, yes, the world. As a new year starts, I know you join me in being ready to go, to dig in, help out, and make a difference. It is not just IBM's values to make a difference in the world.. I think all of you techies love doing so too... I look forward to joining you in doing just that in 2010!!
systemzblogger 2700017BYR 4,942 Views
So, you probably saw the announcement for the Linux enterprise server and maybe thought of the recent Solution Edition for Linux.
If this doesn't line up nicely with all of the virtualization and consolidation efforts going on, I sure don't know what does!
Being December, I find myself looking back and thinking about some of the high-level enhancements in enterprise systems in 2009. From z/OS management facility to CICS Explorer to the 45th Anniversary of the mainframe (CICS turning 40, or COBOL 50), and the Academic Initiative hitting new levels of participation, it has been an interesting year.
Whether from increased integration through enterprise modernization (especially CICS Web services), insights through deeper analytics, or efficiencies from green characteristics, it seems system z keeps pushing forward in meaningful ways on many fronts.
Being December, I find myself paging more than normal -- as many of you probably do -- but I wanted to stop by and especially wish you and yours the very best of holiday seasons, and I hope you join me in looking forward to the exciting developments awaiting us all in 2010.
‘Scientists at IBM Research - Almaden, in collaboration with colleagues from Lawrence Berkeley National Lab, have performed the first near real-time cortical simulation of the brain that exceeds the scale of a cat cortex and contains 1 billion spiking neurons and 10 trillion individual learning synapses.’
I asked one of my friends if that meant that when you talk to it... it ignores you. (Drumroll, please!)
Another indicator of how things are about to change in some pretty big ways is the set of recent developments from IBM in cloud computing. As I listen to these developments, I can't help but reflect on how many of the goals tie back nicely to enterprise systems kinds of goals. Think of being able to log onto an essentially dumb terminal (in the old days a WYSE terminal, today through a browser). Imagine the cost and management savings of not having to maintain thousands of endpoints beyond their just being endpoints! (Think POTS: plain old telephone service.)
When I hear the story of the Pike County school system (and their implementation of cloud computing for the desktop), there is a part of me that starts to jump up and down and scream: ‘It's coming! It's coming!’ We really are moving into the age of industrialization for information technology. (Our old friend Irving Wladawsky-Berger talked about it earlier this year when he discussed the industrialization of services, and there are other recent articles, such as this one by strategist Gene Wright, where he references Simon Wardley’s lecture on YouTube -- look for the cats in his slideshow!)
I heard a great analogy for the ‘older ones’ in the audience: think about when you had to use an operator for phone calls, then for long distance, then for overseas, and now you don’t even need a phone!
Oh, and what do we think will be running the clouds most efficiently in that sky? Enterprise systems??
systemzblogger 2700017BYR 2,483 Views
System z Personal Development Tool
Well, it is fourth quarter, and that means it's crazy-busy time for those in the IT world, the workplace world, or those just anticipating getting out of shopping for the holidays.
Just back from a productive visit to a bank, I noticed last week that IBM has made the previously internal-only zPDT tool available to Independent Software Vendor members of PartnerWorld. With this tool, you can run z/OS on Intel or Intel-compatible platforms and use it for development, education, or demos.
It comes in three sizes -- one, two, or three virtual machines. Having seen this in operation, I can tell you that some of the combinations can be quite complex, including middleware stacks, Linux environments, and network connections. If you go to PartnerWorld via this link, you'll find all the information you need as an ISV to purchase your licenses and get your hardware key.
It's a powerful tool and a great option for those in our distributed world looking for something to work with and to show others System z.
Did you miss last month's birthday? Yes, COBOL turned 50 last month. I could blather on about how there are 30 billion COBOL and CICS transactions a day, or how there are still well over 1 million COBOL programmers, or about how there may be as many as 5 trillion lines of COBOL code out there running the world's largest institutions. Instead, let's just say happy birthday to this common business-oriented language and, like your grandpa, remember to give it some respect!
As a field architect, I have the opportunity to run into all kinds of customers and situations. This means I get to read about all kinds of technology. Recently, I had the opportunity to take a closer look at one aspect of virtualization across our server platforms in IBM and found myself very encouraged by the direction we're going. You may recall that last year at this time IBM announced, and later followed through on, the acquisition of Transitive.
About six months later, IBM announced PowerVM, which provided the capability to consolidate sets of applications across Power systems (AIX, IBM i, and Linux). This included the rapid deployment of workloads in partitions, and even the live transfer of running workloads. There's a lot of detail in the features around resource sharing, the implementation of micro-partitioning (where you can have as many as 10 dynamic logical partitions per processor core), and so on, but the exciting thing is to see the direction of virtualization and how concepts started on System z percolate down across other platforms. I recently heard about the impact at a conference where a video running out of a partition was moved across physical machines live on the conference floor -- who wouldn't like to have seen that?
Remembering how the i-series was moved to power systems, learning that Transitive helped move Apple systems across chipsets, and seeing examples everywhere of increased management and utilization of processor resources (such as the recent z/OS enhancement for zAAP eligible workload on zIIP engines), it just gets one itching to see the next result of that virtualization acquisition in Transitive!
systemzblogger 2700017BYR Tags:  system z resilience engineering research ibm 1 Comment 4,618 Views
I was reading the October issue of Popular Mechanics when I came across a column by Glenn Harlan Reynolds entitled Ready for Anything. In the article he defines resilience engineering to include the idea of designing, creating, and maintaining systems so they have some give, are able to offer extra capacity, handle sudden loads, provide plenty of warning when things begin to break down, have backup systems in case they do, and so on. I immediately thought: hey, this sounds a whole lot like what System z has been doing for decades!
I wondered what IBM is doing for this newly coined approach (though it was mentioned that resilience engineering was born as an academic idea in response to the 2003 ...).
In case you did not know, IBM Research is the world’s largest industrial research organization, with about 3,000 scientists and engineers in eight labs spanning six countries. IBM has produced more research breakthroughs than any other company in the IT industry and has led in U.S. patents for the past 15 years. It’s always nice to see that our scientists at IBM Research -- which recently celebrated the 20-year anniversary of moving atoms -- and the engineers across IBM are continuing to stay ahead of the curve in designing our systems!
systemzblogger 2700017BYR 2,574 Views
I had the opportunity to hear a podcast this week of a Town Hall meeting where the story of Baldor Electric and their evolution from a mixed infrastructure to almost exclusively System z was compellingly shared. The journey, also shared on Mainframe Executive here, took 12 years and serious focus to achieve technical and fiscal performance they can be proud of.
The Baldor story reminded me of Project Big Green, where IBM is moving 3,900 servers to 30-some System z servers (which I think is now around 19 with System z10). We are two years into what may be a five-year cycle of better optimizing our own systems, part of a transformation which has already seen a reduction of CIOs from 128 to 1, and data centers from 155 to 5 (east and west coasts of North America, Australia, Asia, and Europe, for the curious out there).
It occurred to me that neither story would have been possible if full usage and exploitation of the platform had been happening all along -- including at dear old Big Blue! Growth, politics, and taking your eye off optimization in the big picture are easy things to do across an enterprise, but fortunately the promise of System z and the Mainframe Charter (innovation, community, and value) continues to roll out, with happenings this summer including the Solution Edition offerings, IFL and memory pricing actions, and Academic Initiative thresholds being passed as more students and universities become part of the program.
systemzblogger 2700017BYR 1,991 Views
After seeing the news about IBM scientists (working with Caltech) using DNA molecules as scaffolding with carbon nanotubes as part of new sub-22 nm lithography processes, I found myself thinking about progress on the storage side as well.
Remembering how storage has evolved from the 5 MB, 50-platter devices of 1956 to multiple-TB sizes in 50 years, or how solid-state storage can now completely eliminate seek and set-sector activity, it does all blow one's mind. (Here's a nice white paper on SSD performance.) Still, I guess we need these kinds of improvements when data doubles every year and a half, or when estimates put its growth at a 60% compound annual rate -- often depicted in seemingly relatable terms, such as how many Libraries of Congress we add per day.
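By the way, those two estimates are really the same claim in different clothes. A quick back-of-the-envelope check (my arithmetic, not an IBM figure):

```java
public class GrowthCheck {
    public static void main(String[] args) {
        // If data doubles every 1.5 years, the implied annual growth factor
        // is 2^(1/1.5) ~= 1.587, i.e. roughly 59% compound annual growth --
        // right in line with the 60% estimate.
        double annualFactor = Math.pow(2.0, 1.0 / 1.5);
        System.out.printf("annual growth: %.1f%%%n", (annualFactor - 1) * 100);

        // Conversely, 60% annual growth doubles the total every
        // log(2) / log(1.6) years.
        double yearsToDouble = Math.log(2.0) / Math.log(1.6);
        System.out.printf("years to double at 60%%: %.2f%n", yearsToDouble);
    }
}
```

So "doubling every 18 months" and "60% compound annual growth" differ by barely a percentage point.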
Fortunately, at least in System z land, we have decades of evolution in management systems which can help govern, control, and manage the scale of technical resources we use. Whether that storage is in the mainframe or on the floor, or it is memory, specialty processors, programs, tasks, or workloads... it's not version 1 approaches we have helping us. (An example of continued platform simplification is the recent announcement of the z/OS Management Facility -- announcement letter here.)
By the way, this month's z/OS statement of direction notes IBM intends to use DVDs to deliver systems, adding the latest Internet Key Exchange version 2 (IKEv2) support, pulling DCE (technologies come and go!), and, based upon customer feedback, not dropping support for VSAM IMBED, REPLICATE, and KEYRANGE attributes. For all the details, just see the link above.
systemzblogger 2700017BYR 2,691 Views
The new Solution Edition offerings make a pretty big statement. Designed as integrated hardware, software, and services packages that help customers deploy new enterprise workloads, they also make a great complement to zRewards and the Mainframe Charter. Building on the kind of packaging IBM has done for SAP, there are offerings for Data Warehouse, Application Development, Disaster Recovery, Security, ACI Payments, and SOA. So, without belaboring the point, some of the barriers to migration of key enterprise applications to z/OS are being lowered, making the platform even more attractive for consolidation, performance, management, and fiscal effectiveness. Take a look at the detail page here.
Now, like every third person I have tried to work with this summer, I'm going to get out of here, take some of my vacation days, and keep this blog post short. If you're reading this, and it is still August, please get your rear end out to the car, airport, or backyard and make sure you get your days in too!
systemzblogger 2700017BYR 2,812 Views
The Mainframe Executive site has so many good postings, I was tempted to simply refer to a recent (July 16) entry that reviews some of the overwhelming benefits of System z in an interview with IBM Fellow Gururaj Rao, Ph.D. While I still encourage you to read it, since there is some enticing discussion of futures, including extreme virtualization and expanding the platform to even more architectures, I wanted instead to share an interesting experience I had.
I happened to be looking at some of the System z benefit metrics on the same day I was looking at some of the new presentations on benefits of the evolving cloud environments. Some of these included studies on the kinds of resources dedicated to test environments, and the parallels you could draw between them were intriguing.
For instance, how about going from server utilization of less than 10% to up to 90%? Would you like to provision a test system in minutes rather than weeks? Or improve your release management by the same kinds of numbers? Would you like to reduce software costs, reduce configuration errors, and reduce labor costs by over 50%? How about increasing your utilization by over 75%, and drastically reducing defects through automation? Did you know that distributed server proliferation also includes a large chunk dedicated to the testing environment -- like 1/3 to 1/2 of all the servers in a typical IT shop?
These sound a whole lot like the System z platform characteristics, don’t they? So, for existing System z shops looking for workloads to fit on System z, it might be a good idea to ask the question: should we be making our capabilities more visible to upper management as they look at future sourcing decisions? (Oh yeah, and what are we doing with test across our whole enterprise?)
I had a conversation this morning with a longtime consultant who has worked with 'the mainframe' since at least the '60s and is heavily involved in both SHARE and CMG. We found ourselves shaking our heads at the durability of perceptions that persist, in spite of overwhelming and increasing evidence, that the System z platform is far and away the most cost-effective platform in the world.
One nice recent example of continuing evidence comes from The Clipper Group, and their newsletter, The Clipper Group Navigator (April 23, 2009), which talks about how well System z fits into upcoming Cloud strategies.
This was on the heels of a session I attended that talked again about consolidation efforts. The results showed energy savings of 80%, space savings of 85%, software savings of 35%, and labor savings of 54% -- while reminding us that average server utilization leaves 85% of capacity unused.
Next, I took a look at the recent announcement letter, which previewed z/VM V6.1 (letter 209-207). Besides a raft of improvements related to storage, networking, and Linux enablement, my eyes perked up (I was reading, not listening) when I saw this:
Running more Linux server images on a single System z server: Considerably more images than are currently supported by the LPAR mode of operation (up to 60 on z10 EC and z10 BC) may be supported with z/VM guest support. These Linux on System z server images can be deployed on standard processors (CPs) or IFL processors. Running multiple Linux images on an IFL-configured z/VM system may not increase the IBM software charges of your existing System z environment. Clients running z/OS, z/VM, TPF, z/TPF, z/VSE, or Linux on System z can add z/VM V6.1 on IFL processors to their environments without increasing IBM software costs on the standard processors (CPs).
Then, at the end, there is a statement of general direction that talks about a z/VM Single System Image, whose intent is to allow all z/VM member systems to be managed as one system, across which workloads can be deployed. There is also something called z/VM Live Guest Relocation, aimed at moving a running Linux virtual machine from one single system image member to another.
Wow, way to move the hypervisor along!
It is easy to think of System z as just the hardware, or to focus on the z/OS operating system and forget about the 40-year contribution that VM has made in virtualization, especially over the last few years in consolidating Linux systems.
With each additional step of on demand capabilities, the numbers for total cost of operations improve and the picture of how all these new 'clouds' can actually start to form becomes clearer.
systemzblogger 2700017BYR 2,815 Views
Have you been reading, google-ing, listening to podcasts on ‘The Cloud’? (Not The Blob, that was a 1958 movie with Steve McQueen, Aneta Corsaut and Jane Martin. )
I’ve decided not to rush to understanding this still-forming next wave, and am starting to see a set of cloudy parameters for watching the waves we will all be riding for the next decade or two. Besides the obvious cost savings public and private clouds promise, here are some patterns I see evolving (what do you see?):
Immenseness & Immediacy: This cloud is so big, our brains may have trouble understanding the associated scale and orders of magnitude. Mainframes were a big deal with thousands of green-screen terminals, then distributed computing brought applications to millions of new users, and the Internet now connects billions of people, sites, and programs. But you haven’t seen anything yet.
New intelligent units are getting connected into networks that generate connection points or interactions that go way beyond a billion, past a trillion (10 to the 12th), to... well, who knows -- really big numbers like a nonillion (10 to the 30th).
There are devices everywhere that are joining up. We add new intelligent devices like phones, cars, refrigerators, game stations, stoplights, valves, pacemakers, cement mixers, cameras, bottles, and dog collars. Sensors, RFID tags, intelligent endpoints, input devices, and computing nodes all join the mix. You don’t churn your own butter, you don’t have to walk to the corner to place a phone call, and, as anyone with a smartphone can tell you, you don’t need a computer to work with applications.
Incredibleness & Industrialization: We are only beginning to see the industrialization (standardization, automation, virtualization) of information technology. (Start a soundtrack in your head; say, a Carl Stalling score in a Warner Bros. cartoon where they are stamping something out in a factory!) Just as each generation of tool-and-die makers creates templates for more amazing creations, the adolescent IT world is growing up, reaching past the apprentice stage, and creating its own masterworks.
Besides the visible advances of putting computers in nearly everyone's hands, or the important connections between individuals across social software, major advances in technology, infrastructure, and services science have been building in places like IBM Research and Development, enabling startling and incredible announcements such as financial trading systems with 20+-fold improvements, inline analytics (System S) with microsecond response times for the study of space weather, and new, complex neonatal monitoring.
Immersion & Intensity: We will all be interacting with more of our senses (think Second Life and gaming), different devices, and on new scales. When you smash things together in new ways, you can end up with wonderfully creative results or horrible-tasting desserts. Fortunately, enterprise systems have been playing with the pieces (many of which we invented) for a while now. (Think AP, MP, Parallel Sysplex, VM, GDPS, WLM...)
systemzblogger 2700017BYR 1,700 Views
I've been on vacation, but I came across this item, and it was initially confusing to me, so I thought I would add a quick post to maybe clear up any confusion you have on the subject. Thanks.
In case you were wondering where the IBM technical journals are (Systems Journal or Journal of Research and Development):
If you are like me, have had the journals show up via a subscription, and have been wondering where the next one is, note that starting in 2009 they are online only. There are options for institutions or libraries to purchase subscriptions, or individual articles can be purchased.
The journals have also been combined into one journal starting this year. Oh, and the new combined site for the IBM technical journals is here.
systemzblogger 2700017BYR 1,554 Views
Well, it has been almost two months since IBM announced the InfoSphere Warehouse for System z.
The announcement is particularly well-suited to customers who understand and appreciate the qualities of the System z platform. (See a 2006 Willie Favero post on why warehousing and z make sense together: here.)
The offering enables operational and warehouse data to be on the same platform (think co-location), facilitates the possibility of near real-time analytics linked to key applications (think in-line), and delivers multidimensional, no-copy OLAP capabilities!
To build the solution, you take your existing DB2 on z/OS, add the new InfoSphere Warehouse on Linux on System z, and then, for presentation, add Cognos BI. (Note that for the presentation layer, the solution is designed to easily go to DataQuant or Excel as well.)
There are built-in tools for physical modeling via Design Studio, data movement and transformations via the SQL warehousing tool, and ‘no-copy’ OLAP analytics via built-in Cubing Services modeling and a ROLAP engine to build those materialized query tables we need for fast reports about sales of products by line, by region... well, you know the drill.
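For the curious, a materialized query table of the kind that backs those fast reports looks roughly like the DB2 DDL below. (A sketch only -- the table and column names here are invented for illustration, not part of the announced offering.)

```sql
-- Hypothetical summary MQT: sales rolled up by product line and region.
CREATE TABLE SALES_SUMMARY AS (
    SELECT PRODUCT_LINE,
           REGION,
           SUM(AMOUNT) AS TOTAL_AMOUNT,
           COUNT(*)    AS TXN_COUNT
    FROM   SALES
    GROUP BY PRODUCT_LINE, REGION
)
DATA INITIALLY DEFERRED     -- populated later, not at CREATE time
REFRESH DEFERRED            -- refreshed on request, not per insert
MAINTAINED BY SYSTEM
ENABLE QUERY OPTIMIZATION;  -- lets DB2 rewrite matching queries to use it

-- Populate (and later re-populate) the summary:
REFRESH TABLE SALES_SUMMARY;
```

With ENABLE QUERY OPTIMIZATION in place, reports that aggregate SALES the same way can be routed to the summary automatically -- that is the ‘no-copy’ trick in a nutshell.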
Now, it is true that this is aimed at getting a good start on warehousing, and you can still augment and connect to other parts of the information management portfolio for pulling together your information for business intelligence.
Good examples of this include the already-mentioned Cognos 8 BI, InfoSphere Data Architect for complete logical and physical modeling, the Industry Data Models we have been working on for a few decades now, the Information Server for federating a panoply of data sources, and even Master Data Management connections via the MDM Server.
You know..... all those tools to ensure your data is clean, accurate, and trustworthy –well, you get the idea. (I won’t say kind, courteous, cheerful and so on like the Boy Scout oath!!)
So, take another look at the volume, kinds, and speed of data your enterprise needs. Ask yourself what your perception of the risk and cost of setting up warehouses was before this announcement. Consider the current Gartner ratings of our BI portfolio and the value of System z, and start to think about what you could do if you put the whole thing on the platform you already put your key business applications on.
See if you can be more cheerful about getting information out of your enterprise data!
systemzblogger 2700017BYR 2,330 Views
Last week found me reviewing a series of case studies that showed the risk in migrating platforms without a careful analysis of design points and the related total-value cases in different infrastructures. As you might guess, these had to do with System z, and these unfortunate institutions moved off z hoping to garner significant benefits.
As you might also guess, that did not happen -- to the tune of ending up with new environments that cost multiple times as much as what they moved from. Just as I was wondering how, after 45 years, the message of managed virtualization, lower manpower costs, and qualities of service still seems to be underestimated for System z, I heard about a new case.
It seems there is a state institution out West that took several years and tens of millions of dollars to ignore their well-running CICS application base and create a new distributed set of applications that resulted not in the sub-second response times they hoped for, but in coffee-break-long response times.
Many in System z land have heard the metaphor for System z where they show a draft horse pulling a large load and then another picture with scores of chickens hooked to the same load. In this case, being out West, I envisioned buffalo and prairie dogs hooked up to a covered wagon instead.
Care to take a guess which one might move the wagon load best?
You may remember, we have mentioned Hoplon Infotainment and their massively multiplayer game, Taikodom, which resides on System z and uses Linux and the gaming chip used in the Sony PlayStation (sometimes called the ‘gameframe’), a couple of times in this blog. (I think it was a year ago in February, and again this past October.) They have now announced plans to go global and could be hosting as many as half a million users by year-end, with plans including graphic novels, a possible TV show, and other tie-in activities.
The announcement includes a great YouTube video where the founder and CEO, Tarquinio Teles, talks through the selection of System z and their gaming business model. He discusses the characteristics we are all so familiar with in System z, like availability, security, reliability, creating virtual resources quickly, and running Java workload. However, perhaps the best reason for picking a z-based platform is reflected in his final comment, where he sums up by saying:
“…fun is something people take very seriously, and you don’t want to mess with people when they are having fun.”
Or - In the Club: More Detail
(I wrote a blog entry a while ago, and it was suggested I add my more detailed version... so, here you go!)
The big boys on System z platforms have decades of experience tuning and using techniques to optimize mixed workloads and databases. Workload Manager and static plans for DB2 are examples of component functions which are widely used on System z but not always leveraged well during integration with the distributed world. DB2 static plans are used to improve database workload performance, management, and security. Workload Manager assures high utilization, high service levels, and high value on the System z platform. How can distributed applications leverage them for their benefit? Let’s take a look at both areas.
Workload Manager looks at transactions, jobs, and even database work, then lets them run and use resources based upon priorities, profiles, and things like intended service levels and velocity through the system. As e-business applications came along, WebSphere on z -- the new transaction manager on the block -- joined CICS, IMS, and JES2 and was integrated as a full member of ‘the club’. So WebSphere on z workload, and its associated DB2 work, could then be managed, secured, and lined up to get the kinds of resources it needed. (See Say What? for a great overview of ‘the good old days’, packages, plans, and such.)
Distributed Workload Needs a Ticket:
Unfortunately, as things evolved for the e-business applications running off of System z (and here we mean both Java and .NET workload), the kind of integration necessary to achieve the same kinds of benefits sometimes needs a little help. It turns out that when requests are sent to DB2, they tend to show up in the hopper as undifferentiated pieces of work -- unless the Java programmer knows to specify an indicator (as SAP does by specifying differentiating parameters such as the correlation ID). Usually, though, there is no ‘handle’ for DB2 to differentiate the incoming workload (or for subsystems to handle workload management, monitoring, reporting...). So, by default, everything settles into a low priority and a low level of service. (For DB2, the default service class is 4 -- which is low!)
If you don’t have that ‘handle’, System z can’t prioritize the workload so it gets its fair share competing with all the other workloads running on System z. Without the ‘handle’, you guarantee your incoming distributed work is at the bottom of the bucket!
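One concrete way to hand DB2 that ‘handle’ from Java is the standard JDBC client-info properties. (The IBM Data Server driver also has its own setters, but the portable property names below are the place to start; the values shown are made-up examples, and your WLM classification rules would need to key on these fields.)

```java
import java.util.Properties;

public class ClientInfoDemo {

    // Build the client information a distributed Java application should
    // attach to its DB2 connection. Work tagged this way can be classified
    // by WLM on z/OS instead of landing in the default (lowest) service
    // class as anonymous, undifferentiated requests do.
    static Properties clientInfo(String user, String appName, String host) {
        Properties info = new Properties();
        info.setProperty("ClientUser", user);          // end-user identity
        info.setProperty("ApplicationName", appName);  // the workload 'handle'
        info.setProperty("ClientHostname", host);      // originating node
        return info;
    }

    public static void main(String[] args) {
        Properties info = clientInfo("jdoe", "OrderEntry", "was-node-07");
        // On a live java.sql.Connection you would then call:
        //     conn.setClientInfo(info);
        info.forEach((k, v) -> System.out.println(k + "=" + v));
    }
}
```

A few lines of setup like this, done once per connection (or per transaction boundary), is the difference between being a member of ‘the club’ and sitting at the bottom of the bucket.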
Another large issue affecting overall performance for distributed Java workload using DB2 on the mainframe (and, by the way, .NET applications use this scenario with DB2 more than with any other IBM database) is taking advantage of the first technique mentioned above: the use of static plans.
In the distributed environment, where the paradigm is interpretive rather than compiled programs, it is tough to even think of techniques using a precompiled approach as an alternative. However, there are ways for distributed workloads to leverage the concept of a ‘static plan’ for DB2 too.
Welcome to the Party: PureQuery
PureQuery now enables distributed applications to bind static packages set up ahead of time.
A package, also called an access plan, can be thought of as a precompiled subroutine with the information to line up execution details ahead of time (such as access paths and indexes). Once created, the packages can be stored in the DB2 catalog and are bound to a program. Note: using packages can also add a layer of abstraction and security to DB2 access.
See some good references from developerWorks and DB2 Magazine:
These static packages can be created and waiting on System z in the DB2 catalog for the Java applications to ‘call’, or connect to and use, through a binding process done by the WebSphere distributed admin or programmer. (By the way, this is done through techniques that require far less heavy lifting than an earlier technology called SQLJ employed. See some good tutorials on pureQuery, again in DB2 Magazine and developerWorks.)
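If the bind-once, execute-many idea is new to you, here is a toy caricature of it in plain Java -- a made-up registry to illustrate the concept, emphatically not pureQuery's actual API:

```java
import java.util.HashMap;
import java.util.Map;

public class StaticPlanDemo {

    // A 'package': the expensive planning work (think access paths and
    // index choices) is done once, at bind time, and the result is reused
    // on every execution instead of being re-derived per request.
    record Plan(String sql, String accessPath) {}

    private final Map<String, Plan> catalog = new HashMap<>();

    // Bind time: pick the access details once and store them, catalog-style.
    // (The contains("WHERE") check is a stand-in for a real optimizer.)
    void bind(String packageName, String sql) {
        String path = sql.contains("WHERE") ? "INDEX-SCAN" : "TABLE-SCAN";
        catalog.put(packageName, new Plan(sql, path));
    }

    // Run time: execution is just a lookup of the prebound plan by name --
    // no per-request statement preparation.
    Plan execute(String packageName) {
        Plan plan = catalog.get(packageName);
        if (plan == null) throw new IllegalStateException("package not bound: " + packageName);
        return plan;
    }

    public static void main(String[] args) {
        StaticPlanDemo db = new StaticPlanDemo();
        db.bind("ORDERS_BY_ID", "SELECT * FROM ORDERS WHERE ID = ?");
        System.out.println(db.execute("ORDERS_BY_ID").accessPath()); // prints INDEX-SCAN
    }
}
```

Swap the toy map for the DB2 catalog, the bind method for the pureQuery bind step, and the lookup for a package reference on the connection, and you have the shape of what the distributed admin is setting up.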
Gift Bags for All:
If both approaches are used, distributed Java workloads can then benefit from the serious ($$) performance, security, and management effects of engaging DB2 and Workload Manager, and from using those static packages waiting at the ‘Enterprise Data Hub’ (System z).
But wait a minute.. there’s more…
Okay, that sounds great, but let’s stop and think about this. Put another way, what if you could now have the assurance that your database access on the mainframe from distributed applications would have much better, much more predictable performance -- on the level of the service level agreements they use up there?
What if the mainframe land of sometimes ‘unknown-ness’ for distributed folks was something that could now be counted on to give you fast data from lots of data sources -- federated, services, real time -- and not just across your infrastructure of distributed and centralized systems, but across partner networks?
Since many application suites are on distributed platforms, how might your plans be affected for BPM, real time intelligence, MDM….? (See the article on MDM and PureQuery on Developerworks).
These are surely some things to go back and look at… don't you think?
systemzblogger 2700017BYR 1,800 Views
Enterprise Architecture: Galactic Everything.
What can’t you include in these two words: Enterprise and Architecture? Over the past few years there has been a huge emphasis on innovation, and with it, a focus on new business models. Eventually, this leads to line of business managers talking to the technical arm of the company for ways to implement the new processes, functions, and capabilities. Whether you call yourself an analyst, project manager, architect, or something else, if you sit between the vision and the reality, then you are involved with modeling and modeling frameworks.
Whose View of the World?
Maybe you use the Zachman Framework, or TOGAF, or Catalysis, or MDD, or Michael Porter, or the Balanced Scorecard. Maybe you go back to the 1960s, with Peter Checkland and Soft Systems Methodology, or are a fan of the Rational Unified Process. There are frameworks that focus on views and perspectives from a technical standpoint, or on organizational development, social change, construction, and problem solving. They all fight for a balance between simplicity and the tendency to elaborate one more level. Do you use a telescope or a microscope? Do you crank it up or down one level? Do you bother to include society, culture, and economics?
Can It Fit in Your Head?
Whatever framework your company or customer might be using, keep in mind some simple rules of thumb. First, the point of building models is to communicate. This means you need to consider who is looking at the model, what you leave in or take out (elide), and how well your audience can in turn communicate it to someone else. Second, remember the studies that show our brains can only handle a handful of items at a time (The Magical Number Seven). Third, do not try to get it all on one piece of paper. Think of an architect building your house, and the fact that he has a set of plans with each sheet depicting a particular viewpoint for wiring, plumbing, landscaping, etc. For each picture, diagram, artifact, work product, or exhibit, justify its creation by its intended use for this particular project.
All of this means you do not ask your client or organization to deliver your particular framework or model to you on your first day (as, I have to confess, I tended to do in my early years). Instead, try to tease out the things they use and refer to. Ask what pictures, diagrams, charts, or reference material they find essential when talking about the problem being addressed. Don’t try to vacuum up all of their information with a raid on their file cabinet and a quick peppering of questions where you try to suck their brains out. Instead, take a lesson from other agents of change in organizational development, consulting, and even psychotherapy, and take time to build collaborative models together. Yep, that’s right: roll up your sleeves, grab a marker, and take more time than you thought you needed to find out ‘what they mean by that’, ‘what happens next’, and ‘who cares about this?’. Keep a simple framework in your head and start with things like financial, logical, and physical aspects. Elaborate from conceptual to more specific and concrete as you need. Don’t forget to focus on context and end results as barometers. Oh, and test models with subject matter experts, constituents, end users, and implementers along the way.
How does this relate to System z?
Well, I was cleaning through some files and ran across examples of both systems designed with very narrow scopes and those, like enterprise class System z solutions, designed with larger perspectives; the latter lasted better over time, scaled better, and ended up costing less in the long run. Naturally, I started to think about some things that characterized the design of those solutions. Today, when divisions and applications and infrastructures are being called on to be more efficient and integrate together more effectively, we are all being called on to evolve a ‘brown field’ solution architecture to its next incarnation. If you are involved in leading that change, please remember that most projects still ‘fail’ against their original objectives, most requirements are gathered incorrectly, and it’s incredibly easy to get lost in detail or jargon and fail to communicate across the business and IT chasm effectively.
Where to start…
Some good thought frameworks I have personally found useful, that you may not have run into, include Ellen Gottesdiener (Requirements by Collaboration), David Sibbet (Graphic Facilitation and Process Methodology of The Grove), Peter Checkland (Soft Systems Methodology), and of course for business models there is always Michael Porter, Kaplan and Norton of Balanced Scorecard fame, and Eric Helfert for insights into financial frameworks. Look for approaches that organize large, complex systems in other fields -- you might be surprised what can be leveraged.
You may remember five years ago, when CICS hit the 30 billion transactions a day level. Now, RFID tags have hit 30 billion worldwide, and there will soon be 1 trillion intelligent device endpoints; 50 million of these being Wii devices! Part of the Smarter Planet initiative includes expanding the concept of a truly dynamic infrastructure, which means merging both digital and physical infrastructures in an enterprise. A couple of weeks ago, IBM continued to add capabilities in the space by announcing new governance models, industry solutions, specialty partner programs, and new hardware and software products.
The Tivoli portfolio enhanced its strategy of visibility, control and automation with Tivoli Service Automation Manager (a base to manage and automate cloud computing environments), Tivoli Key Lifecycle Manager (centralizing and managing encryption keys across their lifecycle) and Tivoli Monitoring for Energy Management (including both IT and overall Facilities energy devices).
On the hardware front, there are new options for dealing with the 15 petabytes of data the world generates every day now (8 times what is in all US libraries!), both with the ProtecTIER® Deduplication Appliance (a result of the Diligent acquisition last year), which can reduce duplication for as much as a 25:1 savings, and the addition of full disk encryption on the DS8000 to our already proven tape encryption offerings.
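To see where a ratio like 25:1 comes from, here is a toy sketch of chunk-level deduplication: store each unique chunk once and compare logical size to stored size. (This is purely illustrative; ProtecTIER's actual HyperFactor technology is far more sophisticated, and the function name and chunk size here are my own.)

```python
import hashlib

def dedup_ratio(data: bytes, chunk_size: int = 4096) -> float:
    """Split data into fixed-size chunks, keep each unique chunk once,
    and return the deduplication ratio (logical size / stored size)."""
    chunks = [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]
    unique = {hashlib.sha256(c).hexdigest() for c in chunks}
    return len(chunks) / len(unique)

# 25 identical 4 KB blocks (think repeated full backups) store as one
# chunk plus references: a 25:1 ratio.
backup = bytes(4096) * 25
print(dedup_ratio(backup))  # -> 25.0
```

Real backup streams land somewhere between this best case and 1:1, depending on how much data repeats between backup cycles.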
Also included were new IBM Service Management Industry Solutions from Tivoli and a new Dynamic Infrastructure Specialty Program for which the first wave of Business Partners is already gaining certification, including Sirius, Mainline, Vicom, MicroStrategies, Agilysys and Computer Integrated Engineering System. Check out the new Dynamic Infrastructure Journal too.
With so many institutions starting to step back and look at technology from a truly enterprise level, maybe it is time for us to rename IT to ET? Enterprise Architecture, Enterprise Messaging, and Enterprise Technology -- whatya think?
As mainframe guys, we know that one of the key strengths of the System z platform with the z/OS operating system is that it is designed to run to 100% utilization. Workload Manager assigns jobs, tasks, and transactions to resources based upon priorities. Now, that means you need a discrete name for each kind of workload. Without a name, there is no ‘handle’ for Workload Manager to associate things like service levels, priorities, and velocities, so that it can figure out whether that piece of workload deserves a given amount of resource and computing capability. There is also a long-established technique on the mainframe that involves compiling programs so everything gets resolved, including the database access component -- where SQL statements get converted into access paths using the right indexes, getting to the right databases, etc. This process gets done ahead of time to make actual runtime execution more effective. Okay, that is not too exciting, and most of you are probably well aware of what it means for a program to get compiled ahead of execution time rather than being interpreted dynamically at runtime.
Now let’s turn to the distributed world, where one of the paradigms is interpretive rather than precompiled and where (surprise, surprise) many of the technicians supporting the environment did not spend the last few decades of their lives working with the mainframe. It turns out that if you don’t know how to specifically classify your workload flowing to DB2, all the distributed workload (whether it is Java or .NET) gets put into a bucket which automatically defaults to a low priority and low allocation of resources and service. If enough care is not paid to differentiate the incoming workload and provide it with the ‘handle’ we talked about above, all of the subsystems of the mainframe (including your DB2 database, workload manager, security, monitoring, and reporting components) can’t give your application the attention it deserves. Also, while there have been some techniques to accomplish it, binding static packages for distributed application programs is something that has not been done very much, due both to awareness and the perceived heavy lifting associated with those techniques (see SQLJ).
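The default-bucket behavior can be sketched as a toy classifier: requests that arrive with a recognizable name map to a service class, and anything unnamed falls into a catch-all, low-importance bucket. (The rule names and service class names here are hypothetical; real WLM classification rules are far richer than a dictionary lookup.)

```python
from typing import Optional

# Hypothetical classification rules -- not real WLM syntax.
CLASSIFICATION_RULES = {
    "PAYROLL_BATCH": "BATCHHI",   # named batch work, high importance
    "WEB_ORDERS":    "ONLTRAN",   # named online transactions
}
DEFAULT_CLASS = "DISCRETIONARY"   # the low-priority catch-all bucket

def classify(workload_name: Optional[str]) -> str:
    """Map a workload's 'handle' to a service class; no handle, no service."""
    if workload_name in CLASSIFICATION_RULES:
        return CLASSIFICATION_RULES[workload_name]
    return DEFAULT_CLASS

print(classify("WEB_ORDERS"))  # -> ONLTRAN
print(classify(None))          # -> DISCRETIONARY: undifferentiated JDBC work
```

The point of techniques like pureQuery package names is precisely to give that `classify` step something other than `None` to work with.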
While there have been ways to do both of these techniques, distributed and glasshouse folks do not always talk as much as they should, and the techniques were not always particularly easy or approachable, or were just considered ‘heavy lifting’. A good example of this was something called SQLJ which, while it enables the use of static SQL, has had very low adoption rates. Now, with the advent of pureQuery (which has been out about a year), distributed workload can take advantage of the concept of static packages (that precompiled effect) and, as a part of the larger Data Studio suite, pureQuery delivers techniques to distributed technicians to more easily classify their workload as it shows up to DB2 on the mainframe.
If you do utilize these two techniques, not only is this distributed workload finally eligible for Workload Manager to give it whatever level of service it needs, but that workload can benefit from the efficiencies of static packages on the mainframe (since there is now an easier way to bind packages to deployed programs).
Okay, so now we have a ticket to the party and can use a lot of the same mechanisms that COBOL programs or batch jobs have historically used to get their fair share of resources. One of the key things this means is that distributed workload can integrate better with the mainframe. It can more effectively use the data on the enterprise data hub. The response times available to these distributed applications from the mainframe can be far more predictable, with better performance. There is even a security benefit, since using these static packages means that users are authorized at the package level and there is no potential exposure from direct database access.
If you are an architect or an IT manager, this also has large implications for designing applications across your whole infrastructure, including applications that work with other enterprises and partners. Do you have applications that need to go to other locations? Or that would like to become real time, federating data from multiple sources, integrated into business process management flows, and relying on applications and information across platforms and networks? Are you starting to look at master data management, greater infrastructure integration with endpoint devices, or folding in new groups of users that just happen to reside on different networks, platforms, or infrastructures? Having a way for your workload to integrate better with enterprise resources, level the playing field for use of resources, and achieve better performance and predictability can be a huge step in an end-to-end systems design.
Oh, and by the way, it should be no surprise that there are also new opportunities for zIIP engine usage.
systemzblogger 2700017BYR 1,699 Views
You may remember I mentioned that there were a number of articles in progress on the System z10 server. In the interim, the IBM Systems Journal and the Journal of Research and Development have merged, and those z10 articles are now available in issue #53 at this site. I was going to dedicate this blog to highlighting some of the articles in this issue, but I wanted to take a moment and share with you some of the progress that has happened in the 90 days since IBM’s Sam Palmisano gave his speech at the Council on Foreign Relations back in November on a Smarter Planet, the largest enterprise system we humans deal with.
Since then, an incredible amount of information has been made available, both on the general IBM site and through some pretty amazing visibility in media and Internet channels. On CNN, Sam had his first TV interview with Fareed Zakaria, and on CNBC, we saw him with our new President Obama. If you look at YouTube, there are many new videos related to a Smarter Planet and what it means across industry, business, technology, and society. Smarter utilities, telecommunications, energy, money, retail, infrastructure, and more are being highlighted at the IBM and other sites including:
· A new blog called: Building a Smarter Planet
· YouTube: see the IBM socialmedia Channel
As I walked through these stories of Smart power grids, traffic management, new approaches to telecommunications, financial systems, food distribution and other solutions, I started to realize that this may be not only the largest initiative from IBM in decades, but a real tipping point for technology being applied to the world in a meaningful way. Whether you are an IBM employee, Business Partner, customer, or a citizen of the world, this initiative will affect you, and it is worth starting to understand what it involves and where it is going. I highly recommend taking the time-- I think you will be inspired.
Did anyone see the Game-Frame computer in the latest Popular Mechanics? It is a ‘Game-Frame’ PC from HP that uses VooDooDNA water cooling. Mmm… I’ll bet you large systems types thought that the Game-Frame was the Hoplon System z with integrated cell processors for massively multi-player use.
Well, it is nice to see water cooling in a PC, since it has been pretty well established, for 4+ decades now, that water cools around 3,500 times more effectively than air. Oh, by the way, if you have been keeping track of IBM green initiatives like Cool Blue, Big Green, and others, you may have noticed a whole range of cooling techniques that take it to the next stage, including technologies like the Rear Door Heat Exchanger, cooling arteries, Hydro Clustering, and Cold Batteries (i.e. ice), which use advanced thermal exchange techniques.
With datacenters running out of power, costs escalating daily, and barely over 2 in 10 enterprises having done energy audits, it is time for large installations which have System z to take a look at an area that may have not been investigated in depth for quite a while.
(picture from flickr ibmphoto24 stream)
systemzblogger 2700017BYR 1,936 Views
Sometimes a blog is long and sometimes it is short. As we are all busy starting a new year, with the pressures of economic realities and developments in green technology, I thought I would just share this astounding thought from the Business Class z10.
I was looking at some of the collections on Flickr related to z10 and found this entry:
BIG PERFORMANCE, LITTLE ENERGY: Created for mid-sized businesses, the IBM z10 BC simplifies commercial computer operations with "specialty engines" to run popular business and consumer applications (email, website hosting, transaction processing, etc) on one of the world's most trusted and secure computer platforms. IBM co-op student Sean Goldsmith surveys the new z10 BC mainframe in IBM's Poughkeepsie, NY, plant to add an extra 1,000 email users with the energy of a 100 watt light bulb. Goldsmith, a senior at Marist College, anticipates a bright future with the mainframe.
(It referred to the associated announcement launch here)
Wow... I mean really... a 100 Watt bulb...
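Taking the caption's claim at face value, a quick back-of-envelope calculation shows what 1,000 email users on the energy of a 100 watt bulb works out to per user:

```python
watts = 100           # the light bulb in the claim
users = 1000          # extra email users it supports

watts_per_user = watts / users                     # 0.1 W per user
kwh_per_user_year = watts_per_user * 24 * 365 / 1000  # annual energy per user

print(watts_per_user)                # -> 0.1
print(round(kwh_per_user_year, 3))   # -> 0.876
```

Less than one kilowatt-hour per user per year, which is why the marketing folks reached for the light bulb comparison.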
systemzblogger 2700017BYR 1,915 Views
Through a great example of teaming, POWER6 engineers worked with z10 engineers to create the breakthrough z10 chip, which benefits from some shared technologies but differentiates itself based upon a different design point. Some of the common DNA includes IBM's 65nm silicon-on-insulator technology, large portions of the core pipeline design, the hardware decimal floating-point unit, and building blocks like latches, data flow elements, and SRAMs. (Note: for loads of detail, watch for future issues of the IBM Journal of Research and Development, which already has a number of detailed papers in acceptance status for future publication.)
There are also some important differences, which give the sibling chip its own very distinct personality. With more real estate given to caching (about 20% more, in a multi-tier configuration) and extra on-chip elements dedicated to sparing functions, the z10 extends the tradition of IBM Large Systems processors handling mixed workloads and larger working sets, while adding even more capabilities for CPU intensive workload. (Don’t forget the huge jump to 4.4 GHz quad core processor chips!)
In addition, there are the cryptographic and compression co-processors, Storage Control chips on each Multi-Chip Module, the new symmetric multiprocessor topology, and larger caching, which includes support for the new 1 MB page frames for z/OS. Also, while the System z9 introduced hardware decimal floating point instructions implemented in millicode, the z10 extends this to a hardware decimal floating point unit on each core. Along with changes in z/OS Language Environment designs, and backed by a new open standard definition for decimal floating point implementations, this increases accuracy, speed, and capability for widely used commercial and financial applications, and is an excellent example of systems design beyond just increasing the base chip speed.
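Python's decimal module (software, of course, not the z10's hardware unit) illustrates why decimal arithmetic matters so much to commercial work: binary floating point cannot represent most decimal fractions exactly, which is unacceptable when the fractions are cents.

```python
from decimal import Decimal

# Binary floating point: 0.10 and 0.20 have no exact binary representation,
# so the sum drifts off the expected decimal value.
total_binary = 0.10 + 0.20
print(total_binary == 0.30)   # -> False (it is 0.30000000000000004)

# Decimal arithmetic keeps exact decimal fractions, which is what the
# hardware decimal floating point unit provides to financial applications.
total_decimal = Decimal("0.10") + Decimal("0.20")
print(total_decimal == Decimal("0.30"))  # -> True
```

Doing this in dedicated hardware rather than in software libraries (or millicode) is what turns correctness into correctness *and* speed.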
So, like in any family, the z10 defines its own role while leveraging the strengths and the lessons learned from its sibling. It is easy to just look at chip speed, but the detail of system design is what leads to balanced system throughput, resource optimization, and total value of the platform. There truly is nothing in the world like the z10 system platform, and it should be interesting to see the detail referenced above in those articles as they get published. I’ll keep an eye out for you!
systemzblogger 2700017BYR 2,220 Views
As the year ends, I’ve been thinking about the larger trends in technology that are especially true for System z customers. Here are some of the trends I have observed. Do these make your top 10?
1. The economy drives more customers to revisit System z platform characteristics. New candidates for workload migration (including ISVs & application suites) and platform consolidation are on the table.
2. Environmental concerns are moving more customers to System z, dynamic infrastructure, and IBM Green Planet technologies through executive-level cost control and green initiatives.
3. Tighter controls and an increased push for optimization of systems are driving new assurance, security, governance, and risk management programs, which in turn are driving new looks at System z.
4. The new technologies represented by the System z10 have generated lots of attention; not just from the business class community, but also from traditionally distributed platform infrastructures.
5. Installations are moving beyond basic portal access to the use of more advanced collaborative process technologies as IT continues to address complex, people-intensive interactions.
6. The ongoing IBM middleware evolution, including acquisitions, integration and migration across platforms (including System z), has created a more consumable and effective portfolio for clients.
7. Information on Demand technologies have matured and are truly starting to deliver the next generation of enterprise wide applications supported by all of the enterprise’s information.
8. Large installations are looking hard at modernization and transformation projects to move beyond connection and exposure of legacy assets to a more current infrastructure and use of SOA technologies.
9. New business models, virtualized infrastructures (SaaS), and governance procedures focusing on accountability are raising the importance of industry and true enterprise architecture solutions.
10. The On Demand Operating Environment was seen by many as a visionary milepost in the future. The call for a Smarter Planet, however, is being driven by customers, industry, and societies.
Have a great Holiday Season!!
Large systems, as represented today by System z shops around the world, have always led the way in applied technology across the industries of finance, technology, commerce, and government, and have led recent innovation waves including e-business, On Demand, SOA, and green technologies. On November 6, IBM's chairman, Sam Palmisano, gave a speech at the Council on Foreign Relations as part of their corporate meetings program. The subject was introduced as proposing an increased infusion of intelligence into decision-making, but the larger issue addressed in this talk is an opportunity for leadership, in the context of the current economic, financial, and political turmoil, through the part technology plays in creating a smarter world.
Building on the concept of the Globally Integrated Enterprise (see the 2006 paper), Sam talks about the window of opportunity leaders have to effect change in the current climate of receptiveness to change. Building on Tom Friedman’s observations about a flattened world in dire need of greener and more environmentally responsive approaches (Hot, Flat, and Crowded and The World Is Flat), he suggests the world also needs to be smarter, and to accelerate the convergence between digital and physical infrastructures. Suggesting we move towards the future rather than hunkering down, Sam reminds us of the inspirational effects of taking action towards a hopeful future rather than trying to defend the past. (Something System z and large systems folks understand too well!)
Sam conveys the current urgency by reviewing some arresting facts: how 40 to 70% of energy is wasted in poorly managed power grids, how half of the world doesn't have sanitation facilities, and how one in five don't have safe water to drink. Or, how a large part of the recent financial crisis was due to a lack of mechanisms to track and manage risk, and how huge opportunities exist in managing traffic, supply chains, and especially health care systems. Then, he reminds us of our collective technological progress: our planet has almost a billion transistors per person (at a cost of one millionth of a cent each!), 4 billion cell phones, 2 billion Internet users, and 30 billion RFID tags. He shines a light on the pervasive use of sensors, connected and networked to provide a growing base of intelligence and capability, representing a huge potential for problem solving.
In short, an emerging world is revealed that is digitally aware and intelligent, or smart. If growing these capabilities is possible, affordable, and can make a significant difference in our world, then someone will do it. Sam asks: why shouldn't it be you, your company, your country? (And naturally, IBM has examples of involvement in many areas, from MRI coils to solar capacitors, from traffic and power grid management to risk systems.)
Talking about the importance of this moment in time, Sam summarizes, saying: everyone has to come out of their lanes. It will take collaboration across government, academia, and industry, with skills that are multidisciplinary and end-to-end. (Hey, a lot like architects!) To dig out of this current situation will require a realization that we need to move forward and create an even smarter world. As one of the audience commented, citing Peter Drucker's paper (1994, The Age of Social Transformation): There will be no "poor" countries. There will only be ignorant countries.
As part of the System z and large systems community, I think we will once again be on the leading edge of this next transformation as we become more instrumented, interconnected, and intelligent; as we become a smarter world.
systemzblogger 2700017BYR 1,563 Views
There are different kinds of consolidation; mergers and acquisitions, temporary partnerships, and even hostile takeovers. As times change, new combinations of people and institutions recombine and shuffle for optimal fit. This has certainly been highlighted for me in the past couple of weeks. First, I saw the financial battlefield of winners and losers, with some institutions disappearing and others being picked up by past competitors in ways we could never have imagined six months ago. Second, I attended a series of sessions on the dozens of software company acquisitions that IBM has folded into its portfolio over the last half-dozen years.
I've heard the process of folding these companies into the IBM world described as a core competency. The informal term I’ve heard is 'blue-washing', and while I can imagine there may be a Harvard Business Review article on the details someday, I can state, as an unofficial observer, that we get better at it each time a company is ‘blue-washed’. Externally, the process delivers tighter, pre-integrated solutions with more functionality for our clients. Doing these integrations of once isolated and disparate pieces can save our clients a huge amount of effort and time; as in an order of magnitude less effort.
This behind the scenes consolidation of middleware gets added to the consolidation efforts already underway in IT. Workload from legacy, ISVs, and e-business applications are moving in together where once they got to live in their own, sometimes messy, apartments. Platforms are merging for cost efficiency and infrastructure effectiveness, not unlike that house you may have rented with friends at college.
Of course, as we consolidate IT, some of us will have to get used to some different ways of doing things again; like System z’s insistence that you share the common area, work with others, and pick up your garbage!
The theme of IT enablement across multiple fronts, as brought out by several of the speakers at the SOA Summit this summer, keeps coming back to mind. On the one hand, I came across another reference to the lines of COBOL code out there (this one was for 240 billion!), and on the other there was an article in Mainframe Executive on Hoplon (Hoplon Infotainment’s Gameframe) and the fact that they are planning on hosting 2000 users per IFL for their massively multi-player environment (you know, the one with cell processors in the System z?).
I was thinking about CICS and the sure progress of web services enablement, which has gotten lots of publicity, and thought of its counterpart: the progress in expanding legacy CICS efficiency through the evolution of ‘threadsafe’ tasks.
CICS has continued to evolve to take advantage of technologies that maintain application integrity while growing with the hardware evolution of more processors. The process has involved distributing workload across multiple tasks, first with CICS core functions, later with DB2 integration, and now exploitation moves to include potential threadsafe use by some MQ and VSAM functions (CICS 3.2). The enabling of multi-tasking of user applications through multi-threading has helped improve performance and utilization while maintaining little things like the consistent and predictable throughput results and workload sequencing that large systems have been so obsessed with for the past few decades.
Why is this important? Well, if you can do this sort of thing safely and improve utilization (there is that old System z focus again), it can mean serious returns; like 5-15% or more. For some of IBM’s larger customers this has translated to saving hundreds of MIPS and millions of dollars.
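The core idea behind threadsafe programs, in CICS or anywhere else, is that shared state must be protected once multiple tasks run concurrently; an unprotected read-modify-write loses updates. A minimal sketch in Python (purely illustrative, nothing CICS-specific about it):

```python
import threading

class SafeCounter:
    """A shared counter whose updates are serialized by a lock --
    the essence of making a shared resource threadsafe."""
    def __init__(self):
        self.value = 0
        self._lock = threading.Lock()

    def add(self, n: int) -> None:
        with self._lock:          # protect the read-modify-write sequence
            self.value += n

counter = SafeCounter()
threads = [
    threading.Thread(target=lambda: [counter.add(1) for _ in range(10_000)])
    for _ in range(4)
]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter.value)  # -> 40000, every update accounted for
```

Drop the lock and the four threads will occasionally clobber each other's updates, which is exactly the kind of subtle corruption the CICS Open Transaction Environment work is designed to rule out before an application is allowed to run threadsafe.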
Finding the appropriate candidates and implementation can get pretty involved, but the impact can be worth it, and finding candidates is easier than in earlier implementations via the use of CICS tools such as Performance Analyzer (to find which are good candidates), Interdependency Analyzer (to find which are not threadsafe, which could fit nicely in CICS Explorer), and Configuration Manager (to enforce a threadsafe environment).
Mmmm.. maybe something less sexy than web services, but an area worth looking at, eh?
Some good places to start:
Threadsafe Considerations for CICS Redbook
Does CICS still Love Fast Engines? (zJournal)
OTE (Open Transaction Environment) and Threadsafe: Why They Should Be Important to You (also in zJournal)
Webcast on: Thread-safety and CICS file control applications
I had the opportunity recently to talk with Paul Wirth, an IBMer who travels the country in support of DB2 and is a real smart guy (… and not just because he agrees with me on so many things!). I had asked to talk with Paul after reading about a new technology called pureQuery, which is part of the relatively new Data Studio suite. The technology is a result of a cross-brand initiative from IBM Software aimed at the intersection of application programmers and DBAs for effective data access. Its goal is to reduce the complexity of JDBC programming and queries to relational databases, Java collections, and database caches. While there is a lot of detail (see the links below) on how this technology works, there are also a couple of System z implications that Paul shared.
The difference between dynamic and static execution of SQL statements is like the difference between interpreted and compiled execution. If you build static execution packages for DB2, you get a predetermined access path, built for the best performance and predictable execution of the workload. There are also benefits related to security isolation, since the application accesses the package and not the database table (see SQL injection, fraud, etc.). Those shops who have used the approach of static SQL statements via the whole mechanism of DB2 packages receive the long-known associated benefits of not only performance, but also cost, security, monitoring, and consistency. While previously available for Java via SQLJ, it was pretty complicated to do and only a limited number of shops did so. pureQuery makes this much easier to do, regardless of your Java framework or the API you're using.
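The security isolation point can be demonstrated with any relational database. This sketch uses SQLite (standing in for DB2, purely for illustration) to show why a fixed, parameterized statement, which is the spirit of a static package, resists SQL injection while dynamically concatenated SQL does not; the table and payload are made up.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (owner TEXT, balance INTEGER)")
conn.execute("INSERT INTO accounts VALUES ('alice', 100), ('bob', 200)")

user_input = "alice' OR '1'='1"   # a classic injection payload

# Dynamic, string-built SQL: the payload rewrites the query's logic
# and every row leaks.
dynamic = conn.execute(
    "SELECT * FROM accounts WHERE owner = '%s'" % user_input).fetchall()
print(len(dynamic))  # -> 2

# Fixed, parameterized statement: the payload is treated as plain data
# and matches no owner.
static = conn.execute(
    "SELECT * FROM accounts WHERE owner = ?", (user_input,)).fetchall()
print(len(static))   # -> 0
```

A DB2 static package goes further still: the statement text and access path are bound ahead of time, and users are authorized to execute the package rather than to touch the underlying table at all.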
One of the strengths of System z is its focus on mixed workloads, performance, and cost effectiveness as a platform. Being able to prioritize and manage workloads through Workload Manager is essential to accomplishing this. Unfortunately, much of the distributed Java transaction traffic coming off the web to DB2 is dynamic SQL workload only, so z/OS sees it as undifferentiated pieces of work unless the programmer sets the properties for the connection class, which is often not done. If you have a unique package name you can identify the application program, which means it's easier to monitor, easier to do problem determination on, and, especially, eligible to be managed via the z/OS Workload Manager (WLM). pureQuery gets you those unique package names as you implement static SQL, and gives WLM the ability to assign them to specific service classes and allow prioritization of the DB2 threads. (Note: it is not just the Java workload coming off the web; Java workloads via WebSphere for z/OS, stored procedures, and CICS Java workload can potentially benefit from pureQuery via the pureQuery Runtime for z/OS.)
A third interesting area to look at is stored procedures. First, if you have a simple, one-statement procedure that doesn't contain business logic, rules, or network data filtering, and is used just to provide a static plan, then consider using pureQuery and rewriting the SQL statement in the Java application. The pureQuery statement is zIIP eligible, provides the static plan, and avoids the need for the stored procedure. If the Java program is running on System z, it also becomes eligible for zAAP processor use.
Next, we know that DB2 9 gives us the new native stored procedures, which avoid the need for a WLM-managed address space because they run within the thread, and are zIIP eligible when used over TCP/IP with DRDA. So, say you rewrite an external procedure (e.g. one that currently uses COBOL) using native SQL/PL. The result? A procedure which is more efficient and is zIIP eligible (DRDA).
As Paul said: “…if you think about it; pureQuery makes Java web applications behave a lot like CICS COBOL applications…” And that, as System z folks know, can be a good thing.
Whew... a lot to think about, but this is a technology to watch that holds the potential for improvements on numerous fronts. Watch this one as it moves in closer on the radar!
...and from DB2 Magazine:
It felt like it's been about six months since the announcement of the z10, so I took an informal web sampling of reactions to the z10 announcement from different points of view. Naturally, I found a couple of excellent articles from Mainframe Executive (one from June entitled: Green Machines: The System z10 Enterprise Class, and the other from April: IBM Unveils New System z10: Vital Signs Remain Strong). I also found a nice entry from an electrical engineers' news service in Asia with some comments from IBM z10 designer Charles Webb, who addressed the design challenges that came with the increase of speed to 4.4 GHz. Finally, I scanned the spring publication of the z10 technical Redbooks: IBM System z10 Enterprise Class Technical Introduction, SG24-7515, and IBM System z10 Enterprise Class Technical Guide, SG24-7516 (along with loads of consolidation and green related items!).
One of the things that really stood out in the external entries was comments like: ‘The z10 chip is easily the most elegant enhancement in more than a decade’, ‘Rock-Solid Computing for the Next Decade’, and ‘the first ground-up CPU redesign in an IBM mainframe in a decade’. All the articles I read really underscored my initial impressions about how many different areas were changed at once (huge speed increase, buffer structures, InfiniBand connections, reducing chips from 16 to 7 on the MCM, etc.), but also the commitment to the platform these new capabilities represent.
It's always been fun to see the technical changes, but it seems easier than ever to link them directly to the business behind the workload. Floating-point decimal functions get moved from millicode (yes, there is a step between microcode and the chip) to the chip for the demands of workloads related to financial institutions. Support is added for growing encryption needs through enhanced cryptographic processor functions. An additional 50 instructions are added, aimed at improving compiled code efficiencies for the software (i.e., Java, WebSphere, and Linux) that enables growing Internet and application workloads. These are all good examples of the platform evolving for a changing world.
Besides seeing that quote, ‘long-time assembler programmers will rejoice’, or the fun fact of 20,000 error checkers on the chip, there seemed to be a lot of discussion about the design effort between the System z and System p teams. I like the phrase “shared DNA” for their collaboration on areas like the design of memory controllers, floating-point processors, and I/O bus controllers, but also that the z chip is different due to the platform's focus on functions like cryptography, compression, and decimal floating-point capabilities. (Or the different buffer structures for different workloads, levels of availability mechanisms, and something called local clock gating to reduce power consumption.)
Perhaps the nicest summary I saw was from Bill Carico, president of ACTS (an IBM Premier Business Partner), who wrote at Mainframe Executive: “… and a litany of other advancements, confirming that IBM remains strongly committed to keeping the mainframe on the cutting edge of technology. The one-sentence executive summary of the z10 announcement is simply this: ‘The mainframe still leads the industry in its ability to run mixed workloads, share data, operate consistently at over 90% utilization and near 100% availability at the lowest cost of ownership (TCO) in an impenetrable environment that runs on autopilot.’ No, I’m not saying the mainframe is the best tool for any job. I’m saying it’s the only platform with these unique capabilities”.
Says it pretty well, huh? Let us know what you are hearing about the z10 and its evolving role in the enterprise...
As fall starts showing some early signs here in the Midwest, I took a few days last week for cleanup chores. Today’s entry will be kind of like that, with a couple of quick notes or tidbits.
First, while I have mentioned the Academic Initiative IBM has in relation to System z, I don’t think I included any relevant pointers, so for this program that now has 500 schools participating worldwide:
Oh, and to engage with folks from IBM regarding System z skills, just send a note to: firstname.lastname@example.org.
Next, did anyone see the browser announcement of 'Chrome' by Google? I’ve played with it a little, and two things pop out at me, besides the obvious business implications of stepping into this space of browser and client interfaces. First, it is kind of nice to have a quick-start icon for certain sites; but also, the idea that a browser task having problems doesn't bring down all of your sessions says some positive things about their possible awareness of an old System z design concept! Something stumbles but it doesn’t bring anyone else down with it… sound familiar?
Finally, I heard a couple of items related to System z in the last couple of weeks that you might want to put in your virtual pocket. Did you know there are more CICS transactions every day than searches on the web? (I know, still true!) Another fun fact I heard in a customer teleconference referenced the mechanism of System z hardware executing instructions in parallel and then comparing them to make sure they come out the same. That is a great example of an availability mechanism that came from a time when they were flipping those little ferrite cores of memory, and it is so deep in the design we take it for granted, if we stop to think about it at all. It’s a detail, but as great coaches say, details build champions. Think of John Wooden starting the first practice by teaching his players how to put on their socks and tie their shoes correctly to avoid blisters... and win basketball championships.
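That duplicate-and-compare idea can be sketched in a few lines. To be clear, this is purely an illustrative toy: the real hardware does this redundancy at the circuit level, instruction by instruction, with no software involved. The function name and retry scheme here are my own invention for the sketch.

```python
# Illustrative sketch only: the real System z redundancy is done in
# hardware; this toy model just shows the compare-and-retry idea.
def run_redundant(op, *args, retries=1):
    """Execute op twice and compare results.

    If the two results disagree (think: a transient fault), retry;
    if they still disagree, signal a hard error instead of silently
    committing bad data.
    """
    for _ in range(retries + 1):
        a, b = op(*args), op(*args)  # two redundant executions
        if a == b:
            return a                 # results match: safe to commit
    raise RuntimeError("mismatch persisted: possible hardware fault")

print(run_redundant(lambda x, y: x + y, 2, 3))  # → 5
```

The point of the design is that an error is caught before its result is ever used, which is why the mechanism can stay invisible to everything running above it.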
Hi, this is Dave again. I just got back from IBM's SWITA and zITA internal University, where I was part of a team delivering the System z track. (A SWITA is a pre-sales software architect, and a zITA is the same, specializing in System z platform accounts.) It's always good to step back and spend some time renewing oneself, talking to others who do similar jobs, and of course, when you deliver sessions, seeing how they are received and gauging where the platform sits with others who may not work with it every day.
One of the insights I brought away from this experience was to remember, for those of us who have spent a long time with System z (like when it was the mainframe!), that 40+ years is a lot of time not only to layer functionality, but also to lose the reasons why those functions were put in place. As we talked about some basic concepts like virtual storage, partitioning with PR/SM, and the original software virtualization engine z/VM, I was reminded of a friend’s report on an interview he had a few years ago.
This friend had traveled to Texas to interview at a large IT installation, and was anxious that the interview go well. Towards the end of the day, while touring the data center, his guide stepped aside and gave him the feedback he was looking for. He indicated that the hiring team had been concerned, since my friend was from the north and in their experience there had been difficulties with other candidates, who just didn't seem to fit in culturally. He shared in a Texas drawl that while the candidates were qualified and generally good people, part of the reason they ‘just didn't get it’ was not their fault, as they ‘just didn't know any better’.
Experience, context, and exposure. It's not the fault of those who haven't been around System z that they don't remember why functional recovery routines were put in place, exactly how virtual storage works to ensure memory doesn't get stepped on by those who don't belong there, or the context that two-phase commit came out of. If you are one of those who have been around System z long enough to understand its design point and value, take the time, like a good Texan would with an 8-pound brisket, and help others ‘get it’.
A month or so ago I heard a lecture from someone at Los Alamos who was talking about their efforts with IBM on their latest specialized grid solution. The lecture started with Moore’s law, the progression of smaller and more powerful chip sets, and how, if you keep making things smaller with more power, the logical conclusion over time is a kind of ‘singularity event’ with a bright flash of light. There was a pause while the audience figured out what he was saying (it go boom!), and then the laughter came. The dry delivery was, as a current advertisement says, priceless. But it points out the issue: we can’t assume there is no end to increasingly denser, more powerful chips. There are limits that imply compensating design strategies.
As we in ‘z’ know, strategies for extending systems capacity have been around a long time. There were attached processors and multiprocessors (AP, MP) decades ago. Systems evolved to horizontally add specialized processors for I/O processing (SAP), vector processing, and cryptographic functions. Loads of queuing studies led to carefully designed and balanced buffer hierarchies to feed the engines. Balanced Systems Performance was a key phrase that reminded us all that a faster chip without overall system design improvements was pretty ineffectual.
Recently, while IBM has been visible in grids that are widely, geographically networked (e.g. World Community Grid), System z has been extending the horizontal and parallel processing strategy with the addition of specialty processors like the zIIP, zAAP, IFL, and the ICF engine for the coupling facility. It should be happily noted that some have pointed out the seemingly serendipitous ‘cost engineering’ effects of moving specialized functions to these processors: a serious financial outcome from good technical design.
System z continues to evolve, and looking forward, there are processing challenges in areas like XML, security, and analytics that could benefit from better cost and performance improvements. These workloads may be logical candidates to be enfolded in the System z sphere as specialty engines. Of course, as Ian Richardson used to say as the politician Francis Urquhart in the BBC thriller ‘House of Cards’: "You might well think that; I couldn't possibly comment".
While traveling on business, I heard a refrain whose time I thought had passed in reference to System z. No, it was not the one that claims the platform is ‘more expensive’. (There are reams of material and experience to lay that urban myth to rest!) This was a comment bemoaning the lack of new workloads for System z. What??!
Perhaps the biggest door opener for new workloads has been the capability to bring Linux workloads on board with z/VM and IFL engines on System z. Not only does this strategy not add to MLC licensing streams for z/OS, but the engines themselves are priced so compellingly that they provide real financial leverage in workload migration analysis.
As we would expect, System z continues to refine design elements to improve system performance for more kinds of workloads and applications. Every element continues to get refined from new chipsets and buffer strategies, to new connection technologies.
Look at the new range of workloads enabled just by the faster chips on the z10. Doubling the chip speed, while only a portion of system performance, does move the line to include compute-intensive workloads that might not have been a good fit for movement or consolidation previously.
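To see why doubling chip speed is ‘only a portion of system performance’, Amdahl’s law is the classic back-of-envelope tool. The workload fractions below are made-up illustrations, not z10 measurements:

```python
# Amdahl's law: if a fraction f of a workload's time benefits from a
# component that gets s times faster, the overall speedup is
#     speedup = 1 / ((1 - f) + f / s)
def amdahl(f, s):
    return 1.0 / ((1.0 - f) + f / s)

# Doubling chip speed (s = 2) on a workload that is 60% CPU-bound:
print(round(amdahl(0.6, 2.0), 2))   # → 1.43 (not 2x)

# A heavily compute-intensive workload (f = 0.95) gets much closer:
print(round(amdahl(0.95, 2.0), 2))  # → 1.9
```

Which is exactly the point about balanced system design: the buffer hierarchies, I/O paths, and connection technologies determine how much of that raw chip speed a real workload ever sees.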
While it is easy to overlook, even new instructions are being created to help workloads. In the z10 these include instructions that address decimal floating-point functions, compiler code efficiencies (C++, PL/I, and COBOL), and enhancements for Java workloads.
ISVs have been busy voting for System z as a platform. Consider that 500 new ISVs joined the System z world last year. According to some of the numbers from our partner ISV community, there are 1,300 new applications being built that will run on System z. This is in addition to the 1,200 applications already enabled for Linux on System z, and the 4,000 or so already available.
No new workloads?! We haven’t even talked about the other specialty engines, the expanding role of System z to deal with workloads related to energy management, administration and governance, serving enterprise data in-line, or new applications related to process and analytics that support new business models and could not have even been done a few years ago.
There was an article in the New York Times earlier this year that talked about the ‘mainframe’ and how it has evolved into a whole new creature, unrecognizable from the thing we called ‘mainframe’. As it continues to evolve, the question we have to ask is not what workloads are there for it, but which ones don’t fit, and for how long?
Architects are not just techies. As we shepherd solutions from creation to fulfillment, we are concerned with method and design, project realities and technology, but we also need facilitative skills in our toolkit. With projects like SOA and Enterprise Data Centers on the horizon for many, processes to work with groups to elicit information, create plans and make decisions need to be employed effectively by architects. In other words, facilitation processes.
IT has deep roots in the complex merging of businesses, technology, and people. Facilitative disciplines have played a key role, especially in shops where System z and its predecessors lived. For instance, Chuck Morris and Tony Crawford, both of IBM, created the Joint Application Requirements (JAR) and Development (JAD) approaches in the 1970’s. We can recognize their facilitative grandchildren in current workshop offerings for System z, SOA, Green Data Centers, and Modernization.
Another great example of the intersection of IT and facilitation is the creation of a leading organization in the field: the International Association of Facilitators (IAF). With contributions from organizational development, education, and other disciplines, this group was started in ‘94 with a very strong contribution from IT participants. Two further examples are the wonderful work of Ellen Gottesdiener in her book Requirements by Collaboration, and Michael Wilkinson, who came from IT and, recognizing the value of facilitation techniques, moved to a full-time focus with The Secrets of Facilitation.
Today, when execs talk about top-of-mind concerns, legacy modernization and collaborative tools are at the top of the list. (See the May 19 blog entry.) More than ever, addressing these areas means collaboration and participation, understanding and buy-in across disparate groups to achieve optimal solutions. As we build our personal skills and experience as architects, let’s be sure facilitative techniques have their well-deserved place in our toolkit.
A couple of weeks ago, IBM’s Roadrunner supercomputer blew past the petaflop barrier. (IBM news and NYTimes article). Having spent so much time in the Kilo, Mega, and Giga ranges during my career, and finally getting comfortable with Tera prefixes, this threw me into one of my little time trips.
As the music for a time shift faded and the fuzzy strings cleared up (like in any good TV flashback), I was talking with a mainframe systems programmer in downtown Detroit, and based on the wide ties, I could see it was the early 80’s. As the proud owner of an aging behemoth System/370 (a 3033 in the single-digit range of MIPS), he was thinking about crossing into double-digit territory, and dreaming of futures of gigaflops.
At that time we were well past basic accounting into complex transactions, just neophytes in distributed processing and well before the internet. I remember trying to not roll my eyes and wondering: ‘Come on, really, what would we need that kind of power for? ‘
Well, in 2008, what we (and I say we, since many of these projects are federally funded by whose tax dollars?) are using this kind of power for is even more mind-blowing than my fellow gearhead buddies might have envisioned. How about these for starters: Nuclear Stockpile Monitoring, Terrorist Activity, Climate Change, and Genome Analysis....
One thousand trillion… 1,000,000,000,000,000 – fifteen zeroes and my head starts to fuzz out. The names for the next progressions at 18 (Exa), 21 (Zetta), 24 (Yotta), and 27 (Xona) zeroes are waiting. Researchers will get us there by continuing to look at every element in the system, and may use approaches like the recent chip-stacking experiments where water-cooling rivers as thin as a hair flow between stacked condos of computing. They’ll probably also use a mix of specialized processors, as you see in Roadrunner and the current System z. Out of the way, petaflop… here comes the new fraternity: Exa, Zetta, Yotta. I think I’ll get a t-shirt!
Whew, just back from a week off hauling stone, trimming trees, and walking dogs. Last time I discussed topics related to effective use of computing resources by taking a tour of green power issues and some serious inefficiencies, as represented by low utilization rates on platforms. After that BLOG entry, I came across a great series by Marlin Maddy, who has run hundreds of Scorpion studies, which help enterprises determine the most efficient use of platforms.
There is a series of six broadcasts on IBM TV relating to Cost Misconceptions, Server Utilization and Proliferation, Facility and Infrastructure, Leveraging Specialty Engines, guidance on building a business case with System z, and Find the best Fit for System z. Some interesting questions that come up include:
- Did you know that the ratio of facilities costs has flipped? It used to be that, on average, 80% was attributable to the ‘mainframe’ and 20% to distributed systems, so naturally, costs tended to get lumped into the mainframe for accounting. It is the other way around now, and yet distributed financial models not only tend not to include these costs, their burden gets shunted onto System z models.
- What was the reason you last upgraded power and cooling capability? Odds are it was for distributed systems, according to Marlin’s studies.
- What happens if you have good tools to capture resource usage? That’s right; those are the metrics that get put into financial models. System z has had great tools for decades, and often suffers for it when it comes time to build costing models. (Marlin has a great anecdote about the corporate jet getting slammed into mainframe ‘facilities’ costs!!)
- These are just a couple that jumped out at me. There are many more insights, so: Check these out!
While thinking about costing operations and potential stumbling blocks, I flashed back to the 1980’s, when I was working for a large financial institution. As part of capacity and performance duties, I installed the then-new MICS accounting components, wrote the SAS exits, and reported on systems resource metrics. These activities created input to financial acquisition models and to chargeback processes. Years later, I was surprised, through a chance meeting, to discover that many of those same pieces I set up were still running unchanged! In talking with my peers, it seems there are plenty of chargeback systems which have not kept up with changes in technology or in the supporting facility cost patterns.
Like layers of business processes, or government regulations, these often elaborate systems are well overdue for reengineering and spring cleaning. If not, they can be a serious stumbling block to IT optimization of resources. The misperception persists that mainframe System z resources and platforms are ‘too expensive’. You and I know better. What has your experience been in this area?
As a one-time naturalist, it's been interesting for me to watch the evolution of energy usage and its increased price over the last few years, particularly in an IT context. Energy as part of an IT budget is a bit of a misnomer. Did you know that less than one in four IT departments pays for its energy? Or that fewer than one in five enterprises have done a detailed energy audit?
Let’s review some alarming IT energy growth numbers. For instance, did you know that energy costs have doubled in the last five years and may double again in less than the next three? Or that servers have grown sixfold, power and cooling eightfold, storage sixtyfold, and the administrative costs for these systems have grown an average of fourfold?
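If you like to sanity-check such figures, the compound annual rate implied by a doubling time is easy to work out. The doubling periods below are the blog's rough figures, not audited data:

```python
# If a cost doubles over n years at a steady compound rate r, then
# (1 + r)**n = 2, so r = 2**(1/n) - 1.
def annual_rate(doubling_years):
    return 2 ** (1 / doubling_years) - 1

print(f"{annual_rate(5):.1%}")  # → 14.9%  (doubling in five years)
print(f"{annual_rate(3):.1%}")  # → 26.0%  (doubling in three years)
```

In other words, going from a five-year to a three-year doubling time nearly doubles the annual growth rate of the bill, which is what makes the trend alarming.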
I've also seen references that suggest there is low-hanging fruit here with the proper management attention. One figure quoted from the EPA suggested that many enterprises could save in the range of 25 to 55%. Some of the numbers I've seen from IBM Project Green analyses suggest this could be as high as 80%.
These are huge numbers, especially when you add in server utilization rates. Excepting System z (shameless plug: designed to run at 100%, deal with different kinds of workload, etc.), many of these servers run at really low utilizations: less than 10% for Wintel servers, and in the 10-20% range for UNIX servers. While part of the problem can be technical limitations of scale, from an organizational perspective many of these servers are set up in ways that prevent optimum resource utilization, since they are dedicated to limited workloads and managed by single departments or lines of business.
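A little back-of-envelope math shows why those utilization numbers matter for consolidation. The server counts and utilization targets here are purely illustrative assumptions, not study data:

```python
import math

# Toy consolidation estimate with assumed utilization figures.
def hosts_needed(n_servers, avg_util, target_util):
    """Aggregate demand of n servers running at avg_util, repacked
    onto hosts of equal capacity driven to target_util."""
    demand = n_servers * avg_util      # in units of one server's capacity
    return math.ceil(demand / target_util)

# 40 boxes idling at 8% carry the aggregate work of ~3.2 busy servers;
# driven to 90% utilization, that fits on 4 equal-capacity hosts.
print(hosts_needed(40, 0.08, 0.90))  # → 4
```

Real sizing is far more subtle (peak vs. average load, workload mixing, failover headroom), which is exactly what studies like Marlin Maddy's Scorpion work account for; but even the toy version makes the scale of the waste obvious.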
So, there are clearly some reasons to focus in this area, and clearly some benefits to be had. It's also pretty clear that if we don't address this issue there could be roadblocks to IT and enterprise progress. Remember, even if you're willing to pay the price, there are areas where there simply is no additional power capable of coming off the grid!
As a tech guy, it's tempting to talk about some of the new enterprise data center capabilities, the neat new ice battery, and all of the virtualization, consolidation, and optimization capabilities we have to contribute in this area. However, this is another problem that’s going to take both business and IT to solve if we are to continue growing and contributing. It is going to take the governance stick to remind everyone what the goals are, to cooperate and play nice, and to solve the problem with the greater good in mind. It's going to take executive attention, sponsorship, and support.
For those of us in the tech community, let’s be ready with our enterprise and architectural perspective when the call comes from on high, and get green! (For fun, see green data center man!)
Having just put new windows in a 41-year-old house, the benefits of remodeling were top of mind while listening to keynotes from the recent Business Partner Leadership Conference. In one podcast, a CIO was referenced summing up the next few years of IT focus: ‘Modernizing applications, and dealing with the Web 2.0 thing’ (paraphrased). Mmm… easy to say, but how does that work, really?
44 years ago, just before the original sashes were put in my humble abode, IBM invested over $5B to launch the general-purpose commercial business machine: the System/360. It was a bold move which has proved to have a large impact. One of the reasons these systems have successfully evolved is an initial design, and commitment, to enable applications to move forward as technology changes.
Plenty of applications have made technology jumps over the last four decades because of that initial commitment. Today’s leaps involve making applications accessible via the web, enabling them to be part of new applications and accessible to new customers, markets, and business models. (Oh yeah, and dealing with that Web 2.0 thing too.) This refactoring transformation is referred to as ‘modernization’. (I guess it sounds more businesslike than remodeling...)
It is easy to forget that these kinds of changes have ensured that a majority of the data and transactions that run the world still reside happily, and effectively, in System z houses. A good example of evolved and layered systems fell in my lap, or ear, just days later from another podcast. Entitled Web 2.0 and Wall Street, this discussion provides a great retrospective of IT’s essential role on Wall Street and speculates on possible future use of Web 2.0 technologies. (Yes, there are a few System z platforms on Wall Street!)
Design, architecture, and remodeling… oops, modernization. For those of us with the responsibility of crafting solutions others live with, it’s a good reminder that most projects don’t start with an empty lot, or in a ‘Greenfield’ state,* and that it matters what base you start with when you make changes. Put another way, starting with good materials gives you the option of remodeling down the road.
Oh, and it’s pretty neat that those who live in a System z neighborhood can ‘modernize’ those old structures when the original windows get drafty, rather than start from scratch… isn’t it?
Upcoming BLOG topics: Thoughts on Chargeback and System z, Good Things we forget about System z, notes on design Methods and... your ideas?
* ‘Greenfield’ refers to clean slate projects versus ‘Brownfield’ efforts built on pre-existing structures. See: Eating the IT Elephant
Another Blog - One of my team members asked: 'Why another blog?' It is a good question. We have Mainframe, which has lots of good detail for big iron. We have millions of pages online, in hardcopy, and in e-mail to look at! Still, the channels proliferate, and for a reason. We have moved from phone, face-to-face, and mail to a panoply of social networking tools, channels, and media forms which compete for our attention. Filters, feeds, and federation attempt to address the mixed blessing. We have enterprise systems, distributed systems, and grids in the sky. The complexity is not going away.
Connections make things possible like never before. In this weekend's New York Times, there was an article called 'Can You Become a Creature of New Habits?'. It talked about how we approach problems as humans through analysis, procedure, collaboration, and innovation. Built on the familiar tools of analysis and procedure, it is new connections, combining existing things in new ways, and leveraging existing expertise, wherever it is, that get us the breakthrough approaches called innovation. System z knows about innovation, so it makes sense that new techniques of collaboration are something this community is a part of.
At IMPACT 2008, Sandy Carter announced the new Smart SOA Social Network, which will add Line of Business and Business Analyst communities to the existing Developer and Architect communities that vehicles like developerWorks have done so much for. Social tools will be integrated that include Orkut, Second Life, MySpace, Xing, Twitter, Facebook, and LinkedIn. Add 400 universities participating in the Academic Initiative, where students learn about the evolved enterprise systems, and Destination z, where System z Business Partners provide an amazing array of capability, and this is clearly a new world. (Look! I am not wearing a white shirt and tie! There is no black dial phone in front of me! My inbox is not a physical box! My office is where again?)
So, we are going to add to the mix. We are a group of IBM Architects called zITAs, who have background and focus on System z. Here you will see observations, perspectives, and comments on what we see from our experience across gobs of technology, customers, and you don't want to know how many years. (Let's just say the hair on our heads is graying.) What will we talk about? It could be chargeback or cash flow, workload or method, technology or trends.
Hello, and welcome aboard....