Enterprise Class Innovation: System z Perspectives
systemzblogger 2700017BYR 1,631 Views
Wow... a month already! I have seen a number of indicators, including server acquisitions for System z, that suggest the economy may have bottomed out, or at least that pent-up demand forced the large enterprises to make some decisions on acquisitions. Certainly, the zEnterprise has been adopted nicely, as reflected in the Mainframe Zone and in comments floating back from individual customers.
Next week is the annual Rational Software conference, Innovate, and the opportunity to hear Grady Booch, Walker Royce, et al. share developments (get it? developments?) in the area of Software and Systems Innovation. Streams range from Application Lifecycle Management, Enterprise Modernization, and Application Security and Compliance to Strategic Planning and Software and Systems Innovation. Here is your chance to get updated on OSLC, Jazz, and all of the portfolio pieces that contribute to new business outcomes. As the recent issue of CIO says, IT alignment and IT value are no longer enough; business outcomes are what matter. ("IT Value is Dead, Long Live Business Value" -- May 15, 2011 edition)
Oh, and one more thing of interest: The Register reports today that "IBM Guns down Mainframe accelerator in Texas...". So, zIIPs and zAAPs will continue to be used as IBM intended, and under the rules and restrictions that have been set over time.
They sum it up by saying:
"The hired legal guns were blazing, and when the smoke cleared, the US District Court for the Western District of Texas, located in Austin, granted a permanent injunction in favor of IBM in a long-running lawsuit against Neon Enterprise Software, killing the controversial zPrime mainframe acceleration program."
Well... that settles that, I guess... until next time!
systemzblogger 2700017BYR 1,795 Views
The 13-volume ABCs of z/OS Systems Programming is a classic set, and the release of Version 5 of Volume 9, dealing with UNIX System Services, seems a good time to highlight the series for those who may not be aware of it. For review, the series is aligned as follows:
systemzblogger 2700017BYR 2,421 Views
Well, I just got off US District Court jury duty, and it was quite an interesting view into a different world. As one who had not avoided it but just had not participated before, you can imagine my little IT head's dialogue: where are the pictures? There is a whiteboard there; why don't they use it? Couldn't they put together a better compiled document of evidence briefs than this? I admit, I do the same thing when I go to the doctor's office and see wall after wall of paper file folders and watch the physician dig through the handwritten notes jammed into a folder. Anyway, if a vacation is a change of scenery, then maybe that is what I had, but it sure did not feel like it!
Did anyone notice the Novell SUSE extended support for Enterprise z last week? Novell Offers Industry's Longest Enterprise Linux Support Program. This adds to a more than 10-year presence in the enterprise space -- something easy to forget, I think. (brochure)
How about the Amazon cloud outages and Sony security exposures? Characterized by the headline in the Economist as "Break-ins and Breakdowns," it seems we are seeing these sorts of things almost weekly. Again, whether it is cloud or general platform selection, put your enterprise systems hat on and assess the Service Level Agreement section; remember the decades of work to define the qualities of service and non-functional requirements that Enterprise Systems represent when you think of moving workload. Just because you are used to it all being built in and there does not mean it will be in a new environment unless you make sure of it! (Yes, System z and Enterprise Systems work with cloud and are still the most secure....)
A final quick note, and a reflection of what I am sure we all do... as I was looking across the functional portfolio of our acquisitions, I thought to myself: what is Jeff Jonas of SRD doing these days?
Well, it turns out he is speaking at IDUG, infusing his solutions into the ILOG and InfoSphere portfolios, and still thinking hard about privacy and in-stream analytics, applying these new ideas to IBM 'Smarter' solutions. Earlier this year he talked about his G2, or sense-making, project that came out quietly back in January. He discusses it in several places, and there is a nice slide deck on Slideshare, entitled Confessions of an Architect, where he talks about the skunk-works G2 project, sense-making, and the larger picture of his work. This is good stuff around both advancing and controlling the next generation of analytics which, as he puts it, deals with Privacy and Performance, and is Smarter and More Responsible.
Just what you expect from IBM, right?
systemzblogger 2700017BYR 1,564 Views
Good day! I was going to post a draft I had about how the 3/29/11 Lex Column in The Financial Times had a nice summary and reference on how IT costs dropped 3.5 percent last year -- while health care went up 6 percent -- and then talk about what we are all seeing in cost-control efforts. You know, 10% across the board, except how well Enterprise Systems are doing, since they run the world and help implement new business models. (Really, it was all set!) Then I was going to talk about the recent cloud announcements from a couple of days ago, since we have had a thread about cloud, and about how important it is to be careful as you step onto your first cloud: think about the risk, and about who understands and has developed mechanisms dealing with security, backup, availability, etc., versus what could be just a little server in a rack somewhere. But then another thread quietly wove its way into my head this morning, and I realized it was time to post an entry.
Sometimes large events have quiet taglines along the way. It is so short, I include it here from Announcement 111-078: (First, the overview)
In Hardware Announcement 110-177, dated July 22, 2010, "IBM® zEnterprise BladeCenter® Extension (zBX)," IBM introduced a new dimension in computing with the announcement of the IBM zEnterprise Server (zEnterprise). This first in the industry offering makes it possible to deploy an integrated hardware platform that brings mainframe and distributed technologies together -- a system that can start to replace individual islands of computing and can work to reduce complexity, lower costs, improve security, and bring applications closer to the data they need.
As part of that announcement we provided a road map for IBM's hybrid capabilities, the delivery of special-purpose workload optimizers and select general-purpose IBM blades. In 2010 we began to deliver, first with our business analytics solution -- IBM Smart Analytics Optimizer -- and then general-purpose POWER7™ blades. In February 2011 we continued with the announcement of the IBM WebSphere® DataPower® XI50 for zEnterprise (DataPower XI50z), a multifunctional appliance for the System z® environment that can be implemented to help provide XML hardware acceleration, and to streamline and secure valuable service-oriented architecture (SOA) applications.
The next step of the road map is to incorporate select IBM System x® technologies, originally targeted for the first half of 2011. The reaction to delivering IBM System x capabilities has been very positive, with our clients also asking that we support Microsoft® Windows®. Therefore, today we are revising our road map to include planned support for Windows on System x as well as a revised schedule for IBM System x blade delivery on the IBM zEnterprise Systems.
...and then the Statement of General Direction:
In the third quarter of 2011, IBM intends to offer select IBM System x blades running Linux® on System x in the IBM zEnterprise BladeCenter Extension Model 002.
In the fourth quarter of 2011, IBM intends to offer select IBM System x blades running Microsoft Windows in the IBM zEnterprise BladeCenter Extension Model 002.
All statements regarding IBM's plans, directions, and intent are subject to change or withdrawal without notice. Any reliance on these statements of general direction is at the relying party's sole risk and will not create liability or obligation for IBM.
Today is one of those days we will look back on as a milestone. Want to control and manage your glass house? Want to control costs and manage complexity? zEnterprise. Let's go!
systemzblogger 2700017BYR 2,301 Views
I had the opportunity recently to visit the zITA Boot Camp and speak with a community of architects who watch over the mainframe, or Enterprise Systems, world. For those of you who don't know, zITAs are the System z IT Architect community in IBM. These are the folks who advise and guide the largest institutions as they keep the big systems evolving to meet current business initiatives with the appropriate technologies. Now, I am not saying that there were not a few missing or grey hairs in residence, but there was a nice mix of what some HR folks might call 'vitality' representatives as well. While this community knows the intricacies of where virtual storage and channel commands started from, they were discussing developments that ranged from best fit and use of workloads across platforms, to futures of zEnterprise capabilities, to guiding IT leaders on Total Cost of Ownership strategies based on the full range of use and expenditure categories that real clients experience.
These folks are really the Trusted Advisors we hear about, the technical conscience overseeing change in our Enterprise Clients. While they were being vetted on topics from future extensions to current technologies, they were also looking at application in spaces such as Cloud, Industry specific frameworks and solutions, and the next stage of low level integration across component boundaries using open technologies to accelerate and increase options across technology providers that can give more options for solution innovation.
As at any meeting or conference of this type, the conversations at breaks and meals are often some of the best. These are not individuals who are stuck in the quagmire of detail without a clue to what is going on with the business side at their clients. I heard conversations that included topics such as: how can we better understand industry trends our clients are dealing with, how can we help our customers better link IT and LOB initiatives so they are more effective and valuable once implemented, and what lessons can we leverage from other fields such as negotiation and the legal system to better communicate across disparate technical communities?
Sure, these guys know their availability strategies, but they also know what is going on with Web 3.0 and Linked Data, Economic and Risk Models of the Cloud Infrastructure, and what changes in compliance legislation might mean for reporting strategies and the systems behind them.
Who better to have guiding our largest and best institutions? Thank you zITAs.
systemzblogger 2700017BYR 2,427 Views
z Growing: In the recent z/Journal I noted some impressive numbers for growth as shared by Bob Thomas on the publisher's page. In just the fourth quarter of last year, MIPS grew by an amazing 58% (the highest growth in a decade) and sales grew by 70% in the same period. zEnterprise systems pushed over 450 shipments representing 1.5 million MIPS. Whoa... no editorializing needed there, right?
GUI, GUI everywhere... I was talking with some folks in a session about the new interfaces for development (RDz and the IDE vs. ISPF and lovely green screens) and the enhancements to sysprog -- sorry, systems programmer -- tools with z/OS Management Facility, and it made me flash back to the late 70s, when we worked with a customer to demo a function where you could actually start to put customized screens in color on ISPF. It was a pretty exciting thing -- that unfortunately went nowhere between the novelty, the customization, and the fact that, despite multiple run-throughs ahead of time, the demo ran into (ahem) unforeseen technical difficulties. BUT the point was just the idea of color, something we all agreed would be nice someday. Heck, even the 3290 gas panel display with its pretty orange characters was enough variety to get many speculating. (I mean, come on, color TV was only broadcast widely around 1967, right?)
..and Smarter everywhere... This all made me think about how we are simultaneously getting further away from our systems while interacting with them more intensely all the time. With phones and iPads with touch screens, the distance and the links in the chain are longer than ever, and yet the response and interaction are greater too. In a matter of minutes I have watched a demo where a CICS transaction gets enabled with EGL and modern tooling from Rational to expose a banking application on an iPhone. Between CICS events, WebSphere process engines, and appliances like DataPower or ILOG rules engines, the options for dynamic interactions in flight are now astounding. Change a business rule, a process, a partner, or a supplier, and it is not 2 years of waterfall development and testing; it can be moments. It is not just the intelligent, instrumented devices of a smarter planet that are interconnected; there has been a lot happening back at the old IT shop too! Don't forget that those web transactions have at least as many CICS transactions to match them every day (around 30 billion, the last time someone looked at it a few years ago). And yes, most of the data is still back home in the datacenter for enterprises -- and moving there more daily with private clouds too. So go and play, do real work, whatever... don't worry, we (z and enterprise systems) have it covered behind the scenes.
systemzblogger 2700017BYR 1,947 Views
As IBM moves to consolidate security solutions, offers Social Business courses for free, or -- of course -- has the Watson machine play on Jeopardy, you see the span of influence grow and expand beyond the traditional boundaries of IT. So it becomes more important to think about how we talk with the folks in these different places. For instance, I was recently at an IBM-sponsored session where the now long-established Eclipse client was the base for a number of Enterprise-level development tools from Rational, and someone not long out of school was watching next to me. Suddenly, I heard: 'Oh look, it looks just like Visual Basic!'
Exactly. We in the large systems world -- and for that matter the distributed world -- in IBM have been trying to get the message out for years about the evolution of tooling and the use of GUI interfaces, perspectives, and their attractiveness and productivity for the next generation of IT coming along. We talk about skills, the changing of the guard, and the Academic Initiative, and yet I don't think I heard, in any of the materials, anyone actually come out and say the perfect thing to link to the next generation's experience with programming!
In Management Speak:
Another example has to do with the aging workforce OR the aging technology challenge that goes on cycle after cycle. Sometimes it seems that this is a new crisis business has not had to deal with -- the whole issue of either old technology or the baby boomers being half of the large systems crowd and depending on what study you look at, the majority of them at least eligible for 'transition from the workforce'. And yet... while reading the current issue of Strategic Finance (the publication of the Institute of Management Accountants) there was a nice article about business as usual for planning the transition of executive leadership. They call it: Succession Planning.
Now, why the heck don't we use that term? That is all Y2K was, or the first Virtual System, Telecommunication Network, SOA Solution... just business as usual and Succession Planning. For those of us who are in IT, or contribute to the changes around IT, we should remember this: the folks making the decisions are used to and familiar with a whole process around the discipline of succession planning. It fits into existing governance and strategy processes. It is immediately understandable and rings the bell in the cranium. Who is our next leader, the next big thing, the zNext (sorry, not the official moniker -- though many of us have heard it!) technology? Use the right phrase or lingo and it is: '...No problem, we know how to deal with that...'
Why didn't you say so...?
So, let's try to see if we can get the eyes to light up, the bell to ring, the light bulb shining. Try to find the right phrase to link to the experience and world of your non-IT partners.
After all, it is called Enterprise, that means we need to connect everywhere and with everyone.... Use the right connection phrase and it can be, as the Cowardly Lion said in the Wizard of Oz: "...why didn't you say so?!..."
systemzblogger 2700017BYR 2,923 Views
This headline came across my virtual desk last week -- I don't even remember how I found it -- and I rushed immediately to two places: first, the direct link from the Register, and second, IBM news and System z news. This is pretty big stuff, and I will share a couple of key extracts and let the article do the talking:
" The second big change that's coming with the zBX, according to Doris Conti, director of System z marketing at IBM, is that Microsoft's Windows operating system will be supported on the Xeon blade servers inside the zBX complex. IBM has hosted over 300 workshops with mainframe customers discussing the new hybrid system, and customers were not exactly happy that IBM was restricting Linux to Xeon blades and not supporting Windows.
"We heard the feedback and we very much intend to deliver Windows support on zBX," says Conti."
...and a little bit later.....
"You may be wondering why Windows and Linux support on the Xeon blades in the zBX didn't ship back in November with the Power-AIX blades. Jeffrey Frey, the IBM Fellow and System z architect who designed the zEnterprise 196-zBX hybrid, says that the Xeon blades are coming later because IBM's Power-AIX customers were the ones Big Blue felt would take to the hybrid computing model first. (IBM is also fixated on preserving its market share in the Unix racket against resurgent Oracle and HP.) The plan now is to get Linux support on Xeon blades out the door this year, and then add Windows support as soon as possible.
Frey said that IBM was not sure how deeply it would have to get into the operating system or hypervisor code to manage AIX, Linux, or Windows when it started the zBX. So AIX, which IBM has the source code for, and Linux, which is open source, were the easiest places to start. IBM didn't want to get involved with Windows until it knew what it might need from Microsoft in the way of cooperation. "As it turns out, there is very little of that," Frey explained to El Reg, referring to the need to get into Windows code to make the OS work on the hybrid system.
"Frey also let the cat out of the bag on what hypervisor IBM is using on the blades. IBM's own Processor Resource/Systems Manager (PR/SM) type 1 hypervisor and its related z/VM operating system (which can function as a type 2 hypervisor) are used to dice and slice the zEnterprise 196. The company's own PowerVM hypervisor is used on the Power7 blades to carve them up into logical slices and to virtualize I/O on the blades. IBM has chosen a variant of Red Hat's Enterprise Virtualization (RHEV), the commercial-grade implementation of the KVM hypervisor for x64 iron, for the Xeon blades; this tweaked version is known as RHEV-Blue, predictably, and is made to cooperate with IBM's mainframe firmware. PowerVM will support AIX 5.3, 6.1, and 7.1, and Conti says that if mainframe shops want to run the IBM i 7.1 operating system (formerly known as OS/400) on Power blades, IBM will consider it. As for Linux on Power, Frey says there will be a need for it, but that Windows on Xeon blades is more important to get to market given the installed base of machines at mainframe shops."
Responding to client feedback -- there are no excuses now; just let the z196 and zBX flow across the datacenter like an amoeba... Read the article and think about it.
systemzblogger 2700017BYR 1,945 Views
Well, a quick personal note to explain the absence of postings. On Oct 11, I started work and had chest and jaw pain that, long story short, led to a heart bypass and my absence from this blog and work for too much of the fall. It was quite a surprise and a hole to fall into, so welcome back, me!
Some of my main observations from the sidelines include laughing at the new phone with tiles that let you know there is activity on mail or social networking sites (see the basic Lotus Notes concept -- that is how old? 10-12 years?), the zEnterprise system becoming available, and continued tech announcements, from CMOS Integrated Silicon Nanophotonics (as in 10x density, with electrical and optical devices on the same piece of silicon) to a range of announcements on security, smart solutions, and the cloud.
I noticed that the magazine Government Technology had a good article on modernization, as did z/Journal, and Mainframe Executive had a cover story on the creation of the new zEnterprise systems. In the legacy systems article in Government Technology, there is a good reference to the NASCIO survey in '08 that has the top 2 drivers as changes to business processes and inability to support LOB requirements. Nice reminder about where to start in an enterprise before any tech change effort, huh?
OK, back with more, gotta ease into things again ya know...
I was listening to a popular weekly technology podcast (hint hint) when they mentioned how Intel has made a chip for a Gateway PC which, when you need to, you can upgrade in place with a $50 upgrade card. According to PC Magazine (here), the card is used to unlock hyper-threading and additional cache. While the first reaction may be -- and was on this podcast -- 'wait a minute, why aren't they giving me all the capability?', one of the participants quickly added: '...wait a minute, this is a really good idea... think of the cost savings, extending the life of the processor and...'. OK, obviously I paraphrase, but you get the point. Capacity on Demand has come to the desktop.
While reading many of the news items related to this clearly shows the initial confusion reflected in the podcast I listened to, others are starting to realize the implications. How long until we have extra processors available to turn on? When do we finally get RAID devices for disk that ensure we don't ever lose things on these sometimes fragile home systems? Will we ever get to the point where our desktops get dynamic snapshots to enable variable leasing schemes? ...or the ability to turn on more power for temporary periods of time?
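The mechanics behind that upgrade card are the same Capacity on Demand idea mainframes have used for years: ship latent capacity in the box and activate it in place with a key. Here is a toy sketch of the concept -- the class, method names, and numbers are purely my own illustration, not any vendor's actual API:

```python
# Toy model of Capacity on Demand: hardware ships with latent
# capacity that an unlock (like the $50 card) activates in place,
# with no new hardware installed.
class Processor:
    def __init__(self, shipped_cores, latent_cores):
        self.active_cores = shipped_cores   # what the customer paid for up front
        self.latent_cores = latent_cores    # installed at the factory but switched off

    def apply_upgrade(self, requested_cores):
        """Activate latent capacity, capped at what is physically present."""
        unlocked = min(requested_cores, self.latent_cores)
        self.latent_cores -= unlocked
        self.active_cores += unlocked
        return self.active_cores

cpu = Processor(shipped_cores=2, latent_cores=2)
cpu.apply_upgrade(2)     # turn on the rest when the workload needs it
print(cpu.active_cores)  # 4
```

The design point is the one the podcast participant spotted: the marginal cost of the extra capacity is near zero once it has shipped, so selling the unlock later extends the life of the machine instead of forcing a replacement.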
My peers and I joke about how we have been cheating for decades by knowing about large systems and watching those capabilities stream down to distributed platforms. Now, maybe, we're able to talk about and share some tech insights with our kids or grand-kids!
"All right Grandpa, turn up the computer and turn down the thermostat, the grand-kids are coming!"
I am heading out for some travel, so I wanted to get my mid-month update done now. One of the interesting things about technology is the lag before announcements start to filter out into the world. It has been about 6 weeks since zEnterprise hit the stage, and I notice through my RSS feed aggregator that suddenly, this last week or so, there are lots of 'news' posts that, golly gee, IBM has the fastest processor in the z196 at 5.2 GHz -- while one typo'd it at 5.6. While that is great, they usually stop there, since the PC culture, I guess, seems to think clock speed is all that matters when it comes to systems performance.
They don't tend to continue and say:
(Jennifer Dennis discusses the implications and some of the new offerings related to Application Management, Application Resilience, Security, and Asset and Financial Management. )
Another question -- or class of questions -- is about fit and where to use it: could I use it for... what? Naturally, this leads to a discussion, and as the realization hits I hear comments like:
systemzblogger 2700017BYR 2,641 Views
Back when the largest processor was a MIP (a million instructions per second, versus the 50K MIPS of just the z196 Central Processing Complex...) and a megabyte of memory on the 'mainframe' was a million dollars or so (versus about a thousandfold less now, with memory advancements and the latest announcements), we saw not only the creation of distributed systems and strategies to optimize the effective use of those key resources, but also looked hard at individual users' cycle use. Of particular focus were systems programmers, developers, and 2D or 3D drafting. We know that CAD stations evolved, and each TSO user (sysprog or AD) was looked at very closely, since -- and I remember doing these studies -- each user could easily be linked to, say, a percent or more of CPU usage as recently as the '80s. (Ouch!)
Fast-forward to 2010, and the latest evolutionary strategy includes a strong focus on both cycle savings and productivity savings for users. For systems programmers, we have the z/OS Management Facility (which we have mentioned a few times), which is accessible via a browser. For developers on System z, there is Rational Developer for System z (with Java or EGL). Besides being an Eclipse-based IDE, the workstation-based tool does local syntax checking and -- with the new Unit Test feature -- offloads still more cycles. With the focus on effective use of system resources, and the emphasis on maintenance and operational support costs (since they still represent a large portion of IT budgets), these approaches should not be overlooked: they address the concerns of both animate and inanimate resources (people and machines), and do so in an integrated way.
It is also nice to note that these are just part of a strategy to have tooling that is similar across multiple roles and platforms, which addresses the challenges of models that involve extended development infrastructures for globally integrated enterprises with development centers located... well... anywhere. (See near shore, off shore, on shore, and models that are complicated to manage, for shore!) Groan....
..and these are just the first-level tools, since Rational also has strategies for communication across teams (Rational Team Concert for z), for modernizing the UI and accessing legacy applications through Host Access Transformation Services, and for advancing SOA services strategies that optimize component reuse by looking at what is out there today via Rational Asset Analyzer.
OK... I won't go off on portfolio reviews here, but I will also mention that you should not overlook one of the most obvious areas for effective use of resources: make sure you are current on software compilers. Big Blue has quietly been enhancing them on ALL platforms, but certainly on System z, and the impact over the last 5 years or so can add up to total improvements that may double performance, or more, across a range of language/subsystem/platform combinations.
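For a back-of-the-envelope feel for how quiet release-by-release enhancements add up to a doubling, assume (purely for illustration -- real per-release gains vary widely by language, subsystem, and platform) roughly 15 percent improvement per compiler release over five releases:

```python
# Compounding illustration: modest per-release compiler gains
# multiply, so ~15% per release over 5 releases roughly doubles
# the performance of recompiled code.
rate = 0.15       # assumed per-release improvement (illustrative only)
releases = 5
speedup = (1 + rate) ** releases
print(f"{speedup:.2f}x")  # 2.01x
```

That is the whole argument for staying current: the gains are free at recompile time, but only if you actually move off the old compiler level.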
So, don't get caught short with old tooling. Refresh, renew, and refurbish. Change things out and up. Your mom may have told you to change out your old underthings since you never know who may see them; I say change out your older system things before an audit shows you have been missing out on big savings and you have egg on your face in front of your management team. (How is that for mixed metaphors?) Oh, and do take time this Labor Day to take a break and set down that phone and e-mail!
In case you have not seen the Fit for Purpose materials from your friendly IBM site or the local team calling on your enterprise, there are a number of tools available to help you determine 'best fit' for servers and workload. Based on insight from some external studies, they focus on workloads related to business intelligence and analytics, Web and SOA, traditional transactions, and suites like ERP, CRM, SCM, etc. Best fit has been around a long time, with concepts like balanced systems (a nod to Ray Wicks et al.), 'loved ones' (and a tip of the hat to Siebo Friesenborg), and constant fiddling with levels and amounts of cache to keep data flowing to the big engines crunching away -- even when those engines were what we would now consider little guys.
Yep, some workloads need more or fewer qualities of service (QoS), or non-functional requirements like availability, security, performance, and so on. And some are more compute- or I/O-intensive, have shorter or longer transactions, or are spread across infrastructures or limited to running on isolated or even specialized processors. This all makes sense to the technical mind, but it is a good idea to remember the power of inertia, of decisions already made, and of what one buddy called Fit for Politics. It is tough to make changes when decisions are often not re-examined or justified -- they get cast in stone, or aligned with factions, and can even be perceived to be linked to career paths. Don't forget that in your consolidation or movement plans for the z196 and zEnterprise we have talked about! And don't forget that part of the power of being able to QUICKLY move workloads across platforms in the new complex is that you can quickly try things out and, over time, learn to trust the idea of decisions not cast in stone -- maybe not doing lots of analysis ahead of time and just freaking trying it! (Hey, there's an idea....)
On another 'how things have changed' note: I cleaned my office recently and tossed both round and square backup discs galore. Between more reliable, cheaper drives and offsite backup schemes, I realized it had been a while since I did that hours-long data backup and labeling fun time. The other thing I found was a folder of magnetic shapes representing e-business server types. Ten years or so ago, when the idea of creating these e-business infrastructures with customers was new, I'd find a magnetic whiteboard (easier than you think for big companies) and slap these blocks on the wall with new names like web servers, application servers, portals, and gateways for voice or files or B2B... and we would plan out the new world of opening up the enterprise to partners, suppliers, and customers. Those concepts and server types are commonplace now, so I felt pretty secure in setting them free too.
...but maybe I should think about making a new set of magnet blocks with lines of business or service types, with event categories and collaboration options as discussions for the next wave of Smart systems start getting built?
systemzblogger 2700017BYR 3,659 Views
Ta-da! The zEnterprise is out! I admit I hinted a while back, and yes, a little bit last time, about continued evolution, and here... it... is: the System of Systems, the third dimension of not just making processors with faster engines or specialized functions, not just growing the number of processors, but now pulling other platform systems into the z complex.
After going to the teach-the-teachers session, I took my notes and summarized them for some internal calls with teams, so I have been summarizing and netting out, boiling the ocean, and relooking at materials. I won't try to give the announcement here, but just touch on some highlights.
First, realize the vertical integration this reflects: the added dimension of creating the hypervisor of hypervisors (the Unified Resource Manager) on top of the other platforms, so that the zEnterprise system now wraps its arms around them -- and the amount of integration represented here. (I understand our friends at Gartner used the term 'brilliant' in describing the Unified Resource Manager!)
Next, look at the concept of being able to manage, as in Service Level Agreements, not just workload but security, availability, and virtualization targets. (...and the implied amount of monitoring and reporting behind the scenes...)
Then, look at the absolutely killer numbers, BOTH in the base complex of the z196 (the kinds of technology improvements we are used to seeing in new z generations) and in the incredible impact on space, energy, and operating costs compared to the infrastructure of yesterday, before the zEnterprise complex.
Note that in the zEnterprise BladeCenter Extension (zBX), the POWER7 and Smart Analytics Optimizer blades come first, followed next year by DataPower and System x blades. Take a hard look at what improvements to certain star queries, which may be 80-fold (or more), might do to the way you build your analytics and process systems. And... if you have looked at the z/OS Management Facility, look at the new CICS Deployment Assistant, which may reduce administrative time by up to 80%!
OK, I am giddy… I promised not to repeat the announcement here. It is just so packed with improvements that it will take us all time to fully absorb it. So, dive in; we’ll talk later.
Looking through the current issue of Mainframe Executive (you are subscribed, right?), I saw a nice interview with some of the Academic Initiative students. I also noted that there will be university program representatives at SHARE in Boston this August to talk with mainframe shop managers (see z Events). The theme continues in IBM Systems Magazine for the Mainframe with the article 'Educated for Success.'
These items made me nostalgic, thinking of Dr. Seuss and "all the wonderful things they'll see!" We old fogies saw virtual storage, MVT, SVS, MVS, and on up to z/OS, and they may see operating systems so many levels of complexity and abstraction above what we have watched that it boggles the mind. We abstracted platforms with middleware running anywhere, then raised the bar by abstracting run-times, with the evolution from early CSP and VA Gen to the current Enterprise Generation Language: EGL.
We watched virtualization from basic storage to VM, server consolidation, and federation, and they start by taking steps on the cloud!
On the note of systems evolving: it seems I am hearing about more enterprises looking hard at long-term systems that were built over decades to perform incredibly efficiently but, alas, in many cases (since they are rigid and tightly coupled), when it is time to introduce the change monster, the prospect of 'different' overwhelms them. Projected costs, time, and risks start to look pretty scary. Hey, just remember that the remodeling industry is bigger than new housing construction; build that value case regardless of how large the 'maintenance' burden is to your application or infrastructure base.
And don't forget, there are many more options now: componentization, messaging, event-based architectures, SOA and web services, not to mention modernization and transformation strategies and tooling that weren't there just a few years ago. Remember VSE-to-MVS migrations? How about Y2K? The longer you go without changing, the bigger the bump, whatever the system.
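That wrap-rather-than-rewrite idea can be illustrated with a minimal sketch: an existing, unchanged piece of business logic exposed behind a thin JSON layer. Everything here (the function names, the account data) is invented for illustration; a real modernization project would route the call to the untouched backend through a connector or message queue rather than reimplement it.

```python
import json

# Hypothetical stand-in for a well-tuned legacy transaction. In a real
# effort, this call would reach the unchanged backend system.
def legacy_account_lookup(account_id: str) -> dict:
    accounts = {"1001": {"owner": "A. Smith", "balance": 250.00}}
    return accounts.get(account_id, {"error": "account not found"})

# Thin service wrapper: the "new" interface is just translation, so the
# decades-old logic keeps running while callers see a modern JSON API.
def handle_request(path: str) -> str:
    account_id = path.rsplit("/", 1)[-1]
    return json.dumps(legacy_account_lookup(account_id))

print(handle_request("/accounts/1001"))
```

The point of the sketch is that the differentiating logic is never touched; only the interface changes, which is what keeps the cost and risk so much lower than a rewrite.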
Just remember: start small (skunk works and prototypes), draw some good pictures (architectural models), bring extra sandwiches (resources)... and maintain your perspective (humor).
Or maybe, wait for some magic that we have yet to see!!
Let's look at some more developing trends and patterns... IBM announces the Financial Markets Framework at the annual Financial Services Technology Expo in New York. It not only refers to microsecond latencies and millions of transactions, but also builds on existing industry frameworks, adding feeds from disparate sources (think InfoSphere Streams and stream computing from last year...) and adding analytics and process extensions to address risk, regulatory, and compliance areas. Do you remember the 20-plus-times-volume system that was announced about a year ago? Do you think middleware is still evolving? (Do we need to mention what systems the financial industry runs?)
I stumbled on another source I should perhaps have known about called the Dancing Dinosaur blog, pretty interesting reading.
I found it after seeing one of the sessions at Innovate that had a demo connecting System z to smartphones, seeing the Redbook, and getting curious whether there were any mentions of it on the net. This ties in with a number of announcements supporting phones, like the recent Android support in collaboration software. Integration of the user with the mainframe continues in all kinds of ways.
Well, I know this is a short one; I am off next week and then traveling. Oh, and take a look at the COBOL Cafe hub for the new Rational Developer for System z Unit Test feature. It certainly adds some interesting possibilities for managing the test workload on z!
Just got back from Innovate 2010, and I encourage you to view the keynotes; they will fire you up on innovation, systems of systems, and the future, fer sure!
Take a look at the current IBM Systems Magazine to learn, among other things, how tape is far from dead: 29.5 billion bits per square inch demonstrated (44x today's capacity)!
The mainframe mag, zJournal, is now at Mainframe Zone, and the current issue has an article on mashups and Web 2.0 with the mainframe. (Have you looked at CICS support for PHP?)
...and IBM has demonstrated a graphene-transistor-based chip at 100 GHz (as in a single layer of carbon atoms exceeding the cutoff frequencies of silicon transistors with the same gate length), is getting involved in more automotive systems with Daimler, and there is a nice retrospective on Disney and IBM (with videos!) here.
So, I ran across a couple of interesting articles recently that, as an architect working with large systems and large enterprises, made me stop and think.
First, in the Financial Times from June 8, a nice bit of thought on outsourcing, governance, and a reminder of how key tech is and how important it is to manage technology correctly!
Secondly, in the current issue of Strategic Finance (oops, sorry, the specific article seems to be for IMA members only...), a good discussion of how to handle idle capacity costs (pg 55). Without going into the details, the point is that execs at your enterprise are looking at these things.
And... what system helps manage resources to minimize excess capacity and maximize utilization?
... and oh, by the way, have you looked at moving workloads to, or consolidating non-System z systems onto, z lately?
Or would you rather wait for your manager to ask why not?
Last time we talked about acquisitions, and now IBM sets up to acquire Sterling Commerce, extending the idea of business networks for B2B interactions (a term you don't seem to hear as much as you did 10 years ago). Does this affect System z? Gee, let me think: what runs most large enterprises? Who is concerned with process flows, transactions, CRM, SCM, PLM, ERP, etc., and has more than one partner? With enterprise transformations squeezing costs and increasing speeds, it just might be an interesting one to watch.
In the 'what, IBM is involved in that?' department: Texas A&M works with IBM to speed up drug discovery for tuberculosis, Big Blue is working with Guangdong Hospital on analytics for treatment efficiency, Linux hits 10 years on System z, and System z Expo is only a few months out (October 4) and is now called System z Technical University.
(With a slogan of 'z can do it!', I can't help but think of Adam Sandler movies with Rob Schneider saying 'You can do it!')
Has it really been 10 years for Linux on z? (OK, a couple of other nice links for Linux and System z: datasheet, white paper)
Linux has had a pretty large effect, helping consolidation efforts and moving the open movement along...
Another, smaller milestone happened in the last couple of days, as I saw an item in the Financial Times on Google moving internally to only Linux or Mac systems.
Who would have thought all those years ago when IBM added Unix Systems Services, it would lead to where we are today?
Sidestep to a z/OS UNIX System Services implementation Redbook, which notes:
"In 1991, the US Federal Information Processing Standards (FIPS) Document 151 stated that MVS must incorporate support for popular UNIX interfaces. So began the challenge of including UNIX functionality into the MVS operating system. The first implementation was known as OpenEdition (or OE, or OMVS), then it became OS/390 UNIX System Services, and then finally z/OS UNIX System Services, as we know it today." (OpenEdition first came out in MVS/ESA V4R3 in 1994)
Happy Birthday Linux!!
Well, I had some folks wondering when the next System z is coming along, since it was February '08 that the z10 arrived, and we had those nice POWER7 announcements this fall. Naturally, I can't say even if I knew, but I did see a nice counterpoint in the System z skills discussion on gomainframe.com, with a document from Clabby Analytics called The Alleged Mainframe Skills Shortage. It is a counterpoint to a recent consultant release on mainframe skills. I will leave you to read it, but note that there are some nice tidbits in it, including IBM's $100M investment in easier management interfaces for the mainframe, some detailed demographics (versus anecdotal information) on mainframe support, and a few words like 'silly' and 'daft' to suggest there is no more a crisis here than for IT skills in general. Of course, add the thousands of new graduates and hundreds of schools flowing from the Academic Initiative, and one wonders what issues are really behind this urban myth.
With the recent bid by SAP to acquire Sybase, I noted in the Financial Times and other sources how IBM is setting aside $20B for acquisitions, at a pace that would double the last decade's total over the next five years. Wow! I maintain an internal site that helps keep track of acquisitions at a high level; guess I will have to ramp it up. And yes, the wheels are spinning for candidates in my brain....
On an unrelated note: we know the next major phase after services may be cloud, but looking back at a push that is still very much alive, I have been noticing how Enterprise Modernization is alive and well, incorporates aspects of e-business and SOA, and really underscores some potential culture clashes between the free-wheeling internet boys and the super-systems-management System z folks. If you happen to be modernizing a part of the company and IT that has not been touched by that upstart internet, just be extra sure you look at ALL the security implications, and review infrastructure, development, and governance! (Hey, there are acquisitions that relate to these too... like ISS, AppScan (Watchfire), Ounce Labs, and of course Rational itself, which now has the Rational Developer for System z we have talked about.)
So, what is going on?
Green Chip Stocks had its view of tightening environmental standards reflected in the IBM suppliers announcement on global sustainability, while IBM acquires Cast Iron Systems for cloud initiatives and the delivery of SaaS, and also now has a follow-on to the Innov8 game with the new CityOne game.
Wow... you really don't have to work that hard to see the pattern of Big Blue just moving along on its long-stated intention of moving the bar forward, making a difference for its customers and the world while being a strong business entity. I mean, hey: let's be good citizens, let's help our suppliers do the things we have been doing on green initiatives, and let's continue to bolster the cloud offerings we are developing! (Yes, I am going to point out the obvious: what systems predominate in the cloud of the future, or on those huge outsourced system floors IBM runs? Hint: Enterprise........Systems!)
There is a new Poughkeepsie green manufacturing system, there are new business process components announced at Impact today in Las Vegas, and IBM has signed an agreement with Drawbase Software to support smart building initiatives. (See previous entries on the amount of energy used by buildings and the growth of cities....)
OK, so there are patterns other than development or design ones. Here is one: I just got back from a meeting (yes, I am an internal IBM architect) where the Enterprise Systems architects from North America got together for some updates, and to see one another face to face, of course.
Besides learning about some of the cool stuff coming down the pipe (oh, how I wish I could share!!), there was lots of comparing notes on what clients are doing, talking with those who have decades of experience with banks, government, health, and manufacturing institutions, and plenty of 'what would you do' chats on behalf of clients.
And the pattern? The big boys are as dedicated to Enterprise Systems as ever: to growing and modernizing them, and to adding workload (as we have talked about with Chordiant, ACI, WAS, development, warehousing, and others; see Solution Editions).
And, while thinking of the conversations I had with other IBMers, looking at the news, and considering the stuff that is coming... well, it all reminded me of the IBM Values:
Well, I almost missed IBM being named the top Security Company by SC Magazine. Naturally, it got me thinking of how the company is pushing ahead on many fronts so the technology is there when clients need it.
Besides the announcements on making plastics from plants, Tivoli has been looking at security needs for SOA and clouds across different environments, and believe me, it is a whole lot more complicated than your typical e-business application from a few years ago!
Across platforms, the internet, and different infrastructures you run into SAML, WS-Trust, Kerberos tickets, certificates, RACF PassTickets, LTPA tokens, keys, and oh yeah, passwords and IDs. Yikes, it gets complicated quickly... I used to think in terms of Tivoli Identity and Access Management (TIM and TAM), but we now need to think about Tivoli Federated Identity Manager (TFIM), and a relatively new set of functionality called Tivoli Security Policy Manager (TSPM) to address the life cycle of policies, credential translation, and identity propagation across an SOA environment. Now, I am certainly not going to go into detail here; this is more of a heads-up on where to look if you are getting into the next phases of this stuff. (And look at the Redbook too!)
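The credential-translation idea behind those products can be shown at toy scale: an intermediary consults a policy to decide which token type the target system expects and swaps accordingly. This is purely a conceptual sketch; the token names and policy table are invented for illustration, and real federated identity products handle the actual protocol mechanics.

```python
# Purely illustrative sketch of credential translation: map the token
# type a caller presents to the type the downstream system expects.
# The policy entries below are invented, not real product configuration.
TRANSLATION_POLICY = {
    ("saml", "zos"): "racf_passticket",
    ("kerberos", "zos"): "racf_passticket",
    ("ltpa", "websphere"): "ltpa",       # already native; pass through
}

def translate(inbound_token_type: str, target_system: str) -> str:
    key = (inbound_token_type, target_system)
    if key not in TRANSLATION_POLICY:
        raise ValueError(f"no translation policy for {key}")
    return TRANSLATION_POLICY[key]

print(translate("saml", "zos"))  # a SAML assertion becomes a PassTicket
```

Even this toy shows why policy life-cycle management matters: the table grows with every platform pair, which is exactly the sprawl a policy manager is meant to tame.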
Oh, and take a look at the IBM Security Framework. With the zSecure suite, the ISS contributions with the Proventia suite, the acquisition of Guardium for DB security, and AppScan products for applications (as just a few examples), security is being layered and added by Tivoli security and others ALL through the IT infrastructure.
Hey, guess what? If you take things out of the black box and distribute them all over the place you need more than RACF!
Did you see the entry on IBM being the first to eliminate the PFOS and PFOA compounds from its chip manufacturing processes? The range of announcements from Big Blue seems to be growing, with a broad range of technology, collaborative centers for IT, and even awards from the American Bar Association, not for patent reform this time but for legal assistance to non-profits. I was wondering if IBM is getting better at publicizing things in addition to doing new and different things. It made me think of today's post theme: "No one knows unless you tell them."
This started after I visited a client who went on and on about the wonders of virtualization and how being in a rack brought things close together. All true, but how about so close you avoid any network at all (like HiperSockets, which has been around for years), or running at memory speeds measured in microseconds (yes, that is one millionth of a second) instead of milliseconds, or even plain seconds, of delay across a distributed infrastructure with systems spread across server farms?
System z's heritage of getting data close to where it is processed, handling huge amounts of data, and sustaining transaction rates of tens of thousands a second has lately shown up in guidelines that say: get your data and applications back together if one is on and one is off of System z! Our job as large systems folks includes telling others about this often silent world that sits behind the enterprises that run the world, too, I guess.
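A quick back-of-envelope, with round-number latency figures that are illustrative assumptions rather than measurements, shows why proximity matters at those transaction rates:

```python
# Illustrative back-of-envelope: the cost of one extra network hop per
# transaction when data and application live on different platforms.
# All latency figures below are round-number assumptions, not benchmarks.
tx_per_second = 10_000          # "tens of thousands a second"
memory_hop_us = 10              # in-memory / HiperSockets-class access
network_hop_us = 1_000          # roughly a 1 ms LAN round trip

extra_us_per_tx = network_hop_us - memory_hop_us
extra_seconds_per_hour = extra_us_per_tx * tx_per_second * 3600 / 1_000_000
print(f"Extra latency accumulated per hour: {extra_seconds_per_hour:.0f} seconds")
```

Even a single millisecond hop, multiplied across tens of thousands of transactions a second, adds up to hours of accumulated wait time per day, which is the arithmetic behind the 'get your data and applications back together' guideline.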
No one knows unless you tell them that COBOL (the COmmon Business-Oriented Language) picked up object-oriented features years ago, that high-availability configurations like GDPS can run synchronously or asynchronously across huge distances in anticipation of disaster, or that innovation keeps moving on many fronts at IBM that are not always very visible. (Like the Innovate 2010 Jam going on right now inside IBM, or a presentation I saw a while ago showing that System z clients grow twice as fast as non-System z ones!)
I had another case recently that falls under the mantle of 'you need to tell or remind folks about things', this one related to change and technology. For those of us who have been around a bit, it is like part of our DNA to understand how IT, applied with custom implementations, can provide true competitive differentiation for clients. In a conversation with some folks, we were reviewing the syndrome of how some enterprises either fall so far behind the technology curve, or get so frustrated with IT's ability to deliver, that they abandon huge investments in 'legacy' resources: their application base, built over decades.
Now don't get me wrong: sometimes getting new suites of function makes a lot of sense, but sometimes leveraging or modernizing existing assets and processes can be done with strategies that not only take a lot less resource, but are more timely and effective. Think everything from web services to SOA to screen-scraping strategies as part of modernization approaches that extend differentiation and advantage, rather than throwing everything out and starting over.
Alert System z clients have legacy capabilities that have been around for decades but are continually updated and refreshed with these approaches. One example I crossed paths with recently addressed some of the older technology bases like Natural, Adabas, RPG, and the Cool:Gen 4GL. Rational Migration Extensions not only has the capabilities, but also partnerships with specialists who have been in this space for years.
Before you throw anything important out with the bathwater... see what can be renovated, updated, or modernized. You might find that with a little elbow grease you have a classic that works great in the 21st century and is surprisingly valuable!
Consider this an 'extra'. If you have been watching the news lately, here is another great example of IBM's depth in large systems. This all started when I did, way back in 1978, and there have been articles galore from the '90s up to now....
OK... Daylight Saving Time kicked in yesterday, even though the vernal equinox is not until next weekend, and green things are shooting up out of the ground (think hundreds of daffodils in the case of my lawn). Light is the theme in my brain as I think of the chips that communicate via 'light avalanches', or photon paths rather than copper ones. Are you kidding me? Wow.... Oh, and IBM Research, with Stanford, announces environmentally sustainable plastics (or a start on them). Things seem to be accelerating. Don't forget the game-changing p-series announcements with dramatic increases in parallelism, and now the x86 eX5 servers, which decouple memory from the processors to potentially 'reduce the number of servers needed by half while cutting storage costs 97% and licensing fees by 50%'. (!!)
OK... when I see something with a SAM in it, I think access method: VSAM, ISAM, SAM, even QSAM for the older fuddy-duddies in the crowd. But TSAM? Turns out it is Tivoli Service Automation Manager, which automates requesting, deploying, monitoring, and managing cloud computing services. First it was out for Linux on System z, and now we have it for cloud resources. Things expand in different directions. We have IaaS (Infrastructure as a Service), PaaS (Platform as a Service), and of course SaaS (Software as a Service). This is a big jump... what will TSAM handle next?
I am getting anxious, or at least eager, for System z to make its next revolution now that p and x are out. Will the platform that first started parallelism have some things in store? Will the platform that really got shared storage going have some new memory schemes? What about the platform that started and still leads virtualization... will it extend its arms further, and in what direction? Virtualization software at the x86 level is great, but it is still learning to walk compared to the mechanisms on System z that underlie consolidation projects and workload-management techniques that maintain service levels while pegging utilization rates.
It is spring, and 'a young man's fancy lightly turns to thoughts of love', as Tennyson said; but for older farts like myself, thoughts also (hey, that is also, not instead!) turn to how fast technology is turning.
What can’t run on the Mainframe?
OK, we know about System z’s heritage of running the big boys, and we have talked about how thousands of applications have been certified over the last few years for Linux on System z.
…. But what about all of that .NET workload out there?
Here is a hint: if you do start consolidation onto z/VM or Linux on z, don’t overlook a key component of the project: monitoring. Take a hard look at Velocity Software, a partner started by ex-IBMers that has worked with IBM since 1988, works with the System z labs, and has been involved in VM Early Support Programs since then too.
Note: Another important virtualization-management consideration for dealing with multiple platforms is reflected in the creation of the IBM Systems Software organization. Covering components from Systems Director, VMControl, and Active Energy Manager (to name but a few), the portfolio addresses management, security, energy, availability, virtualization and, of course, operating systems.
For fun: take a look at the YouTube video 'How IBM Software Works', a nice summary of where IBM software is going.
If you saw this week's POWER7 announcements, you may have had a similar reaction to mine. First, of course, was: whoa... increasing threads nearly tenfold? Or increasing the chip's power and structure while the middleware is already prepared to take advantage?? When you look at the details, or if your friendly local IBMer gives you a presentation, look at the performance and cost charts and see the dramatic bend in the curves for POWER7. This is a game changer... really. It will force us all to take a hard look at design point and fit, and sweeping changes could come to certain datacenters this year. Secondly, if you have been around awhile and remember the design charts for large systems processors over the years, and the way System z has dealt with mixed workloads, you may have had a sense of deja vu. I have often said, since starting in large systems in the 70s, that I sometimes feel like I have been cheating as I watch large systems concepts move over to other systems. Well, this is a big one, and I feel as if I have lost some of my crib sheets!
On an unrelated note, I was looking at BPM BlueWorks, the new LotusLive facility that lets customers freely model and share their strategies, capabilities, and processes, as well as leverage sample models and maps from their industries. As I understand it, you can then flow these models into tools like Business Modeler for further elaboration and the development flow. Powerful stuff indeed. I always thought that SCP stood for System Control Program, but I guess it is now strategy, capability, and process....
Finally, start studying (as I will be) the preview of z/OS V1.12. It appears the future does keep moving in z land, and if we pay attention, our crib notes are still there: Predictive Failure Analysis, automatic partitioning, avoiding VSAM data fragmentation, workload-driven provisioning of capacity, extended address volumes coming into their own, and further enhancements to cryptographic processing algorithms. The preview is entitled 'Heralding a New Generation of Smart Operating Systems'. Boy, those enterprise systems guys: just when you think you haven't heard 'heuristic' or 'ease of use' in a while, you get a statement like 'IBM has taken the long-term outlook by simplifying a mainframe system from the inside out and from end to end.' There is a lot associated with this announcement: Tivoli Service Management Center, CICS Explorer, Rational Developer for System z, ZMF, all the health checks...
Well, better get out of here and start reading!
As I went through my feed aggregators over the last few weeks, another acquisition (National Interest Security Company) showed itself, along with the Smarter Planet exhibit at Disney World. These juxtaposed themselves with articles and blogs about how technology continues to become 'commoditized', how hard it can be to explain the value of z technology (even though it quietly runs the world's major institutions), and a haircut where a barber actually asked me: 'Does IBM still make computers?' Ouch.
I have been tracking the software acquisitions (well over 80), along with acquisitions of companies that do things like process mortgages, automate and move workload, and other work related to systems outsourcing, for about a decade. It has been a fascinating puzzle picture for those who want to watch it form, and the crystallization of the Smarter Planet strategy makes real sense if you are both a tech watcher and a futurist.
While some talk about the cloud as the latest marketing spin (see a recent YouTube video by a sailing aficionado...), others recognize it as the next tier in the 'distributed' model.
The pieces are coming together, and the case studies and examples are starting to pop up from hospitals, cities, power grids, etc. Imagine all of these endpoints of intelligence: not just devices, but instruments, industries, and networks; PCs and phones, temperature regulators and flow valves, assembly lines and portable diagnostic devices.... This is so big and exciting it hurts the brain a little bit, BUT... that is why it is so important for System z and Enterprise Systems folks to explain what has quietly been going on behind the scenes for the last 40-some years.
We have been building and refining ways to deal with mind-freezing amounts of data, processing, and workload balancing. Virtualization, automation, availability strategies, and frameworks have been improving quietly in this world that the general population and popular media rarely become aware of.
The pieces are coming together, and we get to tell the story: not only what happened 'in the beginning', but how 'that is why they were able to make cities livable, find ways to live sustainably, and save the planet'.
And then not say 'The End', but instead: 'The Beginning....'
OK, so maybe it is because it is the start of the year and I try to refresh bookmarks, or because I was thinking about a new contact at a System z partner and which sources to point him to... but I found myself cueing the waving fingers, the doodledy-doo music, and the time transition to compare what finding things was like in the 70s versus now.
Announcements: Then, we had blue letters that came in secure shipments; we would read through them at our desks, then reread them for the following week or so. Now we go online, don't have desks, and may have announcements delivered via RSS feed aggregators or podcast series. Back then, we would schedule auditoriums in every even close-to-large city and announce the yearly changes with fanfare, after doing the same in the branch, having stayed up all night learning the material from a signature-secured box delivered the night before!
Systems Engineers evolved into Architects, IT Specialists, and Business Partner counterparts, and Orange Books became Redbooks. In the branch (the what?) where I started, we had ONE dot matrix printer and a green-screen dumb terminal in a room for all to share, and we could not have imagined the social sites for large systems that now include destination z, My developerWorks, the Mainframe Blog on TypePad, and System z on Facebook! We would never have imagined the need for the Academic Initiative, the SOA Social Network, or that there might be IBM channels on YouTube...
As an architect, I, like many of you, have to keep up on executive topics, and so look at magazines for the CIO and CFO and at Mainframe Executive, in addition to IBM Research journals like the Systems Journal. I go to the Institute for Business Value, as well as Investor's Business Daily, the Financial Times, the Wall Street Journal, and the wonderful NY Times Reader.
When I make time, I may use iGoogle, My Yahoo, Feedly, or other aggregation methods to look at news from tech, industry, and partners who build solutions. I have a feed for pictures from Flickr, press-release searches for key words like 'acquisition', and a search widget or gadget for whenever IBM pops up out on this interweb in any of 'those tubes'. The information overload and anxiety drive a person to feel he needs these tools, just as business intelligence and information analytics are a needed response at the enterprise level. Without these tools we just would not know what is going on. Things have changed for sure, but fortunately there are ways to avoid feeling as if we are waiting for the newspapers to arrive on the ship from the 'continent'....
What are your pet techniques for staying current? Let me know....
So, looking for patterns: have you noticed the increase in Smarter Planet activity involving government entities, power, water, traffic, medical records, etc.? This stuff is building and becoming real; it is not just marketing or public relations. Take a look at the press room or the Smarter Planet sites for this growing trend!
As part of this entry's miscellany, I noted some announcements related to a Solution Edition (for Chordiant's Customer Experience Management solution; think CRM plus...), a System z Enterprise Linux Server for consolidation strategies, and ran across an entry noting that the top 50 banks all run System z. (We've seen lots of 'top enterprises run z' stats; I had just not seen that particular one before.)
Tech folks are, in my experience, optimists, eager to apply innovation for the betterment of their enterprise and, yes, the world. As a new year starts, I know you join me in being ready to go, to dig in, help out, and make a difference. It is not just IBM's values to make a difference in the world... I think all of you techies love doing so too. I look forward to joining you in doing just that in 2010!!
So, you probably saw the announcement for the Linux enterprise server and maybe thought of the recent Solution Edition for Linux.
If this doesn't line up nicely with all of the virtualization and consolidation efforts going on, I sure don't know what does!
Being December, I find myself looking back at some of the high-level enhancements in enterprise systems in 2009. From the z/OS Management Facility to CICS Explorer, to the 45th anniversary of the mainframe (with CICS turning 40 and COBOL 50), and the Academic Initiative hitting new levels of participation, it has been an interesting year.
Whether from increased integration through enterprise modernization (especially CICS Web services), insights through deeper analytics, or efficiencies from green characteristics, it seems system z keeps pushing forward in meaningful ways on many fronts.
Being December, I find myself paging more than normal -- as many of you probably do -- but I wanted to stop by and especially wish you and yours the very best of holiday seasons, and hope you join me in looking forward to the exciting developments awaiting us all in 2010.
“Scientists at IBM Research - Almaden, in collaboration with colleagues from Lawrence Berkeley National Lab, have performed the first near real-time cortical simulation of the brain that exceeds the scale of a cat cortex and contains 1 billion spiking neurons and 10 trillion individual learning synapses.”
I asked one of my friends if that meant that when you talk to it... it ignores you. (Drumroll, please!)
Another indicator of how things are about to change in some pretty big ways is the recent set of developments from IBM around cloud computing. As I listen to these developments, I can't help but reflect on how many of the goals tie back nicely to enterprise systems kinds of goals. Think of being able to log on to an essentially dumb terminal (in the old days a WYSE terminal, today through a browser). Imagine the cost and management savings of not having to maintain thousands of endpoints beyond their just being endpoints! (Think POTS: plain old telephone service.)
When I hear the story of the Pike County school system (and their implementation of cloud computing for the desktop), there is a part of me that starts to jump up and down and scream: ‘It's coming! It's coming!’ We really are moving into the age of industrialization for information technology. (Our old friend Irving Wladawsky-Berger talked about it earlier this year when he discussed the industrialization of services, and there are other recent articles, such as this one by strategist Gene Wright, where he references Simon Wardley’s lecture on YouTube -- look for the cats in his slideshow!)
I heard a great analogy for the ‘older ones’ in the audience: think about when you had to use an operator for phone calls, then for long distance, then for overseas, and now you don’t even need a phone!
Oh, and what do we think will be running the clouds most efficiently in that sky? Enterprise systems??
systemzblogger 2700017BYR 2,347 Views
System z Personal Development Tool
Well, it is fourth-quarter and that means it's crazy busy time for those in the IT world, the workplace world, or those just anticipating getting out of shopping for the holidays.
Just back from a productive visit to a bank, I noticed last week that IBM has made the previously internal-only zPDT tool available to Independent Software Vendor members of PartnerWorld. With this tool, you can run z/OS on Intel or Intel-compatible platforms and use it for development, education, or demos.
It comes in three sizes -- one, two, or three virtual machines. Having seen this in operation, I can tell you that some of the combinations can be quite complex, including middleware stacks, Linux environments, and network connections. If you go to PartnerWorld via this link, you'll find all the information you need as an ISV to guide you through the process of purchasing your licenses and getting your hardware key.
It's a powerful tool and a great option for those in our distributed world looking for something to work with and to show others System z.
Did you miss last month's birthday? Yes, COBOL turned 50 last month. I could blather on about how there are 30 billion COBOL and CICS transactions a day, how there are still well over 1 million COBOL programmers, or how there may be as many as 5 trillion lines of COBOL code out there running the world's largest institutions. Instead, let's just say happy birthday to this common business-oriented language and, like your grandpa, remember to give it some respect!
As a field architect, I have the opportunity to run into all kinds of customers and situations. This means I get to see all kinds of technology. Recently, I had the opportunity to take a closer look at one aspect of virtualization across IBM's server platforms and found myself very encouraged by the direction we're going. You may recall that last year at this time IBM announced, and later followed through on, the acquisition of Transitive.
About six months later, IBM announced PowerVM, which provided the capability to consolidate sets of applications across Power systems (AIX, IBM i, and Linux). This included the rapid deployment of workloads in partitions, and even the live transfer of running workloads. There's a lot of detail in features like resource sharing and the implementation of micro-partitioning, where you can have as many as 10 dynamic logical partitions per processor core, but the exciting thing is to see the direction of virtualization and how concepts started on System z percolate down across other platforms. I recently heard about the impact at a conference where a video running out of a partition was moved live across physical machines on the conference floor -- who wouldn't like to have seen that?
Remembering how the iSeries was moved to Power systems, learning that Transitive helped move Apple systems across chipsets, and seeing examples everywhere of increased management and utilization of processor resources (such as the recent z/OS enhancement allowing zAAP-eligible workload to run on zIIP engines), it just gets one itching to see the next result of that Transitive virtualization acquisition!
systemzblogger 2700017BYR Tags:  system z resilience engineering research ibm 1 Comment 4,466 Views
I was reading the October issue of Popular Mechanics when I came across a column by Glenn Harlan Reynolds entitled Ready for Anything. In the article he defines resilience engineering to include the idea of designing, building, and maintaining systems so they have some give, are able to offer extra capacity, handle sudden loads, provide plenty of warning when things begin to break down, have backup systems in case they do, and so on. I immediately thought: hey, this sounds a whole lot like what System z has been doing for decades!
I wondered what IBM is doing for this newly coined approach (though it was mentioned that resilience engineering was born as an academic idea in response to the 2003 ...)
In case you did not know, IBM Research is the world’s largest industrial research organization, with about 3,000 scientists and engineers in eight labs spanning six countries. IBM has produced more research breakthroughs than any other company in the IT industry and has led in U.S. patents for the past 15 years. It’s always nice to see that our scientists at IBM Research -- which recently celebrated the 20-year anniversary of moving atoms -- and the engineers across IBM are continuing to stay ahead of the curve in designing our systems!
systemzblogger 2700017BYR 2,516 Views
I had the opportunity to hear a podcast this week of a Town Hall meeting where the story of Baldor Electric and their evolution from a mixed infrastructure to almost exclusively System z was compellingly shared. The journey, also shared on Mainframe Executive here, took 12 years and serious focus to achieve technical and fiscal performance they can be proud of.
The Baldor story reminded me of Project Big Green, where IBM is moving 3,900 servers to 30-some System z servers (which I think is now around 19 with System z10). We are two years into what may be a five-year cycle of better optimizing our own systems, part of a transformation which has already seen a reduction of CIOs from 128 to 1, and data centers from 155 to 5 (east and west coasts of North America, Australia, Asia, and Europe, for the curious out there).
It occurred to me that neither story would have been possible if full usage and exploitation of the platform had been happening all along -- including at dear old Big Blue!! Growth, politics, and taking your eye off big-picture optimization are easy things to do across an enterprise, but fortunately the promise of System z and the Mainframe Charter (innovation, community, and value) continues to roll out, with happenings this summer including the Solution Edition offerings, IFL and memory pricing actions, and Academic Initiative thresholds being passed as more students and universities become part of the program.
systemzblogger 2700017BYR 1,928 Views
After seeing the news about IBM scientists (working with Caltech) using DNA molecules as scaffolding with carbon nanotubes as part of new sub-22 nm lithography processes, I found myself thinking about progress on the storage side as well.
Remembering how storage, starting back in 1956 with 5 MB, 50-platter devices, has evolved in 50 years to multiple-TB sizes, or how solid-state storage can now completely eliminate seek and set-sector activity, it does all blow one's mind. (Here's a nice white paper on SSD performance.) Still, I guess we need these kinds of improvements when data doubles every year and a half, or when estimates put data growth at a 60% compound rate -- often depicted in seemingly relatable terms, such as how many Libraries of Congress we add per day.
Fortunately, at least in System z land, we have decades of evolution in management systems which can help govern, control, and manage the scale of technical resources we use. Whether that storage is in the mainframe or on the floor, or it is memory, specialty processors, programs, tasks, or workloads... it's not version 1 approaches we have helping us. (An example of continued platform simplification is the recent announcement of the z/OS Management Facility -- announcement letter here.)
By the way, this month's z/OS statement of direction notes IBM intends to use DVDs to deliver systems, add the latest Internet Key Exchange version 2 (IKEv2) support, pull DCE (technologies come and go!), and, based upon customer feedback, not drop support for the VSAM IMBED, REPLICATE, and KEYRANGE attributes. For all the details, just see the link above.
systemzblogger 2700017BYR 2,583 Views
The new Solution Edition offerings make a pretty big statement. Designed as integrated hardware, software, and services packages that help customers deploy new enterprise workloads, they also make a great complement to zRewards and the Mainframe Charter. Building on the kind of packaging IBM has done for SAP, there are workloads for Data Warehouse, Application Development, Disaster Recovery, Security, ACI Payments, and SOA. So, without belaboring the point, some of the barriers to migrating key enterprise applications to z/OS are being lowered, making the platform even more attractive for consolidation, performance, management, and fiscal effectiveness. Take a look at the detail page here.
Now, like every third person I have tried to work with this summer, I'm going to get out of here, take some of my vacation days, and keep this blog post short. If you're reading this, and it is still August, please get your rear end out to the car, airport, or backyard and make sure you get your days in too!
systemzblogger 2700017BYR 2,749 Views
The Mainframe Executive site has so many good postings, I was tempted to refer to a recent (July 16) entry that reviews some of the overwhelming benefits of System z in an interview with IBM Fellow Gururaj Rao, Ph.D. While I still encourage you to read it, since there is some enticing discussion of futures, including extreme virtualization and expanding the platform to even more architectures, I wanted instead to share an interesting experience I had.
I happened to be looking at some of the System z benefit metrics the same day I was looking at some of the new presentations on the benefits of the evolving cloud environments. Some of these included studies on the kinds of resources dedicated to test environments, and the parallels you could draw between them were intriguing.
For instance, how about going from server utilization of less than 10% to up to 90%? Would you like to provision a test system in minutes rather than weeks? Or improve your release management by the same kinds of numbers? Would you like to reduce software costs, reduce configuration errors, and reduce labor costs by over 50%? How about increasing your utilization by over 75%, and drastically reducing defects through automation? Did you know that distributed server proliferation also includes a large chunk dedicated to the testing environment -- like 1/3 to 1/2 of all the servers in a typical IT shop?
These sound a whole lot like the System z platform characteristics, don’t they? So, for existing System z shops looking for workloads to fit on System z, it might be a good idea to ask the question: Should we be making our capabilities more visible to upper management as they look at future sourcing decisions? (Oh yeah, and what are we doing with test across our whole enterprise?)
I had a conversation this morning with a longtime consultant who has worked with 'the mainframe' since at least the '60s and is heavily involved in both SHARE and CMG. We found ourselves shaking our heads at the durability of perceptions that persist in spite of overwhelming and increasing evidence that the System z platform is far and away the most cost-effective platform in the world.
One nice recent example of continuing evidence comes from The Clipper Group, and their newsletter, The Clipper Group Navigator (April 23, 2009), which talks about how well System z fits into upcoming Cloud strategies.
This was on the heels of a session I attended that talked again about consolidation efforts. These results showed energy savings of 80%, space savings of 85%, software savings of 35%, and labor savings of 54% -- while reminding us that average server utilization leaves 85% of capacity unused.
Next, I took a look at the recent announcement letter, which previewed z/VM V6.1 (letter 209-207). Besides a raft of improvements related to storage, networking, and Linux enablement, my eyes perked up (since I was reading, not listening) when I saw this:
Running more Linux server images on a single System z server: Considerably more images than are currently supported by the LPAR mode of operation (up to 60 on z10 EC and z10 BC) may be supported with z/VM guest support. These Linux on System z server images can be deployed on standard processors (CPs) or IFL processors. Running multiple Linux images on an IFL-configured z/VM system may not increase the IBM software charges of your existing System z environment. Clients running z/OS, z/VM, TPF, z/TPF, z/VSE, or Linux on System z can add z/VM V6.1 on IFL processors to their environments without increasing IBM software costs on the standard processors (CPs).
Then, at the end, there is a Statement of General Direction that talks about a z/VM Single System Image, whose intent is to allow all z/VM member systems to be managed as one system, across which workloads can be deployed. There is also something called z/VM Live Guest Relocation, aimed at moving a running Linux virtual machine from one single-system-image member to another.
Wow, way to move the hypervisor along!
It is easy to think of System z as just the hardware, or to focus on the z/OS operating system and forget about the 40-year contribution that VM has made in virtualization, especially over the last few years in consolidating Linux systems.
With each additional step of on demand capabilities, the numbers for total cost of operations improve and the picture of how all these new 'clouds' can actually start to form becomes clearer.
systemzblogger 2700017BYR 2,724 Views
Have you been reading, googling, and listening to podcasts on ‘The Cloud’? (Not The Blob; that was a 1958 movie with Steve McQueen, Aneta Corsaut, and Jane Martin.)
I’ve decided not to rush to understand this still-forming next wave, and am starting to see a set of cloudy parameters for watching the waves we will all be riding for the next decade or two. Besides the obvious cost savings public and private clouds promise, here are some patterns I see evolving. (What do you see?)
Immenseness & Immediacy: This cloud is so big, our brains may have trouble understanding the associated scale and orders of magnitude. Mainframes were a big deal with thousands of green-screen terminals, then distributed computing brought applications to millions of new users, and the Internet now connects billions of people, sites, and programs. But you haven’t seen anything yet.
New intelligent units are getting connected into networks that generate connection points or interactions that go way beyond a billion -- to a trillion (10 to the 12th), to... well... who knows, really big numbers like a nonillion (10 to the 30th).
There are devices everywhere that are joining up. We add new intelligent devices: phones, cars, refrigerators, game stations, stoplights, valves, pacemakers, cement mixers, cameras, bottles, and dog collars. Sensors, RFID tags, intelligent endpoints, input devices, and computing nodes all join the mix. You don’t churn your own butter, you don’t have to walk to the corner to place a phone call, and, as anyone with a smartphone can tell you, you don’t need a computer to work with applications.
Incredibleness & Industrialization: We are only beginning to see the industrialization (standardization, automation, virtualization) of information technology. (Start a soundtrack in your head -- say, a Carl Stalling score from a Warner Bros. cartoon where they are stamping something out in a factory!) Just as each generation of tool-and-die makers creates templates for more amazing creations, the adolescent IT world is growing up, reaching past the apprentice stage, and creating its own masterworks.
Besides the visible advances of putting computers in nearly everyone's hands, or the important connections between individuals across social software, major advances in technology, infrastructure, and services science have been building in places like IBM Research and Development, enabling startling and incredible announcements such as financial trading systems with 20-plus-fold improvements, inline analytics (System S) with microsecond response times for the study of space weather, or new, complex neonatal monitoring.
Immersion & Intensity: We will all be interacting with more of our senses (think Second Life and gaming), different devices, and on new scales. When you smash things together in new ways, you can end up with wonderfully creative results or horrible-tasting desserts. Fortunately, enterprise systems have been playing with the pieces (many of which we invented) for a while now. (Think AP, MP, Parallel Sysplex, VM, GDPS, WLM...)
systemzblogger 2700017BYR 1,588 Views
I've been on vacation, but I came across this item, and it was initially confusing to me, so I thought I would add a quick post to maybe clear up any confusion on the subject.
In case you were wondering where the IBM technical journals are (Systems Journal or Journal of Research and Development):
If you are like me, have had the journals show up via subscription, and have been wondering where the next one is: starting in 2009, they are online only. There are options for institutions or libraries to purchase subscriptions, or individual articles can be purchased.
The journals have also been combined into one journal starting this year. Oh, and the new combined site for the IBM technical journals is here.
systemzblogger 2700017BYR 1,498 Views
Well, it has been almost two months since IBM announced the InfoSphere Warehouse for System z.
The announcement is particularly well-suited to customers who understand and appreciate the qualities of the System z platform. (See a 2006 Willie Favero post on why warehousing and z make sense together: here.)
The offering enables operational and warehouse data to be on the same platform (think co-location), facilitates the possibility of near real-time analytics linked to key applications (think in-line), and delivers multidimensional, no-copy OLAP capabilities!
To build the solution, you take your existing DB2 on z/OS, and on Linux on System z you add the new InfoSphere Warehouse and then, for the presentation layer, Cognos BI. (Note that the presentation layer is designed to easily go to DataQuant or Excel as well.)
There are built-in tools for physical modeling via Design Studio, data movement and transformations via the SQL Warehousing Tool, and ‘no-copy’ OLAP analytics via built-in Cubing Services modeling and a ROLAP engine to build those Materialized Query Tables we need for fast reports about sales of products by line, region... well, you know the drill.
Now, it is true that this is aimed at getting a good start on warehousing, and you can still augment and connect to other parts of the information management portfolio for pulling together your information for business intelligence.
Good examples of this include the already-mentioned Cognos 8 BI, InfoSphere Data Architect for complete logical and physical modeling, the Industry Data Models we have been working on for a few decades now, Information Server for federating a panoply of data sources, and even Master Data Management connections via the MDM Server.
You know..... all those tools to ensure your data is clean, accurate, and trustworthy –well, you get the idea. (I won’t say kind, courteous, cheerful and so on like the Boy Scout oath!!)
So, take another look at the volume, kinds, and speed of data your enterprise needs. Ask yourself what your perception of the risk and cost of setting up warehouses was before this announcement. Consider the current Gartner ratings of our BI portfolio and the value of System z, and start to think about what you could do if you put the whole thing on the platform you already run your key business applications on.
See if you can be more cheerful about getting information out of your enterprise data!
systemzblogger 2700017BYR 2,226 Views
Last week found me reviewing a series of case studies that showed the risk in migrating platforms without a careful analysis of design points and the related total value cases in different infrastructures. As you might guess, these had to do with System z, and these unfortunate institutions moved off z hoping to garner significant benefits.
As you might also guess, that did not happen -- to the tune of ending up with new environments that cost multiple times as much as what they moved from. Just as I was wondering how, after 45 years, the message of managed virtualization, lower manpower costs, and qualities of service still seems to be underestimated for System z, I heard about a new case.
It seems there is a state institution out West that took several years and tens of millions of dollars to ignore their well-running CICS application base and create a new distributed set of applications that resulted not in the sub-second response times they hoped for, but in coffee-break-long response times.
Many in System z land have seen the metaphor for System z where one picture shows a draft horse pulling a large load, and another shows scores of chickens hooked to the same load. In this case, being out West, I envisioned a buffalo and prairie dogs instead, hooked up to a covered wagon.
Care to take a guess which one might move the wagon load best?
You may remember we have mentioned Hoplon Infotainment and their massively multiplayer game, Taikodom -- which resides on System z and uses Linux and the gaming chip found in the Sony PlayStation (a combination sometimes called the ‘gameframe’) -- a couple of times in this blog. (I think it was a year ago in February, and again this past October.) They have now announced plans to go global and could be hosting as many as half a million users by year-end, with plans including graphic novels, a possible TV show, and other tie-in activities.
The announcement includes a great YouTube video where the founder and CEO, Tarquinio Teles, talks through the selection of System z and their gaming business model. He discusses the characteristics we are all so familiar with in System z, like availability, security, reliability, creating virtual resources quickly, and running Java workload. However, perhaps the best reason for picking a z-based platform is reflected in his final comment, where he sums up by saying:
“…fun is something people take very seriously, and you don’t want to mess with people when they are having fun.”
Or - In the Club: More Detail
(I wrote a blog post a while ago, and it was suggested I add my more detailed version... so, here you go!)
The big boys on System z platforms have decades of experience tuning and using techniques to optimize mixed workloads and databases. Workload Manager and static plans for DB2 are examples of component functions that are widely used on System z but not always leveraged well during integration with the distributed world. DB2 static plans are used to improve database workload performance, management, and security. Workload Manager assures high utilization, high service levels, and high value on the System z platform. How can distributed applications leverage them for their benefit? Let’s take a look at both areas.
Workload Manager looks at transactions, jobs, and even database work, then lets them run and use resources based upon priorities, profiles, and things like intended service levels and velocity through the system. As e-business applications came along, the new transaction manager on the block, WebSphere on z, joined CICS, IMS, and JES2 and was integrated as a full member of ‘the club’. So WebSphere on z workload, and its associated DB2 work, could then be managed, secured, and lined up to get the kinds of resources it needed. (See Say What? for a great overview of ‘the good old days’, packages, plans, and such.)
Distributed Workload Needs a Ticket:
Unfortunately, as things evolved for e-business applications (and here we mean both Java and .NET workload) off of System z, the kind of integration necessary to achieve the same kinds of benefits sometimes needs a little help. It turns out that when requests are sent to DB2, they tend to show up in the hopper as undifferentiated pieces of work -- unless the programmer knows to specify an indicator (as SAP does by specifying differentiating parameters such as the correlation ID). Usually, though, there is no ‘handle’ for DB2 to differentiate the incoming workload (or for subsystems to handle workload management, monitoring, reporting...). So, by default, everything settles into a low priority and a low level of service. (For DB2, the default service class is 4 -- which is low!)
If you don’t have that ‘handle’, System z can’t prioritize the workload so it gets its fair share competing with all the other workloads running on System z. Without the ‘handle’, you guarantee your incoming distributed work is at the bottom of the bucket!
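To make the ‘handle’ idea concrete: with the IBM Data Server (JCC) JDBC driver, client information can typically be supplied as connection properties, which DB2 and WLM classification rules can then use to differentiate the incoming work. The sketch below is illustrative only -- the property names follow JCC conventions, but the application, user, and workstation values are made up, and you should verify the exact property names against your driver level before relying on them.

```java
import java.util.Properties;

public class WlmClientInfo {
    // Builds connection properties that give DB2/WLM a 'handle' on the work.
    // Property names follow IBM Data Server (JCC) driver conventions; the
    // values here are hypothetical examples, not real identities.
    static Properties wlmProperties(String appName, String endUser, String workstation) {
        Properties props = new Properties();
        props.setProperty("clientProgramName", appName);            // application name for classification
        props.setProperty("clientUser", endUser);                   // end-user identity (not the auth ID)
        props.setProperty("clientWorkstation", workstation);        // originating workstation
        props.setProperty("clientApplicationInformation", "order-entry"); // free-form accounting info
        return props;
    }

    public static void main(String[] args) {
        Properties p = wlmProperties("OrderSvc", "jsmith", "WS042");
        // With a real driver and database you would add user/password and pass
        // these to DriverManager.getConnection(jdbcUrl, p); here we just show
        // what the prepared 'handle' looks like.
        System.out.println(p.getProperty("clientProgramName")); // prints "OrderSvc"
    }
}
```

The point of the sketch is simply that the distributed side has to volunteer this information; nothing downstream can classify work it cannot distinguish.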
Another large issue affecting overall performance for distributed Java workload using DB2 on the mainframe (and, by the way, .NET applications implement this scenario against DB2 more than against any other IBM database) is taking advantage of the first technique mentioned above: the use of static plans.
In the distributed environment where the paradigm is for interpretive rather than compiled programs, it is tough to even think of techniques using a pre-compiled approach as an alternative. However, there are ways to leverage the concept of a ‘static plan’ for DB2 by distributed workloads too.
Welcome to the Party: PureQuery
PureQuery now enables distributed applications to bind static packages set up ahead of time.
A package, also called an access plan, can be thought of as a precompiled subroutine with the information to line up execution details ahead of time (such as access paths and indexes). Once created, packages can be stored in the DB2 catalog and bound to a program. Note: using packages can also add a layer of abstraction and security to DB2 access.
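For readers who have never watched a package come to life, binds on z/OS are typically driven through DSN subcommands. As a hedged sketch (the subsystem, collection, and member names below are invented for illustration; check the BIND PACKAGE syntax for your DB2 level), the general shape looks like:

```
DSN SYSTEM(DB2A)
BIND PACKAGE(ORDERCOLL) MEMBER(ORDERPGM) ACTION(REPLACE) ISOLATION(CS) VALIDATE(BIND)
END
```

Once bound, the package lives in the DB2 catalog under its collection ID, which is what a distributed application can later reference instead of sending dynamic SQL.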
See some good references from developerWorks and DB2 Magazine.
These static packages can be created and waiting on System z in the DB2 catalog for the Java applications to ‘call’ or connect to and use, through a binding process done by the WebSphere distributed admin or programmer. (By the way, this is done through techniques that require far less heavy lifting than an earlier technology called SQLJ employed. See some good tutorials on pureQuery, again in DB2 Magazine and developerWorks.)
Gift Bags for All:
If both approaches are used, distributed Java workloads can then benefit from the serious ($$) performance, security, and management effects of engaging DB2 and Workload Manager, and from using those static packages waiting at the ‘Enterprise Data Hub’ (System z).
But wait a minute.. there’s more…
Okay, that sounds great, but let’s stop and think about this. Put another way, what if you could now have the assurance that your database access on the mainframe from distributed applications would have much better, and much more predictable, performance -- on the level of the service level agreements mainframe folks are used to?
What if the mainframe land of sometimes ‘unknown-ness’ for distributed folks was something that could now be counted on to give you fast data from lots of data sources -- federated, services, real time -- and across not just your infrastructure of distributed and centralized systems, but across partner networks?
Since many application suites are on distributed platforms, how might your plans be affected for BPM, real time intelligence, MDM….? (See the article on MDM and PureQuery on Developerworks).
These are surely some things to go back and look at… don't you think?
systemzblogger 2700017BYR 1,733 Views
Enterprise Architecture: Galactic Everything.
What can’t you include in these two words: Enterprise and Architecture? Over the past few years there has been a huge emphasis on innovation, and with it, a focus on new business models. Eventually, this leads to line of business managers talking to the technical arm of the company for ways to implement the new processes, functions, and capabilities. Whether you call yourself an analyst, project manager, architect, or something else, if you sit between the vision and the reality, then you are involved with modeling and modeling frameworks.
Whose View of the World?
Maybe you use the Zachman Framework, or TOGAF, or Catalysis, or MDD, or Michael Porter, or the Balanced Scorecard. Maybe you go back to the 1960s, with Peter Checkland and Soft Systems Methodology, or are a fan of the Rational Unified Process. There are frameworks that focus on views and perspectives from a technical standpoint, or on organizational development, social change, construction, and problem solving. They all fight for a balance between simplicity and the tendency to elaborate one more level. Do you use a telescope or a microscope? Do you crank it up or down one level? Do you bother to include society, culture, and economics?
Can It Fit in Your Head?
Whatever framework your company or customer might be using, keep in mind some simple rules of thumb. First, the point of building models is to communicate. This means you need to consider who is looking at the model, what you leave in or take out (elide), and how well your audience can in turn communicate it to someone else. Second, remember the studies that show our brains can only handle a handful of items at a time (The Magic Number Seven). Third, do not try to get it all on one piece of paper. Think of an architect building your house, and the fact that he has a set of plans with each sheet depicting a particular viewpoint for wiring, plumbing, landscaping, etc. For each picture, diagram, artifact, work product, or exhibit, justify its creation by its intended use for this particular project.
All of this means you do not ask your client or organization to deliver your particular framework or model to you on your first day (as, I have to confess, I tended to do in my early years). Instead, try to tease out the things they use and refer to. Ask what pictures, diagrams, charts, or reference material they find essential when talking about the problem being addressed. Don’t try to vacuum up all of their information from a raid on their file cabinet and a quick peppering of questions where you try to suck their brains out. Instead, take a lesson from other agents of change in organizational development, consulting, and even psychotherapy, and take time to build collaborative models together. Yep, that’s right, roll up your sleeves, grab a marker, and take more time than you thought you needed to find out ‘what they mean by that’, ‘what happens next’, and ‘who cares about this?’. Keep a simple framework in your head and start with things like financial, logical, and physical aspects. Elaborate from conceptual to more specific and concrete as you need. Don’t forget to focus on context and end results as barometers. Oh, and test models with subject matter experts, constituents, end users, and implementers along the way.
How does this relate to System z?
Well, I was cleaning through some files and ran across examples of both systems that were designed with very narrow scopes, and others, like enterprise-class System z solutions, that had larger perspectives and, as a result, lasted better over time, scaled better, and ended up costing less in the long run. Naturally, I started to think about some of the things that characterized the design of those solutions. Today, when divisions, applications, and infrastructures are being called on to be more efficient and to integrate together more effectively, we are all being asked to evolve a ‘brown field’ solution architecture to its next incarnation. If you are involved in leading that change, please remember that most projects still ‘fail’ against their original objectives, most requirements are gathered incorrectly, and it is incredibly easy to get lost in detail or jargon and fail to communicate across the business and IT chasm effectively.
Where to start…
Some good thought frameworks I have personally found useful, that you may not have run into, include Ellen Gottesdiener (Requirements by Collaboration), David Sibbet (Graphic Facilitation and the Process Methodology of The Grove), Peter Checkland (Soft Systems Methodology), and of course for business models there is always Michael Porter, Kaplan and Norton of Balanced Scorecard fame, and Eric Helfert for insights into financial frameworks. Look for approaches that organize large, complex systems in other fields -- you might be surprised what can be leveraged.[Read More]
You may remember five years ago, when CICS hit the 30 billion transactions a day level. Now, RFID tags have hit 30 billion worldwide, and there will soon be 1 trillion intelligent device endpoints; 50 million of these being Wii devices! Part of the smarter planet initiative includes expanding the concept of a truly dynamic infrastructure, which merges both digital and physical infrastructures in an enterprise. A couple of weeks ago, IBM continued to add capabilities in this space by announcing new governance models, industry solutions, specialty partner programs, and new hardware and software products.
The Tivoli portfolio enhanced its strategy of visibility, control and automation with Tivoli Service Automation Manager (a base to manage and automate cloud computing environments), Tivoli Key Lifecycle Manager (centralizing and managing encryption keys across their lifecycle) and Tivoli Monitoring for Energy Management (including both IT and overall Facilities energy devices).
On the hardware front, there are new options for dealing with the 15 petabytes of data the world now generates every day (8-fold what is in all US libraries!), both with the ProtecTIER® Deduplication Appliance (a result of the Diligent acquisition last year), which can reduce duplicate data for as much as a 25:1 savings, and the addition of full disk encryption on the DS8000 to our already proven tape encryption offerings.
Also included were new IBM Service Management Industry Solutions from Tivoli and a new Dynamic Infrastructure Specialty Program for which the first wave of Business Partners is already gaining certification, including Sirius, Mainline, Vicom, MicroStrategies, Agilysys and Computer Integrated Engineering System. Check out the new Dynamic Infrastructure Journal too.
With so many institutions starting to step back and look at technology from a truly enterprise level, maybe it is time for us to rename IT to ET? Enterprise Architecture, Enterprise Messaging, and Enterprise Technology -- whatya think?[Read More]
As mainframe guys, we know that one of the key strengths of the System z platform with the z/OS operating system is that it is designed to run at 100% utilization. As it manages jobs, tasks, and transactions, Workload Manager assigns workload to resources based upon priorities. Now, that means you need a discrete name for each kind of workload. Without a name, there is no ‘handle’ for Workload Manager to associate things like service levels, priorities, and velocities with, so it can figure out how much resource and computing capability that piece of workload deserves. There is also a long-established technique on the mainframe that involves compiling programs so that everything gets resolved, including the database access component -- where SQL statements get converted into access paths using the right indexes, getting to the right databases, etc. This process gets done ahead of time to make actual runtime execution more effective. Okay, that is not too exciting, and most of you probably are well aware of what it means for a program to get compiled ahead of execution time rather than being interpreted dynamically at runtime.
Now let’s turn to the distributed world, where one of the paradigms is interpretive rather than precompiled, and where (surprise, surprise) many of the technicians supporting the environment did not spend the last few decades of their lives working with the mainframe. It turns out that if you don’t specifically classify your workload flowing to DB2, all of the distributed workload (whether it is Java or .NET) gets put into a bucket which automatically defaults to a low priority and a low allocation of resources and service. If enough care is not paid to differentiate the incoming workload and provide it with the ‘handle’ we talked about above, all of the subsystems of the mainframe (including your DB2 database, workload manager, security, monitoring, and reporting components) can’t give your application the attention it deserves. Also, while there have been some techniques to accomplish it, binding static packages to distributed application programs is something that has not been done very much, due both to awareness and the perceived heavy lifting associated with those techniques (see SQLJ).
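To make that ‘handle’ concrete, here is a rough Java sketch of one common technique: the IBM Data Server Driver for JDBC accepts client-information properties (clientUser, clientApplicationInformation, and so on) that flow in with the distributed request and can be referenced in WLM classification rules. Treat the application name, user, and other values here as purely illustrative assumptions, not a recipe for your shop.

```java
import java.util.Properties;

public class WlmClientInfo {
    // Build connection properties that give distributed work a WLM "handle".
    // The property names follow the IBM Data Server Driver for JDBC and SQLJ;
    // all of the values are hypothetical and would be set per application.
    public static Properties classificationProperties(String user, String password) {
        Properties props = new Properties();
        props.setProperty("user", user);
        props.setProperty("password", password);
        // These client-info strings accompany the request to DB2 and can be
        // matched by WLM classification rules so the work is no longer lumped
        // into the low-priority default bucket:
        props.setProperty("clientApplicationInformation", "ORDER_ENTRY"); // hypothetical app name
        props.setProperty("clientUser", "webuser1");                      // hypothetical end user
        props.setProperty("clientWorkstation", "appserver01");            // hypothetical host
        props.setProperty("clientAccountingInformation", "DEPT42");       // hypothetical accounting tag
        return props;
    }

    public static void main(String[] args) {
        Properties p = classificationProperties("dbuser", "secret");
        System.out.println(p.getProperty("clientApplicationInformation")); // ORDER_ENTRY
        // In a real application you would then connect with something like:
        // DriverManager.getConnection("jdbc:db2://host:446/DB2A", p);
    }
}
```

The design point is simply that the classification data travels with the connection, so the mainframe side can tell one distributed workload from another without any change to the SQL itself.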
While there have been ways to do both of these techniques, distributed and glasshouse folks do not always talk as much as they should, and the techniques were not always particularly easy or low-threshold; some were just considered ‘heavy lifting’. A good example of this was something called SQLJ which, while it enables the use of static SQL, has had very low adoption rates. Now, with the advent of pureQuery (which has been out about a year), distributed workload can take advantage of the concept of static packages (that precompiled effect), and, as a part of the larger Data Studio suite, pureQuery delivers techniques that let distributed technicians more easily classify their workload as it shows up to DB2 on the mainframe.
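To illustrate the annotated-method style that pureQuery popularized, here is a self-contained sketch. Since the real pureQuery runtime is not assumed to be on the classpath here, the @Select annotation below is a stand-in defined inside the snippet itself, and the table and method names are hypothetical; the point it shows is that the SQL lives in metadata, where tooling can capture it ahead of time and bind it into a static DB2 package instead of preparing it dynamically at runtime.

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import java.lang.reflect.Method;

public class PureQuerySketch {
    // Stand-in for pureQuery's @Select annotation, defined here only so the
    // sketch compiles on its own; the real one ships with Data Studio.
    @Retention(RetentionPolicy.RUNTIME)
    @Target(ElementType.METHOD)
    @interface Select { String sql(); }

    // Annotated-method style: the SQL is attached to the interface method as
    // metadata rather than buried in string concatenation at runtime.
    interface OrderData {
        @Select(sql = "SELECT ORDER_ID, TOTAL FROM ORDERS WHERE CUST_ID = ?")
        java.util.Iterator<String> getOrders(int custId);
    }

    public static void main(String[] args) throws Exception {
        // Because the SQL is discoverable without executing anything, a bind
        // step can turn it into a static package with a fixed access path:
        Method m = OrderData.class.getMethod("getOrders", int.class);
        System.out.println(m.getAnnotation(Select.class).sql());
    }
}
```

That discoverability is the whole trick: once the SQL is static metadata, the precompile-and-bind discipline the mainframe has used for decades becomes available to distributed programs too.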
If you do utilize these two techniques, not only is this distributed workload finally eligible for Workload Manager to give it whatever level of service it needs, but that workload can also benefit from the efficiencies of static packages on the mainframe (since developers now have an easier way to bind packages to their deployed programs).
Okay, so now we have a ticket to the party and can use a lot of the same mechanisms that COBOL programs or batch jobs have historically used to get their fair share of resources. One of the key things this means is that distributed workload can integrate better with the mainframe. It can more effectively use the data on the enterprise data hub. The response times available to these distributed applications from the mainframe can be far more predictable, with better performance. There is even a security benefit, since using these static packages means that users are authorized at the package level, avoiding the potential exposure of direct database access.
If you are an architect or an IT manager, this also has large implications for designing applications across your whole infrastructure, including applications that work with other enterprises and partners. Do you have applications that need to go to other locations? Or that would like to become real time, federating data from multiple sources, integrated into business process management flows, and relying on applications and information across platforms and networks? Are you starting to look at master data management, greater infrastructure integration with endpoint devices, or folding in new groups of users that just happen to reside on different networks, platforms, or infrastructures? Having a way for your workload to integrate better with enterprise resources, level the playing field for use of resources, and achieve better performance and predictability can be a huge step in an end-to-end systems design.
Oh, and by the way, it should be no surprise that there are also new opportunities for zIIP engine usage.[Read More]
systemzblogger 2700017BYR 1,633 Views
You may remember, I mentioned that there were a number of articles in progress on the System z10 server. In the interim, the IBM Systems Journal and the Journal of Research and Development have merged, and those z10 articles are now available in issue #53 at this site. I was going to dedicate this blog to highlighting some of the articles in that issue, but I wanted to take a moment and share with you some of the progress that has happened in the 90 days since IBM’s Sam Palmisano gave his speech at the Council on Foreign Relations back in November on a Smarter Planet; the largest enterprise system we humans deal with.
Since then, an incredible amount of information has been made available, both on the general IBM site and through some pretty amazing visibility in media and Internet channels. On CNN, Sam had his first TV interview with Fareed Zakaria, and on CNBC, we saw him with our new President, Obama. If you look at YouTube, there are many new videos related to a Smarter Planet and what it means across industry, business, technology, and society. Smarter utilities, telecommunications, energy, money, retail, infrastructure, and more are being highlighted at the IBM and other sites including:
· A new blog called: Building a Smarter Planet
· YouTube: see the IBM socialmedia Channel
As I walked through these stories of Smart power grids, traffic management, new approaches to telecommunications, financial systems, food distribution and other solutions, I started to realize that this may be not only the largest initiative from IBM in decades, but a real tipping point for technology being applied to the world in a meaningful way. Whether you are an IBM employee, Business Partner, customer, or a citizen of the world, this initiative will affect you, and it is worth starting to understand what it involves and where it is going. I highly recommend taking the time-- I think you will be inspired.
Did anyone see the Game-Frame computer in the latest Popular Mechanics? It is a gaming PC from HP that uses VooDooDNA water cooling. Mmm… I’ll bet you large systems types thought that the Game-Frame was the Hoplon System z with integrated Cell processors for massively multiplayer use.
Well, it is nice to see water cooling in a PC, since it has been pretty well established, for 4+ decades now, that water cools around 3,500 times more effectively than air. Oh, by the way, if you have been keeping track of IBM green initiatives like Cool Blue, Big Green, and others, you may have noticed a whole range of cooling techniques that take things to the next stage, including technologies like the Rear Door Heat Exchanger, cooling arteries, Hydro Clustering, and Cold Batteries (i.e., ice), which use advanced thermal exchange techniques.
With datacenters running out of power, costs escalating daily, and barely over 2 in 10 enterprises having done energy audits, it is time for large installations that have System z to take a look at an area that may not have been investigated in depth for quite a while.
(picture from flickr ibmphoto24 stream)[Read More]