Ever get the sense that marketing jargon is getting out of hand? Consider this sentence: "Siloed management must give way to a new paradigm of holistic business value."
"New paradigm," in particular, seems a little doubtful. I learned a long time ago not to talk about paradigms, especially in the context of shifting. But I think the rest of it just needs a little rephrasing.
Let's try this: "IT teams and technologies should collaborate more to work better."
That's not so bad, is it? It's easy to find an example, too: security and storage management.
These two seemingly separate IT domains turn out to be flip-sides of the same coin: data protection. And a coin is probably a good metaphor here, because data is often the most valuable asset an organization has.
Imagine your organization. Now imagine how productive your organization would be without any data. See what I mean?
Security and storage management are your vigilant friends with specialized military training who hang around your data and keep it from being threatened, damaged, mutilated, spied upon, lost, kidnapped or murdered in cold blood. And to get that done, they work best as a collaborative team.
Encryption delivers powerful protection for almost any form of data
To pursue this idea in a little more detail, consider the most traditional form of backup media: magnetic tape.
It's inexpensive, commonplace and, even today, in extremely widespread use. And it's also a gigantic potential security hole, because the stuff that gets backed up onto it is quite often the stuff organizations want to protect the most. So it typifies the natural link between security and storage, and underscores the fact that organizations should think about connecting these domains a lot more naturally.
Anne Lescher, Product Marketing Manager with IBM Security Solutions, agreed with me on this point when I talked to her.
"Critical data protection should utilize encryption, along with key management, in the event that identity and access controls can be bypassed or storage media is removed or stolen," she said. "Everyone's worst fear is that their tapes might fall off the truck in transit or be stolen for malicious use."
Absolutely. Encrypting data everywhere you reasonably can, including backup tapes, leads to better security and a better business outcome.
So solutions that optimally manage encryption keys, like IBM Tivoli Key Lifecycle Manager, are already pretty compelling and getting more compelling by the day. They help organizations serve and manage those keys in a centralized way for as long as the keys are in use, and they integrate directly with tape drives (from both IBM and third parties) to encrypt data as it's written to tape.
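To make that idea a little more concrete, here's a minimal sketch of the pattern (my own illustration, not TKLM's actual interface; the key-manager call is a stand-in): the backup process asks a central key service for a key, encrypts the data with it, and records only a key identifier alongside the ciphertext.

```python
# Minimal sketch of encrypting backup data with a centrally managed key.
# The key-manager lookup below is a placeholder, not a real product API.
from cryptography.fernet import Fernet

def fetch_key_from_key_manager(key_id: str) -> bytes:
    """Stand-in for a network call to a central key-management service."""
    return Fernet.generate_key()  # a real service would return an existing key

def encrypt_for_tape(plaintext: bytes, key_id: str) -> bytes:
    key = fetch_key_from_key_manager(key_id)
    token = Fernet(key).encrypt(plaintext)
    # Only the key ID travels with the backup -- never the key itself.
    return key_id.encode() + b"|" + token

if __name__ == "__main__":
    print(encrypt_for_tape(b"customer records ...", "tape-2012-001")[:40])
```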
So if a tape, as Lescher puts it, falls off a truck, it's useless to anybody who finds it because all the data on it is already encrypted. That data is much better protected because this organization's security and backup capabilities have now collaborated to work better.
Scale that idea up to the level of production servers and it gets even stronger. Enterprise infrastructures, of course, are chock-full of critical business data kept on disk arrays. Can the same IBM solution help protect that data as well, in basically the same way?
It certainly can. And because you're using the same solution to do multiple jobs, you avoid making things overly complex as well -- a common enemy of progress in the world of IT.
Another point: encryption can also help organizations more easily comply with government regulations (example: HIPAA) concerning sensitive data (example: patient health records). That's more important than ever, given the way compliance failures increasingly lead to stringent fines -- not to mention negative publicity and serious brand damage -- if data is exposed and customers are affected.
"Effective data protection can be complex to the point of seeming like rocket science," said Lescher. "The complexity of encryption technology can scare storage and security administrators away from using effective protection controls. So simple, integrated security is essential for both peace of mind and critical data protection."
Data protection means never having to say "it's gone forever"
Of course, backup tapes are just one element of storage. You can make essentially the same case for storage management in a larger sense. Generally speaking, you want to be able to protect data as comprehensively as you can, everywhere you can, while introducing as little new complexity as you can to get it all done.
Talking to Rich Vining, IBM Tivoli Storage Marketing Manager, drove that point home for me.
"When someone says data protection, do you think of backup and recovery, or encryption and access control?" he asked. "Because they're both directly relevant and they both need to be addressed. Are you confident that during your next data disaster, the right person with the right training will log into the right system, restore the right data to the right place, do it quickly enough to limit any losses and not break anything else? If you've deployed a number of different point solutions from different vendors to address the complex needs of your business, the answer is probably no."
This scenario illustrates data protection from a fundamentally different angle -- the idea that even without malicious attacks or inadvertent backup tape losses, an organization can put its own data at higher risk through problematic storage management. It can slow down backup and recovery processes, skip data that should never be skipped and ultimately lose critical data.
That prospect is enough to give business leaders the heebie-jeebies.
It also underscores the charm of backup solutions like IBM Tivoli Storage Manager that centrally and comprehensively back up, archive and restore all enterprise data, everywhere it exists, quickly and cost-effectively.
"I like to think of data protection as being comparable to health insurance," said Vining. "When something goes wrong, whether it be the flu, an accident or something much more serious, you better have good insurance to keep from ruining your financial as well as your physical well-being. Same thing with data protection -- its value comes into play when something goes wrong, avoiding the huge costs of lost data and business downtime."
It's an interesting parallel, and a timely one given the nation's current interest in healthcare reform and the various ways we might go about it.
In healthcare reform, the fundamental problem reformers would like to address is escalating costs, i.e., insurance premiums that climb every year. A direct parallel to that situation exists in the world of data protection, in the form of escalating data volumes, which similarly grow every year. Data is also increasingly scattered -- distributed over more endpoints and servers than ever before, and in more ways. Conventional backup solutions and strategies often no longer suffice to handle it all, and even if they have the capability, they often don't have the time.
That means more and more data goes unprotected every year. And that's just not acceptable given how critical data is to business operations and strategies. What's the fix?
Vining's answer: Smarter backup solutions, like Tivoli Storage Manager.
"One of the biggest, if not the biggest, cause of data growth is performing full backups every week, which most data protection products force you to do," he said. "That's because of needless redundancy. Your full backups probably contain more than 90 percent of the same data you backed up last week, and the week before and so on. Why not avoid creating all that duplicate data by only performing incremental backups -- forever?"
Indeed, why not?
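For what it's worth, here's a rough sketch of the incremental-forever idea (my own simplification, not Tivoli Storage Manager's actual change-tracking): only files whose content differs from what the backup catalog already recorded get copied, so the weekly mountain of duplicate data never gets created in the first place.

```python
# Rough sketch of "incremental forever": back up only what changed.
# Purely illustrative -- real products track changes far more efficiently.
import hashlib
import os

def file_digest(path: str) -> str:
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

def incremental_backup(root: str, catalog: dict) -> list:
    """Return the files that need copying, and update the backup catalog."""
    to_copy = []
    for dirpath, _, files in os.walk(root):
        for name in files:
            path = os.path.join(dirpath, name)
            digest = file_digest(path)
            if catalog.get(path) != digest:  # new or changed since last run
                to_copy.append(path)
                catalog[path] = digest
    return to_copy
```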
Get your thumb on the pulse of data protection
If you'd like to find out more about these subjects, think hard about attending Pulse 2012.
You'll get a chance, via technical demos exploring real-world scenarios, to see how security and storage management can work hand-in-hand to protect your data -- everywhere it lives throughout your infrastructure -- and direct specific questions to solution and business process experts from around the world.
About the author
Guest blogger Wes Simonds worked in IT for seven years before becoming a technology writer on topics including virtualization, cloud computing and service management. He lives in sunny Austin, Texas and believes Mexican food should always be served with queso.
IBM recently conducted tests in its labs that revealed IBM Cognos BI v10.1.1 performance to be at least on par with, and better by 14 to 46 percent than, the same workloads running on Microsoft Windows 2008 Server.
IBM Cognos BI application performance between similarly configured IBM POWER6 and IBM POWER7 systems showed significant performance advantages for IBM POWER7 servers.
IBM conducted a variety of tests to match the different ways of using IBM Cognos Business Intelligence services and system resources. The test systems used similar server configurations and current-generation processors. Download the free report here.
Other findings included:
• Performance improvements of as much as 41 percent for workloads such as running HTML and PDF-based reports and portal navigation
• Performance improvements of as much as 26 percent for workloads such as running large and highly formatted PDF reports, locally processed calculations, interactive analysis activities and complex queries mixed with lighter workloads
For example, an IBM customer had developed a Cognos Business Intelligence application to distribute PDF-based reports by email; as implemented, and before optimization, this application performed at a rate of 11 multi-page reports per minute.
After the customer applied recommended AIX tuning parameters, the application performance improved to 150 multi-page PDF reports per minute.
On average, most applications might see performance improve twofold or threefold by applying AIX-level tuning.
To provide a comprehensive view of the potential performance impact of optimizations made in Cognos Business Intelligence v 10.1.1, IBM used a broad range of tests. See the graphic that lists the performance improvements for the 20 different tests used.
For more information:
• Download the white paper, "Best Practices and Advantages of IBM Power Systems for Running IBM Cognos Business Intelligence," to see the full performance results.
NOTE: Performance is based on measurements and projections using standard IBM benchmarks in a controlled environment. The actual throughput or performance that any user will experience will vary depending upon many factors.
For 50 weeks each year, Tennis Australia is a typical small business. But for the two weeks of the Australian Open, tennis fans all over the world stampede to australianopen.com for a deluge of news and content -- including real-time analysis of every shot in every match, if they want it. Since 2009 IBM has helped Tennis Australia take this annual traffic spike in stride, with a private cloud solution that has handled a 45% increase in visits while driving down the cost per visit by 35%. Learn more about how IBM helps this company quickly, gracefully and economically expand and contract its IT infrastructure as needed.
You say you're a midsize business all 52 weeks of the year? Not to worry: we've got a cloud solution for you, too. Register for a 3-minute demo (and other resources) to learn how enChoice, an IBM Business Partner, can bring bottom-line cloud benefits -- including pay-as-you-consume pricing and enterprise-class performance, reliability and disaster recovery -- to your company.
With over 50 customer speakers, analysts and a wealth of IBM experts in security, Pulse 2012 will be an excellent opportunity to learn more about best practices and emerging topics in information security. Rather than telling you over and over again why it's such a great opportunity, I'll just show you by taking a look back at Pulse 2011.
Steve Robinson gave a keynote address that really set the stage for the rest of the conference. He reinforced where IBM is focusing its security capabilities and how we see the security landscape changing. Mobile was one of the key topics of his speech, and there is little doubt that BYOD became one of the most talked-about topics in IT last year.
This year, expect to hear not only more about mobile security (make sure to visit the mobile security ped in the Expo center), but also the trends we think are going to become important in 2012. With the addition of Q1 Labs and the formation of the new IBM Security Systems division, this will certainly be our most exciting year at Pulse to date.
The topic of advanced and targeted attackers is consistently one that generates a great deal of discussion. In this Pulse 2011 breakout session, Tom Cross, Manager, Threat Intelligence and Strategy, IBM X-Force, talks about approaches that you can take to combat these sophisticated attackers.
This year, Tom will be back to talk in more depth about the constant evolution of sophisticated attackers. This is sure to be one of the can't-miss sessions at Pulse again this year.
We love hearing from the analyst community at Pulse. Last year we were lucky enough to have Scott Crawford join us and share his thoughts on the conference. One of the topics he noted was the importance of application security and how IBM had demonstrated real leadership in this area. Who are we to disagree! However, he also went on to briefly talk about ideas around integration and information management.
This was a trend that had been similarly observed by IBM, and later on in 2011 IBM acquired security intelligence vendor Q1 Labs. This year Pulse will have some leading analysts from around the world (and across different areas of IT), and IBM Security is thrilled to welcome Forrester Principal Security analysts Stephanie Balaouras and John Kindervag, who will team for an entire breakout session highlighting Forrester's view of how to rethink and redesign security for the future.
Make sure to also visit us in the expo area, where we will have experts across all disciplines of security present to talk about how our products and services can help with your business challenges. If you joined us last year, you were one of the first to meet "beast," or the IBM Security Network Intrusion Prevention System GX7800 as it's also known (it doesn't roll off the tongue the way "beast" does, though).
We introduce tons of new technology all the time, and have certainly announced quite a bit since last year, so make sure to stop by to learn more about the latest and greatest.
More than anything, if you are a security pro who is also invested in other emerging areas of technology, this is the conference for you. IBMers and our clients from around the world will be talking about new technologies like cloud, mobile, smarter computing, Watson... Grady Booch will be interviewing Steve Wozniak... there will be plenty about how security is going to be a part of shaping the greater IT landscape. We hope to see you there.
• "My business is changing on a weekly and sometimes daily basis, and in order to stay competitive I need quick access to the data without IT getting in my way."
Are these comments common inside of your organization?
It's an interesting battle of wits: business users need fast, agile access to information, while IT needs to ensure governance and control.
IT is often painted as the bad guy because it creates roadblocks and can't deliver what the business wants -- quickly and consistently.
Business users are viewed as spoiled brats who have no patience or vision and ultimately rebel.
It's a dysfunctional relationship that persists only because these "factions" are more similar than they realize. And they need each other. It should be very symbiotic -- if only they realized it.
They are both working towards the same goal: driving the business forward.
But, in order to feel like they are accomplishing their goals, they need freedom from each other. Some might say they need an "open relationship."
IT doesn't want to be saddled with a barrage of mundane requests. Business doesn't want to be constrained by the complicated systems and processes IT has set up for them.
Ultimately, business wants to live in a world where they can easily access the information they need (from any source), manipulate the data without having to be a spreadsheet programmer, and share it with others.
IT wants to be able to leverage the analysis the business user has been working on and still maintain the governance and control to ensure consistent information and use of that information across the organization.
Whichever side of the aisle you sit on, there is an answer -- and an easy way for both sides to be successful. In fact, both sides can have their cake and eat it too.
We invite you to view the first chapter of our Business vs. IT story in the video below.
When you hear the phrase "team-building exercise," what comes to mind? If you're like me, you get an image of a bored group of people listening to a consultant. The consultant asks Person A, who is blindfolded, to fall backwards, trusting that Person B will make a rescuing catch. (In the Hollywood version, Person B is unexpectedly distracted and Person A brains himself on the concrete floor.)
The trouble with this sort of exercise, as I see it, is simple. There is not, in the usual course of business operations, much in the area of wearing blindfolds and falling backwards. It just never comes up.
This being the case, a better team-building exercise would recreate more accurately the specific challenges that people do experience every day in their jobs.
Furthermore, it would do that in a more accelerated and quantified manner than would be possible in real life. That way, any lessons learned could be learned much more quickly than would be possible on the job, and participants could get a sense of just how effective (or imperfect) their collaboration really was.
If you've ever seen the military simulations that fighter pilots use in training, you know what I'm talking about. The basic idea is to give these pilots a way to learn that
(a) closely recreates the real experience of flying a plane
(b) can be executed much more quickly than really flying a plane
(c) assigns the pilot a score, and thus puts performance in very clear terms
(d) doesn't risk the daunting possibility that a stupefied newbie pilot will steer an $80 million Lockheed Lightning plane into a mountain.
What if you could take that basic premise, and apply it to IT complexities -- creating a kind of simulator of them? Wouldn't that be a powerful learning experience, capable of teaching people all kinds of complex lessons in short order?
Are you up for the challenge?
Going beyond the fighter-pilot simulator described above, this Simulator focuses not on individual performance, but on team performance.
The idea of the Simulator is to assemble a team of 15 to 20 players in a room, assign them different job roles and simulate a real-world organization facing typical real-world business and IT challenges.
Then hammer them with those challenges and see how well they do.
The roles vary widely both in terms of hierarchical rank and job duties:
Senior management (executive team)
Service desk staff
Technical support services
Furthermore, the hypothetical logistics organization where they work focuses on shipping and fulfillment, and like all such organizations, holds itself to an incredibly high standard of performance.
Remember this slogan? "FedEx... when it absolutely, positively has to be there overnight."
You can see the guarantee implied by that kind of language. So the collaboration between team members at this organization has to be as seamless and friction-free as possible to increase the odds of that guarantee working out for the maximum number of shipments.
When issues arise (and the Simulator is just merciless in this respect), those issues have to be isolated to root causes, assigned to the right people and resolved lickety-split. Otherwise, deadlines will be missed and the organization will sustain a quantified business impact.
And since the scoreboard in the Simulator is constantly updated to reflect revenue and profitability in specific dollar terms, that impact will be painfully clear.
Pull a few ITIL rabbits out of the hat
Now, it does help quite a bit that this organization (just as real-world organizations) has a powerful resource to draw on in accomplishing all of these goals.
I'm referring to ITIL -- the Information Technology Infrastructure Library, aka The World's Leading Best Practices Framework for IT People. In the latest version (v3), ITIL was updated specifically to address service management issues of the sort you'll find in the Simulator.
However (as those with experience in best practices have discovered for themselves), ITIL isn't really a 1-2-3-4 ops manual. It doesn't talk about particular solutions from particular vendors, or how to use and combine them. It instead talks in more abstract terms about basic tasks (like trouble ticket assignment or server provisioning or resource allocation). Then it leaves the implementation up to you.
So, while team members can lean on ITIL concepts and practices to get a higher score, they'll need to figure out all the details for themselves. Just like they'd have to do in the real world.
A session with the Simulator runs for several hours, and teams will get a chance to play several rounds (each taking about an hour). They're usually going to need several rounds, too.
This is a hardcore, no-holds-barred, spit-on-your-corpse-and-laugh sort of game, and it's not for wannabes. For instance, when stuff goes wrong, and it's always going wrong, a loud horn blares. If you don't like that, or if you find it distracting, too bad. Perhaps you can find a Pac Man machine somewhere in Vegas and play that instead.
But for those who really engage with the Simulator, and make a serious, sustained effort to learn and improve, the payoff will be considerable: a drastically improved comprehension of what it takes to make ITIL concepts fly in a pressure-packed environment that closely recreates the real world.
David Ojalvo, from IBM's Service Management group, can bear witness to that. After watching an early version of the Simulator in 2011, he made this observation:
"After three hours and three rounds, the group was both exhausted and exhilarated... I had a chance to interview several of the participants after the session, and they were all effusive in their praise for the workshop. Clearly, the workshop far exceeded their expectations, and they were anxious to share the experience and apply some of the best practices at their own organizations."
Holding a mirror up to real life
Toward that end -- practical application -- the Simulator has been tweaked to reflect the way organizations have changed in recent years.
For instance, beyond ITIL implementation and service management complexities, it now also incorporates a second organization alongside the logistics company. This second organization is an external service provider that handles some (but not all) of the IT services the logistics organization is responsible for.
If that has a familiar ring to you, ponder the phrase �third-party cloud host� and consider how much more popular those have become in the last year or two. IBM is aware of that development and has taken it into account.
The result is that the game now actually involves two hierarchies, two infrastructures and twice the total required collaboration -- all of which makes it harder than ever. (I told you it was merciless.)
And, of course, the challenges that come up vary not only in nature, but also in timing. So don't be surprised if you get slammed with four different challenges simultaneously, and have to conduct an improvised triage to decide what to do first. This represents a challenge in itself, and it can make or break the eventual score teams get -- just as problem prioritization can make or break real-world businesses.
Maybe all this sounds a little intimidating? Well, it's meant to be. If it weren't, it wouldn't be much of a simulation. But more importantly, when all is said and done, it's also fun.
"In my opinion," said analyst Rich Ptak of Ptak/Noel after attending the Pulse 2011 Simulator Workshop, "this was by far the most fun and engaging workshop I've attended in a long time. This opinion was confirmed with other attendees... I wasn't ready to quit at the end of the three hours. I was really involved and want to go for more. If you get a chance, take this workshop, but watch out: the scorekeeper has lots of surprises for you."
Think you're ready for the Simulator? Register to attend Pulse 2012 and find out!
The Workshop will be held Sunday, March 4, from 2:00 to 5:00 pm in Room 306, located on Level 3 of the MGM Grand Hotel Conference Center. To receive additional information, email Tivoli Marketing at firstname.lastname@example.org and include the following details: confirmation that you want to attend along with your name, title, email address, and cell phone number. You will receive a return email from David Ojalvo confirming your participation in the session.
About the author
Guest blogger Wes Simonds worked in IT for seven years before becoming a technology writer on topics including virtualization, cloud computing and service management. He lives in sunny Austin, Texas and believes Mexican food should always be served with queso.
In the simplest of definitions, analytics is all about maximizing probability.
In other words, how do you use the information around you to gain a better advantage?
For marketers, business analytics has become an easy way not only to measure and prove success, but also to support the decisions that drive campaigns, help anticipate customer actions and even guide the selection of messaging and content.
Yes, a scientific approach has become an absolute necessity for modern marketing.
Let's not scoff at the idea of cold, clinical data driving marketing decisions. Heck, it's been proven that spending $1 on business analytics technology will yield almost $11 in return.
Using analytics to drive better customer experience unshackles the organization from ignorance and maximizes the probabilities for increased customer loyalty, better up/cross-sell and sales conversion.
Organizations in this first stage focus their analytics capability on gaining insight into cost reduction rather than consumer personalization.
Most marketing efforts focus on segmentation efficiency, such as increasing the conversion of a selected group of customers by reducing and removing redundant messages (for instance, avoiding delivery of identical catalogs to multiple household members), thus lowering the cost of communication.
These firms can increase customer retention by up to 9 percent, capture 2 percent more wallet share and convert an extra 3 percent of inbound contacts into a cross-sell event.
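As a tiny illustration of that kind of segmentation efficiency (my own sketch, with invented data), deduplicating a mailing list down to one catalog per household might look like this:

```python
# Sketch: send one catalog per household instead of one per person.
def one_per_household(recipients):
    """Keep the first recipient seen at each mailing address."""
    by_address = {}
    for person in recipients:
        by_address.setdefault(person["address"], person)
    return list(by_address.values())

recipients = [
    {"name": "Ana",  "address": "12 Elm St"},
    {"name": "Ben",  "address": "12 Elm St"},   # same household as Ana
    {"name": "Cory", "address": "7 Oak Ave"},
]
print([p["name"] for p in one_per_household(recipients)])  # ['Ana', 'Cory']
```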
Stage Two -- Sharing the Goods
To keep pace with the mobile generation, organizations within this second stage must have a clear customer analytics strategy that enables information sharing across any digital device and channel.
In fact, research shows that tri-channel buyers spent an average of two and a half times more than single-channel buyers.
The most sophisticated marketing organizations in this stage apply analytics for marketing event optimization, an approach that leverages analytics as a "horizontal control tower" to optimize the organization's various direct marketing events over a given time period across multiple channels.
This better aligns marketing with customers' needs -- and varying personas -- related to those devices/channels.
Stage Three -- From Reaction to Action
This stage focuses on information responsiveness.
These organizations are leveraging "raw" data as it streams in: customers' social commentary and changing moods.
To avoid a veritable data deluge, these organizations focus on identifying the questions that -- if answered -- will impact their business the most.
This acts as a filter on data collection and helps an organization avoid the task of collecting all available information and then deciding what to do with it after the interminable wait to standardize and analyze it.
Companies able to perform real-time external data analysis combined with rules-based actions have experienced average conversion rates of 16.9 to 38.2 percent.
Stage Four -- Next Best Action
This stage focuses on executing a strategy that enables information on demand.
This approach combines all the skills developed in earlier stages with in-depth segmentation approaches and leading-edge work in multichannel customer monitoring and real-time action recommendation (read: Decision Management).
Using predictive analytics (combined with business rules), organizations are able to engage with the customer throughout the buying cycle -- from the point of needs identification through the exploration and discovery phase to the purchasing decision.
Those companies able to apply real-time predictive analytics while executing a multichannel next-best action strategy had an average converted response rate of 24.1 to 64.3 percent.
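To give a feel for how predictive scores and business rules combine into a next best action, here's a heavily simplified sketch; the model, thresholds and offers are all invented for illustration and aren't drawn from any IBM product.

```python
# Simplified next-best-action sketch: predictive score plus business rules.
def churn_score(customer: dict) -> float:
    """Stand-in for a real predictive model's probability output."""
    return 0.8 if customer["months_inactive"] > 3 else 0.2

def next_best_action(customer: dict) -> str:
    score = churn_score(customer)
    # Business rules constrain what the model's score is allowed to trigger.
    if customer["do_not_contact"]:
        return "no action"
    if score > 0.7:
        return "offer retention discount"
    if customer["recent_purchase"]:
        return "suggest complementary product"
    return "send newsletter"

print(next_best_action({"months_inactive": 5, "do_not_contact": False,
                        "recent_purchase": False}))
```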
• Understand the different stages and get a better handle on your organization's analytics maturity by downloading the full "Customer Analytics Pays Off" white paper.
• Also, read the "Five Steps to Improving Business Performance through Customer Intimacy" white paper.
• Register for the "Customer Analytics Pays Off" webcast (Feb. 15 at 1:00 pm ET).
For now, you can continue to try and buy LotusLive services as usual; soon they'll be called IBM SmartCloud for Social Business.
You can also try the Beta for IBM Docs (previously called LotusLive Symphony), a set of cloud-based tools for collaborating on documents, spreadsheets and presentations.
IBM also announced that the next generation of its social networking platform, IBM Connections, will incorporate sophisticated analytics and real-time data monitoring -- so that organizations can quickly analyze the flood of data generated by social interactions, integrate it with data from other sources, and make faster, better decisions on the fly. Which apparently impressed at least two people at the show.
While much has been made about analyzing data to generate business intelligence, many of those same principles can also be applied to the concept of security intelligence. We can leverage security information gathered from all different points across an IT infrastructure and relate one event to another with the intention of identifying and remediating threats. The more data we can normalize and correlate, the more security intelligence we can begin to build and develop. We can put together different puzzle pieces in order to help us develop a better understanding of what it is we're looking at.
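As a back-of-the-envelope illustration of what normalizing and correlating can look like (a sketch of the general idea, not any particular SIEM's implementation; the field names are invented), events from different sources get mapped to a common shape and then related by user within a time window:

```python
# Sketch: normalize events from different sources, then correlate by user
# within a time window. Field names are invented for illustration.
from datetime import timedelta

def normalize(raw: dict, source: str) -> dict:
    if source == "vpn":
        return {"user": raw["uid"], "time": raw["ts"], "event": "vpn_login"}
    if source == "db":
        return {"user": raw["account"], "time": raw["when"], "event": "bulk_export"}
    return {"user": raw.get("user", "?"), "time": raw["time"], "event": raw["type"]}

def correlate(events, window=timedelta(minutes=10)):
    """Flag users with a VPN login followed closely by a bulk data export."""
    logins = [e for e in events if e["event"] == "vpn_login"]
    exports = [e for e in events if e["event"] == "bulk_export"]
    return [(login["user"], export["time"])
            for login in logins for export in exports
            if login["user"] == export["user"]
            and timedelta(0) <= export["time"] - login["time"] <= window]
```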
In security, the creation of meaningful data, and the automated analysis of that data, presents us with the opportunity to really wrap our heads around the overarching challenge of information security, and identity and access management is an excellent example of this. Historically, even with identity and access as a core foundation of information security, you may have viewed the task of user provisioning and the administration of access rights as more of a technical, systems management challenge: person X needs access to Y files, a system admin checks and verifies that person's rights and either grants or denies access. While that is all well and good, if you are an executive concerned about the possibility of something like insider threat, that approach to identity and access management isn't particularly helpful and won't provide you with the understanding you are probably looking for. This technology, however, is in a position to tell us much more.
First, logging into something can be a meaningful event on your network, and with it comes some pretty useful information, such as who is touching what and when. If you are trying to assess what happened during a security incident, or looking for anomalies, that information is invaluable. It is also most valuable when it can be understood within the context of other activity from across your IT Infrastructure. The promise of security intelligence is the ability to understand one activity or event within the context of another.
Secondly, role-based analytics can help transform identity and access management into a far more strategic and manageable initiative. In a company of thousands of people, provisioning identities and access rights individually is neither practical nor effective. It also reduces your ability to make larger, more meaningful business decisions. However, if you take a role-based approach to security, where you group common people based on common access needs, you can start to do a better job of defining what types of people are accessing what types of information. Not only is this approach more efficient and scalable, it helps provide insight into which users have access privileges that aren't consistent with those of other, similar users. Perhaps someone changed roles and can still see things they should no longer be able to see. This might represent a security risk, and identifying it means that business leaders can now decide how to manage it. On the one hand, you might remove those rights; on the other, organizations are complicated and not everyone will fit neatly into a defined role. Understanding which users have risky access profiles gives businesses something meaningful that they can monitor.
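A crude sketch of that kind of role-based check might look like the following; the data and threshold are invented for illustration, and this is not Security Role and Policy Modeler's actual algorithm. The idea is simply to flag entitlements that are rare among a user's role peers.

```python
# Crude sketch: flag access rights that deviate from a role's norm.
from collections import Counter

def risky_users(users, threshold=0.5):
    """Flag entitlements held by a user but rare among peers in the same role."""
    roles, flagged = {}, {}
    for u in users:
        roles.setdefault(u["role"], []).append(u)
    for members in roles.values():
        counts = Counter(e for m in members for e in m["entitlements"])
        for m in members:
            rare = [e for e in m["entitlements"]
                    if counts[e] / len(members) < threshold]
            if rare:
                flagged[m["name"]] = rare
    return flagged

users = [
    {"name": "amy", "role": "marketing", "entitlements": {"crm", "share_data"}},
    {"name": "bob", "role": "marketing", "entitlements": {"crm", "share_data"}},
    {"name": "eve", "role": "marketing", "entitlements": {"crm", "hr_records"}},
]
print(risky_users(users))  # {'eve': ['hr_records']}
```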
This field of Identity analytics is an area that IBM is investing heavily in. At the end of 2011 we acquired Q1 Labs, a Security Information and Event Management company that helps organizations do a better job of analyzing and understanding all of their different security events and data.
Security Role and Policy Modeler is now available as part of IBM's software offering for policy-based identity and access management governance. The new software allows companies to efficiently collect, clean up, correlate, certify, and report on identity and access configurations. Specific new functions include:
• Scoring metrics and analytics that give business users the ability to produce a more effective role and access structure. Users can be identified by the specific role they play in an organization. For example, a marketing team manager can allow employees to access market share data but not human resources information.
• A clearer view into the role structure -- such as organizational hierarchy charts and access exceptions due to business needs -- that can be managed throughout the users' lifecycle. For example, if an employee moves from one department or function to another, that employee can be granted -- or restricted from -- access to particular applications or business assets based on their role within the organization.
• A single web-based interface to create, apply and validate roles that have multiple members. For example, "physician" can be the group role and "cardiologist" or "radiologist" the member role. Each role can be assigned different access rights, mined to identify outlying behavior and validated for violations.
IBM's Security Role and Policy Modeler is based on IBM Research innovation. IBM has a rich history of innovation coming out of our research labs. IBM set a new U.S. patent record in 2011, marking the 19th consecutive year that the company has led the annual list of patent recipients. IBM inventors earned a record 6,180 U.S. patents in 2011, including more than 100 security-related patents, adding to more than 3,000 patents in IBM's security portfolio. The 2011 patents granted include advances in identity intelligence for authenticating user identity when resetting passwords, verifying personal identity and detecting fingerprint spoofing.
By leveraging innovation in areas like identity management, analytics will continue to play an increasingly significant role not only in how we manage and provision user identities and access rights, but also in how data around user activity can be understood in the broader context of a company's overall security posture.
In my last blog post, I said that one very common idea underlying best practices today is this: "faster is better." There are different ways to get faster, though. And some are certainly more appealing, in a given context, than others.
For instance, consider the context of IT development. This is a world of business logic, of algorithms rendered in specific code, and of the software development environments in which the first is alchemically transmuted into the second to create software-driven services.
Faster software-driven services mean faster (and more) business transactions. This is certainly better than slower (and fewer) business transactions.
Now: What's the most efficient way to make your software faster?
If you're an IT ops guy, you probably see the world through the lens of technology infrastructures. So your response would be something like this:
"We need to buy a faster host. Or, even better, redeploy the app on a grid or cloud architecture. That means we need to get the IT dev guys to rewrite the code so the app's work can be distributed in discrete chunks across that architecture for parallel processing. At that point, to get more speed, we can just add more physical hosts and virtual servers, as well as other resources like virtual storage or network bandwidth as required. Easy as pie."
But if you're an IT dev guy, you probably got a headache reading all of that, and you see IT ops guys as the enemy. (I'm kidding. Everyone knows IT management is the enemy.)
The idea of completely reworking and redeploying mission-critical applications along these lines sounds slow, risky and impractical. It's difficult enough doing the thing the organization already asked you to do: add new software capabilities to the existing codebase, which was created by completely different guys, at a completely different point in time years ago and intended for completely different hardware.
As far as performance optimization of the whole codebase goes? Well, every neat little trick you might add to the code, to speed it up, introduces the possibility of that app now breaking unexpectedly. And that is a totally unacceptable concept, because your organization depends on the software to create value for customers and thus miraculously make headway even in the current gloomy business climate.
So to you, the IT dev guy, what is the best way to speed up mission-critical software? Ideally, it would involve:
(a) no new coding or code-tweaking required
(b) no new risk that the code will break (because of the clever tweaks you added to speed it up)
(c) no catastrophic service downtime (that creates lots of media attention and generates an estimated $1 bazillion in lost revenue)
(d) no pink slips allocated to IT dev guys, due to the above
(e) no new hardware required
That sounds pretty dreamy. Is it actually possible?
Recompile your code, get faster software-driven services
Turns out that it is. I was fortunate to be able to talk to Roland Koo, Product Manager for Compilers at the IBM Software Solutions Toronto Lab, and he gave me the inside story.
"Upgrade your compilers," said Koo. "Move to better compilers, and all of that can happen. The compiler's job is to make life easy for programmers, so they can focus on getting the business logic right."
How do compilers deliver on this value proposition? Just consider what they do -- and how they work. After a programmer writes up business logic in code (using a specific language, like C++ or COBOL), the compiler then cruises through the code, translating it into machine code (processor instructions) for a specific processor. This machine code, in turn, is what actually runs on the IT production servers (or mainframe).
And because compilers are not all created equal, some do a much better job than others at generating fast machine code. The smarter the compiler, the more efficient will be the machine code it generates -- translating directly into faster software-driven services.
In this sense, then, compilers are much more than just one more technical element of software development. They are the most direct liaison between your software development team, which speaks one language, and the hardware your applications run on, which speaks another. So by investing in superior compilers, organizations can get both superior software and a superior business outcome from it.
Koo put matters even more directly than that: "You cannot maximize your return on investment unless you stay current with compiler technology."
I have to agree with him. Note how quickly organizations can get that improved ROI: simply install the new compilers, recompile the code as-is and deploy the new binaries the compiler generates. No risky code-tweaking is required. No new hardware is required. No new business risk of service downtime is introduced, because the code itself wasn't changed -- only the efficiency of the machine code it compiles down to.
New IBM compilers offer accelerated performance with no hardware upgrade required
Look at how that applies in the case of IBM System z compilers, for instance. System z mainframes run some of the most mission-critical services in the business world -- customer-facing online banking services, for instance. Better performance is always needed for such services, yet customer tolerance for downtime is practically zero.
So banks need a way to accelerate services without introducing new risk. That's exactly what IBM's new System z compilers, for COBOL, PL/I and C/C++, can deliver -- and not just for banking, but for any industry in which mainframe-based services face the same context.
Koo emphasizes that no new hardware needs to be purchased. "You do not need to upgrade hardware to upgrade compilers," he said. "In fact, upgrading compilers is a cost-effective way to get more out of existing hardware technology. You can take advantage of new improvements in both optimization and programmer productivity."
In that second category, programmer productivity, another point to consider is that IBM's compiler technology leverages IBM's strengths in related areas: development tools, middleware, databases (like DB2), transaction systems (like CICS and IMS) and modern application development tools such as IBM Rational Developer for System z and Rational Team Concert for Enterprise Platforms, which together provide a high-productivity environment for developing business-critical applications. Because IBM offers them all, it can also optimize its compilers in ways no competitor can, delivering even better performance for code that involves IBM middleware via integrated pre-processor support.
Finally, while hardware upgrades aren't essential to get impressive, measurable business benefits from a new compiler, a new hardware/new compiler combination is unquestionably a great way to go, given the option.
In fact, IBM's own internal tests have shown up to a 60 percent performance improvement on zEnterprise (the eleventh generation of System z mainframes) for C/C++ applications, compared to running the same applications on System z10. That's probably not too far from what organizations with IBM mainframe-driven services can expect to see as well.
How are you accelerating mainframe applications these days?
About the author
Guest blogger Wes Simonds worked in IT for seven years before becoming a technology writer on topics including virtualization, cloud computing and service management. He lives in sunny Austin, Texas and believes Mexican food should always be served with queso.
First on the list was the risk of failure: Most scientists I approached favored their own individual projects and career tracks. And who could blame them? This was an effort that, at best, would mingle the contributions of many. At its worst it would fail miserably, undermining the credibility of all involved... I was willing to live with possible failure as a downside, but was the team?
Then there was the solitary and ego-driven nature of scientific research: Scientists, by their nature, can be solitary creatures conditioned to work and publish independently to build their reputations. While collaboration drives just about all scientific research, the idea of "publishing or perishing" under one's own name is alive and well.
As we now know, Ferrucci was able to entice enough researchers to his cause (the team grew from 12 to 25). And yes, to a member they were indeed brilliant and accomplished. But Watson was a project unlike any other. Ferrucci knew he'd need to change the way team members worked with each other.
This is where collaboration comes in. Ferrucci writes:
From the first, it was clear that we would have to change the culture of how scientists work. Watson was destined to be a hybrid system. It required experts in diverse disciplines: computational linguistics, natural language processing, machine learning, information retrieval and game theory, to name a few.
Likewise, the scientists would have to reject an ego-driven perspective and embrace the distributed intelligence that the project demanded. Some were still looking for that silver bullet that they might find all by themselves. But that represented the antithesis of how we would ultimately succeed. We learned to depend on a philosophy that embraced multiple tracks, each contributing relatively small increments to the success of the project.
Ferrucci and Watson succeeded because of vision, collaboration and a willingness to break down cultural barriers. Whether you're at Lotusphere or simply following along, I invite you to think about where those attributes can take your own organizations.
The curtain came up on Lotusphere and IBM Connect this morning down in Orlando, and if the initial flurry of social media activity is any indication, the first IBM software conference of 2012 is off to a great start. With an opening general session featuring guest musicians OK Go and actor Michael J. Fox, it took less than 30 minutes for attendees to drive #ls12, "Michael J Fox" and "Social Business" to the top of Twitter's trending list.
There's a lot more collaborating to do and I'm looking forward to tuning in again tomorrow. Remember, if you can't be there in person you can watch tomorrow's opening general session and follow along with the fun through the Lotusphere / IBM Connect Social Media Aggregator.
Understanding Big Data: Analytics for Enterprise Class Hadoop and Streaming Data is written by a team of five IBM data management experts. It explains, in very straightforward terms, what "Big Data" is and why it's important. It provides a terrific primer on the what, why and how of Hadoop, the open source Big Data platform -- plus lots of the other lingo that all Big Data customers (soon to be everyone) need to know. And -- truth in advertising here -- it describes IBM's Big Data solutions in some detail.
Oh, one more thing: It is flying off the virtual shelves. I don't think it's an exaggeration to call it the Big Data "must-read" of the moment. So download your copy before the moment passes.
Must-see Social TV. Starting at 8:00 AM ET on Monday, January 16, you can watch Lotusphere 2012 General Sessions -- including the ever-popular live product demos -- on the Livestream IBM Software Channel. And you can return to the channel often for interviews from the show floor, recorded sessions, and other videos you can watch live or on-demand. It's nowhere near as good as being at Lotusphere, but it's required viewing if you wish you were there. Tune in.
Some people might argue, but former rapper and musician Vanilla Ice was a visionary.
Truth be told, he probably wasn't talking about business analytics when he eloquently penned those famous lyrics in "Ice Ice Baby." But he could have been.
We live in a collaborative world today -- whether we like it or not. The realm of "social" is slowly merging the personal and the professional, ultimately making life more efficient and transparent.
And some people and organizations are still rejecting this notion altogether.
Which is why, at a company of approximately 400,000 people with team members spread across the world, collaboration is a way of life and a necessity in the IBM survival kit.
It bridges the gap of the world of social with the world of business. It allows us to now connect people and insights to gain alignment inside of the organization, as well as hold people accountable.
Decision making is no longer a game of telephone in which important elements of a decision are lost as it is passed on, one person at a time. When the decision is finally executed, does anyone even know whether it was right, whether the right people were involved, who made it, or why?
That�s where the power of business analytics and collaboration come together.
Organizations can lose tremendous productivity as they search for invaluable information hidden in various meeting notes, manual processes, emails and people�s notebooks.
Collaborative business intelligence (BI) streamlines and improves decision-making by providing capabilities for forming communities, capturing annotations and opinions, and sharing insights with others around the information itself.
It also allows organizations to communicate and coordinate tasks to engage the right people at the right time.
In fact, industry analyst Dave Menninger from Ventana Research commented that "innovative organizations recognize the processes involved in BI are as important as the technology and take steps to provide collaborative support to their BI activities."
With built-in collaboration and social networking, collaborative BI harnesses the collective intelligence of the organization to connect people and insights and gain alignment.
What was once a dysfunctional, buffet-style decision-making process is now a formal dining experience, with collaborative BI as the lazy Susan passing reports and dashboards around the table for feedback and discussion.
Everyone now has input into the process, can easily connect with and understand context alongside others who are relevant to the decisions being made, and can learn from history with a centralized corporate memory.
But realistically, before we can all sit down and enjoy this collaborative feast, it must be an accepted practice in the organization.
Culture is at the heart of this: the organization has to want it to happen. Collaboration cannot be forced.
And, once you have embraced it -- well, there's no turning back.
Before too long, you have access to the people and expertise you need to discuss and refine ideas, data and information for the best results.
Had Vanilla Ice lived in today's world of social networks and business analytics, he might have been able to lengthen his career, better market himself, sell more records, write better songs, connect with fans and shave fewer eyebrows.
Ok, maybe not.
But, he would have lived true to his mantra of collaborating with his producers and writers and listening to the general collective before making any decisions.
(I apologize if you now have Vanilla Ice stuck in your head for the rest of the day, but at least you'll be thinking about how you can establish collaborative BI processes across the organization.)
Learn more about IBM Cognos Collaboration by:
• Registering for the January 17 IBM TechTalk, "Enabling Better Decision Making Through Highly Collaborative BI" (begins at 12:00 pm ET)
• Watching the demo to see how to use built-in collaboration and social networking tools to connect people and insights
What's the ideal private cloud? The one you never have to think about. Lately I've done a certain amount of writing on best practices. And as usual, when I write about something, this means I find myself wanting to apply the root idea in all sorts of new ways.
So almost against my will, I come up with best practices for things like making espresso, playing scales on a guitar and finding a restaurant in an unfamiliar part of town.
This morning it struck me that there are underlying concepts that practically all of these best practices have in common:
1. Faster is better.
2. Cheaper is better.
3. More consistent results are better.
4. More automation is better.
That last idea is particularly powerful.
The less I have to think about any of the details of (for instance) making espresso in the morning, the more I can think about whatever I actually need to do that day instead.
This is helpful because without coffee, I am nearly useless and can be confused by doorknob locks. I need all the brainpower I can get. So it's nice being able to pull a shot or two of good espresso -- not a trivial job -- completely on autopilot.
It also occurs to me that if you apply those four concepts I list above to IT infrastructures, you get a pretty clear sense of where things have gone in the last 10 years -- and where they're heading. To wit: higher performance, lower costs, greater consistency and smarter automation.
And once again, the fourth one is particularly powerful. Especially in a cloud context.
A significant chunk of the appeal of cloud architectures, to business leaders, is simply this: You don't have to think about the details of the technology (which is relatively unimportant). You can instead think about what you're trying to accomplish with the technology (which is very important). You can more easily go, in a business sense, from early-morning, pre-coffee haze to clarity about your strategies and their execution.
And this is surely part of why, in recent years, pay-as-you-go public clouds have flourished, but private clouds (despite lower operating costs over time) have been a little slower off the mark. Private clouds, being private, require creating and maintaining the actual cloud infrastructure in-house. I imagine business leaders look at that idea and say, "Oh, geez, we're back to the details of the tech."
This being so, the more IT solution providers can simplify and accelerate private cloud creation and maintenance, the more successful private clouds are likely to be.
Day one: No private cloud. Day two: Killer private cloud.
Well, the IBM SmartCloud initiative, launched last fall, strikes me as being directly on point in this respect.
When I talked to Murtuza Choilawala, Product Manager for Cloud Solutions with IBM Tivoli Software, he agreed -- pointing specifically to provisioning as a key cloud capability.
"Simplifying and accelerating private cloud rollout -- of new services, new servers or the whole cloud -- is really what SmartCloud is all about," he said. "A big part of that comes thanks to a particular element of the offering called IBM SmartCloud Provisioning. Using it, organizations can easily get a private cloud up and running, starting with nothing but the hardware, in less than a single business day. If you're looking for a fast, effective implementation of business strategies, where you don't have to struggle with the technical details? You can't do much better than that."
A single business day? To go from no private cloud to an up-and-running private cloud? "Game-changer" is not a phrase I like to use, but it seems to apply in this case. In fact, an upcoming Tech Talk on Cloud Computing to be held January 18 with Choilawala will show you why.
Further conversation revealed that this is no coincidence. SmartCloud Provisioning was designed specifically for the scenario I described above: An organization is interested in private cloud. It likes the idea of the reduced operating costs that come from owning its own cloud. But it's leery of the expected setup, maintenance and management a private cloud will require.
Forget about the usual hassles of setup, maintenance and management
What IBM SmartCloud does is reduce all three of those problem areas to a bare minimum.
Setup, for instance. In this area, SmartCloud Provisioning discovers new host (node) hardware, then allows administrators to create new virtual servers that will run on that hardware, and provision those virtual servers from an image library, at blistering speeds. How blistering? IBM cites up to 100 VMs in less than three minutes.
This strikes me as an incredibly fast, Usain-Bolt-level rollout. I'd watch it happen with a sort of stunned expression, blinking at the painful memory of what it was like to be an IT guy manually provisioning one server at a time, and envious of IT guys who will never have to deal with that experience.
The setup capabilities get stronger yet. Let's say your private cloud services are more successful than you expected and demand is higher. So you decide to scale up your private cloud by adding more hardware.
Turns out that you can simply add that hardware and start using it right away -- basically, hot-swapping in new hosts -- without bringing down your cloud or cloud services. This is because SmartCloud Provisioning will automatically detect the new hardware you've added, reflect the addition in your management console and give you the option to create new virtual servers running there.
Then your services will automatically leverage those virtual servers, scaling to meet the higher demand. It's really that simple.
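Conceptually -- and only conceptually, since I'm not going to guess at SmartCloud Provisioning's actual interfaces -- that hot-add behavior boils down to a reconcile loop: discover the hosts that are online, compare the running virtual servers against the desired count, and provision the difference from the image library onto the least-loaded hosts.

```python
# Conceptual sketch of a provisioning reconcile loop; all names are invented.
def discover_hosts(inventory):
    """Stand-in for hardware discovery; returns currently reachable hosts."""
    return [h for h in inventory if h["online"]]

def reconcile(inventory, desired_vms, image="golden-image-v1"):
    hosts = discover_hosts(inventory)
    running = sum(h["vm_count"] for h in hosts)
    for _ in range(desired_vms - running):
        target = min(hosts, key=lambda h: h["vm_count"])  # least-loaded host
        target["vm_count"] += 1
        print(f"provisioning VM from {image} on {target['name']}")

inventory = [{"name": "node1", "online": True, "vm_count": 3},
             {"name": "node2", "online": True, "vm_count": 0}]  # just added
reconcile(inventory, desired_vms=6)
```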
Maintenance is similarly effective because IBM SmartCloud Provisioning can leverage these same capabilities when things go wrong. Let's say one of those physical hosts, for whatever reason, goes offline. As it does, SmartCloud Provisioning will detect that change. It will also try to solve the problem on its own by rebooting or microbooting the host (or reinstalling it via PXE, which is used to boot servers remotely over the network).
Either that will work, or if it doesn't (due to true hardware failure), the IT team can simply pull the problematic hardware and replace it with a new host. Then SmartCloud Provisioning will detect the new host and reprovision it on demand.
"SmartCloud Provisioning is actually so advanced in this area that we're using the term self-healing to describe it," said Choilawala. "The idea is that when things go wrong, the private cloud should always notice that and, whenever possible, fix the thing that went wrong by itself -- like somebody with a headache taking an aspirin. It should hardly ever be necessary to go to the doctor. So medical bills (operational costs) go way down. And cloud productivity? Given the headache-free reality, that goes way up."
If you've gotten this far, you can probably see that management -- our third potential pitfall for a private cloud -- is also really straightforward. SmartCloud Provisioning keeps administrators constantly apprised of which virtual servers are up and where they are on which hosts. It also supports multiple virtualization environments ranging from VMware to Xen to KVM (Linux). The everyday management functions it doesn't handle automatically, it empowers IT team members to handle with minimal ado and (more importantly) practically zero service downtime.
Learn more at Pulse 2012
All this means that the private cloud becomes a much simpler, less intimidating and more pragmatic possibility for organizations today. And over time, as IBM continues to revise and enhance SmartCloud, that's just going to become more and more powerful an argument.
"SmartCloud Provisioning is best understood as a foundational offering," said Choilawala. "Already we offer complementary solutions like IBM SmartCloud Monitoring, which tracks cloud assets and status levels in more granular detail to give organizations higher service availability and performance, with what-if analysis and capacity management. And as more capabilities are added to SmartCloud, in areas like service management, it's going to become an increasingly superior private cloud platform."
Interested in knowing more? Attend Pulse 2012, to be held in Las Vegas March 4-7. Attendees can expect both an in-depth look at SmartCloud's capabilities today, via technical demos, and a sneak peek at its development roadmap for the immediate future.
Register for Pulse 2012

About the author
Guest blogger Wes Simonds worked in IT for seven years before becoming a technology writer on topics including virtualization, cloud computing and service management. He lives in sunny Austin, Texas and believes Mexican food should always be served with queso.