Good strategies require good information; I think we can all agree on that. If you want to fly to Austin, for instance, it's important that you establish whether you mean the Austin in Texas or the Austin in Minnesota before you buy your plane ticket. Failure to do so will threaten the success of your Austin-visiting strategy at a deep level.
You might also think of this in terms of the military phrase "actionable intelligence." If the intelligence isn't very good, the action you're contemplating probably isn't very well advised. (You could call that kind of information "actionable stupidity.")
For many organizations today, however -- especially the larger ones that have been around a while -- ensuring that information is good is far from easy. This comes as a consequence of many factors, including:
- The total volume of information, which is vastly higher today than it's ever been before. Is big data always a resource to be tapped? Or is it sometimes a challenge to be overcome?
- The age of information -- in too many cases it has long outlived its usefulness and may indeed be flat-out wrong; if it plays a part in strategies, those strategies will likely be compromised.
- The way information can change as it's used in many ways, by many people, to achieve many goals. The game of Chinese Whispers (also called Telephone) illustrates pretty well how easily and thoroughly that can happen.
- The fact that information can occur in multiple versions which differ from each other in subtle or blatant ways. Reconciling these different versions to arrive at a single accurate truth, and eliminating the versions which aren't true, is no simple matter.
Recently I discovered a blog on these subjects by an IBM expert, Dave Corrigan -- IBM's Director of Product Marketing for InfoSphere -- and was intrigued to find him discussing these various ideas in terms of trust.
It makes perfect sense, of course. If you're building an omelette out of eggs, or a house out of wooden beams, you need to be able to trust that they aren't rotten. And if you're building business strategies, processes and decisions out of basic information, the same logic applies.
This, in short, is the heart of information governance: maximizing the business value of information by maximizing its quality and trustworthiness in a variety of related and interconnected ways. A quick phone call with Corrigan confirmed this interpretation.
"Information Governance establishes trust in information," he said. "Without trust, organizations fail to capitalize on new insights. But when business users can trust information, they act upon insights from analytics and reports, and operate more efficiently when using enterprise applications."
This struck me as particularly interesting because of the implications. Picture a CIO who, having invested heavily in big data solutions, proceeds to collect piles and piles of data, runs his shiny new analytics tools on the piles and generates lots of impressive-looking reports, only to round-file the reports because, at some basic level, they just don't seem very trustworthy. Or, possibly worse, he uses the reports to make major decisions anyway, despite profound doubts about the wisdom of this course of action.
Talk about an indictment of technology! I asked Corrigan how common that scenario really was.
"More common than you'd think," he said. "Recent studies tell us that one in three organizational leaders frequently make decisions based on information they don't trust, or don't have. Half say they don't have access to the information they need to do their jobs. And 60 percent, a clear majority, think they have more data than they can use effectively."
Information governance is all about solving that problem. The idea is to make data more trustworthy so that you can then proceed confidently to use it in more ways, solve more problems and create more value -- both for yourself and for your clients, customers and business partners.

Six pillars of governance to support business goals and strategies
This, of course, is easier said than done. Fortunately, you don't have to do it alone. Corrigan explained to me that as a result of IBM's hundred-year history in business and an endless list of successful customer engagements, IBM has learned a thing or two about how information should be governed for best results -- actually, six things.
"Trusted information, as we see it, is dependent on six key technology aspects," said Corrigan. "Basically, you need to ensure that information is understood, clean, holistic, current, secure and documented."
Let's walk through those aspects briefly.

Understood information is information that has a clear, established context: its structure, its source and all associated metadata. Information has to be understood in this sense before definitions and policies concerning it can be shared across projects.

Correct information is just that -- correct. It's been standardized and cleansed, is in the right format and is known to be accurate. Logistics companies that ship products, for instance, need to be quite sure they have the correct shipping address, or customer satisfaction is going to take a major hit.

Holistic information is information that's been reconciled across all repositories, so that inaccurate versions of it are removed and a single accurate version is left. The logistics company above may have a correct shipping address on file for a customer, but it will also need to get rid of the five other addresses it has in other databases, all of which are completely wrong. (There's a sketch of this reconciliation right after this walkthrough.)

Current information is chronologically accurate. Keeping all information forever, as if it were all perpetually useful, will inevitably create problems. Instead, information should have an expiration date (rather like milk, water filters or members of Congress). This minimizes the odds it will influence decisions in ways it shouldn't.

Secure information has been protected and monitored over its lifecycle to verify that only the right people have seen it, changed it or used it in any way. One of the best ways to increase the trustworthiness of information is to keep the wrong people from getting access to it in the first place.

Documented information has a known lineage to establish its history. This is rather similar to the idea of provenance in the art world, used to reflect changing ownership. If you're planning to spend $50 million on a Picasso, you need to be sure it was not in fact painted eight years ago by someone named Steve. Just as with provenance, information lineage can be used to trace problems, guide decisions and yield a better outcome.
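To make the holistic and current ideas a little more concrete, here's a minimal sketch in plain Python -- emphatically not InfoSphere itself, and the records, field names and cutoff date are all invented -- of boiling several conflicting versions of a customer record down to a single trusted one:

```python
from datetime import date

# Hypothetical versions of one customer record, pulled from three repositories.
records = [
    {"id": 42, "address": "100 Congress Ave, Austin, TX", "verified": True,  "updated": date(2012, 5, 1)},
    {"id": 42, "address": "1 Main St, Austin, MN",        "verified": False, "updated": date(2004, 2, 9)},
    {"id": 42, "address": "100 Congres Av, Austin TX",    "verified": False, "updated": date(2011, 8, 3)},
]

EXPIRY = date(2008, 1, 1)  # "current": anything older than this has gone stale

def golden_record(recs):
    # "Current": drop records past their useful life, like old milk.
    fresh = [r for r in recs if r["updated"] >= EXPIRY]
    # "Holistic": of the surviving versions, prefer verified ones, then the
    # most recently updated -- so a single trusted version is what's left.
    return max(fresh, key=lambda r: (r["verified"], r["updated"]))

print(golden_record(records)["address"])
# -> 100 Congress Ave, Austin, TX
```

A real master data management tool does fuzzy matching, survivorship rules and much more, but the basic shape -- filter out the stale, rank the rest by trust -- is the same.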
All of these capabilities are provided by IBM's InfoSphere family, which includes leading solutions like InfoSphere Information Server, InfoSphere Guardium and InfoSphere Master Data Management.
InfoSphere solutions aren't just standalone tools; they interoperate at a deep level, forming a complete information governance solution. This solution, in turn, helps organizations get the best use out of information even in the most sophisticated cases, where information volumes are incredibly high, use cases are many and it's critical that the information be as trustworthy as possible.
Corrigan sees this interoperable design, in which governance capabilities are logically linked, as fundamentally necessary if major IT initiatives are really going to be successful in a pragmatic sense.
"Common projects that drive the need for integration and governance include newly installed enterprise applications, or data warehouse and big data systems that are the foundation of analytics and reporting," said Corrigan. "Improving the trustworthiness of information in each of those enterprise projects requires various combinations of the six aspects, through Information Governance technology, to fully satisfy requirements. That's why we see Information Integration and Governance as a common platform of integrated capabilities for data integration, data quality, privacy and security, lifecycle management, and master data management."

Additional Information

- Find out more about Information Integration and Governance
- Join the InfoGov Community and become a governance leader
- Read the Forrester report on turning data into business value
- Get smarter about smarter analytics at Information On Demand 2012
- Register now for Information On Demand 2012
- Listen to this podcast to learn how to manage and leverage information better

About the author

Guest blogger Wes Simonds worked in IT for seven years before becoming a technology writer on topics including virtualization, cloud computing and service management. He lives in sunny Austin, Texas and believes Mexican food should always be served with queso.
I'd like my work environment to go, please. Stick a thermometer into tech development clusters in 2012 and you'll quickly find that nothing's hotter than mobile.
There's good reason for that -- people love the portability and simplicity of the smart-device platform, be it tablets or phones, and they want to leverage that portability and simplicity for business purposes.
Thing is, organizations that are on board with that plan are still caught in a bit of a bind. They have to find a way to roll out mobile apps not just quickly, but also effectively -- a way that will take best advantage of the strengths of mobile, yet minimize or eliminate the weaknesses.
So, among other issues, that means thinking about:
- Device-specific implementation. The rapid proliferation of smart devices means that there are, today, a variety of platforms, each with its own look and feel and range of special features. If developers trot out new apps that don't take advantage of all that (aka "lowest-common-denominator" apps), they're failing to tap each device's full business potential. But if, on the other hand, they painstakingly develop apps independently for each particular platform, they multiply the total amount of work required and delay rollout.
- Server-side connectivity. It's not enough just to deliver mobile apps; those apps will have to link to the back end in a really seamless, smooth way if users are going to perform real work. So it's crucial to take into account the server-side architecture, not just the user's front-end experience.
- Security. Smart devices aren't, as a general rule, particularly smart in this area. But for business purposes, it's obviously essential that core resources like e-mail, databases and line-of-business services are only accessed in the right way, by the right people. The same argument applies in the context of regulation compliance. If the government says only certain guys should be able to access sensitive customer data, you'd better make sure that's the case -- whether those guys are using mobile apps or not.
This, I'm thinking, was the logic behind IBM's January acquisition of mobile application IDE provider Worklight. IBM doesn't ask my opinion on such topics, but if it had, I would have said Worklight was the goods.

Why? Go through the bullet list above and you'll see why.

IBM Worklight Studio makes hybrid development a piece of cake
Jim Zhang, Architect of Web Development Tools for IBM Rational, saw things the same way when I talked to him last week.
"The Worklight platform
That reference to hybrid is probably worth singling out for special attention. If you think about the way mobile apps can be created, the options are, roughly speaking, these:
- Native apps. These are totally platform-specific and as such, they take outstanding advantage of device-specific strengths. But, of course, they also require platform-specific expertise to develop; development takes longer and is more expensive; and in a world with half a dozen major mobile platforms, that's a real problem.
- Basic Web apps. These are built in Web-standard languages such as HTML, CSS and JavaScript and run in the device's browser, which makes them inherently cross-platform and comparatively cheap to build. The trade-off is that they can't take full advantage of device-specific features or the native look and feel.
- Hybrid apps. Here we get a kind of Goldilocks-approved middle ground -- app development that isn't too hard, or too soft, but is just right. The idea here is to develop in the Web-standard languages listed above (under Basic Web apps), but augment those languages with special libraries included in the integrated development environment (IDE); then execute those apps in a native shell to preserve each device's look and feel as much as possible.
As you might have guessed, the Worklight IDE -- aka Worklight Studio -- capitalizes on that third option. Which means that developers can write code once and run it anywhere (rather similar to the Java pitch circa 1995), and yet that code will run in a way that takes advantage of unique device strengths.
As Zhang pointed out, this alone must have made Worklight attractive as an acquisition candidate.
"The original Worklight Studio has features that almost perfectly complement the strategy IBM had set for mobile development," he said. "Worklight Studio provided a layered code structure for cross-platform re-use, the ability to generate native artifacts used by the platforms' own IDEs (Android Development Tools or Xcode, for instance) in order to bridge hybrid development with native development, and a build process that produces platform-specific application code out of the layers. Match that against what IBM Rational had been working on: mobile-specific editing (source code development or WYSIWYG UI construction) and testing capabilities. Perfect fit."
And beyond simplified, optimized cross-platform development, Worklight also helps ensure those apps will work as intended. How? Turns out that developers can also test new app builds inside any standard Web browser -- a lot more convenient than, say, a phone, or four different phones running four different operating systems.
Zhang was quick to point out the additional business advantages of that approach.
"Developers are going to love the way we've simplified and accelerated app debugging," he said. "Case in point is the browser-based device simulator. This test environment allows mobile hybrid or web applications to be tested and debugged in a desktop browser, making it much faster to test features or weed out bugs than using the native software development kit's simulators."

Worklight helps you optimize not just development, but the complete lifecycle of mobile apps
Another neat thing about Worklight is the fact that it's already integrated with other, related IBM offerings both inside and outside IT development per se. This way, IBM can ensure that Worklight creates as much value as possible, for as many people across the organization as possible -- pulling information from other environments, adding information to them or interacting with them in other ways that make good business sense.
For example, Worklight integrates with IBM's application lifecycle solution Rational Team Concert (RTC). This flexible offering leverages Agile development concepts to help organizations create software that's as feature-complete and bug-free as possible, yet get it all done faster, more easily, and at lower costs than via traditional development methodologies.
RTC's integration with Worklight means that those Agile strengths can be applied to Worklight-based mobile development, too. "The RTC client can be installed together with Worklight Studio, so that the mobile application development can be managed using RTC," said Zhang. "It's also pretty cool that the various types of builds needed to produce Worklight applications are supported by RTC build systems, too. So RTC capabilities are really being applied in multiple ways for greater value."
And Worklight is also integrated with IBM Endpoint Manager for Mobile Devices, a solution that is just about as flexible, and cross-device capable, as Worklight Studio.
What's the relationship between these two tools? You can think of Worklight Studio as the forge in which apps are created, and Endpoint Manager for Mobile as the method by which those apps are delivered to employee devices, secured, managed and updated thereafter.
Endpoint Manager for Mobile even supports creating an in-house enterprise app store -- a centralized repository for new apps and app updates that can be used by all employees throughout the organization.
The Worklight Studio/Endpoint Manager combination thus strikes me as a really end-to-end mobile app solution. Not only does it address every element of the IT infrastructure that involves mobile apps, but it also is end-to-end in another sense -- chronological. Using these solutions jointly, you can build, deploy, manage and ultimately retire apps. That's cradle-to-grave support for their complete lifecycle.
Additional Information

- Find out more about Mobile Development and Connectivity
- Native, web or hybrid mobile app development -- which approach is best for you?
- Watch this webinar on Harnessing the Power of Mobile in the Enterprise
- Learn more about IBM Mobile Foundation
- Try out the Worklight Mobile Platform
The problem with the phrase "business agility," if you ask me, is that it has now officially been used too many times, in too many different ways, to mean anything very definite.
I therefore propose getting rid of this particular adjective/noun combo and replacing it with a verb: respond. Four examples follow:
- Customer demand for a service increases? You respond by scaling that service up.
- Customers want a service that doesn't exist? You respond by creating it.
- Competitors are coming out with a similar service? You respond by doing yours better... and rolling it out first.
- Mainframe development teams are hobbled by legacy testing tools and processes? You respond by empowering them with better tools and processes.
That last one, in fact, seems particularly important because in many cases, it's the one that makes the first three possible.

Distributed to mainframe
Thing is, many organizations that use mainframes for production services often haven't really figured out the best way to use them to develop new software. Common problems in this scenario include:
- Really slow testing of new apps -- think: multiple months -- because test tools aren't automated and the number of test cases is off the charts
- Difficulty hitting compliance targets for the same reason. By the time tests have finally finished, more regulations may have rolled along
- Unwanted complexity when orchestrating tests for large projects involving many teams, components, new code units, etc.
Fortunately, it seems that IBM Rational not only has been listening to its customer base, but has also -- wait for it -- responded by rolling out solutions that deliver a better outcome. This happens through an idea called "continuous integration."
Now, if you develop for distributed systems, you're probably already familiar with this idea. Basically, it's this: You don't get all your developers to write code over a long period of time and then test it -- collectively -- at the end of that period. Instead, developers continuously test new code as it's written, and when it passes quality-control tests, they integrate that new code right away.
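In code terms, that loop is dead simple. Here's a toy Python sketch of the gate it describes -- the two shell scripts are placeholders for whatever actually drives your builds, tests and source control, not anything shipped by Rational:

```python
import subprocess

def integrate_change(change_id: str) -> bool:
    """Toy continuous-integration gate: test first, merge only on green.

    './run_tests.sh' and './merge_change.sh' are hypothetical stand-ins
    for your own test driver and source-control tooling.
    """
    # Run the automated test suite against just this one change.
    tests = subprocess.run(["./run_tests.sh", change_id])
    if tests.returncode != 0:
        # Red: the change stays out, and only this small change needs
        # rework -- not months of accumulated, untested code.
        return False
    # Green: integrate into the shared stream immediately.
    subprocess.run(["./merge_change.sh", change_id], check=True)
    return True
```

The whole point is how small the integration step becomes when it runs on every change instead of once at the end of the project.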
As a result, the total amount of recoding declines, projects are completed faster, applications work better and, once deployed, they create more value for both you and your customers/users. All of which means your organization is now a whole lot more responsive... and a whole lot closer to modernizing your enterprise.

Speed System z development by empowering individual developers
The point of IBM's new approach -- according to Rosalind Radcliffe, Distinguished Engineer and Chief Architect for Jazz for System z and Power -- is to take this excellent idea from distributed development and stir it into mainframe development.
"With continuous integration for System z," said Radcliffe, "we're giving the z/OS development community the same capabilities that have long been available for the distributed world. That means, among other things, giving developers their own test environments, and the ability to run automated tests without affecting production capacity."
There are actually quite a few ideas implied there, involving four different IBM solutions. Let's walk through them in a bit more detail.
If you want new code to be continually tested and integrated, obviously you have to give developers some way to do that. That means giving them local environments in which they can assess code quality, isolate problems and fix those problems -- all the things Java developers working on distributed platforms are used to.
IBM's response to that concept: IBM Rational Development and Test Environment for System z. You can think of this solution as almost creating a System z in miniature -- running on the developer's desktop or laptop -- which is good enough to test new app code. Because that environment isn't running on the System z proper, it's also not consuming System z resources, meaning that the System z is free to concentrate on up-and-running apps and services. (This is what Radcliffe means by "without affecting production capacity.")
Given that System z environment-on-the-desktop, how do developers create new local builds to test the new code? Answer: IBM Rational Team Concert -- a lean, collaborative solution well suited to this task.
Okay, so you now have local testing environments and local app builds running inside them. How do you optimize testing of those builds?
You give your developers the power to automate as much of that testing as possible. If there are two test cases to run, sure, a manual approach may suffice. What if there are 500? A thousand? Tackling a number like that manually is a real drag -- not just to the developer, but to the project as a whole, and ultimately, to the organization.
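For the flavor of it, here's a small Python sketch -- using the standard library's unittest as a stand-in for the Rational tooling, with an invented routine and made-up cases -- of how a table of hundreds of generated test cases gets run automatically instead of by hand:

```python
import unittest

# A stand-in for the routine under test.
def apply_interest(balance, rate):
    return round(balance * (1 + rate), 2)

# Imagine this table holds 500 or 1,000 generated cases, not three.
CASES = [
    (100.00, 0.05, 105.00),
    (0.00,   0.05, 0.00),
    (250.10, 0.00, 250.10),
]

class TestApplyInterest(unittest.TestCase):
    def test_all_cases(self):
        for balance, rate, expected in CASES:
            # subTest reports each failing case individually, so one run
            # covers the whole bucket with zero manual effort per case.
            with self.subTest(balance=balance, rate=rate):
                self.assertEqual(apply_interest(balance, rate), expected)

if __name__ == "__main__":
    unittest.main()
```

Whether the table holds three rows or a thousand, the developer's effort is the same: run it.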
This is where IBM Rational Quality Manager and IBM Rational Test Workbench -- the other two pieces of the continuous integration puzzle -- come in. Using them you can handle the code testing per se, automating much of it for a faster and more complete process. You can also test the web service interface -- the front end through which customers/users will actually experience the app, which may or may not need to change to match the new code.

Faster deployment, lower risk, easier project management, bigger smiles
The result? All the theory of continuous integration, and the intended benefits it's long generated in the distributed world, now apply to System z development, too. In particular, new application releases can be tested far more quickly, get rolled into production faster and start creating all the value they're supposed to create.
This is all likely to come as very welcome news to System z developers who may have gotten used to the idea that testing always means slow.
"In the past, application changes took far too long to deploy to production due to these long testing cycle times," said Radcliffe. "Months were needed to change an application even in a relatively simple way. Not any more. With our new approach, where we have automated test buckets, those months can fall to weeks -- or days."
There's also an improvement in risk -- a major concern in the case of mainframe-hosted applications that perform mission-critical tasks, on which the success or failure of the entire enterprise may depend.
Think, for instance, of banks that run applications that support online banking. I can't speak for anyone else, but if my online banking experience frequently seemed unreliable or inconsistent, I would most likely pack up my business and split, looking for another bank that knew how to get this stuff right. And banks, which are well aware of this line of thought, are perhaps a bit skittish about introducing application changes if they think those changes could lead to the packing/splitting outcome just described.
"Risk definitely declines with continuous integration," said Radcliffe. "That's because developers can begin to build up their regression buckets with automated tests for any areas they are currently developing in -- especially for areas that are expected to cause difficulty when changes are made. Then, by tracking and storing these tests, they can establish that actually making the changes will be significantly less risky -- a new build will actually do what it's supposed to do, the way it's supposed to do it."
Finally, project management is also enhanced. What had been a super-complex, difficult process of orchestrating testing and quality assurance across large teams of developers is now rendered much simpler and more consistent in a couple of different respects.
"First -- the new continuous integration solution means the testing process can be standardized across the entire team," said Radcliffe. "Because the process is standardized, it's much better understood, runs much more smoothly and yields better, more predictable results. Second -- the products used to carry out the process are also now standardized. Since everyone has the same tools, and is using them in the same way, you get simplified management and you have an easier time scaling projects. If a new developer is needed in a given group, it's very clear what tools she'll need -- and how she'll be using them."

Additional Information

- Learn more about Enterprise Modernization solutions
- Join Rosalind Radcliffe for an InfoWorld Webcast -- Smarter Development and Testing for IBM System z
- Check out additional resources on continuous integration
- Stay current on software & systems innovation with The Invisible Thread
- Find out how to increase your services without breaking the bank
Here's a straightforward proposition: Software is more and more critical to the success of business strategies. So it's getting more and more critical to develop that software properly in the first place. Sounds simple enough, right? Just hire good engineers who don't write spaghetti code and who play well with others. Problem solved.
Well, okay, that actually works pretty well for a software startup. At a tiny, new-to-the-world organization, you've got a brand new kitchen to cook in and a very small number of cooks. Project management almost takes care of itself -- the two-topping pizzas zip out of the oven on time and under budget. They taste pretty good, too.
At the enterprise level, however, software engineering can easily go a bit wonky. Ponder if you will the following variables:
- The total size of a codebase -- FYI: measured in billions of lines of code
- The number of functional units to optimize and test
- The number of programmers on a project
- The extent to which applications and services rely on each other to work
- The number of years (or decades) in which a particular codebase has gradually and imperfectly evolved
Scale these variables up far enough and you may find you've gone from a simple pizza, perfectly executed, to something else: a monstrous, 50-course, semi-French cataclysm of a meal that nobody ordered, that smells funky and that, if put in front of diners, will be hurled violently back into the kitchen and cost the restaurant its cherished good name.
Well, I can see I've worked my cooking analogy far past its reasonable life expectancy. However, having made my point, I can get to the heart of the matter, which is this:
For the largest organizations and software engineering projects, today's integrated development environments (IDEs) are much more than just tools. The IDE is the individual practitioner's working environment, seamlessly integrated with team-wide capabilities. IDEs are collaborative partners -- mentors, even -- that help guide development teams, projects, applications, services and codebases down the road to successful application lifecycle management and enterprise modernization.

Given a robust, thoughtfully designed IDE, the best practices almost implement themselves
What with Rational Developer for System z Version 8.5 <http://www-01.ibm.com/software/rational/products/developer/systemz/> hitting the streets this week, now seemed like a good time to discuss these and related issues with an expert.
That expert was Richard S. Szulewski, IBM Product Manager for that very offering. Szulewski put matters on an etymological footing that wouldn't have occurred to me.
"Just look at the term IDE," he said. "IDE: Integrated (that is, you have seamless access to all the facilities you need to do your job), Development (development is far more than just changing the code), Environment (a place from which to not just do your job, but do it effectively and efficiently). That is a lot more than just a pretty editor. That is what Rational Developer for System z offers."
And in Version 8.5, it offers a more complete and well-rounded rendition of that concept than ever before. The new solution has been designed specifically to help organizations not just get more value from the mainframe, and from their developers, but also get it at a higher level of abstraction -- from development projects themselves.
Consider, for instance, how it addresses the common concern of scalability -- not of the software being developed, but of the project of developing that software. To optimize large-scale project management, as everyone knows, best practices are required, but not everyone actually implements them. A really mature, thoughtfully developed IDE should make that implementation a lot easier.
Szulewski agrees. "Rational Developer for System z V8.5 includes enhancements that ease potential large-team effects as the number of people on development teams using it goes up. The idea is that any given user can access the host as if he or she were the only one using it."
For instance, consider the way the solution now automatically keeps programmer workstations up to date. Admins can simply upload new configuration files to the System z; once a programmer logs in, if the new file is needed, it'll be downloaded immediately.
That means more cross-team consistency with less effort -- a best practice by anybody's definition. It also means each programmer can spend more time on coding challenges and less on environment maintenance, which in turn leads to more productivity.
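Mechanically, that kind of login-time refresh might look something like the following Python sketch. To be clear, this is a guess at the general shape rather than the product's actual protocol; the file names and the host client are hypothetical:

```python
import os

class HostClient:
    """Hypothetical stand-in for a connection to the System z host."""
    def read(self, name):
        # In reality this would fetch the version marker admins uploaded.
        return "config-v2"
    def download(self, name, dest):
        with open(dest, "w") as f:
            f.write("<!-- new workstation configuration -->")

def sync_workstation_config(host, local_dir="conf"):
    """Sketch of a login-time refresh: pull new config only if it changed."""
    os.makedirs(local_dir, exist_ok=True)
    version_path = os.path.join(local_dir, "rdz_config.version")
    local_version = open(version_path).read() if os.path.exists(version_path) else ""
    remote_version = host.read("rdz_config.version")
    if remote_version != local_version:
        # Admins published a newer configuration: download it now, so every
        # workstation converges on the same setup with no developer effort.
        host.download("rdz_config.xml", os.path.join(local_dir, "rdz_config.xml"))
        with open(version_path, "w") as f:
            f.write(remote_version)

sync_workstation_config(HostClient())
```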
Another example of scalability, this one addressing codebase size: programmers can now more easily search for, zero in on and open the specific code modules they want.
In much the same way a Google search provides a preview of the text at a given link, so that you can decide whether to click it, the new Rational Developer for System z generates a code preview. Just mouse over a module, and you can see the first few lines of its code -- it's as simple as that.

Write, visualize and test code quickly, easily... and in a way that isn't at all like French cuisine
Enhanced productivity, especially via editor refinements, is another major design strength of Rational Developer for System z V8.5. In the world of software development, editors are holy ground -- such deep investments, in fact, that they compare with religion and politics as reliable argument starters.
Well, the new Rational offering actually includes three different editors, for LPEX, COBOL and PL/I. And strengths that had been limited to the COBOL editor in the past have now been stirred into the LPEX and PL/I editors, bringing them up to par.
While they differ in specific features, what the new editors have in common is the strategic goal of helping developers visually and intuitively understand and navigate the flow of code much more easily. By increasing the time developers stay in editing context, instead of having to wander elsewhere to do various tasks, the new editors also increase the developer's focus on the job at hand.
And the way the three editors have been brought into rough equivalence turns out to be an instance of a larger theme in the new release. "Rational Developer for System z V8.5," said Szulewski, "includes a conscious effort to get to better language equity in terms of the PL/I and COBOL languages."
New integrations are another strength. Since organizations often already have fairly well-developed, specific solutions and information repositories that address particular areas, such integration is a great way to leverage those resources more easily and fully -- eliminating the need to reinvent the wheel.
Organizations that already have Endevor, for instance -- a mainframe code management tool -- will find that the new Rational offering can directly display Endevor elements or packages in a tidy, sortable, customizable table.
Code coverage, too, has been improved, making it a much more straightforward matter to visualize how complete (or incomplete) software testing has been at any given point. Straight from the coverage report, it's now possible to launch a view of the source code to see colored annotations that reflect specific testing.
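If you're curious what line-level coverage amounts to under the hood, here's a toy Python tracer -- no relation to the Rational implementation -- that records which lines of a function actually executed. The lines it never records are exactly the ones a coverage view would flag:

```python
import sys

def coverage_of(func, *args):
    """Toy line-coverage tracer: which lines of func actually ran?"""
    hits = set()
    def tracer(frame, event, arg):
        if event == "line" and frame.f_code is func.__code__:
            hits.add(frame.f_lineno)
        return tracer
    sys.settrace(tracer)
    try:
        func(*args)
    finally:
        sys.settrace(None)
    return hits

def grade(score):      # an invented routine under test
    if score >= 90:
        return "A"
    return "B"

ran = coverage_of(grade, 75)
print(sorted(ran))
# The 'return "A"' line never appears in `ran` for this input -- the
# equivalent of the red "never tested" annotation in the IDE view.
```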
Code review rules have also gotten a tweak for the better, thanks to additional COBOL and PL/I rules and templates in Rational Developer V8.5; you can now even create custom rules using an easy, wizard-driven process. It all illustrates just how serious IBM is about helping organizations pursue best practices through the new IDE.
"Creating an objective means for confirming best practice adherence -- that is what the new code review capability is about," said Szulewski. "We've made it easier and faster to define what the 'coding practices' you want should look like, and provided an objective way for the individual developer and whole development teams to compare their work against those practices."
And if unit testing is your particular cup of tea, you'll probably be glad to hear that in Version 8.5, Rational Developer for System z provides an automated unit testing framework, zUnit, which is similar in nature and concept to JUnit for Java and provides similar benefits. Here, too, smart wizards are available to generate COBOL and/or PL/I test cases.
After these test cases are built and run, the execution results can easily be displayed along with the traceback information needed to isolate specific issues -- ultimately helping to bring the software that much closer to a release version that won't remind anybody of French cooking gone horrifyingly wrong.

Additional Information

- Discover the benefits of Enterprise Modernization
- See what IBM offers for Application Lifecycle Management
- Get up to speed on IBM Rational Developer Version 8.5
- Watch videos about the features of Rational Developer for System z
- Try first-hand the new IBM Enterprise Modernization Sandbox, with no install
- Get more education with IBM COBOL and Rational Developer for System z - Distance Learning
- Visit the video library of IBM Enterprise Modernization Solutions for System z
It always surprises me to see tremendous potential go almost completely unrealized and undeveloped. A specific example: Recently I saw a YouTube video featuring a guitar worth somewhere north of a quarter million bucks. Yet the dealer who was trying to sell this guitar had recorded himself playing it with... a standard handheld video camera.
And I thought: "You know, if you want to sell a guitar worth $250,000, maybe you should record it with a microphone that costs more than $0.25."

Plunking down a few dollars more for a good mike would have made a world of difference to this guy's sales prospects.
A similar argument, or so it seems to me, often applies to IT. Platforms are bought with a particular purpose in mind, and used for that purpose, but a relatively small added investment might radically increase their total value.
Take the case of IBM Power Systems. This platform offers an extremely advanced processor architecture, IBM's RISC-based POWER7; advanced operating systems, including AIX (IBM's flavor of UNIX); top-tier virtualization capabilities that allow IT to allocate resources and manage whole workloads fluidly; and a host of other strengths too numerous to list here.
So organizations that have made the investment in Power Systems certainly know what an outstanding IT service delivery platform it is. What may not be as clear to them, and should be, is what an outstanding IT development platform Power Systems can be as well.
By doing their development and testing on Power, they can not only make their investment pay dividends to both sides of the development/operations divide -- creating and deploying better software, faster, and yet with lower costs and risks -- but also take major steps toward enterprise modernization.

FYI, RD 8.5 for P7 is IDE: TNG

That argument got a lot stronger this week, because IBM's own integrated development environment (IDE) for this platform -- IBM Rational Developer for Power, Version 8.5 -- has just hit the streets.
To get a sense of IBM's thinking in this area, I had a chat with William T. Smith, Market and Product Line Manager for IBM's Development Solutions for Power Systems Software.
"We saw that many customers were developing their AIX or Linux on Power workloads on some other platform and then porting to AIX, often without optimizing them for Power," said Smith. "And we were concerned to see them spending premium dollars for Power's unmatched price-performance profile and other unique qualities of service, but then failing to fully exploit those. Many of them are still using green-screen or textual tools, or spending time cobbling together and maintaining home-grown OSS-based tool stacks, and therefore not realizing the productivity and other benefits of using Rational Developer for Power. So our goal for Version 8.5 was to have Rational Developer for Power start to play a central role in helping customers exploit AIX and Linux on Power to their fullest."
I knew exactly what he meant by "green-screen or textual tools" because I recall using such IDEs in my distant youth. And the memory gives me no pleasure. There was not much in the way of troubleshooting and productivity, and quite a lot of vertical scrolling and swearing.
It seems to me that last-millennium development tools like that are bound to act like an anchor hung from the development team's neck -- not really the best choice if the goal is to increase business agility. Which, for most businesses today, is a very familiar goal indeed.
But the new Rational offering goes far beyond graphic visualization, which has been part of the solution since 2010.
"This new release delivers three main new capabilities: a new Performance Advisor, a new, highly scalable code coverage analysis capability and a new Porting Advisor," said Smith. "Together these raise Rational Developer for Power's value proposition in the AIX and Linux on Power space in a profound way. Rational Developer for Power becomes not just an IDE, but an Integrated Development, Porting and Optimization Environment."
Let's suppose you happen to be a company that has already deployed IBM Power Systems. By deploying the new Rational IDE as well, you can...
- Generate applications that are really optimized for the POWER architecture, and run faster, with more stability
- Simplify moving your applications across platforms
- Identify and eliminate software bugs much more rapidly and easily
That strikes me as a winning value prop. And if you happen to be an organization still using green-screen tools, such as Smith describes above, well, your programmers will likely clap you on the back and buy you a beer. Their professional lives will have taken a gigantic step forward into the intuitive, graphic development interfaces of the 21st century -- a very good place to be.

Do you feel the Power?
Let's talk briefly about the optimization capabilities. Among the new features of Rational Developer for Power 8.5, perhaps the most compelling is the new Performance Advisor. This provides key insight needed to leverage Power strengths to the max -- not just in terms of analysis and tuning, but also by performance data management in a larger, more holistic sense.
You can, for instance, directly compare profiles of different builds to identify slowdown, drilling down into the details (like the time spent executing different functions within those builds). You can generate intuitive scorecards that illustrate real-world performance at a glance. You also get recommendations for future changes, each one assigned an estimated probability that the proposed change really will pay off.
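Conceptually, the build-to-build comparison is easy to picture. Here's a hypothetical Python sketch -- invented function names and timings, not Performance Advisor's real data format -- that flags the functions that got slower between two profiled builds:

```python
# Hypothetical per-function CPU times (seconds) from two build profiles.
build_41 = {"parse_input": 1.2, "price_order": 3.4, "write_audit": 0.6}
build_42 = {"parse_input": 1.2, "price_order": 5.9, "write_audit": 0.7}

def regressions(old, new, threshold=0.25):
    """Flag functions whose time grew by more than `threshold` (25%)."""
    flagged = []
    for name, t_old in old.items():
        t_new = new.get(name, 0.0)
        if t_old > 0 and (t_new - t_old) / t_old > threshold:
            flagged.append((name, t_old, t_new))
    return flagged

for name, before, after in regressions(build_41, build_42):
    print(f"{name}: {before:.1f}s -> {after:.1f}s  <-- drill in here")
# price_order: 3.4s -> 5.9s  <-- drill in here
```

The real tool layers expert recommendations on top of this kind of comparison; the sketch just shows why having both profiles in one place matters.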
How's that for "key insight"? It's no wonder that in its eight-month beta period, this capability was unanimously praised by participants -- including many from outside IBM.
Smith thinks very highly of this particular innovation as well.
"The Performance Advisor really is something new and unique. Unlike other performance tools, it is very much designed for the development generalist, but it is also fueled by deep performance engineering expertise that reflects intimate knowledge of the internals of the Power architecture, the operating systems and the compilers," he said. "And in addition to being driven by expert advice, unlike other tools, it is also workflow-driven and deeply integrated into the IDE so that you can easily and naturally integrate the discipline and tasks of performance tuning into the routine development cycle."

75 percent of the Earth is covered by water -- IBM Rational covers the rest
Another major attraction: the new code coverage analysis for C, C++ and COBOL (on both AIX and Linux).
Now, there are lots of code coverage solutions out there. They all help dev teams establish how thoroughly code has been tested and therefore how bug-free and feature-complete -- in short, production-ready -- it really is.
What the new IBM solution offers is exceptional scalability of code coverage. No matter how large the codebase, the builds or the test coverage goals, Rational Developer for Power 8.5 is up to the job -- all with little to no perceived impact on developer productivity or application execution time. And in the enterprise, where codebases and coverage requirements often trend very large, that kind of scalability is an absolute must-have.
For organizations that are looking to migrate C, C++ or COBOL software across platforms (read: to AIX/Linux on Power Systems), there's also the new Porting Advisor to ponder.
Using this tool, which leverages both static code analysis and expert system rules, developers can discover what kinds of issues are likely to turn up during the port, including such commonplace examples as big-endian vs. little-endian encoding, 32-bit vs. 64-bit processing requirements and signal-handling. Then, given that reconnaissance, the actual porting process can be orchestrated more easily and quickly -- a high-quality transition that results in a high-quality outcome.
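The endianness issue in particular is easy to demonstrate. This little Python example -- standard library only, nothing to do with the Porting Advisor itself -- shows how the same 32-bit value is laid out differently on big-endian systems (like POWER) and little-endian ones (like x86), and how a naive cross-platform read goes wrong:

```python
import struct

# The same 32-bit integer, serialized both ways.
value = 0x01020304
big    = struct.pack(">I", value)  # big-endian byte order, as on POWER
little = struct.pack("<I", value)  # little-endian byte order, as on x86

print(big.hex())     # 01020304
print(little.hex())  # 04030201

# Code that writes data on one platform and naively reads it on the other
# gets scrambled values -- exactly the class of bug a porting analysis
# wants to surface before the migration, not after:
print(struct.unpack("<I", big)[0] == value)  # False
```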
Finally, if you happen to be using IBM's System i platform, the good folks at Rational have got your back there, too.
"It's true," said Smith, "that we did put a great deal of emphasis on AIX and Linux in this release, but that doesn't mean we overlooked our IBM i customers. (And by the way: props to them for seeing the elegance in how IBM i is integrated and optimized to simplify development of business applications.) There are several goodies in this release for them, such as the integration of the Remote Systems Explorer with IBM Data Studio, support for multiple build specifications and a new live outline view for RPG."

Additional Information

- See how Enterprise Modernization helps you get more out of what you've got
- Simplify your application lifecycle management
- Get up to speed on IBM Rational Developer Version 8.5
- Try out IBM products in the Enterprise Modernization Sandbox for Power Systems
- Estimate your savings with Rational Developer for Power Systems Software
- Watch videos that highlight features of IBM Rational Developer on Power
My cousin's wife told me recently that they wanted to buy a house, but weren't sure they could justify such a huge investment in such a doubtful economy.
So I told her this: "Buy a few square feet. Take a few weeks, try them out and see what you think. If you like 'em, buy some more square feet. Then a whole room. Then a whole floor. Eventually, maybe, you'll have your dream house."
Of course, this was just a joke. But most of the time I think it's actually very good advice, because it's very easy to apply and it applies to so many different circumstances.
It certainly applies to physical fitness, where trying to accomplish too much, too soon will just burn you out, or put you in the hospital, instead of make you fitter. It also applies to marriage; getting engaged on the second date is generally not considered a love-life best practice.
Much the same kind of thinking applies rather naturally in IT. It shows up, for instance, in the form of kernel-based operating systems like Linux and all modern versions of Windows. The kernel represents a solid initial foundation that handles core tasks like memory management, to which any number of logical capabilities can be (and are) added to form the complete OS.
And these same sorts of ideas apply on a far larger scale in the context of cloud computing
, I think. Because organizations can't know with perfect accuracy in advance how best to develop and utilize cloud for their own particular circumstances, it's probably wise for them not to think of and develop clouds as a monolithic entity -- a thing they have to roll out perfectly and completely on day one -- but rather as a foundation to which they can add new capabilities over time.
If I had to guess, in fact, I would say that it was exactly this reasoning that led IBM to give SmartCloud Foundation
that title. It's meant as the initial "cloud kernel" on top of which you can then subsequently add new layers, new capabilities, that match your business requirements, just as Linux developers add Linux services, all of which run on top of the Linux kernel.

Why manage the cloud when the cloud can manage itself?
As it happens, I prefer certainty to doubt. So rather than just keep guessing about IBM's nomenclatural logic, I decided to ask an expert: Marco Sebastiani, Product Manager for IBM Service Delivery Manager and Cloud Solutions.
Sebastiani not only confirmed my interpretation, but ran with it in what I thought was a pretty cool direction.
"You can think of cloud management software almost as a set of nested Russian dolls," he said. "Practically any cloud is going to need to be able to do things like create virtual servers, and track key assets, automatically. That basic functionality corresponds to the innermost Russian doll. We address that with SmartCloud Foundation's entry cloud solution, which does provisioning and image lifecycle management. But then, once you have that set up, you can easily add more capabilities over time: bigger dolls. Every larger doll, in turn, leverages the capabilities of the smaller ones. And the cloud intelligently and automatically orchestrates all of its capabilities based on business policies."
So, to pursue this analogy, what's the next doll up from SmartCloud Foundation?
The answer, it seems, is IBM Service Delivery Manager -- a set of capabilities, delivered as a pre-integrated software stack, that can help organizations leverage clouds to do even more, and create more value, in areas where they typically really need more value.
"The idea of this solution," said Sebastiani, "is to simplify, accelerate and automate service fulfillment. It minimizes the amount of manual work IT has to put into the cloud by making the cloud much more self-governing and self-optimizing. So suppose you're an employee who wants a new service in the cloud. Instead of having to submit a request to IT to create that service, you can just ask the cloud itself to do it. And, by orchestrating key tasks in logical ways, that's just what the cloud will then do. In this way, service management becomes much easier to pursue because services running in the cloud basically manage themselves, cradle-to-grave."
This fits Sebastiani's analogy rather well, too. Return to the idea of Russian dolls for a minute, remembering that the innermost cloud doll does provisioning and monitoring of virtual servers.
What IBM Service Delivery Manager does, in turn, is build bigger dolls on top of that, automatically leveraging those functions over time, in ways that fulfill business requirements, while also adding entirely new capabilities that add entirely new value.

End-to-end optimization of the complete service lifecycle
This solution, for instance, includes an intuitive portal interface available via any standard Web browser. This, in essence, is the front-end needed to create new services that will run in the cloud.
Using it, one can basically instruct the cloud: "This is what I'd like to do, this is when I'm going to need to be able to do it and this is how important it will be to the business."
Then the cloud basically does the rest -- ensuring that new virtual servers are created and eliminated on time, provisioned using the right server images, and that this entire process doesn't conflict with or compromise existing services unacceptably. (If that sounds like "automatic IT governance" to you, you're pretty close to the mark.)
To do that, of course, the cloud needs to be able to allocate critical resources fluidly and dynamically -- resources like processing power, memory, storage and even network bandwidth. This capability, too, is provided by Service Delivery Manager. It is continually aware of the available resources, discovers new resources when they are added to the general pool and doles out resources when and where they're needed. Then, when the demand level falls, the cloud pulls those resources back to the pool, or directly assigns them to another service that happens to need them at that point.
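As a thought experiment, here's a toy Python model of that elastic allocate-and-release loop. The numbers and service names are invented, and a real cloud would queue, prioritize and preempt rather than simply refuse:

```python
class ResourcePool:
    """Toy model of a cloud's shared capacity: allocate on demand,
    reclaim on release, so idle resources never sit stranded."""

    def __init__(self, cpus, memory_gb):
        self.free = {"cpus": cpus, "memory_gb": memory_gb}
        self.allocations = {}

    def allocate(self, service, cpus, memory_gb):
        if cpus > self.free["cpus"] or memory_gb > self.free["memory_gb"]:
            return False  # a real cloud would queue or preempt here
        self.free["cpus"] -= cpus
        self.free["memory_gb"] -= memory_gb
        self.allocations[service] = (cpus, memory_gb)
        return True

    def release(self, service):
        cpus, memory_gb = self.allocations.pop(service)
        self.free["cpus"] += cpus          # capacity flows back to the pool...
        self.free["memory_gb"] += memory_gb

pool = ResourcePool(cpus=64, memory_gb=256)
pool.allocate("payroll-web", cpus=8, memory_gb=32)  # demand spikes
pool.release("payroll-web")                         # demand falls
pool.allocate("test-env", cpus=8, memory_gb=32)     # ...and is reused at once
```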
Also worth noting is the fact that all of this happens far more quickly and efficiently than it would if it were overseen by human talent. So, because fewer resources are wasted, fewer are needed in the first place -- a major cost-saving opportunity for the organization, which can now get by on less total processing power, memory, storage and bandwidth than it would have thought possible before the cloud.

Real-time monitoring is another major capability. Service Delivery Manager continually tracks the health and performance level of both virtual and physical resources -- a critically important function given how incredibly dynamic a cloud can be. So let us imagine that a given node (physical host) fails due to a toasted logic board; Service Delivery Manager will automatically notice and report that issue, leading to a quick and accurate failover of the associated service to a different, much healthier node.

Cost-tracking is yet another major strength of this solution. Given the intensely shared and interconnected nature of a cloud, where so much is happening automatically, you might expect it'd be difficult to figure out the costs created by different cloud services and systems -- and the business teams and projects that use the cloud. And normally you'd be right.
"Service Delivery Manager changes all that," said Sebastiani. "It gives you granular insight into exactly how costs are trending in all those different ways -- in as much or as little detail as you need. So if you're using your cloud in a public model, it can tell you exactly how much to charge your customers for their particular cloud utilization, even though all customers are using the same hardware. Or if you have a strictly private cloud, it will tell you how much you should charge back to different groups. This way, it creates the kind of insight that over time can help, or encourage, those divisions to try to keep their costs down."

Additional Information

- Find out what Cloud and IT Optimization can do for your organization
- Learn more about Cloud Service Delivery & Management
- Discover the benefits of cloud with the cloud simulator game
Mobile is a whole new ballgame for IT security. I've written before about my cordial dislike for the phrase "paradigm shift." Here's my objection in a nutshell: there aren't very many paradigms and they don't tend to shift. As in all things, however, there are exceptions.
Pen-and-ink ledgers giving way to spreadsheets? Single-app servers giving way to virtualization? Yeah. Paradigm, check. Shifting, check.
We are, I think, currently living in the middle of another such shift, and as paradigm shifts go, this one is arguably bigger and more significant than either of the two I just mentioned. It involves far more people, far more transactions, and creates, as a result, far more change.
I'm talking, of course, about the rise of mobile technology. In one generation, we've gone from landlines to cell phones to smart phones. And the new smart phones are so much smarter than the old ones, the old ones can now be called stupid.
Recently I was reading Jeff Crume's blog, Inside Internet Security, and I discovered there a video interview of Jeff holding forth on the subject of mobile security and everything it implies for organizations today.
This quote in particular stood out for me:
"Whereas we used to have the data in some glass house, in some controlled environment, now it's sitting in somebody's pocket. Or worse yet, it's sitting in the back of a taxi cab that you took an hour ago. And it's still riding around New York City. And you aren't."
You see what I mean about paradigm shift being justified in this case. Spreadsheets and virtualization, you have just been dethroned.

Successful security strategies acknowledge social realities
Jeff is a Distinguished Engineer, Master Inventor and IT Security Architect for IBM, so it struck me as a good idea to talk to him a little more on these topics. So that's what I did.
As it turns out, we have many of the same opinions. One in particular is that the mobile computing paradigm shift is multiplied because it involves not just a technological, but a social, dimension.
If you move from ledgers to spreadsheets, or from single-app servers to virtualized hosts, you generally (unless you're a complete freak) only do so at work, and only for business reasons.
But for mobile devices like smart phones and tablets, the appeal is far wider. The utilization is far greater. And from a security standpoint, the upshot is that mobile devices have now often become a sort of path-of-least-resistance for employees who want to conduct work activity offsite.
Having bought the browser-equipped device largely for personal reasons, they now want to use it for business purposes, too -- even though it was never originally designed for that job, and isn't particularly secure.
And the fact is that this will happen whether anybody in IT likes it and approves it or not. As far as the user is concerned, the convenience to him flat-out trumps any abstract logic, however correct it may be, that security-poor mobile devices shouldn't be used for business purposes.
"When it comes to BYOD (Bring Your Own Device), we as IT security professionals have to learn to say 'how' rather than 'no,'" said Crume. "Because if we don't, users will do it anyway, and in a far more insecure manner."
Let me give you an example of the kind of thing he means.
Imagine that Employee Joe buys an iPhone, which of course has a browser (Safari) pre-installed. Joe travels a lot for work and wants to use his iPhone to check company e-mail. The company supports a browser interface for e-mail, so Joe's goal can actually be accomplished.
Problem is, the company IT policy forbids him to do it. And the company backs this up by seeing to it that browser-based e-mail is only available over a secure VPN-based connection, such as the one Joe has from his security-rich laptop.
Joe, however, has other ideas. Maybe he loves his laptop (which weighs a few pounds and is portable), but he loves his iPhone more (because it weighs a few ounces and is far more portable). So he's determined to find a workaround to this e-mail issue. And it occurs to him that he can simply set his corporate e-mail to forward automatically to an off-site service such as Gmail... which has no such prohibitions on how it's used.
So now the enterprising Joe is, indeed, using his iPhone to send and receive his company e-mail. This, needless to say, is a security disaster for his employer. Not only does all that e-mail travel over relatively insecure networks to a relatively insecure device, but its full contents are also donated to Google (a company whose business model revolves around milking the world's information for everything it's worth).
The root cause of this increasingly common situation is simple: Personal and business devices and services are getting stirred together into a melting pot which is, to an IT security professional, overflowing with sinister potential.
Or as Crume puts it: "Mobile devices typify the blurring of lines between our work and non-work personas."
Applying familiar security best practices in unfamiliar ways
You can also interpret this situation from a technical standpoint, if you like.
Think about potential security exploits and what can be done to stop them, for instance. If organizations are going to support the use of personally owned mobile devices, the platform may have changed, but the security goals and challenges haven't.
Just as with laptops/desktops, IT will have to pursue key tasks like the following (a minimal sketch of one such check follows the list):
- Establishing and enforcing access control rights
- Provisioning devices with recent patches and software updates
- Configuring devices in ways designed to minimize vulnerability
- Encrypting data and transactions
- Fending off the ever-growing body of malware -- much of which specifically targets mobile devices, exactly because the paradigm shift means they're a mighty big target for malware designers
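To make the first of those tasks a little more concrete, here's a minimal sketch, in Python, of the kind of compliance check an access-control policy implies. Everything in it -- the policy thresholds, the device fields, the function names -- is invented for illustration; it's not any real MDM product's API.

```python
# Hypothetical BYOD compliance check. The policy values, device fields and
# function names are invented for this sketch -- no real MDM product is quoted.
from dataclasses import dataclass, field

MIN_OS_VERSION = (10, 3)                          # assumed minimum patch level
REQUIRED = {"passcode_set", "storage_encrypted"}  # assumed required controls

@dataclass
class Device:
    owner: str
    os_version: tuple
    features: set = field(default_factory=set)    # controls the device reports

def violations(device: Device) -> list:
    """Return every policy violation; an empty list means the device may connect."""
    problems = []
    if device.os_version < MIN_OS_VERSION:
        problems.append("OS is missing required patches")
    for control in sorted(REQUIRED - device.features):
        problems.append(f"missing required control: {control}")
    return problems

phone = Device("joe", (9, 2), {"passcode_set"})
print(violations(phone))
# ['OS is missing required patches', 'missing required control: storage_encrypted']
```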
Accomplishing all that is a bit of a puzzler, given that most mobile devices aren't based on robust, security-rich operating systems like UNIX.
They're based on... well, let's be diplomatic and call it "something else." And if you happen to be a malware hacker interested in easy exploits, that something else is awfully tempting.
Crume's opinion, which I share, is that organizations need to wake up to these realities -- creating and pursuing a strategy that allows employees to use mobile devices, albeit in a fashion that is as secure as possible.
"The form factor has shrunk, but the threat has not. We can either learn how to surf the tsunami of mobile devices or be crushed by it," he said. "And since the waters are shark-infested with hackers, the risks of getting it wrong are significant."
All of this context is, no doubt, directly responsible for IBM's recent, very notable interest in mobile computing -- and it's plain to me that IBM means to get it right.
Consider, for instance, the launch of IBM Endpoint Manager for Mobile Devices. This solution specifically targets BYOD security for the enterprise, providing security that's as comprehensive and robust as the underlying platform allows it to be.
On Joe's iPhone, for instance, Endpoint Manager for Mobile Devices can leverage Apple's management API (given approval from Joe). This gives the company new power to reach across the carrier link and actually remove key data from the phone, no matter where that phone goes.
If it is, to borrow Crume's for-instance, sitting forgotten in the back of a NYC cab, that's a shame, but at least it's free of sensitive company e-mail, not to mention all those personal photos of Joe's house and children that Joe would prefer strangers not have.
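As an aside on how such remote management works at the protocol level: Apple's MDM protocol delivers commands to enrolled devices as property lists. The sketch below shows the general shape of a remote-wipe command; the payload layout follows my reading of that protocol, and none of it is IBM Endpoint Manager's actual interface.

```python
# Illustrative only: a command payload in the general shape of Apple's MDM
# protocol, which sends management commands to devices as property lists.
# The structure here is an assumption for the sketch, not Endpoint Manager's API.
import plistlib
import uuid

command = {
    "CommandUUID": str(uuid.uuid4()),   # unique ID so the device can acknowledge it
    "Command": {
        "RequestType": "EraseDevice",   # the remote-wipe request type
    },
}

payload = plistlib.dumps(command)       # what an MDM server would queue for the device
print(payload.decode())
```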
And on platforms that support agents, like Google's Android, the Endpoint Manager agent can simply be installed -- providing an even greater range of management options, such as device configuration and application updates, that make the device even more secure.
These many security capabilities thus benefit not just Joe's employer, but Joe himself -- a critical point that Joe will probably need to have explained to him before that agent's going to get installed.
Additional Information
- See what IBM offers for Enterprise Mobility Management
- Achieve smarter, faster endpoint management
- Read about how to secure mobile devices in the enterprise
- Discover how to safely embrace "Bring Your Own Device" in the workplace
- Gain insight from this webcast on mobile device management
About the author
Guest blogger Wes Simonds worked in IT for seven years before becoming a technology writer on topics including virtualization, cloud computing and service management. He lives in sunny Austin, Texas and believes Mexican food should always be served with queso.
Every retail manager is familiar with the idea of the Return Browser. By this I mean the guy who keeps coming back to check out something he obviously likes -- a car, a guitar, a flat-panel TV -- but just... can't... quite... justify. A similar situation seems to me to apply inside certain organizations as they ponder moving to cloud architectures.
They're familiar with the cloud story: Faster service delivery! Smart resource allocation! Increased focus on strategies, not technical details! Minimum waste, maximum business value!
But still they ponder.
It's an understandable doubt they feel. The promise of cloud computing, they fear, may not be realized -- at least not in their case.
They correctly see that a cloud is going to be a much more dynamic infrastructure, meaning in part that it will be harder to predict exactly what it will do in any given context. You need to think through all the major ramifications very carefully before making such a deep commitment. (And if you happen to see a parallel to marriage here, I won't tell you you're wrong.)
This, I think, is a big part of what makes IBM SmartCloud Provisioning and IBM SmartCloud Monitoring so attractive. The value proposition they offer isn't just post-deployment -- although that's enormous -- but pre-deployment, too.
By offering organizations substantially enhanced power to determine and optimize how a private cloud will fulfill workloads, these solutions also inspire confidence that maybe cloud computing really can live up to the hype.
Provision and monitor your way to peace of mind -- and incredible business value
When I spoke with Marvin Goodman, Product Manager for IBM Tivoli Software, he also seemed to see things along these lines.
Actually, one of the points he made was that even when the cloud is already up and running, much the same kinds of questions will still apply as new workloads are added to it from the more conventional infrastructure -- or somebody proposes said addition.
In that scenario, in fact, a double set of worries may apply.
"Physical to virtual migration plans are a daunting challenge for both application owners and cloud administrators," said Goodman. "The application owners are under pressure to meet deadlines for the virtualization of their workloads, but face uncertainty about the ability of the cloud to service their customers. Meanwhile, cloud administrators have to quickly respond to requests from those application owners, and be able to determine, with confidence, that addition of those workloads is feasible, and won't affect the performance of existing workloads."
Fortunately, what SmartCloud Provisioning and Monitoring offer is directly applicable to both sets of worries.
Consider: both the application owners and cloud administrators are bound to like the idea that in the cloud, new virtual servers will be created and provisioned with absolutely mind-boggling speed based on business requirements (thousands of servers per hour, if need be). And once those servers are up and running, workloads can be assigned and distributed across them -- meaning that applications should indeed perform as expected, and that the cloud will simply have taken on another role with ease.
They're also going to like the idea that the cloud's assets and resources are continually and automatically monitored over time to verify that they're performing up to target levels -- or if they aren't, notifications will be sent, steps will be taken and a fix will be made. Because when things are about to take a turn for the worse, the sooner you know about it and the more comprehensive your insight, the better.
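To picture what "monitored against target levels" looks like in practice, here's a toy version of that check-and-notify loop in Python. The metric names, thresholds and notify() stub are my own assumptions, not SmartCloud Monitoring's interface.

```python
# Toy monitoring loop: compare observed metrics against targets and raise
# notifications on breaches. All names and numbers are invented for illustration.
TARGETS = {"cpu_util_pct": 85, "response_ms": 500}   # assumed service targets

def check(metrics: dict) -> list:
    """Return a description of every metric that breached its target."""
    return [f"{name}={value} exceeds target {TARGETS[name]}"
            for name, value in metrics.items()
            if name in TARGETS and value > TARGETS[name]]

def notify(breaches: list) -> None:
    """Stand-in for the paging/ticketing step a real monitor would take."""
    for breach in breaches:
        print("ALERT:", breach)

notify(check({"cpu_util_pct": 91, "response_ms": 240}))
# ALERT: cpu_util_pct=91 exceeds target 85
```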
"When application owners surrender their workloads to cloud administrators, they lose the visibility into performance they've been accustomed to. Their application is now sharing server resources with lots of other workloads, many of which they know nothing about. So they're uncertain about how their applications will perform in the cloud," said Goodman. "SmartCloud Monitoring allows cloud administrators to provide assurances that workloads are, indeed, running smoothly in the cloud. It can also leverage performance data to optimize those workloads and their placement to simultaneously maximize performance and capacity."
Instead of dynamically generated virtual servers and unpredictable resource allocation being something to worry about, in other words, they are simply strengths to rely on -- strengths the cloud was supposed to have in the first place.
Future-proofed clouds generate more rain over time
Of course, not even a cloud runs on magic; ultimately there is a limit to what it can accomplish given a finite set of resources. The question is: where's that limit, and how accurately can you establish it in advance?
If you're a cloud administrator of the type Goodman is talking about, capacity planning and management is a pretty big deal for exactly these reasons. Which, no doubt, is why capacity management is one of SmartCloud Monitoring's great selling points.
"Customers trying to grow the maturity of their virtual environments into robust private clouds often grapple with the pressure to add more and more workloads to the environment, at a pace that far exceeds the growth of their cloud budget," said Goodman. "SmartCloud Monitoring's capacity analytics and planning unlock hidden capacity in the existing infrastructure by freeing up resources through virtual machine 'right-sizing' and optimization."
So rather than always buying new storage for new workloads, you can often just improve the way you're using existing storage.
Instead of always working harder, you work smarter. Indeed, this gets right to the heart of what IBM has in mind when it talks about Smarter Computing. Maybe you really do need more/new resources or maybe you don't; why not establish as clearly as possible which situation applies, and respond accordingly? It all goes straight to the point of making sure clouds will live up to their original promise.
And if you're really going to design a cloud to be the best possible IT service delivery platform -- the one that really is as optimized as it can be -- you should probably try to future-proof your cloud to ensure it will support change of many kinds: change in workloads, certainly, but also change in critical resources and assets.
For instance, consider all those server images -- the complete software snapshots needed to create virtual servers dynamically. For many organizations, image management is a huge hassle because (a) there are way too many images, (b) more show up all the time and (c) it's not very clear what's inside them.
SmartCloud Provisioning, fortunately, includes some nifty features directly aimed at these issues. Looking for a specific image that needs a security patch, and all virtual servers based on it? You can easily conduct a search along those lines. Or suppose you're trying to drum up the closest possible match to a target image -- that, too, is a straightforward matter. This also means it's easy to discover and eliminate duplicate images, consolidate libraries down to the essentials and, in short, knock the bullet point titled "Image Sprawl" right off your Fix Now! list.
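To make the image-library idea concrete, here's a small Python sketch of those two queries -- finding images that need a security patch, and spotting duplicates -- over hypothetical image metadata. It illustrates the concept, not SmartCloud Provisioning's actual interface.

```python
# Sketch of the image-library queries described above, over hypothetical image
# metadata -- an illustration of the idea, not SmartCloud Provisioning's interface.
images = [
    {"id": "img-01", "packages": frozenset({"openssl-1.0.1c", "httpd-2.2"})},
    {"id": "img-02", "packages": frozenset({"openssl-1.0.1g", "httpd-2.2"})},
    {"id": "img-03", "packages": frozenset({"openssl-1.0.1c", "httpd-2.2"})},  # dup of img-01
]

def needs_patch(vulnerable_pkg: str) -> list:
    """Find every image (and thus every server built from it) carrying a bad package."""
    return [img["id"] for img in images if vulnerable_pkg in img["packages"]]

def duplicate_groups() -> list:
    """Group images with identical contents -- candidates for consolidation."""
    groups = {}
    for img in images:
        groups.setdefault(img["packages"], []).append(img["id"])
    return [ids for ids in groups.values() if len(ids) > 1]

print(needs_patch("openssl-1.0.1c"))   # ['img-01', 'img-03']
print(duplicate_groups())              # [['img-01', 'img-03']]
```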
Along similarly future-proofed lines, note that SmartCloud Monitoring offers support not for just one hypervisor, but for many. Ergo, if you want to add different hypervisors to your cloud over time, you can just go ahead and do it, and rest assured that the IBM solution has got your back.
Goodman sees this particular instance of future-proofing as a serious advantage.
"IT departments want to be able to choose hypervisor technologies based on cost-benefit analysis, and not feel compelled to stay with a particular vendor just because they've become reliant on its management tools," he said. "As a management solution that crosses different hypervisor platforms, and indeed physical platforms as well, SmartCloud Monitoring allows customers to maintain tool continuity as they move workloads from one virtualization platform to another, focusing directly on availability, performance and total cost of ownership."
So with all that in mind, let me ask you this:
When it comes to private clouds... what, really, are you so worried about?
Additional Information
- Learn more about Cloud and IT Optimization
- Download the IBM SmartCloud Provisioning 30-day trial
- Get 50% off IBM SmartCloud Provisioning (for a limited time)
- Take a quick and easy tour of IBM SmartCloud Monitoring
- Find out more about cloud computing
- Connect, learn and share with Cloud/Virtualization Management experts
If you're looking for a can't-miss, bound-to-pay-off IT investment, I'm not sure you can do much better than upgrade your database architecture. I've written about this in the recent past in the context of migrating away from Oracle. But if you've already entered the 21st century, and are therefore using DB2, it's also really worth checking out the latest rendition.
That would be DB2 10, which is so fresh to the market it still has that wonderful Golden Master scent -- a scent I think should be bottled and turned into a cologne.
Recently I caught up with Conor O'Mahony, Program Director of Database Software for IBM, and he clued me in concerning the many new bells, whistles and deeper-level improvements that have been made to DB2 in this version.
The reported performance increase alone justifies the upgrade in my mind. You might expect that any software environment hitting a double-digit version number would be so mature and optimized by this point that performance improvements would be negligible.
Not so with DB2 10. Instead of getting slower, it's gotten faster -- a lot faster. In fact, tests run by Intel show that query processing can speed up by as much as 10 times compared to the previous version, DB2 9, running on the same hardware. [1]
"For software this mature, that kind of enhancement is jaw-dropping," said O'Mahony. "Usually such radical optimizations have happened earlier in the development history. But as with so many other things, the devil really is in the details. In DB2 10, we have added breakthroughs for processing certain kinds of queries, for retrieving data, and for accessing indexes. The aggregate outcome is really stunning."
So you can see why O'Mahony uses the adjective "jaw-dropping." The thought of being able to handle 10 times as many transaction queries on the same hardware, just by implementing a software upgrade and taking advantage of new features, is probably enough to leave many IT managers stunned and blinking, like small children on Christmas morning.
And when you consider all the things your organization may rely on DB2 to accomplish across a product or service lifecycle -- from transaction processing to inventory assessment to data warehousing to customer service to marketing analytics -- you can see just what kind of business value the new performance optimizations are really likely to mean from a pragmatic standpoint.
Spend your money here and you'll realize not just exceptional ROI, but ROI delivered in many different areas, almost immediately.
How many databases can you fit on the head of a pin?
And to continue the Christmas analogy, DB2 10 really is the gift that keeps on giving. For instance, consider the substantial improvements IBM has made in the way DB2 compresses data.
This has actually been a historical strength for DB2, and for good reason. The total costs of any database environment are significantly affected by storage costs; the less organizations have to allocate to storage resources, and maintaining them, the higher the payoff they will get over time.
That's why, in DB2 10, IBM has really gone the extra mile in implementing not just better compression, but a new class of compression. Whereas previous iterations focused on table-wide compression, DB2 10 augments that with page-level compression, thus allowing even more data to be squeezed into a given GB of storage.
How much improvement are we talking about -- and what kinds of savings can organizations get?
"That's a good question; first of all, I must say that compression rates can vary greatly from environment to environment," said O'Mahony. "However, more than one client has seen 7 times or greater overall space savings, with some tables achieving 10 times space savings."
Think about that for a minute and you get a sense of what it means. Not only do organizations now pack far more data on their existing storage -- delaying the purchase of more storage -- but all storage-related tasks are accelerated, including jobs like backup/recovery. This is because the data, being compressed, moves from point A to point B that much faster.
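Here's the back-of-the-envelope version of that claim, assuming the 7-times ratio quoted above and a backup throughput I've invented purely for illustration:

```python
# Back-of-the-envelope math for the compression claim. The 7x ratio comes from
# the client figures quoted above; the backup throughput is invented.
raw_tb = 10.0                  # uncompressed table data, in TB
ratio = 7.0                    # overall space savings some clients have seen
backup_tb_per_hour = 1.0       # assumed backup throughput

compressed_tb = raw_tb / ratio
print(f"on disk: {compressed_tb:.2f} TB instead of {raw_tb:.0f} TB")
print(f"backup:  {compressed_tb / backup_tb_per_hour:.1f} hours instead of "
      f"{raw_tb / backup_tb_per_hour:.0f} hours")
# on disk: 1.43 TB instead of 10 TB
# backup:  1.4 hours instead of 10 hours
```

The absolute numbers will vary wildly by environment, as O'Mahony says; the point is that the footprint and everything that moves the data shrink together.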
So here, too, it's plain that the new enhancements really translate into not just impressive, but impressively widespread, business value.
Heating up the database infrastructure
Storage is also the focus of a completely different feature: DB2 10's new "multi-temperature data management." The idea here is that in any large database environment, not all data is created equal; some data is hotter (more widely and frequently used) than other data.
Similarly, not all storage tiers are created equal. Some are faster and pricier than others. So, ideally, what you want to be able to do is put your hottest data on your fastest storage tiers.
Solid-state drives (SSDs), for instance, are absolutely perfect for enterprise-class databases because of their stellar read/write times and because, given no moving parts, they are much more reliable than conventional spinning-disk drives (or as I like to call them, failures waiting to happen). But because in life you really do often get what you pay for, SSDs are also a lot pricier per GB.
What DB2 10 does is empower you to get the best possible utilization from your highest-performing, highest-cost storage tiers (like SSD). You can put your hot data on fast storage like SSD, and put your colder data on less expensive storage.
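Conceptually, multi-temperature placement is just a routing rule from access frequency to storage tier. Here's a minimal Python sketch of that rule; the thresholds and tier names are assumptions for illustration, not DB2 configuration syntax.

```python
# Minimal multi-temperature routing: pick a storage tier by access frequency.
# Thresholds and tier names are assumptions, not DB2 configuration syntax.
TIERS = [                 # (minimum reads per day, tier), hottest first
    (1000, "ssd"),
    (100, "fast-disk"),
    (0, "capacity-disk"),
]

def tier_for(reads_per_day: int) -> str:
    """Route data to the fastest tier whose threshold it meets."""
    for threshold, tier in TIERS:
        if reads_per_day >= threshold:
            return tier
    return "capacity-disk"

for table, reads in {"orders": 25000, "claims_2011": 340, "archive_2004": 2}.items():
    print(f"{table} -> {tier_for(reads)}")
# orders -> ssd
# claims_2011 -> fast-disk
# archive_2004 -> capacity-disk
```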
Thus, the most important data is not just much more rapidly accessed, but, in the case of SSD, better protected, because SSD drives are intrinsically much more reliable.
Organizations that don't understand the past are doomed to repeat it
Storage improvements like that revolve around space, but it may interest you to know that DB2 10 is also the master of time. Specifically, it includes a new feature dubbed Time Travel, which, though it does not involve a flux capacitor, nevertheless delivers the goods.
The basic idea of Time Travel is that database queries often have implied time constraints. For instance, an insurance underwriter may need to know what kinds of policy terms were in effect at a certain point in time when a past event occurred, as opposed to the policy terms in effect now.
While many organizations have jerry-rigged their own, hand-coded approaches to that kind of capability, possibly using flux capacitors, DB2 10 provides a formal, vendor-approved-and-supported rendition. This is integrated not just more deeply, but also (let's face it) more effectively -- it is very likely to be faster and more reliable than the homegrown flavor.
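For the technically curious: DB2 10 expresses these temporal queries with SQL clauses such as FOR BUSINESS_TIME AS OF. Below is a hedged sketch of the underwriter's lookup using the ibm_db Python driver -- the connection string, table and column names are placeholders I've made up.

```python
# Sketch of a point-in-time ("Time Travel") query against a DB2 10 temporal
# table. Table, columns and connection details are hypothetical placeholders.
import ibm_db

conn = ibm_db.connect(
    "DATABASE=ins;HOSTNAME=db2host;PORT=50000;PROTOCOL=TCPIP;UID=user;PWD=secret",
    "", "")

# Ask what policy terms applied on the date of the past event, not the terms now.
sql = """
SELECT policy_id, coverage, deductible
FROM policy FOR BUSINESS_TIME AS OF '2010-06-15'
WHERE policy_id = ?
"""
stmt = ibm_db.prepare(conn, sql)
ibm_db.execute(stmt, ("P-1042",))
print(ibm_db.fetch_assoc(stmt))     # the historical row, as a dict
```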
The fact that it's included in DB2 10 right out of the box also means organizations no longer have to worry about supporting their own code to provide that functionality, and can instead just build on the IBM version. So new database apps and services are rolled out faster, for lower development cost -- both major wins for any organization looking to get a competitive edge (which is to say, all of them).
Additional Information
- Learn more about data management from IBM Software
- Visit Conor's Database Diary blog
- Join the On Demand Virtual Conference on DB2 10 now
- Read about Hyper-Speed and Time Travel with DB2
- Read how to Run Oracle applications on DB2 10 for Linux/UNIX/Windows
- Get some user perspectives on DB2
1. Based on tests of IBM DB2 9.7 FP3 vs. DB2 10.1 with comparable specifications using data warehouse / decision support workloads, as of 4/3/2012.
In a previous blog entry I said that one of the surest roads to business success lies in understanding who customers are, what they want and how best to deliver that. But what happens when customers don't know what they want? This is a bit more awkward; now the organization has to help the customer figure that out. A pizzeria can make that happen with a menu... but most businesses don't have it quite so easy.
Netflix tackled this type of challenge via its famous $1 million Netflix Prize. In 2009, the prize was awarded (http://www.netflixprize.com/community/viewtopic.php?id=1537) to a group who came up with an algorithm that could accurately predict what kinds of movies Netflix customers would enjoy most. It could do this, in fact, more accurately than Netflix's own algorithm, generating results that were more than 10 percent better. That's pretty impressive given the incredible diversity in taste from one Netflix customer to the next.
Modern IT vendors, whose customers' needs and goals vary just about as widely, have an even more difficult puzzle to solve. Typically, large IT infrastructures at established companies have evolved over time via a process that was more about Making Things Happen Now, and less about a long-term, governed plan of IT optimization.
The upshot is that today, IT workloads are often executed in a way the customer can easily see isn't very efficient or cost-effective. What isn't quite as clear is how to move to a superior arrangement.
This, I think, explains the growing popularity of self-assessment tools in the IT world. Such tools, offered over the web, give organizations immediate insight into not just their needs, but also the available solutions -- often in a surprisingly accurate way, following a Q&A process.
These tools offer, in a limited sense, free consulting. And if implemented well, they can significantly shorten the path any given organization has to take toward creating a better, more optimized IT infrastructure.
Platforms are just tools -- be sure you've got the right tool for the job
So given this context, it was a pleasure talking to Penny Hill, a marketing manager with IBM Software Group who recently helped develop two such tools.
Hill reminded me that IBM's focus these days is less on the details of a given platform than on the business value it creates over time. She also suggested that this is an area of �low-hanging fruit,� where organizations can often make rapid headway because they've barely gotten started.
"It's crazy," she told me, "that organizations continue to argue over the merits of a platform instead of looking at the workload characteristics and matching them to the best-suited platform."
That strikes me as a really good point. In the time I spent in IT, platform choice was often taken for granted in advance for all workloads -- relatively low-end x86 boxes running Windows or Linux being by far the most common platform.
Then, based on that assumption, subsequent questions were asked: "How can we accomplish such-and-such on our platform?"
The concept that different workloads have different characteristics, require different resources and are better-suited or worse-suited to different platforms was really never taken into account. So the eventual business outcome was rarely as good as it might have been.
Distributed architectures aren't always the rule, either. At institutions like banks, mainframe computing has often held sway as the dominant platform largely because, well... it held sway in the past, going back half a century in some cases. But organizations should look at their current platform as well as others to make workload decisions.
What Hill has recently worked on for IBM are two different tools that give organizations a new perspective on this whole area. If you consider distributed architectures and mainframe architectures as the two fundamental approaches, the next logical questions are: What kinds of workloads are best suited to each? And what kinds of variables should an organization consider to match platforms with workloads in every case?
Hill suggests that this switch in perspective -- from platform-prioritized to workload-prioritized -- has a natural analogy in a familiar area.
"Choosing the best-fit platform should be like buying a car," she said. "You typically look at the qualities you're looking for, i.e., good gas mileage, safety, Sirius radio, and then search for the car that meets these needs. What you don't do is pick the car first, and then try to force-fit in these characteristics."
A tailored white paper of your very own
This is why both of the IBM assessment tools put the focus directly on workload characteristics -- albeit in very different ways.
The first tool, believe it or not, actually generates a customized white paper. Following a short series of questions on mainframe ownership, workload type, number of users and the relative importance of efficiency, reliability, scalability, security and utilization, this white paper can be downloaded straight to your hard drive in Word format.
Additional questions might appear depending on your answers to the above. For instance, if your workload involves data warehousing, you'll also be asked the total volume of data in terabytes.
While the white paper is generated based on predefined content created by IBM partner IDC, the content is nevertheless chosen based on your answers, and combined in a way that will more closely reflect your particular IT context than any other white paper you are likely to find.
And as a result, it should provide unusually specific insight into the probable challenges that apply, and provide helpful recommendations concerning the pros and cons of different platforms and workload migration strategies.
Interactive assessment: It's what all the cool companies are doing
The second assessment tool offers an interactive experience based on your answers to three different sections.
The first section lets you define up to five different named workloads; for each, you'll need to provide both the workload's task (analytics, transaction processing, etc.) and current platform (whether distributed or mainframe).
The focus in the second section is on the characteristics of those workloads. For each, you'll need to specify eight different traits -- staff skill level, software license costs, capacity and so forth.
In the third section, you describe the characteristics of your current data center. Here, too, there are eight traits to consider, ranging from floor space to hardware maintenance costs to storage and energy costs.
Once you've finished your self-assessment, the tool then provides results for all your workloads. You can actually see whether a distributed model or a mainframe model is likely to yield optimal performance in each case, based on your specified criteria, via a color-coded model. And if you'd like to adjust your previous answers, to see if the results change, you can do that, too.
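Under the hood, an assessment like this amounts to weighted scoring. Here's a toy Python rendition of the idea; the traits and weights are invented for illustration, and the real tool is certainly more sophisticated.

```python
# Toy workload-placement scorer: each trait contributes a (distributed,
# mainframe) weight, and the platform with the higher total wins.
# Traits and weights are invented for illustration.
WEIGHTS = {
    "high_io":          (1, 3),
    "bursty_demand":    (3, 1),
    "high_security":    (1, 3),
    "commodity_skills": (3, 1),
}

def recommend(traits: set) -> str:
    """Score a workload's traits for each platform and pick the higher total."""
    distributed = sum(WEIGHTS[t][0] for t in traits if t in WEIGHTS)
    mainframe = sum(WEIGHTS[t][1] for t in traits if t in WEIGHTS)
    return "mainframe" if mainframe > distributed else "distributed"

print(recommend({"high_io", "high_security"}))           # mainframe
print(recommend({"bursty_demand", "commodity_skills"}))  # distributed
```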
I found it interesting, entering different combinations to see what kind of results I'd get. Based on the sample sets I gave the tool, it appears my imaginary companies have invested too much in distributed architectures -- not too surprising, really, given the widespread canard that distributed computing is intrinsically less expensive. Quite often, due to hilariously low utilization levels and frighteningly high energy costs, it's the other way around.
Hill endorses both tools as a way not just to assess your current situation, but also plan for future scenarios. Since the tool lets you enter any values you please, you can test not just the values that apply right now, but those you expect to apply in the foreseeable future.
The results might surprise you -- in a good way.
"Looking at the right-fit platform strategy is often a major mind-shift in the IT world," said Hill. "But once embraced, it opens the doors to major cost reductions and a smarter, more optimized data center architecture -- put simply, smarter computing."
Additional Information
- Try out these workload assessment tools for yourself
- Learn more about Enterprise Modernization
- Find out how you can experience smarter computing today
Tell me if this sounds familiar: You're pondering whether to do something potentially risky -- perhaps quit a job, switch to a completely different career path or even start a business. You have many motives to do so, yet the road ahead seems very unclear, and you're uncomfortable with that. And someone else says, "Oh, go for it. Everything in life is risky. You could get hit by a bus any day... but that doesn't stop you from leaving the house."
Well, that's true, of course, but as an argument it has a really basic problem: it's number-free.
Not all risks, in other words, are the same. The risk of getting hit by a bus is different from, and much smaller than, the risk of starting a business, watching as it slowly fails and getting into deep debt.
Making such a decision reasonably competently means finding a way to clarify, quantify and prioritize the kinds of risks you're facing in a given strategy -- and weighing them against the benefit you're trying to create.
This, in essence, is a problem confronted every day by businesses making complex decisions. They'd like to create improvements or pursue new goals in a given area. But in a perfect world, they'd also like to avoid getting hit by a bus.
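If you want to see what "number-free" costs you, try putting rough numbers on the bus argument. The probabilities and dollar figures below are invented for illustration, but the comparison technique -- expected loss as probability times cost -- is the standard one.

```python
# Expected-loss comparison: probability of a bad outcome times its cost.
# All probabilities and costs below are invented for illustration.
scenarios = {
    # name: (assumed probability per year, assumed cost if it happens)
    "hit by a bus": (0.00005, 1_000_000),
    "startup fails, deep debt": (0.30, 250_000),
}

for name, (p, cost) in scenarios.items():
    print(f"{name}: expected loss ${p * cost:,.0f} per year")
# hit by a bus: expected loss $50 per year
# startup fails, deep debt: expected loss $75,000 per year
```

Same "risky" label, three orders of magnitude apart. That's the kind of clarity a number buys.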
By no coincidence, this is also a major focus of IBM's considerable interest in advanced business analytics -- recently described by Mike Rhodin, Vice President of IBM Solutions Group, as "the silver thread woven throughout our portfolio." Risk assessment and mitigation are central to business strategies -- almost all strategies, in almost all industries. And advanced analytics can deliver some of the best available insight to accomplish that.
Get a moment of clarity -- actually, get lots of them
To get a little more clarity, I talked to John Kelly, Worldwide Market Segment Manager for IBM's Business Analytics group, about IBM's perspective in this area... and how that perspective will be explored at the forthcoming Vision 2012 conference, to be held May 14-17 at the JW Marriott Grande Lakes in Orlando.
Like me, Kelly sees analytics as a powerful visualization tool -- a way to understand different possible futures, and steer your organization into a future that offers more benefit and lower risk.
"Customers are looking to improve decision making and business performance through increased insight and business intelligence," he said. "That's exactly why IBM has recently labeled analytics as one of our four major strategic directions -- we know how much potential this area really has. And we'd like our clients to realize as much of that potential as possible."
Risk assessment and mitigation, of course, have a long history in some areas (like finance) and are less well understood and established in other areas (like technology startups), but the root appeal remains the same in every case. If you want to get the best possible outcome, you need to establish the most likely, and most potentially devastating, pitfalls.
Analytics tools can work almost like a car's high-beams, helping you navigate and get where you're trying to go more safely. That's a goal that almost any business leader, in any industry, at any organization of any size, can understand and appreciate.
Regulatory compliance stands out as a growing challenge
And beyond that general value proposition, IBM is making considerable strides in applying analytics effectively in areas that are of particular concern to its clients. One such area: regulatory compliance and policy management.
In the wake of major scandals dating back more than a decade, these regulations have increasingly been created with the stated goal of minimizing various forms of unacceptable risk to the public, to business employees and customers as well as to stockholders. And that, of course, is a laudable goal.
But complying with those regulations can be a headache even for the best-intentioned organizations that are really committed to compliance and dedicating tremendous resources to the job. Even when compliance seems to have been achieved, it hasn't always been. New regulations appear every year; it's not the easiest thing in the world to know which apply in a given case, and under what conditions, and what the best organizational response should be.
IBM, it seems, can help. "Our solutions deliver analysis and reporting, to provide visibility into the state of risk in the enterprise including evidence of compliance or remediation status, trending and point-in-time analysis and ad hoc querying," said Kelly.
Consider what that means in practical terms. Not only can you understand much more clearly, quickly and easily the extent to which your organization is in compliance, but you can also demonstrate that compliance on demand, in whatever level of detail is required. In the event of an audit, such a demonstration will be essential -- and avoiding potentially hefty penalties and fees will be much simpler. What organization wouldn't be interested in solutions like that?
One solution family drawn from IBM's analytics portfolio is particularly strong in the area of compliance and risk: IBM OpenPages. This suite of tools focuses specifically on governance, risk and compliance, not just identifying and monitoring risk, but also putting in place a programmatic way to communicate and manage risk exposure across the enterprise to reduce unexpected losses, penalties and fines (not to mention reputational damage), while at the same time improving decision making.
Its compliance capabilities, for instance, are directly on point. Organizations routinely create (and enforce) policies to drive compliance... but not always in as governed and coherent a fashion as they might. (Banking industry, I'm looking at you when I say that.)
OpenPages Policy and Compliance Management automates the lifecycle of compliance policies from cradle to grave, reducing redundancy and optimizing the policies you keep in a way that spans resources, business groups, projects and workflow. Organizations that have a formal implementation of risk mitigation, but would like to tune or enhance it to better align with their current and future needs (not to mention future regulations), will find this solution particularly compelling.
A better outcome can result from risk-aware decision making
Risk management is increasingly becoming a strategic, executive-sponsored solution that many organizations view as providing a competitive advantage, where risk and performance are aligned and where governance, risk and compliance is part of "annual strategic planning."
An integrated governance, risk and compliance program also has a wealth of information that can be leveraged for risk-aware decisions. Through business intelligence and reporting, information from an integrated program is being utilized beyond the risk and compliance office and being leveraged by business managers to make risk-informed decisions about resource and investment allocations in product planning.
Optimize your risk management strategies in many dimensions
Other OpenPages solutions -- which inter-operate with each other, via a shared foundation of data -- are available to deliver similar capabilities in related fields like:
- Operational risk management. This offering can identify, manage, monitor and analyze operational risks of all types, all from a single point of command to spur a particularly agile response. From better, more accurate insight comes a faster and more comprehensive remediation.
- Financial controls management. Regulations like Sarbanes-Oxley in the United States are mirrored by similar regulations in other countries around the world -- and for global organizations, each crossed border represents a new set of financial regulations with which to comply. This solution focuses on reporting, offering a centralized architecture for analysis, documentation and data management.
- IT governance. IT has become central to almost everything organizations do today. As a result, risk assessment for IT assets, services and data is needed to ensure that IT delivers the intended value -- ideally, on time and under budget -- even in the case of complex projects that take years to complete.
- Internal audit management. For large organizations that proactively conduct audits of their own, this solution is a natural fit. Using it, they can automate many of the basic processes involved, as well as connect the results logically to other risk assessment initiatives they have in place.
Anyone interested in getting more information on these and related topics should definitely consider attending the previously mentioned Vision 2012 conference.
This is the premier global conference for finance and risk professionals, and the most high-profile stage for IBM to discuss everything it has to offer in this rapidly evolving, increasingly hot area.
When I asked Kelly to sum up in a nutshell what IBM will be discussing at Vision 2012, he said this:
"IBM Risk Analytics enables the Smarter Analytics approach -- turning risk information into insight, and insight into better business outcomes."
I like the sound of that.
Additional Information
- Learn how Business Analytics improves business performance
- See what Vision 2012 offers for finance and risk management professionals
- Gain relevant business insight through Smarter Analytics
- Smarter Analytics for the financial industry
One of the key topics at IBM Impact 2012, to be held in Las Vegas April 29-May 4, will be IBM PureSystems. It's a new family of what IBM calls expert integrated systems, which combines the flexibility of general purpose systems, the elasticity of cloud and the simplicity of an appliance tuned to the workload. And I think that the cloud and workload aspects are key ones here.
I had the chance to talk with Jerry Cuomo, IBM Fellow, VP and WebSphere CTO -- and one of the key presenters on PureSystems at Impact -- about the recent announcement and what it will mean to the world of business and IT. Its impact, if you will. But before I share Jerry's insights, I'd like to step back and talk about cloud in a more general way -- then we'll see how PureSystems fits in.
I sometimes think one of the most important and underrated aspects of cloud computing is "abstraction" -- the way clouds can empower organizations to move up from a lower level of abstract thought and execution to a higher, better one.
Of course, abstraction is a little... abstract itself, as subjects go. So let me trot out one of my patented analogies to clarify a bit.
Have you ever seen a baby when it's first learning to walk? The job is really quite a complex one as far as the baby is concerned. It has to ponder large muscle groups very consciously, deliberately thinking about using one leg, then another, all while also using small muscle groups to maintain its balance.
But eventually the baby can stop thinking about things on that level -- the level of specific muscle control -- and start thinking on a higher, more abstract, more effective level.
Now it's not "I need to move my left leg forward, and put my weight on my left foot" but, much more simply, "I want to walk into the next room."
This new, higher level of abstraction the baby has reached gives it new power to pursue its goals (which may or may not include terrorizing the family pet and deep-searching local trash cans).
And if this baby is ultimately going to reach the highest level of competitive motion -- perhaps becoming a world-class sprinter, the next Usain Bolt -- it is going to have to be thinking on a very high level of abstraction indeed. There is just no time to think about such details as which muscles you'll move next, when you're running sprints in the Olympics. There is instead only nine and a half seconds to travel a hundred meters.
That's not a bad metaphor for business today -- a similarly competitive world, in which market agility tends to translate into market success. You don't want to have to think about the technical details; you really may not have the time.
You want to focus on your goals and strategies and services, the heart of the value you're creating in the world, and trust that your infrastructure will be up to the efficient execution of whatever you have in mind.
Clouds -- done right -- can be that infrastructure.
The question isn't "What's our tech?" but "How well do we fulfill our workloads?"
All this crossed my mind when I learned about PureSystems and talked with Jerry Cuomo. He agreed with me about the importance of abstraction, but was quick to point out that the new launch delivers far more benefits than just that.
It seems that PureSystems is the end result of IBM's underlying goal: to deliver a next-generation service delivery platform that fulfills workloads optimally -- even given how dynamically workloads can change across time, across technical and business domains, and across organizations.
"PureSystems is unique to our industry," he said. "It represents a bold balance of being open yet prescriptive, and preserving compatibility with your current applications while introducing support for highly efficient new workloads. PureSystems do not just hold the potential to be workload-aware; they are workload-aware. PureSystems do not merely enable workloads; they contain them, including a scalable web workload. They facilitate lifecycle management like monitoring and license management, and what's more, those capabilities work right out of the box. Simply put, IBM PureSystems are not just your cloud-in-a-box solution; they are your workload-aware cloud."
What are the ingredients of the PureSystems recipe? Basically, they're packaged in two groups. The first group -- "next-generation platforms," or NGP -- is a top-caliber variation on Infrastructure-as-a-Service.
But it's in the second group, which focuses on application systems, that the real magic happens.
Recall that IBM, almost uniquely to the IT industry, produces solutions at every layer of the technology stack. That means IBM, almost uniquely to the IT industry, also has the power to combine those layers into optimized packages -- all of which also benefit from IBM's enormous experience consulting with organizations of all sizes, in all industries, on cloud computing topics.
For PureSystems application systems, that means IBM's strengths are multiplied, each helping all the others.
"Today, organizations have choices at every level -- processors, storage, network, OS, middleware and applications," said Cuomo. "While the last decade of open competition around these components has driven record capability and quality, enterprises that mix and match these best-of-breed parts pay a very high price tag in the labor costs and skills needed to orchestrate the final composition. That leaves very little in the enterprise's innovation budget. PureSystems give the customer back their innovation budget. Our hardware and software experts have used our cumulative experience to create an integrated system that also empowers our clients to stir in their own expertise and capabilities -- easily."
Here you see just what IBM means by "expert integrated systems." It's not just IBM's expertise that's being integrated; it's also the customer's. This is the magic of PureSystems: it is an ideal foundation for private cloud computing that (a) delivers the best technologies IBM has to offer, drawn from the industry's strongest cloud portfolio, (b) combines those technologies in the best ways for a private cloud, in direct support of proven best practices, and (c) still allows the new cloud to be easily tweaked to create a perfect fit for any given organization's needs.
Instant time to value, but also straightforward tailoring
In fact, beyond merely "allowing" that kind of tweaking, IBM has made it remarkably straightforward.
For instance, cloud services executing on PureSystems can be managed by team members both inside and outside of IT proper.
Line of business managers are going to enjoy being able to request a new service right from a catalog, then have oversight of that service themselves -- an experience they may never have had before, and a power akin to being able to walk, instead of having to ask someone else to carry you.
They're also going to enjoy the fact that cloud management for PureSystems can easily be aligned with job roles, so they can manage their services using the interface that works best for them, as determined by the performance metrics that they deem most significant.
IBM has, in fact, created a new admin paradigm just for PureSystems -- another variation on the theme of multiple levels of abstraction -- and Cuomo is very optimistic about how it's likely to be received.
"One of the aspects of PureSystems we think our customers will love is the way they make management so straightforward," he said. "Via our approach of progressive disclosure, they can administer services at the technical level that makes the best sense for them. Specifically, we support a progression with three levels of disclosure. The first, Virtual Application, only requires you to know the needs of your application -- middleware and hardware are hidden. The second, Virtual Systems, pre-arranges middleware in patterns designed to power specific workloads. Last, Virtual Appliance supports a bring-your-own-expertise model, allowing you to include your own middleware and construct your own patterns."
This concept of workload patterns is yet another selling point of PureSystems. Thanks to literally decades of experience in IT consulting, IBM has acquired an extraordinary level of knowledge about middleware/hardware combinations and the patterns that tend to apply. That insight is baked in, so you can leverage the patterns right away. And most organizations will do exactly that.
But you can also, as Cuomo suggested, create and roll out new patterns from scratch. And you can combine these two models -- integrating, in a sense, the best of IBM's expertise and the best of your own.
It's hard to get much more expert or integrated than that, and Impact 2012 will be the place to learn more about it.
Additional Information
- Learn what IBM PureSystems are all about
- Find out more about Impact 2012
- Register now for Impact 2012
One of the first things you learn working in IT is how difficult it is to get people to switch from one vendor or IT solution to another. Perhaps you start a new job, at a new company, where they're struggling with a technical problem you've solved in the past. Does your new employer want your opinion on the problem?
As a general rule, it does not. The IT group there is already used to technology X, used in manner Y, and it will turn a skeptical eye on any other approach. You could even call this organization "solution-blinkered" -- its eye is covered by skepticism.
Here's another example. In December 2000, I published an essay on Salon.com suggesting that Apple should pursue a specific, technically complex strategy -- a strategy which was perceived as crazy at that time -- in order to rescue itself from market oblivion and become far more successful.
Six years later, Apple pursued the same crazy strategy I had suggested.
Why did it take six years? Because, although my ideas were correct, and although Apple is known for innovation, decision makers inside the company were skeptical of creative possibilities, and wary of the risks that can come from change.
Most organizations are like that. Often, there is simply no good reason for IT to carry on with a problematic status quo, and every reason for IT to pursue something else that looks a great deal more promising.
Want better ROI from IT? Get better database software.
I ran into the same issue recently discussing enterprise database solutions with Conor O'Mahony, Program Director for Database Software with IBM Software Group.
In this area -- enterprise-class databases -- while IBM led the way on mainframe systems, Oracle was one of the first organizations to bring a solution to market on distributed systems. Since then, Oracle has continued to lead the database market on distributed systems. But how much of that leadership is due to Oracle's early-mover advantage, and how much is due to its actual capabilities, value proposition and competitive strength?
That seems to me to be a very open question. It has repeatedly been my personal experience, as a former IT guy, that Oracle Database is about as well known for high costs as for high performance. And if Oracle Database's performance has declined relative to the competition, its costs have not.
That's a real problem, given how deeply rooted database software tends to be in enterprise IT infrastructures, and the staggering impact it has on both IT service levels and IT budgets.
O'Mahony sees things in much the same way. "If IT organizations are looking to identify ways to meet their 'do more with less' mandate, reclaiming some of the IT budget set aside for data management has to be on their radar," he said. "Data management costs are often a sizeable chunk of an IT budget; and recent advances in database migration technology are allowing them to significantly reduce those data management costs."
But while competitive options may be superior, organizations often remain blind to those options (i.e., they're solution-blinkered). They have the false idea that switching from one database to another will cost too much, take too long and ultimately create too much risk.
According to O'Mahony, they couldn't be more wrong -- particularly when it comes to the specific case of Oracle Database vs. IBM's own DB2 database solution. Why? Partly because IBM has made it so easy for them to switch.
"Since 2009, DB2 has been adding language-compatibility features," he said. "Specifically, DB2 directly supports the most popular aspects of Oracle's PL/SQL language. That means applications written in Oracle's PL/SQL will run natively in DB2 as well -- typically requiring changes to only 2 percent of the code. It also means that even after a migration has finished, organizations can continue to program in PL/SQL if they want. So any programming talent they've hired in that area can carry on programming just like before."
How does that magic happen? It seems that DB2's capabilities in this area don't stem from any type of emulation (which often runs into compatibility and performance issues).
Instead, they stem from a compatibility layer that really does deliver native performance. Calls made in PL/SQL continue to work just as they did before; they just don't need Oracle technology to do it.
So, to put it simply, you can just pack up your data and applications, move them from Oracle Database to DB2 and they'll run as fast as they did before -- or faster.
Lower bills. Higher performance. The end.
And if you do hop from Oracle Database to DB2, don't be surprised when your operational costs fall like a cow dropped from a helicopter.
This is because Oracle Database is, by any reasonable standard, a pricey solution to support over time -- one that typically requires ongoing "help" from Oracle and thus generates excessive annual fees. O'Mahony suggests that this is an area where organizations can really see major positive change right away.
"Instead of spending lots of money on expensive Oracle support and maintenance contracts, more and more organizations are discovering that DB2 is a comparable product that offers far better value when it comes to costs, performance, storage optimization, and staffing levels," he said. "In fact, some organizations are using this tactic to lower their data management costs by as much as 50 percent, and reclaiming this valuable IT budget for new high-impact initiatives."
Spend less. Get more. That sounds like the kind of smarter solution organizations always say they want, yet are sometimes oddly reluctant to pursue.
And that's really too bad, because forward-looking organizations that have made this leap are already raking in the business benefits: higher performance, lower costs, and all via a nearly painless migration process that often takes next to no time.
"Gone are the days of high-risk IT projects that often missed deadlines and overran budgets," said O'Mahony. "Organizations are now migrating from Oracle Database to DB2 in literally days. For instance, one of the world's largest banks recently moved a core application from Oracle Database to DB2 in just two days. It was able to do this because 99.5 percent of its Oracle PL/SQL code was supported by DB2 out-of-the-box. And this two-day period included data movement, all code modifications, testing and performance tuning. Such short and low-risk database migrations are literally redefining many organizations' tolerance for database migrations."
Would you like another example? Ponder the experience of Reliance Life Insurance, one of India's largest insurers and the third-largest private company in India across all industries.
Reliance wasn't satisfied with the performance it was getting from its legacy Oracle infrastructure. Specifically, it took 36-40 hours to process OLTP (online transaction processing) data. This, in turn, meant that the company faced an unacceptable time lag; they needed key information to be accurate and accessible in real time, but the Oracle infrastructure simply couldn't deliver that. And Reliance had no confidence in that changing any time soon.
For these reasons, Reliance migrated to an IBM solution: DB2 running on IBM Power Systems.
The results? They're now getting the real-time insight they require, because the lag of 36-40 hours they had been getting from Oracle Database has dropped to less than 30 minutes. Customer service is much better informed; customer satisfaction has climbed; and so has application uptime -- 95 percent with IBM vs. only 80 percent with the previous Oracle Database infrastructure. Scalability has also improved dramatically, from 3,000 simultaneous users to 12,000.
Perhaps most impressive of all is the fact that all of these benefits come packaged with far lower ongoing costs. To wit: about 50 percent less total cost of ownership for DB2 running on IBM Power Systems compared to Oracle Database running on Oracle-owned Sun systems.
So let's sum up the case for DB2 over Oracle Database:
1. Pain-free migration. DB2 directly supports Oracle Database applications and Oracle's language -- up to 98 percent direct compatibility.[1]
2. Superior performance. If you migrate to IBM Power Systems as well as to DB2, you will get a substantial hike in service levels -- in a typical case, as much as three times faster execution.[2]
3. Lower costs over time. While Reliance experienced an impressive 50 percent drop in TCO, IBM studies suggest many organizations can expect even better -- often, about a 60 percent drop.[3]
Tell me: Is your organization solution-blinkered?
Additional Information
- Learn about IBM Data Management capabilities to better leverage your data
- Find out how migrating to DB2 can boost performance and cut costs
- Meet IBM DB2 10 and IBM InfoSphere Warehouse 10
- Get this eBook on strategies for lowering the costs of data management
- Join in the conversation on IBM database software news
About the author
Guest blogger Wes Simonds worked in IT for seven years before becoming a technology writer on topics including virtualization, cloud computing and service management. He lives in sunny Austin, Texas and believes Mexican food should always be served with queso.
1. "Based on internal tests and reported client experience from 28 Sep 2011 to 07 Mar 2012"; see also: The facts really matter
2. The facts really matter
3. The facts really matter
I have a friend in Canada who doesn't trust technology. For years we've discussed this over the Internet -- the standard communications platform for people who want to argue at great length, across great distances, to no apparent benefit or conclusion. His idea is that since he didn't design and build the tech himself, he can't really trust it. But my idea is that in discussing this over the Internet with me, he is trusting it.
The Internet, after all, is really an unimaginably complex "system of systems" composed of switches, routers, microprocessors, storage arrays, networking protocols, various transmission media including fiber and copper and radio waves, and other elements ad infinitum. All of them interoperate to deliver holistic value for users.
If any one of these elements were fundamentally untrustworthy, so, too, would be the Internet as a whole, and our argument could not continue. But year after year, it does continue... which would seem to endorse the tech.
Lately, I've been pondering just how many other aspects of modern life could similarly be characterized as systems of systems -- things we casually think of as monolithic entities, like the Internet, but which are in fact far more complex, with whole systems embedded in them. A short list:
- Smartphones. Apps, operating systems, processors, cameras, multiple simultaneous radio links. These aptly named devices are not exactly a tin can on a string.
- Cars. Many embedded processors, running dedicated software, handle myriad tasks from tracking transmission performance to calculating gas mileage to determining global location.
- Airplanes. See the cars description, except that in this case, the system of systems also plays a major role in keeping the airplane from falling out of the sky.
In every case, software running behind the scenes is responsible for orchestrating the total functionality, and taking care of that so perfectly we don't even think about all the complexity. Instead, we just trust it to work.
It's a good thing the software developers know what they're doing -- and have some outstanding tools to help them do it.
Optimize your software development process and working environment, and you're one step closer to optimized code
One such is the IBM Rational solution for systems and software engineering. The entire Rational portfolio, of course, helps development teams deliver quality code that's feature-complete, and as bug-free as possible, in a governed, methodical manner -- often, under budget and within target deadlines.
But this particular capability is intended for the kind of ultra-complex, system-of-systems software design I've described above, in which many individual systems, each with its own specifications and requirements, must not only work properly, but also collaborate with other such systems to deliver services.
To get the inside story on what the Rational offering is all about, I was fortunate to be able to speak to Greg Gorman, Program Director of World-Wide Systems Engineering Strategy and Delivery, IBM Rational Software, and Hans-Peter Hoffmann, PhD, Chief Systems Methodologist, IBM Rational Software.
Both of these experts were quick to point out how neatly the capability fits into IBM's larger Smarter Planet story. "How can you even have a smarter planet without smarter products?" asked Gorman rhetorically.
It's a fair question. IBM conceives of the Smarter Planet essentially as the ultimate system-of-systems, the superset of all others because it is, in fact, the Earth's complete infrastructure.
While I have, as a former IT guy, a fairly solid understanding of just how complex software development can be, I've really never thought of it on this kind of scale before. But IBM has.
"Many systems today have millions of parts -- and miles of wiring," said Gorman. "Our customers create incredibly complex systems -- airplanes, ships, missiles, cars, etc. Each of those consists of hundreds if not thousands of smaller subsystems that all have to cooperate for the system to perform its intended function. It requires coordinating vast numbers of requirements, tests, source code, designs, changes -- you name it -- across a large team of engineers and developers, sometimes at many different companies and locations. It's a huge challenge that the Rational tooling and practices help them solve."
That, it turns out, is key to the value proposition of the Rational offering. It's not just about getting code to interoperate but, far beyond that, about getting all the stakeholders associated with that code working together -- developers, architects, line-of-business guys, executives and end users.
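Rational's actual tooling is far richer than anything I could show here, but a toy sketch may help make "coordinating requirements and tests across subsystems" concrete. Everything below -- requirements, subsystems, tests -- is invented; the point is the traceability cross-check, which real lifecycle-management tools perform at enormous scale.

```python
# Toy traceability check, loosely in the spirit of what lifecycle
# management tools automate. All artifacts here are invented.

requirements = {
    "REQ-001": {"subsystem": "braking",   "text": "Stop within 60 m from 100 km/h"},
    "REQ-002": {"subsystem": "telemetry", "text": "Report position every 5 s"},
    "REQ-003": {"subsystem": "braking",   "text": "Engage ABS within 0.2 s"},
}

tests = [
    {"id": "TST-101", "covers": "REQ-001", "passed": True},
    {"id": "TST-102", "covers": "REQ-002", "passed": True},
]

covered = {t["covers"] for t in tests}

# Every requirement should be covered by at least one test; in a real
# system of systems this check spans thousands of artifacts and many teams.
for req_id, req in requirements.items():
    if req_id not in covered:
        print(f"UNCOVERED: {req_id} ({req['subsystem']}) - {req['text']}")
```

Scale that cross-check up to thousands of artifacts, dozens of tools and multiple companies, and you have a feel for the coordination problem Gorman is describing.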
And the Rational solution for systems and software engineering also spans the entire software development lifecycle, from project conception to ultimate retirement and every stage in between. Just as today's systems-of-systems need effective, governed orchestration to work properly, so, too, do the projects that make those systems possible in the first place via high-quality software.
An engaging past, tied to a bright future
IBM has an incredible depth of experience in business technology, of course -- what other IT solution provider is a century old? -- but I was curious how IBM first got its feet wet in systems-of-systems project optimization of this type. It turns out that the inciting event was America's space program.
"Among other, very early contributions, we built NASA's Apollo mission control in Houston -- you know, 'Houston, we have a problem'? -- that mission control," said Gorman. "Obviously a complex system-of-systems challenge, to say the least, given the available technology of the time."
Today, IBM's client roster is interested in a lot more than space exploration, though that, too, continues. I asked Hoffmann what kinds of projects he might typically consult on.
"Monday and Tuesday, maybe consulting with an aerospace company on a spacecraft project," he said. "But Wednesday? A diesel-electric locomotive project. And Thursday could be a medical company, for a pacemaker project. It's all software engineering, but each domain has its specific viewpoint and requires a different approach."
This really struck me as a diverse range of applications for one guy -- or, for that matter, one consulting company -- to handle in such short order. How has IBM managed to become so expert, in so many fields? Turns out that there are abstract principles that remain fairly constant, though their specific implementation will vary from case to case.
"Based on our experiences in the A&D, automotive, industrial automation and medical industries, we identified best practices that are key to systems engineering, covering both collaboration across the selected tools and the methodical approach," said Hoffmann. "In our engagements, we help our customers adopt the practices that give them the most benefit, i.e., higher-quality requirements and a significantly shorter development time."
One particular case study stands out in Gorman's mind: Invensys Rail, a European provider of on-board signaling systems used by Portuguese and Spanish railway systems.
When you've got hundreds of trains flying along at 200 mph or more, the margin for error in the way you schedule them is very close to zero. So is the opportunity to improve your software later. Getting it right the first time is not just a goal; it's a mandate.
IBM's Rational technology played a key role in ensuring that outcome, while also making it relatively simple for Invensys to comply with various government regulations -- despite the fact that those regulations are in a constant state of flux.
And looking ahead, Gorman sees tremendous potential for even more complex, domain-spanning system-of-systems integration in the future.
"Imagine a car wreck," he said. "Telemetry technology links to the car's GPS data, then broadcasts the car's exact location, notifying emergency first responders that a medical situation applies. The first responders, in turn, can find out even before they get to the scene who the car's owner is and, having accessed medical records, establish that he is allergic to certain medications and is a diabetic. Checks could even automatically be run to locate cellphones within a specified range, to discover any probable witnesses of the event who might be needed for legal or insurance reasons. The possibilities are really endless."
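To be clear, that's Gorman speculating, and the sketch below is my own speculation layered on top -- every system name and record in it is hypothetical. What it does show is how naturally the scenario decomposes into small cooperating systems.

```python
# Hypothetical sketch of the crash-notification chain described above.
# Each function stands in for a separate system; all data is invented.

def crash_telemetry():
    # The car's sensors detect the crash and read the GPS fix.
    return {"vin": "1HGCM82633A004352", "lat": 30.2672, "lon": -97.7431}

def notify_responders(event):
    # A dispatch system learns the location before anyone places a call.
    print(f"Dispatch to ({event['lat']}, {event['lon']})")

def medical_flags(vin):
    # A records system (given the right permissions) returns alerts
    # responders should know en route. Entirely made-up registry.
    registry = {"1HGCM82633A004352": ["diabetic", "penicillin allergy"]}
    return registry.get(vin, [])

event = crash_telemetry()
notify_responders(event)
print("Medical flags:", medical_flags(event["vin"]))
```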
Additional Information
- See how Complex and Embedded Systems can help build smarter products
- Watch this webcast to learn how to conquer complexity in only six steps
- Read this white paper on smarter product enablement
- Join in the conversation on complex and embedded systems
About the author
Guest blogger Wes Simonds worked in IT for seven years before becoming a technology writer on topics including virtualization, cloud computing and service management. He lives in sunny Austin, Texas and believes Mexican food should always be served with queso.
Every so often you experience an eye-opening moment that brings into clear focus the realities of the world around you. I experienced such a moment recently reading a quote from Google's chairman Eric Schmidt. It was this: "Every two days we create as much information as we did from the dawn of civilization until 2003."
My goodness. We've become quite the information-creating monsters, haven't we? And we've got technology to thank for it.
Granted, some of this has to do with a dramatic population increase, and some of it has to do with the fact that as a society we're addicted to instantaneous communications.
Still, it seems to me the biggest difference between now and the dawn of civilization is that our lives, personal and otherwise, rely on an infrastructure of zeros and ones as much as they do wood and steel -- perhaps even more so. This modern infrastructure that we've recently built is both the source of, and answer to, the complexities of this brave new world.
When IBM talks about the world becoming a smarter planet, this, indeed, is the sort of thing it has in mind. Beyond the obvious tech underpinnings -- desktops, laptops, servers -- think of all those billions of smartphones and millions of tablets out there, not to mention embedded processors, device sensors, switches and routers, RFID tags, software applications, device drivers, middleware and innumerable other instances of binary tech.
Each creates and sends data in some form. And all of it, combined, is the information Mr. Schmidt is talking about. We don't just create information ourselves, directly; we've created tools on a vast scale that, in turn, create information, too.
The question is: What are we doing with all that information? What could, and should, we do with it? And what might be the benefits, if we did?
IBM is determined to infuse analytics smarts everywhere it can -- almost literally boosting the Earth's IQ
I found out recently that I'm not the only one impressed by the Schmidt quote or the volume of data that seemingly permeates our daily lives. It is one of the core topics that Mike Rhodin, Senior Vice President for IBM Software Solutions Group, talked about at the IBM PartnerWorld Leadership Conference in New Orleans.
Mr. Rhodin also brought up a number of related points during a recent keynote address at CeBIT 2012 that may be worth your consideration:
- There are 1.8 trillion gigabytes of information available in today's digital world.
- Every day the New York Stock Exchange creates and captures more than a terabyte of trade information. That's a hundred thousand times more data created daily than the storage capacity of the first IBM mainframes only 50 years ago.
- Two hundred million tweets are sent every day -- roughly 12 terabytes of unstructured data from tens of millions of devices.
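That New York Stock Exchange figure invites a quick back-of-the-envelope check. Assuming "more than a terabyte" means roughly 10^12 bytes:

```python
# Back-of-the-envelope check on the NYSE statistic above: one terabyte
# of trade data per day, said to be 100,000 times the storage capacity
# of the first IBM mainframes of roughly 50 years earlier.
terabyte = 1e12                        # bytes, approximately
implied_early_capacity = terabyte / 1e5
print(f"{implied_early_capacity / 1e6:.0f} MB")  # -> 10 MB
```

Ten megabytes is the right ballpark for the disk units of the early 1960s, so the comparison holds up.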
Given all of that, how are we (on the largest possible scale of "we") utilizing the data to make the world a better place -- the Smarter Planet of IBM's dreams?
"This explosion of data from incredibly diverse sources is creating tough complexities but also new opportunities to better understand the new realities of our world," said Rhodin. "We are in the era of big data, and it is ushering in a new approach to computing, one that's more insight-driven, more intelligent, and leverages technology designed to be more cognitive."
Enter IBM Smarter Analytics -- a new strategic initiative from IBM that turns information into insight and insight into business outcomes. Basically, it helps organizations transform big data into big business opportunity.
As part of this initiative, IBM is releasing a suite of signature solutions -- outcome-based, industry-focused analysis of mass data volumes. These solutions are specifically designed to tackle some of the most pressing analytics challenges organizations face today (reducing fraud, managing financial risk and accelerating consumer intelligence) and to drive better outcomes through better insight.
According to Rhodin, solutions of this type are not just needed, they're critical for long-term success. And IBM has really stepped up by putting analytics front-and-center in its strategy.
Today IBM is weaving advanced analytics capabilities into the fabric of its products and services to support our evolving digital infrastructure and the nearly limitless things we use it to do.
"The role of big data and advanced analytics does not just sit at the center of this evolution -- it permeates it, through every computing system, process, connection and data source," said Rhodin. "On a smarter planet, analytics becomes the central nervous system through which information is received, analyzed and acted on, in a single fluid motion."
Get on board with analytics, and you and your customers will both win
According to recent studies conducted by IBM and the MIT Sloan Management Review, some 57 percent of respondents cited analytics as a key competitive advantage.
Organizations with advanced analytics were 260 percent more likely to substantially outperform analytics "beginners." And top-tier performers were 84 percent more likely to have analytics already integrated into both core strategies and day-to-day operations.
The lesson seems clear: If you're not up to speed on analytics, you're probably falling behind your peers. It's not simply an IT topic, but, increasingly, the IT topic -- the one of highest priority, because it informs and optimizes everything you do.
Rhodin concurs. "When we first launched Smarter Planet a few years back, we knew advanced analytics would have a fundamental role to play," he said. "[But] it has quickly become the silver thread woven throughout our portfolio.
"Analytics is no longer advancing single organizations; it is transforming entire industries," added Rhodin.
And the new IBM Smarter Analytics solutions prove that point in no uncertain terms. Consider the following, very diverse examples:
- Smarter customer interactions. If you want to succeed in business, you can't do much better than constantly pleasing your customer base. To do that, Smarter Analytics can help you understand much more precisely who those customers are, what they want, how best to deliver it, and even how to communicate its value to them. Essentially, this is CRM 2.0, re-imagined for the 21st century and done right.
- Smarter CFO insight. How efficient and cost-effective are operations in one area compared to another? How profitable are strategies, products and services in different contexts and aimed at different demographics? These questions are very close to a CFO's heart. And IBM's new solutions can generate the necessary answers.
Here's the pudding. See the proof?
- Smarter fraud management. The insurance industry has long been driven by stats and analysis, of course, but fraud identification remains a thorny and expensive issue. IBM's Smarter Analytics solutions can help underwriters determine when they've paid too much in the past, predict when and where fraud is likely to occur in the future and score current claims in real time to trigger immediate investigation in appropriate cases.
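IBM's actual fraud models are proprietary and far more sophisticated than anything I'd attempt here, but the core pattern -- score each incoming claim in real time and route high scorers straight to investigators -- fits in a few lines. The features, weights and threshold below are all invented for illustration.

```python
# Toy real-time claim scoring in the spirit of the fraud use case above.
# Features, weights and threshold are invented; each feature is assumed
# to be pre-normalized into the range [0, 1].

WEIGHTS = {"claim_size_vs_typical": 0.5,  # claim amount vs. similar policies
           "policy_newness": 0.3,         # how recently the policy was opened
           "prior_claims_rate": 0.2}      # history of claims on the account
THRESHOLD = 0.7

def fraud_score(claim):
    # Weighted sum of the normalized risk features.
    return sum(WEIGHTS[k] * claim[k] for k in WEIGHTS)

incoming = {"id": "CLM-9001",
            "claim_size_vs_typical": 0.9,  # far larger than typical
            "policy_newness": 0.8,         # policy opened very recently
            "prior_claims_rate": 0.4}

score = fraud_score(incoming)
if score >= THRESHOLD:
    print(f"{incoming['id']}: score {score:.2f} -> flag for immediate investigation")
else:
    print(f"{incoming['id']}: score {score:.2f} -> routine processing")
```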
If all that seems a little abstract to you, bear in mind that many of these solutions -- albeit in earlier versions -- have already been deployed by, and have created incredible value for, all kinds of organizations in all kinds of ways.
Take the case of Vestas Wind Systems, for instance. It routinely uses IBM InfoSphere BigInsights to sift through two petabytes of weather data and thus determine exactly where its turbines should go -- as well as roll those turbines out faster, and keep them up and running longer.
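The hard part of what Vestas does is the scale -- two petabytes -- and that is exactly what BigInsights exists to handle. The shape of the underlying computation, though, is simple enough to sketch; the candidate sites and wind-speed samples below are invented.

```python
# Deliberately tiny illustration of the turbine-siting idea above:
# rank candidate sites by observed wind speeds. Real analyses run
# over petabytes of weather data; these numbers are invented.

sites = {
    "ridge_a":  [7.2, 8.1, 6.9, 9.4, 8.8],  # wind-speed samples, m/s
    "valley_b": [3.1, 2.8, 4.0, 3.5, 3.3],
    "coast_c":  [6.5, 7.0, 7.8, 6.9, 7.2],
}

def mean(xs):
    return sum(xs) / len(xs)

# Power available in wind grows roughly with the cube of wind speed,
# so small differences in average speed matter enormously for siting.
best = max(sites, key=lambda name: mean(sites[name]) ** 3)
print("Best candidate site:", best)
```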
Or look at the wildly different case of XO Communications. By using IBM Business Analytics and an IBM Netezza analytic appliance, XO reduced its customer churn rate by nearly 50 percent and saved millions of dollars by uncovering deeper insights into customer behaviors, spotting trends and identifying the customers most likely to defect. With a better overall customer experience, XO is now able to stay competitive against other carriers and improve overall customer satisfaction.
And if private-sector instances aren't enough for you, ponder the Sonoma County Water Agency. The agency uses IBM's Smarter Analytics solutions to optimize pressure valves throughout a distribution network that delivers water to more than 600,000 people all over the California wine country -- reducing needless water loss and predicting future problems with equal ease.
In what areas is your organization using analytics? And how has it improved your business?
Additional Information
- Get smarter about Smarter Analytics
- Listen to the replay of the IBM Smarter Analytics Leadership Summit
- Join the IBM Smarter Analytics conversations on Twitter, Facebook, LinkedIn and Google+
- Hear Mike Rhodin's keynote speech at CeBIT 2012
About the author
Guest blogger Wes Simonds worked in IT for seven years before becoming a technology writer on topics including virtualization, cloud computing and service management. He lives in sunny Austin, Texas and believes Mexican food should always be served with queso.