Simon Dickerson is a technical sales engineer for IBM covering API Management, Cast Iron cloud integration, and mobile application development with IBM Worklight. He has an engineering degree, has worked with a broad range of computing technologies, and focuses on the benefits of technology to the end user.
Hidden within the code of any mobile application (app) that communicates with a source of data is an API call. The API call is invoked either to request data from a server (for example, in a banking app it might return your bank balance) or to send data to the server (such as a command for your bank to pay a bill).
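To make that concrete, here is a minimal sketch in Python of both kinds of call; the host, paths, field names, and token are hypothetical, not taken from any real banking API.

```python
import requests

BASE_URL = "https://api.example-bank.com/v1"          # hypothetical API host
HEADERS = {"Authorization": "Bearer <access-token>"}  # placeholder credential

# Requesting data from the server: fetch the account balance
balance = requests.get(f"{BASE_URL}/accounts/1234/balance", headers=HEADERS)
print(balance.json())  # e.g. {"currency": "GBP", "amount": 1523.40}

# Sending data to the server: instruct the bank to pay a bill
payment = {"payee": "Electric Co", "amount": 80.00}
response = requests.post(f"{BASE_URL}/payments", json=payment, headers=HEADERS)
print(response.status_code)  # e.g. 201 if the payment instruction was accepted
```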
This will be perfectly obvious if you are a programmer. After all, how else is your application going to get (or put) data? But underneath the obvious is something a little less so. How does that organization build the APIs and expose them to the mobile app?
How to build APIs and expose them to a mobile app?
When considering the app developers
- Mobile app developers like REST APIs and prefer data in JSON format. XML is OK too, but having to call a SOAP service makes things harder. The trouble here is that many backend data sources could be legacy systems without a REST interface. The likelihood is that to expose this data, some development might be needed to build a REST service for the app developer (a minimal sketch of such a facade follows this list).
- The app developer needs to know how to access these APIs. This entails building some kind of portal that publishes all the documentation and code samples that the app developer will require.
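As a rough sketch of the first point, this is what a thin REST facade over a legacy SOAP service might look like, using Flask and the zeep SOAP client; the WSDL URL, operation name, and field names here are invented for illustration.

```python
from flask import Flask, jsonify
import zeep  # SOAP client

app = Flask(__name__)
# Hypothetical legacy SOAP service exposing a GetBalance operation
soap_client = zeep.Client("https://legacy.example.com/BankService?wsdl")

@app.route("/accounts/<account_id>/balance")
def get_balance(account_id):
    # Call the legacy SOAP operation, then translate the result into the
    # JSON shape that mobile app developers prefer to consume.
    result = soap_client.service.GetBalance(accountId=account_id)
    return jsonify({"accountId": account_id, "balance": result})

if __name__ == "__main__":
    app.run()
```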
When considering the organization
- As soon as you start exposing services you open up those services to attack. So you need a security gateway to prevent, for example, denial of service attacks.
- Some form of user authentication/authorization may be needed. This is commonly OAuth but if the app is for internal employees your existing LDAP services might be needed.
When considering performance and user experience
- If an API is slow in returning a response, this directly impacts the users of the app and they will not be using the app for long.
- An app has to be considered as a product and a bad user experience with a product will reflect badly on the organization.
To expose your data to a mobile app, you can build out the services indicated above or implement an API Management solution. IBM API Management is designed to do just this.
Implementing an IBM API Management solution
Key capabilities of IBM API Management include:
- Cloud or on-premise solution
- Security gateway using IBM's trusted DataPower technology
- The ability to proxy to existing REST Services
- The ability to create new REST APIs by connecting to databases and web services
- Aggregation of multiple backend services and data filtering
- A developer portal that can be set up in a matter of minutes
In addition, there is performance tuning and caching, and a comprehensive analytics capability that enables you to search and analyze business information in the API messages, important information over and above simple API statistics.
I have just finished a couple of chapters for the IBM Redbooks publication on API Management. The publication details both the theory of APIs and a practical implementation. Check it out.
For IBM API Management related blog posts, see:
For IBM API Management Redbooks publication, see:
IBM API Management supports the API development lifecycle through its API versioning and promotion features.
Katherine Sanders is an IBM Software Services for WebSphere (ISSW) consultant based in Hursley, United Kingdom. She has worked at IBM since 2007 and is currently specializing in IBM WebSphere Cast Iron Cloud Integration and IBM API Management. She has a strong background in software development and the WebSphere product portfolio. She holds a Bachelor of Science degree in Computer Science from the University of Nottingham.
Imagine that a company wants to increase brand awareness through the API economy. They have created their first API, and they have succeeded in attracting a community of app developers to use it. The company now wants to improve the API based on the feedback from the community to help attract more app developers. How can the company make changes to the API without affecting the existing apps that use it? How can the company track what changes have been made to the API? IBM API Management meets these requirements through its API versioning feature.
The company is also expanding its API development team to keep up with demand from the community for new features. They need to be able to test one version of an API while the next one is being developed. They also don't want changes to the API to be visible to app developers until they are ready for release. They decide that they need multiple API Management environments for development, test and production. How can the developers move each version of the API from development to test, and from test to production? IBM API Management supports this through its API promotion feature.
You can create two types of API versions in IBM API Management:
- Minor versions - Used for changes that do not require modifications in apps that call the API. For example, when you add a new resource to an API or change a resource implementation without modifying the inputs or outputs, you can create a minor version of the API.
Here are some pointers regarding minor versions of APIs:
- All minor versions of an API share the same URL, so you can switch seamlessly between them without the apps that call them being affected. However, only one minor version of the API can run at any given time.
- You can develop the next minor version while the current minor version is running, but you must switch to run the next minor version before you can test it.
- You can revert to a prior minor version of the API if there are problems.
- Major versions - Used for changes that break existing apps that call the API. For example, when you remove a resource or modify the required parameters of an API, you want to create a major version of the API.
Here are some pointers regarding major versions of APIs:
- Each major version of an API has a unique URL, and they can all run simultaneously in the same IBM API Management tenant (see the sketch after this list).
- Existing apps will continue to use the previous version of the API. You must modify the app if you want it to run the new version of the API.
- You can develop and test the next major version without affecting the current major version. You can also choose to keep the next major version private and invisible to others until it is ready for release.
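To make the URL behavior concrete, here is a small hypothetical illustration; the gateway host and paths are invented, not taken from the product.

```python
import requests

GATEWAY = "https://api.example.com"  # hypothetical IBM API Management gateway

# All minor versions (1.0, 1.1, ...) share one URL; whichever minor version
# is currently running answers this call, so apps are unaffected by a switch:
requests.get(f"{GATEWAY}/bank/v1/accounts")

# Each major version has its own URL, so v1 and v2 can run side by side;
# existing apps keep calling v1 until they are explicitly updated to v2:
requests.get(f"{GATEWAY}/bank/v2/accounts")
```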
Each IBM API Management customer will typically define multiple environments, for example three environments for development, test and production. The API Promotion feature enables you to promote an API from one environment to another as part of the development lifecycle.
The Promotion feature can also be used to:
- Attach an API to an IBM Support request.
- Store an API version in an external source code control system.
- Back up an API for disaster recovery.
The promotion is done by exporting an API in one environment and importing it into another environment using the API Manager.
When an API is exported, a compressed file is downloaded. The compressed file contains encrypted information about a single minor version of the API, including its name, visibility, context, and the configuration of the resources in the API. The file also contains details of the connections, entitlements, and security configuration for the API.
If an API does not already exist in the target environment, a new API is created when the API is imported. All the connections, entitlements and security configuration required by that API are also created. The new API will be listed in the APIs view after the import with an updated URL to match the new environment domain name. The new API will be imported in the stopped state, and you must start it before you try to test it.
If an API already exists in the target environment, the exported minor version of the API is added to the list of versions for the API. The current version of the API does not change, so you must revert to the newly imported version to be able to view or edit that version of the API. You must also start the new version of the API before the changes take effect in the currently running version.
During the import, the connection, entitlement, and security configuration are compared with the target environment and an error message is displayed if there is a conflict. You must correct the reported problem before you can successfully import the API. An error also occurs if you try to import a version that already exists.
Read more about versioning and promoting APIs in the IBM Redbooks publication, Exposing and Managing Enterprise Services with IBM API Management (see the link below). There is a dedicated chapter on the versioning and promotion of APIs that includes a step-by-step tutorial of how to apply these features in a real-world scenario, as well as a section on best practices. Also follow @ibmapimgt on Twitter for all the latest news about the product.
For IBM API Management related blog posts, see:
For IBM API Management Redbooks publication, see:
John Falkl is the General Manager of Technical Strategy and Integration at Haddon Hill Group Associates, an IBM Premier Business Partner. Prior to that, John was an IBM Distinguished Engineer and CTO of SOA and Application Services Governance at IBM. John has 35 years of experience across all roles in Information Technology. John specializes in SOA, Enterprise Architecture, Middleware and now enabling API Management.
By now, you have probably been exposed to a lot of information regarding API Management. However, there's still a significant amount of confusion surrounding API Management: what it is and why it's important. In fact, it's easy to become confused about how API Management relates to, and differs from, SOA.
API Management, like SOA, is a service-oriented approach: both manage consumer and provider relationships. We've seen service orientation evolve over the past 15 years, dating back to RPC (remote procedure call) and CORBA/IIOP, and API Management should be seen as the next step in that evolution.
There are several perspectives from which to consider API Management. Simply put, API Management focuses on monetizing value locked up in existing business functions by exposing those functions (in the service sense) to expand the business.
The value of API Management ultimately is the monetization of business functions. In essence, it means taking carefully chosen business functions and exposing them (internally, externally, or to a limited audience) so that others (internal teams, clients, partners) can leverage them.
Advantage of API Management model
There are several advantages to this model:
- It extends your business model by providing core capabilities to partners and allowing them to consume your business functions. This expands your business by opening up new and different routes to market and helps partners by alleviating the need for them to build those core functions.
- It provides your business with a potential annuity revenue stream for APIs consumed depending on the business model you choose for your APIs.
- It allows you to market your business functions as products, thereby driving awareness of your key capabilities.
- It creates business dependencies for others on your 'platform' of APIs.
Key factors for successful implementation
To implement APIs successfully, you should:
- Identify, document and optimize your business processes.
- Establish processes for business function identification, ownership, funding, analytics, reporting and investment processes.
- Build business APIs in a modular fashion, with the right level of granularity, so that the functions can be reused or composed into higher-level business functions.
- Catalog important information about those APIs, such as how to access them, what the terms and conditions of usage are, what security requirements exist for the API, etc.
- Establish an integration and mediation framework, so that APIs can be called dynamically based on policies and business parameters.
- Enforce policies that make your APIs secure and reliable.
- Track and analyze usage statistics, access information, performance data, etc., so as to ensure APIs are readily available, pertinent and return business value.
- Manage the APIs within portfolios, thereby distributing business decision making responsibility, as well as guiding where to invest, how to maximize business return and how to make your business more agile.
- Market your APIs to the correct audiences (developers, business partners, customers, etc.).
API Management takes a very community-oriented approach to exposing business function. A major key to success with APIs is the marketing and socialization of APIs so that they attract the right audience. Another important aspect is the ability to drive significant consumption of an API, creating a fruitful dependency on its use and thus assuring the success of the business model built on that API.
API Management is the hottest area of IT at the moment. Check out the IBM Redbooks publication on IBM API Management (see the link below).
For IBM API Management related blog posts, see:
For IBM API Management Redbooks publication, see:
Chris Wood is an Integration Architect at Visa Europe, based in the United Kingdom. Chris has a background in SOA and ETL, designing and building systems that incorporate WebSphere SOA Appliances, WebSphere Service Registry and Repository, and WebSphere MQ.
I like to get stuff done. I can't help it. This is just part of my personality. I like to get to grips with a problem, work out the best way to solve it and get it done. I think this is why I was drawn into integration architecture and design. For the past ten years, I've been taking integration problems and attempting to solve them, to get them done right.
People who have worked with me will also know that eventually I like to talk. It may take a while to get going, but once I get there with a problem and how to solve it, suddenly take-off velocity is reached and there's no stopping me.
That continuing dialogue, taking a problem, working out how to solve it, and communicating the solution particularly resonates with me on the subject of Web APIs. I've only been in the API space for a couple of years but I understand the value of Web APIs and what they bring to the integration space:
- Web APIs help me design and build interfaces that developers understand. Making integration easy is the key, right? So why not build a solution that the developers can pick up, use and understand with minimum fuss?
- Web APIs implemented using REST are great, but what about all the other stuff the business has that could be exposed as an API? The existing databases and SOAP-based Web Services? How do I make them accessible in a way developers understand and I can manage?
- For the existing assets I expose, the databases and Web Services, how do I make sure that it's business as usual for internal systems? That the new consumers of those assets don't impact the existing ones?
This is where IBM API Management can help; it provides the tools to the integration guy to make stuff happen, to get Web APIs into the marketplace and get developers working with them ASAP. With IBM API Management you can:
- Expose an existing RESTful Web API that developers can pick up, use and understand using a technology that is natural to them, adding the management features of usage entitlement and security to safeguard the backend;
- Expose an existing database or Web Service, defined as a Web API; all the benefits of communicating in the language developers understand but with the reassurance of usage entitlements and security;
- Implement the Web APIs you want to expose, and deliver the documentation and guidelines in one package. Developers can get on and code while you get on with solving the next integration challenge.
So IBM API Management helps businesses, developers and the integration guy deliver Web APIs. What's not to like? Don't forget to check out the IBM Redbooks publication on IBM API Management to learn more!
For IBM API Management related blog posts, see:
For IBM API Management Redbooks publication, see:
Andrew Daniel is a software engineer for IBM API Management, developing the user interface. He has been on the team for three years, after completing a degree in Computer Science with Games Technology, and has become an expert in APIs, web technologies and Java.
Today, more and more information is widely available over the internet for everybody to consume how they see fit, and a lot of this information is accessed via web APIs. We can find the current weather in any city, browse hundreds of photographs, and even check the schedule and status of nearly every flight happening at any moment. However, information retrieved from APIs isn't only generic; it can also be personal. Personal information such as our email address, postal address, date of birth, and even our bank information can be available through API calls. With thieves constantly attempting to access our personal information in an attempt to steal our identities, security is an important factor to consider when designing and creating an API, especially when sensitive data is concerned.
It is therefore crucial that we allow only the correct users access to the information they are authorized to consume, using secure methods, so this should be key in the design of an API and its resources.
Not every API needs to be secured. Some may not contain sensitive data, but a subset of your APIs is likely to need security.
Security can be implemented at the various stages of an API call from client to server:
- The user using the application to access the information can be authenticated and authorized.
- The application can authenticate itself.
- The communication between the server and the client can be encrypted.
Each of these three stages of security can be handled in IBM API Management in the following ways.
Authenticating the user
A typical scenario in IBM API Management is exposing personal data for a user. The user uses a third-party application to consume, process and display the data in a meaningful way. But how does the server return only the information that is relevant to that user?
IBM API Management offers two ways to authenticate a user:
- HTTP basic authentication: For each call to the resource server (IBM API Management), the application must pass the user's username and password in the Authorization header. This is a little risky: even though the transmission is encrypted with SSL or TLS, that information still passes between servers each and every time an API is consumed.
- OAuth: Implementations of OAuth can be seen all around the web, especially on social media websites. OAuth is a way of allowing an application to access a user's information on a resource server without seeing or knowing the user's credentials. When users wish to access their personal information in the application, they are redirected to the resource server, where they log in and then authorize the application to interact with their data. They are then redirected back to the application, which retrieves a session token to use in subsequent calls. This method is more convoluted, but more secure, because user credentials are passed only directly to IBM API Management, and only once.
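Here is a rough Python sketch of the difference between the two approaches; the endpoint and token value are hypothetical.

```python
import requests

API = "https://api.example.com/bank/v1/accounts"  # hypothetical resource URL

# 1. HTTP basic authentication: the user's credentials travel in the
#    Authorization header on every single call.
requests.get(API, auth=("alice", "her-password"))

# 2. OAuth: the user logs in once at the resource server and the app is
#    handed a token; only that token travels on subsequent calls.
token = "<access-token-obtained-via-oauth-redirect>"
requests.get(API, headers={"Authorization": f"Bearer {token}"})
```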
Authenticating the application
APIs in IBM API Management can only be called by applications that provide a valid App ID and, in some circumstances, an App Secret too. These are generated when you create an application in the developer portal, so be sure to take note of them and keep them safe for future use. To provide the App ID with an API call, append the URL with a query string:
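The exact query string is not reproduced here, so the parameter names in this sketch (app_id, app_secret) are assumptions for illustration only; check the developer portal for the exact form your environment expects.

```python
import requests

# Hypothetical parameter names -- consult the developer portal for the
# exact query string required by your IBM API Management environment.
params = {"app_id": "<your-app-id>", "app_secret": "<your-app-secret>"}
requests.get("https://api.example.com/bank/v1/accounts", params=params)
```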
Encrypting the transmission
Every call to an API provided by IBM API Management is secured with HTTPS, which encrypts all packets of data using SSL or TLS, much like you'd expect when using social media websites. Encrypting the transmission means that without a special key, the data cannot be understood by any attacker who manages to capture the data.
Encrypting the data is only one part of the security. The application must be confident that the server it is communicating with is the intended server. This is done with certificates. A website certificate serves multiple purposes; most importantly, it contains the key for encrypting and decrypting the transmission, but it also contains an identity that can be confirmed as trusted by pre-established trusted sources.
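As a small illustration of both points, assuming a hypothetical endpoint: Python's requests library encrypts HTTPS traffic and verifies the server certificate against trusted authorities by default.

```python
import requests

# HTTPS: traffic is encrypted, and the server certificate is verified
# against the system's trusted certificate authorities by default.
resp = requests.get("https://api.example.com/bank/v1/accounts")

# Verification can also be pointed at a specific CA bundle (e.g. a private CA):
resp = requests.get("https://api.example.com/bank/v1/accounts",
                    verify="/path/to/ca-bundle.pem")

# Passing verify=False would keep the encryption but skip the identity check,
# allowing an attacker to impersonate the server -- avoid it in production.
```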
Enabling all of the measures listed will give your APIs the full security offered by IBM API Management. For details about how to apply each aspect of security to an API and its resources, read Chapter 5, "API Security", in the IBM Redbooks publication Exposing and Managing Services with IBM API Management.
For IBM API Management related blog posts, see:
For IBM API Management Redbooks publication, see:
Brian Benoit is the Technical Consultant for Pyramid Solutions' ECM practice. He is responsible for pre-sales support, Pyramid product offerings, and support for Pyramid's relationship with IBM's ECM development groups. Brian joined Pyramid Solutions in 2003 and has over 20 years of experience in the IT industry. Brian is an IBM Certified Specialist and Solution Design Technical Professional in Content Manager, Business Process Manager and Case Manager. He is recognized for developing Case Manager solutions in the financial, insurance and government industries.
After "What is Case Manager?", one of the most common questions I hear is where Case Manager fits within the organization. Many organizations don't realize the breadth of solutions that Case Manager can address. Several areas can be examined to see how good a fit Case Manager is for a specific business process. Answers of "yes" to any or all of the questions below are strong indicators that Case Manager is the right tool for the process in question.
I recommend starting with a walkthrough of the business process with the users. This can either be a verbal/whiteboard walkthrough or actually physically following the process. While doing this, consider the questions outlined below.
Do the users use the term routing?
One of the most common words used when reviewing a process with a user is the word route. You will often hear descriptions that include phrases like "I route the work to…". The term routing implies sending a piece of work to another person, which means that the person sending the work no longer has it and that the work can only be processed by one person at a time.
With Case Manager, work no longer needs to be thought of as a linear process; users are free to describe the work the way they would like it to be processed rather than how they "have" to process it today. Often the word routing comes into play because of technology limitations rather than the business process itself.
Are there times when more than one person could be working in a process?
One of the frustrations often expressed by users is the inability for more than one person to work on things at the same time. Many users have become so institutionalized with the process as it is today that they forget places where work could be done in parallel. Talking to users about who could do which work, and when, if they had no technical limitations will quickly begin to identify how Case Manager and the task approach could benefit the process.
Are several of today's workflows/processes related and part of a bigger process?
Many times, processes or workflows are already in place within an environment. Very often, there is a relationship between the different processes that people work on. These relationships are maintained either through manual activities or through custom applications. Most importantly, is there a larger business process that all of these smaller processes support?
Can the overall process be described as a business "object"?
Answering this question can be difficult because business owners often have difficulty understanding what is meant by a business object. The ideal approach is to provide examples of business objects from the same or another industry. A business object translates into the case, and it is often a large business process.
Examples of business objects as cases, along with industry applications, are outlined in Chapter 2 of the IBM Case Manager 5.2 Redbooks publication.
For IBM Case Manager V5.2 related blog posts, see:
For IBM Case Manager V5.2 Redbooks publication, see:
This blog post is written by Jackie Zhu and Ed Stonesifer.
IBM Content Manager OnDemand (CMOD) provides an enterprise-wide report management solution for computer-generated reports and many other types of content. The most recent release of CMOD added key features to help companies meet compliance requirements:
- Supports individual document holds through the new Enhanced Retention Management feature.
- Supports integration with IBM Enterprise Records to provide full records management functions.
Enhanced Retention Management
The new Enhanced Retention Management feature is one of the most valuable features created for CMOD. It provides an immediate way to find and hold documents, preventing the normal document deletion process from removing them. It enables you to lock down individual documents within a report that is managed by time-based retention.
This is critical because any company might be forced to go through a legal inquiry. When that happens, all documents related to the inquiry must be retained and not deleted during the inquiry process.
With the new Enhanced Retention Management feature, CMOD provides document retention management in one or more of the following ways:
- Time-based retention at the application group level.
- Efficient document deletions for different media types.
- Native support for putting individual documents on hold.
- Integration with IBM Enterprise Records.
Applying holds on documents
There are many ways to hold documents captured and managed by CMOD. One way is to first search for the documents and select them; then, click the Action drop-down box and select Apply Hold to hold the documents. The user interface for CMOD in this example is powered by IBM Content Navigator. When putting documents on hold, you also specify the hold reason.
Things to know about CMOD holds
- You can put documents on hold through a CMOD Windows or web client.
- Documents can be held based on different reasons, for example, a legal investigation.
- Holds prevent documents from being expired or deleted, but they do not change or manage the document expiration policy.
- You can put a large number of documents on hold at once.
- A document can be put on multiple holds for multiple reasons. Only when all the holds on a document are removed can the document go through the normal expiration process (see the sketch after this list).
- Implied hold enables management of document retention by an external system.
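The interaction between holds and time-based retention can be summarized in a small conceptual model; this is an illustration of the rules above, not the CMOD API.

```python
from datetime import datetime

def can_expire(retention_end: datetime, holds: set, now: datetime) -> bool:
    # A document goes through normal expiration only when its retention
    # period has passed AND every hold on it has been removed.
    return now >= retention_end and not holds

holds = {"legal-investigation-001", "audit-2016-Q1"}
now = datetime(2016, 3, 4)
holds.discard("audit-2016-Q1")                 # one hold removed, one remains
print(can_expire(datetime(2016, 1, 1), holds, now))  # False: still on hold
holds.discard("legal-investigation-001")       # last hold removed
print(can_expire(datetime(2016, 1, 1), holds, now))  # True: may now expire
```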
For Content Manager OnDemand related blog posts, see:
For more information on Content Manager OnDemand, see IBM Redbooks publications:
Edward E Stonesifer is an IBM Executive Technical Sales Specialist with the Mid-Atlantic ECM Business Unit in the US. He works with IBM Content Manager OnDemand, specializing across all IBM eServer platforms.
Jackie Zhu is an IBM Redbooks Project Leader. She works with technical experts around the world to create books, guides, blogs, and videos.
Three steps to trusted data and increased business value
Have you ever been in a company meeting where multiple people had their own report on a specific topic that led each of them to a completely different decision? I think that has happened, and still happens, a lot. But (there always seems to be one of those) how can that happen when they are all using data from the same company? There are actually many reasons why this can happen, most of which have led companies on a mission to develop what is called "a single version of the truth" about the data in their company. They must be able to TRUST their data if they are to make truly informed decisions.
Have your eyes already skipped down to FINDING YOUR SOLUTION? OK, you can jump there if you want, but then come on back here and get the rest of the story!
To help them with their mission, along came Master Data Management (MDM): STEP 1. Well, partially. They quickly found that to maintain their master data they needed a strong program for data governance and stewardship. With that in place, they can take a huge first step towards better informed decisions.
Assume they now have trusted data, so they must have increased business value - correct? Well, it sure helps. But (here it is again) they also really need to focus on those critical business processes that are required for efficiently running and managing the business. And so, enter Business Process Management (BPM): STEP 2. With these two initiatives, the business will surely run more efficiently. Well, yes!
But (one more time) if those two initiatives are running independently, there is still more efficiency to be had, which can lead to even greater rewards.
So here we are at STEP 3, the place where they tie a bow around what they have done. And that bow is called integration! Hopefully we can all agree that the synergy of the two initiatives working together can bring more value than the sum of the two when working independently. Here that synergy involves such things as policy monitoring, management and enforcement that can take data quality and business process efficiency to the next level, along with bringing many additional benefits.
Want to learn more about how to start getting those benefits and that added value? Here's how to FIND YOUR SOLUTION.
There is a new IBM Redbooks publication available called "Aligning MDM and BPM for master data governance, stewardship, and enterprise processes", SG24-8059. In this publication we describe an approach to an integrated solution of MDM and BPM, and it can be downloaded for free from the ITSO Redbooks web site. Click on this link and you are on your way.
Would you like to review some comments on that book first? Click on the following link, and read a blog by one of the authors, Dr. Lawrence Dubov.
Good reading and good luck on your way to more trusted data and higher business value.
I wanted to share some interesting Webcast poll results with you on software audits and readiness. On Monday March 18 we conducted a Webcast on Software Asset Management with Frost & Sullivan and Deloitte.
The focus was on the benefits of a comprehensive asset management strategy.
Two of the survey questions dealt with audits and the maturity level of respondents' SAM programs.
It was interesting to see how many of the respondents were audited versus their level of maturity.
Here are the questions and responses:
Has your company been audited by one or more software companies for compliance with software licensing agreements?
- Yes, once in the last two years. 31.1%
- Yes, two or more times in the last two years. 28.9%
- No, but we have been alerted that an audit is forthcoming. 10%
- No, we have not been audited or alerted of an impending audit. 30%
So it seems 70% of the respondents have been or soon will be audited and 29% have been audited two or more times in the last 2 years. This would account for the 170+ attendees on the call. So one would assume these same people would be actively managing their software assets and be prepared for the next audit. Then we asked about their level of maturity.
What is the maturity level of your SAM program?
- Initial (Ad Hoc) - Little control over what IT and software assets are being used and where 30.7%
- Defined (Tracking Assets) - SAM processes exist as well as tools and installation data 38.6%
- Managed (Active Management) - Vision, policies, procedures, and tools are used to manage software asset life cycles 26.1%
- Optimized (Optimized Management) - Near real-time alignment with changing business needs. SAM is a strategic asset to overall business objectives. 4.5%
It's interesting that even with the high percentage of companies being audited, 31% still had no real focus on SAM. It would appear these companies just deal with the audits as best they can with spreadsheets and manually collecting data when needed. Then there is the group in the middle, the 39% that are at least tracking their software assets so they know what they have. That leaves the remaining 30% that are actively managing or optimizing their software assets to gain efficiency.
In many companies the savings derived from SAM can actually make the investment self-funding. As Deloitte discusses in the Webcast, there can be hundreds of thousands of dollars and more in savings or cost avoidance by implementing a SAM program. The bottom line is that you need to be prepared for software audits, but don't stop at audit readiness. By taking the next step to active and optimized software asset management, you can optimize your software licenses and in many cases drive substantial cost savings. Listen to a replay of the Webcast for more information.
Here's a straightforward proposition: Software is more and more critical to the success of business strategies. So it's getting more and more critical to develop that software properly in the first place. Sounds simple enough, right? Just hire good engineers who don't write spaghetti code and who play well with others. Problem solved.
Well, okay, that actually works pretty well for a software startup. At a tiny, new-to-the-world organization, you've got a brand new kitchen to cook in and a very small number of cooks. Project management almost takes care of itself -- the two-topping pizzas zip out of the oven on time and under budget. They taste pretty good, too.
At the enterprise level, however, software engineering can easily go a bit wonky. Ponder if you will the following variables:
- The total size of a codebase (FYI: measured in billions of lines of code)
- The number of functional units to optimize and test
- The number of programmers on a project
- The extent to which applications and services rely on each other to work
- The number of years (or decades) in which a particular codebase has gradually and imperfectly evolved
Scale these variables up far enough and you may find you've gone from a simple pizza, perfectly executed, to something else: a monstrous, 50-course, semi-French cataclysm of a meal that nobody ordered, that smells funky and that, if put in front of diners, will be hurled violently back into the kitchen and cost the restaurant its cherished good name.
Well, I can see I've worked my cooking analogy far past its reasonable life expectancy. However, having made my point, I can get to the heart of the matter, which is this:
For the largest organizations and software engineering projects, today's integrated development environments (IDE) are much more than just tools. The IDE is the individual practitioner's working environment, seamlessly integrated with team-wide capabilities. IDEs are collaborative partners -- mentors, even -- that help guide development teams, projects, applications, services and codebases down the road to successful application lifecycle management and enterprise modernization.
Given a robust, thoughtfully designed IDE, the best practices almost implement themselves
What with Rational Developer for System z version 8.5 (http://www-01.ibm.com/software/rational/products/developer/systemz/) hitting the streets this week, now seemed like a good time to discuss these and related issues with an expert.
That expert was Richard S. Szulewski, IBM Product Manager for that very offering. Szulewski put matters on an etymological footing that wouldn't have occurred to me.
"Just look at the term IDE," he said. "IDE: Integrated (that is, you have seamless access to all the facilities you need to do your job), Development (development is far more than just changing the code), Environment (a place from which to not just do your job, but do it effectively and efficiently). That is a lot more than just a pretty editor. That is what Rational Developer for System z offers."
And in Version 8.5, it offers a more complete and well-rounded rendition of that concept than ever before. The new solution has been designed specifically to help organizations not just get more value from the mainframe, and from their developers, but also get it at a higher level of abstraction -- from development projects themselves.
Consider, for instance, how it addresses the common concern of scalability -- not of the software being developed, but of the project of developing that software. To optimize large-scale project management, as everyone knows, best practices are required, but not everyone actually implements them. A really mature, thoughtfully developed IDE should make that implementation a lot easier.
Szulewski agrees. "Rational Developer for System z V8.5 includes enhancements that ease potential large-team effects as the number of people on development teams using it goes up. The idea is that any given user can access the host as if he or she were the only one using it."
For instance, consider the way the solution now automatically keeps programmer workstations up to date. Admins can simply upload new configuration files to the System z; once a programmer logs in, if the new file is needed, it'll be downloaded immediately.
That means more cross-team consistency with less effort -- a best practice by anybody's definition. It also means each programmer can spend more time on coding challenges and less on environment maintenance, which in turn leads to more productivity.
Another example of scalability, this one addressing codebase size: programmers can now more easily search for, zero in on and open the specific code modules they want.
In much the same way a Google search provides a preview of the text at a given link, so that you can decide whether to click it, the new Rational Developer for System z generates a code preview. Just mouse over a module, and you can see the first few lines of its code -- it's as simple as that.
Write, visualize and test code quickly, easily... and in a way that isn't at all like French cuisine
Enhanced productivity, especially via editor refinements, is another major design strength of Rational Developer for System z V8.5. In the world of software development, editors are holy ground -- such deep investments, in fact, that they compare with religion and politics as reliable argument starters.
Well, the new Rational offering actually includes three different editors, for LPEX, COBOL and PL/I. And strengths that had been limited to the COBOL editor in the past have now been stirred into the LPEX and PL/I editors, bringing them up to par.
While they differ in specific features, what the new editors have in common is the strategic goal of helping developers visually and intuitively understand and navigate the flow of code much more easily. By increasing the time developers stay in editing context, instead of having to wander elsewhere to do various tasks, the new editors also increase the developer's focus on the job at hand.
And the way the three editors have been brought into rough equivalence turns out to be an instance of a larger theme in the new release. "Rational Developer for System z V8.5," said Szulewski, "includes a conscious effort to get to better language equity in terms of the PL/I and COBOL languages."
New integrations are another strength. Since organizations often already have fairly well-developed, specific solutions and information repositories that address particular areas, such integration is a great way to leverage those resources more easily and fully -- eliminating the need to reinvent the wheel.
Organizations that already have Endevor, for instance -- a mainframe code management tool -- will find that the new Rational offering can directly display Endevor elements or packages in a tidy, sortable, customizable table.
Code coverage, too, has been improved, making it a much more straightforward matter to visualize how complete (or incomplete) software testing has been at any given point. Straight from the coverage report, it's now possible to launch a view of the source code to see colored annotations that reflect specific testing.
Code review rules have also gotten a tweak for the better, thanks to additional COBOL and PL/I rules and templates in Rational Developer V8.5; you can even now create custom rules using an easy, wizard-driven process. It all illustrates just how serious IBM is about helping organizations pursue best practices through the new IDE.
"Creating an objective means for confirming best practice adherence -- that is what the new code review capability is about," said Szulewski. "We've made it easier and faster to define what the 'coding practices' you want should look like, and provided an objective way for the individual developer and whole development teams to compare their work against those practices."
And if unit testing is your particular cup of tea, you'll probably be glad to hear that in Version 8.5, Rational Developer for System z provides an automated unit testing framework, zUnit, which is similar in nature and concept to JUnit for Java and provides similar benefits. Here, too, smart wizards are available to generate COBOL and/or PL/I test cases.
After these test cases are built and run, the execution results can easily be displayed along with traceback information needed to isolate specific issues -- ultimately, helping to bring the software that much closer to a release version that won't remind anybody of French cooking gone horrifyingly wrong.
Additional Information
- Discover the benefits of Enterprise Modernization
- See what IBM offers for Application Lifecycle Management
- Get up to speed on IBM Rational Developer Version 8.5
- Watch videos about the features of Rational Developer for System z
- Try first-hand the new IBM Enterprise Modernization Sandbox, with no install
- Get more education with IBM COBOL and Rational Developer for System z - Distance Learning
- Visit the video library of IBM Enterprise Modernization Solutions for System z
About the author
Guest blogger Wes Simonds worked in IT for seven years before becoming a technology writer on topics including virtualization, cloud computing and service management. He lives in sunny Austin, Texas and believes Mexican food should always be served with queso.
It always surprises me to see tremendous potential go almost completely unrealized and undeveloped. A specific example: Recently I saw a YouTube video featuring a guitar worth somewhere north of a quarter million bucks. Yet the dealer who was trying to sell this guitar had recorded himself playing it with... a standard handheld video camera.
And I thought: "You know, if you want to sell a guitar worth $250,000, maybe you should record it with a microphone that costs more than $0.25."
Plunking down a few dollars more for a good mike would have made a world of difference to this guy's sales prospects.
A similar argument, or so it seems to me, often applies to IT. Platforms are bought with a particular purpose in mind, and used for that purpose, but a relatively small added investment might radically increase their total value.
Take the case of IBM Power Systems. This platform offers an extremely advanced processor architecture, IBM's RISC-based POWER7; advanced operating systems, including AIX (IBM's flavor of UNIX); top-tier virtualization capabilities that allow IT to allocate resources and manage whole workloads fluidly; and a host of other strengths too numerous to list here.
So organizations that have made the investment in Power Systems certainly know what an outstanding IT service delivery platform it is. What may not be as clear to them, and should be, is what an outstanding IT development platform Power Systems can be as well.
By developing software on Power, they can not only make their investment pay dividends to both sides of the development/operations divide -- creating and deploying better software, faster, and yet with lower costs and risks -- but also take major steps toward enterprise modernization.
FYI, RD 8.5 for P7 is IDE: TNG
That argument got a lot stronger this week, because IBM's own integrated development environment (IDE) for this platform -- IBM Rational Developer for Power, Version 8.5 -- has just been released.
To get a sense of IBM's thinking in this area, I had a chat with William T. Smith, Market and Product Line Manager for IBM's Development Solutions for Power Systems Software.
"We saw that many customers were developing their AIX or Linux on Power workloads on some other platform and then porting to AIX, often without optimizing them for Power," said Smith. "And we were concerned to see them spending premium dollars for Power's unmatched price-performance profile and other unique qualities of service, but then failing to fully exploit those. Many of them are still using green screen or textual tools, or spending time cobbling together and maintaining home-grown OSS-based tool stacks, and therefore not realizing the productivity and other benefits of using Rational Developer for Power. So our goal for Version 8.5 was to have Rational Developer for Power start to play a central role in helping customers exploit AIX and Linux on Power to their fullest."
I knew exactly what he meant by "green screen or textual tools" because I recall using such IDEs in my distant youth. And the memory gives me no pleasure. There was not very much troubleshooting and productivity, and quite a lot of vertical scrolling and swearing.
It seems to me that last-millennium development tools like that are bound to act like an anchor hung from the development team's neck -- not really the best choice if the goal is to increase business agility. Which, for most businesses today, is a very familiar goal indeed.
But the new Rational offering goes far beyond graphic visualization, which has been part of the solution since 2010.
"This new release delivers three main new capabilities: a new Performance Advisor, a new, highly scalable code coverage analysis capability and a new Porting Advisor," said Smith. "Together these raise Rational Developer for Power's value proposition in the AIX and Linux on Power space in a profound way. Rational Developer for Power becomes not just an IDE, but an Integrated Development, Porting and Optimization Environment."
Let's suppose you happen to be a company that has already deployed IBM Power Systems. By deploying the new Rational IDE as well, you can...
- Generate applications that are really optimized for the POWER architecture, and run faster, with more stability
- Simplify moving your applications across platforms
- Identify and eliminate software bugs much more rapidly and easily
That strikes me as a winning value prop. And if you happen to be an organization still using green-screen tools, such as Smith describes above, well, your programmers will likely clap you on the back and buy you a beer. Their professional lives will have taken a gigantic step forward into the intuitive, graphic development interfaces of the 21st century -- a very good place to be.
Do you feel the Power?
Let's talk briefly about the optimization capabilities. Among the new features of Rational Developer for Power 8.5, perhaps the most compelling is the new Performance Advisor. This provides key insight needed to leverage Power strengths to the max -- not just in terms of analysis and tuning, but also by performance data management in a larger, more holistic sense.
You can, for instance, directly compare profiles of different builds to identify slowdown, drilling down into the details (like the time spent executing different functions within those builds). You can generate intuitive scorecards that illustrate real-world performance at a glance. You also get recommendations for future changes, each one assigned an estimated probability that the proposed change really will pay off.
How's that for "key insight"? It's no wonder that in its eight-month beta period, this capability was unanimously praised by all participants -- including many non-IBM organizations, of course.
Smith thinks very highly of this particular innovation as well.
"The Performance Advisor really is something new and unique. Unlike other performance tools, it is very much designed for the development generalist, but it is also fueled by deep performance engineering expertise that reflects intimate knowledge of the internals of the Power architecture, the operating systems and the compilers," he said. "And in addition to being driven by expert advice, unlike other tools, it is also workflow-driven and deeply integrated into the IDE so that you can easily and naturally integrate the discipline and tasks of performance tuning into the routine development cycle."
75 percent of the Earth is covered by water -- IBM Rational covers the rest
Another major attraction: the new code coverage analysis for C, C++ and COBOL (on both AIX and Linux).
Now, there are lots of code coverage solutions out there. They all help dev teams establish how thoroughly code has been tested and therefore how bug-free and feature-complete -- in short, production-ready -- it really is.
What the new IBM solution offers is exceptional scalability of code coverage. No matter how large the codebase, the builds or the test coverage goals, Rational Developer for Power 8.5 is up to the job -- all with little to no perceived impact on developer productivity or application execution time. And in the enterprise, where the codebase and coverage requirements often trend very high, that kind of scalability is an absolute must-have.
For organizations that are looking to migrate C, C++ or COBOL software across platforms (read: to AIX/Linux on Power Systems), there's also the new Porting Advisor to ponder.
Using this tool, which leverages both static code analysis and expert system rules, developers can discover what kinds of issues are likely to turn up during the port, including such commonplace examples as big-endian vs. little-endian encoding, 32-bit vs. 64-bit processing requirements and signal-handling. Then, given that reconnaissance, the actual porting process can be orchestrated more easily and quickly -- a high-quality transition that results in a high-quality outcome.
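The endianness issue, for example, is easy to see with Python's struct module: the same integer serialized under the two byte orders produces different bytes, which is exactly the kind of porting hazard described above.

```python
import struct

value = 1
print(struct.pack(">i", value).hex())  # big-endian:    00000001
print(struct.pack("<i", value).hex())  # little-endian: 01000000

# Code that reads raw bytes while assuming one byte order silently breaks
# when moved to a platform (or peer) that uses the other order.
```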
Finally, if you happen to be using IBM's System i platform, the good folks at Rational have got your back there, too.
"It's true," said Smith, "that we did put a great deal of emphasis on AIX and Linux in this release, but that doesn't mean we overlooked our IBM i customers. (And by the way: props to them for seeing the elegance in how IBM i is integrated and optimized to simplify development of business applications.) There are several goodies in this release for them, such as the integration of the Remote Systems Explorer with IBM Data Studio, support for multiple build specifications and a new live outline view for RPG."
Additional Information
- See how Enterprise Modernization helps you get more out of what you've got
- Simplify your application lifecycle management
- Get up to speed on IBM Rational Developer Version 8.5
- Try out IBM products in the Enterprise Modernization Sandbox for Power Systems
- Estimate your savings with Rational Developer for Power Systems Software
- Watch videos that highlight features of IBM Rational Developer on Power
About the author
Guest blogger Wes Simonds worked in IT for seven years before becoming a technology writer on topics including virtualization, cloud computing and service management. He lives in sunny Austin, Texas and believes Mexican food should always be served with queso.
My cousin's wife told me recently that they wanted to buy a house, but weren't sure they could justify such a huge investment in such a doubtful economy.
So I told her this: "Buy a few square feet. Take a few weeks, try them out and see what you think. If you like 'em, buy some more square feet. Then a whole room. Then a whole floor. Eventually, maybe, you'll have your dream house."
Of course, this was just a joke. But most of the time I think it's actually very good advice, because it's very easy to apply and it applies to so many different circumstances.
It certainly applies to physical fitness, where trying to accomplish too much, too soon will just burn you out, or put you in the hospital, instead of making you fitter. It also applies to marriage; getting engaged on the second date is generally not considered a love-life best practice.
Much the same kind of thinking applies rather naturally in IT. It shows up, for instance, in the form of kernel-based operating systems like Linux and all modern versions of Windows. The kernel represents a solid initial foundation that handles core tasks like memory management, to which any number of logical capabilities can be (and are) added to form the complete OS.
And these same sorts of ideas apply on a far larger scale in the context of cloud computing, I think. Because organizations can't know with perfect accuracy in advance how best to develop and utilize cloud for their own particular circumstances, it's probably wise for them not to think of and develop clouds as a monolithic entity -- a thing they have to roll out perfectly and completely on day one -- but rather as a foundation to which they can add new capabilities over time.
If I had to guess, in fact, I would say that it was exactly this reasoning that led IBM to give SmartCloud Foundation that title. It's meant as the initial "cloud kernel" on top of which you can then subsequently add new layers, new capabilities, that match your business requirements, just as Linux developers add Linux services, all of which run on top of the Linux kernel.
Why manage the cloud when the cloud can manage itself?
As it happens, I prefer certainty to doubt. So rather than just keep guessing about IBM's nomenclatural logic, I decided to ask an expert: Marco Sebastiani, Product Manager for IBM Service Delivery Manager and Cloud Solutions.
Sebastiani not only confirmed my interpretation, but ran with it in what I thought was a pretty cool direction.
"You can think of cloud management software almost as a set of nested Russian dolls," he said. "Practically any cloud is going to need to be able to do things like create virtual servers, and track key assets, automatically. That basic functionality corresponds to the innermost Russian doll. We address that with SmartCloud Foundation's entry cloud solution, which does provisioning and image lifecycle management. But then, once you have that set up, you can easily add more capabilities over time: bigger dolls. Every larger doll, in turn, leverages the capabilities of the smaller ones. And the cloud intelligently and automatically orchestrates all of its capabilities based on business policies."
So, to pursue this analogy, what's the next doll up from SmartCloud Foundation?
The answer, it seems, is IBM Service Delivery Manager -- a set of capabilities, delivered as a pre-integrated software stack, that can help organizations leverage clouds to do even more, and create more value, in the areas where they most need it.
"The idea of this solution," said Sebastiani, "is to simplify, accelerate and automate service fulfillment. It minimizes the amount of manual work IT has to put into the cloud by making the cloud much more self-governing and self-optimizing. So suppose you're an employee who wants a new service in the cloud. Instead of having to submit a request to IT to create that service, employees can just ask the cloud itself to do it. And, by orchestrating key tasks in logical ways, that's just what the cloud will then do. In this way, service management becomes much easier to pursue because services running in the cloud basically manage themselves, cradle-to-grave."
This fits Sebastiani's analogy rather well, too. Return to the idea of Russian dolls for a minute, remembering that the innermost cloud doll does provisioning and monitoring of virtual servers.
What IBM Service Delivery Manager does, in turn, is build bigger dolls on top of that, automatically leveraging those functions over time, in ways that fulfill business requirements, while also adding entirely new capabilities that add entirely new value.

End-to-end optimization of the complete service lifecycle
This solution, for instance, includes an intuitive portal interface available via any standard Web browser. This, in essence, is the front-end needed to create new services that will run in the cloud.
Using it, one can basically instruct the cloud: "This is what I'd like to do, this is when I'm going to need to be able to do it, and this is how important it will be to the business."
Then the cloud basically does the rest -- ensuring that new virtual servers are created and eliminated on time, provisioned using the right server images, and that this entire process doesn't unacceptably conflict with or compromise existing services. (If that sounds like "automatic IT governance" to you, you're pretty close to the mark.)
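To make that concrete, here's a minimal sketch of what such a self-service request might look like. The endpoint, field names and values are purely hypothetical illustrations of the pattern -- what, when, and how important -- not Service Delivery Manager's actual interface:

```python
import json

# A hypothetical self-service request. None of these field names come
# from the actual product; they simply illustrate the three pieces of
# information the portal asks for.
service_request = {
    "service": "dev-test-environment",      # what I'd like to do
    "needed_by": "2012-06-01T09:00:00Z",    # when I need it
    "business_priority": "high",            # how important it is
    "resources": {"vcpus": 4, "memory_gb": 8, "storage_gb": 100},
}

# In a real deployment this payload would be submitted to the portal;
# here we just render what would be sent.
print(json.dumps(service_request, indent=2))
```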
To do that, of course, the cloud needs to be able to allocate critical resources fluidly and dynamically -- resources like processing power, memory, storage and even network bandwidth. This capability, too, is provided by Service Delivery Manager. It is continually aware of the available resources, discovers new resources as they are added to the general pool, and doles them out when and where they're needed. Then, when demand falls, the cloud pulls those resources back into the pool, or assigns them directly to another service that needs them at that point.
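As a rough sketch of that allocate-and-reclaim cycle -- a toy model under my own invented names, not the product's implementation -- consider:

```python
# Toy model of a shared resource pool: services borrow capacity when
# demand rises and return it when demand falls. Purely illustrative.
class ResourcePool:
    def __init__(self, cpu, memory_gb):
        self.free = {"cpu": cpu, "memory_gb": memory_gb}

    def add_capacity(self, cpu=0, memory_gb=0):
        # New hardware discovered and added to the general pool.
        self.free["cpu"] += cpu
        self.free["memory_gb"] += memory_gb

    def allocate(self, cpu, memory_gb):
        # Grant resources only if the pool can cover the request.
        if self.free["cpu"] >= cpu and self.free["memory_gb"] >= memory_gb:
            self.free["cpu"] -= cpu
            self.free["memory_gb"] -= memory_gb
            return True
        return False

    def release(self, cpu, memory_gb):
        # Demand fell: pull the resources back for other services.
        self.free["cpu"] += cpu
        self.free["memory_gb"] += memory_gb

pool = ResourcePool(cpu=32, memory_gb=128)
if pool.allocate(cpu=4, memory_gb=16):   # a service scales up...
    pool.release(cpu=4, memory_gb=16)    # ...and later scales back down
```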
Also worth noting is the fact that all of that happens far more quickly and efficiently than it would if it were overseen by human talent. So, because fewer resources are wasted, fewer are needed in the first place -- a major cost-saving opportunity for the organization, which can now get by on less total processing power, memory, storage and bandwidth than it would have thought possible before the cloud.
Real-time monitoring is another major capability. Service Delivery Manager continually tracks the health and performance of both virtual and physical resources -- a critically important function given how incredibly dynamic a cloud can be. So let us imagine that a given node (physical host) fails due to a toasted logic board; Service Delivery Manager will automatically notice and report that issue, leading to a quick and accurate failover of the associated service to a different, healthy node.
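The watch-and-failover behavior might be modeled, very loosely, like this -- a sketch of the general pattern with invented node and service names, not Service Delivery Manager's actual mechanism:

```python
# Toy health-monitoring pass: detect a failed node and move its
# services to the least-loaded healthy node. Illustrative only.
nodes = {
    "node-a": {"healthy": False, "load": 0.2},   # the toasted logic board
    "node-b": {"healthy": True,  "load": 0.5},
    "node-c": {"healthy": True,  "load": 0.3},
}
placements = {"billing-svc": "node-a", "web-svc": "node-b"}

for service, node in placements.items():
    if not nodes[node]["healthy"]:
        # Report the failure, then fail over to a healthy node.
        target = min(
            (n for n, s in nodes.items() if s["healthy"]),
            key=lambda n: nodes[n]["load"],
        )
        print(f"{node} failed; moving {service} to {target}")
        placements[service] = target
```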
Cost-tracking is yet another major strength of this solution. Given the intensely shared and interconnected nature of a cloud, where so much happens automatically, you might expect it to be difficult to figure out the costs created by different cloud services and systems -- and by the business teams and projects that use them. And normally you'd be right.
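To see why, consider the naive arithmetic: if all you have is a total bill and rough usage numbers, the best you can do is split costs proportionally, as in this toy sketch (all team names and numbers are invented):

```python
# Naive usage-proportional chargeback: divide a shared bill by each
# team's share of consumed CPU-hours. Invented numbers, for illustration.
total_monthly_cost = 50_000.00  # dollars for the shared infrastructure
cpu_hours_by_team = {"marketing": 1_200, "finance": 400, "dev-test": 2_400}

total_hours = sum(cpu_hours_by_team.values())
for team, hours in cpu_hours_by_team.items():
    share = hours / total_hours
    print(f"{team}: {share:.1%} of usage -> ${total_monthly_cost * share:,.2f}")
```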
"Service Delivery Manager changes all that," said Sebastiani. "It gives you granular insight into exactly how costs are trending in all those different ways -- in as much or as little detail as you need. So if you're using your cloud in a public model, it can tell you exactly how much to charge your customers for their particular cloud utilization, even though all customers are using the same hardware. Or if you have a strictly private cloud, it will tell you how much you should charge back to different groups. This way, it creates the kind of insight that over time can help, or encourage, those divisions to try to keep their costs down."

Additional Information
- Find out what Cloud and IT Optimization can do for your organization
- Learn more about Cloud Service Delivery & Management
- Discover the benefits of cloud with the cloud simulator game

About the author
Guest blogger Wes Simonds worked in IT for seven years before becoming a technology writer on topics including virtualization, cloud computing and service management. He lives in sunny Austin, Texas and believes Mexican food should always be served with queso.
Likes before 03/04/2016 - 0
Views before 03/04/2016 - 6647
If you're looking for a can't-miss, bound-to-pay-off IT investment, I'm not sure you can do much better than upgrade your database architecture. I've written about this in the recent past in the context of migrating away from Oracle. But if you've already entered the 21st century, and are therefore using DB2, it's also really worth checking out the latest rendition.
That would be DB2 10, which is so fresh to the market it still has that wonderful Golden Master scent -- a scent I still think should be bottled and turned into a cologne.
Recently I caught up with Conor O'Mahony, Program Director of Database Software for IBM, and he clued me in on the many new bells, whistles and deeper-level improvements that have been made to DB2 in this version.
The reported performance increase alone justifies the upgrade in my mind. You might expect that any software environment hitting a double-digit version number would be so mature and optimized by this point that further performance improvements would be negligible.
Not so with DB2 10. Instead of getting slower, it's gotten faster -- a lot faster. In fact, tests run by Intel show that query processing can speed up by as much as 10 times compared with the previous version, DB2 9, running on the same hardware.
"For software this mature, that kind of enhancement is jaw-dropping," said O'Mahony. "Usually such radical optimizations have happened earlier in the development history. But as with so many other things, the devil really is in the details. In DB2 10, we have added breakthroughs for processing certain kinds of queries, for retrieving data, and for accessing indexes. The aggregate outcome is really stunning."
So you can see why O'Mahony uses the adjective "jaw-dropping." The thought of being able to handle 10 times as many transaction queries on the same hardware, just by implementing a software upgrade and taking advantage of new features, is probably enough to leave many IT managers stunned and blinking, like small children on Christmas morning.
And when you consider all the things your organization may rely on DB2 to accomplish across a product or service lifecycle -- from transaction processing to inventory assessment to data warehousing to customer service to marketing analytics -- you can see just what kind of business value the new performance optimizations are really likely to mean from a pragmatic standpoint.
Spend your money here and you'll realize not just exceptional ROI, but ROI delivered in many different areas, almost immediately.

How many databases can you fit on the head of a pin?
And to continue the Christmas analogy, DB2 10 really is the gift that keeps on giving. For instance, consider the substantial improvements IBM has made in the way DB2 compresses data.
This has actually been a historical strength for DB2, and for good reason. The total costs of any database environment are significantly affected by storage costs; the less organizations have to spend on storage resources and their upkeep, the higher the payoff they will get over time.
That's why, in DB2 10, IBM has really gone the extra mile in implementing not just better compression, but a new class of compression. Whereas previous iterations focused on table-wide compression, DB2 10 augments that with page-level compression, thus allowing even more data to be squeezed into a given GB of storage.
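In DB2 10 this feature is called adaptive compression, and it's enabled declaratively when you define (or alter) a table. Here's a hedged sketch of what that might look like, issued from Python via IBM's ibm_db driver; the connection string, table and column names are invented placeholders:

```python
import ibm_db  # IBM's Python driver for DB2

# Connection details are placeholders for illustration.
conn = ibm_db.connect(
    "DATABASE=sample;HOSTNAME=dbhost;PORT=50000;PROTOCOL=TCPIP;"
    "UID=db2user;PWD=secret", "", "")

# COMPRESS YES ADAPTIVE asks DB2 10 to apply both table-level
# (dictionary) and page-level compression to this table.
ibm_db.exec_immediate(conn, """
    CREATE TABLE sales_history (
        order_id   INTEGER NOT NULL,
        region     VARCHAR(32),
        amount     DECIMAL(12, 2)
    ) COMPRESS YES ADAPTIVE
""")
ibm_db.close(conn)
```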
How much improvement are we talking about -- and what kinds of savings can organizations get?
"That's a good question; first of all, I must say that compression rates can vary greatly from environment to environment," said O'Mahony. "However, more than one client has seen 7 times or greater overall space savings, with some tables achieving 10 times space savings."
Think about that for a minute and you get a sense of what it means. Not only can organizations now pack far more data onto their existing storage -- delaying the purchase of more -- but all storage-related tasks are accelerated, including jobs like backup and recovery. This is because the data, being compressed, moves from point A to point B that much faster.
So here, too, it's plain that the new enhancements really translate into not just impressive, but impressively widespread, business value.

Heating up the database infrastructure
Storage is also the focus of a completely different feature: DB2 10's new "multi-temperature data management." The idea here is that in any large database environment, not all data is created equal; some data is hotter (more widely and frequently used) than other data.
Similarly, not all storage tiers are created equal. Some are faster and pricier than others. So, ideally, what you want to be able to do is put your hottest data on your fastest storage tiers.
Solid-state drives (SSDs), for instance, are absolutely perfect for enterprise-class databases because of their stellar read/write times and because, given no moving parts, they are much more reliable than conventional spinning-disk drives (or as I like to call them, failures waiting to happen). But because in life you really do often get what you pay for, SSDs are also a lot pricier per GB.
What DB2 10 does is empower you to get the best possible utilization from your highest-performing, highest-cost storage tiers (like SSD). You can put your hot data on fast storage like SSD, and put your colder data on less expensive storage.
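DB2 10 expresses this through storage groups: you define one group per temperature tier, then move table spaces between them as data cools. A minimal sketch, with invented paths and names (your tiers and layout will differ):

```python
import ibm_db  # IBM's Python driver for DB2

conn = ibm_db.connect(
    "DATABASE=sample;HOSTNAME=dbhost;PORT=50000;PROTOCOL=TCPIP;"
    "UID=db2user;PWD=secret", "", "")

# One storage group per temperature tier; the paths are placeholders.
ibm_db.exec_immediate(conn, "CREATE STOGROUP hot_ssd ON '/db2/ssd1'")
ibm_db.exec_immediate(conn, "CREATE STOGROUP cold_sata ON '/db2/sata1'")

# Current-quarter data lives on SSD...
ibm_db.exec_immediate(
    conn, "ALTER TABLESPACE ts_2012q2 USING STOGROUP hot_ssd")
# ...and as it cools, the same table space can be demoted to cheap disk.
ibm_db.exec_immediate(
    conn, "ALTER TABLESPACE ts_2011q2 USING STOGROUP cold_sata")
ibm_db.close(conn)
```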
Thus, the most important data is not just accessed much more rapidly but, in the case of SSD, better protected, because SSD drives are intrinsically more reliable.

Organizations that don't understand the past are doomed to repeat it
Storage improvements like that revolve around space, but it may interest you to know that DB2 10 is also the master of time. Specifically, it includes a new feature dubbed Time Travel, which, though it does not involve a flux capacitor, nevertheless delivers the goods.
The basic idea of Time Travel is that database queries often have implied time constraints. For instance, an insurance underwriter may need to know what kinds of policy terms were in effect at a certain point in time when a past event occurred, as opposed to the policy terms in effect now.
While many organizations have jerry-rigged their own, hand-coded approaches to that kind of capability, possibly using flux capacitors, DB2 10 provides a formal, vendor-approved-and-supported rendition. This is integrated not just more deeply, but also (let's face it) more effectively -- it is very likely to be faster and more reliable than the homegrown flavor.
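With a business-time (application-period) temporal table, the underwriter's question becomes a one-line query. Here's a hedged sketch of the DB2 10 syntax against an invented policy table; the column names and dates are illustrations, not a schema from the product documentation:

```python
import ibm_db  # IBM's Python driver for DB2

conn = ibm_db.connect(
    "DATABASE=sample;HOSTNAME=dbhost;PORT=50000;PROTOCOL=TCPIP;"
    "UID=db2user;PWD=secret", "", "")

# A BUSINESS_TIME period records when each policy version was in effect.
ibm_db.exec_immediate(conn, """
    CREATE TABLE policy (
        policy_id  INTEGER NOT NULL,
        coverage   DECIMAL(12, 2),
        bus_start  DATE NOT NULL,
        bus_end    DATE NOT NULL,
        PERIOD BUSINESS_TIME (bus_start, bus_end)
    )
""")

# Time Travel: what terms were in effect when the past event occurred?
stmt = ibm_db.exec_immediate(conn, """
    SELECT coverage FROM policy
    FOR BUSINESS_TIME AS OF '2010-06-15'
    WHERE policy_id = 1001
""")
row = ibm_db.fetch_assoc(stmt)  # a dict per row, or False when exhausted
ibm_db.close(conn)
```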
The fact that it's included in DB2 10 right out of the box also means organizations no longer have to support their own code to provide that functionality, and can instead just build on the IBM version. So new database apps and services are rolled out faster, for lower development cost -- both major wins for any organization looking to get a competitive edge (which is to say, all of them).

Additional Information
- Learn more about data management from IBM Software
- Visit Conor's Database Diary blog
- Join the On Demand Virtual Conference on DB2 10 now
- Read about Hyper-Speed and Time Travel with DB2
- Read how to run Oracle applications on DB2 10 for Linux/UNIX/Windows
- Get some user perspectives on DB2

About the author
Guest blogger Wes Simonds worked in IT for seven years before becoming a technology writer on topics including virtualization, cloud computing and service management. He lives in sunny Austin, Texas and believes Mexican food should always be served with queso.
1. Based on tests of IBM DB2 9.7 FP3 vs. DB2 10.1 with comparable specifications using data warehouse / decision support workloads, as of 4/3/2012.
Likes before 03/04/2016 - 0
Views before 03/04/2016 - 10433
Tell me if this sounds familiar: You're pondering whether to do something potentially risky -- perhaps quit a job, switch to a completely different career path or even start a business. You have many motives to do so, yet the road ahead seems very unclear, and you're uncomfortable with that. And someone else says, "Oh, go for it. Everything in life is risky. You could get hit by a bus any day... but that doesn't stop you from leaving the house."
Well, that's true, of course, but as an argument it has a really basic problem: it's number-free.
Not all risks, in other words, are the same. The risk of getting hit by a bus is different from, and much smaller than, the risk of starting a business, watching as it slowly fails and getting into deep debt.
Making such a decision reasonably competently means finding a way to clarify, quantify and prioritize the kinds of risks you're facing in a given strategy -- and weighing them against the benefit you're trying to create.
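In its simplest form, that weighing is expected-value arithmetic: each risk's probability times its impact, summed and compared against the expected benefit. A toy sketch with invented numbers:

```python
# Expected-loss arithmetic for a hypothetical decision.
# Each risk: (probability of occurring, cost if it does).
risks = {
    "hit by a bus":          (0.00001, 1_000_000),
    "business slowly fails": (0.30,      250_000),
}
expected_benefit = 120_000  # hoped-for upside, also invented

expected_loss = sum(p * cost for p, cost in risks.values())
print(f"Expected loss:    ${expected_loss:,.2f}")
print(f"Expected benefit: ${expected_benefit:,.2f}")
print("Worth it?", expected_benefit > expected_loss)
```

Note how the numbers make the intuition precise: the bus contributes a trivial $10 of expected loss, while the business-failure risk dominates at $75,000 -- exactly the asymmetry the "you could get hit by a bus" argument glosses over.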
This, in essence, is a problem confronted every day by businesses making complex decisions. They'd like to create improvements or pursue new goals in a given area. But in a perfect world, they'd also like to avoid getting hit by a bus.
By no coincidence, this is also a major focus of IBM's considerable interest in advanced business analytics -- recently described by Mike Rhodin, Vice President of IBM Solutions Group, as "the silver thread woven throughout our portfolio." Risk assessment and mitigation are central to business strategies -- almost all strategies, in almost all industries. And advanced analytics can deliver some of the best available insight to accomplish that.

Get a moment of clarity -- actually, get lots of them
Toward getting a little more clarity about this area, I talked to John Kelly, Worldwide Market Segment Manager for IBM's Business Analytics group, about IBM's perspective in this area... and how that perspective is going to be explored at the forthcoming Vision 2012 conference, to be held May 14-17 at the JW Marriott Grande Lakes in Orlando.
Like me, Kelly sees analytics as a powerful visualization tool -- a way to understand different possible futures, and steer your organization into a future that offers more benefit and lower risk.
"Customers are looking to improve decision making and business performance through increased insight and business intelligence," he said. "That's exactly why IBM has recently labeled analytics as one of our four major strategic directions -- we know how much potential this area really has. And we'd like our clients to realize as much of that potential as possible."
Risk assessment and mitigation, of course, have a long history in some areas (like finance) and are less well understood and established in others (like technology startups), but the root appeal remains the same in every case. If you want the best possible outcome, you need to identify the most likely, and most potentially devastating, pitfalls.
Analytics tools can work almost like a car's high-beams, helping you navigate and get where you're trying to go more safely. That's a goal that almost any business leader, in any industry, at any organization of any size, can understand and appreciate.

Regulatory compliance stands out as a growing challenge
And beyond that general value proposition, IBM is making considerable strides in applying analytics effectively in areas that are of particular concern to its clients. One such area: regulatory compliance and policy management.
In the wake of major scandals dating back more than a decade, these regulations have increasingly been created with the stated goal of minimizing various forms of unacceptable risk to the public, to business employees and customers as well as to stockholders. And that, of course, is a laudable goal.
But complying with those regulations can be a headache even for the best-intentioned organizations that are really committed to compliance and dedicating tremendous resources to the job. Even when compliance seems to have been achieved, it hasn't always been. New regulations appear every year; it's not the easiest thing in the world to know which apply in a given case, and under what conditions, and what the best organizational response should be.
IBM, it seems, can help. "Our solutions deliver analysis and reporting, to provide visibility into the state of risk in the enterprise including evidence of compliance or remediation status, trending and point-in-time analysis and ad hoc querying," said Kelly.
Consider what that means in practical terms. Not only can you understand much more clearly, quickly and easily the extent to which your organization is in compliance, but you can also demonstrate that compliance on demand, in whatever level of detail is required. In the event of an audit, such a demonstration will be essential -- and avoiding potentially hefty penalties and fees will be much simpler. What organization wouldn't be interested in solutions like that?
One solution family drawn from IBM's analytics portfolio is particularly strong in the area of compliance and risk: IBM OpenPages. This suite of tools focuses specifically on governance, risk and compliance -- not just identifying and monitoring risk, but also putting in place a programmatic way to communicate and manage risk exposure across the enterprise, reducing unexpected losses, penalties and fines (not to mention reputational damage) while improving decision making.
Its compliance capabilities, for instance, are directly on point. Organizations routinely create (and enforce) policies to drive compliance... but not always in as governed and coherent a fashion as they might. (Banking industry, I'm looking at you when I say that.)

OpenPages Policy and Compliance Management automates the lifecycle of compliance policies from cradle to grave, reducing redundancy and optimizing the policies you keep in a way that spans resources, business groups, projects and workflow. Organizations that have a formal implementation of risk mitigation, but would like to tune or enhance it to better align with their current and future needs (not to mention future regulations), will find this solution particularly compelling.

A better outcome can result from risk-aware decision making
Risk management is increasingly becoming a strategic, executive-sponsored discipline that many organizations view as a source of competitive advantage -- one in which risk and performance are aligned, and governance, risk and compliance is part of "annual strategic planning."
An integrated governance, risk and compliance program also yields a wealth of information that can be leveraged for risk-aware decisions. Through business intelligence and reporting, that information is being used well beyond the risk and compliance office: business managers are leveraging it to make risk-informed decisions about resource and investment allocations in product planning.
Optimize your risk management strategies in many dimensions
Other OpenPages solutions -- which inter-operate with each other, via a shared foundation of data -- are available to deliver similar capabilities in related fields like:
- Operational risk management. This offering can identify, manage, monitor and analyze operational risks of all types, all from a single point of command to spur a particularly agile response. From better, more accurate insight comes a faster and more comprehensive remediation.
- Financial controls management. Regulations like Sarbanes-Oxley in the United States are mirrored by similar regulations in other countries around the world -- and for global organizations, each crossed border represents a new set of financial regulations with which to comply. This solution focuses on reporting, offering a centralized architecture for analysis, documentation and data management.
- IT governance. IT has become central to almost everything organizations do today. As a result, risk assessment for IT assets, services and data is needed to ensure that IT delivers the intended value -- ideally, on time and under budget -- even in the case of complex projects that take years to complete.
- Internal audit management. For large organizations that proactively conduct audits of their own, this solution is a natural fit. Using it, they can automate many of the basic processes involved, as well as connect the results logically to other risk assessment initiatives they have in place.
Anyone interested in getting more information on these and related topics should definitely consider attending the previously mentioned Vision 2012 conference.
This is the premier global conference for finance and risk professionals, and the most high-profile stage for IBM to discuss everything it has to offer in this rapidly evolving, increasingly hot area.
When I asked Kelly to sum up in a nutshell what IBM will be discussing at Vision 2012, he said this:
"IBM Risk Analytics enables the Smarter Analytics approach -- turning risk information into insight, and insight into better business outcomes."
I like the sound of that.

Additional Information
- Learn how Business Analytics improves business performance
- See what Vision 2012 offers for finance and risk management professionals
- Gain relevant business insight through Smarter Analytics
- Smarter Analytics for the financial industry

About the author
Guest blogger Wes Simonds worked in IT for seven years before becoming a technology writer on topics including virtualization, cloud computing and service management. He lives in sunny Austin, Texas and believes Mexican food should always be served with queso.
Likes before 03/04/2016 - 0
Views before 03/04/2016 - 7234
Love and marriage. Spring and allergies. Bad economies and ROI. These concepts often come in pairs, and for good reason: when the first comes along, we need to pay more attention to the second. This is certainly true for the third example I've listed above. For both individuals and organizations, the question "How can I get the best ROI from the investments I've made?" is mighty popular right now. And for both, the answers are often elusive.
If you're an individual, perhaps you decide to talk to a stockbroker -- a guy who charges you money to tell you things you probably already knew and who probably generates no clear value over time. (It's suggestive that stockbrokers, despite claiming extensive specialized insight going back multiple decades, are practically never billionaires.)
For businesses, fortunately, the situation is a lot brighter. Particularly in the case of a portfolio of applications, there are ways to improve ROI that are based on sound and consistent principles. And software solutions built on those principles are now available.
That, in sum, is what Application Portfolio Management (APM) is all about. If you think of applications as investments -- which, for organizations, they certainly are, and on a huge scale -- it's very logical to ask: "What kind of return am I getting from my investments? Which ones are vital to my business? Is there scope for consolidation? Which applications should I consider retiring? How should I optimize my investments to dial up my total ROI and dial down my total risk?"
APM solutions are particularly attractive to organizations at the enterprise level. That's because the largest organizations have giant portfolios of applications, and getting good answers to the above list of questions is therefore much harder. Similarly, in certain industries like banking, where applications have been in use for an exceptionally long period of time, the idea of introducing change to those applications is going to encounter more cultural resistance than usual. Change may be necessary, but it's really going to have to be justified with demonstrable ROI, if it's going to happen.
When you throw in the problematic economy we continue to face, in which ROI has taken on greater significance, it becomes pretty clear that the need for effective APM has never been greater. Yet in many cases, organizations have barely begun to think about portfolio management in this context.
Recently I was very fortunate to be able to talk to a real expert in this area: Per Kroll, Chief Solution Architect for Application Portfolio Management at IBM. Kroll agreed with me that at many organizations, the time for APM -- and for project portfolio management capabilities -- is now, and not just because the economy is bad and ROI is a touchy topic.
"The basic problems have been around for quite some time," he said. "But they are getting worse every year and have now reached a breaking point. Companies can no longer continue with business as usual. They need to assess the value versus cost of all their current applications."
Kroll's slant on the cost-benefit ratio of current applications is particularly intriguing.
The usual approach to portfolio management in the enterprise revolves around projects -- answering the question: "What is the ROI for this business project we're thinking about undertaking, or have just finished?"
Well, that question is sensible. But it has the effect of shifting the focus away from applications. It ignores the fact that a problematic application's influence can be, and often is, multiplied because it spans multiple business projects.
How do APM solutions help? They put the focus right back on the applications. And they provide a clear, logical path organizations can follow to get more value, and lower risk, from every application in the complete portfolio.
IBM solutions including IBM Rational Focal Point, System Architect and Asset Analyzer can be used to pursue the following steps (a toy scoring sketch follows the list):
1. Create an application inventory.
2. Provide initial information about each application.
3. Analyze applications and determine which need more investigation.
4. Make decisions, like consolidate, modernize or move to the cloud.
5. Execute and track those decisions through project proposals and project delivery.
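As a toy illustration of steps 3 and 4, here's how a first-pass triage of an application inventory might look. The applications, scores, weights and thresholds are all invented; real APM tooling like Rational Focal Point is far richer than this:

```python
# First-pass portfolio triage: score each app on business value versus
# cost and risk, then suggest a disposition. All numbers invented.
inventory = [
    {"app": "order-entry", "value": 9, "cost": 4, "risk": 3},
    {"app": "legacy-crm",  "value": 3, "cost": 8, "risk": 7},
    {"app": "hr-portal",   "value": 6, "cost": 3, "risk": 2},
]

def disposition(app):
    # Hypothetical weighting: value, minus half the cost and risk burden.
    score = app["value"] - 0.5 * (app["cost"] + app["risk"])
    if score >= 4:
        return "invest / keep"
    if score >= 0:
        return "modernize or move to cloud"
    return "candidate for consolidation or retirement"

for app in inventory:
    print(f'{app["app"]}: {disposition(app)}')
```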
Enhance both IT development and IT operations, and receive more value from every application you have

IBM thus helps organizations optimize their application portfolios the way a stockbroker is supposed to help an individual optimize an investment portfolio -- in a balanced, objective and data-driven way that takes full advantage of proven best practices.
This approach generates many positive effects. And the more creative the company is in using APM solutions, the more benefits it will realize. Some of them are more obvious and some more subtle, but the possibilities really are endless.
Looking for something big and obvious? Think about the 80/20 rule of IT budgets. This says that typically an organization spends 80 percent of its total budget on IT operations ("keeping the lights on") and only 20 percent on strategic innovation ("doing new stuff to make the business grow").
What IBM APM solutions do -- unlike certain alternatives -- is deliver value to both halves of that ratio. And particularly in operations, that's welcome news in the enterprise.
"Seems to me like companies have got the 80/20 rule wrong," said Kroll. "Many implement an objective and transparent portfolio management process only in development, to determine how best to spend the 20 percent of funds going to new projects. But what about the 80 percent on the operations and maintenance side? That's often decided based on a 'who screams the loudest' approach -- the squeaky wheel gets the grease whether it deserves any or not. Our APM capabilities make decisions like that a great deal more objective."
A subtler, but still powerful, improvement lies in the area of information transparency. APM solutions, once applied, have the effect of pulling key information out of the shadows and into the spotlight, where it can deliver more value through wider utilization (and/or correction or revision, if necessary).
Because it's revealed, that information also becomes more resilient -- surviving the loss of key employees, for instance, who leave the organization.
"Think about what happens when people make decisions about investment levels, modernization targets or which applications to move to the cloud," said Kroll. "Usually the relevant information is distributed in people's heads throughout the company... or hidden in spreadsheets. APM is about revealing that information (including analytical processes), prioritizing it, and making it all easily available, to anybody who needs it, at the time decisions are made."
That reference to cloud brings up yet another point. Cloud and APM turn out to be closely related areas because APM-based insights can significantly improve the odds of a cloud's success.
How? Given an application inventory in which each application's context (risks, costs, complexity, etc.) has been quantified and analyzed, that information can be very useful in deciding which applications are the best candidates for clouds and choosing specific cloud models. This is really important, because picking the right set of applications and the right model can make or break a cloud project. APM insight not only helps ensure the chosen applications will scale well in a cloud, but also addresses other factors -- security, for instance, or business criticality -- that definitely need to be taken into account as well.
Furthermore, these same APM solutions can be used to support and enhance many other kinds of initiatives as well, some of which are hot and rapidly getting hotter.
Kroll agreed. "The interest in APM is growing so rapidly right now partly because people need to make so many new kinds of application-related decisions -- not just cloud, but also in areas like mobility, regulatory compliance and outsourcing," he said. "And once you've built an application inventory that captures value, costs and risks, all these decisions are much easier to make. IT's job is not to say 'no,' but to help the business establish the constraints and trade-offs at hand."
Innovate 2012, to be held June 3-7 in Orlando, Florida, offers more on portfolio management, enterprise modernization, application lifecycle management and more -- with nearly 400 technical sessions and more than 20 tracks -- to give you insight into how software can help your organization cut costs, drive innovation and reduce risk. Be sure to register (http://www.ibm.com/software/rational/innovate/register.html) by March 14 to save US $200.

Additional Information
- Get more details on Application Portfolio Management solutions
- See how to align your business with your strategy
- Find out how IBM can help you with enterprise modernization
- Watch the demo on Smarter Application Portfolio Management with IBM Rational
- Check out what's planned for Innovate 2012
- Register for Innovate 2012 by March 14 and save
- Read a commissioned study conducted by Forrester Consulting, Measuring The Total Economic Impact Of IBM Rational Integrated Solution for Application Portfolio Management

About the author
Guest blogger Wes Simonds worked in IT for seven years before becoming a technology writer on topics including virtualization, cloud computing and service management. He lives in sunny Austin, Texas and believes Mexican food should always be served with queso.
Likes before 03/04/2016 - 0
Views before 03/04/2016 - 8747