One of the greatest challenges of a world becoming more digitized and interconnected is that we are always in a state of evolution. There has never been a point in computing where we have thrown everything out and started over. Over the last few years, nowhere has this conversation been clearer than in cloud computing. Was it something new, the next logical step in the evolution of computing, or had we been doing cloud for years before ever calling it cloud? These questions shape the way we think about and use computer systems over the long term.
The problem with evolving some systems, without evolving all systems, is that systems and applications are often designed to make sense within the context of the world in which they were originally conceived and built. They were not designed with an understanding of how the world would change over time, or how they might be asked to do something they weren't originally asked to do. Obvious issues like compatibility come up, but perhaps the greatest challenge is security. Look at the electrical grid, for example: we see huge benefit in making the grid "smarter," yet the grid's original design hasn't fundamentally changed since it was conceived, and when it was conceived, something like Stuxnet had never been appreciated as a potential risk.
The threat landscape has been constantly intensifying over the years, with IBM's own X-Force Research and Development team referring to 2011 as "the year of the security breach." Perhaps the greatest opportunity that we have had in security in years comes from the fact that the BYOD trend was introduced almost simultaneously with unprecedented breach activity. Executives at every level care more about security than they ever have before. As a result, organizations all over the world are looking very closely at best practices in mobile device security as they confront the obvious security risk of, "I, as a company, do not own this device, but I am going to allow it to access my network anyway."
While smartphones represent the next logical evolution of making computing smaller and more powerful, the platform is still very new, and it is being developed and built today in a world where we understand security risk. For that reason, we have an opportunity with mobile devices to build them more securely, from the very beginning, in a way that we never had with traditional computers, or even the internet in general.
New mobile applications are being created all the time, but their numbers are still nowhere near those of web applications. We are still in the infancy of mobile application development, and with that infancy comes opportunity. For the last several years, 40-50 percent of all publicly disclosed security vulnerabilities have been in web applications. As a result, the total number of vulnerable applications on the internet just keeps compounding. What's really interesting is that the rate at which new vulnerabilities are introduced has improved over time, because people who care about security have made efforts to be more diligent about eliminating vulnerabilities before deployment. Now, with mobile applications, there is no historical backlog of applications that require modernization. This is the beginning.
In security, perhaps the worst practice is addressing risk only once attackers begin actively exploiting it: fixing things not just after we know they are broken, but after we have already lost. We can turn the tide with mobile applications because we fortunately have years and years of best practices built up in secure engineering, which enable us to design this new wave of applications more securely from the very beginning.
This is an opportunity that does not come around very often for all users of technology, and that is the opportunity to take a new platform, and really design it, and the applications on it, from the very beginning with security as a core consideration. With the right approach and diligence, we can actually make mobile devices a more secure computing platform than anything else we currently use today.
Guest post from Dr. Michael Zerbs, Vice President, IBM Risk Analytics
This month we at Algorithmics, an IBM Company, released the June issue of TH!NK, our semi-annual magazine exploring the world of financial risk management, with an editorial focus on the space between inspiration and implementation.
TH!NK is an award-winning publication written by and for risk professionals, featuring engaging, original content on thought leadership designed to inspire conversations about the challenges of today and the possibilities of tomorrow.
Like most editorial writing on financial services in recent years, the issue is markedly inspired by the ramifications of turbulence and change, and the possibilities that arise in their wake.
Recent elections in France and Greece have added a new chapter to the ongoing sovereign debt crisis in Europe. Yet following both elections, Chancellor Angela Merkel of Germany clearly stated that neither she nor her government were interested in reopening the eurozone fiscal pact, or the strategy of deficit-cutting austerity measures.
Determining the best response in times of uncertainty has been an issue for financial service firms since the outset of the financial crisis. Regulators, governments and analysts have called for financial firms to change the way they do business.
One way that firms may be able to respond is by looking to how they have managed uncertainty in the past. In "Back to the Future," the June issue's cover story revisits capital and its role in the bank of tomorrow. When early banks operated as partnerships with personal liability attached, every decision regarding capitalization and risk profiles was owned by the decision makers. The impact of this framework on their business holds interesting implications.
"Back to the Future" is not only the insightful cover theme of this month's TH!NK magazine, inspired by turbulence in financial services. It is also emblematic of the possibilities beginning to come to fruition in the wake of change for Algorithmics, as we continue our transformation as an IBM Company.
Together with IBM OpenPages, Algorithmics has formed IBM Risk Analytics, a segment of IBM Business Analytics dedicated to helping firms transform their business models and optimize outcomes through risk-aware decision making. I could not be more excited about this new chapter, in which the best teams in operational and financial risk management have come together with IBM to build on their respective successes and create an even stronger integrated offering for our clients, not only in financial services but across industry sectors shaped by an ever-shifting ecosystem.
IBM Risk Analytics is committed to thought leadership, but more importantly, thought leadership that matters to the industries in which we operate. The June issue of TH!NK incorporates our past, in longstanding areas of expertise such as CVA, counterparty credit risk, optimization, and curve fitting; and our future, with content inspired by IBM Smarter Analytics, IBM Banking Industry Solutions, and Social Business.
This issue is representative of the transformative changes through which you will see, over the coming months, the blossoming of IBM Risk Analytics as a true force in the risk community. In our early days we have already seen outstanding feedback on the power of our integration, especially our close and growing relationship with IBM's Global Business Services organization, from our clients in forums such as ARC 2012 and Vision 2012.
What is the appropriate response in times of uncertainty and conflicting views on future direction? TH!NK posits that it is evolution, but a particular kind of evolution that does not overlook historical wisdom. I expect that by our next issue in November 2012, TH!NK will demonstrate even further the convergence of the best of Algorithmics, OpenPages and IBM.
Last Sunday was Father's Day. This is a paradoxical "holiday" in the U.S., as it is a day to honor fathers with gifts and food, but they are still required to work in the yard, fix stuff, yell at kids and run errands.
I received thoughtful, useful and handmade gifts from my three wonderful kids. They included a converter that lets me play my iPhone through the cassette tape deck in my car (needless to say, I'm not driving a 2012 model); a homemade comic strip card about mutant aliens; and a personalized gum wallet made of duct tape (see picture below).
The real challenge was what to get my father for Father's Day. In fact, I face this conundrum on every gift-giving occasion with my father.
As those of you with fathers can attest, the typical dad has everything he will ever need in his entire life by the age of 31, plus or minus two years. And I mean everything: tools, gadgets, sweaters and golf paraphernalia.
This personal challenge is what prompted me to use the recently released IBM Analytical Decision Management to provide a recommended action related to my gift selection. My strategic objective was to have my father accept and enjoy my gift.
Because we have been talking a lot about Customer Analytics, Next Best Action and IBM Signature Solutions at this year's IBM Business Analytics Analyst Summit (search #ibmbas12 on Twitter to follow the commentary), you can understand why I could easily configure my IBM Analytical Decision Management solution. (Hint: Replace "father" with "customer" and "gift" with "offer.")
The following were the steps to my recommended decision:
- Using years of historical fatherly gift-giving data (e.g., ties, golf shirts, jive coupons with the promise of a "car wash"), I restricted the analysis of my data so that the recommended action(s) would be based only on gifts given in the summer months (e.g., nothing with long sleeves).
- I also opted to exclude "no action" from the recommended action list, which is often a viable decision for retention offers but not for gift giving to my father, especially if I hope to stay in the Peckman will. Just kidding. Sorta.
- I defined the list of potential recommended outcomes linked to my objective: give a product, a service, or a combination of the two. Then I built new business rules and predictive models that had not been included the last time I used IBM Analytical Decision Management. For example, new rules:
If (golf_hndcp[current] > golf_hndcp[lastyear]) & (golf_complaints > 3) then add risk points;
If balance_giftcard > 0 then add risk points;
If (favorite_child[current_month] = me) then subtract risk points;
… and so on.
- Similarly, I created new predictive models:
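Taken together, rules like these amount to a simple additive risk score. Here is a minimal Python sketch of that idea; the field names and point values are invented for illustration and are not actual IBM Analytical Decision Management syntax:

```python
# A toy additive risk score built from the rules above.
# Field names and point values are made up for this sketch.

def risk_points(profile):
    points = 0
    # Rule 1: a worsening golf handicap plus frequent complaints adds risk
    if (profile["golf_hndcp_current"] > profile["golf_hndcp_lastyear"]
            and profile["golf_complaints"] > 3):
        points += 10
    # Rule 2: an unspent gift card from last year adds risk
    if profile["balance_giftcard"] > 0:
        points += 5
    # Rule 3: being the current favorite child subtracts risk
    if profile["favorite_child_current_month"] == "me":
        points -= 3
    return points

dad = {
    "golf_hndcp_current": 18,
    "golf_hndcp_lastyear": 15,
    "golf_complaints": 4,
    "balance_giftcard": 25.00,
    "favorite_child_current_month": "me",
}
print(risk_points(dad))  # 10 + 5 - 3 = 12
```

A real decision management tool evaluates rules like these declaratively; the sketch just shows how each rule nudges a single score up or down.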
Before deploying the gift-giving decision management solution for use in the field by end users (like me, my wife and my children), I ran all the proper "what if" scenarios and used the new constraint-based optimization functionality in an attempt to maximize enjoyment and minimize the effort to carry/use, subject to cost constraints. (To see the other new features in IBM Analytical Decision Management, read the data sheet.)
For example, a new Audi has a predicted acceptance of 100 percent (1.00) but falls outside the cost limits for the gift; and $5.00 tickets to Ballet in the Park (performed by an up-and-coming troupe of back-ups to the back-up dancers) fall within the cost constraints, but have a predicted acceptance of less than 2 percent (0.01667).
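The constraint-based optimization step can be illustrated with a toy version in Python: pick the gift with the highest predicted acceptance whose cost fits the budget. The gift list, acceptance probabilities and budget here are made up for the sketch:

```python
# Toy constraint-based optimization: maximize predicted acceptance
# subject to a cost constraint. All numbers are illustrative.

gifts = [
    {"name": "New Audi",                      "cost": 40000.00, "p_accept": 1.00},
    {"name": "Ballet in the Park tickets",    "cost": 5.00,     "p_accept": 0.01667},
    {"name": "Olive Garden gift certificate", "cost": 50.00,    "p_accept": 0.85},
]

budget = 100.00

# Keep only gifts that satisfy the cost constraint...
feasible = [g for g in gifts if g["cost"] <= budget]

# ...then choose the one with the highest predicted acceptance.
best = max(feasible, key=lambda g: g["p_accept"])
print(best["name"])  # Olive Garden gift certificate
```

The real functionality handles many actions, constraints and objectives at once; the sketch shows the core filter-then-maximize logic with a single constraint.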
By completing all of these steps, "IBM Decision Management for Gift Giving" (the next Signature Solution?) is ready to generate a recommended action in answer to my wife's question, "What should we get your dad for Father's Day?"
My recommended outcome >>> Gift certificate to the Olive Garden.
The next step is to put my updated application into the cloud (read more about Analytical Decision Management SaaS) so my extended social network can run the SaaS version for batch gift recommendations.
And, in case you have any wild ideas, I have a patent pending on the personalized gum wallet made of duct tape.
Today's post is from Tom Mulvehill, Security Product Manager.

One of the most informative Innovate 2012 sessions I attended was "Security Scanning within Continuous Integration," presented jointly by USDA and SAIC. The session highlighted the benefits of addressing security vulnerabilities early in the Software Development Lifecycle (SDLC), a pretty well understood development best practice. USDA took it a step further by integrating Static Application Security Testing (SAST) into the SDLC using a Continuous Integration (CI) model. Their commitment to CI generated measurable results, greater scanning coverage, and wider adoption by their development community.

Early on, USDA recognized that security knowledge and expertise in development had a direct bearing on how much vulnerability information a team could remediate. They knew that development teams must be presented with an actionable set of security findings. To address this challenge, the security team used the AppScan Source filter feature to focus first on high-severity vulnerabilities limited to SQL Injection, Cross-site Scripting, and Authentication risk. This approach produced a finite set of results that developers could act on. Their key message was "focus on what your team can fix."

Once the security analysis was refined, they introduced it to the development teams through a CI process, making application security analysis part of the build process. USDA took advantage of AppScan Source's support for Maven to streamline the CI integration. Admittedly, this required some experimentation with how frequently security analysis should be tied to a build. Ultimately, their CI framework enabled them to continuously scan 135 projects once a week; some are scanned more often. In total, over 1.8 million lines of code are scanned, and the actionable results are provided to hundreds of developers in an automated fashion.
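The "focus on what your team can fix" filtering idea can be sketched in a few lines: reduce a raw scan report to high-severity findings in a short list of categories. The finding structure below is hypothetical, not AppScan Source's actual report format:

```python
# Sketch of triaging scan results down to an actionable set.
# The finding dictionaries are a hypothetical stand-in for a real
# SAST report; real tools emit far richer records.

ACTIONABLE_CATEGORIES = {"SQL Injection", "Cross-site Scripting", "Authentication"}

def actionable_findings(findings):
    """Keep only high-severity findings in the agreed-upon categories."""
    return [f for f in findings
            if f["severity"] == "High" and f["category"] in ACTIONABLE_CATEGORIES]

report = [
    {"category": "SQL Injection",        "severity": "High"},
    {"category": "Information Leakage",  "severity": "Low"},
    {"category": "Cross-site Scripting", "severity": "High"},
    {"category": "Cross-site Scripting", "severity": "Medium"},
]
print(len(actionable_findings(report)))  # 2
```

The point of the filter is social as much as technical: a short, fixable list gets remediated, while a thousand-item dump gets ignored.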
One of the ancillary benefits of the CI investment is the data provided to management. AppScan Source generates important security metrics that have been integrated into a code quality dashboard; in fact, the existing dashboard has been extended to include security reporting. Ultimately, USDA was able to integrate SAST into their existing CI environment, enabling them to report on security risk across a portfolio of applications. The entire process is automated and extensible. USDA also highlighted some areas for product improvement based on their experiences, and their feedback is being reviewed by the development teams this week for future releases. USDA concluded their session with the following key points on the benefits of integrating AppScan Source into their CI environment:
- Reduced the cost of performing scans
- Reduced the cost to fix vulnerabilities
- Increased visibility to management
Deploying SAST using a Continuous Integration model is beneficial both to security-savvy organizations and to those just starting to focus on application security. It's a practical model that benefits security teams, developers, and management. For more information, you can visit us on the web here.
This week, market research firm IDC ranked IBM first in worldwide market share for enterprise social software. According to IDC's analysis of 2011 revenue, IBM grew faster than its competitors and nearly two times faster than the overall market, which grew approximately 40 percent.
That's one pretty sweet three-peat.
According to the press release, IDC expects the enterprise social platforms market to reach $4.5 billion by 2016, representing 43 percent growth over the next four years. While demand is on the rise, organizations are still looking for ways to embrace social capabilities to transform virtually every part of their business operations, from marketing to research innovation and human resources, but they lack the tools to gain insight into the enormous stream of information and use it in a meaningful way.
"Social software is gaining in momentum in the enterprise," says Michael Fauscette, group vice president for IDC's Software Business Solutions Group. "Companies are seeing significant gain in productivity and increasing value from successfully deployed social software solutions including supporting ad hoc work by bringing people, data, content, and systems together in real time and making more effective critical business decisions by providing the 'right information' in the work context."
The IBM portfolio of social software includes Social Collaboration, Unified Communications and Web Experience. Today, more than 35 percent of Fortune 100 companies have adopted IBM's social software offerings, including eight of the top 10 retailers and banks. IBM's social business software and services are unique in combining social networking capabilities with analytics, helping companies capture information and insights from dialogues with employees and customers and create interactions that translate into real value.
Naturally, we're pretty excited about this news. Here's a sampling of what we've been saying:
"Integrating social technologies into business processes is an important early step in the transformational promise of social business. A social business can use analytics to derive insights from its networks of customers, partners, and employees, and ultimately use those insights to improve business functions. The big winners will be those who can gain real-time intelligence on the data being generated within these communities to be more competitive in their markets. Clients applying analytics to their business processes are improving productivity, driving innovation and speeding customer responses, saving time and money." ~ Delivering Transformational Social Technologies = Success for our Clients, a blog post by IBM Social Business GM Alistair Rennie
"I see IBM as a social business, because of the way we've broken down the barriers of reaching out to the people within the organization, but also how we're leveraging these same tools externally facing, to interact with our partners and clients." IBM Social Software VP Jeff Schick in conversation with MIT Sloan Management Review.
There was an article in The New Yorker last week entitled "Why Smart People Are Stupid."
Its premise stated, "When people face an uncertain situation, they don't carefully evaluate the information or look up relevant statistics. Instead, their decisions depend on a long list of mental shortcuts, which often lead them to make foolish decisions. These shortcuts aren't a faster way of doing the math; they're a way of skipping the math altogether."
Given all the work organizations do to collect and align data, there really is no reason foolish decisions should be made any longer, especially when there's a huge price tag attached to bad decisions.
And when you think about how many decisions an organization makes on a daily basis (thousands? millions?), being foolish is no longer an option, especially when you calculate the difference in cost between one foolish decision and a million.
And most of these transactional or tactical decisions need to be made in an instant: a customer service agent deciding whether to give a customer a discount to combat churn; an insurance claims system determining whether potentially fraudulent activity should be escalated for investigation; or a logistics manager deciding if a truck is safe to put on the road for the next delivery.
To end this foolishness, IBM has introduced Analytical Decision Management to help organizations automate and optimize decision making in real time to ensure the best outcomes occur every time.
Essentially, Analytical Decision Management takes the complexity out of big data by quickly analyzing and embedding analytics directly into business systems (in a call center, on a website, on the manufacturing floor) to empower employees and systems on the front lines with the ideal action.
It also allows business users to run multiple "what if" simulations, compare the outcomes of different approaches and test the best business outcomes before the analytics are deployed into the operational system. Even analytics follow the old adage, "Measure twice, cut once."
IBM Analytical Decision Management
According to IDC, the decision management software market is expected to exceed $10 billion by 2014. To meet this growing demand, IBM Analytical Decision Management is the first in a series of IBM Smarter Analytics innovations that will change how organizations weave analytics into the fabric of their business, fueling all systems, decisions and actions to consistently deliver optimized outcomes while adapting to changing conditions.
The newly released Analytical Decision Management combines and integrates predictive analytics, business rules, scoring and, now, optimization techniques into an organization's systems to:
- Maximize every customer interaction to grow revenues and increase loyalty
- Detect and prevent threats and fraud in real time to reduce risk
- Proactively manage resources by predicting equipment failure, staffing downtime and service disruptions to contain cost
For example, Santam Insurance is using Analytical Decision Management to transform its claims processing by enhancing fraud detection capabilities and enabling faster payouts for legitimate claims. In fact, in the first four months of use, Santam saved $2.4 million on fraudulent claims. (Read the full case study.)
Santam can now automatically assess if there is any fraud risk associated with incoming claims and allow frontline claims representatives to distribute claims to the appropriate processing channel for immediate settlement or further investigation, which in turn, optimizes operational efficiency.
Because not all customers and claims are created equal, Analytical Decision Management adapts its recommended actions in real time to accommodate changing conditions as new data is collected and outcomes are recorded.
Analytical Decision Management is also equipped to automatically prepare, cleanse and transform data for the best possible analytics through the new Entity Analytics capabilities.
There can be challenges when diverse enterprise-wide data is integrated, especially when this data contains natural variability (e.g., Bob versus Robert), unintentional errors (e.g., a transposed month and day in a date of birth) and, at times, professionally fabricated lies (e.g., a fake identity).
The Entity Analytics feature allows data scientists to overcome some of the toughest data preparation challenges and create the most complete view of an individual record. Users can generate higher quality analytic models and, as a result, organizations will enjoy better business outcomes whether the goal is detecting and preempting risk or better responding to a customer�s needs.
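To make those kinds of variability concrete, here is a deliberately simplified Python sketch of matching two records despite a nickname and a transposed month/day. The nickname table and matching logic are invented for illustration and bear no resemblance to the actual Entity Analytics implementation:

```python
# Toy entity-resolution sketch: treat "Bob" and "Robert" as the same
# first name, and a transposed month/day as the same date of birth.
# All tables and rules here are illustrative assumptions.

NICKNAMES = {"bob": "robert", "bill": "william", "liz": "elizabeth"}

def normalize_name(name):
    first = name.strip().lower()
    return NICKNAMES.get(first, first)

def same_person(rec_a, rec_b):
    names_match = normalize_name(rec_a["first"]) == normalize_name(rec_b["first"])
    y1, m1, d1 = rec_a["dob"]
    y2, m2, d2 = rec_b["dob"]
    # Accept an exact date match, or one with month and day swapped.
    dob_match = (y1, m1, d1) == (y2, m2, d2) or (y1, m1, d1) == (y2, d2, m2)
    return names_match and dob_match

a = {"first": "Bob",    "dob": (1950, 6, 3)}
b = {"first": "Robert", "dob": (1950, 3, 6)}  # month and day transposed
print(same_person(a, b))  # True
```

Real entity resolution weighs many attributes probabilistically and must also resist deliberately fabricated identities; this sketch only shows why naive exact matching fails.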
Building a Smarter Planet requires collaboration and innovation. This is why IBM is looking to partner with the best and brightest startups and working hard to connect these companies with leading industry influencers and top investors. IBM SmartCamp is one of the premier channels for connecting startups, investors, academics, students and technology experts.
IBM SmartCamp is a mentoring and networking event aimed at identifying early-stage technology entrepreneurs developing technologies that tackle some of the world's most pressing issues, such as healthcare, water management and efficient energy resources. These events have been extremely successful in providing entrepreneurs with the resources needed to accelerate growth and continue driving innovation in their markets. IBM SmartCamps choose top startups in different cities across the globe, rewarding the winners with mentoring, services, access to industry experts and deeper partnership opportunities with IBM, investors and industry partners.
Since the program's inception in 2010, SmartCamp finalists have gone on to generate more than $60 million in VC/angel funding, and more than 400 experienced mentors have given their time to work with companies competing in SmartCamps. And now you have the opportunity to attend an IBM SmartCamp: on June 21, 2012, IBM SmartCamp is coming to Boston! IBM SmartCamp Boston is the first of four regional SmartCamp events across the globe, and we truly hope you are able to join us for this exciting event. SmartCamp Boston provides the opportunity to interact with top VCs, innovative startups, technology experts and industry influencers. And best of all, you will be able to experience IBM's Smarter Planet vision firsthand.
Consult-a-Doctor: Telemedicine platform providing groups, health plans, hospitals and health providers with 24/7 access to physicians across the U.S. via mobile apps, telephone, secure email and video.
eNano Health: Affordable medical device products that can be used for health screening at home with instantaneous results for a variety of disease diagnostics, including diabetes.
Streetlight: Leverages transportation and behavioral analytics to inform business decisions at select locations (stores, buildings, transportation, etc.).
Life Medical Technologies: Focuses on the manufacture and sale of BreastCare DTS™, an easy-to-use device produced under patent which has been FDA-cleared as an adjunct to mammography and other procedures for the detection of breast disease, including breast cancer.
SkyFoundry: Allows domain experts to capture their knowledge in "rules" that automatically run against collected data. The analytics engine automatically identifies issues worthy of attention and tells users what they need to know about the performance of their systems.
Sourcemap: An open directory of supply chains and environmental footprints. Consumers use the site to learn about where products come from, what they�re made of, and how they impact people and the environment. Companies use Sourcemap to communicate transparently with consumers and tell the story of how products are made.
If you can't make it, you can still follow along on Twitter through the #IBMSmartCamp tag.
Good strategies require good information; I think we can all agree on that. If you want to fly to Austin, for instance, it's important that you establish whether you mean the Austin in Texas or the Austin in Minnesota before you buy your plane ticket. Failure to do so will threaten the success of your Austin-visiting strategy at a deep level.
You might also think of this in terms of the military phrase "actionable intelligence." If the intelligence isn't very good, the action you're contemplating probably isn't very well advised. (You could call that kind of information "actionable stupidity.")
For many organizations today, however -- especially the larger ones that have been around a while -- ensuring that information is good is far from easy. This comes as a consequence of many factors, including:
The total volume of information, which is vastly higher today than it's ever been before. Is big data always a resource to be tapped? Or is it sometimes a challenge to be overcome?
The age of information -- in too many cases, it has long outlived its usefulness and may indeed be flat-out wrong; if it plays a part in strategies, those strategies will likely be compromised.
The way information can change as it's used in many ways, by many people, to achieve many goals. The game of Chinese Whispers (also called Telephone) illustrates pretty well how easily and thoroughly that can happen.
The fact that information can occur in multiple versions which differ from each other in subtle or blatant ways. Reconciling these different versions to arrive at a single accurate truth, and eliminating the versions which aren't true, is no simple matter.
Information governance can help you fulfill the promise of big data
Recently I discovered a blog on these subjects by an IBM expert, Dave Corrigan -- IBM's Director of Product Marketing for InfoSphere -- and was intrigued to find him discussing these various ideas in terms of trust.
It makes perfect sense, of course. If you're building an omelette out of eggs, or a house out of wooden beams, you need to be able to trust that they aren't rotten. And if you're building business strategies, processes and decisions out of basic information, the same logic applies.
This, in short, is the heart of information governance -- maximizing the business value of information by maximizing its quality and trustworthiness in a variety of related and interconnected ways. A quick phone call with Corrigan confirmed this interpretation.
"Information governance establishes trust in information," he said. "Without trust, organizations fail to capitalize on new insights. But when business users can trust information, they act upon insights from analytics and reports, and operate more efficiently when using enterprise applications."
This struck me as particularly interesting because of the implications. Picture a CIO who, having invested heavily in big data solutions, proceeds to collect piles and piles of data, runs his shiny new analytics tools on the piles and generates lots of impressive-looking reports, only to round-file the reports because, at some basic level, they just don't seem very trustworthy. Or, possibly worse, he uses the reports to make major decisions anyway, despite profound doubts about the wisdom of this course of action.
Talk about an indictment of technology! I asked Corrigan how common that scenario really was.
"More common than you'd think," he said. "Recent studies tell us that one in three organizational leaders frequently make decisions based on information they don't trust, or don't have. Half say they don't have access to the information they need to do their jobs. And 60 percent, a clear majority, think they have more data than they can use effectively."
Information governance is all about solving that problem. The idea is to make data more trustworthy so that you can then proceed confidently to use it in more ways, solve more problems and create more value -- both for yourself and for your clients, customers and business partners.
Six pillars of governance to support business goals and strategies
This, of course, is easier said than done. Fortunately, you don't have to do it alone. Corrigan explained to me that as a result of IBM's hundred-year history in business and an endless list of successful customer engagements, IBM has learned a thing or two about how information should be governed for best results -- actually, six things.
"Trusted information, as we see it, is dependent on six key technology aspects," said Corrigan. "Basically, you need to ensure that information is understood, clean, holistic, current, secure and documented."
Let's walk through those aspects briefly.
Understood information is information that has a clear, established context: its structure, its source and all associated metadata are known. Information has to be understood in this sense before definitions and policies concerning it can be shared across projects.
Clean information is just that -- correct. It's been standardized and cleansed, is in the right format and is known to be accurate. Logistics companies that ship products, for instance, need to be quite sure they have the correct shipping address, or customer satisfaction is going to take a major hit.
Holistic information is information that's been reconciled across all repositories, so that inaccurate versions of it are removed and a single accurate version is left. The logistics company above may have a correct shipping address on file for a customer, but it will also need to get rid of the other five addresses it also has, in other databases, all of which are completely wrong.
Current information is chronologically accurate. Keeping all information forever, as if it were all perpetually useful, will inevitably create problems. Instead, information should have an expiration date (rather like milk, water filters or members of Congress). This minimizes the odds it will influence decisions in ways it shouldn't.
Secure information has been protected and monitored over its lifecycle to verify that only the right people have seen it, changed it or used it in any way. One of the best ways to increase the trustworthiness of information is to keep the wrong people from getting access to it in the first place.
Finally, documented information has a known lineage to establish its history. This is rather similar to the idea of provenance in the art world, used to reflect changing ownership. If you're planning to spend $50 million on a Picasso, you need to be sure it was not in fact painted eight years ago by someone named Steve. Just as with provenance, information lineage can be used to trace problems, guide decisions and yield a better outcome.
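The "holistic" and "current" pillars in particular lend themselves to a concrete illustration. The sketch below is a toy example -- the records, field names and expiration window are invented for illustration, not an InfoSphere API -- that reconciles duplicate customer records down to a single surviving version and expires information past its shelf life:

```python
from datetime import date

# Hypothetical customer records: conflicting addresses for the same id must
# be reconciled ("holistic"), and stale records expired ("current").
records = [
    {"id": 1, "addr": "100 Main St", "updated": date(2011, 6, 1)},
    {"id": 1, "addr": "999 Wrong Ave", "updated": date(2009, 3, 15)},
    {"id": 2, "addr": "42 Oak Rd", "updated": date(2012, 1, 10)},
]

def reconcile(records, max_age_days=730, today=date(2012, 6, 1)):
    """Keep one record per id: the most recently updated, non-expired one."""
    latest = {}
    for r in records:
        if (today - r["updated"]).days > max_age_days:
            continue  # "current": drop information past its expiration date
        if r["id"] not in latest or r["updated"] > latest[r["id"]]["updated"]:
            latest[r["id"]] = r  # "holistic": a single surviving version
    return list(latest.values())

clean = reconcile(records)  # one trustworthy address per customer id
```

Real master data management involves fuzzy matching, survivorship rules and policy-driven retention, but the shape of the problem is the same.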
All of these capabilities are provided by IBM's InfoSphere family, which includes leading solutions like InfoSphere Information Server, InfoSphere Guardium and InfoSphere Master Data Management.
InfoSphere solutions aren't just standalone tools; they interoperate at a deep level, forming a complete information governance solution. This solution, in turn, helps organizations get the best use out of information even in the most sophisticated cases, where information volumes are incredibly high, use cases are many and it's critical that the information be as trustworthy as possible.
Corrigan sees this interoperable design, in which governance capabilities are logically linked, as fundamentally necessary if major IT initiatives are really going to be successful in a pragmatic sense.
"Common projects that drive the need for integration and governance include a newly installed enterprise application, or a data warehouse or big data systems that are the foundation of analytics and reporting," said Corrigan. "Improving the trustworthiness of information in each of those enterprise projects requires various combinations of the six aspects, through Information Governance technology, to fully satisfy requirements. That's why we see Information Integration and Governance as a common platform of integrated capabilities for data integration, data quality, privacy and security, lifecycle management, and master data management."
About the author: Guest blogger Wes Simonds worked in IT for seven years before becoming a technology writer on topics including virtualization, cloud computing and service management. He lives in sunny Austin, Texas and believes Mexican food should always be served with queso.
Guest post from Haytham Yassine, Software Engineer, IBM Social Media Analytics
I'm back with the redesign of the call center complaint process. Click here for part 1.
Before I share, here are some key areas such a process should focus on regardless of implementation:
• Customer: Today's customers find value in sharing experiences and advice amongst themselves via social media. Companies should accommodate our preference for these channels and come to us, as opposed to us going to them.
• Customer value: Customer value and loyalty are attained by resolving requests in a short encounter with high quality and minimal effort.
• Inputs and outputs: Inputs to the process should simply be the complaint or question, a relevant profile summary of the customer and any CRM data to assist the agent in providing assistance. The output should be quality service along with reference points for future engagements.
• Performance measures: Key measures are customer effort, customer satisfaction, quality of engagement, number and ratio of successful engagements, capacity of the system, channel flexibility and, obviously, cost.
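Several of these measures are straightforward to compute once engagements are logged. Here's a minimal sketch; the log fields (`resolved`, `customer_actions`, `satisfaction`) are invented for illustration and not part of any particular CRM schema:

```python
# Illustrative engagement log; field names are assumptions, not a real schema.
engagements = [
    {"resolved": True,  "customer_actions": 1, "satisfaction": 5},
    {"resolved": True,  "customer_actions": 2, "satisfaction": 4},
    {"resolved": False, "customer_actions": 4, "satisfaction": 2},
]

def kpis(log):
    """Compute a few of the key measures over a list of logged engagements."""
    n = len(log)
    return {
        "success_ratio": sum(e["resolved"] for e in log) / n,
        "avg_customer_effort": sum(e["customer_actions"] for e in log) / n,
        "avg_satisfaction": sum(e["satisfaction"] for e in log) / n,
    }

metrics = kpis(engagements)
```

Measuring the same quantities across both the old and new processes is what makes the comparison later in this post meaningful.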
You will see from the diagram below how most of the issues mentioned earlier can be resolved via a social media solution.
So what are the key improvements to take away from this redesign?
• Reduction of customer effort to a single activity
• Perception of shorter service encounters by pushing most aspects of the process into the pre-encounter phase
• Elimination of duplication by utilizing the customer's social media profile as input, as well as CRM data when available
• Educated (and empowered) agents provide more sophisticated responses by utilizing analytics and suggestions offline
• Proactive quality control integrated into the process workflow by incorporating a review activity
• A multiple-workstation approach is still employed, where customer requests are distributed across agents
Here's an end-to-end scenario:
If I have a complaint or question about your product, I'd share my thoughts through a social media channel; let's say Twitter for simplicity's sake, but it could be via a blog (similar to the one I'm writing now), board, forum, etc.
Using a social media analytics solution, such as IBM Cognos Consumer Insight, a scheduled hourly query would pick up the post (and many others) and run it through its analytics dictionary and the XYZ-defined model.
Based on geography, demography and other user attributes, the analyzed post is pushed to the designated agent's backlog.
The agent accesses the backlog from within a reliable social media management dashboard such as HootSuite. The workflow can define the priority in which complaints should be answered, be it the influencer score of the customer, time of request or a combination of both.
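That kind of priority ordering can be sketched as a weighted score over a heap. The weights and field names below are assumptions for illustration only; they are not how HootSuite or any IBM product actually ranks a backlog:

```python
import heapq

# Sketch of backlog prioritization: weight influencer score against request
# age. The 0.7/0.3 weighting and the post fields are illustrative assumptions.
def priority(post, now):
    age_hours = (now - post["posted_at"]) / 3600.0
    return 0.7 * post["influencer_score"] + 0.3 * age_hours

def build_backlog(posts, now):
    # heapq is a min-heap, so negate the score to pop highest priority first;
    # the index breaks ties without comparing the dicts themselves.
    heap = [(-priority(p, now), i, p) for i, p in enumerate(posts)]
    heapq.heapify(heap)
    while heap:
        yield heapq.heappop(heap)[2]

posts = [
    {"id": "a", "influencer_score": 10, "posted_at": 0},     # old, low reach
    {"id": "b", "influencer_score": 80, "posted_at": 3600},  # newer, high reach
]
ordered = list(build_backlog(posts, now=7200))
# the high-influence post "b" is served first
```

Swapping the weights changes the policy from "influencers first" to "oldest first" without touching the rest of the workflow.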
The agent sees my post dissected to portray opinion, product mentions and other analytics:
The agent then assesses whether this post is worthy of a response. Maybe it should be addressed by the developer of the EFG application or better yet, maybe it has already been answered by other users in the same social network.
User-specific analytics (preferences, prior engagements, etc.) would be brought up to assist the agent in providing the appropriate response. If my profile can be mapped to the company's CRM, internal data would be loaded as well. The agent would then formulate the response, get it reviewed by their social media manager and then share it.
So how does this implementation fare compared to the current one?
I can't claim to have done a formal assessment, so I'll leave it to your company to implement a pilot project and test it out. However, I've already shown that quality, effort, capacity and flexibility are far superior in the proposed design.
Please ensure you measure successful engagements in absolute and relative terms across the two processes. A reliable social media analytics solution would measure the impact of your engagement efforts over time.
There are also numerous considerations to keep in mind prior to migrating to this process design, most notably your customers' demographics and their presence on social media.
I do realize that it won't be easy to get over your call center's sunk costs. Don't worry; I'm advising a gradual transition. Pilot this system in parallel.
The cost of a social media analytics solution is mere change compared to the millions and millions you've already spent on that call center.
Please let me know if you have any feedback or comments. I would love your input.
Reference: Larry P. Ritzman, Lee J. Krajewski, Manoj K. Malhotra and Robert D. Klassen, Foundations of Operations Management (third Canadian edition). Toronto: Pearson.
Talk about timing: no more than three days after my assignment in one of the driest places in the world does IBM announce another successful solution for Smarter Water.
Last Wednesday, IBM announced that Arizona's Desert Mountain Community will use IBM analytics software to manage irrigation of its championship-grade golf courses for a 10 percent reduction in water usage. Three days before, I had returned from an IBM Corporate Service Corps (CSC) assignment in Antofagasta, Chile, where the annual rainfall is about four millimeters -- about one tenth of an inch.
Fresh water is one of two critical resources in Antofagasta, copper being the other. The former is in short supply; the latter most certainly not. The region is home to the Escondida (below) and Chuquicamata copper mines, the world's two largest. Last year, Escondida alone produced 569,000 tons of copper. All told, mining accounts for 97 percent of the region's exports and supplies 53 percent of Chile's mining output.
Water and copper: a critical link
Water and copper are inextricably linked. An open pit mine can use up to 15 percent of its water resources each day simply wetting the roads to keep the dust down [1]. Refining is also a thirsty process. In 2006 the reported average usage rate stood at 11.9 m³/s [2], though innovations are making the process continuously more efficient.
Historically, Antofagasta sourced water from the Cordillera -- the foothills of the Andes mountains. But a decade of intensive mining activity and the resulting influx of nearly 60,000 thirsty new inhabitants have put a severe strain on the system. Agricultural activities, too, contribute to the challenge. A desalination plant built in the 1990s has eased the burden somewhat, and a second plant is in the works. Also to its credit, Antofagasta has cut its water waste rates significantly over the past few years.
Balancing supply and demand
Demand for copper means economic growth; successful water management will mean a higher quality of life. Fortunately, Chilean authorities also recognize the need to smartly balance supply and demand. A 2008 report by the Chilean Copper Commission (Cochilco) states:
The limited availability of water resources in northern Chile has become one of the most important topics on the country's agenda due to the importance of the resource for the development of all economic activities, care for the environment and the quality of life in the communities.
For mining, which will continue to be one of the most important production activities in Chile, the availability and proper management of water is key to its long term sustainability. Thus, the challenge is both enormous and strategic, since precisely this activity is concentrated in the northern region of the country, where periods of scarcity and drought are recurrent.
Antofagasta not alone
Nor is Antofagasta alone in seeking a solution. Nearly half of the world's six billion people live in water-stressed areas. Eighty countries already have water shortages, and the World Bank reports that the demand for water doubles every 21 years. The solutions are out there.
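A doubling time of 21 years implies a modest-looking but relentless compound growth rate, which is easy to back out:

```python
# Demand doubling every 21 years implies (1 + r) ** 21 == 2;
# solving for r gives the implied compound annual growth rate.
doubling_years = 21
annual_growth = 2 ** (1 / doubling_years) - 1  # about 3.4 percent per year
```

A few percent a year sounds benign until it compounds into a doubling within a single generation.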
The CSC is designed to increase IBMers' understanding of the opportunities within emerging markets and the challenges that accompany their rapid growth. That was certainly true for me.
My CSC assignment didn't deal specifically with mining, but the industry plays such a large role in the region that nearly every facet of life there is affected by it. Before coming back I had thought that integrating what I'd seen and learned on assignment into my regular responsibilities would be a challenge. But after seeing more evidence of our emerging Smarter Planet -- not to mention seeing opportunities to contribute to it -- I'm happily being proved wrong.
Today's post comes from Kim Madia, Product Marketing Manager, InfoSphere.
If your organization is like most, you are feeling pressure to move to "the cloud." What does this mean exactly, and how can you ensure data security in the cloud? We tackle these questions in a new ebook, Protect Data in Physical and Virtual Infrastructures: Keeping Data Secure in the Age of Cloud Computing. IBM recommends scalable, comprehensive data protection for physical, virtual and cloud infrastructures. Don't let security concerns become a barrier to cloud adoption. Cloud adoption helps organizations:
• Reduce total cost of ownership with fewer servers.
• Deploy server images faster compared to physical server hardware.
• Standardize and optimize IT infrastructure to allow for scalability and reliability.
• Provide business agility and IT flexibility.
In this ebook we discuss how you can realize these benefits while also protecting sensitive data and demonstrating compliance. We also explain how controls must become more flexible in order to address access control, privacy and cloud vulnerabilities.
Guest post from Kurt Peckman, Program Director, IBM Predictive Analytics
Last Friday I took a different train into my office here in Chicago.
This particular station has a diner located right next door, within steps of where I would be catching my train. It serves only breakfast and lunch, and it immediately hit me that I'd stumbled upon a diner with an optimized location and production schedule.
Speaking of which, I had optimized my wait time for my train. No gross surplus of minutes to waste on the platform; no deficit of time causing a heart attack-inducing sprint from my car to the train. I immediately headed to the diner.
The waitress, whom I'd never met before today, immediately greeted me with, "Hi, honey -- $1 egg sandwich today?"
I didn't fall for the "honey" play. I'm old enough to know that any good waitress worth her salt will refer to me as honey, sugar, handsome and the like in an attempt to up-sell me from coffee to coffee plus. And given my experience in up-selling myself (discussed in my last blog), I was naturally on guard.
However, I was very, very intrigued by the price of the $1 egg sandwich.
I said, "No, thanks," which was tough to do. I love egg sandwiches, and one dollar is a heck of a deal for a diner-based product. (Notice the use of the word "deal" and not "price," which implies "value" to me.) I am trying really hard not to eat so many egg sandwiches, so I declined. But the critical fact in this story is that I paid $1.75 for a cup of coffee.
Secretly, what I really wanted to do was take the entire day off work to interview "Flo" the waitress (my customer service rep), the chef and other patrons about the implications of the $1 egg sandwich. I especially wanted to interview the owner (who I think was sitting in the corner reading a paper) as to how the execution of the egg sandwich is tied to his overall business strategy.
How was that price determined? Is it an optimized price? Can a diner really make a profit on a $1 egg sandwich? If so, does it include the cost of all goods: materials, labor, overhead (e.g., utilities, wear and tear on the grill, depreciation on the spatula, etc.)?
Or was the pricing objective pull marketing for the diner? The deal didn't prompt me to go into the diner, and I'm not even sure there was a sign out front stating its terms. But there was signage inside that I noticed only after she pitched the deal. Now my mind was spinning.
Is Friday the best day for the egg sandwich promotion? Is this an optimized campaign -- right offer, price, channel, day and time? I didn't even get a chance to ask if every Friday is a $1 egg sandwich day. If so, I might be inclined to tell my colleague Bob (who regularly commutes to/from this station) about the end-of-week deal at this diner.
Given my love of egg sandwiches, I might even be tempted to take to social media to sing the praises of this diner.
Other questions scrambled my mind: do they pre-make the $1 egg sandwiches? They must. There is no way the diner can meet the short-term, burst demands dictated by the average time one waits for a train.
And what is the optimized inventory of egg sandwiches that minimizes spoilage and maximizes freshness, demand, labor ...? The $1 egg sandwich production quickly becomes an n-dimensional optimization problem.
And by "optimization" I mean the mathematical definition: maximizing (or minimizing) some outcome or value within a set of predetermined constraints. A classic example is an investment portfolio: we are all trying to maximize the value of our portfolio subject to the constraints of contributions, time, risk, market direction, etc. But I digress ... back to the eggs.
Maybe the $1 egg sandwich starts at $2 earlier in the day and, by the time I arrived, the decision was made to drop the price due to surplus inventory. Wouldn't it be something to find out that a mom & pop diner was using sophisticated optimization algorithms to price egg sandwiches to maximize profit and minimize spoilage?
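For fun, here's how small the simplest version of that pricing problem is. Everything in this sketch is invented -- the unit cost and the linear demand curve are assumptions, not data from any diner -- but it shows the mathematical shape of the decision:

```python
# A toy version of the diner's problem: pick the egg-sandwich price that
# maximizes profit under an assumed linear demand curve and unit cost.
UNIT_COST = 0.60          # assumed ingredients + labor per sandwich

def demand(price):
    # assumed: 60 buyers at a price of $0, losing 20 buyers per dollar
    return max(0.0, 60 - 20 * price)

def best_price(candidates):
    # profit = margin per sandwich times sandwiches sold
    return max(candidates, key=lambda p: (p - UNIT_COST) * demand(p))

prices = [round(0.25 * k, 2) for k in range(1, 13)]  # $0.25 .. $3.00
optimal = best_price(prices)
```

With these made-up numbers the best candidate lands at $1.75 rather than $1 -- which is exactly why the $1 sandwich smells like a deliberate loss leader or a surplus-clearing move rather than a profit maximizer.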
At this point three things become apparent:
1. Tying strategy to execution is as critical to the mom & pop diner as it is to Global 100 companies;
2. The best decision management solutions must include an optimization component; and,
3. I have an unhealthy obsession with egg sandwiches.
Today's post is from Constantine Grancharov, Product Manager, AppScan Enterprise.
The IBM X-Force team declared 2011 "the year of the security breach." This statement reflects the large number of prominent security breaches that made the news last year. The X-Force team also observed that attacks have become much more sophisticated, which requires a new approach to security: one built on security intelligence for detecting attacks and obtaining forensics. Information security requires that you put all the different pieces in context. IBM's QRadar solution gathers data from a large number of sources and puts it together to provide deep insight for effectively managing security risk.
Last week we announced the release of AppScan Enterprise v8.6 and its ability to integrate with QRadar. QRadar -- which automatically maintains and updates information about each host system's services, applications, vulnerabilities, traffic/use level, Internet exposure, users and more -- is enhanced by the addition of application vulnerability information from AppScan. This greater context allows QRadar to better detect and prioritize threats by calculating more accurate risk levels for each asset and more accurate offense scores for each incident. In addition to creating more accurate overall risk levels for each asset in QRadar's asset database, application vulnerability information also helps QRadar better detect and prioritize threats through real-time correlation with IPS/IDS alerts.
Securing a large number of legacy and in-development applications is an overwhelming task. The capability to put applications in the context of the infrastructure on which they are deployed, and to correlate their vulnerabilities with probes and attacks detected by IPS/IDS in real time, helps application security teams prioritize which vulnerabilities to address first and more effectively manage the risk applications present.
The integration between the two products consists of QRadar pulling application security data from AppScan Enterprise on a periodic basis. In other words, AppScan Enterprise acts as a vulnerability data source for QRadar. Since automated application security testing with AppScan is done primarily in pre-production, users are given the ability to map an application from its testing environment to its production environment. Once a user completes an application security assessment, they have the ability (if granted permission by the AppScan administrator) to make the assessment results available to QRadar. I encourage you to take advantage of these new product features; I think you will find them very useful.
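To make the idea of application vulnerability data enriching asset risk concrete, here is a deliberately simplified scoring sketch. It is illustrative only -- this is not QRadar's actual algorithm, whose inputs and weighting are far richer -- but it shows why an internet-facing asset with a severe application flaw should outrank an internal one with the same flaw:

```python
# Illustrative only: a simplified risk calculation in the spirit of the
# integration described above, NOT QRadar's actual scoring algorithm.
def asset_risk(vulnerabilities, internet_exposed, traffic_level):
    # vulnerabilities: CVSS-like severities on a 0-10 scale
    base = max(vulnerabilities, default=0.0)
    exposure = 1.5 if internet_exposed else 1.0   # assumed exposure multiplier
    load = 1.0 + min(traffic_level, 10) / 20.0    # busier assets matter more
    return min(10.0, base * exposure * load)      # cap at the scale maximum

# The same severe flaw (7.5) scores higher on the internet-facing, busy asset.
risky = asset_risk([7.5, 4.0], internet_exposed=True, traffic_level=8)
internal = asset_risk([7.5], internet_exposed=False, traffic_level=2)
```

The point of the integration is precisely that neither the application scanner nor the network context alone can produce this ordering; only the combination can.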
Today's post comes from Sydney Shealy, Market Segment Manager, Application Security.
When IT security first rose to prominence (when someone was given a budget and the title of security manager), many organizations focused on infrastructure security. Over time, while infrastructure security has remained important, application security has proven to be a crucial linchpin of effective IT security. Our very own X-Force security team reports that 41 percent of all disclosed vulnerabilities are found in Web applications. We also know the average cost of a data breach is high (currently $5.5 million), so application breaches are both likely and costly. Beyond this, statistics show that the costs of remediating an application vulnerability are lower the earlier in the software development lifecycle it is uncovered and addressed. This fact should provide further encouragement to establish a clear process for addressing Web and mobile application vulnerabilities.
All this said, what makes application security unique is that its vulnerabilities cannot be effectively addressed without a direct relationship with your development team, who actually remediate them. At the risk of stating the obvious, most developers are not security experts and are not looking for additional work. In recent weeks, I've heard firsthand from several IBM customers who leverage our own application security solutions to secure their Web applications against attack. I learned that capable tools are critical, but they are not useful if they don't foster communication with development teams and provide simple steps to remediate found vulnerabilities. All of these customers worked hard to simplify the notion of application security, to integrate security into current software development processes and to make security an acknowledged component of those processes.
Their results speak for themselves. Built on insights like those shared above, our IBM Security AppScan family is designed not only to equip your organization to find and remediate Web and mobile application vulnerabilities with a market-leading suite of software for static and dynamic application security testing and reporting, but also to break down the silos between security and development. At the end of the day, isn't our common goal to build strong applications?
This semester I have a new challenge. In my Operations Management class (MBA5280B), I was tasked with an assignment to analyze a process and improve it.
I'm not a big fan of call centers, and I firmly believe businesses should stop imposing their traditional models of service and start utilizing market-driven media of conversation for their business processes.
Just last month, I read a rhetorical blog posted by Brian Solis from Altimeter Group. It really intrigued me, particularly because the Extreme Blue (IBM's internship program for students pursuing software development and MBA degrees) project we planned out for the summer heavily addresses this space.
Solis' blog helps highlight, in an exaggerated fashion, the frustrating traditional process of reporting a product or service complaint. I highly recommend reading this post, as it provides a great introduction to the process reinvention I'm putting forth.
Not only that, the format I'm adopting in my blog takes the shape of a response to the original "Dear customer" tagline.
Although I would love to explore numerous processes that could be improved by using social media analytics, I will limit this article to the following: analyzing the call center process for reporting a product complaint and improving it by transposing it onto a "smarter" social media engagement workflow.
Here we go ...
Dear customer relations manager at (fictional company) XYZ,
I am writing to express my dissatisfaction, not with your products and services, but with the process you employ for people like me to voice their concerns about these very products and services. You emailed me recently asking that I go through the standard call center process for reporting a complaint or asking a how-to, and here I am instead going through the "very standard" social media channels.
Why, you ask?
It's because I'm one of the millions of consumers out there who have grown fond of using social media for gathering buying-decision information and venting experiences and reviews in return. It's a vicious cycle, you know. Oh, and by the way, I hope you have a reliable social media analytics solution in place to pick up this blog; I won't be picking up the phone and calling your hotline.
Your process is not only inefficient and painful, but it's also missing out on countless opportunities in the social media space. I'll start by analyzing the current process. Below is a simplified chart that highlights the various activities involved.
The process can be categorized as a service shop, where high customer involvement meets moderate-length service encounters and immediate delivery is expected at the end of the process.
I've highlighted in red the most problematic activities in the process chart and, as you can see, most of the complexity lies in the customer space. You'll notice I've excluded transfer activities from the chart to keep things simple; in reality, these do exist, and they add to the encounter's length and the customer's frustration.
Here is a list, by no means exhaustive, of the issues I see with the current process:
• Increased customer effort in various activities, complicating the interaction
• Customers have to actively wait in queues for service, extending the service encounter
• Duplication and inefficiency -- the same inputs being requested and processed at multiple phases
• Multiple transfers may be required
• Potential peak capacity concerns
• Underutilization during low periods
• Agents pressured to provide quality output, on the spot, based on unknown inputs
• Lack of proactive quality control on process output
• Lost opportunities when customers hang up
Of course, unlike a typical call center caller, I'm not here just to offer complaints. I do have a solution.
Please stay tuned for part 2 of my article when I describe the suggested process redesign.