1. Overall comments and feedback
First, if you were not with us this week, I encourage you to consider joining (if your boss lets you...) next year. And, if you had the privilege to attend and participate in this inaugural conference, please consider augmenting this report with your own notes, observations, and ideas by leaving a comment on this post. To the ISIS audience (our Service Engineering platform in IBM's Software Group), who may wonder why I include SRII-related posts in this blog: SRII builds bridges between Service Research and Service Engineering, so consider this an invitation to rally and join any initiative you can.
After last year's leadership conference, this was the inaugural 2011 annual global conference, labeled "Innovating Services for the Smarter World." SRII has two I's: one for Institute, the other for Innovation. And the latter keyword, innovation, was used explicitly in most of the presentations. What was really impressive for an organization this young (three years) was the size (280 registered participants) and the number of papers and presentations (more than 180!). And the list of countries represented: I counted 18 (Finland, Thailand, Vietnam, Japan, Korea, China, India, Germany, UK, US, France, Spain, Sweden, Australia, Taiwan, Luxembourg, Switzerland, Singapore), but I haven't managed to see everyone, so I probably missed a few. An amazing mix of representatives from both mature and growth markets, which reflects the importance of services in our global economy, as well as a mix of academia and industry, not to mention government, professional associations, and health care (quite a few medical doctors participated in the conference). Which other industry sends its leaders or practitioners to such a conference? I don't think there was anyone from banks or insurance companies, from travel and transportation, or from utility companies, all sectors which are basically service providers, though. Bottom line, hats off to SRII President, Kris Singh, for having built such a global organization and network, leading to a profusion of papers on service research as well as an amazing list of renowned keynote speakers at this year's conference. Kris also gathered an impressive management team to make SRII successful (http://www.thesrii.org/index.php/management-team/executive-management-team), with a special mention to SRII's secretary, Ralph Badinelli of Virginia Tech, for his major contribution in building this very professional agenda and keeping up with a fluctuating schedule throughout the week.
Beyond innovation, research, engineering, quality, cloud, and mobile, to pick a few buzzwords, the most used term was of course service (or services). So much so that it becomes almost necessary to define it every time we use it in the context of such a broad conference. Several speakers defined service as a transaction between a supplier and a customer with co-value creation. I initially thought that co-value creation didn't properly reflect the service my plumber renders when fixing a pipe while I watch him do so, my only part being to pay the bill. On Friday, one of the talks I liked the most was from serial entrepreneur Tim Chou, who defined a service as "the delivery of information which is personal to you," using Amazon as an example and most online banking sites as counter-examples. Which leads me to suggest that this definition certainly fits today's digital services well, but not the human-based services which are the first ones I think of when I hear the word "services," in contrast to the concept of product. I don't argue that, in our digital era, software products are not manufactured like other traditional products, especially since we distribute them electronically, yet there is merit in differentiating these two types of services, the digital ones and the human ones. While getting on the cloud bandwagon is super key and a matter of survival for any IT company nowadays, as Ann Winblad so eloquently stated in the final keynote on Friday night, we still have to address the very limited productivity improvement we have achieved so far on the "traditional" service side. While Moore's law led to a leaping one-million-fold improvement in density on the CPU side, it's already a big deal when we improve human-based services by 20%, or a factor of 1.2. That's a big gap, isn't it? What do you think about the definition and scope of service? What are your own challenges?
The concepts of pure digital services and IT-enabled human-centric services became so intertwined in the forum that, at some point, confusion arose about the economics of the service model. In the IT industry, human-based services are known for being a lower-margin business than the product-centric business. At least compared to the not-yet-commoditized products, or the ones which sell very well (e.g. Microsoft's OS and productivity tools). Of course, the economics and scalability behind digital services are completely different, especially in an ecosystem in which development and deployment platforms may be completely free (Apple's App Store, Google's Android platform, ...). So, saying that services are low margin becomes irrelevant if you don't state which services you are referring to. Time to invent new terms and terminology (or some ontology...)?
By the way, SRII is not about just any service, but IT-enabled services (IT being in this context the shortened version of ICT, Information and Communication Technology). That being said, as Kris pointed out, it's hard to find services which are not IT-enabled today, or at least which would not benefit from some form of IT-enablement. Certainly, health care can improve greatly from a broader and better use of IT, as Dan Riskin (Vanguard Medical Technologies) and Yan Chow (Kaiser Permanente) stated in their read-out of the Special Interest Group (SIG) they are leading around health care services. Per Dan, while some industries operate at 5 or 6 sigma (99.99966% service quality), health care is closer to an alarming 2(*) to 3 sigma, with a lot of human errors (yikes and ouch!) and/or a lot of good intuition and miracles in the better cases (phew!).
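For readers curious how those sigma levels map to defect rates, here is a quick back-of-the-envelope sketch (my own illustration, not from the talk), using the conventional 1.5-sigma process shift from the Six Sigma literature:

```python
from math import erfc, sqrt

def defect_rate(sigma_level: float, shift: float = 1.5) -> float:
    """Long-term defect rate for a given sigma level, assuming the
    conventional 1.5-sigma process shift used in Six Sigma tables."""
    z = sigma_level - shift
    # Upper tail of the standard normal distribution: 1 - Phi(z)
    return 0.5 * erfc(z / sqrt(2))

for s in (2, 3, 6):
    print(f"{s} sigma -> {defect_rate(s):.5%} defects")
# 2 sigma -> about 31%, 3 sigma -> about 6.7%, 6 sigma -> about 0.00034%
# (i.e., 99.99966% quality), matching the figures cited above.
```

Nothing deep here, but it shows why the health care figures are alarming: each sigma step cuts the error rate by a large, non-linear factor.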
Regarding the number of papers at such conferences, we really need to find a system to make all this content more consumable. The conference had 127 papers (live 30-minute presentations or posters in poster session forums), 13 keynotes by renowned speakers and more than 40 panelists. Apart from dreaming of using Watson to index this content smartly (one speaker talked about the concept of a "super or mega colleague," referring to the concept of digital assistants we'll have in our pocket all the time), a simple collaborative wiki where we can attach comments, reviews, links, and suggestions would extend, in a more scalable and actionable way, the water-cooler type of discussions happening (or not happening) at the breaks. We need an RSS feed associated with this wiki, and I'd say even a newspaper or magazine to provide news on a monthly basis for instance, and a mail notification mechanism with headlines and links to publications, for which we can set subscription preferences (frequency, format, areas of interest), so we can quickly browse through announcements (publications, collaborations, reuse in commercial contexts, calls for papers, calls for ideas, ...). I know, this is not research nor innovation, this is "just" infrastructure and a few elements of social collaboration and knowledge management, to make all this research work more actionable and lead to meaningful and impactful service... innovation. I also realize that it's asking more of SRII's volunteers, who are already doing so much on top of their day-to-day jobs, but since they have done so well so far building this content generation machine, they might as well cover the last mile and make it effective!
Speaking of innovation, we had the privilege to get Andy Bechtolsheim's insights on how our IT world has evolved over the past 40 years (first Intel chip in 1971) and his vision for the future, as well as how innovation works or doesn't. Like the horizon impediment playing in two dimensions: spatial first (it's challenging to work on a goal beyond the horizon, like it was to travel beyond the limit of the Earth in Columbus' time) and temporal (the quarterly rhythm of our companies tends to discount innovation which does not realize business benefits within this quarter or the next). Although this is the world I live in, as opposed to most of the audience, I met several researchers who mentioned this constraint too; it sounds like a virus for which we need to find a vaccine to protect longer-term and bolder research...
What would I like to see more of at SRII?
- More tangible results of applied research (I know, I'm bringing up the time horizon dilemma here, and it's ok to admit there isn't any application yet, but at least apply that test to every presentation)
- A stronger voice and presence of the practitioners in the trenches (and I know it is tough to ask billable consultants to volunteer time, but they could at least reach out any time they are on the bench)
- A brief abstract of each paper in the program, to help choose which sessions to attend and know what to expect. In exchange for more content in the program, I suggest using a smaller font and lighter, less glossy paper
- Better time management (e.g. sticking to the schedule; I'm not talking about the impact of the blackout, but for instance the numerous panel presentations which went way over 10 minutes, some with more than 20 slides!)
- Online publication of the proceedings, so we can refer others, practitioners in particular, to them.
If you read this report after having attended the conference yourself, PLEASE leave a comment with additional suggestions so the wonderful team of SRII volunteers can leverage such ideas for the next conference.
2. Additional notes on the conference content
At the risk of doing injustice to other papers by only reporting on the following ones, here are a few notes and comments on a subset of the keynotes, panels, and paper and poster sessions I was able to attend between internal and client conference calls... Again, feel free to add yours, whether agreements or different points of view, as comments to this post, so we all benefit.
- Tuesday - Global Leadership Meeting - The meeting consisted of read-outs of the SIGs and international chapters (I just realized that there doesn't seem to be a US or North American chapter, as SRII started in the US and is managed from the US...). Worth noting:
- Health Care IT Services SIG (Dan Riskin, Vanguard Medical Technologies and Yan Chow, Kaiser Permanente) -- Yan admitted that the health care industry was running at approximately a 25% error rate. That 79% of patients were not taking medications as prescribed, which represents a huge opportunity for remote services to address this issue. And that, with billions of people at the door of health care organizations, providing care at hospitals is not scalable. Another big opportunity for home-based and remote health care services.
- Intelligent Services SIG (Murray Campbell, IBM Research) -- On behalf of the group, Murray highlighted the importance and challenges of human factors related to data. And the need to provide more automation, to bring more scalability to the service industry and keep up with the overwhelming rate of data generation and collection from billions of devices, systems and sensors. I must admit here that I struggle with the term "Intelligent Services." First, I had not understood from the title that what people meant by intelligent services was simply the world and applications of analytics. Then, it implies that, in contrast, services not leveraging analytics are dumb. I'd rather talk about data-rich or data-centric services, or data intelligence services. Or simply "Analytics Services"? Anyway, I feel the title of the SIG should be revised to bring more clarity on scope and intent.
- SIEQ - Service Innovation, Engineering, Quality SIG (Babis Theodoulidis, University of Manchester and George Miller, British Telecom) -- To me, working in Service Engineering, this is of course the main and dearest SIG. Also because it has a huge scope, with so many interrelated fundamental concepts for our industry. So much so that the SIG has, and will have, challenges narrowing down its scope and mission. Among the very pressing and strategic themes are:
- Modeling of new services
- Design and creation of new services which are more robust and have less sensitivity to external events
- Measuring the quality of the outcome, instead of the process itself
- CeC - Cross Enterprise Collaboration SIG (Daniel Oppenheim, IBM Research) -- This is such a relevant issue, not only across different companies, but within companies of any size, across internal departments and constituents. Very interesting that the two gorillas of the IT industry (HP and IBM) are collaborating on this topic in this SIG and sharing their own experience.
- Multi-media immersion experience (HP Labs) and Applications of nanotechnology (IBM Research) -- Very interesting presentations from a scientific standpoint, and thought-provoking ones from a service standpoint, in the sense that there was no obvious application to today's service business, especially for the latter. For the former, it is obvious that a better and richer multi-media infrastructure and offering will increase and accelerate the adoption of tele-presence and tele-delivery of services, supporting for instance remote medical consultations or interactions between customers and agents (extending the success and pervasiveness of audio conferencing where video conferencing has so far failed to deliver acceptable performance and experience).
- Chapters -- We then had 8 read-outs from the regional chapters (India, Thailand, Vietnam, Germany, Spain, Japan, Australia, Taiwan).
- In Thailand, whose economy includes a very high percentage of services, the government has established an institute dedicated to service research, SRI, which naturally became the umbrella of the local SRII chapter. Initiatives include Smart Health and Smart Farm, in addition to special attention to two critical sources of services, education and tourism.
- In Germany, which is known for its excellence in product manufacturing, there is good news and bad news. The good news is that Service Innovation is one of the 17 initiatives the government set for the country in 2006. The bad news is that the initiative got €50M out of the total €15Bn allocated to the overall program, a mere (or abysmal) 0.3%... But, again, better than nothing. With that, Gerhard mentioned the awareness program called "INSPIRE Germany!" (National Initiative for Service Policy, Innovation and Research in Germany). As in Thailand, an ideal fit to attach the local SRII chapter to. The overall idea in Germany is to create mindshare around the concept of co-creation of value with product manufacturing, through the establishment of service systems.
- On behalf of the SRII Spain Chapter, Pere Botella explained how his university is leading the way and also mentioned the following initiative: NESSI (http://www.nessi-europe.com). The acronym stands for Networked European Software and Services Initiative, which addresses the challenges of the Internet of Services in the ICT industry. It brings together 430 organizations from industry and academia!
At the end of this marathon day, Kris' summary was concise. It is all about "connecting teams worldwide, so they identify local and global issues and unite to address them." The sky is the limit...
- Keynote - Robert Morris - VP Service Research, IBM -- Robert had given a very insightful talk at the SRII leadership conference last year, about the dominance of services in our economies today (over agriculture and industry/manufacturing). All hard problems around the world relate to information (social, economical, geographical, environmental). Health care is so far from the Pareto curve that it is not even necessary to make trade-offs between costs and improvements, as is often the case in other optimization sectors and technologies. Robert mentioned BASIC (the Bay Area Science & Innovation Consortium) and a Collaborative Care Cloud initiative in Southern California. Two major types of innovations (slides 18-19): first, when technology is used to automate, to do the same thing better and faster; second, and something which requires more radical change, changing the process, to do things or render a service in a radically different manner.
- Panel - Service Innovation, Engineering and Quality (SIEQ) - Again, per my earlier comment, SIEQ faces the challenge of an overwhelmingly broad scope. With that, the panel brought a collection of interesting perspectives, albeit quite disjoint ones in my opinion.
- Tung Bui, Chair and Professor of Technology Management at the University of Hawaii, used the Condorcet Principle about democracy to illustrate how collaboration can lead to innovation. I didn't know about this expression of democracy, and found the comment actually quite relevant to another SIG, the CeC one (Cross-Enterprise Collaboration). In a few words: considering most of the ideas of the group (concordance) without ignoring any opposition (discordance).
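For readers unfamiliar with the principle, here is a toy illustration of my own (not from Tung's talk): a Condorcet winner is an option that wins every pairwise majority comparison, i.e. one that gathers concordance without being defeated by any head-to-head opposition.

```python
# Toy ballots: each voter ranks three options from most to least preferred.
ballots = [
    ["A", "B", "C"],
    ["A", "C", "B"],
    ["B", "A", "C"],
]

def beats(x: str, y: str) -> bool:
    """True if a strict majority of voters rank x above y."""
    wins = sum(b.index(x) < b.index(y) for b in ballots)
    return wins > len(ballots) / 2

options = ["A", "B", "C"]
winners = [x for x in options if all(beats(x, y) for y in options if y != x)]
print(winners)  # ['A'] -- A beats both B and C head-to-head
```

Note that a Condorcet winner does not always exist (preferences can cycle), which is precisely why the concordance/discordance framing is interesting for group collaboration.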
- Mahmoud Naghsheineh, VP Service Innovation Lab at IBM Research, reminded us of the three tiers of service quality: client experience (front stage), quality in solutioning (which in most cases is still an art and should move toward a science), and quality in delivery. And how any service and marketing model should address these three levels.
- Ana Pinczuk, VP Global Technical Services at Cisco, made an appealing argument for leveraging forums to spur innovation.
- Increasing Service Productivity through Service Migration and Externalization -- Freimut Bodendorf from the Friedrich Alexander University in Nuremberg, Germany, presented his concept of service migration: basically, how to reuse a service which was designed to address a specific need to tackle another challenge in another context. Because migration really means something else in the IT service industry, I found the term quite confusing, and I would rather recommend leveraging the concept of a pattern. The illustration in the context of Adidas was very convincing (how virtual prototyping was first applied to shortening the product design cycle, then reapplied to replace physical samples in the trade shows where retailers place their orders, then on the web to improve the customer shopping experience while browsing online catalogs). Paper 4371a193.pdf in the proceedings.
- Studying the Evolution of Skill Profiles in Distributed, Specialization Driven Service Delivery Systems through Work Orchestration -- Phew, what a long title... Shivali Agarwal from IBM's India Research Laboratory explained the experiments she and her team ran on the performance of the factory model, that is, the decomposition of projects into smaller work elements which are handled by autonomous and specialized teams. As a practitioner, I would have loved to see more about the influence and impact of project management and governance on the effectiveness of the factory and distributed model. And also more references to the lessons to be learned from the failure of the software factory approach touted in Japan in the 80s. The factory approach also reminded me of crowd-sourcing initiatives such as IBM's GenO Liquid Portal or TopCoder. Paper 4371a210.pdf.
- InnoScore Service: Evaluating Innovation for Product-Related Services -- Mike Freitag presented a software tool to support surveys about the major drivers and inhibitors of innovation in the context of services. Paper 4371a214.pdf.
- Measuring the Core Competencies of Service Business: A Resource-Based View -- While the presentation consisted of a litany of service-related characteristics (hard to make this exciting...), the paper contains an interesting taxonomy, aggregating elements from a whopping 48 references! Worth leveraging when building methodologies and models. Paper 4371a222.pdf.
- Panel - Health care IT services -- While certainly a meaningful test of the deployment of electronic medical records at a national scale, I found that the focus on Meaningful Use (MU) gave this panel a rather regional flavor.
- Panel - University research & new curriculum -- Interesting perspectives about the long term actions of a few representatives from industry and academia in the research space.
- Jim Spohrer, Director of Global University Program at IBM (IBM UP), described the profile of the candidates IBM is looking for as well as how a few "mega topics" align with the priorities of the US Academy of Technology.
- Theresa Maldonado from the National Science Foundation used Pasteur's Quadrant to describe the scope of NSF's support of fundamental research. And laid out a nice evolution of the engineer profile over the past five decades (Engineers of the Future).
- Sorel Reisman, President of the IEEE Computer Society, highlighted IEEE's organization and wide range of activities and worldwide coverage. He announced the upcoming trycomputing.com website which will be based on the tryengineering.com model. He also referred to the Software Engineering Body of Knowledge (SWEBOK), as an example to follow for the service sector.
- Mark Stockman, Chair of the ACM SIG on IT Education, followed up on the accreditation topic by pointing us to the Accreditation Board for Engineering and Technology (ABET).
- Keynote - Andy Bechtolsheim -- A lively and motivational keynote from a leading figure of Silicon Valley (co-founder of Sun and one of the first investors in Google, among other successful ventures), on innovation and the pace of change. Per my earlier comment in this post, while Andy explained how processor density and performance have increased by a factor of 1,000 every 20 years since 1971 (i.e., a factor of 1,000,000 over the past 40 years, and a total factor of one billion in another 20 years at this rate), human-based services have barely improved by a few percentage points in the meantime from a productivity standpoint (the service business of course grows faster through the employment of more labor, in particular offshore, but that model isn't as scalable). Anyway, I can't summarize Andy's insights in a few lines, you have to watch him yourself to get inspired!
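A quick compound-growth sanity check of those factors (my own arithmetic, assuming the 1,000x-per-20-years rate Andy cited):

```python
# A 1,000x improvement every 20 years compounds dramatically,
# while a one-time 20% service productivity gain barely registers.
FACTOR_PER_20_YEARS = 1_000

for years in (20, 40, 60):
    total = FACTOR_PER_20_YEARS ** (years // 20)
    print(f"after {years} years: {total:,}x")
# 40 years (1971-2011) yields 1,000,000x; 60 years would yield 1,000,000,000x.

# The same rate expressed per year: roughly +41% annually.
annual = FACTOR_PER_20_YEARS ** (1 / 20)
print(f"equivalent annual improvement: {annual - 1:.1%}")

service_gain = 1.2  # the hard-won 20% improvement in human-based services
print(f"hardware vs. services gap over 40 years: {1_000_000 / service_gain:,.0f}x")
```

Which is exactly the productivity gap between hardware and human-based services lamented earlier in this post.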
- Panel - Cloud computing & services -- Here we are, back to the definition of services, this time clearly in the digital service arena (services delivered on the cloud).
- Jamie Erbes, Fellow, Chief Technologist and Head of Service Research at HP, shared the impact of the cloud on the next generation of workers, the professional consumers or "prosumers."
- Wolfgang Gentsch from DEISA in Europe added EaaS (Expertise-as-a-Service) to the already long list of models such as SaaS (Software-as-a-Service), PaaS (Platform-as-a-Service) and IaaS (Infrastructure-as-a-Service) (see an interesting post on this acronym "soup," starting with XaaS for everything-as-a-service...).
- I had conference calls from 6 to 11 AM on Friday morning, so I missed the keynotes. A few minutes after I got in the room, Hakan Eriksson, Group CTO of Ericsson and President of Ericsson Silicon Valley, was on stage with a chart on the screen representing the exponential forecasted growth of mobile data traffic. I don't know if that scared PG&E, our local utility company, but the whole room suddenly lost power, spotlighting all the laptops which were on at the time. How ironic that, after so many exciting talks about the cloud, the only devices working were laptops (albeit without Internet connection) and mobile devices... A big way to remind us that the cloud needs power, and a smarter grid to fuel it. It took at least 10 minutes to get a message from the hotel letting us know that the blackout was affecting several blocks in San Jose but that PG&E was working on it. And almost two hours for the power to come back. This unplanned blackout provided another great opportunity to network, the old human and social way...
- Panel - Service innovation for public sectors -- This theme, which unfortunately isn't a SIG yet, is of course very relevant to all of us, as we are all citizens, and the perspectives and quality of the panelists matched the importance of the topic.
- Ephraim Feig, Associate CIO of the US Social Security Administration, commented on the challenges that an organization managing $700Bn annually faces. Basically, you cannot stop the car to fix or improve it; you need to replace and rebuild while driving and keeping the car moving... In this context, research means asking the question "if we were starting from scratch, what would we do differently?", then building and implementing a migration path. A migration strategy in which each transition has to take into account many stringent constraints (quality of service, continuity, limited budget, impact of political cycles, ...).
- The highlight of this panel for me was when Tim Chou shared his vision of the web, from the traditional transaction-based approach to a pure personal touch, a personal experience for every user. Tim Chou is another leading figure of Silicon Valley, currently involved in three start-ups after having created and managed the overall On Demand business of Oracle. While I like his definition of services ("the delivery of information which is personal to you"), it certainly applies to digital services over the cloud, but not to all human-delivered services, even the IT-enabled ones. Tim invited all of us to get out of the SQL mentality and used Amazon to illustrate the personal information approach (a screen full of recommendations, as opposed to the tiny shopping cart in the corner representing the transaction side). And he pointed out that most online banking sites were of the older type, centered around account transactions.
- After this panel, the afternoon continued with two parallel paper sessions, and I attended most of the SIEQ one.
- BioMIMS - SOA Platform for Research of Rare Hereditary Diseases -- As Dr. Aya Soffer (IBM Research, Haifa Lab) said in her panel presentation and demonstrated while presenting her colleagues' paper: "working in the health care space is really cool and rewarding as your work really impacts people's lives." In the case of this paper, SOA solves the challenge of exchanging huge amounts of data across partners and countries (e.g. Israel and Italy) to then apply meaningful analytics.
- Towards an Inclusive World - A Simulation Tool to Design Interactive Electronic Systems for Elderly and Disabled Users -- This tool, presented and offered by Cambridge University (Pradipta Biswas and Pat Langdon), simulates how a GUI (Graphical User Interface) appears and behaves to people with disabilities such as impaired vision or Parkinson's disease. As a result, you can determine the suitable fonts, colors, or graphical component sizes to make the interface usable under certain health conditions. The work has been funded by a European grant and is (or will be?) freely accessible.
- Panel - Intelligent services/Information management -- Apart from my earlier comment about joining "intelligent" and "services" in the title, this panel on analytics was one of the best, with a good balance between statements from the panelists and questions from the audience. It even seemed like it would never have stopped if it weren't for the need to switch to Ann Winblad's keynote and the gala dinner. Represented in the picture, from left to right: Cisco, Shimane University, SAP, Google, HP and the National University of Singapore.
- Jim McDonnell, Sr. Director of Cisco's Smart Services Technology Group, made a point about the migration from reactive to predictive and proactive services. And about the importance of advanced visualization to complement analytics, to capture more intellectual capital about data and to improve the access to and leverage of data on the cloud.
- From SAP, Vish Agashe, responsible for information and data-related initiatives, stated that the line between the operational and analytical worlds is getting blurrier. That more and more decisions are taken at the point of transaction, and the decision shelf life of data is constantly shrinking. Therefore, insights need to be prescriptive and/or predictive. Vish also made the case for leveraging social networking techniques and concepts on back-office data (for instance, upon learning about a major event such as a tsunami or a merger announcement, contact your suppliers or clients who may be impacted to determine the impact on your own business).
- Despite Hemanth Puttaswamy's introduction (as moderator), Daniel Russell, Research Scientist at Google, did not reveal trade secrets about Google's search algorithms, but showed the power of BigData, Google's own implementation of the largest database in the world, with a super-fast SQL-like command-line interface. Unfortunately, with the delay caused by the morning's blackout, I missed Daniel's second keynote scheduled during the dinner.
- Representing HP (and replacing the third HP Fellow listed on the program, after Jamie Erbes and Dan Gonos; quite a committed involvement from HP's leadership), Kannan explained the acceleration of "time to decision" and why we need better data to support decisions.
- Last but not least, just off a painful 19-hour flight, Dr Hock-Hai Teo (Associate Professor of Information Systems and Head of the Department of Information Systems at the School of Computing, National University of Singapore) recounted the amazing eGovernment story of his country. Having contributed, at ILOG, to many intelligent and decision-support systems for the various constituents of Singapore since 1990, I was quite familiar with this global eGovernment success story, yet it is always nice to hear, and I do hope that more countries will adopt this citizen-centric and service-oriented approach (albeit with the "big brother is watching you" exposure, but who cares on the cloud, right...?).
- The long afternoon (1-7 PM!) finished with an amazing wake-up call, or call to action, from Ann Winblad, Co-founder and Managing Director of Hummer Winblad Venture Partners, a very successful venture capital firm based in San Francisco. Here are some of her statements:
- Today's action is in three areas, which intersect: Mobile, Social, Cloud
- Mobile is the largest platform shift ever. Compared to the PC and any previous computing platform, it is several orders of magnitude bigger, with billions of e-commerce-capable devices coming on the market.
- Most of the platforms are free, both for development and deployment! This creates a huge opportunity for today's entrepreneurs and almost eliminates barriers to entry.
- It is a world of prosumers, where employees pick whatever device and application they want. This creates a huge security issue, or opportunity. Ann also talked about "promiscuous employees."
- The edge of automation is approaching quickly (smart assets, sensors, actuators, agents)
- By the way, this is not the end of the old digital world; nothing dies, but everything fades away, so better to embrace change.
In addition to the numerous presentations, about 70 papers were presented as posters in a research forum at the end of each day. Here are a couple which I found particularly noteworthy in my area.
- A Business Model Framework for the Design and Evaluation of Business Models in the Internet of Services -- Nico Weiner and Anette Weisbecker [4371a021.pdf] A very interesting modeling and simulation environment to design and validate a business model and concept, with an extensive underlying ontology and actionability through a software-based tool.
- The Corporate Sustainability Dimensions of Service-Oriented Information Technology -- Robert Harmon [Not in proceedings] Like Robert, I am a big fan of sustainable development and responsibility, and am convinced that this is a great source of services to help not only our corporations but the whole planet take care of itself! I have connected Robert to another, although quite different, initiative about sustainability which I like, much more IT-oriented, but which at least contributes its share to a more sustainable world through the building of sustainable IT systems and architectures (http://www.sustainableitarchitecture.com).
As for Saturday, Kris et al., sorry I missed the finish; you'll have to switch to one of my other blogs to track me down... ;-)
3. Opening more doors...
Last but not least, there were a few fliers advertising service-related conferences, such as:
- 1st International Conference on Human Side of Service Engineering (HSSE 2012) - July 21-25, 2012 - San Francisco, CA
- 13th IEEE Conference on Commerce and Enterprise Computing (CEC 2011) - September 5-7, 2011 - Luxembourg, Luxembourg
Here again, please do not hesitate to advertise or share in the comment section other relevant conferences which can help us advance the field of services, through fundamental and applied research, innovation and collaboration across all the motivated stakeholders in this major sector of our economy. More announcements are available, for instance, on the Service Science website, where Jim Spohrer posted his conference report this Sunday, by the way (more to read... ;-).
Speaking of other doors, there is a special one which I hope gets re-opened with SRII: a relationship with TSIA (Technology Services Industry Association), which we benefited a lot from while I was with ILOG, before being acquired by IBM. With more than 200 members, TSIA is a major player in the service industry, albeit focused on services rendered by product companies (but, to my previous point about the blurry definition of services, focus is good!). Interestingly enough, the major SRII supporters and players (e.g. HP, SAP, Cisco, Ericsson, Microsoft, Oracle) are all part of TSIA, except IBM. No need to remind me about that and, as a service professional, I wish IBM would rejoin TSIA as soon as possible!
Until next time, and looking forward to hearing from you in the meantime about the application of your innovations, findings and longer-term research to advance our service field!
(*) 2 sigma means 31% of defects/errors, 3 sigma 6.7% (http://en.wikipedia.org/wiki/Six_Sigma).
You probably did not know the meaning of the acronym anyway, so will
not be troubled with the change of SRII's name to Service Research and
Innovation Institute (from the initial last word Initiative). That is
one of the many results of the reshaping of the association led by its new Board under the leadership of Kris Singh, its new President
I attended the inaugural meeting of SRII in Santa Clara 2 years ago when
I was on the Advisory Board of TPSA, the Professional Services branch
of the newly-renamed TSIA, Technology Services Industry Association.
It was fascinating, exciting and encouraging
to see the academic and commercial worlds meeting around the topic of
Service Innovation. Services represent today the largest part of the
economy of developed countries, yet we are far from having cracked
the productivity improvements that agriculture first, then industrial
manufacturing, benefited from. The consensus was that a lot had to be
done in both worlds, yet I didn't hear much about SRII the following
two years. Kris is calling his program SRII v2.0 and strongly believes
that making SRII an Institute will help address this overwhelming
challenge, unlike the too limited and usually time-bound concept of an
Initiative. Certainly he has gathered an amazing panel of companies and
organizations around the theme of Service Research.
I met Kris in his office at Almaden back in November, after reaching out
to him to better understand what IBM meant by Services Research.
Certainly, as you may expect, I found some distance between the
researcher that he is and the practitioner I am, but we agreed to
maintain the dialog and he invited me to this 2010 SRII Summit which
started this afternoon (see the whole Summit agenda). The
presentations will be posted on the SRII site shortly and someone was
videotaping the speakers, so I will not go into details and will let
you check the site at your convenience. The opening keynote was delivered by Dr. Robert Morris, VP, IBM Services Research
and titled "Service Innovation for a Smarter Planet." As Dr. Morris put
it a couple of times, he was preaching to the choir as he advocated the
critical importance for the Service industry of gaining major
productivity improvement. The fact is that agriculture stopped
being the predominant pillar of our economies a few centuries ago and,
through major productivity improvements, now represents only 2% of
the total labor force in our developed countries (I'm just back
from 3 weeks in Ethiopia and, of course, the situation is completely
reversed over there). Today, in our countries, at least 70% of the work
force is involved in providing services, the remaining being mostly in
manufacturing. Yet, to remain such a sustainable and major contributor
to our GNPs, service performance has to keep improving or it will drag
the whole economy down (cf. Baumol's Cost Disease).
Dr. Morris proposed a 3-step recipe:
- Focus on the Process: optimize Productivity, manage Risk
- Implement Quality Transformation
- Apply to the services which most affect our lives
The first step is well supported by tools and technologies nowadays but requires focus on process modeling, analysis and simulation.
The second step relates to the quality revolution which occurred in
manufacturing. Thankfully, cars don't break (most of the time), tires
don't blow up inadvertently if used properly, billions of transistors
process operations consistently despite rough conditions. Now, what
about the 10% of pieces of luggage being lost or delayed in airports
every day? What about the hundreds of mistakes and misinterpretations
of medical diagnosis and radioscopy readings? What about all the hours
lost when calling call centers and being routed to the wrong person or
getting a conflicting or inaccurate answer? We are not talking about
six-sigma here, but three, two, one...
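For reference, those sigma levels map to defect rates through the upper tail of the normal distribution, using the conventional 1.5-sigma long-term process shift. A quick sketch (my own illustration, using only the standard library):

```python
from math import erfc, sqrt

def defect_rate(sigma_level, shift=1.5):
    """Long-term defect probability at a given sigma level,
    using the conventional 1.5-sigma process shift."""
    z = sigma_level - shift
    # Upper tail of the standard normal: P(Z > z)
    return 0.5 * erfc(z / sqrt(2))

print(f"{defect_rate(2):.2%}")             # → 30.85% (roughly 31%)
print(f"{defect_rate(3):.2%}")             # → 6.68%  (roughly 6.7%)
print(f"{defect_rate(6) * 1e6:.1f} DPMO")  # → 3.4 DPMO at six sigma
```

Lost luggage and misrouted call-center traffic sit much closer to the top of that scale than to the bottom.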
To illustrate the third one,
Dr. Morris linked his presentation to several examples of IBM's Smarter
Planet initiatives (Smarter Cities, Smarter Transportation, Smarter
Grid, Smarter Health Care).
I had high expectations for the
subsequent panel on "Service Innovation:
Models/Methods/Tools/Metrics/Quality". Unfortunately, the format turned
into a series of somewhat disconnected 15-minute presentations where
speakers were rushed and squeezed into presenting years' worth of research in
a few slides. I look forward to accessing the presentations when they
are posted (check this page
). Actually, you can already look at my favorite one, from Robert Glushko of UC Berkeley: Service Innovation with Design Patterns.
I would have loved to attend the rest of the conference but, enough of a taste of Service Research for today, I need to get back to
work... while more and more researchers around the world are looking at
ways to improve our... work! In the meantime, speaking of Service Engineering and Methods, we are expanding ISIS with BPM-related best practices, hoping to issue an Agile Business Process Development plug-in for the Eclipse Process Framework, similar to ABRD (see Jerome's post).
This morning was special, as I woke up as an IBMer. No, no sign of my skin or clothes turning blue, of course! From the due diligence last July, the formal announcement, the two offers to tender ILOG shares, the onboarding, blue-rinsing and blue-washing processes, it has been a progressive and continuous move over 7 months, and the integration will still take a few more months. But switching to the IBM payroll this morning is quite a special milestone.
The date is special for another personal reason, too. Friday was not only my last day as an ILOGer; it also marked exactly 20 years with ILOG (I joined at the creation of the company in 1987 but had to leave for 1.5 years to complete my military service before coming back in 1990). Definitely a special weekend, squeezed between these two milestones.
Anyway, with this acquisition, you may wonder about the future of the ISIS acronym and the process itself. As a reminder, ISIS stands for ILOG Solution Implementation Standard. ILOG, will remain a legal entity for a few more months, acting as an IBM Company (see our new logo below). We can therefore keep ISIS as is until the actual "transfer of trade" or "transfer of business," the terms set by the M&A team.
By the way, what about the I in ILOG? Almost 22 years later, I bet many people do not know the origin of the company name. (Not our best marketing achievement...) ILOG stands for Intelligence Logicielle, or literally Software Intelligence in English, the crossing between Artificial Intelligence techniques and concepts applied to Software Engineering. A French acronym which went a long way around the globe to get the planet smarter. Smarter thanks to the intelligence of more than 3,000 customers and 850 employees. And now with the opportunity to leverage IBM's fabulous reach with close to 400,000 employees in more than 170 countries. From the cash registers to the mainframe, eBusiness, On Demand and now the Smart Planet initiative, we are glad to join and contribute to IBM's 97 years of history in "Changing the Rules of Business!"
From a content standpoint, ISIS is a perfect match with IBM's strategy in terms of method and process, and we have already established key contacts in this area both within SWG (Software Group) and GBS (Global Business Services) to discuss how our BRMS and Optimization best practices will smoothly integrate with IBM's capabilities around SOA, BPM and Project Management in particular. ISIS has been designed and architected as a series of OpenUP plug-ins, leveraging the Eclipse Process Framework. That makes it 100% compatible with RUP and Rational Method Composer. ISIS strongly pushes agile and iterative development values, as do IBM's processes. In addition to our work within the Praxeme Institute, which provides an open methodology, we look forward to getting ISIS to leverage SOMA (Service-Oriented Modeling and Architecture). As you can see, you can be assured that ISIS is destined for a great future, as its best practices are part of the assets both companies are eager to leverage to continue expanding the reach of our BRMS, Optimization, Visualization products and Supply Chain applications!
Bottom line, we have a few months to live with the ISIS acronym as is, the I standing for ILOG. Or will it stand for IBM...? Fortuitous coincidence, premonition or destiny? To get the answer, you will have to ask ILOG's CEO and founder, Pierre Haren! With one opportunity to do so this week at Dialog, our Premier User Conference. Talk to you later from Florida!
After several iterations of intensive analysis, business modeling, agile rule writing and meticulous ruleset testing, your BRMS application is finally ready for prime time and just got deployed to production. So, quickly give yourself a pat on the back, and get ready to move on to some real testing action!
Because, while BRMS technology certainly makes our lives easier for the creation, update and deployment of business rules, it does not exempt us from performing thorough functional tests on the business decision services.
Testing, which is already a grueling activity in traditional software development, actually takes on some new twists with BRMS, the most obvious being “short notice”.
We promised to carry business policy updates to production in a matter of days or even hours. Now, in that short time frame, the rule changes must not only be analyzed, designed, implemented, but above all tested against defects that would potentially have severe business consequences. Below are a few non-technical recommendations to help you win the testing race.
Prepare for Change: One easy way to buy yourself some time for testing the change is to spend some time up-front preparing for it. It is essential to do some proactive work with the SMEs and gather as much information as possible beforehand on the different types of potential change requests that can be expected, analyze and categorize them, and use this information to create a collection of reusable test plans for these types of changes, whenever possible.
Different flavors of this up-front work on change preparedness are illustrated by the concepts of Rules Variability Analysis, Business As Usual Change Request or Business Policy Change Templates. Also, note that the analysis can be performed incrementally during the lifecycle of the application, as patterns of changes begin to emerge.
One important benefit of well-prepared and well-understood changes is that the testing can be delegated to resources with little or no business knowledge (e.g. pure IT Quality Assurance team members), thus freeing the SMEs for more knowledge-intensive changes.
Secure the SMEs Availability: Subject Matter Experts are key to help define pertinent test cases for complex changes. There should be an organizational commitment from the business owner to make the needed SMEs available to support the testing activity. Ideally, while a change is analyzed with the help of one SME for implementation, another concurrently writes test cases for the change.
In exchange for their time, the SMEs should be rewarded with a business-friendly environment in which to compose their test cases. In the same way that the expression of business rules relies on a business-friendly vocabulary, the process of creating test scenarios should mask the underlying complexity of the object model. For this reason, the SMEs must be the prime source of requirements when the testing process and tools are elaborated.
Demand Mature Change Requests: As the project stakeholders start to witness the extreme agility of a well-oiled BRMS, they may become overly giddy and start to issue half-baked business change requests. Don’t shrug in disbelief: I’ve actually seen this happen! Half-baked change requests mean that the implications of the policy changes have not been fully studied and/or the understanding of the policy is not yet shared throughout the company. This is especially true when the surrounding business environment gets extremely competitive.
Resulting problems usually get uncovered during testing, as different SMEs may have different interpretations of how the system should behave with respect to the policy change for certain uncommon test cases. Generally, getting an agreement on the outcome of these test cases will force a late iteration of analysis between the stakeholders and may delay the deployment of other pending changes.
If this type of risk materializes, it is important to assign to one of the stakeholders, let’s call him the Change Manager, the responsibility to ensure that each change request is fully understood and has been fully disseminated before it is considered for implementation.
Demand Complete Change Requests: Somewhat related to the previous point, some business policy changes involve multiple parts, coming from different business groups (e.g. Eligibility and Pricing), and the deployment must include all the parts to be consistent and complete. Here again, the Change Manager has the responsibility of verifying among the different business groups and LOBs that a change issued by one is supported by the others, and that all the needed parts are included in the change request.
Educate Stakeholders: Not all changes are born equal: introducing a typo in the text of a contract stipulation rule does not usually have any devastating business consequences. However, setting a single erroneous eligibility threshold within a decision table will lead to either turning away good customers or taking on some overly risky ones. Both updates amount to a simple value change in a single rule, but the consequences are quite different.
It is thus important to train the different BRMS stakeholders to recognize the risk, understand the difference in potential impact between the different cases, and appreciate that while the implementation time may be identical, the magnitude of required testing time and efforts is not.
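To make that threshold risk concrete, here is a minimal sketch of boundary-pinned regression tests around a toy eligibility decision table. All names, thresholds and semantics below are invented for illustration; a real BRMS manages such tables and their test scenarios in its own tooling.

```python
# Toy eligibility decision table: first matching row wins, which
# mirrors common decision-table semantics. Invented for illustration.
ELIGIBILITY_TABLE = [
    # (min_credit_score, max_loan_to_value, decision)
    (700, 0.80, "approve"),
    (650, 0.90, "refer"),
]

def decide(credit_score, loan_to_value):
    """Evaluate the decision table top-down, defaulting to decline."""
    for min_score, max_ltv, decision in ELIGIBILITY_TABLE:
        if credit_score >= min_score and loan_to_value <= max_ltv:
            return decision
    return "decline"

# Regression cases pinned at the boundaries, where a single erroneous
# threshold edit would silently flip the outcome.
assert decide(700, 0.80) == "approve"   # exactly on the approve boundary
assert decide(699, 0.80) == "refer"     # one point under: must not approve
assert decide(650, 0.91) == "decline"   # LTV just over the refer limit
```

Keeping such boundary cases in a reusable test plan means a threshold change can be re-validated in minutes, rather than discovered in production.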
In conclusion, we can see that the common thread here is “preparation”: prepare the people, the organization, the templates and the tools so that as much work as possible is taken out of the reactive task of change management.
It sounded like “Putting the business rules in the hands of the business people” was still one of the most popular mantras at the 2008 Business Rules Forum, a bit like a recurring vow from a political stump speech.
Here are some candid questions about this catch phrase that I would like to explore: Who exactly are these “business people” that we keep on talking about? And do they clearly understand what they’re in for when we propose to put the rules in their hands? Also, is there a reality to this purported urge of IT to retain control over every activity associated with business rules maintenance? Or do we sometimes yield to an easy, Manichean caricature?
So really, who are these business people? Strictly opposing IT personnel to Business people only makes sense in the restricted context of a company’s reporting org-chart, to determine who reports to the Line of Business EVP versus the CIO. From a competency point of view however, the spectrum between Business and IT comes in many shades of gray.
Let’s take an example of different roles from the (almost defunct) sub-prime mortgage business, as illustrated in the figure below.
At one end of the spectrum, we have an underwriting cabinet, a group of mortgage experts who manage the competitiveness of the loan products, either by updating the positioning of the existing products or by designing new ones.
A policy manager is then responsible for precisely codifying the underwriting requirements, writing the stipulations and conditions associated with the loan product, verifying the regulatory aspects of the loan product across the different states, etc. Next up are the underwriters, who run the loan requests against the underwriting guidelines.
Straddling Business and IT territories is the Business Analyst, well versed in the practices and vocabulary of mortgage lending and at the same time, open to get his hands dirty with technology. The QA engineer also usually has a good grasp of business knowledge. We then enter core IT roles such as the Java developer, the DB Administrator or the Release Engineer.
Among these profiles, who has the desire, the time, and above all, the competencies needed to actually analyze, write and test rules in the long run, that is, once the BRMS application has been deployed in production and is in its change-time phase? I insist on “in the long run”, because we’re not just talking about blocking a few days or weeks of time to work on the BRMS project. This is about ensuring the continuous stewardship of the rules for the life of the deployed application.
So let’s review our different profiles:
The members of the Underwriting Cabinet certainly don’t want anything to do with the day-to-day BRMS activities. While they usually are the project’s main sponsors and are the first to understand its business value, the focus of their activity stays at a strategic level.
The policy manager may be the first of the business people to have any actual interaction with the BRMS, but it is usually in a purely consultative mode, often in the form of reports. Also, there is only one policy manager: his time is precious and cannot usually be diverted (at least not for too long) to direct BRMS maintenance tasks. His primary concern is to see that the policy changes he has authored are reflected as quickly and accurately as possible in the BRMS.
The Underwriter can be tapped as the SME to harvest and analyze the rules. They can sometimes get sufficiently involved in the processing of change requests through the BRMS that they feel comfortable browsing the rule repository to locate the rules that need to be changed. However, they usually lack the technical confidence to go through the rule authoring process. Moreover, their time and expertise are better used in the analysis part and also in creating complex test cases.
The Java developer certainly has the technical competencies to navigate through the business object model, the rule packages and so on. But he lacks the critical business knowledge ingredient. Also, he is usually the one to crave new technology challenges. Once familiar with the specifics of a BRMS implementation and deployment, he is ready to move on and discover another technology or another implementation.
The Release Engineer and the DB administrator just have nothing to do with writing rules, nor do they really want any part of it.
It looks like we are left with our jack of all trades: the Business Analyst. With his right mix of competencies, he certainly looks like the best candidate. Besides his knowledge of business and technology, the Business Analyst is essentially a strong communicator, used to moving information back and forth between Business and IT concepts. This is a critical advantage since communication between the stakeholders is the cornerstone of a successful BRMS project, as was underlined by this other mantra from multiple Business Rules Forum presentations: “Communicate, communicate, communicate...” (e.g. Gorman & Seer, Habraken).
As part of the organizational changes that naturally come with the adoption of the Business Rules paradigm, the designated Business Analyst then becomes the “Rule Plumber”, a combination of the Rule Steward, Rule Architect, Rule Analyst and Rule Writer hats, with many new responsibilities.
For complex projects or ones that support heavy volumes of change, these responsibilities can (should?) be shared among several persons, thus forming a Business Rule Management Group. In this case, Management should allow the group to borrow members with more business-oriented profiles (such as the Underwriter in our example) for rule analysis, as well as more technical profiles, such as a QA engineer for rule testing.
Note that in the process, we have not, strictly speaking, “put the rules in the hands of the business people”, nor have we surrendered control of the system to the IT group. Instead, we have defined a person, or group of persons, responsible for supporting the lifecycle of the business rules, soliciting input and expertise from Business or IT when needed, and enabling the benefits of the business rules approach for the business people.
More than trying to find a delicate balance between the good (“Empowering Business Users”) and the evil (“IT Control”), this is about recognizing the need for a nimble mediation entity that serves the business while using IT services with discernment.
It is important to distinguish the development phase of a BRMS project from its production phase. While IT specialists are heavily involved in the development phase, their involvement naturally fades to a minimum once the project is in production. If the development has followed an Agile methodology such as ABRD, where all the stakeholders worked together toward a real business solution, the foundation of the system should be solid enough to go through the production phase with minimal IT involvement.
Communication is central to the success of a BRMS project. The key first step to take is to establish a set of rule governance processes that formalizes the communication flow and content. The next step, for large and complex applications, is to form a Business Rules Management group who will initiate and control the communication between the different Business entities and involve IT when needed. For smaller or less complex applications, there should be at least one central person to foster the communication, and in most cases, get his hands dirty in the rule change implementations.
With such a pivotal role in supporting a BRMS application, Joe the Rules Plumber may well be worth more than the fateful $250K threshold...!
To the question: “How long will it take for one of your Rule Analysts, along with one of my SMEs, to harvest and implement about 500 business rules?”, the straight answer, off the top of my head, is: “Between 6 and 8 weeks!”
Since there is no sense of doubt or hesitation here, you may be curious and ask: “And how did you come up with that number?” I can choose between the following: “Just a WAG”, or better: “It’s a SWAG” or, my favorite: “It is based on ABRD!”
1: The WAG
This one starts with an unapologetic comeback, such as: “Well, and how did you come up with that 500 rules number?”. Given the vagueness of the initial question, my wild-assed guess on workload is certainly as good as the one that was used to count the rules.
Indeed, someone’s interpretation of what is called a rule can range from a single row among a group of 500 belonging to the same large decision table, to a whole paragraph from a business policy guideline document.
Besides the issue of defining the real number (and complexity) of rules, there is a whole set of other critical factors that impact the estimate. Among those are:
- The nature of the rule source material (interviews, documents, application code)
- The number of decision points included in the proposed batch of rules
- The pre-existence of a shared vocabulary, fact model, or business object model
- The distribution of the rules over multiple business entities
- The level of experience of the SME with rule harvesting
- The experience of the Rule Analyst with the business domain
- The overall experience of the team with object oriented analysis and design
While the WAG is a deserved casual answer to a vague and uninformed question, it doesn’t help either of the parties. Instead, proposing to execute a Discovery Workshop, a short 3-day engagement that is part of the inception phase of the ISIS methodology, is a great way to achieve at least two goals:
- Extract the business context needed to craft an informed estimate.
- Provide some education on rule set development methodology.
2: The SWAG
This time, the “scientific wild-assed guess” justification for the 6 to 8 weeks is based on average time estimates per rule, numbers coming from past project experience. The ISIS methodology offers the following durations (in minutes) for Business Rules (BR) and Decision Tables (DT).
A Decision Table is deemed complex if it has more than 50 rows. Other considerations, such as the number of columns involved, may be taken into account. For business rules, complexity is linked to whether the rule performs complex calculations, involves complex pattern matching, implements an exception, etc.
Armed with these numbers, we must also consider that the time taken to develop a ruleset is never a linear function of the number of rules. It is instead logarithmic, or more simply, follows the common 80/20 rule (or some other statistical distribution close to it), with 80% of the time spent on the first 20% of the rules. Finally, we can apply a distribution of 65% of simple BRs, 25% of complex BRs, 9% of simple DTs and 1% of complex DTs. This yields around 6 weeks for 100 rules (20% of the 500), and then 8 weeks for the whole set.
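As an illustrative sketch, that arithmetic fits in a few lines of Python. The per-rule durations below are hypothetical placeholders (the actual ISIS duration table is not reproduced in this post); only the rule-type mix and the 80/20 shape come from the text:

```python
# Hypothetical per-rule durations in minutes; the real ISIS table is
# not reproduced here, so these values are purely illustrative.
DURATIONS = {"simple_br": 20, "complex_br": 60, "simple_dt": 120, "complex_dt": 480}
# Rule-type mix cited in the post: 65/25/9/1 percent.
MIX = {"simple_br": 0.65, "complex_br": 0.25, "simple_dt": 0.09, "complex_dt": 0.01}

def linear_minutes(n_rules):
    """Naive linear total: weighted average per-rule time times count."""
    avg = sum(DURATIONS[k] * MIX[k] for k in MIX)
    return n_rules * avg

def pareto_estimate(n_rules, minutes_per_week=2400):
    """Apply the 80/20 shape: the first 20% of the rules consume 80% of
    the total effort.  Returns (weeks at 20% done, total weeks)."""
    total_weeks = linear_minutes(n_rules) / minutes_per_week
    return 0.8 * total_weeks, total_weeks

first, total = pareto_estimate(500)
```

With these placeholder durations, 500 rules come out at roughly 9 weeks total, with over 7 of them spent on the first 100 rules: the same order of magnitude as the 6-to-8-week answer, and the same front-loaded shape.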
In her “Scoping A Business Rules Harvesting Project” webinar, Gladys Lam presents the following baseline estimates for rule harvesting alone: 2 to 3 weeks to produce a fact model, 45 minutes to specify, analyze, trace and review a rule, and 2 weeks for a final review and business validation cycle. She also mentions that the first 50 to 100 rules will always take the lion's share of the time. Using these baseline numbers, 6 weeks gives us roughly between 50 and 100 harvested rules, with factors such as the ones listed previously influencing the estimate up or down.
So obviously, we are quite capable of justifying an estimate by relying only on a simple estimation chart. This chart, plus some additional insight on the factors listed in the previous “WAG” section, can yield a reasonable estimate.
However, I’m arguing that if we don’t back it up with a proven rule development methodology, this estimate does not have much operational value.
Rule development is inherently an agile and iterative activity. We certainly cannot stand by our time estimate if we decide to approach the development using a waterfall process where we sequentially start by defining the business object model, then harvest and document all the rules, and finally implement and test them.
3: Using ABRD
Overall, relying on a proper development methodology is more critical than focusing on the estimate. The Agile Business Rule Development (ABRD) methodology is aimed not only at harvesting the rules but, more importantly, at producing an executable (albeit incomplete) ruleset as early as possible. It is based on five cycles: Harvesting, Prototyping, Building, Integrating and Enhancing. Within the cycles, the activities are Discovery, Analysis, Authoring, Validation and Deployment. The fourth cycle, Integrating, which should occur roughly within 4 weeks of the start, sees the deployment of a first executable version of the ruleset.
At this point, only 40% to 50% of the rules are developed, but they are strategically distributed over the various tasks of the rule flow. Also, after the first deployment, there have been enough iterations on the Discovery, Analysis, Authoring and Validation activities to ensure that the business object model is solid and well-founded and that the existing rules are sound.
In 4 weeks, we thus have a solid and tangible (meaning executable) foundation for the ruleset. The ruleset then goes into the Enhancing cycle, when it is incrementally completed with the remaining rules.
As a conclusion, and to better play the estimate game, you may want to consider the following:
- Besides the pure size of a problem, producing a good estimate depends on multiple complex factors that need to be discovered in a short preliminary estimation workshop.
- The relevance of the problem size is diminished by the fact that the time it takes to develop a ruleset is not a linear function of the size of the ruleset.
- The estimate is close to meaningless if it is not attached to a specific agile development methodology, such as ABRD.
Let us know what your personal experience is on this software estimation topic applied to the business rule context!
A previous blog entry introduced the notion of tracking the position and speed of a software engagement rather than only the position. This post focuses on lessons one can learn from tracking the speed, and how this concept and some associated tools can be a very powerful help in Project Management.
While the position, or progress, of a software development project changes all the time, with results produced and specifications evolving, its speed tends to change much more slowly. It is interesting and often useful to understand what causes that speed to change, hence how one can work with it and influence it. This in turn helps in choosing the "safer route" mentioned in the previous blog.
Many Software Metrics are fickle at best (see for instance the recent article, Lies, Damned Lies and Project Metrics (Part 1)), which reinforces the interest of tracking trends (where individual errors get aggregated and masked) rather than positions (where the only two reliably accurate positions in a software project are the starting point, when nothing is done yet, and the end point, when the project is completed).
Some productivity factors, already described in The Mythical Man-Month, are well known. They include the effort involved in training a newcomer to a project team, or the influence of team size on the management effort, hence on the overall productivity of the team (the "speed" mentioned above).
Individual productivity also varies while working on a project. Tracking this productivity as such -and not as compared to the initial expectations- yields useful results, such as the definition of a longer initial iteration due to various "administrative" overheads, or realistic expectations on how long a given task-implementer combination will require to reach completion.
Changes in trends can be tracked and yield useful forecasting results. Indeed the earlier one notices changes in productivity, the earlier one can work with that, by finding:
- what lowered the productivity, and correcting this cause, or
- what increased the productivity, and reinforcing the consequences.
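As an illustration only (the figures and the helper name are hypothetical, not ISIS artifacts), detecting such a change in productivity trend can be sketched in a few lines:

```python
def productivity_trend(samples, window=3):
    """Compare the average of the last `window` productivity samples
    with the average of the preceding `window` samples.

    Returns a positive number if productivity is rising, a negative one
    if it is falling, and 0.0 when there is not enough history to tell."""
    if len(samples) < 2 * window:
        return 0.0
    recent = sum(samples[-window:]) / window
    earlier = sum(samples[-2 * window:-window]) / window
    return recent - earlier

# Tasks completed per week over eight weeks (made-up figures).
weekly_output = [5, 5, 6, 6, 4, 4, 3, 3]
drift = productivity_trend(weekly_output)
if drift < 0:
    print(f"Productivity dropped by {-drift:.1f} tasks/week - investigate the cause")
```

The point is not the arithmetic but the reflex: compare recent intervals to earlier ones, and react as soon as the difference is meaningful, in either direction.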
Some changes are to be expected, such as for a newcomer in a team, where, on top of the training effort imposed on the team, it typically takes two tasks to reach full productivity. As a consequence ISIS recommends that the first two tasks assigned to a newcomer to a team be short ones rather than long ones.
This person's productivity should show a significant jump between the second and the third task. If this jump has not occurred by the end of the third task, the Project Manager must investigate what the issue is, and in particular if:
- the consultant started with a very high productivity and should not be expected to improve that much, and this is OK, or
- the consultant started with relatively low productivity and
- there's little hope of improvement (and an alternate team member may have to be found), or
- improvement is just around the corner (and the next task should also be a short one).
ISIS requires tracking effort, and encourages the tracking, beyond effort, of other important aspects of an engagement, such as:
- the top risks,
- the number of defects (fixed, investigated, identified),
- the testing coverage,
- the response time.
The ISIS Test Plan template recommends tracking the defect arrival and resolution rates. These rates and their trends let the PM know well enough in advance when to expect the completion of each test phase.
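To make this concrete, here is a deliberately simple sketch (the function and its figures are hypothetical, not taken from the ISIS template): if defects are being fixed faster than they arrive, the open-defect count burns down linearly and the end of the test phase can be extrapolated.

```python
def defects_remaining_forecast(found_per_week, fixed_per_week, open_now):
    """Estimate how many weeks until the open-defect count reaches zero,
    assuming the current arrival and resolution rates hold.
    Returns None if defects arrive faster than they are fixed."""
    net_burn = fixed_per_week - found_per_week
    if net_burn <= 0:
        return None  # the test phase is not converging at current rates
    return open_now / net_burn

# Made-up figures: 4 new defects/week, 10 fixed/week, 18 currently open.
weeks = defects_remaining_forecast(found_per_week=4, fixed_per_week=10, open_now=18)
print(f"Test phase should complete in about {weeks:.0f} weeks")
```

A `None` result is itself valuable information: it tells the PM, weeks in advance, that the phase will never converge at the current rates.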
As another example, ISIS has a fairly complete list of standard risks that apply to projects ILOG PS is involved in, and gives ways (as well as initial estimates) to:
- mitigate, or
- accept them.
Tracking these risks, or at least the most influential ones (ISIS figuratively speaks of "the top ten risks", with practical advice on how to identify them), gives an important edge to the Project Manager, as it helps point out both the risks that don't move and those that keep increasing, and hence should be addressed before they become overly expensive, be it in terms of time or development resources. This will be further developed in another blog.
A useful spreadsheet-based effort-tracking template can be found in ISIS; its data-entry sheet is shown below, followed by the corresponding graph.
This template is filled in up to time interval 4, which the project had just reached. The corresponding graph gives:
The trends in the above graph show an expected end date for interval 10, at the intersection of the yellow (effort spent) and orange (work to be done) curves, with an ending effort of 78 instead of the original 61 or current 73.
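The intersection logic behind such a graph is straightforward to reproduce. The sketch below uses made-up snapshot figures (not those of the template above) and a hypothetical helper name; it extrapolates the two curves linearly from their last two points and finds where they cross:

```python
def intersect(spent, total):
    """Extrapolate two series linearly from their last two points and
    return (interval, effort) where they cross, or None if parallel.
    `spent` is cumulative effort spent; `total` is the current estimate
    of total effort (spent + remaining), which tends to drift upward."""
    s_rate = spent[-1] - spent[-2]    # effort burned per interval
    t_rate = total[-1] - total[-2]    # estimate growth per interval
    if s_rate == t_rate:
        return None
    # spent[-1] + s_rate*k == total[-1] + t_rate*k  ->  solve for k
    k = (total[-1] - spent[-1]) / (s_rate - t_rate)
    last = len(spent) - 1
    return last + k, spent[-1] + s_rate * k

# Hypothetical snapshots at intervals 0..4 (person-days).
effort_spent   = [0, 8, 16, 24, 32]
total_estimate = [61, 64, 67, 70, 73]
end_interval, end_effort = intersect(effort_spent, total_estimate)
```

Even this naive two-point extrapolation already answers the two questions that matter: when does the project actually end, and at what total effort.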
In brief, three recommendations from this blog:
- track trends, not just status;
- integrate a newcomer into a project with smaller tasks;
- track defects and risks, not just deliverables and effort.
Like on the road, check your position but your speed too, in addition to the other instruments on your dashboard!
Software lore is rife with Project Management analogies. One of my favorite compares tracking a software project to navigating the ocean by dead reckoning. Two commonalities emerge:
if you don’t know where you are going, you won’t know when you have arrived at your destination, and
if you don’t track your progress, you don’t know which way you are going.
Early ocean goers had no accurate way of locating their position or destination, or even of tracking their progress. This is quite similar to the situation of many current software project managers. Yet those ocean voyages had a much higher rate of success (i.e. reaching their destination with most of their goods sellable and most of the crew alive) than current software projects, where it is said that in fewer than half the cases the results are actually used.
Software Project Managers (PMs) can find some very relevant lessons in the experience of early ocean seafarers. I will develop two of what I see as the four most important ones in the next paragraphs:
Know where you want to go.
Track your progress, estimate where you are.
Track your speed, estimate where you are going.
Choose the best route.
I’ll skip for now points 1 and 4, leaving them for a later discussion and focus here on the tracking part of a software project.
Track your progress, estimate your position.
Both the time it took you to reach a certain point, and the location of this point count. Project tracking is naturally about time and expenditures, and number of remaining bugs, but also about delivered results.
As shown in the above screen shot, ISIS encourages the “earned value” tracking method, which gives good location estimates by focusing on what is actually delivered to the customer (the green curve in the top graph above), while the ISIS-required, properly filled time-sheets tell the project manager where the effort was spent (the red curve in the top graph above).
Tracking efforts and actual deliveries gives Project Managers an approximate snapshot of the project’s position. Yet PMs typically have a difficult time estimating what work is required to finish. They need to get help from the developers, who should report the efforts spent and what they see as the remaining effort, whether it was planned initially or not (“remains to be done”).
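The classic earned-value indices summarize this position snapshot in two numbers. The sketch below uses the standard textbook formulas with made-up figures; it is an illustration of the general technique, not of how ISIS itself computes its curves:

```python
def earned_value_report(planned_value, earned_value, actual_cost):
    """Classic earned-value indices: SPI < 1 means behind schedule,
    CPI < 1 means over budget. All three inputs are cumulative and
    in the same unit (e.g. person-days or currency)."""
    spi = earned_value / planned_value  # schedule performance index
    cpi = earned_value / actual_cost    # cost performance index
    return spi, cpi

# Hypothetical mid-project snapshot: 50 days of work planned by now,
# 40 days' worth actually delivered, 55 days of effort actually spent.
spi, cpi = earned_value_report(planned_value=50, earned_value=40, actual_cost=55)
print(f"SPI={spi:.2f}, CPI={cpi:.2f}")
```

An SPI of 0.80 and a CPI below 1 would tell this PM the project is both behind schedule and over budget, even though 40 days of value have been delivered.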
Track your speed, estimate where you’re going.
Project Managers can fairly easily tell their approximate position (based on what deliveries have been accepted by the customer on one hand and the time-sheets on the other); but they usually don’t take trends into account. Back to the initial analogy, being next to a shoal and drifting towards it is very different from being next to a shoal and drifting away from it!
ISIS recommends tracking position over time, e.g. through the integration of successive snapshots. This helps obtain an accurate estimate of the delivery speed, thus a good estimate of how long the remaining work in the project will take. This is a much better estimate than using the current position snapshot and betting the house that the rest of the project will go according to some initial plan.
Similarly ISIS recommends tracking the “speed” rather than snapshots of bugs and issues, as this helps predict when a test phase finishes.
The following graph (lifted from the ISIS documentation) gives a hint of how such a dynamic tracking can work:
Using the above graph, in which snapshots are taken at each vertical line, a project manager could have had a good idea of the overall project effort and duration by the 3rd snapshot, predicting an end of project around the 12th time interval, rather than either the 5th, as was the initial prediction, or the 6th, as the 3rd snapshot alone would suggest.
A future blog will develop the “Track your speed” aspects, and some of their unexpected consequences on effective mechanics of project management. Please send us your comments and suggestions!
As with a front-page article in a newspaper, the key challenge when posting on a blog is to find the right title. What is the opposite of agile? Looking at antonyms, you will find terms such as awkward, clumsy or stiff. They do not seem appropriate opposites for what this post is about: agile software development. Is non-agile software development necessarily an “awkward”, “clumsy” or “stiff” software process? Surely the proponents of traditional life cycles such as the waterfall approach would disagree. So, what is the difference really about?
A concept is often better explained by its opposites, or defined by negation: what it is not about. This is a way to put boundaries around an idea and understand it better. Reading Martin Fowler’s story about the origins of the agile movement, we see that the term was selected to highlight adaptability and response to change. And what is something neither adaptable nor responsive to change? Something rigid, isn’t it? Something lacking flexibility, lacking the ability for all the stakeholders to frequently renegotiate the scope to take new elements into account. Hence the title of this post: agility versus rigidity.
Let’s explore the agile software development concept and its relationship to ISIS, the ILOG methodology, further.
As the original source of the agile approach, the Manifesto for Agile Software Development establishes the following values for better software development:
- Value individuals and interactions over processes and tools
- Value working software over comprehensive documentation
- Value customer collaboration over contract negotiation
- Value responding to change over following a plan
Despite their clarity and simplicity, these criteria are quite generic, but they give us some good clues for evaluating whether a process is agile or not. If you look into the manifesto pages, you’ll see the principles behind the agile manifesto, which will help you in this understanding. If you keep surfing the Internet, searching for instance for agile software development in Google, you’ll get 1,340,000 results (an outstanding score, actually, which projects a false idea of the real, still limited, adoption of the agile approach to date).
Fortunately, you do not need to read all these pages to understand agility! Coming back to the source and definition by negation again, the article what’s agile, what’s not by Alistair Cockburn, one of the signatories of the manifesto, is a good reference to delimit agile software development.
Developing decision-support systems is an agile task by nature: humans outsmart machines in their ability to deal with change and to come up with new decisions, which is a pure demonstration of agility. In ISIS we promote and support the ideas presented in Cockburn’s article, making our methodology agile in many ways:
- It is iterative. It recommends short iterations (4 to 8 weeks). The outcome of an iteration is an "iteration release", which is a stable, integrated and tested subset of the complete system, a working piece of software.
- It promotes communication and collaboration. From application assessment to deployment, communication tools such as brainstorming, workshops, facilitation sessions, progress meetings, etc, involving the customer from the very beginning, are used.
- Testing is fundamental, ISIS recommends Test-Driven Development (TDD) and continuous testing from unit to acceptance.
- Continuous integration is accomplished through adequate and recommended tools to streamline and automate the process.
- ISIS provides a set of light, agile templates to produce effective documentation.
Through these various readings it appears more clearly that, to address the second question of this post, "not agile" software development is neither simply awkward, nor clumsy, nor stiff software development; it is simply too rigid. Although we understand the value of agility, we also understand that it is not always the best approach, either because of our client’s own methodology requirements or because of the special characteristics of the project (see picture below; criteria taken from Boehm and Turner’s book Balancing Agility and Discipline: A Guide for the Perplexed).
In practice, while on consulting engagements, we often have to align ourselves with the customer's existing methodology or process, which may not be agile yet. We then tailor our approach to leverage its founding principles and existing artifacts and promote agility wherever we can, to decrease the rigidity of existing processes and replace it with flexibility, and so increase business adoption of IT solutions.
Design patterns have been around for quite some time already but have we really paid enough attention to them?
In software design, a design pattern is a proven and reusable solution to a commonly occurring problem in a determined context.
The idea behind design patterns comes from Christopher Alexander’s book, A Pattern Language (1977), which talks about architectural patterns in buildings and towns. Ten years later, Kent Beck and Ward Cunningham started to apply the patterns to software development. It became popular when the Gang of Four’s book Design Patterns: Elements of Reusable Object-Oriented Software was published in 1994.
Wouldn’t it be great if we were able to apply the appropriate patterns in sequence to obtain the complete design for a software system? Well, we are not there yet… (Are we? See how JUnit was designed in JUnit A Cook's Tour). Design patterns provide quality, elegance, flexibility and reuse to the systems that use them, but they are intended as small-scale, partial solutions that ignore the big picture. Other concerns, such as system complexity, maintainability, performance and architectural issues, have to be taken into account (for this we have architectural patterns, but that could be another blog post). Also, there is no well-defined, systematic approach to applying design patterns, and the possible problem domains in software engineering are so wide that it is difficult to say we’ve already found all the design patterns necessary to solve them.
Despite all these limitations, design patterns have evident advantages, and it is surprising to see people heavily involved in software development who have never heard of them. Nowadays, design patterns can be found everywhere, even in specialized areas such as Java EE, with its recommended core patterns. Interestingly enough, these patterns, along with those from the Gang of Four, are an important part of the Sun certification for enterprise architects, SCEA, which clearly shows their relevance.
Design patterns are perfectly aligned with the ISIS principle of "not reinventing the wheel". They are recommended to ILOG consultants, and the methodology contains guidance on them.
The following picture shows the "ubiquitous" singleton pattern, such as it is presented in the ISIS guide for leveraging design patterns.
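Since the picture itself may not reproduce here, the following is a minimal Python rendering of the classic pattern; the ISIS guide's own presentation may differ in language and detail. The class guards its unique instance itself, so every instantiation returns the same object:

```python
class Singleton:
    """Classic singleton: the class itself guards its unique instance."""
    _instance = None

    def __new__(cls):
        # Create the instance on first use, then always hand back the same one.
        if cls._instance is None:
            cls._instance = super().__new__(cls)
        return cls._instance

a = Singleton()
b = Singleton()
assert a is b  # both names refer to the one and only instance
```

Note that this bare-bones version is not thread-safe; a production variant would guard the first instantiation with a lock.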
More generally, there are no good or bad design patterns; they are all useful in some context. Not all of them are relevant to every decision-support system, but most are, especially when you consider the other layers and services surrounding a decisioning application. Overall, they represent a key concept to leverage during design phases, and to keep in mind for identifying new patterns that can be reused from one project to another.
What about you? What do you think of design patterns? Do you normally use design patterns in your projects? Have you come up with new ones?
Discovering and analyzing your business policy change templates is one of the several key aspects documented in the ISIS methodology, and that must be addressed as part of defining your Rule Governance processes.
A BRMS component is brought into a company’s IT application mix because it facilitates the implementation of decision services, but also, and more importantly, because it helps manage their rapid evolution. It is thus surprising that the task of preparing the system and its supporting organization for business policy change management does not always get the attention it deserves.
Note that we are not talking here about preparing for change at the level of the business rule elements, which are the atomic building blocks of a policy. These are usually (or should be) well covered by generic rule governance processes for authoring, validation and deployment. We are talking about change at a macro-level, where it is expressed in terms of raw business policy statements instead of individual business rules. Think change to a paragraph of the underwriting policy used as the reference document by the loan underwriters, for example.
During the application analysis steps, the main questions to the business stakeholders revolve around capturing as accurately as possible the definition of the business policies that will be implemented by the system. However, when analyzed from a snapshot, the policy can appear monolithic, or prompt a decomposition along logical or system-related concepts that are not suited to the future need to accommodate business change.
Therefore, another critical task that must be addressed early on in the analysis process is focused on producing an inventory of the probable ways in which the policies may or will change, and with which frequency.
This requires the policy managers to reflect on their experience and come up with as many concrete examples as possible of discrete policy changes they have witnessed regularly in the past. These examples can then be arranged in a taxonomy of business policy change templates.
The most frequent changes are usually the simplest and most precisely described ones. The more complex and less frequent ones will often contain some unknowns. For example, the possible templates for a loan pricing application may be:
- Changing the base rate values (may occur weekly).
- Changing the add-on, minimum rate, or fee values (may occur monthly).
- Creating or retiring “specials” for selected regions or channels (may occur quarterly).
- Creating or retiring the pricing structure for a new product (may occur once or twice a year).
Of course, not all changes are predictable. Important and unpredictable ones come, for example, from external regulatory rules or from newly devised company strategies. But for the more banal and recurring ones that can be identified and dissected in advance, the benefits are multiple.
Some of the artifacts that can be prepared for a given business policy change template are:
- A template for change submission, which precisely describes the change and the different parameters involved.
- A process map to implement the change, detailing which rules should be updated, created, or deleted, in which package, whether an update to the rule flow is warranted, etc., and which specific resources are needed.
- An accurate time and effort estimate to implement this change, from authoring to deployment. The estimate is developed and agreed upon by both IT and business stakeholders, and helps in setting the release schedules and expectations.
- A set of rule templates to facilitate the implementation and reduce the risk of introducing defects.
- A precise test plan and set of test cases. And since the scope of the change is well defined, it is easier to apply techniques such as Delta Testing, which help get extensive test coverage and minimize the updates needed on test cases.
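Even a very simple machine-readable catalog of these templates pays off, because business and IT can then agree on effort and scheduling per change type up front. The sketch below is purely illustrative; its field names are hypothetical and not part of any ISIS artifact:

```python
from dataclasses import dataclass, field

@dataclass
class PolicyChangeTemplate:
    """One entry in the taxonomy of business policy change templates."""
    name: str
    frequency: str                 # how often this change is expected
    effort_days: float             # agreed estimate, authoring to deployment
    rule_packages: list = field(default_factory=list)  # packages touched

catalog = [
    PolicyChangeTemplate("Change base rate values", "weekly", 0.5,
                         ["pricing.rates"]),
    PolicyChangeTemplate("Create or retire a regional special", "quarterly", 3.0,
                         ["pricing.specials", "eligibility"]),
]

# Agreed-upon estimates make release scheduling a lookup, not a negotiation.
total_effort = sum(t.effort_days for t in catalog)
```

From such a catalog, the change-submission form, the process map and the test plan listed above can each be generated or at least pre-filled per template.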
For traditional software, requirement tracking is an important prerequisite to smooth system maintenance. It makes it possible to analyze the impact of a requirement change on the underlying implementation, and to plan the change accordingly.
Business rules management systems take maintenance to the next level and make it a standard activity. It should thus be expected that the efforts on requirement tracking are pushed accordingly to collect and analyze the patterns in requirement change.
To know more about Rule Governance, join me in a 1-hour webinar on May 28th (click here to register). As our mission states, we are here to help your business handle change and complexity, and rule governance is a key element in making sure you can attain such a goal!
I was invited to share some of our experience at the Informs Practice conference in Baltimore last month, more specifically on our methodology work to support successful implementations of decisioning systems.
Informs Practice is one of the two annual conferences of Informs, the Institute for Operations Research and the Management Sciences: a group extremely vested in building strategic decisioning systems, with thousands of practitioners around the world, and definitely worth spending a couple of days with. What impressed me the most is the ability of this community to address business needs with smart, pragmatic, mathematical and computer-based solutions. A great example of the benefits of Business Analysis, and a great lesson or inspiration for IT as a whole.
My goal was to leverage the experience we gained building both optimization and rule-based systems to generalize some fundamental best practices applicable to the implementation of decisioning systems in general, whatever technology is involved. When you look at such systems, here are some common characteristics that come to mind:
Yes, some decision processes are non-deterministic, and a decision can be reached through different paths. In other situations, users change their minds about the way they make decisions and want a computer to assist them in the process. That makes decisioning systems hard to test. In some situations, decisions are made on intangible elements, or intuition, a challenge for our usual way of modeling things and implementing heuristics. Or the human brain is just too powerful and outperforms the computer (e.g. visual pattern recognition, or optimized ways to allocate resources developed over years of experience). Some decisions are not black and white; some are based on fuzzy information; some others are really complex, either in terms of the number of variables involved or the type of search algorithm. Some users or policies require the decisions to be explained or traced. Some rules are unwritten. And to complicate things even further, decisioning systems are usually involved in processes subject to frequent change, for instance a change of policies (internal or external), strategy, organization or business model.
Such qualifiers clearly would not apply to a standard accounting or record-management application such as a CRM system, or an embedded real-time program such as in an ATM or your car's ABS.
Although technology, and object-oriented programming in particular, can help address many of these challenges, it is not the only solution. Project Management also has to adapt and leverage new paradigms to make sure these projects can cope with so many potential hurdles.
Here are our top-5 picks in ISIS which we will develop further in subsequent posts:
- Iterative development. Ultimately, you want to put a working system in the hands of your end-users every 4 to 6 weeks, so they can provide early feedback, avoid late discoveries of misunderstood needs, and help uncover mismatches with changing requirements. This is the key concept of the Unified Process, as well as of the Agile Manifesto: an approach which is unfortunately not yet natural in the IT industry, but clearly a plus in handling the challenges identified above.
- Focus on Time To Profit. It is key to define iterations that each bring some value to the business, to anchor the project on the right track, get acceptance and traction from the business, and solidify the business case of the solution under development: business-driven iterations, as opposed to the purely technical and/or contractual releases or builds we too often see in classical project life cycles such as the waterfall approach.
- Intelligent tracking. Most IT projects get in trouble because of unrealistic tracking. Week after week, you hear the usual "I'm 80% done" and the percentage stays the same... Even more so with decisioning systems, estimates need to be revised to account for changes and implementation challenges. Not only do you need realistic "pictures" (a project status at a given time), but you need to juxtapose these pictures to make a "movie" and highlight trends that will help you reset expectations or reevaluate the scope and objectives. In ISIS we have added charts to our weekly status report to make explicit any potential drift in workload estimates, or to track the quality of a deliverable through the evolution of issues found, analyzed and fixed.
- Project governance. Governance has definitely gotten a lot of spotlight since the Enron debacle, and has carried a lot of negative connotations for many people since then. However, increased scrutiny surely helped businesses avoid costly mistakes. There are all sorts of governance (e.g. corporate governance, IT governance, SOA governance, rule governance); project governance should be seen as a positive oversight, an assistance to the project manager to maximize the chances of success. In particular, a steering committee can help the project manager negotiate changes with the business or suggest mitigation actions to address implementation-related issues. ISIS provides checklists and guidelines to make such a governance process work for all the stakeholders.
- Risk management. We sometimes hear "if only we knew what we know", and this is indeed the title of a great book on knowledge management. Unfortunately, we actually know much more than we want to admit when we start an implementation. For instance, in ISIS, we have a tool to evaluate 193 pre-identified risks and 48 classical project management mistakes. Needless to say, with such a long list, the saying should become "if only we had known what we knew then..." Definitely worth spending time on a fair, transparent and upfront risk analysis.
These are only 5 of the many foundations of ISIS, but key ones to ensure success, as we have seen over the past 20 years and learnt from hundreds of projects. They do require quite some discipline, but the rewards of getting a decisioning system used for many years make them worth the effort!
The Agile Business Rules Development methodology (ABRD) is the industry's first free, vendor-neutral methodology delivered as an Eclipse Process Framework (EPF) OpenUp plug-in. ABRD provides a step-by-step process for developing business applications using technologies such as Business Rule Management Systems (BRMS), BPM and BPEL. It details all the activities needed to develop a rule set, from rule discovery to rule set deployment and maintenance.
It was designed 5 years ago and used in ILOG Professional Services worldwide on dozens of projects. Leveraging ABRD mitigates the risk associated with new business rules initiatives by providing a well documented and structured approach for developing rule-based applications. ABRD allows organizations to avoid using ad-hoc processes or having to expend significant time and effort creating their own best practices.
ABRD is an Eclipse plug-in, in the same way ISIS is. In fact, the following diagram outlines the relationships between ISIS, ABRD, and OpenUp.
OpenUp represents the base, the foundation, defining the core principles of software application development: collaborate to align interests and share understanding; balance competing priorities to maximize stakeholder value; focus on the architecture early to minimize risks and organize development; evolve to continuously obtain feedback and improve. ABRD is packaged as the abrd_openup plug-in and extends OpenUP. ISIS has a common plug-in to include any elements that are reusable across BRMS, Optimization, Financial Industry or SCM projects. Each plug-in in ISIS provides a container for knowledge management for a specific industry or practice.
I encourage you to mark your calendar for May 7th, this Wednesday, for a web seminar; register at https://iloginc.webex.com/iloginc/onstage/g.php?t=a&d=828310442&SourceId=00005 and go to www.agileitarchitecture.com for more information on ABRD, BPM, CEP and other agile technologies.
You can also get more information about ABRD by visiting the following web sites:
Let us know how ABRD works for you!
PS: here is a link to the recorded webinar.