Service virtualization and validation practices for the utility industry

Eliminating software constraints and mitigating IT risk to modernize energy delivery

The utilities industry must modernize IT resources and existing, antiquated technologies to meet current business and regulatory requirements and gain better visibility and control of power across the grid. A particular challenge is Advanced Metering Systems. In this article the authors demonstrate how service virtualization can help development and QA teams to meet the challenges of today's smart meter and smart grid implementations.


Jason English, Vice President, Marketing Communications, iTKO

Jason English brings years of experience in executing marketing plans and designing customer-facing business processes for technology and consumer companies such as HP, IBM, EDS, Delphi, TaylorMade, Sun, Realm, Adaptec, Motorola, and Sprint. As Executive Producer of the in2action interactive consulting unit at i2 Technologies, he was responsible for outbound messaging during a period of extreme growth, as well as working directly with clients to build easy-to-learn workflows and front-ends to B2B collaboration systems. Prior to that, he served as one of the first "Information Architects," managing customer experience for Fortune 500 clients. He has also scored and designed several internationally released computer games in addition to producing conventional advertising and television commercials.

Chris Kraus, LISA Product Manager and Director, EMEA Practice, iTKO

Chris Kraus is an expert testing and architecture strategist with a 17-year background in computer software development, product management, and sales support for enterprise software. As product manager for iTKO's LISA SOA testing software suite, Chris applies both project management and development experience to refine LISA features to best suit customer quality needs throughout the software delivery life cycle. Chris was previously a retail and manufacturing industry manager at the enterprise software platform firm webMethods, overseeing requirements, customer presales, and training for a US$16M annual business. At supply chain software provider i2 Technologies he worked in i2's infrastructure group with responsibility for the release of business process, workflow, and monitoring engines. Prior to i2, at Software AG, Chris specialized in cross-platform product installation and administration. As a software engineer, project manager, and solution architect, Chris has a great breadth of industry knowledge from working with companies like Citi, TD Ameritrade, Lenovo, Tandy, Rubbermaid, TI, and TxDoT to ensure quality application delivery life cycles.

02 September 2010


Today's utility companies are driven to deliver superior service reliability to customers while controlling costs. The industry is struggling under the weight of technologies implemented decades ago and must modernize a mix of IT resources and existing technologies to meet current business and regulatory requirements for better visibility and control of power across the grid.

One specific area that presents the biggest challenges for utility companies today is in Advanced Metering Systems (AMS) and the enabling software that sits atop a new smart grid architecture. Let's focus on that specific effort and discuss how iTKO LISA™ can help deliver this new capability with less risk of cost and timeline overruns.

Across the global energy industry, there is a push to optimize energy infrastructure both at a network and customer level. Governments have mandated efficiency initiatives that are funded both by customer billing and subsidy — totaling US$3.4B in government grant funds for smart grid projects in the United States and US$21.4B allocated worldwide in 2009 according to the US Government Accountability Office (GAO). Despite these initiatives for greater energy efficiency and transparency, ensuring reliable service delivery to customers remains the primary goal of the utility enterprise.

In this article we discuss:

  • The challenges of modernizing smart meters and smart grids.
  • How service virtualization facilitates the efficient development and testing of dynamic enterprise applications.

By capturing the behavior of deployed software assets, as well as by virtualizing the behavior of those not yet in existence, service virtualization can help development and QA teams in the utilities industry rein in dependencies and constraints in the software life cycle, thereby helping to ensure reliable services delivery.

Risks of modernizing the energy software infrastructure

Sample architectures consist of a large number of new and existing technologies that must be integrated to meet the demands of a distributed IT environment (as shown in Figure 1).

Figure 1. Utility IT architectures are complex
Utility IT architectures are complex

To make smart meter and smart grid initiatives work, utility providers must deal with an incredible amount of software interconnectedness and legacy enablement. Installing millions of new devices on the grid causes thousands of new integration activities that utility companies never needed to support in the past.

To make things more difficult, utility companies must deliver advanced metering and smart grid technology on a mandated, aggressive delivery schedule — and it absolutely must be reliable.

AMS and smart grid risk factors include:

  • New endpoints and systems: One of our utility customers had to install 800,000 new smart meters within a first-year "pilot" window: a complex undertaking involving more than 600 different combinations of meter, firmware, and software configurations. And that is just for starters.
  • Market deregulation: Since most utility markets in the US are deregulated, there are dozens of new retailers entering the market who differentiate their customer offerings not just on price, but on custom web-based systems and in-home units for managing energy usage. This creates many new customer use cases that must be validated.
  • Legacy systems: The existing IT infrastructure of utility companies can be archaic, composed of components deployed 10 to 20 or more years ago and designed for coarse-grained data and control over the power grid, such as average-based metering and billing. There is a move toward a more modern, multi-tier approach that offers more visibility, flexibility, and control over energy usage patterns, but legacy systems are often difficult to adapt to those resource needs.

When you push a new smart grid or AMS system live to meet deadlines, what happens if the system provides inaccurate data or grinds to a halt? Failures in production call into question the performance and reliability of the whole network, which ultimately has an impact on revenues and can place the utility provider at odds with regulatory bodies.

Challenges of smart meter/smart grid modernization

Utilities are moving from an IT environment of relatively low variability and data volume to one where every customer meter is now a networked computer that is pushing new data into the grid and propagating data to underlying legacy systems as a control point. There is an exponential increase in the number of technologies, calculations, and amounts of data that need to be processed by any individual energy company's IT systems and infrastructure.

Let's take a look at a very simplified view of the typical energy software architecture. Figure 2 highlights the dramatic, high-risk effort required to get smart grid and smart meter initiatives off the ground and provide new levels of control and predictability.

Figure 2. How unavailable and incomplete systems can constrain delivery of new "smart" technologies
How unavailable and incomplete systems can constrain delivery of new 'smart' technologies

This simplified diagram shows the typical software architecture for an energy company and the resulting constraints on rolling out new technology.

How can your development group expect to deliver such a huge undertaking, practically overnight, with an interdependent environment of software that isn't fully built or available?

We find that utilities' IT teams are struggling with these delivery problems today:

  1. A high dependency on unavailable or constrained systems.
  2. Data complexity and variability that has an impact on integration and testing efforts.
  3. A drastic increase in transactions that affect systems across the entire grid.
  4. An inability to validate the system end-to-end for accuracy and service performance.

Let's look at these problems in more detail.

Problem 1: Increased dependencies

It is difficult for development teams to get their heads around a complex, distributed system to support new usage scenarios:

  • One meter-to-customer transaction can translate across a dozen different message protocols and communicate with many systems of record and services that you may have little or no access to while you are developing and integrating the system to support AMS.
  • Operations teams may limit access to critical systems, leaving your development teams with a two-hour window per week.
  • Your system needs to communicate with third-party services or systems that are outside of your team's control.
  • And some of the components you need to use may not be developed yet.

From a user perspective, development and QA teams need to validate thousands of transaction scenarios against the system — but every time one team uses the environment, it taints the data in the environment for other teams.

Problem 2: Data complexity

Utility IT teams have to build software against live systems and volatile transaction patterns; a major side effect of that is the difficulty of modeling realistic, robust test data to work with throughout the software life cycle:

  • The volatility of customer and energy transaction data, as well as the need to follow stateful transactions across live systems, makes it seemingly impossible to automate the testing of AMS implementations against the live applications.
  • Sensitive customer account information and live transaction data needs to be controlled and masked from testing and development teams in order to avoid regulatory compliance and data corruption issues.
  • Teams often spend 60 percent of the testing life cycle or more setting up and tearing down data, emailing or calling other teams on the phone to validate or reset specific data scenarios, and so on. This level of inefficiency and cost is unacceptable.

We need to be able to mitigate the impact of test data on live systems and on other software teams invested in the architecture.
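To make the masking idea concrete, here is a minimal sketch; the field names (`account_id`, `name`, `usage_kwh`) are invented for the example rather than taken from any real AMS schema. Hashing identifying fields with a fixed, test-only salt keeps the masking deterministic, so the same customer always maps to the same pseudonym across stateful test scenarios while real identities never reach the test environment:

```python
import hashlib

def mask_account(record, secret="test-only-salt"):
    """Return a copy of a customer record with identifying fields pseudonymized.

    A fixed salt makes the masking deterministic: the same customer always
    maps to the same fake ID, so stateful test scenarios that follow a
    customer across systems still line up run after run.
    """
    masked = dict(record)
    for field in ("account_id", "name", "address"):
        if field in masked:
            digest = hashlib.sha256(
                (secret + str(masked[field])).encode()
            ).hexdigest()[:10]
            masked[field] = f"MASKED-{digest}"
    return masked

original = {"account_id": "A-100234", "name": "J. Smith", "usage_kwh": 412.7}
safe = mask_account(original)
# Usage data survives intact; identifying fields are pseudonymized.
```

Determinism is the design point here: if each test run masked the same customer differently, scenarios that follow a customer across systems would stop matching between runs and teams would be back to manual data resets.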

Problem 3: Increased transaction volumes

The addition of thousands of new smart meter endpoints to a smart grid implementation that is usually still a work in progress creates a great deal of uncertainty about system reliability:

  • What happens if large numbers of meters go down? Will they accurately report the outage? How will the grid respond if millions of meters fire off hourly usage data into the system? What impact would tens of thousands of Transactions Per Second (TPS) have on the infrastructure?
  • We have found companies literally nailing several physical meters to a board and running them against the network, or hand-coding fake "stub" software to simulate meters, in order to test system reliability under assault by new smart meter endpoints. But those practices just don't scale to account for all the possible uses and transaction volumes needed.

Traditional utility IT infrastructures have never before had to account for this level of usage.
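As a rough illustration of what "virtual meters" replace the board-of-physical-meters approach with, the toy sketch below generates a burst of simulated readings entirely in-process. The meter IDs, message fields, and sink callback are invented for the example; a real load-testing tool would push these messages over the grid's actual protocols rather than into a Python list:

```python
import random
import time

def virtual_meter(meter_id):
    """Produce one simulated hourly usage reading for a virtual meter."""
    return {
        "meter_id": meter_id,
        "timestamp": time.time(),
        "usage_kwh": round(random.uniform(0.1, 3.5), 3),
    }

def burst(n_meters, sink):
    """Simulate n_meters all firing a reading at once,
    as would happen after a mass outage restore."""
    for i in range(n_meters):
        sink(virtual_meter(f"M-{i:07d}"))

received = []
start = time.perf_counter()
burst(100_000, received.append)
elapsed = time.perf_counter() - start
print(f"{len(received)} readings generated in {elapsed:.2f}s")
```

Scaling `n_meters` up, and swapping `received.append` for a network client, turns the same skeleton into a rough probe of how many transactions per second the downstream infrastructure can actually absorb.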

Problem 4: Inability to validate the end-to-end system

The many technology layers and interdependencies across modern utility systems increase the difficulty and cost of validating reliable outcomes from software:

  • Manual testing, either at a user-interface level or at single endpoints, is the rule of the day in the utilities industry. These testing approaches provide very little visibility into root causes of problems.
  • Validation is also typically a manual task; a tester can spend time trying to verify outcomes on the phone and manually looking for the expected data results in a system of record.
  • False positives and failures are common. For instance, you might have a website or service that "confirms" that a transaction happened when there was no actual state change at the head end.

Teams need to be able to confirm, in an automated fashion, that an end-to-end command was correctly processed and that the data was delivered at each middle-tier layer according to expected policies.
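One way to automate that confirmation is to treat each tier as a state lookup and assert the command's effect at every one of them, instead of trusting the front end's "confirmed" response. The sketch below is a minimal illustration with invented tier names; simple dicts stand in for real queries against the head end, the meter data management system, and the billing system of record:

```python
def validate_end_to_end(command, tiers):
    """Check that a command's effect is visible at every tier.

    `tiers` maps tier names to callables that return the current state
    for the command's meter. An empty result means the command truly
    propagated end to end; otherwise each entry names a tier whose
    state disagrees with what was expected.
    """
    failures = []
    for name, read_state in tiers.items():
        state = read_state(command["meter_id"])
        if state != command["expected_state"]:
            failures.append((name, state))
    return failures

# Dict-backed fakes standing in for real tier queries.
head_end = {"M-42": "DISCONNECTED"}
mdm = {"M-42": "DISCONNECTED"}
billing = {"M-42": "CONNECTED"}  # stale: the command never reached billing

cmd = {"meter_id": "M-42", "expected_state": "DISCONNECTED"}
problems = validate_end_to_end(cmd, {
    "head_end": head_end.get,
    "meter_data_mgmt": mdm.get,
    "billing": billing.get,
})
print(problems)  # the billing tier still shows the old state
```

In this example the front end could happily report "confirmed" (the head end did change state), yet the automated check still catches the stale billing tier — exactly the class of false positive described above.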

Solution: Virtualize dependencies, validate reliability


iTKO, a member of the IBM Partner Ecosystem, helps customers move enterprise applications into the cloud. The LISA virtualization and validation software optimizes complex applications throughout the software life cycle, eliminating costly constraints and defects while improving agility in an environment of constant change.

The iTKO LISA Virtualize, Test, Validate, and Pathfinder solutions eliminate dependencies and increase the reliability of distributed, modern applications that leverage cloud computing, SOA, BPM, and integration suites.


Service virtualization, a solution to the off-premise, off-cloud software constraints of utility companies, takes the next step past hardware virtualization by virtualizing the behavior, performance, and data of dependent services and applications. Multiple teams can use this practice to virtualize the behavior of dependent applications, automate data scenarios, and construct testable models of not-yet-built systems from their specifications.

Service virtualization addresses limitations that hardware virtualization cannot, such as:

  • Providing 24/7 access to the service endpoint on your terms.
  • Removing system and software capacity constraints.
  • Addressing data volatility across distributed systems.
  • Reducing or eliminating the cost of invoking third-party systems for non-production use.
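The capture-and-replay idea at the heart of this practice can be sketched in a few lines. The request format and the lambda standing in for a live meter head end are invented for the illustration, and a product like LISA does far more (stateful conversations, protocol handling, data variation) than this toy class:

```python
class VirtualService:
    """Minimal record/replay stand-in for a constrained dependency.

    In record mode it forwards requests to the live system and stores the
    responses; in replay mode it answers from the recording, so teams can
    develop and test without the real endpoint being available.
    """
    def __init__(self, live_call=None):
        self.live_call = live_call
        self.recordings = {}

    def record(self, request):
        response = self.live_call(request)
        self.recordings[request] = response
        return response

    def replay(self, request):
        try:
            return self.recordings[request]
        except KeyError:
            raise KeyError(f"No recorded response for {request!r}")

# Record once against the "live" system (here, a fake head end)...
live = VirtualService(live_call=lambda req: {"status": "OK", "echo": req})
live.record("READ meter=M-0000042")

# ...then replay later with no live system attached at all.
offline = VirtualService()
offline.recordings = live.recordings
print(offline.replay("READ meter=M-0000042"))
```

The two-hour-a-week access window described earlier is exactly where this pays off: the scarce live system is touched only during recording, and every team thereafter develops and tests against the always-available replay.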

Service virtualization, used to simulate constrained or unavailable components, allows IT teams in the utilities industry to deliver against tighter timelines with less risk and lower total project costs (Figure 3).

Figure 3. By automating development and test dependencies in the software life cycle, you can eliminate utility constraints
By automating development and test dependencies in the software life cycle, you can eliminate utility constraints

Development and QA teams in the utilities industry can leverage service virtualization to reduce dependencies and constraints in the software life cycle that can delay projects and hinder application performance. Table 1 shows how this practice can be employed to meet the challenges of today's smart meter/smart grid implementations:

Table 1. Practices to parry the problems
  • Dependency on unavailable or constrained systems: Virtual service environments allow development and testing teams to virtualize the behavior of constrained services and components by capturing the data and context of transactions and offering a virtual service that looks and acts just like the real thing for development and testing purposes throughout the software life cycle.
  • Data complexity and variability: Test data management using virtual services. By capturing the complexity of data transactions across the system, teams get robust, stable virtual data scenarios with valid responses that realistically simulate the complex behaviors of customer sessions, stateful transactions, times and dates, and more. You will need to desensitize or obscure key customer or system data during development and test so that security and privacy policies are not compromised.
  • Increased transaction volumes: Virtualized performance testing configures a high-volume array of virtual meters that push thousands of TPS into the system, or simulates intermediate components such as meter control and meter data management systems as they perform under load. Using this method, teams know they can scale their systems to the traffic needed for AMS initiatives.
  • Inability to validate end-to-end: Continuous build and validation services enable the end-to-end testing of service-based and composite applications. You can natively test complex end-to-end workflows and directly validate that the expected behaviors are occurring at each component that contributes to a given business process.

The benefits of service virtualization and continuous validation

These examples are limited to AMS and smart grid initiatives and do not represent all of the utility customer scenarios in which service virtualization and continuous validation can provide significant benefits. Nonetheless, they show that the challenges of rolling out new functionality in complex, changing environments can be mitigated. Within the AMS and smart grid category, joint customers have achieved very compelling benefits in a short period of time. Customers have:

  • Eliminated system constraints for parallel development:
    • Increasing software functionality delivered and tested by up to 68 percent.
    • Achieving faster time-to-market (10 weeks earlier delivery within a 3-month cycle).
  • Achieved automation and 24/7 availability of test data scenarios:
    • Eliminating 80 percent of test data management costs within first two weeks.
    • Stabilizing volatile data and eliminating setup/teardown efforts.
  • Simulated the impact of millions of new endpoints:
    • Automating delivery of a high-volume simulation environment.
    • Reducing up to 90 percent of test lab environment creation costs.
  • Enabled continuous validation with fewer failures:
    • Achieving complete transparency of end-to-end workflows.
    • Increasing customer satisfaction and incurring fewer penalties thanks to better service-level performance and quality.

Utility companies are challenged to modernize IT resources and existing technologies to meet current business and regulatory requirements. In this article, we have shown how service virtualization facilitates the development and testing of dynamic enterprise applications for AMS and for the enabling software that sits atop a new smart grid architecture, helping to ensure reliable and efficient services delivery.


