Today's utility companies are driven to deliver superior service reliability to customers while controlling costs. The industry is struggling under the weight of technologies implemented decades ago and must modernize a mix of IT resources and existing technologies to meet current business and regulatory requirements for better visibility and control of power across the grid.
One area that presents particularly difficult challenges for utility companies today is Advanced Metering Systems (AMS) and the enabling software that sits atop a new smart grid architecture. Let's focus on that effort and discuss how iTKO LISA™ can help deliver this new capability with less risk of cost and timeline overruns.
Across the global energy industry, there is a push to optimize energy infrastructure both at a network and customer level. Governments have mandated efficiency initiatives that are funded both by customer billing and subsidy — totaling US$3.4B in government grant funds for smart grid projects in the United States and US$21.4B allocated worldwide in 2009 according to the US Government Accountability Office (GAO). Despite these initiatives for greater energy efficiency and transparency, ensuring reliable service delivery to customers remains the primary goal of the utility enterprise.
In this article we discuss:
- The challenges of modernizing smart meters and smart grids.
- How service virtualization facilitates the efficient development and testing of dynamic enterprise applications.
By capturing the behavior of deployed software assets, as well as by virtualizing the behavior of those not yet in existence, service virtualization can help development and QA teams in the utilities industry rein in dependencies and constraints in the software life cycle, thereby helping to ensure reliable services delivery.
Risks of modernizing the energy software infrastructure
Sample architectures consist of a large number of new and existing technologies that must be integrated to meet the demands of a distributed IT environment (as shown in Figure 1).
Figure 1. Utility IT architectures are complex
To make smart meter and smart grid initiatives work, utility providers must deal with an incredible amount of software interconnectedness and legacy enablement. Installing millions of new devices on the grid creates thousands of new integration points that utility companies never had to support in the past.
To make things more difficult, utility companies must deliver advanced metering and smart grid technology on a mandated, aggressive delivery schedule — and it absolutely must be reliable.
AMS and smart grid risk factors include:
- New endpoints and systems: One of our utility customers had to install 800,000 new smart meters within a first-year "pilot" window: a complex undertaking involving more than 600 different combinations of meter, firmware, and software configurations. And that is just for starters.
- Market deregulation: Since most utility markets in the US are deregulated, there are dozens of new retailers entering the market who differentiate their customer offerings not just on price, but on custom web-based systems and in-home units for managing energy usage. This creates many new customer use cases that must be validated.
- Legacy systems: The existing IT infrastructure of utility companies can be archaic, composed of components that were deployed 10 to 20 or more years ago and that were designed for coarse-grained data and control over the power grid, such as measuring average usage for billing. There is a move toward a more modern, multi-tier approach that would offer more visibility, flexibility, and control over energy usage patterns, but legacy systems are often difficult to adapt to these new resource needs.
When you push a new smart grid or AMS system live to meet deadlines, what happens if the system provides inaccurate data or grinds to a halt? If systems fail in production, it calls into question the performance and reliability of the whole network, which ultimately affects revenues or can place the utility provider at odds with regulatory bodies.
Challenges of smart meter/smart grid modernization
Utilities are moving from an IT environment of relatively low variability and data volume to one where every customer meter is now a networked computer that is pushing new data into the grid and propagating data to underlying legacy systems as a control point. There is an exponential increase in the number of technologies, calculations, and amounts of data that need to be processed by any individual energy company's IT systems and infrastructure.
Let's take a look at a very simplified view of the typical energy software architecture. Figure 2 highlights the dramatic, high-risk efforts required to get smart grid and smart meter initiatives off the ground and provide new levels of control and predictability.
Figure 2. How unavailable and incomplete systems can constrain delivery of new "smart" technologies
This simplified diagram shows the typical software architecture for an energy company and the resulting constraints on rolling out new technology.
How can your development group expect to deliver such a huge undertaking, practically overnight, with an interdependent environment of software that isn't fully built or available?
We find that utilities' IT teams are struggling with these delivery problems today:
- A high dependency on unavailable or constrained systems.
- Data complexity and variability that has an impact on integration and testing efforts.
- A drastic increase in transactions that affect systems across the entire grid.
- An inability to validate the system end-to-end for accuracy and service performance.
Let's look at these problems in more detail.
Problem 1: Increased dependencies
It is difficult for development teams to get their heads around a complex, distributed system to support new usage scenarios:
- One meter-to-customer transaction can translate across a dozen different message protocols and communicate with many systems of record and services that you may have little or no access to while you are developing and integrating the system to support AMS.
- Operations teams may limit access to critical systems, leaving your development teams with a two-hour window per week.
- Your system needs to communicate with third-party services or systems that are outside of your team's control.
- Some of the components you need to use may not be developed yet.
From a user perspective, development and QA teams need to validate thousands of transaction scenarios into the system — but every time one team uses the environment, it taints the data in the environment for other teams.
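One way to picture a virtual service is as a stand-in that replays recorded responses on behalf of a dependent system you cannot reach. The following minimal Python sketch is illustrative only, not LISA's actual mechanism; the service name and message fields are assumptions:

```python
# Minimal sketch of a "virtual service": canned responses stand in for a
# meter-data service that is unavailable or constrained during development.
# The operation names and response fields below are hypothetical.

RECORDED_RESPONSES = {
    # (operation, meter_id) -> response captured from the real system
    ("read_usage", "MTR-1001"): {"meter_id": "MTR-1001", "kwh": 12.4, "status": "OK"},
    ("read_usage", "MTR-1002"): {"meter_id": "MTR-1002", "kwh": 7.9, "status": "OK"},
}

def virtual_meter_service(operation, meter_id):
    """Respond as the real meter-data service would, without touching it."""
    response = RECORDED_RESPONSES.get((operation, meter_id))
    if response is None:
        # Unknown request: mimic the real system's error contract
        return {"meter_id": meter_id, "status": "UNKNOWN_METER"}
    return response
```

Because the stand-in is isolated per team, one team's test traffic can no longer taint another team's data, and the "two-hour window per week" constraint on the live system disappears for routine development work.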
Problem 2: Data complexity
Utility IT teams have to build software against live systems and volatile transaction patterns; a major side effect of that is the difficulty of modeling realistic, robust test data to work with throughout the software life cycle:
- The volatility of customer and energy transaction data, as well as the need to follow stateful transactions across live systems, makes it seemingly impossible to automate the testing of AMS implementations against those live applications.
- Sensitive customer account information and live transaction data needs to be controlled and masked from testing and development teams in order to avoid regulatory compliance violations and data corruption issues.
- Teams often spend 60 percent of the testing life cycle or more setting up and tearing down data, emailing or calling other teams on the phone to validate or reset specific data scenarios, and so on. This level of inefficiency and cost is unacceptable.
We need to be able to mitigate the impact of test data on live systems and on other software teams invested in the architecture.
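Masking sensitive fields before test data reaches development teams can be sketched simply. The field names below are hypothetical; the key idea is that a deterministic hash keeps masked values stable across test runs, so stateful scenarios that correlate on account ID still line up after masking:

```python
import hashlib

def mask_record(record):
    """Return a copy of a customer record with sensitive fields replaced.

    Field names ("account_id", "customer_name") are assumptions for
    illustration. A deterministic SHA-256-derived token means the same
    raw account always maps to the same masked value.
    """
    token = hashlib.sha256(record["account_id"].encode("utf-8")).hexdigest()[:8]
    masked = dict(record)
    masked["account_id"] = "ACCT-" + token
    masked["customer_name"] = "CUSTOMER-" + token
    return masked  # non-sensitive fields (e.g. usage) pass through untouched
```

Running the same record through twice yields the same masked output, which is what lets automated, stateful test scenarios survive the masking step.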
Problem 3: Increased transaction volumes
The addition of thousands of new smart meter endpoints to a smart grid implementation that is usually still a work in progress creates a great deal of uncertainty about system reliability:
- What happens if large numbers of meters go down? Will they accurately report the outage? How will the grid respond if millions of meters fire off hourly usage data into the system? What impact would tens of thousands of Transactions Per Second (TPS) have on the infrastructure?
- We have found companies literally nailing several physical meters to a board and running them against the network, or hand-coding fake "stub" software to simulate meters, in an attempt to test system reliability under assault by new smart meter endpoints. But those practices simply don't scale to account for all the possible usage scenarios and transaction volumes needed.
Traditional utility IT infrastructures have never before had to account for this level of usage.
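A high-volume simulation does not require physical meters; synthetic endpoints can be generated in software. The sketch below is illustrative only (the message fields are assumptions), producing hourly usage messages for an arbitrary number of virtual meters:

```python
import random

def simulate_meter_readings(n_meters, hours=1, seed=42):
    """Yield synthetic hourly usage messages for n_meters virtual meters.

    A software stand-in for a board of physical test meters; the message
    fields (meter_id, hour, kwh) are hypothetical.
    """
    rng = random.Random(seed)  # fixed seed: load scenarios are reproducible
    for m in range(n_meters):
        for hour in range(hours):
            yield {
                "meter_id": "MTR-%06d" % m,
                "hour": hour,
                "kwh": round(rng.uniform(0.1, 5.0), 2),
            }

# Feed, say, 10,000 virtual meters into the system under test:
messages = list(simulate_meter_readings(10_000))
```

Scaling from ten meters to ten thousand is a parameter change rather than a trip to the hardware lab, which is what makes TPS questions like the ones above answerable before go-live.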
Problem 4: Inability to validate the end-to-end system
The many technology layers and interdependencies across modern utility systems increase the difficulty and cost of validating reliable outcomes from software:
- Manual testing, either at a user-interface level or at single endpoints, is the rule of the day in the utilities industry. These testing approaches provide very little visibility into root causes of problems.
- Validation is also typically a manual task; a tester can spend time trying to verify outcomes on the phone and manually looking for the expected data results in a system of record.
- False positives and failures are common. For instance, a website or service might "confirm" that a transaction happened even though there was no actual state change at the head end.
Teams need to be able to confirm, in an automated fashion, that an end-to-end command was correctly processed and that the data was delivered at each middle-tier layer according to expected policies.
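Automating that confirmation amounts to checking the recorded state for a command at every tier rather than trusting the front-end acknowledgement. A minimal sketch follows; the tier names and the "PROCESSED" state value are assumptions for illustration:

```python
def validate_end_to_end(command_id, tiers):
    """Verify a command caused the expected state change at every tier.

    `tiers` maps a tier name (e.g. head end, meter data management,
    billing) to a lookup function returning that tier's recorded state
    for the command. Returns a list of (tier, state) failures; an empty
    list means the command was verified end to end.
    """
    failures = []
    for name, lookup in tiers.items():
        state = lookup(command_id)
        if state != "PROCESSED":
            failures.append((name, state))
    return failures
```

With this shape of check, a front-end "confirmed" reply paired with a "PENDING" state at the billing tier surfaces as a concrete failure with a named tier, instead of a false positive a tester must chase down by phone.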
Solution: Virtualize dependencies, validate reliability
Service virtualization — a solution to the off-premise, off-cloud software constraints of utility companies — takes the next step beyond hardware virtualization by virtualizing the behavior, performance, and data of dependent services and applications. Multiple teams can use this practice to virtualize the behavior of dependent applications, automate data scenarios, and construct testable models, from specifications, of systems that do not yet exist.
Service virtualization overcomes limitations of hardware virtualization by:
- Providing 24/7 access to the service endpoint on your terms.
- Removing system and software capacity constraints.
- Addressing data volatility across distributed systems.
- Reducing or eliminating the cost of invoking third-party systems for non-production use.
Service virtualization, used to simulate constrained and/or unavailable components, allows IT teams in the utilities industry to deliver against tighter timelines with less risk and lower total project costs (Figure 3).
Figure 3. By automating development and test dependencies in the software life cycle, you can eliminate utility constraints
Development and QA teams in the utilities industry can leverage service virtualization to reduce dependencies and constraints in the software life cycle that can delay projects and hinder application performance. Table 1 shows how this practice can be employed to meet the challenges of today's smart meter/smart grid implementations:
Table 1. Practices to parry the problems
| Challenge | Service virtualization practice |
| --- | --- |
| Dependency on unavailable or constrained systems | Virtual service environments allow development and testing teams to virtualize the behavior of constrained services and components by capturing the data and context of transactions, offering a virtual service that looks and acts just like the real thing for development and testing purposes throughout the software life cycle. |
| Data complexity and variability | Test data management using virtual services. By capturing the complexity of data transactions across the system, teams get robust, stable virtual data scenarios with valid responses that realistically simulate the complex behaviors of customer sessions, stateful transactions, times and dates, and more. Key customer or system data must be desensitized or obscured during development and test so that security and privacy policies are not compromised. |
| Increased transaction volumes | Virtualized performance testing configures a high-volume array of virtual meters that push thousands of TPS into the system, or simulates intermediate components such as meter control and meter data management systems as they perform under load. Using this method, teams know they can scale their systems to the traffic needed for AMS initiatives. |
| Inability to validate end-to-end | Continuous build and validation services enable the end-to-end testing of service-based and composite applications. You can natively test complex end-to-end workflows and directly validate that the expected behaviors are occurring at each component that contributes to a given business process. |
The benefits of service virtualization and continuous validation
These examples are limited to AMS and smart grid initiatives and do not represent all of the utility customer scenarios where service virtualization and continuous validation can provide significant benefits. Nonetheless, the challenges of rolling out new functionality in complex, changing environments can be mitigated. Within the AMS and smart grid category, joint customers have achieved very compelling benefits in a short period of time. Customers have:
- Eliminated system constraints for parallel development:
- Increasing software functionality delivered and tested by up to 68 percent.
- Achieving faster time-to-market (10 weeks earlier delivery within a 3-month cycle).
- Achieved automation and 24/7 availability of test data scenarios:
- Eliminating 80 percent of test data management costs within the first two weeks.
- Stabilizing volatile data and eliminating setup/teardown efforts.
- Simulated the impact of millions of new endpoints:
- Automating delivery of a high-volume simulation environment.
- Reducing up to 90 percent of test lab environment creation costs.
- Enabled continuous validation with fewer failures:
- Achieving complete transparency of end-to-end workflows.
- Increasing customer satisfaction and incurring fewer penalties, thanks to improved service-level performance and quality.
Utility companies are challenged to modernize IT resources and existing technologies to meet current business and regulatory requirements. In this article, we have shown how service virtualization facilitates the development and testing of dynamic enterprise applications for AMS, and of the enabling software that sits atop a new smart grid architecture, to help ensure reliable and efficient services delivery.
- Learn about the LISA environment for development and test clouds.
- LISA Solutions for IBM help optimizing the life cycle of development, testing, integration, and cloud-based delivery.
- The IBM Smart Business Development and Test on the IBM Cloud site is your place to see how to start developing your applications for the cloud.
- In the developerWorks cloud developer resources, discover and share knowledge and experience of application and services developers building their projects for cloud deployment.
Get products and technologies
- The iTKO LISA Group is a community for members of the developerWorks community who are interested in service virtualization and software validation on the IBM Development and Test cloud.