Improving the quality of WebSphere Commerce customizations

An in depth look at coding and unit testing

WebSphere Commerce leads the marketplace in its ability to integrate with a wide range of systems. With that in mind, the demand to develop customized solutions is intense and rewarding. To draw a parallel to Olympic sports, a WebSphere Commerce project has a defined development phase that must perform within a specific time period to produce deliverables, much like a team of athletes delivering a performance to win a gold medal. A team of Olympic athletes trains for many years to achieve success within a specific time frame and for a single moment: competition day. Olympic athletes study the importance of peak performance and team perfection, a level of perfection that sometimes cannot be reached again.

When a WebSphere Commerce project begins its development phase, everything needs to fall into place. Even if every team member fulfills their role admirably, the project can still accumulate too many issues to finish well. And once the development team reaches its peak, there is no turning back: any bugs that remain will haunt them. This article looks at ways to achieve a successful project delivery, reach peak performance, and ensure a high quality deliverable by minimizing bugs.

We all know how hard Olympic athletes train and prepare. A development team, by contrast, rarely has time to train or prepare: once the project is set in motion, the team is expected to perform. The lead architect establishes the standards and best practices that the development team will follow. If the team faces a demanding project with a lot of customized code, any new standards or best practices are difficult to introduce.

Bulletproofing Web Applications states that there is resistance to implementing anything that adds overhead to the development process already in place. The best solution to this problem is to find the simplest process with the lowest implementation cost that provides immediate results. Regardless of resistance, projects need to plan in a way that ensures the team achieves peak performance. For WebSphere Commerce customization projects, peak performance means minimized bugs.

This article focuses on the following areas to plan and prepare for a strong peak performance:

  • Perform unit testing activities.
  • Perform code review activities.
  • Enforce coding standards.
  • Practice defensive programming techniques, such as:
    • Anticipate failures and test for errors.
    • Notify when errors occur and perform damage-control actions.
    • Use programming techniques, such as validating user input and embedding debug support.
  • Perform debugging and logging.

Performing unit testing activities

This section investigates unit testing activities as a means to minimize bugs, which helps the development team perform efficiently throughout the development phase and participate fully in the project as a whole.

The development team's performance suffers if it is not prepared to handle the volume of bugs exposed in later stages of the project. Peak performance must be achieved within a limited time frame. Therefore, this article focuses first on unit testing activities and later evaluates the execution phase of unit testing. Note that engaging in unit testing activities provides the most benefit to the project as a whole, whereas the actual execution of those unit test cases is a metric of progress.

The main benefit of unit test activities is to prevent and find errors early, which minimizes cost. Studies at IBM® showed that testing prior to coding is 50 percent more effective in detecting errors, and 80 percent after coding. This study and others show that it is at least 10 times as costly to correct an error after coding as before, and 100 times as costly to correct a production error. In addition, quality is realized even before the development cycle: Chemical Bank® showed that two-thirds of all errors occur prior to coding.

On top of these studies, it is also known that errors found after delivery lead to the following negative consequences:

  • Unhappy customers.
  • Standstill of equipment or business processes.
  • Difficult and expensive error localization.
  • Complex problem tracking and removal processes.
  • New delivery and installation of software patches.

These negative consequences of bugs underline the many benefits of unit testing activities. The most overlooked benefit is that unit test processes lead the way toward a clean architecture. The process of preparing unit tests also gives project stakeholders a clearer understanding of the solution. Because unit tests are created directly from customer use cases, developers also break up the thick, information-intensive requirements document into real and simplified entities.

Using the Micro Design Phase

IBM Services teams that handle WebSphere Commerce customization projects have put these processes into practice. Part of that practice is the Micro Design Phase, which is highly recommended for every WebSphere Commerce services project. Micro Design activities complement agile development requirements with respect to the benefits of unit testing. As the macro design feeds into the micro design, the team works through a unit test thinking activity. This activity prepares the development team for the scope of work it is engaged in, readying it for peak performance during the development cycles.

As you can see, it isn't the actual execution of unit tests that matters most; it is the activity of preparing them that provides the advantage. This is like mentally preparing for a highly demanding Olympic match. Micro Design activities build a repertoire of techniques, much as athletes work out what they need to succeed.

Goals of unit testing execution

When unit tests are created, you gain a new set of benefits. The developer's main benefit of engaging in unit testing is knowing what not to test. When the building blocks work, you have a higher quality system. When you know what not to test, what to test becomes obvious. More importantly, when you know what is reliable, you are better prepared when defects are reported (remember, you have to plan for peak performance as a win-win situation). Good unit tests do not necessarily test every possible permutation of class behavior, nor do they test ultra-simple methods. They provide a common-sense verification that the code unit behaves as expected, ensuring successful delivery of the code.

When performing unit testing, the goal is to test the smallest possible unit of an application. This reduces development time and cost, and provides the most coverage possible. Finding bugs during development prevents bugs from spawning more bugs and leaving you to wade through problem after problem.

Let's review the types of testing:

  • White box testing: Ensures the unit is constructed properly and does not contain any hidden weaknesses.
  • Black box testing: Ensures the unit functions in the way it is intended to function.
  • Regression testing: Ensures modifications do not introduce errors into a previously correct unit.

Unit testing is normally glass box (white box) testing; that is, it usually requires knowledge of the internals of the class under test. In WebSphere Commerce customizations, however, the underlying product behavior does not need to be known for a unit test, because those subsystems are assumed to be functioning with high quality. The WebSphere Commerce product behavior is treated as a black box, where no knowledge of the implementation is required, while the customized extended behavior is the white box.

The prevalent concept in unit testing is to test software in parts, starting with particular functions, then objects, then functional areas (anticipating as many interactions as possible). From this perspective, unit testing is about testing each class in isolation, and testing classes in isolation requires decoupling techniques. The scope of unit testing covers all new and changed paths in the code: verifying inputs and outputs, branches, loops, subroutine inputs and function outputs, simple program-level recovery, and finally, diagnostics and traces. Remember that all of this applies only to the white box areas of the customized code, leaving the inherited behavior of the WebSphere Commerce product hidden.

You should reuse the concept of isolating functionality when unit testing your customizations in WebSphere Commerce. However, the challenge lies in defining what a unit really is in WebSphere Commerce. Since WebSphere Commerce already delivers high quality code, a thorough execution of unit tests for the entire product is redundant.

Therefore, focus on your specific customizations built on top of WebSphere Commerce. With this in mind, units in the WebSphere Commerce perspective are no longer classes in isolation. Guy W. Lecky-Thompson, in Corporate Software Project Management, acknowledges the concept of glue code. Glue code is anything that offers no immediate logic. In the case of WebSphere Commerce customizations, glue code binds existing WebSphere Commerce functionality with particular business requirements, the user interface, controller commands, task commands, and data beans. As you can see, the WebSphere Commerce base code is now the black box of your unit test, whereas your layer of customizations is the white-box area of your tests. This means that if a unit involves a controller command, then a unit test involves passing a command context and request properties to the command under test. This is a step beyond normal unit testing: WebSphere Commerce customization is more about integration and reuse, so unit testing happens at a more intense and complex level.
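
For example, a minimal JUnit sketch of this kind of test might look like the following. The command class MyOrderLookupCmdImpl is a hypothetical customization (not shown) that follows the usual pattern of extending ControllerCommandImpl, so it inherits setRequestProperties() and validateParameters(); the command context is omitted here because constructing one outside the runtime is environment specific and would typically be a stub or mock. The point is simply that the test drives the white-box customization through request properties while treating the WebSphere Commerce base classes as a black box.

import junit.framework.TestCase;

import com.ibm.commerce.datatype.TypedProperty;

// Hypothetical unit test for a hypothetical customized controller command.
public class MyOrderLookupCmdTest extends TestCase {

    public void testValidateParametersRejectsMissingOrderId() throws Exception {
        MyOrderLookupCmdImpl cmd = new MyOrderLookupCmdImpl();

        // Request properties are normally built by the runtime from the URL or form input.
        TypedProperty requestProperties = new TypedProperty();
        requestProperties.put("storeId", "10001");
        // Note: orderId is intentionally omitted to exercise the validation path.

        cmd.setRequestProperties(requestProperties);
        // A real test would also supply a command context here (stub or mock),
        // for example: cmd.setCommandContext(stubCommandContext);

        try {
            cmd.validateParameters();
            fail("Expected the command to reject a request without an orderId");
        } catch (Exception expected) {
            // The customized validateParameters() is assumed to throw an
            // ECApplicationException (or a subclass) for missing parameters.
        }
    }
}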

Common themes of unit testing

When testing at the lowest level, the typical defects found in unit testing include problems with:

  • Loop termination.
  • Simple internal parameter passing.
  • Proper assignment statements.
  • Simple recovery routine logic.
  • Errors in functions and subroutines.

You most likely know the areas of code where typical defects occur. You need to ensure the quality of your code, removing the opportunities for these typical defects to occur, and the challenge is executing tests that catch those areas. The most common themes when executing tests include:

  • Testing the most common execution paths.
  • Testing unexpected arguments.
  • Testing what happens when the components under test encounter errors from the components they use.

In WebSphere Commerce, the most common execution paths include validating input parameters within a controller command's validateParameters() method. This is the most common place to validate parameters coming from user input, whether from browser JSP pages or other types of user interfaces. Another common execution path is the performExecute() method of your customized WebSphere Commerce commands.

The challenge in testing the most common execution paths is that they extend existing WebSphere Commerce functionality; most of these paths therefore have dependencies outside your customizations. Where external dependencies are not yet available to your development effort, you cannot wait for them before beginning testing or integration. On the other hand, problems can also stem from inadequate simulations of those dependencies.

Scott Loveland in Software Testing Techniques said it best when challenged with testing with external dependencies: "Finding bugs during the development process cycle is like driving for the first time along a foggy road at night. You can only see ahead a few feet at a time, but the further you go, the more you discover - and if you drive too quickly, you might end up in a ditch."

Without a well-scoped set of unit test cases, bugs are hardest to capture when you jump straight into code. Therefore, a specific mapping of use case inputs to outputs needs to be expressed as unit test cases.

Guy W. Lecky-Thompson suggested in Corporate Software Project Management: "You need a strict input to output mapping with adequately expressed specifications with expected behavior." You can already see that it is complicated to unit test in an environment such as WebSphere Commerce, where core functionality is leveraged by the business requirements and your customizations are designed on top of it. With that in mind, the following development methodologies help ensure a strict mapping that successfully expresses the project specifications.

Methodologies that complement unit testing adoption

The waterfall and iterative project methodologies are adapted into IBM Services-specific methodologies for WebSphere Commerce customization delivery. This sets expectations with the customer on what is to be delivered, and thereby sets milestones for the project plan.

With an iterative methodology, continuous deliverables show progress to the project stakeholders and increase confidence in the project as a whole. Whichever project methodology is selected, the low-level development processes for WebSphere Commerce customizations require a flavor of agile software development.

Agile software development processes are required to meet changing requirements while also adapting to the project-level methodologies. Agile software development focuses on rapidly producing working code. There is a high demand to deliver early and often, which makes agile processes a natural complement to the iterative methodologies already set in place by the project team.

Along with merging these complementary methodologies, you need to handle changing requirements throughout all development and test phases. Agile methodologies include:

Spiral model
This introduces risk assessment and analysis throughout the development process.
Evolutionary model
This exploits the basics of the waterfall model by flowing seamlessly from one phase to the next.

However, when incorporating strong unit testing capabilities, the best-suited agile development approaches for WebSphere Commerce applications are eXtreme programming and the reactionary approach:

eXtreme programming (XP)
The idea of eXtreme Programming is to have continual feedback from users. Constant communication and cooperation between development and test across all phases foster the feedback that is crucial to success in the XP model. In this case, the WebSphere Commerce application must have reliable customizations. Developers are expected to build their unit test cases before they begin coding the actual software that will be tested. The IBM Services Micro Design phase works well with eXtreme Programming processes.
Reactionary approach
What happens when the plan collapses after problems crop up? A solution to this is the one-team approach that involves early daily meetings where project stakeholders set priorities based on the current set of issues. The development team runs comprehensive tests on the fixed code, identifies problems, patches, delivers, and continues with the project.

The reactionary approach is used to get the project back on track, but it cannot be relied on for a successful solution: its negative consequences disrupt the team's progress toward peak performance.

Looking back at the Olympic parallel introduced earlier: when a plan starts to fall off the critical path, it is like a team losing its match in competition. How does a champion team redeem itself when the odds seem stacked against it? It is much the same for a WebSphere Commerce project. The reactionary approach gets the project back on track. However, despite the promise that the project is back on track, is the development phase back on track? Are there metrics already in place to gauge the status? Yes, there might be a significant deliverable in place. Yet, if the proper integration and user acceptance testing has not yet been executed, there is no way to know whether the development deliverables patched on are successful. Of course, unit testing execution provides a metric to verify development stability, but has your team fed the patches back into the Micro Design Document? eXtreme programming requires that every reported bug has a new associated set of unit tests. This means revisiting and reviewing the Micro Design Document whenever there is a patch, fix, or change.

You have seen it watching the Olympics: a team comes back from a losing streak, looking almost ready to take the gold medal, then all of a sudden lets the win slip away. In much the same way, your project may seem to be on track after the reactionary stage. However, when user acceptance testing gets underway, there is an unacceptably large number of defects. What previously seemed to be a successful reactionary approach turns into another set of bugs, then more rounds of reacting to emerging problems, and so on.

eXtreme approach to minimizing bugs

The eXtreme programming approach to testing provides a strategy to ensure peak performance. This approach suggests you do the following:

  • Write tests before code.
  • All code must have unit tests, which can be run automatically in a single operation.
  • When a bug is reported, tests are created to reproduce it before an attempt is made to fix the bug.

Writing tests before code

Writing test cases before coding always seems a daunting task: there is barely time to develop the code, let alone create test cases before the phase starts. This process is not easy to adopt in many projects because the team may not directly see the return on investment. However, the following benefits apply:

  • Test cases provide additional documentation and are easier to absorb than specification documents.
  • Test cases promote understanding of the requirements.
  • It is more difficult to write tests for existing code than to write tests before and while writing code.

Writing tests before code means taking customer use cases as input and creating test cases as output. This is why IBM Services produces the Micro Design Document for its projects. The Micro Design Document enables architects to hand the design directly to any development team member, allowing for quick skills transfer. There is also added value in absorbing the unit test cases into a plan for minimizing bugs. Adopting unit test best practices has an impact on the project team.

Two methods increase the adoption of strong unit test plans:

  • Use automation so that the benefit of performing each required practice early on compensates for the resources necessary to perform it.
  • Tailor the development process to the team's current projects, strengths, and weaknesses (you cannot push every practice into projects that are already scoped).

Automating unit tests

JUnit is a good framework for running unit tests, and HttpUnit emulates the relevant portions of the browser's interaction with the application. JUnit is designed to report successes and failures in a consistent way. However, JUnit assumes you test all of the low-level code, whereas WebSphere Commerce already provides the code that customizations are built on top of; its main value here may be to give developers a consistent format for creating unit test cases. These automation frameworks do not run easily or automatically in a WebSphere Commerce environment because of the amount of preliminary customized work involved, which stems from the integration between the WebSphere Commerce black-box subsystems and the external dependencies in the runtime environment.
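
As an illustration, a small JUnit test that drives a store page through HttpUnit might look like the sketch below. The URL and the page content being asserted are hypothetical placeholders; the amount of store-specific setup a real test needs (test data, a running server, authentication) is exactly the preliminary work described above.

import junit.framework.TestCase;

import com.meterware.httpunit.GetMethodWebRequest;
import com.meterware.httpunit.WebConversation;
import com.meterware.httpunit.WebRequest;
import com.meterware.httpunit.WebResponse;

// Hypothetical smoke test for a customized store page.
public class StoreFrontSmokeTest extends TestCase {

    // Placeholder URL; a real test would read this from configuration.
    private static final String STORE_URL =
        "http://localhost/webapp/wcs/stores/servlet/StoreCatalogDisplay?storeId=10001";

    public void testCatalogPageDisplays() throws Exception {
        WebConversation conversation = new WebConversation();
        WebRequest request = new GetMethodWebRequest(STORE_URL);

        WebResponse response = conversation.getResponse(request);

        assertEquals("Expected an HTTP 200 response", 200, response.getResponseCode());
        assertTrue("Expected the page to contain catalog content",
                   response.getText().indexOf("Catalog") != -1);
    }
}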

As stated earlier in this article, the benefit is thinking through the unit test cases to prepare the development team, like preparing an Olympic team of athletes for competition. The coverage and feedback to design are the ultimate benefits to minimize bugs in a project, whereas the executed unit tests are merely a metric relevant to the observer.

Costs and benefits of automated unit tests

There are costs involved in creating and updating tests for the purpose of automation; these creation and maintenance costs are higher for automated tests. In contrast, the cost of test runs is much higher for manual testing, so automated tests pay off if they are repeated many times. A WebSphere Commerce customization project needs to consider this payoff. Your development team may prepare code for the very first phase of a project for a customer, but development is not finished just because Phase 1 launches successfully: the WebSphere Commerce customization assets are meant to be reused and extended in future development phases. With unit tests in place, whether automated or manual, future development teams can inherit those test cases and can refactor the code while sizing the impact of future requirements. Remember, WebSphere Commerce in itself is the black box and the customizations are the white box, until the solution architect finds a way to reuse some of the out-of-the-box WebSphere Commerce functionality.

When the functionality or the public interface of a class changes, as it will in future phases of development, you also have to adapt your test cases (and thus update the Micro Design Document). This means that, in reality, the cost of test automation is not constant because it depends on the rate at which the software changes. Despite the difficulty of estimating costs for unit test automation, consider the following:

  • In continuous integration, automation of most test cases is a must; after all, these tests run several times daily during demanding development stages.
  • In incremental process models without continuous integration, the costs for automation need to be weighed against the costs of the test runs. It is recommended to collect experience by initially automating the build verification tests (a minimal suite sketch follows this list).
  • In sequential models, development is followed by a time-limited integration phase. For this reason, unit tests are needed less often, and the benefit of automation becomes questionable; automating unit tests is often not cost effective in strongly sequential processes. Nevertheless, automated tests have to be created for critical components that cannot be tested manually, such as important internal calculation components. In such a case, it is recommended to use a testing framework.
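
For instance, bundling the build verification tests into a single JUnit suite keeps them runnable in one operation, from an IDE, the command line, or an Ant target. The test classes added here are the hypothetical ones sketched earlier in this article.

import junit.framework.Test;
import junit.framework.TestSuite;

// Hypothetical build verification suite for the customization layer.
public class BuildVerificationSuite {

    public static Test suite() {
        TestSuite suite = new TestSuite("Customization build verification tests");
        // Add the unit tests that must pass before a build is accepted.
        suite.addTestSuite(MyOrderLookupCmdTest.class);
        suite.addTestSuite(StoreFrontSmokeTest.class);
        return suite;
    }
}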

Tailoring the development process

This article has discussed some development methodologies, but only in terms of unit testing approaches. IBM Services has well-established best practices for organizing the development team and adapting the processes to the needs of the project. Whether or not your project adopts automated unit testing, the benefits of unit test activities compensate for the lack of execution opportunities. Once the project team has adopted unit testing, the following are the code-related best practices to use when writing unit test cases:

  • Write tests to interfaces.
  • Do not bother testing JavaBean properties with simple getters.
  • Maximize test coverage.
  • Do not rely on the ordering of test cases.
  • Avoid side effects that change the state of the system.
  • Read test data from the classpath, not the file system (see the sketch after this list).
  • Avoid code duplication in test cases.
  • Plan stub classes during the design phase for those units whose external dependencies cannot be tested directly.

External dependencies

As quoted earlier in this article, care has to be taken when driving down a foggy road. Areas that are tough to unit test are the Web interface, EJBs, and external dependencies.

If your unit tests can include Web interface testing, the following are the most popular scenarios in that respect:

  • Resubmission of forms.
  • Implications of the use of the Back button.
  • Security issues, such as resistance to denial of service attacks.
  • Issues if the user opens multiple windows.
  • Implications of the browser.
  • Whether both GET and POST requests should be supported.

Challenges testing EJB

The largest testing obstacle is obtaining test data that best represents the target environment. WebSphere Commerce provides data beans that you can populate with dummy test data to help unit test scenarios. Also, most development environments are lightweight and do not depend on a production database; rather, they reference test data or a lightweight database such as Cloudscape. When testing EJBs, the challenge is the heavy use of entity beans and the fact that all of the business logic sits in the implementation classes, creating a dependence on the EJB container at runtime. There are some ways to unit test around this obstacle. One is to code business logic in session beans: session beans and message-driven beans are more testable than entity beans, but still less testable than POJOs. Here are some suggestions when preparing to unit test EJBs:

  • For remote EJBs, write RMI clients.
  • For remote or local EJBs, use a tool, such as Cactus.
  • Use a substitute for the EJB container. For example, consider the integrated J2EE test environments provided by IDEs, or MockEJB.

Remember, unit testing is about isolation. You lose more than you gain when executing unit tests in a deployed environment.
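
One way to apply that isolation is sketched below, assuming a hypothetical business rule: keep the calculation in a plain Java class with no container dependencies, let the session bean or command delegate to it, and unit test the plain class directly.

import junit.framework.TestCase;

// Plain Java class holding the business rule; no EJB or container types.
class OrderApprovalPolicy {

    // Hypothetical rule: orders at or above this total (in cents) require CSR approval.
    private static final long APPROVAL_THRESHOLD_CENTS = 500000L;

    public boolean requiresApproval(long orderTotalInCents) {
        return orderTotalInCents >= APPROVAL_THRESHOLD_CENTS;
    }
}

// The policy class is testable in isolation; the session bean or command that
// eventually calls it is exercised separately (for example with Cactus or MockEJB).
public class OrderApprovalPolicyTest extends TestCase {

    public void testLargeOrderRequiresApproval() {
        assertTrue(new OrderApprovalPolicy().requiresApproval(750000L));
    }

    public void testSmallOrderDoesNotRequireApproval() {
        assertFalse(new OrderApprovalPolicy().requiresApproval(10000L));
    }
}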

Challenges with outgoing emails

JavaMail and outgoing email are another external dependency that is a challenge to unit test. How do you unit test messages, such as email? Some packages, like org.springframework.mail and org.springframework.mail.javamail, allow application code to send mail through an interface that can easily be stubbed. Another option is to design in the ability to temporarily output the email content to a file or to the trace log. This article discusses the benefits of log tracing later.
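
The interface-plus-stub idea can be sketched as follows. The MailSender interface and LoggingMailSender class are hypothetical names: a production implementation would delegate to JavaMail (or to a mail abstraction such as Spring's), while the stub simply records the last message so a unit test can assert on it, or writes it to the trace log.

// Hypothetical abstraction over outgoing mail, so that commands and task
// commands never depend on JavaMail directly.
public interface MailSender {
    void send(String to, String subject, String body);
}

// Stub used in unit tests (and, if desired, in development environments):
// instead of opening an SMTP connection, it keeps the last message for
// assertions and could also write the content to a file or to tracing.
class LoggingMailSender implements MailSender {

    private String lastTo;
    private String lastSubject;
    private String lastBody;

    public void send(String to, String subject, String body) {
        this.lastTo = to;
        this.lastSubject = subject;
        this.lastBody = body;
    }

    public String getLastTo() { return lastTo; }
    public String getLastSubject() { return lastSubject; }
    public String getLastBody() { return lastBody; }
}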

Tools that complement unit testing

This section looks at tools that complement unit testing activities:

  • Test generators
  • Coverage testing tools
  • Mutation testing tools
  • WebSphere Commerce 6.0 JSP Viewer

Test generators

When your unit test procedures become more advanced, you may be drawn toward generating tests with artifact-generating tools. Technologies such as JET (included in later versions of Rational® Application Developer) allow models to generate artifacts. Test generator tools are not as advantageous as they may appear, for the following reasons:

  • Reliance on such tools discourages following XP best practices.
  • You, not a tool, should think about the content of the test cases.
  • Such tools complicate the build process.
  • It's hard to see which generated test coverage is meaningful.
  • Such tools struggle to understand object state (method calls on an object must occur in a certain order).

Coverage testing tools

Coverage tools identify code that isn't being exercised. The problem with coverage tools is that they only give you information about the code you have written; you need to supply the intelligence to make the best use of that information. In this sense, tests should be driven primarily by what the code should do, not by maximizing coverage. Coverage numbers are an informative metric, so these tools are geared more towards advanced testers who know how to use such metrics.

Mutation testing tools

Mutation testing tools deliberately change the code under test to pinpoint potential problems; a strong test suite should then fail, detecting the mutation. In this sense, mutation test tools measure the quality of the testing rather than the extent of the testing.

WebSphere Commerce 6.0 JSP Viewer

The JSP Viewer is intended to let JSP developers focus on their presentation layer and unit test it without a dependency on the WebSphere Commerce runtime. With experience, this framework complements unit testing efforts. The idea is to have a hybrid mix between a mockup and actual customized executable code: the wireframe from the customer eventually becomes input to the JSP Viewer, and as pieces of development come together, some pages are linked to the customized code while other pages still use the JSP Viewer to mimic the runtime or its data. For more information, see JSP Viewer in the WebSphere Information Center.

Performing code reviews

When you schedule a WebSphere Commerce customization project, set milestones for the development team to hold formal review checkpoints. These reviews are scheduled within the development cycle and are best placed at unit completion, before code is submitted to formal test builds. An example code review checklist is presented in The Complete Guide to Software Testing by Bill Hetzel, which lists the following main points as a tool to aid code reviews:

  • A structure for the review.
  • Vehicles for recording results (individual and overall).
  • A means of guiding the review activity.
  • Vehicles for learning from the past.
  • A way to ensure systematic and comprehensive coverage.
  • Tools for quantifying and measuring results.

Code reviews are generally conducted verbally by developers. Through this process, errors are often exposed early, minimizing their impact on the project. Code reviews also enable development team members to recognize areas of duplicated effort and merge them, which further increases the quality of the code in the final delivery.

Enforcing coding standards

Coding standards significantly reduce opportunities for developers to introduce errors into an application. It is much the same as that team of athletes on the final day of competition. No matter what happens during their match, as long as they stick together, they can overcome unpredictable occurrences. The development team also needs to stick together and overcome challenges together. The most controllable and predictable way for that to happen is with coding standards set in place. There are two types of coding standards:

1. Industrywide coding standards
Java™ best practices form an industrywide coding standard; for example, use StringBuffer instead of String for nonconstant strings. WebSphere Commerce uses Java extensively, and the industry's best resource for coding conventions is Sun's® Java coding conventions. Introduce refinements and variations if you prefer, but don't stray too far from common Java practices. Some other standards include:
Draft Java Coding Standard
AmbySoft Inc. Coding Standards for Java
Object-oriented Guidelines
Unmaintainable code
2. Custom coding standards
These rules are specific to a certain development team, project, or developer.

Three types of custom coding standards are as follows:

Company
These coding standards are rules specific to a company or development team. For example, a naming convention such as "All classes must be defined in com.<company name>.commerce or one of its subpackages".
Project-specific
These rules are designed for a particular project. For example, a naming convention such as "All classes must be defined in com.<project name>.commerce or one of its subpackages".
Personal
These rules help you prevent your own most common errors. For example, "Use all uppercase letters for the field names in an interface".

WebSphere Commerce customization projects may find it difficult to govern a development team so that it keeps up with coding standards while still delivering successfully. This is because WebSphere Commerce supports such complex integrations that a particular business project may use only a subset of the available frameworks and code. Project-specific coding standards can be a challenge because it takes time to pinpoint which WebSphere Commerce integrations match the business requirements, and then to govern those standards within the team. Even if those coding standards don't seem to fit the team's intense demands, they hold the team together, much as a team follows through with an Olympic match. When the team scores, they see it as feedback that they are on the right track; they don't celebrate and sit back for the rest of the match, but continue those successful actions, following the standard they have set for themselves.

Ruling out causes of project distractions

As stated earlier, many bugs originate before the development phase starts and before coding standards are set in place. This is why the Micro Design phase is so important in any WebSphere Commerce customization. Let's look at the common mistakes that occur before the development phase of a project:

Going straight to code
In this case, the design phase moves into the development phase too fast, without official sign-off on the use cases. Developers are new to the project team and may not have had a chance to review the coding standards or to practice them, and those standards may conflict with their current styles.
Permitting a moving target
To avoid this, put a strict change control board in place, which discourages scope creep.
Not correcting personnel assignment mistakes
IBM Services has a best practice to assign the right person to the right task.
Holistic
Everyone on the team is equally skilled in all aspects of development. For example, you define a number of artifacts, which include JSPs, EJBs, and commands. Then with those artifacts known, you can assign anyone on your team to complete any of the artifacts.
Core competency
People specialize in their areas. In core competency, you have a JSP specialist, an EJB specialist, and a command specialist.
Typically, the skill ramp-up time is much higher with the holistic approach, and there is some personnel assignment risk; the approach works best with a strong Micro Design Document in place. Regardless of which approach you take, Micro Design activities can minimize personnel assignment risks.
Saving integration testing activities until the end of the project
Instead, plan stubs and loopback testing into the design phase of the project, before coding starts.

When code is predictable and reliable, bugs are easier to prevent, monitor, and fix, and other project problems don't pile up on top of bad code. The goal of coding standards is to ensure good code. The following is a short list of the characteristics of good code:

  • Good code is extensible without drastic modification.
  • Good code is easy to read and maintain.
  • Good code is well documented.
  • Good code makes it hard to write bad code around it.
  • Good code is easy to test.
  • Good code is easy to debug.
  • Good code contains no code duplication.
  • Good code gets reused.

As you can see from the list above, good code looks simple. Why doesn't everyone just write it? Everyone intends to, but everything has to fall into place for good things to happen, and watching out for project pitfalls sometimes seems to take precedence over governing coding best practices.

Practicing defensive programming techniques

Let's go back to reviewing some techniques of coding practice without worrying about other causes of project distractions.

Programming style has an impact on coding practice, because developers revert to their own best-known way of delivering artifacts if the team does not provide strong direction. When that happens, the following points about programming style are good to remember (see the sketch after this list):

  • Code that contains over-long methods makes it impossible to ensure that all code is exercised, and it is unreadable and hard to maintain.
  • Code that relies on magic global static state held in singletons cannot be tested in isolation, and it becomes inflexible if the assumptions it was based on change.
  • Code that is not pluggable, where it's impossible to swap the implementation of a particular collaborator for a test stub, makes an application hard to parameterize and extend.
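
The pluggability point can be illustrated with a small sketch (the names are hypothetical): instead of reaching into global static state, the helper receives its collaborator through an interface, so a unit test can substitute a stub.

import junit.framework.TestCase;

// Hypothetical collaborator expressed as an interface rather than a
// static singleton, so tests can supply a stub implementation.
interface InventoryChecker {
    boolean inStock(String catalogEntryId);
}

class AvailabilityHelper {

    private final InventoryChecker checker;

    // The collaborator is passed in instead of being looked up from
    // global static state, which keeps the class testable in isolation.
    public AvailabilityHelper(InventoryChecker checker) {
        this.checker = checker;
    }

    public String availabilityMessage(String catalogEntryId) {
        return checker.inStock(catalogEntryId) ? "In stock" : "Back ordered";
    }
}

public class AvailabilityHelperTest extends TestCase {

    public void testBackOrderedMessageWhenNotInStock() {
        InventoryChecker alwaysOut = new InventoryChecker() {
            public boolean inStock(String catalogEntryId) {
                return false;
            }
        };
        AvailabilityHelper helper = new AvailabilityHelper(alwaysOut);
        assertEquals("Back ordered", helper.availabilityMessage("12345"));
    }
}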

You can minimize programming style conflicts with code reviews to ensure consistency. Automated code review tools, such as Rational Code Review, are helpful at the code level. The point of code reviews in this case isn't to correct style, but to have the development team reach a collective consensus.

The best way for developers to minimize programming style conflicts is to continually own and update the Micro Design Document so that it reflects any change or new understanding. If a project-level Micro Design Document exists, each developer should have their own Micro Design Document that inherits from it. That way, each developer can include their own unit testing plan, along with the reasons and decisions behind programming changes.

Anticipating failures using programming techniques

Programming techniques shape how the code is written; when developers think about how they will test their code, the resulting code is affected. Beyond that, other programming techniques can anticipate failures. The following is a list of defensive programming techniques to consider in projects:

Avoid duplicate code
This is where code reviews also play a key role. The following list reviews the problems with duplicate code:
  • Too much code.
  • Confusing readers as to the intent.
  • Inconsistent implementation.
  • Ongoing need to update two pieces of code to modify what is really a single operation.
  • Fixes in one area of code leave the duplicate areas still vulnerable.
Duplicated effort in a project is always an issue, whether it is exposed or not. The project may launch successfully, but later down the road, in future phases of the solution, the duplicate code might cause problems. The following techniques help to avoid duplicated effort:
  • Adopt existing frameworks where possible.
    A framework is a generic architecture that forms the basis for specific applications within a domain or technology area. The challenge with frameworks is achieving flexibility while remaining usable. Excessive flexibility means that the framework contains code that will probably never be used and may be confusing to work with; however, if the framework isn't flexible enough to meet a particular requirement, developers will go their own direction. Framework evaluation may be required to analyze whether a candidate framework meets the project's business requirements. As an aside, The Selfish Class by Brian Foote and Joseph Yoder uses a biological analogy to characterize successful software artifacts that result in code reuse. For external frameworks used in your solution, use the following checklist:
    • Quality of the project documentation.
    • Project status (for example, on http://www.sourceforge.net).
    • Quality of the design.
    • Quality of the code.
    • Does the release include test cases?
  • Have zero tolerance for code duplication.
  • Ensure good communication amongst developers.
  • Develop and maintain some single infrastructure packages that implement functionality that's widely used.
  • Adopt standard architectural patterns, even where it's not possible to share code.
  • Use code reviews.
Allocation of responsibilities
At the class level, a method should have a single clear responsibility, and all operations within it should be at the same level of abstraction. At the component level, this means isolating functionality. For example, SQL access is separated from the class that invokes it, so that the SQL statements can be replaced to support a different underlying database without changing the invoking components.

The same goes for JSP files, where some J2EE developers have been accustomed to coding Java logic directly into the JSP. WebSphere Commerce recommends using data beans to handle the data and JSTL tag libraries to handle the logic coded into the JSP. This way, functionality is isolated so that unit testing may not even have to hit Web pages to test how the data is handled at run time.
Another way to isolate functionality at the code level is to loosely couple the objects themselves using reflection. Reflection is one of the best ways to parameterize Java code: using it to choose, instantiate, and configure objects dynamically lets you exploit the full power of loose coupling through interfaces. Such use of reflection is consistent with the J2EE philosophy of declarative configuration.
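
A minimal sketch of that reflection technique, using hypothetical names: the implementation class is named in configuration, instantiated through Class.forName(), and used only through its interface, so implementations (including test stubs) can be swapped without changing the calling code.

// Hypothetical interface that the customized code depends on.
interface TaxCalculator {
    double taxFor(double amount);
}

class TaxCalculatorFactory {

    // In practice the class name would come from a properties file or another
    // configuration source rather than being hard-coded by the caller.
    public static TaxCalculator create(String implementationClassName)
            throws Exception {
        Class clazz = Class.forName(implementationClassName);
        // The caller only ever sees the interface.
        return (TaxCalculator) clazz.newInstance();
    }
}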

Limit the length of code blocks
The customized code should have minimal functionality in each method, and all blocks of code should contain minimal code. This allows you and a code reviewer to see an entire construct in one view. The limited length forces you to simplify the logic, and when there are errors, the code is much easier to fix and maintain.
Limit the functionality of functions
Functions should have limited functionality. This makes it easier for you and for reviewers to understand what a function does from a high-level perspective. If you go through all of your finished code, it is much easier to conceptualize the artifacts through simplified functions than to try to understand the complexities within each one.
Limit the need for comments within the code
Yes, limiting comments sounds counterproductive at first. When an artifact is coded well, however, it is readable and simple, and many comments become redundant. One strategy to enhance readability and simplify calling code is parameter consolidation, which means encapsulating the multiple parameters of a method into a single object.
Code example 1. Non-consolidated method example
 public void setOrderHeader(	String accountName, 
 				int balance, 
 				int roundedTotal, 
 				int roundedTax,
 				String comments);

You can simplify it as:
Code example 2. Consolidated method example
 public void setOrderHeader(OrderHeader orderHeader);

The main advantage is flexibility: you don't need to break method signatures to add further parameters. You can add additional properties to the parameter object, which means that you don't have to break code in existing callers that are not interested in the added parameters. By populating default values in the parameter object's constructor, you allow callers to use syntax such as the following:
Code example 3. Invoking consolidated method example
 OrderHeader o = new OrderHeader();   
 o.setAccountName("ABC Ltd.");  
 configurable.setOrderHeader(o);

Here, the parameter object's constructor sets all fields to default values, so you need to modify only those that vary from the defaults. If necessary, you can make the parameter object an interface to allow more flexible implementations. The disadvantage of parameter consolidation is the potential creation of many objects, which increases memory usage and the need for garbage collection. Objects consume heap space; primitives do not. Whether this matters depends on how often the method is called. If the method is remote, parameter consolidation can degrade performance, because marshaling and unmarshaling several primitive parameters will always be faster than marshaling and unmarshaling an object.
Readable code
Readable code includes:
  • Descriptive names on both variables and functions.
  • Use of named constants (for example, MAX_NAME_LENGTH rather than 25). In the same spirit, avoid literal constants, because they are error prone.
    Code example 4. Hard-coded constant example
    private static final int ACCOUNT_SPENDING_LIMIT = 100000;
    if (balance > ACCOUNT_SPENDING_LIMIT) {
           SendMessageToCSR(balance, ACCOUNT_SPENDING_LIMIT);
    }

    This is better represented as:
    Code example 5. Readable constant example
    private static final int DEFAULT_ACCOUNT_SPENDING_LIMIT = 10000;
    protected int accountSpendingLimit() {
            return DEFAULT_ACCOUNT_SPENDING_LIMIT;
    }
    if (getBalance() > accountSpendingLimit() ) {
            SendMessageToCSR(getBalance(), accountSpendingLimit());
    }
  • Short and easily understandable blocks of code.
  • Good separation of logic.
  • Well thought-out distinction of interface functions and helper functions using private and public constructs.
Limit the scope of variables
Variables and methods should have the least possible visibility (private before package, protected, and public), and variables should be declared as locally as possible. Allowing subclasses to access protected instance variables produces tight coupling between classes in an inheritance hierarchy, making it difficult to change the implementation of classes within it. Avoid protected instance variables; they usually reflect bad design, and there is usually a better solution. The only exception is the rare case when the instance variable is final.
Method visibility
Hide methods as much as possible. The fewer methods that are public, package-private, or protected, the cleaner a class is and the easier it is to test, use, subclass, and refactor. Often, the only public methods that a class exposes are the methods of the interfaces it implements and methods exposing JavaBean properties.
Variable scoping
Inner interfaces are typically used when a class requires a helper that may vary in concrete class, but not in type, and when this helper is of no interest to other classes. The disadvantage of anonymous inner classes is that they don't promote code reuse.
Create unit tests
As stated earlier in this article, it may be a good idea to create the test before coding the functionality. This helps you have a clear view on what the unit is supposed to provide.

Most common defensive coding practices

The following list contains the rules most commonly overlooked during the development cycle. The resulting problems typically resurface during user acceptance testing and complicate integration efforts.

Handle nulls correctly
Document method behavior for null arguments. Write test cases that invoke methods with null arguments to verify the documented behavior. Don't assume without good reason that an object can never be null at a particular point; such assumptions cause problems.
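
For instance, if a helper is documented to reject null input, a short test pins that contract down (the helper and its contract here are hypothetical):

import junit.framework.TestCase;

// Hypothetical helper documented to reject null input.
class AccountNumberFormatter {
    public static String format(String accountNumber) {
        if (accountNumber == null) {
            throw new IllegalArgumentException("accountNumber must not be null");
        }
        return accountNumber.trim().toUpperCase();
    }
}

public class AccountNumberFormatterTest extends TestCase {

    public void testNullArgumentIsRejected() {
        try {
            AccountNumberFormatter.format(null);
            fail("Expected an IllegalArgumentException for a null account number");
        } catch (IllegalArgumentException expected) {
            // Documented behavior: null input is rejected explicitly.
        }
    }
}
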
Consider the ordering of object comparisons
Code example 6. Source of the NullPointerException
	if (myStringVariable.equals(MY_STRING_CONSTANT))

Is defensively better represented as:
Code example 7. Defending against NullPointerException
	if (MY_STRING_CONSTANT.equals(myStringVariable))

What if myStringVariable is null in the first example? The resulting NullPointerException is hard to debug.
Use Short-circuit Evaluation
There is always that error during user acceptance testing where users don't enter data that you assumed would always be there.
Code example 8. Assumption evaluation
	if ( o.getValue() < 0 )

Is defensively better represented as:
Code example 9. Short-circuit evaluation
	if ( (o!=null) && (o.getValue() < 0))

This is safe even if the object is null.
Distinguish whitespace in debug statements and error messages
Consider this error statement:
Code example 10. Debug an error string freely
	Error in com.foo.bar.MagicServlet:  Cannot load class com.foo.bar.Magic

What if the class really is there? It may be helpful to add quotes to the output:
Code example 11. Debug an error string encapsulated
	Error in com.foo.bar.MagicServlet:  Cannot load class 'com.foo.bar.Magic '

With the quotes in place, the trailing space in the class name becomes visible. A little extra formatting in error messages makes simple mistakes much easier to understand.
Prefer arrays to collections in public method signatures
Use a typed array in preference to a collection, if possible, when defining the signatures for public methods.
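
A small before-and-after sketch of this guideline, using a hypothetical OrderAdjustment type: the array version tells callers exactly what they receive, whereas the untyped collection leaves the element type to documentation and casting.

import java.util.List;

// Hypothetical value object used only to illustrate the signatures below.
class OrderAdjustment {
    private String description;
    public String getDescription() { return description; }
}

interface OrderAdjustmentService {

    // Preferred: the signature states the element type.
    OrderAdjustment[] getAdjustments(String orderId);

    // Weaker (before generics): callers must read the Javadoc to learn
    // what the List contains, and must cast each element.
    List getAdjustmentsAsList(String orderId);
}
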
Documenting code
Code that isn't fully documented is unfinished and potentially useless. Guidelines for documenting code are:
  • Learn to use Javadoc:
    • Use Javadoc comments on all methods.
    • Always document runtime exceptions.
    • Comments on methods and classes should normally indicate what the method or class does; describe how it is implemented only where necessary.
    • Use /* ... */ comments for comments longer than three lines and // for shorter ones.
    • Use Javadoc comments on all instance variables.
    • When a class implements an interface, don't repeat comments about the interface contract.
    • Always document the type of keys and values in a map, as well as the map's purpose.
    • Document element types permissible in a collection.
    • Ensure all comments add value (for example, "loop through the array elements" does not help).
  • While there's no need to document obvious things, it's essential to document non-obvious things.
  • Take every opportunity to improve documentation.
  • Include a package.html file in each package (for javadoc).
  • Document early and always keep documentation up to date.
  • Don't use "endline" (or "trailing") comments.
  • Don't include a change log in class documentation.
  • Unless required in your organization, don't use massive comments at beginning of files with mission statement, license terms, and so on.
  • Generate full javadoc comments daily and make them available to the team.
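
As a brief illustration of these guidelines, a Javadoc comment on a hypothetical helper method might look like this:

// Hypothetical helper used only to illustrate the documentation guidelines.
public class TraceNames {

    /**
     * Builds the fully qualified trace component name for a customization package.
     *
     * @param packageSuffix the suffix appended to the base component name;
     *                      must not be <code>null</code>
     * @return the fully qualified component name, never <code>null</code>
     * @throws IllegalArgumentException if <code>packageSuffix</code> is <code>null</code>
     */
    public String buildTraceComponentName(String packageSuffix) {
        if (packageSuffix == null) {
            throw new IllegalArgumentException("packageSuffix must not be null");
        }
        return "com.mycompany.commerce." + packageSuffix;
    }
}
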
Runtime performance
There is also the issue of runtime performance, because customer requirements do include runtime performance expectations. Even though the code and application may work as designed, unit tests can only verify the quality of the code, and runtime performance may still suffer. This article shows one common code area as one example among many. The example prepares the code to improve runtime performance, applies allocation of responsibility, and improves readability. It improves the coding practice around ServerJDBCHelperAccessBean:
Code example 12. ServerJDBCHelperAccessBean without allocation of responsibility
ServerJDBCHelperAccessBean JDBCHelper = new ServerJDBCHelperAccessBean(); 
try 
{
	String orgId = getOrganizationId();
	String query = "SELECT orgentityname FROM orgentity"
		+ " WHERE orgentity.orgentity_id = " + orgId;
	Vector queryResult = JDBCHelper.execute(query);
	String businessName = buildResult(queryResult);
} 
catch (Exception e) 
{
	throw...
}

You can code this excerpt better to enhance performance: when the SQL is parameterized in the following way, the statement can be cached.
Code example 13. ServerJDBCHelperAccessBean with allocation of responsibilities
private String getBusinessNameQuery(String orgId) {
	String methodName = "getBusinessNameQuery";
	String businessName = null;
	ServerJDBCHelperAccessBean JDBCHelper = 
		new ServerJDBCHelperAccessBean(); 
	try 
	{
		// orgId is supplied as a parameter, so it is not looked up again here.
		String query = buildBusinessNameQuery();
		if (query!=null && query.length()>0){
			Object[] parameters = new Object[1];
			parameters[0] = orgId;
			Vector queryResult = 
			JDBCHelper.executeParameterizedQuery(query, parameters);
			businessName = buildBusinessNameResult(queryResult);
		}
		StringBuffer logInfo = new StringBuffer();
		logInfo.append(orgId).append(query).append(businessName);
		ECTrace.trace(ECTraceIdentifiers.COMPONENT_EXTERN, 
		getClass().getName(), methodName,logInfo.toString());
	} 
	catch (Exception e) 
	{
		throw...
	}
	....
}

private String buildBusinessNameQuery() {
	String methodName = "buildBusinessNameQuery";
	StringBuffer query = new StringBuffer();
	query.append("SELECT orgentityname ")
		.append("FROM orgentity WHERE orgentity.orgentity_id = ?");
	ECTrace.trace(ECTraceIdentifiers.COMPONENT_EXTERN, 
		getClass().getName(), methodName,query.toString());
	return query.toString();
}

There are other areas of WebSphere Commerce customizations that you can improve to enhance runtime performance. However, that is another topic to cover in a future article.

Backward compatibility

WebSphere Commerce is designed to integrate with a large number of back-end and messaging systems, so plan carefully when dealing with the backward compatibility of tested software. As service-oriented architecture suggests, the versioning of Web services may have an impact on coding practices: changes to Web services need to preserve backward compatibility. The following types of changes are backward compatible:

  • The addition of new service operations to an existing service description using WSDL. If existing requestors are unaware of a new operation, then they will be unaffected by its introduction.
  • The addition of new XML schema types within a WSDL service definition document that are not contained within previously existing types. Again, even if a new operation requires a new set of complex data types, as long as those data types are not contained within any previously existing types (which would in turn require modification of the parsing code for those types), this type of change will not affect an existing requestor.

Debugging and logging

One way to work through and test code is debugging. The WebSphere Commerce development tools provide a lightweight debugging environment.

However, there are some issues to clarify about debugging:

  • Debugging sessions are transient.
  • Debugging is time consuming when it becomes necessary to step through code line by line.
  • Debuggers don't always work well in distributed applications.

For these reasons, development teams need to think of tracing and logging as a better alternative to depending on debugging to find problems.

There are some extra benefits to tracing and logging as well:

  • Logging encourages thought about the program's structure and activity, regardless of the bugs reported.
  • Unit tests are valuable in indicating what may be wrong with an object, but won't necessarily indicate where the problem is.
  • Code isn't ready for production unless it is capable of generating log messages and its log output is easily configured.

WebSphere Commerce already has a solution for tracing and logging, called component tracing. It replaces the infamous System.out, which should not be used.

Component tracing allows the developer to enable only specific isolated components to trace their code.

Also, when the solution is launched into production, administrators can reuse the same component tracing to monitor the system and pinpoint the sources of possible defects.

Making exceptions informative

Along with the tracing and logging capability, exceptions in WebSphere Commerce can be thrown to an error bean. Specific errors are mapped to a properties file that includes details about the operation that failed, while the stack trace is preserved in the logs. The information sent to the error bean gives precise details about what the process was trying to do when it failed and about what might be done to correct the problem. Here is an example:

Code example 14. Making exceptions informative
WebApplicationContext failed to load config from the file /WEB-INF/
ApplicationContext.xml; 
cannot instantiate class 'com.foo.bar.Magic' 
attempting to load bean element with name 'too' - 
check that this class has a public no arg constructor.

The following code excerpt is an example of how to trace code in WebSphere Commerce.

Code example 15. Merging unit testing with component tracing
StringBuffer logItems = new StringBuffer();
logItems.append("userId=").append(userId).append("\n");
logItems.append("registrationType=").append(registrationType).append("\n");
logItems.append("registrationFlag=").append(registrationFlag).append("\n");
logItems.append("otherFlags=").append(otherFlags).append("\n");
logItems.append("importantInfo=").append(importantInfo).append("\n");

ECTrace.trace(ECTraceIdentifiers.COMPONENT_EXTERN,getClass().getName(),
		METHODNAME,logItems.toString());

As you see from the trace identifier, the EXTERN component is used. This maps to the component com.ibm.websphere.commerce.WC_EXTERN that you can enable in the WebSphere Application Server environment. The EXTERN component is used in this case to minimize the amount of tracing that is sent to the logs to complement the unit testing feedback.

If an error message is not informative enough during the development phase when building the functionality, refactoring can come into play. Refactoring not only cleans functional code, but also improves:

Error messages
Where a failure with a confusing error message indicates an opportunity to improve the error message.
Logging
During code maintenance, you can refine logging to help in debugging.
Documentation
If a bug results from a misunderstanding of what a particular object or method does, documentation should be improved.

Conclusion

Achieving peak performance within a specific time frame is about being well equipped and prepared to deliver outstanding results. For the development phase of a project, the key ingredients are code reviews, a strong unit testing plan, and valuable coding standards. This article investigated each of these areas in depth in relation to WebSphere Commerce customizations, showed how they are expressed through the IBM Services Micro Design Phase, and explained how they benefit the project as a whole. With these areas planned and in place, the team should be ready to take on any obstacles the project presents.

