When evaluating the quality of a software release and its stability across several iterations, you can look at the test case results as they accumulate across the iterations of a test plan. As the iterations progress toward release, the number of failing points in test cases should decrease or at least stay very low relative to the number of passing test points. Hopefully, as you approach the release, the number of blocked or deferred points also drops to near zero. By using a stacked bar graph, you can easily represent the test points accumulated for each iteration and look for these trends.
At this point, we will construct a test plan results graphic keyed on the test case execution records that are linked to test case result records. The test case execution records are specific to an iteration of a specific test plan. Graphing the test results stored as attributes in the test case result records is a fairly straightforward exercise with the power of the JRS Report Builder. For this example, I am using the report construction workflow present in the 6.0.1 M2 milestone build that is available for download from jazz.net.
Test case execution records linked to test case result records, scoped by a test plan
Step 1
In the Report Builder interface, select Build; the first step is to choose a report type. Current Data should be the preselected value, so you just click Continue.
Step 2
Under Limit scope, you select the Quality Management project area where your test artifacts reside. In this case all related test artifacts reside in the same project area by product design. Click Continue to move on.
Step 3
Under Choose artifacts, you select Test Case Execution Record artifacts as the primary record to be reported on. Click Continue to move on.
Step 4
Under Optional: Traceability links, you select All related Test Case Result records and click OK.
Step 5
Since the link type of Optional is correct, you just click Continue to move on.
Step 6
Under Set conditions, you click Add condition, select Test Case Execution Record, select Test Plan Name, and choose the test plan (or plans) you want in your report. Click Add and Close to dismiss the dialog and continue to move on.
Step 7
Under Format results, you first remove the project area name column since this is known and implied by the test plan. Click the X to the right and confirm the deletion when prompted. Then we need to add the missing data columns by clicking on Add attribute columns.
Step 8
Next you add the iteration attribute of the Test Case Result records.
Step 9
Then you add all of the point attributes from the Test Case Result records.
Step 10
At this point all of the data is present, but you want to sort it by test plan, iteration, and test case execution record name, in that order. Set the Sort Type to Ascending for each of these columns in succession to get the right sort order.
Step 11
Also you want your report to have test plan and iteration in the first two columns on the left side of your report so that the sort order cascades properly. Use the Action up and down arrows to set up the shown column order. When you have it the way you want it, click Continue to move on.
Step 12
At this point you need to name your report (e.g., Test Plan Results by Iteration), tag it so it is filed in the right group of reports in the public catalog, and set it to Public to share your report. Click Save and Continue to move on.
Step 13
Review the tabular version of your report to make sure it is selecting all the record data that you are interested in. Once you are satisfied the data is correct (and complete), we can go back and change the default format to a graph.
Step 14
Click on Format results to move back and click Graph to set up your graphical parameters.
Step 15
Instead of clumping the results by test plan, we want to show the results as categories using Iteration. Then we want to combine the numeric columns from the table in a Stacked Bar (set in Graph type), change it to Horizontal (set in Orientation), and exclude Points Attempted and Points Total since they don't represent distinguishing results. Once these changes are made, click Save to save your changes.
Step 16
Run your completed report showing how the test results change based on the test iterations in your test plan.
When looking at a test plan, one of the important aspects is to understand what requirements are being verified. When you are a test lead and looking to see what test cases are linked to various requirements, you can get a feel for where the test plan is focused and where there may be duplication of coverage across its test cases. In addition, you can see which test cases have not yet been linked to the requirements they cover and can prompt the test case designer to add the missing links.
Let's now see how the report can be built with the JRS Report Builder. In this instance, I am using version 6.0.1 M2. I believe all the functionality is available in the 6.0 GA version, although the workflow has been adjusted to select the project areas before selecting the primary artifact type.
Test plan with linked test cases with verified requirements
Step 1
Go into Report Builder and choose to build a new report. As a new feature in 6.0.1 M2, you can choose either a current data or historical trend data report. Choose the current data report and click Continue.
Step 2
Select the Quality Management project area and associated Requirements Management project areas where the linked requirements are kept. Click Continue when you have selected the correct projects.
Step 3
Select Test Plan as your primary artifact for your report then click Continue.
Step 4
Pick the relationship to Test Case as the All Related link type and click OK.
Step 5
Pick the relationship from Test Case as the Requirements – Validates link type and click OK.
Step 6
Now that you have the traceability links set up, you need to go back and set the Test Plan to Test Case link to Required instead of optional. Click Continue to move on.
Step 7
Click Add Condition to limit the report to a certain test plan (or plans). In the Attributes of drop list, select Test Plan and then the Name attribute. Then in the right hand scrolling list, you select the test plan or plans by name. Then click Add and Close to move on.
Step 8
Now that you have set the conditions, check the condition listed and click Continue to move on.
Step 9
In selecting the columns for the report, we don't need the test plan's project name since the test plans are all from a single project area. Therefore we use the delete function to remove this column from the report.
Step 10
However we want to add the requirement type so we know what types of requirements we are linking to in this test plan. Click the Add attribute columns button. Select Requirement in the Attributes of drop list and select the Type attribute. Click the Add button to move on.
Step 11
Review the listed report columns and edit the column heading for Type to be Requirement Type for clarity and then click Continue to go on.
Step 12
Now that the report is done, you need to give it a name, a tag for its public category, and select Public (publish to catalog). Click Save to complete this and Continue to move on.
Step 13
At this point, the report runs and displays the data for your test plan coverage report.
One of the important aspects of product quality is the test case coverage of the product's requirements. This helps the product development team understand whether there has been enough testing, from a breadth perspective, of all the requirements that the product is supposed to meet. By itself, however, without deep examination of the thoroughness of the test cases, this metric is not enough to know that the product is properly tested. Remember that test coverage is typically measured as direct or indirect coverage of a requirement by a test case and does not speak to how completely the requirements have been tested. Test coverage is only valuable if the requirements have been decomposed to the point where verification points have been incorporated in a test case to test every aspect of the covered requirement. Most test cases are linked to a requirement based on a partial coverage criterion, which makes the test coverage optimistic at best. Having a single test case for each requirement is a fairly low bar of test coverage, and yet many test plans fail to meet this threshold.
Let’s now turn to how to build a test requirement coverage report. There are two ways to look at coverage. First let’s look at the content of a set of requirements modules or collections and how many requirements have links to test cases. Then we will look at a test plan (and its associated test cases) and see which requirements it tests.
Requirements with linked test cases
Step 1
Go into Report Builder and choose to build a new report. Select Requirements of Type “CapabilityRequirement” as the artifact type. This is a custom requirement type that we built into our ALM Requirements project area in DOORS Next Generation.
Step 2
For the Traceability links section, add an optional link type "Validated By Test Case". I believe this is a new feature being added in the coming release; it appears instead of the "Related Test Case" choice because this article is based on a Report Builder 6.0.1 M1 pre-release installation. Choose the appropriate value for your installation.
Step 3
This is what the link diagram looks like after the link type has been chosen.
Step 4
After you click continue, we next limit the scope to the requirement and quality management project areas that contain the linked artifacts on which we want to report.
Step 5
Now we add a condition on which requirements we want to report. In this case we want to see which of the Key Capabilities for each of the products have linked test cases.
So we choose to select only those requirements contained in these modules (or collections). Then we click the Add and Close button to save these selections.
Step 6
After clicking the Continue button, we move to Format results. This is a list of the default columns chosen for us by the Report Builder.
Step 7
Since we do not need the Requirements Project column, as we are only in one project area, we delete this entry, as well as the Requirement Type, since they are always the same. Then we move the Requirements Collection up to the top as the first column, since it is how we want to sort the data (which we set in a later step). The resulting list of columns looks like this picture.
Step 8
We want to sort first by Collection or Module and then by Requirement and then by Test Case. This provides the most organized list based on analyzing the requirements collections and which have associated test cases. Click Continue to complete this step.
Step 9
We give the report a descriptive name, tag it with a category of reports (in this case under my name), and set the privacy to Public so that it will show up in the shared report catalog for all to use. Click Save to save the report and publish it in the public catalog.
Step 10
To run the report to check that the data looks correct, click Continue and page through the resulting data. Note that some requirements have many test cases linked while others have no test cases.
Step 11
Next I open the Preview at the top of the builder page, switch the view to Graph, and choose the Collection or Module as the X axis categories. In addition, I switch the graph type to Stacked bar and the Orientation to Vertical, and I count the number of times each test case is linked to a requirement in that collection. Then I save these as the default settings.
Step 12
Finally I can run the finished report and get the graph view as the default which I can change to a table view whenever I run the report. In addition by clicking on an individual slice of a bar graph, I get tabular data on only that part of the graph. By the way the bottom chunk of each bar represents those requirements without test cases. Currently selecting this chunk displays all of the requirements data for the entire collection or module. In a future version, this should only display those requirements without linked test cases.
In another blog entry I will post the second requirement coverage report.
In this article I want to cover the development of some Jazz Reporting Service (JRS) custom reports and dashboard widgets to manage and track the progress of the quality metrics and test execution by the system test organization. We can now build these custom reports very quickly with little or no training with the new JRS Report Builder. In separate installments I will take you step by step through using the Report Builder interface to build the following reports:
Test Case Approval Status
Test Requirements Coverage
Test Execution Results by Iteration
Test Execution Coverage Status
Defect Find and Verification Status
Defects Found by Component
Open Defects by Severity
Background
Within the IBM Collaborative Lifecycle Management (CLM) solution, there are many ways to implement software development projects. As we are developing new versions of the CLM solution, we internally track our project progress and use the CLM solution for our software development environment. This includes using IBM Rational Team Concert (RTC) for change management, planning, build, and source control management. We use IBM Rational Quality Manager (RQM) for quality and test management and IBM DOORS Next Generation (DNG) for describing and managing our product requirements.
As a large development team, we are organized in smaller teams around the product boundaries. We have separate RTC project areas to support the development of the Jazz Foundation, RTC, RQM, DNG, and Reporting subsystems. In addition, there are centralized System Testing DNG and RQM project areas that are used to test the entire integrated CLM solution. All of these project areas are "friended" so that artifacts can be linked across these project area boundaries.
Test Case Approval Status
Step 1
If you are managing a testing effort involving the development of new test cases to cover either new or existing functionality, you will want to track the approval status of your test cases. Once they are approved, test cases are ready for execution as part of an auditable regression test phase (often during the final interval of testing for a release).
Step 2
To look at the status of just those test cases being used for a given test plan (or set of plans), you want to scope the list of test cases by the test plan that they are linked into. Note that you set the relationship to Required so that it properly limits which test cases show up in the list.
Step 3
Using JRS Report Builder you first build a test case report and select your project area or areas where the test plans and cases reside.
Step 4
Then you limit your test cases to only the test plans for which you want to report status by adding a condition on the test plan to which the test cases are linked. Select the name attribute of the test plan and choose the test plans you want in your report.
Step 5
Since test case status is not a default column, you need to add Test Case Status as an attribute column.
Relabel the column to Test Case Status to distinguish it from the Test Plan Status, and move it up in the column order to just under the test case URL column.
Step 6
I would like to look at the results as a horizontal bar graph grouped by status values. Up in the Preview bar, I select Graph instead of Table and refresh the view.
Step 7
Notice that all the test cases are totaled in a single vertical bar and the test case project area name is the label on the x-axis. This is not what I want, so I change the categories on the x-axis to Test Case Status and select "Count values of a single attribute" with a value of Test Case Status. I also set the orientation to Horizontal to get the graph I wanted.
Step 8
By clicking the Save button, the report builder interface moves me over to the Name and share section where I name the report Test Case Status and fill out a brief description of the report and tag it with my name so I can share it with others. Finally I click the Save button again and this saves my report.
Step 9
Clicking continue moves me to the Run report step and displays the final report.
Hello there! Did you wake up this morning worried about this situation? It turns out that most software development shops trying to stay current with the times have to deal with this issue sooner or later. The academic solution is to expand every agile development team into a true cross functional team consisting of scrum leaders, developers, testers, business analysts, and someone playing the role of customer representative. The problem is that this view of the world doesn't understand enterprises – the developer's or the customer's. In this installment, I will focus on the role of testing in and around the agile development team.
In my experience, this situation has significant shortcomings as far as meeting the expectations of the business framework in which it operates. Here are some of the considerations that are hard to reconcile:
How much does my customer representative really know about the business needs of the eventual user of the software we are delivering?
How effective are my scrum team members at integrating the software into a customer production-like environment (including big data, security context, and working interfaces)?
How realistic are the user work flows used to test the software during the sprint?
How do I know that the software will handle the expected volume of user requests?
How do I know if the software will remain operating efficiently for weeks or months without requiring a restart?
Agile Software Development
For an agile team to be effective, it must focus on delivering a few specific customer visible features during each sprint. At the end of the sprint, the goal is for those features (or user stories) to be fully functional and of production quality. The team then demonstrates those features to the customer representative to obtain feedback and confirmation that the feature content meets the customer need as expressed by the user story. Based on the customer's feedback, the team may receive new feature requests to add to or change the behavior of the current state of the software. These items are then incorporated into the sprint backlog of candidate user stories for the subsequent sprints. All of this is old hat to those who have participated in agile development or a scrum team.
The agile team typically runs two or four week sprints throughout the year without pause. At certain intervals determined by the business environment, end of sprint deliveries are then moved into production as the next release of the software. This release process may happen at the end of every sprint, or monthly, or quarterly, depending on the business context.
So at this point we have an agile development team pumping out release candidate software. Notice that I said "candidate" and not "quality". The questions raised above about integration, fitness for customer usage, scalability, and robustness might be very hard to answer within the sprint context. For example, how do you detect and fix a very slow memory leak that might not be easily found without a two week application stress test? If the sprint is only two weeks long, then clearly this test cannot be run entirely within the sprint where the new function was developed.
By this time, I hope that the picture has been painted well enough that you see that enterprise application software development requires a surrounding test framework to make sure that efficient and productive agile development teams can deliver high quality software.
Separate integration and system testing
Because of the need to build and maintain a production-like test environment, a separate team with system administration and operations experience is best equipped to do this. There may be integrations that need to be set up that require a special skill set.
In addition, this team must obtain an understanding of the customer needs and business context to construct realistic user scenarios. They must also populate the test environment with large data equivalent in size and shape to the amount of data expected in the production environment. By running test scenarios that mimic expected customer workflows, the number of customer found defects can be minimized. This team also reviews those customer found defects and builds in validations to ensure that regressions in those areas of the application do not recur. The more customer facing involvement that this team has, the more effective it is in preventing customer found defects in new releases of the application.
Incorporating testing in and around an agile development team
There are certain activities that are closely tied to the development team and tend to be white box in nature. That is, the application architecture and component interconnectedness help determine the best test strategy and where to focus the testing. In terms of styles of testing, this would include unit testing, component integration testing, and basic functional testing. These should be incorporated in the agile development team's activities and considered part of completing a user story within a development sprint. In general, unit tests should be programmatically developed and incorporated in the build process. This means that they are run with each build and their results can be monitored on a build by build basis. Any failures should be immediately resolved, and these unit tests can be used to signal whether a build is worth putting forward for further testing. In addition, after passing unit tests, the application should be automatically installed on a test environment and some basic sanity tests run. These tests should also be automated and considered a build verification test (BVT). Any build not passing these tests should be considered "dead on arrival" and not tested further.
The development, monitoring, and maintenance of unit and BVT tests should be the responsibility of the agile development team and factored into its workload. One way to address this is to attach a unit test work item and a BVT work item as children of every user story and include them in the "cost estimate" or story points for each user story. Likewise, in the case of four week sprints, the developers tend to reserve the last week as an integration and functional test week to make sure everything is working for the end of sprint demonstrations and that independently developed component changes that must interact can be tested thoroughly as part of the sprint. For two week sprints, the developers may use acceptance test driven development: they start by building the functional tests first, and then the actual implementation proceeds until the tests pass. If this doesn't occur within the sprint, the entire user story and associated change sets are not delivered until the following sprint.
There are other testing styles such as system testing, scalability testing, robustness testing, and various performance characterizations that typically cannot be contained within the sprint nor exercised by the development team, as they require specialized skills and potentially take longer than a sprint to perform. Ideally, some of this testing can be started within the sprint by taking early code drops either weekly or possibly more frequently. This permits early detection of integration problems beyond the application borders with other production applications and interfaces that are not easily tested within a developer's test environment. Also, getting an early indication of scalability and robustness can give feedback, while the initial code is being developed, that restructuring or an alternate algorithm design is required. This could also help detect the need for caching or paging functions in places where large or frequently accessed data may be expected in the application. This testing is normally done using the end of sprint deliveries and starts after the functional testing is completed within the sprint. This reduces the amount of time that these tests may be blocked by instability or critical failures. However, in my experience, the earlier in the sprint these tests can be started, the more likely it is that the major problems can be discovered within the sprint and fixed before the end of sprint delivery is made. The post sprint testing can then proceed efficiently and get earlier results in the areas of performance and stability. Early system testing can provide an indication of whether the end of sprint delivery will qualify as a production release candidate. Getting that feedback also provides great value to improve the end of sprint delivery.
In summary, limited testing within the sprint using an early code drop can greatly enhance the quality of the end of sprint delivery. For this reason, your test teams should begin running system tests and performance tests as part of the sprint and plan on a regression pass or stable version performance characterization based on the end of sprint delivery. This approach maximizes early feedback and sprint delivered quality while ensuring that there aren't any last minute changes that cause regressions.
Release schedule implications
System and performance test teams should follow the development team's sprint schedule, with an extra sprint at the end after functionality freeze to complete their release testing. During this extra sprint, documentation, language translations, and packaging for the new functionality can be completed as well, rounding out a complete production quality application. Since testing has proceeded in step with the agile development process, no extended test period is required before the release is at production quality. In my experience working with agile teams over recent years, this concept works very well in practice.
Software Quality Checklist
To produce high quality software, there are a number of dimensions of testing to consider. Here is a list of questions that should be answered for any production quality software delivery:
Proper functionality
Are my user stories a true reflection of the customer's business need?
Do the minimum viable product (functionality) statements cover everything that is mandated for a first release?
Did the customer readout (feedback session) at the end of the sprint result in confirmation that the customer's needs were met?
Architectural consistency
Are the new functions developed in the same user interface style, integrated seamlessly into the adjacent workflows of the application, and is the navigation logical and consistent?
Are the data handling functions done within appropriate contexts taking into account transaction integrity, logging, auditability, scalability, backup, and recovery?
Can the software be hosted in the same enterprise infrastructure – both hardware and middleware – that the other enterprise applications use?
Development and functional test focus questions
Are there automated unit tests to check the integrity of the function and its code paths?
Has the developer built and tested the new functions in their sandbox?
Has the code undergone peer review prior to promotion to the integration stream?
Has a second team member written and executed functional tests based on the user story?
Has the user story been demonstrated to the customer representative at the end of sprint readout session?
Integration and system test focus questions
Has the software build been installed and configured in a production-like environment?
Has the independently developed role-based user scenario been exercised against the test environment?
Does the test environment contain substantial data consistent with the production environment?
Is the user work flow intuitive and easy to follow?
Did the software design meet all the minimum viable product statements from the customer?
Are there any significant functionality gaps in what the customer would expect?
Performance and robustness focus questions
Can dozens, hundreds, or thousands of users exercise this new function simultaneously?
Can the application operate properly and with good performance for a long time using this function?
Are there side effects introduced by exercising this function that impact the performance of the rest of the application?
Does this new function impact the application's scalability?
Can the function be interrupted, and its transactions rolled back and restarted, while maintaining data integrity?
Summary
I have shared some of my experiences working with customers and the internal IBM product development teams. I hope that it stimulates some thoughts about folding all styles of testing into our new world of agile software development.
In general, I am a "glass half full" person, so I try to look for the positives in any situation. There are many things that make using the IBM Rational Jazz platform an interesting and rewarding proposition for our customers. One of those is being able to access virtually any software development project asset by a URL. You can send this URL to anyone on your project (or even outside your project if guests have read-only access) and that person can click on that URL, authenticate, and start examining that project asset. This is a huge boon to software development! If you think about it, you didn't send the recipient a copy that may be outdated in minutes, hours, or days, but a link to the live asset in its process context. Its links to other related assets are all "live" and can be used to navigate to the assets providing context to that asset. No more proliferation of outdated versions of assets. No more worries about whether the asset has changed since it was sent to you.
Now with all that said, there are some implications of using URLs to reference project assets that you need to be aware of. If that asset is moved or deleted, you end up with a useless broken link. The current state of the Jazz technology is that you don't ever want to change or break that link. This is the genesis of the absolute rule that you cannot change the Public URL for your Jazz server. You don't want to break a bunch of asset URLs that have been distributed to project members.
However, the IT business is very fluid, and customer infrastructure is always changing, growing, morphing into a new way of supporting software development every day. The lifetime of a server system hosting development assets may be as short as two years or possibly as long as five or ten years. Projects may start out with five people and grow to two hundred people over the course of a five, ten, or more year lifetime. How do you plan for this public URL that can never change?
From my perspective (and I believe it is a sound "best practice"), the Public URL of a Jazz server should be created with the intent that it be a long-lived logical address to a repository of Jazz hosted assets. This means it is not a physical server name and it is not the system name of an operating system instance running in a cloud or virtual image. The hostname component of the Public URL should be a logical Jazz server name. The example I like to use is jts01.myServerFarm.myCompany.com for the Jazz Team Server and qm01.myServerFarm.myCompany.com for the Quality Management Server.
As for the implementation of these servers, it is totally independent of where and on what hardware the server is hosted. Using a DNS entry, you can steer the requests to whatever host the server is actually running on.
Of course, as usual, the devil is in the details, and it is no different for the Public URL. The whole Public URL must not change! This includes the hostname, the port, and the context root for the Jazz application that is processing the request. In that vein, for new installations of the Jazz Team Server the public URL may be something like https://jts01.NAjazzservers.ibm.com:9443/jts for its address, and a new Change and Configuration Management Server would use https://ccm01.NAjazzservers.ibm.com:9443/ccm for its address.
One of the nicest features of the new Release 3.0.1 version of the Jazz products is that you can host one or more of the applications on the same web application server. This can be done to save on deployment costs (of having all individual servers – one per application) with the understanding that later on, you may want to separate out heavily used applications onto their own servers. There is really very little "routing" that you must change to make this transition happen. Let's go through an example of a small team's deployment of the full Collaborative Lifecycle Management capabilities on a single application server and how it can be migrated, without breaking any Public URLs, to a distributed set of servers when the tools user base grows (possibly within the existing projects).
Starting with a single web application server with a host name of webapp027.NAservers.ibm.com, we add four aliases to that server name in the DNS:
- jts01.NAjazzservers.ibm.com
- ccm01.NAjazzservers.ibm.com
- qm01.NAjazzservers.ibm.com
- rm01.NAjazzservers.ibm.com
The Public URLs for these Jazz applications are set as follows:
- https://jts01.NAjazzservers.ibm.com:9443/jts
- https://ccm01.NAjazzservers.ibm.com:9443/ccm
- https://qm01.NAjazzservers.ibm.com:9443/qm
- https://rm01.NAjazzservers.ibm.com:9443/rm
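To make the aliasing concrete, here is roughly what those DNS entries could look like if your zone happens to be maintained as BIND-style records (a sketch only, using the same example hostnames; your DNS team may use entirely different tooling):

    ; Four logical Jazz hostnames, all resolving to the one physical web application server
    jts01.NAjazzservers.ibm.com.  IN  CNAME  webapp027.NAservers.ibm.com.
    ccm01.NAjazzservers.ibm.com.  IN  CNAME  webapp027.NAservers.ibm.com.
    qm01.NAjazzservers.ibm.com.   IN  CNAME  webapp027.NAservers.ibm.com.
    rm01.NAjazzservers.ibm.com.   IN  CNAME  webapp027.NAservers.ibm.com.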
Then six months later you bring on 50 more developers and 100 more testers as your projects ramp up. This necessitates moving the JTS and QM servers off of the existing server to provide additional server capacity. In order to accomplish this change, you are provided with two additional hardware servers: webapp028.NAservers.ibm.com and webapp029.NAservers.ibm.com. After installing and configuring the JTS application on webapp028 and the QM application on webapp029, you change the DNS entries so that jts01.NAjazzservers.ibm.com is an alias for webapp028 and qm01.NAjazzservers.ibm.com is an alias for webapp029. The web applications are relocated but still point to the original databases containing the Jazz repository data for their respective applications. This operation can be accomplished overnight without any hesitation, as the actual repository was never touched during this process. After doing this distribution of the applications, the CCM and RM applications are still running on the original webapp027 server, while JTS is running on webapp028 and QM on webapp029.
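In DNS terms, the overnight move amounts to repointing just two of those aliases (again a sketch with the same example hostnames):

    ; After the move: only the JTS and QM aliases change their targets
    jts01.NAjazzservers.ibm.com.  IN  CNAME  webapp028.NAservers.ibm.com.
    qm01.NAjazzservers.ibm.com.   IN  CNAME  webapp029.NAservers.ibm.com.
    ; ccm01 and rm01 continue to point at webapp027.NAservers.ibm.com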
In summary, by using static logical host names as a component of the Public URL, you can safely and easily adjust your server topology based on your organization's changing needs. Hopefully, this strategy can be useful for your new Jazz product deployments.
There are many configurations that you can use when deploying the CLM products on a single web application server. When you are doing this, there are a number of best practices to consider:
Create separate virtual hostnames for each of the application instances so that the public URL can be preserved when that application instance needs to be split out onto its own separate hardware server.
Although it is possible to host multiple IBM WebSphere Application Server profiles or multiple Tomcat servers (each running its own JVM on a separate port), these applications can all be run on a single profile or Tomcat server and share the JVM resources. This is much more memory and CPU efficient than running four JVMs and having the operating system schedule and switch among them.
If you are stuck with multiple applications having the same host name in their respective public URLs, there is a way, using an HTTP server as a reverse proxy, to steer the application requests to separate back-end application servers. In fact, this is a common configuration for very large Jazz installations such as the one hosted on jazz.net. (A sketch of this kind of configuration appears just after this list of practices.)
When you are starting with a concurrent active user load of under 100, you can comfortably deploy all of the applications on a single mid-range 64-bit hardware server with multiple CPUs and 6-8GB of memory on a 64-bit operating system. This assumes that the Rational Reporting for Development Intelligence and all of the databases are hosted on other servers. As your concurrent active user load increases you can split out the CCM (Change and Configuration Management) application if the growing segment of user load is using work items, planning, or source management functions. You can split out the QM (Quality Management) application if the growing segment of user load is authoring tests, doing test planning, or executing manual or automated tests. You can split out the RM (Requirements Management) application if the growing user load is defining requirements, story boarding, business process diagramming, managing requirement collections, and setting requirement attributes and link types. As the total concurrent user population increases across the board, you may want to host the JTS (Jazz Team Server) on its own hardware server to permit full capacity growth.
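For the reverse proxy approach mentioned above, the fragment below sketches the general idea using Apache HTTP Server (or IBM HTTP Server) mod_proxy directives. The single public hostname and the back-end hostnames are hypothetical placeholders of my own, not the jazz.net configuration; treat it purely as an illustration of the routing idea:

    # Hypothetical reverse proxy fragment: one public hostname, several back-end servers.
    # Requires mod_proxy and mod_ssl to be loaded; substitute your own hostnames.
    SSLProxyEngine on
    ProxyPreserveHost on
    # /jts requests go to the Jazz Team Server back end
    ProxyPass        /jts https://jtshost.internal.example.com:9443/jts
    ProxyPassReverse /jts https://jtshost.internal.example.com:9443/jts
    # /ccm requests go to the Change and Configuration Management back end
    ProxyPass        /ccm https://ccmhost.internal.example.com:9443/ccm
    ProxyPassReverse /ccm https://ccmhost.internal.example.com:9443/ccm
    # /qm requests go to the Quality Management back end
    ProxyPass        /qm  https://qmhost.internal.example.com:9443/qm
    ProxyPassReverse /qm  https://qmhost.internal.example.com:9443/qm

With this in place, users always see URLs under the one public hostname, and you remain free to move the back-end applications around behind the proxy.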
The bottom line is that you can start small on a single application server and migrate to larger and more distributed hardware as your user load increases.
A customer question came in about our stated multi-CPU server requirement for the Jazz products.
When we say dual CPUs with quad CPUs preferred as our system requirements, what do we really mean?
My response follows and the discussion will make the hardware folks cringe at its vagueness but you will get the point.
------------------------------------------
The requirement on multiple CPUs and/or cores is specified very loosely. The idea is that all of the Jazz products run on a JVM that can make full use of any and all parallelism available to push transactions through the server. We have purposely not specified cores, as the more you have, the better thread parallelism can be achieved, but we do not wish to overconstrain the problem.
Many of today's servers come in a dual CPU (with dual or quad core / CPU) configuration that might be considered the nominal mid-range Intel architecture server. This is an example of a server that nicely supports the distributed configuration described in the CLM sizing guide (Collaborative Lifecycle Management Sizing Guide).
Getting more precision out of our guidance is not realistic because the customer's usage of the product is likely to vary enough from our performance workload as to make more specific recommendations pointless. You will have to deploy to production, allow your users to learn the product and develop their usage pattern, before you can begin to gauge how your organization's workload stresses your specific hardware configuration.
Just try not to start with 10-year-old hardware, and make sure you have enough memory.
A customer's question: How do I know when my Jazz server is reaching its capacity?
By the way, although this question was answered specifically for capacity expansion of the IBM Rational Team Concert product, it would generally apply to the other Collaborative Lifecycle Management products as well.
------------------------
There are actually several things in play when trying to decide when to split off yet another CCM (Change and Configuration Management) application instance.
1. Can I start all new component developments, projects, and activities that are loosely coupled from those resident on other CCM instances? Sharing of components, plans, and work items is harder and less visible across CCM project area and server boundaries, so you need to take into account when new project areas should originate on a new server rather than be a part of an existing project area or server.
2. Can I merely grow the current CCM by moving it to a more powerful virtual or physical system and maintain and grow my current project area structure? Remember that the application server HOSTNAME and public URL have to be maintained, but otherwise you can change out the actual physical or virtual hardware (renamed to the same HOSTNAME) and keep on expanding your current CCM system.
3. Do I also need to expand my number of JTS (Jazz Team Server) application servers at the same time because its capacity has also been reached? There is a limit of roughly 2,000 concurrently active users that can share a single JTS server (on a midrange server), which must also be considered.
I would monitor the CPU utilization and the Java heap utilization on the CCM and think about creating the next project area on a new CCM server when you have reached a high water mark (averaged across several minutes) of 50 to 70% CPU utilization, depending on how dynamic your utilization is. The more volatile it is, the sooner I would stop adding additional projects to that server. Just make sure that the allocated heap memory is much smaller (by 1-2 GB) than the physical server memory (assuming only a CCM server is running on that system) and is staying no more than 70-80% allocated.
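As a rough illustration of that heap guidance (a sketch only; the exact startup script, variable name, and garbage collection options depend on whether you run Tomcat or WebSphere and on which JVM your installation uses), a dedicated CCM server with 8 GB of physical memory might be started with something like:

    # Hypothetical JVM memory settings for a Tomcat-based CCM server on an 8 GB machine.
    # Keep the maximum heap 1-2 GB below physical memory and log GC activity so you can
    # watch how full the heap stays over time.
    JAVA_OPTS="$JAVA_OPTS -Xms4g -Xmx6g -verbose:gc"

If the GC log shows the heap consistently above the 70-80% mark, that is your cue to add memory, raise the heap (within physical limits), or stop adding projects to that server.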
REMEMBER that project area movement from one CCM server to another or from one JTS server to another is not supported. Growth should be done as you are adding project areas and keeping room for internal project growth as you allocate projects to servers.
First of all, this is not a novel or ground-breaking approach but rather one based on standard industry practice. In fact, the test plan is one of the standard documents called out in the IEEE 829 "Standard for Software Test Documentation". The Wikipedia entry for IEEE 829 calls out the following topics to be covered in a test plan:
- How the testing will be done (including SUT (system under test) configurations)
- Who will do it
- What will be tested
- How long it will take (although this may vary, depending upon resource availability)
- What the test coverage will be, i.e. what quality level is required
As you can see, RQM has taken this as a basis for some of the sections in the basic test plan artifact. In addition, by examining "What will be tested" and combining it with the Rational Jazz platform notion of keeping information in one artifact and linking it wherever it should be referenced, you can see where the linkages to test cases come into the test plan. The notion of "What the test coverage will be" is clearly the genesis of why requirements or requirements collections are linked into the test plan.
For a moment let's talk about the need to report the team's progress against a test plan. When you want to look at testing progress, there is a need to look at the pile of work that has been completed and that is still to be done. Also, you want to look at what team members are assigned and working on, as well as how the team's progress tracks against the time line put together for the testing project. As seen above in the content of the test plan, the notion of "How long it will take" is included. In RQM, the test schedule section includes testing intervals which define WHEN the test case execution will be performed. To best track the project, you want to report against the scheduled work that is supposed to be completed in the testing intervals. Running test execution reports against a test plan provides an automated way of getting this information in an easily consumable format. Of course, in order to get this information to come out, each of the test cases that should be executed during a given test interval needs to be:
- Linked to the test plan
- Covered by test execution records (TERs) that assign the test case execution to the test intervals of the test plan
I have found that many of our customers do not understand nor see the value in the test execution record as an integral component of scheduling the testing work. It is often overlooked as part of the test planning function that is typically done by the test leads or managers rather than the individual writing the test case or test script. Because it is attached to the test case, it is often viewed as a part of test creation focused on identifying the test environment rather than its planning function. For those who need a refresher, the test execution record maps a test case to a test environment and to a test interval in a test plan. Other items identified include the "owner" of the test case execution and which test script is to be run. This test execution record can be seen as a plan item for execution of this test case as a part of the test plan. Once the TERs have been created against the test intervals of a test plan, the reports provide very clear and detailed testing project status against the test plan.
In summary, the test plan is the centralized and most important test asset in RQM for providing test reporting. Without the test plan established and elaborated, there can be no reasonable tracking of project progress against a plan. Any attempt to count test case executions without the use of test plans is difficult because it is expected that many projects (and releases) may exist in the same project area so that asset re-use is possible and encouraged. Furthermore, if release N's functional tests become a subset of release N+1's regression tests, you definitely want to keep multiple testing projects in the same project area in RQM. Separating out test execution results must be done by associating the test executions (test execution results) with a test plan at the time of execution. This is done as a planning step by generating TERs for all test cases in the test plan. Reports are then run against a test plan.
Often application functionality is tested by executing business workflow sequences. These workflow sequences can be broken down into logical business transactions. Each of these business transactions can be developed independently and maintained as a separate manual test. These manual tests can be used to implement multiple workflows.
To get maximum value from this concept, you should implement each of these business transactions as a separate test case with a corresponding manual test. The manual test starts from a known application state, executes the business transaction, and verifies its results in the application.
Use Test Suites for Business Workflows
For the business workflow testing, you design a test suite that is basically a sequence of these business transactions represented by a number of test cases (and their corresponding manual tests). By implementing workflows in this way, if a new workflow introduces an enhanced capability with a small addition to the application, the testing of this workflow can be put together very quickly. Most of the workflow will consist of existing business transactions, with either a few being modified to take into account the new functionality or a new business transaction being added if it is a completely new step in the workflow.
As an example, in web testing where each screen tends to contain data fields to be filled in and potentially an enforced sequence of steps, this may map to a single manual test. Let's begin with the assumption that screen-to-screen navigation is not linear but rather a network of possible workflows. By implementing each screen or set of screens that have a definite sequence as an independent test case (and manual test), you can build a catalog of test cases from which you can construct business workflows through your application.
Impact of Blocking Defects
When a test case is blocked, all of the test suites (workflows) containing that test case are effectively blocked as well. This highlights the importance of fixing blocking defects toward the completion of the testing effort. As you report on your testing status, you want to highlight how many of the test cases cannot be executed due to blocking product defects. Currently the only way to do that is to attempt each of the test suites containing the test case which is blocked. This permits the execution of all the preceding test cases up to the blocking test case but doesn't permit any of the subsequent test cases to be executed. By looking at the overall test cases planned, you can see that there are many test cases not run and maybe a few that are blocked. The status is a little misleading if there are a few blocking defects that are present in a large number of the test suites.
View Requirements Impact
By viewing the test suite results, you can see whether a large percentage of the test suites attempted ended up blocked. By running the "Plan Requirements Defect Impact" report in RQM, you can see which requirements are impacted by defects and then trace those "blocked" requirements over to the test cases and test suites related to them. This is one way to assess the impact of the current blocking defects. In the case of an integrated tool solution, this report must be run against the combined data warehouse, either through the Insight product or in the new CLM 3.0 product set.
There are a number of customers happily using Manual Tester to document and execute their manual tests for their software testing projects. This tool provides an easy way to formally document the user steps, provide test input data, and validate the expected responses from the software under test. If you have dozens or hundreds of these tests, you can make use of Rational Test Manager to manage whole projects of tests and their corresponding results. By implementing a folder structure and possibly making use of test suites, you can organize your tests and results as well.
There is now a new generation of test management and manual test assist technologies available from IBM Rational called Rational Quality Manager (RQM). This web server based tool is built on the Jazz collaboration platform and has many enhancements and advanced capabilities not present in the Test Manager and Manual Tester environment.
One of the advantages of Quality Manager is the zero client install footprint on the tester's test machine environment. Since RQM is browser based, you don't have any test software product to install on all your test machines. This permits you to wipe out and reimage the test machine and only install the software being tested. A second great advantage is its ability to support testers in geographically distributed locations from a central server and provide real time status reports of how that testing is progressing. The third important benefit is the ability to collaborate through the tool platform, assigning tasks or defects to whomever on the team needs to address that particular issue. This includes a personal "To Do" list for each team member as well as the ability to monitor the whole team's activities as they happen.
So now that we have discussed the reasons that you may want to move forward from the individual productivity enhancements afforded by the Test Manager and Manual Tester approach to the team oriented, collaborative approach of Quality Manager, what do you have to do for the migration of your manual testing project? Basically, the RQM product comes with two migration tools, one for migrating a Test Manager project to RQM and one for migrating individual Manual Tester tests to RQM.
There are three steps to the migration for those moving to RQM:
1. Migrate all of your manual tests from Manual Tester to RQM.
2. Customize (if desired) any of the custom fields in Test Manager assets using the XML mapping file so that data comes over into the proper fields for Test Plans and Test Cases.
3. Run the Test Manager Migration Wizard on a test machine with all of the Test Manager and Manual Test assets in their proper directories so the linkage is preserved over in RQM.
Obviously, going from Windows GUI applications like Rational Test Manager and Manual Tester to a web based interface is a big change from a usage model perspective. The layout of controls, the translucent window, the compact views are all gone in the browser interface. However, by adjusting the height of the browser panel or using dual or wide monitor space, you can find geometries that permit the visibility of the application under test as well as the manual test execution view in the browser.
Many of our current customers are undergoing this transition, and most have done so with little trouble during the asset migration process. The usability of the manual test execution from within RQM is not quite as convenient but is close to the capability of the Manual Tester product. Its great advantage is its seamless integration with the test management and execution views within RQM as well as the roll-up of testing progress across all of the testers on the team.
For those customers linking to Requisite Pro based requirements, the story is just as impressive. By specifying the Requisite Web server during the Test Manager migration, you even get the Requisite Pro requirements linked in to the test cases as was the case with the original project assets. No changes are required on the Requisite Pro side as long as it is an up-to-date version (7.1.1 or later) of the Requisite Web server. This server can then be seamlessly integrated as your requirements server for RQM as well.
For anyone contemplating moving from Rational Test Manager and Rational Manual Tester to RQM, I hope this has been a helpful overview of the movement forward. I have attempted to give a little perspective on the benefits of moving ahead to the new IBM Rational Jazz collaborative platform for teams.
I guess folks have become conditioned in our technological world to divide things up into little buckets so they can better understand what is going on. Unfortunately, Rational products have had several different "bucketing" schemes depending on which ones you have experience with. Rational Robot and Rational TestManager, for example, had a test repository that we called a project. This project contained all of the test assets and results associated with a particular testing effort. Although it worked pretty well for a small team working together, the test repository pretty much had to be local (on the LAN) with the testers and the test machines or it did not have acceptable performance characteristics. Rational ClearQuest introduced a slightly different concept with its schema repository, which links up with a project from TestManager but is considerably broader in scope. The schema repository holds all change records, action items, requests for enhancements, defect records, or any arbitrary records that you want to track across your entire business area or possibly even your entire enterprise.
When the Jazz platform was introduced and a new family of products was built on it, they decided on yet another similar (but different) concept as its partitioning unit: the project area. This project area concept is still different from either of the previous two concepts. The project area is more akin to the schema repository of Rational ClearQuest than the project from Rational TestManager. The project area provides complete isolation of all stored artifacts from those stored in other project areas. This permits a single installation of a Jazz server to provide its services to multiple totally independent projects or business units. However, end users are so accustomed to chopping up their projects into separate partitions that their first instinct is to put every project into a separate Jazz project area. This is a huge mistake in a lot of cases! If, for example, you wish to share or re-use assets across projects or releases, you will need to have them all in a single project area. There have been features added to Jazz products, such as Rational Quality Manager's concept of Test Plan or Test Case categories, to facilitate viewing project specific data. These categories permit per project reporting and management while keeping all of the test assets together in one project area. The intent of the products' design is to permit easy sharing, re-use, linking, and copying of assets within a project area so the users can benefit from the productivity advantages of these concepts. Hopefully, as we discuss the Jazz products more with our customers, they can get comfortable with not slicing and dicing their project up into separate project areas. Over the past few months, I have found that this single issue has engendered user disappointment over the lack of cross-project area linkage when it is really not intended nor needed if the customer's usage of the tool is properly aligned with its design. If you have a customer (or if you are one), please take this discussion into consideration during the application of the new Jazz products.
Visiting Testing Shops in America
In the last several months I have had the opportunity to spend time with managers of several large testing teams. Each of these managers was the leader of a test team tasked with testing dozens of applications that make up a portion of the hundreds of applications that must be tested annually by their larger organization. In this entry I will describe some of what I found.
Tasks and Status
Let's talk about test status reporting and passing out daily testing tasks. This is what I call the micromanagement of test teams. Here is a composite view of what these test managers described as their normal process for doing this part of their job:
Based on information from the build team, they spend up to 30 minutes per day assigning tests to individual team members to be run that day.
Based on individual members' test status spreadsheets, they spend up to 1 hour per day building a summary spreadsheet reporting status on the team's testing progress.
Each individual on their team spends up to 1 hour per day recording their testing status in a spreadsheet that is then emailed to the team lead as their daily test results.
Each week, in preparation for the Director's operations meeting, the test manager spends up to 4 hours rolling up a weekly team status from the team members' individual status spreadsheets.
The typical test team (across those I talked to) had around 10 members and spent over 20 hours per week summarizing test status and rolling up already-known test results. At a loaded cost of $50,000 per year per tester, those 20 hours per week amount to half of one person's time, or about $25,000 per year spent counting beans rather than testing. Imagine a large testing shop with 20 or even 50 test teams: it could be spending $500,000 to well over $1 million per year just counting how much testing got done!
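To make the arithmetic explicit, here is a minimal sketch of that back-of-the-envelope calculation. The hours, work week, salary, and team counts are the assumed figures from the discussion above, not measured data.

```python
# Back-of-the-envelope cost of manual status reporting, using the assumed figures above.
hours_per_week_on_status = 20   # per team, spent summarizing and rolling up status
work_week_hours = 40            # assumed standard work week
loaded_salary = 50_000          # assumed loaded cost per tester, per year

fraction_of_one_person = hours_per_week_on_status / work_week_hours  # 0.5 of a person
cost_per_team = fraction_of_one_person * loaded_salary               # $25,000 per year

for team_count in (20, 50):
    total = team_count * cost_per_team
    print(f"{team_count} teams: ${total:,.0f} per year on status reporting")
# 20 teams: $500,000 per year; 50 teams: $1,250,000 per year
```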
Manual Testing of New or Changed Functionality
The central function of these test groups was to test the new or changed functions being added to existing applications. In 100% of these large testing shops, there was little or no test automation in use for this work. In most cases, a standardized Excel- or Word-based template was used to capture the steps a tester must perform to execute the test scenario. How the input data and expected results were captured as part of the test scenario varied from shop to shop, but there was a recognized need to represent both as part of the test procedure. In each of these test groups, the test results were manually entered into a results spreadsheet kept by the tester. Defects found in the application were filed in the organization's defect-tracking tool, and the process relied entirely on the tester to enter enough information for the defect to be reproduced and analyzed by the development team. In general, there was no linkage between related test artifacts: the test scenario template was separate from the test results spreadsheet, which was separate from the defect record documenting the defect found by the test. The tester's end-of-day status spreadsheet was the only way of connecting Test Scenario X with Defect Y.
In the more disciplined shops, the test scenario name was at least included in the defect description. The location of the manual test document (a.k.a. the test scenario) could then be derived from the data recorded in the defect record, so the steps to reproduce the problem did not have to be manually cut and pasted from the manual test document into the defect record. Furthermore, the manual test documents were accessible to, and used by, the developer in reproducing the problem, fixing it, and verifying the fix.
Test Case Design and Manual Test Authoring
In most cases, a business analyst has written either a functional requirement, a use case, or a business process flow document that clearly describes the business user flow implemented by the developer of the feature. This document is used as the basis for the formal test case and/or the manual test (scenario). The level of detail and the inclusion of sample test input data varied in style, even within the same set of manual tests. In numerous manual tests, there was only a statement asking the tester to observe whether the result of the operation "worked". In one shop, there was an expectation that a screenshot would be captured as part of the test results document; this was needed in their industry, for auditing purposes, as proof that the test was actually performed and passed. While there was a notion that at least one test case should exist for each of the new or changed business requirements (or use cases), it was not clear how anyone tracked how many of the requirements had designed test cases or how many of those tests had been run at any given time.
Summary
My customer visits with these highly dedicated and motivated test managers helped me understand the project pressures and pain points that exist in their jobs. While I was somewhat disappointed by the technologies in use, I was struck by how disciplined each of these individuals was and how much effort it took for them to regularize and roll up the testing status information from their team's efforts.
In each case, I was visiting to help them evaluate and understand how Rational Quality Manager could bring real business value to their team's work. Indeed, by providing a common framework for storing test plans, test cases, manual tests, and test results, Rational Quality Manager could save them a large percentage of the time spent on these daily and weekly tasks. By linking the requirement to the test case, the test case to the test result, and the test result to the defect, you can eliminate the need to duplicate or even copy data from one place to another. With a common repository for all test artifacts, real-time project status can be gleaned by simply logging in to the RQM desktop, and it can be reported out by generating a PDF to attach to your status email. Maybe one day the directors of testing shops will even log in themselves, get their own status, and not require an operations review of projects that are on target. The resulting brainpower savings could be focused on helping rescue the projects that are in trouble! But then, some would say I am a dreamer.
I recently had a pair of encounters with two different audiences that were both enamored with Exploratory Testing. This was interesting to me because I had not encountered much buzz about it in my previous several years of working with customers all over the world. It turns out that this term was coined by Cem Kaner 25 years ago as a handle for the style of strategic, risk-based testing used routinely by Silicon Valley firms. Over the last five years or so, both he and James Bach have been publishing books, papers, and tutorials on this concept.
First, I must admit that I have not yet read any of these resources, and I hope to inform myself better in the near future. With that said, it occurs to me that, in the context of the two forums where ET was being touted so highly, the term has much in common with "Agile Development". There are both good and bad usages of the term; it can mean many things to many people, and it can be either an excuse for a poor, undisciplined process or, properly applied, the only sane way to spend your testing resources.
What exploratory testing should not mean:
Unplanned random walks through an application in hopes of stumbling across an undiscovered defect
Randomly chosen data values (some of which make no sense) tossed into input fields in hopes of blowing up the application
License for a tester to explore at random whatever corner of functionality reveals itself from blindly clicking through the application's forms while entering minimal data
Permission for the tester, upon finding a defect, to ignore the "steps to reproduce" part of the write-up of what happened
Removing the tester's obligation to document or record their test results as an accounting of what and how they tested the application
To me, exploratory testing is just another term, akin to risk-based testing, for something that every testing effort should employ. Below are a few ideas that immediately come to mind for spending your testing resources wisely rather than simply trying to cover the waterfront of every possible part of your application (a small prioritization sketch follows the list). As we all know, exhaustive testing is not cost effective, or even possible, in today's extremely complex systems.
Start by testing the architectural risks of your application -- defects here require major redesign
Test application areas of complexity in the new release (assuming multiple releases exist) -- defects here require design changes and possibly lengthy rewriting phases
Test application areas where high concentrations of defects have been found previously -- defect riddled code is very difficult to reuse or modify without defects occurring
Using static analysis tools, test application functionality that uses objects that are identified as knots and tangles -- these areas are core parts of many operations
Test application areas using interfaces to legacy and external systems -- functionality may not be fully understood or clearly documented
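As a purely illustrative sketch (not an RQM feature), the list above can be thought of as assigning each application area a weighted risk score and testing the highest-scoring areas first. The area names, risk factors, and weights below are hypothetical assumptions chosen only to show the idea.

```python
# Hypothetical sketch: rank application areas by a simple weighted risk score.
# Weights roughly mirror the risk categories listed above; all values are illustrative.
WEIGHTS = {
    "architectural_risk": 5,    # defects here force major redesign
    "new_or_complex": 4,        # complex new or changed functionality in this release
    "defect_history": 3,        # high concentration of previously found defects
    "knots_and_tangles": 2,     # flagged by static analysis as tangled code
    "external_interfaces": 2,   # legacy or external system boundaries
}

# Hypothetical application areas, each marked with the risk factors that apply (1 = present).
areas = [
    {"name": "Payment engine", "architectural_risk": 1, "new_or_complex": 1,
     "defect_history": 1, "knots_and_tangles": 0, "external_interfaces": 1},
    {"name": "Report viewer", "architectural_risk": 0, "new_or_complex": 1,
     "defect_history": 0, "knots_and_tangles": 1, "external_interfaces": 0},
    {"name": "Login page", "architectural_risk": 0, "new_or_complex": 0,
     "defect_history": 0, "knots_and_tangles": 0, "external_interfaces": 0},
]

def risk_score(area):
    """Sum the weights of every risk factor present in this area."""
    return sum(weight for factor, weight in WEIGHTS.items() if area.get(factor))

# Spend testing effort on the riskiest areas first.
for area in sorted(areas, key=risk_score, reverse=True):
    print(f"{area['name']}: risk score {risk_score(area)}")
```

The point of the sketch is simply that "exploratory" effort gets directed by explicit risk criteria rather than by random wandering, which is the same strategic idea behind risk-based testing.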
Once these ideas were articulated as a strategic, "risk reduction" way of testing an application, the burning need to support exploratory testing was reduced to a comment along the lines of "Oh yeah, that's what I wanted to see." In any case, the risk-based testing support in RQM seems adequate to address this need.
In summary, I need to read more about exploratory testing to make sure I haven't missed anything that could help distinguish our testing tools from others in the marketplace. At the very least, there is enough overlap with risk-based testing that discussing it quelled any concerns about the RQM feature set.