Tests

The Testing tab is available when editing any source. It lets you define any number of tests that can be used to evaluate the functionality of your source.

To add tests, open the Testing tab; you will be taken to its Configuration subtab. Here you can add preconfigured test functions to your source by selecting one from the drop-down menu and clicking Add. You can optionally specify parameters to customize your test, then click OK to save.

Once you have added tests for a source, click the Execution subtab, then click the Run Tests button. The browser window refreshes automatically until testing is complete.

The test results are displayed in this window, and you can return here at any time to see the results of the most recent run. Results are color-coded: green signifies that a test completed successfully, red signifies that a test failed, and yellow signifies that a test is currently running. Details about failed tests are displayed in the interface, and a debug link is supplied to aid in determining why a test failed.

If you are running tests on a source bundle, the bundled sources are tested first; the bundle itself is then tested, if it has tests of its own.

Another way to run source tests is to schedule them. It is highly recommended that you create rigorous source tests and schedule them to run at a frequent interval, especially if you are meta-searching sources that you do not control. A simple way to do this is to schedule testing of the source All, which should be a bundle containing all of the sources you make available to your end users in your Watson™ Explorer Engine application.
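Conceptually, scheduled testing just means triggering a test run on a fixed interval. The sketch below is a minimal illustration of that idea, not of the Engine's Scheduler itself: `trigger` is a hypothetical stand-in for whatever starts a test run in your deployment, which the Engine's built-in Scheduler normally handles for you.

```python
import time

def run_on_interval(trigger, interval_seconds, max_runs):
    """Invoke a test trigger repeatedly at a fixed interval.

    `trigger` is a placeholder callable that starts one test run and
    returns its result; `max_runs` bounds the loop so the sketch
    terminates (a real scheduler would run indefinitely).
    """
    results = []
    for i in range(max_runs):
        results.append(trigger())
        if i < max_runs - 1:
            # Wait out the interval before the next run.
            time.sleep(interval_seconds)
    return results
```

A frequent interval matters most for meta-searched sources: the remote site can change its markup or availability at any time, and scheduled tests are what turn that silent breakage into a visible red result.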