Providing a custom implementation for tests

Provide fully customized and dynamic testing logic.

When you want to use custom data providers with test suites, you start by generating a .tstx scenario file that defines the tests that you want to perform. Your custom data provider implementation supplies the input and expected data so that the selected tests can run.

However, with this approach you are limited to a set of predefined test operators. To provide fully customized and dynamic testing logic, follow these instructions.

Implementation class

To write a custom implementation of tests that run within a test suite, you need to define a class within your XOM that extends the com.ibm.rules.cdi.runtime.ext.BaseTestProviderListener class:

  • Implement the method getJobContext().
  • Implement the method test(Scenario scenario, IlrSessionRequest request, IlrSessionResponse response).
Note: Your XOM projects must reference the distributed jrules-cdi-api.jar library for compilation.
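
At a glance, a minimal compilable sketch of such a class looks as follows. The class name MyCustomTester is only an example, and the two method bodies are placeholders that are detailed in the sections below:

import javax.batch.runtime.context.JobContext;

import com.ibm.rules.cdi.core.Scenario;
import com.ibm.rules.cdi.runtime.ext.BaseTestProviderListener;
import com.ibm.rules.cdi.runtime.ext.CustomScenarioTestResult;

import ilog.rules.res.session.IlrSessionRequest;
import ilog.rules.res.session.IlrSessionResponse;

public class MyCustomTester extends BaseTestProviderListener {

    @Override
    public JobContext getJobContext() {
        // placeholder: see "Implementing getJobContext()" below
        return null;
    }

    @Override
    public CustomScenarioTestResult test(Scenario scenario, IlrSessionRequest request, IlrSessionResponse response) {
        // placeholder: see "Implementing test(...)" below
        return null;
    }
}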

Implementing getJobContext()

The implementation of this method is always the same, because the Decision Runner needs it for contextual purposes.

In your implementation class, use injection to implement this method:

import javax.batch.runtime.context.JobContext;
import javax.inject.Inject;

import com.ibm.rules.cdi.runtime.ext.BaseTestProviderListener;

public class MyCustomTester extends BaseTestProviderListener {
    @Inject
    JobContext jobContext;

    @Override
    public JobContext getJobContext() {
        return jobContext;
    }
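
    // the test(...) method, which is required to complete this class,
    // is implemented in the next section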
}

Implementing test(Scenario scenario, IlrSessionRequest request, IlrSessionResponse response)

Your custom testing logic takes place inside this method, and the method returns an object that contains the status of your tests.

When a test suite runs, this method is called for each scenario defined in that test suite. The scenarios are defined by your custom data provider implementation. Depending on the scenario that is received, you can implement any test logic, making use of the following elements as needed (see the sketch after this list):

  • Input ruleset parameters, using request.getInputParameters() or request.getInputParameter(String name).
  • Output ruleset parameters, using response.getOutputParameters().
  • Any other information available from the request, response, and scenario parameters.
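
For example, here is a minimal sketch of these accesses inside test(). It assumes that request.getInputParameters() returns a map like response.getOutputParameters() does, and that the ruleset has an input parameter and an output parameter that are both named "invoice" and typed with the Invoice XOM class of the full example below:

// read an input ruleset parameter, as it was sent to the Decision Runner
Invoice inputInvoice = (Invoice) request.getInputParameters().get("invoice");
// equivalent accessor for a single input parameter
Invoice sameInvoice = (Invoice) request.getInputParameter("invoice");

// read an output ruleset parameter, as computed by the ruleset execution
Invoice outputInvoice = (Invoice) response.getOutputParameters().get("invoice");

// read information from the scenario itself, for example its identifier
String scenarioId = scenario.getId();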

To create the test results, you use the following utility methods, which are inherited from the BaseTestProviderListener class:


    /**
     * Creates an empty scenario test result, to which individual test results can then be added.
     */
    public CustomScenarioTestResult createScenarioResult();

    /**
     * Creates a scenario test result containing the specified individual test results.
     */
    public CustomScenarioTestResult createScenarioResult(List<TestResult> testResults);

    /**
     * Creates a scenario test result containing the specified individual test results and runtime errors.
     */
    public CustomScenarioTestResult createScenarioResult(List<TestResult> testResults, List<RuntimeError> errors);

    /**
     * Creates a custom test result.
     */
    public TestResult createTestResult(String operator, String testName, TestStatus status, Object[] expectedValues, Object observedValue) throws Exception;

    /**
     * Creates a custom test result that also carries runtime errors.
     */
    public TestResult createTestResult(String operator, String testName, TestStatus status, Object[] expectedValues, Object observedValue, List<RuntimeError> errors) throws Exception;
Note: If you have written test definitions in your custom data provider class, you can retrieve these definitions from the Scenario object by using scenario.getTests().
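
For example, here is a minimal sketch that reads these definitions back inside test(). The content of the returned list depends entirely on what your data provider wrote, so the sketch only iterates over it:

// read back the test definitions written by the custom data provider, if any
if (scenario.getTests() != null) {
    for (Object definedTest : scenario.getTests()) {
        // drive your custom test logic from each definition as needed
        System.out.println("Test definition: " + definedTest);
    }
}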

Full example

The following is a complete example:

import java.util.Collections;

import javax.batch.runtime.context.JobContext;
import javax.inject.Inject;

import com.ibm.rules.cdi.core.RuntimeError;
import com.ibm.rules.cdi.core.Scenario;
import com.ibm.rules.cdi.core.service.output.TestResult.TestStatus;
import com.ibm.rules.cdi.runtime.ext.BaseTestProviderListener;
import com.ibm.rules.cdi.runtime.ext.CustomScenarioTestResult;
import com.ibm.rules.demo.Invoice;

import ilog.rules.res.session.IlrSessionRequest;
import ilog.rules.res.session.IlrSessionResponse;

public class MyCustomTester extends BaseTestProviderListener {

    @Inject
    JobContext jobContext;

    @Override
    public JobContext getJobContext() {
        return jobContext;
    }

    @Override
    public CustomScenarioTestResult test(Scenario scenario, IlrSessionRequest request, IlrSessionResponse response) {
        // create the object instance to be returned by this method
        CustomScenarioTestResult output = createScenarioResult();

        // get an output parameter
        Invoice invoice = (Invoice) response.getOutputParameters().get("invoice");
        // get a field of this parameter that is used in the tests
        Long amount = Math.round(invoice.getTotalPrice());

        try {
            // scenario #1
            if ("MyScenario1".equals(scenario.getId())) {
                // report 1 test result: the amount is expected to be equal to 120
                Long expectedAmount = 120L;
                TestStatus status = TestStatus.FAILURE;
                if (amount.equals(expectedAmount)) {
                    status = TestStatus.SUCCESS;
                }
                output.addTestResult(createTestResult("test amount value", "amount is as expected", status, new Object[] { expectedAmount }, amount));
            }
            // scenario #2
            else if ("MyScenario2".equals(scenario.getId())) {
                // report 1 test result: the amount is expected to be 500, plus or minus 10%
                TestStatus status = TestStatus.FAILURE;
                if (amount >= 450 && amount <= 550) {
                    status = TestStatus.SUCCESS;
                }
                output.addTestResult(createTestResult("test amount in range", "amount is in range", status, new Object[] { "500 at 10%" }, amount));
            }
            // scenario #3
            else if ("MyScenario3".equals(scenario.getId())) {
                // the amount is expected to be greater than or equal to 1000
                TestStatus status = TestStatus.FAILURE;
                if (amount >= 1000) {
                    status = TestStatus.SUCCESS;
                }
                output.addTestResult(createTestResult("test amount is big", "amount is big", status, new Object[] { ">= 1000" }, amount));
            }
            // scenario #4 (last scenario)
            else {
                // report a test that always succeeds
                output.addTestResult(createTestResult("test1", "always success", TestStatus.SUCCESS, new Object[] { "ok" }, "ok"));

                // report a test that is always in error
                RuntimeError error = new RuntimeError("something wrong happened");
                output.addTestResult(createTestResult("test2", "always in error", TestStatus.ERROR, new Object[] { "ko" }, "ko", Collections.singletonList(error)));
            }
        }
        catch (Exception e) {
            // log the error, or throw a RuntimeException, ...
        }

        return output;
    }
}

Scenario file customization

To run custom tests by using a test suite, you must customize the .tstx scenario file:

  • When generating the .tstx file, do not select any test.
  • In the resulting file, customize the listeners section to add your implementation class and your custom data provider implementation.

For example, if the fully qualified name of your custom test implementation class is com.mycompany.mydecision.MyCustomTester, and the fully qualified name of your custom data provider implementation class is com.mycompany.mydecision.MyDataProvider:

<jsl:listeners>
    <jsl:listener ref="com.ibm.rules.cdi.runtime.batch.listeners.ScenarioReadListener"/>
    <jsl:listener ref="com.ibm.rules.cdi.runtime.batch.listeners.ScenarioChunkListener"/>
    <jsl:listener ref="com.mycompany.mydecision.MyCustomTester"/>
</jsl:listeners>
<jsl:chunk ...>
    <jsl:reader ref="com.mycompany.mydecision.MyDataProvider" />
    <jsl:processor ref="com.ibm.rules.cdi.runtime.batch.artifacts.ScenarioProcessor"/>
    <jsl:writer ref="com.ibm.rules.cdi.runtime.batch.artifacts.ScenarioWriter"/>
</jsl:chunk>