Defining custom splits
You can select the predictor and specify conditions for each split.
- In the interactive tree builder, right-click a node and select Customize split.
- Select the predictor that you want from the Split on list.
- You can accept the default conditions for each split or specify conditions for the split as appropriate.
  - For continuous (numeric range) predictors, use the available fields to specify the range of values that fall into each new node.
  - For categorical predictors, use the available fields to specify the values (or, for an ordinal predictor, the range of values) that map to each new node.
- Click Apply to regrow the branch using the selected predictor.

The tree can generally be split using any predictor, regardless of stopping rules. The only exceptions are when the node is pure (meaning that 100% of cases fall into the same target class, thus nothing remains to split), or the chosen predictor is constant (there is nothing to split against).
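The two exceptions above can be sketched as a simple check. This is a hypothetical helper for illustration, not part of the product:

```python
def can_split(target_values, predictor_values):
    """Whether a node can be split on the chosen predictor."""
    if len(set(target_values)) <= 1:
        return False  # pure node: all cases share one target class
    if len(set(predictor_values)) <= 1:
        return False  # constant predictor: nothing to split against
    return True
```

Any other combination of target and predictor values leaves at least two groups to separate, so the split can proceed.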
In Detailed view, the interactive tree displays statistics for the available predictors (sometimes called "competitors") that can be used for the current split.
- For CHAID, the chi-square statistic is listed for each categorical predictor; if a predictor is a numeric range, the F statistic is shown. The chi-square statistic is a measure of how independent the target field is from the splitting field. A high chi-square statistic corresponds to a lower probability, meaning that there is less chance that the two fields are independent, which is an indication that the split is a good one. Degrees of freedom are also included because it is easier for a three-way split than for a two-way split to produce a large statistic and a small probability.
- For C&R Tree and QUEST, the improvement for each predictor is displayed. The greater the improvement, the greater the reduction in impurity between the parent and child nodes if that predictor is used. (A pure node is one in which all cases fall into a single target category; the lower the impurity across the tree, the better the model fits the data.) In other words, a high improvement figure generally indicates a useful split for this type of tree. The impurity measure used is specified in the tree-building node.