Strong constraints in CP Optimizer

A strong constraint (specified as strong, IloStrong, IloCP.strong, and CP.Strong in OPL, C++, Java, and C#, respectively) operates over a given set of variables and asks CP Optimizer to strengthen the inference performed between these variables.

In CP Optimizer, for reasons of efficiency, not all of the inferences that could logically be made are actually performed. For example, given three integer variables x, y and z and the constraint z = x + y, only the bounds of the three variables are inferred by CP Optimizer. If the initial bounds of x, y and z are [0..10], [2..7] and [6..9] respectively, then CP Optimizer can deduce a tighter maximum value of 7 for x, since a value of 8, 9 or 10 for x would necessarily violate the constraint: the largest possible value of x is max(z) - min(y) = 9 - 2 = 7. This deduction is made quickly by examining only the bounds of the variables.
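
The following minimal C++ (Concert) sketch illustrates this bound reduction; it builds the small model and propagates it without searching, and it assumes that IloCP::propagate, IloCP::getMin and IloCP::getMax can be used in this way to inspect the propagated domains.

#include <ilcp/cp.h>
ILOSTLBEGIN

int main() {
  IloEnv env;
  try {
    IloModel model(env);
    IloIntVar x(env, 0, 10, "x");   // initial bounds [0..10]
    IloIntVar y(env, 2, 7, "y");    // initial bounds [2..7]
    IloIntVar z(env, 6, 9, "z");    // initial bounds [6..9]
    model.add(z == x + y);
    IloCP cp(model);
    cp.propagate();                 // constraint propagation only, no search
    cp.out() << "x in [" << cp.getMin(x) << ".."
             << cp.getMax(x) << "]" << endl;   // expected: x in [0..7]
  } catch (IloException& e) {
    env.out() << "Error: " << e << endl;
  }
  env.end();
  return 0;
}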

Consider now that the domains of x, y and z are, instead of ranges, non-contiguous, for example {0, 3, 5, 7, 9, 10}, {2, 4, 6, 7} and {6, 7, 8, 9} respectively. With these initial domains, it is impossible to have z = 8, since no combination of values for x and y sums to 8. A logical inference would therefore be to remove 8 from the domain of z. However, since, for efficiency, CP Optimizer reasons only about ranges on arithmetic constraints, this deduction is not made. Normally, the effort spent making such additional deductions on arithmetic constraints is not paid back by an equivalent reduction in the total search, so for the most part fast deductions based on ranges are globally the best choice. However, for some tightly coupled model structures (tightly coupled meaning that relatively few legal assignments exist for some subset of the variables and their corresponding constraints), modeling with constraints that deduce domain reductions only from bounds or other simple mechanisms can be inefficient.
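
To see why 8 is unreachable for z, it is enough to enumerate the cross product of the two domains; the small standalone C++ check below (independent of CP Optimizer) does exactly that.

#include <iostream>
#include <set>

int main() {
  const int xs[] = {0, 3, 5, 7, 9, 10};
  const int ys[] = {2, 4, 6, 7};
  std::set<int> sums;
  for (int x : xs)
    for (int y : ys)
      sums.insert(x + y);   // every value reachable by x + y
  // 8 never appears, so z = 8 is logically impossible, yet reasoning on the
  // range [6..9] of z alone cannot detect this and remove the value.
  std::cout << "x + y can equal 8: "
            << (sums.count(8) ? "yes" : "no") << std::endl;
  return 0;
}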

The very strongest domain reductions between variables can be achieved by using the allowed assignments constraint (specified as allowedAssignments, IloAllowedAssignments, IloCP.allowedAssignments, and CP.AllowedAssignments in OPL, C++, Java, and C#, respectively). This constraint takes as input an array of variables and a tuple set; the tuple set lists all the combinations of values that the variables can take. The constraint performs perfect (the strongest possible) domain reductions over the specified variables, and it is also a natural way of modeling problems with a combinatorial structure. However, where the natural way to model a relation is through other means, such as arithmetic constraints, modeling it via the allowed assignments constraint is somewhat cumbersome.
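
As a hedged C++ (Concert) fragment, stating an allowed-assignments constraint directly looks roughly like the sketch below; it assumes #include <ilcp/cp.h>, an existing IloEnv env and IloModel model, and the IloIntTupleSet and IloAllowedAssignments interfaces of the Concert API, and only the first few tuples are shown.

IloIntVar x(env, 0, 10, "x");
IloIntVar y(env, 2, 7, "y");
IloIntVar z(env, 6, 9, "z");
IloIntVarArray vars(env);
vars.add(x); vars.add(y); vars.add(z);

IloIntTupleSet tuples(env, 3);                             // arity 3: one column per variable
IloIntArray t1(env, 3); t1[0] = 0; t1[1] = 6; t1[2] = 6;   // allowed combination <x, y, z> = <0, 6, 6>
IloIntArray t2(env, 3); t2[0] = 0; t2[1] = 7; t2[2] = 7;   // <0, 7, 7>
tuples.add(t1);
tuples.add(t2);
// ... further allowed combinations ...
model.add(IloAllowedAssignments(env, vars, tuples));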

The idea of strong constraints is therefore to let users model the problem using the constraints that are most natural to them, even if the default propagation strength of those constraints is below what they would prefer (the strength of allowed-assignments being preferable). By adding a "strong" constraint on the variables in question, the user requests that CP Optimizer replace that constraint with an allowed-assignments constraint which admits the same solutions as the user's constraints but provides the maximum strength of inference. Additionally, if CP Optimizer can identify that the new allowed-assignments constraint renders some other constraints in the model entirely redundant (in the sense that all deductions made by those constraints are also made by the new allowed-assignments constraint), then it removes those redundant constraints.
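
A hedged C++ fragment of this usage pattern follows; it again assumes #include <ilcp/cp.h> and an existing env and model, and it assumes that IloStrong takes an environment and an IloIntVarArray, by analogy with the other Concert constraint functions.

// Non-contiguous domains of the running example, stated by value lists.
IloIntArray xVals(env); xVals.add(0); xVals.add(3); xVals.add(5);
                        xVals.add(7); xVals.add(9); xVals.add(10);
IloIntArray yVals(env); yVals.add(2); yVals.add(4); yVals.add(6); yVals.add(7);
IloIntArray zVals(env); zVals.add(6); zVals.add(7); zVals.add(8); zVals.add(9);
IloIntVar x(env, xVals, "x"), y(env, yVals, "y"), z(env, zVals, "z");

// The natural arithmetic model...
model.add(z == x + y);

// ...plus a strong constraint asking CP Optimizer to replace it at presolve
// with an allowed-assignments constraint of maximal inference strength.
IloIntVarArray vars(env);
vars.add(x); vars.add(y); vars.add(z);
model.add(IloStrong(env, vars));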

The allowed-assignments constraints added to the model are generated by performing a complete search using the whole constraint model, but fixing only the variables mentioned in the strong constraint during this search. Only those partial assignments which did not lead to an inconsistency (a failure) are added to the allowed-assignments constraint. This can actually result in an even stronger model than if the user directly stated their own allowed-assignments constraint, since in the case of the strong constraint the inference of all the other constraints in the model is taken into account during generation.

For instance, reconsidering the example above, z = x + y, where the initial domains of x, y and z are {0, 3, 5, 7, 9, 10}, {2, 4, 6, 7} and {6, 7, 8, 9}, the constraint strong(x, y, z) generates an allowedAssignments constraint over x, y and z whose tuple set is

{<0, 6, 6>, 
<0, 7, 7>, 
<3, 4, 7>, 
<3, 6, 9>, 
<5, 2, 7>, 
<5, 4, 9>, 
<7, 2, 9>}

This allowedAssignments constraint replaces both the constraint z = x + y and the strong constraint. It is important to note that this demonstration takes into account only the constraint z = x + y. In general, other constraints on the variables x, y and z lead to a smaller tuple set, because some of the combinations may not satisfy those other constraints.
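
Continuing the fragment above, the tuple set can be checked by enumerating all solutions of the small model. This is a hedged sketch: IloCP::startNewSearch and IloCP::next are the usual solution-enumeration entry points, and the tuples may be printed in a different order.

IloCP cp(model);            // model contains z == x + y and the strong constraint
cp.startNewSearch();
while (cp.next()) {
  // Each solution found corresponds to one tuple of the generated tuple set.
  cp.out() << "<" << cp.getValue(x) << ", " << cp.getValue(y)
           << ", " << cp.getValue(z) << ">" << std::endl;
}
cp.end();
env.end();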

Be aware that the use of strong constraints can be costly for two principal reasons:

1. The generated allowed-assignments constraints, by their very nature of performing the maximum amount of inference, are more costly to propagate than, for example, constraints which only examine and tighten variable bounds. Allowed-assignments constraints (and thus strong constraints) over many variables, or over variables with large domains, are likely to be inefficient.

2. The generation process itself requires some time, since a search is carried out during model presolve. The same observation as for the first point applies: large domains or large numbers of variables in a strong constraint are likely to make generation slow.

Finally, care should be taken over the number of strong constraints used in a model, and their use should be limited to the most combinatorial parts. Large numbers of allowed-assignments constraints can make the solving process quite inefficient in terms of the branching speed of CP Optimizer.
