Policy filtration for temporal and seasonal algorithms
Both the temporal and seasonal training algorithms now use policy filtration. Filtration reduces the number of policies that are generated during training and helps prevent memory‑related issues. Each algorithm applies selective filtering to remove the least relevant policies based on its own criteria. To support this function, new environment variables are introduced; during execution, the trainer reads these variables to control how filtration is applied. The following sections describe each algorithm in detail.
Temporal grouping
Temporal grouping filtration ranks policies by the number of temporal groups that they contain:
- Policies with more temporal groups receive higher priority.
- Policies with fewer temporal groups receive lower priority.
- When the number of generated policies exceeds the configured limit, the algorithm keeps only the top n policies, where n is the policy limit.
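The top‑n filtration step can be sketched as follows. This is an illustrative sketch only; the policy structure, field names, and limit value are assumptions for demonstration, not the product's actual data model.

```python
# Illustrative sketch of temporal policy filtration: rank policies by the
# number of temporal groups they contain, then keep only the top n.
# The dictionary shape and key names are assumptions for this example.

def filter_temporal_policies(policies, policy_limit):
    """Return at most `policy_limit` policies, preferring those with
    more temporal groups (higher priority per the list above)."""
    if len(policies) <= policy_limit:
        return policies
    ranked = sorted(policies, key=lambda p: p["temporal_groups"], reverse=True)
    return ranked[:policy_limit]

policies = [
    {"id": "A", "temporal_groups": 5},
    {"id": "B", "temporal_groups": 2},
    {"id": "C", "temporal_groups": 8},
]
kept = filter_temporal_policies(policies, policy_limit=2)
print([p["id"] for p in kept])  # ['C', 'A']
```

Policies with the fewest temporal groups (policy B here) are the first to be dropped when the limit is exceeded.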
The following environment variables control temporal grouping:
- MAX_SIZE_OF_GROUP
- Controls the maximum number of temporal events that are allowed in a single temporal group.
- MAX_NUMBER_OF_GROUP
- Controls the maximum number of temporal policies that are allowed in the system, including active, draft, and inactive policies.
Seasonal training
Seasonal training filtration ranks policies to identify which ones to retain. It applies the following ranking formula:
Rank = (α − ωn) − ∑(1 − p)
- α (leniency factor)
- Sets the base allowance for how many time windows are acceptable before penalties apply. A higher α means more windows are acceptable.
- ω (big‑window penalty factor)
- Controls how aggressively the algorithm penalizes policies with many time windows. A higher ω increases the penalty for each additional window.
- n (time window count)
- Represents the number of seasonal windows that are defined in the policy. Although policies might include DayOfMonth, Day, Hour, and Minute windows, the ranking algorithm only considers DayOfMonth and Day windows to avoid noise and large fluctuations.
- p (time‑window p‑value)
- Indicates the strength of the seasonal pattern and ranges from 0 to 1.
Higher values represent stronger seasonality.
A policy's rank therefore depends on:
- Its p‑values
- The number of seasonal time windows that it contains
The algorithm treats:
- Policies with many time windows as less important
- Policies with fewer time windows as more important
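The ranking described above can be sketched as a small function. This is a sketch based on the formula Rank = (α − ωn) − ∑(1 − p); the function name and the shape of the `windows` argument are illustrative assumptions, while the restriction to DayOfMonth and Day windows mirrors the definition of n given above.

```python
# Sketch of the seasonal ranking formula: Rank = (α − ωn) − Σ(1 − p).
# Only DayOfMonth and Day windows contribute to n and to the p-value
# penalty, per the definition of n; Hour and Minute windows are ignored.

COUNTED_WINDOWS = {"DayOfMonth", "Day"}

def seasonal_rank(windows, leniency=3.0, big_window_penalty=1.0):
    """Rank a policy from its seasonal time windows.

    `windows` maps a window type (for example "DayOfMonth") to its
    p-value; higher p-values indicate stronger seasonality.
    """
    counted = {t: p for t, p in windows.items() if t in COUNTED_WINDOWS}
    n = len(counted)
    penalty = sum(1 - p for p in counted.values())
    return (leniency - big_window_penalty * n) - penalty

# A policy with one strong DayOfMonth window ranks higher than one
# that spreads weaker seasonality across two windows.
print(round(seasonal_rank({"DayOfMonth": 0.99}), 2))  # 1.99
```

Because each extra counted window subtracts ω plus its (1 − p) term, policies with many windows sink in the ranking, which matches the "less important" treatment described above.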
If too many or too few policies are filtered, you can adjust α and ω in a custom configmap to increase or decrease the sensitivity. The end of this section shows how to create one.
The following example shows how the algorithm calculates a policy’s rank.
Assume:
- α = 3
- n = 1
- ω = 1
- p‑values:
- p_month = 0.99
- p_week = 0.98
Because only a DayOfMonth window exists, the algorithm uses only the monthly p‑value (0.99).
Calculation:
Rank = (3 − (1 × 1)) − (1 − 0.99)
Rank = 2 − 0.01
Rank = 1.99
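The arithmetic in this example can be checked directly (a quick sketch using the example's values):

```python
# Verify the worked example: α = 3, ω = 1, n = 1, and only the
# DayOfMonth window is counted, so the sum has a single (1 − p) term.
alpha, omega, n = 3, 1, 1
p_month = 0.99

rank = (alpha - omega * n) - (1 - p_month)
print(round(rank, 2))  # 1.99
```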
The following environment variables control seasonal training filtration:
- SE_EVENTSLIMIT
- Controls the maximum number of seasonal policies that are allowed in the system. This limit includes active, draft, and inactive policies.
- SE_BIGWINDOWPENALTYFACTOR
- Controls the big‑window penalty factor (ω).
- SE_LENIANCYFACTOR
- Controls the leniency factor (α).
To change the value of any of these environment variables, create a custom configmap and apply it to the cluster. Applying the configmap causes the training service to restart.
apiVersion: v1
kind: ConfigMap
metadata:
  name: aiops-custom-size-profile
  namespace: aiops
data:
  profiles: |
    generatedfor: NonHA
    operandconfigs:
      - name: ir-ai-operator
        spec:
          aiopsanalyticsorchestrator:
            customEnv:
              - containers:
                  - name: spark-pipeline-composer
                    kind: Deployment
                    env:
                      - name: MAX_SIZE_OF_GROUP
                        value: "1000"
                      - name: MAX_NUMBER_OF_GROUP
                        value: "100000"
                      - name: SE_EVENTSLIMIT
                        value: "100000"
                      - name: SE_BIGWINDOWPENALTYFACTOR
                        value: "3"
                      - name: SE_LENIANCYFACTOR
                        value: "1"