Hyperparameter definitions
Definitions of the hyperparameters used in experiment training. One or more of these hyperparameter options might apply, depending on your framework and fusion method.
| Hyperparameters | Description |
|---|---|
| Rounds | Int value. The number of training iterations to complete between the aggregator and the remote systems. |
| Termination accuracy (Optional) | Float value. Compares the model accuracy to a numerical value and ends the experiment early if the condition is satisfied. For example, termination_predicate: accuracy >= 0.8 finishes the experiment when the mean model accuracy of the participating parties is greater than or equal to 80%. Currently, Federated Learning accepts one type of early termination condition (model accuracy), for classification models only. |
| Quorum (Optional) | Float value. Proceeds with model training after the aggregator reaches the specified ratio of party responses. Takes a decimal value in the range 0 - 1. The default is 1. Model training starts only after the party responses reach the indicated ratio. For example, setting this value to 0.5 starts training after 50% of the registered parties respond to the aggregator call. |
| Max Timeout (Optional) | Int value. Terminates the Federated Learning experiment if the waiting time for party responses exceeds this value in seconds. Takes a numerical value up to 43200. If this number of seconds passes and the quorum ratio is not reached, the experiment terminates. For example, max_timeout = 1000 terminates the experiment after 1000 seconds if the parties do not respond in that time. |
| Number of classes | Int value. Number of target classes for the classification model. Required if the "Loss" hyperparameter is auto, binary_crossentropy, or categorical_crossentropy. |
| Learning rate | Decimal value. The learning rate, also known as shrinkage, used as a multiplicative factor for the leaf values. |
| Loss | String value. The loss function to use in the boosting process. binary_crossentropy (also known as logistic loss) is used for binary classification. categorical_crossentropy is used for multiclass classification. auto chooses either loss function depending on the nature of the problem. least_squares is used for regression. |
| Max Iter | Int value. The total number of passes over the local training data set to train a Scikit-learn model. |
| N cluster | Int value. The number of clusters to form and the number of centroids to generate. |
| sigma | Float value. Determines how far the local model neurons are allowed to deviate from the global model. A bigger value allows more matching and produces a smaller global model. Default value is 1. |
| sigma0 | Float value. Defines the permitted deviation of the global network neurons. Default value is 1. |
| gamma | Float value. Indian Buffet Process parameter that controls the expected number of features in each observation. Default value is 1. |
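
The following sketch shows how a subset of these hyperparameters might be collected into a single configuration, here as a plain Python dictionary. The key names and values are illustrative assumptions only; the exact field names, value formats, and the object that accepts them depend on your framework and fusion method.

```python
# Hypothetical hyperparameter configuration (illustrative only).
# Key names mirror the table above; your framework may expect different names.
hyperparameters = {
    "rounds": 10,                                # training iterations between aggregator and remote systems
    "termination_predicate": "accuracy >= 0.8",  # optional early stop on mean model accuracy (classification only)
    "quorum": 0.75,                              # start training once 75% of registered parties respond
    "max_timeout": 3600,                         # terminate if party responses take longer than 3600 seconds
    "num_classes": 3,                            # required when loss is auto, binary_crossentropy, or categorical_crossentropy
    "learning_rate": 0.1,                        # shrinkage applied to leaf values
    "loss": "categorical_crossentropy",          # multiclass classification loss
    "max_iter": 100,                             # passes over the local training data (Scikit-learn model)
}
```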