Irina Nicolae

The Adversarial Robustness Toolbox v0.3.0: Closing the Backdoor in AI Security

A new release of the Adversarial Robustness Toolbox provides a method for defending against poisoning and "backdoor" attacks in machine learning models.
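As a rough illustration of what using such a poisoning defence can look like, here is a minimal sketch built on ART's activation-clustering detector. The import paths, the `ActivationDefence` class, and the `detect_poison` parameters follow later ART releases and are assumptions here, not necessarily the v0.3.0 API verbatim; the toy model and data are placeholders.

```python
# A minimal sketch, assuming the activation-clustering defence API of
# later ART releases (not necessarily v0.3.0 verbatim).
import numpy as np
import tensorflow as tf
from tensorflow import keras

tf.compat.v1.disable_eager_execution()  # ART's KerasClassifier uses graph-mode Keras

from art.estimators.classification import KerasClassifier
from art.defences.detector.poison import ActivationDefence

# Toy stand-ins for a trained model and its (possibly poisoned) training data.
x_train = np.random.rand(200, 28, 28, 1).astype(np.float32)
y_train = keras.utils.to_categorical(np.random.randint(0, 10, 200), 10)

model = keras.Sequential([
    keras.layers.Flatten(input_shape=(28, 28, 1)),
    keras.layers.Dense(64, activation="relu"),
    keras.layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam", loss="categorical_crossentropy")
model.fit(x_train, y_train, epochs=1, verbose=0)

classifier = KerasClassifier(model=model)

# Cluster the hidden-layer activations of the training data; within each
# class, an anomalous cluster is flagged as likely backdoor poison.
defence = ActivationDefence(classifier, x_train, y_train)
report, is_clean = defence.detect_poison(nb_clusters=2, nb_dims=10, reduce="PCA")

# `is_clean` holds one flag per training example: 1 = clean, 0 = suspect.
suspect = [i for i, flag in enumerate(is_clean) if flag == 0]
print(f"{len(suspect)} training examples flagged as potentially poisoned")
```

The intuition behind this style of defence is that backdoored examples activate the network differently from clean examples of the same class, so clustering the activations class by class separates the poison into its own small cluster.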
