A Python library for Machine Learning Security

Adversarial Robustness Toolbox (ART) is a Python library for Machine Learning Security. ART provides tools that enable developers and researchers to defend and evaluate Machine Learning models and applications against the adversarial threats of Evasion, Poisoning, Extraction, and Inference. ART supports all popular machine learning frameworks (TensorFlow, Keras, PyTorch, MXNet, scikit-learn, XGBoost, LightGBM, CatBoost, GPy, etc.), all data types (images, tables, audio, video, etc.) and machine learning tasks (classification, object detection, speech recognition, generation, certification, etc.).
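To make the Evasion threat concrete, here is a minimal sketch of an FGSM-style evasion attack. It uses plain NumPy with a toy hand-written logistic-regression model; the model, its weights, and the helper names are illustrative assumptions, not part of ART's API.

```python
# Minimal FGSM-style evasion sketch (illustration only, not ART's API).
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy linear classifier: predict class 1 if sigmoid(w.x + b) > 0.5.
# Weights are made up for this example.
w = np.array([2.0, -1.0])
b = 0.0

def predict(x):
    return int(sigmoid(w @ x + b) > 0.5)

# FGSM: step the input in the direction of the sign of the loss gradient.
# For logistic loss with true label y: d(loss)/dx = (sigmoid(w.x + b) - y) * w.
def fgsm(x, y, eps):
    grad = (sigmoid(w @ x + b) - y) * w
    return x + eps * np.sign(grad)

x = np.array([1.0, 0.5])   # correctly classified as class 1
x_adv = fgsm(x, y=1, eps=1.0)
print(predict(x), predict(x_adv))  # the perturbed input flips the prediction
```

With ART itself, the same idea is expressed by wrapping a model in one of ART's estimator classes and passing it to an attack from `art.attacks.evasion`; the sketch above only shows the underlying mechanism.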

Adversarial Threats

(Figure: adversarial threats addressed by ART)

ART for Red and Blue Teams (selection)

(Figure: ART capabilities for red and blue teams)

Learn more

Get Started
- Installation
- Examples
- Notebooks

Documentation
- Attacks
- Defences
- Estimators
- Metrics
- Technical Documentation

Contributing
- Slack, Invitation
- Contributing
- Roadmap
- Citing
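For the installation step linked above, ART is published on PyPI under the package name `adversarial-robustness-toolbox`; a typical setup looks like:

```shell
# Install the latest ART release from PyPI
pip install adversarial-robustness-toolbox
```

Framework-specific dependencies (TensorFlow, PyTorch, etc.) are installed separately according to which estimators you plan to use.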

The library is under continuous development. Feedback, bug reports and contributions are very welcome!
