SecML-Torch: A Library for Robustness Evaluation of Deep Learning Models

SecML-Torch (SecMLT) is an open-source Python library designed to facilitate research in the area of Adversarial Machine Learning (AML) and robustness evaluation. The library provides a simple yet powerful interface for generating various types of adversarial examples, as well as tools for evaluating the robustness of machine learning models against such attacks.

Installation

You can install SecMLT via pip:

pip install secml-torch

This will install the core version of SecMLT, which includes the main functionalities: native implementations of attacks and PyTorch model wrappers.

Install with extras

The library can be installed together with other plugins that enable further functionalities.

  • Foolbox, a Python toolbox to create adversarial examples.
  • Tensorboard, a visualization toolkit for machine learning experimentation.
  • Adversarial Library, a powerful library of adversarial attacks implemented in PyTorch.

Install one or more extras with the command:

pip install secml-torch[foolbox,tensorboard,adv_lib]
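After installing, you can quickly check which optional dependencies are actually available in your environment. Below is a minimal sketch that uses only the Python standard library; the module names foolbox, tensorboard, and adv_lib are assumed to be the import names of the three extras.

import importlib.util

# Import names assumed for the optional extras.
optional_modules = {
    "foolbox": "foolbox",
    "tensorboard": "tensorboard",
    "adv_lib": "adv_lib",
}

for extra, module_name in optional_modules.items():
    found = importlib.util.find_spec(module_name) is not None
    print(f"extra '{extra}': {'available' if found else 'not installed'}")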

Key Features

SecML-Torch (SecMLT) is a PyTorch-native toolkit for evaluating and improving adversarial robustness. It provides:

  • Built-in support for evaluating PyTorch models.
  • Efficient native implementations of common evasion attacks (e.g., PGD, FMN), built directly for PyTorch, as well as poisoning/backdoor attacks.
  • Wrappers for external libraries (Foolbox, Adversarial Library) so you can run and compare attacks from a single interface.
  • Modular design for building adaptive/custom attacks: swap losses, optimizers, and perturbation models, or add EoT, with a few lines of code.
  • Attack ensembling modules to obtain worst-case per-sample robustness evaluations.
  • Robustness evaluation tools including metrics, logging, and trackers to ensure reproducibility and easy reporting.
  • Attack debugging support, such as built-in hooks and TensorBoard integration, to monitor and inspect attack behavior.

Check out the tutorials to see SecML-Torch in action.

| Category | Attack / Attack Type | Native Implementation in SecML-Torch (advantages: GPU-native, efficient, modular/customizable, debugging tools) | Wrapped / Imported Backend Alternatives (advantages: expands the attack catalogue, easy cross-checks) |
| --- | --- | --- | --- |
| Test time | PGD (fixed-epsilon, iterative attack) | ✔ Native implementation | ✔ Also via backend wrappers (Foolbox, Adversarial Library) |
| Test time | FMN (minimum-norm, iterative attack) | ✔ Native implementation | ✔ Also via backend wrappers (Foolbox, Adversarial Library) |
| Test time | DDN (minimum-norm, iterative attack) | ✔ Native implementation | ✔ Also via backend wrappers (Foolbox, Adversarial Library) |
| Test time | Other evasion attacks | Work in progress | ✔ Available via backend wrappers (Foolbox, Adversarial Library) |
| Training time | Backdoor | ✔ Native implementation | - |
| Training time | Label flip poisoning | ✔ Native implementation | - |

Check out what's cooking! Have a look at our roadmap!

Usage

Here's a brief example of using SecMLT to evaluate the robustness of a trained classifier:

from secmlt.adv.evasion.pgd import PGD
from secmlt.metrics.classification import Accuracy
from secmlt.models.pytorch.base_pytorch_nn import BasePytorchClassifier


model = ...  # your trained PyTorch model (torch.nn.Module)
torch_data_loader = ...  # DataLoader yielding (inputs, labels) batches

# Wrap model
model = BasePytorchClassifier(model)

# create and run attack
attack = PGD(
    perturbation_model="l2",
    epsilon=0.4,
    num_steps=100,
    step_size=0.01,
)

adversarial_loader = attack(model, torch_data_loader)

# Test accuracy on adversarial examples
robust_accuracy = Accuracy()(model, adversarial_loader)
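
If you installed the Foolbox or Adversarial Library extras, the same evaluation can be repeated through a wrapped backend to cross-check the native result. The snippet below is only a minimal sketch: the backend argument and the Backends selector in secmlt.adv.backends are assumed here, so check the official documentation for the exact names.

from secmlt.adv.backends import Backends  # assumed import path, see the docs
from secmlt.adv.evasion.pgd import PGD

# Same PGD configuration as above, but executed through the Foolbox backend
# (requires the "foolbox" extra); the backend argument is assumed here.
foolbox_attack = PGD(
    perturbation_model="l2",
    epsilon=0.4,
    num_steps=100,
    step_size=0.01,
    backend=Backends.FOOLBOX,
)

foolbox_adv_loader = foolbox_attack(model, torch_data_loader)

# Compare against the robust accuracy from the native implementation
foolbox_robust_accuracy = Accuracy()(model, foolbox_adv_loader)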

For more detailed usage instructions and examples, please refer to the official documentation or to the examples.

Contributing

We welcome contributions from the research community to expand the library's capabilities or add new features. If you would like to contribute to SecMLT, please follow our contribution guidelines.

Acknowledgements

SecMLT has been partially developed with the support of the European Union's ELSA – European Lighthouse on Secure and Safe AI, Horizon Europe, grant agreement No. 101070617, Sec4AI4Sec - Cybersecurity for AI-Augmented Systems, Horizon Europe, grant agreement No. 101120393, and CoEvolution - A Comprehensive Trustworthy Framework for Connected Machine Learning and Secure Interconnected AI Solutions, Horizon Europe, grant agreement No. 101168560, and by the project SERICS (PE00000014) under the MUR National Recovery and Resilience Plan funded by the European Union - NextGenerationEU.
