Attacks

Supported attacks

class tabularbench.attacks.capgd.capgd.CAPGD(constraints: Constraints, scaler: TabScaler, model: Model, model_objective: Model, norm: str = 'Linf', eps: float = 0.03137254901960784, steps: int = 10, n_restarts: int = 1, seed: int = 0, loss: str = 'ce', eot_iter: int = 1, rho: float = 0.75, fix_equality_constraints_end: bool = True, fix_equality_constraints_iter: bool = True, adaptive_eps: bool = True, random_start: bool = True, init_start: bool = True, best_restart: bool = True, eps_margin: float = 0.05, verbose: bool = False)[source]

CAPGD from the paper ‘Towards Adaptive Attacks on Constrained Tabular Machine Learning’ [https://openreview.net/forum?id=DnvYdmR9OB]

License: MIT. Distance Measure: Linf, L2

Parameters:
  • constraints (Constraints) – The constraint object to be checked successively

  • scaler (TabScaler) – scaler used to transform the inputs

  • model (tabularbench.models.model) – model to attack.

  • model_objective (tabularbench.models.model) – model used to compute the objective.

  • norm (str) – Lp-norm of the attack. [‘Linf’, ‘L2’] (Default: ‘Linf’)

  • eps (float) – maximum perturbation. (Default: 8/255)

  • steps (int) – number of steps. (Default: 10)

  • n_restarts (int) – number of random restarts. (Default: 1)

  • seed (int) – random seed for the starting point. (Default: 0)

  • loss (str) – loss function optimized. [‘ce’, ‘dlr’] (Default: ‘ce’)

  • eot_iter (int) – number of iteration for EOT. (Default: 1)

  • rho (float) – parameter for step-size update (Default: 0.75)

  • fix_equality_constraints_end (bool) – whether to fix equality constraints at the end. (Default: True)

  • fix_equality_constraints_iter (bool) – whether to fix equality constraints at each iteration. (Default: True)

  • adaptive_eps (bool) – whether to use adaptive epsilon. (Default: True)

  • random_start (bool) – whether to use random start. (Default: True)

  • init_start (bool) – whether to initialize the starting point. (Default: True)

  • best_restart (bool) – whether to use the best restart. (Default: True)

  • eps_margin (float) – margin for epsilon. (Default: 0.05)

  • verbose (bool) – print progress. (Default: False)

Examples:
>>> attack = CAPGD(...)
>>> outputs = attack(inputs, labels)
forward(inputs: Tensor, labels: Tensor) → Tensor[source]

N: Number of instances. D: Number of features. C: Number of classes.
Input shape: [N, D]. Output shape: [N, C].
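
A slightly fuller, hedged usage sketch: constraints, scaler, model, x, and y are placeholders from your own data pipeline, only the documented constructor arguments are used, and reusing the attacked model as model_objective is an assumption, not a requirement.

>>> from tabularbench.attacks.capgd.capgd import CAPGD
>>> # constraints, scaler, model, x, y are assumed to exist already
>>> attack = CAPGD(constraints=constraints, scaler=scaler,
...                model=model, model_objective=model,
...                norm="Linf", eps=8/255, steps=10)
>>> x_adv = attack(x, y)  # x: float tensor of shape [N, D], y: labels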

class tabularbench.attacks.moeva.moeva.Moeva2(model: ~tabularbench.models.model.Model, constraints: ~tabularbench.constraints.constraints.Constraints, norm=None, fun_distance_preprocess=<function Moeva2.<lambda>>, n_gen=100, n_pop=203, n_offsprings=100, save_history=None, seed=None, n_jobs=32, verbose=0, **kwargs)[source]

MOEVA from the paper ‘A Unified Framework for Adversarial Attack and Defense in Constrained Feature Space’ [https://www.ijcai.org/proceedings/2022/0183]

License: MIT. Distance Measure: Linf, L2

Parameters:
  • model (tabularbench.models.model) – model to attack.

  • constraints (Constraints) – The constraint object to be checked successively

  • scaler (TabScaler) – scaler used to transform the inputs

  • model_objective (tabularbench.models.model) – model used to compute the objective.

  • norm (str) – Lp-norm of the attack. [‘Linf’, ‘L2’] (Default: ‘Linf’)

  • eps (float) – maximum perturbation. (Default: 8/255)

  • n_gen (int) – number of generations. (Default: 100)

  • n_pop (int) – number of population. (Default: 203)

  • n_offsprings (int) – number of offsprings. (Default: 100)

  • save_history (bool) – whether to save the history. (Default: None)

  • seed (int) – random seed. (Default: None)

  • n_jobs (int) – number of parallel jobs. (Default: 32)

  • verbose (int) – verbosity level. (Default: 0)

Examples:
>>> attack = Moeva2(...)
>>> outputs = attack(inputs, labels)
check_pymoo_compiled()[source]

Check if pymoo is compiled.

generate(x: ndarray, y: ndarray, batch_size=None)[source]

Generate adversarial examples using batches.

Parameters:
  • x (np.ndarray) – input data.

  • y (np.ndarray) – target data.

  • batch_size (int) – batch size.

Returns:

adversarial examples.

Return type:

x_adv (np.ndarray)
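
A hedged usage sketch of generate: model, constraints, x, and y are placeholders from your own pipeline, and only the documented constructor and generate arguments are used.

>>> from tabularbench.attacks.moeva.moeva import Moeva2
>>> # model, constraints, x, y are assumed to exist already; x and y are np.ndarray
>>> attack = Moeva2(model=model, constraints=constraints,
...                 n_gen=100, n_pop=203, n_offsprings=100,
...                 seed=0, n_jobs=1, verbose=1)
>>> x_adv = attack.generate(x, y, batch_size=64)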

class tabularbench.attacks.caa.caa.ConstrainedAutoAttack(constraints: Constraints, constraints_eval: Constraints, scaler: TabScaler, model, model_objective, n_jobs=-1, fix_equality_constraints_end: bool = True, fix_equality_constraints_iter: bool = True, eps_margin=0.01, norm='Linf', eps=0.03137254901960784, version='standard', n_classes=10, seed=None, verbose=False, steps=10, n_gen=100, n_offsprings=100)[source]

CAA from “Constrained Adaptive Attack: Effective Adversarial Attack Against Deep Neural Networks for Tabular Data” [https://arxiv.org/abs/2406.00775]

License: MIT. Distance Measure: Linf, L2

Parameters:
  • constraints (Constraints) – The constraint object to be used in the attack

  • constraints_eval (Constraints) – The constraint object to be checked at the end

  • scaler (TabScaler) – scaler used to transform the inputs

  • model (tabularbench.models.model) – model to attack.

  • model_objective (tabularbench.models.model) – model used to compute the objective.

  • n_jobs (int) – number of parallel jobs. (Default: -1)

  • fix_equality_constraints_end (bool) – whether to fix equality constraints at the end. (Default: True)

  • fix_equality_constraints_iter (bool) – whether to fix equality constraints at each iteration. (Default: True)

  • eps_margin (float) – margin for epsilon. (Default: 0.01)

  • norm (str) – Lp-norm to minimize. [‘Linf’, ‘L2’] (Default: ‘Linf’)

  • eps (float) – maximum perturbation. (Default: 8/255)

  • version (str) – version. [‘standard’] (Default: ‘standard’)

  • n_classes (int) – number of classes. (Default: 10)

  • seed (int) – random seed for the starting point. (Default: None)

  • verbose (bool) – print progress. (Default: False)

  • steps (int) – number of steps. (Default: 10)

  • n_gen (int) – number of generations. (Default: 100)

  • n_offsprings (int) – number of offsprings. (Default: 100)

Shape:
  • inputs: torch.Tensor (N, F) where N = number of instances, F = number of features.

  • labels: torch.Tensor (N, C) where N = number of instances, C = number of classes. (only binary for now)

  • output: torch.Tensor (N, F) where N = number of instances, F = number of features.

Examples:
>>> attack = ConstrainedAutoAttack(...)
>>> outputs = attack(inputs, labels)
forward(inputs, labels)[source]

N: Number of instances. D: Number of features. C: Number of classes.
Input shape: [N, D]. Output shape: [N, C].

get_seed()[source]

Return the seed for the random number generator used in the attack.
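
A hedged usage sketch: all objects other than the documented constructor arguments are placeholders, and reusing constraints as constraints_eval and model as model_objective is an assumption made for brevity.

>>> from tabularbench.attacks.caa.caa import ConstrainedAutoAttack
>>> # constraints, scaler, model, x, y are assumed to exist already
>>> attack = ConstrainedAutoAttack(constraints=constraints,
...                                constraints_eval=constraints,
...                                scaler=scaler, model=model,
...                                model_objective=model,
...                                norm="Linf", eps=8/255, n_classes=2)
>>> x_adv = attack(x, y)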

class tabularbench.attacks.caa.caa.ConstrainedMultiAttack(objective_calculator, attacks, verbose=False)[source]

Constrained Multi Attack (CMA). A generic class that runs multiple attacks sequentially, checking constraint satisfaction and success rate after each attack, and running the next attack only on the examples that remain unsuccessful.

Parameters:
  • objective_calculator (ObjectiveCalculator) – The objective calculator to be used.

  • attacks (list) – List of attacks to be used.

  • verbose (bool) – Whether to print the progress. (Default: False)

check_validity()[source]

Check that at least two attacks are provided and that the model used by each attack is compatible and identical across attacks.

forward(inputs, labels)[source]

N: Number of instances. D: Number of features. C: Number of classes.
Input shape: [N, D]. Output shape: [N, C].

class tabularbench.attacks.caa.caa.NoAttack[source]

Utility class representing the absence of an attack.
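
A hedged sketch of chaining attacks with ConstrainedMultiAttack: objective_calculator is an ObjectiveCalculator configured for your dataset (its constructor arguments are omitted here), and capgd and moeva are attacks built as in the sketches above; all three are placeholders.

>>> from tabularbench.attacks.caa.caa import ConstrainedMultiAttack, NoAttack
>>> # objective_calculator, capgd, moeva, x, y are assumed to exist already
>>> multi = ConstrainedMultiAttack(objective_calculator,
...                                attacks=[capgd, moeva], verbose=True)
>>> x_adv = multi(x, y)
>>> baseline = NoAttack()  # drop-in placeholder when no perturbation is wanted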

Building a new attack

Attacks in TabularBench follow the same structure as attacks in TorchAttacks. A new attack should therefore extend torchattacks.attack.Attack.

To evaluate the success rate with constraint satisfaction in the new attack, use tabularbench.attacks.objective_calculator.ObjectiveCalculator.

To evaluate individual constraint losses, use tabularbench.constraints.constraints_backend_executor.ConstraintsExecutor.

For a complete example, refer to the implementation of CAPGD.
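
Below is a minimal, hedged skeleton of a new attack. It relies only on the TorchAttacks base-class contract (passing a name and the model to Attack.__init__, implementing forward(inputs, labels), and using the self.model and self.device attributes set by the base class). The single FGSM-style step is purely illustrative, and the points where ObjectiveCalculator and ConstraintsExecutor would be used are marked as comments rather than concrete calls.

import torch
from torchattacks.attack import Attack


class MyConstrainedAttack(Attack):
    """Illustrative skeleton only; not part of TabularBench."""

    def __init__(self, model, eps=8 / 255):
        super().__init__("MyConstrainedAttack", model)
        self.eps = eps

    def forward(self, inputs, labels):
        inputs = inputs.clone().detach().to(self.device)
        labels = labels.clone().detach().to(self.device)
        inputs.requires_grad = True

        # Gradient of the classification loss w.r.t. the inputs.
        loss = torch.nn.functional.cross_entropy(self.model(inputs), labels)
        grad = torch.autograd.grad(loss, inputs)[0]

        # Single FGSM-style step, for illustration only.
        x_adv = inputs + self.eps * grad.sign()

        # A real TabularBench attack would additionally:
        #  - repair or penalize constraint violations (see ConstraintsExecutor), and
        #  - measure success with constraint satisfaction (see ObjectiveCalculator),
        # as done in the CAPGD implementation.
        return x_adv.detach()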

Submitting a new attack

We welcome attack contributions that bring additional insights or challenges to the tabular adversarial robustness community.

  1. Create a new issue by selecting the type “Submit a new attack”.

  2. Fill in the form accordingly.

  3. Create a new Pull Request with the attack implementation and associated files, and associate it with this issue.

  4. We will validate that the attack works correctly and run the architectures and defenses of the benchmark against it.

  5. Once included, the attack will be accessible on the API and on the public leaderboard.

If you find issues with an existing attack in the API, please raise a dedicated issue and do not use the form.

Thank you for your contributions.