
Implement Rescaling-ACG, an improved version of the ACG attack. #2460

Open

wants to merge 12 commits into base: dev_1.19.0

Conversation

yamamura-k (Contributor)

Description

A new version of our ACG algorithm, Rescaling-ACG (ReACG), has been implemented. ReACG outperforms APGD and ACG, and performs particularly well on ImageNet models. Our paper will be presented at the International Conference on Pattern Recognition and Artificial Intelligence (ICPRAI) 2024.
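
For context, below is a minimal usage sketch of how the new attack would be called through the ART API. The class name RescalingAutoConjugateGradient is assumed from the new module art/attacks/evasion/rescaling_auto_conjugate_gradient.py, and the constructor parameters are assumed to mirror AutoConjugateGradient; none of these names are confirmed by this description.

# Minimal usage sketch. Assumptions: the attack is exported as
# RescalingAutoConjugateGradient and its constructor mirrors AutoConjugateGradient;
# the parameter names below are illustrative, not confirmed by this PR.
import numpy as np
import torch

from art.attacks.evasion import RescalingAutoConjugateGradient  # assumed export name
from art.estimators.classification import PyTorchClassifier

# Tiny stand-in model so the sketch is self-contained.
model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(28 * 28, 10))
classifier = PyTorchClassifier(
    model=model,
    loss=torch.nn.CrossEntropyLoss(),
    input_shape=(1, 28, 28),
    nb_classes=10,
    clip_values=(0.0, 1.0),
)

attack = RescalingAutoConjugateGradient(
    estimator=classifier,
    norm=np.inf,           # L-inf threat model
    eps=8 / 255,           # perturbation budget
    eps_step=2 * 8 / 255,  # initial step size
    max_iter=100,
    batch_size=32,
)

x = np.random.rand(4, 1, 28, 28).astype(np.float32)
y = np.eye(10, dtype=np.float32)[np.random.randint(0, 10, size=4)]
x_adv = attack.generate(x=x, y=y)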

Type of change

Please check all relevant options.

  • Improvement (non-breaking)
  • Bug fix (non-breaking)
  • New feature (non-breaking)
  • Breaking change (fix or feature that would cause existing functionality to not work as expected)
  • This change requires a documentation update

Testing

Please describe the tests that you ran to verify your changes. Consider listing any relevant details of your test configuration.

  • Same tests as ACG
  • Style check

Test Configuration:

  • OS: Ubuntu 24.04
  • Python version: 3.10.14
  • ART version or commit number: 1.18.0
  • TensorFlow / Keras / PyTorch / MXNet version

Checklist

  • My code follows the style guidelines of this project
  • I have performed a self-review of my own code
  • I have commented my code
  • I have made corresponding changes to the documentation
  • My changes generate no new warnings
  • I have added tests that prove my fix is effective or that my feature works
  • New and existing unit tests pass locally with my changes
  • My changes have been tested using both CPU and GPU devices

…Conjugate Gradient (ACG) attack

Signed-off-by: yamamura-k <[email protected]>
self.count_condition_1 = np.zeros(shape=(_batch_size,))
gradk_1 = np.zeros_like(x_k)
cgradk_1 = np.zeros_like(x_k)
cgradk = np.zeros_like(x_k)

Check warning (Code scanning / CodeQL): Variable defined multiple times.
This assignment to 'cgradk' is unnecessary as it is redefined before this value is used.
cgradk = np.zeros_like(x_k)
gradk_1_best = np.zeros_like(x_k)
cgradk_1_best = np.zeros_like(x_k)
gradk_1_tmp = np.zeros_like(x_k)

Check warning (Code scanning / CodeQL): Variable defined multiple times.
This assignment to 'gradk_1_tmp' is unnecessary as it is redefined before this value is used.
gradk_1_best = np.zeros_like(x_k)
cgradk_1_best = np.zeros_like(x_k)
gradk_1_tmp = np.zeros_like(x_k)
cgradk_1_tmp = np.zeros_like(x_k)

Check warning (Code scanning / CodeQL): Variable defined multiple times.
This assignment to 'cgradk_1_tmp' is unnecessary as it is redefined before this value is used.
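
One possible way to address these CodeQL findings (a sketch, not necessarily the fix applied in this PR): keep zero-initialization only for arrays that are actually read before being overwritten, and drop the assignments flagged as dead.

# Sketch: initialize only the arrays that are read before the first overwrite.
gradk_1 = np.zeros_like(x_k)
cgradk_1 = np.zeros_like(x_k)
gradk_1_best = np.zeros_like(x_k)
cgradk_1_best = np.zeros_like(x_k)
# cgradk, gradk_1_tmp and cgradk_1_tmp are redefined before they are used,
# so they need no placeholder here and can be assigned where first computed.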

codecov bot commented Jul 2, 2024

Codecov Report

Attention: Patch coverage is 87.15596% with 42 lines in your changes missing coverage. Please review.

Project coverage is 85.25%. Comparing base (1207d0a) to head (5411f45).

Files with missing lines | Patch % | Lines
...tacks/evasion/rescaling_auto_conjugate_gradient.py | 87.11% | 20 Missing and 22 partials ⚠️
Additional details and impacted files


@@             Coverage Diff              @@
##           dev_1.19.0    #2460    +/-   ##
============================================
  Coverage       85.24%   85.25%            
============================================
  Files             329      330     +1     
  Lines           30143    30470   +327     
  Branches         5173     5228    +55     
============================================
+ Hits            25696    25976   +280     
- Misses           3019     3043    +24     
- Partials         1428     1451    +23     
Files with missing lines | Coverage Δ
art/attacks/evasion/__init__.py | 100.00% <100.00%> (ø)
...tacks/evasion/rescaling_auto_conjugate_gradient.py | 87.11% <87.11%> (ø)

... and 3 files with indirect coverage changes

@beat-buesser beat-buesser self-assigned this Jul 2, 2024
@beat-buesser beat-buesser self-requested a review July 2, 2024 23:43
@beat-buesser beat-buesser added this to the ART 1.19.0 milestone Jul 2, 2024
@beat-buesser beat-buesser added the enhancement (New feature or request) label Jul 3, 2024
@beat-buesser beat-buesser changed the base branch from main to dev_1.19.0 July 3, 2024 17:46
@beat-buesser (Collaborator)

Hi @yamamura-k Thank you very much for your pull request! I'm starting the review now; apologies for the delay.

@beat-buesser (Collaborator) left a comment

Hi @yamamura-k The pull request looks good. Could you please take a look at the minor review comments below?


# MIT License
#
# Copyright (C) The Adversarial Robustness Toolbox (ART) Authors 2020
Collaborator

Suggested change
# Copyright (C) The Adversarial Robustness Toolbox (ART) Authors 2020
# Copyright (C) The Adversarial Robustness Toolbox (ART) Authors 2024

Contributor Author

Thank you for your comment. This suggestion is reflected in the latest commit.

"""
This module implements the 'Rescaling-ACG' attack.

| Paper link:
Collaborator

Do you have a link to your paper? Please insert it here.

Contributor Author

The paper does not appear to be publicly available yet. If it is not published after waiting a while, we will consider uploading it to arXiv.

Implementation of the 'Rescaling-ACG' attack.
The original implementation is https://github.com/yamamura-k/ReACG.

| Paper link:
Collaborator

Please add the paper link.

Comment on lines 650 to 651
# if self.loss_type not in self._predefined_losses:
# raise ValueError("The argument loss_type has to be either {}.".format(self._predefined_losses))
Collaborator

Do we need to keep these commented lines?

Contributor Author

I don't think these commented lines are necessary. They have been removed in the revised commit.
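
For reference, the removed lines implemented a parameter check along these lines (a sketch based on the commented-out code above; self._predefined_losses is assumed to hold the supported loss names):

# Sketch of the removed check, shown only for reference.
if self.loss_type not in self._predefined_losses:
    raise ValueError(f"The argument loss_type has to be either {self._predefined_losses}.")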


# MIT License
#
# Copyright (C) The Adversarial Robustness Toolbox (ART) Authors 2020
Collaborator

Suggested change
# Copyright (C) The Adversarial Robustness Toolbox (ART) Authors 2020
# Copyright (C) The Adversarial Robustness Toolbox (ART) Authors 2024

Contributor Author

Thank you for your comment. This suggestion is reflected in the latest commit.


x_train_mnist_adv = attack.generate(x=x_train_mnist, y=y_train_mnist)

assert np.max(np.abs(x_train_mnist_adv - x_train_mnist)) > 0.0
Collaborator

Does the attack always create the same adversarial example? Could we also test for the expected pixel values in the adversarial image?

Contributor Author
A CPU-only calculation with the same initial point should produce the same adversarial example each time.
However, when using a GPU, the adversarial example may be slightly different each time because the gradient calculation on the GPU is not deterministic.
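
If a value-level assertion is added later, it would likely need a fixed seed and CPU execution; a hedged sketch follows (the reference values and the pixel indices are hypothetical, not taken from this PR):

# Sketch: compare a few pixels against reference values recorded from a
# deterministic CPU run; the numbers below are placeholders, not real references.
np.random.seed(1234)
x_train_mnist_adv = attack.generate(x=x_train_mnist, y=y_train_mnist)
expected_values = np.array([0.1726, 0.3065, 0.1439])  # hypothetical reference pixels
np.testing.assert_allclose(x_train_mnist_adv[0, 14, 14:17, 0], expected_values, atol=1e-4)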

@yamamura-k (Contributor Author)

@beat-buesser Thank you for your review! I have modified my code to reflect the review comments. Although I only removed redundant comments and updated some comment lines, some checks were not successful.

Our paper is not publicly available yet, so I will add a link to it once it becomes available.

@beat-buesser (Collaborator)

@yamamura-k Thank you very much. I will fix the issue with pytest-flake8 in a separate pull request and merge this pull request afterwards.

Signed-off-by: yamamura-k <[email protected]>
@yamamura-k (Contributor Author)

@beat-buesser Thank you for your review and comments. We have published our paper on arXiv, and I have added the link. I believe your review comments are now addressed.

@beat-buesser (Collaborator)

Hi @yamamura-k Thank you very much for the updates and the published paper! Apologies for the delay caused by vacation and fixing a bug in the unit tests. I have updated your branch with the fixed style checks, and as soon as the test run has completed I'll take a final review and merge this pull request.

Labels: enhancement (New feature or request)
Projects: None yet
2 participants