Implement Rescaling-ACG, which is an improved version of the ACG attack #2460
Conversation
self.count_condition_1 = np.zeros(shape=(_batch_size,))
gradk_1 = np.zeros_like(x_k)
cgradk_1 = np.zeros_like(x_k)
cgradk = np.zeros_like(x_k)

Check warning: Code scanning / CodeQL
Variable defined multiple times: This assignment to 'cgradk' is unnecessary as it is redefined.
cgradk = np.zeros_like(x_k)
gradk_1_best = np.zeros_like(x_k)
cgradk_1_best = np.zeros_like(x_k)
gradk_1_tmp = np.zeros_like(x_k)

Check warning: Code scanning / CodeQL
Variable defined multiple times: This assignment is unnecessary as the variable is redefined.
gradk_1_best = np.zeros_like(x_k)
cgradk_1_best = np.zeros_like(x_k)
gradk_1_tmp = np.zeros_like(x_k)
cgradk_1_tmp = np.zeros_like(x_k)

Check warning: Code scanning / CodeQL
Variable defined multiple times: This assignment is unnecessary as the variable is redefined.
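The three warnings flag zero-initializations that CodeQL considers unnecessary because the variables are reassigned before their values are used. A minimal sketch of the single-initialization pattern, using a placeholder shape for `x_k` (illustrative only, not code from this pull request):

```python
import numpy as np

# Placeholder tensor standing in for the attack's working array x_k;
# the shape is illustrative only.
x_k = np.zeros((4, 3, 32, 32), dtype=np.float32)
_batch_size = x_k.shape[0]

# Each buffer is initialized exactly once. If a buffer is always overwritten
# inside the attack loop before it is read, its zero-initialization can be
# dropped entirely, which is what the CodeQL warnings suggest.
count_condition_1 = np.zeros(shape=(_batch_size,))
gradk_1 = np.zeros_like(x_k)
cgradk_1 = np.zeros_like(x_k)
cgradk = np.zeros_like(x_k)
gradk_1_best = np.zeros_like(x_k)
cgradk_1_best = np.zeros_like(x_k)
gradk_1_tmp = np.zeros_like(x_k)
cgradk_1_tmp = np.zeros_like(x_k)
```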
Codecov Report
Attention: Patch coverage is …

Additional details and impacted files

@@             Coverage Diff              @@
##           dev_1.19.0     #2460   +/-   ##
============================================
  Coverage       85.24%    85.25%
============================================
  Files             329       330        +1
  Lines           30143     30470      +327
  Branches         5173      5228       +55
============================================
+ Hits            25696     25976      +280
- Misses           3019      3043       +24
- Partials         1428      1451       +23
Hi @yamamura-k Thank you very much for your pull request! I'm starting the review now, apologies for the delay.
Hi @yamamura-k The pull request looks good. Could you please take a look at the minor review comments below?
# MIT License
#
# Copyright (C) The Adversarial Robustness Toolbox (ART) Authors 2020
Suggested change:
- # Copyright (C) The Adversarial Robustness Toolbox (ART) Authors 2020
+ # Copyright (C) The Adversarial Robustness Toolbox (ART) Authors 2024
Thank you for your comment. This suggestion is reflected in the latest commit.
""" | ||
This module implements the 'Rescaling-ACG' attack. | ||
|
||
| Paper link: |
Do you have a link to your paper? Please insert it here.
The paper is not publicly available yet. If it is still unpublished after waiting a while, we will consider uploading it to arXiv.
Implementation of the 'Rescaling-ACG' attack.
The original implementation is https://github.com/yamamura-k/ReACG.

| Paper link:
Please add the paper link.
# if self.loss_type not in self._predefined_losses:
#     raise ValueError("The argument loss_type has to be either {}.".format(self._predefined_losses))
Do we need to keep these commented lines?
I don't think these commented lines are necessary. They have been removed in the revised commit.
# MIT License
#
# Copyright (C) The Adversarial Robustness Toolbox (ART) Authors 2020
Suggested change:
- # Copyright (C) The Adversarial Robustness Toolbox (ART) Authors 2020
+ # Copyright (C) The Adversarial Robustness Toolbox (ART) Authors 2024
Thank you for your comment. This suggestion is reflected in the latest commit.
x_train_mnist_adv = attack.generate(x=x_train_mnist, y=y_train_mnist)

assert np.max(np.abs(x_train_mnist_adv - x_train_mnist)) > 0.0
Does the attack always create the same adversarial example? Could we also test for the expected pixel values in the adversarial image?
A CPU-only calculation with the same initial point should produce the same adversarial example each time.
However, when using a GPU, the adversarial example may be slightly different each time because the gradient calculation on the GPU is not deterministic.
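For reference, a minimal sketch of how a PyTorch-based test setup could reduce this GPU nondeterminism; the helper name, seed value, and flags below are illustrative and not part of this pull request:

```python
import os
import random

import numpy as np
import torch


def enable_determinism(seed: int = 1234) -> None:
    """Seed all RNGs and request deterministic CUDA kernels."""
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)
    # Required by some cuBLAS operations when deterministic mode is enabled.
    os.environ["CUBLAS_WORKSPACE_CONFIG"] = ":4096:8"
    torch.use_deterministic_algorithms(True)
    torch.backends.cudnn.benchmark = False


enable_determinism()
```

Even with these settings, a few CUDA kernels have no deterministic implementation, so exact pixel-value assertions remain safest on CPU, as noted above.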
@beat-buesser Thank you for your review! I have modified my code to reflect the review comments. Although I only removed redundant comments and updated some comment lines, some checks were not successful. Our paper is not publicly available yet, so I will add a link to it once it becomes available.
@yamamura-k Thank you very much. I will fix the issue with pytest-flake8 in a separate pull request and merge this pull request afterwards.
@beat-buesser Thank you for your review and comments. We have published our paper on arXiv, and I have added the link. I believe your review comments are now addressed.
Hi @yamamura-k Thank you very much for the updates and the published paper! Apologies for the delay, which was due to vacation and fixing a bug in the unit tests. I have updated your branch with the fixed style checks, and as soon as the test run has completed I'll take a final review and merge this pull request.
Hi @yamamura-k Thank you very much for adding your new attack to ART! It will be part of the next release, ART 1.19, around December 15, and is immediately available on the release branch.
Description
A new version of our ACG algorithm, Rescaling-ACG (ReACG), has been implemented. ReACG outperforms APGD and ACG, and performs particularly well on ImageNet models. Our paper will be presented at the International Conference on Pattern Recognition and Artificial Intelligence (ICPRAI) 2024. A usage sketch is shown below.
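For illustration, a usage sketch of the new attack through ART's evasion API. The import path, the class name `RescalingAutoConjugateGradient`, and the parameter set are assumptions modeled on the existing `AutoConjugateGradient` attack and may differ from the merged code; the tiny model and random data are placeholders only.

```python
import numpy as np
import torch.nn as nn

from art.estimators.classification import PyTorchClassifier
# Hypothetical import: class/module name assumed from ART's ACG naming.
from art.attacks.evasion import RescalingAutoConjugateGradient

# A tiny stand-in MNIST classifier so the sketch is self-contained.
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
classifier = PyTorchClassifier(
    model=model,
    loss=nn.CrossEntropyLoss(),
    input_shape=(1, 28, 28),
    nb_classes=10,
    clip_values=(0.0, 1.0),
)

# Random data standing in for real MNIST samples and one-hot labels.
x = np.random.rand(8, 1, 28, 28).astype(np.float32)
y = np.eye(10, dtype=np.float32)[np.random.randint(0, 10, size=8)]

# Parameters mirror those of AutoConjugateGradient; ReACG may expose more.
attack = RescalingAutoConjugateGradient(
    estimator=classifier,
    norm=np.inf,
    eps=0.3,
    eps_step=0.1,
    max_iter=50,
    batch_size=8,
)
x_adv = attack.generate(x=x, y=y)

# Mirrors the unit test in this pull request: at least one pixel is perturbed.
assert np.max(np.abs(x_adv - x)) > 0.0
```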
Type of change
Please check all relevant options.
Testing
Please describe the tests that you ran to verify your changes. Consider listing any relevant details of your test configuration.
Test Configuration:
Checklist