# CIFAR10 Adversarial Examples Challenge
Recently, there has been much progress on adversarial *attacks* against neural networks, such as the [cleverhans](https://github.com/tensorflow/cleverhans) library and the code by [Carlini and Wagner](https://github.com/carlini/nn_robust_attacks).
We now complement these advances by proposing an *attack challenge* for the
[CIFAR10 dataset](https://www.cs.toronto.edu/~kriz/cifar.html) which follows the
format of [our earlier MNIST challenge](https://github.com/MadryLab/mnist_challenge).
We have trained a robust network, and the objective is to find a set of adversarial examples on which this network achieves only a low accuracy.
To train an adversarially-robust network, we followed the approach from our recent paper:
Analogously to our MNIST challenge, the goal of this challenge is to clarify the
| Attack | Submitted by | Accuracy | Submission Date |
| --- | --- | --- | --- |
| PGD on the cross-entropy loss for the<br> adversarially trained public network | (initial entry) |**63.39%**| Jul 12, 2017 |
| PGD on the [CW](https://github.com/carlini/nn_robust_attacks) loss for the<br> adversarially trained public network | (initial entry) | 64.38% | Jul 12, 2017 |
| FGSM on the [CW](https://github.com/carlini/nn_robust_attacks) loss for the<br> adversarially trained public network | (initial entry) | 67.25% | Jul 12, 2017 |
| FGSM on the [CW](https://github.com/carlini/nn_robust_attacks) loss for the<br> naturally trained public network | (initial entry) | 85.23% | Jul 12, 2017 |
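The leaderboard entries above are instances of two standard gradient-based attacks. As a rough illustration only, here is a minimal NumPy sketch of an l∞-bounded PGD attack (FGSM is the single-step special case). The `grad_fn` callable is a hypothetical stand-in for the gradient of the attacked loss (e.g. cross-entropy or the CW loss) with respect to the input pixels, and the epsilon/step-size values in the usage example are placeholders, not the challenge's official parameters.

```python
import numpy as np

def pgd_attack(x, grad_fn, epsilon, step_size, num_steps, rng=None):
    """l-infinity PGD sketch: repeated signed-gradient ascent steps,
    projected back into the epsilon-ball around x and the valid
    pixel range [0, 255]. grad_fn(x_adv) is assumed to return the
    gradient of the attacked loss w.r.t. the input (hypothetical
    interface, not part of the challenge code)."""
    if rng is None:
        rng = np.random.default_rng(0)
    # Random start inside the epsilon-ball.
    x_adv = x + rng.uniform(-epsilon, epsilon, size=x.shape)
    for _ in range(num_steps):
        x_adv = x_adv + step_size * np.sign(grad_fn(x_adv))  # ascent step
        x_adv = np.clip(x_adv, x - epsilon, x + epsilon)     # project to ball
        x_adv = np.clip(x_adv, 0.0, 255.0)                   # valid pixels
    return x_adv

# Toy usage with a constant gradient (stand-in for a real model):
w = np.array([1.0, -1.0, 0.5])
x = np.array([100.0, 100.0, 100.0])
adv = pgd_attack(x, lambda z: w, epsilon=8.0, step_size=2.0, num_steps=10)
```

Setting `num_steps=1` with `step_size=epsilon` and no random start recovers FGSM; with a constant gradient the iterates simply saturate at the boundary of the epsilon-ball.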