Commit 7ad1b9c (1 parent: 5e3f990)
2 files changed: +12 −9 lines
README.md
@@ -6,6 +6,15 @@ I'm playing with [PyTorch](http://pytorch.org/) on the CIFAR10 dataset.
 - Python 3.6+
 - PyTorch 1.0+
 
+## Training
+```
+# Start training with:
+CUDA_VISIBLE_DEVICES=0 python main.py
+
+# You can manually resume the training with:
+CUDA_VISIBLE_DEVICES=0 python main.py --resume --lr=0.01
+```
+
 ## Accuracy
 | Model             | Acc.        |
 | ----------------- | ----------- |
@@ -22,11 +31,5 @@ I'm playing with [PyTorch](http://pytorch.org/) on the CIFAR10 dataset.
 | [DenseNet121](https://arxiv.org/abs/1608.06993) | 95.04% |
 | [PreActResNet18](https://arxiv.org/abs/1603.05027) | 95.11% |
 | [DPN92](https://arxiv.org/abs/1707.01629) | 95.16% |
+| [DLA](https://arxiv.org/abs/1707.064) | 95.47% |
 
-## Learning rate adjustment
-I manually change the `lr` during training:
-- `0.1` for epoch `[0,150)`
-- `0.01` for epoch `[150,250)`
-- `0.001` for epoch `[250,350)`
-
-Resume the training with `python main.py --resume --lr=0.01`
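The new Training commands rely on command-line flags of main.py. As a rough sketch only, the flag names `--lr` and `--resume` are taken from the commands above, while the defaults and help strings are assumptions rather than what main.py necessarily defines:

```python
# Hypothetical sketch of the argument parsing the README commands assume.
# --lr sets the initial learning rate; --resume restarts from a saved checkpoint.
# Defaults are illustrative only.
import argparse

parser = argparse.ArgumentParser(description='PyTorch CIFAR10 Training')
parser.add_argument('--lr', default=0.1, type=float, help='initial learning rate')
parser.add_argument('--resume', action='store_true', help='resume from checkpoint')
args = parser.parse_args()

# CUDA_VISIBLE_DEVICES=0 restricts the run to GPU 0; it is an environment
# variable read by CUDA/PyTorch, not an option of main.py itself.
```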
main.py
@@ -86,7 +86,7 @@
 criterion = nn.CrossEntropyLoss()
 optimizer = optim.SGD(net.parameters(), lr=args.lr,
                       momentum=0.9, weight_decay=5e-4)
-scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=100)
+scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=200)
 
 
 # Training
@@ -148,7 +148,7 @@ def test(epoch):
         best_acc = acc
 
 
-for epoch in range(start_epoch, start_epoch+100):
+for epoch in range(start_epoch, start_epoch+200):
     train(epoch)
     test(epoch)
     scheduler.step()
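With `T_max` raised to 200 to match the longer epoch loop, `CosineAnnealingLR` sweeps the learning rate from `args.lr` down to `eta_min` (0 by default) along a single half-cosine over the whole run, replacing the manual 0.1/0.01/0.001 step schedule removed from the README. A self-contained sketch of how that schedule evolves, using a dummy parameter and an illustrative initial lr of 0.1:

```python
# Standalone sketch of the schedule main.py now uses: CosineAnnealingLR with
# T_max equal to the number of training epochs, so the lr follows one
# half-cosine from the initial value down to eta_min (default 0).
import torch

params = [torch.nn.Parameter(torch.zeros(1))]  # dummy parameter in place of net.parameters()
optimizer = torch.optim.SGD(params, lr=0.1, momentum=0.9, weight_decay=5e-4)
scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=200)

for epoch in range(200):
    optimizer.step()  # stands in for a full training epoch
    if epoch % 50 == 0:
        print(epoch, optimizer.param_groups[0]['lr'])  # lr used during this epoch
    scheduler.step()  # anneal the lr for the next epoch, as main.py does
```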