Commit 7ad1b9c

Update README
1 parent 5e3f990 commit 7ad1b9c

2 files changed: +12 −9 lines changed
README.md: +10 −7
@@ -6,6 +6,15 @@ I'm playing with [PyTorch](http://pytorch.org/) on the CIFAR10 dataset.
 - Python 3.6+
 - PyTorch 1.0+
 
+## Training
+```
+# Start training with:
+CUDA_VISIBLE_DEVICES=0 python main.py
+
+# You can manually resume the training with:
+CUDA_VISIBLE_DEVICES=0 python main.py --resume --lr=0.01
+```
+
 ## Accuracy
 | Model | Acc. |
 | ----------------- | ----------- |
@@ -22,11 +31,5 @@ I'm playing with [PyTorch](http://pytorch.org/) on the CIFAR10 dataset.
 | [DenseNet121](https://arxiv.org/abs/1608.06993) | 95.04% |
 | [PreActResNet18](https://arxiv.org/abs/1603.05027) | 95.11% |
 | [DPN92](https://arxiv.org/abs/1707.01629) | 95.16% |
+| [DLA](https://arxiv.org/abs/1707.064) | 95.47% |
 
-## Learning rate adjustment
-I manually change the `lr` during training:
-- `0.1` for epoch `[0,150)`
-- `0.01` for epoch `[150,250)`
-- `0.001` for epoch `[250,350)`
-
-Resume the training with `python main.py --resume --lr=0.01`
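The manual schedule this commit removes from the README can be written out as a plain step function, which makes the epoch boundaries explicit. A minimal sketch using only the values from the removed lines (the function name `manual_lr` is illustrative, not from the repo):

```python
# Step schedule the README previously documented; this commit drops it
# in favor of the cosine annealing scheduler in main.py.
# `manual_lr` is a hypothetical helper name, not code from the repo.
def manual_lr(epoch):
    if epoch < 150:
        return 0.1    # epochs [0, 150)
    elif epoch < 250:
        return 0.01   # epochs [150, 250)
    else:
        return 0.001  # epochs [250, 350)

print(manual_lr(0), manual_lr(150), manual_lr(250))  # 0.1 0.01 0.001
```

Resuming at epoch 150 with `--lr=0.01` matches this table, which is why the old README suggested that exact flag.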

main.py: +2 −2
@@ -86,7 +86,7 @@
 criterion = nn.CrossEntropyLoss()
 optimizer = optim.SGD(net.parameters(), lr=args.lr,
                       momentum=0.9, weight_decay=5e-4)
-scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=100)
+scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=200)
 
 
 # Training
@@ -148,7 +148,7 @@ def test(epoch):
         best_acc = acc
 
 
-for epoch in range(start_epoch, start_epoch+100):
+for epoch in range(start_epoch, start_epoch+200):
     train(epoch)
     test(epoch)
     scheduler.step()
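The two edits are consistent: `T_max` is raised to match the new 200-epoch loop, so the cosine schedule completes exactly one half-period over the full run instead of bottoming out halfway. As a rough pure-Python sketch of the closed-form learning rate that PyTorch documents for `CosineAnnealingLR` (with `eta_min` at its default of 0; this approximates the schedule, not the scheduler's internal recursive update):

```python
import math

# Closed-form cosine-annealed LR at epoch t, per the CosineAnnealingLR docs:
# lr(t) = eta_min + (base_lr - eta_min) * (1 + cos(pi * t / T_max)) / 2
# base_lr=0.1 mirrors the repo's default; `cosine_lr` is an illustrative name.
def cosine_lr(t, base_lr=0.1, T_max=200, eta_min=0.0):
    return eta_min + (base_lr - eta_min) * (1 + math.cos(math.pi * t / T_max)) / 2

print(round(cosine_lr(0), 4))    # 0.1  -> starts at base_lr
print(round(cosine_lr(100), 4))  # 0.05 -> halfway through the run
print(round(cosine_lr(200), 4))  # 0.0  -> fully annealed at T_max
```

With the old `T_max=100` and a 200-epoch loop, the schedule would have reached its minimum at epoch 100 and climbed back up, which is presumably what this fix avoids.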
