
Commit f4d8d9f

Merge pull request #7 from jangsoopark/v2.1.0
V2.1.0
2 parents 4d9416e + 4fdf008 commit f4d8d9f

15 files changed: +256 -14 lines added, -59 lines removed

README.md

+25 -14
@@ -41,8 +41,7 @@ The proposed model only consists of **sparsely connected layers** without any fu
 ## Training
 For training, this implementation fixes the random seed to `12321` for `reproducibility`.

-The experimental conditions are the same as in the paper, except for `data augmentation` and `learning rate`.
-The `learning rate` is initialized with `1e-3` and decreased by a factor of 0.1 **after 26 epochs**.
+The experimental conditions are the same as in the paper, except for `data augmentation`.
 You can see the details in `src/model/_base.py` and `experiments/config/AConvNet-SOC.json`

 ### Data Augmentation
@@ -52,10 +51,9 @@ You can see the details in `src/model/_base.py` and `experiments/config/AConvNet

 - However, for SOC, this repository does not use random shifting due to an accuracy issue.
 - You can see the details in `src/data/generate_dataset.py` and `src/data/mstar.py`
-- This implementation failed to achieve higher than 98% accuracy when using random sampling.
 - The implementation details for data augmentation are as follows:
-  - Crop the center 94 x 94 region of the 128 x 128 SAR image chip (49 patches per image chip).
-  - Extract 88 x 88 patches with stride 1 from the 94 x 94 image.
+  - Crop the center 94 x 94 region of the 100 x 100 SAR image chip (49 patches per image chip).
+  - Extract 88 x 88 patches with stride 1 from the 94 x 94 image with random cropping.


 ## Experiments
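For orientation, a minimal sketch of the augmentation arithmetic described in the bullets above: a 94 x 94 center crop of the chip admits (94 - 88 + 1)^2 = 7 * 7 = 49 distinct 88 x 88 windows at stride 1. In this commit the 88 x 88 crop is actually drawn at random during training (see `src/train.py` further down); the exhaustive enumeration below only shows where the 49-patches figure comes from, and `extract_patches` is an illustrative helper, not a function from this repository.

```python
import numpy as np

def extract_patches(chip, center=94, patch=88, stride=1):
    # Center-crop a `center` x `center` region from the SAR image chip (e.g. 100 x 100).
    h, w = chip.shape[:2]
    y0, x0 = (h - center) // 2, (w - center) // 2
    cropped = chip[y0: y0 + center, x0: x0 + center]

    # Enumerate `patch` x `patch` windows at the given stride:
    # (94 - 88) / 1 + 1 = 7 positions per axis -> 7 * 7 = 49 patches per chip.
    patches = [
        cropped[y: y + patch, x: x + patch]
        for y in range(0, center - patch + 1, stride)
        for x in range(0, center - patch + 1, stride)
    ]
    return np.stack(patches)

patches = extract_patches(np.zeros((100, 100)))
assert patches.shape == (49, 88, 88)
```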
@@ -141,14 +139,14 @@ MSTAR-PublicMixedTargets-CD1/MSTAR_PUBLIC_MIXED_TARGETS_CD1
 - Place the two directories (`train` and `test`) in `dataset/raw`.
 ```shell
 $ cd src/data
-$ python3 generate_dataset.py --is_train=True --use_phase=True --chip_size=94 --dataset=soc
-$ python3 generate_dataset.py --is_train=False --use_phase=True --dataset=soc
+$ python3 generate_dataset.py --is_train=True --use_phase=True --chip_size=100 --patch_size=94 --dataset=soc
+$ python3 generate_dataset.py --is_train=False --use_phase=True --chip_size=128 --patch_size=128 --dataset=soc
 $ cd ..
-$ python3 train.py
+$ python3 train.py --config_name=config/AConvNet-SOC.json
 ```

 #### Results of SOC
-- Final Accuracy is **99.18%** (the official accuracy is 99.13%)
+- Final Accuracy is **99.13%** at epoch 26 (the official accuracy is 99.13%)
 - You can see the details in `notebook/experiments-SOC.ipynb`

 - Visualization of training loss and test accuracy
@@ -165,10 +163,10 @@ $ python3 train.py

 | Noise | 1% | 5% | 10% | 15% |
 | :---: | :---: | :---: | :---: | :---: |
-| AConvNet-PyTorch | 98.56 | 94.39 | 85.03 | 73.65 |
+| AConvNet-PyTorch | 98.60 | 95.18 | 85.36 | 73.24 |
 | AConvNet-Official | 91.76 | 88.52 | 75.84 | 54.68 |

-
+<!--
 ### Extended Operating Conditions (EOC)

 #### EOC-1 (Large depression angle change)
@@ -216,15 +214,28 @@ MSTAR-PublicMixedTargets-CD2/MSTAR_PUBLIC_MIXED_TARGETS_CD2
 └ ...

 ```
-- Train Target: 2S1, BRDM2, T72, ZSU234 with depression angle 17$\degree$
-- Test Target: 2S1, BRDM2, T72, ZSU234 with depression angle 30$\degree$
+
+#### Quick Start Guide for Training
+
+- Dataset Preparation
+  - Download the [soc-dataset.zip](https://github.com/jangsoopark/AConvNet-pytorch/releases/download/V2.0.0/soc-raw.zip)
+  - After extracting it, you can find the `train` and `test` directories inside the `raw` directory.
+  - Place the two directories (`train` and `test`) in `dataset/raw`.
+```shell
+$ cd src/data
+$ python3 generate_dataset.py --is_train=True --use_phase=True --chip_size=96 --dataset=eoc-1
+$ python3 generate_dataset.py --is_train=False --use_phase=True --dataset=soc
+$ cd ..
+$ python3 train.py --config_name=config/AConvNet-EOC-1.json
+```
+

 #### EOC-2 (Target configuration and version variants)

 ### Outlier Rejection

 ### End-to-End SAR-ATR Cases
-
+-->
 ## Details about the specific environment of this repository

 | | |

assets/figure/001.png (15.8 KB)

assets/figure/2S1.png (1.23 KB / 181 Bytes)

assets/figure/soc-training-plot.png (4.18 KB)

experiments/config/AConvNet-EOC-1.json

+3 -3
@@ -4,10 +4,10 @@
   "num_classes": 4,
   "channels": 2,
   "batch_size": 100,
-  "epochs": 50,
+  "epochs": 100,
   "momentum": 0.9,
-  "lr": 1e-3,
-  "lr_step": [14],
+  "lr": 1e-4,
+  "lr_step": [50],
   "lr_decay": 0.1,
   "weight_decay": 4e-3,
   "dropout_rate": 0.5

experiments/config/AConvNet-SOC.json

+2 -2
@@ -4,10 +4,10 @@
   "num_classes": 10,
   "channels": 2,
   "batch_size": 100,
-  "epochs": 50,
+  "epochs": 100,
   "momentum": 0.9,
   "lr": 1e-3,
-  "lr_step": [26],
+  "lr_step": [50],
   "lr_decay": 0.1,
   "weight_decay": 4e-3,
   "dropout_rate": 0.5

notebook/experiments-SOC.ipynb

+27-24
Large diffs are not rendered by default.

notebook/target-chip.ipynb

+156
Large diffs are not rendered by default.

requirements.txt

+2 -1
@@ -6,4 +6,5 @@ tqdm==4.61.2
 torchvision==0.10.0+cu111
 matplotlib
 scikit-learn
-seaborn
+seaborn
+Pillow

src/data/generate_dataset.py

+21 -6
@@ -3,6 +3,7 @@
 from absl import app

 from multiprocessing import Pool
+from PIL import Image
 import numpy as np

 import json
@@ -13,23 +14,34 @@

 flags.DEFINE_string('image_root', default='dataset', help='')
 flags.DEFINE_string('dataset', default='soc', help='')
-flags.DEFINE_boolean('is_train', default=True, help='')
-flags.DEFINE_boolean('use_phase', default=False, help='')
-flags.DEFINE_integer('chip_size', default=94, help='')
+flags.DEFINE_boolean('is_train', default=False, help='')
+flags.DEFINE_integer('chip_size', default=100, help='')
+flags.DEFINE_integer('patch_size', default=94, help='')
+flags.DEFINE_boolean('use_phase', default=True, help='')
+
 FLAGS = flags.FLAGS

 project_root = os.path.dirname(os.path.dirname(os.path.dirname(os.path.abspath(__file__))))


-def generate(src_path, dst_path, is_train, use_phase, chip_size, dataset):
+def data_scaling(chip):
+    r = chip.max() - chip.min()
+    return (chip - chip.min()) / r
+
+
+def log_scale(chip):
+    return np.log10(np.abs(chip) + 1)
+
+
+def generate(src_path, dst_path, is_train, chip_size, patch_size, use_phase, dataset):
     if not os.path.exists(src_path):
         return
     if not os.path.exists(dst_path):
         os.makedirs(dst_path, exist_ok=True)
     print(f'Target Name: {os.path.basename(dst_path)}')

     _mstar = mstar.MSTAR(
-        name=dataset, is_train=is_train, use_phase=use_phase, chip_size=chip_size, patch_size=88, stride=1
+        name=dataset, is_train=is_train, chip_size=chip_size, patch_size=patch_size, use_phase=use_phase, stride=1
     )

     image_list = glob.glob(os.path.join(src_path, '*'))
@@ -40,7 +52,10 @@ def generate(src_path, dst_path, is_train, use_phase, chip_size, dataset):
         name = os.path.splitext(os.path.basename(path))[0]
         with open(os.path.join(dst_path, f'{name}-{i}.json'), mode='w', encoding='utf-8') as f:
             json.dump(label, f, ensure_ascii=False, indent=2)
+
+        # _image = log_scale(_image)
         np.save(os.path.join(dst_path, f'{name}-{i}.npy'), _image)
+        # Image.fromarray(data_scaling(_image)).convert('L').save(os.path.join(dst_path, f'{name}-{i}.bmp'))


 def main(_):
@@ -57,7 +72,7 @@ def main(_):
             (
                 os.path.join(raw_root, mode, target),
                 os.path.join(output_root, target),
-                FLAGS.is_train, FLAGS.use_phase, FLAGS.chip_size, FLAGS.dataset
+                FLAGS.is_train, FLAGS.chip_size, FLAGS.patch_size, FLAGS.use_phase, FLAGS.dataset
             ) for target in mstar.target_name_soc
         ]
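The two helpers added above, `data_scaling` and `log_scale`, are currently exercised only by the commented-out BMP export; a small standalone check of what they compute, with illustrative values:

```python
import numpy as np

def data_scaling(chip):
    # Min-max scaling to [0, 1], as added in generate_dataset.py.
    r = chip.max() - chip.min()
    return (chip - chip.min()) / r

def log_scale(chip):
    # Log compression of SAR magnitudes, as added in generate_dataset.py.
    return np.log10(np.abs(chip) + 1)

chip = np.array([[0.0, 9.0], [99.0, 999.0]])
print(log_scale(chip))     # [[0. 1.] [2. 3.]]
print(data_scaling(chip))  # values rescaled into [0, 1]
```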

src/data/loader.py

+1 -1
@@ -1,6 +1,6 @@
 import numpy as np

-import torchvision
+from skimage import io
 import torch
 import tqdm

src/data/preprocess.py

+7 -2
@@ -35,8 +35,13 @@ def __call__(self, sample):

         h, w, _ = _input.shape
         oh, ow = self.size
-        y = np.random.randint(0, h - oh)
-        x = np.random.randint(0, w - ow)
+
+        dh = h - oh
+        dw = w - ow
+        y = np.random.randint(0, dh) if dh > 0 else 0
+        x = np.random.randint(0, dw) if dw > 0 else 0
+        oh = oh if dh > 0 else h
+        ow = ow if dw > 0 else w

         return _input[y: y + oh, x: x + ow, :]
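The rewrite above guards `np.random.randint` against an empty range when the input is not strictly larger than the crop size; a self-contained sketch of the same logic as a plain function (not the repository's `RandomCrop` class):

```python
import numpy as np

def random_crop(image, size):
    # Guarded random crop: fall back to the full extent along any axis
    # where the image is not strictly larger than the requested crop.
    oh, ow = size
    h, w, _ = image.shape
    dh, dw = h - oh, w - ow
    y = np.random.randint(0, dh) if dh > 0 else 0
    x = np.random.randint(0, dw) if dw > 0 else 0
    oh = oh if dh > 0 else h
    ow = ow if dw > 0 else w
    return image[y: y + oh, x: x + ow, :]

patch = random_crop(np.zeros((94, 94, 2)), (88, 88))  # random 88 x 88 window
same = random_crop(np.zeros((88, 88, 2)), (88, 88))   # input returned unchanged
```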

src/model/network.py

+2 -2
@@ -12,7 +12,7 @@ def __init__(self, **params):
         self.classes = params.get('classes', 10)
         self.channels = params.get('channels', 1)

-        _w_init = params.get('w_init', lambda x: nn.init.kaiming_uniform_(x, nonlinearity='relu'))
+        _w_init = params.get('w_init', lambda x: nn.init.kaiming_normal_(x, nonlinearity='relu'))
         _b_init = params.get('b_init', lambda x: nn.init.constant_(x, 0.1))

         self._layer = nn.Sequential(
@@ -34,7 +34,7 @@ def __init__(self, **params):
             ),
             nn.Dropout(p=self.dropout_rate),
             _blocks.Conv2DBlock(
-                shape=[3, 3, 128, self.classes], stride=3, padding='valid',
+                shape=[3, 3, 128, self.classes], stride=1, padding='valid',
                 w_init=_w_init, b_init=nn.init.zeros_
             ),
             nn.Flatten()
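For context, a minimal sketch of how the default `w_init` / `b_init` lambdas above would be applied to a convolution layer; the `nn.Conv2d` below is a placeholder, not the repository's `Conv2DBlock`:

```python
import torch.nn as nn

# Default initializers as defined in network.py after this commit.
_w_init = lambda x: nn.init.kaiming_normal_(x, nonlinearity='relu')
_b_init = lambda x: nn.init.constant_(x, 0.1)

conv = nn.Conv2d(in_channels=2, out_channels=16, kernel_size=5)  # placeholder layer
_w_init(conv.weight)
_b_init(conv.bias)
```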

src/train.py

+10 -4
@@ -14,6 +14,7 @@
 import json
 import os

+from data import preprocess
 from data import loader
 from utils import common
 import model
@@ -22,14 +23,17 @@
 flags.DEFINE_string('config_name', 'config/AConvNet-SOC.json', help='')
 FLAGS = flags.FLAGS

-#common.set_random_seed(12321)

+common.set_random_seed(12321)

-def load_dataset(path, is_train, name, batch_size):

+def load_dataset(path, is_train, name, batch_size):
+    transform = [preprocess.CenterCrop(88), torchvision.transforms.ToTensor()]
+    if is_train:
+        transform = [preprocess.RandomCrop(88), torchvision.transforms.ToTensor()]
     _dataset = loader.Dataset(
         path, name=name, is_train=is_train,
-        transform=torchvision.transforms.Compose([torchvision.transforms.ToTensor()])
+        transform=torchvision.transforms.Compose(transform)
     )
     data_loader = torch.utils.data.DataLoader(
         _dataset, batch_size=batch_size, shuffle=is_train, num_workers=1
@@ -99,7 +103,9 @@ def run(epochs, dataset, classes, channels, batch_size,

         accuracy = validation(m, valid_set)

-        logging.info(f'Epoch: {epoch + 1:03d}/{epochs:03d} | loss={np.mean(_loss):.4f} | lr={lr} | accuracy={accuracy}')
+        logging.info(
+            f'Epoch: {epoch + 1:03d}/{epochs:03d} | loss={np.mean(_loss):.4f} | lr={lr} | accuracy={accuracy:.2f}'
+        )

         history['loss'].append(np.mean(_loss))
         history['accuracy'].append(accuracy)
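A hedged sketch of the transform selection introduced in `load_dataset`: random 88 x 88 crops during training, a deterministic center crop for evaluation. torchvision's own `RandomCrop` / `CenterCrop` stand in for the repository's `preprocess` classes here, and `ToTensor` is omitted because the input is already a tensor:

```python
import torch
import torchvision

def build_transform(is_train, size=88):
    # Mirrors the branch added in load_dataset(): random crops while training,
    # a deterministic center crop for evaluation.
    crop = (torchvision.transforms.RandomCrop(size) if is_train
            else torchvision.transforms.CenterCrop(size))
    return torchvision.transforms.Compose([crop])

x = torch.zeros(2, 94, 94)              # (channels, H, W) placeholder patch
print(build_transform(True)(x).shape)   # torch.Size([2, 88, 88])
print(build_transform(False)(x).shape)  # torch.Size([2, 88, 88])
```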
