
Commit 654c89a

Browse files
Author: jangsoopark
Commit message: soc experiments
1 parent 10f55ed

File tree

8 files changed: +383 −29 lines changed


README.md

+29-20
Original file line numberDiff line numberDiff line change
@@ -41,31 +41,28 @@ The proposed model only consists of **sparsely connected layers** without any fu
 ## Training
 For training, this implementation fixes the random seed to `12321` for `reproducibility`.
 
-- [x] Data Augmentation
-- [ ] Back-propagation
-- [ ] Mini batch Stochastic Gradient Descent with Momentum
-- [ ] Weight Initialization
-- [ ] Learning Rate
-- [ ] Early Stopping
-
+The experimental conditions are the same as in the paper, except for `data augmentation` and the `learning rate`.
+The `learning rate` is initialized to `1e-3` and decreased by a factor of 0.1 **after 26 epochs**.
+You can see the details in `src/model/_base.py` and `experiments/config/AConvNet-SOC.json`.
 
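The schedule above amounts to a single-milestone step decay; a plain-Python sketch follows (the authoritative values live in `experiments/config/AConvNet-SOC.json`; PyTorch's `torch.optim.lr_scheduler.MultiStepLR` with `milestones=[26]`, `gamma=0.1` expresses the same policy):

```python
def learning_rate(epoch, base_lr=1e-3, decay_epoch=26, gamma=0.1):
    # Hold base_lr for the first decay_epoch epochs,
    # then scale it once by gamma for the remaining epochs.
    return base_lr * gamma if epoch >= decay_epoch else base_lr

# 1e-3 for epochs 0-25, 1e-4 from epoch 26 onward.
schedule = [learning_rate(e) for e in range(30)]
```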
 ### Data Augmentation
-Source code is `src/data/generate_dataset.py` and `src/data/mstar.py`
+
 - The author uses random shifting to extract 88 x 88 patches from 128 x 128 SAR image chips.
   - The number of training images per SAR image chip can be increased to at most (128 - 88 + 1) x (128 - 88 + 1) = 1681.
 
 - However, for SOC, this repository does not use random shifting due to an accuracy issue.
+  - You can see the details in `src/data/generate_dataset.py` and `src/data/mstar.py`.
   - This implementation failed to achieve higher than 98% accuracy when using random sampling.
   - The implementation details for data augmentation are as follows:
-  - Crop the center of 94 x 94 size image on 128 x 128 SAR image chip.
+  - Crop the central 94 x 94 region of the 128 x 128 SAR image chip (49 patches per image chip).
   - Extract 88 x 88 patches with stride 1 from the 94 x 94 image.
 
 
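The two augmentation steps above (center crop, then dense patch extraction) can be sketched with numpy; this is a minimal illustration only, and the repository's actual implementation is in `src/data/generate_dataset.py`:

```python
import numpy as np

def center_crop(chip, size=94):
    # Step 1: crop the central size x size region of a 128 x 128 SAR chip.
    h, w = chip.shape
    top, left = (h - size) // 2, (w - size) // 2
    return chip[top:top + size, left:left + size]

def extract_patches(image, patch=88, stride=1):
    # Step 2: slide an 88 x 88 window with stride 1 over the 94 x 94 crop,
    # yielding (94 - 88 + 1) ** 2 = 49 patches per chip.
    h, w = image.shape
    return [image[i:i + patch, j:j + patch]
            for i in range(0, h - patch + 1, stride)
            for j in range(0, w - patch + 1, stride)]

chip = np.random.rand(128, 128).astype(np.float32)
patches = extract_patches(center_crop(chip))
print(len(patches))  # 49
```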
 ## Experiments
 
-### Standard Operating Condition (SOC)
+You can download the MSTAR Dataset from [MSTAR Overview](https://www.sdms.afrl.af.mil/index.php?collection=mstar)
 
-You can download from [MSTAR Overview](https://www.sdms.afrl.af.mil/index.php?collection=mstar)
+### Standard Operating Condition (SOC)
 
 - MSTAR Target Chips (T72 BMP2 BTR70 SLICY) which is **MSTAR-PublicTargetChips-T72-BMP2-BTR70-SLICY.zip**
 - MSTAR / IU Mixed Targets which consists of **MSTAR-PublicMixedTargets-CD1.zip** and **MSTAR-PublicMixedTargets-CD2.zip**
@@ -135,6 +132,18 @@ MSTAR-PublicMixedTargets-CD1/MSTAR_PUBLIC_MIXED_TARGETS_CD1
 
 ```
 
+#### Results of SOC
+- You can see the details in `notebook/experiments-SOC.ipynb`
+
+- Visualization of training loss and test accuracy
+
+![soc-training-plot](./assets/figure/soc-training-plot.png)
+
+- Confusion matrix with the best model at **epoch 28**
+
+![soc-confusion-matrix](./assets/figure/soc-confusion-matrix.png)
+
+
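The confusion matrix shown above is produced in the notebook; a minimal numpy-only sketch of how such a matrix is accumulated follows (the notebook itself presumably uses `scikit-learn` and `seaborn`, both listed in `requirements.txt`):

```python
import numpy as np

def confusion_matrix(y_true, y_pred, num_classes=10):
    # Rows are ground-truth classes, columns are predicted classes;
    # SOC uses the 10 MSTAR target classes.
    m = np.zeros((num_classes, num_classes), dtype=np.int64)
    for t, p in zip(y_true, y_pred):
        m[t, p] += 1
    return m

# Toy labels for illustration: 3 of 4 predictions are correct.
m = confusion_matrix([0, 1, 1, 2], [0, 1, 2, 2], num_classes=3)
print(int(m.trace()))  # 3 (diagonal entries count correct predictions)
```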
 ### Extended Operating Conditions (EOC)
 
 ### Outlier Rejection
@@ -156,28 +165,28 @@ MSTAR-PublicMixedTargets-CD1/MSTAR_PUBLIC_MIXED_TARGETS_CD1
 }
 ```
 
+---
+
 ## TODO
 
 - [ ] Implementation
 - [ ] Data generation
-- [ ] SOC
+- [X] SOC
 - [ ] EOC
 - [ ] Outlier Rejection
 - [ ] End-to-End SAR-ATR
 - [ ] Data Loader
-- [ ] SOC
+- [X] SOC
 - [ ] EOC
 - [ ] Outlier Rejection
 - [ ] End-to-End SAR-ATR
 - [ ] Model
-- [ ] Network
-- [ ] Training
-- [ ] Early Stopping
-- [ ] Hyper-parameter Optimization
+- [X] Network
+- [X] Training
+- [X] Early Stopping
+- [X] Hyper-parameter Optimization
 - [ ] Experiments
-- [ ] Reproduce the SOC Results
-- [ ] 1 channel input (Magnitude only)
-- [ ] 2 channel input (Magnitude + Phase)
+- [X] Reproduce the SOC Results
 - [ ] Reproduce the EOC Results
 - [ ] Reproduce the outlier rejection
 - [ ] Reproduce the end-to-end SAR-ATR
(binary file, 34.4 KB)

assets/figure/soc-training-plot.png

(binary file, 12.5 KB)

docker/Dockerfile

+32
Original file line numberDiff line numberDiff line change
@@ -0,0 +1,32 @@
+# docker build . -t aconvnet-pytorch
+# Base container: docker pull pytorch/pytorch:1.9.0-cuda11.1-cudnn8-devel
+
+FROM pytorch/pytorch:1.9.0-cuda11.1-cudnn8-devel
+
+ARG DEBIAN_FRONTEND=noninteractive
+
+RUN apt update
+
+RUN pip install seaborn && \
+    pip install numpy && \
+    pip install scipy && \
+    pip install tqdm && \
+    pip install jupyter && \
+    pip install matplotlib && \
+    pip install scikit-image && \
+    pip install scikit-learn && \
+    pip install opencv-python && \
+    pip install absl-py && \
+    pip install optuna
+
+RUN apt update && \
+    apt install -y wget vim emacs nano libgl1-mesa-glx
+
+RUN mkdir -p /workspace
+
+ARG work_dir=/workspace
+
+WORKDIR ${work_dir}

notebook/experiments-SOC.ipynb

+309
Large diffs are not rendered by default.

requirements.txt

+8-7
Original file line numberDiff line numberDiff line change
@@ -1,8 +1,9 @@
-scikit_image==0.18.1
-numpy==1.20.1
-torchvision==0.9.1+cu111
-matplotlib==3.3.4
-torch==1.8.1+cu111
+scikit-image==0.18.2
+numpy==1.21.1
 absl-py
-tqdm==4.59.0
-
+torch==1.9.0+cu111
+tqdm==4.61.2
+torchvision==0.10.0+cu111
+matplotlib
+scikit-learn
+seaborn

run-docker.sh

+5
Original file line numberDiff line numberDiff line change
@@ -0,0 +1,5 @@
+#!/bin/bash
+
+WORKSPACE=
+
+docker run --gpus all --rm -it -p 8888:8888 --mount type=bind,src=${WORKSPACE},dst=/workspace aconvnet-pytorch /bin/bash

src/train.py

-2
Original file line numberDiff line numberDiff line change
@@ -9,13 +9,11 @@
 import torchvision
 import torch
 
-from skimage import metrics
 import numpy as np
 
 import json
 import os
 
-from data import preprocess
 from data import loader
 from utils import common
 import model

0 commit comments