
Commit bb7c64c

No public description
PiperOrigin-RevId: 684582762
1 parent 889a59b commit bb7c64c

File tree

1 file changed: +63 -0 lines changed
  • official/projects/waste_identification_ml


official/projects/waste_identification_ml/README.md

@@ -75,6 +75,69 @@ Material Form Model V2| MobileNet | saved model | [click here](https://storage.g
--config_file="config.yaml"`
10. You can also start a screen session and run the training in the background.

## Config file parameters

- `annotation_file` - path to the validation annotation file in COCO JSON
  format.
- `init_checkpoint` - path to the checkpoint used for transfer learning.
- `init_checkpoint_modules` - which modules to load from the checkpoint: the
  backbone, the decoder, or both.
- `freeze_backbone` - whether to freeze the backbone during training.
- `input_size` - image size the model is trained at.
- `num_classes` - total number of classes + 1 (for the background class).
- `per_category_metrics` - set this if you need metrics reported for each
  class.
- `global_batch_size` - total batch size across all replicas.
- `input_path` - path to the dataset.
- `parser` - contains the data augmentation operations.
- `steps_per_loop` - number of steps to complete one epoch. It's usually
  `training data size / batch size`.
- `summary_interval` - how often to write metric summaries.
- `train_steps` - total number of training steps. It's equal to
  `steps_per_loop x epochs`.
- `validation_interval` - how often to evaluate on the validation data.
- `validation_steps` - number of steps needed to cover the validation data.
  It's equal to `validation data size / batch size`.
- `warmup_learning_rate` - the starting learning rate for warmup, a strategy
  that gradually increases the learning rate from a very low value to the
  desired initial learning rate over a predefined number of iterations or
  epochs. Warmup stabilizes training in the early stages by letting the model
  adapt to the data slowly before a higher learning rate is used.
- `warmup_steps` - number of steps over which the learning rate is warmed up.
- `initial_learning_rate` - the learning rate at the start of the main
  training schedule; warmup ramps up to this value.
- `checkpoint_interval` - number of steps between checkpoint exports.

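Below is an illustrative sketch of where these parameters typically sit in a
`config.yaml`. The nesting follows the general shape of a Model Garden
experiment config, but exact field placement varies by experiment, so check it
against this project's own config file; the paths, `num_classes`, and
`input_size` are placeholders, and the numeric trainer values are taken from
the worked example that follows.

```yaml
# Illustrative sketch only -- verify the nesting against the project's config.
task:
  init_checkpoint: 'gs://path/to/pretrained/checkpoint'  # placeholder path
  init_checkpoint_modules: 'backbone'  # backbone, decoder, or both
  freeze_backbone: false
  annotation_file: 'gs://path/to/instances_val.json'  # COCO JSON validation file
  per_category_metrics: true
  model:
    num_classes: 11              # placeholder: your classes + 1 for background
    input_size: [512, 512, 3]    # placeholder
  train_data:
    input_path: 'gs://path/to/train-*.tfrecord'
    global_batch_size: 512
    parser:
      aug_rand_hflip: true       # data augmentation operations live here
  validation_data:
    input_path: 'gs://path/to/val-*.tfrecord'
    global_batch_size: 128
trainer:
  steps_per_loop: 8              # 4389 training samples // 512 batch size
  summary_interval: 8
  train_steps: 5600              # 700 epochs * 8 steps_per_loop
  validation_interval: 8
  validation_steps: 3            # 485 validation samples // 128 batch size
  checkpoint_interval: 40        # steps_per_loop * 5
  optimizer_config:
    learning_rate:
      type: 'cosine'
      cosine:
        initial_learning_rate: 0.001
        decay_steps: 5600
    warmup:
      type: 'linear'
      linear:
        warmup_learning_rate: 0.0001
        warmup_steps: 80         # steps_per_loop * 10
```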
A common way to calculate these parameters is shown below:

```python
total_training_samples = 4389
total_validation_samples = 485

train_batch_size = 512
val_batch_size = 128
num_epochs = 700
warmup_learning_rate = 0.0001
initial_learning_rate = 0.001

# One "loop" corresponds to one pass over the training data (one epoch).
steps_per_loop = total_training_samples // train_batch_size
summary_interval = steps_per_loop
train_steps = num_epochs * steps_per_loop
validation_interval = steps_per_loop
validation_steps = total_validation_samples // val_batch_size
warmup_steps = steps_per_loop * 10
checkpoint_interval = steps_per_loop * 5
decay_steps = int(train_steps)

print(f'steps_per_loop: {steps_per_loop}')
print(f'summary_interval: {summary_interval}')
print(f'train_steps: {train_steps}')
print(f'validation_interval: {validation_interval}')
print(f'validation_steps: {validation_steps}')
print(f'warmup_steps: {warmup_steps}')
print(f'warmup_learning_rate: {warmup_learning_rate}')
print(f'initial_learning_rate: {initial_learning_rate}')
print(f'decay_steps: {decay_steps}')
print(f'checkpoint_interval: {checkpoint_interval}')
```
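For intuition, here is a minimal Python sketch of what a linear warmup does
with these numbers. It only illustrates the schedule's behaviour under the
assumption that a linear warmup is configured; `learning_rate_at` is a
hypothetical helper, not a Model Garden API.

```python
def learning_rate_at(step,
                     warmup_steps=80,
                     warmup_learning_rate=0.0001,
                     initial_learning_rate=0.001):
  """Illustrative linear warmup: ramps the LR up over `warmup_steps`."""
  if step < warmup_steps:
    # Interpolate linearly from warmup_learning_rate to initial_learning_rate.
    fraction = step / warmup_steps
    return warmup_learning_rate + fraction * (
        initial_learning_rate - warmup_learning_rate)
  # After warmup, the main schedule (e.g. a decay over decay_steps) takes
  # over, starting from initial_learning_rate.
  return initial_learning_rate

print(learning_rate_at(0))   # 0.0001 (warmup_learning_rate)
print(learning_rate_at(40))  # 0.00055 (halfway through warmup)
print(learning_rate_at(80))  # 0.001 (initial_learning_rate reached)
```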

## Authors and Maintainers
- Umair Sabir
