This is not much different from a pretraining config. The notes below explain the key entries, and a rough sketch of such a config follows the list.
1. The model will be saved in Hugging Face format to the `~/results` directory every 20,000 iterations.
2. Location of the dataset metadata file generated in Step 4 of the quick-start guide.
3. The learning rate can be used to trade off between learning and forgetting. A higher learning rate learns quickly on the new dataset but causes more forgetting; a lower learning rate retains more of the pretrained model's knowledge but slows down adaptation to the new domain.
4. Config of the pretrained model. We load the model downloaded from the repository earlier.
5. This tells Fast-LLM to load the weights of the pretrained model. If we wanted to use the model's configuration but train from scratch, we could keep the same config and set this to `no`.
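For orientation, here is a rough sketch of what such a fine-tuning config could look like. The key names (`training.export`, `optimizer.learning_rate.base`, `pretrained.format`, `pretrained.model_weights`) and the paths are assumptions based on the quick-start guide, not verbatim from this page; check them against the current Fast-LLM reference.

```yaml
# Hypothetical sketch of a fine-tuning config; key names and paths are assumptions.
training:
  export:
    format: llama                           # save checkpoints in Hugging Face format (note 1)
    interval: 20_000                        # every 20,000 iterations
data:
  datasets:
    training:
      type: file
      path: path/to/dataset_metadata.yaml   # metadata file from Step 4 of the quick-start guide (note 2)
optimizer:
  learning_rate:
    base: 1.0e-05                           # trades off adapting to the new domain against forgetting (note 3)
pretrained:
  format: llama                             # config of the pretrained model downloaded earlier (note 4)
  path: path/to/pretrained_model
  model_weights: yes                        # load the pretrained weights; set to `no` to train from scratch (note 5)
```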
We already saw an example dataset configuration in the [quick-start guide](../quick-start). In this section we are interested in generalizing step 3. For more details on steps 1 and 2, please refer to the quick-start guide or [this example](data-configuration.md).
The section `data.datasets` holds descriptions of the datasets used in training, validation, and testing.

The training and testing phases must have predetermined dataset names: `training` and `testing`, respectively. Each of these phases can have only one dataset.
For validation datasets, the rules are different. There can be as many validation datasets as needed, and their names are arbitrary. In the example above, the dataset name `validation` is chosen for simplicity. The dataset names used for validation, and how each one is applied, are specified in the training config's `evaluations` section.
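As a minimal sketch of this layout (the file paths are placeholders, and the dataset entries use the `file` type shown in the examples below):

```yaml
data:
  datasets:
    training:        # fixed name; exactly one training dataset
      type: file
      path: path/to/training_dataset.yaml
    validation:      # arbitrary name; must also be referenced under training.evaluations
      type: file
      path: path/to/validation_dataset.yaml
    testing:         # fixed name; exactly one testing dataset
      type: file
      path: path/to/testing_dataset.yaml
```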
Adding multiple validation datasets increases flexibility in tracking the accuracy of your trained model. One possible scenario is using a separate validation dataset for each blended training dataset, allowing you to track training progress on each subset separately and observe how the model performs in real time on different subsets of your training data.

Below are examples of how to configure various aspects of training and validation datasets.
## Example 1: Blending multiple datasets

In this example, we have three datasets and want to sample from each of them during training with probabilities 0.70, 0.25 and 0.05. For this, we use the `blended` type, which takes other datasets as arguments:

```yaml
data:
  datasets:
    training:
      type: blended
      datasets:
        - type: file
          path: path/to/dataset_0.yaml
        - type: file
          path: path/to/dataset_1.yaml
        - type: file
          path: path/to/dataset_2.yaml
      weights: [0.70, 0.25, 0.05]   # relative sampling probability of each dataset
```
## Example 2: Pre-shuffled dataset

In this example, we have a large dataset that comes pre-shuffled, so shuffling is unnecessary for the first epoch:

```yaml
data:
  datasets:
    training:
      type: file
      path: path/to/dataset.yaml
  sampling:
    shuffle: skip_first_epoch   # reuse the existing order for the first epoch
```
## Example 3: Disabling shuffling for validation

In this example, we want to disable shuffling entirely, but only for the validation dataset:

```yaml
data:
  datasets:
    training:
      type: file
      path: path/to/training_dataset.yaml
    validation:
      type: sampled
      dataset:
        type: file
        path: path/to/validation_dataset.yaml
      sampling:
        shuffle: disabled
```
## Example 4: Setting dataset seeds

In this example, we have a blend of datasets as in example 1, but we wish to set the sampling seed of each dataset explicitly. For this, each blended dataset is wrapped in a `sampled` dataset that overrides its seed:

```yaml
data:
  datasets:
    training:
      type: blended
      datasets:
        - type: sampled
          dataset:
            type: file
            path: path/to/dataset_0.yaml
          sampling:
            seed: 1234
        # the remaining datasets of the blend are wrapped the same way, each with its own seed
```
!!! note "Default seed"
    In the absence of an explicit seed, Fast-LLM uses a default seed (`data.sampling`'s default) instead, and applies seed shifts to ensure different seeds for each phase and for the various blended datasets.
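For instance, a single default seed could be set once under `data.sampling` (a minimal sketch; the placement of the `seed` field is inferred from the note above), and every phase and blended dataset then derives its own shifted seed from it:

```yaml
data:
  sampling:
    seed: 784569       # default seed; per-phase and per-dataset seeds are derived from it via shifts
  datasets:
    training:
      type: file
      path: path/to/dataset.yaml
```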
## Example 5: Specifying multiple validation datasets

In this example, we show how to specify multiple validation datasets and configure how often each of them is applied, along with its usage attributes, in the `training.evaluations` section.

Please note that the same dataset names must be used in the `training.evaluations` section. If a validation dataset is specified in the `datasets` section but not in `training.evaluations`, it will not be used for validation.
```yaml
training:
  evaluations:
    the_stack:
      iterations: 25
      interval: 50
    fineweb:
      iterations: 25
      interval: 100
data:
  datasets:
    the_stack:
      type: file
      path: path/to/validation_the_stack_dataset.yaml
    fineweb:
      type: file
      path: path/to/validation_fineweb_dataset.yaml
```
## Example 6: Advanced scenario

In this example, we combine everything we learned so far to create a complex scenario, where: