Paper link: https://arxiv.org/pdf/2101.06829.pdf

## Installation

We use the following packages; we recommend using a virtual environment (e.g. conda):

- Python 3.7
- torch == 1.5.0
- transformers == 2.11.0

Also, please run:

```pip install -r requirements.txt```

Alternatively, the code can run inside a docker container:

```
docker build -t <image_name>:<tag> -f Dockerfile .
```
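For reference, here is a minimal setup sketch assuming conda is available; the environment name is an arbitrary choice, not part of this repo:

```
# Create and activate a Python 3.7 environment (name is arbitrary)
conda create -n tod python=3.7
conda activate tod
# Install the pinned versions listed above, then the remaining requirements
pip install torch==1.5.0 transformers==2.11.0
pip install -r requirements.txt
```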
## Usage

This section explains the steps to preprocess the MultiWOZ dataset and train the model.
Each dialogue turn will be represented as a sequence, which contains the previous user and system turns (the context), belief states, actions, and the delexicalized response.
### DST training:

This step trains the model to predict belief states:

```
train_dst.sh $GPU gpt2 $GPT2_TYPE $BATCH
```
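For example, a call with illustrative argument values (the GPU id, the GPT-2 variant passed as ```$GPT2_TYPE```, and the batch size below are assumptions; adjust them to your hardware):

```
# All argument values here are illustrative, not prescribed by this README
train_dst.sh 0 gpt2 gpt2 8
```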
For this task, we include ```none``` slot values in the sequence.
We observed that this improves SimpleTOD performance on DST by reducing the false positive rate.

```
<|endoftext|> <|context|> <|user|> am looking for a place to to stay that has cheap price range it should be in a type of hotel <|endofcontext|>
<|belief|> hotel name not mentioned , hotel area not mentioned , hotel parking not mentioned , hotel pricerange cheap , hotel stars not mentioned , hotel internet not mentioned , hotel type hotel <|endofbelief|> <|endoftext|>
```
### End-to-End training:

In this step, we train SimpleTOD on the sequence of context + belief + action + delexicalized response.
Compared to the DST task, we do not include ```none``` slot values, because of the sequence length limitation of GPT-2.

```
train_end2end.sh $GPU gpt2 $GPT2_TYPE $BATCH
```
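For example, with the same illustrative values as in DST training:

```
# Argument values are illustrative only
train_end2end.sh 0 gpt2 gpt2 8
```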
### Generation:

This script generates SimpleTOD beliefs, actions, and responses.
Generation proceeds dialogue by dialogue: for each turn, it creates the context and saves the generated belief, action, and response.
It will save the model output in a json file ```MODEL_OUTPUT```, which also contains all dialogues with the ground-truth user and system responses.

- In order to use DB search during generation, set ```--use_db_search``` (this will use *oracle* DB search results)
- In order to use DB search dynamically, set ```--use_db_search``` and ```--use_dynamic_db```
- To use oracle beliefs and actions, simply set ```--use_oracle_belief``` and ```--use_oracle_action```
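The generation command itself did not survive in this copy of the README, so the entry point below is a hypothetical name; only the flags are documented above:

```
# generate.py and the $CHECKPOINT argument are assumptions; the flags come from this README
python generate.py $CHECKPOINT --use_db_search --use_dynamic_db
```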
### Evaluation

MultiWOZ evaluation contains two parts: Dialogue State Tracking (DST) and End-to-End.

#### DST evaluation
In order to compute joint accuracy, simply run the following script on the generated ```MODEL_OUTPUT``` file. It uses the generated belief states to compute the metric, without any label cleaning:

```
python compute_joint_acc.py $MODEL_OUTPUT
```
There are two types of label cleaning that can be used when computing joint accuracy.

- To use the default label cleaning suggested by the MultiWOZ authors, set ```--default_cleaning``` (for more details, please refer to [MultiWOZ](https://github.com/budzianowski/multiwoz) FAQ 5)
- We also found other types of noisy annotation; please refer to the paper for more details. Here, we provide an option to compute joint accuracy while fixing Type 2 noisy annotations (where one or more slots are not labeled in some turns) by setting ```--type2_cleaning```
- The complete list of Type 2 noisy annotations is [here](noisy_annotations/type_2_noisy_annotations.json). For more details on noisy annotations in the MultiWOZ dataset, please refer to the paper.
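For example, to apply both cleanings in one run (combining the flags this way is our assumption; each can also be used on its own):

```
python compute_joint_acc.py $MODEL_OUTPUT --default_cleaning --type2_cleaning
```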
#### End-to-End evaluation

In order to compute inform/success/BLEU, simply run the following script. It loads the generated belief states and responses, then computes the metrics:

```
python evaluate_multiwoz.py $MODEL_OUTPUT
```
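For instance, with a concrete (illustrative) path in place of ```$MODEL_OUTPUT```:

```
# outputs/model_output.json is a placeholder for your generated file
python evaluate_multiwoz.py outputs/model_output.json
```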
### Demo

In order to test the model in a real conversation with a human, we provide a simple script where the user can input text in a multi-turn setting and see the responses from SimpleTOD.
It will generate lexicalized responses and belief states at each turn. For more information, please read the blog.

```
python demo.py $CHECKPOINT $DECODING
```
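A hedged example call; both argument values are assumptions, since the accepted checkpoints and decoding strategies are not listed here:

```
# Adjust to your trained checkpoint directory and preferred decoding strategy
python demo.py checkpoints/end2end_gpt2 greedy
```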
## Citation

```
@misc{he2021joint,
  title={Joint Energy-based Model Training for Better Calibrated Natural Language Understanding Models},
  author={Tianxing He and Bryan McCann and Caiming Xiong and Ehsan Hosseini-Asl},
  year={2021},
  eprint={2101.06829},
  archivePrefix={arXiv},
  primaryClass={cs.CL}
}

@article{hosseini2020simple,
  title={A simple language model for task-oriented dialogue},
  author={Hosseini-Asl, Ehsan and McCann, Bryan and Wu, Chien-Sheng and Yavuz, Semih and Socher, Richard},
  journal={arXiv preprint arXiv:2005.00796},
  year={2020}
}
```
## License

The code is released under the BSD-3 License. Please see [LICENSE](LICENSE.md) for details.
Copy ```/export/share/tianxing-he/lm_finetune_gensave_20200805``` to ```./exps/lm_finetune```.
These will provide partial2full noise samples with mask rate 0.2 or 0.4 for SST-2, MNLI, QQP, and QNLI.
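As a sketch of that copy step (assuming you have access to the shared export path above):

```
# Create the experiments directory, then copy the pretrained/noise-sample archive
mkdir -p ./exps
cp -r /export/share/tianxing-he/lm_finetune_gensave_20200805 ./exps/lm_finetune
```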