
Commit 3babebd

A newly proposed knowledge-based WSD approach, KEF (#430)
Co-authored-by: Sebastian Ruder <[email protected]>
1 parent 4ed3ff7 commit 3babebd


english/word_sense_disambiguation.md

Lines changed: 7 additions & 4 deletions
@@ -47,10 +47,11 @@ The main evaluation measure is F1-score.
 | Model | All | Senseval 2 |Senseval 3 |SemEval 2007 |SemEval 2013 |SemEval 2015 | Paper / Source |
 | ------------- | :-----: | :-----: | :-----: | :-----: | :-----: | :-----: | --- |
 |WN 1st sense baseline | 65.2 | 66.8 | 66.2 | 55.2 | 63.0 | 67.8 | [[1]](http://aclweb.org/anthology/E/E17/E17-1010.pdf) |
-|Babelfy | 65.5 | 67.0 | 63.5 | 51.6 | 66.4 | **70.3** | [[8]](http://aclweb.org/anthology/Q14-1019) |
+|Babelfy | 65.5 | 67.0 | 63.5 | 51.6 | 66.4 | 70.3 | [[8]](http://aclweb.org/anthology/Q14-1019) |
 |UKB<sub>ppr_w2w-nf</sub> | 57.5 | 64.2 | 54.8 | 40.0 | 64.5 | 64.5 | [[9]](https://www.mitpressjournals.org/doi/full/10.1162/COLI_a_00164) [[12]](http://aclweb.org/anthology/W18-2505) |
-|UKB<sub>ppr_w2w</sub> | **67.3** | 68.8 | 66.1 | 53.0 | **68.8** | **70.3** | [[9]](https://www.mitpressjournals.org/doi/full/10.1162/COLI_a_00164) [[12]](http://aclweb.org/anthology/W18-2505) |
-|WSD-TM | 66.9 | **69.0** | **66.9** | **55.6** | 65.3 | 69.6 | [[10]](https://arxiv.org/pdf/1801.01900.pdf) |
+|UKB<sub>ppr_w2w</sub> | 67.3 | 68.8 | 66.1 | 53.0 | **68.8** | 70.3 | [[9]](https://www.mitpressjournals.org/doi/full/10.1162/COLI_a_00164) [[12]](http://aclweb.org/anthology/W18-2505) |
+|WSD-TM | 66.9 | 69.0 | **66.9** | 55.6 | 65.3 | 69.6 | [[10]](https://arxiv.org/pdf/1801.01900.pdf) |
+|KEF | **68.0** | **69.6** | 66.1 | **56.9** | 68.4 | **72.3** | [[16]](https://doi.org/10.1016/j.knosys.2019.105030) [[code]](https://github.com/lwmlyy/Knowledge-based-WSD)|
 
 Note: 'All' is the concatenation of all datasets, as described in [10] and [12]. The scores of [6,7] and [9] are not taken from the original papers but from the results of the implementations of [11] and [12], respectively.
 
@@ -84,13 +85,15 @@ Note: 'All' is the concatenation of all datasets, as described in [10] and [12].
 
 [15] [Sense Vocabulary Compression through the Semantic Knowledge of WordNet for Neural Word Sense Disambiguation](https://arxiv.org/abs/1905.05677)
 
+[16] [Word Sense Disambiguation: A Comprehensive Knowledge Exploitation Framework](https://doi.org/10.1016/j.knosys.2019.105030)
+
+
 ## WSD Lexical Sample task:
 
 Above task is called All-words WSD because the systems attempt to disambiguate all of the words in a document, while there is another task which is called
 Lexical Sample task. In this task a number of words are selected and the system should only disambiguate the occurrences of these words in a test set.
 Iaccobacci et, al. (2016) provide the state-of-the-art results until 2016 [1]. Main tasks include Senseval 2, Senseval 3 and SemEval 2007. Evaluation metrics are as same as All words task.
 
-
 ### Lexical Sample results:
 
 | Model | Senseval 2 |Senseval 3 |SemEval 2007 | Paper / Source |
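
The file quoted in these hunks notes that the main evaluation measure for both the All-words and Lexical Sample leaderboards is F1-score. As a hedged illustration only (this is not the official scorer of the evaluation framework, and the instance IDs, sense keys, and the `wsd_f1` helper below are hypothetical), a minimal micro-averaged F1 over gold and predicted sense keys could look like this:

```python
from __future__ import annotations

# Minimal sketch of the F1-score used on these WSD leaderboards. This is NOT
# the official scorer of the evaluation framework; the instance IDs and sense
# keys below are hypothetical examples for illustration only.
# `gold` maps each test instance to its set of acceptable sense keys;
# `pred` maps instances to a single predicted key and may omit skipped ones.

def wsd_f1(gold: dict[str, set[str]], pred: dict[str, str]) -> float:
    if not pred or not gold:
        return 0.0
    correct = sum(1 for iid, key in pred.items() if key in gold.get(iid, set()))
    precision = correct / len(pred)   # correct / attempted instances
    recall = correct / len(gold)      # correct / all gold instances
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

if __name__ == "__main__":
    gold = {
        "d000.s000.t000": {"long%3:00:02::"},
        "d000.s000.t001": {"be%2:42:03::"},
    }
    pred = {"d000.s000.t000": "long%3:00:02::"}  # second instance unanswered
    print(f"F1 = {wsd_f1(gold, pred):.3f}")      # P = 1.0, R = 0.5 -> F1 = 0.667
```

When a system predicts exactly one sense for every instance, precision equals recall, so the reported F1 coincides with accuracy; it falls below accuracy only when some instances are left unanswered.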
