Note: 'All' is the concatenation of all datasets, as described in [10] and [12]. The scores of [6,7] and [9] are not taken from the original papers but from the results of the implementations of [11] and [12], respectively.
[15][Sense Vocabulary Compression through the Semantic Knowledge of WordNet for Neural Word Sense Disambiguation](https://arxiv.org/abs/1905.05677)
[16][Word Sense Disambiguation: A Comprehensive Knowledge Exploitation Framework](https://doi.org/10.1016/j.knosys.2019.105030)
## WSD Lexical Sample task:
The task above is called All-words WSD because systems attempt to disambiguate all of the words in a document. There is another task, called the Lexical Sample task, in which a fixed set of target words is selected and the system only has to disambiguate the occurrences of these words in a test set.
Iacobacci et al. (2016) report the state-of-the-art results up to 2016 [1]. The main evaluation datasets are Senseval 2, Senseval 3, and SemEval 2007, and the evaluation metrics are the same as in the All-words task.
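
Both tasks are typically scored with the same precision/recall/F1 scheme used by the Senseval/SemEval scorers: precision counts correct answers over the instances a system actually attempted, while recall counts them over all gold instances, so skipping hard instances trades recall for precision. Below is a minimal sketch of this scoring; the instance IDs and sense keys are hypothetical, and real gold files may list several acceptable sense keys per instance.

```python
def score(gold: dict, predictions: dict) -> tuple:
    """gold: instance id -> set of acceptable sense keys;
    predictions: instance id -> predicted sense key
    (instances the system skipped are simply absent)."""
    correct = sum(1 for i, sense in predictions.items()
                  if sense in gold.get(i, set()))
    precision = correct / len(predictions) if predictions else 0.0
    recall = correct / len(gold) if gold else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

# Hypothetical example: three target-word instances, one skipped.
gold = {
    "art.n.01": {"art%1:06:00::"},
    "art.n.02": {"art%1:09:00::"},
    "bank.n.01": {"bank%1:17:01::"},
}
predictions = {
    "art.n.01": "art%1:06:00::",
    "bank.n.01": "bank%1:14:00::",  # wrong sense
}

p, r, f1 = score(gold, predictions)
print(f"P={p:.3f} R={r:.3f} F1={f1:.3f}")  # P=0.500 R=0.333 F1=0.400
```

When a system answers every instance, precision, recall, and F1 coincide, which is why many papers report a single F1 (or accuracy) number.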
### Lexical Sample results:
| Model | Senseval 2 | Senseval 3 | SemEval 2007 | Paper / Source |
| --- | --- | --- | --- | --- |