Commit f095166

Add new results on English constituency parsing, English relationship extraction, Arabic language model (#609)
* Add new results on constituency parsing
* Add new results on Relationship Extraction
* Add new Arabic language model
1 parent 2beeccf commit f095166

File tree

3 files changed: +5 −2 lines changed


arabic/language_modeling.md (+1)

@@ -5,5 +5,6 @@ Language modeling is the task of predicting the next word or character in a docu
 | Model | Paper / Source | Code |
 | ------------- | :-----:| :-----: |
+| Zen 2.0: Continue training and adaption for n-gram enhanced text encoders | [ZEN](https://arxiv.org/abs/2105.01279) | [Official](https://github.com/sinovation/ZEN2) |
 |hULMonA: The Universal Language Model in Arabic|[hULMonA](https://aclanthology.org/W19-4608/) | [Official](https://github.com/aub-mind/hULMonA) |
 |AraBERT: Transformer-based Model for Arabic Language Understanding|[AraBERT](https://arxiv.org/abs/2003.00104) | [Official](https://github.com/aub-mind/araBERT) |

english/constituency_parsing.md (+1)

@@ -31,6 +31,7 @@ For a comparison of single models trained only on WSJ, refer to [Kitaev and Klei
 | Model | F1 score | Paper / Source | Code |
 | ---------------------------------------------------------------------------------- | :------: | --------------------------------------------------------------------------------------------------------------------------------- | ----------------------------------------------------- |
+| Span Attention + XLNet (Tian et al., 2020) | 96.40 | [Improving Constituency Parsing with Span Attention](https://aclanthology.org/2020.findings-emnlp.153/) | [Official](https://github.com/cuhksz-nlp/SAPar) |
 | Label Attention Layer + HPSG + XLNet (Mrini et al., 2020) | 96.38 | [Rethinking Self-Attention: Towards Interpretability for Neural Parsing](https://www.aclweb.org/anthology/2020.findings-emnlp.65.pdf) | [Official](https://github.com/KhalilMrini/LAL-Parser) |
 | Attach-Juxtapose Parser + XLNet (Yang and Deng, 2020) | 96.34 | [Strongly Incremental Constituency Parsing with Graph Neural Networks](https://arxiv.org/abs/2010.14568) | [Official](https://github.com/princeton-vl/attach-juxtapose-parser) |
 | Head-Driven Phrase Structure Grammar Parsing (Joint) + XLNet (Zhou and Zhao, 2019) | 96.33 | [Head-Driven Phrase Structure Grammar Parsing on Penn Treebank](https://arxiv.org/pdf/1907.02684.pdf) | |

english/relationship_extraction.md (+3 −2)

@@ -70,8 +70,9 @@ reported here are the highest achieved by the model using any external resources
 | Model | F1 | Paper / Source | Code |
 | -------------------------------------- | ----- | --------------- | -------------- |
 | *BERT-based Models* |
-| Matching-the-Blanks (Baldini Soares et al., 2019) | **89.5** | [Matching the Blanks: Distributional Similarity for Relation Learning](https://www.aclweb.org/anthology/P19-1279) |
-| R-BERT (Wu et al. 2019) | **89.25** | [Enriching Pre-trained Language Model with Entity Information for Relation Classification](https://arxiv.org/abs/1905.08284) | [mickeystroller's Reimplementation](https://github.com/mickeystroller/R-BERT)
+| A-GCN (Tian et al., 2021) | **89.85** | [Dependency-driven Relation Extraction with Attentive Graph Convolutional Networks](https://aclanthology.org/2021.acl-long.344/) | [Official](https://github.com/cuhksz-nlp/RE-AGCN) |
+| Matching-the-Blanks (Baldini Soares et al., 2019) | 89.5 | [Matching the Blanks: Distributional Similarity for Relation Learning](https://www.aclweb.org/anthology/P19-1279) |
+| R-BERT (Wu et al. 2019) | 89.25 | [Enriching Pre-trained Language Model with Entity Information for Relation Classification](https://arxiv.org/abs/1905.08284) | [mickeystroller's Reimplementation](https://github.com/mickeystroller/R-BERT)
 | *CNN-based Models* |
 | Multi-Attention CNN (Wang et al. 2016) | **88.0** | [Relation Classification via Multi-Level Attention CNNs](http://aclweb.org/anthology/P16-1123) | [lawlietAi's Reimplementation](https://github.com/lawlietAi/relation-classification-via-attention-model) |
 | Attention CNN (Huang and Y Shen, 2016) | 84.3<br>85.9<sup>[\*](#footnote)</sup> | [Attention-Based Convolutional Neural Network for Semantic Relation Extraction](http://www.aclweb.org/anthology/C16-1238) |
