This repository is based on PyTorch 2.1 and PyTorch-Geometric 2.4.
* [Pre-train](#pretraining) ULTRA on your own mixture of graphs.
* Run [evaluation on many datasets](#run-on-many-datasets) sequentially.
* Use the pre-trained checkpoints to run inference and fine-tuning on [your own KGs](#adding-your-own-graph).
* (NEW) Execute complex logical queries on any KG with [UltraQuery](#ultraquery).
Table of contents:
* [Installation](#installation)
* [Pretraining](#pretraining)
* [Datasets](#datasets)
* [Adding custom datasets](#adding-your-own-graph)
* [UltraQuery](#ultraquery)
## Updates
* **Apr 23rd, 2024**: Release of [UltraQuery](#ultraquery) for complex multi-hop logical query answering on _any_ KG (with a new checkpoint and 23 datasets).
* **Jan 15th, 2024**: Accepted at [ICLR 2024](https://openreview.net/forum?id=jVEoydFOl9)!
* **Dec 4th, 2023**: Added a new ULTRA checkpoint `ultra_50g` pre-trained on 50 graphs. Averaged over 16 larger transductive graphs, it delivers 0.389 MRR / 0.549 Hits@10 compared to 0.329 MRR / 0.479 Hits@10 of the `ultra_3g` checkpoint. The inductive performance is still as good! Use this checkpoint for inference on larger graphs.
* **Dec 4th, 2023**: Pre-trained ULTRA models (3g, 4g, 50g) are now also available on the [HuggingFace Hub](https://huggingface.co/collections/mgalkin/ultra-65699bb28369400a5827669d)!
TSV / CSV files are supported by setting a delimiter (e.g., `delimiter = "\t"`) in the class definition.
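As an illustration of that file format (a standalone sketch with made-up triples, not repo code), a tab-delimited triple file parses like this:

```python
import csv
import io

# a hypothetical TSV file of (head, relation, tail) triples
raw = "alice\tknows\tbob\nbob\tlikes\tcarol\n"
triples = [tuple(row) for row in csv.reader(io.StringIO(raw), delimiter="\t")]
print(triples)  # [('alice', 'knows', 'bob'), ('bob', 'likes', 'carol')]
```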
After adding your own dataset, you can immediately run 0-shot inference or fine-tuning of any ULTRA checkpoint.
## UltraQuery ##
You can now run complex logical queries on any KG with UltraQuery, an inductive query answering approach that uses any Ultra checkpoint with non-parametric fuzzy logic operators. Read more in the [new preprint](https://arxiv.org/abs/2404.07198).
Similar to ULTRA, UltraQuery transfers to any KG in a zero-shot fashion and sets several new SOTA results on a variety of query answering benchmarks.
### Checkpoint ###
Any existing ULTRA checkpoint is compatible with UltraQuery, but we also ship a newly trained `ultraquery.pth` checkpoint in the `ckpts` folder.
* A new `ultraquery.pth` checkpoint trained on complex queries from the `FB15k237LogicalQuery` dataset for 40,000 steps (config in `config/ultraquery/pretrain.yaml`). It uses the same ULTRA architecture but is tuned for the multi-source propagation needed in complex queries, so no score thresholding is required.
* You can use any existing ULTRA checkpoint (`3g` / `4g` / `50g`) for starters, but don't forget to set the `--threshold` argument to 0.8 or higher (depending on the dataset). Score thresholding is required because those models were trained on simple one-hop link prediction, which leads to the multi-source propagation issue; read Section 4.1 in the [new preprint](https://arxiv.org/abs/2404.07198) for more details.
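The score thresholding mentioned above amounts to keeping only entities whose fuzzy score clears the cutoff; a minimal sketch with made-up scores and entity names:

```python
# hypothetical fuzzy answer scores for candidate entities
scores = {"alice": 0.95, "bob": 0.42, "carol": 0.81}
threshold = 0.8  # the recommended floor for vanilla ULTRA checkpoints

# keep only entities scoring at or above the threshold
answers = sorted(e for e, s in scores.items() if s >= threshold)
print(answers)  # ['alice', 'carol']
```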
### Performance ###
The numbers reported in the preprint were obtained with a model trained with TorchDrug. In this PyG implementation, we managed to get even better performance across the board with the `ultraquery.pth` checkpoint.
`EPFO` is the averaged performance over 9 queries with relation projection, intersection, and union. `Neg` is the averaged performance over 5 queries with negation.
<table>
  <tr>
    <th rowspan=2>Model</th>
    <th colspan=4>Total Average (23 datasets)</th>
    <th colspan=4>Transductive (3 datasets)</th>
    <th colspan=4>Inductive (e) (9 graphs)</th>
    <th colspan=4>Inductive (e,r) (11 graphs)</th>
  </tr>
  <tr>
    <th>EPFO MRR</th>
    <th>EPFO Hits@10</th>
    <th>Neg MRR</th>
    <th>Neg Hits@10</th>
    <th>EPFO MRR</th>
    <th>EPFO Hits@10</th>
    <th>Neg MRR</th>
    <th>Neg Hits@10</th>
    <th>EPFO MRR</th>
    <th>EPFO Hits@10</th>
    <th>Neg MRR</th>
    <th>Neg Hits@10</th>
    <th>EPFO MRR</th>
    <th>EPFO Hits@10</th>
    <th>Neg MRR</th>
    <th>Neg Hits@10</th>
  </tr>
  <tr>
    <th>UltraQuery Paper</th>
    <td align="center">0.301</td>
    <td align="center">0.428</td>
    <td align="center">0.152</td>
    <td align="center">0.264</td>
    <td align="center">0.335</td>
    <td align="center">0.467</td>
    <td align="center">0.132</td>
    <td align="center">0.260</td>
    <td align="center">0.321</td>
    <td align="center">0.479</td>
    <td align="center">0.156</td>
    <td align="center">0.291</td>
    <td align="center">0.275</td>
    <td align="center">0.375</td>
    <td align="center">0.153</td>
    <td align="center">0.242</td>
  </tr>
  <tr>
    <th>UltraQuery PyG</th>
    <td align="center">0.309</td>
    <td align="center">0.432</td>
    <td align="center">0.178</td>
    <td align="center">0.286</td>
    <td align="center">0.411</td>
    <td align="center">0.518</td>
    <td align="center">0.240</td>
    <td align="center">0.352</td>
    <td align="center">0.312</td>
    <td align="center">0.468</td>
    <td align="center">0.139</td>
    <td align="center">0.262</td>
    <td align="center">0.280</td>
    <td align="center">0.380</td>
    <td align="center">0.193</td>
    <td align="center">0.288</td>
  </tr>
</table>
In particular, we reach SOTA on FB15k queries (0.764 MRR & 0.834 Hits@10 on EPFO; 0.567 MRR & 0.725 Hits@10 on negation) compared to much larger and heavier baselines (such as QTO).
### Run Inference ###
The running format is similar to the KG completion pipeline: use `run_query.py` and `run_query_many.py` to run a single experiment on one dataset or on a sequence of datasets, respectively.
Due to the size of the datasets and query complexity, it is recommended to run inference on a GPU.
An example command for running transductive inference with UltraQuery on FB15k237 queries uses the following arguments:
* `--threshold`: set to 0.0 when using the main UltraQuery checkpoint `ultraquery.pth`, or to 0.8 and higher when using vanilla ULTRA checkpoints
* `--qe_ckpt`: path to the UltraQuery checkpoint; set to `null` if you want to run vanilla ULTRA checkpoints
* `--ultra_ckpt`: path to an original ULTRA checkpoint; set to `null` if you want to run the UltraQuery checkpoint
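A sketch of what such a command might look like; the script location, config path, and dataset name are assumptions here, so check `config/ultraquery/` in your checkout for the exact files:

```shell
python script/run_query.py -c config/ultraquery/transductive.yaml \
  --dataset FB15k237LogicalQuery --gpus [0] \
  --qe_ckpt ckpts/ultraquery.pth --ultra_ckpt null --threshold 0.0
```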
### Datasets ###
23 new datasets are available in `datasets_query.py`; they will be automatically downloaded upon the first launch.
All datasets include 14 standard query types (`1p`, `2p`, `3p`, `2i`, `3i`, `ip`, `pi`, `2u-DNF`, `up-DNF`, `2in`, `3in`, `inp`, `pin`, `pni`).
The standard protocol is training on the 10 patterns without unions and without `ip`, `pi` queries (`1p`, `2p`, `3p`, `2i`, `3i`, `2in`, `3in`, `inp`, `pin`, `pni`), and running evaluation on all 14 patterns including `2u`, `up`, `ip`, `pi`.
All are the [BetaE](https://arxiv.org/abs/2010.11465) versions of the datasets, including queries with negation and limiting the maximum number of answers to 100.
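The train/eval split of query types described above can be sketched as follows (the list literals are copied from the protocol; the variable names are illustrative, not repo code):

```python
# all 14 query patterns used at evaluation time
ALL_TYPES = ["1p", "2p", "3p", "2i", "3i", "ip", "pi", "2u-DNF", "up-DNF",
             "2in", "3in", "inp", "pin", "pni"]
# patterns held out from training: unions and the ip / pi compositions
EVAL_ONLY = {"ip", "pi", "2u-DNF", "up-DNF"}

TRAIN_TYPES = [q for q in ALL_TYPES if q not in EVAL_ONLY]
print(len(TRAIN_TYPES), len(ALL_TYPES))  # 10 14
```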
9 inductive datasets extracted from FB15k237, first proposed in [Inductive Logical Query Answering in Knowledge Graphs](https://openreview.net/forum?id=-vXEN5rIABY) (NeurIPS 2022).
`InductiveFB15k237Query` comes in 9 versions, where the number indicates how large the inference graph is compared to the train graph (in the number of nodes).
In addition, we include the `InductiveFB15k237QueryExtendedEval` dataset with the same versions. These are inference-only datasets that measure the _faithfulness_ of complex query answering approaches. In each split, as validation and test graphs extend the train graphs with more nodes and edges, training queries now have more true answers achievable by simple edge traversal (no missing link prediction required); the task is to measure how well CLQA models can retrieve new easy answers to training queries on larger unseen graphs.
11 new inductive query datasets (WikiTopics-CLQA) that we built specifically for testing UltraQuery.
The queries were sampled from the WikiTopics splits proposed in [Double Equivariance for Inductive Link Prediction for Both New Nodes and New Relation Types](https://arxiv.org/abs/2302.01313).
New metrics include `auroc`, `spearmanr`, and `mape`. We don't support Mean Rank (`mr`) for complex queries. If you ever see `nan` in one of those metrics, consider reducing the batch size, as those metrics are computed with variadic functions that might be numerically unstable on large batches.
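As a reminder of what one of these metrics measures, `mape` over predicted answer-set cardinalities can be sketched in isolation (standalone code, not the repo's variadic implementation):

```python
def mape(pred, target):
    """Mean absolute percentage error between predicted and true answer counts."""
    return sum(abs(p - t) / t for p, t in zip(pred, target)) / len(pred)

# two queries: predicted 90 and 110 answers, both actually have 100
print(mape([90, 110], [100, 100]))  # 0.1
```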
## Citation ##
If you find this codebase useful in your research, please cite the original papers.
The main ULTRA paper:
```bibtex
@inproceedings{galkin2023ultra,
  title={Towards Foundation Models for Knowledge Graph Reasoning},
  author={Mikhail Galkin and Xinyu Yuan and Hesham Mostafa and Jian Tang and Zhaocheng Zhu},
  booktitle={The Twelfth International Conference on Learning Representations},
  year={2024},
  url={https://openreview.net/forum?id=jVEoydFOl9}
}
```
UltraQuery:
```bibtex
@article{galkin2024ultraquery,
  title={Zero-shot Logical Query Reasoning on any Knowledge Graph},
  author={Mikhail Galkin and Jincheng Zhou and Bruno Ribeiro and Jian Tang and Zhaocheng Zhu},
  year={2024},
  eprint={2404.07198}
}
```