Scientific Language Models for Biomedical Knowledge Base Completion: An Empirical Study

Rahul Nadkarni, David Wadden, Iz Beltagy, Noah Smith, Hannaneh Hajishirzi, Tom Hope

doi:10.24432/C5QC7V

TL;DR

We apply domain-specific pretrained language models to biomedical knowledge graph completion and explore how to integrate them with, or use them to augment, knowledge graph embeddings to improve performance.

Abstract

Biomedical knowledge graphs (KGs) hold rich information on entities such as diseases, drugs, and genes. Predicting missing links in these graphs can boost many important applications, such as drug design and repurposing. Recent work has shown that general-domain language models (LMs) can serve as "soft" KGs, and that they can be fine-tuned for the task of KG completion. In this work, we study scientific LMs for KG completion, exploring whether we can tap into their latent knowledge to enhance biomedical link prediction. We evaluate several domain-specific LMs, fine-tuning them on datasets centered on drugs and diseases that we represent as KGs and enrich with textual entity descriptions. We integrate the LM-based models with KG embedding models, using a router method that learns to assign each input example to either type of model and provides a substantial boost in performance. Finally, we demonstrate the advantage of LM-based models in the inductive setting with novel scientific entities.
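To make the setup concrete, here is a minimal sketch (in PyTorch) of the two model families and the router, assuming a cross-encoder formulation for the LM scorer (scoring a triple from its textual form, as in KG-BERT-style models) and a ComplEx-style scorer for the KG embeddings. All names (lm_score, ComplExScorer, Router) and the router's input features are illustrative assumptions, not the paper's actual implementation.

import torch
import torch.nn as nn
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# One of the scientific LMs the paper studies (SciBERT); any BERT-style
# checkpoint fed entity descriptions would fit this sketch.
MODEL_NAME = "allenai/scibert_scivocab_uncased"

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
# A single-logit head turns the LM into a cross-encoder that scores a triple.
lm = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME, num_labels=1)

def lm_score(head_desc: str, relation: str, tail_desc: str) -> float:
    """Score a (head, relation, tail) triple from its textual form."""
    inputs = tokenizer(head_desc, relation + " " + tail_desc,
                       return_tensors="pt", truncation=True, max_length=256)
    with torch.no_grad():
        return lm(**inputs).logits.item()

class ComplExScorer(nn.Module):
    """Structure-only KG embedding scorer: Re(<e_h, w_r, conj(e_t)>)."""
    def __init__(self, n_entities, n_relations, dim=200):
        super().__init__()
        self.ent_re = nn.Embedding(n_entities, dim)
        self.ent_im = nn.Embedding(n_entities, dim)
        self.rel_re = nn.Embedding(n_relations, dim)
        self.rel_im = nn.Embedding(n_relations, dim)

    def forward(self, h, r, t):
        hr, hi = self.ent_re(h), self.ent_im(h)
        rr, ri = self.rel_re(r), self.rel_im(r)
        tr, ti = self.ent_re(t), self.ent_im(t)
        # Real part of the trilinear product with the conjugated tail embedding.
        return (hr * rr * tr + hi * rr * ti + hr * ri * ti - hi * ri * tr).sum(-1)

class Router(nn.Module):
    """Binary classifier that assigns each query to the LM or the KGE model."""
    def __init__(self, n_features=2):
        super().__init__()
        self.clf = nn.Linear(n_features, 1)

    def forward(self, features):
        # Probability that the KGE model should handle this example.
        return torch.sigmoid(self.clf(features))

def routed_score(features, kge_s, lm_s, router):
    """Dispatch one query to whichever model the router prefers."""
    with torch.no_grad():
        return kge_s if router(features).item() > 0.5 else lm_s

In practice the router would be trained on held-out queries to predict which model performs better on each input; because the LM scores triples from their textual descriptions rather than from learned entity embeddings, it can also handle entities unseen during KG embedding training, which is the inductive advantage the abstract highlights.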

Citation

@inproceedings{nadkarni2021scientific,
  title={Scientific Language Models for Biomedical Knowledge Base Completion: An Empirical Study},
  author={Rahul Nadkarni and David Wadden and Iz Beltagy and Noah Smith and Hannaneh Hajishirzi and Tom Hope},
  booktitle={3rd Conference on Automated Knowledge Base Construction},
  year={2021},
  url={https://openreview.net/forum?id=4Exq_UvWKY8},
  doi={10.24432/C5QC7V}
}