Faithful Embeddings for EL++ Knowledge Bases
Xiong, Bo, Potyka, Nico, Tran, Trung-Kien, Nayyeri, Mojtaba and Staab, Steffen (2022) Faithful Embeddings for EL++ Knowledge Bases. 21st International Semantic Web Conference, Virtual, Berlin, Germany, 25 - 27 Oct 2022. pp. 22-38. (doi:10.1007/978-3-031-19433-7_2).
Record type: Conference or Workshop Item (Paper)
Abstract
Recently, increasing efforts have been put into learning continuous representations for symbolic knowledge bases (KBs). However, these approaches either only embed the data-level knowledge (ABox) or suffer from inherent limitations when dealing with concept-level knowledge (TBox), i.e., they cannot faithfully model the logical structure present in the KBs. We present BoxEL, a geometric KB embedding approach that allows for better capturing the logical structure (i.e., ABox and TBox axioms) in the description logic EL++. BoxEL models concepts in a KB as axis-parallel boxes that are suitable for modeling concept intersection, entities as points inside boxes, and relations between concepts/entities as affine transformations. We show theoretical guarantees (soundness) of BoxEL for preserving logical structure: a learned BoxEL embedding with loss 0 is a (logical) model of the KB. Experimental results on (plausible) subsumption reasoning and a real-world application, protein-protein interaction prediction, show that BoxEL outperforms traditional knowledge graph embedding methods as well as state-of-the-art EL++ embedding approaches.
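The abstract describes the key geometric ideas: concepts as axis-parallel boxes, entities as points inside boxes, and relations as affine transformations. The following minimal NumPy sketch only illustrates how such a geometry can express ABox membership, concept subsumption, and relations; it is not the authors' BoxEL implementation or its loss functions, and the names Box, inclusion_score, and apply_relation are illustrative assumptions.

```python
# Minimal, hypothetical sketch of the geometric ideas named in the abstract.
# NOT the authors' BoxEL code; class and function names are illustrative only.
import numpy as np

class Box:
    """Axis-parallel box given by lower and upper corner vectors."""
    def __init__(self, lower, upper):
        self.lower = np.asarray(lower, dtype=float)
        self.upper = np.asarray(upper, dtype=float)

    def volume(self):
        # Product of per-dimension side lengths (0 if the box is empty).
        return float(np.prod(np.clip(self.upper - self.lower, 0.0, None)))

    def intersect(self, other):
        # The intersection of two axis-parallel boxes is again an axis-parallel box,
        # which is why boxes are suitable for modeling concept intersection.
        return Box(np.maximum(self.lower, other.lower),
                   np.minimum(self.upper, other.upper))

    def contains_point(self, point):
        # ABox membership C(a): the entity point must lie inside the concept box.
        p = np.asarray(point, dtype=float)
        return bool(np.all(p >= self.lower) and np.all(p <= self.upper))


def inclusion_score(c, d):
    """Soft score for a subsumption axiom C ⊑ D: fraction of C's volume inside D."""
    vc = c.volume()
    return c.intersect(d).volume() / vc if vc > 0 else 0.0


def apply_relation(box, scale, shift):
    """Relation modeled as an affine map (diagonal scaling plus translation) of a box."""
    s, t = np.asarray(scale, float), np.asarray(shift, float)
    return Box(box.lower * s + t, box.upper * s + t)


if __name__ == "__main__":
    parent = Box([0.0, 0.0], [4.0, 4.0])       # e.g. concept "Person"
    child = Box([1.0, 1.0], [2.0, 2.0])        # e.g. concept "Parent"
    print(inclusion_score(child, parent))      # 1.0 -> Parent ⊑ Person holds geometrically
    print(parent.contains_point([3.0, 1.5]))   # True -> Person(alice) holds geometrically
    moved = apply_relation(child, scale=[1.0, 1.0], shift=[5.0, 0.0])
    print(inclusion_score(moved, parent))      # 0.0 -> translated box leaves the Person box
```

An inclusion_score of 1.0 corresponds to one concept's box lying entirely inside the other's, the geometric counterpart of a subsumption axiom being satisfied, which is the situation the abstract's soundness statement refers to when the loss reaches 0.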
Text: Faithful Embeddings for EL++ Knowledge Bases - Accepted Manuscript
More information
Accepted/In Press date: 7 July 2022
Published date: 23 October 2022
Additional Information: arXiv:2201.09919v2
Venue - Dates: 21st International Semantic Web Conference, Virtual, Berlin, Germany, 2022-10-25 - 2022-10-27
Identifiers
Local EPrints ID: 474447
URI: http://eprints.soton.ac.uk/id/eprint/474447
PURE UUID: 313424de-25bb-4d61-8a2b-a5ead71e0786
Catalogue record
Date deposited: 22 Feb 2023 17:54
Last modified: 17 Mar 2024 03:38
Contributors
Author: Bo Xiong
Author: Nico Potyka
Author: Trung-Kien Tran
Author: Mojtaba Nayyeri
Author: Steffen Staab