Jérôme Euzenat, Revision in networks of ontologies, Artificial Intelligence 228:195-216, 2015
Networks of ontologies are made of a collection of logic theories, called ontologies, related by alignments. They arise naturally in distributed contexts in which theories are developed and maintained independently, such as the semantic web. In networks of ontologies, inconsistency can come from two different sources: local inconsistency in a particular ontology or alignment, and global inconsistency between them. Belief revision is well-defined for dealing with ontologies; we investigate how it can be applied to networks of ontologies. We formulate revision postulates for alignments and networks of ontologies based on an abstraction of existing semantics of networks of ontologies. We show that revision operators cannot simply be based on local revision operators on both ontologies and alignments. We adapt the partial meet revision framework to networks of ontologies and show that it indeed satisfies the revision postulates. Finally, we consider strategies based on network characteristics for designing concrete revision operators.
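To make the setting concrete, here is a minimal, self-contained Python sketch of the partial meet idea applied to a toy network: ontologies assert signed atoms, alignments assert equivalences between atoms, remainders are the maximal consistent subnetworks, and a selection function encoding a network characteristic (here, an assumption that alignments are less reliable than ontologies) chooses among them. The string-based logic, the union-find consistency check, and all names are our own illustrative assumptions, not the paper's formalism.

    from itertools import combinations

    # Statements are tagged with their source: ("O1", "p") is the fact p
    # asserted in ontology O1, ("O2", "-q") its negation in O2, and
    # ("A12", ("p", "q")) the correspondence p = q in alignment A12.

    def consistent(network):
        # Crude checker: merge atoms linked by correspondences (union-find),
        # then refuse any atom asserted both positively and negatively.
        parent = {}
        def find(x):
            parent.setdefault(x, x)
            while parent[x] != x:
                parent[x] = parent[parent[x]]
                x = parent[x]
            return x
        for _, stmt in network:
            if isinstance(stmt, tuple):          # correspondence a = b
                parent[find(stmt[0])] = find(stmt[1])
        pos, neg = set(), set()
        for _, stmt in network:
            if isinstance(stmt, str):
                (neg if stmt.startswith("-") else pos).add(find(stmt.lstrip("-")))
        return pos.isdisjoint(neg)

    def remainders(network, incoming=frozenset()):
        # All maximal subsets of the network consistent with `incoming`.
        stmts, maximal = list(network), []
        for k in range(len(stmts), -1, -1):
            for subset in combinations(stmts, k):
                s = frozenset(subset)
                if consistent(s | incoming) and not any(s < m for m in maximal):
                    maximal.append(s)
        return maximal

    def partial_meet_revision(network, incoming, select=lambda rs: rs):
        # Intersect the selected remainders, then add the incoming statements.
        rs = select(remainders(network, incoming))
        return (frozenset.intersection(*rs) if rs else frozenset()) | incoming

    # Each part is locally consistent; the network is globally inconsistent.
    O1, O2 = {("O1", "p")}, {("O2", "-q")}
    A12 = {("A12", ("p", "q"))}
    net = frozenset(O1 | O2 | A12)

    # A network-characteristic strategy: trust ontologies more than
    # alignments, i.e. prefer remainders that keep every ontology axiom.
    keep_ontologies = lambda rs: [r for r in rs if (O1 | O2) <= r] or rs

    # Revising by nothing amounts to restoring consistency (consolidation).
    print(partial_meet_revision(net, frozenset(), select=keep_ontologies))
    # frozenset({('O1', 'p'), ('O2', '-q')}): only the correspondence is dropped

Without the selection function, full meet (intersecting all three remainders) would drop everything, which is why the paper's strategies for picking among remainders matter.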
p201. Clause 4 of the definition of a closure is incorrect: the relation should be set in the reverse direction (the standard definition is an equivalence). This mistake does not affect the results.
Jérôme Euzenat, First experiments in cultural alignment repair, in: Proc. 3rd ESWC workshop on Debugging ontologies and ontology mappings (WoDOOM), Hersonissos (GR), pp3-14, 2014
Alignments between ontologies may be established through agents holding such ontologies attempting to communicate and taking appropriate action when communication fails. This approach has the advantage of not assuming that everything has to be set correctly before attempting to communicate, and of being able to overcome failures. We test here the adaptation of this approach to alignment repair, i.e., the improvement of incorrect alignments. For that purpose, we perform a series of experiments in which agents react to mistakes in alignments. The agents know only their own ontologies and their alignments with others, and they act in a fully decentralised way. We show that such a society of agents is able to converge towards successful communication by improving the objective correctness of alignments. The obtained results are on par with a baseline of a priori alignment repair algorithms.
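The protocol lends itself to a compact illustration. Below is a minimal Python sketch, under our own assumptions, of the interaction loop described above: an agent classifies an object, a correspondence predicts the class the other agent should assign, and the correspondence is deleted when the receiver disagrees. The deletion operator, the parity-based toy ontologies, and all names are illustrative, not the paper's experimental apparatus.

    import random

    # Toy setup: objects are integers; each agent classifies them with its
    # own (private) ontology; the shared alignment maps A-classes to
    # B-classes.

    class Agent:
        def __init__(self, name, classify):
            self.name, self.classify = name, classify

    def repair_game(a, b, alignment, objects, rounds=200, seed=42):
        # The crudest adaptation operator: delete a correspondence
        # whenever using it leads to failed communication.
        rng = random.Random(seed)
        for _ in range(rounds):
            o = rng.choice(objects)
            ca = a.classify(o)
            if ca not in alignment:
                continue                      # no correspondence: nothing to say
            if b.classify(o) != alignment[ca]:
                del alignment[ca]             # failure: discard the correspondence
        return alignment

    # Both agents in fact classify by parity, but the initial alignment
    # contains one incorrect (crossed) correspondence.
    A = Agent("A", lambda o: "a-even" if o % 2 == 0 else "a-odd")
    B = Agent("B", lambda o: "b-even" if o % 2 == 0 else "b-odd")
    align = {"a-even": "b-even", "a-odd": "b-even"}   # second one is wrong

    print(repair_game(A, B, align, objects=list(range(100))))
    # {'a-even': 'b-even'}: the incorrect correspondence has been deleted

Deleting is only the simplest conceivable reaction; operators that weaken or replace a faulty correspondence instead of dropping it fit the same loop.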
The results of [20140305-NOOR] are not correct, due to various software bugs and to the generated reference alignments. New results are reported in [20180308-NOOR] and [20170208b-NOOR]. Conclusions hold for the former; for the latter, they are more favorable to agents.
Ontology alignment, alignment repair, cultural knowledge evolution, agent simulation, coherence, network of ontologies
Jérôme Euzenat, First experiments in cultural alignment repair (extended version), in: Valentina Presutti, Eva Blomqvist, Raphaël Troncy, Harald Sack, Ioannis Papadakis, Anna Tordai (eds), ESWC 2014 satellite events revised selected papers, Springer Verlag, Heidelberg (DE), 2014, pp115-130
Alignments between ontologies may be established through agents holding such ontologies attempting to communicate and taking appropriate action when communication fails. This approach, which we call cultural repair, has the advantage of not assuming that everything has to be set correctly before attempting to communicate, and of being able to overcome failures. We test here the adaptation of this approach to alignment repair, i.e., the improvement of incorrect alignments. For that purpose, we perform a series of experiments in which agents react to mistakes in alignments. The agents know only their own ontologies and their alignments with others, and they act in a fully decentralised way. We show that cultural repair is able to converge towards successful communication by improving the objective correctness of alignments. The obtained results are on par with a baseline of a priori alignment repair algorithms.
The results of [20140305-NOOR] are not correct, due to various software bugs and to the generated reference alignments. New results are reported in [20180308-NOOR] and [20170208b-NOOR]. Conclusions hold for the former; for the latter, they are more favorable to agents.
Ontology alignment, alignment repair, cultural knowledge evolution, agent simulation, coherence, network of ontologies