Huge quantities of data, described by ontologies and linked together, are being made available. They are generated independently by autonomous providers such as individuals or companies. Because they are heterogeneous, their joint exploitation requires connecting them, resulting in a mesh of interrelated knowledge.
However, data and knowledge have to evolve in the face of changes in what they represent, changes in the context in which they are used, and connections to new data and knowledge sources. As their production and exchange grow larger and more connected, their evolution is no longer compatible with manual curation and maintenance.
Goal: We work towards dealing with the dynamic aspects of knowledge made of interdependent ontologies, data, alignments and links. To that end, we consider change intrinsic to this knowledge, driven by processes such as reasoning, querying or argumentation, as well as changes in the knowledge or the environment it represents, which can be dealt with by belief revision or cultural evolution.
In a pervasive computing environment, the environment itself is the interface between services and users. Using context information coming from sensors, location technologies and aggregation services, applications adapt their run-time behaviour to the context in which users operate, e.g., physical location, social or hierarchical position, current tasks, as well as related information. These applications have to deal with the dynamic integration of new elements (users or devices) into the environment, and the environment has to provide context information to newly designed applications. We study and develop a dynamic context management system for pervasive applications. It must be flexible enough to be used by heterogeneous applications and to accommodate new devices as they arrive.
We have designed an architecture in which context information is distributed in the environment [Euzenat 2006a, b, c, 2008a]. Each device or service implements a context management component in charge of maintaining its local context. It can communicate with other context management components: some of them are context information producers, some are context information consumers and some are both. We have defined a simple protocol allowing a consumer to identify and select the right producer for the information it needs. Context management components express their context information using an OWL ontology and exchange RDF triples with each other. The openness of ontology description languages makes it possible to extend context descriptions, and ontology matching helps deal with independently developed ontologies. Thus, this architecture allows the introduction of new components and new applications without interrupting those already running.
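To fix ideas, here is a minimal sketch, in Python with rdflib, of the producer/consumer exchange described above; the context ontology namespace, property names and discovery function are hypothetical and much simpler than the actual protocol and OWL ontology.

  from rdflib import Graph, Namespace, Literal, URIRef

  CTX = Namespace("http://example.org/context#")   # hypothetical context ontology

  # Producer side: a sensor component publishes its local context graph.
  producer_graph = Graph()
  sensor = URIRef("http://example.org/devices/thermometer-1")
  producer_graph.add((sensor, CTX.observes, CTX.Temperature))
  producer_graph.add((sensor, CTX.locatedIn, URIRef("http://example.org/places/living-room")))
  producer_graph.add((sensor, CTX.hasValue, Literal(21.5)))

  def provides(graph, needed_property):
      # Crude discovery step: does this producer expose the needed property?
      return any(True for _ in graph.triples((None, CTX.observes, needed_property)))

  # Consumer side: an application looking for temperature information
  # identifies a suitable producer, then retrieves the corresponding triples.
  if provides(producer_graph, CTX.Temperature):
      for s, p, o in producer_graph.triples((None, CTX.hasValue, None)):
          print(s, "->", o)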
We have developed a library for building the distributed context management system and provide an implementation of its interfaces on top of the JADE multi-agent system. The Alignment server allows the context information manager component to find correspondences between the various ontologies it is confronted with, and thus to match application needs in terms of context information with the information provided by the other devices.
This implementation provides an agent that manages (searches, broadcasts and updates) context information. Developers have to describe the functionalities of their services, sensors or applications, i.e., to provide an OWL description of the information they can provide and/or the information they seek.
We have built a complete, easily deployable ambient home environment. Our infrastructure manages context information flows from sensors and web services to pervasive applications and a dynamic service composition infrastructure. We demonstrated it through several examples composed of a set of potentially interchangeable sensors and actuators. These applications are combined into an integrated scenario showing how an ambient home environment can improve the experience of a typical Grenoble resident and help them organise their leisure. This environment was shown at the Ubicomp 2007 conference.
This work was partly carried out in the PhD thesis of Jérôme Pierson [Pierson 2009a], developed in collaboration with France Telecom R&D (Fano Ramparany).
References on context
When two independently developed agents want to interact, they may not share the same ontologies. In order to reconcile their ontologies, they can take advantage of an alignment service which will provide alignments between the two ontologies. But if it is not possible to obtain an alignment that suits both parties, it is necessary for these parties, if they want to interact, to negotiate the meaning of terms or, more modestly, to negotiate the correspondences in alignments. For that purpose, we have introduced a novel argumentation framework for arguing for and against correspondences found in alignments [Laera 2006a, b, c, 2007a]. This framework is based on previous work on argumentation in multi-agent systems, and especially value-based argumentation, but adapts it to the specific case of arguing about alignments and correspondences. It provides a first typology of arguments that can be applied to correspondences between ontology entities (based on the way correspondences have been obtained). A preference relation among arguments can be defined with regard to this typology. This relation may differ from agent to agent, so that they do not all prefer the same arguments. We have used classical multi-agent argumentation theory to characterise what is an acceptable argument for an agent, as well as the preferred extensions (of a set of arguments) for a set of agents having different preference relations. We also designed an argumentation protocol for reaching these preferred extensions, and we provide strategies for evaluating arguments during the unfolding of the negotiation dialogue.
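The following toy sketch, in Python, illustrates the general mechanism of value-based argumentation used here: an attack only succeeds when the audience does not prefer the value of the attacked argument, and acceptable arguments are computed as the grounded extension of the resulting defeat graph. The argument names, values and preference order are made up for the example and do not reproduce the typology of [Laera 2006a, b, c, 2007a].

  def defeats(attacks, value, prefers):
      # An attack a -> b succeeds unless the audience prefers b's value to a's.
      return {(a, b) for (a, b) in attacks if not prefers(value[b], value[a])}

  def grounded_extension(arguments, defeat):
      # Least fixpoint of the characteristic function on the defeat graph.
      extension = set()
      while True:
          defended = {
              a for a in arguments
              if all(any((d, b) in defeat for d in extension)
                     for (b, x) in defeat if x == a)
          }
          if defended == extension:
              return extension
          extension = defended

  # Arguments for/against a correspondence, each promoting a "value"
  # (here, the kind of evidence that produced it -- all names are made up).
  arguments = {"a1", "a2", "a3"}
  value = {"a1": "terminological", "a2": "structural", "a3": "extensional"}
  attacks = {("a2", "a1"), ("a3", "a2")}

  # An agent preferring extensional over structural over terminological evidence.
  order = {"extensional": 3, "structural": 2, "terminological": 1}
  prefers = lambda v1, v2: order[v1] > order[v2]

  print(grounded_extension(arguments, defeats(attacks, value, prefers)))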
This work has been developed in collaboration with the Computer Science Department of the University of Liverpool (Loredana Laera, Valentina Tamma and Trevor Bench-Capon).
So far, the argumentation process does not guarantee that the resulting (agreed) alignment is consistent. We have therefore studied argumentation and consistency checking for alignment agreement, and provided an argumentation model that combines argumentation with logical consistency checking in order to produce consistent agreed alignments. We have evaluated our approach using the Conference dataset of OAEI. However, evaluations so far have shown that, while argumentation or consistency checking alone definitely provide better agreed alignments, their combination does not improve on this [Trojahn 2010b].
All our work on argumentation for ontology alignment is summarised in [Trojahn 2011a].
References on argumentation
Networks of ontologies are made of a collection of logical theories, called ontologies, related by alignments. They arise naturally in distributed and open contexts in which theories are developed and maintained independently, such as the semantic web. This calls for reconsidering problems such as revision in this context: given a set of ontologies connected by alignments, how to evolve them so that they account for new information. In networks of ontologies, inconsistency may come from two different sources: local inconsistency in a particular ontology or alignment, and global inconsistency between them. Belief revision is well-defined for dealing with ontologies; we have investigated how it can apply to networks of ontologies. We formulated revision postulates for alignments and networks of ontologies based on an abstraction of existing semantics of networks of ontologies [Euzenat 2015a]. We showed that revision operators cannot be simply based on local revision operators on both ontologies and alignments. We adapted the partial meet revision framework to networks of ontologies and showed that it indeed satisfies the revision postulates. Finally, we considered strategies based on network characteristics for designing concrete revision operators.
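For reference, the classical AGM partial meet construction that this work adapts (the theory K becoming a network of ontologies and Cn the consequence operator induced by the network semantics) can be stated as follows, where K ⊥ φ collects the maximal subsets of K not implying φ and γ is a selection function over them:

  \[
  \begin{aligned}
  K \perp \varphi &= \{ K' \subseteq K \mid \varphi \notin \mathrm{Cn}(K') \text{ and } \forall K''\colon K' \subset K'' \subseteq K \Rightarrow \varphi \in \mathrm{Cn}(K'') \},\\
  K \div \varphi  &= \bigcap \gamma(K \perp \varphi),\\
  K \ast \varphi  &= \mathrm{Cn}\big((K \div \neg\varphi) \cup \{\varphi\}\big).
  \end{aligned}
  \]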
References on revision
We are considering the global evolution of knowledge represented by interdependent ontologies, data, alignments and links. Our goal is to be able to maintain such a structure with respect to the processes which are involved in its construction: logical inference, ontology matching, link key extraction, link generation, etc.
Our initial work focuses on how data and ontology changes cause alignment evolution, in particular when the alignments have been produced through instance-based matching using links between data. In this regard, we are developing techniques for circumscribing the elements and relationships affected by a change as well as for evaluating the need for change propagation; most of the time, a simple change will not require recomputing link keys.
Alignments between ontologies may be established through agents holding such ontologies attempting to communicate and taking appropriate actions when communication fails. This approach, which we call cultural knowledge evolution, has the advantage of not assuming that everything should be set correctly before trying to communicate, and of being able to overcome failures. We have tested this approach on alignment repair, i.e., the improvement of incorrect alignments. For that purpose, we performed a series of experiments in which agents react to mistakes in alignments. Agents only know about their ontologies and their alignments with others, and they act in a fully decentralised way. We showed that cultural repair is able to converge towards successful communication by improving the objective correctness of alignments. The obtained results are on par with a baseline of state-of-the-art alignment repair algorithms [Euzenat 2014b, c].
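The following toy sketch, in Python, illustrates the basic repair step in this setting; it is a deliberately simplified illustration with made-up ontologies, not the code used in the experiments of [Euzenat 2014b, c]. Two agents each classify objects with their own ontology, and when a correspondence predicts an answer that the other agent contradicts, the correspondence is discarded.

  import random

  class Agent:
      def __init__(self, name, classify, alignment):
          self.name = name
          self.classify = classify    # maps an object to a class of the agent's own ontology
          self.alignment = alignment  # own class -> class of the other agent's ontology

      def interact(self, other, obj):
          own_class = self.classify(obj)
          predicted = self.alignment.get(own_class)   # what the correspondence predicts
          actual = other.classify(obj)                # what the other agent actually answers
          if predicted is not None and predicted != actual:
              del self.alignment[own_class]           # repair: drop the faulty correspondence
              return False                            # communication failed
          return True

  # Two agents with toy "ontologies" over integers (even/odd versus small/large).
  a = Agent("a", lambda n: "even" if n % 2 == 0 else "odd", {"even": "small", "odd": "large"})
  b = Agent("b", lambda n: "small" if n < 50 else "large", {})

  failures = sum(not a.interact(b, random.randrange(100)) for _ in range(1000))
  print("failures:", failures, "remaining correspondences:", a.alignment)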
The benchmarks and results are available at https://sake.re and the software at https://gitlab.inria.fr/moex/lazylav/.
References on cultural repair
Query answering in expressive knowledge bases is a difficult task, and it becomes even harder when knowledge is spread over different knowledge bases. We have considered query answering over a system in Distributed Description Logics, in which each ontology is a description logic knowledge base and alignments are made of directed relations called bridge rules. More precisely, we have defined the distributed answers of a given query expressed in terms of one ontology (called the target ontology) of the system [Alkhateeb 2007a]. These answers may contain individuals from different ontologies. Our definition of distributed answers is well founded since it covers local query answers. To compute these answers, we have provided an algorithm that reduces the problem of distributed query answering to local query answering. This algorithm extends the target ontology with knowledge deduced using bridge rules and local axioms of other ontologies, then locally evaluates the query over the extended ontology alone. Although this algorithm has been proved correct, it is not complete for highly expressive logics; its completeness for fragments of less expressive description logics is still open. The algorithm also has two main drawbacks: first, it consumes too much time and too many resources; second, it requires full access to the foreign knowledge.
To overcome these problems, we proposed another approach consisting of broadcasting a set of queries to each foreign ontology [Alkhateeb 2007b]. Each query in the set is obtained by translating the terms of the initial query using bridge rules. Although this approach is incomplete, it allows for highly parallel distributed query evaluation.
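The sketch below, in Python, illustrates the translation step on toy data structures (hypothetical names and a simplified notion of bridge rule, not the actual algorithm or syntax): each term of the query over the target ontology is replaced by the foreign terms that bridge rules relate to it, yielding one rewritten query per foreign ontology.

  from itertools import product

  # Bridge rules stating that a foreign concept is subsumed by a target concept,
  # grouped by foreign ontology (all names are made up).
  bridge_rules = {
      "onto1": [("onto1:Author", "target:Person"), ("onto1:Monograph", "target:Book")],
      "onto2": [("onto2:Writer", "target:Person")],
  }

  def translations(query_terms, rules):
      # All rewritings of the query terms using the given bridge rules.
      candidates = [[c for (c, d) in rules if d == t] for t in query_terms]
      if any(not alternatives for alternatives in candidates):
          return []                       # some term has no translation in this ontology
      return [list(combo) for combo in product(*candidates)]

  query = ["target:Person", "target:Book"]   # conjunctive query over the target ontology
  for onto, rules in bridge_rules.items():
      for rewritten in translations(query, rules):
          print("send to", onto, ":", rewritten)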
In order to effectively reason on distributed systems of networked ontologies, we introduced a new kind of distributed logic, namely Integrated Distributed Description Logics (IDDL) [Zimmermann 2007a], in which ontologies are represented as description logic knowledge bases and alignments assert cross-ontology concept/role subsumption or disjointness, or cross-ontology instance membership. In particular, this formalism is adapted to reasoning with OWL ontologies aligned by automatic ontology matching tools. The semantics of the logic is the one we introduced (see Semantics of alignments and distributed systems).
The difference between IDDL and other formalisms is that (i) IDDL focuses on alignments by considering them as pieces of knowledge independent from those of the ontologies, (ii) IDDL makes no expressiveness assumption on the formalisms used in the ontologies except decidability, and (iii) IDDL supports distributed reasoning, i.e., all local computation for ontologies can be performed independently by local reasoners.
We have developed an algorithm for consistency checking in IDDL [Zimmermann 2008a, Zimmermann 2008b]. The procedure is correct and complete when the correspondences appearing in the alignments only assert cross-ontology subsumption of concepts or roles, or cross-ontology disjointness of concepts. The complexity of our consistency checking procedure is at least NP, but it also depends on the complexity of the local reasoners.
This algorithm has been implemented and a preliminary version of the IDDL reasoner is available (http://iddl.gforge.inria.fr/). First experiments with the prototype show that it answers quickly on several real-life cases.
In a semantic peer-to-peer network in which peers use different ontologies, links between peers are realised by means of alignments. Queries (and their answers) are thus successively translated according to these alignments before peers receive them.
We have designed an experimental picture annotation and sharing system based on semantic web technologies. The main characteristic of this system is that pictures are annotated with ontologies that people can modify while annotating, e.g., by adding a missing class or property. As a consequence, people hold annotations based on different ontologies that must be reconciled if they want to share annotations or query each other's pictures. Alignments between these local ontologies must therefore be provided before sharing picture annotations.
This system has been implemented and tested by 7 people on a common corpus of 50 pictures. Each participant was free to use whatever ontology they wished for annotating these pictures. They were then asked to align their ontology with that of another participant. We have used these results to compare basic similarity measures between ontologies [Euzenat 2009a]. We have also used this experiment as a basis for our work on semantic social network analysis (see below).
This work has so far remained preliminary, but it has shown us that this approach is practicable.
However, depending on the quality of alignments (they may be incorrect or incomplete), the mismatch between peer ontologies, and their own viewpoints, there is no guarantee that peers will obtain satisfactory answers to their queries. A trust mechanism can assist peers to select those paths in the network that are better suited to their queries.
We have designed a trust mechanism that can assist peers to select those peers in the network that are better suited to answer their queries [Atencia 2011a, 2015a]. The trust that a peer has towards another peer depends on a specific query and represents the probability that the latter peer will provide a satisfactory answer. In order to compute trust, we exploit both alignments and peers' direct experience, and perform Bayesian inference to approximate probability values. We have implemented our technique and conducted an evaluation. Experimental results showed that trust values converge as more queries are sent and answers received. Furthermore, the use of trust is shown to improve both precision and recall of query answers.
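One simple way to realise such Bayesian updating is a Beta-Bernoulli model, sketched below in Python; this is an illustration under that modelling assumption, not necessarily the exact model of [Atencia 2011a, 2015a]. Each answer to a given query is judged satisfactory or not, and the trust value is the posterior mean of the probability of receiving a satisfactory answer from that peer.

  class QueryTrust:
      # Beta(alpha, beta) prior over the probability of a satisfactory answer;
      # alignments could be used to set an informed prior instead of a uniform one.
      def __init__(self, alpha=1.0, beta=1.0):
          self.alpha = alpha
          self.beta = beta

      def observe(self, satisfactory):
          # Bayesian update after judging one received answer.
          if satisfactory:
              self.alpha += 1
          else:
              self.beta += 1

      def value(self):
          # Posterior mean, used as the trust value for this (peer, query) pair.
          return self.alpha / (self.alpha + self.beta)

  trust = QueryTrust()
  for outcome in [True, True, False, True]:
      trust.observe(outcome)
  print(round(trust.value(), 3))   # 0.667 with a uniform prior and 3 satisfactory answers out of 4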
This work is developed in collaboration with Marie-Christine Rousset (LIG) in the framework of the DataRing project.
References on semantic peer-to-peer systems
Social networks are simply graphs relating people through social relations (usually denoting that they know each other). There has been much work on social network analysis for finding central people in a network or for efficiently connecting one individual to another.
We introduced the notion of semantic social networks in order to describe networks embedding not only relations between people, but also the ontologies that people use. These ontologies can be used, for instance, in order to annotate resources such as documents, pictures, etc. We proposed an organisation for semantic social networks in three layers: social layer, ontology layer and concept layer. Each layer features a network based on different relations [Jung 2006a, 2007a].
People in the social network are related to the ontologies they use, and ontologies are related to the concepts they use and define. However, it would be useful to be able to infer relations between people from the relations between concepts and ontologies. This has the advantage of providing potential proximity relations for people who do not even know each other. Such techniques can be useful, for instance, for finding people to whom it will be easier to forward a query, or homogeneous groups of people who will be more likely to design a consensus ontology [Jung 2007b]. We proposed some tentative propagation rules as well as measures supporting such network analysis [David 2012c].
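The toy sketch below, in Python, illustrates the idea of propagating proximity upwards through the layers; the data, similarity measure and propagation rule are hypothetical and much cruder than those of [Jung 2007b] and [David 2012c]. Two people are considered close when the ontologies they use share concepts.

  def jaccard(s1, s2):
      return len(s1 & s2) / len(s1 | s2) if s1 | s2 else 0.0

  # Concept layer: concepts used or defined by each ontology (made-up data).
  ontology_concepts = {
      "onto-a": {"Person", "Photo", "Landscape"},
      "onto-b": {"Person", "Picture", "Portrait"},
      "onto-c": {"Event", "Location"},
  }

  # Ontology layer: ontologies used by each person of the social layer.
  person_ontologies = {"alice": {"onto-a"}, "bob": {"onto-b"}, "carol": {"onto-c"}}

  def person_proximity(p1, p2):
      # Average similarity of the ontologies two people use, taken as a proxy
      # for their proximity in the social layer.
      sims = [jaccard(ontology_concepts[o1], ontology_concepts[o2])
              for o1 in person_ontologies[p1] for o2 in person_ontologies[p2]]
      return sum(sims) / len(sims) if sims else 0.0

  print(person_proximity("alice", "bob"))    # > 0: their ontologies share concepts
  print(person_proximity("alice", "carol"))  # 0.0: disjoint vocabularies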