My research work builds on my earlier research while integrating a semantic dimension into software evolution analysis. The main objective is to establish a semi-automatic mechanism for the semantic annotation of artifacts and processes related to software evolution management, through a semantic orchestration represented in the form of graphs.
This approach enables a multidimensional analysis of the data, encompassing the actors of the software lifecycle as well as the various types of users. It also aims to formalize the semantic relationships between business domains and software systems, in particular through the adoption of an ontology dedicated to business process models.
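To illustrate the kind of annotation graph involved, the following Python sketch uses rdflib; all namespaces, resources, and properties (EVO, BPO, commit_42, impacts, and so on) are hypothetical and only serve to show how a software evolution artifact can be linked to a business process concept and then queried.

    from rdflib import Graph, Namespace, Literal, RDF, RDFS

    # Hypothetical namespaces for the illustration
    EVO = Namespace("http://example.org/software-evolution#")
    BPO = Namespace("http://example.org/business-process-ontology#")

    g = Graph()
    g.bind("evo", EVO)
    g.bind("bpo", BPO)

    # Annotate a commit (an evolution artifact) with the business activity it impacts
    g.add((EVO.commit_42, RDF.type, EVO.Artifact))
    g.add((EVO.commit_42, RDFS.label, Literal("Refactor invoice validation module")))
    g.add((EVO.commit_42, EVO.impacts, BPO.InvoiceValidation))
    g.add((BPO.InvoiceValidation, RDF.type, BPO.BusinessActivity))

    # Multidimensional query: which business activities are affected by which artifacts?
    query = """
    SELECT ?artifact ?activity WHERE {
        ?artifact evo:impacts ?activity .
        ?activity a bpo:BusinessActivity .
    }
    """
    for row in g.query(query, initNs={"evo": EVO, "bpo": BPO}):
        print(row.artifact, "->", row.activity)

The same graph can then be enriched with annotations about the actors and user types mentioned above, which is what makes the multidimensional analysis possible.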
Currently, my work focuses on developing a semantic annotation mechanism applied to artifacts related to Explainable Artificial Intelligence (XAI). This orientation aims to build bridges between the research conducted by the different members of the SysReIC group, thus contributing to a better integration of expertise within the LISIC laboratory.
LISIC's commitment to the field of artificial intelligence (AI), notably through the creation of the humAIn alliance, together with the growing importance of processes for implementing solutions based on AI algorithms, including those from machine learning, has led us to reorient our research work. This evolution aims to mobilize knowledge management approaches as a methodological lever to support the design and implementation of these processes.
Within this dynamic, the SysReIC group, to which I belong, has taken on the mission of developing research directions centered on integrating knowledge management into the processes of developing AI solutions. A central theme emerges from this orientation: the explainability of artificial intelligence systems.
We are currently conducting in-depth work on this issue, with the aim of better understanding how mechanisms for structuring, representing, and mobilizing knowledge can contribute to improving the transparency, comprehensibility, and interpretability of AI systems.
My main skills relate to knowledge management, particularly rule-based knowledge-based systems. My research work has been oriented towards the development of systems and decision support platforms that help better understand knowledge and its semantics. Together with my colleagues, I defend the idea that explainability in AI could benefit from a better understanding of knowledge and its semantics; we work to validate this idea and to develop research projects that use knowledge management to support the implementation of AI processes, particularly with regard to AI explainability.
My experience also allows me to take part in developing knowledge management tools for domain experts who are not necessarily computer scientists, in order to help them better use and exploit the knowledge in their field of activity.
I actively participate in writing research topic proposals and various reports related to the projects in which the team is involved, with the objective of co-directing this work. The work envisaged by the SysReIC research group mainly concerns knowledge management and its application to problems related to artificial intelligence, particularly AI explainability. We have also obtained funding for co-supervised PhD theses and for several Master 2 level research internships; the students involved have joined our team to work on these AI explainability topics. As a member of the group, I also co-supervise this research work.
My current research work is devoted to the explainability of reasoning processes in artificial intelligence systems. This issue has become a major concern in the field of AI and has been highlighted in numerous works, such as the Villani report on AI. It is clearly established that the lack of explainability of AI systems constitutes a serious obstacle to their deployment in critical and/or sensitive domains, such as medicine, autonomous driving, or decision support in social matters (the allocation of study grants or social housing, for instance). My work therefore aims to develop methods and tools to improve the explainability of AI systems, in particular through approaches based on knowledge management, thereby strengthening trust in AI systems and facilitating their adoption in these critical and/or sensitive domains.
Much work has been carried out in recent years to add explanations to AI systems, particularly those based on deep neural networks, which achieve high performance but are essentially designed and implemented as black boxes. Deep neural networks are used in many domains, such as speech recognition, computer vision, machine translation, and the modeling of natural processes. However, due to their complexity, these systems are often hard to understand and explain, which hinders both the understanding of their decisions and their acceptability in critical domains. For this reason, many researchers are working on the explainability of deep neural networks in order to understand their internal functioning and to make their decisions more comprehensible and transparent for users. This can also contribute to the public's trust in and acceptance of these systems.
The most recent research work explores various avenues towards the explainability of AI systems. Some tools post-process AI models in order to reify the links between the data and the model that led to a specific decision. Other work aims to produce simplified versions of existing AI models. Finally, some work modifies the model generation process itself so as to directly produce more explainable models. While the approaches differ, their common goal is to obtain AI systems that can provide explanations understandable by the human actors concerned, such as doctors or production and maintenance engineers. From our point of view, this amounts to producing explanations at a level of semantic abstraction consistent with their knowledge. Our review of the state of the art shows that while these notions of knowledge and adapted semantics are often implicitly present, they are still rarely reified as such.
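As a concrete illustration of the second avenue (simplified versions of an existing model), the sketch below trains a shallow decision tree as a global surrogate of a black-box classifier; it assumes scikit-learn and a standard benchmark dataset, and is only meant to illustrate the principle, not the specific tools studied in our work.

    from sklearn.datasets import load_breast_cancer
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split
    from sklearn.tree import DecisionTreeClassifier, export_text

    X, y = load_breast_cancer(return_X_y=True, as_frame=True)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    # "Black box" model whose decisions we want to explain
    black_box = RandomForestClassifier(n_estimators=200, random_state=0)
    black_box.fit(X_train, y_train)

    # Global surrogate: a shallow tree trained to imitate the black box's outputs
    surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
    surrogate.fit(X_train, black_box.predict(X_train))

    # Fidelity: how often the surrogate reproduces the black box on unseen data
    fidelity = (surrogate.predict(X_test) == black_box.predict(X_test)).mean()
    print(f"Surrogate fidelity: {fidelity:.2f}")
    print(export_text(surrogate, feature_names=list(X.columns)))

The readable tree plays the role of the explanation, while the fidelity score indicates how faithfully it stands in for the original model.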
My research work fits into this framework. It aims to demonstrate that explainability approaches would strongly benefit from a more in-depth consideration of knowledge and its semantics as first-order objects, explicitly linked to AI systems through tools that allow them to be made explicit and manipulated, such as knowledge graphs and ontologies. These representations address the business domain considered and the user profiles, as well as the nature of the AI algorithms used and their specificities. This work is at the heart of the SysReIC group's research theme.
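The sketch below, with purely hypothetical feature names, concepts, and user profiles, illustrates what treating knowledge as a first-order object can mean in practice: feature-level evidence produced by an AI system is lifted to domain concepts through a small knowledge structure, and the wording is adapted to the profile of the person reading the explanation.

    # Feature-level attributions as produced by some explanation technique (illustrative values)
    attributions = {"mean_radius": 0.42, "worst_concavity": 0.31, "mean_texture": 0.08}

    # Minimal knowledge structure: model feature -> domain concept -> per-profile wording
    feature_to_concept = {
        "mean_radius": "tumour size",
        "worst_concavity": "contour irregularity",
        "mean_texture": "tissue heterogeneity",
    }
    concept_wording = {
        "clinician": {"tumour size": "lesion diameter",
                      "contour irregularity": "margin irregularity",
                      "tissue heterogeneity": "tissue heterogeneity"},
        "patient": {"tumour size": "the size of the lump",
                    "contour irregularity": "its irregular shape",
                    "tissue heterogeneity": "differences in the tissue"},
    }

    def explain(attributions, profile, top_k=2):
        """Render the top-k features as domain-level statements for a given user profile."""
        ranked = sorted(attributions.items(), key=lambda kv: kv[1], reverse=True)[:top_k]
        parts = []
        for feature, weight in ranked:
            concept = feature_to_concept.get(feature, feature)
            wording = concept_wording[profile].get(concept, concept)
            parts.append(f"{wording} (weight {weight:.2f})")
        return "The decision is mainly driven by " + " and ".join(parts) + "."

    print(explain(attributions, "clinician"))
    print(explain(attributions, "patient"))

In a full system, the mapping would of course be carried by knowledge graphs and ontologies rather than by hard-coded dictionaries; the point here is only to show the separation between model-level evidence and knowledge-level explanation.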
Since the appearance of Knowledge-Based Systems (KBS), the need to explain reasoning has always been felt. It is motivated by the user's need to be reassured about how the KBS obtained its result, and by the need for debugging during the development of the system. Another motivation is the use of reasoning explanation as a means of learning the application domain (medicine, banking and finance, etc.).
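In the classical rule-based setting, such an explanation typically amounts to exposing the chain of rules that led to a conclusion. The following minimal sketch, with invented rules and facts, shows a forward-chaining engine that records which rule derived each fact so that it can answer the question of how a conclusion was reached.

    # Each rule: (name, set of premises, conclusion); rules and facts are invented
    RULES = [
        ("R1", {"fever", "cough"}, "flu_suspected"),
        ("R2", {"flu_suspected", "high_risk_patient"}, "refer_to_doctor"),
    ]

    def forward_chain(initial_facts, rules):
        """Apply rules until a fixpoint, recording which rule derived each fact."""
        facts = set(initial_facts)
        trace = {}  # derived fact -> (rule name, premises used)
        changed = True
        while changed:
            changed = False
            for name, premises, conclusion in rules:
                if premises <= facts and conclusion not in facts:
                    facts.add(conclusion)
                    trace[conclusion] = (name, premises)
                    changed = True
        return facts, trace

    def explain(fact, trace):
        """Recursively answer: how was this fact obtained?"""
        if fact not in trace:
            return [f"{fact}: given as input"]
        name, premises = trace[fact]
        lines = [f"{fact}: derived by rule {name} from {sorted(premises)}"]
        for premise in premises:
            lines.extend(explain(premise, trace))
        return lines

    facts, trace = forward_chain({"fever", "cough", "high_risk_patient"}, RULES)
    print("\n".join(explain("refer_to_doctor", trace)))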
While KBS were essentially rule-based in the past, we now speak rather of intelligent systems that combine machine learning techniques with knowledge about application domains, made explicit in particular through ontologies based on description logics. The multiplicity of the techniques used and the exponential growth in the size of ontologies can make the results of the reasoning performed by such intelligent systems difficult for different types of users to grasp.
This work begins with a study of the state of the art on reasoning explanation in intelligent systems. The notion of explanation is first considered in its traditional form, through approaches developed over more than 40 years and for which an extensive bibliography already exists. Current approaches are then reviewed, covering both those concerned with description logic reasoners (ontologies) and those relying on machine learning techniques. The work then aims to show the importance of considering knowledge and its semantics as first-order objects, explicitly linked to AI systems through tools that allow them to be made explicit, such as knowledge graphs and ontologies.
I actively contribute to the scientific community through various editorial board memberships, program committee roles, and peer review activities. Below is a list of my scientific responsibilities organized in reverse chronological order.