Team 1: Dependability of industrial components and systems

Team manager:
Nasreddine Bouguila, Professor

Doctoral students to be involved in the team:
Hamdi Marwa, Dali Hassen Mouna

The strengthening of failure-prevention policies is raising awareness in all sectors, from the scientific training of individuals to production processes. A large number of decrees in Tunisia seek to promote and implement directives on the monitoring of production systems with a view to securing them. In line with this policy, our proposal aims to improve the monitoring of such systems by training young researchers through research into new tools and methods in the field of the dependability of production systems.

The main objectives of this project are:

  • develop methods of state estimation and diagnosis for poorly understood systems, that is, systems with ill-defined models and uncertain measurements;
  • bring out the unity of the concepts involved, so as to establish a coherent scientific corpus on the integrated design of dependable systems;
  • optimize existing methods and develop new approaches to meet the growing dependability requirements of increasingly complex systems.

Dependability is by nature interdisciplinary and has a very broad spectrum, both in the methods it employs and in its fields of application. Characterizing the ability to provide a specified service, dependability is formally defined as "the quality of the service that the system delivers, such that users can place a justified trust in it". A dependable system avoids or eliminates danger and keeps the process in a fault-free operating state in which the level of confidence remains maximal.

This project is structured around four axes, the actions of which are presented below.

Diagnosis is essential in many application fields, for example for monitoring industrial installations, or even in the context of satellite autonomy.

Linear model-based monitoring and diagnostic methods have now reached a certain maturity after twenty years of development. However, the linearity of the models representing the process to be monitored constitutes a strong assumption that limits the relevance of the results that can be obtained. Directly extending methods developed for linear models to arbitrary nonlinear models is delicate. On the other hand, interesting results have already been obtained when the modeling approach relies on a set of simple structural models, each describing the behavior of the system in a particular "operating zone" (defined, for example, by the input values or the state of the system). In this context, the multimodel approach, which builds the global model by interpolating local linear models, has already produced interesting results.
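As a purely illustrative sketch of this multimodel idea (the local models, gains, and validity zones below are invented for the example), a global model can be obtained by blending local linear models with normalized validity weights:

```python
import numpy as np

def weight(x, center, width):
    """Unnormalized Gaussian validity function for one operating zone."""
    return np.exp(-0.5 * ((x - center) / width) ** 2)

def multimodel_step(x, u, models, dt=0.01):
    """One Euler step of the interpolated global model x' = sum_i w_i(x) * (a_i*x + b_i*u).

    models: list of (center, width, a, b) local linear models.
    """
    w = np.array([weight(x, c, s) for c, s, _, _ in models])
    w = w / w.sum()                       # convex combination of local models
    dx = sum(wi * (a * x + b * u) for wi, (_, _, a, b) in zip(w, models))
    return x + dt * dx

# Two illustrative local models, valid around x = 0 and x = 5 respectively.
models = [(0.0, 2.0, -1.0, 1.0), (5.0, 2.0, -0.2, 0.5)]
x = 0.0
for _ in range(100):
    x = multimodel_step(x, u=1.0, models=models)
```

The weights interpolate smoothly between the local behaviors as the operating point moves, which is the mechanism the multimodel approach exploits.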

Conventional methods for automating the monitoring of complex systems generally fall into two broad categories:

- approaches which are based on the use of a behavior model constructed from the physics of the system or from human expertise (internal methods).

- approaches which assume that the knowledge available on the system is limited to its past and present observation (external methods). These methods assume that no model is available to describe cause and effect relationships (process operating model). The only knowledge is based on the measurement of signals taken from the installation to be monitored.

For internal methods, the performance of the diagnostic procedure, in terms of fault detection and localization, depends directly on the quality of the model used. To avoid the difficulties linked to model quality, an alternative is to use external methods based on signals measured on the system to be monitored. These are well suited to highlighting (linear) relationships between system variables without explicitly formulating the model that links them. In addition, criteria of fault detectability and isolability seem easier to take into account within this type of method.
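The external, data-driven idea can be sketched with PCA (the data and fault below are illustrative, not drawn from any specific industrial case): the principal subspace captures the linear relations among the measured variables, and the squared prediction error (SPE) of a new sample flags samples that violate those relations:

```python
import numpy as np

rng = np.random.default_rng(0)

# Fault-free training data: x3 is (almost) a linear combination of x1 and x2.
x1 = rng.normal(size=500)
x2 = rng.normal(size=500)
x3 = 2 * x1 - x2 + 0.01 * rng.normal(size=500)
X = np.column_stack([x1, x2, x3])
X = X - X.mean(axis=0)

# Principal directions of the data; keep 2 components as the model subspace.
_, _, Vt = np.linalg.svd(X, full_matrices=False)
P = Vt[:2].T

def spe(sample):
    """Squared prediction error: squared distance to the PCA model subspace."""
    residual = sample - P @ (P.T @ sample)
    return float(residual @ residual)

normal = np.array([1.0, 0.5, 1.5])   # satisfies x3 = 2*x1 - x2
faulty = np.array([1.0, 0.5, 4.0])   # a sensor bias on x3 breaks the relation
```

No causal model of the process is formulated; the monitor only exploits the (linear) redundancy present in the measurements, which is exactly the premise of the external methods described above.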

This action is therefore based on an application architecture which, in addition to the nominal functions of the system, implements functions for detecting, locating, and diagnosing faults and for detecting changes in operating mode (in particular those linked to changes in the environment), as well as functions for prognosis, accommodation of failures or attacks, and reconfiguration of the control law or of the objectives, all of which provide the desired reactivity. We use FDIR (Fault Detection, Isolation and Recovery) or FTC (Fault Tolerant Control) to denote the set of application mechanisms intended to ensure operational safety. This approach is concerned both with the mechanisms specific to each application, which must be developed case by case, and with the executive mechanisms intended to ensure the dependability of the operational architecture: active or passive redundancy of computation or communication resources, reassignment and re-scheduling of tasks on processors, consistency of the different replicas of a shared variable, respect of deadlines, etc.

A fault-tolerant system is characterized by its ability to maintain or recover, after a malfunction, performance (dynamic or static) close to that of normal operating conditions. Much of the work aimed at guaranteeing a certain degree of fault "tolerance" derives from conventional robust control techniques (so-called "passive" approaches). More recently, there has been a surge of so-called "active" approaches, characterized by the presence of a diagnostic module (FDI, Fault Detection and Isolation). Depending on the severity of the fault, a new set of control parameters or a new control structure may be applied once the fault has been detected and located.

In the literature, few studies have considered the delays associated with the control computation time. After the occurrence of a fault, the failed system operates under nominal control until the fault-tolerant control is computed and applied; during this period, the fault may cause a severe loss of system performance and even of stability.
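A minimal simulation can illustrate this reconfiguration-delay effect (the plant, gains, and fault magnitude are invented for the example): a first-order plant loses actuator effectiveness, the FDI module "detects" the fault only after a number of steps, and only then is the control gain rescaled.

```python
def simulate(delay_steps, n=400, dt=0.01, t_fault=100):
    """Return the worst tracking error after the fault, for a given FDI delay."""
    x, k, ref = 0.0, 2.0, 1.0
    eff = 1.0                        # actuator effectiveness
    errs = []
    for t in range(n):
        if t == t_fault:
            eff = 0.4                # fault: 60 % loss of effectiveness
        if t == t_fault + delay_steps:
            k = 2.0 / eff            # reconfigured gain compensates the loss
        u = k * (ref - x)            # proportional control toward the reference
        x += dt * (-x + eff * u)     # Euler step of the first-order plant
        errs.append(abs(ref - x))
    return max(errs[t_fault:])

err_fast = simulate(5)               # quick detection and reconfiguration
err_slow = simulate(80)              # long delay under nominal control
```

During the delay the faulty system runs under the nominal gain, so the transient degradation grows with the detection delay, which is the point made above.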

In design, significant advances relate to guaranteeing risk reduction in the presence of a dangerous situation by implementing active safety systems. This relies on the use of reliability databases, the inclusion of influence coefficients, and the propagation of uncertainties. The key point concerns taking into account the uncertainties affecting component reliability data when evaluating dependability, using in particular the theories of fuzzy sets, possibility, and evidence. The privileged target of these studies has been Safety Instrumented Systems, for which the operational safety requirements are essential. The operational safety performance of high-integrity protection systems can be studied with Markovian models, which provide a good formalization of the states these systems can take, depending on the events encountered (failure, test, maintenance, etc.) and the parameters studied (failure rate, maintainability, common-cause failure, etc.).
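As a toy illustration of such a Markovian model (the two states and the rates below are purely illustrative), the long-run availability of a repairable component can be recovered from the generator matrix of a continuous-time Markov chain:

```python
import numpy as np

# State 0 = operational, state 1 = failed; lam = failure rate, mu = repair rate.
lam, mu = 1e-4, 1e-1                 # illustrative rates (per hour)

Q = np.array([[-lam,  lam],
              [  mu,  -mu]])         # continuous-time generator matrix

# Stationary distribution pi solves pi @ Q = 0 with the entries summing to 1.
A = np.vstack([Q.T, np.ones(2)])
b = np.array([0.0, 0.0, 1.0])
pi, *_ = np.linalg.lstsq(A, b, rcond=None)

availability = pi[0]                 # long-run probability of the operational state
```

For this two-state chain the closed form is mu / (lam + mu); richer models of Safety Instrumented Systems simply add states for tests, maintenance, and common-cause failures to the same generator-matrix formulation.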

The project is broken down into seven tasks, carried out sequentially or, for some of them, in parallel.


- Task 1 - SCIENTIFIC MONITORING:

Goal: to keep abreast of the state of scientific production in the field in question.

a) Permanent monitoring of scientific production in systems modeling, statistical data processing, measurement validation, and diagnosis,

b) Participation in working groups of the scientific community.


- Task 2 - PROJECT MONITORING:

Purpose: monitoring of the project's work.

a) Work progress management and evaluation procedure

b) Frequency of progress reports,

c) Participation in national working groups,

d) Participation in national and international congresses.

- Task 3 - MODELING:

Goal: to develop modeling tools in the field of complex processes. The three proposed approaches are intentionally different, the idea being to seek out the complementary aspects of these approaches.

a) Black-box approach using the concept of multi-models and the characterization of uncertainties,

b) Black-box approach using the concept of neural networks,

c) Statistical data-processing approach based on PCA (principal component analysis).


- Task 4 - RESIDUAL GENERATION:

Goal: to construct variables indicating the presence of events in the data.

a) Techniques based on model residuals,

b) Observer-based techniques,

c) Structuring of residuals according to fault detection and isolation criteria.
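Point b) can be sketched with a scalar Luenberger observer (the dynamics and fault values below are invented for the example): the residual r = y - C*x_hat stays near zero in the fault-free case and deviates when a sensor bias appears.

```python
import numpy as np

a, c, L = -1.0, 1.0, 2.0      # plant dynamics, output map, observer gain
dt = 0.01

x, x_hat = 1.0, 0.0           # true state and observer estimate
residuals = []
for t in range(600):
    bias = 0.5 if t >= 300 else 0.0    # sensor fault injected at mid-run
    y = c * x + bias                   # measured output
    r = y - c * x_hat                  # residual used for fault detection
    x_hat += dt * (a * x_hat + L * r)  # observer correction step
    x += dt * (a * x)                  # autonomous plant step
    residuals.append(abs(r))
```

Before the fault the estimation error, and hence |r|, converges to zero; the sensor bias then shows up directly in the residual, which is what makes it a usable fault indicator.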

- Task 5 - DIAGNOSIS:

Goal: to develop tools for the detection and characterization of faults.

a) Data and measurement validation,

b) Event detection,

c) Analysis of events, detection of anomalies,

d) Diagnostic sensitivity analysis.


- Task 6 - APPLICATIONS:

Aim: to test the diagnostic methods developed on concrete processes. These can be laboratory models, software-simulated processes, industrial pilot processes, or databases from industrial partners. The important point is the study of the conditions of application and implementation of the methods, the formulation of realistic hypotheses, and the analysis of the discrepancies between application and theory.

a) Definition of a common benchmark,

b) Inventory of industrial applications in Tunisia and making contacts to define the data access protocol,

c) Definition of protocols for the comparison of approaches,

d) Study of the advantages and disadvantages of the different approaches,

e) Merging of methods and results.

- Task 7 - SUMMARY:

Goal: to analyze and quantify the results obtained in terms of scientific production, exchange of ideas, and training of researchers.

a) Analysis of the results in terms of scientific contribution, publications, and defended theses,

b) Identification and analysis of unresolved issues,

c) Identification of potential industrial partners.