9 results for Fault location algorithms
in AMS Tesi di Dottorato - Alm@DL - Università di Bologna
Abstract:
Since the first underground nuclear explosion, carried out in 1958, the analysis of seismic signals generated by these sources has allowed seismologists to refine the travel times of seismic waves through the Earth and to verify the accuracy of location algorithms (the ground truth for these sources was often known). Long international negotiations have been devoted to limiting the proliferation and testing of nuclear weapons. In particular, the Comprehensive Nuclear-Test-Ban Treaty (CTBT) was opened for signature in 1996; although it has been signed by 178 States, it has not yet entered into force. The Treaty underlines the fundamental role of seismological observations in verifying compliance, by detecting and locating seismic events and identifying the nature of their sources. A precise determination of the hypocentral parameters is the first step in discriminating whether a given seismic event is natural or not. If a specific event is deemed suspicious by the majority of the States Parties, the Treaty contains provisions for conducting an on-site inspection (OSI) in the area surrounding the epicenter of the event, located through the International Monitoring System (IMS) of the CTBT Organization. An OSI is expected to include the use of passive seismic techniques in the area of the suspected clandestine underground nuclear test: high-quality seismological systems are thought to be capable of detecting and locating the very weak aftershocks triggered by underground nuclear explosions in the first days or weeks following the test. This PhD thesis deals with the development of two different seismic location techniques. The first, known as the double difference joint hypocenter determination (DDJHD) technique, is aimed at locating closely spaced events at a global scale. The locations obtained by this method have high relative accuracy, although the absolute location of the whole cluster remains uncertain; we eliminate this problem by introducing a priori information, namely the known location of a selected event. The second technique concerns reliable estimation of the back azimuth and apparent velocity of seismic waves from local events of very low magnitude recorded by a tripartite array at a very local scale. For both techniques, we have used cross-correlation of digital waveforms to minimize the errors linked to incorrect phase picking. The cross-correlation method relies on the similarity between the waveforms of a pair of events at the same station, at the global scale, and on the similarity between the waveforms of the same event at two different sensors of the tripartite array, at the local scale. After preliminary, simulation-based tests on the reliability of our location techniques, we applied both methodologies to real seismic events. The DDJHD technique was applied to a seismic sequence that occurred in the Turkey-Iran border region, using the data recorded by the IMS. Initially, the algorithm was applied to the differences among the original arrival times of the P phases, without cross-correlation. We found that the considerable geometrical spreading noticeable in the standard locations (namely the locations produced by the analysts of the International Data Center (IDC) of the CTBT Organization, taken as our reference) was substantially reduced by the application of our technique.
This is what we expected, since the methodology was applied to a sequence of events whose hypocenters can be assumed to be truly close, belonging to the same seismic structure. Our results point out the main advantage of this methodology: the systematic errors affecting the arrival times have been removed, or at least reduced. The introduction of cross-correlation did not bring evident improvements to our results: the two sets of locations (without and with the application of the cross-correlation technique) are very similar to each other. This suggests that the use of cross-correlation did not substantially improve the precision of the manual picks; the picks reported by the IDC are probably good enough that the random picking error is less important than the systematic error in travel times. A further explanation for the limited benefit of the cross-correlation is that the events in our data set generally do not have a good signal-to-noise ratio (SNR): the selected sequence is composed of weak events (magnitude 4 or smaller), and the signals are strongly attenuated because of the large distance between the stations and the hypocentral area. At the local scale, in addition to cross-correlation, we performed signal interpolation to improve the time resolution. The algorithm thus developed was applied to data collected during an experiment carried out in Israel between 1998 and 1999. The results pointed out the following relevant conclusions: a) it is necessary to correlate waveform segments corresponding to the same seismic phases; b) it is not essential to select the exact first arrivals; and c) relevant information can also be obtained from the maximum-amplitude wavelet of the waveforms (particularly in poor SNR conditions). Another notable feature of our procedure is that it does not require a long time to process the data, so the user can immediately check the results. During a field survey, this makes a quasi-real-time check possible, allowing immediate optimization of the array geometry if the early results suggest it.
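As a hedged illustration of the cross-correlation step described above (a minimal sketch with assumed names, not the thesis code), the following Python snippet estimates the relative delay between two digitized waveforms and refines it by parabolic interpolation of the correlation peak, in the spirit of the interpolation used to improve time resolution at the local scale.

# Minimal sketch (assumed names, not the thesis implementation): estimate the delay
# of waveform y with respect to waveform x, both sampled at interval dt, by
# cross-correlation with parabolic interpolation of the peak for sub-sample resolution.
import numpy as np

def relative_delay(x, y, dt):
    x = (x - x.mean()) / x.std()
    y = (y - y.mean()) / y.std()
    cc = np.correlate(y, x, mode="full")      # full cross-correlation
    k = int(np.argmax(cc))                    # integer-lag peak
    lag = k - (len(x) - 1)
    if 0 < k < len(cc) - 1:                   # parabolic fit around the peak
        c_m, c_0, c_p = cc[k - 1], cc[k], cc[k + 1]
        denom = c_m - 2.0 * c_0 + c_p
        if denom != 0.0:
            lag += 0.5 * (c_m - c_p) / denom
    return lag * dt

For a pair of events recorded at the same station (global scale), or for the same event recorded at two sensors of the tripartite array (local scale), such an estimated delay refines the picked arrival-time difference that feeds the location algorithm.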
Abstract:
The research activity carried out during the PhD course in Electrical Engineering belongs to the branch of electric and electronic measurements. The main subject of the present thesis is a distributed measurement system to be installed in Medium Voltage power networks, together with the method developed to analyze the data acquired by the measurement system itself and to monitor power quality. Chapter 2 illustrates the increasing interest in power quality in electrical systems, by reviewing the international research activity on the problem and the relevant standards and guidelines that have been issued. The quality of the voltage provided by utilities and influenced by customers at the various points of a network has emerged as an issue only in recent years, in particular as a consequence of energy market liberalization. Traditionally, the quality of the delivered energy has been associated mostly with its continuity; hence reliability was the main characteristic to be ensured in power systems. Nowadays, the number and duration of interruptions are the “quality indicators” commonly perceived by most customers; for this reason, a short section is also dedicated to network reliability and its regulation. In this context it should be noted that although the measurement system developed during the research activity belongs to the field of power quality evaluation systems, the information registered in real time by its remote stations can be used to improve system reliability too. Given the wide range of power-quality-degrading phenomena that can occur in distribution networks, the study focuses on electromagnetic transients affecting line voltages. The outcome of the study has been the design and realization of a distributed measurement system which continuously monitors the phase signals at different points of a network, detects the occurrence of transients superposed on the fundamental steady-state component, and registers the time of occurrence of such events. The data set is finally used to locate the source of the transient disturbance propagating along the network lines. Most of the oscillatory transients affecting line voltages are due to faults occurring at any point of the distribution system and must be detected before the protection equipment intervenes. An important conclusion is that the method can improve the reliability of the monitored network, since knowledge of the location of a fault allows the energy manager to reduce as much as possible both the area of the network to be disconnected for protection purposes and the time spent by technical staff to recover from the abnormal condition and/or the damage. The part of the thesis presenting the results of this study is structured as follows: chapter 3 deals with the propagation of electromagnetic transients in power systems, defining the characteristics and causes of the phenomena and briefly reporting the theory and approaches used to study transient propagation. Then the state of the art concerning methods to detect and locate faults in distribution networks is presented. Finally, attention is paid to the particular technique adopted for this purpose in the thesis and to the methods developed on the basis of that approach. Chapter 4 reports the configuration of the distribution networks to which the fault location method has been applied by means of simulations, as well as the results obtained case by case.
In this way the performance of the location procedure is tested, first in ideal and then in realistic operating conditions. Chapter 5 presents the measurement system designed to implement the transient detection and fault location method. The hardware belonging to the measurement chain of every acquisition channel in the remote stations is described. Then the global measurement system is characterized by considering the non-ideal aspects of each device that can contribute to the final combined uncertainty of the estimated fault position in the network under test. Finally, this parameter is computed according to the Guide to the Expression of Uncertainty in Measurement, by means of a numerical procedure. The last chapter describes a device designed and realized during the PhD activity to replace the commercial capacitive voltage divider in the conditioning block of the measurement chain. This study aimed to provide an alternative to the transducer in use, with equivalent performance and lower cost. In this way, the economic impact of the investment associated with the whole measurement system would be significantly reduced, making the method much more feasible to apply.
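As a hedged illustration of the arrival-time principle behind transient-based fault location (a minimal sketch assuming a simple two-end scheme on a single line section; the thesis applies the idea to a monitored distribution network, and this is not its implementation), the snippet below estimates a fault position from the transient arrival times registered at the two ends of a line.

# Minimal sketch (assumed two-end arrival-time scheme, not the thesis implementation):
# locate a fault on a line section of length L from the synchronized arrival times of
# the transient wavefront at the two ends, given the propagation speed v.

def fault_distance(t_a, t_b, line_length, wave_speed):
    # The wavefront reaches end A after d / v and end B after (L - d) / v,
    # so d = (L + v * (t_a - t_b)) / 2.
    d = 0.5 * (line_length + wave_speed * (t_a - t_b))
    return min(max(d, 0.0), line_length)   # clamp to the physical line section

# Example: 10 km line, propagation at about 2.9e8 m/s, wavefront seen 6 µs earlier at A
print(fault_distance(t_a=0.0, t_b=6e-6, line_length=10_000.0, wave_speed=2.9e8))  # ~4130 m

The accuracy of any estimate of this kind depends directly on the time-stamping uncertainty of the remote stations, which is why the combined uncertainty analysis of chapter 5 matters.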
Abstract:
The Capacitated Location-Routing Problem (CLRP) is an NP-hard problem, since it generalizes two well-known NP-hard problems: the Capacitated Facility Location Problem (CFLP) and the Capacitated Vehicle Routing Problem (CVRP). The Multi-Depot Vehicle Routing Problem (MDVRP) is also known to be NP-hard, since it is a generalization of the well-known Vehicle Routing Problem (VRP), which arises in the single-depot case. This thesis addresses heuristic algorithms, based on the well-known granular search idea introduced by Toth and Vigo (2003), to solve the CLRP and the MDVRP. Extensive computational experiments on benchmark instances for both problems have been performed to determine the effectiveness of the proposed algorithms. This work is organized as follows: Chapter 1 provides a detailed overview and a methodological review of the literature on the Capacitated Location-Routing Problem (CLRP) and the Multi-Depot Vehicle Routing Problem (MDVRP). Chapter 2 describes a two-phase hybrid heuristic algorithm to solve the CLRP. Chapter 3 presents a computational comparison of heuristic algorithms for the CLRP. Chapter 4 presents a hybrid granular tabu search approach for solving the MDVRP.
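As a hedged illustration of the granular search idea cited above (a minimal sketch with assumed names; the threshold rule loosely follows the principle of keeping only "short" arcs plus arcs incident to depots or belonging to the incumbent solution, and is not the thesis code), the snippet below builds the sparse arc set that a granular neighbourhood would explore.

# Minimal sketch (assumed names, not the thesis implementation): build the sparse
# set of arcs used by a granular local search / tabu search.
import numpy as np

def granular_arcs(cost, depot_nodes, incumbent_arcs, beta=1.2):
    n = cost.shape[0]
    threshold = beta * cost[cost > 0].mean()     # granularity threshold (assumed rule)
    arcs = set(incumbent_arcs)                   # always keep the arcs of the incumbent
    for i in range(n):
        for j in range(n):
            if i != j and (cost[i, j] <= threshold or i in depot_nodes or j in depot_nodes):
                arcs.add((i, j))
    return arcs

Restricting local-search moves to such a sparse set is what keeps the neighbourhood exploration fast enough for large CLRP and MDVRP instances.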
Abstract:
Latency can be defined as the sum of the arrival times at the customers. Minimum latency problems are especially relevant in applications related to humanitarian logistics. This thesis presents algorithms for solving a family of vehicle routing problems with minimum latency. First, the latency location routing problem (LLRP) is considered. It consists of determining the subset of depots to be opened and the routes that a set of homogeneous capacitated vehicles must perform in order to visit a set of customers, such that the sum of the demands of the customers assigned to each vehicle does not exceed the capacity of the vehicle and the total latency is minimized. For solving this problem, three metaheuristic algorithms combining simulated annealing and variable neighborhood descent, and an iterated local search (ILS) algorithm, are proposed. Furthermore, the multi-depot cumulative capacitated vehicle routing problem (MDCCVRP) and the multi-depot k-traveling repairman problem (MDk-TRP) are solved with the proposed ILS algorithm. The MDCCVRP is a special case of the LLRP in which all the depots can be opened, and the MDk-TRP is a special case of the MDCCVRP in which the capacity constraints are relaxed. Finally, an LLRP with stochastic travel times is studied. A two-stage stochastic programming model and a variable neighborhood search algorithm are proposed for solving the problem. Furthermore, a sampling method is developed for tackling instances with an infinite number of scenarios. Extensive computational experiments show that the proposed methods are effective for solving the problems under study.
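As a hedged illustration of the latency objective defined above (a minimal sketch with assumed data structures, not the thesis code), the snippet below computes the latency of a single route as the sum of the arrival times at the customers it visits.

# Minimal sketch (assumed names, not the thesis implementation): latency of one route.

def route_latency(route, travel_time, depot):
    # route: ordered list of customers; travel_time[(i, j)]: time to travel from i to j
    latency, clock, prev = 0.0, 0.0, depot
    for customer in route:
        clock += travel_time[(prev, customer)]   # arrival time at this customer
        latency += clock
        prev = customer
    return latency

# Example: one vehicle leaving depot 0 and visiting customers 1, 2, 3
tt = {(0, 1): 4.0, (1, 2): 3.0, (2, 3): 5.0}
print(route_latency([1, 2, 3], tt, depot=0))     # 4 + 7 + 12 = 23

The LLRP, MDCCVRP and MDk-TRP all minimize the sum of such route latencies over the whole fleet, under their respective depot-opening and capacity constraints.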
Abstract:
This work is structured as follows. In Section 1 we discuss the clinical problem of heart failure. In particular, we present the phenomenon known as ventricular mechanical dyssynchrony: its impact on cardiac function, the therapy for its treatment and the methods for its quantification. Specifically, we describe the conductance catheter and its use for the measurement of dyssynchrony. At the end of Section 1, we propose a new set of indexes to quantify dyssynchrony, which are studied and validated thereafter. In Section 2 we describe the studies carried out in this work: we report the experimental protocols and we present and discuss the results obtained. Finally, we report the overall conclusions drawn from this work and we try to envisage future work and possible clinical applications of our results. Ancillary studies carried out during this work, mainly to investigate several aspects of cardiac resynchronization therapy (CRT), are mentioned in the Appendix. -------- Ventricular mechanical dyssynchrony plays a regulating role already in normal physiology but is especially important in pathological conditions, such as hypertrophy, ischemia, infarction, or heart failure (Chapters 1, 2). Several prospective randomized controlled trials have supported the clinical efficacy and safety of cardiac resynchronization therapy (CRT) in patients with moderate or severe heart failure and ventricular dyssynchrony. CRT resynchronizes ventricular contraction by simultaneous pacing of both the left and right ventricle (biventricular pacing) (Chapter 1). The conductance catheter method has been used extensively to assess global systolic and diastolic ventricular function and, more recently, the ability of this instrument to pick up multiple segmental volume signals has been used to quantify mechanical ventricular dyssynchrony. Specifically, novel indexes based on volume signals acquired with the conductance catheter were introduced to quantify dyssynchrony (Chapters 3, 4). The present work aimed to describe the characteristics of the conductance-volume signals, to investigate the performance of the indexes of ventricular dyssynchrony described in the literature, and to introduce and validate improved dyssynchrony indexes. Moreover, using the conductance catheter method and the new indexes, the clinical problem of ventricular pacing site optimization was addressed and the measurement protocol to adopt for hemodynamic tests on cardiac pacing was investigated. In accordance with the aims of the work, in addition to the classical time-domain parameters, a new set of indexes has been extracted, based on a coherent averaging procedure and on spectral and cross-spectral analysis (Chapter 4). Our analyses were carried out on patients with indications for electrophysiologic study or device implantation (Chapter 5). For the first time, besides patients with heart failure, indexes of mechanical dyssynchrony based on the conductance catheter were extracted and studied in a population of patients with preserved ventricular function, providing information on the normal range of such values. By performing a frequency-domain analysis and by applying an optimized coherent averaging procedure (Chapter 6.a), we were able to describe some characteristics of the conductance-volume signals (Chapter 6.b). We unmasked the presence of considerable beat-to-beat variations in dyssynchrony, which seemed more frequent in patients with ventricular dysfunction and appeared to play a role in discriminating between patients.
These non-recurrent mechanical ventricular non-uniformities are probably the expression of the substantial beat-to-beat hemodynamic variations often associated with heart failure and due to cardiopulmonary interaction and conduction disturbances. We investigated how the coherent averaging procedure may affect or refine the conductance-based indexes; in addition, we proposed and tested a new set of indexes which quantify the non-periodic components of the volume signals. Using the new set of indexes, we studied the acute effects of CRT and right ventricular pacing in patients with heart failure and in patients with preserved ventricular function. In the overall population we observed a correlation between the hemodynamic changes induced by pacing and the indexes of dyssynchrony, which may have practical implications for hemodynamically guided device implantation. The optimal ventricular pacing site for patients with conventional indications for pacing remains controversial, and the majority of these patients do not meet current clinical indications for CRT pacing. Thus, we carried out an analysis to compare the impact of several ventricular pacing sites on global and regional ventricular function and dyssynchrony (Chapter 6.c). We observed that right ventricular pacing worsens cardiac function in patients with and without ventricular dysfunction unless the pacing site is optimized. CRT preserves left ventricular function in patients with normal ejection fraction and improves function in patients with poor ejection fraction, despite no clinical indication for CRT. Moreover, the analysis of the results obtained using the new indexes of regional dyssynchrony suggests that the pacing site may influence overall global ventricular function depending on its relative effects on regional function and synchrony. Another clinical problem investigated in this work is the optimal right ventricular lead location for CRT (Chapter 6.d). Similarly to the previous analysis, using novel parameters describing local synchrony and efficiency, we tested the hypothesis, and demonstrated, that biventricular pacing with alternative right ventricular pacing sites produces acute improvement of ventricular systolic function and improves mechanical synchrony when compared to standard right ventricular pacing. Although no specific right ventricular location was shown to be superior during CRT, the right ventricular pacing site that produced the optimal acute hemodynamic response varied between patients. Acute hemodynamic effects of cardiac pacing are conventionally evaluated after stabilization periods, whose applied duration varies considerably across cardiac pacing studies. With an ad hoc protocol (Chapter 6.e) and indexes of mechanical dyssynchrony derived from the conductance catheter, we demonstrated that the use of stabilization periods during the evaluation of cardiac pacing may mask early changes in systolic and diastolic intra-ventricular dyssynchrony. In fact, at the onset of ventricular pacing, the main changes in dyssynchrony and ventricular performance occur within a 10 s time span, initiated by the changes in ventricular mechanical dyssynchrony induced by aberrant conduction and followed by a partial or even complete recovery. It had already been demonstrated in normal animals that ventricular mechanical dyssynchrony may act as a physiologic modulator of cardiac performance, together with heart rate, contractile state, preload and afterload.
The present observation, which shows the compensatory mechanism of mechanical dyssynchrony, suggests that ventricular dyssynchrony may be regarded as an intrinsic cardiac property, with baseline dyssynchrony at an increased level in heart failure patients. To make available an independent system for cardiac output estimation, and in order to confirm the results obtained with the conductance volume method, we developed and validated a novel technique that applies the Modelflow method (a method that derives an aortic flow waveform from arterial pressure by simulation of a non-linear three-element aortic input impedance model; Wesseling et al., 1993) to the left ventricular pressure signal, instead of the arterial pressure used in the classical approach (Chapter 7). The results confirmed that, in patients without valve abnormalities undergoing conductance catheter evaluations, continuous monitoring of cardiac output using the intra-ventricular pressure signal is reliable. Thus, cardiac output can be monitored quantitatively and continuously with a simple and low-cost method. During this work, additional studies were carried out to investigate several areas of uncertainty in CRT. The results of these studies are briefly presented in the Appendix: the long-term survival of patients treated with CRT in clinical practice, the effects of CRT in patients with mild symptoms of heart failure and in very old patients, limited thoracotomy as a second-choice alternative to transvenous implantation for CRT delivery, the evolution and prognostic significance of the diastolic filling pattern in CRT, the selection of candidates for CRT with echocardiographic criteria, and the prediction of response to the therapy.
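As a hedged sketch of the coherent averaging procedure mentioned above (a minimal illustration with assumed names, not the thesis implementation), the snippet below aligns a conductance-volume signal on a per-beat trigger, averages the periodic component, and derives a simple index of the non-periodic, beat-to-beat content.

# Minimal sketch (assumed names, not the thesis algorithm): coherent averaging of a
# segmental conductance-volume signal over cardiac cycles.
import numpy as np

def coherent_average(signal, trigger_indices, beat_len):
    # Stack equal-length beats aligned at the trigger samples (e.g. ECG R-waves)
    beats = np.array([signal[i:i + beat_len]
                      for i in trigger_indices if i + beat_len <= len(signal)])
    template = beats.mean(axis=0)            # periodic (coherent) component
    residuals = beats - template             # non-periodic, beat-to-beat component
    return template, residuals

def nonperiodic_index(template, residuals):
    # Assumed illustrative index: residual power relative to the averaged-beat power
    return float(np.mean(residuals ** 2) / np.mean(template ** 2))

Spectral and cross-spectral dyssynchrony indexes can then be computed on the averaged template, while the residuals capture the beat-to-beat variations discussed above.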
Abstract:
Large-scale wireless ad-hoc networks of computers, sensors, PDAs, etc. (i.e. nodes) are revolutionizing connectivity and leading to a paradigm shift from centralized systems to highly distributed and dynamic environments. One example of ad-hoc networks is sensor networks, which are usually composed of small units able to sense and transmit elementary data to a sink, where they are subsequently processed by an external machine. Recent improvements in the memory and computational power of sensors, together with the reduction of energy consumption, are rapidly changing the potential of such systems, shifting attention towards data-centric sensor networks. A plethora of routing and data management algorithms have been proposed for network path discovery, ranging from broadcasting/flooding-based approaches to those using global positioning systems (GPS). We studied WGrid, a novel decentralized infrastructure that organizes wireless devices in an ad-hoc manner, where each node has one or more virtual coordinates through which both message routing and data management occur without relying on either flooding/broadcasting operations or GPS. The resulting ad-hoc network does not suffer from the dead-end problem, which occurs in geographic-based routing when a node is unable to locate a neighbor closer to the destination than itself. WGrid allows multidimensional data management, since the nodes' virtual coordinates can act as a distributed database without requiring any special implementation or reorganization. Any kind of data, both one-dimensional and multidimensional, can be distributed, stored and managed. We show how a location service can easily be implemented so that any search is reduced to a simple query, as for any other data type. WGrid has then been extended by adopting a replication methodology; we call the resulting algorithm WRGrid. Just like WGrid, WRGrid acts as a distributed database without requiring any special implementation or reorganization, and any kind of data can be distributed, stored and managed. We evaluated the benefits of replication on data management and found, from experimental results, that it can halve the average number of hops in the network. The direct consequences are a significant improvement in energy consumption and a better workload balance among sensors (the number of messages routed by each node). Finally, thanks to the replicas, whose number can be chosen arbitrarily, the resulting sensor network can cope with sensor disconnections/connections due to sensor failures without data loss. Another extension of WGrid is W*Grid, which strongly improves network recovery from link and/or device failures caused by device crashes, battery exhaustion, or temporary obstacles. W*Grid guarantees, by construction, at least two disjoint paths between each pair of nodes. This implies that recovery in W*Grid occurs without broadcast transmissions, guaranteeing robustness while drastically reducing energy consumption. An extensive number of simulations shows the efficiency, robustness and traffic load of the resulting networks under several scenarios of device density and number of coordinates. The performance has been compared with existing algorithms in order to validate the results.
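As a hedged illustration of coordinate-based greedy forwarding and of the dead-end problem it can incur (a minimal sketch with assumed structures; WGrid's actual virtual-coordinate construction avoids dead ends by design and is not reproduced here), the snippet below forwards a message to the neighbor whose coordinate is closest to the destination's.

# Minimal sketch (assumed structures, not the WGrid algorithm): greedy forwarding on
# coordinates; returning None signals a dead end, which WGrid avoids by construction.

def next_hop(node, destination, neighbors, coord, distance):
    best, best_d = None, distance(coord[node], coord[destination])
    for n in neighbors[node]:
        d = distance(coord[n], coord[destination])
        if d < best_d:                 # only forward if the neighbor is strictly closer
            best, best_d = n, d
    return best

# Example distance on simple tuple coordinates
def euclidean(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

In WGrid the coordinates are virtual and assigned by the infrastructure itself, so routing always makes progress towards the destination without flooding or GPS.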
Abstract:
In this thesis we study three combinatorial optimization problems, belonging to the classes of Network Design and Vehicle Routing problems, that are strongly linked in the context of the design and management of transportation networks: the Non-Bifurcated Capacitated Network Design Problem (NBP), the Period Vehicle Routing Problem (PVRP) and the Pickup and Delivery Problem with Time Windows (PDPTW). These problems are NP-hard and contain as special cases some well-known difficult problems such as the Traveling Salesman Problem and the Steiner Tree Problem. Moreover, they model the core structure of many practical problems arising in logistics and telecommunications. The NBP is the problem of designing the optimum network to satisfy a given set of traffic demands. Given a set of nodes, a set of potential links and a set of point-to-point demands called commodities, the objective is to select the links to install and to dimension their capacities so that all the demands can be routed between their respective endpoints, and the sum of the link fixed costs and the commodity routing costs is minimized. The problem is called non-bifurcated because the solution network must allow each demand to follow a single path, i.e., the flow of each demand cannot be split. Although this is the case in many real applications, the NBP has received significantly less attention in the literature than other capacitated network design problems that allow bifurcation. We describe an exact algorithm for the NBP, based on solving with an integer programming solver a formulation of the problem strengthened by simple valid inequalities, together with four new heuristic algorithms. One of these heuristics is an adaptive memory metaheuristic, based on partial enumeration, that could be applied to a wider class of structured combinatorial optimization problems. In the PVRP, a fleet of vehicles of identical capacity must be used to service a set of customers over a planning period of several days. Each customer specifies a service frequency, a set of allowable day-combinations and a quantity of product that the customer must receive every time he is visited. For example, a customer may require to be visited twice during a 5-day period, imposing that these visits take place on Monday-Thursday, Monday-Friday or Tuesday-Friday. The problem consists in simultaneously assigning a day-combination to each customer and designing the vehicle routes for each day so that each customer is visited the required number of times, the number of routes on each day does not exceed the number of vehicles available, and the total cost of the routes over the period is minimized. We also consider a tactical variant of this problem, called the Tactical Planning Vehicle Routing Problem, where customers require to be visited on a specific day of the period but a penalty cost, called the service cost, can be paid to postpone the visit to a later day than that required. To our knowledge, all the algorithms proposed in the literature for the PVRP are heuristics. In this thesis we present for the first time an exact algorithm for the PVRP, based on different relaxations of a set-partitioning-like formulation. The effectiveness of the proposed algorithm is tested on a set of instances from the literature and on a new set of instances. Finally, the PDPTW consists of servicing a set of transportation requests using a fleet of identical vehicles of limited capacity located at a central depot.
Each request specifies a pickup location and a delivery location, and requires that a given quantity of load be transported from the pickup location to the delivery location. Moreover, each location can be visited only within an associated time window. Each vehicle can perform at most one route, and the problem is to satisfy all the requests using the available vehicles so that each request is serviced by a single vehicle, the load on each vehicle does not exceed its capacity, and all locations are visited according to their time windows. We formulate the PDPTW as a set-partitioning-like problem with additional cuts, and we propose an exact algorithm based on different relaxations of the mathematical formulation and a branch-and-cut-and-price algorithm. The new algorithm is tested on two classes of problems from the literature and compared with a recent branch-and-cut-and-price algorithm from the literature.
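As a hedged illustration of the constraints that define a feasible PDPTW route (a minimal sketch with assumed data structures, not the thesis formulation or pricing routine), the snippet below checks capacity, pickup-before-delivery precedence and time windows along a single candidate route.

# Minimal sketch (assumed data layout, not the thesis algorithm): feasibility check
# of one vehicle route for the PDPTW.

def route_feasible(route, requests, travel_time, windows, capacity, depot):
    # route: ordered locations; requests: {request_id: (pickup, delivery, quantity)}
    picked = set()
    load, clock, prev = 0.0, 0.0, depot
    for loc in route:
        clock = max(clock + travel_time[(prev, loc)], windows[loc][0])  # wait if early
        if clock > windows[loc][1]:                 # time window violated
            return False
        for rid, (pickup, delivery, qty) in requests.items():
            if loc == pickup:
                picked.add(rid)
                load += qty
            elif loc == delivery:
                if rid not in picked:               # delivery before its pickup
                    return False
                load -= qty
        if load > capacity:                         # vehicle capacity exceeded
            return False
        prev = loc
    return True

In set-partitioning-like formulations of this kind, columns typically correspond to feasible routes, so checks of this type are embedded in the route-generation (pricing) phase.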
Abstract:
The aim of this work was to show that refined analyses of background, low-magnitude seismicity make it possible to delineate the main active faults and to accurately estimate the directions of the regional tectonic stress that characterize the Southern Apennines (Italy), a structurally complex area with high seismic potential. Thanks to the presence in the area of an integrated, dense and wide dynamic network, it was possible to analyze a high-quality microearthquake data set consisting of 1312 events that occurred from August 2005 to April 2011, by integrating the data recorded at 42 seismic stations of various networks. The refined seismicity locations and focal mechanisms clearly delineate a system of NW-SE-striking normal faults along the Apenninic chain and an approximately E-W-oriented strike-slip fault transversely cutting the belt. The seismicity along the chain does not occur on a single fault but within a volume, delimited by the faults activated during the 1980 M 6.9 Irpinia earthquake, on sub-parallel, predominantly normal faults. The results show that the recent low-magnitude earthquakes belong to the background seismicity and are likely generated along the major fault segments activated during the most recent earthquakes, suggesting that these segments are still active today, thirty years after the mainshock occurrences. In this sense, the study gives a new perspective on the use of high-quality records of low-magnitude background seismicity for the identification and characterization of active fault systems. The stress tensor inversion provides two equivalent models that explain microearthquake generation along both the NW-SE-striking normal faults and the E-W-oriented fault with dominant dextral strike-slip motion, but that have different geological interpretations. We suggest that the NW-SE-oriented Africa-Eurasia convergence acts in the background of all these structures, playing a primary and unifying role in the seismotectonics of the whole region.
Fault detection, diagnosis and active fault tolerant control for a satellite attitude control system
Abstract:
Modern control systems are becoming more and more complex, and control algorithms more and more sophisticated. Consequently, Fault Detection and Diagnosis (FDD) and Fault Tolerant Control (FTC) have gained central importance over the past decades, due to increasing requirements of availability, cost efficiency, reliability and operating safety. This thesis deals with the FDD and FTC problems in a spacecraft Attitude Determination and Control System (ADCS). First, the detailed nonlinear models of the spacecraft attitude dynamics and kinematics are described, along with the dynamic models of the actuators and the main external disturbance sources. The considered ADCS is composed of an array of four redundant reaction wheels, and a set of sensors provides satellite angular velocity, attitude and flywheel spin rate information. General overviews of the Fault Detection and Isolation (FDI), Fault Estimation (FE) and Fault Tolerant Control (FTC) problems are then presented, and the design and implementation of a novel diagnosis system is described. The system consists of an FDI module composed of properly organized model-based residual filters, exploiting the available input and output information for the detection and localization of a fault that has occurred. A proper fault mapping procedure and the nonlinear geometric approach are exploited to design residual filters explicitly decoupled from the external aerodynamic disturbance and sensitive to specific sets of faults. The subsequent use of suitable adaptive FE algorithms, based on radial basis function neural networks, allows accurate fault estimates to be obtained. Finally, this estimation is actively exploited in an FTC scheme to achieve suitable fault accommodation and to guarantee the desired control performance. A standard sliding mode controller is implemented for attitude stabilization and control. Several simulation results are given to highlight the performance of the overall designed system for different types of faults affecting the ADCS actuators and sensors.
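As a hedged sketch of the model-based residual idea underlying the FDI module (a minimal illustration with assumed model functions and names; the thesis uses the nonlinear geometric approach to decouple disturbances, which is not reproduced here), the snippet below compares the measured output with a one-step model prediction and flags a fault when the residual exceeds a threshold.

# Minimal sketch (assumed model interface, not the thesis filters): model-based
# residual generation with threshold-based fault detection.
import numpy as np

def residual_step(x_est, u, y_meas, f_model, h_model, threshold):
    x_pred = f_model(x_est, u)       # predicted state from the nominal dynamic model
    y_pred = h_model(x_pred)         # predicted output (e.g. angular velocity, attitude)
    r = y_meas - y_pred              # residual: measurement minus model prediction
    fault_flag = bool(np.linalg.norm(r) > threshold)
    return x_pred, r, fault_flag

A bank of such residual generators, each made sensitive to a specific set of faults and decoupled from the aerodynamic disturbance, is what allows the faulty actuator or sensor to be isolated.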