979 results for fault model
Abstract:
This paper deals with fault detection and isolation problems for nonlinear dynamic systems. Both problems are stated as constraint satisfaction problems (CSP) and solved using consistency techniques. The main contribution is an isolation method based on consistency techniques and on refining the uncertainty space of the interval parameters. The major advantage of this method is that isolation is fast even when uncertainty in parameters, measurements, and model errors is taken into account. Interval calculations bring independence from the monotonicity assumption made by several observer-based approaches to fault isolation. An application to a well-known alcoholic fermentation process model is presented.
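To make the refinement idea concrete, here is a minimal Python sketch of fault isolation by refining interval parameter boxes: candidate boxes are bisected, and boxes inconsistent with the interval measurements are discarded, so the surviving boxes localize the faulty parameter. The toy static model y = a·u + b, the bounds, and the data are illustrative assumptions, not the paper's fermentation model.

```python
# Sketch: isolate a fault by refining the uncertainty space of interval
# parameters. Toy model y = a*u + b; all numbers are illustrative.

def consistent(box, data):
    """True if the interval image of the box overlaps every measured interval."""
    (a_lo, a_hi), (b_lo, b_hi) = box
    for u, (y_lo, y_hi) in data:
        out_lo = a_lo * u + b_lo        # u >= 0, so the image of a*u + b
        out_hi = a_hi * u + b_hi        # is monotone in both parameters
        if out_hi < y_lo or out_lo > y_hi:
            return False                # no overlap: box cannot explain sample
    return True

def refine(box, data, depth=12):
    """Bisect the widest parameter interval, keeping only consistent boxes."""
    if not consistent(box, data):
        return []
    if depth == 0:
        return [box]
    i = max(range(len(box)), key=lambda k: box[k][1] - box[k][0])
    lo, hi = box[i]
    left, right = list(box), list(box)
    left[i], right[i] = (lo, 0.5 * (lo + hi)), (0.5 * (lo + hi), hi)
    return refine(tuple(left), data, depth - 1) + refine(tuple(right), data, depth - 1)

# Nominal parameters a=2, b=1; the data below come from a faulty gain a=2.6.
data = [(u, (2.6 * u + 1 - 0.05, 2.6 * u + 1 + 0.05)) for u in (0.5, 1.0, 2.0)]
boxes = refine(((1.5, 3.0), (0.5, 1.5)), data)
a_lo = min(b[0][0] for b in boxes)
a_hi = max(b[0][1] for b in boxes)
print(f"refined gain interval: [{a_lo:.2f}, {a_hi:.2f}]")  # excludes nominal a=2
```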
Abstract:
Uncertainties that are not considered in the analytical model of the plant dramatically decrease the performance of fault detection in practice. To cope with this prevalent problem, in this paper we develop a methodology using Modal Interval Analysis that takes those uncertainties in the plant model into account. A fault detection method based on this model is developed that is robust to uncertainty and produces no false alarms. As soon as a fault is detected, an ANFIS model is trained online to capture the major behavior of the fault that has occurred, which can then be used for fault accommodation. The simulation results clearly demonstrate the capability of the proposed method to accomplish both tasks appropriately.
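A minimal sketch of the envelope-based detection that yields no false alarms: the interval model brackets every healthy behavior, so an alarm is raised only when the measurement leaves the envelope. The first-order plant, bounds, and injected fault are assumed for illustration, and the ANFIS accommodation step is omitted.

```python
# Sketch: interval envelope fault detection. The plant dx/dt = -a*x + b*u with
# a in [0.9, 1.1], b in [0.45, 0.55] is illustrative, not the paper's plant.

def step_envelope(x_lo, x_hi, u, dt=0.1, a=(0.9, 1.1), b=(0.45, 0.55)):
    """One Euler step of the interval model (x >= 0, u >= 0 assumed)."""
    lo = x_lo + dt * (-a[1] * x_lo + b[0] * u)   # fastest decay, smallest gain
    hi = x_hi + dt * (-a[0] * x_hi + b[1] * u)   # slowest decay, largest gain
    return lo, hi

x_lo = x_hi = 0.0
x_true = 0.0
for k in range(120):
    u = 1.0
    x_lo, x_hi = step_envelope(x_lo, x_hi, u)
    a_true = 1.0 if k < 60 else 1.6              # actuator-like fault at k = 60
    x_true += 0.1 * (-a_true * x_true + 0.5 * u)
    if not (x_lo - 1e-9 <= x_true <= x_hi + 1e-9):
        print(f"fault detected at step {k}")     # fires only after the fault
        break
```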
Abstract:
The practical performance of analytical redundancy for fault detection and diagnosis is often decreased by uncertainties prevailing not only in the system model but also in the measurements. In this paper, the problem of fault detection is stated as a constraint satisfaction problem over continuous domains with a large number of variables and constraints. This problem can be solved using modal interval analysis and consistency techniques. Consistency techniques are then shown to be particularly efficient for checking the consistency of the analytical redundancy relations (ARRs) while dealing with uncertain measurements and parameters. The work presented in this paper shows that consistency techniques can be used to increase the performance of a robust fault detection tool based on interval arithmetic. The proposed method is illustrated using a nonlinear dynamic model of a hydraulic system.
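A minimal sketch of checking a single ARR with interval arithmetic follows; the relation is consistent when the interval residual contains zero. The toy tank relation q = k·√h and all bounds are assumed for illustration, and the consistency (propagation) techniques the paper adds on top are omitted.

```python
# Sketch: interval consistency test of one analytical redundancy relation.
# Toy hydraulic relation q = k*sqrt(h); bounds are illustrative.
import math

def arr_residual(q_meas, h_meas, k=(0.9, 1.1)):
    """Interval residual r = q - k*sqrt(h) for interval measurements."""
    q_lo, q_hi = q_meas
    h_lo, h_hi = h_meas
    out_lo = k[0] * math.sqrt(h_lo)   # k*sqrt(h) is increasing in k and h
    out_hi = k[1] * math.sqrt(h_hi)
    return q_lo - out_hi, q_hi - out_lo

def consistent(residual):
    r_lo, r_hi = residual
    return r_lo <= 0.0 <= r_hi        # zero inside: no fault detectable

print(consistent(arr_residual((0.95, 1.05), (0.98, 1.02))))  # True  (healthy)
print(consistent(arr_residual((1.40, 1.50), (0.98, 1.02))))  # False (fault)
```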
Abstract:
The speed of fault isolation is crucial for the design and reconfiguration of fault-tolerant control (FTC). In this paper, the fault isolation problem is stated as a constraint satisfaction problem (CSP) and solved using constraint propagation techniques. The proposed method is based on constraint satisfaction techniques and on refining the uncertainty space of the interval parameters. Compared with other approaches based on adaptive observers, the major advantage of the presented method is that isolation is fast even when uncertainty in parameters, measurements, and model errors is taken into account, and it does not require the monotonicity assumption. A case study of a nonlinear dynamic system illustrates the proposed approach.
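A minimal sketch of the constraint-propagation step on a single constraint y = a·x: each variable's interval domain is contracted against the others, and the passes are iterated to a fixed point. A one-constraint toy stands in for the paper's full CSP, and all intervals are kept positive so interval division stays simple.

```python
# Sketch: interval contraction (constraint propagation) on y = a * x.
# All intervals are assumed positive; domains are illustrative.

def contract(a, x, y):
    """One pass: tighten y, then a, then x against the constraint y = a*x."""
    y = (max(y[0], a[0] * x[0]), min(y[1], a[1] * x[1]))   # y := y ∩ a*x
    a = (max(a[0], y[0] / x[1]), min(a[1], y[1] / x[0]))   # a := a ∩ y/x
    x = (max(x[0], y[0] / a[1]), min(x[1], y[1] / a[0]))   # x := x ∩ y/a
    return a, x, y

a, x, y = (0.5, 3.0), (1.0, 2.0), (4.0, 4.4)   # measured y rules out small a
for _ in range(20):                             # iterate to a fixed point
    a2, x2, y2 = contract(a, x, y)
    if (a2, x2, y2) == (a, x, y):
        break
    a, x, y = a2, x2, y2
print(f"contracted a: [{a[0]:.3f}, {a[1]:.3f}]")  # nominal a=1 now excluded
```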
Abstract:
The detailed geological mapping and structural study of a complete transect across the northwestern Himalaya allow us to describe the tectonic evolution of the north Indian continental margin during the Tethys ocean opening and the Himalayan Orogeny. The Late Paleozoic Tethys rifting is associated with several tectonomagmatic events. In Upper Lahul and SE Zanskar, this extensional phase is recorded by Lower Carboniferous synsedimentary transtensional faults, a Lower Permian stratigraphic unconformity, a Lower Permian granitic intrusion and middle Permian basaltic extrusions (Panjal Traps). In eastern Ladakh, a Permian listric normal fault is also related to this phase. The scarcity of synsedimentary faults and the gradual increase of the Permian syn-rift sediment thickness towards the NE suggest a flexural-type margin. The collision of India and Asia is characterized by a succession of contrasting orogenic phases. South of the Suture Zone, the initiation of the SW-vergent Nyimaling-Tsarap Nappe corresponds to an early phase of continental underthrusting. To the S, in Lahul, an opposite underthrusting within the Indian plate is recorded by the NE-vergent Tandi Syncline. This structure is associated with the newly defined Shikar Beh Nappe, now partly eroded, which is responsible for the high-grade (amphibolite facies) regional metamorphism of South Lahul. The main thrusting of the Nyimaling-Tsarap Nappe followed the formation of the Shikar Beh Nappe. The Nyimaling-Tsarap Nappe developed by ductile shear of the upper part of the subducted Indian continental margin and is responsible for the progressive regional metamorphism of SE Zanskar, reaching amphibolite facies below the frontal part of the nappe, near Sarchu. In Upper Lahul, the frontal parts of the Nyimaling-Tsarap and Shikar Beh nappes are separated by a zone of low-grade metamorphic rocks (pumpellyite-actinolite facies to lower greenschist facies). At high structural levels, the Nyimaling-Tsarap Nappe is characterized by imbricate structures, which grade into a large ductile shear zone with depth. The related crustal shortening is about 87 km. The root zone and the frontal part of this nappe have subsequently been affected by two zones of dextral transpression and underthrusting: the Nyimaling Shear Zone and the Sarchu Shear Zone. These shear zones are interpreted as consequences of the counterclockwise rotation of the continental underthrusting direction of India relative to Asia, which occurred some 45 and 36 Ma ago, according to plate tectonic models. Later, a phase of NE-vergent "backfolding" developed on these two zones of dextral transpression, creating isoclinal folds in SE Zanskar and more open folds in the Nyimaling Dome and in the Indus Molasse sediments. During a late stage of the Himalayan Orogeny, the frontal part of the Nyimaling-Tsarap Nappe underwent an extension of about 15 km. This phase is represented by two types of structures responsible for the tectonic unroofing of the amphibolite facies rocks of the Sarchu area: the Sarchu high-angle normal fault, cutting a first set of low-angle normal faults created by reactivation of older thrust planes related to the Nyimaling-Tsarap Nappe.
Abstract:
The sandstone-hosted Beverley uranium deposit is located in terrestrial sediments in the Lake Frome basin in the North Flinders Ranges, South Australia. The deposit is 13 km from the U-rich Mesoproterozoic basement of the Mount Painter inlier, which is being uplifted 100 to 200 m above the basin by neotectonic activity that probably initiated in the early Pliocene. The mineralization was deposited mainly in organic matter-poor Miocene lacustrine sands and partly in the underlying reductive strata comprising organic matter-rich clays and silts. The bulk of the mineralization consists of coffinite and/or uraninite nodules growing around Co-rich pyrite with an S isotope composition (δ34S = 1.0 ± 0.3‰) suggestive of an early diagenetic lacustrine origin. In contrast, authigenic sulfides in the bulk of the sediments have a negative S isotope signature (δ34S ranges from -26.2 to -35.5‰), indicative of an origin via bacterially mediated sulfate reduction. Minor amounts of Zn-bearing native copper and native lead also support the presence of specific reducing microenvironments in the ore zone. Small amounts of carnotite are associated with the coffinite ore and also occur beneath a paleosol horizon overlying the uranium deposit. Provenance studies suggest that the host Miocene sediments were derived from the reworking of Early Cretaceous glacial or glaciolacustrine sediments ultimately derived from Paleozoic terranes in eastern Australia. In contrast, the overlying Pliocene strata were in part derived from the Mesoproterozoic basement inlier. Mass-balance and geochemical data confirm that granites of the Mount Painter domain were the ultimate source of U and REE at Beverley. U-Pb dating of coffinite and carnotite suggests that the U mineralization is Pliocene (6.7-3.4 Ma). The suitability of the Beverley deposit for efficient mining via in situ leaching, and hence its economic value, is determined by the nature of the hosting sand unit, which provides the permeability and low reactivity required for high fluid flow and low chemical consumption. These favorable sedimentologic and geometric features result from a complex conjunction of factors, including deposition in a lacustrine shore environment, reworking of angular sands of glacial origin, deep Pliocene weathering, and proximity to an active fault exposing extremely U-rich rocks.
Abstract:
The Monte Perdido thrust fault (southern Pyrenees) consists of a 6-m-thick interval of intensely deformed clay-bearing rocks. The fault zone is affected by pervasive pressure solution seams and numerous shear surfaces. Calcite extensional-shear veins are present along the shear surfaces. The angular relationships between the two structures indicate that the shear surfaces developed at a high angle (70°) to the local maximum principal stress axis σ1. Two main stages of deformation are present. The first stage corresponds to the development of calcite shear veins by a combination of shear surface reactivation and extensional mode I rupture. The second stage of deformation corresponds to chlorite precipitation along the previously reactivated shear surfaces. The pore fluid factor λ computed for the two deformation episodes indicates high fluid pressures during Monte Perdido thrust activity. During the first stage of deformation, the reactivation of the shear surfaces was facilitated by a suprahydrostatic fluid pressure, with a pore fluid factor λv equal to 0.89. For the second stage, the fluid pressure remained high (with a λ value ranging between 0.77 and 0.84) even with the presence of weak chlorite along the shear surfaces. Furthermore, evidence of hydrostatic fluid pressure during calcite cement precipitation supports the interpretation that incremental shear surface reactivations are correlated with cyclic fluid pressure fluctuations consistent with a fault-valve model.
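For readers unfamiliar with the quantity, a short worked example of the pore fluid factor using its standard definition λv = Pf / (ρ·g·z); the depth and density below are assumed round numbers, not values from the Monte Perdido study.

```python
# Worked example: pore fluid factor λv = P_fluid / lithostatic stress.
# Depth and overburden density are assumed illustrative values.

g = 9.81            # m/s^2
rho_rock = 2500.0   # kg/m^3, assumed average overburden density
z = 5000.0          # m, assumed burial depth during thrust activity

sigma_v = rho_rock * g * z          # lithostatic (vertical) stress, Pa
p_hydro = 1000.0 * g * z            # hydrostatic fluid pressure, Pa
lambda_hydro = p_hydro / sigma_v    # hydrostatic baseline, ≈ 0.40
p_fluid = 0.89 * sigma_v            # fluid pressure implied by λv = 0.89
print(f"hydrostatic λv ≈ {lambda_hydro:.2f}")
print(f"λv = 0.89 implies fluid pressure ≈ {p_fluid/1e6:.0f} MPa "
      f"vs hydrostatic ≈ {p_hydro/1e6:.0f} MPa")
```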
Abstract:
One of the techniques used to detect faults in dynamic systems is analytical redundancy. An important difficulty in applying this technique to real systems is dealing with the uncertainties associated with the system itself and with the measurements. In this paper, this uncertainty is taken into account by the use of intervals for the parameters of the model and for the measurements. The method proposed in this paper checks the consistency between the system's behavior, obtained from the measurements, and the model's behavior; if they are inconsistent, then there is a fault. The problem of detecting faults is stated as a quantified real constraint satisfaction problem, which can be solved using modal interval analysis (MIA). MIA is used because it provides powerful tools to extend calculations over real functions to intervals. To improve the detection of faults, the simultaneous use of several sliding time windows is proposed. The result of implementing this method is semiqualitative tracking (SQualTrack), a fault-detection tool that is robust in the sense that it does not generate false alarms; that is, if there are false alarms, they indicate either that the interval model does not represent the system adequately or that the interval measurements do not represent the true values of the variables adequately. SQualTrack is currently being used to detect faults in real processes. Some of these applications using real data have been developed within the European project Advanced Decision Support System for Chemical/Petrochemical Manufacturing Processes and are also described in this paper.
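A minimal sketch of the multiple-sliding-window idea: a single inconsistent sample may be noise, so a fault is confirmed only when the fraction of inconsistent samples within a window exceeds a threshold, with several window lengths checked simultaneously. Lengths and threshold are illustrative, not SQualTrack's settings.

```python
# Sketch: confirm a fault over several sliding time windows at once.
# Window lengths and the threshold are illustrative assumptions.
from collections import deque

class SlidingWindowDetector:
    def __init__(self, lengths=(5, 15, 40), threshold=0.8):
        self.windows = {n: deque(maxlen=n) for n in lengths}
        self.threshold = threshold

    def update(self, inconsistent: bool) -> bool:
        """Feed one consistency-test result; return True if any window alarms."""
        for win in self.windows.values():
            win.append(inconsistent)
        return any(
            len(win) == win.maxlen and sum(win) / win.maxlen >= self.threshold
            for win in self.windows.values()
        )

det = SlidingWindowDetector()
samples = [False] * 30 + [True] * 40        # persistent inconsistency from k=30
for k, s in enumerate(samples):
    if det.update(s):
        print(f"fault confirmed at sample {k}")
        break
```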
Abstract:
This paper presents a new numerical program able to model syntectonic sedimentation. The new model combines a discrete element model of the tectonic deformation of a sedimentary cover with a process-based model of sedimentation in a single framework. The integration of these two methods allows us to simulate both sedimentation and deformation processes in a single, more effective model. The paper briefly describes the antecedents of the program, Simsafadim-Clastic and a discrete element model, in order to introduce the methodology used to merge both programs into the new code. To illustrate the operation and application of the program, the evolution of syntectonic geometries is analyzed in an extensional environment and in association with thrust fault propagation. Using the new code, much more complex and realistic depositional structures can be simulated, together with a more complex analysis of the evolution of deformation within the sedimentary cover, which is affected by the presence of the new syntectonic sediments.
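A minimal sketch of the coupling pattern the abstract describes: the tectonic model and the sedimentation model advance alternately on a shared surface, so each process sees the other's result every time step. The two placeholder models below (a rigid fault-block drop and diffusive infill) merely stand in for the discrete element and Simsafadim-Clastic formulations.

```python
# Sketch: alternate a tectonic step and a sedimentation step on one surface.
# Both process models are illustrative placeholders.
import numpy as np

nx, dx, dt = 100, 100.0, 1.0          # grid cells, cell size (m), step (kyr)
surface = np.zeros(nx)                # topography/bathymetry (m)
sediment = np.zeros(nx)               # cumulative syntectonic sediment (m)

def deform(surface):
    """Placeholder tectonic model: the hanging wall (left half) drops 0.2 m/step."""
    surface = surface.copy()
    surface[: nx // 2] -= 0.2
    return surface

def deposit(surface, kappa=50.0):
    """Placeholder process model: diffusive infill smooths the fault scarp."""
    grad = np.diff(surface) / dx                         # slope between cells
    dh = dt * kappa * np.diff(grad, prepend=0.0, append=0.0) / dx
    return np.maximum(dh, 0.0)                           # keep only deposition

for _ in range(500):                   # single framework: alternate the two
    surface = deform(surface)          # (1) tectonic step sees last deposits
    dh = deposit(surface)              # (2) sedimentation sees new structure
    surface += dh
    sediment += dh

print(f"max syntectonic thickness: {sediment.max():.1f} m near the fault")
```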
Abstract:
Technology scaling has proceeded into dimensions in which the reliability of manufactured devices is becoming endangered. The reliability decrease is a consequence of physical limitations, the relative increase of variations, and decreasing noise margins, among others. A promising solution for bringing the reliability of circuits back to the desired level is the use of design methods that introduce tolerance against possible faults in an integrated circuit. This thesis studies and presents fault tolerance methods for networks-on-chip (NoC), a design paradigm targeted at very large systems-on-chip. In a NoC, resources such as processors and memories are connected to a communication network, comparable to the Internet. Fault tolerance in such a system can be achieved at many abstraction levels. The thesis studies the origin of faults in modern technologies and explains their classification into transient, intermittent, and permanent faults. A survey of fault tolerance methods is presented to demonstrate the diversity of available methods. Networks-on-chip are approached by exploring their main design choices: the selection of a topology, routing protocol, and flow control method. Fault tolerance methods for NoCs are studied at different layers of the OSI reference model. The data link layer provides a reliable communication link over a physical channel. Error control coding is an efficient fault tolerance method at this abstraction level, especially against transient faults. Error control coding methods suitable for on-chip communication are studied and their implementations presented. Error control coding loses its effectiveness in the presence of intermittent and permanent faults, so other solutions against them are presented. The introduction of spare wires and split transmissions is shown to provide good tolerance against intermittent and permanent errors, and their combination with error control coding is illustrated. At the network layer, positioned above the data link layer, fault tolerance can be achieved through the design of fault-tolerant network topologies and routing algorithms. Both of these approaches are presented in the thesis, together with realizations in both categories. The thesis concludes that an optimal fault tolerance solution contains carefully co-designed elements from different abstraction levels.
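As an illustration of data-link-layer error control coding of the kind the thesis surveys, here is a Hamming(7,4) codec in Python: it corrects any single-bit (for example, transient) error in a 7-bit link word. Real NoC codecs are wider and pipelined; this shows only the principle.

```python
# Sketch: Hamming(7,4) error control coding for an on-chip link fragment.

def hamming74_encode(d):                     # d: list of 4 data bits
    p1 = d[0] ^ d[1] ^ d[3]                  # parity over positions 1,3,5,7
    p2 = d[0] ^ d[2] ^ d[3]                  # parity over positions 2,3,6,7
    p3 = d[1] ^ d[2] ^ d[3]                  # parity over positions 4,5,6,7
    return [p1, p2, d[0], p3, d[1], d[2], d[3]]   # codeword, positions 1..7

def hamming74_decode(c):                     # c: list of 7 code bits
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    syndrome = s1 + 2 * s2 + 4 * s3          # = position of the flipped bit
    if syndrome:
        c = c.copy()
        c[syndrome - 1] ^= 1                 # correct the single-bit error
    return [c[2], c[4], c[5], c[6]]          # extract the data bits

data = [1, 0, 1, 1]
code = hamming74_encode(data)
code[4] ^= 1                                  # inject a transient bit flip
assert hamming74_decode(code) == data         # corrected at the receiver
print("single-bit error corrected")
```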
Abstract:
Software is a key component in many of the devices and products that we use every day. Most customers demand not only that their devices function as expected but also that the software be of high quality: reliable, fault tolerant, efficient, etc. In short, it is not enough that a calculator gives the correct result of a calculation; we want the result instantly, in the right form, with minimal use of battery, etc. One of the key aspects for succeeding in today's industry is delivering high quality. In most software development projects, high-quality software is achieved by rigorous testing and good quality assurance practices. However, today, customers are asking for these high-quality software products at an ever-increasing pace. This leaves the companies with less time for development. Software testing is an expensive activity because it requires much manual work. Testing, debugging, and verification are estimated to consume 50 to 75 per cent of the total development cost of complex software projects. Further, the most expensive software defects are those which have to be fixed after the product is released. One of the main challenges in software development is reducing the associated cost and time of software testing without sacrificing the quality of the developed software. It is often not enough to only demonstrate that a piece of software is functioning correctly. Usually, many other aspects of the software, such as performance, security, scalability, usability, etc., need also to be verified. Testing these aspects of the software is traditionally referred to as non-functional testing. One of the major challenges with non-functional testing is that it is usually carried out at the end of the software development process, when most of the functionality is implemented. This is due to the fact that non-functional aspects, such as performance or security, apply to the software as a whole. In this thesis, we study the use of model-based testing. We present approaches to automatically generate tests from behavioral models for solving some of these challenges. We show that model-based testing is applicable not only to functional testing but also to non-functional testing. In its simplest form, performance testing is performed by executing multiple test sequences at once while observing the software in terms of responsiveness and stability rather than the output. The main contribution of the thesis is a coherent model-based testing approach for testing functional and performance-related issues in software systems. We show how we go from system models, expressed in the Unified Modeling Language, to test cases and back to models again. The system requirements are traced throughout the entire testing process. Requirements traceability facilitates finding faults in the design and implementation of the software. In the research field of model-based testing, many newly proposed approaches suffer from poor tool support or the lack of it. Therefore, the second contribution of this thesis is proper tool support for the proposed approach, integrated with leading industry tools. We offer independent tools, tools that are integrated with other industry-leading tools, and complete tool chains when necessary. Many model-based testing approaches proposed by the research community suffer from poor empirical validation in an industrial context. In order to demonstrate the applicability of our proposed approach, we apply our research to several systems, including industrial ones.
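A minimal sketch of model-based test generation: the behavioral model is a labeled transition system, and one test per transition is derived by a shortest-path search (transition coverage). The tiny vending-machine model is an illustrative stand-in for the UML models and requirement tracing used in the thesis.

```python
# Sketch: generate test sequences from a behavioral model by covering
# every transition. The model is an illustrative toy.
from collections import deque

transitions = {                     # (state, input) -> next state
    ("idle",    "coin"):  "paid",
    ("paid",    "coin"):  "paid",
    ("paid",    "press"): "serving",
    ("serving", "done"):  "idle",
}

def tests_for_transition_coverage(initial="idle"):
    """Return one input sequence (test case) ending in each model transition."""
    tests = []
    for (src, inp), _ in transitions.items():
        # BFS for the shortest input sequence that reaches `src`.
        queue, seen = deque([(initial, [])]), {initial}
        while queue:
            state, path = queue.popleft()
            if state == src:
                tests.append(path + [inp])      # append the target transition
                break
            for (s, i), nxt in transitions.items():
                if s == state and nxt not in seen:
                    seen.add(nxt)
                    queue.append((nxt, path + [i]))
    return tests

for t in tests_for_transition_coverage():
    print(" -> ".join(t))
```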
Abstract:
Recent storms in the Nordic countries caused long power outages over large territories. After these disasters, distribution network operators faced the problem of how to provide an adequate quality of supply in such situations. The decision was made to use cable lines rather than overhead lines, which brings new features to distribution networks. The main idea of this work is a comprehensive analysis of medium-voltage distribution networks with long cable lines. The high specific capacitance of cables and the length of the lines give rise to such problems as high earth fault currents, excessive reactive power flow from the distribution to the transmission network, and the possibility of a high voltage level at the receiving end of cable feeders. However, the core task was to estimate the functional ability of the earth fault protection and the possibility of using simplified formulas for calculating its operating settings in this network. To provide justified solutions to, or evaluations of, the problems mentioned above, the corresponding calculations were made, and a PSCAD model of the examined network was created to analyze the behavior of the relay protection principles. Evaluation of the voltage rise at the end of a cable line revealed no dangerous increase in the voltage level, while the excessive reactive power can lead to a financial penalty under the Finnish regulations. It was shown by calculation that compensation of earth fault currents should be implemented in these networks. PSCAD models of the electrical grid with an isolated neutral, with central compensation, and with hybrid compensation were created. For the network with hybrid compensation, a methodology for selecting the number and rated power of distributed arc suppression coils is offered. Based on the experimental results, it was determined that hybrid compensation with a connected high-ohmic resistor should be used to guarantee selective and reliable operation of the relay protection. Directional and admittance-based relay protection were tested under these conditions, and the advantages of the novel protection were revealed. However, for electrical grids with extensive cabling, the necessity of a comprehensive approach to relay protection was explained and illustrated. Thus, to organize reliable earth fault protection, it is recommended to use both intermittent and conventional relay protection, with operating settings calculated using the simplified formulas.
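A short worked example of the kind of simplified formulas the thesis refers to: the capacitive earth fault current of an unearthed cable network, I_ef ≈ 3·ω·C0·U_ph, and the arc suppression (Petersen) coil inductance that compensates it, L = 1/(3·ω²·C0). The 20 kV level, per-kilometre capacitance, and network length are assumed example values, not the thesis's network data.

```python
# Sketch: capacitive earth fault current of a cable network and the
# Petersen coil that compensates it. All network data are assumed values.
import math

U_LL = 20e3                        # line-to-line voltage, V
f = 50.0                           # Hz
C0_per_km = 0.3e-6                 # per-phase cable capacitance, F/km (assumed)
length = 100.0                     # total cable length in the network, km

w = 2 * math.pi * f
U_ph = U_LL / math.sqrt(3)
C0 = C0_per_km * length            # total per-phase capacitance to earth
I_ef = 3 * w * C0 * U_ph           # capacitive earth fault current, A
L_coil = 1 / (3 * w**2 * C0)       # coil inductance for full compensation, H

print(f"earth fault current ≈ {I_ef:.0f} A")   # far above overhead-line levels
print(f"resonant coil inductance ≈ {L_coil:.3f} H")
```

At resonance the coil current U_ph/(ω·L_coil) equals I_ef, which is exactly the condition ω·L = 1/(3·ω·C0) used above.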
Abstract:
The hazards associated with major accident hazard (MAH) industries are fire, explosion and toxic gas releases. Of these, toxic gas release is the worst, as it has the potential to cause extensive fatalities. Qualitative and quantitative hazard analyses are essential for the identification and quantification of the hazards associated with chemical industries. This research work presents the results of a consequence analysis carried out to assess the damage potential of the hazardous material storages in an industrial area of central Kerala, India. A survey carried out in the major accident hazard (MAH) units in the industrial belt revealed that the major hazardous chemicals stored by the various industrial units are ammonia, chlorine, benzene, naphtha, cyclohexane, cyclohexanone and LPG. The damage potential of the above chemicals is assessed using consequence modelling. Modelling of pool fires for naphtha, cyclohexane, cyclohexanone, benzene and ammonia is carried out using the TNO model. Vapor cloud explosion (VCE) modelling of LPG, cyclohexane and benzene is carried out using the TNT equivalent model. Boiling liquid expanding vapor explosion (BLEVE) modelling of LPG is also carried out. Dispersion modelling of toxic chemicals like chlorine, ammonia and benzene is carried out using the ALOHA air quality model. Threat zones for the different hazardous storages are estimated based on the consequence modelling. The distance covered by the threat zone was found to be maximum for chlorine release from a chlor-alkali industry located in the area. The results of the consequence modelling are useful for the estimation of individual risk and societal risk in the above industrial area. Vulnerability assessment is carried out using probit functions for toxic, thermal and pressure loads. Individual and societal risks are also estimated at different locations. Mapping of threat zones due to different incident outcome cases from different MAH industries is done with the help of ArcGIS. Fault Tree Analysis (FTA) is an established technique for hazard evaluation. This technique has the advantage of being both qualitative and quantitative, if the probabilities and frequencies of the basic events are known. However, it is often difficult to estimate precisely the failure probability of the components due to insufficient data or the vague characteristics of the basic event. It has been reported that the availability of failure probability data pertaining to local conditions is surprisingly limited in India. This thesis outlines the generation of failure probability values of the basic events that lead to the release of chlorine from the storage and filling facility of a major chlor-alkali industry located in the area, using expert elicitation and proven fuzzy logic. Sensitivity analysis has been done to evaluate the percentage contribution of each basic event that could lead to chlorine release. Two-dimensional fuzzy fault tree analysis (TDFFTA) has been proposed for balancing the hesitation factor involved in expert elicitation.
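A minimal sketch of fuzzy fault tree evaluation: basic-event probabilities elicited from experts are represented as triangular fuzzy numbers (low, mode, high) and combined through OR/AND gates component-wise, which is valid here because both gate functions are monotone increasing. The tiny tree and its numbers are illustrative, not the chlorine-release tree from the thesis.

```python
# Sketch: fault tree evaluation with triangular fuzzy probabilities
# (lo, mode, hi). The tree and numbers are illustrative.

def f_or(a, b):
    """OR gate: P = 1-(1-Pa)(1-Pb), applied component-wise (monotone)."""
    return tuple(1 - (1 - x) * (1 - y) for x, y in zip(a, b))

def f_and(a, b):
    """AND gate: P = Pa*Pb, applied component-wise (monotone)."""
    return tuple(x * y for x, y in zip(a, b))

# Hypothetical basic events elicited as triangular fuzzy probabilities.
valve_leak   = (0.01, 0.02, 0.05)
sensor_fail  = (0.005, 0.01, 0.02)
operator_err = (0.02, 0.05, 0.10)

# Top event: release = (valve_leak OR sensor_fail) AND operator_err.
top = f_and(f_or(valve_leak, sensor_fail), operator_err)
print("top event fuzzy probability (lo, mode, hi):",
      tuple(round(p, 5) for p in top))
```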