970 results for Fault model


Relevance:

30.00%

Publisher:

Abstract:

The stress release model, a stochastic version of the elastic rebound theory, is applied to the large events from four synthetic earthquake catalogs generated by models with various levels of disorder in the distribution of fault zone strength (Ben-Zion, 1996). They include models with uniform properties (U), a Parkfield-type asperity (A), fractal brittle properties (F), and multi-size-scale heterogeneities (M). The results show that the degree of regularity or predictability in the assumed fault properties, based on both the Akaike information criterion and simulations, follows the order U, F, A, and M, in good agreement with the order obtained by pattern recognition techniques applied to the full set of synthetic data. Data simulated from the best-fitting stress release models reproduce, both visually and in distributional terms, the main features of the original catalogs. The differences in character and in quality of prediction between the four cases are shown to depend on two main aspects: the parameter controlling the sensitivity to departures from the mean stress level, and the frequency-magnitude distribution, which differs substantially between the four cases. In particular, it is shown that the predictability of the data is strongly affected by the form of the frequency-magnitude distribution, being greatly reduced if a pure Gutenberg-Richter form is assumed to hold out to high magnitudes.
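To make the model concrete, the following minimal Python sketch simulates a stress release process by Ogata-style thinning: stress builds linearly at rate rho, the conditional intensity is lam(t) = exp(mu + nu*X(t)), and each event removes a magnitude-dependent amount of stress. All parameter values and the Gutenberg-Richter magnitude generator are illustrative assumptions, not values fitted in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative parameters (assumptions, not values fitted in the paper)
mu, nu = -2.0, 2.0      # conditional intensity lam(t) = exp(mu + nu*X(t))
rho = 0.05              # tectonic stress loading rate
c = 0.1                 # stress-drop scale
t_end, window = 1000.0, 5.0

def magnitude():
    """Gutenberg-Richter magnitudes above Mc = 4 with b-value 1."""
    return 4.0 + rng.exponential(1.0 / np.log(10))

def stress_drop(m):
    """Benioff-type stress release of an event of magnitude m."""
    return c * 10.0 ** (0.75 * (m - 4.0))

t, X, events = 0.0, 0.0, []
while t < t_end:
    # X(t) rises linearly between events, so on [t, t + window] the
    # intensity is bounded by its value at the end of the window (thinning).
    lam_max = np.exp(mu + nu * (X + rho * window))
    w = rng.exponential(1.0 / lam_max)
    if w > window:                # no event proposed in this window
        t += window
        X += rho * window
        continue
    t += w
    X += rho * w
    if rng.uniform() < np.exp(mu + nu * X) / lam_max:   # accept proposal
        m = magnitude()
        X -= stress_drop(m)       # coseismic stress release
        events.append((t, m))

print(f"{len(events)} events in {t_end:g} time units")
```

Fitted variants of such a model can then be ranked by the Akaike information criterion, AIC = 2k - 2 ln L, which is how the paper compares the four catalogs.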

Relevance:

30.00%

Publisher:

Abstract:

Home to hundreds of millions of souls and land of excessiveness, the Himalaya is also the locus of a unique seismicity whose scope and peculiarities still remain to this day somewhat mysterious. Having claimed the lives of kings, or turned ancient timeworn cities into heaps of rubble and ruins, earthquakes eerily inhabit Nepalese folk tales with the fatalistic message that nothing lasts forever. From a scientific point of view as much as from a human perspective, solving the mysteries of Himalayan seismicity thus represents a challenge of prime importance. Documenting geodetic strain across the Nepal Himalaya with various GPS and leveling data, we show that, unlike other subduction zones that exhibit a heterogeneous and patchy coupling pattern along strike, the last hundred kilometers of the Main Himalayan Thrust fault, or MHT, appear to be uniformly locked, devoid of any of the "creeping barriers" that traditionally ward off the propagation of large events. Since the approximately 20 mm/yr of reckoned convergence across the Himalaya matches previously established estimates of the secular deformation at the front of the arc, the slip accumulated at depth must somehow propagate elastically all the way to the surface at some point. And yet, neither large events from the past nor currently recorded microseismicity come close to compensating for the massive moment deficit that quietly builds up under the giant mountains. Along with this large unbalanced moment deficit, the uncommonly homogeneous coupling pattern on the MHT raises the question of whether or not the locked portion of the MHT can rupture all at once in a giant earthquake. Unequivocally answering this question appears contingent on the still elusive estimate of the magnitude of the largest possible earthquake in the Himalaya, and requires tight constraints on local fault properties. What makes the Himalaya enigmatic also makes it the potential source of an incredible wealth of information, and we exploit some of the oddities of Himalayan seismicity in an effort to improve the understanding of earthquake physics and decipher the properties of the MHT. Thanks to the Himalaya, the Indo-Gangetic plain is deluged each year under a tremendous amount of water during the annual summer monsoon, which collects and bears down on the Indian plate enough to pull it slightly away from the Eurasian plate, temporarily relieving a small portion of the stress mounting on the MHT. As the rainwater evaporates in the dry winter season, the plate rebounds and tension is increased back on the fault. Interestingly, the mild waggle of stress induced by the monsoon rains is about the same size as that from solid-Earth tides, which gently tug at the planet's solid layers; but whereas changes in earthquake frequency correspond with the annually occurring monsoon, there is no such correlation with Earth tides, which oscillate back and forth twice a day. We therefore investigate the general response of the creeping and seismogenic parts of the MHT to periodic stresses in order to link these observations to physical parameters. First, the response of the creeping part of the MHT is analyzed with a simple spring-and-slider system bearing rate-strengthening rheology, and we show that at the transition with the locked zone, where the friction becomes near velocity neutral, the response of the slip rate may be amplified at certain periods, whose values are analytically related to the physical parameters of the problem.
Such predictions therefore hold the potential of constraining fault properties on the MHT, but they still await observational counterparts: nothing indicates that the variations of seismicity rate on the locked part of the MHT are direct expressions of variations of the slip rate on its creeping part, and no variations of the slip rate have been singled out from the GPS measurements to this day. Turning to the locked, seismogenic part of the MHT, spring-and-slider models with rate-weakening rheology are insufficient to explain the contrasting responses of the seismicity to the periodic loads that tides and the monsoon both place on the MHT. Instead, we resort to numerical simulations using the Boundary Integral CYCLes of Earthquakes algorithm and examine the response of a 2D finite fault containing a rate-weakening patch to harmonic stress perturbations of various periods. We show that such simulations reproduce a gradual amplification of sensitivity as the perturbing period gets larger, up to a critical period corresponding to the characteristic time of evolution of the seismicity in response to a step-like stress perturbation. This increase in sensitivity was not reproduced by simple 1D spring-slider systems, probably because of the complexity of the nucleation process, which is captured only by 2D fault models. When the nucleation zone is close to its critical unstable size, its growth becomes highly sensitive to any external perturbation, and the timing of the resulting events may be strongly affected. A fully analytical framework has yet to be developed, and further work is needed to describe the behavior of the fault in terms of physical parameters, which will likely provide the keys to deducing the constitutive properties of the MHT from seismological observations.
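As a concrete illustration of the first analysis step, the sketch below integrates a single rate-strengthening spring-slider under a harmonic stress perturbation and reports the relative slip-rate response at several forcing periods; the period dependence of that response is the effect exploited in the thesis. The friction law here is a bare ln(V/V0) rate-strengthening term without state evolution, and every parameter value is an illustrative assumption.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative parameters (assumptions, not the thesis values)
a_sigma = 1.0e4      # a * sigma_n, Pa (e.g. a = 2e-4, sigma_n = 50 MPa)
k = 1.0e6            # spring (fault-zone) stiffness, Pa/m
V_pl = 1.0e-9        # long-term loading rate, m/s (~30 mm/yr)
dtau = 2.0e3         # periodic stress amplitude, Pa (monsoon-like, ~kPa)
year = 3.15e7        # seconds

def slip_rate(t, delta, T):
    """Quasi-static force balance with tau = tau0 + a*sigma*ln(V/V_pl)."""
    stress = k * (V_pl * t - delta) + dtau * np.sin(2 * np.pi * t / T)
    return V_pl * np.exp(stress / a_sigma)

for period in (0.5, 1.0, 5.0):                       # forcing period, years
    T = period * year
    sol = solve_ivp(lambda t, d: [slip_rate(t, d[0], T)],
                    (0.0, 10 * T), [0.0], max_step=T / 200)
    V = slip_rate(sol.t, sol.y[0], T)
    half = len(V) // 2                                # discard transient
    amp = (V[half:].max() - V[half:].min()) / (2 * V_pl)
    print(f"T = {period:3.1f} yr  relative slip-rate response = {amp:.3f}")
```

With these numbers the corner period k*V_pl/(a*sigma_n) falls near one year, so the printed amplitudes differ across the three periods, mimicking the frequency-dependent amplification discussed above.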

Relevance:

30.00%

Publisher:

Abstract:

Faults can slip either aseismically or through episodic seismic ruptures, but we still do not understand the factors that determine the partitioning between these two modes of slip. This challenge can now be addressed thanks to the dense geodetic and seismological networks that have been deployed in various tectonically active areas. The data from such networks, as well as modern remote sensing techniques, allow us to document the spatial and temporal variability of the slip mode and to gain some insight. This is the approach taken in this study, which focuses on the Longitudinal Valley Fault (LVF) in Eastern Taiwan. This fault is particularly appropriate since its very fast slip rate (about 5 cm/yr) is accommodated by both seismic and aseismic slip. Deformation of anthropogenic features shows that aseismic creep accounts for a significant fraction of fault slip near the surface, but this fault has also released energy seismically, producing five M_w>6.8 earthquakes in 1951 and 2003. Moreover, owing to the thrust component of slip, the fault zone is exhumed, which allows investigation of deformation mechanisms. In order to put constraints on the factors that control the mode of slip, we apply a multidisciplinary approach that combines modeling of geodetic observations, structural analysis and numerical simulation of the "seismic cycle". Analyzing a dense set of geodetic and seismological data across the Longitudinal Valley, including campaign-mode GPS, continuous GPS (cGPS), leveling, accelerometric, and InSAR data, we document the partitioning between seismic and aseismic slip on the fault. For the time period 1992 to 2011, we find that about 80-90% of slip on the LVF in the 0-26 km seismogenic depth range is actually aseismic. The clay-rich Lichi Mélange is identified as the key factor promoting creep at shallow depth. Microstructural investigations show that deformation within the fault zone must have resulted from a combination of frictional sliding at grain boundaries, cataclasis and pressure solution creep. Numerical modeling of earthquake sequences has been performed to investigate the possibility of reproducing the results from the kinematic inversion of geodetic and seismological data on the LVF. We first investigate the different modeling strategies developed to explore the role and relative importance of different factors in the manner in which slip accumulates on faults. We compare the results of quasi-dynamic simulations and fully dynamic ones, and conclude that ignoring the transient wave-mediated stress transfers would be inappropriate. We therefore carry out fully dynamic simulations and succeed in qualitatively reproducing the wide range of observations for the southern segment of the LVF. We conclude that the spatio-temporal evolution of fault slip on the Longitudinal Valley Fault over 1997-2011 is consistent to first order with predictions from a simple model in which a velocity-weakening patch is embedded in a velocity-strengthening area.
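The closing conclusion rests on standard rate-and-state reasoning, sketched minimally below: a patch can host earthquakes only if it is velocity-weakening (a - b < 0) and its effective stiffness falls below the critical value k_c = (b - a)*sigma_n/D_c. The property values are illustrative assumptions, not the LVF values inverted in the study.

```python
# Minimal sketch: linear stability of a rate-and-state fault patch.
# Illustrative values only (not inverted LVF properties).

G = 30e9            # shear modulus, Pa
sigma_n = 50e6      # effective normal stress, Pa
D_c = 1e-4          # state-evolution distance, m

patches = {
    # name: (a, b, patch size L in m)
    "clay-rich creeping zone": (0.015, 0.010, 5e3),   # a - b > 0
    "seismogenic asperity":    (0.010, 0.015, 5e3),   # a - b < 0
}

for name, (a, b, L) in patches.items():
    if a >= b:
        verdict = "velocity-strengthening: creeps aseismically"
    else:
        k = G / L                           # effective patch stiffness
        k_c = (b - a) * sigma_n / D_c       # critical stiffness
        verdict = ("seismic (k < k_c)" if k < k_c
                   else "conditionally stable (k > k_c)")
    print(f"{name:26s} a-b = {a - b:+.3f} -> {verdict}")
```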

Relevance:

30.00%

Publisher:

Abstract:

The architecture of model predictive control (MPC), with its explicit internal model and constrained optimization, is presented. Since MPC relies on an explicit internal model, one can imagine dealing with failures by updating the internal model and letting the on-line optimizer work out how to control the system in its new condition. This approach relies on the assumptions that the nature of the fault can be located and that the model can be updated automatically. A standard form of MPC is considered, with linear inequality constraints on inputs and outputs, a linear internal model, and a quadratic cost function.
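A minimal sketch of that standard form, written with cvxpy; the double-integrator internal model, weights and limits are assumptions for illustration:

```python
import numpy as np
import cvxpy as cp

A = np.array([[1.0, 1.0], [0.0, 1.0]])   # internal model x+ = Ax + Bu
B = np.array([[0.5], [1.0]])
C = np.array([[1.0, 0.0]])               # output y = Cx
N = 10                                    # prediction horizon
Q, R = 1.0, 0.1                           # output and input weights

x0 = np.array([5.0, 0.0])
x = cp.Variable((2, N + 1))
u = cp.Variable((1, N))

cost, constr = 0, [x[:, 0] == x0]
for k in range(N):
    cost += Q * cp.sum_squares(C @ x[:, k + 1]) + R * cp.sum_squares(u[:, k])
    constr += [x[:, k + 1] == A @ x[:, k] + B @ u[:, k],
               cp.abs(u[:, k]) <= 1.0,           # input constraint
               cp.abs(C @ x[:, k + 1]) <= 10.0]  # output constraint

cp.Problem(cp.Minimize(cost), constr).solve()
print("first optimal move:", float(u.value[0, 0]))
```

In receding-horizon operation only the first move is applied before the problem is re-solved at the next sample, which is what lets an updated internal model take effect immediately.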

Relevance:

30.00%

Publisher:

Abstract:

The various aspects of fault-tolerant control systems that have the ability to survive major equipment failures or damage are discussed. Model predictive control (MPC) offers a promising basis for fault-tolerant control. Failures can be dealt with by updating the internal model and letting the on-line optimizer control the system in its new condition. Fault detection and isolation (FDI) and the management of complex models are two emerging technologies in this field.
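A toy sketch of that accommodation loop, assuming a two-actuator linear model: once FDI locates the failed actuator, its column of the input matrix is zeroed and the same optimizer re-plans. All matrices and limits are invented for illustration.

```python
import numpy as np
import cvxpy as cp

A = np.array([[1.0, 0.1], [0.0, 1.0]])
B = np.array([[0.05, 0.0], [1.0, 0.8]])   # two redundant actuators

def mpc_first_move(B_model, x0, N=15):
    """Solve the MPC problem for the current internal model."""
    x = cp.Variable((2, N + 1))
    u = cp.Variable((2, N))
    cost, constr = 0, [x[:, 0] == x0]
    for k in range(N):
        cost += cp.sum_squares(x[:, k + 1]) + 0.01 * cp.sum_squares(u[:, k])
        constr += [x[:, k + 1] == A @ x[:, k] + B_model @ u[:, k],
                   cp.abs(u[:, k]) <= 1.0]
    cp.Problem(cp.Minimize(cost), constr).solve()
    return u.value[:, 0]

x0 = np.array([1.0, 0.0])
print("healthy plan, first move:", mpc_first_move(B, x0))

B_faulty = B.copy()
B_faulty[:, 0] = 0.0   # FDI: actuator 1 located as failed (stuck at zero)
print("re-planned after fault: ", mpc_first_move(B_faulty, x0))
```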

Relevance:

30.00%

Publisher:

Abstract:

YBaCuO-coated conductors offer great potential in terms of performance and cost savings for superconducting fault current limiters (SFCLs). A resistive SFCL based on coated conductors can be made from several tapes connected in parallel or in series. Ideally, the current and voltage are shared uniformly by the tapes when a quench occurs. However, owing to the non-uniform properties of the tapes and their relative positions, the currents and voltages of the individual tapes differ. In this paper, a numerical model is developed to investigate the current- and voltage-sharing problem for the resistive SFCL. The model is able to simulate the dynamic response of YBCO tapes in normal and quench conditions. First, four tapes with different Jc's and n-values in the E-J power law are connected in parallel to carry the fault current, and the model demonstrates how the current is distributed among them. The same four tapes are then connected in series to withstand the line voltage, and the model investigates the voltage sharing between them. Several factors that affect the quench process are discussed, including the field dependence of Jc, the magnetic coupling between the tapes, and the relative positions of the tapes. © 2010 IEEE.
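For the parallel case, a static version of the current-sharing calculation can be sketched as below: parallel tapes of equal length see the same electric field E, and the E that carries the prescribed fault current follows from inverting the E-J power law per tape. Tape parameters are illustrative, and the dynamic thermal and magnetic-coupling effects of the paper's full model are omitted.

```python
import numpy as np
from scipy.optimize import brentq

E0 = 1e-4                 # V/m electric-field criterion
A = 1e-6                  # tape cross-section, m^2 (same for all tapes)
Jc = np.array([2.0e10, 1.8e10, 2.2e10, 1.9e10])   # A/m^2, tape-to-tape spread
n = np.array([25.0, 20.0, 30.0, 22.0])            # power-law exponents

def total_current(E):
    """Invert E = E0*(J/Jc)**n per tape and sum the tape currents."""
    J = Jc * (E / E0) ** (1.0 / n)
    return (J * A).sum()

I_fault = 1.2e5           # A, prospective fault current
E = brentq(lambda e: total_current(e) - I_fault, 1e-12, 1e3)
I_tapes = Jc * (E / E0) ** (1.0 / n) * A
for i, I in enumerate(I_tapes, 1):
    print(f"tape {i}: {I / 1e3:6.2f} kA  ({100 * I / I_fault:5.1f}% of total)")
```

The tape with the highest Jc and the steepest n-value takes a disproportionate share, which is exactly the non-uniformity the paper's dynamic model tracks through a quench.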

Relevance:

30.00%

Publisher:

Abstract:

This paper introduces the notion of M-step robust fault tolerance for discrete-time systems in which finite-time completion of a control manoeuvre is desired. It considers a scenario with two distinct objectives: a primary and a secondary target, specified as sets to be reached in finite time whilst satisfying operating constraints on the states and inputs. The primary target is switched to the secondary target when a fault affects the system. As it is unknown when, or if, the fault will occur, the trajectory to the primary target is constrained to ensure reachability of the secondary target within M steps. A variable-horizon linear MPC formulation is developed to illustrate the concept. The formulation is then extended to provide robustness to bounded disturbances by use of tightened constraints. Simulations demonstrate the efficacy of the controller formulation on a double-integrator model. © 2011 IFAC.
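The constraint-tightening ingredient can be sketched in a few lines: the state constraints applied at step k are shrunk by the worst-case accumulated effect of the bounded disturbance, so any disturbed trajectory satisfying the tightened constraints satisfies the true ones. The open-loop (hence conservative) bound and the double-integrator numbers below are illustrative assumptions.

```python
import numpy as np

A = np.array([[1.0, 1.0], [0.0, 1.0]])   # double integrator
w_max = np.array([0.05, 0.05])           # |w| <= w_max componentwise
x_max = np.array([10.0, 2.0])            # nominal state box |x| <= x_max
M = 8                                    # steps allowed to reach the target

margin = np.zeros(2)
Ak = np.eye(2)
for k in range(M + 1):
    bound = x_max - margin               # tightened box used at step k
    print(f"step {k}: |x1| <= {bound[0]:6.3f}, |x2| <= {bound[1]:5.3f}")
    margin = margin + np.abs(Ak) @ w_max # add worst-case effect of w at step k
    Ak = A @ Ak
```

The monotonically shrinking boxes are what guarantee that the secondary target remains reachable within M steps despite the disturbance.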

Relevance:

30.00%

Publisher:

Abstract:

The design of programs for broadcast disks which incorporate real-time and fault-tolerance requirements is considered. A generalized model for real-time fault-tolerant broadcast disks is defined. It is shown that designing programs for broadcast disks specified in this model is closely related to the scheduling of pinwheel task systems. Some new results in pinwheel scheduling theory are derived, which facilitate the efficient generation of real-time fault-tolerant broadcast disk programs.
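The pinwheel connection can be made concrete with a small checker: a cyclic broadcast program meets page i's real-time bound a_i exactly when every window of a_i consecutive slots contains page i. The example program and distances below are illustrative.

```python
# Check a cyclic broadcast program against per-page pinwheel distances a_i:
# a client tuning in at any slot waits at most a_i slots for page i.

def is_pinwheel(schedule, distances):
    """True if every cyclic window of length a_i contains page i."""
    n = len(schedule)
    for page, a in distances.items():
        for start in range(n):
            window = [schedule[(start + j) % n] for j in range(a)]
            if page not in window:
                return False
    return True

# Necessary density condition: sum(1/a_i) <= 1.
distances = {"A": 2, "B": 4, "C": 8}                 # density 0.875 <= 1
program = ["A", "B", "A", "C", "A", "B", "A", "C"]   # length-8 broadcast cycle
print(is_pinwheel(program, distances))               # True
print(is_pinwheel(["A", "B", "C", "A"], distances))  # False: window (B, C) lacks A
```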

Relevance:

30.00%

Publisher:

Abstract:

This paper presents a statistically based fault diagnosis scheme for application to internal combustion engines. The scheme relies on an identified model that describes the relationships between a set of recorded engine variables using principal component analysis (PCA). Since combustion cycles are complex in nature and produce nonlinear relationships between the recorded engine variables, the paper proposes the use of nonlinear PCA (NLPCA). The paper further justifies the use of NLPCA by comparing the accuracy of the NLPCA model with that of a linear PCA model. A new nonlinear variable reconstruction algorithm and bivariate scatter plots are proposed for fault isolation, following the application of NLPCA. The proposed technique allows the diagnosis of different fault types under steady-state operating conditions. More precisely, nonlinear variable reconstruction can remove the fault signature from the recorded engine data, which allows the identification and isolation of the root cause of abnormal engine behaviour. The paper shows that this can lead to (i) an enhanced identification of potential root causes of abnormal events and (ii) the masking of faulty sensor readings. The effectiveness of the enhanced NLPCA-based monitoring scheme is illustrated by its application to a sensor fault and a process fault. The sensor fault relates to a drift in the fuel flow reading, whilst the process fault relates to a partial blockage of the intercooler. These faults are introduced to a Volkswagen TDI 1.9 Litre diesel engine mounted on an experimental engine test bench facility.
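A linear-PCA version of the detection and reconstruction steps can be sketched as follows; the paper's NLPCA replaces the linear projection with a nonlinear mapping, and the data here are synthetic stand-ins for the engine variables.

```python
import numpy as np

rng = np.random.default_rng(1)
t = rng.normal(size=(500, 2))                    # latent "engine" states
W = rng.normal(size=(2, 6))
X = t @ W + 0.05 * rng.normal(size=(500, 6))     # six correlated sensors

mu, sd = X.mean(0), X.std(0)
Z = (X - mu) / sd
P = np.linalg.svd(Z, full_matrices=False)[2][:2].T   # retained loadings

def spe(z):
    r = z - (z @ P) @ P.T            # residual off the PCA model plane
    return (r ** 2).sum(-1)

limit = np.quantile(spe(Z), 0.99)    # empirical 99% control limit

x = X[0].copy()
x[3] += 4 * sd[3]                    # e.g. a drift on the fuel-flow channel
z = (x - mu) / sd
print("fault detected:", bool(spe(z) > limit))

# Reconstruction-based isolation: replace one variable at a time by the
# value most consistent with the model; the alarm clears for the culprit.
for j in range(6):
    zj, c = z.copy(), P[j] @ P[j]
    zj[j] = (z @ P @ P[j] - z[j] * c) / (1.0 - c)
    print(f"reconstructing sensor {j}: alarm cleared -> {bool(spe(zj) <= limit)}")
```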

Relevance:

30.00%

Publisher:

Abstract:

The tailpipe emissions from automotive engines have been subject to steadily reducing legislative limits. This reduction has been achieved through the addition of sub-systems to the basic four-stroke engine, which thereby increases its complexity. To ensure the entire system functions correctly, each system and/or sub-system needs to be continuously monitored for the presence of any faults or malfunctions, a requirement detailed within the On-Board Diagnostic (OBD) legislation. To date, a physical-model approach has been adopted by the automotive industry for the monitoring requirement of OBD legislation. However, this approach is restricted by the available knowledge base and the computational load required. A neural network technique incorporating Multivariate Statistical Process Control (MSPC) has been proposed as an alternative method of building interrelationships between the measured variables and monitoring the correct operation of the engine. Building upon earlier work on steady-state fault detection, this paper details the use of non-linear models based on an Auto-associative Neural Network (ANN) for fault detection under transient engine operation. The theory and use of the technique are shown, with application to the detection of air leaks within the inlet manifold system of a modern gasoline engine operated on a pseudo-drive cycle. Copyright © 2007 by ASME.
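A minimal auto-associative sketch using scikit-learn, on synthetic stand-in data rather than engine measurements: the network reproduces its inputs through a bottleneck, and the reconstruction error serves as the monitoring statistic with a limit learned on fault-free data.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(2)
t = rng.uniform(-1, 1, size=(2000, 2))             # operating point
X = np.column_stack([t[:, 0], t[:, 1],             # correlated "sensors"
                     t[:, 0] * t[:, 1], t.sum(1) ** 2])
X += 0.02 * rng.normal(size=X.shape)               # sensor noise

# auto-associative network: inputs reproduced through a width-2 bottleneck
ann = MLPRegressor(hidden_layer_sizes=(8, 2, 8), activation="tanh",
                   max_iter=3000, random_state=0)
ann.fit(X, X)

spe = ((ann.predict(X) - X) ** 2).sum(1)           # reconstruction error
limit = np.quantile(spe, 0.99)                     # fault-free control limit

x = X[0].copy()
x[2] += 1.0                                        # e.g. offset from an inlet air leak
e = ((ann.predict(x[None]) - x) ** 2).sum()
print(f"SPE {e:.3f} vs limit {limit:.3f} -> fault: {e > limit}")
```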

Relevance:

30.00%

Publisher:

Abstract:

The adoption of each new level of automotive emissions legislation often requires the introduction of additional emissions reduction techniques or the development of existing emissions control systems. This, in turn, usually requires the implementation of new sensors and hardware which must subsequently be monitored by the on-board fault detection systems. The reliable detection and diagnosis of faults in these systems or sensors, which result in the tailpipe emissions rising above the progressively lower failure thresholds, presents enormous challenges for OBD engineers. This paper gives a review of the field of fault detection and diagnostics as used in the automotive industry. Previous work is discussed and particular emphasis is placed on the various strategies and techniques employed. Methodologies such as state estimation, parity equations and parameter estimation are explained, with their application within a physical-model diagnostic structure. The utilization of symptoms and residuals in the diagnostic process is also discussed. These traditional physical-model-based diagnostics are investigated in terms of their limitations. The requirements of the OBD legislation are also addressed. Additionally, novel diagnostic techniques, such as principal component analysis (PCA), are presented as a potential method of achieving the monitoring requirements of current and future OBD legislation.
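One of the reviewed residual-generation schemes, state estimation, can be illustrated with a toy Luenberger observer whose innovation is the symptom-generating residual; all matrices and the fault scenario are invented for illustration.

```python
import numpy as np

A = np.array([[0.9, 0.1], [0.0, 0.8]])
B = np.array([[0.0], [0.1]])
C = np.array([[1.0, 0.0]])
L = np.array([[0.5], [0.3]])    # observer gain, A - L C stable (eigs 0.7, 0.5)

x = np.zeros(2)       # plant state
x_hat = np.zeros(2)   # observer state
for k in range(60):
    u = np.array([1.0])
    y = C @ x + (0.5 if k >= 40 else 0.0)   # sensor offset fault at k = 40
    r = y - C @ x_hat                       # residual (innovation)
    x = A @ x + B @ u
    x_hat = A @ x_hat + B @ u + L @ r       # correct estimate with residual
    if k % 10 == 0 or k == 41:
        print(f"k={k:2d}  |residual| = {abs(r[0]):.4f}")
```

The residual stays at zero until the fault appears, then jumps; thresholding it yields the symptom used for diagnosis.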

Relevance:

30.00%

Publisher:

Abstract:

Many-core platforms are an emerging technology in the real-time embedded domain. These devices offer various options for power savings and cost reductions, and contribute to overall system flexibility; however, issues such as unpredictability, scalability and analysis pessimism are serious challenges to their integration into this area. The focus of this work is on many-core platforms using a limited migrative model (LMM). LMM is an approach based on the fundamental concepts of the multi-kernel paradigm, and is a promising step towards scalable and predictable many-cores. In this work, we formulate the problem of real-time application mapping on a many-core platform using LMM, and propose a three-stage method to solve it. An extended version of an existing analysis is used to assure that the derived mappings (i) guarantee the fulfilment of the timing constraints posed on worst-case communication delays of individual applications, and (ii) provide an environment in which to perform load balancing for, e.g., energy/thermal management, fault tolerance and/or performance reasons.
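The mapping stage can be caricatured with a toy heuristic, shown purely for intuition: applications are placed only where an abstracted schedulability test passes, preferring the least-loaded feasible core so that slack remains for load balancing and fault tolerance. The utilization-style test merely stands in for the paper's worst-case communication-delay analysis, and all names and numbers are invented.

```python
# Toy mapping heuristic: heaviest-first placement on the least-loaded core
# that passes a placeholder schedulability test (bound of 0.7 per core).

apps = {"ctrl": 0.30, "video": 0.45, "logger": 0.10, "net": 0.35, "ui": 0.20}
cores = {c: [] for c in range(3)}

def load(core):
    return sum(apps[a] for a in cores[core])

def feasible(core, app):
    return load(core) + apps[app] <= 0.7   # stand-in for the real analysis

for app in sorted(apps, key=apps.get, reverse=True):   # heaviest first
    candidates = [c for c in cores if feasible(c, app)]
    if not candidates:
        raise RuntimeError(f"no feasible core for {app}")
    best = min(candidates, key=load)       # balance: least-loaded feasible
    cores[best].append(app)

for c in cores:
    print(f"core {c}: {cores[c]} (load {load(c):.2f})")
```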

Relevance:

30.00%

Publisher:

Abstract:

In 1903, the eastern slope of Turtle Mountain (Alberta) was affected by a 30 Mm³ rockslide, known as the Frank Slide, that resulted in more than 70 casualties. Assuming that the main discontinuity sets, including bedding, control part of the slope morphology, the structural features of Turtle Mountain were investigated using a digital elevation model (DEM). Using new landscape analysis techniques, we have identified three main joint and fault sets, in agreement with the sets identified through field observations. Landscape analysis techniques using a DEM confirm and refine the most recent geological model of the Frank Slide. The rockslide was initiated along bedding and a fault at the base of the slope, and propagated upslope by a regressive process following a surface composed of pre-existing discontinuities. The DEM analysis also permits the identification of important geological structures along the 1903 slide scar. Based on the so-called Sloping Local Base Level (SLBL), a method permitting a geometric interpretation of the failure surface from a DEM, an estimate was made of the presently unstable volumes in the main scar delimited by the cracks, and around the southern area of the scar (South Peak). Finally, we propose a failure mechanism for the progressive failure of the rock mass that considers gently dipping wedges (30°). The prisms or wedges defined by two discontinuity sets permit the creation of a failure surface by progressive failure; such structures are more commonly observed in recent rockslides. This method is efficient and is recommended as a preliminary analysis prior to field investigation.
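One common variant of the SLBL iteration is easy to sketch: each interior DEM cell is lowered towards the mean of its opposite neighbours plus a (negative, concavity-inducing) tolerance until convergence, and the unstable volume is the integral between the DEM and the resulting surface. The synthetic DEM and the tolerance below are illustrative assumptions.

```python
import numpy as np

def slbl(dem, tol=0.0, n_iter=5000):
    """One simple SLBL variant: relax each cell to min of axis-wise
    neighbour means (+ tol); grid borders are kept fixed."""
    z = dem.copy()
    for _ in range(n_iter):
        mx = 0.5 * (np.roll(z, 1, 0) + np.roll(z, -1, 0))
        my = 0.5 * (np.roll(z, 1, 1) + np.roll(z, -1, 1))
        cand = np.minimum(mx, my) + tol
        z[1:-1, 1:-1] = np.minimum(z, cand)[1:-1, 1:-1]
    return z

# synthetic mountain flank on a 1 km x 1 km grid
x, y = np.meshgrid(np.linspace(0, 1, 80), np.linspace(0, 1, 80))
dem = 1000 + 600 * np.exp(-((x - 0.5) ** 2 + (y - 0.5) ** 2) / 0.05)

base = slbl(dem, tol=-0.05)           # negative tol -> concave failure surface
cell_area = (1000 / 79) ** 2          # m^2 per cell
volume = ((dem - base) * cell_area).sum()
print(f"potential unstable volume ≈ {volume / 1e6:.1f} Mm³")
```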

Relevance:

30.00%

Publisher:

Abstract:

The hazards associated with major accident hazard (MAH) industries are fire, explosion and toxic gas releases. Of these, toxic gas release is the worst, as it has the potential to cause extensive fatalities. Qualitative and quantitative hazard analyses are essential for the identification and quantification of the hazards associated with chemical industries. This research work presents the results of a consequence analysis carried out to assess the damage potential of the hazardous material storages in an industrial area of central Kerala, India. A survey carried out among the MAH units in the industrial belt revealed that the major hazardous chemicals stored by the various industrial units are ammonia, chlorine, benzene, naphtha, cyclohexane, cyclohexanone and LPG. The damage potential of the above chemicals is assessed using consequence modelling. Modelling of pool fires for naphtha, cyclohexane, cyclohexanone, benzene and ammonia is carried out using the TNO model. Vapor cloud explosion (VCE) modelling of LPG, cyclohexane and benzene is carried out using the TNT equivalent model. Boiling liquid expanding vapor explosion (BLEVE) modelling of LPG is also carried out. Dispersion modelling of toxic chemicals like chlorine, ammonia and benzene is carried out using the ALOHA air quality model. Threat zones for the different hazardous storages are estimated based on the consequence modelling. The distance covered by the threat zone was found to be maximum for chlorine release from a chlor-alkali industry located in the area.

The results of consequence modelling are useful for the estimation of individual risk and societal risk in the above industrial area. Vulnerability assessment is carried out using probit functions for toxic, thermal and pressure loads. Individual and societal risks are also estimated at different locations. Mapping of threat zones due to different incident outcome cases from different MAH industries is done with the help of ArcGIS.

Fault Tree Analysis (FTA) is an established technique for hazard evaluation. This technique has the advantage of being both qualitative and quantitative, if the probabilities and frequencies of the basic events are known. However, it is often difficult to estimate precisely the failure probability of components due to insufficient data or the vague characteristics of the basic events. It has been reported that the availability of failure probability data pertaining to local conditions is surprisingly limited in India. This thesis outlines the generation of failure probability values for the basic events that lead to the release of chlorine from the storage and filling facility of a major chlor-alkali industry located in the area, using expert elicitation and proven fuzzy logic. Sensitivity analysis has been done to evaluate the percentage contribution of each basic event that could lead to chlorine release. Two-dimensional fuzzy fault tree analysis (TDFFTA) has been proposed for balancing the hesitation factor involved in expert elicitation.
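The fuzzy fault tree ingredient can be sketched with triangular fuzzy probabilities: since the AND/OR gate formulas are monotone increasing in each basic-event probability, they can be applied component-wise to the (low, modal, high) triples, giving the exact alpha = 0 and alpha = 1 cuts. The tree and event values below are invented for illustration, not the elicited chlorine-release data.

```python
import numpy as np

def AND(*events):
    """AND gate: product of probabilities (independent events)."""
    return np.prod(events, axis=0)

def OR(*events):
    """OR gate: 1 - prod(1 - p)."""
    return 1.0 - np.prod(1.0 - np.array(events), axis=0)

# (low, modal, high) fuzzy probabilities of basic events, per demand
valve_fail  = np.array([1e-3, 3e-3, 8e-3])
gasket_leak = np.array([5e-4, 1e-3, 4e-3])
op_error    = np.array([1e-2, 2e-2, 5e-2])
alarm_fail  = np.array([2e-3, 5e-3, 1e-2])

# top event: a release occurs if a leak path exists AND mitigation fails
leak_path = OR(valve_fail, gasket_leak, op_error)
release = AND(leak_path, alarm_fail)
l, m, u = release
print(f"fuzzy top-event probability: ({l:.2e}, {m:.2e}, {u:.2e})")
```

Sensitivity analysis then amounts to recomputing the top event with each basic event perturbed in turn and ranking the resulting changes.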

Relevance:

30.00%

Publisher:

Abstract:

The practical performance of analytical redundancy for fault detection and diagnosis is often decreased by uncertainties prevailing not only in the system model but also in the measurements. In this paper, the problem of fault detection is stated as a constraint satisfaction problem over continuous domains with a large number of variables and constraints. This problem can be solved using modal interval analysis and consistency techniques. Consistency techniques are shown to be particularly efficient for checking the consistency of the analytical redundancy relations (ARRs), dealing with uncertain measurements and parameters. Through the work presented in this paper, it can be observed that consistency techniques can be used to increase the performance of a robust fault detection tool based on interval arithmetic. The proposed method is illustrated using a nonlinear dynamic model of a hydraulic system.
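The consistency test on a single ARR can be sketched with plain interval arithmetic; the paper sharpens such tests with modal intervals and constraint propagation, and the tank model and numbers below are illustrative assumptions. A fault is flagged only when no value in the measurement and parameter intervals can make the residual zero.

```python
import math

class Interval:
    """Tiny closed-interval type with just the operations needed below."""
    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi
    def __add__(self, o):
        return Interval(self.lo + o.lo, self.hi + o.hi)
    def __sub__(self, o):
        return Interval(self.lo - o.hi, self.hi - o.lo)
    def __mul__(self, o):
        p = [self.lo * o.lo, self.lo * o.hi, self.hi * o.lo, self.hi * o.hi]
        return Interval(min(p), max(p))
    def sqrt(self):
        return Interval(math.sqrt(self.lo), math.sqrt(self.hi))
    def contains(self, v):
        return self.lo <= v <= self.hi

# ARR from a tank mass balance: r = q_in - k*sqrt(h) - A*dh/dt
A    = Interval(0.95, 1.05)    # tank section, m^2 (uncertain parameter)
k    = Interval(0.09, 0.11)    # outflow coefficient (uncertain parameter)
q_in = Interval(0.195, 0.205)  # measured inflow, m^3/s
h    = Interval(0.98, 1.02)    # measured level, m
dhdt = Interval(0.08, 0.10)    # estimated level derivative, m/s

r = q_in - k * h.sqrt() - A * dhdt
print(f"[r] = [{r.lo:+.3f}, {r.hi:+.3f}]  consistent: {r.contains(0.0)}")
```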