980 results for Fault ride through


Relevance:

30.00%

Publisher:

Abstract:

Low Voltage (LV) electricity distribution grid operations can be improved through a combination of new smart metering capabilities based on real-time Power Line Communications (PLC) and LV grid topology mapping. This paper presents two novel contributions. The first is a new methodology for smart metering PLC network monitoring and analysis. It can be used to obtain relevant information from the grid, thus adding value to existing smart metering deployments and facilitating utility operational activities. The second describes the grid conditioning used to obtain LV feeder and phase identification for all connected smart electric meters. Real-time availability of such information may help utilities with grid planning, fault location and more accurate point-of-supply management.
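
A hedged illustration of the phase-identification idea described above: if the utility records per-phase reference voltage profiles at the secondary substation, each meter's voltage time series can be matched to the phase it correlates with best. All names and data below are illustrative assumptions, not the paper's method or measurements.

```python
# Sketch: assign each smart meter to a phase by correlating its voltage
# profile with per-phase reference profiles measured at the substation.
import numpy as np

def identify_phase(meter_voltage: np.ndarray, phase_refs: np.ndarray) -> int:
    """Return the index (0, 1, 2) of the reference phase whose voltage
    profile correlates best with the meter's profile."""
    corrs = [np.corrcoef(meter_voltage, ref)[0, 1] for ref in phase_refs]
    return int(np.argmax(corrs))

# Synthetic example: three reference profiles and one noisy meter profile.
rng = np.random.default_rng(0)
t = np.linspace(0, 24, 96)                          # 15-minute samples, one day
phase_refs = np.stack([230 + 5 * np.sin(t + k) for k in range(3)])
meter = phase_refs[1] + rng.normal(0, 0.5, t.size)  # meter actually on phase B
print(identify_phase(meter, phase_refs))            # -> 1
```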

Relevance:

30.00%

Publisher:

Abstract:

Faults can slip either aseismically or through episodic seismic ruptures, but we still do not understand the factors that determine the partitioning between these two modes of slip. This challenge can now be addressed thanks to the dense geodetic and seismological networks that have been deployed in various tectonically active areas. The data from such networks, together with modern remote sensing techniques, allow documentation of the spatial and temporal variability of the slip mode and give some insight. This is the approach taken in this study, which focuses on the Longitudinal Valley Fault (LVF) in Eastern Taiwan. This fault is particularly appropriate since its very fast slip rate (about 5 cm/yr) is accommodated by both seismic and aseismic slip. Deformation of anthropogenic features shows that aseismic creep accounts for a significant fraction of fault slip near the surface, but the fault has also released energy seismically, producing five M_w > 6.8 earthquakes in 1951 and 2003. Moreover, owing to the thrust component of slip, the fault zone is exhumed, which allows investigation of deformation mechanisms. In order to place constraints on the factors that control the mode of slip, we apply a multidisciplinary approach that combines modeling of geodetic observations, structural analysis and numerical simulation of the "seismic cycle". Analyzing a dense set of geodetic and seismological data across the Longitudinal Valley, including campaign-mode GPS, continuous GPS (cGPS), leveling, accelerometric, and InSAR data, we document the partitioning between seismic and aseismic slip on the fault. For the period 1992 to 2011, we find that about 80-90% of slip on the LVF in the 0-26 km seismogenic depth range is actually aseismic. The clay-rich Lichi Mélange is identified as the key factor promoting creep at shallow depth. Microstructural investigations show that deformation within the fault zone must have resulted from a combination of frictional sliding at grain boundaries, cataclasis and pressure solution creep. Numerical modeling of earthquake sequences has been performed to investigate whether the results of the kinematic inversion of geodetic and seismological data on the LVF can be reproduced. We first examine the different modeling strategies developed to explore the role and relative importance of different factors in the manner in which slip accumulates on faults. Comparing quasi-dynamic and fully dynamic simulations, we conclude that ignoring transient wave-mediated stress transfers would be inappropriate. We therefore carry out fully dynamic simulations and succeed in qualitatively reproducing the wide range of observations for the southern segment of the LVF. We conclude that the spatio-temporal evolution of fault slip on the Longitudinal Valley Fault over 1997-2011 is consistent, to first order, with predictions from a simple model in which a velocity-weakening patch is embedded in a velocity-strengthening area.
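
The closing conclusion lends itself to a compact numerical illustration. Below is a minimal quasi-dynamic spring-slider sketch with rate-and-state friction (aging law), the standard zero-dimensional analogue of the velocity-weakening-patch model mentioned above; the thesis itself argues for fully dynamic, spatially extended simulations, and all parameter values here are illustrative, not those inferred for the LVF.

```python
# Quasi-dynamic spring-slider with rate-and-state friction (aging law).
# A velocity-weakening patch (a < b) loaded below its critical stiffness
# produces stick-slip "seismic" cycles; a > b would give steady creep.
import numpy as np
from scipy.optimize import brentq

a, b = 0.010, 0.015              # rate-and-state parameters (a < b: weakening)
dc = 1e-3                        # characteristic slip distance [m]
mu0, v0 = 0.6, 1e-6              # reference friction coefficient and slip rate
sigma = 50e6                     # effective normal stress [Pa]
k = 0.5 * sigma * (b - a) / dc   # stiffness below critical -> unstable slip
eta = 5e6                        # radiation-damping coefficient [Pa s/m]
v_pl = 1.6e-9                    # loading rate, about 5 cm/yr [m/s]

def slip_rate(tau, theta):
    """Solve tau = sigma*(mu0 + a ln(v/v0) + b ln(v0 theta/dc)) + eta v."""
    f = lambda v: sigma * (mu0 + a * np.log(v / v0)
                           + b * np.log(v0 * theta / dc)) + eta * v - tau
    return brentq(f, 1e-20, 10.0)

delta, theta, t = 0.0, dc / v0, 0.0
tau, v_max = sigma * mu0, 0.0
for _ in range(200_000):                  # crude adaptive explicit stepping
    v = slip_rate(tau, theta)
    v_max = max(v_max, v)
    dt = min(1e5, 0.1 * dc / v)           # refine steps during fast slip
    delta += v * dt
    theta += (1.0 - v * theta / dc) * dt  # aging law for the state variable
    t += dt
    tau = sigma * mu0 + k * (v_pl * t - delta)

print(f"peak slip rate {v_max:.2e} m/s after {t / 3.15e7:.0f} yr")
```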

Relevance:

30.00%

Publisher:

Abstract:

Thrust fault earthquakes are investigated in the laboratory by generating dynamic shear ruptures along pre-existing frictional faults in rectangular plates. A considerable body of evidence suggests that dip-slip earthquakes exhibit enhanced ground motions in the acute hanging wall wedge as an outcome of broken symmetry between the hanging and foot wall plates with respect to the earth surface. To understand the physical behavior of thrust fault earthquakes, particularly ground motions near the earth surface, ruptures are nucleated in analog laboratory experiments and guided up-dip towards the simulated earth surface. The transient slip event and emitted radiation mimic a natural thrust earthquake. High-speed photography and laser velocimeters capture the rupture evolution, outputting a full-field view of photo-elastic fringe contours proportional to maximum shearing stresses as well as continuous ground motion velocity records at discrete points on the specimen. Earth surface-normal measurements validate selective enhancement of hanging wall ground motions for both sub-Rayleigh and super-shear rupture speeds. The earth surface breaks upon rupture tip arrival at the fault trace, generating prominent Rayleigh surface waves. A rupture wave is sensed in the hanging wall but is absent from the foot wall plate: a direct consequence of the stations' proximity to the fault. Signatures in earth surface-normal records attenuate with distance from the fault trace. Super-shear earthquakes feature greater amplitudes of ground shaking profiles, as expected from the increased tectonic pressures required to induce super-shear transition. Paired stations measure fault-parallel and fault-normal ground motions at various depths, which yield slip and opening rates through direct subtraction of like components. Peak fault slip and opening rates associated with the rupture tip increase with proximity to the fault trace, a result of selective ground motion amplification in the hanging wall. Fault opening rates indicate that the hanging and foot walls detach near the earth surface, a phenomenon promoted by a decrease in magnitude of the far-field tectonic loads. Subsequent shutting of the fault sends an opening pulse back down-dip. In the case of a sub-Rayleigh earthquake, feedback from the reflected S wave re-ruptures the locked fault at super-shear speeds, providing another mechanism of super-shear transition.
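
The slip- and opening-rate computation described above reduces to a simple differencing of like components across the fault, which the following sketch makes explicit; the array names are placeholders for digitized velocimeter records, not the experimental data files.

```python
# Sketch: fault slip and opening rates from paired-station velocity records,
# obtained by direct subtraction of like components across the fault.
import numpy as np

def slip_and_opening_rates(hw_parallel, fw_parallel, hw_normal, fw_normal):
    """Hanging-wall minus foot-wall differencing: fault-parallel records give
    the slip rate, fault-normal records give the opening rate."""
    slip_rate = np.asarray(hw_parallel) - np.asarray(fw_parallel)
    opening_rate = np.asarray(hw_normal) - np.asarray(fw_normal)
    return slip_rate, opening_rate
```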

Relevance:

30.00%

Publisher:

Abstract:

The strength of materials at extreme pressures (>1 Mbar, or 100 GPa) and high strain rates (10^6-10^8 s^-1) is not well characterized. The goal of the research outlined in this thesis is to study the strength of tantalum (Ta) under these conditions. The Omega Laser at the Laboratory for Laser Energetics in Rochester, New York is used to create such extreme conditions. Targets are designed with ripples or waves on the surface, and these samples are subjected to high pressures using Omega's high-energy laser beams. In these experiments, the observational parameter is the Richtmyer-Meshkov (RM) instability in the form of growth of single-mode ripples. The experimental platform is the "ride-along" laser compression recovery experiment, which provides a way to recover specimens after they have been subjected to high pressures. Six experiments are performed on the Omega laser using single-mode tantalum targets at different laser energies, where the energy is the amount of laser energy that impinges on the target. For each target, the growth factor is obtained by comparing the profile of the ripples before and after the experiment. With increasing energy, the growth factor increases.

Engineering simulations are used to interpret and correlate the measurements of growth factor to a measure of strength. In order to validate the engineering constitutive model for tantalum, a series of simulations are performed using the code Eureka, based on the Optimal Transportation Meshfree (OTM) method. Two different configurations are studied in the simulations: RM instabilities in single and multimode ripples. Six different simulations are performed for the single ripple configuration of the RM instability experiment, with drives corresponding to laser energies used in the experiments. Each successive simulation is performed at higher drive energy, and it is observed that with increasing energy, the growth factor increases. Overall, there is favorable agreement between the data from the simulations and the experiments. The peak growth factors from the simulations and the experiments are within 10% agreement. For the multimode simulations, the goal is to assist in the design of the laser driven experiments using the Omega laser. A series of three-mode and four-mode patterns are simulated at various energies and the resulting growth of the RM instability is computed. Based on the results of the simulations, a configuration is selected for the multimode experiments. These simulations also serve as validation for the constitutive model and the material parameters for tantalum that are used in the simulations.

By designing samples with initial perturbations in the form of single-mode and multimode ripples and subjecting these samples to high pressures, the Richtmyer-Meshkov instability is investigated in both laser compression experiments and simulations. By correlating the growth of these ripples to measures of strength, a better understanding of the strength of tantalum at high pressures is achieved.
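
As a concrete illustration of the growth-factor measurement described above, the sketch below estimates the single-mode ripple amplitude by Fourier analysis of a surface profile before and after a shot and takes their ratio; the profile arrays and sampling are illustrative assumptions, not the metrology used in the thesis.

```python
# Sketch: growth factor of a single-mode ripple from pre- and post-shot
# surface profiles, using the Fourier amplitude at the imposed wavelength.
import numpy as np

def mode_amplitude(profile: np.ndarray, dx: float, wavelength: float) -> float:
    """Amplitude of the Fourier component at the ripple wavelength."""
    n = profile.size
    spectrum = np.fft.rfft(profile - profile.mean())
    freqs = np.fft.rfftfreq(n, d=dx)
    k = np.argmin(np.abs(freqs - 1.0 / wavelength))
    return 2.0 * np.abs(spectrum[k]) / n

def growth_factor(pre, post, dx, wavelength):
    return mode_amplitude(post, dx, wavelength) / mode_amplitude(pre, dx, wavelength)

# Synthetic check: a 50 um wavelength ripple whose amplitude grows 3x.
x = np.arange(0, 500.0, 0.5)                  # positions in microns
pre = 1.0 * np.sin(2 * np.pi * x / 50.0)
post = 3.0 * np.sin(2 * np.pi * x / 50.0)
print(growth_factor(pre, post, dx=0.5, wavelength=50.0))   # ~3.0
```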

Relevance:

30.00%

Publisher:

Abstract:

The contribution of buildings to total worldwide energy consumption in developed countries is between 20% and 40%. Heating, Ventilation and Air Conditioning (HVAC), and more specifically Air Handling Unit (AHU), energy consumption accounts on average for 40% of a typical medical device manufacturing or pharmaceutical facility's energy consumption. Studies have indicated that energy savings of 20-30% are achievable by recommissioning HVAC systems, and more specifically AHU operations, to rectify faulty operation. Automated Fault Detection and Diagnosis (AFDD) is a process concerned with partially or fully automating the commissioning process through the detection of faults. An expert system is a knowledge-based system which employs Artificial Intelligence (AI) methods to replicate the knowledge of a human subject matter expert in a particular field, such as engineering, medicine, finance or marketing. This thesis details the research and development work undertaken in developing and testing a new AFDD expert system for AHUs which can be installed with minimal set-up time on a large cross-section of AHU types, independently of the building management system vendor. Both simulated and extensive field testing were undertaken against a widely available and industry-recognised expert rule set, the Air Handling Unit Performance Assessment Rules (APAR) (and a later, more developed version known as APAR_extended), in order to prove its effectiveness. Specifically, in tests against a dataset of 52 simulated faults, this new AFDD expert system identified all 52 issues whereas the APAR rule set identified just 10. In tests using actual field data from 5 operating AHUs in 4 manufacturing facilities, the newly developed AFDD expert system was shown to identify four fault case categories that the APAR method did not, as well as showing improvements in the area of fault diagnosis.
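
For context, APAR-style rules are simple inequality checks on AHU temperature sensors per operating mode. Below is a sketch of one such rule, in the spirit of the published APAR heating-mode rule; the threshold values and function name are illustrative assumptions, not the thesis's expert system.

```python
# Sketch of an APAR-style check: in heating mode, supply air temperature
# should exceed mixed air temperature by roughly the fan temperature rise.
def apar_heating_rule(t_supply: float, t_mixed: float,
                      fan_rise: float = 1.1, tolerance: float = 1.9) -> bool:
    """True if Tsa < Tma + dTsf - eps, i.e. a possible heating-coil fault."""
    return t_supply < t_mixed + fan_rise - tolerance

# Supply air barely above mixed air despite heating demand -> flagged.
print(apar_heating_rule(t_supply=18.0, t_mixed=19.5))   # -> True
```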

Relevance:

30.00%

Publisher:

Abstract:

The tailpipe emissions from automotive engines have been subject to steadily tightening legislative limits. This reduction has been achieved through the addition of sub-systems to the basic four-stroke engine, which thereby increases its complexity. To ensure the entire system functions correctly, each system and/or sub-system needs to be continuously monitored for the presence of any faults or malfunctions. This is a requirement detailed within the On-Board Diagnostic (OBD) legislation. To date, a physical-model approach has been adopted by the automotive industry for the monitoring requirement of OBD legislation. However, this approach is restricted by the available knowledge base and the computational load required. A neural network technique incorporating Multivariate Statistical Process Control (MSPC) has been proposed as an alternative method of building interrelationships between the measured variables and monitoring the correct operation of the engine. Building upon earlier work on steady-state fault detection, this paper details the use of non-linear models based on an Auto-associative Neural Network (ANN) for fault detection under transient engine operation. The theory and use of the technique are shown, with application to the detection of air leaks within the inlet manifold system of a modern gasoline engine operated on a pseudo-drive cycle.
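
A minimal sketch of the auto-associative idea, assuming an autoencoder-style network trained on healthy data with an MSPC-style control limit on the reconstruction error; the network size, 3-sigma limit, and synthetic data are illustrative, not the paper's configuration.

```python
# Sketch: auto-associative network fault detection. Train the network to
# reproduce its inputs on healthy data; flag samples whose reconstruction
# error (squared prediction error, SPE) exceeds a control limit.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)
latent = rng.normal(size=(500, 2))                 # hidden driving factors
healthy = latent @ rng.normal(size=(2, 6)) + 0.05 * rng.normal(size=(500, 6))

ann = MLPRegressor(hidden_layer_sizes=(3,), max_iter=3000, random_state=0)
ann.fit(healthy, healthy)                          # auto-associative fit

spe = ((healthy - ann.predict(healthy)) ** 2).sum(axis=1)
limit = spe.mean() + 3.0 * spe.std()               # MSPC-style control limit

def is_faulty(sample: np.ndarray) -> bool:
    return ((sample - ann.predict(sample[None, :])[0]) ** 2).sum() > limit
```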

Relevance:

30.00%

Publisher:

Abstract:

This paper describes the application of multivariate regression techniques to the Tennessee Eastman benchmark process for modelling and fault detection. Two methods are applied: linear partial least squares (PLS), and a nonlinear variant of this procedure using a radial basis function (RBF) inner relation. The performance of the RBF networks is enhanced through the use of a recently developed training algorithm which uses quasi-Newton optimization to ensure an efficient and parsimonious network; details of this algorithm can be found in the paper. The PLS and PLS/RBF methods are then used to create on-line inferential models of delayed process measurements. As these measurements relate to the final product composition, the models suggest that on-line statistical quality control analysis should be possible for this plant. The generation of 'soft sensors' for these measurements has the further effect of introducing a redundant element into the system, redundancy which can then be used to generate a fault detection and isolation scheme for these sensors. This is achieved by arranging the sensors and models in a manner comparable to the dedicated estimator scheme of Clark et al. (1975, IEEE Trans. Aero. Elect. Sys., AES-11, 465-473). The effectiveness of this scheme is demonstrated on a series of simulated sensor and process faults, with full detection and isolation shown to be possible for sensor malfunctions, and detection feasible in the case of process faults. Suggestions for enhancing the diagnostic capacity in the latter case are covered towards the end of the paper.
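
A minimal soft-sensor sketch along these lines, using linear PLS only (scikit-learn's PLSRegression standing in for the paper's implementation); the data, component count, and 3-sigma residual limit are illustrative assumptions.

```python
# Sketch: PLS soft sensor for a delayed quality measurement, with a residual
# check that flags disagreement between the analyser and the model estimate.
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(2)
X = rng.normal(size=(300, 10))                    # routine process variables
y = X @ rng.normal(size=10) + 0.1 * rng.normal(size=300)  # delayed quality var

pls = PLSRegression(n_components=4).fit(X, y)
resid = y - pls.predict(X).ravel()
limit = 3.0 * resid.std()                         # simple residual limit

def sensor_fault(x_new: np.ndarray, y_measured: float) -> bool:
    """Flag the quality sensor when it disagrees with the soft sensor."""
    y_hat = float(pls.predict(x_new[None, :]).ravel()[0])
    return abs(y_measured - y_hat) > limit
```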

Relevance:

30.00%

Publisher:

Abstract:

Fault and fracture systems are the most important store and pathway for groundwater in Ireland's bedrock aquifers, either directly as conductive flow structures, or indirectly as the locus for the development of dolomitised limestone and karst. This article presents the preliminary results of a study involving the quantitative analysis of fault and fracture systems in the broad range of Irish bedrock types and a consideration of their impact on groundwater flow. The principal aims of the project are to develop generic conceptual models for different fault/fracture systems in different lithologies and at different depths, and to link them to observed groundwater behaviour. Here we briefly describe the geometrical characteristics of the main post-Devonian fault/fracture systems controlling groundwater flow, from field observations at outcrops, quarries and mines. The structures range from Lower Carboniferous normal faults through Variscan-related faults and veins to the most recent structures, including Tertiary strike-slip faults and ubiquitous uplift-related joint systems. The geometrical characteristics of the different fault/fracture systems, combined with observations of groundwater behaviour at both quarry and mine localities, can be linked to general flow and transport conceptualisations of Irish fractured bedrock. Most importantly, they also provide a basis for relating groundwater flow to particular fault/fracture systems and their expression with depth and within different lithological sequences, as well as their regional variability.

Relevance:

30.00%

Publisher:

Abstract:

Dependability is a critical factor in computer systems, requiring high-quality validation and verification procedures in the development stage. At the same time, digital devices are getting smaller and access to their internal signals and registers is increasingly complex, requiring innovative debugging methodologies. To address this issue, most recent microprocessors include an on-chip debug (OCD) infrastructure to facilitate common debugging operations. This paper proposes an enhanced OCD infrastructure with the objective of supporting the verification of fault-tolerant mechanisms through fault injection campaigns. This upgraded on-chip debug and fault injection (OCD-FI) infrastructure provides an efficient fault injection mechanism with improved capabilities and dynamic behavior. Preliminary results show that this solution provides flexibility in terms of fault triggering and allows high-speed real-time fault injection in memory elements.
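
To make the campaign structure concrete, here is a heavily hedged sketch; the DebugLink class and its methods are hypothetical stand-ins for a vendor OCD access API (no real debug-probe interface is implied), and only the overall flow of trigger, inject, resume follows the paper's description.

```python
# Sketch of a fault-injection campaign through an on-chip debug port.
# DebugLink is a hypothetical abstraction, not a real vendor API.
import random

class DebugLink:
    def read_word(self, addr: int) -> int: ...
    def write_word(self, addr: int, value: int) -> None: ...
    def run_until(self, trigger_pc: int) -> None: ...   # halt at trigger

def inject_bit_flip(link: DebugLink, addr: int, bit: int) -> None:
    """Flip one bit of a memory word via the debug port (SEU-style fault)."""
    link.write_word(addr, link.read_word(addr) ^ (1 << bit))

def campaign(link: DebugLink, mem_addrs, trigger_pc: int, n_runs: int):
    for _ in range(n_runs):
        link.run_until(trigger_pc)                      # reach trigger point
        inject_bit_flip(link, random.choice(mem_addrs),
                        random.randrange(32))
        # ...resume execution and classify each run's outcome here...
```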

Relevance:

30.00%

Publisher:

Abstract:

As exploration of our solar system and outer space moves into the future, spacecraft are being developed to venture on increasingly challenging missions with bold objectives. The spacecraft tasked with completing these missions are becoming progressively more complex. This increases the potential for mission failure due to hardware malfunctions and unexpected spacecraft behavior. A solution to this problem lies in the development of an advanced fault management system. Fault management enables a spacecraft to respond to failures and take repair actions so that it may continue its mission. The two main approaches developed for spacecraft fault management have been rule-based and model-based systems. Rules map sensor information to system behaviors, thus achieving fast response times and making the actions of the fault management system explicit. These rules are developed by having a human reason through the interactions between spacecraft components, a process limited by the number of interactions a human can reason about correctly. In the model-based approach, the human provides component models, and the fault management system reasons automatically about system-wide interactions and complex fault combinations. This approach improves correctness and makes the underlying system models explicit, whereas these are implicit in the rule-based approach. We propose a fault detection engine, Compiled Mode Estimation (CME), that unifies the strengths of the rule-based and model-based approaches. CME uses a compiled model to determine spacecraft behavior more accurately. Reasoning related to fault detection is compiled off-line into a set of concurrent, localized diagnostic rules. These are then combined on-line with sensor information to reconstruct the diagnosis of the system. These rules enable a human to inspect the diagnostic consequences of CME. Additionally, CME is capable of reasoning through component interactions automatically while still providing fast and correct responses. The implementation of this engine has been tested against the NEAR spacecraft's advanced rule-based system, resulting in the detection of failures beyond those detected by the rules. This evolution in fault detection will enable future missions to explore the furthest reaches of the solar system without the burden of human intervention to repair failed components.
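
The compiled-rules idea can be shown in miniature: off-line, a component's mode model is compiled into localized observation-to-mode rules; on-line, those rules are evaluated against sensor data. The toy valve model below is an invented example, not a NEAR component.

```python
# Sketch: compile a component model into localized diagnostic rules (off-line),
# then evaluate them against observations (on-line).
def compile_rules(mode_model: dict) -> list:
    """Off-line: invert mode -> predicted observation into lookup rules."""
    return [(prediction, mode) for mode, prediction in mode_model.items()]

valve_model = {                       # mode -> predicted (command, flow)
    "nominal-open":   ("open",  "flow"),
    "nominal-closed": ("close", "no-flow"),
    "stuck-closed":   ("open",  "no-flow"),
}
valve_rules = compile_rules(valve_model)

def diagnose(rules: list, observation: tuple) -> list:
    """On-line: return the modes consistent with the observation."""
    return [mode for prediction, mode in rules if prediction == observation]

print(diagnose(valve_rules, ("open", "no-flow")))   # -> ['stuck-closed']
```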

Relevance:

30.00%

Publisher:

Abstract:

Often the practical performance of analytical redundancy for fault detection and diagnosis is decreased by uncertainties prevailing not only in the system model but also in the measurements. In this paper, the problem of fault detection is stated as a constraint satisfaction problem over continuous domains with a large number of variables and constraints. This problem can be solved using modal interval analysis and consistency techniques. Consistency techniques are shown to be particularly efficient for checking the consistency of the analytical redundancy relations (ARRs) in the presence of uncertain measurements and parameters. Through the work presented in this paper, it can be observed that consistency techniques can be used to increase the performance of a robust fault detection tool based on interval arithmetic. The proposed method is illustrated using a nonlinear dynamic model of a hydraulic system.
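
The consistency test reduces to checking whether zero lies in the interval evaluation of each ARR. The sketch below uses plain interval arithmetic on a toy tank-balance relation (the paper itself uses modal intervals and constraint propagation); all values are illustrative.

```python
# Sketch: interval consistency check of an analytical redundancy relation.
# For a fault-free system, the interval evaluation of the residual must
# contain zero; an interval excluding zero signals a fault.

def imul(a, b):
    p = (a[0] * b[0], a[0] * b[1], a[1] * b[0], a[1] * b[1])
    return (min(p), max(p))

def isub(a, b):
    return (a[0] - b[1], a[1] - b[0])

# Toy tank balance ARR: r = qin - qout - A * dh/dt, all [lo, hi] intervals.
qin, qout = (1.00, 1.05), (0.60, 0.65)
A, dhdt = (1.9, 2.1), (0.18, 0.20)
residual = isub(isub(qin, qout), imul(A, dhdt))

consistent = residual[0] <= 0.0 <= residual[1]
print(residual, "consistent with fault-free hypothesis:", consistent)
```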

Relevance:

30.00%

Publisher:

Abstract:

Network diagnosis in Wireless Sensor Networks (WSNs) is a difficult task due to their improvisational nature, the invisibility of internal running status, and particularly because the network structure can change frequently due to link failures. To solve this problem, we propose a Mobile Sink (MS) based distributed fault diagnosis algorithm for WSNs. An MS, or mobile fault detector, is usually a mobile robot or vehicle equipped with a wireless transceiver that performs the task of a mobile base station while also diagnosing the hardware and software status of deployed network sensors. Our MS mobile fault detector moves through the network area, polling each static sensor node to diagnose the hardware and software status of nearby sensor nodes using only single-hop communication. The fault detection accuracy and functionality of the network are thereby significantly increased. In order to maintain an excellent Quality of Service (QoS), we employ an optimal fault diagnosis tour planning algorithm. In addition to saving energy and time, the tour planning algorithm excludes faulty sensor nodes from the next diagnosis tour. We demonstrate the effectiveness of the proposed algorithms through simulation and real-life experimental results.
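
A hedged sketch of the tour-planning step, with a greedy nearest-neighbour heuristic standing in for the paper's optimal tour planner; node coordinates and the faulty set are invented for illustration.

```python
# Sketch: plan a diagnosis tour over healthy sensor nodes, excluding nodes
# already diagnosed as faulty, using a nearest-neighbour heuristic.
import math

def plan_tour(nodes: dict, faulty: set, start=(0.0, 0.0)) -> list:
    """nodes: id -> (x, y). Returns a visiting order over healthy nodes."""
    remaining = {n: p for n, p in nodes.items() if n not in faulty}
    tour, pos = [], start
    while remaining:
        nxt = min(remaining, key=lambda n: math.dist(pos, remaining[n]))
        tour.append(nxt)
        pos = remaining.pop(nxt)
    return tour

nodes = {1: (1, 1), 2: (4, 0), 3: (2, 3), 4: (5, 5)}
print(plan_tour(nodes, faulty={2}))    # -> [1, 3, 4]
```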

Relevance:

30.00%

Publisher:

Abstract:

Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)

Relevance:

30.00%

Publisher:

Abstract:

Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)

Relevance:

30.00%

Publisher:

Abstract:

Optimised placement of control and protective devices in distribution networks allows for better operation and improved reliability indices of the system. Control devices (used to reconfigure the feeders) are placed in distribution networks to obtain an optimal operation strategy that facilitates power supply restoration in the case of a contingency. Protective devices (used to isolate faults) are placed in distribution systems to improve the reliability and continuity of the power supply, significantly reducing the impact of a fault in terms of customer outages and the time needed for fault location and system restoration. This paper presents a novel technique to optimally place both control and protective devices, within the same optimisation process, on radial distribution feeders. The problem is modelled through mixed integer non-linear programming (MINLP) with real and binary variables. The reactive tabu search (RTS) algorithm is proposed to solve this problem. Results and optimised strategies for placing control and protective devices on a practical feeder are presented.
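
A hedged skeleton of tabu search over binary device-placement decisions (basic tabu search with an aspiration criterion, rather than the paper's reactive variant); the toy cost function merely trades device cost against outage cost, whereas the paper solves a full MINLP reliability model.

```python
# Sketch: tabu search over a binary placement vector (1 = install a device
# on that feeder section). Single-bit flips are the move neighbourhood.
import random

def flip(x, i):
    y = x[:]
    y[i] ^= 1
    return y

def tabu_search(n_sections, cost, iters=200, tenure=7):
    random.seed(0)
    x = [random.randint(0, 1) for _ in range(n_sections)]   # initial placement
    best, best_cost = x[:], cost(x)
    tabu = {}                                               # move -> expiry
    for it in range(iters):
        # Allowed moves: not tabu, or tabu but improving the best (aspiration).
        moves = [i for i in range(n_sections)
                 if tabu.get(i, -1) < it or cost(flip(x, i)) < best_cost]
        i = min(moves, key=lambda j: cost(flip(x, j)))
        x = flip(x, i)
        tabu[i] = it + tenure                               # forbid reversal
        if cost(x) < best_cost:
            best, best_cost = x[:], cost(x)
    return best, best_cost

# Toy cost: each device costs 1; each unprotected section costs 3 in outages.
toy = lambda x: sum(x) + 3 * x.count(0)
print(tabu_search(8, toy))    # -> all-ones placement, cost 8
```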