878 results for Fault-proneness
Abstract:
Diagnostics is based on characterizing the condition of a mechanical system and allows early detection of a possible fault. Signal processing is widely used in diagnostics, since it directly characterizes the state of the system. Several advanced signal processing techniques have been proposed in recent decades alongside more conventional ones, yet these techniques are seldom able to handle non-stationary operating conditions. Roller bearing diagnostics is no exception. In this paper, a new vibration signal processing tool, able to perform roller bearing diagnostics under any working condition and noise level, is developed on the basis of two data-adaptive techniques, Empirical Mode Decomposition (EMD) and Minimum Entropy Deconvolution (MED), coupled by means of the Hilbert transform. The effectiveness of the new tool is proven on experimental data measured on a test rig employing high-power, industrial-size components.
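The Hilbert-transform step that couples the techniques above is, at its core, envelope demodulation. A minimal sketch of envelope analysis with `scipy.signal.hilbert` on a simulated bearing-like signal (all signal parameters below are illustrative assumptions, not the paper's):

```python
import numpy as np
from scipy.signal import hilbert

# Simulated vibration signal: a 100 Hz fault frequency amplitude-modulating
# a 2 kHz structural resonance, plus white noise.  Parameters are illustrative.
fs = 20_000                                          # sampling rate [Hz]
t = np.arange(0, 1.0, 1 / fs)
modulation = 1 + 0.5 * np.cos(2 * np.pi * 100 * t)   # 100 Hz fault frequency
signal = modulation * np.sin(2 * np.pi * 2000 * t)
signal += 0.1 * np.random.default_rng(0).standard_normal(t.size)

# Envelope via the analytic signal (Hilbert transform).
envelope = np.abs(hilbert(signal))

# The fault frequency appears as a peak in the envelope spectrum.
spectrum = np.abs(np.fft.rfft(envelope - envelope.mean()))
freqs = np.fft.rfftfreq(t.size, 1 / fs)
peak_freq = freqs[np.argmax(spectrum)]
print(f"dominant envelope frequency: {peak_freq:.0f} Hz")  # ~100 Hz
```

The demodulation works because the fault impulses modulate the resonance carrier; removing the carrier with the analytic signal exposes the modulation frequency directly.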
Abstract:
The signal processing techniques developed for the diagnostics of mechanical components operating in stationary conditions are often inapplicable, or lose effectiveness, when applied to signals measured in transient conditions. In this chapter, an original signal processing tool is developed by exploiting data-adaptive techniques such as Empirical Mode Decomposition and Minimum Entropy Deconvolution together with the analytical approach of the Hilbert transform. The tool has been developed to detect localized faults on bearings of high-speed-train traction systems, and in non-stationary conditions it detects faults more effectively than signal processing tools based on envelope analysis or spectral kurtosis, which have until now been the benchmark for bearing diagnostics.
Abstract:
BACKGROUND: The objective of this study was to determine whether it is possible to predict driving safety in individuals with homonymous hemianopia or quadrantanopia from a clinical review of neuro-images that are routinely available in clinical practice. METHODS: Two experienced neuro-ophthalmologists viewed summary reports of the CT/MRI scans of 16 participants with homonymous hemianopic or quadrantanopic field defects, which provided information regarding the site and extent of the lesion, and predicted whether each participant would be safe or unsafe to drive. Driving safety was independently defined by two measures: (1) the potential for safe driving, determined through a standardized on-road driving assessment conducted just prior to the study by a certified driving rehabilitation specialist; and (2) state-recorded motor vehicle crashes (all crashes and at-fault) over the previous 5 years. RESULTS: The ability to predict driving safety was highly variable regardless of the driving outcome measure, ranging from 31% to 63% (kappa levels ranged from -0.29 to 0.04). The level of agreement between the neuro-ophthalmologists was also only fair (kappa = 0.28). CONCLUSIONS: The findings suggest that clinical evaluation of summary reports of currently available neuro-images by neuro-ophthalmologists is not predictive of driving safety. Future research should be directed at identifying and/or developing alternative tests or strategies to better enable clinicians to make these predictions.
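The kappa values quoted above measure chance-corrected agreement. A minimal sketch of Cohen's kappa for two raters, using hypothetical safe/unsafe ratings rather than the study's data:

```python
def cohen_kappa(a, b):
    """Cohen's kappa: chance-corrected agreement between two raters."""
    n = len(a)
    observed = sum(x == y for x, y in zip(a, b)) / n
    labels = set(a) | set(b)
    expected = sum((a.count(l) / n) * (b.count(l) / n) for l in labels)
    return (observed - expected) / (1 - expected)

# Hypothetical safe (1) / unsafe (0) predictions for 16 participants.
rater_a = [1, 1, 0, 0, 1, 0, 1, 1, 0, 1, 0, 0, 1, 1, 0, 1]
rater_b = [1, 0, 0, 1, 1, 0, 1, 0, 0, 1, 1, 0, 1, 1, 0, 0]
print(f"kappa = {cohen_kappa(rater_a, rater_b):.3f}")  # 0.375 here
```

Kappa of 0 means agreement no better than chance, 1 means perfect agreement; the near-zero values in the study indicate essentially chance-level prediction.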
Abstract:
MapReduce is a computation model for processing large data sets in parallel on large clusters of machines in a reliable, fault-tolerant manner. A MapReduce computation is broken down into a number of map tasks and reduce tasks, which are performed by so-called mappers and reducers, respectively. The placement of the mappers and reducers on the machines directly affects the performance and cost of the MapReduce computation in cloud computing. From a computational point of view, the mappers/reducers placement problem is a generalization of the classical bin packing problem, which is NP-complete. Thus, in this paper we propose a new heuristic algorithm for the mappers/reducers placement problem in cloud computing and evaluate it against several other heuristics on solution quality and computation time over a set of test problems with various characteristics. The computational results show that our heuristic algorithm is much more efficient than the other heuristics and can obtain a better solution in a reasonable time. Furthermore, we verify its effectiveness by comparing the mapper/reducer placement it generates for a benchmark problem with a conventional placement that puts a fixed number of mappers/reducers on each machine. The comparison shows that the computation using our placement is much cheaper than the computation using the conventional placement while still meeting the computation deadline.
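The paper's own heuristic is not spelled out in the abstract; as a sketch of the bin-packing view of mapper/reducer placement, the classical first-fit-decreasing heuristic can be written as follows (task demands and machine capacity are hypothetical):

```python
def first_fit_decreasing(demands, capacity):
    """Place tasks (mappers/reducers) with given resource demands onto the
    fewest machines of equal capacity, using the first-fit-decreasing
    bin-packing heuristic.  Returns a list of machines (lists of demands)."""
    machines, loads = [], []
    for d in sorted(demands, reverse=True):
        for i, load in enumerate(loads):
            if load + d <= capacity:      # fits on an already-open machine
                machines[i].append(d)
                loads[i] += d
                break
        else:
            machines.append([d])          # open a new machine
            loads.append(d)
    return machines

# Hypothetical task demands (normalised CPU shares), machine capacity 1.0.
placement = first_fit_decreasing([0.5, 0.7, 0.5, 0.2, 0.4, 0.2, 0.5], 1.0)
print(len(placement), "machines used")
```

First-fit-decreasing is a standard baseline for bin packing; the paper's heuristic presumably improves on baselines like this for the placement-specific cost model.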
Abstract:
This paper proposes an approach to achieve resilient navigation for indoor mobile robots. Resilient navigation seeks to mitigate the impact of control, localisation, or map errors on the safety of the platform while preserving the robot's ability to achieve its goal. We show that resilience to unpredictable errors can be achieved by combining the benefits of independent and complementary algorithmic approaches to navigation, or modalities, each tuned to a particular type of environment or situation. In this paper, the modalities comprise a path planning method and a reactive motion strategy. While the robot navigates, a Hidden Markov Model continually estimates the most appropriate modality based on two types of information: context (information known a priori) and monitoring (evaluation of unpredictable aspects of the current situation). The robot then uses the recommended modality, switching between one and another dynamically. Experimental validation with a Segway RMP-based platform in an office environment shows that our approach enables failure mitigation while maintaining the safety of the platform. The robot is shown to reach its goal in the presence of: 1) unpredicted control errors, 2) unexpected map errors, and 3) a large injected localisation fault.
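A minimal sketch of the HMM filtering update such a modality estimator could use, with illustrative transition and observation models (the numbers below are assumptions, not the paper's):

```python
import numpy as np

# Two navigation modalities: 0 = path planner, 1 = reactive strategy.
# Transition and observation models below are illustrative assumptions.
T = np.array([[0.9, 0.1],       # P(next modality | current modality)
              [0.2, 0.8]])
O = np.array([[0.8, 0.2],       # P(observation | modality)
              [0.3, 0.7]])      # observations: 0 = nominal, 1 = anomaly

def forward_step(belief, obs):
    """One HMM filtering update: predict with T, correct with O, normalise."""
    predicted = T.T @ belief
    updated = O[:, obs] * predicted
    return updated / updated.sum()

belief = np.array([0.5, 0.5])
for obs in [0, 0, 1, 1, 1]:     # monitoring reports: nominal, then anomalies
    belief = forward_step(belief, obs)
    print(f"obs={obs}  P(planner)={belief[0]:.2f}  P(reactive)={belief[1]:.2f}")

recommended = ["path planner", "reactive strategy"][int(belief.argmax())]
```

As anomaly observations accumulate, the belief shifts toward the reactive modality, which is the switching behaviour the paper describes.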
Abstract:
Considering the wide spectrum of situations it may encounter, a robot navigating autonomously in outdoor environments needs to be endowed with several operating modes, for robustness and efficiency reasons. Indeed, the terrain to traverse may be composed of flat or rough areas, low-cohesion soils such as sand dunes, concrete roads, etc. Traversing these various kinds of environment calls for different navigation and/or locomotion functionalities, especially if the robot has different locomotion abilities, as do the robots WorkPartner, Hylos [4], Nomad or the Marsokhod rovers. Numerous rover navigation techniques have been proposed, each suited to a particular environment context (e.g. path following, obstacle avoidance in more or less cluttered environments, rough terrain traverses). However, few contributions in the literature tackle the problem of autonomously selecting the best-suited mode [3]. Most existing work is indeed devoted to the passive analysis of a single navigation mode, as in [2]. Fault detection is of course essential: one can imagine that proper monitoring of the Mars Exploration Rover Opportunity could have prevented the rover from being stuck in a dune for several weeks, by detecting the non-nominal behavior of some parameters. But the ability to recover from the anticipated problem by switching to a better-suited navigation mode would bring higher autonomy, and therefore better overall efficiency. We propose here a probabilistic framework that achieves this by fusing environment-related and robot-related information in order to actively control the rover operations.
Abstract:
The purpose of this paper is to introduce the concept of hydraulic damage and its numerical integration. Unlike common phenomenological continuum damage mechanics approaches, the procedure introduced in this paper relies on mature concepts of homogenization, linear fracture mechanics, and thermodynamics. The model is applied to the problem of fault reactivation within resource reservoirs. The results show that the propagation of weaknesses is strongly driven by property contrasts in porous media; in particular, it is affected by the fracture toughness of the host rocks. Hydraulic damage is diffuse when it takes place within extended geological units and localized at interfaces and faults.
Abstract:
The Warburton-Cooper basins, central Australia, include a multitude of reactivated fracture-fault networks related to a complex, and poorly understood, tectonic evolution. We investigated authigenic illites from a granitic intrusion and from sedimentary rocks associated with prominent structural features (the Gidgealpa-Merrimelia-Innamincka Ridge and the Nappamerri Trough). These were analysed by 40Ar-39Ar, 87Rb-87Sr and 147Sm-143Nd geochronology to explore the thermal and tectonic histories of central Australian basins. The combined age data provide evidence for three major periods of fault reactivation throughout the Phanerozoic. While the Carboniferous (323.3 ± 9.4 Ma) and Late Triassic (201.7 ± 9.3 Ma) ages derive from basin-wide hydrothermal circulation, the Cretaceous ages (~128 to ~86 Ma) reflect episodic fluid flow events restricted to the synclinal Nappamerri Trough. Such events result from regional extensional tectonism derived from the transfer of far-field stresses to mechanically and thermally weakened regions of the Australian continent. Specifically, the Cretaceous ages reflect continent-wide transmission of tensional stress from a >2500 km long rifting event on the eastern (and southern) Australian margin associated with the break-up of Gondwana and the opening of the Tasman Sea. By integrating 40Ar-39Ar, 87Rb-87Sr and 147Sm-143Nd dating, this study highlights the use of authigenic illite in temporally constraining a tectonic evolution of intracontinental basins that would otherwise remain unknown. Furthermore, combining the Sr- and Ar-isotopic systems enables more accurate dating of authigenesis whilst significantly reducing geochemical pitfalls commonly associated with these radioisotopic dating methods.
Abstract:
Geoscientists are confronted with the challenge of assessing nonlinear phenomena that result from multiphysics coupling across multiple scales, from the quantum level to the scale of the Earth and from femtoseconds to the 4.5 Ga history of our planet. We neglect in this review electromagnetic modelling of the processes in the Earth's core, and focus on four types of couplings that underpin fundamental instabilities in the Earth. These are thermal (T), hydraulic (H), mechanical (M) and chemical (C) processes, which are driven and controlled by the transfer of heat to the Earth's surface. Instabilities appear as faults, folds, compaction bands, shear/fault zones, plate boundaries and convective patterns. Convective patterns emerge from buoyancy overcoming viscous drag at a critical Rayleigh number. All other processes emerge from non-conservative thermodynamic forces with a critical dissipative source term, which can be characterised by the modified Gruntfest number Gr. These dissipative processes reach a quasi-steady state when, at maximum dissipation, THMC diffusion (Fourier, Darcy, Biot, Fick) balances the source term. The emerging steady-state dissipative patterns are defined by the respective diffusion length scales. These length scales provide a fundamental thermodynamic yardstick for measuring instabilities in the Earth. The implementation of a fully coupled THMC multiscale theoretical framework into an applied workflow is still in its early stages. This is largely owing to the four fundamentally different THMC diffusion length scales, spanning micrometres to tens of kilometres, compounded by the additional need to consider microstructure information in the formulation of enriched continua for THMC feedback simulations (i.e., a microstructure-enriched continuum formulation).
Another challenge is the important factor of time, which implies that the geomaterial is often very far from initial yield and flowing on a time scale that cannot be accessed in the laboratory. This leads to the requirement of adopting a thermodynamic framework in conjunction with flow theories of plasticity. Unlike consistency plasticity, this framework allows the description of both solid-mechanical and fluid-dynamic instabilities. In the applications we show the similarity of THMC feedback patterns across scales, such as brittle and ductile folds and faults. A particularly interesting case is discussed in detail, in which ductile compaction bands appear out of the fluid-dynamic solution; these are akin to, and can be confused with, their brittle siblings. The main differences are that they require the factor of time and much lower driving forces to emerge. These low-stress solutions cannot be obtained on short laboratory time scales and are therefore much more likely to appear in nature than in the laboratory. We finish with a multiscale description of a seminal structure in the Swiss Alps, the Glarus thrust, which puzzled geologists for more than 100 years. Along the Glarus thrust, a km-scale package of rocks (nappe) was pushed 40 km over its footwall as a solid rock body. The thrust itself is a m-wide ductile shear zone, while the centre of the thrust shows a mm-cm wide central slip zone experiencing periodic extreme deformation akin to a stick-slip event. The m-wide creeping zone is consistent with the THM feedback length scale of solid mechanics, while the ultralocalised central slip zone is most likely a fluid-dynamic instability.
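The Rayleigh-number criterion for the onset of convection mentioned above can be illustrated with a back-of-the-envelope check; the property values below are generic mantle-like figures chosen for illustration, not taken from the review:

```python
# Back-of-the-envelope Rayleigh number for a fluid layer heated from below,
# Ra = rho * g * beta * dT * L^3 / (mu * kappa).  Values are illustrative.
rho = 3300.0      # density [kg/m^3]
g = 9.81          # gravity [m/s^2]
beta = 3e-5       # thermal expansivity [1/K]
dT = 1000.0       # temperature contrast across the layer [K]
L = 700e3         # layer thickness [m]
kappa = 1e-6      # thermal diffusivity [m^2/s]
mu = 1e21         # dynamic viscosity [Pa s]

Ra = rho * g * beta * dT * L**3 / (mu * kappa)
Ra_crit = 1708.0  # critical value for a layer with rigid boundaries
print(f"Ra = {Ra:.2e}, convective: {Ra > Ra_crit}")
```

Buoyancy overcomes viscous drag, and convective patterns emerge, once Ra exceeds its critical value.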
Abstract:
MapReduce is a computation model for processing large data sets in parallel on large clusters of machines in a reliable, fault-tolerant manner. A MapReduce computation is broken down into a number of map tasks and reduce tasks, which are performed by so-called mappers and reducers, respectively. The placement of the mappers and reducers on the machines directly affects the performance and cost of the MapReduce computation. From a computational point of view, the mappers/reducers placement problem is a generalization of the classical bin packing problem, which is NP-complete. Thus, in this paper we propose a new grouping genetic algorithm for the mappers/reducers placement problem in cloud computing. Compared with the original grouping genetic algorithm, ours uses an innovative coding scheme and eliminates the inversion operator, which is an essential operator in the original algorithm. The new grouping genetic algorithm is evaluated by experiments, and the results show that it is much more efficient than four popular algorithms for the problem, including the original grouping genetic algorithm.
Abstract:
Effective machine fault prognostic technologies can eliminate unscheduled downtime, increase machine useful life and consequently reduce maintenance costs, as well as prevent human casualties in real engineering asset management. This paper presents a technique for accurate assessment of the remnant life of machines based on a health state probability estimation technique and historical failure knowledge embedded in a closed-loop diagnostic and prognostic system. To estimate a discrete machine degradation state that can effectively represent the complex nature of machine degradation, the proposed prognostic model employs a classification algorithm that can use a number of damage-sensitive features, in contrast to conventional time series analysis techniques, for accurate long-term prediction. To validate the feasibility of the proposed model, data at five degradation levels for four typical faults of High Pressure Liquefied Natural Gas (HP-LNG) pumps were used to compare intelligent diagnostic tests using five different classification algorithms. In addition, two sets of impeller-rub data were analysed and employed to predict the remnant life of a pump based on health state probability estimation using the Support Vector Machine (SVM) classifier. The results obtained were very encouraging and showed that the proposed prognostic system has the potential to be used as a tool for machine remnant life prediction in real-life industrial applications.
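A minimal sketch of health-state probability estimation with an SVM classifier, in the spirit of the approach described above; the two features and the synthetic clusters are illustrative assumptions, not the HP-LNG pump data:

```python
import numpy as np
from sklearn.svm import SVC

# Synthetic two-feature data (e.g. RMS and kurtosis of vibration) for three
# health states; cluster centres and spreads are illustrative assumptions.
rng = np.random.default_rng(0)
states = ["healthy", "degraded", "faulty"]
centres = np.array([[0.0, 0.0], [2.0, 1.0], [4.0, 3.0]])
X = np.vstack([c + 0.3 * rng.standard_normal((30, 2)) for c in centres])
y = np.repeat(np.arange(3), 30)

# probability=True enables per-state probability estimates (Platt scaling).
clf = SVC(kernel="rbf", probability=True, random_state=0).fit(X, y)

# Health-state probabilities for a new feature vector near the "degraded" centre.
probs = clf.predict_proba([[2.1, 0.9]])[0]
for s, p in zip(states, probs):
    print(f"P({s}) = {p:.2f}")
```

In a prognostic setting, the sequence of such state probabilities over time, combined with historical failure knowledge, drives the remnant-life estimate.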
Abstract:
The industrial transformer is one of the most critical assets in the power and heavy industries, and transformer failures can cause enormous losses. Poor joints in the electrical circuit of a transformer can cause overheating and result in stress concentration in the structure, which is a major cause of catastrophic failure. Little research has focused on the mechanical properties of industrial transformers under overheating conditions. In this paper, both the mechanical and thermal properties of industrial transformers are jointly investigated using Finite Element Analysis (FEA). A dynamic response analysis is conducted on a modified transformer FEA model, and the computational results are compared with experimental results from the literature to validate the simulation model. Based on the FEA model, thermal stress is calculated under different temperature conditions. These analysis results provide insight into the failure of transformers due to overheating, and are therefore valuable for assessing winding faults, especially in the manufacturing and maintenance of large transformers.
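The overheating-induced stress mechanism can be illustrated with the first-order relation for a fully constrained conductor, sigma = E * alpha * dT; the material values below are generic textbook figures for copper, not the paper's FEA inputs:

```python
# First-order estimate of thermal stress in a fully constrained conductor,
# sigma = E * alpha * dT.  Material values are textbook copper figures and
# serve only to illustrate the overheating mechanism.
E = 110e9        # Young's modulus of copper [Pa]
alpha = 17e-6    # coefficient of thermal expansion [1/K]
dT = 80.0        # temperature rise above the stress-free state [K]

sigma = E * alpha * dT
print(f"thermal stress: {sigma / 1e6:.0f} MPa")
```

Even a modest temperature rise produces stresses of the order of 100 MPa when expansion is constrained, which is why localized overheating at poor joints drives stress concentration.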
Abstract:
The current stage of development of automatic control and guidance systems for small reusable unmanned aerial vehicles (UAVs) is driven by demanding requirements on the autonomy, accuracy and size of these systems. These contradictory requirements dictate tight functional and algorithmic coupling of several different onboard sensors into one computational process based on methods of optimal filtering. Nowadays, fusion of data from micro-electro-mechanical inertial measurement units and barometric pressure sensors with signals from global navigation satellite system (GNSS) receivers is widely used in strapdown inertial navigation systems (INS). However, such systems do not fully satisfy requirements such as jamming immunity, fault tolerance, autonomy and navigation accuracy. At the same time, significant progress has recently been demonstrated by navigation systems that apply the correlation-extremal principle to optical data flow and digital maps. This article proposes a new architecture for an automatic navigation management system (ANMS) for small UAVs, which combines the algorithms of a strapdown INS, satellite navigation and an optical navigation system.
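The optimal-filtering fusion described above can be illustrated with a minimal one-dimensional Kalman filter that corrects INS drift with intermittent GNSS fixes; all rates, biases and noise figures below are illustrative assumptions:

```python
import numpy as np

# 1-D sketch of INS/GNSS fusion: the inertial system integrates a biased,
# noisy velocity at 10 Hz, and 1 Hz GNSS position fixes correct the drift.
rng = np.random.default_rng(1)
dt, n = 0.1, 200
true_pos = np.cumsum(np.full(n, 5.0 * dt))   # constant 5 m/s motion

x, P = 0.0, 1.0          # position estimate and its variance
Q, R = 0.05, 4.0         # process (INS drift) and GNSS measurement variances
errors = []
for k in range(n):
    # Predict: integrate the inertial velocity (with bias + noise).
    ins_vel = 5.0 + 0.3 + 0.2 * rng.standard_normal()
    x += ins_vel * dt
    P += Q
    # Correct with a GNSS fix every 10 steps (1 Hz vs 10 Hz INS).
    if k % 10 == 0:
        z = true_pos[k] + rng.standard_normal() * np.sqrt(R)
        K = P / (P + R)              # Kalman gain
        x += K * (z - x)
        P *= (1 - K)
    errors.append(abs(x - true_pos[k]))

print(f"mean position error: {np.mean(errors):.2f} m")
```

Without the GNSS corrections the 0.3 m/s velocity bias alone would drift the position estimate by 6 m over the run; the filter keeps the error bounded, which is the essence of the tight-coupling approach.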
Abstract:
This paper evaluates and proposes compensation methods for three-level Z-source inverters under semiconductor-failure conditions. Unlike the fault-tolerant techniques used in traditional three-level inverters, where either an extra phase-leg or collective switching states are used, the proposed methods for three-level Z-source inverters simply reconfigure the relevant gating signals so as to ride through the failed-semiconductor conditions smoothly, without any significant decrease in ac-output quality or amplitude. These features are partly attributed to the inherent boost characteristic of a Z-source inverter, in addition to its usual voltage-buck operation. Focusing on specific types of three-level Z-source inverters, it can also be shown that dual Z-source inverters have the unique extra ability to force the common-mode voltage to zero even under semiconductor-failure conditions. To verify the described performance features, PLECS simulations and experimental tests were performed, with representative results presented in a later section for visual confirmation.
Abstract:
Since 1995 the eruption of the andesitic Soufrière Hills Volcano (SHV), Montserrat, has been studied in substantial detail. As an important contribution to this effort, the Seismic Experiment with Airgun-source, Caribbean Andesitic Lava Island Precision Seismo-geodetic Observatory (SEA-CALIPSO) experiment was devised to image the arc crust underlying Montserrat and, if possible, the magma system at SHV using tomography and reflection seismology. Field operations were carried out in October-December 2007, with 238 seismometers deployed on land supplementing seven volcano observatory stations, and an array of 10 ocean-bottom seismometers deployed offshore. The RRS James Cook, on NERC cruise JC19, towed a tuned airgun array plus a digital 48-channel streamer on encircling and radial tracks around Montserrat for 77 h during December 2007, firing 4414 airgun shots and yielding about 47 Gb of data. The main objectives of the experiment were achieved. Preliminary analyses of these data, published in 2010, generated images of heterogeneous high-velocity bodies representing the cores of volcanoes and subjacent intrusions, and of shallow areas of low velocity on the flanks of the island that reflect volcaniclastic deposits and hydrothermal alteration. The resolution of this preliminary work did not extend beyond 5 km depth. An improved three-dimensional (3D) seismic velocity model was then obtained by inversion of 181 665 first-arrival travel times from a more complete sampling of the dataset, yielding clear images, to 7.5 km depth, of a low-velocity volume interpreted as the magma chamber that feeds the current eruption, with an estimated volume of 13 km3. Coupled thermal and seismic modelling revealed properties of the partly crystallized magma. Seismic reflection analyses aimed at imaging structures under southern Montserrat had limited success, but suggest subhorizontal layering, interpreted as sills, at depths between 6 and 19 km.
Seismic reflection profiles collected offshore reveal deep fans of volcaniclastic debris and fault offsets, leading to new tectonic interpretations. This chapter presents the project goals and planning concepts, describes in detail the campaigns at sea and on land, summarizes the major results, and identifies the key lessons learned.