40 results for Detection and fault location
in CentAUR: Central Archive at the University of Reading - UK
Abstract:
In this work, a hybrid technique combining probabilistic and optimization-based methods is presented. The method is applied, both in simulation and in real-time experiments, to the heating unit of a Heating, Ventilation and Air Conditioning (HVAC) system. It is shown that the addition of the probabilistic approach improves fault diagnosis accuracy.
Abstract:
In this paper, various types of fault detection methods for fuel cells are compared: those that use a model-based approach, a data-driven approach, or a combination of the two. The potential advantages and drawbacks of each method are discussed and comparisons between methods are made. In particular, classification algorithms are investigated, which separate a data set into classes or clusters based on some prior knowledge or measure of similarity. Specifically, the application of classification methods to vectors of currents reconstructed by magnetic tomography, or directly to vectors of magnetic field measurements, is explored. Bases are simulated using the finite integration technique (FIT), and regularization techniques are employed to overcome ill-posedness. Fisher's linear discriminant is used to illustrate these concepts. Numerical experiments show that the ill-posedness of the magnetic tomography problem is also present in the classification problem on magnetic field measurements. This is independent of the particular working mode of the cell but is influenced by the type of faulty behaviour studied. The numerical results demonstrate the ill-posedness through the exponential decay of the singular values for three examples of fault classes.
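As a minimal sketch of the Fisher's linear discriminant step mentioned above, the snippet below separates two synthetic classes of measurement vectors. The Gaussian data stand in for the FIT-simulated magnetic-field vectors of the paper, which are not reproduced here; all dimensions and separations are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins for magnetic-field measurement vectors of a
# healthy cell (class 0) and a faulty cell (class 1).
n, d = 200, 10
X0 = rng.normal(0.0, 1.0, (n, d))
X1 = rng.normal(1.5, 1.0, (n, d))

m0, m1 = X0.mean(axis=0), X1.mean(axis=0)
# Within-class scatter matrix (sum of per-class covariances)
Sw = np.cov(X0, rowvar=False) + np.cov(X1, rowvar=False)
# Fisher's discriminant direction: w proportional to Sw^-1 (m1 - m0)
w = np.linalg.solve(Sw, m1 - m0)
# Decision threshold midway between the projected class means
threshold = 0.5 * ((X0 @ w).mean() + (X1 @ w).mean())

# Classify by projecting onto w and thresholding
pred0 = (X0 @ w) > threshold   # should mostly be False
pred1 = (X1 @ w) > threshold   # should mostly be True
accuracy = 0.5 * ((~pred0).mean() + pred1.mean())
```

In the ill-posed setting the abstract describes, `Sw` becomes badly conditioned and the `solve` step would need the same kind of regularization applied to the tomography problem.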
Abstract:
Future extreme-scale high-performance computing systems will be required to work under frequent component failures. The MPI Forum's User Level Failure Mitigation proposal has introduced an operation, MPI_Comm_shrink, to synchronize the surviving processes on the list of failed processes, so that applications can continue to execute even in the presence of failures by adopting algorithm-based fault tolerance techniques. This MPI_Comm_shrink operation requires a fault-tolerant failure detection and consensus algorithm. This paper presents and compares two novel failure detection and consensus algorithms. The proposed algorithms are based on Gossip protocols and are inherently fault-tolerant and scalable. They were implemented and tested using the Extreme-scale Simulator. The results show that, in both algorithms, the number of Gossip cycles needed to achieve global consensus scales logarithmically with system size. The second algorithm also shows better scalability in terms of memory and network bandwidth usage, and perfect synchronization in achieving global consensus.
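The logarithmic scaling claimed above can be illustrated with a toy push-gossip simulation: each cycle, every informed process tells one random peer, so the informed set at most doubles per cycle. This is an assumed simplification, not the paper's algorithms, and involves no real failures or MPI.

```python
import random

def gossip_cycles(n, seed=0):
    """Count push-gossip cycles until every one of n processes has
    learned of a failure first detected by process 0.  Each cycle,
    every informed process gossips to one peer chosen uniformly at
    random (a stand-in for the Gossip-based dissemination in the
    abstract)."""
    rng = random.Random(seed)
    informed = {0}
    cycles = 0
    while len(informed) < n:
        # Targets picked this cycle; duplicates and already-informed
        # targets are wasted messages, which is why the count exceeds
        # the ideal log2(n) doubling bound slightly.
        informed |= {rng.randrange(n) for _ in informed}
        cycles += 1
    return cycles

for n in (64, 256, 1024):
    print(n, gossip_cycles(n))
```

Doubling the system size adds roughly one cycle, which is the qualitative behaviour the abstract reports for both algorithms.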
Abstract:
A technique is presented for locating and tracking objects in cluttered environments. Agents are randomly distributed across the image and subsequently grouped around targets. Each agent uses a weightless neural network and a histogram intersection technique to score its location. The system has been used to locate and track a head in 320×240-resolution video at up to 15 fps.
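The histogram intersection score used by each agent can be sketched as follows; the 8-bin histograms are illustrative values, not data from the paper.

```python
import numpy as np

def histogram_intersection(h, model):
    """Histogram intersection score: sum of bin-wise minima,
    normalised by the model histogram's total mass, so an identical
    histogram scores 1.0 and a disjoint one scores 0.0."""
    return np.minimum(h, model).sum() / model.sum()

# Hypothetical 8-bin grey-level histograms: the target model and two
# candidate agent locations.
model      = np.array([10, 30, 25, 15, 10, 5,  3, 2], float)
on_target  = np.array([ 9, 28, 27, 14, 11, 6,  3, 2], float)
off_target = np.array([40,  5,  5,  5,  5, 5, 30, 5], float)

score_on = histogram_intersection(on_target, model)    # close to 1.0
score_off = histogram_intersection(off_target, model)  # much lower
```

Agents with high scores are the ones that end up grouped around the target.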
Abstract:
To ensure minimum loss of system security and revenue, it is essential that faults on underground cable systems be located and repaired rapidly. Currently in the UK, the impulse current method is used to prelocate faults before acoustic methods are used to pinpoint the fault location. The impulse current method is heavily dependent on the engineer's knowledge and experience in recognising and interpreting the transient waveforms produced by the fault. The development of a prototype real-time expert system to aid the prelocation of cable faults is described. Results from the prototype demonstrate the feasibility and benefits of the expert system as an aid for the diagnosis and location of faults on underground cable systems.
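The prelocation arithmetic underlying the transient interpretation can be sketched as a round-trip-time calculation. This is a simplified illustration, not the expert system's reasoning: the propagation velocity below is an assumed typical figure, and in practice the hard part is exactly what the abstract says, reading the time between transients off the waveform.

```python
def prelocate_fault(delta_t_us, velocity_m_per_us=160.0):
    """Estimate the distance to a cable fault from the time between
    successive fault transients.

    delta_t_us        : round-trip time between reflections, in us
    velocity_m_per_us : pulse propagation velocity in the cable
                        (~160 m/us is an assumed typical value).
    The surge travels to the fault and back, hence the factor of 2.
    """
    return velocity_m_per_us * delta_t_us / 2.0

# A 10 us interval between transients at 160 m/us puts the fault
# roughly 800 m down the cable.
distance_m = prelocate_fault(10.0)
```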
Abstract:
The variability of results from different automated methods for the detection and tracking of extratropical cyclones is assessed in order to identify uncertainties related to the choice of method. Fifteen international teams applied their own algorithms to the same dataset: the 1989–2009 period of the European Centre for Medium-Range Weather Forecasts (ECMWF) interim reanalysis (ERA-Interim) data. This experiment is part of the community project Intercomparison of Mid-Latitude Storm Diagnostics (IMILAST; see www.proclim.ch/imilast/index.html). The spread of results for cyclone frequency, intensity, life cycle, and track location is presented to illustrate the impact of using different methods. Globally, methods agree well on the geographical distribution in large oceanic regions, the interannual variability of cyclone numbers, the geographical patterns of strong trends, and the distribution shape of many life cycle characteristics. In contrast, the largest disparities exist for the total numbers of cyclones, the detection of weak cyclones, and the distribution in some densely populated regions. Consistency between methods is better for strong cyclones than for shallow ones. Two case studies of relatively large, intense cyclones reveal that the identification of the most intense part of the life cycle of these events is robust between methods, but considerable differences exist during the development and dissolution phases.
Abstract:
This paper reports the current state of work to simplify our previous model-based methods for the visual tracking of vehicles, for use in a real-time system intended to provide continuous monitoring and classification of traffic from a fixed camera on a busy multi-lane motorway. The main constraints of the system design were: (i) all low-level processing to be carried out by low-cost auxiliary hardware; (ii) all 3-D reasoning to be carried out automatically off-line, at set-up time. The system developed uses three main stages: (i) pose and model hypothesis using 1-D templates, (ii) hypothesis tracking, and (iii) hypothesis verification using 2-D templates. Stages (i) and (iii) have radically different computational performance and costs, and need to be carefully balanced for efficiency. Together, they provide an effective way to locate, track and classify vehicles.
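The 1-D template stage can be sketched as normalised cross-correlation along a scan line; this is an assumed generic implementation, not the paper's pose-hypothesis machinery, and the edge-like template and offsets are invented for illustration.

```python
import numpy as np

def match_1d_template(signal, template):
    """Slide a 1-D template along a signal and return the offset with
    the highest normalised cross-correlation score (zero-mean, unit-
    norm on both template and window, so scores lie in [-1, 1])."""
    t = template - template.mean()
    t /= np.linalg.norm(t)
    best_score, best_off = -np.inf, 0
    for off in range(len(signal) - len(template) + 1):
        w = signal[off:off + len(template)]
        w = w - w.mean()
        nrm = np.linalg.norm(w)
        if nrm == 0.0:
            continue  # flat window: no correlation defined
        score = float(t @ (w / nrm))
        if score > best_score:
            best_score, best_off = score, off
    return best_off, best_score

# Embed a known intensity profile at offset 30 in a noisy scan line.
rng = np.random.default_rng(1)
template = np.array([0, 0, 1, 3, 5, 3, 1, 0, 0], float)
signal = rng.normal(0.0, 0.1, 100)
signal[30:39] += template
offset, score = match_1d_template(signal, template)
```

The 2-D verification stage of the paper plays the same game with image patches, which is why the two stages trade off cost against discriminative power.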
Abstract:
Classical computer vision methods can only weakly emulate some of the multi-level parallelism in signal processing and information sharing that takes place in different parts of the primate visual system, which enables it to accomplish many diverse functions of visual perception. One of the main functions of primate vision is to detect and recognise objects in natural scenes despite all the linear and non-linear variations of the objects and their environment. The superior performance of the primate visual system compared with what machine vision systems have achieved to date motivates scientists and researchers to further explore this area in pursuit of more efficient vision systems inspired by natural models. In this paper, building blocks for a hierarchical, efficient object recognition model are proposed. Incorporating attention-based processing would lead to a system that processes the visual data non-linearly, focusing only on the regions of interest and hence reducing the time needed to achieve real-time performance. Further, it is suggested to modify the visual cortex model for recognising objects by adding non-linearities in the ventral path, consistent with earlier discoveries reported by researchers in the neurophysiology of vision.
Abstract:
Techniques for obtaining quantitative values of the temperatures and concentrations of remote hot gaseous effluents from their measured passive emission spectra have been examined in laboratory experiments. The high sensitivity of the spectrometer in the vicinity of the 2397 cm⁻¹ band head of CO₂ has allowed the gas temperature to be calculated from the relative intensities of the observed rotational lines. The spatial distribution of CO₂ in a methane flame has been reconstructed tomographically using a matrix inversion technique. The spectrometer has been calibrated against a black-body source at different temperatures, and a self-absorption correction has been applied to the data, avoiding the need to measure the transmission directly. Reconstruction artifacts have been reduced by applying a smoothing routine to the inversion matrix.
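A minimal sketch of a smoothed (regularised) matrix inversion of the kind described above: a toy forward model maps an unknown spatial concentration profile to line-of-sight measurements, and a Tikhonov term stands in for the smoothing applied to the inversion matrix. The projection matrix, profile, and noise level are all assumptions, not the instrument geometry from the paper.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy forward model: each of m line-of-sight measurements integrates
# the unknown concentration over n spatial cells (hypothetical A).
m, n = 40, 20
A = rng.random((m, n))
x_true = np.exp(-0.5 * ((np.arange(n) - n / 2) / 3.0) ** 2)  # flame-like profile
b = A @ x_true + rng.normal(0.0, 0.01, m)                    # noisy measurements

# Regularised inversion: x = (A^T A + lam I)^-1 A^T b.  The lam term
# damps the small singular values that would otherwise amplify noise
# into reconstruction artifacts.
lam = 1e-2
x_rec = np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)

rel_err = np.linalg.norm(x_rec - x_true) / np.linalg.norm(x_true)
```

Increasing `lam` trades fidelity for smoothness, which is the same artifact-suppression trade-off the abstract describes.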
Abstract:
The distribution of sulphate-reducing bacteria (SRB) in the sediments of the Colne River estuary, Essex, UK, covering different salinities of the sediment porewater, was investigated using quantitative competitive PCR. Here, we show that a new PCR primer set and a new quantitative PCR method are useful tools for the detection and enumeration of SRB in natural environments. A PCR primer set selective for the dissimilatory sulphite reductase gene (dsr) of SRB was designed. PCR amplification using the single set of dsr-specific primers produced PCR products of the expected size from all 27 SRB strains tested, including Gram-negative and Gram-positive species. Sixty clones derived from sediment DNA using the primers were sequenced, and all were closely related to the predicted dsr of SRB. These results indicate that PCR using the newly designed primer set is useful for the selective detection of SRB from a natural sample. This primer set was used to estimate cell numbers by dsr-selective competitive PCR using a competitor that was about 20% shorter than the targeted region of dsr. This procedure was applied to sediment samples from the River Colne estuary, Essex, UK, together with simultaneous measurement of in situ rates of sulphate reduction. High densities of SRB, ranging from 0.2 to 5.7 × 10⁸ cells ml⁻¹ wet sediment, were estimated by the competitive PCR, assuming that all SRB have a single copy of dsr. Using these estimates, cell-specific sulphate reduction rates of 10⁻¹⁷ to 10⁻¹⁵ mol of SO₄²⁻ cell⁻¹ day⁻¹ were calculated, which is within the range of, or lower than, rates previously reported for pure cultures of SRB. Our results show that the newly developed competitive PCR technique targeting dsr is a powerful tool for rapid and reproducible estimation of SRB numbers in situ and is superior to culture-dependent techniques.
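The cell-specific rate calculation above is a simple quotient, sketched below. The cell densities come from the abstract; the bulk sulphate reduction rate is an assumed illustrative value (the paper's measured rates are not given in the abstract), chosen only so the quotient lands in the reported window.

```python
# Cell-specific rate = bulk sulphate reduction rate / SRB density,
# assuming one dsr copy per cell (as in the abstract).
densities = (0.2e8, 5.7e8)   # cells ml^-1 wet sediment (from abstract)
bulk_rate = 1e-8             # mol SO4^2- ml^-1 day^-1 (assumed, illustrative)

per_cell_rates = [bulk_rate / d for d in densities]
# Both quotients fall in the 1e-17 to 1e-15 mol cell^-1 day^-1 range
# reported in the abstract.
```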
Abstract:
This paper proposes a new iterative algorithm for joint OFDM data detection and phase noise (PHN) cancellation based on the minimum mean square prediction error. We particularly highlight the problem of overfitting, whereby the iterative approach may converge to a trivial solution. Although it is central to this joint approach, the overfitting problem has received relatively little attention in existing algorithms. Specifically, in this paper we apply a hard decision procedure at every iterative step to overcome the overfitting. Moreover, compared with existing algorithms, a more accurate Padé approximation is used to represent the phase noise, and a more robust and compact fast process based on Givens rotations is proposed to reduce the complexity to a practical level. Numerical simulations are given to verify the proposed algorithm.
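The Givens-rotation building block mentioned above can be sketched as a plane-rotation QR factorisation; this is the generic numerical tool, shown on a small real matrix, not the paper's OFDM system model or its fast update process.

```python
import numpy as np

def givens(a, b):
    """Return (c, s) such that [[c, s], [-s, c]] @ [a, b] = [r, 0]."""
    if b == 0.0:
        return 1.0, 0.0
    r = np.hypot(a, b)
    return a / r, b / r

def givens_qr(A):
    """QR factorisation by Givens rotations: each rotation zeroes one
    subdiagonal entry while touching only two rows, which is what
    makes Givens-based updates cheap and numerically robust."""
    m, n = A.shape
    R = A.astype(float).copy()
    Q = np.eye(m)
    for j in range(n):
        for i in range(m - 1, j, -1):
            c, s = givens(R[i - 1, j], R[i, j])
            G = np.array([[c, s], [-s, c]])
            R[i - 1:i + 1, :] = G @ R[i - 1:i + 1, :]
            Q[:, i - 1:i + 1] = Q[:, i - 1:i + 1] @ G.T
    return Q, R

A = np.array([[4.0, 1.0],
              [2.0, 3.0],
              [1.0, 2.0]])
Q, R = givens_qr(A)   # Q orthogonal, R upper triangular, Q @ R == A
```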
Abstract:
This paper specifically examines the implantation of a microelectrode array into the median nerve of the left arm of a healthy male volunteer. The objective was to establish a bi-directional link between the human nervous system and a computer via a unique interface module. This is the first time that such a device has been used with a healthy human. The aim of the study was to assess the efficacy, compatibility, and long-term operability of the neural implant in allowing the subject to perceive feedback stimulation, and in allowing neural activity to be detected and processed such that the subject could interact with remote technologies. A case study demonstrating real-time control of an instrumented prosthetic hand by means of the bi-directional link is given. The implantation did not result in infection, and scanning electron microscope images of the implant post-extraction have not indicated significant rejection of the implant by the body. No perceivable loss of hand sensation or motion control was experienced by the subject while the implant was in place, and further testing of the subject following the removal of the implant has not indicated any measurable long-term defects. The implant was extracted after 96 days. Copyright © 2004 John Wiley & Sons, Ltd.