192 results for Fault location algorithms
in CentAUR: Central Archive University of Reading - UK
Abstract:
To ensure minimum loss of system security and revenue it is essential that faults on underground cable systems be located and repaired rapidly. Currently in the UK, the impulse current method is used to prelocate faults, prior to using acoustic methods to pinpoint the fault location. The impulse current method is heavily dependent on the engineer's knowledge and experience in recognising/interpreting the transient waveforms produced by the fault. The development of a prototype real-time expert system aid for the prelocation of cable faults is described. Results from the prototype demonstrate the feasibility and benefits of the expert system as an aid for the diagnosis and location of faults on underground cable systems.
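At its core, impulse-current prelocation infers the distance to the fault from the timing of successive transient reflections. The sketch below shows only that underlying calculation; the propagation velocity is an illustrative value (real cables vary with insulation type), and the expert system described above automates the harder step of recognising the reflections in the waveform.

```python
def prelocate_fault(delta_t_us: float, velocity_m_per_us: float = 80.0) -> float:
    """Estimate the distance to a cable fault (metres) from the time between
    successive reflections of the impulse transient (microseconds). The surge
    travels to the fault and back, hence the factor of 2; the default
    velocity is purely illustrative."""
    return velocity_m_per_us * delta_t_us / 2.0

# Example: reflections 12.5 us apart at 80 m/us put the fault ~500 m away.
print(prelocate_fault(12.5))  # -> 500.0
```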
Abstract:
A multi-layered architecture of self-organizing neural networks is being developed as part of an intelligent alarm processor to analyse a stream of power grid fault messages and provide a suggested diagnosis of the fault location. Feedback concerning the accuracy of the diagnosis is provided by an object-oriented grid simulator which acts as an external supervisor to the learning system. The utilization of artificial neural networks within this environment should result in a powerful generic alarm processor which will not require extensive training by a human expert to produce accurate results.
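As a rough illustration of the self-organizing layer described above, the sketch below performs one training step of a basic self-organizing map: an encoded fault-message vector is assigned to its best-matching unit, which is nudged towards it. The dimensions, learning rate and names are invented for illustration, not taken from the system itself.

```python
import numpy as np

rng = np.random.default_rng(0)
weights = rng.random((16, 8))  # 16 map units, 8-dimensional alarm-message vectors

def train_step(x: np.ndarray, lr: float = 0.1) -> int:
    """Find the best-matching unit for the encoded fault message x, move it
    towards x, and return its index as the suggested diagnosis class.
    (A full SOM would also update the winner's neighbours.)"""
    bmu = int(np.argmin(np.linalg.norm(weights - x, axis=1)))
    weights[bmu] += lr * (x - weights[bmu])
    return bmu

diagnosis_class = train_step(rng.random(8))
```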
Abstract:
The authors describe a learning classifier system (LCS) which employs genetic algorithms (GA) for adaptive online diagnosis of power transmission network faults. The system monitors switchgear indications produced by a transmission network, reporting fault diagnoses on any patterns indicative of faulted components. The system evaluates the accuracy of diagnoses via a fault simulator developed by National Grid Co. and adapts to reflect the current network topology by use of genetic algorithms.
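The matching step of a learning classifier system is simple to sketch: each rule carries a ternary condition ('0', '1', '#' for don't-care) that is tested against a bit string of switchgear indications. The rules and encoding below are invented placeholders, not the actual rule base; in the full system the GA breeds new conditions and the fault simulator's feedback adjusts rule strengths.

```python
def matches(condition: str, indications: str) -> bool:
    """A rule fires when every non-wildcard position agrees with the input."""
    return all(c == '#' or c == b for c, b in zip(condition, indications))

# Hypothetical rules: condition over four switchgear indications -> diagnosis.
rules = [("1#0#", "fault at busbar A"), ("01##", "fault on feeder 2")]

def diagnose(indications: str) -> list[str]:
    return [action for cond, action in rules if matches(cond, indications)]

print(diagnose("0110"))  # -> ['fault on feeder 2']
```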
Abstract:
In this paper, various types of fault detection methods for fuel cells are compared, including those that use a model-based approach, a data-driven approach, or a combination of the two. The potential advantages and drawbacks of each method are discussed and comparisons between methods are made. In particular, classification algorithms are investigated, which separate a data set into classes or clusters based on some prior knowledge or measure of similarity. Specifically, the application of classification methods either to vectors of currents reconstructed by magnetic tomography or directly to vectors of magnetic field measurements is explored. Bases are simulated using the finite integration technique (FIT) and regularization techniques are employed to overcome ill-posedness. Fisher's linear discriminant is used to illustrate these concepts. Numerical experiments show that the ill-posedness of the magnetic tomography problem is part of the classification problem on magnetic field measurements as well. This is independent of the particular working mode of the cell but is influenced by the type of faulty behavior that is studied. The numerical results demonstrate the ill-posedness through the exponential decay of the singular values for three examples of fault classes.
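For two fault classes, Fisher's linear discriminant reduces to a single projection direction. The minimal sketch below uses synthetic stand-in data; the small regularization term echoes the abstract's point that ill-posedness makes the within-class scatter matrix nearly singular.

```python
import numpy as np

def fisher_direction(X0: np.ndarray, X1: np.ndarray) -> np.ndarray:
    """Direction maximizing between-class over within-class scatter:
    w = Sw^-1 (m1 - m0), with a small ridge term for near-singular Sw."""
    m0, m1 = X0.mean(axis=0), X1.mean(axis=0)
    Sw = np.cov(X0, rowvar=False) + np.cov(X1, rowvar=False)
    w = np.linalg.solve(Sw + 1e-6 * np.eye(Sw.shape[0]), m1 - m0)
    return w / np.linalg.norm(w)

rng = np.random.default_rng(1)
X0 = rng.normal(0.0, 1.0, (50, 5))  # stand-in for healthy-cell measurement vectors
X1 = rng.normal(0.5, 1.0, (50, 5))  # stand-in for one faulty-behaviour class
w = fisher_direction(X0, X1)
threshold = 0.5 * ((X0 @ w).mean() + (X1 @ w).mean())  # midpoint decision rule
```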
Abstract:
The variability of results from different automated methods of detection and tracking of extratropical cyclones is assessed in order to identify uncertainties related to the choice of method. Fifteen international teams applied their own algorithms to the same dataset: the 1989–2009 period of the interim European Centre for Medium-Range Weather Forecasts (ECMWF) Re-Analysis (ERA-Interim) data. This experiment is part of the community project Intercomparison of Mid Latitude Storm Diagnostics (IMILAST; see www.proclim.ch/imilast/index.html). The spread of results for cyclone frequency, intensity, life cycle, and track location is presented to illustrate the impact of using different methods. Globally, methods agree well on the geographical distribution in large oceanic regions, the interannual variability of cyclone numbers, the geographical patterns of strong trends, and the distribution shape of many life cycle characteristics. In contrast, the largest disparities exist for the total numbers of cyclones, the detection of weak cyclones, and the distribution in some densely populated regions. Consistency between methods is better for strong cyclones than for shallow ones. Two case studies of relatively large, intense cyclones reveal that the identification of the most intense part of the life cycle of these events is robust between methods, but considerable differences exist during the development and dissolution phases.
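Most of the compared methods share a detect-then-link structure; the deliberately simplified sketch below shows only the linking step, matching each cyclone centre to the nearest centre at the next time step within a search radius. The detection variable, thresholds and matching rules, where the fifteen algorithms actually differ, are exactly what the intercomparison quantifies.

```python
def link_tracks(centres_t, centres_t1, max_dist_deg=7.5):
    """Greedily link (lat, lon) cyclone centres at time t to the nearest
    centre at time t+1, if it lies within an (illustrative) search radius."""
    links = []
    for i, (lat, lon) in enumerate(centres_t):
        dists = [((lat - la) ** 2 + (lon - lo) ** 2) ** 0.5
                 for la, lo in centres_t1]
        if dists and min(dists) <= max_dist_deg:
            links.append((i, dists.index(min(dists))))
    return links

print(link_tracks([(50.0, -30.0)], [(51.0, -25.0), (40.0, 10.0)]))  # -> [(0, 0)]
```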
Abstract:
Future extreme-scale high-performance computing systems will be required to work under frequent component failures. The MPI Forum's User Level Failure Mitigation proposal has introduced an operation, MPI_Comm_shrink, that lets the surviving processes synchronize on the list of failed processes, so that applications can continue to execute even in the presence of failures by adopting algorithm-based fault tolerance techniques. This MPI_Comm_shrink operation requires a fault-tolerant failure detection and consensus algorithm. This paper presents and compares two novel failure detection and consensus algorithms. The proposed algorithms are based on Gossip protocols and are inherently fault-tolerant and scalable. The proposed algorithms were implemented and tested using the Extreme-scale Simulator. The results show that in both algorithms the number of Gossip cycles needed to achieve global consensus scales logarithmically with system size. The second algorithm also shows better scalability in terms of memory and network bandwidth usage and achieves perfect synchronization in reaching global consensus.
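A toy version of push gossip conveys why the cycle count scales logarithmically: each alive process forwards its suspected-failure set to one random peer per cycle, so the informed population roughly doubles each round. This is an illustrative sketch under those assumptions, not the protocol from the paper.

```python
import random

def gossip_consensus(n_alive: int, failed: set, seed: int = 0) -> int:
    """Count gossip cycles until every alive process holds the same
    failed-process set; process 0 is assumed to detect the failures first."""
    random.seed(seed)
    views = [set() for _ in range(n_alive)]
    views[0] = set(failed)
    cycles = 0
    while any(v != failed for v in views):
        for i in range(n_alive):
            partner = random.randrange(n_alive)  # push to one random peer
            views[partner] |= views[i]
        cycles += 1
    return cycles

print(gossip_consensus(64, {"rank7", "rank21"}))  # typically a handful of cycles
```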
Abstract:
Network diagnosis in Wireless Sensor Networks (WSNs) is a difficult task because of their ad hoc nature, the invisibility of their internal running status, and in particular because the network structure can change frequently due to link failure. To address this problem, we propose a Mobile Sink (MS) based distributed fault diagnosis algorithm for WSNs. An MS, or mobile fault detector, is usually a mobile robot or vehicle equipped with a wireless transceiver that performs the task of a mobile base station while also diagnosing the hardware and software status of deployed network sensors. Our MS mobile fault detector moves through the network area, polling each static sensor node to diagnose the hardware and software status of nearby sensor nodes using only single-hop communication. As a result, the fault detection accuracy and functionality of the network are significantly increased. To maintain an excellent Quality of Service (QoS), we employ an optimal fault diagnosis tour planning algorithm. In addition to saving energy and time, the tour planning algorithm excludes faulty sensor nodes from the next diagnosis tour. We demonstrate the effectiveness of the proposed algorithms through simulation and real-life experimental results.
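The abstract does not spell out the tour-planning algorithm, so as a stand-in the sketch below uses a nearest-neighbour heuristic over node positions that skips nodes already diagnosed as faulty, which captures the two stated goals: a short tour and exclusion of faulty nodes.

```python
import math

def plan_tour(nodes: dict, faulty: set, start=(0.0, 0.0)) -> list:
    """Visit every non-faulty sensor once, always moving to the nearest
    remaining node (a heuristic stand-in for the optimal tour planner)."""
    todo = {name: pos for name, pos in nodes.items() if name not in faulty}
    tour, pos = [], start
    while todo:
        nxt = min(todo, key=lambda name: math.dist(pos, todo[name]))
        tour.append(nxt)
        pos = todo.pop(nxt)
    return tour

nodes = {"s1": (1, 2), "s2": (4, 0), "s3": (2, 5), "s4": (0, 1)}
print(plan_tour(nodes, faulty={"s2"}))  # -> ['s4', 's1', 's3']
```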
Abstract:
Many algorithms have been developed to achieve motion segmentation for video surveillance, and their performance varies under the endless range of changing conditions encountered in practice. It has been recognised that, individually, these algorithms have useful properties. Fusing the statistical results of these algorithms is investigated, with the aim of achieving robust motion segmentation.
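One simple statistical fusion rule of the kind being investigated is a per-pixel majority vote over the binary masks produced by the individual segmenters. The sketch below uses random placeholder masks in place of real algorithm outputs.

```python
import numpy as np

def fuse_masks(masks: list) -> np.ndarray:
    """A pixel is foreground if more than half of the segmenters agree."""
    votes = np.sum(masks, axis=0)
    return votes > (len(masks) / 2)

rng = np.random.default_rng(2)
masks = [rng.random((4, 4)) > 0.5 for _ in range(3)]  # three mock segmenters
print(fuse_masks(masks).astype(int))
```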
Abstract:
Cue combination rules have often been applied to the perception of surface shape but not to judgements of object location. Here, we used immersive virtual reality to explore the relationship between different cues to distance. Participants viewed a virtual scene and judged the change in distance of an object presented in two intervals, where the scene changed in size between intervals (by a factor of between 0.25 and 4). We measured thresholds for detecting a change in object distance when there were only 'physical' cues (stereo and motion parallax) or only 'texture-based' cues (independent of the scale of the scene) and used these to predict biases in a distance matching task. Under a range of conditions, in which the viewing distance and the position of the target relative to other objects were varied, the ratio of 'physical' to 'texture-based' thresholds was a good predictor of biases in the distance matching task. The cue combination approach, which successfully accounts for our data, relies on quite different principles from those underlying geometric reconstruction.
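The threshold-ratio prediction rests on the standard reliability-weighted cue-combination rule, in which each cue is weighted by the inverse of its variance; here the measured detection thresholds stand in for the cues' standard deviations. The numbers below are illustrative, not data from the study.

```python
def combine_distance(d_physical, t_physical, d_texture, t_texture):
    """Reliability-weighted average of two distance estimates, with each
    cue's weight taken as the inverse square of its detection threshold."""
    w_p = 1.0 / t_physical ** 2  # stereo / motion-parallax reliability
    w_t = 1.0 / t_texture ** 2   # scene-scale (texture) reliability
    return (w_p * d_physical + w_t * d_texture) / (w_p + w_t)

# If the texture cue is twice as reliable, the combined estimate is pulled
# towards the texture-based distance signal:
print(combine_distance(d_physical=2.0, t_physical=0.4,
                       d_texture=1.0, t_texture=0.2))  # -> 1.2
```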
Abstract:
Bloom-forming and toxin-producing cyanobacteria remain a persistent nuisance across the world. Modelling of cyanobacteria in freshwaters is an important tool for understanding their population dynamics and predicting the location and timing of bloom events in lakes and rivers. In this article, a new deterministic model is introduced which simulates the growth and movement of cyanobacterial blooms in river systems. The model focuses on the mathematical description of bloom formation, vertical migration and lateral transport of colonies within river environments, taking into account the four major factors that affect cyanobacterial bloom formation in freshwaters: light, nutrients, temperature and river flow. The model consists of two sub-models: a vertical migration model describing the growth of cyanobacteria in relation to light, nutrients and temperature; and a hydraulic model to simulate the horizontal movement of the bloom. This article presents the model algorithms and highlights some important model results. The effects of nutrient limitation, varying illumination and river flow characteristics on cyanobacterial movement are simulated. The results indicate that under high light intensities and in nutrient-rich waters, colonies sink further as a result of carbohydrate accumulation in the cells. In turbulent environments, vertical migration is retarded by the vertical velocity component generated by turbulent shear stress.
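The vertical-migration mechanism can be caricatured in a few lines: colonies densify under light as carbohydrate ballast accumulates, sink at their Stokes velocity, and regain buoyancy in darkness. Every parameter value below is invented for illustration and none is a calibrated value from the model.

```python
import math

RHO_WATER = 998.0  # water density, kg m^-3

def update_density(rho, light_frac, gain=0.4, relax=0.2):
    """Hourly update: densify in proportion to light (carbohydrate ballast),
    otherwise relax back towards neutral buoyancy (illustrative constants)."""
    return rho + gain * light_frac - relax * (rho - RHO_WATER)

def stokes_velocity(rho, radius=1e-4, mu=1e-3):
    """Stokes settling velocity (m/s) of a colony; negative means floating."""
    return 2 * radius ** 2 * 9.81 * (rho - RHO_WATER) / (9 * mu)

rho, depth = 997.8, 1.0
for hour in range(24):
    light = max(0.0, math.sin(math.pi * hour / 12))  # crude diel light cycle
    rho = update_density(rho, light)
    depth = max(0.0, depth + stokes_velocity(rho) * 3600)
```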
Abstract:
We report evidence for a major ice stream that operated over the northwestern Canadian Shield in the Keewatin Sector of the Laurentide Ice Sheet during the last deglaciation 9000-8200 (uncalibrated) yr BP. It is reconstructed at 450 km in length, 140 km in width, and had an estimated catchment area of 190,000 km². Mapping from satellite imagery reveals a suite of bedforms ('flow-set') characterized by a highly convergent onset zone, abrupt lateral margins, and where flow was presumed to have been fastest, a remarkably coherent pattern of mega-scale glacial lineations with lengths approaching 13 km and elongation ratios in excess of 40:1. Spatial variations in bedform elongation within the flow-set match the expected velocity field of a terrestrial ice stream. The flow pattern does not appear to be steered by topography and its location on the hard bedrock of the Canadian Shield is surprising. A soft sedimentary basin may have influenced ice-stream activity by lubricating the bed over the downstream crystalline bedrock, but it is unlikely that it operated over a pervasively deforming till layer. The location of the ice stream challenges the view that they only arise in deep bedrock troughs or over thick deposits of 'soft' fine-grained sediments. We speculate that fast ice flow may have been triggered when a steep ice sheet surface gradient with high driving stresses contacted a proglacial lake. An increase in velocity through calving could have propagated fast ice flow upstream (in the vicinity of the Keewatin Ice Divide) through a series of thermomechanical feedback mechanisms. It exerted a considerable impact on the Laurentide Ice Sheet, forcing the demise of one of the last major ice centres.
Abstract:
During deglaciation of the North American Laurentide Ice Sheet, large proglacial lakes developed in positions where proglacial drainage was impeded by the ice margin. For some of these lakes, it is known that subsequent drainage had an abrupt and widespread impact on North Atlantic Ocean circulation and climate, but less is known about the impact that the lakes exerted on ice sheet dynamics. This paper reports palaeogeographic reconstructions of the evolution of proglacial lakes during deglaciation across the northwestern Canadian Shield, covering an area in excess of 1,000,000 km² as the ice sheet retreated some 600 km. The interactions between proglacial lakes and ice sheet flow are explored, with a particular emphasis on whether the disposition of lakes may have influenced the location of the Dubawnt Lake ice stream. This ice stream falls outside the existing paradigm for ice streams in the Laurentide Ice Sheet because it did not operate over fine-grained till or lie in a topographic trough. Ice margin positions and a digital elevation model are utilised to predict the geometry and depth of proglacial lakes impounded at the margin at 30-km increments during deglaciation. Palaeogeographic reconstructions match well with previous independent estimates of lake coverage inferred from field evidence, and the results suggest that the development of a deep lake in the Thelon drainage basin may have been influential in initiating the ice stream by inducing calving, drawing down ice and triggering fast ice flow. This is the only location alongside this sector of the ice sheet where large (>3000 km²), deep (~120 m) lakes are impounded for a significant length of time, and it exactly matches the location of the ice stream. It is speculated that the commencement of calving at the ice sheet margin may have taken the system beyond a threshold and was sufficient to trigger rapid motion, but that once initiated, calving processes and losses were insignificant to the functioning of the ice stream. It is thus concluded that proglacial lakes are likely to have been an important control on ice sheet dynamics during deglaciation of the Laurentide Ice Sheet.
Abstract:
The authors present a systolic design for a simple GA mechanism which provides high throughput and unidirectional pipelining by exploiting the inherent parallelism in the genetic operators. The design computes in O(N+G) time steps using O(N²) cells, where N is the population size and G is the chromosome length. The area of the device is independent of the chromosome length and so can be easily scaled by replicating the arrays or by employing fine-grain migration. The array is generic in the sense that it does not rely on the fitness function and can be used as an accelerator for any GA application using uniform crossover between pairs of chromosomes. The design can also be used in hybrid systems as an add-on to complement existing designs and methods for fitness function acceleration and island-style population management.
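The genetic operator the array pipelines, uniform crossover between a pair of chromosomes, is shown below in plain Python. This illustrates the operator only; the O(N+G) systolic schedule and the cell layout are hardware concerns the snippet does not attempt to model.

```python
import random

def uniform_crossover(a: list, b: list, p_swap: float = 0.5):
    """Independently swap each gene between the parents with probability
    p_swap, producing two children (the operator the array accelerates)."""
    child1, child2 = list(a), list(b)
    for i in range(len(a)):
        if random.random() < p_swap:
            child1[i], child2[i] = child2[i], child1[i]
    return child1, child2

random.seed(3)
print(uniform_crossover([0, 0, 0, 0, 0, 0], [1, 1, 1, 1, 1, 1]))
```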
Abstract:
In April–July 2008, intensive measurements were made of atmospheric composition and chemistry in Sabah, Malaysia, as part of the "Oxidant and particle photochemical processes above a South-East Asian tropical rainforest" (OP3) project. Fluxes and concentrations of trace gases and particles were measured from and above the rainforest canopy at the Bukit Atur Global Atmosphere Watch station and at the nearby Sabahmas oil palm plantation, using both ground-based and airborne measurements. Here, the measurement and modelling strategies used, the characteristics of the sites and an overview of the data obtained are described. Composition measurements show that the rainforest site was not significantly impacted by anthropogenic pollution, and this is confirmed by satellite retrievals of NO2 and HCHO. The dominant modulators of atmospheric chemistry at the rainforest site were therefore emissions of BVOCs and soil emissions of reactive nitrogen oxides. At the observed BVOC:NOx volume mixing ratio (~100 pptv/pptv), current chemical models suggest that daytime maximum OH concentrations should be ca. 10⁵ radicals cm⁻³, but observed OH concentrations were an order of magnitude greater than this. We confirm, therefore, previous measurements suggesting that an unexplained source of OH must exist above tropical rainforest, and we continue to interrogate the data to find explanations for this.