821 results for Fault detection, fail-safety, fault tolerance, UAV
Abstract:
The role of T-cells within the immune system is to confirm and assess anomalous situations and then either respond to or tolerate the source of the effect. To illustrate how these mechanisms can be harnessed to solve real-world problems, we present the blueprint of a T-cell inspired algorithm for computer security worm detection. We show how the three central T-cell processes, namely T-cell maturation, differentiation and proliferation, naturally map into this domain and further illustrate how such an algorithm fits into a complete immune inspired computer security system and framework.
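A minimal sketch of how these three processes might be mapped onto a worm detector over network-traffic feature vectors is given below; the matching rule, thresholds, and class names are illustrative assumptions, not the algorithm from the paper.

```python
# Illustrative sketch (not the authors' implementation) of mapping the three
# T-cell processes -- maturation, differentiation, proliferation -- onto a
# simple worm detector over network-traffic feature dictionaries.
import random

class TCellDetector:
    def __init__(self, receptor, threshold=0.8):
        self.receptor = receptor          # feature pattern this detector binds to
        self.threshold = threshold
        self.state = "naive"              # naive -> effector after differentiation

    def affinity(self, sample):
        # Simple matching rule: fraction of receptor features shared with the sample.
        shared = sum(1 for k, v in self.receptor.items() if sample.get(k) == v)
        return shared / len(self.receptor)

def mature(candidates, self_profiles, threshold=0.8):
    # Maturation (negative selection): discard detectors that bind normal traffic.
    return [d for d in candidates
            if all(d.affinity(s) < threshold for s in self_profiles)]

def differentiate(detector, sample, danger_signal):
    # Differentiation: a naive detector becomes an effector only when it both
    # matches the sample and receives a costimulatory danger signal.
    if detector.state == "naive" and danger_signal and detector.affinity(sample) >= detector.threshold:
        detector.state = "effector"
    return detector.state == "effector"

def proliferate(detector, copies=5, mutation_rate=0.1):
    # Proliferation: clone a successful effector with slight receptor variation.
    clones = []
    for _ in range(copies):
        receptor = dict(detector.receptor)
        for key in list(receptor):
            if len(receptor) > 1 and random.random() < mutation_rate:
                receptor.pop(key)         # crude mutation: drop one matched feature
        clones.append(TCellDetector(receptor, detector.threshold))
    return clones
```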
Abstract:
We present ideas about creating a next generation Intrusion Detection System (IDS) based on the latest immunological theories. The central challenge with computer security is determining the difference between normal and potentially harmful activity. For half a century, developers have protected their systems by coding rules that identify and block specific events. However, the nature of current and future threats, in conjunction with ever larger IT systems, urgently requires the development of automated and adaptive defensive tools. A promising solution is emerging in the form of Artificial Immune Systems (AIS): the Human Immune System (HIS) can detect and defend against harmful and previously unseen invaders, so can we not build a similar Intrusion Detection System for our computers? Presumably, such systems would then have the same beneficial properties as the HIS, such as error tolerance, adaptation and self-monitoring. Current AIS have been successful on test systems, but the algorithms rely on self-nonself discrimination, as stipulated in classical immunology. However, immunologists are increasingly finding fault with traditional self-nonself thinking and a new 'Danger Theory' (DT) is emerging. This new theory suggests that the immune system reacts to threats based on the correlation of various (danger) signals, and it provides a method of 'grounding' the immune response, i.e. linking it directly to the attacker. Little is currently understood of the precise nature and correlation of these signals, and the theory is a topic of hot debate. It is the aim of this research to investigate this correlation and to translate the DT into the realm of computer security, thereby creating AIS that are no longer limited by self-nonself discrimination. It should be noted that we do not intend to defend this controversial theory per se, although as a deliverable this project will add to the body of knowledge in this area. Rather, we are interested in its merits for scaling up AIS applications by overcoming self-nonself discrimination problems.
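As a rough illustration of the danger-signal correlation idea, the following hypothetical sketch combines several host-level signals into a score and grounds the response in the process that produced them; the signal names, weights, and threshold are assumptions, not part of the project.

```python
# Hypothetical sketch of Danger Theory-style signal correlation: several host
# "danger signals" are combined, and an alert is raised (grounded to the
# process that produced the signals) only when they co-occur.
from dataclasses import dataclass

@dataclass
class DangerSignals:
    cpu_spike: float        # 0..1, abnormal CPU load
    mem_errors: float       # 0..1, memory-fault rate
    net_anomaly: float      # 0..1, unusual outbound connections

def danger_score(sig: DangerSignals, weights=(0.3, 0.3, 0.4)) -> float:
    # Weighted correlation of the signals; weights here are purely illustrative.
    return weights[0] * sig.cpu_spike + weights[1] * sig.mem_errors + weights[2] * sig.net_anomaly

def respond(process_id: str, sig: DangerSignals, threshold: float = 0.7) -> str:
    # The response is 'grounded': tied to the process emitting the signals,
    # rather than to a self/nonself classification of its data.
    if danger_score(sig) >= threshold:
        return f"quarantine {process_id}"
    return "tolerate"

print(respond("pid-4242", DangerSignals(0.9, 0.6, 0.8)))   # -> quarantine pid-4242
```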
Abstract:
The choice of system grounding method has a direct influence on the overall performance of the entire medium voltage network as well as on the ground fault current magnitude. Every kind of grounding system (ungrounded, solidly or low-impedance grounded, and resonant grounded) has advantages and disadvantages, so a thorough study is necessary to choose the most appropriate grounding protection scheme. Power distribution utilities justify their choices based on economic and technical criteria, according to the specific characteristics of each distribution network. In this paper we present a case study of a Portuguese medium voltage substation and a comparison of neutral arrangements using a Petersen coil, an isolated neutral, and impedance grounding.
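For orientation, the sketch below gives simplified first-order estimates of the single-phase-to-earth fault current under the three neutral treatments compared in the paper; the network voltage, capacitance, and neutral resistance are assumed values, not data from the Portuguese case study.

```python
# Simplified first-order estimates of single-phase-to-earth fault current for
# different neutral treatments; all network data below are illustrative.
import math

f = 50.0                       # Hz
V_ll = 15e3                    # assumed line-to-line voltage of the MV network
V_ph = V_ll / math.sqrt(3)     # phase-to-earth voltage
C0 = 6e-6                      # assumed total zero-sequence capacitance per phase (F)
w = 2 * math.pi * f

# Isolated neutral: fault current is essentially the network's capacitive current.
I_capacitive = 3 * w * C0 * V_ph

# Impedance (resistance) grounded neutral: current limited mainly by the neutral resistor.
R_n = 300.0                    # assumed neutral resistor (ohm)
I_resistive = V_ph / R_n

# Resonant grounding (Petersen coil): coil tuned so its inductive current
# cancels the capacitive current, leaving only a small residual.
L_coil = 1.0 / (3 * w**2 * C0)
I_coil = V_ph / (w * L_coil)   # equals I_capacitive when perfectly tuned

print(f"isolated neutral  : {I_capacitive:6.1f} A capacitive")
print(f"resistance earthed: {I_resistive:6.1f} A (limited by R_n)")
print(f"Petersen coil     : residual ~ {abs(I_capacitive - I_coil):.2f} A when tuned")
```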
Abstract:
Faults form quickly, geologically speaking, with sharp, crisp step-like profiles. Logic dictates that erosion wears away this "sharpness" or angularity, creating more rounded features. As erosion occurs, debris accumulates at the base of the scarp slope. The stable end point of this process is when the scarp slope approaches an ideal sigmoid shape. This theory of the fault scarp end state, in combination with a new method developed in this report for fault profile delineation, has the potential to enable observation and categorization of fault profiles over large, diverse swaths of fault formations in remote areas such as the Southern Kenyan Rift Valley. This up-to-date method uses remote sensing data and the digitizer tool in Global Mapper to create shapefiles of fault segments. It can provide further evidence to support the notion that sigmoidal-shaped profiles represent a natural endpoint of the erosional process of fault scarps; over time, faults of many different ages would converge on this shape over a wide region. However, other processes, most notably drainage patterns, can also be at work on scarps, so when anomalies in profiles are observed, some form of reactivation has possibly occurred.
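One way to operationalise the sigmoid end-point idea is to fit an ideal sigmoid to each digitised elevation transect and use the misfit as a rough maturity or anomaly index; the sketch below uses synthetic profile data and is not the delineation method developed in the report.

```python
# Illustrative sketch: fit an ideal sigmoid to an along-profile elevation
# transect crossing a fault scarp and use the misfit as a crude anomaly index.
import numpy as np
from scipy.optimize import curve_fit

def sigmoid(x, base, height, x0, k):
    # Idealised end-member scarp shape: a logistic step of 'height' metres.
    return base + height / (1.0 + np.exp(-k * (x - x0)))

# Distance along profile (m) and elevations (m); synthetic stand-in for a DEM transect.
x = np.linspace(0, 200, 50)
z = sigmoid(x, 1200.0, 15.0, 100.0, 0.08) + np.random.normal(0, 0.3, x.size)

popt, _ = curve_fit(sigmoid, x, z, p0=[z.min(), np.ptp(z), x.mean(), 0.05])
residual = np.sqrt(np.mean((z - sigmoid(x, *popt)) ** 2))
print(f"scarp height ~ {popt[1]:.1f} m, RMS misfit {residual:.2f} m")
# A large misfit relative to scarp height may flag drainage overprinting or reactivation.
```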
Abstract:
As users continually request additional functionality, software systems will continue to grow in their complexity, as well as in their susceptibility to failures. Particularly for sensitive systems requiring higher levels of reliability, faulty system modules may increase development and maintenance cost. Hence, identifying them early would support the development of reliable systems through improved scheduling and quality control. Research effort to predict software modules likely to contain faults has, as a consequence, been substantial. Although a wide range of fault prediction models have been proposed, we remain far from having reliable tools that can be widely applied to real industrial systems. For projects with known fault histories, numerous research studies show that statistical models can provide reasonable estimates at predicting faulty modules using software metrics. However, as context-specific metrics differ from project to project, the task of predicting across projects is difficult to achieve. Prediction models obtained from one project's experience are ineffective at predicting fault-prone modules when applied to other projects. Hence, the ability to take full advantage of existing work in the software development community has been substantially limited. As a step towards solving this problem, in this dissertation we propose a fault prediction approach that exploits existing prediction models, adapting them to improve their ability to predict faulty system modules across different software projects.
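As a simple illustration of the cross-project difficulty, and of one baseline adaptation step, the sketch below standardises each project's metrics to its own distribution before reusing a model trained elsewhere; this is a generic baseline with synthetic data, not the approach proposed in the dissertation.

```python
# Hedged sketch of one simple cross-project adaptation step: per-project
# z-scoring of metrics before reusing a fault-prediction model.
import numpy as np
from sklearn.linear_model import LogisticRegression

def standardise(X):
    # Per-project standardisation removes project-specific metric scales.
    return (X - X.mean(axis=0)) / (X.std(axis=0) + 1e-9)

rng = np.random.default_rng(0)
# Source project: software metrics (e.g. size, complexity, churn) with fault labels.
X_src = rng.normal(size=(200, 3))
y_src = (X_src[:, 0] + X_src[:, 2] > 1).astype(int)
# Target project: same metrics but on a different scale, and no labels available.
X_tgt = 5.0 * rng.normal(size=(80, 3)) + 10.0

model = LogisticRegression().fit(standardise(X_src), y_src)
risk = model.predict_proba(standardise(X_tgt))[:, 1]
print("modules ranked most fault-prone first:", np.argsort(-risk)[:10])
```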
Design Optimization of Modern Machine-drive Systems for Maximum Fault Tolerant and Optimal Operation
Abstract:
Modern electric machine drives, particularly three-phase permanent magnet machine drive systems, represent an indispensable part of high power density products. Such products include hybrid electric vehicles, large propulsion systems, and automation products. The reliability and cost of these products are directly related to the reliability and cost of these systems. The compatibility of the electric machine and its drive system for optimal cost and operation has been a major challenge in industrial applications. The main objective of this dissertation is to find a design and control scheme that gives the best compromise between the reliability and optimality of the electric machine-drive system. The effort presented here is motivated by the need to find new techniques to connect the design and control of electric machines and drive systems. A highly accurate and computationally efficient modeling process was developed to monitor the magnetic, thermal, and electrical aspects of the electric machine in its operational environments. The modeling process was also utilized in the design process, in the form of a finite-element-based optimization, as well as in a hardware-in-the-loop finite-element-based optimization. It was later employed in the design of accurate and highly efficient physics-based customized observers that are required for fault diagnosis as well as for sensorless rotor position estimation. Two test setups with different ratings and topologies were numerically and experimentally tested to verify the effectiveness of the proposed techniques. The modeling process was also employed in the real-time demagnetization control of the machine, and various real-time scenarios were successfully verified. It was shown that this process offers the potential to optimally redefine the assumptions used in sizing the permanent magnets of the machine and the DC bus voltage of the drive for the worst operating conditions. The mathematical development and stability criteria of the physics-based modeling of the machine, the design optimization, the physics-based fault diagnosis, and the physics-based sensorless technique are described in detail. To investigate the performance of the developed design test bed, software and hardware setups were constructed first, and several topologies of the permanent magnet machine were optimized inside the optimization test bed. To investigate the performance of the developed sensorless control, a test bed including a 0.25 kW surface-mounted permanent magnet synchronous machine was created; verification of the proposed technique over a range from medium to very low speed effectively shows the intelligent design capability of the proposed system. Additionally, to investigate the performance of the developed fault diagnosis system, a test bed including a 0.8 kW surface-mounted permanent magnet synchronous machine with trapezoidal back electromotive force was created. The results verify the proposed technique under dynamic eccentricity, DC bus voltage variations, and harmonic loading conditions, making the system an ideal candidate for propulsion systems.
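For context only, the sketch below shows a generic textbook back-EMF estimate of rotor position for a surface PM machine in the stationary alpha-beta frame; it is not the physics-based observer developed in the dissertation, and the machine parameters are assumed.

```python
# Generic back-EMF-based rotor-position estimate for a surface PM synchronous
# machine in the stationary alpha-beta frame (valid at speeds where the
# back-EMF is measurable and positive rotation is assumed). Parameters are illustrative.
import math

R, L, dt = 0.5, 1.2e-3, 1e-4      # stator resistance (ohm), inductance (H), sample time (s)

def estimate_theta(v_ab, i_ab, i_ab_prev):
    # Back-EMF: e = v - R*i - L*di/dt, with e_alpha = -w*lam*sin(theta), e_beta = w*lam*cos(theta).
    e_a = v_ab[0] - R * i_ab[0] - L * (i_ab[0] - i_ab_prev[0]) / dt
    e_b = v_ab[1] - R * i_ab[1] - L * (i_ab[1] - i_ab_prev[1]) / dt
    return math.atan2(-e_a, e_b)   # estimated electrical rotor angle (rad)
```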
Abstract:
FAULT LINE examines the fragile humanity connected to the themes of sexuality, violence, addiction, family dynamics, and death. The book is not broken into sections; rather, as poems build upon one another to explore a narrative arc, FAULT LINE tracks a single speaker’s experience from girlhood to the verge of independent womanhood. The speaker employs formal structures such as the prose poem, sestina, and particularly the list poem to examine the fluidity of inner experience and also the culture at large, while challenging narrow definitions of femininity and masculinity. FAULT LINE works to address not only the question of blame but also the literal breaks in lines of poetry. By looking at a single speaker’s struggle, the book, like life, is both humorous and horrifying.
Abstract:
Improving the performance of an incident detection system is essential to minimizing the effect of incidents. This paper puts forward a new incident detection method based on an in-car terminal consisting of a GPS module, a GSM module, and a control module, together with optional parts such as airbag sensors and a mobile phone positioning system (MPPS) module. When a driver or vehicle detects a freeway incident and initiates an alarm report, the incident location, obtained by GPS, MPPS, or both, is automatically sent to a transport management center (TMC); the TMC then confirms the accident with closed-circuit television (CCTV) or other approaches. For this method, detection rate (DR), time to detect (TTD), and false alarm rate (FAR) are the most important performance targets. Finally, feasible means such as management measures, driver education, and suitable accident confirmation approaches are put forward to improve these targets.
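The three performance targets named above can be computed from matched incident and alarm logs roughly as follows; the matching rule, time window, and sample data are illustrative assumptions.

```python
# Minimal sketch of detection rate (DR), false alarm rate (FAR) and mean time
# to detect (TTD) computed from incident and alarm logs.
def evaluate(incidents, alarms, match_window=600):
    # incidents / alarms: lists of (timestamp_s, location_km)
    detected, delays, matched_alarms = 0, [], set()
    for t_inc, x_inc in incidents:
        hits = [(i, t) for i, (t, x) in enumerate(alarms)
                if 0 <= t - t_inc <= match_window and abs(x - x_inc) < 0.5]
        if hits:
            detected += 1
            i, t = min(hits, key=lambda h: h[1])   # earliest matching alarm
            delays.append(t - t_inc)
            matched_alarms.add(i)
    dr = detected / len(incidents)
    far = (len(alarms) - len(matched_alarms)) / len(alarms)
    ttd = sum(delays) / len(delays) if delays else float("nan")
    return dr, far, ttd

dr, far, ttd = evaluate([(0, 12.3), (900, 40.0)], [(45, 12.4), (300, 80.0)])
print(f"DR={dr:.0%}  FAR={far:.0%}  mean TTD={ttd:.0f}s")
```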
Abstract:
In condition-based maintenance (CBM), effective diagnostics and prognostics are essential tools for maintenance engineers to identify imminent faults and to predict the remaining useful life before components finally fail. This enables remedial actions to be taken in advance and production to be rescheduled if necessary. This paper presents a technique for accurate assessment of the remnant life of machines based on historical failure knowledge embedded in a closed-loop diagnostic and prognostic system. The technique uses a Support Vector Machine (SVM) classifier for both fault diagnosis and evaluation of the health stages of machine degradation. To validate the feasibility of the proposed model, data at five different severity levels for four typical faults from High Pressure Liquefied Natural Gas (HP-LNG) pumps were used for multi-class fault diagnosis. In addition, two sets of impeller-rub data were analysed and employed to predict the remnant life of the pump based on estimation of its health state. The results obtained were very encouraging and showed that the proposed prognosis system has the potential to be used as an estimation tool for machine remnant life prediction in real industrial applications.
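A minimal sketch of the SVM stage of such a diagnosis loop is shown below, using a multi-class SVM over labelled condition-monitoring features; the feature data and class names are synthetic stand-ins, not the HP-LNG pump data used in the paper.

```python
# Multi-class SVM fault diagnosis sketch over synthetic vibration-feature data.
import numpy as np
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
classes = ["normal", "impeller_rub", "misalignment", "bearing", "cavitation"]
# Synthetic stand-in for per-record vibration features (e.g. RMS, kurtosis, band energies).
X = np.vstack([rng.normal(loc=i, scale=0.5, size=(40, 4)) for i in range(len(classes))])
y = np.repeat(classes, 40)

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", probability=True)).fit(X, y)
sample = rng.normal(loc=1, scale=0.5, size=(1, 4))      # lies near the 'impeller_rub' cluster
print(clf.predict(sample)[0], clf.predict_proba(sample).max().round(2))
```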
Abstract:
Surveillance networks are typically monitored by a few people viewing several monitors displaying the camera feeds, so it is very difficult for a human operator to effectively detect events as they happen. Recently, computer vision research has begun to address ways to automatically process some of this data to assist human operators. Object tracking, event recognition, crowd analysis and human identification at a distance are being pursued as a means to aid human operators and improve the security of areas such as transport hubs. The task of object tracking is key to the effective use of more advanced technologies: to recognize an event, people and objects must be tracked, and tracking also enhances the performance of tasks such as crowd analysis or human identification. Before an object can be tracked, it must be detected. Motion segmentation techniques, widely employed in tracking systems, produce a binary image in which objects can be located. However, these techniques are prone to errors caused by shadows and lighting changes. Detection routines often fail, either due to erroneous motion caused by noise and lighting effects, or because the detection routines are unable to split occluded regions into their component objects. Particle filters can be used as a self-contained tracking system, making it unnecessary for detection to be carried out separately except for an initial (often manual) detection to initialise the filter. Particle filters use one or more extracted features to evaluate the likelihood of an object existing at a given point in each frame. Such systems, however, do not easily allow multiple objects to be tracked robustly, and do not explicitly maintain the identity of tracked objects.

This dissertation investigates improvements to the performance of object tracking algorithms through improved motion segmentation and the use of a particle filter. A novel hybrid motion segmentation / optical flow algorithm, capable of simultaneously extracting multiple layers of foreground and optical flow in surveillance video frames, is proposed. The algorithm is shown to perform well in the presence of adverse lighting conditions, and the optical flow is capable of extracting a moving object. The proposed algorithm is integrated within a tracking system and evaluated using the ETISEO (Evaluation du Traitement et de l'Interpretation de Sequences vidEO - Evaluation for video understanding) database, and significant improvement in detection and tracking performance is demonstrated when compared to a baseline system. A Scalable Condensation Filter (SCF), a particle filter designed to work within an existing tracking system, is also developed. The creation and deletion of modes and the maintenance of identity are handled by the underlying tracking system, while the tracking system benefits from the particle filter's improved performance under the uncertainty arising from occlusion and noise. This system is also evaluated using the ETISEO database.

The dissertation then investigates fusion schemes for multi-spectral tracking systems. Four fusion schemes for combining a thermal and a visual colour modality are evaluated using the OTCBVS (Object Tracking and Classification in and Beyond the Visible Spectrum) database. It is shown that a middle fusion scheme yields the best results and demonstrates a significant improvement in performance when compared to a system using either mode individually. Findings from the thesis contribute to improving the performance of semi-automated video processing and therefore improve security in areas under surveillance.
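As background for the particle-filter component, the following minimal condensation-style sketch tracks a single 2-D position from a per-pixel likelihood; the motion model, noise levels, and likelihood function are illustrative, not the SCF proposed in the thesis.

```python
# Minimal condensation-style particle filter: resample, predict, re-weight.
import numpy as np

rng = np.random.default_rng(2)

def step(particles, weights, likelihood, motion_std=3.0):
    # 1. Resample particles in proportion to the previous weights.
    idx = rng.choice(len(particles), size=len(particles), p=weights)
    particles = particles[idx]
    # 2. Predict: diffuse particles with a simple random-walk motion model.
    particles = particles + rng.normal(0, motion_std, particles.shape)
    # 3. Update: re-weight each particle by the image likelihood at its position.
    w = np.array([likelihood(p) for p in particles])
    return particles, w / w.sum()

# Toy likelihood: object near (60, 40); in practice this comes from extracted features
# such as colour histograms or foreground masks.
target = np.array([60.0, 40.0])
likelihood = lambda p: np.exp(-np.sum((p - target) ** 2) / (2 * 10.0 ** 2))

particles = rng.uniform(0, 100, size=(500, 2))
weights = np.full(500, 1 / 500)
for _ in range(20):
    particles, weights = step(particles, weights, likelihood)
print("estimated position:", (particles * weights[:, None]).sum(axis=0).round(1))
```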