177 results for Burglar alarms.


Relevance:

10.00%

Publisher:

Abstract:

Reliable performance of biometric identity verification systems remains a significant challenge. Individual biometric samples of the same person (identity class) are not identical at each presentation, and performance degradation arises from intra-class variability and inter-class similarity. These limitations lead to false accepts and false rejects that are interdependent, making it difficult to reduce the rate of one type of error without increasing the other. The focus of this dissertation is a method based on classifier fusion techniques for better control of the trade-off between the verification errors, using text-dependent speaker verification as the test platform. A sequential classifier fusion architecture that integrates multi-instance and multi-sample fusion schemes is proposed. This fusion method enables a controlled trade-off between false alarms and false rejects. For statistically independent classifier decisions, analytical expressions for each type of verification error are derived from the base classifier performances. As this assumption is not always valid, the expressions are modified to incorporate the correlation between statistically dependent decisions from clients and impostors. The architecture is evaluated empirically on text-dependent speaker verification, using Hidden Markov Model based, digit-dependent speaker models in each stage with multiple attempts for each digit utterance. The trade-off between the verification errors is controlled by two parameters, the number of decision stages (instances) and the number of attempts at each decision stage (samples), fine-tuned on an evaluation/tuning set. The derived expressions for the error estimates are validated statistically on test data. The performance of the sequential method is further shown to depend on the order in which the digits (instances) are combined and on the nature of the repeated attempts (samples). The false rejection and false acceptance rates of the proposed fusion are estimated from the base classifier performances, the variance in correlation between classifier decisions, and the sequence of classifiers with favourable dependence selected using the 'Sequential Error Ratio' criterion. The error rates are estimated more accurately by incorporating user-dependent information (such as speaker-dependent thresholds and speaker-specific digit combinations) and class-dependent information (such as client-impostor dependent favourable combinations and class-error-based threshold estimation). The proposed architecture is attractive for most speaker verification applications, such as remote authentication and telephone and internet shopping. Tuning the two parameters - the number of instances and samples - serves both the security and the user-convenience requirements of speaker-specific verification. The architecture investigated here is also applicable to verification using other biometric modalities, such as handwriting, fingerprints and keystrokes.
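Under the independence assumption stated above, the error algebra of such a sequential scheme can be sketched directly. The snippet below is a minimal illustration, assuming a pass-any-attempt rule within a stage and a pass-all-stages rule overall; it is the independence baseline only, not the dissertation's exact (correlation-aware) expressions.

```python
# Error-rate algebra for sequential multi-instance, multi-sample fusion,
# assuming statistically independent decisions (independence baseline).

def sequential_fusion_errors(fa: float, fr: float, n_instances: int, n_samples: int):
    """fa/fr: per-attempt false-accept / false-reject rates of the base classifier.

    Within a stage, a claimant passes if ANY of the n_samples attempts passes
    (lowers false rejects); overall acceptance requires passing ALL
    n_instances stages (lowers false accepts).
    """
    stage_fa = 1.0 - (1.0 - fa) ** n_samples      # impostor passes a stage
    stage_fr = fr ** n_samples                    # client fails every attempt
    overall_fa = stage_fa ** n_instances          # impostor passes all stages
    overall_fr = 1.0 - (1.0 - stage_fr) ** n_instances
    return overall_fa, overall_fr

# Trading off the two errors by tuning (instances, samples):
for n, m in [(1, 1), (3, 1), (3, 2), (5, 3)]:
    fa, fr = sequential_fusion_errors(0.05, 0.10, n, m)
    print(f"stages={n} attempts={m}: FAR={fa:.4%} FRR={fr:.4%}")
```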

Relevance:

10.00%

Publisher:

Abstract:

Crashes that occur on motorways contribute to a significant proportion (40-50%) of non-recurrent motorway congestion. Hence, reducing the frequency of crashes helps to address congestion issues (Meyer, 2008). Analysing traffic conditions and discovering risky traffic trends and patterns are essential foundations of crash likelihood estimation studies and still require more attention and investigation. In this paper we show, through data mining techniques, that there is a relationship between pre-crash traffic flow patterns and crash occurrence on motorways, compare these patterns with normal traffic trends, and argue that this knowledge has the potential to improve the accuracy of existing crash likelihood estimation models and to open the path for new development approaches. The data for the analysis were extracted from records collected between 2007 and 2009 on the Shibuya and Shinjuku lines of the Tokyo Metropolitan Expressway in Japan. The dataset includes a total of 824 rear-end and sideswipe crashes that were matched with their corresponding traffic flow data using an incident detection algorithm. Traffic trends (traffic speed time series) revealed that crashes can be clustered according to the dominant traffic patterns prior to crash occurrence, and the K-Means clustering algorithm was applied to determine these dominant pre-crash patterns. In the first phase of this research, traffic regimes were identified by analysing crashes and normal traffic situations using half an hour of speed data at locations upstream of the crashes. The second phase then investigated different combinations of speed risk indicators to distinguish crashes from normal traffic situations more precisely. Five major trends were found in the first phase for both high-risk and normal conditions, and the identified traffic regimes differed in their speed trends. The second phase showed that the spatiotemporal difference of speed is the best risk indicator among the combinations of speed-related risk indicators considered. Based on these findings, crash likelihood estimation models can be fine-tuned to increase the accuracy of estimations and minimize false alarms.
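The clustering step lends itself to a short sketch. The snippet below is illustrative only: the array shapes, the choice of k = 5 (matching the five major trends reported) and the standardisation are assumptions, not the paper's exact pre-processing.

```python
# Grouping 30-minute upstream speed time series into dominant pre-crash
# patterns with K-Means (illustrative stand-in data, not the study's dataset).
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
# Stand-in for real data: 824 crashes x 30 one-minute mean speeds (km/h)
speed_series = rng.normal(70, 15, size=(824, 30))

X = StandardScaler().fit_transform(speed_series)
kmeans = KMeans(n_clusters=5, n_init=10, random_state=0).fit(X)

# Each crash is now labelled with its dominant pre-crash traffic regime
for regime in range(5):
    print(f"regime {regime}: {np.sum(kmeans.labels_ == regime)} crashes")
```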

Relevance:

10.00%

Publisher:

Abstract:

This paper presents a practical recursive fault detection and diagnosis (FDD) scheme for online identification of actuator faults in unmanned aerial systems (UASs), based on the unscented Kalman filter (UKF). The proposed FDD algorithm monitors the health status of the actuators and provides reliable indication of actuator faults, offering the information needed to design fault-tolerant flight control systems that compensate for side effects and improve fail-safe capability when actuator faults occur. Fault detection is conducted by designing separate UKFs to detect aileron and elevator faults using a nonlinear six-degree-of-freedom (DOF) UAS model. Fault diagnosis is achieved by isolating true faults using the Bayesian Classifier (BC) method together with a decision criterion that avoids false alarms. High-fidelity simulations with and without measurement noise are conducted under practical constraints for typical actuator fault scenarios, and the proposed FDD scheme is consistently effective in identifying the occurrence of actuator faults, verifying its suitability for integration into the design of fault-tolerant flight control systems for the emergency landing of UASs.
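As a rough illustration of the residual-based detection idea, the sketch below runs an unscented Kalman filter from the FilterPy library over a stand-in first-order actuator model and flags large innovations. The model, noise levels and threshold are assumptions; the paper uses a nonlinear six-DOF UAS model and adds Bayesian classification for diagnosis.

```python
# Residual-based actuator fault monitor built on FilterPy's UKF
# (toy first-order actuator model, not the paper's 6-DOF UAS model).
import numpy as np
from filterpy.kalman import UnscentedKalmanFilter, MerweScaledSigmaPoints

dt, tau = 0.02, 0.5                      # sample time, actuator time constant

def fx(x, dt):                           # state: [surface deflection]
    return x + dt * (-x / tau)           # first-order lag toward zero command

def hx(x):
    return x                             # deflection measured directly

points = MerweScaledSigmaPoints(n=1, alpha=1e-3, beta=2.0, kappa=0.0)
ukf = UnscentedKalmanFilter(dim_x=1, dim_z=1, dt=dt, fx=fx, hx=hx, points=points)
ukf.x = np.array([0.0]); ukf.P *= 0.1; ukf.R *= 0.01; ukf.Q *= 1e-4

THRESHOLD = 0.3                          # residual gate, tuned to limit false alarms
for k in range(200):
    true_deflection = 0.0 if k < 100 else 0.5   # stuck-surface fault at k=100
    z = np.array([true_deflection + np.random.randn() * 0.05])
    ukf.predict()
    residual = z - hx(ukf.x)             # innovation before the measurement update
    ukf.update(z)
    if abs(residual[0]) > THRESHOLD:
        print(f"step {k}: possible actuator fault, residual={residual[0]:.2f}")
```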

Relevance:

10.00%

Publisher:

Abstract:

This paper provides a three-layered framework for monitoring the positioning performance requirements of the Real-time Relative Positioning (RRP) systems of Cooperative Intelligent Transport Systems (C-ITS) that support Cooperative Collision Warning (CCW) applications. These applications exploit state data of surrounding vehicles obtained solely from Global Positioning System (GPS) and Dedicated Short-Range Communications (DSRC) units, without using other sensors. The paper argues that GPS/DSRC-based RRP systems need an autonomous monitoring mechanism, since the operation of CCW applications is meant to augment safety on roads, and autonomous integrity monitoring is essential and integral to any safety-of-life system. The proposed framework requires the RRP systems to detect or predict the unavailability of their sub-systems and of the integrity monitoring module itself and, when available, to account for the effects of data-link delays and breakages of DSRC links, as well as faulty measurement sources of GPS and/or integrated augmentation positioning systems, before the information used for safety warnings/alarms becomes unavailable, unreliable, inaccurate or misleading. Hence, a monitoring framework using a tight integration and correlation approach is proposed for instantaneous reliability assessment of the RRP systems. Ultimately, using the proposed framework, the RRP systems will provide timely alerts to users when the RRP solutions cannot be trusted or used for the intended operation.
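A toy sketch of the kind of instantaneous usability check such a framework performs is given below; all field names and limits are illustrative assumptions, not values from the paper.

```python
# Before a relative-position solution feeds a collision warning, verify that
# its inputs are fresh and trustworthy (hypothetical fields and limits).
from dataclasses import dataclass

@dataclass
class RrpInputs:
    gps_hdop: float          # GPS horizontal dilution of precision
    gps_fix_age_s: float     # age of the last GPS fix
    dsrc_msg_age_s: float    # age of the last DSRC message from the other vehicle
    dsrc_loss_rate: float    # recent DSRC packet-loss fraction

def rrp_solution_usable(inp: RrpInputs) -> bool:
    """Return False (alert the user) when the RRP solution cannot be trusted."""
    if inp.gps_hdop > 2.0 or inp.gps_fix_age_s > 0.5:
        return False         # degraded or stale positioning source
    if inp.dsrc_msg_age_s > 0.3 or inp.dsrc_loss_rate > 0.2:
        return False         # data-link delay or breakage
    return True

print(rrp_solution_usable(RrpInputs(1.1, 0.1, 0.1, 0.05)))   # True
print(rrp_solution_usable(RrpInputs(1.1, 0.1, 0.9, 0.05)))   # False: stale DSRC
```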

Relevance:

10.00%

Publisher:

Abstract:

Aim: The aim of this survey was to assess registered nurses' perceptions of alarm setting and management in an Australian regional critical care unit. Background: The setting and management of alarms within the critical care environment is one of the key responsibilities of the nurse in this area. However, with up to 99% of alarms potentially being false positives, it is easy for the nurse to become desensitised to or fatigued by incessant alarms, which in some cases number up to 400 per patient per day. Inadvertently ignoring, silencing or disabling alarms can have deleterious implications for the patient and nurse. Method: A total population sample of 48 nursing staff from a 13-bed ICU/HDU/CCU within regional Australia were asked to participate. A 10-item open-ended and multiple-choice questionnaire was distributed to determine their perceptions and attitudes towards alarm setting and management within this clinical area. Results: Two key themes were identified from the open-ended questions: attitudes towards inappropriate alarm settings and annoyance at delayed responses to alarms. A significant majority of respondents (93%) agreed that alarm fatigue can result in alarm desensitisation and the disabling of alarms, whilst 81% identified false-positive alarms and inappropriately set alarms as the key contributing factors.

Relevance:

10.00%

Publisher:

Abstract:

We consider the problem of tracking a maneuvering target in clutter. In such an environment, missed detections and false alarms make it impossible to decide with certainty the origin of received echoes. Processing radar returns in cluttered environments comprises three functions: 1) target detection and plot formation, 2) plot-to-track association, and 3) track updating. Present approaches have two inadequacies: 1) optimization of the detection characteristics has not been considered, and 2) the features that can be used in the plot-to-track correlation process are restricted to a specific class. This paper presents a new approach to overcome these limitations. The approach facilitates tracking of a maneuvering target in clutter and improves tracking performance for weak targets.
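For background, the classical plot-to-track association baseline that such work builds on can be sketched as a chi-square validation gate with nearest-neighbour selection; the snippet below is that baseline, not the paper's new approach.

```python
# Validation gating and nearest-neighbour plot-to-track association.
import numpy as np

def gate_and_associate(z_pred, S, plots, gate=9.21):  # 9.21 ~ chi2(2 dof), 99%
    """Return the gated plot with the smallest Mahalanobis distance, or None."""
    S_inv = np.linalg.inv(S)
    best, best_d2 = None, gate
    for z in plots:
        v = z - z_pred                      # innovation for this echo
        d2 = float(v @ S_inv @ v)           # squared Mahalanobis distance
        if d2 < best_d2:
            best, best_d2 = z, d2
    return best                             # None => missed detection / clutter only

z_pred = np.array([10.0, 5.0])              # predicted measurement
S = np.diag([1.0, 1.0])                     # innovation covariance
plots = [np.array([12.9, 5.1]), np.array([10.4, 4.7]), np.array([30.0, 9.0])]
print(gate_and_associate(z_pred, S, plots))  # picks the echo nearest the prediction
```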

Relevance:

10.00%

Publisher:

Abstract:

This study evaluates the effectiveness and social implications of home monitoring of 31 infants at risk of sudden infant death syndrome (SIDS). Thirteen siblings of children who died of SIDS, nine near-miss SIDS infants and nine preterm infants with apnoea persisting beyond 40 weeks post-conceptional age were monitored from a mean age of 15 days to a mean age of 10 months. Chest-movement detection monitors were used in 27 infants and thoracic impedance monitors in four. Genuine apnoeic episodes were reported by 21 families, and 13 infants required resuscitation. Apnoeic episodes occurred in all nine preterm infants but in only five (38%) of the SIDS siblings (P<0.05). Troublesome false alarms were a major problem, occurring with 61% of the infants, and were more common with the preterm infants than with the SIDS siblings. All but two couples stated that the monitor decreased anxiety and improved their quality of life. Most parents accepted that the social restrictions imposed by the monitor were part of the caring process, but four couples were highly resentful of the changes imposed on their lifestyle. The monitors used were far from ideal: malfunctions occurred in 17, necessitating replacement in six, repair in six and cessation of monitoring in three, and the parents became ingenious in modifying the monitors to their own individual requirements. Although none of these 31 ‘at risk’ infants died, the study sample was far too small to conclude whether home monitoring prevented any cases of SIDS.

Relevance:

10.00%

Publisher:

Abstract:

The aim of this thesis is to develop a fully automatic lameness detection system that operates in a milking robot. The instrumentation, measurement software, algorithms for data analysis and a neural network model for lameness detection were developed. Automatic milking has become common practice in dairy husbandry, and in 2006 about 4,000 farms worldwide used over 6,000 milking robots. There is a worldwide movement towards fully automating every process from feeding to milking. The increase in automation is a consequence of growing farm sizes, the demand for more efficient production and rising labour costs. As the level of automation increases, the time the cattle keeper spends monitoring the animals often decreases, creating a need for systems that automatically monitor the health of farm animals. The popularity of milking robots also offers a new and unique possibility to monitor animals in a single confined space up to four times daily. Lameness is a crucial welfare issue in the modern dairy industry. Limb disorders cause serious welfare, health and economic problems, especially in the loose housing of cattle: lameness causes losses in milk production and leads to early culling of animals. These costs could be reduced with early identification and treatment. At present, only a few methods for automatically detecting lameness have been developed, and the most common methods for lameness detection and assessment are various visual locomotion scoring systems. The problem with locomotion scoring is that it requires experience to be conducted properly, it is labour intensive as an on-farm method, and the results are subjective. A four-balance system for measuring the leg load distribution of dairy cows during milking, in order to detect lameness, was developed and set up at the University of Helsinki research farm Suitia. The leg weights of 73 cows were successfully recorded during almost 10,000 robotic milkings over a period of 5 months. The cows were locomotion scored weekly, and the lame cows were inspected clinically for hoof lesions. Unsuccessful measurements, caused by cows standing outside the balances, were removed from the data with a special algorithm, and the mean leg loads and the number of kicks during milking were calculated. To develop an expert system that automatically detects lameness cases, a model was needed, and a probabilistic neural network (PNN) classifier was chosen for the task. The data were divided into two parts: 5,074 measurements from 37 cows were used to train the model, and its ability to detect lameness was evaluated on a validation dataset of 4,868 measurements from 36 cows. The model classified 96% of the measurements correctly as sound or lame cows, and 100% of the lameness cases in the validation data were identified; the proportion of measurements causing false alarms was 1.1%. The developed model has the potential to be used for on-farm decision support and in a real-time lameness monitoring system.
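A probabilistic neural network is essentially a Parzen-window classifier, and a minimal sketch is given below. The feature layout (four normalised leg loads plus a kick count) and the smoothing parameter are assumptions for illustration, not the thesis's exact design.

```python
# Minimal probabilistic neural network (Parzen-window) classifier in numpy.
import numpy as np

def pnn_predict(X_train, y_train, x, sigma=0.5):
    """Classify x by summed Gaussian kernels per class (0=sound, 1=lame)."""
    scores = []
    for cls in np.unique(y_train):
        d2 = np.sum((X_train[y_train == cls] - x) ** 2, axis=1)
        scores.append(np.mean(np.exp(-d2 / (2.0 * sigma ** 2))))
    return int(np.argmax(scores))

rng = np.random.default_rng(1)
# Stand-in features: four normalised leg loads + kicks per milking
sound = rng.normal([0.25, 0.25, 0.25, 0.25, 0.5], 0.03, size=(200, 5))
lame  = rng.normal([0.32, 0.18, 0.27, 0.23, 2.0], 0.05, size=(40, 5))
X = np.vstack([sound, lame])
y = np.array([0] * 200 + [1] * 40)

print(pnn_predict(X, y, np.array([0.33, 0.17, 0.27, 0.23, 1.8])))  # -> 1 (lame)
```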

Relevance:

10.00%

Publisher:

Abstract:

Various intrusion detection systems (IDSs) reported in the literature have shown distinct preferences for detecting a certain class of attack with improved accuracy while performing moderately on the other classes. In view of the enormous computing power available in present-day processors, deploying multiple IDSs in the same network to obtain best-of-breed solutions has been attempted earlier. The paper presented here addresses the problem of optimizing the performance of IDSs using sensor fusion with multiple sensors. The trade-off between the detection rate and false alarms with multiple sensors is highlighted, and it is illustrated that the performance of the detector is better when the fusion threshold is determined according to the Chebyshev inequality. In the proposed data-dependent decision (DD) fusion method, the performance optimization of the individual IDSs is addressed first. A neural network supervised learner is designed to determine the weights of the individual IDSs according to their reliability in detecting a certain attack. The final stage of the DD fusion architecture is a sensor fusion unit that performs weighted aggregation in order to reach an appropriate decision. The paper theoretically models the fusion of IDSs for the purpose of demonstrating the improvement in performance, supplemented with empirical evaluation.
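The Chebyshev-based threshold choice can be made concrete with a short sketch. Since P(|X - mu| >= k*sigma) <= 1/k^2 for any distribution, a distribution-free threshold bounding the false-alarm probability by alpha can be placed at mu + sigma/sqrt(alpha); the one-sided use of the two-sided bound and the example numbers below are illustrative assumptions.

```python
# Distribution-free alarm threshold from the Chebyshev inequality.
import math

def chebyshev_threshold(mu: float, sigma: float, alpha: float) -> float:
    """Threshold guaranteeing false-alarm probability <= alpha for ANY
    score distribution with the given mean and standard deviation."""
    k = 1.0 / math.sqrt(alpha)
    return mu + k * sigma

# Fused anomaly score with mean 0.2 and std 0.05 under attack-free traffic:
t = chebyshev_threshold(0.2, 0.05, alpha=0.01)
print(f"alarm when fused score > {t:.3f}")   # 0.2 + 10 * 0.05 = 0.700
```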

Relevance:

10.00%

Publisher:

Abstract:

The motivation behind the fusion of Intrusion Detection Systems is the realization that, with increasing traffic and the increasing complexity of attacks, no present-day stand-alone Intrusion Detection System can meet the demand for a very high detection rate together with an extremely low false-positive rate. Multi-sensor fusion can meet these requirements by refining the combined response of different Intrusion Detection Systems. In this paper, we show how sensor fusion can be designed to best utilize the useful responses from multiple sensors by appropriately adjusting the fusion threshold. The threshold is generally chosen according to past experience or by an expert system; we show that choosing the threshold bounds according to the Chebyshev inequality performs better. This approach also helps to solve the problem of scalability and has the advantage of fail-safe capability. The paper theoretically models the fusion of Intrusion Detection Systems in order to prove the improvement in performance, supplemented with empirical evaluation. The combination of complementary sensors is shown to detect more attacks than the individual components, and since the chosen sensors detect sufficiently different attacks, their results can be merged for improved performance. The combination is done in different ways: (i) taking all the alarms from each system and avoiding duplications, (ii) taking alarms from each system by fixing threshold bounds, and (iii) rule-based fusion with a priori knowledge of the individual sensor performance. A number of evaluation metrics are used, and the results indicate an overall enhancement in the performance of the combined detector using sensor fusion with the threshold bounds, and significantly better performance using simple rule-based fusion.
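The three combination schemes listed above can be sketched over simplified alarm records, as below; the record fields and the sample rule are illustrative assumptions, whereas the paper's rule base encodes measured per-sensor performance.

```python
# Three alarm-combination schemes over simplified (hypothetical) alarm records.
from typing import NamedTuple

class Alarm(NamedTuple):
    sensor: str
    attack_id: str      # identifier of the suspected event
    score: float        # sensor confidence in [0, 1]

def fuse_union(alarms):                      # (i) all alarms, duplicates removed
    return {a.attack_id for a in alarms}

def fuse_threshold(alarms, lo=0.4, hi=0.9):  # (ii) keep scores inside fixed bounds
    return {a.attack_id for a in alarms if lo <= a.score <= hi}

def fuse_rule_based(alarms):                 # (iii) a priori per-sensor rules
    # Example rule: trust 'ids_a' on anything, 'ids_b' only when very confident
    return {a.attack_id for a in alarms
            if a.sensor == "ids_a" or (a.sensor == "ids_b" and a.score > 0.8)}

alarms = [Alarm("ids_a", "e1", 0.7), Alarm("ids_b", "e1", 0.6),
          Alarm("ids_b", "e2", 0.95)]
print(fuse_union(alarms), fuse_threshold(alarms), fuse_rule_based(alarms))
```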

Relevance:

10.00%

Publisher:

Abstract:

The problem of denoising damage indicator signals for improved operational health monitoring of systems is addressed by applying soft computing methods to design filters. Since measured data in operational settings are contaminated with noise and outliers, pattern recognition algorithms for fault detection and isolation can give false alarms. A direct approach to improving fault detection and isolation is to remove noise and outliers from the time series of measured data or damage indicators before performing fault detection and isolation. Many popular signal-processing approaches do not work well with damage indicator signals, which can contain sudden changes due to abrupt faults as well as non-Gaussian outliers. Signal-processing algorithms based on a radial basis function (RBF) neural network and weighted recursive median (WRM) filters are explored for denoising simulated time series. The RBF neural network filter is developed using a K-means clustering algorithm and is much less computationally expensive to develop than feedforward neural networks trained using backpropagation. The nonlinear multimodal integer-programming problem of selecting optimal integer weights for the WRM filter is solved using a genetic algorithm. Numerical results are obtained for helicopter rotor structural damage indicators based on simulated frequencies. The test signals consider low-order polynomial growth of the damage indicators with time, to simulate gradual or incipient faults, and step changes in the signal, to simulate abrupt faults; noise and outliers are added to the test signals. The WRM and RBF filters achieve noise reductions of 54-71% and 59-73%, respectively, for the test signals considered in this study. Their performance is much better than that of the moving-average FIR filter, which causes significant feature distortion and has poor outlier-removal capability. These results show the potential of soft computing methods for specific signal-processing applications.
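A weighted recursive median filter is easy to sketch: within each window, already-filtered past outputs are reused (the recursive part) and every sample is replicated by its integer weight before the median is taken. The weights below are arbitrary placeholders; the study selects them with a genetic algorithm.

```python
# Weighted recursive median (WRM) filtering of a damage-indicator-like signal.
import numpy as np

def wrm_filter(x, back_weights, fwd_weights):
    """back_weights: integer weights for past OUTPUTS y[n-k..n-1];
    fwd_weights: integer weights for current/future INPUTS x[n..n+m]."""
    x = np.asarray(x, dtype=float)
    nb, nf = len(back_weights), len(fwd_weights)
    y = x.copy()
    for n in range(nb, len(x) - nf + 1):
        window = []
        for k, w in enumerate(back_weights):          # recursive taps
            window += [y[n - nb + k]] * w
        for k, w in enumerate(fwd_weights):           # non-recursive taps
            window += [x[n + k]] * w
        y[n] = np.median(window)
    return y

signal = np.concatenate([np.zeros(20), np.ones(20)])   # step = abrupt fault
noisy = signal + np.random.default_rng(2).normal(0, 0.1, 40)
noisy[10] += 3.0                                       # non-Gaussian outlier
print(np.round(wrm_filter(noisy, [2, 2], [3, 2, 2]), 2))  # outlier gone, edge kept
```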


Relevance:

10.00%

Publisher:

Abstract:

Filtering methods are explored for removing noise from data while preserving the sharp edges that may indicate a trend shift in gas turbine measurements. Linear filters are found to have problems removing noise while preserving features in the signal, whereas the nonlinear hybrid median filter is found to accurately reproduce the root signal from noisy data. Simulated faulty data and fault-free gas path measurement data are passed through median filters, and health residuals for the data set are created. The health residual is a scalar norm of the gas path measurement deltas and is used to partition the faulty engine from the healthy engine using fuzzy sets. The fuzzy detection system is developed and tested with both noisy and filtered data. Tests with simulated fault-free and faulty data show that fuzzy trend shift detection based on filtered data is very accurate, with no false alarms and negligible missed alarms.
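The detection chain can be sketched in a few lines: filter the measurement deltas, form the scalar health residual and map it through a fuzzy membership function. In the sketch below a plain median filter stands in for the hybrid median filter, and the membership breakpoints are illustrative assumptions.

```python
# Median filtering + scalar health residual + fuzzy trend-shift decision.
import numpy as np
from scipy.signal import medfilt

def health_residual(deltas):
    """deltas: (n_samples, n_measurements) gas-path measurement deltas."""
    filtered = np.apply_along_axis(medfilt, 0, deltas, kernel_size=5)
    return np.linalg.norm(filtered, axis=1)        # scalar norm per sample

def faulty_membership(r, lo=0.5, hi=1.5):
    """Fuzzy degree of membership in the 'faulty' set, ramping lo -> hi."""
    return np.clip((r - lo) / (hi - lo), 0.0, 1.0)

rng = np.random.default_rng(3)
deltas = rng.normal(0, 0.1, size=(100, 4))         # healthy measurement deltas
deltas[60:] += [0.8, -0.5, 0.6, 0.0]               # injected trend shift
r = health_residual(deltas)
alarm = faulty_membership(r) > 0.5                 # defuzzified decision
print("first alarm at sample", int(np.argmax(alarm)))
```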

Relevance:

10.00%

Publisher:

Abstract:

An analysis of the retrospective predictions by seven coupled ocean-atmosphere models from major forecasting centres in Europe and the USA is presented in this article, aimed at assessing their ability to predict the interannual variation of the Indian summer monsoon rainfall (ISMR), particularly the extremes (i.e. drought and excess-rainfall seasons). On the whole, the skill in predicting the extremes is not bad, since most of the models are able to predict the sign of the ISMR anomaly for a majority of the extremes. There is a remarkable coherence between the models in the successes and failures of their predictions, with all the models generating loud false alarms for the normal monsoon season of 1997 and the excess monsoon season of 1983. It is well known that the El Nino and Southern Oscillation (ENSO) and the Equatorial Indian Ocean Oscillation (EQUINOO) play an important role in the interannual variation of ISMR, particularly the extremes. The prediction of the phases of these modes and of their links with the monsoon has also been assessed. It is found that the models simulate the ENSO-monsoon link realistically, whereas the EQUINOO-ISMR link is simulated realistically by only one model, the ECMWF model. Furthermore, in most models this link is opposite to the observed one, with the predicted ISMR being negatively (instead of positively) correlated with rainfall over the western equatorial Indian Ocean and positively (instead of negatively) correlated with rainfall over the eastern equatorial Indian Ocean. Analysis of the seasons for which the predictions of almost all the models have large errors suggests the facets of ENSO, EQUINOO and their links with the monsoon that need to be improved for better monsoon predictions by these models.
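The categorical part of such an assessment, checking whether a model predicts the sign of the ISMR anomaly and counting hits, misses and false alarms, can be sketched as follows; the arrays are placeholders, not data from the study.

```python
# Hit / miss / false-alarm bookkeeping for drought predictions.
import numpy as np

def drought_skill(obs_anom, pred_anom, threshold=-1.0):
    """Anomalies in standardised units; a drought is anomaly < threshold."""
    obs_d, pred_d = obs_anom < threshold, pred_anom < threshold
    hits = np.sum(obs_d & pred_d)
    misses = np.sum(obs_d & ~pred_d)
    false_alarms = np.sum(~obs_d & pred_d)   # e.g. a loud false alarm like 1997
    return hits, misses, false_alarms

obs = np.array([-1.6, 0.3, -0.2, -1.2, 0.9])    # placeholder observed seasons
pred = np.array([-1.3, -1.4, 0.1, -0.4, 0.7])   # placeholder model predictions
print(drought_skill(obs, pred))                 # (1 hit, 1 miss, 1 false alarm)
```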

Relevance:

10.00%

Publisher:

Abstract:

Analysis inspired by complex systems theory suggests the hypothesis that financial meltdowns are abrupt critical transitions that occur when the system reaches a tipping point. Theoretical and empirical studies of climatic and ecological dynamical systems have shown that the approach to a tipping point is preceded by a generic phenomenon called critical slowing down, i.e. an increasingly slow response of the system to perturbations. It has therefore been suggested that critical slowing down may be used as an early warning signal of imminent critical transitions. Whether financial markets exhibit critical slowing down prior to meltdowns remains unclear. Here, our analysis reveals that three major US indices (Dow Jones, S&P 500 and NASDAQ) and two European indices (DAX and FTSE) did not exhibit critical slowing down prior to major financial crashes over the last century. However, all markets showed strong trends of rising variability, quantified by the time series variance and the spectral density at low frequencies, prior to crashes. These results suggest that financial crashes are not critical transitions occurring in the vicinity of a tipping point. Using a simple model, we argue that financial crashes are likely to be stochastic transitions, which can occur even when the system is far from the tipping point. Specifically, we show that a gradually increasing strength of stochastic perturbations may have caused the abrupt transitions in the financial markets. Broadly, our results highlight the importance of stochastically driven abrupt transitions in real-world scenarios. Our study offers rising variability as a precursor of financial meltdowns, albeit with the limitation that it may signal false alarms.
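The two indicators discussed above are straightforward to compute over a rolling window, as sketched below: lag-1 autocorrelation (the critical-slowing-down signature the markets did not show) and variance (the rising-variability precursor they did show). The random series is a placeholder, not market data.

```python
# Rolling early-warning indicators: lag-1 autocorrelation and variance.
import numpy as np

def rolling_indicators(x, window=250):
    """Return (lag-1 autocorrelation, variance) for each rolling window."""
    ac1, var = [], []
    for i in range(len(x) - window):
        w = x[i:i + window]
        ac1.append(np.corrcoef(w[:-1], w[1:])[0, 1])
        var.append(np.var(w))
    return np.array(ac1), np.array(var)

rng = np.random.default_rng(4)
# Placeholder series: returns with gradually rising perturbation strength
returns = rng.normal(0, np.linspace(0.5, 2.0, 2000))
ac1, var = rolling_indicators(returns)
print(f"variance trend: {var[0]:.2f} -> {var[-1]:.2f}")   # rises markedly
print(f"autocorr trend: {ac1[0]:.2f} -> {ac1[-1]:.2f}")   # stays near zero
```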