176 results for Alarms


Relevance: 10.00%

Abstract:

This study evaluates the effectiveness and social implications of home monitoring of 31 infants at risk of sudden infant death syndrome (SIDS). Thirteen siblings of children who died of SIDS, nine near-miss SIDS infants and nine preterm infants with apnoea persisting beyond 40 weeks postconceptional age were monitored from a mean age of 15 days to a mean of 10 months. Chest-movement detection monitors were used in 27 infants and thoracic impedance monitors in four. Genuine apnoeic episodes were reported by 21 families, and 13 infants required resuscitation. Apnoeic episodes occurred in all nine preterm infants but in only five (38%) of the SIDS siblings (P < 0.05). Troublesome false alarms were a major problem, occurring with 61% of the infants, and were more common among the preterm infants than among the SIDS siblings. All but two couples stated that the monitor decreased anxiety and improved their quality of life. Most parents accepted that the social restrictions imposed by the monitor were part of the caring process, but four couples were highly resentful of the changes imposed on their lifestyle. The monitors used were far from ideal: malfunction occurred in 17 cases, necessitating replacement in six, repair in six and cessation of monitoring in three. The parents became ingenious in modifying the monitors to their own individual requirements. Although none of these 31 ‘at risk’ infants died, the study sample was far too small to conclude whether home monitoring prevented any cases of SIDS.

Relevance: 10.00%

Abstract:

The aim of this thesis is to develop a fully automatic lameness detection system that operates in a milking robot. The instrumentation, measurement software, algorithms for data analysis and a neural network model for lameness detection were developed. Automatic milking has become common practice in dairy husbandry; in 2006, about 4,000 farms worldwide used over 6,000 milking robots. There is a worldwide movement towards fully automating every process from feeding to milking. The increase in automation is a consequence of increasing farm sizes, the demand for more efficient production and growing labour costs. As the level of automation increases, the time that the cattle keeper spends monitoring animals often decreases, which has created a need for systems that automatically monitor the health of farm animals. The popularity of milking robots also offers a new and unique opportunity to monitor animals in a single confined space up to four times daily. Lameness is a crucial welfare issue in the modern dairy industry. Limb disorders cause serious welfare, health and economic problems, especially in loose housing of cattle. Lameness causes losses in milk production and leads to early culling of animals; these costs could be reduced with early identification and treatment. At present, only a few methods for automatically detecting lameness have been developed, and the most common methods for lameness detection and assessment are various visual locomotion scoring systems. The problem with locomotion scoring is that it requires experience to be conducted properly, is labour-intensive as an on-farm method, and yields subjective results. A four-balance system for measuring the leg-load distribution of dairy cows during milking, in order to detect lameness, was developed and set up at the University of Helsinki research farm Suitia.
The leg weights of 73 cows were successfully recorded during almost 10,000 robotic milkings over a period of 5 months. The cows were locomotion-scored weekly, and the lame cows were inspected clinically for hoof lesions. Unsuccessful measurements, caused by cows standing outside the balances, were removed from the data with a special algorithm, and the mean leg loads and the number of kicks during milking were calculated. To develop an expert system for automatically detecting lameness cases, a model was needed; a probabilistic neural network (PNN) classifier was chosen for the task. The data was divided into two parts, and 5,074 measurements from 37 cows were used to train the model. The model was then evaluated on its ability to detect lameness in the validation dataset, which contained 4,868 measurements from 36 cows. The model classified 96% of the measurements correctly as sound or lame, and 100% of the lameness cases in the validation data were identified. The proportion of measurements causing false alarms was 1.1%. The developed model has the potential to be used for on-farm decision support and in a real-time lameness monitoring system.
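The abstract does not give implementation details of the PNN classifier, but a PNN is essentially a Parzen-window (Gaussian kernel density) classifier, which can be sketched in a few lines. The features, values and class labels below are hypothetical, purely for illustration:

```python
import numpy as np

def pnn_classify(x, train_X, train_y, sigma=0.5):
    """Probabilistic neural network (Parzen-window) classifier: each
    class score is the mean of Gaussian kernels centred on the
    training samples of that class; the highest-scoring class wins."""
    scores = {}
    for label in np.unique(train_y):
        pts = train_X[train_y == label]
        d2 = np.sum((pts - x) ** 2, axis=1)
        scores[label] = np.sum(np.exp(-d2 / (2 * sigma ** 2))) / len(pts)
    return max(scores, key=scores.get)

# Hypothetical 2-feature samples: [leg-load asymmetry, kicks per milking]
train_X = np.array([[0.1, 0.0], [0.2, 1.0], [0.8, 3.0], [0.9, 4.0]])
train_y = np.array(["sound", "sound", "lame", "lame"])
print(pnn_classify(np.array([0.85, 3.5]), train_X, train_y))  # → lame
```

A PNN of this kind needs no iterative training, which makes it attractive for on-farm use where models must be rebuilt as new cows enter the herd.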

Relevance: 10.00%

Abstract:

Various intrusion detection systems (IDSs) reported in the literature have shown distinct preferences for detecting a certain class of attack with improved accuracy while performing moderately on the other classes. In view of the enormous computing power available in present-day processors, deploying multiple IDSs in the same network to obtain best-of-breed solutions has been attempted earlier. The paper presented here addresses the problem of optimizing the performance of IDSs using sensor fusion with multiple sensors. The trade-off between the detection rate and false alarms with multiple sensors is highlighted, and it is illustrated that the detector performs better when the fusion threshold is determined according to the Chebyshev inequality. In the proposed data-dependent decision (DD) fusion method, the performance optimization of individual IDSs is addressed first. A neural network supervised learner is designed to determine the weights of individual IDSs depending on their reliability in detecting a certain attack. The final stage of this DD fusion architecture is a sensor fusion unit that performs the weighted aggregation in order to make an appropriate decision. The paper theoretically models the fusion of IDSs to demonstrate the improvement in performance, supplemented with an empirical evaluation.
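The Chebyshev inequality gives a distribution-free bound, P(|S − μ| ≥ kσ) ≤ 1/k², so a fusion threshold can be set without assuming a score distribution. As a minimal sketch (the statistics and target rate below are hypothetical, not taken from the paper):

```python
import math

def chebyshev_threshold(mu, sigma, alpha):
    """Fusion threshold from the Chebyshev inequality:
    P(|S - mu| >= k*sigma) <= 1/k^2, so choosing k = 1/sqrt(alpha)
    bounds the false-alarm probability on attack-free traffic by alpha."""
    k = 1.0 / math.sqrt(alpha)
    return mu + k * sigma

# Hypothetical fused-score statistics measured on attack-free traffic
mu, sigma = 0.2, 0.05
T = chebyshev_threshold(mu, sigma, alpha=0.01)  # k = 10
print(round(T, 3))  # → 0.7
```

Because the bound holds for any distribution with finite variance, the resulting threshold is conservative: the true false-alarm rate is typically well below the chosen alpha.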

Relevance: 10.00%

Abstract:

The motivation behind the fusion of intrusion detection systems was the realization that, with increasing traffic and increasingly complex attacks, none of the present-day stand-alone intrusion detection systems can meet the demand for a very high detection rate together with an extremely low false-positive rate. Multi-sensor fusion can meet these requirements by refining the combined response of different intrusion detection systems. In this paper, we show a sensor-fusion design technique that best utilizes the useful responses from multiple sensors through an appropriate adjustment of the fusion threshold. The threshold is generally chosen from past experience or by an expert system; here we show that choosing the threshold bounds according to the Chebyshev inequality performs better. This approach also helps to solve the problem of scalability and has the advantage of fail-safe capability. The paper theoretically models the fusion of intrusion detection systems to demonstrate the improvement in performance, supplemented with an empirical evaluation. The combination of complementary sensors is shown to detect more attacks than the individual components. Since the individual sensors chosen detect sufficiently different attacks, their results can be merged for improved performance. The combination is done in different ways: (i) taking all the alarms from each system and avoiding duplications, (ii) taking alarms from each system by fixing threshold bounds, and (iii) rule-based fusion with a priori knowledge of the individual sensor performance. A number of evaluation metrics are used, and the results indicate an overall enhancement in the performance of the combined detector using sensor fusion with the threshold bounds, and significantly better performance using simple rule-based fusion.
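Two of the three combination strategies named above, (i) duplicate-free union and (iii) rule-based fusion, can be sketched directly. The sensor names, alarm identifiers and trust table below are hypothetical illustrations, not the paper's data:

```python
def union_fusion(alarm_sets):
    """(i) Take all alarms from each IDS, avoiding duplicates."""
    merged = set()
    for s in alarm_sets:
        merged |= set(s)
    return merged

def rule_based_fusion(alarms_by_sensor, trusted_for):
    """(iii) Rule-based fusion: keep an alarm only if it comes from a
    sensor known a priori to be reliable for that attack class."""
    kept = set()
    for sensor, alarms in alarms_by_sensor.items():
        for attack in alarms:
            if attack in trusted_for.get(sensor, set()):
                kept.add(attack)
    return kept

ids_a = {"probe-1", "dos-7"}
ids_b = {"dos-7", "u2r-3"}
print(sorted(union_fusion([ids_a, ids_b])))  # → ['dos-7', 'probe-1', 'u2r-3']

# Sensor A is trusted only for probes, B only for user-to-root attacks,
# so the shared "dos-7" alarm is dropped by the rule-based scheme.
trust = {"A": {"probe-1"}, "B": {"u2r-3"}}
print(sorted(rule_based_fusion({"A": ids_a, "B": ids_b}, trust)))  # → ['probe-1', 'u2r-3']
```

The union maximizes detection at the cost of inheriting every sensor's false alarms; the rule-based scheme trades some coverage for precision, which matches the abstract's finding that rule-based fusion performs best.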

Relevance: 10.00%

Abstract:

The problem of denoising damage indicator signals for improved operational health monitoring of systems is addressed by applying soft computing methods to design filters. Since measured data in operational settings are contaminated with noise and outliers, pattern recognition algorithms for fault detection and isolation can give false alarms. A direct approach to improving fault detection and isolation is to remove noise and outliers from time series of measured data or damage indicators before performing fault detection and isolation. Many popular signal-processing approaches do not work well with damage indicator signals, which can contain sudden changes due to abrupt faults as well as non-Gaussian outliers. Signal-processing algorithms based on radial basis function (RBF) neural networks and weighted recursive median (WRM) filters are explored for denoising simulated time series. The RBF neural network filter is developed using a K-means clustering algorithm and is much less computationally expensive to develop than feedforward neural networks trained using backpropagation. The nonlinear multimodal integer-programming problem of selecting optimal integer weights of the WRM filter is solved using a genetic algorithm. Numerical results are obtained for helicopter rotor structural damage indicators based on simulated frequencies. Test signals consider low-order polynomial growth of damage indicators with time, to simulate gradual or incipient faults, and step changes in the signal, to simulate abrupt faults; noise and outliers are added to the test signals. The WRM and RBF filters achieve noise reductions of 54-71% and 59-73%, respectively, for the test signals considered in this study. Their performance is much better than that of the moving-average FIR filter, which causes significant feature distortion and has poor outlier-removal capabilities. These results show the potential of soft computing methods for specific signal-processing applications.
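A weighted recursive median filter replaces each sample with the median of a window in which samples are repeated according to integer weights, and already-filtered outputs feed back into the left half of the window. The sketch below uses hypothetical weights (the paper selects them with a genetic algorithm) and a toy signal with one outlier and one abrupt step:

```python
import statistics

def wrm_filter(x, weights):
    """Weighted recursive median filter: each output is the median of a
    window whose left half contains already-filtered outputs (recursion)
    and whose right half contains raw inputs, each sample repeated
    according to its integer weight."""
    half = len(weights) // 2
    y = list(x)
    for n in range(len(x)):
        window = []
        for j, w in enumerate(weights):
            idx = n + j - half
            if 0 <= idx < len(x):
                src = y[idx] if idx < n else x[idx]  # reuse filtered outputs
                window.extend([src] * w)
        y[n] = statistics.median(window)
    return y

# Outlier at index 3 (simulated non-Gaussian spike) and a step at
# index 5 (simulated abrupt fault)
signal = [0, 0, 0, 9, 0, 5, 5, 5]
print(wrm_filter(signal, weights=[1, 2, 3, 2, 1]))
# outlier suppressed, step edge preserved
```

Unlike a moving-average FIR filter, the median operation rejects the outlier entirely while leaving the step edge sharp, which is the behavior the abstract reports.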


Relevance: 10.00%

Abstract:

Filtering methods are explored for removing noise from data while preserving sharp edges that may indicate a trend shift in gas turbine measurements. Linear filters are found to have problems removing noise while preserving features in the signal, whereas the nonlinear hybrid median filter is found to accurately reproduce the root signal from noisy data. Simulated faulty data and fault-free gas path measurement data are passed through median filters, and health residuals for the data set are created. The health residual is a scalar norm of the gas path measurement deltas and is used to separate the faulty engine from the healthy engine using fuzzy sets. The fuzzy detection system is developed and tested with both noisy and filtered data. Tests with simulated fault-free and faulty data show that fuzzy trend-shift detection based on filtered data is very accurate, with no false alarms and negligible missed alarms.
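One common form of hybrid median filter for 1-D signals, sketched here under the assumption that the paper uses something similar, takes the median of a backward average, the current sample, and a forward average: the averages attenuate noise while the median step keeps a sharp edge from being smeared. The window size and signal are illustrative only:

```python
import statistics

def fmh_filter(x, half_window=3):
    """FIR-median hybrid filter sketch: each output is the median of the
    backward average, the current sample, and the forward average."""
    y = []
    for n in range(len(x)):
        left = x[max(0, n - half_window):n] or [x[n]]
        right = x[n + 1:n + 1 + half_window] or [x[n]]
        y.append(statistics.median([sum(left) / len(left),
                                    x[n],
                                    sum(right) / len(right)]))
    return y

# Noisy measurement deltas with a sharp trend shift at index 4
sig = [0.1, -0.1, 0.0, 0.1, 5.0, 5.1, 4.9, 5.0]
print([round(v, 2) for v in fmh_filter(sig)])
# the step at index 4 survives; small fluctuations are attenuated
```

A plain moving average over the same window would spread the index-4 step across several samples, delaying or blurring exactly the trend shift the fuzzy detector needs to see.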

Relevance: 10.00%

Abstract:

An analysis of retrospective predictions by seven coupled ocean-atmosphere models from major forecasting centres in Europe and the USA is presented, aimed at assessing their ability to predict the interannual variation of the Indian summer monsoon rainfall (ISMR), particularly the extremes (i.e. drought and excess-rainfall seasons). On the whole, the skill in predicting extremes is reasonable, since most of the models are able to predict the sign of the ISMR anomaly for a majority of the extremes. There is a remarkable coherence between the models in the successes and failures of the predictions, with all the models generating loud false alarms for the normal monsoon season of 1997 and the excess monsoon season of 1983. It is well known that the El Nino-Southern Oscillation (ENSO) and the Equatorial Indian Ocean Oscillation (EQUINOO) play an important role in the interannual variation of ISMR, particularly the extremes. The prediction of the phases of these modes and their links with the monsoon has also been assessed. The models are able to simulate the ENSO-monsoon link realistically, whereas the EQUINOO-ISMR link is simulated realistically by only one model, the ECMWF model. Furthermore, in most models this link is opposite to the observed one, with the predicted ISMR being negatively (instead of positively) correlated with rainfall over the western equatorial Indian Ocean and positively (instead of negatively) correlated with rainfall over the eastern equatorial Indian Ocean. Analysis of the seasons for which the predictions of almost all the models have large errors suggests the facets of ENSO, EQUINOO and their links with the monsoon that need to be improved for better monsoon predictions by these models.

Relevance: 10.00%

Abstract:

Complex-systems analysis suggests the hypothesis that financial meltdowns are abrupt critical transitions that occur when the system reaches a tipping point. Theoretical and empirical studies on climatic and ecological dynamical systems have shown that the approach to a tipping point is preceded by a generic phenomenon called critical slowing down, i.e. an increasingly slow response of the system to perturbations. It has therefore been suggested that critical slowing down may be used as an early warning signal of imminent critical transitions. Whether financial markets exhibit critical slowing down prior to meltdowns remains unclear. Here, our analysis reveals that three major US indices (the Dow Jones Index, S&P 500 and NASDAQ) and two European markets (DAX and FTSE) did not exhibit critical slowing down prior to major financial crashes over the last century. However, all markets showed strong trends of rising variability, quantified by time-series variance and the spectral function at low frequencies, prior to crashes. These results suggest that financial crashes are not critical transitions occurring in the vicinity of a tipping point. Using a simple model, we argue that financial crashes are more likely stochastic transitions, which can occur even when the system is far away from the tipping point. Specifically, we show that a gradually increasing strength of stochastic perturbations may have caused abrupt transitions in the financial markets. Broadly, our results highlight the importance of stochastically driven abrupt transitions in real-world scenarios. Our study offers rising variability as a precursor of financial meltdowns, albeit with the limitation that it may signal false alarms.
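The rising-variability precursor above amounts to tracking the variance of returns in a sliding window. A minimal sketch with synthetic data (the window length, noise model and seed are arbitrary choices, not the study's):

```python
import numpy as np

def rolling_variance(series, window):
    """Variance of a return series in a sliding window; a rising trend
    in this indicator is the precursor associated with crashes."""
    s = np.asarray(series, dtype=float)
    return np.array([s[i - window:i].var() for i in range(window, len(s) + 1)])

# Synthetic returns whose perturbation strength grows with time,
# mimicking the gradually increasing stochastic forcing in the model
rng = np.random.default_rng(0)
t = np.arange(500)
returns = (0.5 + 2.0 * t / len(t)) * rng.standard_normal(500)
v = rolling_variance(returns, window=100)
print(v[-1] > v[0])  # variability rises toward the end → True
```

In practice the same indicator computed on a quiet market stays flat, which is why rising variance can serve as a warning signal, albeit one prone to the false alarms the abstract acknowledges.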

Relevance: 10.00%

Abstract:

The Accelerating Moment Release (AMR) preceding earthquakes of magnitude above 5 in Australia during the last 20 years was analyzed to test the Critical Point Hypothesis. Twelve earthquakes in the catalog were chosen based on a criterion for the number of nearby events. Results show that seven sequences with numerous events recorded leading up to the main earthquake exhibited accelerating moment release. Two others occurred close in time and space to earthquakes that were preceded by AMR. The remaining three sequences had very few events in the catalog, so the lack of AMR detected in the analysis may be related to catalog incompleteness. Spatio-temporal scanning of AMR parameters shows that 80% of the areas in which AMR occurred experienced large events. In areas of similar background seismicity with no large events, 10 out of 12 cases exhibit no AMR, and the other two are false alarms where AMR was observed but no large event followed. The relationship between AMR and the Load-Unload Response Ratio (LURR) was also studied. Both methods predict similar critical-region sizes; however, the critical-point time obtained from AMR is slightly earlier than the time of the critical-point LURR anomaly.
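AMR analyses commonly fit cumulative Benioff strain to a power-law time-to-failure model, A + B(t_f − t)^m, and quantify acceleration with a curvature parameter C, the ratio of the power-law fit's RMS misfit to that of a straight line (C well below 1 indicates acceleration). The sketch below assumes that formulation; the exponent, failure time and synthetic sequence are illustrative, not the study's data:

```python
import numpy as np

def curvature_parameter(t, benioff, t_f, m=0.3):
    """AMR curvature parameter C: RMS misfit of the power-law model
    A + B*(t_f - t)**m divided by the RMS misfit of a straight line.
    Both fits are linear least-squares in their basis functions."""
    def rms(design):
        coef, *_ = np.linalg.lstsq(design, benioff, rcond=None)
        return np.sqrt(np.mean((design @ coef - benioff) ** 2))
    ones = np.ones_like(t)
    power = np.column_stack([ones, (t_f - t) ** m])
    linear = np.column_stack([ones, t])
    return rms(power) / rms(linear)

# Synthetic accelerating sequence approaching failure at t_f = 10
t = np.linspace(0.0, 9.5, 50)
strain = 5.0 - 2.0 * (10.0 - t) ** 0.3
print(curvature_parameter(t, strain, t_f=10.0) < 1.0)  # → True
```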

Relevance: 10.00%

Abstract:

An increasing number of Ambient Intelligence (AmI) systems are time-sensitive and resource-aware. From healthcare to building and even home/office automation, it is now common to find systems combining interactive and sensing multimedia traffic with relatively simple sensors and actuators (door locks, presence detectors, RFIDs, HVAC, information panels, etc.). Many of these are today known as Cyber-Physical Systems (CPS). Quite frequently, these systems must be capable of (1) prioritizing different traffic flows (process data, alarms, non-critical data, etc.), (2) synchronizing actions in several distributed devices and, to a certain degree, (3) easing resource management (e.g., detecting faulty nodes, managing battery levels, handling overloads, etc.). This work presents FTT-MA, a high-level middleware architecture aimed at easing the design, deployment and operation of such AmI systems. FTT-MA ensures that both functional and non-functional aspects of the applications are met, even during reconfiguration stages. The paper also proposes a methodology, together with a design tool, to create this kind of system. Finally, a sample case study illustrates the use of the middleware and the methodology proposed in the paper.
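Point (1), prioritizing traffic flows, can be illustrated with a small priority-ordered dispatcher. This is a generic sketch, not the FTT-MA API; the traffic classes, priorities and payloads are hypothetical:

```python
import heapq

# Lower number = higher priority: alarms preempt process data,
# which preempts non-critical data.
PRIORITY = {"alarm": 0, "process": 1, "non-critical": 2}

class TrafficScheduler:
    def __init__(self):
        self._q, self._seq = [], 0  # seq preserves FIFO order per class

    def submit(self, kind, payload):
        heapq.heappush(self._q, (PRIORITY[kind], self._seq, payload))
        self._seq += 1

    def dispatch(self):
        return heapq.heappop(self._q)[2]

sched = TrafficScheduler()
sched.submit("non-critical", "battery report")
sched.submit("process", "HVAC setpoint")
sched.submit("alarm", "door forced open")
print([sched.dispatch() for _ in range(3)])
# → ['door forced open', 'HVAC setpoint', 'battery report']
```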

Relevance: 10.00%

Abstract:

Smartphones and other powerful sensor-equipped consumer devices make it possible to sense the physical world at an unprecedented scale. Nearly 2 million Android and iOS devices are activated every day, each carrying numerous sensors and a high-speed internet connection. Whereas traditional sensor networks have typically deployed a fixed number of devices to sense a particular phenomenon, community networks can grow as additional participants choose to install apps and join the network. In principle, this allows networks of thousands or millions of sensors to be created quickly and at low cost. However, making reliable inferences about the world using so many community sensors involves several challenges, including scalability, data quality, mobility, and user privacy.

This thesis focuses on how learning at both the sensor and network level can provide scalable techniques for data collection and event detection. First, the thesis considers the abstract problem of distributed algorithms for data collection and proposes a distributed, online approach to selecting which set of sensors should be queried. In addition to providing theoretical guarantees for submodular objective functions, the approach is also compatible with local rules or heuristics for detecting and transmitting potentially valuable observations. Next, the thesis presents a decentralized algorithm for spatial event detection and describes its use in detecting strong earthquakes within the Caltech Community Seismic Network. Despite the fact that strong earthquakes are rare and complex events, and that community sensors can be very noisy, the decentralized anomaly detection approach obtains theoretical guarantees for event detection performance while simultaneously limiting the rate of false alarms.
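For submodular objectives such as sensor coverage, the classic guarantee is that greedy selection achieves at least a (1 − 1/e) fraction of the optimal value. The sketch below shows plain greedy selection on a hypothetical coverage objective; it illustrates the class of guarantee mentioned above, not the thesis's specific distributed algorithm:

```python
def greedy_select(sensors, coverage, k):
    """Greedy maximisation of a submodular coverage objective: repeatedly
    add the sensor with the largest marginal gain. For monotone submodular
    objectives this achieves >= (1 - 1/e) of the optimal size-k value."""
    chosen, covered = [], set()
    for _ in range(k):
        best = max((s for s in sensors if s not in chosen),
                   key=lambda s: len(coverage[s] - covered))
        chosen.append(best)
        covered |= coverage[best]
    return chosen, covered

# Hypothetical sensors, each covering a set of city regions
coverage = {"s1": {1, 2, 3}, "s2": {3, 4}, "s3": {4, 5, 6}, "s4": {1, 6}}
chosen, covered = greedy_select(list(coverage), coverage, k=2)
print(chosen, sorted(covered))  # → ['s1', 's3'] [1, 2, 3, 4, 5, 6]
```

Marginal gain, not standalone value, drives each pick: after "s1" is chosen, "s2" contributes only region 4, so the complementary "s3" is selected instead.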

Relevance: 10.00%

Abstract:

Earthquake early warning (EEW) systems have developed rapidly over the past decade. The Japan Meteorological Agency (JMA) operated an EEW system during the 2011 M9 Tohoku earthquake in Japan, which increased awareness of EEW systems around the world. While longer-term earthquake prediction still faces many challenges before becoming practical, the availability of short-term EEW opens a new door for earthquake loss mitigation. After an earthquake fault begins rupturing, an EEW system utilizes the first few seconds of recorded seismic waveform data to quickly predict the hypocenter location, magnitude, origin time and the expected shaking intensity level around the region. This early warning information is broadcast to different sites before the strong shaking arrives. The warning lead time of such a system is short, typically a few seconds to a minute or so, and the information is uncertain. These factors limit human intervention to activate mitigation actions, and this must be addressed for engineering applications of EEW. This study applies a Bayesian probabilistic approach, along with machine learning techniques and decision theory from economics, to improve different aspects of EEW operation, including extending it to engineering applications.

Existing EEW systems are often based on a deterministic approach and typically assume that only a single event occurs within a short period of time, an assumption that led to many false alarms after the Tohoku earthquake in Japan. This study develops a probability-based EEW algorithm, built on an existing deterministic model, that extends the EEW system to the case of concurrent events, which are often observed during the aftershock sequence following a large earthquake.

To overcome the challenges of uncertain information and short lead times, this study also develops an earthquake probability-based automated decision-making (ePAD) framework to make robust decisions for EEW mitigation applications. A cost-benefit model that can capture the uncertainties in EEW information and in the decision process is used. This approach, called Performance-Based Earthquake Early Warning, is based on the PEER Performance-Based Earthquake Engineering method. The use of surrogate models is suggested to improve computational efficiency, and new models are proposed to incorporate the influence of lead time into the cost-benefit analysis. For example, a value-of-information model is used to quantify the potential value of delaying the activation of a mitigation action for a possible reduction of the uncertainty of EEW information in the next update. Two practical examples, an evacuation alert and elevator control, are studied to illustrate the ePAD framework. Potential advanced EEW applications, such as multiple-action decisions and the synergy of EEW with structural health monitoring systems, are also discussed.
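The core cost-benefit idea can be reduced to comparing expected costs with and without the mitigation action. This is a deliberately simplified sketch of that idea, not the ePAD framework itself; all parameter names and values are hypothetical:

```python
def should_activate(p_damage, loss_if_damage, mitigation_cost, effectiveness):
    """Simplified cost-benefit rule: activate the mitigation action when
    its expected cost (fixed cost plus residual expected loss) is below
    the expected loss of doing nothing."""
    cost_no_action = p_damage * loss_if_damage
    cost_action = mitigation_cost + p_damage * loss_if_damage * (1 - effectiveness)
    return cost_action < cost_no_action

# Hypothetical elevator-control decision: P(strong shaking) comes from
# the probabilistic EEW update; acting costs 5 units of service loss.
print(should_activate(p_damage=0.3, loss_if_damage=100.0,
                      mitigation_cost=5.0, effectiveness=0.8))  # → True
```

With a much lower shaking probability the same rule declines to act, which is precisely how a cost-benefit threshold suppresses false-alarm-driven interventions.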

Relevance: 10.00%

Abstract:

Civil procedure needs order, simplicity and efficiency to fulfil its purpose of providing adequate, fair and timely judicial protection. To that end, procedural law has undergone significant changes in order to adapt to new social and legal demands, in which formalism should serve to protect rather than to defeat. In addition, a variety of procedural techniques have been employed to make judicial protection more effective without sacrificing the necessary legal certainty. It is in this context that procedural public order arises: although it lends itself to an interesting principle-based approach, it operates in the proceedings as a technique for controlling the regularity of acts and of the procedure. In turn, the judge's role in managing this technique is fundamental for it to achieve its goal, which is to purge from the proceedings the defects capable of tainting their integrity, as well as the legitimacy of judicial protection. Adequate and timely control of the regularity of acts and of the procedure is a duty of the judge and also a guarantee for the parties. The thesis therefore seeks to identify the procedural questions subject to control according to the degree of public interest each one reveals, with statute, doctrine and case law serving as sources and potentially modulating the relevance of the matter according to the time and place in which they are observed. The assessment of the public interest in each procedural question is in turn reflected in the legal regime that will be established and in the consequences attached to any defects, based on the particularities of the concrete case. Moreover, once an irregularity is identified, civil procedure offers a variety of techniques for overcoming, validating or relaxing the defect before declaring procedural acts null or refusing the procedure adopted by the party, as a way of preserving the proceedings to the greatest extent possible.
In the appellate sphere, although there are specific admissibility requirements, defects detected at first instance lose force on appeal and before the Superior Courts, given the ever-greater need to provide litigants with a complete judicial response, that is, one that examines the merits. Note also the possibility of judicial control over alternative dispute resolution mechanisms, since these must likewise satisfy certain requirements in order to be endorsed and legitimised. As can be seen, the breadth of the topic of procedural public order makes it extensive and complex, which tends to intimidate legal practitioners. The aim of this study is therefore not only to describe the subject but also to adopt a distinctive style of writing, offering a new way of approaching and systematising what still appears to be a dogma in our procedural system.

Relevance: 10.00%

Abstract:

Voice alarms play an important role in the emergency evacuation of public places, because they can provide information and guide evacuation. This paper studied the optimization of acoustic and semantic parameters of voice alarms in emergency evacuation, so that alarm design can improve evacuation performance. Both the magnitude estimation method and a rating scale were used to investigate participants' perceived urgency of alarms with different parameters. The results indicated that participants rated alarms with a faster speech rate, a greater signal-to-noise ratio (SNR) and louder background noise as more urgent. There was an interaction between noise level and the content of the voice alarm. Signals with a speech rate below 4 characters/second were rated as not urgent at all. The intelligibility of the voice alarms was investigated by evaluating recognition performance for key points. The results showed that the effect of speech rate was marginally significant, with 7 characters/second yielding the highest intelligibility; this might be because faster speech attracts more attention. Gender of the speaker and SNR did not have a significant effect on the signals' intelligibility. The paper also investigated the impact of the content of voice alarms on human behavior during emergency evacuation in a 3-D virtual reality environment. In the condition of "telling the occupants what had happened and what to do", the number of participants who evacuated successfully was the largest. A further study, in which similar numbers of participants evacuated successfully in the three conditions, indicated that reaction time and evacuation time were shortest in the aforesaid condition. Although one-way ANOVA showed that the difference was not significant, the results still provide some reference for alarm design. In summary, the parameters of voice alarms for emergency evacuation should be chosen to meet the needs of both perceived urgency and intelligibility.
The content of the alarms should include "what had happened and what to do", and should vary according to the noise levels in different public places.