884 results for detection performance


Relevance: 70.00%

Abstract:

This paper presents a video surveillance framework that robustly and efficiently detects abandoned objects in surveillance scenes. The framework is based on a novel threat assessment algorithm that combines the concept of ownership with automatic understanding of social relations in order to infer abandonment of objects. Implementation is achieved through a logic-based inference engine developed in Prolog. Threat detection performance is evaluated against a range of datasets depicting realistic situations, demonstrating a reduction in the number of false alarms generated. The proposed system represents the approach employed in the EU SUBITO project (Surveillance of Unattended Baggage and the Identification and Tracking of the Owner).

Relevance: 70.00%

Abstract:

The aim of this study was to examine the effects of aging and target eccentricity on a visual search task comprising 30 images of everyday life projected into a hemisphere, realizing a ±90° visual field. The task, performed binocularly, allowed participants to freely move their eyes to scan the images for an appearing target or distractor stimulus (presented at 10°, 30°, and 50° eccentricity). The distractor stimulus required no response, while the target stimulus required acknowledgment by pressing the response button. One hundred and seventeen healthy subjects (mean age = 49.63 years, SD = 17.40 years, age range 20–78 years) were studied. The results show that target detection performance decreases with age as well as with increasing eccentricity, especially for older subjects. Reaction time also increases with age and eccentricity, but in contrast to target detection, there is no interaction between age and eccentricity. Eye movement analysis showed that younger subjects exhibited a passive search strategy while older subjects exhibited an active search strategy, probably as compensation for their reduced peripheral detection performance.

Relevance: 70.00%

Abstract:

We investigate the problem of distributed sensor failure detection in networks with a small number of defective sensors, whose measurements differ significantly from those of their neighbors. We build on the sparse nature of the binary sensor failure signals to propose a novel distributed detection algorithm based on gossip mechanisms and on Group Testing (GT), where the latter has so far been used in centralized detection problems. The new distributed GT algorithm estimates the set of scattered defective sensors with a low-complexity distance decoder from a small number of linearly independent binary messages exchanged by the sensors. We first consider networks with one defective sensor and determine the minimal number of linearly independent messages needed for its detection with high probability. We then extend our study to the detection of multiple defective sensors by appropriately modifying the message exchange protocol and the decoding procedure. We show that, for small- and medium-sized networks, the number of messages required for successful detection is actually smaller than the minimal number computed theoretically. Finally, simulations demonstrate that the proposed method outperforms methods based on random walks in terms of both detection performance and convergence rate.
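The group-testing idea at the heart of this scheme can be sketched in a few lines. The snippet below is a minimal, centralized illustration, not the paper's gossip-based protocol: sensors are pooled by a binary test matrix, and a Hamming-distance decoder recovers a single defective sensor. The matrix construction and all parameters are assumptions made for the demo.

```python
import numpy as np

n_sensors, n_tests = 20, 5
defective = 7  # index of the single defective sensor (chosen for the demo)

# Deterministic binary test matrix: column i is the 5-bit code of i + 1,
# so every sensor has a distinct, nonzero participation pattern.
A = np.array([[(i + 1) >> t & 1 for i in range(n_sensors)]
              for t in range(n_tests)])

# Test outcomes: a pooled test is positive iff it includes the defective sensor.
outcomes = A[:, defective].astype(bool)

# Distance decoder: the estimate is the sensor whose column of A is closest
# in Hamming distance to the observed outcome vector.
distances = np.sum(A.astype(bool) != outcomes[:, None], axis=0)
estimate = int(np.argmin(distances))
print(estimate)  # -> 7
```

Because the columns are distinct, the defective sensor's column matches the outcome vector exactly (distance zero), so the decoder is exact here; the paper's contribution lies in achieving this distributively with few exchanged messages.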

Relevance: 70.00%

Abstract:

BIPV systems are small PV generation units spread out over the territory, and their characteristics are very diverse. This makes cost-effective procedures for monitoring, fault detection, performance analysis, operation, and maintenance difficult. As a result, many problems affecting BIPV systems go undetected. In order to carry out effective automatic fault detection procedures, we need a performance indicator that is reliable and that can be applied to many PV systems at very low cost. Existing approaches for analyzing the performance of PV systems are often based on the Performance Ratio (PR), whose accuracy depends on good solar irradiation data, which in turn can be very difficult to obtain or cost-prohibitive for the BIPV owner. We present an alternative fault detection procedure based on a performance indicator that can be constructed solely from the energy production data measured at the BIPV systems. This procedure does not require operating-conditions data such as solar irradiation, air temperature, or wind speed. The performance indicator, called Performance to Peers (P2P), is constructed from spatial and temporal correlations between the energy output of neighboring and similar PV systems. This method was developed from the analysis of the energy production data of approximately 10,000 BIPV systems located in Europe. The results of our procedure are illustrated on hourly, daily, and monthly data monitored during one year at a BIPV system located in the south of Belgium. Our results confirm that it is possible to carry out automatic fault detection procedures without solar irradiation data. P2P proves to be more stable than PR most of the time, and thus constitutes a more reliable performance indicator for fault detection procedures. We also discuss the main limitations of this novel methodology, and we suggest several promising lines of future research to improve on these procedures.
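A minimal sketch of the Performance-to-Peers idea, under a simplified assumption: each system's output is compared against the median output of similar neighbouring systems, and sustained shortfalls are flagged. The paper's actual indicator is built from spatial and temporal correlations over roughly 10,000 systems; the toy data and the 0.5 flagging threshold here are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hourly energy output (kWh) for five similar neighbouring systems over ten
# days: a day/night production profile times per-hour, per-system variation.
hours = 240
profile = np.sin(np.linspace(0, 10 * np.pi, hours)).clip(0)
peers = rng.uniform(0.8, 1.2, size=(5, hours)) * profile

# The target system tracks its peers until a simulated fault zeroes its output.
target = 0.95 * peers.mean(axis=0)
target[100:110] = 0.0

# Performance to Peers: output relative to the median of the peer systems.
peer_ref = np.median(peers, axis=0)
safe_ref = np.where(peer_ref > 0, peer_ref, 1.0)
p2p = np.where(peer_ref > 0, target / safe_ref, 1.0)  # night hours pass

# Flag hours where the system produces far less than its peers.
faulty_hours = np.where(p2p < 0.5)[0]
```

No irradiation or temperature data enters the computation, which is the point of the method: the peers themselves serve as the reference for expected production.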

Relevance: 70.00%

Abstract:

Visual detection performance (d') is usually an accelerating function of stimulus contrast, which could imply a smooth, threshold-like nonlinearity in the sensory response. Alternatively, Pelli (1985 Journal of the Optical Society of America A 2 1508 - 1532) developed the 'uncertainty model', in which responses were linear with contrast but the observer was uncertain about which of many noisy channels contained the signal. Such internal uncertainty effectively adds noise to weak signals, and predicts the nonlinear psychometric function. We re-examined these ideas by plotting psychometric functions (as z-scores) for two observers (SAW, PRM) with high precision. The task was to detect a single, vertical, blurred line at the fixation point, or to identify its polarity (light vs dark). Detection of a known polarity was nearly linear for SAW but very nonlinear for PRM. Randomly interleaving light and dark trials reduced performance and rendered it nonlinear for SAW, but had little effect for PRM. This occurred for both single-interval and 2AFC procedures. The whole pattern of results was well predicted by our Monte Carlo simulation of Pelli's model, with only two free parameters. SAW (highly practised) had very low uncertainty. PRM (with little prior practice) had much greater uncertainty, resulting in lower contrast sensitivity, nonlinear performance, and no effect of external (polarity) uncertainty. For SAW, identification was about √2 better than detection, implying statistically independent channels for stimuli of opposite polarity, rather than an opponent (light - dark) channel. These findings strongly suggest that noise and uncertainty, rather than sensory nonlinearity, limit visual detection.
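The uncertainty model lends itself to a short Monte Carlo sketch, assuming the standard max-of-M-channels formulation: the observer monitors M noisy channels, only one of which responds linearly to contrast. With M = 1 the simulated d' grows linearly with contrast; with many channels, weak signals are swamped by the noise maxima and sensitivity drops, reproducing the accelerating psychometric function. All parameters here are illustrative, not those fitted to observers SAW or PRM.

```python
import numpy as np

rng = np.random.default_rng(2)

def dprime(contrast, m_channels, n_trials=20000):
    """Monte Carlo d' for a max-of-M-channels observer (Pelli-style
    uncertainty model): the decision variable is the maximum of M noisy
    channels, only one of which carries the linear signal."""
    noise = rng.normal(size=(n_trials, m_channels)).max(axis=1)
    sig = rng.normal(size=(n_trials, m_channels))
    sig[:, 0] += contrast            # linear response in the signal channel
    sig = sig.max(axis=1)
    pooled_sd = np.sqrt((noise.var() + sig.var()) / 2)
    return (sig.mean() - noise.mean()) / pooled_sd

# With one channel, simulated d' is essentially equal to the contrast
# (linear); with 100 channels, d' for the same contrasts is strongly
# compressed, i.e. uncertainty lowers sensitivity to weak signals.
low_unc = [dprime(c, 1) for c in (0.5, 1.0)]
high_unc = [dprime(c, 100) for c in (0.5, 1.0)]
```

The two free parameters of the full model in the paper would correspond here to the channel gain and the number of monitored channels M.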

Relevance: 70.00%

Abstract:

Insulated gate bipolar transistor (IGBT) modules are important safety critical components in electrical power systems. Bond wire lift-off, a plastic deformation between wire bond and adjacent layers of a device caused by repeated power/thermal cycles, is the most common failure mechanism in IGBT modules. For the early detection and characterization of such failures, it is important to constantly detect or monitor the health state of IGBT modules, and the state of bond wires in particular. This paper introduces eddy current pulsed thermography (ECPT), a nondestructive evaluation technique, for the state detection and characterization of bond wire lift-off in IGBT modules. After the introduction of the experimental ECPT system, numerical simulation work is reported. The presented simulations are based on the 3-D electromagnetic-thermal coupling finite-element method and analyze transient temperature distribution within the bond wires. This paper illustrates the thermal patterns of bond wires using inductive heating with different wire statuses (lifted-off or well bonded) under two excitation conditions: nonuniform and uniform magnetic field excitations. Experimental results show that uniform excitation of healthy bonding wires, using a Helmholtz coil, produces the same eddy currents in each, while different eddy currents are seen in faulty wires. Both experimental and numerical results show that ECPT can be used for the detection and characterization of bond wires in power semiconductors through the analysis of the transient heating patterns of the wires. The main impact of this paper is that it is the first time electromagnetic induction thermography, so-called ECPT, has been employed on power/electronic devices. Because of its capability for contactless inspection of multiple wires in a single pass, it opens a wide field of investigation in power/electronic devices for failure detection, performance characterization, and health monitoring.

Relevance: 70.00%

Abstract:

With the rapid growth of the Internet, computer attacks are increasing at a fast pace and can easily cause millions of dollars in damage to an organization. Detecting these attacks is an important issue in computer security. There are many types of attacks, and they fall into four main categories: Denial of Service (DoS) attacks, Probe attacks, User to Root (U2R) attacks, and Remote to Local (R2L) attacks. Within these categories, DoS and Probe attacks show up continuously and with greater frequency in a short period of time when they attack systems. They differ from normal traffic data and can easily be separated from normal activities. In contrast, U2R and R2L attacks are embedded in the data portions of the packets and normally involve only a single connection. It is therefore difficult to achieve satisfactory detection accuracy for these two attack types. We focus on studying the ambiguity problem between normal activities and U2R/R2L attacks. The goal is to build a detection system that can accurately and quickly detect these two attacks. In this dissertation, we design a two-phase intrusion detection approach. In the first phase, a correlation-based feature selection algorithm is proposed to increase the speed of detection. Features with poor ability to predict attack signatures, and features inter-correlated with one or more other features, are considered redundant. Such features are removed so that only indispensable information about the original feature space remains. In the second phase, we develop an ensemble intrusion detection system to achieve accurate detection performance. The proposed method includes multiple feature-selecting intrusion detectors and a data mining intrusion detector. The former consist of a set of detectors, each of which uses a fuzzy clustering technique and belief theory to solve the ambiguity problem. The latter applies a data mining technique to automatically extract computer users' normal behavior from training network traffic data. The final decision is a combination of the outputs of the feature-selecting and data mining detectors. The experimental results indicate that our ensemble approach not only significantly reduces the detection time but also effectively detects U2R and R2L attacks that contain degrees of ambiguous information.
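The first-phase idea, correlation-based removal of redundant features, can be sketched as a simple greedy filter: keep features well correlated with the class label and drop those strongly correlated with an already-kept feature. The thresholds and toy features below are assumptions for illustration, not the dissertation's actual algorithm.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy traffic features: f0 predicts the label, f1 duplicates f0 (redundant),
# f2 is noise (poor predictor).
n = 500
label = rng.integers(0, 2, n).astype(float)
f0 = label + 0.1 * rng.normal(size=n)
f1 = 2.0 * f0 + 0.05 * rng.normal(size=n)   # inter-correlated with f0
f2 = rng.normal(size=n)                     # uninformative
X = np.column_stack([f0, f1, f2])

def select(X, y, min_relevance=0.3, max_redundancy=0.9):
    """Greedy correlation-based filter: keep features well correlated with
    the label and not strongly correlated with any already-kept feature."""
    kept = []
    relevance = [abs(np.corrcoef(X[:, j], y)[0, 1]) for j in range(X.shape[1])]
    for j in np.argsort(relevance)[::-1]:   # most relevant first
        if relevance[j] < min_relevance:
            continue
        if all(abs(np.corrcoef(X[:, j], X[:, k])[0, 1]) < max_redundancy
               for k in kept):
            kept.append(int(j))
    return sorted(kept)

selected = select(X, label)  # keeps exactly one of the twin features f0/f1
```

On this toy data the filter keeps a single feature: the noise feature fails the relevance test, and whichever of f0/f1 is examined second fails the redundancy test.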


Relevance: 70.00%

Abstract:

Current state of the art techniques for landmine detection in ground penetrating radar (GPR) utilize statistical methods to identify characteristics of a landmine response. This research makes use of 2-D slices of data in which subsurface landmine responses have hyperbolic shapes. Various methods from the field of visual image processing are adapted to the 2-D GPR data, producing superior landmine detection results. This research goes on to develop a physics-based GPR augmentation method motivated by current advances in visual object detection. This GPR specific augmentation is used to mitigate issues caused by insufficient training sets. This work shows that augmentation improves detection performance under training conditions that are normally very difficult. Finally, this work introduces the use of convolutional neural networks as a method to learn feature extraction parameters. These learned convolutional features outperform hand-designed features in GPR detection tasks. This work presents a number of methods, both borrowed from and motivated by the substantial work in visual image processing. The methods developed and presented in this work show an improvement in overall detection performance and introduce a method to improve the robustness of statistical classification.
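A physics-based augmentation of the kind described can be sketched from the point-target travel-time relation: the two-way travel time to a buried point, t(x) = sqrt(t0² + ((x − x0)/v)²), traces a hyperbola across antenna positions. The snippet below generates such a synthetic response patch; the parameters and the simple binary amplitude are illustrative assumptions, not the paper's augmentation procedure.

```python
import numpy as np

def hyperbola_patch(width=64, depth=64, x0=32, t0=12.0, v=1.5, amp=1.0):
    """Synthetic 2-D GPR response of a point target: the two-way travel
    time to a buried point traces a hyperbola across antenna positions."""
    patch = np.zeros((depth, width))
    x = np.arange(width)
    t = np.sqrt(t0 ** 2 + ((x - x0) / v) ** 2)   # travel-time hyperbola
    rows = np.clip(np.round(t).astype(int), 0, depth - 1)
    patch[rows, x] = amp                          # one return per trace
    return patch

patch = hyperbola_patch()   # apex of the hyperbola sits at (t0, x0)
```

Randomizing x0, t0, v, and amp would yield an endless supply of training patches, which is the spirit of augmenting an insufficient training set with physically plausible examples.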

Relevance: 70.00%

Abstract:

Finding rare events in multidimensional data is an important detection problem that has applications in many fields, such as risk estimation in the insurance industry, finance, flood prediction, medical diagnosis, quality assurance, security, or safety in transportation. The occurrence of such anomalies is so infrequent that there is usually not enough training data to learn an accurate statistical model of the anomaly class. In some cases, such events may have never been observed, so the only information that is available is a set of normal samples and an assumed pairwise similarity function. Such a metric may only be known up to a certain number of unspecified parameters, which would either need to be learned from training data or fixed by a domain expert. Sometimes, the anomalous condition may be formulated algebraically, such as a measure exceeding a predefined threshold, but nuisance variables may complicate the estimation of such a measure. Change detection methods used in time series analysis are not easily extendable to the multidimensional case, where discontinuities are not localized to a single point. On the other hand, in higher dimensions, data exhibits more complex interdependencies, and there is redundancy that could be exploited to adaptively model the normal data. In the first part of this dissertation, we review the theoretical framework for anomaly detection in images and previous anomaly detection work done in the context of crack detection and detection of anomalous components in railway tracks. In the second part, we propose new anomaly detection algorithms. The fact that curvilinear discontinuities in images are sparse with respect to the frame of shearlets allows us to pose this anomaly detection problem as basis pursuit optimization. Therefore, we pose the problem of detecting curvilinear anomalies in noisy textured images as a blind source separation problem under sparsity constraints, and propose an iterative shrinkage algorithm to solve it.
Taking advantage of the parallel nature of this algorithm, we describe how this method can be accelerated using graphics processing units (GPUs). We then propose a new method for finding defective components on railway tracks using cameras mounted on a train, describing how to extract features and use a combination of classifiers to solve this problem. Next, we scale anomaly detection to bigger datasets with complex interdependencies. We show that the anomaly detection problem fits naturally in the multitask learning framework: the first task consists of learning a compact representation of the good samples, while the second task consists of learning the anomaly detector. Using deep convolutional neural networks, we show that it is possible to train a deep model with a limited number of anomalous examples. In sequential detection problems, the presence of time-variant nuisance parameters affects the detection performance. In the last part of this dissertation, we present a method for adaptively estimating the threshold of sequential detectors using Extreme Value Theory within a Bayesian framework. Finally, conclusions on the results obtained are provided, followed by a discussion of possible future work.
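The EVT-based threshold adaptation in the last part can be sketched under a simple block-maxima assumption (the dissertation's Bayesian treatment is not reproduced here): fit a Gumbel distribution to block maxima of normal detector scores, then place the alarm threshold at a chosen per-block exceedance probability. The method-of-moments fit and all parameters below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(4)

# Detector scores under normal conditions (toy: standard normal noise).
scores = rng.normal(loc=0.0, scale=1.0, size=100000)

# Block maxima of the normal scores; by Fisher-Tippett their distribution
# is approximately Gumbel for this kind of tail.
block = 100
maxima = scores[: len(scores) // block * block].reshape(-1, block).max(axis=1)

# Method-of-moments Gumbel fit: scale from the std, location from the mean.
euler_gamma = 0.5772156649
beta = maxima.std() * np.sqrt(6) / np.pi
mu = maxima.mean() - euler_gamma * beta

# Threshold targeting a false-alarm probability p per block.
p = 0.01
threshold = mu - beta * np.log(-np.log(1 - p))

# Empirical per-block false-alarm rate under normal conditions.
false_alarm_rate = (maxima > threshold).mean()
```

Re-fitting mu and beta over a sliding window of recent blocks is one way such a threshold could track time-variant nuisance parameters.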

Relevance: 70.00%

Abstract:

In order to optimize frontal detection in sea surface temperature fields at 4 km resolution, a combined statistical and expert-based approach is applied to test different spatial smoothings of the data prior to the detection process. Fronts are usually detected at 1 km resolution using the histogram-based, single-image edge detection (SIED) algorithm developed by Cayula and Cornillon in 1992, with a standard preliminary smoothing using a median filter and a 3 × 3 pixel kernel. Here, detections are performed in three study regions (off Morocco, the Mozambique Channel, and north-western Australia) and across the Indian Ocean basin using the combination of multiple windows (CMW) method developed by Nieto, Demarcq and McClatchie in 2012, which improves on the original Cayula and Cornillon algorithm. Detections at 4 km and 1 km resolution are compared. Fronts are divided into two intensity classes (“weak” and “strong”) according to their thermal gradient. A preliminary smoothing is applied prior to the detection using different convolutions: three types of filters (median, average, and Gaussian) combined with four kernel sizes (3 × 3, 5 × 5, 7 × 7, and 9 × 9 pixels) and three detection window sizes (16 × 16, 24 × 24, and 32 × 32 pixels), to test the effect of these smoothing combinations on reducing the background noise of the data and therefore on improving the frontal detection. The performance of the combinations on 4 km data is evaluated using two criteria: detection efficiency and front length. We find that the optimal combination of preliminary smoothing parameters for enhancing detection efficiency and preserving front length includes a median filter, a 16 × 16 pixel window size, and a 5 × 5 pixel kernel for strong fronts and a 7 × 7 pixel kernel for weak fronts. Results show an improvement in detection performance (from the largest to the smallest window size) of 71% for strong fronts and 120% for weak fronts.
Despite the small window used (16 × 16 pixels), the length of the fronts is preserved relative to that found with 1 km data. This optimal preliminary smoothing and the CMW detection algorithm on 4 km sea surface temperature data are then used to describe the spatial distribution of the monthly frequencies of occurrence of both strong and weak fronts across the Indian Ocean basin. In general, strong fronts are observed in coastal areas, whereas weak fronts, with some seasonal exceptions, are mainly located in the open ocean. This study shows that adequate noise reduction through a preliminary smoothing of the data considerably improves the frontal detection efficiency as well as the overall quality of the results. Consequently, the use of 4 km data enables frontal detections similar to those from 1 km data (using a standard median 3 × 3 convolution) in terms of detectability, length, and location. This method, using 4 km data, is easily applicable to large regions or at the global scale, with far fewer constraints on data manipulation and processing time relative to 1 km data.
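The role of the preliminary smoothing can be illustrated on a toy field. The snippet below applies a hand-rolled k × k median filter and then a simple gradient threshold as a stand-in for the histogram-based SIED/CMW detection; the synthetic field, kernel size, and threshold are assumptions for the demo.

```python
import numpy as np

rng = np.random.default_rng(5)

# Toy SST field: a sharp zonal front at row 32 (20 °C vs 24 °C) plus noise.
sst = np.where(np.arange(64)[:, None] < 32, 20.0, 24.0) * np.ones((64, 64))
sst += 0.5 * rng.normal(size=sst.shape)

def median_smooth(field, k):
    """k x k median filter (edges cropped): the preliminary smoothing step."""
    h, w = field.shape
    out = np.empty((h - k + 1, w - k + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.median(field[i:i + k, j:j + k])
    return out

smooth = median_smooth(sst, 5)   # 5 x 5 kernel, as for strong fronts

# Gradient-magnitude front map (the real detection is histogram based;
# a fixed gradient threshold stands in for it here).
gy, gx = np.gradient(smooth)
front = np.hypot(gx, gy) > 1.0
```

After the median smoothing, the noise gradients stay well below the threshold while the front's gradient survives, so the detected front pixels concentrate in a narrow band around the true front row.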

Relevance: 60.00%

Abstract:

Surveillance systems such as object tracking and abandoned object detection systems typically rely on a single modality of colour video for their input. These systems work well in controlled conditions but often fail when low lighting, shadowing, smoke, dust or unstable backgrounds are present, or when the objects of interest are a similar colour to the background. Thermal images are not affected by lighting changes or shadowing, and are not overtly affected by smoke, dust or unstable backgrounds. However, thermal images lack colour information which makes distinguishing between different people or objects of interest within the same scene difficult.

By using modalities from both the visible and thermal infrared spectra, we are able to obtain more information from a scene and overcome the problems associated with using either modality individually. We evaluate four approaches for fusing visual and thermal images for use in a person tracking system (two early fusion methods, one mid fusion and one late fusion method), in order to determine the most appropriate method for fusing multiple modalities. We also evaluate two of these approaches for use in abandoned object detection, and propose an abandoned object detection routine that utilises multiple modalities. To aid in the tracking and fusion of the modalities we propose a modified condensation filter that can dynamically change the particle count and features used according to the needs of the system.

We compare tracking and abandoned object detection performance for the proposed fusion schemes and the visual and thermal domains on their own. Testing is conducted using the OTCBVS database to evaluate object tracking, and data captured in-house to evaluate the abandoned object detection. Our results show that significant improvement can be achieved, and that a middle fusion scheme is most effective.
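The fusion schemes compared here differ mainly in where the modalities are combined. A minimal sketch, with toy co-registered frames and a trivial stand-in for a per-modality detector (both assumptions, not the paper's tracker):

```python
import numpy as np

rng = np.random.default_rng(6)

# Toy co-registered frames: visible (3-channel) and thermal (1-channel).
visible = rng.random((48, 64, 3))
thermal = rng.random((48, 64, 1))

# Early fusion: stack the modalities into one 4-channel input before any
# detection is run, so a single detector sees both spectra at once.
early = np.concatenate([visible, thermal], axis=2)

def toy_detector(frame):
    """Stand-in for a per-modality detector: a per-pixel confidence map."""
    return frame.mean(axis=2)

# Late fusion: run a detector per modality, then combine the confidence
# maps, here by simple averaging.
late = 0.5 * (toy_detector(visible) + toy_detector(thermal))
```

Mid fusion, the scheme the paper finds most effective, would sit between these two: features are extracted per modality and combined before the final decision stage.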

Relevance: 60.00%

Abstract:

PURPOSE. To measure tear film surface quality in healthy and dry eye subjects using three noninvasive techniques of tear film quality assessment, and to establish the ability of these noninvasive techniques to predict dry eye. METHODS. Thirty-four subjects participated in the study and were classified as dry eye or normal based on standard clinical assessments. Three non-invasive techniques were applied for measurement of tear film surface quality: dynamic-area high-speed videokeratoscopy (HSV), wavefront sensing (DWS), and lateral shearing interferometry (LSI). The measurements were performed in both natural blinking conditions (NBC) and suppressed blinking conditions (SBC). RESULTS. In order to investigate the capability of each method to discriminate dry eye subjects from normal subjects, the receiver operating characteristic (ROC) curve was calculated and the area under the curve (AUC) extracted. The best result was obtained for the LSI technique (AUC=0.80 in SBC and AUC=0.73 in NBC), followed by HSV (AUC=0.72 in SBC and AUC=0.71 in NBC). The best result for DWS was AUC=0.64, obtained for changes in vertical coma in suppressed blinking conditions, while for normal blinking conditions the results were poorer. CONCLUSIONS. Non-invasive techniques of tear film surface assessment can be used for predicting dry eye, and this can be achieved in natural blinking as well as suppressed blinking conditions. In this study, LSI showed the best detection performance, closely followed by the dynamic-area HSV. The wavefront sensing technique was less powerful, particularly in natural blinking conditions.
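The ROC/AUC evaluation used here has a compact closed form via the Mann-Whitney statistic: the AUC equals the probability that a randomly chosen dry eye subject outscores a randomly chosen normal subject, with ties counting half. A sketch with hypothetical scores (the numbers below are invented, not the study's data):

```python
import numpy as np

# Hypothetical tear-film instability scores: higher = more likely dry eye.
dry = np.array([0.9, 0.8, 0.75, 0.6, 0.55])
normal = np.array([0.7, 0.5, 0.4, 0.35, 0.3, 0.2])

def auc(pos, neg):
    """Area under the ROC curve via the Mann-Whitney statistic: the
    probability that a random positive outscores a random negative
    (ties count half)."""
    diff = pos[:, None] - neg[None, :]        # all positive/negative pairs
    return ((diff > 0).sum() + 0.5 * (diff == 0).sum()) / diff.size

value = auc(dry, normal)   # 28 of 30 pairs ranked correctly -> 28/30 ≈ 0.933
```

An AUC of 0.5 would mean chance-level discrimination and 1.0 perfect separation, which is the scale on which the reported values (0.64 to 0.80) should be read.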

Relevance: 60.00%

Abstract:

In a clinical setting, pain is reported either through patient self-report or via an observer. Such measures are problematic as they are: 1) subjective, and 2) give no specific timing information. Coding pain as a series of facial action units (AUs) can avoid these issues, as it can be used to gain an objective measure of pain on a frame-by-frame basis. Using video data from patients with shoulder injuries, we describe in this paper an active appearance model (AAM)-based system that can automatically detect the frames of video in which a patient is in pain. This pain data set highlights the many challenges associated with spontaneous emotion detection, particularly those of expression and head movement due to the patient's reaction to pain. We show that the AAM can deal with these movements and achieves significant improvements in both AU and pain detection performance compared with current state-of-the-art approaches, which utilize similarity-normalized appearance features only.