84 results for Gabor wavelet filters
Abstract:
The internet by its very nature challenges an individual’s notions of propriety, moral acuity and social correctness. A tension will always exist between the censorship of obscene and sensitive information and the freedom to publish and/or access such information. Freedom of expression and communication on the internet is not a static concept: ‘Its continual regeneration is the product of particular combinations of political, legal, cultural and philosophical conditions’.
Abstract:
Occlusion is a major challenge for facial expression recognition (FER) in real-world situations. Previous FER efforts to address occlusion suffer from loss of appearance features and are largely limited to a few occlusion types and a single testing strategy. This paper presents a robust approach for FER in occluded images that addresses these issues. A set of Gabor-based templates is extracted from images in the gallery using a Monte Carlo algorithm. These templates are converted into distance features using template matching. The resulting feature vectors are robust to occlusion. Occluded eye and mouth regions and randomly placed occlusion patches are used for testing. Two testing strategies analyze the effects of these occlusions on the overall recognition performance as well as on each facial expression. Experimental results on the Cohn-Kanade database confirm the high robustness of our approach and provide useful insights into the effects of occlusion on FER. Performance is also compared with previous approaches.
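The pipeline this abstract describes (a Gabor filter bank, templates extracted from gallery responses, and template matching converted into distance features) can be sketched roughly as follows. This is a minimal NumPy illustration under assumed parameters, not the authors' implementation: the kernel sizes, the bank of orientations, and the Euclidean distance measure are all assumptions for illustration.

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def gabor_kernel(size=9, sigma=2.0, theta=0.0, lam=4.0):
    """Real part of a Gabor filter: a Gaussian-windowed cosine wave."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    return np.exp(-(x**2 + y**2) / (2 * sigma**2)) * np.cos(2 * np.pi * xr / lam)

def filter_bank_response(img, kernels):
    """Valid-mode convolution of the image with each kernel, stacked."""
    out = []
    for k in kernels:
        win = sliding_window_view(img, k.shape)
        out.append(np.einsum("ijkl,kl->ij", win, k[::-1, ::-1]))
    return np.stack(out)

def template_distance(resp, template, top_left):
    """Euclidean distance between a stored Gabor template and the response
    patch at a given location; a small distance means a strong match."""
    r, c = top_left
    h, w = template.shape[1], template.shape[2]
    patch = resp[:, r:r + h, c:c + w]
    return float(np.linalg.norm(patch - template))
```

Because the features are distances to local templates rather than raw appearance, an occlusion patch perturbs only the templates it overlaps, which is the intuition behind the robustness claim.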
Abstract:
This paper demonstrates the capabilities of wavelet transform (WT) for analyzing important features related to bottleneck activations and traffic oscillations in congested traffic in a systematic manner. In particular, the analysis of loop detector data from a freeway shows that the use of wavelet-based energy can effectively identify the location of an active bottleneck, the arrival time of the resulting queue at each upstream sensor location, and the start and end of a transition during the onset of a queue. Vehicle trajectories were also analyzed using WT and our analysis shows that the wavelet-based energies of individual vehicles can effectively detect the origins of deceleration waves and shed light on possible triggers (e.g., lane-changing). The spatiotemporal propagations of oscillations identified by tracing wavelet-based energy peaks from vehicle to vehicle enable analysis of oscillation amplitude, duration and intensity.
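The core idea above, that wavelet-based energy peaks localize abrupt changes such as a queue arriving at a detector, can be illustrated with a toy example. This sketch assumes a Mexican hat (Ricker) wavelet, a synthetic speed series, and illustrative scales and normalization; none of these are the paper's actual choices.

```python
import numpy as np

def ricker(points, a):
    """Mexican hat (Ricker) wavelet: second derivative of a Gaussian."""
    t = np.arange(points) - (points - 1) / 2.0
    x = t / a
    return (1 - x**2) * np.exp(-x**2 / 2)

def wavelet_energy(signal, scales=(2, 4, 8)):
    """Sum of squared wavelet coefficients across scales at each time step.
    Energy peaks where the signal changes abruptly (e.g. queue arrival)."""
    energy = np.zeros(len(signal))
    for a in scales:
        w = ricker(int(10 * a), a)
        coeff = np.convolve(signal - signal.mean(), w, mode="same") / np.sqrt(a)
        energy += coeff**2
    return energy
```

Applied to a detector's speed time series, the argmax of this energy gives a candidate queue-arrival time; repeating the computation at successive upstream detectors traces the queue's propagation.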
Abstract:
In this paper we identify the origins of stop-and-go (or slow-and-go) driving and measure microscopic features of their propagation by analyzing vehicle trajectories via the wavelet transform. Based on 53 oscillation cases analyzed, we find that oscillations can be triggered by either lane-changing maneuvers (LCMs) or car-following (CF) behavior. LCMs were predominantly responsible for oscillation formation in the absence of considerable horizontal or vertical curves, whereas oscillations formed spontaneously near roadside work on an uphill segment. Regardless of the trigger, the features of oscillation propagation were similar in terms of propagation speed, oscillation duration, and amplitude. All observed cases initially exhibited a precursor phase, in which slow-and-go motions were localized. Some of them eventually transitioned into a well-developed phase, in which oscillations propagated upstream in the queue. LCMs were primarily responsible for the transition, although some transitions occurred without LCMs. Our findings also suggest that an oscillation has a regressive effect on car-following behavior: a deceleration wave of an oscillation causes a timid driver (one with larger response time and minimum spacing) to become less timid, and an aggressive driver to become less aggressive, although this change may be short-lived. An extended framework of Newell's CF model is able to describe the regressive effects with two additional parameters with reasonable accuracy, as verified using vehicle trajectory data.
Abstract:
Circuit breaker restrikes are unwanted occurrences that can ultimately lead to breaker failure. Before 2008, there was little evidence in the literature of monitoring techniques based on the measurement and interpretation of restrikes produced during switching of capacitor banks and shunt reactor banks. In 2008, a non-intrusive radiometric restrike measurement method, as well as a restrike hardware detection algorithm, was developed. The limitations of the radiometric measurement method are a band-limited frequency response and limited amplitude determination. Existing detection methods and algorithms required the use of wide-bandwidth current transformers and voltage dividers. A novel non-intrusive restrike diagnostic algorithm using ATP (Alternative Transient Program) and wavelet transforms is proposed. Wavelet transforms are widely used in signal processing; here the diagnosis is divided into two tests, i.e. restrike detection and energy-level assessment based on deteriorated waveforms for different types of restrike. A 'db5' wavelet was selected for the tests as it gave a 97% correct diagnostic rate when evaluated against a database of diagnostic signatures. The method was also tested using restrike waveforms simulated under different network parameters, which gave 92% correct diagnostic responses. The diagnostic technique and methodology developed in this research can be applied to any power monitoring system, with slight modification, for restrike detection.
Abstract:
This paper establishes practical stability results for an important range of approximate discrete-time filtering problems involving mismatch between the true system and the approximating filter model. Under a local consistency assumption, the practical stability established is in the sense of an asymptotic bound on the amount of bias introduced by the model approximation. Significantly, these practical stability results do not require the approximating model to be of the same model type as the true system. Our analysis applies to a wide range of estimation problems and justifies the common practice of approximating intractable infinite-dimensional nonlinear filters by simpler, computationally tractable filters.
Abstract:
Computer vision is an attractive solution for uninhabited aerial vehicle (UAV) collision avoidance, due to the low weight, size and power requirements of the hardware. A two-stage paradigm has emerged in the literature for detection and tracking of dim targets in images, comprising spatial preprocessing followed by temporal filtering. In this paper, we investigate a hidden Markov model (HMM) based temporal filtering approach. Specifically, we propose an adaptive HMM filter, in which the variance of the model parameters is refined as the quality of the target estimate improves. Filters with high variance (fat filters) are used for target acquisition, and filters with low variance (thin filters) are used for target tracking. The adaptive filter is tested in simulation and with real data (video of a collision-course aircraft). Our test results demonstrate that our adaptive filtering approach has improved tracking performance, and provides an estimate of target heading not present in previous HMM filtering approaches.
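The fat/thin filter idea can be sketched in one dimension: run an HMM forward filter over candidate target positions, and narrow the transition model's variance once the posterior concentrates. This is a simplified NumPy sketch, not the paper's filter; the Gaussian transition kernel, the confidence threshold, and the pixel-likelihood observation model are all assumptions.

```python
import numpy as np

def gaussian_kernel(n, sigma):
    """Row-stochastic transition matrix: the target drifts with spread sigma."""
    idx = np.arange(n)
    T = np.exp(-0.5 * ((idx[None, :] - idx[:, None]) / sigma) ** 2)
    return T / T.sum(axis=1, keepdims=True)

def adaptive_hmm_filter(frames, sigma_fat=5.0, sigma_thin=1.0, conf=0.5):
    """Forward HMM filter over 1-D pixel positions. Uses a high-variance
    ('fat') transition model for acquisition and switches to a low-variance
    ('thin') model once the posterior's peak exceeds `conf` (a hypothetical
    switching rule)."""
    n = frames.shape[1]
    p = np.full(n, 1.0 / n)                   # uniform prior over position
    estimates = []
    for obs in frames:                        # obs: per-pixel likelihoods
        sigma = sigma_thin if p.max() > conf else sigma_fat
        p = gaussian_kernel(n, sigma).T @ p   # predict step
        p = p * obs                           # measurement update
        p /= p.sum()
        estimates.append(int(p.argmax()))
    return estimates
```

The switch to the thin filter sharpens the prediction once the target is acquired, which is the mechanism behind the improved tracking performance the abstract reports.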
Abstract:
Introduction: An observer, looking sideways from a moving vehicle while wearing a neutral density filter over one eye, can have a distorted perception of speed, known as the Enright phenomenon. The purpose of this study was to determine how the Enright phenomenon influences driving behaviour. Methods: A geometric model of the Enright phenomenon was developed. Ten young, visually normal participants (mean age = 25.4 years) were tested on a straight section of a closed driving circuit and instructed to look out of the right side of the vehicle and drive at either 40 km/h or 60 km/h under the following binocular viewing conditions: with a 0.9 ND filter over the left eye (leading eye); a 0.9 ND filter over the right eye (trailing eye); 0.9 ND filters over both eyes; and with no filters over either eye. The order of filter conditions was randomised and the speed driven was recorded for each condition. Results: Speed judgments did not differ significantly between the two baseline conditions (no filters and both eyes filtered) for either speed tested. For the baseline conditions, when subjects were asked to drive at 60 km/h they matched this speed well (61 ± 10.2 km/h) but drove significantly faster than requested (51.6 ± 9.4 km/h) when asked to drive at 40 km/h. Subjects significantly exceeded baseline speeds, by 8.7 ± 5.0 km/h, when the trailing eye was filtered, and travelled slower than baseline speeds, by 3.7 ± 4.6 km/h, when the leading eye was filtered. Conclusions: This is the first quantitative study demonstrating how the Enright effect can influence perceptions of driving speed, and it demonstrates that monocular filtering of an eye can significantly impact driving speeds, albeit to a lesser extent than predicted by geometric models of the phenomenon.
Abstract:
The low resolution of images has been one of the major limitations in recognising humans from a distance using their biometric traits, such as face and iris. Super-resolution has been employed to improve resolution and recognition performance simultaneously; however, the majority of the techniques employed operate in the pixel domain, such that the biometric feature vectors are extracted from a super-resolved input image. Feature-domain super-resolution has been proposed for face and iris, and has been shown to further improve recognition performance by super-resolving directly the features used for recognition. However, current feature-domain super-resolution approaches are limited to simple linear features such as Principal Component Analysis (PCA) and Linear Discriminant Analysis (LDA), which are not the most discriminant features for biometrics. Gabor-based features have been shown to be among the most discriminant features for biometrics, including face and iris. This paper proposes a framework for conducting super-resolution in the non-linear Gabor feature domain to further improve the recognition performance of biometric systems. Experiments have confirmed the validity of the proposed approach, demonstrating superior performance to existing linear approaches for both face and iris biometrics.
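For context, the Gabor features the abstract refers to are typically magnitude responses of a bank of even/odd Gabor filters over several scales and orientations, flattened into one vector and compared by inner product. The sketch below shows only this feature-extraction step (not the feature-domain super-resolution the paper develops); the bank parameters, downsampling stride, and normalization are assumptions.

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def gabor_pair(size, sigma, theta, lam):
    """Even (cosine) and odd (sine) Gabor kernels for one scale/orientation."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    g = np.exp(-(x**2 + y**2) / (2 * sigma**2))
    return g * np.cos(2 * np.pi * xr / lam), g * np.sin(2 * np.pi * xr / lam)

def gabor_features(img, thetas=(0, np.pi / 4, np.pi / 2, 3 * np.pi / 4),
                   lams=(4.0, 8.0), size=9, stride=4):
    """Downsampled Gabor magnitude responses concatenated into a unit vector."""
    feats = []
    for lam in lams:
        for th in thetas:
            ke, ko = gabor_pair(size, lam / 2.0, th, lam)
            win = sliding_window_view(img, ke.shape)
            re = np.einsum("ijkl,kl->ij", win, ke)
            im = np.einsum("ijkl,kl->ij", win, ko)
            mag = np.hypot(re, im)[::stride, ::stride]  # downsample magnitudes
            feats.append(mag.ravel())
    v = np.concatenate(feats)
    return v / (np.linalg.norm(v) + 1e-12)
```

Because the magnitude is a non-linear function of the filter responses, super-resolving in this domain cannot reuse the closed-form machinery of the linear PCA/LDA case, which is the gap the paper targets.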
Abstract:
Accurate and detailed road models play an important role in a number of geospatial applications, such as infrastructure planning, traffic monitoring, and driver assistance systems. In this thesis, an integrated approach for the automatic extraction of precise road features from high resolution aerial images and LiDAR point clouds is presented. A framework of road information modeling has been proposed, for rural and urban scenarios respectively, and an integrated system has been developed to deal with road feature extraction using image and LiDAR analysis. For road extraction in rural regions, a hierarchical image analysis is first performed to maximize the exploitation of road characteristics at different resolutions. The rough locations and directions of roads are provided by the road centerlines detected in low resolution images, both of which can be further employed to facilitate road information generation in high resolution images. The histogram thresholding method is then chosen to classify road details in high resolution images, where color space transformation is used for data preparation. After the road surface detection, anisotropic Gaussian and Gabor filters are employed to enhance road pavement markings while suppressing other ground objects, such as vegetation and houses. Afterwards, pavement markings are obtained from the filtered image using Otsu's clustering method. The final road model is generated by superimposing the lane markings on the road surfaces, where the digital terrain model (DTM) produced from LiDAR data can also be combined to obtain the 3D road model. As the extraction of roads in urban areas is greatly affected by buildings, shadows, vehicles, and parking lots, we combine high resolution aerial images and dense LiDAR data to fully exploit the precise spectral and horizontal spatial resolution of aerial images and the accurate vertical information provided by airborne LiDAR.
Object-oriented image analysis methods are employed to perform feature classification and road detection in aerial images. In this process, we first utilize an adaptive mean shift (MS) segmentation algorithm to segment the original images into meaningful object-oriented clusters. Then the support vector machine (SVM) algorithm is applied to the MS-segmented image to extract road objects. The road surface detected in LiDAR intensity images is taken as a mask to remove the effects of shadows and trees. In addition, the normalized DSM (nDSM) obtained from LiDAR is employed to filter out other above-ground objects, such as buildings and vehicles. The proposed road extraction approaches are tested using rural and urban datasets respectively. The rural road extraction method is evaluated using pan-sharpened aerial images of the Bruce Highway, Gympie, Queensland. The road extraction algorithm for urban regions is tested using the Bundaberg datasets, which combine aerial imagery and LiDAR data. Quantitative evaluation of the extracted road information has been carried out for both datasets. The experiments and evaluation results using the Gympie datasets show that more than 96% of the road surfaces and over 90% of the lane markings are accurately reconstructed, and the false alarm rates for road surfaces and lane markings are below 3% and 2% respectively. For the urban test sites of Bundaberg, more than 93% of the road surface is correctly reconstructed, and the mis-detection rate is below 10%.
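Of the steps in this pipeline, Otsu's method (used above to pull pavement markings out of the Gabor-filtered image) is compact enough to sketch. The version below is a standard textbook implementation in NumPy, assuming intensities normalized to [0, 1]; the bin count is an arbitrary choice, not the thesis's.

```python
import numpy as np

def otsu_threshold(img, bins=256):
    """Otsu's method: pick the gray level that maximizes between-class
    variance, separating bright markings from the darker road surface."""
    hist, edges = np.histogram(img, bins=bins, range=(0.0, 1.0))
    p = hist / hist.sum()
    levels = (edges[:-1] + edges[1:]) / 2          # bin centers
    w0 = np.cumsum(p)                              # class-0 (dark) weight
    mu = np.cumsum(p * levels)                     # cumulative mean
    mu_t = mu[-1]                                  # global mean
    with np.errstate(invalid="ignore", divide="ignore"):
        sigma_b = (mu_t * w0 - mu) ** 2 / (w0 * (1 - w0))
    sigma_b = np.nan_to_num(sigma_b)               # endpoints: empty class
    return float(levels[np.argmax(sigma_b)])
```

On a filtered image where markings are enhanced and clutter suppressed, the intensity histogram is close to bimodal, which is exactly the regime where Otsu's criterion works well.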
Abstract:
Serving as a powerful tool for extracting localized variations in non-stationary signals, wavelet transforms (WTs) have been introduced into traffic engineering applications; however, these applications lack some important theoretical fundamentals. In particular, little guidance is available on selecting an appropriate WT across potential transport applications. The research described in this paper contributes uniquely to the literature by first describing a numerical experiment that demonstrates the shortcomings of commonly used data processing techniques in traffic engineering (i.e., averaging, moving averaging, second-order differencing, oblique cumulative curves, and the short-time Fourier transform). It then mathematically describes the WT's ability to detect singularities in traffic data. Next, the selection of a suitable WT for a particular research topic in traffic engineering is discussed in detail by objectively and quantitatively comparing candidate wavelets' performance in a numerical experiment. Finally, based on several case studies using both loop detector data and vehicle trajectories, it is shown that selecting a suitable wavelet largely depends on the specific research topic, and that the Mexican hat wavelet generally gives satisfactory performance in detecting singularities in traffic and vehicular data.
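The kind of quantitative wavelet comparison described above can be mimicked on a toy signal: plant a singularity at a known index, run a single-scale transform with each candidate wavelet, and measure how far the peak response lands from the true location. This sketch assumes two candidate wavelets (Mexican hat and Haar), a unit-step singularity, and arbitrary scales; it is an illustration of the comparison methodology, not the paper's experiment.

```python
import numpy as np

def cwt_row(signal, wavelet, a):
    """Single-scale wavelet transform via convolution with a scaled wavelet."""
    n = int(10 * a)
    t = (np.arange(n) - (n - 1) / 2.0) / a
    w = wavelet(t) / np.sqrt(a)
    return np.convolve(signal, w, mode="same")

def mexican_hat(t):
    return (1 - t**2) * np.exp(-t**2 / 2)

def haar(t):
    return np.where(t < 0, 1.0, -1.0) * (np.abs(t) <= 1)

def localization_error(signal, wavelet, true_idx, scales=(2, 4, 8)):
    """Worst-case distance between the peak |coefficient| and the known
    singularity, across scales -- one possible comparison metric."""
    errs = [abs(int(np.argmax(np.abs(cwt_row(signal, wavelet, a)))) - true_idx)
            for a in scales]
    return max(errs)
```

Repeating this over singularity types (steps, ramps, spikes) and noise levels gives the sort of objective head-to-head comparison the paper uses to argue for the Mexican hat wavelet.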
Abstract:
Hybrid system representations have been exploited in a number of challenging modelling situations, including situations in which the original nonlinear dynamics are too complex (or too imprecisely known) to be directly filtered. Unfortunately, the question of how best to design suitable hybrid system models has not yet been fully addressed, particularly in situations involving model uncertainty. This paper proposes a novel joint state-measurement relative entropy rate based approach for the design of hybrid system filters in the presence of (parameterised) model uncertainty. We also present a design approach suitable for suboptimal hybrid system filters. The benefits of our proposed approaches are illustrated through design examples and simulation studies.