914 results for CENTERBAND-ONLY DETECTION
Abstract:
A blind nonlinear interference cancellation receiver for code-division multiple-access (CDMA) based communication systems operating over Rayleigh flat-fading channels is proposed. The receiver, which assumes knowledge of the signature waveforms of all the users, is implemented in an asynchronous CDMA environment. Unlike the conventional MMSE receiver, the proposed blind ICA multiuser detector is shown to be robust without training sequences, requiring only knowledge of the signature waveforms, and achieves nearly the same performance as the conventional training-based MMSE receiver. Several comparisons and experiments, based on examining BER performance in AWGN and Rayleigh fading, are performed to verify the validity of the proposed blind ICA multiuser detector.
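Note on the core idea: treating the superimposed user signals as statistically independent sources lets ICA recover them without a training sequence. The sketch below is not the paper's asynchronous Rayleigh-fading receiver; it is a toy synchronous CDMA model with invented signature sequences and AWGN, separated with scikit-learn's FastICA, just to illustrate the blind-separation principle.

```python
# Minimal sketch: blind separation of CDMA user symbols with FastICA.
# Toy synchronous model with invented signatures and AWGN; not the paper's
# asynchronous Rayleigh-fading receiver.
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
n_users, spread, n_symbols = 3, 8, 2000

signatures = rng.choice([-1.0, 1.0], size=(n_users, spread))   # chip sequences
symbols = rng.choice([-1.0, 1.0], size=(n_users, n_symbols))   # BPSK streams

# Received chips per symbol interval: each chip position acts as a "sensor".
chips = signatures.T @ symbols + 0.1 * rng.standard_normal((spread, n_symbols))

ica = FastICA(n_components=n_users, random_state=0)
estimated = ica.fit_transform(chips.T).T                       # (n_users, n_symbols)

# Resolve ICA's sign/permutation ambiguity by correlating with the true symbols.
for est in estimated:
    corr = symbols @ est / n_symbols
    user = int(np.argmax(np.abs(corr)))
    agreement = np.mean(np.sign(corr[user]) * np.sign(est) == symbols[user])
    print(f"recovered user {user}: symbol agreement {agreement:.3f}")
```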
Abstract:
In various signal-channel-estimation problems, the channel being estimated may be well approximated by a discrete finite impulse response (FIR) model with sparsely separated active or nonzero taps. A common approach to estimating such channels involves a discrete normalized least-mean-square (NLMS) adaptive FIR filter, every tap of which is adapted at each sample interval. Such an approach suffers from slow convergence rates and poor tracking when the required FIR filter is "long." Recently, NLMS-based algorithms have been proposed that employ least-squares-based structural detection techniques to exploit possible sparse channel structure and subsequently provide improved estimation performance. However, these algorithms perform poorly when there is a large dynamic range amongst the active taps. In this paper, we propose two modifications to the previous algorithms, which essentially remove this limitation. The modifications also significantly improve the applicability of the detection technique to structurally time varying channels. Importantly, for sparse channels, the computational cost of the newly proposed detection-guided NLMS estimator is only marginally greater than that of the standard NLMS estimator. Simulations demonstrate the favourable performance of the newly proposed algorithm. © 2006 IEEE.
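The structure described above, standard NLMS adaptation restricted to a detected set of active taps, can be sketched as follows. The detection rule here (a simple magnitude threshold applied periodically) is a placeholder and not the least-squares detection technique of the paper; the channel, step size and threshold are invented.

```python
# Minimal sketch: NLMS channel estimation with a crude tap-activity detection
# step. The detection rule (periodic magnitude threshold) is a placeholder,
# not the paper's least-squares detection technique.
import numpy as np

rng = np.random.default_rng(1)
N, n_taps = 20000, 100
true_h = np.zeros(n_taps)
true_h[[3, 40, 87]] = [1.0, -0.5, 0.02]          # sparse, wide dynamic range

x = rng.standard_normal(N)                        # input sequence
d = np.convolve(x, true_h)[:N] + 0.01 * rng.standard_normal(N)

w = np.zeros(n_taps)                              # adaptive FIR estimate
active = np.ones(n_taps, dtype=bool)              # taps allowed to adapt
mu, eps = 0.5, 1e-6

for n in range(n_taps, N):
    u = x[n - n_taps + 1:n + 1][::-1]             # regressor, most recent first
    e = d[n] - w @ u                              # a priori error
    w[active] += mu * e * u[active] / (u[active] @ u[active] + eps)

    if n % 2000 == 0:                             # periodic re-detection (heuristic)
        active = np.abs(w) > 0.01 * np.max(np.abs(w))
        if not active.any():                      # never freeze every tap
            active[:] = True

print("detected active taps:", np.flatnonzero(active))
print("true active taps:    ", np.flatnonzero(true_h))
```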
Abstract:
Previous research in visual search indicates that animal fear-relevant deviants (snakes/spiders) are found faster among non-fear-relevant backgrounds (flowers/mushrooms) than vice versa. Moreover, deviant absence was indicated faster among snakes/spiders, and detection time for flower/mushroom deviants, but not for snake/spider deviants, increased in larger arrays. The current research indicates that the latter two results do not reflect fear-relevance but are found only with flower/mushroom controls. These findings may reflect factors such as background homogeneity, deviant homogeneity, or background-deviant similarity. The current research removes contradictions between previous studies that used animal and social fear-relevant stimuli and indicates that apparent search advantages for fear-relevant deviants are likely to reflect delayed attentional disengagement from fear-relevance on control trials.
Abstract:
Aims: To elucidate whether a dominant uncultured clostridial (Clostridium thermocellum-like) species in an environmental sample (landfill leachate) possesses an autoinducing peptide (AIP) quorum-sensing (QS) gene, although it may not be functional. Methods and Results: A modified AIP accessory gene regulator (agr)C PCR protocol was performed on DNA extracted from a landfill leachate sample (also characterized by 16S rRNA gene cloning), and the PCR products were cloned, sequenced and phylogenetically analysed. It appeared that two agrC gene phylotypes existed, most closely related to the C. thermocellum agrC gene and differing by only 1 bp. Conclusions: It is possible to specifically identify and characterize the agrC AIP QS gene from uncultured Firmicutes (C. thermocellum-like) bacteria derived from an environmental (landfill leachate) sample. Significance and Impact of the Study: This is the first successful attempt at identifying AIP QS genes from a cellulolytic environment (landfill). The agrC gene was identified as being most closely related to the C. thermocellum agrC gene, the same bacterium identified as dominant, according to 16S rRNA gene cloning and subsequent fluorescence in situ hybridization analyses, in the same biomass.
Abstract:
A reliable perception of the real world is a key feature for an autonomous vehicle and for Advanced Driver Assistance Systems (ADAS). Obstacle detection (OD) is one of the main components for the correct reconstruction of the dynamic world. Historical approaches based on stereo vision and other 3D perception technologies (e.g. LIDAR) have been adapted first to ADAS and then to autonomous ground vehicles, providing excellent results. Obstacle detection is a very broad field, and a great deal of work has been devoted to it in recent years. Academic research has clearly established the essential role of these systems in realizing active safety systems for accident prevention, also reflecting the innovative systems introduced by industry. These systems need to accurately assess situational criticalities and simultaneously assess the driver's awareness of them; this requires obstacle detection algorithms that are reliable and accurate, providing real-time output, a stable and robust representation of the environment, and an estimation independent of lighting and weather conditions. Initial systems relied on a single exteroceptive sensor (e.g. radar or laser for ACC and camera for LDW) in addition to proprioceptive sensors such as wheel speed and yaw rate sensors. Current systems, however, such as ACC operating over the entire speed range or autonomous braking for collision avoidance, require the use of multiple sensors, since individually they cannot meet these requirements. This has led the community to move towards combinations of sensors in order to exploit the benefits of each. Pedestrian and vehicle detection is one of the major thrusts in situational criticality assessment and remains an active area of research. ADAS are the most prominent use case of pedestrian and vehicle detection. Vehicles should be equipped with sensing capabilities able to detect and act on objects in dangerous situations where the driver would not be able to avoid a collision. A full ADAS or autonomous vehicle would, with regard to pedestrians and vehicles, include not only detection but also tracking, orientation, intent analysis, and collision prediction. The system described here detects obstacles using a probabilistic occupancy grid built from a multi-resolution disparity map. Obstacle classification is based on an AdaBoost SoftCascade trained on Aggregate Channel Features. A final stage of tracking and fusion guarantees stability and robustness of the result.
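As an illustration of one step of such a pipeline, the sketch below projects a disparity map into a ground-plane occupancy grid. The camera parameters, grid resolution and evidence-to-probability mapping are invented, and the classification (ACF/AdaBoost SoftCascade), multi-resolution disparity and tracking stages of the system described above are not included.

```python
# Minimal sketch: accumulate a disparity map into a ground-plane occupancy grid.
# Stereo parameters, grid size and the evidence-to-probability squashing are
# invented; classification, multi-resolution disparity and tracking are omitted.
import numpy as np

focal_px, baseline_m, cx = 700.0, 0.25, 320.0     # assumed stereo rig
cell_m, grid_x, grid_z = 0.2, 100, 150            # 20 m wide x 30 m deep grid

rng = np.random.default_rng(2)
disparity = rng.uniform(1.0, 64.0, size=(480, 640))   # stand-in disparity map

v, u = np.nonzero(disparity > 0.5)                # pixels with valid disparity
d = disparity[v, u]
Z = focal_px * baseline_m / d                     # depth (m)
X = (u - cx) * Z / focal_px                       # lateral offset (m)

ix = np.int32(X / cell_m + grid_x / 2)            # grid column (camera-centred)
iz = np.int32(Z / cell_m)                         # grid row (distance ahead)
ok = (ix >= 0) & (ix < grid_x) & (iz >= 0) & (iz < grid_z)

grid = np.zeros((grid_z, grid_x))
np.add.at(grid, (iz[ok], ix[ok]), 1.0)            # accumulate evidence counts

occupancy = 1.0 - np.exp(-grid / 50.0)            # squash counts to (0, 1)
print("cells flagged as likely occupied:", int((occupancy > 0.5).sum()))
```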
Abstract:
Feature detection is a crucial stage of visual processing. In previous feature-marking experiments we found that peaks in the 3rd derivative of the luminance profile can signify edges where there are no 1st derivative peaks nor 2nd derivative zero-crossings (Wallis and Georgeson). 'Mach edges' (the edges of Mach bands) were nicely predicted by a new nonlinear model based on 3rd derivative filtering. As a critical test of the model, we now use a new class of stimuli, formed by adding a linear luminance ramp to the blurred triangle waves used previously. The ramp has no effect on the second or higher derivatives, but the nonlinear model predicts a shift from seeing two edges to seeing only one edge as the added ramp gradient increases. In experiment 1, subjects judged whether one or two edges were visible on each trial. In experiment 2, subjects used a cursor to mark perceived edges and bars. The position and polarity of the marked edges were close to model predictions. Both experiments produced the predicted shift from two to one Mach edge, but the shift was less complete than predicted. We conclude that the model is a useful predictor of edge perception, but needs some modification.
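A minimal numerical sketch of the stimulus property stated above (a linear luminance ramp changes the 1st derivative but leaves the 2nd and 3rd derivatives untouched) is given below; it is not the authors' nonlinear edge model, and the waveform parameters are invented.

```python
# Minimal sketch: a blurred triangle wave plus a linear ramp, and its first
# three derivatives. It only illustrates the stimulus property stated in the
# abstract (the ramp alters the 1st derivative but not the 2nd or 3rd); it is
# not the authors' nonlinear edge model.
import numpy as np
from scipy.ndimage import gaussian_filter1d

x = np.linspace(0.0, 4.0, 4001)                 # spatial axis (arbitrary units)
triangle = 1.0 - 2.0 * np.abs((x % 1.0) - 0.5)  # periodic triangle wave
blurred = gaussian_filter1d(triangle, sigma=40) # blur to smooth the corners

for ramp_gradient in (0.0, 0.2, 0.4):
    profile = blurred + ramp_gradient * x       # add the linear luminance ramp
    d1 = np.gradient(profile, x)
    d2 = np.gradient(d1, x)
    d3 = np.gradient(d2, x)
    print(f"ramp {ramp_gradient:.1f}: "
          f"max|1st|={np.max(np.abs(d1)):.3f}  "
          f"max|3rd|={np.max(np.abs(d3)):.3f}")  # 3rd-derivative peaks unchanged
```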
Abstract:
The aim of this work was to design and build equipment that can detect ferrous and non-ferrous objects in conveyed commodities, discriminate between them, and locate the object along the belt and across its width. The magnetic induction mechanism was used as a means of achieving the objectives of this research. In order to choose the appropriate geometry and size of the induction field source, the field distributions of different source geometries and sizes were studied in detail. From these investigations it was found that the square-loop geometry is the most appropriate field-generating source for the purpose of this project. The phenomenon of field distribution in conductors was also investigated. An instrument was designed and built in the preliminary stages of the work, based on a flux-gate magnetometer, with the ability to detect only ferrous objects. The instrument was designed such that it could be used to detect ferrous objects in the coal conveyors of power stations. The advantages of employing this detector in the power industry over present ferrous-metal electromagnetic separators were also considered. The objectives of this project culminated in the design and construction of a ferrous and non-ferrous detector with the ability to discriminate between ferrous and non-ferrous metals and to locate the objects on the conveying system. An experimental study was carried out to test the performance of the equipment in the detection of ferrous and non-ferrous objects of a given size carried on the conveyor belt. The ability of the equipment to discriminate between the types of metals and to locate the object on the belt was also evaluated experimentally. The benefits which can be gained from industrial implementations of the equipment were considered. Further topics which may be investigated as an extension of this work are given.
Abstract:
This thesis consisted of two major parts, one determining the masking characteristics of pixel noise and the other investigating the properties of the detection filter employed by the visual system. The theoretical cut-off frequency of white pixel noise can be defined from the size of the noise pixel. The empirical cut-off frequency, i.e. the largest size of noise pixels that mimics the effect of white noise in detection, was determined by measuring contrast energy thresholds for grating stimuli in the presence of spatial noise consisting of noise pixels of various sizes and shapes. The critical, i.e. minimum, number of noise pixels per grating cycle needed to mimic the effect of white noise in detection was found to decrease with the bandwidth of the stimulus. The shape of the noise pixels did not have any effect on the whiteness of pixel noise as long as there was at least the minimum number of noise pixels in all spatial dimensions. Furthermore, the masking power of white pixel noise is best described when the spectral density is calculated by taking into account all the dimensions of the noise pixels, i.e. width, height, and duration, even when there is random luminance in only one of these dimensions. The properties of the detection mechanism employed by the visual system were studied by measuring contrast energy thresholds for complex spatial patterns as a function of area in the presence of white pixel noise. Human detection efficiency was obtained by comparing human performance with an ideal detector. The stimuli consisted of band-pass filtered symbols, uniform and patched gratings, and point stimuli with randomised phase spectra. In agreement with the existing literature, detection performance was found to decline with the increasing amount of detail and contour in the stimulus. A measure of image complexity was developed and successfully applied to the data. The accuracy of the detection mechanism seems to depend on the spatial structure of the stimulus and the spatial spread of contrast energy.
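The spectral-density point above can be made concrete with the standard relation N = c²·w·h·t (contrast variance multiplied by the noise-pixel width, height and duration); the numbers below are invented and serve only as a worked example.

```python
# Worked example of the spectral-density relation implied by the abstract:
# N = contrast variance x pixel width x pixel height x pixel duration.
# All numbers are invented for illustration.
contrast_sd = 0.2                   # RMS contrast of the noise pixels
pixel_w_deg = 0.05                  # noise-pixel width  (degrees of visual angle)
pixel_h_deg = 0.05                  # noise-pixel height (degrees)
pixel_t_s   = 0.04                  # noise-pixel duration (seconds)

spectral_density = contrast_sd**2 * pixel_w_deg * pixel_h_deg * pixel_t_s
print(f"spectral density = {spectral_density:.2e} deg^2 s")
```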
Abstract:
We propose a novel recursive-algorithm-based maximum a posteriori probability (MAP) detector for spectrally efficient coherent wavelength division multiplexing (CoWDM) systems, and investigate its performance in a 1-bit/s/Hz on-off keyed (OOK) system limited by optical signal-to-noise ratio. The proposed method decodes each sub-channel using the signal levels not only of that sub-channel but also of its adjacent sub-channels, and can therefore effectively compensate deterministic inter-sub-channel crosstalk as well as inter-symbol interference arising from narrow-band filtering and chromatic dispersion (CD). Numerical simulation of a five-channel OOK-based CoWDM system at 10 Gbit/s per channel, using either direct or coherent detection, shows that the MAP decoder can eliminate the need for phase control of each optical carrier (which is required in a conventional CoWDM system) and greatly relaxes the spectral design of the demultiplexing filter at the receiver. It also significantly improves the back-to-back sensitivity and CD tolerance of the system.
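A toy sketch of symbol-by-symbol MAP detection that exploits adjacent sub-channel levels is shown below. The linear crosstalk model, its coefficient and the noise level are invented, and the brute-force hypothesis search stands in for the paper's recursive algorithm.

```python
# Toy sketch of symbol-by-symbol MAP detection that uses adjacent sub-channel
# levels to undo deterministic crosstalk. The linear crosstalk model, its
# coefficient and the noise level are invented, and this brute-force search
# stands in for the paper's recursive algorithm.
import itertools
import numpy as np

rng = np.random.default_rng(3)
n_bits, alpha, sigma = 5000, 0.3, 0.15            # OOK bits, crosstalk, noise std
bits = rng.integers(0, 2, n_bits).astype(float)

padded = np.pad(bits, 1)                          # zero guard bits at the edges
levels = bits + alpha * (padded[:-2] + padded[2:]) + sigma * rng.standard_normal(n_bits)

def map_decide(r_window):
    """MAP estimate of the centre bit from (r[k-1], r[k], r[k+1])."""
    best_bit, best_metric = 0.0, np.inf
    for hyp in itertools.product((0.0, 1.0), repeat=5):        # b[k-2..k+2]
        expected = [hyp[i] + alpha * (hyp[i - 1] + hyp[i + 1]) for i in (1, 2, 3)]
        metric = np.sum((np.asarray(r_window) - expected) ** 2)  # negative log-likelihood up to constants
        if metric < best_metric:
            best_bit, best_metric = hyp[2], metric
    return best_bit

padded_r = np.pad(levels, 1)
decisions = np.array([map_decide(padded_r[k:k + 3]) for k in range(n_bits)])
ber_map = np.mean(decisions != bits)
ber_naive = np.mean((levels > 0.5 + alpha) != bits)   # single threshold, no neighbours
print(f"BER, MAP over adjacent sub-channels: {ber_map:.4f}")
print(f"BER, single-threshold detection:     {ber_naive:.4f}")
```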
Abstract:
We demonstrate the first experimental implementation of a 3.9-Gb/s differential binary phase-shift keying (DBPSK) based double sideband (DSB) optical fast orthogonal frequency-division multiplexing (FOFDM) system with a reduced subcarrier spacing equal to half the symbol rate, transmitted over 300 m of multimode fiber (MMF) using intensity modulation and direct detection (IM/DD). The required received optical power at a bit-error rate (BER) of 10^-3 was measured to be approximately -14.2 dBm, with a receiver sensitivity penalty of only about 0.2 dB compared to the back-to-back case. Experimental results agree very well with the theoretical predictions.
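FOFDM with a subcarrier spacing of half the symbol rate is commonly realised with a discrete cosine transform; assuming that construction, the sketch below shows a DCT-based multiplex/demultiplex round trip over an AWGN toy channel. It is not the DBPSK/DSB IM/DD experiment described above, and all parameters are invented.

```python
# Minimal sketch: fast-OFDM style multiplexing via the DCT, whose real cosine
# subcarriers sit at half the spacing of FFT-based OFDM. All parameters are
# invented; this is not the experimental DBPSK/DSB IM/DD system above.
import numpy as np
from scipy.fft import dct, idct

rng = np.random.default_rng(4)
n_subcarriers, n_frames, noise_sd = 64, 200, 0.3

bits = rng.integers(0, 2, (n_frames, n_subcarriers))
symbols = 2.0 * bits - 1.0                          # real BPSK per subcarrier

tx = idct(symbols, type=2, norm="ortho", axis=1)    # multiplex onto cosine subcarriers
rx = tx + noise_sd * rng.standard_normal(tx.shape)  # toy AWGN channel
demod = dct(rx, type=2, norm="ortho", axis=1)       # matched transform at the receiver

ber = np.mean((demod > 0.0).astype(int) != bits)
print(f"BER over the toy AWGN channel: {ber:.2e}")
```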
Abstract:
Web APIs have gained increasing popularity in recent Web service technology development owing to the simplicity of their technology stack and the proliferation of mashups. However, efficiently discovering Web APIs and the relevant documentation on the Web is still a challenging task, even with the best resources available on the Web. In this paper we cast the problem of detecting Web API documentation as a text classification problem: classifying a given Web page as Web API associated or not. We propose a supervised generative topic model called feature latent Dirichlet allocation (feaLDA), which offers a generic probabilistic framework for automatic detection of Web APIs. feaLDA not only captures the correspondence between data and the associated class labels, but also provides a mechanism for incorporating side information, such as labelled features automatically learned from data, that can effectively help improve classification performance. Extensive experiments on our Web API documentation dataset show that the feaLDA model outperforms three strong supervised baselines, including naive Bayes, support vector machines, and the maximum entropy model, by over 3% in classification accuracy. In addition, feaLDA also gives superior performance when compared against other existing supervised topic models.
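For context, one of the baselines mentioned above (a naive Bayes bag-of-words classifier over page text) can be sketched in a few lines with scikit-learn; the toy pages and labels are invented, and this is not the feaLDA model itself.

```python
# Minimal sketch of the kind of baseline the abstract compares against: a naive
# Bayes bag-of-words classifier labelling pages as Web-API documentation or not.
# The toy documents and labels are invented; this is not the feaLDA model.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

train_pages = [
    "GET /v1/users returns a JSON array; authenticate with an API key header",
    "endpoint reference: POST /orders accepts a JSON body and returns 201",
    "our bakery opens at 8am and serves fresh sourdough every morning",
    "read the latest travel blog posts about hiking in the Alps",
]
train_labels = [1, 1, 0, 0]            # 1 = Web API documentation, 0 = other

clf = make_pipeline(CountVectorizer(stop_words="english"), MultinomialNB())
clf.fit(train_pages, train_labels)

test_page = ["the REST endpoint /v1/payments requires an OAuth token and returns JSON"]
print("predicted label:", clf.predict(test_page)[0])
print("P(API doc):     ", clf.predict_proba(test_page)[0][1])
```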
Abstract:
Visual field assessment is a core component of glaucoma diagnosis and monitoring, and the Standard Automated Perimetry (SAP) test is still considered the gold standard of visual field assessment. Although SAP is a subjective assessment and has many pitfalls, it is constantly used in the diagnosis of visual field loss in glaucoma. The multifocal visual evoked potential (mfVEP) is a newly introduced method for assessing the visual field objectively. Several analysis protocols have been tested to identify early visual field losses in glaucoma patients using the mfVEP technique; some were successful in detecting field defects comparable to standard SAP visual field assessment, while others were not very informative and needed further adjustment and research. In this study, we implemented a novel analysis approach and evaluated its validity and whether it could be used effectively for early detection of visual field defects in glaucoma. OBJECTIVES: The purpose of this study is to examine the effectiveness of a new analysis method for the multifocal visual evoked potential (mfVEP) when used for objective assessment of the visual field in glaucoma patients, compared to the gold standard technique. METHODS: Three groups were tested in this study: normal controls (38 eyes), glaucoma patients (36 eyes) and glaucoma suspect patients (38 eyes). All subjects had two standard Humphrey visual field (HFA) 24-2 tests and a single mfVEP test undertaken in one session. Analysis of the mfVEP results was done using the new analysis protocol, the Hemifield Sector Analysis (HSA) protocol. Analysis of the HFA was done using the standard grading system. RESULTS: Analysis of the mfVEP results showed a statistically significant difference between the three groups in mean signal-to-noise ratio (SNR) (ANOVA, p<0.001, 95% CI). The differences between superior and inferior hemifields were statistically significant in 11/11 sectors in the glaucoma patient group (t-test, p<0.001), partially significant in 5/11 sectors in the glaucoma suspect group (t-test, p<0.01), and not significant between most sectors in the normal group (only 1/11 was significant; t-test, p<0.9). The sensitivity and specificity of the HSA protocol in detecting glaucoma were 97% and 86% respectively, while for glaucoma suspects they were 89% and 79%. DISCUSSION: The results showed that the new analysis protocol was able to confirm already existing field defects detected by standard HFA and was able to differentiate between the three study groups, with a clear distinction between normal subjects and patients with suspected glaucoma; the distinction between normal subjects and glaucoma patients was especially clear and significant. CONCLUSION: The new HSA protocol used in mfVEP testing can be used to detect glaucomatous visual field defects in both glaucoma and glaucoma suspect patients. Using this protocol can provide information about focal visual field differences across the horizontal midline, which can be utilized to differentiate between glaucoma and normal subjects. The sensitivity and specificity of the mfVEP test showed very promising results and correlated with other anatomical changes in glaucoma field loss.
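The statistical core of a hemifield sector analysis, a paired comparison of matching superior and inferior sector SNRs, can be sketched as below on simulated data; this is not the authors' HSA implementation, and the SNR distributions are invented.

```python
# Minimal sketch of the statistical core of a hemifield sector analysis:
# compare signal-to-noise ratios of matching superior/inferior sectors with
# paired t-tests. SNR values are simulated; not the authors' HSA implementation.
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
n_sectors, n_eyes = 11, 36

# Simulated per-eye SNRs for each sector pair (superior vs mirrored inferior).
superior = rng.normal(loc=2.0, scale=0.4, size=(n_eyes, n_sectors))
inferior = superior - rng.normal(loc=0.3, scale=0.2, size=(n_eyes, n_sectors))

significant = 0
for s in range(n_sectors):
    t, p = stats.ttest_rel(superior[:, s], inferior[:, s])   # paired t-test
    significant += p < 0.001
print(f"sector pairs with p < 0.001: {significant}/{n_sectors}")
```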
Abstract:
Objective: The purpose of this study was to examine the effectiveness of a new analysis method of mfVEP objective perimetry in the early detection of glaucomatous visual field defects compared to the gold standard technique. Methods and patients: Three groups were tested in this study: normal controls (38 eyes), glaucoma patients (36 eyes), and glaucoma suspect patients (38 eyes). All subjects underwent two standard 24-2 visual field tests with the Humphrey Field Analyzer and a single mfVEP test in one session. Analysis of the mfVEP results was carried out using the new analysis protocol: the hemifield sector analysis protocol. Results: Analysis of the mfVEP showed that the signal-to-noise ratio (SNR) difference between superior and inferior hemifields was statistically significant between the three groups (analysis of variance, P<0.001 with a 95% confidence interval: 2.82, 2.89 for the normal group; 2.25, 2.29 for the glaucoma suspect group; 1.67, 1.73 for the glaucoma group). The difference between superior and inferior hemifield sectors and hemi-rings was statistically significant in 11/11 pairs of sectors and hemi-rings in the glaucoma patient group (t-test, P<0.001), statistically significant in 5/11 pairs of sectors and hemi-rings in the glaucoma suspect group (t-test, P<0.01), and only 1/11 pairs was statistically significant in the normal group (t-test, P<0.9). The sensitivity and specificity of the hemifield sector analysis protocol in detecting glaucoma were 97% and 86% respectively, and 89% and 79% in glaucoma suspects. These results showed that the new analysis protocol was able to confirm existing visual field defects detected by standard perimetry, was able to differentiate between the three study groups with a clear distinction between normal patients and those with suspected glaucoma, and was able to detect early visual field changes not detected by standard perimetry. In addition, the distinction between normal and glaucoma patients was especially clear and significant using this analysis. Conclusion: The new hemifield sector analysis protocol used in mfVEP testing can be used to detect glaucomatous visual field defects in both glaucoma and glaucoma suspect patients. This protocol can provide information about focal visual field differences across the horizontal midline, which can be utilized to differentiate between glaucoma and normal subjects. The sensitivity and specificity of the mfVEP test showed very promising results and correlated with other anatomical changes in glaucomatous visual field loss. The intersector analysis protocol can detect early field changes not detected by the standard Humphrey Field Analyzer test. © 2013 Mousa et al, publisher and licensee Dove Medical Press Ltd.
Abstract:
In the face of global population growth and the uneven distribution of water supply, a better knowledge of the spatial and temporal distribution of surface water resources is critical. Remote sensing provides a synoptic view of ongoing processes, which addresses the intricate nature of water surfaces and allows an assessment of the pressures placed on aquatic ecosystems. However, the main challenge in identifying water surfaces from remotely sensed data is the high variability of spectral signatures, both in space and time. In the last 10 years only a few operational methods have been proposed to map or monitor surface water at continental or global scale, and each of them shows limitations. The objective of this study is to develop and demonstrate the adequacy of a generic multi-temporal and multi-spectral image analysis method to detect water surfaces automatically and to monitor them in near-real-time. The proposed approach, based on a transformation of the RGB color space into HSV, provides dynamic information at the continental scale. The validation of the algorithm showed very few omission errors and no commission errors, demonstrating the ability of the proposed algorithm to perform as effectively as human interpretation of the images. The validation of the permanent water surface product with an independent dataset derived from high-resolution imagery showed an accuracy of 91.5% and few commission errors. Potential applications of the proposed method have been identified and discussed. The methodology that has been developed is generic: it can be applied to sensors with similar bands with good reliability and minimal effort. Moreover, this experiment at continental scale showed that the methodology is efficient for a large range of environmental conditions. Additional preliminary tests over other continents indicate that the proposed methodology could also be applied at the global scale without too many difficulties.
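The RGB-to-HSV step that the method builds on can be illustrated as follows; the composite image and the hue/value thresholds are invented placeholders, not the operational algorithm's decision rules.

```python
# Minimal sketch of the RGB -> HSV step the abstract builds on: transform a
# toy, invented composite to HSV and flag "water-like" pixels with a simple
# hue/value rule. Thresholds are placeholders, not the operational algorithm.
import numpy as np
from matplotlib.colors import rgb_to_hsv

rng = np.random.default_rng(6)
h, w = 200, 200
rgb = rng.uniform(0.0, 1.0, size=(h, w, 3))     # stand-in for an RGB composite

# Paint a dark, blue-ish "lake" patch so the rule has something to find.
rgb[60:140, 50:150] = [0.05, 0.15, 0.35] + 0.02 * rng.standard_normal((80, 100, 3))

hsv = rgb_to_hsv(np.clip(rgb, 0.0, 1.0))
hue, sat, val = hsv[..., 0], hsv[..., 1], hsv[..., 2]

# Placeholder rule: blue hues with low brightness are treated as water.
water_mask = (hue > 0.5) & (hue < 0.75) & (val < 0.5)
print(f"pixels flagged as water: {water_mask.sum()} "
      f"({100.0 * water_mask.mean():.1f}% of the scene)")
```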