34 results for signal detection theory


Relevance:

30.00%

Publisher:

Abstract:

The detection of signals in the presence of noise is one of the most basic and important problems encountered by communication engineers. Although the literature abounds with analyses of communications in Gaussian noise, relatively little work has appeared dealing with communications in non-Gaussian noise. In this thesis several digital communication systems disturbed by non-Gaussian noise are analysed. The thesis is divided into two main parts. In the first part, a filtered-Poisson impulse noise model is utilized to calculate error probability characteristics of a linear receiver operating in additive impulsive noise. First, the effect that non-Gaussian interference has on the performance of a receiver that has been optimized for Gaussian noise is determined. The factors affecting the choice of modulation scheme so as to minimize the detrimental effects of non-Gaussian noise are then discussed. In the second part, a new theoretical model of impulsive noise that fits well with the observed statistics of noise in radio channels below 100 MHz has been developed. This empirical noise model is applied to the detection of known signals in the presence of noise to determine the optimal receiver structure. The performance of such a detector has been assessed and is found to depend on the signal shape, the time-bandwidth product, and the signal-to-noise ratio. The optimal signal to minimize the probability of error of the detector is determined. Attention is then turned to the problem of threshold detection. Detector structure, large-sample performance and robustness against errors in the detector parameters are examined. Finally, estimators of such parameters as the occurrence of an impulse and the parameters in an empirical noise model are developed for the case of an adaptive system with slowly varying conditions.
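To make the effect concrete, the sketch below estimates, by Monte Carlo simulation, the bit error rate of a linear (Gaussian-optimal) receiver for antipodal signalling in purely Gaussian noise and in a crude impulsive mixture. The Bernoulli-Gaussian impulse model, the impulse probability and the gain factor are illustrative stand-ins, not the thesis's filtered-Poisson model.

```python
import numpy as np

rng = np.random.default_rng(0)

def ber_linear_receiver(snr_db, n_bits=200_000, p_impulse=0.01, impulse_gain=100.0):
    """Monte Carlo BER of a linear (Gaussian-optimal) receiver for antipodal
    signalling, in Gaussian noise and in a simple Bernoulli-Gaussian impulse mix."""
    eb = 1.0                                    # energy per bit (levels +/-1)
    n0 = eb / (10 ** (snr_db / 10))             # noise density from Eb/N0
    bits = rng.integers(0, 2, n_bits)
    tx = 2.0 * bits - 1.0                       # map {0,1} -> {-1,+1}

    gauss = rng.normal(0.0, np.sqrt(n0 / 2), n_bits)          # background noise
    hits = rng.random(n_bits) < p_impulse                     # rare impulse events
    imp = gauss + hits * rng.normal(0.0, np.sqrt(impulse_gain * n0 / 2), n_bits)

    ber_gauss = np.mean((tx + gauss > 0) != bits.astype(bool))
    ber_impulsive = np.mean((tx + imp > 0) != bits.astype(bool))
    return ber_gauss, ber_impulsive

for snr in (4, 8, 12):
    bg, bi = ber_linear_receiver(snr)
    print(f"Eb/N0 = {snr:2d} dB: BER Gaussian = {bg:.2e}, BER impulsive = {bi:.2e}")
```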

Relevance:

30.00%

Publisher:

Abstract:

It is generally assumed when using Bayesian inference methods for neural networks that the input data contains no noise. For real-world (errors-in-variables) problems this is clearly an unsafe assumption. This paper presents a Bayesian neural network framework which accounts for input noise provided that a model of the noise process exists. In the limit where the noise process is small and symmetric, it is shown, using the Laplace approximation, that this method adds an extra term to the usual Bayesian error bar which depends on the variance of the input noise process. Further, by treating the true (noiseless) input as a hidden variable, and sampling this jointly with the network's weights using a Markov chain Monte Carlo method, it is demonstrated that it is possible to infer the regression over the noiseless input. This leads to the possibility of training an accurate model of a system using less accurate, or more uncertain, data. This is demonstrated on both a synthetic noisy sine wave problem and a real problem of inferring the forward model for a satellite radar backscatter system used to predict sea surface wind vectors.
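A minimal sketch of the error-bar correction described above, assuming a toy fixed-weight network and small, symmetric input noise: the usual Bayesian variance is augmented by the squared input gradient times the input-noise variance. The weights and variances are invented for illustration; the MCMC treatment of the noiseless input is not shown.

```python
import numpy as np

# Toy one-hidden-layer regression network with fixed (illustrative) weights.
w1, b1 = np.array([1.2, -0.7, 0.4]), np.array([0.1, -0.3, 0.2])
w2, b2 = np.array([0.8, -1.1, 0.5]), 0.05

def net(x):
    """Scalar-input, scalar-output network f(x)."""
    return float(w2 @ np.tanh(w1 * x + b1) + b2)

def corrected_variance(x, var_bayes, var_input, eps=1e-5):
    """Usual Bayesian error bar plus the input-noise term:
    var_total ~= var_bayes + (df/dx)^2 * var_input  (small, symmetric input noise)."""
    grad = (net(x + eps) - net(x - eps)) / (2 * eps)   # numerical df/dx
    return var_bayes + grad ** 2 * var_input

x0 = 0.3
var = corrected_variance(x0, var_bayes=0.02, var_input=0.1)
print(f"f({x0}) = {net(x0):.3f} +/- {np.sqrt(var):.3f}")
```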

Relevance:

30.00%

Publisher:

Abstract:

This work explores the relevance of semantic and linguistic description to translation theory and practice. It aims at a practical model for approaching texts to be translated. As literary texts [mainly poetry] are the focus of attention, so are stylistic matters. Note, however, that 'style', and, to some extent, the conclusions of the work, are not limited to so-called literary texts. The study of semantic description reveals that most translation problems do not stem from the cognitive (langue-related), but rather from the contextual (parole-related) aspects of meaning. Thus, any linguistic model that fails to account for the latter is bound to fall short. T.G.G. does fall short, whereas Systemics, concerned with both the 'langue' and 'parole' (mainly stylistic and sociolinguistic) aspects of meaning, provides a useful framework for approaching texts to be translated. Two essential semantic principles for translation are: that meaning is the property of a language (Firth); and the 'relativity of meaning assignments' (Tymoczko). Both imply that meaning can only be assessed correctly against the relevant socio-cultural background. Translation is seen as a restricted creation, and the translator's approach as a three-dimensional critical one. To encompass the most technical to the most literary text, and to account for variations in emphasis within any text, translation theory must be based on a typology of function: Halliday's ideational, interpersonal and textual functions, or Bühler's symbol, signal and symptom functions. Function [overall and specific] will dictate aims and method, and also provide the critic with criteria to assess translation faithfulness. Translation can never be reduced to purely objective methods, however. Intuitive procedures intervene in textual interpretation and analysis, in the choice of equivalents, and in the reception of a translation. Ultimately, translation, in theory and practice, may perhaps constitute the touchstone for the validity of linguistic and semantic theories.

Relevance:

30.00%

Publisher:

Abstract:

Task classification is introduced as a method for the evaluation of monitoring behaviour in different task situations. On the basis of an analysis of different monitoring tasks, a task classification system comprising four task 'dimensions' is proposed. The perceptual speed and flexibility of closure categories, which are identified with signal discrimination type, comprise the principal dimension in this taxonomy, the others being sense modality, the time course of events, and source complexity. It is also proposed that decision theory provides the most complete method for the analysis of performance in monitoring tasks. Several different aspects of decision theory in relation to monitoring behaviour are described. A method is also outlined whereby both accuracy and latency measures of performance may be analysed within the same decision theory framework. Eight experiments and an organizational study are reported. The results show that a distinction can be made between the perceptual efficiency (sensitivity) of a monitor and his criterial level of response, and that in most monitoring situations there is no decrement in efficiency over the work period, but an increase in the strictness of the response criterion. The range of tasks exhibiting either or both of these performance trends can be specified within the task classification system. In particular, it is shown that a sensitivity decrement is only obtained for 'speed' tasks with a high stimulation rate. A distinctive feature of 'speed' tasks is that target detection requires the discrimination of a change in a stimulus relative to preceding stimuli, whereas in 'closure' tasks the information required for the discrimination of targets is presented at the same point in time. In the final study, the specification of tasks yielding sensitivity decrements is shown to be consistent with a task classification analysis of the monitoring literature. It is also demonstrated that the signal type dimension has a major influence on the consistency of individual differences in performance in different tasks. The results provide an empirical validation for the 'speed' and 'closure' categories, and suggest that individual differences are not completely task specific but are dependent on the demands common to different tasks. Task classification is therefore shown to enable improved generalizations to be made of the factors affecting 1) performance trends over time, and 2) the consistency of performance in different tasks. A decision theory analysis of response latencies is shown to support the view that criterion shifts are obtained in some tasks, while sensitivity shifts are obtained in others. The results of a psychophysiological study also suggest that evoked potential latency measures may provide temporal correlates of criterion shifts in monitoring tasks. Among other results, the finding that the latencies of negative responses do not increase over time is taken to invalidate arousal-based theories of performance trends over a work period. An interpretation in terms of expectancy, however, provides a more reliable explanation of criterion shifts. Although the mechanisms underlying the sensitivity decrement are not completely clear, the results rule out 'unitary' theories such as observing response and coupling theory. It is suggested that an interpretation in terms of memory and data limitations on information processing provides the most parsimonious explanation of all the results in the literature relating to the sensitivity decrement.
Task classification therefore enables the refinement and selection of theories of monitoring behaviour in terms of their reliability in generalizing predictions to a wide range of tasks. It is thus concluded that task classification and decision theory provide a reliable basis for the assessment and analysis of monitoring behaviour in different task situations.
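For readers unfamiliar with the decision theory framework used here, the following sketch computes the standard signal detection theory indices, sensitivity d' and response criterion, from hit and false-alarm counts; the counts are invented to illustrate the reported pattern of a stable d' with an increasingly strict criterion over the work period.

```python
from scipy.stats import norm

def sdt_indices(hits, misses, false_alarms, correct_rejections):
    """Sensitivity d' and criterion c from a monitoring session, with a
    log-linear correction so hit/false-alarm rates never reach 0 or 1."""
    hit_rate = (hits + 0.5) / (hits + misses + 1)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    z_hit, z_fa = norm.ppf(hit_rate), norm.ppf(fa_rate)
    d_prime = z_hit - z_fa              # perceptual efficiency (sensitivity)
    criterion = -0.5 * (z_hit + z_fa)   # larger = stricter (more conservative)
    return d_prime, criterion

# Early vs. late in the watch: fewer detections later, but also fewer false alarms
print(sdt_indices(hits=40, misses=10, false_alarms=12, correct_rejections=138))
print(sdt_indices(hits=30, misses=20, false_alarms=4, correct_rejections=146))
```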

Relevance:

30.00%

Publisher:

Abstract:

The subject of investigation of the present research is the use of smart hydrogels with fibre optic sensor technology. The aim was to develop a cost-effective sensor platform for the detection of water in hydrocarbon media, and of dissolved inorganic analytes, namely potassium, calcium and aluminium. The fibre optic sensors in this work depend upon the use of hydrogels either to entrap chemotropic agents or to respond to external environmental changes by changing their inherent properties, such as refractive index (RI). A review of current fibre optic sensing technology showed that the main principles utilised are the measurement of either signal loss or a change in wavelength of the light transmitted through the system. The signal loss principle relies on changing the conditions required for total internal reflection to occur. Hydrogels are cross-linked polymer networks that swell but do not dissolve in aqueous environments. Smart hydrogels are synthetic materials that exhibit properties additional to those inherent in their structure. In order to control the non-inherent properties, the hydrogels were fabricated with the addition of chemotropic agents. For the detection of water, hydrogels of low refractive index were synthesized using fluorinated monomers. Sulfonated monomers were used for their extreme hydrophilicity as a means of water sensing through an RI change. To enhance the sensing capability of the hydrogel, chemotropic agents such as pH indicators and cobalt salts were used. The system comprises the smart hydrogel coated onto an exposed section of the fibre optic core, connected to an interrogation system measuring the difference in the signal. The information obtained was analysed using purpose-designed software. The developed sensor platform showed that an increase in the target species caused an increase in the signal lost from the sensor system, allowing detection of the target species. The system has potential applications in areas such as clinical point of care, water detection in fuels, and the detection of dissolved ions in the water industry.
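A minimal illustration of the signal-loss principle mentioned above: total internal reflection at the core/coating interface requires the coating (hydrogel) refractive index to stay below the core index, and the critical angle shifts as the hydrogel's RI changes. The index values are illustrative, not measurements from this work.

```python
import numpy as np

def critical_angle_deg(n_core, n_coating):
    """Critical angle for total internal reflection at the core/coating interface;
    returns None when the coating index reaches the core index (TIR is lost)."""
    if n_coating >= n_core:
        return None
    return float(np.degrees(np.arcsin(n_coating / n_core)))

n_core = 1.457   # typical silica fibre core index (illustrative)
for n_gel in (1.34, 1.38, 1.42, 1.46):    # hydrogel RI changing as it responds
    theta_c = critical_angle_deg(n_core, n_gel)
    label = f"{theta_c:.1f} deg" if theta_c is not None else "no TIR (light lost)"
    print(f"coating n = {n_gel:.2f}: critical angle = {label}")
```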

Relevance:

30.00%

Publisher:

Abstract:

A theoretical analysis of two-wave mixing in a BSO crystal is developed in the undepleted-pump approximation for a modulated signal beam. It is shown that, for a modulation of high enough frequency, significant ac amplification is possible at three distinct values of pump-beam detuning. A signal beam that is amplitude modulated by a square wave is analyzed by means of the theory, and experimental results are presented in confirmation of the analysis. Finally, it is shown that in the presence of absorption the optimum detunings for dc and ac amplification are different.
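As a rough orientation only: in the undepleted-pump approximation the signal intensity grows exponentially with a two-wave-mixing gain coefficient while absorption attenuates it, so net amplification requires the gain to exceed the loss. The sketch below uses invented coefficients and does not model the detuning or modulation dependence analysed in the paper.

```python
import numpy as np

def net_signal_gain(gamma_per_cm, alpha_per_cm, length_cm):
    """Undepleted-pump picture: exponential two-wave-mixing gain offset by
    linear absorption, I_out / I_in = exp((Gamma - alpha) * L)."""
    return float(np.exp((gamma_per_cm - alpha_per_cm) * length_cm))

# Invented coefficients, only to show that net gain needs Gamma > alpha
for gamma in (2.0, 4.0, 8.0):          # effective gain coefficient, cm^-1
    g = net_signal_gain(gamma, alpha_per_cm=1.5, length_cm=0.8)
    print(f"Gamma = {gamma:.1f} /cm -> net intensity gain = {g:.2f}x")
```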

Relevance:

30.00%

Publisher:

Abstract:

A preliminary study by Freeman et al (1996b) suggested that when complex patterns of motion elicit impressions of two-dimensionality, odd-item-out detection improves, provided targets can be differentiated on the basis of surface properties. Their results can be accounted for if it is supposed that observers are permitted efficient access to 3-D surface descriptions, while access to 2-D motion descriptions is restricted. To test the hypothesis, a standard search technique was employed in which targets could be distinguished on the basis of slant sign. In one experiment, slant impressions were induced through the summing of deformation and translation components. In a second, they were induced through the summing of shear and translation components. Neither showed any evidence of efficient access. A third experiment explored the possibility that access to these representations may have been hindered by a lack of grouping between the stimuli. Attempts to improve grouping failed to produce convincing evidence in support of this. An alternative explanation is that complex patterns of motion are simply not processed simultaneously. Psychophysical and physiological studies have, however, suggested that multiple mechanisms selective for complex motion do exist. Using a subthreshold summation technique I found evidence supporting the notion that complex motions are processed in parallel. Furthermore, in a spatial summation experiment, coherence thresholds were measured for displays containing different numbers of complex motion patches. Consistent with the idea that complex motion processing proceeds in parallel, increases in the number of motion patches were seen to decrease thresholds, both for expansion and rotation. Moreover, the rates of decrease were higher than those typically expected from probability summation, implying that mechanisms are available which can pool signals from spatially distinct complex motion flows.
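As a point of comparison for the spatial summation result, the sketch below computes the threshold decrease conventionally expected from probability summation across independent detectors with Weibull psychometric functions (threshold falling as n to the power -1/beta); the base threshold and slope are illustrative, not the study's fitted values.

```python
def probability_summation_threshold(base_threshold, n_patches, beta=3.5):
    """Threshold predicted for n independent detectors with Weibull psychometric
    functions of slope beta: threshold falls as n ** (-1 / beta)."""
    return base_threshold * n_patches ** (-1.0 / beta)

base = 0.40   # illustrative single-patch coherence threshold
for n in (1, 2, 4, 8):
    pred = probability_summation_threshold(base, n)
    print(f"{n} patch(es): predicted coherence threshold = {pred:.3f}")
# Observed thresholds falling faster than these predictions would indicate genuine
# pooling of signals across spatially distinct complex-motion flows.
```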

Relevance:

30.00%

Publisher:

Abstract:

We propose a novel recursive-algorithm-based maximum a posteriori probability (MAP) detector for spectrally efficient coherent wavelength division multiplexing (CoWDM) systems, and investigate its performance in a 1-bit/s/Hz on-off keyed (OOK) system limited by optical signal-to-noise ratio. The proposed method decodes each sub-channel using the signal levels not only of the particular sub-channel but also of its adjacent sub-channels, and can therefore effectively compensate deterministic inter-sub-channel crosstalk as well as inter-symbol interference arising from narrow-band filtering and chromatic dispersion (CD). Numerical simulation of a five-channel OOK-based CoWDM system with 10 Gbit/s per channel, using either direct or coherent detection, shows that the MAP decoder can eliminate the need for phase control of each optical carrier (which is required in a conventional CoWDM system), and greatly relaxes the spectral design of the demultiplexing filter at the receiver. It also significantly improves the back-to-back sensitivity and CD tolerance of the system.
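The sketch below illustrates the kind of joint decision the detector makes, using a hypothetical three-sub-channel toy with a known linear crosstalk coefficient and Gaussian noise: the centre sub-channel's bit is chosen by minimising the squared error over all neighbouring bit patterns. This is a simplified stand-in, not the paper's recursive algorithm.

```python
import itertools
import numpy as np

# Hypothetical linear crosstalk model (assumed known at the receiver): each
# received level mixes a fraction of the neighbouring sub-channels' OOK levels.
CROSSTALK = 0.25
NOISE_SIGMA = 0.15

def expected_levels(bits):
    """Noise-free received levels for a 3-sub-channel bit pattern."""
    b = np.array(bits, dtype=float)
    out = b.copy()
    out[1:] += CROSSTALK * b[:-1]    # leakage from the left neighbour
    out[:-1] += CROSSTALK * b[1:]    # leakage from the right neighbour
    return out

def map_decide_centre(received):
    """MAP estimate of the centre sub-channel's bit from the three received
    levels, assuming equiprobable bits and additive Gaussian noise."""
    best_bit, best_metric = 0, np.inf
    for bits in itertools.product((0, 1), repeat=3):
        metric = np.sum((received - expected_levels(bits)) ** 2)  # ~ -log likelihood
        if metric < best_metric:
            best_bit, best_metric = bits[1], metric
    return best_bit

rng = np.random.default_rng(1)
true_bits = (1, 0, 1)
rx = expected_levels(true_bits) + rng.normal(0, NOISE_SIGMA, 3)
print("decided centre bit:", map_decide_centre(rx), "(true bit:", true_bits[1], ")")
```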

Relevance:

30.00%

Publisher:

Abstract:

Removing noise from piecewise constant (PWC) signals is a challenging signal processing problem arising in many practical contexts. For example, in exploration geosciences, noisy drill hole records need to be separated into stratigraphic zones, and in biophysics, jumps between molecular dwell states have to be extracted from noisy fluorescence microscopy signals. Many PWC denoising methods exist, including total variation regularization, mean shift clustering, stepwise jump placement, running medians, convex clustering shrinkage and bilateral filtering; conventional linear signal processing methods are fundamentally unsuited. This paper (part I, the first of two) shows that most of these methods are associated with a special case of a generalized functional, minimized to achieve PWC denoising. The minimizer can be obtained by diverse solver algorithms, including stepwise jump placement, convex programming, finite differences, iterated running medians, least angle regression, regularization path following and coordinate descent. In the second paper, part II, we introduce novel PWC denoising methods and report comparisons between these methods on synthetic and real signals, showing that the new understanding of the problem gained in part I leads to new methods that have a useful role to play.
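Of the methods listed, iterated running medians are simple to sketch; the following toy example (window size and stopping rule chosen for illustration) applies a running median repeatedly to a noisy piecewise constant signal until it stops changing.

```python
import numpy as np
from scipy.signal import medfilt

def iterated_running_median(y, kernel_size=7, max_iter=50):
    """Repeatedly apply a running median until the signal stops changing,
    a simple piecewise-constant denoiser (one of the methods named above)."""
    x = np.asarray(y, dtype=float)
    for _ in range(max_iter):
        x_new = medfilt(x, kernel_size)
        if np.allclose(x_new, x):
            break
        x = x_new
    return x

rng = np.random.default_rng(0)
clean = np.repeat([0.0, 2.0, 1.0, 3.0], 50)          # piecewise constant signal
noisy = clean + rng.normal(0, 0.3, clean.size)
denoised = iterated_running_median(noisy)
print("RMS error noisy:   ", np.sqrt(np.mean((noisy - clean) ** 2)).round(3))
print("RMS error denoised:", np.sqrt(np.mean((denoised - clean) ** 2)).round(3))
```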

Relevance:

30.00%

Publisher:

Abstract:

Visual field assessment is a core component of glaucoma diagnosis and monitoring, and the standard automated perimetry (SAP) test is still considered the gold standard of visual field assessment. Although SAP is a subjective assessment with many pitfalls, it remains in constant use for the diagnosis of visual field loss in glaucoma. The multifocal visual evoked potential (mfVEP) is a newly introduced method for objective visual field assessment. Several analysis protocols have been tested to identify early visual field losses in glaucoma patients using the mfVEP technique; some were successful in detecting field defects comparable to those found with standard SAP visual field assessment, while others were not very informative and needed further adjustment and research. In this study, we implemented a novel analysis approach and evaluated its validity and whether it could be used effectively for early detection of visual field defects in glaucoma. OBJECTIVES: The purpose of this study is to examine the effectiveness of a new analysis method for the multifocal visual evoked potential (mfVEP) when used for objective assessment of the visual field in glaucoma patients, compared to the gold standard technique. METHODS: Three groups were tested in this study: normal controls (38 eyes), glaucoma patients (36 eyes) and glaucoma suspect patients (38 eyes). All subjects had two standard Humphrey Field Analyzer (HFA) 24-2 visual field tests and a single mfVEP test undertaken in one session. Analysis of the mfVEP results was done using the new analysis protocol, the hemifield sector analysis (HSA) protocol. Analysis of the HFA was done using the standard grading system. RESULTS: Analysis of the mfVEP results showed a statistically significant difference between the three groups in the mean signal-to-noise ratio (SNR) (ANOVA, p<0.001 with a 95% CI). The differences between superior and inferior hemifields were statistically significant in 11/11 sectors in the glaucoma patient group (t-test, p<0.001), partially significant in 5/11 sectors in the glaucoma suspect group (t-test, p<0.01), and not significant in most sectors of the normal group (only 1/11 was significant; t-test, p<0.9). Sensitivity and specificity of the HSA protocol in detecting glaucoma were 97% and 86% respectively, while for glaucoma suspects they were 89% and 79%. DISCUSSION: The results showed that the new analysis protocol was able to confirm existing field defects detected by standard HFA and was able to differentiate between the three study groups, with a clear distinction between normal subjects and patients with suspected glaucoma; the distinction between normal subjects and glaucoma patients was especially clear and significant. CONCLUSION: The new HSA protocol used in mfVEP testing can be used to detect glaucomatous visual field defects in both glaucoma and glaucoma suspect patients. Using this protocol can provide information about focal visual field differences across the horizontal midline, which can be utilized to differentiate between glaucoma and normal subjects. The sensitivity and specificity of the mfVEP test showed very promising results and correlated with other anatomical changes in glaucomatous field loss.
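A sketch of the kind of hemifield comparison the HSA protocol performs, assuming paired SNR values for 11 mirror-image superior/inferior sector pairs; the numbers are simulated for illustration and the paired t-test mirrors the per-sector tests reported above.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

# Simulated SNRs for 11 mirror-image superior/inferior sector pairs in one group:
# a glaucomatous defect depresses one hemifield relative to the other.
superior = rng.normal(2.3, 0.3, 11)
inferior = superior - rng.normal(0.6, 0.15, 11)      # simulated inferior loss

t_stat, p_value = stats.ttest_rel(superior, inferior)  # paired test across sector pairs
print(f"mean superior - inferior SNR difference = {np.mean(superior - inferior):.2f}")
print(f"paired t = {t_stat:.2f}, p = {p_value:.4f}")
```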

Relevance:

30.00%

Publisher:

Abstract:

Objective: The purpose of this study was to examine the effectiveness of a new analysis method of mfVEP objective perimetry in the early detection of glaucomatous visual field defects compared to the gold standard technique. Methods and patients: Three groups were tested in this study: normal controls (38 eyes), glaucoma patients (36 eyes), and glaucoma suspect patients (38 eyes). All subjects underwent two standard Humphrey Field Analyzer 24-2 visual field tests and a single mfVEP test in one session. Analysis of the mfVEP results was carried out using the new analysis protocol: the hemifield sector analysis protocol. Results: Analysis of the mfVEP showed that the signal-to-noise ratio (SNR) difference between superior and inferior hemifields was statistically significant between the three groups (analysis of variance, P<0.001 with a 95% confidence interval: 2.82, 2.89 for the normal group; 2.25, 2.29 for the glaucoma suspect group; 1.67, 1.73 for the glaucoma group). The difference between superior and inferior hemifield sectors and hemi-rings was statistically significant in 11/11 pairs of sectors and hemi-rings in the glaucoma patient group (t-test, P<0.001), statistically significant in 5/11 pairs of sectors and hemi-rings in the glaucoma suspect group (t-test, P<0.01), and only 1/11 pairs was statistically significant in the normal group (t-test, P<0.9). The sensitivity and specificity of the hemifield sector analysis protocol in detecting glaucoma were 97% and 86% respectively, and 89% and 79% in glaucoma suspects. These results showed that the new analysis protocol was able to confirm existing visual field defects detected by standard perimetry, was able to differentiate between the three study groups with a clear distinction between normal patients and those with suspected glaucoma, and was able to detect early visual field changes not detected by standard perimetry. In addition, the distinction between normal and glaucoma patients was especially clear and significant using this analysis. Conclusion: The new hemifield sector analysis protocol used in mfVEP testing can be used to detect glaucomatous visual field defects in both glaucoma and glaucoma suspect patients. This protocol can provide information about focal visual field differences across the horizontal midline, which can be utilized to differentiate between glaucoma and normal subjects. The sensitivity and specificity of the mfVEP test showed very promising results and correlated with other anatomical changes in glaucomatous visual field loss. The intersector analysis protocol can detect early field changes not detected by the standard Humphrey Field Analyzer test. © 2013 Mousa et al, publisher and licensee Dove Medical Press Ltd.

Relevance:

30.00%

Publisher:

Abstract:

Bacterial lipoproteins have many important functions and represent a class of possible vaccine candidates. The prediction of lipoproteins from sequence is thus an important task for computational vaccinology. Naïve-Bayesian networks were trained to identify SPase II cleavage sites and their preceding signal sequences using a set of 199 distinct lipoprotein sequences. A comprehensive range of sequence models was used to identify the best model for lipoprotein signal sequences. The best performing sequence model was found to be 10 residues in length, comprising the conserved cysteine lipid attachment site and the nine residues preceding it. The sensitivity of prediction for LipPred was 0.979, while the specificity was 0.742. Here, we describe LipPred, a web server for lipoprotein prediction, available at the URL: http://www.jenner.ac.uk/LipPred/. LipPred is the most accurate method available for the detection of SPase II-cleaved lipoprotein signal sequences and the prediction of their cleavage sites.
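A minimal naive-Bayes scorer over a 10-residue window ending at a candidate cysteine, in the spirit of the sequence model described above; the position-specific residue probabilities here are random placeholders, not LipPred's trained parameters, and the example sequence is invented.

```python
import numpy as np

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"
WINDOW = 10          # nine residues before the cleavage site plus the conserved Cys

rng = np.random.default_rng(3)

# Placeholder position-specific residue probabilities (rows: window positions).
# A real model would estimate these from aligned lipoprotein signal sequences.
pos_probs = rng.dirichlet(np.ones(len(AMINO_ACIDS)), size=WINDOW)
background = np.full(len(AMINO_ACIDS), 1.0 / len(AMINO_ACIDS))

def naive_bayes_score(window):
    """Sum of per-position log-odds (positional independence = naive Bayes)."""
    assert len(window) == WINDOW
    idx = [AMINO_ACIDS.index(aa) for aa in window]
    return sum(np.log(pos_probs[i, j] / background[j]) for i, j in enumerate(idx))

def scan_sequence(seq):
    """Score every window whose final residue is a cysteine (candidate lipid site)."""
    return {i: naive_bayes_score(seq[i - WINDOW + 1:i + 1])
            for i in range(WINDOW - 1, len(seq)) if seq[i] == "C"}

print(scan_sequence("MKKLLIAGAVALLSACGSNSK"))   # invented example sequence
```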

Relevance:

30.00%

Publisher:

Abstract:

CONCLUSIONS: The new HSA protocol used in mfVEP testing can be applied to detect glaucomatous visual field defects in both glaucoma and glaucoma suspect patients. Using this protocol can provide information about focal visual field differences across the horizontal midline, which can be utilized to differentiate between glaucoma and normal subjects. Sensitivity and specificity of the mfVEP test showed very promising results and correlated with other anatomical changes in glaucomatous field loss. PURPOSE: The multifocal visual evoked potential (mfVEP) is a newly introduced method for objective visual field assessment. Several analysis protocols have been tested to identify early visual field losses in glaucoma patients using the mfVEP technique; some were successful in detecting field defects comparable to those found with standard automated perimetry (SAP), while others were not very informative and needed further adjustment and research. In this study we implemented a novel analysis approach and evaluated its validity and whether it could be used effectively for early detection of visual field defects in glaucoma. METHODS: Three groups were tested in this study: normal controls (38 eyes), glaucoma patients (36 eyes) and glaucoma suspect patients (38 eyes). All subjects had two standard Humphrey Field Analyzer (HFA) 24-2 tests and a single mfVEP test undertaken in one session. Analysis of the mfVEP results was done using the new analysis protocol, the hemifield sector analysis (HSA) protocol. Analysis of the HFA was done using the standard grading system. RESULTS: Analysis of the mfVEP results showed a statistically significant difference between the three groups in the mean signal-to-noise ratio (ANOVA, p < 0.001 with a 95% confidence interval). The differences between superior and inferior hemifields were statistically significant in all 11 sectors in the glaucoma patient group (t-test, p < 0.001), partially significant in 5/11 sectors in the glaucoma suspect group (t-test, p < 0.01), and not significant in most sectors of the normal group (1/11 sectors was significant; t-test, p < 0.9). Sensitivity and specificity of the HSA protocol in detecting glaucoma were 97% and 86%, respectively, and for glaucoma suspect patients the values were 89% and 79%, respectively.

Relevance:

30.00%

Publisher:

Abstract:

Congenital nystagmus (CN) is an ocular-motor disorder characterised by involuntary, conjugate ocular oscillations that can arise in the first months of life. The pathogenesis of congenital nystagmus is still under investigation. In general, CN patients show a considerable decrease in visual acuity: image fixation on the retina is disturbed by the continuous, mainly horizontal, nystagmus oscillations. However, image stabilisation is still achieved during the short periods in which eye velocity slows down while the target image is placed on the fovea (called foveation intervals). To quantify the extent of nystagmus, eye movement recordings are routinely employed, allowing physicians to extract and analyse the main features of the nystagmus, such as shape, amplitude and frequency. Using eye movement recordings, it is also possible to compute estimated visual acuity predictors: analytical functions which estimate expected visual acuity from signal features such as foveation time and foveation position variability. Use of these functions adds information to typical visual acuity measurements (e.g. the Landolt C test) and could support therapy planning or monitoring. This study focuses on robust detection of CN patients' foveations. Specifically, it proposes a method to recognize the exact signal tracts in which a subject foveates; the paper also analyses foveation sequences. About 50 eye-movement recordings, either infrared-oculographic or electro-oculographic, from different CN subjects were acquired. Results suggest that an exponential interpolation of the slow phases of nystagmus could improve foveation time computation and reduce the influence of braking saccades and data noise. Moreover, a concise description of foveation sequence variability can be achieved using non-fitting splines. © 2009 Springer Berlin Heidelberg.
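A simplified illustration of foveation-interval detection, flagging samples where eye velocity and position stay within thresholds for a minimum duration; the thresholds, sampling rate and waveform are invented, and the exponential-interpolation and spline analysis proposed in the paper are not reproduced.

```python
import numpy as np

def foveation_intervals(position_deg, fs_hz, vel_thresh=4.0, pos_thresh=0.5, min_ms=20):
    """Flag candidate foveation intervals: eye velocity below vel_thresh (deg/s)
    and eye position within pos_thresh (deg) of the target for at least min_ms."""
    velocity = np.gradient(position_deg) * fs_hz
    ok = (np.abs(velocity) < vel_thresh) & (np.abs(position_deg) < pos_thresh)

    intervals, start = [], None
    for i, flag in enumerate(np.append(ok, False)):   # sentinel closes a final run
        if flag and start is None:
            start = i
        elif not flag and start is not None:
            if (i - start) / fs_hz * 1000 >= min_ms:
                intervals.append((start, i))
            start = None
    return intervals

fs = 500                                  # sampling rate, Hz
t = np.arange(0, 1, 1 / fs)
eye = np.sin(2 * np.pi * 3 * t) ** 3      # crude waveform with flat foveation-like tracts
print(foveation_intervals(eye, fs))
```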

Relevance:

30.00%

Publisher:

Abstract:

The multifunctional properties of carbon nanotubes (CNTs) make them a powerful platform for unprecedented innovations in a variety of practical applications. As a result of the surging growth of nanotechnology, nanotubes present a potential problem as an environmental pollutant, and as such, an efficient method for their rapid detection must be established. Here, we propose a novel type of ionic sensor complex for detecting CNTs – an organic dye that responds sensitively and selectively to CNTs with a photoluminescent signal. The complexes are formed through Coulomb attractions between dye molecules with uncompensated charges and CNTs covered with an ionic surfactant in water. We demonstrate that the photoluminescent excitation of the dye can be transferred to the nanotubes, resulting in selective and strong amplification (up to a factor of 6) of the light emission from the excitonic levels of CNTs in the near-infrared spectral range, as experimentally observed via excitation-emission photoluminescence (PL) mapping. The chirality of the nanotubes and the type of ionic surfactant used to disperse the nanotubes both strongly affect the amplification; thus, the complexation provides sensing selectivity towards specific CNTs. Additionally, neither similar uncharged dyes nor CNTs covered with neutral surfactant form such complexes. As model organic molecules, we use a family of polymethine dyes with an easily tailorable molecular structure and, consequently, tunable absorbance and PL characteristics. This provides us with a versatile tool for the controllable photonic and electronic engineering of an efficient probe for CNT detection.