Abstract:
We propose a robust, low-complexity scheme to estimate and track carrier frequency from signals received under low signal-to-noise ratio (SNR) conditions in highly nonstationary channels. These scenarios arise in planetary exploration missions subject to high dynamics, such as the Mars exploration rover missions. The method comprises a bank of adaptive linear predictors (ALP) supervised by a convex combiner that dynamically aggregates the individual predictors. The adaptive combination is able to outperform the best individual estimator in the set, which leads to a universal scheme for frequency estimation and tracking. A simple technique for bias compensation considerably improves the ALP performance. It is also shown that retrieving the frequency content with a fast Fourier transform (FFT)-search method, instead of only inspecting the angle of a particular root of the error predictor filter, enhances performance, particularly at very low SNR levels. Simple techniques that enforce frequency continuity further improve the overall performance. In summary, we show through extensive simulations that adaptive linear prediction methods yield a robust and competitive frequency tracking technique.
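As a rough illustration of the convex-combination idea, the sketch below (with invented parameters; it is not the paper's implementation) combines two one-tap complex LMS predictors with different step sizes through a sigmoid-parameterized mixing weight adapted on the combined prediction error. For brevity it reads the frequency from the angle of the combined weight rather than performing the FFT-search described above.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic complex sinusoid with a slowly drifting frequency at low SNR.
N = 4000
f = 0.1 + 0.02 * np.sin(2 * np.pi * np.arange(N) / N)   # normalized frequency
x = np.exp(1j * 2 * np.pi * np.cumsum(f))
x += 0.5 * (rng.standard_normal(N) + 1j * rng.standard_normal(N))

mus = np.array([0.005, 0.05])   # slow and fast LMS step sizes
w = np.zeros(2, complex)        # one-tap predictor weight per branch
a, mu_a = 0.0, 1.0              # combiner parameter and its step size
f_hat = np.zeros(N)

for n in range(1, N):
    y = w * x[n - 1]                      # branch predictions
    e = x[n] - y                          # branch errors
    w += mus * np.conj(x[n - 1]) * e      # complex LMS updates
    lam = 1.0 / (1.0 + np.exp(-a))        # convex weight via a sigmoid
    e_c = x[n] - (lam * y[0] + (1 - lam) * y[1])   # combined error
    # Gradient step on |e_c|^2 with respect to the combiner parameter.
    a += mu_a * np.real(np.conj(e_c) * (y[0] - y[1])) * lam * (1 - lam)
    w_c = lam * w[0] + (1 - lam) * w[1]
    f_hat[n] = np.angle(w_c) / (2 * np.pi)   # instantaneous frequency estimate
```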
Abstract:
A narrow absorption feature in an atomic or molecular gas (such as iodine or methane) is used as the frequency reference in many stabilized lasers. As part of the stabilization scheme, an optical frequency dither is applied to the laser. In optical heterodyne experiments this dither is transferred to the RF beat signal, reducing the spectral power density and hence the signal-to-noise ratio relative to that in the absence of dither. We removed the dither by mixing the raw beat signal with a dithered local oscillator signal. When the dither waveform is matched to that of the reference laser, the output signal from the mixer is rendered dither-free. Applying this method to a Winters iodine-stabilized helium-neon laser reduced the bandwidth of the beat signal from 6 MHz to 390 kHz, thereby lowering the detection threshold from 5 pW of laser power to 3 pW. In addition, a simple signal detection model is developed which predicts similar threshold reductions.
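The cancellation rests on a product identity: mixing two carriers that share the same dither phase produces a difference term in which that phase cancels. A minimal numerical sketch (illustrative frequencies and deviations only, not the Winters laser's actual parameters):

```python
import numpy as np
from scipy.signal import butter, filtfilt

fs = 1e6                                  # sample rate, Hz
t = np.arange(0, 0.02, 1 / fs)
f_beat, f_lo, f_dither, dev = 120e3, 100e3, 1e3, 3e3

# Common FM dither phase, transferred from the reference laser.
phase_d = (dev / f_dither) * np.sin(2 * np.pi * f_dither * t)
beat = np.cos(2 * np.pi * f_beat * t + phase_d)   # dithered RF beat signal
lo = np.cos(2 * np.pi * f_lo * t + phase_d)       # matched dithered LO

# cos(A)cos(B) = [cos(A-B) + cos(A+B)]/2: the difference term at
# f_beat - f_lo carries no dither; a low-pass filter keeps only it.
mixed = beat * lo
b, a = butter(4, 1.5 * (f_beat - f_lo) / (fs / 2))
clean = filtfilt(b, a, mixed)             # dither-free beat at 20 kHz
```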
Abstract:
Acute acoustic trauma (AAT) is a sudden sensorineural hearing loss caused by exposure of the hearing organ to acoustic overstimulation, typically an intense sound impulse. Hyperbaric oxygen therapy (HOT), which favors repair of the microcirculation, can potentially be used to treat it. Hence, this study aimed to assess the effects of HOT on guinea pigs exposed to acoustic trauma. Fifteen guinea pigs were exposed to noise in the 4-kHz range at an intensity of 110 dB sound pressure level for 72 h. They were assessed by brainstem auditory evoked potential (BAEP) and by distortion product otoacoustic emission (DPOAE) before and after exposure and after HOT at 2.0 absolute atmospheres for 1 h. The cochleae were then analyzed using scanning electron microscopy (SEM). There was a statistically significant difference in the signal-to-noise ratio of the DPOAE amplitudes for the 1- to 4-kHz frequencies, and the SEM findings revealed damaged outer hair cells (OHC) after exposure to noise, with recovery after HOT (p = 0.0159); no such change occurred in BAEP thresholds and amplitudes (p = 0.1593). The electrophysiological BAEP data did not demonstrate effectiveness of HOT against AAT damage. However, there was improvement in the anatomical pattern of damage detected by SEM, with a significant reduction in the number of injured cochlear OHC, and in their functionality as detected by DPOAE.
Abstract:
Subtractive imaging in confocal fluorescence light microscopy is based on subtracting a suitably weighted widefield image from a confocal image. An approximation to a widefield image can be obtained by detection with an opened confocal pinhole. The subtraction enhances resolution both in-plane and along the optic axis. Due to the linearity of the approach, the effect of subtractive imaging in Fourier space corresponds to a reduction of low-spatial-frequency contributions, leading to a relative enhancement of the high frequencies. Along the optic axis this also results in improved sectioning. Image processing can achieve a similar effect; however, a 3D volume dataset must be acquired and processed, yielding a result essentially identical to subtractive imaging but superior in signal-to-noise ratio. The latter can be increased further with the technique of weighted averaging in Fourier space. A comparison of 2D and 3D experimental data analysed with subtractive imaging, the equivalent Fourier-space processing of the confocal data only, and Fourier-space weighted averaging is presented.
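Because the operation is linear, it can be written either in real space or as a filter on the optical transfer functions. A minimal sketch of the real-space form (the weighting gamma is a free parameter, not a value from the paper):

```python
import numpy as np

def subtractive_image(confocal, open_pinhole, gamma=0.5):
    """Subtract a weighted open-pinhole (approximate widefield) image
    from the confocal image; clip because negative intensities are
    not physical."""
    return np.clip(confocal - gamma * open_pinhole, 0.0, None)

# Fourier-space view of the same operation: the effective transfer
# function is OTF_confocal - gamma * OTF_widefield, so low spatial
# frequencies (where the widefield OTF is strongest) are suppressed,
# enhancing high frequencies relatively.
```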
Abstract:
Aim - A quantitative primary study to determine whether increasing source-to-image distance (SID), with and without the use of automatic exposure control (AEC), reduces dose for antero-posterior (AP) pelvis imaging whilst still producing an image of diagnostic quality. Methods - Using a computed radiography (CR) system, an anthropomorphic pelvic phantom was positioned for an AP examination using the table bucky. SID was initially set at 110 cm, with tube potential set at a constant 75 kVp, the two outer chambers selected, and a fine focal spot of 0.6 mm. SID was then varied from 90 cm to 140 cm, with two exposures made at each 5 cm interval: one using the AEC and another with a constant 16 mAs derived from the initial exposure. Effective dose (E) and entrance surface dose (ESD) were calculated for each acquisition. Seven experienced observers blindly graded image quality using a 5-point Likert scale and 2-Alternative Forced Choice software. Signal-to-noise ratio (SNR) was calculated for comparison. For each acquisition, femoral head diameter was also measured as an indication of magnification. Results - When increasing SID from 110 cm to 140 cm, E and ESD reduced by 3.7% and 17.3% respectively when using AEC, and by 50.13% and 41.79% respectively when the constant mAs was used. No statistically significant difference (t-test, p = 0.967) in image quality was detected when increasing SID, with an intra-observer correlation of 0.77 (95% confidence level). SNR reduced slightly with increasing SID, both with AEC (38%) and without (36%). Conclusion - For CR, increasing SID significantly reduces both E and ESD for AP pelvis imaging without adversely affecting image quality.
Abstract:
Electrocardiographic (ECG) signals are emerging as a recent trend in the field of biometrics. In this paper, we propose a novel ECG biometric system that combines clustering and classification methodologies. Our approach is based on dominant-set clustering and provides a framework for outlier removal and template selection. It enhances the typical workflows, making them better suited to new ECG acquisition paradigms that use fingers or hand palms, which lead to signals with a lower signal-to-noise ratio that are more prone to noise artifacts. Preliminary results show the potential of the approach, helping to further validate highly usable setups and ECG signals as a complementary biometric modality.
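Dominant-set clustering extracts a maximally cohesive subset of a similarity graph, which is what makes it usable for joint outlier removal and template selection. A minimal sketch of the standard replicator-dynamics formulation (function name and thresholding are illustrative, not the paper's code):

```python
import numpy as np

def dominant_set(A, iters=500, tol=1e-8):
    """Find a dominant set of a nonnegative similarity matrix A
    (zero diagonal) via replicator dynamics; entries of x with
    appreciable weight form the cluster, the rest are outliers."""
    n = A.shape[0]
    x = np.full(n, 1.0 / n)
    for _ in range(iters):
        x_new = x * (A @ x)
        total = x_new.sum()
        if total == 0:
            break
        x_new /= total
        if np.abs(x_new - x).sum() < tol:
            return x_new
        x = x_new
    return x

# Usage sketch: A[i, j] = similarity between segmented heartbeats i and j;
# beats with weight above a small threshold become the enrollment templates,
# low-weight beats (noise artifacts) are discarded.
```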
Abstract:
Hyperspectral remote sensing exploits the electromagnetic scattering patterns of the different materials at specific wavelengths [2, 3]. Hyperspectral sensors have been developed to sample the scattered portion of the electromagnetic spectrum extending from the visible region through the near-infrared and mid-infrared, in hundreds of narrow contiguous bands [4, 5]. The number and variety of potential civilian and military applications of hyperspectral remote sensing is enormous [6, 7]. Very often, the resolution cell corresponding to a single pixel in an image contains several substances (endmembers) [4]. In this situation, the scattered energy is a mixture of the endmember spectra. A challenging task underlying many hyperspectral imagery applications is then decomposing a mixed pixel into a collection of reflectance spectra, called endmember signatures, and the corresponding abundance fractions [8-10]. Depending on the mixing scales at each pixel, the observed mixture is either linear or nonlinear [11, 12]. The linear mixing model holds approximately when the mixing scale is macroscopic [13] and there is negligible interaction among distinct endmembers [3, 14]. If, however, the mixing scale is microscopic (intimate mixtures) [15, 16] and the incident solar radiation is scattered by the scene through multiple bounces involving several endmembers [17], the linear model is no longer accurate. Linear spectral unmixing has been intensively researched in recent years [9, 10, 12, 18-21]. It considers that a mixed pixel is a linear combination of endmember signatures weighted by the corresponding abundance fractions. Under this model, and assuming that the number of substances and their reflectance spectra are known, hyperspectral unmixing is a linear problem for which many solutions have been proposed (e.g., maximum likelihood estimation [8], spectral signature matching [22], spectral angle mapper [23], subspace projection methods [24, 25], and constrained least squares [26]). In most cases, the number of substances and their reflectances are not known, and hyperspectral unmixing then falls into the class of blind source separation problems [27]. Independent component analysis (ICA) has recently been proposed as a tool to blindly unmix hyperspectral data [28-31]. ICA is based on the assumption of mutually independent sources (abundance fractions), which is not the case for hyperspectral data, since the sum of the abundance fractions is constant, implying statistical dependence among them. This dependence compromises the applicability of ICA to hyperspectral images, as shown in Refs. [21, 32]. In fact, ICA finds the endmember signatures by multiplying the spectral vectors by an unmixing matrix that minimizes the mutual information among sources. If the sources are independent, ICA provides the correct unmixing, since the minimum of the mutual information is obtained only when the sources are independent. This is no longer true for dependent abundance fractions; nevertheless, some endmembers may be approximately unmixed. These aspects are addressed in Ref. [33]. Under the linear mixing model, the observations from a scene lie in a simplex whose vertices correspond to the endmembers. Several approaches [34-36] have exploited this geometric feature of hyperspectral mixtures [35]. The minimum volume transform (MVT) algorithm [36] determines the simplex of minimum volume containing the data. The method presented in Ref. [37] is also of MVT type but, by introducing the notion of bundles, it takes into account the endmember variability usually present in hyperspectral mixtures.
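A minimal sketch of the linear mixing model just described (a hypothetical helper, with noise added at a chosen SNR); the nonnegativity and sum-to-one constraints on the abundances are what confine the noiseless observations to a simplex whose vertices are the endmember signatures:

```python
import numpy as np

def linear_mixture(M, a, snr_db=30.0, rng=None):
    """Observed pixel under the linear mixing model: x = M @ a + n.
    M: (n_bands, n_endmembers) endmember signatures;
    a: abundance fractions, a >= 0 and sum(a) == 1."""
    rng = rng or np.random.default_rng()
    assert np.all(a >= 0) and np.isclose(a.sum(), 1.0)
    x = M @ a
    sigma = np.linalg.norm(x) / np.sqrt(x.size) * 10 ** (-snr_db / 20)
    return x + sigma * rng.standard_normal(x.size)
```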
The MVT-type approaches are computationally complex. Usually, these algorithms first find the convex hull defined by the observed data and then fit a minimum volume simplex to it. For example, the gift wrapping algorithm [38] computes the convex hull of n data points in a d-dimensional space with a computational complexity of O(n^(⌊d/2⌋+1)), where ⌊x⌋ denotes the greatest integer less than or equal to x and n is the number of samples. The complexity of the method presented in Ref. [37] is even higher, since the temperature of the simulated annealing algorithm used must follow a logarithmic law [39] to assure convergence (in probability) to the desired solution. Aiming at a lower computational complexity, some algorithms such as the pixel purity index (PPI) [35] and N-FINDR [40] still find the minimum volume simplex containing the data cloud, but they assume the presence of at least one pure pixel of each endmember in the data. This is a strong requirement that may not hold in some data sets; in any case, these algorithms find the set of most pure pixels in the data. The PPI algorithm uses the minimum noise fraction (MNF) [41] as a preprocessing step to reduce dimensionality and to improve the signal-to-noise ratio (SNR). The algorithm then projects every spectral vector onto skewers (a large number of random vectors) [35, 42, 43]. The points corresponding to the extremes for each skewer direction are stored, and a cumulative account records the number of times each pixel (i.e., a given spectral vector) is found to be an extreme. The pixels with the highest scores are the purest ones. The N-FINDR algorithm [40] is based on the fact that in p spectral dimensions, the p-volume defined by a simplex formed by the purest pixels is larger than the volume defined by any other combination of pixels. This algorithm finds the set of pixels defining the largest volume by inflating a simplex inside the data.
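A minimal sketch of the PPI scoring step described above (illustrative only; the MNF preprocessing is assumed to have been applied already):

```python
import numpy as np

def ppi_scores(X, n_skewers=10_000, rng=None):
    """Pixel purity index scores. X: (n_pixels, n_bands) spectral
    vectors, typically after MNF dimensionality reduction. Each pixel's
    score counts how often it is an extreme along a random skewer."""
    rng = rng or np.random.default_rng()
    scores = np.zeros(X.shape[0], dtype=int)
    skewers = rng.standard_normal((X.shape[1], n_skewers))
    proj = X @ skewers                         # (n_pixels, n_skewers)
    np.add.at(scores, proj.argmax(axis=0), 1)  # extreme at the max end
    np.add.at(scores, proj.argmin(axis=0), 1)  # extreme at the min end
    return scores                              # highest scores = purest pixels
```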
ORASIS [44, 45] is a hyperspectral framework developed by the U.S. Naval Research Laboratory consisting of several algorithms organized in six modules, including an exemplar selector, an adaptive learner, a demixer, a knowledge base or spectral library, and a spatial postprocessor. The first step consists in flat-fielding the spectra. Next, the exemplar selection module is used to select spectral vectors that best represent the smaller convex cone containing the data; the other pixels are rejected when the spectral angle distance (SAD) is less than a given threshold. The procedure finds the basis for a subspace of lower dimension using a modified Gram-Schmidt orthogonalization. The selected vectors are then projected onto this subspace, and a simplex is found by an MVT process. ORASIS is oriented to real-time target detection from uncrewed air vehicles using hyperspectral data [46]. In this chapter we develop a new algorithm to unmix linear mixtures of endmember spectra. First, the algorithm determines the number of endmembers and the signal subspace using a newly developed concept [47, 48]. Second, the algorithm extracts the most pure pixels present in the data. Unlike other methods, this algorithm is completely automatic and unsupervised. To estimate the number of endmembers and the signal subspace in hyperspectral linear mixtures, the proposed scheme begins by estimating the signal and noise correlation matrices; the latter estimate is based on multiple regression theory. The signal subspace is then identified by selecting the set of signal eigenvalues that best represents the data in the least-squares sense [48, 49]. We note, however, that VCA works with both projected and unprojected data. The extraction of the endmembers exploits two facts: (1) the endmembers are the vertices of a simplex, and (2) the affine transformation of a simplex is also a simplex. Like the PPI and N-FINDR algorithms, VCA also assumes the presence of pure pixels in the data. The algorithm iteratively projects the data onto a direction orthogonal to the subspace spanned by the endmembers already determined; the new endmember signature corresponds to the extreme of the projection. The algorithm iterates until all endmembers are found. VCA performs much better than PPI and better than or comparably to N-FINDR, yet it has a computational complexity between one and two orders of magnitude lower than N-FINDR. The chapter is structured as follows. Section 19.2 describes the fundamentals of the proposed method. Section 19.3 and Section 19.4 evaluate the proposed algorithm using simulated and real data, respectively. Section 19.5 presents some concluding remarks.
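A minimal sketch of the projection iteration at the heart of VCA, as described above (simplified: real VCA chooses between projected and unprojected data depending on SNR and applies an affine projection; function and variable names here are illustrative):

```python
import numpy as np

def vca_like(X, p, rng=None):
    """Iteratively pick p candidate endmembers from X (n_bands, n_pixels):
    project the data onto a direction orthogonal to the span of the
    endmembers found so far and keep the extreme of the projection."""
    rng = rng or np.random.default_rng()
    E = np.zeros((X.shape[0], 0))        # endmember signatures found so far
    indices = []
    for _ in range(p):
        f = rng.standard_normal(X.shape[0])
        if E.shape[1] > 0:
            # Remove the component of f lying in span(E).
            f -= E @ np.linalg.lstsq(E, f, rcond=None)[0]
        k = int(np.argmax(np.abs(f @ X)))   # extreme of the projection
        indices.append(k)
        E = np.column_stack([E, X[:, k]])
    return indices
```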
Abstract:
Recently, the spin-echo full-intensity acquired localized (SPECIAL) spectroscopy technique was proposed to combine the advantages of short TEs, on the order of milliseconds, with full sensitivity, and was applied to rat brain in vivo. In the present study, SPECIAL was adapted and optimized for use on a clinical platform at 3T and 7T by combining interleaved water suppression (WS) and outer volume saturation (OVS), optimized sequence timing, and improved shimming using FASTMAP. High-quality single-voxel spectra of human brain were acquired at TEs at or below 6 ms on clinical 3T and 7T systems for six volunteers. Narrow linewidths (6.6 +/- 0.6 Hz at 3T and 12.1 +/- 1.0 Hz at 7T for water) and the high signal-to-noise ratio (SNR) of the artifact-free spectra enabled the quantification of a neurochemical profile consisting of 18 metabolites with Cramér-Rao lower bounds (CRLBs) below 20% at both field strengths. The enhanced sensitivity and increased spectral resolution at 7T compared to 3T allowed a two-fold reduction in scan time, increased precision of quantification for 12 metabolites, and the additional quantification of lactate with a CRLB below 20%. Improved sensitivity at 7T was also demonstrated by a 1.7-fold increase in average SNR (peak height divided by the root mean square [RMS] of the noise) per unit time.
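For clarity, the SNR definition used above (peak height over the RMS of the noise) can be written directly; a small sketch, assuming a magnitude spectrum and a user-chosen signal-free region:

```python
import numpy as np

def spectral_snr(spectrum, noise_slice):
    """SNR = peak height / RMS of the noise, with the noise RMS taken
    from a signal-free region of the spectrum (noise_slice)."""
    noise = spectrum[noise_slice]
    rms = np.sqrt(np.mean((noise - noise.mean()) ** 2))
    return spectrum.max() / rms
```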
Abstract:
RATIONALE AND OBJECTIVES: Recent developments in magnetic resonance imaging have enabled free-breathing coronary MRA (cMRA) using steady-state free precession (SSFP) for endogenous contrast. The purpose of this study was a systematic comparison of SSFP cMRA with standard T2-prepared gradient-echo and spiral cMRA. METHODS: Navigator-gated free-breathing T2-prepared SSFP, T2-prepared gradient-echo, and T2-prepared spiral cMRA were performed in 18 healthy swine (45-68 kg body weight). Image quality was assessed subjectively, and signal-to-noise ratio (SNR), contrast-to-noise ratio (CNR), and vessel sharpness were compared. RESULTS: SSFP cMRA allowed high-quality cMRA during free breathing, with substantial improvements in SNR, CNR, and vessel sharpness compared with standard T2-prepared gradient-echo imaging. Spiral imaging demonstrated the highest SNR, while image quality score and vessel definition were best for SSFP imaging. CONCLUSION: Navigator-gated free-breathing T2-prepared SSFP cMRA is a promising new imaging approach for high-signal and high-contrast imaging of the coronary arteries with improved vessel border definition.
Abstract:
PURPOSE: As the magnetic susceptibility-induced frequency shift increases linearly with magnetic field strength, the present work evaluates manganese as a phase imaging contrast agent and investigates the dose dependence of brain enhancement in comparison to T1-weighted imaging after intravenous administration of MnCl2. METHODS: Experiments were carried out on 12 Sprague-Dawley rats. MnCl2 was infused intravenously at the following doses: 25, 75, and 125 mg/kg (n = 4 each). Phase images, T1-weighted images, and T1 maps were acquired before and 24 h after MnCl2 administration at 14.1 Tesla. RESULTS: Manganese enhancement was manifested in phase imaging by an increase in frequency shift differences between regions rich in calcium-gated channels and other tissues, together with a local increase in signal-to-noise ratio (from the T1 reduction). This contrast improvement allowed better visualization of brain cytoarchitecture. The T1 decreases measured across manganese doses and brain regions were consistent with the increase in the contrast-to-noise ratio (CNR) measured by both T1-weighted and phase imaging, with the strongest variations observed in the dentate gyrus and olfactory bulb. CONCLUSION: Overall, given its high sensitivity to manganese combined with excellent CNR, phase imaging is a promising alternative imaging protocol for assessing manganese-enhanced MRI at ultra-high field.
Abstract:
We analyze the consequences that the choice of the system's output has for the efficiency of signal detection. It is shown that the output signal and the signal-to-noise ratio (SNR), used to characterize the phenomenon of stochastic resonance, depend strongly on the form of the output. In particular, the SNR may be enhanced by an adequate choice of output.
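In stochastic resonance studies the SNR is usually read off the power spectrum: the power in the bin at the drive frequency over the local noise background. A small sketch of that estimator (details such as the background window are illustrative), which can be applied to different candidate outputs of the same system, e.g. x(t) versus sign(x(t)), to see how the choice of output changes the measured SNR:

```python
import numpy as np

def spectral_snr(y, f0, fs, half_window=20):
    """SNR at the drive frequency f0: periodogram power in the signal
    bin divided by the median power of the surrounding bins, which
    estimates the local noise background."""
    psd = np.abs(np.fft.rfft(y - np.mean(y))) ** 2
    freqs = np.fft.rfftfreq(len(y), 1.0 / fs)
    k = int(np.argmin(np.abs(freqs - f0)))
    lo, hi = max(k - half_window, 1), k + half_window
    background = np.median(np.concatenate([psd[lo:k], psd[k + 1:hi]]))
    return psd[k] / background
```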
Abstract:
We use temperature tuning to control signal propagation in simple one-dimensional arrays of masses connected by hard anharmonic springs and with no local potentials. In our numerical model a sustained signal is applied at one site of a chain immersed in a thermal environment and the signal-to-noise ratio is measured at each oscillator. We show that raising the temperature can lead to enhanced signal propagation along the chain, resulting in thermal resonance effects akin to the resonance observed in arrays of bistable systems.
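A minimal sketch of the kind of numerical model described (all parameters invented): Langevin dynamics of a chain with purely harmonic-plus-hard-quartic interparticle springs and no on-site potential, driven sinusoidally at one end. The resulting trajectories can be fed to a spectral SNR estimator like the one sketched above to measure signal propagation site by site.

```python
import numpy as np

def simulate_chain(n=32, temp=0.5, steps=200_000, dt=0.01, f0=0.05,
                   amp=0.3, gamma=0.1, k2=1.0, k4=1.0, rng=None):
    """Euler-Maruyama integration of a 1D chain of unit masses coupled
    by hard anharmonic springs V(r) = k2*r^2/2 + k4*r^4/4, immersed in
    a thermal bath; a sustained sinusoidal force acts on site 0.
    Returns the (steps, n) array of displacements."""
    rng = rng or np.random.default_rng()
    x, v = np.zeros(n), np.zeros(n)
    noise_scale = np.sqrt(2.0 * gamma * temp / dt)
    traj = np.empty((steps, n))
    for s in range(steps):
        d = x[1:] - x[:-1]                    # bond extensions
        tension = k2 * d + k4 * d ** 3        # hard anharmonic springs
        force = np.zeros(n)
        force[:-1] += tension                 # pull from the right bond
        force[1:] -= tension                  # pull from the left bond
        force[0] += amp * np.sin(2 * np.pi * f0 * s * dt)  # sustained signal
        force += -gamma * v + noise_scale * rng.standard_normal(n)
        v += force * dt
        x += v * dt
        traj[s] = x
    return traj
```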
Abstract:
Background: The b-value is the parameter characterizing the intensity of the diffusion weighting during image acquisition. Data acquisition is usually performed with a low b-value (b ~ 1000 s/mm2). Evidence shows that high b-values (b > 2000 s/mm2) are more sensitive to the slow diffusion compartment (SDC) and may be more sensitive in detecting white matter (WM) anomalies in schizophrenia. Methods: 12 male patients with schizophrenia (mean age 35 +/- 3 years) and 16 healthy male controls matched for age were scanned with a low b-value (1000 s/mm2) and a high b-value (4000 s/mm2) protocol. The apparent diffusion coefficient (ADC) is a measure of the average diffusion distance of water molecules per unit time (mm2/s). ADC maps were generated for all individuals. Eight regions of interest (frontal and parietal regions bilaterally, centrum semiovale bilaterally, and anterior and posterior corpus callosum) were manually traced blind to diagnosis. Results: ADC measures acquired with high b-value imaging were more sensitive in detecting differences between schizophrenia patients and healthy controls than low b-value imaging, with a gain in significance by a factor of 20-100 despite the lower image signal-to-noise ratio (SNR). Increased ADC was identified in patients' WM (p = 0.00015), with major contributions from the left and right centrum semiovale and, to a lesser extent, the right parietal region. Conclusions: Our results may be related to the sensitivity of high b-value imaging to the SDC, believed to reflect mainly the intra-axonal and myelin-bound water pool. High b-value imaging might be more sensitive and specific to WM anomalies in schizophrenia than low b-value imaging.
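For reference, under the usual monoexponential model S(b) = S(0)·exp(-b·ADC), an ADC map is computed voxelwise from two acquisitions; a small sketch (the guard epsilon is an implementation convenience, not from the paper):

```python
import numpy as np

def adc_map(s_b0, s_b, b):
    """Voxelwise ADC (mm^2/s when b is in s/mm^2) from an unweighted
    image s_b0 and a diffusion-weighted image s_b, assuming
    monoexponential decay: ADC = ln(s_b0 / s_b) / b."""
    eps = 1e-12                       # avoid log of / division by zero
    return np.log((s_b0 + eps) / (s_b + eps)) / b

# At b = 4000 s/mm^2 the residual signal is weighted toward the slow
# diffusion compartment, so the resulting ADC differs from the
# b = 1000 s/mm^2 estimate.
```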
Abstract:
In this paper we propose an endpoint detection system based on several features extracted from each speech frame, followed by a robust classifier (AdaBoost and bagging of decision trees, and a multilayer perceptron) and a finite state automaton (FSA). We compare the use of four different classifiers in this task. The FSA module consists of a 4-state decision logic that filters out false alarms. The look-ahead of the proposed method is 7 frames, the value that maximized the accuracy of the system. The system was tested with real signals recorded inside a car, with signal-to-noise ratios ranging from 6 dB to 30 dB. Finally, we present experimental results demonstrating that the system yields robust endpoint detection.
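The paper does not spell out the 4-state logic, so the sketch below is a hypothetical but typical hangover/hangbefore automaton of that size: short speech bursts never reach the SPEECH state (suppressing false alarms), and speech only ends after enough consecutive non-speech frames. State names and thresholds are invented; a real detector would also back-label the onset frames it buffered.

```python
SIL, ONSET, SPEECH, OFFSET = range(4)

def smooth(frame_labels, min_on=7, min_off=7):
    """Filter raw per-frame speech/non-speech classifier decisions with
    a 4-state automaton; returns the smoothed boolean decisions."""
    state, count, out = SIL, 0, []
    for is_speech in frame_labels:
        if state == SIL and is_speech:
            state, count = ONSET, 1
        elif state == ONSET:
            if is_speech:
                count += 1
                if count >= min_on:        # sustained onset -> speech
                    state = SPEECH
            else:
                state = SIL                # too short: a false alarm
        elif state == SPEECH and not is_speech:
            state, count = OFFSET, 1
        elif state == OFFSET:
            if not is_speech:
                count += 1
                if count >= min_off:       # sustained silence -> end
                    state = SIL
            else:
                state = SPEECH             # brief dip, still speech
        out.append(state in (SPEECH, OFFSET))
    return out
```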
Abstract:
The neuropathology of Alzheimer disease is characterized by senile plaques, neurofibrillary tangles, and cell death. These hallmarks develop according to the differential vulnerability of brain networks: senile plaques accumulate preferentially in the associative cortical areas, and neurofibrillary tangles in the entorhinal cortex and the hippocampus. We suggest that the main aetiological hypotheses, such as the beta-amyloid cascade hypothesis or its variant, the synaptic beta-amyloid hypothesis, will have to consider neural networks not just as targets of degenerative processes but also as contributors to the disease's progression and phenotype. Three domains of research are highlighted in this review. First, the cerebral reserve and the redundancy of the network's elements are related to brain vulnerability; indeed, an enriched environment appears to increase the cerebral reserve as well as the threshold of disease onset. Second, disease progression and memory performance cannot be explained by synaptic or neuronal loss alone, but also involve compensatory mechanisms, such as synaptic scaling, at the microcircuit level. Third, some phenotypes of Alzheimer disease, such as hallucinations, appear to be related to progressive dysfunction of neural networks as a result, for instance, of a decreased signal-to-noise ratio involving diminished activity of the cholinergic system. Overall, converging results from studies of biological as well as artificial neural networks lead to the conclusion that changes in neural networks contribute strongly to the progression of Alzheimer disease.