948 results for "Low signal-to-noise ratio regime"


Relevance: 100.00%

Abstract:

Imaging Spectroscopy (IS) is a promising tool for studying soil properties in large spatial domains. Going from point to image spectrometry is not only a journey from micro to macro scales, but also a long stage in which problems such as low signal-to-noise levels, atmospheric contamination, large data sets, the BRDF effect and more are often encountered. In this paper we provide an up-to-date overview of some of the case studies that have used IS technology for soil science applications. Besides a brief discussion on the advantages and disadvantages of IS for studying soils, the following cases are comprehensively discussed: soil degradation (salinity, erosion, and deposition), soil mapping and classification, soil genesis and formation, soil contamination, soil water content, and soil swelling. We review these case studies and suggest that the data be provided to the end-users as real reflectance rather than as raw data, and with better signal-to-noise ratios than presently exist. This is because converting the raw data into reflectance is a complicated stage that requires experience, knowledge, and specific infrastructures not available to many users, whereas quantitative spectral models require good-quality data. These limitations serve as a barrier that impedes potential end-users and inhibits researchers from trying this technique for their needs. The paper ends with a general call to the soil science audience to extend the utilization of the IS technique, and it provides some ideas on how to propel this technology forward to enable its widespread adoption in order to achieve a breakthrough in the field of soil science and remote sensing. (C) 2009 Elsevier Inc. All rights reserved.
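The abstract singles out the conversion of raw at-sensor data to reflectance as the step most end-users cannot perform themselves. As a rough illustration of the simplest version of that step only, the sketch below converts at-sensor radiance to apparent (top-of-atmosphere) reflectance with the standard formula; real soil applications additionally need full atmospheric correction, and all band values here are hypothetical.

```python
import numpy as np

def apparent_reflectance(radiance, esun, sun_zenith_deg, d_au=1.0):
    """Apparent (top-of-atmosphere) reflectance from at-sensor radiance.

    radiance       : at-sensor spectral radiance, W m^-2 sr^-1 um^-1
    esun           : exo-atmospheric solar irradiance per band, W m^-2 um^-1
    sun_zenith_deg : solar zenith angle, degrees
    d_au           : Earth-Sun distance, astronomical units
    """
    cos_theta = np.cos(np.deg2rad(sun_zenith_deg))
    return np.pi * radiance * d_au**2 / (esun * cos_theta)

# hypothetical two-band example
radiance = np.array([55.0, 30.0])
esun = np.array([1850.0, 1550.0])
print(apparent_reflectance(radiance, esun, sun_zenith_deg=35.0))
```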

Relevance: 100.00%

Abstract:

A narrow absorption feature in an atomic or molecular gas (such as iodine or methane) is used as the frequency reference in many stabilized lasers. As part of the stabilization scheme an optical frequency dither is applied to the laser. In optical heterodyne experiments, this dither is transferred to the RF beat signal, reducing the spectral power density and hence the signal to noise ratio over that in the absence of dither. We removed the dither by mixing the raw beat signal with a dithered local oscillator signal. When the dither waveform is matched to that of the reference laser the output signal from the mixer is rendered dither free. Application of this method to a Winters iodine-stabilized helium-neon laser reduced the bandwidth of the beat signal from 6 MHz to 390 kHz, thereby lowering the detection threshold from 5 pW of laser power to 3 pW. In addition, a simple signal detection model is developed which predicts similar threshold reductions.
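A minimal numerical sketch of the idea described here, not the authors' hardware: a beat note carrying a frequency dither is mixed with a local oscillator carrying a matched copy of the dither waveform, and the difference-frequency product comes out dither-free. All frequencies and deviations below are illustrative.

```python
import numpy as np

fs = 50e6                          # sample rate, Hz (illustrative)
t = np.arange(0, 2e-3, 1 / fs)
f_beat, f_lo = 5e6, 4e6            # beat-note and local-oscillator frequencies (hypothetical)
f_dither, f_dev = 1.1e3, 3e6       # dither rate and peak frequency deviation (hypothetical)

# frequency dither transferred from the reference laser to the RF beat signal
dither_phase = (f_dev / f_dither) * np.sin(2 * np.pi * f_dither * t)
beat = np.cos(2 * np.pi * f_beat * t + dither_phase)

# local oscillator carrying the same dither waveform
lo = np.cos(2 * np.pi * f_lo * t + dither_phase)

# cos(a + phi) * cos(b + phi) = 0.5*cos(a - b) + 0.5*cos(a + b + 2*phi):
# the difference-frequency term at f_beat - f_lo = 1 MHz carries no dither
mixed = beat * lo
spectrum = np.abs(np.fft.rfft(mixed * np.hanning(len(t))))
freqs = np.fft.rfftfreq(len(t), 1 / fs)
peak_idx = np.argmax(spectrum[freqs < 2e6])
print(f"dither-free difference product peaks at {freqs[peak_idx] / 1e6:.2f} MHz")
```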

Relevance: 100.00%

Abstract:

Acute acoustic trauma (AAT) is a sudden sensorineural hearing loss caused by exposure of the hearing organ to acoustic overstimulation, typically an intense sound impulse. Hyperbaric oxygen therapy (HOT), which favors repair of the microcirculation, can potentially be used to treat it. Hence, this study aimed to assess the effects of HOT on guinea pigs exposed to acoustic trauma. Fifteen guinea pigs were exposed to noise in the 4-kHz range at an intensity of 110 dB sound pressure level for 72 h. They were assessed by brainstem auditory evoked potential (BAEP) and by distortion product otoacoustic emission (DPOAE) before and after exposure and after HOT at 2.0 absolute atmospheres for 1 h. The cochleae were then analyzed using scanning electron microscopy (SEM). There was a statistically significant difference in the signal-to-noise ratio of the DPOAE amplitudes for the 1- to 4-kHz frequencies, and the SEM findings revealed damaged outer hair cells (OHC) after exposure to noise, with recovery after HOT (p = 0.0159); this did not occur for the BAEP thresholds and amplitudes (p = 0.1593). The electrophysiological BAEP data did not demonstrate effectiveness of HOT against AAT damage. However, there was improvement in the anatomical pattern of damage detected by SEM, with a significant reduction in the number of injured cochlear OHC and improvement in their functionality as detected by DPOAE.

Relevance: 100.00%

Abstract:

Objectives: (1) To establish test performance measures for Transient Evoked Otoacoustic Emission testing of 6-year-old children in a school setting; (2) To investigate whether Transient Evoked Otoacoustic Emission testing provides a more accurate and effective alternative to a pure tone screening plus tympanometry protocol. Methods: Pure tone screening, tympanometry and transient evoked otoacoustic emission data were collected from 940 subjects (1880 ears), with a mean age of 6.2 years. Subjects were tested in non-sound-treated rooms within 22 schools. Receiver operating characteristic curves along with specificity, sensitivity, accuracy and efficiency values were determined for a variety of transient evoked otoacoustic emission/pure tone screening/tympanometry comparisons. Results: The Transient Evoked Otoacoustic Emission failure rate for the group was 20.3%. The failure rate for pure tone screening was found to be 8.9%, whilst 18.6% of subjects failed a protocol consisting of combined pure tone screening and tympanometry results. In essence, findings from the comparison of overall Transient Evoked Otoacoustic Emission pass/fail with overall pure tone screening pass/fail suggested that use of a modified Rhode Island Hearing Assessment Project criterion would result in a very high probability that a child with a pass result has normal hearing (true negative). However, the hit rate was only moderate. Selection of a signal-to-noise ratio (SNR) criterion set at greater than or equal to 1 dB appeared to provide the best test performance measures for the range of SNR values investigated. Test performance measures generally declined when tympanometry results were included, with the exception of lower false alarm rates and higher positive predictive values. The exclusion of low frequency data from the Transient Evoked Otoacoustic Emission SNR versus pure tone screening analysis resulted in improved performance measures. Conclusions: The present study poses several implications for the clinical implementation of Transient Evoked Otoacoustic Emission screening for entry-level school children. Transient Evoked Otoacoustic Emission pass/fail criteria will require revision. The findings of the current investigation offer support to the possible replacement of pure tone screening with Transient Evoked Otoacoustic Emission testing for 6-year-old children. However, they do not suggest the replacement of the pure tone screening plus tympanometry battery. (C) 2001 Elsevier Science Ireland Ltd. All rights reserved.
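For context on the test performance measures reported above (sensitivity, specificity, accuracy and predictive values), the sketch below computes them from a 2x2 screen-versus-reference table; the counts are hypothetical and are not taken from the study.

```python
def screening_performance(tp, fp, fn, tn):
    """Standard test performance measures from a 2x2 table
    (screening result vs. reference standard)."""
    sensitivity = tp / (tp + fn)          # hit rate
    specificity = tn / (tn + fp)
    ppv = tp / (tp + fp)                  # positive predictive value
    npv = tn / (tn + fn)                  # negative predictive value
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    return sensitivity, specificity, ppv, npv, accuracy

# hypothetical counts for a TEOAE screen judged against a pure tone reference
print(screening_performance(tp=60, fp=120, fn=25, tn=1675))
```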

Relevance: 100.00%

Abstract:

Subtractive imaging in confocal fluorescence light microscopy is based on the subtraction of a suitably weighted widefield image from a confocal image. An approximation to a widefield image can be obtained by detection with an opened confocal pinhole. The subtraction of images enhances the resolution in-plane as well as along the optic axis. Due to the linearity of the approach, the effect of subtractive imaging in Fourier-space corresponds to a reduction of low spatial frequency contributions leading to a relative enhancement of the high frequencies. Along the direction of the optic axis this also results in an improved sectioning. Image processing can achieve a similar effect. However, a 3D volume dataset must be acquired and processed, yielding a result essentially identical to subtractive imaging but superior in signal-to-noise ratio. The latter can be increased further with the technique of weighted averaging in Fourier-space. A comparison of 2D and 3D experimental data analysed with subtractive imaging, the equivalent Fourier-space processing of the confocal data only, and Fourier-space weighted averaging is presented. (C) 2003 Elsevier Ltd. All rights reserved.
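A minimal synthetic sketch of the subtraction step described here: a sharper (confocal-like) and a more blurred (open-pinhole, widefield-like) image of the same object are combined as I_sub = I_confocal - gamma * I_open, which suppresses the low spatial frequencies relative to the high ones. The images, blur widths and weight gamma are all illustrative.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(0)
obj = np.zeros((256, 256))
obj[rng.integers(0, 256, 40), rng.integers(0, 256, 40)] = 1.0  # point-like test objects

# stand-ins for the two detection modes: the open-pinhole image is blurred more
confocal = gaussian_filter(obj, sigma=2.0)
open_pinhole = gaussian_filter(obj, sigma=4.0)

# weighted subtraction removes part of the broadly blurred (low-frequency) component,
# sharpening the effective point spread function
gamma = 0.5                                   # subtraction weight (illustrative)
subtractive = confocal - gamma * open_pinhole
```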

Relevance: 100.00%

Abstract:

Amorphous glass/ZnO-Al/p(a-Si:H)/i(a-Si:H)/n(a-Si1-xCx:H)/Al imagers with different n-layer resistivities were produced by the plasma-enhanced chemical vapour deposition (PE-CVD) technique. An image is projected onto the sensing element and leads to spatially confined depletion regions that can be read out by scanning the photodiode with a low-power modulated laser beam. The essence of the scheme is the analog readout, and the absence of semiconductor arrays or electrode potential manipulations to transfer the information coming from the transducer. The influence of the intensity of the optical image projected onto the sensor surface on the sensor output characteristics (sensitivity, linearity, blooming, resolution and signal-to-noise ratio) is analysed for different material compositions (0.5 < x < 1). The results show that the responsivity and the spatial resolution are limited by the conductivity of the doped layers. An enhancement of one order of magnitude in the image intensity signal and in the spatial resolution is achieved at 0.2 mW cm(-2) light flux by decreasing the n-layer conductivity by the same amount. A physical model supported by electrical simulation gives insight into the image-sensing technique used.

Relevance: 100.00%

Abstract:

Master's degree in Radiation Applied to Health Technologies. Area of specialization: Digital Imaging with X-Ray Radiation.

Relevance: 100.00%

Abstract:

Purpose - To develop and validate a psychometric scale for assessing image quality perception for chest X-ray images. Methods - Bandura's theory was used to guide scale development. A review of the literature was undertaken to identify items/factors which could be used to evaluate image quality using a perceptual approach. A draft scale was then created (22 items) and presented to a focus group (student and qualified radiographers). Within the focus group the draft scale was discussed and modified. A series of seven postero-anterior chest images were generated using a phantom with a range of image qualities. Image quality perception was confirmed for the seven images using signal-to-noise ratio (SNR 17.2–36.5). Participants (student and qualified radiographers and radiology trainees) were then invited to independently score each of the seven images using the draft image quality perception scale. Cronbach's alpha was used to test internal reliability. Results - Fifty-three participants used the scale to grade image quality perception on each of the seven images. The aggregated mean scale score increased with increasing SNR from 42.1 to 87.7 (r = 0.98, P < 0.001). For each of the 22 individual scale items there was clear differentiation of low, mid and high quality images. A Cronbach's alpha coefficient of >0.7 was obtained across each of the seven images. Conclusion - This study represents the first development of a chest image quality perception scale based on Bandura's theory. There was excellent correlation between the image quality perception scores derived using the scale and the SNR. Further research will involve a more detailed item and factor analysis.
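Since the scale's internal reliability is summarized by Cronbach's alpha, the sketch below shows the standard computation, alpha = k/(k-1) * (1 - sum of item variances / variance of the total score), on hypothetical ratings; it is not the study's data or software.

```python
import numpy as np

def cronbach_alpha(scores):
    """Cronbach's alpha for an (observers x items) matrix of scale scores."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]                           # number of scale items
    item_var = scores.var(axis=0, ddof=1).sum()   # sum of per-item variances
    total_var = scores.sum(axis=1).var(ddof=1)    # variance of the summed score
    return k / (k - 1) * (1 - item_var / total_var)

# hypothetical ratings: 10 observers scoring 5 items on a 1-5 scale
rng = np.random.default_rng(3)
base = rng.integers(2, 5, size=(10, 1))
ratings = np.clip(base + rng.integers(-1, 2, size=(10, 5)), 1, 5)
print(round(cronbach_alpha(ratings), 2))
```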

Relevance: 100.00%

Abstract:

Aim - A quantitative primary study to determine whether increasing source to image distance (SID), with and without the use of automatic exposure control (AEC) for antero-posterior (AP) pelvis imaging, reduces dose whilst still producing an image of diagnostic quality. Methods - Using a computed radiography (CR) system, an anthropomorphic pelvic phantom was positioned for an AP examination using the table bucky. SID was initially set at 110 cm, with tube potential set at a constant 75 kVp, with the two outer chambers selected and a fine focal spot of 0.6 mm. SID was then varied from 90 cm to 140 cm with two exposures made at each 5 cm interval, one using the AEC and another with a constant 16 mAs derived from the initial exposure. Effective dose (E) and entrance surface dose (ESD) were calculated for each acquisition. Seven experienced observers blindly graded image quality using a 5-point Likert scale and two-alternative forced choice (2AFC) software. Signal-to-noise ratio (SNR) was calculated for comparison. For each acquisition, femoral head diameter was also measured as an indication of magnification. Results - When increasing SID from 110 cm to 140 cm, E and ESD reduced by 3.7% and 17.3% respectively when using AEC, and by 50.13% and 41.79% respectively when the constant mAs was used. No statistically significant difference in image quality was detected when increasing SID (t-test, p = 0.967), with an intra-observer correlation of 0.77 (95% confidence level). SNR reduced slightly for both AEC (38%) and no AEC (36%) with increasing SID. Conclusion - For CR, increasing SID significantly reduces both E and ESD for AP pelvis imaging without adversely affecting image quality.
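The dose saving from a longer SID follows from simple geometry: the receptor air kerma scales roughly as mAs/SID^2, while the entrance surface dose scales as mAs divided by the square of the focus-to-skin distance, which grows faster in relative terms as the tube moves away. The sketch below works through that reasoning with hypothetical geometry (a 20 cm thick phantom, no table or receptor gap); it illustrates the principle rather than reproducing the study's dosimetry.

```python
def relative_esd(sid_old, sid_new, thickness_cm, keep_receptor_dose=True):
    """Relative entrance surface dose when moving from sid_old to sid_new (cm).

    ESD ~ mAs / FSD^2, with focus-to-skin distance FSD = SID - phantom thickness.
    If the receptor dose is held constant (AEC-like behaviour), mAs scales with SID^2.
    """
    fsd_old, fsd_new = sid_old - thickness_cm, sid_new - thickness_cm
    mas_factor = (sid_new / sid_old) ** 2 if keep_receptor_dose else 1.0
    return mas_factor * (fsd_old / fsd_new) ** 2

# hypothetical 20 cm thick pelvis phantom
print(relative_esd(110, 140, 20, keep_receptor_dose=True))   # ~0.91 -> roughly 9% lower ESD
print(relative_esd(110, 140, 20, keep_receptor_dose=False))  # ~0.56 -> roughly 44% lower ESD
```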

Relevance: 100.00%

Abstract:

Electrocardiographic (ECG) signals are emerging as a recent trend in the field of biometrics. In this paper, we propose a novel ECG biometric system that combines clustering and classification methodologies. Our approach is based on dominant-set clustering, and provides a framework for outlier removal and template selection. It enhances the typical workflows by making them better suited to new ECG acquisition paradigms that use fingers or hand palms, which lead to signals with a lower signal-to-noise ratio that are more prone to noise artifacts. Preliminary results show the potential of the approach, helping to further validate these highly usable setups and ECG signals as a complementary biometric modality.
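The paper's own method is dominant-set clustering; as a simplified stand-in for the general workflow it supports (cluster the heartbeats, discard outliers, keep a representative template), the sketch below scores segmented heartbeats by their average correlation with the others, drops low-scoring beats as outliers, and keeps the most central beat as the template. The threshold and the synthetic data are hypothetical.

```python
import numpy as np

def select_template(beats, min_avg_corr=0.7):
    """beats: (n_beats x n_samples) array of segmented, time-aligned heartbeats."""
    beats = np.asarray(beats, dtype=float)
    corr = np.corrcoef(beats)                 # pairwise similarity between beats
    np.fill_diagonal(corr, np.nan)
    avg_corr = np.nanmean(corr, axis=1)       # how well each beat agrees with the rest
    keep = avg_corr >= min_avg_corr           # outlier removal
    template_idx = np.flatnonzero(keep)[np.argmax(avg_corr[keep])]
    return beats[keep], beats[template_idx]

# hypothetical finger-ECG beats: a shared waveform plus noise, with one artifact beat
rng = np.random.default_rng(7)
base = np.sin(np.linspace(0, np.pi, 120)) ** 3
beats = base + 0.1 * rng.standard_normal((20, 120))
beats[5] = rng.standard_normal(120)           # artifact-dominated beat (outlier)
clean_beats, template = select_template(beats)
```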

Relevance: 100.00%

Abstract:

Hyperspectral remote sensing exploits the electromagnetic scattering patterns of the different materials at specific wavelengths [2, 3]. Hyperspectral sensors have been developed to sample the scattered portion of the electromagnetic spectrum extending from the visible region through the near-infrared and mid-infrared, in hundreds of narrow contiguous bands [4, 5]. The number and variety of potential civilian and military applications of hyperspectral remote sensing is enormous [6, 7]. Very often, the resolution cell corresponding to a single pixel in an image contains several substances (endmembers) [4]. In this situation, the scattered energy is a mixture of the endmember spectra. A challenging task underlying many hyperspectral imagery applications is then decomposing a mixed pixel into a collection of reflectance spectra, called endmember signatures, and the corresponding abundance fractions [8–10]. Depending on the mixing scales at each pixel, the observed mixture is either linear or nonlinear [11, 12]. The linear mixing model holds approximately when the mixing scale is macroscopic [13] and there is negligible interaction among distinct endmembers [3, 14]. If, however, the mixing scale is microscopic (or intimate mixtures) [15, 16] and the incident solar radiation is scattered by the scene through multiple bounces involving several endmembers [17], the linear model is no longer accurate. Linear spectral unmixing has been intensively researched in recent years [9, 10, 12, 18–21]. It considers that a mixed pixel is a linear combination of endmember signatures weighted by the corresponding abundance fractions. Under this model, and assuming that the number of substances and their reflectance spectra are known, hyperspectral unmixing is a linear problem for which many solutions have been proposed (e.g., maximum likelihood estimation [8], spectral signature matching [22], spectral angle mapper [23], subspace projection methods [24, 25], and constrained least squares [26]). In most cases, the number of substances and their reflectances are not known, and hyperspectral unmixing then falls into the class of blind source separation problems [27]. Independent component analysis (ICA) has recently been proposed as a tool to blindly unmix hyperspectral data [28–31]. ICA is based on the assumption of mutually independent sources (abundance fractions), which is not the case for hyperspectral data, since the sum of the abundance fractions is constant, implying statistical dependence among them. This dependence compromises ICA applicability to hyperspectral images, as shown in Refs. [21, 32]. In fact, ICA finds the endmember signatures by multiplying the spectral vectors with an unmixing matrix, which minimizes the mutual information among sources. If sources are independent, ICA provides the correct unmixing, since the minimum of the mutual information is obtained only when sources are independent. This is no longer true for dependent abundance fractions. Nevertheless, some endmembers may be approximately unmixed. These aspects are addressed in Ref. [33]. Under the linear mixing model, the observations from a scene are in a simplex whose vertices correspond to the endmembers. Several approaches [34–36] have exploited this geometric feature of hyperspectral mixtures [35]. The minimum volume transform (MVT) algorithm [36] determines the simplex of minimum volume containing the data. The method presented in Ref.
[37] is also of MVT type but, by introducing the notion of bundles, it takes into account the endmember variability usually present in hyperspectral mixtures. The MVT-type approaches are complex from the computational point of view. Usually, these algorithms first find the convex hull defined by the observed data and then fit a minimum volume simplex to it. For example, the gift wrapping algorithm [38] computes the convex hull of n data points in a d-dimensional space with a computational complexity of O(n^(⌊d/2⌋+1)), where ⌊x⌋ is the largest integer less than or equal to x and n is the number of samples. The complexity of the method presented in Ref. [37] is even higher, since the temperature of the simulated annealing algorithm used must follow a log(·) law [39] to assure convergence (in probability) to the desired solution. Aiming at a lower computational complexity, some algorithms such as the pixel purity index (PPI) [35] and N-FINDR [40] still find the minimum volume simplex containing the data cloud, but they assume the presence of at least one pure pixel of each endmember in the data. This is a strong requisite that may not hold in some data sets. In any case, these algorithms find the set of most pure pixels in the data. The PPI algorithm uses the minimum noise fraction (MNF) [41] as a preprocessing step to reduce dimensionality and to improve the signal-to-noise ratio (SNR). The algorithm then projects every spectral vector onto skewers (a large number of random vectors) [35, 42, 43]. The points corresponding to extremes, for each skewer direction, are stored. A cumulative account records the number of times each pixel (i.e., a given spectral vector) is found to be an extreme. The pixels with the highest scores are the purest ones. The N-FINDR algorithm [40] is based on the fact that in p spectral dimensions, the p-volume defined by a simplex formed by the purest pixels is larger than any other volume defined by any other combination of pixels. This algorithm finds the set of pixels defining the largest volume by inflating a simplex inside the data. ORASIS [44, 45] is a hyperspectral framework developed by the U.S. Naval Research Laboratory consisting of several algorithms organized in six modules, including an exemplar selector, an adaptive learner, a demixer, a knowledge base (spectral library), and a spatial postprocessor. The first step consists of flat-fielding the spectra. Next, the exemplar selection module is used to select spectral vectors that best represent the smaller convex cone containing the data. The other pixels are rejected when the spectral angle distance (SAD) is less than a given threshold. The procedure finds the basis for a subspace of lower dimension using a modified Gram–Schmidt orthogonalization. The selected vectors are then projected onto this subspace and a simplex is found by an MVT process. ORASIS is oriented to real-time target detection from uncrewed air vehicles using hyperspectral data [46]. In this chapter we develop a new algorithm to unmix linear mixtures of endmember spectra. First, the algorithm determines the number of endmembers and the signal subspace using a newly developed concept [47, 48]. Second, the algorithm extracts the most pure pixels present in the data. Unlike other methods, this algorithm is completely automatic and unsupervised. To estimate the number of endmembers and the signal subspace in hyperspectral linear mixtures, the proposed scheme begins by estimating the signal and noise correlation matrices.
The latter is based on multiple regression theory. The signal subspace is then identified by selecting the set of signal eigenvalues that best represents the data in the least-squares sense [48, 49]. We note, however, that VCA works with both projected and unprojected data. The extraction of the endmembers exploits two facts: (1) the endmembers are the vertices of a simplex and (2) the affine transformation of a simplex is also a simplex. Like the PPI and N-FINDR algorithms, VCA also assumes the presence of pure pixels in the data. The algorithm iteratively projects the data onto a direction orthogonal to the subspace spanned by the endmembers already determined. The new endmember signature corresponds to the extreme of the projection. The algorithm iterates until all endmembers are exhausted. VCA performs much better than PPI and better than or comparably to N-FINDR; yet it has a computational complexity between one and two orders of magnitude lower than N-FINDR. The chapter is structured as follows. Section 19.2 describes the fundamentals of the proposed method. Section 19.3 and Section 19.4 evaluate the proposed algorithm using simulated and real data, respectively. Section 19.5 presents some concluding remarks.
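To make the linear mixing model and the pure-pixel idea concrete, the sketch below builds a toy mixture Y = MA + noise and picks candidate endmembers by repeatedly selecting the pixel with the largest component orthogonal to those already chosen. It is a simplified stand-in under the pure-pixel assumption, not the VCA algorithm itself (no signal-subspace estimation or SNR-dependent projection), and all data are synthetic.

```python
import numpy as np

def extract_endmembers(Y, p):
    """Greedy pure-pixel search by orthogonal projections (simplified sketch)."""
    L, N = Y.shape                      # L bands, N pixels
    E = np.zeros((L, p))                # endmember signatures (columns)
    idx = []
    # start from the pixel with the largest norm (assumed to be nearly pure)
    j = int(np.argmax(np.linalg.norm(Y, axis=0)))
    for k in range(p):
        E[:, k] = Y[:, j]
        idx.append(j)
        # project the data onto the orthogonal complement of the chosen endmembers
        Q, _ = np.linalg.qr(E[:, :k + 1])
        resid = Y - Q @ (Q.T @ Y)
        j = int(np.argmax(np.linalg.norm(resid, axis=0)))
    return E, idx

# toy linear mixture Y = M A + noise, with abundances on the simplex
rng = np.random.default_rng(0)
L, p, N = 50, 3, 1000
M = rng.random((L, p))                        # true endmember spectra
A = rng.dirichlet(np.ones(p), size=N).T       # abundance fractions sum to one
Y = M @ A + 0.001 * rng.standard_normal((L, N))
E, idx = extract_endmembers(Y, p)             # returns the most pure pixels found
```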

Relevance: 100.00%

Abstract:

In this manuscript we tackle the problem of semidistributed user selection with distributed linear precoding for sum rate maximization in multiuser multicell systems. A set of adjacent base stations (BS) form a cluster in order to perform coordinated transmission to cell-edge users, and coordination is carried out through a central processing unit (CU). However, the message exchange between BSs and the CU is limited to scheduling control signaling and no user data or channel state information (CSI) exchange is allowed. In the considered multicell coordinated approach, each BS has its own set of cell-edge users and transmits only to one intended user, while interference to non-intended users at other BSs is suppressed by signal steering (precoding). We use two distributed linear precoding schemes, Distributed Zero Forcing (DZF) and Distributed Virtual Signal-to-Interference-plus-Noise Ratio (DVSINR). Considering multiple users per cell and the backhaul limitations, the BSs rely on local CSI to solve the user selection problem. First we investigate how the signal-to-noise ratio (SNR) regime and the number of antennas at the BSs impact the effective channel gain (the magnitude of the channels after precoding) and its relationship with multiuser diversity. Considering that user selection must be based on the type of implemented precoding, we develop metrics of compatibility (estimations of the effective channel gains) that can be computed from local CSI at each BS and reported to the CU for scheduling decisions. Based on such metrics, we design user selection algorithms that can find a set of users that potentially maximizes the sum rate. Numerical results show the effectiveness of the proposed metrics and algorithms for different configurations of users and antennas at the base stations.
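As a concrete reading of "effective channel gain after precoding" and of the distributed zero-forcing idea, the sketch below builds, from one BS's local CSI only, a beamforming vector that nulls the locally known non-intended users and then measures the resulting gain for the intended user. It is a generic zero-forcing illustration on synthetic channels, not the paper's DZF/DVSINR designs or its compatibility metrics; antenna and user counts are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(1)
Nt, K = 4, 3          # BS antennas and locally known cell-edge users (illustrative)
H = (rng.standard_normal((K, Nt)) + 1j * rng.standard_normal((K, Nt))) / np.sqrt(2)

# zero-forcing-style precoder for the intended user (row 0) that nulls the
# non-intended users known from local CSI (rows 1..K-1)
H_int = H[1:, :]                               # channels to be nulled
_, _, Vh = np.linalg.svd(H_int)                # right singular vectors
null_basis = Vh[H_int.shape[0]:, :].conj().T   # basis of the interference null space
w = null_basis @ (null_basis.conj().T @ H[0].conj())
w /= np.linalg.norm(w)                         # unit-power beamforming vector

eff_gain = np.abs(H[0] @ w) ** 2               # effective channel gain after precoding
print(f"effective channel gain: {eff_gain:.3f}")
```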

Relevance: 100.00%

Abstract:

Breast cancer is the most common type of cancer worldwide. The effectiveness of its treatment depends on early stage detection, as well as on the accuracy of its diagnosis. Recently, diagnosis techniques have undergone relevant breakthroughs with the advent of Magnetic Resonance Imaging, Ultrasound Sonograms and Positron Emission Tomography (PET) scans, among others. The work presented here is focused on studying the application of a PET system to a Positron Emission Mammography (PEM) system. A PET/PEM system works under the principle that a scintillating crystal detects a gamma-ray pulse, originating at the cancerous cells, and converts it into a corresponding visible light pulse. The latter must then be converted into an electrical current pulse by means of a Photo-Sensitive Device (PSD). After the PSD there must be a Transimpedance Amplifier (TIA) in order to convert the current pulse into a suitable output voltage, in a time period shorter than 40 ns. In this Thesis, the PSD considered is a Silicon Photo-Multiplier (SiPM). The usage of this recently developed type of PSD is impracticable with the conventional TIA topologies, as will be shown. Therefore, the usage of the Regulated Common-Gate (RCG) topology will be studied in the design of the amplifier. Two RCG variations will also be presented, comprising an improved noise response and differential operation of the circuit. The mentioned topology will also be tested in a Radio-Frequency front-end, showing the versatility of the RCG. A study comprising a low-voltage self-biased feedback TIA will also be presented. The proposed circuits will be simulated with standard CMOS technology (UMC 130 nm), using a 1.2 V power supply. A power consumption of 0.34 mW with a signal-to-noise ratio of 43 dB was achieved.
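For orientation on why a large-capacitance detector such as an SiPM stresses a conventional feedback TIA, the sketch below evaluates the textbook first-order estimate of the closed-loop bandwidth, f_-3dB ≈ sqrt(GBW / (2*pi*R_F*C_in)), where GBW is the amplifier's gain-bandwidth product and C_in the total input-node capacitance. The numbers are hypothetical design values, not taken from this thesis.

```python
import math

def tia_bandwidth_hz(gbw_hz, r_f_ohm, c_in_farad):
    """First-order estimate of feedback-TIA closed-loop bandwidth:
    f_-3dB ~ sqrt(GBW / (2*pi*R_F*C_in)) (textbook design rule)."""
    return math.sqrt(gbw_hz / (2 * math.pi * r_f_ohm * c_in_farad))

# hypothetical numbers: 1 GHz GBW amplifier, 10 kOhm feedback resistor
print(tia_bandwidth_hz(1e9, 10e3, 10e-12) / 1e6)   # ~40 MHz with a 10 pF photodiode
print(tia_bandwidth_hz(1e9, 10e3, 300e-12) / 1e6)  # ~7 MHz with a 300 pF SiPM
```

With the tens to hundreds of picofarads typical of an SiPM in place of C_in, the square-root dependence shows how quickly the usable bandwidth shrinks relative to the sub-40 ns response required above, which is consistent with the abstract's statement that conventional feedback topologies become impracticable.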

Relevance: 100.00%

Abstract:

A transimpedance amplifier (TIA) is used in radiation detectors, such as positron emission tomography (PET) scanners, to transform the current pulse produced by a photo-sensitive device into an output voltage pulse with a desired amplitude and shape. The TIA must have the lowest possible noise to maximize the output. To achieve low noise, a circuit topology is proposed in which an auxiliary path is added to the feedback TIA input. In this auxiliary path, a differential transconductance block is used to transform the node voltage into a current; this current is then converted into a voltage pulse by a second feedback TIA complementary to the first one, with the same amplitude but 180º out of phase with the first feedback TIA. With this circuit the input signal of the TIA appears differential at the output, which is used to reduce the circuit noise. The circuit is tested with two different devices: the avalanche photodiode (APD) and the silicon photomultiplier (SiPM). From the simulations we find that, when using a SiPM with Rx = 20 kΩ and Cx = 50 fF, the signal-to-noise ratio is increased from 59 when using only one feedback TIA to 68.3 when the auxiliary path is used in conjunction with the feedback TIA. These values were achieved with a total power consumption of 4.82 mW. The signal-to-noise ratio in the case of the SiPM is thus increased, with some penalty in power consumption.
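As a rough sanity check on the reported improvement, an idealized differential readout doubles the signal swing while two equal, uncorrelated noise contributions add in power, so the best-case gain in signal-to-noise ratio is

SNR_diff = 2A / (sqrt(2) * sigma) = sqrt(2) * SNR_single ≈ 1.41 * SNR_single,

assuming the auxiliary path contributes noise of the same magnitude as, and independent of, the main path. The simulated improvement from 59 to 68.3 (a factor of about 1.16) sits below this idealized bound, as expected for a real circuit.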

Relevance: 100.00%

Abstract:

Recently, the spin-echo full-intensity acquired localized (SPECIAL) spectroscopy technique was proposed to unite the advantages of short TEs on the order of milliseconds (ms) with full sensitivity, and was applied to the rat brain in vivo. In the present study, SPECIAL was adapted and optimized for use on a clinical platform at 3T and 7T by combining interleaved water suppression (WS) and outer volume saturation (OVS), optimized sequence timing, and improved shimming using FASTMAP. High-quality single-voxel spectra of human brain were acquired at TEs at or below 6 ms on clinical 3T and 7T systems for six volunteers. Narrow linewidths (6.6 +/- 0.6 Hz at 3T and 12.1 +/- 1.0 Hz at 7T for water) and the high signal-to-noise ratio (SNR) of the artifact-free spectra enabled the quantification of a neurochemical profile consisting of 18 metabolites with Cramér-Rao lower bounds (CRLBs) below 20% at both field strengths. The enhanced sensitivity and increased spectral resolution at 7T compared to 3T allowed a two-fold reduction in scan time, an increased precision of quantification for 12 metabolites, and the additional quantification of lactate with a CRLB below 20%. Improved sensitivity at 7T was also demonstrated by a 1.7-fold increase in average SNR (peak height divided by the root-mean-square [RMS] noise) per unit time.
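A minimal sketch of the SNR definition used above (peak height divided by the root-mean-square of a signal-free noise region), applied to a synthetic spectrum; the peak, noise level and noise window are all hypothetical.

```python
import numpy as np

rng = np.random.default_rng(5)
ppm = np.linspace(0, 5, 2048)                                  # chemical-shift axis
peak = 10.0 * np.exp(-((ppm - 2.0) ** 2) / (2 * 0.01 ** 2))    # synthetic metabolite peak
spectrum = peak + 0.5 * rng.standard_normal(ppm.size)          # add noise

noise_region = spectrum[ppm > 4.5]                             # assumed signal-free region
snr = spectrum.max() / np.sqrt(np.mean(noise_region ** 2))     # peak height / RMS noise
print(f"SNR = {snr:.1f}")
```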