872 results for Signal to noise ratio


Relevance: 100.00%

Abstract:

Swallowing dynamics involves the coordination and interaction of several muscles and nerves that allow correct food transport from mouth to stomach without laryngotracheal penetration or aspiration. Clinical swallowing assessment depends on the evaluator's knowledge of the anatomic structures and neurophysiological processes involved in swallowing. Any alteration in those steps is termed oropharyngeal dysphagia, which may have many causes, such as neurological or mechanical disorders. Videofluoroscopy of swallowing is presently considered the best exam for objectively assessing the dynamics of swallowing, but it must be conducted under certain restrictions owing to the patient's exposure to radiation, which limits periodic repetition for monitoring swallowing therapy. Another method, cervical auscultation, is a promising new diagnostic tool for the assessment of swallowing disorders. The potential to diagnose dysphagia noninvasively by assessing the sounds of swallowing is a highly attractive option for the dysphagia clinician. Even so, the captured sound contains a certain amount of noise, which can hamper the evaluator's decision. Accordingly, the present paper proposes the use of a filter to improve the quality of the audible sound and facilitate interpretation of the examination. A wavelet denoising approach is used to decompose the noisy signal. The signal-to-noise ratio was evaluated to demonstrate the quantitative results of the proposed methodology. (C) 2007 Elsevier Ltd. All rights reserved.
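The wavelet-denoising step can be illustrated with a minimal sketch: a single-level Haar decomposition with soft thresholding of the detail coefficients, followed by an SNR comparison. The paper's actual filter and mother wavelet are not specified here; all signals and parameters below are illustrative.

```python
import numpy as np

def haar_denoise(x, threshold):
    """One-level Haar wavelet decomposition, soft-threshold the detail
    coefficients (where broadband noise concentrates), then reconstruct."""
    x = np.asarray(x, dtype=float)
    n = len(x) - len(x) % 2                      # even length for pairing
    a = (x[0:n:2] + x[1:n:2]) / np.sqrt(2)       # approximation coefficients
    d = (x[0:n:2] - x[1:n:2]) / np.sqrt(2)       # detail coefficients
    d = np.sign(d) * np.maximum(np.abs(d) - threshold, 0.0)  # soft threshold
    y = np.empty(n)
    y[0::2] = (a + d) / np.sqrt(2)               # inverse Haar transform
    y[1::2] = (a - d) / np.sqrt(2)
    return y

def snr_db(clean, noisy):
    """Signal-to-noise ratio (dB) of `noisy` relative to a known `clean` signal."""
    noise = noisy - clean
    return 10 * np.log10(np.sum(clean**2) / np.sum(noise**2))

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 1024)
clean = np.sin(2 * np.pi * 5 * t)                # toy stand-in for a swallow sound
noisy = clean + 0.3 * rng.standard_normal(t.size)
denoised = haar_denoise(noisy, threshold=0.3)
print(snr_db(clean, noisy), snr_db(clean, denoised))
```

Real denoisers use multi-level decompositions and data-driven thresholds, but the structure (decompose, shrink details, reconstruct, compare SNR) is the same.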

Relevance: 100.00%

Abstract:

Absorbance detection in capillary electrophoresis (CE) offers excellent mass sensitivity but poor concentration detection limits owing to very small injection volumes (normally 1 to 10 nL). This aspect can be a limiting factor in the applicability of CE/UV to detecting species at trace levels, particularly pesticide residues. In the present work, the optical path length of an on-column detection cell was increased through a proper connection of the column (75 µm i.d.) to a capillary detection cell of 180 µm optical path length in order to improve detectability. It is shown that the cell with an extended optical path length yields a significant gain in signal-to-noise ratio. The effect of the increase in optical path length was evaluated for six pesticides, namely carbendazim, thiabendazole, imazalil, procymidone, triadimefon, and prochloraz. The resulting optical enhancement of the detection cell provided detection limits of ca. 0.3 µg/mL for the studied compounds, thus enabling residue analysis by CE/UV.
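Since absorbance scales linearly with optical path length (Beer-Lambert law), the nominal signal gain from the extended cell follows directly from the two path lengths quoted above:

```python
# Beer-Lambert: A = epsilon * b * c, so absorbance scales with path length b.
b_column = 75e-6   # on-column path length (m): the capillary inner diameter
b_cell = 180e-6    # extended detection-cell optical path length (m)
gain = b_cell / b_column
print(f"expected absorbance gain: {gain:.1f}x")
```

The realized SNR gain is smaller than this nominal factor because baseline noise and band broadening also change with the cell geometry.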

Relevance: 100.00%

Abstract:

In this work a new method is proposed for noise reduction of speech signals in the wavelet domain. The method makes use of a transfer function obtained as a polynomial combination of three processing stages, termed operators. The proposed method aims to overcome the deficiencies of thresholding methods and to process effectively speech corrupted by real noise. Using the method, two speech signals are processed, contaminated by white and colored noise. To verify the quality of the processed signals, two evaluation measures are used: signal-to-noise ratio (SNR) and perceptual evaluation of speech quality (PESQ).
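As a sketch of the SNR evaluation measure (PESQ requires a standardized implementation and is omitted), a segmental SNR, a common frame-based variant for speech, might look like the following. The frame length and test signals are illustrative, not the paper's.

```python
import numpy as np

def segmental_snr_db(clean, processed, frame=256):
    """Segmental SNR: the mean of per-frame SNRs in dB, a common
    speech-quality measure; frames with zero energy are skipped."""
    clean = np.asarray(clean, dtype=float)
    noise = np.asarray(processed, dtype=float) - clean
    snrs = []
    for i in range(0, len(clean) - frame + 1, frame):
        s = np.sum(clean[i:i + frame]**2)
        n = np.sum(noise[i:i + frame]**2)
        if s > 0 and n > 0:
            snrs.append(10 * np.log10(s / n))
    return float(np.mean(snrs))

rng = np.random.default_rng(1)
clean = np.sin(2 * np.pi * 440 * np.arange(8000) / 8000)  # 1 s tone at 8 kHz
noisy = clean + 0.1 * rng.standard_normal(clean.size)
seg_snr = segmental_snr_db(clean, noisy)
print(f"{seg_snr:.1f} dB")
```

Frame-based averaging weights quiet and loud passages more evenly than a single global SNR, which is why it is preferred for speech.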

Relevance: 100.00%

Abstract:

We present a new method to quantify substructures in clusters of galaxies, based on the analysis of the intensity of structures. This analysis is done on a residual image, the result of subtracting from the X-ray image a surface brightness model obtained by fitting a two-dimensional analytical model (beta-model or Sérsic profile) with elliptical symmetry. Our method is applied to 34 clusters observed by the Chandra X-ray Observatory that are in the redshift range z ∈ [0.02, 0.2] and have a signal-to-noise ratio (S/N) greater than 100. We present the calibration of the method and the relations between the substructure level and physical quantities, such as the mass, X-ray luminosity, temperature, and cluster redshift. We use our method to separate the clusters into two sub-samples of high and low substructure levels. We conclude, using Monte Carlo simulations, that the method recovers the true amount of substructure very well for clusters with small angular core radii (with respect to the whole image size) and good-S/N observations. We find no evidence of correlation between the substructure level and physical properties of the clusters such as gas temperature, X-ray luminosity, and redshift; however, our analysis suggests a trend between the substructure level and cluster mass. The scaling relations for the two sub-samples (high- and low-substructure-level clusters) are different (they present an offset, i.e., at a fixed mass or temperature, low-substructure clusters tend to be more X-ray luminous). This is an important result for cosmological tests using the mass-luminosity relation to obtain the cluster mass function, since such tests rely on the assumption that clusters do not present different scaling relations according to their dynamical state.
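A minimal sketch of the residual-image idea, using a circular beta-model on synthetic data (the paper fits elliptical beta or Sérsic models to real X-ray images, and its substructure statistic is more elaborate; all values here are toy numbers):

```python
import numpy as np

def beta_model(x, y, x0, y0, s0, rc, beta):
    """Circular beta-model X-ray surface brightness profile."""
    r2 = (x - x0)**2 + (y - y0)**2
    return s0 * (1 + r2 / rc**2) ** (0.5 - 3 * beta)

n = 128
y, x = np.mgrid[0:n, 0:n]
image = beta_model(x, y, 64, 64, 100.0, 10.0, 0.7)
image[30:40, 30:40] += 20.0                         # toy "substructure" blob
model = beta_model(x, y, 64, 64, 100.0, 10.0, 0.7)  # assume a perfect fit here
residual = image - model                            # substructure stands out
frac = residual.sum() / model.sum()                 # crude substructure proxy
print(f"residual flux fraction: {frac:.3f}")
```

In practice the model parameters are obtained by a least-squares fit to the observed image rather than assumed, and the residual is assessed against the noise level.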

Relevance: 100.00%

Abstract:

Most biological systems are formed by component parts that are to some degree interrelated. Groups of parts that are more associated among themselves and relatively autonomous from others are called modules. One consequence of modularity is that biological systems usually present an unequal distribution of genetic variation among traits. Estimating the covariance matrix that describes these systems is a difficult problem due to factors such as small sample sizes and measurement errors. We show that this problem is exacerbated whenever matrix inversion is required, as in directional selection reconstruction analysis. We explore the consequences of varying degrees of modularity and signal-to-noise ratio on selection reconstruction. We then present and test the efficiency of available methods for controlling noise in matrix estimates. In our simulations, controlling matrices for noise vastly improves the reconstruction of selection gradients. We also perform an analysis of selection gradient reconstruction on a New World monkey skull database to illustrate the impact of noise on such analyses. Noise-controlled estimates render far more plausible interpretations that are in full agreement with previous results.
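The noise amplification under inversion, and one simple way of controlling it (shrinkage of the covariance estimate toward a scaled identity; the paper evaluates its own set of noise-control methods, which this does not reproduce), can be sketched as:

```python
import numpy as np

rng = np.random.default_rng(2)
p, n = 20, 30                               # traits, (small) sample size
true_cov = np.diag(np.linspace(1, 5, p))
sample = rng.multivariate_normal(np.zeros(p), true_cov, size=n)
s = np.cov(sample, rowvar=False)            # noisy covariance estimate

# Shrink toward the scaled identity: one generic way to "control for noise"
# before inversion. The shrinkage intensity here is an assumed toy value.
lam = 0.3
target = np.eye(p) * np.trace(s) / p
s_shrunk = (1 - lam) * s + lam * target

dz = rng.standard_normal(p)                 # toy selection response vector
beta_raw = np.linalg.solve(s, dz)           # gradients from the raw estimate
beta_ctrl = np.linalg.solve(s_shrunk, dz)   # gradients from the stabilized one
print(np.linalg.cond(s), np.linalg.cond(s_shrunk))
```

Shrinkage raises the smallest eigenvalues of the estimate, so its inverse no longer blows up along poorly estimated directions; that is exactly the failure mode that corrupts reconstructed selection gradients.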

Relevance: 100.00%

Abstract:

Biosensors find wide application in clinical diagnostics, bioprocess control and environmental monitoring. They should not only show high specificity and reproducibility but also high sensitivity and signal stability. I therefore introduce a novel sensor technology based on plasmonic nanoparticles which meets both of these demands. Plasmonic nanoparticles exhibit strong absorption and scattering in the visible and near-infrared spectral range. The plasmon resonance, the collective coherent oscillation mode of the conduction-band electrons against the positively charged ionic lattice, is sensitive to the local environment of the particle. I monitor changes in the resonance wavelength with a new dark-field spectroscopy technique. Thanks to a strong light source and a highly sensitive detector, a temporal resolution in the microsecond regime is possible in combination with high spectral stability. This opens a window to investigate dynamics on the molecular level and to gain knowledge about fundamental biological processes.

First, I investigate adsorption both away from and at equilibrium. I show the temporal evolution of single adsorption events of fibrinogen on the sensor surface on a millisecond timescale. Fibrinogen is a blood plasma protein with a unique shape that plays a central role in blood coagulation and is always involved in cell-biomaterial interactions. Further, I monitor equilibrium coverage fluctuations of sodium dodecyl sulfate and demonstrate a new approach to quantify the characteristic rate constants that is independent of mass-transfer interference and long-term drifts of the measured signal. This method had been investigated theoretically by Monte Carlo simulations, but until now no sensor technology offered a sufficient signal-to-noise ratio.

Second, I apply plasmonic nanoparticles as sensors for the determination of diffusion coefficients. The sensing volume of a single, immobilized nanorod serves as the detection volume. When a diffusing particle enters the detection volume, a shift in the resonance wavelength is introduced. As no labeling of the analyte is necessary, the hydrodynamic radius and thus the diffusion properties are not altered and can be studied in their natural form. Compared with the conventional fluorescence correlation spectroscopy technique, a volume reduction by a factor of 5000-10000 is achieved.
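Extracting a resonance-wavelength shift from measured spectra can be sketched with an intensity-weighted centroid over a model Lorentzian line shape. This is illustrative only; the thesis's actual spectral-analysis pipeline is not specified here, and all numbers are toy values.

```python
import numpy as np

def lorentzian(wl, wl0, gamma, amp):
    """Lorentzian line shape, a common model for a plasmon resonance."""
    return amp * gamma**2 / ((wl - wl0)**2 + gamma**2)

def resonance_centroid(wl, spectrum):
    """Intensity-weighted centroid: a fast, fit-free estimate of the
    resonance wavelength, suitable for high-rate tracking."""
    return np.sum(wl * spectrum) / np.sum(spectrum)

wl = np.linspace(500, 700, 401)                  # wavelength axis (nm)
before = lorentzian(wl, 600.0, 20.0, 1.0)        # bare-particle spectrum
after = lorentzian(wl, 601.5, 20.0, 1.0)         # binding shifts the resonance
shift = resonance_centroid(wl, after) - resonance_centroid(wl, before)
print(f"estimated resonance shift: {shift:.2f} nm")
```

The centroid slightly underestimates the true shift when the line is truncated asymmetrically by the measurement window; fitting the full line shape trades speed for accuracy.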

Relevance: 100.00%

Abstract:

Non-uniform sampling (NUS) has been established as a route to obtaining true sensitivity enhancements when recording indirect dimensions of decaying signals in the same total experimental time as traditional uniform incrementation of the indirect evolution period. Theory and experiments have shown that NUS can yield up to two-fold improvements in the intrinsic signal-to-noise ratio (SNR) of each dimension, while even conservative protocols can yield 20-40% improvements in the intrinsic SNR of NMR data. Applications of biological NMR that can benefit from these improvements are emerging, and in this work we develop some practical aspects of applying NUS nD-NMR to studies that approach the traditional detection limit of nD-NMR spectroscopy. Conditions for obtaining high NUS sensitivity enhancements are considered here in the context of enabling ¹H,¹⁵N-HSQC experiments on natural-abundance protein samples and ¹H,¹³C-HMBC experiments on a challenging natural product. Through systematic studies we arrive at more precise guidelines for balancing sensitivity enhancements against reduced line-shape constraints, and report an alternative sampling density based on a quarter-wave sinusoidal distribution that returns the highest fidelity we have seen to date in line shapes obtained by maximum entropy processing of non-uniformly sampled data.
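One plausible reading of a quarter-wave sinusoidal sampling density is a NUS schedule whose sampling probability decays like a quarter period of a cosine over the indirect evolution time, so early (high-signal) increments are sampled most densely. This is a sketch under that assumption, not the paper's exact schedule generator; all parameters are illustrative.

```python
import numpy as np

def quarter_sine_schedule(n_total, n_keep, seed=0):
    """Draw a non-uniform sampling schedule whose density follows a
    quarter-wave sinusoid: weight 1 at the first increment, decaying
    to 0 at the last, matching the decay of the indirect-dimension signal."""
    rng = np.random.default_rng(seed)
    i = np.arange(n_total)
    weights = np.cos(np.pi * i / (2 * n_total))   # quarter cosine wave
    probs = weights / weights.sum()
    picked = rng.choice(n_total, size=n_keep, replace=False, p=probs)
    return np.sort(picked)

schedule = quarter_sine_schedule(n_total=256, n_keep=64)
early = int(np.sum(schedule < 128))               # increments in the first half
late = int(np.sum(schedule >= 128))               # increments in the second half
print(schedule[:10], early, late)
```

Matching the sampling density to the signal envelope is what yields the intrinsic SNR gain: measurement time is concentrated where signal, not noise, dominates.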

Relevance: 100.00%

Abstract:

Among other auditory operations, the analysis of the different sound levels received at the two ears is fundamental for localizing a sound source. In animals, these so-called interaural level differences are coded by excitatory-inhibitory neurons, yielding asymmetric hemispheric activity patterns for acoustic stimuli with maximal interaural level differences. In human auditory cortex, the temporal blood oxygen level-dependent (BOLD) response to auditory inputs, as measured by functional magnetic resonance imaging (fMRI), consists of at least two independent components: an initial transient and a subsequent sustained signal, which, on a different time scale, are consistent with electrophysiological human and animal response patterns. However, their specific functional role remains unclear. Animal studies suggest that these temporal components are based on different neural networks and have specific roles in representing the external acoustic environment. Here we hypothesized that the transient and sustained response constituents are differentially involved in coding interaural level differences and therefore play different roles in spatial information processing. Healthy subjects underwent monaural and binaural acoustic stimulation, and BOLD responses were measured using high signal-to-noise ratio fMRI. In the anatomically segmented Heschl's gyrus, the transient response was bilaterally balanced, independent of the side of stimulation, whereas the sustained response was contralateralized. This dissociation suggests a differential role of these two independent temporal response components, with an initial bilateral transient signal subserving rapid sound detection and a subsequent lateralized sustained signal subserving detailed sound characterization.

Relevance: 100.00%

Abstract:

We administered a Rey visual design learning test (RVDLT) to 17 subjects and measured intervoxel coherence (IC) by DTI, as an indication of connectivity, to investigate whether visual memory performance depends on white matter structure in healthy persons. IC considers the orientation of adjacent voxels and has a better signal-to-noise ratio than the commonly used fractional anisotropy index. Voxel-based t-test analysis of the IC values was used to identify neighboring voxel clusters with significant differences between 7 low and 10 high test performers. We detected 9 circumscribed significant clusters (p < .01) with lower IC values in low performers than in high performers, with centers of gravity located in the left and right superior temporal regions, the corpus callosum, the left superior longitudinal fascicle, and the left optic radiation. Using non-parametric correlation analysis, IC and memory performance were significantly correlated in each of the 9 clusters (r = .61 to .81; df = 15, p < .01 to p < .0001). The findings provide in vivo evidence for the contribution of white matter structure to visual memory in healthy people.

Relevance: 100.00%

Abstract:

Users of cochlear implants (auditory aids that stimulate the auditory nerve electrically at the inner ear) often suffer from poor speech understanding in noise. We evaluate a small (intermicrophone distance 7 mm) and computationally inexpensive adaptive noise reduction system suitable for behind-the-ear cochlear implant speech processors. The system is evaluated in simulated and real, anechoic and reverberant environments. Results from simulations show improvements of 3.4 to 9.3 dB in signal-to-noise ratio for rooms with realistic reverberation and more than 18 dB under anechoic conditions. Speech understanding in noise was measured in 6 adult cochlear implant users in a reverberant room, showing average improvements of 7.9–9.6 dB compared to a single omnidirectional microphone, and 1.3–5.6 dB compared to a simple directional two-microphone device. Subjective evaluation in a cafeteria at lunchtime shows a preference of the cochlear implant users for the evaluated device in terms of speech understanding and sound quality.
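The adaptive noise-reduction idea can be sketched with a generic NLMS noise canceller. This is not the paper's algorithm, which operates on a closely spaced microphone pair; here a clean noise reference channel is assumed, and the signals are toy stand-ins.

```python
import numpy as np

def nlms_cancel(primary, reference, taps=16, mu=0.1, eps=1e-8):
    """Adaptive noise canceller (NLMS): adaptively estimate the noise
    component of `primary` from the correlated `reference` channel and
    subtract it, leaving the desired signal in the error output."""
    w = np.zeros(taps)
    out = np.zeros(len(primary))
    for n in range(taps, len(primary)):
        x = reference[n - taps + 1:n + 1][::-1]   # recent reference samples
        y = w @ x                                  # current noise estimate
        e = primary[n] - y                         # enhanced output sample
        w += mu * e * x / (x @ x + eps)            # normalized LMS update
        out[n] = e
    return out

rng = np.random.default_rng(3)
n = 8000
speech = np.sin(2 * np.pi * 300 * np.arange(n) / 8000)   # stand-in "speech"
noise = rng.standard_normal(n)
primary = speech + np.convolve(noise, [0.6, 0.3], mode="same")  # noisy mic
enhanced = nlms_cancel(primary, noise)
err_before = np.mean((primary[1000:] - speech[1000:])**2)
err_after = np.mean((enhanced[1000:] - speech[1000:])**2)
print(err_before, err_after)
```

In a real two-microphone device no clean reference exists; the adaptive structure instead exploits the spatial difference between the two closely spaced microphones.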

Relevance: 100.00%

Abstract:

PURPOSE: To prospectively quantify in vitro the influence of gadopentetate dimeglumine and ioversol on the magnetic resonance (MR) imaging signal observed with a variety of musculoskeletal pulse sequences to predict optimum gadolinium concentrations for direct MR arthrography at 1.5 and 3.0 T. MATERIALS AND METHODS: In an in vitro study, T1 and T2 relaxation times of three dilution series of gadopentetate dimeglumine (concentration, 0-20.0 mmol gadolinium per liter) at ioversol concentrations with iodine concentration of 0, 236.4, and 1182 mmol iodine per liter (corresponding to 0, 30, and 150 mg of iodine per milliliter) were measured at 1.5 and 3.0 T. The relaxation rate dependence on concentrations of gadolinium and iodine was analytically modeled, and continuous profiles of signal versus gadolinium concentration were calculated for 10 pulse sequences used in current musculoskeletal imaging. After fitting to experimental discrete profiles, maximum signal-to-noise ratio (SNR), gadolinium concentration with maximum SNR, and range of gadolinium concentration with 90% of maximum SNR were derived. The overall influence of field strength and iodine concentration on these parameters was assessed by using t tests. The deviation of simulated from experimental signal-response profiles was assessed with the autocorrelation of the residuals. RESULTS: The model reproduced relaxation rates of 0.37-38.24 sec(-1), with a mean error of 4.5%. Calculated SNR profiles matched the discrete experimental profiles, with autocorrelation of the residuals divided by the mean of less than 5.0. Admixture of ioversol consistently reduced T1 and T2, narrowed optimum gadolinium concentration ranges (P = .004-.006), and reduced maximum SNR (P < .001 to not significant). Optimum gadolinium concentration was 0.7-3.4 mmol/L at both field strengths. At 3.0 T, maximum SNR was up to 75% higher than at 1.5 T. 
CONCLUSION: Admixture of ioversol to gadopentetate dimeglumine solutions results in a consistent additional relaxation enhancement, which can be analytically modeled to allow a near-quantitative a priori optimized match of contrast media concentrations and imaging protocol for a broad variety of pulse sequences.
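The signal-versus-concentration modeling can be sketched for a simple T1-weighted spin-echo sequence using assumed (typical) relaxivities and timings; these are not the values measured in the study, and the study's full model also includes the iodine contribution.

```python
import numpy as np

# Illustrative relaxivity model: 1/T1 = 1/T1_0 + r1*C and 1/T2 = 1/T2_0 + r2*C,
# with spin-echo signal S ~ (1 - exp(-TR/T1)) * exp(-TE/T2).
r1, r2 = 4.1, 4.6        # relaxivities, L/(mmol*s) -- assumed typical values
T1_0, T2_0 = 3.0, 1.5    # relaxation times of the diluent (s) -- assumed
TR, TE = 0.5, 0.015      # spin-echo timing (s) -- assumed

c = np.linspace(0.01, 20, 2000)          # Gd concentration grid (mmol/L)
T1 = 1 / (1 / T1_0 + r1 * c)
T2 = 1 / (1 / T2_0 + r2 * c)
signal = (1 - np.exp(-TR / T1)) * np.exp(-TE / T2)
c_opt = c[np.argmax(signal)]
print(f"optimum Gd concentration ~ {c_opt:.2f} mmol/L")
```

The maximum arises from the competition of T1 shortening (raising signal) against T2 shortening (lowering it); with these toy parameters the optimum falls inside the 0.7-3.4 mmol/L range reported above.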

Relevance: 100.00%

Abstract:

We study the spatial and temporal distribution of hydrogen energetic neutral atoms (ENAs) from the heliosheath observed with the IBEX-Lo sensor of the Interstellar Boundary EXplorer (IBEX) from solar wind energies down to the lowest available energy (15 eV). All available IBEX-Lo data from 2009 January until 2013 June were included. The sky regions imaged when the spacecraft was outside of Earth's magnetosphere and when the Earth was moving toward the direction of observation offer a sufficient signal-to-noise ratio even at very low energies. We find that the ENA ribbon, a 20° wide region of high ENA intensities, is most prominent at solar wind energies, whereas it fades at lower energies. The maximum emission in the ribbon is located near the poles for 2 keV and closer to the ecliptic plane for energies below 1 keV. This shift is evidence that the ENA ribbon originates from the solar wind. Below 0.1 keV, the ribbon can no longer be identified against the globally distributed ENA signal. The ENA measurements in the downwind direction are affected by magnetospheric contamination below 0.5 keV, but a region of very low ENA intensities can be identified from 0.1 keV to 2 keV. The energy spectra of heliospheric ENAs follow a uniform power law down to 0.1 keV. Below this energy, they seem to become flatter, which is consistent with predictions. Owing to the subtraction of local background, the ENA intensities measured with IBEX agree with the upper limit derived from Lyα observations.

Relevance: 100.00%

Abstract:

INTRODUCTION The Rondo is a single-unit cochlear implant (CI) audio processor comprising the same components as its behind-the-ear predecessor, the Opus 2. Replacing the Opus 2 with the Rondo shifts the microphone position toward the back of the head. This study aimed to investigate the influence of the Rondo wearing position on speech intelligibility in noise. METHODS Speech intelligibility in noise was measured in 4 spatial configurations with 12 experienced CI users using the German adaptive Oldenburg sentence test. A physical model and a numerical model were used to enable a comparison of the observations. RESULTS No statistically significant differences in speech intelligibility were found when the signal came from the front and the noise came from the frontal, ipsilateral, or contralateral side. The signal-to-noise ratio (SNR) was significantly better with the Opus 2 when the noise was presented from the back (4.4 dB, p < 0.001). The SNR was significantly worse when the Rondo processor was placed further behind the ear than when it was placed closer to the ear. CONCLUSION The study indicates that CI users with the receiver/stimulator implanted in positions further behind the ear can be expected to have greater difficulty in noisy situations when wearing the single-unit audio processor.

Relevance: 100.00%

Abstract:

These data are from a field experiment conducted in a shallow alluvial aquifer along the Colorado River in Rifle, Colorado, USA. In this experiment, bicarbonate-promoted uranium desorption and acetate amendment were combined and compared to an acetate-amendment-only experiment in the same experimental plot. Data include names and location data for the boreholes; geochemical data for all boreholes between June 1, 2010 and January 1, 2011; and microarray data provided as signal-to-noise ratio (SNR), both for individual microarray probes and aggregated by genus.

Relevance: 100.00%

Abstract:

One important task in the design of an antenna is to carry out an analysis to find the characteristics of the antenna that best fulfill the specifications fixed by the application. After that, a prototype is manufactured, and the next stage in the design process is to check whether the radiation pattern differs from the designed one. Besides the radiation pattern, other radiation parameters such as directivity, gain, impedance, beamwidth, efficiency, and polarization must also be evaluated. For this purpose, accurate antenna measurement techniques are needed in order to know exactly the actual electromagnetic behavior of the antenna under test. For this reason, most measurements are performed in anechoic chambers: closed, normally shielded areas covered with electromagnetic absorbing material that simulate free-space propagation conditions thanks to the absorption of the radiation-absorbing material. Moreover, these facilities can be used independently of the weather conditions and allow measurements free from interference. Despite all the advantages of anechoic chambers, the results obtained from both far-field and near-field measurements are inevitably affected by errors. Thus, the main objective of this Thesis is to propose algorithms that improve the quality of the results obtained in antenna measurements by using post-processing techniques, without requiring additional measurements. First, a thorough review of the state of the art was carried out in order to give a general view of the possibilities for characterizing or reducing the effects of errors in antenna measurements. Then, new methods to reduce the unwanted effects of four of the most common errors in antenna measurements are described and validated both theoretically and numerically. The basis of all of them is the same: to perform a transformation from the measurement surface to another domain where there is enough information to easily remove the contribution of the errors.
The four errors analyzed are noise, reflections, truncation errors and leakage, and the tools used to suppress them are mainly source reconstruction techniques, spatial and modal filtering, and iterative algorithms for extrapolating functions. The main idea of all the methods is therefore to modify the classical near-field-to-far-field transformations by including additional steps with which errors can be greatly suppressed. Moreover, the proposed methods are not computationally complex and, because they are applied in post-processing, no additional measurements are required. Noise is the error studied most extensively in this Thesis; a total of three alternatives are proposed to filter out an important noise contribution before obtaining the far-field pattern. The first is based on modal filtering. The second alternative uses a source reconstruction technique to obtain the extreme near-field, where a spatial filtering can be applied. The last one back-propagates the measured field to a surface with the same geometry as the measurement surface but closer to the AUT, and then also applies a spatial filtering. All the alternatives are analyzed in the three most common near-field systems, including comprehensive statistical noise analyses in order to deduce the signal-to-noise ratio improvement achieved in each case. The method for suppressing reflections in antenna measurements is also based on a source reconstruction technique; the main idea is to reconstruct the field over a surface larger than the antenna aperture in order to identify and later suppress the virtual sources related to the reflected waves. The truncation error present in the results obtained from planar, cylindrical and partial spherical near-field measurements is the third error analyzed in this Thesis.
The method to reduce this error is based on an iterative algorithm that extrapolates the reliable region of the far-field pattern from the knowledge of the field distribution on the AUT plane. The proper termination point of this iterative algorithm, as well as other critical aspects of the method, is also studied. The last part of this work is dedicated to the detection and suppression of the two most common leakage sources in antenna measurements. A first method estimates the leakage bias constant added by the receiver's quadrature detector to every near-field datum and then suppresses its effect on the far-field pattern. The second method can be divided into two parts: the first finds the position of the faulty component that radiates or receives unwanted radiation, making its identification within the measurement environment and its later substitution easier; the second is able to computationally remove the leakage effect without requiring the substitution of the faulty component.
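The modal-filtering alternative for noise suppression can be sketched for the planar near-field case: transform the measured field to its plane-wave spectrum and discard the modes outside the visible region, which for measured data carry mostly noise. This is a simplified sketch of the idea with toy data, not the thesis's complete method.

```python
import numpy as np

def modal_filter(nearfield, dx, wavelength):
    """Plane-wave (modal) filtering of planar near-field data: FFT to the
    plane-wave spectrum, zero the modes outside the visible region
    (kx^2 + ky^2 > k^2), and transform back."""
    n = nearfield.shape[0]
    k = 2 * np.pi / wavelength
    kxy = 2 * np.pi * np.fft.fftfreq(n, d=dx)    # spatial frequencies (rad/m)
    kx, ky = np.meshgrid(kxy, kxy)
    spectrum = np.fft.fft2(nearfield)
    spectrum[kx**2 + ky**2 > k**2] = 0.0         # suppress invisible modes
    return np.fft.ifft2(spectrum)

rng = np.random.default_rng(4)
n, dx, wl = 64, 0.01, 0.03                       # 64x64 grid, 10 mm step, 10 GHz
x = (np.arange(n) - n / 2) * dx
xx, yy = np.meshgrid(x, x)
field = np.exp(-(xx**2 + yy**2) / 0.02)          # smooth toy aperture field
noisy = field + 0.1 * (rng.standard_normal((n, n))
                       + 1j * rng.standard_normal((n, n)))
filtered = modal_filter(noisy, dx, wl)
mse_noisy = np.mean(np.abs(noisy - field)**2)
mse_filtered = np.mean(np.abs(filtered - field)**2)
print(mse_noisy, mse_filtered)
```

The filter works because a physically propagating field has (almost) no energy in the evanescent part of the spectrum at the measurement distance, so zeroing those modes removes noise while leaving the field essentially untouched.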