870 results for Contrast-to-noise ratio


Relevance: 100.00%

Publisher:

Abstract:

INTRODUCTION The Rondo is a single-unit cochlear implant (CI) audio processor comprising the same components as its behind-the-ear predecessor, the Opus 2. Replacing the Opus 2 with the Rondo shifts the microphone position toward the back of the head. This study aimed to investigate the influence of the Rondo wearing position on speech intelligibility in noise. METHODS Speech intelligibility in noise was measured in 4 spatial configurations with 12 experienced CI users using the German adaptive Oldenburg sentence test. A physical model and a numerical model were used to enable a comparison of the observations. RESULTS No statistically significant differences in speech intelligibility were found when the signal came from the front and the noise came from the frontal, ipsilateral, or contralateral side. The signal-to-noise ratio (SNR) was significantly better with the Opus 2 when the noise was presented from the back (4.4 dB, p < 0.001). The SNR differences were significantly larger for Rondo processors placed further behind the ear than for those closer to the ear. CONCLUSION The study indicates that CI users with the receiver/stimulator implanted further behind the ear can be expected to have greater difficulty in noisy situations when wearing the single-unit audio processor.
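For reference, dB figures like the 4.4 dB advantage above translate into power ratios as follows; this is a purely illustrative sketch, not part of the study:

```python
import math

def snr_db(signal_power: float, noise_power: float) -> float:
    """Signal-to-noise ratio in decibels: 10*log10(Ps/Pn)."""
    return 10.0 * math.log10(signal_power / noise_power)

# The 4.4 dB advantage reported for the Opus 2 corresponds to a
# power ratio of 10**(4.4/10), i.e. roughly 2.75x.
advantage_ratio = 10.0 ** (4.4 / 10.0)
```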


INTRODUCTION Apical surgery is an important treatment option for teeth with post-treatment periodontitis. Although apical surgery involves root-end resection, no morphometric data are yet available about root-end resection and its impact on the root-to-crown ratio (RCR). The present study assessed the length of apicectomy and calculated the loss of root length and changes of RCR after apical surgery. METHODS In a prospective clinical study, cone-beam computed tomography scans were taken preoperatively and postoperatively. From these images, the crown and root lengths of 61 roots (54 teeth in 47 patients) were measured before and after apical surgery. Data were collected relative to the cementoenamel junction (CEJ) as well as to the crestal bone level (CBL). One observer took all measurements twice (to calculate the intraobserver variability), and the means were used for further analysis. The following parameters were assessed for all treated teeth as well as for specific tooth groups: length of root-end resection and percentage change of root length, preoperative and postoperative RCRs, and percentage change of RCR after apical surgery. RESULTS The mean length of root-end resection was 3.58 ± 1.43 mm (relative to the CBL). This amounted to a loss of 33.2% of clinical and 26% of anatomic root length. There was an overall significant difference between the tooth groups (P < .05). There was also a statistically significant difference comparing mandibular and maxillary teeth (P < .05), but not for incisors/canines versus premolars/molars (P = .125). The mean preoperative and postoperative RCRs (relative to CEJ) were 1.83 and 1.35, respectively (P < .001). With regard to the CBL reference, the mean preoperative and postoperative RCRs were 1.08 and 0.71 (CBL), respectively (P < .001). The calculated changes of RCR after apical surgery were 24.8% relative to CEJ and 33.3% relative to CBL (P < .001). 
Across the different tooth groups, the mean RCR was not significantly different (P = .244 for CEJ and P = .114 for CBL). CONCLUSIONS This CBCT-based study demonstrated that the RCR changes significantly after root-end resection in apical surgery, irrespective of the clinical (CBL) or anatomic (CEJ) reference level. The lowest, and thus clinically most critical, postoperative RCR was observed in maxillary incisors. Future clinical studies need to show the impact of resection length and RCR changes on the outcome of apical surgery.
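The RCR arithmetic is simple enough to sketch. Note that applying the percentage-change formula to the reported group means gives values close to, but not identical with, the per-tooth means reported in the study (the helper names are hypothetical):

```python
def rcr(root_length: float, crown_length: float) -> float:
    """Root-to-crown ratio."""
    return root_length / crown_length

def pct_change(pre: float, post: float) -> float:
    """Percentage loss from a preoperative to a postoperative value."""
    return 100.0 * (pre - post) / pre

# Reported CEJ-referenced mean RCRs: 1.83 preoperative, 1.35 postoperative;
# CBL-referenced: 1.08 and 0.71.
cej_change = pct_change(1.83, 1.35)   # ~26%, vs. the per-tooth mean of 24.8%
cbl_change = pct_change(1.08, 0.71)   # ~34%, vs. the per-tooth mean of 33.3%
```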


The PROPELLER (Periodically Rotated Overlapping Parallel Lines with Enhanced Reconstruction) magnetic resonance imaging (MRI) technique has inherent advantages over other fast imaging methods, including robust motion correction, reduced image distortion, and resistance to off-resonance effects. These features make PROPELLER highly desirable for T2*-sensitive imaging, high-resolution diffusion imaging, and many other applications. However, PROPELLER has been predominantly implemented as a fast spin-echo (FSE) technique, which is insensitive to T2* contrast and requires time-inefficient signal averaging to achieve adequate signal-to-noise ratio (SNR) for many applications. These issues presently constrain the potential clinical utility of FSE-based PROPELLER. In this research, our aim was to extend and enhance the potential applications of PROPELLER MRI by developing a novel multiple gradient echo PROPELLER (MGREP) technique that overcomes the aforementioned limitations. The MGREP pulse sequence was designed to acquire multiple gradient-echo images simultaneously, without any increase in total scan time or RF energy deposition relative to FSE-based PROPELLER. A new parameter was also introduced for direct user control over gradient echo spacing, allowing variable sensitivity to T2* contrast. In parallel with pulse sequence development, an improved motion-correction algorithm was developed and evaluated against the established method through extensive simulations. The potential advantages of MGREP over FSE-based PROPELLER were illustrated via three specific applications: (1) quantitative T2* measurement, (2) time-efficient signal averaging, and (3) high-resolution diffusion imaging. Relative to the FSE-PROPELLER method, the MGREP sequence was found to yield quantitative T2* values, increase SNR by ∼40% without any increase in acquisition time or RF energy deposition, and noticeably improve image quality in high-resolution diffusion maps.
In addition, the new motion algorithm was found to considerably improve motion-artifact reduction. Overall, this work demonstrated a number of enhancements and extensions to existing PROPELLER techniques. The new technical capabilities of PROPELLER imaging developed in this thesis research are expected to serve as the foundation for further expanding the scope of PROPELLER applications.
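Quantitative T2* mapping from multiple gradient echoes conventionally fits a mono-exponential decay S(TE) = S0·exp(−TE/T2*) per voxel. Below is a minimal log-linear fit, an illustrative sketch rather than the thesis implementation (as an aside, the ∼40% SNR gain quoted above is roughly what √2 ≈ 1.41 averaging of two effective acquisitions would predict):

```python
import math

def fit_t2star(echo_times, signals):
    """Estimate T2* from S(TE) = S0*exp(-TE/T2*) via a log-linear
    least-squares fit (valid for positive, low-noise signals)."""
    ys = [math.log(s) for s in signals]
    n = len(echo_times)
    mx = sum(echo_times) / n
    my = sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(echo_times, ys))
             / sum((x - mx) ** 2 for x in echo_times))
    return -1.0 / slope  # T2* in the same units as TE

# synthetic four-echo decay with T2* = 30 ms
tes = [5.0, 15.0, 25.0, 35.0]
sig = [100.0 * math.exp(-te / 30.0) for te in tes]
t2star = fit_t2star(tes, sig)
```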


Magnetic resonance imaging (MRI) is a non-invasive technique that offers excellent soft tissue contrast for characterizing soft tissue pathologies. Diffusion tensor imaging (DTI) is an MRI technique that has been shown to have the sensitivity to detect subtle pathology that is not evident on conventional MRI. Rats are commonly used as animal models for characterizing spinal cord pathologies, including spinal cord injury (SCI), cancer, and multiple sclerosis. These pathologies can affect both the thoracic and cervical regions, and complete characterization using MRI requires DTI in both regions. Prior to the application of DTI for investigating pathologic changes in the spinal cord, it is essential to establish DTI metrics in normal animals. To date, in-vivo DTI studies of rat spinal cord have used implantable coils for high signal-to-noise ratio (SNR) and spin-echo pulse sequences for reduced geometric distortions. Implantable coils have several disadvantages, including: (1) the invasive nature of implantation, (2) loss of SNR due to frequency shift over time in longitudinal studies, and (3) difficulty in imaging the cervical region. While echo planar imaging (EPI) offers much shorter acquisition times than spin-echo imaging, EPI is very sensitive to static magnetic field inhomogeneities, and the shimming techniques implemented on existing MRI scanners do not perform well on the spinal cord because of its geometry. In this work, an integrated approach was implemented for in-vivo DTI characterization of the rat spinal cord in the thoracic and cervical regions. A three-element phased array coil was developed for improved SNR and extended spatial coverage, and a field-map shimming technique was developed to minimize geometric distortions in EPI images.
Using these techniques, EPI-based DWI images were acquired with an optimized diffusion encoding scheme from 6 normal rats, and the DTI-derived metrics were quantified. Phantom studies indicated higher SNR and smaller bias in the estimated DTI metrics than previous studies in the cervical region. In-vivo results indicated no statistical difference in the DTI characteristics of either gray matter or white matter between the thoracic and cervical regions.
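The DTI-derived metrics quantified in such studies are typically mean diffusivity (MD) and fractional anisotropy (FA), computed from the diffusion tensor eigenvalues with standard formulas; a minimal sketch (the eigenvalues below are hypothetical, white-matter-like values):

```python
import math

def dti_metrics(l1: float, l2: float, l3: float):
    """MD and FA from diffusion tensor eigenvalues.

    MD = (l1 + l2 + l3) / 3
    FA = sqrt(3/2) * ||lambda - MD|| / ||lambda||
    """
    md = (l1 + l2 + l3) / 3.0
    num = math.sqrt((l1 - md) ** 2 + (l2 - md) ** 2 + (l3 - md) ** 2)
    den = math.sqrt(l1 * l1 + l2 * l2 + l3 * l3)
    fa = math.sqrt(1.5) * num / den if den > 0 else 0.0
    return md, fa

# anisotropic (white-matter-like) vs. isotropic eigenvalues, in 1e-3 mm^2/s
md_wm, fa_wm = dti_metrics(1.7, 0.3, 0.3)
md_iso, fa_iso = dti_metrics(1.0, 1.0, 1.0)
```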


This study deals with the mineralogical variability of siliceous and zeolitic sediments, porcellanites, and cherts at small intervals in the continuously cored sequence of Deep Sea Drilling Project Site 462. Skeletal opal is preserved down to a maximum burial depth of 390 meters (middle Eocene). Below this level, the tests are totally dissolved or replaced and filled by opal-CT, quartz, clinoptilolite, and calcite. Etching of opaline tests does not increase continuously with deeper burial. Opal dissolution accompanied by a conspicuous formation of authigenic clinoptilolite has a local maximum in Core 16 (150 m). A causal relationship with the lower Miocene hiatus at this level is highly probable. Oligocene to Cenomanian sediments represent an intermediate stage of silica diagenesis: the opal-CT/quartz ratios of the silicified rocks are frequently greater than 1, and quartz filling pores or replacing foraminifer tests is more widespread than quartz converted from an opal-CT precursor. As at other sites, there is a marked discontinuity in the transitions from biogenic opal via opal-CT to quartz with increasing burial depth. Layers with unaltered opal-A alternate with porcellanite beds; the intensity of the opal-CT-to-quartz transformation changes very rapidly from horizon to horizon and is obviously not correlated with lithologic parameters. The silica for authigenic clinoptilolite was derived from biogenic opal and decaying volcanic components.


These data are from a field experiment conducted in a shallow alluvial aquifer along the Colorado River in Rifle, Colorado, USA. In this experiment, bicarbonate-promoted uranium desorption and acetate amendment were combined and compared to an acetate-amendment-only experiment in the same experimental plot. Data include names and location data for boreholes; geochemical data for all boreholes between June 1, 2010 and January 1, 2011; and microarray data provided as signal-to-noise ratio (SNR), both for individual microarray probes and by genus.


One important task in the design of an antenna is to carry out an analysis to find the antenna characteristics that best fulfill the specifications fixed by the application. After that, a prototype is manufactured, and the next stage in the design process is to check whether the radiation pattern differs from the designed one. Besides the radiation pattern, other radiation parameters such as directivity, gain, impedance, beamwidth, efficiency, and polarization must also be evaluated. For this purpose, accurate antenna measurement techniques are needed in order to know exactly the actual electromagnetic behavior of the antenna under test. For this reason, most measurements are performed in anechoic chambers: closed, normally shielded areas covered by electromagnetic absorbing material that simulate free-space propagation conditions. Moreover, these facilities can be used independently of the weather conditions and allow measurements free from interference. Despite all the advantages of anechoic chambers, the results obtained from both far-field and near-field measurements are inevitably affected by errors. Thus, the main objective of this Thesis is to propose algorithms to improve the quality of the results obtained in antenna measurements by using post-processing techniques, without requiring additional measurements. First, a thorough review of the state of the art was carried out in order to give a general view of the possibilities to characterize or to reduce the effects of errors in antenna measurements. Then, new methods to reduce the unwanted effects of four of the most common errors in antenna measurements are described and validated both theoretically and numerically. The basis of all of them is the same: to perform a transformation from the measurement surface to another domain where there is enough information to easily remove the contribution of the errors.
The four errors analyzed are noise, reflections, truncation errors and leakage, and the tools used to suppress them are mainly source reconstruction techniques, spatial and modal filtering, and iterative algorithms to extrapolate functions. The main idea of all the methods is therefore to modify the classical near-field-to-far-field transformations by including additional steps with which errors can be greatly suppressed. Moreover, the proposed methods are not computationally complex and, because they are applied in post-processing, additional measurements are not required. Noise is the most widely studied error in this Thesis; a total of three alternatives are proposed to filter out an important noise contribution before obtaining the far-field pattern. The first is based on modal filtering. The second uses a source reconstruction technique to obtain the extreme near-field, where it is possible to apply spatial filtering. The last is to back-propagate the measured field to a surface with the same geometry as the measurement surface but closer to the AUT, and then also apply spatial filtering. All the alternatives are analyzed in the three most common near-field systems, including comprehensive noise statistical analyses in order to deduce the signal-to-noise ratio improvement achieved in each case. The method to suppress reflections in antenna measurements is also based on a source reconstruction technique; the main idea is to reconstruct the field over a surface larger than the antenna aperture in order to identify and later suppress the virtual sources related to the reflected waves. The truncation error present in the results obtained from planar, cylindrical and partial spherical near-field measurements is the third error analyzed in this Thesis.
The method to reduce this error is based on an iterative algorithm to extrapolate the reliable region of the far-field pattern from knowledge of the field distribution on the AUT plane. The proper termination point of this iterative algorithm, as well as other critical aspects of the method, are also studied. The last part of this work is dedicated to the detection and suppression of the two most common leakage sources in antenna measurements. A first method estimates the leakage bias constant added by the receiver's quadrature detector to the near-field data and then suppresses its effect on the far-field pattern. The second method can be divided into two parts: the first finds the position of the faulty component that radiates or receives unwanted radiation, making its identification within the measurement environment and its later substitution easier; the second is able to computationally remove the leakage effect without requiring the substitution of the faulty component.


Respiratory motion is a major source of reduced quality in positron emission tomography (PET). In order to minimize its effects, the use of respiratory synchronized acquisitions, leading to gated frames, has been suggested. Such frames, however, are of low signal-to-noise ratio (SNR) as they contain reduced statistics. Super-resolution (SR) techniques make use of the motion in a sequence of images in order to improve their quality. They aim at enhancing a low-resolution image belonging to a sequence of images representing different views of the same scene. In this work, a maximum a posteriori (MAP) super-resolution algorithm has been implemented and applied to respiratory gated PET images for motion compensation. An edge preserving Huber regularization term was used to ensure convergence. Motion fields were recovered using a B-spline based elastic registration algorithm. The performance of the SR algorithm was evaluated through the use of both simulated and clinical datasets by assessing image SNR, as well as the contrast, position and extent of the different lesions. Results were compared to summing the registered synchronized frames on both simulated and clinical datasets. The super-resolution image had higher SNR (by a factor of over 4 on average) and lesion contrast (by a factor of 2) than the single respiratory synchronized frame using the same reconstruction matrix size. In comparison to the motion corrected or the motion free images a similar SNR was obtained, while improvements of up to 20% in the recovered lesion size and contrast were measured. Finally, the recovered lesion locations on the SR images were systematically closer to the true simulated lesion positions. These observations concerning the SNR, lesion contrast and size were confirmed on two clinical datasets included in the study. 
In conclusion, the use of SR techniques applied to respiratory-synchronized images leads to motion compensation combined with improved image SNR and contrast, without any increase in overall acquisition time.
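The edge-preserving Huber regularization term mentioned above penalizes small inter-voxel differences quadratically and large ones linearly, which smooths noise without blurring lesion edges. A minimal sketch of the potential function (illustrative only; the actual MAP algorithm embeds this inside an iterative reconstruction):

```python
def huber(t: float, delta: float = 1.0) -> float:
    """Huber potential: quadratic for |t| <= delta, linear beyond.

    Small differences (noise) are penalized quadratically, large
    differences (true edges) only linearly, so edges are preserved.
    """
    a = abs(t)
    if a <= delta:
        return 0.5 * t * t
    return delta * (a - 0.5 * delta)

# noise-scale, threshold, and edge-scale differences
costs = [huber(t) for t in (0.5, 1.0, 3.0)]
```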


Two different methods to reduce the noise power in the far-field pattern of an antenna measured in a cylindrical near-field (CNF) range are proposed. Both methods are based on the same principle: the data recorded in the CNF measurement, assumed to be corrupted by white Gaussian, space-stationary noise, are transformed into a new domain where it is possible to filter out a portion of the noise. The filtered data are then used to calculate a far-field pattern with less noise power than that obtained from the measured data without any filtering. Statistical analyses are carried out to deduce expressions for the signal-to-noise ratio improvement achieved with each method. Although the idea behind the two alternatives is the same, there are important differences between them. The first applies modal filtering, requires oversampling, and improves the far-field pattern in all directions. The second employs spatial filtering on the antenna plane, does not require oversampling, and improves the far-field pattern only in the forward hemisphere. Several examples are presented using both simulated and measured near-field data to verify the effectiveness of the methods.
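The modal-filtering idea can be illustrated with a toy one-dimensional version: field samples on a ring are expanded into azimuthal modes, and modes above the index supported by the antenna size (roughly k·a for radius a) carry noise only and can be zeroed. This sketch uses a plain DFT and hypothetical parameters; it is not the paper's algorithm:

```python
import cmath
import math
import random

def modal_filter(samples, n_max):
    """Keep azimuthal modes |n| <= n_max of a complex field sampled on a ring."""
    N = len(samples)
    # forward DFT -> modal coefficients c_n (indices > N/2 are negative modes)
    coeffs = [sum(samples[m] * cmath.exp(-2j * math.pi * n * m / N)
                  for m in range(N)) / N
              for n in range(N)]
    for n in range(N):
        signed = n if n <= N // 2 else n - N
        if abs(signed) > n_max:
            coeffs[n] = 0  # outside the visible modal region: noise only
    # inverse DFT back to field samples
    return [sum(coeffs[n] * cmath.exp(2j * math.pi * n * m / N) for n in range(N))
            for m in range(N)]

random.seed(0)
N, n_max = 64, 4
clean = [cmath.exp(2j * math.pi * 2 * m / N) for m in range(N)]  # one low-order mode
noisy = [c + complex(random.gauss(0, 0.3), random.gauss(0, 0.3)) for c in clean]
filtered = modal_filter(noisy, n_max)
err_noisy = sum(abs(a - b) ** 2 for a, b in zip(noisy, clean))
err_filt = sum(abs(a - b) ** 2 for a, b in zip(filtered, clean))
```

Since only 9 of the 64 modes are kept and the signal lives entirely in a kept mode, roughly 9/64 of the noise power survives the filter.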


Output bits from an optical logic cell present noise due to the technique used to obtain the Boolean functions of two input data bits. We have simulated the behavior of an optically programmable logic cell working with Fabry-Perot laser diodes of the same type employed in optical communications (1550 nm) but working here as amplifiers. In this paper we report a study of the bit noise generated by the optical nonlinear process that allows the Boolean operation on two optical input data signals. Two types of optical logic cells are analyzed: first, a classical on-off configuration, with the laser-diode amplifier operating in transmission, and second, a more complicated configuration with two laser-diode amplifiers, one working in transmission and the other in reflection mode. This last configuration has nonlinear behavior emulating SEED-like properties. In both cases, depending on the value of the "1" input data signals to be processed, a different logic function can be obtained. A CW signal, known as the control signal, may also be applied to fix the type of logic function. The signal-to-noise ratio is analyzed for different parameters, such as the signal wavelengths and the hysteresis-cycle regions associated with the device, in relation to the applied signal power levels. With this study we aim to obtain a better understanding of the possible effects present in an optical logic gate based on laser diodes.
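One standard way to summarize bit noise in an on-off optical signal is the Q factor computed from the mark/space level means and noise standard deviations, with the usual Gaussian-noise bit-error-rate estimate. This is a textbook relation with made-up level values, not the paper's simulation:

```python
import math

def q_factor(mu1: float, mu0: float, sd1: float, sd0: float) -> float:
    """Q factor of an on-off signal from the '1'/'0' level statistics."""
    return (mu1 - mu0) / (sd1 + sd0)

def ber_from_q(q: float) -> float:
    """Gaussian-noise BER estimate: 0.5 * erfc(Q / sqrt(2))."""
    return 0.5 * math.erfc(q / math.sqrt(2.0))

# hypothetical normalized levels: mark at 1.0, space at 0.1
q = q_factor(1.0, 0.1, 0.1, 0.05)
ber = ber_from_q(q)  # Q = 6 corresponds to a BER of about 1e-9
```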


The water time constant and the mechanical time constant greatly influence the power and speed oscillations of a hydro-turbine-generator unit. This paper discusses turbine power transients in response to changes of different nature and magnitude in the gate position. The work presented here analyses the characteristics of the hydraulic system with an emphasis on changes in the above time constants. The simulation study is based on mathematical first-, second-, third- and fourth-order transfer function models. The study is further extended to identify discrete time-domain models and their characteristic representation without noise and with noise content at 10 and 20 dB signal-to-noise ratio (SNR). The use of a self-tuned control approach to minimise the speed deviation under plant parameter changes and disturbances is also discussed.
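The classic first-order hydraulic model is the non-minimum-phase transfer function G(s) = (1 − Tw·s)/(1 + 0.5·Tw·s), where Tw is the water time constant; its step response shows the characteristic initial power dip when the gate opens. A minimal forward-Euler simulation with illustrative parameter values (this is the textbook ideal lossless model, not necessarily the paper's exact formulation):

```python
def turbine_step_response(tw: float = 1.0, dt: float = 1e-4, t_end: float = 5.0):
    """Unit-gate-step response of G(s) = (1 - tw*s)/(1 + 0.5*tw*s).

    Partial fractions: G(s) = -2 + 3/(1 + 0.5*tw*s), i.e. a direct
    feed-through of -2*u plus a first-order lag x obeying
    0.5*tw*x' = 3*u - x.  Hence y(0+) = -2 and y(inf) = 1.
    """
    u, x, ys = 1.0, 0.0, []
    for _ in range(int(t_end / dt)):
        ys.append(-2.0 * u + x)
        x += dt * (3.0 * u - x) / (0.5 * tw)
    return ys

ys = turbine_step_response()
initial, final = ys[0], ys[-1]  # power dips to -2 before settling at +1
```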


Magnetic resonance microscopy (MRM) theoretically provides the spatial resolution and signal-to-noise ratio needed to resolve neuritic plaques, the neuropathological hallmark of Alzheimer's disease (AD). Two previously unexplored MR contrast parameters, T2* and diffusion, are tested for plaque-specific contrast-to-noise. Autopsy specimens from nondemented controls (n = 3) and patients with AD (n = 5) were used. Three-dimensional T2* and diffusion MR images with voxel sizes ranging from 3 × 10⁻³ mm³ to 5.9 × 10⁻⁵ mm³ were acquired. After imaging, specimens were cut and stained with a microwave king silver stain to demonstrate neuritic plaques. In controls, the alveus, fimbria, pyramidal cell layer, hippocampal sulcus, and granule cell layer were detected by either T2* or diffusion contrast. These structures were used as landmarks when correlating MRMs with histological sections. At a voxel resolution of 5.9 × 10⁻⁵ mm³, neuritic plaques could be detected by T2*. The neuritic plaques emerged as black, spherical elements on T2* MRMs and could be distinguished from vessels in cross-section only when presented in three dimensions. Here we provide MR images of neuritic plaques in vitro. The MRM results reported provide a new direction for applying this technology in vivo. Clearly, the ability to detect and follow the early progression of amyloid-positive brain lesions will greatly aid and simplify the many possibilities to intervene pharmacologically in AD.
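"Contrast to noise" in this setting follows the usual imaging definition: the signal difference between two structures divided by the noise standard deviation. A minimal sketch with one common convention and hypothetical intensity values (other conventions normalize by a combined noise term):

```python
def cnr(mean_a: float, mean_b: float, noise_sd: float) -> float:
    """Contrast-to-noise ratio between two regions of an image."""
    return abs(mean_a - mean_b) / noise_sd

# a plaque must differ from surrounding tissue by several noise
# standard deviations to be detectable at a given voxel size
value = cnr(120.0, 80.0, 10.0)
```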


Deterministic chaos has been implicated in numerous natural and man-made complex phenomena ranging from quantum to astronomical scales and in disciplines as diverse as meteorology, physiology, ecology, and economics. However, the lack of a definitive test of chaos vs. random noise in experimental time series has led to considerable controversy in many fields. Here we propose a numerical titration procedure as a simple “litmus test” for highly sensitive, specific, and robust detection of chaos in short noisy data without the need for intensive surrogate data testing. We show that the controlled addition of white or colored noise to a signal with a preexisting noise floor results in a titration index that: (i) faithfully tracks the onset of deterministic chaos in all standard bifurcation routes to chaos; and (ii) gives a relative measure of chaos intensity. Such reliable detection and quantification of chaos under severe conditions of relatively low signal-to-noise ratio is of great interest, as it may open potential practical ways of identifying, forecasting, and controlling complex behaviors in a wide variety of physical, biomedical, and socioeconomic systems.
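A crude version of the titration idea can be sketched as follows: add white noise of increasing standard deviation to the series and record the first level at which a simple nonlinearity indicator stops firing. Here the indicator is a quadratic one-step predictor clearly outperforming a linear one, a simplified stand-in for the paper's actual detection statistic; the threshold, step size, and detector are all assumptions made for illustration:

```python
import random

def polyfit_mse(x, y, degree):
    """In-sample MSE of a least-squares polynomial fit (normal equations
    solved by Gaussian elimination; fine for tiny systems)."""
    n = degree + 1
    A = [[sum(xi ** (i + j) for xi in x) for j in range(n)] for i in range(n)]
    b = [sum(yi * xi ** i for xi, yi in zip(x, y)) for i in range(n)]
    for col in range(n):                      # elimination with partial pivoting
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, n):
            f = A[r][col] / A[col][col]
            for c in range(col, n):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    coef = [0.0] * n
    for r in range(n - 1, -1, -1):            # back substitution
        coef[r] = (b[r] - sum(A[r][c] * coef[c] for c in range(r + 1, n))) / A[r][r]
    resid = [yi - sum(c * xi ** i for i, c in enumerate(coef))
             for xi, yi in zip(x, y)]
    return sum(e * e for e in resid) / len(resid)

def titration_index(series, step=0.05, max_sd=2.0, rng=None):
    """Noise limit: smallest added-noise SD at which a quadratic one-step
    predictor stops clearly beating a linear one."""
    rng = rng or random.Random(1)
    sd = 0.0
    while sd <= max_sd:
        noisy = [v + rng.gauss(0.0, sd) for v in series]
        x, y = noisy[:-1], noisy[1:]
        if polyfit_mse(x, y, 2) > 0.9 * polyfit_mse(x, y, 1):
            return sd                          # nonlinearity no longer detected
        sd += step
    return max_sd

random.seed(0)
logistic = [0.3]
for _ in range(400):                           # chaotic logistic map, r = 4
    logistic.append(4.0 * logistic[-1] * (1.0 - logistic[-1]))
noise_series = [random.gauss(0.0, 1.0) for _ in range(401)]

nl_chaos = titration_index(logistic)           # positive: noise-robust chaos
nl_noise = titration_index(noise_series)       # zero: no nonlinearity to titrate
```

As in the paper's scheme, a strictly positive index flags deterministic nonlinear dynamics and its magnitude gives a relative measure of how much noise the signature withstands, while a linear stochastic series titrates to zero immediately.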