10 results for Noise Reduction

at Universidad Politécnica de Madrid


Relevance:

100.00%

Abstract:

Two different methods to reduce the noise power in the far-field pattern of an antenna measured in cylindrical near-field (CNF) systems are proposed. Both methods are based on the same principle: the data recorded in the CNF measurement, assumed to be corrupted by white Gaussian, space-stationary noise, are transformed into a new domain where it is possible to filter out a portion of the noise. The filtered data are then used to calculate a far-field pattern with less noise power than the one obtained from the unfiltered measured data. Statistical analyses are carried out to derive expressions for the signal-to-noise ratio improvement achieved with each method. Although the two alternatives share the same idea, there are important differences between them. The first applies modal filtering, requires oversampling and improves the far-field pattern in all directions. The second employs spatial filtering on the antenna plane, does not require oversampling, and improves the far-field pattern only in the forward hemisphere. Several examples using both simulated and measured near-field data verify the effectiveness of the methods.
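To make the modal-filtering principle concrete, here is a minimal sketch (an illustration of the general idea, not the authors' exact algorithm): near-field samples taken around the measurement cylinder are expanded into cylindrical modes with an FFT, and modes beyond the visible region |n| > ka, which carry mostly noise for an antenna enclosed in a cylinder of radius a, are discarded before transforming back. All names and values are illustrative.

```python
import numpy as np

def modal_filter_ring(e_phi, k, a):
    """Filter one ring of CNF samples e_phi (complex array of length N).

    k -- free-space wavenumber 2*pi/wavelength
    a -- radius of the smallest cylinder enclosing the antenna (m)
    """
    N = len(e_phi)
    modes = np.fft.fft(e_phi)               # cylindrical-mode spectrum
    n = np.fft.fftfreq(N, d=1.0 / N)        # integer mode indices
    n_max = int(np.ceil(k * a))             # edge of the visible mode region
    modes[np.abs(n) > n_max] = 0.0          # modes outside carry mostly noise
    return np.fft.ifft(modes)

# Oversampled ring (360 points) for an antenna with k*a ~ 31: a single
# propagating mode survives the filter while most of the noise is removed.
rng = np.random.default_rng(0)
phi = np.linspace(0.0, 2.0 * np.pi, 360, endpoint=False)
clean = np.exp(1j * 10 * phi)
noisy = clean + 0.3 * (rng.standard_normal(360) + 1j * rng.standard_normal(360))
filtered = modal_filter_ring(noisy, k=2 * np.pi / 0.1, a=0.5)
```

Filtering out the noise-only modes is what produces the SNR gain; the required oversampling ensures the visible modes are not aliased.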

Relevance:

100.00%

Abstract:

Three different methods to reduce the noise power in the far-field pattern of an antenna measured in a cylindrical near-field system are presented and compared. The first is based on modal filtering, while the other two are based on spatial filtering, either on an antenna plane or on a cylinder of smaller radius. Simulated and measured results are presented.

Relevance:

70.00%

Abstract:

Background: Magnetoencephalography (MEG) provides a direct measure of brain activity with high combined spatiotemporal resolution. Preprocessing is necessary to reduce contributions from environmental interference and biological noise.

New method: The effect of different preprocessing techniques on the signal-to-noise ratio (SNR) is evaluated. The SNR was defined as the ratio between the mean signal amplitude (evoked field) and the standard error of the mean over trials.

Results: Recordings from 26 subjects obtained during an event-related visual paradigm with an Elekta MEG scanner were employed. Two methods were considered for first-step noise reduction: Signal Space Separation (SSS) and temporal Signal Space Separation (tSSS), which decompose the signal into components originating inside and outside the head. Both algorithms increased the SNR by approximately 100%. Epoch-based methods, aimed at identifying and rejecting epochs containing eye blinks, muscular artifacts and sensor jumps, provided an SNR improvement of 5–10%. The decomposition methods evaluated were independent component analysis (ICA) and second-order blind identification (SOBI). The increase in SNR was about 36% with ICA and 33% with SOBI.

Comparison with existing methods: No previous systematic evaluation of the effect of the typical preprocessing steps on the SNR of the MEG signal has been performed.

Conclusions: The application of either SSS or tSSS is mandatory in Elekta systems; no significant differences were found between the two. While epoch-based methods have been routinely applied, the less often considered decomposition methods were clearly superior, and their use therefore seems advisable.
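The SNR definition above is simple to state in code. The following sketch assumes a plain NumPy array of epochs; shapes and names are illustrative and not tied to any MEG toolbox.

```python
import numpy as np

def evoked_snr(epochs):
    """epochs: array of shape (n_trials, n_channels, n_times)."""
    n_trials = epochs.shape[0]
    evoked = epochs.mean(axis=0)                           # evoked field
    sem = epochs.std(axis=0, ddof=1) / np.sqrt(n_trials)   # standard error of the mean
    return np.abs(evoked) / sem                            # SNR per channel and sample

# Unit-variance noise on top of a constant evoked response: with 120 trials
# the SNR per sample is about sqrt(120) ~ 11. Rejecting noisy epochs trades
# surviving trial count against the per-trial noise level.
rng = np.random.default_rng(1)
epochs = 1.0 + rng.standard_normal((120, 306, 200))
print(evoked_snr(epochs).mean())
```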

Relevance:

60.00%

Abstract:

A fully 3D iterative image reconstruction algorithm has been developed for high-resolution PET cameras composed of pixelated scintillator crystal arrays and rotating planar detectors, based on the ordered-subsets approach. The associated system matrix is precalculated with Monte Carlo methods that incorporate physical effects not included in analytical models, such as positron range and the interaction of the incident gammas with the scintillator material. Custom Monte Carlo methodologies have been developed and optimized for modelling system matrices for fast iterative image reconstruction adapted to specific scanner geometries, without redundant calculations. According to the methodology proposed here, only one-eighth of the voxels within two central transaxial slices need to be modelled in detail; the remaining system matrix elements can be obtained with the aid of axial symmetries and redundancies, as well as in-plane symmetries within transaxial slices. Sparse matrix techniques are employed for the non-zero system matrix elements, allowing fast execution of the image reconstruction process. This 3D image reconstruction scheme has been compared, in terms of image quality, to a fast 2D implementation of the OSEM algorithm combined with Fourier rebinning approaches. This work confirms the superiority of fully 3D OSEM in terms of spatial resolution, contrast recovery and noise reduction compared with conventional 2D approaches based on rebinning schemes. At the same time, it demonstrates that fully 3D methodologies can be applied efficiently to the image reconstruction problem for high-resolution rotational PET cameras by using accurate precalculated system models and taking advantage of the system's symmetries.
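As a rough illustration of the ordered-subsets update at the core of such a reconstruction (the actual system model, symmetries and scanner geometry are far more involved), the following sketch runs OSEM with a generic precomputed sparse system matrix; the random matrix merely stands in for the Monte Carlo-modelled one.

```python
import numpy as np
from scipy.sparse import random as sparse_random

def osem(y, A, subsets, n_iter=4, eps=1e-12):
    """y: measured sinogram, A: sparse (n_bins, n_voxels) system matrix."""
    x = np.ones(A.shape[1])                          # uniform initial image
    for _ in range(n_iter):
        for rows in subsets:                         # ordered subsets of LORs
            As = A[rows]
            sens = np.asarray(As.sum(axis=0)).ravel()   # subset sensitivity image
            ratio = y[rows] / (As @ x + eps)            # measured / expected counts
            x *= (As.T @ ratio) / (sens + eps)          # multiplicative update
    return x

# Toy problem: a random sparse matrix standing in for the precalculated model.
rng = np.random.default_rng(2)
A = sparse_random(400, 100, density=0.05, random_state=2, format="csr")
x_true = rng.random(100)
y = A @ x_true
subsets = np.array_split(np.arange(400), 8)
x_rec = osem(y, A, subsets)
```

Storing only the non-zero elements in sparse form, as the abstract notes, is what keeps each `As @ x` projection fast enough for iterative use.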

Relevance:

60.00%

Abstract:

The study of materials, especially biological ones, by non-destructive means is becoming increasingly important in both scientific and industrial applications. The economic advantages of non-destructive methods are manifold. There are numerous physical procedures capable of extracting detailed information from the surface of wood with little or no prior treatment and minimal intrusion into the material. Among these, optical and acoustic techniques stand out for their great versatility, relative simplicity and low cost. Starting from the application of simple physical principles of direct surface measurement, and through the development of the most suitable statistically based decision algorithms, this thesis aims to establish simple and essentially minimum-cost technological solutions for determining the species and the surface defects of each wood sample while, as far as possible, not altering its working geometry.

Three analyses were developed. The first optical method uses the properties of the light scattered by the wood surface when it is illuminated by a diffuse laser. This scattering produces a luminous speckle pattern whose statistical properties allow very precise characteristics of both the microscopic and the macroscopic structure of the wood to be extracted. The analysis of the spectral properties of the scattered laser light generates more or less regular patterns related to the anatomical structure, composition, processing and surface texture of the wood under study, which reveal characteristics of the material or of the quality of the processes it has undergone. The use of this type of laser also makes it possible to monitor industrial processes in real time and at a distance, without interfering with other sensors.

The second optical technique relies on the statistical and mathematical study of the properties of digital images of the wood surface obtained with a high-resolution scanner. After isolating the most relevant details of the images, several automatic classification algorithms build databases of the wood species to which the images belong, together with the error margins of those classifications. A fundamental part of the classification tools is based on the precise study of the colour bands of the different woods.

Finally, various acoustic techniques, such as the analysis of pulses produced by acoustic impact, complement and refine the results obtained with the optical methods described, identifying surface and deep structures in the wood as well as pathologies or deformations, aspects of special utility when wood is used in structures. The usefulness of these techniques is amply demonstrated in industry, even though their application is not yet widespread owing to high costs and a lack of standardization of the procedures, which means that each analysis cannot be compared with its theoretical market equivalent. At present, a large part of the research effort takes it for granted that differentiating between species is a recognition mechanism particular to human beings, and concentrates the technology on the definition of physical parameters (moduli of elasticity, electrical or acoustic conductivity, etc.), using very expensive devices that are often complex to apply in the field.

Abstract: The study of materials, especially biological ones, by non-destructive techniques is becoming increasingly important in both scientific and industrial applications. The economic advantages of non-destructive methods are multiple and clear, given the costs and resources involved. There are many physical processes capable of extracting detailed information from the wood surface with little or no previous treatment and minimal intrusion into the material. Among the various methods, acoustic and optical techniques stand out for their great versatility, relative simplicity and low cost. This thesis aims to establish, from the application of simple principles of physics and direct surface measurement, and through the development of the most appropriate decision algorithms based on statistics, simple technological solutions of minimum cost for possible application in determining the species and the surface defects of wood samples. Achieving reasonable accuracy without altering their working location or properties is the main objective. There are three lines of work:

Empirical characterization of wood surfaces by means of iterative autocorrelation of laser speckle patterns: A simple and inexpensive method for the qualitative characterization of wood surfaces is presented. It is based on the iterative autocorrelation of laser speckle patterns produced by diffuse laser illumination of the wood surfaces. The method exploits the high spatial frequency content of speckle images; a similar approach with raw conventional photographs taken under ordinary light would be very difficult. A few iterations of the algorithm are necessary, typically three or four, in order to visualize the most important periodic features of the surface. The processed patterns help in the study of surface parameters, in the design of new scattering models and in the classification of wood species.

Fractal-based image enhancement techniques inspired by differential interference contrast microscopy: Differential interference contrast microscopy is a very powerful optical technique for microscopic imaging. Inspired by the physics of this type of microscope, we have developed a series of image processing algorithms aimed at the magnification, noise reduction, contrast enhancement and tissue analysis of biological samples. These algorithms use fractal convolution schemes which provide fast and accurate results, with performance comparable to the best current image enhancement algorithms. These techniques can be used as post-processing tools for advanced microscopy or as a means to improve the performance of less expensive visualization instruments. Several examples of the use of these algorithms to visualize microscopic images of raw pine wood samples with a simple desktop scanner are provided.

Wood species identification using stress-wave analysis in the audible range: Stress-wave analysis is a powerful and flexible technique to study the mechanical properties of many materials. We present a simple technique to obtain information about the species of wood samples using stress-wave sounds in the audible range generated by collision with a small pendulum. Stress-wave analysis has been used for flaw detection and quality control for decades, but its use for material identification and classification is less often reported in the literature. Accurate wood species identification is a time-consuming task for highly trained human experts; for this reason, the development of cost-effective techniques for automatic wood classification is a desirable goal. Our proposed approach is fully non-invasive and non-destructive, significantly reducing the cost and complexity of the identification and classification process.
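A minimal sketch of the iterative speckle autocorrelation principle named in the first work line (details of the published method may differ): each pass autocorrelates the image via the Wiener-Khinchin theorem and renormalizes, so that after three or four passes periodic surface features dominate the pattern.

```python
import numpy as np

def iterative_autocorrelation(img, iterations=3):
    """Repeatedly autocorrelate an image to amplify its periodic content."""
    x = img.astype(float)
    for _ in range(iterations):
        x = x - x.mean()                              # remove the DC component
        spec = np.abs(np.fft.fft2(x)) ** 2            # power spectrum
        x = np.fft.fftshift(np.fft.ifft2(spec).real)  # autocorrelation (Wiener-Khinchin)
        x /= np.abs(x).max()                          # renormalize each pass
    return x

# Synthetic "speckle" with a hidden periodic modulation along one axis,
# standing in for growth-ring structure under diffuse laser illumination.
rng = np.random.default_rng(3)
yy, xx = np.mgrid[0:256, 0:256]
speckle = rng.random((256, 256)) * (1 + 0.2 * np.sin(2 * np.pi * xx / 32))
pattern = iterative_autocorrelation(speckle, iterations=4)
```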

Relevance:

60.00%

Abstract:

The horizontal visibility algorithm was recently introduced as a mapping between time series and networks. The challenge lies in characterizing the structure of time series (and of the processes that generated those series) using the powerful tools of graph theory. Recent works have shown that visibility graphs inherit several degrees of correlation from their associated series, so such a graph-theoretical characterization is in principle possible. However, both the mathematical grounding of this promising theory and its applications are in their infancy. Following this line, here we address the question of detecting hidden periodicity in series polluted with a certain amount of noise. We first put forward some generic properties of horizontal visibility graphs which allow us to define a (graph-theoretical) noise reduction filter. We then evaluate its performance for the task of calculating the period of noisy periodic signals, and compare our results with standard time-domain (autocorrelation) methods. Finally, potentials, limitations and applications are discussed.
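For reference, the horizontal visibility mapping itself is straightforward to implement: two points x_i and x_j of the series are linked when every sample between them is strictly lower than both. The sketch below builds the graph's edge list for a noisy periodic series; the graph-theoretical noise filter built on top of it is beyond this illustration.

```python
import numpy as np

def horizontal_visibility_graph(x):
    """Return the edge list of the horizontal visibility graph of series x."""
    edges = []
    n = len(x)
    for i in range(n - 1):
        edges.append((i, i + 1))            # neighbours always see each other
        top = x[i + 1]
        for j in range(i + 2, n):
            if x[i] > top and x[j] > top:   # all intermediate samples are lower
                edges.append((i, j))
            top = max(top, x[j])
            if top >= x[i]:                 # nothing further is visible from i
                break
    return edges

# A noisy periodic series: the graph inherits the period in its structure.
rng = np.random.default_rng(4)
t = np.arange(200)
series = np.sin(2 * np.pi * t / 20) + 0.2 * rng.standard_normal(200)
g = horizontal_visibility_graph(series)
```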

Relevance:

60.00%

Abstract:

Illumination with light-emitting diodes (LED) is increasingly replacing traditional light sources. LEDs provide advantages in efficiency, energy consumption, design, size and light quality. For more than 50 years, researchers have been working on LED improvements, and their relevance for illumination is rapidly increasing. This thesis focuses on one important field of application: spotlights. They are used to focus light on defined areas and outstanding objects under professional conditions. This high-performance illumination requires a defined light quality, including tunable correlated color temperatures (CCT), a high color rendering index (CRI), high efficiencies and bright, vivid colors. Several differently colored chips (red, blue, phosphor-converted) are combined in the LED package to meet a spectral power distribution with high CRI, tunable white and several light colors; secondary optics are used to collimate the light into the desired narrow spots with a defined angle of emission. The combination of a multi-color LED source with optical elements may cause chromatic inhomogeneities in the spatial and angular light distribution, which need to be solved in the optical design. However, there is no need for perfect uniformity in the spot light, due to the threshold of visual perception of the human eye. Therefore, a mathematical description of the color uniformity level with regard to visual perception is required.

This thesis is organized in seven chapters. After an initial one presenting the motivation that has guided the research, chapter 2 introduces the scientific basics of color uniformity in spotlights, including: the applied color space CIELAB, visual color perception, spotlight design fundamentals with regard to light engines and nonimaging optics, and the state of the art in the evaluation of color uniformity in the far field of spotlights. Chapter 3 develops different methods for the mathematical description of the spatial color distribution in a defined area: the maximum color difference, the average color deviation, the gradient of the spatial color distribution, and the radial and axial smoothness. Each function refers to different visual influencing factors and requires different handling of the data, along with weighting functions which pre- and post-process the simulated or measured data for noise reduction, luminance cutoff, the implementation of luminance weighting, the contrast sensitivity function, and the cumulative distribution function.

In chapter 4, the merit function Usl for the estimation of the perceived color uniformity in spotlights is derived. It is based on the results of two sets of human factor experiments performed to evaluate subjects' visual perception of typical spotlight patterns. The first human factor experiment resulted in the perceived rank order of the spotlights; this rank order was used to correlate the mathematical descriptions of the basic functions and the weighted function of the spatial color distribution, which led to the Usl function. The second human factor experiment tested the perception of spotlights under varied environmental conditions, with the objective of providing an absolute scale for Usl, so that the subjective personal opinion of individuals could be replaced by a standardized merit function.

The validation of the Usl function, concerning the application range and conditions as well as limitations and restrictions, is carried out in chapter 5. Measured and simulated data of several optical systems are compared, and fields of application are discussed along with validations and restrictions of the function. Chapter 6 presents spotlight system design and optimization. An evaluation shows the analysis of reflector-based and TIR lens systems. The simulated optical systems are compared in color uniformity Usl, sensitivity to colored shadows, efficiency, and peak luminous intensity. It was found that no single system performed best in all categories, and that excellent color uniformity could be reached by two different system assemblies. Finally, chapter 7 summarizes the conclusions of the thesis and gives an outlook on further investigation topics.
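As a hedged illustration of the two simplest descriptors listed for chapter 3, the sketch below computes a maximum color difference and an average color deviation as Euclidean distances in CIELAB chromaticity (a*, b*). The full Usl merit function adds luminance weighting, the contrast sensitivity function and the other processing steps described above; names and values here are illustrative.

```python
import numpy as np

def color_uniformity_metrics(ab, weights=None):
    """ab: array (n_points, 2) of CIELAB (a*, b*) values sampled in the spot.
    weights: optional luminance weights per point."""
    mean_ab = np.average(ab, axis=0, weights=weights)  # spot-average chromaticity
    dev = np.linalg.norm(ab - mean_ab, axis=1)         # deviation from the average
    pairwise_max = 0.0
    for i in range(len(ab)):                           # maximum pairwise difference
        d = np.linalg.norm(ab - ab[i], axis=1).max()
        pairwise_max = max(pairwise_max, d)
    return {"max_color_difference": pairwise_max,
            "average_color_deviation": dev.mean()}

# Simulated spot samples scattered around one chromaticity point.
rng = np.random.default_rng(5)
ab = rng.normal([10.0, -5.0], 0.8, size=(500, 2))
print(color_uniformity_metrics(ab))
```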

Relevance:

40.00%

Abstract:

The influence of applying European default traffic values to the making of a noise map was evaluated in a typical environment, Palma de Mallorca. To assess these default traffic values, a first model was created and compared with measured noise levels. Subsequently, a second traffic model, improving the input data used for the first one, was created and validated according to the deviations. Different methodologies for collecting higher-quality model input data were also examined, by analysing the resulting reduction in the uncertainty that road traffic noise emission introduces into the noise map.
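The validation step can be pictured with a short sketch: modelled receiver levels are compared with measured ones, and the deviation statistics decide whether the default-traffic model needs better input data. The values and the 3 dB acceptance threshold below are illustrative assumptions, not figures from the study.

```python
import numpy as np

measured = np.array([68.2, 71.5, 65.9, 70.1, 66.8])   # dB(A) at measurement points
modelled = np.array([66.0, 73.2, 64.1, 71.9, 65.0])   # dB(A) from default traffic data

dev = modelled - measured                              # per-receiver deviation
print(f"mean deviation: {dev.mean():+.1f} dB")
print(f"std of deviation: {dev.std(ddof=1):.1f} dB")
print("acceptable" if np.abs(dev).max() <= 3.0 else "refine input data")
```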

Relevance:

30.00%

Abstract:

Flat or worn wheels rolling on rough or corrugated tracks can generate airborne noise and ground-borne vibration, which can be a serious concern for neighbours of urban rail transit lines. Among the various treatments used to reduce vibration and noise, resilient wheels play an important role. In conventional resilient wheels, a slightly prestressed V-shaped rubber ring is mounted between the steel wheel centre and the tyre. The elastic layer enhances rolling noise and vibration suppression, as well as impact reduction on the track. In this paper, the effectiveness of resilient wheels in underground lines is assessed in comparison to monobloc ones. The analysed resilient wheel is able to carry greater loads than standard resilient wheels used for light vehicles. It also presents greater radial resiliency and higher axial stiffness than conventional V-wheels. The finite element method was used in this study. A quarter-car model was defined, in which the wheelset was modelled as an elastic body. Several simulations were performed to assess the vibrational behaviour of elastic wheels, including modal, harmonic and random vibration analyses, the latter allowing the introduction of realistic vertical track irregularities as well as the influence of the running speed. Due to numerical problems, some simplifications were needed. Parametric variations were also performed, in which the sensitivity of the whole system to variations in rubber prestress and in the Poisson's ratio of the elastic material was assessed. Results are presented in the frequency domain, showing a better performance of the resilient wheels for frequencies above 200 Hz. This result reveals the ability of the analysed design to mitigate rolling noise, but not structural vibrations, which are primarily found in the lower frequency range.
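As a grossly simplified illustration of why the elastic layer helps at high frequency (a lumped two-mass stand-in, not the paper's finite element model), the sketch below computes the force transmitted through the rubber ring joining tyre and wheel centre; softening the ring lowers the transmitted force well above the ring's resonance, mirroring the improvement reported above beyond 200 Hz. All parameter values are invented for illustration.

```python
import numpy as np

def ring_force(f, m_tyre=80.0, m_centre=250.0, k_rubber=6e7, c_rubber=2e3,
               k_contact=1.2e9):
    """|force through the rubber ring| for a unit force applied to the tyre."""
    w = 2 * np.pi * f
    k12 = k_rubber + 1j * w * c_rubber      # complex stiffness of the ring
    # Dynamic stiffness matrix of the 2-DOF system (tyre mass, wheel centre).
    K = np.array([[k_contact + k12 - m_tyre * w**2, -k12],
                  [-k12, k12 - m_centre * w**2]])
    x = np.linalg.solve(K, np.array([1.0, 0.0]))   # complex displacements
    return abs(k12 * (x[0] - x[1]))                # force carried by the ring

freqs = np.logspace(1, 3, 200)                           # 10 Hz to 1 kHz
soft = [ring_force(f) for f in freqs]                    # resilient wheel
stiff = [ring_force(f, k_rubber=6e8) for f in freqs]     # near-monobloc wheel
```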

Relevance:

30.00%

Abstract:

This paper proposes a quiet zone probing approach that deals with low-dynamic-range quiet zone acquisitions. Lack of dynamic range is characteristic of millimeter and sub-millimeter wavelength technologies: it is a consequence of the progressively smaller power generated by the instrumentation, which follows an f^α law with frequency, where α ≥ 1 varies depending on the signal source's technology. The proposed approach is based on an optimal data reduction scenario which results in a maximum signal-to-noise ratio increase for the signal pattern with minimum information loss. After the theoretical formulation, practical applications of the technique are proposed.
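The underlying principle can be sketched in a few lines: averaging M redundant acquisitions of the quiet-zone field raises the pattern's SNR by roughly sqrt(M) (about 12 dB for M = 16), and the point of an optimal data reduction is to choose the reduction so that the information loss stays minimal. The example below is a generic illustration, not the paper's method.

```python
import numpy as np

rng = np.random.default_rng(6)
pattern = np.sin(np.linspace(0, 4 * np.pi, 500))          # quiet-zone field cut
M = 16                                                    # repeated acquisitions
scans = pattern + 0.5 * rng.standard_normal((M, 500))     # low-dynamic-range data

def snr_db(estimate):
    noise = estimate - pattern
    return 10 * np.log10((pattern ** 2).mean() / (noise ** 2).mean())

print(f"single scan:      {snr_db(scans[0]):.1f} dB")
print(f"{M}-scan average: {snr_db(scans.mean(axis=0)):.1f} dB")  # ~ +12 dB gain
```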