985 results for Noise signal


Relevância:

70.00%

Publicador:

Resumo:

This paper presents the results of an in-depth study of the Barkhausen effect signal properties of plastically deformed Fe-2%Si samples. The investigated samples were deformed by cold rolling up to a plastic strain of epsilon(p) = 8%. The first approach consisted of time-domain-resolved pulse and frequency analysis of the Barkhausen noise signals, whereas the complementary study consisted of time-resolved pulse count analysis as well as a total pulse count. The latter included determination of the time distribution of pulses for different threshold voltage levels, as well as the total pulse count as a function of both the amplitude and the duration of the pulses. The results suggest that the observed increase in Barkhausen noise signal intensity as a function of deformation level is mainly due to an increase in the number of larger pulses.
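The time-resolved, threshold-level pulse count described above can be sketched as follows; the signal, threshold levels and pulse definition here are illustrative, not the paper's experimental setup:

```python
import numpy as np

def pulse_counts(signal, thresholds):
    """Count excursions of |signal| above each threshold level.

    A 'pulse' is a contiguous run of samples above the threshold;
    for each threshold the function returns the duration (in samples)
    of every pulse, mirroring a threshold-level pulse count analysis.
    """
    results = {}
    for thr in thresholds:
        above = np.abs(signal) > thr
        # rising/falling edges of the boolean mask
        edges = np.diff(above.astype(int))
        starts = np.flatnonzero(edges == 1) + 1
        ends = np.flatnonzero(edges == -1) + 1
        if above[0]:
            starts = np.r_[0, starts]
        if above[-1]:
            ends = np.r_[ends, above.size]
        results[thr] = ends - starts  # duration of each pulse
    return results

# toy noise burst: three pulses of different amplitude and duration
x = np.zeros(100)
x[10:15] = 1.0   # 5-sample pulse, amplitude 1.0
x[40:50] = 2.5   # 10-sample pulse, amplitude 2.5
x[70:72] = 0.6   # 2-sample pulse, amplitude 0.6
counts = pulse_counts(x, thresholds=[0.5, 2.0])
```

A histogram of `counts` over amplitude and duration bins would then give the total pulse count distributions the abstract refers to.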

Relevância:

70.00%

Publicador:

Resumo:

We report on the detection of the transport Barkhausen-like noise (TBN) in polycrystalline samples of Bi(1.65)Pb(0.35)Sr(2)Ca(2) Cu(3)O(10+delta) (Bi-2223) which were subjected to different uniaxial compacting pressures. The transport Barkhausen-like noise was measured when the sample was subjected to an ac triangular-shape magnetic field (f similar to 1 Hz) with maximum amplitude B(max) approximate to 5.5 mT, in order to avoid the flux penetration within the superconducting grains. Analysis of the TBN signal, measured for several values of excitation current density, indicated that the applied magnetic field in which the noise signal first appears, B(a)(t(i)), is closely related to the magnetic-flux pinning capability of the material. The combined results are consistent with the existence of three different superconducting levels within the samples: (i) the superconducting grains; (ii) the superconducting clusters; and (iii) the weak-links. We finally argue that TBN measurements constitute a powerful tool for probing features of the intergranular transport properties in polycrystalline samples of high-T(c) superconductors.
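The extraction of the onset field B(a)(t(i)), the applied field at which the noise signal first appears, can be illustrated with a minimal sketch; the detection threshold and the field ramp below are hypothetical, not the measured data:

```python
import numpy as np

def onset_field(B_applied, noise_v, threshold):
    """Return the applied field at which the noise signal first
    exceeds a detection threshold (illustrative definition of the
    onset field B_a(t_i))."""
    idx = np.flatnonzero(np.abs(noise_v) > threshold)
    return B_applied[idx[0]] if idx.size else None

# toy half-cycle of the triangular field ramp up to B_max = 5.5 mT,
# with noise that "switches on" once the field exceeds 3 mT
B = np.linspace(0.0, 5.5, 1101)            # mT
v = np.where(B > 3.0, 1e-3, 0.0)           # V, toy TBN signal
b_on = onset_field(B, v, threshold=5e-4)
```

Repeating this for each excitation current density would give the B(a)(t(i)) dependence the abstract relates to flux pinning.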

Relevância:

60.00%

Publicador:

Resumo:

Master's degree in Radiation Applied to Health Technologies - Specialization: Digital X-ray Imaging

Relevância:

60.00%

Publicador:

Resumo:

Risk assessment is one of the main pillars of the framework directive and of other directives concerning health and safety. It is also the basis of effective safety and health management, as it is essential to reducing work-related accidents and occupational diseases. To survey the hazards that may be present in the workplace, the usual procedures are: i) gathering information about tasks/activities, employees, equipment, legislation and standards; ii) observing the tasks; and iii) quantifying the respective risks using the most suitable of the available risk assessment methodologies. From this preliminary evaluation of a welding plant, and among the different measurable parameters, noise was considered the most critical. This paper focuses not only on the usual approach to noise risk assessment but also on another approach that may allow us to identify the technique with which a weld is being performed.
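Where noise is the quantified risk, the standard metric is the daily exposure level, an energy average over the tasks of the working day. A minimal sketch, assuming the common L_EX,8h formula and invented task levels and durations:

```python
import math

def lex_8h(tasks):
    """Daily noise exposure level L_EX,8h in dB(A) from per-task
    equivalent levels, using the standard energy-average formula
    L_EX,8h = 10*log10( sum_i (T_i/T0) * 10^(L_i/10) ), with T0 = 8 h.
    `tasks` is a list of (LAeq_dBA, duration_hours) pairs."""
    T0 = 8.0
    total = sum((t / T0) * 10 ** (L / 10) for L, t in tasks)
    return 10 * math.log10(total)

# illustrative welding-shop day: 4 h of welding at 95 dB(A)
# and 4 h of lighter work at 80 dB(A)
daily = lex_8h([(95, 4), (80, 4)])
print(round(daily, 1))
```

Note how the louder task dominates the energy sum: halving its duration only reduces L_EX,8h by about 3 dB.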

Relevância:

60.00%

Publicador:

Resumo:

Typically, MEG source reconstruction is used to estimate the distribution of current flow on a single anatomically derived cortical surface model. In this study we use two such models, representing superficial and deep cortical laminae. We establish how well we can discriminate between these two cortical layer models based on the same MEG data in the presence of different levels of co-registration noise, signal-to-noise ratio (SNR) and cortical patch size. We demonstrate that it is possible to make a distinction between superficial and deep cortical laminae for co-registration noise of less than 2 mm translation and 2° rotation at SNR > 11 dB. We also show that an incorrect estimate of cortical patch size will tend to bias layer estimates. We then use a 3D printed head-cast (Troebinger et al., 2014) to achieve comparable levels of co-registration noise, in an auditory evoked response paradigm, and show that it is possible to discriminate between these cortical layer models in real data.
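The co-registration noise levels quoted above (translation and rotation between head and sensor frames) can be simulated by applying a small rigid-body perturbation to sensor positions. A minimal sketch, assuming a rotation about a single axis and the illustrative 2 mm / 2° bounds:

```python
import numpy as np

def perturb_coregistration(points_mm, trans_mm=2.0, rot_deg=2.0, rng=None):
    """Apply a random rigid-body perturbation (translation plus a
    rotation about the z axis) to 3D points, mimicking co-registration
    error between coordinate frames. Magnitudes are illustrative."""
    rng = np.random.default_rng(rng)
    t = rng.uniform(-trans_mm, trans_mm, size=3)
    a = np.deg2rad(rng.uniform(-rot_deg, rot_deg))
    R = np.array([[np.cos(a), -np.sin(a), 0.0],
                  [np.sin(a),  np.cos(a), 0.0],
                  [0.0,        0.0,       1.0]])
    return points_mm @ R.T + t

# two toy sensor positions ~10 cm from the origin, in mm
pts = np.array([[100.0, 0.0, 0.0], [0.0, 100.0, 0.0]])
moved = perturb_coregistration(pts, rng=0)
print(np.linalg.norm(moved - pts, axis=1))  # per-sensor displacement, mm
```

Running the layer-discrimination analysis against many such perturbations is one way to map performance as a function of co-registration noise.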

Relevância:

60.00%

Publicador:

Resumo:

BACKGROUND: Iterative reconstruction (IR) techniques reduce image noise in multidetector computed tomography (MDCT) imaging. They can therefore be used to reduce radiation dose while maintaining diagnostic image quality nearly constant. However, CT manufacturers offer several strength levels of IR to choose from. PURPOSE: To determine the optimal strength level of IR in low-dose MDCT of the cervical spine. MATERIAL AND METHODS: Thirty consecutive patients investigated by low-dose cervical spine MDCT were prospectively studied. Raw data were reconstructed using filtered back-projection and sinogram-affirmed IR (SAFIRE, strength levels 1 to 5) techniques. Image noise, signal-to-noise ratio (SNR), and contrast-to-noise ratio (CNR) were measured at C3-C4 and C6-C7 levels. Two radiologists independently and blindly evaluated various anatomical structures (both dense and soft tissues) using a 4-point scale. They also rated the overall diagnostic image quality using a 10-point scale. RESULTS: As IR strength levels increased, image noise decreased linearly, while SNR and CNR both increased linearly at C3-C4 and C6-C7 levels (P < 0.001). For the intervertebral discs, the content of neural foramina and dural sac, and for the ligaments, subjective image quality scores increased linearly with increasing IR strength level (P ≤ 0.03). Conversely, for the soft tissues and trabecular bone, the scores decreased linearly with increasing IR strength level (P < 0.001). Finally, the overall diagnostic image quality scores increased linearly with increasing IR strength level (P < 0.001). CONCLUSION: The optimal strength level of IR in low-dose cervical spine MDCT depends on the anatomical structure to be analyzed. For the intervertebral discs and the content of neural foramina, high strength levels of IR are recommended.
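The ROI-based metrics used in CT studies like this one follow standard definitions; a minimal sketch with synthetic ROI values (the numbers are illustrative, not the study's measurements):

```python
import numpy as np

def snr_cnr(roi_signal, roi_contrast, roi_background):
    """ROI-based image quality metrics as commonly defined in CT work:
    SNR = mean(signal ROI) / sd(background ROI),
    CNR = (mean(signal ROI) - mean(contrast ROI)) / sd(background ROI).
    ROI choices here are illustrative, not the paper's exact ones."""
    noise = np.std(roi_background)
    snr = np.mean(roi_signal) / noise
    cnr = (np.mean(roi_signal) - np.mean(roi_contrast)) / noise
    return snr, cnr

rng = np.random.default_rng(0)
bone = 300 + rng.normal(0, 10, 1000)   # dense structure, HU-like toy values
disc = 80 + rng.normal(0, 10, 1000)    # soft tissue, HU-like toy values
air = rng.normal(0, 10, 1000)          # background noise ROI
snr, cnr = snr_cnr(bone, disc, air)
```

Lowering image noise (e.g. via stronger IR) shrinks the denominator, which is why both SNR and CNR rise with IR strength in the results above.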

Relevância:

60.00%

Publicador:

Resumo:

OBJECTIVE: To compare image quality of a standard-dose (SD) and a low-dose (LD) cervical spine CT protocol using filtered back-projection (FBP) and iterative reconstruction (IR). MATERIALS AND METHODS: Forty patients investigated by cervical spine CT were prospectively randomised into two groups: SD (120 kVp, 275 mAs) and LD (120 kVp, 150 mAs), both applying automatic tube current modulation. Data were reconstructed using both FBP and sinogram-affirmed IR. Image noise, signal-to-noise (SNR) and contrast-to-noise (CNR) ratios were measured. Two radiologists independently and blindly assessed the following anatomical structures at C3-C4 and C6-C7 levels, using a four-point scale: intervertebral disc, content of neural foramina and dural sac, ligaments, soft tissues and vertebrae. They subsequently rated overall image quality using a ten-point scale. RESULTS: For both protocols and at each disc level, IR significantly decreased image noise and increased SNR and CNR, compared with FBP. SNR and CNR were statistically equivalent in LD-IR and SD-FBP protocols. Regardless of the dose and disc level, the qualitative scores with IR compared with FBP, and with LD-IR compared with SD-FBP, were significantly higher or not statistically different for intervertebral discs, neural foramina and ligaments, while significantly lower or not statistically different for soft tissues and vertebrae. The overall image quality scores were significantly higher with IR compared with FBP, and with LD-IR compared with SD-FBP. CONCLUSION: LD-IR cervical spine CT provides better image quality for intervertebral discs, neural foramina and ligaments, and worse image quality for soft tissues and vertebrae, compared with SD-FBP, while reducing radiation dose by approximately 40 %.

Relevância:

60.00%

Publicador:

Resumo:

This paper proposes a spatial filtering technique for the reception of pilot-aided multirate multicode direct-sequence code division multiple access (DS/CDMA) systems such as wideband CDMA (WCDMA). These systems introduce a code-multiplexed pilot sequence that can be used for the estimation of the filter weights, but the presence of the traffic signal (transmitted at the same time as the pilot sequence) corrupts that estimation and degrades the performance of the filter significantly. This is caused by the fact that although the traffic and pilot signals are usually designed to be orthogonal, the frequency selectivity of the channel degrades this orthogonality at the receiving end. Here, we propose a semi-blind technique that eliminates the self-noise caused by the code-multiplexing of the pilot. We derive analytically the asymptotic performance of both the training-only and the semi-blind techniques and compare them with the actual simulated performance. It is shown, both analytically and via simulation, that high gains can be achieved with respect to training-only based techniques.
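As a baseline for the weight estimation discussed above, a training-only LMS sketch is shown; it reproduces only the pilot-aided baseline that the paper improves on, not the semi-blind self-noise removal, and the channel, pilot and noise levels are toy assumptions:

```python
import numpy as np

def lms_weights(X, pilot, mu=0.01, n_iter=3):
    """Training-only LMS estimate of spatial filter weights w such
    that w^H x[n] tracks the known pilot symbols. X is (n_samples,
    n_antennas), complex. This is the baseline technique; the paper's
    semi-blind self-noise elimination is not reproduced here."""
    w = np.zeros(X.shape[1], dtype=complex)
    for _ in range(n_iter):
        for x, d in zip(X, pilot):
            e = d - np.vdot(w, x)          # error against pilot symbol
            w += mu * np.conj(e) * x       # complex LMS update
    return w

rng = np.random.default_rng(1)
pilot = rng.choice([-1.0, 1.0], size=500)       # known BPSK pilot
steer = np.array([1.0, 0.7 - 0.2j])             # toy 2-antenna channel vector
X = (np.outer(pilot, steer)
     + 0.05 * (rng.normal(size=(500, 2)) + 1j * rng.normal(size=(500, 2))))
w = lms_weights(X, pilot)
y = X @ np.conj(w)                              # spatial filter output
```

In the scenario the paper addresses, X would also contain the code-multiplexed traffic signal, which is exactly what corrupts this training-only estimate.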

Relevância:

60.00%

Publicador:

Resumo:

Tool center point calibration is a known problem in industrial robotics. The major focus of academic research is to enhance the accuracy and repeatability of next-generation robots. However, operators of currently available robots are working within the limits of the robot's repeatability and require calibration methods suitable for these basic applications. This study was conducted in association with Stresstech Oy, which provides solutions for manufacturing quality control. Their sensor, based on the Barkhausen noise effect, requires accurate positioning. This accuracy requirement poses a tool center point calibration problem when measurements are executed with an industrial robot. Multiple options for automatic tool center point calibration are available on the market. Manufacturers provide customized calibrators for most robot types and tools. With the handmade sensors and multiple robot types that Stresstech uses, this would require a great deal of labor. This thesis introduces a calibration method that is suitable for any robot with two free digital input ports. It follows the traditional approach of using a light barrier to detect the tool in the robot coordinate system. However, this method utilizes two parallel light barriers to simultaneously measure and detect the center axis of the tool. Rotations about two axes are defined by the center axis. The last rotation, about the Z-axis, is calculated for tools whose widths along the X- and Y-axes differ. The results indicate that this method is suitable for calibrating the geometric tool center point of a Barkhausen noise sensor. In the repeatability tests, a standard deviation within the robot's repeatability was obtained. The Barkhausen noise signal was also evaluated after recalibration, and the results indicate correct calibration. However, future studies should be conducted using a more accurate manipulator, since the method employs the robot itself as a measuring device.
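The geometric idea of the two parallel light barriers, two trigger points defining the tool's center axis, can be sketched as follows; the barrier geometry and coordinates are illustrative, not the thesis's actual setup:

```python
import numpy as np

def axis_from_crossings(p_lower, p_upper):
    """Estimate the tool center axis from two trigger points measured
    with two parallel light barriers at different heights (illustrative
    geometry). Returns the unit axis direction and its tilt angles, in
    degrees, about the X and Y axes relative to the robot Z axis."""
    d = np.asarray(p_upper, float) - np.asarray(p_lower, float)
    d /= np.linalg.norm(d)
    rx = np.degrees(np.arctan2(d[1], d[2]))   # tilt seen in the y-z plane
    ry = np.degrees(np.arctan2(d[0], d[2]))   # tilt seen in the x-z plane
    return d, rx, ry

# toy case: the axis crosses the lower barrier at z = 0 and the upper
# barrier at z = 50 mm, offset by 1 mm in x -> a small tilt about Y
d, rx, ry = axis_from_crossings([0.0, 0.0, 0.0], [1.0, 0.0, 50.0])
```

The remaining rotation about the Z-axis would then be resolved separately, using the differing tool widths along X and Y as described above.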

Relevância:

60.00%

Publicador:

Resumo:

The best way to detect breast cancer is by screening mammography. Mammography equipment is dedicated and requires rigorous quality control in order to produce good-quality images and detect this disease early. Digital equipment is relatively new on the market, and there is no national rule for quality control covering the several types of digital detectors. This study set out to compare two different quality control test manuals provided by the manufacturers of digital mammography equipment, and also to compare them to the "European guidelines for quality assurance in breast cancer screening and diagnosis" (2006). The studied units were the Senographe 2000D from General Electric (GE) and the Hologic Selenia Lorad. Both are digital mammography units; the GE unit uses an indirect digital system and the other a direct digital system. Physical parameters of the image were studied, such as spatial resolution, contrast resolution, noise, signal-to-noise ratio, contrast-to-noise ratio and modulation transfer function. After that, a study of the importance of quality control and of the requirements to implement a Quality Assurance Program was carried out. Data were collected to compare the manuals by checking which tests are specified and the minimum frequency at which they should be conducted according to each manufacturer. The tests were performed using different methodologies and the results were compared. The examined tests were: breast entrance skin dose, mean glandular dose, contrast-to-noise ratio, signal-to-noise ratio, automatic exposure control and automatic density control, modulation transfer function, equipment resolution, homogeneity and ghosting.

Relevância:

60.00%

Publicador:

Resumo:

This thesis presents a CMOS amplifier with high common-mode rejection designed in UMC 130 nm technology. The goal is to achieve a high amplification factor for a wide range of biological signals (with frequencies in the range of 10 Hz-1 kHz) and to reject the common-mode noise signal. A data acquisition system is presented, composed of a Delta-Sigma-like modulator and an antenna, which is the core of a portable low-complexity radio system; the amplifier is designed to interface the data acquisition system with a sensor that acquires the electrical signal. The modulator asynchronously acquires and samples human muscle activity, sending a quasi-digital pattern that encodes the acquired signal. There is only a minor loss of information in translating the muscle activity into this pattern, compared with an encoding technique that uses a standard digital signal via Impulse-Radio Ultra-Wide Band (IR-UWB). The biological signals needed for electromyographic analysis have an amplitude of 10-100 μV and need to be highly amplified and separated from the overwhelming 50 mV common-mode noise signal. Various proof-of-concept tests are presented, as well as evidence that the design works with different sensors, such as radiation measurement for dosimetry studies.
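The modulator's exact architecture is not detailed in the abstract; as an illustration of a quasi-digital encoding, here is a generic integrate-and-fire sketch in which the timing of output pulses, rather than digital words, carries the signal amplitude:

```python
import numpy as np

def integrate_and_fire(signal, threshold):
    """Encode a sampled waveform as pulse-event indices: integrate the
    rectified input and emit a pulse each time the accumulated area
    crosses the threshold. Pulse density then tracks signal amplitude,
    giving a simple quasi-digital representation."""
    acc, events = 0.0, []
    for i, s in enumerate(signal):
        acc += abs(s)
        if acc >= threshold:
            events.append(i)
            acc -= threshold
    return events

# toy amplitude step: a quiet half followed by an active half,
# standing in for low vs. high muscle activity
t = np.linspace(0, 1, 1000)
x = np.where(t < 0.5, 0.2, 1.0)
ev = integrate_and_fire(x, threshold=5.0)
```

The decoder only needs the pulse times to reconstruct the envelope, which is what makes such patterns attractive for low-complexity radio links.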

Relevância:

60.00%

Publicador:

Resumo:

A system to evaluate the efficiency of nanoparticles in hyperthermia applications is presented. The method allows a direct measurement of the power dissipated by the nanoparticles through the determination of the first harmonic component of the in-quadrature magnetic moment induced by the applied field. The magnetic moment is measured using an induction method. To avoid errors and reduce the noise signal, a double in-phase demodulation technique is used. To test the system's viability, we have measured nanowires, nanoparticles and copper samples of different volumes, comparing experimental and modeled results.
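The extraction of the first-harmonic in-quadrature component can be sketched as a digital lock-in demodulation; the drive frequency and the amplitudes below are invented test values, not the system's actual parameters:

```python
import numpy as np

def first_harmonic(v, fs, f0):
    """First-harmonic components of v(t) via digital lock-in
    demodulation: multiply by cos/sin references at the drive frequency
    and low-pass (here: average over an integer number of periods).
    Demodulating both I and Q channels rejects uncorrelated noise."""
    t = np.arange(v.size) / fs
    i_comp = 2 * np.mean(v * np.cos(2 * np.pi * f0 * t))   # in phase
    q_comp = 2 * np.mean(v * np.sin(2 * np.pi * f0 * t))   # in quadrature
    return i_comp, q_comp

fs, f0 = 100_000.0, 1_000.0           # exactly 100 periods in the record
t = np.arange(10_000) / fs
v = 0.3 * np.cos(2 * np.pi * f0 * t) + 0.8 * np.sin(2 * np.pi * f0 * t)
i_comp, q_comp = first_harmonic(v, fs, f0)
```

The dissipated power is proportional to the in-quadrature amplitude (`q_comp` here), which is why the method measures it directly.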

Relevância:

60.00%

Publicador:

Resumo:

The introduction of open-plan offices in the 1960s, with the intent of making the workplace more flexible, efficient, and team-oriented, resulted in a higher noise floor level, which not only made concentrated work more difficult but also caused physiological problems, such as increased stress, in addition to a loss of speech privacy. Irrelevant background human speech, in particular, has proven to be a major factor in disrupting concentration and lowering performance. Therefore, reducing the intelligibility of speech has been a goal of increasing importance in recent years. One method employed to do so is the use of masking noises, which consists of emitting a continuous noise signal over a loudspeaker system that conceals the perturbing speech. Studies have shown that while effective, the maskers employed to date (normally filtered pink noise) are generally poorly accepted by users. The collaborative "Private Workspace" project, within the scope of which this thesis was carried out, attempts to develop a coupled, adaptive noise masking system along with a physical structure to be used in open-plan offices so as to combat these issues. There is evidence to suggest that nature sounds may be better accepted as maskers, in part because they can have a visual object that acts as the source of the sound. Direct audio recordings are not recommended for various reasons, and thus the nature sounds must be synthesized. The work done consists of the synthesis of a sound texture to be used as a masker, as well as its evaluation. The sound texture is composed of two parts: a wind-like noise synthesized with subtractive synthesis, and a leaf-like noise synthesized through granular synthesis.
Different combinations of these two noises produced five variations of the masker, which were evaluated at different levels, along with white noise and pink noise, using a modified version of an Oldenburger Satztest to test for an effect on speech intelligibility, and a questionnaire to assess their subjective acceptance. The goal was to find which of the synthesized noises works best as a speech masker. This thesis first uses a theoretical introduction to establish the basics of sound perception, psychoacoustic masking, and sound texture synthesis. The design of each of the noises, as well as their respective implementations in MATLAB, is explained, followed by the procedures used to evaluate the maskers. The results obtained in the evaluation are analyzed. Lastly, conclusions are drawn, and future work and modifications to the masker are proposed.
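The wind-like component, subtractive synthesis from broadband noise, can be sketched as follows; the one-pole filter and 400 Hz cutoff are illustrative choices in Python, not the thesis's actual MATLAB implementation:

```python
import numpy as np

def wind_noise(n_samples, fs, cutoff_hz=400.0, seed=0):
    """Wind-like masker by subtractive synthesis: start from white
    noise and keep only the low-frequency part with a one-pole
    low-pass filter. Cutoff and normalisation are illustrative."""
    rng = np.random.default_rng(seed)
    white = rng.normal(size=n_samples)
    # one-pole low-pass: y[n] = a*x[n] + (1 - a)*y[n-1]
    a = 1.0 - np.exp(-2.0 * np.pi * cutoff_hz / fs)
    y = np.empty(n_samples)
    acc = 0.0
    for i, x in enumerate(white):
        acc = a * x + (1.0 - a) * acc
        y[i] = acc
    return y / np.max(np.abs(y))    # normalise to full scale

wind = wind_noise(48_000, fs=48_000.0)   # one second at 48 kHz
```

Slowly modulating the cutoff or gain over time would add the gustiness that distinguishes wind from plain filtered noise.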

Relevância:

60.00%

Publicador:

Resumo:

This thesis is focused on experimental work to deepen the knowledge of monolithic detector blocks as an alternative to segmented detectors for Positron Emission Tomography (PET). It includes the development, characterization, setting up, running and evaluation of PET demonstrator prototypes with monolithic detector blocks of Cerium-doped Lutetium Yttrium Orthosilicate (LYSO:Ce), using magnetically compatible sensors such as Avalanche Photodiodes (APDs) and Silicon Photomultipliers (SiPMs). The prototypes implemented with APDs were constructed to validate the viability of a high-sensitivity PET prototype that had previously been simulated, denominated BrainPET. This work describes and characterizes the integrated front-end electronics used in these prototypes, as well as the electronic readout system developed specifically for them. It shows the experimental set-ups used to obtain the tomographic PET images and to train the neural network algorithms used for estimating the positions of the γ photons impinging on the surface of the monolithic blocks.
Using the BrainPET prototype, satisfactory energy resolution (13 % FWHM), spatial precision of the monolithic blocks (~ 2 mm FWHM) and spatial resolution of the PET image (1.5 - 1.7 mm FWHM) in the center of the Field of View (FoV) were obtained. Moreover, we proved the imaging capabilities of this demonstrator with extended sources, considering the acquisition of two simultaneous sources of 1 mm diameter placed at known distances. However, some important limitations were also detected with the BrainPET prototype. In the first place, it was confirmed that there was a lack of flexibility in working with an Application Specific Integrated Circuit (ASIC) whose electronic design was commercial rather than in-house, along with the high cost required to modify an ASIC design with such features. Furthermore, the final characterization of the BrainPET ASIC showed a timing resolution with ample room for improvement (~ 13 ns FWHM). Taking into consideration the limitations of the BrainPET prototype, along with the technological evolution in magnetically compatible devices, the knowledge acquired with the monolithic blocks was transferred to the new sensor technology available, the SiPMs. Moreover, we opted for a new strategy in the front-end electronics, the FlexToT ASIC, an in-house ASIC based on a Time over Threshold (ToT) scheme. One of the most interesting features of a ToT architecture is the encoding of the analog input signal amplitude into the duration of the output signals, delivering digital pulses directly. This electronic architecture replaces the Analog to Digital Converters (ADCs) with Time to Digital Converters (TDCs), which are easily implemented in Field Programmable Gate Arrays (FPGAs), reducing the power consumption and the complexity of the design. A new prototype demonstrator based on SiPMs was implemented to validate the FlexToT ASIC for monolithic or segmented blocks.
The design and characterization of the front-end electronics needed to read out the signals from the ASIC was carried out by evaluating its linearity and dynamic range, its performance in the presence of an external noise signal, and the differential nonlinearity obtained with the TDCs implemented in the FPGA. Furthermore, the electronics presented in this work are capable of working at high count rates and of discriminating between different phoswich scintillators. The FlexToT ASIC provides an excellent coincidence time resolution for events corresponding to the 511 keV photopeak (128 ps FWHM), resolving the poor timing resolution of the BrainPET prototype. Furthermore, monolithic blocks read out by FlexToT ASICs provide an energy resolution of 15.4 % FWHM at 511 keV. Finally, good results were obtained for the quality of the PET image and the resolving power of the FlexToT demonstrator, with spatial resolutions in the centre of the FoV of about 1.4 mm FWHM.
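The Time-over-Threshold principle behind the FlexToT ASIC can be illustrated in a few lines: the output is the time the detector pulse spends above a fixed threshold, which grows with deposited energy. The pulse shape and threshold below are invented, not the ASIC's actual response:

```python
import numpy as np

def time_over_threshold(pulse, threshold, dt_ns):
    """Time-over-Threshold (ToT) readout: the duration for which the
    sampled pulse stays above a fixed threshold, used as a proxy for
    deposited energy (larger pulses stay above the threshold longer)."""
    above = pulse > threshold
    return above.sum() * dt_ns

# toy normalised scintillation-like pulse shape, sampled every 1 ns
t = np.arange(0, 500.0, 1.0)                     # ns
shape = (t / 40.0) * np.exp(1 - t / 40.0)        # peaks at 1.0 at t = 40 ns
tot_low = time_over_threshold(0.5 * shape, threshold=0.1, dt_ns=1.0)
tot_high = time_over_threshold(2.0 * shape, threshold=0.1, dt_ns=1.0)
```

Because the output is already a pulse duration, a TDC in an FPGA can digitize it directly, which is the design simplification described above.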

Relevância:

60.00%

Publicador:

Resumo:

Active noise control, or active noise cancellation, consists of attenuating the noise in an acoustic environment by emitting a signal equal to, but in phase opposition with, the undesired noise. The sum of both signals in the acoustic medium results in mutual cancellation, so that the residual noise level is much lower than the original. The operation of these systems is based on the principles of wave behavior discovered by Augustin-Jean Fresnel, Christiaan Huygens and Thomas Young, among others. Since the 1930s, active noise control system prototypes have been developed, though these first ideas were practically unrealizable or required frequent manual adjustment, and were therefore unusable. In the 1970s, the American researcher Bernard Widrow developed the theory of adaptive signal processing and the Least Mean Squares (LMS) algorithm.
This made it possible to implement digital filters whose response adapts dynamically to variable environment conditions. With the emergence of digital signal processors in the 1980s and their later evolution, active noise cancellation systems based on adaptive digital signal processing became attainable. Nowadays, active noise control systems have been successfully implemented in automobiles, planes, headphones and racks of professional equipment. Active noise control is based on the FxLMS algorithm, a modified version of the LMS adaptive filtering algorithm that compensates for the acoustic response of the environment. It is therefore possible to dynamically filter a noise reference signal to obtain the appropriate cancelling signal. As the acoustic cancellation space is limited to dimensions of roughly one tenth of the wavelength, noise attenuation is only viable at low frequencies; the limit is generally accepted to be around 500 Hz. For mid and high frequencies, passive conditioning and isolation methods must be used, as they produce very good results. The objective of this project is to develop an active cancellation system for periodic noise, using consumer electronics and a DSP development kit based on a very low-cost processor. Several C code modules have been developed for the DSP, applying the appropriate signal processing to the noise reference. This processed signal, once emitted, produces the acoustic cancellation. The developed code has been tested by generating the undesired noise signal within the DSP itself. This signal is emitted through a loudspeaker simulating the noise source to be cancelled, while another loudspeaker emits an FxLMS-filtered version of the same signal. Tests have been performed with several versions of the algorithm, obtaining attenuations of 20 to 35 dB measured in narrow frequency bands around the generator frequency, and of 8 to 15 dB measured in broadband.
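The FxLMS loop described above can be sketched in a single-channel simulation; the primary and secondary paths below are toy assumptions, and the real system runs as C modules on a DSP with acoustic paths measured in situ:

```python
import numpy as np

def fxlms(x, d, s, n_taps=16, mu=0.02):
    """Single-channel FxLMS sketch: adapt the control filter w so that
    the anti-noise, after passing through the (assumed known) secondary
    path s, cancels the disturbance d at the error microphone. x is the
    noise reference; the reference is filtered by s before the update,
    which is what distinguishes FxLMS from plain LMS."""
    L = max(n_taps, len(s))
    xh = np.zeros(L)               # reference history, newest first
    yh = np.zeros(len(s))          # anti-noise history
    fxh = np.zeros(n_taps)         # filtered-reference history
    w = np.zeros(n_taps)
    e = np.zeros(len(x))
    for n in range(len(x)):
        xh = np.r_[x[n], xh[:-1]]
        y = w @ xh[:n_taps]                      # control filter output
        yh = np.r_[y, yh[:-1]]
        e[n] = d[n] - s @ yh                     # residual at error mic
        fxh = np.r_[s @ xh[:len(s)], fxh[:-1]]   # reference through s
        w += mu * e[n] * fxh                     # FxLMS weight update
    return e

N = 4000
x = np.sin(2 * np.pi * 0.05 * np.arange(N))     # periodic noise reference
d = np.convolve(x, [0.0, 0.0, 0.8])[:N]         # disturbance via toy primary path
e = fxlms(x, d, s=np.array([0.0, 0.5]))         # toy secondary path: delay + gain
```

For this periodic reference the residual decays towards zero, mirroring the large narrow-band attenuations reported above; broadband noise converges less completely, as the measured 8 to 15 dB figures suggest.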