887 results for fast Fourier-transform algorithm


Relevance: 100.00%

Abstract:

Brain mechanisms associated with artistic talents or skills are still not well understood. This exploratory study investigated differences in the brain activity of artists and non-artists while they drew previously presented perspective line drawings from memory and completed other drawing-related tasks. Electroencephalography (EEG) data were analyzed for power in the frequency domain by means of a Fast Fourier Transform (FFT). Low Resolution Brain Electromagnetic Tomography (LORETA) was applied to localize the significant effects. During drawing and related tasks, decreased power was seen in artists compared to non-artists, mainly in the upper alpha frequency range. Decreased alpha power is often associated with an increase in cognitive functioning and may reflect enhanced semantic memory performance and object recognition processes in artists. These assumptions are supported by the behavioral data assessed in this study and complement previous findings showing increased parietal activations in non-artists compared to artists while drawing. However, due to the exploratory nature of the analysis, additional confirmatory studies will be needed.
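The FFT power analysis described above follows the usual route: taper each EEG segment, transform, and sum spectral power over the band of interest. A minimal NumPy sketch (synthetic data; the sampling rate and the 10-12 Hz "upper alpha" band edges are illustrative assumptions, not the study's exact settings):

    import numpy as np

    def band_power(segment, fs, f_lo, f_hi):
        """FFT-based power of one EEG segment within [f_lo, f_hi] Hz."""
        tapered = segment * np.hanning(len(segment))      # reduce spectral leakage
        power = np.abs(np.fft.rfft(tapered)) ** 2
        freqs = np.fft.rfftfreq(len(segment), d=1.0 / fs)
        return power[(freqs >= f_lo) & (freqs <= f_hi)].sum()

    fs = 250.0                                            # sampling rate (Hz), assumed
    t = np.arange(0, 4, 1 / fs)
    eeg = np.sin(2 * np.pi * 11 * t) + 0.5 * np.random.randn(t.size)
    print(band_power(eeg, fs, 10.0, 12.0))                # illustrative upper-alpha band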

Relevance: 100.00%

Abstract:

TEMPERA (TEMPERature RAdiometer) is a new ground-based radiometer which measures radiation emitted by the atmosphere in the frequency range 51–57 GHz. With this instrument it is possible to measure temperature profiles from the ground up to about 50 km, making it the first ground-based instrument capable of retrieving temperature profiles simultaneously for the troposphere and the stratosphere. The measurement is done with a filter bank in combination with a digital fast Fourier transform spectrometer. A hot load and a noise diode are used as stable calibration sources. The optics consist of an off-axis parabolic mirror that collects the sky radiation. Because of the Zeeman effect on the emission lines used, which is apparent in the measured spectra, the maximum height for the temperature retrieval is about 50 km. The performance of TEMPERA is validated by comparison with nearby radiosonde data and satellite data from the Microwave Limb Sounder on the Aura satellite. In this paper we present the design and measurement method of the instrument, followed by a description of the retrieval method and a validation of TEMPERA data over its first year of operation, 2012.
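The hot load and noise diode mentioned here are the two reference points of a standard two-point (linear) total-power calibration, which maps detector counts to brightness temperature. A minimal sketch of that idea, with illustrative numbers rather than TEMPERA's actual calibration constants:

    import numpy as np

    T_hot = 300.0      # hot-load physical temperature (K), illustrative
    T_nd = 100.0       # equivalent noise-diode temperature (K), illustrative
    c_hot = 5.0e5      # detector counts on the hot load
    c_hotnd = 6.2e5    # counts on the hot load with the noise diode fired

    gain = (c_hotnd - c_hot) / T_nd       # counts per kelvin
    T_rec = c_hot / gain - T_hot          # receiver noise temperature (K)

    def counts_to_tb(counts):
        """Convert sky counts to brightness temperature (K)."""
        return counts / gain - T_rec

    print(counts_to_tb(np.array([4.1e5, 4.3e5])))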

Relevance: 100.00%

Abstract:

Since November 1994, the GROund-based Millimeter-wave Ozone Spectrometer (GROMOS) has measured stratospheric and lower mesospheric ozone in Bern, Switzerland (46.95° N, 7.44° E). GROMOS is part of the Network for the Detection of Atmospheric Composition Change (NDACC). In July 2009, a fast Fourier transform spectrometer (FFTS) was added as a backend to GROMOS. The new FFTS and the original filter bench (FB) measured in parallel for over two years. In October 2011, the FB was turned off and the FFTS is now used to continue the ozone time series. For a consolidated ozone time series within the framework of NDACC, the quality of the stratospheric ozone profiles obtained with the FFTS has to be assessed. The FFTS results from July 2009 to December 2011 are compared to ozone profiles retrieved by the FB. FFTS and FB of the GROMOS microwave radiometer agree within 5% above 20 hPa. A later harmonization of both time series will be realized by taking the FFTS as the benchmark for the FB. Ozone profiles from the FFTS are also compared to coincident lidar measurements from the Observatoire de Haute-Provence (OHP), France. For the time period studied, a maximum mean difference (lidar − GROMOS FFTS) of +3.8% at 3.1 hPa and a minimum mean difference of +1.4% at 8 hPa are found. Further intercomparisons with ozone profiles from other independent instruments are performed: satellite measurements include MIPAS onboard ENVISAT, SABER onboard TIMED, MLS onboard EOS Aura and ACE-FTS onboard SCISAT-1. Additionally, ozonesondes launched from Payerne, Switzerland, are used in the lower stratosphere. Mean relative differences between GROMOS FFTS and these independent instruments are less than 10% between 50 and 0.1 hPa.
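Intercomparisons like these reduce to mean relative difference profiles computed over coincident measurements on a common pressure grid. A minimal sketch with synthetic profiles (the study's coincidence criteria and retrieval grids are not reproduced):

    import numpy as np

    def mean_relative_difference(ref, test):
        """Per-level mean relative difference (%), shape (n_levels,)."""
        return 100.0 * np.mean((test - ref) / ref, axis=0)

    rng = np.random.default_rng(0)
    ref = 5e-6 + 1e-7 * rng.standard_normal((100, 40))           # e.g. FB profiles
    test = ref * (1.0 + 0.03 * rng.standard_normal((100, 40)))   # e.g. FFTS profiles
    print(mean_relative_difference(ref, test).round(2))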

Relevance: 100.00%

Abstract:

INTRODUCTION: Experience-based adaptation of emotional responses is an important faculty for cognitive and emotional functioning. Professional musicians represent an ideal model in which to elicit experience-driven changes in the emotional processing domain. The changes in the central representation of emotional arousal that come with musical expertise are still largely unknown. The aim of the present study was to investigate the electroencephalogram (EEG) correlates of experience-driven changes in the domain of emotional arousal. Therefore, the differences in perceived (subjective arousal via ratings) and physiologically measured (EEG) arousal between amateur and professional musicians were examined. PROCEDURE: A total of 15 professional and 19 amateur musicians listened to the first movement of Ludwig van Beethoven's 5th symphony (duration ≈ 7.4 min), during which a continuous 76-channel EEG was recorded. In a second session, the participants evaluated their emotional arousal during listening. In a tonic analysis, we examined the average EEG data over the time course of the music piece. For a phasic analysis, a fast Fourier transform was performed and covariance maps of spectral power were computed in association with the subjective arousal ratings. RESULTS: The subjective arousal ratings of the professional musicians were more consistent than those of the amateur musicians. In the tonic EEG analysis, mid-frontal theta activity was observed in the professionals. In the phasic EEG, the professionals exhibited increases in posterior alpha, central delta, and beta rhythms during high arousal. DISCUSSION: Professionals exhibited different and/or more intense patterns of emotional activation when they listened to the music. The results of the present study underscore the impact of musical experience on emotional reactions.
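The phasic analysis pairs a per-window spectral power estimate with the continuous arousal rating, channel by channel. A minimal sketch of such a power-rating covariance map (synthetic data; the window length and the 8-13 Hz band are illustrative assumptions):

    import numpy as np

    fs = 250                              # sampling rate (Hz), assumed
    n_ch, n_win, win = 76, 200, 2 * fs    # 76 channels, 200 two-second windows
    rng = np.random.default_rng(1)
    eeg = rng.standard_normal((n_ch, n_win, win))
    arousal = rng.standard_normal(n_win)  # one (z-scored) rating per window

    freqs = np.fft.rfftfreq(win, 1 / fs)
    alpha = (freqs >= 8) & (freqs <= 13)
    power = (np.abs(np.fft.rfft(eeg, axis=-1)) ** 2)[..., alpha].sum(-1)

    # Covariance of each channel's power series with the arousal time course.
    cov_map = ((power - power.mean(1, keepdims=True))
               * (arousal - arousal.mean())).mean(axis=1)
    print(cov_map.shape)                  # (76,): one value per channel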

Relevance: 100.00%

Abstract:

Stratospheric ozone is of major interest as it absorbs most of the harmful UV radiation from the sun, allowing life on Earth. Ground-based microwave remote sensing is the only method that allows the measurement of ozone profiles up to the mesopause, around the clock and under different weather conditions, with high time resolution. In this paper a novel ground-based microwave radiometer is presented. It is called GROMOS-C (GRound-based Ozone MOnitoring System for Campaigns), and it has been designed to measure the vertical profile of the ozone distribution in the middle atmosphere by observing ozone emission spectra at a frequency of 110.836 GHz. The instrument has a compact design that makes it transportable and suitable for outdoor use in campaigns, an advantageous feature lacking in present-day ozone radiometers, and it is operated through remote control. GROMOS-C is a total power radiometer which uses a pre-amplified heterodyne receiver and a digital fast Fourier transform spectrometer for the spectral analysis. Among its main new features, the incorporation of different calibration loads stands out; these include a noise diode and a new type of blackbody target, based on Peltier elements, designed specifically for this instrument. The calibration scheme does not depend on the use of liquid nitrogen, so GROMOS-C can be operated at remote sites with no maintenance requirements. In addition, the instrument can be switched in frequency to observe the CO line at 115 GHz. A description of the main characteristics of GROMOS-C is included in this paper, as well as the results of a first campaign at the High Altitude Research Station Jungfraujoch (HFSJ), Switzerland. The validation is performed by comparing the retrieved profiles against equivalent profiles from MLS (Microwave Limb Sounder) satellite data, ECMWF (European Centre for Medium-Range Weather Forecasts) model data, and our nearby NDACC (Network for the Detection of Atmospheric Composition Change) ozone radiometer measuring at Bern.

Relevance: 100.00%

Abstract:

Accurate calculation of the absorbed dose to target tumors and normal tissues in the body is an important requirement for establishing fundamental dose-response relationships for radioimmunotherapy. Two major obstacles have been the difficulty of obtaining an accurate patient-specific 3-D activity map in vivo and of calculating the resulting absorbed dose. This study investigated a methodology for 3-D internal dosimetry which integrates the 3-D biodistribution of the radionuclide acquired from SPECT with a dose-point kernel convolution technique to provide the 3-D distribution of absorbed dose. Accurate SPECT images were reconstructed with appropriate methods for noise filtering, attenuation correction, and Compton scatter correction. The SPECT images were converted into activity maps using a calibration phantom. The activity map was convolved with a ¹³¹I dose-point kernel using a 3-D fast Fourier transform to yield a 3-D distribution of absorbed dose. The 3-D absorbed dose map was then processed to provide the absorbed dose distribution in regions of interest. This methodology can provide heterogeneous distributions of absorbed dose in volumes of any size and shape with nonuniform distributions of activity. Comparison of the activities quantitated by our SPECT methodology to the true activities in an Alderson abdominal phantom (with spleen, liver, and spherical tumor) yielded errors of −16.3% to 4.4%. Volume quantitation errors ranged from −4.0% to 5.9% for volumes greater than 88 ml. The percentage differences between the average absorbed dose rates calculated by this methodology and the MIRD S-values were 9.1% for the liver, 13.7% for the spleen, and 0.9% for the tumor. Good agreement (percentage differences less than 8%) was found between the absorbed dose due to penetrating radiation calculated by this methodology and TLD measurements. More accurate estimates of the 3-D distribution of absorbed dose can be used as a guide in specifying the minimum activity to be administered to patients to deliver a prescribed absorbed dose to the tumor without exceeding the toxicity limits of normal tissues.
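The central operation, convolving the SPECT-derived activity map with a dose-point kernel via a 3-D FFT, can be sketched as follows (synthetic activity map and a crude isotropic stand-in kernel, not the actual ¹³¹I kernel):

    import numpy as np
    from scipy.signal import fftconvolve

    # Synthetic 3-D activity map (arbitrary units) with a hot spherical region.
    grid = np.zeros((64, 64, 64))
    x, y, z = np.ogrid[:64, :64, :64]
    grid[(x - 32) ** 2 + (y - 32) ** 2 + (z - 32) ** 2 < 8 ** 2] = 1.0

    # Stand-in dose-point kernel: truncated 1/r^2 falloff (illustrative only).
    k = 15
    kx, ky, kz = np.ogrid[-k:k + 1, -k:k + 1, -k:k + 1]
    r2 = (kx ** 2 + ky ** 2 + kz ** 2).astype(float)
    kernel = np.ones_like(r2)                      # crude finite value at r = 0
    np.divide(1.0, r2, out=kernel, where=r2 > 0)
    kernel /= kernel.sum()

    # FFT-based 3-D convolution: activity (*) kernel -> dose-rate map.
    dose = fftconvolve(grid, kernel, mode="same")
    print(dose.shape, float(dose.max()))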

Relevance: 100.00%

Abstract:

The dataset includes measurements of Microcolpia parreyssii parreyssii (Philippi, 1847) and Microcolpia parreyssii sikorai (Brusina, 1903) from Holocene deposits of Lake Petea near Oradea, Romania. Additionally, the tps-files generated with the program TpsDig2 and containing pairwise x,y-coordinates describing the outlines of the digitized images are supplied. Finally, the matrix of Fourier coefficients resulting from the Fast Fourier Transform is provided.
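One standard way to turn such digitized outlines into Fourier coefficients is to read the x,y points as a complex-valued closed curve and apply the FFT, normalizing out position and size. A minimal sketch on a synthetic outline (the dataset's actual normalization conventions are not reproduced):

    import numpy as np

    def outline_fft(xy, n_harmonics=20):
        """Fourier coefficients of a closed outline given as (n, 2) x,y points."""
        z = xy[:, 0] + 1j * xy[:, 1]        # outline as a complex signal
        z -= z.mean()                       # remove position (centroid)
        coeffs = np.fft.fft(z) / len(z)
        coeffs /= np.abs(coeffs[1])         # normalize size by the first harmonic
        return coeffs[1:n_harmonics + 1]    # drop DC, keep low harmonics

    t = np.linspace(0, 2 * np.pi, 256, endpoint=False)
    r = 1 + 0.1 * np.cos(3 * t)             # a gently lobed "shell" outline
    shell = np.c_[r * np.cos(t), r * np.sin(t)]
    print(np.abs(outline_fft(shell))[:5].round(3))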

Relevance: 100.00%

Abstract:

A morphometric analysis was performed for the late Middle Miocene bivalve species lineage of Polititapes tricuspis (Eichwald, 1829) (Veneridae: Tapetini). Specimens from various localities, grouped into two stratigraphically successive biozones, i.e. the upper Ervilia Zone and the Sarmatimactra Zone, were investigated using a multi-method approach. A Generalized Procrustes Analysis was computed for fifteen landmarks covering characteristics of the hinge, muscle scars, and pallial line. The shell outline was separately quantified by applying the Fast Fourier Transform, which reconstructs the outline as a combination of trigonometric curves. Shell size was calculated as centroid size from the landmark configuration. Shell thickness, which is not captured by either analysis, was additionally measured at the centroid. The analyses showed significant phenotypic differentiation between specimens from the two biozones. The bivalves become distinctly larger and thicker over geological time and develop circular shells with stronger cardinal teeth and a deeper pallial sinus. Data on the paleoenvironmental changes in the late Middle Miocene Central Paratethys Sea suggest that the phenotypic shifts are functional adaptations. The typical habitats for Polititapes changed to extensive, very shallow shores exposed to high wave action and tidal activity. Driven by the growing need for mechanical stability, the bivalves produced larger and thicker shells with stronger cardinal teeth. The latter are additionally shifted towards the hinge center to compensate for the lack of lateral teeth and improve stability. The deepening pallial sinus is related to a deeper burrowing habit, which is thought to prevent the animals from being washed out in the new high-energy settings.
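Centroid size, used above as the size measure, is simply the square root of the summed squared distances of all landmarks from their centroid. A minimal sketch (hypothetical landmark configuration):

    import numpy as np

    def centroid_size(landmarks):
        """Centroid size of an (n_landmarks, 2) configuration."""
        centered = landmarks - landmarks.mean(axis=0)
        return np.sqrt((centered ** 2).sum())

    config = np.array([[0.0, 0.0], [2.0, 0.1], [1.9, 1.0], [0.1, 1.1]])
    print(centroid_size(config))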

Relevance: 100.00%

Abstract:

This PhD thesis addresses the design and implementation of signal processing applications on reconfigurable FPGA platforms. This kind of platform exhibits high logic capability, incorporates dedicated signal processing elements, and provides a low-cost solution, which makes it ideal for the development of signal processing applications where intensive data processing is required in order to obtain high performance. However, the cost associated with hardware development on these platforms is high. While the increase in logic capacity of FPGA devices allows the development of complete systems, high-performance constraints require the optimization of operators at a very low level. In addition to the timing constraints imposed by these applications, area constraints related to the particular device also apply, which forces the designer to evaluate and verify different implementation alternatives. The design and implementation cycle for these applications can become so long that new FPGA models with greater capacity and higher speed often appear before the system is completed, rendering the constraints that guided its design useless.

Different methods can be used to improve productivity when developing these applications and thereby shorten their design cycle. This thesis focuses on the reuse of hardware components previously designed and verified. Although conventional HDLs allow the reuse of already defined components, their specification can be improved in order to simplify the process of incorporating components into new designs. Thus, the first part of the thesis focuses on the specification of designs based on predefined components. This specification not only improves and simplifies the process of adding components to a description, but also seeks to improve the quality of the specified design by offering more configuration options and even the ability to report characteristics of the description itself. Reusing an already described component depends largely on the information offered for its integration into a system. In this respect, conventional HDLs provide, along with the component description, only the input/output interface and a set of configuration parameters, while the remaining required information is usually supplied as external documentation. The second part of the thesis proposes a set of encapsulations whose purpose is to bundle, together with the component description, information useful for its integration into other designs: implementation details, help for configuring the component, and even information on how to configure and connect the component to carry out a given function.

Finally, a classic signal processing application, the fast Fourier transform (FFT), is chosen as a case study to illustrate both the proposed specification and the described encapsulations. The objective of this design is not only to provide practical examples of the proposed specification, but also to obtain an implementation of quality comparable to results in the literature. To this end, the design targets FPGA implementation, exploiting both general-purpose logic elements and the low-level device-specific resources available on these devices. Last, the specification of the resulting FFT is used to show how to incorporate into its interface information that assists in its selection and configuration from the early stages of the design cycle.
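The FFT chosen as the case study is, algorithmically, the radix-2 Cooley-Tukey decimation-in-time scheme that such hardware designs typically unroll into butterfly stages. A minimal software sketch of the algorithm (Python rather than the thesis' HDL, for illustration only):

    import numpy as np

    def fft_radix2(x):
        """Recursive radix-2 decimation-in-time FFT (length must be a power of 2)."""
        n = len(x)
        if n == 1:
            return x
        even = fft_radix2(x[0::2])          # FFT of even-indexed samples
        odd = fft_radix2(x[1::2])           # FFT of odd-indexed samples
        twiddle = np.exp(-2j * np.pi * np.arange(n // 2) / n)
        return np.concatenate([even + twiddle * odd, even - twiddle * odd])

    x = np.random.randn(1024).astype(complex)
    print(np.allclose(fft_radix2(x), np.fft.fft(x)))    # True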

Relevance: 100.00%

Abstract:

By using the spray pyrolysis methodology in its classical configuration we have grown self-assembled MgxZn1−xO quantum dots (size ~4–6 nm) over the whole range of compositions 0 ≤ x ≤ 1 on c-sapphire, Si (100) and quartz substrates. The composition of the quantum dots was determined by means of transmission electron microscopy-energy dispersive X-ray analysis (TEM-EDAX) and X-ray photoelectron spectroscopy. Selected area electron diffraction reveals the growth of single-phase hexagonal MgxZn1−xO quantum dots with composition 0 ≤ x ≤ 0.32 for nominal Mg concentrations in the range 0 to 45%. Nominal Mg concentrations of about 50% and above force the hexagonal lattice to undergo a phase transition to a cubic structure, resulting in the growth of mixed hexagonal and cubic phases of MgxZn1−xO in the intermediate range of Mg concentrations of 50 to 85% (0.39 ≤ x ≤ 0.77), whereas nominal Mg concentrations ≥ 90% (0.81 ≤ x ≤ 1) lead to the growth of single-phase cubic MgxZn1−xO quantum dots. High resolution transmission electron microscopy and fast Fourier transform analysis confirm these results and show clearly distinguishable hexagonal and cubic crystal structures of the respective quantum dots. A difference of 0.24 eV was detected by X-ray photoemission between the core levels (Zn 2p and Mg 1s) measured in quantum dots with hexagonal and cubic structures. The shift of these core levels can be explained by the different coordination of the cations in the hexagonal and cubic configurations. Finally, optical absorption measurements performed on single-phase hexagonal MgxZn1−xO QDs exhibited a clear shift in the optical energy gap on increasing the Mg concentration from 0 to 40%, which is explained as an effect of the substitution of Zn2+ by Mg2+ in the ZnO lattice.
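Reading lattice periodicities from a high-resolution image with the FFT amounts to locating the strongest non-DC peak of the image's 2-D spectrum. A minimal sketch on a synthetic lattice (pixel size and lattice spacing are illustrative assumptions, not the paper's data):

    import numpy as np

    px, n = 0.02, 256                      # nm per pixel (assumed), image size
    yy, xx = np.mgrid[:n, :n] * px
    img = np.cos(2 * np.pi * xx / 0.26) + 0.3 * np.random.randn(n, n)

    spec = np.abs(np.fft.fftshift(np.fft.fft2(img)))
    spec[n // 2, n // 2] = 0               # suppress the DC term
    iy, ix = np.unravel_index(spec.argmax(), spec.shape)
    fx = (ix - n // 2) / (n * px)          # spatial frequency (1/nm)
    fy = (iy - n // 2) / (n * px)
    print("lattice spacing ~ %.3f nm" % (1.0 / np.hypot(fx, fy)))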

Relevance: 100.00%

Abstract:

Dynamic soil-structure interaction has long been one of the most fascinating areas for the engineering profession. The building of large alternating machines and their effects on surrounding structures, as well as on their own functional behavior, provided the initial impetus; a large amount of experimental research was done, and the results of the Russian and German groups were especially worthwhile. Analytical results by Reissner and Shekhter were reexamined by Quinlan, Sung, et al., and finally Veletsos presented the first set of reliable results. Since then, the modeling of the homogeneous, elastic halfspace as an equivalent set of springs and dashpots has become an everyday tool in soil engineering practice, especially after the appearance of the fast Fourier transform algorithm, which makes it possible to treat the frequency-dependent characteristics of the equivalent elements in a unified fashion within the general method of analysis of the structure. Extensions to the viscoelastic case, as well as to embedded foundations and complicated geometries, have been presented by various authors. In general, they used the finite element method, with the well-known problems of geometric truncation and the subsequent use of absorbing boundaries. The properties of boundary integral equation methods are, in our opinion, especially well suited to this problem, and several previous results have confirmed our opinion. In what follows we present the general features of steady-state elastodynamics and a series of results showing the splendid performance of the BIEM. Especially interesting are the outputs obtained through the use of so-called singular elements, whose description is incorporated at the end of the paper. The reduction in computer time and the small number of elements needed to simulate realistically the global properties of the halfspace make this procedure one of the most interesting applications of the BIEM.
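The role of the FFT here is to let a frequency-dependent spring/dashpot impedance be applied line by line in the frequency domain and the response brought back to the time domain. A minimal single-degree-of-freedom sketch (the impedance law and all coefficients are illustrative, not a published halfspace solution):

    import numpy as np

    fs, n = 200.0, 2048
    load = np.zeros(n); load[10] = 1.0          # impulsive footing load
    w = 2 * np.pi * np.fft.rfftfreq(n, 1 / fs)  # angular frequency per FFT line

    m = 1.0e3                                   # foundation mass (kg), illustrative
    k_w = 5.0e6 * (1 - 0.1 * w / w.max())       # frequency-dependent stiffness
    c_w = 2.0e4 * (1 + 0.5 * w / w.max())       # growing radiation damping

    H = 1.0 / (-m * w ** 2 + 1j * w * c_w + k_w)   # receptance per frequency line
    u = np.fft.irfft(np.fft.rfft(load) * H, n)     # displacement time history
    print(u[:5])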

Relevance: 100.00%

Abstract:

Realistic operation of helicopter flight simulators in complex topographies (such as urban environments) requires appropriate prediction of the incoming wind, and this prediction must be made in real time. Unfortunately, the wind topology around complex topographies shows time-dependent, fully nonlinear, turbulent patterns (i.e., wakes) whose simulation cannot be made using computationally inexpensive tools based on corrected potential approximations. Instead, the full Navier-Stokes equations plus some kind of turbulence modeling are necessary, which is quite computationally expensive. The complete unsteady flow depends on two parameters, namely the velocity and orientation of the free stream flow. The aim of this MSc thesis is to develop a methodology for the real-time simulation of these complex flows. For simplicity, the flow around a single building (20 m × 20 m cross section and 100 m height) is considered, with free stream velocity in the range 5-25 m/s. Because of the square cross section, the problem shows two reflection symmetries, which allows for restricting the orientations to the range 0° ≤ α ≤ 45°. The methodology includes an offline preprocess and the online operation. The preprocess consists of three steps:

1. An appropriate, unstructured mesh is selected in which the flow is simulated using OpenFOAM, and this is done for 33 combinations of 3 free stream intensities and 11 orientations. For each of these, the simulation proceeds for a time long enough to eliminate transients. This step is quite computationally expensive.

2. Each flow field is post-processed using a combination of proper orthogonal decomposition (sketched below), fast Fourier transform, and a convenient optimization tool, which identifies the relevant frequencies (namely, both the basic frequencies and their harmonics) and modes in the computational mesh. This combination includes several new ingredients to filter errors out and identify the relevant spatio-temporal patterns. Note that, in principle, the basic frequencies depend on both the intensity and the orientation of the free stream flow. The outcome of this step is a set of modes (vectors containing the three velocity components at all mesh points) for the various Fourier components, intensities, and orientations, which can be organized as a third-order tensor. This step is fairly computationally inexpensive.

3. The above-mentioned tensor is treated using a combination of truncated high-order singular value decomposition and appropriate one-dimensional interpolation (as in Lorente, Velazquez, Vega, J. Aircraft, 45 (2008) 1779-1788). The outcome is a tensor representation of both the relevant frequencies and the associated Fourier modes for a given pair of values of the free stream flow intensity and orientation. This step is fairly computationally inexpensive.

The online operation requires just reconstructing the time-dependent flow field from its Fourier representation, which is extremely computationally inexpensive. The whole method is quite robust.
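The proper orthogonal decomposition in step 2 is typically computed as an SVD of a snapshot matrix whose columns are flow fields at successive instants; the retained modes then feed the frequency identification. A minimal sketch (synthetic snapshots; the thesis' filtering and frequency-identification machinery is omitted):

    import numpy as np

    rng = np.random.default_rng(2)
    n_dof, n_snap = 5000, 200       # velocity components at mesh points x snapshots
    snapshots = (np.outer(rng.standard_normal(n_dof),
                          np.sin(0.3 * np.arange(n_snap)))
                 + 0.01 * rng.standard_normal((n_dof, n_snap)))

    mean = snapshots.mean(axis=1, keepdims=True)
    U, s, Vt = np.linalg.svd(snapshots - mean, full_matrices=False)

    energy = np.cumsum(s ** 2) / np.sum(s ** 2)
    r = int(np.searchsorted(energy, 0.99)) + 1     # modes capturing 99% energy
    modes, amplitudes = U[:, :r], s[:r, None] * Vt[:r]
    print(r, modes.shape, amplitudes.shape)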

Relevance: 100.00%

Abstract:

This project consists of a set of three small videogames, bundled into a single application for Android mobile platforms, that let patients with phonation problems train the aesthetics of their voice anywhere. Depending on the voice aspects to be trained (voiced and unvoiced sounds, pitch, and intensity), one exercise or another is assigned. The project first introduces the concept of voice rehabilitation and the cases in which it is needed. A survey is then carried out to identify the game development platforms compatible with Android, the options for audio capture, and the available signal processing libraries, from which the most capable tools are chosen: the AndEngine game engine for the graphics, Android's own Java framework for capturing audio samples, and the JTransforms library, which computes Fourier transforms and allows the audio to be processed for pitch detection. When developing and assembling the different blocks, real-time operation of the application is the priority. Lines of improvement and conclusions are discussed in the final chapter, together with a user manual for better understanding.
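The FFT-based pitch detection described above (done with JTransforms in the app itself) boils down to locating the dominant spectral peak of a short audio frame within the plausible voice range. A minimal sketch of the same idea (frame length and band limits are illustrative; parabolic peak refinement and voiced/unvoiced logic are omitted):

    import numpy as np

    def pitch_fft(frame, fs, f_min=60.0, f_max=500.0):
        """Estimate pitch (Hz) of a mono frame as the strongest FFT peak."""
        spectrum = np.abs(np.fft.rfft(frame * np.hanning(len(frame))))
        freqs = np.fft.rfftfreq(len(frame), 1.0 / fs)
        band = (freqs >= f_min) & (freqs <= f_max)   # plausible voice range
        return freqs[band][spectrum[band].argmax()]

    fs = 44100
    t = np.arange(2048) / fs
    frame = np.sin(2 * np.pi * 220.0 * t)            # synthetic 220 Hz tone
    print(pitch_fft(frame, fs))                      # ~220 Hz, bin-limited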

Relevance: 100.00%

Abstract:

In today's world, applications based on biometric systems, i.e. systems that measure the electrical signals of our body, are growing at a fast pace. All of them incorporate biomedical sensors that help users better monitor different aspects of their daily routine, such as keeping detailed track of a sports routine or of the quality of the food they eat. Among these biometric systems, those based on the interpretation of brain signals through electroencephalography (EEG) are gaining momentum, although they are still at an early stage owing to the great complexity of the human brain, largely unknown to science until the 21st century. For these reasons, devices that use a brain-computer interface (BCI) are becoming increasingly popular. A BCI system captures a subject's brain waves and then processes them, attempting to obtain a representation of an action or a thought of the individual. These thoughts, correctly interpreted, are then used to carry out an action. Examples of BCI applications could be driving the motor of an electric wheelchair when the subject, for instance, imagines closing a fist, or unlocking the door of one's own house using a personal brain pattern. At the same time, data processing systems are evolving very quickly, mainly thanks to the high processing speed and low power consumption of FPGAs (Field Programmable Gate Arrays). In addition, FPGAs have a reconfigurable architecture, which makes them more versatile and powerful than other processing units such as CPUs or GPUs. The CEI (Centro de Electrónica Industrial), where this bachelor thesis (TFG) was carried out, has experience in the design of reconfigurable systems on FPGAs. This TFG is the second in a line of projects whose goal is a system capable of correctly processing brain signals in order to reach a common pattern that allows us to act accordingly. More specifically, the aim is to detect when a person is falling asleep by capturing the brain waves known as alpha waves, whose frequency lies between 8 and 13 Hz. These waves, which appear when we close our eyes and clear our mind, represent a state of mental relaxation. This project therefore marks the start of a global BCI system and serves as a first contact with brain wave processing, prior to the later use of reconfigurable hardware on which evolutionary algorithms will be implemented. It thus becomes necessary to develop a data processing system on an FPGA. The data are processed following standard digital signal processing methodology; in this case a frequency analysis is performed using the fast Fourier transform (FFT). Once the data processing system is developed, it is integrated with another system in charge of capturing the data collected by an ADC (Analog to Digital Converter) known as the ADS1299, an ADC specifically designed to capture potentials from the human brain.
In the final system, the data are thus captured by the ADS1299 and sent to the FPGA, which processes them; interpretation is done by the users, who analyze the processed data afterwards. For the development of the data processing system, two study platforms were primarily available for capturing the data to be processed:

1. The first is a commercial tool developed and distributed by OpenBCI, a project that sells hardware for EEG and other tests. It consists of a microprocessor, an SD memory module for data storage, and a wireless communication module that transmits the data over Bluetooth, and it includes the aforementioned ADS1299 ADC. This platform offers a graphical interface that was used for the research prior to the design of the processing system, providing a first contact with the system.

2. The second platform is an evaluation kit for the ADS1299, which gives access to the different control ports through the ADC's communication pins. This platform is connected to the FPGA in the integrated system.

To understand how the simplest brain waves behave, and to establish the minimum requirements for EEG wave analysis, several consultations were held with Dr. Ceferino Maestu, neurophysiologist at the Centro de Tecnología Biomédica (CTB) of the UPM, who introduced us to the procedures for analyzing electroencephalogram waves and to the way the electrodes must be placed on the skull.

To conclude the preliminary research, a first data processing model was built in MATLAB. A very important characteristic of brain waves is their randomness, which makes analysis in the time domain very complex. The most important step in the processing is therefore the change from the time domain to the frequency domain through the fast Fourier transform (FFT), where the recorded data can be analyzed with greater precision. The MATLAB model was used to obtain the first results of the processing system, which follows these steps (see the sketch below):

1. The data are captured from the electrodes and written into a data table.
2. The data are read from the table.
3. The temporal size of the sample to be processed is chosen.
4. A window is applied to avoid discontinuities at the beginning and end of the analyzed block.
5. The sample to be transformed is completed with zero-padding in the time domain.
6. The FFT is applied to the windowed, zero-padded block.
7. The results are plotted for analysis.

At this point, the capture of alpha waves proved very feasible. Some problems did appear when interpreting the data because of the low temporal resolution of the OpenBCI platform, but this is solved in the developed model, since the evaluation kit (the data acquisition system) allows the data capture speed, i.e. the sampling frequency, to be adjusted, which directly affects this precision.
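A minimal NumPy sketch of steps 3 to 7 of that pipeline (sampling rate, block length, and padding factor are illustrative assumptions, not the project's settings):

    import numpy as np

    fs = 250                                  # sampling rate (Hz), illustrative
    block = np.random.randn(2 * fs)           # step 3: a 2-second block (synthetic)

    windowed = block * np.hanning(block.size)        # step 4: window the block
    padded = np.pad(windowed, (0, 3 * block.size))   # step 5: zero-pad in time
    spectrum = np.abs(np.fft.rfft(padded)) ** 2      # step 6: FFT -> power

    freqs = np.fft.rfftfreq(padded.size, 1 / fs)     # step 7: inspect alpha band
    alpha = (freqs >= 8) & (freqs <= 13)
    print("alpha-band power:", spectrum[alpha].sum())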
Once this first processing run and the analysis of its results were complete, a hardware model was built following the same steps as the MATLAB model, insofar as this was useful and feasible. For this purpose the XPS (Xilinx Platform Studio) program contained in the EDK (Embedded Development Kit) tool was used, which allows an embedded system to be designed. This system contains: a soft-core microprocessor called MicroBlaze, which manages and controls the whole system; an FFT block, which computes the fast Fourier transform; four BRAM memory blocks, which store the input and output data of the FFT block, plus a multiplier to apply the window to the FFT input data; and a PLB bus, a control bus that connects the MicroBlaze with the other elements of the system. After the hardware design, the software was designed using the SDK (Software Development Kit) tool. At this stage the data acquisition system, which is controlled mostly from the MicroBlaze, was also integrated. From this environment the MicroBlaze is programmed to manage the generated hardware; the software handles the communication between the acquisition and processing systems and loads the window data into the corresponding memory. In the first stages of development, the FFT block was tested on its own to verify its operation in hardware: the FFT input data were loaded into one BRAM and the window data into another, and the processed data were written to two BRAMs, one for the real values of the transform and one for the imaginary values. After verifying the correct operation of the FFT block, it was integrated with the data acquisition system, and a real EEG test was then carried out to capture alpha waves. In addition, to validate the use of FPGAs as ideal processing units, the time the FFT block takes to compute the transform was measured and compared with the time MATLAB takes to compute the same transform on the same data: the hardware system performs the fast Fourier transform 27 times faster than MATLAB, which shows the great competitive advantage of hardware in terms of execution time. On the educational side, this TFG spans several fields. In electronics:

- knowledge of MATLAB, and of tools it offers such as FDATool (Filter Design Analysis Tool), was improved;
- knowledge of signal processing techniques, in particular spectral analysis, was acquired;
- knowledge of VHDL, and of its use within the Xilinx ISE environment, was improved;
- knowledge of C was reinforced by programming the MicroBlaze to control the system;
- experience was gained in creating embedded systems with the Xilinx development environment, using the EDK (Embedded Development Kit) tool.

In the field of neurology, we learned how to perform EEG tests and how to analyze and interpret their results.
As for the social impact, BCI systems touch many sectors, most notably the large group of people with physical disabilities, for whom such a system means an opportunity to increase their day-to-day autonomy. Another important sector is medical research, where BCI systems are applicable in many areas, for example the detection and study of cognitive diseases.

Relevance: 100.00%

Abstract:

Bedforms both reflect and influence shallow water hydrodynamics and sediment dynamics. A correct characterization of their spatial distribution and dimensions is required for the understanding, assessment and prediction of numerous coastal processes. A method to parameterize geometrical characteristics using two-dimensional (2D) spectral analysis is presented and tested on seabed elevation data from the Knudedyb tidal inlet in the Danish Wadden Sea, where large compound bedforms are found. The bathymetric data were divided into 20 × 20 m areas on which a 2D spectral analysis was applied. The most energetic peak of the 2D spectrum was found and its energy, frequency and direction were calculated. A power-law was fitted to the average of slices taken through the 2D spectrum; its slope and y-intercept were calculated. Using these results the test area was morphologically classified into 4 distinct morphological regions. The most energetic peak and the slope and intercept of the power-law showed high values above the crest of the primary bedforms and scour holes, low values in areas without bedforms, and intermediate values in areas with secondary bedforms. The secondary bedform dimensions and orientations were calculated. An area of 700 × 700 m was used to determine the characteristics of the primary bedforms. However, they were less distinctively characterized compared to the secondary bedforms due to relatively large variations in their orientations and wavelengths. The method is thus appropriate for morphological classification of the seabed and for bedform characterization, being most efficient in areas characterized by bedforms with regular dimensions and directions.
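A minimal sketch of the per-tile computation: a 2-D FFT of one 20 × 20 m elevation patch, followed by extraction of the energy, frequency, and direction of its most energetic peak (synthetic bedforms; the grid spacing is assumed, and the power-law fit over spectrum slices is omitted):

    import numpy as np

    dx, n = 0.5, 40                        # grid spacing (m, assumed); 40 cells = 20 m
    yy, xx = np.mgrid[:n, :n] * dx
    fx0, fy0 = 0.20, 0.15                  # cycles/m: 4 m wavelength, ~37 deg normal
    elev = 0.1 * np.sin(2 * np.pi * (fx0 * xx + fy0 * yy))   # synthetic bedforms

    spec = np.abs(np.fft.fftshift(np.fft.fft2(elev - elev.mean()))) ** 2
    f = np.fft.fftshift(np.fft.fftfreq(n, dx))               # cycles per metre
    iy, ix = np.unravel_index(spec.argmax(), spec.shape)

    wavelength = 1.0 / np.hypot(f[ix], f[iy])                # -> 4.0 m
    direction = np.degrees(np.arctan2(f[iy], f[ix]))         # 180-deg ambiguous
    print(float(spec[iy, ix]), wavelength, direction)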