904 results for Network Analysis Methods
Bathymetric map of Heron Reef, Australia, derived from airborne hyperspectral data at 1 m resolution
Abstract:
A simple method for efficient inversion of arbitrary radiative transfer models for image analysis is presented. The method operates by representing the shape of the function that maps model parameters to spectral reflectance by an adaptive look-up tree (ALUT) that evenly distributes the discretization error of tabulated reflectances in spectral space. A post-processing step organizes the data into a binary space partitioning tree that facilitates an efficient inversion search algorithm. In an example shallow water remote sensing application, the method performs faster than an implementation of previously published methodology and has the same accuracy in bathymetric retrievals. The method has no user configuration parameters requiring expert knowledge and minimizes the number of forward model runs required, making it highly suitable for routine operational implementation of image analysis methods. For the research community, straightforward and robust inversion allows research to focus on improving the radiative transfer models themselves without the added complication of devising an inversion strategy.
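The look-up-table idea can be sketched in a few lines. The toy below, assuming a hypothetical one-parameter forward model with a two-band spectrum, bisects parameter intervals until neighbouring tabulated spectra are closer than a fixed spectral tolerance (evening out the discretization error, as the ALUT does), then inverts by nearest-neighbour search; the real method organizes the table into a binary space partitioning tree instead of scanning it linearly.

```python
import math

def build_adaptive_lut(forward, lo, hi, tol, max_runs=200):
    """Adaptively tabulate a 1-D forward model so neighbouring entries
    are roughly equidistant in spectral space: intervals whose endpoint
    spectra differ by more than `tol` are bisected."""
    def dist(a, b):  # Euclidean distance in spectral space
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

    params = [lo, hi]
    spectra = [forward(lo), forward(hi)]
    runs, i = 2, 0
    while i < len(params) - 1:
        if dist(spectra[i], spectra[i + 1]) > tol and runs < max_runs:
            mid = 0.5 * (params[i] + params[i + 1])
            params.insert(i + 1, mid)
            spectra.insert(i + 1, forward(mid))
            runs += 1
        else:
            i += 1
    return params, spectra

def invert(params, spectra, observed):
    """Nearest-neighbour inversion: return the tabulated parameter whose
    spectrum is closest to the observation (a BSP tree would speed this up)."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    best = min(range(len(params)), key=lambda i: dist(spectra[i], observed))
    return params[best]

# Hypothetical two-band forward model: depth -> reflectance pair.
toy_model = lambda z: [math.exp(-0.1 * z), math.exp(-0.3 * z)]
params, spectra = build_adaptive_lut(toy_model, 0.0, 20.0, tol=0.05)
depth = invert(params, spectra, toy_model(5.0))  # recovers 5.0
```

Note how refinement clusters table entries where the spectrum changes fastest (shallow depths here), which is exactly the "even discretization error" property the abstract describes.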
Linear global instability of non-orthogonal incompressible swept attachment-line boundary layer flow
Abstract:
Instability of the orthogonal swept attachment-line boundary layer has received attention from local [1,2] and global [3-5] analysis methods over several decades, owing to the significance of this model for transition to turbulence on the surface of swept wings. Substantially less attention has been paid, however, to laminar flow instability in the non-orthogonal swept attachment-line boundary layer; only a local analysis framework has been employed to date [6]. The present contribution addresses this issue from a linear global (BiGlobal) instability analysis point of view in the incompressible regime. Direct numerical simulations have also been performed in order to verify the analysis results and establish the limits of validity of the Dorrepaal basic flow model [7] analysed. Cross-validated results document the effect of the angle of attack (AoA) on the critical conditions identified by Hall et al. [1] and show linear destabilization of the flow with decreasing AoA, up to a limit at which the assumptions of the Dorrepaal model become questionable. Finally, a simple extension of the extended Görtler-Hämmerlin ODE-based polynomial model proposed by Theofilis et al. [4] is presented for the non-orthogonal flow. In this model, the symmetries of the three-dimensional disturbances are broken by the non-orthogonal flow conditions. Temporal and spatial one-dimensional linear eigenvalue codes were developed, yielding results consistent with the BiGlobal stability analysis and DNS. Beyond its computational advantages, the ODE-based model allows us to understand the functional dependence of the three-dimensional disturbances in the non-orthogonal case, as well as their connection with the disturbances of the orthogonal stability problem.
Abstract:
Customer evolution and changing consumer behaviour mean that the quality of the interface between marketing and sales can represent a true competitive advantage for the firm. Building on multidimensional theoretical and empirical models developed in Europe and on social network analysis, the organizational interface between the marketing and sales departments of a multinational high-growth company with operations in Argentina, Uruguay and Paraguay is studied. Both attitudinal and social network measures of information exchange are used to operationalize the nature and quality of the interface and its impact on performance. Results show a positive effect of formalization, joint planning, teamwork, trust and information transfer on interface quality, as well as a positive relationship between interface quality and business performance. We conclude that efficient design and organizational management of the exchange network are essential for the successful performance of consumer goods companies that seek to develop distinctive capabilities to adapt to rapidly changing markets.
Abstract:
Objective The neurodevelopmental versus neurodegenerative debate is a basic issue in the field of the neuropathological basis of schizophrenia (SCH). Neurophysiological techniques have scarcely been involved in this debate, but nonlinear analysis methods may contribute to it. Methods Fifteen patients (age range 23–42 years) meeting DSM-IV-TR criteria for SCH, and 15 sex- and age-matched control subjects (age range 23–42 years), underwent a resting-state magnetoencephalographic evaluation, and Lempel–Ziv complexity (LZC) scores were calculated. Results Regression analyses indicated that LZC values were strongly dependent on age. Complexity scores increased as a function of age in controls, while SCH patients exhibited a progressive reduction of LZC values. A logistic model including LZC scores, age and the interaction of both variables allowed the classification of patients and controls with high sensitivity and specificity. Conclusions The results demonstrated that SCH patients fail to follow the "normal" process of complexity increase as a function of age. In addition, SCH patients exhibited a significant reduction of complexity scores as a function of age, paralleling the pattern observed in neurodegenerative diseases. Significance Our results support the notion of a progressive defect in SCH, which does not contradict the existence of a basic neurodevelopmental alteration. Highlights ► Schizophrenic patients show higher complexity values as compared to controls. ► Schizophrenic patients showed a tendency towards reduced complexity values as a function of age, while controls showed the opposite tendency. ► The tendency observed in schizophrenic patients parallels that observed in Alzheimer's disease patients.
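Lempel–Ziv complexity counts the number of new patterns met while scanning a binarized signal (MEG channels are commonly binarized around their median before counting). A minimal sketch of the LZ76 phrase-counting step, not the authors' exact pipeline:

```python
def lempel_ziv_complexity(s):
    """LZ76 phrase count of a binary string (Kaspar-Schuster scheme):
    the number of new 'phrases' encountered while scanning the sequence.
    LZC scores are usually this count normalized by n / log2(n)."""
    n = len(s)
    i, k, l = 0, 1, 1      # i: candidate match start, l: current phrase start
    c, k_max = 1, 1        # c: phrase count, k_max: longest match so far
    while True:
        if s[i + k - 1] == s[l + k - 1]:
            k += 1
            if l + k > n:          # copy run reaches the end: last phrase
                c += 1
                break
        else:
            k_max = max(k, k_max)
            i += 1
            if i == l:             # no earlier match: a new phrase starts
                c += 1
                l += k_max
                if l + 1 > n:
                    break
                i, k, k_max = 0, 1, 1
            else:
                k = 1
    return c
```

A constant sequence parses into just two phrases, a periodic one into three, and an aperiodic sequence keeps producing new phrases, which is the regularity gradient the LZC scores exploit.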
Abstract:
This project aims to establish how to perform a correct and well-fitted analysis of SMATV (Satellite Master Antenna Television) networks, which form part of the ICT (Common Telecommunications Infrastructure), using the TDA (Time Domain Analysis) method. First, a theoretical study of ICTs and of the foundations of the TDA method is presented, serving as an introduction to the main subject of the project: using the AWR simulation program to characterize the most appropriate signal for quality measurements on SMATV networks with the TDA technique, and to carry out a concise study of those networks. This is achieved by properly defining the parameters of the input signal that would be injected into the network in future test measurements. Once a reference signal is obtained, the different devices and elements that make up SMATV networks are characterized in order to verify that measurements made with the TDA method are as valid as those made with vector network analysis (VNA).
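Time-domain analysis of a distribution network is typically performed by sweeping the network in frequency and transforming its reflection response to the time domain, where each discontinuity shows up as a peak at its round-trip delay. A minimal illustration with an idealized single reflection (all numbers are arbitrary; this is not the AWR workflow used in the project):

```python
import cmath

def inverse_dft(spectrum):
    """Naive inverse DFT: frequency-domain samples -> time-domain response."""
    n = len(spectrum)
    return [sum(spectrum[k] * cmath.exp(2j * cmath.pi * k * t / n)
                for k in range(n)) / n
            for t in range(n)]

# Ideal network: a single unit reflection returning after 25 time bins,
# i.e. reflection coefficient Gamma(f_k) = exp(-2j*pi*f_k*tau).
n, delay = 100, 25
gamma = [cmath.exp(-2j * cmath.pi * k * delay / n) for k in range(n)]

impulse = inverse_dft(gamma)
peak_bin = max(range(n), key=lambda t: abs(impulse[t]))  # == 25
```

In a real measurement the peak position gives the electrical distance to the fault or mismatch, and the peak height its severity, which is what makes TDA results directly comparable with VNA characterization.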
Abstract:
In recent years, applications in domains such as telecommunications, network security and large-scale sensor networks have shown the limits of the traditional store-then-process paradigm. In this context, Stream Processing Engines emerged as a candidate solution for applications demanding high processing capacity with low processing latency guarantees. With Stream Processing Engines, data streams are not persisted but processed on the fly, producing results continuously. Current Stream Processing Engines, whether centralized or distributed, do not scale with the input load due to single-node bottlenecks. Moreover, they are based on static configurations that lead to either under- or over-provisioning. This Ph.D. thesis presents StreamCloud, an elastic parallel-distributed stream processing engine that enables the processing of large data stream volumes. StreamCloud minimizes the distribution and parallelization overhead by introducing novel techniques that split queries into parallel subqueries and allocate them to independent sets of nodes. Moreover, StreamCloud's elastic and dynamic load-balancing protocols enable effective adjustment of resources depending on the incoming load. Together with the parallelization and elasticity techniques, StreamCloud defines a novel fault-tolerance protocol that introduces minimal overhead while providing fast recovery. StreamCloud has been fully implemented and evaluated using several real-world applications, such as fraud detection and network analysis. The evaluation, conducted on a cluster with more than 300 cores, demonstrates the large scalability of StreamCloud and the effectiveness of its elasticity and fault tolerance.
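The query-splitting idea can be illustrated with a toy keyed aggregation: hash-partitioning the input stream guarantees that every instance of the parallel subquery sees all tuples for its keys, so instances need no coordination. A sketch of the concept, not StreamCloud's actual implementation:

```python
from collections import Counter

def partition(stream, n_instances, key=lambda t: t[0]):
    """Hash-partition tuples so each parallel subquery instance
    receives a disjoint, key-complete slice of the stream."""
    shards = [[] for _ in range(n_instances)]
    for t in stream:
        shards[hash(key(t)) % n_instances].append(t)
    return shards

def count_per_key(shard):
    """The parallel subquery: a per-key count running on one instance."""
    return Counter(k for k, _ in shard)

stream = [("a", 1), ("b", 2), ("a", 3), ("c", 4), ("b", 5), ("a", 6)]
shards = partition(stream, 3)

# Because partitioning is key-complete, per-instance results can simply
# be unioned to obtain the global answer.
merged = Counter()
for shard in shards:
    merged.update(count_per_key(shard))
```

Elasticity then amounts to changing `n_instances` and re-routing keys, which is where the load-balancing protocols described above come in.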
Abstract:
The main objective of this research is to deepen the interpretation of multifractal parameters in the case of precipitation series, from an empirical approach. To that end, the first task was to objectify the selection of the linear part of the log-log curves, a fundamental step in fractal and multifractal analysis methods. The second task was to generate artificial precipitation series with realistic features, allowing controlled modifications of the data whose influence on the estimated multifractal parameters can then be evaluated.

Two methods were developed for selecting the linear part of the log-log curves: (a) tendency change, which analyzes the change in slope of lines fitted to two consecutive subsets of the data; and (b) point elimination, which analyzes the improvement in the p-value associated with the correlation coefficient as the final regression points are sequentially eliminated. The results support the following conclusions. The statistical methodology of regression shows how difficult it is to find the slope of straight sections of curves in the basic fractal-analysis procedure, indicating that the decision on which points to consider yields significant differences in the slopes obtained. The combined use of the two proposed methods helps to objectify the decision on the linear part of families of curves in fractal analysis, although their usefulness still depends on the number of available data points and on the high statistical significances obtained.

Regarding the empirical meaning of the multifractal parameters of precipitation, nineteen precipitation series were generated with a daily cascade simulator driven by annual and monthly estimates, based on real statistics from four Spanish weather stations located along a NW-SE gradient. For all generated series, the multifractal parameters were estimated with the DTM (Double Trace Moments) technique developed by Lavalle et al. (1993) and the resulting changes were examined. The results support the following conclusions: 1. The intermittency, C1, increases when precipitation is concentrated in fewer days, when it is made more variable, or when its concentration on the days of maximum precipitation increases, while it is not affected by changes in the variability of the number of rainy days. 2. The multifractality, α, increases with the number of rainy days and with the variability of precipitation, both annual and monthly, as well as with the concentration of precipitation on the day of maximum monthly precipitation. 3. The maximum probable singularity, γs, increases with the concentration of rain on the day of maximum monthly precipitation and with the annual and monthly variability. 4. The degree of non-conservation, H, depends on the number of rainy days in the series and, secondarily, on the overall variability of the rain. 5. The generalized Hurst index is closely linked to the maximum probable singularity.
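The "tendency change" criterion (method A above) can be sketched as follows: fit ordinary least-squares slopes to two consecutive windows of the log-log curve and flag the split with the largest slope change, provided it exceeds a tolerance. The window size and tolerance below are illustrative assumptions, not values from the thesis:

```python
def slope(xs, ys):
    """Ordinary least-squares slope of ys against xs."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

def detect_tendency_change(xs, ys, window=5, tol=0.2):
    """'Tendency change' criterion: compare OLS slopes fitted to two
    consecutive windows and return the split index with the largest
    slope change above `tol`; None means the curve looks linear."""
    best_i, best_diff = None, tol
    for i in range(window, len(xs) - window + 1):
        s1 = slope(xs[i - window:i], ys[i - window:i])
        s2 = slope(xs[i:i + window], ys[i:i + window])
        if abs(s2 - s1) > best_diff:
            best_i, best_diff = i, abs(s2 - s1)
    return best_i
```

On a synthetic log-log curve whose slope changes from 2 to 0.5, the criterion flags the split at the kink, objectifying the choice of scaling range that the abstract identifies as the weak point of manual fits.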
Abstract:
The well-documented re-colonisation of the large French river basins of the Loire and Rhone by the European otter and beaver allowed an analysis of the explanatory factors and threats to species movement in the river corridor. To what extent anthropogenic disturbance of the riparian zone influences corridor functioning is a central question in the understanding of ecological networks and in the definition of restoration goals for river networks. The generalist or specialist nature of the target species may determine the responses to habitat quality and barriers in the riparian corridor. Detailed datasets of land use, human stressors and hydro-morphological characteristics of river segments for the entire river basins allowed the habitat requirements of the two species for the riparian zone to be identified. The identified critical factors were entered into a network analysis based on the ecological niche factor approach. Both species showed significant responses to riparian corridor quality in terms of forest cover, channel straightening, and urbanisation and infrastructure in the riparian zone, so they may well serve as indicators of corridor functioning. The hypothesis that generalists are less sensitive to human disturbance was rejected, since the otter, a generalist species, responded most strongly to hydro-morphological alterations and to human presence in general, while the beaver responded most strongly to the physical environment, as expected for a specialist species. The difference in responses between generalist and specialist species is clearly present, and the two species have a strongly complementary indicator value. The interpretation of the network analysis outcomes stresses the need to estimate the ecological requirements of more species when evaluating riparian corridor functioning and in conservation planning.
Abstract:
This article describes the results of an investigation into the analysis methods used to design scour protection for offshore wind farms in transitional waters founded on medium- and large-diameter monopile-type deep foundations.
Abstract:
The analysis of interdependence between time series has become an important field of research in recent years, mainly as a result of advances in the characterization of dynamical systems from the signals they produce, the introduction of concepts such as generalized and phase synchronization, and the application of information theory to time series analysis. In neurophysiology, analytical tools stemming from these concepts have been added to the 'traditional' set of linear methods, which includes the cross-correlation and the coherency function in the time and frequency domains, respectively, as well as more elaborate tools such as Granger causality. This growing number of approaches for assessing the existence of functional (FC) or effective connectivity (EC) between two (or among many) neural networks, along with the mathematical complexity of the corresponding time series analysis tools, makes it desirable to arrange them all into a unified, easy-to-use software package. The goal is to allow neuroscientists, neurophysiologists and researchers from related fields to easily access and use these analysis methods from a single integrated toolbox. Here we present HERMES (http://hermes.ctb.upm.es), a toolbox for the Matlab® environment (The MathWorks, Inc.), designed to study functional and effective brain connectivity from neurophysiological data such as multivariate EEG and/or MEG records. It also includes visualization tools and statistical methods to address the problem of multiple comparisons. We believe this toolbox will be very helpful to all researchers working in the emerging field of brain connectivity analysis.
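As a taste of the simplest symmetric FC measure in the 'traditional' linear set, a zero-lag cross-correlation (Pearson) matrix between channels can be computed as below. This is an illustration of the concept only, not HERMES code (the toolbox itself is written for Matlab):

```python
import math

def pearson(x, y):
    """Zero-lag cross-correlation: one of the 'traditional' symmetric
    functional-connectivity indices."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def connectivity_matrix(channels):
    """Symmetric FC matrix for a list of equal-length signals."""
    m = len(channels)
    return [[pearson(channels[i], channels[j]) for j in range(m)]
            for i in range(m)]
```

Asymmetric (EC) measures such as Granger causality replace each matrix entry with a directed quantity, so the matrix is no longer symmetric; the toolbox's role is to put both families behind one interface.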
Abstract:
Water scarcity is becoming a major concern in many parts of the world. Population growth, increasing needs for food production, socio-economic development and climate change all put growing pressure on water resources that many countries will have to deal with in the coming years. The Mediterranean region is one of the most water-scarce regions of the world and is considered a climate change hotspot. Most climate change projections envisage higher temperatures, lower precipitation and increasing water scarcity, as a consequence of both reduced water availability and increased irrigation demands. Current policy development processes increasingly require the integration of climate change concerns into sectoral policies. However, sector-oriented studies often fail to address all the dimensions of climate change implications; climate change research in recent years has evidenced the need for more integrated studies and methodologies capable of addressing its multi-scale and multi-dimensional nature.

This research attempts to provide a comprehensive view of water scarcity and of climate change impacts, vulnerability and adaptation in Mediterranean contexts. It presents an integrated modelling framework that is progressively enlarged in a sequential, multi-scale process in which a new dimension of climate change and water resources is addressed at every stage. The framework comprises four stages, each explained in a separate chapter. The first stage explores the farm-level economic vulnerability of irrigated farms in the Spanish Middle Guadiana basin, using a mathematical programming model in combination with an econometric model. In the second stage, a hydro-economic modelling framework that includes a crop growth model allows the analysis of crop-, farm- and basin-level processes, taking into account different geographical and decision-making scales; this integrated tool is used to analyse climate change scenarios and to assess potential adaptation options. The third stage analyses the barriers to the effective implementation of adaptation processes, based on socio-institutional network analysis in the basin. Finally, a regional and country-level perspective on water scarcity and climate change is provided, focusing on different plausible socio-economic development pathways and on the effect of policies on future water scarcity; for this analysis, a panel-data econometric model for the Mediterranean region and a hydro-economic model applied to country-level case studies in Spain and Jordan are used.

The overall results demonstrate the value of considering multiple scales and multiple dimensions in water management and climate change adaptation in the Mediterranean water-scarce contexts analysed. Results show that climate change impacts in the Guadiana basin and in Spain as a whole may compromise the sustainability of irrigation systems and ecosystems. The analysis at the basin level highlights the prominent role of interactions between different water users, and in particular between irrigation districts, and the need to strengthen institutional capacity and a common understanding in the basin to enhance the implementation of adaptation processes. The results also illustrate the relevance of water policies in achieving sustainable development and climate change adaptation in water-scarce areas such as the Mediterranean region; in particular, the EU Water Framework Directive emerges as a powerful trigger for climate change adaptation. In Jordan, however, more ambitious sustainable development strategies are required in addition to climate change adaptation in order to reduce the future risk of water scarcity.
Abstract:
Nuestro cerebro contiene cerca de 1014 sinapsis neuronales. Esta enorme cantidad de conexiones proporciona un entorno ideal donde distintos grupos de neuronas se sincronizan transitoriamente para provocar la aparición de funciones cognitivas, como la percepción, el aprendizaje o el pensamiento. Comprender la organización de esta compleja red cerebral en base a datos neurofisiológicos, representa uno de los desafíos más importantes y emocionantes en el campo de la neurociencia. Se han propuesto recientemente varias medidas para evaluar cómo se comunican las diferentes partes del cerebro a diversas escalas (células individuales, columnas corticales, o áreas cerebrales). Podemos clasificarlos, según su simetría, en dos grupos: por una parte, la medidas simétricas, como la correlación, la coherencia o la sincronización de fase, que evalúan la conectividad funcional (FC); mientras que las medidas asimétricas, como la causalidad de Granger o transferencia de entropía, son capaces de detectar la dirección de la interacción, lo que denominamos conectividad efectiva (EC). En la neurociencia moderna ha aumentado el interés por el estudio de las redes funcionales cerebrales, en gran medida debido a la aparición de estos nuevos algoritmos que permiten analizar la interdependencia entre señales temporales, además de la emergente teoría de redes complejas y la introducción de técnicas novedosas, como la magnetoencefalografía (MEG), para registrar datos neurofisiológicos con gran resolución. Sin embargo, nos hallamos ante un campo novedoso que presenta aun varias cuestiones metodológicas sin resolver, algunas de las cuales trataran de abordarse en esta tesis. En primer lugar, el creciente número de aproximaciones para determinar la existencia de FC/EC entre dos o más señales temporales, junto con la complejidad matemática de las herramientas de análisis, hacen deseable organizarlas todas en un paquete software intuitivo y fácil de usar. 
Here I present HERMES (http://hermes.ctb.upm.es), a Matlab® toolbox designed precisely for this purpose. I believe this tool will be very helpful to all researchers working in the emerging field of brain connectivity analysis and will be of great value to the scientific community. The second practical question addressed is the sensitivity to deep brain sources of two types of MEG sensors, planar gradiometers and magnetometers; this is combined with a methodological comparison of two phase synchronization indexes, the phase locking value (PLV) and the phase lag index (PLI), the latter being less sensitive to the volume conduction effect. Comparing their behaviour when studying brain networks shows that magnetometers and PLV yield more densely connected networks than planar gradiometers and PLI, respectively, owing to the spurious values created by the volume conduction problem. However, when it comes to characterizing epileptic networks, PLV gives better results, because the networks obtained with PLI are very sparse. Complex network analysis has provided new concepts that improve the characterization of interacting dynamical systems. A network is considered to be composed of nodes, representing systems, whose interactions are represented by links; its behaviour and topology can be characterized by a large number of measures. There is theoretical and empirical evidence that many of these measures are strongly correlated with one another. Accordingly, a small set has been selected that characterizes these networks effectively and condenses the redundant information.
For the analysis of functional networks, selecting an appropriate threshold to decide whether a given connectivity value in the FC matrix is significant, and should be included in subsequent analysis, becomes a crucial step. In this thesis, more accurate results were obtained by using a data-driven surrogate test to evaluate each link individually than by fixing the connection density a priori. Finally, all these questions were applied to the study of epilepsy, a case study in which resting-state MEG functional networks of two groups of epileptic patients (idiopathic generalized and frontal focal) were analysed and compared with healthy control subjects. Epilepsy is one of the most common neurological disorders, affecting more than 55 million people worldwide. It is characterized by a predisposition to generate epileptic seizures of abnormal, excessive or synchronous neuronal activity, and is therefore the perfect scenario for this kind of analysis, while being of great interest from both the clinical and the research point of view. The results reveal specific alterations in connectivity and a change in network topology in epileptic brains, shifting the emphasis from the 'focus' to the 'network', an approach that is gaining relevance in recent epilepsy research.

ABSTRACT

There are about 10^14 neuronal synapses in the human brain. This huge number of connections provides the substrate for neuronal ensembles to become transiently synchronized, producing the emergence of cognitive functions such as perception, learning or thinking. Understanding the complex brain network organization on the basis of neuroimaging data represents one of the most important and exciting challenges for systems neuroscience.
Several measures have recently been proposed to evaluate, at various scales (single cells, cortical columns, or brain areas), how the different parts of the brain communicate. We can classify them, according to their symmetry, into two groups: on the one hand, symmetric measures, such as correlation, coherence or phase synchronization indexes, evaluate functional connectivity (FC); on the other, asymmetric measures, such as Granger causality or transfer entropy, are able to detect effective connectivity (EC), revealing the direction of the interaction. In modern neuroscience, interest in functional brain networks has increased strongly with the onset of new algorithms to study the interdependence between time series, the advent of modern complex network theory, and the introduction of powerful techniques to record neurophysiological data, such as magnetoencephalography (MEG). However, analyzing neurophysiological data with this approach raises several questions, and in this thesis I intend to tackle some of the practical open problems in the field. First of all, the growing number of time series analysis algorithms to study brain FC/EC, along with their mathematical complexity, creates the need to arrange them into a single, unified toolbox that allows neuroscientists, neurophysiologists and researchers from related fields to easily access and make use of them. To this end I developed HERMES (http://hermes.ctb.upm.es), a toolbox that encompasses several of the most common indexes for the assessment of FC and EC and runs in the Matlab® environment. I believe that this toolbox will be very helpful to all the researchers working in the emerging field of brain connectivity analysis and will be of great value to the scientific community.
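To make the symmetric, phase-based FC measures discussed here concrete, the following minimal Python/NumPy sketch (an independent illustration, not HERMES code) computes two standard phase synchronization indexes, the phase locking value (PLV) and the phase lag index (PLI), from a pair of narrow-band signals:

```python
import numpy as np
from scipy.signal import hilbert

def plv_pli(x, y):
    """Phase locking value (PLV) and phase lag index (PLI) between
    two equally long, narrow-band signals."""
    # Instantaneous phases from the analytic (Hilbert) signal
    dphi = np.angle(hilbert(x)) - np.angle(hilbert(y))
    plv = np.abs(np.mean(np.exp(1j * dphi)))      # consistency of the phase difference
    pli = np.abs(np.mean(np.sign(np.sin(dphi))))  # consistency of the sign of the lag
    return plv, pli

t = np.linspace(0, 10, 2000)
x = np.sin(2 * np.pi * 5 * t)

# Constant non-zero lag: both indexes are high
plv_lag, pli_lag = plv_pli(x, np.sin(2 * np.pi * 5 * t - np.pi / 4))

# Zero lag (the situation volume conduction creates): PLV stays high,
# but PLI collapses, because sin(dphi) ~ 0 has no consistent sign
plv_zero, pli_zero = plv_pli(x, x.copy())
```

The zero-lag case illustrates why PLI is considered less sensitive to volume conduction: instantaneous (zero-phase) coupling inflates PLV but contributes nothing to PLI.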
The second important practical issue tackled in this thesis is the evaluation of the sensitivity of two different MEG sensor types, planar gradiometers and magnetometers, to deep brain sources, in combination with a related methodological comparison of two phase synchronization indexes: the phase locking value (PLV) and the phase lag index (PLI), the latter being less sensitive to the volume conduction effect. Comparing their performance when studying brain networks, I found that magnetometers and PLV presented higher artificial connectivity values than planar gradiometers and PLI, respectively. However, when it came to characterizing epileptic networks, it was the PLV that gave better results, as the PLI FC networks were very sparse. Complex network analysis has provided new concepts that improve the characterization of interacting dynamical systems. With this background, networks can be considered as composed of nodes, symbolizing systems, whose interactions with each other are represented by edges. A growing number of network measures is being applied in network analysis; however, there is theoretical and empirical evidence that many of these indexes are strongly correlated with each other. Therefore, in this thesis I reduced them to a small set that characterizes networks more efficiently. Within this framework, selecting an appropriate threshold to decide whether a certain connectivity value of the FC matrix is significant, and should be included in the network analysis, becomes a crucial step. In this thesis I used surrogate data tests to make an individual, data-driven evaluation of the significance of each edge, and obtained more accurate results than when simply fixing the density of connections to a preset value. All these methodologies were applied to the study of epilepsy, analysing resting-state MEG functional networks in two groups of epileptic patients (generalized and focal epilepsy) that were compared to matched control subjects.
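The edge-wise surrogate thresholding step described above can be sketched as follows. This is a schematic Python/NumPy illustration only: Pearson correlation stands in for the MEG connectivity indexes, and circular time shifts are one common surrogate construction that preserves each channel's spectrum while destroying inter-channel coupling; the thesis's actual test may differ in both respects.

```python
import numpy as np

rng = np.random.default_rng(0)

def corr_matrix(data):
    """FC matrix: absolute Pearson correlation between channels (rows)."""
    return np.abs(np.corrcoef(data))

def surrogate_threshold(data, n_surr=200, alpha=0.05):
    """Per-edge significance threshold from circularly shifted surrogates."""
    n_ch, n_t = data.shape
    null = np.empty((n_surr, n_ch, n_ch))
    for s in range(n_surr):
        # Independent random circular shift per channel destroys coupling
        shifted = np.array([np.roll(ch, rng.integers(1, n_t)) for ch in data])
        null[s] = corr_matrix(shifted)
    # (1 - alpha) quantile of the null distribution, edge by edge
    return np.quantile(null, 1 - alpha, axis=0)

# Toy data: channels 0 and 1 share a common signal, channel 2 is independent
common = rng.standard_normal(1000)
data = np.vstack([common + 0.3 * rng.standard_normal(1000),
                  common + 0.3 * rng.standard_normal(1000),
                  rng.standard_normal(1000)])
fc = corr_matrix(data)
adj = fc > surrogate_threshold(data)   # binarized functional network
```

Because each edge gets its own null distribution, the resulting density of connections is determined by the data rather than set a priori, which is the point being made above.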
Epilepsy is one of the most common neurological disorders, with more than 55 million people affected worldwide. It is characterized by a predisposition to generate epileptic seizures of abnormal, excessive or synchronous neuronal activity, and this scenario is therefore of great interest from both the clinical and the research perspective. Results revealed specific disruptions in connectivity and evidenced that network topology is changed in epileptic brains, supporting the shift from 'focus' to 'networks' that is gaining importance in modern epilepsy research.
Resumo:
Versatile and accurate motion capture systems, with the properties required for integration within both clinical and domiciliary environments, would represent a significant advance in following the progress of patients, as well as in allowing the incorporation of new data exploitation and analysis methods to enhance functional neurorehabilitation therapeutic processes. In addition, these systems would permit the later development of new applications focused on the automation of therapeutic tasks in order to increase the therapist/patient ratio, thus decreasing costs [1]. However, current motion capture systems are still not ready to work within uncontrolled environments.
Resumo:
Traumatic Brain Injury (TBI) [1] is defined as an acute event that causes damage to areas of the brain. TBI may result in a significant impairment of an individual's physical, cognitive and psychosocial functioning. The main consequence of TBI is a dramatic change in the individual's daily life, involving a profound disruption of the family, a loss of future income capacity and an increase in lifetime cost. One of the main challenges of TBI neuroimaging is to develop robust automated image analysis methods to detect signatures of TBI, such as hyper-intensity areas and changes in image contrast and brain shape. The final goal of this research is to develop a method that identifies the altered brain structures by automatically detecting landmarks on the image where the signal changes, and that provides comprehensive information about them to the clinician. These landmarks identify injured structures by co-registering the patient's image with an atlas in which landmarks have been previously detected. The research work has been initiated by identifying brain structures in healthy subjects in order to validate the proposed method. Later, this method will be used to identify modified structures in TBI imaging studies.
Resumo:
The analysis of the interdependence between time series has become an important field of research in recent years, mainly as a result of advances in the characterization of dynamical systems from the signals they produce, the introduction of concepts such as generalized and phase synchronization, and the application of information theory to time series analysis. In neurophysiology, different analytical tools stemming from these concepts have been added to the 'traditional' set of linear methods, which includes the cross-correlation and the coherency function in the time and frequency domains, respectively, as well as more elaborate tools such as Granger causality. This increase in the number of approaches to tackle the existence of functional (FC) or effective connectivity (EC) between two (or among many) neural networks, along with the mathematical complexity of the corresponding time series analysis tools, makes it desirable to arrange them into a unified, easy-to-use software package. The goal is to allow neuroscientists, neurophysiologists and researchers from related fields to easily access and make use of these analysis methods from a single integrated toolbox. Here we present HERMES (http://hermes.ctb.upm.es), a toolbox for the Matlab® environment (The MathWorks, Inc.), which is designed to study functional and effective brain connectivity from neurophysiological data such as multivariate EEG and/or MEG records. It also includes visualization tools and statistical methods to address the problem of multiple comparisons. We believe that this toolbox will be very helpful to all the researchers working in the emerging field of brain connectivity analysis.
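The 'traditional' linear measures mentioned here have standard off-the-shelf implementations. The brief SciPy sketch below (not HERMES itself, which is a Matlab toolbox) shows the frequency-domain coherence and the time-domain cross-correlation for two channels that share a 10 Hz rhythm:

```python
import numpy as np
from scipy.signal import coherence, correlate

rng = np.random.default_rng(1)
fs = 250.0                        # sampling rate in Hz
t = np.arange(0, 20, 1 / fs)

# Two channels sharing a 10 Hz component plus independent noise
shared = np.sin(2 * np.pi * 10 * t)
x = shared + rng.standard_normal(t.size)
y = 0.8 * shared + rng.standard_normal(t.size)

# Magnitude-squared coherence (frequency domain, Welch-averaged)
f, cxy = coherence(x, y, fs=fs, nperseg=512)
peak_freq = f[np.argmax(cxy)]     # peaks near the shared 10 Hz rhythm

# Normalized cross-correlation (time domain)
xc = correlate(x - x.mean(), y - y.mean(), mode="full")
xc /= (x.std() * y.std() * x.size)
```

Coherence resolves the frequency at which the two channels are coupled, while the cross-correlation exposes coupling as a function of time lag; the FC indexes in a toolbox like HERMES build on the same two domains.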