946 results for Application techniques


Relevância: 30.00%

Resumo:

The primary hypothesis stated by this paper is that the use of social choice theory in Ambient Intelligence systems can significantly improve user satisfaction when accessing shared resources. A research methodology based on agent-based social simulations is employed to support this hypothesis and to evaluate these benefits. The result is a six-fold contribution, summarized as follows. Firstly, several considerable differences between this application case and the most prominent social choice application, political elections, have been found and described. Secondly, given these differences, a number of metrics to evaluate different voting systems in this scope have been proposed and formalized. Thirdly, given the presented application and the proposed metrics, the performance of a number of well-known electoral systems is compared. Fourthly, as a result of the performance study, a novel voting algorithm capable of obtaining the best balance between the reviewed metrics is introduced. Fifthly, to improve the social welfare in the experiments, the voting methods are combined with cluster analysis techniques. Finally, the article is complemented by a free and open-source tool, VoteSim, which not only ensures the reproducibility of the presented experimental results, but also allows the interested reader to adapt the case study to different environments.
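
As an illustration of the kind of comparison the abstract describes, the following toy sketch (not VoteSim, and not the paper's formal metrics) simulates agents with random preferences over a shared-resource setting and compares two classic voting rules by a simple average-satisfaction measure; all names and the satisfaction definition are illustrative assumptions.

```python
import random
from statistics import mean

# Toy illustration: agents vote on the setting of a shared resource (e.g. a room
# temperature level) and two classic rules are compared by average satisfaction.

OPTIONS = list(range(5))  # five candidate settings for the shared resource

def random_profile(n_agents):
    """Each agent ranks all options in a random order (most preferred first)."""
    return [random.sample(OPTIONS, len(OPTIONS)) for _ in range(n_agents)]

def plurality_winner(profile):
    counts = {o: 0 for o in OPTIONS}
    for ranking in profile:
        counts[ranking[0]] += 1
    return max(counts, key=counts.get)

def borda_winner(profile):
    scores = {o: 0 for o in OPTIONS}
    for ranking in profile:
        for position, option in enumerate(ranking):
            scores[option] += len(OPTIONS) - 1 - position
    return max(scores, key=scores.get)

def satisfaction(profile, winner):
    """1.0 if the winner is an agent's favourite, 0.0 if it is its least preferred."""
    return mean(1 - ranking.index(winner) / (len(OPTIONS) - 1) for ranking in profile)

random.seed(0)
profile = random_profile(n_agents=50)
for name, rule in (("plurality", plurality_winner), ("Borda", borda_winner)):
    w = rule(profile)
    print(f"{name}: winner={w}, mean satisfaction={satisfaction(profile, w):.3f}")
```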

Relevância: 30.00%

Resumo:

Graph automorphism (GA) is a classical problem, in which the objective is to compute the automorphism group of an input graph. In this work we propose four novel techniques to speed up algorithms that solve the GA problem by exploring a search tree. They increase the performance of the algorithm by reducing the depth of the search tree and by effectively pruning it. We formally prove that a GA algorithm that uses these techniques correctly computes the automorphism group of the input graph. We also describe how the techniques have been incorporated into the GA algorithm conauto, as conauto-2.03, with at most an additive polynomial increase in its asymptotic time complexity. We have experimentally evaluated the impact of each of the above techniques with several graph families. We have observed that each of the techniques by itself significantly reduces the number of processed nodes of the search tree in some subset of graphs, which justifies the use of each of them. When applied together, their effects combine, leading to reductions in the number of processed nodes in most graphs. This is also reflected in a reduction of the running time, which is substantial in some graph families.
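
The search-tree idea can be made concrete with a minimal backtracking sketch. This is not conauto and uses only two elementary pruning rules (degree matching and adjacency preservation with already-mapped vertices); it is included purely to illustrate what "pruning a GA search tree" means.

```python
# Minimal sketch of a search-tree approach to graph automorphism (illustrative only).

def automorphisms(adj):
    """adj: dict mapping each vertex to the set of its neighbours."""
    vertices = sorted(adj)
    degree = {v: len(adj[v]) for v in vertices}
    results = []

    def extend(mapping):
        if len(mapping) == len(vertices):
            results.append(dict(mapping))
            return
        v = vertices[len(mapping)]            # next vertex to assign an image to
        for w in vertices:
            if w in mapping.values() or degree[v] != degree[w]:
                continue                       # prune: image already used or degree mismatch
            # prune: adjacency with already-mapped vertices must be preserved
            if all((u in adj[v]) == (mapping[u] in adj[w]) for u in mapping):
                mapping[v] = w
                extend(mapping)
                del mapping[v]

    extend({})
    return results

# Example: the 4-cycle 0-1-2-3 has 8 automorphisms (the dihedral group D4).
cycle4 = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}
print(len(automorphisms(cycle4)))  # -> 8
```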

Relevância: 30.00%

Resumo:

Society is frequently exposed to and threatened by dangerous phenomena in many parts of the world. Different types of such phenomena require specific actions for proper risk management, from the stages of hazard identification to those of mitigation (including monitoring and early-warning) and/or reduction. The understanding of both predisposing factors and triggering mechanisms of a given danger and the prediction of its evolution from the source to the overall affected zone are relevant issues that must be addressed to properly evaluate a given hazard.

Relevância: 30.00%

Resumo:

This Thesis addresses the efficiency problems of electrical grids from the consumption point of view. In particular, such efficiency is improved by smoothing the aggregated consumption curve.
This objective of consumption smoothing entails two major improvements in the use of electrical grids: i) in the short term, a better use of the existing infrastructure and ii) in the long term, a reduction of the infrastructure required to supply the same energy needs. In addition, this Thesis faces a new energy paradigm, where distributed generation, in particular photovoltaic (PV) generation, is widespread over the electrical grids. This kind of energy source affects the operation of the grid by increasing its variability, which implies that a high penetration rate of photovoltaic electricity is harmful for grid stability. This Thesis seeks to smooth the aggregated consumption while considering this energy source. Therefore, not only is the efficiency of the electrical grid improved, but the penetration of photovoltaic electricity into the grid can also be increased. This proposal brings great benefits in the economic, social and environmental fields.

The actions that influence the way consumers use electricity in order to achieve energy savings or higher efficiency in energy use are called Demand-Side Management (DSM). This Thesis proposes two different DSM algorithms to meet the aggregated consumption smoothing objective. The difference between the two DSM algorithms lies in the framework in which they take place: the local framework and the grid framework. Depending on the DSM framework, the energy goal and the procedure to reach it are different.

In the local framework, the DSM algorithm only uses local information. It does not take into account other consumers or the aggregated consumption of the electrical grid. Although this may differ from the general definition of DSM, it makes sense in local facilities equipped with Distributed Energy Resources (DERs). In this case, DSM is focused on maximizing the use of local energy, reducing the dependence on the grid. The proposed DSM algorithm significantly improves the self-consumption of the local PV generator. Simulated and real experiments show that self-consumption serves as an important energy management strategy, reducing electricity transport and encouraging users to control their energy behavior. However, despite all the advantages of increased self-consumption, it does not contribute to smoothing the aggregated consumption. The effects of the local facilities on the electrical grid are studied when the DSM algorithm is focused on self-consumption maximization. This approach may have undesirable effects, increasing the variability of the aggregated consumption instead of reducing it. This effect occurs because, in the local framework, the algorithm only considers local variables. The results suggest that coordination between facilities is required: through this coordination, consumption should be modified taking into account other elements of the grid and seeking to smooth the aggregated consumption.

In the grid framework, the DSM algorithm takes into account both local and grid information. This Thesis develops a self-organized algorithm to manage the consumption of an electrical grid in a distributed way. The goal of this algorithm is the smoothing of the aggregated consumption, as in classical DSM implementations. The distributed approach means that DSM is performed from the consumers' side without following direct commands issued by a central entity.
Therefore, this Thesis proposes a parallel management structure rather than a hierarchical one as in classical electrical grids. This implies that a coordination mechanism between facilities is required. This Thesis seeks to minimize the amount of information necessary for this coordination. To achieve this objective, two collective coordination techniques have been used: coupled oscillators and swarm intelligence. The combination of these techniques to coordinate a system with the characteristics of the electrical grid is in itself a novel approach. Therefore, this coordination objective is not only a contribution to the energy management field, but also to the field of collective systems. Results show that the proposed DSM algorithm reduces the difference between the maxima and minima of the grid consumption in proportion to the amount of energy controlled by the algorithm. Thus, the greater the amount of energy controlled by the algorithm, the greater the improvement in the efficiency of the electrical grid. In addition to the advantages resulting from the smoothing of the aggregated consumption, other advantages arise from the distributed approach followed in this Thesis. These advantages are summarized in the following features of the proposed DSM algorithm:

• Robustness: in a centralized system, a failure or breakage of the central node causes a malfunction of the whole system. Managing the grid from a distributed point of view implies that there is no central control node, so a failure in any facility does not affect the overall operation of the grid.

• Data privacy: the use of a distributed topology means that there is no central node holding sensitive information about all consumers. This Thesis goes a step further: the proposed DSM algorithm does not use specific information about consumer behavior, making the coordination between facilities completely anonymous.

• Scalability: the proposed DSM algorithm operates with any number of facilities, which allows the incorporation of new facilities without affecting its operation.

• Low cost: the proposed DSM algorithm adapts to current grids without any topological requirements. In addition, every facility computes its own management with low computational requirements, so a central node with high computational power is not required.

• Quick deployment: the scalability and low cost of the proposed DSM algorithm allow a quick deployment. No complex deployment schedule is required for this system.
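
The coupled-oscillator coordination mentioned above can be pictured with a toy simulation. The sketch below is an assumption-laden illustration of the general idea only (repulsively coupled phase oscillators spreading deferrable loads over time slots and thus reducing the peak-to-valley difference); it is not the Thesis's algorithm, which also relies on swarm-intelligence rules and real consumption profiles.

```python
import numpy as np

# Facilities run identical phase oscillators with *repulsive* coupling, so their
# phases spread out; each facility then schedules one unit of deferrable load in
# the time slot given by its phase, flattening the aggregated consumption.

rng = np.random.default_rng(0)
n_facilities, n_slots = 30, 24
theta = rng.normal(0.0, 0.1, n_facilities)   # initially almost synchronized phases
K, dt = -2.0, 0.05                            # negative K = repulsive (anti-phase) coupling

for _ in range(3000):
    # all-to-all Kuramoto coupling term for every oscillator
    coupling = np.sin(theta[None, :] - theta[:, None]).mean(axis=1)
    theta = (theta + dt * K * coupling) % (2 * np.pi)

def peak_to_valley(slots):
    """Difference between the most and least loaded time slots (1 unit of load each)."""
    load = np.bincount(slots, minlength=n_slots)
    return load.max() - load.min()

baseline = np.zeros(n_facilities, dtype=int)                       # everyone at the peak hour
scheduled = (theta / (2 * np.pi) * n_slots).astype(int) % n_slots  # slot chosen from the phase
print("order parameter r =", abs(np.exp(1j * theta).mean()).round(3))
print("peak-to-valley: baseline =", peak_to_valley(baseline),
      "| oscillator-scheduled =", peak_to_valley(scheduled))
```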

Relevância: 30.00%

Resumo:

In this thesis I apply concepts from mathematics, physics and statistics to the neurosciences. This field benefits from the collaborative work of multidisciplinary teams in which physicians, psychologists, engineers and other specialists work toward a common goal: the understanding of the brain. Research in this field is still in its early years, its birth being attributed to the neuronal theory of Santiago Ramón y Cajal in 1888. In more than one hundred years only a very small fraction of brain functioning has been discovered, and much more remains to be explored. Isolated techniques aim at unraveling the system that supports our cognition; nevertheless, in order to provide solid evidence in such a field, multimodal techniques have arisen, and with them we will be able to improve current knowledge about human cognition.

Here we focus on the multimodal integration of magnetoencephalography (MEG) and diffusion-weighted magnetic resonance imaging (dMRI). These techniques are sensitive to the magnetic fields emitted by neuronal currents and to the white matter microstructure, respectively. The combination of such techniques can bring up evidence about structural-functional synergies in brain information processing and about which part of this synergy fails in specific neurological pathologies. In particular, we are interested in the relationship between functional and structural connectivity, and in how to integrate this information. We quantify functional connectivity by studying the phase synchronization or the amplitude correlation between time series obtained by MEG, obtaining an index of the similarity between neuronal entities, i.e. brain regions. In addition, we quantify structural connectivity by performing diffusion tensor estimation from the diffusion-weighted images, thus obtaining an indicator of the integrity of the white matter or, if preferred, of the strength of the structural connections between regions. These quantifications are then combined following three different approaches, from the lowest to the highest level of integration, in chapters 3, 4 and 5. We finally apply the fused information to the characterization or prediction of mild cognitive impairment, a clinical entity considered an early stage in the pathological continuum of dementia.

The dissertation is divided into six chapters. In chapter 1 I introduce connectomics within the fields of neuroimaging and neuroscience, and describe the objectives of this thesis and the specific objectives of each of the scientific publications that were produced as a result of this work. In chapter 2 I describe the methods for each of the techniques that were employed, namely structural connectivity, resting-state functional connectivity, complex brain networks and graph theory, and finally I describe the clinical condition of mild cognitive impairment and the current state of the art in the search for early biomarkers. In chapters 3, 4 and 5 I have included the scientific publications that were generated during this work. They are included in their original published format and contain introduction, materials and methods, results and discussion. All methods employed in these papers are described in chapter 2.
Finally, in chapter 6 I summarize all the results from this thesis, both locally for each of the scientific publications and globally for the whole work.
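
As a concrete illustration of the functional-connectivity indices mentioned above, the following sketch (an assumed minimal implementation, not the thesis code) computes a phase-locking value and an amplitude-envelope correlation between two synthetic signals.

```python
import numpy as np
from scipy.signal import hilbert

def phase_locking_value(x, y):
    """PLV = |<exp(i*(phi_x - phi_y))>|, ranging from 0 (no locking) to 1."""
    phase_x = np.angle(hilbert(x))
    phase_y = np.angle(hilbert(y))
    return np.abs(np.mean(np.exp(1j * (phase_x - phase_y))))

def amplitude_correlation(x, y):
    """Pearson correlation between the analytic-signal envelopes."""
    env_x, env_y = np.abs(hilbert(x)), np.abs(hilbert(y))
    return np.corrcoef(env_x, env_y)[0, 1]

# Two noisy 10 Hz signals with a fixed phase lag: a high PLV is expected.
t = np.arange(0, 4, 1 / 500)
rng = np.random.default_rng(1)
x = np.sin(2 * np.pi * 10 * t) + 0.3 * rng.standard_normal(t.size)
y = np.sin(2 * np.pi * 10 * t + 0.8) + 0.3 * rng.standard_normal(t.size)
print("PLV:", round(phase_locking_value(x, y), 3))
print("envelope correlation:", round(amplitude_correlation(x, y), 3))
```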

Relevância: 30.00%

Resumo:

The management of long-lived radioactive wastes produced by nuclear reactors constitutes one of the main challenges of nuclear technology nowadays. A possible option for their management is the transmutation of long-lived nuclides into shorter-lived ones. Accelerator Driven Subcritical Systems (ADS) are one of the technologies under development to achieve this goal. An ADS consists of a subcritical nuclear reactor maintained in a steady state by an external neutron source driven by a particle accelerator. The interest of these systems lies in their capacity to be loaded with fuels having larger contents of minor actinides than conventional critical reactors, and in this way to increase the transmutation rates of these elements, which are mainly responsible for the long-term radiotoxicity of nuclear waste.

One of the key points identified for the operation of an industrial-scale ADS is the need to continuously monitor the reactivity of the subcritical system during operation. For this reason, since the 1990s a number of experiments have been conducted in zero-power subcritical assemblies (MUSE, RACE, KUCA, Yalina, GUINEVERE/FREYA) in order to experimentally validate these techniques. In this context, the present thesis is concerned with the validation of reactivity monitoring techniques at the Yalina-Booster subcritical assembly. This assembly belongs to the Joint Institute for Power and Nuclear Research (JIPNR-Sosny) of the National Academy of Sciences of Belarus. Experiments concerning reactivity monitoring were performed in this facility in 2008 under the EUROTRANS project of the 6th EU Framework Program, under the direction of CIEMAT. Two types of experiments were carried out: experiments with a pulsed neutron source (PNS) and experiments with a continuous source with short interruptions (beam trips).

In the case of the former, the PNS experiments, two fundamental techniques exist to measure the reactivity, known as the prompt-to-delayed neutron area-ratio technique (or Sjöstrand technique) and the prompt neutron decay constant technique. However, previous experiments have shown the need to apply correction techniques that take into account the spatial and energy effects present in a real system in order to obtain accurate values of the reactivity. In this thesis, these corrections have been investigated through simulations of the system with the Monte Carlo code MCNPX. This research has also served to propose a generalized version of these techniques in which relationships between the reactivity of the system and the measured quantities are obtained through Monte Carlo simulations. The second type of experiments, with a continuous source and beam trips, is more likely to be employed in an industrial ADS. The generalized version of the techniques developed for the PNS experiments has also been applied to the results of these experiments.
Furthermore, the work presented in this thesis is the first time, to my knowledge, that the reactivity of a subcritical system has been monitored during operation simultaneously with three different techniques: the current-to-flux, the source-jerk and the prompt neutron decay techniques. The cases analyzed include the fast variation of the system reactivity (insertion and extraction of a control rod) and the fast variation of the neutron source (long beam interruption and subsequent recovery).
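
For orientation, the two PNS techniques named above are commonly summarized by the following standard point-kinetics relations; they are quoted here in their textbook form rather than taken from the thesis, which applies Monte Carlo-based corrections to them.

```latex
% Sjöstrand (prompt-to-delayed area-ratio) technique: reactivity in dollars,
% where A_p is the area under the prompt-neutron response to the pulse and
% A_d the area under the delayed-neutron background.
\frac{\rho}{\beta_{\mathrm{eff}}} = -\,\frac{A_p}{A_d}

% Prompt neutron decay constant technique: after the pulse the prompt population
% decays as n(t) \propto e^{\alpha t}, so a fit of \alpha together with
% \beta_{eff} and the neutron generation time \Lambda yields \rho.
\alpha = \frac{\rho - \beta_{\mathrm{eff}}}{\Lambda}
```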

Relevância: 30.00%

Resumo:

The thesis analyzes the morphological evolution of assemblies of living neurons as they self-organize from collections of separated cells into elaborate, clustered networks. In particular, it contributes the design and implementation of a graph-based unsupervised segmentation algorithm with a very low associated computational cost. The processing automatically retrieves the whole network structure from large-scale phase-contrast images taken at high resolution throughout the entire life of a cultured neuronal network. The network structure is represented by a mathematical object (a matrix) in which nodes are identified neurons or neuron clusters, and links are the reconstructed connections between them. The algorithm is also able to extract any other relevant morphological information characterizing neurons and neurites. More importantly, and at variance with other segmentation methods that require fluorescence imaging from immunocytochemistry techniques, our measures are non-invasive and allow us to carry out a fully longitudinal analysis during the maturation of a single culture. In turn, a systematic statistical analysis of a group of topological observables grants us the possibility of quantifying and tracking the progression of the main network characteristics during the self-organization process of the culture. Our results point to the existence of a particular state corresponding to a small-world network configuration, in which several relevant micro- and meso-scale graph properties emerge.
Finally, we identify the main physical processes taking place during the cultures' morphological transformations, and embed them into a simplified growth model that quantitatively reproduces the overall set of experimental observations.
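
Once segmentation has produced the connectivity matrix, topological observables like those mentioned above can be computed directly from it. The sketch below uses a synthetic stand-in graph (not real culture data) and one common small-world convention, which is not necessarily the exact set of observables used in the thesis.

```python
import networkx as nx

# A small-world configuration combines high clustering with short path lengths
# relative to a degree-matched random reference graph.
culture = nx.connected_watts_strogatz_graph(n=100, k=6, p=0.1, seed=0)
reference = nx.gnm_random_graph(100, culture.number_of_edges(), seed=0)
if not nx.is_connected(reference):                                # keep the giant component
    giant = max(nx.connected_components(reference), key=len)      # so path lengths are defined
    reference = reference.subgraph(giant).copy()

C, L = nx.average_clustering(culture), nx.average_shortest_path_length(culture)
C_r, L_r = nx.average_clustering(reference), nx.average_shortest_path_length(reference)

print(f"clustering:  culture {C:.3f}  vs  random {C_r:.3f}")
print(f"path length: culture {L:.3f}  vs  random {L_r:.3f}")
print("small-world index sigma =", round((C / C_r) / (L / L_r), 2))
```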

Relevância: 30.00%

Resumo:

The constituent work of this Thesis is framed in the research on intermediate band solar cells (IBSCs). This concept offers the possibility of achieving devices with high photovoltaic-conversion efficiency. Up to now, the fundamentals of operation of IBSCs have been demonstrated experimentally; however, this has only been possible at low temperatures. The intermediate band (IB) concept demands thermal decoupling between the IB and the valence and conduction bands. State-of-the-art IB materials exhibit too strong a thermal coupling between the IB and one of the other two bands, which prevents the proper operation of IBSCs at room temperature.

In the particular case of InAs/GaAs quantum-dot (QD) IBSCs, as of today the most widely studied IBSC technology, there is fast thermal carrier exchange between the IB and the conduction band (CB), for two reasons: (1) a narrow (< 0.2 eV) energy gap between the IB and the CB, E_L, and (2) the existence of multiple electronic levels between them. Reason (1) also implies that the maximum achievable efficiency is below the theoretical limit for the ideal IBSC, in which E_L = 0.71 eV. In this context, our work focuses on the study of wide-bandgap QD-IBSCs. We have fabricated and experimentally investigated the first QD-IBSC prototypes in which AlGaAs or InGaP is the host material for the InAs QDs. We demonstrate an improved bandgap distribution in our wide-bandgap devices compared to the InAs/GaAs case. In particular, we have measured values of E_L higher than 0.4 eV. In the case of the AlGaAs prototypes, the increase in E_L comes with an increase of more than 100 meV in the activation energy of the thermal carrier escape. In addition, in our InAs/AlGaAs devices we demonstrate voltage up-conversion, i.e., the production of an open-circuit voltage larger than the photon energy (divided by the electron charge) of the incident monochromatic beam, and the achievement of voltage preservation at room temperature under concentrated white-light illumination. We also analyze the potential of an IB material for infrared detection.
We present a new IB-based concept of infrared photodetector that we have called the optically triggered infrared photodetector (OTIP). Our novel device is based on a new physical principle that allows the detection of infrared light to be switched ON and OFF by means of an external light source. We have fabricated an OTIP based on InAs/AlGaAs QDs with which we demonstrate normal-incidence photodetection in the 2–6 μm range, optically triggered by a 590 nm light-emitting diode. The theoretical study of the IB-assisted detection mechanism in the OTIP leads us to question the assumption of flat quasi-Fermi levels in the space-charge region of a solar cell. Based on device simulations, we prove and explain why this assumption is not valid under short-circuit and illumination conditions. We perform new experimental studies on InAs/GaAs QD-IBSC prototypes in order to gain knowledge on yet unexplored aspects of the performance of these devices. Specifically, we analyze the impact of the use of field-damping layers, and demonstrate this technique to be efficient for avoiding tunnel carrier escape from the QDs to the host material. We analyze the relationship between tunnel escape and voltage preservation, and propose voltage-dependent quantum efficiency measurements as a useful technique for assessing the tunneling-related limitation to the voltage preservation of QD-IBSC prototypes. Moreover, we perform temperature-dependent luminescence studies on InAs/GaAs samples and verify that the results are consistent with a split of the quasi-Fermi levels for the CB and the IB at low temperature. In order to contribute to the fabrication and characterization capabilities of the Solar Energy Institute of the Universidad Politécnica de Madrid (IES-UPM), we have participated in the installation and start-up of a molecular beam epitaxy (MBE) reactor and in the development of a photo- and electroluminescence characterization set-up. Using the MBE reactor, we have manufactured and characterized the first QD-IBSC fully fabricated at the IES-UPM.
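
As a rough orientation on why the reported increase of more than 100 meV in activation energy matters, a simple Boltzmann (Arrhenius) escape model, which is an assumption made here for illustration and not a calculation from the thesis, gives the reduction factor of the thermal escape rate at room temperature:

```latex
\frac{r(E_A)}{r(E_A + \Delta E)} = \exp\!\left(\frac{\Delta E}{k_B T}\right)
\approx \exp\!\left(\frac{0.1\ \mathrm{eV}}{0.0259\ \mathrm{eV}}\right) \approx 48
\qquad (T = 300\ \mathrm{K})
```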

Relevância: 30.00%

Resumo:

This work proposes an automatic methodology for modeling complex systems. Our methodology is based on the combination of Grammatical Evolution and classical regression to obtain an optimal set of features that form part of a linear and convex model. This technique provides both feature engineering and symbolic regression in order to infer accurate models without requiring effort or expertise from the designer. As advanced Cloud services become mainstream, the contribution of data centers to the overall power consumption of modern cities is growing dramatically. These facilities consume from 10 to 100 times more power per square foot than typical office buildings. Modeling the power consumption of these infrastructures is crucial to anticipate the effects of aggressive optimization policies, but accurate and fast power modeling is a complex challenge for high-end servers that is not yet satisfied by analytical approaches. For this case study, our methodology minimizes the error in power prediction. The approach has been tested using real Cloud applications, resulting in an average error in power estimation of 3.98%. Our work improves the possibilities of deriving energy-efficient policies in Cloud data centers and is applicable to other computing environments with similar characteristics.
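
The following sketch illustrates the general idea of combining generated features with a classical linear fit. The fixed pool of candidate transforms and the greedy selection loop are stand-ins for the Grammatical Evolution step and are illustrative assumptions, not the paper's actual method or data.

```python
import numpy as np

rng = np.random.default_rng(0)
cpu, freq = rng.uniform(0, 1, 200), rng.uniform(1.0, 3.0, 200)   # raw monitored variables
power = 80 + 40 * cpu * freq**2 + 5 * rng.standard_normal(200)    # synthetic "measured" power

candidates = {                                   # pool of candidate features (the "grammar")
    "cpu": cpu, "freq": freq, "cpu*freq": cpu * freq,
    "freq^2": freq**2, "cpu*freq^2": cpu * freq**2, "sqrt(cpu)": np.sqrt(cpu),
}

def fit_error(feature_names):
    """Least-squares fit of a linear model on the chosen features; returns RMS error."""
    X = np.column_stack([np.ones_like(power)] + [candidates[n] for n in feature_names])
    coef, *_ = np.linalg.lstsq(X, power, rcond=None)
    return np.sqrt(np.mean((X @ coef - power) ** 2))

# Greedy forward selection: keep adding the feature that most reduces the error.
selected = []
while len(selected) < 3:
    best = min((n for n in candidates if n not in selected),
               key=lambda n: fit_error(selected + [n]))
    selected.append(best)
    print(f"selected {selected!r:45s} RMS error = {fit_error(selected):.2f} W")
```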

Relevância: 30.00%

Resumo:

Nonlinear analysis tools for studying and characterizing the dynamics of physiological signals have gained popularity, mainly because tracking sudden alterations of the inherent complexity of biological processes might be an indicator of altered physiological states. Typically, in order to perform an analysis with such tools, the physiological variables that describe the biological process under study are used to reconstruct the underlying dynamics of the biological process. For that goal, a procedure called time-delay or uniform embedding is usually employed. Nonetheless, there is evidence of its inability to deal with non-stationary signals, such as those recorded from many physiological processes. To handle this drawback, this paper evaluates the utility of non-conventional time series reconstruction procedures based on non-uniform embedding, applying them to automatic pattern recognition tasks. The paper compares a state-of-the-art non-uniform approach with a novel scheme that fuses embedding and feature selection at once, searching for better reconstructions of the dynamics of the system. Moreover, results are also compared with two classic uniform embedding techniques. Thus, the goal is to compare uniform and non-uniform reconstruction techniques, including the one proposed in this work, for pattern recognition in biomedical signal processing tasks. Once the state space is reconstructed, the scheme characterizes it with three classic nonlinear dynamic features (Largest Lyapunov Exponent, Correlation Dimension and Recurrence Period Density Entropy), while classification is carried out by means of a simple k-nn classifier. In order to test its generalization capabilities, the approach was evaluated on three different physiological databases (speech pathologies, epilepsy and heart murmurs). In terms of the accuracy obtained in automatically detecting the presence of pathologies, and for the three types of biosignals analyzed, the non-uniform techniques used in this work slightly outperformed the results obtained using the uniform methods, suggesting their usefulness for characterizing non-stationary biomedical signals in pattern recognition applications. On the other hand, in view of the results obtained and its low computational load, the proposed technique is well suited to the applications under study.
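
A minimal sketch of the difference between uniform and non-uniform delay embedding follows; the lag sets here are chosen by hand purely for illustration, whereas the paper selects them jointly with feature selection.

```python
import numpy as np

def delay_embed(x, lags):
    """Return the matrix whose rows are [x[t], x[t-lags[0]], x[t-lags[1]], ...]."""
    max_lag = max(lags)
    columns = [x[max_lag:]] + [x[max_lag - lag: len(x) - lag] for lag in lags]
    return np.column_stack(columns)

t = np.arange(0, 50, 0.05)
signal = np.sin(t) + 0.1 * np.random.default_rng(2).standard_normal(t.size)

uniform = delay_embed(signal, lags=[5, 10, 15])       # tau = 5, embedding dimension 4
non_uniform = delay_embed(signal, lags=[3, 11, 26])   # arbitrary (non-uniform) lag set
print(uniform.shape, non_uniform.shape)
```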

Relevância: 30.00%

Resumo:

The study of the many types of natural and man-made cavities in different parts of the world is important to the fields of geology, geophysics, engineering, architecture, agriculture, heritage and landscape. Ground-penetrating radar (GPR) is a noninvasive geodetection and geolocation technique suitable for accurately determining buried structures. This technique requires knowing the propagation velocity of electromagnetic waves (EM velocity) in the medium. We propose a method for calibrating the EM velocity through the integration of laser imaging detection and ranging (LIDAR) and GPR techniques, using the Global Navigation Satellite System (GNSS) as support for geolocation. Once the EM velocity is known and the GPR profiles have been properly processed and migrated, they also reveal the hidden cavities and the old hidden structures of the cellar. In this article, we present a complete study of the joint use of the GPR, LIDAR and GNSS techniques in the characterization of cavities. We apply this methodology to study underground cavities in a group of wine cellars located in Atauta (Soria, Spain). The results serve to identify the construction elements that form the cavity and the group of cavities or cellars. The described methodology could be applied to other shallow underground structures with surface connection, where LIDAR and GPR profiles could be joined, as, for example, in archaeological cavities, sewerage systems, drainpipes, etc.
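
A minimal sketch of the velocity-calibration idea described above: an independently measured depth (for example, from the LIDAR survey) to a reflector that is also visible in the GPR profile fixes the propagation velocity and hence the effective relative permittivity. All numbers below are hypothetical and are not taken from the article.

```python
C0 = 0.2998  # speed of light in vacuum, m/ns

def calibrate(depth_m, two_way_time_ns):
    v = 2.0 * depth_m / two_way_time_ns   # EM velocity in the medium, m/ns
    eps_r = (C0 / v) ** 2                 # effective relative permittivity
    return v, eps_r

def time_to_depth(two_way_time_ns, v):
    """Convert a reflection time in a migrated GPR profile to depth using the calibrated v."""
    return v * two_way_time_ns / 2.0

v, eps_r = calibrate(depth_m=2.4, two_way_time_ns=40.0)   # hypothetical cellar-roof reflector
print(f"v = {v:.3f} m/ns, eps_r = {eps_r:.1f}")
print(f"a 55 ns event maps to {time_to_depth(55.0, v):.2f} m depth")
```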

Relevância: 30.00%

Resumo:

The verification and validation activity plays a fundamental role in improving software quality. Determining which are the most effective techniques for carrying out this activity has been an aspiration of experimental software engineering researchers for years. This paper reports a controlled experiment evaluating the effectiveness of two unit testing techniques: the functional testing technique known as equivalence partitioning (EP) and the control-flow structural testing technique known as branch testing (BT). This experiment is a literal replication of Juristo et al. (2013). Both experiments serve the purpose of determining whether the effectiveness of BT and EP varies depending on whether or not the faults are visible for the technique (InScope or OutScope, respectively). We have used the materials, design and procedures of the original experiment, but in order to adapt the experiment to the context we have: (1) reduced the number of studied techniques from 3 to 2; (2) assigned subjects to experimental groups by means of stratified randomization to balance the influence of programming experience; (3) localized the experimental materials and (4) adapted the training duration. We ran the replication at the Escuela Politécnica del Ejército Sede Latacunga (ESPEL) as part of a software verification & validation course. The experimental subjects were 23 master's degree students. EP is more effective than BT at detecting InScope faults. The session/program and group variables are found to have significant effects. BT is more effective than EP at detecting OutScope faults. The session/program and group variables have no effect in this case. The results of the replication and the original experiment are similar with respect to testing techniques. There are some inconsistencies with respect to the group factor; they can be explained by small-sample effects. The results for the session/program factor are inconsistent for InScope faults. We believe that these differences are due to a combination of the fatigue effect and a technique × program interaction. Although we were able to reproduce the main effects, the changes to the design of the original experiment make it impossible to identify the causes of the discrepancies for sure. We believe that further replications closely resembling the original experiment should be conducted to improve our understanding of the phenomena under study.
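
A toy illustration (not part of the experiment's material) of how the two techniques derive test cases for a small function: equivalence partitioning works from the specification's input classes, while branch testing works from the code's decision outcomes.

```python
def shipping_fee(weight_kg):
    """Spec: negative weight is invalid (-1); up to 1 kg costs 5; heavier costs 9."""
    if weight_kg < 0:
        return -1
    if weight_kg <= 1.0:
        return 5
    return 9

# EP: one representative value per equivalence class of the *specification*.
ep_cases = {-2.0: -1, 0.5: 5, 3.0: 9}

# BT: inputs chosen so that each branch (True/False of every if) is taken at least once.
bt_cases = {-1.0: -1, 0.2: 5, 2.0: 9}

for name, cases in (("EP", ep_cases), ("BT", bt_cases)):
    assert all(shipping_fee(w) == expected for w, expected in cases.items())
    print(name, "cases pass:", list(cases))
```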

Relevância: 30.00%

Resumo:

The underground cellars that appear in different parts of Spain are part of a dispersed agricultural landscape, in some cases damaged and in others at risk of disappearing. This paper studies the measurement and visualization of a group of wineries located in Atauta (Soria), in the Duero River corridor.

Relevância: 30.00%

Resumo:

An important issue related to future nuclear fusion reactors fueled with deuterium and tritium is the creation of large amounts of dust due to several mechanisms (disruptions, ELMs and VDEs). The dust size expected in nuclear fusion experiments (such as ITER) is of the order of microns (between 0.1 and 1000 μm). Almost all of this dust remains in the vacuum vessel (VV). This radiological dust can re-suspend in case of a LOVA (loss of vacuum accident), and these phenomena can cause explosions and serious damage to the health of the operators and to the integrity of the device. The authors have developed a facility, STARDUST, in order to reproduce thermofluid-dynamic conditions comparable to those expected inside the VV of the next generation of experiments, such as ITER, in case of a LOVA. The dust used inside the STARDUST facility has particle sizes and physical characteristics comparable with those of the dust created inside the VV of nuclear fusion experiments. In this facility, an experimental campaign has been conducted with the purpose of tracking the dust re-suspended at low pressurization rates (comparable to those expected in case of a LOVA in ITER and suggested by the General Safety and Security Report, ITER-GSSR) using a fast camera with a frame rate from 1,000 to 10,000 images per second. The velocity fields of the mobilized dust are derived from the imaging of a two-dimensional slice of the flow illuminated by an optically adapted laser beam. The aim of this work is to demonstrate the possibility of dust tracking by means of image processing, with the objective of determining the velocity field values of the dust re-suspended during a LOVA.
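
A minimal sketch of the cross-correlation approach commonly used to turn consecutive frames into a displacement (and hence velocity) estimate. The frames, pixel size and frame rate below are synthetic and hypothetical, and the method is a generic illustration rather than the authors' actual processing chain.

```python
import numpy as np
from scipy.signal import fftconvolve

def window_displacement(win_a, win_b):
    """Estimate the (dy, dx) shift of win_b relative to win_a via cross-correlation."""
    a = win_a - win_a.mean()
    b = win_b - win_b.mean()
    corr = fftconvolve(b, a[::-1, ::-1], mode="full")   # cross-correlation of a and b
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    return peak[0] - (win_a.shape[0] - 1), peak[1] - (win_a.shape[1] - 1)

rng = np.random.default_rng(3)
frame1 = rng.random((64, 64))
frame2 = np.roll(frame1, shift=(3, -2), axis=(0, 1))    # dust pattern moved 3 px down, 2 px left

dy, dx = window_displacement(frame1, frame2)
pixel_size_mm, frame_rate_hz = 0.05, 5000.0              # hypothetical calibration values
vy = dy * pixel_size_mm * frame_rate_hz / 1000.0         # m/s
vx = dx * pixel_size_mm * frame_rate_hz / 1000.0
print(f"displacement = ({dy}, {dx}) px  ->  velocity = ({vy:.2f}, {vx:.2f}) m/s")
```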