Abstract:
La optimización de parámetros tales como el consumo de potencia, la cantidad de recursos lógicos empleados o la ocupación de memoria ha sido siempre una de las preocupaciones principales a la hora de diseñar sistemas embebidos. Esto es debido a que se trata de sistemas dotados de una cantidad limitada de recursos, y que han sido tradicionalmente empleados para un propósito específico, que permanece invariable a lo largo de toda la vida útil del sistema. Sin embargo, el uso de sistemas embebidos se ha extendido a áreas de aplicación fuera de su ámbito tradicional, caracterizadas por una mayor demanda computacional. Así, por ejemplo, algunos de estos sistemas deben llevar a cabo un intenso procesado de señales multimedia o la transmisión de datos mediante sistemas de comunicaciones de alta capacidad. Por otra parte, las condiciones de operación del sistema pueden variar en tiempo real. Esto sucede, por ejemplo, si su funcionamiento depende de datos medidos por el propio sistema o recibidos a través de la red, de las demandas del usuario en cada momento, o de condiciones internas del propio dispositivo, tales como la duración de la batería. Como consecuencia de la existencia de requisitos de operación dinámicos, es necesario ir hacia una gestión dinámica de los recursos del sistema. Si bien el software es inherentemente flexible, no ofrece una potencia computacional tan alta como el hardware. Por lo tanto, el hardware reconfigurable aparece como una solución adecuada para tratar con mayor flexibilidad los requisitos variables dinámicamente en sistemas con alta demanda computacional. La flexibilidad y adaptabilidad del hardware requieren de dispositivos reconfigurables que permitan la modificación de su funcionalidad bajo demanda. En esta tesis se han seleccionado las FPGAs (Field Programmable Gate Arrays) como los dispositivos más apropiados, hoy en día, para implementar sistemas basados en hardware reconfigurable. De entre todas las posibilidades existentes para explotar la capacidad de reconfiguración de las FPGAs comerciales, se ha seleccionado la reconfiguración dinámica y parcial. Esta técnica consiste en sustituir una parte de la lógica del dispositivo, mientras el resto continúa en funcionamiento. La capacidad de reconfiguración dinámica y parcial de las FPGAs es empleada en esta tesis para tratar con los requisitos de flexibilidad y de capacidad computacional que demandan los dispositivos embebidos. La propuesta principal de esta tesis doctoral es el uso de arquitecturas de procesamiento escalables espacialmente, que son capaces de adaptar su funcionalidad y rendimiento en tiempo real, estableciendo un compromiso entre dichos parámetros y la cantidad de lógica que ocupan en el dispositivo. A esto nos referimos con arquitecturas con huella escalable. En particular, se propone para este propósito el uso de arquitecturas altamente paralelas, modulares, regulares y con una alta localidad en sus comunicaciones. El tamaño de dichas arquitecturas puede ser modificado mediante la adición o eliminación de algunos de los módulos que las componen, tanto en una dimensión como en dos. Esta estrategia permite implementar soluciones escalables, sin tener que contar con una versión de las mismas para cada uno de los tamaños posibles de la arquitectura. De esta manera se reduce significativamente el tiempo necesario para modificar su tamaño, así como la cantidad de memoria necesaria para almacenar todos los archivos de configuración.
En lugar de proponer arquitecturas para aplicaciones específicas, se ha optado por patrones de procesamiento genéricos, que pueden ser ajustados para solucionar distintos problemas en el estado del arte. A este respecto, se proponen patrones basados en esquemas sistólicos, así como de tipo wavefront. Con el objeto de poder ofrecer una solución integral, se han tratado otros aspectos relacionados con el diseño y el funcionamiento de las arquitecturas, tales como el control del proceso de reconfiguración de la FPGA, la integración de las arquitecturas en el resto del sistema, así como las técnicas necesarias para su implementación. Por lo que respecta a la implementación, se han tratado distintos aspectos de bajo nivel dependientes del dispositivo. Algunas de las propuestas realizadas a este respecto en la presente tesis doctoral son un router que es capaz de garantizar el correcto rutado de los módulos reconfigurables dentro del área destinada para ellos, así como una estrategia para la comunicación entre módulos que no introduce ningún retardo ni necesita emplear recursos configurables del dispositivo. El flujo de diseño propuesto se ha automatizado mediante una herramienta denominada DREAMS. La herramienta se encarga de la modificación de las netlists correspondientes a cada uno de los módulos reconfigurables del sistema, que han sido generadas previamente mediante herramientas comerciales. Por lo tanto, el flujo propuesto se entiende como una etapa de post-procesamiento, que adapta esas netlists a los requisitos de la reconfiguración dinámica y parcial. Dicha modificación la lleva a cabo la herramienta de una forma completamente automática, por lo que la productividad del proceso de diseño aumenta de forma evidente. Para facilitar dicho proceso, se ha dotado a la herramienta de una interfaz gráfica. El flujo de diseño propuesto, y la herramienta que lo soporta, tienen características específicas para abordar el diseño de las arquitecturas dinámicamente escalables propuestas en esta tesis. Entre ellas está el soporte para el realojamiento de módulos reconfigurables en posiciones del dispositivo distintas de aquella donde el módulo fue implementado originalmente, así como la generación de estructuras de comunicación compatibles con la simetría de la arquitectura. El router ha sido empleado también en esta tesis para obtener un rutado simétrico entre nets equivalentes. Dicha posibilidad ha sido explotada para aumentar la protección de circuitos con altos requisitos de seguridad frente a ataques de canal lateral, mediante la implementación de lógica complementaria con rutado idéntico. Para controlar el proceso de reconfiguración de la FPGA, se propone en esta tesis un motor de reconfiguración especialmente adaptado a los requisitos de las arquitecturas dinámicamente escalables. Además de controlar el puerto de reconfiguración, el motor de reconfiguración ha sido dotado de la capacidad de realojar módulos reconfigurables en posiciones arbitrarias del dispositivo, en tiempo real. De esta forma, basta con generar un único bitstream por cada módulo reconfigurable del sistema, independientemente de la posición donde va a ser finalmente reconfigurado. La estrategia seguida para implementar el proceso de realojamiento de módulos es diferente de las propuestas existentes en el estado del arte, pues consiste en la composición de los archivos de configuración en tiempo real.
De esta forma se consigue aumentar la velocidad del proceso, mientras que se reduce la longitud de los archivos de configuración parciales a almacenar en el sistema. El motor de reconfiguración soporta módulos reconfigurables con una altura menor que la altura de una región de reloj del dispositivo. Internamente, el motor se encarga de la combinación de los frames que describen el nuevo módulo con la configuración previamente existente en el dispositivo. El escalado de las arquitecturas de procesamiento propuestas en esta tesis también se puede beneficiar de este mecanismo. Se ha incorporado también un acceso directo a una memoria externa donde se pueden almacenar bitstreams parciales. Para acelerar el proceso de reconfiguración, se ha hecho funcionar el ICAP por encima de la máxima frecuencia de reloj aconsejada por el fabricante. Así, en el caso de Virtex-5, aunque la máxima frecuencia de reloj debería ser de 100 MHz, se ha conseguido hacer funcionar el puerto de reconfiguración a frecuencias de operación de hasta 250 MHz, incluyendo el proceso de realojamiento en tiempo real. Se ha previsto la posibilidad de portar el motor de reconfiguración a futuras familias de FPGAs. Por otro lado, el motor de reconfiguración se puede emplear para inyectar fallos en el propio dispositivo hardware, y así poder evaluar la tolerancia a fallos que ofrecen las arquitecturas reconfigurables. Los fallos son emulados mediante la generación de archivos de configuración a los que intencionadamente se les ha introducido un error, de forma que se modifica su funcionalidad. Con el objetivo de comprobar la validez y los beneficios de las arquitecturas propuestas en esta tesis, se han seguido dos líneas principales de aplicación. En primer lugar, se propone su uso como parte de una plataforma adaptativa basada en hardware evolutivo, con capacidad de escalabilidad, adaptabilidad y recuperación ante fallos. En segundo lugar, se ha desarrollado un deblocking filter escalable, adaptado a la codificación de vídeo escalable, como ejemplo de aplicación de las arquitecturas de tipo wavefront propuestas. El hardware evolutivo consiste en el uso de algoritmos evolutivos para diseñar hardware de forma autónoma, explotando la flexibilidad que ofrecen los dispositivos reconfigurables. En este caso, los elementos de procesamiento que componen la arquitectura son seleccionados de una biblioteca de elementos presintetizados, de acuerdo con las decisiones tomadas por el algoritmo evolutivo, en lugar de definir su configuración en tiempo de diseño. De esta manera, la configuración del core puede cambiar en tiempo real cuando lo hacen las condiciones del entorno, por lo que se consigue un control autónomo del proceso de reconfiguración dinámica. Así, el sistema es capaz de optimizar, de forma autónoma, su propia configuración. El hardware evolutivo tiene una capacidad inherente de auto-reparación. Se ha probado que las arquitecturas evolutivas propuestas en esta tesis son tolerantes ante fallos, tanto transitorios como permanentes y acumulativos. La plataforma evolutiva se ha empleado para implementar filtros de eliminación de ruido. La escalabilidad también ha sido aprovechada en esta aplicación. Las arquitecturas evolutivas escalables permiten la adaptación autónoma de los cores de procesamiento ante fluctuaciones en la cantidad de recursos disponibles en el sistema.
Por lo tanto, constituyen un ejemplo de escalabilidad dinámica para conseguir un determinado nivel de calidad, que puede variar en tiempo real. Se han propuesto dos variantes de sistemas escalables evolutivos. El primero consiste en un único core de procesamiento evolutivo, mientras que el segundo está formado por un número variable de arrays de procesamiento. La codificación de vídeo escalable, a diferencia de los codecs no escalables, permite la decodificación de secuencias de vídeo con diferentes niveles de calidad, de resolución temporal o de resolución espacial, descartando la información no deseada. Existen distintos algoritmos que soportan esta característica. En particular, se va a emplear el estándar Scalable Video Coding (SVC), que ha sido propuesto como una extensión de H.264/AVC, ya que este último es ampliamente utilizado tanto en la industria como a nivel de investigación. Para poder explotar toda la flexibilidad que ofrece el estándar, hay que permitir la adaptación de las características del decodificador en tiempo real. El uso de las arquitecturas dinámicamente escalables es propuesto en esta tesis con este objetivo. El deblocking filter es un algoritmo que tiene como objetivo la mejora de la percepción visual de la imagen reconstruida, mediante el suavizado de los "artefactos" de bloque generados en el lazo del codificador. Se trata de una de las tareas más intensivas en procesamiento de datos de H.264/AVC y de SVC, y además, su carga computacional es altamente dependiente del nivel de escalabilidad seleccionado en el decodificador. Por lo tanto, el deblocking filter ha sido seleccionado como prueba de concepto de la aplicación de las arquitecturas dinámicamente escalables para la compresión de vídeo. La arquitectura propuesta permite añadir o eliminar unidades de computación, siguiendo un esquema de tipo wavefront. La arquitectura ha sido propuesta conjuntamente con un esquema de procesamiento en paralelo del deblocking filter a nivel de macrobloque, de tal forma que cuando se varía el tamaño de la arquitectura, el orden de filtrado de los macrobloques varía de la misma manera. El patrón propuesto se basa en la división del procesamiento de cada macrobloque en dos etapas independientes, que se corresponden con el filtrado horizontal y vertical de los bloques dentro del macrobloque. Las principales contribuciones originales de esta tesis son las siguientes:
- El uso de arquitecturas altamente regulares, modulares, paralelas y con una intensa localidad en sus comunicaciones, para implementar cores de procesamiento dinámicamente reconfigurables.
- El uso de arquitecturas bidimensionales, en forma de malla, para construir arquitecturas dinámicamente escalables, con una huella escalable. De esta forma, las arquitecturas permiten establecer un compromiso entre el área que ocupan en el dispositivo y las prestaciones que ofrecen en cada momento. Se proponen plantillas de procesamiento genéricas, de tipo sistólico o wavefront, que pueden ser adaptadas a distintos problemas de procesamiento.
- Un flujo de diseño y una herramienta que lo soporta, para el diseño de sistemas reconfigurables dinámicamente, centrados en el diseño de las arquitecturas altamente paralelas, modulares y regulares propuestas en esta tesis.
- Un esquema de comunicaciones entre módulos reconfigurables que no introduce ningún retardo ni requiere el uso de recursos lógicos propios.
- Un router flexible, capaz de resolver los conflictos de rutado asociados con el diseño de sistemas reconfigurables dinámicamente.
- Un algoritmo de optimización para sistemas formados por múltiples cores escalables que optimiza, mediante un algoritmo genético, los parámetros de dicho sistema. Se basa en un modelo conocido como el problema de la mochila.
- Un motor de reconfiguración adaptado a los requisitos de las arquitecturas altamente regulares y modulares. Combina una alta velocidad de reconfiguración con la capacidad de realojar módulos en tiempo real, incluyendo el soporte para la reconfiguración de regiones que ocupan menos que una región de reloj, así como la réplica de un módulo reconfigurable en múltiples posiciones del dispositivo.
- Un mecanismo de inyección de fallos que, empleando el motor de reconfiguración del sistema, permite evaluar los efectos de fallos permanentes y transitorios en arquitecturas reconfigurables.
- La demostración de las posibilidades de las arquitecturas propuestas en esta tesis para la implementación de sistemas de hardware evolutivo, con una alta capacidad de procesamiento de datos.
- La implementación de sistemas de hardware evolutivo escalables, que son capaces de tratar con la fluctuación de la cantidad de recursos disponibles en el sistema de una forma autónoma.
- Una estrategia de procesamiento en paralelo para el deblocking filter, compatible con los estándares H.264/AVC y SVC, que reduce el número de ciclos de macrobloque necesarios para procesar un frame de vídeo.
- Una arquitectura dinámicamente escalable que permite la implementación de un nuevo deblocking filter, totalmente compatible con los estándares H.264/AVC y SVC, que explota el paralelismo a nivel de macrobloque.
El presente documento se organiza en siete capítulos. En el primero se ofrece una introducción al marco tecnológico de esta tesis, especialmente centrado en la reconfiguración dinámica y parcial de FPGAs. También se motiva la necesidad de las arquitecturas dinámicamente escalables propuestas en esta tesis. En el capítulo 2 se describen las arquitecturas dinámicamente escalables. Dicha descripción incluye la mayor parte de las aportaciones a nivel arquitectural realizadas en esta tesis. Por su parte, el flujo de diseño adaptado a dichas arquitecturas se propone en el capítulo 3. El motor de reconfiguración se propone en el capítulo 4, mientras que el uso de dichas arquitecturas para implementar sistemas de hardware evolutivo se aborda en el capítulo 5. El deblocking filter escalable se describe en el capítulo 6, mientras que las conclusiones finales de esta tesis, así como la descripción del trabajo futuro, se abordan en el capítulo 7.
ABSTRACT
The optimization of system parameters, such as power dissipation, the amount of hardware resources and the memory footprint, has always been a main concern when dealing with the design of resource-constrained embedded systems. This situation is even more demanding nowadays. Embedded systems can no longer be considered only as specific-purpose computers, designed for a particular functionality that remains unchanged during their lifetime. Instead, embedded systems are now required to deal with more demanding and complex functions, such as multimedia data processing and high-throughput connectivity. In addition, system operation may depend on external data, the user requirements or internal variables of the system, such as the battery lifetime. All these conditions may vary at run-time, leading to adaptive scenarios.
As a consequence of both the growing computational complexity and the existence of dynamic requirements, dynamic resource management techniques for embedded systems are needed. Software is inherently flexible, but it cannot match the computing power offered by hardware solutions. Therefore, reconfigurable hardware emerges as a suitable technology to deal with the run-time variable requirements of complex embedded systems. Adaptive hardware requires the use of reconfigurable devices, whose functionality can be modified on demand. In this thesis, Field Programmable Gate Arrays (FPGAs) have been selected as the most appropriate commercial technology existing nowadays to implement adaptive hardware systems. There are different ways of exploiting reconfigurability in reconfigurable devices. Among them is dynamic and partial reconfiguration, a technique which consists in substituting part of the FPGA logic on demand, while the rest of the device continues working. The strategy followed in this thesis is to exploit the dynamic and partial reconfiguration of commercial FPGAs to deal with the flexibility and complexity demands of state-of-the-art embedded systems. The proposal of this thesis to deal with run-time variable system conditions is the use of spatially scalable processing hardware IP cores, which are able to adapt their functionality or performance at run-time, trading them off with the amount of logic resources they occupy in the device. This is referred to as a scalable footprint in the context of this thesis. The distinguishing characteristic of the proposed cores is that they rely on highly parallel, modular and regular architectures, arranged in one or two dimensions. These architectures can be scaled by means of the addition or removal of the composing blocks. This strategy avoids implementing a full version of the core for each possible size, with the corresponding benefits in terms of scaling and adaptation time, as well as bitstream storage memory requirements. Instead of providing specific-purpose architectures, generic architectural templates, which can be tuned to solve different problems, are proposed in this thesis. Architectures following both systolic and wavefront templates have been selected. Together with the proposed scalable architectural templates, other issues needed to ensure the proper design and operation of the scalable cores, such as the device reconfiguration control, the run-time management of the architecture and the implementation techniques, have also been addressed in this thesis. With regard to the implementation of dynamically reconfigurable architectures, device-dependent low-level details are addressed. Some of the aspects covered in this thesis are the area-constrained routing for reconfigurable modules and an inter-module communication strategy which introduces neither extra delay nor logic overhead. The system implementation, from the hardware description to the device configuration bitstream, has been fully automated by modifying the netlists corresponding to each of the system modules, which are previously generated using the vendor tools. This modification is therefore envisaged as a post-processing step. Based on these implementation proposals, a design tool called DREAMS (Dynamically Reconfigurable Embedded and Modular Systems) has been created, including a graphical user interface.
The tool has specific features to cope with modular and regular architectures, including the support for module relocation and the inter-module communications scheme based on the symmetry of the architecture. The core of the tool is a custom router, which has also been exploited in this thesis to obtain symmetrically routed nets, with the aim of enhancing the protection of critical reconfigurable circuits against side-channel attacks. This is achieved by duplicating the logic with exactly equal routing. In order to control the reconfiguration process of the FPGA, a Reconfiguration Engine suited to the specific requirements set by the proposed architectures is also proposed. Thus, in addition to controlling the reconfiguration port, the Reconfiguration Engine has been enhanced with the online relocation ability, which allows employing a single configuration bitstream for all the positions where the module may be placed in the device. Unlike existing relocation solutions, which are based on bitstream parsers, the proposed approach relies on the online composition of bitstreams. This strategy allows increasing the speed of the process, while the length of partial bitstreams is also reduced. The height of the reconfigurable modules can be lower than the height of a clock region. The Reconfiguration Engine manages the merging process of the new and the existing configuration frames within each clock region. The process of scaling up and down the hardware cores also benefits from this technique. A direct link to an external memory where partial bitstreams can be stored has also been implemented. In order to accelerate the reconfiguration process, the ICAP has been overclocked beyond the maximum frequency reported by the manufacturer. In the case of Virtex-5, even though the maximum frequency of the ICAP is reported to be 100 MHz, valid operations at 250 MHz have been achieved, including the online relocation process. Portability of the reconfiguration solution to today's and, probably, future FPGA families has also been considered. The reconfiguration engine can also be used to inject faults into the real hardware device, making it possible to evaluate the fault tolerance offered by the reconfigurable architectures. Faults are emulated by introducing partial bitstreams intentionally modified to provide erroneous functionality. To prove the validity and the benefits offered by the proposed architectures, two demonstration application lines have been envisaged. First, scalable architectures have been employed to develop an evolvable hardware platform with adaptability, fault tolerance and scalability properties. Second, they have been used to implement a scalable deblocking filter suited to scalable video coding. Evolvable Hardware is the use of evolutionary algorithms to design hardware in an autonomous way, exploiting the flexibility offered by reconfigurable devices. In this case, processing elements composing the architecture are selected from a presynthesized library of processing elements, according to the decisions taken by the algorithm, instead of being decided at design time. This way, the configuration of the array may change as run-time environmental conditions do, achieving autonomous control of the dynamic reconfiguration process. Thus, the self-optimization property is added to the native self-configurability of the dynamically scalable architectures. In addition, evolvable hardware adaptability inherently offers self-healing features.
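As an illustration of the kind of evolutionary loop described above, the following minimal sketch shows a (1+λ) strategy that selects, for each position of a processing array, one element from a presynthesized library and keeps the best-performing configuration. Every name in it (the library contents, the array size, the fitness function) is a hypothetical placeholder and not the actual implementation of the thesis, where fitness would be measured on the reconfigured FPGA itself.

```python
import random

LIBRARY = ["identity", "median", "average", "max", "min", "edge"]  # hypothetical presynthesized elements
ROWS, COLS = 4, 4            # assumed array size
LAMBDA, GENERATIONS = 4, 100

def random_array():
    return [[random.choice(LIBRARY) for _ in range(COLS)] for _ in range(ROWS)]

def mutate(cfg, rate=0.1):
    # Replace a few processing elements with other library members
    return [[random.choice(LIBRARY) if random.random() < rate else pe
             for pe in row] for row in cfg]

def fitness(cfg):
    # Placeholder only: in the real platform this would reconfigure the
    # device with the selected elements and measure output quality.
    return -sum(row.count("identity") for row in cfg)

parent = random_array()
for _ in range(GENERATIONS):
    children = [mutate(parent) for _ in range(LAMBDA)]
    parent = max(children + [parent], key=fitness)   # (1+lambda) selection
print(parent)
```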
The proposal has proved to be fault-tolerant, since it is able to self-recover from both transient and cumulative permanent faults. The proposed evolvable architecture has been used to implement noise removal image filters. Scalability has also been exploited in this application. Scalable evolvable hardware architectures allow the autonomous adaptation of the processing cores to a fluctuating amount of resources available in the system. Thus, it constitutes an example of the dynamic quality scalability tackled in this thesis. Two variants have been proposed. The first one consists in a single dynamically scalable evolvable core, and the second one contains a variable number of processing cores. Scalable video is a flexible approach for video compression, which offers scalability at different levels. Unlike non-scalable codecs, a scalable video bitstream can be decoded with different levels of quality, spatial or temporal resolutions, by discarding the undesired information. The interest in this technology has been fostered by the development of the Scalable Video Coding (SVC) standard, as an extension of H.264/AVC. In order to exploit all the flexibility offered by the standard, it is necessary to adapt the characteristics of the decoder to the requirements of each client at run-time. The use of dynamically scalable architectures is proposed in this thesis with this aim. The deblocking filter algorithm is responsible for improving the visual perception of a reconstructed image, by smoothing the blocking artifacts generated in the encoding loop. This is one of the most computationally intensive tasks of the standard and, furthermore, it is highly dependent on the scalability level selected in the decoder. Therefore, the deblocking filter has been selected as a proof of concept of the implementation of dynamically scalable architectures for video compression. The proposed architecture allows the run-time addition or removal of computational units working in parallel to change its level of parallelism, following a wavefront computational pattern. The scalable architecture is offered together with a scalable parallelization strategy at the macroblock level, such that when the size of the architecture changes, the macroblock filtering order is modified accordingly. The proposed pattern is based on the division of the macroblock processing into two independent stages, corresponding to the horizontal and vertical filtering of the blocks within the macroblock (a schematic illustration of the macroblock wavefront is given after the chapter outline below). The main contributions of this thesis are:
- The use of highly parallel, modular, regular and local architectures to implement dynamically reconfigurable processing IP cores, for data-intensive applications with flexibility requirements.
- The use of two-dimensional mesh-type arrays as architectural templates to build dynamically reconfigurable IP cores, with a scalable footprint. The proposal consists in generic architectural templates, which can be tuned to solve different computational problems.
- A design flow and a tool targeting the design of DPR systems, focused on highly parallel, modular and local architectures.
- An inter-module communication strategy, which does not introduce delay or area overhead, named Virtual Borders.
- A custom and flexible router to solve the routing conflicts as well as the inter-module communication problems appearing during the design of DPR systems.
- An algorithm addressing the optimization of systems composed of multiple scalable cores, whose size can be decided individually, in order to optimize the system parameters. It is based on a model known as the multi-dimensional multiple-choice Knapsack problem.
- A reconfiguration engine tailored to the requirements of highly regular and modular architectures. It combines a high reconfiguration throughput with run-time module relocation capabilities, including the support for reconfigurable regions smaller than a clock region, as well as the replication of a module in multiple positions of the device.
- A fault injection mechanism which takes advantage of the system reconfiguration engine, as well as the modularity of the proposed reconfigurable architectures, to evaluate the effects of transient and permanent faults in these architectures.
- The demonstration of the possibilities of the architectures proposed in this thesis to implement evolvable hardware systems, while keeping a high processing throughput.
- The implementation of scalable evolvable hardware systems, which are able to adapt to the fluctuation of the amount of resources available in the system, in an autonomous way.
- A parallelization strategy for the H.264/AVC and SVC deblocking filter, which reduces the number of macroblock cycles needed to process the whole frame.
- A dynamically scalable architecture that permits the implementation of a novel deblocking filter module, fully compliant with the H.264/AVC and SVC standards, which exploits the macroblock-level parallelism of the algorithm.
This document is organized in seven chapters. In the first one, an introduction to the technology framework of this thesis, especially focused on dynamic and partial reconfiguration, is provided. The need for the dynamically scalable processing architectures proposed in this work is also motivated in this chapter. In chapter 2, the dynamically scalable architectures are described. The description includes most of the architectural contributions of this work. The design flow tailored to the scalable architectures, together with the DREAMS tool provided to implement them, are described in chapter 3. The reconfiguration engine is described in chapter 4. The use of the proposed scalable architectures to implement evolvable hardware systems is described in chapter 5, while the scalable deblocking filter is described in chapter 6. The final conclusions of this thesis, together with the description of future work, are addressed in chapter 7.
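The macroblock-level wavefront referred to above can be summarized as follows: a macroblock can only be filtered once its left and top neighbours have been processed, so all macroblocks on the same anti-diagonal are mutually independent and can be assigned to parallel filtering units. The sketch below only illustrates that generic dependency rule and the resulting scheduling; it does not reproduce the two-stage horizontal/vertical split nor the exact filtering order proposed in the thesis.

```python
def wavefront_schedule(mb_cols, mb_rows, units):
    """Group macroblocks into time slots assuming MB(x, y) depends on
    MB(x-1, y) and MB(x, y-1); 'units' parallel filters are available."""
    slots = []
    for d in range(mb_cols + mb_rows - 1):                 # anti-diagonals
        diagonal = [(x, d - x) for x in range(mb_cols) if 0 <= d - x < mb_rows]
        # Macroblocks on one anti-diagonal are independent; split them
        # into chunks of at most 'units' macroblocks per time slot.
        for i in range(0, len(diagonal), units):
            slots.append(diagonal[i:i + units])
    return slots

# Example: a QCIF frame (176x144 pixels) has 11x9 macroblocks
for t, slot in enumerate(wavefront_schedule(11, 9, units=4)):
    print(t, slot)
```

With 4 units in this toy example, the 99 macroblocks of an 11x9 frame are processed in 33 time slots instead of 99 sequential macroblock cycles.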
Abstract:
Nuestro cerebro contiene cerca de 10^14 sinapsis neuronales. Esta enorme cantidad de conexiones proporciona un entorno ideal donde distintos grupos de neuronas se sincronizan transitoriamente para provocar la aparición de funciones cognitivas, como la percepción, el aprendizaje o el pensamiento. Comprender la organización de esta compleja red cerebral en base a datos neurofisiológicos representa uno de los desafíos más importantes y emocionantes en el campo de la neurociencia. Se han propuesto recientemente varias medidas para evaluar cómo se comunican las diferentes partes del cerebro a diversas escalas (células individuales, columnas corticales, o áreas cerebrales). Podemos clasificarlas, según su simetría, en dos grupos: por una parte, las medidas simétricas, como la correlación, la coherencia o la sincronización de fase, que evalúan la conectividad funcional (FC); por otra, las medidas asimétricas, como la causalidad de Granger o la transferencia de entropía, que son capaces de detectar la dirección de la interacción, lo que denominamos conectividad efectiva (EC). En la neurociencia moderna ha aumentado el interés por el estudio de las redes funcionales cerebrales, en gran medida debido a la aparición de estos nuevos algoritmos que permiten analizar la interdependencia entre señales temporales, además de la emergente teoría de redes complejas y la introducción de técnicas novedosas, como la magnetoencefalografía (MEG), para registrar datos neurofisiológicos con gran resolución. Sin embargo, nos hallamos ante un campo novedoso que presenta aún varias cuestiones metodológicas sin resolver, algunas de las cuales tratarán de abordarse en esta tesis. En primer lugar, el creciente número de aproximaciones para determinar la existencia de FC/EC entre dos o más señales temporales, junto con la complejidad matemática de las herramientas de análisis, hacen deseable organizarlas todas en un paquete software intuitivo y fácil de usar. Aquí presento HERMES (http://hermes.ctb.upm.es), una toolbox en Matlab® diseñada precisamente con este fin. Creo que esta herramienta será de gran ayuda para todos aquellos investigadores que trabajen en el campo emergente del análisis de conectividad cerebral y supondrá un gran valor para la comunidad científica. La segunda cuestión práctica que se aborda es el estudio de la sensibilidad a las fuentes cerebrales profundas de dos tipos de sensores MEG: gradiómetros planares y magnetómetros. Esta aproximación se combina además con un enfoque metodológico, utilizando dos índices de sincronización de fase: el phase locking value (PLV) y el phase lag index (PLI), este último menos sensible al efecto de la conducción de volumen. Por lo tanto, se compara su comportamiento al estudiar las redes cerebrales, obteniendo que magnetómetros y PLV presentan, respectivamente, redes más densamente conectadas que gradiómetros planares y PLI, por los valores artificiales que crea el problema de la conducción de volumen. Sin embargo, cuando se trata de caracterizar redes epilépticas, el PLV ofrece mejores resultados, debido a la gran dispersión de las redes obtenidas con PLI. El análisis de redes complejas ha proporcionado nuevos conceptos que mejoran la caracterización de la interacción entre sistemas dinámicos. Se considera que una red está compuesta por nodos, que simbolizan sistemas, cuyas interacciones se representan por enlaces, y su comportamiento y topología pueden caracterizarse por un elevado número de medidas.
Existe evidencia teórica y empírica de que muchas de ellas están fuertemente correlacionadas entre sí. Por lo tanto, se ha conseguido seleccionar un pequeño grupo que caracteriza eficazmente estas redes y condensa la información redundante. Para el análisis de redes funcionales, la selección de un umbral adecuado para decidir si un determinado valor de conectividad de la matriz de FC es significativo y debe ser incluido para un análisis posterior se convierte en un paso crucial. En esta tesis, se han obtenido resultados más precisos al utilizar un test de subrogadas, basado en los datos, para evaluar individualmente cada uno de los enlaces, que al establecer a priori un umbral fijo para la densidad de conexiones. Finalmente, todas estas cuestiones se han aplicado al estudio de la epilepsia, caso práctico en el que se analizan las redes funcionales MEG, en estado de reposo, de dos grupos de pacientes epilépticos (epilepsia generalizada idiopática y focal frontal) en comparación con sujetos control sanos. La epilepsia es uno de los trastornos neurológicos más comunes, con más de 55 millones de afectados en el mundo. Esta enfermedad se caracteriza por la predisposición a generar crisis epilépticas de actividad neuronal anormal, excesiva o síncrona y, por tanto, es el escenario perfecto para este tipo de análisis, al tiempo que presenta un gran interés tanto desde el punto de vista clínico como de investigación. Los resultados manifiestan alteraciones específicas en la conectividad y un cambio en la topología de las redes en cerebros epilépticos, desplazando la importancia del ‘foco’ a la ‘red’, enfoque que va adquiriendo relevancia en las investigaciones recientes sobre epilepsia.
ABSTRACT
There are about 10^14 neuronal synapses in the human brain. This huge number of connections provides the substrate for neuronal ensembles to become transiently synchronized, producing the emergence of cognitive functions such as perception, learning or thinking. Understanding the complex brain network organization on the basis of neuroimaging data represents one of the most important and exciting challenges for systems neuroscience. Several measures have been recently proposed to evaluate at various scales (single cells, cortical columns, or brain areas) how the different parts of the brain communicate. We can classify them, according to their symmetry, into two groups: on the one hand, symmetric measures, such as correlation, coherence or phase synchronization indexes, evaluate functional connectivity (FC); on the other hand, asymmetric ones, such as Granger causality or transfer entropy, are able to detect effective connectivity (EC), revealing the direction of the interaction. In modern neurosciences, the interest in functional brain networks has increased strongly with the onset of new algorithms to study interdependence between time series, the advent of modern complex network theory and the introduction of powerful techniques to record neurophysiological data, such as magnetoencephalography (MEG). However, when analyzing neurophysiological data with this approach several questions arise. In this thesis, I intend to tackle some of the practical open problems in the field. First of all, the increase in the number of time series analysis algorithms to study brain FC/EC, along with their mathematical complexity, creates the necessity of arranging them into a single, unified toolbox that allows neuroscientists, neurophysiologists and researchers from related fields to easily access and make use of them.
I have developed such a toolbox for this aim. It is named HERMES (http://hermes.ctb.upm.es), runs in the Matlab® environment, and encompasses several of the most common indexes for the assessment of FC and EC. I believe that this toolbox will be very helpful to all the researchers working in the emerging field of brain connectivity analysis and will be of great value for the scientific community. The second important practical issue tackled in this thesis is the evaluation of the sensitivity to deep brain sources of two different types of MEG sensors, planar gradiometers and magnetometers, in combination with the related methodological approach, using two phase synchronization indexes: the phase locking value (PLV) and the phase lag index (PLI), the latter being less sensitive to the volume conduction effect. Thus, I compared their performance when studying brain networks, obtaining that magnetometers and PLV present more densely connected networks than planar gradiometers and PLI, respectively, due to the artificial values created by volume conduction. However, when it came to characterizing epileptic networks, PLV gave better results, as the PLI FC networks were very sparse. Complex network analysis has provided new concepts which improve the characterization of interacting dynamical systems. With this background, networks can be considered as composed of nodes, symbolizing systems, whose interactions with each other are represented by edges. A growing number of network measures is being applied in network analysis. However, there is theoretical and empirical evidence that many of these indexes are strongly correlated with each other. Therefore, in this thesis I reduced them to a small set, which could more efficiently characterize networks. Within this framework, selecting an appropriate threshold to decide whether a certain connectivity value of the FC matrix is significant and should be included in the network analysis becomes a crucial step. In this thesis, I used surrogate data tests to make an individual, data-driven evaluation of the significance of each edge, and obtained more accurate results than when setting the density of connections to a fixed value a priori. All these methodologies were applied to the study of epilepsy, analysing resting-state MEG functional networks in two groups of epileptic patients (generalized and focal epilepsy) that were compared to matched control subjects. Epilepsy is one of the most common neurological disorders, with more than 55 million people affected worldwide, characterized by the predisposition to generate epileptic seizures of abnormal, excessive or synchronous neuronal activity; it is thus a perfect scenario for this type of analysis, and one of great interest from both the clinical and the research perspective. Results revealed specific disruptions in connectivity and evidenced that network topology is changed in epileptic brains, supporting the shift from ‘focus’ to ‘network’ which is gaining importance in modern epilepsy research.
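For reference, the two phase synchronization indexes compared in this work are usually computed as sketched below: the phase locking value is the modulus of the average phase-difference vector, while the phase lag index is the absolute mean sign of the phase differences, which discards zero-lag (volume-conduction-like) contributions. This is a generic, textbook-style implementation, not code taken from HERMES.

```python
import numpy as np
from scipy.signal import hilbert

def plv_pli(x, y):
    """PLV and PLI between two equally sampled 1-D signals."""
    dphi = np.angle(hilbert(x)) - np.angle(hilbert(y))   # instantaneous phase difference
    plv = np.abs(np.mean(np.exp(1j * dphi)))             # 0 (no locking) .. 1 (perfect locking)
    pli = np.abs(np.mean(np.sign(np.sin(dphi))))         # insensitive to zero-lag coupling
    return plv, pli

# Example with two noisy, phase-shifted 10 Hz oscillations
t = np.arange(0, 10, 1e-3)
x = np.sin(2 * np.pi * 10 * t) + 0.5 * np.random.randn(t.size)
y = np.sin(2 * np.pi * 10 * t + 0.8) + 0.5 * np.random.randn(t.size)
print(plv_pli(x, y))
```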
Abstract:
A proposal for a model of the primary visual cortex is reported. It is structured on the basis of a simple unit cell able to perform fourteen pairs of different boolean functions of its two inputs. As a first step, a model of the retina is presented. Different types of responses, according to the different possibilities of interconnecting the building blocks, have been obtained. These responses constitute the basis for an initial configuration of the mammalian primary visual cortex. Some qualitative functions, such as the symmetry or size of an optical input, have been obtained. A proposal to extend this model to some higher functions concludes the paper.
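As context for the unit cell mentioned above: any boolean function of two inputs can be encoded by its four-row truth table, so a configurable cell needs only a 4-bit configuration word to select one of the 16 possible functions (the reported cell uses fourteen of them, arranged in pairs). The sketch below is a generic LUT-style cell, given purely as illustration; it is not the circuit of the reported model.

```python
def make_cell(truth_table):
    """truth_table: 4 bits giving the output for inputs (0,0), (0,1), (1,0), (1,1)."""
    def cell(a, b):
        return truth_table[(a << 1) | b]
    return cell

AND = make_cell((0, 0, 0, 1))
XOR = make_cell((0, 1, 1, 0))
NOR = make_cell((1, 0, 0, 0))
print(AND(1, 1), XOR(1, 0), NOR(0, 0))   # -> 1 1 1
```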
Abstract:
La artroplastia de cadera se considera uno de los mayores avances quirúrgicos de la Medicina. La aplicación de esta técnica de Traumatología se ha incrementado notablemente en los últimos años, a causa principalmente del progresivo incremento de la esperanza de vida. En efecto, con la edad aumentan los problemas de artrosis y osteoporosis, enfermedades típicas de las articulaciones y de los huesos que requieren en muchos casos la sustitución protésica total o parcial de la articulación. El buen comportamiento funcional de una prótesis depende en gran medida de la estabilidad primaria, es decir, del correcto anclaje de la prótesis en el momento de su implantación. Las prótesis no cementadas basan su éxito a largo plazo en la osteointegración que tiene lugar entre el material protésico y el tejido óseo, y para lograrla es imprescindible conseguir unas buenas condiciones de estabilidad primaria. El aflojamiento aséptico es la principal causa de fallo de la artroplastia total de cadera. Este es un fenómeno en el que, debido a complejas interacciones de factores mecánicos y biológicos, se producen movimientos relativos que comprometen la funcionalidad del implante. La minimización de los correspondientes daños depende en gran medida de la detección precoz del aflojamiento. Para lograr la detección temprana del aflojamiento aséptico del vástago femoral se han ensayado diferentes técnicas, tanto in vivo como in vitro: análisis numéricos y técnicas experimentales basadas en sensores de movimientos provocados por cargas transmitidas natural o artificialmente, tales como impactos o vibraciones de distintas frecuencias. Los montajes y procedimientos aplicados son heterogéneos y, en muchas ocasiones, complejos y costosos, no existiendo acuerdo sobre una técnica simple y eficaz de aplicación general. Asimismo, en la normativa vigente que regula las condiciones que debe cumplir una prótesis previamente a su comercialización, no hay ningún apartado referido específicamente a la evaluación de la bondad del diseño del vástago femoral con respecto a la estabilidad primaria. El objetivo de esta tesis es desarrollar una metodología para el análisis, in vitro, de la estabilidad de un vástago femoral implantado, a fin de poder evaluar las técnicas de implantación y los diferentes diseños de prótesis previamente a su oferta en el mercado. Además se plantea como requisito fundamental que el método desarrollado sea sencillo, reversible, repetible, no destructivo, con control riguroso de parámetros (condiciones de contorno de cargas y desplazamientos) y con un sistema de registro e interpretación de resultados rápido, fiable y asequible. Como paso previo, se ha realizado un análisis cualitativo del problema de contacto en la interfaz hueso-vástago aplicando una técnica optomecánica del campo continuo (fotoelasticidad). Para ello se han fabricado tres modelos en 2D del conjunto hueso-vástago, simulando tres tipos de contactos en la interfaz: contacto sin adherencia y con holgura, contacto sin adherencia y sin holgura, y contacto con adherencia y homogéneo. Aplicando la misma carga a cada modelo, y empleando la técnica de congelación de tensiones, se han visualizado los correspondientes estados tensionales, siendo estos más severos en el modelo de unión sin adherencia, como cabía esperar. En todo caso, los resultados son ilustrativos de la complejidad del problema de contacto y confirman la conveniencia y necesidad de la vía experimental para el estudio del problema.
Seguidamente se ha planteado un ensayo dinámico de oscilaciones libres con instrumentación de sensores resistivos tipo galga extensométrica. Las muestras de ensayo han sido huesos de fémur en todas sus posibles variantes: modelos simplificados, hueso sintético normalizado y hueso de cadáver, seco y fresco. Se ha diseñado un sistema de empotramiento del extremo distal de la muestra (fémur) con control riguroso de las condiciones de anclaje. La oscilación libre de la muestra se ha obtenido mediante la liberación instantánea de una carga estática determinada y aplicada previamente, bien con una máquina de ensayo o bien por gravedad. Cada muestra se ha instrumentado con galgas extensométricas convencionales cuya señal se ha registrado con un equipo dinámico comercial. Se ha aplicado un procedimiento de tratamiento de señal para acotar, filtrar y presentar las respuestas de los sensores en el dominio del tiempo y de la frecuencia. La interpretación de resultados es de tipo comparativo: se aplica el ensayo a una muestra de fémur intacto que se toma como referencia, y a continuación se repite el ensayo sobre la misma muestra con una prótesis implantada; la comparación de resultados permite establecer conclusiones inmediatas sobre los efectos de la implantación de la prótesis. La implantación ha sido realizada por un cirujano traumatólogo utilizando las mismas técnicas e instrumental empleados en el quirófano durante la práctica clínica real, y se ha trabajado con tres vástagos femorales comerciales. Con los resultados en el dominio del tiempo y de la frecuencia de las distintas aplicaciones se han establecido conclusiones sobre los siguientes aspectos:
- Viabilidad de los distintos tipos de muestras sintéticas: modelos simplificados y fémur sintético normalizado.
- Repetibilidad, linealidad y reversibilidad del ensayo.
- Congruencia de resultados con los valores teóricos deducidos de la teoría de oscilaciones libres de barras.
- Efectos de la implantación de tallos femorales en la amplitud de las oscilaciones, el amortiguamiento y las frecuencias de oscilación.
- Detección de armónicos asociados a la micromovilidad.
La metodología se ha demostrado apta para ser incorporada a la normativa de prótesis, es de aplicación universal y abre vías para el análisis de la detección y caracterización de la micromovilidad de una prótesis frente a las cargas de servicio.
ABSTRACT
Total hip arthroplasty is considered one of the greatest surgical advances in medicine. The application of this technique in Traumatology has increased significantly in recent years, mainly due to the progressive increase in life expectancy. In fact, advanced age increases osteoarthritis and osteoporosis problems, which are typical diseases of joints and bones and in many cases require total or partial prosthetic replacement of the joint. The correct functional behavior of a prosthesis is highly dependent on its primary stability, that is, on the correct anchoring of the prosthesis at the time of implantation. Uncemented prostheses base their long-term success on the quality of the osseointegration that takes place between the prosthetic material and the bone tissue, and good primary stability conditions are mandatory to achieve it. Aseptic loosening is the main cause of failure in total hip arthroplasty. This is a phenomenon in which relative movements occur, due to complex interactions of mechanical and biological factors, and these micromovements put the implant functionality at risk.
Minimizing the resulting damage greatly depends on the early detection of loosening. For this purpose, various techniques have been tested both in vivo and in vitro: numerical analyses and experimental techniques based on sensors for movements caused by naturally or artificially transmitted loads, such as impacts or vibrations at different frequencies. The assemblies and methods applied are heterogeneous and, in many cases, they are complex and expensive, with no agreement on the use of a simple and effective technique for general purposes. Likewise, in the current regulations governing the conditions to be fulfilled by a prosthesis before going to market, there is no specific section related to the evaluation of the femoral stem design in relation to primary stability. The aim of this thesis is to develop an in vitro methodology for analyzing the stability of an implanted femoral stem, in order to assess the implantation techniques and the different prosthesis designs prior to their offer in the market. We also propose as a fundamental requirement that the developed testing method should be simple, reversible, repeatable, non-destructive, with close monitoring of parameters (boundary conditions of loads and displacements) and with a system to record and interpret results in a fast, reliable and affordable manner. As a preliminary step, we have performed a qualitative analysis of the contact problem at the bone-stem interface, through the application of a continuous-field optomechanical technique (photoelasticity). For this purpose, three 2D models of the bone-stem set have been built, simulating three interface contact types: unbonded contact with clearance, unbonded contact without clearance, and homogeneous bonded contact. By applying the same load to each model, and using the stress-freezing technique, the corresponding stress states have been visualized, these being more severe, as expected, in the unbonded model. In any case, the results clearly show the complexity of the interface contact problem, and they confirm the need for experimental studies of this problem. Afterwards, a free-oscillation dynamic test has been carried out using resistive strain gauge sensors. Test samples have been femur bones in all possible variants: simplified models, standardized synthetic bone, and dry and fresh cadaveric bones. An embedding system for the distal end of the sample (femur), with strict control of the anchoring conditions, has been designed. The free oscillation of the sample has been obtained by the instantaneous release of a static load, which was previously determined and applied to the sample through a testing machine or by gravity. Each sample was equipped with conventional strain gauges whose signal was recorded with commercial dynamic acquisition equipment. Then, a signal processing procedure has been applied to delimit, filter and present the sensor responses in the time and frequency domains. Results are interpreted by comparing different trials: the test is applied to an intact femur sample, which is taken as a reference, and then the test is repeated on the same sample with an implanted prosthesis. From the comparison of results, immediate conclusions about the effects of the implantation of the prosthesis can be obtained. It must be said that the implantation has been performed by an expert orthopedic surgeon using the same techniques and instruments as those used in real clinical practice, working with three commercial femoral stems.
From the results obtained in the time and frequency domains for the different applications, the following conclusions have been established:
- Feasibility of the different types of synthetic samples: simplified models and standardized synthetic femur.
- Repeatability, linearity and reversibility of the testing method.
- Consistency of results with the theoretical values deduced from the theory of free oscillations of bars.
- Effects of the introduction of femoral stems on the amplitude, damping and frequencies of the oscillations.
- Detection of harmonics associated with micromobility.
This methodology has been proved suitable to be included in the standardization process of arthroplasty prostheses; it is universally applicable and it allows establishing new methods for the analysis, detection and characterization of prosthesis micromobility under functional loads.
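For context, the theoretical reference values mentioned above are usually obtained from the Euler-Bernoulli model of a uniform clamped-free (cantilever) bar; the thesis may use a more detailed model, so the expression below is only the standard textbook estimate:

\[
f_n = \frac{(\beta_n L)^2}{2\pi L^2}\sqrt{\frac{EI}{\rho A}},
\qquad \beta_1 L \approx 1.875,\quad \beta_2 L \approx 4.694,\quad \beta_3 L \approx 7.855,
\]

where E is the Young's modulus, I the second moment of area of the cross section, rho the density, A the cross-sectional area and L the free length of the clamped specimen. An implanted stem changes the effective stiffness and mass distribution of the specimen, which is why shifts in these frequencies, together with changes in damping and the appearance of extra harmonics, can reveal the quality of the fixation.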
Abstract:
When the fresh fruit reaches the final markets from the suppliers, its quality is not always as good as it should be, either because it has been mishandled during transportation or because it lacked an adequate quality control at the producer level before being shipped. This is why it is necessary for the final markets to establish their own quality assessment system if they want to guarantee to their customers the quality they intend to sell. In this work, a system to control fruit quality at the last level of the distribution channel has been designed. The system combines rapid control techniques with laboratory equipment and statistical sampling protocols, to obtain a dynamic, objective process which can advantageously replace the visual quality control inspections carried out by human experts at the reception platform of most hypermarkets. Portable measuring equipment has been chosen (firmness tester, temperature and humidity sensors...) as well as easy-to-use laboratory equipment (texturometer, colorimeter, refractometer...), combining them to control the most important fruit quality parameters (firmness, colour, sugars, acids). A complete computer network has been designed to control all the processes and store the collected data in real time, and to perform the computations. The sampling methods have also been defined to guarantee the confidence of the results. Some of the advantages of a quality assessment system such as the proposed one are: the minimisation of human subjectivity, the ability to use modern measuring techniques, and the possibility of using it also as a supplier's quality control system. It can also be a way to clarify the quality limits of fruits among members of the commercial channel, as well as the first step in the standardisation of quality control procedures.
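One way to make the "confidence of the results" requirement concrete is the usual sample-size calculation for estimating the mean of a quality parameter (for example firmness) within a margin of error E at a given confidence level; the figures below are purely illustrative assumptions, not values taken from the described system:

\[
n \ge \left(\frac{z_{\alpha/2}\,\sigma}{E}\right)^2,
\]

so that, for instance, estimating mean firmness with an assumed standard deviation of 8 N, a margin of 2 N and 95% confidence (z ≈ 1.96) would require about 62 fruits per lot.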
Condicionantes de la adherencia y anclaje en el refuerzo de muros de fábrica con elementos de fibras
Abstract:
Es cada vez más frecuente la rehabilitación del patrimonio construido, tanto la reparación de obras deterioradas como la adecuación de obras existentes a nuevos usos o solicitaciones. Se ha considerado el estudio del refuerzo de obras de fábrica, ya que constituyen un importante número dentro del patrimonio tanto de edificación como de obra civil (sistemas de muros de carga, o estructuras principales porticadas de acero u hormigón en las que las fábricas se emplean como cerramiento o distribución con elementos autoportantes). A la hora de reparar o reforzar una estructura es importante realizar un análisis de las deficiencias, una caracterización mecánica del elemento y de las solicitaciones presentes o posibles; en el apartado 1.3 del presente trabajo se recogen acciones de rehabilitación cuando lo que se precisa no es refuerzo estructural, así como las técnicas tradicionales más habituales para el refuerzo de fábricas, que suelen clasificarse según se trate de refuerzos exteriores o interiores. En los últimos años se ha adoptado el sistema de refuerzo de FRP, tecnología con origen en los refuerzos de hormigón tanto de elementos a flexión como de soportes. Estos refuerzos pueden ser de láminas adheridas a la fábrica soporte (SM), o de barras incluidas en rozas lineales (NSM). La elección de un sistema u otro depende de la necesidad de refuerzo y del tipo de solicitación predominante, del acceso para su colocación y de la exigencia de impacto visual. Una de las mayores limitaciones de los sistemas de refuerzo por FRP es que no suele movilizarse la resistencia del material de refuerzo, produciéndose previamente el fallo en la interfase con el soporte, con el consecuente despegue o deslaminación; dichos fallos pueden tener un origen local y propagarse a partir de una discontinuidad (por lo que es preciso un tratamiento cuidadoso de la superficie soporte), o bien producirse como consecuencia de una insuficiente longitud de anclaje para la transferencia de los esfuerzos en la interfase. Se considera imprescindible una caracterización mecánica del elemento a reforzar. Es por ello que el trabajo presenta, en el capítulo 2, métodos de cálculo de la fábrica soporte de distintas normativas, así como una formulación alternativa que tiene en cuenta la fábrica histórica, ya que su caracterización suele ser más complicada por la heterogeneidad y la falta de clasificación de sus materiales, especialmente de los morteros. Una vez conocidos los parámetros resistentes de la fábrica soporte es posible diseñar el refuerzo; hasta la fecha existe escasa normativa de refuerzos de FRP para muros de fábrica, consistente en un protocolo propuesto por la ACI 440 7R-10 que no contempla mejoras por tipo de anclaje y aporta valores muy conservadores de la eficacia del refuerzo. Como se ha indicado, la problemática principal de los refuerzos de FRP en muros es el modo de fallo, que impide un aprovechamiento óptimo de las propiedades del material. Recientemente se están realizando estudios con distintos métodos de anclaje para estos refuerzos, con los que se incrementa la capacidad última y se mantiene el soporte ligado al refuerzo tras la rotura. Junto con sistemas de anclaje por prolongación del refuerzo (tanto para láminas como para barras) se han ensayado anclajes con llaves de cortante, barras embebidas, o anclajes mecánicos de acero o incluso de FRP. Este texto resume, en el capítulo 4, algunas de las campañas experimentales llevadas a cabo entre los años 2000 y 2013 con distintos anclajes.
The fundamental parameters used to measure anchorage efficiency are examined, namely the failure mode, the increase in strength, and the displacements, which reveal the ductility of the reinforcement; these data are analysed as a function of the type of reinforcement (including fibre type and installation system) and the type of anchorage. There are also design parameters for the anchorages themselves. For embedded bars these amount to the bar diameter and material, the surface finish, the dimensions and shape of the groove, and the type of adhesive. For FRP spike anchors the characterisation includes the fibre type, the anchor manufacturing process and its diameter, the splay radius of the fan, the longitudinal spacing of the anchors, the number of anchor rows, the number of reinforcement plies, and the bonded length beyond the anchor; systematising the results of the authors of the reported campaigns is difficult, since some of these parameters vary from one campaign to another and prevent comparison. Chapter 5 presents the tests used in these anchorage campaigns, distinguishing between mode I tests (direct tension or pull-out), which are suited to NSM systems or to quantifying the individual strength of spike anchors, and mode II tests (single shear), which better resemble the working conditions of the reinforcement. This text is intended to open up a possible line of research on spike anchors, considering that, together with embedded-bar systems, they offer the greatest design versatility for FRP reinforcement, while their efficiency is still difficult to isolate because of the number of design parameters involved. Rehabilitation of built heritage is becoming increasingly frequent, including the repair of damaged works and their conditioning for a new use or higher loads. This work considers the study of masonry wall reinforcement, as most buildings and civil works have load-bearing walls or at least infilled masonry walls in concrete and steel structures. Before repairing or reinforcing a structure, it is important to analyse its deficiencies, its mechanical properties and both existing and potential loads; chapter 1, section 4 includes the most common rehabilitation methods when structural reinforcement is not needed, as well as traditional reinforcement techniques (internal and external reinforcement). In recent years the FRP reinforcement system has been adopted for masonry walls. FRP materials for reinforcement were initially used for concrete pillars and beams. FRP reinforcement includes two main techniques: surface mounted laminates (SM) and near surface mounted bars (NSM); one or the other may be more suitable according to the need for reinforcement and the main load, accessibility for installation and aesthetic requirements. One of the main constraints of FRP systems is that the maximum load of the material is not reached, owing to premature debonding failure, which can be caused by surface irregularities, so surface preparation is necessary. However, debonding (or delamination for SM techniques) can also be a consequence of insufficient anchorage length or stress concentration. In order to provide an accurate mechanical characterisation of walls, chapter 2 summarises the calculation methods included in guidelines as well as alternative formulations for old masonry walls, as historic wall properties are more complicated to obtain due to heterogeneity and data gaps (especially for mortars).
The next step is designing the reinforcement system; to date there is little regulation on wall reinforcement with FRP: ACI 440 7R-10 includes a protocol that does not consider the potential benefits provided by anchorage devices and gives conservative values for reinforcement efficiency. As noted above, the main problem of FRP masonry wall reinforcement is the failure mode. Recently, some authors have performed studies with different anchorage systems, finding that these systems are able to delay or prevent debonding. The studies include the following anchorage systems: overlap, embedded bars, shear keys, shear restraint and fiber anchors. Chapter 4 briefly describes several experimental works carried out between 2000 and 2013 concerning different anchorage systems. The main parameters that measure anchorage efficiency are: failure mode, failure load increase and displacements (in order to evaluate the ductility of the system); all these data strongly depend on the reinforcement system, the FRP fibers, the anchorage system, and also on the specific anchorage parameters. The specific anchorage parameters are a function of the anchorage system used. The embedded bar system has design variables that can be identified as: bar diameter and material, surface finish, groove dimensions, and adhesive. For FRP anchorages (spikes) a complete design characterisation should include: type of fiber, manufacturing process, diameter, fan orientation, anchor splay width, anchor longitudinal spacing and number of rows, number of FRP sheet plies, and bonded length beyond the anchorage devices; the parameters considered differ from one author to another, so comparing results is quite complicated. Chapter 5 includes the most common tests used in experimental investigations on bond behaviour and anchorage characterisation: direct shear tests (with single-shear and double-shear variants), pull-out tests and bending tests. Each of them may be used according to the data needed. The purpose of this text is to promote further investigation of anchor spikes, accepting that FRP anchors and embedded bars are the most versatile anchorage systems for FRP reinforcement and considering that, to date, their efficiency cannot be evaluated as there are too many design uncertainties.
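As a purely illustrative aside (a generic fracture-mechanics bond model, not part of the summarised work nor the ACI 440 7R-10 protocol), the debonding capacity and the role of anchorage length are often expressed in the following form, where b_f, t_f and E_f are the width, thickness and elastic modulus of the FRP, G_f the interface fracture energy and f_tm the tensile strength of the substrate; all symbols are assumptions introduced here for illustration:

\[
P_{\max} \;=\; b_f \sqrt{2\,E_f\,t_f\,G_f},
\qquad
l_e \;\propto\; \sqrt{\frac{E_f\,t_f}{f_{tm}}}.
\]

Once the bonded length exceeds the effective length l_e, additional bonded length adds little capacity, which is why mechanical devices such as spike anchors are investigated to raise the ultimate load and improve post-peak behaviour.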
Resumo:
On 22nd February 1996 the space mission STS-75 was launched from the NASA facilities at Cape Canaveral. The mission consisted in the launch of the shuttle Columbia in order to carry out two experiments in space: the TSS 1R (Tethered Satellite System 1 Reflight) and the USMP (United States Microgravity Payload). The TSS 1R is a replica of the similar TSS 1 mission of 1992. The TSS space programme is a bilateral scientific cooperation between the US space agency NASA (National Aeronautics and Space Administration) and the ASI (Italian Space Agency). The TSS 1R system consists of the shuttle Columbia, which deploys upward, by means of a conducting tether 20 km long, a spherical satellite (1.5 m diameter) containing scientific instrumentation. This system, orbiting at about 300 km from the Earth's surface, presently represents the largest experimental space structure. Due to its dimensions, its flexibility and the conducting properties of the tether, the system interacts in a quite complex manner with the Earth's magnetic field and the ionospheric plasma, in such a way that the total system behaves as an electromagnetic radiating antenna as well as an electric power generator. Twelve scientific experiments have been devised by US and Italian scientists in order to study the electrodynamic behaviour of the structure orbiting in the ionosphere. Two experiments have been prepared in the attempt to receive on the Earth's surface possible electromagnetic events radiated by the TSS 1R: the project EMET (Electro Magnetic Emissions from Tether), USA, and the project OESEE (Observations on the Earth Surface of Electromagnetic Emissions), Italy, consist in a coordinated programme of passive detection of such possible EM emissions. This detection will provide verification of some theoretical hypotheses on the electrodynamic interactions between the orbiting system, the Earth's magnetic field and the ionospheric plasma, with two principal aims: the technological assessment of the system concept as well as a deeper knowledge of the ionosphere properties for future space applications. A theoretical model that captures the peculiarities of tether emissions is being developed for signal prediction at constant tether current. As a step prior to the calculation of the expected ground signal, the Alfven-wave signature left by the tether far back in the ionosphere has been determined. The scientific expectations from the combined effort to measure the magnitude of those perturbations are outlined, taking into account the ground-track sensor systems used.
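As a brief illustrative note (added here, not part of the original abstract), the power-generator behaviour mentioned above arises from the motional electromotive force induced along a conducting tether moving through the geomagnetic field:

\[
\mathcal{E} \;=\; \int_0^L (\mathbf{v}\times\mathbf{B})\cdot d\boldsymbol{\ell} \;\approx\; v\,B\,L ,
\]

where v is the orbital velocity, B the local geomagnetic field and L the tether length. With rough order-of-magnitude values (v ≈ 7.7 km/s, B ≈ 3×10⁻⁵ T, L = 20 km) this gives an EMF of a few kilovolts.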
Resumo:
Realistic operation of helicopter flight simulators in complex topographies (such as urban environments) requires appropriate prediction of the incoming wind, and this prediction should be made in real time. Unfortunately, the wind topology around complex topographies shows time-dependent, fully nonlinear, turbulent patterns (i.e., wakes) whose simulation cannot be made using computationally inexpensive tools based on corrected potential approximations. Instead, the full Navier-Stokes equations plus some kind of turbulence modeling are necessary, which is quite computationally expensive. The complete unsteady flow depends on two parameters, namely the velocity and orientation of the free stream flow. The aim of this MSc thesis is to develop a methodology for the real-time simulation of these complex flows. For simplicity, the flow around a single building (20 m x 20 m cross section and 100 m height) is considered, with free stream velocity in the range 5-25 m/s. Because of the square cross section, the problem shows two reflection symmetries, which allows for restricting the orientations to the range 0° < α < 45°. The methodology includes an offline preprocess and the online operation. The preprocess consists of three steps. (i) An appropriate, unstructured mesh is selected in which the flow is simulated using OpenFOAM, and this is done for 33 combinations of 3 free stream intensities and 11 orientations. For each of these, the simulation proceeds for a sufficiently large time as to eliminate transients. This step is quite computationally expensive. (ii) Each flow field is post-processed using a combination of proper orthogonal decomposition, fast Fourier transform, and a convenient optimization tool, which identifies the relevant frequencies (namely, both the basic frequencies and their harmonics) and modes in the computational mesh. This combination includes several new ingredients to filter errors out and identify the relevant spatio-temporal patterns. Note that, in principle, the basic frequencies depend on both the intensity and the orientation of the free stream flow. The outcome of this step is a set of modes (vectors containing the three velocity components at all mesh points) for the various Fourier components, intensities, and orientations, which can be organized as a third order tensor. This step is fairly computationally inexpensive. (iii) The above mentioned tensor is treated using a combination of truncated high order singular value decomposition and appropriate one-dimensional interpolation (as in Lorente, Velazquez, Vega, J. Aircraft, 45 (2008) 1779-1788). The outcome is a tensor representation of both the relevant frequencies and the associated Fourier modes for a given pair of values of the free stream flow intensity and orientation. This step is fairly computationally inexpensive. The online operation requires just reconstructing the time-dependent flow field from its Fourier representation, which is extremely computationally inexpensive. The whole method is quite robust.
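As an illustration of how cheap the online step can be, the sketch below rebuilds a velocity field from a handful of precomputed Fourier modes; the array names, sizes and frequencies are hypothetical placeholders, not data from the thesis:

import numpy as np

def reconstruct_flow(t, modes, freqs, mean_flow):
    """Online step: sum a few harmonics to rebuild the flow field at time t.

    modes: complex array (n_freqs, 3 * n_mesh_points) of spatial Fourier modes,
           assumed already interpolated to the requested free-stream intensity
           and orientation (e.g. via truncated HOSVD plus 1-D interpolation).
    freqs: array (n_freqs,) of angular frequencies associated with the modes.
    mean_flow: real array (3 * n_mesh_points,) with the time-averaged field.
    """
    phases = np.exp(1j * freqs * t)               # one complex phase per frequency
    return mean_flow + np.real(phases @ modes)    # one small matrix-vector product

# Hypothetical usage: 5 retained frequencies on a 10,000-node mesh.
rng = np.random.default_rng(0)
modes = rng.standard_normal((5, 30000)) + 1j * rng.standard_normal((5, 30000))
freqs = np.array([0.8, 1.6, 2.4, 3.2, 4.0])       # rad/s, illustrative values
mean_flow = rng.standard_normal(30000)
u = reconstruct_flow(0.1, modes, freqs, mean_flow)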
Resumo:
Offshore wind industry has grown exponentially in recent years. Despite this growth, there are still many uncertainties in this field. This paper analyzes some current uncertainties in the offshore wind market, with the aim of going one step further in the development of this sector. To do this, some already identified uncertainties compromising offshore wind farm structural design have been described in the paper. Examples of these identified uncertainties are the design of the transition piece and the difficulties of characterizing the soil properties. Furthermore, this paper deals with other uncertainties not yet identified due to the limited experience in the sector. To do that, the current and most used offshore wind standards and recommendations related to the design of foundation and support structures (IEC 61400-1, 2005; IEC 61400-3, 2009; DNV-OS-J101, Design of Offshore Wind Turbine Structures, 2013; and Rules and Guidelines Germanischer Lloyd, WindEnergie, 2005) have been analyzed. These newly identified uncertainties are related to the lifetime and return period, load combinations, the scour phenomenon and its protection, the Morison, Froude-Krylov and diffraction regimes, wave theory, scale effects and liquefaction. In fact, there are a lot of improvements to be made in this field. Some of them are mentioned in this paper, but future experience in the matter will make it possible to detect more issues to be solved and improved.
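As a short illustrative note on one of the regime issues listed above (a standard textbook relation, not a result of the paper), the in-line hydrodynamic force per unit length on a slender cylindrical member is commonly estimated with the Morison equation, which is considered applicable roughly while the member diameter D remains small compared with the wavelength, diffraction becoming relevant otherwise:

\[
f(t) \;=\; \tfrac{1}{2}\,\rho\,C_D\,D\,u\,\lvert u\rvert \;+\; \rho\,C_M\,\frac{\pi D^2}{4}\,\dot{u},
\]

where u is the horizontal water-particle velocity, ρ the water density, and C_D, C_M empirical drag and inertia coefficients.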
Resumo:
In recent years, remote sensing imaging systems for the measurement of oceanic sea states have attracted renewed attention. Imaging technology is economical, non-invasive and enables a better understanding of the space-time dynamics of ocean waves over an area rather than at the selected point locations of previous monitoring methods (buoys, wave gauges, etc.). We present recent progress in space-time measurement of ocean waves using stereo vision systems on offshore platforms, focusing on sea states with wavelengths in the range of 0.01 m to 1 m. Both traditional disparity-based systems and modern elevation-based ones are presented in a variational optimization framework: the main idea is to pose the stereoscopic reconstruction problem of the surface of the ocean in a variational setting and design an energy functional whose minimizer is the desired temporal sequence of wave heights. The functional combines photometric observations as well as spatial and temporal smoothness priors. Disparity methods estimate the disparity between images as an intermediate step toward retrieving the depth of the waves with respect to the cameras, whereas elevation methods estimate the ocean surface displacements directly in 3-D space. Both techniques are used to measure ocean waves from real data collected at offshore platforms in the Black Sea (Crimean Peninsula, Ukraine) and the Northern Adriatic Sea (Venice coast, Italy). Then, the statistical and spectral properties of the resulting observed waves are analyzed. We show the advantages and disadvantages of the presented stereo vision systems and discuss future lines of research to improve their performance in critical issues such as the robustness of the camera calibration in spite of undesired variations of the camera parameters, or the processing time that it takes to retrieve ocean wave measurements from the stereo videos, which are very large datasets that need to be processed efficiently to be of practical use. Multiresolution and short-time approaches would improve the efficiency and scalability of the techniques so that wave displacements are obtained in feasible times.
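As a schematic illustration of the variational setting described above (the symbols and weights below are placeholders, not the authors' exact functional), the reconstruction can be posed as the minimisation of an energy of the form

\[
E(h) \;=\; \iint \Big[ \big(I_1(\mathbf{x}) - I_2(w_h(\mathbf{x}))\big)^2 \;+\; \alpha\,\lvert \nabla h \rvert^2 \;+\; \beta\,\Big\lvert \frac{\partial h}{\partial t} \Big\rvert^2 \Big] \, d\mathbf{x}\, dt ,
\]

where h is the sought wave-height field, I_1 and I_2 the stereo image pair, w_h the image warp induced by h through the camera geometry, and α, β the weights of the spatial and temporal smoothness priors.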
Resumo:
NASA's tether experiment ProSEDS will be placed in orbit on board a Delta-II rocket in early 2003. ProSEDS will test bare-tether electron collection, deorbiting of the rocket second stage, and the dynamic stability of the system. ProSEDS performance will vary both because ambient conditions change along the orbit and because tether-circuit parameters follow a step-by-step sequence in the current operating cycle. In this work we discuss how measurements of tether current and bias, plasma density, and deorbiting rate can be used to check the OML law for current collection. We review circuit bulk elements; the characteristic lengths and energies that determine collection (tether radius, electron thermal gyroradius and Debye length, particle temperatures, tether bias, ion ram energy); and the lengths determining current and bias profiles along the tether (extent of the magnetic self-field, a length gauging ohmic versus collection impedances, tether length). The analysis serves the purpose of estimating ProSEDS behavior in orbit and fostering our ability to extrapolate ProSEDS flight data to different tether and environmental conditions.
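For reference, a commonly quoted form of the orbital-motion-limited law is given below as an illustration (it is not taken from this abstract): the OML electron current collected per unit length by a thin cylindrical bare tether of radius R at a local positive bias ΔV, in a plasma of density N_e, is approximately

\[
\frac{dI}{d\ell} \;\approx\; 2\,R\,e\,N_e\,\sqrt{\frac{2\,e\,\Delta V}{m_e}} ,
\]

with e and m_e the electron charge and mass; the bias ΔV, and hence the collected current, varies along the tether.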
Resumo:
Automatic grading of programming assignments is an important topic in academic research. It aims at improving the level of feedback given to students and optimizing the professor's time. Several studies have reported the development of software tools to support this process. It is therefore helpful to get a quick and clear overview of their key features. This paper reviews an ample set of tools for automatic grading of programming assignments. They are divided into the most important mature tools, which have remarkable features, and those built recently, with new features. The review includes the definition and description of key features, e.g. supported languages, technology used, infrastructure, etc. The two kinds of tools allow a temporal comparative analysis to be made. This analysis shows good improvements in this research field; these include security, more language support, plagiarism detection, etc. On the other hand, the lack of a grading model for assignments is identified as an important gap in the reviewed tools. Thus, a characterization of evaluation metrics to grade programming assignments is provided as a first step towards obtaining such a model. Finally, new paths in this research field are proposed.
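As an illustration only (the metric names, weights and penalty below are hypothetical, not a model proposed in the reviewed tools or in this paper), a grading model could combine evaluation metrics as a weighted score:

# Hypothetical weighted grading model: combine per-metric scores in [0, 1]
# with weights that sum to 1, and apply plagiarism as a multiplicative penalty.
def grade(metrics: dict, weights: dict, plagiarism_penalty: float = 1.0) -> float:
    score = sum(weights[name] * metrics[name] for name in weights)
    return round(score * plagiarism_penalty, 2)

example = grade(
    metrics={"tests_passed": 0.9, "style": 0.7, "efficiency": 0.6},
    weights={"tests_passed": 0.6, "style": 0.2, "efficiency": 0.2},
)
print(example)  # 0.8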
Resumo:
In the last few years, technical debt has been used as a useful means of making the intrinsic cost of internal software quality weaknesses visible. This visibility is made possible by quantifying this cost. Specifically, technical debt is expressed in terms of two main concepts: principal and interest. The principal is the cost of eliminating or reducing the impact of a so-called technical debt item in a software system, whereas the interest is the recurring cost, over a time period, of not eliminating a technical debt item. Previous works on technical debt are mainly focused on estimating principal and interest, and on performing a cost-benefit analysis. This cost-benefit analysis allows one to determine whether removing technical debt is profitable and to prioritize which items incurring technical debt should be fixed first. Nevertheless, in these previous works technical debt is flat over time. However, the introduction of new factors to estimate technical debt may produce non-flat models that allow us to produce more accurate predictions. These factors should be used to estimate principal and interest, and to perform cost-benefit analyses related to technical debt. In this paper, we take a step forward by introducing uncertainty about the interest, together with time-frame factors, so that it becomes possible to depict a number of possible future scenarios. Estimations obtained without considering the possible evolution of the interest over time may be less accurate, as they consider simplistic scenarios without changes.
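To make the scenario idea concrete, the sketch below compares the principal of a debt item against the interest accumulated over a time frame when the per-period interest is uncertain; the figures, scenario names and probabilities are invented for illustration and do not come from the paper:

# Illustrative only: expected accumulated interest over T periods under a few
# uncertain interest scenarios, compared against the principal of the item.
principal = 10.0                       # cost of fixing the item now
scenarios = {                          # per-period interest and its probability
    "optimistic":  (0.2, 0.5),
    "expected":    (0.6, 0.3),
    "pessimistic": (1.5, 0.2),
}
T = 8                                  # time frame, in periods

expected_interest = sum(interest * prob * T for interest, prob in scenarios.values())
print(f"expected interest over {T} periods: {expected_interest:.1f}")
print("fix now" if expected_interest > principal else "carrying the debt is cheaper")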
Resumo:
We studied the coastal zone of the Tavoliere di Puglia plain (Puglia region, southern Italy) with the aim of recognizing the main unconformities and, therefore, the unconformity-bounded stratigraphic units (UBSUs; Salvador 1987, 1994) forming its Quaternary sedimentary fill. Recognizing unconformities is particularly problematic in an alluvial plain, due to the difficulties in distinguishing the unconformities that bound the UBSUs. So far, the recognition of UBSUs in buried successions has been made mostly by using seismic profiles. In our case, instead, the unavailability of the latter has prompted us to address the problem by developing a methodological protocol consisting of the following steps: I) geological survey in the field; II) drafting of a preliminary geological setting based on the field-survey results; III) dating of 102 samples coming from a large number of boreholes and some outcropping sections, by means of the amino acid racemization (AAR) method applied to ostracod shells and 14C dating, followed by filtering of the ages and selection of the valid ones; IV) correction of the preliminary geological setting in the light of the numerical ages; definition of the final geological setting with UBSUs; identification of a "hypothetical" or "attributed time range" (HTR or ATR) for each UBSU, the former very wide and subject to subsequent modification, the latter definitive; V) cross-checking between the numerical ages and/or other characteristics of the sedimentary bodies and/or the sea-level curves (with their effects on the sedimentary processes) in order to also narrow the hypothetical time ranges into attributed time ranges. The successful application of AAR geochronology to ostracod shells relies on the fact that the ability of ostracods to colonize almost all environments constitutes a tool for correlation, and also allows the inclusion in the same unit of coeval sediments that differ lithologically and paleoenvironmentally. The treatment of the numerical ages obtained using the AAR method required special attention. The first filtering step was made by the laboratory (rejection criteria a and b). Then, the second filtering step was made by testing the remaining ages in the field. Among these, in fact, we never compared an age with a single preceding and/or following age; instead, we identified homogeneous groups of numerical ages consistent with their reciprocal stratigraphic position. This operation led to the rejection of further numerical ages that deviate erratically from a larger, homogeneous age population which fits well with its stratigraphic position (rejection criterion c). After all of the filtering steps, the valid ages that remained were used, together with the lithological and paleoenvironmental criteria, for the subdivision of the sedimentary sequences into UBSUs. The numerical ages allowed us, in the first instance, to recognize all of the age gaps between two consecutive samples. Next, we identified the level, within the sedimentary thickness between these two samples, that may represent the most suitable UBSU boundary based on its lithology and/or the paleoenvironment.
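As a simplified illustration of the spirit of rejection criterion c (the data, threshold and grouping below are hypothetical and much cruder than the actual stratigraphic reasoning), ages that deviate erratically from the homogeneous population of their group can be flagged as follows:

import numpy as np

def filter_ages(ages_ka, rel_tol=0.35):
    """Keep ages close to the median of an already-grouped, homogeneous population."""
    ages = np.asarray(ages_ka, dtype=float)
    median = np.median(ages)
    keep = np.abs(ages - median) <= rel_tol * median
    return ages[keep], ages[~keep]

# Hypothetical AAR ages (in ka) from samples occupying the same stratigraphic interval:
valid, rejected = filter_ages([410, 395, 430, 780, 402])
print(valid, rejected)   # the 780 ka outlier is discarded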
The recognized units are: I) Coppa Nevigata sands (NEA), HTR: MIS 20–14, ATR: MIS 17–16; II) Argille subappennine (ASP), HTR: MIS 15–11, ATR: MIS 15–13; III) Coppa Nevigata synthem (NVI), HTR: MIS 13–8, ATR: MIS 12–11; IV) Sabbie di Torre Quarto (STQ), HTR: MIS 13–9.1, ATR: MIS 11; V) Amendola subsynthem (MLM1), HTR: MIS 12–10, ATR: MIS 11; VI) Undifferentiated continental unit (UCI), HTR: MIS 11–6.2, ATR: MIS 9.3–7.1; VII) Foggia synthem (TGF), ATR: MIS 6; VIII) Masseria Finamondo synthem (TPF), ATR: Upper Pleistocene; IX) Carapelle and Cervaro streams synthem (RPL), subdivided into: IXa) Incoronata subsynthem (RPL1), HTR: MIS 6–3, ATR: MIS 5–3; IXb) Marane La Pidocchiosa–Castello subsynthem (RPL3), ATR: Holocene; X) Masseria Inacquata synthem (NAQ), ATR: Holocene. The possibility of recognizing and dating Quaternary units in an alluvial plain to the scale of a marine isotope stage constitutes a clear step forward compared with similar studies regarding other alluvial-plain areas, where Quaternary units were dated almost exclusively using their stratigraphic position. As a result, they were generically associated with a geological sub-epoch. Instead, our method allowed the timing of the sedimentary processes to be resolved in greater detail: for example, MIS 11 and MIS 5.5 deposits have been recognized and characterized for the first time in the study area, highlighting their importance as phases of sedimentation.
Resumo:
In the context of aerial imagery, one of the first steps toward a coherent processing of the information contained in multiple images is geo-registration, which consists in assigning geographic 3D coordinates to the pixels of the image. This enables accurate alignment and geo-positioning of multiple images, detection of moving objects and fusion of data acquired from multiple sensors. To solve this problem there are different approaches that require, in addition to a precise characterization of the camera sensor, high-resolution referenced images or terrain elevation models, which are usually not publicly available or are out of date. Building upon the idea of developing technology that does not need a reference terrain elevation model, we propose a geo-registration technique that applies variational methods to obtain a dense and coherent surface elevation model that is used to replace the reference model. The surface elevation model is built by interpolation of scattered 3D points, which are obtained in a two-step process following a classical stereo pipeline: first, coherent disparity maps between image pairs of a video sequence are estimated, and then image point correspondences are back-projected. The proposed variational method enforces continuity of the disparity map not only along epipolar lines (as done by previous geo-registration techniques) but also across them, in the full 2D image domain. In the experiments, aerial images from synthetic video sequences have been used to validate the proposed technique.
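As a small illustration of the back-projection step in a classical rectified stereo pipeline (the camera parameters and pixel data below are invented placeholders, not values from the paper), image correspondences with known disparity can be lifted to scattered 3D points, which a later interpolation stage would turn into a dense surface elevation model:

import numpy as np

# Assumed (hypothetical) rectified-camera parameters.
f, baseline = 1000.0, 0.5          # focal length [px] and stereo baseline [m]
cx, cy = 640.0, 360.0              # principal point [px]

def backproject(u, v, disparity):
    """Classical rectified-stereo back-projection: depth = f * B / d."""
    z = f * baseline / disparity
    x = (u - cx) * z / f
    y = (v - cy) * z / f
    return np.stack([x, y, z], axis=-1)

# Hypothetical scattered correspondences (pixel coordinates and disparities):
u = np.array([100.0, 400.0, 900.0])
v = np.array([200.0, 500.0, 300.0])
d = np.array([12.0, 8.0, 15.0])
print(backproject(u, v, d))        # three scattered 3-D points to be interpolated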