970 results for modified local binary pattern
Resumo:
La optimización de parámetros tales como el consumo de potencia, la cantidad de recursos lógicos empleados o la ocupación de memoria ha sido siempre una de las preocupaciones principales a la hora de diseñar sistemas embebidos. Esto es debido a que se trata de sistemas dotados de una cantidad limitada de recursos, y que han sido tradicionalmente empleados para un propósito específico, que permanece invariable a lo largo de toda la vida útil del sistema. Sin embargo, el uso de sistemas embebidos se ha extendido a áreas de aplicación fuera de su ámbito tradicional, caracterizadas por una mayor demanda computacional. Así, por ejemplo, algunos de estos sistemas deben llevar a cabo un intenso procesado de señales multimedia o la transmisión de datos mediante sistemas de comunicaciones de alta capacidad. Por otra parte, las condiciones de operación del sistema pueden variar en tiempo real. Esto sucede, por ejemplo, si su funcionamiento depende de datos medidos por el propio sistema o recibidos a través de la red, de las demandas del usuario en cada momento, o de condiciones internas del propio dispositivo, tales como la duración de la batería. Como consecuencia de la existencia de requisitos de operación dinámicos, es necesario ir hacia una gestión dinámica de los recursos del sistema. Si bien el software es inherentemente flexible, no ofrece una potencia computacional tan alta como el hardware. Por lo tanto, el hardware reconfigurable aparece como una solución adecuada para tratar con mayor flexibilidad los requisitos variables dinámicamente en sistemas con alta demanda computacional. La flexibilidad y adaptabilidad del hardware requieren de dispositivos reconfigurables que permitan la modificación de su funcionalidad bajo demanda. En esta tesis se han seleccionado las FPGAs (Field Programmable Gate Arrays) como los dispositivos más apropiados, hoy en día, para implementar sistemas basados en hardware reconfigurable. De entre todas las posibilidades existentes para explotar la capacidad de reconfiguración de las FPGAs comerciales, se ha seleccionado la reconfiguración dinámica y parcial. Esta técnica consiste en sustituir una parte de la lógica del dispositivo, mientras el resto continúa en funcionamiento. La capacidad de reconfiguración dinámica y parcial de las FPGAs se emplea en esta tesis para tratar con los requisitos de flexibilidad y de capacidad computacional que demandan los dispositivos embebidos. La propuesta principal de esta tesis doctoral es el uso de arquitecturas de procesamiento escalables espacialmente, que son capaces de adaptar su funcionalidad y rendimiento en tiempo real, estableciendo un compromiso entre dichos parámetros y la cantidad de lógica que ocupan en el dispositivo. A esto nos referimos con arquitecturas con huella escalable. En particular, se propone para este propósito el uso de arquitecturas altamente paralelas, modulares, regulares y con una alta localidad en sus comunicaciones. El tamaño de dichas arquitecturas puede modificarse mediante la adición o eliminación de algunos de los módulos que las componen, tanto en una dimensión como en dos. Esta estrategia permite implementar soluciones escalables sin tener que contar con una versión de las mismas para cada uno de los tamaños posibles de la arquitectura. De esta manera se reduce significativamente el tiempo necesario para modificar su tamaño, así como la cantidad de memoria necesaria para almacenar todos los archivos de configuración.
En lugar de proponer arquitecturas para aplicaciones específicas, se ha optado por patrones de procesamiento genéricos, que pueden ser ajustados para solucionar distintos problemas del estado del arte. A este respecto, se proponen patrones basados en esquemas sistólicos, así como de tipo wavefront. Con el objeto de poder ofrecer una solución integral, se han tratado otros aspectos relacionados con el diseño y el funcionamiento de las arquitecturas, tales como el control del proceso de reconfiguración de la FPGA, la integración de las arquitecturas en el resto del sistema, así como las técnicas necesarias para su implementación. Por lo que respecta a la implementación, se han tratado distintos aspectos de bajo nivel dependientes del dispositivo. Algunas de las propuestas realizadas a este respecto en la presente tesis doctoral son un router capaz de garantizar el correcto rutado de los módulos reconfigurables dentro del área destinada para ellos, así como una estrategia para la comunicación entre módulos que no introduce ningún retardo ni necesita emplear recursos configurables del dispositivo. El flujo de diseño propuesto se ha automatizado mediante una herramienta denominada DREAMS. La herramienta se encarga de la modificación de las netlists correspondientes a cada uno de los módulos reconfigurables del sistema, que han sido generadas previamente mediante herramientas comerciales. Por lo tanto, el flujo propuesto se entiende como una etapa de post-procesamiento, que adapta esas netlists a los requisitos de la reconfiguración dinámica y parcial. Dicha modificación la lleva a cabo la herramienta de una forma completamente automática, por lo que la productividad del proceso de diseño aumenta de forma evidente. Para facilitar dicho proceso, se ha dotado a la herramienta de una interfaz gráfica. El flujo de diseño propuesto, y la herramienta que lo soporta, tienen características específicas para abordar el diseño de las arquitecturas dinámicamente escalables propuestas en esta tesis. Entre ellas está el soporte para el realojamiento de módulos reconfigurables en posiciones del dispositivo distintas a donde el módulo es originalmente implementado, así como la generación de estructuras de comunicación compatibles con la simetría de la arquitectura. El router ha sido empleado también en esta tesis para obtener un rutado simétrico entre nets equivalentes. Dicha posibilidad ha sido explotada para aumentar la protección de circuitos con altos requisitos de seguridad frente a ataques de canal lateral, mediante la implementación de lógica complementaria con rutado idéntico. Para controlar el proceso de reconfiguración de la FPGA, se propone en esta tesis un motor de reconfiguración especialmente adaptado a los requisitos de las arquitecturas dinámicamente escalables. Además de controlar el puerto de reconfiguración, el motor de reconfiguración ha sido dotado de la capacidad de realojar módulos reconfigurables en posiciones arbitrarias del dispositivo, en tiempo real. De esta forma, basta con generar un único bitstream por cada módulo reconfigurable del sistema, independientemente de la posición donde va a ser finalmente reconfigurado. La estrategia seguida para implementar el proceso de realojamiento de módulos es diferente de las propuestas existentes en el estado del arte, pues consiste en la composición de los archivos de configuración en tiempo real.
De esta forma se consigue aumentar la velocidad del proceso, mientras que se reduce la longitud de los archivos de configuración parciales a almacenar en el sistema. El motor de reconfiguración soporta módulos reconfigurables con una altura menor que la altura de una región de reloj del dispositivo. Internamente, el motor se encarga de la combinación de los frames que describen el nuevo módulo con la configuración previamente existente en el dispositivo. El escalado de las arquitecturas de procesamiento propuestas en esta tesis también se puede beneficiar de este mecanismo. Se ha incorporado también un acceso directo a una memoria externa donde se pueden almacenar bitstreams parciales. Para acelerar el proceso de reconfiguración, se ha hecho funcionar el ICAP por encima de la máxima frecuencia de reloj aconsejada por el fabricante. Así, en el caso de Virtex-5, aunque la máxima frecuencia de reloj debería ser de 100 MHz, se ha conseguido hacer funcionar el puerto de reconfiguración a frecuencias de operación de hasta 250 MHz, incluyendo el proceso de realojamiento en tiempo real. Se ha previsto la posibilidad de portar el motor de reconfiguración a futuras familias de FPGAs. Por otro lado, el motor de reconfiguración se puede emplear para inyectar fallos en el propio dispositivo hardware, y así ser capaces de evaluar la tolerancia ante los mismos que ofrecen las arquitecturas reconfigurables. Los fallos se emulan mediante la generación de archivos de configuración a los que intencionadamente se les ha introducido un error, de forma que se modifica su funcionalidad. Con el objetivo de comprobar la validez y los beneficios de las arquitecturas propuestas en esta tesis, se han seguido dos líneas principales de aplicación. En primer lugar, se propone su uso como parte de una plataforma adaptativa basada en hardware evolutivo, con capacidad de escalabilidad, adaptabilidad y recuperación ante fallos. En segundo lugar, se ha desarrollado un deblocking filter escalable, adaptado a la codificación de vídeo escalable, como ejemplo de aplicación de las arquitecturas de tipo wavefront propuestas. El hardware evolutivo consiste en el uso de algoritmos evolutivos para diseñar hardware de forma autónoma, explotando la flexibilidad que ofrecen los dispositivos reconfigurables. En este caso, los elementos de procesamiento que componen la arquitectura son seleccionados de una biblioteca de elementos presintetizados, de acuerdo con las decisiones tomadas por el algoritmo evolutivo, en lugar de definir su configuración en tiempo de diseño. De esta manera, la configuración del core puede cambiar en tiempo real cuando lo hacen las condiciones del entorno, por lo que se consigue un control autónomo del proceso de reconfiguración dinámica. Así, el sistema es capaz de optimizar, de forma autónoma, su propia configuración. El hardware evolutivo tiene una capacidad inherente de auto-reparación. Se ha probado que las arquitecturas evolutivas propuestas en esta tesis son tolerantes ante fallos, tanto transitorios como permanentes y acumulativos. La plataforma evolutiva se ha empleado para implementar filtros de eliminación de ruido. La escalabilidad también ha sido aprovechada en esta aplicación. Las arquitecturas evolutivas escalables permiten la adaptación autónoma de los cores de procesamiento ante fluctuaciones en la cantidad de recursos disponibles en el sistema.
Por lo tanto, constituyen un ejemplo de escalabilidad dinámica para conseguir un determinado nivel de calidad, que puede variar en tiempo real. Se han propuesto dos variantes de sistemas escalables evolutivos. El primero consiste en un único core de procesamiento evolutivo, mientras que el segundo está formado por un número variable de arrays de procesamiento. La codificación de vídeo escalable, a diferencia de los códecs no escalables, permite la decodificación de secuencias de vídeo con diferentes niveles de calidad, de resolución temporal o de resolución espacial, descartando la información no deseada. Existen distintos algoritmos que soportan esta característica. En particular, se emplea el estándar Scalable Video Coding (SVC), que ha sido propuesto como una extensión de H.264/AVC, ya que este último es ampliamente utilizado tanto en la industria como a nivel de investigación. Para poder explotar toda la flexibilidad que ofrece el estándar, hay que permitir la adaptación de las características del decodificador en tiempo real. El uso de las arquitecturas dinámicamente escalables se propone en esta tesis con este objetivo. El deblocking filter es un algoritmo que tiene como objetivo la mejora de la percepción visual de la imagen reconstruida, mediante el suavizado de los "artefactos" de bloque generados en el lazo del codificador. Se trata de una de las tareas más intensivas en procesamiento de datos de H.264/AVC y de SVC y, además, su carga computacional es altamente dependiente del nivel de escalabilidad seleccionado en el decodificador. Por lo tanto, el deblocking filter ha sido seleccionado como prueba de concepto de la aplicación de las arquitecturas dinámicamente escalables a la compresión de vídeo. La arquitectura propuesta permite añadir o eliminar unidades de computación, siguiendo un esquema de tipo wavefront. La arquitectura ha sido propuesta conjuntamente con un esquema de procesamiento en paralelo del deblocking filter a nivel de macrobloque, de tal forma que cuando se varía el tamaño de la arquitectura, el orden de filtrado de los macrobloques varía de la misma manera. El patrón propuesto se basa en la división del procesamiento de cada macrobloque en dos etapas independientes, que se corresponden con el filtrado horizontal y vertical de los bloques dentro del macrobloque. Las principales contribuciones originales de esta tesis son las siguientes:
- El uso de arquitecturas altamente regulares, modulares, paralelas y con una intensa localidad en sus comunicaciones, para implementar cores de procesamiento dinámicamente reconfigurables.
- El uso de arquitecturas bidimensionales, en forma de malla, para construir arquitecturas dinámicamente escalables, con una huella escalable. De esta forma, las arquitecturas permiten establecer un compromiso entre el área que ocupan en el dispositivo y las prestaciones que ofrecen en cada momento. Se proponen plantillas de procesamiento genéricas, de tipo sistólico o wavefront, que pueden ser adaptadas a distintos problemas de procesamiento.
- Un flujo de diseño y una herramienta que lo soporta, para el diseño de sistemas reconfigurables dinámicamente, centrados en el diseño de las arquitecturas altamente paralelas, modulares y regulares propuestas en esta tesis.
- Un esquema de comunicaciones entre módulos reconfigurables que no introduce ningún retardo ni requiere el uso de recursos lógicos propios.
- Un router flexible, capaz de resolver los conflictos de rutado asociados con el diseño de sistemas reconfigurables dinámicamente.
- Un algoritmo de optimización para sistemas formados por múltiples cores escalables que optimiza, mediante un algoritmo genético, los parámetros de dicho sistema. Se basa en un modelo conocido como el problema de la mochila.
- Un motor de reconfiguración adaptado a los requisitos de las arquitecturas altamente regulares y modulares. Combina una alta velocidad de reconfiguración con la capacidad de realojar módulos en tiempo real, incluyendo el soporte para la reconfiguración de regiones que ocupan menos que una región de reloj, así como la réplica de un módulo reconfigurable en múltiples posiciones del dispositivo.
- Un mecanismo de inyección de fallos que, empleando el motor de reconfiguración del sistema, permite evaluar los efectos de fallos permanentes y transitorios en arquitecturas reconfigurables.
- La demostración de las posibilidades de las arquitecturas propuestas en esta tesis para la implementación de sistemas de hardware evolutivos, con una alta capacidad de procesamiento de datos.
- La implementación de sistemas de hardware evolutivo escalables, capaces de tratar con la fluctuación de la cantidad de recursos disponibles en el sistema de una forma autónoma.
- Una estrategia de procesamiento en paralelo para el deblocking filter, compatible con los estándares H.264/AVC y SVC, que reduce el número de ciclos de macrobloque necesarios para procesar un frame de vídeo.
- Una arquitectura dinámicamente escalable que permite la implementación de un nuevo deblocking filter, totalmente compatible con los estándares H.264/AVC y SVC, que explota el paralelismo a nivel de macrobloque.
El presente documento se organiza en siete capítulos. En el primero se ofrece una introducción al marco tecnológico de esta tesis, especialmente centrado en la reconfiguración dinámica y parcial de FPGAs. También se motiva la necesidad de las arquitecturas dinámicamente escalables propuestas en esta tesis. En el capítulo 2 se describen las arquitecturas dinámicamente escalables. Dicha descripción incluye la mayor parte de las aportaciones a nivel arquitectural realizadas en esta tesis. Por su parte, el flujo de diseño adaptado a dichas arquitecturas se propone en el capítulo 3. El motor de reconfiguración se propone en el capítulo 4, mientras que el uso de dichas arquitecturas para implementar sistemas de hardware evolutivo se aborda en el capítulo 5. El deblocking filter escalable se describe en el capítulo 6, mientras que las conclusiones finales de esta tesis, así como la descripción del trabajo futuro, se abordan en el capítulo 7.
ABSTRACT
The optimization of system parameters, such as power dissipation, the amount of hardware resources and the memory footprint, has always been a main concern when dealing with the design of resource-constrained embedded systems. This situation is even more demanding nowadays. Embedded systems can no longer be considered only as specific-purpose computers, designed for a particular functionality that remains unchanged during their lifetime. Instead, embedded systems are now required to deal with more demanding and complex functions, such as multimedia data processing and high-throughput connectivity. In addition, system operation may depend on external data, the user requirements or internal variables of the system, such as the battery lifetime. All these conditions may vary at run-time, leading to adaptive scenarios.
As a consequence of both the growing computational complexity and the existence of dynamic requirements, dynamic resource management techniques for embedded systems are needed. Software is inherently flexible, but it cannot meet the computing power offered by hardware solutions. Therefore, reconfigurable hardware emerges as a suitable technology to deal with the run-time variable requirements of complex embedded systems. Adaptive hardware requires the use of reconfigurable devices, whose functionality can be modified on demand. In this thesis, Field Programmable Gate Arrays (FPGAs) have been selected as the most appropriate commercial technology existing nowadays to implement adaptive hardware systems. There are different ways of exploiting reconfigurability in reconfigurable devices. Among them is dynamic and partial reconfiguration, a technique which consists of substituting part of the FPGA logic on demand, while the rest of the device continues working. The strategy followed in this thesis is to exploit the dynamic and partial reconfiguration of commercial FPGAs to deal with the flexibility and complexity demands of state-of-the-art embedded systems. The proposal of this thesis to deal with run-time variable system conditions is the use of spatially scalable processing hardware IP cores, which are able to adapt their functionality or performance at run-time, trading them off against the amount of logic resources they occupy in the device. This is referred to as a scalable footprint in the context of this thesis. The distinguishing characteristic of the proposed cores is that they rely on highly parallel, modular and regular architectures, arranged in one or two dimensions. These architectures can be scaled by means of the addition or removal of their composing blocks. This strategy avoids implementing a full version of the core for each possible size, with the corresponding benefits in terms of scaling and adaptation time, as well as bitstream storage memory requirements. Instead of providing specific-purpose architectures, generic architectural templates, which can be tuned to solve different problems, are proposed in this thesis. Architectures following both systolic and wavefront templates have been selected. Together with the proposed scalable architectural templates, other issues needed to ensure the proper design and operation of the scalable cores, such as the device reconfiguration control, the run-time management of the architecture and the implementation techniques, have also been addressed in this thesis. With regard to the implementation of dynamically reconfigurable architectures, device-dependent low-level details are addressed. Some of the aspects covered in this thesis are the area-constrained routing of reconfigurable modules and an inter-module communication strategy which does not introduce either extra delay or logic overhead. The system implementation, from the hardware description to the device configuration bitstream, has been fully automated by modifying the netlists corresponding to each of the system modules, which are previously generated using the vendor tools. This modification is therefore envisaged as a post-processing step. Based on these implementation proposals, a design tool called DREAMS (Dynamically Reconfigurable Embedded and Modular Systems) has been created, including a graphical user interface.
The tool has specific features to cope with modular and regular architectures, including the support for module relocation and the inter-module communication scheme based on the symmetry of the architecture. The core of the tool is a custom router, which has also been exploited in this thesis to obtain symmetrically routed nets, with the aim of enhancing the protection of critical reconfigurable circuits against side channel attacks. This is achieved by duplicating the logic with identical routing. In order to control the reconfiguration process of the FPGA, a Reconfiguration Engine suited to the specific requirements set by the proposed architectures was also proposed. In addition to controlling the reconfiguration port, the Reconfiguration Engine has been enhanced with an online relocation capability, which allows a single configuration bitstream to be employed for all the positions where the module may be placed in the device. Unlike existing relocation solutions, which are based on bitstream parsers, the proposed approach is based on the online composition of bitstreams. This strategy increases the speed of the process, while the length of the partial bitstreams is also reduced. The height of the reconfigurable modules can be lower than the height of a clock region. The Reconfiguration Engine manages the merging of the new and the existing configuration frames within each clock region. The process of scaling the hardware cores up and down also benefits from this technique. A direct link to an external memory where partial bitstreams can be stored has also been implemented. In order to accelerate the reconfiguration process, the ICAP has been overclocked beyond the speed reported by the manufacturer. In the case of Virtex-5, even though the maximum frequency of the ICAP is reported to be 100 MHz, valid operations at 250 MHz have been achieved, including the online relocation process. Portability of the reconfiguration solution to today's and, probably, future FPGAs has also been considered. The reconfiguration engine can also be used to inject faults into real hardware devices, making it possible to evaluate the fault tolerance offered by the reconfigurable architectures. Faults are emulated by introducing partial bitstreams intentionally modified to provide erroneous functionality. To prove the validity and the benefits offered by the proposed architectures, two demonstration application lines have been envisaged. First, scalable architectures have been employed to develop an evolvable hardware platform with adaptability, fault tolerance and scalability properties. Second, they have been used to implement a scalable deblocking filter suited to scalable video coding. Evolvable hardware is the use of evolutionary algorithms to design hardware in an autonomous way, exploiting the flexibility offered by reconfigurable devices. In this case, the processing elements composing the architecture are selected from a presynthesized library according to the decisions taken by the algorithm, instead of being fixed at design time. This way, the configuration of the array may change as run-time environmental conditions do, achieving autonomous control of the dynamic reconfiguration process. Thus, the self-optimization property is added to the native self-configurability of the dynamically scalable architectures. In addition, evolvable hardware adaptability inherently offers self-healing features.
The proposal has proved to be fault-tolerant, since it is able to self-recover from both transient and cumulative permanent faults. The proposed evolvable architecture has been used to implement noise removal image filters. Scalability has also been exploited in this application. Scalable evolvable hardware architectures allow the autonomous adaptation of the processing cores to a fluctuating amount of resources available in the system. Thus, it constitutes an example of the dynamic quality scalability tackled in this thesis. Two variants have been proposed. The first one consists of a single dynamically scalable evolvable core, and the second one contains a variable number of processing cores. Scalable video is a flexible approach for video compression, which offers scalability at different levels. Unlike non-scalable codecs, a scalable video bitstream can be decoded with different levels of quality, spatial or temporal resolutions, by discarding the undesired information. The interest in this technology has been fostered by the development of the Scalable Video Coding (SVC) standard, as an extension of H.264/AVC. In order to exploit all the flexibility offered by the standard, it is necessary to adapt the characteristics of the decoder to the requirements of each client at run-time. The use of dynamically scalable architectures is proposed in this thesis with this aim. The deblocking filter algorithm is responsible for improving the visual perception of a reconstructed image, by smoothing the blocking artifacts generated in the encoding loop. This is one of the most computationally intensive tasks of the standard and, furthermore, it is highly dependent on the selected scalability level in the decoder. Therefore, the deblocking filter has been selected as a proof of concept of the implementation of dynamically scalable architectures for video compression. The proposed architecture allows the run-time addition or removal of computational units working in parallel to change its level of parallelism, following a wavefront computational pattern. The scalable architecture is offered together with a scalable parallelization strategy at the macroblock level, such that when the size of the architecture changes, the macroblock filtering order is modified accordingly. The proposed pattern is based on the division of the macroblock processing into two independent stages, corresponding to the horizontal and vertical filtering of the blocks within the macroblock. The main contributions of this thesis are:
- The use of highly parallel, modular, regular and local architectures to implement dynamically reconfigurable processing IP cores, for data-intensive applications with flexibility requirements.
- The use of two-dimensional mesh-type arrays as architectural templates to build dynamically reconfigurable IP cores, with a scalable footprint. The proposal consists of generic architectural templates, which can be tuned to solve different computational problems.
- A design flow and a tool targeting the design of DPR systems, focused on highly parallel, modular and local architectures.
- An inter-module communication strategy, which does not introduce delay or area overhead, named Virtual Borders.
- A custom and flexible router to solve the routing conflicts, as well as the inter-module communication problems, appearing during the design of DPR systems.
- An algorithm addressing the optimization of systems composed of multiple scalable cores, whose size can be decided individually, in order to optimize the system parameters. It is based on a model known as the multi-dimensional multi-choice knapsack problem.
- A reconfiguration engine tailored to the requirements of highly regular and modular architectures. It combines a high reconfiguration throughput with run-time module relocation capabilities, including the support for reconfigurable regions smaller than a clock region and the replication of a module in multiple positions of the device.
- A fault injection mechanism which takes advantage of the system reconfiguration engine, as well as the modularity of the proposed reconfigurable architectures, to evaluate the effects of transient and permanent faults in these architectures.
- The demonstration of the possibilities of the architectures proposed in this thesis to implement evolvable hardware systems, while keeping a high processing throughput.
- The implementation of scalable evolvable hardware systems, which are able to adapt to the fluctuation of the amount of resources available in the system, in an autonomous way.
- A parallelization strategy for the H.264/AVC and SVC deblocking filter, which reduces the number of macroblock cycles needed to process the whole frame.
- A dynamically scalable architecture that permits the implementation of a novel deblocking filter module, fully compliant with the H.264/AVC and SVC standards, which exploits the macroblock-level parallelism of the algorithm.
This document is organized into seven chapters. In the first one, an introduction to the technology framework of this thesis, especially focused on dynamic and partial reconfiguration, is provided. The need for the dynamically scalable processing architectures proposed in this work is also motivated in this chapter. In chapter 2, the dynamically scalable architectures are described. The description includes most of the architectural contributions of this work. The design flow tailored to the scalable architectures, together with the DREAMS tool provided to implement them, is described in chapter 3. The reconfiguration engine is described in chapter 4. The use of the proposed scalable architectures to implement evolvable hardware systems is described in chapter 5, while the scalable deblocking filter is described in chapter 6. The final conclusions of this thesis, and the description of future work, are addressed in chapter 7.
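As a rough illustration of the wavefront, macroblock-level parallelism summarized above, the following Python sketch builds a filtering schedule for a variable number of processing units. It is a simplified model under stated assumptions (each macroblock only waits for its left and top neighbours, and the horizontal/vertical two-stage split is ignored); it is not the actual architecture or code of the thesis.

    # Hypothetical sketch of a wavefront macroblock (MB) schedule. Assumption:
    # an MB can be filtered once its left and top neighbours are done, so all
    # MBs on the same anti-diagonal (x + y = constant) are mutually independent.
    def wavefront_schedule(mb_cols, mb_rows, num_units):
        """Return a list of macroblock cycles; each cycle lists the (x, y) MBs filtered in parallel."""
        diagonals = {}
        for y in range(mb_rows):
            for x in range(mb_cols):
                diagonals.setdefault(x + y, []).append((x, y))
        schedule = []
        for d in sorted(diagonals):
            ready = diagonals[d]
            # A diagonal wider than the number of units needs several cycles.
            for i in range(0, len(ready), num_units):
                schedule.append(ready[i:i + num_units])
        return schedule

    # Example: a QCIF-sized frame (11 x 9 MBs). Scaling the architecture from 2
    # to 4 units shortens the schedule, which is the area/performance trade-off
    # the scalable deblocking filter exploits.
    print(len(wavefront_schedule(11, 9, 2)), len(wavefront_schedule(11, 9, 4)))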
Resumo:
We introduce the need for a distributed guideline-based decision support (DSS) process, describe its characteristics, and explain how we implemented this process within the European Union's MobiGuide project. In particular, we have developed a mechanism of sequential, piecemeal projection, i.e., 'downloading' small portions of the guideline from the central DSS server to the local DSS in the patient's mobile device, which then applies that portion using the mobile device's local resources. The mobile device sends a callback to the central DSS when it encounters a triggering pattern predefined in the projected module, which leads to an appropriate predefined action by the central DSS, including sending a new projected module, or directly controlling the rest of the workflow. We suggest that such a distributed architecture, which explicitly defines a dialog between a central DSS server and a local DSS module, better balances the computational load and exploits the relative advantages of the central server and of the local mobile device.
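The projection/callback dialogue described above can be sketched as follows. This is a minimal Python illustration only: all names (Projection, LocalDSS, CentralDSS, on_measurement, the glucose threshold) are assumptions made for the sketch, not the MobiGuide API.

    # Illustrative sketch of the projection/callback dialogue: the central DSS
    # ships a small guideline fragment ("projection") to the local DSS on the
    # mobile device, which applies it with local resources and calls back when
    # a predefined trigger pattern is met.
    from dataclasses import dataclass
    from typing import Callable, Dict

    @dataclass
    class Projection:
        name: str
        trigger: Callable[[Dict], bool]          # predefined triggering pattern
        local_action: Callable[[Dict], str]      # what the mobile device does on its own

    class LocalDSS:
        def __init__(self, callback):
            self.callback = callback             # callback into the central DSS
            self.projection = None

        def load(self, projection):
            self.projection = projection

        def on_measurement(self, data):
            if self.projection is None:
                return None
            advice = self.projection.local_action(data)       # handled locally
            if self.projection.trigger(data):
                self.callback(self.projection.name, data)     # hand control back
            return advice

    class CentralDSS:
        def __init__(self):
            self.local = LocalDSS(self.on_callback)

        def start(self):
            # Project a small portion of the guideline, e.g. glucose monitoring.
            self.local.load(Projection(
                name="glucose_monitoring",
                trigger=lambda d: d.get("glucose", 0) > 180,   # illustrative threshold
                local_action=lambda d: "log the value and show standard diet advice",
            ))

        def on_callback(self, projection_name, data):
            # Centrally decide the next step: send a new projection or take over.
            print(f"callback from {projection_name}: {data}")

    if __name__ == "__main__":
        central = CentralDSS()
        central.start()
        central.local.on_measurement({"glucose": 150})   # stays local
        central.local.on_measurement({"glucose": 210})   # triggers a callback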
Resumo:
El análisis de los factores que determinan el establecimiento y la supervivencia de las orquídeas epífitas, que incluyen: a) las condiciones microambientales de los bosques que las mantienen, b) las preferencias por las características de los hospederos donde crecen, c) la limitación en la dispersión de semillas, d) las interacciones planta-planta, y e) las asociaciones micorrízicas para la germinación, resulta esencial para el desarrollo de estrategias para la conservación y el manejo de este grupo de plantas. Este trabajo ha evaluado la importancia de estos factores en Epidendrum rhopalostele, orquídea epífita del bosque de niebla montano, a través del análisis de los patrones espaciales de los árboles que la portan y de la propia orquídea a escala de población, de estudios de asociación y de métodos moleculares. Estos últimos han consistido en el uso de marcadores AFLP para el análisis de la estructura genética de la orquídea y en la secuenciación-clonación de la región ITS para la identificación de los hongos micorrízicos asociados. El objetivo de esta tesis es, por tanto, una mejor comprensión de los factores que condicionan la presencia de orquídeas epífitas en los remanentes de bosque de niebla montano y una evaluación de las implicaciones para la conservación y el mantenimiento de sus hábitats y la permanencia de sus poblaciones. El estudio fue realizado en un fragmento de bosque de niebla montano de sucesión secundaria situado al este de la Cordillera Real, en los Andes del sur de Ecuador, a 2250 m.s.n.m. y caracterizado por una pendiente marcada, una temperatura media anual de 20.8°C y una precipitación anual de 2193 mm. En este fragmento se mapearon, identificaron y caracterizaron todos los árboles presentes con DBH > 1 cm y todos los individuos de Epidendrum rhopalostele. Asimismo, se tomaron muestras de hoja de todas las orquídeas registradas para obtener ADN, y muestras de raíces de individuos con flor de E. rhopalostele, uno por cada forófito, para el análisis filogenético de micorrizas. Para los árboles, los forófitos y la población de E. rhopalostele se emplearon análisis espaciales de patrones de puntos basados en la K de Ripley y en la distancia al vecino más cercano. Se observó que la distribución espacial de los árboles y de los forófitos de E. rhopalostele no es aleatoria, ya que se ajusta a un proceso agregado de Poisson. De ahí se infiere una limitación en la dispersión de las semillas en el fragmento estudiado y en el establecimiento de la orquídea. El patrón de distribución de la población de E. rhopalostele en el fragmento muestra un agrupamiento a pequeña escala, lo que sugiere una preferencia por micro-sitios para el establecimiento de la orquídea, con un kernel de dispersión de las semillas estimado de 0.4 m. Las características preferentes del micro-sitio, como los tipos de árboles (Clusia alata y árboles muertos), la tolerancia a la sombra, la corteza rugosa y la distribución en los dos primeros metros, sugieren una tendencia a distribuirse en el sotobosque. La existencia de una segregación espacial entre adultos y juveniles sugiere una competencia por recursos limitados condicionada por la preferencia de micro-sitio. La estructura genética de la población de E. rhopalostele, analizada a través de Structure y PCoA, evidencia la presencia de dos grupos genéticos que coexisten en el fragmento y en los mismos forófitos, posiblemente por eventos de hibridación entre especies de Epidendrum simpátricas.
Los resultados del análisis de autocorrelación espacial efectuado en GenAlEx confirman una estructura genético-espacial a pequeña escala que es compatible con un mecanismo de dispersión de semillas a corta distancia, ocasionada por gravedad o pequeñas escorrentías, frente a la dispersión a larga distancia promovida por el viento generalmente atribuida a las orquídeas. Para la identificación de los micobiontes se amplificó la región ITS1-5.8S-ITS2, y 47 secuencias fueron usadas para el análisis filogenético basado en neighbor-joining, análisis bayesiano y maximum-likelihood, que determinó que Epidendrum rhopalostele establece asociaciones micorrízicas con al menos dos especies diferentes de Tulasnella. Se registraron plantas que estaban asociadas con los dos clados de hongos encontrados, lo que sugiere ausencia de limitación en la distribución del hongo. Con relación a las implicaciones para la conservación in situ resultado de este trabajo, se recomienda la preservación de todo el fragmento de bosque, así como de las interacciones existentes (polinizadores, micorrizas), a fin de conservar la diversidad genética de esta orquídea epífita. Si fuera necesaria una reintroducción, se deben contemplar distancias entre los individuos en cada forófito dentro de un rango de 0.4 m. Para promover el reclutamiento y la regeneración de E. rhopalostele, se recomienda que los forófitos correspondan preferentemente a árboles muertos o caídos y a especies, como Clusia alata, que posean además corteza rugosa y sean tolerantes a la sombra, y que se sitúen en el área del sotobosque con menor luminosidad. Además, es conveniente que las orquídeas, en su distribución vertical, estén ubicadas en los primeros metros. En conclusión, la limitación en la dispersión, las características del micro-sitio, las interacciones intraespecíficas y con especies congenéricas simpátricas y las preferencias micorrízicas condicionan la presencia de esta orquídea epífita en este tipo de bosque.
ABSTRACT
The analysis of the factors that determine the establishment and survival of epiphytic orchids, including a) the microenvironmental conditions of the forests that support them, b) preferences for the characteristics of the hosts where they grow, c) seed dispersal limitation, d) plant-plant interactions, and e) mycorrhizal associations for germination, is essential for the development of management and conservation strategies. This work evaluated the importance of these factors in Epidendrum rhopalostele, an epiphytic orchid of the montane cloud forest, through the analysis of the spatial patterns of the host trees and of the orchid at the population scale, association studies and molecular methods, including AFLP markers for the orchid population genetic structure and the sequencing of the ITS region for the associated mycorrhizal fungi. The aim of this thesis is to understand the factors that condition the presence of epiphytic orchids in the remnants of montane cloud forest and to assess the implications for the conservation and preservation of their habitats and the persistence of the orchid populations. The study was carried out in a fragment of montane cloud forest of secondary succession on the eastern slope of Cordillera Real, in the Andes of southern Ecuador, located at 2250 m a.s.l. and characterized by a steep slope, a mean annual temperature of 20.8°C and an annual precipitation of 2193 mm. All trees with DBH > 1 cm were mapped, characterized and identified. All E. rhopalostele individuals present were counted, marked, characterized and mapped.
Leaf samples of all orchid individuals were collected for DNA analysis. Root samples of flowering E. rhopalostele individuals were collected for the phylogenetic analysis of mycorrhizae, one per phorophyte. Spatial point pattern analysis based on Ripley's K function and the nearest-neighbor function was used for the trees, the phorophytes and the orchid population. We observed that the spatial distribution of trees and phorophytes is not random, as it fits a Poisson cluster process. This suggests a limitation of seed dispersal in the study fragment that is affecting orchid establishment. Furthermore, the small-scale spatial pattern of E. rhopalostele evidences a clustering that suggests a microsite preference for orchid establishment, with a dispersal kernel of 0.4 m. Microsite features such as the types of trees (dead trees or Clusia alata), shade-tolerant trees, rough bark and distribution within the first few meters suggest a tendency to prefer the understory for establishment. Regarding plant-plant interactions, a spatial segregation between adults and juveniles was present, suggesting competition for limited resources conditioned by a microsite preference. Analysis of the genetic structure of the E. rhopalostele population through Structure and PCoA shows two genetic groups coexisting in this fragment and on the same phorophytes, possibly as a result of hybridization between sympatric species of Epidendrum. Our spatial autocorrelation analysis, performed in GenAlEx, confirms a small-scale spatial-genetic structure within the genetic groups that is compatible with a short-distance dispersal mechanism caused by gravity or water run-off, instead of the long-distance, wind-promoted seed dispersal generally attributed to orchids. For mycobiont identification, the ITS1-5.8S-ITS2 rDNA region was amplified. Phylogenetic analysis of 47 sequences, performed with neighbor-joining, Bayesian and maximum-likelihood methods, yielded two Tulasnella clades; this orchid therefore establishes mycorrhizal associations with at least two different Tulasnella species. In some cases both fungal clades were present in the same root, suggesting no limitation in fungal distribution. Concerning the implications for in situ conservation resulting from this work, the preservation of the whole forest fragment and its interactions (pollinators, mycorrhizae) is recommended in order to conserve the genetic diversity of this species. If a reintroduction were necessary, distances between individuals on each phorophyte within a range of 0.4 m are recommended. To promote the recruitment and regeneration of E. rhopalostele, it is recommended that phorophytes correspond to dead or fallen trees or to species, such as Clusia alata, that have rough bark and are shade-tolerant. Furthermore, regarding vertical distribution, it is also convenient that orchids are located in the first meters (in the understory, where there is less light). In conclusion, seed dispersal limitation, microsite characteristics, plant-plant interactions or interactions with congeneric sympatric species, and mycorrhizal preferences condition the presence of this epiphytic orchid in this forest fragment.
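For reference, the point-pattern statistic used above (Ripley's K) has a standard estimator; the formula below is textbook material, not taken from the thesis:

    \hat{K}(r) = \frac{A}{n^{2}} \sum_{i=1}^{n} \sum_{j \neq i} \frac{\mathbf{1}(d_{ij} \le r)}{w_{ij}},

where A is the area of the plot, n the number of mapped points (trees, phorophytes or orchid individuals), d_{ij} the distance between points i and j, and w_{ij} an edge-correction weight. Under complete spatial randomness K(r) = \pi r^{2}, so values above \pi r^{2} at a given scale r indicate the clustering reported here for the host trees and for E. rhopalostele.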
Resumo:
A new ultrafiltration membrane was developed by the incorporation of binary metal oxides inside polyethersulfone. Physico-chemical characterization of the binary metal oxides demonstrated that the presence of Ti in the TiO2–ZrO2 system results in an increase in the size of the oxides, and also in their dispersity. The crystalline phases of the synthesized binary metal oxides were identified as srilankite and zirconium titanium oxide. The effect of the addition of ZrO2 can be expressed in terms of the inhibition of crystal growth of nanocrystalline TiO2 during the synthesis process. For photocatalytic applications the band gap of the synthesized semiconductors was determined, confirming a gradual increase (blue shift) in the band gap as the amount of Zr loading increases. Distinct distributions of binary metal oxides were found along the permeation axis for the synthesized membranes. Particles with Ti are more uniformly dispersed throughout the membrane cross-section. The physico-chemical characterization of the membranes showed a strong correlation between some key membrane properties and the spatial particle distribution in the membrane structure. The proximity of metal oxide fillers to the membrane surface determines the hydrophilicity and porosity of the modified membranes. Membranes incorporating binary metal oxides were found to be promising candidates for wastewater treatment by ultrafiltration, considering the observed improvement in flux and anti-fouling properties of the doped membranes. Multi-run fouling tests of the doped membranes confirmed the stability of permeation through membranes embedded with binary TiO2–ZrO2 particles.
Resumo:
The human visual system is able to effortlessly integrate local features to form our rich perception of patterns, despite the fact that visual information is discretely sampled by the retina and cortex. By using a novel perturbation technique, we show that the mechanisms by which features are integrated into coherent percepts are scale-invariant and nonlinear (phase and contrast polarity independent). They appear to operate by assigning position labels or “place tags” to each feature. Specifically, in the first series of experiments, we show that the positional tolerance of these place tags in foveal and peripheral vision is about half the separation of the features, suggesting that the neural mechanisms that bind features into forms are quite robust to topographical jitter. In the second series of experiments, we asked how many stimulus samples are required for pattern identification by human and ideal observers. In human foveal vision, only about half the features are needed for reliable pattern interpolation. In this regard, human vision is quite efficient (ratio of ideal to real ≈ 0.75). Peripheral vision, on the other hand, is rather inefficient, requiring more features, suggesting that the stimulus may be relatively underrepresented at the stage of feature integration.
Resumo:
Developmental and physiological responses are regulated by light throughout the entire life cycle of higher plants. To sense changes in the light environment, plants have developed various photoreceptors, including the red/far-red light-absorbing phytochromes and blue light-absorbing cryptochromes. A wide variety of physiological responses, including most light responses, are also modulated by circadian rhythms that are generated by an endogenous oscillator, the circadian clock. To provide information on local time, circadian clocks are synchronized and entrained by environmental time cues, of which light is among the most important. Light-driven entrainment of the Arabidopsis circadian clock has been shown to be mediated by phytochrome A (phyA), phytochrome B (phyB), and cryptochromes 1 and 2, thus affirming the roles of these photoreceptors as input regulators to the plant circadian clock. Here we show that the expression of PHYB∷LUC reporter genes, containing the promoter and 5′ untranslated region of the tobacco NtPHYB1 or Arabidopsis AtPHYB genes fused to the luciferase (LUC) gene, exhibits robust circadian oscillations in transgenic plants. We demonstrate that the abundance of PHYB RNA retains this circadian regulation and use a PHYB∷Luc fusion protein to show that the rate of PHYB synthesis is also rhythmic. The abundance of bulk PHYB protein, however, exhibits only weak circadian rhythmicity, if any. These data suggest that photoreceptor gene expression patterns may be significant in the daily regulation of plant physiology and indicate an unexpectedly intimate relationship between the components of the input pathway and the putative circadian clock mechanism in higher plants.
Resumo:
The Internet has created new opportunities for librarians to present literature search results to clinicians. In order to take full advantage of these opportunities, libraries need to create locally maintained bibliographic databases. A simple method of creating a local bibliographic database and publishing it on the Web is described. The method uses off-the-shelf software and requires minimal programming. A hedge search strategy for outcome studies of clinical process interventions is created, and Ovid is used to search MEDLINE. The search results are saved and imported into EndNote libraries. The citations are modified, exported to a Microsoft Access database, and published on the Web. Clinicians can use a Web browser to search the database. The bibliographic database contains 13,803 MEDLINE citations of outcome studies. Most searches take between four and ten seconds and retrieve between ten and 100 citations. The entire cost of the software is under $900. Locally maintained bibliographic databases can be created easily and inexpensively. They significantly extend the evidence-based health care services that libraries can offer to clinicians.
Resumo:
Explanations of self-thinning in plant populations have focused on plant shape and packing. A dynamic model based on the structure of local interactions successfully reproduces the pattern and can be approximated to identify key parameters and relationships. The approach generates testable new explanations for differences between species and populations, unifies self-thinning with other patterns in plant population dynamics, and indicates why organisms other than plants can follow the law.
Resumo:
The p40 subunit of interleukin 12 (IL-12p40) has been known to act as an IL-12 antagonist in vitro. We here describe the immunosuppressive effect of IL-12p40 in vivo. A murine myoblast cell line, C2C12, was transduced with retro-virus vectors carrying the lacZ gene as a marker and the IL-12p40 gene. IL-12p40 secreted from the transfectant inhibited the IL-12-induced interferon gamma (IFN-gamma) production by splenocytes in vitro. Survival of C2C12 transplanted into allogeneic recipients was substantially prolonged when transduced with IL-12p40. Cytokine (IL-2 and IFN-gamma) production and cytotoxic T lymphocyte induction against allogeneic C2C12 were impaired in the recipients transplanted with the IL-12p40 transfectant. Delayed-type hypersensitivity response against C2C12 was also diminished in the IL-12p40 recipients. Furthermore, serum antibodies against beta-galactosidase of the T-helper 1-dependent isotypes (IgG2 and IgG3) were decreased in the IL-12p40 recipients. These results indicate that locally produced IL-12p40 exerts a potent immunosuppressive effect on T-helper 1-mediated immune responses that lead to allograft rejection. Therefore, IL-12p40 gene transduction would be useful for preventing the rejection of allografts and genetically modified own cells that are transduced with potentially antigenic molecules in gene therapy.
Resumo:
The objective of this paper is to develop a method to hide information inside a binary image. An algorithm to embed data in scanned text or figures is proposed, based on the detection of suitable pixels which verify some conditions in order not to be detected. In broad terms, the algorithm locates those pixels placed at the contours of the figures or in those areas where some scattering of the two colors can be found. The hidden information is independent of the values of the pixels where this information is embedded. Notice that, depending on the sequence of bits to be hidden, around half of the pixels used to keep bits of data will not be modified. The other basic characteristic of the proposed scheme is that it is necessary to take into consideration the bits that are modified in order to perform the recovery of the information, which consists of reading the sequence of bits placed in the proper positions. An application to the banking sector is proposed for hiding some information in signatures.
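A minimal Python sketch of this kind of embedding is given below. It assumes a simple suitability rule (a pixel qualifies when its 8-neighbourhood contains both colours); the paper's actual selection conditions and its application to signatures are more elaborate, so this is only an illustration of the idea.

    # Minimal, hypothetical sketch of contour-based data hiding in a binary image
    # (0 = white, 1 = black).
    def suitable_pixels(img):
        """Pixels whose 8-neighbourhood contains both colours (contours / mixed areas)."""
        h, w = len(img), len(img[0])
        coords = []
        for y in range(1, h - 1):
            for x in range(1, w - 1):
                neigh = [img[y + dy][x + dx]
                         for dy in (-1, 0, 1) for dx in (-1, 0, 1) if (dy, dx) != (0, 0)]
                if 0 in neigh and 1 in neigh:
                    coords.append((y, x))
        return coords

    def embed(img, bits):
        """Write each bit into one suitable pixel; for a random payload roughly half
        of the used pixels already hold the right value and stay unmodified."""
        stego = [row[:] for row in img]
        coords = suitable_pixels(img)
        assert len(bits) <= len(coords), "cover image too small for the payload"
        used = coords[:len(bits)]
        for (y, x), b in zip(used, bits):
            stego[y][x] = b
        # The used positions must be kept (or be re-derivable) for recovery,
        # since embedding may change which pixels look "suitable" afterwards.
        return stego, used

    def extract(stego, used):
        """Recovery: read the bit sequence back from the recorded positions."""
        return [stego[y][x] for (y, x) in used]

    if __name__ == "__main__":
        img = [[0] * 8 for _ in range(8)]
        for y in range(3, 6):
            for x in range(3, 6):
                img[y][x] = 1                    # a small black square as the "figure"
        stego, used = embed(img, [1, 0, 1, 1])
        assert extract(stego, used) == [1, 0, 1, 1]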
Resumo:
A study of archival RXTE, Swift, and Suzaku pointed observations of the transient high-mass X-ray binary GRO J1008−57 is presented. A new orbital ephemeris based on pulse arrival-timing shows the times of maximum luminosities during outbursts of GRO J1008−57 to be close to periastron, at orbital phase −0.03. This makes the source one of a few for which outburst dates can be predicted with very high precision. Spectra of the source in 2005, 2007, and 2011 can be well described by a simple power law with a high-energy cutoff and an additional black body at lower energies. The photon index of the power law and the black-body flux only depend on the 15–50 keV source flux. No apparent hysteresis effects are seen. These correlations allow us to predict the evolution of the pulsar’s X-ray spectral shape over all outbursts as a function of just one parameter, the source’s flux. If modified by an additional soft component, this prediction even holds during GRO J1008−57’s 2012 type II outburst.
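For orientation, one common way to write the continuum model described (a power law with a high-energy cutoff plus a blackbody) is the following; this is a generic parameterization, not necessarily the exact form fitted in the paper:

    F(E) = K\,E^{-\Gamma}\,\exp\left(-\frac{E}{E_{\mathrm{fold}}}\right) + BB(E; kT),

where \Gamma is the photon index, E_{\mathrm{fold}} the folding (cutoff) energy and BB(E; kT) a blackbody of temperature kT. The reported correlations mean that \Gamma and the blackbody flux can be expressed as functions of the 15–50 keV flux alone, which is what makes the one-parameter prediction of the spectral shape possible.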
Resumo:
Final dissertation of the Integrated Master's Degree in Medicine, Faculdade de Medicina, Universidade de Lisboa, 2014
Resumo:
Besides its importance in cattle, Neospora caninum may also pose a high risk as an abortifacient for small ruminants. We have recently demonstrated that the outcome of experimental infection of pregnant sheep with 10⁶ Nc-Spain7 tachyzoites is strongly dependent on the time of gestation. In the current study, we assessed the peripheral and local immune responses in those animals. Serological analysis revealed earlier and higher IFN-γ and IgG responses in ewes infected at early (G1) and mid (G2) gestation, when abortion occurred. IL-4 was not detected in sera from any sheep. Inflammatory infiltrates in the placenta mainly consisted of CD8+ and, to a lesser extent, CD4+ T cells and macrophages (CD163+). The infiltrate was more intense in sheep infected at mid-gestation. In the foetal mesenchyme, mostly free tachyzoites were found in animals infected at G1, while those infected at G2 displayed predominantly particulate antigen, and parasitophorous vacuoles were detected in sheep infected at G3. A similar pattern of placental cytokine mRNA expression was found in all groups, displaying a strengthened upregulation of IFN-γ and IL-4 and milder increases of TNF-α and IL-10, reminiscent of a mixed Th1 and Th2 response. IL-12 and IL-6 were only slightly upregulated in G2, and TGF-β was downregulated in G1 and G2, suggestive of limited T regulatory (Treg) cell activity. No significant expression of TLR2 or TLR4 could be detected. In summary, this study confirms the pivotal role of systemic and local immune responses at different times of gestation during N. caninum infection in sheep.
Resumo:
Tricyclo-DNA (tcDNA) is a sugar-modified analogue of DNA currently tested for the treatment of Duchenne muscular dystrophy in an antisense approach. Tandem mass spectrometry plays a key role in modern medical diagnostics and has become a widespread technique for the structure elucidation and quantification of antisense oligonucleotides. Herein, mechanistic aspects of the fragmentation of tcDNA are discussed, which lay the basis for reliable sequencing and quantification of the antisense oligonucleotide. Excellent selectivity of tcDNA for complementary RNA is demonstrated in direct competition experiments. Moreover, the kinetic stability and fragmentation pattern of matched and mismatched tcDNA heteroduplexes were investigated and compared with non-modified DNA and RNA duplexes. Although the separation of the constituting strands is the entropy-favored fragmentation pathway of all nucleic acid duplexes, it was found to be only a minor pathway of tcDNA duplexes. The modified hybrid duplexes preferentially undergo neutral base loss and backbone cleavage. This difference is due to the low activation entropy for the strand dissociation of modified duplexes that arises from the conformational constraint of the tc-sugar-moiety. The low activation entropy results in a relatively high free activation enthalpy for the dissociation comparable to the free activation enthalpy of the alternative reaction pathway, the release of a nucleobase. The gas-phase behavior of tcDNA duplexes illustrates the impact of the activation entropy on the fragmentation kinetics and suggests that tandem mass spectrometric experiments are not suited to determine the relative stability of different types of nucleic acid duplexes.
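The kinetic argument above can be made explicit with the standard transition-state (Eyring) relations, quoted here for reference rather than taken from the paper:

    k = \frac{k_{\mathrm{B}} T}{h} \exp\left(-\frac{\Delta G^{\ddagger}}{R T}\right), \qquad \Delta G^{\ddagger} = \Delta H^{\ddagger} - T\,\Delta S^{\ddagger},

so a small (or negative) activation entropy \Delta S^{\ddagger} for strand separation of the conformationally constrained tcDNA duplex raises \Delta G^{\ddagger} and slows dissociation until it becomes comparable to the competing base-loss channel, which is the behaviour observed in these experiments.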
Resumo:
Cover title: The use of existing and modified land use instruments to achieve environmental quality.