873 results for Embedded and embodied cognition
Abstract:
This paper integrates two lines of research into a unified conceptual framework: trade in global value chains and embodied emissions. This allows both value added and emissions to be systematically traced at the country, sector, and bilateral levels through various production network routes. By combining value-added and emissions accounting in a consistent way, the potential environmental cost (amount of emissions per unit of value added) along global value chains can be estimated. Using this unified accounting method, we trace CO2 emissions in the global production and trade network among 41 economies in 35 sectors from 1995 to 2009, basing our calculations on the World Input–Output Database, and show how they help us to better understand the impact of cross-country production sharing on the environment.
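The unified accounting described above rests on standard environmentally extended input-output (EEIO) algebra. As a minimal numerical sketch, the toy two-sector example below is hypothetical; the paper applies the same calculation to the World Input-Output Database at 41-economy, 35-sector resolution.

```python
# Minimal EEIO sketch: emissions embodied in final demand, and the paper's
# "environmental cost" ratio (emissions per unit of value added).
# All numbers are illustrative toy data, not WIOD values.
import numpy as np

A = np.array([[0.2, 0.3],    # technical coefficients: input i per unit
              [0.1, 0.4]])   # of gross output of sector j
y = np.array([100.0, 50.0])  # final demand by sector
f = np.array([0.8, 0.3])     # direct CO2 per unit of gross output
v = np.array([0.5, 0.2])     # value added per unit of gross output

L = np.linalg.inv(np.eye(2) - A)   # Leontief inverse: total requirements
x = L @ y                          # gross output induced by final demand
embodied_co2 = f * x               # emissions attributed along the chain
value_added = v * x
print("gross output:", x)
print("embodied CO2 by sector:", embodied_co2)
print("CO2 per unit of value added:", embodied_co2 / value_added)
```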
Abstract:
Polyvariant specialization allows generating multiple versions of a procedure, which can then be separately optimized for different uses. Since allowing a high degree of polyvariance often results in more optimized code, polyvariant specializers, such as most partial evaluators, can generate a large number of versions. This can produce unnecessarily large residual programs, and large programs can be slower due to cache-miss effects. A possible solution to this problem is to introduce a minimization step which identifies sets of equivalent versions and replaces all occurrences of such versions with a single one. In this work we present a unifying view of the problem of superfluous polyvariance, covering both partial deduction and abstract multiple specialization. As regards partial deduction, we extend existing approaches in several ways. First, previous work has dealt with pure logic programs and a very limited class of builtins; herein we propose an extension to traditional characteristic trees which can be used in the presence of calls to external predicates, including all builtins, libraries, other user modules, etc. Second, we propose the possibility of collapsing versions which are not strictly equivalent. This allows trading time for space and can be useful in the context of embedded and pervasive systems. It is done by residualizing certain computations for external predicates which would otherwise be performed at specialization time. Third, we provide an experimental evaluation of the potential gains achievable using minimization, which leads to interesting conclusions.
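To make the minimization step concrete, here is a toy Python sketch of collapsing equivalent versions by fixpoint partition refinement, the same scheme as DFA minimisation: versions start grouped by a local signature (standing in for the paper's characteristic trees) and stay together only if their calls lead to pairwise-equivalent versions. The version names and call graph below are hypothetical.

```python
# Toy minimization of polyvariant versions by partition refinement.
# `signature` stands in for a characteristic tree; two versions may be
# collapsed when their signatures match and their calls lead to
# equivalent versions (computed as a fixpoint).
versions = {
    "p_1": ("sigA", ["q_1"]),   # (local signature, called versions)
    "p_2": ("sigA", ["q_2"]),
    "q_1": ("sigB", []),
    "q_2": ("sigB", []),
}

block = {v: sig for v, (sig, _) in versions.items()}   # initial partition
while True:
    refined = {v: (block[v], tuple(block[c] for c in calls))
               for v, (_, calls) in versions.items()}
    if len(set(refined.values())) == len(set(block.values())):
        break                    # stable: no block was split this round
    block = refined

# Collapse each block onto a single representative version.
rep = {}
for v in sorted(versions):
    rep.setdefault(block[v], v)
collapsed = {v: rep[block[v]] for v in versions}
print(collapsed)   # {'p_1': 'p_1', 'p_2': 'p_1', 'q_1': 'q_1', 'q_2': 'q_1'}
```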
Abstract:
The optimization of system parameters such as power dissipation, the amount of hardware resources, and the memory footprint has always been a main concern when designing resource-constrained embedded systems. This situation is even more demanding nowadays. Embedded systems can no longer be considered special-purpose computers, designed for a particular functionality that remains unchanged during their lifetime. Instead, embedded systems are now required to deal with more demanding and complex functions, such as multimedia data processing and high-throughput connectivity. In addition, system operation may depend on external data, user requirements, or internal variables of the system, such as the remaining battery life. All these conditions may vary at run-time, leading to adaptive scenarios.
As a consequence of both the growing computational complexity and the existence of dynamic requirements, dynamic resource management techniques for embedded systems are needed. Software is inherently flexible, but it cannot match the computing power offered by hardware solutions. Therefore, reconfigurable hardware emerges as a suitable technology to deal with the run-time variable requirements of complex embedded systems. Adaptive hardware requires the use of reconfigurable devices, whose functionality can be modified on demand. In this thesis, Field Programmable Gate Arrays (FPGAs) have been selected as the most appropriate commercial technology existing nowadays to implement adaptive hardware systems. Among the different ways of exploiting the reconfigurability of such devices, dynamic and partial reconfiguration has been chosen: a technique that substitutes part of the FPGA logic on demand, while the rest of the device continues working. The strategy followed in this thesis is to exploit the dynamic and partial reconfiguration of commercial FPGAs to deal with the flexibility and complexity demands of state-of-the-art embedded systems. The proposal of this thesis for dealing with run-time variable system conditions is the use of spatially scalable processing hardware IP cores, which are able to adapt their functionality or performance at run-time, trading them off against the amount of logic resources they occupy in the device. This is referred to as a scalable footprint in the context of this thesis. The distinguishing characteristic of the proposed cores is that they rely on highly parallel, modular and regular architectures, arranged in one or two dimensions, which can be scaled by adding or removing the composing blocks. This strategy avoids implementing a full version of the core for each possible size, with the corresponding benefits in terms of scaling and adaptation time, as well as bitstream storage requirements. Instead of providing specific-purpose architectures, generic architectural templates, which can be tuned to solve different problems, are proposed in this thesis; both systolic and wavefront templates have been selected. Together with the proposed scalable architectural templates, other issues needed to ensure the proper design and operation of the scalable cores have also been addressed, such as the device reconfiguration control, the run-time management of the architecture, and the implementation techniques. With regard to the implementation of dynamically reconfigurable architectures, device-dependent low-level details are addressed. Some of the aspects covered in this thesis are the area-constrained routing of reconfigurable modules and an inter-module communication strategy that introduces neither extra delay nor logic overhead. The system implementation, from the hardware description to the device configuration bitstream, has been fully automated by modifying the netlists corresponding to each of the system modules, which are previously generated using the vendor tools; the proposed flow is therefore envisaged as a post-processing step. Based on these implementation proposals, a design tool called DREAMS (Dynamically Reconfigurable Embedded and Modular Systems) has been created, including a graphical user interface.
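To make the scalable-footprint idea concrete, the following is a minimal behavioural sketch (Python, not RTL) of a one-dimensional systolic core whose size is changed by adding or removing identical processing modules. The tap values and sizes are hypothetical; in the thesis the modules are reconfigurable FPGA regions, not software objects.

```python
# Behavioural sketch of a scalable-footprint core: a 1-D systolic FIR
# whose size changes by adding or removing identical processing modules,
# instead of storing one full design per possible size.
class ScalableSystolicFIR:
    def __init__(self, taps):
        self.taps = list(taps)            # one coefficient per module
        self.samples = [0.0] * len(taps)  # per-module pipeline register

    def scale(self, taps):
        """Grow or shrink the array: add/remove modules at run time."""
        self.taps = list(taps)
        self.samples = [0.0] * len(taps)

    def step(self, x):
        # Purely local communication: each module passes its sample on
        # to its neighbour, mirroring the regular templates proposed.
        self.samples = [x] + self.samples[:-1]
        return sum(t * s for t, s in zip(self.taps, self.samples))

fir = ScalableSystolicFIR([0.25, 0.5, 0.25])   # 3-module footprint
print([fir.step(s) for s in (1, 2, 3, 4)])
fir.scale([0.5, 0.5])                          # shrink to 2 modules
print([fir.step(s) for s in (1, 2, 3, 4)])
```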
The tool has specific features to cope with modular and regular architectures, including support for module relocation and an inter-module communication scheme based on the symmetry of the architecture. The core of the tool is a custom router, which has also been exploited in this thesis to obtain symmetrically routed nets, with the aim of enhancing the protection of critical reconfigurable circuits against side-channel attacks. This is achieved by duplicating the logic with exactly equal routing. In order to control the reconfiguration process of the FPGA, a Reconfiguration Engine suited to the specific requirements set by the proposed architectures has also been designed. In addition to controlling the reconfiguration port, the Reconfiguration Engine has been enhanced with an online relocation ability, which allows a single configuration bitstream to be employed for all the positions where a module may be placed in the device. Unlike existing relocation solutions, which are based on bitstream parsers, the proposed approach is based on the online composition of bitstreams. This strategy increases the speed of the process while also reducing the length of the partial bitstreams to be stored. The height of the reconfigurable modules can be lower than the height of a clock region; the Reconfiguration Engine manages the merging of the new and the existing configuration frames within each clock region. The process of scaling the hardware cores up and down also benefits from this technique. A direct link to an external memory where partial bitstreams can be stored has also been implemented. In order to accelerate the reconfiguration process, the ICAP has been overclocked beyond the speed reported by the manufacturer: in the case of Virtex-5, even though the maximum frequency of the ICAP is reported to be 100 MHz, valid operation at 250 MHz has been achieved, including the online relocation process. Portability of the reconfiguration solution to current and, probably, future FPGAs has also been considered. The Reconfiguration Engine can also be used to inject faults into real hardware devices, making it possible to evaluate the fault tolerance offered by the reconfigurable architectures. Faults are emulated by introducing partial bitstreams intentionally modified to provide erroneous functionality. To prove the validity and the benefits offered by the proposed architectures, two demonstration application lines have been envisaged. First, scalable architectures have been employed to develop an evolvable hardware platform with adaptability, fault tolerance and scalability properties. Second, they have been used to implement a scalable deblocking filter suited to scalable video coding. Evolvable hardware is the use of evolutionary algorithms to design hardware in an autonomous way, exploiting the flexibility offered by reconfigurable devices. In this case, the processing elements composing the architecture are selected from a presynthesized library, according to the decisions taken by the algorithm, instead of being decided at design time. This way, the configuration of the array may change as run-time environmental conditions do, achieving autonomous control of the dynamic reconfiguration process. Thus, the self-optimization property is added to the native self-configurability of the dynamically scalable architectures. In addition, evolvable hardware adaptability inherently offers self-healing features.
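A deliberately simplified model of the relocation-by-composition mechanism may help: the device configuration is treated as a grid of frames, and a module's frames are written at an arbitrary origin, merging with the configuration already present. All sizes below are hypothetical, and real bitstream formats (frame addressing, CRC, clock-region packing) are considerably more involved than this sketch.

```python
# Simplified model of online bitstream composition with relocation:
# one module bitstream, composed into the device grid at any origin.
import copy

WORDS_PER_FRAME = 4

def blank_device(cols, rows):
    return [[[0] * WORDS_PER_FRAME for _ in range(rows)] for _ in range(cols)]

def relocate(device, module, col0, row0, word0=0):
    """Compose `module` frames into `device` at (col0, row0).

    `word0` lets a module occupy only part of a frame, mimicking modules
    shorter than a clock region: untouched words keep the existing
    configuration, so logic sharing the frame survives the update.
    """
    out = copy.deepcopy(device)
    for c, column in enumerate(module):
        for r, frame in enumerate(column):
            dst = out[col0 + c][row0 + r]
            dst[word0:word0 + len(frame)] = frame
    return out

device = blank_device(cols=6, rows=2)
module = [[[1, 1]], [[2, 2]]]            # 2 columns x 1 row, 2 words/frame
cfg_a = relocate(device, module, col0=0, row0=0)          # original spot
cfg_b = relocate(cfg_a, module, col0=3, row0=1, word0=2)  # relocated copy
print(cfg_b[3][1])   # -> [0, 0, 1, 1]: merged into the existing frame
```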
The proposal has proved to be fault-tolerant, since it is able to self-recover from both transient and cumulative permanent faults. The proposed evolvable architecture has been used to implement noise-removal image filters. Scalability has also been exploited in this application: scalable evolvable hardware architectures allow the autonomous adaptation of the processing cores to a fluctuating amount of resources available in the system. Thus, it constitutes an example of the dynamic quality scalability tackled in this thesis. Two variants have been proposed. The first one consists of a single dynamically scalable evolvable core, and the second one contains a variable number of processing cores. Scalable video is a flexible approach to video compression, which offers scalability at different levels. Unlike non-scalable codecs, a scalable video bitstream can be decoded with different levels of quality and spatial or temporal resolution, by discarding the undesired information. The interest in this technology has been fostered by the development of the Scalable Video Coding (SVC) standard, an extension of H.264/AVC. In order to exploit all the flexibility offered by the standard, it is necessary to adapt the characteristics of the decoder to the requirements of each client at run-time. The use of dynamically scalable architectures is proposed in this thesis with this aim. The deblocking filter algorithm is responsible for improving the visual perception of a reconstructed image by smoothing the blocking artifacts generated in the encoding loop. It is one of the most computationally intensive tasks of the standard and, furthermore, is highly dependent on the scalability level selected in the decoder. Therefore, the deblocking filter has been selected as a proof of concept of the implementation of dynamically scalable architectures for video compression. The proposed architecture allows the run-time addition or removal of computational units working in parallel to change its level of parallelism, following a wavefront computational pattern. The scalable architecture is offered together with a scalable parallelization strategy at the macroblock level, such that when the size of the architecture changes, the macroblock filtering order is modified accordingly. The proposed pattern is based on the division of the macroblock processing into two independent stages, corresponding to the horizontal and vertical filtering of the blocks within the macroblock. The main contributions of this thesis are: - The use of highly parallel, modular, regular and local architectures to implement dynamically reconfigurable processing IP cores, for data-intensive applications with flexibility requirements. - The use of two-dimensional mesh-type arrays as architectural templates to build dynamically reconfigurable IP cores with a scalable footprint. The proposal consists of generic architectural templates, which can be tuned to solve different computational problems. - A design flow and a tool targeting the design of DPR systems, focused on highly parallel, modular and local architectures. - An inter-module communication strategy, named Virtual Borders, which does not introduce delay or area overhead. - A custom and flexible router to solve the routing conflicts, as well as the inter-module communication problems, appearing during the design of DPR systems.
- An algorithm addressing the optimization of systems composed of multiple scalable cores, whose sizes can be decided individually, in order to optimize the system parameters; it uses a genetic algorithm over a model known as the multi-dimensional multi-choice Knapsack problem. - A reconfiguration engine tailored to the requirements of highly regular and modular architectures. It combines a high reconfiguration throughput with run-time module relocation capabilities, including support for reconfigurable regions smaller than a clock region and for the replication of a module in multiple positions. - A fault-injection mechanism which takes advantage of the system reconfiguration engine, as well as of the modularity of the proposed reconfigurable architectures, to evaluate the effects of transient and permanent faults in these architectures. - The demonstration of the possibilities of the architectures proposed in this thesis to implement evolvable hardware systems, while keeping a high processing throughput. - The implementation of scalable evolvable hardware systems, which are able to adapt to fluctuations in the amount of resources available in the system, in an autonomous way. - A parallelization strategy for the H.264/AVC and SVC deblocking filter, which reduces the number of macroblock cycles needed to process a whole frame (a scheduling sketch follows this abstract). - A dynamically scalable architecture that permits the implementation of a novel deblocking filter module, fully compliant with the H.264/AVC and SVC standards, which exploits the macroblock-level parallelism of the algorithm. This document is organized in seven chapters. In the first one, an introduction to the technology framework of this thesis, especially focused on dynamic and partial reconfiguration, is provided; the need for the dynamically scalable processing architectures proposed in this work is also motivated there. In chapter 2, the dynamically scalable architectures are described; this description includes most of the architectural contributions of this work. The design flow tailored to the scalable architectures, together with the DREAMS tool that supports it, is described in chapter 3. The reconfiguration engine is described in chapter 4. The use of the proposed scalable architectures to implement evolvable hardware systems is described in chapter 5, while the scalable deblocking filter is described in chapter 6. The final conclusions of this thesis, and the description of future work, are addressed in chapter 7.
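As a rough illustration of the macroblock-level wavefront parallelization mentioned above, the following Python sketch schedules macroblocks so that (x, y) becomes ready once its left and top neighbours are filtered; changing the number of parallel units reorders the schedule, just as scaling the architecture does. Frame dimensions and unit counts are hypothetical, and the per-macroblock two-stage (horizontal/vertical) filtering is abstracted into a single step.

```python
# Wavefront scheduling sketch: macroblock (x, y) depends on its left and
# top neighbours; at most `units` macroblocks are filtered per cycle.
def wavefront_schedule(mb_cols, mb_rows, units):
    done, cycles, ready = set(), [], {(0, 0)}
    while len(done) < mb_cols * mb_rows:
        batch = sorted(ready)[:units]      # issue up to `units` this cycle
        cycles.append(batch)
        done.update(batch)
        ready.difference_update(batch)
        for (x, y) in batch:               # wake up successors whose
            for nx, ny in [(x + 1, y), (x, y + 1)]:   # deps are all done
                if nx < mb_cols and ny < mb_rows and (nx, ny) not in done:
                    left = (nx - 1, ny) if nx > 0 else None
                    top = (nx, ny - 1) if ny > 0 else None
                    if all(d is None or d in done for d in (left, top)):
                        ready.add((nx, ny))
    return cycles

print(len(wavefront_schedule(8, 4, units=1)))  # sequential baseline: 32 cycles
print(len(wavefront_schedule(8, 4, units=4)))  # scaled-up footprint: 11 cycles
```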
Abstract:
Mixed-criticality systems emerge as a suitable solution for dealing with the complexity, performance and cost of future embedded and dependable systems. However, this paradigm adds additional complexity to their development. This paper proposes an approach for dealing with this scenario that relies on hardware virtualization and Model-Driven Engineering (MDE). Hardware virtualization ensures isolation between subsystems with different criticality levels. MDE is intended to bridge the gap between design issues and partitioning concerns: MDE tooling will enhance the functional models by annotating them with partitioning and extra-functional properties. System partitioning and subsystem allocation will be generated with a high degree of automation. The system configuration will be validated to ensure that the resources assigned to a partition are sufficient for executing the allocated software components and that time requirements are met.
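As a minimal sketch of what such an automated configuration check could look like, the snippet below assumes a simple time-partitioned model (a cyclic major frame with fixed execution windows per partition, in the spirit of ARINC 653 hypervisors); all partition names, budgets and demands are hypothetical, not the paper's tooling.

```python
# Toy validation of a partitioned configuration: per-partition CPU and
# memory budgets must cover the demands of the allocated components, and
# the partition windows must fit in the major frame.
MAJOR_FRAME_MS = 100

partitions = {
    "flight_ctrl": {"window_ms": 40, "memory_kb": 512,
                    "components": [{"wcet_ms": 15, "mem_kb": 200},
                                   {"wcet_ms": 20, "mem_kb": 150}]},
    "logging":     {"window_ms": 20, "memory_kb": 128,
                    "components": [{"wcet_ms": 25, "mem_kb": 64}]},
}

def validate(partitions):
    errors = []
    if sum(p["window_ms"] for p in partitions.values()) > MAJOR_FRAME_MS:
        errors.append("partition windows exceed the major frame")
    for name, p in partitions.items():
        if sum(c["wcet_ms"] for c in p["components"]) > p["window_ms"]:
            errors.append(f"{name}: CPU budget too small")
        if sum(c["mem_kb"] for c in p["components"]) > p["memory_kb"]:
            errors.append(f"{name}: memory budget too small")
    return errors

print(validate(partitions))   # -> ['logging: CPU budget too small']
```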
Abstract:
Civil construction is responsible for a significant environmental impact, from the extraction of raw materials to the disposal of its waste in landfills. Life cycle assessment (LCA) is a tool that makes it possible to estimate the sector's potential environmental impacts in a systematic way. Simplifying the LCA, by using secondary data and narrowing the scope of the study, eases its adoption as a tool for promoting sustainability. The goal of this dissertation is to estimate, through simplified LCA, ranges for the five main indicators of the concrete block sector in the Brazilian market: material consumption, embodied energy, CO2 emissions, water, and waste generation. This study was the pilot of the Modular LCA Project, an initiative of the Brazilian Council for Sustainable Construction (Conselho Brasileiro de Construção Sustentável) in partnership with the Brazilian Portland Cement Association (Associação Brasileira de Cimento Portland) and the Brazilian Concrete Block Industry Association (Associação Brasileira da Indústria de Blocos de Concreto). The inventory was developed with 33 factories located in different regions of Brazil, together responsible for approximately 50% of national production. The selected products were paving and masonry blocks (structural and non-load-bearing) considered the most representative in the market. The system boundary adopted was cradle to factory gate. The material consumption indicator was not reported, to preserve the confidentiality of company data, since the cement content was reported directly on the survey form. The waste indicator could not be produced because manufacturers adopted different interpretations when recording their data. The water indicator, despite covering all consumption reported by each factory, showed very low values, some close to zero. Cement consumption, rather than clinker content, was responsible for a significant share of each block's CO2 and embodied energy, accounting for 62% to 99% of CO2 emissions. Thus, among the companies analysed, even with the same technological route, the inputs used, the concrete mix design, the compaction efficiency of the vibro-press, and the production system had the greatest influence on the materials, energy, and CO2 indicators.
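A toy cradle-to-gate calculation may clarify how a per-block CO2 indicator of this kind is assembled: mass of each input times its emission factor, plus factory energy. All masses and factors below are illustrative placeholders, not the (confidential) survey data.

```python
# Toy cradle-to-gate CO2 indicator for one concrete block.
inputs_kg = {"cement": 2.0, "aggregates": 10.0}         # mass per block
factors_kgco2_per_kg = {"cement": 0.85, "aggregates": 0.005}
electricity_kwh, grid_factor = 0.05, 0.1                # kWh/block, kgCO2/kWh

material_co2 = sum(m * factors_kgco2_per_kg[k] for k, m in inputs_kg.items())
total_co2 = material_co2 + electricity_kwh * grid_factor
cement_share = inputs_kg["cement"] * factors_kgco2_per_kg["cement"] / total_co2
print(f"{total_co2:.2f} kgCO2 per block, cement share = {cement_share:.0%}")
# With these placeholder factors the cement share lands near the top of
# the 62-99% range reported in the study.
```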
Abstract:
This paper presents novel data that challenge the traditional categorial understanding of the nominal phrase. The established use of an indefinite pronoun with a determiner in French (ce quelqu'un, du n'importe quoi, un je ne sais quoi) contravenes assumptions both about pronouns, which should not be embedded, and about nominal phrases, which should be headed by a noun. Analysed here for the first time, the embedding of a pronoun under a determiner is shown to find its justification in the semantic import of the construction. The anaphoric role guaranteeing referential continuity is promoted by a strong determiner; weak determiners typically contribute to constructing a designative use of the pronoun when a more precise characterisation cannot or will not be provided. How this construction would be analysed in the Minimalist Programme is presented to suggest that the phrase satisfies semantic requirements, which resolves the paradoxes of its traditional definition.
Abstract:
Lifelong learning is a 'keystone' of educational policies (Faure, 1972), where the emphasis on learning shifts from teacher to learner. Higher Education (HE) institutions should be committed to developing lifelong learning, that is, promoting learning that is flexible, diverse and relevant at different times and in different places, and is pursued throughout life. Therefore, the HE sector needs to develop effective strategies to encourage engagement in meaningful learning for diverse student populations. The use of e-portfolios, as a 'purposeful aggregation of digital items' (Sutherland & Powell, 2007), can meet the needs of the student community by encouraging reflection, the recording of experiences and achievements, and personal development planning (PDP). The use of e-portfolios also promotes inclusivity in learning, as it provides students with the opportunity to articulate their aspirations and take the first steps along the pathway of lifelong learning. However, ensuring the uptake of opportunities within their learning is more complex than the students simply having access to the software. It is therefore argued here that purposefully embedding the e-portfolio within the curriculum is crucial to its effective uptake and engagement. In order to investigate the effective implementation of e-portfolios, an explanatory case study on their use was carried out, initially focusing on three groups of students engaged in work-based learning and professional practice. The three groups had e-portfolios embedded and assessed at different levels. Group 1 did not have the e-portfolio embedded into their curriculum, nor was it assessed. Group 2 had the e-portfolio embedded into the curriculum and formatively assessed. Group 3 also had the e-portfolio embedded into the curriculum and was summatively assessed. The results suggest that the use of e-portfolios needs to be integral to curriculum design in modules rather than used as an additional tool. In addition, more user engagement was found in Group 2, where the e-portfolio was formatively assessed only. The implications of this case study are further discussed in terms of curriculum development.
Abstract:
Background: Widespread use of automated sensitive assays for thyroid hormones and thyroid-stimulating hormone (TSH) has increased identification of mild thyroid dysfunction, especially in elderly patients. The clinical significance of this dysfunction, however, remains uncertain, and associations with cognitive impairment, depression, and anxiety are unconfirmed. Objective: To determine the association between mild thyroid dysfunction and cognition, depression, and anxiety in elderly persons. Design: Cross-sectional study. Associations were explored through mixed-model analyses. Setting: Primary care practices in central England. Patients: 5865 patients 65 years of age or older with no known thyroid disease who were recruited from primary care registers. Measurements: Serum TSH and free thyroxine (T4) were measured. Depression and anxiety were assessed by using the Hospital Anxiety and Depression Scale (HADS), and cognitive functioning was established by using the Middlesex Elderly Assessment of Mental State and the Folstein Mini-Mental State Examination. Comorbid conditions, medication use, and sociodemographic profiles were recorded. Results: 295 patients met the criteria for subclinical thyroid dysfunction (127 were hyperthyroid, and 168 were hypothyroid). After confounding variables were controlled for, statistically significant associations were seen between anxiety (HADS score) and TSH level (P = 0.013) and between cognition and both TSH and free T4 levels. The magnitude of these associations lacked clinical relevance: A 50-mIU/L increase in the TSH level was associated with a 1-point reduction in the HADS anxiety score, and a 1-point increase in the Mini-Mental State Examination score was associated with an increase of 50 mIU/L in the TSH level or 25 pmol/L in the free T4 level. Limitations: Because of the low participation rate, low prevalence of subclinical thyroid dysfunction, and other unidentified recruitment biases, participants may not be representative of the elderly population. Conclusions: After the confounding effects of comorbid conditions and use of medication were controlled for, subclinical thyroid dysfunction was not associated with depression, anxiety, or cognition. © 2006 American College of Physicians.
Abstract:
The vacuolar proton-ATPase (V-ATPase) is a multisubunit enzyme complex that is able to transfer protons over membranes against an electrochemical potential under ATP hydrolysis. The enzyme consists of two subcomplexes: V0, which is membrane embedded; and V1, which is cytosolic. V0 was also reported to be involved in fusion of vacuoles in yeast. We identified six genes encoding c-subunits (proteolipids) of V0 and two genes encoding F-subunits of V1 and studied the role of the V-ATPase in trafficking in Paramecium. Green fluorescent protein (GFP) fusion proteins allowed a clear subcellular localization of c- and F-subunits in the contractile vacuole complex of the osmoregulatory system and in food vacuoles. Several other organelles were also detected, in particular dense core secretory granules (trichocysts). The functional significance of the V-ATPase in Paramecium was investigated by RNA interference (RNAi), using a recently developed feeding method. A novel strategy was used to block the expression of all six c- or both F-subunits simultaneously. The V-ATPase was found to be crucial for osmoregulation, the phagocytotic pathway and the biogenesis of dense core secretory granules. No evidence was found supporting participation of V0 in membrane fusion.
Abstract:
INTRODUCTION: The inappropriate use of antipsychotics in people with dementia for behaviour that challenges is associated with an estimated 1800 deaths annually. However, solely focusing on antipsychotics may transfer prescribing to other equally dangerous psychotropics. Little is known about the role of pharmacists in the management of psychotropics used to treat behaviours that challenge. This research aims to determine whether it is feasible to implement and measure the effectiveness of a combined pharmacy-health psychology intervention incorporating a medication review and staff training package to limit the prescription of psychotropics to manage behaviour that challenges in care home residents with dementia. METHODS/ANALYSIS: 6 care homes within the West Midlands will be recruited. People with dementia receiving medication for behaviour that challenges, or their personal consultee, will be approached regarding participation. Medication used to treat behaviour that challenges will be reviewed by the pharmacist, in collaboration with the general practitioner (GP), person with dementia and carer. The behavioural intervention consists of a training package for care home staff and GPs promoting person-centred care and treating behaviours that challenge as an expression of unmet need. The primary outcome measure is the Neuropsychiatric Inventory-Nursing Home version (NPI-NH). Other outcomes include quality of life (EQ-5D and DEMQoL), cognition (sMMSE), health economic (CSRI) and prescribed medication including whether recommendations were implemented. Outcome data will be collected at 6 weeks, and 3 and 6 months. Pretraining and post-training interviews will explore stakeholders' expectations and experiences of the intervention. Data will be used to estimate the sample size for a definitive study. ETHICS/DISSEMINATION: The project has received a favourable opinion from the East Midlands REC (15/EM/3014). If potential participants lack capacity, a personal consultee will be consulted regarding participation in line with the Mental Capacity Act. Results will be published in peer-reviewed journals and presented at conferences.
Abstract:
Adaptability in distributed object-oriented enterprise frameworks for multimedia technology is critical for system evolution. Today, building adaptive services is a complex task due to the lack of adequate framework support in distributed computing systems. In this paper, we propose a Metalevel Component-Based Framework which uses distributed computing design patterns as components to develop an adaptable pattern-oriented framework for distributed computing applications. We describe our approach of combining a meta-architecture with a pattern-oriented framework, resulting in an adaptable framework which provides a mechanism to facilitate system evolution. This approach resolves the problem of dynamic adaptation in the framework, which is encountered in most distributed multimedia applications. The proposed architecture of the pattern-oriented framework is able to dynamically adopt new design patterns to address issues in the domain of distributed computing, and these patterns can be woven together to shape the framework in the future. © 2011 Springer Science+Business Media B.V.
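A minimal Python sketch of the general metalevel idea may help: base-level objects delegate a concern to a pattern component, and the metalevel swaps that component at run time to adapt the system. The class and method names below are hypothetical, not the paper's API.

```python
# Toy metalevel framework: the metalevel holds reflective bindings from
# concerns to pattern components, so the base level adapts without change.
class PatternComponent:
    def handle(self, request):
        raise NotImplementedError

class DirectCall(PatternComponent):        # one pluggable "pattern"
    def handle(self, request):
        return f"direct:{request}"

class CachedProxy(PatternComponent):       # another pluggable "pattern"
    def __init__(self):
        self.cache = {}
    def handle(self, request):
        if request not in self.cache:
            self.cache[request] = f"computed:{request}"
        return self.cache[request]

class MetaLevel:
    """Reflective state: which pattern currently serves which concern."""
    def __init__(self):
        self.bindings = {"dispatch": DirectCall()}
    def adapt(self, concern, component):   # run-time evolution point
        self.bindings[concern] = component

class Service:                             # base level, pattern-agnostic
    def __init__(self, meta):
        self.meta = meta
    def serve(self, request):
        return self.meta.bindings["dispatch"].handle(request)

meta = MetaLevel()
svc = Service(meta)
print(svc.serve("frame42"))                # direct:frame42
meta.adapt("dispatch", CachedProxy())      # adapt without touching Service
print(svc.serve("frame42"))                # computed:frame42 (now cached)
```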
Abstract:
Engineering education in the United Kingdom is at the point of embarking upon an interesting journey into uncharted waters. At no point in the past have there been so many drivers for change and so many opportunities for the development of engineering pedagogy. This paper will look at how Engineering Education Research (EER) has developed within the UK and what differentiates it from the many small-scale practitioner interventions, perhaps without a clear research question or with little evaluation, which are presented at numerous staff development sessions, workshops and conferences. From this position, some examples of current projects will be described, outcomes of funding opportunities will be summarised and the benefits of collaboration with other disciplines illustrated. In this study, I will account for how the design of the task structure according to variation theory, as well as the probe-ware technology, makes the laws of force and motion visible and learnable and, especially in the lab studied, makes Newton's third law visible and learnable. I will also, as a comparison, include data from a mechanics lab that uses the same probe-ware technology and deals with the same topics in mechanics, but uses a differently designed task structure. I will argue that the lower achievements on the FMCE test in this latter case can be attributed to these differences in the task structure of the lab instructions: according to my analysis, the necessary pattern of variation is not included in the design. I will also present a microanalysis of 15 hours of video recordings of engineering students' activities in a lab about impulse and collisions. The important object of learning in this lab is the development of an understanding of Newton's third law. The approach of analysing students' interaction using video data is inspired by ethnomethodology and conversation analysis, i.e. I will focus on students' practical, contingent and embodied inquiry in the setting of the lab. I argue that my results corroborate variation theory and show that this theory can be used as a 'tool' for designing labs as well as for analysing labs and lab instructions. Thus my results have implications outside the domain of this study, in particular for understanding critical features for student learning in labs. Engineering higher education is well used to change. As technology develops, the abilities expected of graduates by employers expand, yet our understanding of how to make informed decisions about learning and teaching strategies does not expand without a conscious effort to do so. With the numerous demands of academic life, we often fail to acknowledge our incomplete understanding of how our students learn within our discipline. The journey facing engineering education in the UK is being driven by two classes of driver: firstly, those which we have been working to expand our understanding of, such as retention and employability; and secondly, new challenges such as substantial changes to funding systems allied with an increase in student expectations. Only through continued research can priorities be identified and addressed, and a coherent and strong voice for informed change be heard within the wider engineering education community.
This new position makes it even more important that, through EER, we acquire the knowledge and understanding needed to make informed decisions regarding approaches to teaching, curriculum design and measures to promote effective student learning. This then raises the question: how does EER function within a diverse academic community? Within an existing community of academics interested in taking meaningful steps towards understanding the ongoing challenges of engineering education, a Special Interest Group (SIG) has formed in the UK. The formation of this group has itself been part of the rapidly changing environment, through its facilitation by the Higher Education Academy's Engineering Subject Centre, an entity which, through the Academy's current restructuring, will no longer exist as a discrete Centre dedicated to supporting engineering academics. The aims of this group, the activities it is currently undertaking and how it expects to network and collaborate with the global EER community will be reported in this paper. This will include an explanation of how the group has identified barriers to the progress of EER and how it is seeking, through a series of activities, to facilitate the recognition and growth of EER both within the UK and with our valued international colleagues.
Abstract:
Environmental characteristics such as habitat, time of day and season can modify a species' acoustic behaviour. Therefore, this study investigated the relationships between season, tide, the daily tidal cycle, time of day and habitat, and the sound emissions of S. guianensis. Sound recordings were made in Curral's Cove and in the Guaraíras Lagoon Complex (CLG), in the municipality of Tibau do Sul/RN. Whistles are emitted at lower frequencies during the rainy season and spring tides, while clicks are emitted at higher frequencies; whistles, clicks and calls have higher frequencies during ebb tide. These modifications may be related to turbidity and prey availability. Whistles and clicks occur more often at night, probably because luminosity is lower. Furthermore, the reduction of whistle and click frequencies overnight allows the sound to travel longer distances and compensates for the limited visibility, although an increase in minimum frequency was needed to catch prey. The low occurrence of calls could be related to the small group size. The acoustic changes at night may also be partly influenced by light levels and by prey availability, which is greater in this period. Whistle frequencies and the initial click frequency are higher in the CLG than in Curral's Cove, which allowed good precision. However, the central click frequency is lower, which may be connected to scanning the area. Several factors may be associated with such modifications, including background noise and bottom type, among others. This study supports the hypothesis that S. guianensis presents acoustic plasticity according to the local conditions in which the species is embedded, adapting to environmental changes.
Abstract:
Police is Dead is an historiographic analysis whose objective is to change the terms by which contemporary humanist scholarship assesses the phenomenon currently termed neoliberalism. It proceeds by building an archeology of legal thought in the United States that spans the nineteenth and twentieth centuries. My approach assumes that the decline of certain paradigms of political consciousness set historical conditions that enable the emergence of what is to follow. The particular historical form of political consciousness I seek to reintroduce to the present is what I call "police": a counter-liberal way of understanding social relations that I claim has particular visibility within a legal archive, but that has been largely ignored by humanist theory on account of two tendencies: first, an over-valuation of liberalism as Western history's master signifier; and second, inconsistent and selective attention to law as a cultural artifact. The first part of my dissertation reconstructs an anatomy of police through close studies of court opinions, legal treatises, and legal scholarship. I focus in particular on juridical descriptions of intimate relationality—which police configured as a public phenomenon—and slave society apologetics, which projected the notion of community as an affective and embodied structure. The second part of this dissertation demonstrates that the dissolution of police was critical to the emergence of a paradigm I call economism: an originally progressive economic framework for understanding social relations that I argue developed at the nexus of law and economics at the turn of the twentieth century. Economism is a way of understanding sociality that collapses ontological distinctions between formally distinct political subjects—i.e., the state, the individual, the collective—by reducing them to the perspective of economic force. Insofar as it was taken up and reoriented by neoliberal theory, this paradigm has become a hegemonic form of political consciousness. This project concludes by encouraging a disarticulation of economism—insofar as it is a form of knowledge—from neoliberalism as its contemporary doctrinal manifestation. I suggest that this is one way progressive scholarship can think about moving forward in the development of economic knowledge, rather than desiring to move backwards to a time before the rise of neoliberalism. Disciplinarily, I aim to show that understanding the legal historiography informing our present moment is crucial to this task.