967 results for Protection systems
Abstract:
Methods: We conducted a phase I, multicenter, randomized, double-blind, placebo-controlled, multi-arm (10) parallel study involving healthy adults to evaluate the safety and immunogenicity of influenza A (H1N1) 2009 non-adjuvanted and adjuvanted candidate vaccines. Subjects received two intramuscular injections of one of the candidate vaccines, administered 21 days apart. Antibody responses were measured by hemagglutination-inhibition assay before and 21 days after each vaccination. The three co-primary immunogenicity end points were a seroprotection rate above 70%, a seroconversion rate above 40%, and a factor increase in the geometric mean titer above 2.5. Results: A total of 266 participants were enrolled in the study. No deaths or serious adverse events were reported. The most commonly solicited local and systemic adverse events were injection-site pain and headache, respectively. Only three subjects (1.1%) reported severe injection-site pain. Four 2009 influenza A (H1N1) inactivated monovalent candidate vaccines met all three immunogenicity requirements after a single dose: 15 μg of hemagglutinin antigen without adjuvant; 7.5 μg of hemagglutinin antigen with aluminum hydroxide, MPL and squalene; 3.75 μg of hemagglutinin antigen with aluminum hydroxide and MPL; and 3.75 μg of hemagglutinin antigen with aluminum hydroxide and squalene. Conclusions: Adjuvant systems can be safely used in influenza vaccines, including the adjuvant monophosphoryl lipid A (MPL) derived from Bordetella pertussis with squalene and aluminum hydroxide, MPL with aluminum hydroxide, and squalene and aluminum hydroxide.
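The three co-primary end points reduce to simple arithmetic over paired HI titers. A minimal sketch, assuming the standard licensure definitions (seroprotection = titer >= 1:40, seroconversion = fourfold rise reaching >= 1:40); the abstract does not spell these definitions out, so they are an assumption here:

```python
import math

def geometric_mean(titers):
    """Geometric mean of a list of HI titers."""
    return math.exp(sum(math.log(t) for t in titers) / len(titers))

def meets_coprimary_endpoints(pre, post):
    """Evaluate one vaccine arm against the three co-primary criteria.

    pre, post: paired HI titers before and 21 days after vaccination.
    Thresholds assume the usual licensure definitions: seroprotection
    is a titer >= 40, seroconversion a fourfold rise reaching >= 40.
    """
    n = len(pre)
    seroprotection = sum(t >= 40 for t in post) / n            # target > 70%
    seroconversion = sum(b >= 4 * a and b >= 40
                         for a, b in zip(pre, post)) / n       # target > 40%
    gmt_increase = geometric_mean(post) / geometric_mean(pre)  # target > 2.5
    return seroprotection > 0.70 and seroconversion > 0.40 and gmt_increase > 2.5
```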
Abstract:
In green plants, the collection of solar energy for photosynthesis is carried out by a series of light-harvesting complexes (LHC). The light-harvesting chlorophyll a/b protein (LHCP) is synthesized in the cytosol as a precursor (pLHCP), then imported into chloroplasts and assembled into photosynthetic thylakoid membranes. Knowledge about the regulation of the transport processes of LHCP is rather limited. To closely mimic the in vivo situation, a cell-free protein expression system is employed in this dissertation to study the reconstitution of LHCP into artificial membranes. The approach starts merely from the genetic information of the protein, so the difficult and time-consuming procedures of conventional protein expression and purification can be avoided. The LHCP-encoding gene from Pisum sativum was cloned into a cell-free-compatible vector system and the protein was expressed in wheat germ extracts. Vesicles or pigment-containing vesicles were prepared with either synthetic lipids or purified plant leaf lipids to mimic cell membranes. LHCP was synthesized in wheat germ extract systems with or without supplemented lipids. The addition of either synthetic or purified plant leaf lipids was found to be beneficial to the overall productivity of the expression system. The insertion of LHCP into lipid membranes was investigated by radioactive labelling, protease digestion, and centrifugation assays. The LHCP is partially protected against protease digestion; however, this protection is independent of the supplemented lipids.
Abstract:
Mountainous areas are prone to natural hazards such as rockfalls. Among the many countermeasures, rockfall protection barriers represent an effective solution to mitigate the risk. They are metallic structures designed to intercept rocks falling from unstable slopes, thus dissipating the energy deriving from the impact. This study aims to provide a better understanding of the response of several rockfall barrier types through the development of rather sophisticated three-dimensional numerical finite element models that account for the highly dynamic and non-linear conditions of such events. The models are built considering the actual geometrical and mechanical properties of real systems. Particular attention is given to the connecting details between the structural components and to their interactions. The importance of the work lies in its ability to support a wide experimental activity with appropriate numerical modelling. The data of several full-scale tests carried out on barrier prototypes, as well as on their structural components, are combined with the results of numerical simulations. Although the models are designed with relatively simple solutions to keep the computational cost of the simulations low, they reproduce the test results with great accuracy, thus validating the reliability of the numerical strategy proposed for the design of these structures. The developed models can readily be applied to predict barrier performance under different possible scenarios, by varying the initial configuration of the structures and/or the impact conditions. Furthermore, the numerical models make it possible to optimize the design of these structures and to evaluate the benefit of possible solutions. Finally, it is shown that they can also be used as a valuable supporting tool for operators within a rockfall risk assessment procedure, to gain crucial understanding of the performance of existing barriers in working conditions.
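The central design quantity for such barriers is the kinetic energy of the impacting block, against which a barrier's certified capacity is checked. A back-of-envelope sketch of that sizing step (not taken from the study; block size, density, velocity and capacity below are illustrative assumptions):

```python
import math

# Illustrative inputs, not values from the study.
rho = 2700.0          # rock density, kg/m^3
d = 1.0               # equivalent spherical block diameter, m
v = 25.0              # impact velocity from a trajectory analysis, m/s
capacity_kj = 3000.0  # certified energy capacity of a candidate barrier, kJ

m = rho * math.pi * d**3 / 6.0   # block mass, kg
E_kj = 0.5 * m * v**2 / 1000.0   # translational kinetic energy, kJ

print(f"block mass ~ {m:.0f} kg, impact energy ~ {E_kj:.0f} kJ")
print("barrier adequate" if capacity_kj >= E_kj else "barrier undersized")
```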
Abstract:
The inter-American human rights system was conceived following the example of the European system under the European Convention on Human Rights (ECHR) before it was modified by Protocol No 11. However, two important differences exist. First, the authority of the European Court of Human Rights (ECtHR) to order reparation has been strictly limited by the principle of subsidiarity. Thus, the ECtHR's main function is to determine whether the ECHR has been violated. Beyond the declaratory effect of its judgments, according to Article 41 ECHR, it may only "afford just satisfaction to the injured party". The powers of the Inter-American Court of Human Rights (IACtHR) were conceived in a much broader fashion in Article 63 of the American Convention on Human Rights (ACHR), giving the Court the authority to order a variety of individual and general measures aimed at obtaining restitutio in integrum. The first main part of this thesis shows how both Courts have developed their reparation practice and examines the advantages and disadvantages of each approach. Secondly, the ECtHR's rather limited reparation powers have, interestingly, been combined with an elaborate implementation system that includes several of the Council of Europe's organs, principally the Committee of Ministers. In the Inter-American system, no dedicated mechanism was implemented to oversee compliance with the IACtHR's judgments. The ACHR limits itself to inviting the Court to point out in its annual reports the cases that have not been complied with and to propose measures to be adopted by the General Assembly of the Organization of American States. The General Assembly, however, has hardly ever taken action. The IACtHR has therefore filled this gap by developing its own procedure to oversee compliance with its judgments. Both the European and the American solutions for ensuring compliance are presented and compared in the second main part of this thesis. Finally, based on the results of both main parts, a comparative analysis of the reparation practice and the execution results in both human rights systems is provided, aimed at developing proposals for improving the functioning of each human rights protection system.
Abstract:
BACKGROUND: Studying the interactions of xenoreactive antibodies, complement and coagulation factors with the endothelium in hyperacute and acute vascular rejection usually necessitates the use of in vivo models. Conventional in vitro or ex vivo systems require either serum, plasma or anti-coagulated whole blood, making analysis of coagulation-mediated effects difficult. Here, a novel in vitro microcarrier-based system for the study of endothelial cell (EC) activation and damage using non-anticoagulated whole blood is described. Once established, the model was used to study the effect of the well-characterized complement and coagulation inhibitor dextran sulfate (DXS, MW 5000) for its EC-protective properties in a xenotransplantation setting. METHODS: Porcine aortic endothelial cells (PAEC), grown to confluence on microcarrier beads, were incubated with non-anticoagulated whole human blood until coagulation occurred or for a maximum of 90 min. PAEC-beads were either pre- or co-incubated with DXS. Phosphate buffered saline (PBS) experiments served as controls. Fluid-phase and surface activation markers for complement and coagulation were analyzed, as was binding of DXS to PAEC-beads. RESULTS: Co- as well as pre-incubation with DXS, followed by washing of the beads, significantly prolonged time to coagulation from 39 +/- 12 min (PBS control) to 74 +/- 23 and 77 +/- 20 min, respectively (P < 0.005 vs. PBS). DXS treatment attenuated surface deposition of C1q, C4b/c, C3b/c and C5b-9 without affecting IgG or IgM deposition. Endothelial integrity, expressed by positivity for von Willebrand factor, was maintained longer with DXS treatment. Compared with PBS controls, both pre- and co-incubation with DXS significantly prolonged activated partial thromboplastin time (>300 s, P < 0.05) and reduced production of thrombin-antithrombin complexes and fibrinopeptide A. Whilst DXS co-incubation completely blocked classical pathway complement activity (CH50 test), DXS pre-incubation and PBS control experiments showed no inhibition. DXS bound to PAEC-beads, as visualized using fluorescein-labeled DXS. CONCLUSIONS: This novel in vitro microcarrier model can be used to study EC damage and the complex interactions with whole blood, as well as to screen "endothelial protective" substances in a xenotransplantation setting. DXS provides EC protection in this in vitro setting, attenuating the damage to ECs seen in hyperacute xenograft rejection.
Abstract:
This article provides a comprehensive overview of the regulations on e-commerce protection rules in China and the European Union. It starts by giving a general overview of different approaches towards consumer protection in e-commerce. The article then scrutinizes the current legal system in China, focusing mainly on the SAIC's "Interim Measures for the Administration of Online Commodity Trading and Relevant Service Activities". The subsequent chapter covers the supervision of consumer protection in e-commerce in China, addressing both the regulatory objects of online commodity trading and the applied regulatory mechanisms. While the regulatory objects include operating agents, operating objects, operating behavior, electronic contracts, intellectual property and consumer protection, the regulatory mechanisms for e-commerce in China combine market mechanisms and industry self-discipline under the government's administrative regulation. Further, this article examines the current European legal system in online commodity trading. It outlines the aim and the scope of EU legislation in the respective field. Subsequently, the paper describes the European approach towards the supervision of consumer protection in e-commerce. As there is no central EU agency for consumer protection in e-commerce transactions, the EU stipulates a framework for Member States' institutions, thereby creating a European supervisory network of Member States' institutions, and empowers private consumer organisations to supervise the market on its behalf. Moreover, the EU encourages the industry to self- or co-regulate e-commerce by providing incentives. Consequently, this article concludes that consumer protection may be achieved by different means and different systems. However, even though at first glance the Chinese and the European systems appear to differ substantially, a closer look reveals tendencies of convergence between the two.
Abstract:
Earth observations (EO) represent a growing and valuable resource for many scientific, research and practical applications carried out by users around the world. Access to EO data for some applications or activities, like climate change research or emergency response activities, is indispensable for their success. However, EO data, or products made from them, are often (or are claimed to be) subject to intellectual property law protection and are licensed under specific conditions regarding access and use. Restrictive conditions on data use can be prohibitive for further work with the data. The Global Earth Observation System of Systems (GEOSS) is an initiative led by the Group on Earth Observations (GEO) with the aim of providing coordinated, comprehensive, and sustained EO and information for making informed decisions in various areas beneficial to societies, their functioning and development. It seeks to share data with users world-wide with the fewest possible restrictions on their use by implementing the GEOSS Data Sharing Principles adopted by GEO. The Principles proclaim full and open exchange of data shared within GEOSS, while recognising relevant international instruments and national policies and legislation through which restrictions on the use of data may be imposed. The paper focuses on the issue of the legal interoperability of data that are shared with varying restrictions on use, with the aim of exploring the options for making data interoperable. The main question it addresses is whether the public domain or its equivalents represent the best mechanism to ensure the legal interoperability of data. To this end, the paper analyses legal protection regimes and their norms applicable to EO data. Based on the findings, it highlights the existing public law statutory, regulatory, and policy approaches, as well as private law instruments, such as waivers, licenses and contracts, that may be used to place datasets in the public domain, or otherwise make them publicly available for use and re-use without restrictions. It uses GEOSS and its particular characteristics as a system to identify ways to reconcile the vast possibilities it provides through the sharing of data from various sources and jurisdictions on the one hand, and the restrictions on the use of the shared resources on the other. On a more general level, the paper seeks to draw attention to the obstacles and potential regulatory solutions for sharing factual or research data for purposes that go beyond research and education.
Abstract:
Digital Rights Management Systems (DRMS) are seen by content providers as the appropriate tool to, on the one hand, fight piracy and, on the other hand, monetize their assets. Although these systems claim to be very powerful and include multiple protection technologies, there is a lack of understanding about how such systems are currently being implemented and used by content providers. The aim of this paper is twofold. First, it provides a theoretical basis through which we briefly present the seven core protection technologies of a DRMS. Second, it provides empirical evidence that these seven protection technologies are the most commonly used ones, and it evaluates to what extent they are being used within the music and print industries. It concludes that the three main technologies are encryption, password, and payment systems. However, there are some industry differences regarding the number of protection technologies used, the requirements for a DRMS, the required investment, and the perceived success of DRMS in fighting piracy.
Abstract:
Technological advances in hardware, software and IP networks such as the Internet or peer-to-peer file-sharing systems are threatening the music business. The result has been an increasing number of illegal copies available on-line as well as off-line. With the emergence of digital rights management systems (DRMS), the music industry seems to have found the appropriate tool to simultaneously fight piracy and monetize its assets. Although these systems are very powerful and include multiple technologies to prevent piracy, it is as yet unknown to what extent such systems are currently being used by content providers. We provide empirical analyses, results, and conclusions related to digital rights management systems and the protection of digital content in the music industry. The analysis shows that most content providers protect their digital content through a variety of technologies such as passwords or encryption. However, each protection technology has its own specific goal, and not all prevent piracy. The majority of the respondents are satisfied with their current protection but want to reinforce it in the future, owing to the fear of increasing piracy. Surprisingly, although encryption is seen as the core DRM technology, only a few companies are currently using it. Finally, half of the respondents do not believe in the success of DRMS and their ability to reduce piracy.
Abstract:
Purpose. To examine the association between living in proximity to Toxics Release Inventory (TRI) facilities and the incidence of childhood cancer in the State of Texas. Design. This is a secondary data analysis utilizing the publicly available Toxics Release Inventory (TRI), maintained by the U.S. Environmental Protection Agency, which lists the facilities that release any of the 650 TRI chemicals. Total childhood cancer cases and the childhood cancer rate (ages 0-14 years) by county, for the years 1995-2003, were taken from the Texas Cancer Registry, available at the Texas Department of State Health Services website. Setting. This study was limited to the child population of the State of Texas. Method. Analysis was done using Stata version 9 and SPSS version 15.0. SaTScan was used for geographical spatial clustering of childhood cancer cases based on county centroids, using the Poisson clustering algorithm, which adjusts for population density. Pictorial maps were created using MapInfo Professional version 8.0. Results. One hundred and twenty-five counties had no TRI facilities in their region, while 129 counties had at least one TRI facility. An increasing trend in the number of facilities and in total disposal was observed across cancer rate quartiles, except for the highest category. Linear regression using log transformations of the number of facilities and total disposal to predict cancer rates was computed; however, neither variable was a significant predictor. Seven significant geographical spatial clusters of counties with high childhood cancer rates (p<0.05) were indicated. Binomial logistic regression, categorizing the cancer rate into two groups (<=150 and >150), indicated an odds ratio of 1.58 (CI 1.127, 2.222) for the natural log of the number of facilities. Conclusion. We have used a unique methodology, combining GIS and spatial clustering techniques with existing statistical approaches, to examine the association between living in proximity to TRI facilities and the incidence of childhood cancer in the State of Texas. Although a concrete association was not indicated, further studies examining specific TRI chemicals are required. This information can enable researchers and the public to identify potential concerns, gain a better understanding of potential risks, and work with industry and government to reduce toxic chemical use, disposal or other releases and the risks associated with them. TRI data, in conjunction with other information, can be used as a starting point in evaluating exposures and risks.
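As a sketch of the final regression step, the binomial model above can be reproduced on a county-level table holding a cancer rate and a TRI facility count per county. The study used Stata and SPSS; the Python translation, the column names and the toy numbers here are illustrative assumptions:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Hypothetical county-level rows; the real analysis used the Texas
# Cancer Registry and EPA TRI data.
df = pd.DataFrame({
    "cancer_rate": [132.0, 168.5, 149.9, 171.2, 158.3, 140.1],
    "n_facilities": [0, 12, 3, 25, 2, 18],
})

# Outcome: high (>150) vs. low (<=150) childhood cancer rate.
y = (df["cancer_rate"] > 150).astype(int)
# Predictor: natural log of the facility count (+1 offset avoids log(0)).
X = sm.add_constant(np.log(df["n_facilities"] + 1))

model = sm.Logit(y, X).fit(disp=0)
print(np.exp(model.params))  # exponentiated coefficients = odds ratios
```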
Abstract:
The HiPER project, phase 4a, is evolving. In this study we present the progress made in the fields of neutronics and radiological protection for an integrated design of the facility. The current model takes into account the optical systems inside the target bay, as well as the remote handling requirements and related infrastructure, together with different shields. The latest reference irradiation scenario, consisting of a neutron yield of 20 MJ, 5 yields per burst, one burst every week and 30 years of expected lifetime, is considered for this study. We have performed a characterization of the dose rate behavior in the facility, both during operation and between bursts. The dose rates are computed for workers, with regard to maintenance and handling, and also for the optical systems, with regard to damage. Furthermore, we have performed a waste management assessment of all the components inside the target bay. Results indicate that remote maintenance is mandatory in some areas. The small beam penetrations in the shields are responsible for high doses in some specific locations. With regard to optics, the residual doses are as high as the prompt doses. It is found that the whole target bay may be fully managed as waste within 30 years by recycling and/or clearance, with no need for burial.
Abstract:
Underground coal mine explosions generally arise from the ignition of a methane/air mixture. Such an explosion can also trigger a subsequent coal dust explosion. Traditionally, these explosions have been fought by eliminating one or several of the factors needed for the explosion to take place. Although several preventive measures are taken to prevent explosions, other measures should be considered to reduce their effects or even to extinguish the flame front. Unlike other protection methods, which remove one or two of the explosion triangle elements, namely the ignition source, the oxidizing agent and the fuel, explosion barriers act on all of them: they reduce the quantity of coal in suspension, cool the flame front, and the steam generated by vaporization displaces the oxygen present in the flame. The present paper is essentially based on a comprehensive state-of-the-art review of protective systems in underground coal mines, and particularly on the application of explosion barriers to improve the safety level in the Spanish coal mining industry. After an exhaustive study of the EN 14591 series of standards, covering explosion prevention and protection in underground mines, the authors have proven the effectiveness of explosion barriers in underground galleries through full-scale tests performed in the Polish Barbara experimental mine, showing that the barriers can reduce the effects of methane and/or flammable coal dust explosions to a satisfactory safety level.
Abstract:
In order to achieve total selectivity in electrical distribution networks, it is of great importance to analyze the fault currents in ungrounded power systems. This information helps to ensure selectivity in electrical distribution networks, guaranteeing that only the faulty line or feeder is removed from service. In the present work a new selective and directional protection method for ungrounded power systems is evaluated. The new method measures only fault currents to detect earth faults and works with a directional criterion to determine the line under faulty conditions. The main contribution of this new technique is that it can detect earth faults in outgoing lines at any type of substation, avoiding the possible mismatch of traditional directional earth-fault relays. The detection technique is based on comparing the direction of a reference current with the direction of the earth-fault capacitive currents at all the feeders connected to the same busbars. The new method has been validated through computer simulations. The results for the different cases studied are remarkable, proving the validity and usefulness of the new method.
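A minimal sketch of the directional comparison, assuming zero-sequence (residual) current phasors are available for the reference and for each feeder; the phasor values and the 90-degree decision threshold are illustrative assumptions rather than the paper's exact criterion:

```python
import cmath
import math

def phase_deg(z):
    """Phase angle of a complex phasor in degrees."""
    return math.degrees(cmath.phase(z))

def find_faulty_feeder(reference, feeders, threshold_deg=90.0):
    """Flag feeders whose residual current opposes the reference direction.

    reference: complex zero-sequence reference current phasor.
    feeders:   dict name -> complex zero-sequence current phasor.
    On a healthy feeder the capacitive earth-fault current is roughly
    aligned with the reference; on the faulty feeder it flows in the
    opposite direction.
    """
    faulty = []
    for name, i0 in feeders.items():
        diff = abs(phase_deg(i0) - phase_deg(reference))
        diff = min(diff, 360.0 - diff)  # wrap the angle to [0, 180]
        if diff > threshold_deg:
            faulty.append(name)
    return faulty

# Illustrative phasors (amperes): feeder F2 opposes the reference.
ref = cmath.rect(1.0, math.radians(90))
feeders = {"F1": cmath.rect(0.4, math.radians(85)),
           "F2": cmath.rect(2.1, math.radians(-95)),
           "F3": cmath.rect(0.6, math.radians(92))}
print(find_faulty_feeder(ref, feeders))  # -> ['F2']
```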
Abstract:
The optimization of system parameters such as power dissipation, the amount of hardware resources and the memory footprint has always been a main concern when designing resource-constrained embedded systems. This situation is even more demanding nowadays. Embedded systems can no longer be considered only as specific-purpose computers, designed for a particular functionality that remains unchanged during their lifetime. Instead, embedded systems are now required to deal with more demanding and complex functions, such as multimedia data processing and high-throughput connectivity. In addition, system operation may depend on external data, the user requirements or internal variables of the system, such as the battery lifetime. All these conditions may vary at run-time, leading to adaptive scenarios.
As a consequence of both the growing computational complexity and the existence of dynamic requirements, dynamic resource management techniques for embedded systems are needed. Software is inherently flexible, but it cannot meet the computing power offered by hardware solutions. Therefore, reconfigurable hardware emerges as a suitable technology to deal with the run-time variable requirements of complex embedded systems. Adaptive hardware requires the use of reconfigurable devices, whose functionality can be modified on demand. In this thesis, Field Programmable Gate Arrays (FPGAs) have been selected as the most appropriate commercial technology existing nowadays to implement adaptive hardware systems. There are different ways of exploiting reconfigurability in reconfigurable devices. Among them is dynamic and partial reconfiguration, a technique that consists of substituting part of the FPGA logic on demand, while the rest of the device continues working. The strategy followed in this thesis is to exploit the dynamic and partial reconfiguration of commercial FPGAs to deal with the flexibility and complexity demands of state-of-the-art embedded systems. The proposal of this thesis for dealing with run-time variable system conditions is the use of spatially scalable processing hardware IP cores, which are able to adapt their functionality or performance at run-time, trading them off against the amount of logic resources they occupy in the device. This is referred to as a scalable footprint in the context of this thesis. The distinguishing characteristic of the proposed cores is that they rely on highly parallel, modular and regular architectures, arranged in one or two dimensions. These architectures can be scaled by means of the addition or removal of the composing blocks. This strategy avoids implementing a full version of the core for each possible size, with the corresponding benefits in terms of scaling and adaptation time, as well as bitstream storage memory requirements. Instead of providing specific-purpose architectures, generic architectural templates, which can be tuned to solve different problems, are proposed in this thesis. Architectures following both systolic and wavefront templates have been selected. Together with the proposed scalable architectural templates, other issues needed to ensure the proper design and operation of the scalable cores, such as device reconfiguration control, the run-time management of the architecture and the implementation techniques, have also been addressed in this thesis. With regard to the implementation of dynamically reconfigurable architectures, device-dependent low-level details are addressed. Some of the aspects covered in this thesis are the area-constrained routing of reconfigurable modules, and an inter-module communication strategy which introduces neither extra delay nor logic overhead. The system implementation, from the hardware description to the device configuration bitstream, has been fully automated by modifying the netlists corresponding to each of the system modules, which are previously generated using the vendor tools. This modification is therefore envisaged as a post-processing step. Based on these implementation proposals, a design tool called DREAMS (Dynamically Reconfigurable Embedded and Modular Systems) has been created, including a graphical user interface.
The tool has specific features to cope with modular and regular architectures, including support for module relocation and an inter-module communications scheme based on the symmetry of the architecture. The core of the tool is a custom router, which has also been exploited in this thesis to obtain symmetrically routed nets, with the aim of enhancing the protection of critical reconfigurable circuits against side-channel attacks. This is achieved by duplicating the logic with exactly equal routing. In order to control the reconfiguration process of the FPGA, a Reconfiguration Engine suited to the specific requirements set by the proposed architectures was also developed. In addition to controlling the reconfiguration port, the Reconfiguration Engine has been enhanced with an online relocation capability, which allows a single configuration bitstream to be employed for all the positions where the module may be placed in the device. Unlike the existing relocation solutions, which are based on bitstream parsers, the proposed approach is based on the online composition of bitstreams. This strategy increases the speed of the process, while the length of the partial bitstreams is also reduced. The height of the reconfigurable modules can be lower than the height of a clock region: the Reconfiguration Engine manages the merging of the new and the existing configuration frames within each clock region. The process of scaling the hardware cores up and down also benefits from this technique. A direct link to an external memory, where partial bitstreams can be stored, has also been implemented. In order to accelerate the reconfiguration process, the ICAP has been overclocked beyond the speed reported by the manufacturer. In the case of Virtex-5, even though the maximum frequency of the ICAP is reported to be 100 MHz, valid operation at 250 MHz has been achieved, including the online relocation process. Portability of the reconfiguration solution to today's and, probably, future FPGAs has also been considered. The Reconfiguration Engine can also be used to inject faults into real hardware devices, making it possible to evaluate the fault tolerance offered by the reconfigurable architectures. Faults are emulated by introducing partial bitstreams intentionally modified to provide erroneous functionality. To prove the validity and the benefits offered by the proposed architectures, two demonstration application lines have been envisaged. First, scalable architectures have been employed to develop an evolvable hardware platform with adaptability, fault tolerance and scalability properties. Second, they have been used to implement a scalable deblocking filter suited to scalable video coding. Evolvable hardware is the use of evolutionary algorithms to design hardware in an autonomous way, exploiting the flexibility offered by reconfigurable devices. In this case, the processing elements composing the architecture are selected from a presynthesized library of processing elements, according to the decisions taken by the algorithm, instead of being decided at design time. This way, the configuration of the array may change as run-time environmental conditions do, achieving autonomous control of the dynamic reconfiguration process. Thus, the self-optimization property is added to the native self-configurability of the dynamically scalable architectures. In addition, evolvable hardware adaptability inherently offers self-healing features.
The proposal has proved to be fault-tolerant, since it is able to self-recover from both transient and cumulative permanent faults. The proposed evolvable architecture has been used to implement noise removal image filters. Scalability has also been exploited in this application. Scalable evolvable hardware architectures allow the autonomous adaptation of the processing cores to a fluctuating amount of resources available in the system. Thus, they constitute an example of the dynamic quality scalability tackled in this thesis. Two variants have been proposed. The first one consists of a single dynamically scalable evolvable core, while the second one contains a variable number of processing cores. Scalable video is a flexible approach to video compression, which offers scalability at different levels. Unlike non-scalable codecs, a scalable video bitstream can be decoded with different levels of quality, spatial or temporal resolution, by discarding the undesired information. The interest in this technology has been fostered by the development of the Scalable Video Coding (SVC) standard, as an extension of H.264/AVC. In order to exploit all the flexibility offered by the standard, it is necessary to adapt the characteristics of the decoder to the requirements of each client at run-time. The use of dynamically scalable architectures is proposed in this thesis with this aim. The deblocking filter algorithm is responsible for improving the visual perception of a reconstructed image by smoothing the blocking artifacts generated in the encoding loop. This is one of the most computationally intensive tasks of the standard and, furthermore, it is highly dependent on the scalability level selected in the decoder. Therefore, the deblocking filter has been selected as a proof of concept for the implementation of dynamically scalable architectures for video compression. The proposed architecture allows the run-time addition or removal of computational units working in parallel, changing its level of parallelism, following a wavefront computational pattern. The scalable architecture is offered together with a scalable parallelization strategy at the macroblock level, such that when the size of the architecture changes, the macroblock filtering order is modified accordingly. The proposed pattern is based on the division of the macroblock processing into two independent stages, corresponding to the horizontal and vertical filtering of the blocks within the macroblock. The main contributions of this thesis are:
- The use of highly parallel, modular, regular and local architectures to implement dynamically reconfigurable processing IP cores, for data-intensive applications with flexibility requirements.
- The use of two-dimensional mesh-type arrays as architectural templates to build dynamically reconfigurable IP cores with a scalable footprint. The proposal consists of generic architectural templates, which can be tuned to solve different computational problems.
- A design flow and a tool targeting the design of DPR systems, focused on highly parallel, modular and local architectures.
- An inter-module communication strategy, which does not introduce delay or area overhead, named Virtual Borders.
- A custom and flexible router to solve the routing conflicts, as well as the inter-module communication problems, that appear during the design of DPR systems.
- An algorithm addressing the optimization of systems composed of multiple scalable cores, whose sizes can be decided individually, to optimize the system parameters. It is based on a model known as the multi-dimensional multiple-choice Knapsack problem.
- A reconfiguration engine tailored to the requirements of highly regular and modular architectures. It combines a high reconfiguration throughput with run-time module relocation capabilities, including support for reconfigurable regions smaller than a clock region and the replication of a module in multiple positions of the device.
- A fault injection mechanism which takes advantage of the system reconfiguration engine, as well as the modularity of the proposed reconfigurable architectures, to evaluate the effects of transient and permanent faults in these architectures.
- The demonstration of the possibilities of the architectures proposed in this thesis to implement evolvable hardware systems, while keeping a high processing throughput.
- The implementation of scalable evolvable hardware systems, which are able to adapt to fluctuations in the amount of resources available in the system in an autonomous way.
- A parallelization strategy for the H.264/AVC and SVC deblocking filter, which reduces the number of macroblock cycles needed to process a whole frame.
- A dynamically scalable architecture that permits the implementation of a novel deblocking filter module, fully compliant with the H.264/AVC and SVC standards, which exploits the macroblock-level parallelism of the algorithm.
This document is organized in seven chapters. In the first one, an introduction to the technology framework of this thesis, especially focused on dynamic and partial reconfiguration, is provided. The need for the dynamically scalable processing architectures proposed in this work is also motivated in this chapter. In chapter 2, the dynamically scalable architectures are described. The description includes most of the architectural contributions of this work. The design flow tailored to the scalable architectures, together with the DREAMS tool provided to implement them, is described in chapter 3. The reconfiguration engine is described in chapter 4. The use of the proposed scalable architectures to implement evolvable hardware systems is described in chapter 5, while the scalable deblocking filter is described in chapter 6. The final conclusions of this thesis, and the description of future work, are addressed in chapter 7.
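As a sketch of the macroblock-level wavefront strategy described in this abstract: if each macroblock depends on its left and top neighbours, all macroblocks on one anti-diagonal are independent and can be filtered in parallel by however many units the scaled architecture currently provides. The dependency rule and slot assignment below are a simplification of the thesis's exact scheme:

```python
def wavefront_schedule(width, height, units):
    """Assign macroblocks (x, y) of a width x height frame to time slots.

    Assumes a macroblock can be filtered once its left and top
    neighbours are done, so every anti-diagonal (x + y = const) is a
    set of independent macroblocks, processed in chunks of `units`.
    """
    slots = {}
    t = 0
    for diag in range(width + height - 1):
        mbs = [(x, diag - x) for x in range(width) if 0 <= diag - x < height]
        for i in range(0, len(mbs), units):  # split the diagonal across units
            slots[t] = mbs[i:i + units]
            t += 1
    return slots

# Scaling the architecture down from 4 to 2 units changes the filtering
# order and lengthens the schedule, as the thesis describes.
for units in (4, 2):
    schedule = wavefront_schedule(width=6, height=4, units=units)
    print(f"{units} units -> {len(schedule)} time slots")
```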
Abstract:
In this paper, we investigate the real demand for climate protection when the purely individual perspective of existing revealed-preference studies is relaxed. This is achieved in two treatments: in the first, we determine the information subjects receive about the demand revealed by other subjects in a similar decision-making situation; in the second, collective action is implemented, whereby all subjects are required to purchase the group's median quantity at a given price. Participants in the experiment were offered the opportunity to contribute to climate protection by purchasing European Union Allowances. Allowances purchased were withdrawn from the European Emissions Trading Scheme. In our experiment, information about other subjects' behaviour has no treatment effect on the demand for climate protection. Under collective action, however, the probability of purchasing allowances is higher compared to the reference treatment, an individual contribution mechanism. Furthermore, we observe a strong correlation between subjects' demand and their expectations about other participants' behaviour. When collective action is not available, subjects' expectations are consistent with free-rider behaviour.
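A minimal sketch of the collective-action rule (a median mechanism): every group member buys the group's median stated quantity at the posted price. The group size, stated quantities and price below are illustrative, not the experiment's parameters:

```python
import statistics

def collective_purchase(stated_quantities, price):
    """Median mechanism: each member buys the group's median quantity
    at the given price, regardless of their own statement."""
    q = statistics.median(stated_quantities)
    return q, q * price

stated = [0, 1, 2, 4, 10]  # five subjects' stated demands, in allowances
qty, cost = collective_purchase(stated, price=23.0)  # price in EUR
print(f"each subject buys {qty} allowance(s), paying {cost:.2f} EUR")
```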