864 results for Policy reference framework
Abstract:
The main objective of this work is to document the budgeting system of the Banco Nacional de Desenvolvimento Econômico e Social (BNDES) and of the Banco de Desenvolvimento de Minas Gerais (BDMG), examining: a) What are the main advantages of the budget for these banks? b) What are its main limitations? c) What is the relationship between the organizational structure and the budgeting process? d) How is the budget prepared and monitored in these banks? e) What functions does the budget perform in these banks, and is there conflict among them? The study was carried out in light of the theoretical foundations available in the budgeting literature; works by Brazilian and foreign authors were reviewed, which made it possible to build a reference framework that guided the study (Chap. II). The methodology adopted is the case study and is described in Chap. III. The data needed to describe the cases studied (Chap. IV and Chap. V) were obtained in two ways: a) a questionnaire consisting mostly of open questions and b) document analysis. Based on the results of the fieldwork, the budgeting systems of the development banks surveyed could be analysed without losing sight of the suggestions of the authors consulted, which led to the important conclusion (Chap. VI) that, in general terms, the budgeting systems of these banks, designed on the basis of their new organizational structures, resemble the budgeting systems suggested by the normative models found in the literature.
Abstract:
This work aims to understand how contextual factors have influenced the transfer of the independent regulatory agency (ARI) model to the Brazilian basic sanitation sector. Owing to the divergence among these contextual factors, distinct ARI models are emerging across the country. The empirical contribution of this dissertation is therefore, beyond presenting the factors involved in the transfer of the model, to identify the types of ARI resulting from these contextual divergences in basic sanitation. These models are: (i) multi-sector state ARI, (ii) state ARI exclusively for basic sanitation, (iii) municipal ARI, (iv) regional or consortium-based ARI, and (v) bureau/regulocracy. To reach these results, however, it was necessary to build a theoretical model that overcame the restrictions found in the policy transfer literature. To this end, this theory was used together with path dependence, since the first investigations of the sector revealed a strong influence of historical variables on sanitation. This intersection constitutes the theoretical contribution of the work: by integrating elements of path dependence into the policy transfer literature, it was possible to overcome one of the main limitations of that theory, the unidirectionality of transfer.
Abstract:
The theory of Psychosocial Rehabilitation proposed by Benedetto Saraceno is discussed, taking as a reference framework the theory of Self-Organized Systems elaborated, among others, by Michel Debrun. It is observed that Saraceno's proposal satisfies several aspects of the self-organization process but does not fully constitute one. This reflection provides a better understanding of some of the difficulties encountered in rehabilitation practice in the Mental Health field.
Abstract:
The study was conducted among oncology nurses in their daily work routine and aimed to understand these professionals' subjective action, starting from their relationship with patients and adopting a phenomenological reference framework based on the ideas of Alfred Schütz. Statements were collected with the question "What does working in oncological care mean to you? Please describe," and their analysis clarified the typical action of the nurse caregiver in this daily routine. The study revealed that oncological care implies dealing with human beings in a fragile situation; requires a relationship of affectivity; and is care delivery that entails the genesis of professional burnout. Care delivery in oncology is highly complex, requiring a professional competence that goes beyond the technical-scientific sphere. Nursing professionals need to seek strategies that enable them to face the fatigue to which they are subjected in their work.
Abstract:
Setup operations are significant in some production environments, and production plans must account for features such as setup state conservation across periods through setup carryover and crossover. Modelling setup crossover allows more flexible decisions and is essential for problems with long setup times. This paper proposes two models for the capacitated lot-sizing problem with backlogging and setup carryover and crossover. The first is in line with other models from the literature, whereas the second uses a disaggregated setup variable that tracks the starting and completion times of the setup operation. This innovative approach permits a more compact formulation. Computational results show that the proposed models outperform other state-of-the-art formulations.
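For readers unfamiliar with this class of models, the following is a minimal sketch of a capacitated lot-sizing model with backlogging, written with the open-source PuLP library. It is not the paper's formulation: the carryover and crossover linking constraints are only indicated by a comment, and all data values are illustrative placeholders.

    # Minimal capacitated lot-sizing sketch with backlogging (illustrative only).
    from pulp import LpProblem, LpMinimize, LpVariable, lpSum, LpBinary, value

    items, periods = range(2), range(4)
    demand = {(i, t): 40 + 10 * i for i in items for t in periods}   # hypothetical demand
    cap = 120                                                        # capacity per period
    setup_time, setup_cost, hold_cost, back_cost = 10, 50.0, 1.0, 5.0

    m = LpProblem("clsp_backlog", LpMinimize)
    x = LpVariable.dicts("prod", (items, periods), lowBound=0)       # production quantity
    I = LpVariable.dicts("inv", (items, periods), lowBound=0)        # end-of-period inventory
    B = LpVariable.dicts("back", (items, periods), lowBound=0)       # end-of-period backlog
    y = LpVariable.dicts("setup", (items, periods), cat=LpBinary)    # setup indicator

    # objective: setup, holding and backlogging costs
    m += lpSum(setup_cost * y[i][t] + hold_cost * I[i][t] + back_cost * B[i][t]
               for i in items for t in periods)

    for i in items:
        for t in periods:
            prev_I = I[i][t - 1] if t > 0 else 0
            prev_B = B[i][t - 1] if t > 0 else 0
            # inventory/backlog flow balance
            m += prev_I - prev_B + x[i][t] - I[i][t] + B[i][t] == demand[i, t]
            # production allowed only in periods with a setup (big-M link)
            m += x[i][t] <= cap * y[i][t]
    for t in periods:
        # shared capacity consumed by production and setup times;
        # setup carryover/crossover would add further linking variables here
        m += lpSum(x[i][t] + setup_time * y[i][t] for i in items) <= cap

    m.solve()
    print("total cost:", value(m.objective))

The disaggregated variant described in the abstract would, roughly speaking, replace the single binary setup indicator with variables tracking when each setup operation starts and finishes, which is what allows a setup to straddle a period boundary.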
Abstract:
The thesis inquires into the issue of innovation and organizational and institutional change in public administration with regard to the increasingly massive adoption of participatory devices and practices in various public policy arenas. The field of reference concerns the transformation of the forms of public action and regulation systems, that is, governance. Alongside the crisis of the public function and of the role played by institutions, different levels of government are emerging, in both a supranational and a local direction, together with a plurality of social interlocutors, and a post-bureaucratic pattern of public administration that is opening itself towards its environment and citizens. Public administration is no longer considered an inert object within the bureaucratic paradigm, but a series of communicative processes, choices, cultures and practices that actively builds itself and the environment it interacts with. Therefore, the output of public administration is not simply the service supplied but the relationship enacted with the citizen, a relationship that becomes the constituent basis of administrative processes. The intention of the thesis is to consider the relation between innovation in public administration and participatory experimentations and implementations, regarded as exchanges in which citizens and the public administration hold talks and debates. The issue of organizational change in public administration as an output and effect of inclusive deliberative practices has been analysed starting from an institutionalist approach, in other words by examining the constituent features of institutions, “rediscovering” them with regard to their public nature, their ability to elaborate collective values and meanings, and the social definition of problems and solutions. The participatory device employed by the Forlì city council, which involved enterprises and cultural associations of the area in order to build a participatory Table, was studied through a qualitative methodology (participant observation and semi-structured interviews). The analysis inquired into the public nature of both the participatory device and the administrative action itself, as well as into elements pertaining to the deliberative setting, the regulatory reference framework and the actors who took part in the process.
Abstract:
Recently, an ever-increasing degree of automation has been observed in most industrial processes. This increase is motivated by the demand for systems with high performance in terms of the quality of the products/services generated, productivity, efficiency and low costs in design, realization and maintenance. This trend towards complex automation systems is rapidly spreading over automated manufacturing systems (AMS), where the integration of mechanical and electronic technology, typical of Mechatronics, is merging with other technologies such as Informatics and communication networks. An AMS is a very complex system that can be thought of as a set of flexible working stations and one or more transportation systems. To understand how important these machines are in our society, consider that every day most of us use bottles of water or soda, buy boxed products such as food or cigarettes, and so on. Another indication of their complexity is that the consortium of machine producers has estimated that there are around 350 types of manufacturing machine. A large number of manufacturing machine industries are present in Italy, notably the packaging machine industry; a great concentration of this kind of industry is located in the Bologna area, which for this reason is called the “packaging valley”. Usually, the various parts of an AMS interact in a concurrent and asynchronous way, and coordinating the parts of the machine to obtain a desired overall behaviour is a hard task. This is often the case in large-scale systems organized in a modular and distributed manner. Even if the success of a modern AMS from a functional and behavioural point of view is still to be attributed to the design choices made in defining the mechanical structure and the electrical/electronic architecture, the system that governs the control of the plant is becoming crucial because of the large number of duties associated with it. Apart from the activity inherent in the automation of the machine cycles, the supervisory system is called upon to perform other main functions, such as: emulating the behaviour of traditional mechanical members, thus allowing a drastic constructive simplification of the machine and a crucial functional flexibility; dynamically adapting the control strategies according to different productive needs and operational scenarios; obtaining a high quality of the final product through verification of the correctness of the processing; guiding the machine operator to promptly and carefully take the actions needed to establish or restore optimal operating conditions; and managing diagnostic information in real time, in support of machine maintenance operations. The facilities that designers can find directly on the market, in terms of software component libraries, in fact provide adequate support for the implementation of either top-level or bottom-level functionalities, typically pertaining to the domains of user-friendly HMIs, closed-loop regulation and motion control, and fieldbus-based interconnection of remote smart devices.
What is still lacking is a reference framework comprising a comprehensive set of highly reusable logic control components that, by focusing on the cross-cutting functionalities characterizing the automation domain, may help designers model and structure their applications according to specific needs. Historically, the design and verification process for complex automated industrial systems has been performed in an empirical way, without a clear distinction between functional and technological-implementation concepts and without a systematic method to deal organically with the complete system. In the field of analog and digital control, design and verification through formal and simulation tools have long been adopted, at least for multivariable and/or nonlinear controllers for complex time-driven dynamics, as in the fields of vehicles, aircraft, robots, electric drives and complex power electronics equipment. Moving to the field of logic control, typical of industrial manufacturing automation, the design and verification process is approached in a completely different way, usually very “unstructured”: no clear distinction is made between functions and implementations, or between functional architectures and technological architectures and platforms. This difference is probably due to the different “dynamical framework” of logic control with respect to analog/digital control. As a matter of fact, in logic control discrete-event dynamics replace time-driven dynamics; hence most of the formal and mathematical tools of analog/digital control cannot be directly migrated to logic control to highlight the distinction between functions and implementations. In addition, in the common view of application technicians, logic control design is strictly connected to the adopted implementation technology (relays in the past, software nowadays), again leading to a deep confusion between the functional and the technological view. In industrial automation software engineering, concepts such as modularity, encapsulation, composability and reusability are strongly emphasized and profitably realized in the so-called object-oriented methodologies. Industrial automation has lately been adopting this approach, as testified by the IEC standards IEC 61131-3 and IEC 61499, which have been considered in commercial products only recently. On the other hand, many contributions have already been proposed in the scientific and technical literature to establish a suitable modelling framework for industrial automation. In recent years a considerable growth in the exploitation of innovative concepts and technologies from the ICT world in industrial automation systems has been observed. As far as logic control design is concerned, Model-Based Design (MBD) is being imported into industrial automation from the software engineering field. Another key point in industrial automated systems is the growth of requirements in terms of availability, reliability and safety. In other words, the control system should not only deal with the nominal behaviour, but also with other important duties such as diagnosis and fault isolation, recovery and safety management. Indeed, along with higher performance, fault occurrences increase in complex systems.
This is a consequence of the fact that, as typically occurs in mechatronic systems, complex systems such as AMS contain, together with reliable mechanical elements, an increasing number of electronic devices, which are more vulnerable by their very nature. The problem of diagnosis and fault isolation in a generic dynamical system consists in the design of a processing unit that, by appropriately processing the inputs and outputs of the dynamical system, is capable of detecting incipient faults in the plant devices and reconfiguring the control system so as to guarantee satisfactory performance. The designer should be able to formally verify the product, certifying that, in its final implementation, it will perform its required function while guaranteeing the desired level of reliability and safety; the next step is to prevent faults and, when necessary, reconfigure the control system so that faults are tolerated. On this topic, important improvements to the formal verification of logic control, fault diagnosis and fault-tolerant control derive from Discrete Event Systems theory. The aim of this work is to define a design pattern and a control architecture to help the designer of control logic in industrial automated systems. The work starts with a brief discussion of the main characteristics and a description of industrial automated systems in Chapter 1. Chapter 2 surveys the state of software engineering paradigms applied to industrial automation. Chapter 3 presents an architecture for industrial automated systems based on the new concept of the Generalized Actuator, showing its benefits, while in Chapter 4 this architecture is refined using a novel entity, the Generalized Device, in order to obtain better reusability and modularity of the control logic. Chapter 5 presents a new approach, based on Discrete Event Systems, to the problem of formal software verification, together with an active fault-tolerant control architecture using online diagnostics. Finally, concluding remarks and some ideas on new directions to explore are given. Appendix A briefly reports some concepts and results on Discrete Event Systems which should help the reader understand some crucial points in Chapter 5; Appendix B gives an overview of the experimental testbed of the Laboratory of Automation of the University of Bologna, used to validate the approaches presented in Chapters 3, 4 and 5. Appendix C reports some component models used in Chapter 5 for formal verification.
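As a point of reference for the Discrete Event Systems material mentioned above, the sketch below (not taken from the thesis) models a hypothetical actuator as a finite-state automaton and answers a typical verification question, namely whether a fault state is reachable, with a breadth-first search; the state and event names are invented for illustration.

    # Minimal discrete-event sketch: automaton transitions and reachability check.
    from collections import deque

    # state -> {event: next_state}; hypothetical actuator behaviour
    transitions = {
        "idle":   {"start": "moving"},
        "moving": {"done": "idle", "fault": "failed"},
        "failed": {"reset": "idle"},
    }

    def reachable(start, target):
        """Return True if `target` can be reached from `start` via any event sequence."""
        seen, queue = {start}, deque([start])
        while queue:
            s = queue.popleft()
            if s == target:
                return True
            for nxt in transitions.get(s, {}).values():
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append(nxt)
        return False

    print(reachable("idle", "failed"))   # True: a diagnoser must account for the fault path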
Abstract:
This thesis addresses the problem of scaling reinforcement learning to high-dimensional and complex tasks. Reinforcement learning here denotes a class of learning methods, based on approximate dynamic programming, that is used in particular in Artificial Intelligence and can serve for the autonomous control of simulated agents or real hardware robots in dynamic and uncertain environments. To this end, regression on samples is used to determine a function that solves an "optimality equation" (Bellman) and from which approximately optimal decisions can be derived. A major hurdle is the dimensionality of the state space, which is often high and therefore poorly suited to traditional grid-based approximation methods. The goal of this work is to make reinforcement learning applicable to, in principle, arbitrarily high-dimensional problems by means of non-parametric function approximation (more precisely, regularization networks). Regularization networks are a generalization of ordinary basis-function networks in which the sought solution is parameterized by the data, so that the explicit choice of nodes/basis functions is no longer required and the "curse of dimensionality" can be avoided for high-dimensional inputs. At the same time, regularization networks are linear approximators, which are technically easy to handle and for which the existing convergence results for reinforcement learning remain valid (unlike, for example, feed-forward neural networks). All these theoretical advantages are offset, however, by a very practical problem: the computational cost of regularization networks inherently scales as O(n^3), where n is the number of data points. This is particularly problematic because in reinforcement learning the learning process takes place online: the samples are generated by an agent/robot while it interacts with the environment. Adjustments to the solution must therefore be made immediately and with little computational effort. The contribution of this work is accordingly divided into two parts. In the first part we formulate, for regularization networks, an efficient learning algorithm for solving general regression tasks that is tailored specifically to the requirements of online learning. Our approach is based on Recursive Least-Squares but can insert not only new data but also new basis functions into the existing model at constant cost per step. This is made possible by the "Subset of Regressors" approximation, whereby the kernel is approximated by a strongly reduced selection of the training data, and by a greedy selection procedure that picks these basis elements directly from the data stream at run time. In the second part we carry this algorithm over to approximate policy evaluation via least-squares-based temporal-difference learning and integrate this building block into an overall system for the autonomous learning of optimal behaviour. Altogether we develop a highly data-efficient method that is particularly suited to learning problems from robotics with continuous, high-dimensional state spaces and stochastic state transitions.
The method does not rely on a model of the environment, works largely independently of the dimension of the state space, achieves convergence with relatively few agent-environment interactions and, thanks to the efficient online algorithm, can also operate in the context of time-critical real-time applications. We demonstrate the capability of our approach on two realistic and complex application examples: the RoboCup Keepaway problem and the control of a (simulated) octopus tentacle.
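To make the online-learning requirement concrete, here is a minimal sketch, in Python with NumPy, of the recursive least-squares update that the described algorithm builds on: each sample is incorporated in O(d^2) time instead of refitting from scratch. The basis dimension, regularization constant and synthetic data are assumptions for illustration; the thesis's subset-of-regressors approximation and greedy basis selection are not shown.

    # Minimal recursive least-squares (ridge) update via the Sherman-Morrison formula.
    import numpy as np

    d, lam = 5, 1.0                      # number of basis functions, ridge parameter
    w = np.zeros(d)                      # current weight vector
    P = np.eye(d) / lam                  # inverse of the regularized Gram matrix

    def rls_update(w, P, phi, y):
        """One online step for sample (phi, y); cost O(d^2)."""
        Pphi = P @ phi
        k = Pphi / (1.0 + phi @ Pphi)    # gain vector
        err = y - w @ phi                # prediction error before the update
        w = w + k * err
        P = P - np.outer(k, Pphi)        # rank-one downdate of the inverse
        return w, P

    rng = np.random.default_rng(0)
    true_w = rng.normal(size=d)          # hidden target weights for the toy stream
    for _ in range(200):                 # stream of synthetic samples
        phi = rng.normal(size=d)         # feature/basis vector of the current state
        y = true_w @ phi + 0.01 * rng.normal()
        w, P = rls_update(w, P, phi, y)
    print(np.max(np.abs(w - true_w)))    # estimation error shrinks as samples stream in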
Abstract:
In recent years food waste has acquired growing importance in the international, political and academic debate, in the context of issues concerning the sustainability of production and consumption models, the efficient use of resources and waste management. In the coming years the Member States of the European Union will be called upon to adopt specific food waste prevention strategies within a common reference framework. This framework is the one being outlined in the course of the European research project “FUSIONS” (FP7), which in 2014 elaborated a reference framework for the definition of “food waste” with the aim of harmonising the different quantification methodologies adopted by the Member States. In this scenario, with a view to the preparation of a National Food Waste Prevention Plan for Italy, the present work applies the FUSIONS “definitional framework” for the first time to analyse the data and identify the main flows in the different links of the supply chain, and carries out an extensive consultation of stakeholders (and of the literature) to identify possible prevention measures and priorities for action. The results obtained highlight (among other things) the need to prepare and promote at national level the adoption of uniform quantification and reporting measures; the importance of stakeholder involvement in the context of a national food waste prevention campaign; the need to guarantee adequate funding for the planning and implementation of prevention measures by local authorities, and for national coordination of regional programming; the need to harmonise and simplify the regulatory framework (fiscal, health and hygiene, procedural) governing the donation of food surpluses; and the urgency of deepening the understanding of food waste through sectoral studies in the downstream stages of the supply chain.
Abstract:
Pollinating insects form a key component of European biodiversity, and provide a vital ecosystem service to crops and wild plants. There is growing evidence of declines in both wild and domesticated pollinators, and parallel declines in plants relying upon them. The STEP project (Status and Trends of European Pollinators, 2010-2015, www.step-project.net) is documenting critical elements in the nature and extent of these declines, examining key functional traits associated with pollination deficits, and developing a Red List for some European pollinator groups. Together these activities are laying the groundwork for future pollinator monitoring programmes. STEP is also assessing the relative importance of potential drivers of pollinator declines, including climate change, habitat loss and fragmentation, agrochemicals, pathogens, alien species, light pollution, and their interactions. We are measuring the ecological and economic impacts of declining pollinator services and floral resources, including effects on wild plant populations, crop production and human nutrition. STEP is reviewing existing and potential mitigation options, and providing novel tests of their effectiveness across Europe. Our work is building upon existing and newly developed datasets and models, complemented by spatially-replicated campaigns of field research to fill gaps in current knowledge. Findings are being integrated into a policy-relevant framework to create evidence-based decision support tools. STEP is establishing communication links to a wide range of stakeholders across Europe and beyond, including policy makers, beekeepers, farmers, academics and the general public. Taken together, the STEP research programme aims to improve our understanding of the nature, causes, consequences and potential mitigation of declines in pollination services at local, national, continental and global scales.
Abstract:
The pressuremeter test in boreholes has proven to be a useful tool in geotechnical exploration, especially when its results are compared with those obtained from a mathematical model governed by a constitutive equation representative of the soil. The numerical model presented in this paper is intended to be the reference framework for the interpretation of this test. The model analyses variables such as the type of response, the initial state, the drainage regime and the constitutive equations. It is a finite element model able to work either with an undeformed mesh or with one adapted to the deformation.
Abstract:
Improving air quality is an eminently inter-disciplinary task. The wide variety of sciences and stakeholders involved calls for simple yet fully integrated and reliable evaluation tools. Integrated Assessment Modeling has proved to be a suitable solution for the description of air pollution systems because it considers each of the stages involved: emissions, atmospheric chemistry, dispersion, environmental impacts and abatement potentials. Some integrated assessment models covering each of these stages are available at the European scale, the Greenhouse Gas and Air Pollution Interactions and Synergies (GAINS) model being the most recognized and widely used within the European policy-making context. However, addressing air quality at the national/regional scale under an integrated assessment framework is desirable, and European-scale models do not provide enough spatial resolution or detail in their ancillary data sources, mainly emission inventories and local meteorology patterns, or in the associated results.
The objective of this dissertation is to present the developments in the design and application of an Integrated Assessment Model especially conceived for Spain and Portugal. The Atmospheric Evaluation and Research Integrated system for Spain (AERIS) is able to quantify concentration profiles for several pollutants (NO2, SO2, PM10, PM2.5, NH3 and O3), the atmospheric deposition of sulfur and nitrogen species and their related impacts on crops, vegetation, ecosystems and health as a response to percentage changes in the emissions of relevant sectors. The current version of AERIS considers 20 emission sectors, corresponding either to individual SNAP sectors or to macrosectors, whose contributions to air quality levels, deposition and impacts have been modeled through the use of source-receptor matrices (SRMs). These matrices are proportionality constants that relate emission changes to different air quality indicators and have been derived through statistical parameterizations of an air quality modeling system (AQM). For the concrete case of AERIS, its parent AQM relied on the WRF model for meteorology and on the CMAQ model for atmospheric chemical processes. The quantification of atmospheric deposition and of impacts on ecosystems, crops, vegetation and human health has been carried out following the standard methodologies established under international negotiation frameworks such as CLRTAP. The programming structure is MATLAB®-based, allowing great compatibility with typical desktop software such as Microsoft Excel® or ArcGIS®. Regarding air quality levels, AERIS is able to provide mean annual and mean monthly concentration values, as well as the indicators established in Directive 2008/50/EC, namely the 19th highest hourly value for NO2; the 25th highest hourly value and the 4th highest daily value for SO2; the 36th highest daily value for PM10; and the 26th highest maximum 8-hour daily value, SOMO35 and AOT40 for O3. Regarding atmospheric deposition, the annual accumulated deposition per unit of area of oxidized and reduced nitrogen species as well as sulfur can be estimated. When the aforementioned values are related to specific characteristics of the modeling domain, such as land use, forest and crop covers, population counts and epidemiological studies, a wide array of impacts can be calculated. Focusing on impacts on ecosystems and soils, AERIS is able to estimate critical load exceedances and accumulated average exceedances for nitrogen and sulfur species. Damage to forests is estimated as an exceedance of the established critical levels of NO2 and SO2. Additionally, AERIS is able to quantify damage caused by O3 and SO2 to grapes, maize, potato, rice, sunflower, tobacco, tomato, watermelon and wheat. Impacts on human health are modeled as a consequence of exposure to PM2.5 and O3 and quantified as losses in statistical life expectancy and premature mortality indicators. The accuracy of the IAM has been tested by statistically contrasting the obtained results with those yielded by the conventional AQM, exhibiting in most cases a good level of agreement. Because impacts cannot be produced directly by the AQM, a credibility analysis was carried out by comparing the outputs of AERIS for a given emission scenario, through probability tests, against the performance of GAINS for the same scenario. This analysis revealed a good correspondence in the mean behavior and the probabilistic distributions of the datasets.
The verification tests that were applied to AERIS suggest that its results are consistent enough to be credited as reasonable and realistic. In conclusion, the main reason that motivated the creation of this model was to produce a reliable yet simple screening tool to support decision and policy making in the analysis of different “what-if” scenarios at a low computing cost. The interaction with politicians and other stakeholders dictated that the complexity of modeling be reconciled with the conciseness of policies, which AERIS reflects in both its conceptual and computational structures. It should be noted, however, that AERIS has been created for use within a policy-driven assessment framework and by no means should be considered a substitute for ordinary AQMs.
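As an illustration of how source-receptor matrices are used (this is not AERIS code), the sketch below maps percentage emission reductions per sector to changes in an air quality indicator at a few receptor cells with a single matrix-vector product; all numbers are invented placeholders.

    # Minimal source-receptor matrix sketch: linear response of concentrations to emission cuts.
    import numpy as np

    sectors = ["traffic", "industry", "agriculture"]      # hypothetical emission sectors
    # srm[r, s]: drop in annual mean NO2 (ug/m3) at receptor r per 1 % reduction in sector s
    srm = np.array([[0.020, 0.010, 0.001],
                    [0.015, 0.030, 0.002],
                    [0.005, 0.004, 0.010],
                    [0.010, 0.008, 0.003]])

    baseline = np.array([38.0, 41.0, 22.0, 30.0])         # baseline concentrations per receptor
    reduction_pct = np.array([30.0, 10.0, 0.0])           # scenario: -30 % traffic, -10 % industry

    new_levels = baseline - srm @ reduction_pct           # scenario concentrations
    print(new_levels)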
Abstract:
Embedded systems have traditionally been conceived as specific-purpose computers with one fixed computational task for their whole lifetime. Stringent requirements in terms of cost, size and weight forced designers to highly optimise their operation for very specific conditions.
However, demands for versatility, more intelligent behaviour and, in summary, increased computing capability began to clash with these limitations, intensified by the uncertainty associated with the more dynamic operating environments where such systems were progressively being deployed. This resulted in an increasing need for systems to respond by themselves to events unexpected at design time, such as: changes in input data characteristics and in the system environment in general; changes in the computing platform itself, e.g. due to faults and fabrication defects; and changes in functional specifications caused by dynamically changing system objectives. As a consequence, system complexity increases, but in turn autonomous lifetime adaptation without human intervention is progressively enabled, allowing systems to take their own decisions at run time. Such systems are known, in general, as self-adaptive, and are capable, among other things, of self-configuration, self-optimisation and self-repair. Traditionally, the soft part of a system has mostly been the only place to provide some degree of adaptation capability. However, the performance-to-power ratios of software-driven devices such as microprocessors are in many situations not adequate for embedded systems. In this scenario, the resulting rise in application complexity is being partly addressed by rising device complexity in the form of multi- and many-core devices; unfortunately, this keeps increasing power consumption. Moreover, design methodologies have not improved accordingly, so the available computational power of all these cores cannot be fully leveraged. Altogether, these factors mean that the computing demands posed by new applications are not being wholly satisfied. The traditional solution to improve performance-to-power ratios has been the switch to hardware-driven specifications, mainly using ASICs. However, their costs are highly prohibitive except in some mass-production cases, and the static nature of their structure complicates meeting the adaptation needs. Advances in fabrication technologies have turned the once slow, small FPGA, used as glue logic in bigger systems, into a very powerful reconfigurable computing device with a vast amount of computational logic resources and embedded, hardened signal-processing and general-purpose processing cores. Its reconfiguration capabilities have enabled software-like flexibility to be combined with hardware-like computing performance, which has the potential to cause a paradigm shift in computer architecture, since hardware can no longer be considered static. This is so because, as is the case with SRAM-based FPGAs, Dynamic Partial Reconfiguration (DPR) is possible. This means that subsets of the FPGA computational resources can be changed (reconfigured) at run time while the rest remain active. Moreover, this reconfiguration process can be triggered internally by the device itself. This technological boost in reconfigurable hardware devices is covered under the field known as Reconfigurable Computing. One of the most exotic fields of application that Reconfigurable Computing has enabled is the one known as Evolvable Hardware (EHW), in which this dissertation is framed. The main idea behind the concept is to turn hardware that is adaptable through reconfiguration into an evolvable entity subject to the forces of an evolutionary process, inspired by that of natural biological species, that guides the direction of change.
It is yet another application of the field of Evolutionary Computation (EC), which comprises a set of global optimisation algorithms known as Evolutionary Algorithms (EAs), considered universal problem solvers. In analogy to the biological process of evolution, in EHW the subject of evolution is a population of circuits that tries to adapt to its surrounding environment by becoming progressively better fitted to it, generation after generation. Individuals are circuit configurations, represented as bitstreams that describe reconfigurable circuits. By selecting those that behave better, i.e., that obtain a higher fitness value after being evaluated, and using them as parents of the following generation, the EA creates a new offspring population by applying so-called genetic operators such as mutation and recombination. As generations succeed one another, the whole population is expected to approach the optimum solution to the problem of finding a circuit configuration that fulfils the system objectives. The state of reconfiguration technology after the Xilinx XC6200 FPGA family was discontinued and replaced by the Virtex families in the late 90s was a major obstacle to advancement in EHW: closed (not publicly documented) bitstream formats, dependence on manufacturer tools with very limited support for DPR, slow reconfiguration speed, and the risk that random bitstream modifications could compromise device integrity are some of the reasons. However, a proposal in the early 2000s, the Virtual Reconfigurable Circuit (VRC), allowed research in this field to continue while DPR technology matured. In essence, a VRC in an FPGA is a virtual layer acting as an application-specific reconfigurable circuit on top of the FPGA fabric that reduces the complexity of the reconfiguration process and increases its speed (compared to native reconfiguration). It is an array of computational nodes specified using standard HDL descriptions that define ad-hoc reconfigurable resources: routing multiplexers and a set of configurable processing elements, each one containing all the required functions, which are selectable through functionality multiplexers as in microprocessor ALUs. A large register acts as configuration memory, so VRC reconfiguration is very fast, given that it only involves writing this register, which drives the selection signals of the set of multiplexers. However, this virtual layer introduces large overheads: an area overhead due to the simultaneous implementation of every function in every node of the array plus the multiplexers, and a delay overhead due to the multiplexers, which also reduces the maximum operating frequency. The very nature of Evolvable Hardware, able to optimise its own computational behaviour, makes it a good candidate for advancing research in self-adaptive systems. Combining a self-reconfigurable computing substrate that can be dynamically changed at run time with an embedded algorithm that provides a direction for change can help fulfil the requirements for autonomous lifetime adaptation of FPGA-based embedded systems. The main proposal of this thesis is hence directed at contributing to the autonomous self-adaptation of the underlying computational hardware of FPGA-based embedded systems by means of Evolvable Hardware. This is tackled by considering that the computational behaviour of a system can be modified by changing either of its two constituent parts: an underlying hard structure and a set of soft parameters.
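To make the evolutionary loop described above more concrete, the following Python sketch (a toy model, not the algorithm, encoding or evaluation used in the thesis) evolves a population of bitstring configurations towards an arbitrary target behaviour; GENOME_LEN, POP_SIZE, the operators and TARGET are hypothetical placeholders, and in real Evolvable Hardware the fitness step would configure the device and measure the circuit's actual response.

import random

GENOME_LEN = 64       # length of a candidate circuit configuration (toy bitstring)
POP_SIZE = 20
GENERATIONS = 100
MUTATION_RATE = 0.02

# Stands in for the "desired behaviour" a circuit should exhibit.
TARGET = [random.randint(0, 1) for _ in range(GENOME_LEN)]

def fitness(genome):
    # Toy evaluation: how many bits match the target behaviour.
    # In EHW this step would reconfigure the device and evaluate the circuit.
    return sum(g == t for g, t in zip(genome, TARGET))

def mutate(genome):
    # Flip each bit with a small probability.
    return [1 - g if random.random() < MUTATION_RATE else g for g in genome]

def crossover(a, b):
    # Single-point recombination of two parent configurations.
    point = random.randrange(1, GENOME_LEN)
    return a[:point] + b[point:]

def evolve():
    population = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
                  for _ in range(POP_SIZE)]
    for _ in range(GENERATIONS):
        scored = sorted(population, key=fitness, reverse=True)
        parents = scored[:POP_SIZE // 2]          # keep the better-fitted half
        offspring = [mutate(crossover(random.choice(parents), random.choice(parents)))
                     for _ in range(POP_SIZE - len(parents))]
        population = parents + offspring          # next generation
    return max(population, key=fitness)

best = evolve()
print("best fitness:", fitness(best), "of", GENOME_LEN)

With elitist selection of the better half plus simple mutation and recombination, the best individual typically approaches the target within the configured number of generations; a realistic EHW setup would replace the toy fitness with a hardware-in-the-loop evaluation.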
Two main lines of work derive from this distinction: on one side, parametric self-adaptation and, on the other, structural self-adaptation. The goal pursued in the case of parametric self-adaptation is the implementation of complex evolutionary optimisation techniques in resource-constrained embedded systems for the online parameter adaptation of signal processing circuits. The application selected as proof of concept is the optimisation of the coefficients of Discrete Wavelet Transform (DWT) filters for very specific types of images, oriented towards image compression. Hence, adaptive and improved compression efficiency, as compared to standard techniques, is the required goal of evolution. The main challenge lies in reducing the supercomputing resources reported in previous works for the optimisation process, in order to make it suitable for embedded systems. Regarding structural self-adaptation, the thesis goal is the implementation of self-adaptive circuits in FPGA-based evolvable systems through an efficient use of native reconfiguration capabilities. In this case, the evolution of image processing tasks, such as the filtering of unknown and changing types of noise and edge detection, constitutes the selected proofs of concept. In general, evolving image processing behaviours that are unknown at design time (within a certain complexity range) is the required goal. Here, the mission of the proposal is the incorporation of DPR in EHW to evolve a systolic array architecture, adaptable through reconfiguration, whose evolvability had not been previously verified. In order to achieve the two stated goals, this thesis originally proposes an evolvable platform that integrates an Adaptation Engine (AE), a Reconfiguration Engine (RE) and an adaptable Computing Engine (CE). In the case of parametric adaptation, the proposed platform is characterised by:
• a CE featuring a DWT hardware processing core adaptable through reconfigurable registers that hold the wavelet filter coefficients
• an evolutionary algorithm as AE that searches for candidate wavelet filters through a parametric optimisation process specifically developed for systems with scarce computing resources
• a new, simplified mutation operator for the selected EA which, together with a fast evaluation mechanism for candidate wavelet filters derived from the existing literature, ensures the feasibility of the evolutionary search involved in wavelet adaptation
In the case of structural adaptation, the platform proposal takes the form of:
• a CE based on a reconfigurable 2D systolic array template composed of reconfigurable processing nodes
• an evolutionary algorithm as AE that searches for candidate configurations of the array using a set of computational functionalities for the nodes, available in a run-time accessible library
• a hardware RE that exploits the native DPR capabilities of FPGAs and makes efficient use of the available reconfigurable resources of the device to change the behaviour of the CE at run time
• a library of reconfigurable processing elements, characterised by position-independent partial bitstreams, used as the set of available configurations for the processing nodes of the array
The main contributions of this thesis can be summarised in the following list.
• An FPGA-based evolvable platform for parametric and structural self-adaptation of embedded systems, composed of a Computing Engine, an evolutionary Adaptation Engine and a Reconfiguration Engine. This platform is further developed and tailored for both parametric and structural self-adaptation.
• Regarding parametric self-adaptation, the main contributions are:
– A CE adaptable through reconfigurable registers that enables parametric adaptation of the coefficients of an adaptive hardware implementation of a DWT core.
– An AE based on an evolutionary algorithm specifically developed for numerical optimisation, applied to wavelet filter coefficients in resource-constrained embedded systems.
– A run-time self-adaptive DWT IP core for embedded systems that allows online optimisation of transform performance for image compression in specific deployment environments characterised by different types of input signals.
– A software model and a hardware implementation of a tool for the automatic, evolutionary construction of custom wavelet transforms.
• Lastly, regarding structural self-adaptation, the main contributions are:
– A CE adaptable through native FPGA fabric reconfiguration, characterised by a two-dimensional systolic array template of reconfigurable processing nodes. Different processing behaviours can be automatically mapped onto the array by using a library of simple reconfigurable processing elements (a toy software sketch of this idea follows this abstract).
– The definition of a library of such processing elements suited for the autonomous, run-time synthesis of different image processing tasks.
– The efficient incorporation of DPR in EHW systems, overcoming the main drawbacks of the previous approach based on virtual reconfigurable circuits. The implementation details of both approaches are also originally compared in this work.
– A fault-tolerant, self-healing platform that enables online functional recovery in hazardous environments. The platform has been characterised from a fault-tolerance perspective: fault models at FPGA CLB level and at processing-element level are proposed and, using the RE, a systematic fault analysis is carried out for one fault in every processing element and for two accumulated faults.
– A platform with dynamic filtering quality that permits online adaptation to different types of noise and different computing behaviours, considering the available computing resources. On one side, non-destructive filters are evolved, enabling scalable cascaded filtering schemes; on the other, size-scalable filters are also evolved considering dynamically changing computational filtering requirements.
This dissertation is organised in four parts and nine chapters. The first part contains chapter 1, the introduction to and motivation of this PhD work. Next, the reference framework in which this dissertation is set is analysed in the second part: chapter 2 features an introduction to the notions of self-adaptation and autonomic computing as a more general research field than the very specific one of this work; chapter 3 introduces evolutionary computation as the technique used to drive adaptation; chapter 4 analyses platforms for reconfigurable computing as the technology that hosts self-adaptive hardware; and finally, chapter 5 defines, classifies and surveys the field of Evolvable Hardware. The third part of the work follows, containing the proposal, its development and the results obtained: while chapter 6 contains a statement of the thesis goals and the description of the proposal as a whole, chapters 7 and 8 address parametric and structural self-adaptation, respectively. Finally, chapter 9 in part 4 concludes the work and describes future research paths.
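As a rough software illustration of the structural case (an assumption-laden toy, not the FPGA platform described above), the sketch below treats a candidate configuration as a short list of indices into a small library of processing-element behaviours, applies them as a cascaded filtering pipeline over a noisy test image, and scores the result against a clean reference with a simple (1+1)-style search; PE_LIBRARY, STAGES, the test image and the fitness metric are all invented for the example.

import random

# Hypothetical library of processing-element behaviours: each takes a 3-pixel
# horizontal window (left, centre, right) and returns a new centre value.
PE_LIBRARY = [
    lambda l, c, r: c,                      # identity (pass-through)
    lambda l, c, r: (l + c + r) // 3,       # mean of the window
    lambda l, c, r: sorted((l, c, r))[1],   # median of the window
    lambda l, c, r: max(l, c, r),           # dilation-like
    lambda l, c, r: min(l, c, r),           # erosion-like
]
STAGES = 4  # depth of the toy pipeline (stands in for the array's processing stages)

def apply_stage(img, pe):
    # Apply one processing element across the whole image (borders left untouched).
    out = [row[:] for row in img]
    for y, row in enumerate(img):
        for x in range(1, len(row) - 1):
            out[y][x] = pe(row[x - 1], row[x], row[x + 1])
    return out

def run_pipeline(img, config):
    for idx in config:
        img = apply_stage(img, PE_LIBRARY[idx])
    return img

def fitness(config, noisy, clean):
    # Negative total absolute error against the clean reference (higher is better).
    filtered = run_pipeline(noisy, config)
    return -sum(abs(f - c)
                for frow, crow in zip(filtered, clean)
                for f, c in zip(frow, crow))

# Toy test data: a smooth horizontal ramp corrupted with pepper-like noise.
clean = [[x * 8 for x in range(32)] for _ in range(8)]
noisy = [[0 if random.random() < 0.1 else v for v in row] for row in clean]

# Simple (1+1)-style evolutionary search over stage configurations.
best = [random.randrange(len(PE_LIBRARY)) for _ in range(STAGES)]
for _ in range(300):
    candidate = best[:]
    candidate[random.randrange(STAGES)] = random.randrange(len(PE_LIBRARY))
    if fitness(candidate, noisy, clean) >= fitness(best, noisy, clean):
        best = candidate

print("best stage configuration:", best, "fitness:", fitness(best, noisy, clean))

In the actual platform, each index would instead select a position-independent partial bitstream for an array node, the Reconfiguration Engine would apply it through DPR, and the Adaptation Engine would drive the search.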
Resumo:
Nowadays, Information Technology (IT) systems deliver a wide variety of services with increasingly specific requirements that must be met in order to guarantee their operation. To provide such guarantees, different standards and frameworks, also called recommendations or good-practice guides, have been standardized; they allow the service provider to verify that these requirements are met and to offer both the end customer and the organization itself a quality service that generates value for all parties. By applying these recommendations and standards, companies and Public Administrations ensure that the telematic services they offer meet quality standards, thus providing the end user with a stable platform suited to the service delivered. These standards and frameworks address both the IT service itself and the IT service management system, so that, through the proper operation of the system that manages the service, the service can be continuously improved. In particular, we focus on the most widely used standard, ISO 20000, and on the best-practice framework ITIL 2011, with the aim of giving a clear view of the different processes, activities and functions that both define in order to generate value in the organization through IT systems. To help in the understanding of both ITIL and ISO 20000, an example implementation, first of ITIL and then of ISO 20000, has been developed for a service that is already in operation, defining, for each of the five lifecycle phases used both in the standard and in the framework, the processes and functions needed for its implementation and its subsequent review. ABSTRACT. Today, a wide variety of services is delivered thanks to Information Technology (IT) service management; these services have increasingly specific requirements to ensure the proper operation of the service they support. In order to meet these requirements, different standardized regulations and frameworks, also called recommendations or good-practice guides, have been released. They allow the service provider to verify that such requirements are met and to offer both the end customer and the organization that manages the system a quality service that generates value for both parties. In the end, with the implementation of these recommendations and regulations, companies and public authorities ensure that the telematic services offered meet the quality standards they seek, allowing a stable and appropriate service platform to be offered to the end user. These standards and reference frameworks apply both to the IT service itself and to IT service management, so that, through the proper operation of the parts involved in the process that manages the service, a better service can be offered. In particular, we focus on the most widely used standard, ISO 20000, and on the best-practice reference framework, ITIL 2011, in order to give a clear overview of the different processes, activities and functions that both define to create value in the company through IT service management.
To help in the understanding of both ITIL and ISO 20000, an example implementation, first of ITIL and then of ISO 20000, has been developed for a service that is already in operation, defining, for each of the five phases of the life cycle used both in the standard and in the framework, the processes and functions necessary for its implementation and its later review.
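As a small organizational aid (an assumption about how the material could be arranged, not a reproduction of the document's own mapping), the snippet below lists the five ITIL 2011 service lifecycle phases with a few representative processes each and flattens them into the kind of simple review checklist such an implementation exercise might walk through.

# Hypothetical helper structure: the five ITIL 2011 lifecycle phases with a few
# representative processes each (not exhaustive, and not the document's own mapping).
ITIL_2011_LIFECYCLE = {
    "Service Strategy": ["Service Portfolio Management", "Financial Management",
                         "Demand Management"],
    "Service Design": ["Service Level Management", "Capacity Management",
                       "Availability Management"],
    "Service Transition": ["Change Management", "Release and Deployment Management",
                           "Service Asset and Configuration Management"],
    "Service Operation": ["Incident Management", "Problem Management",
                          "Request Fulfilment"],
    "Continual Service Improvement": ["Seven-Step Improvement Process"],
}

def review_checklist(lifecycle):
    # Flatten the phase/process map into a simple review checklist.
    return [f"{phase}: {process}"
            for phase, processes in lifecycle.items()
            for process in processes]

for item in review_checklist(ITIL_2011_LIFECYCLE):
    print(item)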
Resumo:
Lightness was a very popular term on the architectural scene in the mid-1990s. A decisive contribution to this was the exhibition that the MoMA in New York devoted in 1995 to a series of works of architecture grouped under the title Light Construction. Although it was the best-known of these events, thanks to the institution that hosted it, it was not the only one that dealt with this aspect of architecture at the time. In parallel, during the autumn of 1995, Columbia University dedicated a seminar to reflection on glass in architecture, from modernity to the present day. The seminar was called The Culture of Glass and was taught by Professor Joan Ockman. A year earlier, the series of events promoted by ANY had focused its attention on the theme of lightness, devoting issue 5 of its journal ANY Magazine exclusively to it under the title Lightness. These three approaches, although complementary, seemed only to circle the centre of an architectural proposal that was sensed to be linked to lightness. A look back reveals two further elements that broaden the framework of this search, contributing the views of one of the Spanish masters on the current state of architecture and of a philosopher who embarks on nothing less than the risk of creating a postmodern space. The first of these views concerns the university sphere: the lecture given by Professor Sáenz de Oíza to open the 1991-92 academic year at the E.T.S.A.M. The second, more distant in time but fundamental to understanding the idea of lightness proposed here, is the exhibition Les Immatériaux, held in 1985 at the Centre Georges Pompidou in Paris and curated by J.F. Lyotard. These five events delimit a temporal and conceptual frame of reference within which it is possible to characterize and differentiate a concept of lightness that belongs to contemporary architecture. The search for this concept, for what makes it specific to this period, will help to differentiate it from approaches to lightness in past architectures, mainly those of classical modernity and postmodernity. It will also allow thinking about architecture in relation to the work of art to emerge, not as an aesthetic object but as an object that calls into question our gaze and our knowledge of the world, and therefore far removed from styles or mere technical solutions. The field of lightness is no longer reduced to the opposition between the light and the heavy, as a modern interpretation might have it, but acquires other nuances related to sight, balance or the movement of a body that becomes an intermediary of experience. The search for this form of lightness in built architecture will help us to understand the aspects that characterize it, which are always linked in the form of subtle work on the support of architecture, questioning what we understood as stable. ABSTRACT Lightness was a popular term in the architectural scene in the mid-nineties. The exhibition that the MoMA in New York devoted in 1995 to a number of works of architecture gathered under the title Light Construction was a decisive contribution to it. Although it was the best-known event, thanks to the institution that hosted it, it was not the only one that dealt with that aspect of architecture. In parallel, during the autumn of 1995, Columbia University dedicated a seminar to reflection on glass in architecture, from modernity to the present day. The seminar was called The Culture of Glass and the professor was Joan Ockman.
A year earlier, the series of events promoted by ANY had focused on the subject of lightness, and issue 5 of ANY Magazine was exclusively dedicated to it under the title Lightness. These three approaches, although complementary, only seemed to surround the core of an architectural proposal that appeared to be linked to lightness. A look back allows two more elements to be discovered that broaden the framework of this search, contributing the views of one of the Spanish masters on the current state of architecture and of a philosopher who embarks on nothing less than the creation of a postmodern space. The first of these views concerns the university sphere: the lecture given by Professor Sáenz de Oíza to inaugurate the 1991-92 academic year at the E.T.S.A.M. The second, further back in time but critical to understanding the idea of lightness proposed here, is the exhibition Les Immatériaux, held in 1985 at the Centre Georges Pompidou in Paris and curated by J.F. Lyotard. These five events define a temporal and conceptual reference framework in which it is possible to characterize and differentiate a concept of lightness belonging to contemporary architecture. The search for this concept, for what makes it specific to the present time, helps to differentiate it from the approaches to lightness in past architectures, mainly those of classical modernism and postmodernism. It will also allow thinking about architecture in relation to the work of art to emerge, not as an aesthetic object but as an object that questions our view and our knowledge of the world, and is therefore far removed from styles or mere technical solutions. The field of lightness is no longer limited to the opposition between the light and the heavy, as might correspond to a modern interpretation, but acquires other nuances related to sight, balance or the movement of a body that becomes an intermediary of experience. The search for this kind of lightness in built architecture will help us to understand the aspects that define it, which are always linked in the form of subtle work on the architectural support that leads us to question what we understand as stable.