962 results for Multiple abstraction levels


Relevance:

80.00%

Abstract:

Aspect-Oriented Software Development (AOSD) is a technique that complements Object-Oriented Software Development (OOSD) by modularizing several concerns that OOSD approaches do not modularize appropriately. However, the current state of the art in AOSD suffers under software evolution, mainly because pointcut definitions can stop working correctly when base elements evolve. A promising way to deal with that problem is the definition of model-based pointcuts, where pointcuts are defined over a conceptual model. That strategy makes pointcuts less fragile to software evolution than pointcuts defined directly over base-model elements. Following that strategy, this work defines a conceptual model at a high abstraction level in which software patterns and architectures can be specified and, through Model-Driven Development (MDD) techniques, instantiated and composed in an architecture description language that supports aspect modeling at the architectural level. Our MDD approach allows concepts at the architectural level to be propagated to other abstraction levels (the design level, for example) through MDA transformation rules. This work also presents a plug-in for the Eclipse platform, called AOADLwithCM, created to support our development process. The AOADLwithCM plug-in was used to describe a case study based on the MobileMedia system. The MobileMedia case study shows, step by step, how the conceptual-model approach can minimize the fragile pointcut problem caused by software evolution. The case study was also used as input to analyze software evolution according to the metrics proposed by Khatchadourian, Greenwood, and Rashid. Finally, we analyze how evolution in the base model affects maintenance of the aspectual model with and without the conceptual-model approach.
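As a toy illustration of the fragile pointcut problem described above (the method names, concepts, and matching rules below are invented for the example, not taken from the thesis or from AOADLwithCM):

```python
# Sketch: why name-based pointcuts are fragile under evolution, and how a
# conceptual-model indirection helps. All names here are hypothetical.

# Base model: methods tagged with concepts from a conceptual model.
methods = {
    "saveAccount":   {"concept": "Persistence"},
    "updateAccount": {"concept": "Persistence"},
    "renderView":    {"concept": "Presentation"},
}

def name_based_pointcut(methods, pattern):
    """Classic pointcut: match join points by naming convention."""
    return {m for m in methods if pattern in m}

def model_based_pointcut(methods, concept):
    """Model-based pointcut: match join points through the conceptual model."""
    return {m for m, meta in methods.items() if meta["concept"] == concept}

# Both capture the persistence methods initially.
assert name_based_pointcut(methods, "Account") == {"saveAccount", "updateAccount"}
assert model_based_pointcut(methods, "Persistence") == {"saveAccount", "updateAccount"}

# Evolution: 'saveAccount' is renamed to 'persistCustomer' (concept unchanged).
methods["persistCustomer"] = methods.pop("saveAccount")
```

After the rename, the name-based pointcut silently loses a join point, while the model-based one still captures every method tagged with the Persistence concept.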

Relevance:

80.00%

Abstract:

A reconfigurable, multiprocessor architecture for the physical implementation of Petri nets was developed in VHDL and mapped onto an FPGA. Conventionally, Petri nets are translated into a hardware description language at the register-transfer level, and a high-level synthesis process is used to generate the Boolean functions and state-transition tables that can finally be mapped onto an FPGA (Morris et al., 2000) (Soto and Pereira, 2001). The proposed architecture has reconfigurable logic blocks designed exclusively to implement the places and transitions of the net, so neither a description of the net at intermediate abstraction levels nor a synthesis process is required to map the net onto the architecture. The architecture supports Petri net models with distinguishable tokens and timed transition firing. It consists of an array of reconfigurable processors, each representing the behavior of one transition of the Petri net to be mapped, and a communication system implemented by a set of routers capable of sending data packets from one reconfigurable processor to another. The proposed architecture was validated on an FPGA with 10,570 logic elements, in a topology that allowed the implementation of Petri nets with up to 9 transitions and 36 places, achieving a latency of 15.4 ns and a throughput of up to 17.12 GB/s at an operating frequency of 64.58 MHz.
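The firing rule that each reconfigurable processor implements in hardware can be sketched in software; the two-place net below is a made-up example, not one of the validated benchmarks:

```python
# Software sketch of the Petri net token game. Each reconfigurable processor in
# the architecture realizes the enabled/fire behavior of one transition.

class PetriNet:
    def __init__(self, marking, transitions):
        # marking: tokens per place; transitions: name -> (inputs, outputs),
        # where inputs/outputs map place -> arc weight.
        self.marking = dict(marking)
        self.transitions = transitions

    def enabled(self, t):
        ins, _ = self.transitions[t]
        return all(self.marking.get(p, 0) >= w for p, w in ins.items())

    def fire(self, t):
        if not self.enabled(t):
            raise ValueError(f"transition {t} is not enabled")
        ins, outs = self.transitions[t]
        for p, w in ins.items():
            self.marking[p] -= w
        for p, w in outs.items():
            self.marking[p] = self.marking.get(p, 0) + w

# t1 moves a token from p1 to p2; t2 consumes tokens from p2.
net = PetriNet(
    marking={"p1": 2, "p2": 0},
    transitions={"t1": ({"p1": 1}, {"p2": 1}), "t2": ({"p2": 1}, {})},
)
net.fire("t1")
```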

Relevance:

80.00%

Abstract:

Modeling ERP software means capturing the information necessary for supporting enterprise management. This modeling process goes down through different abstraction layers, from enterprise modeling to code generation. Thus ERP is the kind of system where enterprise engineering undoubtedly has, or should have, a strong influence. For the case of Free/Open Source ERP, the lack of proper modeling methods and tools can jeopardize the advantage brought by source code availability. Therefore, the aim of this paper is to present a development process proposal for the Open Source ERP5 system. The proposed development process aims to cover different abstraction levels, taking into account well established standards and common practices, as well as platform issues. Its main goal is to provide an adaptable meta-process to ERP5 adopters. © 2006 IEEE.

Relevance:

80.00%

Abstract:

The design and implementation of an ERP system involves capturing the information necessary for implementing the system's structure and behavior that support enterprise management. This process should start at the enterprise modeling level and finish at the coding level, going down through different abstraction layers. In the case of Free/Open Source ERP, the lack of proper modeling methods and tools jeopardizes the advantages of source code availability. Moreover, the distributed, decentralized, source-code-driven development culture of open source communities generally does not rely on methods for modeling the higher abstraction levels necessary for an ERP solution. The aim of this paper is to present a model-driven development process for the open source ERP5 system. The proposed process covers the different abstraction levels involved, taking into account well-established standards and common practices, as well as new approaches, by supplying Enterprise, Requirements, Analysis, Design, and Implementation workflows. Copyright 2008 ACM.
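The core idea of a model-driven transformation, carrying an artifact from one abstraction level down to the next, can be sketched as follows; the entity, attribute names, and transformation rule are purely illustrative and do not reflect the actual ERP5 workflows or API:

```python
# Sketch of a model-to-code transformation: an analysis-level entity model is
# transformed into implementation-level source. All names are hypothetical.

analysis_model = {  # higher abstraction level: an analysis-phase entity
    "entity": "Invoice",
    "attributes": ["number", "customer", "total"],
}

def to_implementation(model):
    """Transformation rule: analysis entity -> implementation-level class source."""
    attrs = model["attributes"]
    lines = [f"class {model['entity']}:"]
    lines.append(f"    def __init__(self, {', '.join(attrs)}):")
    lines += [f"        self.{a} = {a}" for a in attrs]
    return "\n".join(lines)

source = to_implementation(analysis_model)
namespace = {}
exec(source, namespace)  # materialize the generated implementation-level class
inv = namespace["Invoice"]("2024-001", "ACME", 99.9)
```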

Relevance:

80.00%

Abstract:

Goal Programming (GP) is an important analytical approach devised to solve many real-world problems. The first GP model is known as Weighted Goal Programming (WGP). However, Multi-Choice Aspiration Level (MCAL) problems cannot be solved by current GP techniques. In this paper, we propose a Multi-Choice Mixed-Integer Goal Programming model (MC-MIGP) for the aggregate production planning of a Brazilian sugar and ethanol milling company. The MC-MIGP model is based on traditional selection and process methods for the design of lots, representing the production system of sugar, alcohol, molasses, and derivatives. The research covers decisions on the agricultural and cutting stages, sugarcane loading and transportation by suppliers and, especially, energy cogeneration decisions, that is, the choice of production process, including storage and distribution stages. The MC-MIGP model allows decision makers to set multiple aspiration levels for their problems, addressing both "the more/higher, the better" and "the less/lower, the better" aspiration types. The proposed model was applied to real problems at a Brazilian sugar and ethanol mill, producing interesting results that are reported and commented upon herein. A comparison between the MC-MIGP and WGP models was also made using these real cases. © 2013 Elsevier Inc.
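A minimal sketch of the weighted-GP idea combined with a multi-choice aspiration level, solved here by brute-force enumeration over a single decision variable with made-up goals (far simpler than the paper's mixed-integer mill model):

```python
# Toy Weighted Goal Programming with a multi-choice aspiration level (MCAL).
# One decision variable x and two goals; all numbers are illustrative.

def wgp_objective(x, goals):
    """Sum of weighted deviations from each goal's aspiration level."""
    total = 0.0
    for target, weight, sense in goals:
        dev = max(0.0, target - x) if sense == "more_is_better" else max(0.0, x - target)
        total += weight * dev
    return total

def solve_mcal(candidates_x, aspiration_choices, fixed_goals):
    """Pick the aspiration level AND the decision that minimize weighted deviations."""
    best = None
    for target in aspiration_choices:  # the multi-choice aspiration level
        goals = fixed_goals + [(target, 1.0, "more_is_better")]
        for x in candidates_x:
            score = wgp_objective(x, goals)
            if best is None or score < best[0]:
                best = (score, x, target)
    return best

# Production goal: choose one aspiration level from {90, 100, 110} (more is
# better); cost goal: do not exceed 105 (less is better, weight 2).
score, x, target = solve_mcal(
    candidates_x=list(range(80, 121)),
    aspiration_choices=[90, 100, 110],
    fixed_goals=[(105, 2.0, "less_is_better")],
)
```

Ties break toward the first candidate examined; a real MC-MIGP would instead encode the aspiration choice with binary variables inside a mixed-integer program.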

Relevance:

80.00%

Abstract:

Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)

Relevance:

80.00%

Abstract:

Pós-graduação em Ciências Cartográficas - FCT

Relevance:

80.00%

Abstract:

Complex network analysis has turned out to be a very promising field of research, as testified by the many projects and works that span different fields. Such analyses have usually focused on characterizing a single aspect of the system, and a study that considers the many informative axes along which a network evolves is lacking. We propose a new multidimensional analysis that is able to inspect networks along the two most important dimensions, space and time. To achieve this goal, we studied each dimension singularly and investigated how the variation of the constituting parameters drives changes in the network as a whole. Focusing on the space dimension, we characterized spatial alteration in terms of abstraction levels. We propose a novel algorithm that, by applying a fuzziness function, can reconstruct networks at different levels of detail. We verified that statistical indicators depend strongly on the granularity with which a system is described and on the class of network. Keeping the space axis fixed, we then isolated the dynamics behind the network evolution process. We identified new mechanisms that trigger social network utilization and spread the adoption of new communities. We formalized this enhanced social network evolution by introducing special nodes (called sirens) that, thanks to their ability to attract new links, construct efficient connection patterns. We simulated the dynamics of the system under three well-known growth models. Applying this framework to real and synthetic networks, we showed that sirens, even when used for a limited time span, effectively shrink the time needed to bring a network to a mature state. To give our findings a concrete context, we formalized the cost of setting up such an enhancement and provided the best combinations of system parameters, such as the number of sirens, their time span of utilization, and their attractiveness.
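The space-dimension analysis can be caricatured as follows; the grid-based merge below stands in for the thesis's fuzziness function, with made-up node coordinates:

```python
# Sketch: rebuild a spatial network at coarser abstraction levels by merging
# nodes that fall into the same grid cell, then watch a statistic (node and
# edge counts) change with granularity. Coordinates are illustrative.

def coarse_grain(positions, edges, cell):
    """Merge nodes sharing a grid cell of side `cell`; project edges onto clusters."""
    cluster = {n: (int(x // cell), int(y // cell)) for n, (x, y) in positions.items()}
    coarse_edges = {tuple(sorted((cluster[u], cluster[v])))
                    for u, v in edges if cluster[u] != cluster[v]}
    return set(cluster.values()), coarse_edges

positions = {"a": (0.1, 0.1), "b": (0.2, 0.2), "c": (3.0, 3.0), "d": (3.1, 2.9)}
edges = [("a", "b"), ("b", "c"), ("c", "d")]

nodes_fine, edges_fine = coarse_grain(positions, edges, cell=1.0)    # fine view
nodes_coarse, edges_coarse = coarse_grain(positions, edges, cell=4.0)  # coarse view
```

At the fine granularity the network keeps several clusters and inter-cluster edges; at the coarse one everything collapses into a single node, illustrating why statistical indicators depend on the level of detail chosen.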

Relevance:

80.00%

Abstract:

In this study, the population structure and connectivity of Mediterranean and Atlantic Raja clavata (L., 1758) were investigated by analyzing the genetic variation of six population samples (N = 144) at seven nuclear microsatellite loci. The genetic dataset was generated by selecting population samples available in the tissue databases of the GenoDREAM laboratory (University of Bologna) and of the Department of Life Sciences and Environment (University of Cagliari), all collected during past scientific surveys (MEDITS, GRUND) at different geographical locations in the Mediterranean basin and the North-East Atlantic: the North Sea, the Sardinian coasts, the Tuscan coasts, and Cyprus. This thesis aims to estimate the genetic diversity and differentiation among the six geographical samples and, in particular, to assess the presence of any barrier (geographic, hydrogeological, or biological) to gene flow by evaluating both genetic diversity (nucleotide diversity, observed and expected heterozygosity, Hardy-Weinberg equilibrium analysis) and population differentiation (Fst estimates, population structure analysis). In addition to the molecular analysis, quantitative representation and statistical analysis of individual body shape were performed using geometric morphometric methods and statistical tests. Geometric coordinates called landmarks were fixed on 158 individuals belonging to two population samples of Raja clavata and on population samples of closely related species, Raja straeleni (a cryptic sibling species) and Raja asterias, to assess significant morphological differences at multiple taxonomic levels. The results obtained from the analysis of the microsatellite dataset suggested a geographic and genetic separation between populations from the Central-Western and Eastern Mediterranean basins. 
Furthermore, the analysis showed no separation between the geographic samples from the North Atlantic Ocean and the Central-Western Mediterranean, grouping them into a single panmictic population. The landmark-based geometric morphometric results showed significant differences in body shape, able to discriminate taxa at the tested levels (from species to populations).
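A minimal sketch of the kind of differentiation statistic behind these results: Wright's Fst computed from allele frequencies at one biallelic locus (the frequencies below are illustrative, not the thesis data):

```python
# Fst = (Ht - Hs) / Ht, with H = 2p(1-p) the expected heterozygosity of a
# biallelic locus. Subpopulation allele frequencies are made up for illustration.

def expected_het(p):
    """Expected heterozygosity under Hardy-Weinberg for allele frequency p."""
    return 2.0 * p * (1.0 - p)

def fst(subpop_freqs):
    """Wright's Fst from per-subpopulation allele frequencies at one locus."""
    hs = sum(expected_het(p) for p in subpop_freqs) / len(subpop_freqs)
    p_bar = sum(subpop_freqs) / len(subpop_freqs)
    ht = expected_het(p_bar)
    return (ht - hs) / ht if ht else 0.0

# Identical frequencies -> no differentiation; divergent ones -> high Fst,
# the pattern separating Central-Western from Eastern Mediterranean samples.
no_structure = fst([0.5, 0.5])
strong_structure = fst([0.1, 0.9])
```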

Relevance:

80.00%

Abstract:

The skeletal muscle phenotype is subject to considerable malleability depending on use. Low-intensity endurance-type exercise leads to qualitative changes of muscle tissue characterized mainly by an increase in structures supporting oxygen delivery and consumption. High-load strength-type exercise leads to growth of muscle fibers dominated by an increase in contractile proteins. In low-intensity exercise, stress-induced signaling leads to transcriptional upregulation of a multitude of genes, with Ca2+ signaling and the energy status of the muscle cells, sensed through AMPK, being major input determinants. Several parallel signaling pathways converge on the transcriptional co-activator PGC-1α, perceived as the coordinator of much of the transcriptional and posttranscriptional processes. High-load training is dominated by a translational upregulation controlled by mTOR, influenced mainly by an insulin/growth-factor-dependent signaling cascade as well as mechanical and nutritional cues. Exercise-induced muscle growth is further supported by DNA recruitment through the activation and incorporation of satellite cells. Crucial nodes of the strength and endurance exercise signaling networks are shared, making these training modes interdependent. The robustness of exercise-related signaling is a consequence of multiple parallel pathways with feedback and feed-forward control over single and multiple signaling levels. We currently have a good descriptive understanding of the molecular mechanisms controlling muscle phenotypic plasticity. We lack understanding of the precise interactions among the partners of these signaling networks and, accordingly, models to predict the signaling outcome of entire networks. A major current challenge is to verify and apply the knowledge gained in model systems to predict human phenotypic plasticity.

Relevance:

80.00%

Abstract:

This article gives an overview of the methods used in the low-level analysis of gene expression data generated using DNA microarrays. This type of experiment makes it possible to determine relative levels of nucleic acid abundance in a set of tissues or cell populations for thousands of transcripts or loci simultaneously. Careful statistical design and analysis are essential to improve the efficiency and reliability of microarray experiments throughout the data acquisition and analysis process. This includes the design of probes, the experimental design, the image analysis of scanned microarray images, the normalization of fluorescence intensities, the assessment of the quality of microarray data and the incorporation of quality information in subsequent analyses, the combination of information across arrays and across sets of experiments, the discovery and recognition of patterns in expression at the single-gene and multiple-gene levels, and the assessment of the significance of these findings, considering the fact that there is a lot of noise, and thus many random features, in the data. For all of these components, access to a flexible and efficient statistical computing environment is essential.
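One of the low-level steps surveyed, the normalization of fluorescence intensities, can be sketched as follows; median centering of per-array log-ratios is used here as a simplified stand-in for the loess-type normalization common in practice, on made-up intensities:

```python
# Sketch of two-channel normalization: compute M = log2(R/G) per spot and
# re-center so the array's median M is zero (removes a constant dye bias).
import math

def median(xs):
    s = sorted(xs)
    n = len(s)
    return s[n // 2] if n % 2 else 0.5 * (s[n // 2 - 1] + s[n // 2])

def normalize_log_ratios(red, green):
    """Per-spot log-ratios, re-centered so the array's median log-ratio is zero."""
    m = [math.log2(r / g) for r, g in zip(red, green)]
    shift = median(m)
    return [v - shift for v in m]

# Hypothetical intensities for four spots on one array.
red   = [200.0, 400.0, 800.0, 1600.0]
green = [100.0, 100.0, 200.0, 200.0]
m_norm = normalize_log_ratios(red, green)
```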

Relevance:

80.00%

Abstract:

The response of phytoplankton to increasing CO2 in seawater, in terms of physiology and ecology, is key to predicting changes in marine ecosystems. However, the responses of natural plankton communities, especially in the open ocean, to higher CO2 levels have not been fully examined. We conducted CO2 manipulation experiments in the Bering Sea and the central subarctic Pacific, known as high-nutrient, low-chlorophyll regions, in summer 2007 to investigate the response of organic matter production in iron-deficient plankton communities to CO2 increases. During 14-day microcosm incubations of surface waters with natural plankton assemblages under multiple pCO2 levels, the dynamics of particulate organic carbon (POC) and nitrogen (PN), and of dissolved organic carbon (DOC) and phosphorus (DOP), were examined together with the plankton community compositions. At the Bering site, net production of POC, PN, and DOP relative to net chlorophyll-a production decreased with increasing pCO2. While the net produced POC:PN ratio did not show any CO2-related variation, the net produced DOC:DOP ratio increased with increasing pCO2. In contrast, no apparent trends in these parameters were observed at the Pacific site. The contrasting results were probably due to the different plankton community compositions at the two sites, with plankton biomass dominated by large diatoms in the Bering Sea versus ultra-eukaryotes in the Pacific Ocean. We conclude that the quantity and quality of particulate and dissolved organic matter production may be altered under future elevated-CO2 environments in some iron-deficient ecosystems, while the impacts may be negligible in others.

Relevance:

80.00%

Abstract:

The work leading to this Thesis is framed within research on intermediate band solar cells (IBSCs). This solar cell concept offers the possibility of achieving high photovoltaic conversion efficiencies. Up to now, the fundamentals of operation of IBSCs have been demonstrated experimentally; however, this has only been possible at low temperatures. The intermediate band (IB) concept demands thermal decoupling between the IB and the valence and conduction bands (VB and CB, respectively). State-of-the-art IB materials exhibit too strong a thermal coupling between the IB and one of the other two bands, which prevents the proper operation of IBSCs at room temperature. In the particular case of InAs/GaAs quantum-dot (QD) IBSCs, to date the most widely studied IBSC technology, there is fast thermal carrier exchange between the IB and the CB, for two reasons: (1) a narrow (< 0.2 eV) energy gap between the IB and the CB, E_L, and (2) the existence of multiple electronic levels between them. Reason (1) also implies that the maximum achievable efficiency is below the theoretical limit of the ideal IBSC, in which E_L = 0.71 eV. In this context, our work focuses on the study of wide-bandgap QD-IBSCs. We have fabricated and experimentally investigated the first QD-IBSC prototypes in which AlGaAs or InGaP is the host material for the InAs QDs. We demonstrate an improved bandgap distribution in our wide-bandgap devices compared with the InAs/GaAs case. In particular, we have measured values of E_L higher than 0.4 eV. In the AlGaAs prototypes, the increase in E_L comes with an increase of more than 100 meV in the activation energy of thermal carrier escape. In addition, our InAs/AlGaAs devices demonstrate voltage up-conversion, i.e., the production of an open-circuit voltage larger than the photon energy (divided by the electron charge) of the incident monochromatic beam, as well as voltage preservation at room temperature under concentrated white-light illumination. We also analyze the potential of IB materials for infrared detection. We present a new IB-based infrared photodetector concept that we have called the optically triggered infrared photodetector (OTIP). This novel device is based on a new physical principle that allows the detection of infrared light to be switched ON and OFF by means of an external light. We have fabricated an OTIP based on InAs/AlGaAs QDs with which we demonstrate normal-incidence photodetection in the 2-6 µm range, optically triggered by a 590 nm light-emitting diode. 
The theoretical study of the IB-assisted detection mechanism in the OTIP leads us to question the assumption of flat quasi-Fermi levels in the space-charge region of a solar cell. Supported by device-level simulations, we prove and explain why this assumption is not valid under short-circuit and illumination conditions. We also perform new experimental studies on InAs/GaAs QD-IBSC prototypes in order to gain knowledge of as-yet unexplored aspects of the performance of these devices. Specifically, we analyze the impact of field-damping layers (FDLs) and demonstrate this technique to be efficient for avoiding tunnel carrier escape from the QDs to the host material. We analyze the relationship between tunnel escape and voltage preservation, and propose voltage-dependent quantum-efficiency measurements as a useful tool for assessing the tunneling-related limitation to the voltage preservation of QD-IBSC prototypes. Moreover, we perform temperature-dependent luminescence studies on InAs/GaAs samples and verify that the results are consistent with a split of the quasi-Fermi levels of the CB and the IB at low temperature. To contribute to the fabrication and characterization capabilities of the Solar Energy Institute of the Universidad Politécnica de Madrid (IES-UPM), we participated in the installation and start-up of a molecular beam epitaxy (MBE) reactor and in the development of a photoluminescence and electroluminescence characterization set-up. Using this MBE reactor, we have grown and characterized the first QD-IBSC fully fabricated at the IES-UPM.
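Activation energies like the one reported above are typically extracted from an Arrhenius analysis; the sketch below fits ln(rate) against 1/(kT) on synthetic data generated for Ea = 0.4 eV (not measured values from the thesis):

```python
# Sketch of activation-energy extraction: fit rate = A * exp(-Ea / kT) by
# least squares on ln(rate) vs 1/(kT). Data are synthetic, for illustration.
import math

K_B = 8.617e-5  # Boltzmann constant, eV/K

def arrhenius_fit(temps_K, rates):
    """Least-squares slope of ln(rate) vs 1/(kT); returns Ea in eV."""
    x = [1.0 / (K_B * t) for t in temps_K]
    y = [math.log(r) for r in rates]
    n = len(x)
    xm, ym = sum(x) / n, sum(y) / n
    num = sum((xi - xm) * (yi - ym) for xi, yi in zip(x, y))
    den = sum((xi - xm) ** 2 for xi in x)
    return -num / den  # slope is -Ea

# Synthetic escape rates generated from Ea = 0.4 eV and prefactor 1e9.
temps = [250.0, 275.0, 300.0, 325.0]
ea_true = 0.4
rates = [1e9 * math.exp(-ea_true / (K_B * t)) for t in temps]
ea_fit = arrhenius_fit(temps, rates)
```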

Relevance:

80.00%

Abstract:

La fiabilidad está pasando a ser el principal problema de los circuitos integrados según la tecnología desciende por debajo de los 22nm. Pequeñas imperfecciones en la fabricación de los dispositivos dan lugar ahora a importantes diferencias aleatorias en sus características eléctricas, que han de ser tenidas en cuenta durante la fase de diseño. Los nuevos procesos y materiales requeridos para la fabricación de dispositivos de dimensiones tan reducidas están dando lugar a diferentes efectos que resultan finalmente en un incremento del consumo estático, o una mayor vulnerabilidad frente a radiación. Las memorias SRAM son ya la parte más vulnerable de un sistema electrónico, no solo por representar más de la mitad del área de los SoCs y microprocesadores actuales, sino también porque las variaciones de proceso les afectan de forma crítica, donde el fallo de una única célula afecta a la memoria entera. Esta tesis aborda los diferentes retos que presenta el diseño de memorias SRAM en las tecnologías más pequeñas. En un escenario de aumento de la variabilidad, se consideran problemas como el consumo de energía, el diseño teniendo en cuenta efectos de la tecnología a bajo nivel o el endurecimiento frente a radiación. En primer lugar, dado el aumento de la variabilidad de los dispositivos pertenecientes a los nodos tecnológicos más pequeños, así como a la aparición de nuevas fuentes de variabilidad por la inclusión de nuevos dispositivos y la reducción de sus dimensiones, la precisión del modelado de dicha variabilidad es crucial. Se propone en la tesis extender el método de inyectores, que modela la variabilidad a nivel de circuito, abstrayendo sus causas físicas, añadiendo dos nuevas fuentes para modelar la pendiente sub-umbral y el DIBL, de creciente importancia en la tecnología FinFET. 
Los dos nuevos inyectores propuestos incrementan la exactitud de figuras de mérito a diferentes niveles de abstracción del diseño electrónico: a nivel de transistor, de puerta y de circuito. El error cuadrático medio al simular métricas de estabilidad y prestaciones de células SRAM se reduce un mínimo de 1,5 veces y hasta un máximo de 7,5 a la vez que la estimación de la probabilidad de fallo se mejora en varios ordenes de magnitud. El diseño para bajo consumo es una de las principales aplicaciones actuales dada la creciente importancia de los dispositivos móviles dependientes de baterías. Es igualmente necesario debido a las importantes densidades de potencia en los sistemas actuales, con el fin de reducir su disipación térmica y sus consecuencias en cuanto al envejecimiento. El método tradicional de reducir la tensión de alimentación para reducir el consumo es problemático en el caso de las memorias SRAM dado el creciente impacto de la variabilidad a bajas tensiones. Se propone el diseño de una célula que usa valores negativos en la bit-line para reducir los fallos de escritura según se reduce la tensión de alimentación principal. A pesar de usar una segunda fuente de alimentación para la tensión negativa en la bit-line, el diseño propuesto consigue reducir el consumo hasta en un 20 % comparado con una célula convencional. Una nueva métrica, el hold trip point se ha propuesto para prevenir nuevos tipos de fallo debidos al uso de tensiones negativas, así como un método alternativo para estimar la velocidad de lectura, reduciendo el número de simulaciones necesarias. Según continúa la reducción del tamaño de los dispositivos electrónicos, se incluyen nuevos mecanismos que permiten facilitar el proceso de fabricación, o alcanzar las prestaciones requeridas para cada nueva generación tecnológica. 
Se puede citar como ejemplo el estrés compresivo o extensivo aplicado a los fins en tecnologías FinFET, que altera la movilidad de los transistores fabricados a partir de dichos fins. Los efectos de estos mecanismos dependen mucho del layout, la posición de unos transistores afecta a los transistores colindantes y pudiendo ser el efecto diferente en diferentes tipos de transistores. Se propone el uso de una célula SRAM complementaria que utiliza dispositivos pMOS en los transistores de paso, así reduciendo la longitud de los fins de los transistores nMOS y alargando los de los pMOS, extendiéndolos a las células vecinas y hasta los límites de la matriz de células. Considerando los efectos del STI y estresores de SiGe, el diseño propuesto mejora los dos tipos de transistores, mejorando las prestaciones de la célula SRAM complementaria en más de un 10% para una misma probabilidad de fallo y un mismo consumo estático, sin que se requiera aumentar el área. Finalmente, la radiación ha sido un problema recurrente en la electrónica para aplicaciones espaciales, pero la reducción de las corrientes y tensiones de los dispositivos actuales los está volviendo vulnerables al ruido generado por radiación, incluso a nivel de suelo. Pese a que tecnologías como SOI o FinFET reducen la cantidad de energía colectada por el circuito durante el impacto de una partícula, las importantes variaciones de proceso en los nodos más pequeños va a afectar su inmunidad frente a la radiación. Se demuestra que los errores inducidos por radiación pueden aumentar hasta en un 40 % en el nodo de 7nm cuando se consideran las variaciones de proceso, comparado con el caso nominal. Este incremento es de una magnitud mayor que la mejora obtenida mediante el diseño de células de memoria específicamente endurecidas frente a radiación, sugiriendo que la reducción de la variabilidad representaría una mayor mejora. 
ABSTRACT Reliability is becoming the main concern on integrated circuit as the technology goes beyond 22nm. Small imperfections in the device manufacturing result now in important random differences of the devices at electrical level which must be dealt with during the design. New processes and materials, required to allow the fabrication of the extremely short devices, are making new effects appear resulting ultimately on increased static power consumption, or higher vulnerability to radiation SRAMs have become the most vulnerable part of electronic systems, not only they account for more than half of the chip area of nowadays SoCs and microprocessors, but they are critical as soon as different variation sources are regarded, with failures in a single cell making the whole memory fail. This thesis addresses the different challenges that SRAM design has in the smallest technologies. In a common scenario of increasing variability, issues like energy consumption, design aware of the technology and radiation hardening are considered. First, given the increasing magnitude of device variability in the smallest nodes, as well as new sources of variability appearing as a consequence of new devices and shortened lengths, an accurate modeling of the variability is crucial. We propose to extend the injectors method that models variability at circuit level, abstracting its physical sources, to better model sub-threshold slope and drain induced barrier lowering that are gaining importance in FinFET technology. The two new proposed injectors bring an increased accuracy of figures of merit at different abstraction levels of electronic design, at transistor, gate and circuit levels. The mean square error estimating performance and stability metrics of SRAM cells is reduced by at least 1.5 and up to 7.5 while the yield estimation is improved by orders of magnitude. Low power design is a major constraint given the high-growing market of mobile devices that run on battery. 
It is also relevant because of the increased power densities of today's systems, in order to reduce thermal dissipation and its impact on aging. The traditional approach of reducing the supply voltage to lower the energy consumption is challenging in the case of SRAMs, given the increased impact of process variations at low supply voltages. We propose a cell design that uses a negative bit-line write assist to overcome write failures as the main supply voltage is lowered. Despite using a second power source for the negative bit-line, the design achieves an energy reduction of up to 20% compared to a conventional cell. A new metric, the hold trip point, has been introduced to deal with the new failure modes of cells using a negative bit-line voltage, together with an alternative method to estimate cell speed that requires fewer simulations. With the continuous reduction of device sizes, new mechanisms need to be included to ease the fabrication process and to meet the performance targets of the successive nodes. As an example, consider the compressive or tensile strain included in FinFET technology, which alters the mobility of the transistors made out of the affected fins. The effects of these mechanisms are highly layout-dependent, with transistors being affected by their neighbors and different types of transistors being affected in different ways. We propose to use complementary SRAM cells with pMOS pass-gates in order to reduce the fin length of nMOS devices and achieve long uncut fins for the pMOS devices when the cell is included in its corresponding array. Once Shallow Trench Isolation and SiGe stressors are considered, the proposed design improves both kinds of transistors, boosting the performance of complementary SRAM cells by more than 10% for the same failure probability and static power consumption, with no area overhead.
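The rationale of the negative bit-line write assist can be sketched with a toy statistical model. This is not the thesis' cell or methodology: the strength model, variation sigmas and voltages below are hypothetical, chosen only to show why pulling the bit-line below ground recovers write margin lost to variability at low supply voltage:

```python
import random

# Illustrative write-margin model (hypothetical parameters)
VTH_NOM = 0.35        # nominal threshold voltage (V)
SIGMA_VTH = 0.04      # per-device threshold-voltage variation (V)

def write_succeeds(vdd, vbl, rng):
    """A write flips the cell when the pass-gate overpowers the pull-up pMOS.
    Simplified strength model: drive strength ~ gate overdrive (V)."""
    vth_pg = rng.gauss(VTH_NOM, SIGMA_VTH)   # pass-gate nMOS
    vth_pu = rng.gauss(VTH_NOM, SIGMA_VTH)   # pull-up pMOS
    # Driving the bit-line to vbl < 0 increases the pass-gate Vgs by |vbl|
    pg_strength = max(vdd - vbl - vth_pg, 0.0)
    pu_strength = max(vdd - vth_pu, 0.0) * 0.6   # pull-up made weaker by design
    return pg_strength > pu_strength

def write_failure_rate(vdd, vbl, n=50000, seed=1):
    rng = random.Random(seed)
    fails = sum(not write_succeeds(vdd, vbl, rng) for _ in range(n))
    return fails / n

p_no_assist = write_failure_rate(0.6, 0.0)    # conventional write at low VDD
p_neg_bl = write_failure_rate(0.6, -0.1)      # with a -100mV bit-line assist
print(f"write failure rate without assist : {p_no_assist:.4f}")
print(f"write failure rate with -100mV BL : {p_neg_bl:.4f}")
```

Even this crude model reproduces the qualitative trade-off: the assist voltage buys back several sigmas of write margin, which is why the extra power source can still yield a net energy saving when it enables a lower main supply.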
While radiation has been a traditional concern in space electronics, the small currents and voltages used in the latest nodes are making them more vulnerable to radiation-induced transient noise, even at ground level. Even if SOI or FinFET technologies reduce the amount of energy transferred from a striking particle to the circuit, the significant process variations that the smallest nodes will present will affect their radiation-hardening capabilities. We demonstrate that process variations can increase the radiation-induced error rate by up to 40% in the 7nm node compared to the nominal case. This increase is larger than the improvement achieved by radiation-hardened cells, suggesting that reducing process variations would bring a greater improvement.
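Why process variations inflate the radiation-induced error rate can be seen with a small Monte Carlo sketch. The numbers are hypothetical (not the thesis' 7nm data): a cell upsets when the charge deposited by a particle exceeds its critical charge, and because the upset probability is a convex function of the critical charge, spreading that charge across cells raises the average error rate even though the mean is unchanged:

```python
import random

# Illustrative soft-error model (hypothetical numbers)
QCRIT_NOM = 1.0     # nominal critical charge (fC)
SIGMA_Q = 0.15      # Qcrit spread induced by process variations (fC)
Q_MEAN = 0.4        # mean deposited charge per strike (fC, exponential tail)

def error_rate(vary, n=200000, seed=7):
    """Fraction of particle strikes that flip the cell (deposited Q > Qcrit)."""
    rng = random.Random(seed)
    errors = 0
    for _ in range(n):
        qcrit = rng.gauss(QCRIT_NOM, SIGMA_Q) if vary else QCRIT_NOM
        qdep = rng.expovariate(1.0 / Q_MEAN)   # deposited charge, heavy tail
        if qdep > qcrit:
            errors += 1
    return errors / n

nominal = error_rate(vary=False)
with_var = error_rate(vary=True)
print(f"error rate, nominal Qcrit   : {nominal:.4f}")
print(f"error rate, with variations : {with_var:.4f}")
print(f"relative increase           : {100 * (with_var / nominal - 1):.1f}%")
```

The weak cells in the low-Qcrit tail contribute more extra errors than the strong cells save, which is the mechanism behind the 40% figure reported in the abstract.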

Relevância:

80.00% 80.00%

Publicador:

Resumo:

SRAM-based Field-Programmable Gate Arrays (FPGAs) are built on a Static RAM (SRAM) configuration memory. They present multiple features that make them very attractive for designing complex embedded systems. First, they have a low non-recurrent engineering (NRE) cost, since the logic and routing elements are pre-implemented (the user design defines their connections). Also, unlike other FPGA technologies, they can be reconfigured (even in the field) an unlimited number of times. Moreover, Xilinx SRAM-based FPGAs support Dynamic Partial Reconfiguration (DPR), which allows the FPGA to be reconfigured without interrupting the application. Finally, they offer high logic density, high processing capability and a rich set of hard macro blocks. However, one drawback of this technology is its susceptibility to ionizing radiation, which increases with the degree of integration (smaller geometries, lower voltages and higher frequencies). This is a first-order concern for applications in highly radiative environments with high dependability requirements. This phenomenon causes long-term degradation and can also induce instantaneous faults, which can be reversible or produce irreversible damage. In SRAM-based FPGAs, radiation-induced faults can appear at two different architectural layers, which are physically overlaid on the silicon die. The Application Layer (or A-Layer) contains the user-defined hardware, and the Configuration Layer (or C-Layer) contains the configuration memory and its support circuitry. Faults in either layer can cause the system to fail, which may be more or less tolerable depending on the system's dependability requirements. In the general case, these faults must be managed in some way.
This thesis is about managing SRAM-based FPGA faults at system level, in the context of autonomous and dependable embedded systems operating in a radiative environment. The thesis focuses mainly on space applications, but the same principles can be applied to ground applications. The main differences between the two are the radiation level and the possibility of maintenance. The different techniques for A-Layer and C-Layer fault management are classified, and their implications for system dependability are analyzed. Several architectures are proposed, both for single-layer and dual-layer Fault Managers. For the latter, a novel, flexible and versatile architecture is proposed. It manages both layers concurrently in a coordinated way, and allows the redundancy level and the dependability to be balanced. In order to validate dynamic fault management techniques, two different solutions are developed. The first is a simulation framework for C-Layer Fault Managers, based on SystemC as the modeling language and event-driven simulator. This framework and its associated methodology allow the Fault Manager design space to be explored, decoupling its design from the development of the target FPGA. The framework includes models for both the FPGA C-Layer and the Fault Manager, which can interact at different abstraction levels (at configuration-frame level and at JTAG or SelectMAP physical level). The framework is configurable, scalable and versatile, and includes fault injection capabilities. Simulation results for some scenarios are presented and discussed. The second is a validation platform for Xilinx Virtex FPGA Fault Managers. The hardware platform hosts three Xilinx Virtex-4 FX12 FPGA Modules and two general-purpose 32-bit Microcontroller Unit (MCU) Modules.
The MCU Modules allow prototyping software-based C-Layer and A-Layer Fault Managers. Each FPGA Module implements an A-Layer Ethernet link (through an Ethernet switch) with one of the MCU Modules, and a C-Layer JTAG link with the other. In addition, both MCU Modules exchange commands and data over an internal UART link. As with the simulation framework, fault injection capabilities are included. Test results for some scenarios are also presented and discussed. In summary, this thesis covers the whole process, from the description of radiation-induced faults in SRAM-based FPGAs, through the identification and classification of fault management techniques and the proposal of Fault Manager architectures, to their final validation by simulation and test. Future work is mainly related to the implementation of radiation-hardened System Fault Managers. ABSTRACT SRAM-based Field-Programmable Gate Arrays (FPGAs) are built on Static RAM (SRAM) technology configuration memory. They present a number of features that make them very convenient for building complex embedded systems. First of all, they benefit from low Non-Recurrent Engineering (NRE) costs, as the logic and routing elements are pre-implemented (the user design defines their connections). Also, as opposed to other FPGA technologies, they can be reconfigured (even in the field) an unlimited number of times. Moreover, Xilinx SRAM-based FPGAs feature Dynamic Partial Reconfiguration (DPR), which allows partially reconfiguring the FPGA without disrupting the application. Finally, they feature a high logic density, high processing capability and a rich set of hard macros. However, one limitation of this technology is its susceptibility to ionizing radiation, which increases with technology scaling (smaller geometries, lower voltages and higher frequencies).
This is a first-order concern for applications in harsh radiation environments and requiring high dependability. Ionizing radiation leads to long-term degradation as well as instantaneous faults, which can in turn be reversible or produce irreversible damage. In SRAM-based FPGAs, radiation-induced faults can appear at two architectural layers, which are physically overlaid on the silicon die. The Application Layer (or A-Layer) contains the user-defined hardware, and the Configuration Layer (or C-Layer) contains the (volatile) configuration memory and its support circuitry. Faults at either layer can imply a system failure, which may be more or less tolerated depending on the dependability requirements. In the general case, such faults must be managed in some way. This thesis is about managing SRAM-based FPGA faults at system level, in the context of autonomous and dependable embedded systems operating in a radiative environment. The focus is mainly on space applications, but the same principles can be applied to ground applications. The main differences between them are the radiation level and the possibility for maintenance. The different techniques for A-Layer and C-Layer fault management are classified and their implications for system dependability are assessed. Several architectures are proposed, both for single-layer and dual-layer Fault Managers. For the latter, a novel, flexible and versatile architecture is proposed. It manages both layers concurrently in a coordinated way, and allows balancing redundancy level and dependability. For the purpose of validating dynamic fault management techniques, two different solutions are developed. The first one is a simulation framework for C-Layer Fault Managers, based on SystemC as the modeling language and event-driven simulator. This framework and its associated methodology allow exploring the Fault Manager design space, decoupling its design from the target FPGA development.
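The flavor of such an event-driven C-Layer simulation can be sketched in a few lines. This is a hypothetical Python analogue of the SystemC framework, not the framework itself: the frame count, scrub period and upset rate are invented, and the model only captures the two essential event types, radiation upsets corrupting configuration frames and a Fault Manager scrub pass repairing them:

```python
import heapq
import random

# Minimal event-driven sketch of a C-Layer fault-injection simulation
# (illustrative parameters, not those of a real Virtex device).
N_FRAMES = 1024        # configuration frames in the modeled C-Layer
SCRUB_PERIOD = 10.0    # period of one Fault Manager scrub pass (ms)
UPSET_RATE = 0.5       # mean radiation-induced upsets per ms (exponential)

def simulate(t_end=1000.0, seed=3):
    rng = random.Random(seed)
    corrupted = set()          # frames currently holding an upset
    injected = repaired = 0
    events = [(rng.expovariate(UPSET_RATE), "upset"), (SCRUB_PERIOD, "scrub")]
    heapq.heapify(events)
    while events:
        t, kind = heapq.heappop(events)
        if t > t_end:
            break
        if kind == "upset":    # radiation flips a bit in a random frame
            corrupted.add(rng.randrange(N_FRAMES))
            injected += 1
            heapq.heappush(events, (t + rng.expovariate(UPSET_RATE), "upset"))
        else:                  # scrub pass rewrites all corrupted frames
            repaired += len(corrupted)
            corrupted.clear()
            heapq.heappush(events, (t + SCRUB_PERIOD, "scrub"))
    return injected, repaired, len(corrupted)

injected, repaired, pending = simulate()
print(f"upsets injected: {injected}, repaired by scrubbing: {repaired}, "
      f"still pending: {pending}")
```

In the real framework the same event-driven structure is expressed in SystemC, and the "scrub" events become configuration-frame read-back and rewrite transactions over modeled JTAG or SelectMAP interfaces, which is what makes the design-space exploration of the Fault Manager possible before the target FPGA exists.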
The framework includes models for both the FPGA C-Layer and the Fault Manager, which can interact at different abstraction levels (at configuration frame level and at JTAG or SelectMAP physical level). The framework is configurable, scalable and versatile, and includes fault injection capabilities. Simulation results for some scenarios are presented and discussed. The second one is a validation platform for Xilinx Virtex FPGA Fault Managers. The platform hosts three Xilinx Virtex-4 FX12 FPGA Modules and two general-purpose 32-bit Microcontroller Unit (MCU) Modules. The MCU Modules allow prototyping software-based C-Layer and A-Layer Fault Managers. Each FPGA Module implements one A-Layer Ethernet link (through an Ethernet switch) with one of the MCU Modules, and one C-Layer JTAG link with the other. In addition, both MCU Modules exchange commands and data over an internal UART link. Similarly to the simulation framework, fault injection capabilities are implemented. Test results for some scenarios are also presented and discussed. In summary, this thesis covers the whole process, from describing the problem of radiation-induced faults in SRAM-based FPGAs, through identifying and classifying fault management techniques and proposing Fault Manager architectures, to finally validating them by simulation and test. The proposed future work is mainly related to the implementation of radiation-hardened System Fault Managers.