991 results for Monolithic Microwave Integrated Circuit (MMIC)
Abstract:
The thesis work concerns X-ray spectrometry for both medical and space applications and is divided into two sections. The first section addresses an X-ray spectrometric system designed to study radiological beams and is devoted to the optimization of diagnostic procedures in medicine. A parametric semi-empirical model capable of efficiently reconstructing diagnostic X-ray spectra on mid-range computers was developed and tested. In addition, different silicon diode detectors were tested as real-time detectors in order to provide a real-time evaluation of the spectrum during diagnostic procedures. This project contributes to the field by presenting an improved simulation of a realistic X-ray beam emerging from a common X-ray tube, with a complete and detailed spectrum that lends itself to further studies of added filtration, thus providing an optimized beam for different diagnostic applications in medicine. The second section describes the preliminary tests carried out on the first version of an Application Specific Integrated Circuit (ASIC) integrated with a large-area, position-sensitive Silicon Drift Detector (SDD) to be used on board future space missions. This technology has been developed for the ESA LOFT (Large Observatory for X-ray Timing) project, a medium-class space mission that the European Space Agency has been assessing since February 2011. The LOFT project was proposed as part of the Cosmic Vision Program (2015-2025).
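As a purely illustrative aside (not the semi-empirical parametric model developed in the thesis), the sketch below generates a toy diagnostic X-ray spectrum from Kramers' rule and applies Beer-Lambert attenuation for an added aluminium filter; the E^-3 scaling of the attenuation coefficient and all numeric constants are assumptions chosen only to show the kind of lightweight computation a spectrum-reconstruction model involves.

```python
import numpy as np

def kramers_spectrum(kvp=100.0, n=200):
    """Toy unfiltered bremsstrahlung spectrum (Kramers' rule):
    photon fluence proportional to (E_max - E) on a keV energy grid."""
    energies = np.linspace(1.0, kvp, n)
    fluence = np.clip(kvp - energies, 0.0, None)
    return energies, fluence

def filter_spectrum(energies, fluence, thickness_mm, mu_ref=0.3):
    """Beer-Lambert attenuation for an added aluminium filter; the E^-3
    dependence is a rough photoelectric-regime approximation, anchored at
    roughly 0.3/mm for Al at 30 keV."""
    mu = mu_ref * (30.0 / energies) ** 3        # attenuation per mm (toy model)
    return fluence * np.exp(-mu * thickness_mm)

if __name__ == "__main__":
    e, phi = kramers_spectrum(kvp=100.0)
    phi_f = filter_spectrum(e, phi, thickness_mm=2.5)
    for label, s in (("unfiltered", phi), ("2.5 mm Al filter", phi_f)):
        print(f"{label}: mean energy {np.sum(e * s) / np.sum(s):.1f} keV")
```

Running it shows the expected beam hardening: the filtered spectrum has a noticeably higher mean energy than the unfiltered one.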
Abstract:
In this report, a new automated optical test for the next generation of photonic integrated circuits (PICs) is presented through the design and assessment of a test-bed. After a brief analysis of the critical problems of current optical tests, the main test features are defined: automation and flexibility, a relaxed alignment procedure, a speed-up of the entire test, and data reliability. After studying several solutions, the test-bed components are chosen to be a lens array, a photo-detector array, and a software controller. Each device is studied and calibrated, and the spatial resolution and the robustness against interference at the photo-detector array are evaluated. The software is programmed to manage both the PIC input and the photo-detector array output, as well as the data analysis. The test is validated by analysing a state-of-the-art 16-port PIC: the waveguide locations, the current-versus-power curves, and the time-spatial power distribution are measured, as well as the optical continuity of an entire path of the PIC. Complexity, alignment tolerance, and measurement time are also discussed.
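A minimal sketch of what the controller's current-versus-power measurement loop might look like, assuming hypothetical instrument wrappers (set_pic_drive_current, read_photodetector_array) that stand in for the actual lab drivers used in the report:

```python
import random
import time

def set_pic_drive_current(port: int, current_ma: float) -> None:
    """Hypothetical stand-in for the PIC input control (e.g. a current source)."""
    time.sleep(0.001)

def read_photodetector_array(n_elements: int = 16) -> list:
    """Hypothetical stand-in returning the optical power seen by each element."""
    return [random.random() for _ in range(n_elements)]

def sweep_port(port: int, currents_ma: list) -> list:
    """Current-versus-power sweep for one output port of the PIC: drive the
    input, then read the detector element aligned with that port."""
    curve = []
    for i_ma in currents_ma:
        set_pic_drive_current(port, i_ma)
        power = read_photodetector_array()[port]
        curve.append((i_ma, power))
    return curve

if __name__ == "__main__":
    for port in range(16):                          # 16-port PIC, as tested above
        curve = sweep_port(port, [0.0, 10.0, 20.0, 30.0, 40.0])
        print(f"port {port}: P(40 mA) = {curve[-1][1]:.3f} (arbitrary units)")
```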
Abstract:
Though 3D computer graphics has seen tremendous advancement in the past two decades, most available mechanisms for 3D computer interaction are costly and targeted at industrial and virtual reality applications. Recent advances in Micro-Electro-Mechanical-System (MEMS) devices have brought forth a variety of new low-cost, low-power, miniature sensors with high accuracy, which are well suited for hand-held devices. In this work a novel design for a 3D computer game controller using inertial sensors is proposed, and a prototype device based on this design is implemented. The design incorporates MEMS accelerometers and gyroscopes from Analog Devices to measure the three components of the acceleration and angular velocity. From these sensor readings, the position and orientation of the hand-held unit can be calculated using numerical methods. The implemented prototype uses a USB 2.0-compliant interface for power and for communication with the host system. A Microchip dsPIC microcontroller is used in the design; it integrates the analog-to-digital converters, the program flash memory and the processor core on a single integrated circuit. A PC running the Microsoft Windows operating system is used as the host machine. Prototype firmware for the microcontroller is developed and tested to establish the communication between the device and the host and to perform the data acquisition and initial filtering of the sensor data. A PC front-end application with a graphical interface is developed to communicate with the device and allow real-time visualization of the acquired data.
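To illustrate the numerical methods mentioned above, here is a minimal strapdown dead-reckoning sketch: body-frame accelerometer and gyroscope samples are Euler-integrated into orientation, velocity and position. It is not the thesis firmware; the small-angle orientation update, the sample values and the absence of drift correction are simplifying assumptions.

```python
import numpy as np

GRAVITY = np.array([0.0, 0.0, -9.81])   # world-frame gravity, m/s^2

def skew(w):
    """Skew-symmetric matrix so that skew(w) @ v == np.cross(w, v)."""
    return np.array([[0.0, -w[2], w[1]],
                     [w[2], 0.0, -w[0]],
                     [-w[1], w[0], 0.0]])

def dead_reckon(accel_body, gyro_body, dt):
    """Euler integration of orientation, velocity and position from body-frame
    accelerometer and gyroscope samples (N x 3 arrays). Drift grows quickly;
    real designs add filtering and zero-velocity updates."""
    R = np.eye(3)                              # body-to-world rotation
    v = np.zeros(3)
    p = np.zeros(3)
    for a_b, w_b in zip(accel_body, gyro_body):
        R = R @ (np.eye(3) + skew(w_b) * dt)   # small-angle attitude update
        a_world = R @ a_b + GRAVITY            # rotate and remove gravity
        v = v + a_world * dt
        p = p + v * dt
    return R, v, p

if __name__ == "__main__":
    n, dt = 100, 0.01
    accel = np.tile([0.0, 0.0, 9.81], (n, 1))  # device at rest, sensing +1 g
    gyro = np.zeros((n, 3))
    print("position after 1 s at rest:", dead_reckon(accel, gyro, dt)[2])
```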
Abstract:
The dissipation of high heat flux from integrated circuit chips and the maintenance of acceptable junction temperatures in high-powered electronics require advanced cooling technologies. One such technology is two-phase cooling in microchannels under confined flow boiling conditions. In macroscale flow boiling, bubbles nucleate on the channel walls, grow, and depart from the surface. In microscale flow boiling, bubbles can fill the channel diameter before the liquid drag force has a chance to sweep them off the channel wall. As a confined bubble elongates in a microchannel, it traps thin liquid films between the heated wall and the vapor core that are subject to large temperature gradients. The thin films evaporate rapidly, sometimes faster than the incoming mass flux can replenish bulk fluid in the microchannel. When the local vapor pressure spike exceeds the inlet pressure, it forces the upstream interface to travel back into the inlet plenum and creates flow boiling instabilities. Flow boiling instabilities reduce the temperature at which critical heat flux occurs and create channel dryout. Dryout causes high surface temperatures that can destroy the electronic circuits that use two-phase micro heat exchangers for cooling. Flow boiling instability is characterized by periodic oscillation of flow regimes, which induces oscillations in fluid temperature, wall temperature, pressure drop, and mass flux. When nanofluids are used in flow boiling, the nanoparticles become deposited on the heated surface and change its thermal conductivity, roughness, capillarity, wettability, and nucleation site density. Nanoparticle deposition also affects heat transfer by changing the bubble departure diameter, the bubble departure frequency, and the evaporation of the micro- and macrolayer beneath the growing bubbles. Flow boiling was investigated in this study using degassed, deionized water and 0.001 vol% aluminum oxide nanofluids in a single rectangular brass microchannel with a hydraulic diameter of 229 µm, for one inlet fluid temperature of 63°C and two constant flow rates of 0.41 ml/min and 0.82 ml/min. The power input was adjusted for two average surface temperatures of 103°C and 119°C at each flow rate. High-speed images were taken periodically for water and nanofluid flow boiling after durations of 25, 75, and 125 minutes from the start of flow. The change in regime timing revealed the effect of nanoparticle suspension and deposition on the Onset of Nucleate Boiling (ONB) and the Onset of Bubble Elongation (OBE). Cycle durations and bubble frequencies are reported for different nanofluid flow boiling durations. The addition of nanoparticles was found to stabilize bubble nucleation and growth and to limit the recession rate of the upstream and downstream interfaces, mitigating the spreading of dry spots and elongating the thin film regions to increase thin film evaporation.
Abstract:
OBJECTIVE The purpose of this study was to investigate the feasibility of microdose CT using a dose comparable to that of conventional chest radiographs in two planes including dual-energy subtraction for lung nodule assessment. MATERIALS AND METHODS We investigated 65 chest phantoms with 141 lung nodules, using an anthropomorphic chest phantom with artificial lung nodules. Microdose CT parameters were 80 kV and 6 mAs, with a pitch of 2.2. Iterative reconstruction algorithms and an integrated circuit detector system (Stellar, Siemens Healthcare) were applied for maximum dose reduction. Maximum intensity projections (MIPs) were reconstructed. Chest radiographs were acquired in two projections with bone suppression. Four blinded radiologists interpreted the images in random order. RESULTS A soft-tissue CT kernel (I30f) delivered better sensitivities in a pilot study than a hard kernel (I70f), with respective mean (SD) sensitivities of 91.1% ± 2.2% versus 85.6% ± 5.6% (p = 0.041). Nodule size was measured accurately for all kernels. Mean clustered nodule sensitivity with chest radiography was 45.7% ± 8.1% (with bone suppression, 46.1% ± 8%; p = 0.94); for microdose CT, nodule sensitivity was 83.6% ± 9% without MIP (with additional MIP, 92.5% ± 6%; p < 10^-3). Individual sensitivities of microdose CT for readers 1, 2, 3, and 4 were 84.3%, 90.7%, 68.6%, and 45.0%, respectively. Sensitivities with chest radiography for readers 1, 2, 3, and 4 were 42.9%, 58.6%, 36.4%, and 90.7%, respectively. In the per-phantom analysis, respective sensitivities of microdose CT versus chest radiography were 96.2% and 75% (p < 10^-6). The effective dose for chest radiography including dual-energy subtraction was 0.242 mSv; for microdose CT, the applied dose was 0.1323 mSv. CONCLUSION Microdose CT is better than the combination of chest radiography and dual-energy subtraction for the detection of solid nodules between 5 and 12 mm at a lower dose level of 0.13 mSv. Soft-tissue kernels allow better sensitivities. These preliminary results indicate that microdose CT has the potential to replace conventional chest radiography for lung nodule detection.
Abstract:
SRAM-based FPGAs are sensitive to radiation effects. Soft errors can appear and accumulate, potentially defeating mitigation strategies deployed at the application layer. Therefore, configuration memory scrubbing is required to improve the radiation tolerance of such FPGAs in space applications. Virtex FPGAs allow runtime scrubbing by means of dynamic partial reconfiguration. Even with scrubbing, intra-FPGA TMR systems are subject to common-mode errors affecting more than one design domain. This is solved in inter-FPGA TMR systems at the expense of higher cost, power and mass. In this context, a self-reference scrubber for a device-level TMR system based on Xilinx Virtex FPGAs is presented. This scrubber allows fast SEU/MBU detection and correction by peer frame comparison, without needing access to a golden configuration memory.
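A minimal sketch of the peer-frame majority-voting idea, assuming hypothetical read_frame/write_frame helpers that stand in for configuration readback and partial reconfiguration (they are not the Xilinx API); here they operate on a simulated configuration memory so the example runs on its own:

```python
# Self-reference scrubbing by peer frame comparison in a 3-device TMR system.
N_FPGAS, FRAME_BYTES, N_FRAMES = 3, 8, 4
config_mem = [[bytearray(FRAME_BYTES) for _ in range(N_FRAMES)]
              for _ in range(N_FPGAS)]

def read_frame(fpga: int, addr: int) -> bytes:
    """Hypothetical stand-in for configuration readback of one frame."""
    return bytes(config_mem[fpga][addr])

def write_frame(fpga: int, addr: int, data: bytes) -> None:
    """Hypothetical stand-in for partial reconfiguration of one frame."""
    config_mem[fpga][addr][:] = data

def scrub_frame(addr: int) -> None:
    """Bitwise 2-of-3 majority vote across the peer devices; any disagreeing
    copy is rewritten in place, so no golden memory access is needed."""
    a, b, c = (read_frame(i, addr) for i in range(N_FPGAS))
    voted = bytes((x & y) | (x & z) | (y & z) for x, y, z in zip(a, b, c))
    for i, frame in enumerate((a, b, c)):
        if frame != voted:
            write_frame(i, addr, voted)      # corrects the SEU/MBU in device i

if __name__ == "__main__":
    config_mem[1][2][0] ^= 0b00010000        # inject a single-bit upset
    scrub_frame(2)
    assert read_frame(1, 2) == read_frame(0, 2)
    print("upset corrected")
```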
Abstract:
Coupled device and process simulation tools, collectively known as technology computer-aided design (TCAD), have been used in the integrated circuit industry for over 30 years. These tools allow researchers to quickly converge on optimized device designs and manufacturing processes with minimal experimental expenditure. The PV industry has been slower to adopt these tools, but is quickly developing competency in using them. This paper introduces a predictive defect engineering paradigm and simulation tool, while demonstrating its effectiveness at increasing the performance and throughput of current industrial processes. The impurity-to-efficiency (I2E) simulator is a coupled process and device simulation tool that links wafer material purity, processing parameters and cell design to device performance. The tool has been validated with experimental data and used successfully with partners in industry. The simulator has also been deployed as a free web-accessible applet, which is available for use by the industrial and academic communities.
Abstract:
The semiconductor market is saturated with similar products and with distributors offering a similar portfolio of services. Co-creation processes, in which the customer collaborates in the definition and development of the product and provides information about its utility, performance and perceived value, resulting in a product that solves their real needs, are becoming a step forward in the differentiation and expansion of the value chain. The semiconductor design and manufacturing process is quite complex, requires increasingly large investments and demands complete solutions; it requires an ecosystem that supports the development of the electronic equipment based on those semiconductors. The ease of dialogue and information sharing provided by the internet, web 2.0-based tools, and cloud services and applications favours the generation of ideas and the development and evaluation of products, and enables interaction among the various co-creators. To start a co-creation process, adequate methods and tools are required to interact with the participants and exchange experiences, processes are needed to integrate co-creation into the operations of the company, and an organization and culture that support and promote the process must be developed. Among the most effective methods are netnography, which studies the conversations of communities on the internet; collaboration with lead users, who are ahead of the market and expect a great benefit from the satisfaction of their needs or desires; innovation studies, which allow users to define and often create their own solution; and crowdsourcing, an open call to the community to solve challenges in exchange for some kind of reward. The specialization of subcontractors in the development and manufacture of semiconductors facilitates open innovation through collaboration with different entities in the different phases of the development of the semiconductor and its ecosystem. Co-creation is currently used in the semiconductor sector to detect ideas for designs and applications, often through innovation contests. Technical support and the evaluation of semiconductors are frequently the result of collaboration between members of a community fostered and supported by the manufacturers of the product. The EBVchips program gives small and medium-sized companies access to the co-creation of semiconductors with the manufacturers, in a process coordinated and sponsored by the distributor EBV. Configurable semiconductors such as FPGAs are another example of co-creation, whereby the manufacturer provides the integrated circuit and the development environment and the customers create the final product by defining its features and functionality. This process is enriched with functional design blocks, IP cores, which are often created by the user community.
Abstract:
The following work is a historical and scientific overview of the fundamental importance of the emergence of the microchip, or integrated circuit. The work consisted of a bibliographic study of web content, encyclopedias and books. It contains a study of the transistor, the component that paved the way for the integrated circuit and one of the most important inventions of the 20th century, together with a short immersion in the historical period in which the transistor appeared. As with the transistor, a study of the integrated circuit is carried out, in this case in greater depth, since it is the central subject of this final project; for this component a more exhaustive account of its manufacturing processes and materials is provided, and the historical and social context of the time is also described. To conclude, a brief review of application examples of the integrated circuit is given, highlighting the technological revolution that the microchip brought about.
Abstract:
This work deals with the application of error-detecting and error-correcting codes to the design of fault-tolerant computers, presenting optimal detection and correction strategies for several subsystems. First, the need for fault-tolerance techniques is justified. Next, forecasts of the evolution of integration technology are made, together with a classification of the faults occurring in integrated circuits. Starting from a compilation and review of coding theory, a theoretical development is carried out whose application allows some of these codes to be made closed with respect to the elementary operations executed in a computer. Optimal error detection and correction strategies are then proposed for the most important subsystems, culminating in the design, construction and testing of a memory unit and a data-processing unit with extensive error detection and correction capabilities.
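As an illustration of the kind of code this work builds on (though simpler than codes made closed under arithmetic operations), the sketch below implements a systematic Hamming(7,4) single-error-correcting code of the type commonly applied to memory words:

```python
import numpy as np

G = np.array([[1, 0, 0, 0, 1, 1, 0],   # generator matrix, systematic form [I | P]
              [0, 1, 0, 0, 1, 0, 1],
              [0, 0, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])
H = np.array([[1, 1, 0, 1, 1, 0, 0],   # parity-check matrix [P^T | I]
              [1, 0, 1, 1, 0, 1, 0],
              [0, 1, 1, 1, 0, 0, 1]])

def encode(data4):
    """Encode 4 data bits into a 7-bit codeword."""
    return (np.array(data4) @ G) % 2

def correct(word7):
    """Correct at most one flipped bit using the syndrome."""
    syndrome = (H @ word7) % 2
    if syndrome.any():                                  # non-zero syndrome: locate the bit
        col = np.where((H.T == syndrome).all(axis=1))[0][0]
        word7 = word7.copy()
        word7[col] ^= 1
    return word7

if __name__ == "__main__":
    cw = encode([1, 0, 1, 1])
    cw_err = cw.copy()
    cw_err[5] ^= 1                                      # single-bit fault in the stored word
    assert (correct(cw_err) == cw).all()
    print("single error corrected")
```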
Abstract:
Nowadays, integrated circuit reliability is challenged by both variability and working conditions. Environmental radiation has become a major issue when ensuring correct circuit behavior. The required irradiation campaigns and the subsequent analysis performed on circuit boards are expensive in both funds and time. The lack of tools supporting pre-manufacturing radiation-hardness analysis hinders circuit designers' tasks. This paper describes an extensively customizable simulation tool for the characterization of radiation effects on electronic systems. The proposed tool can produce an in-depth analysis of a complete circuit in almost any kind of radiation environment in affordable computation times.
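A toy Monte Carlo fault-injection loop of the kind such a simulator automates: flip one state bit per simulated particle strike and check whether the observed output diverges from a fault-free golden run. The 4-bit counter model and the single observed output pin are illustrative assumptions, not part of the paper's tool.

```python
import random

def golden_run(n_cycles):
    """Fault-free reference: a 4-bit counter whose LSB is the observed output."""
    state = 0
    for _ in range(n_cycles):
        state = (state + 1) & 0xF
    return state & 0x1

def run_with_upset(n_cycles, strike_cycle, bit):
    """Same circuit with one single-event upset injected in a flip-flop."""
    state = 0
    for cycle in range(n_cycles):
        if cycle == strike_cycle:
            state ^= 1 << bit
        state = (state + 1) & 0xF
    return state & 0x1

def error_rate(n_trials=10_000, n_cycles=32):
    golden = golden_run(n_cycles)
    failures = sum(
        run_with_upset(n_cycles, random.randrange(n_cycles),
                       random.randrange(4)) != golden
        for _ in range(n_trials)
    )
    return failures / n_trials          # upsets in bits 1-3 are logically masked

if __name__ == "__main__":
    print(f"observed failure probability: {error_rate():.3f}")
```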
Design and Simulation of Deep Nanometer SRAM Cells under Energy, Mismatch, and Radiation Constraints
Abstract:
Reliability is becoming the main concern in integrated circuits as technology scales below 22 nm. Small imperfections in device manufacturing now result in important random differences in the electrical characteristics of the devices, which must be dealt with during design. The new processes and materials required to fabricate such extremely small devices are giving rise to new effects that ultimately result in increased static power consumption or a higher vulnerability to radiation. SRAMs have become the most vulnerable part of electronic systems: not only do they account for more than half of the chip area of today's SoCs and microprocessors, but they are also critically affected by process variations, where the failure of a single cell makes the whole memory fail. This thesis addresses the different challenges of SRAM design in the smallest technologies. In a scenario of increasing variability, issues such as energy consumption, technology-aware design and radiation hardening are considered.

First, given the increasing magnitude of device variability in the smallest nodes, as well as the new sources of variability that appear with new devices and shortened dimensions, accurate modeling of this variability is crucial. We propose to extend the injectors method, which models variability at circuit level while abstracting its physical sources, with two new sources that model the sub-threshold slope and drain-induced barrier lowering (DIBL), both of growing importance in FinFET technology. The two new injectors increase the accuracy of figures of merit at different abstraction levels of electronic design: transistor, gate and circuit level. The mean square error when estimating performance and stability metrics of SRAM cells is reduced by a factor of at least 1.5 and up to 7.5, while the failure probability estimation is improved by several orders of magnitude.

Low-power design is a major constraint given the fast-growing market of battery-powered mobile devices. It is equally necessary because of the high power densities of today's systems, in order to reduce thermal dissipation and its impact on aging. The traditional approach of reducing the supply voltage to lower energy consumption is challenging in the case of SRAMs, given the increased impact of process variations at low supply voltages. We propose a cell design that uses negative bit-line write assist to overcome write failures as the main supply voltage is lowered. Despite using a second power source for the negative bit-line voltage, the proposed design achieves an energy reduction of up to 20% compared with a conventional cell. A new metric, the hold trip point, is introduced to deal with new failure modes of cells using a negative bit-line voltage, together with an alternative method to estimate cell speed that requires fewer simulations.

As device dimensions continue to shrink, new mechanisms are included to ease the fabrication process or to meet the performance targets of each successive technology node. One example is the compressive or tensile strain applied to the fins in FinFET technologies, which alters the mobility of the transistors built from those fins. The effects of these mechanisms are strongly layout dependent: transistors are affected by their neighbors, and different transistor types are affected in different ways. We propose a complementary SRAM cell that uses pMOS pass-gate transistors, which shortens the fins of the nMOS devices and lengthens those of the pMOS devices, extending them into the neighboring cells and up to the limits of the cell array. Once shallow trench isolation (STI) and SiGe stressors are considered, the proposed design improves both types of transistor, boosting the performance of the complementary SRAM cell by more than 10% for the same failure probability and static power consumption, with no area overhead.

Finally, while radiation has been a recurring concern in space electronics, the small currents and voltages of current devices are making them vulnerable to radiation-induced transient noise, even at ground level. Although SOI and FinFET technologies reduce the amount of energy collected by the circuit during a particle strike, the large process variations of the smallest nodes will affect their radiation immunity. We demonstrate that process variations can increase the radiation-induced error rate by up to 40% in the 7 nm node compared with the nominal case. This increase is larger than the improvement achieved by radiation-hardened memory cell designs, suggesting that reducing variability would bring a greater improvement.
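A heavily simplified sketch of the Monte Carlo flavour of the injectors idea: each transistor of a 6T cell receives a random threshold-voltage shift, a scalar stability figure is evaluated through an assumed linear sensitivity vector, and the failure probability is the fraction of samples below the limit. All numbers and the linear cell model are illustrative placeholders, not the thesis models.

```python
import numpy as np

rng = np.random.default_rng(0)
SIGMA_VTH = 0.030        # 30 mV local Vth sigma per device (illustrative)
SNM_NOMINAL = 0.120      # nominal stability margin, volts (illustrative)
# assumed linearized sensitivity of the margin to each of the 6 injectors
SENSITIVITY = np.array([-0.6, -0.6, 0.4, 0.4, -0.3, -0.3])

def sample_margin(n_samples):
    """Inject one random Vth shift per transistor and evaluate the margin."""
    dvth = rng.normal(0.0, SIGMA_VTH, size=(n_samples, 6))
    return SNM_NOMINAL + dvth @ SENSITIVITY

def failure_probability(n_samples=1_000_000, limit=0.0):
    """Plain Monte Carlo; realistic cell yields need importance sampling
    because the interesting failure probabilities sit far in the tail."""
    return float(np.mean(sample_margin(n_samples) <= limit))

if __name__ == "__main__":
    print(f"estimated cell failure probability: {failure_probability():.2e}")
```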
A methodology to analyze, design and implement very fast and robust controls of Buck-type converters
Abstract:
Modern digital electronics present a challenge to the designers of power systems. Increasingly high-performance microprocessors, FPGAs (Field Programmable Gate Arrays) and ASICs (Application-Specific Integrated Circuits) require power supplies that comply with very demanding static and dynamic requirements. Specifically, these power supplies are low-voltage/high-current DC-DC converters that need to be designed to exhibit low output-voltage ripple and low output-voltage deviation under high slew-rate load transients. Additionally, depending on the application, other requirements need to be met, such as providing the load with Dynamic Voltage Scaling (DVS), where the converter needs to change the output voltage as fast as possible without overshoot, or Adaptive Voltage Positioning (AVP), where the output voltage is slightly reduced as the output power increases. From the point of view of industry, the figures of merit of these converters are cost, efficiency and size/weight. Ideally, industry needs a converter that is cheaper, more efficient and smaller, and that can still meet the dynamic requirements of the application. In this context, several approaches to improve the figures of merit of these power supplies are followed in industry and academia, such as improving the topology of the converter, improving the semiconductor technology and improving the control. Indeed, the control is a fundamental part of these applications, since a very fast control makes it easier for a given topology to comply with the strict dynamic requirements and, consequently, gives the designer a larger margin of freedom to improve the cost, efficiency and/or size of the power supply. This thesis investigates how to design and implement very fast controls for the Buck converter. It proves that sensing the output voltage is all that is needed to achieve an almost time-optimal response, and a unified design guideline for controls that only sense the output voltage is proposed. Then, to ensure robustness in very fast controls, a very accurate modeling and stability analysis of DC-DC converters is proposed that takes into account sensing networks and critical parasitic elements. Using this modeling approach, an optimization algorithm that takes into account component tolerances and distorted measurements is also proposed. With this algorithm, state-of-the-art very fast analog controls are compared, and their ability to achieve a fast dynamic response is ranked as a function of the output capacitor. Additionally, a technique to improve the dynamic response of controllers is proposed. All the proposals are corroborated by extensive simulations and experimental prototypes. Overall, this thesis serves as a methodology for engineers to design and implement fast and robust controls for Buck-type converters.
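For illustration, the sketch below simulates a switched Buck model under a hysteretic comparator that senses only the output voltage, one simple member of the output-voltage-only control family discussed above (not the near time-optimal controllers of the thesis). Component values, the output-capacitor ESR and the hysteresis band are assumptions.

```python
VIN, VREF = 12.0, 1.2            # input voltage and regulation target, volts
L, C, ESR = 1e-6, 200e-6, 5e-3   # inductor, output capacitor and its ESR
HYST, DT = 5e-3, 10e-9           # comparator hysteresis band and time step

def simulate(n_steps=40_000):
    r_load0 = 1.2
    il, vc, switch = VREF / r_load0, VREF, 0            # start near steady state
    vo_log = []
    for k in range(n_steps):
        r_load = r_load0 if k < n_steps // 2 else 0.12  # 1 A -> 10 A load step
        vo = (vc + ESR * il) / (1.0 + ESR / r_load)     # cap voltage plus ESR drop
        if vo < VREF - HYST:
            switch = 1                                  # output-voltage comparator only
        elif vo > VREF + HYST:
            switch = 0
        il += (switch * VIN - vo) / L * DT              # inductor dynamics
        vc += (il - vo / r_load) / C * DT               # capacitor dynamics
        vo_log.append(vo)
    return vo_log

if __name__ == "__main__":
    vo = simulate()
    half = len(vo) // 2
    print(f"v_out just before the load step: {vo[half - 1]:.3f} V")
    print(f"worst undershoot after the step: {min(vo[half:]):.3f} V")
```

The simulated controller recovers from the load step within a few microseconds, which is the behaviour that motivates output-voltage-only fast controls in the first place.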
Abstract:
Anisotropic magnetoresistive (AMR) magnetic sensors are often chosen as the magnetic transducer for magnetic field sensing in applications with low to moderate magnetic field resolution, because of the relatively low mass of the sensor and their ease of use. They measure magnetic fields on the order of the Earth's magnetic field (with typical sensitivities of 1‰/G, or 10^-2 ‰/µT) and have typical minimum detectable fields on the order of nT, and even 0.1 nT, but they are seriously limited by thermal drifts due to the variation of the resistivity with temperature (∼2.5‰/°C) and the variation of the magnetoresistive effect with temperature (which affects both the sensitivity of the sensors, ∼2.7‰/°C, and the offset, ±0.5‰/°C). Therefore, for lower magnetic fields, fluxgate vector sensors are generally preferred. In the present work these limitations of AMR sensors are outlined and studied. Three methods based on lock-in amplifiers are proposed as low-noise techniques. Their performance has been simulated, experimentally tested and comparatively discussed. The developed model has also been used to derive a technique for temperature compensation of the AMR response. The final goal is to implement these techniques in a space-qualified application-specific integrated circuit (ASIC) for Mars in situ exploration with compact miniaturized magnetometers.
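A toy digital lock-in demodulation illustrating the generic principle behind lock-in-based read-out: the bridge output modulated at a carrier frequency is multiplied by in-phase and quadrature references and averaged, rejecting offset and low-frequency thermal drift. Frequencies and signal levels are illustrative, and this is not one of the three specific methods of the paper.

```python
import numpy as np

FS, F_CARRIER, DURATION = 100_000.0, 1_000.0, 0.5    # sample rate, carrier, seconds

def lock_in(signal, t):
    """Mix with in-phase/quadrature references and average: a narrow-band
    filter around the carrier that ignores DC offset and slow drift."""
    ref_i = np.sin(2 * np.pi * F_CARRIER * t)
    ref_q = np.cos(2 * np.pi * F_CARRIER * t)
    i = 2 * np.mean(signal * ref_i)
    q = 2 * np.mean(signal * ref_q)
    return np.hypot(i, q)

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    t = np.arange(0, DURATION, 1 / FS)
    field_term = 1e-3 * np.sin(2 * np.pi * F_CARRIER * t)   # modulated bridge output
    drift = 5e-3 * t                                         # slow thermal drift
    noise = 2e-3 * rng.standard_normal(t.size)               # broadband noise
    print(f"recovered amplitude: {lock_in(field_term + drift + noise, t):.2e}")
```

Despite a drift and a noise floor both larger than the signal, the recovered amplitude stays close to the injected 1e-3, which is the property exploited to beat the AMR thermal drifts.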
Abstract:
New cloud-based technologies, the internet of things and "as a service" trends are based on storing and processing data on remote servers. To guarantee secure communication and handling of those data, cryptographic schemes are used. Traditionally, these cryptographic schemes focus on guaranteeing the security of the data while they are stored and transferred, not while they are operated on. Therefore, once the server has to operate on the encrypted data, it first decrypts them, exposing unencrypted data to any intruder in the server. Moreover, the whole traditional scheme relies on the assumption that the server is trustworthy, giving it enough credentials to decipher the data in order to process them. As a possible solution to these issues, fully homomorphic encryption (FHE) schemes have been introduced. A fully homomorphic scheme does not require decryption to operate on the data: it performs the operations directly over the ciphertext ring, maintaining a homomorphism between the ciphertext and the plaintext. As a result, an intruder could only obtain ciphertexts, making it impossible to retrieve the actual sensitive data without the associated cipher keys. However, homomorphic encryption (HE) schemes are currently drastically slower than classical encryption schemes: one operation in the plaintext space can lead to many operations in the ciphertext space. Because of this, different approaches address the problem of speeding up these schemes to make them practical. One of these approaches is the use of High-Performance Computing (HPC) with FPGAs (Field Programmable Gate Arrays). An FPGA is an integrated circuit designed to be configured by a customer or a designer after manufacturing, hence "field-programmable". Compiling for an FPGA means generating a hardware circuit specific to the given algorithm, instead of executing a set of machine instructions on a universal machine, which is a major advantage with respect to CPUs. FPGAs therefore have clear differences compared with CPUs:
- A pipelined architecture, which allows successive outputs to be obtained in constant time.
- The possibility of having multiple pipes for concurrent/parallel computation.
Thereby, in this project:
- We present different implementations of FHE schemes on FPGA-based systems.
- We analyse and study the advantages and drawbacks of the cryptographic schemes on FPGA-based systems.
- We compare the implementations with related work.
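To make the homomorphic property concrete, the sketch below implements a toy Paillier cryptosystem, which is only additively homomorphic (the product of two ciphertexts decrypts to the sum of the plaintexts) and is not one of the lattice-based FHE schemes implemented in this project; the primes are tiny and insecure, chosen only so the example runs instantly.

```python
import math
import random

P, Q = 104_729, 104_723          # toy primes; far too small for real security
N = P * Q
N2 = N * N
LAM = math.lcm(P - 1, Q - 1)     # Carmichael function of N
G = N + 1                        # standard generator choice for Paillier

def _L(x):
    return (x - 1) // N

MU = pow(_L(pow(G, LAM, N2)), -1, N)   # precomputed decryption constant

def encrypt(m):
    r = random.randrange(1, N)
    return (pow(G, m, N2) * pow(r, N, N2)) % N2

def decrypt(c):
    return (_L(pow(c, LAM, N2)) * MU) % N

def add_encrypted(c1, c2):
    """The server adds plaintexts without ever decrypting them."""
    return (c1 * c2) % N2

if __name__ == "__main__":
    a, b = 1234, 5678
    c = add_encrypted(encrypt(a), encrypt(b))
    assert decrypt(c) == a + b
    print("decrypt(E(a) * E(b)) =", decrypt(c))
```

The FHE schemes targeted by the FPGA implementations additionally support multiplication on ciphertexts, which is what makes arbitrary computation on encrypted data possible, at the cost of the performance gap discussed above.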