986 results for sub-threshold
Abstract:
The usual way of modeling variability using threshold voltage shift and drain current amplification is becoming inaccurate as new sources of variability appear in sub-22nm devices. In this work we apply the four-injector approach for variability modeling to the simulation of SRAMs with predictive technology models from the 20nm down to the 7nm node. We show that the SRAMs, designed following the ITRS roadmap, present stability metrics at least 20% higher than those obtained with a classical variability modeling approach. Speed estimation is also pessimistic, whereas leakage is underestimated, if sub-threshold slope and DIBL mismatch and their correlations with threshold voltage are not considered.
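To make the injector idea above concrete, the following sketch (not taken from the paper) shows one way correlated per-transistor deviations of threshold voltage, current factor, sub-threshold slope and DIBL could be drawn for a Monte Carlo SRAM run; the standard deviations and correlation coefficients are placeholders, not values from any predictive technology model.

```python
import numpy as np

# Hypothetical per-transistor mismatch statistics; real values would come from the
# predictive technology model of the node being simulated.
sigma = np.array([30e-3, 0.05, 5e-3, 10e-3])   # std dev of dVth [V], dBeta [rel.], dSS [V/dec], dDIBL [V/V]
corr = np.array([[ 1.0, -0.2,  0.3,  0.4],
                 [-0.2,  1.0, -0.1, -0.1],
                 [ 0.3, -0.1,  1.0,  0.2],
                 [ 0.4, -0.1,  0.2,  1.0]])    # assumed correlations between the four injectors
cov = np.outer(sigma, sigma) * corr

rng = np.random.default_rng(0)

def sample_injectors(n_transistors):
    """Draw correlated (dVth, dBeta, dSS, dDIBL) deviations, one row per transistor."""
    return rng.multivariate_normal(np.zeros(4), cov, size=n_transistors)

# Example: mismatch for the six transistors of one 6T SRAM cell
for i, (dvth, dbeta, dss, ddibl) in enumerate(sample_injectors(6), start=1):
    print(f"M{i}: dVth={dvth*1e3:+.1f} mV, dBeta={dbeta:+.3f}, "
          f"dSS={dss*1e3:+.1f} mV/dec, dDIBL={ddibl*1e3:+.1f} mV/V")
```

In a real flow, deviations like these would be injected into the netlist of each cell instance before circuit simulation.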
Design and Simulation of Deep Nanometer SRAM Cells under Energy, Mismatch, and Radiation Constraints
Abstract:
Reliability is becoming the main concern for integrated circuits as technology scales below 22nm. Small imperfections in device fabrication now give rise to significant random differences in their electrical characteristics, which must be taken into account during the design phase. The new processes and materials required to fabricate devices of such reduced dimensions are giving rise to various effects that ultimately result in increased static power consumption or greater vulnerability to radiation. SRAM memories are already the most vulnerable part of an electronic system, not only because they represent more than half the area of current SoCs and microprocessors, but also because process variations affect them critically, with the failure of a single cell compromising the entire memory. This thesis addresses the different challenges posed by SRAM design in the smallest technologies. In a scenario of increasing variability, issues such as energy consumption, design that accounts for low-level technology effects, and radiation hardening are considered. First, given the increasing variability of devices in the smallest technology nodes, as well as the appearance of new sources of variability due to the introduction of new devices and the reduction of their dimensions, accurate modeling of this variability is crucial. The thesis proposes extending the injector method, which models variability at the circuit level while abstracting its physical causes, by adding two new sources to model the sub-threshold slope and DIBL, of growing importance in FinFET technology. The two new proposed injectors increase the accuracy of figures of merit at different abstraction levels of electronic design: at the transistor, gate and circuit levels. The mean square error when simulating stability and performance metrics of SRAM cells is reduced by at least 1.5 times and up to 7.5 times, while the estimation of the failure probability improves by several orders of magnitude. Low-power design is one of the main applications today, given the growing importance of battery-dependent mobile devices. It is equally necessary because of the significant power densities of current systems, in order to reduce their thermal dissipation and its consequences for aging. The traditional approach of lowering the supply voltage to reduce consumption is problematic for SRAMs given the growing impact of variability at low voltages. A cell design is proposed that uses negative bit-line values to reduce write failures as the main supply voltage is lowered. Despite using a second power supply for the negative bit-line voltage, the proposed design reduces consumption by up to 20% compared with a conventional cell. A new metric, the hold trip point, is proposed to prevent new types of failure caused by the use of negative voltages, along with an alternative method for estimating read speed that reduces the number of simulations required.
As the size of electronic devices continues to shrink, new mechanisms are introduced to ease the fabrication process or to reach the performance required by each new technology generation. One example is the compressive or tensile strain applied to the fins in FinFET technologies, which alters the mobility of the transistors built from those fins. The effects of these mechanisms depend strongly on the layout: the position of some transistors affects neighboring transistors, and the effect can differ between transistor types. The use of a complementary SRAM cell with pMOS pass-gate transistors is proposed, reducing the fin length of the nMOS transistors and lengthening that of the pMOS ones, extending them into neighboring cells and up to the edges of the cell array. Considering the effects of STI and SiGe stressors, the proposed design improves both types of transistor, boosting the performance of the complementary SRAM cell by more than 10% for the same failure probability and static power consumption, without requiring any increase in area. Finally, radiation has been a recurring problem in electronics for space applications, but the reduction of currents and voltages in current devices is making them vulnerable to radiation-induced noise even at ground level. Although technologies such as SOI or FinFET reduce the amount of energy collected by the circuit during a particle strike, the significant process variations of the smallest nodes will affect their radiation immunity. It is shown that radiation-induced errors can increase by up to 40% in the 7nm node when process variations are considered, compared with the nominal case. This increase is larger than the improvement obtained by designing memory cells specifically hardened against radiation, suggesting that reducing variability would bring a greater benefit.

ABSTRACT

Reliability is becoming the main concern for integrated circuits as technology scales below 22nm. Small imperfections in device manufacturing now result in significant random differences between devices at the electrical level, which must be dealt with during design. New processes and materials, required to allow the fabrication of extremely short devices, are giving rise to new effects that ultimately result in increased static power consumption or higher vulnerability to radiation. SRAMs have become the most vulnerable part of electronic systems: not only do they account for more than half of the chip area of today's SoCs and microprocessors, but they are also critically affected as soon as the different variation sources are considered, with failures in a single cell making the whole memory fail. This thesis addresses the different challenges that SRAM design faces in the smallest technologies. In a common scenario of increasing variability, issues such as energy consumption, technology-aware design and radiation hardening are considered. First, given the increasing magnitude of device variability in the smallest nodes, as well as the new sources of variability appearing as a consequence of new devices and shortened lengths, accurate modeling of this variability is crucial.
We propose to extend the injector method, which models variability at the circuit level while abstracting its physical sources, to better model the sub-threshold slope and drain-induced barrier lowering (DIBL), which are gaining importance in FinFET technology. The two new proposed injectors bring increased accuracy of figures of merit at different abstraction levels of electronic design: the transistor, gate and circuit levels. The mean square error when estimating performance and stability metrics of SRAM cells is reduced by a factor of at least 1.5 and up to 7.5, while yield estimation is improved by orders of magnitude. Low-power design is a major constraint given the fast-growing market of battery-powered mobile devices. It is also relevant because of the increased power densities of today's systems, in order to reduce thermal dissipation and its impact on aging. The traditional approach of reducing the supply voltage to lower energy consumption is challenging in the case of SRAMs, given the increased impact of process variations at low supply voltages. We propose a cell design that makes use of a negative bit-line write assist to overcome write failures as the main supply voltage is lowered. Despite using a second power source for the negative bit-line, the design achieves an energy reduction of up to 20% compared to a conventional cell. A new metric, the hold trip point, is introduced to deal with new sources of failure in cells using a negative bit-line voltage, together with an alternative method to estimate cell speed that requires fewer simulations. With the continuous reduction of device sizes, new mechanisms need to be included to ease the fabrication process and to meet the performance targets of the successive nodes. As an example, consider the compressive or tensile strains introduced in FinFET technology, which alter the mobility of the transistors made out of the affected fins. The effects of these mechanisms are highly layout-dependent, with transistors being affected by their neighbors and different transistor types being affected in different ways. We propose to use complementary SRAM cells with pMOS pass-gates in order to reduce the fin length of nMOS devices and achieve long uncut fins for the pMOS devices when the cell is included in its corresponding array. Once shallow trench isolation (STI) and SiGe stressors are considered, the proposed design improves both kinds of transistor, boosting the performance of complementary SRAM cells by more than 10% for the same failure probability and static power consumption, with no area overhead. While radiation has been a traditional concern in space electronics, the small currents and voltages used in the latest nodes are making them more vulnerable to radiation-induced transient noise, even at ground level. Even though SOI and FinFET technologies reduce the amount of energy transferred from a striking particle to the circuit, the significant process variations of the smallest nodes will affect their radiation-hardening capabilities. We demonstrate that process variations can increase the radiation-induced error rate by up to 40% in the 7nm node compared to the nominal case. This increase is larger than the improvement achieved by radiation-hardened cells, suggesting that reducing process variations would bring a greater improvement.
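The thesis abstract above repeatedly ties cell-level variability to array-level yield. As a generic illustration (not the thesis methodology), the sketch below estimates a cell failure probability from hypothetical Monte Carlo samples of a stability metric and propagates it to an array, assuming independent cells.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical Monte Carlo samples of a stability metric, e.g. static noise margin [V].
# In practice these would come from circuit simulations of the cell under injected variability.
snm = rng.normal(loc=0.18, scale=0.035, size=1_000_000)

fail_limit = 0.05                                    # assume the cell fails if SNM < 50 mV
p_cell_fail = np.mean(snm < fail_limit)

n_cells = 64 * 1024                                  # hypothetical 64 kibit array
p_array_fail = 1.0 - (1.0 - p_cell_fail) ** n_cells  # assuming independent cell failures

print(f"cell P_fail ~ {p_cell_fail:.2e}, 64 kibit array P_fail ~ {p_array_fail:.3f}")
```

Even a tiny per-cell failure probability becomes significant at array scale, which is why the abstract emphasizes accurate estimation of the tails.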
Abstract:
In reaction time (RT) tasks, presentation of a startling acoustic stimulus (SAS) together with a visual imperative stimulus can dramatically reduce RT while leaving response execution unchanged. It has been suggested that a prepared motor response program is triggered early by the SAS but is not otherwise affected. Movements aimed at intercepting moving targets are usually considered to be similarly governed by a prepared program. This program is triggered when visual stimulus information about the time to arrival of the moving target reaches a specific criterion. We investigated whether a SAS could also trigger such a movement. Human experimental participants were trained to hit moving targets with movements of a specific duration. This permitted an estimate of when movement would begin (expected onset time). Startling and sub-startle-threshold acoustic probe stimuli were delivered unexpectedly among control trials, 65, 85, 115 and 135 ms prior to expected onset (10:1 ratio of control to probe trials). Results showed that startling probe stimuli at 85 and 115 ms produced early response onsets, whereas those at 65 and 135 ms did not. Sub-threshold stimuli at 115 and 135 ms also produced early onsets. Startle probes led to increased response vigor, but sub-threshold probes had no detectable effects. These data can be explained by a simple model in which preparatory, response-related activation builds up in the circuits responsible for generating motor commands in anticipation of the GO command. If early triggering by the acoustic probes is the mechanism underlying the findings, then the data support the hypothesis that rapid interceptions are governed by a motor program. © 2006 Published by Elsevier Ltd on behalf of IBRO.
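The "simple model" invoked in the abstract is a rise-to-threshold account. The sketch below is a schematic version of that idea (all parameters are arbitrary and not fitted to the study's data): preparatory activation ramps toward an initiation threshold, and an acoustic probe adds a step of activation that can bring the crossing, and hence response onset, forward in time.

```python
import numpy as np

# Schematic rise-to-threshold sketch (all parameters hypothetical): preparatory
# activation ramps toward an initiation threshold timed so that, without a probe,
# the response is released at the expected onset time (t = 0 ms).
THRESHOLD = 1.0
RISE_START = -200.0                    # ms; activation starts building 200 ms early
RATE = THRESHOLD / abs(RISE_START)     # chosen so threshold is reached exactly at t = 0

def predicted_onset(probe_time_ms=None, probe_boost=0.0):
    """Time (ms, relative to expected onset) at which activation first crosses threshold.
    A probe adds `probe_boost` units of activation from `probe_time_ms` onward."""
    t = np.arange(RISE_START, 50.0, 0.1)
    act = RATE * (t - RISE_START)
    if probe_time_ms is not None:
        act = act + probe_boost * (t >= probe_time_ms)
    return t[np.nonzero(act >= THRESHOLD)[0][0]]

print(f"no probe:              {predicted_onset():+.1f} ms")
print(f"startle at -100 ms:    {predicted_onset(-100.0, probe_boost=0.4):+.1f} ms")
print(f"weak probe at -100 ms: {predicted_onset(-100.0, probe_boost=0.1):+.1f} ms")
```

Larger boosts and later probes produce larger onset advances in this toy; it is meant only to make the verbal mechanism concrete, not to reproduce the specific 65/85/115/135 ms pattern reported above.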
Abstract:
We outline a scheme for the way in which early vision may handle information about shading (luminance modulation, LM) and texture (contrast modulation, CM). Previous work on the detection of gratings has found no sub-threshold summation, and no cross-adaptation, between LM and CM patterns. This strongly implied separate channels for the detection of LM and CM structure. However, we now report experiments in which adapting to LM (or CM) gratings creates tilt aftereffects of similar magnitude on both LM and CM test gratings, and reduces the perceived strength (modulation depth) of LM and CM gratings to a similar extent. This transfer of aftereffects between LM and CM might suggest a second stage of processing at which LM and CM information is integrated. The nature of this integration, however, is unclear and several simple predictions are not fulfilled. Firstly, one might expect the integration stage to lose identity information about whether the pattern was LM or CM. We show instead that the identity of barely detectable LM and CM patterns is not lost. Secondly, when LM and CM gratings are combined in-phase or out-of-phase we find no evidence for cancellation, nor for 'phase-blindness'. These results suggest that information about LM and CM is not pooled or merged - shading is not confused with texture variation. We suggest that LM and CM signals are carried by separate channels, but they share a common adaptation mechanism that accounts for the almost complete transfer of perceptual aftereffects.
Abstract:
Zinc oxide and graphene nanostructures are important technological materials because of their unique properties and potential applications in future generations of electronic and sensing devices. This dissertation presents a brief account of the strategies used to grow zinc oxide nanostructures (thin films and nanowires) and graphene, and investigates their applications as enhanced field-effect transistors, chemical sensors and transparent flexible electrodes. Nanostructured zinc oxide (ZnO) and low-gallium-doped zinc oxide (GZO) thin films were synthesized by a magnetron sputtering process. Zinc oxide nanowires (ZNWs) were grown by a chemical vapor deposition method. Field-effect transistors (FETs) of ZnO and GZO thin films and ZNWs were fabricated by standard photo and electron beam lithography processes. Electrical characteristics of these devices were investigated using nondestructive surface cleaning and ultraviolet irradiation treatment at high temperature and under vacuum. GZO thin film transistors showed a mobility of ∼5.7 cm²/V·s at a low operation voltage of <5 V and a low turn-on voltage of ∼0.5 V with a sub-threshold swing of ∼85 mV/decade. Bottom-gated FETs fabricated from ZNWs exhibit a very high on-to-off ratio (∼10⁶) and mobility (∼28 cm²/V·s). A bottom-gated FET showed a large hysteresis of ∼5.0 to 8.0 V, which was significantly reduced to ∼1.0 V by the surface treatment process. The results demonstrate that charge transport in ZnO nanostructures strongly depends on surface environmental conditions and can be explained by the formation of a depletion layer at the surface by various surface states. A nitric oxide (NO) gas sensor using a single ZNW functionalized with Cr nanoparticles was developed. The sensor exhibited an average sensitivity of ∼46% and a minimum detection limit of ∼1.5 ppm for NO gas. The sensor is also selective towards NO gas, as demonstrated by a cross-sensitivity test with N2, CO and CO2 gases. Graphene film on copper foil was synthesized by a chemical vapor deposition method. A hot-press lamination process was developed for transferring the graphene film to a flexible polymer substrate. The graphene/polymer film exhibited a high-quality, flexible, transparent conductive structure with unique electrical-mechanical properties: ∼88.80% light transmittance and ∼1.1742 kΩ/sq sheet resistance. The application of a graphene/polymer film as a flexible and transparent electrode for field emission displays was demonstrated.
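For readers unfamiliar with the figures quoted above (sub-threshold swing, on/off ratio), the sketch below shows how they are commonly extracted from a transfer curve; the Id-Vg data here are synthetic, not measurements from the dissertation.

```python
import numpy as np

# Synthetic transfer curve (Id vs Vg); the devices in the dissertation are not modeled here.
vg = np.linspace(0.0, 3.0, 601)                    # gate voltage [V]
ss_true = 0.085                                    # assumed sub-threshold swing [V/decade]
log_id = np.clip(-12.0 + (vg - 0.2) / ss_true, -12.0, -4.0)
i_d = 10.0 ** log_id                               # drain current [A], 1 pA off-state to 100 uA on-state

# Sub-threshold swing: inverse of the steepest slope of log10(Id) vs Vg, in mV/decade
slope = np.gradient(np.log10(i_d), vg)
ss_extracted = 1e3 / slope.max()

on_off = i_d.max() / i_d.min()
print(f"extracted SS ~ {ss_extracted:.0f} mV/decade, on/off ratio ~ {on_off:.1e}")
```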
Abstract:
Gap junction coupling is ubiquitous in the brain, particularly between the dendritic trees of inhibitory interneurons. Such direct non-synaptic interaction allows for direct electrical communication between cells. Unlike spike-time driven synaptic neural network models, which are event based, any model with gap junctions must necessarily involve a single neuron model that can represent the shape of an action potential. Indeed, not only do neurons communicating via gaps feel super-threshold spikes, but they also experience, and respond to, sub-threshold voltage signals. In this chapter we show that the so-called absolute integrate-and-fire model is ideally suited to such studies. At the single neuron level voltage traces for the model may be obtained in closed form, and are shown to mimic those of fast-spiking inhibitory neurons. Interestingly in the presence of a slow spike adaptation current the model is shown to support periodic bursting oscillations. For both tonic and bursting modes the phase response curve can be calculated in closed form. At the network level we focus on global gap junction coupling and show how to analyze the asynchronous firing state in large networks. Importantly, we are able to determine the emergence of non-trivial network rhythms due to strong coupling instabilities. To illustrate the use of our theoretical techniques (particularly the phase-density formalism used to determine stability) we focus on a spike adaptation induced transition from asynchronous tonic activity to synchronous bursting in a gap-junction coupled network.
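As a toy companion to the network analysis described above, here is a minimal Euler simulation of gap-junction-coupled integrate-and-fire units. It assumes the single-neuron form dv/dt = |v| + I with a hard threshold and reset (one common statement of the absolute integrate-and-fire model) and an all-to-all linear gap coupling; the drive, conductance and threshold values are arbitrary, and no spike adaptation is included.

```python
import numpy as np

# Minimal Euler sketch of N neurons coupled by gap junctions. Single-neuron dynamics
# assumed as dv/dt = |v| + I with threshold and reset; all parameter values are arbitrary.
N, dt, T = 10, 0.001, 20.0
I_drive = 0.1                 # constant drive
g_gap = 0.05                  # gap-junction strength (all-to-all, normalized by N)
v_th, v_reset = 1.0, -1.0

rng = np.random.default_rng(2)
v = rng.uniform(v_reset, v_th, size=N)     # random initial conditions
spikes = []

for step in range(int(T / dt)):
    coupling = g_gap * (v.mean() - v)      # (g/N) * sum_j (v_j - v_i) for each neuron i
    v += dt * (np.abs(v) + I_drive + coupling)
    fired = v >= v_th
    if fired.any():
        spikes.extend((step * dt, i) for i in np.nonzero(fired)[0])
        v[fired] = v_reset

print(f"{len(spikes)} spikes from {N} neurons over {T} time units")
```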
Abstract:
As conventional MOSFET scaling approaches the limit imposed by short-channel effects, Double Gate (DG) MOS transistors are appearing as the most feasible candidate for sub-45nm technology nodes. As the short-channel effect in a DG transistor is controlled by the device geometry, an undoped or lightly doped body is used to sustain the channel. There exists a disparity in the threshold voltage calculation criteria for undoped-body symmetric double gate transistors, for which two definitions are used: one potential-based and the other charge-based. In this paper, a novel concept of a "crossover point" is introduced, which proves that the charge-based definition is more accurate than the potential-based definition. The change in threshold voltage with body thickness variation for a fixed channel length is anomalous as predicted by the potential-based definition, while it is monotonic for the charge-based definition. The threshold voltage is then extracted from drain current versus gate voltage characteristics using linear extrapolation and the "Third Derivative of Drain-Source Current" (TD) method. The trend of threshold voltage variation is found to be the same in both cases, which supports the charge-based definition.
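The linear-extrapolation extraction named in the abstract can be summarised in a few lines. The sketch below applies it to a synthetic linear-region transfer curve with a smooth turn-on, so the threshold voltage built into the curve is known and can be compared with the extracted value.

```python
import numpy as np

# Linear-extrapolation Vth extraction applied to a synthetic transfer curve
# (a smooth turn-on model used only for illustration).
vg = np.linspace(0.0, 1.5, 301)
vth_true, k, n_vt = 0.40, 1e-4, 0.05
i_d = k * n_vt * np.logaddexp(0.0, (vg - vth_true) / n_vt)   # ~0 below Vth, ~k*(Vg - Vth) above

gm = np.gradient(i_d, vg)                  # transconductance dId/dVg
i_pk = int(np.argmax(gm))                  # bias point of maximum transconductance
# Tangent to the Id-Vg curve at that point, extrapolated to Id = 0:
vth_extracted = vg[i_pk] - i_d[i_pk] / gm[i_pk]
print(f"extracted Vth ~ {vth_extracted:.3f} V (value built into the synthetic curve: {vth_true} V)")
```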
Abstract:
We report enhanced emission and gain narrowing in Rhodamine 590 perchlorate dye in an aqueous suspension of polystyrene microspheres. A systematic experimental study of the threshold condition for, and the gain narrowing of, the stimulated emission over a wide range of dye concentrations and scatterer number densities showed several interesting features, even though the transport mean free path far exceeded the system size. The conventional diffusive-reactive approximation to radiative transfer in an inhomogeneously illuminated random amplifying medium, which is valid for a transport mean free path much smaller than the system size, is clearly inapplicable here. We propose a new probabilistic approach for the present case of dense, random, weak scatterers involving the otherwise rare and ignorable sub-mean-free-path scatterings, now made effective by the high gain in the medium, which is consistent with the experimentally observed features. (C) 1997 Optical Society of America.
Abstract:
Near-threshold, mixed-mode (I and II) fatigue crack growth occurs mainly by two mechanisms: coplanar (or shear) mode and branch (or tensile) mode. For a constant ratio of ΔKI/ΔKII, shear mode growth shows a self-arrest character and only starts again when ΔKI and ΔKII are increased. Both shear crack growth and the early stages of tensile crack growth are of a crystallographic nature; the fatigue crack proceeds along slip planes or grain boundaries. The appearance of the fracture surfaces suggests that the mechanism of crack extension is the development of slip band microcracks which join up to form a macrocrack. This process is thought to be assisted by the nature of the plastic deformation within the reversed plastic zone, where high back stresses are set up by dislocation pile-ups against grain boundaries. The interaction of the crack tip stress field with that of the dislocation pile-ups leads to the formation of slip band microcracks and subsequent crack extension. The change from shear mode to tensile mode growth probably occurs when the maximum tensile stress and the microcrack density in the maximum tensile plane direction attain critical values.
Abstract:
LaF3 single-layer films were prepared by thermal boat evaporation at different deposition rates, and some of the single-layer films were vacuum annealed. The crystal structure, transmission spectra and laser-induced damage threshold (LIDT) of the films were measured by X-ray diffraction (XRD), a Lambda 900 spectrophotometer and a 355 nm Nd:YAG pulsed laser, respectively, and the refractive index and extinction coefficient of the samples were calculated from the transmission spectra. The experimental results show that increasing the deposition rate favors the crystallization and preferred-orientation growth of the LaF3 films and can improve the density and refractive index of the films, but their laser damage resistance decreases somewhat; if the deposition rate is too high, the crystallinity of the films deteriorates, a large number of voids form in the films and their mechanical strength decreases, leading to a reduction in the refractive index of the films and
Abstract:
Zirconium dioxide (ZrO2) thin films were deposited on BK7 glass substrates by the electron beam evaporation method. A continuous-wave CO2 laser was used to anneal the ZrO2 thin films to investigate whether beneficial changes could be produced. After annealing at different laser scanning speeds, the weak absorption of the coatings was measured by the surface thermal lensing (STL) technique, and the laser-induced damage threshold (LIDT) was then determined. It was found that the weak absorption first decreased with decreasing laser scanning speed and then increased once the speed fell below a certain value. The LIDT of the ZrO2 coatings decreased greatly when the laser scanning speeds were below a certain value. A Nomarski microscope was employed to map the damage morphology, and it was found that the damage behavior was defect-initiated for both annealed and as-deposited samples. The influence of post-deposition CO2 laser annealing on the structural and mechanical properties of the films was also investigated by X-ray diffraction and a ZYGO interferometer. It was found that the microstructure of the ZrO2 films did not change. The residual stress in the ZrO2 films showed a tendency to shift from tensile to compressive after CO2 laser annealing, and the magnitude of the change in residual stress increased with decreasing laser scanning speed. The residual stress may be mitigated to some extent at proper treatment parameters. (c) 2007 Elsevier GmbH. All rights reserved.
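The abstract reports residual stress obtained with a ZYGO interferometer but does not give the relation used. The standard route is Stoney's equation, reproduced below for reference; treating it as the one used here is an assumption, not a statement of the authors' exact procedure.

```latex
% Standard Stoney relation (assumed; the abstract does not state the exact formula used):
% film stress from the substrate curvature change measured interferometrically.
\[
  \sigma_f \;=\; \frac{E_s\, t_s^{2}}{6\,(1-\nu_s)\, t_f}
  \left(\frac{1}{R_{\text{post}}}-\frac{1}{R_{\text{pre}}}\right)
\]
% E_s, \nu_s: substrate Young's modulus and Poisson ratio; t_s, t_f: substrate and film
% thicknesses; R_pre, R_post: substrate radii of curvature before and after coating/annealing.
```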
Abstract:
1. This paper investigated the bioenergetic responses of the sea cucumber Apostichopus japonicus (wet weight 36.5 +/- 1.2 g) to different water temperatures (5, 10, 15, 20, 25 and 30 degrees C) in the laboratory. 2. Results showed that, theoretically, the optimal temperatures for energy intake and scope for growth (SFG) of sub-adult A. japonicus were 15.6 and 16.0 degrees C, respectively. The aestivation threshold temperature for sea cucumbers at this life stage could be 29.0 degrees C, taking feeding cessation as the indication of aestivation. 3. Our data suggest that A. japonicus is thermo-sensitive to higher temperatures, which prevents it from colonising sub-tropical coastal zones. Therefore, water temperature plays an important role in setting its southernmost distribution limit in China. 4. The potential impact of global ocean warming on A. japonicus might be a northward shift in its geographical distribution. Crown Copyright (C) 2009 Published by Elsevier Ltd. All rights reserved.
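Scope for growth (SFG) in studies of this kind is normally computed from the conventional energy budget shown below; this is given for orientation and may differ in detail from the balance used in the paper.

```latex
% Conventional energy-budget form of scope for growth (assumed; the paper may use a variant):
\[
  \mathrm{SFG} \;=\; C \;-\; (F + U + R)
\]
% C: energy consumed via feeding; F: energy lost in faeces; U: excretory losses;
% R: respiratory (metabolic) expenditure; all terms per individual (e.g., J\,d^{-1}).
```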
Abstract:
Recent studies predict elevated and accelerating rates of species extinctions over the 21st century, due to climate change and habitat loss. Considering that such primary species loss may initiate cascades of secondary extinctions and push systems towards critical tipping points, we urgently need to increase our understanding of whether certain sequences of species extinctions can be expected to be more devastating than others. Most theoretical studies addressing this question have used a topological (non-dynamical) approach to analyse the probability that food webs will collapse below a fixed threshold value of species richness when subjected to different sequences of species loss. Typically, these studies have considered neither the possibility of dynamical responses of species nor the possibility that conclusions may depend on the value of the collapse threshold. Here we analyse how sensitive conclusions on the importance of different species are to the threshold value of food web collapse. Using dynamical simulations, where we expose model food webs to a range of extinction sequences, we evaluate the reliability of the most frequently used index, R50, as a measure of food web robustness. In general, we find that R50 is a reliable measure and that identification of destructive deletion sequences is fairly robust within a moderate range of collapse thresholds. At the same time, however, focusing on R50 only hides a lot of interesting information on the disassembly process and can, in some cases, lead to incorrect conclusions on the relative importance of species in food webs.
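For concreteness, the sketch below computes the R50 index in the form commonly used in such studies: the fraction of primary removals needed to drive total species richness (primary plus secondary losses) below half of its initial value. The disassembly trajectory is a made-up example, not output from the paper's simulations.

```python
# R50 robustness index: fraction of primary removals at which total richness first
# falls below `collapse_fraction` of its initial value. `richness_after` is the
# (hypothetical) total species count remaining after each successive primary removal.
def r50(initial_richness, richness_after, collapse_fraction=0.5):
    threshold = collapse_fraction * initial_richness
    for k, remaining in enumerate(richness_after, start=1):
        if remaining < threshold:
            return k / initial_richness      # fraction of species removed at collapse
    return 1.0                               # web never collapsed within this sequence

# Toy disassembly trajectory for a 20-species web (numbers are illustrative only)
trajectory = [19, 18, 16, 15, 12, 9, 7, 5, 3, 1]
print("R50 =", r50(20, trajectory))
```

Varying `collapse_fraction` in such a function is exactly the sensitivity analysis the abstract describes.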
Abstract:
This paper presents an approach for automatic classification of pulsed Terahertz (THz), or T-ray, signals, highlighting their potential in biomedical, pharmaceutical and security applications. T-ray classification systems supply a wealth of information about test samples and make possible the discrimination of heterogeneous layers within an object. In this paper, a novel technique involving the use of Auto Regressive (AR) and Auto Regressive Moving Average (ARMA) models on the wavelet transforms of measured T-ray pulse data is presented. Two example applications are examined: the classification of normal human bone (NHB) osteoblasts against human osteosarcoma (HOS) cells, and the identification of six different powder samples. A variety of model types and orders are used to generate descriptive features for subsequent classification. Wavelet-based de-noising with soft threshold shrinkage is applied to the measured T-ray signals prior to modeling. For classification, a simple Mahalanobis distance classifier is used. After feature extraction, classification accuracy for cancerous and normal cell types is 93%, whereas for powders it is 98%.
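The final classification step described above is a nearest-class-mean rule under the Mahalanobis metric. The sketch below illustrates that step on random stand-in feature vectors (in the paper the features are AR/ARMA coefficients fitted to wavelet-transformed T-ray pulses, which are not reproduced here).

```python
import numpy as np

# Mahalanobis-distance classification on stand-in feature vectors; the class names
# mirror the cell-type example in the abstract, but the data here are random.
rng = np.random.default_rng(3)
n_feat = 6
train = {
    "NHB": rng.normal(0.0, 1.0, size=(40, n_feat)),
    "HOS": rng.normal(1.5, 1.0, size=(40, n_feat)),
}

# Class means and a pooled covariance estimated from the training features
means = {c: x.mean(axis=0) for c, x in train.items()}
pooled = np.cov(np.vstack([x - means[c] for c, x in train.items()]).T)
pooled_inv = np.linalg.inv(pooled)

def classify(x):
    """Assign x to the class with the smallest Mahalanobis distance to its mean."""
    def d2(c):
        diff = x - means[c]
        return float(diff @ pooled_inv @ diff)
    return min(train, key=d2)

test_sample = rng.normal(1.4, 1.0, size=n_feat)   # drawn near the "HOS" cluster
print("predicted class:", classify(test_sample))
```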