949 results for: Circuit of Sacoleiros


Relevance: 30.00%

Abstract:

Neuronal outgrowth has been proposed in many systems as a mechanism underlying memory storage. For example, sensory neuron outgrowth is widely accepted as an underlying mechanism of long-term sensitization of defensive withdrawal reflexes in Aplysia. The hypothesis is that learning leads to outgrowth and consequently to the formation of new synapses, which in turn strengthen the neural circuit underlying the behavior. However, key experiments to test this hypothesis have never been performed.

Four days of sensitization training leads to outgrowth of the siphon sensory neurons mediating the siphon-gill withdrawal response in Aplysia. We found that a similar training protocol produced robust outgrowth in the tail sensory neurons mediating the tail-siphon withdrawal reflex. In contrast, one day of training, which effectively induces long-term behavioral sensitization and synaptic facilitation, was not associated with neuronal outgrowth. Further examination of the effect of behavioral training protocols on sensory neuron outgrowth indicated that this structural modification is associated only with the most persistent forms of sensitization, and that the induction of these changes depends on the spacing of the training trials over multiple days. We therefore suggest that neuronal outgrowth is not a universal mechanism underlying long-term sensitization, but is involved only in the most persistent forms of the memory.

Sensory neuron outgrowth presumably contributes to long-term sensitization through the formation of new synapses with follower motor neurons, but this hypothesis has never been directly tested. The contribution of outgrowth to long-term sensitization was assessed using confocal microscopy to examine sites of contact between physiologically connected pairs of sensory and motor neurons. Following four days of training, the strength of both the behavior and the sensorimotor synapse, as well as the number of appositions with follower neurons, was enhanced only on the trained side of the animal. In contrast, outgrowth was induced on both sides of the animal, indicating that although sensory neuron outgrowth does appear to contribute to sensitization through the formation of new synapses, outgrowth alone is not sufficient to account for the effects of sensitization. This suggests that key regulatory steps lie downstream of outgrowth, possibly in the targeting of new processes and the activation of new synapses.

Relevance: 30.00%

Abstract:

At issue is whether isolated DNA is patent-eligible under U.S. patent law, and the implications of that determination for public health. The U.S. Patent and Trademark Office has issued patents on DNA since the 1980s, and scientists and researchers have operated in that milieu since that time. Today, genetic research and testing related to the human breast cancer genes BRCA1 and BRCA2 is conducted within the framework of seven patents that were issued to Myriad Genetics and the University of Utah Research Foundation between 1997 and 2000. In 2009, suit was filed on behalf of multiple researchers, professional associations and others to invalidate fifteen of the claims underlying those patents. The Court of Appeals for the Federal Circuit, which hears patent cases, has invalidated claims for analyzing and comparing isolated DNA but has upheld claims to isolated DNA itself. The specific issue of whether isolated DNA is patent-eligible is now before the Supreme Court, which is expected to decide the case by year's end. In this work, a systematic review was performed to determine the effects of DNA patents on various stakeholders and, ultimately, on public health, and to provide a legal analysis of the patent eligibility of isolated DNA and the likely outcome of the Supreme Court's decision.

A literature review was conducted to, first, identify the principal stakeholders with an interest in the patent eligibility of the isolated DNA sequences BRCA1 and BRCA2 and, second, determine the effect of the case on those stakeholders. Published reports that addressed gene patents, the Myriad litigation, and the implications of gene patents for stakeholders were included. Next, an in-depth legal analysis of the patent eligibility of isolated DNA and of methods for analyzing it was performed pursuant to accepted methods of legal research and analysis, based on legal briefs, federal law and jurisprudence, scholarly works and standard-practice legal analysis.

Biotechnology, biomedical and clinical research, access to health care, and personalized medicine were identified as the principal stakeholders and interests. Many experts believe that the patent eligibility of isolated DNA will not greatly affect the biotechnology industry insofar as genetic testing is concerned; unlike therapeutics, genetic testing does not require tremendous resources or lead time. The actual impact on biomedical researchers is uncertain, with a greater impact expected for researchers whose work is intended for commercial purposes (versus basic science). The impact on access to health care has been surprisingly difficult to assess: while invalidating gene patents might be expected to decrease the cost of genetic testing and extend access to more laboratories and physicians' offices that provide the test, a 2010 study on the actual impact was inconclusive. As for personalized medicine, many experts believe that its availability is ultimately a public policy issue for Congress, not the courts.

Based on the legal analysis performed in this work, this writer believes the Supreme Court is likely to invalidate patents on isolated DNA whose sequences are found in nature, because these gene sequences are a basic tool of scientific and technological work and patents on isolated DNA would unduly inhibit their future use. Patents on complementary DNA (cDNA) are expected to stand, however, based on the human intervention required to craft cDNA and the product's distinction from the DNA found in nature. In the end, the solution to gene patents may lie not in jurisprudence but in a fundamental change in business practices toward expanded licenses that better address the interests of the several stakeholders.

Relevance: 30.00%

Abstract:

Single-locus mutations in mice can express epileptic phenotypes and provide critical insights into the naturally occurring defects that alter excitability and mediate synchronization in the central nervous system (CNS). One such recessive mutation (on chromosome (Chr) 15), stargazer (stg/stg), expresses frequent bilateral 6-7 cycles per second (c/sec) spike-wave seizures associated with behavioral arrest, and provides a valuable opportunity to examine the inherited lesion associated with spike-wave synchronization.

The existence of distinct and heterogeneous defects mediating spike-wave discharge (SWD) generation has been demonstrated by the presence of multiple genetic loci expressing generalized spike-wave activity and by the differential effects of pharmacological agents on SWDs in different spike-wave epilepsy models. Attempts at understanding the basic mechanisms underlying spike-wave synchronization have focused on γ-aminobutyric acid (GABA) receptor-, low-threshold T-type Ca²⁺ channel-, and N-methyl-D-aspartate receptor (NMDA-R)-mediated transmission. It is believed that defects in these modes of transmission can mediate the conversion of normal oscillations in a trisynaptic circuit, which includes the neocortex, reticular nucleus and thalamus, into spike-wave activity. However, the underlying lesions involved in spike-wave synchronization have not been clearly identified.

The purpose of this research project was to locate and characterize a distinct neuronal hyperexcitability defect favoring spike-wave synchronization in the stargazer brain. One experimental approach for anatomically locating areas of synchronization and hyperexcitability involved an attempt to map patterns of hypersynchronous activity with antibodies to activity-induced proteins. A second approach to characterizing the neuronal defect involved examining the neuronal responses in the mutant following application of pharmacological agents with well-known sites of action. To test the hypothesis that an NMDA receptor-mediated hyperexcitability defect exists in stargazer neocortex, extracellular field recordings were used to examine the effects of CPP and MK-801 on coronal neocortical brain slices of stargazer and wild type perfused with 0 Mg²⁺ artificial cerebrospinal fluid (aCSF). To study how NMDA receptor antagonists might promote increased excitability in stargazer neocortex, two basic hypotheses were tested: (1) NMDA receptor antagonists directly activate deep-layer principal pyramidal cells in the neocortex of stargazer, presumably by opening NMDA receptor channels altered by the stg mutation; and (2) NMDA receptor antagonists disinhibit the neocortical network by blocking recurrent excitatory synaptic inputs onto inhibitory interneurons in the deep layers of stargazer neocortex. To test whether CPP might disinhibit the 0 Mg²⁺ bursting network in the mutant by acting on inhibitory interneurons, the inhibitory inputs were pharmacologically removed by applying GABA receptor antagonists to the cortical network, and the effects of CPP under 0 Mg²⁺ aCSF perfusion in layer V of stg/stg were then compared with those found in +/+ neocortex using in vitro extracellular field recordings. (Abstract shortened by UMI.)

Relevance: 30.00%

Abstract:

A two-dimensional finite element model of current flow in the front surface of a PV cell is presented. The model is validated experimentally. Particular attention is then paid to the effects of non-uniform illumination along the finger direction, which is typical of linear concentrator systems. The fill factor, open-circuit voltage and efficiency are shown to decrease with an increasing degree of non-uniform illumination. It is shown that these detrimental effects can be mitigated significantly by re-optimizing the number of front-surface metallization fingers to suit the degree of non-uniformity. The behavior of current flow in the front surface of a cell operating at open-circuit voltage under non-uniform illumination is discussed in detail.
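The distributed front-surface effect described above can be sketched in one dimension. The snippet below is a minimal illustration, not the paper's 2D finite element code, and every parameter value is assumed: it discretizes a metallization finger into diode nodes coupled by finger resistance, solves the node voltages with a damped Newton iteration, and compares fill factor and open-circuit voltage for uniform versus concentrated illumination carrying the same total photocurrent.

```python
import numpy as np

VT = 0.02585      # thermal voltage at 300 K [V]
I0 = 1e-12        # diode saturation current per node [A] (assumed)
R  = 0.05         # finger resistance between adjacent nodes [ohm] (assumed)
N  = 20           # number of nodes discretizing the finger

def diode(v):
    return I0 * (np.exp(np.minimum(v, 1.0) / VT) - 1.0)

def terminal_current(vt, iph):
    """Solve KCL along the finger with Newton's method; node 0 is the busbar."""
    v = np.full(N, vt)
    for _ in range(200):
        f = iph - diode(v)                       # generation minus diode loss
        J = np.diag(-I0 / VT * np.exp(np.minimum(v, 1.0) / VT))
        for i in range(1, N):                    # resistor between nodes i-1 and i
            f[i]     -= (v[i] - v[i - 1]) / R
            f[i - 1] += (v[i] - v[i - 1]) / R
            J[i, i] -= 1 / R;         J[i, i - 1] += 1 / R
            J[i - 1, i - 1] -= 1 / R; J[i - 1, i] += 1 / R
        J[0, :] = 0.0; J[0, 0] = 1.0; f[0] = vt - v[0]   # clamp node 0 to vt
        dv = np.linalg.solve(J, -f)
        v += np.clip(dv, -0.05, 0.05)            # damped update
        if np.max(np.abs(dv)) < 1e-9:
            break
    return iph[0] - diode(v[0]) + (v[1] - v[0]) / R      # current into the busbar

def iv_metrics(iph):
    vts = np.linspace(0.0, 0.75, 151)
    cur = np.array([terminal_current(vt, iph) for vt in vts])
    k = np.where(cur <= 0)[0][0]                 # first non-positive current
    voc = vts[k - 1] + (vts[k] - vts[k - 1]) * cur[k - 1] / (cur[k - 1] - cur[k])
    return voc, cur[0], np.max(vts * cur) / (voc * cur[0])   # Voc, Isc, FF

total = 0.2                                      # total photocurrent [A]
uniform = np.full(N, total / N)
x = np.arange(N)
peaked = np.exp(-0.5 * ((x - 16) / 2.0) ** 2)    # light concentrated far from busbar
peaked *= total / peaked.sum()                   # same total flux, non-uniform
voc_u, isc_u, ff_u = iv_metrics(uniform)
voc_n, isc_n, ff_n = iv_metrics(peaked)
# concentrating the same flux away from the busbar lowers FF (and typically Voc)
```

Even this lumped sketch reproduces the abstract's qualitative result: the resistive transport penalty of a concentrated profile degrades the fill factor relative to uniform illumination.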

Relevance: 30.00%

Abstract:

The transport properties of thin-film solar cells based on wide-gap CuGaSe₂ absorbers have been investigated as a function of the bulk [Ga]/[Cu] ratio, ranging from 1.01 to 1.33. We find that (i) the recombination processes in devices prepared from absorbers with a composition close to stoichiometry ([Ga]/[Cu] = 1.01) are strongly tunnelling-assisted, resulting in low recombination activation energies (Ea) of approximately 0.95 eV in the dark and 1.36 eV under illumination, and (ii) with an increasing [Ga]/[Cu] ratio, the transport mechanism changes to being dominated by thermally activated Shockley-Read-Hall recombination, with similar Ea values of approximately 1.52-1.57 eV for bulk [Ga]/[Cu] ratios of 1.12-1.33. The dominant recombination processes take place at the interface between the CdS buffer and the CuGaSe₂ absorber, independently of the absorber composition. The increase of Ea with the [Ga]/[Cu] ratio correlates with the open-circuit voltage and explains the better performance of the corresponding solar cells.
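Activation energies of this kind are typically extracted from an Arrhenius analysis of temperature-dependent transport data. A minimal sketch of one common extraction route (assumed here, since the abstract does not give the exact procedure; all numbers below are synthetic) fits ln(j0) versus 1/T for a saturation current of the form j0 = j00·exp(-Ea/(A·kB·T)):

```python
import numpy as np

KB = 8.617e-5   # Boltzmann constant [eV/K]

def activation_energy(T, j0, A):
    """Arrhenius fit: j0 = j00 * exp(-Ea/(A*kB*T)); Ea follows from the
    slope of ln(j0) versus 1/T."""
    slope, _ = np.polyfit(1.0 / T, np.log(j0), 1)
    return -slope * A * KB

# synthetic dark saturation currents for an assumed Ea and diode ideality A
T = np.linspace(260.0, 340.0, 9)        # measurement temperatures [K]
EA_TRUE, A, J00 = 1.52, 1.5, 3.0e7
j0 = J00 * np.exp(-EA_TRUE / (A * KB * T))
ea = activation_energy(T, j0, A)        # recovers 1.52 eV on this noiseless data
```

On real J-V-T data the same linear fit applies, with the ideality factor A taken from the diode-law fit at each temperature.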

Relevance: 30.00%

Abstract:

An equivalent circuit model is applied in order to describe the operating characteristics of quantum dot intermediate band solar cells (QD-IBSCs); it accounts for the recombination paths from the conduction band (CB) to the intermediate band (IB), from the IB to the valence band (VB), and for the direct CB-VB transition. In this work, the measured dark J-V curves of QD-IBSCs (with the QD region either undoped or directly Si-doped n-type) and of a reference GaAs p-i-n solar cell (no QDs) were fitted with this model in order to extract the diode parameters. Simulations using the extracted diode parameters were then performed to evaluate the solar cell characteristics under concentration. In the case of the QDSC with Si-doped (hence partially filled) QDs, a fast recovery of the open-circuit voltage (Voc) was observed in the low-concentration range due to the IB effect. Furthermore, at around 100× concentration, the Si-doped QDSC could outperform the reference GaAs p-i-n solar cell if the IB current source were increased roughly sixteen-fold, to about 10 mA/cm², relative to our present cell.
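A minimal sketch of the kind of equivalent circuit named above, assuming the usual IBSC dark-current representation (a VB-CB diode in parallel with the series pair of IB-VB and CB-IB diodes) and purely illustrative saturation currents; this is not the authors' fitted model:

```python
import math

VT = 0.02585   # thermal voltage [V]

def j_diode(j0, v):
    return j0 * math.expm1(v / VT)

def dark_current(v, j0_cv, j0_ci, j0_iv):
    """Dark current of the assumed IBSC equivalent circuit: a VB-CB diode in
    parallel with the series pair of CB-IB and IB-VB diodes. The intermediate
    node voltage vm is found by bisection so that both series diodes carry
    the same current."""
    lo, hi = 0.0, v
    for _ in range(100):
        vm = 0.5 * (lo + hi)
        if j_diode(j0_ci, v - vm) > j_diode(j0_iv, vm):
            lo = vm          # CB-IB diode carries more: raise the IB node
        else:
            hi = vm
    vm = 0.5 * (lo + hi)
    return j_diode(j0_cv, v) + j_diode(j0_iv, vm)

# illustrative saturation currents [A/cm2]; larger j0 for the narrower sub-gaps
jd = [dark_current(v, 1e-20, 1e-8, 1e-12) for v in (0.6, 0.7, 0.8)]
```

The series connection is what makes the two-step IB path weaker than either sub-gap diode alone, which is why the dark current (and hence Voc) is not limited by the sub-gaps.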

Relevance: 30.00%

Abstract:

The consideration of real operating conditions in the design and optimization of a multijunction solar cell receiver-concentrator assembly is indispensable. Such a requirement involves the need for suitable modeling and simulation tools to complement the experimental work and circumvent its well-known burdens and restrictions. Three-dimensional distributed models have been demonstrated in the past to be a powerful choice for the analysis of distributed phenomena in single- and dual-junction solar cells, as well as for the design of strategies to minimize solar cell losses when operating under high concentrations. In this paper, we present the application of these models to the analysis of triple-junction solar cells under real operating conditions. The impact of different chromatic aberration profiles on the short-circuit current of triple-junction solar cells is analyzed in detail using the developed distributed model. Current spreading determines how strongly a given chromatic aberration profile affects the solar cell I-V curve. The focus is on determining the role of current spreading in the connection between the photocurrent profile, subcell voltage and current, and the sheet resistance of the semiconductor layers.
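The current-limiting effect of chromatic aberration on a series-connected stack can be illustrated with a lumped sketch using assumed ideal-diode parameters; the paper's 3D model additionally resolves lateral current spreading, which this sketch deliberately omits:

```python
import numpy as np

VT = 0.02585                              # thermal voltage [V]
J0 = np.array([5e-19, 3e-13, 1e-7])       # subcell saturation currents [A/cm2] (assumed)

def max_power(jph):
    """Maximum power of three ideal subcells in series: the terminal voltage
    at current j is the sum of the subcell diode voltages, and the current is
    capped by the least-illuminated subcell."""
    jph = np.asarray(jph)
    j = np.linspace(0.0, jph.min() * 0.999, 2000)
    v = np.sum(VT * np.log((jph[:, None] - j) / J0[:, None] + 1.0), axis=0)
    return np.max(j * v)

p_matched   = max_power([0.014, 0.014, 0.014])   # uniform spectral split [A/cm2]
p_aberrated = max_power([0.017, 0.014, 0.011])   # chromatic aberration, same total
# current mismatch caused by aberration reduces the extractable power
```

In the lumped limit the aberrated stack is pinned to the least-illuminated subcell's photocurrent; current spreading in the full distributed model softens (but does not remove) this penalty.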

Relevance: 30.00%

Abstract:

In this paper we present an analysis showing that the Maxwell Fish Eye (MFE) has super-resolution properties only at particular frequencies (at other frequencies, the MFE behaves as a conventional imaging lens). These frequencies are directly connected with the Schumann resonance frequencies of spherically symmetric systems. The analysis has been carried out using a thin spherical waveguide (two concentric spheres with a constant refractive index between them), which is a dual form of the MFE: the electric fields in the MFE can be mapped onto the radial electric fields in the spherical waveguide. In the spherical waveguide, the fields are guided in the space between the concentric spheres. A microwave circuit comprising three elements (the spherical waveguide, the source and the receiver, the latter two being coaxial cables) is designed in COMSOL. Super-resolution is demonstrated by calculating the scattering (S) parameters for different positions of the coaxial cables and different frequencies of the input signal.
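The Schumann resonances mentioned above have a simple closed form for an ideal thin spherical shell, f_n = c/(2·π·R·n_index)·sqrt(n·(n+1)). A short sketch (the Earth-sized cavity is used only as a sanity check, not as the paper's geometry):

```python
import math

C0 = 299_792_458.0   # speed of light in vacuum [m/s]

def schumann_frequencies(radius, n_modes, n_index=1.0):
    """Ideal Schumann resonances of a thin spherical shell of the given
    radius: f_n = c / (2*pi*radius*n_index) * sqrt(n*(n+1))."""
    return [C0 / (2 * math.pi * radius * n_index) * math.sqrt(n * (n + 1))
            for n in range(1, n_modes + 1)]

# Earth-sized shell as a sanity check: the lowest ideal mode is ~10.6 Hz
f = schumann_frequencies(6.371e6, 3)
```

For a laboratory-scale waveguide the same formula, with the shell radius and filling index substituted, gives the discrete frequency comb at which super-resolution is expected.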

Relevance: 30.00%

Abstract:

In the linear world, classical microwave circuit design relies on s-parameters, owing to their ability to successfully characterize the behavior of any linear circuit. The direct use of s-parameters in measurement systems and in linear simulation tools has led to their extensive use and success in the design and characterization of microwave circuits and subsystems. Nevertheless, despite this success in the microwave community, the main drawback of the s-parameter formulation is its limited ability to predict the behavior of real non-linear systems.

Nowadays, one of the main challenges for microwave designers is the development of an analogous framework that integrates non-linear modeling, large-signal measurement hardware and non-linear simulation environments, in order to extend the capabilities of s-parameters to the non-linear regime and thus provide an infrastructure for non-linear design and test in a reliable and efficient way. Recently, different attempts to provide this common platform have been introduced, such as the Cardiff model and the Agilent X-parameters. Hence, one objective of this Thesis is to analyze the capability of X-parameters to provide this non-linear design and test framework in a CAD-based oscillator context, for microwave transceivers.

Furthermore, the classical analysis and design of linear microwave transistor-based circuits rests on simple analytical methods, involving the transistor s-parameters, that quickly provide the source and load impedances required to meet design specifications for gain, output power, efficiency or input/output match, as well as analytical expressions for key design parameters such as the stability factor or the power gain contours. The development of similar analytical design tools that extend these small-signal capabilities to non-linear applications is a new challenge faced in the present work. Therefore, the development of an analytical design framework based on load-independent X-parameters, playing a role in non-linear design similar to that of s-parameters in linear microwave design, constitutes the core of this Thesis. These analytical non-linear design approaches would significantly improve current large-signal design procedures and dramatically decrease the required design time, yielding more efficient techniques.
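As a concrete example of the closed-form small-signal design parameters the text refers to, the Rollett stability factor follows directly from the transistor s-parameters; the numerical values below are illustrative, not taken from the thesis:

```python
import cmath

def rollett_k(s11, s12, s21, s22):
    """Rollett stability factor k and |Delta| from small-signal s-parameters.
    k > 1 together with |Delta| < 1 means unconditional stability."""
    delta = s11 * s22 - s12 * s21
    k = (1 - abs(s11) ** 2 - abs(s22) ** 2 + abs(delta) ** 2) / (2 * abs(s12 * s21))
    return k, abs(delta)

def p(mag, deg):
    """Helper: build a complex number from magnitude and phase in degrees."""
    return cmath.rect(mag, math_deg := cmath.pi * deg / 180.0)

# illustrative transistor s-parameters at one frequency (assumed values)
k, mag_delta = rollett_k(p(0.6, -160), p(0.05, 30), p(2.5, 80), p(0.5, -30))
# here k > 1 and |Delta| < 1, so the device is unconditionally stable
```

The thesis's goal is precisely an analogous closed-form toolbox (stability, gain contours, loading conditions) built on X-parameters instead of s-parameters.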

Relevance: 30.00%

Abstract:

Graphics Processing Units (GPUs) have become a booster for the microelectronics industry. However, due to intellectual property issues, there is a serious lack of information on the implementation details of the hardware architecture behind GPUs. For instance, the way texture is handled and decompressed in a GPU to reduce bandwidth usage has never been dealt with in depth from a hardware point of view. This work presents a comparative study of hardware implementations of different texture decompression algorithms for both conventional (PC and video game console) and mobile platforms. Circuit synthesis is performed targeting both a reconfigurable hardware platform and a 90 nm standard cell library. Area-delay trade-offs have been extensively analyzed, which allows us to compare the complexity of the decompressors and thus determine the suitability of each algorithm for systems with limited hardware resources.
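As a concrete reference for what such decompressors compute, here is a software sketch of one widely used block-compression format, BC1/DXT1; the study itself compares hardware implementations, and this format is only an assumed example of the algorithm family:

```python
import struct

def rgb565(v):
    """Expand a packed RGB565 value to 8-bit-per-channel RGB."""
    r, g, b = (v >> 11) & 0x1F, (v >> 5) & 0x3F, v & 0x1F
    return (r * 255 // 31, g * 255 // 63, b * 255 // 31)

def decode_bc1_block(block):
    """Decode one 8-byte BC1/DXT1 block into a 4x4 grid of RGB tuples."""
    c0, c1, bits = struct.unpack('<HHI', block)   # two endpoints + 16 2-bit indices
    p0, p1 = rgb565(c0), rgb565(c1)
    if c0 > c1:      # 4-colour mode: two interpolated intermediate colours
        pal = [p0, p1,
               tuple((2 * a + b) // 3 for a, b in zip(p0, p1)),
               tuple((a + 2 * b) // 3 for a, b in zip(p0, p1))]
    else:            # 3-colour mode: midpoint plus (transparent) black
        pal = [p0, p1, tuple((a + b) // 2 for a, b in zip(p0, p1)), (0, 0, 0)]
    return [[pal[(bits >> (2 * (4 * row + col))) & 0b11] for col in range(4)]
            for row in range(4)]
```

The hardware cost the study analyzes comes from exactly these steps: endpoint expansion, the palette interpolators and the per-texel index multiplexing.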

Relevance: 30.00%

Abstract:

This paper describes the practical design of a portable capacitive load based on insulated gate bipolar transistors (IGBTs), used to measure the I-V characteristics of PV arrays with short-circuit currents up to 80 A and open-circuit voltages up to 800 V. Such measurements allow the on-site characterization of PV arrays under real operating conditions and also provide information for detecting potential array anomalies, such as broken cells or defective connections. The presented I-V load is easy to reproduce and low-cost, characteristics that put it within the reach of small-scale organizations involved in PV electrification projects.
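The principle behind a capacitive load is that the array's operating point sweeps from short circuit toward open circuit while the capacitor charges, so the I-V curve follows from i = C·dv/dt applied to the recorded voltage ramp. A minimal sketch with a synthetic transient (all values assumed, not the paper's data):

```python
import numpy as np

def iv_from_capacitor(t, v, capacitance):
    """Recover the I-V curve from the voltage ramp recorded while the
    capacitive load charges from short circuit toward open circuit:
    i = C * dv/dt (centred differences in the interior)."""
    return v, capacitance * np.gradient(v, t)

# synthetic charging transient for illustration
C, VOC, TAU = 0.1, 700.0, 1.0            # 100 mF load, 700 V array, 1 s constant
t = np.linspace(0.0, 5.0, 1001)          # sample times [s]
v_meas = VOC * (1.0 - np.exp(-t / TAU))  # recorded voltage ramp [V]
volts, amps = iv_from_capacitor(t, v_meas, C)
# amps[0] approximates the short-circuit current C*VOC/TAU = 70 A
```

On real measurements the derivative amplifies sampling noise, so some smoothing of v(t) before differentiation is usually needed.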

Relevance: 30.00%

Abstract:

Single-core processors have reached their maximum clock speeds; new multicore architectures provide an alternative way to tackle this issue. The design of decoding applications running on top of these multicore platforms, and their optimization to exploit all of the system's computational power, is crucial to obtaining the best results. Since development at the integration level of printed circuit boards is increasingly difficult to optimize, due to physical constraints and the inherent increase in power consumption, the development of multiprocessor architectures is becoming the new Holy Grail. In this sense, it is crucial to develop applications that can run on the new multi-core architectures and to find distributions that maximize the potential use of the system. Today, most commercial electronic devices available on the market are built around embedded systems, and these devices have recently begun to incorporate multi-core processors. Task management across multiple cores/processors is not a trivial issue, and good task/actor scheduling can yield significant improvements in both efficiency and processor power consumption. Scheduling the data flows between the actors that implement an application aims to harness multi-core architectures for more types of applications, with an explicit expression of parallelism in the application. On the other hand, the recent development of the MPEG Reconfigurable Video Coding (RVC) standard allows the reconfiguration of video decoders. RVC is a flexible standard compatible with MPEG-developed codecs, making it the ideal tool to integrate into the new multimedia terminals for decoding video sequences. With the new versions of the Open RVC-CAL Compiler (Orcc), a static mapping of the actors that implement the functionality of the application can be performed once the application executable has been generated. This static mapping must be done for each of the different cores available on the target platform. An embedded system with a dual-core ARMv7 processor has been chosen. This platform allows us to run the desired tests, measure the improvement over execution on a single core, and contrast both with a PC-based multiprocessor system.
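A static actor-to-core mapping of the kind described can be sketched with a simple greedy longest-processing-time heuristic; the actor names and workloads below are hypothetical, and Orcc's actual mapping strategy may differ:

```python
import heapq

def map_actors(workloads, n_cores=2):
    """Greedy LPT static mapping: assign the heaviest remaining actor to the
    currently least-loaded core."""
    heap = [(0.0, core) for core in range(n_cores)]
    heapq.heapify(heap)
    assignment = {}
    for actor, w in sorted(workloads.items(), key=lambda kv: -kv[1]):
        load, core = heapq.heappop(heap)   # least-loaded core so far
        assignment[actor] = core
        heapq.heappush(heap, (load + w, core))
    return assignment

# hypothetical per-actor workloads of an RVC-CAL video decoder (arbitrary units)
actors = {'parser': 2.0, 'idct': 5.0, 'motion_comp': 4.0, 'deblock': 3.0}
mapping = map_actors(actors, n_cores=2)
```

For a real decoder the workloads would come from profiling each actor, and communication costs between actors on different cores would also enter the objective.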

Relevance: 30.00%

Abstract:

This work is a contribution to the research and development of the intermediate band solar cell (IBSC), a high-efficiency photovoltaic concept that combines the advantages of both low- and high-bandgap solar cells. The resemblance to a low-bandgap solar cell comes from the fact that the IBSC hosts an electronic energy band, the intermediate band (IB), within the semiconductor bandgap. This IB allows the collection of sub-bandgap-energy photons by means of two-step photon absorption processes, from the valence band (VB) to the IB and from there to the conduction band (CB). The exploitation of these low-energy photons implies a more efficient use of the solar spectrum. The resemblance of the IBSC to a high-bandgap solar cell is related to the preservation of the voltage: the open-circuit voltage (Voc) of an IBSC is not limited by any of the sub-bandgaps (involving the IB), but only by the fundamental bandgap (defined from the VB to the CB). Nevertheless, the presence of the IB opens new paths for electronic recombination, and the performance of the IBSC is degraded under 1-sun operating conditions. A theoretical argument is presented regarding the need for concentrated illumination in order to circumvent the voltage degradation caused by the increased recombination. This theory is supported by the experimental verification carried out with our novel characterization technique, which consists of acquiring photogenerated current (IL)-Voc pairs at low temperature and under concentrated light. Besides, at this stage of IBSC research, several new IB materials are being engineered, and our novel characterization tool can be very useful for providing feedback on their capability to perform as real IBSCs, verifying or disproving the fulfillment of the "voltage preservation" principle.

An analytical model has also been developed to assess the potential of quantum-dot (QD) IBSCs. It is based on the calculation of the band alignment of III-V alloyed heterojunctions, the estimation of the confined energy levels in a QD, and the calculation of the detailed balance efficiency. Several potentially useful QD materials have been identified, such as InAs/AlxGa1-xAs, InAs/GaxIn1-xP, InAs1-yNy/AlAsxSb1-x and InAs1-zNz/Alx[GayIn1-y]1-xP. Finally, a model for the analysis of the series resistance of a concentrator solar cell has also been developed in order to design and fabricate IBSCs adapted to 1,000 suns.
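The detailed balance calculation mentioned above builds on the standard single-gap photon-flux balance. A minimal sketch of that building block (a fully concentrated 6000 K blackbody sun is assumed, and the IB transitions are deliberately left out):

```python
import numpy as np

KB = 8.617e-5                        # Boltzmann constant [eV/K]
TS, TC = 6000.0, 300.0               # sun and cell temperatures [K]
E = np.linspace(0.02, 10.0, 20000)   # photon energy grid [eV]
DE = E[1] - E[0]

def flux(eg, T, mu=0.0):
    """Blackbody photon flux above eg with chemical potential mu; the common
    2*pi/(h^3 c^2) prefactor cancels in the efficiency ratio."""
    m = E >= eg
    return np.sum(E[m] ** 2 / np.expm1((E[m] - mu) / (KB * T))) * DE

P_IN = np.sum(E ** 3 / np.expm1(E / (KB * TS))) * DE   # incident power, same units

def efficiency(eg):
    """Detailed-balance efficiency of a single-gap cell, full concentration:
    current = absorbed solar flux minus the cell's own luminescent emission."""
    absorbed = flux(eg, TS)
    vs = np.linspace(0.0, eg - 0.02, 300)
    power = [v * (absorbed - flux(eg, TC, mu=v)) for v in vs]
    return max(power) / P_IN

gaps = np.arange(0.8, 1.61, 0.1)
effs = [efficiency(g) for g in gaps]
best = max(effs)        # ~0.40 near 1.1 eV under these assumptions
```

The IBSC detailed-balance calculation extends this balance with the two sub-gap fluxes (VB-IB and IB-CB) constrained to carry equal current, which is what pushes the limiting efficiency above the single-gap value.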

Relevance: 30.00%

Abstract:

Abstract

Temperature is a first-class design concern in modern integrated circuits.
The important increase in power densities brought by recent technology generations has led to the appearance of thermal gradients and hot spots during run-time operation. Temperature negatively impacts several circuit parameters such as speed, cooling budgets, reliability, power consumption, etc. In order to fight these negative effects, dynamic thermal management (DTM) techniques adapt the behavior of the chip relying on information from a monitoring system that provides run-time thermal information of the die surface. The field of on-chip temperature monitoring has drawn the attention of the scientific community in recent years and is the subject of this thesis. This thesis approaches on-chip temperature monitoring from different perspectives and levels, providing solutions to some of the most important issues. The physical and circuit levels are covered with the design and characterization of two novel temperature sensors specially tailored for DTM purposes. The first sensor is based on a mechanism that obtains a pulse whose width depends on the variation of leakage currents with temperature. In a nutshell, a circuit node is charged and subsequently left floating so that it discharges through the subthreshold currents of a transistor; the time the node takes to discharge is the width of the pulse. Since the pulse width displays an exponential dependence on temperature, the conversion into a digital word is performed by a logarithmic counter that carries out both the time-to-digital conversion and the linearization of the output.
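The pulse-width mechanism and its logarithmic linearization can be sketched numerically. All constants below (activation energy, reference pulse width, clock frequency) are illustrative assumptions, not values from the thesis; the point is only to show how a log2 counter flattens the exponential temperature dependence of the discharge time.

```python
import math

# Hypothetical constants for illustration only (not from the thesis).
K_B = 8.617e-5      # Boltzmann constant, eV/K
E_A = 0.5           # assumed activation energy of the leakage current, eV
T0_WIDTH = 1e-3     # assumed pulse width at the reference temperature, s
T_REF = 300.0       # reference temperature, K
F_CLK = 10e6        # assumed clock frequency of the counter, Hz

def pulse_width(temp_k):
    """Discharge time of the floating node: leakage grows exponentially
    with temperature, so the pulse narrows exponentially as T rises."""
    leak_ratio = math.exp(-E_A / (K_B * temp_k)) / math.exp(-E_A / (K_B * T_REF))
    return T0_WIDTH / leak_ratio

def log_counter(width_s):
    """Logarithmic counter: counts clock cycles during the pulse and
    outputs log2 of the count, compressing the exponential law into a
    code that is nearly linear over a narrow operating range."""
    cycles = max(1, int(width_s * F_CLK))
    return math.log2(cycles)

# Higher temperature -> more leakage -> shorter pulse -> smaller code.
codes = {t: log_counter(pulse_width(t)) for t in (280, 300, 320, 340)}
```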
The structure resulting from this combination of elements is implemented in a 0.35 µm technology and is characterized by a very small area, 10,250 nm², and very low power consumption, 1.05-65.5 nW at 5 samples/s; these figures outperformed all previous works when first published and, at the time of publication of this thesis, still surpass all previous implementations in the same technology node. Concerning accuracy, the sensor exhibits good linearity even without calibration, displaying a 3σ error of 1.97 °C, appropriate for DTM applications. As explained, the sensor is completely compatible with standard CMOS processes; this fact, along with its tiny area and power overhead, makes it especially suitable for integration in a DTM monitoring system with a collection of on-chip monitors distributed across the chip. The exacerbated process fluctuations of recent technology nodes jeopardize the linearity characteristics of the first sensor. In order to overcome these problems, a new temperature-inferring technique is proposed. In this case, we also rely on the thermal dependencies of leakage currents that are used to discharge a floating node, but now the result comes from the ratio of two different measures, in one of which we alter a characteristic of the discharging transistor (the gate voltage). This ratio proves to be very robust against process variations and displays more than sufficient linearity with temperature: a 3σ error of 1.17 °C considering process variations and performing two-point calibration. The implementation of the sensing part based on this new technique raises several design issues, such as the generation of a process-variation-independent voltage reference, that are analyzed in depth in the thesis. In order to perform the time-to-digital conversion, we employ the same digitization structure as the first sensor.
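The robustness of the ratio technique can be illustrated with the textbook subthreshold current model I = I0 * exp((VGS - VTH) / (n * kT/q)). In this sketch (the gate voltages, slope factor, and process-corner distributions are assumptions, not the thesis's device data), the process-dependent terms I0 and VTH cancel exactly in the ratio of the two discharge times, leaving a quantity proportional to 1/T:

```python
import math
import random

Q_OVER_K = 11604.5    # q/k in K/V (reciprocal of the thermal voltage per kelvin)
N_SLOPE = 1.3         # assumed subthreshold slope factor
VG1, VG2 = 0.0, 0.1   # assumed gate voltages of the two measurements, V

def discharge_time(temp_k, vgate, vth, i0):
    """Time to discharge the node through subthreshold leakage; the
    capacitance and voltage swing are absorbed into i0 for simplicity."""
    vt = temp_k / Q_OVER_K                                   # kT/q
    current = i0 * math.exp((vgate - vth) / (N_SLOPE * vt))
    return 1.0 / current

def ratio_measure(temp_k, vth, i0):
    """ln of the ratio of the two discharge times: i0 and vth cancel,
    leaving (VG2 - VG1) / (n * kT/q), i.e. a quantity ~ 1/T."""
    t1 = discharge_time(temp_k, VG1, vth, i0)
    t2 = discharge_time(temp_k, VG2, vth, i0)
    return math.log(t1 / t2)

random.seed(0)
# Sweep random process corners (vth and i0 vary); the single discharge
# time varies wildly, but the ratio measure barely moves.
readings = [ratio_measure(300.0, random.gauss(0.45, 0.05), random.gauss(1e-12, 2e-13))
            for _ in range(100)]
```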
A completely new standard cell library targeting low area and power overhead is built from scratch to implement the digitization part. Putting all the pieces together, we achieve a complete sensor system characterized by an ultra-low energy per conversion of 48-640 pJ and an area of 0.0016 mm²; this figure outperforms all previous works. To prove this statement, we perform a thorough comparison with over 40 works from the scientific literature. Moving up to the system level, the third contribution is centered on the modeling of a monitoring system consisting of a set of thermal sensors distributed across the chip. All previous works in the literature target maximizing the accuracy of the system with the minimum number of monitors. In contrast, we introduce new quality metrics apart from just the number of sensors: we consider the power consumption, the sampling frequency, the possibility of choosing among different types of monitors, and the interconnection costs. The model is introduced in a simulated annealing algorithm that receives the thermal information of a system, its physical properties, area, power and interconnection constraints, and a collection of monitor types; the algorithm yields the selected type of monitor, the number of monitors, their positions, and the optimum sampling rate. We test the algorithm with the Alpha 21364 processor under several constraint configurations to prove its validity. Compared to other previous works in the literature, the modeling presented here is the most complete. Finally, the last contribution targets the networking level: given an allocated set of temperature monitors, we focus on solving the problem of connecting them in a way that is efficient in terms of area and power.
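A toy version of the simulated annealing allocation conveys the flow: given a thermal map and a monitor budget, the annealer perturbs sensor positions and keeps the placement with the lowest estimation error. The 8x8 map, the nearest-sensor estimation cost, and the linear cooling schedule are simplifying assumptions; the thesis model additionally weighs power, sampling rate, interconnection cost, and monitor type.

```python
import math
import random

random.seed(1)
GRID = 8
N_SENSORS = 4
# Toy thermal map with one hot spot (a stand-in for the real thermal
# traces the algorithm would receive as input).
temp = [[40 + 30 * math.exp(-((x - 6) ** 2 + (y - 1) ** 2) / 8.0)
         for x in range(GRID)] for y in range(GRID)]

def cost(sensors):
    """Estimation error: every cell is estimated by its nearest sensor."""
    err = 0.0
    for y in range(GRID):
        for x in range(GRID):
            sx, sy = min(sensors, key=lambda s: (s[0] - x) ** 2 + (s[1] - y) ** 2)
            err += abs(temp[y][x] - temp[sy][sx])
    return err

def anneal(steps=3000, t_start=50.0):
    sensors = [(random.randrange(GRID), random.randrange(GRID)) for _ in range(N_SENSORS)]
    best, best_cost = list(sensors), cost(sensors)
    cur_cost = best_cost
    for i in range(steps):
        t_sa = t_start * (1 - i / steps) + 1e-3        # linear cooling schedule
        cand = list(sensors)
        cand[random.randrange(N_SENSORS)] = (random.randrange(GRID), random.randrange(GRID))
        c = cost(cand)
        # Accept improvements always; accept worse moves with a probability
        # that shrinks as the annealing temperature cools.
        if c < cur_cost or random.random() < math.exp((cur_cost - c) / t_sa):
            sensors, cur_cost = cand, c
            if c < best_cost:
                best, best_cost = list(cand), c
    return best, best_cost

best_sensors, best_err = anneal()
```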
Our first proposal in this area is the introduction of a new interconnection hierarchy level, the threshing level, in between the monitors and the traditional peripheral buses, which applies data selectivity to reduce the amount of information sent to the central controller. The idea behind this new level is that in this kind of network most data are useless, because from the controller's viewpoint only a small amount of data (normally just the extreme values) is of interest. To cover the new interconnection level, we propose a single-wire monitoring network based on a time-domain signaling scheme that significantly reduces both the switching activity over the wire and the power consumption of the network. This scheme codes the information in the time domain and allows straightforward retrieval of an ordered list of values from the maximum to the minimum. If the scheme is applied to monitors that employ time-to-digital conversion (TDC), digitization resources can be shared, producing an important saving in area and power consumption. Two prototypes of complete monitoring systems are presented; they significantly outperform previous works in terms of area and, especially, power consumption.
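The time-domain signaling idea can be modeled in a few lines: each monitor converts its code into a firing delay on the shared wire, so pulses reach the controller already ordered and the controller can discard everything after the few extreme values it cares about. The slot length, code width, and hottest-first mapping below are illustrative assumptions, not the thesis circuits.

```python
SLOT_NS = 10        # assumed time-slot length per code step, ns
FULL_SCALE = 255    # assumed 8-bit monitor output

def firing_times(readings):
    """Map each monitor's digital code to a pulse time on the shared wire:
    the higher the reading, the earlier the pulse (hottest fires first)."""
    return sorted((SLOT_NS * (FULL_SCALE - code), mon_id)
                  for mon_id, code in readings.items())

def controller(readings, keep=3):
    """The central controller latches only the first `keep` pulses, i.e.
    the hottest monitors, which is the data selectivity applied at the
    threshing level; each arrival time maps back to the original code."""
    return [(mon_id, FULL_SCALE - t // SLOT_NS)
            for t, mon_id in firing_times(readings)[:keep]]

# Hypothetical monitor readings (names are illustrative).
monitors = {"core0": 180, "core1": 212, "cache": 145, "noc": 190}
hottest = controller(monitors)   # arrives ordered hottest-first
```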

Relevância:

30.00% 30.00%

Publicador:

Resumo:

In this paper, a model for intermediate-band solar cells is built based on the generally understood physical concepts ruling semiconductor device operation, with special emphasis on the behavior at low temperature. The model is compared to JL-VOC measurements at concentrations up to about 1000 suns and at temperatures down to 20 K, as well as to measurements of the radiative recombination obtained from electroluminescence. The agreement is reasonable. It is found that the main reason for the reduction of open-circuit voltage is an operational reduction of the bandgap, but this effect disappears at high concentrations or at low temperatures.
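A back-of-the-envelope diode-law sketch shows why an operational bandgap reduction maps almost one-to-one onto a V_OC loss. All constants and the simple J0 proportional to exp(-Eg/kT) dark current are assumptions for illustration, not the paper's model; the high-concentration case uses the nominal gap, mimicking the finding that the operational reduction disappears there.

```python
import math

K_T_Q_300 = 0.02585   # thermal voltage at 300 K, V
J0_PREF = 1e5         # assumed dark-saturation prefactor, A/cm^2
JSC_1SUN = 0.03       # assumed 1-sun photogenerated current, A/cm^2

def v_oc(eg_eff, suns, vt=K_T_Q_300):
    """Open-circuit voltage from the ideal diode equation, with a dark
    current that grows exponentially as the effective bandgap shrinks."""
    j0 = J0_PREF * math.exp(-eg_eff / vt)
    return vt * math.log(suns * JSC_1SUN / j0 + 1.0)

voc_full = v_oc(1.40, suns=1)       # nominal bandgap, 1 sun
voc_reduced = v_oc(1.25, suns=1)    # operationally reduced bandgap, 1 sun
# At high concentration the operational reduction is found to disappear,
# so the nominal gap applies; concentration also adds vt*ln(suns) to V_OC.
voc_conc = v_oc(1.40, suns=1000)
```

With these numbers the V_OC deficit equals the assumed 0.15 eV effective bandgap reduction almost exactly, since the kT/q factors cancel in the difference of the two logarithms.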