990 results for tomografia, system-on-chip, CUDA, calcolo parallelo, GPU
Abstract:
This thesis investigates the design and implementation of a label-free optical biosensing system utilizing a robust on-chip integrated platform. The goal has been to transition optical micro-resonator based label-free biosensing from a laborious and delicate laboratory demonstration to a tool for the analytical life scientist. This has been pursued along four avenues: (1) the design and fabrication of high-$Q$ integrated planar microdisk optical resonators in silicon nitride on silica, (2) the demonstration of a high-speed optoelectronic swept-frequency laser source, (3) the development and integration of a microfluidic analyte delivery system, and (4) the introduction of a novel differential measurement technique for the reduction of environmental noise.
The optical part of this system combines the results of two major recent developments in the field of optical and laser physics: the high-$Q$ optical resonator and the phase-locked electronically controlled swept-frequency semiconductor laser. The laser operates at a wavelength relevant for aqueous sensing, and replaces expensive and fragile mechanically-tuned laser sources whose frequency sweeps have limited speed, accuracy and reliability. The high-$Q$ optical resonator is part of a monolithic unit with an integrated optical waveguide, and is fabricated using standard semiconductor lithography methods. Monolithic integration makes the system significantly more robust and flexible compared to current embodiments that rely on the precarious coupling of fragile optical fibers to resonators. The silicon nitride on silica material system allows for future implementations at shorter wavelengths. The sensor also includes an integrated microfluidic flow cell for precise and low volume delivery of analytes to the resonator surface. We demonstrate the refractive index sensing action of the system as well as the specific and nonspecific adsorption of proteins onto the resonator surface with high sensitivity. Measurement challenges due to environmental noise that hamper system performance are discussed, and a differential sensing measurement is proposed, implemented, and demonstrated, restoring high-performance sensing.
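For context, the refractive-index sensing readout described above follows the standard first-order resonance-shift relation for traveling-wave resonators; this is textbook background with generic symbols, not a formula quoted from the thesis.

```latex
% Resonance condition for azimuthal order m, effective index n_eff, and
% resonator round-trip length L:
%   m \lambda_m = n_{\mathrm{eff}} L .
% A small index change \Delta n_{\mathrm{eff}} near the disk surface
% (e.g. from adsorbed protein) shifts the resonance wavelength, to first
% order, by
\frac{\Delta \lambda}{\lambda} \approx \frac{\Delta n_{\mathrm{eff}}}{n_g}
% where n_g is the group index. The swept-frequency laser converts this
% wavelength shift into a shift of the resonance dip in the sweep record.
```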
The instrument developed in this work represents an adaptable and cost-effective platform capable of various sensitive, label-free measurements relevant to the study of biophysics, biomolecular interactions, cell signaling, and a wide range of other life science fields. Further development is necessary for it to be capable of binding assays or thermodynamic and kinetic measurements; however, this work has laid the foundation for the demonstration of these applications.
Abstract:
Researchers have spent decades refining and improving their methods for fabricating smaller, finer-tuned, higher-quality nanoscale optical elements, with the goal of making more sensitive and accurate optical measurements of the world around them. Quantum optics has been a well-established tool of choice for making these increasingly sensitive measurements, which have repeatedly pushed the limits on measurement accuracy set forth by quantum mechanics. A recent development in quantum optics has been the creative integration of robust, high-quality, and well-established macroscopic experimental systems with highly engineerable on-chip nanoscale oscillators fabricated in cleanrooms. However, merging large systems with nanoscale oscillators often requires the oscillators to have extremely high aspect ratios, which makes them extremely delicate and difficult to fabricate with experimentally reasonable repeatability, yield, and quality. In this work we give an overview of our research, which focused on microscopic oscillators coupled to macroscopic optical cavities, with the goal of cooling them to their motional ground state in room-temperature environments. The quality factor of a mechanical resonator is an important figure of merit for various sensing applications and for observing quantum behavior. We demonstrated a technique for pushing the quality factor of a micromechanical resonator beyond conventional material and fabrication limits by using an optical field to stiffen and trap a particular motional mode of a nanoscale oscillator. Optical forces increase the oscillation frequency by storing most of the mechanical energy in a nearly lossless optical potential, thereby strongly diluting the effects of material dissipation. By placing a 130 nm thick SiO2 pendulum in an optical standing wave, we achieve an increase in the pendulum center-of-mass frequency from 6.2 to 145 kHz. The corresponding quality factor increases 50-fold from its intrinsic value to a final value of Qm = 5.8(1.1) × 10^5, representing more than an order of magnitude improvement over the conventional limits of SiO2 for a pendulum geometry. Our technique may enable new opportunities for mechanical sensing and facilitate observations of quantum behavior in this class of mechanical systems. We then give a detailed overview of the techniques used to produce high-aspect-ratio nanostructures with applications in a wide range of quantum optics experiments. The ability to fabricate such nanodevices with high precision opens the door to a vast array of experiments that integrate macroscopic optical setups with lithographically engineered nanodevices. Coupled with atom-trapping experiments in the Kimble Lab, we use these techniques to realize a new waveguide chip designed to address ultra-cold atoms along lithographically patterned nanobeams that offer large atom-photon coupling and nearly 4π steradians of optical access for cooling and trapping atoms. We describe a fully integrated and scalable design where cold atoms are spatially overlapped with the nanostring cavities in order to observe a resonant optical depth of d0 ≈ 0.15. The nanodevice opens new possibilities for integrating atoms into photonic circuits and engineering quantum states of atoms and light on a microscopic scale. We then describe our work with superconducting microwave resonators coupled to a phononic cavity towards the goal of building an integrated device for quantum-limited microwave-to-optical wavelength conversion.
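For context, the frequency increase and quality-factor gain quoted above are connected by the standard dissipation-dilution relation; the bound below is textbook background under the assumption of a perfectly lossless optical trap, not a formula quoted from the thesis.

```latex
% Dissipation dilution: the optical trap adds a (nearly) lossless spring
% k_opt on top of the lossy material spring k_m. Material damping still acts
% only on the energy stored in k_m, so in the ideal lossless-trap limit
Q_{\mathrm{trapped}} \approx Q_{\mathrm{int}}\,\frac{k_m + k_{\mathrm{opt}}}{k_m}
  = Q_{\mathrm{int}}\left(\frac{f_{\mathrm{trapped}}}{f_{\mathrm{int}}}\right)^{2}
% With f_int = 6.2 kHz and f_trapped = 145 kHz this bound is ~550x; the
% observed ~50x improvement indicates residual loss in the optical trap.
```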
We give an overview of our characterizations of several types of substrates for fabricating a low-loss, high-frequency electromechanical system. We describe our electromechanical system fabricated on a Si3N4 membrane, which consists of a 12 GHz superconducting LC resonator coupled capacitively to the high-frequency localized modes of a phononic nanobeam. Using our suspended membrane geometry we isolate our system from substrates with significant loss tangents, drastically reducing the parasitic capacitance of our superconducting circuit to ≈ 2.5 fF. This opens up a number of possibilities for a new class of low-loss, high-frequency electromechanics with relatively large electromechanical coupling. We present our substrate studies, fabrication methods, and device characterization.
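As a rough sanity check, hedged and not taken from the abstract, the quoted 12 GHz resonance and ≈ 2.5 fF capacitance imply an inductance of order 70 nH, assuming that this capacitance dominates the resonator:

```latex
% LC resonance: f_0 = 1/(2\pi\sqrt{LC}). Solving for L with f_0 = 12 GHz and
% C \approx 2.5 fF (assumption: the parasitic capacitance dominates):
L = \frac{1}{(2\pi f_0)^2 C}
  = \frac{1}{\left(2\pi\cdot 12\times10^{9}\,\mathrm{Hz}\right)^{2}\cdot 2.5\times10^{-15}\,\mathrm{F}}
  \approx 70\,\mathrm{nH}
% A small C forces a large L, i.e. a high-impedance circuit, which is what
% favors relatively large electromechanical coupling.
```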
Abstract:
We demonstrate the tunability of a silicon nitride micro-resonator using the concept of digital microfluidics. Our system allows driving micro-droplets on-chip, enabling control of the effective refractive index in the vicinity of the resonator. © 2010 OSA/FiO/LS 2010.
Abstract:
A 3rd-order complex band-pass filter (BPF) with auto-tuning architecture is proposed in this paper. It is implemented in a 0.18 µm standard CMOS technology. The complex filter is centered at 4.092 MHz with a bandwidth of 2.4 MHz. The in-band 3rd-order harmonic input intercept point (IIP3) is larger than 16.2 dBm, with 50 Ω as the source impedance. The input-referred noise is about 80 µVrms. The RC tuning is based on a Binary Search Algorithm (BSA) with a tuning accuracy of 3%. The chip area of the tuning system is 0.28 × 0.22 mm², less than 1/8 that of the main filter, which is 0.92 × 0.59 mm². After tuning is completed, the tuning system is turned off automatically to save power and to avoid interference. The complex filter consumes 2.6 mA from a 1.8 V power supply.
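The abstract names a Binary Search Algorithm for the RC tuning but gives no implementation details; the sketch below is a hypothetical successive-approximation trim loop with invented names (`tune_rc`, `measure_rc_is_slow`, `N_BITS`) and an assumed monotonic trim law, not the paper's circuit.

```python
# Hypothetical sketch of binary-search RC tuning: a successive-approximation
# loop sets a binary-weighted capacitor trim word until the measured RC time
# constant crosses the reference. `measure_rc_is_slow` stands in for the
# on-chip comparator and is an assumption, not the paper's circuit.

N_BITS = 5  # assumed trim-word width

def tune_rc(measure_rc_is_slow) -> int:
    """Return the trim code found by MSB-first binary search."""
    code = 0
    for bit in reversed(range(N_BITS)):
        trial = code | (1 << bit)         # tentatively set this bit
        if measure_rc_is_slow(trial):     # still too slow -> keep the bit
            code = trial
        # otherwise drop the bit and try the next lower weight
    return code

# Toy usage: an assumed monotonic trim law, searched toward a target constant.
if __name__ == "__main__":
    target = 17.3                          # arbitrary units
    rc = lambda c: 30.0 - 0.5 * c          # more code -> faster RC (assumed)
    best = tune_rc(lambda c: rc(c) > target)
    print(best, rc(best))                  # -> 25 17.5
```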
Abstract:
A 3rd-order complex band-pass filter (BPF) with auto-tuning architecture is proposed in this paper. It is implemented in a 0.18 µm standard CMOS technology. The complex filter is centered at 4.092 MHz with a bandwidth of 2.4 MHz. The in-band 3rd-order harmonic input intercept point (IIP3) is larger than 19 dBm, with 50 Ω as the source impedance. The input-referred noise is about 80 µVrms. The RC tuning is based on a Binary Search Algorithm (BSA) with a tuning accuracy of 3%. The chip area of the tuning system is 0.28 × 0.22 mm², less than 1/8 that of the main filter, which is 0.92 × 0.59 mm². After tuning is completed, the tuning system is turned off automatically to save power and to avoid interference. The complex filter consumes 2.6 mA from a 1.8 V power supply.
Abstract:
An asymmetric MOSFET-C band-pass filter (BPF) with on-chip charge-pump auto-tuning is presented. It is implemented in the UMC (United Microelectronics Corporation) 0.18 µm CMOS process technology. The filter system with auto-tuning uses a master-slave technique for continuous tuning, in which the charge pump outputs 2.663 V, much higher than the power supply voltage, to improve the linearity of the filter. The main filter, with third-order low-pass and second-order high-pass characteristics, is an asymmetric band-pass filter with a bandwidth of 2.730-5.340 MHz. The in-band third-order harmonic input intercept point (IIP3) is 16.621 dBm, with 50 Ω as the source impedance. The input-referred noise is about 47.455 µVrms. The main filter dissipates 3.528 mW while the auto-tuning system dissipates 2.412 mW from a 1.8 V power supply. The filter with the auto-tuning system occupies 0.592 mm² and can be utilized in GPS (Global Positioning System) and Bluetooth systems.
Abstract:
In this study, we describe a simple and efficient method for the on-chip storage of reagents for point-of-care (POC) diagnostics. The method is based on the gelification of all reagents required for on-chip PCR-based diagnostics into a ready-to-use product. The result reported here is a key step towards the development of a ready- and easy-to-use, fully integrated lab-on-a-chip (LOC) system for fast, cost-effective and efficient POC diagnostic analysis.
Abstract:
The paper details on-chip inductor optimization for a reconfigurable continuous-time delta-sigma (Δ-Σ) modulator based radio-frequency analog-to-digital converter. Inductor optimization enables the Δ-Σ modulator to use Q-enhanced LC tank circuits employing a single high-Q-factor on-chip inductor and fewer quantizer levels, thereby reducing the circuit complexity associated with excess loop delay, power dissipation and dynamic element matching. System-level simulations indicate that, at a Q-factor of 75, the Δ-Σ modulator with a 3-level quantizer achieves dynamic ranges of 106 dB, 82 dB and 84 dB for RFID, TETRA, and Galileo over bandwidths of 200 kHz, 10 MHz and 40 MHz respectively.
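For orientation only, the textbook ideal-modulator estimate below relates quantizer levels, loop order, and oversampling ratio to peak SQNR; it ignores the Q-enhanced LC tank and every circuit non-ideality the paper actually studies, so it is a rough sketch rather than a reproduction of the quoted dynamic ranges (and the effective-bits convention for multi-level quantizers varies across texts).

```python
import math

def ideal_sqnr_db(order: int, osr: float, levels: int) -> float:
    """Peak SQNR of an ideal low-pass delta-sigma modulator (textbook estimate).

    order  -- noise-shaping order L
    osr    -- oversampling ratio
    levels -- quantizer levels (log2(levels) is used as a simple
              effective-bits convention here)
    """
    n_bits = math.log2(levels)
    return (6.02 * n_bits + 1.76
            + 10 * math.log10((2 * order + 1) / math.pi ** (2 * order))
            + (2 * order + 1) * 10 * math.log10(osr))

# Example with assumed parameters (not taken from the paper):
print(f"{ideal_sqnr_db(order=2, osr=64, levels=3):.1f} dB")
```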
Abstract:
Cellular elongation in tip-growing cells such as fungal hyphae, root hairs, pollen tubes, and neurons is confined to the tip of the cell, which allows these cells to invade the surrounding substrate and reach a target. Tip-growing plant cells are surrounded by a rigid polysaccharide wall that regulates the growth and elongation of these cells, a mechanism that is radically different from that of non-walled cells. Understanding the role of the mechanical properties of the cell wall in controlling the growth and cellular function of the pollen tube, a rapidly growing plant cell, is the goal of this project. The pollen tube carries the sperm cells from the pollen grain to the ovule for fertilization, and on its way from the stigma to the ovary the pollen tube physically invades the transmitting stylar tissue of the flower. To reach its target it must also change its growth direction multiple times. To assess the behavior of growing pollen tubes, an in vitro experimental system based on lab-on-a-chip (LOC) and MEMS (micro-electromechanical systems) technology was designed. Using these devices we measured a variety of physical properties characterizing the Camellia pollen tube, such as the growth rate, the invasive growth force, and the dilating force. In one of the experimental setups the tubes were exposed to slit-shaped openings made of the elastic material PDMS (polydimethylsiloxane), allowing us to measure the force that a pollen tube exerts to dilate the growth substrate. This invasive ability is essential for pollen tubes, as it allows them to enter the narrow intercellular spaces in the pistillar tissues. In other assays we used the microfluidic setup to test whether pollen tubes can elongate in air and whether they have a directional memory. One of the applications the laboratory is interested in is the investigation of intracellular processes, such as the movement of fluorescently labeled organelles or macromolecules, while the pollen tubes grow in the LOC devices. To prove that the devices are compatible with high-resolution optical microscopy and fluorescence microscopy, I used the styryl dye FM1-43 to label the endomembrane system of Camellia japonica pollen tubes. Observation of the vesicle cone, an aggregation of endocytic and exocytic vesicles in the apical cytoplasm of the pollen tube tip, posed no problem for pollen tubes located in the LOC. However, the particular dye in question adhered to the sidewalls of the LOC microfluidic network, making the observation of pollen tubes near the sidewalls difficult because of the strongly fluorescent signal from the wall. This property of the dye could be useful for imaging the network geometry when operating in fluorescence mode.
Abstract:
This paper discusses the requirements on numerical precision for a practical Multiband Ultra-Wideband (UWB) consumer electronics solution. To this end we first present the possibilities that UWB has to offer to the consumer electronics market and the possible range of devices. We then show the performance of a model of the UWB baseband system implemented using floating-point precision. Then, by simulation, we find the minimal numerical precision required to maintain floating-point performance for each of the specific data types and signals present in the UWB baseband. Finally, we present a full description of the numerical requirements for both the transmit and receive components of the UWB baseband. The numerical precision results obtained in this paper can be used by baseband designers to implement cost-effective UWB systems using System-on-Chip (SoC), FPGA and ASIC technology solutions aimed at the competitive consumer electronics market.
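The word-length exploration described here can be sketched generically: quantize the signal at increasing fractional word lengths and compare against the floating-point reference until the SQNR stops limiting performance. Everything below is illustrative; the signal and the round-to-nearest quantizer are assumptions, not the paper's UWB baseband model.

```python
import numpy as np

def quantize(x: np.ndarray, int_bits: int, frac_bits: int) -> np.ndarray:
    """Round-to-nearest fixed-point quantization with saturation."""
    scale = 2.0 ** frac_bits
    limit = 2.0 ** int_bits
    q = np.round(x * scale) / scale
    return np.clip(q, -limit, limit - 1.0 / scale)

def sqnr_db(ref: np.ndarray, q: np.ndarray) -> float:
    """Signal-to-quantization-noise ratio of q against the float reference."""
    noise = ref - q
    return 10 * np.log10(np.sum(ref ** 2) / np.sum(noise ** 2))

# Illustrative stand-in for a baseband signal (not the UWB waveform itself).
rng = np.random.default_rng(0)
x = rng.standard_normal(10_000) * 0.3

# Sweep the fractional word length; SQNR grows roughly 6 dB per added bit.
for frac_bits in range(4, 13):
    print(frac_bits, f"{sqnr_db(x, quantize(x, 1, frac_bits)):.1f} dB")
```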
Abstract:
Electronic applications are currently developed under the reuse-based paradigm. This design methodology presents several advantages for the reduction of design complexity, but brings new challenges for the test of the final circuit. The access to embedded cores, the integration of several test methods, and the optimization of several cost factors are just a few of the problems that need to be tackled during test planning. Within this context, this thesis proposes two test planning approaches that aim at reducing the test costs of a core-based system by means of hardware reuse and the integration of test planning into the design flow. The first approach considers systems whose cores are connected directly or through a functional bus. The test planning method consists of a comprehensive model that includes the definition of a multi-mode access mechanism inside the chip and a search algorithm for the exploration of the design space. The access mechanism model considers the reuse of functional connections as well as partial test buses, core transparency, and other bypass modes. The test schedule is defined in conjunction with the access mechanism so that good trade-offs among the costs of pins, area, and test time can be sought. Furthermore, system power constraints are also considered. This expansion of concerns makes possible an efficient, yet fine-grained, search in the huge design space of a reuse-based environment. Experimental results clearly show the variety of trade-offs that can be explored using the proposed model, and its effectiveness in optimizing the system test plan. Networks-on-chip are likely to become the main communication platform of systems-on-chip. Thus, the second approach presented in this work proposes the reuse of the on-chip network for the test of the cores embedded in the systems that use this communication platform. A power-aware test scheduling algorithm aiming at exploiting the network characteristics to minimize the system test time is presented. The reuse strategy is evaluated considering a number of system configurations, such as different positions of the cores in the network, power consumption constraints, and the number of interfaces with the tester. Experimental results show that the parallelization capability of the network can be exploited to reduce the system test time, whereas area and pin overhead are strongly minimized. In this manuscript, the main problems of the test of core-based systems are first identified and the current solutions are discussed. The problems tackled by this thesis are then listed and the test planning approaches are detailed. Both test planning techniques are validated on the recently released ITC’02 SoC Test Benchmarks, and further compared to other test planning methods from the literature. This comparison confirms the efficiency of the proposed methods.
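As a toy illustration of power-aware test scheduling (the thesis's algorithms are more elaborate and model the access mechanism and network explicitly), the sketch below greedily launches the longest pending core test that fits under a flat power budget; all core figures are invented.

```python
from dataclasses import dataclass

@dataclass
class CoreTest:
    name: str
    time: int    # test length in cycles
    power: int   # power drawn while testing

def greedy_schedule(tests, power_budget):
    """Return (start time per test, makespan) under a flat power cap.

    Greedy heuristic: whenever capacity frees up, launch the longest
    pending test that still fits. Real test planners also model wrappers,
    access mechanisms and network routing; this sketch does not.
    """
    pending = sorted(tests, key=lambda t: -t.time)
    running, start, now = [], {}, 0           # running: (end_time, power)
    while pending or running:
        used = sum(p for _, p in running)
        fit = next((t for t in pending if t.power + used <= power_budget), None)
        if fit:
            pending.remove(fit)
            start[fit.name] = now
            running.append((now + fit.time, fit.power))
        else:                                  # advance to the next completion
            now = min(end for end, _ in running)
            running = [(e, p) for e, p in running if e > now]
    makespan = max(start[t.name] + t.time for t in tests)
    return start, makespan

tests = [CoreTest("c1", 50, 3), CoreTest("c2", 40, 4), CoreTest("c3", 30, 2)]
print(greedy_schedule(tests, power_budget=6))
```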
Abstract:
The evolution of embedded electronic applications forces electronic system designers to meet ever-increasing requirements. This evolution pushes up the computational power of digital signal processing systems, as well as the energy required to accomplish the computations, due to the increasing mobility of such applications. Current approaches used to meet these requirements rely on the adoption of application-specific signal processors. Such devices exploit powerful accelerators, which are able to meet both performance and energy requirements. On the other hand, the high specificity of such accelerators often results in a lack of flexibility which affects non-recurring engineering costs, time to market, and market volumes too. The state of the art mainly proposes two solutions to overcome these issues with the ambition of delivering reasonable performance and energy efficiency: reconfigurable computing and multi-processor computing. All of these solutions benefit from post-fabrication programmability, which results in increased flexibility. Nevertheless, the gap between these approaches and dedicated hardware is still too high for many application domains, especially when targeting the mobile world. In this scenario, flexible and energy-efficient acceleration can be achieved by merging these two computational paradigms in order to address all the constraints introduced above. This thesis focuses on the exploration of the design and application spectrum of reconfigurable computing, exploited as application-specific accelerators for multi-processor systems-on-chip. More specifically, it introduces a reconfigurable digital signal processor featuring a heterogeneous set of reconfigurable engines, and a homogeneous multi-core system exploiting three different flavours of reconfigurable and mask-programmable technologies as implementation platforms for application-specific accelerators. In this work, the various trade-offs concerning the utilization of multi-core platforms and the different configuration technologies are explored, characterizing the design space of the proposed approach in terms of programmability, performance, energy efficiency, and manufacturing costs.
Abstract:
Despite the several issues faced in the past, the evolutionary trend of silicon has kept its constant pace. Today an ever-increasing number of cores is integrated onto the same die. Unfortunately, the extraordinary performance achievable by the many-core paradigm is limited by several factors. Memory bandwidth limitations, combined with inefficient synchronization mechanisms, can severely curtail the potential computation capabilities. Moreover, the huge HW/SW design space requires accurate and flexible tools to perform architectural explorations and validate design choices. In this thesis we focus on the aforementioned aspects: a flexible and accurate virtual platform has been developed, targeting a reference many-core architecture. This tool has been used to perform architectural explorations, focusing on the instruction caching architecture and on hybrid HW/SW synchronization mechanisms. Besides the architectural implications, another issue of embedded systems is considered: energy efficiency. Near-Threshold Computing (NTC) is a key research area in the ultra-low-power domain, as it promises a tenfold improvement in energy efficiency compared to super-threshold operation and mitigates thermal bottlenecks. The physical implications of modern deep sub-micron technology severely limit the performance and reliability of modern designs. Reliability becomes a major obstacle when operating in NTC; in particular, memory operation becomes unreliable and can compromise system correctness. In the present work a novel hybrid memory architecture is devised to overcome reliability issues and at the same time improve energy efficiency by means of aggressive voltage scaling when allowed by workload requirements. Variability is another great drawback of near-threshold operation. The greatly increased sensitivity to threshold voltage variations is today a major concern for electronic devices. We introduce a variation-tolerant extension of the baseline many-core architecture. By means of micro-architectural knobs and a lightweight runtime control unit, the baseline architecture becomes dynamically tolerant to variations.
Abstract:
Current nanometer technologies are subject to several adverse effects that seriously impact the yield and performance of integrated circuits. Such is the case of within-die parameter uncertainty, varying workload conditions, aging, temperature, etc. Monitoring, calibration, and dynamic adaptation have appeared as promising solutions to these issues, and many kinds of monitors have been presented recently. In this scenario, where systems with hundreds of monitors of different types have been proposed, light-weight monitoring networks have become essential. In this work we present a light-weight network architecture based on sharing the digitization resources of nodes that require time-to-digital conversion. Our proposal employs a single-wire interface, shared among all the nodes in the network, and quantizes the time domain to perform access multiplexing and transmit the information. It achieves a 16% improvement in area and power consumption compared to traditional approaches.
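A minimal sketch of the time-domain access-multiplexing idea, under invented node values and timing: each node fires a pulse on the shared wire after a delay proportional to its digitized value, so the controller receives an implicitly ordered stream without per-node addressing.

```python
# Toy model of single-wire, time-domain monitor readout: each monitor fires
# a pulse after a delay proportional to its (digitized) value, so the shared
# wire delivers an implicitly sorted stream to the controller.

SLOT_NS = 10  # assumed time quantum of the shared wire

def readout(values_by_node: dict[str, int]) -> list[tuple[int, str]]:
    """Return (pulse_time_ns, node) events as seen on the shared wire.

    Smaller value -> earlier pulse; the controller timestamps pulses and
    can stop listening early if only extremes matter (data selectivity).
    """
    events = [(v * SLOT_NS, node) for node, v in values_by_node.items()]
    return sorted(events)  # the wire itself performs the sort in time

sensors = {"core0": 73, "core1": 91, "cache": 64}   # e.g. temperature codes
for t, node in readout(sensors):
    print(f"t = {t} ns : pulse from {node}")
```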
Abstract:
Temperature is a first-class design concern in modern integrated circuits. The important increase in power densities associated with recent technology evolutions has led to the appearance of thermal gradients and hot spots during run-time operation. Temperature impacts several circuit parameters such as speed, cooling budgets, reliability, power consumption, etc. In order to fight these negative effects, dynamic thermal management (DTM) techniques adapt the behavior of the chip relying on the information of a monitoring system that provides run-time thermal information of the die surface. The field of on-chip temperature monitoring has drawn the attention of the scientific community in recent years and is the object of study of this thesis. This thesis approaches the matter of on-chip temperature monitoring from different perspectives and levels, providing solutions to some of the most important issues. The physical and circuit levels are covered with the design and characterization of two novel temperature sensors specially tailored for DTM purposes. The first sensor is based upon a mechanism that obtains a pulse whose width varies with the dependence of the leakage currents on temperature. In a nutshell, a circuit node is charged and subsequently left floating so that it discharges through the subthreshold currents of a transistor; the time the node takes to discharge is the width of the pulse. Since the width of the pulse displays an exponential dependence on temperature, the conversion into a digital word is realized by means of a logarithmic counter that performs both the time-to-digital conversion and the linearization of the output. The structure resulting from this combination of elements is implemented in a 0.35 µm technology and is characterized by a very reduced area, 10,250 nm², and power consumption, 1.05-65.5 nW at 5 samples/s; these figures outperformed all previous works at the time of first publication and, at the time of the publication of this thesis, they still outperform all previous implementations in the same technology node. Concerning accuracy, the sensor exhibits good linearity; even without calibration it displays a 3σ error of 1.97 °C, appropriate for DTM applications. As explained, the sensor is completely compatible with standard CMOS processes; this fact, along with its tiny area and power overhead, makes it especially suitable for integration in a DTM monitoring system with a collection of on-chip monitors distributed across the chip.
The exacerbated process fluctuations that come with recent technology nodes jeopardize the linearity characteristics of the first sensor. In order to overcome these problems, a new temperature-inferring technique is proposed. In this case, we also rely on the thermal dependence of the leakage currents used to discharge a floating node, but now the result comes from the ratio of two different measurements, in one of which a characteristic of the discharging transistor (the gate voltage) is altered. This ratio proves to be very robust against process variations and displays a more than sufficient linearity with temperature (a 1.17 °C 3σ error considering process variations and performing two-point calibration). The implementation of the sensing part based on this new technique raises several design issues, such as the generation of a process-variation-independent voltage reference, which are analyzed in depth in the thesis. For the time-to-digital conversion, we employ the same digitization structure as the first sensor. A completely new standard cell library targeting low area and power overhead is built from scratch to implement the digitization part. Putting all the pieces together, we achieve a complete sensor system characterized by an ultra-low energy per conversion of 48-640 pJ and an area of 0.0016 mm²; this figure outperforms all previous works. To prove this statement, we perform a thorough comparison with over 40 works from the scientific literature.
Moving up to the system level, the third contribution is centered on the modeling of a monitoring system consisting of a set of thermal sensors distributed across the chip. All previous works from the literature target maximizing the accuracy of the system with the minimum number of monitors. In contrast, we introduce new metrics of quality apart from just the number of sensors: we also consider the power consumption, the sampling frequency, the possibility of choosing among different types of monitors, and the interconnection costs. The model is fed into a simulated annealing algorithm that receives the thermal information of a system, its physical properties, area, power and interconnection constraints, and a collection of monitor types; the algorithm yields the selected type of monitor, the number of monitors, their positions, and the optimum sampling rate. We test the algorithm on the Alpha 21364 processor under several constraint configurations to prove its validity. Compared to previous works in the literature, the modeling presented here is the most complete.
Finally, the last contribution targets the networking level: given an allocated set of temperature monitors, we focus on solving the problem of connecting them in a way that is efficient in area and power. Our first proposal in this area is the introduction of a new interconnection hierarchy level, the threshing level, between the monitors and the traditional peripheral buses, which applies data selectivity to reduce the amount of information sent to the central controller. The idea behind this new level is that in this kind of network most data are useless, because from the controller's viewpoint only a small amount of data (normally just the extreme values) is of interest. To cover the new interconnection level, we propose a single-wire monitoring network based on a time-domain signaling scheme that significantly reduces both the switching activity over the wire and the power consumption of the network. This scheme codes the information in the time domain and directly yields an ordered list of values from the maximum to the minimum. If the scheme is applied to monitors that perform time-to-digital conversion, digitization resources can be shared in both time and space, producing an important saving in area and power consumption. Two prototypes of complete monitoring systems are presented; they significantly outperform previous works in terms of area and, especially, power consumption.
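The linearization step in the first sensor (an exponential pulse width read out by a logarithmic counter) can be illustrated with a toy model; the constants and the simple exponential law below are invented for illustration and are not the thesis's measured parameters.

```python
import math

# Toy model: subthreshold leakage grows roughly exponentially with
# temperature, so the discharge time (pulse width) shrinks exponentially:
#   width(T) ~ w0 * exp(-(T - T0) / Tc)        (illustrative constants)
W0_S, T0_C, TC_C = 1e-3, 0.0, 15.0             # assumed model parameters

def pulse_width(temp_c: float) -> float:
    """Pulse width in seconds for a given temperature (toy model)."""
    return W0_S * math.exp(-(temp_c - T0_C) / TC_C)

def log_counter(width_s: float, f_clk_hz: float = 1e6) -> int:
    """Logarithmic counter: output = floor(log2(clock ticks in the pulse)).

    Because the width is exponential in T, log2(width) -- and hence this
    code -- is (nearly) linear in temperature, which is the linearization.
    """
    ticks = max(1, int(width_s * f_clk_hz))
    return ticks.bit_length() - 1              # floor(log2(ticks))

# Codes step down roughly linearly with temperature (floor granularity aside).
for t in range(0, 101, 20):
    print(t, "degC ->", log_counter(pulse_width(t)))
```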