939 results for ESEO spacecraft simulator thermal power


Relevance:

30.00%

Publisher:

Abstract:

The resting and maximum in situ cardiac performance of Newfoundland Atlantic cod (Gadus morhua) acclimated to 10, 4 and 0°C were measured at their respective acclimation temperatures, and when acutely exposed to temperature changes: i.e. hearts from 10°C fish cooled to 4°C, and hearts from 4°C fish measured at 10 and 0°C. Intrinsic heart rate (f_H) decreased from 41 beats min⁻¹ at 10°C to 33 beats min⁻¹ at 4°C and 25 beats min⁻¹ at 0°C. However, this degree of thermal dependency was not reflected in maximal cardiac output (Q_max values were ~44, ~37 and ~34 ml min⁻¹ kg⁻¹ at 10, 4 and 0°C, respectively). Further, cardiac scope showed a slight positive compensation between 4 and 0°C (Q_10=1.7), and full, if not slight, overcompensation between 10 and 4°C (Q_10=0.9). The maximal performance of hearts exposed to an acute decrease in temperature (i.e. from 10 to 4°C and 4 to 0°C) was comparable to that measured for hearts from 4°C- and 0°C-acclimated fish, respectively. In contrast, 4°C-acclimated hearts significantly out-performed 10°C-acclimated hearts when tested at a common temperature of 10°C (in terms of both Q_max and power output). Only minimal differences in cardiac function were seen between hearts stimulated with basal (5 nmol l⁻¹) versus maximal (200 nmol l⁻¹) levels of adrenaline, and these effects were not temperature dependent. These results: (1) show that maximum performance of the isolated cod heart is not compromised by exposure to cold temperatures; and (2) support data from other studies showing that, in contrast to salmonids, cod cardiac performance/myocardial contractility is not dependent upon humoral adrenergic stimulation.
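
The compensation figures above use the standard Q10 temperature coefficient. A minimal sketch of that calculation, applied here to the reported intrinsic heart rates (the study's own Q10 values refer to cardiac scope, whose resting components are not quoted in the abstract):

```python
# Q10 temperature coefficient: the factor by which a rate changes per 10 degC.
def q10(rate_cold: float, rate_warm: float, t_cold_c: float, t_warm_c: float) -> float:
    return (rate_warm / rate_cold) ** (10.0 / (t_warm_c - t_cold_c))

# Intrinsic heart rate from the abstract: 41 beats/min at 10 degC, 33 at 4 degC.
# Q10 ~ 1 would indicate full thermal compensation; ~2-3 is the ordinary
# passive thermal dependence of a biological rate.
print(q10(rate_cold=33.0, rate_warm=41.0, t_cold_c=4.0, t_warm_c=10.0))  # ~1.44
```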

Relevance:

30.00%

Publisher:

Abstract:

Two clayey materials, one provided by a partner in the mineral sector and the other coming from Balengou (West Region, Cameroon), were the subject of a comparative study to evaluate the influence of their crystalline structure on their pozzolanic properties. The two natural materials were first enriched in clay minerals by wet sieving, and the fractions obtained are denoted K and H, respectively. K and H were calcined at 700 °C, with a heating rate of 5 °C/min and a 10-hour dwell at the peak temperature; the products obtained were named MK and MH. Samples K, H, MK and MH were physicochemically characterized by chemical (ICP), thermal (TGA/DTA) and mineralogical (XRD and IR spectrometry) analyses, together with measurement of the specific surface area (BET), crystallinity and pozzolanicity. The results confirmed K as a kaolinitic clay and H as a halloysitic clay. The kaolinite and halloysite present in these clayey materials exhibited poor crystallinity, with a higher degree of disorder in K than in H; these results were largely affected by the significant fraction of gibbsite in the kaolinitic clay K. In the raw state, the pozzolanic activity of material H is weak compared with that of K, but heat treatment largely improves this property for both samples.

Relevance:

30.00%

Publisher:

Abstract:

The goal of this study was to assess the feasibility, safety and success of a system which uses radiofrequency energy (RFE) rather than a device for percutaneous closure of patent foramen ovale (PFO). METHODS: Sixteen patients (10 men, 6 women, mean age 50 years) were included in the study. All of them had a proven PFO with documented right-to-left shunt (RLS) after Valsalva manoeuvre (VM) during transoesophageal echocardiography (TEE). The patients had an average PFO diameter of 6 ± 2 mm at TEE and an average of 23 ± 4 microembolic signals (MES) in power M-mode transcranial Doppler sonography (pm-TCD), measured over the middle cerebral artery. An atrial septal aneurysm (ASA) was present in 7 patients (44%). Balloon measurement, performed in all patients, revealed a stretched PFO diameter of 8 ± 3 mm. In 2 patients (stretched diameter 11 and 14 mm respectively, both with ASA >10 mm), radiofrequency was not applied (PFO too large) and the PFO was closed with an Amplatzer PFO occluder instead. A 6-month follow-up TEE was performed in all patients. RESULTS: There were no serious adverse events during the procedure or at follow-up (average 12 months). TEE 6 months after the first RFE procedure showed complete closure of the PFO in 50% of the patients (7/14). Closure appeared to be influenced by PFO diameter, complete closure being achieved in 89% (7/8) of patients with a balloon-stretched diameter ≤7 mm but in none of the patients with a diameter >7 mm. Only one of the complete-closure patients had an ASA; of the remainder, 4 (29%) had an ASA. Although the PFO was not completely closed in this group, some reduction in the diameter of the PFO and in MES was documented by TEE and pm-TCD with VM. Five of the 7 residual-shunt patients received an Amplatzer PFO occluder; except for one patient with a minimal residual shunt, all showed complete closure of the PFO at 6-month follow-up TEE and pm-TCD with VM. The other two refused a closure device. CONCLUSIONS: The results confirm that radiofrequency closure of the PFO is safe, albeit less efficacious and more complex than device closure. The technique in its current state should not be attempted in patients with a balloon-stretched PFO diameter >7 mm and an ASA.

Relevance:

30.00%

Publisher:

Abstract:

It is an important and difficult challenge to protect the modern interconnected power system from blackouts. Applying advanced power system protection techniques and increasing power system stability are ways to improve the reliability and security of power systems. Phasor-domain software packages such as the Power System Simulator for Engineers (PSS/E) can be used to study large power systems but cannot be used for transient analysis. In order to observe both power system stability and the transient behavior of the system during disturbances, modeling has to be done in the time domain. This work focuses on the modeling of power systems and various control systems in the Alternative Transients Program (ATP). ATP is a time-domain power system modeling software package in which all power system components can be modeled in detail. Models are implemented with attention to component representation and parameters. The synchronous machine model includes the saturation characteristics and a control interface. The Transient Analysis of Control Systems (TACS) feature is used to model the excitation control system, power system stabilizer and turbine governor system of the synchronous machine. Several base cases of a single-machine system are modeled and benchmarked against PSS/E. A two-area system is modeled, and inter-area and intra-area oscillations are observed. The two-area system is reduced to a two-machine system using reduced dynamic equivalencing, and the original and reduced systems are benchmarked against PSS/E. This work also includes the simulation of single-pole tripping using one of the base-case models; the advantages of single-pole tripping and a comparison of system behavior against three-pole tripping are studied. Results indicate that the built-in control system models in PSS/E can be effectively reproduced in ATP, and the benchmarked models correctly simulate the power system dynamics. The successful implementation of a dynamically reduced system in ATP shows promise for studying a small sub-system of a large system without losing the dynamic behavior. Other aspects, such as relaying, can be investigated using the benchmarked models. It is expected that this work will provide guidance in modeling different control systems for the synchronous machine and in representing dynamic equivalents of large power systems.
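
To illustrate the kind of time-domain behavior that phasor-domain tools cannot capture, here is a minimal sketch of transient (swing) simulation for a classical single machine against an infinite bus; it is not one of the ATP models, and all parameters are illustrative:

```python
# Forward-Euler integration of the classical per-unit swing equation,
# with a temporary three-phase fault that collapses electrical power.
import math

H, f0 = 3.5, 60.0                 # inertia constant (s), nominal frequency (Hz)
Pm, Pmax = 0.8, 1.8               # mechanical input, peak electrical power (pu)
delta = math.asin(Pm / Pmax)      # initial rotor angle (rad)
omega = 0.0                       # per-unit speed deviation
dt = 1e-3

for step in range(int(1.0 / dt)):
    t = step * dt
    Pe = 0.0 if 0.1 <= t < 0.2 else Pmax * math.sin(delta)   # fault from 0.1 to 0.2 s
    omega += (Pm - Pe) / (2.0 * H) * dt                      # d(omega)/dt
    delta += 2.0 * math.pi * f0 * omega * dt                 # d(delta)/dt
    if step % 200 == 0:
        print(f"t={t:.2f} s  rotor angle = {math.degrees(delta):6.1f} deg")
```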

Relevance:

30.00%

Publisher:

Abstract:

A micro combined heat and power (micro-CHP) system produces both the electricity and the heat required for residential or small-business applications. Use of micro-CHP in a residential application not only creates energy and economic savings but also reduces the carbon footprint of the house or small business. Additionally, micro-CHP can subsidize its cost of operation by selling excess electricity back to the grid. Even though micro-CHP remains attractive on paper, high initial cost and the difficulty of optimizing against residential-scale heat and electrical requirements have kept this technology from becoming a success. To understand and overcome the disadvantages posed by micro-CHP systems, a laboratory was developed to test different scenarios of micro-CHP application so that the current technology can be studied and improved. This report focuses on the development of this micro-CHP laboratory, including installation of the Ecopower micro-CHP unit, development of the fuel and exhaust lines for the Ecopower unit, design of the electrical and thermal loops, installation of all the instrumentation required for data collection on the Ecopower unit, and development of controls for heat-load simulation using the thermal loop. A simulation of the micro-CHP unit running on syngas was also carried out in MATLAB. This work was supported through the donation of the 'Ecopower' micro-CHP unit by Marathon Engine and through the support of a Michigan Tech REF-IF grant.
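
A minimal sketch of the first-law bookkeeping such a test stand enables, splitting the measured fuel input into electrical output, useful heat and losses; the stream values are illustrative placeholders, not Ecopower measurements:

```python
fuel_lhv_kw = 12.5   # fuel energy input rate (kW, lower-heating-value basis)
electric_kw = 3.0    # net electrical output (kW)
thermal_kw = 8.0     # useful heat delivered to the thermal loop (kW)

eta_el = electric_kw / fuel_lhv_kw
eta_th = thermal_kw / fuel_lhv_kw
eta_chp = eta_el + eta_th                        # combined heat-and-power efficiency
losses_kw = fuel_lhv_kw - electric_kw - thermal_kw

print(f"electrical {eta_el:.1%}, thermal {eta_th:.1%}, "
      f"overall {eta_chp:.1%}, unaccounted {losses_kw:.1f} kW")
```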

Relevance:

30.00%

Publisher:

Abstract:

Space-based solar power satellites use solar arrays to generate clean, green and renewable electricity in space and transmit it to Earth via microwave, radio-wave or laser beams to corresponding receivers (ground stations). Traditionally, these are large structures orbiting Earth at geosynchronous altitude. This thesis introduces a new architecture for a space-based solar power satellite constellation. The proposed concept reduces the high cost involved in the construction of the space satellite and in the multiple launches to geosynchronous altitude: it is a constellation of Low Earth Orbit satellites that are smaller in size than the conventional system. For this application, a Repeated Sun-Synchronous Track Circular Orbit (RSSTO) is considered. In these orbits, the spacecraft revisits the same locations on Earth periodically, every given desired number of days, with the line of nodes of the spacecraft's orbit fixed relative to the Sun. A wide range of solutions is studied and, in this thesis, a two-orbit constellation design is chosen and simulated. The number of satellites is chosen based on the electric power demands in a given set of global cities. The orbits of the satellites are designed such that their ground tracks visit a maximum number of ground stations during the revisit period. In the simulation, the locations of the ground stations are chosen close to big cities, in the USA and worldwide, so that the space power constellation beams power directly down to locations of high electric power demand. The J2 perturbations are included in the mathematical model used in the orbit design. The coverage time of each spacecraft over a ground site and the gap time between two consecutive spacecraft visiting a ground site are simulated in order to evaluate the coverage continuity of the proposed solar power constellation. It has been observed from simulations that there are always periods in which a spacecraft does not communicate with any ground station. For this reason, it is suggested that each satellite in the constellation be equipped with power storage components so that it can store power for later transmission. This thesis presents a method for designing the solar power constellation orbits such that the number of ground stations visited during the given revisit period is maximized, which in turn maximizes the power transmission to the ground stations.
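
The sun-synchronous condition that keeps the line of nodes fixed relative to the Sun follows directly from the J2 nodal regression. A minimal sketch of that part of the orbit design, using standard two-body-plus-J2 relations rather than the thesis's full constellation model:

```python
# For a circular orbit at a given altitude, find the inclination whose
# J2-driven nodal regression matches the Sun's mean motion (~0.9856 deg/day).
import math

MU = 3.986004418e14      # Earth gravitational parameter (m^3/s^2)
RE = 6378137.0           # Earth equatorial radius (m)
J2 = 1.08262668e-3
OMEGA_SUN = 2.0 * math.pi / (365.2422 * 86400.0)   # rad/s

def sun_sync_inclination_deg(altitude_m: float) -> float:
    a = RE + altitude_m
    n = math.sqrt(MU / a**3)                        # mean motion (rad/s)
    # nodal regression rate: dOmega/dt = -1.5 * n * J2 * (RE/a)^2 * cos(i)
    cos_i = -OMEGA_SUN / (1.5 * n * J2 * (RE / a) ** 2)
    return math.degrees(math.acos(cos_i))

print(sun_sync_inclination_deg(600e3))   # ~97.8 deg for a 600 km circular orbit
```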

Relevance:

30.00%

Publisher:

Abstract:

This thesis develops an effective modeling and simulation procedure for a specific thermal energy storage system commonly used and recommended for various applications (such as an auxiliary energy storage system for a solar-heating-based Rankine cycle power plant). This thermal energy storage system transfers heat from a hot fluid (termed the heat transfer fluid, HTF) flowing in a tube to the surrounding phase change material (PCM). Through an unsteady melting or freezing process, the PCM absorbs or releases thermal energy in the form of latent heat. Both scientific and engineering information is obtained by the proposed first-principles-based modeling and simulation procedure. On the scientific side, the approach accurately tracks the moving melt front (modeled as a sharp liquid-solid interface) and provides all necessary information about the time-varying heat-flow rates, temperature profiles, stored thermal energy, etc. On the engineering side, the proposed approach is unique in its ability to accurately solve, both individually and collectively, all the conjugate unsteady heat transfer problems for each of the components of the thermal storage system. This yields critical system-level information on the various time-varying effectiveness and efficiency parameters for the thermal storage system.
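
For the simplest configuration (PCM initially at its melting point, wall held at a fixed superheat), the sharp-interface melt front admits the classical one-phase Stefan solution, a natural sanity check for such a simulation procedure. A minimal sketch with illustrative, roughly paraffin-like properties, not the thesis's PCM data:

```python
# Melt-front position s(t) = 2*lam*sqrt(alpha*t), with lam from the
# transcendental Stefan condition lam*exp(lam^2)*erf(lam) = Ste/sqrt(pi).
import math
from scipy.optimize import brentq

cp, L, k, rho = 2100.0, 200e3, 0.2, 800.0   # J/kg-K, J/kg, W/m-K, kg/m^3
dT = 30.0                                   # wall superheat above melting point (K)
alpha = k / (rho * cp)                      # liquid thermal diffusivity (m^2/s)
Ste = cp * dT / L                           # Stefan number

lam = brentq(lambda x: x * math.exp(x * x) * math.erf(x) - Ste / math.sqrt(math.pi),
             1e-6, 5.0)

for hours in (1, 4, 24):
    t = hours * 3600.0
    print(f"{hours:3d} h: melt front at {2.0 * lam * math.sqrt(alpha * t) * 1e3:.1f} mm")
```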

Relevance:

30.00%

Publisher:

Abstract:

This report is a PhD dissertation proposal to study the in-cylinder temperature and heat flux distributions within a gasoline turbocharged direct injection (GTDI) engine. Recent regulations requiring automotive manufacturers to increase the fuel efficiency of their vehicles have led to great technological achievements in internal combustion engines. These achievements have increased the power density of gasoline engines dramatically in the last two decades. Engine technologies such as variable valve timing (VVT), direct injection (DI) and turbocharging have significantly improved engine power-to-weight and power-to-displacement ratios. A popular trend for increasing vehicle fuel economy in recent years has been to downsize the engine and add VVT, DI and turbocharging technologies so that a lighter, more efficient engine can replace a larger, heavier one. With the added power density, thermal management of the engine becomes a more important issue, and engine components are being pushed to their temperature limits. Therefore, it has become increasingly important to have a greater understanding of the parameters that affect in-cylinder temperatures and heat transfer. The proposed research will analyze the effects of engine speed, load, relative air-fuel ratio (AFR) and exhaust gas recirculation (EGR) on both in-cylinder and global temperature and heat transfer distributions. Additionally, the effects of knocking combustion and fuel spray impingement will be investigated. The proposed research will be conducted on a 3.5 L six-cylinder GTDI engine instrumented with a large number of sensors to measure in-cylinder temperatures and pressures, as well as the temperature, pressure and flow rates of energy streams into and out of the engine. One of the goals of this research is to create a model that will predict the energy distribution to the crankshaft, exhaust and cooling system based on normalized values for engine speed, load, AFR and EGR. The results could be used to aid the engine design phase for turbocharger and cooling system sizing. Additionally, the data collected can be used for validation of engine simulation models, since in-cylinder temperature and heat flux data are not readily available in the literature.
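
A minimal sketch of the energy-distribution bookkeeping the proposed model targets, closing the first-law balance over the measured energy streams; all values are illustrative placeholders, not data from the research engine:

```python
fuel_kw = 100.0          # fuel chemical power in (fuel mass flow * LHV)
streams_kw = {
    "brake work (crankshaft)": 33.0,
    "exhaust enthalpy flow": 30.0,
    "coolant heat rejection": 25.0,
}
# remainder closes the balance: friction, radiation, unburned fuel, etc.
streams_kw["other (balance closure)"] = fuel_kw - sum(streams_kw.values())

for name, kw in streams_kw.items():
    print(f"{name:>28s}: {kw:5.1f} kW ({kw / fuel_kw:5.1%})")
```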

Relevance:

30.00%

Publisher:

Abstract:

The push for improved fuel economy and reduced emissions has led to great achievements in engine performance and control. These achievements have increased the efficiency and power density of gasoline engines dramatically in the last two decades. With the added power density, thermal management of the engine has become increasingly important; it is therefore critical to have accurate temperature and heat transfer models, as well as data to validate them. With the recent adoption of the 2025 Corporate Average Fuel Economy (CAFE) standard, there has been a push to improve the thermal efficiency of internal combustion engines even further, and lean and dilute combustion regimes along with waste heat recovery systems are being explored as options for improving efficiency. In order to understand how these technologies impact engine performance and each other, this research analyzed the engine both from a first-law energy balance perspective and from a second-law exergy perspective. This research also provides insights into the effects of various parameters on in-cylinder temperatures and heat transfer, as well as data for the validation of other models. It was found that engine load was the dominant factor in the energy distribution, with higher loads resulting in lower coolant heat transfer and higher brake work and exhaust energy. From an exergy perspective, the exhaust system provided the best waste heat recovery potential due to its significantly higher temperatures compared to the cooling circuit. EGR and lean combustion both resulted in lower combustion chamber and exhaust temperatures; however, in most cases the increased flow rates resulted in a net increase in the energy in the exhaust. The exhaust exergy, on the other hand, either increased or decreased depending on the location in the exhaust system and the other operating conditions. The effects of dilution from lean operation and from EGR were compared using a dilution ratio, and the results showed that lean operation resulted in a larger increase in efficiency than the same amount of dilution with EGR. Finally, a method for identifying fuel spray impingement from piston surface temperature measurements was found. Note: The material contained in this section is planned for submission as part of a journal article and/or conference paper in the future.
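
A minimal sketch of the second-law comparison drawn above: the specific flow exergy of exhaust gas (treated as an ideal gas with constant cp) set against the Carnot-factor bound on heat rejected to the coolant; temperatures and properties are illustrative placeholders:

```python
import math

T0, p0 = 298.0, 101.325    # dead state (K, kPa)
cp, R = 1.10, 0.287        # approximate exhaust-gas cp and R (kJ/kg-K)

def flow_exergy(T, p):
    """psi = (h - h0) - T0*(s - s0) for an ideal gas with constant cp."""
    return cp * (T - T0) - T0 * (cp * math.log(T / T0) - R * math.log(p / p0))

print(f"exhaust flow exergy ~{flow_exergy(T=900.0, p=105.0):.0f} kJ/kg")  # ~300
# exergy fraction of heat crossing into ~90 degC coolant is Carnot-limited:
print(f"coolant heat exergy fraction ~{1.0 - T0 / 363.0:.1%}")            # ~18%
```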

Relevance:

30.00%

Publisher:

Abstract:

The Plasma and Supra-Thermal Ion Composition (PLASTIC) instrument is one of four experiment packages on board the two identical STEREO spacecraft A and B, which were successfully launched from Cape Canaveral on 26 October 2006. During the two years of the nominal STEREO mission, PLASTIC is providing the plasma characteristics of protons, alpha particles and heavy ions, and will also provide key diagnostic measurements in the form of the mass and charge-state composition of heavy ions. Three measurements (E/q, time of flight, and E_SSD, the residual energy measured in the solid state detectors) from the pulse-height raw data are used to characterize the solar wind ions from the solar wind sector, and part of the suprathermal particles from the wide-angle partition, with respect to mass, atomic number and charge state. In this paper, we present a new method for flight data analysis based on simulations of the PLASTIC response to solar wind ions. We present the response of the entrance system / energy analyzer in an analytical form. Based on stopping-power theory, we use an analytical expression for the energy loss of the ions when they pass through a thin carbon foil, which allows us to model analytically the response of the time-of-flight mass spectrometer to solar wind ions. We likewise present a new version of the analytical response of the solid state detectors to solar wind ions. Various important parameters needed for our models were derived from calibration data and from the first flight measurements obtained from STEREO-A. Using the information from each measured event registered in full resolution in the Pulse Height Analysis words, we derived a new algorithm for the analysis of both existing and future data sets of a similar nature, which was tested and works well. This algorithm allows us to obtain, for each measured event, the mass, atomic number and charge state in the correct physical units. Finally, an important criterion was developed for filtering our Fe raw flight data set from the pulse-height data without discriminating against charge states.
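
A minimal sketch of how the three measurables combine into mass per charge in a time-of-flight section: an ion selected at E/q is post-accelerated, loses a fraction of its energy in the carbon foil, and is timed over the flight path. The selection voltage, post-acceleration voltage, foil transmission and flight path below are illustrative placeholders, not PLASTIC calibration values:

```python
AMU = 1.66053906660e-27   # kg
QE = 1.602176634e-19      # C

def mass_per_charge(U_kV, V_kV, tof_ns, d_cm, alpha=0.8):
    """m/q (amu/e) from E/q selection U, post-acceleration V, foil
    energy-loss fraction alpha, time of flight and flight path d."""
    energy_per_charge = alpha * (U_kV + V_kV) * 1e3 * QE   # J per elementary charge
    v = (d_cm * 1e-2) / (tof_ns * 1e-9)                    # speed after the foil (m/s)
    return 2.0 * energy_per_charge / v**2 / AMU

# e.g. a solar wind He-2+ ion: expect m/q ~ 2
print(mass_per_charge(U_kV=2.0, V_kV=20.0, tof_ns=61.4, d_cm=8.0))
```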

Relevance:

30.00%

Publisher:

Abstract:

This paper presents the successful experience of implementing the José Cabrera Nuclear Power Plant interactive graphical simulator in the Nuclear Engineering Department of the Universidad Politécnica de Madrid for the education and training of nuclear engineers. The paper starts with the objectives and a description of the simulator classroom, and the methodology of work, which follows the IAEA recommendations for the use of nuclear reactor simulators in education. The practices and material prepared for the students, as well as the operational and accident situations simulated, are described.

Relevance:

30.00%

Publisher:

Abstract:

Sunrise is a solar telescope that was successfully flown in June 2009 on a long-duration balloon from the Swedish Space Corporation Esrange launch site. The design of the thermal control of Sunrise was quite critical because of the temperature sensitivity of the optomechanical devices and the electronics, a problem made more difficult by the size and high power dissipation of the system. A detailed thermal mathematical model of Sunrise was set up to predict temperatures. In this communication, the thermal behaviour of Sunrise during flight is presented. Flight temperatures of several devices are presented and analysed, and the measured data are compared with the predictions given by the thermal mathematical models. The main discrepancies between the flight data and the temperatures predicted by the models have been identified, allowing thermal engineers to improve their knowledge of the thermal behaviour of the system for future missions.
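
The building block of such a thermal mathematical model is the node energy balance. A minimal sketch for a single lumped node in steady state, with all values illustrative rather than Sunrise parameters:

```python
# absorbed solar flux + internal dissipation = radiation to deep space
SIGMA = 5.670374419e-8   # Stefan-Boltzmann constant (W/m^2 K^4)
S = 1361.0               # solar constant (W/m^2)

alpha_s, eps = 0.25, 0.85    # solar absorptance, IR emittance
A_sun, A_rad = 0.5, 2.0      # sunlit area, radiating area (m^2)
Q_int = 150.0                # internal electronics dissipation (W)

T = ((alpha_s * S * A_sun + Q_int) / (eps * SIGMA * A_rad)) ** 0.25
print(f"equilibrium node temperature ~{T - 273.15:.0f} degC")
```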

Relevance:

30.00%

Publisher:

Abstract:

AlGaN/GaN high electron mobility transistors (HEMTs) are key devices for the next generation of high-power, high-frequency and high-temperature electronics applications. Although significant progress has recently been achieved [1], stability and reliability are still some of the main issues under investigation, particularly at high temperatures [2-3]. Taking into account that the gate contact metallization is one of the weakest points in AlGaN/GaN HEMTs, the reliability of Ni, Mo, Pt and refractory metal gates is crucial [4-6]. This work has been focused on the thermal stress and reliability assessment of AlGaN/GaN HEMTs. After unbiased storage at 350 °C for 2000 hours, devices with Ni/Au gates exhibited detrimental I_DS-V_DS degradation in pulsed mode. In contrast, devices with Mo/Au gates showed no degradation after similar storage conditions. Further capacitance-voltage characterization as a function of temperature and frequency revealed two distinct trap-related effects in both kinds of devices. At low frequency (<1 MHz), an increased capacitance near the threshold voltage was present at high temperatures, more pronounced for the Ni/Au gate HEMT and at lower frequencies. Such an anomalous "bump" has previously been related to H-related surface polar charges [7]. This anomalous behavior in the C-V characteristics was also observed in Mo/Au gate HEMTs after 1000 h under DC bias (V_DS = 25 V, I_DS = 420 mA/mm) at calculated channel temperatures ranging from about 250 °C (T2) up to 320 °C (T4) (DC life test). The devices showed a larger "bump" at higher channel temperatures (Fig. 1). At 1 MHz, the higher slope of the C-V curve of the Ni/Au gated HEMTs indicated a higher trap density than for the Mo/Au metallization (Fig. 2). These results highlight that temperature is an acceleration factor in the device degradation, in good agreement with [3]. Interface state density analysis is being performed in order to estimate the trap density and activation energy.
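
Since temperature acts as an acceleration factor here, the usual way to extrapolate such life tests is an Arrhenius acceleration factor between channel temperatures. A minimal sketch; the activation energy is an illustrative placeholder (estimating it is precisely the goal of the ongoing interface-state analysis):

```python
import math

K_B = 8.617333262e-5   # Boltzmann constant (eV/K)

def acceleration_factor(t_low_c: float, t_high_c: float, ea_ev: float) -> float:
    t_low, t_high = t_low_c + 273.15, t_high_c + 273.15
    return math.exp((ea_ev / K_B) * (1.0 / t_low - 1.0 / t_high))

# degradation speed-up between the T2 (~250 degC) and T4 (~320 degC)
# channel temperatures, assuming Ea = 1.0 eV
print(acceleration_factor(250.0, 320.0, 1.0))   # ~14x
```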

Relevance:

30.00%

Publisher:

Abstract:

Temperature is a first-class design concern in modern integrated circuits. The important increase in power densities associated with recent technology generations has led to the appearance of thermal gradients and hot spots during run-time operation. Temperature impacts several circuit parameters, such as speed, cooling budgets, reliability and power consumption. In order to fight against these negative effects, dynamic thermal management (DTM) techniques adapt the behavior of the chip based on the information of a monitoring system that provides run-time thermal information of the die surface. The field of on-chip temperature monitoring has drawn the attention of the scientific community in recent years and is the object of study of this thesis, which approaches the matter from different perspectives and levels, providing solutions to some of the most important issues.

The physical and circuit levels are covered with the design and characterization of two novel temperature sensors specially tailored for DTM purposes. The first sensor is based upon a mechanism that obtains a pulse whose width varies with the dependence of the leakage currents on temperature. In a nutshell, a circuit node is charged and subsequently left floating so that it discharges through the subthreshold currents of a transistor; the time the node takes to discharge is the width of the pulse. Since the width of the pulse displays an exponential dependence on temperature, the conversion into a digital word is realized by means of a logarithmic counter that performs both the time-to-digital conversion and the linearization of the output. The structure resulting from this combination of elements is implemented in a 0.35 µm technology and is characterized by very reduced area, 10,250 nm², and power consumption, 1.05-65.5 nW at 5 samples/s; these figures outperformed all previous works at the time of first publication and, at the time of publication of this thesis, they still outperform all previous implementations in the same technology node. Concerning accuracy, the sensor exhibits good linearity: even without calibration, it displays a 3σ error of 1.97 °C, appropriate for DTM applications. The sensor is completely compatible with standard CMOS processes; this fact, along with its tiny area and power overhead, makes it especially suitable for integration in a DTM monitoring system with a collection of on-chip monitors distributed across the chip.

The exacerbated process fluctuations of recent technology nodes jeopardize the linearity characteristics of the first sensor. In order to overcome these problems, a new temperature-inferring technique is proposed. It also relies on the thermal dependence of the leakage currents used to discharge a floating node, but now the result comes from the ratio of two different measurements, in one of which a characteristic of the discharging transistor (the gate voltage) is altered. This ratio proves to be very robust against process variations and displays more than sufficient linearity with temperature: a 3σ error of 1.17 °C considering process variations and performing two-point calibration. The implementation of the sensing part based on this new technique raises several design issues, such as the generation of a process-variation-independent voltage reference, which are analyzed in depth in the thesis. For the time-to-digital conversion, the same digitization structure as in the first sensor is employed; to implement the digitization part physically, a completely new standard cell library targeting low area and power overhead was built from scratch. Putting all the pieces together, the complete sensor is characterized by an ultra-low energy per conversion of 48-640 pJ and a tiny area of 0.0016 mm²; this figure outperforms all previous works. To prove this statement, a thorough comparison with over 40 sensor proposals from the scientific literature is performed.

Moving up to the system level, the third contribution centers on the modeling of a monitoring system consisting of a set of thermal sensors distributed across the chip. All previous works in the literature aim to maximize the accuracy of the system with the minimum number of monitors. In contrast, this proposal introduces new quality metrics apart from the number of sensors: the power consumption, the sampling frequency, the interconnection costs, and the possibility of choosing among different monitor types. The model is embedded in a simulated annealing algorithm that receives the thermal information of a system, its physical properties, area, power and interconnection constraints, and a collection of monitor types; the algorithm yields the selected monitor type, the number of monitors, their positions and the optimum sampling rate. The algorithm is tested with several case studies for the Alpha 21364 processor under different constraint configurations. Compared with previous works in the literature, the model presented here is the most complete.

Finally, the last contribution targets the network level: given an allocated set of temperature monitors, the problem of connecting them in an area- and power-efficient way is addressed. The first proposal in this field is the introduction of a new level in the interconnection hierarchy, the threshing level, between the monitors and the traditional peripheral buses, which applies data selectivity to reduce the amount of information sent to the central controller. The idea behind this new level is that in this kind of network most of the data is useless, because from the controller's viewpoint only a small amount of data (normally the extreme values) is of interest. To cover the new level, a single-wire monitoring network based on a time-domain signaling scheme is proposed; this scheme significantly reduces both the switching activity over the wire and the power consumption of the network, and it delivers the monitors' data to the controller already ordered from maximum to minimum. If this kind of signaling is applied to sensors that perform time-to-digital conversion, digitization resources can be shared in both time and space, yielding important area and power savings. Two prototypes of complete monitoring systems are presented that significantly outperform previous works in terms of area and, especially, power consumption.
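
A minimal sketch of the two sensing ideas, using a toy subthreshold-leakage model (the constants are illustrative, not extracted from the fabricated sensors): a floating node discharged by a temperature-dependent leakage current is read with a logarithmic time-to-digital counter, and the ratio of two discharge times taken at different gate voltages cancels a common multiplicative process factor:

```python
import math

C, V_SWING = 50e-15, 1.0   # node capacitance (F) and discharge voltage swing (V)
K_B = 8.617333262e-5       # Boltzmann constant (eV/K); kT in eV ~ thermal voltage

def leak_current(temp_k, vg, process=1.0, i0=1e-3, ea=0.5, n=1.5):
    """Toy subthreshold leakage: Arrhenius in T, exponential in gate voltage."""
    vt = K_B * temp_k
    return process * i0 * math.exp(-ea / vt) * math.exp(vg / (n * vt))

def discharge_time(temp_k, vg, process=1.0):
    return C * V_SWING / leak_current(temp_k, vg, process)

for temp_c in (0, 40, 80):
    t_k = temp_c + 273.15
    code = math.log2(discharge_time(t_k, vg=0.0) / 1e-9)   # logarithmic counter output
    # same die (process factor 1.3) measured at two gate voltages: the process
    # factor divides out of the ratio, leaving a process-insensitive readout
    ratio = discharge_time(t_k, 0.0, 1.3) / discharge_time(t_k, 0.2, 1.3)
    print(f"{temp_c:3d} degC  log-code {code:6.2f}  ratio {ratio:9.1f}")
```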