984 results for temperature-programmed techniques
Abstract:
Varved lake sediments are excellent natural archives providing quantitative insights into climatic and environmental changes at very high resolution and chronological accuracy. However, due to the multitude of responses within lake ecosystems, it is often difficult to understand how climate variability interacts with other environmental pressures such as eutrophication, and to attribute observed changes to specific causes. This is particularly challenging for the past 100 years, when multiple strong trends are superposed. Here we present a high-resolution multi-proxy record of sedimentary pigments and other biogeochemical data from the varved sediments of Lake Żabińskie (Masurian Lake District, north-eastern Poland; 54°N, 22°E; 120 m a.s.l.) spanning AD 1907 to 2008. Lake Żabińskie exhibits biogeochemical varves with highly organic late-summer and winter layers separated by white layers of endogenous calcite precipitated in early summer. The aim of our study is to investigate whether climate-driven changes and anthropogenic changes can be separated in a multi-proxy sediment data set, and to explore which sediment proxies are potentially suitable for long quantitative climate reconstructions. We also test whether time-consuming analytical techniques (e.g. HPLC) can be substituted by rapid scanning techniques (visible reflectance spectroscopy, VIS-RS; 380–730 nm). We used principal component analysis and cluster analysis to show that the recent eutrophication of Lake Żabińskie can be discriminated from climate-driven changes for the period AD 1907–2008. The eutrophication signal (PC1 = 46.4%; TOC, TN, TS, Phe-b, and high TC/CD ratios, i.e. total carotenoids over chlorophyll-a derivatives) is mainly expressed as increasing aquatic primary production, increasing hypolimnetic anoxia and a shift in the algal community from green algae to blue-green algae. The proxies diagnostic for eutrophication show a smooth positive trend between 1907 and ca. 1980, followed by a very rapid increase from ca. 1980 ± 2 onwards. We demonstrate that PC2 (24.4%; Chl-a-related pigments) is not affected by the eutrophication signal, but instead is sensitive to spring (MAM) temperature (r = 0.63, p_corr < 0.05, RMSEP = 0.56 °C; 5-yr filtered). Limnological monitoring data (2011–2013) support this finding. We also demonstrate that scanning visible reflectance spectroscopy (VIS-RS) data can be calibrated to HPLC-measured chloropigment data and used to infer concentrations of sedimentary Chl-a derivatives (pheophytin a + pyropheophytin a). This offers the possibility of very high-resolution, (multi)millennial-long paleoenvironmental reconstructions.
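To make the two statistical steps above concrete, here is a minimal Python sketch, assuming hypothetical standardized proxy arrays and scikit-learn; the variable names and the reflectance index are illustrative, not the authors' pipeline.

```python
# Minimal sketch: PCA separation of proxies, then VIS-RS -> HPLC calibration.
# Synthetic data throughout; illustrative only.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n_samples = 102                            # one sample per varve year, 1907-2008
proxies = rng.normal(size=(n_samples, 8))  # TOC, TN, TS, pigments, ... (standardized)

pca = PCA(n_components=2)
scores = pca.fit_transform(proxies)        # PC1 and PC2 score time series
print(pca.explained_variance_ratio_)       # the study reports 46.4% / 24.4%

# Calibrate a scanning reflectance index (e.g. an absorption-band depth around
# 660-670 nm) against HPLC chloropigments, then predict at scanning resolution.
rabd = rng.normal(size=(n_samples, 1))     # VIS-RS index per varve (synthetic)
chl_hplc = 2.0 * rabd[:, 0] + rng.normal(scale=0.1, size=n_samples)
model = LinearRegression().fit(rabd, chl_hplc)
chl_inferred = model.predict(rabd)         # inferred Chl-a derivative concentrations
```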
Abstract:
This study assesses the skill of several climate field reconstruction (CFR) techniques in reconstructing past precipitation over continental Europe and the Mediterranean at seasonal time scales over the last two millennia from proxy records. A number of pseudoproxy experiments are performed within the virtual reality of a regional paleoclimate simulation at 45 km resolution to analyse different aspects of reconstruction skill. Canonical Correlation Analysis (CCA), two versions of an Analog Method (AM) and Bayesian hierarchical modeling (BHM) are applied to reconstruct precipitation from a synthetic network of pseudoproxies that are contaminated with various types of noise. The skill of the derived reconstructions is assessed through comparison with the precipitation simulated by the regional climate model. Unlike BHM, CCA systematically underestimates the variance. The AM can be adjusted to overcome this shortcoming, presenting a behaviour intermediate between the two aforementioned techniques. However, a trade-off between reconstruction-target correlation and reconstructed variance is a drawback of all CFR techniques. CCA presents the largest, and BHM the lowest, skill in preserving the temporal evolution, whereas the AM can be tuned to improve correlation at the expense of losing variance. While BHM has been shown to perform well for temperature, it relies heavily on prescribed spatial correlation lengths; this assumption is valid for temperature but hardly warranted for precipitation. In general, none of the methods outperforms the others. All experiments agree that a dense and regularly distributed proxy network is required to reconstruct precipitation accurately, reflecting its high spatial and temporal variability. This is especially true in summer, when localised convective precipitation events cause a particularly short de-correlation distance from the proxy location.
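A toy pseudoproxy experiment can be sketched in a few lines, assuming synthetic fields and scikit-learn's CCA; the network size, noise level and the two skill metrics below are illustrative placeholders, not the paper's setup.

```python
# Sketch of a pseudoproxy experiment: contaminate model output with noise,
# reconstruct the field with CCA, and score skill. Synthetic data only.
import numpy as np
from sklearn.cross_decomposition import CCA

rng = np.random.default_rng(1)
n_years, n_grid, n_proxy = 500, 200, 30
field = rng.normal(size=(n_years, n_grid))         # "true" simulated precipitation
sites = rng.choice(n_grid, n_proxy, replace=False) # pseudoproxy network locations
snr = 0.5                                          # signal-to-noise ratio (by std)
proxies = field[:, sites] + rng.normal(scale=1 / snr, size=(n_years, n_proxy))

cal = slice(0, 250)                                # calibration period
cca = CCA(n_components=10).fit(proxies[cal], field[cal])
recon = cca.predict(proxies)                       # reconstruction over all years

# Skill metrics discussed above: temporal correlation and preserved variance
r = np.array([np.corrcoef(recon[:, j], field[:, j])[0, 1] for j in range(n_grid)])
var_ratio = recon.var(axis=0) / field.var(axis=0)  # CCA tends to push this below 1
print(r.mean(), var_ratio.mean())
```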
Abstract:
Research studies on the association between exposure to air contaminants and disease frequently use worn dosimeters to measure the concentration of the contaminant of interest. However, investigation of exposure determinants requires additional knowledge beyond concentration, i.e., knowledge about personal activity, such as whether the exposure occurred in a building or outdoors. Current studies frequently depend upon manual activity logging to record location. This study's purpose was to evaluate the use of a worn data logger recording three environmental parameters (temperature, humidity, and light intensity) as well as time of day to determine indoor or outdoor location, with the ultimate aim of eliminating the need to manually log location, or at least providing a method to verify such logs. Data collection was limited to a single geographical area (the Houston, Texas metropolitan area) during a single season (winter), using a HOBO H8 four-channel data logger. Data for development of a Location Model were collected by using the logger for deliberate sampling of programmed activities in outdoor, building, and vehicle locations at various times of day. The Model was developed by analyzing the distributions of the environmental parameters by location and time to establish a prioritized set of cut points for assessing locations. The final Model consisted of four "processors" that varied these priorities and cut points. Data to evaluate the Model were collected by wearing the logger during "typical days" while maintaining a location log. The Model was tested by feeding the typical-day data into each processor and generating assessed locations for each record. These assessed locations were then compared with the true locations recorded in the manual log to distinguish accurate from erroneous assessments. The utility of each processor was evaluated by calculating overall error rates across all times of day and individual error rates by time of day. Unfortunately, the error rates were large enough that there would be no benefit in using the Model. A second analysis, in which assessed locations were classified as either indoor (including both building and vehicle) or outdoor, yielded slightly lower error rates that still precluded any benefit from the Model's use.
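A sketch of what one cut-point "processor" could look like, with invented thresholds and priorities (the study's actual cut points are not reproduced here):

```python
# Sketch of a rule-based location "processor": assign building/vehicle/outdoor
# from logger channels plus time of day. Thresholds are invented placeholders.
from dataclasses import dataclass

@dataclass
class Record:
    hour: int          # time of day (0-23)
    temp_c: float      # temperature channel
    rh_pct: float      # relative-humidity channel
    light_lux: float   # light-intensity channel

def assess_location(rec: Record) -> str:
    daytime = 7 <= rec.hour <= 18
    # Priority 1: bright light during daytime suggests outdoors
    if daytime and rec.light_lux > 1000:
        return "outdoor"
    # Priority 2: stable, conditioned air suggests a building
    if 20 <= rec.temp_c <= 25 and rec.rh_pct < 60:
        return "building"
    # Fallback for this sketch: treat the remainder as vehicle
    return "vehicle"

def error_rate(records, true_locations):
    # Evaluation step: compare assessed locations against the manual log
    wrong = sum(assess_location(r) != t for r, t in zip(records, true_locations))
    return wrong / len(records)
```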
Novel Imaging-Based Techniques Reveal a Role for PD-1/PD-L1 in Tumor Immune Surveillance in the Lung
Abstract:
The binding of the immune inhibitory receptor Programmed Death 1 (PD-1) on T cells to its ligand PD-L1 has been implicated as a major contributor to tumor-induced immune suppression. Clinical trials of PD-L1 blockade have proven effective in unleashing therapeutic anti-tumor immune responses in a subset of patients with advanced melanoma, yet current response rates are low for reasons that remain unclear. Hypothesizing that the PD-1/PD-L1 pathway regulates T cell surveillance within the tumor microenvironment, we employed intravital microscopy to investigate the in vivo impact of a PD-L1 blocking antibody upon tumor-associated immune cell migration. However, current analytical methods for intravital dynamic microscopy data lack the ability to identify the cellular targets of T cell interactions in vivo, a crucial means for discovering which interactions are modulated by therapeutic intervention. By developing novel imaging techniques that allowed us to better analyze tumor progression and T cell dynamics in the microenvironment, we were able to explore the impact of PD-L1 blockade upon the migratory properties of tumor-associated immune cells, including T cells and antigen-presenting cells, during lung tumor progression. Our results demonstrate that early changes in tumor morphology may be indicative of responsiveness to anti-PD-L1 therapy. We show that immune cells in the tumor microenvironment, as well as tumors themselves, express PD-L1, but that immune phenotype alone is not a predictive marker of effective anti-tumor responses. Through a novel method of quantifying T cell interactions, we show that T cells are largely engaged in interactions with dendritic cells in the tumor microenvironment. Additionally, we show that during PD-L1 blockade, non-activated T cells are recruited in greater numbers into the tumor microenvironment and engage preferentially with dendritic cells. We further show that during PD-L1 blockade, activated T cells engage in more confined, immune synapse-like interactions with dendritic cells, as opposed to the more dynamic, kinapse-like interactions observed when PD-L1 is free to bind its receptor. By advancing the contextual analysis of anti-tumor immune surveillance in vivo, this study implicates the interaction between T cells and tumor-associated dendritic cells as a possible modulator in targeting PD-L1 for anti-tumor immunotherapy.
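Confined versus motile contacts in track data are often summarized with a confinement ratio (net displacement over path length); the sketch below illustrates that generic metric with an invented threshold, and is not the study's novel quantification method.

```python
# Sketch: classify T cell-dendritic cell contacts as confined (synapse-like)
# or motile (kinapse-like) via a track confinement ratio. Threshold is invented.
import numpy as np

def confinement_ratio(track: np.ndarray) -> float:
    """track: (n, 2 or 3) array of cell positions over time."""
    net = np.linalg.norm(track[-1] - track[0])                      # net displacement
    path = np.sum(np.linalg.norm(np.diff(track, axis=0), axis=1))   # total path length
    return net / path if path > 0 else 0.0

def classify_interaction(track: np.ndarray, threshold: float = 0.3) -> str:
    # Low ratio: the cell dwells around one point -> synapse-like contact
    return "synapse-like" if confinement_ratio(track) < threshold else "kinapse-like"
```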
Abstract:
This Atlas summarises the global distribution of extant organic-walled dinoflagellate cysts in the form of 61 maps illustrating the relative abundance of individual cyst taxa in recent marine sediments from the Atlantic Ocean and adjacent basins, the Antarctic region (South Atlantic, southwestern Pacific and southern Indian Ocean sectors), the Arabian Sea and the northwestern Pacific. This synthesis is based on the integration of literature sources with data from 835 marine surface sediment samples prepared using a comparable methodology and taxonomy. The relationships between the distribution patterns of cyst species and surface-water parameters (temperature, salinity, phosphate and nitrate concentrations) are documented with graphs depicting the relative abundance of species in relation to seasonal and annual values of the above-mentioned parameters at the sample sites. Two ordination techniques (detrended correspondence analysis and canonical correspondence analysis) have been carried out to illustrate statistically the relationships between species distribution and sea-surface conditions. The results have been compared with previously published records, and an overview of the ecological significance of each individual species is presented. Characterisations of selected environments are included, as well as a discussion of how additional processes such as preservation and transport could have affected the present dataset.
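For illustration, plain (undetrended, unconstrained) correspondence analysis fits in a few lines of NumPy; the detrending and the canonical, environmentally constrained variant used in the Atlas are omitted, and the count table is synthetic.

```python
# Sketch of correspondence analysis on a sites-by-species abundance table.
# Simplified (no detrending, no environmental constraints); synthetic data.
import numpy as np

def correspondence_analysis(X: np.ndarray, n_axes: int = 2):
    P = X / X.sum()                        # correspondence matrix
    r, c = P.sum(axis=1), P.sum(axis=0)    # row (site) and column (species) masses
    S = (P - np.outer(r, c)) / np.sqrt(np.outer(r, c))  # standardized residuals
    U, s, Vt = np.linalg.svd(S, full_matrices=False)
    site_scores = U[:, :n_axes] * s[:n_axes] / np.sqrt(r)[:, None]
    species_scores = Vt.T[:, :n_axes] * s[:n_axes] / np.sqrt(c)[:, None]
    return site_scores, species_scores, s[:n_axes] ** 2   # inertia per axis

counts = np.random.default_rng(2).poisson(3.0, size=(835, 61)).astype(float)
sites, species, inertia = correspondence_analysis(counts)
# Species scores can then be related to SST, salinity, nutrients, etc.
```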
Abstract:
GEOMAR's autonomous underwater vehicle (AUV Abyss, a REMUS 6000) was deployed within the framework of a multi-platform experiment in June 2012 during R/V Maria S. Merian cruise MSM21/1b, about 180 km downstream of the Denmark Strait. The scientific payload included a pumped Seabird 49 FastCAT CTD system, a Paroscientific pressure sensor, and a shear and temperature microstructure profiler from Rockland Scientific Inc. In total, six of eight AUV dives were carried out successfully. Aborts on three dives were caused by the strong counter-currents the AUV experienced in the Denmark Strait Overflow plume, which made the AUV fail to reach its waypoints on schedule. During all missions the AUV was programmed to dive at constant depth levels along straight legs approximately parallel to chosen isobaths, with a constant speed of 1.6 m/s through the water.
Abstract:
The intermediate band solar cell (IBSC) is a photovoltaic device with a theoretical conversion efficiency limit of 63.2%. In recent years many attempts have been made to fabricate an intermediate band material that behaves as the theory states. One characteristic feature of an IBSC is its luminescence spectrum. In this work, the temperature dependence of the photoluminescence (PL) and electroluminescence (EL) spectra of InAs/GaAs QD-IBSCs, together with their reference cell, has been studied. It is shown that EL measurements provide more reliable information about the behaviour of the IB material inside the IBSC structure than PL measurements. At low temperatures, the EL spectra are consistent with the quasi-Fermi level splits described by the IBSC model, whereas at room temperature they are not. This result is in agreement with previously reported analyses of the quantum efficiency of the solar cells.
Abstract:
Experiments have been performed to investigate the cyclic freeze-thaw deterioration of concrete using traditional and non-traditional techniques. Two concrete mixes with different pore structures were tested in order to compare the behavior of a freeze-thaw-resistant concrete with that of one that is not. One of the concretes was air-entrained, with a high cement content and a low w/c ratio; the other had a lower cement content and a higher w/c ratio, without an air-entraining agent. Concrete specimens were studied under cyclic freeze-thaw conditions according to the UNE-CEN/TS 12390-9 test, using a 3% NaCl solution as the freezing medium (CDF test: Capillary suction, De-icing agent and Freeze-thaw test). The temperature and relative humidity were measured during the cycles inside the specimens using embedded sensors placed at different heights from the surface in contact with the de-icing agent solution. Strain gauges were used to measure the strain variations at the surface of the specimens. Measurements of ultrasonic pulse velocity through the concrete specimens were also taken before, during, and after the freeze-thaw cycles. According to the CDF test, failure of the concrete without air-entraining agent was observed before 28 freeze-thaw cycles; in contrast, the scaling of the air-entrained concrete was only 0.10 kg/m² after 28 cycles, versus 3.23 kg/m² in the deteriorated concrete. Similar behavior was observed in the strain measurements: the residual strain after 28 cycles was 1150 μm/m in the deteriorated concrete versus 65 μm/m in the air-entrained concrete. By monitoring the changes in ultrasonic pulse velocity during the freeze-thaw cycles, the deterioration of the tested specimens was assessed.
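Ultrasonic monitoring of this kind is commonly reduced to a relative dynamic modulus via the squared velocity ratio; a small sketch with invented example velocities:

```python
# Sketch: tracking freeze-thaw damage from ultrasonic pulse velocity (UPV),
# using the standard relation RDM = (v_n / v_0)^2 * 100. Sample values invented.
def relative_dynamic_modulus(v0_m_s: float, vn_m_s: float) -> float:
    """Relative dynamic modulus of elasticity (%) after n cycles."""
    return (vn_m_s / v0_m_s) ** 2 * 100.0

# Example: pulse velocity drops from 4500 m/s to 3800 m/s over 28 cycles
rdm = relative_dynamic_modulus(4500.0, 3800.0)
print(f"RDM after 28 cycles: {rdm:.1f} %")   # ~71 %, indicating internal damage
```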
Abstract:
This work is a contribution to the research and development of the intermediate band solar cell (IBSC), a high-efficiency photovoltaic concept that features the advantages of both low- and high-bandgap solar cells. The resemblance to a low-bandgap solar cell comes from the fact that the IBSC hosts an electronic energy band, the intermediate band (IB), within the semiconductor bandgap. This IB allows the collection of sub-bandgap-energy photons by means of two-step photon absorption processes, from the valence band (VB) to the IB and from there to the conduction band (CB). The exploitation of these low-energy photons implies a more efficient use of the solar spectrum. The resemblance of the IBSC to a high-bandgap solar cell is related to the preservation of the voltage: the open-circuit voltage (VOC) of an IBSC is not limited by any of the sub-bandgaps (involving the IB), but only by the fundamental bandgap (defined from the VB to the CB). Nevertheless, the presence of the IB allows new paths for electronic recombination, and the performance of the IBSC is degraded under 1-sun operating conditions. A theoretical argument is presented regarding the need for concentrated illumination in order to circumvent the degradation of the voltage derived from the increase in recombination. This theory is supported by experimental verification carried out with our novel characterization technique, consisting of the acquisition of photogenerated current (IL)-VOC pairs at low temperature under concentrated light. Besides, at this stage of IBSC research several new IB materials are being engineered, and our novel characterization tool can be very useful to provide feedback on their capability to perform as real IBSCs, verifying or disregarding the fulfillment of the "voltage preservation" principle. An analytical model has also been developed to assess the potential of quantum-dot (QD) IBSCs. It is based on the calculation of the band alignment of III-V alloyed heterojunctions, the estimation of the confined energy levels in a QD and the calculation of the detailed balance efficiency. Several potentially useful QD materials have been identified, such as InAs/AlxGa1-xAs, InAs/GaxIn1-xP, InAs1-yNy/AlAsxSb1-x or InAs1-zNz/Alx[GayIn1-y]1-xP. Finally, a model for the analysis of the series resistance of a concentrator solar cell has also been developed in order to design and fabricate IBSCs adapted to 1,000 suns.
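As a flavor of the confined-level estimation step mentioned above, here is a 1D finite-square-well sketch, a crude stand-in for a full 3D QD calculation; the effective mass, band offset and width are illustrative assumptions, not the thesis's values.

```python
# Sketch: confined electron levels in a 1D finite square well, a toy stand-in
# for estimating QD confinement energies. Material parameters are illustrative.
import numpy as np
from scipy.optimize import brentq

HBAR = 1.054571817e-34   # J*s
M0 = 9.1093837015e-31    # electron rest mass, kg
EV = 1.602176634e-19     # J per eV

def bound_states(width_nm=6.0, depth_ev=0.5, m_eff=0.023):
    """Electron levels (eV above the well bottom) of a 1D finite square well."""
    a = 0.5 * width_nm * 1e-9
    m = m_eff * M0
    z0 = a * np.sqrt(2.0 * m * depth_ev * EV) / HBAR
    rhs = lambda z: np.sqrt(max(z0**2 - z**2, 0.0))
    even = lambda z: z * np.tan(z) - rhs(z)       # even-parity matching condition
    odd = lambda z: -z / np.tan(z) - rhs(z)       # odd-parity matching condition
    levels = []
    for f in (even, odd):
        zs = np.linspace(1e-6, z0 - 1e-9, 20000)
        vals = np.array([f(z) for z in zs])
        for i in range(len(zs) - 1):
            # accept only genuine sign changes, not tan() pole jumps
            if vals[i] * vals[i + 1] < 0 and abs(vals[i]) < 50 and abs(vals[i + 1]) < 50:
                z = brentq(f, zs[i], zs[i + 1])
                levels.append((HBAR * z / a) ** 2 / (2.0 * m) / EV)
    return sorted(levels)

print(bound_states())   # energies in eV; deeper/wider wells bind more levels
```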
Abstract:
Temperature is a first-class design concern in modern integrated circuits. The important increase in power densities associated with recent technology evolutions has led to the appearance of thermal gradients and hot spots during run-time operation. Temperature impacts several circuit parameters such as speed, cooling budgets, reliability, power consumption, etc. In order to fight against these negative effects, dynamic thermal management (DTM) techniques adapt the behavior of the chip relying on the information of a monitoring system that provides run-time thermal information of the die surface. The field of on-chip temperature monitoring has drawn the attention of the scientific community in recent years and is the object of study of this thesis. This thesis approaches the matter of on-chip temperature monitoring from different perspectives and levels, providing solutions to some of the most important issues. The physical and circuit levels are covered with the design and characterization of two novel temperature sensors specially tailored for DTM purposes. The first sensor is based upon a mechanism that obtains a pulse whose width varies with the dependence of leakage currents on temperature. In a nutshell, a circuit node is charged and subsequently left floating so that it discharges through the subthreshold currents of a transistor; the time the node takes to discharge is the width of the pulse. Since the width of the pulse displays an exponential dependence on temperature, the conversion into a digital word is realized by means of a logarithmic counter that performs both the time-to-digital conversion and the linearization of the output. The structure resulting from this combination of elements is implemented in a 0.35 μm technology and is characterized by a very reduced area, 10,250 nm², and power consumption, 1.05-65.5 nW at 5 samples/s; these figures outperformed all previous works at the time of first publication and, at the time of publication of this thesis, still outperform all previous implementations in the same technology node. Concerning accuracy, the sensor exhibits good linearity: even without calibration it displays a 3σ error of 1.97 °C, appropriate for DTM applications. As explained, the sensor is completely compatible with standard CMOS processes; this fact, along with its tiny area and power overhead, makes it especially suitable for integration in a DTM monitoring system with a collection of on-chip monitors distributed across the chip. The exacerbated process fluctuations that come with recent technology nodes jeopardize the linearity characteristics of the first sensor. In order to overcome these problems, a new temperature-inferring technique is proposed. It also relies on the thermal dependence of the leakage currents used to discharge a floating node, but now the result comes from the ratio of two different measurements, in one of which a characteristic of the discharging transistor, the gate voltage, is altered. This ratio proves to be very robust against process variations and displays a more than sufficient linearity with temperature: a 3σ error of 1.17 °C considering process variations and performing two-point calibration. The implementation of the sensing part based on this new technique involves several issues, such as the generation of a voltage reference independent of process variations, which are analyzed in depth in the thesis. To perform the time-to-digital conversion, we employ the same digitization structure as in the first sensor. A completely new standard cell library targeting low area and power overhead was built from scratch to implement the digitization part. Putting all the pieces together, we achieve a complete sensor system characterized by an ultra-low energy per conversion of 48-640 pJ and an area of 0.0016 mm²; this figure outperforms all previous works. To support this claim, we perform a thorough comparison with over 40 works from the scientific literature. Moving up to the system level, the third contribution is centered on the modeling of a monitoring system consisting of a set of thermal sensors distributed across the chip. All previous works in the literature aim to maximize the accuracy of the system with the minimum number of monitors. In contrast, we introduce new quality metrics apart from the number of sensors: we consider the power consumption, the sampling frequency, the possibility of choosing among different types of monitors, and the interconnection costs. The model is fed into a simulated annealing algorithm that receives the thermal information of a system, its physical properties, area, power and interconnection constraints, and a collection of monitor types; the algorithm yields the selected type of monitor, the number of monitors, their positions and the optimum sampling rate. We test the algorithm on the Alpha 21364 processor under several constraint configurations to prove its validity. Compared with other previous works in the literature, the modeling presented here is the most complete. Finally, the last contribution targets the networking level: given an allocated set of temperature monitors, we focus on solving the problem of connecting them in a way that is efficient in both area and power. Our first proposal is the introduction of a new interconnection hierarchy level, the threshing level, between the monitors and the traditional peripheral buses, which applies data selectivity to reduce the amount of information sent to the central controller. The idea behind this new level is that in this kind of network most data are useless, because from the controller's viewpoint only a small amount of data, normally the extreme values, is of interest. To cover the new interconnection level, we propose a single-wire monitoring network based on a time-domain signaling scheme that significantly reduces both the switching activity over the wire and the power consumption of the network. This scheme codes the information in the time domain and allows a straightforward retrieval of a list of values ordered from maximum to minimum. If the scheme is applied to monitors that employ time-to-digital conversion, digitization resources can be shared in both time and space, producing an important saving in area and power consumption. Two prototypes of complete monitoring systems are presented; they significantly outperform previous works in terms of area and, especially, power consumption.
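A toy numerical sketch of the first sensor's principle, assuming the common rule of thumb that subthreshold leakage roughly doubles every 10 °C; all constants are placeholders rather than the thesis's device parameters.

```python
# Toy model of the leakage-discharge temperature sensor: a charged node
# discharges through subthreshold leakage, and a logarithmic counter turns the
# exponential pulse width into a roughly linear digital code.
import math

C_NODE = 10e-15        # node capacitance (F), placeholder
V_SWING = 1.0          # discharge swing (V), placeholder
I_LEAK_25C = 1e-12     # leakage at 25 degC (A), placeholder
DOUBLING = 10.0        # rule of thumb: leakage ~doubles every 10 degC

def pulse_width(temp_c: float) -> float:
    i_leak = I_LEAK_25C * 2.0 ** ((temp_c - 25.0) / DOUBLING)
    return C_NODE * V_SWING / i_leak          # seconds; shrinks exponentially

def log_counter_code(width_s: float, t_clk: float = 1e-9) -> int:
    # A logarithmic counter effectively reports log2 of the elapsed clock
    # count, which linearizes the exponential temperature dependence.
    return int(math.log2(max(width_s / t_clk, 1.0)))

for t in (0, 25, 50, 75, 100):
    print(t, f"{pulse_width(t):.3e} s", log_counter_code(pulse_width(t)))
```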
Abstract:
A broadband primary standard for thermal noise measurements is presented, and its thermal and electromagnetic behaviour is analysed by means of a novel hybrid analytical-numerical simulation methodology. The standard consists of a broadband termination connected to a 3.5 mm coaxial airline partially immersed in liquid nitrogen, and is designed to obtain low reflectivity and low uncertainty in the noise temperature. A detailed sensitivity analysis is made in order to highlight the critical characteristics that most affect the uncertainty in the noise temperature, and also to determine the manufacturing and operating tolerances for proper performance in the range 10 MHz to 26.5 GHz. Aspects such as the thermal bead design, the level of the liquid nitrogen, the uncertainties associated with the temperatures, the physical properties of the materials in the standard and the simulation techniques are discussed.
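The radiometric bookkeeping behind such a standard can be sketched with the usual cascade relation for lossy sections at different physical temperatures; the section losses and temperatures below are invented.

```python
# Sketch: noise temperature at the output of a cold termination seen through a
# lossy airline with a temperature gradient, modeled as cascaded sections.
# Section losses and temperatures are invented placeholders.

def output_noise_temperature(t_term_k, sections):
    """sections: list of (loss_db, physical_temp_k), from termination to output.
    Each lossy section attenuates the incoming noise and adds its own thermal
    noise: T_out = T_in * g + T_phys * (1 - g), with section gain g < 1."""
    t = t_term_k
    for loss_db, t_phys in sections:
        g = 10.0 ** (-loss_db / 10.0)
        t = t * g + t_phys * (1.0 - g)
    return t

# Termination at liquid-nitrogen temperature; airline warming toward ambient
sections = [(0.02, 77.0), (0.02, 120.0), (0.02, 200.0), (0.02, 290.0)]
print(f"{output_noise_temperature(77.36, sections):.2f} K")
```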
Abstract:
Dynamic thermal management techniques require a collection of on-chip thermal sensors that imply a significant area and power overhead. Finding the optimum number of temperature monitors and their locations on the chip surface to optimize accuracy is an NP-hard problem. In this work we improve the modeling of the problem by including area, power and networking constraints, along with the consideration of three inaccuracy terms: spatial errors, sampling-rate errors and monitor-inherent errors. The problem is solved with a simulated annealing algorithm. We apply the algorithm to a test case employing three different types of monitors to highlight the importance of the different metrics. Finally, we present a case study of the Alpha 21364 processor under two different constraint scenarios.
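A compact sketch of the optimization loop described above; the cost function, grid and cooling schedule are illustrative stand-ins for the paper's full model (which also weighs monitor type and sampling rate).

```python
# Sketch of simulated annealing for monitor placement: minimize a cost mixing
# spatial error (distance from each hotspot to its nearest sensor) and a
# per-sensor power penalty. Grid, weights and schedule are illustrative.
import math
import random

random.seed(0)
GRID = 16                                  # chip modeled as a GRID x GRID grid
hotspots = [(random.randrange(GRID), random.randrange(GRID)) for _ in range(12)]
N_SENSORS = 4
POWER_PENALTY = 0.5                        # cost per sensor (power/area proxy)

def cost(sensors):
    spatial = sum(min(math.dist(h, s) for s in sensors) for h in hotspots)
    return spatial + POWER_PENALTY * len(sensors)

def neighbor(sensors):
    new = list(sensors)
    i = random.randrange(len(new))
    x, y = new[i]
    new[i] = ((x + random.choice((-1, 0, 1))) % GRID,
              (y + random.choice((-1, 0, 1))) % GRID)
    return new

state = [(random.randrange(GRID), random.randrange(GRID)) for _ in range(N_SENSORS)]
temp = 10.0
while temp > 1e-3:
    cand = neighbor(state)
    delta = cost(cand) - cost(state)
    if delta < 0 or random.random() < math.exp(-delta / temp):
        state = cand                       # accept improvements and some uphill moves
    temp *= 0.999                          # geometric cooling schedule
print(sorted(state), round(cost(state), 2))
```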