947 results for Low-Power Image Sensors
Abstract:
The current design life of nuclear power plants (NPPs) could potentially be extended to 80 years. During this extended plant life, all safety- and operationally relevant Instrumentation & Control (I&C) systems are required to meet their design performance requirements to ensure safe and reliable operation of the NPP, both during normal operation and following design basis events. This in turn requires an adequate and documented qualification and aging management program. It is known that the electrical insulation of I&C cables used in safety-related circuits can degrade over their life due to the aging effect of environmental stresses such as temperature, radiation, and vibration, particularly if the cables are located in the containment area of the NPP. Several condition monitoring techniques are therefore required to assess the state of the insulation. Such techniques can be used to estimate a residual lifetime, based on the relationship between condition indicators and aging stresses, and hence to support a preventive and effective maintenance program. The objective of this thesis is to investigate potential electrical aging indicators (diagnostic markers) by testing various I&C cable insulations subjected to accelerated multi-stress (thermal and radiation) aging.
Abstract:
This thesis work was carried out at the Medical Physics service of the Policlinico Sant'Orsola-Malpighi in Bologna. The study focused on the comparison between standard reconstruction techniques (Filtered Back Projection, FBP) and iterative techniques in Computed Tomography. The work was divided into two parts: in the first, the quality of images acquired with a multislice CT scanner (iCT 128, Philips system) was analyzed using both the FBP algorithm and the iterative one (in our case, iDose4). To assess image quality, the following parameters were analyzed: the Noise Power Spectrum (NPS), the Modulation Transfer Function (MTF), and the contrast-to-noise ratio (CNR). The first two quantities were studied by performing measurements on a phantom supplied by the manufacturer, which simulated the body and head sections with two cylinders of 32 and 20 cm, respectively. The measurements confirm the noise reduction, although to different degrees for the different convolution filters used. The MTF study instead revealed that the use of standard versus iterative techniques does not change the spatial resolution; indeed, the curves obtained are essentially identical (apart from the intrinsic differences between the convolution filters), contrary to what the manufacturer claims. For the CNR analysis, two phantoms were used; the first, the Catphan 600, is the phantom commonly used to characterize CT systems. The second, the Cirs 061, contains inserts that simulate lesions with densities typical of the abdominal region. The study showed that, for both phantoms, the contrast-to-noise ratio increases when the iterative reconstruction technique is used.
The second part of the thesis work consisted of evaluating the dose reduction across several protocols used in clinical practice: a large number of examinations was analyzed, and the mean CTDI and DLP values were calculated on a sample of examinations reconstructed with FBP and with iDose4. The results show that the values obtained with the iterative algorithm are below the national diagnostic reference levels (DRLs) and below those of sites that do not use iterative systems.
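The two dose descriptors averaged in this survey, CTDIvol and DLP, are linked by a simple relation: DLP = CTDIvol × scan length. A minimal sketch; the function name and the example values are illustrative, not taken from the thesis:

```python
def dlp(ctdi_vol_mgy, scan_length_cm):
    """Dose-length product (mGy*cm) from volume CTDI and scan length."""
    return ctdi_vol_mgy * scan_length_cm

# hypothetical abdominal acquisition: CTDIvol of 10 mGy over a 40 cm range
print(dlp(10.0, 40.0))  # 400.0 mGy*cm
```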
Abstract:
To intraindividually compare a low-tube-voltage (80 kVp), high-tube-current (675 mA) computed tomographic (CT) technique with a high-tube-voltage (140 kVp) CT protocol with regard to pancreatic tumor detection, image quality, and radiation dose during the pancreatic parenchymal phase.
Abstract:
To investigate whether an adaptive statistical iterative reconstruction (ASIR) algorithm improves the image quality at low-tube-voltage (80-kVp), high-tube-current (675-mA) multidetector abdominal computed tomography (CT) during the late hepatic arterial phase.
Abstract:
The temporal bone is ideal for low-dose CT because of its intrinsic high contrast. The aim of this study was to retrospectively evaluate image quality and radiation doses of a new low-dose versus a standard high-dose pediatric temporal bone CT protocol and to review dosimetric data from the literature.
Abstract:
A new generation of high-definition computed tomography (HDCT) 64-slice scanners, complemented by a new iterative image reconstruction algorithm (adaptive statistical iterative reconstruction), offers substantially higher resolution than standard-definition CT (SDCT) scanners. Because higher resolution confers higher noise, we compared the image quality and radiation dose of coronary computed tomography angiography (CCTA) from HDCT versus SDCT. Consecutive patients (n = 93) underwent HDCT and were compared to 93 patients who had previously undergone CCTA with SDCT, matched for heart rate (HR), HR variability, and body mass index (BMI). Tube voltage and current were adapted to the patient's BMI, using identical protocols in both groups. The image quality of all CCTA scans was evaluated by two independent readers in all coronary segments using a 4-point scale (1, excellent image quality; 2, blurring of the vessel wall; 3, image with artefacts but evaluable; 4, non-evaluable). Effective radiation dose was calculated as the DLP multiplied by a conversion factor (0.014 mSv/(mGy × cm)). The mean image quality score of HDCT versus SDCT was comparable (2.02 ± 0.68 vs. 2.00 ± 0.76). Mean effective radiation dose did not differ significantly between HDCT (1.7 ± 0.6 mSv; range, 1.0-3.7 mSv) and SDCT (1.9 ± 0.8 mSv; range, 0.8-5.5 mSv; P = n.s.). HDCT scanners thus allow low-dose 64-slice CCTA with higher resolution than SDCT at maintained image quality and an equally low radiation dose. Whether this translates into higher accuracy of HDCT for CAD detection remains to be evaluated.
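The dose calculation described above (effective dose equals DLP times a conversion factor) can be sketched as follows; the function name and the example DLP value are assumptions, only the 0.014 mSv/(mGy × cm) chest factor comes from the abstract:

```python
K_CHEST = 0.014  # chest conversion factor from the abstract, mSv / (mGy*cm)

def effective_dose(dlp_mgy_cm, k=K_CHEST):
    """Estimate effective dose (mSv) from the scanner-reported DLP."""
    return dlp_mgy_cm * k

# a DLP of about 121 mGy*cm yields ~1.7 mSv, the mean HDCT dose reported
print(round(effective_dose(121.4), 2))  # 1.7
```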
Abstract:
OBJECTIVE: Measures to reduce radiation exposure and injected iodine mass are becoming more important with the widespread and often repeated use of pulmonary CT angiography (CTA) in patients with suspected pulmonary embolism. In this retrospective study, we analyzed the capability of two low-kilovoltage CTA protocols to achieve these goals. MATERIALS AND METHODS: Ninety patients weighing less than 100 kg were examined with a pulmonary CTA protocol using either 100 kVp (group A) or 80 kVp (group B). Volume and flow rate of contrast medium were reduced in group B (75 mL at 3 mL/s) compared with group A (100 mL at 4 mL/s). Attenuation was measured in the central and peripheral pulmonary arteries, and contrast-to-noise ratios (CNR) were calculated. Entrance skin dose was estimated by measuring the surface dose in an ovoid-cylindrical polymethyl methacrylate chest phantom of two different dimensions corresponding to the range of chest diameters in our patients. Quantitative image parameters, estimated effective dose, and skin dose in both groups were compared by the t test. Arterial enhancement, noise, and overall quality were independently assessed by three radiologists, and results were compared between the groups using nonparametric tests. RESULTS: Mean attenuation in the pulmonary arteries in group B (427.6 ± 116 HU) was significantly higher than in group A (342.1 ± 87.7 HU; P < 0.001), whereas CNR showed no difference (group A, 20.6 ± 7.3; group B, 22.2 ± 7.1; P = 0.302). Effective dose was lower by more than 40% with 80 kVp (1.68 ± 0.23 mSv) compared with 100 kVp (2.87 ± 0.88 mSv) (P < 0.001). Surface dose was significantly lower at 80 kVp than at 100 kVp for both phantom dimensions (2.75 vs. 3.22 mGy, P = 0.027, and 2.22 vs. 2.73 mGy, P = 0.005, respectively). Image quality did not differ significantly between the groups (P = 0.151).
CONCLUSIONS: Using 80 kVp in pulmonary CTA permits a reduction in patient exposure of more than 40% and in contrast medium volume of 25% compared with 100 kVp, without deterioration of image quality, in patients weighing less than 100 kg.
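The abstract reports CNR values but not the exact formula; a common definition for CTA, assumed here, divides the vessel-to-background attenuation difference by the image noise (the SD of a background ROI):

```python
def cnr(hu_vessel, hu_background, noise_sd):
    """Contrast-to-noise ratio from mean ROI attenuations and noise SD."""
    return (hu_vessel - hu_background) / noise_sd

# group B mean pulmonary attenuation (427.6 HU) against a hypothetical
# 50 HU background ROI with 17 HU noise lands near the reported CNR of 22.2
print(round(cnr(427.6, 50.0, 17.0), 1))  # 22.2
```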
Abstract:
The objective of this retrospective study was to assess image quality with pulmonary CT angiography (CTA) using 80 kVp and to find anthropomorphic parameters other than body weight (BW) to serve as selection criteria for low-dose CTA. Attenuation in the pulmonary arteries, anteroposterior and lateral diameters, cross-sectional area, and soft-tissue thickness of the chest were measured in 100 consecutive patients weighing less than 100 kg undergoing 80 kVp pulmonary CTA. Body surface area (BSA) and contrast-to-noise ratios (CNR) were calculated. Three radiologists analyzed arterial enhancement, noise, and image quality. Image parameters were compared between patients grouped by BW (group 1: 0-50 kg; groups 2-6: 51-100 kg in 10 kg increments). CNR was higher in patients weighing less than 60 kg than in the BW groups between 71 and 99 kg (P between 0.025 and <0.001). Subjective rankings of enhancement (P = 0.165-0.605), noise (P = 0.063), and image quality (P = 0.079) did not differ significantly across the patient groups. CNR correlated moderately with weight (R = -0.585), BSA (R = -0.582), cross-sectional area (R = -0.544), and anteroposterior chest diameter (R = -0.457; P < 0.001 for all parameters). We conclude that 80 kVp pulmonary CTA provides diagnostic image quality in patients weighing up to 100 kg. Body weight is a suitable criterion for selecting patients for low-dose pulmonary CTA.
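Body surface area was one of the candidate selection parameters; the abstract does not state which formula was used, so the classic Du Bois formula is assumed in this sketch:

```python
def bsa_dubois(weight_kg, height_cm):
    """Du Bois body surface area estimate in m^2 (assumed formula)."""
    return 0.007184 * weight_kg ** 0.425 * height_cm ** 0.725

# a 70 kg, 175 cm patient has a BSA of roughly 1.85 m^2
print(round(bsa_dubois(70, 175), 2))  # 1.85
```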
Abstract:
RATIONALE AND OBJECTIVES: To evaluate the effect of automatic tube current modulation on radiation dose and image quality for low-tube-voltage computed tomography (CT) angiography. MATERIALS AND METHODS: An anthropomorphic phantom was scanned with a 64-section CT scanner using the following tube voltages: 140 kVp (protocol A), 120 kVp (protocol B), 100 kVp (protocol C), and 80 kVp (protocol D). To achieve similar noise, combined z-axis and xy-axis automatic tube current modulation was applied. Effective dose (ED) was assessed for the four tube voltages. Three plastic vials filled with different concentrations of iodinated solution were placed on the phantom's abdomen to obtain attenuation measurements. The signal-to-noise ratio (SNR) was calculated, and a figure of merit (FOM) for each iodinated solution was computed as SNR²/ED. RESULTS: The ED was similar for the four tube voltages: (A) 5.4 ± 0.3 mSv, (B) 4.1 ± 0.6 mSv, (C) 3.9 ± 0.5 mSv, and (D) 4.2 ± 0.3 mSv (P > .05). As the tube voltage decreased from 140 to 80 kVp, image noise was maintained (range, 13.8-14.9 HU) (P > .05). SNR increased as the tube voltage decreased, with an overall gain of 119% for the 80-kVp compared with the 140-kVp protocol (P < .05). The FOM results indicated that with a reduction of the tube voltage from 140 to 120, 100, and 80 kVp at constant SNR, the ED would be reduced by factors of 2.1, 3.3, and 5.1, respectively (P < .001). CONCLUSIONS: As tube voltage decreases, automatic tube current modulation for CT angiography yields either a significant increase in image quality at constant radiation dose or a significant decrease in radiation dose at constant image quality.
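With the figure of merit defined above, FOM = SNR²/ED, the dose-reduction factor at constant SNR reduces to a ratio of effective doses. A small sketch; the SNR of 10 and the 80-kVp dose below are hypothetical, only protocol A's 5.4 mSv comes from the abstract:

```python
def figure_of_merit(snr, effective_dose_msv):
    """FOM = SNR^2 / ED, as defined in the study."""
    return snr ** 2 / effective_dose_msv

# at equal SNR, the FOM ratio collapses to the inverse dose ratio
fom_140 = figure_of_merit(10.0, 5.4)   # protocol A dose from the abstract
fom_80 = figure_of_merit(10.0, 1.06)   # hypothetical 80-kVp dose
print(round(fom_80 / fom_140, 1))  # 5.1, matching the reported factor
```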
Abstract:
The purpose of this retrospective study was to intra-individually compare the image quality of computed radiography (CR) and low-dose linear-slit digital radiography (LSDR) for supine chest radiographs. A total of 90 patients (28 female, 62 male; mean age, 55.1 years) imaged with both CR and LSDR within a mean interval of 2.8 ± 3.0 days were included. Two independent readers evaluated the image quality of CR and LSDR based on modified European Guidelines for Quality Criteria for chest radiography. The Wilcoxon test was used to analyse differences between the techniques. The overall image quality of LSDR was significantly better than that of CR (9.75 vs. 8.16 of a maximum score of 10; p < 0.001). LSDR performed significantly better than CR in the delineation of anatomical structures in the mediastinum and the retrocardiac lung (p < 0.001). CR was superior to LSDR for the visually sharp delineation of lung vessels and thin linear structures in the lungs. We conclude that LSDR yields better image quality than CR and may be more suitable for excluding significant chest pathology in areas of high attenuation.
Abstract:
PURPOSE To determine the image quality of an iterative reconstruction (IR) technique in low-dose MDCT (LDCT) of the chest of immunocompromised patients in an intraindividual comparison with filtered back projection (FBP), and to evaluate the dose reduction capability. MATERIALS AND METHODS 30 chest LDCT scans were performed in immunocompromised patients (Brilliance iCT; 20-40 mAs; mean CTDIvol: 1.7 mGy). The raw data were reconstructed using FBP and the IR technique (iDose4™, Philips, Best, The Netherlands) set to seven iteration levels. 30 routine-dose MDCT (RDCT) scans reconstructed with FBP served as controls (mean exposure: 116 mAs; mean CTDIvol: 7.6 mGy). Three blinded radiologists scored subjective image quality and lesion conspicuity. Quantitative parameters including CT attenuation and objective image noise (OIN) were determined. RESULTS In LDCT, high iDose4™ levels led to a significant decrease in OIN (FBP vs. iDose4™ level 7: subscapular muscle, 139.4 vs. 40.6 HU). The high iDose4™ levels provided significant improvements in image quality and in artifact and noise reduction compared with LDCT FBP images. The conspicuity of subtle lesions was limited in LDCT FBP images but improved significantly at high iDose4™ levels (above level 4). LDCT with iDose4™ level 6 was determined to be of image quality equivalent to RDCT with FBP. CONCLUSION iDose4™ substantially improves image quality and lesion conspicuity and reduces noise in low-dose chest CT. Compared with RDCT, high iDose4™ levels provide equivalent image quality in LDCT, suggesting a potential dose reduction of almost 80%.
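Objective image noise is conventionally measured as the standard deviation of CT numbers within a homogeneous ROI (here, the subscapular muscle); a minimal sketch with synthetic HU samples, not patient data:

```python
import statistics

def objective_image_noise(roi_hu):
    """SD of the HU values inside a homogeneous ROI."""
    return statistics.pstdev(roi_hu)

fbp_roi = [50, -120, 200, -80, 140, -190]  # synthetic, FBP-like noisy ROI
ir_roi = [40, 20, 55, 35, 48, 26]          # synthetic, iDose-like smooth ROI
# the iterative-style ROI shows markedly lower objective image noise
assert objective_image_noise(fbp_roi) > objective_image_noise(ir_roi)
```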
Abstract:
OBJECTIVES To find a threshold body weight (BW) below 100 kg above which computed tomography pulmonary angiography (CTPA) using reduced radiation and a reduced contrast material (CM) dose provides significantly impaired quality and diagnostic confidence compared with standard-dose CTPA. METHODS In this prospectively randomised study of 501 patients with suspected pulmonary embolism and BW <100 kg, 246 were allocated into the low-dose group (80 kVp, 75 ml CM) and 255 into the normal-dose group (100 kVp, 100 ml CM). Contrast-to-noise ratio (CNR) in the pulmonary trunk was calculated. Two blinded chest radiologists independently evaluated subjective image quality and diagnostic confidence. Data were compared between the normal-dose and low-dose groups in five BW subgroups. RESULTS Vessel attenuation did not differ between the normal-dose and low-dose groups within each BW subgroup (P = 1.0). The CNR was higher with the normal-dose compared with the low-dose protocol (P < 0.006) in all BW subgroups except for the 90-99 kg subgroup (P = 0.812). Subjective image quality and diagnostic confidence did not differ between CT protocols in all subgroups (P between 0.960 and 1.0). CONCLUSIONS Subjective image quality and diagnostic confidence with 80 kVp CTPA is not different from normal-dose protocol in any BW group up to 100 kg. KEY POINTS • 80 kVp CTPA is safe in patients weighing <100 kg • Reduced radiation and iodine dose still provide high vessel attenuation • Image quality and diagnostic confidence with low-dose CTPA is good • Diagnostic confidence does not deteriorate in obese patients weighing <100 kg.
Abstract:
The power generated by large grid-connected photovoltaic (PV) plants depends greatly on the solar irradiance. This paper studies the effects of solar irradiance variability by analyzing experimental 1-s data collected throughout a year at six PV plants totaling 18 MWp. Each PV plant was modeled as a first-order filter function based on a frequency-domain analysis of the irradiance data and the output power signals. An empirical expression relating the filter parameters to PV plant size is proposed. This simple model has been successfully validated, precisely determining the daily maximum output power fluctuation from incident irradiance measurements.
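The first-order filter model can be sketched in discrete time as a single-pole low-pass applied to the irradiance signal; the cutoff values below are hypothetical, and the paper's empirical cutoff-versus-plant-size expression is not reproduced here:

```python
import math

def first_order_filter(irradiance, cutoff_hz, dt=1.0):
    """Single-pole low-pass: y[n] = a*y[n-1] + (1-a)*x[n], a = exp(-2*pi*fc*dt)."""
    a = math.exp(-2 * math.pi * cutoff_hz * dt)
    y, out = irradiance[0], []
    for x in irradiance:
        y = a * y + (1 - a) * x
        out.append(y)
    return out

# an irradiance step: the larger plant (lower cutoff) shows the smaller
# sample-to-sample power fluctuation, the smoothing effect the paper models
step = [500.0] * 5 + [1000.0] * 5
small = first_order_filter(step, cutoff_hz=0.1)    # small plant, assumed fc
large = first_order_filter(step, cutoff_hz=0.01)   # large plant, assumed fc
small_step = max(abs(b - a) for a, b in zip(small, small[1:]))
large_step = max(abs(b - a) for a, b in zip(large, large[1:]))
assert small_step > large_step
```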
Abstract:
Temperature is a first-class design concern in modern integrated circuits.
The important increase in power densities associated with recent technology evolutions has led to the appearance of thermal gradients and hot spots during run-time operation. Temperature impacts several circuit parameters such as speed, cooling budgets, reliability, and power consumption. In order to fight these negative effects, dynamic thermal management (DTM) techniques adapt the behavior of the chip relying on the information of a monitoring system that provides run-time thermal information of the die surface. The field of on-chip temperature monitoring has drawn the attention of the scientific community in recent years and is the object of study of this thesis. This thesis approaches on-chip temperature monitoring from different perspectives and levels, providing solutions to some of the most important issues. The physical and circuit levels are covered with the design and characterization of two novel temperature sensors specially tailored for DTM purposes. The first sensor is based on a mechanism that obtains a pulse whose width varies with the dependence of the leakage currents on temperature. In a nutshell, a circuit node is charged and subsequently left floating so that it discharges through the subthreshold currents of a transistor; the time the node takes to discharge is the width of the pulse. Since the width of the pulse displays an exponential dependence on temperature, the conversion into a digital word is realized by means of a logarithmic counter that performs both the time-to-digital conversion and the linearization of the output.
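Since the pulse width grows exponentially with temperature, taking its logarithm (what the logarithmic counter does in hardware) yields a linear code. A toy numeric model; the constants W0 and K are hypothetical, not the fabricated sensor's values:

```python
import math

W0, K = 1e-6, 0.05  # hypothetical fit constants: width = W0 * exp(K * T)

def pulse_width(temp_c):
    """Leakage-driven discharge time, exponential in temperature."""
    return W0 * math.exp(K * temp_c)

def log_counter(width):
    """A logarithmic counter outputs a code proportional to log2(width / W0)."""
    return math.log2(width / W0)

codes = [log_counter(pulse_width(t)) for t in (25, 50, 75)]
# equal 25-degree temperature steps now produce equal code steps
assert abs((codes[1] - codes[0]) - (codes[2] - codes[1])) < 1e-6
```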
The structure resulting from this combination of elements is implemented in a 0.35 µm technology and is characterized by a very reduced area, 10,250 nm², and power consumption, 1.05-65.5 nW at 5 samples/s; these figures outperformed all previous works at the time of first publication and, at the time of publication of this thesis, still outperform all previous implementations in the same technology node. Concerning accuracy, the sensor exhibits good linearity: even without calibration it displays a 3σ error of 1.97 °C, appropriate for DTM applications. As explained, the sensor is completely compatible with standard CMOS processes; this fact, along with its tiny area and power overhead, makes it especially suitable for integration in a DTM monitoring system with a collection of on-chip monitors distributed across the chip. The exacerbated process fluctuations that come with recent technology nodes jeopardize the linearity characteristics of the first sensor. In order to overcome these problems, a new temperature-inferring technique is proposed. In this case, we also rely on the thermal dependencies of the leakage currents that discharge a floating node, but now the result comes from the ratio of two different measures, in one of which a characteristic of the discharging transistor (the gate voltage) is altered. This ratio proves to be very robust against process variations and displays a more than sufficient linearity with temperature (a 3σ error of 1.17 °C considering process variations and performing two-point calibration). The implementation of the sensing part based on this new technique raises several issues, such as the generation of a process-variation-independent voltage reference, that are analyzed in depth in the thesis. For the time-to-digital conversion, we employ the same digitization structure as the first sensor.
A completely new standard cell library targeting low area and power overhead was built from scratch to implement the digitization part. Putting all the pieces together, we achieve a complete sensor system characterized by an ultra-low energy per conversion of 48-640 pJ and an area of 0.0016 mm²; this figure outperforms all previous works. To prove this statement, we perform a thorough comparison with over 40 works from the scientific literature. Moving up to the system level, the third contribution is centered on the modeling of a monitoring system consisting of a set of thermal sensors distributed across the chip. All previous works in the literature target maximizing the accuracy of the system with the minimum number of monitors. In contrast, we introduce new quality metrics apart from just the number of sensors: we also consider the power consumption, the sampling frequency, the interconnection costs, and the possibility of choosing among different types of monitors. The model feeds a simulated annealing algorithm that receives the thermal information of a system, its physical properties, area, power, and interconnection constraints, and a collection of monitor types; the algorithm yields the selected monitor type, the number of monitors, their positions, and the optimum sampling rate. We test the algorithm on the Alpha 21364 processor under several constraint configurations to prove its validity. Compared with previous works in the literature, the model presented here is the most complete. Finally, the last contribution targets the networking level: given an allocated set of temperature monitors, we focus on connecting them efficiently in terms of area and power.
Our first proposal in this area is the introduction of a new interconnection hierarchy level, the threshing level, between the monitors and the traditional peripheral buses, which applies data selectivity to reduce the amount of information sent to the central controller. The idea behind this new level is that in this kind of network most data are useless, because from the controller's viewpoint only a small amount of data (normally just the extreme values) is of interest. To cover the new interconnection level, we propose a single-wire monitoring network based on a time-domain signaling scheme that significantly reduces both the switching activity over the wire and the power consumption of the network. This scheme codes the information in the time domain and directly yields an ordered list of values from the maximum to the minimum. If the scheme is applied to monitors that employ time-to-digital conversion, digitization resources can be shared in both time and space, producing an important saving in area and power consumption. Two prototypes of complete monitoring systems are presented; they significantly outperform previous works in terms of area and, especially, power consumption.
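The time-domain signaling idea can be sketched behaviourally: each monitor asserts the shared wire after a delay proportional to its reading, so values reach the controller already ordered. The monitor names and readings below are hypothetical:

```python
def time_domain_readout(readings, delay_per_degree=1.0):
    """Return (monitor, value) pairs in their arrival order on the single wire."""
    events = sorted((delay_per_degree * value, monitor, value)
                    for monitor, value in readings.items())
    return [(monitor, value) for _, monitor, value in events]

# with this polarity the coolest monitor fires first; inverting the delay
# mapping would deliver the hottest (most urgent) value first instead
arrival = time_domain_readout({"core0": 71, "core1": 64, "cache": 58})
print(arrival)  # [('cache', 58), ('core1', 64), ('core0', 71)]
```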