951 results for Over-voltage problem
Abstract:
Endovascular aneurysm repair has matured significantly over the last 20 years and is becoming increasingly popular as a minimally invasive treatment option for patients with abdominal aortic aneurysms (AAA). Long-term durability of this fascinating treatment, however, is in doubt as continuing aneurysmal degeneration of the aortoiliac graft attachment zones is clearly associated with late adverse sequelae. In recent years, our growing understanding of the physiopathology of AAA formation has facilitated scrutiny of various potential drug treatment concepts. In this article we review the mechanical and biological challenges associated with endovascular treatment of infrarenal AAAs and discuss potential approaches to ongoing aneurysmal degeneration, which hampers long-term outcomes of this minimally invasive therapy.
Abstract:
Over the past 30 years the Marlborough Family Service in London has pioneered multi-family work with marginalized families presenting simultaneously with abuse and neglect, family violence, substance misuse, educational failure and mental illness. The approach is based on a systemic multi-contextual model, and this chapter describes the evolving work, including the establishment of the first permanent multiple-family day setting, specifically designed for and solely dedicated to the work with seemingly ‘hopeless’ families. The ingredients of ‘therapeutic assessments’ of parents and families are outlined and the importance of initial network meetings with professionals and family members is emphasized.
Abstract:
OBJECTIVE: To determine the minimum alveolar concentration (MAC) of isoflurane in Shetland ponies using a sequence of three different supramaximal noxious stimulations at each tested concentration of isoflurane rather than a single stimulation. STUDY DESIGN: Prospective, experimental trial. ANIMALS: Seven 4-year-old gelding Shetland ponies. METHODS: The MAC of isoflurane was determined for each pony. Three different modes of electrical stimulation were applied consecutively (2-minute intervals): two using constant voltage (90 V) on the gingiva via needle electrodes (CVneedle) or surface electrodes (CVsurface) and one using constant current (CC; 40 mA) via surface electrodes applied to the skin over the digital nerve. The ability to clearly interpret the responses as positive, the latency of the evoked responses and the inter-electrode resistance were recorded for each stimulus. RESULTS: Individual isoflurane MAC (%) values ranged from 0.60 to 1.17 with a mean (+/-SD) of 0.97 (+/-0.17). The responses were more clearly interpreted with CC, but this difference did not reach statistical significance. The CVsurface mode produced responses with a longer delay. The CVneedle mode was accompanied by variable inter-electrode resistances, resulting in uncontrolled stimulus intensity. At 0.9 MAC, the third stimulation induced more positive responses than the first stimulation, independent of the mode of stimulation used. CONCLUSIONS: The MAC of isoflurane in the Shetland ponies was lower than expected, with considerable variability among individuals. Constant current surface-electrode stimulations were the most repeatable. A summation over the sequence of three supramaximal stimulations was observed around 0.9 MAC. CLINICAL RELEVANCE: The possibility that Shetland ponies require less isoflurane than horses needs further investigation. Constant current surface-electrode stimulations were the most repeatable. Repetitive supramaximal stimuli may have evoked movements at isoflurane concentrations that provide immobility when a single supramaximal stimulation is applied.
Abstract:
If change over time is compared in several groups, it is important to take baseline values into account so that the comparison is carried out under the same preconditions. As the observed baseline measurements are distorted by measurement error, it may not be sufficient to include them as a covariate. A solution to this problem has recently been provided by fitting a longitudinal mixed-effects model to all data, including the baseline observations, and subsequently calculating the expected change conditional on the underlying baseline value, so that groups with the same baseline characteristics can be compared. In this article, we present an extended approach where a broader set of models can be used. Specifically, it is possible to include any desired set of interactions between the time variable and the other covariates, and time-dependent covariates can also be included. Additionally, we extend the method to adjust for baseline measurement error of other time-varying covariates. We apply the methodology to data from the Swiss HIV Cohort Study to address the question of whether co-infection with HIV-1 and hepatitis C virus leads to a slower increase of CD4 lymphocyte counts over time after the start of antiretroviral therapy.
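A minimal sketch of the underlying idea, using simulated data and the statsmodels package: keep the baseline measurements in the longitudinal outcome vector and fit a mixed-effects model, rather than conditioning on the error-prone observed baseline as a covariate. The variable names, the simulated cohort and the simple random-intercept-and-slope model below are illustrative assumptions, not the estimator developed in the article.

```python
# Illustrative only: a longitudinal mixed model that keeps baseline rows in the outcome.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 100
times = np.array([0.0, 0.5, 1.0, 1.5, 2.0])            # years since start of therapy
g = rng.integers(0, 2, n)                               # hypothetical co-infection indicator
subj = np.repeat(np.arange(n), times.size)
time = np.tile(times, n)
group = np.repeat(g, times.size)
latent_baseline = np.repeat(rng.normal(350, 80, n), times.size)      # true baseline CD4
slope = np.repeat(120 - 40 * g, times.size)                          # slower rise if g == 1
cd4 = latent_baseline + slope * time + rng.normal(0, 40, n * times.size)  # noisy measurements

df = pd.DataFrame({"subj": subj, "time": time, "group": group, "cd4": cd4})

# Random intercept and slope per subject; the time:group term is the between-group
# difference in the rate of CD4 increase, with the baseline handled inside the model.
fit = smf.mixedlm("cd4 ~ time * group", df, groups="subj", re_formula="~time").fit()
print(fit.summary())
```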
Abstract:
CONTEXT The necessity of specific intervention components for the successful treatment of patients with posttraumatic stress disorder is the subject of controversy. OBJECTIVE To investigate the complexity of clinical problems as a moderator of relative effects between specific and nonspecific psychological interventions. METHODS We included 18 randomized controlled trials, directly comparing specific and nonspecific psychological interventions. We conducted moderator analyses, including the complexity of clinical problems as predictor. RESULTS Our results have confirmed the moderate superiority of specific over nonspecific psychological interventions; however, the superiority was small in studies with complex clinical problems and large in studies with noncomplex clinical problems. CONCLUSIONS For patients with complex clinical problems, our results suggest that particular nonspecific psychological interventions may be offered as an alternative to specific psychological interventions. In contrast, for patients with noncomplex clinical problems, specific psychological interventions are the best treatment option.
Children's performance estimation in mathematics and science tests over a school year: A pilot study
Abstract:
The metacognitive ability to accurately estimate one's performance in a test is assumed to be of central importance for initiating task-oriented effort, activating adequate problem-solving strategies, and engaging in efficient error detection and correction. Although schoolchildren's ability to estimate their own performance has been widely investigated, this was mostly done under highly controlled experimental set-ups including only a single test occasion. Method: The aim of this study was to investigate this metacognitive ability in the context of real achievement tests in mathematics. Developed and applied by a teacher of a 5th-grade class over the course of a school year, these tests allowed the exploration of the variability of performance estimation accuracy as a function of test difficulty. Results: Mean performance estimations were generally close to actual performance, with somewhat less variability compared to test performance. When grouping the children into three achievement levels, results revealed higher accuracy of performance estimations in the high achievers compared to the low and average achievers. In order to explore the generalization of these findings, analyses were also conducted for the same children's tests in their science classes, revealing a very similar pattern of results to that in the domain of mathematics. Discussion and Conclusion: By and large, the present study, conducted in a natural environment, confirmed previous laboratory findings but also offered additional insights into the generalization and test dependency of students' performance estimations.
Abstract:
The proliferation of multimedia content and the demand for new audio and video services have fostered the development of a new era based on multimedia information, which allowed the evolution of Wireless Multimedia Sensor Networks (WMSNs) and also Flying Ad-Hoc Networks (FANETs). In this way, live multimedia services require real-time video transmissions with a low frame loss rate, tolerable end-to-end delay, and jitter to support video dissemination with Quality of Experience (QoE) support. Hence, a key principle in a QoE-aware approach is to protect high-priority frames by transmitting them with a minimum packet loss ratio and minimal network overhead. Moreover, multimedia content must be transmitted from a given source to the destination via intermediate nodes with high reliability in a large-scale scenario. The routing service must cope with dynamic topologies caused by node failure or mobility, as well as wireless channel changes, in order to continue operating during multimedia transmission despite these dynamics. Finally, understanding user satisfaction when watching a video sequence is becoming a key requirement for the delivery of multimedia content with QoE support. With this goal in mind, solutions involving multimedia transmissions must take into account the video characteristics to improve video quality delivery. The main research contributions of this thesis are driven by the research question of how to provide multimedia distribution with high energy efficiency, reliability, robustness, scalability, and QoE support over wireless ad hoc networks. The thesis addresses several problem domains with contributions on different layers of the communication stack. At the application layer, we introduce a QoE-aware packet redundancy mechanism to reduce the impact of the unreliable and lossy nature of the wireless environment on the dissemination of live multimedia content. At the network layer, we introduce two routing protocols, namely a video-aware Multi-hop and multi-path hierarchical routing protocol for Efficient VIdeo transmission in static WMSN scenarios (MEVI), and a cross-layer link quality and geographical-aware beaconless OR protocol for multimedia FANET scenarios (XLinGO). Both protocols enable multimedia dissemination with energy efficiency, reliability and QoE support. This is achieved by combining multiple cross-layer metrics in the routing decision in order to establish reliable routes.
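As a hedged illustration of what combining cross-layer metrics in a routing decision can look like, the sketch below scores candidate relays by a weighted sum of link quality, geographic progress, residual energy and free buffer space. The metric set and the weights are assumptions chosen for illustration; they are not the actual MEVI or XLinGO specification.

```python
# Illustrative relay selection from cross-layer metrics (assumed names and weights).
from dataclasses import dataclass

@dataclass
class Neighbor:
    node_id: str
    link_quality: float     # e.g. delivery ratio estimated at the MAC layer, in [0, 1]
    geo_progress: float     # normalized progress toward the destination, in [0, 1]
    residual_energy: float  # remaining battery fraction, in [0, 1]
    buffer_free: float      # free queue space fraction, in [0, 1]

WEIGHTS = (0.4, 0.3, 0.2, 0.1)   # assumed relative importance of the four metrics

def relay_score(n: Neighbor, w=WEIGHTS) -> float:
    """Higher score = better next hop for the current video packet."""
    return (w[0] * n.link_quality + w[1] * n.geo_progress
            + w[2] * n.residual_energy + w[3] * n.buffer_free)

def select_relay(candidates):
    """Pick the best-scoring candidate, or None if there is no neighbor."""
    return max(candidates, key=relay_score, default=None)

# Example: two candidate forwarders competing to relay a high-priority frame.
best = select_relay([Neighbor("a", 0.9, 0.6, 0.8, 0.7),
                     Neighbor("b", 0.7, 0.9, 0.5, 0.9)])
print(best.node_id if best else "no relay available")
```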
Abstract:
We characterized a sample of metal-oxide resistors and measured their breakdown voltage in liquid argon by applying high-voltage (HV) pulses over a 3-second period. This test mimics the situation in an HV-divider chain when a breakdown occurs and the voltage across the resistors rapidly rises from the static value to much higher values. All resistors had higher breakdown voltages in liquid argon than their vendor ratings in air at room temperature. Failure modes range from full destruction to coating damage. In cases where breakdown was not catastrophic, breakdown voltages were lower in subsequent measuring runs. One resistor type withstood 131 kV pulses, the limit of the test setup.
Abstract:
For executing the activities of a project, one or several resources are required, which are in general scarce. Many resource-allocation methods assume that the usage of these resources by an activity is constant during execution; in practice, however, the project manager may vary resource usage by individual activities over time within prescribed bounds. This variation gives rise to the project scheduling problem which consists in allocating the scarce resources to the project activities over time such that the project duration is minimized, the total number of resource units allocated equals the prescribed work content of each activity, and precedence and various work-content-related constraints are met.
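A hedged sketch of one way to write the core of such a model for a single renewable resource with capacity R (the paper's formulation also includes further work-content-related constraints and generalizes to several resources). Here x_{j,t} is the number of resource units allocated to activity j in period t, W_j its prescribed work content, [q̲_j, q̄_j] its allowed usage range while in execution, and S_j, F_j its start and finish periods:

```latex
\begin{align*}
\min\quad & F_{\text{end}} \quad \text{(project duration)}\\
\text{s.t.}\quad
 & \textstyle\sum_{t=S_j}^{F_j} x_{j,t} = W_j \quad \forall j \quad \text{(work content is completed)}\\
 & \underline{q}_j \le x_{j,t} \le \overline{q}_j \quad \forall j,\; S_j \le t \le F_j \quad \text{(usage within prescribed bounds)}\\
 & x_{j,t} = 0 \quad \forall j,\; t \notin [S_j,F_j] \quad \text{(no usage outside execution)}\\
 & \textstyle\sum_{j} x_{j,t} \le R \quad \forall t \quad \text{(scarce resource capacity)}\\
 & F_i \le S_j \quad \forall (i,j)\in E \quad \text{(precedence constraints)}
\end{align*}
```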
Abstract:
In training networks, small and medium-sized enterprises in particular pool their resources to train apprentices within the framework of the dual VET system, while an intermediary organisation is tasked with managing operations. Over the course of their apprenticeship, the apprentices switch from one training company to another on a (half-)yearly basis. Drawing on a case study of four training networks in Switzerland and the theoretical framework of the sociology of conventions, this paper aims to understand the reasons for the slow dissemination and reluctant adoption of this promising form of organising VET in Switzerland. The results of the study show that the system of moving from one company to another creates a variety of free-rider constellations in the distribution of the collectively generated cooperative benefits. This explains why companies are reluctant to participate in this model. For the network to be sustainable, the intermediary organisation has to address discontent arising from free-rider problems while taking into account that the solutions found are always tentative and will often result in new free-rider problems.
Abstract:
Foodborne illness has always been with us, and food safety is an increasingly important public health issue affecting populations worldwide. In the United States of America, foodborne illness strikes millions of people and kills thousands annually, costing our economy billions of dollars in medical care expense and lost productivity. The nature of food and foodborne illness has changed dramatically in the last century. The regulatory systems have evolved to better assure a safe food supply. The food production industry has invested heavily to meet regulatory requirements and to improve the safety of their products. Educational efforts have increased public awareness of safe food handling practices, empowering consumers to fulfill their food safety role. Despite the advances made, none of the Healthy People 2010 targets for the reduction of foodborne pathogens has been reached. There is no single solution to eliminating pathogen contamination from all classes of food products. However, irradiation seems especially suited for certain higher-risk foods such as meat and poultry, and its use should advance the goal of reducing foodborne illness by minimizing the presence of pathogenic organisms in the food supply. This technology has been studied extensively for over 50 years. The Food and Drug Administration has determined that food irradiation is safe for use as approved by the Agency. It is time to take action to educate consumers about the benefits of food irradiation. Consumer demand will then compel industry to invest in facilities and processes to assure a consistent supply of irradiated food products.
Abstract:
High voltage-activated (HVA) calcium channels from rat brain and rabbit heart are expressed in Xenopus laevis oocytes and their modulation by protein kinases is studied. A subtype of the HVA calcium current expressed by rat brain RNA is potentiated by the phospholipid- and calcium-dependent protein kinase (PKC). The calcium channel clone α1C from rabbit heart is modulated by the cAMP-dependent protein kinase (PKA) and by another factor present in the cytoplasm. The HVA calcium channels from rat brain do not belong to the L-type subclass since they are insensitive to dihydropyridine (DHP) agonists and antagonists. The expressed currents do contain an N-type fraction, which is identified by inactivation at depolarized potentials, and a P-type fraction, as defined by blockade by the venom of the funnel web spider Agelenopsis aperta. A non-N-type fraction of this current is potentiated by using phorbol esters to activate PKC. This residual fraction of current resembles the newly described Q-type channel from cerebellar granule cells in its biophysical properties and its potentiation by activation of PKC. The α1C clone from rabbit heart is expressed in oocytes and single-channel currents are measured using the cell-attached and cell-excised patch-clamp techniques. The single-channel current runs down within two minutes after patch excision into a normal saline bath solution. The catalytic subunit of PKA + MgATP is capable of reversing this rundown for over 15 minutes. There also appears to be an additional factor present in the cytoplasm necessary for channel activity, as revealed in experiments where PKA failed to prevent rundown. These data are important in that these types of channels are involved in synaptic transmission at many different types of synapses. The mammalian synapse is not accessible for these types of studies; however, the oocyte expression system allows access to HVA calcium channels for the study of their modulation by phosphorylation.
Abstract:
The growing field of ocean acidification research is concerned with the investigation of organism responses to increasing pCO2 values. One important approach in this context is culture work using seawater with adjusted CO2 levels. As aqueous pCO2 is difficult to measure directly in small-scale experiments, it is generally calculated from two other measured parameters of the carbonate system (often AT, CT or pH). Unfortunately, the overall uncertainties of measured and subsequently calculated values are often unknown. Especially under high pCO2, this can become a severe problem with respect to the interpretation of physiological and ecological data. In the few datasets from ocean acidification research where all three of these parameters were measured, pCO2 values calculated from AT and CT are typically about 30% lower (i.e. ~300 µatm at a target pCO2 of 1000 µatm) than those calculated from AT and pH or CT and pH. This study presents and discusses these discrepancies as well as likely consequences for the ocean acidification community. Until this problem is solved, one has to consider that calculated parameters of the carbonate system (e.g. pCO2, calcite saturation state) may not be comparable between studies, and that this may have important implications for the interpretation of CO2 perturbation experiments.
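As a hedged illustration of the discrepancy discussed above, the snippet below calculates pCO2 twice for the same hypothetical high-CO2 sample, once from AT and CT and once from AT and pH. It uses the third-party PyCO2SYS package purely for convenience; the study itself may rely on different software, constants and pH scales, and the input values are made up.

```python
# Illustrative comparison of pCO2 derived from two different carbonate-system pairs.
import PyCO2SYS as pyco2

AT, CT, pH = 2350.0, 2200.0, 7.60   # hypothetical sample: µmol/kg, µmol/kg, total scale

from_at_ct = pyco2.sys(par1=AT, par2=CT, par1_type=1, par2_type=2,
                       salinity=35, temperature=15)
from_at_ph = pyco2.sys(par1=AT, par2=pH, par1_type=1, par2_type=3,
                       salinity=35, temperature=15)

print("pCO2 from AT + CT:", round(float(from_at_ct["pCO2"]), 1), "µatm")
print("pCO2 from AT + pH:", round(float(from_at_ph["pCO2"]), 1), "µatm")
# If the three measured values are not mutually consistent (e.g. a biased pH or
# imperfect dissociation constants), the two calculated pCO2 values diverge,
# which is the kind of offset reported above.
```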
Abstract:
During a four-week anchoring station of R.V. "Meteor" on the equator at 30° W longitude, vertical profiles of wind, temperature, and humidity were measured by means of a meteorological buoy carrying a 10 m mast. After eliminating periods of instrumental failure, 18 days are available for the investigation of the diurnal variations of the meteorological parameters and 9 days for the investigation of the vertical heat fluxes. The diurnal variations of the above-mentioned quantities are caused essentially by two periodic processes: the 24-hourly change in solar energy supply and the 12-hourly oscillation of air pressure, both of which originate in the daily rotation of the earth. While the temperature of the water and of the near-water layers of the air shows a 24-hour period in its diurnal course, the wind speed, as a consequence of the pressure wave, has a 12-hour period, which is also observable in evaporation and, consequently, in the water vapor content of the surface layer. Concerning the temperature, a weak dependence of the daily amplitude on height was determined. Further investigation of the profiles yields relations between the vertical gradients of wind, temperature, and water vapor on the one hand and the wind speed and the air-sea differences of temperature and water vapor on the other, thus contributing to the problem of parameterizing the vertical fluxes. Mean profile coefficients for the encountered stabilities, which were slightly unstable, are presented, and correction terms are given because the conditions at the very surface are not sufficiently represented by measuring at a water depth of 20 cm and assuming water vapor saturation. This is especially true for the water vapor content, where the relation between the gradient and the air-sea difference suggests a reduction of relative humidity to approximately 96% at the very surface if the gradients are high. This effect may result in an overestimation of the water vapor flux if a "bulk" formula is used. Finally, sensible and latent heat fluxes are computed by means of a gradient formula. The influence of stability on the transfer process is taken into account. As the air-sea temperature differences are small, sensible heat plays no important role in that region, but latent heat shows several interesting features. Within the measuring period of 18 days, a regular variation by a factor of ten is observed. Aperiodic short-term variations are superposed on periodic diurnal variations. The mean diurnal course shows a 12-hour period caused by the vertical wind speed gradient, superposed on a 24-hour period due to the changing stabilities. Mean values within the measuring period are 276 ly/day for latent heat and 9.4 ly/day for sensible heat.
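A hedged sketch of the kind of "bulk" formula mentioned above, with the surface saturation reduced to roughly 96% relative humidity as the profile analysis suggests; the transfer coefficients and constants are generic textbook values, not the profile-derived coefficients of this study.

```python
# Illustrative bulk estimates of sensible (Q_H) and latent (Q_E) heat flux, W m^-2.
import math

RHO_AIR = 1.2      # air density, kg m^-3
CP_AIR = 1005.0    # specific heat of air, J kg^-1 K^-1
LV = 2.45e6        # latent heat of vaporization, J kg^-1
CH = CE = 1.2e-3   # assumed bulk transfer coefficients (dimensionless)

def sat_specific_humidity(t_celsius, pressure_hpa=1013.25):
    """Saturation specific humidity (kg/kg) from the Magnus formula."""
    e_s = 6.112 * math.exp(17.62 * t_celsius / (243.12 + t_celsius))  # hPa
    return 0.622 * e_s / (pressure_hpa - 0.378 * e_s)

def bulk_fluxes(u, t_sea, t_air, q_air, surface_rh=0.96):
    """Fluxes are positive upward; surface humidity is reduced to ~96% saturation."""
    q_sea = surface_rh * sat_specific_humidity(t_sea)
    q_h = RHO_AIR * CP_AIR * CH * u * (t_sea - t_air)
    q_e = RHO_AIR * LV * CE * u * (q_sea - q_air)
    return q_h, q_e

# Example: light equatorial wind and a small air-sea temperature difference.
print(bulk_fluxes(u=5.0, t_sea=27.5, t_air=27.0, q_air=0.017))
```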
Abstract:
Temperature is a first-class design concern in modern integrated circuits. The large increase in power densities associated with recent technology generations has led to the appearance of thermal gradients and hot spots during run-time operation. Temperature impacts several circuit parameters such as speed, cooling budgets, reliability, power consumption, etc. In order to fight against these negative effects, dynamic thermal management (DTM) techniques adapt the behavior of the chip based on the information of a monitoring system that provides run-time thermal information about the die surface. The field of on-chip temperature monitoring has drawn the attention of the scientific community in recent years and is the object of study of this thesis. This thesis approaches the matter of on-chip temperature monitoring from different perspectives and levels, providing solutions to some of the most important issues. The physical and circuit levels are covered with the design and characterization of two novel temperature sensors specially tailored for DTM purposes. The first sensor is based upon a mechanism that obtains a pulse whose width depends on the variation of the leakage currents with temperature. In a nutshell, a circuit node is charged and subsequently left floating so that it discharges through the subthreshold currents of a transistor; the time the node takes to discharge is the width of the pulse. Since the width of the pulse displays an exponential dependence on the temperature, the conversion into a digital word is realized by means of a logarithmic counter that performs both the time-to-digital conversion and the linearization of the output. The structure resulting from this combination of elements is implemented in a 0.35 µm technology and is characterized by a very small area, 10250 nm2, and power consumption, 1.05-65.5 nW at 5 samples/s; these figures outperformed all previous works at the time of first publication and, at the time of publication of this thesis, still surpass all previous implementations in the same technology node. Concerning accuracy, the sensor exhibits good linearity; even without calibration it displays a 3σ error of 1.97 °C, appropriate for DTM applications. As explained, the sensor is completely compatible with standard CMOS processes; this fact, along with its tiny area and power overhead, makes it especially suitable for integration in a DTM monitoring system with a collection of on-chip monitors distributed across the chip. The increased process fluctuations that come with recent technology nodes jeopardize the linearity characteristics of the first sensor. In order to overcome these problems, a new temperature-inference technique is proposed. In this case we also rely on the thermal dependence of the leakage currents used to discharge a floating node, but now the result comes from the ratio of two different measurements, in one of which we alter a characteristic of the discharging transistor: the gate voltage.
This ratio proves to be very robust against process variations and displays more than sufficient linearity with temperature: a 3σ error of 1.17 °C considering process variations and performing two-point calibration. The implementation of the sensing part based on this new technique involves several design issues, such as the generation of a process-variation-independent voltage reference, that are analyzed in depth in the thesis. In order to perform the time-to-digital conversion, we employ the same digitization structure as the former sensor. A completely new standard-cell library targeting low area and power overhead is built from scratch to implement the digitization part. Putting all the pieces together, we achieve a complete sensor system characterized by an ultra-low energy per conversion of 48-640 pJ and an area of 0.0016 mm2; this figure outperforms all previous works. To prove this statement, we perform a thorough comparison with over 40 works from the scientific literature. Moving up to the system level, the third contribution is centered on the modeling of a monitoring system consisting of a set of thermal sensors distributed across the chip. All previous works from the literature target maximizing the accuracy of the system with the minimum number of monitors. In contrast, we introduce new quality metrics apart from just the number of sensors; we consider the power consumption, the sampling frequency, the possibility of choosing among different types of monitors, and the interconnection costs. The model is introduced in a simulated annealing algorithm that receives the thermal information of a system, its physical properties, area, power and interconnection constraints, and a collection of monitor types; the algorithm yields the selected type of monitor, the number of monitors, their positions and the optimum sampling rate. We test the algorithm with the Alpha 21364 processor under several constraint configurations to prove its validity. Compared to other previous works in the literature, the modeling presented here is the most complete. Finally, the last contribution targets the networking level: given an allocated set of temperature monitors, we focus on solving the problem of connecting them in a way that is efficient in terms of area and power. Our first proposal in this area is the introduction of a new interconnection hierarchy level, the threshing level, between the monitors and the traditional peripheral buses, which applies data selectivity to reduce the amount of information that is sent to the central controller. The idea behind this new level is that in this kind of network most data are useless because, from the controller's viewpoint, only a small amount of data (normally the extreme values) is of interest. To cover the new interconnection level, we propose a single-wire monitoring network based on a time-domain signaling scheme that significantly reduces both the switching activity over the wire and the power consumption of the network. This scheme codes the information in the time domain and allows straightforward retrieval of an ordered list of values from the maximum to the minimum. If the scheme is applied to monitors that employ time-to-digital conversion (TDC), digitization resources can be shared in both time and space, producing important savings in area and power consumption. Two prototypes of complete monitoring systems are presented; they significantly outperform previous works in terms of area and, especially, power consumption.
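As a hedged illustration of the placement step only, the sketch below runs a plain simulated-annealing loop that chooses monitor positions on a synthetic thermal map, trading reconstruction error against a per-sensor penalty. The thermal map, cost weights and neighborhood move are assumptions for illustration; the thesis' model additionally selects the monitor type and the sampling rate under area, power and interconnection constraints.

```python
# Illustrative simulated annealing for thermal-monitor placement on a synthetic die.
import math
import random

random.seed(1)
N = 16  # N x N grid of candidate monitor locations

def temp(x, y):
    """Synthetic thermal map: two hot spots on an otherwise cool die (arbitrary units)."""
    return (40 + 35 * math.exp(-((x - 4) ** 2 + (y - 5) ** 2) / 6.0)
               + 25 * math.exp(-((x - 11) ** 2 + (y - 12) ** 2) / 9.0))

def cost(sensors, lam=2.0):
    """Mean nearest-sensor reconstruction error plus a per-sensor penalty (area/power proxy)."""
    err = 0.0
    for x in range(N):
        for y in range(N):
            nearest = min(sensors, key=lambda s: (s[0] - x) ** 2 + (s[1] - y) ** 2)
            err += abs(temp(x, y) - temp(*nearest))
    return err / (N * N) + lam * len(sensors)

def neighbor(sensors):
    """Randomly move, add, or drop one sensor (always keep at least one)."""
    s = list(sensors)
    r = random.random()
    if r < 0.6:
        s[random.randrange(len(s))] = (random.randrange(N), random.randrange(N))
    elif r < 0.8 or len(s) == 1:
        s.append((random.randrange(N), random.randrange(N)))
    else:
        s.pop(random.randrange(len(s)))
    return s

state = [(random.randrange(N), random.randrange(N)) for _ in range(4)]
c = cost(state)
best, best_c, T = list(state), c, 10.0
for _ in range(5000):
    cand = neighbor(state)
    cc = cost(cand)
    if cc < c or random.random() < math.exp((c - cc) / T):   # Metropolis acceptance
        state, c = cand, cc
        if cc < best_c:
            best, best_c = list(cand), cc
    T *= 0.999   # geometric cooling schedule

print(len(best), "monitors, cost", round(best_c, 2), best)
```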