924 results for Reasonable Length of Process
Abstract:
BACKGROUND We aimed to assess whether elderly patients with acute venous thromboembolism (VTE) receive recommended initial processes of care and to identify predictors of process adherence. METHODS We prospectively studied in- and outpatients aged ≥65 years with acute symptomatic VTE in a multicenter cohort study from nine Swiss university- and non-university hospitals between September 2009 and March 2011. We systematically assessed whether initial processes of care, which are recommended by the 2008 American College of Chest Physicians guidelines, were performed in each patient. We used multivariable logistic models to identify patient factors independently associated with process adherence. RESULTS Our cohort comprised 950 patients (mean age 76 years). Of these, 86% (645/750) received parenteral anticoagulation for ≥5 days, 54% (405/750) had oral anticoagulation started on the first treatment day, and 37% (274/750) had an international normalized ratio (INR) ≥2 for ≥24 hours before parenteral anticoagulation was discontinued. Overall, 35% (53/153) of patients with cancer received low-molecular-weight heparin monotherapy and 72% (304/423) of patients with symptomatic deep vein thrombosis were prescribed compression stockings. In multivariate analyses, symptomatic pulmonary embolism, hospital-acquired VTE, and concomitant antiplatelet therapy were associated with a significantly lower anticoagulation-related process adherence. CONCLUSIONS Adherence to several recommended processes of care was suboptimal in elderly patients with VTE. Quality of care interventions should particularly focus on processes with low adherence, such as the prescription of continued low-molecular-weight heparin therapy in patients with cancer and the achievement of an INR ≥2 for ≥24 hours before parenteral anticoagulants are stopped.
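The predictors above come from multivariable logistic regression; as a minimal illustration of that kind of model (simulated data and invented coefficients, not the study's cohort), one could write:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulate a cohort in which the three factors reported above lower the odds
# of adherence; all coefficients and data are invented for this sketch.
rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "symptomatic_pe":    rng.integers(0, 2, n),
    "hospital_acquired": rng.integers(0, 2, n),
    "antiplatelet":      rng.integers(0, 2, n),
})
lin_pred = 1.5 - 0.8*df.symptomatic_pe - 0.6*df.hospital_acquired - 0.7*df.antiplatelet
df["adherent"] = rng.binomial(1, 1.0 / (1.0 + np.exp(-lin_pred)))

model = smf.logit("adherent ~ symptomatic_pe + hospital_acquired + antiplatelet",
                  data=df).fit(disp=0)
print(np.exp(model.params))   # adjusted odds ratios
```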
Abstract:
Determining the contribution of wood smoke to air pollution in large cities such as London is becoming increasingly important due to the changing nature of domestic heating in urban areas. During winter, biomass burning emissions have been identified as a major cause of exceedances of European air quality limits. The aim of this work was to quantify the contribution of biomass burning in London to concentrations of PM2.5 and determine whether local emissions or regional contributions were the main source of biomass smoke. To achieve this, a number of biomass burning chemical tracers were analysed at a site within central London and two sites in surrounding rural areas. Concentrations of levoglucosan, elemental carbon (EC), organic carbon (OC) and K+ were generally well correlated across the three sites. At all the sites, biomass burning was found to be a source of OC and EC, with the largest contribution of EC from traffic emissions, while for OC the dominant fraction included contributions from secondary organic aerosols, primary biogenic and cooking sources. Source apportionment of the EC and OC was found to give a reasonable estimation of the total carbon from non-fossil and fossil fuel sources, based upon comparison with estimates derived from 14C analysis. Aethalometer-derived black carbon data were also apportioned into the contributions from biomass burning and traffic and showed trends similar to those observed for EC. Mean wood smoke mass at the sites was estimated to range from 0.78 to 1.0 μg m-3 during the campaign in January–February 2012. Measurements on a 160 m tower in London suggested a similar ratio of brown to black carbon (reflecting wood burning and traffic respectively) in regional and London air. Peaks in the levoglucosan and K+ concentrations were observed to coincide with low ambient temperature, consistent with domestic heating as a major contributing local source in London. Overall, the source of biomass smoke in London was concluded to be a background regional source overlaid by contributions from local domestic burning emissions. This could have implications when considering future emission control strategies during winter and may be the focus of future work in order to better determine the contributing local sources.
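Aethalometer-based apportionment of absorption into wood-burning and traffic contributions is commonly carried out with a two-component "aethalometer model"; the sketch below illustrates that general approach with assumed absorption Ångström exponents and made-up absorption coefficients, not the values or exact method used in this study:

```python
import numpy as np

def aethalometer_two_source(b_abs_470, b_abs_950, alpha_ff=1.0, alpha_wb=2.0):
    """Split absorption at 950 nm into fossil-fuel and wood-burning parts.

    Assumes each source follows b_abs(lambda) ~ lambda**(-alpha); the Angstrom
    exponents here are illustrative, not values taken from the paper.
    """
    r_ff = (470.0 / 950.0) ** (-alpha_ff)   # expected 470/950 nm ratio, traffic
    r_wb = (470.0 / 950.0) ** (-alpha_wb)   # expected 470/950 nm ratio, wood smoke
    # Solve b470 = r_ff*x + r_wb*y and b950 = x + y for x (fossil) and y (wood).
    y = (b_abs_470 - r_ff * b_abs_950) / (r_wb - r_ff)
    x = b_abs_950 - y
    return x, y

ff, wb = aethalometer_two_source(b_abs_470=32.0, b_abs_950=12.0)
print(f"fossil: {ff:.1f}  wood burning: {wb:.1f}  (Mm^-1, illustrative values)")
```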
Abstract:
Effective strategies for recruiting volunteers who are prepared to make a long-term commitment to formal positions are essential for the survival of voluntary sport clubs. This article examines the decision-making processes in relation to these efforts. Under the assumption of bounded rationality, the garbage can model is used to grasp these decision-making processes theoretically and access them empirically. Based on a case study framework, an in-depth analysis of recruitment practices was conducted in nine selected sport clubs. Results showed that the decision-making processes are generally characterized by a reactive approach in which dominant actors try to handle personnel recruitment problems in the administration and sport domains through routine formal committee work and informal networks. In addition, it proved possible to develop a typology that delivers an overview of different decision-making practices in terms of the specific interplay of the relevant components of process control (top-down vs. bottom-up) and problem processing (situational vs. systematic).
Abstract:
In 2014, the Dispute Settlement Body (DSB) of the World Trade Organization (WTO) adopted seven panel reports and six Appellate Body rulings. Two of the cases relate to anti-dumping measures. Three cases, comprising five complaints, are of particular interest and these are summarized and discussed below. China – Rare Earths further refines the relationship between protocols of accession and the general provisions of WTO agreements, in particular the exceptions of Article XX GATT. Recourse to that provision is no longer excluded but depends on a careful case-by-case analysis. While China failed to comply with the conditions for export restrictions, the case reiterates the problem of insufficiently developed disciplines on export restrictions on strategic minerals and other commodities in WTO law. EC – Seals Products is a landmark case for two reasons. Firstly, it limits the application of the Agreement on Technical Barriers to Trade (TBT Agreement) resulting henceforth in a narrow reading of technical regulations. Normative rules prescribing conditions for importation are to be dealt with under the rules of the General Agreement on Tariffs and Trade (GATT) instead. Secondly, the ruling permits recourse to public morals in justifying import restrictions essentially on the basis of process and production methods (PPMs). Meanwhile, the more detailed implications for extraterritorial application of such rules and for the concept of PPMs remain open as these key issues were not raised by the parties to the case. Peru – Agricultural Products adds to the interpretation of the Agreement on Agriculture (AoA), but most importantly, it confirms the existing segregation of WTO law and the law of free trade agreements. The case is of particular importance for Switzerland in its relations with the European Union (EU). The case raises, but does not fully answer, the question whether in a bilateral agreement, Switzerland or the EU can, as a matter of WTO law, lawfully waive their right of lodging complaints against each other under WTO law within the scope of their bilateral agreement, for example the Agreement on Agriculture where such a clause exists.
Abstract:
In this article, we present a new microscopic theoretical approach to the description of spin crossover in molecular crystals. The spin crossover crystals under consideration are composed of molecular fragments formed by the spin-crossover metal ion and its nearest ligand surroundings, exhibiting well defined localized (molecular) vibrations. As distinguished from previous models of this phenomenon, the developed approach takes into account not only the interaction of the spin-crossover ions with the phonons but also the strong coupling of their electronic shells with the molecular modes. This leads to an effective coupling of the local modes with the phonons, which is shown to be responsible for the cooperative spin transition accompanied by structural reorganization. The transition is characterized by two order parameters representing the mean values of the products of the diagonal electronic matrices and the coordinates of the local modes for the high- and low-spin states of the spin crossover complex. Finally, we demonstrate that the approach provides a reasonable explanation of the observed spin transition in the [Fe(ptz)6](BF4)2 crystal. The theory reproduces well the observed abrupt low-spin → high-spin transition and the temperature dependence of the high-spin fraction in a wide temperature range, as well as the pronounced hysteresis loop. At the same time, within the limiting approximations adopted in the developed model, the evaluated high-spin fraction vs. T shows that the cooperative spin-lattice transition proves to be incomplete, in the sense that the high-spin fraction does not reach its maximum value at high temperature.
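For comparison, a much simpler phenomenological description than the microscopic vibronic model developed here, the Slichter-Drickamer mean-field model, already yields abrupt transitions and hysteresis once the cooperativity parameter is large enough. The sketch below solves its self-consistency equation with illustrative parameters; it is not the paper's Hamiltonian:

```python
import numpy as np

# Slichter-Drickamer mean-field model (illustrative parameters only): the
# equilibrium high-spin fraction n satisfies
#   ln[(1 - n)/n] = (dH + G*(1 - 2n) - T*dS) / (R*T)
# Hysteresis appears when the cooperativity G exceeds 2*R*T_c with T_c = dH/dS.
R, dH, dS, G = 8.314, 12000.0, 60.0, 5000.0   # J/mol, J/mol/K, J/mol

def n_hs(T, n0):
    """Iterate the self-consistency equation starting from n0."""
    n = n0
    for _ in range(2000):
        n = 1.0 / (1.0 + np.exp((dH + G * (1 - 2 * n) - T * dS) / (R * T)))
    return n

for T in (100, 150, 200, 250):
    up, down = n_hs(T, n0=0.01), n_hs(T, n0=0.99)   # heating vs. cooling branch
    print(f"T = {T:3d} K   n_HS(up) = {up:.2f}   n_HS(down) = {down:.2f}")
```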
Abstract:
Medication reconciliation, with the aim to resolve medication discrepancy, is one of the Joint Commission patient safety goals. Medication errors and adverse drug events that could result from medication discrepancy affect a large population. At least 1.5 million adverse drug events and $3.5 billion of financial burden yearly associated with medication errors could be prevented by interventions such as medication reconciliation. This research was conducted to answer the following research questions: (1a) What are the frequency range and type of measures used to report outpatient medication discrepancy? (1b) Which effective and efficient strategies for medication reconciliation in the outpatient setting have been reported? (2) What are the costs associated with medication reconciliation practice in primary care clinics? (3) What is the quality of medication reconciliation practice in primary care clinics? (4) Is medication reconciliation practice in primary care clinics cost-effective from the clinic perspective? Study designs used to answer these questions included a systematic review, cost analysis, quality assessments, and cost-effectiveness analysis. Data sources were published articles in the medical literature and data from a prospective workflow study, which included 150 patients and 1,238 medications. The systematic review confirmed that the prevalence of medication discrepancy was high in ambulatory care and higher in primary care settings. Effective strategies for medication reconciliation included the use of pharmacists, letters, a standardized practice approach, and partnership between providers and patients. Our cost analysis showed that costs associated with medication reconciliation practice were not substantially different between primary care clinics using or not using electronic medical records (EMR) ($0.95 per patient per medication in EMR clinics vs. $0.96 per patient per medication in non-EMR clinics, p=0.78). Even though medication reconciliation was frequently practiced (97-98%), the quality of such practice was poor (0-33% of process completeness measured by concordance of medication numbers and 29-33% of accuracy measured by concordance of medication names) and negatively (though not significantly) associated with medication regimen complexity. The incremental cost-effectiveness ratios for concordance of medication number per patient per medication and concordance of medication names per patient per medication were both 0.08, favoring EMR. Future studies including potential cost-savings from medication features of the EMR and potential benefits to minimize severity of harm to patients from medication discrepancy are warranted.
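The incremental cost-effectiveness ratios reported above follow the standard definition, the difference in costs divided by the difference in effects between the compared alternatives; a minimal sketch (the cost figures are taken from the abstract, the effectiveness values are placeholders for illustration only):

```python
def icer(cost_new, cost_old, effect_new, effect_old):
    """Incremental cost-effectiveness ratio: extra cost per extra unit of effect."""
    return (cost_new - cost_old) / (effect_new - effect_old)

# Costs per patient per medication come from the abstract (EMR vs. non-EMR clinics);
# the effectiveness values are invented placeholders.
print(icer(cost_new=0.95, cost_old=0.96, effect_new=0.33, effect_old=0.20))
```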
New methods for quantification and analysis of quantitative real-time polymerase chain reaction data
Abstract:
Quantitative real-time polymerase chain reaction (qPCR) is a sensitive gene quantitation method that has been widely used in the biological and biomedical fields. The currently used methods for PCR data analysis, including the threshold cycle (CT) method, linear and non-linear model fitting methods, all require subtracting background fluorescence. However, the removal of background fluorescence is usually inaccurate, and therefore can distort results. Here, we propose a new method, the taking-difference linear regression method, to overcome this limitation. Briefly, for each two consecutive PCR cycles, we subtracted the fluorescence in the former cycle from that in the later cycle, transforming the n cycle raw data into n-1 cycle data. Then linear regression was applied to the natural logarithm of the transformed data. Finally, amplification efficiencies and the initial DNA molecular numbers were calculated for each PCR run. To evaluate this new method, we compared it in terms of accuracy and precision with the original linear regression method with three background corrections, being the mean of cycles 1-3, the mean of cycles 3-7, and the minimum. Three criteria, including threshold identification, max R2, and max slope, were employed to search for target data points. Considering that PCR data are time series data, we also applied linear mixed models. Collectively, when the threshold identification criterion was applied and when the linear mixed model was adopted, the taking-difference linear regression method was superior as it gave an accurate estimation of initial DNA amount and a reasonable estimation of PCR amplification efficiencies. When the criteria of max R2 and max slope were used, the original linear regression method gave an accurate estimation of initial DNA amount. Overall, the taking-difference linear regression method avoids the error in subtracting an unknown background and thus it is theoretically more accurate and reliable. This method is easy to perform and the taking-difference strategy can be extended to all current methods for qPCR data analysis.
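A minimal sketch of the taking-difference idea described above, assuming exponential-phase data of the form F_i = F0(1+E)^i plus an unknown constant background (the function and variable names are ours, not the paper's):

```python
import numpy as np

def taking_difference_fit(fluorescence, cycles=None):
    """Taking-difference linear regression for qPCR exponential-phase data.

    Model: F_i = F0*(1+E)**i + B.  Differencing consecutive cycles removes the
    unknown background B:  D_i = F_{i+1} - F_i = F0*E*(1+E)**i, so ln(D_i) is
    linear in the cycle number.  Returns (E, F0).
    """
    f = np.asarray(fluorescence, dtype=float)
    i = np.arange(len(f)) if cycles is None else np.asarray(cycles, dtype=float)
    d = np.diff(f)                       # n cycles -> n-1 differences
    slope, intercept = np.polyfit(i[:-1], np.log(d), 1)
    E = np.exp(slope) - 1.0              # amplification efficiency
    F0 = np.exp(intercept) / E           # initial signal, proportional to copy number
    return E, F0

# Synthetic exponential-phase example with an unknown constant background.
cycles = np.arange(10, 20)
truth_F0, truth_E, background = 1e-4, 0.92, 0.35
f = truth_F0 * (1 + truth_E) ** cycles + background
print(taking_difference_fit(f, cycles))   # recovers ~ (0.92, 1e-4)
```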
Abstract:
Melt rate and surface temperature on the Greenland ice sheet are parameterized in terms of snow accumulation, mean annual air temperature and mean July air temperature. Melt rates are calculated using positive degree-days, and firn warming (i.e. the positive deviation of the temperature at 10-15 m depth from the mean annual air temperature) is estimated from the calculated amount of refrozen meltwater in the firn. A comparison between observed and calculated melt rates shows that the parameterization provides a reasonable estimate of the present ablation rates in West Greenland between 61°N and 76°N. The average equilibrium line elevation is estimated to be about 1150 m and 1000 m for West and East Greenland respectively, which is several hundred meters lower than previous estimates. However, the total annual ablation from the ice sheet is found to be about 280 km³ of water per year, which agrees well with most other estimates. The melt-rate model predicts significant melting and consequently significant firn warming even at the highest elevations of the South Greenland ice sheet, whereas a large region of central Greenland north of 70°N experiences little or no summer melting. This agrees with the distribution of the dry snow facies as given by BENSON (1962).
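A positive degree-day parameterization of the kind used above can be sketched as follows; the sinusoidal annual temperature cycle, the daily temperature variability and the degree-day factor are illustrative assumptions, not the paper's calibrated values:

```python
import numpy as np
from scipy.stats import norm

def pdd_melt(mean_annual_T, mean_july_T, ddf=0.008, sigma=4.5):
    """Annual melt (m water equivalent) from a positive degree-day sum.

    The annual cycle is approximated as a sinusoid between the mean annual and
    mean July air temperatures; ddf (m w.e. per positive degree-day) and the
    daily variability sigma (degC) are illustrative values.
    """
    days = np.arange(365)
    amplitude = mean_july_T - mean_annual_T
    T = mean_annual_T + amplitude * np.cos(2 * np.pi * (days - 196) / 365)
    # Expected positive part of a normally distributed daily temperature,
    # which accounts for day-to-day variability around the seasonal cycle.
    pdd = np.sum(T * norm.cdf(T / sigma) + sigma * norm.pdf(T / sigma))
    return ddf * pdd

print(f"{pdd_melt(mean_annual_T=-10.0, mean_july_T=4.0):.2f} m w.e. per year")
```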
Abstract:
Six whole rocks from the basaltic lava series drilled in the Vavilov basin have been analyzed by the 39Ar-40Ar stepwise heating method. One sample from the upper part of the Hole 655B basement gave a plateau age of 4.3 ± 0.3 Ma, whereas the others showed disturbed age spectra caused by alteration processes. The weighted averages of the ages measured at low and intermediate temperatures on these five samples are concordant (1) with each other and (2) with independent estimates deduced from paleontological and paleomagnetic arguments. Ages of 4.3 ± 0.3 Ma and of 3 to 2.6 Ma may represent reasonable estimates of the crystallization ages of the basaltic lava series of Holes 655B and 651A, respectively. These ages must be taken with caution because they correspond to argon released from secondary phases characterized by low argon retention.
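The weighted average ages referred to above are typically inverse-variance weighted means of the individual step ages; a minimal sketch with made-up step ages and 1-sigma errors, not the paper's data:

```python
import numpy as np

def weighted_mean_age(ages, errors):
    """Inverse-variance weighted mean age and its 1-sigma uncertainty."""
    ages, errors = np.asarray(ages, float), np.asarray(errors, float)
    w = 1.0 / errors**2
    mean = np.sum(w * ages) / np.sum(w)
    return mean, np.sqrt(1.0 / np.sum(w))

# Illustrative low/intermediate-temperature step ages (Ma) with 1-sigma errors.
print(weighted_mean_age([2.9, 2.7, 2.6, 3.1], [0.3, 0.2, 0.25, 0.4]))
```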
Abstract:
Geochemical analyses have been performed on sediment samples collected during Ocean Drilling Program Leg 178 from the continental rise and outer continental shelf of the Antarctic Peninsula. A suite of 21 trace elements was measured by neutron activation analysis in 39 sediment samples, and major element oxides were determined in 67 samples by electron microprobe analyses of fused glass beads. These geochemical data, combined with the X-ray diffraction and X-ray fluorescence data from shipboard analyses, provide a reasonable estimate of the mineral and chemical composition of sediments deposited along the western margin of the Antarctic Peninsula.
Abstract:
The assimilation and regeneration of dissolved inorganic nitrogen, and the concentration of N2O, were investigated at stations located in the NW European shelf sea during June/July 2011. These observational measurements within the photic zone demonstrated the simultaneous regeneration and assimilation of NH4+, NO2- and NO3-. NH4+ was assimilated at 1.82-49.12 nmol N/L/h and regenerated at 3.46-14.60 nmol N/L/h; NO2- was assimilated at 0-2.08 nmol N/L/h and regenerated at 0.01-1.85 nmol N/L/h; NO3- was assimilated at 0.67-18.75 nmol N/L/h and regenerated at 0.05-28.97 nmol N/L/h. Observations implied that these processes were closely coupled at the regional scale and that nitrogen recycling played an important role in sustaining phytoplankton growth during the summer. The [N2O], measured in water column profiles, was 10.13 ± 1.11 nmol/L and did not strongly diverge from atmospheric equilibrium, indicating that the sampled marine regions were neither a strong source nor a strong sink of N2O to the atmosphere. Multivariate analysis of data describing water column biogeochemistry and its links to N-cycling activity failed to explain the observed variance in rates of N-regeneration and N-assimilation, possibly due to the limited number of process rate observations. In the surface waters of five further stations, ocean acidification (OA) bioassay experiments were conducted to investigate the response of NH4+ oxidising and regenerating organisms to simulated OA conditions, including the implications for [N2O]. Multivariate analysis was undertaken which considered the complete bioassay data set of measured variables describing changes in N-regeneration rate, [N2O] and the biogeochemical composition of seawater. While anticipating biogeochemical differences between locations, we aimed to test the hypothesis that the underlying mechanism through which pelagic N-regeneration responded to simulated OA conditions was independent of location. Our objective was to develop a mechanistic understanding of how NH4+ regeneration, NH4+ oxidation and N2O production responded to OA. Results indicated that N-regeneration process responses to OA treatments were location specific; no mechanistic understanding of how N-regeneration processes respond to OA in the surface ocean of the NW European shelf sea could be developed.
Abstract:
A good and early fault detection and isolation system, along with efficient alarm management and fine sensor validation systems, is very important in today's complex process plants, especially in terms of safety enhancement and cost reduction. This paper presents a methodology for fault characterization. It is a self-learning approach developed in two phases. In an initial learning phase, simulations of the process units, with and without different faults, allow the system to automatically detect the key variables that characterize the faults. These key variables are then monitored in a second (online) phase in order to diagnose possible faults. Using this scheme, faults can be diagnosed and isolated at an early stage, before the fault turns into a failure.
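A minimal sketch of the two-phase idea described above, under the added assumption that the key variables are ranked by how strongly each simulated variable separates faulty from normal runs (the paper does not prescribe this particular criterion):

```python
import numpy as np

def learn_key_variables(normal_runs, fault_runs, top_k=2):
    """Phase 1: rank variables by how far fault runs deviate from normal runs."""
    mu, sigma = normal_runs.mean(axis=0), normal_runs.std(axis=0) + 1e-9
    deviation = np.abs((fault_runs.mean(axis=0) - mu) / sigma)   # z-score per variable
    key = np.argsort(deviation)[::-1][:top_k]
    return key, mu[key], sigma[key]

def monitor(sample, key, mu, sigma, threshold=3.0):
    """Phase 2: flag a possible fault when any key variable leaves its normal band."""
    return bool(np.any(np.abs((sample[key] - mu) / sigma) > threshold))

rng = np.random.default_rng(1)
normal = rng.normal(0.0, 1.0, size=(200, 10))      # simulated fault-free runs
fault = normal.copy()
fault[:, [2, 7]] += 4.0                             # simulated fault shifts variables 2 and 7
key, mu, sigma = learn_key_variables(normal, fault)
print("key variables:", key, "alarm on faulty sample:", monitor(fault[0], key, mu, sigma))
```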
Abstract:
Although there are many definitions of SMEs, there is no globally accepted definition of a small or medium-sized enterprise. Small and medium enterprises (SMEs), a catalyst for the economic growth and development of the country, face tough competition in the marketplace and in establishing themselves as credible suppliers of quality products and services. In India they produce more than 8,000 different products. The common perception is that small to medium businesses have very few options in terms of CRM solutions. This is clearly not the case: SMEs now have many options and can exercise them. Businesses are shifting from a product-centric to a customer-centric approach. Long before the advent of technology, businesses recognized that the customer is the soul of every business and tried to maintain personal relationships with their customers. Moving towards a customer-centric approach is a multi-pronged effort that requires transforming processes, culture and strategy from the top level down to every individual employee. Technology has a crucial role in providing the tools and infrastructure to support this. CRM supports SMEs in building customer loyalty.
Abstract:
The verification of compliance with a design specification in manufacturing requires the use of metrological instruments to check whether the magnitude associated with the design specification is or is not within the tolerance range. Such instrumentation, and its use during the measurement process, carries a measurement uncertainty whose value must be related to the value of the tolerance being verified. Most papers dealing jointly with tolerances and measurement uncertainties focus mainly on establishing an uncertainty-tolerance relationship, without paying much attention to the impact from the standpoint of process cost. This paper analyzes the cost of measurement uncertainty, considering uncertainty as a productive factor in the process outcome. This is done starting from a cost-tolerance model associated with the process; by means of this model, the measurement uncertainty is expressed in quantitative cost terms and its impact on the process is analyzed.
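One way to picture the analysis described above: the measurement uncertainty effectively narrows the usable tolerance band (guard-banding), and the cost-tolerance model translates that narrowing into a cost. The sketch below uses a generic reciprocal-power cost-tolerance curve with invented coefficients, as an illustration of the idea rather than the paper's actual model:

```python
def cost_of_tolerance(T, a=2.0, b=0.5, k=1.5):
    """Generic cost-tolerance model C(T) = a + b / T**k (invented coefficients)."""
    return a + b / T**k

def cost_of_uncertainty(T, U, **kw):
    """Extra manufacturing cost when a measurement uncertainty U guard-bands the
    tolerance: the process must effectively hit the tighter band T - 2U."""
    return cost_of_tolerance(T - 2 * U, **kw) - cost_of_tolerance(T, **kw)

T, U = 0.10, 0.01   # tolerance and measurement uncertainty in mm (illustrative)
print(f"C(T) = {cost_of_tolerance(T):.2f}, extra cost due to U = {cost_of_uncertainty(T, U):.2f}")
```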
Abstract:
Temperature is a first-class design concern in modern integrated circuits. The important increase in power densities associated with recent technology evolutions has led to the appearance of thermal gradients and hot spots during run-time operation. Temperature impacts several circuit parameters such as speed, cooling budgets, reliability, power consumption, etc. In order to fight against these negative effects, dynamic thermal management (DTM) techniques adapt the behavior of the chip relying on the information of a monitoring system that provides run-time thermal information of the die surface. The field of on-chip temperature monitoring has drawn the attention of the scientific community in recent years and is the object of study of this thesis. This thesis approaches the matter of on-chip temperature monitoring from different perspectives and levels, providing solutions to some of the most important issues.

The physical and circuit levels are covered with the design and characterization of two novel temperature sensors specially tailored for DTM purposes. The first sensor is based upon a mechanism that obtains a pulse whose width depends on the variation of the leakage currents with temperature. In a nutshell, a circuit node is charged and subsequently left floating so that it discharges through the subthreshold currents of a transistor; the time the node takes to discharge is the width of the pulse. Since the width of the pulse displays an exponential dependence on the temperature, the conversion into a digital word is realized by means of a logarithmic counter that performs both the time-to-digital conversion and the linearization of the output. The structure resulting from this combination of elements is implemented in a 0.35 µm technology and is characterized by a very reduced area, 10,250 nm², and power consumption, 1.05-65.5 nW at 5 samples/s; these figures outperformed all previous works at the time it was first published and, at the time of publication of this thesis, still surpass all previous implementations in the same technology node. Concerning accuracy, the sensor exhibits good linearity: even without calibration it displays a 3σ error of 1.97 °C, appropriate to deal with DTM applications. As explained, the sensor is completely compatible with standard CMOS processes; this fact, along with its tiny area and power overhead, makes it especially suitable for integration in a DTM monitoring system with a collection of on-chip monitors distributed across the chip. The exacerbated process fluctuations carried along with recent technology nodes jeopardize the linearity characteristics of the first sensor. In order to overcome these problems, a new temperature-inferring technique is proposed. In this case, we also rely on the thermal dependencies of the leakage currents that are used to discharge a floating node, but now the result comes from the ratio of two different measures, in one of which a characteristic of the discharging transistor, the gate voltage, is altered. This ratio proves to be very robust against process variations and displays a more than sufficient linearity with temperature: a 3σ error of 1.17 °C considering process variations and performing two-point calibration. The implementation of the sensing part based on this new technique involves several issues, such as the generation of a process-variation-independent voltage reference, which are analyzed in depth in the thesis. In order to perform the time-to-digital conversion, we employ the same digitization structure the former sensor used. A completely new standard cell library targeting low area and power overhead is built from scratch to implement the digitization part. Putting all the pieces together, we achieve a complete sensor system characterized by an ultra-low energy per conversion of 48-640 pJ and an area of 0.0016 mm²; these figures outperform all previous works. To prove this statement, we perform a thorough comparison with over 40 works from the scientific literature.

Moving up to the system level, the third contribution is centered on the modeling of a monitoring system consisting of a set of thermal sensors distributed across the chip. All previous works from the literature target maximizing the accuracy of the system with the minimum number of monitors. In contrast, we introduce new metrics of quality apart from just the number of sensors: we consider the power consumption, the sampling frequency, the interconnection costs and the possibility of choosing among different types of monitors. The model is introduced in a simulated annealing algorithm that receives the thermal information of a system, its physical properties, area, power and interconnection constraints, and a collection of monitor types; the algorithm yields the selected type of monitor, the number of monitors, their positions and the optimum sampling rate. We test the algorithm with the Alpha 21364 processor under several constraint configurations to prove its validity. When compared to previous works in the literature, the modeling presented here is the most complete.

Finally, the last contribution targets the networking level: given an allocated set of temperature monitors, we focus on solving the problem of connecting them in a way that is efficient from the area and power perspectives. Our first proposal in this area is the introduction of a new interconnection hierarchy level, the threshing level, between the monitors and the traditional peripheral buses, which applies data selectivity to reduce the amount of information that is sent to the central controller. The idea behind this new level is that in this kind of network most data are useless, because from the controller's viewpoint only a small amount of data (normally the extreme values) is of interest. To cover the new interconnection level, we propose a single-wire monitoring network based on a time-domain signaling scheme that significantly reduces both the switching activity over the wire and the power consumption of the network. This scheme codes the information in the time domain and allows a straightforward extraction of an ordered list of values from the maximum to the minimum. If the scheme is applied to monitors that employ time-to-digital conversion, digitization resource sharing is achieved in both time and space, producing an important saving in area and power consumption. Two prototypes of complete monitoring systems are presented; they significantly outperform previous works in terms of area and, especially, power consumption.
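The claim that a logarithmic counter both digitizes and linearizes the first sensor's output can be checked with a toy model: if the leakage-limited discharge time falls roughly exponentially with temperature, its base-2 logarithm is close to a straight line over the DTM temperature range. The constants below are invented for illustration:

```python
import numpy as np

# Illustrative model of the leakage-based sensor: the discharge time falls
# roughly exponentially with temperature (Arrhenius-like form, invented
# constants), so a logarithmic count of clock cycles is nearly linear in T.
t0, B, t_clk = 1e-9, 5200.0, 1e-7        # seconds, kelvin, seconds (invented)

T = np.linspace(273.0, 373.0, 11)        # 0 ... 100 degC
t_pulse = t0 * np.exp(B / T)             # leakage-limited discharge time
count = np.log2(t_pulse / t_clk)         # what a logarithmic counter would report

# Check linearity: fit count vs. T and inspect the residuals.
slope, offset = np.polyfit(T, count, 1)
residual = count - (slope * T + offset)
print(f"max deviation from a straight line: {np.ptp(residual):.3f} counts")
```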