794 results for 120300 DESIGN PRACTICE AND MANAGEMENT
Abstract:
Purpose: The study objective was to understand the meaning of evidence-based management for senior nurse leaders in accredited, public hospitals in the State of Sao Paulo, Brazil. Design and Method: A phenomenological approach was used to analyze interviews conducted with 10 senior nurse leaders between August 2011 and March 2012. The analytic method was developed by the Brazilian phenomenologist, Martins. Findings: Senior nurse leaders described how they critically appraise many sources of evidence when making managerial decisions. They emphasized the importance of working with their teams to locally adapt and evaluate best evidence associated with managerial decision making and organizational innovations. Their statements also demonstrated how they use evidence-based management to support the adoption of evidence-based practices. They did not, however, provide specific strategies for seeking out and obtaining evidence. Notable challenges were traditional cultures and rigid bureaucracies, while major facilitators included accreditation, teamwork, and shared decision making. Conclusions: Evidence-based management necessitates a continuous process of locating, implementing, and evaluating evidence. In this study leaders provided multiple, concrete examples of all these processes except seeking out and locating evidence. They also gave examples of other leadership skills associated with successful adoption of evidence-based practice and management, particularly interdisciplinary teamwork and shared decision making. Clinical Relevance: This study demonstrates senior nurse leaders' awareness and utilization of evidence-based management. The study also suggests what aspects of evidence-based management need further development, such as more active identification of potential, new organizational innovations. © 2013 Sigma Theta Tau International.
Abstract:
The need to effectively manage the documentation covering the entire production process, from the concept phase right through to market release, is a key issue in the creation of a successful and highly competitive product. For almost forty years the most commonly used strategies to achieve this have followed Product Lifecycle Management (PLM) guidelines. Translated into information management systems at the end of the '90s, this methodology is now widely used by companies operating all over the world in many different sectors. PLM systems and editor programs are the two principal types of software applications used by companies for their process automation. Editor programs allow the information related to the production chain to be stored in documents, while the PLM system stores and shares this information so that it can be used within the company and made available to partners. Different software tools, which capture and store documents and information automatically in the PLM system, have been developed in recent years. One of them is the "DirectPLM" application, developed by the Italian company "Focus PLM". It is designed to ensure interoperability between many editors and the Aras Innovator PLM system. In this dissertation we present "DirectPLM2", a new version of the previous software application DirectPLM. It was designed and developed as a prototype during an internship at Focus PLM. Its new implementation separates the abstract business logic from the implementation of the actual commands, which was previously strongly dependent on Aras Innovator. Thanks to this new design, Focus PLM can easily develop different versions of DirectPLM2, each one devised for a specific PLM system. In fact, the company can focus its development effort only on the specific set of software components that provide specialized functions interacting with that particular PLM system. This allows a shorter time-to-market and gives the company a significant competitive advantage.
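DirectPLM2's separation of abstract business logic from system-specific commands is essentially a connector (adapter) abstraction. The following Python sketch illustrates the idea only; the names PlmConnector, ArasConnector, check_in_document, and publish_drawing are hypothetical and do not come from DirectPLM2 or the Aras Innovator API.

```python
from abc import ABC, abstractmethod

class PlmConnector(ABC):
    """The abstract business logic talks only to this interface."""

    @abstractmethod
    def check_in_document(self, path: str, metadata: dict) -> str:
        """Store a document in the PLM vault; return its item id."""

class ArasConnector(PlmConnector):
    """One concrete backend; a sibling class could target another PLM system."""

    def check_in_document(self, path: str, metadata: dict) -> str:
        # Placeholder for calls to the Aras Innovator web services.
        print(f"Checking {path} into Aras Innovator with {metadata}")
        return "aras-item-0001"

def publish_drawing(connector: PlmConnector, path: str) -> str:
    """Business logic: independent of which PLM system sits behind the connector."""
    return connector.check_in_document(path, {"type": "CAD Drawing"})

if __name__ == "__main__":
    print(publish_drawing(ArasConnector(), "bracket.dwg"))
```

Under this kind of design, supporting a new PLM system only requires writing another PlmConnector subclass, which is the development-effort argument made above.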
Abstract:
With this dissertation research we investigate intersections between design and marketing, and in particular which factors contribute to a product design becoming brand formative. We have developed a Brand Formative Design (BFD) framework, which investigates individual design features in a holistic, comparable, brand-relevant, and consumer-specific context. We discuss which kinds of characteristics contribute to BFD, illuminate how they should be applied, and examine: a holistic framework leading to Brand Formative Design; the identification and assessment of BFD Drivers; the dissection of products into three Distinctive Design Levels; the detection of surprising design preferences; the appropriate degree of scheme deviation with evolutionary design; simulated BFD development processes with three different products and the integration of consumers; future-oriented objectification, comparability, and assessment of design; and recommendations for the management of design in a brand-specific context. Design is a product feature that contributes significantly to the success of products. However, the development of new design involves challenges. Design can hardly be objectified; many people have an opinion concerning the attractiveness of new products but cannot formulate their future preferences. Product design is widely developed based on intuition, which makes design difficult to manage. Here the concept of Brand Formative Design provides a framework that helps to structure, objectify, develop, and assess new evolutionary design in brand-relevant and future-relevant contexts, while also integrating consumers and their preferences without overly restricting creativity.
Abstract:
Every year, thousands of surgical procedures are performed to repair or, where possible, completely replace organs or tissues affected by degenerative diseases. Patients with these kinds of illnesses spend long periods waiting for a donor who could replace the damaged organ or tissue in a short time. The lack of biological alternatives to conventional surgical treatments such as autografts, allografts, and xenografts led researchers from different areas to collaborate in finding innovative solutions. This research gave rise to a new discipline able to merge knowledge from molecular biology, biomaterials, engineering, biomechanics and, recently, design and architecture. This discipline is named Tissue Engineering (TE) and it represents a step towards substitutive or regenerative medicine. One of the major challenges of TE is to design and develop, using a biomimetic approach, an artificial 3D anatomical scaffold suitable for the adhesion of cells that can proliferate and differentiate in response to the biological and biophysical stimuli offered by the specific tissue to be replaced. Nowadays, powerful instruments allow increasingly accurate and well-defined analyses of patients who need more precise diagnoses and treatments. Starting from patient-specific information provided by CT (Computed Tomography), micro-CT, and MRI (Magnetic Resonance Imaging), an image-based approach can be used to reconstruct the site to be replaced. With the aid of recent Additive Manufacturing techniques, which can print three-dimensional objects with sub-millimetric precision, it is now possible to exercise almost complete control over the parametric characteristics of the scaffold: this is the way to achieve correct cellular regeneration. In this work, we focus on a branch of TE known as Bone TE, in which bone is the main subject. Bone TE combines osteoconductive and morphological aspects of the scaffold, whose main properties are pore diameter, structure porosity, and interconnectivity. Achieving the ideal values of these parameters is the main goal of this work: here we create a simple and interactive biomimetic design process, based on 3D CAD modeling and generative algorithms, that provides a way to control the main properties and to create a structure morphologically similar to cancellous bone. Two different typologies of scaffold are compared: the first is based on Triply Periodic Minimal Surfaces (TPMS), whose basic crystalline geometries are nowadays used for Bone TE scaffolding; the second is based on Voronoi diagrams, which are more often used in the design of decorations and jewellery for their capacity to decompose and tessellate a volumetric space using a heterogeneous spatial distribution (frequent in nature). In this work, we show how to manipulate the main properties (pore diameter, structure porosity, and interconnectivity) of the TE-oriented scaffold design through the implementation of generative algorithms: "bringing nature back to nature".
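As a rough illustration of how a generative algorithm can control porosity, the gyroid, one member of the TPMS family mentioned above, can be sampled as an implicit function on a voxel grid, with the iso-level acting as a porosity knob. This Python sketch uses only NumPy and illustrative grid sizes and iso-levels; it is not the CAD pipeline used in the work.

```python
import numpy as np

def gyroid(x, y, z):
    """Implicit gyroid TPMS: its zero set approximates a minimal surface."""
    return (np.sin(x) * np.cos(y)
            + np.sin(y) * np.cos(z)
            + np.sin(z) * np.cos(x))

# Sample one unit cell (period 2*pi) on a voxel grid.
n = 64
axis = np.linspace(0, 2 * np.pi, n)
x, y, z = np.meshgrid(axis, axis, axis, indexing="ij")
field = gyroid(x, y, z)

# Thresholding the field yields a solid; the iso-level t tunes porosity.
for t in (0.0, 0.4, 0.8):
    solid = field < t          # voxels belonging to the material phase
    porosity = 1.0 - solid.mean()
    print(f"iso-level {t:.1f} -> porosity {porosity:.2f}")
```

A marching-cubes pass over the same field would turn the voxel phase into a printable mesh; the pore diameter scales with the cell period, while the iso-level sets the porosity fraction.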
Clinical aspects, diagnostic challenges and management of patients with neuroendocrine tumors (NETs)
Abstract:
Neuroendocrine tumor (NET) entities are rare malignancies. Higher awareness and improved diagnostic methods have led to an increasing incidence of these diseases, and most oncologists deal with such patients in their daily practice. The symposium on NETs that was held in Merano (Italy) in October 2009 was organized by the German-speaking European School of Oncology (dESO) and gathered specialists from different disciplines of transalpine countries to bring together experiences and observations regarding these tumors. The goal of the meeting and of this review was to illustrate both well- and poorly differentiated NETs and to encourage interdisciplinary approaches.
Abstract:
The central challenge to educators in the liberal arts, as in all areas of study, is transfer of learning, i.e., how can we design learning environments and instruction so that students will be able to use what they learn in appropriate new contexts? Alfred North Whitehead described this as the problem of ‘inert knowledge’ nearly a century ago, and Dewey noted that instruction which helps students reproduce what is studied on exams might not produce the depth of understanding that allows for recognizing the relevance of what is known to a particular situation and the ability to apply it. Knowledge that is not conditionalized (i.e., knowledge for which the learner does not know when, where, and why it is to be used) is inert.
Abstract:
OBJECTIVE: To simultaneously determine perceived vs. practiced adherence to recommended interventions for the treatment of severe sepsis or septic shock. DESIGN: One-day cross-sectional survey. SETTING: Representative sample of German intensive care units stratified by hospital size. PATIENTS: Adult patients with severe sepsis or septic shock. INTERVENTIONS: None. MEASUREMENTS AND MAIN RESULTS: Practice recommendations were selected by German Sepsis Competence Network (SepNet) investigators. External intensivists visited randomly chosen intensive care units and asked the responsible intensive care unit director how often these recommendations were used. Responses "always" and "frequently" were combined to depict perceived adherence. Thereafter, patient files were audited. Three hundred sixty-six patients in 214 intensive care units fulfilled the criteria and received full support. One hundred fifty-two patients had acute lung injury or acute respiratory distress syndrome. Low tidal volume ventilation (≤6 mL/kg predicted body weight) was documented in 2.6% of these patients. A total of 17.1% of patients had tidal volumes between 6 and 8 mL/kg predicted body weight and 80.3% had >8 mL/kg predicted body weight. Mean tidal volume was 10.0 ± 2.4 mL/kg predicted body weight. Perceived adherence to low tidal volume ventilation was 79.9%. Euglycemia (4.4-6.1 mmol/L) was documented in 6.2% of 355 patients. A total of 33.8% of patients had blood glucose levels ≤8.3 mmol/L and 66.2% were hyperglycemic (blood glucose >8.3 mmol/L). Among 207 patients receiving insulin therapy, 1.9% were euglycemic, 20.8% had blood glucose levels ≤8.3 mmol/L, and 1.0% were hypoglycemic. Overall, mean maximal glucose level was 10.0 ± 3.6 mmol/L. Perceived adherence to strict glycemic control was 65.9%. Although perceived adherence to recommendations was higher in academic and larger hospitals, actual practice was not significantly influenced by hospital size or university affiliation. CONCLUSIONS: This representative survey shows that current therapy of severe sepsis in German intensive care units complies poorly with practice recommendations. Intensive care unit directors perceive adherence to be higher than it actually is. Implementation strategies involving all intensive care unit staff are needed to overcome this gap between current evidence-based knowledge, practice, and perception.
Abstract:
This paper focuses on the integration of state-of-the-art technologies in the fields of telecommunications, simulation algorithms, and data mining in order to develop a semi- to fully-automated monitoring and management system for Type 1 diabetes patients. The main components of the system are a glucose measurement device, an insulin delivery system (insulin injections or insulin pumps), a mobile phone for the GPRS network, and a PDA or laptop for the Internet. In the medical environment, appropriate infrastructure for the storage, analysis, and visualization of patient data has been implemented to facilitate treatment design by health care experts.
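The data flow from the measurement device to the medical server can be pictured as small, serializable reading records. The sketch below is a minimal illustration; the field names and the JSON encoding are assumptions for the sketch, not the protocol actually used by the system.

```python
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class GlucoseReading:
    patient_id: str
    glucose_mmol_l: float     # blood glucose in mmol/L
    timestamp: str            # ISO-8601, UTC

def encode_reading(reading: GlucoseReading) -> bytes:
    """Serialize a reading for transmission over GPRS or the Internet."""
    return json.dumps(asdict(reading)).encode("utf-8")

reading = GlucoseReading(
    patient_id="patient-042",
    glucose_mmol_l=5.4,
    timestamp=datetime.now(timezone.utc).isoformat(),
)
payload = encode_reading(reading)
print(payload)  # the medical-side infrastructure would store and analyze this record
```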
Abstract:
SETTING Drug resistance threatens tuberculosis (TB) control, particularly among human immunodeficiency virus (HIV) infected persons. OBJECTIVE To describe practices in the prevention and management of drug-resistant TB under antiretroviral therapy (ART) programs in lower-income countries. DESIGN We used online questionnaires to collect program-level data on 47 ART programs in Southern Africa (n = 14), East Africa (n = 8), West Africa (n = 7), Central Africa (n = 5), Latin America (n = 7) and the Asia-Pacific (n = 6) in 2012. Patient-level data were collected on 1002 adult TB patients seen at 40 of the participating ART programs. RESULTS Phenotypic drug susceptibility testing (DST) was available in 36 (77%) ART programs, but was only used for 22% of all TB patients. Molecular DST was available in 33 (70%) programs and was used in 23% of all TB patients. Twenty ART programs (43%) provided directly observed therapy (DOT) during the entire course of treatment, 16 (34%) during the intensive phase only, and 11 (23%) did not follow DOT. Fourteen (30%) ART programs reported no access to second-line anti-tuberculosis regimens; 18 (38%) reported TB drug shortages. CONCLUSIONS Capacity to diagnose and treat drug-resistant TB was limited across ART programs in lower-income countries. DOT was not always implemented and drug supplies were regularly interrupted, which may contribute to the global emergence of drug resistance.
Abstract:
The objective of this review was to encompass the relevant literature and current best practice options for this challenging, sometimes incurable problem. The source of the data was Ovid MEDLINE from 1946 to 2014. Review methods consisted of articles with clinical correlates. The most important cause of recurrence is enucleation with rupture and incomplete tumor excision at operation. An incomplete pseudocapsule, extracapsular extension, pseudopods of pleomorphic adenoma tissue, and satellite pleomorphic adenoma beyond the pseudocapsule are also likely linked to recurrent pleomorphic adenoma. Most recurrent pleomorphic adenomas are multinodular. Magnetic resonance imaging is the imaging study of choice for recurrent pleomorphic adenoma. Nerve integrity monitoring may reduce morbidity in surgery for recurrent pleomorphic adenoma. Treatment of recurrent pleomorphic adenoma must be individualized. Total parotidectomy, given the multicentricity of recurrent pleomorphic adenoma, is appropriate in many patients, but may be inadequate to control recurrent pleomorphic adenoma. There is accumulating evidence from retrospective series that postoperative radiation therapy results in significantly better local control. Level of evidence: NA. Laryngoscope, 2014.
Abstract:
Through the correct implementation of lean manufacturing methods, a company can greatly improve its business. Over a period of three months at TTM Technologies, I utilized my knowledge to fix existing problems and streamline production. In addition, other trouble areas in their production process were discovered, and proper lean methods were used to address them. TTM Technologies saw many changes in the right direction over this time period.
Abstract:
The European Higher Education Area (EHEA) has led to a change in the way subjects are taught. One of the most important aspects of the EHEA is its support for the autonomous study of students. Taking this new approach into account, the virtual laboratory of the subject Mechanisms of the Aeronautical studies at the Technical University of Madrid is being migrated to an on-line scheme. This virtual laboratory consists of two practices: the design of cam-follower mechanisms and the design of gear trains. Both practices are software applications that, in the current situation, need to be installed on each computer, and the students carry out the practice in the computer classroom of the school under the supervision of a teacher. During this year, the cam-follower mechanism design practice has been moved to a web application using Java and the Google Web Toolkit. In this practice, the students have to design and study the operation of a cam that performs a specific displacement diagram with a selected follower, taking into account that the mechanism must be able to work properly in the high-speed regime. The practice has maintained its objectives on the new platform, while taking advantage of the new methodology and trying to avoid the inconveniences the previous version had shown. Once the new practice was ready, a pilot study was carried out to compare both approaches: on-line and in-lab. This paper shows the adaptation of the cam-and-follower practice to an on-line methodology. Both practices are described, and the changes made to the initial one are shown. They are compared and the weak and strong points of each one are analyzed. Finally, we explain the pilot study carried out, the students' impressions, and the results obtained.
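For context, one displacement law that is well suited to high-speed cam operation is the cycloidal motion law, because its acceleration is continuous and vanishes at both ends of the rise. The Python sketch below evaluates that textbook law; the lift and rise angle are illustrative values, not parameters from the actual practice (which is implemented in Java).

```python
import math

def cycloidal_rise(theta: float, h: float, beta: float):
    """Cycloidal motion law for a cam rise.

    theta: current cam angle (rad), 0 <= theta <= beta
    h:     total follower lift
    beta:  cam angle over which the rise occurs (rad)
    Returns displacement, velocity, and acceleration (per unit cam angle).
    """
    u = theta / beta
    s = h * (u - math.sin(2 * math.pi * u) / (2 * math.pi))
    v = (h / beta) * (1 - math.cos(2 * math.pi * u))
    a = (2 * math.pi * h / beta**2) * math.sin(2 * math.pi * u)
    return s, v, a

# Illustrative values: 20 mm lift over a 120-degree rise.
h, beta = 20.0, math.radians(120)
for deg in (0, 30, 60, 90, 120):
    s, v, a = cycloidal_rise(math.radians(deg), h, beta)
    print(f"{deg:3d} deg: s={s:6.2f} mm  v={v:6.2f}  a={a:7.2f}")
```

Since both velocity and acceleration are zero at the boundaries of the rise, the follower experiences no jump in inertial force, which is the property that makes this family of diagrams acceptable at high speed.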
Abstract:
Temperature is a first-class design concern in modern integrated circuits. The important increase in power densities associated with recent technology evolutions has led to the apparition of thermal gradients and hot spots during run-time operation. Temperature impacts several circuit parameters such as speed, cooling budgets, reliability, power consumption, etc. In order to fight against these negative effects, dynamic thermal management (DTM) techniques adapt the behavior of the chip relying on the information of a monitoring system that provides run-time thermal information of the die surface. The field of on-chip temperature monitoring has drawn the attention of the scientific community in recent years and is the object of study of this thesis. This thesis approaches the matter of on-chip temperature monitoring from different perspectives and levels, providing solutions to some of the most important issues. The physical and circuit levels are covered with the design and characterization of two novel temperature sensors specially tailored for DTM purposes. The first sensor is based upon a mechanism that obtains a pulse whose width varies with the dependence of the leakage currents on temperature. In a nutshell, a circuit node is charged and subsequently left floating so that it discharges through the subthreshold currents of a transistor; the time the node takes to discharge is the width of the pulse. Since the width of the pulse displays an exponential dependence on the temperature, the conversion into a digital word is realized by means of a logarithmic counter that performs both the time-to-digital conversion and the linearization of the output. The structure resulting from this combination of elements is implemented in a 0.35 µm technology and is characterized by very reduced area, 10,250 nm², and power consumption, 1.05-65.5 nW at 5 samples/s; these figures outperformed all previous works by the time it was first published and, by the time of the publication of this thesis, they still outperform all previous implementations in the same technology node. Concerning the accuracy, the sensor exhibits good linearity; even without calibration it displays a 3σ error of 1.97 °C, appropriate to deal with DTM applications. As explained, the sensor is completely compatible with standard CMOS processes; this fact, along with its tiny area and power overhead, makes it especially suitable for integration in a DTM monitoring system with a collection of on-chip monitors distributed across the chip.
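To see why a logarithmic counter linearizes this first sensor, note that subthreshold leakage grows roughly exponentially with temperature, so the discharge time (the pulse width) shrinks roughly exponentially as the die heats up, and taking the logarithm of that time yields an approximately linear code over the operating range. The constants in this Python sketch are arbitrary illustrative values, not the parameters of the fabricated sensor.

```python
import math

# Illustrative constants only (not the fabricated sensor's values).
K_BOLTZMANN = 8.617e-5   # eV/K
E_ACT = 0.5              # assumed activation energy of the leakage, eV
T0_WIDTH = 1.0           # pulse-width normalization constant

def pulse_width(temp_kelvin: float) -> float:
    """Discharge time of the floating node: leakage ~ exp(-Ea/kT),
    so the pulse width falls roughly exponentially with temperature."""
    leakage = math.exp(-E_ACT / (K_BOLTZMANN * temp_kelvin))
    return T0_WIDTH / leakage

def log_counter(width: float) -> float:
    """A logarithmic time-to-digital conversion linearizes the output:
    log(width) = log(T0) + Ea/(k*T), i.e. linear in 1/T, which is
    approximately linear in T over a narrow operating range."""
    return math.log(width)

for t in (300, 320, 340, 360):
    code = log_counter(pulse_width(t))
    print(f"{t} K -> log code {code:6.3f}")
```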
The exacerbated process fluctuations carried along with recent technology nodes jeopardize the linearity characteristics of the first sensor. In order to overcome these problems, a new temperature-inferring technique is proposed. In this case, we also rely on the thermal dependencies of the leakage currents that are used to discharge a floating node, but now the result comes from the ratio of two different measures, in one of which we alter a characteristic of the discharging transistor (the gate voltage). This ratio proves to be very robust against process variations and displays more than sufficient linearity with temperature (a 3σ error of 1.17 °C, considering process variations and performing two-point calibration). The implementation of the sensing part based on this new technique involves several issues, such as the generation of a process-variation-independent voltage reference, that are analyzed in depth in the thesis. In order to perform the time-to-digital conversion, we employ the same digitization structure the former sensor used. A completely new standard cell library targeting low area and power overhead is built from scratch to implement the digitization part. Putting all the pieces together, we achieve a complete sensor system characterized by an ultra-low energy per conversion of 48-640 pJ and an area of 0.0016 mm²; this figure outperforms all previous works. To prove this statement, we perform a thorough comparison with over 40 works from the scientific literature. Moving up to the system level, the third contribution is centered on the modeling of a monitoring system consisting of a set of thermal sensors distributed across the chip. All previous works from the literature target maximizing the accuracy of the system with the minimum number of monitors. In contrast, we introduce new metrics of quality apart from just the number of sensors; we consider the power consumption, the sampling frequency, the possibility of considering different types of monitors, and the interconnection costs. The model is introduced in a simulated annealing algorithm that receives the thermal information of a system, its physical properties, area, power, and interconnection constraints, and a collection of monitor types; the algorithm yields the selected type of monitor, the number of monitors, their positions, and the optimum sampling rate. We test the algorithm with the Alpha 21364 processor under several constraint configurations to prove its validity. When compared to other previous works in the literature, the modeling presented here is the most complete. Finally, the last contribution targets the networking level: given an allocated set of temperature monitors, we focus on solving the problem of connecting them in a way that is efficient from the area and power perspectives. Our first proposal in this area is the introduction of a new interconnection hierarchy level, the threshing level, between the monitors and the traditional peripheral buses, which applies data selectivity to reduce the amount of information that is sent to the central controller. The idea behind this new level is that in this kind of network most data are useless, because from the controller viewpoint only a small amount of data (normally the extreme values) is of interest. To cover the new interconnection level, we propose a single-wire monitoring network based on a time-domain signaling scheme that significantly reduces both the switching activity over the wire and the power consumption of the network. This scheme codes the information in the time domain and allows a straightforward extraction of an ordered list of values from the maximum to the minimum. If the scheme is applied to monitors that employ time-to-digital conversion (TDC), digitization resource sharing is achieved, producing an important saving in area and power consumption. Two prototypes of complete monitoring systems are presented; they significantly outperform previous works in terms of area and, especially, power consumption.
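The single-wire, time-domain signaling scheme can be mimicked behaviorally in a few lines: each monitor stays silent for a delay inversely related to its reading, so pulses reach the controller already ordered from the hottest reading downwards and the controller can discard all but the first few. This Python sketch is a behavioral illustration under assumed codes and time slots, not the circuit implementation.

```python
FULL_SCALE = 255          # assumed digital full-scale temperature code
SLOT = 1.0                # assumed time per code step (arbitrary units)

def arrival_times(monitor_codes):
    """Each monitor emits one pulse after a delay inversely related to
    its reading, so hotter monitors fire earlier on the shared wire."""
    events = [((FULL_SCALE - code) * SLOT, mon_id, code)
              for mon_id, code in monitor_codes.items()]
    return sorted(events)  # arrival order == descending temperature order

codes = {"mon0": 181, "mon1": 240, "mon2": 97, "mon3": 203}
for t, mon_id, code in arrival_times(codes):
    print(f"t={t:5.1f}: pulse from {mon_id} (code {code})")
# The controller can stop listening after the first few pulses, since
# only the extreme (hottest) values are usually of interest to DTM.
```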