959 results for Robust Design


Relevance: 30.00%

Publisher:

Abstract:

The present thesis focuses on the problem of robust output regulation for minimum-phase nonlinear systems by means of identification techniques. Given a controlled plant and an exosystem (an autonomous system that generates references or disturbances), the control goal is to design a regulator that processes the only available measurement, i.e., the error/output variable, so as to make it vanish asymptotically. In this context, such a regulator can be designed following the well-known "internal model principle", which states that the regulation objective can be achieved by embedding a replica of the exosystem model in the controller structure. The main problem arises when the exosystem model is affected by parametric or structural uncertainties: in this case the exact behavior of the exogenous system cannot be reproduced in the regulator, and the control goal cannot be achieved. In this work, the idea is to solve the problem by developing a general framework in which a standard regulator coexists with an estimator able to guarantee (when possible) the best estimate of all the uncertainties present in the exosystem, thereby conferring "robustness" on the overall control loop.
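
In the linear case, the principle the abstract invokes can be stated compactly. The display below is a minimal sketch with assumed notation (S, Φ, Γ, G and the parameter vector ρ are illustrative, not taken from the thesis):

```latex
\[
  \dot w = S(\rho)\,w \qquad \text{(exosystem, with uncertain parameters } \rho\text{)}
\]
\[
  \dot x = f(x,u,w), \qquad e = h(x,w) \qquad \text{(plant and regulation error)}
\]
\[
  \dot \xi = \Phi(\hat\rho)\,\xi + G\,e, \qquad u = \Gamma(\hat\rho)\,\xi
  \qquad \text{(regulator embedding a copy of } S(\hat\rho)\text{)}
\]
```

Here ρ̂(t) is the identifier's running estimate; exact regulation (e → 0) is recovered whenever the estimate converges to the true parameters, which is the sense in which the estimator confers robustness on the loop.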

Relevance: 30.00%

Publisher:

Abstract:

When designing metaheuristic optimization methods, there is a trade-off between application range and effectiveness. For large real-world instances of combinatorial optimization problems, out-of-the-box metaheuristics often fail, and optimization methods need to be adapted to the problem at hand. Knowledge about the structure of high-quality solutions can be exploited by introducing a so-called bias into one of the components of the metaheuristic used; such problem-specific adaptations increase search performance. This thesis analyzes the characteristics of high-quality solutions for three constrained spanning tree problems: the optimal communication spanning tree problem, the quadratic minimum spanning tree problem and the bounded-diameter minimum spanning tree problem. Several relevant tree properties that should be examined when analyzing a constrained spanning tree problem are identified. Based on the insights gained into the structure of high-quality solutions, efficient and robust solution approaches are designed for each of the three problems, and experimental studies compare the performance of the developed approaches with the current state of the art.
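
As an illustration of what a bias in a metaheuristic component can look like (the thesis's concrete biases are not reproduced here), the sketch below builds random spanning trees with a rank-based preference for cheap edges; the exponent `bias` and all names are hypothetical:

```python
import random

class DisjointSet:
    """Union-find over vertices 0..n-1, used to avoid cycles."""
    def __init__(self, n):
        self.parent = list(range(n))
    def find(self, x):
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]  # path halving
            x = self.parent[x]
        return x
    def union(self, a, b):
        ra, rb = self.find(a), self.find(b)
        if ra == rb:
            return False
        self.parent[ra] = rb
        return True

def biased_spanning_tree(n, edges, bias=2.0):
    """Randomized Kruskal-style construction with a rank-based edge bias:
    edges are ranked by weight and sampled with probability proportional
    to (rank + 1) ** -bias, so cheap edges are strongly preferred while
    the construction stays randomized. `edges` is a list of (u, v, weight)."""
    ranked = sorted(edges, key=lambda e: e[2])
    ds, tree = DisjointSet(n), []
    candidates = list(range(len(ranked)))
    while len(tree) < n - 1 and candidates:
        weights = [(i + 1) ** -bias for i in candidates]
        idx = random.choices(candidates, weights=weights)[0]
        candidates.remove(idx)
        u, v, w = ranked[idx]
        if ds.union(u, v):
            tree.append((u, v, w))
    return tree
```

Sampling many such trees and keeping the best, or handing them to a local search, is the usual way a biased construction component is embedded in a metaheuristic.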

Relevance: 30.00%

Publisher:

Abstract:

This thesis focuses on the preliminary design and the performance analysis of a multirotor platform. A multirotor is an electrically powered Vertical Take-Off and Landing (VTOL) machine with more than two rotors that lift and control the platform. Multirotors are agile, compact and robust, making them ideally suited for both indoor and outdoor applications, especially for carrying several sensors such as electro-optical multispectral sensors or gas sensors. The main disadvantage is the limited endurance, due to heavy Li-Po batteries and the high disk loading that results from using several small propellers. At the same time, multirotor design does not follow any engineering principle but rather the ideas of amateur builders. To fill this gap, an adaptation of classic airplane design theory is implemented for the preliminary design, and a detailed study of the endurance is performed to establish a sound way to design this kind of VTOL platform.
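
The endurance penalty of high disk loading follows from standard momentum theory; the first-order estimate below is a sketch with assumed symbols (m: take-off mass, A: total disk area, FM: figure of merit, η: electrical efficiency, E_batt: battery energy), not the thesis's actual sizing method:

```latex
\[
  P_\text{hover} \approx \frac{(mg)^{3/2}}{\text{FM}\,\sqrt{2\rho A}},
  \qquad
  t_\text{endurance} \approx \frac{\eta\,E_\text{batt}}{P_\text{hover}}
\]
```

Since the hover power per unit weight grows with the square root of the disk loading mg/A, splitting the thrust over several small propellers (high disk loading) directly reduces hover endurance for a given battery energy.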

Relevance: 30.00%

Publisher:

Abstract:

This work is a contribution to the rapidly growing research fields of nanobiotechnology and nanomedicine. It deals with the specific design of magnetic nanomaterials for various biomedical applications, such as contrast agents for magnetic resonance imaging (MRI) or "theranostic" agents for simultaneous optical/MR detection and treatment by photodynamic therapy (PDT).

A variety of magnetic nanoparticles (NPs) with widely differing magnetic properties were synthesized and exhaustively characterized in the course of this work. In addition, a whole series of surface-modification strategies was developed to improve both the colloidal and the chemical stability of the particles and thereby meet the demanding requirements of in vitro and in vivo applications. These strategies involved not only the use of bifunctional and multifunctional polymer ligands but also the condensation of suitable silane compounds to form a robust, chemically inert and hydrophilic silica (SiO2) shell around the magnetic NPs.

More specifically, the formation mechanism and the magnetic properties of monodisperse MnO NPs were investigated in detail. Owing to their unique magnetic behavior, these NPs are particularly suitable as (positive) contrast agents for shortening the longitudinal relaxation time T1, which leads to a brightening of the corresponding MR image. This contrast-enhancing potential was indeed confirmed in several studies with different surface ligands. Au@MnO "nanoflowers", on the other hand, represent another class of nanomaterials that has attracted considerable interest in the scientific community in recent years, often termed "nano-heteromaterials". Such nano-heteroparticles combine the individual physical and chemical properties of their respective components in a single nanoparticulate system, thereby increasing the versatility of possible applications. Both the magnetic characteristics of MnO and the optical properties of Au offer the possibility of using these nanoflowers for combined MRI and optical imaging. Moreover, the presence of two chemically distinct surfaces allows the simultaneous, selective attachment of catechol ligands (on MnO) and thiol ligands (on Au). The therapeutic potential of magnetic NPs was further demonstrated with MnO NPs functionalized with the photosensitizer protoporphyrin IX (PP). Upon irradiation with visible light, PP initiates the production of cytotoxic reactive oxygen species. We show that kidney cancer cells incubated with PP-functionalized MnO NPs die after irradiation with laser light, whereas they remain unaffected without irradiation. In a similar experiment, we investigated the properties of SiO2-coated MnO NPs. For this purpose, a novel SiO2 coating method was developed that permits the subsequent attachment of a wide range of ligands and the incorporation of fluorescent dyes by conventional silane sol-gel chemistry. The particles showed excellent stability in a whole series of aqueous solutions, including physiological saline, buffer solutions and human blood serum, and were less prone to Mn ion leaching than simple PEGylated MnO NPs.

Furthermore, it was shown that the thin SiO2 shell had only a minor influence on the magnetic behavior of the NPs, so that they can still be used as T1 contrast agents. Finally, FePt@MnO NPs were prepared that combine the individual magnetic characteristics of a ferromagnetic (FePt) and an antiferromagnetic (MnO) material. We show that the respective particle sizes, and hence the resulting magnetic behavior, can be varied by changing the experimental parameters; the magnetic interaction between the two materials can be attributed to spin communication at the interface between the two NP species.

Relevance: 30.00%

Publisher:

Abstract:

A central design challenge facing network planners is how to select a cost-effective network configuration that can provide uninterrupted service despite edge failures. In this paper, we study the Survivable Network Design (SND) problem, a core model underlying the design of such resilient networks that incorporates complex cost and connectivity trade-offs. Given an undirected graph with specified edge costs and (integer) connectivity requirements between pairs of nodes, the SND problem seeks the minimum-cost set of edges that interconnects each node pair with at least as many edge-disjoint paths as the pair's connectivity requirement. We develop a hierarchical approach for solving the problem that integrates ideas from decomposition, tabu search, randomization, and optimization. The approach decomposes the SND problem into two subproblems, backbone design and access design, and uses an iterative multi-stage method to solve the SND problem in a hierarchical fashion. Since both subproblems are NP-hard, we develop effective optimization-based tabu search strategies that balance intensification and diversification to identify near-optimal solutions. To initialize this method, we develop two heuristic procedures that yield good starting points. We test the combined approach on large-scale SND instances and empirically assess the quality of the solutions against optimal values or lower bounds. On average, our hierarchical solution approach generates solutions within 2.7% of optimality, even for very large problems that cannot be solved with exact methods, and our results demonstrate that the method's performance is robust across problems of different sizes and connectivity characteristics.
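
The paper's specific tabu strategies are not detailed in the abstract; as a generic illustration of the intensification/diversification balance it mentions, here is a minimal tabu-search skeleton (the move and neighborhood structure for the backbone and access subproblems would be supplied by problem-specific code, which is assumed rather than reproduced):

```python
def tabu_search(initial, neighbors, cost, tenure=7, iters=200):
    """Generic tabu-search skeleton (illustrative only). A short-term
    memory of recent moves forces diversification; the aspiration
    criterion re-admits a tabu move when it would improve on the best
    solution found so far (intensification). `neighbors(sol)` must yield
    (move, candidate_solution) pairs; `cost` maps a solution to a number."""
    current = best = initial
    tabu = {}  # move -> iteration index until which the move is forbidden
    for it in range(iters):
        candidates = [(move, sol) for move, sol in neighbors(current)
                      if tabu.get(move, -1) < it or cost(sol) < cost(best)]
        if not candidates:
            break
        move, current = min(candidates, key=lambda ms: cost(ms[1]))
        tabu[move] = it + tenure
        if cost(current) < cost(best):
            best = current
    return best
```

In the hierarchical scheme described above, such a routine would be applied alternately to the backbone and access subproblems, starting from the heuristic solutions the paper constructs.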

Relevance: 30.00%

Publisher:

Abstract:

Protein scaffolds that support molecular recognition have multiple applications in biotechnology. Thus, protein frames with robust structural cores but adaptable surface loops are in continued demand. Recently, notable progress has been made in the characterization of Ig domains of intracellular origin, in particular the modular components of the titin myofilament. These Ig domains belong to the I (intermediate) type and are remarkably stable, highly soluble and undemanding to produce in the cytoplasm of Escherichia coli. Using the Z1 domain from titin as a representative, we show that the I-Ig fold tolerates drastic diversification of its CD loop, constituting an effective peptide display system. We examine the stability of CD-loop-grafted Z1-peptide chimeras using differential scanning fluorimetry, Fourier transform infrared spectroscopy and nuclear magnetic resonance, and demonstrate that the introduction of bioreactive affinity binders in this position does not compromise the structural integrity of the domain. Further, the binding efficiency of the exogenous peptide sequences in Z1 is analyzed using pull-down assays and isothermal titration calorimetry. We show that an internally grafted FLAG affinity tag is functional within the context of the fold, interacting with the anti-FLAG M2 antibody both in solution and in affinity gel. Together, these data reveal the potential of the intracellular Ig scaffold for targeted functionalization.

Relevance: 30.00%

Publisher:

Abstract:

Conventional liquid-liquid extraction (LLE) methods require large volumes of fluids to achieve the desired mass transfer of a solute, which is unsuitable for systems dealing with a low-volume or high-value product. An alternative is to scale down the process. Millifluidic devices share many of the benefits of microfluidic systems, including low fluid volumes, increased interfacial area-to-volume ratio, and predictability. A robust millifluidic device was created from acrylic, glass, and aluminum. The channel is lined with a hydrogel cured in the bottom half of the device channel; this hydrogel stabilizes co-current laminar flow of immiscible organic and aqueous phases. Mass transfer of the solute occurs across the interface of these contacting phases. Using a Y-junction, an aqueous emulsion is created in an organic phase. The emulsion travels through a length of tubing and then enters the co-current laminar flow device, where the emulsion is broken and each phase can be collected separately. Forming and then separating the emulsion increases the contact area between the organic and aqueous phases, and therefore the area over which mass transfer can occur. Using this design, 95% extraction efficiency was obtained, where 100% corresponds to equilibrium. Continued exploration of this LLE process will allow it to be optimized and, with better understanding, more accurately modeled. The system has the potential to scale up to the industrial level and provide the required efficient extraction with low fluid volumes and well-behaved flow.
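
A common way to express the reported efficiency, consistent with "100% corresponds to equilibrium", is the fractional approach to equilibrium. The function below is a small sketch; the variable names and the demo concentrations are assumptions, not values from the paper:

```python
def extraction_efficiency(c_feed, c_raffinate, c_equilibrium):
    """Fraction of the equilibrium-limited concentration change achieved:
    1.0 means the aqueous (raffinate) stream leaves the device in
    equilibrium with the organic (extract) phase."""
    return (c_feed - c_raffinate) / (c_feed - c_equilibrium)

# Illustrative numbers only: a feed at 1.0 g/L leaving at 0.10 g/L, with an
# equilibrium concentration of 0.053 g/L, gives an efficiency of about 0.95.
print(extraction_efficiency(1.0, 0.10, 0.053))
```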

Relevance: 30.00%

Publisher:

Abstract:

In the field of thrombosis and haemostasis, many preanalytical variables influence the results of coagulation assays, and measures should be taken to limit potential variation in results. To our knowledge, no paper describing the development and maintenance of a haemostasis biobank has previously been published. Our description of the biobank of the Swiss cohort of elderly patients with venous thromboembolism (SWITCO65+) is intended to facilitate the set-up of other biobanks in the field of thrombosis and haemostasis. SWITCO65+ is a multicentre cohort that prospectively enrolled consecutive patients aged ≥65 years with venous thromboembolism at nine Swiss hospitals from September 2009 to March 2012. Patients will be followed up until December 2013. The cohort includes a biobank with biological material from each participant taken at baseline and after 12 months of follow-up. Whole blood from all participants is assayed with a standard haematology panel, for which fresh samples are required. Two buffy-coat vials, one PAXgene Blood RNA System tube and one EDTA whole-blood sample are also collected at baseline for RNA/DNA extraction. Blood samples are processed and vialed within 1 h of collection and transported in batches to a central laboratory, where they are stored in ultra-low-temperature archives. All analyses of the same type are performed in batches in the same laboratory. Using multiple core laboratories increased the speed of sample analyses and reduced storage time. After recruiting, processing and analyzing the blood of more than 1,000 patients, we determined that the adopted methods and technologies were fit for purpose and robust.

Relevance: 30.00%

Publisher:

Abstract:

A wealth of genetic associations for cardiovascular and metabolic phenotypes in humans has been accumulating over the last decade, in particular a large number of loci derived from recent genome-wide association studies (GWAS). True complex-disease-associated loci often exert modest effects, so their delineation currently requires integration of diverse phenotypic data from large studies to ensure robust meta-analyses. We have designed a gene-centric 50K single nucleotide polymorphism (SNP) array to assess potentially relevant loci across a range of cardiovascular, metabolic and inflammatory syndromes. The array uses a "cosmopolitan" tagging approach to capture the genetic diversity across approximately 2,000 loci in populations represented in the HapMap and SeattleSNPs projects. The array content is informed by GWAS of vascular and inflammatory disease, expression quantitative trait loci implicated in atherosclerosis, pathway-based approaches and comprehensive literature searching. The custom flexibility of the array platform facilitated interrogation of loci at differing stringencies, according to a gene prioritization strategy that allows saturation of high-priority loci with a greater density of markers than existing GWAS tools, particularly in African HapMap samples. We also demonstrate that the IBC array can be used to complement GWAS, increasing coverage in high-priority CVD-related loci across all major HapMap populations. DNA from over 200,000 extensively phenotyped individuals will be genotyped with this array, and a significant portion of the generated data will be released into the academic domain, facilitating in silico replication attempts, analyses of rare variants and cross-cohort meta-analyses in diverse populations. These datasets will also enable more robust secondary analyses, such as explorations of alternative genetic models, epistasis and gene-environment interactions.

Relevance: 30.00%

Publisher:

Abstract:

Historical information is always relevant for clinical trial design, and, if incorporated in the analysis of a new trial, historical data allow the number of subjects to be reduced. This decreases costs and trial duration, facilitates recruitment, and may be more ethical. Yet, under prior-data conflict, an overly optimistic use of historical data may be inappropriate. We address this challenge by deriving a Bayesian meta-analytic-predictive prior from historical data, which is then combined with the new data. This prospective approach is equivalent to a meta-analytic-combined analysis of historical and new data if parameters are exchangeable across trials. The prospective Bayesian version requires a good approximation of the meta-analytic-predictive prior, which is not available analytically. We propose two- or three-component mixtures of standard priors, which allow for good approximations and, for the one-parameter exponential family, straightforward posterior calculations. Moreover, since one of the mixture components is usually vague, mixture priors will often be heavy-tailed and therefore robust. Further robustness, and a more rapid reaction to prior-data conflict, can be achieved by adding an extra weakly informative mixture component. Use of historical prior information is particularly attractive for adaptive trials, as the randomization ratio can then be changed in case of prior-data conflict. Both frequentist operating characteristics and posterior summaries for various data scenarios show that these designs have desirable properties. We illustrate the methodology for a phase II proof-of-concept trial with historical controls from four studies. Robust meta-analytic-predictive priors alleviate prior-data conflicts; they should encourage better and more frequent use of historical data in clinical trials.
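
For mixture priors in the one-parameter exponential family, the "straightforward posterior calculations" amount to conjugate updates of each component plus re-weighting by marginal likelihoods. A minimal sketch for a binomial endpoint with a Beta-mixture prior (the numbers in the example are purely illustrative, not from the paper):

```python
from math import lgamma, exp

def log_beta(a, b):
    """Log of the Beta function B(a, b)."""
    return lgamma(a) + lgamma(b) - lgamma(a + b)

def update_beta_mixture(weights, params, r, n):
    """Posterior of a mixture-of-Beta prior after observing r responders
    out of n (binomial likelihood). Each component updates conjugately;
    the mixture weights are re-weighted by each component's marginal
    likelihood (the binomial coefficient is common to all components and
    cancels in the normalization). Under prior-data conflict the vague
    'robust' component's marginal likelihood dominates, so the posterior
    automatically shifts weight toward it."""
    post_params = [(a + r, b + n - r) for a, b in params]
    log_marg = [log_beta(a + r, b + n - r) - log_beta(a, b) for a, b in params]
    unnorm = [w * exp(lm) for w, lm in zip(weights, log_marg)]
    total = sum(unnorm)
    return [u / total for u in unnorm], post_params

# Purely illustrative: an informative component Beta(12, 28) (mean 0.30)
# mixed with a vague Beta(1, 1), updated with a conflicting 14/20 responses.
w, p = update_beta_mixture([0.8, 0.2], [(12, 28), (1, 1)], r=14, n=20)
```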

Relevance: 30.00%

Publisher:

Abstract:

OBJECTIVE A number of factors limit the effectiveness of current aortic arch studies in assessing optimal neuroprotection strategies, including insufficient patient numbers, heterogeneous definitions of clinical variables, multiple technical strategies, inadequate reporting of surgical outcomes and a lack of collaborative effort. We have formed an international coalition of centres to provide more robust investigations into this topic. METHODS High-volume aortic arch centres were identified from the literature and contacted for recruitment. A Research Steering Committee of expert arch surgeons was convened to oversee the direction of the research. RESULTS The International Aortic Arch Surgery Study Group has been formed by 41 arch surgeons from 10 countries to better evaluate patient outcomes after aortic arch surgery. Several projects, including the establishment of a multi-institutional retrospective database, randomized controlled trials and a prospectively collected database, are currently underway. CONCLUSIONS Such a collaborative effort will herald a turning point in the surgical management of aortic arch pathologies and will provide better-powered analyses to assess the impact of varying surgical techniques on mortality and morbidity, identify predictors of neurological and operative risk, formulate and validate risk-prediction models, and review long-term survival and quality of life after arch surgery.

Relevance: 30.00%

Publisher:

Abstract:

BACKGROUND Sutureless aortic valve replacement (SU-AVR) is an innovative approach that shortens cardiopulmonary bypass and cross-clamp durations and may facilitate a minimally invasive approach. The evidence on its safety, efficacy, hemodynamic profile and potential complications consists largely of small observational studies and few comparative publications. METHODS Minimally invasive aortic valve surgery and high-volume SU-AVR centers were contacted for recruitment into a global collaborative coalition dedicated to sutureless valve research. A Research Steering Committee was formed to direct research and to support the mission of providing the registry evidence warranted for SU-AVR. RESULTS The International Valvular Surgery Study Group (IVSSG) was formed under the auspices of the Research Steering Committee, comprising 36 expert valvular surgeons from 27 major centers across the globe. IVSSG Sutureless Projects currently under way include the Retrospective and Prospective Phases of the SU-AVR International Registry (SU-AVR-IR). CONCLUSIONS The global pooling of data by the IVSSG Sutureless Projects will provide the robust clinical evidence required on the safety, efficacy and hemodynamic outcomes of SU-AVR.

Relevance: 30.00%

Publisher:

Abstract:

Several tests exist for the comparison of different groups in the randomized complete block design. However, there is a lack of robust estimators, on the original scale, for the location difference between one group and all the others. The relative marginal effects are commonly used in this situation, but because they are on a different scale they are harder for less experienced users to interpret and apply. In this paper, two nonparametric estimators for the comparison of one group against the others in the randomized complete block design are presented. Theoretical results such as asymptotic normality, consistency, translation invariance, scale preservation, unbiasedness, and median unbiasedness are derived. The finite-sample behavior of these estimators is examined by simulation of different scenarios. In addition, possible confidence intervals based on these estimators are discussed and their behavior likewise examined by simulation.
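
The paper's exact estimators are not specified in the abstract; as a hedged illustration of the kind of estimator meant (on the original scale, invariant to additive block effects), the sketch below computes a Hodges-Lehmann-type median of within-block differences. The data layout and function name are assumptions:

```python
from statistics import median

def location_difference(data, target=0):
    """Illustrative only (not the paper's estimator): estimate the location
    difference between one group and all the others in a randomized complete
    block design as the median, over blocks and competing groups, of
    within-block differences. Working on within-block differences keeps the
    estimate on the original scale and removes additive block effects.
    `data[b][g]` is the observation for block b, group g."""
    diffs = [row[target] - row[g]
             for row in data
             for g in range(len(row))
             if g != target]
    return median(diffs)

# Example: 3 blocks, 3 groups; group 0 runs roughly 2 units higher.
print(location_difference([[5.1, 3.0, 3.2], [6.0, 4.1, 3.9], [4.8, 2.7, 3.1]]))
```

A median of differences of this kind is translation invariant, scale preserving and median unbiased under symmetry, which matches the properties the abstract lists.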

Relevance: 30.00%

Publisher:

Abstract:

Overhead rigid conductor arrangements for current collection in railway traction have some advantages over other, more conventional, energy supply systems. They are simple, robust and easily maintained, not to mention their flexibility as to the required installation height, which makes them particularly suitable for use in subway infrastructures. Nevertheless, owing to the increasing speeds of new vehicles running on modern subway lines, a more efficient design is required for this kind of system. In this paper, the authors present a dynamic analysis of overhead conductor rail systems focused on the design of a new conductor profile with dynamic behaviour superior to that of the system currently in use. This means that either the running speed, which at present does not exceed 110 km/h, can be increased, or the distance between the rigid catenary supports can be increased, with the ensuing saving in installation costs. The study has been carried out using simulation techniques: the ANSYS programme for the finite element modelling and the SIMPACK programme for the elastic multibody systems analysis.

Relevance: 30.00%

Publisher:

Abstract:

Temperature is a first-class design concern in modern integrated circuits. The large increase in power densities brought by recent technology generations has led to the appearance of thermal gradients and hot spots during run-time operation. Temperature impacts several circuit parameters such as speed, cooling budgets, reliability and power consumption. To fight these negative effects, dynamic thermal management (DTM) techniques adapt the behavior of the chip based on the information of a monitoring system that provides run-time thermal information about the die surface. The field of on-chip temperature monitoring has drawn the attention of the scientific community in recent years and is the object of study of this thesis, which approaches the matter from different perspectives and levels, providing solutions to some of the most important issues.

The physical and circuit levels are covered with the design and characterization of two novel temperature sensors specially tailored for DTM purposes. The first sensor is based on a mechanism that obtains a pulse whose width depends on the variation of the leakage currents with temperature. In a nutshell, a circuit node is charged and subsequently left floating so that it discharges through the subthreshold currents of a transistor; the time the node takes to discharge is the width of the pulse. Since the pulse width displays an exponential dependence on temperature, the conversion into a digital word is realized by means of a logarithmic counter that performs both the time-to-digital conversion and the linearization of the output. The structure resulting from this combination of elements is implemented in a 0.35 µm technology and is characterized by a very small area, 10,250 µm², and very low power consumption, 1.05-65.5 nW at 5 samples/s; these figures outperformed all previous works at the time of its first publication and, at the time of publication of this thesis, they still surpass all previous implementations in the same technology node. Concerning accuracy, the sensor exhibits good linearity even without calibration, displaying a 3σ error of 1.97 °C, appropriate for DTM applications. As explained, the sensor is fully compatible with standard CMOS processes; this fact, along with its tiny area and power overhead, makes it especially suitable for integration in a DTM monitoring system with a collection of on-chip monitors distributed across the chip.

The exacerbated process fluctuations that come with recent technology nodes jeopardize the linearity of the first sensor. To overcome these problems, a new temperature-inferring technique is proposed. It also relies on the thermal dependence of the leakage currents used to discharge a floating node, but now the result comes from the ratio of two different measurements, in one of which a characteristic of the discharging transistor (the gate voltage) is altered. This ratio proves to be very robust against process variations and displays more than sufficient linearity with temperature: a 3σ error of 1.17 °C considering process variations and performing two-point calibration. The implementation of the sensing part based on this new technique raises several design issues, such as the generation of a process-variation-independent voltage reference, which are analyzed in depth in the thesis. For the time-to-digital conversion, the same digitization structure as in the first sensor is employed, and a completely new standard-cell library targeting low area and power overhead was built from scratch to implement the digitization part. Putting all the pieces together, the complete sensor is characterized by an ultra-low energy per conversion of 48-640 pJ and an area of 0.0016 mm²; this figure outperforms all previous works. To support this claim, a thorough comparison with over 40 sensor proposals from the scientific literature is performed.

Moving up to the system level, the third contribution centers on the modeling of a monitoring system consisting of a set of thermal sensors distributed across the chip. All previous works in the literature aim to maximize the accuracy of the system with the minimum number of monitors. In contrast, new quality metrics are introduced beyond the number of sensors: power consumption, sampling frequency, interconnection costs and the possibility of choosing among different types of monitors. The model is embedded in a simulated annealing algorithm that receives the thermal information of a system, its physical properties, area, power and interconnection constraints, and a collection of monitor types; the algorithm yields the selected monitor type, the number of monitors, their positions and the optimum sampling rate. The algorithm is tested on the Alpha 21364 processor under several constraint configurations to prove its validity. Compared with previous works in the literature, the modeling presented here is the most complete.

Finally, the last contribution targets the networking level: given an allocated set of temperature monitors, the problem of connecting them efficiently in terms of area and power is addressed. The first proposal in this field is the introduction of a new interconnection hierarchy level, the threshing level, between the monitors and the traditional peripheral buses, which applies data selectivity to reduce the amount of information sent to the central controller. The idea behind this new level is that in this kind of network most data are useless, because from the controller's viewpoint only a small amount of data (normally the extreme values) is of interest. To implement the new level, a single-wire monitoring network based on a time-domain signaling scheme is proposed that significantly reduces both the switching activity on the wire and the power consumption of the network. This scheme codes the information in the time domain, so the monitors' data arrive at the controller already ordered from the maximum to the minimum value. If the scheme is applied to monitors that perform time-to-digital conversion, the digitization resources can be shared in both time and space, yielding important savings in area and power. Two prototypes of complete monitoring systems are presented that significantly outperform previous works in terms of area and, especially, power consumption.
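
The linearization by the logarithmic counter and the process robustness of the ratio can be made explicit with a first-order subthreshold model; the display below is a sketch with assumed symbols (C: node capacitance, ΔV: discharge swing, n: subthreshold slope factor), not the thesis's exact expressions:

```latex
\[
  w(T) = \frac{C\,\Delta V}{I_\text{leak}(T)} \approx w_0\,e^{-T/T_0}
  \;\;\Longrightarrow\;\;
  n_\text{out} \propto -\log_2 w(T) \;\text{ is affine in } T,
\]
\[
  \frac{w(V_{g1},T)}{w(V_{g2},T)}
  = \frac{I_\text{leak}(V_{g2},T)}{I_\text{leak}(V_{g1},T)}
  \approx \exp\!\left(\frac{V_{g2}-V_{g1}}{n\,kT/q}\right)
\]
```

In the ratio, the process-dependent prefactor of the leakage current cancels, leaving a quantity that depends on temperature essentially through the thermal voltage kT/q, which is consistent with the reported robustness to process variations after two-point calibration.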