915 results for Simulated annealing


Relevância:

60.00%

Publicador:

Resumo:

By the end of the 19th century, geodesy had already contributed greatly to the knowledge of regional tectonics and fault movement through its ability to measure, at sub-centimetre precision, the relative positions of points on the Earth’s surface. Nowadays the systematic analysis of geodetic measurements in actively deforming regions therefore represents one of the most important tools in the study of crustal deformation over different temporal scales [e.g., Dixon, 1991]. This dissertation focuses on motion that can be observed geodetically with classical terrestrial position measurements, particularly triangulation and leveling observations. The work is divided into two sections: an overview of the principal methods for estimating long-term accumulation of elastic strain from terrestrial observations, and an overview of the principal methods for rigorously inverting surface coseismic deformation fields for source geometry, with tests on synthetic deformation data sets and applications in two different tectonically active regions of the Italian peninsula. For the analysis of long-term accumulation of elastic strain, triangulation data were available from a geodetic network across the Messina Straits area (southern Italy) for the period 1971–2004. From the resulting angle changes, the shear strain rates as well as the orientation of the principal axes of the strain rate tensor were estimated. The computed average annual shear strain rates for the period 1971–2004 are γ˙1 = 113.89 ± 54.96 nanostrain/yr and γ˙2 = -23.38 ± 48.71 nanostrain/yr, with the orientation of the most extensional strain (θ) at N140.80° ± 19.55°E. These results suggest that the first-order strain field of the area is dominated by extension in the direction perpendicular to the trend of the Straits, supporting the hypothesis that the Messina Straits could represent an area of active concentrated deformation. The orientation of θ agrees well with GPS deformation estimates calculated over a shorter time interval, is consistent with previous preliminary GPS estimates [D’Agostino and Selvaggi, 2004; Serpelloni et al., 2005], and is also similar to the direction of the 1908 (MW 7.1) earthquake slip vector [e.g., Boschi et al., 1989; Valensise and Pantosti, 1992; Pino et al., 2000; Amoruso et al., 2002]. Thus, the measured strain rate can be attributed to active extension across the Messina Straits, corresponding to a relative extension rate ranging between < 1 mm/yr and ~ 2 mm/yr within the portion of the Straits covered by the triangulation network. These results are consistent with the hypothesis that the Messina Straits is an important active geological boundary between the Sicilian and Calabrian domains, and support previous preliminary GPS-based estimates of strain rates across the Straits, which show that the active deformation is distributed over a larger area. Finally, preliminary dislocation modelling has shown that, although the current geodetic measurements do not resolve the geometry of the dislocation models, they resolve well the rate of interseismic strain accumulation across the Messina Straits and give useful information about the locking depth of the shear zone. Geodetic data, namely triangulation and leveling measurements of the 1976 Friuli (NE Italy) earthquake, were available for the inversion of coseismic source parameters.
From the observed angle and elevation changes, the source parameters of the seismic sequence were estimated in a joint inversion using an algorithm called “simulated annealing”. The computed optimal uniform-slip elastic dislocation model consists of a 30° north-dipping shallow (depth 1.30 ± 0.75 km) fault plane with an azimuth of 273°, accommodating reverse dextral slip of about 1.8 m. The hypocentral location and inferred fault plane of the main event are thus consistent with the activation of Periadriatic overthrusts or other related thrust faults such as the Gemona-Kobarid thrust. The geodetic data set therefore excludes the source solutions of Aoudia et al. [2000], Peruzza et al. [2002] and Poli et al. [2002], which consider the Susans-Tricesimo thrust to be the source of the May 6 event. The best-fit source model is instead more consistent with the solution of Pondrelli et al. [2001], who proposed the activation of other thrusts located further north of the Susans-Tricesimo thrust, probably on Periadriatic-related thrust faults. The main characteristics of the leveling and triangulation data are fit by the optimal single-fault model; that is, these results are consistent with a first-order rupture process characterized by progressive rupture of a single fault system. A single uniform-slip fault model does not, however, seem to reproduce some minor complexities of the observations, and some residual signals not modelled by the optimal single-fault-plane solution were observed. In particular, the single-fault-plane model does not reproduce some minor features of the leveling deformation field along route 36 south of the main uplift peak; that is, a second fault seems to be necessary to reproduce these residual signals. By assuming movement along a mapped thrust located south of the inferred optimal single-plane solution, the residual signal has been successfully modelled. In summary, the inversion results presented in this Thesis are consistent with the activation of Periadriatic-related thrusts for the main events of the sequence, and with a minor role of the southward thrust systems of the middle Tagliamento plain.
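The abstract names simulated annealing as the joint-inversion engine but gives no algorithmic detail. The following minimal Python sketch shows how such an inversion loop is typically structured; the parameter set, the bounds, the placeholder forward model predict_deformation and the cooling schedule are illustrative assumptions, not the thesis's actual implementation.

import math
import random

# Hypothetical fault parameters: (depth_km, dip_deg, azimuth_deg, slip_m)
BOUNDS = [(0.5, 10.0), (10.0, 80.0), (0.0, 360.0), (0.1, 3.0)]

def predict_deformation(params):
    # Placeholder forward model: a real inversion would evaluate an elastic
    # dislocation model at the benchmark positions.
    depth, dip, azimuth, slip = params
    return [slip * math.exp(-depth / 5.0) * math.cos(math.radians(dip))] * 10

def misfit(params, observed):
    pred = predict_deformation(params)
    return sum((p - o) ** 2 for p, o in zip(pred, observed))

def simulated_annealing(observed, n_iter=20000, t0=1.0, cooling=0.9995):
    x = [random.uniform(lo, hi) for lo, hi in BOUNDS]
    fx, t = misfit(x, observed), t0
    best, fbest = x[:], fx
    for _ in range(n_iter):
        # Perturb one randomly chosen parameter within its bounds.
        i = random.randrange(len(x))
        lo, hi = BOUNDS[i]
        cand = x[:]
        cand[i] = min(hi, max(lo, x[i] + random.gauss(0.0, 0.05 * (hi - lo))))
        fc = misfit(cand, observed)
        # Metropolis acceptance: always take improvements, sometimes take
        # worse models so the search can escape local minima.
        if fc < fx or random.random() < math.exp((fx - fc) / t):
            x, fx = cand, fc
            if fc < fbest:
                best, fbest = cand[:], fc
        t *= cooling
    return best, fbest

observed = [0.4] * 10   # synthetic "observations", for illustration only
print(simulated_annealing(observed))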

Relevância:

60.00%

Publicador:

Resumo:

One of the most interesting challenges of the coming years will be the automation of air space systems. This process will involve different aspects such as Air Traffic Management, Aircraft and Airport Operations, and Guidance and Navigation Systems. The use of UAS (Uninhabited Aerial Systems) for civil missions will be one of the most important steps in this automation process. In civil air space, Air Traffic Controllers (ATC) manage the air traffic, ensuring that a minimum separation between the controlled aircraft is always provided. For this purpose ATCs use several operative avoidance techniques such as holding patterns or rerouting. The use of UAS in this context will require the definition of strategies for a common management of piloted and unpiloted air traffic that allow the UAS to self-separate. As a first employment in civil air space we consider a UAS surveillance mission that consists of departing from a ground base, taking pictures over a set of mission targets and coming back to the same ground base. Throughout the mission a set of piloted aircraft fly in the same airspace, and thus the UAS has to self-separate using the ATC avoidance techniques mentioned above. We consider two objectives: the first is the minimization of the impact of the air traffic on the mission; the second is the minimization of the impact of the mission on the air traffic. A particular version of the well-known Travelling Salesman Problem (TSP), called the Time-Dependent TSP (TDTSP), has been studied to deal with traffic problems in big urban areas. Its basic idea is that the cost of the route between two clients depends on the period of the day in which it is traversed. This thesis argues that the same idea can be applied to air traffic as well, using a convenient time horizon compatible with aircraft operations. The cost of a UAS sub-route will depend on the air traffic it will meet when starting that route at a specific moment, and consequently on the avoidance maneuver it will use to avoid that conflict. Conflict avoidance is a topic that has been addressed in past years through different approaches. In this thesis we propose a new approach based on the use of ATC operative techniques, which makes it possible both to model the UAS problem within a TDTSP framework and to adopt an Air Traffic Management perspective. Starting from this kind of mission, the problem of UAS insertion in civil air space is formalized as the UAS Routing Problem (URP). To this end we introduce a new structure called the Conflict Graph, which makes it possible to model the avoidance maneuvers and to define the arc cost as a function of the departure time. Two Integer Linear Programming formulations of the problem are proposed. The first is based on a TDTSP formulation that, unfortunately, is weaker than the TSP formulation. Thus a new formulation based on a TSP variation that uses specific penalties to model the holdings is proposed. Different algorithms are presented: exact algorithms, simple heuristics used as upper bounds on the number of time steps used, and metaheuristic algorithms such as a Genetic Algorithm and Simulated Annealing. Finally, an air traffic scenario has been simulated using real air traffic data in order to test our algorithms. Graphic tools have been used to represent the Milano Linate air space and its air traffic during different days. Such data have been provided by ENAV S.p.A. (the Italian Agency for Air Navigation Services).
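The Time-Dependent TSP idea described above amounts to letting the cost of each leg depend on the time step at which the leg is started. A minimal sketch of how a tour could be evaluated under such time-dependent arc costs is given below; the cost table, the time discretization and the clamping rule are illustrative assumptions, not the thesis's Conflict Graph model.

# Time-dependent tour evaluation: the cost of arc (i, j) depends on the
# time step at which the UAS departs from i (for instance because of a
# holding pattern needed to avoid traffic present at that moment).

# Illustrative cost table: cost[(i, j)][t] = duration of leg i->j when
# started at time step t (arbitrary values for the example).
cost = {
    (0, 1): [4, 4, 6, 9],   # this leg gets slower later in the horizon
    (1, 2): [3, 5, 5, 7],
    (2, 0): [2, 2, 8, 8],
}

def tour_cost(tour, depart=0):
    """Total duration of a closed tour when arc costs depend on departure time."""
    t = depart
    for i, j in zip(tour, tour[1:] + tour[:1]):
        steps = cost[(i, j)]
        t += steps[min(t, len(steps) - 1)]   # clamp to the last time slice
    return t - depart

print(tour_cost([0, 1, 2]))   # 0->1 at t=0, 1->2 at t=4 (clamped), 2->0 at t=11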

Relevância:

60.00%

Publicador:

Resumo:

In this dissertation, the methods of homology modelling and molecular dynamics were used to describe the structure and behaviour of proteins in solution. Small-angle X-ray scattering was used to verify the predictions produced with these computational methods. For alpha-hemolysin, a toxin of Staphylococcus aureus that can form a heptameric pore, the monomeric structure of the protein in solution was described for the first time. Homology modelling based on related proteins whose monomeric structures were known was used to predict the monomeric structure of the toxin. The flexibility of structural elements observed in a molecular dynamics simulation could be correlated with the functionality of the protein: intrinsic flexibility enables the protein to carry out the conformational change to the pore after assembly. Small-angle X-ray scattering demonstrated the differences between the monomeric structure and the structures of the related proteins and supports the structure proposed here. Moreover, work on a mutant that is arrested in a so-called prepore conformation and is unable to form a pore showed that this transition state sits with its rotational axis perpendicular to the membrane. A geometric analysis proves that it is sterically possible to reach the pore conformation starting from this conformation. An energetic and kinetic analysis of this conformational change is still pending. A further part of the work deals with the conformational changes of hemocyanins. These were followed experimentally by small-angle X-ray scattering. Conformational changes associated with oxygenation could be described for the 24-meric hemocyanins of Eurypelma californicum and Pandinus imperator. For a number of hemocyanins it has been shown that they can develop tyrosinase activity under the influence of the agent SDS. The conformational change of the hemocyanins of E. californicum and P. imperator upon activation to tyrosinase by SDS was confirmed experimentally, and the arrangement of the hemocyanin dodecamers was found to be essential for the activation. Together with other work, the relaxation of the structure under the influence of SDS and the steric effect on the linking subunits b & c are therefore considered the likely cause of the activation to tyrosinase. Custom software for so-called rigid body modelling on the basis of small-angle X-ray scattering data was written in order to structurally interpret the scattering data of the hexameric hemocyanin of Palinurus elephas and Palinurus argus under the influence of the effectors urate and caffeine. The software is the first implementation of a Monte Carlo algorithm for rigid body modelling. It supports two variants of the algorithm: in combination with simulated annealing, probable conformations can be filtered out, and in a subsequent systematic analysis a conformation can be described geometrically. Alternatively, a further, pure Monte Carlo algorithm is able to describe the conformation as a density distribution.

Relevância:

60.00%

Publicador:

Resumo:

Classic group recommender systems focus on providing suggestions for a fixed group of people. Our work tries to give an inside look at designing a new recommender system that is capable of making suggestions for a sequence of activities, dividing people into subgroups, in order to boost overall group satisfaction. However, this idea increases problem complexity in more dimensions and creates great challenges for the algorithm's performance. To assess effectiveness, given the enhanced complexity and the need for precise problem solving, we implemented an experimental system using data collected from a variety of web services concerning the city of Paris. The system recommends activities to a group of users using two different approaches: Local Search and Constraint Programming. The general results show that the number of subgroups can significantly influence the Constraint Programming approach's computational time and efficacy. Generally, Local Search can find results much more quickly than Constraint Programming. Over a lengthy period of time, Local Search performs better than Constraint Programming, with similar final results.

Relevância:

60.00%

Publicador:

Resumo:

Hypersensitivity dermatitides (HD) are commonly seen in cats, and they are usually caused by environmental, food and/or flea allergens. Affected cats normally present with one of the following clinical reaction patterns: head and neck excoriations, usually symmetrical self-induced alopecia, eosinophilic skin lesions or miliary dermatitis. Importantly, none of these clinical presentations is considered to be pathognomonic for HD skin diseases, and the diagnosis of HD is usually based on the exclusion of other pruritic diseases and on a positive response to therapy. The objectives of this study were to propose sets of criteria for the diagnosis of nonflea-induced HD (NFHD). We recruited 501 cats with pruritus and skin lesions and compared clinical parameters between cats with NFHD (encompassing those with nonflea, nonfood HD and those with food HD), flea HD and other pruritic conditions. Using simulated annealing techniques, we established two sets of proposed criteria for the following two different clinical situations: (i) the diagnosis of NFHD in a population of pruritic cats; and (ii) the diagnosis of NFHD after exclusion of cats with flea HD. These criteria sets were associated with good sensitivity and specificity and may be useful for homogeneity of enrolment in clinical trials and to evaluate the probability of diagnosis of NFHD in clinical practice. Finally, these criteria were not useful to differentiate cats with NFHD from those with food HD.

Relevância:

60.00%

Publicador:

Resumo:

Successful software systems cope with complexity by organizing classes into packages. However, a particular organization may be neither straightforward nor obvious for a given developer. As a consequence, classes can be misplaced, leading to duplicated code and ripple effects, with minor changes affecting multiple packages. We claim that contextual information is the key to rearchitecting a system. Exploiting contextual information, we propose a technique to detect misplaced classes by analyzing how client packages access the classes of a given provider package. We define locality as a measure of the degree to which classes reused by common clients appear in the same package. We then use locality to guide a simulated annealing algorithm to obtain optimal placements of classes in packages. The result is the identification of classes that are candidates for relocation. We apply the technique to three applications and validate the usefulness of our approach via developer interviews.
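The abstract defines locality only informally. The sketch below gives one possible, hypothetical reading of such a measure (the paper's exact formula may differ): for every client package, it computes the fraction of the provider classes accessed by that client which lie in the client's dominant provider package, and averages over clients.

from collections import Counter

# class_package: maps each provider class to the package it currently lives in.
# accesses: maps each client package to the set of provider classes it uses.

def locality(class_package, accesses):
    """Fraction of each client's accessed classes found in that client's
    dominant provider package, averaged over clients (illustrative definition)."""
    scores = []
    for client, classes in accesses.items():
        if not classes:
            continue
        packages = Counter(class_package[c] for c in classes)
        scores.append(packages.most_common(1)[0][1] / len(classes))
    return sum(scores) / len(scores) if scores else 0.0

class_package = {"A": "p1", "B": "p1", "C": "p2"}
accesses = {"client1": {"A", "B"}, "client2": {"A", "C"}}
print(locality(class_package, accesses))   # 1.0 and 0.5 -> 0.75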

Relevância:

60.00%

Publicador:

Resumo:

Wind energy has been one of the fastest-growing sectors of the nation’s renewable energy portfolio for the past decade, and the same tendency is projected for the upcoming years given the aggressive governmental policies for the reduction of fossil fuel dependency. The so-called Horizontal Axis Wind Turbine (HAWT) technologies have shown great technological promise and outstanding commercial penetration. Given this wide acceptance, the size of wind turbines has increased exponentially over time. However, safety and economic concerns have emerged as a result of new design tendencies toward massive-scale wind turbine structures with high slenderness ratios and complex shapes, typically located in remote areas (e.g. offshore wind farms). In this regard, safe operation requires not only first-hand information on the actual structural dynamic conditions under aerodynamic action, but also a deep understanding of the environmental factors in which these multibody rotating structures operate. Given the cyclo-stochastic patterns of the wind loading exerting pressure on a HAWT, a probabilistic framework is appropriate to characterize the risk of failure in terms of resistance and serviceability conditions at any given time. Furthermore, sources of uncertainty such as material imperfections, buffeting and flutter, aeroelastic damping, gyroscopic effects and turbulence, among others, call for a more sophisticated mathematical framework that can properly handle all these sources of indetermination. The modeling complexity that arises from these characterizations demands a data-driven experimental validation methodology to calibrate and corroborate the model. For this aim, System Identification (SI) techniques offer a spectrum of well-established numerical methods appropriate for stationary, deterministic, data-driven schemes, capable of predicting actual dynamic states (eigenrealizations) of traditional time-invariant dynamic systems. As a consequence, a modified data-driven SI metric based on the so-called Subspace Realization Theory is proposed, now adapted for stochastic, non-stationary and time-varying systems, as is the case for the complex aerodynamics of HAWTs. Simultaneously, this investigation explores the characterization of the turbine loading and response envelopes for critical failure modes of the structural components the wind turbine is made of. In the long run, both the aerodynamic framework (theoretical model) and the system identification (experimental model) will be merged in a numerical engine formulated as a search algorithm for model updating, based on the Adaptive Simulated Annealing (ASA) process. This iterative engine is based on a set of function minimizations computed with a metric called the Modal Assurance Criterion (MAC). In summary, the Thesis is composed of four major parts: (1) development of an analytical aerodynamic framework that predicts interacting wind-structure stochastic loads on wind turbine components; (2) development of a novel tapered-swept-curved Spinning Finite Element (SFE) that includes damped gyroscopic effects and axial-flexural-torsional coupling; (3) a novel data-driven structural health monitoring (SHM) algorithm via stochastic subspace identification methods; and (4) a numerical search (optimization) engine based on ASA and MAC capable of updating the SFE aerodynamic model.
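The Modal Assurance Criterion driving the ASA model-updating engine compares an experimentally identified mode shape φ_e with its analytical counterpart φ_a; in its standard form it is the normalized squared projection

\[
\mathrm{MAC}(\phi_a,\phi_e) \;=\; \frac{\left|\phi_a^{\mathsf T}\phi_e\right|^2}{\left(\phi_a^{\mathsf T}\phi_a\right)\left(\phi_e^{\mathsf T}\phi_e\right)},
\]

which equals 1 for perfectly correlated mode shapes and 0 for orthogonal ones (the subscripts a and e are used here only for illustration); the ASA search drives the updated model toward MAC values close to 1.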

Relevância:

60.00%

Publicador:

Resumo:

Heuristic optimization algorithms are of great importance for reaching solutions to various real-world problems. These algorithms have a wide range of applications such as cost reduction, artificial intelligence, and medicine. By the term cost, we mean, for instance, the value of a function of several independent variables. Often, when dealing with engineering problems, we want to minimize the value of a function in order to achieve an optimum, or to maximize another parameter that increases as the cost (the value of this function) decreases. Heuristic cost-reduction algorithms work by finding the optimum values of the independent variables for which the value of the function (the “cost”) is a minimum. There is an abundance of heuristic cost-reduction algorithms to choose from. We start with a discussion of various optimization algorithms such as memetic algorithms, force-directed placement, and evolution-based algorithms. Following this initial discussion, we take up the working of three algorithms and implement them in MATLAB. The focus of this report is to provide detailed information on the working of three different heuristic optimization algorithms, and to conclude with a comparative study on the performance of these algorithms when implemented in MATLAB. In this report, the three algorithms taken into consideration are the non-adaptive simulated annealing algorithm, the adaptive simulated annealing algorithm, and the random restart hill climbing algorithm. The algorithms are heuristic in nature; that is, the solutions they achieve may not be the best of all possible solutions, but they provide a means to reach a reasonably good solution quickly without taking an indefinite amount of time.
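Of the three algorithms compared, random-restart hill climbing is the simplest to state. The sketch below illustrates the idea in Python; the report itself works in MATLAB, and the test function, step size and stopping rule used here are arbitrary placeholders.

import random

def cost(x):
    # Arbitrary test function of several independent variables.
    return sum((xi - 1.0) ** 2 + 0.5 * abs(xi) for xi in x)

def hill_climb(x, step=0.1, max_stall=200):
    fx, stall = cost(x), 0
    while stall < max_stall:
        cand = [xi + random.uniform(-step, step) for xi in x]
        fc = cost(cand)
        if fc < fx:          # greedy: only accept improving moves
            x, fx, stall = cand, fc, 0
        else:
            stall += 1
    return x, fx

def random_restart_hill_climbing(dim=3, restarts=20):
    best, fbest = None, float("inf")
    for _ in range(restarts):
        start = [random.uniform(-5.0, 5.0) for _ in range(dim)]
        x, fx = hill_climb(start)
        if fx < fbest:       # keep the best local optimum over all restarts
            best, fbest = x, fx
    return best, fbest

print(random_restart_hill_climbing())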

Relevância:

60.00%

Publicador:

Resumo:

Radiomics is the high-throughput extraction and analysis of quantitative image features. For non-small cell lung cancer (NSCLC) patients, radiomics can be applied to standard-of-care computed tomography (CT) images to improve tumor diagnosis, staging, and response assessment. The first objective of this work was to show that CT image features extracted from pre-treatment NSCLC tumors could be used to predict tumor shrinkage in response to therapy. This is important since tumor shrinkage is an important cancer treatment endpoint that is correlated with the probability of disease progression and overall survival. Accurate prediction of tumor shrinkage could also lead to individually customized treatment plans. To accomplish this objective, 64 stage NSCLC patients with similar treatments were all imaged using the same CT scanner and protocol. Quantitative image features were extracted, and principal component regression with simulated annealing subset selection was used to predict shrinkage. Cross-validation and permutation tests were used to validate the results. The optimal model gave a strong correlation between the observed and predicted shrinkages. The second objective of this work was to identify sets of NSCLC CT image features that are reproducible, non-redundant, and informative across multiple machines. Feature sets with these qualities are needed for NSCLC radiomics models to be robust to machine variation and spurious correlation. To accomplish this objective, test-retest CT image pairs were obtained from 56 NSCLC patients imaged on three CT machines from two institutions. For each machine, quantitative image features with concordance correlation coefficient values greater than 0.90 were considered reproducible. Multi-machine reproducible feature sets were created by taking the intersection of the individual-machine reproducible feature sets. Redundant features were removed through hierarchical clustering. The findings showed that image feature reproducibility and redundancy depended on both the CT machine and the CT image type (average cine 4D-CT imaging vs. end-exhale cine 4D-CT imaging vs. helical inspiratory breath-hold 3D CT). For each image type, a set of cross-machine reproducible, non-redundant, and informative image features was identified. Compared to end-exhale 4D-CT and breath-hold 3D-CT, average 4D-CT derived image features showed superior multi-machine reproducibility and are the best candidates for clinical correlation.
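The reproducibility screen relies on the concordance correlation coefficient between the test and retest values of each feature; in its usual (Lin) form it reads

\[
\rho_c \;=\; \frac{2\rho\,\sigma_x\sigma_y}{\sigma_x^2+\sigma_y^2+(\mu_x-\mu_y)^2},
\]

where ρ is the Pearson correlation and (μ_x, σ_x), (μ_y, σ_y) are the means and standard deviations of the two imaging sessions; features with ρ_c > 0.90 were considered reproducible.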

Relevância:

60.00%

Publicador:

Resumo:

La temperatura es una preocupación que juega un papel protagonista en el diseño de circuitos integrados modernos. El importante aumento de las densidades de potencia que conllevan las últimas generaciones tecnológicas ha producido la aparición de gradientes térmicos y puntos calientes durante el funcionamiento normal de los chips. La temperatura tiene un impacto negativo en varios parámetros del circuito integrado como el retardo de las puertas, los gastos de disipación de calor, la fiabilidad, el consumo de energía, etc. Con el fin de luchar contra estos efectos nocivos, las técnicas de gestión dinámica de la temperatura (DTM) adaptan el comportamiento del chip en función de la información que proporciona un sistema de monitorización que mide en tiempo de ejecución la información térmica de la superficie del dado. El campo de la monitorización de la temperatura en el chip ha llamado la atención de la comunidad científica en los últimos años y es el objeto de estudio de esta tesis. Esta tesis aborda la temática de control de la temperatura en el chip desde diferentes perspectivas y niveles, ofreciendo soluciones a algunos de los temas más importantes. Los niveles físico y circuital se cubren con el diseño y la caracterización de dos nuevos sensores de temperatura especialmente diseñados para los propósitos de las técnicas DTM. El primer sensor está basado en un mecanismo que obtiene un pulso de anchura variable dependiente de la relación de las corrientes de fuga con la temperatura. De manera resumida, se carga un nodo del circuito y posteriormente se deja flotando de tal manera que se descarga a través de las corrientes de fugas de un transistor; el tiempo de descarga del nodo es la anchura del pulso. Dado que la anchura del pulso muestra una dependencia exponencial con la temperatura, la conversión a una palabra digital se realiza por medio de un contador logarítmico que realiza tanto la conversión tiempo a digital como la linealización de la salida. La estructura resultante de esta combinación de elementos se implementa en una tecnología de 0,35 µm. El sensor ocupa un área muy reducida, 10.250 nm², y consume muy poca energía, 1.05-65.5 nW a 5 muestras/s; estas cifras superaron todos los trabajos previos en el momento en que se publicó por primera vez y, en el momento de la publicación de esta tesis, superan a todas las implementaciones anteriores fabricadas en el mismo nodo tecnológico. En cuanto a la precisión, el sensor ofrece una buena linealidad, incluso sin calibrar; se obtiene un error 3σ de 1,97 °C, adecuado para tratar con las aplicaciones de DTM. Como se ha explicado, el sensor es completamente compatible con los procesos de fabricación CMOS; este hecho, junto con sus valores reducidos de área y consumo, lo hacen especialmente adecuado para la integración en un sistema de monitorización de DTM con un conjunto de monitores empotrados distribuidos a través del chip. Las crecientes incertidumbres de proceso asociadas a los últimos nodos tecnológicos comprometen las características de linealidad de nuestra primera propuesta de sensor. Con el objetivo de superar estos problemas, proponemos una nueva técnica para obtener la temperatura. La nueva técnica también está basada en las dependencias térmicas de las corrientes de fuga que se utilizan para descargar un nodo flotante. La novedad es que ahora la medida viene dada por el cociente de dos medidas diferentes, en una de las cuales se altera una característica del transistor de descarga: la tensión de puerta.
Este cociente resulta ser muy robusto frente a variaciones de proceso y, además, la linealidad obtenida cumple ampliamente los requisitos impuestos por las políticas DTM (error 3σ de 1,17 °C considerando variaciones del proceso y calibrando en dos puntos). La implementación de la parte sensora de esta nueva técnica implica varias consideraciones de diseño, tales como la generación de una referencia de tensión independiente de variaciones de proceso, que se analizan en profundidad en la tesis. Para la conversión tiempo-a-digital, se emplea la misma estructura de digitalización que en el primer sensor. Para la implementación física de la parte de digitalización, se ha construido una biblioteca de células estándar completamente nueva orientada a la reducción de área y consumo. El sensor resultante de la unión de todos los bloques se caracteriza por una energía por muestra ultra baja (48-640 pJ) y un área diminuta de 0,0016 mm², cifra que mejora todos los trabajos previos. Para probar esta afirmación, se realiza una comparación exhaustiva con más de 40 propuestas de sensores en la literatura científica. Subiendo el nivel de abstracción al sistema, la tercera contribución se centra en el modelado de un sistema de monitorización que consiste en un conjunto de sensores distribuidos por la superficie del chip. Todos los trabajos anteriores de la literatura tienen como objetivo maximizar la precisión del sistema con el mínimo número de monitores. Como novedad, en nuestra propuesta se introducen nuevos parámetros de calidad aparte del número de sensores: también se consideran el consumo de energía, la frecuencia de muestreo, los costes de interconexión y la posibilidad de elegir diferentes tipos de monitores. El modelo se introduce en un algoritmo de recocido simulado que recibe la información térmica de un sistema, sus propiedades físicas, limitaciones de área, potencia e interconexión y una colección de tipos de monitor; el algoritmo proporciona el tipo seleccionado de monitor, el número de monitores, su posición y la velocidad de muestreo óptima. Para probar la validez del algoritmo, se presentan varios casos de estudio para el procesador Alpha 21364 considerando distintas restricciones. En comparación con otros trabajos previos en la literatura, el modelo que aquí se presenta es el más completo. Finalmente, la última contribución se dirige al nivel de red: partiendo de un conjunto de monitores de temperatura de posiciones conocidas, nos concentramos en resolver el problema de la conexión de los sensores de una forma eficiente en área y consumo. Nuestra primera propuesta en este campo es la introducción de un nuevo nivel en la jerarquía de interconexión, el nivel de trillado (o threshing en inglés), entre los monitores y los buses tradicionales de periféricos. En este nuevo nivel se aplica selectividad de datos para reducir la cantidad de información que se envía al controlador central. La idea detrás de este nuevo nivel es que en este tipo de redes la mayoría de los datos es inútil, porque desde el punto de vista del controlador sólo una pequeña cantidad de datos (normalmente sólo los valores extremos) es de interés. Para cubrir el nuevo nivel, proponemos una red de monitorización mono-conexión que se basa en un esquema de señalización en el dominio de tiempo. Este esquema reduce significativamente tanto la actividad de conmutación sobre la conexión como el consumo de energía de la red. Otra ventaja de este esquema es que los datos de los monitores llegan directamente ordenados al controlador.
Si este tipo de señalización se aplica a sensores que realizan conversión tiempo-a-digital, se puede obtener compartición de recursos de digitalización tanto en tiempo como en espacio, lo que supone un importante ahorro de área y consumo. Finalmente, se presentan dos prototipos de sistemas de monitorización completos que superan de manera significativa las características de trabajos anteriores en términos de área y, especialmente, consumo de energía.

Abstract

Temperature is a first-class design concern in modern integrated circuits. The important increase in power densities associated with recent technology evolutions has led to the appearance of thermal gradients and hot spots during run-time operation. Temperature impacts several circuit parameters such as speed, cooling budgets, reliability, power consumption, etc. In order to fight against these negative effects, dynamic thermal management (DTM) techniques adapt the behavior of the chip relying on the information of a monitoring system that provides run-time thermal information of the die surface. The field of on-chip temperature monitoring has drawn the attention of the scientific community in recent years and is the object of study of this thesis. This thesis approaches the matter of on-chip temperature monitoring from different perspectives and levels, providing solutions to some of the most important issues. The physical and circuital levels are covered with the design and characterization of two novel temperature sensors specially tailored for DTM purposes. The first sensor is based upon a mechanism that obtains a pulse of varying width based on the dependence of the leakage currents on the temperature. In a nutshell, a circuit node is charged and subsequently left floating so that it discharges through the subthreshold currents of a transistor; the time the node takes to discharge is the width of the pulse. Since the width of the pulse displays an exponential dependence on the temperature, the conversion into a digital word is realized by means of a logarithmic counter that performs both the time-to-digital conversion and the linearization of the output. The structure resulting from this combination of elements is implemented in a 0.35 µm technology and is characterized by a very reduced area, 10250 nm², and power consumption, 1.05-65.5 nW at 5 samples/s; these figures outperformed all previous works by the time it was first published and still, by the time of the publication of this thesis, they outperform all previous implementations in the same technology node. Concerning the accuracy, the sensor exhibits good linearity; even without calibration it displays a 3σ error of 1.97 °C, appropriate to deal with DTM applications. As explained, the sensor is completely compatible with standard CMOS processes; this fact, along with its tiny area and power overhead, makes it especially suitable for integration in a DTM monitoring system with a collection of on-chip monitors distributed across the chip. The exacerbated process fluctuations that come along with recent technology nodes jeopardize the linearity characteristics of the first sensor. In order to overcome these problems, a new temperature-inferring technique is proposed. In this case, we also rely on the thermal dependencies of the leakage currents used to discharge a floating node, but now the result comes from the ratio of two different measures, in one of which we alter a characteristic of the discharging transistor: the gate voltage.
This ratio proves to be very robust against process variations and displays more than sufficient linearity with temperature (a 3σ error of 1.17 °C considering process variations and performing two-point calibration). The implementation of the sensing part based on this new technique involves several issues, such as the generation of a process-variation-independent voltage reference, which are analyzed in depth in the thesis. In order to perform the time-to-digital conversion, we employ the same digitization structure used by the former sensor. A completely new standard cell library targeting low area and power overhead is built from scratch to implement the digitization part. Putting all the pieces together, we achieve a complete sensor system characterized by an ultra-low energy per conversion of 48-640 pJ and an area of 0.0016 mm²; this figure outperforms all previous works. To prove this statement, we perform a thorough comparison with over 40 works from the scientific literature. Moving up to the system level, the third contribution is centered on the modeling of a monitoring system consisting of a set of thermal sensors distributed across the chip. All previous works from the literature target maximizing the accuracy of the system with the minimum number of monitors. In contrast, we introduce new metrics of quality apart from just the number of sensors; we consider the power consumption, the sampling frequency, the possibility of considering different types of monitors, and the interconnection costs. The model is introduced in a simulated annealing algorithm that receives the thermal information of a system, its physical properties, area, power and interconnection constraints, and a collection of monitor types; the algorithm yields the selected type of monitor, the number of monitors, their positions and the optimum sampling rate. We test the algorithm with the Alpha 21364 processor under several constraint configurations to prove its validity. When compared to other previous works in the literature, the modeling presented here is the most complete. Finally, the last contribution targets the networking level: given an allocated set of temperature monitors, we focus on solving the problem of connecting them in a way that is efficient in terms of area and power. Our first proposal in this area is the introduction of a new interconnection hierarchy level, the threshing level, in between the monitors and the traditional peripheral buses, which applies data selectivity to reduce the amount of information that is sent to the central controller. The idea behind this new level is that in this kind of network most data are useless, because from the controller viewpoint just a small amount of data (normally the extreme values) is of interest. To cover the new interconnection level, we propose a single-wire monitoring network based on a time-domain signaling scheme that significantly reduces both the switching activity over the wire and the power consumption of the network. This scheme codes the information in the time domain and allows an ordered list of values, from the maximum to the minimum, to be obtained in a straightforward way. If the scheme is applied to monitors that employ TDC, digitization resource sharing is achieved, producing an important saving in area and power consumption. Two prototypes of complete monitoring systems are presented; they significantly outperform previous works in terms of area and, especially, power consumption.
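The role of the logarithmic counter in both sensors can be summarised with a simple, hedged model: if the subthreshold leakage discharging the floating node grows roughly exponentially with temperature, the discharge time (the pulse width) shrinks roughly exponentially with temperature, and taking its logarithm gives an output that is approximately linear in T. Schematically, under these assumptions,

\[
t_{\mathrm{pulse}} \approx \frac{C\,\Delta V}{I_{\mathrm{leak}}(T)}, \qquad
I_{\mathrm{leak}}(T) \approx I_0\,e^{\alpha T}
\;\Rightarrow\;
\log t_{\mathrm{pulse}} \approx \log\frac{C\,\Delta V}{I_0} \;-\; \alpha T,
\]

where C is the node capacitance, ΔV the discharged voltage span and α an empirical temperature coefficient (all three introduced here only for illustration). In the second sensor, taking the ratio of two such measurements at different gate voltages largely cancels the process-dependent prefactor, which is consistent with the robustness to process variations reported above.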

Relevância:

60.00%

Publicador:

Resumo:

It is known that the Minimum Weight Triangulation (MWT) problem is NP-hard. The complexity of the Minimum Weight Pseudo-Triangulation (MWPT) problem is unknown, yet it is suspected to also be NP-hard. We therefore focused on the development of approximate algorithms to find high-quality triangulations and pseudo-triangulations of minimum weight. In this work we propose two metaheuristics to solve these problems: Ant Colony Optimization (ACO) and Simulated Annealing (SA). For the experimental study we created a set of instances for the MWT and MWPT problems, since no reference to benchmarks for these problems was found in the literature. Through experimental evaluation, we assess the applicability of the ACO and SA metaheuristics to the MWT and MWPT problems. These results are compared with those obtained from the application of deterministic algorithms to the same problems (the Delaunay Triangulation for MWT, and Greedy algorithms for MWT and MWPT, respectively).
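For reference, the weight minimised in both problems is the total Euclidean edge length of the (pseudo-)triangulation,

\[
w(T) \;=\; \sum_{(p,q)\,\in\,E(T)} \lVert p - q\rVert_2 ,
\]

so the ACO and SA metaheuristics explore the space of valid (pseudo-)triangulations of the point set looking for the one with the smallest w(T).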

Relevância:

60.00%

Publicador:

Resumo:

Dynamic thermal management techniques require a collection of on-chip thermal sensors that imply a significant area and power overhead. Finding the optimum number of temperature monitors and their location on the chip surface to optimize accuracy is an NP-hard problem. In this work we improve the modeling of the problem by including area, power and networking constraints along with the consideration of three inaccuracy terms: spatial errors, sampling rate errors and monitor-inherent errors. The problem is solved by the simulated annealing algorithm. We apply the algorithm to a test case employing three different types of monitors to highlight the importance of the different metrics. Finally we present a case study of the Alpha 21364 processor under two different constraint scenarios.
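The abstract lists the ingredients of the optimisation (three inaccuracy terms plus area, power and networking constraints) without stating the objective explicitly. The sketch below shows one way such a penalised cost could be assembled for the simulated annealing search; the field names, the weighting scheme and the soft-penalty treatment of the constraints are hypothetical placeholders, not the paper's actual model.

def placement_cost(placement, budget, w_err=1.0, w_penalty=1000.0):
    """Score a candidate monitor placement for a simulated annealing search.

    placement: list of monitors, each a dict with illustrative keys
               'area', 'power', 'wire_len', 'spatial_err',
               'sampling_err', 'inherent_err'.
    budget: dict with 'area', 'power' and 'wire_len' limits.
    """
    # Inaccuracy: spatial + sampling-rate + monitor-inherent error terms.
    error = sum(m['spatial_err'] + m['sampling_err'] + m['inherent_err']
                for m in placement)
    # Constraint violations enter as soft penalties so the annealer can still
    # move through slightly infeasible configurations early in the search.
    overshoot = 0.0
    for key in ('area', 'power', 'wire_len'):
        used = sum(m[key] for m in placement)
        overshoot += max(0.0, used - budget[key])
    return w_err * error + w_penalty * overshoot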

Relevância:

60.00%

Publicador:

Resumo:

The article addresses the problem of registering several images of the same scene captured by a 3D scanner in order to generate a single three-dimensional model. Genetic algorithms were used for this purpose. ABSTRACT: This work introduces a solution based on genetic algorithms to find the overlapping area between two point cloud captures obtained from a three-dimensional scanner. Considering three translation coordinates and three rotation angles, the genetic algorithm evaluates the matching points in the overlapping area between the two captures given that transformation. Genetic simulated annealing is used to improve the accuracy of the results obtained by the genetic algorithm.
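The fitness evaluated by the genetic algorithm scores how well a candidate rigid transformation (three translations plus three rotation angles) aligns the two captures in their overlapping area. A minimal sketch of such an evaluation is shown below; the rotation convention, the distance threshold and the counting rule are illustrative assumptions, not the article's actual fitness function.

import numpy as np
from scipy.spatial import cKDTree

def rotation_matrix(rx, ry, rz):
    """Rotation about x, then y, then z (one of several possible conventions)."""
    cx, sx = np.cos(rx), np.sin(rx)
    cy, sy = np.cos(ry), np.sin(ry)
    cz, sz = np.cos(rz), np.sin(rz)
    Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
    return Rz @ Ry @ Rx

def matching_score(source, target, params, threshold=0.05):
    """Count source points that land within `threshold` of some target point
    after applying the candidate transform (higher is better)."""
    rx, ry, rz, tx, ty, tz = params
    moved = source @ rotation_matrix(rx, ry, rz).T + np.array([tx, ty, tz])
    dist, _ = cKDTree(target).query(moved)
    return int(np.sum(dist < threshold))

# Toy usage: an identity transform should match every point.
pts = np.random.rand(100, 3)
print(matching_score(pts, pts, (0, 0, 0, 0, 0, 0)))   # -> 100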

Relevância:

60.00%

Publicador:

Resumo:

Los modelos de simulación de cultivos permiten analizar varias combinaciones de laboreo-rotación y explorar escenarios de manejo. El modelo DSSAT fue evaluado bajo condiciones de secano en un experimento de campo de 16 años en la semiárida España central. Se evaluó el efecto del sistema de laboreo y las rotaciones basadas en cereales de invierno, en el rendimiento del cultivo y la calidad del suelo. Los modelos CERES y CROPGRO se utilizaron para simular el crecimiento y rendimiento del cultivo, mientras que el modelo DSSAT CENTURY se utilizó en las simulaciones de SOC y SN. Tanto las observaciones de campo como las simulaciones con CERES-Barley, mostraron que el rendimiento en grano de la cebada era mas bajo para el cereal continuo (BB) que para las rotaciones de veza (VB) y barbecho (FB) en ambos sistemas de laboreo. El modelo predijo más nitrógeno disponible en el laboreo convencional (CT) que en el no laboreo (NT) conduciendo a un mayor rendimiento en el CT. El SOC y el SN en la capa superficial del suelo, fueron mayores en NT que en CT, y disminuyeron con la profundidad en los valores tanto observados como simulados. Las mejores combinaciones para las condiciones de secano estudiadas fueron CT-VB y CT-FB, pero CT presentó menor contenido en SN y SOC que NT. El efecto beneficioso del NT en SOC y SN bajo condiciones Mediterráneas semiáridas puede ser identificado por observaciones de campo y por simulaciones de modelos de cultivos. La simulación del balance de agua en sistemas de cultivo es una herramienta útil para estudiar como el agua puede ser utilizado eficientemente. La comparación del balance de agua de DSSAT , con una simple aproximación “tipping bucket”, con el modelo WAVE más mecanicista, el cual integra la ecuación de Richard , es un potente método para valorar el funcionamiento del modelo. Los parámetros de suelo fueron calibrados usando el método de optimización global Simulated Annealing (SA). Un lisímetro continuo de pesada en suelo desnudo suministró los valores observados de drenaje y evapotranspiración (ET) mientras que el contenido de agua en el suelo (SW) fue suministrado por sensores de capacitancia. Ambos modelos funcionaron bien después de la optimización de los parámetros de suelo con SA, simulando el balance de agua en el suelo para el período de calibración. Para el período de validación, los modelos optimizados predijeron bien el contenido de agua en el suelo y la evaporación del suelo a lo largo del tiempo. Sin embargo, el drenaje fue predicho mejor con WAVE que con DSSAT, el cual presentó mayores errores en los valores acumulados. Esto podría ser debido a la naturaleza mecanicista de WAVE frente a la naturaleza más funcional de DSSAT. Los buenos resultados de WAVE indican que, después de la calibración, este puede ser utilizado como "benchmark" para otros modelos para periodos en los que no haya medidas de campo del drenaje. El funcionamiento de DSSAT-CENTURY en la simulación de SOC y N depende fuertemente del proceso de inicialización. Se propuso como método alternativo (Met.2) la inicialización de las fracciones de SOC a partir de medidas de mineralización aparente del suelo (Napmin). El Met.2 se comparó con el método de inicialización de Basso et al. (2011) (Met.1), aplicando ambos métodos a un experimento de campo de 4 años en un área en regadío de España central. Nmin y Napmin fueron sobreestimados con el Met.1, ya que la fracción estable obtenida (SOC3) en las capas superficiales del suelo fue más baja que con Met.2. 
El N lixiviado simulado fue similar en los dos métodos, con buenos resultados en los tratamientos de barbecho y cebada. El Met.1 subestimó el SOC en la capa superficial del suelo cuando se comparó con una serie observada de 12 años. El crecimiento y rendimiento del cultivo fueron adecuadamente simulados con ambos métodos, pero el N en la parte aérea de la planta y en el grano fueron sobreestimados con el Met.1. Los resultados variaron significativamente con las fracciones iniciales de SOC, resaltando la importancia del método de inicialización. El Met.2 ofrece una alternativa para la inicialización del modelo CENTURY, mejorando la simulación de procesos de N en el suelo. La continua emergencia de nuevas variedades de híbridos modernos de maíz limita la aplicación de modelos de simulación de cultivos, ya que estos nuevos híbridos necesitan ser calibrados en el campo para ser adecuados para su uso en los modelos. El desarrollo de relaciones basadas en la duración del ciclo, simplificaría los requerimientos de calibración facilitando la rápida incorporación de nuevos cultivares en DSSAT. Seis híbridos de maiz (FAO 300 hasta FAO 700) fueron cultivados en un experimento de campo de dos años en un área semiárida de regadío en España central. Los coeficientes genéticos fueron obtenidos secuencialmente, comenzando con los parámetros de desarrollo fenológico (P1, P2, P5 and PHINT), seguido de los parámetros de crecimiento del cultivo (G2 and G3). Se continuó el procedimiento hasta que la salida de las simulaciones estuvo en concordancia con las observaciones fenológicas de campo. Después de la calibración, los parámetros simulados se ajustaron bien a los parámetros observados, con bajos RMSE en todos los casos. Los P1 y P5 calibrados, incrementaron con la duración del ciclo. P1 fue una función lineal del tiempo térmico (TT) desde emergencia hasta floración y P5 estuvo linealmente relacionada con el TT desde floración a madurez. No hubo diferencias significativas en PHINT entre híbridos de FAO-500 a 700 , ya que tuvieron un número de hojas similar. Como los coeficientes fenológicos estuvieron directamente relacionados con la duración del ciclo, sería posible desarrollar rangos y correlaciones que permitan estimar dichos coeficientes a partir de la clasificación del ciclo. ABSTRACT Crop simulation models allow analyzing various tillage-rotation combinations and exploring management scenarios. DSSAT model was tested under rainfed conditions in a 16-year field experiment in semiarid central Spain. The effect of tillage system and winter cereal-based rotations on the crop yield and soil quality was evaluated. The CERES and CROPGRO models were used to simulate crop growth and yield, while the DSSAT CENTURY was used in the SOC and SN simulations. Both field observations and CERES-Barley simulations, showed that barley grain yield was lower for continuous cereal (BB) than for vetch (VB) and fallow (FB) rotations for both tillage systems. The model predicted higher nitrogen availability in the conventional tillage (CT) than in the no tillage (NT) leading to a higher yield in the CT. The SOC and SN in the top layer, were higher in NT than in CT, and decreased with depth in both simulated and observed values. The best combinations for the dry land conditions studied were CT-VB and CT-FB, but CT presented lower SN and SOC content than NT. The beneficial effect of NT on SOC and SN under semiarid Mediterranean conditions can be identified by field observations and by crop model simulations. 
The simulation of the water balance in cropping systems is a useful tool to study how water can be used efficiently. The comparison of the DSSAT soil water balance, with its simpler “tipping bucket” approach, with the more mechanistic WAVE model, which integrates Richards’ equation, is a powerful method to assess model performance. The soil parameters were calibrated by using the Simulated Annealing (SA) global optimizing method. A continuous weighing lysimeter in a bare fallow provided the observed values of drainage and evapotranspiration (ET), while soil water content (SW) was supplied by capacitance sensors. Both models performed well after optimizing the soil parameters with SA, simulating the soil water balance components for the calibration period. For the validation period, the optimized models predicted soil water content and soil evaporation well over time. However, drainage was predicted better by WAVE than by DSSAT, which presented larger errors in the cumulative values. That could be due to the mechanistic nature of WAVE as opposed to the more functional nature of DSSAT. The good results from WAVE indicate that, after calibration, it could be used as a benchmark for other models for periods when no drainage field measurements are available. The performance of DSSAT-CENTURY when simulating SOC and N strongly depends on the initialization process. Initialization of the SOC pools from apparent soil N mineralization (Napmin) measurements was proposed as an alternative method (Met.2). Method 2 was compared to the Basso et al. (2011) initialization method (Met.1) by applying both methods to a 4-year field experiment in an irrigated area of central Spain. Nmin and Napmin were overestimated by Met.1, since the obtained stable pool (SOC3) in the upper layers was lower than with Met.2. Simulated N leaching was similar for both methods, with good results in the fallow and barley treatments. Method 1 underestimated topsoil SOC when compared with a 12-year observed series. Crop growth and yield were properly simulated by both methods, but N in shoots and grain was overestimated by Met.1. Results varied significantly with the initial SOC pools, highlighting the importance of the initialization procedure. Method 2 offers an alternative to initialize the CENTURY model, enhancing the simulation of soil N processes. The continuous emergence of new varieties of modern maize hybrids limits the application of crop simulation models, since these new hybrids need to be calibrated in the field to be suitable for model use. The development of relationships based on cycle duration would simplify the calibration requirements, facilitating the rapid incorporation of new cultivars into DSSAT. Six maize hybrids (FAO 300 through FAO 700) were grown in a 2-year field experiment in a semiarid irrigated area of central Spain. Genetic coefficients were obtained sequentially, starting with the phenological development parameters (P1, P2, P5 and PHINT), followed by the crop growth parameters (G2 and G3). The procedure was continued until the simulated outputs were in good agreement with the field phenological observations. After calibration, simulated parameters matched observed parameters well, with low RMSE in most cases. The calibrated P1 and P5 increased with the duration of the cycle. P1 was a linear function of the thermal time (TT) from emergence to silking, and P5 was linearly related to the TT from silking to maturity.
There were no significant differences in PHINT between hybrids from FAO-500 to FAO-700, as they had a similar leaf number. Since the phenological coefficients were directly related to the cycle duration, it would be possible to develop ranges and correlations that allow such coefficients to be estimated from the cycle classification.
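The thermal time (TT) underlying the P1 and P5 relationships is, in its usual growing-degree-day form,

\[
\mathrm{TT} \;=\; \sum_{d}\max\!\left(0,\;\frac{T_{\max,d}+T_{\min,d}}{2}-T_{\mathrm{base}}\right),
\]

accumulated daily over the phase of interest (emergence to silking for P1, silking to maturity for P5); the base temperature T_base is crop-specific and is not given in the abstract.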

Relevância:

60.00%

Publicador:

Resumo:

The CENTURY soil organic matter model was adapted to the DSSAT (Decision Support System for Agrotechnology Transfer) modular format in order to better simulate the dynamics of soil organic nutrient processes (Gijsman et al., 2002). The CENTURY model divides the soil organic carbon (SOC) into three hypothetical pools: microbial or active material (SOC1), intermediate (SOC2) and the largely inert and stable material (SOC3) (Jones et al., 2003). At the beginning of the simulation, the CENTURY model needs a value of SOC3 per soil layer, which can be estimated by the model (based on soil texture and management history) or given as an input. The model then assigns about 5% and 95% of the remaining SOC to SOC1 and SOC2, respectively. The model's performance when simulating SOC and nitrogen (N) dynamics strongly depends on the initialization process. The common methods (e.g. Basso et al., 2011) to initialize the SOC pools deal mostly with carbon (C) mineralization processes and less with N. The dynamics of SOM, SOC and soil organic N are linked in the CENTURY-DSSAT model through the C/N ratio of the decomposing material, which determines either mineralization or immobilization of N (Gijsman et al., 2002). The aim of this study was to evaluate an alternative method to initialize the SOC pools in the DSSAT-CENTURY model from apparent soil N mineralization (Napmin) field measurements by using automatic inverse calibration (simulated annealing). The results were compared with those obtained by the iterative initialization procedure developed by Basso et al. (2011).
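The partitioning rule quoted above (SOC3 fixed per layer, with about 5% and 95% of the remaining soil organic carbon assigned to SOC1 and SOC2) amounts to a one-line calculation; the sketch below only illustrates that split, with made-up numbers.

def split_soc_pools(total_soc, soc3):
    """Split total SOC for one layer (e.g. kg C / ha) into the three
    CENTURY pools, given the stable pool SOC3 as an input."""
    remaining = total_soc - soc3
    soc1 = 0.05 * remaining   # microbial / active pool
    soc2 = 0.95 * remaining   # intermediate pool
    return soc1, soc2, soc3

print(split_soc_pools(total_soc=20000.0, soc3=12000.0))  # (400.0, 7600.0, 12000.0)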