993 results for THERMAL-DEPENDENCE
Abstract:
Lake Baikal, the world's most voluminous freshwater lake, has experienced unprecedented warming during the last decades. A uniquely diverse amphipod fauna inhabits the littoral zone and can serve as a model system to identify the role of thermal tolerance under climate change. This study aimed to identify sublethal thermal constraints in two of the most abundant endemic Baikal amphipods, Eulimnogammarus verrucosus and Eulimnogammarus cyaneus, and in Gammarus lacustris, a ubiquitous gammarid of the Holarctic. As the latter is only found in some shallow isolated bays of the lake, we further addressed the question of whether rising temperatures could promote a widespread invasion of this non-endemic species into the littoral zone. Animals were exposed to gradual temperature increases (4 weeks, 0.8 °C/d; 24 h, 1 °C/h) starting from the reported annual mean temperature of the Baikal littoral (6 °C). Within the framework of oxygen- and capacity-limited thermal tolerance (OCLTT), we used a nonlinear regression approach to determine the points at which the changing temperature dependence of relevant physiological processes indicates the onset of limitation. Limitations in ventilation, representing the first limits of thermal tolerance (pejus (= "getting worse") temperatures (Tp)), were recorded at 10.6 (95% confidence interval: 9.5, 11.7), 19.1 (17.9, 20.2), and 21.1 (19.8, 22.4) °C in E. verrucosus, E. cyaneus, and G. lacustris, respectively. Field observations revealed that E. verrucosus retreated from the upper littoral to deeper and cooler waters once its Tp was surpassed, identifying Tp as the ecological thermal boundary. Constraints in oxygen consumption at higher than critical temperatures (Tc) led to an exponential increase in mortality in all species. Exposure to short-term warming resulted in higher threshold values, consistent with a time dependence of thermal tolerance. In conclusion, species-specific limits to oxygen supply capacity are likely key in the onset of constraining (beyond pejus) and then life-threatening (beyond critical) conditions. Ecological consequences of these limits are mediated through behavioral plasticity in E. verrucosus. However, similar upper thermal limits in E. cyaneus (endemic, Baikal) and G. lacustris (ubiquitous, Holarctic) indicate that the potential invader G. lacustris would not necessarily benefit from rising temperatures. Secondary effects of increasing temperatures remain to be investigated.
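The pejus-temperature estimates above come from a nonlinear regression that locates the point where the temperature dependence of a physiological rate changes. A minimal sketch of that breakpoint idea, assuming a piecewise-linear response and synthetic data (the model form and all constants are illustrative, not the study's actual fit):

```python
import numpy as np
from scipy.optimize import curve_fit

def broken_stick(T, Tp, a, b1, b2):
    """Piecewise-linear rate: the slope changes at the breakpoint Tp."""
    return np.where(T < Tp, a + b1 * (T - Tp), a + b2 * (T - Tp))

# Synthetic ventilation data levelling off above ~19 °C (illustrative only).
rng = np.random.default_rng(0)
T = np.linspace(6, 26, 40)
vent = broken_stick(T, 19.0, 50.0, 2.5, -1.0) + rng.normal(0, 1.5, T.size)

popt, pcov = curve_fit(broken_stick, T, vent, p0=[15.0, 40.0, 1.0, 0.0])
Tp, se = popt[0], np.sqrt(pcov[0, 0])
print(f"estimated Tp = {Tp:.1f} °C, 95% CI ≈ ({Tp - 1.96*se:.1f}, {Tp + 1.96*se:.1f})")
```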
Abstract:
Temperature is a first-class design concern in modern integrated circuits. The significant increase in power densities associated with recent technology generations has led to the appearance of thermal gradients and hot spots during run-time operation. Temperature impacts several circuit parameters such as speed, cooling budgets, reliability, power consumption, etc. In order to fight these negative effects, dynamic thermal management (DTM) techniques adapt the behavior of the chip based on the information of a monitoring system that provides run-time thermal information of the die surface. The field of on-chip temperature monitoring has drawn the attention of the scientific community in recent years and is the object of study of this thesis. This thesis approaches on-chip temperature monitoring from different perspectives and levels, providing solutions to some of the most important issues. The physical and circuit levels are covered with the design and characterization of two novel temperature sensors specially tailored for DTM purposes. The first sensor is based on a mechanism that produces a pulse whose width depends on the variation of leakage currents with temperature. In a nutshell, a circuit node is charged and subsequently left floating so that it discharges through the subthreshold currents of a transistor; the time the node takes to discharge is the width of the pulse. Since the width of the pulse displays an exponential dependence on temperature, the conversion into a digital word is performed by a logarithmic counter that carries out both the time-to-digital conversion and the linearization of the output. The structure resulting from this combination of elements is implemented in a 0.35 µm technology and is characterized by a very small area, 10250 nm², and very low power consumption, 1.05-65.5 nW at 5 samples/s; these figures outperformed all previous works at the time it was first published and, at the time of publication of this thesis, they still outperform all previous implementations in the same technology node. Concerning accuracy, the sensor exhibits good linearity even without calibration, displaying a 3σ error of 1.97 °C, appropriate for DTM applications. As explained, the sensor is completely compatible with standard CMOS processes; this fact, along with its tiny area and power overhead, makes it especially suitable for integration in a DTM monitoring system with a collection of on-chip monitors distributed across the chip. The increased process variations that come with recent technology nodes jeopardize the linearity characteristics of the first sensor. In order to overcome these problems, a new temperature-inferring technique is proposed. It also relies on the thermal dependence of the leakage currents that discharge a floating node, but now the result comes from the ratio of two different measurements, in one of which a characteristic of the discharging transistor is altered: the gate voltage.
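Before the ratio-based variant is discussed further, a toy numerical sketch of the first sensor's principle may help: a floating node discharges through leakage, the discharge time is exponential in temperature, and a logarithmic counter linearizes it. The exponential law and all constants below are invented for illustration, not the thesis's measured characteristics:

```python
import math

def pulse_width(T, C=1e-12, V=1.0, I0=1e-12, T0=20.0):
    """Discharge time of the floating node: t = C*V / I_leak(T).
    Leakage is modelled as exponential in temperature (assumed law)."""
    i_leak = I0 * math.exp((T - 300.0) / T0)   # A, grows with temperature
    return C * V / i_leak                      # s, shrinks with temperature

def log_counter_code(width, t_clk=1e-9):
    """A logarithmic counter outputs ~log2 of the pulse width, which
    turns the exponential temperature dependence into a linear code."""
    return math.log2(width / t_clk)

for T in (300, 320, 340, 360):                 # K
    print(T, round(log_counter_code(pulse_width(T)), 2))
```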
The ratio of the two measurements proves to be very robust against process variations and displays a more than sufficient linearity with temperature: a 1.17 °C 3σ error considering process variations and performing two-point calibration. The implementation of the sensing part based on this new technique raises several issues, such as the generation of a process-variation-independent voltage reference, that are analyzed in depth in the thesis. In order to perform the time-to-digital conversion, we employ the same digitization structure the former sensor used. A completely new standard cell library targeting low area and power overhead is built from scratch to implement the digitization part. Putting all the pieces together, we achieve a complete sensor system characterized by an ultra-low energy per conversion of 48-640 pJ and an area of 0.0016 mm²; this figure outperforms all previous works. To prove this statement, we perform a thorough comparison with over 40 works from the scientific literature. Moving up to the system level, the third contribution is centered on the modeling of a monitoring system consisting of a set of thermal sensors distributed across the chip. All previous works from the literature target maximizing the accuracy of the system with the minimum number of monitors. In contrast, we introduce new quality metrics apart from just the number of sensors; we consider the power consumption, the sampling frequency, the possibility of choosing among different types of monitors, and the interconnection costs. The model is embedded in a simulated annealing algorithm that receives the thermal information of a system, its physical properties, area, power and interconnection constraints, and a collection of monitor types; the algorithm yields the selected type of monitor, the number of monitors, their positions and the optimum sampling rate. We test the algorithm with the Alpha 21364 processor under several constraint configurations to prove its validity. Compared to other previous works in the literature, the modeling presented here is the most complete. Finally, the last contribution targets the networking level: given an allocated set of temperature monitors, we focus on solving the problem of connecting them in an area- and power-efficient way. Our first proposal in this area is the introduction of a new interconnection hierarchy level, the threshing level, in between the monitors and the traditional peripheral buses, which applies data selectivity to reduce the amount of information sent to the central controller. The idea behind this new level is that in this kind of network most data are useless, because from the controller viewpoint only a small amount of data (normally the extreme values) is of interest. To cover the new interconnection level, we propose a single-wire monitoring network based on a time-domain signaling scheme that significantly reduces both the switching activity over the wire and the power consumption of the network. This scheme codes the information in the time domain and directly yields an ordered list of values from the maximum to the minimum. If the scheme is applied to monitors that employ time-to-digital conversion, digitization resources can be shared in both time and space, producing an important saving in area and power consumption. Two prototypes of complete monitoring systems are presented; they significantly outperform previous works in terms of area and, especially, power consumption.
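A compact sketch of the simulated-annealing placement loop described above. The cost function (nearest-sensor coverage of hotspots plus a fixed per-sensor price standing in for power, area and interconnect costs), the grid and the hotspot map are placeholders, not the thesis's model:

```python
import math
import random

random.seed(1)
GRID = [(x, y) for x in range(8) for y in range(8)]   # candidate sensor sites
HOTSPOTS = [(1, 2), (6, 5), (3, 7)]                   # assumed thermal-map peaks

def cost(placement):
    # Reconstruction-error proxy: distance from each hotspot to its nearest
    # sensor, plus a fixed per-sensor price (power/area/interconnect proxy).
    err = sum(min(abs(hx - x) + abs(hy - y) for (x, y) in placement)
              for (hx, hy) in HOTSPOTS)
    return err + 2.0 * len(placement)

state, T = random.sample(GRID, 4), 5.0
while T > 0.01:
    cand = state.copy()
    if random.random() < 0.5 and len(cand) > 1:
        cand.pop(random.randrange(len(cand)))         # remove a sensor
    else:
        cand.append(random.choice(GRID))              # add a sensor
    dc = cost(cand) - cost(state)
    if dc < 0 or random.random() < math.exp(-dc / T): # Metropolis acceptance
        state = cand
    T *= 0.99                                         # geometric cooling
print(len(state), "sensors:", state, "cost:", cost(state))
```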
Abstract:
Systematic data on the effect of irradiation with swift ions (Zn at 735 MeV and Xe at 929 MeV) on NaCl single crystals have been analysed in terms of a synergetic two-spike approach (thermal and excitation spikes). The coupling of the two spikes, simultaneously generated by the irradiation, contributes to the operation of a non-radiative exciton decay model as proposed for purely ionization damage. Using this scheme, we have accounted for the π-emission yield of self-trapped excitons and its temperature dependence under ion-beam irradiation. Moreover, the initial production rates of F centres have also been reasonably simulated for irradiation at low temperatures (< 100 K), where colour centre annealing and aggregation can be neglected.
Abstract:
Long-term sustainable nuclear energy scenarios envisage a fleet of Liquid Metal Fast Reactors (LMFR) for Pu recycling and minor actinide (MA) transmutation, or combined with accelerator-driven systems (ADS) dedicated solely to MA elimination.
Design and licensing of these innovative reactor concepts require accurate computational tools that implement the knowledge obtained in experimental research on new reactor configurations, materials and associated systems. Although a number of fast reactor systems have already been built, the operational experience is still limited, especially for lead reactors, and not all transients are fully understood. The safety analysis approach for LMFR is therefore based only on deterministic methods, unlike the modern approach for Light Water Reactors (LWR), which also benefits from probabilistic methods. Usually, the approach adopted in LMFR safety assessments is to employ a variety of codes, somewhat different from each other, to analyze transients looking for a comprehensive solution and including uncertainties. In this frame, new best-estimate simulation codes are of prime importance for analyzing fast reactor steady states and transients. This thesis is focused on the development of a coupled code system for best-estimate analysis of fast critical reactors. Currently, owing to the increase in computational resources, Monte Carlo methods for neutron transport can be used for detailed full-core calculations. Furthermore, Monte Carlo codes are usually taken as the reference for deterministic multigroup diffusion codes in fast reactor applications, because they employ point-wise cross sections in an exact geometry model and intrinsically account for the directional dependence of the flux. The coupling methodology presented here uses MCNP to calculate the power deposition within the reactor, while the subchannel code COBRA-IV calculates the temperature and density distributions. COBRA-IV is suitable for fast reactor applications because it has been validated against experimental results in sodium rod bundles, and the proper correlations for liquid metal applications have been added to the thermal-hydraulics program. Both codes are coupled at steady state using an iterative method and external file exchange. The main issue in the Monte Carlo/thermal-hydraulics steady-state coupling is the cross-section handling needed to take into account Doppler broadening when the temperature rises. Among all available options, the pseudo-material approach has been chosen in this thesis; it yields reasonable results in fast reactor applications. Furthermore, geometrical changes caused by large temperature gradients in the core are of major importance in fast reactors due to the large neutron mean free path. An additional module has therefore been included in order to simulate the reactor geometry in the hot state and to estimate the reactivity due to core expansion in a transient. The module automatically calculates the fuel length, cladding radius, fuel assembly pitch and diagrid radius as functions of temperature. This effect is crucial in some unprotected transients. Also related to geometrical changes, an automatic control rod movement feature has been implemented in order to achieve a just-critical reactor or to calculate control rod worth. A step forward in the coupling platform is the dynamic simulation. Since MCNP performs only steady-state calculations for critical systems, the most straightforward option that avoids modifying the MCNP source code is to use the flux factorization approach, solving the flux shape and amplitude separately. In this thesis, two options have been studied to tackle time-dependent neutronic simulations with a Monte Carlo code: the adiabatic and quasistatic methods.
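Before those two methods are detailed, the pseudo-material trick used in the steady-state coupling can be sketched: cross-section libraries at two bracketing temperatures are mixed with weights chosen to approximate Doppler broadening at the target temperature. The √T interpolation below is the commonly used heuristic, assumed here rather than taken from the thesis:

```python
import math

def pseudo_material_fractions(T, T_low, T_high):
    """Fractions for mixing cross-section libraries at T_low and T_high so
    the blend approximates Doppler broadening at temperature T. The sqrt(T)
    weighting is the usual heuristic (an assumption here)."""
    w_high = (math.sqrt(T) - math.sqrt(T_low)) / (math.sqrt(T_high) - math.sqrt(T_low))
    return 1.0 - w_high, w_high

w_lo, w_hi = pseudo_material_fractions(900.0, 600.0, 1200.0)
print(f"600 K library: {w_lo:.3f}, 1200 K library: {w_hi:.3f}")
```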
The adiabatic method uses a staggered time coupling scheme for the time advance of the neutronics and thermal-hydraulics calculations. MCNP computes the fundamental mode of the neutron flux distribution and the reactivity at the end of each time step, and COBRA-IV computes the thermal properties at the midpoint of each time step. The evolution of the flux amplitude is obtained by solving the point kinetics equations. This method calculates the static reactivity in each time step, which in general differs from the dynamic reactivity calculated with the exact, time-dependent flux distribution. Nevertheless, for situations close to criticality, both reactivities are similar and the method leads to acceptable practical results. In this line, an improved method has been developed as an attempt to take into account the effect of the delayed neutron source on the evolution of the flux shape during the transient. The scheme performs a quasistationary calculation per time step with MCNP. This quasistationary simulation is based on the constant delayed neutron source approach, taking into account the importance of each criticality cycle in the final flux estimation. Both the adiabatic and quasistatic methods have been verified against the diffusion code COBAYA3, using a theoretical kinetic exercise. Finally, a transient in the MYRRHA/FASTEF critical reactor concept, a 100 MWth lead-bismuth-cooled design, is analyzed using the adiabatic method as an application example in a real system.
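The amplitude equation solved at each step of the adiabatic scheme is the point kinetics system. A minimal sketch with one effective delayed-neutron group and illustrative parameters (not MYRRHA's):

```python
from scipy.integrate import solve_ivp

# One effective delayed-neutron group; all parameters are illustrative.
beta, lam, Lam = 0.0035, 0.08, 4e-7   # delayed fraction, decay const (1/s), generation time (s)

def rho(t):
    return 0.5 * beta if t > 0.1 else 0.0          # step insertion of 0.5 $

def point_kinetics(t, y):
    n, c = y                                        # amplitude, precursor conc.
    dn = (rho(t) - beta) / Lam * n + lam * c
    dc = beta / Lam * n - lam * c
    return [dn, dc]

y0 = [1.0, beta / (lam * Lam)]                      # equilibrium precursors
sol = solve_ivp(point_kinetics, (0.0, 5.0), y0, method="LSODA",
                rtol=1e-8, atol=1e-8)
print(f"relative power at t = 5 s: {sol.y[0, -1]:.2f}")
```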
Abstract:
An advantage of laser crystallization over conventional heating methods is its ability to limit rapid heating and cooling to thin surface layers. Laser energy is used to heat the a-Si thin film so as to change its microstructure to poly-Si. Thin-film samples of a-Si were irradiated with a CW green laser source. Laser-irradiated spots were produced using different laser powers and irradiation times, and these parameters are identified as the key variables in the crystallization process. The power threshold for crystallization is reduced as the irradiation time is increased. Once this threshold is reached, the crystalline fraction increases linearly with power for each irradiation time. The experimental results are analysed with the aid of a numerical thermal model, and two crystallization mechanisms are observed: one due to melting and the other due to solid-phase transformation.
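A toy version of the reported threshold behavior, assuming the crystallization threshold decays with irradiation time and the crystalline fraction then grows linearly with power; the functional form and constants are invented for illustration:

```python
def crystalline_fraction(power, t_irr, p0=2.0, k=1.5, slope=0.4):
    """Illustrative model: the power threshold falls as irradiation time
    grows, and above threshold the fraction rises linearly with power."""
    p_th = p0 + k / t_irr                 # assumed decay of the threshold
    return max(0.0, min(1.0, slope * (power - p_th)))

for t in (0.1, 1.0, 10.0):                # irradiation times (a.u.)
    print(t, [round(crystalline_fraction(p, t), 2) for p in (2, 3, 4, 5)])
```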
Abstract:
Data centers are easily found in every sector of the worldwide economy. They consist of tens of thousands of servers, serving millions of users globally, 24 hours a day, every day of the year. In recent years, e-Science applications such as e-Health or Smart Cities have experienced significant development. The need to deal efficiently with the computational needs of next-generation applications, together with the increasing demand for higher resources in traditional applications, has facilitated the rapid proliferation and growth of data centers. A drawback of this capacity growth has been the rapid and dramatic increase in the energy consumption of these facilities. In 2010, data center electricity represented 1.3% of all the electricity used in the world. In 2012 alone, global data center power demand grew 63% to 38 GW. A further rise of 17% to 43 GW was estimated for 2013. Moreover, data centers are responsible for more than 2% of total carbon dioxide emissions. This PhD thesis addresses the energy challenge by proposing proactive and reactive thermal- and energy-aware optimization techniques that contribute to more energy-efficient data centers. This work develops energy models and uses the knowledge about the energy demand of the workload to be executed and the computational and cooling resources available at the data center to optimize energy consumption. Moreover, data centers are considered as a crucial element within their application framework, optimizing not only the energy consumption of the facility but the global energy consumption of the application. The main contributors to the energy consumption in a data center are the computing power drawn by IT equipment and the cooling power needed to keep the servers within a temperature range that ensures safe operation.
Because of the cubic relation between fan power and fan speed, solutions based on over-provisioning cold air to the server usually lead to inefficiencies. On the other hand, higher chip temperatures lead to higher leakage power because of the exponential dependence of leakage on temperature. Moreover, workload characteristics as well as allocation policies also have an important impact on the leakage-cooling tradeoffs. The first key contribution of this work is the development of power and temperature models that accurately describe the leakage-cooling tradeoffs at the server level, and the proposal of strategies to minimize server energy via joint cooling and workload management from a multivariate perspective. When scaling to the data center level, a similar behavior in terms of leakage-temperature tradeoffs can be observed. As room temperature rises, the efficiency of the data room cooling units improves; however, CPU temperature rises as well, and so does leakage power. Moreover, the thermal dynamics of a data room exhibit unbalanced patterns due to both the workload allocation and the heterogeneity of the computing equipment. The second main contribution is the proposal of thermal- and heterogeneity-aware workload management techniques that jointly optimize the allocation of computation and cooling to servers. These strategies need to be backed up by flexible room-level models, able to work at runtime, that describe the system from a high-level perspective. Within the framework of next-generation applications, decisions taken at this scope can have a dramatic impact on the energy consumption of lower abstraction levels, i.e. the data center facility. It is important to consider the relationships between all the computational agents involved in the problem, so that they can cooperate to achieve the common goal of reducing energy in the overall system. The third main contribution is the energy optimization of the overall application by evaluating the energy costs of performing part of the processing in any of the different abstraction layers, from the node to the data center, via workload management and off-loading techniques. In summary, the work presented in this PhD thesis makes contributions to leakage- and cooling-aware server modeling and optimization, data center thermal modeling and heterogeneity-aware data center resource allocation, and develops mechanisms for the energy optimization of next-generation applications from a multi-layer perspective.
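The server-level leakage-cooling tradeoff described above can be sketched numerically: fan power grows cubically with speed while leakage falls as cooling improves, so total power has an interior minimum. The thermal and leakage constants below are illustrative, not the thesis's fitted models:

```python
from scipy.optimize import minimize_scalar

def total_power(fan_speed, p_dyn=100.0):
    """Server power = dynamic + leakage + fan (all constants illustrative).
    Faster fans cool the CPU (lower T) but cost cubically more power;
    leakage grows roughly exponentially with temperature."""
    T = 40.0 + 60.0 / fan_speed           # assumed thermal model, °C
    p_leak = 10.0 * 1.04 ** (T - 40.0)    # ~4% more leakage per °C
    p_fan = 2.0 * fan_speed ** 3          # cubic fan law
    return p_dyn + p_leak + p_fan

res = minimize_scalar(total_power, bounds=(0.5, 4.0), method="bounded")
print(f"optimal fan speed {res.x:.2f} (normalized), total power {res.fun:.1f} W")
```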
Abstract:
Point mutants of three unrelated antifluorescein antibodies were constructed to obtain nine different single-chain Fv fragments, whose on-rates, off-rates, and equilibrium binding affinities were determined in solution. Additionally, activation energies for unbinding were estimated from the temperature dependence of the off-rate in solution. Loading rate-dependent unbinding forces were determined for single molecules by atomic force microscopy, which extrapolated at zero force to a value close to the off-rate measured in solution, without any indication for multiple transition states. The measured unbinding forces of all nine mutants correlated well with the off-rate in solution, but not with the temperature dependence of the reaction, indicating that the same transition state must be crossed in spontaneous and forced unbinding and that the unbinding path under load cannot be too different from the one at zero force. The distance of the transition state from the ground state along the unbinding pathway is directly proportional to the barrier height, regardless of the details of the binding site, which most likely reflects the elasticity of the protein in the unbinding process. Atomic force microscopy thus can be a valuable tool for the characterization of solution properties of protein-ligand systems at the single molecule level, predicting relative off-rates, potentially of great value for combinatorial chemistry and biology.
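The loading-rate analysis in such single-molecule experiments is commonly framed with the Bell-Evans model, in which the most probable unbinding force grows logarithmically with the loading rate and extrapolates at zero force to the spontaneous off-rate. A sketch under that assumption, with illustrative parameters:

```python
import numpy as np

kBT = 4.11e-21      # J, thermal energy at ~298 K
x_b = 0.5e-9        # m, distance to the transition state (illustrative)
k_off = 0.01        # 1/s, spontaneous off-rate (illustrative)

def most_probable_force(r):
    """Bell-Evans: F* = (kBT/x_b) * ln(r * x_b / (k_off * kBT));
    extrapolating F* -> 0 recovers the zero-force (solution) off-rate."""
    return (kBT / x_b) * np.log(r * x_b / (k_off * kBT))

for r in (1e-12, 1e-10, 1e-8):   # loading rates, N/s
    print(f"r = {r:.0e} N/s -> F* = {most_probable_force(r) * 1e12:.0f} pN")
```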
Abstract:
In this paper, we study the effect of solid surface mediation on the intermolecular potential energy of nitrogen, and its impact on the adsorption of nitrogen on a graphitized carbon black surface and in carbon slit-shaped pores. This effect arises from the lower effective interaction potential energy between two particles close to the surface compared to the potential energy of the same two particles when they are far away from the surface. A simple equation is proposed to calculate the reduction factor and this is used in the Grand Canonical Monte Carlo (GCMC) simulation of nitrogen adsorption on graphitized thermal carbon black. With this modification, the GCMC simulation results agree extremely well with the experimental data over a wide range of pressure; the simulation results with the original potential energy (i.e. no surface mediation) give rise to a shoulder in the neighbourhood of monolayer coverage and a significant over-prediction of the second and higher layer coverages. The influence of this surface mediation on the dependence of the pore-filling pressure on the pore width is also studied. It is shown that such surface mediation has a significant effect on the pore-filling pressure. This implies that the use of the local isotherms obtained from the potential model without surface mediation could give rise to a serious error in the determination of the pore-size distribution.
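The paper's "simple equation" for the reduction factor is not reproduced here; the sketch below only illustrates the mechanism, assuming a 12-6 Lennard-Jones nitrogen pair potential whose well depth is damped near the surface and recovers with height (the recovery law and constants are assumptions):

```python
import math

def lj(r, eps=95.2, sigma=3.75e-10):
    """12-6 Lennard-Jones N2-N2 pair energy (eps in K, r and sigma in m)."""
    s6 = (sigma / r) ** 6
    return 4.0 * eps * (s6 * s6 - s6)

def mediated_lj(r, z1, z2, f0=0.85, z0=7e-10):
    """Surface-mediated pair energy: the interaction is damped when either
    molecule sits close to the wall. The exponential recovery with height
    is an assumed form, not the paper's equation."""
    z = min(z1, z2)                               # height of the lower molecule
    factor = 1.0 - (1.0 - f0) * math.exp(-z / z0)
    return factor * lj(r)

r = 4.2e-10
print(f"bulk: {lj(r):.1f} K, near wall: {mediated_lj(r, 3.5e-10, 3.5e-10):.1f} K")
```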
Abstract:
The temperature dependence of the structure of the mixed-anion Tutton salt K2[Cu(H2O)6](SO4)2x(SeO4)2-2x has been determined for crystals with 0, 17, 25, 68, 78, and 100% sulfate over the temperature range 85-320 K. In every case, the [Cu(H2O)6]2+ ion adopts a tetragonally elongated coordination geometry with an orthorhombic distortion. However, for the compounds with 0, 17, and 25% sulfate, the long and intermediate bonds occur on a different pair of water molecules from those with 68, 78, and 100% sulfate. A thermal equilibrium between the two forms is observed for each crystal, developing more readily as the proportions of the two counterions become more similar. Attempts to prepare a crystal with approximately equal amounts of sulfate and selenate were unsuccessful. The temperature dependence of the bond lengths has been analyzed using a model in which the Jahn-Teller potential surface of the [Cu(H2O)6]2+ ion is perturbed by a lattice-strain interaction. The magnitude and sign of the orthorhombic component of this strain interaction depend on the proportion of sulfate to selenate. Significant deviations from Boltzmann statistics are observed for those crystals exhibiting a large temperature dependence of the average bond lengths, which may be explained by cooperative interactions between neighboring complexes.
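The Boltzmann analysis mentioned above amounts to averaging a bond length over two Jahn-Teller wells weighted by exp(-ΔE/kT). A minimal sketch with illustrative bond lengths and energy gap:

```python
import numpy as np

def mean_long_bond(T, r_A=2.28, r_B=2.10, dE=300.0):
    """Boltzmann average of a Cu-O bond length over two Jahn-Teller wells
    separated by dE (in K); r_A, r_B and dE are illustrative values."""
    p = np.exp(-dE / T)            # relative population of the higher well
    return (r_A + r_B * p) / (1.0 + p)

for T in (85, 150, 250, 320):      # K, spanning the measured range
    print(T, round(mean_long_bond(T), 3))
```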
Abstract:
The thesis is divided into four chapters: introduction, experimental, results and discussion of the free ligands, and results and discussion of the complexes. The First Chapter, the introductory chapter, is a general introduction to the study of solid-state reactions. The Second Chapter is devoted to the materials and experimental methods that have been used for carrying out the experiments. The Third Chapter is concerned with the characterisation of the free ligands (picolinic acid, nicotinic acid, and isonicotinic acid) by elemental analysis, IR spectra, X-ray diffraction, and mass spectra. Additionally, the thermal behaviour of the free ligands in air has been studied by means of thermogravimetry (TG), derivative thermogravimetry (DTG), and differential scanning calorimetry (DSC) measurements. The thermal decomposition behaviour of the three free ligands was not identical. Finally, a computer program has been used for the kinetic evaluation of non-isothermal differential scanning calorimetry data according to composite and single heating rate methods, in comparison with the Ozawa and Kissinger methods. The most probable reaction mechanism for the free ligands was the Avrami-Erofeev equation (A), which describes a solid-state nucleation-growth mechanism. The activation parameters of the decomposition reaction for the free ligands were calculated, and the results of the different methods of data analysis were compared and discussed. The Fourth Chapter, the final chapter, deals with the preparation of cobalt, nickel, and copper complexes with mono-pyridine carboxylic acids in aqueous solution. The prepared complexes have been characterised by elemental analyses, IR spectra, X-ray diffraction, magnetic moments, and electronic spectra. The stoichiometry of these compounds was ML2·xH2O (where M = metal ion, L = organic ligand, and x = number of water molecules). The environments of the cobalt, nickel, and copper nicotinates and of the cobalt and nickel picolinates were octahedral, whereas the environment of copper picolinate [Cu(PA)2] was tetragonal. The environments of the cobalt, nickel, and copper isonicotinates were polymeric octahedral structures. The morphological changes that occurred throughout the decomposition were followed by SEM observation. The thermal behaviour of the prepared complexes in air was studied by TG, DTG, and DSC measurements. During the degradation processes of the hydrated complexes, the crystallisation water molecules were lost in one or two steps; this was followed by the loss of the organic ligands, leaving the metal oxides. Comparison between the DTG temperatures of the first and second dehydration steps suggested that the water of crystallisation was more strongly bonded to the anion in the Ni(II) complexes than in the complexes of Co(II) and Cu(II). The intermediate products of decomposition were not identified. The most probable reaction mechanism for the prepared complexes was also the Avrami-Erofeev equation (A), characteristic of a solid-state nucleation-growth mechanism. The temperature dependence of the direct-current conductivity was determined for the cobalt, nickel, and copper isonicotinates, and the corresponding activation energy (ΔΕ) was calculated. The temperature and frequency dependence of the conductivity, the frequency dependence of the dielectric constant, and the dielectric loss for nickel isonicotinate were determined using alternating current. The value of the s parameter and the density of states [N(Ef)] were calculated.
Keywords: thermal decomposition, kinetics, electrical conduction, pyridine mono-carboxylic acid, complex, transition metal complex.
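The Kissinger evaluation used for the kinetic analysis above can be sketched in a few lines: ln(β/Tp²) plotted against 1/Tp is linear with slope -Ea/R. The heating rates and peak temperatures below are illustrative, not the thesis's data:

```python
import numpy as np

R = 8.314  # J/(mol K)

def kissinger_Ea(beta, T_peak):
    """Kissinger method: ln(beta/Tp^2) vs 1/Tp is linear, slope = -Ea/R."""
    y = np.log(np.asarray(beta) / np.asarray(T_peak) ** 2)
    x = 1.0 / np.asarray(T_peak)
    slope, _ = np.polyfit(x, y, 1)
    return -slope * R

# Illustrative heating rates (K/min) and DSC peak temperatures (K).
print(f"Ea ≈ {kissinger_Ea([5, 10, 20], [540.0, 551.0, 562.5]) / 1e3:.0f} kJ/mol")
```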
Abstract:
Metallocene ethylene-1-octene copolymers having different densities and comonomer contents ranging from 11 to 36 wt% (m-LLDPE), and a Ziegler copolymer (z-LLDPE) containing the same level of short-chain branching (SCB) as one of the m-LLDPE polymers, were subjected to extrusion. The effects of temperature (210-285 °C) and multi-pass extrusion (up to five passes) on the rheological and structural characteristics of these polymers were investigated using melt index and capillary rheometry, along with spectroscopic characterisation of the evolution of various products by FTIR, C-NMR and colour measurements. The aim is to develop a better understanding of the effects of processing variables on the structure and thermal degradation of these polymers. Results from rheology show that both the extrusion temperature and the amount of comonomer have a significant influence on the thermo-oxidative behaviour of the polymer melt. At low to intermediate processing temperatures, all m-LLDPE polymers exhibited similar behaviour, with crosslinking reactions dominating their thermal oxidation. By contrast, at higher processing temperatures the behaviour of the metallocene polymers changed depending on the comonomer content: higher SCB gave rise to predominantly chain-scission reactions, whereas polymers with a lower level of SCB continued to be dominated by crosslinking. This temperature dependence was attributed to differences in the evolution of carbonyl and unsaturated compounds, including vinyl, vinylidene and trans-vinylene. © 2007 Elsevier Ltd. All rights reserved.
Abstract:
Transition metals (Ti, Zr, Hf, Mo, W, V, Nb, Ta, Pd, Pt, Cu, Ag, and Au) are essential building units of many materials and have important industrial applications. Therefore, it is important to understand their thermal and physical behavior when they are subjected to extreme conditions of pressure and temperature. This dissertation presents:
• An improved experimental technique to use lasers for the measurement of thermal conductivity of materials under conditions of very high pressure (P, up to 50 GPa) and temperature (T, up to 2500 K).
• An experimental study of the phase relationships and physical properties of selected transition metals, which revealed new and unexpected physical effects on thermal conductivity in Zr and Hf under high P-T.
• New phase diagrams for Hf, Ti and Zr created from experimental data.
• The P-T dependence of the lattice parameters in α-hafnium. Contrary to prior reports, the α-ω phase transition in hafnium has a negative dT/dP slope.
• New data on thermodynamic and physical properties of several transition metals and their respective high P-T phase diagrams.
• The first complete thermodynamic database for the solid phases of 13 common transition metals, containing: all the thermochemical data on these elements in their standard state (mostly available and compiled); all the equations of state (EoS) formulated from pressure-volume-temperature data (measured as a part of this study and from the literature); and complete thermodynamic data for selected elements from standard to extreme conditions.
The thermodynamic database provided by this study can be used with available thermodynamic software to calculate all thermophysical properties and phase diagrams at high P-T conditions. For readers who do not have access to this software, tabulated values of all thermodynamic and volume data for the 13 metals at high P-T are included in the APPENDIX, along with a description of several other high-pressure studies of selected oxide systems. Thermophysical properties (Cp, H, S, G) of the high P-T ω-phase of Ti, Zr and Hf were determined during the optimization of the EoS parameters and are presented in this study for the first time. These results should have important implications for understanding hexagonal-close-packed to simple-hexagonal phase transitions in transition metals and other materials.
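As an example of the EoS machinery such a database rests on, here is a third-order Birch-Murnaghan P(V) evaluation; a full P-V-T description would add thermal-pressure terms on top. The parameters are merely of the right order for hcp Zr, not fitted values from this work:

```python
def birch_murnaghan_P(V, V0, K0, K0p):
    """Third-order Birch-Murnaghan isothermal EoS; P comes out in the
    units of K0. A full P-V-T treatment adds thermal-pressure terms."""
    eta = (V0 / V) ** (1.0 / 3.0)
    return 1.5 * K0 * (eta**7 - eta**5) * (1.0 + 0.75 * (K0p - 4.0) * (eta**2 - 1.0))

# Parameters merely of the right order for hcp Zr (V in A^3/atom, K0 in GPa).
print(f"P(V=20.0) ≈ {birch_murnaghan_P(20.0, V0=23.3, K0=94.0, K0p=3.9):.1f} GPa")
```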
Abstract:
The anharmonic phonon properties of SnSe in the Pnma phase were investigated with a combination of experiments and first-principles simulations. Using inelastic neutron scattering (INS) and nuclear resonant inelastic X-ray scattering (NRIXS), we have measured the phonon dispersions and density of states (DOS) and their temperature dependence, which revealed a strong, inhomogeneous shift and broadening of the spectrum on warming. First-principles simulations were performed to rationalize these measurements, and to explain the previously reported anisotropic thermal expansion, in particular the negative thermal expansion within the Sn-Se bilayers. Including the anisotropic strain dependence of the phonon free energy, in addition to the electronic ground state energy, is essential to reproduce the negative thermal expansion. From the phonon DOS obtained with INS and additional calorimetry measurements, we quantify the harmonic, dilational, and anharmonic components of the phonon entropy, heat capacity, and free energy. The origin of the anharmonic phonon thermodynamics is linked to the electronic structure.
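The harmonic part of the phonon entropy quantified above follows directly from the measured DOS. A minimal numerical sketch, assuming a DOS g(ω) in meV normalized to 3 modes per atom (the flat toy DOS is illustrative):

```python
import numpy as np

kB = 8.617333e-2  # meV/K

def phonon_entropy(omega, g, T):
    """Harmonic vibrational entropy per atom from a phonon DOS g(omega),
    omega in meV, g normalized to 3 modes/atom:
    S = kB * int g(w) [(n+1)ln(n+1) - n ln(n)] dw, n = 1/(exp(w/kBT) - 1)."""
    n = 1.0 / np.expm1(omega / (kB * T))
    f = g * ((n + 1.0) * np.log1p(n) - n * np.log(n))
    return kB * np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(omega))  # trapezoid rule

# Illustrative flat DOS up to 20 meV, normalized to 3 modes per atom.
w = np.linspace(0.1, 20.0, 200)
g = np.full_like(w, 3.0 / (20.0 - 0.1))
print(f"S(300 K) ≈ {phonon_entropy(w, g, 300.0):.3f} meV/K per atom")
```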
Abstract:
Zr-Excel alloy (Zr-3.5Sn-0.8Nb-0.8Mo) is a dual-phase (α + β) alloy in the as-received pressure tube condition. It has been proposed as the pressure tube candidate material for the Generation-IV CANDU-Supercritical Water Reactor (CANDU-SCWR). In this dissertation, the effects of heavy ion irradiation, deformation and heat treatment on the microstructures of the alloy were investigated to enable a better understanding of the potential in-reactor performance of this alloy. In-situ heavy ion (1 MeV) irradiation was performed to study the nucleation and evolution of dislocation loops in both α- and β-Zr. Small, dense dislocation loops form under irradiation between 80 and 450 °C, and their number density tends to saturate at ~0.1 dpa. Compared with α-Zr, the defect yield is much lower in β-Zr. The stabilities under irradiation of the metastable phases (β-Zr and ω-Zr) and of the thermodynamically stable equilibrium phase, fcc Zr(Mo,Nb)2, were also studied at different temperatures. Chemi-STEM elemental mapping was carried out to study the elemental redistribution caused by irradiation. The stability of these phases and the elemental redistribution are strongly dependent on irradiation temperature. In-situ time-of-flight neutron diffraction tensile and compressive tests were carried out at different temperatures to monitor the lattice strain evolution of individual grain families. β-Zr is the strengthening phase of this alloy in the as-received plate material: load is transferred to the β-Zr after yielding of the α-Zr grains. The temperature dependence of static strain aging and the yielding sequence of the individual grain families are discussed. Strong tensile/compressive asymmetry was observed in the {0002} grain family at room temperature. The microstructures of the sample deformed at 400 °C and of samples subjected only to heat treatment at the same temperature were characterized by TEM. The concentration of β-phase stabilizers in the β grains and the morphology of the β grains have a significant effect on the stability of β- and ω-Zr under thermal treatment. Applied stress/strain enhances the decomposition of the isothermal ω phase but suppresses α precipitation inside the β grains at high temperature. An α → ω/ZrO phase transformation was observed in thin foils of Zr-Excel alloy and pure Zr during in-situ heating at 700 °C in the TEM.
Abstract:
Since the 1980s, different devices based on superelastic alloys have been developed for orthodontic applications. Particularly in the last decades, several studies have been carried out to evaluate the mechanical behavior of Ni-Ti alloys, including their tensile, torsion and fatigue properties. However, studies regarding the dependence of the elastic properties on the residence time of Ni-Ti wires in the oral cavity are scarce. Such an approach is essential, since the metallic alloys are subjected to mechanical stresses during orthodontic treatment as well as to pH and temperature fluctuations. The goal of the present contribution is to provide elastic stress-strain results to guide the orthodontic choice between martensitic thermally activated and austenitic superelastic Ni-Ti alloys. From the point of view of an orthodontist, the selection of appropriate materials and the correct maintenance of the orthodontic apparatus are essential during clinical treatment. The present work evaluated the elastic behavior of Ni-Ti alloy wires with diameters varying from 0.014 to 0.020 inches, submitted to hysteresis tensile tests with 8% strain. Tensile tests were performed after periods of use of 1, 2 and 3 months in the oral cavity of patients undergoing orthodontic treatment. The results from the hysteresis tests made it possible to examine the strain range covered by the isostress lines upon loading and unloading, as well as the residual strain after unloading, for both superelastic and thermally activated Ni-Ti wires. Superelastic Ni-Ti wires exhibited higher loading isostress values than thermally activated wires, and it was found that such differences in the loading isostress values can increase with increasing residence time.