12 results for impact fatigue (repeated impulsive loading)
at Universidad Politécnica de Madrid
Abstract:
Concrete is nowadays one of the most widely used building materials because of its good mechanical properties, moldability and production economy, among other advantages.
As is well known, concrete has high compressive and low tensile strength, and for this reason it is reinforced with steel bars to form reinforced concrete, a material that has become the most important constructive solution of our time. Despite being such a widely used material, there are aspects of concrete behavior that are not yet fully understood, as is the case with its response to the effects of an explosion. This is a topic of particular relevance because events, both intentional and accidental, in which a structure is subjected to an explosion are, unfortunately, relatively common. The loading of a structure during an explosive event is produced by the impact of the pressure wave generated by the detonation. The application of this load on the structure is very fast and of very short duration. Such actions are called impulsive loads, and they can be up to four orders of magnitude faster than the dynamic loads imposed by an earthquake. Consequently, it is not surprising that their effects on structures and materials are very different from those produced by the loads usually considered in engineering. This thesis broadens the knowledge of the material behavior of concrete subjected to explosions. To that end, it is crucial to have experimental results for concrete structures subjected to explosions. Such results are difficult to find in the scientific literature, as these tests have traditionally been carried out by the armies of different countries and the results obtained are classified. Moreover, in experimental campaigns with explosives conducted by civil institutions, the high cost of access to explosives and to suitable test fields does not allow testing of a large number of samples. For this reason, the experimental scatter is usually not controlled. However, in reinforced concrete elements subjected to explosions the experimental scatter is very pronounced.
This is due, first, to the heterogeneity of concrete and, second, to the difficulty inherent in testing with explosives, for reasons such as difficulties with the boundary conditions, variability of the explosive, or even changes in atmospheric conditions. To overcome these drawbacks, in this thesis we have designed a novel device that allows up to four concrete slabs to be tested under the same detonation, which, apart from providing a statistically representative number of samples, represents a significant cost saving. A total of 28 slabs, both reinforced and plain concrete and made with two different concrete mixes, were tested using this device. Besides experimental data, it is also important to have computational tools for the analysis and design of structures subjected to explosions. Although several analytical methods exist, numerical simulation techniques nowadays represent the most advanced and versatile alternative for the assessment of structural elements subjected to impulsive loading. However, to obtain reliable results it is crucial to have material constitutive models that take into account the parameters governing the behavior for the load case under study. In this regard, it is noteworthy that most constitutive models developed for concrete at high strain rates come from the ballistic field, which is dominated by large compressive stresses in the local region affected by the impact. In concrete elements subjected to an explosion, the compressive stresses are much more moderate, and tensile stresses usually cause material failure. This thesis discusses the validity of some of the available models, confirming that the parameters governing the failure of reinforced concrete slabs subjected to blast are the tensile strength and the post-failure softening behavior.
Based on these results, we have developed a constitutive model for concrete at high strain rates that only takes tensile failure into account. This model is based on the embedded Cohesive Crack Model with Strong Discontinuity Approach developed by Planas and Sancho, which has proven its ability to predict the tensile fracture of plain concrete elements. The model has been modified for implementation in the commercial explicit-integration program LS-DYNA, using hexahedral finite elements and incorporating strain-rate dependence to allow its use in the dynamic domain. The model is strictly local and requires neither remeshing nor prior knowledge of the crack path. This constitutive model has been used to simulate two experimental campaigns, confirming the hypothesis that the failure of concrete elements subjected to explosions is governed by their tensile response, with the softening behavior of concrete being of particular relevance.
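The cohesive-crack approach relates the stress transmitted across a crack to the crack opening through a softening curve, which at high strain rates can be scaled by a dynamic increase factor (DIF). The following minimal sketch illustrates the idea; the exponential softening shape, the power-law DIF and all numerical values are illustrative assumptions, not the calibrated model of the thesis:

```python
import math

def cohesive_stress(w, ft=3.5e6, gf=100.0, strain_rate=1.0,
                    ref_rate=1e-6, dif_exponent=0.018):
    """Cohesive stress (Pa) across a crack of opening w (m).

    Exponential softening sigma = ft * exp(-ft * w / Gf) is a common
    choice for plain concrete; the power-law dynamic increase factor
    (DIF) applied to the tensile strength is an illustrative assumption.
    ft: static tensile strength (Pa), gf: fracture energy (N/m).
    """
    dif = (strain_rate / ref_rate) ** dif_exponent if strain_rate > ref_rate else 1.0
    ft_dyn = ft * dif  # rate-enhanced tensile strength
    return ft_dyn * math.exp(-ft_dyn * w / gf)

# At zero opening the crack transmits the (rate-enhanced) tensile
# strength; the transmitted stress decays monotonically as it opens.
```

In a finite element implementation a law of this kind is evaluated at each cracked element, with the crack embedded in the element so that no remeshing is needed.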
Abstract:
The latest technology and architectural trends have significantly broadened the use of a wide variety of glass products in construction which, depending on their characteristics, make it possible to design and verify structural glass elements under safe conditions. This paper presents the evaluation and analysis of the damping properties of rectangular laminated glass plates of 1.938 m x 0.876 m with different thicknesses depending on the number of PVB interlayers. By means of numerical simulation and experimental verification using modal analysis, the natural frequencies and damping of the glass plates were calculated, both under free boundary conditions and under the operational conditions of the impact test equipment used in the experimental program, as specified by the European standard UNE-EN 12600:2003.
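As a rough order-of-magnitude cross-check for natural frequencies of this kind of plate, classical Kirchhoff thin-plate theory gives a closed-form result for the simply supported case (the free boundary conditions of the test require a numerical model). The glass properties, the 10 mm monolithic thickness and the simply supported assumption below are illustrative, not the laminated build-up of the paper:

```python
import math

def plate_frequency(m, n, a=1.938, b=0.876, h=0.010,
                    E=70e9, nu=0.22, rho=2500.0):
    """Natural frequency (Hz) of mode (m, n) of a simply supported
    thin rectangular plate (classical Kirchhoff theory).

    E, nu, rho are typical soda-lime glass values (assumed); laminated
    PVB plates need an effective stiffness instead of a monolithic h.
    """
    D = E * h**3 / (12.0 * (1.0 - nu**2))            # flexural rigidity
    k = (m * math.pi / a)**2 + (n * math.pi / b)**2  # modal wavenumber
    return (k / (2.0 * math.pi)) * math.sqrt(D / (rho * h))
```

For the plate dimensions above this places the fundamental mode in the tens of hertz, the range in which modal tests of such plates are typically run.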
Abstract:
Transport of radioactive waste in Spain is becoming an issue of renewed interest, owing to the increased mobility of these materials that can be expected after the construction and operation of the country's planned central repository in the near future. Such residues will mainly belong to the medium- and high-activity classes and have raised concerns about the safety of the operations, the radiological protection of individuals, compliance with legal regulations and their environmental consequences of all kinds. In this study, information relevant to assessing the radiological risk of road transport was taken into account: the sources and destinations of the radioactive shipments, the amount of travel involved, the preferred routes and the populations affected, the characterization of the residues and containers, their corresponding testing, etc. These data were supplied by different organizations directly involved in these activities, such as the nuclear power stations, the companies in charge of radioactive transport and the enterprises responsible for inspection and control, as well as the government institutions responsible for selecting and locating the storage facility and for other decisions on the country's nuclear policy. We have thus developed a program for processing the data such that, by entering the radiation levels at one meter from the transported load and choosing a particular route, the application can calculate the corresponding radiological effects, such as the estimated global impact, its relevance to the population in general or to people living and driving near the main road routes, the doses received by the most exposed individuals (e.g. the workers loading or driving the vehicle), and the probability of detriment to human health.
The results of this work could help in better understanding and managing these activities and their related impacts; at the same time, the reports generated by the application are of particular interest as innovative, complementary information to the current legal documentation required for transporting radioactive waste in the country in accordance with international safety rules (such as those of the IAEA and the ADR). Although the main studies are still in progress, as the definitive location of the Spanish storage facility has not yet been decided, preliminary results with the existing transports of medium-activity residues indicate that the radiological impact of routine operations is very low. Nevertheless, the management of these transports is complex and laborious, so it is worth progressing further in the analysis and quantification of this kind of event, which constitutes one of the main objectives of the present study of radioactive road transport in Spain.
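The dose estimate described above starts from the radiation level measured at one meter from the load. A minimal sketch of the kind of calculation involved, using the point-source inverse-square approximation, is shown below; the function name and numbers are illustrative, not the application's actual algorithm, and shielding, geometry and regulatory dose coefficients are ignored:

```python
def dose_received(rate_at_1m_uSv_h, distance_m, exposure_h):
    """Estimated dose (microsievert) for a person at distance_m (meters)
    from the load for exposure_h hours, given the dose rate measured at
    1 m from the load (microsievert/hour).

    Illustrative point-source inverse-square sketch only.
    """
    if distance_m < 1.0:
        distance_m = 1.0  # 1 m is the measured reference point
    return rate_at_1m_uSv_h * exposure_h / distance_m**2

# A driver 2 m from a load reading 20 uSv/h at 1 m, for a 5-hour trip:
# 20 * 5 / 2**2 = 25 uSv.
```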
Abstract:
Recently, broadcast 3D video content has reached households with the first generation of 3DTV. However, few studies have analyzed the Quality of Experience (QoE) perceived by end-users in this scenario. This paper studies the impact of transmission errors in 3DTV, considering that the video is delivered in side-by-side format over a conventional packet-based network. For this purpose, a novel evaluation methodology based on standard single-stimulus methods, designed to keep the viewing conditions as close as possible to the home environment, has been proposed. The effects of packet losses on monoscopic and stereoscopic videos are compared from the results of subjective assessment tests. Other aspects concerning 3D content, such as naturalness, sense of presence and visual fatigue, were also measured. The results show that although the final perceived QoE is acceptable, some errors cause important binocular rivalry and, therefore, substantial visual discomfort.
Abstract:
Agronomic management in Ciudad Real, a province in central Spain, is characteristic of semi-arid cropped areas whose water supplies have a high nitrate (NO3−) content due to environmental degradation. This situation is aggravated by the existence of a restrictive subsurface layer of 'caliche', or hardpan, at a depth of 0.60 m. Under these circumstances, fertigation rates, including nitrogen (N) fertilizer schedules, must be carefully calibrated to optimize melon yields while minimizing N pollution and water use. Such optimization was sought by fertilizing with different doses of N and irrigating at 100% of the ETc (crop evapotranspiration), adjusted for this crop and area. The N content of the four fertilizer doses used was 0, 55, 82 and 109 kg N ha−1. Due to the NO3− content of the irrigation water, however, the actual N content was 30 kg ha−1 higher in all four treatments, which were repeated in two different years. The results showed a correlation between melon plant N uptake and drainage (Dr), which in turn affects the amount of N leached, as well as a correlation between Dr and LAI (leaf area index) for each treatment. A fertilizer factor was estimated by two methods, from the difference in Dr and in the LAI ratio with respect to the maximum N dose, to correct ETc on the basis of the N dose. In both years the correction to the adjusted evapotranspiration reached 42–49 mm in the vegetative period, depending on the method, and was not significant in the senescent period. Finally, a growth curve relating N uptake to plant dry weight (DW) for each treatment was defined to confirm that the observed higher plant vigour, reflected in higher LAI and reduced Dr, was due mainly to the higher N doses.
Abstract:
Damage models based on Continuum Damage Mechanics (CDM) explicitly include the coupling between damage and mechanical behavior and are therefore consistent with the definition of damage as a phenomenon with mechanical consequences. However, such models are characterized by their complexity. Using the concept of lumped models, simplifications of the coupled models have been proposed in the literature to adapt them to the study of beams and frames. On the other hand, in most of these coupled models damage is associated only with the damage energy release rate, which is shown to be the elastic strain energy. Accordingly, damage is a function of the maximum amplitude of cyclic deformation but does not depend on the number of cycles; low-cycle effects are therefore not taken into account. Starting from the simplified model proposed by Flórez-López, the purpose of this paper is to present a formulation that takes into account the degradation produced not only by peak values but also by cumulative effects such as low-cycle fatigue. To this end, the classical damage dissipative potential, based on the concept of damage energy release rate, is modified with a fatigue function in order to include cumulative effects. The fatigue function is determined through parameters such as the cumulative rotation, the total rotation and the number of cycles to failure. These parameters can be measured or identified physically through the characteristics of the RC member. The main advantage of the proposed model is thus the possibility of simulating low-cycle fatigue behavior without introducing parameters that lack a suitable physical meaning. The good performance of the proposed model is shown through a comparison between numerical and test results under cyclic loading.
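The key idea above is that damage should grow not only with the peak rotation amplitude but also with the number of cycles. A hedged sketch of cycle-by-cycle damage accumulation in the spirit of a lumped damage model is given below; the Miner-type increment and all parameters are illustrative assumptions, not the Flórez-López dissipative-potential formulation itself:

```python
def accumulate_damage(cycle_rotations, rotation_at_failure,
                      cycles_to_failure):
    """Accumulate a scalar damage variable d in [0, 1] over load cycles.

    Each cycle contributes a Miner-type increment weighted by how close
    its rotation amplitude is to the monotonic failure rotation.  This
    is an illustrative lumped-damage sketch only.
    """
    d = 0.0
    for rot in cycle_rotations:
        amplitude_ratio = min(abs(rot) / rotation_at_failure, 1.0)
        d += amplitude_ratio / cycles_to_failure  # per-cycle increment
        if d >= 1.0:                              # member has failed
            return 1.0
    return d

# Ten cycles at half the failure rotation with N_f = 100 accumulate
# d = 10 * 0.5 / 100 = 0.05: damage now depends on the cycle count,
# not only on the peak amplitude.
```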
Abstract:
The occurrence of fatigue has been extensively researched in steel and other metallic materials; it is, however, not as well understood in concrete. This produces a lack of uniformity in the approach to and process of verifying concrete structures for the ultimate limit state of fatigue. As more research is conducted and more is known about the parameters that cause, propagate and indirectly affect fatigue in concrete, they are incorporated into design guides around the world. Nevertheless, this ultimate limit state verification is not addressed equally by the various design governing bodies.
This report presents a baseline understanding of what the phenomenon of fatigue is, what causes it, and what loading or material conditions amplify or reduce the likelihood of fatigue failure. Four different design codes are presented, and their verification processes have been examined, compared and evaluated both qualitatively and quantitatively. Using a wind turbine tower structure as a case study, this report presents calculated results following the verification processes as instructed in the respective reference codes.
Abstract:
Based on two research projects, a device for testing the response to impact of fruits and related materials has been designed and tested during the last three years. As it is not directly related to potatoes, this contribution focuses mainly on the principles of impact and static loading, on the description of the device, and on the types of results obtained so far for different fruits.
Abstract:
This paper describes the potential impact of social media and new technologies in secondary education. The case study was designed for the drama and theatre subject. A wide set of tools such as social networks, blogs, the internet, multimedia content, the local press and other promotional channels is used to increase students' motivation. The experiment was carried out at the high school IES Al-Satt, located in Algete in the Comunidad de Madrid. The students in the theatre group had a low academic level: 80% of them had previously repeated at least one grade, and half of them came from programs for students with learning difficulties and were at risk of social exclusion. This action is supported by higher and secondary education professors and teachers who seek to adopt networked media technologies as new tools to improve students' academic results and degree of involvement. The results of the experiment have been excellent, based on the satisfactory opinions obtained from a survey answered by the students at the end of the course and also revealed by the analytics taken from the different social networks. This project is a pioneer in the introduction and use of new technologies in secondary schools in Spain.
Abstract:
Numerical analysis is a suitable tool for the design of complex reinforced concrete structures under extreme impulsive loadings such as impacts or close-range explosions. Such events may be the result of terrorist attacks. Reinforced concrete is commonly used for buildings and infrastructure. For this reason, the ability to run accurate numerical simulations of concrete elements subjected to blast loading is needed. In this context, reliable constitutive models for concrete are of paramount importance. In this research, numerical simulations using two different constitutive models for concrete (the Continuous Surface Cap Model and the Brittle Damage Model) have been carried out in LS-DYNA, taking two experimental benchmark tests as references. The results of the numerical simulations with these constitutive models show different abilities to accurately represent the structural response of the reinforced concrete elements studied.
Abstract:
The design and development of a new method for performing fracture toughness tests under impulsive loadings using explosives is presented. The experimental set-up was complemented with pressure transducers and strain gauges in order to measure, respectively, the blast wave that reached the specimen and the loading history. Fracture toughness tests on a 7017-T73 aluminium alloy were carried out using this device under impulsive loadings. Previous studies reported that this aluminium alloy has very little strain-rate sensitivity, which made it an ideal candidate for comparison at different loading rates. The fracture-initiation toughness values of the 7017-T73 aluminium alloy obtained under impulsive loadings did not differ significantly from those measured at lower loading rates. Therefore, the method and device developed for measuring dynamic fracture-initiation toughness under impulsive loadings were considered suitable for this purpose.
Abstract:
Data centers are easily found in every sector of the worldwide economy. They consist of tens of thousands of servers, serving millions of users globally, 24 hours a day and 365 days a year. In recent years, e-Science applications such as e-Health or Smart Cities have experienced significant development. The need to deal efficiently with the computational needs of next-generation applications, together with the increasing demand for resources in traditional applications, has facilitated the rapid proliferation and growth of data centers.
A drawback of this capacity growth has been the rapid and dramatic increase in the energy consumption of these facilities. In 2010, data center electricity represented 1.3% of all electricity use in the world. In 2012 alone, global data center power demand grew 63% to 38 GW. A further rise of 17%, to 43 GW, was estimated for 2013. Moreover, data centers are responsible for more than 2% of total carbon dioxide emissions. This PhD thesis addresses the energy challenge by proposing proactive and reactive thermal- and energy-aware optimization techniques that contribute to more efficient data centers. This work develops energy models and uses knowledge about the energy demand of the workload to be executed and the computational and cooling resources available at the data center to optimize energy consumption. Moreover, data centers are considered a crucial element within their application framework, optimizing not only the energy consumption of the facility but the global energy consumption of the application. The main contributors to the energy consumption of a data center are the computing power drawn by IT equipment and the cooling power needed to keep the servers within a temperature range that ensures safe operation. Because of the cubic relation between fan power and fan speed, solutions based on over-provisioning cold air to the server usually lead to inefficiencies. On the other hand, higher chip temperatures lead to higher leakage power because of the exponential dependence of leakage on temperature. Moreover, workload characteristics as well as allocation policies have an important impact on the leakage-cooling tradeoffs. The first key contribution of this work is the development of power and temperature models that accurately describe the leakage-cooling tradeoffs at the server level, and the proposal of strategies to minimize server energy via joint cooling and workload management from a multivariate perspective.
When scaling to the data center level, a similar behavior in terms of leakage-temperature tradeoffs can be observed. As room temperature rises, the efficiency of the data room cooling units improves. However, as room temperature increases, CPU temperature rises and so does leakage power. Moreover, the thermal dynamics of a data room exhibit unbalanced patterns due to both the workload allocation and the heterogeneity of the computing equipment. The second main contribution is the proposal of thermal- and heterogeneity-aware workload management techniques that jointly optimize the allocation of computation and cooling to servers. These strategies need to be backed by flexible room-level models, able to work at runtime, that describe the system from a high-level perspective. Within the framework of next-generation applications, decisions taken at this scope can have a dramatic impact on the energy consumption of lower abstraction levels, i.e. the data center facility. It is important to consider the relationships between all the computational agents involved in the problem so that they can cooperate to achieve the common goal of reducing the energy consumed by the overall system. The third main contribution is the energy optimization of the overall application by evaluating the energy costs of performing part of the processing in any of the different abstraction layers, from the node to the data center, via workload management and off-loading techniques. In summary, the work presented in this PhD thesis makes contributions to leakage- and cooling-aware server modeling and optimization, data center thermal modeling and heterogeneity-aware data center resource allocation, and develops mechanisms for the energy optimization of next-generation applications from a multi-layer perspective.
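The server-level tradeoff described above (fan power growing with the cube of fan speed while leakage grows exponentially with chip temperature) implies that total power has an interior minimum in fan speed. The sketch below illustrates this by sweeping fan speed; all constants and the thermal model are illustrative assumptions, not the measured server models of the thesis:

```python
import math

def total_power(fan_speed, base_temp=80.0, cooling_coeff=30.0,
                fan_coeff=1e-9, leak_a=5.0, leak_b=0.05):
    """Server power (W) as a function of normalized fan speed in (0, 1].

    Fan power follows the cubic fan law; chip temperature drops
    linearly with airflow; leakage grows exponentially with chip
    temperature.  All constants are illustrative, not measured data.
    """
    rpm = 1000.0 + 5000.0 * fan_speed              # assumed fan range
    fan_power = fan_coeff * rpm**3                 # cubic fan law
    chip_temp = base_temp - cooling_coeff * fan_speed
    leakage = leak_a * math.exp(leak_b * chip_temp)
    return fan_power + leakage

def best_fan_speed(steps=100):
    """Sweep fan speed and return the setting minimizing total power."""
    candidates = [i / steps for i in range(1, steps + 1)]
    return min(candidates, key=total_power)
```

With these assumed constants the minimum sits at an intermediate fan speed: spinning the fans slowly lets leakage dominate, while over-provisioning airflow wastes cubic fan power, which is the multivariate tradeoff the thesis exploits.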