Abstract:
Fog oases, locally named Lomas, are distributed in a fragmented way along the western coast of Chile and Peru (South America) between ~6°S and 30°S, following an altitudinal gradient determined by a fog layer. This fragmentation has been attributed to the hyperaridity of the desert. However, periodic climatic events influence the 'normal seasonality' of this ecosystem through a higher-than-average water input that triggers plant responses (e.g. primary productivity and phenology). The impact of the climatic oscillation may vary according to the season (wet/dry). This thesis evaluates the potential effect of climate oscillations, such as the El Niño Southern Oscillation (ENSO), through the analysis of the vegetation of this ecosystem, following different approaches. Chapters two and three present the analysis of fog oases along the Peruvian and Chilean deserts. The objectives are: 1) to explain the floristic connection of fog oases by analysing differences in their taxon composition and the phylogenetic affinities among them; and 2) to explore the climate variables related to ENSO which likely affect fog production, and the responses of Lomas vegetation (composition, productivity, distribution) to climate patterns during ENSO events. Chapters four and five describe a fog oasis in southern Peru during the 2008-2010 period. The objectives are: 3) to describe and create a new vegetation map of the Lomas vegetation using remote sensing analysis supported by field survey data; and 4) to identify vegetation change during the dry season. The first part of our results shows that: 1) there are three significantly different groups of Lomas (Northern Peru, Southern Peru, and Chile), with a significant phylogenetic divergence among them. The species composition reveals a latitudinal gradient of plant assemblages. The species origins, growth-form typologies, and geographic positions also reinforce the differences among groups. 2) Contradictory results have emerged from studies of low-cloud anomalies and fog collection during El Niño (EN): EN increases water availability in fog oases at a time when fog should be less frequent owing to the reduced amount of low clouds and stratocumulus. Because a minor role of fog during EN is expected, measurements of fog-water collection during EN are likely capturing drizzle and fog at the same time. Although recent studies on fog oases have shown some relationship with ENSO, reported vegetation responses have been largely based on descriptive data, and the absence of long temporal records limits the establishment of a direct relationship with climatic oscillations. The second part of the results shows that: 3) five classes of distinct spectral values correspond to the main land covers of the Lomas when a Vegetation Index (VI) is used. The case-study area is characterised by shrubs and trees with variable cover (dense, semi-dense and open). A secondary area is covered by small shrubs where the dominant tree species is not present. The cactus areas and the old terraces with open vegetation were not identified with the VI. Agriculture is present in the area. Finally, 4) in contrast to the dry seasons of 2008 and 2009, a higher VI was obtained during the dry season of 2010: the VI increased up to three times its average value, showing a clear change in spectral signal that coincided with the ENSO event of that period.
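The abstract does not name the specific Vegetation Index used; a common choice in this kind of land-cover mapping is the Normalized Difference Vegetation Index (NDVI), computed from the red and near-infrared reflectances:

```latex
% NDVI, shown for illustration only -- the thesis may use a different index.
\begin{equation}
  \mathrm{NDVI} = \frac{\rho_{\mathrm{NIR}} - \rho_{\mathrm{red}}}{\rho_{\mathrm{NIR}} + \rho_{\mathrm{red}}}
\end{equation}
```

Dense green vegetation pushes the index toward 1 and bare soil toward 0, which is what makes a tripling of the VI during the 2010 dry season such a clear spectral signal.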
Abstract:
An extensive sample (2%) of private vehicles in Italy is equipped with a GPS device that periodically measures their position and dynamical state for insurance purposes. Access to this type of data allows the development of theoretical and practical applications of great interest: the real-time reconstruction of the traffic state in a given region, the development of accurate models of vehicle dynamics, and the study of the cognitive dynamics of drivers. For these applications to be possible, we first need the ability to reconstruct the paths taken by vehicles on the road network from the raw GPS data. These data are affected by positioning errors and are often widely spaced (~2 km apart), so the task of path identification is not straightforward. This thesis describes the approach we followed to reliably identify vehicle paths from this kind of low-sampling-rate data. The problem of matching data points to roads is solved with a Bayesian maximum-likelihood approach, while the identification of the path taken between two consecutive GPS measurements is performed with a purpose-built optimal routing algorithm based on the A* algorithm. The procedure was applied to an offline urban data sample and proved to be robust and accurate. Future developments will extend the procedure to real-time execution and nation-wide coverage.
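For readers unfamiliar with the routing step, below is a minimal sketch of A* search on a road graph; the graph, coordinates and weights are illustrative, not the thesis's actual road network or cost model.

```python
# Minimal A* sketch for the path-identification step described above.
import heapq
import math

def euclidean(a, b):
    """Admissible heuristic: straight-line distance between node coordinates."""
    return math.hypot(a[0] - b[0], a[1] - b[1])

def a_star(graph, coords, start, goal):
    """A* shortest path; graph[u] yields (v, edge_length) pairs."""
    open_set = [(euclidean(coords[start], coords[goal]), 0.0, start, [start])]
    best_g = {start: 0.0}
    while open_set:
        f, g, node, path = heapq.heappop(open_set)
        if node == goal:
            return path, g
        for nxt, w in graph.get(node, ()):
            g2 = g + w
            if g2 < best_g.get(nxt, float("inf")):
                best_g[nxt] = g2
                h = euclidean(coords[nxt], coords[goal])
                heapq.heappush(open_set, (g2 + h, g2, nxt, path + [nxt]))
    return None, float("inf")

# Toy road network: two candidate routes between consecutive GPS fixes.
coords = {"A": (0, 0), "B": (1, 1), "C": (2, 0), "D": (1, -1)}
graph = {"A": [("B", 1.5), ("D", 1.5)], "B": [("C", 1.5)], "D": [("C", 1.6)]}
print(a_star(graph, coords, "A", "C"))  # -> (['A', 'B', 'C'], 3.0)
```

The Euclidean heuristic never overestimates the remaining road distance, so the returned path is optimal; in a full pipeline, map matching would first snap each GPS fix to candidate nodes or edges before invoking such a search.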
Abstract:
The cooperative motion algorithm was applied to the molecular simulation of complex chemical reactions and macromolecular orientation phenomena in confined geometries. First, we investigated the case of equilibrium step-growth polymerization in lamellae, pores and droplets. In such systems, confinement was quantified by the area/volume ratio. Results showed that, as confinement increases, polymerization becomes slower and the average molecular weight (MW) at equilibrium decreases. This is caused by the steric hindrance imposed by the walls, since growth reactions in their close vicinity have fewer realization possibilities. For reactions inside droplets at surfaces, contact angles usually increased after polymerization to compensate for the conformational restrictions imposed by confinement upon growing chains. In a second investigation, we considered monodisperse and chemically inert chains and focused on the effect of confinement on chain orientation. Simulations of thin polymer films showed that chains are preferentially oriented parallel to the surface. Orientation increases as MW increases or as the film thickness d decreases, in qualitative agreement with experiments on low-MW polystyrene. It is demonstrated that the orientation of simulated chains results from a size effect, being a function of the ratio between the chain end-to-end distance and d. This study was complemented by experiments with thin films of pi-conjugated polymers such as MEH-PPV. Anisotropic refractive index measurements were used to analyze chain orientation. With increasing MW, orientation is enhanced. However, for MEH-PPV, orientation does not depend on d even at thicknesses much larger than the chain contour length. This contradiction with the simulations was discussed by considering additional causes of orientation, for instance the appearance of nematic-like ordering in polymer films. In another investigation, we simulated droplet evaporation at soluble surfaces and reproduced the formation of wells surrounded by ring-like deposits at the surface, as observed experimentally. In our simulations, swollen substrate particles migrate to the border of the droplet to minimize the contact between solvent and vacuum, which is energetically most costly. Deposit formation at the beginning of evaporation results in pinning of the droplet. When polymer chains at the substrate surface have a strong uniaxial orientation, the resulting pattern is no longer similar to a ring but to a pair of half-moons. In a final stage, as an extension of the model developed for polymerization in nanoreactors, we studied the effect of geometrical confinement on a hypothetical oscillating reaction following the mechanism of the so-called periodically forced Brusselator. It was shown that a reaction which is chaotic in the bulk may be driven to periodicity by confinement, and vice versa, opening new perspectives for chaos control.
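For reference, the periodically forced Brusselator mentioned in the final part is typically written as the classic two-species Brusselator kinetics plus a sinusoidal forcing term; one common form (the exact forcing used in the thesis may differ) is:

```latex
\begin{align}
  \frac{dX}{dt} &= A - (B+1)\,X + X^2 Y + \alpha\cos(\omega t),\\
  \frac{dY}{dt} &= B\,X - X^2 Y.
\end{align}
```

Depending on the forcing amplitude alpha and frequency omega, the system can respond periodically, quasi-periodically or chaotically, which is the behavior the confinement study probes.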
Abstract:
Cor-Ten is a particular kind of steel belonging to the low-alloy steels; thanks to its aesthetic features and resistance to atmospheric corrosion, this material is widely used in architectural, artistic and infrastructural applications. After environmental exposure, Cor-Ten steel exhibits the characteristic ability to protect itself from corrosion through the development of a stable and adherent protective layer. However, some environmental factors can influence the formation and stability of this patina. In particular, exposure of Cor-Ten to polluted atmospheres (NOx, SOx, O3) or coastal areas (marine spray) may damage the protective layer and, as a consequence, cause a release of alloying metals, which can accumulate near the structures. Some of these metals, such as Cr and Ni, can be very dangerous for soils and water because of their high toxicity. The aim of this work was to study the corrosion behavior of Cor-Ten exposed at an urban-coastal site (Rimini, Italy). Three different kinds of commercial surface finish (bare and pre-patinated, with or without a beeswax coating) were examined, in both sheltered and unsheltered exposure conditions. Wet depositions brushing the specimen surfaces (leaching solutions) were collected monthly and analyzed to evaluate the extent of metal release and the form in which the metals leave the surface, for example as water-soluble compounds or non-adherent corrosion products. Five alloying metals (Fe, Cu, Cr, Mn and Ni) and nine ions (Cl-, NO3-, NO2-, SO42-, Na+, Ca2+, K+, Mg2+, NH4+) were determined by Atomic Absorption Spectroscopy and Ion Chromatography, respectively. Furthermore, the evolution and behavior of the patina were followed periodically by surface investigations (SEM-EDS and Raman Spectroscopy). After two years of exposure, the results show that bare Cor-Ten, cheaper than the other analyzed specimens, undergoes the greatest mass variation, yet its metal release is comparable to that of the pre-patinated samples. The behavior of pre-patinated steel, with or without the beeswax coating, does not show any particular difference. This exposure environment does not allow a complete stabilization of the patina; nevertheless, an estimate of the metal release after 10 years of exposure indicates that the environmental impact of Cor-Ten is very low: for example, the release of chromium in the soluble fraction is less than 10 mg for an exposed wall of 10 m2.
Abstract:
To this day, the question remains open why, at the origin of the universe, matter was favoured over antimatter, giving rise to today's matter-dominated universe. One prerequisite for the development of this matter-antimatter asymmetry is the violation of the combined charge (C) and parity (P) symmetry, i.e. CP violation. CP violation can manifest itself, among other places, in the decays $K^\pm \to \pi^\pm \pi^0 \pi^0$. The NA48/2 collaboration recorded over 200 TB of data on charged-kaon decays during the years 2003 and 2004. In this work, the CP-violating asymmetry of the decays $K^\pm \to \pi^\pm \pi^0 \pi^0$ was measured with more than 90 million selected events from this data set. The Standard Model of particle physics predicts a CP-violating asymmetry of the order of $10^{-6}$ to $10^{-5}$; models beyond the Standard Model, however, allow larger asymmetries. The NA48/2 experiment was designed to limit possible systematic uncertainties. To achieve this, positive and negative kaons were produced simultaneously on one target, and their momentum was restricted to about 60 GeV/c by a beam system with two beam lines. The beams were steered into the decay region overlapping to within a few millimetres. The beam lines of positive and negative kaons, as well as the polarity of the magnet of the momentum spectrometer, were exchanged regularly. This allowed a symmetrization of beam line and detector for positive and negative kaons during the analysis. By forming a quadruple ratio of the four data sets with the different configurations, it could be ensured that all asymmetries generated by the beam line or the detector cancel to first order. To compensate for the different production spectra of positive and negative kaons, an event weighting was performed in this work. The analysis was examined for possible systematic uncertainties; the systematic uncertainties turned out to be considerably smaller than the statistical error. The result for the parameter $A_g$ describing the CP-violating asymmetry is: \begin{equation} A_g = (1.2 \pm 1.7_{\mathrm{(stat)}} \pm 0.7_{\mathrm{(sys)}}) \cdot 10^{-4}. \end{equation} This measurement is almost ten times more precise than previous measurements and agrees with the Standard Model within its uncertainty. Models predicting a larger CP violation in this decay can be excluded.
Abstract:
Finite element techniques for solving the problem of fluid-structure interaction of an elastic solid in a laminar incompressible viscous flow are described. The mathematical problem consists of the Navier-Stokes equations in the Arbitrary Lagrangian-Eulerian formulation coupled with a non-linear structure model, treating the problem as one continuum. The coupling between the structure and the fluid is enforced inside a monolithic framework which solves simultaneously for the fluid and structure unknowns within a unique solver. We used the well-known Crouzeix-Raviart finite element pair for discretization in space and the method of lines for discretization in time. A stability result has been proved for the Backward Euler time-stepping scheme applied to both the fluid and solid parts, combined with the finite element method for the space discretization. The resulting linear systems have been solved by multilevel domain decomposition techniques: our strategy is to solve several local subproblems over subdomain patches using the Schur-complement or GMRES smoother within a multigrid iterative solver. For validation and evaluation of the accuracy of the proposed methodology, we present results for a set of two FSI benchmark configurations which describe the self-induced elastic deformation of a beam attached to a cylinder in a laminar channel flow, allowing stationary as well as periodically oscillating deformations, and for a benchmark proposed by COMSOL Multiphysics in which a narrow vertical structure attached to the bottom wall of a channel bends under the force due to both viscous drag and pressure. Then, as an example of fluid-structure interaction in biomedical problems, we considered the academic numerical test of simulating pressure wave propagation through a straight compliant vessel. All the tests show the applicability and numerical efficiency of our approach for both two-dimensional and three-dimensional problems.
Abstract:
The field of computational neuroscience develops mathematical models to describe neuronal systems, with the aim of better understanding the nervous system. Historically, the integrate-and-fire model, developed by Lapicque in 1907, was the first model describing a neuron. In 1952 Hodgkin and Huxley [8] described the so-called Hodgkin-Huxley model in the article "A Quantitative Description of Membrane Current and Its Application to Conduction and Excitation in Nerve". The Hodgkin-Huxley model is one of the most successful and widely used biological neuron models. Based on experimental data from the squid giant axon, Hodgkin and Huxley developed their mathematical model as a four-dimensional system of first-order ordinary differential equations. One of these equations characterizes the membrane potential as a process in time, whereas the other three equations describe the opening and closing states of the sodium and potassium ion channels. The rate of change of the membrane potential is proportional to the sum of the ionic currents flowing across the membrane and an externally applied current. For various types of external input the membrane potential behaves differently. This thesis considers the following three types of input: (i) Rinzel and Miller [15] calculated an interval of amplitudes of a constant applied current for which the membrane potential spikes repetitively; (ii) Aihara, Matsumoto and Ikegaya [1] reported that, depending on the amplitude and the frequency of a periodic applied current, the membrane potential responds periodically; (iii) Izhikevich [12] stated that brief pulses of positive and negative current with different amplitudes and frequencies can lead to a periodic response of the membrane potential. In chapter 1 the Hodgkin-Huxley model is introduced following Izhikevich [12]. Besides the definition of the model, several biological and physiological notes are made, and further concepts are described through examples. Moreover, the numerical methods used to solve the equations of the Hodgkin-Huxley model in the computer simulations of chapters 2 and 3 are presented. In chapter 2 the statements for the three different inputs (i), (ii) and (iii) are verified, and the periodic behavior for inputs (ii) and (iii) is investigated. In chapter 3 the inputs are embedded in an Ornstein-Uhlenbeck process to study the influence of noise on the results of chapter 2.
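As a concrete illustration of the model described above, here is a minimal sketch of the Hodgkin-Huxley equations under a constant applied current (input type (i)), using the standard squid-axon parameters and plain forward Euler integration; the thesis's own numerical methods and parameter choices may differ.

```python
# Hodgkin-Huxley sketch: constant current input, forward Euler integration.
import math

# Classic HH parameters (mV, mS/cm^2, uF/cm^2), resting potential ~ -65 mV.
C, g_Na, g_K, g_L = 1.0, 120.0, 36.0, 0.3
E_Na, E_K, E_L = 50.0, -77.0, -54.4

def alpha_n(V): return 0.01 * (V + 55) / (1 - math.exp(-(V + 55) / 10))
def beta_n(V):  return 0.125 * math.exp(-(V + 65) / 80)
def alpha_m(V): return 0.1 * (V + 40) / (1 - math.exp(-(V + 40) / 10))
def beta_m(V):  return 4.0 * math.exp(-(V + 65) / 18)
def alpha_h(V): return 0.07 * math.exp(-(V + 65) / 20)
def beta_h(V):  return 1.0 / (1 + math.exp(-(V + 35) / 10))

def simulate(I_ext=10.0, T=50.0, dt=0.01):
    """Integrate the four HH ODEs for T ms; returns the voltage trace."""
    V, n, m, h = -65.0, 0.317, 0.052, 0.596   # resting state
    trace = []
    for _ in range(int(T / dt)):
        I_ion = (g_Na * m**3 * h * (V - E_Na)
                 + g_K * n**4 * (V - E_K) + g_L * (V - E_L))
        V += dt * (I_ext - I_ion) / C          # C dV/dt = I_ext - I_ion
        n += dt * (alpha_n(V) * (1 - n) - beta_n(V) * n)
        m += dt * (alpha_m(V) * (1 - m) - beta_m(V) * m)
        h += dt * (alpha_h(V) * (1 - h) - beta_h(V) * h)
        trace.append(V)
    return trace

print(max(simulate()))  # repetitive spiking: peaks rise well above 0 mV
```

With the constant amplitude I_ext inside the repetitive-spiking interval studied by Rinzel and Miller, the trace shows a sustained spike train; the periodic and pulsed inputs (ii) and (iii) would simply replace I_ext with a time-dependent function.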
Development of a biorefinery scheme for the valorization of olive mill wastewaters and grape pomaces
Abstract:
In the Mediterranean area, olive mill wastewater (OMW) and grape pomace (GP) are among the major agro-industrial wastes produced. These two wastes have a high organic load and high phytotoxicity, so their disposal in the environment can lead to negative effects. Second-generation biorefineries are dedicated to the valorization of biowaste through the production of goods from such residual biomasses; this approach can combine bioremediation with the generation of noble molecules, biomaterials and energy. The main aim of this thesis work was to study the anaerobic digestion of OMW and GP under different operational conditions to produce volatile fatty acids (VFAs) (first-stage aim) and CH4 (second-stage aim). To this end, a packed-bed biofilm reactor (PBBR) was set up to perform the anaerobic acidogenic digestion of the liquid dephenolized stream of OMW (OMWdeph). In parallel, the solid stream of OMW (OMWsolid), previously separated in order to allow the solid-phase extraction of polyphenols, was subjected to anaerobic methanogenic digestion to obtain CH4. The latter experiment was performed in 100 ml Pyrex bottles maintained at different temperatures (55-45-37°C). Alongside these experiments, the anaerobic acidogenic digestion of fermented GP (GPfreshacid) and of dephenolized and fermented GP (GPdephacid) was performed in 100 ml Pyrex bottles to estimate the concentration of VFAs achievable from each of the aforementioned GPs. Finally, the same GP matrices and non-pretreated GP (GPfresh) were digested under anaerobic methanogenic conditions to produce CH4. The anaerobic acidogenic and methanogenic digestion processes of the GPs lasted about 33 days, whereas the anaerobic acidogenic and methanogenic digestion processes of the OMWs lasted about 121 and 60 days, respectively. Each experiment was monitored periodically by analysing the volume and composition of the produced biogas and the VFA concentration. Results showed that VFAs were produced in higher concentrations from GP than from OMWdeph: the overall concentration of VFAs was approximately 39.5 gCOD L-1 from GPfreshacid, 29 gCOD L-1 from GPdephacid, and 8.7 gCOD L-1 from OMWdeph. Concerning CH4 production, OMWsolid reached a higher biochemical methane potential (BMP) at a thermophilic temperature (55°C) than at mesophilic ones (37-45°C), with a value of about 358.7 mlCH4 gSVsub-1. In contrast, GPfresh reached its highest BMP at a mesophilic temperature, about 207.3 mlCH4 gSVsub-1, followed by GPfreshacid with about 192.6 mlCH4 gSVsub-1 and lastly GPdephacid with about 102.2 mlCH4 gSVsub-1. In summary, based on the gathered results, GP seems to be a better carbon source than OMW for acidogenic and methanogenic microorganisms, because higher amounts of VFAs and CH4 were produced in the anaerobic digestion of GP than of OMW. In addition to these products, polyphenols were extracted by means of a solid-phase extraction (SPE) procedure by another research group, and the VFAs were used for the production of biopolymers, in particular polyhydroxyalkanoates (PHAs), by the same research group, in which I was involved.
Abstract:
Practice guidelines are systematically developed statements and recommendations that assist physicians and patients in making decisions about appropriate health care measures for specific clinical circumstances, taking into account specific national health care structures. The 1st revision of the S2k guideline of the German Sepsis Society, in collaboration with 17 German medical scientific societies and one self-help group, provides state-of-the-art information (results of controlled clinical trials and expert knowledge) on the effective and appropriate medical care (prevention, diagnosis, therapy and follow-up care) of critically ill patients with severe sepsis or septic shock. The guideline was developed according to the "German Instrument for Methodological Guideline Appraisal" of the Association of the Scientific Medical Societies (AWMF). In view of the inevitable advancements in scientific knowledge and technical expertise, revisions, updates and amendments must be initiated periodically. The guideline recommendations may not be applicable under all circumstances; it rests with the clinician to decide whether a certain recommendation should be adopted, taking into consideration the unique set of clinical facts presented by each individual patient as well as the available resources.
Abstract:
Data gathering, either for event recognition or for monitoring applications, is the primary purpose of sensor network deployments. In many cases, data is acquired periodically and autonomously and simply logged onto secondary storage (e.g. flash memory), either for delayed offline analysis or for on-demand burst transfer. Moreover, operational data such as connectivity information and node and network state is typically kept as well. Naturally, measurement and/or connectivity logging comes at a cost, and the space for doing so is limited. Finding a good representative model for the data and providing clever coding of information, i.e. data compression, may be a means of making the best use of the available space. In this paper, we explore the design space of data compression for wireless sensor and mesh networks by profiling common, publicly available algorithms. Several goals, such as low overhead in terms of memory use and compression time as well as a decent compression ratio, have to be balanced in order to find a simple yet effective compression scheme.
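The kind of profiling described here can be sketched in a few lines: feed the same log-like payload to several publicly available compressors and record compression ratio and time. The synthetic data below stands in for real sensor logs, and the three library compressors are examples rather than the paper's actual candidate set.

```python
# Profile publicly available compressors on sensor-like log data.
import bz2, lzma, time, zlib

# Synthetic "periodic measurements": slowly varying readings, newline-framed.
samples = "".join(f"{t},{20.0 + 0.01 * (t % 600):.2f}\n" for t in range(10000))
raw = samples.encode()

for name, compress in [("zlib", zlib.compress),
                       ("bz2", bz2.compress),
                       ("lzma", lzma.compress)]:
    t0 = time.perf_counter()
    out = compress(raw)
    dt = time.perf_counter() - t0
    print(f"{name:5s} ratio={len(raw) / len(out):5.1f}  time={dt * 1000:6.1f} ms")
```

On a constrained node, the memory footprint of the compressor state would matter as much as the ratio/time trade-off shown here.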
Abstract:
We present theory and experiments on the dynamics of reaction fronts in two-dimensional, vortex-dominated flows, for both time-independent and periodically driven cases. We find that the front propagation process is controlled by one-sided barriers that are either fixed in the laboratory frame (time-independent flows) or oscillate periodically (periodically driven flows). We call these barriers burning invariant manifolds (BIMs), since their role in front propagation is analogous to that of invariant manifolds in the transport and mixing of passive impurities under advection. Theoretically, the BIMs emerge from a dynamical systems approach when the advection-reaction-diffusion dynamics is recast as an ODE for front element dynamics. Experimentally, we measure the location of BIMs for several laboratory flows and confirm their role as barriers to front propagation.
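Schematically, the front-element ODE referred to above augments passive advection with burning at the front speed v_0 along the element's local unit normal (the evolution of the normal itself couples to the velocity gradients of u); in sketch form:

```latex
% Sketch of the front-element position dynamics; the companion equation for
% the unit normal \hat{\mathbf{n}} is omitted here.
\begin{equation}
  \dot{\mathbf{r}} = \mathbf{u}(\mathbf{r}, t) + v_0\, \hat{\mathbf{n}}
\end{equation}
```

Setting v_0 = 0 recovers passive tracer advection, which is why the BIMs generalize the invariant manifolds of passive transport.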
Abstract:
We test for differences in financial reporting quality between companies that are required to file periodically with the SEC and those that are exempted from filing reports with the SEC under Rule 12g3-2(b). We examine three earnings-quality measures: conservatism, abnormal accruals, and the predictability of earnings. For all three measures, our results show that financial reporting quality differs between companies that file with the SEC and companies exempt from the filing requirements. This paper thus provides empirical evidence of a link between filing with the SEC and financial reporting quality for foreign firms.
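The abstract does not spell out how abnormal accruals are estimated; a standard operationalization in this literature is the Jones (1991) regression, whose residual serves as the abnormal (discretionary) accrual:

```latex
% Jones (1991) model, shown for illustration; the authors' exact
% specification may differ.
\begin{equation}
  \frac{TA_{it}}{A_{i,t-1}} = \alpha_1\,\frac{1}{A_{i,t-1}}
    + \alpha_2\,\frac{\Delta REV_{it}}{A_{i,t-1}}
    + \alpha_3\,\frac{PPE_{it}}{A_{i,t-1}} + \varepsilon_{it}
\end{equation}
```

Here TA denotes total accruals, A lagged total assets, Delta REV the change in revenues, and PPE gross property, plant and equipment.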
Abstract:
Hydrogeomorphic processes are a major threat in many parts of the Alps, where they periodically damage infrastructure, disrupt transportation corridors or even cause loss of life. Nonetheless, records of past torrential activity and analyses of the areas affected during particular events often remain imprecise. The purpose of this study was therefore to reconstruct spatio-temporal patterns of past debris-flow activity in abandoned channels on the forested cone of the Manival torrent (Massif de la Chartreuse, French Prealps). A Digital Elevation Model (DEM) generated by Light Detection and Ranging (LiDAR) was used to identify five abandoned channels and related depositional forms (lobes, lateral levees) on the proximal alluvial fan of the torrent. A total of 156 Scots pine trees (Pinus sylvestris L.) with clear signs of debris-flow events were analyzed and growth disturbances (GD) assessed, such as callus tissue, the onset of compression wood or abrupt growth suppression. In total, 375 GD were identified in the tree-ring samples, pointing to 13 debris-flow events for the period 1931–2008. While debris flows appear to be very common at Manival, they have only rarely propagated outside the main channel over the past 80 years. Furthermore, analysis of the spatial distribution of disturbed trees contributed to the identification of four patterns of debris-flow routing and led to the determination of three preferential breakout locations. Finally, the results of this study demonstrate that the temporal distribution of debris flows has not exhibited significant variations since the beginning of the 20th century.
Abstract:
Due to their high thermal efficiency, diesel engines have excellent fuel economy and have been widely used as a power source for many vehicles. Diesel engines emit less greenhouse gas (carbon dioxide) than gasoline engines; however, they emit large amounts of particulate matter (PM), which can imperil human health. The best way to reduce the particulate matter is the Diesel Particulate Filter (DPF) system, which consists of a wall-flow monolith that traps particulates; the DPF can be periodically regenerated to remove the collected particulates. Estimating the PM mass accumulated in the DPF and the total pressure drop across the filter is very important for determining when to carry out active regeneration of the DPF. In this project, by developing a filtration model and a pressure drop model, we can estimate the PM mass and the total pressure drop; these two models can then be linked with a previously developed regeneration model to predict when to regenerate the filter. The results of this project were: 1) Reproduction of a filtration model and simulation of the filtration process. By studying deep-bed filtration and cake filtration, the stages and the quantity of mass accumulated in the DPF can be estimated. It was found that the filtration efficiency increases faster during deep-bed filtration than during cake filtration. A "unit collector" theory was used in our filtration model, which explains the mechanism of the filtration very well. 2) A parametric study on the pressure drop model for changes in engine exhaust flow rate, deposit layer thickness, and inlet temperature. It was found that five primary variables impact the pressure drop in the DPF: the temperature gradient along the channel, the deposit layer thickness, the deposit layer permeability, the wall thickness, and the wall permeability. 3) Linking the filtration model and the pressure drop model with the regeneration model to determine when to carry out the regeneration of the DPF. It was found that regeneration should be initiated when the cake layer reaches a certain thickness, since a cake layer with either too large or too small an amount of particulates needs more thermal energy to reach a high regeneration efficiency. 4) Formulation of diesel particulate trap regeneration strategies for real-world driving conditions, to find the most desirable conditions for DPF regeneration. It was found that regeneration should be initiated when the vehicle's speed is high and should not be interrupted by vehicle stops. Moreover, the regeneration duration is about 120 seconds and the inlet temperature for regeneration is 710 K.
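The core of such a pressure drop model is Darcy flow through the soot cake and the porous wall; the sketch below shows only these two terms with illustrative values (a full model, presumably including the one in this project, adds channel friction and inlet/outlet losses).

```python
# Hedged sketch: Darcy pressure drop across soot cake + filter wall.
def darcy_dp(mu, u_wall, w_cake, k_cake, w_wall, k_wall):
    """Pressure drop [Pa] for wall-flow velocity u_wall [m/s] through a
    soot cake (thickness w_cake [m], permeability k_cake [m^2]) and the
    porous filter wall (w_wall, k_wall)."""
    return mu * u_wall * (w_cake / k_cake + w_wall / k_wall)

mu = 3.0e-5        # exhaust dynamic viscosity at ~600 K [Pa s], illustrative
u_wall = 0.02      # superficial wall velocity [m/s], illustrative
dp = darcy_dp(mu, u_wall,
              w_cake=50e-6, k_cake=1e-14,   # 50 um soot cake
              w_wall=400e-6, k_wall=1e-12)  # 400 um porous wall
print(f"dp ~ {dp:.0f} Pa")  # cake term dominates: thin layer, low permeability
```

This is why the deposit layer thickness and permeability appear among the five primary variables above: the cake term grows linearly with layer thickness and inversely with its permeability.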
Abstract:
The past decade has seen the energy consumption of servers and Internet Data Centers (IDCs) skyrocket. A recent survey estimated that worldwide spending on servers and cooling has risen above $30 billion and is likely to exceed spending on new server hardware. The rapid rise in energy consumption poses a serious threat to both energy resources and the environment, which makes green computing not only worthwhile but also necessary. This dissertation tackles the challenges of reducing both the energy consumption of server systems and the costs for Online Service Providers (OSPs). Two distinct subsystems account for most of an IDC's power: the server system, which accounts for 56% of the total power consumption of an IDC, and the cooling and humidification systems, which account for about 30%. The server system dominates the energy consumption of an IDC, and its power draw can vary drastically with data center utilization. In this dissertation, we propose three models to achieve energy efficiency in web server clusters: an energy-proportional model, an optimal server allocation and frequency adjustment strategy, and a constrained Markov model. The proposed models combine Dynamic Voltage/Frequency Scaling (DVFS) and Vary-On/Vary-Off (VOVF) mechanisms that work together for greater energy savings, and corresponding strategies are proposed to deal with the transition overheads. We further extend server energy management to the management of IDC costs, helping OSPs to conserve and manage their own electricity costs and lower their carbon emissions. We have developed an optimal energy-aware load dispatching strategy that periodically maps more requests to the locations with lower electricity prices. A carbon emission limit is imposed, and the volatility of the carbon offset market is also considered. Two energy-efficient strategies are applied to the server system and the cooling system, respectively. With the rapid development of cloud services, we also carry out research to reduce server energy in cloud computing environments. In this work, we propose a new live virtual machine (VM) placement scheme that can effectively map VMs to Physical Machines (PMs) with substantial energy savings in a heterogeneous server cluster. A VM/PM mapping probability matrix is constructed, in which each VM request is assigned a probability of running on each PM. The VM/PM mapping probability matrix takes into account resource limitations, VM operation overheads, server reliability and energy efficiency. The evolution of Internet Data Centers and the increasing demands of web services raise great challenges for improving the energy efficiency of IDCs; we also identify several potential areas for future research in each chapter.
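A toy sketch of the price-aware dispatching idea: each period, assign the request load to the cheapest locations first, subject to site capacities. Site names, prices and capacities below are made up, and the dissertation's actual strategy additionally handles the carbon limit and transition overheads.

```python
# Greedy price-aware load dispatching sketch (illustrative only).
def dispatch(load, sites):
    """Assign `load` (requests/s) to sites sorted by electricity price."""
    plan = {}
    for name, price, capacity in sorted(sites, key=lambda s: s[1]):
        share = min(load, capacity)
        if share > 0:
            plan[name] = share
            load -= share
    if load > 0:
        raise ValueError(f"unserved load: {load} requests/s")
    return plan

# (site, $/MWh, capacity in requests/s) -- hypothetical values.
sites = [("east", 42.0, 5000), ("west", 35.0, 3000), ("central", 55.0, 6000)]
print(dispatch(7000, sites))  # -> {'west': 3000, 'east': 4000}
```

In practice the assignment would be re-solved each dispatching period as prices and load forecasts change, which is what makes the strategy "periodic".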