9 results for Server consolidation
at Universidad Politécnica de Madrid
Abstract:
As advanced Cloud services become mainstream, the contribution of data centers to the overall power consumption of modern cities is growing dramatically. The average consumption of a single data center is equivalent to the energy consumption of 25,000 households. Modeling the power consumption of these infrastructures is crucial to anticipate the effects of aggressive optimization policies, but accurate and fast power modeling is a complex challenge for high-end servers that analytical approaches have not yet satisfied. This work proposes an automatic method, based on Multi-Objective Particle Swarm Optimization, for the identification of power models of enterprise servers in Cloud data centers. Our approach, as opposed to previous procedures, not only considers workload consolidation for deriving the power model, but also incorporates other non-traditional factors such as the static power consumption and its dependence on temperature. Our experimental results show that we reach slightly better models than classical approaches while simultaneously simplifying the power model structure and thus the number of sensors needed, which is very promising for short-term energy prediction. This work, validated with real Cloud applications, broadens the possibilities of deriving efficient energy-saving techniques for Cloud facilities.
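A minimal sketch of the idea may help: fit the coefficients of a candidate power model to measured traces with a particle swarm, trading prediction error against model complexity (number of active terms, and hence sensors). The model structure (CPU utilization, temperature and their product), the scalarized objective, and all constants below are illustrative assumptions; the paper uses a full multi-objective formulation and its own model family.

import numpy as np

# Hypothetical power model: P = c0 + c1*u_cpu + c2*T + c3*u_cpu*T
def predict(coeffs, u, T):
    c0, c1, c2, c3 = coeffs
    return c0 + c1 * u + c2 * T + c3 * u * T

def objective(coeffs, u, T, p_measured, sparsity_weight=0.05):
    # Scalarized stand-in for the multi-objective formulation:
    # RMS prediction error plus a penalty on the number of active terms.
    err = np.sqrt(np.mean((predict(coeffs, u, T) - p_measured) ** 2))
    complexity = np.count_nonzero(np.abs(coeffs) > 1e-3)
    return err + sparsity_weight * complexity

def pso_fit(u, T, p_measured, n_particles=30, n_iter=200, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.uniform(-1.0, 1.0, (n_particles, 4))   # positions = model coefficients
    v = np.zeros_like(x)                           # velocities
    pbest = x.copy()
    pbest_val = np.array([objective(p, u, T, p_measured) for p in x])
    gbest = pbest[np.argmin(pbest_val)].copy()
    for _ in range(n_iter):
        r1, r2 = rng.random((2, n_particles, 4))
        v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (gbest - x)
        x = x + v
        vals = np.array([objective(p, u, T, p_measured) for p in x])
        better = vals < pbest_val
        pbest[better], pbest_val[better] = x[better], vals[better]
        gbest = pbest[np.argmin(pbest_val)].copy()
    return gbest

Fed with synchronized traces of CPU utilization, temperature and wall power, pso_fit returns the coefficients of the cheapest model the swarm found; a Pareto archive would replace the fixed sparsity_weight in a genuinely multi-objective version.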
Abstract:
A mathematical formulation for finite strain elastoplastic consolidation of fully saturated soil media is presented. Strong and weak forms of the boundary-value problem are derived using both the material and spatial descriptions. The algorithmic treatment of finite strain elastoplasticity for the solid phase is based on multiplicative decomposition and is coupled with the algorithm for fluid flow via the Kirchhoff pore water pressure. Balance laws are written for the soil-water mixture following the motion of the soil matrix alone. It is shown that the motion of the fluid phase only affects the Jacobian of the solid phase motion, and therefore can be characterized completely by the motion of the soil matrix. Furthermore, it is shown from energy balance considerations that the effective, or intergranular, stress is the appropriate measure of stress for describing the constitutive response of the soil skeleton, since it absorbs all the strain energy generated in the saturated soil-water mixture. Finally, it is shown that the mathematical model is amenable to consistent linearization, and that explicit expressions for the consistent tangent operators can be derived for use in numerical solutions such as those based on the finite element method.
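For orientation, the two standard ingredients the abstract refers to can be written compactly (sign conventions and the precise stress measures used in the paper may differ from this sketch): the multiplicative split of the deformation gradient into elastic and plastic parts, and the effective (intergranular) stress obtained from the total stress and the pore water pressure,

\[
\mathbf{F} = \mathbf{F}^{e}\,\mathbf{F}^{p}, \qquad J = \det\mathbf{F},
\qquad
\boldsymbol{\tau}' = \boldsymbol{\tau} + \theta\,\mathbf{1},
\]

where \(\boldsymbol{\tau}\) is the total Kirchhoff stress, \(\theta\) the Kirchhoff pore water pressure, and \(\boldsymbol{\tau}'\) the effective stress that, as argued in the paper, absorbs all the strain energy of the soil skeleton.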
Abstract:
A mathematical model for finite strain elastoplastic consolidation of fully saturated soil media is implemented into a finite element program. The algorithmic treatment of finite strain elastoplasticity for the solid phase is based on multiplicative decomposition and is coupled with the algorithm for fluid flow via the Kirchhoff pore water pressure. A two-field mixed finite element formulation is employed in which the nodal solid displacements and the nodal pore water pressures are coupled via the linear momentum and mass balance equations. The constitutive model for the solid phase is represented by modified Cam-Clay theory formulated in the Kirchhoff principal stress space, and return mapping is carried out in the strain space defined by the invariants of the elastic logarithmic principal stretches. The constitutive model for fluid flow is represented by a generalized Darcy's law formulated with respect to the current configuration. The finite element model is fully amenable to exact linearization. Numerical examples with and without finite deformation effects are presented to demonstrate the impact of geometric nonlinearity on the predicted responses. The paper concludes with an assessment of the performance of the finite element consolidation model with respect to accuracy and numerical stability.
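Schematically, a two-field mixed formulation of this kind leads, after time discretization and consistent linearization, to a coupled system in the increments of nodal displacements and pore pressures of the generic form below; the actual tangent blocks in the paper include the material and geometric terms derived there, so this is only an orienting sketch:

\[
\begin{bmatrix}
\mathbf{K} & -\mathbf{Q} \\
\mathbf{Q}^{T} & \mathbf{S} + \Delta t\,\mathbf{H}
\end{bmatrix}
\begin{Bmatrix}
\Delta\mathbf{d} \\ \Delta\boldsymbol{\theta}
\end{Bmatrix}
=
\begin{Bmatrix}
\mathbf{r}_{u} \\ \mathbf{r}_{\theta}
\end{Bmatrix},
\]

with \(\mathbf{K}\) the tangent stiffness, \(\mathbf{Q}\) the displacement-pressure coupling matrix, \(\mathbf{H}\) the permeability (flow) matrix from Darcy's law, \(\mathbf{S}\) a storage/compressibility term, and \(\mathbf{r}_{u}\), \(\mathbf{r}_{\theta}\) the linear momentum and mass balance residuals.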
Consolidation of a WSN and Minimax method to rapidly neutralise intruders in strategic installations
Abstract:
Due to the sensitive international situation caused by still-recent terrorist attacks, there is a common need to protect the safety of large spaces such as government buildings, airports and power stations. To address this problem, developments in several research fields, such as video and cognitive audio, decision support systems, human interfaces, computer architecture, communications networks and communications security, should be integrated with the goal of achieving advanced security systems capable of meeting all of the specified requirements and bridging the gap that presently exists in the market. This paper describes the implementation of a decision system for crisis management in infrastructural building security; specifically, a decision system for the management of building intrusions. The positions of unidentified persons are reported with the help of a Wireless Sensor Network (WSN). The goal is an intelligent system capable of making the best decision in real time in order to quickly neutralise one or more intruders who threaten strategic installations. It is assumed that the intruders’ behaviour is inferred from sequences of sensor activations and their fusion. This article presents a general approach to selecting the optimum operation from the available neutralisation strategies based on a Minimax algorithm. The distances among different scenario elements are used to measure the risk of the scene, and a path planning technique is integrated in order to attain good performance. Different actions to be executed over the elements of the scene, such as moving a guard, blocking a door or turning on an alarm, are used to neutralise the crisis. This set of actions executed to stop the crisis is known as the neutralisation strategy. Finally, the system has been tested in simulations of real situations, and the results have been evaluated according to the final state of the intruders. In 86.5% of the cases the system captured the intruders, and in 59.25% of the cases they were intercepted before they reached their objective.
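A toy sketch of the Minimax selection step may clarify the approach: the defender chooses the neutralisation action whose worst-case outcome, over the intruder's possible replies, is best. The grid positions, the two available actions, and the distance-based scoring function below are placeholders invented for illustration; the paper's risk measure is built from WSN-derived positions, a richer action set (blocking doors, turning on alarms) and a path planning technique.

# Hypothetical positions on a grid; the paper infers positions from WSN activations.
guard, intruder, target = (0, 0), (4, 4), (2, 6)

ACTIONS = {
    "move_guard_towards_intruder": lambda g, i: (g[0] + (i[0] > g[0]) - (i[0] < g[0]),
                                                 g[1] + (i[1] > g[1]) - (i[1] < g[1])),
    "hold_position": lambda g, i: g,
}
INTRUDER_MOVES = [(-1, 0), (1, 0), (0, -1), (0, 1), (0, 0)]

def manhattan(a, b):
    return abs(a[0] - b[0]) + abs(a[1] - b[1])

def safety(g, i):
    # Placeholder scene score: high when the intruder is far from the protected
    # target and the guard is close to the intruder.
    return manhattan(i, target) - manhattan(g, i)

def minimax_action(g, i, depth=2):
    def value(g, i, d, defender_turn):
        if d == 0:
            return safety(g, i)
        if defender_turn:   # defender maximizes the worst-case safety
            return max(value(ACTIONS[a](g, i), i, d, False) for a in ACTIONS)
        # intruder replies with the move that minimizes safety
        return min(value(g, (i[0] + dx, i[1] + dy), d - 1, True)
                   for dx, dy in INTRUDER_MOVES)
    return max(ACTIONS, key=lambda a: value(ACTIONS[a](g, i), i, depth, False))

print(minimax_action(guard, intruder))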
Abstract:
The analysis of deformation in soils is of paramount importance in geotechnical engineering. For a long time the complex behaviour of natural deposits defied the ingenuity of engineers. The time has come when, with the aid of computers, numerical methods allow the solution of every problem, provided the material law can be specified with a certain accuracy. Boundary Element techniques (B.E.) have recently exploded in a splendid flowering of methods and applications that compare advantageously with other well-established procedures such as the finite element method (F.E.). Their application to soil mechanics problems (Brebbia 1981) has started and will grow in the future. This paper presents a simple formulation for a classical problem. In fact, there is already a large number of applications of B.E. to diffusion problems (Rizzo et al., Shaw, Chang et al., Combescure et al., Wrobel et al., Roures et al., Onishi et al.), and very recently the first specific application to consolidation problems has been published by Onishi et al. Here we develop an alternative formulation to that presented in the last reference. Fundamentally, the idea is to introduce a finite difference discretization in the time domain in order to use the fundamental solution of a Helmholtz-type equation governing the neutral (pore) pressure distribution. Although this procedure seems to have gone unappreciated in the previous technical literature, it is nevertheless effective and straightforward to implement. Indeed, for the particular problem under study it is perfectly suited, because a step-by-step interaction between the elastic and flow problems is needed. It also allows the introduction of non-linear elastic properties and time-dependent conditions very easily, as will be shown, and compares well with the performance of other approaches.
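The key manipulation mentioned in the abstract can be sketched on the uncoupled consolidation (diffusion) equation for the excess pore pressure \(p\): a backward finite difference in time turns the parabolic equation into a modified Helmholtz equation at each time step, for which a fundamental solution is available, so the boundary element method can be applied step by step. The coupling with the elastic problem in the paper is more involved than this sketch.

\[
\frac{\partial p}{\partial t} = c_{v}\,\nabla^{2}p
\quad\xrightarrow{\;\partial p/\partial t \,\approx\, (p_{n+1}-p_{n})/\Delta t\;}\quad
\nabla^{2}p_{n+1} - \frac{1}{c_{v}\,\Delta t}\,p_{n+1} = -\frac{1}{c_{v}\,\Delta t}\,p_{n},
\]

which has the Helmholtz-type form \(\nabla^{2}u - \lambda^{2}u = f\) with a known fundamental solution (a modified Bessel function \(K_{0}\) in two dimensions).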
Abstract:
Leakage power consumption is a component of the total power consumption in data centers that is not traditionally considered when setting the set-point temperature of the room. However, this power component, which increases with temperature, can determine the savings associated with careful management of the cooling system, as well as the reliability of the system. The work presented in this paper demonstrates the need to address leakage power in order to achieve substantial savings in the energy consumption of servers. In particular, our work shows that, through careful detection and management of two working regions (low and high impact of thermally dependent leakage), the energy consumption of the data center can be optimized by reducing the cooling budget.
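As a rough illustration of what separating the two working regions could look like in practice, the sketch below fits two straight lines to (temperature, idle power) samples and picks the breakpoint with the lowest residual as the boundary between the low-impact and high-impact leakage regions. The synthetic data and the piecewise-linear assumption are illustrative, not the paper's methodology.

import numpy as np

def find_leakage_knee(temps, idle_power):
    """Locate the temperature where thermally dependent leakage starts to
    dominate, by fitting two linear segments and choosing the breakpoint
    with the lowest total squared error (a simplified stand-in for the
    two working regions discussed in the abstract)."""
    temps, idle_power = np.asarray(temps, float), np.asarray(idle_power, float)
    order = np.argsort(temps)
    temps, idle_power = temps[order], idle_power[order]
    best_T, best_err = None, np.inf
    for k in range(2, len(temps) - 2):          # candidate breakpoints
        err = 0.0
        for sl in (slice(None, k), slice(k, None)):
            coeff = np.polyfit(temps[sl], idle_power[sl], 1)
            err += np.sum((np.polyval(coeff, temps[sl]) - idle_power[sl]) ** 2)
        if err < best_err:
            best_T, best_err = temps[k], err
    return best_T

# Synthetic example: leakage grows slowly below ~55 °C, steeply above it.
T = np.arange(30, 80)
P = 100 + 0.2 * T + np.where(T > 55, 1.5 * (T - 55), 0.0)
print(find_leakage_knee(T, P))   # expected to land near 55 °C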
Abstract:
Reducing the energy consumption for computation and cooling in servers is a major challenge considering today's data center energy costs. To ensure energy-efficient operation of servers in data centers, the relationship among computational power, temperature, leakage, and cooling power needs to be analyzed. By means of an innovative setup that enables monitoring and controlling the computing and cooling power consumption separately on a commercial enterprise server, this paper studies temperature-leakage-energy tradeoffs, obtaining an empirical model for the leakage component. Using this model, we design a controller that continuously seeks and settles at the optimal fan speed to minimize the energy consumption for a given workload. We run a customized dynamic load-synthesis tool to stress the system. Our proposed cooling controller achieves up to 9% energy savings and a 30 W reduction in peak power in comparison to the default cooling control scheme.
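The intuition behind such a controller can be sketched as follows: temperature-dependent leakage falls as the fan spins faster (the components run cooler) while fan power rises roughly with the cube of speed, so total power has an interior minimum that a simple search can settle on. Every function and constant below is an illustrative stand-in; the paper derives an empirical leakage model from measurements on a commercial server.

def leakage_power(fan_rpm):
    # Illustrative: faster fans -> lower component temperature -> less leakage.
    temp_c = 80.0 - 25.0 * (fan_rpm / 6000.0)
    return 20.0 + 0.9 * max(temp_c - 40.0, 0.0)

def fan_power(fan_rpm):
    # Fan power grows roughly with the cube of rotational speed.
    return 15.0 * (fan_rpm / 6000.0) ** 3

def total_power(fan_rpm):
    return leakage_power(fan_rpm) + fan_power(fan_rpm)

def seek_optimal_rpm(start=3000.0, step=100.0, iters=100):
    """Greedy descent over fan speed: move in whichever direction lowers
    total power, shrinking the step when neither direction helps."""
    rpm = start
    for _ in range(iters):
        candidates = [max(rpm - step, 1000.0), rpm, min(rpm + step, 6000.0)]
        best = min(candidates, key=total_power)
        if best == rpm:
            step /= 2.0
        rpm = best
    return rpm

rpm = seek_optimal_rpm()
print(round(rpm), round(total_power(rpm), 1))

On these toy curves the search converges to an intermediate fan speed rather than the minimum or maximum, which is the qualitative behaviour the abstract attributes to the proposed controller.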
Abstract:
The computational and cooling power demands of enterprise servers are increasing at an unsustainable rate. Understanding the relationship between computational power, temperature, leakage, and cooling power is crucial to enable energy-efficient operation at the server and data center levels. This paper develops empirical models to estimate the contributions of static and dynamic power consumption in enterprise servers for a wide range of workloads, and analyzes the interactions between temperature, leakage, and cooling power for various workload allocation policies. We propose a cooling management policy that minimizes the server energy consumption by setting the optimum fan speed during runtime. Our experimental results on a presently shipping enterprise server demonstrate that including leakage awareness in workload and cooling management provides additional energy savings without any impact on performance.
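One simple way to picture such an empirical decomposition is an ordinary least-squares fit of total power against utilization and temperature, reading the utilization term as dynamic power and the remainder as static (leakage-dominated) power. The linear model form and the synthetic data below are assumptions for illustration; the paper's models cover a wider range of workloads and effects.

import numpy as np

def fit_power_model(util, temp, power):
    """Least-squares fit of the simplified decomposition
        P_total ≈ (c0 + c1 * T) + c2 * u
    where the bracketed part plays the role of static (leakage) power and the
    last term of dynamic power."""
    A = np.column_stack([np.ones_like(util), temp, util])
    coeffs, *_ = np.linalg.lstsq(A, power, rcond=None)
    return coeffs

# Synthetic measurements (utilization in [0, 1], temperature in °C, power in W).
rng = np.random.default_rng(1)
u = rng.uniform(0.0, 1.0, 200)
T = 45.0 + 20.0 * u + rng.normal(0.0, 1.0, 200)
P = 120.0 + 0.8 * T + 90.0 * u + rng.normal(0.0, 2.0, 200)

c0, c1, c2 = fit_power_model(u, T, P)
static = c0 + c1 * T.mean()      # estimated static power at the mean temperature
print(round(static, 1), round(c2, 1))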
Abstract:
The inverter in a photovoltaic system performs two essential functions. The first is to track the maximum power point of the system's I-V curve under variable environmental conditions. The second is to convert the DC power delivered by the PV panels into AC power. Nowadays, in order to qualify inverters, manufacturers and certification bodies mainly use the European and/or CEC efficiency standards. The question arises whether these are still representative of CPV system behaviour. We propose to use a set of CPV-specific weighted averages and a representative dynamic response to better determine the static and dynamic MPPT efficiencies. Four string-sized commercial inverters used in real CPV plants have been tested.
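To make the weighting idea concrete, the snippet below computes a weighted average efficiency from efficiencies measured at a few fractional load levels, using the commonly cited European-efficiency weights; a CPV-specific weight set, as proposed in the abstract, would simply replace them. The measured efficiencies shown are hypothetical.

# Weighted average inverter efficiency from efficiencies measured at fractional
# load levels. The European-efficiency weights below are the commonly cited ones;
# CPV-specific weights would be derived from CPV irradiation statistics instead.
EURO_WEIGHTS = {0.05: 0.03, 0.10: 0.06, 0.20: 0.13,
                0.30: 0.10, 0.50: 0.48, 1.00: 0.20}

def weighted_efficiency(eff_at_load, weights=EURO_WEIGHTS):
    """eff_at_load maps fractional load (e.g. 0.5) to the measured efficiency."""
    return sum(w * eff_at_load[load] for load, w in weights.items())

# Hypothetical measured efficiencies for one string-sized inverter.
measured = {0.05: 0.88, 0.10: 0.93, 0.20: 0.955,
            0.30: 0.962, 0.50: 0.965, 1.00: 0.958}
print(round(weighted_efficiency(measured), 4))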