59 results for Elasticity Virtualization Scalability Onit Azure AWS Microsoft Cloud Computing

at Indian Institute of Science - Bangalore - India


Relevance:

100.00%

Publisher:

Abstract:

Many applications, such as software for processing customer records in telecom, patient records in hospitals, or email software retrieving a single message from a mailbox, need to access a single record in a database of millions of records. A basic feature of these applications is that their data sets are very large but simple. Cloud computing meets the computing requirements of this new generation of applications, whose very large data sets cannot be handled efficiently on traditional computing infrastructure. In this paper, we describe the storage services offered by three well-known cloud service providers and compare their features in order to characterize the storage requirements of very large data sets, in the hope that this comparison will act as a catalyst for the design of future storage services for such data sets. We also give a brief overview of other kinds of storage that have emerged recently for cloud computing.
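
As a hedged illustration of the access pattern described above (retrieving one record by key from a very large but simple data set), the following sketch uses a generic key-value interface; the client class, table name, and key format are hypothetical and are not tied to any specific provider discussed in the paper.

    # Minimal sketch of single-record access over a key-value cloud store.
    # CloudTableClient, "patient_records", and the key format are hypothetical.

    class CloudTableClient:
        """Stand-in for a provider SDK exposing get/put by primary key."""
        def __init__(self):
            self._rows = {}          # in-memory stand-in for the remote table

        def put(self, table, key, value):
            self._rows[(table, key)] = value

        def get(self, table, key):
            return self._rows.get((table, key))

    client = CloudTableClient()
    client.put("patient_records", "patient:1048576", {"name": "A. Rao", "ward": "7B"})

    # The application touches exactly one of millions of records per request.
    record = client.get("patient_records", "patient:1048576")
    print(record)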

Relevance:

100.00%

Publisher:

Abstract:

Moore's Law has driven the semiconductor revolution, enabling over four decades of scaling in frequency, size, complexity, and power. However, the limits of physics now prevent further scaling of speed, forcing a paradigm shift towards multicore computing and parallelization. In effect, the system is taking over the role once played by the single CPU: high-speed signals running not only through chips but also through packages and boards connect ever more complex systems. High-speed signals making their way through the entire system create new challenges in the design of computing hardware. Inductance, phase shifts, velocity-of-light effects, material resonances, and wave behavior not only become prevalent but must be calculated accurately and rapidly to enable short design cycle times. In essence, continuing to scale with Moore's Law requires incorporating Maxwell's equations into the design process. This is only possible through the combined power of new algorithms, parallelization, and high-speed computing. At the same time, incorporating Maxwell-based models into circuit- and system-level simulation presents a massive accuracy, passivity, and scalability challenge. In this tutorial, we navigate the often confusing terminology and concepts behind field solvers, show how advances in field solvers enable integration into EDA flows, present novel methods for model generation and passivity assurance in large systems, and demonstrate the power of cloud computing in enabling the next generation of scalable Maxwell solvers and the next generation of Moore's Law scaling of systems. We intend to show the truly symbiotic and growing relationship between Maxwell and Moore!

Relevance:

100.00%

Publisher:

Abstract:

Elasticity in cloud systems provides the flexibility to acquire and relinquish computing resources on demand. However, in current virtualized systems resource allocation is mostly static: resources are allocated at VM instantiation, and any workload change requiring a significant increase or decrease in resources is handled by VM migration. Hence, cloud users tend to characterize their workloads at a coarse-grained level, which leads either to under-utilized VM resources or to under-performing applications. A more flexible and adaptive resource allocation mechanism would benefit variable workloads, such as those of web servers. In this paper, we present an elastic resource framework for the IaaS cloud layer that addresses this need. The framework provides an application workload forecasting engine that predicts the expected demand at run time; this prediction is input to the resource manager, which modulates resource allocation accordingly. Because of prediction errors, resources can be over-allocated or under-allocated relative to the actual demand of the application: over-allocation leaves resources unused, while under-allocation can cause under-performance. To strike a good trade-off between the two, we derive an excess cost model in which excess allocated resources are captured as an over-allocation cost and under-allocation is captured as a penalty cost for violating the application's service level agreement (SLA). The confidence interval of the predicted workload is used to minimize this excess cost with minimal effect on SLA violations. An example case study on an academic institute's web server workload is presented. Using the confidence interval to minimize excess cost, we achieve a significant reduction in the resource allocation requirement while restricting application SLA violations to below 2-3%.
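
A minimal sketch of the excess-cost idea described above, assuming a simple linear over-allocation cost, a per-unit SLA penalty, and allocation at the upper confidence bound of the prediction; the cost coefficients, the allocation rule, and the variable names are illustrative assumptions, not the paper's exact formulation.

    # Sketch: pick an allocation from a predicted demand plus confidence bound,
    # then score it against actual demand with an excess-cost model.
    # c_over, c_sla, and the allocation rule are assumed for illustration.

    def excess_cost(allocated, actual, c_over=1.0, c_sla=10.0):
        """Over-allocation cost for idle resources plus SLA penalty for shortfall."""
        over = max(allocated - actual, 0.0)      # unused resources
        under = max(actual - allocated, 0.0)     # shortfall -> potential SLA violation
        return c_over * over + c_sla * under

    def allocate(predicted, half_width):
        """Allocate at the upper end of the prediction confidence interval."""
        return predicted + half_width

    predicted, half_width, actual = 80.0, 5.0, 88.0
    alloc = allocate(predicted, half_width)
    print(alloc, excess_cost(alloc, actual))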

Relevance:

100.00%

Publisher:

Abstract:

In this paper we present a combination of technologies providing an Energy-on-Demand (EoD) service that enables low-cost innovation suitable for microgrid networks. The system is designed around the low-cost and simple Rural Energy Device (RED) Box, which, combined with a Short Message Service (SMS) communication methodology, serves as an elementary proxy for the smart meters typically used in urban settings. Customer behavior and familiarity with such devices, gained through mobile-phone experience, have been incorporated into the design philosophy. Customers are incentivized to interact with the system, thus providing valuable behavioral and usage data to the Utility Service Provider (USP). The data collected over time can be used by the USP for analytics, envisioned to run on remote computing services, i.e., cloud computing. Cloud computing allows computational resources to be shared at the virtual level across several networks. The customer-system interaction is facilitated by a third-party Telecom Service Provider (TSP). The approximate cost of the RED Box is envisaged to be under USD 10 at production scale.
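
To make the RED Box / SMS flow concrete, here is a hedged sketch that parses a hypothetical usage-report SMS into a record the USP could forward to cloud-side analytics; the message format, field names, and flat tariff are invented for illustration and are not specified in the paper.

    # Hypothetical SMS usage report: "EOD <customer_id> <kWh_used> <timestamp>"
    # The format and the flat tariff below are illustrative assumptions only.

    from datetime import datetime

    TARIFF_PER_KWH = 0.08  # assumed flat tariff, currency units per kWh

    def parse_usage_sms(text):
        tag, customer_id, kwh, ts = text.split()
        assert tag == "EOD", "not an Energy-on-Demand usage report"
        return {
            "customer": customer_id,
            "kwh": float(kwh),
            "time": datetime.fromisoformat(ts),
            "charge": float(kwh) * TARIFF_PER_KWH,
        }

    record = parse_usage_sms("EOD RED-0042 1.75 2024-06-01T18:30:00")
    print(record)   # record could then be uploaded for cloud-side analytics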

Relevance:

100.00%

Publisher:

Abstract:

Virtualization is one of the key enabling technologies for Cloud computing. Although it facilitates improved utilization of resources, virtualization can lead to performance degradation due to the sharing of physical resources such as CPU, memory, network interfaces, and disk controllers. Multi-tenancy can cause highly unpredictable performance for concurrent I/O applications running inside virtual machines that share local disk storage in the Cloud. Disk I/O requests in a typical Cloud setup may have varied requirements in terms of latency and throughput, as they arise from a range of heterogeneous applications with diverse performance goals. This necessitates providing differentiated performance services to different I/O applications. In this paper, we present PriDyn, a novel scheduling framework designed to consider application I/O performance metrics, such as acceptable latency, and convert them into an appropriate priority value for disk access based on the current system state. The framework aims to provide differentiated I/O service to various applications and to ensure predictable performance for critical applications in a multi-tenant Cloud environment. We demonstrate through experimental validation on real-world I/O traces that the framework achieves appreciable enhancements in I/O performance, indicating that this approach is a promising step towards enabling QoS guarantees for Cloud storage.
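
The mapping from an application's acceptable latency to a disk-access priority can be sketched as below; the inverse-latency rule and the scaling by current queue depth are assumptions chosen for illustration, not PriDyn's actual scheduling formula.

    # Sketch: convert an application's acceptable I/O latency into a priority,
    # scaled by the current contention on the shared disk.  The specific rule
    # (inverse latency times a load factor) is an illustrative assumption.

    def disk_priority(acceptable_latency_ms, pending_requests, max_priority=100):
        """Lower acceptable latency and heavier contention yield higher priority."""
        urgency = 1.0 / max(acceptable_latency_ms, 1e-3)   # tighter deadline -> more urgent
        load_factor = 1.0 + pending_requests / 10.0        # more contention -> boost
        return min(int(urgency * load_factor * 100), max_priority)

    # A latency-critical VM vs. a throughput-oriented batch VM on the same host.
    print(disk_priority(acceptable_latency_ms=5, pending_requests=20))    # high priority
    print(disk_priority(acceptable_latency_ms=200, pending_requests=20))  # low priority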

Relevance:

100.00%

Publisher:

Abstract:

The realization of cloud computing has been made possible by the availability of virtualization technologies on commodity platforms. Measuring resource usage on virtualized servers is difficult because the performance counters used for resource accounting are not virtualized. Hence, many of the prevalent virtualization technologies, such as Xen, VMware, and KVM, use host-specific CPU usage monitoring, which is coarse-grained. In this paper, we present a performance monitoring tool for KVM-based virtual machines that measures the CPU overhead incurred by the hypervisor on behalf of a virtual machine along with the CPU usage of the virtual machine itself. This fine-grained resource usage information can be used in diverse situations, such as resource provisioning to support performance-related QoS requirements, identification of bottlenecks during VM placement, and resource profiling of applications in cloud environments. We demonstrate a use case of the tool by measuring the performance of web servers hosted on a KVM-based virtualized server.
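
Since KVM guests run as ordinary QEMU processes on the host, a first approximation of per-VM CPU accounting can be read from Linux procfs, as sketched below; reading utime/stime from /proc/<pid>/stat is standard procfs behaviour, but the attribution of host-side time to hypervisor overhead versus guest work, as done by the paper's tool, is not reproduced here.

    # Sketch: read user and system CPU time of a QEMU/KVM process from /proc.
    # Fields 14 and 15 (1-indexed) of /proc/<pid>/stat are utime and stime in
    # clock ticks; splitting on the closing ')' avoids problems with spaces
    # in the process name.  Splitting host ticks into hypervisor overhead,
    # as the paper's tool does, is beyond this sketch.

    import os

    def cpu_seconds(pid):
        with open(f"/proc/{pid}/stat") as f:
            stat = f.read()
        fields = stat.rsplit(")", 1)[1].split()
        utime_ticks = int(fields[11])   # field 14 overall, index 11 after the name
        stime_ticks = int(fields[12])   # field 15 overall
        hz = os.sysconf("SC_CLK_TCK")
        return utime_ticks / hz, stime_ticks / hz

    # Example (assumes a qemu-kvm process with this PID exists on the host):
    # user_s, sys_s = cpu_seconds(4321)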

Relevance:

20.00%

Publisher:

Abstract:

We report here on a series of laboratory experiments on plumes, undertaken with the object of simulating the effect of the heat release that occurs in clouds on condensation of water vapor. The experimental technique used for this purpose relies on ohmic heating generated in an electrically conducting plume fluid subjected to a suitable alternating voltage across specified axial stations in the plume flow [Bhat et al., 1989]. The present series of experiments achieves a value of the Richardson number that is toward the lower end of the range that characterizes cumulus clouds. It is found that the buoyancy enhancement due to heating disrupts the eddy structures in the flow and reduces the dilution owing to entrainment of ambient fluid that would otherwise have occurred in the central region of the plume. Heating also reduces the spread rate of the plume, but as it accelerates the flow as well, the overall specific mass flux in the plume does not show a very significant change at the heat input employed in the experiment. However, there is some indication that the entrainment rate (proportional to the streamwise derivative of the mass flux) is slightly higher immediately after heat injection and slightly lower farther downstream. The measurements support a previous proposal for a cloud scenario [Bhat and Narasimha, 1996] and demonstrate how fresh insights into certain aspects of the fluid dynamics of clouds may be derived from the experimental techniques employed here.
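
For reference, the entrainment rate mentioned in parentheses above can be written out explicitly; the symbols below (mass flux, streamwise coordinate, vertical velocity, density, cross-sectional area) are our own choice of notation rather than the paper's:

    E(z) \;\propto\; \frac{d\dot{m}}{dz},
    \qquad
    \dot{m}(z) = \int_{A(z)} \rho\, w \, dA

where \dot{m}(z) is the specific mass flux through the plume cross-section A(z) at streamwise position z, w the streamwise velocity, and \rho the fluid density.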

Relevance:

20.00%

Publisher:

Abstract:

Aerosols from biomass burning can alter the radiative balance of the Earth by reflecting and absorbing solar radiation [1]. Whether aerosols exert a net cooling or a net warming effect will depend on the aerosol type and the albedo of the underlying surface [2]. Here, we use a satellite-based approach to quantify the direct, top-of-atmosphere radiative effect of aerosol layers advected over the partly cloudy boundary layer of the southeastern Atlantic Ocean during July–October of 2006 and 2007. We show that the warming effect of aerosols increases with underlying cloud coverage. This relationship is nearly linear, making it possible to define a critical cloud fraction at which the aerosols switch from exerting a net cooling to a net warming effect. For this region and time period, the critical cloud fraction is about 0.4, and is strongly sensitive to the amount of solar radiation the aerosols absorb and the albedo of the underlying clouds. We estimate that the regional-mean warming effect of aerosols is three times higher when large-scale spatial covariation between cloud cover and aerosols is taken into account. These results demonstrate the importance of cloud prediction for the accurate quantification of aerosol direct effects.
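
The near-linear dependence of the aerosol direct effect on cloud fraction implies a simple zero-crossing form. As a hedged sketch, not the paper's regression, the top-of-atmosphere effect F can be written as a weighted average of clear-sky and overcast values, giving a critical cloud fraction f_c where the sign changes:

    F(f) \;\approx\; (1 - f)\, F_{\mathrm{clear}} + f\, F_{\mathrm{cloudy}},
    \qquad
    F(f_c) = 0 \;\Rightarrow\; f_c = \frac{F_{\mathrm{clear}}}{F_{\mathrm{clear}} - F_{\mathrm{cloudy}}}

With F_clear < 0 (cooling over the dark ocean surface) and F_cloudy > 0 (warming over bright clouds), f_c lies between 0 and 1; the abstract reports a value of about 0.4 for this region and period.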

Relevance:

20.00%

Publisher:

Abstract:

Synchrotron-based high-pressure x-ray diffraction measurements indicate that compressibility, a fundamental materials property, can have a size-specific minimum value. The bulk modulus of nanocrystalline titania has a maximum at particle size of 15 nm. This can be explained by dislocation behavior because very high dislocation contents can be achieved when shear stress induced within nanoparticles counters the repulsion between dislocations. As particle size decreases, compression increasingly generates dislocation networks hardened by overlap of strain fields that shield intervening regions from external pressure. However, when particles become too small to sustain high dislocation concentrations, elastic stiffening declines. The compressibility has a minimum at intermediate sizes.

Relevance:

20.00%

Publisher:

Abstract:

The Fraunhofer diffraction analysis of cloud-covered satellite imagery has shown that the diffraction pattern approximately follows a cosine-squared distribution. The overshooting tops of clouds and the shadows they cast contribute much to the diffraction of light, particularly in the high-frequency range. Indeed, cloud-covered imagery can be distinguished from cloud-free imagery on the basis of the rate of decay of the diffracted light power in the high-frequency band.

Relevance:

20.00%

Publisher:

Abstract:

The problem of an infinite circular sandwich shell subjected to an axisymmetric radial line load is investigated using three-dimensional elasticity theory, the shell core method, and the sandwich shell theory due to Fulton and Schmidt. A comparison of the stresses and displacements with an exact elasticity solution is carried out for various shell parameters in order to clearly bring out the limitations of the sandwich shell theories of Fulton and Schmidt as well as of the shell core solution.

Relevance:

20.00%

Publisher:

Abstract:

An elasticity solution has been obtained for a long circular sandwich cylindrical shell subjected to axisymmetric radial ring load using Love's stress function approach. Numerical results are presented for different ratios of modulus of elasticity of the layers. The results obtained from this analysis have been compared with those obtained from sandwich shell theory due to Fulton.

Relevance:

20.00%

Publisher:

Abstract:

An investigation is presented of the daily variation of the maximum cloud zone (MCZ) and the 700 mb trough in the Northern Hemisphere over the Indian longitudes 70–90°E during April–October for 1973–77. It is found that during June–September there are two favorable locations for an MCZ over these longitudes: on a majority of days the MCZ is present in the monsoon zone north of 15°N, and often a secondary MCZ occurs in the equatorial region (0–10°N). The monsoon MCZ becomes established through the northward movement of the MCZ that occurs over the equatorial Indian Ocean in April and May. The secondary MCZ appears intermittently and is characterized by long spells of persistence only when the monsoon MCZ is absent. In each of the seasons studied, the MCZ temporarily disappeared from the mean summer monsoon location (15–28°N) about four weeks after it was established near the beginning of July. It is re-established by the northward movement of the secondary MCZ, which becomes active during the absence of the monsoon MCZ, in a manner strikingly similar to that observed in the spring-to-summer transition. A break in the monsoon prevails just prior to the temporary disappearance of the monsoon MCZ. Thus we conclude that the monsoon MCZ cannot survive for longer than about a month without re-establishment by the secondary MCZ. Possible underlying mechanisms are also discussed.