72 results for Cloud Computing
at Indian Institute of Science - Bangalore - India
Abstract:
Many applications, such as software for processing customer records in telecom, patient records in hospitals, or email processing software accessing a single email in a mailbox, need to access a single record in a database consisting of millions of records. A basic feature of these applications is that they need to access data sets that are very large but simple. Cloud computing provides the computing requirements for this new generation of applications involving very large data sets, which cannot be handled efficiently using traditional computing infrastructure. In this paper, we describe the storage services provided by three well-known cloud service providers and compare their features, with a view to characterizing the storage requirements of very large data sets; we hope that this comparison will act as a catalyst for the design of storage services for very large data set requirements in the future. We also give a brief overview of other kinds of storage that have emerged in the recent past for cloud computing.
Abstract:
In this paper we present a combination of technologies that provides an Energy-on-Demand (EoD) service to enable low-cost innovation suitable for microgrid networks. The system is designed around the low-cost and simple Rural Energy Device (RED) Box, which, in combination with Short Message Service (SMS) communication, serves as an elementary proxy for the smart meters typically used in urban settings. Further, customer behavior and familiarity in using such devices, based on mobile experience, have been incorporated into the design philosophy. Customers are incentivized to interact with the system, thus providing valuable behavioral and usage data to the Utility Service Provider (USP). Data collected over time can be used by the USP for analytics, envisioned to be performed using remote computing services, i.e., cloud computing. Cloud computing allows computational resources to be shared at the virtual level across several networks. The customer-system interaction is facilitated by a third-party Telecom Service Provider (TSP). The approximate cost of the RED Box is envisaged to be under USD 10 at production scale.
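A minimal sketch (not from the paper) of how a utility back end might parse an SMS reading sent by a device like the RED Box; the message format, field names, and device ID are hypothetical assumptions.

```python
import re
from dataclasses import dataclass

# Hypothetical SMS format: "RED <device_id> KWH <reading> TS <unix_time>"
SMS_PATTERN = re.compile(r"RED (?P<dev>\w+) KWH (?P<kwh>\d+(\.\d+)?) TS (?P<ts>\d+)")

@dataclass
class MeterReading:
    device_id: str
    kwh: float
    timestamp: int

def parse_sms(text: str) -> MeterReading:
    """Parse one hypothetical RED Box SMS into a structured reading."""
    m = SMS_PATTERN.fullmatch(text.strip())
    if m is None:
        raise ValueError(f"unrecognized message: {text!r}")
    return MeterReading(m.group("dev"), float(m.group("kwh")), int(m.group("ts")))

if __name__ == "__main__":
    reading = parse_sms("RED D042 KWH 3.75 TS 1700000000")
    print(reading)   # the USP could push such records to cloud storage for later analytics
```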
Abstract:
Virtualization is one of the key enabling technologies for Cloud computing. Although it facilitates improved utilization of resources, virtualization can lead to performance degradation due to the sharing of physical resources like CPU, memory, network interfaces, and disk controllers. Multi-tenancy can cause highly unpredictable performance for concurrent I/O applications running inside virtual machines that share local disk storage in the Cloud. Disk I/O requests in a typical Cloud setup may have varied requirements in terms of latency and throughput, as they arise from a range of heterogeneous applications with diverse performance goals. This necessitates providing differential performance services to different I/O applications. In this paper, we present PriDyn, a novel scheduling framework designed to consider I/O performance metrics of applications, such as acceptable latency, and convert them into an appropriate priority value for disk access based on the current system state. The framework aims to provide differentiated I/O service to various applications and ensures predictable performance for critical applications in multi-tenant Cloud environments. We demonstrate through experimental validation on real-world I/O traces that this framework achieves appreciable enhancements in I/O performance, indicating that this approach is a promising step towards enabling QoS guarantees on Cloud storage.
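The abstract does not give PriDyn's actual priority-assignment rule; the sketch below merely illustrates the general idea of mapping an application's acceptable latency and the current system state to a disk-access priority. The function name, the scaling rule, and the priority range are assumptions.

```python
def disk_priority(acceptable_latency_ms: float,
                  observed_latency_ms: float,
                  pending_requests: int,
                  max_priority: int = 7) -> int:
    """Map an application's latency requirement to an integer disk priority.

    Tighter deadlines and heavier current load yield higher priority.
    The formula is illustrative, not the one used by PriDyn.
    """
    # Urgency grows as observed latency approaches (or exceeds) the acceptable bound.
    urgency = observed_latency_ms / max(acceptable_latency_ms, 1e-3)
    # Scale urgency by a simple congestion factor derived from the current system state.
    congestion = 1.0 + pending_requests / 100.0
    score = urgency * congestion
    # Clamp to the scheduler's discrete priority levels (0 = lowest).
    return min(max_priority, int(score * max_priority / 2))

# Example: a latency-critical VM nearing its bound under moderate disk load.
print(disk_priority(acceptable_latency_ms=10, observed_latency_ms=9, pending_requests=50))
```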
Abstract:
The realization of cloud computing has been possible due to the availability of virtualization technologies on commodity platforms. Measuring resource usage on virtualized servers is difficult because the performance counters used for resource accounting are not virtualized. Hence, many prevalent virtualization technologies, such as Xen, VMware, and KVM, use host-specific CPU usage monitoring, which is coarse grained. In this paper, we present a performance monitoring tool for KVM-based virtualized machines, which measures the CPU overhead incurred by the hypervisor on behalf of a virtual machine along with the CPU usage of the virtual machine itself. This fine-grained resource usage information, provided by the above tool, can be used in diverse situations such as resource provisioning to support performance-associated QoS requirements, identification of bottlenecks during VM placement, and resource profiling of applications in cloud environments. We demonstrate a use case of this tool by measuring the performance of web servers hosted on a KVM-based virtualized server.
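As a rough illustration of the kind of accounting the paper's tool performs, the sketch below splits the CPU time of a qemu/KVM process into guest time and hypervisor-side overhead using standard /proc counters; it is a coarse approximation and not the paper's fine-grained tool.

```python
import os

CLK_TCK = os.sysconf("SC_CLK_TCK")  # clock ticks per second

def cpu_split(pid: int):
    """Return (guest_seconds, hypervisor_overhead_seconds) for a qemu/KVM process.

    Reads /proc/<pid>/stat: utime (field 14), stime (field 15), guest_time (field 43).
    With typical Linux accounting, guest time is also included in utime, so the time
    the host spends working on behalf of the VM is approximated here as
    (utime - guest_time) + stime. A coarse approximation only.
    """
    with open(f"/proc/{pid}/stat") as f:
        stat = f.read()
    # The process name (field 2) may contain spaces; split after the closing ')'.
    fields = stat[stat.rindex(")") + 2:].split()
    utime, stime = int(fields[11]), int(fields[12])   # overall fields 14 and 15
    guest = int(fields[40])                            # overall field 43
    overhead = (utime - guest) + stime
    return guest / CLK_TCK, overhead / CLK_TCK

# Usage (hypothetical PID of a qemu-kvm process):
# guest_s, overhead_s = cpu_split(12345)
```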
Abstract:
Moore's Law has driven the semiconductor revolution, enabling over four decades of scaling in frequency, size, complexity, and power. However, the limits of physics are preventing further scaling of speed, forcing a paradigm shift towards multicore computing and parallelization. In effect, the system is taking over the role that the single CPU was playing: high-speed signals run not only through chips but also through packages and boards, connecting ever more complex systems. High-speed signals making their way through the entire system cause new challenges in the design of computing hardware. Inductance, phase shifts and velocity-of-light effects, material resonances, and wave behavior not only become prevalent but also need to be calculated accurately and rapidly to enable short design cycle times. In essence, continuing to scale with Moore's Law requires the incorporation of Maxwell's equations in the design process. Incorporating Maxwell's equations into the design flow is only possible through the combined power that new algorithms, parallelization, and high-speed computing provide. At the same time, incorporation of Maxwell-based models into circuit- and system-level simulation presents a massive accuracy, passivity, and scalability challenge. In this tutorial, we navigate through the often confusing terminology and concepts behind field solvers, show how advances in field solvers enable integration into EDA flows, present novel methods for model generation and passivity assurance in large systems, and demonstrate the power of cloud computing in enabling the next generation of scalable Maxwell solvers and the next generation of Moore's Law scaling of systems. We intend to show the truly symbiotic, growing relationship between Maxwell and Moore!
Abstract:
Elasticity in cloud systems provides the flexibility to acquire and relinquish computing resources on demand. However, in current virtualized systems resource allocation is mostly static: resources are allocated during VM instantiation, and any change in workload leading to a significant increase or decrease in resources is handled by VM migration. Hence, cloud users tend to characterize their workloads at a coarse-grained level, which potentially leads to under-utilized VM resources or under-performing applications. A more flexible and adaptive resource allocation mechanism would benefit variable workloads, such as those characterized by web servers. In this paper, we present an elastic resource framework for the IaaS cloud layer that addresses this need. The framework provides an application workload forecasting engine that predicts the expected demand at run time; the prediction is input to the resource manager, which modulates resource allocation based on the predicted demand. Depending on the prediction errors, resources can be over-allocated or under-allocated compared to the actual demand made by the application. Over-allocation leads to unused resources, while under-allocation could cause under-performance. To strike a good trade-off between over-allocation and under-performance, we derive an excess cost model in which excess allocated resources are captured as an over-allocation cost and under-allocation is captured as a penalty cost for violating the application's service level agreement (SLA). The confidence interval of the predicted workload is used to minimize this excess cost with minimal effect on SLA violations. An example case study for an academic institute's web server workload is presented. Using the confidence interval to minimize excess cost, we achieve a significant reduction in the resource allocation requirement while restricting application SLA violations to below 2-3%.
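A minimal sketch of the kind of trade-off an excess cost model captures: over-allocation is charged at one rate, SLA-violating under-allocation at a penalty rate, and the allocation is placed at the upper end of the forecast confidence interval. The coefficients and the z-value are illustrative assumptions, not the paper's.

```python
def excess_cost(allocated: float, demand: float,
                over_cost: float = 1.0, sla_penalty: float = 10.0) -> float:
    """Excess cost = cost of unused resources + penalty for SLA-violating shortfall.

    The two coefficients are illustrative; the paper derives its own cost model.
    """
    over = max(0.0, allocated - demand)
    under = max(0.0, demand - allocated)
    return over_cost * over + sla_penalty * under

def allocation_from_forecast(predicted: float, std_err: float, z: float = 1.64) -> float:
    """Allocate at the upper end of the forecast confidence interval.

    z = 1.64 roughly corresponds to a one-sided 95% confidence bound, biasing the
    allocation against SLA violations at the price of some over-allocation.
    """
    return predicted + z * std_err

# Example: forecast of 100 req/s with standard error 10; actual demand turns out to be 112.
alloc = allocation_from_forecast(100.0, 10.0)
print(alloc, excess_cost(alloc, demand=112.0))
```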
Abstract:
In this paper, we present a decentralized dynamic load scheduling/balancing algorithm called ELISA (Estimated Load Information Scheduling Algorithm) for general-purpose distributed computing systems. ELISA uses estimated state information, based upon periodic exchange of exact state information between neighbouring nodes, to perform load scheduling. The primary objective of the algorithm is to cut down on the communication and load transfer overheads by minimizing the frequency of status exchange and by restricting the load transfer and status exchange to within the buddy set of a processor. It is shown that the resulting algorithm performs almost as well as a perfect-information algorithm and is superior to other load balancing schemes based on the random sharing and Ni-Hwang algorithms. A sensitivity analysis to study the effect of various design parameters on the effectiveness of load balancing is also carried out. Finally, the algorithm's performance is tested on large-dimensional hypercubes in the presence of a time-varying load arrival process and is shown to perform well in comparison to other algorithms. This makes ELISA a viable and implementable load balancing algorithm for use in general-purpose distributed computing systems.
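The abstract does not spell out ELISA's estimation formulae; the sketch below only illustrates the general pattern of extrapolating buddies' load from periodically exchanged exact state and restricting transfers to the buddy set. All class names, parameters, and the homogeneous-service-rate assumption are illustrative, not ELISA itself.

```python
import time

class Node:
    """A node that balances load within its buddy set using estimated state.

    Between periodic exact-state exchanges, each node extrapolates its buddies'
    queue lengths from their last reported load and arrival rate, assuming
    homogeneous service rates. Illustrative pattern only, not the ELISA algorithm.
    """

    def __init__(self, node_id, buddies, service_rate=10.0):
        self.node_id = node_id
        self.queue_len = 0
        self.service_rate = service_rate        # jobs served per second (assumed equal on all nodes)
        # Last exact report from each buddy: (queue_len, arrival_rate, timestamp).
        self.buddy_state = {b: (0, 0.0, time.time()) for b in buddies}

    def receive_exact_state(self, buddy, queue_len, arrival_rate):
        """Called on each periodic status-exchange message from a buddy."""
        self.buddy_state[buddy] = (queue_len, arrival_rate, time.time())

    def estimated_load(self, buddy):
        """Extrapolate a buddy's queue length from its last exact report."""
        q, lam, t0 = self.buddy_state[buddy]
        elapsed = time.time() - t0
        return max(0.0, q + (lam - self.service_rate) * elapsed)

    def pick_transfer_target(self, threshold=5):
        """Transfer a job only if we are overloaded and a buddy looks clearly lighter."""
        if self.queue_len <= threshold:
            return None
        target = min(self.buddy_state, key=self.estimated_load)
        return target if self.estimated_load(target) < self.queue_len - 1 else None
```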
Abstract:
We report here on a series of laboratory experiments on plumes, undertaken with the object of simulating the effect of the heat release that occurs in clouds upon condensation of water vapor. The experimental technique used for this purpose relies on ohmic heating generated in an electrically conducting plume fluid subjected to a suitable alternating voltage across specified axial stations in the plume flow [Bhat et al., 1989]. The present series of experiments achieves a value of the Richardson number that is toward the lower end of the range that characterizes cumulus clouds. It is found that the buoyancy enhancement due to heating disrupts the eddy structures in the flow and reduces the dilution, owing to entrainment of ambient fluid, that would otherwise have occurred in the central region of the plume. Heating also reduces the spread rate of the plume, but as it accelerates the flow as well, the overall specific mass flux in the plume does not show a very significant change at the heat input employed in the experiment. However, there is some indication that the entrainment rate (proportional to the streamwise derivative of the mass flux) is slightly higher immediately after heat injection and slightly lower farther downstream. The measurements support a previous proposal for a cloud scenario [Bhat and Narasimha, 1996] and demonstrate how fresh insights into certain aspects of the fluid dynamics of clouds may be derived from the experimental techniques employed here.
Abstract:
Aerosols from biomass burning can alter the radiative balance of the Earth by reflecting and absorbing solar radiation [1]. Whether aerosols exert a net cooling or a net warming effect will depend on the aerosol type and the albedo of the underlying surface [2]. Here, we use a satellite-based approach to quantify the direct, top-of-atmosphere radiative effect of aerosol layers advected over the partly cloudy boundary layer of the southeastern Atlantic Ocean during July-October of 2006 and 2007. We show that the warming effect of aerosols increases with underlying cloud coverage. This relationship is nearly linear, making it possible to define a critical cloud fraction at which the aerosols switch from exerting a net cooling to a net warming effect. For this region and time period, the critical cloud fraction is about 0.4, and is strongly sensitive to the amount of solar radiation the aerosols absorb and the albedo of the underlying clouds. We estimate that the regional-mean warming effect of aerosols is three times higher when large-scale spatial covariation between cloud cover and aerosols is taken into account. These results demonstrate the importance of cloud prediction for the accurate quantification of aerosol direct effects.
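A small worked sketch of how a critical cloud fraction can be read off a nearly linear relationship between the aerosol direct effect and cloud fraction; the sample numbers are illustrative, not the paper's measurements.

```python
import numpy as np

# Illustrative (not the paper's) samples of top-of-atmosphere aerosol direct
# radiative effect (W/m^2, negative = cooling) versus underlying cloud fraction.
cloud_fraction = np.array([0.0, 0.2, 0.4, 0.6, 0.8, 1.0])
direct_effect  = np.array([-8.0, -4.1, 0.2, 3.9, 8.1, 11.8])

# The abstract reports a nearly linear relationship, so fit effect = a * f + b.
a, b = np.polyfit(cloud_fraction, direct_effect, 1)

# Critical cloud fraction: where the fitted effect changes sign (cooling -> warming).
critical_fraction = -b / a
print(f"critical cloud fraction ~ {critical_fraction:.2f}")   # ~0.4 for these sample values
```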
Abstract:
A technique for computing the spectral and angular (both zenith and azimuthal) distribution of the solar energy reaching the surface of the earth, or any other plane in the atmosphere, has been developed. Here the computer code LOWTRAN is used to obtain the atmospheric transmittances, in conjunction with two approximate procedures for solving the equation of radiative transfer to obtain the diffuse radiation in the cloud-free situation: one based on the Eddington method and the other on van de Hulst's adding method. The aerosol scattering phase functions are approximated by Henyey-Greenstein functions. When the equation of radiative transfer is solved using the adding method, the azimuthal and zenith angle dependence of the scattered radiation is evaluated, whereas when the Eddington technique is utilized only the total downward flux of scattered solar radiation is obtained. Results for the diffuse and beam components of solar radiation received at the surface of the earth compare very well with those computed by other methods, such as more exact calculations using spherical harmonics. When atmospheric conditions corresponding to those prevailing locally at a tropical location (as in India) are used as inputs, the computed values agree closely with the measured values.
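For reference, the Henyey-Greenstein phase function mentioned above has a simple closed form; the sketch below evaluates it and checks its normalization over solid angle numerically. The asymmetry parameter value is an arbitrary illustrative choice.

```python
import numpy as np

def henyey_greenstein(cos_theta, g):
    """Henyey-Greenstein scattering phase function p(cos theta) for asymmetry parameter g,
    normalized so that its integral over all solid angles equals 1."""
    return (1.0 - g**2) / (4.0 * np.pi * (1.0 + g**2 - 2.0 * g * cos_theta) ** 1.5)

# Numerical check of the normalization for an arbitrary illustrative asymmetry parameter.
g = 0.7
mu = np.linspace(-1.0, 1.0, 200001)                                      # mu = cos(theta)
p = henyey_greenstein(mu, g)
integral = 2.0 * np.pi * np.sum(0.5 * (p[1:] + p[:-1]) * np.diff(mu))    # trapezoidal rule
print(f"solid-angle integral = {integral:.4f}")                          # should be close to 1
```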
Abstract:
Simultaneous consideration of both performance and reliability issues is important in the choice of computer architectures for real-time aerospace applications. One of the requirements for such a fault-tolerant computer system is the characteristic of graceful degradation. A shared and replicated resources computing system represents such an architecture. In this paper, a combinatorial model is used for the evaluation of the instruction execution rate of a degradable, replicated resources computing system such as a modular multiprocessor system. Next, a method is presented to evaluate the computation reliability of such a system utilizing a reliability graph model and the instruction execution rate. Finally, this computation reliability measure, which simultaneously describes both performance and reliability, is applied as a constraint in an architecture optimization model for such computing systems.
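An illustrative (not the paper's) combinatorial calculation of the expected instruction execution rate of a gracefully degrading system with n replicated modules, assuming independent module failures and a rate proportional to the number of surviving modules.

```python
from math import comb

def expected_execution_rate(n: int, p: float, rate_per_module: float,
                            min_modules: int = 1) -> float:
    """Expected instruction execution rate of an n-module degradable system.

    Assumes independent modules, each surviving with probability p, and a system
    that keeps operating (gracefully degraded) with at least `min_modules` modules,
    delivering k * rate_per_module when k modules survive. An illustrative
    combinatorial model, not the one derived in the paper.
    """
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) * k * rate_per_module
               for k in range(min_modules, n + 1))

# Example: 4 processor modules, each with mission reliability 0.95, 2 MIPS each,
# and at least 1 module needed for continued (degraded) operation.
print(expected_execution_rate(n=4, p=0.95, rate_per_module=2.0, min_modules=1))
```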
Abstract:
This paper is aimed at reviewing the notion of Byzantine-resilient distributed computing systems, the relevant protocols, and their possible applications as reported in the literature. The three agreement problems, namely the consensus problem, the interactive consistency problem, and the generals problem, are discussed. Various agreement protocols for the Byzantine generals problem are summarized in terms of their performance and level of fault tolerance. The three classes of Byzantine agreement protocols discussed are the deterministic, randomized, and approximate agreement protocols. Finally, the application of Byzantine agreement protocols to clock synchronization is highlighted.
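A small reference sketch encoding the classic oral-messages bounds for the Byzantine generals problem (tolerating m traitors needs at least 3m + 1 processors and m + 1 rounds, per Lamport, Shostak, and Pease); it is a feasibility check only, not an implementation of any of the surveyed protocols.

```python
def oral_messages_requirements(m: int) -> dict:
    """Classic requirements for Byzantine agreement with oral messages (OM(m)):
    tolerating m traitors needs at least 3m + 1 processors and m + 1 rounds
    of message exchange."""
    return {"faults_tolerated": m, "min_processors": 3 * m + 1, "rounds": m + 1}

def agreement_possible(n: int, m: int) -> bool:
    """Deterministic oral-message agreement among n processors is achievable only if n > 3m."""
    return n > 3 * m

print(oral_messages_requirements(1))   # {'faults_tolerated': 1, 'min_processors': 4, 'rounds': 2}
print(agreement_possible(n=3, m=1))    # False: three generals cannot handle one traitor
```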
Abstract:
The Reeb graph tracks topology changes in the level sets of a scalar function and finds applications in scientific visualization and geometric modeling. We describe an algorithm that constructs the Reeb graph of a Morse function defined on a 3-manifold. Our algorithm maintains connected components of the two-dimensional level sets as a dynamic graph and constructs the Reeb graph in O(n log n + n log g (log log g)^3) time, where n is the number of triangles in the tetrahedral mesh representing the 3-manifold and g is the maximum genus over all level sets of the function. We extend this algorithm to construct Reeb graphs of d-manifolds in O(n log n (log log n)^3) time, where n is the number of triangles in the simplicial complex that represents the d-manifold. Our result is a significant improvement over the previously known O(n^2) algorithm. Finally, we present experimental results of our implementation and demonstrate that our algorithm for 3-manifolds performs efficiently in practice.
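The paper's algorithm itself is not reproduced here; as a much simpler, related illustration, the sketch below sweeps a scalar function over a graph and uses union-find to track sub-level-set components, producing a join (merge) tree, one ingredient of level-set topology tracking. The function names and the tiny example are assumptions.

```python
from collections import defaultdict

class UnionFind:
    """Disjoint sets with path halving; enough for a small illustrative sweep."""
    def __init__(self):
        self.parent = {}

    def find(self, x):
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]
            x = self.parent[x]
        return x

    def union(self, a, b):
        ra, rb = self.find(a), self.find(b)
        if ra != rb:
            self.parent[rb] = ra
        return self.find(a)

def join_tree(values, edges):
    """Sweep vertices in increasing scalar value and track connected components of
    the sub-level sets with union-find. Nodes appear at minima and wherever two or
    more components merge; the returned edges connect each merged component's
    current node to the merging vertex."""
    adj = defaultdict(list)
    for u, w in edges:
        adj[u].append(w)
        adj[w].append(u)

    uf, head, tree_edges = UnionFind(), {}, []
    for v in sorted(values, key=values.get):
        uf.parent[v] = v
        lower_roots = {uf.find(u) for u in adj[v] if u in uf.parent and u != v}
        if not lower_roots:
            head[uf.find(v)] = v            # local minimum: new component, new leaf node
            continue
        if len(lower_roots) > 1:            # sub-level-set components merge at v
            tree_edges.extend((head[r], v) for r in lower_roots)
        root = uf.find(v)
        for r in lower_roots:
            root = uf.union(root, r)
        head[root] = v if len(lower_roots) > 1 else head[lower_roots.pop()]
    return tree_edges

# Tiny example: a "W"-shaped function on a path graph has two minima ('a' and 'c')
# whose components merge at 'b'.
vals = {"a": 0.0, "b": 2.0, "c": 1.0, "d": 3.0}
print(join_tree(vals, [("a", "b"), ("b", "c"), ("c", "d")]))
```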
Abstract:
The Fraunhofer diffraction analysis of cloud-covered satellite imagery has shown that the diffraction pattern follows approximately a cosine-squared distribution. The overshooting tops of clouds and the shadows cast by them contribute substantially to the diffraction of light, particularly in the high-frequency range. Indeed, cloud-covered imagery can be distinguished from cloud-free imagery on the basis of the rate of decay of the diffracted light power in the high-frequency band.
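A hedged sketch of the kind of spectral-decay measure the abstract describes: it uses a 2D FFT power spectrum as a stand-in for the optical Fraunhofer diffraction pattern and fits the log-log decay of the power in a high-frequency band. The band limits and the synthetic test images are assumptions, not the paper's data or thresholds.

```python
import numpy as np

def high_frequency_decay(image: np.ndarray, low: float = 0.25, high: float = 0.5) -> float:
    """Estimate the decay rate (log-log slope) of the spectral power of an image in a
    high-frequency band, as a proxy for the decay of diffracted light power.
    A more negative slope means faster decay."""
    power = np.abs(np.fft.fftshift(np.fft.fft2(image))) ** 2
    ny, nx = image.shape
    y, x = np.indices(power.shape)
    # Radial spatial frequency, normalized so the Nyquist frequency is 0.5.
    r = np.hypot((x - nx / 2) / nx, (y - ny / 2) / ny)
    band = (r > low) & (r < high) & (power > 0)
    slope, _ = np.polyfit(np.log(r[band]), np.log(power[band]), 1)
    return slope

# Synthetic usage: a smooth scene decays faster at high frequencies than a noisy one.
rng = np.random.default_rng(0)
smooth = np.outer(np.hanning(128), np.hanning(128))
noisy = smooth + 0.1 * rng.standard_normal((128, 128))
print(high_frequency_decay(smooth), high_frequency_decay(noisy))
```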