5 results for cloud service pricing
in Digital Commons at Florida International University
Abstract:
Cloud computing realizes the long-held dream of converting computing capability into a type of utility. It has the potential to fundamentally change the landscape of the IT industry and our way of life. However, as cloud computing expands substantially in both scale and scope, ensuring its sustainable growth is a critical problem. Service providers have long suffered from high operational costs, especially the costs associated with the skyrocketing power consumption of large data centers. At the same time, while efficient power/energy utilization is indispensable for the sustainable growth of cloud computing, service providers must also satisfy a user's quality of service (QoS) requirements. This problem becomes even more challenging considering the increasingly stringent power/energy and QoS constraints, as well as other factors such as the highly dynamic, heterogeneous, and distributed nature of the computing infrastructures. In this dissertation, we study the problem of delay-sensitive cloud service scheduling for the sustainable development of cloud computing. We first focus our research on the development of scheduling methods for delay-sensitive cloud services on a single server with the goal of maximizing a service provider's profit. We then extend our study to scheduling cloud services in distributed environments. In particular, we develop a queue-based model and derive efficient request dispatching and processing decisions in a multi-electricity-market environment to improve profits for service providers. We next study a problem of multi-tier service scheduling. By carefully assigning sub-deadlines to the service tiers, our approach can significantly improve resource usage efficiency with statistically guaranteed QoS. Finally, we study the power-conscious resource provisioning problem for service requests with different QoS requirements. By properly sharing computing resources among different requests, our method statistically guarantees all QoS requirements with a minimized number of powered-on servers and thus minimized power consumption. The significance of our research is that it is one part of the integrated effort from both industry and academia to ensure the sustainable growth of cloud computing as it continues to evolve and change our society profoundly.
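The power-conscious provisioning idea in this abstract can be illustrated with a standard M/M/c queueing calculation: find the smallest number of powered-on servers for which the probability that a request waits past its deadline stays below a target. This is only a minimal sketch of that general technique, not the dissertation's actual method; the arrival rate, service rate, deadline, and violation bound below are hypothetical values chosen for illustration.

```python
import math

def erlang_c(c: int, a: float) -> float:
    """Probability that an arriving request must wait in an M/M/c queue.
    c: number of servers, a: offered load (lambda / mu); requires a < c."""
    rho = a / c
    summation = sum(a**k / math.factorial(k) for k in range(c))
    top = a**c / (math.factorial(c) * (1.0 - rho))
    return top / (summation + top)

def deadline_violation_prob(c: int, lam: float, mu: float, d: float) -> float:
    """P(waiting time > d) for an M/M/c queue with FCFS service."""
    a = lam / mu
    if a >= c:  # unstable queue: the deadline is eventually always violated
        return 1.0
    return erlang_c(c, a) * math.exp(-(c * mu - lam) * d)

def min_powered_on_servers(lam: float, mu: float, d: float, eps: float) -> int:
    """Smallest server count whose waiting-time tail stays below eps."""
    c = max(1, math.ceil(lam / mu))  # at least enough capacity for stability
    while deadline_violation_prob(c, lam, mu, d) > eps:
        c += 1
    return c

# Hypothetical workload: 120 requests/s, 10 requests/s per server, and at most
# 5% of requests allowed to wait longer than 0.2 s.
print(min_powered_on_servers(lam=120.0, mu=10.0, d=0.2, eps=0.05))
```

Loosening the deadline or the allowed violation probability lets the provider power on fewer servers, which is the power/QoS trade-off the abstract describes.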
Abstract:
The purpose of this paper is to compare prices for a popular quick-service restaurant chain (i.e., McDonald's) across countries throughout the world using the “Big Mac Index” published by “The Economist.” The index was originally developed to measure the valuation of international currencies against the U.S. dollar. The analysis in this study examines the relationship between the price of a Big Mac and other variables such as the cost of beef, price elasticity, and income. Finally, these relationships are reviewed to draw inferences concerning the use of demand, costs, and competition in setting prices.
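As a reference for how the index in this abstract is conventionally computed, the sketch below derives the implied purchasing-power-parity exchange rate from Big Mac prices and compares it against the market rate. The prices and exchange rate used are hypothetical, not figures from the paper.

```python
def implied_ppp(local_price: float, us_price: float) -> float:
    """Implied PPP exchange rate: local-currency Big Mac price over the US dollar price."""
    return local_price / us_price

def valuation_vs_dollar(local_price: float, us_price: float, market_rate: float) -> float:
    """Percent over- (+) or under- (-) valuation of the local currency against the dollar."""
    return (implied_ppp(local_price, us_price) / market_rate - 1.0) * 100.0

# Hypothetical example: a Big Mac costs 20 units of local currency, 5 US dollars in the
# United States, and the market exchange rate is 5 local units per dollar.
print(valuation_vs_dollar(local_price=20.0, us_price=5.0, market_rate=5.0))  # -20.0 (undervalued)
```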
Abstract:
In the discussion “Indirect Cost Factors in Menu Pricing,” David V. Pavesic, Associate Professor of Hotel, Restaurant and Travel Administration at Georgia State University, initially states: “Rational pricing methodologies have traditionally employed quantitative factors to mark up food and beverage or food and labor because these costs can be isolated and allocated to specific menu items. There are, however, a number of indirect costs that can influence the price charged because they provide added value to the customer or are affected by supply/demand factors. The author discusses these costs and factors that must be taken into account in pricing decisions.” Professor Pavesic offers as a given that menu pricing should cover costs, return a profit, reflect a value for the customer, and, in the long run, attract customers and market the establishment. “Prices that are too high will drive customers away, and prices that are too low will sacrifice profit,” Professor Pavesic puts it succinctly. To dovetail with this premise, the author notes that although food costs figure markedly into menu pricing, other factors such as equipment utilization, popularity/demand, and marketing are but a few of the additional factors also to be considered. “… there is no single method that can be used to mark up every item on any given restaurant menu. One must employ a combination of methodologies and theories,” says Professor Pavesic. “Therefore, when properly carried out, prices will reflect food cost percentages, individual and/or weighted contribution margins, price points, and desired check averages, as well as factors driven by intuition, competition, and demand.” Additionally, Professor Pavesic wants you to know that value, as opposed to maximizing revenue, should be a primary motivating factor when designing menu pricing. This philosophy does come with certain caveats, and he explains them to you. Generically speaking, Professor Pavesic says, “The market ultimately determines the price one can charge.” But, in fine-tuning that decree, he further offers, “Lower prices do not automatically translate into value and bargain in the minds of the customers. Having the lowest prices in your market may not bring customers or profit.” “Too often operators engage in price wars through discount promotions and find that profits fall and their image in the marketplace is lowered,” Professor Pavesic warns. In reference to intangibles that influence menu pricing, service is at the top of the list. Ambience, location, amenities, product [i.e. food] presentation, and price elasticity are discussed as well. Be aware of price-value perception; Professor Pavesic explains this concept to you. Professor Pavesic closes with a brief overview of a la carte pricing and its pros and cons.
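For a concrete sense of the quantitative markup methods this discussion refers to, the sketch below contrasts a food-cost-percentage markup with the resulting contribution margin for a single menu item. The plate cost and food-cost target are hypothetical figures used only to illustrate the arithmetic.

```python
def price_from_food_cost_pct(food_cost: float, target_food_cost_pct: float) -> float:
    """Cost-based markup: price the item so food cost is the target share of the selling price."""
    return food_cost / target_food_cost_pct

def contribution_margin(price: float, food_cost: float) -> float:
    """Gross contribution of the item toward labor, overhead, and profit."""
    return price - food_cost

# Hypothetical item: a $3.20 plate cost priced to a 32% food-cost target.
price = price_from_food_cost_pct(food_cost=3.20, target_food_cost_pct=0.32)
print(round(price, 2))                              # 10.0
print(round(contribution_margin(price, 3.20), 2))   # 6.8
```

A pure cost-percentage markup ignores the contribution-margin and demand considerations Professor Pavesic emphasizes, which is why he argues for combining methodologies rather than relying on any single one.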
Abstract:
Current infrastructure-as-a-service (IaaS) cloud systems allow users to load their own virtual machines, but most of these systems do not provide users with an automatic mechanism to load a network topology of virtual machines. To specify and implement the network topology, we use software switches and routers as network elements. Before running a group of virtual machines, the user sets up the system once to specify a network topology of virtual machines. Then, given the user's request to run a specific topology, our system loads the appropriate virtual machines (VMs) and also runs separate VMs as software switches and routers. Furthermore, we have developed a manager that handles physical hardware failures. The system is designed so that users can use it without knowing all of the internal technical details.
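The abstract does not show how users describe a topology, so the sketch below is only a hypothetical illustration of what such a user-facing specification might look like: named user VMs, separate VMs acting as software switches and routers, the links between them, and a boot order a loader could derive from it.

```python
from dataclasses import dataclass, field

@dataclass
class TopologySpec:
    """Hypothetical declarative description of a virtual network topology."""
    vms: list = field(default_factory=list)       # user virtual machines
    switches: list = field(default_factory=list)  # separate VMs acting as software switches
    routers: list = field(default_factory=list)   # separate VMs acting as software routers
    links: list = field(default_factory=list)     # (element, element) connection pairs

# Two user VMs behind one software switch, connected to a software router for outside access.
spec = TopologySpec(
    vms=["web-vm", "db-vm"],
    switches=["sw0"],
    routers=["r0"],
    links=[("web-vm", "sw0"), ("db-vm", "sw0"), ("sw0", "r0")],
)

def boot_order(spec: TopologySpec) -> list:
    """Start network elements before the user VMs so their links are up first."""
    return spec.routers + spec.switches + spec.vms

print(boot_order(spec))
```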