26 results for Dynamic cloud service selection

in CentAUR: Central Archive at the University of Reading - UK


Relevance:

100.00%

Publisher:

Abstract:

Cloud computing is usually regarded as being energy efficient and thus emitting fewer greenhouse gases (GHG) than traditional forms of computing. When the energy consumption of Microsoft's cloud-based Office 365 (O365) and traditional Office 2010 (O2010) software suites was tested and modeled, some cloud services were found to consume more energy than the traditional form. The model developed in this research took into consideration the energy consumption at the three main stages of data transmission: the data center, the network, and the end-user device. Comparable products from each suite were selected, and activities were defined for each product to represent a different computing type. Microsoft provided highly confidential data for the data center stage, while the network and user-device stages were measured directly. A new measurement and software-apportionment approach was defined and used, allowing the power consumption of cloud services to be measured directly at the user-device stage. Results indicated that cloud computing is more energy efficient for Excel and Outlook, which consumed less energy and emitted less GHG than their standalone counterparts: the power consumption of the cloud-based Outlook and Excel was 8% and 17% lower, respectively, than that of their traditional counterparts. However, the power consumption of the cloud version of Word was 17% higher than its traditional equivalent, and a third, mixed access method measured for Word emitted 5% more GHG than the traditional version. It is evident that cloud computing may not provide a unified way forward to reduce energy consumption and GHG emissions. Migration from a standalone package to a cloud provision platform can now consider energy and GHG emissions at the software development and cloud service design stage using the methods described in this research.
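As an illustration of the three-stage accounting described in this abstract, the sketch below sums energy use over the data center, network and user-device stages and compares the resulting emissions between a cloud and a standalone session. All figures, stage values and the emission factor are invented placeholders, not the study's data or model.

```python
# Hypothetical sketch of three-stage energy accounting for a cloud service session.
# All numbers below are illustrative placeholders, not the study's measurements.

GRID_EMISSION_FACTOR = 0.5  # kg CO2e per kWh (assumed value; varies by region)

def total_energy_kwh(data_centre_kwh, network_kwh, user_device_kwh):
    """Sum energy use over the three stages of a service session."""
    return data_centre_kwh + network_kwh + user_device_kwh

def ghg_kg(energy_kwh, emission_factor=GRID_EMISSION_FACTOR):
    """Convert energy use to greenhouse-gas emissions."""
    return energy_kwh * emission_factor

# Compare a cloud session against a standalone (user-device only) session.
cloud = total_energy_kwh(data_centre_kwh=0.010, network_kwh=0.005, user_device_kwh=0.020)
standalone = total_energy_kwh(data_centre_kwh=0.0, network_kwh=0.0, user_device_kwh=0.040)

relative_change = (ghg_kg(cloud) - ghg_kg(standalone)) / ghg_kg(standalone)
print(f"Cloud vs standalone GHG change: {relative_change:+.0%}")
```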

Relevance:

100.00%

Publisher:

Abstract:

Web services are one of the most fundamental technologies for implementing service-oriented architecture (SOA) based applications. One essential challenge related to web services is finding suitable candidates for a consumer's request, a task normally called web service discovery. After a discovery protocol has run, the consumer often finds it hard to distinguish which candidates in the retrieval set are more suitable, making the selection of web services a critical task. In this paper, inspired by the idea that the service composition pattern is a significant hint for service selection, a personal profiling mechanism is proposed to improve ranking and recommendation performance. Since service selection is highly dependent on the composition process, personal knowledge is accumulated from previous service composition processes and shared via collaborative filtering, in which a set of users with similar interests is first identified. A web service re-ranking mechanism is then employed for personalised recommendation. Experimental studies are conducted and analysed to demonstrate the promising potential of this research.
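A minimal, hypothetical sketch of collaborative-filtering-based re-ranking of discovered services follows. It is not the paper's algorithm: the similarity measure (cosine over service-usage profiles), the boosting rule and all data are invented to show the general idea of using similar users' composition history to re-order candidates.

```python
# Hypothetical collaborative-filtering re-ranking of discovered web services.
from collections import Counter
from math import sqrt

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two service-usage profiles."""
    shared = set(a) & set(b)
    dot = sum(a[s] * b[s] for s in shared)
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def rerank(candidates, base_scores, target_profile, other_profiles):
    """Boost candidates that similar users composed with in the past."""
    boost = Counter()
    for profile in other_profiles:
        w = cosine(target_profile, profile)
        for service, count in profile.items():
            boost[service] += w * count
    return sorted(candidates, key=lambda s: base_scores[s] + boost[s], reverse=True)

# Usage: profiles map service names to how often a user composed with them.
me = Counter({"weather": 3, "geocode": 1})
others = [Counter({"weather": 2, "routing": 4}), Counter({"geocode": 5, "routing": 1})]
print(rerank(["routing", "payments"], {"routing": 0.5, "payments": 0.6}, me, others))
```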

Relevance:

100.00%

Publisher:

Abstract:

The simulated annealing approach to crystal structure determination from powder diffraction data, as implemented in the DASH program, is readily amenable to parallelization at the individual run level. Very large-scale increases in speed of execution can be achieved by distributing individual DASH runs over a network of computers. The CDASH program delivers this by using scalable on-demand computing clusters built on the Amazon Elastic Compute Cloud service. By way of example, a 360 vCPU cluster returned the crystal structure of racemic ornidazole (Z′ = 3, 30 degrees of freedom) ca. 40 times faster than a typical modern quad-core desktop CPU. Whilst used here specifically for DASH, this approach is of general applicability to other packages that are amenable to coarse-grained parallelism strategies.
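The coarse-grained parallelism referred to here amounts to launching many independent simulated-annealing runs and keeping the best result. The sketch below shows that pattern on a single machine with a process pool; the toy objective, step counts and cooling schedule are placeholders, not DASH's cost function or CDASH's EC2 cluster machinery.

```python
# Illustrative coarse-grained parallelism: many independent simulated-annealing runs,
# keeping the best result. The objective is a toy stand-in, not DASH's cost function.
import math
import random
from concurrent.futures import ProcessPoolExecutor

def annealing_run(seed, steps=10_000, temp=1.0, cooling=0.999):
    """One independent run: minimise a toy 1-D objective (x**2)."""
    rng = random.Random(seed)
    x = rng.uniform(-10, 10)
    best = (x * x, x)
    for _ in range(steps):
        candidate = x + rng.gauss(0, 0.5)
        delta = candidate * candidate - x * x
        if delta < 0 or rng.random() < math.exp(-delta / temp):
            x = candidate
            best = min(best, (x * x, x))
        temp *= cooling
    return best

if __name__ == "__main__":
    with ProcessPoolExecutor() as pool:
        results = list(pool.map(annealing_run, range(32)))  # 32 independent runs
    print("best objective value, solution:", min(results))
```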

Relevance:

100.00%

Publisher:

Abstract:

A full assessment of para-virtualization is important because, without knowledge of the various overheads, users cannot judge whether using virtualization is a good idea. In this paper we are interested in assessing the overheads of running various benchmarks on bare metal as well as on para-virtualization. The idea is to see what the overheads of para-virtualization are, and also to look at the overheads of turning on monitoring and logging. The knowledge gained from assessing various benchmarks on these different systems will help a range of users understand the use of virtualization systems. In this paper we assess the overheads of using Xen, VMware, KVM and Citrix (see Table 1). These virtualization systems are used extensively by cloud users. We use various Netlib benchmarks, which have been developed by the University of Tennessee at Knoxville (UTK) and Oak Ridge National Laboratory (ORNL). To assess these virtualization systems, we run the benchmarks on bare metal, then on the para-virtualization, and finally with monitoring and logging turned on. The latter is important because users are interested in the Service Level Agreements (SLAs) offered by cloud providers, and logging is a means of assessing the services bought and used from commercial providers. We assess the virtualization systems on three different systems: the Thamesblue supercomputer, the Hactar cluster and an IBM JS20 blade server (see Table 2), all of which are servers available at the University of Reading. A functional virtualization system is multi-layered and is driven by its privileged components. Virtualization systems can host multiple guest operating systems, each running in its own domain, and the system schedules virtual CPUs and memory within each virtual machine (VM) to make the best use of the available resources. The guest operating system schedules each application accordingly. Virtualization can be deployed as full virtualization or para-virtualization. Full virtualization provides a total abstraction of the underlying physical system and creates a new virtual system in which the guest operating systems can run. No modifications are needed in the guest OS or application: the guest OS or application is not aware of the virtualized environment and runs normally. Para-virtualization requires modification of the guest operating systems that run on the virtual machines; these guest operating systems are aware that they are running on a virtual machine, and provide near-native performance. Both para-virtualization and full virtualization can be deployed across various virtualized systems. Para-virtualization is an OS-assisted virtualization, in which some modifications are made to the guest operating system to enable better performance. In this kind of virtualization, the guest operating system is aware that it is running on virtualized hardware rather than on bare hardware. In para-virtualization, the device drivers in the guest operating system coordinate with the device drivers of the host operating system, reducing the performance overheads. The use of para-virtualization [0] is intended to avoid the bottleneck associated with slow hardware interrupts that exists when full virtualization is employed.
It has been shown [0] that para-virtualization does not impose a significant performance overhead in high performance computing, and this in turn has implications for the use of cloud computing for hosting HPC applications. The "apparent" improvement in virtualization has led us to formulate the hypothesis that certain classes of HPC applications should be able to execute in a cloud environment with minimal performance degradation. To support this hypothesis, it is first necessary to define exactly what is meant by a "class" of application, and secondly to observe application performance both within a virtual machine and when executing on bare hardware. A further potential complication is the need for cloud service providers to support Service Level Agreements (SLAs), so that system utilisation can be audited.
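A simple, hypothetical way to express the overheads discussed above is to compare benchmark wall-clock times on bare metal against para-virtualized runs, with and without monitoring and logging enabled. The benchmark names and timings below are invented placeholders, not results from Table 1 or Table 2.

```python
# Overhead of each configuration relative to the bare-metal baseline, in percent.
# All timings are made-up placeholders for illustration only.

def overhead_pct(baseline_s: float, measured_s: float) -> float:
    return 100.0 * (measured_s - baseline_s) / baseline_s

# benchmark -> (bare metal, para-virtualized, para-virtualized + monitoring/logging)
timings_s = {
    "linpack": (120.0, 126.5, 131.0),
    "fft":     ( 45.0,  46.8,  48.1),
}

for name, (bare, pv, pv_logged) in timings_s.items():
    print(f"{name:8s} para-virt: {overhead_pct(bare, pv):5.1f}%  "
          f"+monitoring/logging: {overhead_pct(bare, pv_logged):5.1f}%")
```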

Relevance:

40.00%

Publisher:

Abstract:

Many producers of geographic information are now disseminating their data using open web service protocols, notably those published by the Open Geospatial Consortium. There are many challenges inherent in running robust and reliable services at reasonable cost. Cloud computing provides a new kind of scalable infrastructure that could address many of these challenges. In this study we implement a Web Map Service for raster imagery within the Google App Engine environment. We discuss the challenges of developing GIS applications within this framework and the performance characteristics of the implementation. Results show that the application scales well to multiple simultaneous users and performance will be adequate for many applications, although concerns remain over issues such as latency spikes. We discuss the feasibility of implementing services within the free usage quotas of Google App Engine and the possibility of extending the approaches in this paper to other GIS applications.
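To make the service concrete, here is a minimal, generic sketch of the kind of WMS GetMap endpoint discussed above, written as a plain WSGI app rather than against Google App Engine's own APIs. The parameter names follow the OGC WMS convention; render_tile() is a hypothetical stub, and error handling for missing parameters is omitted.

```python
# Minimal WMS-style GetMap endpoint (sketch only, not the paper's implementation).
from urllib.parse import parse_qs
from wsgiref.simple_server import make_server

def render_tile(layers, bbox, width, height):
    """Placeholder: return PNG bytes for the requested raster extent."""
    return b"\x89PNG..."  # a real service would rasterise imagery here

def wms_app(environ, start_response):
    q = {k.upper(): v[0] for k, v in parse_qs(environ.get("QUERY_STRING", "")).items()}
    if q.get("REQUEST") != "GetMap":
        start_response("400 Bad Request", [("Content-Type", "text/plain")])
        return [b"Only GetMap is supported in this sketch"]
    bbox = [float(x) for x in q["BBOX"].split(",")]
    png = render_tile(q.get("LAYERS", ""), bbox, int(q["WIDTH"]), int(q["HEIGHT"]))
    start_response("200 OK", [("Content-Type", "image/png")])
    return [png]

if __name__ == "__main__":
    make_server("", 8080, wms_app).serve_forever()
```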

Relevance:

40.00%

Publisher:

Abstract:

Supplier selection has a great impact on supply chain management. The quality of supplier selection also affects the profitability of organisations working in the supply chain. As suppliers can provide a variety of services and customers demand a higher quality of service provision, organisations face challenges in making the right choice of supplier for the right needs. Existing methods for supplier selection, such as data envelopment analysis (DEA) and the analytical hierarchy process (AHP), can automatically rank competing suppliers and decide the winning supplier(s). However, these methods are not capable of determining the right selection criteria, which should be derived from the business strategy. The ontology model described in this paper integrates the strengths of DEA and AHP with new mechanisms which ensure that the right supplier is selected, using the right criteria, for the right customer needs.
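The sketch below is a deliberately simplified weighted-scoring stand-in for supplier selection. The paper combines DEA and AHP with an ontology model; this only shows the basic idea of scoring suppliers against strategy-derived criteria, and every name, weight and score is invented.

```python
# Simplified weighted-criteria supplier scoring (illustrative stand-in for DEA/AHP).

# Criteria weights assumed to be derived from the business strategy; they sum to 1.
weights = {"cost": 0.4, "quality": 0.35, "delivery": 0.25}

# Normalised criterion scores per supplier, in [0, 1] (higher is better).
suppliers = {
    "Supplier A": {"cost": 0.8, "quality": 0.6, "delivery": 0.9},
    "Supplier B": {"cost": 0.5, "quality": 0.9, "delivery": 0.7},
    "Supplier C": {"cost": 0.9, "quality": 0.5, "delivery": 0.6},
}

def score(criteria_scores):
    return sum(weights[c] * criteria_scores[c] for c in weights)

ranked = sorted(suppliers, key=lambda s: score(suppliers[s]), reverse=True)
for s in ranked:
    print(f"{s}: {score(suppliers[s]):.3f}")
```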

Relevance:

30.00%

Publisher:

Abstract:

The evaporation (sublimation) of ice particles beneath frontal ice cloud can provide a significant source of diabatic cooling which can lead to enhanced slantwise descent below the frontal surface. The strength and vertical extent of the cooling play a role in determining the dynamic response of the atmosphere, and an adequate representation is required in numerical weather-prediction (NWP) models for accurate forecasts of frontal dynamics. In this paper, data from a vertically pointing 94 GHz radar are used to determine the characteristic depth-scale of ice particle sublimation beneath frontal ice cloud. A statistical comparison is made with equivalent data extracted from the NWP mesoscale model operational at the Met Office, defining the evaporation depth-scale as the distance for the ice water content to fall to 10% of its peak value in the cloud. The results show that the depth of the ice evaporation zone derived from observations is less than 1 km for 90% of the time. The model significantly overestimates the sublimation depth-scales by a factor of between two and three, and underestimates the local ice water content by a factor of between two and four. Consequently the results suggest the model significantly underestimates the strength of the evaporative cooling, with implications for the prediction of frontal dynamics. A number of reasons for the model discrepancy are suggested. A comparison with radiosonde relative humidity data suggests part of the overestimation in evaporation depth may be due to a high RH bias in the dry slot beneath the frontal cloud, but other possible reasons include poor vertical resolution and deficiencies in the evaporation rate or ice particle fall-speed parametrizations.
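The depth-scale definition used above (the distance for the ice water content to fall to 10% of its peak value) can be computed directly from a vertical profile, as in the hedged sketch below. The profile here is synthetic; real input would come from the radar retrievals or model output described in the abstract.

```python
# Depth below the ice-water-content (IWC) peak at which IWC first falls below 10% of the peak.
import numpy as np

def evaporation_depth_scale(height_m: np.ndarray, iwc: np.ndarray) -> float:
    """Depth (m) from the IWC peak down to where IWC drops below 10% of the peak."""
    i_peak = int(np.argmax(iwc))
    threshold = 0.1 * iwc[i_peak]
    below_peak = np.arange(i_peak)            # indices beneath the peak (heights ascending)
    faded = below_peak[iwc[below_peak] < threshold]
    if faded.size == 0:
        return float("nan")                   # profile never decays to 10% within the data
    return float(height_m[i_peak] - height_m[faded[-1]])

heights = np.arange(0, 8000, 100.0)                    # m, ascending
profile = np.exp(-((heights - 5000.0) / 600.0) ** 2)   # synthetic IWC-like profile
print(f"depth-scale: {evaporation_depth_scale(heights, profile):.0f} m")
```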

Relevance:

30.00%

Publisher:

Abstract:

Nonlinear adjustment toward long-run price equilibrium relationships in the sugar-ethanol-oil nexus in Brazil is examined. We develop generalized bivariate error correction models that allow for cointegration between sugar, ethanol, and oil prices, where dynamic adjustments are potentially nonlinear functions of the disequilibrium errors. A range of models is estimated using Bayesian Markov chain Monte Carlo (MCMC) algorithms and compared using Bayesian model selection methods. The results suggest that the long-run drivers of Brazilian sugar prices are oil prices, and that there are nonlinearities in the adjustment processes of sugar and ethanol prices to oil prices but linear adjustment between ethanol and sugar prices.
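For reference, a generic bivariate error correction form with a possibly nonlinear adjustment function is written out below. The symbols and one-lag structure are illustrative only and do not reproduce the authors' exact specification.

```latex
\begin{aligned}
e_t &= s_t - \beta_0 - \beta_1 o_t,\\
\Delta s_t &= \alpha_s\, g(e_{t-1}) + \gamma_s \Delta s_{t-1} + \delta_s \Delta o_{t-1} + \varepsilon_{s,t},\\
\Delta o_t &= \alpha_o\, g(e_{t-1}) + \gamma_o \Delta o_{t-1} + \delta_o \Delta s_{t-1} + \varepsilon_{o,t},
\end{aligned}
```

where $s_t$ and $o_t$ denote (log) sugar and oil prices, $e_t$ is the disequilibrium error from the long-run relationship, and $g(\cdot)$ is the adjustment function: linear adjustment corresponds to $g(e)=e$, while the generalised models allow nonlinear forms such as cubic or threshold functions of $e$.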

Relevance:

30.00%

Publisher:

Abstract:

Cloud radar and lidar can be used to evaluate the skill of numerical weather prediction models in forecasting the timing and placement of clouds, but care must be taken in choosing the appropriate metric of skill to use because of the non-Gaussian nature of cloud-fraction distributions. We compare the properties of a number of different verification measures and conclude that, of the existing measures, the Log of Odds Ratio is the most suitable for cloud fraction. We also propose a new measure, the Symmetric Extreme Dependency Score, which has very attractive properties, being equitable (for large samples), difficult to hedge and independent of the frequency of occurrence of the quantity being verified. We then use data from five European ground-based sites and seven forecast models, processed using the ‘Cloudnet’ analysis system, to investigate the dependence of forecast skill on cloud-fraction threshold (for binary skill scores), height, horizontal scale and (for the Met Office and German Weather Service models) forecast lead time. The models are found to be least skilful at predicting the timing and placement of boundary-layer clouds and most skilful at predicting mid-level clouds, although in the latter case they tend to underestimate mean cloud fraction when cloud is present. It is found that skill decreases approximately inverse-exponentially with forecast lead time, enabling a forecast ‘half-life’ to be estimated; when considering the skill of instantaneous model snapshots, we find typical values ranging between 2.5 and 4.5 days.
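Two of the quantities mentioned above can be sketched simply: the Log of Odds Ratio for a binary cloud-fraction forecast, and a forecast "half-life" obtained from an exponential fit of skill against lead time. The counts and skill values below are invented, and the Symmetric Extreme Dependency Score and the papers' full method are not reproduced.

```python
# Log of Odds Ratio from a 2x2 contingency table, and a half-life from a skill-decay fit.
import math
import numpy as np

def log_odds_ratio(hits, false_alarms, misses, correct_negatives):
    """ln(ad / bc) for a 2x2 forecast-observation contingency table."""
    return math.log((hits * correct_negatives) / (false_alarms * misses))

print("LOR:", round(log_odds_ratio(hits=320, false_alarms=80, misses=95,
                                    correct_negatives=1505), 2))

# Fit skill(t) ~ skill0 * exp(-t / tau); half-life = tau * ln 2.
lead_days = np.array([0.5, 1, 2, 3, 4, 5])
skill = np.array([0.62, 0.55, 0.44, 0.35, 0.28, 0.22])      # synthetic skill scores
slope, intercept = np.polyfit(lead_days, np.log(skill), 1)  # log-linear fit
half_life_days = -math.log(2) / slope
print(f"estimated forecast half-life: {half_life_days:.1f} days")
```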

Relevance:

30.00%

Publisher:

Abstract:

Satellite measurements of the radiation budget and data from the U.S. National Centers for Environmental Prediction–National Center for Atmospheric Research reanalysis are used to investigate the links between anomalous cloud radiative forcing over the tropical west Pacific warm pool and the tropical dynamics and sea surface temperature (SST) distribution during 1998. The ratio, N, of the shortwave cloud forcing (SWCF) to longwave cloud forcing (LWCF) (N = −SWCF/LWCF) is used to infer information on cloud altitude. A higher than average N during 1998 appears to be related to two separate phenomena. First, dynamic regime-dependent changes explain high values of N (associated with low cloud altitude) for small magnitudes of SWCF and LWCF (low cloud fraction), which reflect the unusual occurrence of mean subsiding motion over the tropical west Pacific during 1998, associated with the anomalous SST distribution. Second, Tropics-wide long-term changes in the spatial-mean cloud forcing, independent of dynamic regime, explain the higher values of N during both 1998 and in 1994/95. The changes in dynamic regime and their anomalous structure in 1998 are well simulated by version HadAM3 of the Hadley Centre climate model, forced by the observed SSTs. However, the LWCF and SWCF are poorly simulated, as are the interannual changes in N. It is argued that improved representation of LWCF and SWCF and their dependence on dynamical forcing are required before the cloud feedbacks simulated by climate models can be trusted.
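The ratio defined in this abstract is straightforward to compute; the tiny illustration below uses invented sample forcings and simply restates the relationship given above, with higher N associated with lower cloud altitude.

```python
# Cloud-forcing ratio N = -SWCF / LWCF (sample values are invented).
def cloud_forcing_ratio(swcf_w_m2: float, lwcf_w_m2: float) -> float:
    """Higher N is associated with lower cloud altitude."""
    return -swcf_w_m2 / lwcf_w_m2

print(cloud_forcing_ratio(swcf_w_m2=-60.0, lwcf_w_m2=30.0))  # -> 2.0
```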

Relevance:

30.00%

Publisher:

Abstract:

This paper describes recent developments and improvements made to the variable-radius niching technique called Dynamic Niche Clustering (DNC). DNC is a fitness-sharing-based technique that employs a separate population of overlapping fuzzy niches with independent radii, which operate in the decoded parameter space and are maintained alongside the normal GA population. We describe a speedup process that can be applied to the initial generation, greatly reducing the complexity of the initial stages. A split operator is also introduced, designed to counteract the excessive growth of niches, and it is shown that this improves the overall robustness of the technique. Finally, the effect of local elitism is documented and compared to the performance of the basic DNC technique on a selection of 2D test functions. The paper concludes with a view to future work to be undertaken on the technique.
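For background, the sketch below shows classic fitness sharing, the mechanism that DNC builds on; DNC's fuzzy niches with independently adapted radii and its split operator are not reproduced. The population, raw fitness values and sharing parameters are illustrative.

```python
# Classic fitness sharing: penalise individuals in crowded regions of parameter space.
import numpy as np

def shared_fitness(population: np.ndarray, fitness: np.ndarray,
                   sigma_share: float = 0.5, alpha: float = 1.0) -> np.ndarray:
    """Divide each raw fitness by its niche count."""
    # Pairwise distances in decoded parameter space.
    d = np.linalg.norm(population[:, None, :] - population[None, :, :], axis=-1)
    sharing = np.where(d < sigma_share, 1.0 - (d / sigma_share) ** alpha, 0.0)
    niche_count = sharing.sum(axis=1)         # includes self (distance 0 -> 1.0)
    return fitness / niche_count

pop = np.array([[0.1, 0.1], [0.12, 0.11], [0.9, 0.8]])   # first two individuals are crowded
raw = np.array([1.0, 1.0, 1.0])
print(shared_fitness(pop, raw))               # the crowded pair gets reduced shared fitness
```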

Relevance:

30.00%

Publisher:

Abstract:

The aim of using GPS for Alzheimer's Patients is to give the carers and families of those affected by Alzheimer's disease, as well as the other dementia-related conditions, a service that can notify them via SMS text message should their loved one leave their home. Through a custom website, it enables the carer to remotely manage a contour boundary that is specifically assigned to the patient, as well as the telephone numbers of the carers. The technique makes liberal use of services such as Google Maps.
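A hypothetical sketch of the geofence check behind such a service follows: if the patient's GPS fix falls outside the carer-defined boundary, an SMS alert is triggered. For simplicity it uses a circular boundary rather than the drawn contour described above, and send_sms() is a stub standing in for a real SMS gateway; all coordinates and numbers are invented.

```python
# Geofence check with an SMS alert stub (illustrative only).
from math import radians, sin, cos, asin, sqrt

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance between two WGS84 points, in metres."""
    dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 2 * 6371000 * asin(sqrt(a))

def send_sms(number: str, message: str) -> None:
    print(f"SMS to {number}: {message}")      # stand-in for a real SMS gateway call

def check_position(lat, lon, home_lat, home_lon, radius_m, carer_numbers):
    """Circular boundary used for simplicity; the described service uses a drawn contour."""
    if haversine_m(lat, lon, home_lat, home_lon) > radius_m:
        for number in carer_numbers:
            send_sms(number, "Alert: the patient has left the home boundary.")

check_position(51.46, -0.97, home_lat=51.45, home_lon=-0.98, radius_m=500,
               carer_numbers=["+44 7700 900000"])
```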

Relevance:

30.00%

Publisher:

Abstract:

Demands for thermal comfort and better indoor air quality, together with lower environmental impacts, have been rising over the last decade. In many circumstances these demands cannot be fully met through the soft approach of bioclimatic design, such as optimisation of the building orientation and internal layout, largely because of the dense urban environment and the building's internal energy loads. In such cases, heating, ventilation, air-conditioning and refrigeration (HVAC&R) systems play a key role in fulfilling the requirements of the indoor environment, and it becomes necessary to select the most appropriate HVAC&R system. In this study, a robust decision-making approach to HVAC&R system selection is proposed. The technical performance, economic aspects and environmental impacts of 36 permutations of primary and secondary systems are taken into account to choose the most suitable HVAC&R system for a case-study office building. The building is representative of the dominant form of office building in the UK. Dynamic performance evaluation of the HVAC&R alternatives using the TRNSYS package, together with life-cycle energy cost analysis, provides a reliable basis for decision making. Six scenarios, which broadly cover decision makers' attitudes to HVAC&R system selection, are analysed through the Analytical Hierarchy Process (AHP). One of the significant outcomes reveals that, despite both the higher energy demand and the greater investment requirements associated with a combined cooling, heating and power (CCHP) system, this system is one of the top-ranked alternatives owing to its lower energy cost and CO2 emissions. The sensitivity analysis reveals that in all six scenarios the first five top-ranked alternatives do not change. Finally, the proposed approach and the results can be used by researchers and designers, especially in the early stages of a design process, when all involved parties face a lack of time, information and tools for evaluating a variety of systems.
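The AHP step used in such a selection can be sketched as deriving criterion weights from a pairwise comparison matrix via its principal eigenvector, as below. The three criteria and all comparison values are invented, and the full 36-alternative, six-scenario analysis is not reproduced.

```python
# AHP criterion weights from a Saaty-style pairwise comparison matrix (illustrative).
import numpy as np

# Example criteria: technical performance, life-cycle cost, environmental impact.
A = np.array([
    [1.0, 3.0, 2.0],
    [1/3, 1.0, 1/2],
    [1/2, 2.0, 1.0],
])

eigvals, eigvecs = np.linalg.eig(A)
k = int(np.argmax(eigvals.real))
weights = np.abs(eigvecs[:, k].real)   # principal eigenvector
weights /= weights.sum()

for name, w in zip(["technical", "cost", "environmental"], weights):
    print(f"{name:13s} weight: {w:.3f}")
```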

Relevance:

30.00%

Publisher:

Abstract:

Purpose: This paper aims to design an evaluation method that enables an organization to assess its current IT landscape and provides a readiness assessment prior to Software as a Service (SaaS) adoption. Design/methodology/approach: The research employs a mix of quantitative and qualitative approaches to conducting an IT application assessment. Quantitative data, such as end users' feedback on the IT applications, contribute to assessing the technical impact on efficiency and productivity. Qualitative data, such as business domain, business services and IT application cost drivers, are used to determine the business value of the IT applications in an organization. Findings: The assessment of IT applications leads to decisions on the suitability of each IT application for migration to a cloud environment. Research limitations/implications: The evaluation of how a particular IT application impacts on a business service is based on logical interpretation; a data mining method is suggested in order to derive patterns of IT application capabilities. Practical implications: The method has been applied in a local council in the UK, helping the council to decide the future status of its IT applications for cost-saving purposes.
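A hypothetical sketch of combining the quantitative (technical) and qualitative (business value) assessments described above into a simple migration decision follows. The scores, threshold and recommendation labels are illustrative, not the paper's actual scheme.

```python
# Combine a technical score and a business-value score into a SaaS-migration recommendation.

def migration_recommendation(technical_score: float, business_value: float,
                             threshold: float = 0.6) -> str:
    """Scores are assumed to be normalised to [0, 1]."""
    if technical_score >= threshold and business_value >= threshold:
        return "good candidate for SaaS migration"
    if business_value >= threshold:
        return "retain and improve (high value, weak technical fit)"
    return "review or retire"

applications = {
    "planning portal": (0.8, 0.9),
    "legacy HR system": (0.3, 0.7),
    "old intranet wiki": (0.4, 0.2),
}

for app, (tech, value) in applications.items():
    print(f"{app:18s} -> {migration_recommendation(tech, value)}")
```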