52 results for cloud TV


Relevance: 20.00%

Abstract:

Increasingly, infrastructure providers are supplying the cloud marketplace with storage and on-demand compute resources on which to host cloud applications. From an application user's point of view, it is desirable to identify the most appropriate set of available resources on which to execute an application. Resource choice can be complex and may involve comparing available hardware specifications, operating systems, value-added services, such as network configuration or data replication, and operating costs, such as hosting cost and data throughput. Providers' cost models often change, and new commodity cost models, such as spot pricing, have been introduced to offer significant savings. In this paper, a software abstraction layer is used to discover infrastructure resources for a particular application, across multiple providers, using a two-phase constraints-based approach. In the first phase, a set of possible infrastructure resources is identified for a given application. In the second phase, a heuristic is used to select the most appropriate resources from the initial set. For some applications a cost-based heuristic is most appropriate; for others a performance-based heuristic may be used. A financial services application and a high-performance computing application are used to illustrate the execution of the proposed resource discovery mechanism. The experimental results show that the proposed model can dynamically select an appropriate set of resources matching the application's requirements.
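
As a rough illustration of the two-phase approach described above, the sketch below filters a hypothetical list of offerings against constraints and then applies a cost- or performance-based heuristic; all provider names, fields and values are invented for illustration and are not the paper's data model.

```python
# Illustrative sketch of two-phase resource discovery; offerings are hypothetical.
offerings = [
    {"provider": "A", "cores": 8,  "mem_gb": 32, "cost_per_hr": 0.40, "gflops": 90},
    {"provider": "B", "cores": 16, "mem_gb": 64, "cost_per_hr": 1.10, "gflops": 210},
    {"provider": "C", "cores": 8,  "mem_gb": 16, "cost_per_hr": 0.25, "gflops": 70},
]

def phase_one(offerings, constraints):
    """Phase 1: keep only resources satisfying the application's constraints."""
    return [o for o in offerings
            if o["cores"] >= constraints["min_cores"]
            and o["mem_gb"] >= constraints["min_mem_gb"]]

def phase_two(candidates, heuristic):
    """Phase 2: pick the most appropriate resource with a pluggable heuristic."""
    if heuristic == "cost":          # cheapest first, e.g. financial services
        return min(candidates, key=lambda o: o["cost_per_hr"])
    if heuristic == "performance":   # fastest first, e.g. HPC workloads
        return max(candidates, key=lambda o: o["gflops"])
    raise ValueError(f"unknown heuristic: {heuristic}")

candidates = phase_one(offerings, {"min_cores": 8, "min_mem_gb": 32})
print(phase_two(candidates, "cost"))         # provider A
print(phase_two(candidates, "performance"))  # provider B
```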

Relevance: 20.00%

Abstract:

This paper reports variations of polycyclic aromatic hydrocarbon (PAH) features found in Spitzer Space Telescope spectra of carbon-rich post-asymptotic giant branch (post-AGB) stars in the Large Magellanic Cloud (LMC). The paper consists of two parts. The first part describes our Spitzer spectral observing programme of 24 stars, including post-AGB candidates. The latter half presents the analysis of PAH features in 20 carbon-rich post-AGB stars in the LMC, assembled from the Spitzer archive as well as from our own programme. We found that five post-AGB stars showed a broad feature peaking at 7.7 μm that had not been classified before. Further, the 10-13 μm PAH spectra were classified into four classes, one of which has three broad peaks at 11.3, 12.3 and 13.3 μm rather than the two distinct sharp peaks at 11.3 and 12.7 μm commonly found in H II regions. Our studies suggest that PAHs are gradually processed while the central stars evolve from the post-AGB phase into planetary nebulae, changing their composition before the PAHs are incorporated into the interstellar medium. Although some metallicity dependence of PAH spectra exists, the evolutionary state of an object is more significant than its metallicity in determining the spectral characteristics of PAHs for LMC and Galactic post-AGB stars. © 2014 The Authors. Published by Oxford University Press on behalf of the Royal Astronomical Society.

Relevance: 20.00%

Abstract:

Recent advances in hardware development, coupled with the rapid adoption and broad applicability of cloud computing, have introduced widespread heterogeneity in data centers, significantly complicating the management of cloud applications and data center resources. This paper presents the CACTOS approach to cloud infrastructure automation and optimization, which addresses heterogeneity by combining in-depth analysis of application behavior with insights from commercial cloud providers. The aim of the approach is threefold: to model applications and data center resources, to simulate applications and resources for planning and operation, and to optimize application deployment and resource use in an autonomic manner. The approach is based on case studies from the areas of business analytics, enterprise applications, and scientific computing.
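
The autonomic aim lends itself to a small sketch: simulate candidate deployments against models of the applications and the data center, then apply the best. All names below are invented for illustration; this is not the CACTOS toolkit.

```python
# Hedged sketch of a model/simulate/optimize cycle; names are invented.
def autonomic_step(datacentre_model, workload_model, candidate_placements,
                   simulate, deploy):
    """One cycle: simulate each candidate placement, deploy the best one."""
    scored = [(simulate(datacentre_model, workload_model, p), p)
              for p in candidate_placements]
    best_score, best_placement = min(scored)  # e.g. lowest predicted runtime
    deploy(best_placement)
    return best_placement

# Toy usage with a stubbed simulator that predicts runtimes in seconds:
placements = ["pack-on-fast-hosts", "spread-across-racks"]
predict = lambda dc, wl, p: {"pack-on-fast-hosts": 120.0,
                             "spread-across-racks": 95.0}[p]
print(autonomic_step({}, {}, placements, predict, deploy=lambda p: None))
```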

Relevance: 20.00%

Abstract:

Context. The jets of compact accreting objects are composed of electrons and a mixture of positrons and ions. These outflows impinge on the interstellar or intergalactic medium and both plasmas interact via collisionless processes. Filamentation (beam-Weibel) instabilities give rise to the growth of strong electromagnetic fields. These fields thermalize the interpenetrating plasmas. 

Aims. Hitherto, the effects imposed by spatial non-uniformity on filamentation instabilities have remained unexplored. We examine the interaction between spatially uniform background electrons and a minuscule cloud of electrons and positrons. The cloud size is comparable to that created in recent laboratory experiments, and such clouds may exist close to internal and external shocks of leptonic jets. The purpose of our study is to determine the prevalent instabilities, their ability to generate electromagnetic fields, and the mechanism by which the lepton micro-cloud transfers energy to the background plasma.

Methods. In our particle-in-cell (PIC) simulation, a square micro-cloud of equally dense electrons and positrons impinges on a spatially uniform plasma at rest. The latter consists of electrons with a temperature of 1 keV and immobile ions. The initially charge- and current-neutral micro-cloud has a temperature of 100 keV and a side length of 2.5 plasma skin depths of the micro-cloud, where the side length is given in the reference frame of the background plasma. The mean speed of the micro-cloud corresponds to a relativistic factor of 15, which is relevant for laboratory experiments and for relativistic astrophysical outflows. The spatial distributions of the leptons and of the electromagnetic fields are examined at several times.

Results. A filamentation instability develops between the micro-cloud and the background electrons. The electromagnetic fields, which grow from noise levels, redistribute the electrons and positrons within the cloud, which boosts the peak magnetic field amplitude. The current density and the moduli of the electromagnetic fields grow aperiodically in time and steadily along the direction anti-parallel to the cloud's velocity vector. The micro-cloud remains intact during the simulation. The instability induces an electrostatic wakefield in the background plasma.

Conclusions. Relativistic clouds of leptons can generate and amplify magnetic fields even if they have a microscopic size, which implies that the underlying processes can be studied in the laboratory. The interaction of the localized magnetic field and high-energy leptons will give rise to synchrotron jitter radiation. The wakefield in the background plasma dissipates the kinetic energy of the lepton cloud; even the fastest lepton micro-clouds can be slowed down by this collisionless mechanism. Moderately fast charge- and current-neutralized lepton micro-clouds will deposit their energy close to relativistic shocks, and hence they do not constitute an energy loss mechanism for the shock.

Relevance: 20.00%

Abstract:

Cloud data centres are critical business infrastructures and among the fastest growing service providers, so detecting anomalies in their operation is vital. Given the vast complexity of the data centre system software stack, applications and workloads, anomaly detection is a challenging endeavour. Current tools for detecting anomalies often use machine learning techniques, application instance behaviours or system metrics distributions, which are complex to implement in Cloud computing environments as they require training, access to application-level data and complex processing. This paper presents LADT, a lightweight anomaly detection tool for Cloud data centres that uses rigorous correlation of system metrics, implemented by an efficient correlation algorithm without the need for training or complex infrastructure setup. LADT is based on the hypothesis that, in an anomaly-free system, metrics from data centre host nodes and virtual machines (VMs) are strongly correlated. An anomaly is detected whenever this correlation drops below a threshold value. We demonstrate and evaluate LADT in a Cloud environment, showing that hosting-node I/O operations per second (IOPS) are strongly correlated with the aggregated virtual machine IOPS, but that this correlation vanishes when an application stresses the disk, indicating a node-level anomaly.
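
A minimal sketch of LADT's core hypothesis, assuming a simple windowed Pearson correlation between host IOPS and the aggregated VM IOPS; the window size and threshold here are illustrative, not the paper's parameters.

```python
# Sketch: flag windows where host IOPS decouple from aggregated VM IOPS.
from statistics import correlation  # Pearson's r, Python 3.10+

def detect_anomalies(host_iops, vm_iops_per_vm, window=4, threshold=0.5):
    """Yield start indices of windows whose host/VM IOPS correlation is low."""
    aggregated = [sum(samples) for samples in zip(*vm_iops_per_vm)]
    for start in range(0, len(host_iops) - window + 1, window):
        h = host_iops[start:start + window]
        v = aggregated[start:start + window]
        if correlation(h, v) < threshold:
            yield start  # node-level anomaly suspected in this window

# Two VMs; in the second window the host IOPS decouple from the VMs' IOPS.
vm1 = [100, 120, 110, 130, 100, 120, 110, 130]
vm2 = [80, 90, 85, 95, 80, 90, 85, 95]
host = [185, 215, 200, 230, 300, 180, 320, 150]
print(list(detect_anomalies(host, [vm1, vm2])))  # [4]
```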

Relevance: 20.00%

Abstract:

Uncertainty profiles are used to study the effects of contention within cloud and service-based environments. An uncertainty profile provides a qualitative description of an environment whose quality of service (QoS) may fluctuate unpredictably. Uncertain environments are modelled by strategic games with two agents: a daemon is used to represent overload and high resource contention; an angel is used to represent an idealised resource allocation situation with no underlying contention. Assessments of uncertainty profiles are useful in two ways: firstly, they provide a broad understanding of how environmental stress can affect an application's performance (and reliability); secondly, they allow the effects of introducing redundancy into a computation to be assessed.
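
A toy illustration of the angel/daemon idea, with invented completion times; the paper's formal game model is richer than this best-/worst-case comparison.

```python
# Hedged sketch: compare strategies under angelic (no contention) and
# daemonic (high contention) conditions. All payoff numbers are invented.

# completion_time[strategy] = (time under angel, time under daemon), seconds
completion_time = {
    "no_redundancy":   (10.0, 45.0),
    "duplicate_tasks": (12.0, 20.0),  # redundancy costs a little when idle,
}                                     # but caps the damage under contention

for strategy, (angel, daemon) in completion_time.items():
    print(f"{strategy}: best case {angel}s, worst case {daemon}s")

# A risk-averse user compares worst cases: duplication wins under the daemon.
best = min(completion_time, key=lambda s: completion_time[s][1])
print("preferred under contention:", best)
```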

Relevance: 20.00%

Abstract:

The increasing complexity and scale of cloud computing environments, due to widespread data centre heterogeneity, makes measurement-based evaluations highly difficult to achieve. The use of simulation tools to support decision making in cloud computing environments is therefore an increasing trend. However, the data required to model cloud computing environments with an appropriate degree of accuracy are typically voluminous, very difficult to collect without some form of automation, often unavailable in a suitable format, and time consuming to assemble manually. In this research, an automated method for cloud computing topology definition, data collection and model creation activities is presented, within the context of a suite of tools that have been developed and integrated to support these activities.

Relevance: 20.00%

Abstract:

The Magellanic Clouds are uniquely placed to study the stellar contribution to dust emission. Individual stars can be resolved in these systems even in the mid-infrared, and they are close enough to allow detection of infrared excess caused by dust. We have searched the Spitzer Space Telescope data archive for all Infrared Spectrograph (IRS) staring-mode observations of the Small Magellanic Cloud (SMC) and found that 209 Infrared Array Camera (IRAC) point sources within the footprint of the Surveying the Agents of Galaxy Evolution in the Small Magellanic Cloud (SAGE-SMC) Spitzer Legacy programme were targeted, within a total of 311 staring-mode observations. We classify these point sources using a decision-tree method of object classification, based on infrared spectral features, continuum and spectral energy distribution shape, bolometric luminosity, cluster membership and variability information. We find 58 asymptotic giant branch (AGB) stars, 51 young stellar objects, 4 post-AGB objects, 22 red supergiants, 27 stars (of which 23 are dusty OB stars), 24 planetary nebulae (PNe), 10 Wolf-Rayet stars, 3 H II regions, 3 R Coronae Borealis stars, 1 blue supergiant and 6 other objects, including 2 foreground AGB stars. We use these classifications to evaluate the success of photometric classification methods reported in the literature.
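
A hedged sketch of how such a decision-tree classification might look in code; the feature names, branch order and labels below are invented placeholders rather than the paper's calibrated tree, which draws on spectral features, SED shape, luminosity, cluster membership and variability.

```python
# Illustrative decision-tree style classifier; all criteria are hypothetical.
def classify(src):
    """Classify one IRS point source from a dict of derived quantities."""
    if src["pah_features"] and src["rising_sed"]:
        return "young stellar object candidate"
    if src["silicate_dust"]:
        return "O-rich AGB/red supergiant candidate (split on luminosity)"
    if src["carbon_dust"]:
        return "carbon-rich AGB candidate"
    return "unclassified"

print(classify({"pah_features": True, "rising_sed": True,
                "silicate_dust": False, "carbon_dust": False}))
```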

Relevance: 20.00%

Abstract:

Demand Side Management (DSM) plays an important role in the Smart Grid. It involves large-scale access points, massive numbers of users, heterogeneous infrastructure and dispersed participants. Cloud computing is a service model characterized by on-demand resources, high reliability and large-scale integration, and game theory is a useful tool for analyzing dynamic economic phenomena. In this study, a cloud + end scheme is proposed to solve the technical and economic problems of DSM. The cloud + end architecture is designed to address the technical problems of DSM, and a cloud + end construction model based on game theory is presented to address its economic problems. The proposed method is tested on a DSM cloud + end public service system built in a city in southern China. The results demonstrate the feasibility of these integrated solutions, which can provide a reference for the popularization and application of DSM in China.

Relevance: 20.00%

Abstract:

How can applications be deployed on the cloud to achieve maximum performance? This question has become significant and challenging with the availability of a wide variety of Virtual Machines (VMs) with different performance capabilities in the cloud. It is addressed here by proposing a six-step benchmarking methodology in which a user provides a set of four weights indicating how important each of the following groups is to the application to be executed on the cloud: memory, processor, computation and storage. The weights, along with cloud benchmarking data, are used to generate a ranking of VMs that can maximise the performance of the application. The rankings are validated through an empirical analysis using two case study applications, the first a financial risk application and the second a molecular dynamics simulation, both representative of workloads that can benefit from execution on the cloud. Both case studies validate the feasibility of the methodology and highlight that maximum performance can be achieved on the cloud by selecting the top-ranked VMs produced by the methodology.
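
A minimal sketch of the weighting step, assuming benchmark scores already normalised to [0, 1]; the VM names, values and weights are illustrative only, not the paper's benchmark data.

```python
# Sketch: rank VMs by a weighted sum over the four benchmark groups.
benchmarks = {  # higher = better, normalised to [0, 1]; values are invented
    "vm.small": {"memory": 0.4, "processor": 0.5, "computation": 0.3, "storage": 0.6},
    "vm.large": {"memory": 0.9, "processor": 0.8, "computation": 0.9, "storage": 0.5},
    "vm.io":    {"memory": 0.5, "processor": 0.4, "computation": 0.4, "storage": 0.95},
}

def rank_vms(benchmarks, weights):
    """Return VM names ordered by weighted score, best first."""
    score = lambda b: sum(weights[g] * b[g] for g in weights)
    return sorted(benchmarks, key=lambda vm: score(benchmarks[vm]), reverse=True)

# A memory- and computation-bound application (e.g. molecular dynamics):
print(rank_vms(benchmarks, {"memory": 0.4, "processor": 0.1,
                            "computation": 0.4, "storage": 0.1}))
# ['vm.large', 'vm.io', 'vm.small']
```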

Relevance: 20.00%

Abstract:

With the availability of a wide range of cloud Virtual Machines (VMs), it is difficult to determine which VMs can maximise the performance of an application. Benchmarking is commonly used to this end to capture the performance of VMs. Most cloud benchmarking techniques are heavyweight: time-consuming processes that must benchmark the entire VM in order to obtain accurate benchmark data. Such benchmarks cannot be used in real-time on the cloud and incur extra costs even before an application is deployed.

In this paper, we present lightweight cloud benchmarking techniques that execute quickly and can be used in near real-time on the cloud. The exploration of lightweight benchmarking techniques is facilitated by the development of DocLite (Docker Container-based Lightweight Benchmarking). DocLite is built on Docker container technology, which allows a user-defined portion of the VM (such as memory size and the number of CPU cores) to be benchmarked. DocLite operates in two modes. In the first mode, containers are used to benchmark a small portion of the VM to generate performance ranks. In the second mode, historic benchmark data are used along with the first mode, as a hybrid, to generate VM ranks. The generated ranks are evaluated against three scientific high-performance computing applications. The proposed techniques are up to 91 times faster than a heavyweight technique that benchmarks the entire VM. We observe that the first mode can generate ranks with over 90% and 86% accuracy for sequential and parallel execution of an application, respectively. The hybrid mode improves the correlation slightly, but the first mode is sufficient for benchmarking cloud VMs.
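
A sketch of the container-bounded idea, using standard Docker resource flags rather than DocLite's actual tooling; the image name below is a placeholder.

```python
# Sketch: Docker's --cpus and --memory flags confine a benchmark to a
# user-defined slice of the VM. "benchmark-image" is a placeholder image;
# this is not DocLite's actual interface.
import subprocess

def run_bounded_benchmark(cpus: int, mem_gb: int, image: str = "benchmark-image"):
    """Run a benchmark container limited to a portion of the VM's resources."""
    cmd = ["docker", "run", "--rm",
           f"--cpus={cpus}",       # cap CPU cores visible to the benchmark
           f"--memory={mem_gb}g",  # cap memory available to the benchmark
           image]
    return subprocess.run(cmd, capture_output=True, text=True, check=True).stdout

# e.g. benchmark a 1-core, 1 GB slice instead of the whole VM:
# print(run_bounded_benchmark(cpus=1, mem_gb=1))
```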