886 results for Cloud Fraction


Relevance:

20.00%

Publisher:

Abstract:

The Magellanic Clouds are uniquely placed to study the stellar contribution to dust emission. Individual stars can be resolved in these systems even in the mid-infrared, and they are close enough to allow detection of infrared excess caused by dust. We have searched the Spitzer Space Telescope data archive for all Infrared Spectrograph (IRS) staring-mode observations of the Small Magellanic Cloud (SMC) and found that 209 Infrared Array Camera (IRAC) point sources within the footprint of the Surveying the Agents of Galaxy Evolution in the Small Magellanic Cloud (SAGE-SMC) Spitzer Legacy programme were targeted, within a total of 311 staring-mode observations. We classify these point sources using a decision tree method of object classification, based on infrared spectral features, continuum and spectral energy distribution shape, bolometric luminosity, cluster membership and variability information. We find 58 asymptotic giant branch (AGB) stars, 51 young stellar objects, 4 post-AGB objects, 22 red supergiants, 27 stars (of which 23 are dusty OB stars), 24 planetary nebulae (PNe), 10 Wolf-Rayet stars, 3 H II regions, 3 R Coronae Borealis stars, 1 Blue Supergiant and 6 other objects, including 2 foreground AGB stars. We use these classifications to evaluate the success of photometric classification methods reported in the literature.
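A toy sketch of how a decision-tree style classification over such features might look is given below; the feature names, thresholds and classes are invented placeholders, not the tree actually used in the study.

```python
# Toy illustration of a decision-tree style classifier that assigns a source
# class from a few infrared features.  Feature names, thresholds and classes
# are invented placeholders, not the decision tree used in the study.

def classify(src):
    if src.get("silicate_emission") and src["luminosity_lsun"] > 1e4:
        return "red supergiant candidate"
    if src.get("pah_features") and src.get("rising_sed"):
        return "young stellar object candidate"
    if src.get("variable") and src.get("carbon_rich_dust"):
        return "carbon-rich AGB candidate"
    return "unclassified / needs further inspection"

example = {"silicate_emission": True, "luminosity_lsun": 3e4,
           "pah_features": False, "rising_sed": False, "variable": False}
print(classify(example))  # 'red supergiant candidate'
```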

Relevance:

20.00%

Publisher:

Abstract:

Context. Binary stellar evolution calculations predict that Chandrasekhar-mass carbon/oxygen white dwarfs (WDs) show a radially varying composition profile with a carbon-depleted core. Many recent multi-dimensional simulations of Type Ia supernovae (SNe Ia), however, assume the progenitor WD has a homogeneous chemical composition.
Aims: In this work, we explore the impact of different initial carbon profiles of the progenitor WD on the explosion phase and on synthetic observables in the Chandrasekhar-mass delayed-detonation model. Spectra and light curves are compared to observations to judge the validity of the model.
Methods: The explosion phase is simulated using the finite-volume supernova code Leafs, which is extended to treat different compositions of the progenitor WD. The synthetic observables are computed with the Monte Carlo radiative transfer code Artis. Results: Differences in the binding energies of carbon and oxygen lead to a lower nuclear energy release for carbon-depleted material; thus, the burning fronts that develop are weaker and the total nuclear energy release is smaller. For otherwise identical conditions, carbon-depleted models produce less 56Ni. Comparing different models with similar 56Ni yields shows lower kinetic energies in the ejecta for carbon-depleted models, but only small differences in velocity distributions and in line velocities in spectra. The light-curve width-luminosity relation (WLR) obtained for models with differing carbon depletion is roughly perpendicular to the observed WLR; hence the carbon mass fraction is probably only a secondary parameter in the family of SNe Ia.
Tables 3 and 4 are available in electronic form at http://www.aanda.org

Relevance:

20.00%

Publisher:

Abstract:

Demand Side Management (DSM) plays an important role in the Smart Grid. It involves large-scale access points, massive numbers of users, heterogeneous infrastructure and dispersed participants. Cloud computing, as a service model, is characterized by on-demand resources, high reliability and large-scale integration, while game theory is a useful tool for analysing dynamic economic phenomena. In this study, a "cloud + end" scheme is proposed to address both the technical and the economic problems of DSM. The cloud + end architecture is designed to address the technical problems; in particular, a cloud + end construction model based on game theory is presented to address the economic problems. The proposed method is tested on the construction of a DSM cloud + end public service system in a city in southern China. The results demonstrate the feasibility of these integrated solutions, which can serve as a reference for the popularization and application of DSM in China.
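A toy sketch of the game-theoretic flavour underlying such a model is shown below: a provider announces a price and each user independently chooses a consumption level that maximises a simple utility; the utility form and all parameters are invented assumptions, not the construction model of the study.

```python
# Toy illustration of the game-theoretic flavour of demand-side management:
# a provider announces a price and each user chooses a consumption level that
# maximises a simple quadratic utility minus the energy cost.  The utility
# form, parameters and best-response formula are invented assumptions.

def best_response(price, a, b):
    """Maximise u(x) = a*x - 0.5*b*x^2 - price*x  =>  x* = (a - price) / b."""
    return max((a - price) / b, 0.0)

def total_demand(price, users):
    return sum(best_response(price, a, b) for a, b in users)

users = [(10.0, 2.0), (8.0, 1.5), (12.0, 3.0)]   # (a, b) per user, made up
for price in (2.0, 4.0, 6.0):
    print(price, round(total_demand(price, users), 2))
```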

Relevance:

20.00%

Publisher:

Abstract:

How can applications be deployed on the cloud to achieve maximum performance? This question has become significant and challenging with the availability of a wide variety of Virtual Machines (VMs) with different performance capabilities in the cloud. It is addressed here by proposing a six-step benchmarking methodology in which a user provides four weights indicating how important each of the memory, processor, computation and storage groups is to the application to be executed on the cloud. The weights, along with cloud benchmarking data, are used to generate a ranking of VMs that can maximise the performance of the application. The rankings are validated through an empirical analysis using two case study applications, a financial risk application and a molecular dynamics simulation, both representative of workloads that can benefit from execution on the cloud. Both case studies validate the feasibility of the methodology and highlight that maximum performance can be achieved on the cloud by selecting the top-ranked VMs produced by the methodology.
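A minimal sketch of such a weight-based ranking is given below; the VM names, benchmark scores and weight values are illustrative assumptions rather than data from the methodology.

```python
# Illustrative sketch: rank cloud VMs by combining normalised benchmark
# scores for four attribute groups with user-supplied importance weights.
# All VM names and scores below are made-up placeholders.

def rank_vms(benchmarks, weights):
    """benchmarks: {vm: {group: score}}, higher score = better.
    weights: {group: importance}, e.g. provided by the user."""
    groups = list(weights)
    # Normalise each group's scores to [0, 1] so groups are comparable.
    maxima = {g: max(b[g] for b in benchmarks.values()) or 1.0 for g in groups}
    totals = {
        vm: sum(weights[g] * scores[g] / maxima[g] for g in groups)
        for vm, scores in benchmarks.items()
    }
    return sorted(totals, key=totals.get, reverse=True)

benchmarks = {
    "vm.small":  {"memory": 2.1, "processor": 1.8, "computation": 1.5, "storage": 0.9},
    "vm.medium": {"memory": 4.0, "processor": 3.6, "computation": 3.1, "storage": 1.7},
    "vm.large":  {"memory": 7.9, "processor": 7.2, "computation": 6.4, "storage": 3.2},
}
weights = {"memory": 0.4, "processor": 0.3, "computation": 0.2, "storage": 0.1}
print(rank_vms(benchmarks, weights))  # e.g. ['vm.large', 'vm.medium', 'vm.small']
```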

Relevance:

20.00%

Publisher:

Abstract:

With the availability of a wide range of cloud Virtual Machines (VMs), it is difficult to determine which VMs can maximise the performance of an application. Benchmarking is commonly used to this end to capture the performance of VMs. Most cloud benchmarking techniques are heavyweight: time-consuming processes that must benchmark the entire VM in order to obtain accurate benchmark data. Such benchmarks cannot be used in real time on the cloud and incur extra costs even before an application is deployed.

In this paper, we present lightweight cloud benchmarking techniques that execute quickly and can be used in near real time on the cloud. The exploration of lightweight benchmarking techniques is facilitated by the development of DocLite - Docker Container-based Lightweight Benchmarking. DocLite is built on Docker container technology, which allows a user-defined portion of the VM (such as the memory size and the number of CPU cores) to be benchmarked. DocLite operates in two modes: in the first mode, containers are used to benchmark a small portion of the VM to generate performance ranks; in the second mode, historic benchmark data is used along with the first mode, as a hybrid, to generate VM ranks. The generated ranks are evaluated against three scientific high-performance computing applications. The proposed techniques are up to 91 times faster than a heavyweight technique that benchmarks the entire VM. It is observed that the first mode can generate ranks with over 90% and 86% accuracy for sequential and parallel execution of an application, respectively. The hybrid mode improves the correlation slightly, but the first mode is sufficient for benchmarking cloud VMs.
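A rough sketch of benchmarking only a user-defined slice of a VM with a Docker container is shown below; the image name, benchmark command and resource limits are hypothetical and do not reflect DocLite's actual interface.

```python
# Illustrative sketch: run a benchmark inside a Docker container that is
# restricted to a user-defined slice of the host VM (memory size and CPU
# cores), in the spirit of container-based lightweight benchmarking.
# The image name and benchmark command are placeholders, not DocLite's API.
import subprocess

def benchmark_slice(image, command, mem="1g", cpus="2"):
    """Run `command` in `image`, limited to `mem` of RAM and `cpus` cores."""
    result = subprocess.run(
        ["docker", "run", "--rm",
         "--memory", mem,        # cap the container's memory
         "--cpus", cpus,         # cap the number of CPU cores
         image] + command,
        capture_output=True, text=True, check=True)
    return result.stdout        # raw benchmark output to be parsed into scores

if __name__ == "__main__":
    out = benchmark_slice("ubuntu:22.04", ["sysbench", "cpu", "run"],
                          mem="512m", cpus="1")
    print(out)
```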

Relevance:

20.00%

Publisher:

Abstract:

Bag of Distributed Tasks (BoDT) applications can benefit from decentralised execution on the Cloud. However, there is a trade-off between the performance that can be achieved by employing a large number of Cloud VMs for the tasks and the monetary constraints that are often placed by a user. The research reported in this paper investigates this trade-off so that an optimal plan for deploying BoDT applications on the cloud can be generated. A heuristic algorithm, which considers the user's preference for performance and cost, is proposed and implemented. The feasibility of the algorithm is demonstrated by generating execution plans for a sample application. The key result is that the algorithm generates optimal execution plans for the application over 91% of the time.
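A toy sketch of a preference-weighted search over deployment plans is given below; the pricing, runtime model and scoring function are assumptions for illustration only and are not the heuristic proposed in the paper.

```python
# Toy sketch: choose how many identical VMs to rent for a bag of tasks by
# trading off completion time against monetary cost, weighted by a user
# preference alpha (0 = cost only, 1 = performance only).  Prices, runtimes
# and the scoring function are made-up assumptions.
import math

def choose_plan(n_tasks, task_seconds, price_per_hour, max_vms, alpha):
    base_time = n_tasks * task_seconds                              # 1-VM makespan
    base_cost = math.ceil(base_time / 3600) * price_per_hour        # 1-VM cost
    best = None
    for vms in range(1, max_vms + 1):
        makespan = math.ceil(n_tasks / vms) * task_seconds          # seconds
        cost = vms * math.ceil(makespan / 3600) * price_per_hour    # hourly billing
        score = alpha * (makespan / base_time) + (1 - alpha) * (cost / base_cost)
        if best is None or score < best[0]:
            best = (score, vms, makespan, cost)
    return best

print(choose_plan(n_tasks=500, task_seconds=60, price_per_hour=0.10,
                  max_vms=32, alpha=0.5))
```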

Relevance:

20.00%

Publisher:

Abstract:

When orchestrating Web service workflows, the geographical placement of the orchestration engine(s) can greatly affect workflow performance. Data may have to be transferred across long geographical distances, which in turn increases execution time and degrades the overall performance of a workflow. In this paper, we present a framework that, given a DAG-based workflow specification, computes the optimal Amazon EC2 cloud regions in which to deploy the orchestration engines and execute a workflow. The framework incorporates a constraint model that solves the workflow deployment problem and is generated using an automated constraint modelling system. The feasibility of the framework is evaluated by executing different sample workflows representative of scientific workloads. The experimental results indicate that the framework reduces the workflow execution time and provides a speedup of 1.3x-2.5x over centralised approaches.
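The sketch below conveys the flavour of the placement decision with a brute-force search that minimises estimated data-transfer time; the sites, regions, data sizes and throughput figures are invented, and the paper itself solves the problem with an automated constraint model rather than enumeration.

```python
# Illustrative sketch: pick the cloud region for an orchestration engine that
# minimises estimated transfer time of a workflow's data.  Sites, regions,
# data sizes (MB) and throughput figures (MB/s) are invented placeholders;
# the paper instead solves the deployment problem with a constraint model.

def best_region(data_mb_per_site, throughput_mb_s):
    """data_mb_per_site: {site: MB that must move to/from the engine}.
    throughput_mb_s: {(site, region): achievable MB/s between them}."""
    regions = {r for (_, r) in throughput_mb_s}
    def transfer_time(region):
        return sum(mb / throughput_mb_s[(site, region)]
                   for site, mb in data_mb_per_site.items())
    return min(regions, key=transfer_time)

data = {"lab-storage": 2000, "partner-archive": 500}
throughput = {("lab-storage", "eu-west-1"): 40, ("partner-archive", "eu-west-1"): 10,
              ("lab-storage", "us-east-1"): 12, ("partner-archive", "us-east-1"): 30}
print(best_region(data, throughput))  # e.g. 'eu-west-1'
```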

Relevance:

20.00%

Publisher:

Abstract:

Cloud data centres are implemented as large-scale clusters with demanding requirements for service performance, availability and cost of operation. As a result of their scale and complexity, data centres typically exhibit large numbers of system anomalies resulting from operator error, resource over- or under-provisioning, hardware or software failures and security issues. Such anomalies are inherently difficult to identify and resolve promptly via human inspection. Therefore, it is vital in a cloud system to have automatic system monitoring that detects potential anomalies and identifies their source. In this paper we present a lightweight anomaly detection tool (LADT) for Cloud data centres which combines extended log analysis and rigorous correlation of system metrics, implemented by an efficient correlation algorithm that does not require training or a complex infrastructure set-up. The LADT algorithm is based on the premise that there is a strong correlation between node-level and VM-level metrics in a cloud system. This correlation drops significantly in the event of a performance anomaly at the node level, and a continuous drop in the correlation can indicate the presence of a true anomaly in the node. The log analysis of LADT assists in determining whether a correlation drop could be caused by naturally occurring cloud management activity such as VM migration, creation, suspension, termination or resizing. In this way, potential anomaly alerts are reasoned about to prevent false positives that could be caused by the cloud operator's activity. We demonstrate LADT with log analysis in a Cloud environment to show how the log analysis is combined with the correlation of system metrics to achieve accurate anomaly detection.
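A minimal sketch of the correlation premise is given below: a sliding-window Pearson correlation between a node-level metric and a VM-level metric is computed, and windows where the correlation collapses are flagged; the window size, threshold and synthetic data are assumptions, not LADT's implementation.

```python
# Illustrative sketch of the premise behind correlation-based anomaly
# detection: node-level and VM-level metrics normally track each other, so a
# collapse in their correlation over a window may indicate an anomaly.
# Window size, threshold and the synthetic data are assumptions.
from statistics import correlation   # Python 3.10+

def detect(node_metric, vm_metric, window=30, threshold=0.5):
    """Yield the start index of every window whose Pearson correlation
    between node- and VM-level samples falls below `threshold`."""
    for start in range(0, len(node_metric) - window + 1, window):
        node_win = node_metric[start:start + window]
        vm_win = vm_metric[start:start + window]
        if correlation(node_win, vm_win) < threshold:
            yield start

# Synthetic example: the VM metric mirrors the node metric except in one
# window, where it is anti-correlated and therefore flagged.
node = [i % 10 + 1 for i in range(90)]
vm = list(node)
vm[30:60] = node[30:60][::-1]        # correlation collapses here
print(list(detect(node, vm)))        # e.g. [30]
```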

Relevance:

20.00%

Publisher:

Abstract:

Existing benchmarking methods are time-consuming processes, as they typically benchmark the entire Virtual Machine (VM) in order to generate accurate performance data, making them less suitable for real-time analytics. The research in this paper aims to surmount this challenge by presenting DocLite - a Docker Container-based Lightweight benchmarking tool. DocLite explores lightweight cloud benchmarking methods for rapidly executing benchmarks in near real time. DocLite is built on Docker container technology, which allows a user-defined memory size and number of CPU cores of the VM to be benchmarked. The tool incorporates two benchmarking methods: the first, referred to as the native method, employs containers to benchmark a small portion of the VM and generate performance ranks; the second uses historic benchmark data along with the native method, as a hybrid, to generate VM ranks. The proposed methods are evaluated on three use cases and are observed to be up to 91 times faster than benchmarking the entire VM. In both methods, small containers provide the same quality of rankings as a large container. The native method generates ranks with over 90% and 86% accuracy for sequential and parallel execution of an application, respectively, compared against benchmarking the whole VM. The hybrid method did not improve the quality of the rankings significantly.
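A small sketch of combining ranks from a quick container benchmark with ranks from historic data is shown below; the simple weighted average and the sample ranks are assumptions for illustration, not the tool's actual hybrid scheme.

```python
# Illustrative sketch of a hybrid ranking: combine the rank a VM obtains from
# a quick container-based benchmark with its rank from historic benchmark
# data.  The weighted average used here and the sample ranks are assumptions,
# not the tool's actual scheme.

def hybrid_rank(native_ranks, historic_ranks, native_weight=0.5):
    """Both inputs map VM name -> rank position (1 = best)."""
    combined = {
        vm: native_weight * native_ranks[vm]
            + (1 - native_weight) * historic_ranks[vm]
        for vm in native_ranks
    }
    return sorted(combined, key=combined.get)

native = {"vm.a": 1, "vm.b": 3, "vm.c": 2}
historic = {"vm.a": 2, "vm.b": 1, "vm.c": 3}
print(hybrid_rank(native, historic))  # e.g. ['vm.a', 'vm.b', 'vm.c']
```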

Relevance:

20.00%

Publisher:

Abstract:

Scheduling jobs with deadlines, each of which defines the latest time by which a job must be completed, can be challenging on the cloud due to the incurred costs and unpredictable performance. This problem is further complicated when there is not enough information to effectively schedule a job such that its deadline is satisfied and the cost is minimised. In this paper, we present an approach to schedule jobs with deadlines on the cloud when their performance is unknown before execution. By performing a sampling phase to collect the necessary information about those jobs, our approach delivers scheduling decisions within 10% of the cost and 16% of the violation rate of the ideal setting, which has complete knowledge of each job from the beginning. It is noted that our proposed algorithm outperforms existing approaches that use a fixed amount of resources, reducing the violation cost by at least a factor of two.
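The sketch below illustrates the general idea of a sampling phase: a few tasks are executed first to estimate the per-task runtime, and the fewest VMs expected to meet the deadline are then provisioned; the sample size, runtime model and parameters are invented and do not reproduce the paper's algorithm.

```python
# Illustrative sketch of sampling-based scheduling: execute a small sample of
# a job's tasks to estimate the per-task runtime, then provision the fewest
# VMs that are expected to finish the remaining tasks before the deadline.
# The sample size, runtime model and pricing assumptions are invented.
import math

def plan_after_sampling(sample_runtimes, remaining_tasks, seconds_to_deadline,
                        max_vms):
    est = sum(sample_runtimes) / len(sample_runtimes)   # estimated seconds/task
    for vms in range(1, max_vms + 1):
        makespan = math.ceil(remaining_tasks / vms) * est
        if makespan <= seconds_to_deadline:
            return vms, makespan
    return max_vms, math.ceil(remaining_tasks / max_vms) * est  # best effort

print(plan_after_sampling(sample_runtimes=[58, 61, 60],
                          remaining_tasks=400,
                          seconds_to_deadline=3600,
                          max_vms=16))
```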

Relevance:

20.00%

Publisher:

Abstract:

The aim of this scientific work is the study of the vertical transport of horizontal momentum (convective momentum transport, CMT) carried out by deep convective cloud systems over the tropical ocean. For this study, three-dimensional simulations produced by a cloud-resolving model (CRM) were used for the four months of the TOGA COARE observational campaign, which took place over the warm waters of the western Pacific. The study focuses essentially on the statistical and cloud-scale characteristics of the CMT during an episode of strong westerly winds and during a longer period that includes this deep-convection event. The vertical and time-height distributions of atmospheric fields related to the CMT are evaluated against the available observational fields, showing good agreement with the results of previous studies, thereby confirming the good quality of the former and providing the confidence needed to continue the investigation. The sensitivity of the CMT to the spatial domain of the model is analysed using two types of three-dimensional simulations produced with horizontal domains of different sizes, suggesting that the CMT does not depend on the size of the horizontal spatial domain chosen to simulate this variable. The ability of the mixing-length parameterisation to simulate the CMT is tested, highlighting the tropospheric regions where the fluxes of horizontal momentum are downgradient or countergradient. The downgradient fluxes are associated with a weak correlation between the atmospheric fields that characterise this parameterisation, suggesting that the formulations of the in-cloud mass fluxes and of the entrainment of air into the cloud should be revised. The importance of saturated and unsaturated air for the CMT is studied with the aim of reaching a better understanding of the physical mechanisms responsible for the CMT. Unsaturated and saturated air in the form of downdraughts contribute decisively to the CMT and should be considered in future parameterisations of the CMT and of cumulus convection. Clustering methods were applied to the contributions of saturated and unsaturated air, analysing the buoyancy and vertical-velocity fields of the air parcel, leading to the conclusion that internal gravity waves are the mechanism responsible for the unsaturated air. The pressure-gradient force inside the cloud is also evaluated, using for this purpose the theoretical formula proposed by Gregory et al. (1997). A good correlation between this force and the product of the wind shear and the vertical-velocity perturbation is found, mainly for the in-cloud updraughts during the deep-convection episode. However, the ideal value of the empirical coefficient c*, which characterises the influence of the in-cloud pressure-gradient force on the vertical variation of the in-cloud horizontal velocity, is not satisfactorily obtained. Good results are achieved in testing the mass-flux approximation proposed by Kershaw and Gregory (1997) for the calculation of the total CMT, revealing once again the importance of unsaturated air for the CMT.
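One hedged way to write the relation being tested, namely that the in-cloud pressure-gradient force scales with the product of the vertical wind shear and the vertical-velocity perturbation through the empirical coefficient c*, is sketched below; the exact form and notation in Gregory et al. (1997) may differ.

```latex
% Sketch only: notation assumed here, not taken verbatim from Gregory et al. (1997).
% The tested relation is that the in-cloud horizontal pressure-gradient force
% scales with the vertical wind shear times the vertical-velocity perturbation,
% through the empirical coefficient c^{*}.
\[
  -\frac{1}{\rho}\,\frac{\partial p'}{\partial x}\bigg|_{\mathrm{cloud}}
  \;\propto\; c^{*}\, w'\, \frac{\partial \bar{u}}{\partial z}
\]
```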

Relevance:

20.00%

Publisher:

Abstract:

Master's thesis in Biomedical Engineering and Biophysics, presented to the Universidade de Lisboa through the Faculdade de Ciências, 2015.