7 results for cloud environment

in QUB Research Portal - Research Directory and Institutional Repository for Queen's University Belfast


Relevance: 70.00%

Abstract:

Cloud data centres are critical business infrastructures and the fastest growing service providers. Detecting anomalies in Cloud data centre operation is vital. Given the vast complexity of the data centre system software stack, applications and workloads, anomaly detection is a challenging endeavour. Current tools for detecting anomalies often use machine learning techniques, application instance behaviours or system metrics distribution, which are complex to implement in Cloud computing environments as they require training, access to application-level data and complex processing. This paper presents LADT, a lightweight anomaly detection tool for Cloud data centres that uses rigorous correlation of system metrics, implemented by an efficient correlation algorithm without the need for training or complex infrastructure setup. LADT is based on the hypothesis that, in an anomaly-free system, metrics from data centre host nodes and virtual machines (VMs) are strongly correlated. An anomaly is detected whenever the correlation drops below a threshold value. We demonstrate and evaluate LADT in a Cloud environment, showing that the hosting node I/O operations per second (IOPS) are strongly correlated with the aggregated virtual machine IOPS, but that this correlation vanishes when an application stresses the disk, indicating a node-level anomaly.

Relevance: 70.00%

Abstract:

Cloud data centres are implemented as large-scale clusters with demanding requirements for service performance, availability and cost of operation. As a result of scale and complexity, data centres typically exhibit large numbers of system anomalies resulting from operator error, resource over- or under-provisioning, hardware or software failures and security issues. These anomalies are inherently difficult to identify and resolve promptly via human inspection. Therefore, it is vital in a cloud system to have automatic system monitoring that detects potential anomalies and identifies their source. In this paper we present a lightweight anomaly detection tool for Cloud data centres which combines extended log analysis and rigorous correlation of system metrics, implemented by an efficient correlation algorithm which does not require training or complex infrastructure setup. The LADT algorithm is based on the premise that there is a strong correlation between node-level and VM-level metrics in a cloud system. This correlation will drop significantly in the event of any performance anomaly at the node level, and a continuous drop in the correlation can indicate the presence of a true anomaly in the node. The log analysis of LADT assists in determining whether the correlation drop could be caused by naturally occurring cloud management activity such as VM migration, creation, suspension, termination or resizing. In this way, any potential anomaly alerts are reasoned about to prevent false positives that could be caused by the cloud operator's activity. We demonstrate LADT with log analysis in a Cloud environment to show how the log analysis is combined with the correlation of system metrics to achieve accurate anomaly detection.
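The false-positive filtering described above can be sketched as a simple check: a correlation drop is only confirmed as an anomaly if the management logs show no VM lifecycle activity in the same window. The event names, log format and function names below are illustrative assumptions, not the paper's actual implementation.

```python
# Sketch of log-based alert filtering: suppress correlation-drop alerts
# that coincide with benign cloud-management activity. Event names and
# the (timestamp, event) log representation are assumptions.
MANAGEMENT_EVENTS = {"migrate", "create", "suspend", "terminate", "resize"}


def filter_alerts(correlation_drops, log_events):
    """Return only drops with no management event inside their window.

    correlation_drops: iterable of (start, end) windows with low correlation.
    log_events: list of (timestamp, event_name) parsed from management logs.
    """
    confirmed = []
    for start, end in correlation_drops:
        benign = any(start <= ts <= end and ev in MANAGEMENT_EVENTS
                     for ts, ev in log_events)
        if not benign:
            confirmed.append((start, end))
    return confirmed
```

A drop that overlaps a VM migration is dropped as operator activity; a drop with no matching log entry survives as a true anomaly candidate.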

Relevance: 30.00%

Abstract:

Aims. We aim to investigate the chemistry and gas-phase abundance of HNCO and the variation of the HNCO/CS abundance ratio as a diagnostic of the physics and chemistry in regions of massive star formation. Methods. A numerical-chemical model has been developed which self-consistently follows the chemical evolution of a hot core. The model comprises two distinct stages. The first stage follows the isothermal, modified free-fall collapse of a molecular dark cloud. This is immediately followed by an increase in temperature which represents the switch-on of a central massive star and the subsequent evolution of the chemistry in a hot, dense gas cloud (the hot core). During the collapse phase, gas species are allowed to accrete onto grain surfaces where they can participate in further reactions. During the hot core phase surface species thermally desorb back into the ambient gas and further chemical evolution takes place. For comparison, the chemical network was also used to model a simple dark cloud and photodissociation regions. Results. Our investigation reveals that HNCO is inefficiently formed when only gas-phase formation pathways are considered in the chemical network with reaction rates consistent with existing laboratory data. This is particularly true at low temperatures but also in regions with temperatures up to ~200 K. Using currently measured gas-phase reaction rates, obtaining the observed HNCO abundances requires its formation on grain surfaces, similar to other "hot core" species such as CH3OH. However, our model shows that the gas-phase HNCO in hot cores is not a simple direct product of the evaporation of grain mantles. We also show that the HNCO/CS abundance ratio varies as a function of time in hot cores and can match the range of values observed. This ratio is not unambiguously related to the ambient UV field as has been suggested; our results are inconsistent with the hypothesis of Martín et al. (2008, ApJ, 678, 245).
In addition, our results show that this ratio is extremely sensitive to the initial sulphur abundance. We find that the ratio grows monotonically with time with an absolute value which scales approximately linearly with the S abundance at early times.

Relevance: 30.00%

Abstract:

We have studied the optical spectra of a sample of 28 O- and early B-type stars in the Large Magellanic Cloud, 22 of which are associated with the young star-forming region N11. Our observations sample the central associations of LH9 and LH10, and the surrounding regions. Stellar parameters are determined using an automated fitting method (Mokiem et al. 2005), which combines the stellar atmosphere code fastwind (Puls et al. 2005) with the genetic algorithm based optimisation routine PIKAIA (Charbonneau 1995). We derive ages of 7.0 +/- 1.0 and 3.0 +/- 1.0 Myr for LH9 and LH10, respectively. The age difference and relative distance of the associations are consistent with a sequential star formation scenario in which stellar activity in LH9 triggered the formation of LH10. Our sample contains four stars of spectral type O2. From helium and hydrogen line fitting we find the hottest three of these stars to be ~49-54 kK (compared to ~45-46 kK for O3 stars). Detailed determination of the helium mass fraction reveals that the masses of helium-enriched dwarfs and giants derived in our spectroscopic analysis are systematically lower than those implied by non-rotating evolutionary tracks. We interpret this as evidence for efficient rotationally enhanced mixing leading to the surfacing of primary helium and to an increase of the stellar luminosity. This result is consistent with findings for SMC stars by Mokiem et al. (2006). For bright giants and supergiants no such mass discrepancy is found; these stars therefore appear to follow tracks of modestly rotating or non-rotating objects. The set of programme stars was sufficiently large to establish the mass-loss rates of OB stars in this Z ~ 1/2 Z_sun environment accurately enough to allow a quantitative comparison with similar objects in the Galaxy and the SMC. The mass-loss properties are found to be intermediate between those of massive stars in the Galaxy and the SMC. Comparing the derived modified wind momenta D_mom as a function of luminosity with predictions for LMC metallicities by Vink et al. (2001) yields good agreement over the entire luminosity range that was investigated, i.e. 5.0

Relevance: 30.00%

Abstract:

This study attempts to establish a link between the reasonably well known nature of the progenitor of SN2011fe and its surrounding environment. This is done with the aim of enabling the identification of similar systems in the vast majority of cases, when distance and epoch of discovery do not allow a direct approach. To study the circumstellar environment of SN2011fe we have obtained high-resolution spectroscopy of SN2011fe on 12 epochs, from 8 to 86 days after the estimated date of explosion, targeting in particular the time evolution of CaII and NaI. Three main absorption systems are identified from CaII and NaI, one associated with the Milky Way, one probably arising within a high-velocity cloud, and one most likely associated with the halo of M101. The Galactic and host galaxy reddening, deduced from the integrated equivalent widths (EW) of the NaI lines, are E(B-V)=0.011+/-0.002 and E(B-V)=0.014+/-0.002 mag, respectively. The host galaxy absorption is dominated by a component detected at the same velocity measured from the 21-cm HI line at the projected SN position (~180 km/s). During the ~3 months covered by our observations, its EW changed by 15.6+/-6.5 mA. This small variation is shown to be compatible with the geometric effects produced by the rapid SN photosphere expansion coupled with the patchy fractal structure of the ISM. The observed behaviour is fully consistent with ISM properties similar to those derived for our own Galaxy, with evidence for structures on scales

Relevance: 30.00%

Abstract:

Cloud computing is a technological advancement that provides resources through the internet on a pay-as-you-go basis. Cloud computing uses virtualisation technology to enhance its efficiency and effectiveness. Virtualisation is the key to consolidating computing resources to run multiple instances on each piece of hardware, increasing the utilisation rate of every resource and thus reducing the number of resources that need to be bought, racked, powered, cooled and managed. Cloud computing has very appealing features; however, many enterprises and users are still reluctant to move into the cloud due to serious security concerns related to the virtualisation layer. Thus, it is of foremost importance to secure the virtual environment. In this paper, we present an elastic framework to secure virtualised environments for trusted cloud computing, called the Server Virtualisation Security System (SVSS). SVSS provides security solutions located on the hypervisor for virtual machines by deploying malicious activity detection techniques, network traffic analysis techniques, and system resource utilisation analysis techniques. SVSS consists of four modules: the Anti-Virus Control Module, the Traffic Behaviour Monitoring Module, the Malicious Activity Detection Module and the Virtualisation Security Management Module. An SVSS prototype has been deployed to validate its feasibility, efficiency and accuracy in a Xen virtualised environment.
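The modular, hypervisor-side architecture described in the abstract can be sketched as a coordinator that fans a VM snapshot out to independent analysis modules. The class names echo the text, but all internals (the snapshot dictionary, the `check` interface, the packet-rate threshold) are invented placeholders, not SVSS's real design.

```python
# Hypothetical sketch of SVSS's module layout: a coordinator runs each
# analysis module against a VM snapshot and collects their alerts.
# Module internals and the snapshot format are illustrative assumptions.
class SVSS:
    """Hypervisor-side coordinator for pluggable security modules."""

    def __init__(self, modules):
        self.modules = modules  # e.g. anti-virus, traffic, activity, mgmt

    def inspect_vm(self, vm_snapshot):
        """Run every module against one VM snapshot; collect all alerts."""
        alerts = []
        for module in self.modules:
            alerts.extend(module.check(vm_snapshot))
        return alerts


class TrafficBehaviourMonitor:
    """Flags VMs whose network traffic exceeds a simple rate threshold."""

    def __init__(self, max_pkts_per_sec=10_000):
        self.max_pkts = max_pkts_per_sec

    def check(self, snapshot):
        rate = snapshot.get("pkts_per_sec", 0)
        return [("traffic", rate)] if rate > self.max_pkts else []
```

Keeping each module behind a uniform `check` interface is what makes the framework "elastic": modules can be added or removed without touching the coordinator.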

Relevance: 30.00%

Abstract:

Uncertainty profiles are used to study the effects of contention within cloud and service-based environments. An uncertainty profile provides a qualitative description of an environment whose quality of service (QoS) may fluctuate unpredictably. Uncertain environments are modelled by strategic games with two agents: a daemon, which represents overload and high resource contention, and an angel, which represents an idealised resource allocation situation with no underlying contention. Assessments of uncertainty profiles are useful in two ways: firstly, they provide a broad understanding of how environmental stress can affect an application's performance (and reliability); secondly, they allow the effects of introducing redundancy into a computation to be assessed.