94 results for Cloud Forest
Abstract:
Cloud data centres are critical business infrastructures and the fastest growing service providers. Detecting anomalies in Cloud data centre operation is vital. Given the vast complexity of the data centre system software stack, applications and workloads, anomaly detection is a challenging endeavour. Current tools for detecting anomalies often use machine learning techniques, application instance behaviours or system metrics distribution, which are complex to implement in Cloud computing environments as they require training, access to application-level data and complex processing. This paper presents LADT, a lightweight anomaly detection tool for Cloud data centres that uses rigorous correlation of system metrics, implemented by an efficient correlation algorithm without the need for training or complex infrastructure setup. LADT is based on the hypothesis that, in an anomaly-free system, metrics from data centre host nodes and virtual machines (VMs) are strongly correlated. An anomaly is detected whenever correlation drops below a threshold value. We demonstrate and evaluate LADT in a Cloud environment, showing that the hosting node's I/O operations per second (IOPS) are strongly correlated with the aggregated virtual machine IOPS, but that this correlation vanishes when an application stresses the disk, indicating a node-level anomaly.
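A minimal sketch of the correlation test described in this abstract, assuming Pearson correlation over a window of host and aggregated VM IOPS samples; the function names, the threshold value and the sample data are illustrative assumptions, not part of LADT itself.

# Illustrative sketch (not the LADT implementation): flag an anomaly when the
# Pearson correlation between host IOPS and aggregated VM IOPS drops below a threshold.
from statistics import mean
from math import sqrt

def pearson(xs, ys):
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var_x = sum((x - mx) ** 2 for x in xs)
    var_y = sum((y - my) ** 2 for y in ys)
    if var_x == 0 or var_y == 0:
        return 0.0
    return cov / sqrt(var_x * var_y)

def detect_anomaly(host_iops, vm_iops_per_vm, threshold=0.7):
    """host_iops: host-level IOPS samples.
    vm_iops_per_vm: one list of IOPS samples per VM.
    Returns True when correlation falls below the (hypothetical) threshold."""
    aggregated = [sum(samples) for samples in zip(*vm_iops_per_vm)]
    return pearson(host_iops, aggregated) < threshold

# Example: VM-level activity no longer explains host-level disk traffic.
host = [100, 120, 410, 430, 450]           # host IOPS inflated by a rogue process
vms = [[40, 50, 45, 50, 48], [55, 60, 58, 62, 60]]
print(detect_anomaly(host, vms))           # True -> possible node-level anomaly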
Abstract:
Uncertainty profiles are used to study the effects of contention within cloud and service-based environments. An uncertainty profile provides a qualitative description of an environment whose quality of service (QoS) may fluctuate unpredictably. Uncertain environments are modelled by strategic games with two agents; a daemon is used to represent overload and high resource contention; an angel is used to represent an idealised resource allocation situation with no underlying contention. Assessments of uncertainty profiles are useful in two ways: firstly, they provide a broad understanding of how environmental stress can affect an application's performance (and reliability); secondly, they allow the effects of introducing redundancy into a computation to be assessed.
Abstract:
The increasing complexity and scale of cloud computing environments, due to widespread data centre heterogeneity, make measurement-based evaluations highly difficult to achieve. The use of simulation tools to support decision making in cloud computing environments is therefore an increasing trend. However, the data required to model cloud computing environments with an appropriate degree of accuracy is typically large, very difficult to collect without some form of automation, often not available in a suitable format, and time consuming to gather manually. In this research, an automated method for cloud computing topology definition, data collection and model creation activities is presented, within the context of a suite of tools that have been developed and integrated to support these activities.
Abstract:
In this study the fate of naphthalene, fluorene and pyrene was investigated in the presence and absence of enchytraeid worms. Microcosms were used, which enabled the full fate of 14C-labelled PAHs to be followed. Between 60 and 70% of the naphthalene was either mineralised or volatilised, whereas over 90% of the fluorene and pyrene was retained within the soil. Mineralisation and volatilisation of naphthalene were lower in the presence of enchytraeid worms. The hypothesis that microbial mineralisation of naphthalene was limited by enchytraeids because they reduce nutrient availability, and hence limit microbial carbon turnover in these nutrient-poor soils, was tested. Ammonia concentrations increased and phosphorus concentrations decreased in all microcosms over the 56-day experimental period. The soil nutrient chemistry was only slightly altered by enchytraeid worms, and did not appear to be the cause of the retarded naphthalene mineralisation. The results suggest that the microbial availability and volatilisation of naphthalene are altered as it passes through enchytraeid worms due to encapsulation in organic material. © 2004 Elsevier Ltd. All rights reserved.
Abstract:
Although visual surveillance has emerged as an effective technology for public security, privacy has become an issue of great concern in the transmission and distribution of surveillance videos. For example, personal facial images should not be browsed without permission. To cope with this issue, face image scrambling has emerged as a simple solution for privacy-related applications. Consequently, online facial biometric verification needs to be carried out in the scrambled domain, bringing a new challenge to face classification. In this paper, we investigate face verification issues in the scrambled domain and propose a novel scheme to handle this challenge. In our proposed method, to make feature extraction from scrambled face images robust, a biased random subspace sampling scheme is applied to construct fuzzy decision trees from randomly selected features, and a fuzzy forest decision is then obtained by combining all fuzzy tree decisions using fuzzy memberships. In our experiment, we first estimated the optimal parameters for the construction of the random forest, and then applied the optimised model to benchmark tests using three publicly available face datasets. The experimental results validated that our proposed scheme can robustly cope with the challenging tests in the scrambled domain, and achieved an improved accuracy over all tests, making our method a promising candidate for emerging privacy-related facial biometric applications.
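A minimal sketch of the random-subspace ensemble idea described in this abstract, with two stand-ins: ordinary scikit-learn decision trees in place of the paper's fuzzy trees, uniform rather than biased subspace sampling, and class probabilities in place of fuzzy memberships. All names and the synthetic data are illustrative.

# Sketch only: random feature subspaces + tree ensemble with averaged soft decisions.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def train_subspace_forest(X, y, n_trees=20, subspace_size=0.5, rng=None):
    rng = rng or np.random.default_rng(0)
    n_features = X.shape[1]
    k = max(1, int(subspace_size * n_features))
    forest = []
    for _ in range(n_trees):
        idx = rng.choice(n_features, size=k, replace=False)   # random feature subspace
        tree = DecisionTreeClassifier().fit(X[:, idx], y)
        forest.append((idx, tree))
    return forest

def forest_decision(forest, X):
    # Average per-tree class probabilities (a simple stand-in for combining
    # fuzzy memberships) and take the most supported class.
    probs = np.mean([tree.predict_proba(X[:, idx]) for idx, tree in forest], axis=0)
    return probs.argmax(axis=1)

# Example with synthetic data (illustration only).
rng = np.random.default_rng(42)
X = rng.normal(size=(200, 64))     # e.g. 64 features extracted from scrambled images
y = rng.integers(0, 2, size=200)   # binary verification labels
forest = train_subspace_forest(X, y, rng=rng)
print(forest_decision(forest, X)[:10])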
Abstract:
Background/Question/Methods
Assessing the large-scale impact of deer populations on forest structure and composition is important because of their increasing abundance in many temperate forests. Deer are invasive animals and are sometimes thought to be responsible for immense damage to New Zealand's forests. We report demographic changes taking place among 40 widespread indigenous tree species over 20 years, following a period of record deer numbers in the 1950s and a period of extensive hunting and depletion of deer populations during the 1960s and 1970s.
Results/Conclusions
Across a network of 578 plots there was an overall 13% reduction in sapling density of our study species, with most species remaining constant and a few declining dramatically. The effect of suppressed recruitment when deer populations were high was evident in the small tree size class (30-80 mm dbh). Stem density decreased by 15%, and the species with the greatest annual decreases in small tree density were those with the highest rates of sapling recovery in exclosures, indicating that deer were responsible. Densities of large canopy trees have remained relatively stable. There were imbalances between mortality and recruitment rates for 23 of the 40 species, with 7 increasing and 16 in decline. These changes were again linked with sapling recovery in exclosures; species which recovered most rapidly following deer exclusion had the greatest net recruitment deficit across the wider landscape, indicating recruitment suppression by deer as opposed to mortality induced by disturbance and other herbivores. Species are not declining uniformly across all populations and no species is in decline across its entire range. We therefore predict that, with continued deer presence, some forests will undergo compositional changes but that none of the species tested will become nationally extinct.
Abstract:
The Magellanic Clouds are uniquely placed to study the stellar contribution to dust emission. Individual stars can be resolved in these systems even in the mid-infrared, and they are close enough to allow detection of infrared excess caused by dust. We have searched the Spitzer Space Telescope data archive for all Infrared Spectrograph (IRS) staring-mode observations of the Small Magellanic Cloud (SMC) and found that 209 Infrared Array Camera (IRAC) point sources within the footprint of the Surveying the Agents of Galaxy Evolution in the Small Magellanic Cloud (SAGE-SMC) Spitzer Legacy programme were targeted, within a total of 311 staring-mode observations. We classify these point sources using a decision tree method of object classification, based on infrared spectral features, continuum and spectral energy distribution shape, bolometric luminosity, cluster membership and variability information. We find 58 asymptotic giant branch (AGB) stars, 51 young stellar objects, 4 post-AGB objects, 22 red supergiants, 27 stars (of which 23 are dusty OB stars), 24 planetary nebulae (PNe), 10 Wolf-Rayet stars, 3 H II regions, 3 R Coronae Borealis stars, 1 Blue Supergiant and 6 other objects, including 2 foreground AGB stars. We use these classifications to evaluate the success of photometric classification methods reported in the literature.
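A purely illustrative sketch of what a decision-tree style classification of point sources can look like; the features, thresholds and class ordering below are invented placeholders, not the scheme actually applied to the SAGE-SMC IRS sources.

# Toy decision-tree classifier over derived source properties (hypothetical rules).
def classify_source(src):
    """src: dict of derived quantities for one point source."""
    if src.get("emission_lines"):           # emission-line objects, e.g. PN / H II candidates
        return "PN" if src["luminosity"] < 1e4 else "HII"
    if src.get("silicate_or_carbon_dust"):  # dusty evolved-star candidates
        return "RSG" if src["luminosity"] > 1e5 else "AGB"
    if src.get("rising_sed"):               # embedded source with cold continuum
        return "YSO"
    return "star"

print(classify_source({"silicate_or_carbon_dust": True, "luminosity": 3e4}))  # AGB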
Abstract:
Demand Side Management (DSM) plays an important role in the Smart Grid. It involves large-scale access points, massive numbers of users, heterogeneous infrastructure and dispersed participants. Cloud computing, as a service model, is characterised by on-demand resources, high reliability and large-scale integration, while game theory is a useful tool for analysing dynamic economic phenomena. In this study, a "cloud + end" scheme is proposed to solve the technical and economic problems of DSM. The cloud + end architecture is designed to solve the technical problems of DSM, and a cloud + end construction model based on game theory is presented to solve its economic problems. The proposed method is tested on a DSM cloud + end public service system constructed in a city in southern China. The results demonstrate the feasibility of these integrated solutions, which can provide a reference for the popularisation and application of DSM in China.
Abstract:
How can applications be deployed on the cloud to achieve maximum performance? This question has become significant and challenging with the availability of a wide variety of Virtual Machines (VMs) with different performance capabilities in the cloud. It is addressed by proposing a six-step benchmarking methodology in which a user provides a set of four weights indicating how important each of the following groups (memory, processor, computation and storage) is to the application that needs to be executed on the cloud. The weights, along with cloud benchmarking data, are used to generate a ranking of VMs that can maximise the performance of the application. The rankings are validated through an empirical analysis using two case study applications, a financial risk application and a molecular dynamics simulation, both representative of workloads that can benefit from execution on the cloud. Both case studies validate the feasibility of the methodology and highlight that maximum performance can be achieved on the cloud by selecting the top-ranked VMs produced by the methodology.
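A minimal sketch of weight-based VM ranking in the spirit of this abstract; the group names follow the abstract, but the scoring formula, VM names and benchmark numbers are illustrative assumptions rather than the paper's methodology.

# Illustrative weighted ranking of VMs from benchmark scores and user weights.
def rank_vms(benchmarks, weights):
    """benchmarks: {vm_name: {group: normalised score, higher is better}}
    weights: {group: user-supplied importance, e.g. 0-5}."""
    scores = {
        vm: sum(weights[g] * groups.get(g, 0.0) for g in weights)
        for vm, groups in benchmarks.items()
    }
    return sorted(scores, key=scores.get, reverse=True)   # best VM first

benchmarks = {
    "m3.large":  {"memory": 0.7, "processor": 0.6, "computation": 0.5, "storage": 0.4},
    "c3.xlarge": {"memory": 0.5, "processor": 0.9, "computation": 0.8, "storage": 0.3},
}
weights = {"memory": 1, "processor": 4, "computation": 4, "storage": 1}  # CPU-bound app
print(rank_vms(benchmarks, weights))   # ['c3.xlarge', 'm3.large']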
Abstract:
With the availability of a wide range of cloud Virtual Machines (VMs), it is difficult to determine which VMs can maximise the performance of an application. Benchmarking is commonly used to this end to capture the performance of VMs. However, most cloud benchmarking techniques are heavyweight: time-consuming processes that have to benchmark the entire VM in order to obtain accurate benchmark data. Such benchmarks cannot be used in real time on the cloud and incur extra costs even before an application is deployed.
In this paper, we present lightweight cloud benchmarking techniques that execute quickly and can be used in near real time on the cloud. The exploration of lightweight benchmarking techniques is facilitated by the development of DocLite - Docker Container-based Lightweight Benchmarking. DocLite is built on Docker container technology, which allows a user-defined portion of the VM (such as the memory size and the number of CPU cores) to be benchmarked. DocLite operates in two modes. In the first mode, containers are used to benchmark a small portion of the VM to generate performance ranks. In the second mode, historic benchmark data is used along with the first mode, as a hybrid, to generate VM ranks. The generated ranks are evaluated against three scientific high-performance computing applications. The proposed techniques are up to 91 times faster than a heavyweight technique which benchmarks the entire VM. It is observed that the first mode can generate ranks with over 90% and 86% accuracy for the sequential and parallel execution of an application, respectively. The hybrid mode improves the correlation slightly, but the first mode is sufficient for benchmarking cloud VMs.
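A small sketch of the two ranking modes described in this abstract; the blending weight, score values and VM names are illustrative assumptions, not DocLite's actual implementation.

# Mode 1: rank VMs from container-based benchmark scores.
# Mode 2 (hybrid): blend those scores with historic benchmark data, then rank.
def rank(scores):
    return sorted(scores, key=scores.get, reverse=True)

def hybrid_scores(container_scores, historic_scores, w_hist=0.3):
    return {
        vm: (1 - w_hist) * container_scores[vm]
            + w_hist * historic_scores.get(vm, container_scores[vm])
        for vm in container_scores
    }

container = {"vm_small": 0.52, "vm_medium": 0.61, "vm_large": 0.78}  # mode 1 output
historic  = {"vm_small": 0.49, "vm_medium": 0.70, "vm_large": 0.75}

print(rank(container))                           # mode 1 ranking
print(rank(hybrid_scores(container, historic)))  # mode 2 (hybrid) ranking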
Abstract:
Bag of Distributed Tasks (BoDT) applications can benefit from decentralised execution on the Cloud. However, there is a trade-off between the performance that can be achieved by employing a large number of Cloud VMs for the tasks and the monetary constraints that are often placed by a user. The research reported in this paper investigates this trade-off so that an optimal plan for deploying BoDT applications on the cloud can be generated. A heuristic algorithm, which considers the user's preferences for performance and cost, is proposed and implemented. The feasibility of the algorithm is demonstrated by generating execution plans for a sample application. The key result is that the algorithm generates optimal execution plans for the application over 91% of the time.
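An illustrative sketch of the performance/cost trade-off such a heuristic navigates; the idealised linear speed-up model, hourly pricing and weights below are assumptions for the example, not the paper's algorithm.

# Choose a VM count by weighting estimated runtime against estimated cost.
import math

def choose_vm_count(total_task_hours, price_per_vm_hour, w_perf, w_cost, max_vms=64):
    candidates = []
    for n in range(1, max_vms + 1):
        runtime = total_task_hours / n                       # idealised linear speed-up
        cost = n * price_per_vm_hour * math.ceil(runtime)    # billed per VM-hour
        candidates.append((n, runtime, cost))
    max_rt = max(c[1] for c in candidates)
    max_cost = max(c[2] for c in candidates)
    # Lower weighted score is better; the weights encode the user's preference.
    best = min(candidates, key=lambda c: w_perf * c[1] / max_rt + w_cost * c[2] / max_cost)
    return best  # (vm_count, estimated_runtime_hours, estimated_cost)

print(choose_vm_count(total_task_hours=100, price_per_vm_hour=0.10, w_perf=0.7, w_cost=0.3))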
Abstract:
When orchestrating Web service workflows, the geographical placement of the orchestration engine(s) can greatly affect workflow performance. Data may have to be transferred across long geographical distances, which in turn increases execution time and degrades the overall performance of a workflow. In this paper, we present a framework that, given a DAG-based workflow specification, computes the optimal Amazon EC2 cloud regions in which to deploy the orchestration engines and execute a workflow. The framework incorporates a constraint model that solves the workflow deployment problem and is generated using an automated constraint modelling system. The feasibility of the framework is evaluated by executing different sample workflows representative of scientific workloads. The experimental results indicate that the framework reduces workflow execution time and provides a speed-up of 1.3x to 2.5x over centralised approaches.
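A brute-force sketch of the placement decision that the constraint model solves: pick a region for each orchestration engine so that data transfer along the workflow's edges is cheapest. The region names, transfer timings and DAG below are illustrative assumptions, not the paper's model.

# Exhaustive search over engine-to-region assignments minimising transfer cost.
from itertools import product

regions = ["us-east-1", "eu-west-1", "ap-southeast-1"]
# Hypothetical pairwise transfer cost (seconds per unit of data) between regions.
transfer = {
    ("us-east-1", "us-east-1"): 0.1, ("us-east-1", "eu-west-1"): 0.9,
    ("us-east-1", "ap-southeast-1"): 1.4, ("eu-west-1", "eu-west-1"): 0.1,
    ("eu-west-1", "ap-southeast-1"): 1.2, ("ap-southeast-1", "ap-southeast-1"): 0.1,
}
def cost(a, b):
    return transfer.get((a, b)) or transfer[(b, a)]

# DAG edges as (producer_engine, consumer_engine, data_volume).
edges = [(0, 1, 10.0), (0, 2, 4.0), (1, 3, 8.0), (2, 3, 8.0)]
n_engines = 4

best = min(product(regions, repeat=n_engines),
           key=lambda placement: sum(v * cost(placement[a], placement[b]) for a, b, v in edges))
print(best)   # cheapest placement; here all four engines end up co-located in one region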
Abstract:
Cloud data centres are implemented as large-scale clusters with demanding requirements for service performance, availability and cost of operation. As a result of their scale and complexity, data centres typically exhibit large numbers of system anomalies resulting from operator error, resource over/under-provisioning, hardware or software failures and security issues. These anomalies are inherently difficult to identify and resolve promptly via human inspection. Therefore, it is vital in a cloud system to have automatic system monitoring that detects potential anomalies and identifies their source. In this paper we present LADT, a lightweight anomaly detection tool for Cloud data centres which combines extended log analysis with rigorous correlation of system metrics, implemented by an efficient correlation algorithm that does not require training or complex infrastructure setup. The LADT algorithm is based on the premise that there is a strong correlation between node-level and VM-level metrics in a cloud system. This correlation will drop significantly in the event of any performance anomaly at the node level, and a continuous drop in the correlation can indicate the presence of a true anomaly in the node. The log analysis of LADT assists in determining whether the correlation drop could be caused by naturally occurring cloud management activity such as VM migration, creation, suspension, termination or resizing. In this way, any potential anomaly alerts are reasoned about to prevent false positives that could be caused by the cloud operator's activity. We demonstrate LADT with log analysis in a Cloud environment to show how the log analysis is combined with the correlation of system metrics to achieve accurate anomaly detection.
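A sketch combining the two signals described in this abstract: a sustained correlation drop raises an alert only if the management log shows no cloud activity in the same windows. The function names, threshold and window length are illustrative assumptions, not LADT's actual interface.

# Alert on an unexplained, sustained drop in node/VM metric correlation.
MANAGEMENT_EVENTS = {"migrate", "create", "suspend", "terminate", "resize"}

def anomaly_alert(correlations, log_events, threshold=0.7, min_drop_windows=3):
    """correlations: per-window correlation between node and aggregated VM metrics.
    log_events: per-window sets of management operations parsed from cloud logs."""
    consecutive = 0
    for corr, events in zip(correlations, log_events):
        if corr < threshold and not (events & MANAGEMENT_EVENTS):
            consecutive += 1
            if consecutive >= min_drop_windows:
                return True          # sustained, unexplained correlation drop
        else:
            consecutive = 0          # explained by management activity, or recovered
    return False

# Example: the drop in windows 3-5 coincides with a VM migration, so no alert is raised.
print(anomaly_alert([0.9, 0.85, 0.4, 0.35, 0.3, 0.88],
                    [set(), set(), {"migrate"}, {"migrate"}, set(), set()]))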