911 results for "scaling rules"


Relevance: 100.00%

Abstract:

Current advanced cloud infrastructure management solutions allow scheduling actions that dynamically change the number of running virtual machines (VMs). This approach, however, does not guarantee that the scheduled number of VMs will properly handle the actual user-generated workload, especially if user utilization patterns change. We propose using a dynamically generated scaling model for the VMs containing the services of distributed applications, which is able to react to variations in the number of application users. We answer the following question: how can we dynamically decide how many services of each type are needed in order to handle a larger workload within the same time constraints? We describe a mechanism for dynamically composing the SLAs that control the scaling of distributed services, combining data analysis mechanisms with application benchmarking over multiple VM configurations. By processing the data sets generated by multiple application benchmarks, we discover a set of service monitoring metrics able to predict critical Service Level Agreement (SLA) parameters. By combining this set of predictor metrics with a heuristic for selecting appropriate scaling-out paths for the services of distributed applications, we show how SLA scaling rules can be inferred and then used to control the runtime scale-in and scale-out of distributed services. We validate our architecture and models through scaling experiments with a distributed application representative of the enterprise class of information systems, and show how dynamically generated SLAs can be successfully used to control the management of distributed services scaling.
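The kind of inferred scaling rule described above can be illustrated with a toy sketch. Everything here is invented for illustration: the linear predictor, the metric names, the SLA limit, and the thresholds are placeholders, not values or methods from the paper.

```python
# Hypothetical sketch of an inferred SLA scaling rule: monitored predictor
# metrics feed a predicted SLA parameter (response time), and the VM count
# is adjusted when the prediction crosses SLA-derived thresholds.
# All names, coefficients, and thresholds are illustrative.

def predict_response_time(cpu_util, active_users, vm_count):
    """Toy linear predictor, as if calibrated from benchmark data sets."""
    load_per_vm = active_users / max(vm_count, 1)
    return 50 + 2.0 * load_per_vm + 100 * cpu_util  # milliseconds

def scaling_decision(cpu_util, active_users, vm_count,
                     sla_limit_ms=300, slack=0.6):
    predicted = predict_response_time(cpu_util, active_users, vm_count)
    if predicted > sla_limit_ms:
        return "scale-out"          # SLA at risk: add a VM
    if predicted < slack * sla_limit_ms and vm_count > 1:
        return "scale-in"           # over-provisioned: remove a VM
    return "hold"

print(scaling_decision(cpu_util=0.9, active_users=500, vm_count=2))
```

A real system would replace the toy predictor with the benchmark-derived predictor metrics the abstract describes.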

Relevance: 60.00%

Abstract:

Molecular dynamics simulations have been performed on monatomic sorbates confined within zeolite NaY to obtain the dependence of entropy and self-diffusivity on the sorbate diameter. Previously, molecular dynamics simulations by Santikary and Yashonath [J. Phys. Chem. 98, 6368 (1994)], theoretical analysis by Derouane [J. Catal. 110, 58 (1988)], as well as experiments by Kemball [Adv. Catal. 2, 233 (1950)] found that certain sorbates in certain adsorbents exhibit unusually high self-diffusivity. Experiments showed that the loss of entropy for certain sorbates in specific adsorbents was minimal, and Kemball suggested that such sorbates will have high self-diffusivity in these adsorbents. Entropy of the adsorbed phase has been evaluated from the trajectory information by two alternative methods: two-phase and multiparticle expansion. The results show that an anomalous maximum in entropy is also seen as a function of the sorbate diameter. Further, Kemball's experimental observation that minimal loss of entropy is associated with a maximum in self-diffusivity is found to be true for the system studied here. A suitably scaled dimensionless self-diffusivity shows an exponential dependence on the excess entropy of the adsorbed phase, analogous to excess-entropy scaling rules seen in many bulk and confined fluids. The two trajectory-based estimators for the entropy show good semiquantitative agreement and provide some interesting microscopic insights into entropy changes associated with confinement.
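The exponential excess-entropy scaling mentioned above is a Rosenfeld-type relation, D* = A·exp(α·S_ex), which can be sketched numerically by fitting ln D* against S_ex. The data points below are synthetic placeholders, not values from the study.

```python
# Minimal sketch of exponential excess-entropy scaling, D* = A*exp(alpha*S_ex):
# a linear least-squares fit of ln(D*) against S_ex recovers alpha and A.
# The (S_ex, D*) pairs are synthetic illustration data.
import math

data = [(-3.0, 0.010), (-2.5, 0.018), (-2.0, 0.033), (-1.5, 0.061)]

n = len(data)
sx = sum(s for s, _ in data)
sy = sum(math.log(d) for _, d in data)
sxx = sum(s * s for s, _ in data)
sxy = sum(s * math.log(d) for s, d in data)
alpha = (n * sxy - sx * sy) / (n * sxx - sx * sx)   # scaling exponent
lnA = (sy - alpha * sx) / n                          # intercept ln(A)
print(f"alpha = {alpha:.2f}, A = {math.exp(lnA):.2f}")
```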

Relevance: 60.00%

Abstract:

This thesis analyzes the community structure and spatial patterns of Leymus chinensis grassland on the Songnen Plain. The main content covers three aspects.

1. The first part separates the spatial and environmental factors underlying the distribution of plant communities in alkalinized grassland. The factors determining the spatial distribution of plant species in alkalinized grassland can be roughly decomposed into four parts: (1) environmental factors acting through species' ecological niches; (2) spatial-distance factors determining the intensity of competition among species and communities; (3) interactions or couplings between environmental and spatial factors; and (4) other, unknown factors (such as biotic and stochastic factors). Based on field sampling carried out in 1996, Detrended Correspondence Analysis (DCA) was used to analyze the main trends of community variation and their relationship to soil environmental factors, and Detrended Canonical Correspondence Analysis (DCCA) was used to quantitatively partition the spatial and environmental factors influencing the seasonal dynamics of community structure in the alkalinized grassland of the Songnen Plain. The results show that, among the factors influencing community distribution, environmental factors alone account for about 40%, coupled environment-space factors for about 35%, spatial factors alone for about 3%, and other factors for about 20%. Among these, the degree of soil salinization-alkalinization plays a decisive role throughout the growing season, whereas the effects of soil moisture and soil nitrogen vary with season. In dry seasons, the main factors constraining plant growth are soil alkalinity and soil moisture, with soil nitrogen playing a secondary role; in seasons with more rainfall and wetter soils, the influence of soil nitrogen increases markedly and becomes the decisive factor second only to soil alkalinity.

2. The second part analyzes the fractal properties of the spatial patterns of plant communities in the alkalinized grassland of the Songnen Plain in northeast China, using the perimeter-area exponent and the Korcak index to estimate the fractal dimensions of patch-boundary complexity and of the patch-area distribution, respectively. The results show that: with increasing grazing intensity, the relative patchiness of the dominant Leymus chinensis patches intensifies, while patch boundaries are most irregular under moderate grazing; in flooded plots, the boundary complexity and degree of patchiness of the dominant Leymus chinensis patches and of the subdominant Puccinellia and Aeluropus patches are all lower than in unflooded plots, indicating that water reduces the heterogeneity within the community complex; in flooded plots, the perimeter-area and Korcak indices of the dominant species' patches are lower than those of the subdominant species, whereas in heavily grazed plots the result is exactly the opposite, indicating that the two plot types are at different successional stages; patch-boundary complexity follows a single scaling law, with no scale transition within the area range studied, whereas the patch-area distribution does have a scale-transition point. Scale analysis of the heavily grazed and flooded plots shows that at smaller scales the relative patchiness of the community spatial pattern is lower, indicating that spatial patterns at smaller scales are relatively more stable, while the opposite holds at larger scales, possibly because grazing disturbance has a greater impact on larger-scale patches.

3. The third part is a time-series analysis of ten years of spatial-pattern change in plots fenced for protection since 1989, studying the distribution characteristics and dynamics of the main patch types during restoration succession, as well as changes in the overall pattern of the whole plot along the successional sequence. The results are as follows. Leymus chinensis is the dominant species of the vegetation in this region. Its pattern dynamics were: from 1989 to 1993 the total area of Leymus patches increased, the number of patches decreased, and the relative patchiness index declined; after 1994 the spatial pattern of Leymus was essentially stable. The patch-area distribution curve of Leymus shows a break at about 20 m², i.e., the spatial pattern undergoes a scale transition at this scale. The relative patchiness indices of patches at different scales, and their trends during restoration succession, differ: for small patches the relative patchiness index is low, increasing during 1989-1993 and then declining; for large patches the situation is exactly the reverse. This indicates that one of the main changes in the Leymus spatial pattern is the merging and declining number of medium-sized patches, with patch growth affecting the pattern less than patch merging. The pattern dynamics of the Aeluropus patches were: from 1989 to 1994 the total patch area, the number of patches, and the maximum patch area increased, and from 1995 they began to decline, while the degree of relative patchiness first decreased and then increased. During 1989-1993 the pattern dynamics of the Suaeda patches were exactly the opposite of those of Leymus: total area decreased, the number of patches increased, and the range of patch sizes narrowed, indicating that before 1993 the main ecological process controlling Suaeda patches was invasion by other species, which fragmented the originally larger patches while occupying some small patches. After 1994 the changes in the Suaeda patches were rather random, strongly influenced by other species such as the ruderal strategist Chloris virgata. The changes in the overall pattern of the plot were: before 1991, the number of patch types and the total number of patches increased, and the degree of patchiness increased accordingly; after 1993 the patch types became essentially stable; during 1991-1995 the patchiness index declined. Because the number of patches increased during this period, the decline in patchiness means that the patch-size frequency distribution became more even: large patches were split, small patches grew, and the overall spatial pattern of the community gradually stabilized. The overall pattern exhibits different self-similarity laws over three scale ranges. At small scales the relative patchiness is low, but interannual variation is stronger than at intermediate scales; at intermediate scales the relative patchiness changes more gently; and at large scales the patch distribution has consistently tended towards a reduction in the number of the largest patches. The overall trend is that, as restoration succession proceeds, the numbers of both the largest and the smallest patches decrease, and patches of intermediate size become the most numerous.
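The two fractal estimators used in this work, the perimeter-area exponent and the Korcak index, can be sketched as log-log regressions. The patch areas and perimeters below are invented for illustration, not measured vegetation data.

```python
# Sketch of two patch-pattern fractal estimators:
#  - perimeter-area dimension D, from P ~ A**(D/2);
#  - Korcak exponent B, from the rank-size law N(area > a) ~ a**(-B).
# The (area, perimeter) pairs are synthetic illustration data.
import math

patches = [(1.0, 4.2), (4.0, 9.5), (16.0, 21.0), (64.0, 47.0)]

def slope(xs, ys):
    """Ordinary least-squares slope of ys against xs."""
    n, sx, sy = len(xs), sum(xs), sum(ys)
    sxx = sum(x * x for x in xs)
    sxy = sum(x * y for x, y in zip(xs, ys))
    return (n * sxy - sx * sy) / (n * sxx - sx * sx)

log_a = [math.log(a) for a, _ in patches]
log_p = [math.log(p) for _, p in patches]
D = 2 * slope(log_a, log_p)   # boundary fractal dimension, 1 <= D <= 2
print(f"perimeter-area dimension D = {D:.2f}")

# Korcak exponent: rank patches by area, fit log(rank) against log(area).
areas = sorted((a for a, _ in patches), reverse=True)
log_rank = [math.log(r) for r in range(1, len(areas) + 1)]
B = -slope([math.log(a) for a in areas], log_rank)
print(f"Korcak exponent B = {B:.2f}")
```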

Relevance: 60.00%

Abstract:

Computer-aided design of Monolithic Microwave Integrated Circuits (MMICs) depends critically on active device models that are accurate, computationally efficient, and easily extracted from measurements or device simulators. Empirical models of active electron devices, which are based on actual device measurements, do not provide a detailed description of the electron device physics. However, they are numerically efficient and quite accurate, characteristics that make them very suitable for MMIC design in the framework of commercially available CAD tools. In the empirical model formulation it is very important to separate linear memory effects (parasitic effects) from nonlinear effects (intrinsic effects). Thus, an empirical active device model is generally described by an extrinsic linear part, which accounts for the parasitic passive structures connecting the nonlinear intrinsic electron device to the external world. An important task circuit designers deal with is evaluating the ultimate potential of a device for specific applications: once the technology has been selected, the designer must choose the best device for the particular application and for each of the blocks composing the overall MMIC. Thus, in order to accurately reproduce the behaviour of devices of different sizes, good scalability properties of the model are required. Another important aspect of empirical modelling of electron devices is the mathematical (or equivalent-circuit) description of the nonlinearities inherently associated with the intrinsic device. Once the model has been defined, the proper measurements for the characterization of the device are performed in order to identify the model. Hence, the correct measurement of the device nonlinear characteristics (in the device characterization phase) and their reconstruction (in the identification or even simulation phase) are two of the most important aspects of empirical modelling.
This thesis presents an original contribution to nonlinear electron device empirical modelling, treating the issues of model scalability and reconstruction of the device nonlinear characteristics. The scalability of an empirical model strictly depends on the scalability of the linear extrinsic parasitic network, which should possibly maintain the link between technological process parameters and the corresponding device electrical response. Since lumped parasitic networks, together with simple linear scaling rules, cannot provide accurate scalable models, the literature offers either complicated technology-dependent scaling rules or computationally inefficient distributed models. This thesis shows how the above-mentioned problems can be avoided through the use of commercially available electromagnetic (EM) simulators. They enable the actual device geometry and material stratification, as well as losses in the dielectrics and electrodes, to be taken into account for any given device structure and size, providing an accurate description of the parasitic effects which occur in the device passive structure. It is shown how the electron device behaviour can be described as an equivalent two-port intrinsic nonlinear block connected to a linear distributed four-port passive parasitic network, which is identified by means of the EM simulation of the device layout, allowing for better frequency extrapolation and scalability properties than conventional empirical models. Concerning the reconstruction of the nonlinear electron device characteristics, a data approximation algorithm has been developed for use in the framework of empirical table look-up nonlinear models. The approach is based on the strong analogy between time-domain signal reconstruction from a set of samples and the continuous approximation of device nonlinear characteristics on the basis of a finite grid of measurements. According to this criterion, nonlinear empirical device modelling can be carried out by using, in the sampled voltage domain, typical methods of time-domain sampling theory.
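The sampling-theory analogy in the last paragraph can be loosely illustrated: device characteristics measured on a uniform voltage grid are reconstructed between grid points the way a band-limited signal is rebuilt from time samples. The truncated-sinc kernel and the synthetic I(V) data below are illustrative stand-ins, not the thesis's actual algorithm or measurements.

```python
# Loose illustration of table look-up reconstruction in the "sampled voltage
# domain": I(V) samples on a uniform grid are interpolated with a truncated
# sinc kernel, by analogy with time-domain sampling theory.
# The I(V) values are synthetic, not measured device data.
import math

v_step = 0.1                                # uniform voltage grid spacing
v_grid = [i * v_step for i in range(11)]    # 0.0 .. 1.0 V
i_meas = [0.02 * v * v for v in v_grid]     # synthetic I(V) "measurements"

def reconstruct(v):
    """Sinc-kernel interpolation of I(V) from the sampled voltage grid."""
    total = 0.0
    for vk, ik in zip(v_grid, i_meas):
        x = (v - vk) / v_step
        total += ik * (1.0 if x == 0 else math.sin(math.pi * x) / (math.pi * x))
    return total

print(f"I(0.55 V) = {reconstruct(0.55):.4f} A")
```

On a finite grid the truncated kernel introduces edge error; the thesis's data-approximation algorithm addresses the continuous-approximation problem more carefully than this sketch.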

Relevance: 60.00%

Abstract:

Cloud Computing enables provisioning and distribution of highly scalable services in a reliable, on-demand and sustainable manner. However, the objectives of managing enterprise distributed applications in cloud environments under Service Level Agreement (SLA) constraints lead to challenges for maintaining optimal resource control. Furthermore, conflicting objectives in the management of cloud infrastructure and distributed applications might lead to violations of SLAs and inefficient use of hardware and software resources. This dissertation focuses on how SLAs can be used as an input to the cloud management system, increasing the efficiency of resource allocation as well as that of infrastructure scaling. First, we present an extended SLA semantic model for modelling complex service dependencies in distributed applications, and for enabling automated cloud infrastructure management operations. Second, we describe a multi-objective VM allocation algorithm for optimised resource allocation in infrastructure clouds. Third, we describe a method for discovering relations between the performance indicators of services belonging to distributed applications, and for using these relations to build scaling rules that a cloud management system (CMS) can use for automated management of VMs. Fourth, we introduce two novel VM-scaling algorithms, which optimally scale systems composed of VMs based on given SLA performance constraints. All presented research works were implemented and tested using enterprise distributed applications.

Relevance: 60.00%

Abstract:

Advancements in cloud computing have enabled the proliferation of distributed applications, which require management and control of multiple services. However, without an efficient mechanism for scaling services in response to changing workload conditions, such as the number of connected users, application performance might suffer, leading to violations of Service Level Agreements (SLAs) and possibly inefficient use of hardware resources. Combining dynamic application requirements with the increased use of virtualised computing resources creates a challenging resource management context for application and cloud-infrastructure owners. In such complex environments, business entities use SLAs as a means for specifying quantitative and qualitative requirements of services. There are several challenges in running distributed enterprise applications in cloud environments, ranging from the instantiation of service VMs in the correct order using an adequate quantity of computing resources, to adapting the number of running services in response to varying external loads, such as the number of users. The application owner is interested in finding the optimal amount of computing and network resources for ensuring that the performance requirements of all of his or her applications are met, and in appropriately scaling the distributed services so that application performance guarantees are maintained even under dynamic workload conditions. Similarly, the infrastructure provider is interested in optimally provisioning the virtual resources onto the available physical infrastructure so that operational costs are minimized while the performance of tenants' applications is maximized.
Motivated by the complexities associated with the management and scaling of distributed applications, while satisfying multiple objectives (related to both consumers and providers of cloud resources), this thesis proposes a cloud resource management platform able to dynamically provision and coordinate the various lifecycle actions on both virtual and physical cloud resources using semantically enriched SLAs. The system focuses on dynamic sizing (scaling) of virtual infrastructures composed of application services bound to virtual machines (VMs). We describe several algorithms for adapting the number of VMs allocated to the distributed application in response to changing workload conditions, based on SLA-defined performance guarantees. We also present a framework for the dynamic composition of scaling rules for distributed services, which uses benchmark-generated application monitoring traces, and show how these scaling rules can be combined and included in semantic SLAs for controlling the allocation of services. We also provide a detailed description of the multi-objective infrastructure resource allocation problem and various approaches to solving it, and present a resource management system based on a genetic algorithm, which performs allocation of virtual resources while considering the optimization of multiple criteria. We show that our approach significantly outperforms reactive VM-scaling algorithms as well as heuristic-based VM-allocation approaches.
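The genetic-algorithm idea in the last paragraph can be sketched in miniature: a chromosome assigns each VM to a physical host, and the fitness rewards packing VMs onto few hosts without exceeding capacity. All parameters (VM demands, host count, population size, operators) are invented; a real multi-objective allocator would use several criteria, not this single scalar fitness.

```python
# Toy sketch of GA-based VM allocation: chromosome = host index per VM;
# fitness penalises the number of hosts used and (heavily) any overload.
# All sizes and GA parameters are illustrative.
import random

random.seed(1)
VM_CPU = [2, 4, 2, 8, 4, 2]      # CPU demand of each VM
HOSTS, CAPACITY = 4, 8           # identical hosts with 8 CPUs each

def fitness(chrom):
    load = [0] * HOSTS
    for vm, host in enumerate(chrom):
        load[host] += VM_CPU[vm]
    overload = sum(max(0, l - CAPACITY) for l in load)
    hosts_used = sum(1 for l in load if l > 0)
    return -(hosts_used + 10 * overload)   # fewer hosts, no overload

def evolve(generations=200, pop_size=30):
    pop = [[random.randrange(HOSTS) for _ in VM_CPU] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[:pop_size // 2]            # elitist selection
        children = []
        while len(children) < pop_size - len(survivors):
            a, b = random.sample(survivors, 2)
            cut = random.randrange(1, len(VM_CPU))  # one-point crossover
            child = a[:cut] + b[cut:]
            if random.random() < 0.2:               # point mutation
                child[random.randrange(len(child))] = random.randrange(HOSTS)
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)

best = evolve()
print("allocation:", best, "fitness:", fitness(best))
```

For these demands the best packing uses three hosts (8 | 4+4 | 2+2+2), i.e. a fitness of -3.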

Relevance: 30.00%

Abstract:

Grazing experiments are usually used to quantify and demonstrate the biophysical impact of grazing strategies, and the Wambiana grazing experiment is one of the longest-running such experiments in northern Australia. Previous economic analyses of this experiment suggest that there is a major advantage in stocking at a fixed, moderate stocking rate or in using decision rules that allow flexible stocking to match the available feed supply. The present study developed and applied a modelling procedure that uses data collected at the small-plot, land-type and paddock scales at the experimental site to simulate the property-level implications of a range of stocking rates for a breeding-finishing cattle enterprise. The greatest economic performance was achieved at a moderate stocking rate of 10.5 adult equivalents per 100 ha. For the same stocking rate over time, the fixed stocking strategy gave greater economic performance than strategies that involved moderate changes to stocking rates each year in response to feed supply. Model outcomes were consistent with previous economic analyses using experimental data. Further modelling of the experimental data is warranted, and similar analyses could be applied to other major grazing experiments to allow results to be scaled up to broader levels.

Relevance: 30.00%

Abstract:

Background: We report an analysis of a protein network of functionally linked proteins, identified from a phylogenetic statistical analysis of complete eukaryotic genomes. Phylogenetic methods identify pairs of proteins that co-evolve on a phylogenetic tree, and have been shown to have a high probability of correctly identifying known functional links. Results: The eukaryotic correlated evolution network we derive displays the familiar power law scaling of connectivity. We introduce the use of explicit phylogenetic methods to reconstruct the ancestral presence or absence of proteins at the interior nodes of a phylogeny of eukaryote species. We find that the connectivity distribution of proteins at the point they arise on the tree and join the network follows a power law, as does the connectivity distribution of proteins at the time they are lost from the network. Proteins resident in the network acquire connections over time, but we find no evidence that 'preferential attachment' - the phenomenon of newly acquired connections in the network being more likely to be made to proteins with large numbers of connections - influences the network structure. We derive a 'variable rate of attachment' model in which proteins vary in their propensity to form network interactions independently of how many connections they have or of the total number of connections in the network, and show how this model can produce apparent power-law scaling without preferential attachment. Conclusion: A few simple rules can explain the topological structure and evolutionary changes to protein-interaction networks: most change is concentrated in satellite proteins of low connectivity and small phenotypic effect, and proteins differ in their propensity to form attachments. 
Given these rules of assembly, power-law scaled networks naturally emerge from simple principles of selection, yielding protein interaction networks that retain a high degree of robustness on short time scales and evolvability on longer evolutionary time scales.
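The 'variable rate of attachment' idea can be sketched in simulation: each protein gets an intrinsic propensity to form links, drawn independently of its degree, and new nodes attach in proportion to propensity only. With a heavy-tailed propensity distribution this alone produces hubs, with no preferential attachment. The network size, Pareto shape, and one-link-per-node growth are illustrative choices, not the paper's fitted model.

```python
# Sketch of variable-rate-of-attachment growth: attachment probability
# depends on each node's fixed propensity, never on its current degree.
# Parameters are illustrative.
import random

random.seed(42)
N = 2000
propensity = [random.paretovariate(1.5) for _ in range(N)]  # heavy-tailed
degree = [0] * N

for new in range(1, N):
    # the new node links to one existing node, chosen by propensity only
    target = random.choices(range(new), weights=propensity[:new])[0]
    degree[target] += 1
    degree[new] += 1

hubs = sum(1 for d in degree if d >= 20)
print(f"max degree: {max(degree)}, nodes with degree >= 20: {hubs}")
```

High-propensity nodes become hubs even though the attachment rule never looks at degree, which is the distinction the abstract draws against preferential attachment.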

Relevance: 30.00%

Abstract:

Utilising the expressive power of S-Expressions in Learning Classifier Systems often prohibitively increases the search space due to the increased flexibility of the encoding. This work shows that selection of appropriate S-Expression functions through domain knowledge improves scaling in problems, as expected. It is also known that simple alphabets perform well on relatively small problems in a domain, e.g. the ternary alphabet in the 6-, 11- and 20-bit MUX domains. Once fit ternary rules had been formed, it was investigated whether higher-order learning was possible and whether this staged learning facilitated the selection of appropriate functions in complex alphabets, e.g. the selection of S-Expression functions. This novel methodology is shown to provide compact results (135-MUX) and exhibits potential for scaling well (1034-MUX), but is only a small step towards introducing abstraction to LCS.
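For background, the k-bit multiplexer (MUX) benchmark and the ternary rule alphabet mentioned above can be sketched briefly: the first a bits of the input address one of the 2**a data bits, and a ternary rule condition uses 0/1/# where '#' is "don't care". The sketch is a generic illustration of the benchmark, not the paper's LCS implementation.

```python
# The k-MUX benchmark (k = a + 2**a) and ternary-alphabet rule matching
# as used in LCS research; a generic illustration.
def mux(bits):
    """Evaluate the k-MUX function on a bit string, e.g. 6 = 2 + 4."""
    a = 0
    while a + 2 ** a < len(bits):
        a += 1
    address = int(bits[:a], 2)
    return int(bits[a + address])

def rule_matches(rule, bits):
    """Ternary-alphabet condition: '#' (don't care) matches either bit."""
    return all(r in ('#', b) for r, b in zip(rule, bits))

# A maximally general, accurate 6-MUX rule: address 00 and data bit 1
# at position 2 -> predict class 1.
print(mux("001010"), rule_matches("001###", "001010"))
```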

Relevance: 30.00%

Abstract:

Top Down Induction of Decision Trees (TDIDT) is the most commonly used method of constructing a model from a dataset, in the form of classification rules, to classify previously unseen data. Alternative algorithms have been developed, such as the Prism algorithm. Prism constructs modular rules which are qualitatively better than the rules induced by TDIDT. However, with the increasing size of databases, many existing rule learning algorithms have proved to be computationally expensive on large datasets. To tackle the problem of scalability, parallel classification rule induction algorithms have been introduced. Because TDIDT is the most popular classifier, most parallel approaches to inducing classification rules are based on TDIDT, even though strongly competitive alternative algorithms exist. In this paper we describe work on a distributed classifier, based on Prism, that induces classification rules in a parallel manner.
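Prism's serial core can be sketched compactly: for each target class, a rule is grown by greedily adding the attribute-value test with the highest precision on the instances still covered, covered instances are then separated out, and the loop repeats. The dataset and attribute names below are invented; the paper's contribution is distributing this loop, which the sketch does not show.

```python
# Minimal single-class sketch of Prism's 'separate and conquer' induction.
# instances: list of (attribute-dict, class-label); names are illustrative.
def learn_rules_for_class(instances, target):
    rules, remaining = [], list(instances)
    while any(label == target for _, label in remaining):
        rule, covered = {}, list(remaining)
        # Specialise until the rule covers only target-class instances.
        while any(label != target for _, label in covered):
            best = None  # (precision, attribute, value)
            for attrs, _ in covered:
                for a, v in attrs.items():
                    if a in rule:
                        continue
                    subset = [(x, c) for x, c in covered if x[a] == v]
                    prec = sum(c == target for _, c in subset) / len(subset)
                    if best is None or prec > best[0]:
                        best = (prec, a, v)
            rule[best[1]] = best[2]
            covered = [(x, c) for x, c in covered if x[best[1]] == best[2]]
        rules.append(rule)
        # Separate: drop the instances this rule covers, then conquer again.
        remaining = [(x, c) for x, c in remaining
                     if not all(x[a] == v for a, v in rule.items())]
    return rules

data = [({"outlook": "sunny", "windy": "no"}, "play"),
        ({"outlook": "sunny", "windy": "yes"}, "stay"),
        ({"outlook": "rain",  "windy": "no"}, "stay")]
print(learn_rules_for_class(data, "play"))
```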

Relevance: 30.00%

Abstract:

Inducing rules from very large datasets is one of the most challenging areas in data mining. Several approaches exist for scaling up classification rule induction to large datasets, namely data reduction and the parallelisation of classification rule induction algorithms. In the area of parallelisation, most of the work has concentrated on the Top Down Induction of Decision Trees (TDIDT), also known as the 'divide and conquer' approach. However, powerful alternative algorithms exist that induce modular rules. Most of these alternatives follow the 'separate and conquer' approach to inducing rules, but very little work has been done to make the 'separate and conquer' approach scale better on large training data. This paper examines the potential of the recently developed blackboard-based J-PMCRI methodology for parallelising modular classification rule induction algorithms that follow the 'separate and conquer' approach. A concrete implementation of the methodology is evaluated empirically on very large datasets.

Relevance: 30.00%

Abstract:

This work aims to develop a novel Cross-Entropy (CE) optimization-based fuzzy controller for an Unmanned Aerial Monocular Vision-IMU System (UAMVIS) to solve the see-and-avoid problem using its accurate autonomous localization information. The function of this fuzzy controller is to regulate the heading of the system to avoid obstacles, e.g. a wall. In the Matlab Simulink-based training stages, the Scaling Factor (SF) is first adjusted according to the specified task, and then the Membership Function (MF) is tuned based on the optimized Scaling Factor to further improve the collision avoidance performance. After obtaining the optimal SF and MF, the rule base was reduced by 64% (from 125 rules to 45), and a large number of real flight tests with a quadcopter were performed. The experimental results show that this approach precisely navigates the system to avoid obstacles. To the best of our knowledge, this is the first work to present an optimized fuzzy controller for the UAMVIS using the Cross-Entropy method for Scaling Factor and Membership Function optimization.
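The Cross-Entropy optimization loop applied here to the controller's Scaling Factors can be sketched generically: sample candidates from a Gaussian, keep an elite fraction, refit the Gaussian to the elites, and repeat. The cost function below is a stand-in for the Simulink avoidance-performance evaluation, and all parameters are illustrative.

```python
# Generic Cross-Entropy minimization sketch: Gaussian sampling, elite
# selection, distribution refit. The cost function and the "ideal" scaling
# factor of 0.7 are placeholders for the paper's simulated evaluation.
import random
import statistics

def cost(sf):                      # stand-in for the Simulink evaluation
    return (sf - 0.7) ** 2         # pretend 0.7 is the ideal scaling factor

def cross_entropy_minimize(iterations=30, samples=50, elite=10):
    random.seed(0)
    mu, sigma = 0.0, 1.0           # initial sampling distribution
    for _ in range(iterations):
        pop = [random.gauss(mu, sigma) for _ in range(samples)]
        pop.sort(key=cost)
        elites = pop[:elite]
        mu = statistics.fmean(elites)        # refit to the elite samples
        sigma = statistics.stdev(elites) + 1e-9
    return mu

best_sf = cross_entropy_minimize()
print(f"optimized scaling factor = {best_sf:.3f}")
```

The same loop, run in a higher-dimensional parameter space, is what tunes Membership Function parameters in the second training stage.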

Relevance: 30.00%

Abstract:

To understand how the human visual system analyzes images, it is essential to know the structure of the visual environment. In particular, natural images display consistent statistical properties that distinguish them from random luminance distributions. We have studied the geometric regularities of oriented elements (edges or line segments) present in an ensemble of visual scenes, asking how much information the presence of a segment in a particular location of the visual scene carries about the presence of a second segment at different relative positions and orientations. We observed strong long-range correlations in the distribution of oriented segments that extend over the whole visual field. We further show that a very simple geometric rule, cocircularity, predicts the arrangement of segments in natural scenes, and that different geometrical arrangements show relevant differences in their scaling properties. Our results show similarities to geometric features of previous physiological and psychophysical studies. We discuss the implications of these findings for theories of early vision.
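The cocircularity rule mentioned above is purely geometric: two oriented edge elements lie on a common circle exactly when their orientations are mirror images about the line joining them, i.e. theta2 = 2*phi - theta1 (mod pi), where phi is the direction of the connecting chord. The sketch below states that condition; the tolerance and example points are illustrative.

```python
# Geometric test for cocircularity of two oriented edge elements:
# theta2 must equal 2*phi - theta1 (mod pi), phi = chord direction.
import math

def cocircular(p1, theta1, p2, theta2, tol=1e-6):
    phi = math.atan2(p2[1] - p1[1], p2[0] - p1[0])
    expected = (2 * phi - theta1) % math.pi
    diff = abs(expected - theta2 % math.pi)
    return diff < tol or abs(diff - math.pi) < tol  # allow wrap-around

# Tangents to the unit circle at 0 and 90 degrees are cocircular ...
print(cocircular((1, 0), math.pi / 2, (0, 1), 0.0))
# ... while two parallel vertical segments at these positions are not.
print(cocircular((1, 0), math.pi / 2, (0, 1), math.pi / 2))
```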

Relevance: 30.00%

Abstract:

There is an increasing need for a comprehensive institutional understanding of ecosystem services (ESs) in coastal and marine fields. This paper develops a systematic framework to inform coastal and marine governance about the integration of ES concepts. First, as a theoretical basis, we analyze the generic rules that are part of the Institutional Analysis and Development (IAD) framework. Second, through an extensive literature review, we formulate a set of ES-specific rules and develop an evaluative framework for coastal and marine governance. Third, we examine this evaluative framework in a specific action situation, namely coastal strategic planning for Qingdao, China. Results from the literature review and the case study reveal that when designing ES-specific rules for coastal and marine governance, several socio-spatial and economic aspects should be taken into account: (1) conceive of stakeholders as ES users, (2) capture the effect of ecological scaling, (3) understand ES interactions and clarify indirect impacts and causalities, (4) account for ES values, and (5) draw on economic choices for use rights to deal with ES issues.