18 results for DGTW benchmarks

in BORIS: Bern Open Repository and Information System - Bern - Switzerland


Relevance: 10.00%

Abstract:

A feature represents a functional requirement fulfilled by a system. Since many maintenance tasks are expressed in terms of features, it is important to establish the correspondence between a feature and its implementation in source code. Traditional approaches to establish this correspondence exercise features to generate a trace of runtime events, which is then processed by post-mortem analysis. These approaches typically generate large amounts of data to analyze. Due to their static nature, these approaches do not support incremental and interactive analysis of features. We propose a radically different approach called live feature analysis, which provides a runtime model of features. Our approach analyzes features on a running system, makes it possible to grow feature representations by exercising different scenarios of the same feature, and identifies execution elements down to the sub-method level. We describe how live feature analysis is implemented effectively by annotating structural representations of code based on abstract syntax trees. We illustrate our live analysis with a case study where we achieve a more complete feature representation by exercising and merging variants of feature behavior, and demonstrate the efficiency of our technique with benchmarks.
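
For orientation, here is a minimal Python sketch of the core idea of growing a feature model by exercising scenarios and merging their traces. It is not the paper's implementation (which annotates AST-based structural representations down to the sub-method level); the sketch only records method-level elements, and all names (feature_map, exercise, _tracer) are invented for illustration.

```python
import sys
from collections import defaultdict

# Hypothetical sketch: map each executed code element to the set of features
# that exercised it, and grow the mapping by running further scenarios.
feature_map = defaultdict(set)   # code element -> {feature names}
_current_feature = None

def _tracer(frame, event, arg):
    if event == "call" and _current_feature is not None:
        element = f"{frame.f_code.co_filename}:{frame.f_code.co_name}"
        feature_map[element].add(_current_feature)
    return _tracer

def exercise(feature_name, scenario):
    """Run one scenario of a feature on the live system and merge its trace."""
    global _current_feature
    _current_feature = feature_name
    sys.settrace(_tracer)
    try:
        scenario()                # exercise the running system
    finally:
        sys.settrace(None)
        _current_feature = None

# Growing a feature representation by merging two scenario variants:
# exercise("checkout", scenario_with_coupon)
# exercise("checkout", scenario_without_coupon)
```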

Relevance: 10.00%

Abstract:

Grammars for programming languages are traditionally specified statically. They are hard to compose and reuse due to ambiguities that inevitably arise. PetitParser combines ideas from scannerless parsing, parser combinators, parsing expression grammars and packrat parsers to model grammars and parsers as objects that can be reconfigured dynamically. Through examples and benchmarks we demonstrate that dynamic grammars are not only flexible but highly practical.
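
The following toy Python sketch illustrates what "grammars and parsers as objects that can be reconfigured dynamically" means in practice. It is not PetitParser's actual Smalltalk API; the classes Parser, Literal, and Choice are invented stand-ins for parser-combinator objects.

```python
# Illustrative sketch only: grammar rules as first-class objects whose
# definitions can be inspected and changed while the program runs.
class Parser:
    def parse(self, s, i=0):        # returns (value, next_index) or None
        raise NotImplementedError

class Literal(Parser):
    def __init__(self, text):
        self.text = text
    def parse(self, s, i=0):
        return (self.text, i + len(self.text)) if s.startswith(self.text, i) else None

class Choice(Parser):
    def __init__(self, *alternatives):
        self.alternatives = list(alternatives)
    def parse(self, s, i=0):
        for alt in self.alternatives:
            result = alt.parse(s, i)
            if result is not None:
                return result
        return None

# The grammar is an ordinary object graph, so it can be reconfigured at runtime:
keyword = Choice(Literal("if"), Literal("while"))
assert keyword.parse("while x") is not None
keyword.alternatives.append(Literal("until"))   # extend the grammar dynamically
assert keyword.parse("until x") is not None
```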

Relevance: 10.00%

Abstract:

Behavioral reflection is crucial to support, for example, functional upgrades, on-the-fly debugging, or monitoring of critical applications. However, the use of reflective features can lead to severe problems due to infinite metacall recursion, even in simple cases. This is especially a problem when reflecting on core language features, since there is a high chance that such features are used to implement the reflective behavior itself. In this paper we analyze the problem of infinite meta-object call recursion and solve it by providing a first-class representation of meta-level execution: at any point in the execution of a system it can be determined whether we are operating on the meta-level or the base level, so that infinite recursion can be prevented. We show how meta-level execution can be represented by a meta-context and how reflection becomes context-aware. Our solution makes it possible to freely apply behavioral reflection even to system classes: the meta-context brings stability to behavioral reflection. We validate the concept with a robust implementation and present benchmarks.
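
A minimal Python sketch of the meta-context idea follows, assuming a thread-local flag as the "first-class representation" of meta-level execution; the names (run_meta, in_meta_level) are illustrative and not taken from the paper.

```python
import threading

# Sketch: a flag records whether we are currently executing on the meta-level,
# so reflective hooks installed on core operations cannot re-trigger themselves.
_context = threading.local()

def in_meta_level():
    return getattr(_context, "meta", False)

def run_meta(action):
    """Execute a meta-level action; nested hook activations are suppressed."""
    if in_meta_level():
        return                      # already on the meta-level: do not recurse
    _context.meta = True
    try:
        action()
    finally:
        _context.meta = False

def logging_hook(operation_name):
    # The hook itself uses print(), which could well be an instrumented core
    # operation; the meta-context guard keeps that from looping forever.
    run_meta(lambda: print("intercepted:", operation_name))
```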

Relevance: 10.00%

Abstract:

Dynamic, unanticipated adaptation of running systems is of interest in a variety of situations, ranging from functional upgrades to on-the-fly debugging or monitoring of critical applications. In this paper we study a particular form of computational reflection, called unanticipated partial behavioral reflection, which is particularly well suited for unanticipated adaptation of real-world systems. Our proposal combines the dynamicity of unanticipated reflection, i.e., reflection that does not require any advance preparation of the code, with the selectivity and efficiency of partial behavioral reflection. First, we propose unanticipated partial behavioral reflection, which enables the developer to precisely select the required reifications, to flexibly engineer the metalevel, and to introduce the meta behavior dynamically. Second, we present Geppetto, a system supporting unanticipated partial behavioral reflection in Squeak Smalltalk, and illustrate its use with a concrete example of a web application. Benchmarks validate the applicability of our proposal as an extension to the standard reflective abilities of Smalltalk.
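
As a loose analogy only (Geppetto works on Squeak Smalltalk, not Python, and selects reifications at a much finer grain), the sketch below shows what "partial" and "unanticipated" mean: a meta-object is attached at runtime to one selected operation of one selected class, without any prior code preparation, and can be removed again. All names here are invented.

```python
# Loose Python analogy, not Geppetto itself.
class MetaObject:
    def before(self, receiver, args): pass
    def after(self, receiver, result): pass

def reflect(cls, selector, metaobject):
    """Dynamically wrap cls.<selector> so the meta-object sees its activations."""
    original = getattr(cls, selector)
    def wrapped(self, *args, **kwargs):
        metaobject.before(self, args)
        result = original(self, *args, **kwargs)
        metaobject.after(self, result)
        return result
    setattr(cls, selector, wrapped)
    return lambda: setattr(cls, selector, original)   # undo handle

# Example: monitor only Cart.add_item while the application keeps running.
class Cart:
    def __init__(self): self.items = []
    def add_item(self, item): self.items.append(item)

class Tracer(MetaObject):
    def before(self, receiver, args): print("add_item called with", args)

unreflect = reflect(Cart, "add_item", Tracer())
Cart().add_item("book")
unreflect()          # remove the meta behavior, again without restarting
```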

Relevance: 10.00%

Abstract:

Back-in-time debuggers are extremely useful tools for identifying the causes of bugs, as they allow us to inspect the past states of objects that are no longer present in the current execution stack. Unfortunately, the "omniscient" approaches that try to remember all previous states are impractical because they either consume too much space or are far too slow. Several approaches rely on heuristics to limit these penalties, but they ultimately end up throwing out too much relevant information. In this paper we propose a practical approach to back-in-time debugging that keeps track of only the relevant past data. In contrast to other approaches, we keep object history information together with the regular objects in the application memory. Although seemingly counter-intuitive, this approach has the effect that past data which is not reachable from current application objects (and hence no longer relevant) is automatically garbage collected. We describe the technical details of our approach and present benchmarks demonstrating that memory consumption stays within practical bounds. Furthermore, since our approach works at the virtual machine level, the performance penalty is significantly lower than with other approaches.
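
The following is a hedged Python sketch of the key insight only, not the paper's VM-level design: past states are stored on the object itself, so when the object becomes unreachable its history is garbage collected along with it. The names (Recorded, Account) are invented.

```python
import time

class Recorded:
    """Attribute descriptor that remembers previous values per instance."""
    def __init__(self, name):
        self.name = name
    def __get__(self, obj, objtype=None):
        return obj.__dict__.get(self.name)
    def __set__(self, obj, value):
        # The history list lives in the instance's own __dict__, so it shares
        # the instance's lifetime and is collected together with it.
        history = obj.__dict__.setdefault(self.name + "__history", [])
        history.append((time.time(), obj.__dict__.get(self.name)))
        obj.__dict__[self.name] = value

class Account:
    balance = Recorded("balance")

acct = Account()
acct.balance = 100
acct.balance = 75
print(acct.__dict__["balance__history"])   # past states travel with the object
# When `acct` is garbage collected, its history disappears with it.
```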

Relevance: 10.00%

Abstract:

We report on our experiences with the Spy project, including implementation details and benchmark results. Spy is a re-implementation of the Squeak (i.e., Smalltalk-80) VM using the PyPy toolchain. The PyPy project allows code written in RPython, a subset of Python, to be translated to a multitude of different backends and architectures. During translation, many aspects of the implementation can be tuned independently, such as the garbage collection algorithm or the threading implementation. In this way, a whole host of interpreters can be derived from one abstract interpreter definition. Spy aims to bring these benefits to Squeak, allowing for greater portability and, eventually, improved performance. The current Spy codebase is able to run a small set of benchmarks on which it outperforms many similar Smalltalk VMs, though it still runs slower than Squeak itself. Spy was built from scratch over the course of a week during a joint Squeak-PyPy Sprint in Bern last autumn.

Relevance: 10.00%

Abstract:

Concurrency control is mostly based on locks and is therefore notoriously difficult to use. Even though some programming languages provide high-level constructs, these add complexity and potentially hard-to-detect bugs to the application. Transactional memory is an attractive mechanism that does not have the drawbacks of locks; however, the underlying implementation is often difficult to integrate into an existing language. In this paper we show how we introduced transactional semantics into Smalltalk by using the reflective facilities of the language. Our approach is based on method annotations, incremental parse tree transformations, and an optimistic commit protocol. The implementation does not depend on modifications to the virtual machine and can therefore be changed at the language level. We report on a practical case study, benchmarks, and further ongoing work.
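
To make "optimistic commit" concrete, here is a deliberately coarse Python sketch under invented assumptions (a single per-object version counter, whole-method retry); the paper's actual protocol is built from Smalltalk method annotations and parse-tree transformation and is not reproduced here.

```python
import threading

_versions = {}                 # object id -> version counter
_lock = threading.Lock()

def atomic(method):
    """Run the method on a shadow copy and commit only if nobody raced us."""
    def wrapper(obj, *args, **kwargs):
        while True:
            seen = _versions.get(id(obj), 0)
            shadow = type(obj).__new__(type(obj))     # private working copy
            shadow.__dict__.update(obj.__dict__)
            result = method(shadow, *args, **kwargs)
            with _lock:                               # optimistic commit point
                if _versions.get(id(obj), 0) == seen:
                    obj.__dict__.update(shadow.__dict__)
                    _versions[id(obj)] = seen + 1
                    return result
            # another transaction committed first: retry the whole method
    return wrapper

class Counter:
    def __init__(self): self.value = 0
    @atomic
    def increment(self): self.value += 1
```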

Relevance: 10.00%

Abstract:

Current advanced cloud infrastructure management solutions allow scheduling actions that dynamically change the number of running virtual machines (VMs). This approach, however, does not guarantee that the scheduled number of VMs will properly handle the actual user-generated workload, especially if user utilization patterns change. We propose using a dynamically generated scaling model for the VMs containing the services of distributed applications, which is able to react to variations in the number of application users. We answer the following question: how can we dynamically decide how many services of each type are needed in order to handle a larger workload within the same time constraints? We describe a mechanism for dynamically composing the SLAs that control the scaling of distributed services, combining data analysis mechanisms with application benchmarking using multiple VM configurations. By processing the data sets generated from multiple application benchmarks, we discover a set of service monitoring metrics able to predict critical Service Level Agreement (SLA) parameters. By combining this set of predictor metrics with a heuristic for selecting appropriate scaling-out paths for the services of distributed applications, we show how SLA scaling rules can be inferred and then used to control the runtime scale-in and scale-out of distributed services. We validate our architecture and models by performing scaling experiments with a distributed application representative of the enterprise class of information systems. We show how dynamically generated SLAs can successfully be used to control the management of distributed services scaling.
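
The sketch below is one hypothetical way to turn benchmark data into a scaling rule, in the spirit of inferring SLA scaling rules from predictor metrics; the metric name (queue_length), the threshold heuristic, and the assumption that benchmark runs cover both compliant and violating cases are all invented for illustration.

```python
import statistics

def learn_scaling_threshold(benchmark_runs, sla_response_ms):
    """benchmark_runs: list of dicts like {'queue_length': x, 'response_ms': y}."""
    violating = [r["queue_length"] for r in benchmark_runs
                 if r["response_ms"] > sla_response_ms]
    compliant = [r["queue_length"] for r in benchmark_runs
                 if r["response_ms"] <= sla_response_ms]
    # Place the threshold between typical compliant and violating metric values.
    return (statistics.mean(compliant) + statistics.mean(violating)) / 2

def scaling_decision(current_metrics, threshold):
    if current_metrics["queue_length"] > threshold:
        return "scale-out"     # add one more service VM of this type
    return "hold"
```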

Relevance: 10.00%

Abstract:

The European Union's (EU) trade policy has a strong influence on economic development and the human rights situation in the EU's partner countries, particularly in developing countries. The present study was commissioned by the German Federal Ministry for Economic Cooperation and Development (BMZ) as a contribution to further developing appropriate methodologies for assessing human rights risks in development-related policies, an objective set in the BMZ's 2011 strategy on human rights. The study offers guidance for stakeholders seeking to improve their knowledge of how to assess, both ex ante and ex post, the impact of Economic Partnership Agreements on poverty reduction and the right to food in ACP countries. Currently, human rights impacts are not yet systematically addressed in the trade sustainability impact assessments (trade SIAs) that the European Commission conducts when negotiating trade agreements, nor do these assessments focus specifically on disadvantaged groups or include other benchmarks relevant to human rights impact assessments (HRIAs). The EU itself has identified a need for action in this regard: in June 2012 it presented an Action Plan on Human Rights and Democracy that calls for the inclusion of human rights in all impact assessments and, in this context, explicitly refers to trade agreements. Since then, the EU has begun to slightly adapt its SIA methodology and is working to define more adequate, human rights–consistent procedures. It is hoped that readers of this study will find inspiration to contribute to this process and to improve the human rights consistency of future trade options.

Relevance: 10.00%

Abstract:

Cloud computing has evolved to become an enabler for delivering access to large-scale distributed applications running on managed, network-connected computing systems. This makes it possible to host Distributed Enterprise Information Systems (dEISs) in cloud environments while enforcing strict performance and quality of service requirements, defined using Service Level Agreements (SLAs). SLAs define the performance boundaries of distributed applications and are enforced by a cloud management system (CMS) that dynamically allocates the available computing resources to the cloud services. We present two novel VM-scaling algorithms focused on dEIS systems, which detect the most appropriate scaling conditions using performance models of distributed applications derived from constant-workload benchmarks, together with SLA-specified performance constraints. We simulate the VM-scaling algorithms in a cloud simulator and compare them against trace-based performance models of dEISs. We compare a total of three SLA-based VM-scaling algorithms (one using prediction mechanisms) based on a real-world application scenario involving a large, variable number of users. Our results show that it is beneficial to use autoregressive predictive SLA-driven scaling algorithms in cloud management systems for guaranteeing performance invariants of distributed cloud applications, as opposed to using only reactive SLA-based VM-scaling algorithms.
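
The contrast between reactive and predictive scaling can be sketched as follows; this is a toy illustration (the forecast is a simple autoregressive model on utilization increments, and the thresholds and sample values are made up), not the algorithms evaluated in the paper.

```python
def reactive_decision(current_util, high=0.8, low=0.3):
    """Reactive policy: act only on the current measurement."""
    if current_util > high:
        return "scale-out"
    if current_util < low:
        return "scale-in"
    return "hold"

def ar1_forecast(history, phi=0.9):
    """One-step-ahead forecast using an AR(1) model on the increments."""
    return history[-1] + phi * (history[-1] - history[-2])

def predictive_decision(history, high=0.8, low=0.3):
    """Predictive policy: act on the forecast instead of the last sample."""
    return reactive_decision(ar1_forecast(history), high, low)

utilization = [0.55, 0.65, 0.72, 0.78]
print(reactive_decision(utilization[-1]))    # "hold": SLA not yet breached
print(predictive_decision(utilization))      # "scale-out": acts before the breach
```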

Relevance: 10.00%

Abstract:

Paper 1: Pilot study of Swiss firms

Using a fixed effects approach, we investigate whether the presence of specific individuals on Swiss firms' boards affects firm performance and the policy choices these firms make. We find evidence for a substantial impact of these directors' presence on their firms. Moreover, the director effects are correlated across policies and performance measures but uncorrelated with the directors' backgrounds. We find these results interesting but conclude that they should be substantiated on a dataset that is larger and better understood by researchers. Further tests are also required to rule out methodological concerns.

Paper 2: Evidence from the S&P 1,500

We ask whether directors on corporate boards contribute to firm performance as individuals. From the universe of S&P 1,500 firms since 1996 we track 2,062 directors who serve on multiple boards over extended periods of time. Our initial findings suggest that the presence of these directors is associated with substantial performance shifts (director fixed effects). Closer examination shows that these effects are statistical artifacts, and we conclude that directors are largely fungible. Moreover, we contribute to the discussion of the fixed effects method; in particular, we highlight that the selection of the randomization method is pivotal when generating placebo benchmarks.

Paper 3: Robustness, statistical power, and important directors

This article provides a better understanding of Senn's (2014) findings: the outcome that individual directors are unrelated to firm performance proves robust against different estimation models and testing strategies. By looking at CEOs, the statistical power of the placebo benchmarking test is evaluated. We find that only the stronger tests are able to detect CEO fixed effects; however, these tests are not suitable for analyzing directors. The suitable tests would detect director effects if the interquartile range of the true effects amounted to 3 percentage points of ROA. As Senn (2014) finds no such effects for outside directors in general, we focus on groups of particularly important directors (e.g., COBs, non-busy directors, successful directors). Overall, our evidence suggests that the members of these groups are not individually associated with firm performance either. Thus, we confirm that individual directors are largely fungible; if an individual has an effect on performance, it is of small magnitude.
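
A highly simplified Python sketch of the placebo-benchmarking logic referred to above follows. It ignores controls and uses the spread of per-director mean performance as a stand-in for estimated fixed effects, and simple shuffling is only one of the possible randomization schemes the papers caution about; everything here is illustrative, not the papers' estimation procedure.

```python
import random

def director_effect_spread(panel):
    """panel: list of (director_id, firm_performance); returns IQR of means."""
    by_director = {}
    for director, perf in panel:
        by_director.setdefault(director, []).append(perf)
    means = sorted(sum(v) / len(v) for v in by_director.values())
    return means[3 * len(means) // 4] - means[len(means) // 4]

def placebo_spread(panel, draws=1000):
    """Average spread obtained after randomly reassigning director identities."""
    ids = [d for d, _ in panel]
    perfs = [p for _, p in panel]
    spreads = []
    for _ in range(draws):
        shuffled = random.sample(ids, len(ids))
        spreads.append(director_effect_spread(list(zip(shuffled, perfs))))
    return sum(spreads) / len(spreads)

# Observed effects are only meaningful if they clearly exceed the placebo spread.
```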

Relevance: 10.00%

Abstract:

Acid rock drainage (ARD) is a problem of international relevance with substantial environmental and economic implications. Reactive transport modeling has proven to be a powerful tool for the process-based assessment of metal release and attenuation at ARD sites. Although a variety of models has been used to investigate ARD, a systematic model intercomparison has not been conducted to date. This contribution presents such a model intercomparison, involving three synthetic benchmark problems designed to evaluate model results for the most relevant processes at ARD sites. The first benchmark (ARD-B1) focuses on the oxidation of sulfide minerals in an unsaturated tailings impoundment affected by the ingress of atmospheric oxygen. ARD-B2 extends the first problem to include pH buffering by primary mineral dissolution and secondary mineral precipitation. The third problem (ARD-B3) additionally considers the kinetic and pH-dependent dissolution of silicate minerals under low-pH conditions. The set of benchmarks was solved by four reactive transport codes, namely CrunchFlow, Flotran, HP1, and MIN3P. The comparison of results focused on spatial profiles of dissolved concentrations, pH and pE, pore gas composition, and mineral assemblages. In addition, transient profiles for selected elements and cumulative mass loadings were considered in the intercomparison. Despite substantial differences in model formulations, very good agreement was obtained between the various codes. Residual deviations between the results are analyzed and discussed in terms of their implications for capturing system evolution and for long-term mass loading predictions.
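
For readers unfamiliar with the chemistry, the reaction usually at the heart of sulfide oxidation in such settings is the oxidation of pyrite by dissolved oxygen, shown below for orientation only; the benchmark problems define their own full reaction networks, which are not reproduced here.

```latex
% Canonical pyrite oxidation releasing acidity (illustrative reference only):
\[
  \mathrm{FeS_2} + \tfrac{7}{2}\,\mathrm{O_2} + \mathrm{H_2O}
  \;\longrightarrow\; \mathrm{Fe^{2+}} + 2\,\mathrm{SO_4^{2-}} + 2\,\mathrm{H^+}
\]
```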

Relevance: 10.00%

Abstract:

Effects of conspecific neighbours on survival and growth of trees have been found to be related to species abundance. Both positive and negative relationships may explain observed abundance patterns. Surprisingly, it is rarely tested whether such relationships could be biased or even spurious due to transformations of neighbourhood variables or influences of spatial aggregation, distance decay of neighbour effects, and standardization of effect sizes. To investigate potential biases, communities of 20 identical species were simulated with log-series abundances but without species-specific interactions. No relationship of conspecific neighbour effects on survival or growth with species abundance was expected. Survival and growth of individuals were simulated in random and aggregated spatial patterns using no, linear, or squared distance decay of neighbour effects. Regression coefficients of statistical neighbourhood models were unbiased and unrelated to species abundance. However, variation in the number of conspecific neighbours was positively or negatively related to species abundance depending on transformations of neighbourhood variables, spatial pattern, and distance decay. Consequently, effect sizes and standardized regression coefficients, often used in model fitting across large numbers of species, were also positively or negatively related to species abundance depending on transformation of neighbourhood variables, spatial pattern, and distance decay. Tests using randomized tree positions and identities provide the best benchmarks by which to critically evaluate relationships of effect sizes or standardized regression coefficients with tree species abundance. This will better guard against potential misinterpretations.
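
A minimal Python sketch of the recommended randomization benchmark is given below, assuming a user-supplied fitting routine (fit_neighbourhood_model is a hypothetical stand-in for whatever survival or growth regression is actually used); it only illustrates the permutation logic, not the simulations reported in the paper.

```python
import random

def randomization_benchmark(trees, fit_neighbourhood_model, draws=999):
    """trees: list of dicts with 'species', 'x', 'y', 'response' keys.

    Returns the observed effect size and a null distribution obtained by
    refitting the model on data with randomized tree identities.
    """
    observed = fit_neighbourhood_model(trees)
    species_labels = [t["species"] for t in trees]
    null_effects = []
    for _ in range(draws):
        shuffled = random.sample(species_labels, len(species_labels))
        permuted = [dict(t, species=s) for t, s in zip(trees, shuffled)]
        null_effects.append(fit_neighbourhood_model(permuted))
    # Observed effect sizes that fall within the null spread should not be
    # interpreted as abundance-related conspecific density dependence.
    return observed, null_effects
```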