502 results for BENCHMARKS
Abstract:
We report on our experiences with the Spy project, including implementation details and benchmark results. Spy is a re-implementation of the Squeak (i.e., Smalltalk-80) VM using the PyPy toolchain. The PyPy project allows code written in RPython, a subset of Python, to be translated to a multitude of different backends and architectures. During translation, many aspects of the implementation can be tuned independently, such as the garbage collection algorithm or the threading implementation. In this way, a whole host of interpreters can be derived from one abstract interpreter definition. Spy aims to bring these benefits to Squeak, allowing for greater portability and, eventually, improved performance. The current Spy codebase is able to run a small set of benchmarks that demonstrate performance superior to that of many similar Smalltalk VMs, although they still run slower than in Squeak itself. Spy was built from scratch over the course of a week during a joint Squeak-PyPy Sprint in Bern last autumn.
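The central idea above — deriving a family of concrete VMs from one abstract interpreter definition — can be illustrated with a minimal sketch. The toy stack-machine loop below is written in the restricted, statically analysable style that RPython-like toolchains expect; the opcodes and the example program are hypothetical and are not taken from the Spy codebase.

    # Illustrative sketch only: a tiny stack-based bytecode interpreter written in a
    # restricted, RPython-friendly style. Opcodes and program are hypothetical and
    # are not taken from the Spy codebase.
    PUSH, ADD, MUL, HALT = 0, 1, 2, 3

    def interpret(bytecode):
        """Run a flat list of (opcode, argument) pairs and return the top of stack."""
        stack = []
        pc = 0
        while pc < len(bytecode):
            opcode, arg = bytecode[pc]
            pc += 1
            if opcode == PUSH:
                stack.append(arg)
            elif opcode == ADD:
                b = stack.pop(); a = stack.pop()
                stack.append(a + b)
            elif opcode == MUL:
                b = stack.pop(); a = stack.pop()
                stack.append(a * b)
            elif opcode == HALT:
                break
        return stack[-1]

    if __name__ == "__main__":
        # (2 + 3) * 4 == 20
        program = [(PUSH, 2), (PUSH, 3), (ADD, 0), (PUSH, 4), (MUL, 0), (HALT, 0)]
        print(interpret(program))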
Abstract:
Concurrency control is mostly based on locks and is therefore notoriously difficult to use. Even though some programming languages provide high-level constructs, these add complexity and potentially hard-to-detect bugs to the application. Transactional memory is an attractive mechanism that does not have the drawbacks of locks; however, the underlying implementation is often difficult to integrate into an existing language. In this paper we show how we have introduced transactional semantics into Smalltalk by using the reflective facilities of the language. Our approach is based on method annotations, incremental parse tree transformations and an optimistic commit protocol. The implementation does not depend on modifications to the virtual machine and can therefore be changed at the language level. We report on a practical case study, benchmarks, and further ongoing work.
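The optimistic commit protocol mentioned above can be sketched in a few lines. The following Python illustration assumes a simple versioned shared store: reads record the version they saw, writes are buffered, and commit validates the read set before applying the write set atomically. It is only an illustration of the validate-then-commit idea, not the paper's Smalltalk implementation.

    # Sketch of an optimistic commit protocol over an assumed versioned store;
    # not the paper's Smalltalk implementation.
    import threading

    class Store:
        def __init__(self):
            self.values = {}      # key -> value
            self.versions = {}    # key -> integer version
            self.lock = threading.Lock()

    class Transaction:
        def __init__(self, store):
            self.store = store
            self.read_set = {}    # key -> version seen at first read
            self.write_set = {}   # key -> buffered new value

        def read(self, key):
            if key in self.write_set:
                return self.write_set[key]
            self.read_set.setdefault(key, self.store.versions.get(key, 0))
            return self.store.values.get(key)

        def write(self, key, value):
            self.write_set[key] = value

        def commit(self):
            with self.store.lock:
                # Validate: abort if any key we read has since been changed.
                for key, version in self.read_set.items():
                    if self.store.versions.get(key, 0) != version:
                        return False
                # Apply buffered writes atomically and bump versions.
                for key, value in self.write_set.items():
                    self.store.values[key] = value
                    self.store.versions[key] = self.store.versions.get(key, 0) + 1
                return True

    if __name__ == "__main__":
        store = Store()
        tx = Transaction(store)
        tx.write("balance", (tx.read("balance") or 0) + 100)
        print("committed:", tx.commit())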
Abstract:
Several commentators have expressed disappointment with New Labour's apparent adherence to the policy frameworks of the previous Conservative administrations. The employment orientation of its welfare programmes, the contradictory nature of the social exclusion initiatives, and the continuing obsession with public sector marketisation, inspections, audits, standards and so on have all come under critical scrutiny (cf. Blyth 2001; Jordan 2001; Orme 2001). This paper suggests that in order to understand the socio-economic and political contexts affecting social work we need to examine the relationship between New Labour's modernisation project and its insertion within an architecture of global governance. In particular, membership of the European Union (EU), International Monetary Fund (IMF) and World Trade Organisation (WTO) sets the parameters for domestic policy in important ways. Whilst much has been written about the economic dimensions of 'globalisation' in relation to social work, rather less has been noted about the ways in which domestic policy agendas are driven by multilateral governance objectives. This policy dimension is important in trying to respond to the various changes affecting social work as a professional activity. What is possible, what is encouraged, and how things might be done are tightly bounded by the policy frameworks governing practice and affected by those governing the lives of service users. It is unhelpful to see policy formulation in purely national terms, as the UK is inserted into a network governance structure: a regulatory framework in which decisions are made by many countries, organisations and agencies. Together, they are producing a 'new legal regime', characterised by a marked neo-liberal policy agenda. This paper aims to demonstrate the relationship of New Labour's modernisation programme to these new forms of legality by examining two main policy areas and the welfare implications in which they are enmeshed. The first is privatisation, and the second is social policy in the European Union. Examining these areas allows a demonstration of how much of the New Labour programme can be understood as a local implementation of a transnational strategy, how parts of that strategy produce much of the social exclusion it purports to address, and how social welfare, and particularly social work, are noticeable by their absence within the policy discourses of the strategy. The paper details how the privatisation programme is considered a crucial vehicle for the further development of a transnational political economy in which capital accumulation has been redefined as 'welfare'. In this development, frameworks, codes and standards are central, and the final section of the paper examines how the modernisation strategy of the European Union depends upon social policy marked by an employment orientation and risk rationality, aimed at reconfiguring citizen identities. The strategy is governed through an 'open mode of coordination', in which codes, standards, benchmarks and so on play an important role. The paper considers the modernisation strategy, and the new legality within which it is embedded, as dependent upon social policy as a technology of liberal governance, one demonstrating a new rationality in comparison to that governing post-Second World War welfare, and one which aims to reconfigure institutional infrastructure and citizen identity.
Abstract:
Optimising intralogistics processes requires a holistic representation of the process that takes material flow, information flow and the resources employed into account. In this paper, several commonly used methods for process representation are compared and evaluated in this respect. Their respective strengths and weaknesses are summarised in the form of a benchmark, which serves as the basis for a new method developed within the IGF research project 16187 N/1.
Abstract:
Current advanced cloud infrastructure management solutions allow scheduling actions for dynamically changing the number of running virtual machines (VMs). This approach, however, does not guarantee that the scheduled number of VMs will properly handle the actual user-generated workload, especially if user utilization patterns change. We propose using a dynamically generated scaling model for the VMs containing the services of distributed applications, one that is able to react to variations in the number of application users. We answer the following question: how can we dynamically decide how many services of each type are needed in order to handle a larger workload within the same time constraints? We describe a mechanism for dynamically composing the SLAs for controlling the scaling of distributed services by combining data analysis mechanisms with application benchmarking using multiple VM configurations. By processing the data sets generated by multiple application benchmarks, we discover a set of service monitoring metrics able to predict critical Service Level Agreement (SLA) parameters. By combining this set of predictor metrics with a heuristic for selecting the appropriate scaling-out paths for the services of distributed applications, we show how SLA scaling rules can be inferred and then used for controlling the runtime scale-in and scale-out of distributed services. We validate our architecture and models by performing scaling experiments with a distributed application representative of the enterprise class of information systems. We show how dynamically generated SLAs can be successfully used for controlling the scaling of distributed services.
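A minimal sketch of the rule-inference step described above: given benchmark data sets, pick the monitoring metric that best predicts the SLA-relevant response time and infer the metric value at which the SLA limit is crossed, which then becomes a scale-out rule. The metric names, SLA limit and interpolation scheme below are illustrative assumptions, not the paper's mechanism.

    # Sketch: infer a scale-out rule from assumed benchmark data.
    from statistics import correlation  # Python 3.10+

    # Hypothetical benchmark samples: monitoring metrics plus measured response time (ms).
    benchmark = [
        {"cpu": 0.20, "queue_len": 2,  "response_ms": 110},
        {"cpu": 0.35, "queue_len": 5,  "response_ms": 160},
        {"cpu": 0.55, "queue_len": 9,  "response_ms": 240},
        {"cpu": 0.70, "queue_len": 14, "response_ms": 380},
        {"cpu": 0.85, "queue_len": 22, "response_ms": 620},
    ]
    SLA_LIMIT_MS = 300  # assumed SLA bound on response time

    def best_predictor(samples, metrics):
        """Pick the metric whose values correlate most strongly with response time."""
        y = [s["response_ms"] for s in samples]
        return max(metrics, key=lambda m: abs(correlation([s[m] for s in samples], y)))

    def infer_threshold(samples, metric, limit):
        """Linearly interpolate the metric value at which the SLA limit is crossed."""
        pts = sorted((s[metric], s["response_ms"]) for s in samples)
        for (x0, y0), (x1, y1) in zip(pts, pts[1:]):
            if y0 <= limit <= y1:
                return x0 + (limit - y0) * (x1 - x0) / (y1 - y0)
        return pts[-1][0]

    metric = best_predictor(benchmark, ["cpu", "queue_len"])
    threshold = infer_threshold(benchmark, metric, SLA_LIMIT_MS)
    print(f"scale out when {metric} > {threshold:.2f}")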
Abstract:
High-throughput assays, such as the yeast two-hybrid system, have generated a huge amount of protein-protein interaction (PPI) data in the past decade. This tremendously increases the need for reliable methods to systematically and automatically suggest protein functions and the relationships between proteins. With the available PPI data, it is now possible to study functions and relationships in the context of a large-scale network. To date, several network-based schemes have been proposed to effectively annotate protein functions on a large scale. However, due to the noise inherent in high-throughput data generation, new methods and algorithms are needed to increase the reliability of functional annotations. Previous work on a yeast PPI network (Samanta and Liang, 2003) has shown that the local connection topology, particularly for two proteins sharing an unusually large number of neighbors, can predict functional associations between proteins and hence suggest their functions. One advantage of that work is that the algorithm is not sensitive to noise (false positives) in high-throughput PPI data. In this study, we improved their prediction scheme by developing a new algorithm and new methods, which we applied to a human PPI network to make a genome-wide functional inference. We used the new algorithm to measure and reduce the influence of hub proteins on detecting functionally associated proteins. We used the annotations of the Gene Ontology (GO) and the Kyoto Encyclopedia of Genes and Genomes (KEGG) as independent and unbiased benchmarks to evaluate our algorithms and methods within the human PPI network. We showed that, compared with the previous work of Samanta and Liang, the algorithm and methods developed in this study improve the overall quality of functional inferences for human proteins. By applying the algorithms to the human PPI network, we obtained 4,233 significant functional associations among 1,754 proteins. Further comparison of their KEGG and GO annotations allowed us to assign 466 KEGG pathway annotations to 274 proteins and 123 GO annotations to 114 proteins, with estimated false discovery rates of <21% for KEGG and <30% for GO. We clustered 1,729 proteins by their functional associations and performed pathway analysis to identify several subclusters that are highly enriched in certain signaling pathways. In particular, we performed a detailed analysis of a subcluster enriched in the transforming growth factor β signaling pathway (P < 10^-50), which is important in cell proliferation and tumorigenesis. Analysis of another four subclusters also suggested potential new players in six signaling pathways worthy of further experimental investigation. Our study gives clear insight into the common neighbor-based prediction scheme and provides a reliable method for large-scale functional annotation in the post-genomic era.
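The common-neighbor idea with hub correction can be illustrated as follows: two proteins sharing many neighbors are scored as functionally associated, and each shared neighbor contributes less the higher its degree, so that hub proteins carry less weight. The sketch below uses an Adamic/Adar-style weighting on a hypothetical edge list; it illustrates the idea only and is not the exact statistic developed in the study.

    # Sketch: common-neighbour association scores with hub down-weighting
    # (Adamic/Adar-style), on a hypothetical PPI edge list.
    import math
    from collections import defaultdict
    from itertools import combinations

    edges = [
        ("A", "B"), ("A", "C"), ("A", "H"),
        ("D", "B"), ("D", "C"), ("D", "H"),
        ("E", "H"), ("F", "H"), ("G", "H"),   # H acts as a hub
    ]

    adjacency = defaultdict(set)
    for u, v in edges:
        adjacency[u].add(v)
        adjacency[v].add(u)

    def association_score(p, q):
        """Sum of 1/log(degree) over shared neighbours: hubs contribute less."""
        shared = adjacency[p] & adjacency[q]
        return sum(1.0 / math.log(len(adjacency[n])) for n in shared if len(adjacency[n]) > 1)

    scores = {(p, q): association_score(p, q)
              for p, q in combinations(sorted(adjacency), 2) if p not in adjacency[q]}
    for pair, score in sorted(scores.items(), key=lambda kv: -kv[1])[:3]:
        print(pair, round(score, 3))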
Abstract:
As the family preservation and support movement evolves rapidly, this article provides an overview of the past, present and future of this approach to policy and services. Building on several decades of practice experience and research, and now supported by federal funding, program designers are searching for ways to implement system-wide change with an array of services, all from a family focus and strengths perspective. Critical issues facing the movement are discussed and a set of benchmarks by which to judge future success is presented.
Abstract:
In order to fully describe the construct of empowerment and to determine possible measures for this construct in racially and ethnically diverse neighborhoods, a qualitative study based on Grounded Theory was conducted at both the individual and collective levels. Participants included 49 grassroots experts on community empowerment, who took part in semi-structured interviews and focus groups; the researcher also conducted field observations as part of the research protocol. The results of the study identified benchmarks of individual and collective empowerment and hundreds of possible markers of collective empowerment applicable in diverse communities. Results also indicated that community involvement is essential in the selection and implementation of proper measures. Additional findings were that the construct of empowerment involves specific principles of empowering relationships and particular motivational factors. All of these findings lead to a two-dimensional model of empowerment based on the concepts of relationships among members of a collective body and the collective body's desire for socio-political change. These results suggest that the design, implementation, and evaluation of programs that foster empowerment must be based on collaborative ventures between the population being served and program staff because of the interactive, synergistic nature of the construct. In addition, empowering programs should embrace specific principles and processes of individual and collective empowerment in order to maximize their effectiveness and efficiency. Finally, the results suggest that collaboratively choosing markers to measure the processes and outcomes of empowerment in the main systems and populations living in today's multifaceted communities is a useful mechanism for determining change.
Abstract:
The European Union’s (EU) trade policy has a strong influence on economic development and the human rights situation in the EU’s partner countries, particularly in developing countries. The present study was commissioned by the German Federal Ministry for Economic Cooperation and Development (BMZ) as a contribution to further developing appropriate methodologies for assessing human rights risks in development-related policies, an objective set out in the BMZ’s 2011 strategy on human rights. The study offers guidance for stakeholders seeking to improve their knowledge of how to assess, both ex ante and ex post, the impact of Economic Partnership Agreements on poverty reduction and the right to food in ACP countries. Currently, human rights impacts are not systematically addressed in the trade sustainability impact assessments (trade SIAs) that the European Commission conducts when negotiating trade agreements, nor do these assessments focus specifically on disadvantaged groups or include other benchmarks relevant to human rights impact assessments (HRIAs). The EU itself has identified a need for action in this regard: in June 2012 it presented an Action Plan on Human Rights and Democracy that calls for the inclusion of human rights in all impact assessments and, in this context, explicitly refers to trade agreements. Since then, the EU has begun to adapt its SIA methodology slightly and is working to define more adequate, human rights-consistent procedures. It is hoped that this study will inspire readers to contribute to this process and to help improve the human rights consistency of future trade options.
Abstract:
We track dated firn horizons within 400 MHz short-pulse radar profiles to find the continuous extent over which they can be used as historical benchmarks for studying past accumulation rates in West Antarctica. The 30-40 cm pulse resolution is comparable to the accumulation rates of most areas. We tracked a particular set of horizons that varied from 30 to 90 m in depth over a distance of 600 km. The main limitations to continuity are fading at depth, pinching associated with accumulation rate differences within hills and valleys, and artificial fading caused by stacking along dips. The latter two may be overcome over multi-kilometer distances by matching the relative amplitude and spacing of several close horizons, along with their pulse forms and phases. Modeling of reflections from thin layers suggests that the −37 to −50 dB range of reflectivity and the pulse waveforms we observed are caused by the numerous thin ice layers observed in core stratigraphy. Constructive interference between reflections from these close, high-density layers can explain the maintenance of reflective strength throughout the depth of the firn despite the effects of compaction. The continuity suggests that these layers formed throughout West Antarctica, and possibly into East Antarctica as well.
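As a rough illustration of the thin-layer argument, the sketch below evaluates the standard normal-incidence thin-film reflection formula for a single thin ice layer embedded in firn at 400 MHz. The refractive indices and layer thicknesses are assumed values rather than the paper's model, but millimetre-to-centimetre layers land in roughly the reported reflectivity range.

    # Sketch: thin-film reflection of a thin ice layer in firn at 400 MHz.
    # Indices and thicknesses are assumed illustrative values.
    import cmath
    import math

    C = 3.0e8            # speed of light in vacuum, m/s
    FREQ = 400e6         # radar centre frequency, Hz
    LAMBDA0 = C / FREQ   # free-space wavelength, m

    def thin_layer_reflectivity_db(n_host, n_layer, thickness_m):
        """Composite reflection from a host/layer/host stack, as power reflectivity in dB."""
        r12 = (n_host - n_layer) / (n_host + n_layer)          # host -> layer interface
        r23 = -r12                                             # layer -> host interface
        beta = 2 * math.pi * n_layer * thickness_m / LAMBDA0   # one-way phase in the layer
        r = (r12 + r23 * cmath.exp(-2j * beta)) / (1 + r12 * r23 * cmath.exp(-2j * beta))
        return 20 * math.log10(abs(r))

    n_firn, n_ice = 1.45, 1.78   # assumed refractive indices for firn and solid ice
    for thickness in (0.002, 0.005, 0.01, 0.02):
        print(f"{thickness*1000:4.0f} mm ice layer: "
              f"{thin_layer_reflectivity_db(n_firn, n_ice, thickness):6.1f} dB")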
Abstract:
Interior ice elevations of the West Antarctic Ice Sheet (WAIS) during the last glaciation, which can serve as benchmarks for ice-sheet models, are largely unconstrained. Here we report past ice elevation data from the Ohio Range, located near the WAIS divide and the onset region of the Mercer Ice Stream. Cosmogenic exposure ages of glacial erratics that record a WAIS highstand ~125 m above the present surface date to ~11.5 ka. The deglacial chronology precludes an interior WAIS contribution to meltwater pulse 1A. Our observational data on ice elevation changes compare well with predictions of a thermomechanical ice-sheet model that incorporates very low basal shear stress downstream of the present-day grounding line. We conclude that ice streams in the Ross Sea Embayment had thin, low-slope profiles during the last glaciation and that interior WAIS ice elevations during this period were several hundred meters lower than in previous reconstructions.
Abstract:
Cloud Computing has evolved to become an enabler for delivering access to large-scale distributed applications running on managed, network-connected computing systems. This makes it possible to host Distributed Enterprise Information Systems (dEISs) in cloud environments while enforcing strict performance and quality of service requirements, defined using Service Level Agreements (SLAs). SLAs define the performance boundaries of distributed applications and are enforced by a cloud management system (CMS) that dynamically allocates the available computing resources to the cloud services. We present two novel VM-scaling algorithms focused on dEIS systems, which optimally detect the most appropriate scaling conditions using performance models of distributed applications derived from constant-workload benchmarks, together with SLA-specified performance constraints. We simulate the VM-scaling algorithms in a cloud simulator and compare them against trace-based performance models of dEISs. We compare a total of three SLA-based VM-scaling algorithms (one using prediction mechanisms) based on a real-world application scenario involving a large, variable number of users. Our results show that it is beneficial to use autoregressive predictive SLA-driven scaling algorithms in cloud management systems for guaranteeing performance invariants of distributed cloud applications, as opposed to using only reactive SLA-based VM-scaling algorithms.
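The contrast between reactive and autoregressive predictive scaling can be sketched very simply: a reactive rule sizes the VM pool for the workload just observed, while a predictive rule fits a one-step autoregressive model to the recent workload and sizes the pool for the forecast. The AR(1) fit, per-VM capacity and workload trace below are illustrative assumptions, not the algorithms evaluated in the paper.

    # Sketch: reactive vs. AR(1)-predictive VM sizing on an assumed workload trace.
    import math

    def fit_ar1(series):
        """Least-squares fit of x[t] = a * x[t-1] + b over the observed series."""
        xs, ys = series[:-1], series[1:]
        n = len(xs)
        mean_x, mean_y = sum(xs) / n, sum(ys) / n
        cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
        var = sum((x - mean_x) ** 2 for x in xs)
        a = cov / var if var else 1.0
        return a, mean_y - a * mean_x

    def vms_needed(workload, capacity_per_vm):
        return max(1, math.ceil(workload / capacity_per_vm))

    workload = [120, 135, 150, 170, 195, 225]   # requests/s, hypothetical trace
    CAPACITY = 60                               # requests/s one VM sustains within the SLA

    a, b = fit_ar1(workload)
    predicted = a * workload[-1] + b
    print("reactive  :", vms_needed(workload[-1], CAPACITY), "VMs for current load", workload[-1])
    print("predictive:", vms_needed(predicted, CAPACITY), "VMs for predicted load", round(predicted, 1))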
Abstract:
Paper 1: Pilot study of Swiss firms. Using a fixed effects approach, we investigate whether the presence of specific individuals on Swiss firms' boards affects firm performance and the policy choices they make. We find evidence for a substantial impact of these directors' presence on their firms. Moreover, the director effects are correlated across policies and performance measures but uncorrelated with the directors' backgrounds. We find these results interesting but conclude that they should be substantiated on a dataset that is larger and better understood by researchers. Further tests are also required to rule out methodological concerns.
Paper 2: Evidence from the S&P 1,500. We ask whether directors on corporate boards contribute to firm performance as individuals. From the universe of the S&P 1,500 firms since 1996, we track 2,062 directors who serve on multiple boards over extended periods of time. Our initial findings suggest that the presence of these directors is associated with substantial performance shifts (director fixed effects). Closer examination shows that these effects are statistical artifacts, and we conclude that directors are largely fungible. Moreover, we contribute to the discussion of the fixed effects method; in particular, we highlight that the selection of the randomization method is pivotal when generating placebo benchmarks.
Paper 3: Robustness, statistical power, and important directors. This article provides a better understanding of Senn's (2014) findings: the result that individual directors are unrelated to firm performance proves robust against different estimation models and testing strategies. By looking at CEOs, the statistical power of the placebo benchmarking test is evaluated. We find that only the stronger tests are able to detect CEO fixed effects; however, these tests are not suitable for analyzing directors. The suitable tests would detect director effects if the interquartile range of the true effects amounted to 3 percentage points of ROA. As Senn (2014) finds no such effects for outside directors in general, we focus on groups of particularly important directors (e.g., COBs, non-busy directors, successful directors). Overall, our evidence suggests that the members of these groups are not individually associated with firm performance either. Thus, we confirm that individual directors are largely fungible; if an individual does have an effect on performance, it is of small magnitude.
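The placebo-benchmarking logic referred to in Papers 2 and 3 can be sketched as follows: measure how much explanatory power director identities add to a performance measure, then compare that figure with the distribution obtained when directorships are randomly reshuffled. The data, the crude variance-decomposition stand-in for fixed effects, and the shuffling scheme below are illustrative only; as the papers note, the choice of randomization method is itself pivotal.

    # Sketch: placebo benchmark for director fixed effects on hypothetical panel data.
    import random
    from collections import defaultdict

    random.seed(0)

    # Hypothetical panel: (firm, director, ROA observation); no true director effects.
    panel = [(f, d, random.gauss(0.05, 0.02))
             for f in range(30) for d in random.sample(range(60), 3)]

    def director_r2(observations):
        """Share of ROA variance explained by director means (a crude stand-in
        for director fixed effects, ignoring firm and year effects)."""
        by_director = defaultdict(list)
        for _, d, roa in observations:
            by_director[d].append(roa)
        grand = sum(r for _, _, r in observations) / len(observations)
        total = sum((r - grand) ** 2 for _, _, r in observations)
        explained = sum(len(v) * (sum(v) / len(v) - grand) ** 2 for v in by_director.values())
        return explained / total

    actual = director_r2(panel)
    placebos = []
    for _ in range(200):
        shuffled_ids = [d for _, d, _ in panel]
        random.shuffle(shuffled_ids)   # break the true director-observation link
        placebos.append(director_r2([(f, d, r) for (f, _, r), d in zip(panel, shuffled_ids)]))

    p_value = sum(p >= actual for p in placebos) / len(placebos)
    print(f"actual R2 = {actual:.3f}, placebo mean = {sum(placebos)/len(placebos):.3f}, p = {p_value:.2f}")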
Abstract:
Acid rock drainage (ARD) is a problem of international relevance with substantial environmental and economic implications. Reactive transport modeling has proven a powerful tool for the process-based assessment of metal release and attenuation at ARD sites. Although a variety of models has been used to investigate ARD, a systematic model intercomparison has not been conducted to date. This contribution presents such a model intercomparison, involving three synthetic benchmark problems designed to evaluate model results for the most relevant processes at ARD sites. The first benchmark (ARD-B1) focuses on the oxidation of sulfide minerals in an unsaturated tailings impoundment affected by the ingress of atmospheric oxygen. ARD-B2 extends the first problem to include pH buffering by primary mineral dissolution and secondary mineral precipitation. The third problem (ARD-B3) additionally considers the kinetic and pH-dependent dissolution of silicate minerals under low-pH conditions. The set of benchmarks was solved by four reactive transport codes: CrunchFlow, Flotran, HP1, and MIN3P. The comparison of results focused on spatial profiles of dissolved concentrations, pH and pE, pore gas composition, and mineral assemblages. In addition, transient profiles for selected elements and cumulative mass loadings were considered in the intercomparison. Despite substantial differences in model formulations, very good agreement was obtained between the various codes. Residual deviations between the results are analyzed and discussed in terms of their implications for capturing system evolution and long-term mass loading predictions.
Abstract:
Effects of conspecific neighbours on survival and growth of trees have been found to be related to species abundance. Both positive and negative relationships may explain observed abundance patterns. Surprisingly, it is rarely tested whether such relationships could be biased or even spurious due to transforming neighbourhood variables or influences of spatial aggregation, distance decay of neighbour effects and standardization of effect sizes. To investigate potential biases, communities of 20 identical species were simulated with log-series abundances but without species-specific interactions. No relationship of conspecific neighbour effects on survival or growth with species abundance was expected. Survival and growth of individuals was simulated in random and aggregated spatial patterns using no, linear, or squared distance decay of neighbour effects. Regression coefficients of statistical neighbourhood models were unbiased and unrelated to species abundance. However, variation in the number of conspecific neighbours was positively or negatively related to species abundance depending on transformations of neighbourhood variables, spatial pattern and distance decay. Consequently, effect sizes and standardized regression coefficients, often used in model fitting across large numbers of species, were also positively or negatively related to species abundance depending on transformation of neighbourhood variables, spatial pattern and distance decay. Tests using randomized tree positions and identities provide the best benchmarks by which to critically evaluate relationships of effect sizes or standardized regression coefficients with tree species abundance. This will better guard against potential misinterpretations.
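A minimal sketch of the randomization benchmark recommended above: estimate a conspecific-neighbour effect on survival in a simulated stand, then repeat the estimate after shuffling tree identities (keeping positions fixed) so that any remaining effect is spurious by construction. The simulated forest and the simple difference-in-means estimator are illustrative assumptions, not the paper's full simulation design.

    # Sketch: randomized-identity null benchmark for conspecific neighbour effects.
    import math
    import random

    random.seed(1)
    N_TREES, N_SPECIES, RADIUS = 500, 20, 5.0

    trees = [{"x": random.uniform(0, 100), "y": random.uniform(0, 100),
              "sp": random.randrange(N_SPECIES),
              "alive": random.random() < 0.8}   # survival independent of neighbours
             for _ in range(N_TREES)]

    def conspecific_count(i, individuals):
        focal = individuals[i]
        return sum(1 for j, t in enumerate(individuals) if j != i
                   and t["sp"] == focal["sp"]
                   and math.hypot(t["x"] - focal["x"], t["y"] - focal["y"]) <= RADIUS)

    def neighbour_effect(individuals):
        """Difference in mean conspecific neighbour count, dead minus alive trees."""
        counts = [conspecific_count(i, individuals) for i in range(len(individuals))]
        dead = [c for c, t in zip(counts, individuals) if not t["alive"]]
        alive = [c for c, t in zip(counts, individuals) if t["alive"]]
        return sum(dead) / len(dead) - sum(alive) / len(alive)

    observed = neighbour_effect(trees)
    null = []
    for _ in range(50):
        species = [t["sp"] for t in trees]
        random.shuffle(species)   # randomize identities, keep positions
        null.append(neighbour_effect([{**t, "sp": s} for t, s in zip(trees, species)]))

    print(f"observed effect {observed:+.3f}, null range "
          f"[{min(null):+.3f}, {max(null):+.3f}]")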