34 results for benchmarks

in CentAUR: Central Archive University of Reading - UK


Relevance: 20.00%

Abstract:

A full assessment of para-virtualization is important, because without knowledge of the various overheads users cannot judge whether using virtualization is a good idea or not. In this paper we are interested in assessing the overheads of running various benchmarks on bare metal as well as under para-virtualization. The idea is to see what the overheads of para-virtualization are, as well as the additional overheads of turning on monitoring and logging. The knowledge gained from assessing these benchmarks on the different systems will help a range of users understand the use of virtualization systems. In this paper we assess the overheads of using Xen, VMware, KVM and Citrix (see Table 1). These virtualization systems are used extensively by cloud users. We use various Netlib benchmarks, which have been developed by the University of Tennessee at Knoxville (UTK) and Oak Ridge National Laboratory (ORNL). In order to assess these virtualization systems, we run the benchmarks on bare metal, then under para-virtualization, and finally with monitoring and logging turned on. The latter is important because users are interested in the Service Level Agreements (SLAs) offered by cloud providers, and logging is a means of auditing the services bought and used from commercial providers. We assess the virtualization systems on three different platforms: the Thamesblue supercomputer, the Hactar cluster and an IBM JS20 blade server (see Table 2), all of which are servers available at the University of Reading. A functional virtualization system is multi-layered and is driven by its privileged components. Virtualization systems can host multiple guest operating systems, each of which runs in its own domain, and the system schedules virtual CPUs and memory within each Virtual Machine (VM) to make the best use of the available resources. Each guest operating system then schedules its applications accordingly. Virtualization can be deployed as full virtualization or para-virtualization. Full virtualization provides a total abstraction of the underlying physical system and creates a new virtual system in which the guest operating systems can run. No modifications are needed to the guest OS or applications; that is, the guest OS and applications are not aware of the virtualized environment and run normally. Para-virtualization requires modification of the guest operating systems that run on the virtual machines; these guest operating systems are aware that they are running on a virtual machine, which enables near-native performance. Both para-virtualization and full virtualization can be deployed across various virtualized systems. Para-virtualization is OS-assisted virtualization, in which some modifications are made to the guest operating system to enable better performance. In this kind of virtualization, the guest operating system is aware that it is running on virtualized hardware rather than on the bare hardware. In para-virtualization, the device drivers in the guest operating system coordinate with the device drivers of the host operating system, reducing the performance overheads. The use of para-virtualization [0] is intended to avoid the bottleneck associated with slow hardware interrupts that exists when full virtualization is employed.
It has been shown [0] that para-virtualization does not impose significant performance overhead in high performance computing, and this in turn has implications for the use of cloud computing for hosting HPC applications. The "apparent" improvement in virtualization has led us to formulate the hypothesis that certain classes of HPC applications should be able to execute in a cloud environment with minimal performance degradation. In order to support this hypothesis, it is first necessary to define exactly what is meant by a "class" of application, and secondly to observe application performance both within a virtual machine and when executing on bare hardware. A further potential complication is the need for cloud service providers to support Service Level Agreements (SLAs), so that system utilisation can be audited.
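The overhead measurement described above reduces to comparing a benchmark's runtime on bare metal against its runtime under a hypervisor, with and without monitoring and logging enabled. The sketch below illustrates that comparison only in outline; the configuration names and timings are invented placeholders, not measurements from the paper.

```python
# Hypothetical example: relative overhead of a benchmark under different
# configurations, computed against the bare-metal baseline.
# The timings below are placeholders, not measurements from the paper.

baseline = {"linpack": 412.0, "blas_dgemm": 98.5}          # bare-metal runtimes (s)
runs = {
    "xen_paravirt":         {"linpack": 430.1, "blas_dgemm": 101.2},
    "xen_paravirt_logging": {"linpack": 441.7, "blas_dgemm": 104.0},
}

def overhead_pct(virtual: float, bare: float) -> float:
    """Percentage slowdown of a virtualized run relative to bare metal."""
    return 100.0 * (virtual - bare) / bare

for config, timings in runs.items():
    for bench, t in timings.items():
        print(f"{config:22s} {bench:12s} overhead = {overhead_pct(t, baseline[bench]):5.1f}%")
```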

Relevance: 20.00%

Abstract:

This paper reviews nine software packages with particular reference to their GARCH model estimation accuracy when judged against a respected benchmark. We consider the numerical consistency of GARCH and EGARCH estimation and forecasting. Our results have a number of implications for published research and future software development. Finally, we argue that the establishment of benchmarks for other standard non-linear models is long overdue.
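The kind of accuracy check described here is typically done by comparing a package's estimated GARCH coefficients against trusted benchmark values, for example via the log relative error (the number of agreeing significant digits). The sketch below is a generic illustration of that comparison; the coefficient values are placeholders, not the benchmark used in the paper.

```python
import math

# Illustrative accuracy check: compare one package's GARCH(1,1) estimates
# against benchmark coefficients. All numbers are placeholders.

benchmark = {"omega": 0.0108, "alpha": 0.153, "beta": 0.806}   # trusted reference fit
package   = {"omega": 0.0107, "alpha": 0.154, "beta": 0.805}   # estimates from one package

def log_relative_error(estimate: float, reference: float) -> float:
    """Approximate number of agreeing significant digits."""
    if estimate == reference:
        return float("inf")
    return -math.log10(abs(estimate - reference) / abs(reference))

for name in benchmark:
    lre = log_relative_error(package[name], benchmark[name])
    print(f"{name}: ~{lre:.1f} agreeing significant digits")
```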

Relevance: 20.00%

Abstract:

The modern built environment has become more complex in terms of building types, environmental systems and use profiles. This complexity causes difficulties in optimising building energy design. In these circumstances, introducing a set of prototype reference buildings, or so-called benchmark buildings, that are able to represent all or the majority of the UK building stock may be useful for examining the impact of national energy policies on building energy consumption. This study proposes a set of reference office buildings for England and Wales based on information collected from the Non-Domestic Building Stock (NDBS) project and an intensive review of existing building benchmarks. The proposed benchmark comprises 10 prototypical reference buildings which, in terms of built form and size, represent 95% of office buildings in England and Wales. This benchmark provides a platform for those involved in building energy simulation to evaluate energy-efficiency measures, and for policy-makers to assess the influence of different building energy policies.

Relevance: 20.00%

Abstract:

The skill of a forecast can be assessed by comparing the relative proximity of both the forecast and a benchmark to the observations. Example benchmarks include climatology or a naïve forecast. Hydrological ensemble prediction systems (HEPS) are currently transforming the hydrological forecasting environment, but in this new field there is little information to guide researchers and operational forecasters on how benchmarks can best be used to evaluate their probabilistic forecasts. In this study, it is identified that the calculated forecast skill can vary depending on the benchmark selected, and that the selection of a benchmark for determining forecasting system skill is sensitive to a number of hydrological and system factors. A benchmark intercomparison experiment is then undertaken using the continuous ranked probability score (CRPS), a reference forecasting system and a suite of 23 different methods to derive benchmarks. The benchmarks are assessed within the operational set-up of the European Flood Awareness System (EFAS) to determine those that are ‘toughest to beat’ and so give the most robust discrimination of forecast skill, particularly for the spatial average fields that EFAS relies upon. Evaluating against an observed discharge proxy, the benchmark that has the most utility for EFAS and avoids the most naïve skill across different hydrological situations is found to be meteorological persistency. This benchmark uses the latest meteorological observations of precipitation and temperature to drive the hydrological model. Hydrological long-term average benchmarks, which are currently used in EFAS, are very easily beaten by the forecasting system and their use produces considerable naïve skill. When decomposed into seasons, the advanced meteorological benchmarks, which make use of meteorological observations from the past 20 years at the same calendar date, have the most skill discrimination. They are also good at discriminating skill in low flows and for all catchment sizes. Simpler meteorological benchmarks are particularly useful for high flows. Recommendations for EFAS are to move to routine use of meteorological persistency, an advanced meteorological benchmark and a simple meteorological benchmark in order to provide a robust evaluation of forecast skill. This work provides the first comprehensive evidence on how benchmarks can be used in the evaluation of skill in probabilistic hydrological forecasts and which benchmarks are most useful for skill discrimination and the avoidance of naïve skill in a large-scale HEPS. It is recommended that all HEPS use the evidence and methodology provided here to evaluate which benchmarks to employ, so that forecasters can trust their skill evaluation and have confidence that their forecasts are indeed better.
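Skill relative to a benchmark is commonly summarised with a skill score of the form CRPSS = 1 - CRPS_forecast / CRPS_benchmark, so that a tougher benchmark (lower CRPS) makes positive skill harder to achieve. The sketch below illustrates this calculation with synthetic data; it is not the EFAS implementation, and the benchmark shown is only a stand-in for the 23 methods compared in the study.

```python
import numpy as np

# Minimal sketch (not the EFAS implementation): skill of an ensemble forecast
# measured with the CRPS relative to a chosen benchmark forecast.
# CRPSS > 0 means the forecast beats the benchmark; the data here are synthetic.

def crps_ensemble(members: np.ndarray, obs: float) -> float:
    """Empirical CRPS of an ensemble forecast for a single observation."""
    term1 = np.mean(np.abs(members - obs))
    term2 = 0.5 * np.mean(np.abs(members[:, None] - members[None, :]))
    return term1 - term2

rng = np.random.default_rng(0)
obs = 120.0                                   # observed (proxy) discharge, m3/s
forecast  = rng.normal(118.0, 8.0, size=51)   # HEPS ensemble members
benchmark = rng.normal(100.0, 25.0, size=51)  # e.g. a climatology-style benchmark

crps_f = crps_ensemble(forecast, obs)
crps_b = crps_ensemble(benchmark, obs)
crpss = 1.0 - crps_f / crps_b                 # skill relative to the benchmark
print(f"CRPS forecast={crps_f:.2f}, benchmark={crps_b:.2f}, CRPSS={crpss:.2f}")
```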

Relevance: 10.00%

Abstract:

Competency management is a very important part of a well-functioning organisation. Unfortunately, competency descriptions are not uniformly specified or defined across national, sectoral or organisational borders, leading to an opaque competency description market with a multitude of competency frameworks and competency benchmarks. An ontology is a formalised description of a domain, which enables automated reasoning engines to be built that, by utilising the interrelations between entities, can make “intelligent” choices in different situations within the domain. By introducing formalised competency ontologies, automated tools such as skill gap analysis, training suggestion generation, and job search and recruitment can be developed, which compare and contrast different competency descriptions at the semantic level. The major problem with defining a common formalised ontology for competencies is that there are so many viewpoints on competencies and competency frameworks. Work within the TRACE project has focused on finding common trends within different competency frameworks in order to allow an intermediate competency description to be made, which other frameworks can reference. This research has shown that competencies can be divided into “knowledge”, “skills” and what we call “others”. An ontology has been created on this basis, with a simple structure of different “kinds” of “knowledges” and “skills” using semantic interrelations to define the basic semantic structure of the ontology. A prototype tool for performing skill gap analysis has been developed. Personal profiles can be produced using the tool, and a skill gap analysis is performed against a desired competency profile by using an ontologically based inference engine, which is able to list the closest fit and possible proficiency gaps.
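A skill gap analysis of the kind described amounts to comparing the competencies (and proficiency levels) in a personal profile against those required by a desired profile and listing the shortfalls. The sketch below is a deliberately flat illustration of that comparison; the real tool reasons over an ontology with semantic interrelations, and the competency names and levels here are invented.

```python
# Purely illustrative sketch of a skill gap check: compare a personal profile
# against a desired competency profile and list shortfalls. The TRACE tool
# reasons over an ontology with semantic interrelations; the flat dictionaries
# and competency names below are invented for illustration.

desired_profile = {"java_programming": 4, "relational_databases": 3, "team_leadership": 2}
personal_profile = {"java_programming": 3, "relational_databases": 3}

def skill_gaps(desired: dict, personal: dict) -> dict:
    """Return each competency where the person falls short, with the gap size."""
    gaps = {}
    for competency, required_level in desired.items():
        held = personal.get(competency, 0)
        if held < required_level:
            gaps[competency] = required_level - held
    return gaps

print(skill_gaps(desired_profile, personal_profile))
# {'java_programming': 1, 'team_leadership': 2}
```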

Relevance: 10.00%

Abstract:

Fieldwork is regarded as an important component of many bioscience degree programmes. QAA benchmark statements refer explicitly to the importance of fieldwork, although they give no indication of the amount of field provision expected. Previous research has highlighted the importance of fieldwork to the learning of both subject-specific and transferable skills. However, it is unclear how the amount and type of fieldwork currently offered are being affected by the recent expansion in student numbers and current funding constraints. Here we review the contemporary literature and report the results of a questionnaire completed by bioscience tutors across 33 UK institutions. The results suggest, perhaps contrary to anecdotal evidence, that the amount of fieldwork being undertaken by students is not in decline and that, on the whole, programmes contain reasonable amounts of fieldwork. The majority of programmes involved UK-based fieldwork, but a number of programmes also offered ‘exotic’ overseas fieldwork, which was considered important in terms of student recruitment as well as for exposing students to a diversity of field learning environments. Tutors were very clear about the benefits of fieldwork and the need to be proactive in maintaining its provision.

Relevance: 10.00%

Abstract:

Purpose – The paper addresses the practical problems which emerge when attempting to apply longitudinal approaches to the assessment of property depreciation using valuation-based data. These problems relate to inconsistent valuation regimes and the difficulties in finding appropriate benchmarks. Design/methodology/approach – The paper adopts a case study of seven major office locations around Europe and attempts to determine ten-year rental value depreciation rates based on a longitudinal approach using IPD, CBRE and BNP Paribas datasets. Findings – The depreciation rates range from a 5 per cent per annum depreciation rate in Frankfurt to a 2 per cent per annum appreciation rate in Stockholm. The results are discussed in the context of the difficulties in applying this method with inconsistent data. Research limitations/implications – The paper has methodological implications for measuring property investment depreciation and provides an example of the problems in adopting theoretically sound approaches with inconsistent information. Practical implications – Valuations play an important role in performance measurement and cross-border investment decision making; therefore, knowledge of the inconsistency of valuation practice aids decision making and informs any application of valuation-based data in deriving depreciation rates. Originality/value – The paper provides new insights into the use of property market valuation data in a cross-border context, insights that had previously been anecdotal and unproven in nature.
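A longitudinal, benchmark-relative depreciation rate over a ten-year window is often expressed as a compound annual rate of decline of the aged asset's rental value relative to the benchmark, as in the generic formulation below. This is offered only as an illustration of the calculation; it is not necessarily the exact specification adopted in the paper.

```latex
% Generic ten-year, benchmark-relative depreciation rate (illustrative only):
% R_{a,t} is the rental value of the aged asset and R_{b,t} that of the
% benchmark at time t; a negative d indicates appreciation.
d = 1 - \left( \frac{R_{a,10}/R_{b,10}}{R_{a,0}/R_{b,0}} \right)^{1/10}
```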

Relevance: 10.00%

Abstract:

The orthodox approach to incentivising Demand Side Participation (DSP) programs is that utility losses from capital, installation and planning costs should be recovered under financial incentive mechanisms which aim to ensure that utilities have the right incentives to implement DSP activities. The recent national smart metering roll-out in the UK implies that this approach needs to be reassessed, since utilities will recover the capital costs associated with DSP technology through bills. This paper introduces a reward and penalty mechanism focusing on residential users. DSP planning costs are recovered through payments from those consumers who do not react to peak signals, while those consumers who do react are rewarded by paying lower bills. Because real-time incentives to residential consumers tend to fail due to the negligible amounts associated with the net gains (and losses) of individual users, in the proposed mechanism the regulator determines benchmarks which are matched against responses to signals and caps the level of rewards/penalties to avoid market distortions. The paper presents an overview of existing financial incentive mechanisms for DSP; introduces the reward/penalty mechanism aimed at fostering DSP under the hypothesis of a smart metering roll-out; considers the costs faced by utilities for DSP programs; assesses linear rate effects and value changes; introduces compensatory weights for those consumers who have physical or financial impediments; and presents findings based on simulation runs at three discrete levels of elasticity.
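The reward/penalty settlement described can be pictured as comparing each consumer's peak-period reduction against the regulator's benchmark, pricing the difference, and capping the result. The sketch below is an invented illustration of that logic, not the tariff design or parameter values from the paper.

```python
# Illustrative sketch of a capped reward/penalty settlement: each consumer's
# peak-period reduction is compared against a regulator-set benchmark, and the
# resulting reward (or penalty) is capped to avoid market distortions.
# All figures are invented; this is not the mechanism's calibration from the paper.

BENCHMARK_REDUCTION_KWH = 1.5   # expected peak reduction per consumer
RATE_PER_KWH = 0.20             # reward/penalty rate, currency units per kWh
CAP = 0.40                      # maximum reward or penalty per settlement period

def settlement(actual_reduction_kwh: float) -> float:
    """Positive = reward (bill credit), negative = penalty (bill surcharge)."""
    raw = (actual_reduction_kwh - BENCHMARK_REDUCTION_KWH) * RATE_PER_KWH
    return max(-CAP, min(CAP, raw))

for reduction in (0.0, 1.5, 3.0, 6.0):
    print(f"reduction {reduction:.1f} kWh -> settlement {settlement(reduction):+.2f}")
```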

Relevance: 10.00%

Abstract:

Aircraft Maintenance, Repair and Overhaul (MRO) agencies rely largely on raw-data-based quotation systems to select the best suppliers for their customers (airlines). Data quantity and quality become a key issue in determining the success of an MRO job, since cost and quality benchmarks need to be achieved. This paper introduces a data mining approach to create an MRO quotation system that enhances data quantity and data quality, and enables significantly more precise MRO job quotations. Regular expressions were utilized to analyse descriptive textual feedback (i.e. engineers’ reports) in order to extract more referable, highly normalised data for job quotation. A text-mining-based key influencer analysis function enables the user to proactively select sub-parts, defects and possible solutions to make queries more accurate. Implementation results show that the system would improve cost quotation in 40% of MRO jobs and would reduce service cost without causing a drop in service quality.
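The regular-expression step described turns free-text engineer reports into normalised fields that can feed the quotation system. The sketch below shows the general idea; the report wording, field names and patterns are invented examples rather than the extraction rules actually used.

```python
import re

# Illustrative sketch of regular-expression extraction from a free-text
# engineer report, turning descriptive feedback into normalised fields.
# The report wording, part codes and patterns are invented examples.

report = "Inspected fuel pump P/N 734-221; found worn seal, replaced seal and retested OK."

patterns = {
    "part_number": re.compile(r"P/N\s*([0-9-]+)"),
    "defect":      re.compile(r"found\s+([a-z ]+?)(?:,|\.)", re.IGNORECASE),
    "action":      re.compile(r"(replaced\s+[a-z ]+?)(?:\s+and|\.|,)", re.IGNORECASE),
}

record = {}
for field, rx in patterns.items():
    match = rx.search(report)
    record[field] = match.group(1).strip() if match else None

print(record)
# {'part_number': '734-221', 'defect': 'worn seal', 'action': 'replaced seal'}
```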

Relevance: 10.00%

Abstract:

While style analysis has been studied extensively in equity markets, applications of this valuable tool for measuring and benchmarking performance and risk in a real estate context are still relatively new. Most previous real estate studies on this topic have identified three investment categories (rather than styles): sectors, administrative regions and economic regions. However, the low explanatory power reveals the need to extend this analysis to other investment styles. We identify four main real estate investment styles and apply a multivariate model to randomly generated portfolios to test the significance of each style in explaining portfolio returns. Results show that alpha performance is significantly reduced when we account for the new investment styles, with small vs. big properties being the dominant one. Secondly, we find that the probability of obtaining alpha performance depends on the actual exposure of funds to the style factors. Finally, we find that both alpha and systematic risk levels are linked to the actual characteristics of portfolios. Our overall results suggest that it would be beneficial for real estate fund managers to use these style factors to set benchmarks and to analyse portfolio returns.
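Returns-based style analysis of this kind regresses portfolio returns on style-factor returns, reading off alpha as the intercept and the style exposures as the slope coefficients. The sketch below illustrates that regression on synthetic data; the factor names are placeholders and do not correspond to the four styles identified in the paper.

```python
import numpy as np

# Illustrative sketch of a returns-based style regression: portfolio returns are
# regressed on style-factor returns and alpha is read off as the intercept.
# The factor set and synthetic data are placeholders, not the paper's factors.

rng = np.random.default_rng(1)
n_periods = 120
factors = rng.normal(0.005, 0.02, size=(n_periods, 4))   # four placeholder style factors
true_beta = np.array([0.6, 0.2, 0.1, 0.1])
portfolio = factors @ true_beta + 0.001 + rng.normal(0, 0.005, n_periods)

X = np.column_stack([np.ones(n_periods), factors])        # intercept + style factors
coef, *_ = np.linalg.lstsq(X, portfolio, rcond=None)
alpha, betas = coef[0], coef[1:]
print(f"alpha per period: {alpha:.4f}")
print("style exposures:", np.round(betas, 3))
```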

Relevance: 10.00%

Abstract:

We consider the general response theory recently proposed by Ruelle for describing the impact of small perturbations to the non-equilibrium steady states resulting from Axiom A dynamical systems. We show that the causality of the response functions entails the possibility of writing a set of Kramers-Kronig (K-K) relations for the corresponding susceptibilities at all orders of nonlinearity. Nonetheless, only a special class of directly observable susceptibilities obey K-K relations. Specific results are provided for the case of arbitrary-order harmonic response, which allows for a very comprehensive K-K analysis and the establishment of sum rules connecting the asymptotic behavior of the harmonic generation susceptibility to the short-time response of the perturbed system. These results place previous findings obtained for optical systems and simple mechanical models in a more general theoretical framework, and shed light on the very general impact of considering the principle of causality for testing self-consistency: the described dispersion relations constitute unavoidable benchmarks that any experimental and model-generated dataset must obey. The theory exposed in the present paper is dual to the time-dependent theory of perturbations to equilibrium states and to non-equilibrium steady states, and has in principle a similar range of applicability and similar limitations. In order to connect the equilibrium and the non-equilibrium steady state cases, we show how to rewrite the classical response theory by Kubo so that response functions formally identical to those proposed by Ruelle, apart from the measure involved in the phase space integration, are obtained. These results, taking into account the chaotic hypothesis by Gallavotti and Cohen, might be relevant in several fields, including climate research. In particular, whereas the fluctuation-dissipation theorem does not work for non-equilibrium systems, because of the non-equivalence between internal and external fluctuations, K-K relations might be robust tools for the definition of a self-consistent theory of climate change.
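For orientation, the first-order Kramers-Kronig relations that follow from the causality of a linear response function take the standard textbook form below; the paper's contribution lies in extending this machinery to higher-order susceptibilities of Axiom A systems, which is not captured by these formulas.

```latex
% Standard first-order Kramers-Kronig relations for a causal susceptibility
% \chi(\omega); P denotes the Cauchy principal value (textbook form, given
% here only as background).
\operatorname{Re}\chi(\omega) = \frac{2}{\pi}\,P\!\int_{0}^{\infty}
  \frac{\omega'\,\operatorname{Im}\chi(\omega')}{\omega'^{2}-\omega^{2}}\,d\omega',
\qquad
\operatorname{Im}\chi(\omega) = -\frac{2}{\pi}\,P\!\int_{0}^{\infty}
  \frac{\omega\,\operatorname{Re}\chi(\omega')}{\omega'^{2}-\omega^{2}}\,d\omega' .
```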

Relevance: 10.00%

Abstract:

What constitutes a baseline level of success for protein fold recognition methods? As fold recognition benchmarks are often presented without any thought to the results that might be expected from a purely random set of predictions, an analysis of fold recognition baselines is long overdue. Given varying amounts of basic information about a protein, ranging from the length of the sequence to knowledge of its secondary structure, to what extent can the fold be determined by intelligent guesswork? Can simple methods that make use of secondary structure information assign folds more accurately than purely random methods, and could these methods be used to construct viable hierarchical classifications?
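A purely random baseline of the sort discussed can be estimated from the fold population sizes alone: guessing uniformly gives one over the number of folds, while guessing in proportion to fold frequencies gives the sum of squared frequencies. The sketch below works through that arithmetic with invented fold counts; a real calculation would use a classification such as SCOP or CATH.

```python
# Back-of-envelope random baselines for fold assignment, from fold population
# sizes alone. The fold counts are invented placeholders.

fold_counts = {"fold_A": 400, "fold_B": 150, "fold_C": 50, "fold_D": 10}
total = sum(fold_counts.values())

uniform_baseline = 1.0 / len(fold_counts)                        # pick any fold uniformly
proportional_baseline = sum((n / total) ** 2 for n in fold_counts.values())
# proportional: P(true fold = f) * P(guess = f), summed over folds

print(f"uniform random baseline:      {uniform_baseline:.3f}")
print(f"population-weighted baseline: {proportional_baseline:.3f}")
```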

Relevance: 10.00%

Abstract:

An important test of the quality of a computational model is its ability to reproduce standard test cases or benchmarks. For steady open-channel flow based on the Saint Venant equations, some benchmarks exist for simple geometries from the work of Bresse, Bakhmeteff and Chow, but these are tabulated in the form of standard integrals. This paper provides benchmark solutions for a wider range of cases, which may have a non-prismatic cross-section, non-uniform bed slope, and transitions between subcritical and supercritical flow. This makes it possible to assess the underlying quality of computational algorithms in more difficult cases, including those with hydraulic jumps. Several new test cases are given in detail and the performance of a commercial steady flow package is evaluated against two of them. The test cases may also be used as benchmarks both for steady flow models and for unsteady flow models in the steady limit.
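The steady cases in question are governed by the gradually varied flow form of the Saint Venant equations, whose standard textbook statement is reproduced below for orientation; the paper's benchmark solutions extend beyond this simple setting to non-prismatic sections and transcritical transitions.

```latex
% Gradually varied flow equation for steady open-channel flow (textbook form):
% h is flow depth, x distance along the channel, S_0 the bed slope,
% S_f the friction slope and Fr the Froude number.
\frac{dh}{dx} = \frac{S_0 - S_f}{1 - \mathrm{Fr}^{2}}
```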