46 results for Parameters of performance
Abstract:
This paper investigates the extent to which office activity contributes to travel-related CO2 emissions. Using ‘end-user’ figures, travel accounts for 32% of UK CO2 emissions (Commission for Integrated Transport, 2007), and commuting and business travel account for a fifth of transport-related CO2 emissions, equating to 6.4% of total UK emissions (Building Research Establishment, 2000). Figures from the Department for Transport (2006) show that 70% of commuting trips were made by car, accounting for 73% of all commuting miles travelled. In assessing the environmental performance of an office building, the paper questions whether commuting and business travel-related CO2 emissions are being properly assessed. For example, are office buildings in locations that are easily accessible by public transport being sufficiently rewarded? The de facto method for assessing the environmental performance of office buildings in the UK is the Building Research Establishment’s Environmental Assessment Method (BREEAM). Using data for Bristol, this paper examines firstly whether BREEAM places sufficient weight on travel-related CO2 emissions in comparison with building operation-related CO2 emissions, and secondly whether the methodology for assigning credits for travel-related CO2 emission efficiency is capable of discerning intra-urban differences in location, such as city centre and out-of-town. The results show that, despite CO2 emissions per worker from building operation and travel being comparable, there is a substantial difference in the credit weighting allocated to each. Under the current version of BREEAM for offices, a maximum of only 4% of the available credits can be awarded for ensuring that the office location is environmentally sustainable. The results also show that all locations within the established city centre of Bristol receive maximum BREEAM credits. Given the parameters of the test, there is little to distinguish one city centre location from another, and out of town only one office location receives any credits. It would appear from these results that the assessment method is not able to discern subtle differences in the sustainability of office locations.
Abstract:
Decision theory is the study of models of judgement involved in, and leading to, deliberate and (usually) rational choice. In real estate investment there are normative models for the allocation of assets. These asset allocation models suggest an optimum allocation between the respective asset classes based on the investors’ judgements of performance and risk. Real estate is selected, like other assets, on the basis of some criterion, most commonly its marginal contribution to the production of a mean-variance efficient multi-asset portfolio, subject to the investor’s objectives and capital rationing constraints. However, decisions are made relative to current expectations and current business constraints. Whilst a decision maker may believe in the optimum exposure levels dictated by an asset allocation model, the final decision may well be influenced by factors outside the parameters of the mathematical model. This paper discusses investors’ perceptions of and attitudes toward real estate, and highlights the important difference between theoretical exposure levels and pragmatic business considerations. It develops a model to identify the “soft” parameters in decision making which influence the optimal allocation for the asset class. This “soft” information may relate to behavioural issues such as the tendency to mirror competitors, a desire to meet weight-of-money objectives, a desire to retain the status quo, and many other non-financial considerations. The paper aims to establish the place of property in multi-asset portfolios in the UK and to examine the asset allocation process in practice, with a view to understanding the decision-making process, and to examine investors’ perceptions through an historic analysis of market expectations, a comparison with historic data, and an analysis of actual performance.
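For context, the normative benchmark referred to above is the mean-variance allocation problem; in its standard textbook form (a generic statement, not the paper's own model) it reads

$\max_{w} \; w'\mu - \frac{\lambda}{2} w'\Sigma w$ subject to $w'\mathbf{1} = 1$ (and typically $w \ge 0$),

where $\mu$ is the vector of expected asset returns, $\Sigma$ the covariance matrix and $\lambda$ the investor's risk-aversion coefficient. The “soft” parameters discussed in the paper operate outside this optimisation.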
Abstract:
Using grand canonical Monte Carlo simulation we show, for the first time, the influence of carbon porosity and surface oxidation on the parameters of the Dubinin-Astakhov (DA) adsorption isotherm equation. We conclude that upon carbon surface oxidation, adsorption decreases for all carbons studied. Moreover, the parameters of the DA model depend on the number of surface oxygen groups. Consequently, in the case of carbons containing surface polar groups, SF6 adsorption isotherm data cannot be used for characterization of the porosity.
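For reference, the Dubinin-Astakhov equation in its usual form (standard in the adsorption literature; the notation here is not taken from the paper itself) is

$W = W_0 \exp\left[-\left(\frac{A}{\beta E_0}\right)^{n}\right]$, with $A = RT \ln(p_0/p)$,

where $W$ is the volume adsorbed at relative pressure $p/p_0$, $W_0$ the limiting micropore volume, $E_0$ the characteristic adsorption energy, $\beta$ the affinity coefficient and $n$ the heterogeneity exponent; $W_0$, $E_0$ and $n$ are the fitted parameters that the simulations show to depend on the number of surface oxygen groups.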
Abstract:
BACKGROUND: Strawberry (Fragaria × ananassa Duchesne var. Elsanta) plants were grown in polytunnels covered with three polythene films that transmitted varying levels of ultraviolet (UV) light. Fruit were harvested under near-commercial conditions and quality and yield were measured. During ripening, changes in the colour parameters of individual fruit were monitored, and the accuracy of using surface colour to predict other quality parameters was determined by analysing the correlation between colour and quality parameters within UV treatments. RESULTS: Higher exposure to UV during growth resulted in the fruit becoming darker at harvest and developing surface colour more quickly; fruit were also firmer at harvest, but shelf life was not consistently affected by the UV regime. Surface colour measurements were poorly correlated to firmness, shelf life or total phenolics, anthocyanins and ellagic acid contents. CONCLUSION: Although surface colour of strawberry fruits was affected by the UV regime during growth, and this parameter is an important factor in consumer perception, we concluded that the surface colour at the time of harvest was, contrary to consumer expectations, a poor indicator of firmness, potential shelf life or anthocyanin content. Copyright © 2011 Society of Chemical Industry
Abstract:
The evaluation of investment fund performance has been one of the main developments of modern portfolio theory. Most studies employ the technique developed by Jensen (1968), which compares a particular fund's returns to a benchmark portfolio of equal risk. However, the standard measures of fund manager performance are known to suffer from a number of problems in practice. In particular, previous studies implicitly assume that the risk level of the portfolio is stationary through the evaluation period; that is, unconditional measures of performance do not account for the fact that risk and expected returns may vary with the state of the economy. Therefore many of the problems encountered in previous performance studies reflect the inability of traditional measures to handle the dynamic behaviour of returns. As a consequence, Ferson and Schadt (1996) suggest an approach called conditional performance evaluation, which is designed to address this problem. This paper applies such a conditional measure of performance to a sample of 27 UK property funds over the period 1987-1998. The results suggest that once the time-varying nature of the funds' betas is corrected for, by the addition of the market indicators, average fund performance shows an improvement over that reported by the traditional methods of analysis.
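For readers unfamiliar with the two approaches, the unconditional (Jensen, 1968) measure is the intercept $\alpha_p$ of the regression

$r_{p,t} - r_{f,t} = \alpha_p + \beta_p (r_{m,t} - r_{f,t}) + \varepsilon_{p,t}$,

while the conditional approach of Ferson and Schadt (1996) lets beta vary with a vector $z_{t-1}$ of lagged public information variables (the “market indicators” above), $\beta_{p,t} = b_0 + B' z_{t-1}$, so that interaction terms $z_{t-1}(r_{m,t} - r_{f,t})$ enter the regression alongside the market excess return. These are the standard forms; the paper's exact choice of instruments may differ.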
Abstract:
The IPD Annual Index is the largest and most comprehensive real estate market index available in the UK. Such coverage, however, inevitably leads to delays in publication. In contrast, there are a number of quarterly and monthly indices which are published within days of the year end but which lack the same coverage in terms of size and number of properties. This paper analyses these smaller but more timely indices to see whether they can be used to predict the performance of the IPD Annual Index. Using a number of measures of forecasting accuracy, it is shown that the smaller indices provide unbiased and efficient predictions of the IPD Annual Index and significantly outperform a naive no-change model, although no one index performs significantly better than the others. The more timely indices do not, however, perfectly track the IPD Annual Index, so any short-run predictions of performance will be subject to a degree of error. Nevertheless, the more timely indices, although lacking authoritative coverage, provide a valuable service to investors, giving good estimates of real estate performance well before the publication of the IPD Annual Index.
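As an illustration of the kind of test involved (a standard check of forecast unbiasedness and efficiency, not necessarily the paper's exact procedure), a Mincer-Zarnowitz regression can be run in a few lines of Python; the data below are synthetic stand-ins, not IPD figures:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
# Synthetic stand-ins: 'true' annual returns and a timely index tracking them with error.
ipd_annual = rng.normal(0.08, 0.10, 30)
timely = ipd_annual + rng.normal(0.0, 0.02, 30)

# Mincer-Zarnowitz regression of the actual series on the predictor.
fit = sm.OLS(ipd_annual, sm.add_constant(timely)).fit()
# Unbiased, efficient prediction requires intercept = 0 and slope = 1 jointly.
print(fit.params)
print(fit.f_test("const = 0, x1 = 1"))

# Benchmark against the naive no-change model (previous year's value).
print(np.mean((ipd_annual[1:] - timely[1:]) ** 2),       # timely-index MSE
      np.mean((ipd_annual[1:] - ipd_annual[:-1]) ** 2))  # no-change MSE
```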
Abstract:
Many different performance measures have been developed to evaluate field predictions in meteorology. However, a researcher or practitioner encountering a new or unfamiliar measure may have difficulty in interpreting its results, which may lead them to avoid new measures and rely on those that are familiar. In the context of evaluating forecasts of extreme events for hydrological applications, this article aims to promote the use of a range of performance measures. Several types of performance measure are introduced in order to demonstrate a six-step approach to tackling a new measure. Using the example of the European Centre for Medium-Range Weather Forecasts (ECMWF) ensemble precipitation predictions for the Danube floods of July and August 2002, it is shown how to apply new performance measures with this approach and how to choose between different performance measures based on their suitability for the task at hand. Copyright © 2008 Royal Meteorological Society
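As one concrete example of such a measure (chosen here for illustration; the article's own selection of measures is not reproduced), the Brier score for probabilistic forecasts of a binary event, such as exceedance of a flood threshold, is

$BS = \frac{1}{N}\sum_{t=1}^{N}(p_t - o_t)^2$,

where $p_t$ is the forecast probability (for an ensemble, the fraction of members predicting the event) and $o_t \in \{0,1\}$ is the observed outcome; lower is better, and comparison with a reference forecast yields the skill score $BSS = 1 - BS/BS_{\text{ref}}$.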
Abstract:
This paper reports on the progress made by a group of fourteen 11-year-old children who had originally been identified as precocious readers before they started primary school at the age of 5 years. The data enable comparisons to be made with the performance of the children when they were younger, so that a six-year longitudinal analysis can be made. The children who began school as precocious readers continued to make progress in reading accuracy, rate and comprehension, thereby maintaining their superior performance relative to a comparison group. However, their progress appeared to follow the same developmental trajectory as that of the comparison group. Measures of phonological awareness showed that there were long-term, stable individual differences which correlated with all measures of reading. The children who were reading precociously early showed significantly higher levels of phonological awareness than the comparison children. In addition, they showed the same levels of performance on this task as a further group of high-achieving young adults. A positive effect of being able to read at a precociously early age was identified in the significantly higher levels of receptive vocabulary found amongst these children. The analyses indicated that rises in receptive vocabulary resulted from reading performance rather than the other way round.
Abstract:
There is a current need to constrain the parameters of gravity wave drag (GWD) schemes in climate models using observational information instead of tuning them subjectively. In this work, an inverse technique is developed using data assimilation principles to estimate gravity wave parameters. Because most GWD schemes assume instantaneous vertical propagation of gravity waves within a column, observations in a single column can be used to formulate a one-dimensional assimilation problem to estimate the unknown parameters. We define a cost function that measures the differences between the unresolved drag inferred from observations (referred to here as the ‘observed’ GWD) and the GWD calculated with a parametrisation scheme. The geometry of the cost function presents some difficulties, including multiple minima and ill-conditioning because of the non-independence of the gravity wave parameters. To overcome these difficulties we propose a genetic algorithm to minimize the cost function, which provides a robust parameter estimation over a broad range of prescribed ‘true’ parameters. When real experiments using an independent estimate of the ‘observed’ GWD are performed, physically unrealistic values of the parameters can result due to the non-independence of the parameters. However, by constraining one of the parameters to lie within a physically realistic range, this degeneracy is broken and the other parameters are also found to lie within physically realistic ranges. This argues for the essential physical self-consistency of the gravity wave scheme. A much better fit to the observed GWD at high latitudes is obtained when the parameters are allowed to vary with latitude. However, a close fit can be obtained either in the upper or the lower part of the profiles, but not in both at the same time. This result is a consequence of assuming an isotropic launch spectrum. The changes of sign in the GWD found in the tropical lower stratosphere, which are associated with part of the quasi-biennial oscillation forcing, cannot be captured by the parametrisation with optimal parameters.
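A minimal sketch of the estimation described above, in Python, using SciPy's differential evolution (a population-based global optimizer related to, but not the same as, the genetic algorithm used in the paper) and a purely hypothetical two-parameter drag scheme:

```python
import numpy as np
from scipy.optimize import differential_evolution

# Single-column setup: model levels and a synthetic 'observed' GWD profile.
z = np.linspace(20e3, 60e3, 40)                       # height (m)
observed_gwd = 1e-5 * np.exp(-(z - 45e3) ** 2 / 2e7)  # drag (m s^-2), synthetic

def parametrised_gwd(params, z):
    """Hypothetical two-parameter scheme: launch amplitude and breaking height."""
    amplitude, z_break = params
    return amplitude * np.exp(-(z - z_break) ** 2 / 2e7)

def cost(params):
    # Sum-of-squares misfit between the 'observed' and parametrised drag.
    return np.sum((observed_gwd - parametrised_gwd(params, z)) ** 2)

# Bounds confine each parameter to a physically realistic range, which is also
# how the degeneracy between non-independent parameters is broken.
result = differential_evolution(cost, bounds=[(1e-6, 1e-4), (30e3, 55e3)], seed=1)
print(result.x)  # should recover roughly (1e-5, 45e3)
```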
Abstract:
In wireless communication systems, all in-phase and quadrature-phase (I/Q) signal processing receivers face the problem of I/Q imbalance. In this paper, we investigate the effect of I/Q imbalance on the performance of multiple-input multiple-output (MIMO) maximal ratio combining (MRC) systems that perform the combining at the radio frequency (RF) level, thereby requiring only one RF chain. In order to perform the MIMO MRC, we propose a channel estimation algorithm that accounts for the I/Q imbalance. Moreover, a compensation algorithm for the I/Q imbalance in MIMO MRC systems is proposed, which first employs the least-squares (LS) rule to estimate the coefficients of the channel gain matrix, beamforming and combining weight vectors, and parameters of I/Q imbalance jointly, and then makes use of the received signal together with its conjugation to detect the transmitted signal. The performance of the MIMO MRC system under study is evaluated in terms of average symbol error probability (SEP), outage probability and ergodic capacity, which are derived considering transmission over Rayleigh fading channels. Numerical results are provided and show that the proposed compensation algorithm can efficiently mitigate the effect of I/Q imbalance.
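For context, a widely used baseband model of receiver I/Q imbalance (standard in the literature; the paper's exact formulation is not given in the abstract) writes the impaired signal as a widely linear combination of the ideal signal $x$ and its conjugate:

$y = \mu x + \nu x^{*}$, with $\mu = (1 + g e^{-j\phi})/2$ and $\nu = (1 - g e^{j\phi})/2$,

where $g$ and $\phi$ are the gain and phase mismatches between the I and Q branches (perfect matching, $g = 1$ and $\phi = 0$, gives $\mu = 1$, $\nu = 0$). The presence of the conjugate term is why the detector described above uses the received signal together with its conjugation.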
Abstract:
The nonlinearity of high-power amplifiers (HPAs) has a crucial effect on the performance of multiple-input-multiple-output (MIMO) systems. In this paper, we investigate the performance of MIMO orthogonal space-time block coding (OSTBC) systems in the presence of nonlinear HPAs. Specifically, we propose a constellation-based compensation method for HPA nonlinearity in the case with knowledge of the HPA parameters at the transmitter and receiver, where the constellation and decision regions of the distorted transmitted signal are derived in advance. Furthermore, in the scenario without knowledge of the HPA parameters, a sequential Monte Carlo (SMC)-based compensation method for the HPA nonlinearity is proposed, which first estimates the channel-gain matrix by means of the SMC method and then uses the SMC-based algorithm to detect the desired signal. The performance of the MIMO-OSTBC system under study is evaluated in terms of average symbol error probability (SEP), total degradation (TD) and system capacity, in uncorrelated Nakagami-m fading channels. Numerical and simulation results are provided and show the effects on performance of several system parameters, such as the parameters of the HPA model, output back-off (OBO) of nonlinear HPA, numbers of transmit and receive antennas, modulation order of quadrature amplitude modulation (QAM), and number of SMC samples. In particular, it is shown that the constellation-based compensation method can efficiently mitigate the effect of HPA nonlinearity with low complexity and that the SMC-based detection scheme is efficient to compensate for HPA nonlinearity in the case without knowledge of the HPA parameters.
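One common memoryless model for HPA nonlinearity is the Saleh model, given here for illustration (the abstract does not state which HPA model the paper adopts). For an input with envelope $r$, the AM/AM and AM/PM conversions are

$A(r) = \dfrac{\alpha_a r}{1 + \beta_a r^2}$ and $\Phi(r) = \dfrac{\alpha_\phi r^2}{1 + \beta_\phi r^2}$,

where $(\alpha_a, \beta_a, \alpha_\phi, \beta_\phi)$ are the HPA parameters, and the output back-off (OBO) measures how far the mean output power is kept below the saturation point.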
Abstract:
In this paper, we investigate the performance of multiple-input multiple-output (MIMO) transmit beamforming (TB) systems in the presence of nonlinear high-power amplifiers (HPAs). Due to the suboptimality of maximal ratio transmission/maximal ratio combining (MRT/MRC) under HPA nonlinearity, quantized equal gain transmission (QEGT) is suggested as a feasible TB scheme. The effect of HPA nonlinearity on the performance of MIMO QEGT/MRC is evaluated in terms of the average symbol error probability (SEP) and system capacity, considering transmission over uncorrelated quasi-static frequency-flat Rayleigh fading channels. Numerical results are provided and show the effects of several system parameters, such as the parameters of nonlinear HPA, cardinality of the beamforming weight vector codebook, and modulation order of quadrature amplitude modulation (QAM), on performance.
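For reference, equal gain transmission applies unit-modulus weights $w_i = e^{j\theta_i}/\sqrt{N_t}$ across the $N_t$ transmit antennas, so that each antenna transmits at constant power; in quantized EGT each phase is drawn from a finite codebook, e.g. $\theta_i \in \{2\pi k/2^B : k = 0, \dots, 2^B - 1\}$ for a codebook of cardinality $2^B$. This is the standard construction; the abstract does not specify the paper's exact codebook design.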
Abstract:
In this paper we consider the structure of dynamically evolving networks modelling information and activity moving across a large set of vertices. We adopt the concept of communicability, which generalizes that of centrality as defined for static networks. We define the primary network structure within the whole as comprising the most influential vertices (both as senders and receivers of dynamically sequenced activity). We present a methodology, based on successive vertex knockouts of up to a very small fraction of the whole primary network, that can characterize the nature of the primary network as being either relatively robust and lattice-like (with redundancies built in) or relatively fragile and tree-like (with sensitivities and few redundancies). We apply these ideas to the analysis of evolving networks derived from fMRI scans of resting human brains. We show that the estimation of performance parameters via the structure tests of the corresponding primary networks is subject to less variability than that observed across a very large population of such scans. Hence the differences within the population are significant.
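The communicability concept for evolving networks referred to here is commonly formalised (e.g. following Grindrod and Higham; the paper's precise definition may differ) as a product of resolvents over the time-ordered adjacency matrices $A^{[1]}, \dots, A^{[M]}$:

$\mathcal{Q} = (I - aA^{[1]})^{-1}(I - aA^{[2]})^{-1} \cdots (I - aA^{[M]})^{-1}$, with $0 < a < 1/\max_k \rho(A^{[k]})$,

where $\rho(\cdot)$ denotes the spectral radius. Row sums of $\mathcal{Q}$ rank vertices as broadcasters (senders) and column sums as receivers, which is one way the most influential vertices of a primary network can be identified.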
Abstract:
Medium-range flood forecasting activities, driven by meteorological forecasts ranging from high-resolution deterministic forecasts to low-spatial-resolution ensemble prediction systems, share a major challenge in the appropriateness and design of performance measures. In this paper, possible limitations of some traditional hydrological and meteorological prediction quality and verification measures are identified. Some simple modifications are applied in order to circumvent the problem of autocorrelation dominating river discharge time-series, and in order to create a benchmark model enabling decision makers to evaluate both forecast quality and model quality. Although the performance period is quite short, the advantage of a simple cost-loss function as a measure of forecast quality can be demonstrated.
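The cost-loss framework, in its simplest static form (a standard construction, sketched here for readers unfamiliar with it), assumes protective action costs $C$ while an unprotected event incurs loss $L$; expected expense is minimised by acting whenever the forecast probability $p$ of the event exceeds the ratio $C/L$. The relative value of a forecast system is then

$V = \dfrac{E_{\text{climate}} - E_{\text{forecast}}}{E_{\text{climate}} - E_{\text{perfect}}}$,

the fraction of the saving achievable with a perfect forecast that the system actually delivers relative to acting on climatology alone.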
Abstract:
The assessment of chess players is an increasingly attractive opportunity and an unfortunate necessity. The chess community needs to limit potential reputational damage by inhibiting cheating and unjustified accusations of cheating: there has been a recent rise in both. A number of counter-intuitive discoveries have been made by benchmarking the intrinsic merit of players’ moves: these call for further investigation. Is Capablanca actually, objectively the most accurate World Champion? Has ELO rating inflation not taken place? Stimulated by FIDE/ACP, we revisit the fundamentals of the subject to advance a framework suitable for improved standards of computational experiment and more precise results. Other domains look to chess as the demonstrator of good practice, including the rating of professionals making high-value decisions under pressure, personnel evaluation by Multichoice Assessment and the organization of crowd-sourcing in citizen science projects. The ‘3P’ themes of performance, prediction and profiling pervade all these domains.