970 results for Statistical performance indexes
Abstract:
The question as to whether it is better to diversify a real estate portfolio within a property type across the regions or within a region across the property types is one of continuing interest for academics and practitioners alike. The current study, however, is somewhat different from the usual sector/regional analysis, in that it takes account of the fact that holdings in the UK real estate market are heavily concentrated in a single region, London. As a result, this study is designed to investigate whether a real estate fund manager can obtain a statistically significant improvement in risk/return performance by extending out of a London-based portfolio, first into the rest of the South East of England and then into the remainder of the UK, or whether the manager would be better off staying within London and diversifying across the various property types. The results indicate that staying within London and diversifying across the various property types may offer performance comparable with regional diversification, although this conclusion largely depends on the time period and the fund manager’s ability to diversify efficiently.
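As an illustrative aside, the comparison described above can be sketched in a few lines: compute mean return, volatility and a reward-to-risk ratio for a London-only, property-type-diversified portfolio and for a regionally diversified one, and test whether their return series differ. All series, weights and figures below are synthetic placeholders, not the study's data.

```python
# Sketch: London property-type diversification vs. regional diversification.
# Quarterly returns are synthetic; a real study would use property-index data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
quarters = 80  # 20 years of quarterly data (illustrative)

# Hypothetical quarterly returns (%): London offices/retail/industrial vs. three regions
london_sectors = rng.normal([1.8, 1.5, 1.7], [4.0, 3.5, 3.8], size=(quarters, 3))
regions = rng.normal([1.6, 1.5, 1.4], [3.0, 2.8, 3.2], size=(quarters, 3))

def portfolio_stats(returns):
    """Equal-weighted portfolio series, mean, volatility and reward-to-risk ratio."""
    w = np.full(returns.shape[1], 1.0 / returns.shape[1])
    port = returns @ w
    return port, port.mean(), port.std(ddof=1), port.mean() / port.std(ddof=1)

london_port, m1, s1, rr1 = portfolio_stats(london_sectors)
region_port, m2, s2, rr2 = portfolio_stats(regions)

# Paired test on the two portfolio return series (same quarters)
t, p = stats.ttest_rel(london_port, region_port)
print(f"London sectors: mean={m1:.2f}% sd={s1:.2f}% reward/risk={rr1:.2f}")
print(f"Regional:       mean={m2:.2f}% sd={s2:.2f}% reward/risk={rr2:.2f}")
print(f"Paired t-test on returns: t={t:.2f}, p={p:.3f}")
```

Testing whether the reward-to-risk ratios themselves differ significantly would require a dedicated ratio-comparison test, which this sketch does not attempt.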
Abstract:
Purpose – The purpose of this study is to examine the relationship between business-level strategy and organisational performance and to test the applicability of Porter's generic strategies in explaining differences in the performance of organisations. Design/methodology/approach – The study focussed on manufacturing firms in the UK belonging to the electrical and mechanical engineering sectors. Data were collected through a postal survey of 124 organisations using the survey instrument, and the respondents were all at CEO level. Both objective and subjective measures were used to assess performance. Non-response bias was assessed statistically and was not found to be a major problem affecting this study. Appropriate measures were taken to ensure that common method variance (CMV) did not affect the results, and statistical tests confirmed that it was not a problem. Findings – The results of this study indicate that firms adopting one of the generic strategies, namely cost-leadership or differentiation, perform better than “stuck-in-the-middle” firms which do not have a dominant strategic orientation. The integrated strategy group has lower performance than cost-leaders and differentiators in terms of financial performance measures. This provides support for Porter's view that combination strategies are unlikely to be effective in organisations. However, the cost-leadership and differentiation strategies were not strongly correlated with the financial performance measures, indicating the limitations of Porter's generic strategies in explaining performance heterogeneity in organisations. Originality/value – This study makes an important contribution to the literature by identifying gaps in the literature through a systematic literature review and addressing those gaps.
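A minimal sketch of the kind of group comparison this abstract alludes to: classify firms by dominant strategy and compare a performance measure across the groups with one-way ANOVA. Group labels, sample sizes and scores below are fabricated placeholders, not the survey data.

```python
# Sketch: comparing performance across strategy groups (cost-leaders,
# differentiators, "stuck in the middle"). Scores are synthetic composites.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
perf = {
    "cost_leadership": rng.normal(5.2, 1.0, 40),
    "differentiation": rng.normal(5.4, 1.0, 45),
    "stuck_in_middle": rng.normal(4.6, 1.0, 39),
}

# One-way ANOVA across the three groups
f, p = stats.f_oneway(*perf.values())
print(f"ANOVA: F={f:.2f}, p={p:.4f}")

# Pairwise follow-up (Welch t-tests); a real study would adjust for multiple comparisons
for a in perf:
    for b in perf:
        if a < b:
            t, pp = stats.ttest_ind(perf[a], perf[b], equal_var=False)
            print(f"{a} vs {b}: t={t:.2f}, p={pp:.4f}")
```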
Abstract:
We investigate the initialization of Northern-hemisphere sea ice in the global climate model ECHAM5/MPI-OM by assimilating sea-ice concentration data. The analysis updates for concentration are given by Newtonian relaxation, and we discuss different ways of specifying the analysis updates for mean thickness. Because the conservation of mean ice thickness or actual ice thickness in the analysis updates leads to poor assimilation performance, we introduce a proportional dependence between concentration and mean thickness analysis updates. Assimilation with these proportional mean-thickness analysis updates significantly reduces assimilation error both in identical-twin experiments and when assimilating sea-ice observations, reducing the concentration error by a factor of four to six, and the thickness error by a factor of two. To understand the physical aspects of assimilation errors, we construct a simple prognostic model of the sea-ice thermodynamics, and analyse its response to the assimilation. We find that the strong dependence of thermodynamic ice growth on ice concentration necessitates an adjustment of mean ice thickness in the analysis update. To understand the statistical aspects of assimilation errors, we study the model background error covariance between ice concentration and ice thickness. We find that the spatial structure of covariances is best represented by the proportional mean-thickness analysis updates. Both physical and statistical evidence supports the experimental finding that proportional mean-thickness updates are superior to the other two methods considered and enable us to assimilate sea ice in a global climate model using simple Newtonian relaxation.
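A toy sketch of the analysis update described above: Newtonian relaxation of sea-ice concentration towards observed concentration, with the mean-thickness increment taken proportional to the concentration increment. The grid size, relaxation coefficient and proportionality constant r are illustrative assumptions, not the values used in the study.

```python
# Toy analysis update: relax concentration towards observations and apply a
# proportional mean-thickness increment. All fields below are synthetic.
import numpy as np

rng = np.random.default_rng(2)
ny, nx = 60, 90
conc_model = np.clip(rng.normal(0.6, 0.2, (ny, nx)), 0.0, 1.0)                    # model concentration
conc_obs = np.clip(conc_model + rng.normal(0.0, 0.1, (ny, nx)), 0.0, 1.0)         # "observed" concentration
hmean_model = np.maximum(2.0 * conc_model + rng.normal(0.0, 0.2, (ny, nx)), 0.0)  # mean thickness (m)

def analysis_update(conc, hmean, conc_obs, relax=0.5, r=2.0):
    """One relaxation step; thickness increment = r * concentration increment."""
    d_conc = relax * (conc_obs - conc)          # Newtonian relaxation of concentration
    d_hmean = r * d_conc                        # proportional mean-thickness update
    return np.clip(conc + d_conc, 0.0, 1.0), np.maximum(hmean + d_hmean, 0.0)

conc_a, hmean_a = analysis_update(conc_model, hmean_model, conc_obs)
rmse = lambda a, b: float(np.sqrt(np.mean((a - b) ** 2)))
print("concentration RMSE vs obs, before/after:",
      rmse(conc_model, conc_obs), rmse(conc_a, conc_obs))
```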
Abstract:
We investigate the initialisation of Northern Hemisphere sea ice in the global climate model ECHAM5/MPI-OM by assimilating sea-ice concentration data. The analysis updates for concentration are given by Newtonian relaxation, and we discuss different ways of specifying the analysis updates for mean thickness. Because the conservation of mean ice thickness or actual ice thickness in the analysis updates leads to poor assimilation performance, we introduce a proportional dependence between concentration and mean thickness analysis updates. Assimilation with these proportional mean-thickness analysis updates leads to good assimilation performance for sea-ice concentration and thickness, both in identical-twin experiments and when assimilating sea-ice observations. The simulation of other Arctic surface fields in the coupled model is, however, not significantly improved by the assimilation. To understand the physical aspects of assimilation errors, we construct a simple prognostic model of the sea-ice thermodynamics, and analyse its response to the assimilation. We find that an adjustment of mean ice thickness in the analysis update is essential to arrive at plausible state estimates. To understand the statistical aspects of assimilation errors, we study the model background error covariance between ice concentration and ice thickness. We find that the spatial structure of covariances is best represented by the proportional mean-thickness analysis updates. Both physical and statistical evidence supports the experimental finding that assimilation with proportional mean-thickness updates outperforms the other two methods considered. The method described here is very simple to implement, and gives results that are sufficiently good to be used for initialising sea ice in a global climate model for seasonal to decadal predictions.
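The statistical check mentioned in both abstracts, estimating the background error covariance between ice concentration and mean thickness, can be sketched from an ensemble of model states. The ensemble below is synthetic and the toy concentration-thickness relationship is an assumption made only for illustration.

```python
# Sketch: per-grid-point ensemble covariance between concentration and mean
# thickness, and the regression slope it implies for thickness increments.
import numpy as np

rng = np.random.default_rng(3)
n_ens, n_points = 50, 500
conc = np.clip(rng.normal(0.7, 0.15, (n_ens, n_points)), 0, 1)
hmean = 2.0 * conc + rng.normal(0.0, 0.3, (n_ens, n_points))   # toy relationship

# Ensemble anomalies at each grid point
dc = conc - conc.mean(axis=0)
dh = hmean - hmean.mean(axis=0)

cov_ch = (dc * dh).mean(axis=0)     # cov(concentration, thickness)
var_c = (dc ** 2).mean(axis=0)      # var(concentration)
slope = cov_ch / var_c              # implied d(mean thickness) / d(concentration)

print(f"median implied slope: {np.median(slope):.2f} m per unit concentration")
# If this slope is roughly constant in space, a proportional mean-thickness
# analysis update with a single coefficient is a reasonable approximation.
```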
Abstract:
Many studies have widely accepted the assumption that learning processes can be promoted when teaching styles and learning styles are well matched. In this study, the relationships between learning styles, learning patterns, gender (as a selected demographic feature) and learners’ performance were quantitatively investigated in a blended learning setting. This environment adopts a traditional ‘one-size-fits-all’ teaching approach without considering individual users’ preferences and attitudes; hence, evidence can be provided about the value of taking such factors into account in Adaptive Educational Hypermedia Systems (AEHSs). Felder and Soloman’s Index of Learning Styles (ILS) was used to identify the learning styles of 59 undergraduate students at the University of Babylon. Five hypotheses were investigated in the experiment. Our findings show that some of the assessed factors had no statistically significant effect. However, the processing dimension, the total number of hits on the course website and gender had a statistically significant effect on learners’ performance. This finding needs further investigation to identify the factors affecting students’ achievement that should be considered in AEHSs.
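A minimal sketch of the significance tests described above, on synthetic data: compare performance between the two poles of the ILS processing dimension and between genders, and correlate course-website hits with performance. Sample sizes, scores and the toy score model are placeholders, not the study's data.

```python
# Sketch: performance vs. processing-dimension style, gender and website hits.
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
n = 59
style = rng.choice(["active", "reflective"], size=n)      # ILS processing dimension
gender = rng.choice(["F", "M"], size=n)
hits = rng.poisson(120, size=n)                            # hits on course website
score = 50 + 0.05 * hits + rng.normal(0, 8, size=n)        # final performance (toy model)

# Two-sample tests for the categorical factors
t1, p1 = stats.ttest_ind(score[style == "active"], score[style == "reflective"])
t2, p2 = stats.ttest_ind(score[gender == "F"], score[gender == "M"])
# Correlation for the continuous factor
r, p3 = stats.pearsonr(hits, score)

print(f"processing dimension: t={t1:.2f}, p={p1:.3f}")
print(f"gender:               t={t2:.2f}, p={p2:.3f}")
print(f"website hits:         r={r:.2f}, p={p3:.3f}")
```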
Abstract:
Recent growth in brain-computer interface (BCI) research has increased pressure to report improved performance. However, different research groups report performance in different ways. Hence, it is essential that evaluation procedures are valid and reported in sufficient detail. In this chapter we give an overview of available performance measures such as classification accuracy, Cohen’s kappa, information transfer rate (ITR), and written symbol rate. We show how to distinguish results from chance level using confidence intervals for accuracy or kappa. Furthermore, we point out common pitfalls when moving from offline to online analysis and provide a guide on how to conduct statistical tests on BCI results.
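A compact sketch of the measures listed above: classification accuracy with a confidence interval against chance level, Cohen's kappa from the confusion matrix, and the Wolpaw ITR formula. The labels below are synthetic placeholders; the Wilson interval and the equiprobable-class ITR formula are illustrative choices, not necessarily those used in the chapter.

```python
# Sketch: accuracy, Cohen's kappa, Wilson 95% CI for accuracy, and Wolpaw ITR.
import numpy as np

rng = np.random.default_rng(5)
n_classes, n_trials = 4, 120
y_true = rng.integers(0, n_classes, n_trials)
y_pred = np.where(rng.random(n_trials) < 0.55, y_true,
                  rng.integers(0, n_classes, n_trials))      # synthetic classifier output

# Accuracy and Cohen's kappa from the confusion matrix
cm = np.zeros((n_classes, n_classes), int)
np.add.at(cm, (y_true, y_pred), 1)
acc = np.trace(cm) / n_trials
p_e = (cm.sum(axis=0) * cm.sum(axis=1)).sum() / n_trials ** 2   # chance agreement
kappa = (acc - p_e) / (1 - p_e)

# Wilson 95% confidence interval for accuracy; chance level is 1/n_classes
z = 1.96
centre = (acc + z**2 / (2 * n_trials)) / (1 + z**2 / n_trials)
half = z * np.sqrt(acc * (1 - acc) / n_trials + z**2 / (4 * n_trials**2)) / (1 + z**2 / n_trials)

# Wolpaw ITR in bits per trial (assumes equiprobable classes, uniform errors)
p = max(acc, 1e-9)
itr = (np.log2(n_classes) + p * np.log2(p)
       + (1 - p) * np.log2(max(1 - p, 1e-9) / (n_classes - 1)))

print(f"accuracy={acc:.3f}  95% CI=({centre-half:.3f}, {centre+half:.3f})  chance={1/n_classes:.3f}")
print(f"kappa={kappa:.3f}  ITR={itr:.3f} bits/trial")
```

If the lower bound of the confidence interval lies above the chance level, the result can be distinguished from chance at the chosen significance level.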
Abstract:
Multi-attribute auctions are becoming widespread awarding mechanisms for construction contracts; in these auctions, criteria other than price are taken into account when ranking bidder proposals. Being the lowest-price bidder is therefore no longer a guarantee of being awarded the contract, which increases the importance of measuring a bidder’s performance when not only the first position (lowest price) matters. Modeling position performance allows a tender manager to calculate probability curves for the positions most likely to be occupied by any bidder who enters a competitive auction, irrespective of the actual number of future participating bidders. This paper details a practical methodology based on simple statistical calculations for modeling the performance of a single bidder or a group of bidders, constituting a useful resource for analyzing one’s own success while benchmarking potential bidding competitors.
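A simple sketch of position-probability curves, under an assumption that is ours for illustration and not necessarily the paper's model: the focal bidder beats each competitor independently with a probability p estimated from past tenders, so the number of competitors beaten among N-1 is binomial and the position follows directly, for any number of bidders N.

```python
# Sketch: position-probability curves for a focal bidder under an independence
# assumption. The historical fractions below are hypothetical placeholders.
import numpy as np
from scipy import stats

# Fraction of competitors the bidder outranked in past tenders (illustrative)
past_fraction_beaten = np.array([0.60, 0.75, 0.40, 0.80, 0.55, 0.70])
p = past_fraction_beaten.mean()

def position_probabilities(n_bidders, p):
    """P(position = 1..n_bidders) for the focal bidder among n_bidders total."""
    k = np.arange(n_bidders)                  # competitors beaten: 0..N-1
    pmf = stats.binom.pmf(k, n_bidders - 1, p)
    return pmf[::-1]                          # position r = N - k, so reverse

for n in (5, 10):
    probs = position_probabilities(n, p)
    print(f"N={n}: P(1st)={probs[0]:.2f}, P(top 3)={probs[:3].sum():.2f}")
```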
Abstract:
A dynamical wind-wave climate simulation covering the North Atlantic Ocean and spanning the whole 21st century under the A1B scenario has been compared with a set of statistical projections using atmospheric variables or large-scale climate indices as predictors. As a first step, the performance of all statistical models has been evaluated for the present-day climate; namely, they have been compared with a dynamical wind-wave hindcast in terms of winter Significant Wave Height (SWH) trends and variance, as well as with altimetry data. For the projections, it has been found that statistical models that use wind speed as predictor are able to capture a larger fraction of the winter SWH inter-annual variability (68% on average) and of the long-term changes projected by the dynamical simulation. Conversely, regression models using climate indices, sea level pressure and/or pressure gradient as predictors account for a smaller fraction of the SWH variance (from 2.8% to 33%) and do not reproduce the dynamically projected long-term trends over the North Atlantic. Investigating the wind-sea and swell components separately, we have found that the combination of two regression models, one for wind-sea waves and another for the swell component, can significantly improve the wave-field projections obtained from single regression models over the North Atlantic.
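A minimal sketch of the kind of statistical wave model being compared above: an ordinary least-squares regression of winter-mean SWH on a wind-speed predictor, with the fraction of interannual variance it captures. The series and the toy SWH-wind relationship are synthetic assumptions, not the study's data.

```python
# Sketch: regression-based statistical model for winter-mean SWH using wind
# speed as predictor, and its explained variance (R^2).
import numpy as np

rng = np.random.default_rng(6)
years = 40
wind = rng.normal(10.0, 1.2, years)                 # winter-mean wind speed (m/s)
swh = 0.35 * wind + rng.normal(0.0, 0.25, years)    # winter-mean SWH (m), toy link

# Ordinary least squares fit: SWH = a*wind + b
A = np.column_stack([wind, np.ones(years)])
coef, *_ = np.linalg.lstsq(A, swh, rcond=None)
swh_hat = A @ coef

explained = 1 - np.var(swh - swh_hat) / np.var(swh)   # fraction of variance captured
print(f"slope={coef[0]:.3f} m per m/s, R^2={explained:.2f}")

# Combining separate regressions for wind-sea and swell, as in the abstract,
# amounts to fitting two such models and summing their predicted contributions.
```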
Abstract:
The Natural History of Human Papillomavirus (HPV) Infection in Men: The HIM Study is a prospective multi-center cohort study that, among other factors, analyzes participants’ diet. A parallel cross-sectional study was designed to evaluate the validity and reproducibility of the quantitative food frequency questionnaire (QFFQ) used in the Brazilian center of the HIM Study. For this, a convenience subsample of 98 men aged 18 to 70 years from the HIM Study in Brazil answered three 54-item QFFQs and three 24-hour recall interviews, with 6-month intervals between them (data collection January to September 2007). A Bland-Altman analysis indicated that the difference between instruments was dependent on the magnitude of the intake for energy and most nutrients included in the validity analysis, with the exception of carbohydrates, fiber, polyunsaturated fat, vitamin C, and vitamin E. The correlation between the QFFQ and the 24-hour recall for the deattenuated and energy-adjusted data ranged from 0.05 (total fat) to 0.57 (calcium). For the energy and nutrient intakes included in the validity analysis, 33.5% of participants on average were correctly classified into quartiles, and the average weighted kappa of 0.26 indicates reasonable agreement. The intraclass correlation coefficients for all nutrients were greater than 0.40 in the reproducibility analysis. The QFFQ demonstrated good reproducibility and acceptable validity. The results support the use of this instrument in the HIM Study. J Am Diet Assoc. 2011;111:1045-1051.
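A sketch of the core validity and reproducibility statistics mentioned above, computed on synthetic intakes: Bland-Altman bias and limits of agreement, quartile cross-classification with weighted kappa, and a one-way intraclass correlation for the repeated QFFQ administrations. The linear-weighted kappa and one-way ICC are illustrative choices; the paper's exact variants may differ.

```python
# Sketch: Bland-Altman, quartile agreement with weighted kappa, and ICC.
import numpy as np
from sklearn.metrics import cohen_kappa_score

rng = np.random.default_rng(7)
n = 98
recall = rng.normal(2400, 450, n)             # 24-hour recall energy (kcal), synthetic
qffq = recall * rng.normal(1.05, 0.15, n)     # QFFQ energy (kcal), synthetic

# Bland-Altman: mean difference and 95% limits of agreement
diff = qffq - recall
bias, sd = diff.mean(), diff.std(ddof=1)
print(f"Bland-Altman bias={bias:.0f} kcal, LoA=({bias-1.96*sd:.0f}, {bias+1.96*sd:.0f})")

# Quartile cross-classification and weighted kappa
def quartiles(x):
    return np.digitize(x, np.quantile(x, [0.25, 0.5, 0.75]))
q1, q2 = quartiles(qffq), quartiles(recall)
print(f"same quartile: {np.mean(q1 == q2):.0%}, "
      f"weighted kappa={cohen_kappa_score(q1, q2, weights='linear'):.2f}")

# One-way ICC for three repeated QFFQ administrations per participant
reps = np.stack([qffq + rng.normal(0, 200, n) for _ in range(3)], axis=1)
k = reps.shape[1]
ms_between = reps.mean(axis=1).var(ddof=1) * k
ms_within = ((reps - reps.mean(axis=1, keepdims=True)) ** 2).sum() / (n * (k - 1))
icc = (ms_between - ms_within) / (ms_between + (k - 1) * ms_within)
print(f"ICC={icc:.2f}")
```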
Abstract:
We investigated the evolution of anuran locomotor performance and its morphological correlates as a function of habitat use and lifestyles. We reanalysed a subset of the data reported by Zug (Smithson. Contrib. Zool. 1978; 276: 1-31) employing phylogenetically explicit statistical methods (n = 56 species), and assembled morphological data on the ratio between hind-limb length and snout-vent length (SVL) from the literature and museum specimens for a large subgroup of the species from the original paper (n = 43 species). Analyses using independent contrasts revealed that classifying anurans into terrestrial, semi-aquatic, and arboreal categories cannot distinguish between the effects of phylogeny and ecological diversification in anuran locomotor performance. However, a more refined classification subdividing terrestrial species into ‘fossorial’ and ‘non-fossorial’, and arboreal species into ‘open canopy’, ‘low canopy’ and ‘high canopy’, suggests that part of the variation in locomotor performance and in hind-limb morphology can be attributed to ecological diversification. In particular, fossorial species had significantly lower jumping performance and shorter hind limbs than other species after controlling for SVL, illustrating how the trade-off between burrowing efficiency and jumping performance has resulted in morphological specialization in this group.
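A deliberately simplified sketch of the size-correction step described above: regress log hind-limb length on log SVL, take the residuals as relative limb length, and compare fossorial with non-fossorial species. The study itself uses phylogenetically independent contrasts, which this sketch omits; the species values below are synthetic.

```python
# Simplified, non-phylogenetic sketch: size-corrected hind-limb length
# compared between fossorial and other species.
import numpy as np
from scipy import stats

rng = np.random.default_rng(8)
n = 43
log_svl = rng.normal(np.log(50), 0.3, n)                      # log SVL (mm), synthetic
fossorial = rng.random(n) < 0.3
log_limb = 0.1 + 1.0 * log_svl - 0.15 * fossorial + rng.normal(0, 0.05, n)

# Residuals from the allometric regression = relative hind-limb length
slope, intercept, *_ = stats.linregress(log_svl, log_limb)
resid = log_limb - (intercept + slope * log_svl)

t, p = stats.ttest_ind(resid[fossorial], resid[~fossorial], equal_var=False)
print(f"fossorial mean residual={resid[fossorial].mean():+.3f}, "
      f"others={resid[~fossorial].mean():+.3f}, t={t:.2f}, p={p:.3f}")
```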
Abstract:
This paper tackles the problem of showing that evolutionary algorithms for fuzzy clustering can be more efficient than systematic (i.e. repetitive) approaches when the number of clusters in a data set is unknown. To do so, a fuzzy version of an Evolutionary Algorithm for Clustering (EAC) is introduced. A fuzzy cluster validity criterion and a fuzzy local search algorithm are used instead of their hard counterparts employed by EAC. Theoretical complexity analyses for both the systematic and evolutionary algorithms under interest are provided. Examples with computational experiments and statistical analyses are also presented.
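One ingredient such an evolutionary fuzzy-clustering algorithm needs is a fuzzy partition and a fuzzy cluster-validity criterion evaluated over candidate numbers of clusters. The sketch below uses plain fuzzy c-means and the Xie-Beni index as an illustrative validity criterion (not necessarily the one used by the fuzzy EAC), on synthetic data.

```python
# Sketch: fuzzy c-means plus the Xie-Beni validity index, evaluated for
# several candidate numbers of clusters (lower Xie-Beni is better).
import numpy as np

rng = np.random.default_rng(9)
X = np.vstack([rng.normal(c, 0.4, (50, 2)) for c in ((0, 0), (3, 0), (0, 3))])

def fuzzy_cmeans(X, c, m=2.0, iters=100):
    """Plain fuzzy c-means; returns centres V and membership matrix U (n x c)."""
    U = rng.dirichlet(np.ones(c), size=len(X))
    for _ in range(iters):
        W = U ** m
        V = (W.T @ X) / W.sum(axis=0)[:, None]                          # cluster centres
        d2 = ((X[:, None, :] - V[None, :, :]) ** 2).sum(-1) + 1e-12     # squared distances
        inv = 1.0 / d2 ** (1 / (m - 1))
        U = inv / inv.sum(axis=1, keepdims=True)                        # membership update
    return V, U

def xie_beni(X, V, U, m=2.0):
    d2 = ((X[:, None, :] - V[None, :, :]) ** 2).sum(-1)
    compact = ((U ** m) * d2).sum()
    sep = min(((V[i] - V[j]) ** 2).sum()
              for i in range(len(V)) for j in range(len(V)) if i != j)
    return compact / (len(X) * sep)

for c in (2, 3, 4, 5):
    V, U = fuzzy_cmeans(X, c)
    print(f"c={c}: Xie-Beni={xie_beni(X, V, U):.3f}")
```

The evolutionary algorithm's role, roughly speaking, is to search over partitions and numbers of clusters so that such a criterion is optimized without repeating the full clustering for every candidate c.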
Abstract:
The use of inter-laboratory test comparisons to determine the performance of individual laboratories for specific tests (or for calibration) [ISO/IEC Guide 43-1, 1997. Proficiency testing by interlaboratory comparisons - Part 1: Development and operation of proficiency testing schemes] is called Proficiency Testing (PT). In this paper we propose the use of the generalized likelihood ratio test to compare the performance of a group of laboratories for specific tests relative to the assigned value, and illustrate the procedure using actual data from a PT program in the area of volume. The proposed test extends the test criteria currently in use by allowing one to test for the consistency of the group of laboratories. Moreover, the class of elliptical distributions is considered for the obtained measurements. (C) 2008 Elsevier B.V. All rights reserved.
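A normal-theory sketch of such a consistency test: under the null hypothesis every laboratory mean equals the assigned value, under the alternative the means are unrestricted, and -2 log of the likelihood ratio is compared with a chi-square distribution. The paper works with the wider class of elliptical distributions; this sketch assumes normal, equal-variance measurements and uses synthetic data.

```python
# Sketch: likelihood ratio test of group consistency with the assigned value.
import numpy as np
from scipy import stats

rng = np.random.default_rng(10)
mu0 = 100.0                                            # assigned value (e.g. mL)
labs = [rng.normal(mu, 0.8, 10) for mu in (100.1, 99.8, 100.0, 100.6)]

N = sum(len(x) for x in labs)
k = len(labs)
ss0 = sum(((x - mu0) ** 2).sum() for x in labs)        # H0: every lab mean = mu0
ss1 = sum(((x - x.mean()) ** 2).sum() for x in labs)   # H1: lab means unrestricted

lrt = N * np.log(ss0 / ss1)                            # -2 log likelihood ratio
p_value = stats.chi2.sf(lrt, df=k)                     # k constrained mean parameters
print(f"-2 log LR = {lrt:.2f}, df = {k}, p = {p_value:.4f}")
```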
Abstract:
MCNP has stood so far as one of the main Monte Carlo radiation transport codes. Its use, as with any other Monte Carlo based code, has increased as computers perform calculations faster and become more affordable over time. However, using the Monte Carlo method to tally events in volumes which represent a small fraction of the whole system may turn out to be unfeasible if a straight analogue transport procedure (no use of variance reduction techniques) is employed and precise results are demanded. Calculation of reaction rates in activation foils placed in critical systems is one such case. The present work takes advantage of the fixed-source representation in MCNP to perform the above-mentioned task with more effective sampling (characterizing the neutron population in the vicinity of the tallying region and using it in a geometrically reduced coupled simulation). An extended analysis of source-dependent parameters is carried out in order to understand their influence on simulation performance and on the validity of results. Although discrepant results have been observed for small enveloping regions, the procedure presents itself as very efficient, giving adequate and precise results in shorter times than the standard analogue procedure. (C) 2007 Elsevier Ltd. All rights reserved.
Abstract:
We propose a likelihood ratio test (LRT) with Bartlett correction in order to identify Granger causality between sets of time-series gene expression data. The performance of the proposed test is compared to a previously published bootstrap-based approach. The LRT is shown to be significantly faster and statistically powerful even under non-normal distributions. An R package named gGranger containing an implementation of both Granger causality identification tests is also provided.
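A small sketch of a Granger-causality likelihood ratio test between two series: fit restricted (own lags only) and unrestricted (own lags plus lags of the other series) autoregressions by least squares and compare the residual sums of squares. The simple degrees-of-freedom factor used here is only a Bartlett-type illustration; the exact correction implemented in the paper's gGranger package may differ. Data are synthetic.

```python
# Sketch: LRT for Granger causality x -> y with a simple small-sample correction.
import numpy as np
from scipy import stats

rng = np.random.default_rng(11)
T, p = 100, 2
x = rng.normal(size=T)
y = np.zeros(T)
for t in range(2, T):                                  # y depends on lagged x
    y[t] = 0.4 * y[t - 1] + 0.5 * x[t - 1] + rng.normal(scale=0.5)

def lagged_design(series_list, p, T):
    """Design matrix with an intercept and lags 1..p of each series."""
    cols = [np.ones(T - p)]
    for s in series_list:
        cols += [s[p - l : T - l] for l in range(1, p + 1)]
    return np.column_stack(cols)

def rss(design, target):
    beta, *_ = np.linalg.lstsq(design, target, rcond=None)
    resid = target - design @ beta
    return float(resid @ resid)

y_t = y[p:]
rss_restricted = rss(lagged_design([y], p, T), y_t)        # y's own lags only
rss_unrestricted = rss(lagged_design([y, x], p, T), y_t)   # plus lags of x

n_eff = T - p
k_unres = 1 + 2 * p                                        # parameters under H1
lrt = n_eff * np.log(rss_restricted / rss_unrestricted)
lrt_corrected = lrt * (n_eff - k_unres) / n_eff            # Bartlett-type correction
p_value = stats.chi2.sf(lrt_corrected, df=p)
print(f"LRT={lrt:.2f}, corrected={lrt_corrected:.2f}, p={p_value:.4f}")
```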
Abstract:
In this project, two broad facets of the design of a methodology for performance optimization of indexable carbide inserts were examined: physical destructive testing and software simulation. For the physical testing, statistical research techniques were used to design the methodology. A five-step method, beginning with problem definition and proceeding through system identification, statistical model formulation, data collection, and statistical analysis of results, was elaborated upon in depth. The set-up and execution of an experiment with a compression machine were examined, together with roadblocks to quality data collection and possible solutions to curb them. A 2^k factorial design was illustrated and recommended for process improvement. Instances of first-order and second-order response surface analyses were encountered. In the case of curvature, a test for curvature significance based on center-point analysis was recommended. Process optimization using the method of steepest ascent and central composite designs, or process robustness studies based on response surface analysis, was also recommended. For the simulation test, the AdvantEdge program was identified as the most widely used software for tool development. Challenges to the efficient application of this software were identified and possible solutions proposed. In conclusion, both software simulation and physical testing were recommended to meet the objective of the project.
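A compact sketch of the recommended 2^k factorial analysis with center points: estimate the coded factor effects and test for curvature by comparing the mean response at the factorial points with the mean at the replicated center points. Factors, responses and run counts below are synthetic placeholders (e.g. standing in for cutting-speed and feed-rate settings and a wear response).

```python
# Sketch: 2^2 factorial design with center points and a curvature test.
import numpy as np
from scipy import stats

rng = np.random.default_rng(12)
# Coded factor levels for a 2^2 design (factor A, factor B)
design = np.array([[-1, -1], [1, -1], [-1, 1], [1, 1]], float)
y_factorial = np.array([10.2, 14.1, 11.0, 15.3])      # one response per factorial run
y_centre = 12.4 + rng.normal(0, 0.3, 5)               # replicated centre points

# Main effects and interaction from the coded design (contrast / 2 for one replicate)
effects = {
    "A": design[:, 0] @ y_factorial / 2,
    "B": design[:, 1] @ y_factorial / 2,
    "AB": (design[:, 0] * design[:, 1]) @ y_factorial / 2,
}
print("effects:", {name: round(val, 2) for name, val in effects.items()})

# Curvature test: factorial mean vs. centre-point mean, pure error from centre replicates
nf, nc = len(y_factorial), len(y_centre)
ss_curv = nf * nc * (y_factorial.mean() - y_centre.mean()) ** 2 / (nf + nc)
mse = y_centre.var(ddof=1)                             # pure error, nc - 1 df
f_stat = ss_curv / mse
p_value = stats.f.sf(f_stat, 1, nc - 1)
print(f"curvature: F={f_stat:.2f}, p={p_value:.3f}")
# Significant curvature would motivate augmenting to a central composite design
# and fitting a second-order response surface; otherwise the method of steepest
# ascent can be applied to the first-order model.
```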