670 results for Weighting
Abstract:
Construction of an international index of standards of living, incorporating social indicators and economic output, typically involves scaling and weighting procedures that lack welfare-economic foundations. Revealed preference axioms can be used to make quality-of-life comparisons if we can estimate the representative household's production technology for the social indicators. This method is applied to comparisons of gross domestic product (GDP) and life expectancy for 58 countries. Neither GDP rankings nor the rankings of the Human Development Index (HDI) are consistent with the partial ordering of revealed preference. A method of constructing a utility-consistent index incorporating both consumption and life expectancy is suggested. (C) 2003 Elsevier Science B.V. All rights reserved.
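As a rough illustration of the revealed-preference check this abstract alludes to, the sketch below constructs the partial ordering from hypothetical (consumption, life-expectancy) bundles and country-specific prices, including an imputed price of longevity; the numbers, country labels, and pricing of life expectancy are invented and are not taken from the paper.

```python
# Hypothetical sketch: checking whether a ranking is consistent with the
# revealed-preference partial ordering over (consumption, life-expectancy) bundles.
# Prices, quantities, and country names are illustrative, not from the paper.
import numpy as np

bundles = {          # (consumption, "produced" life expectancy)
    "A": np.array([30_000.0, 78.0]),
    "B": np.array([22_000.0, 81.0]),
    "C": np.array([12_000.0, 70.0]),
}
prices = {           # country-specific prices, incl. an imputed price of longevity
    "A": np.array([1.00, 350.0]),
    "B": np.array([1.10, 420.0]),
    "C": np.array([0.80, 150.0]),
}

def revealed_preferred(i, j):
    """i is (directly) revealed preferred to j if i's expenditure at its own
    prices could also have bought j's bundle."""
    return prices[i] @ bundles[i] >= prices[i] @ bundles[j]

def ranking_consistent(ranking):
    """A ranking violates revealed preference if some j is strictly revealed
    preferred to i while i is ranked strictly above j."""
    pos = {c: k for k, c in enumerate(ranking)}
    return all(not (revealed_preferred(j, i) and not revealed_preferred(i, j))
               or pos[j] <= pos[i]
               for i in ranking for j in ranking if i != j)

print(ranking_consistent(["A", "B", "C"]))   # e.g. a GDP-style ranking
```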
Abstract:
This study has three main objectives. First, it develops a generalization of the EKS method commonly used for multilateral price comparisons. It is shown that the EKS system can be generalized so that weights can be attached to each of the link comparisons used in the EKS computations; these weights can account for differing levels of reliability of the underlying binary comparisons. Second, various reliability measures and corresponding weighting schemes are presented and their merits discussed. Third, these new methods are applied to an international data set of manufacturing prices from the ICOP project. Although theoretically superior, the weighted EKS method turns out to have a generally small empirical impact compared with the unweighted EKS, and this impact is larger when the method is applied at lower levels of aggregation. Finally, the importance of using sector-specific PPPs in assessing relative levels of manufacturing productivity is indicated.
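A minimal sketch, under assumptions of my own, of a weighted EKS-type computation: binary PPPs are combined into multilateral parities by weighted least squares on their logarithms, so that reliability weights on each link can be varied; with equal weights the solution coincides with the standard EKS result. The matrices and estimator details are illustrative, not the paper's exact formulation.

```python
# Weighted EKS-type multilateral comparison (illustrative sketch).  Given an
# M x M matrix of binary PPPs and a matrix of reliability weights, solve
#   min_p  sum_{j,k} w_jk * (ln PPP_jk - (p_k - p_j))^2 ,
# i.e. weighted least squares on log parities, normalised to base country 0.
import numpy as np

def weighted_eks(binary_ppp, weights):
    M = binary_ppp.shape[0]
    logB = np.log(binary_ppp)
    A = np.zeros((M, M))
    b = np.zeros(M)
    for j in range(M):
        for k in range(M):
            if j == k:
                continue
            w = weights[j, k]
            # residual for this link: logB[j, k] - (p[k] - p[j])
            A[k, k] += w; A[j, j] += w
            A[k, j] -= w; A[j, k] -= w
            b[k] += w * logB[j, k]
            b[j] -= w * logB[j, k]
    A[0, :] = 0; A[0, 0] = 1; b[0] = 0        # normalisation: p_0 = 0
    p = np.linalg.solve(A, b)
    return np.exp(p)                           # multilateral parities, base country 0
```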
Abstract:
Assessments of the conservation status of threatened species that are based purely on subjective judgements are problematic because they can be influenced by hidden assumptions, personal biases and perceptions of risk, making the assessment process difficult to repeat. This can result in inconsistent assessments and misclassifications, which can lead to a lack of confidence in species assessments. It is almost impossible to understand an expert's logic or visualise the underlying reasoning behind the many hidden assumptions used throughout the assessment process. In this paper, we formalise the decision-making process of experts by capturing their logical ordering of information, their assumptions and reasoning, and transferring them into a set of decision rules. We illustrate this through the process used to evaluate the conservation status of species under the NatureServe system (Master, 1991). NatureServe status assessments have been used for over two decades to set conservation priorities for threatened species throughout North America. We develop a conditional point-scoring method to reflect the current subjective process. In two test comparisons, 77% of species' assessments using the explicit NatureServe method matched the qualitative assessments done subjectively by NatureServe staff. Of those that differed, no rank varied by more than one rank level between the two methods. In general, the explicit NatureServe method tended to be more precautionary than the subjective assessments. The rank differences that emerged from the comparisons may be due, at least in part, to the flexibility of the qualitative system, which allows different factors to be weighted on a species-by-species basis according to expert judgement. The method outlined in this study is the first documented attempt to explicitly define a transparent process for weighting and combining factors under the NatureServe system. The process of eliciting expert knowledge identifies how information is combined and highlights any inconsistent logic that may not be obvious in subjective decisions. The method provides a repeatable, transparent, and explicit benchmark for feedback, further development, and improvement. (C) 2004 Elsevier SAS. All rights reserved.
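The following is a hypothetical illustration of a conditional point-scoring rule of the general kind described; the factor names, point values, thresholds, and rank cut-offs are invented and are not the published NatureServe criteria.

```python
# Hypothetical conditional point-scoring rule for a NatureServe-style status
# rank.  All factors, points, and thresholds are invented for illustration.
def status_rank(n_occurrences, range_km2, trend_pct, threat_severity):
    score = 0.0
    score += 3 if n_occurrences <= 5 else 2 if n_occurrences <= 20 else 1 if n_occurrences <= 80 else 0
    score += 2 if range_km2 <= 100 else 1 if range_km2 <= 10_000 else 0
    # Conditional step: threats only add points when the trend is declining,
    # mimicking expert logic that weights one factor given another.
    if trend_pct < 0:
        score += {"high": 3, "moderate": 2, "low": 1}.get(threat_severity, 0)
        score += 1 if trend_pct < -30 else 0
    # Map total points onto ranks G1 (critically imperilled) .. G5 (secure).
    cutoffs = [(8, "G1"), (6, "G2"), (4, "G3"), (2, "G4")]
    return next((rank for cutoff, rank in cutoffs if score >= cutoff), "G5")

print(status_rank(n_occurrences=4, range_km2=80, trend_pct=-40, threat_severity="high"))  # -> "G1"
```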
Abstract:
The Euro has been used as the largest weighting element in a basket of currencies for the forex arrangements adopted by several Central European countries outside the European Union (EU). The paper uses a new time-series approach to examine the relationship between the Euro exchange rate and the level of foreign reserves. It employs zero-non-zero (ZNZ) patterned vector error-correction (VECM) modelling to investigate Granger causal relations among foreign reserves, the European Monetary Union money supply and the Euro exchange rate. The findings confirm that foreign reserves may influence movements in the Euro's exchange rate. Further, ZNZ patterned VECM modelling with exogenous variables is used to estimate the amount of foreign reserves currently required to once again achieve a targeted Euro exchange rate.
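For orientation only, a sketch of fitting an unrestricted VECM and testing Granger causality with statsmodels; the ZNZ (zero-non-zero) patterning of coefficients used in the paper is a subset-selection step that is not reproduced here, and the file and column names are assumptions.

```python
# Sketch only: an unrestricted VECM with a Granger-causality test, using
# statsmodels.  The hypothetical file "euro_reserves.csv" and its column names
# are assumptions; the ZNZ coefficient patterning is not reproduced.
import pandas as pd
from statsmodels.tsa.vector_ar.vecm import VECM, select_coint_rank

df = pd.read_csv("euro_reserves.csv", parse_dates=["date"], index_col="date")
data = df[["euro_usd_rate", "foreign_reserves", "emu_money_supply"]]

rank = select_coint_rank(data, det_order=0, k_ar_diff=2).rank   # choose cointegration rank
res = VECM(data, k_ar_diff=2, coint_rank=rank, deterministic="co").fit()

# Does the level of foreign reserves Granger-cause the Euro exchange rate?
print(res.test_granger_causality(caused="euro_usd_rate",
                                 causing="foreign_reserves", signif=0.05).summary())
```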
Abstract:
This paper presents a rectangular array antenna with a suitable signal-processing algorithm that is able to steer the beam in azimuth over a wide frequency band. In a previous approach reported in the literature, an inverse discrete Fourier transform technique was proposed for obtaining the signal-weighting coefficients. That approach was demonstrated for large arrays in which the physical parameters of the antenna elements were not considered. In this paper, a modified signal-weighting algorithm that works for arbitrary-size arrays is described. Its validity is demonstrated in examples of moderate-size arrays with real antenna elements. It is shown that in some cases the original beam-forming algorithm fails, while the new algorithm is able to form the desired radiation pattern over a wide frequency band. The performance of the new algorithm is assessed for two cases: when the mutual coupling between array elements is neglected and when it is taken into account.
Abstract:
This article presents an array antenna with beam-steering capability in azimuth over a wide frequency band using real-valued weighting coefficients that can be realized in practice by amplifiers or attenuators. The described beamforming scheme relies on a 2D (instead of 1D) array structure in order to ensure that there are enough degrees of freedom to realize a given radiation pattern in both the angular and frequency domains. In the presented approach, weights are determined using an inverse discrete Fourier transform (IDFT) technique that neglects the mutual coupling between array elements. Because of the presence of mutual coupling, the actual array produces a radiation pattern with increased side-lobe levels. In order to counter this effect, the design aims to realize the initial radiation pattern with a lower side-lobe level. This strategy is demonstrated in the design example of a 4 x 4 element array. (C) 2005 Wiley Periodicals, Inc.
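A sketch of the weight-determination idea, not the authors' code: a desired azimuth/frequency response sampled on a grid is mapped to element excitations with a 2-D inverse DFT, the real part is kept so the weights can be realized with amplifiers or attenuators, and mutual coupling is ignored; the array size, spacing, frequencies, and target pattern are assumed values.

```python
# Illustrative sketch (not the authors' exact algorithm): real-valued weights
# for an N x M planar array obtained from samples of a desired azimuth/frequency
# response via a 2-D inverse DFT, with mutual coupling ignored.
import numpy as np

c = 3e8
N, M = 4, 4                      # assumed array size
dx = dy = 0.03                   # assumed element spacing in metres

# Desired response sampled on an N x M (azimuth-bin x frequency-bin) grid;
# a single non-zero sample is used here purely as a placeholder target.
desired = np.zeros((N, M), dtype=complex)
desired[1, 2] = 1.0

w = np.fft.ifft2(desired).real   # IDFT -> element excitations; the real part is
                                 # realizable with amplifiers/attenuators

def azimuth_pattern(w, theta, f):
    """In-plane (azimuth) array factor of an x-y planar array of isotropic
    elements with weights w[n, m], at angle theta (rad) and frequency f (Hz)."""
    k = 2 * np.pi * f / c
    n = np.arange(w.shape[0])[:, None]
    m = np.arange(w.shape[1])[None, :]
    steer = np.exp(1j * k * (n * dx * np.sin(theta) + m * dy * np.cos(theta)))
    return abs(np.sum(w * steer))

# Evaluate the resulting pattern over a band to inspect wide-band behaviour.
for f in (2e9, 3e9, 4e9):
    print(f, [round(azimuth_pattern(w, t, f), 3) for t in np.linspace(0, np.pi / 2, 5)])
```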
Abstract:
In the past, the accuracy of facial approximations has been assessed by resemblance ratings (i.e., the comparison of a facial approximation directly to a target individual) and recognition tests (e.g., the comparison of a facial approximation to a photo array of faces including foils and a target individual). Recently, several research studies have indicated that recognition tests hold major strengths in contrast to resemblance ratings. However, resemblance ratings remain popularly employed and/or are given weighting when judging facial approximations, indicating that no consensus has been reached. This study aims to investigate the matter further by comparing the results of resemblance ratings and recognition tests for two facial approximations that clearly differed in their morphological appearance. One facial approximation was constructed by an experienced practitioner privy to the appearance of the target individual (the practitioner had direct access to an antemortem frontal photograph during face construction), while the other was constructed by a novice under blind conditions. The two facial approximations, whilst clearly morphologically different, were given similar resemblance scores even though the recognition tests produced vastly different results. One facial approximation was correctly recognized almost without exception, while the other was not correctly recognized above chance rates. These results suggest that resemblance ratings are insensitive measures of the accuracy of facial approximations and lend further weight to the use of recognition tests in facial approximation assessment. (c) 2006 Elsevier Ireland Ltd. All rights reserved.
Abstract:
This paper describes a spatial beamformer which, by using a rectangular array antenna, steers a beam in azimuth over a wide frequency band without frequency filters or tap-delay networks. The weighting coefficients are real numbers which can be realized by attenuators or amplifiers. A prototype including a 4 x 4 array of square planar monopoles and a feeding network composed of attenuators, power dividers/combiners and a rat-race hybrid is developed to test the validity of this wide-band beamforming concept. The experimental results prove the validity of this wide-band spatial beamformer for small-size arrays.
Abstract:
Background and purpose: Survey data quality is a combination of the representativeness of the sample, the accuracy and precision of measurements, and data processing and management, with several subcomponents in each. The purpose of this paper is to show how, in the final risk factor surveys of the WHO MONICA Project, information on data quality was obtained, quantified, and used in the analysis. Methods and results: In the WHO MONICA (Multinational MONItoring of trends and determinants in CArdiovascular disease) Project, the information about the data quality components was documented in retrospective quality assessment reports. On the basis of the documented information and the survey data, the quality of each data component was assessed and summarized using quality scores. The quality scores were used in sensitivity testing of the results, both by excluding populations with low quality scores and by weighting the data by their quality scores. Conclusions: Detailed documentation of all survey procedures with standardized protocols, training, and quality control are steps towards optimizing data quality. Quantifying data quality is a further step. The methods used in the WHO MONICA Project could be adopted to improve quality in other health surveys.
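A toy example of the quality-score weighting and exclusion-based sensitivity testing described above; the population names, estimates, scores, and threshold are invented, not MONICA data.

```python
# Rough illustration of quality-score weighting in a pooled analysis; the
# population names, estimates, and scores are invented, not MONICA data.
import numpy as np

populations    = ["pop_A", "pop_B", "pop_C", "pop_D"]
trend_estimate = np.array([-1.2, -0.8, -2.0, -0.3])   # e.g. % change per year
quality_score  = np.array([ 0.9,  0.7,  0.4,  0.95])  # 0 (poor) .. 1 (excellent)

# (1) Quality-weighted pooled estimate.
pooled_weighted = np.average(trend_estimate, weights=quality_score)

# (2) Sensitivity test: exclude populations below a quality threshold.
keep = quality_score >= 0.6
pooled_excluded = trend_estimate[keep].mean()

print(round(pooled_weighted, 3), round(pooled_excluded, 3))
```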
Abstract:
Over a number of years, as the Higher Education Funding Council for England (HEFCE)'s funding models became more transparent, Aston University was able to discover how its funding for teaching and research was calculated. This enabled calculations to be made of the funds earned by each school in the University, and Aston Business School (ABS) in turn developed models to calculate the funds earned by its programmes and academic groups. These models were a 'load' model and a 'contribution' model. The 'load' model records the weighting of activities undertaken by individual members of staff; the 'contribution' model is the means by which funds are allocated to academic units. The 'contribution' model is informed by the 'load' model in determining the volume of activity for which each academic unit is to be funded.
Abstract:
Spectral and coherence methodologies are ubiquitous for the analysis of multiple time series. Partial coherence analysis may be used to try to determine graphical models for brain functional connectivity. The outcome of such an analysis may be considerably influenced by factors such as the degree of spectral smoothing, line and interference removal, matrix inversion stabilization, the suppression of effects caused by side-lobe leakage, the combination of results from different epochs and people, and multiple hypothesis testing. This paper examines each of these steps in turn and provides a possible path which produces relatively 'clean' connectivity plots. In particular, we show how spectral matrix diagonal up-weighting can simultaneously stabilize spectral matrix inversion and reduce effects caused by side-lobe leakage, and we use the step-down multiple hypothesis test procedure to help formulate an interaction strength.
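A small sketch of the diagonal up-weighting step: the cross-spectral matrix is estimated, its diagonal is inflated by a small loading factor before inversion, and partial coherence is read off the inverse matrix. The data, loading factor, and estimator settings are illustrative and are not the paper's implementation.

```python
# Illustration of spectral-matrix diagonal up-weighting before inversion, and
# of partial coherence computed from the inverse spectral matrix.  The data and
# loading factor are invented; this is not the paper's implementation.
import numpy as np
from scipy.signal import csd

def spectral_matrix(x, fs, nperseg=256):
    """Cross-spectral matrix S[f, i, j] for multichannel data x (channels x samples)."""
    p = x.shape[0]
    f, _ = csd(x[0], x[0], fs=fs, nperseg=nperseg)
    S = np.empty((len(f), p, p), dtype=complex)
    for i in range(p):
        for j in range(p):
            _, S[:, i, j] = csd(x[i], x[j], fs=fs, nperseg=nperseg)
    return f, S

def partial_coherence(S, loading=0.01):
    """Partial coherence |G_ij|^2 / (G_ii G_jj) from the inverse spectral matrix G,
    after up-weighting the diagonal to stabilise the inversion."""
    nf, p, _ = S.shape
    pcoh = np.empty((nf, p, p))
    for k in range(nf):
        Sk = S[k] + loading * np.trace(S[k].real) / p * np.eye(p)
        G = np.linalg.inv(Sk)
        d = np.abs(np.diag(G))
        pcoh[k] = np.abs(G) ** 2 / np.outer(d, d)
    return pcoh

# Toy usage with 4 channels of white noise.
x = np.random.default_rng(1).standard_normal((4, 4096))
f, S = spectral_matrix(x, fs=256.0)
pc = partial_coherence(S)
```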
Abstract:
During 1999 and 2000 a large number of articles appeared in the financial press arguing that the concentration of the FTSE 100 had increased. Many of these reports suggested that stock market volatility in the UK had risen because the concentration of its stock markets had increased. This study undertakes a comprehensive measurement of stock market concentration using the FTSE 100 index. We find that during 1999, 2000 and 2001 stock market concentration was noticeably higher than at any other time since the index was introduced. When we measure the volatility of the FTSE 100 index we do not find an association between concentration and volatility. When we examine the variances and covariances of the FTSE 100 constituents, we find that security volatility appears to be positively related to concentration changes, but concentration and the size of security covariances appear to be negatively related. We simulate the variance of four versions of the FTSE 100 index; in each version the weighting structure reflects either an equally weighted index or one with low, intermediate or high levels of concentration. We find that moving from low to high concentration has very little impact on the volatility of the index. To complete the study we estimate the minimum variance portfolio for the FTSE 100 and compare the concentration of this portfolio with that of the index formed on the basis of market weighting. We find that concentration under the realised FTSE index weightings is higher than under the minimum variance index.
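A compact illustration (with invented numbers) of the two quantities being compared throughout: the variance of an index under a given weighting vector, w'Σw, and a simple Herfindahl-type concentration measure of the weights; this shows only the type of calculation, not the paper's data or methodology.

```python
# Toy example: index variance w' Sigma w under a given weighting vector, and a
# Herfindahl-type concentration measure of the weights.  Numbers are invented.
import numpy as np

rng = np.random.default_rng(0)
n = 5                                          # a toy "index" of 5 constituents
vols = rng.uniform(0.15, 0.35, n)              # annualised volatilities
corr = np.full((n, n), 0.3) + 0.7 * np.eye(n)  # common-correlation structure
Sigma = np.outer(vols, vols) * corr            # covariance matrix

def index_variance(w, Sigma):
    return float(w @ Sigma @ w)

def concentration(w):                          # Herfindahl-type measure
    return float(np.sum(w ** 2))

equal  = np.full(n, 1 / n)
skewed = np.array([0.5, 0.25, 0.15, 0.07, 0.03])   # a "high concentration" weighting

for name, w in [("equal", equal), ("concentrated", skewed)]:
    print(name, round(concentration(w), 3), round(index_variance(w, Sigma), 4))
```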
Abstract:
Mistuning a harmonic produces an exaggerated change in its pitch. This occurs because the component becomes inconsistent with the regular pattern that causes the other harmonics (constituting the spectral frame) to integrate perceptually. These pitch shifts were measured when the fundamental (F0) component of a complex tone (nominal F0 frequency = 200 Hz) was mistuned by +8% and -8%. The pitch-shift gradient was defined as the difference between these values and its magnitude was used as a measure of frame integration. An independent and random perturbation (spectral jitter) was applied simultaneously to most or all of the frame components. The gradient magnitude declined gradually as the degree of jitter increased from 0% to ±40% of F0. The component adjacent to the mistuned target made the largest contribution to the gradient, but more distant components also contributed. The stimuli were passed through an auditory model, and the exponential height of the F0-period peak in the averaged summary autocorrelation function correlated well with the gradient magnitude. The fit improved when the weighting on more distant channels was attenuated by a factor of three per octave. The results are consistent with a grouping mechanism that computes a weighted average of periodicity strength across several components. © 2006 Elsevier B.V. All rights reserved.
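A toy sketch of the weighting scheme in the final sentence: per-channel periodicity strengths are averaged with weights attenuated by a factor of three per octave of distance from the target component; the channel centre frequencies and periodicity values are invented.

```python
# Sketch of a weighted average of per-channel periodicity strength, with
# channels attenuated by a factor of three per octave of distance from the
# target component.  Channel frequencies and periodicity values are invented.
import numpy as np

target_freq = 200.0                                   # mistuned F0 component (Hz)
channel_cf  = np.array([200.0, 400.0, 600.0, 800.0, 1000.0])
periodicity = np.array([0.90, 0.70, 0.55, 0.50, 0.45])    # e.g. SACF peak heights

octave_dist = np.abs(np.log2(channel_cf / target_freq))
weights = 3.0 ** (-octave_dist)                       # attenuate 3x per octave
weighted_strength = np.sum(weights * periodicity) / np.sum(weights)
print(round(weighted_strength, 3))
```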
Abstract:
The modelling of mechanical structures using finite element analysis has become an indispensable stage in the design of new components and products. Once the theoretical design has been optimised, a prototype may be constructed and tested. What can the engineer do if the measured and theoretically predicted vibration characteristics of the structure are significantly different? This thesis considers the problems of changing the parameters of the finite element model to improve the correlation between a physical structure and its mathematical model. Two new methods are introduced to perform the systematic parameter updating. The first uses the measured modal model to derive the parameter values with the minimum variance. The user must provide estimates for the variance of the theoretical parameter values and the measured data. Previous authors using similar methods have assumed that the estimated parameters and measured modal properties are statistically independent. This will generally be the case during the first iteration but will not be the case subsequently. The second method updates the parameters directly from the frequency response functions. The order of the finite element model of the structure is reduced as a function of the unknown parameters. A method related to a weighted equation error algorithm is used to update the parameters. After each iteration the weighting changes so that on convergence the output error is minimised. The suggested methods are extensively tested using simulated data. An H-frame is then used to demonstrate the algorithms on a physical structure.
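Not the thesis's algorithm, only a generic sketch of an iteratively reweighted equation-error update, to illustrate the idea that the weighting changes after each iteration; the model matrix A(theta), data b, and the reweighting rule are assumptions.

```python
# Generic sketch (not the thesis's algorithm) of an iteratively reweighted
# equation-error update: parameters theta are re-estimated at each step with
# weights that change as the current residuals change.  A(theta) and b stand
# in for quantities derived from the reduced model and measured response data.
import numpy as np

def irls_update(A_of_theta, b, theta0, n_iter=20, eps=1e-8):
    theta = np.asarray(theta0, dtype=float)
    for _ in range(n_iter):
        A = A_of_theta(theta)                   # model matrix at current estimate
        r = b - A @ theta                       # current equation-error residuals
        w = 1.0 / np.maximum(np.abs(r), eps)    # example reweighting rule
        W = np.diag(w)
        theta = np.linalg.lstsq(W @ A, W @ b, rcond=None)[0]
    return theta

# Toy usage with a fixed model matrix, just to run the sketch.
A_toy = lambda th: np.array([[1.0, 0.5], [0.2, 1.0], [1.0, 1.0]])
print(irls_update(A_toy, b=np.array([1.0, 0.8, 1.5]), theta0=[0.0, 0.0]))
```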
Abstract:
In practical terms, any result obtained using an ordered weighted averaging (OWA) operator depends heavily on the method used to determine the weighting vector. Several approaches for obtaining the associated weights have been suggested in the literature, but none of them takes into account the preferences of the alternatives. This paper presents a method for determining the OWA weights when the preferences of alternatives across all the criteria are considered. An example is given to illustrate this method, and an application to an internet search engine shows the use of this new OWA operator.
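For reference, a minimal sketch of standard OWA aggregation (the paper's preference-based weight-determination method is not reproduced here): the arguments are sorted in descending order and combined with a fixed position-weighting vector.

```python
# Minimal sketch of standard OWA aggregation; the weighting vector below is an
# arbitrary example, not one produced by the paper's method.
import numpy as np

def owa(values, weights):
    """Ordered weighted average: weights apply to ranked positions, not criteria."""
    values, weights = np.asarray(values, float), np.asarray(weights, float)
    assert np.isclose(weights.sum(), 1.0)
    return float(np.sort(values)[::-1] @ weights)

scores  = [0.7, 0.4, 0.9]            # an alternative's scores across three criteria
weights = [0.5, 0.3, 0.2]            # position weights (here emphasising the best scores)
print(owa(scores, weights))          # 0.9*0.5 + 0.7*0.3 + 0.4*0.2 = 0.74
```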