792 results for biased measurement
Abstract:
This paper studies a balance whose unobservable fulcrum is not necessarily located at the middle of its two pans. It presents three different models, showing how this lack of symmetry modifies the observation, the formalism and the interpretation of such a biased measuring device. It argues that the biased balance can be an interesting source of inspiration for broadening the representational theory of measurement.
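A quick lever-law illustration (an assumed reading of the setup, not the paper's formalism) shows how an off-center fulcrum turns into a multiplicative bias:

```python
# Assumed lever-law reading of the biased balance (not the paper's formalism):
# with unequal fulcrum distances d1 != d2 to the two pans, equilibrium holds
# when m1 * d1 == m2 * d2, so the balance measures m2 only up to the
# multiplicative factor d2 / d1 rather than m2 itself.
def reported_mass(m2: float, d1: float, d2: float) -> float:
    """Mass on pan 1 that balances a load m2 on pan 2."""
    return m2 * d2 / d1

# A 1.000 kg load reads as 1.100 kg when the fulcrum is shifted by 10%.
print(reported_mass(m2=1.000, d1=0.10, d2=0.11))
```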
Abstract:
We represent interval-ordered homothetic preferences with a quantitative homothetic utility function and a multiplicative bias. When preferences are weakly ordered (i.e. when indifference is transitive), such a bias equals 1. When indifference is intransitive, the biasing factor is a positive function smaller than 1 and measures a threshold of indifference. We show that the bias is constant if and only if preferences are semiordered, and we identify conditions ensuring a linear utility function. We illustrate our approach with indifference sets on a two-dimensional commodity space.
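As a minimal sketch of the constant-bias case, under one plausible reading of the representation (utility u(x) = x, with strict preference requiring a ratio threshold), indifference becomes intransitive:

```python
# Illustrative sketch (an assumed reading, not the paper's formal construction):
# a constant multiplicative bias sigma in (0, 1) acts as a ratio threshold of
# indifference, so indifference is intransitive across a chain of objects.
SIGMA = 0.9  # hypothetical constant bias

def prefers(x: float, y: float, sigma: float = SIGMA) -> bool:
    """Strict preference: x beats y only if u(x) exceeds u(y) scaled by the bias."""
    return x > y / sigma  # equivalently sigma * x > y

a, b, c = 1.00, 0.95, 0.89
print(prefers(a, b))  # False: a and b fall within the indifference band
print(prefers(b, c))  # False: b and c fall within the band too
print(prefers(a, c))  # True: the band is not transitive across the chain
```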
Abstract:
BETs is a three-year project financed by the Space Program of the European Commission, aimed at developing an efficient deorbit system that could be carried on board any future satellite launched into Low Earth Orbit (LEO). The operational system involves a conductive tape tether left bare to establish anodic contact with the ambient plasma, acting as a giant Langmuir probe. As part of this project, we are carrying out both numerical and experimental approaches to estimate the current collected by the positive part of the tether. This paper deals with experimental measurements performed in the IONospheric Atmosphere Simulator (JONAS) plasma chamber of the Onera Space Environment Department. The JONAS facility is a 9 m³ vacuum chamber equipped with a plasma source providing drifting plasma that simulates LEO conditions in terms of density and temperature. A thin metallic cylinder, simulating the tether, is set inside the chamber and polarized up to 1000 V. The Earth's magnetic field is neutralized inside the chamber. First, the tether-collected current versus tether polarization is measured for different plasma source energies and densities. In addition, several types of Langmuir probes are used at the same location to allow the extraction of both ion densities and electron parameters by computer modeling (classical Langmuir probe characteristics are not accurate enough in the present situation). These two measurements permit estimation of the discrepancies between the theoretical collection laws, the orbital-motion-limited law in particular, and the experimental data in LEO-like conditions without magnetic fields. Second, the spatial variations and time evolution of the plasma properties around the tether are investigated. Spherical and emissive Langmuir probes are also used for a more extensive characterization of the plasma in space- and time-dependent analyses. Results show ion depletion due to the wake effect and the accumulation of ions upstream of the tether. In some regimes (at large positive potential), oscillations are observed on the tether-collected current and on the Langmuir probe collected current at specific sites.
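For reference, a minimal sketch of the orbital-motion-limited (OML) collection law mentioned above, in its common high-bias approximation for a cylindrical collector; all plasma and tether parameters are illustrative placeholders, not values from the JONAS experiment:

```python
import numpy as np

# High-bias OML approximation for an attracting cylinder:
# I = I_th * (2 / sqrt(pi)) * sqrt(1 + eV / kT_e), with I_th the random
# thermal electron current to the lateral surface. Parameters are made up.
E_CHARGE = 1.602e-19     # C
M_ELECTRON = 9.109e-31   # kg

def oml_current(n_e, T_e_eV, radius, length, bias_V):
    """Electron current (A) collected by a cylinder in the OML regime."""
    area = 2.0 * np.pi * radius * length        # lateral surface, m^2
    kT = T_e_eV * E_CHARGE                      # thermal energy, J
    i_thermal = E_CHARGE * n_e * area * np.sqrt(kT / (2.0 * np.pi * M_ELECTRON))
    return i_thermal * (2.0 / np.sqrt(np.pi)) * np.sqrt(1.0 + E_CHARGE * bias_V / kT)

# Example: LEO-like density and temperature, 1 m of 1-mm-radius tether at +500 V.
print(oml_current(n_e=1e11, T_e_eV=0.1, radius=1e-3, length=1.0, bias_V=500.0))
```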
Abstract:
Analysis of a major multi-site epidemiologic study of heart disease has required estimation of the pairwise correlation of several measurements across sub-populations. Because the measurements from each sub-population were subject to sampling variability, the Pearson product-moment estimator of these correlations produces biased estimates. This paper proposes a model that takes into account within and between sub-population variation, provides algorithms for obtaining maximum likelihood estimates of these correlations, and discusses several approaches for obtaining interval estimates. (C) 1997 by John Wiley & Sons, Ltd.
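For intuition, a minimal simulation with hypothetical parameters of the attenuation the abstract describes, together with the classical disattenuation correction (a simpler fix than the likelihood approach proposed in the paper):

```python
import numpy as np

# Minimal simulation (hypothetical parameters): when each sub-population mean
# is observed with sampling noise, the naive Pearson correlation of the noisy
# values underestimates the true correlation between the means.
rng = np.random.default_rng(0)
rho_true, n_subpops, noise_sd = 0.8, 200, 0.7

# True paired means for each sub-population, correlated at rho_true.
cov = [[1.0, rho_true], [rho_true, 1.0]]
true_means = rng.multivariate_normal([0.0, 0.0], cov, size=n_subpops)

# Observed values = true means + independent sampling error.
observed = true_means + rng.normal(0.0, noise_sd, size=true_means.shape)

naive = np.corrcoef(observed[:, 0], observed[:, 1])[0, 1]
# Classical disattenuation: divide by the reliability (here identical for both
# variables because the true variance is 1 and the noise variance is known).
reliability = 1.0 / (1.0 + noise_sd**2)
corrected = naive / reliability
print(f"true {rho_true:.2f}  naive {naive:.2f}  disattenuated {corrected:.2f}")
```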
Abstract:
In the homogeneous case of one-dimensional objects, we show that any preference relation that is positive and homothetic can be represented by a quantitative utility function and a unique bias. This bias may favor or disfavor the preference for an object. In the first case, preferences are complete but not transitive, and an object may be preferred even when its utility is lower. In the second case, preferences are asymmetric and transitive but not negatively transitive, and a greater utility may not be sufficient for an object to be preferred. In this manner, the bias reflects the extent to which preferences depart from the maximization of a utility function.
Abstract:
This paper presents a new respiratory impedance estimator that minimizes the error due to breathing. Its practical reliability was evaluated in a simulation using realistic signals. These signals were generated by superposing pressure and flow records obtained in two conditions: 1) when applying forced oscillation to a resistance-inertance-elastance (RIE) mechanical model; 2) when healthy subjects breathed through the unexcited forced oscillation generator. Impedances computed (4-32 Hz) from the simulated signals with the new estimator resulted in a mean value that was scarcely biased by the added breathing (errors less than 1 percent in the mean R, I, and E) and had a small variability (coefficients of variation of R, I, and E of 1.3, 3.5, and 9.6 percent, respectively). Our results suggest that the proposed estimator reduces the error in measurement of respiratory impedance without appreciable extra computational cost.
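For context, a sketch of the conventional cross-spectral impedance estimate Z(f) = S_qp / S_qq that such estimators aim to improve on in the presence of breathing; the signals and RIE parameters below are synthetic stand-ins, not the paper's recordings:

```python
import numpy as np
from scipy import signal

# Synthetic pressure/flow pair from an RIE model plus an additive "breathing"
# disturbance; impedance is then estimated from cross- and auto-spectra.
fs = 256.0                           # sampling rate in Hz (assumed)
t = np.arange(0.0, 60.0, 1.0 / fs)
rng = np.random.default_rng(1)

R, I, E = 2.0, 0.01, 20.0            # cmH2O.s/L, cmH2O.s^2/L, cmH2O/L (invented)
q = rng.normal(size=t.size)          # broadband forced-oscillation flow
p = R * q + I * np.gradient(q, 1.0 / fs) + E * np.cumsum(q) / fs
p += 0.3 * np.sin(2.0 * np.pi * 0.25 * t)   # slow breathing-like component

f, s_qq = signal.welch(q, fs=fs, nperseg=1024)
_, s_qp = signal.csd(q, p, fs=fs, nperseg=1024)
z = s_qp / s_qq                      # complex impedance estimate P/Q

band = (f >= 4.0) & (f <= 32.0)      # the 4-32 Hz band used in the paper
print(np.real(z[band]).mean())       # should land near R = 2.0
```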
Abstract:
Our goal in this paper is to assess the reliability and validity of egocentered network data using multilevel analysis (Muthén, 1989; Hox, 1993) under the multitrait-multimethod approach. The confirmatory factor analysis model for multitrait-multimethod data (Werts & Linn, 1970; Andrews, 1984) is used for our analyses. In this study we reanalyse part of the data from another study (Kogovšek et al., 2002), done on a representative sample of the inhabitants of Ljubljana. The traits used in our article are the name interpreters. We consider egocentered network data as hierarchical; therefore a multilevel analysis is required. We use Muthén's partial maximum likelihood approach, called the pseudobalanced solution (Muthén, 1989, 1990, 1994), which produces estimates close to maximum likelihood for large ego sample sizes (Hox & Maas, 2001). Several analyses are performed to compare this multilevel analysis with classic methods of analysis, such as those in Kogovšek et al. (2002), who analysed the data only at the group (ego) level, considering averages over all alters within each ego. We show that some of the results obtained by classic methods are biased and that multilevel analysis provides more detailed information that greatly enriches the interpretation of the reliability and validity of hierarchical data. Within- and between-ego reliabilities and validities and other related quality measures are defined, computed and interpreted.
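To make the hierarchical (alters-within-egos) view concrete, here is a sketch with fabricated data fitting a random-intercept model and splitting variance into between-ego and within-ego components; it illustrates the data structure only, not the pseudobalanced MTMM estimation used in the paper:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Fabricated egocentered data: several alter reports nested within each ego.
rng = np.random.default_rng(7)
n_egos, alters_per_ego = 50, 5
ego_effect = rng.normal(0.0, 1.0, n_egos)            # between-ego variance 1.0
rows = [
    {"ego": e, "y": ego_effect[e] + rng.normal(0.0, 0.5)}  # within-ego var 0.25
    for e in range(n_egos)
    for _ in range(alters_per_ego)
]
data = pd.DataFrame(rows)

# Random-intercept model: one variance component per level of nesting.
model = smf.mixedlm("y ~ 1", data, groups=data["ego"]).fit()
between = float(model.cov_re.iloc[0, 0])             # between-ego variance
within = model.scale                                  # residual (within-ego)
print(f"between {between:.2f}  within {within:.2f}  ICC {between / (between + within):.2f}")
```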
Abstract:
The measurement of the impact of technical change has received significant attention within the economics literature. One popular method of quantifying the impact of technical change is the use of growth accounting index numbers. However, in a recent article Nelson and Pack (1999) criticise the use of such index numbers in situations where technical change is likely to be biased in favour of one input or another. In particular, they criticise the common approach of applying observed cost shares, as proxies for partial output elasticities, to weight the change in quantities, which they claim is only valid under Hicks neutrality. Recent advances in the measurement of product and factor biases of technical change developed by Balcombe et al. (2000) provide a relatively straightforward means of correcting product and factor shares in the face of biased technical progress. This paper demonstrates the correction of both revenue and cost shares used in the construction of a TFP index for UK agriculture over the period 1953 to 2000, using both revenue and cost function share equations appended with stochastic latent variables to capture the bias effect. Technical progress is shown to be biased between both individual input and output groups. Output and input quantity aggregates are then constructed using both observed and corrected share weights, and the resulting TFPs are compared. There does appear to be some significant bias in TFP if the effect of biased technical progress is not taken into account when constructing the weights.
Abstract:
Productivity growth is conventionally measured by indices representing discrete approximations of the Divisia TFP index under the assumption that technological change is Hicks-neutral. When this assumption is violated, these indices are no longer meaningful because they conflate the effects of factor accumulation and technological change. We propose a way of adjusting the conventional TFP index that solves this problem. The method adopts a latent variable approach to the measurement of technical change biases that provides a simple means of correcting product and factor shares in the standard Törnqvist-Theil TFP index. An application to UK agriculture over the period 1953-2000 demonstrates that technical progress is strongly biased. The implications of that bias for productivity measurement are shown to be very large, with the conventional TFP index severely underestimating productivity growth. The result is explained primarily by the fact that technological change has favoured the rapidly accumulating factors over labour, the factor leaving the sector. (C) 2004 Elsevier B.V. All rights reserved.
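For reference, a sketch of the standard Törnqvist-Theil TFP growth rate that both of the preceding abstracts build on; their proposed correction would replace the observed share weights below with bias-corrected ones estimated via latent variables, and all numbers here are invented:

```python
import numpy as np

# Tornqvist-Theil TFP growth between two periods:
# ln(TFP_1 / TFP_0) = average-revenue-share-weighted output growth
#                     minus average-cost-share-weighted input growth.
def tornqvist_tfp_growth(y0, y1, r0, r1, x0, x1, s0, s1):
    y0, y1, r0, r1 = map(np.asarray, (y0, y1, r0, r1))
    x0, x1, s0, s1 = map(np.asarray, (x0, x1, s0, s1))
    output_growth = np.sum(0.5 * (r0 + r1) * np.log(y1 / y0))
    input_growth = np.sum(0.5 * (s0 + s1) * np.log(x1 / x0))
    return output_growth - input_growth

# Two outputs, three inputs; shares sum to 1 in each period (made-up numbers).
g = tornqvist_tfp_growth(
    y0=[100, 50], y1=[104, 51], r0=[0.60, 0.40], r1=[0.62, 0.38],
    x0=[80, 20, 10], x1=[81, 19, 10], s0=[0.5, 0.3, 0.2], s1=[0.5, 0.3, 0.2],
)
print(f"TFP growth of about {100.0 * g:.2f}% between the two periods")
```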
Abstract:
We address the problem of selecting the best linear unbiased predictor (BLUP) of the latent value (e.g., serum glucose fasting level) of sample subjects with heteroskedastic measurement errors. Using a simple example, we compare the usual mixed model BLUP to a similar predictor based on a finite population mixed model (FPMM) setup with two sources of variability, the first of which corresponds to simple random sampling and the second to heteroskedastic measurement errors. Under this last approach, we show that when measurement errors are subject-specific, the BLUP shrinkage constants are based on a pooled measurement error variance, as opposed to the individual ones generally considered for the usual mixed model BLUP. In contrast, when the heteroskedastic measurement errors are measurement condition-specific, the FPMM BLUP involves different shrinkage constants. We also show that in this setup, when measurement errors are subject-specific, the usual mixed model predictor is biased but has a smaller mean squared error than the FPMM BLUP, which points to some difficulties in the interpretation of such predictors. (C) 2011 Elsevier B.V. All rights reserved.
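A small numeric sketch (invented values) of the contrast the abstract draws between the two shrinkage schemes:

```python
import numpy as np

# Subject-specific errors: the usual mixed-model BLUP shrinks each subject's
# observation with that subject's own error variance, while the FPMM-style
# predictor described above uses a single pooled error variance for everyone.
mu = 100.0                 # overall mean of the latent values (assumed known)
sigma2_b = 25.0            # between-subject variance of the latent values
y = np.array([92.0, 110.0, 103.0])       # observed values per subject
sigma2_e = np.array([4.0, 36.0, 16.0])   # subject-specific error variances

# Usual mixed-model BLUP: subject-specific shrinkage constants.
k_indiv = sigma2_b / (sigma2_b + sigma2_e)
blup_usual = mu + k_indiv * (y - mu)

# FPMM-style predictor: one pooled error variance, hence one shrinkage constant.
k_pooled = sigma2_b / (sigma2_b + sigma2_e.mean())
blup_fpmm = mu + k_pooled * (y - mu)

print(np.round(blup_usual, 2), np.round(blup_fpmm, 2))
```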
Abstract:
This paper examines the measurement of long-horizon abnormal performance when stock selection is conditional on an extended period of past survival. Filtering on survival results in a sample driven towards more-established, frequently traded stocks and this has implications for the choice of benchmark used in performance measurement (especially in the presence of the well-documented size effect). A simulation study is conducted to document the properties of commonly employed performance measures conditional on past survival. The results suggest that the popular index benchmarks used in long-horizon event studies are severely biased and yield test statistics that are badly misspecified. In contrast, a matched-stock benchmark based on size and industry performs consistently well. Also, an eligible-stock index designed to mitigate the influence of the size effect proves effective.
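For context, a sketch of one common long-horizon performance measure examined in such simulation studies, the buy-and-hold abnormal return against a matched benchmark; the monthly returns below are fabricated:

```python
import numpy as np

# Buy-and-hold abnormal return (BHAR): compounded stock return minus the
# compounded return of a benchmark, here a size-and-industry matched stock.
def bhar(stock_returns, benchmark_returns):
    stock = np.prod(1.0 + np.asarray(stock_returns)) - 1.0
    bench = np.prod(1.0 + np.asarray(benchmark_returns)) - 1.0
    return stock - bench

stock = [0.02, -0.01, 0.03, 0.00, 0.05]               # event firm, 5 months
size_industry_match = [0.01, 0.00, 0.02, 0.01, 0.03]  # matched-stock benchmark
print(f"BHAR = {bhar(stock, size_industry_match):.4f}")
```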
Abstract:
We report measurements of single- and double-spin asymmetries for W± and Z/γ* boson production in longitudinally polarized p+p collisions at √s = 510 GeV by the STAR experiment at RHIC. The asymmetries for W± were measured as a function of the decay lepton pseudorapidity, which provides a theoretically clean probe of the proton's polarized quark distributions at the scale of the W mass. The results are compared to theoretical predictions, constrained by polarized deep inelastic scattering measurements, and show a preference for a sizable, positive up antiquark polarization in the range 0.05
Biased Random-Key Genetic Algorithms for the Winner Determination Problem in Combinatorial Auctions.
Abstract:
In this paper, we address the problem of picking a subset of bids in a general combinatorial auction so as to maximize the overall profit using the first-price model. This winner determination problem assumes that a single bidding round is held to determine both the winners and the prices to be paid. We introduce six variants of biased random-key genetic algorithms for this problem. Three of them use a novel initialization technique that uses solutions of intermediate linear programming relaxations of an exact mixed integer linear programming model as the initial chromosomes of the population. An experimental evaluation compares the effectiveness of the proposed algorithms with the standard mixed integer linear programming formulation, a specialized exact algorithm, and the best-performing heuristics proposed for this problem. The proposed algorithms are competitive and offer strong results, mainly for large-scale auctions.
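To illustrate the two BRKGA ingredients named above, here is a minimal sketch of a greedy random-key decoder for winner determination and the biased parametrized-uniform crossover; the bid instance and the elite-inheritance probability rho are made up:

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy instance: each bid is (set of items requested, price offered).
bids = [({0, 1}, 6.0), ({1, 2}, 5.0), ({2}, 3.0), ({0}, 2.5), ({3}, 1.0)]

def decode(keys):
    """Greedy decoder: visit bids in increasing key order, accept a bid
    whenever none of its items has already been awarded."""
    taken, profit = set(), 0.0
    for i in np.argsort(keys):
        items, price = bids[i]
        if not (items & taken):
            taken |= items
            profit += price
    return profit

def biased_crossover(elite, non_elite, rho=0.7):
    """Each gene is inherited from the elite parent with probability rho."""
    mask = rng.random(elite.size) < rho
    return np.where(mask, elite, non_elite)

pop = rng.random((20, len(bids)))                 # random-key population
fitness = np.array([decode(ind) for ind in pop])
elite, other = pop[fitness.argmax()], pop[rng.integers(len(pop))]
child = biased_crossover(elite, other)
print(decode(elite), decode(child))
```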
Abstract:
One of the most important properties of quantum dots (QDs) is their size. Their size determines their optical properties and, in a colloidal medium, their range of interaction. The most common techniques used to measure QD size are transmission electron microscopy (TEM) and X-ray diffraction. However, these techniques require the sample to be dried and kept under vacuum, so any hydrodynamic information is excluded and the preparation process may even alter the size of the QDs. Fluorescence correlation spectroscopy (FCS) is an optical technique with single-molecule sensitivity capable of extracting the hydrodynamic radius (HR) of QDs. The main drawback of FCS is the blinking phenomenon, which alters the correlation function and makes the apparent QD size smaller than it really is. In this work, we developed a method to exclude blinking from the FCS analysis and measured the HR of colloidal QDs. We compared our results with TEM images, and the HR obtained by FCS is larger than the radius measured by TEM. We attribute this difference to the cap layer of the QD, which cannot be seen in TEM images.
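For context, a sketch of the standard route from an FCS diffusion time to a hydrodynamic radius via the Stokes-Einstein relation; the beam waist and diffusion time are assumed example values, not the paper's measurements:

```python
import numpy as np

# From the FCS lateral diffusion time tau_D = w^2 / (4 D) to the
# hydrodynamic radius via Stokes-Einstein: R_h = k_B T / (6 pi eta D).
K_B = 1.380649e-23     # J/K
T = 298.15             # K, room temperature (assumed)
ETA = 0.89e-3          # Pa.s, viscosity of water at 25 C (assumed solvent)

def hydrodynamic_radius(tau_d, beam_waist):
    """R_h in meters, from the diffusion time (s) and beam waist (m)."""
    diffusion = beam_waist**2 / (4.0 * tau_d)     # m^2/s
    return K_B * T / (6.0 * np.pi * ETA * diffusion)

# Example values: 120 us diffusion time, 0.25 um beam waist.
r_h = hydrodynamic_radius(tau_d=120e-6, beam_waist=0.25e-6)
print(f"R_h of about {r_h * 1e9:.1f} nm")
```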