884 results for random sample
Abstract:
We used microsatellites to study the fine-scale genetic structure of a highly polygynous and largely unicolonial population of the ant Formica paralugubris. Genetic data indicate that long-distance gene flow between established nests is limited and that new queens are primarily recruited from within their natal nest. Most matings occur between nestmates and are random at this level. In the center of the study area, budding and permanent connections between nests result in strong population viscosity, with close nests being more similar genetically than distant nests. In contrast, nests located outside of this supercolony show no isolation by distance, suggesting that they were initiated by queens that participated in mating flights rather than by budding from nearby nests in our sample population. Recruitment of nestmates as new reproductive individuals and population viscosity in the supercolony increase genetic differentiation between nests. This in turn inflates relatedness estimates among worker nestmates (r = 0.17) above what is due to close pedigree links. Local spatial genetic differentiation may favor the maintenance of altruism when workers raise queens that will disperse on foot and compete with less related queens from neighboring nests, or disperse on the wing and compete with unrelated queens.
Abstract:
Small-sample properties are of fundamental interest when only limited data are available. Exact inference is limited by constraints imposed by specific nonrandomized tests and, of course, also by the lack of more data. These effects can be separated, as we propose to evaluate a test by comparing its type II error to the minimal type II error among all tests for the given sample. Game theory is used to establish this minimal type II error; the associated randomized test is characterized as part of a Nash equilibrium of a fictitious game against nature. We use this method to investigate sequential tests for the difference between two means when outcomes are constrained to belong to a given bounded set. Tests of inequality and of noninferiority are included. We find that inference in terms of type II error based on a balanced sample cannot be improved by sequential sampling, or even by observing counterfactual evidence, provided there is a reasonable gap between the hypotheses.
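For concreteness, the two-means testing problem with a gap can be written as follows; this formalization is reconstructed from the abstract, so the notation (the gap Δ, the level-α benchmark β*) is an editorial assumption rather than the paper's exact statement:

```latex
% Bounded outcomes, e.g. X_i, Y_i \in [0,1]; gap \Delta > 0 between hypotheses
H_0 : \mu_X - \mu_Y \le 0
\qquad \text{vs.} \qquad
H_1 : \mu_X - \mu_Y \ge \Delta .
% Benchmark: minimal type II error over all (possibly randomized) level-\alpha tests
\beta^{*} \;=\; \inf_{\varphi \,:\, \sup_{H_0} \mathbb{E}[\varphi] \le \alpha}
\;\; \sup_{H_1} \mathbb{E}\!\left[\, 1 - \varphi \,\right] .
```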
Abstract:
This paper analyzes whether standard covariance matrix tests work when dimensionality is large, and in particular larger than sample size. In the latter case, the singularity of the sample covariance matrix makes likelihood ratio tests degenerate, but other tests based on quadratic forms of sample covariance matrix eigenvalues remain well-defined. We study the consistency property and limiting distribution of these tests as dimensionality and sample size go to infinity together, with their ratio converging to a finite non-zero limit. We find that the existing test for sphericity is robust against high dimensionality, but not the test for equality of the covariance matrix to a given matrix. For the latter test, we develop a new correction to the existing test statistic that makes it robust against high dimensionality.
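A minimal sketch of a quadratic-form eigenvalue statistic of the kind described above; John's sphericity statistic U is used as the concrete example (whether it matches the paper's exact statistic is an assumption):

```python
import numpy as np

def john_sphericity_U(X):
    """John's sphericity statistic U = (1/p) * tr[(S/((1/p) tr S) - I)^2],
    a quadratic form in the eigenvalues of the sample covariance S.
    It stays well-defined even when p > n and S is singular."""
    S = np.cov(X, rowvar=False)      # p x p sample covariance
    lam = np.linalg.eigvalsh(S)      # eigenvalues of S
    lam_bar = lam.mean()             # (1/p) tr S
    return np.mean((lam / lam_bar - 1.0) ** 2)

# Example with p > n: the statistic remains computable despite singular S.
rng = np.random.default_rng(0)
X = rng.standard_normal((50, 200))   # n = 50 observations, p = 200 dimensions
print(john_sphericity_U(X))
```

Because U depends on S only through normalized eigenvalues, it avoids the degeneracy that kills likelihood ratio tests when p exceeds n.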
Abstract:
Random coefficient regression models have been applied in different fields and they constitute a unifying setup for many statistical problems. The nonparametric study of this model started with Beran and Hall (1992) and it has become a fruitful framework. In this paper we propose and study statistics for testing a basic hypothesis concerning this model: the constancy of coefficients. The asymptotic behavior of the statistics is investigated and bootstrap approximations are used in order to determine the critical values of the test statistics. A simulation study illustrates the performance of the proposals.
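A generic sketch of the bootstrap step used to obtain critical values; the resampling scheme and the statistic are placeholders, since the paper's own test statistics are not reproduced here:

```python
import numpy as np

def bootstrap_critical_value(data, statistic, alpha=0.05, B=999, seed=0):
    """Approximate the (1 - alpha) critical value of a test statistic by
    resampling rows of `data` (a numpy array) with replacement B times.
    In a real test the resampling scheme must mimic the null hypothesis
    (here: constant coefficients), a detail this sketch glosses over."""
    rng = np.random.default_rng(seed)
    n = len(data)
    t_star = np.array([statistic(data[rng.integers(0, n, size=n)])
                       for _ in range(B)])
    return np.quantile(t_star, 1.0 - alpha)

# Reject constancy of coefficients when the observed statistic exceeds
# the bootstrap critical value.
```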
Abstract:
This paper generalizes the original random matching model of money by Kiyotaki and Wright (1989) (KW) in two aspects: first, the economy is characterized by an arbitrary distribution of agents who specialize in producing a particular consumption good; and second, these agents have preferences such that they want to consume any good with some probability. The results depend crucially on the size of the fraction of producers of each good and the probability with which different agents want to consume each good. KW and other related models are shown to be parameterizations of this more general one.
Abstract:
The central message of this paper is that nobody should be using the sample covariance matrix for the purpose of portfolio optimization. It contains estimation error of the kind most likely to perturb a mean-variance optimizer. In its place, we suggest using the matrix obtained from the sample covariance matrix through a transformation called shrinkage. This tends to pull the most extreme coefficients towards more central values, thereby systematically reducing estimation error where it matters most. Statistically, the challenge is to know the optimal shrinkage intensity, and we give the formula for that. Without changing any other step in the portfolio optimization process, we show on actual stock market data that shrinkage reduces tracking error relative to a benchmark index, and substantially increases the realized information ratio of the active portfolio manager.
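The shrinkage transformation has become a textbook operation; a minimal sketch using scikit-learn's LedoitWolf estimator (the specific shrinkage target, a scaled identity, follows that implementation and is an assumption about the paper's exact formula):

```python
import numpy as np
from sklearn.covariance import LedoitWolf

rng = np.random.default_rng(0)
X = rng.standard_normal((60, 100))   # 60 return observations on 100 assets

lw = LedoitWolf().fit(X)
sigma_shrunk = lw.covariance_        # shrunk covariance matrix
delta = lw.shrinkage_                # data-driven shrinkage intensity in [0, 1]

# The same transformation by hand: pull the sample covariance S toward
# the scaled identity mu * I (bias=True divides by n, matching sklearn).
S = np.cov(X, rowvar=False, bias=True)
mu = np.trace(S) / S.shape[0]
sigma_manual = delta * mu * np.eye(S.shape[0]) + (1.0 - delta) * S
```

The extreme eigenvalues of S are pulled toward the average eigenvalue mu, which is exactly the "most extreme coefficients towards more central values" effect described in the abstract.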
Abstract:
BACKGROUND: Community-based diabetes screening programs can help sensitize the population and identify new cases. However, the impact of such programs is rarely assessed in high-income countries, where concurrent health information and screening opportunities are commonplace. INTERVENTION AND METHODS: A 2-week screening and awareness campaign was organized as part of a new diabetes program in the canton of Vaud (population of 697,000) in Switzerland. Screening was performed without appointment in 190 of 244 pharmacies in the canton at the subsidized cost of 10 Swiss francs per participant. Screening included questions on risk behaviors, measurement of body mass index, blood pressure, blood cholesterol, random blood glucose (RBG), and A1c if RBG was ≥7.0 mmol/L. A mass media campaign promoting physical activity and a healthy diet was channeled through several media, e.g., 165 spots on radio, billboards in 250 public places, flyers in 360 public transport vehicles, and a dozen articles in several newspapers. A telephone survey of a representative sample of the population of the canton was performed after the campaign to evaluate the program. RESULTS: A total of 4222 participants (0.76% of all persons aged ≥18 years) underwent screening (median age: 53 years, 63% female). Among participants not treated for diabetes, 3.7% had RBG ≥7.8 mmol/L and 1.8% had both RBG ≥7.0 mmol/L and A1c ≥6.5%. Untreated blood pressure ≥140/90 mmHg and/or untreated cholesterol ≥5.2 mmol/L were found in 50.5% of participants. One or several treated or untreated modifiable risk factors were found in 78% of participants. The telephone survey showed that 53% of all adults in the canton were sensitized by the campaign. Excluding fees paid by the participants, the program incurred a cost of CHF 330,600. CONCLUSION: A community-based screening program had low efficiency for detecting new cases of diabetes, but it identified large numbers of persons with other elevated cardiovascular risk factors. Our findings suggest the convenience of A1c for mass screening of diabetes, the usefulness of extending diabetes screening to other cardiovascular risk factors, and the importance of a robust background communication campaign.
Abstract:
Most methods for small-area estimation are based on composite estimators derived from design- or model-based methods. A composite estimator is a linear combination of a direct and an indirect estimator, with weights that usually depend on unknown parameters which need to be estimated. Although model-based small-area estimators are usually based on random-effects models, the assumption of fixed effects is at face value more appropriate. Model-based estimators are justified by the assumption of random (interchangeable) area effects; in practice, however, areas are not interchangeable. In the present paper we empirically assess the quality of several small-area estimators in the setting in which the area effects are treated as fixed. We consider two settings: one that draws samples from a theoretical population, and another that draws samples from an empirical population of a labor force register maintained by the National Institute of Social Security (NISS) of Catalonia. We distinguish two types of composite estimators: (a) those that use weights that involve area-specific estimates of bias and variance; and (b) those that use weights that involve a common variance and a common squared-bias estimate for all the areas. We assess their precision and discuss alternatives to optimizing composite estimation in applications.
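In symbols, the composite estimator for area a combines the two components; the MSE-minimizing weight shown below assumes approximately uncorrelated components, an assumption added here rather than taken from the paper:

```latex
\hat{\theta}_{a}^{\mathrm{comp}}
  = w_a \,\hat{\theta}_{a}^{\mathrm{dir}} + (1 - w_a)\,\hat{\theta}_{a}^{\mathrm{ind}},
\qquad
w_a^{*} = \frac{\mathrm{MSE}\!\left(\hat{\theta}_{a}^{\mathrm{ind}}\right)}
               {\mathrm{MSE}\!\left(\hat{\theta}_{a}^{\mathrm{dir}}\right)
              + \mathrm{MSE}\!\left(\hat{\theta}_{a}^{\mathrm{ind}}\right)} .
```

Variant (a) above estimates the MSE terms separately for each area, while variant (b) replaces them with a common variance and a common squared bias for all areas.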
Abstract:
In this paper I explore the issue of nonlinearity (both in the data generation process and in the functional form that establishes the relationship between the parameters and the data) regarding the poor performance of the Generalized Method of Moments (GMM) in small samples. To this purpose I build a sequence of models, starting with a simple linear model and enlarging it progressively until I approximate a standard (nonlinear) neoclassical growth model. I then use simulation techniques to find the small-sample distribution of the GMM estimators in each of the models.
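A minimal sketch of the simulation exercise at the linear end of such a sequence of models; the instrument structure, sample size, and 2SLS-style weighting matrix are illustrative choices, not the paper's:

```python
import numpy as np

def gmm_linear(y, X, Z, W):
    """Linear GMM: beta_hat = (X'Z W Z'X)^(-1) X'Z W Z'y for moment
    conditions E[z_i (y_i - x_i' beta)] = 0 and weighting matrix W."""
    A = X.T @ Z @ W @ Z.T @ X
    b = X.T @ Z @ W @ Z.T @ y
    return np.linalg.solve(A, b)

# Monte Carlo: small-sample distribution of the GMM estimator in a
# simple linear model (no endogeneity, purely for illustration).
rng = np.random.default_rng(0)
n, reps, beta_true = 50, 2000, 1.0
estimates = np.empty(reps)
for r in range(reps):
    z = rng.standard_normal((n, 2))           # two instruments
    x = z @ np.array([1.0, 0.5]) + rng.standard_normal(n)
    y = beta_true * x + rng.standard_normal(n)
    W = np.linalg.inv(z.T @ z / n)            # 2SLS weighting matrix
    estimates[r] = gmm_linear(y, x[:, None], z, W)[0]
print(estimates.mean(), estimates.std())      # small-sample bias and spread
```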
Abstract:
We derive a new inequality for uniform deviations of averages from their means. The inequality is a common generalization of previous results of Vapnik and Chervonenkis (1974) and Pollard (1986). Using the new inequality we obtain tight bounds for empirical loss minimization learning.
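For reference, the Vapnik-Chervonenkis bound that such inequalities generalize has the following classical form (quoted from the standard literature, not from the paper itself):

```latex
\mathbb{P}\left\{ \sup_{A \in \mathcal{A}}
  \left| \frac{1}{n}\sum_{i=1}^{n} \mathbf{1}\{X_i \in A\} - \mathbb{P}(A) \right|
  > \varepsilon \right\}
\;\le\; 4\, S_{\mathcal{A}}(2n)\, e^{-n\varepsilon^{2}/8},
```

where S_𝒜(2n) is the shatter coefficient of the class 𝒜 on 2n points.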
Abstract:
We study the statistical properties of three estimation methods for a model of learning that is often fitted to experimental data: quadratic deviation measures without unobserved heterogeneity, and maximum likelihood with and without unobserved heterogeneity. After discussing identification issues, we show that the estimators are consistent and provide their asymptotic distribution. Using Monte Carlo simulations, we show that ignoring unobserved heterogeneity can lead to seriously biased estimates in samples of the length typical of actual experiments. Better small-sample properties are obtained if unobserved heterogeneity is introduced. That is, rather than estimating the parameters for each individual, the individual parameters are treated as random variables, and the distribution of those random variables is estimated.
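The random-parameters idea amounts to integrating the individual likelihood over a population distribution; a minimal simulated-likelihood sketch follows, where the normal mixing distribution and the per-individual likelihood `f_individual` are illustrative assumptions, not the paper's specification:

```python
import numpy as np

def simulated_log_likelihood(data, f_individual, mu, sigma, R=500, seed=0):
    """Log-likelihood with unobserved heterogeneity: each individual's
    parameter theta_i is a draw from N(mu, sigma^2), and the individual
    likelihood is averaged over R simulated draws:
        L_i = (1/R) * sum_r f_individual(data_i, theta_r)."""
    rng = np.random.default_rng(seed)
    theta = mu + sigma * rng.standard_normal(R)   # draws from mixing dist.
    ll = 0.0
    for d_i in data:                              # one entry per individual
        L_i = np.mean([f_individual(d_i, t) for t in theta])
        ll += np.log(L_i)
    return ll
```

Maximizing this over (mu, sigma) estimates the distribution of the individual parameters instead of each individual parameter separately.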
Abstract:
This paper presents a new framework for studying irreversible (dis)investment when a market follows a random number of random-length cycles (such as a high-tech product market). It is assumed that a firm facing such market evolution is always unsure about whether the current cycle is the last one, although it can update its beliefs about the probability of facing a permanent decline by observing that no further growth phase arrives. We show that the existence of regime shifts in fluctuating markets suffices for an option value of waiting to (dis)invest to arise, and we provide a marginal interpretation of the optimal (dis)investment policies, absent in the real options literature. The paper also shows that, even though the stochastic process of the underlying variable has a continuous sample path, the discreteness of the regime changes implies that the sample path of the firm's value experiences jumps whenever the regime switches suddenly, irrespective of whether the firm is active or not.
Abstract:
This paper proposes a common and tractable framework for analyzing different definitions of fixed and random effects in a constant-slope, variable-intercept model. It is shown that, regardless of whether effects (i) are treated as parameters or as an error term, (ii) are estimated in different stages of a hierarchical model, or whether (iii) correlation between effects and regressors is allowed, when the same information on effects is introduced into all estimation methods, the resulting slope estimator is the same across methods. If different methods produce different results, it is ultimately because different information is being used for each method.
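As a small numerical illustration of the equivalence claim, the sketch below compares the slope from least squares with group dummies (effects as parameters) to the within estimator (effects swept out); choosing this particular pair of methods is an editorial assumption:

```python
import numpy as np

rng = np.random.default_rng(0)
G, n_per = 10, 20                       # 10 groups, 20 observations each
g = np.repeat(np.arange(G), n_per)      # group labels
alpha = rng.standard_normal(G)[g]       # group-specific intercepts
x = rng.standard_normal(G * n_per)
y = alpha + 2.0 * x + rng.standard_normal(G * n_per)

# (i) effects as parameters: OLS with one dummy per group (LSDV)
D = np.eye(G)[g]                        # dummy matrix
Z = np.column_stack([D, x])
slope_lsdv = np.linalg.lstsq(Z, y, rcond=None)[0][-1]

# (ii) effects swept out: OLS on group-demeaned data (within estimator)
gm = lambda v: v - np.bincount(g, v)[g] / n_per   # subtract group means
slope_within = np.linalg.lstsq(gm(x)[:, None], gm(y), rcond=None)[0][0]

print(slope_lsdv, slope_within)         # identical up to numerical error
```

Both methods use the same information on the effects, and the two slope estimates coincide, in line with the paper's message.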
Abstract:
OBJECTIVE: The aim of this study was to evaluate a French-language version of the Adolescent Drug Abuse Diagnosis (ADAD) instrument in a Swiss sample of adolescent illicit drug and/or alcohol users. PARTICIPANTS AND SETTING: The participants in the study were 102 French-speaking adolescents aged 13-19 years who met the criteria of illicit drug or alcohol use (at least one substance, except tobacco, once a week during the last 3 months). They were recruited in hospitals, institutions and leisure places. PROCEDURE: The ADAD was administered individually by trained psychologists. It was integrated into a broader protocol including alcohol and drug abuse DSM-IV diagnoses, the BDI-13 (Beck Depression Inventory), life events and treatment trajectories. RESULTS: The ADAD appears to show good inter-rater reliability; the subscales showed good internal coherence, and the correlations between the composite scores and the severity ratings were moderate to high. Finally, the results confirmed good concurrent validity for three out of eight ADAD dimensions. CONCLUSIONS: The French-language version of the ADAD appears to be an adequate instrument for assessing drug use and associated problems in adolescents. Despite its complexity, the instrument meets acceptable validity, reliability and usefulness criteria, enabling international and transcultural comparisons.