999 results for permutation test
Abstract:
Hierarchical clustering is a popular method for finding structure in multivariate data, resulting in a binary tree constructed on the particular objects of the study, usually sampling units. The user faces the decision of where to cut the binary tree in order to determine the number of clusters to interpret, and there are various ad hoc rules for arriving at a decision. A simple permutation test is presented that diagnoses whether non-random levels of clustering are present in the set of objects and, if so, indicates the specific level at which the tree can be cut. The test is validated against random matrices to verify the type I error probability, and a power study is performed on data sets with known clusteredness to study the type II error.
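The abstract does not reproduce the algorithm. As a hedged sketch of a permutation test for non-random clustering, one might compare a clustering statistic on the observed data with its distribution under independent column permutations (which destroy inter-variable association while preserving each variable's marginal distribution). Both the statistic (mean nearest-neighbour distance) and the column-permutation null are illustrative choices here, not necessarily the paper's:

```python
import numpy as np

def clusteredness_pvalue(X, n_perm=199, seed=0):
    """Permutation test for non-random clustering structure (illustrative).

    Independently permuting each column of X breaks the dependence
    between variables; a smaller mean nearest-neighbour distance in
    the observed data than in the permuted data suggests genuine
    clustering.  X: (n_objects, n_variables) data matrix.
    """
    rng = np.random.default_rng(seed)

    def mean_nn_dist(M):
        # pairwise Euclidean distances, diagonal masked out
        d = np.sqrt(((M[:, None, :] - M[None, :, :]) ** 2).sum(-1))
        np.fill_diagonal(d, np.inf)
        return d.min(axis=1).mean()

    obs = mean_nn_dist(X)
    count = 0
    for _ in range(n_perm):
        Xp = np.column_stack([rng.permutation(col) for col in X.T])
        if mean_nn_dist(Xp) <= obs:
            count += 1
    # +1 correction makes the Monte Carlo p-value valid under H0
    return (count + 1) / (n_perm + 1)
```

Cutting the tree at a specific level, as the paper describes, would repeat such a comparison per merge level; the sketch only covers the global "is there clustering at all" diagnosis.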
Abstract:
In a weighted spatial network, as specified by an exchange matrix, the variances of the spatial values are inversely proportional to the size of the regions. Spatial values are no longer exchangeable under independence, thus weakening the rationale for ordinary permutation and bootstrap tests of spatial autocorrelation. We propose an alternative permutation test for spatial autocorrelation, based upon exchangeable spatial modes, constructed as linear orthogonal combinations of spatial values. The coefficients are obtained as eigenvectors of the standardised exchange matrix appearing in spectral clustering, and generalise to the weighted case the concept of spatial filtering for connectivity matrices. Also, two proposals aimed at transforming an accessibility matrix into an exchange matrix with a priori fixed margins are presented. Two examples (inter-regional migratory flows and binary adjacency networks) illustrate the formalism, rooted in the theory of spectral decomposition for reversible Markov chains.
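A rough sketch of the construction under stated assumptions: a symmetric exchange matrix E whose margins give the regional weights, mode coefficients obtained from the eigenvectors of the standardised matrix, and a Moran-like weighted-eigenvalue statistic. The statistic and the permutation scheme are my illustrative reading of the abstract, not necessarily the paper's exact procedure:

```python
import numpy as np

def mode_permutation_test(E, x, n_perm=999, seed=0):
    """Permutation test on exchangeable spatial modes (illustrative).

    E: symmetric exchange matrix (non-negative, entries summing to 1);
    x: spatial values.  Under H0, Var(x_i) is proportional to 1/f_i,
    so the rescaled vector sqrt(f) * x has i.i.d. components and its
    mode coefficients are exchangeable, which justifies permuting them.
    """
    f = E.sum(axis=1)                          # regional weights (margins)
    Es = E / np.sqrt(np.outer(f, f))           # standardised exchange matrix
    vals, U = np.linalg.eigh(Es)               # eigh returns ascending order
    order = np.argsort(vals)[::-1]
    vals, U = vals[order][1:], U[:, order][:, 1:]   # drop trivial mode sqrt(f)
    a = U.T @ (np.sqrt(f) * x)                 # mode coefficients
    # Moran-like index: eigenvalue-weighted share of each mode's energy
    stat = lambda c: float((vals * c ** 2).sum() / (c ** 2).sum())
    obs = stat(a)
    rng = np.random.default_rng(seed)
    hits = sum(stat(rng.permutation(a)) >= obs for _ in range(n_perm))
    return obs, (hits + 1) / (n_perm + 1)
```

Smooth spatial fields load on modes with large eigenvalues, so the observed index sits in the upper tail of the permutation distribution when positive autocorrelation is present.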
Abstract:
With the advent of functional neuroimaging techniques, in particular functional magnetic resonance imaging (fMRI), we have gained greater insight into the neural correlates of visuospatial function. However, it may not always be easy to identify the cerebral regions most specifically associated with performance on a given task. One approach is to examine the quantitative relationships between regional activation and behavioral performance measures. In the present study, we investigated the functional neuroanatomy of two different visuospatial processing tasks, judgement of line orientation and mental rotation. Twenty-four normal participants were scanned with fMRI using blocked periodic designs for experimental task presentation. Accuracy and reaction time (RT) to each trial of both activation and baseline conditions in each experiment were recorded. Both experiments activated dorsal and ventral visual cortical areas as well as dorsolateral prefrontal cortex. More regionally specific associations with task performance were identified by estimating the association between (sinusoidal) power of functional response and mean RT to the activation condition; a permutation test based on spatial statistics was used for inference. There was significant behavioral-physiological association in right ventral extrastriate cortex for the line orientation task and in bilateral (predominantly right) superior parietal lobule for the mental rotation task. Comparable associations were not found between power of response and RT to the baseline conditions of the tasks. These data suggest that one region in a neurocognitive network may be most strongly associated with behavioral performance, and this may be regarded as the computationally least efficient or rate-limiting node of the network.
Abstract:
SETTING: Chronic obstructive pulmonary disease (COPD) is the third leading cause of death among adults in Brazil. OBJECTIVE: To evaluate the mortality and hospitalisation trends in Brazil caused by COPD during the period 1996-2008. DESIGN: We used the official health statistics system to obtain data about mortality (1996-2008) and morbidity (1998-2008) due to COPD and all respiratory diseases (tuberculosis: codes A15-16; lung cancer: code C34; and all diseases coded J40 to J47 in the 10th Revision of the International Classification of Diseases) as the underlying cause, in persons aged 45-74 years. We used the Joinpoint Regression Program log-linear model, based on Poisson regression with a Monte Carlo permutation test, to identify points where trend lines change significantly in magnitude or direction, and so to verify peaks and trends. RESULTS: The annual per cent change in age-adjusted death rates due to COPD declined by 2.7% (95%CI -3.6 to -1.8) in men and by 2.0% (95%CI -2.9 to -1.0) in women; due to all respiratory causes it declined by 1.7% (95%CI -2.4 to -1.0) in men and by 1.1% (95%CI -1.8 to -0.3) in women. Although hospitalisation rates for COPD are declining, the hospital admission fatality rate increased in both sexes. CONCLUSION: COPD is still a leading cause of mortality in Brazil despite the observed decline in the mortality/hospitalisation rates for both sexes.
Abstract:
Understanding why dispersal is sex-biased in many taxa is still a major concern in evolutionary ecology. Dispersal tends to be male-biased in mammals and female-biased in birds, but counter-examples exist and little is known about sex bias in other taxa. Obtaining accurate measures of dispersal in the field remains a problem. Here we describe and compare several methods for detecting sex-biased dispersal using bi-parentally inherited, codominant genetic markers. If gene flow is restricted among populations, then the genotype of an individual tells something about its origin. Provided that dispersal occurs at the juvenile stage and that sampling is carried out on adults, genotypes sampled from the dispersing sex should on average be less likely (compared to genotypes from the philopatric sex) in the population in which they were sampled. The dispersing sex should be less genetically structured and should present a larger heterozygote deficit. In this study we use computer simulations and a permutation test on four statistics to investigate the conditions under which sex-biased dispersal can be detected. Two tests emerge as fairly powerful. We present results concerning the optimal sampling strategy (varying number of samples, individuals, loci per individual and level of polymorphism) under different amounts of dispersal for each sex. These tests for biases in dispersal are also appropriate for any attribute (e.g. size, colour, status) suspected to influence the probability of dispersal. A Windows program carrying out these tests can be freely downloaded from http://www.unil.ch/izea/softwares/fstat.html
Abstract:
Consider the problem of testing k hypotheses simultaneously. In this paper, we discuss finite and large sample theory of stepdown methods that provide control of the familywise error rate (FWE). In order to improve upon the Bonferroni method or Holm's (1979) stepdown method, Westfall and Young (1993) make effective use of resampling to construct stepdown methods that implicitly estimate the dependence structure of the test statistics. However, their methods depend on an assumption called subset pivotality. The goal of this paper is to construct general stepdown methods that do not require such an assumption. In order to accomplish this, we take a close look at what makes stepdown procedures work, and a key component is a monotonicity requirement of critical values. By imposing such monotonicity on estimated critical values (which is not an assumption on the model but an assumption on the method), it is demonstrated that the problem of constructing a valid multiple test procedure which controls the FWE can be reduced to the problem of constructing a single test which controls the usual probability of a Type 1 error. This reduction allows us to draw upon an enormous resampling literature as a general means of test construction.
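The monotonicity idea can be illustrated with a resampling-based maxT stepdown procedure. This is a sketch of the general scheme (with monotonicity of critical values enforced explicitly), not the authors' exact construction; with max-statistics the enforcement is automatic, but the line shows where the requirement enters:

```python
import numpy as np

def stepdown_maxT(stats, null_stats, alpha=0.05):
    """Stepdown multiple testing via max-statistics (illustrative sketch).

    stats: (k,) observed test statistics (large = significant);
    null_stats: (B, k) statistics resampled under the null, preserving
    the dependence structure across the k tests.
    Returns a boolean rejection vector controlling the FWE.
    """
    stats = np.asarray(stats)
    active = list(np.argsort(stats)[::-1])     # most significant first
    rejected = np.zeros(len(stats), dtype=bool)
    prev_crit = np.inf
    while active:
        # critical value: (1 - alpha) quantile of the max over the
        # hypotheses not yet rejected
        crit = np.quantile(null_stats[:, active].max(axis=1), 1 - alpha)
        crit = min(crit, prev_crit)            # enforce monotone critical values
        if stats[active[0]] <= crit:
            break                              # stop: no further rejections
        rejected[active[0]] = True
        prev_crit = crit
        active = active[1:]
    return rejected
```

Because later critical values are computed over fewer hypotheses, they can only shrink, which is exactly the monotonicity that makes the stepdown argument go through.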
Abstract:
INTRODUCTION: Adaptive statistical iterative reconstruction (ASIR) can decrease image noise, thereby generating CT images of comparable diagnostic quality with less radiation. The purpose of this study is to quantify the effect of systematic use of ASIR versus filtered back projection (FBP) for neuroradiology CT protocols on patients' radiation dose and image quality. METHODS: We evaluated the effect of ASIR on six types of neuroradiologic CT studies: adult and pediatric unenhanced head CT, adult cervical spine CT, adult cervical and intracranial CT angiography, adult soft tissue neck CT with contrast, and adult lumbar spine CT. For each type of CT study, two groups of 100 consecutive studies were retrospectively reviewed: 100 studies performed with FBP and 100 studies performed with an ASIR/FBP blending factor of 40%/60% with appropriate noise indices. The weighted volume CT dose index (CTDIvol), dose-length product (DLP) and noise were recorded. Each study was also reviewed for image quality by two reviewers. Continuous and categorical variables were compared by t test and free permutation test, respectively. RESULTS: For adult unenhanced brain CT, CT cervical myelography, cervical and intracranial CT angiography and lumbar spine CT, both CTDIvol and DLP were lowered by up to 10.9% (p < 0.001), 17.9% (p = 0.005), 20.9% (p < 0.001), and 21.7% (p = 0.001), respectively, by using ASIR compared with FBP alone. Image quality and noise were similar for both FBP and ASIR. CONCLUSION: We recommend routine use of iterative reconstruction for neuroradiology CT examinations because this approach affords a significant dose reduction while preserving image quality.
Abstract:
BACKGROUND: Psychogenic non-epileptic seizures (PNES) are involuntary paroxysmal events that are unaccompanied by epileptiform EEG discharges. We hypothesised that PNES are a disorder of distributed brain networks resulting from their functional disconnection. The disconnection may underlie a dissociation mechanism that weakens the influence of unconsciously presented traumatising information but exerts maladaptive effects leading to episodic failures of behavioural control manifested by psychogenic 'seizures'. METHODS: To test this hypothesis, we compared functional connectivity (FC) derived from resting state high-density EEGs of 18 patients with PNES and 18 age-matched and gender-matched controls. To this end, the EEGs were transformed into source space using the local autoregressive average inverse solution. FC was estimated with a multivariate measure of lagged synchronisation in the θ, α and β frequency bands for 66 brain sites clustered into 18 regions. A multiple comparison permutation test was applied to deduce significant between-group differences in inter-regional and intraregional FC. RESULTS: The significant effect of PNES, a decrease in lagged FC between the basal ganglia and limbic, prefrontal, temporal, parietal and occipital regions, was found in the α band. CONCLUSION: We believe that this finding reveals a possible neurobiological substrate of PNES, which explains both attenuation of the effect of potentially disturbing mental representations and the occurrence of PNES episodes. By improving understanding of the aetiology of this condition, our results suggest a potential refinement of diagnostic criteria and management principles.
Abstract:
In this paper, we study several tests for the equality of two unknown distributions. Two are based on empirical distribution functions, three others on nonparametric probability density estimates, and the last ones on differences between sample moments. We suggest controlling the size of such tests (under nonparametric assumptions) by using permutational versions of the tests jointly with the method of Monte Carlo tests properly adjusted to deal with discrete distributions. We also propose a combined test procedure, whose level is again perfectly controlled through the Monte Carlo test technique and has better power properties than the individual tests that are combined. Finally, in a simulation experiment, we show that the technique suggested provides perfect control of test size and that the new tests proposed can yield sizeable power improvements.
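A minimal Monte Carlo permutation test of equality of two distributions in the spirit described (a single moment-based statistic only; the paper's combined procedure and its adjustment for discrete distributions are not shown):

```python
import numpy as np

def perm_test_2samp(x, y, stat=lambda a, b: abs(a.mean() - b.mean()),
                    n_perm=999, seed=0):
    """Monte Carlo permutation test of H0: x and y share a distribution.

    Under H0 the pooled observations are exchangeable, so reshuffling
    the pooled sample and re-splitting it yields the null distribution
    of the statistic.  The +1 correction makes the p-value exact for
    any finite number of permutations.
    """
    rng = np.random.default_rng(seed)
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    z = np.concatenate([x, y])
    n = len(x)
    obs = stat(x, y)
    count = 0
    for _ in range(n_perm):
        zp = rng.permutation(z)
        count += stat(zp[:n], zp[n:]) >= obs
    return (count + 1) / (n_perm + 1)
```

The `stat` argument can be swapped for an empirical-distribution-function or density-based statistic, and several such p-values could then be combined, which is the direction the paper takes.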
Abstract:
In contrast to prior studies showing a positive lapse-rate feedback associated with the Arctic inversion, Boé et al. reported that strong present-day Arctic temperature inversions are associated with stronger negative longwave feedbacks and thus reduced Arctic amplification in the model ensemble from phase 3 of the Coupled Model Intercomparison Project (CMIP3). A permutation test reveals that the relation between longwave feedbacks and inversion strength is an artifact of statistical self-correlation and that shortwave feedbacks have a stronger correlation with intermodel spread. The present comment concludes that the conventional understanding of a positive lapse-rate feedback associated with the Arctic inversion is consistent with the CMIP3 model ensemble.
Abstract:
Differences-in-Differences (DID) is one of the most widely used identification strategies in applied economics. However, how to draw inferences in DID models when there are few treated groups remains an open question. We show that the usual inference methods used in DID models might not perform well when there are few treated groups and errors are heteroskedastic. In particular, we show that when there is variation in the number of observations per group, inference methods designed to work when there are few treated groups tend to (under-) over-reject the null hypothesis when the treated groups are (large) small relative to the control groups. This happens because larger groups tend to have lower variance, generating heteroskedasticity in the group × time aggregate DID model. We provide evidence from Monte Carlo simulations and from placebo DID regressions with the American Community Survey (ACS) and the Current Population Survey (CPS) datasets to show that this problem is relevant even in datasets with large numbers of observations per group. We then derive an alternative inference method that provides accurate hypothesis testing in situations where there are few treated groups (or even just one) and many control groups in the presence of heteroskedasticity. Our method assumes that we can model the heteroskedasticity of a linear combination of the errors. We show that this assumption can be satisfied without imposing strong assumptions on the errors in common DID applications. With many pre-treatment periods, we show that this assumption can be relaxed. Instead, we provide an alternative inference method that relies on strict stationarity and ergodicity of the time series. Finally, we consider two recent alternatives to DID when there are many pre-treatment periods. We extend our inference methods to linear factor models when there are few treated groups.
We also derive conditions under which a permutation test for the synthetic control estimator proposed by Abadie et al. (2010) is robust to heteroskedasticity and propose a modification on the test statistic that provided a better heteroskedasticity correction in our simulations.
Abstract:
Differences-in-Differences (DID) is one of the most widely used identification strategies in applied economics. However, how to draw inferences in DID models when there are few treated groups remains an open question. We show that the usual inference methods used in DID models might not perform well when there are few treated groups and errors are heteroskedastic. In particular, we show that when there is variation in the number of observations per group, inference methods designed to work when there are few treated groups tend to (under-) over-reject the null hypothesis when the treated groups are (large) small relative to the control groups. This happens because larger groups tend to have lower variance, generating heteroskedasticity in the group × time aggregate DID model. We provide evidence from Monte Carlo simulations and from placebo DID regressions with the American Community Survey (ACS) and the Current Population Survey (CPS) datasets to show that this problem is relevant even in datasets with large numbers of observations per group. We then derive an alternative inference method that provides accurate hypothesis testing in situations where there are few treated groups (or even just one) and many control groups in the presence of heteroskedasticity. Our method assumes that we know how the heteroskedasticity is generated, which is the case when it is generated by variation in the number of observations per group. With many pre-treatment periods, we show that this assumption can be relaxed. Instead, we provide an alternative application of our method that relies on assumptions about stationarity and convergence of the moments of the time series. Finally, we consider two recent alternatives to DID when there are many pre-treatment periods. We extend our inference method to linear factor models when there are few treated groups.
We also propose a permutation test for the synthetic control estimator that provided a better heteroskedasticity correction in our simulations than the test suggested by Abadie et al. (2010).
Abstract:
Nonparametric simple-contrast estimates for one-way layouts based on Hodges-Lehmann estimators for two samples, and confidence intervals for all contrasts involving only two treatments, are found in the literature. Tests for such contrasts are performed from the distribution of the maximum of the rank sum between two treatments. For random block designs, simple contrast estimates based on Hodges-Lehmann estimators for one sample are presented. However, discussions concerning the significance levels of more complex contrast tests in nonparametric statistics are not well outlined. This work aims at presenting a methodology to obtain p-values for any contrast type, based on the construction of the permutations required by each design model, using a C-language program for each design type. For small samples, all possible treatment configurations are enumerated in order to obtain the desired p-value. For large samples, a fixed number of random configurations is used. The program prompts for the input of contrast coefficients, but does not assume the existence of, or orthogonality among, them. For orthogonal contrasts, the decomposition of the value of the suitable statistic for each case is performed, and it is observed that the same procedure used in the parametric analysis of variance can be applied in the nonparametric case, that is, each of the orthogonal contrasts has a chi-squared distribution with one degree of freedom. Also, the similarities between the p-values obtained for nonparametric contrasts and those obtained through approximations suggested in the literature are discussed.
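For the one-way layout, the permutation construction for an arbitrary contrast can be sketched in a few lines (in Python rather than the C program described; the statistic, an absolute contrast of group means with random re-assignment of observations to groups, is an illustrative choice and not necessarily the program's exact statistic):

```python
import numpy as np

def contrast_pvalue(groups, coeffs, n_perm=999, seed=0):
    """Monte Carlo permutation p-value for an arbitrary contrast
    in a one-way layout (illustrative sketch).

    groups: list of 1-D arrays of responses, one per treatment;
    coeffs: contrast coefficients, one per treatment.
    For small samples, the random configurations could be replaced
    by full enumeration of all assignments, as the abstract describes.
    """
    sizes = [len(g) for g in groups]
    idx = np.cumsum([0] + sizes)          # group boundaries in the pooled vector
    z = np.concatenate(groups)

    def stat(v):
        means = [v[idx[i]:idx[i + 1]].mean() for i in range(len(sizes))]
        return abs(float(np.dot(coeffs, means)))

    obs = stat(z)
    rng = np.random.default_rng(seed)
    count = sum(stat(rng.permutation(z)) >= obs for _ in range(n_perm))
    return (count + 1) / (n_perm + 1)
```

Orthogonal contrasts would each be tested this way (or jointly, via the chi-squared decomposition the abstract mentions); the function above makes no orthogonality assumption, matching the program's behaviour.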
Abstract:
Graduate Program in Biometry - IBB