945 results for Analysis of Variance
Abstract:
Meta-analysis is a method to obtain a weighted average of results from various studies. In addition to pooling effect sizes, meta-analysis can also be used to estimate disease frequencies, such as incidence and prevalence. In this article we present methods for the meta-analysis of prevalence. We discuss the logit and double arcsine transformations to stabilise the variance. We note the special situation of multiple category prevalence, and propose solutions to the problems that arise. We describe the implementation of these methods in the MetaXL software, and present a simulation study and the example of multiple sclerosis from the Global Burden of Disease 2010 project. We conclude that the double arcsine transformation is preferred over the logit, and that the MetaXL implementation of multiple category prevalence is an improvement in the methodology of the meta-analysis of prevalence.
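For illustration, the following Python sketch shows the pooling idea described above: study prevalences are transformed with the Freeman-Tukey double arcsine, combined with inverse-variance weights, and back-transformed with Miller's formula using the harmonic mean sample size. It is a minimal fixed-effect example with made-up study data, not the MetaXL implementation, and all function names are illustrative.

```python
import numpy as np

def double_arcsine(e, n):
    """Freeman-Tukey double arcsine transform of a study prevalence (e events out of n)."""
    return np.arcsin(np.sqrt(e / (n + 1))) + np.arcsin(np.sqrt((e + 1) / (n + 1)))

def pooled_prevalence(events, sizes):
    """Fixed-effect (inverse-variance) pooling on the transformed scale, back-transformed
    with Miller's formula using the harmonic mean of the study sample sizes."""
    e = np.asarray(events, dtype=float)
    n = np.asarray(sizes, dtype=float)
    t = double_arcsine(e, n)
    var = 1.0 / (n + 0.5)                 # approximate variance of the transform
    w = 1.0 / var                         # inverse-variance weights
    t_bar = np.sum(w * t) / np.sum(w)     # pooled value on the transformed scale
    n_hm = len(n) / np.sum(1.0 / n)       # harmonic mean sample size
    s = np.sin(t_bar)
    return 0.5 * (1.0 - np.sign(np.cos(t_bar)) *
                  np.sqrt(1.0 - (s + (s - 1.0 / s) / n_hm) ** 2))

# Three hypothetical studies: 12/200, 30/450 and 7/95 cases
print(round(pooled_prevalence([12, 30, 7], [200, 450, 95]), 4))
```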
Abstract:
Background: Foot dorsiflexion plays an essential role in both controlling balance and human gait. Electromyography (EMG) and sonomyography (SMG) can provide information on several aspects of muscle function. The aim was to establish the relationship between the EMG and SMG variables during isotonic contractions of the foot dorsiflexors. Methods: Twenty-seven healthy young adults performed the foot dorsiflexion test on a purpose-built device. The EMG variables were maximum peak and area under the curve; the muscular architecture variables were muscle thickness and pennation angle. Descriptive statistical analysis, inferential analysis and a multivariate linear regression model were carried out. Statistical significance was set at p < 0.05. Results: The correlation between EMG variables and SMG variables was r = 0.462 (p < 0.05). The linear regression model for the dependent variable "peak normalized tibialis anterior (TA)" with the independent variables "pennation angle and thickness" was significant (p = 0.002), with an explained variance of R2 = 0.693 and SEE = 0.16. Conclusions: There is a significant relationship, and degree of contribution, between EMG and SMG variables during isotonic contractions of the TA muscle. Our results suggest that EMG and SMG can be feasible tools for monitoring and assessing the foot dorsiflexors. Parameterization and assessment of the TA muscle are relevant because increased strength accelerates recovery from lower limb injuries.
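A minimal sketch of the kind of regression reported above: fitting normalized TA peak EMG on pennation angle and thickness with ordinary least squares in statsmodels. The data below are simulated placeholders; the column names and coefficients are illustrative assumptions, not the study's values.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Simulated stand-in for 27 participants: pennation angle (deg), thickness (mm),
# and the normalized TA peak EMG to be predicted (values are illustrative only)
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "pennation_angle": rng.normal(10, 2, 27),
    "thickness": rng.normal(25, 3, 27),
})
df["peak_ta_norm"] = (0.02 * df["pennation_angle"]
                      + 0.01 * df["thickness"]
                      + rng.normal(0, 0.1, 27))

# Multivariate linear regression of peak EMG on the two architecture variables
X = sm.add_constant(df[["pennation_angle", "thickness"]])
fit = sm.OLS(df["peak_ta_norm"], X).fit()
print(fit.rsquared, fit.pvalues.round(3))
```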
Abstract:
Human brain connectivity is disrupted in a wide range of disorders from Alzheimer's disease to autism, but little is known about which specific genes affect it. Here we conducted a genome-wide association analysis for connectivity matrices that capture information on the density of fiber connections between 70 brain regions. We scanned a large twin cohort (N=366) with 4-Tesla high angular resolution diffusion imaging (105-gradient HARDI). Using whole brain HARDI tractography, we extracted a relatively sparse 70×70 matrix representing fiber density between all pairs of cortical regions automatically labeled in co-registered anatomical scans. Additive genetic factors accounted for 1-58% of the variance in connectivity between 90 (of 122) tested nodes. We discovered genome-wide significant associations between variants and connectivity. GWAS permutations at various levels of heritability, and split-sample replication, validated our genetic findings. The resulting genes may offer new leads for mechanisms influencing aberrant connectivity and neurodegeneration. © 2012 IEEE.
Abstract:
Imaging genetics is a new field of neuroscience that blends methods from computational anatomy and quantitative genetics to identify genetic influences on brain structure and function. Here we analyzed brain MRI data from 372 young adult twins to identify cortical regions in which gray matter volume is influenced by genetic differences across subjects. Thickness maps, reconstructed from surface models of the cortical gray/white and gray/CSF interfaces, were smoothed with a 25 mm FWHM kernel and automatically parcellated into 34 regions of interest per hemisphere. In structural equation models fitted to volume values at each surface vertex, we computed components of variance due to additive genetic (A), shared (C) and unique (E) environmental factors, and tested their significance. Cortical regions in the vicinity of the perisylvian language cortex, and at the frontal and temporal poles, showed significant additive genetic variance, suggesting that volume measures from these regions may provide quantitative phenotypes to narrow the search for quantitative trait loci that influence brain structure.
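For intuition about the A/C/E decomposition used above, here is a hedged Python sketch of the simpler Falconer-style method-of-moments estimates from MZ and DZ twin correlations; the study itself fits maximum-likelihood structural equation models at each surface vertex, so this is only a simplified analogue run on simulated data.

```python
import numpy as np

def ace_moments(mz_pairs, dz_pairs):
    """Falconer-style method-of-moments A/C/E proportions from twin correlations.
    mz_pairs, dz_pairs: arrays of shape (n_pairs, 2), one standardized trait value per twin."""
    r_mz = np.corrcoef(mz_pairs[:, 0], mz_pairs[:, 1])[0, 1]
    r_dz = np.corrcoef(dz_pairs[:, 0], dz_pairs[:, 1])[0, 1]
    a2 = 2.0 * (r_mz - r_dz)   # additive genetic (A) share of variance
    c2 = 2.0 * r_dz - r_mz     # shared environment (C) share
    e2 = 1.0 - r_mz            # unique environment (E) share, including measurement error
    return a2, c2, e2

# Simulated twin pairs with true shares A = 0.6, C = 0.2, E = 0.2
rng = np.random.default_rng(1)
mz = rng.multivariate_normal([0, 0], [[1.0, 0.8], [0.8, 1.0]], size=200)
dz = rng.multivariate_normal([0, 0], [[1.0, 0.5], [0.5, 1.0]], size=200)
print([round(v, 2) for v in ace_moments(mz, dz)])
```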
Abstract:
Information from the full diffusion tensor (DT) was used to compute voxel-wise genetic contributions to brain fiber microstructure. First, we designed a new multivariate intraclass correlation formula in the log-Euclidean framework. We then used the full multivariate structure of the tensor in a multivariate version of a voxel-wise maximum-likelihood structural equation model (SEM) that computes the variance contributions in the DTs from genetic (A), common environmental (C) and unique environmental (E) factors. Our algorithm was tested on DT images from 25 identical and 25 fraternal twin pairs. After linear and fluid registration to a mean template, we computed the intraclass correlation and Falconer's heritability statistic for several scalar DT-derived measures and for the full multivariate tensors. Covariance matrices were computed from the DTs and input into the SEM. Analyzing the full DT enhanced the detection of A and C effects. This approach should empower imaging genetics studies that use DTI.
Abstract:
This is a methodological paper describing when and how manifest items dropped from a latent construct measurement model (e.g., factor analysis) can be retained for additional analysis. Protocols are presented for assessing whether items should be retained in the measurement model, for evaluating dropped items as potential items separate from the latent construct, and for post hoc analyses that can be conducted using all retained (manifest or latent) variables. The protocols are then applied to data relating to the impact of the NAPLAN test. The variables examined are teachers’ achievement goal orientations and teachers’ perceptions of the impact of the test on curriculum and pedagogy. It is suggested that five attributes be considered before retaining dropped manifest items for additional analyses. (1) Items can be retained when employed in service of an established or hypothesized theoretical model. (2) Items should only be retained if sufficient variance is present in the data set. (3) Items can be retained when they provide a rational segregation of the data set into subsamples (e.g., a consensus measure). (4) The value of retaining items can be assessed using latent class analysis or latent mean analysis. (5) Items should be retained only when post hoc analyses with these items produce significant and substantive results. These suggested exploratory strategies are presented so that other researchers using survey instruments might explore their data in similar and more innovative ways. Finally, suggestions for future use are provided.
Abstract:
With a growing population and rapid urbanization in Australia, maintaining water quality is a challenging task. It is essential to develop an appropriate statistical methodology for analyzing water quality data in order to draw valid conclusions and hence provide useful advice on water management. This paper develops robust rank-based procedures for analyzing nonnormally distributed data collected over time at different sites. To take account of temporal correlations of the observations within sites, we consider the optimally combined estimating functions proposed by Wang and Zhu (Biometrika, 93:459-464, 2006), which leads to more efficient parameter estimation. Furthermore, we apply the induced smoothing method to reduce the computational burden. Smoothing leads to easy calculation of the parameter estimates and their variance-covariance matrix. Analysis of water quality data on Total Iron and Total Cyanophytes shows the differences between the traditional generalized linear mixed models and rank regression models. Our analysis also demonstrates the advantages of the rank regression models for analyzing nonnormal data.
Abstract:
Water temperature measurements from Wivenhoe Dam offer a unique opportunity for studying fluctuations of temperatures in a subtropical dam as a function of time and depth. Cursory examination of the data indicates a complicated structure across both time and depth. We propose simplifying the task of describing these data by breaking the time series at each depth into physically meaningful components that individually capture daily, subannual, and annual (DSA) variations. Precise definitions for each component are formulated in terms of a wavelet-based multiresolution analysis. The DSA components are approximately pairwise uncorrelated within a given depth and between different depths. They also satisfy an additive property in that their sum is exactly equal to the original time series. Each component is based upon a set of coefficients that decomposes the sample variance of each time series exactly across time and that can be used to study both time-varying variances of water temperature at each depth and time-varying correlations between temperatures at different depths. Each DSA component is amenable to studying a certain aspect of the relationship between the series at different depths. The daily component in general is weakly correlated between depths, including those that are adjacent to one another. The subannual component quantifies seasonal effects and in particular isolates phenomena associated with the thermocline, thus simplifying its study across time. The annual component can be used for a trend analysis. The descriptive analysis provided by the DSA decomposition is a useful precursor to a more formal statistical analysis.
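The additive property of such components can be illustrated with a wavelet decomposition. The sketch below uses PyWavelets with an ordinary DWT-based multiresolution analysis on a simulated series; the paper's DSA components come from a specific multiresolution analysis aligned with daily, subannual and annual scales, so this is an analogue of the idea rather than the exact construction.

```python
import numpy as np
import pywt

def additive_mra(x, wavelet="db4", level=4):
    """Split x into additive components (a smooth plus one detail series per level).
    Because the inverse DWT is linear, the components sum back to the original series."""
    coeffs = pywt.wavedec(x, wavelet, mode="periodization", level=level)
    parts = []
    for i in range(len(coeffs)):
        masked = [c if j == i else np.zeros_like(c) for j, c in enumerate(coeffs)]
        parts.append(pywt.waverec(masked, wavelet, mode="periodization"))
    return parts  # parts[0] is the smooth; parts[1:] are detail components, coarse to fine

# Simulated hourly water temperature: a daily cycle plus noise, 512 time points
t = np.arange(512)
x = 20.0 + 2.0 * np.sin(2 * np.pi * t / 24) + 0.5 * np.random.default_rng(2).normal(size=t.size)
parts = additive_mra(x)
print(np.allclose(np.sum(parts, axis=0), x))   # additive property: True
```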
Abstract:
A modeling paradigm is proposed for covariate, variance and working correlation structure selection for longitudinal data analysis. Appropriate selection of covariates is pertinent to correct variance modeling and selecting the appropriate covariates and variance function is vital to correlation structure selection. This leads to a stepwise model selection procedure that deploys a combination of different model selection criteria. Although these criteria find a common theoretical root based on approximating the Kullback-Leibler distance, they are designed to address different aspects of model selection and have different merits and limitations. For example, the extended quasi-likelihood information criterion (EQIC) with a covariance penalty performs well for covariate selection even when the working variance function is misspecified, but EQIC contains little information on correlation structures. The proposed model selection strategies are outlined and a Monte Carlo assessment of their finite sample properties is reported. Two longitudinal studies are used for illustration.
Abstract:
In analysis of longitudinal data, the variance matrix of the parameter estimates is usually estimated by the 'sandwich' method, in which the variance for each subject is estimated by its residual products. We propose smooth bootstrap methods by perturbing the estimating functions to obtain 'bootstrapped' realizations of the parameter estimates for statistical inference. Our extensive simulation studies indicate that the variance estimators by our proposed methods can not only correct the bias of the sandwich estimator but also improve the confidence interval coverage. We applied the proposed method to a data set from a clinical trial of antibiotics for leprosy.
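As a hedged illustration of the two ideas in this abstract, the sketch below computes the classic 'sandwich' (robust) covariance for a linear estimating equation with independent observations, and compares it with a perturbation bootstrap that reweights each observation's estimating function by positive random weights before re-solving. It is a simplified cross-sectional analogue of the longitudinal setting, with simulated data, not the authors' method.

```python
import numpy as np

rng = np.random.default_rng(3)
n, p = 200, 3
X = np.column_stack([np.ones(n), rng.normal(size=(n, p - 1))])
beta_true = np.array([1.0, 0.5, -0.3])
y = X @ beta_true + rng.normal(scale=np.exp(X[:, 1]), size=n)   # heteroscedastic errors

# Point estimate and the 'sandwich' (HC0) covariance: bread^{-1} * meat * bread^{-1}
beta_hat = np.linalg.solve(X.T @ X, X.T @ y)
resid = y - X @ beta_hat
bread_inv = np.linalg.inv(X.T @ X)
meat = (X * resid[:, None] ** 2).T @ X          # sum_i e_i^2 x_i x_i'
sandwich_cov = bread_inv @ meat @ bread_inv

# Perturbation bootstrap: reweight each observation's estimating function
# U_i(beta) = x_i (y_i - x_i' beta) by a positive mean-one random weight and re-solve
B = 1000
boot = np.empty((B, p))
for b in range(B):
    w = rng.exponential(size=n)
    Xw = X * w[:, None]
    boot[b] = np.linalg.solve(Xw.T @ X, Xw.T @ y)
boot_cov = np.cov(boot, rowvar=False)

print(np.sqrt(np.diag(sandwich_cov)).round(3))   # sandwich standard errors
print(np.sqrt(np.diag(boot_cov)).round(3))       # perturbation-bootstrap standard errors
```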
Abstract:
We consider the analysis of longitudinal data when the covariance function is modeled by additional parameters to the mean parameters. In general, inconsistent estimators of the covariance (variance/correlation) parameters will be produced when the "working" correlation matrix is misspecified, which may result in great loss of efficiency of the mean parameter estimators (although their consistency is preserved). We consider using different "working" correlation models for the variance and the mean parameters. In particular, we find that an independence working model should be used for estimating the variance parameters to ensure their consistency in case the correlation structure is misspecified. The designated "working" correlation matrices should be used for estimating the mean and the correlation parameters to attain high efficiency for estimating the mean parameters. Simulation studies indicate that the proposed algorithm performs very well. We also applied different estimation procedures to a data set from a clinical trial for illustration.
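A minimal sketch of fitting the same mean model under different working correlation structures, using GEE in statsmodels on simulated longitudinal data. This illustrates the general idea of comparing working models; it is not the paper's specific hybrid algorithm, and the data and formula are placeholders.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Simulated longitudinal data: 100 subjects, 4 visits each, with a subject-level random effect
rng = np.random.default_rng(4)
n_subj, n_obs = 100, 4
df = pd.DataFrame({
    "subject": np.repeat(np.arange(n_subj), n_obs),
    "x": rng.normal(size=n_subj * n_obs),
})
u = np.repeat(rng.normal(scale=0.7, size=n_subj), n_obs)
df["y"] = 1.0 + 0.5 * df["x"] + u + rng.normal(size=len(df))

# Same mean model fitted under independence and exchangeable working correlations
for cov in (sm.cov_struct.Independence(), sm.cov_struct.Exchangeable()):
    res = sm.GEE.from_formula("y ~ x", groups="subject", data=df, cov_struct=cov).fit()
    print(type(cov).__name__, res.params.round(3).to_dict())
```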
Abstract:
Analyses of variance and covariance were carried out on the activities of three lysosomal enzymes in mononuclear blood cells from Brahman cattle. These were hexosaminidase (HEX), beta-D-galactosidase (GAL) and acid alpha-glucosidase (GLU), which had been measured in blood mononuclear cells from 1752 cattle from 6 herds in a Pompe's disease control programme. Herd of origin and date of bleeding significantly affected the level of activity of all enzymes. In addition, HEX and GAL were affected by age and HEX by the sex of the animal bled. Estimates of heritability from sire variances were 0.29 ± 0.09 for HEX, 0.31 ± 0.09 for GAL and 0.44 ± 0.09 for GLU. Genetic correlations between all enzymes were positive. The data indicate the existence of a major gene causing Pompe's disease and responsible for 16% of the genetic variation in GLU. One standard deviation of selection differential for high GLU should almost eliminate Pompe's disease from the population. The efficiency of selection would be aided by estimating the breeding value for GLU using measurements of HEX and GLU and taking account of an animal's sex, age, date of bleeding and herd of origin.
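For the heritability-from-sire-variances step, a hedged sketch of the balanced paternal half-sib calculation, h^2 = 4·σ²_sire / (σ²_sire + σ²_within), with variance components taken from a one-way ANOVA. The abstract's analysis additionally adjusts for herd, age, sex and bleeding date, which this toy example with simulated data omits.

```python
import numpy as np

def sire_heritability(progeny_by_sire):
    """Heritability from a balanced paternal half-sib design:
    h^2 = 4 * sigma2_sire / (sigma2_sire + sigma2_within),
    with variance components from a one-way ANOVA (method of moments)."""
    groups = [np.asarray(g, dtype=float) for g in progeny_by_sire]
    s, k = len(groups), len(groups[0])           # number of sires, progeny per sire
    grand_mean = np.mean(np.concatenate(groups))
    ms_between = k * sum((g.mean() - grand_mean) ** 2 for g in groups) / (s - 1)
    ms_within = sum(((g - g.mean()) ** 2).sum() for g in groups) / (s * (k - 1))
    sigma2_sire = max((ms_between - ms_within) / k, 0.0)
    return 4.0 * sigma2_sire / (sigma2_sire + ms_within)

# Simulated enzyme activities: 30 sires x 20 offspring, true h^2 = 4 * 0.1 / (0.1 + 0.9) = 0.4
rng = np.random.default_rng(5)
sire_effects = rng.normal(scale=np.sqrt(0.1), size=30)
data = [rng.normal(loc=mu, scale=np.sqrt(0.9), size=20) for mu in sire_effects]
print(round(sire_heritability(data), 2))
```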
Abstract:
Most information in linkage analysis for quantitative traits comes from pairs of relatives that are phenotypically most discordant or concordant. Confounding this, within-family outliers from non-genetic causes may create false positives and negatives. We investigated the influence of within-family outliers empirically, using one of the largest genome-wide linkage scans for height. The subjects were drawn from Australian twin cohorts consisting of 8447 individuals in 2861 families, providing a total of 5815 possible pairs of siblings in sibships. A variance component linkage analysis was performed, either including or excluding the within-family outliers. Using the entire dataset, the largest LOD scores were on chromosome 15q (LOD 2.3) and 11q (1.5). Excluding within-family outliers increased the LOD score for most regions, but the LOD score on chromosome 15 decreased from 2.3 to 1.2, suggesting that the outliers may create false negatives and false positives, although rare alleles of large effect may also be an explanation. Several regions suggestive of linkage to height were found after removing the outliers, including 1q23.1 (2.0), 3q22.1 (1.9) and 5q32 (2.3). We conclude that the investigation of the effect of within-family outliers, which is usually neglected, should be a standard quality control measure in linkage analysis for complex traits and may reduce the noise for the search of common variants of modest effect size as well as help identify rare variants of large effect and clinical significance. We suggest that the effect of within-family outliers deserves further investigation via theoretical and simulation studies.
Abstract:
Objectives: Preschool-aged children spend substantial amounts of time engaged in screen-based activities. As parents have considerable control over their child's health behaviours during the younger years, it is important to understand those influences that guide parents' decisions about their child's screen time behaviours. Design: A prospective design with two waves of data collection, 1 week apart, was adopted. Methods: Parents (n = 207) completed a Theory of Planned Behaviour (TPB)-based questionnaire, with the addition of parental role construction (i.e., parents' expectations and beliefs of responsibility for their child's behaviour) and past behaviour. A number of underlying beliefs identified in a prior pilot study were also assessed. Results: The model explained 77% (with past behaviour accounting for 5%) of the variance in intention and 50% (with past behaviour accounting for 3%) of the variance in parental decisions to limit child screen time. Attitude, subjective norms, perceived behavioural control, parental role construction, and past behaviour predicted intentions, and intentions and past behaviour predicted follow-up behaviour. Underlying screen time beliefs (e.g., increased parental distress, pressure from friends, inconvenience) were also identified as guiding parents' decisions. Conclusion: Results support the TPB and highlight the importance of beliefs for understanding parental decisions for children's screen time behaviours, as well as the addition of parental role construction. This formative research provides necessary depth of understanding of sedentary lifestyle behaviours in young children which can be adopted in future interventions to test the efficacy of the TPB mechanisms in changing parental behaviour for their child's health.
Abstract:
We address the issue of noise robustness of reconstruction techniques for frequency-domain optical-coherence tomography (FDOCT). We consider three reconstruction techniques: Fourier, iterative phase recovery, and cepstral techniques. We characterize the reconstructions in terms of their statistical bias and variance and obtain approximate analytical expressions under the assumption of small noise. We also perform Monte Carlo analyses and show that the experimental results are in agreement with the theoretical predictions. It turns out that the iterative and cepstral techniques yield reconstructions with a smaller bias than the Fourier method. The three techniques, however, have identical variance profiles, and their consistency increases linearly as a function of the signal-to-noise ratio.
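A generic Monte Carlo sketch of how the bias and variance of an estimator can be characterized as a function of the signal-to-noise ratio, in the spirit of the analysis above. The estimator here is a simple least-squares amplitude estimate on a toy sinusoid, not an FDOCT reconstruction, and all names and values are illustrative.

```python
import numpy as np

def monte_carlo_bias_variance(estimator, truth, make_data, n_trials=2000, seed=6):
    """Empirical bias and variance of an estimator over repeated noisy realizations."""
    rng = np.random.default_rng(seed)
    estimates = np.array([estimator(make_data(rng)) for _ in range(n_trials)])
    return estimates.mean() - truth, estimates.var(ddof=1)

# Toy setting: estimate the amplitude of a known template from noisy samples
t = np.linspace(0.0, 1.0, 256, endpoint=False)
template = np.sin(2 * np.pi * 5 * t)
amp_true = 1.0

for snr in (5.0, 10.0, 20.0):
    sigma = amp_true / snr
    bias, var = monte_carlo_bias_variance(
        estimator=lambda y: np.dot(y, template) / np.dot(template, template),  # least-squares amplitude
        truth=amp_true,
        make_data=lambda rng, s=sigma: amp_true * template + rng.normal(scale=s, size=t.size),
    )
    print(f"SNR={snr:g}: bias={bias:+.4f}, variance={var:.2e}")
```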