Abstract:
We propose a simple method of constructing quasi-likelihood functions for dependent data based on conditional mean-variance relationships, and apply the method to estimating the fractal dimension from box-counting data. Simulation studies were carried out to compare this method with traditional methods. We also applied the technique to real data from fishing grounds in the Gulf of Carpentaria, Australia.
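To make the construction concrete, a quasi-score built from a conditional mean-variance relationship for dependent data typically takes the following form (generic notation assumed for illustration, not quoted from the paper; here μ_t and V_t are the conditional mean and variance of y_t given the past):

```latex
U(\theta) \;=\; \sum_{t=1}^{n} \frac{\partial \mu_t(\theta)}{\partial \theta}\, V_t(\theta)^{-1}\,\bigl\{ y_t - \mu_t(\theta) \bigr\},
\qquad \hat\theta \text{ solves } U(\hat\theta) = 0 .
```

For box-counting data the y_t would be the occupancy counts across scales, with the specific conditional moments being those derived in the paper.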
Abstract:
Adaptations of weighted rank regression to the accelerated failure time model for censored survival data have been successful in yielding asymptotically normal estimates and flexible weighting schemes that increase statistical efficiency. However, only one simple weighting scheme, the Gehan or Wilcoxon weights, guarantees estimating equations that are monotone in the parameter components, and even then the equations are step functions, requiring the equivalent of linear programming to solve. The lack of smoothness makes standard error or covariance matrix estimation still more difficult. An induced smoothing technique has overcome these difficulties in various problems involving monotone but pure-jump estimating equations, including conventional rank regression. The present paper applies induced smoothing to Gehan-Wilcoxon weighted rank regression for the accelerated failure time model, in the more difficult case of survival times subject to censoring, where the inapplicability of permutation arguments necessitates a new method of estimating the null variance of the estimating functions. Smooth monotone parameter estimation and rapid, reliable standard error or covariance matrix estimation are obtained.
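As a rough illustration of the smoothing step (a sketch, not the paper's code: the choice Ω = I, the scaling, and all names are our assumptions), induced smoothing replaces the Gehan indicator I(e_i ≤ e_j) by a normal CDF with a pairwise bandwidth:

```python
import numpy as np
from scipy.stats import norm

def smoothed_gehan_score(beta, X, logT, delta, Omega=None):
    """Induced-smoothed Gehan-type estimating function for the AFT model.

    Residuals e_i = logT_i - x_i'beta; the indicator I(e_i <= e_j) in the
    Gehan score is replaced by Phi((e_j - e_i)/r_ij), with
    r_ij^2 = (x_i - x_j)' Omega (x_i - x_j) / n  (Omega = I by default).
    """
    n, p = X.shape
    if Omega is None:
        Omega = np.eye(p)
    e = logT - X @ beta
    diff_x = X[:, None, :] - X[None, :, :]            # x_i - x_j, shape (n, n, p)
    r2 = np.einsum('ijk,kl,ijl->ij', diff_x, Omega, diff_x) / n
    r = np.sqrt(np.maximum(r2, 1e-12))                # guard the zero diagonal
    w = norm.cdf((e[None, :] - e[:, None]) / r)       # Phi((e_j - e_i)/r_ij)
    # Only uncensored subjects i (delta_i = 1) contribute terms.
    U = np.einsum('i,ijk,ij->k', delta.astype(float), diff_x, w)
    return U / n**2                                   # normalization is a convention
```

The smoothed score is differentiable in beta, which is what makes Newton-type solving and sandwich-style covariance estimation straightforward.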
Abstract:
Nahhas, Wolfe, and Chen (2002, Biometrics 58, 964-971) considered the optimal set size for ranked set sampling (RSS) with fixed operational costs. This framework can be very useful in practice for determining whether RSS is beneficial and for obtaining the set size that minimizes the variance of the population estimator at a fixed total cost. In this article, we propose a scheme of general RSS in which more than one observation may be taken from each ranked set. This is shown to be more cost-effective in some cases, particularly when the cost of ranking is not small. Using the example in Nahhas, Wolfe, and Chen (2002, Biometrics 58, 964-971), we demonstrate that taking two or more observations from each set can be more beneficial even at the optimal set size of the standard RSS design.
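A minimal simulation sketch of one plausible reading of the scheme, in which k units rather than one are measured from each ranked set (perfect ranking, the function name, and the rank choice are all illustrative assumptions, not the authors' design):

```python
import numpy as np

rng = np.random.default_rng(0)

def general_rss_sample(m, k, n_cycles, draw=lambda size: rng.normal(size=size)):
    """From each ranked set of size m, measure the k lowest-ranked units
    instead of a single order statistic (one reading of 'general' RSS)."""
    obs = []
    for _ in range(n_cycles):
        ranked = np.sort(draw(m))   # one judgment-ranked set
        obs.extend(ranked[:k])      # take k measurements from the same set
    return np.array(obs)
```

Comparing the variance of the sample mean from such draws against simple random sampling of the same total measured size, at matched total cost (ranking cost per set plus measurement cost per unit), illustrates the trade-off the article quantifies.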
Abstract:
The bentiromide test was evaluated using plasma p-aminobenzoic acid as an indirect test of pancreatic insufficiency in young children between 2 months and 4 years of age. To determine the optimal test method, the following were examined: (a) the best dose of bentiromide (15 mg/kg or 30 mg/kg); (b) the optimal sampling time for plasma p-aminobenzoic acid; and (c) the effect of coadministration of a liquid meal. Sixty-nine children (1.6 ± 1.0 years) were studied, including 34 controls with normal fat absorption and 35 patients (34 with cystic fibrosis) with fat maldigestion due to pancreatic insufficiency. Control and pancreatic-insufficient subjects were studied in three age-matched groups: (a) low-dose bentiromide (15 mg/kg) with clear fluids; (b) high-dose bentiromide (30 mg/kg) with clear fluids; and (c) high-dose bentiromide with a liquid meal. Plasma p-aminobenzoic acid was determined at 0, 30, 60, and 90 minutes, then hourly for 6 hours. The dose effect of bentiromide with clear liquids was evaluated. High-dose bentiromide best discriminated control from pancreatic-insufficient subjects, owing to a higher peak plasma p-aminobenzoic acid level in controls, but sensitivity and specificity remained poor. High-dose bentiromide with a liquid meal produced a delayed increase in plasma p-aminobenzoic acid in the control subjects, probably caused by retarded gastric emptying. In the pancreatic-insufficient subjects, however, use of a liquid meal resulted in significantly lower plasma p-aminobenzoic acid levels at all time points; plasma p-aminobenzoic acid at 2 and 3 hours completely discriminated between control and pancreatic-insufficient patients. Evaluation of the data by area under the time-concentration curve failed to improve test results. In conclusion, the bentiromide test is a simple, clinically useful means of detecting pancreatic insufficiency in young children, but a higher dose administered with a liquid meal is recommended.
Abstract:
This article develops a method for analyzing growth data with multiple recaptures when the initial ages of all individuals are unknown. Existing approaches either impute the initial ages or model them as random effects, but assumptions about initial age are not verifiable precisely because the initial ages are unknown. We present an alternative approach that treats all the lengths, including the length at first capture, as correlated repeated measures on each individual. Optimal estimating equations are developed using the generalized estimating equations approach, which requires only assumptions on the first two moments. Explicit expressions for the estimation of both mean growth parameters and variance components are given to minimize computational complexity. Simulation studies indicate that the proposed method works well. Two real data sets are analyzed for illustration, one from whelks (Dicathais aegrota) and the other from southern rock lobster (Jasus edwardsii) in South Australia.
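The estimating equations referred to are of the standard GEE form (generic notation, not quoted from the article):

```latex
U(\beta) \;=\; \sum_{i=1}^{n} D_i^{\top}\, V_i^{-1}\, \bigl\{ y_i - \mu_i(\beta) \bigr\} \;=\; 0,
```

where y_i stacks all recorded lengths for individual i (including the length at first capture), μ_i(β) is the mean growth curve, D_i = ∂μ_i/∂β, and V_i is a working covariance matrix built from the assumed first two moments.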
Abstract:
Statistical methods are often used to analyse commercial catch and effort data to provide standardised fishing effort and/or a relative index of fish abundance for input into stock assessment models. Achieving reliable results has proved difficult in Australia's Northern Prawn Fishery (NPF), owing to a combination of factors such as the biological characteristics of the animals, aspects of the fleet dynamics, and changes in fishing technology. For this data set, we compared four modelling approaches (linear models, mixed models, generalised estimating equations, and generalised linear models) with respect to the resulting standardised fishing effort or relative index of abundance. We also varied the number and form of vessel covariates in the models. Within a subset of data from this fishery, modelling the correlation structure did not alter the conclusions drawn from simpler statistical models. The random-effects models also yielded similar results. This is because the estimators are all consistent even if the correlation structure is mis-specified, and the data set is very large. However, the standard errors from the different models differed, suggesting that the methods differ in statistical efficiency. We suggest that there is value in modelling the variance function and the correlation structure, both to make valid and efficient statistical inferences and to gain insight into the data. We found that fishing power was separable from the indices of prawn abundance only when we offset the impact of vessel characteristics at values assumed from external sources. This may be due to the large degree of confounding within the data and the extreme temporal changes in certain aspects of individual vessels, the fleet and the fleet dynamics.
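In modern software the four model classes compared here line up roughly as below (a sketch on simulated data; the column names log_catch, year, vessel and hp are hypothetical stand-ins, not the NPF variables):

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Hypothetical catch-effort data purely to show the four model classes.
rng = np.random.default_rng(1)
n = 400
df = pd.DataFrame({
    "log_catch": rng.normal(2.0, 0.5, n),
    "year": rng.integers(1970, 1991, n).astype(str),   # treated as categorical
    "vessel": rng.integers(0, 40, n),
    "hp": rng.normal(300.0, 50.0, n),                  # a vessel covariate
})

lm  = smf.ols("log_catch ~ year + hp", data=df).fit()                 # linear model
mm  = smf.mixedlm("log_catch ~ year + hp", data=df,
                  groups="vessel").fit()                              # vessel random effect
gee = smf.gee("log_catch ~ year + hp", groups="vessel", data=df,
              cov_struct=sm.cov_struct.Exchangeable()).fit()          # GEE, within-vessel correlation
glm = smf.glm("log_catch ~ year + hp", data=df,
              family=sm.families.Gaussian()).fit()                    # GLM
```

With consistent estimators throughout, the point estimates of the year effects tend to agree, as the paper reports; the standard errors are where the four approaches diverge.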
Abstract:
We consider the problem of estimating a population size from successive catches taken during a removal experiment, and propose two estimating-function approaches: the traditional quasi-likelihood (TQL) approach for dependent observations and the conditional quasi-likelihood (CQL) approach using the conditional mean and conditional variance of each catch given previous catches. The asymptotic covariance of the estimates and the relationship between the two methods are derived. Simulation results and an application to catch data on smallmouth bass show that the proposed estimating functions perform better than other existing methods, especially in the presence of overdispersion.
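For orientation, in a removal experiment with population size N and capture probability p, a standard conditional-moment specification of the kind the CQL approach builds on is (our notation; the paper's variance function may differ):

```latex
E\bigl(c_t \mid c_1,\dots,c_{t-1}\bigr) \;=\; p\Bigl(N - \sum_{j<t} c_j\Bigr),
\qquad
\operatorname{Var}\bigl(c_t \mid c_1,\dots,c_{t-1}\bigr) \;=\; \phi\, p(1-p)\Bigl(N - \sum_{j<t} c_j\Bigr),
```

where c_t is the t-th catch and the factor φ ≥ 1 allows for overdispersion relative to the binomial removal model.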
Abstract:
The paper studies stochastic approximation as a technique for bias reduction. The proposed method does not require approximating the bias explicitly, nor does it rely on having independent identically distributed (i.i.d.) data. Under very mild conditions, the method always removes the leading bias term, as long as auxiliary samples from distributions with given parameters are available. The expectation and variance of the bias-corrected estimate are given. Examples in sequential clinical trials (non-i.i.d. case), curved exponential models (i.i.d. case) and length-biased sampling (where the estimates are inconsistent) are used to illustrate the applications of the proposed method and its small-sample properties.
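The general recipe can be sketched as a Robbins-Monro-style iteration (a toy rendering of the idea, not the paper's algorithm; the names estimator, simulate and the step sequence are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(2)

def sa_bias_correct(theta_hat, estimator, simulate, n_iter=200,
                    a=lambda k: 1.0 / (k + 1)):
    """Shift the estimate until re-estimation from auxiliary samples drawn at
    the current value reproduces, on average, the original estimate.

    theta_hat : original (biased) estimate from the data
    estimator : function mapping a sample to an estimate
    simulate  : function drawing a fresh auxiliary sample at a parameter value
    """
    theta = theta_hat
    for k in range(n_iter):
        theta_star = estimator(simulate(theta))      # estimate at current value
        theta = theta + a(k) * (theta_hat - theta_star)
    return theta

# Toy usage: correcting the MLE of a normal variance, which divides by n.
n = 10
data = rng.normal(0.0, 1.0, n)
est = lambda x: np.var(x)                            # biased: E = (n-1)/n * sigma^2
sim = lambda v: rng.normal(0.0, np.sqrt(max(v, 1e-8)), n)
print(sa_bias_correct(est(data), est, sim))          # close to n/(n-1) * est(data)
```

No expression for the bias is ever written down; only the ability to simulate at a given parameter value is used, which is the point of the method.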
Abstract:
Quasi-likelihood (QL) methods are often used to account for overdispersion in categorical data. This paper proposes a new way of constructing a QL function that stems from the conditional mean-variance relationship. Unlike traditional QL approaches to categorical data, this QL function is, in general, not a scaled version of the ordinary log-likelihood function. A simulation study is carried out to examine the performance of the proposed QL method. Fish mortality data from quantal response experiments are used for illustration.
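For contrast, the traditional QL treatment of overdispersed binomial data simply rescales the binomial variance by a Pearson-based dispersion factor, e.g. in statsmodels (simulated data, not the fish mortality data from the paper):

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
n_trials = 20
x = rng.normal(size=100)
p = 1.0 / (1.0 + np.exp(-(0.5 + 1.0 * x)))
y = rng.binomial(n_trials, p)

X = sm.add_constant(x)
endog = np.column_stack([y, n_trials - y])       # (successes, failures)
fit = sm.GLM(endog, X, family=sm.families.Binomial()).fit(scale="X2")
print(fit.scale)                                 # Pearson-based dispersion estimate
```

The paper's point is that a QL built from the conditional mean-variance relationship is, in general, not such a scaled version of the ordinary log-likelihood.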
Abstract:
Traditional comparisons between the capture efficiency of sampling devices have generally looked at the absolute differences between devices. We recommend that the signal-to-noise ratio be used instead when comparing the capture efficiency of benthic sampling devices. Using the signal-to-noise ratio rather than the absolute difference has several advantages: the variance is taken into account when judging how important a difference is; the hypothesis and minimum detectable difference can be made identical for all taxa; the measure is independent of the units of measurement; and the sample-size calculation is independent of the variance. The technique is illustrated by comparing the capture efficiency of a 0.05 m² van Veen grab and an airlift suction device, using samples taken from Heron and One Tree lagoons, Australia.
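The variance-free sample-size claim follows from expressing the minimum detectable difference as a signal-to-noise ratio Δ = (μ₁ − μ₂)/σ; the usual normal-approximation formula for a two-sample comparison then reads (a standard result, not taken from the paper):

```latex
n \text{ per group} \;=\; \frac{2\,\bigl(z_{1-\alpha/2} + z_{1-\beta}\bigr)^{2}}{\Delta^{2}},
```

which involves no variance term because Δ is already expressed in standard-deviation units.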
Abstract:
Introduction: Decompressive hemicraniectomy, clot evacuation, and aneurysmal interventions are considered aggressive surgical therapeutic options for the treatment of massive middle cerebral artery (MCA) infarction, intracerebral hemorrhage (ICH), and severe subarachnoid hemorrhage (SAH), respectively. Although these procedures save lives, little is actually known about their impact on outcomes other than short-term survival and functional status. The purpose of this study was to gain a better understanding of the personal and social consequences of surviving these aggressive surgical interventions, in order to aid acute care clinicians in helping family members make difficult decisions about undertaking such interventions. Methods: An exploratory mixed-methods study using a convergent parallel design was conducted to examine functional recovery (NIHSS, mRS and BI), cognitive status (Montreal Cognitive Assessment, MoCA), quality of life (EuroQol 5-D), and caregiver outcomes (Bakas Caregiver Outcome Scale, BCOS) in a cohort of patients and families who had undergone aggressive surgical intervention for severe stroke between the years 2000-2007. Data were analyzed using descriptive statistics, univariate and multivariate analysis of variance, and multivariate logistic regression. Content analysis was used to analyze the qualitative interviews conducted with stroke survivors and family members. Results: Twenty-seven patients and 13 spouses participated in this study. Based on patient MoCA scores, overall cognitive status was 25.18 (range 23.4-26.9); current functional outcome scores were NIHSS 2.22, mRS 1.74, and BI 88.5. EQ-5D scores revealed no significant differences between patients and caregivers (p = 0.585), and caregiver outcomes revealed no significant differences between male and female caregivers or by patient diagnostic group (MCA, SAH, ICH; p = 0.103). Discussion: Overall, patients and families were satisfied with quality of life and with the decisions made at the time of the initial stroke. There was consensus among study participants that formal community-based support (e.g., handibus, caregiving relief, rehabilitation assessments) should be continued for extended periods (e.g., years) post-stroke. Ongoing contact with health care professionals is valuable in helping them navigate in the community as needs change over time.
Abstract:
Multiphenotype genome-wide association studies (GWAS) may reveal pleiotropic genes, which would remain undetected in single-phenotype analyses. Analysis of large pedigrees offers the added advantage of more accurately assessing trait heritability, which can help prioritise genetically influenced phenotypes for GWAS analysis. In this study we performed a principal component analysis (PCA), heritability (h²) estimation and a pedigree-based GWAS of 37 cardiovascular disease-related phenotypes in 330 related individuals forming a large pedigree from the Norfolk Island genetic isolate. PCA revealed 13 components explaining >75% of the total variance. Nine components yielded statistically significant h² values ranging from 0.22 to 0.54 (P<0.05). The most heritable component was loaded with 7 phenotypic measures reflecting metabolic and renal dysfunction. A GWAS of this composite phenotype revealed statistically significant associations for 3 adjacent SNPs on chromosome 1p22.2 (P<1×10⁻⁸). These SNPs form a 42 kb haplotype block and explain 11% of the genetic variance for this renal function phenotype. Replication analysis of the tagging SNP (rs1396315) in an independent US cohort supports the association (P = 0.000011). Blood transcript analysis showed that 35 genes were associated with rs1396315 (P<0.05). Gene set enrichment analysis of these genes revealed that the most enriched pathway was purine metabolism (P = 0.0015). Overall, our findings provide convincing evidence for a major pleiotropic effect locus on chromosome 1p22.2 influencing risk of renal dysfunction via purine metabolism pathways in the Norfolk Island population. Further studies are now warranted to interrogate the functional relevance of this locus in terms of renal pathology and cardiovascular disease risk.
Abstract:
Research in the field of teenage drinking behavior has shown relationships between social skills and drinking and between alcohol expectancies and drinking. The present research investigated the comparative power of these two sets of variables in predicting teenage drinking behavior, as well as examining the contribution of more global cognitive structures. It was hypothesised that adolescents with high alcohol involvement would be discriminated from those with low involvement on the basis of social skills, cognitive structures, and alcohol expectancies. Seven hundred thirty-two adolescents participated in the study. Results indicated that adolescent alcohol involvement was associated with social-skills deficits, positive alcohol expectancies, and negative cognitive structures concerning parents and teachers. Although the bulk of the variance in drinking behavior was explained by the independent effects of social skills and expectancies, the interaction of the two constructs explained an additional and significant proportion of the variance. Implications for preventive and treatment programs are discussed.
Abstract:
In this paper we consider the third-moment structure of a class of time series models. It is often argued that the marginal distribution of financial time series, such as returns, is skewed; it is therefore important to know what properties a model should possess if it is to accommodate unconditional skewness. We consider modeling the unconditional mean and variance using models that respond nonlinearly or asymmetrically to shocks, and we investigate the implications of these models for the third-moment structure of the marginal distribution, as well as the conditions under which the unconditional distribution exhibits skewness and a nonzero third-order autocovariance structure. In this respect, an asymmetric or nonlinear specification of the conditional mean is found to be of greater importance than the properties of the conditional variance. Several examples are discussed and, whenever possible, explicit analytical expressions are provided for all third-order moments and cross-moments. Finally, we introduce a new tool, the shock impact curve, for investigating the impact of shocks on the conditional mean squared error of return series.
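Concretely, the third-moment structure of a return series y_t comprises quantities of the form (generic notation):

```latex
E\bigl[y_t^{3}\bigr], \qquad E\bigl[y_t^{2}\, y_{t-j}\bigr], \qquad E\bigl[y_t\, y_{t-j}\, y_{t-k}\bigr], \quad j, k \geq 1,
```

with unconditional skewness of the marginal distribution corresponding to E[(y_t − μ)³] ≠ 0.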
Abstract:
Background: This study examined relationships between adiposity, physical functioning and physical activity.
Methods: Obese (N=107) and healthy-weight (N=132) children aged 10-13 years underwent assessments of percent body fat (%BF, dual energy X-ray absorptiometry), knee extensor (KE) strength (isokinetic dynamometry), cardiorespiratory fitness (CRF, peak oxygen uptake by cycle ergometry), physical health-related quality of life (HRQOL), worst pain intensity, and walking capacity [six-minute walk test (6MWT)]. Structural equation modelling was used to assess relationships between variables.
Results: Moderate relationships were observed between %BF and the 6MWT, KE strength corrected for mass, and CRF relative to mass (r = -.36 to -.69, P≤.007). Weak relationships were found between %BF and physical HRQOL (r = -.27, P=.008), between CRF relative to mass and physical HRQOL (r = -.24, P=.003), and between physical activity and the 6MWT (r = .17, P=.004). Squared multiple correlations showed that 29.6% of the variance in physical HRQOL was explained by %BF, pain and CRF relative to mass, while 28% of the variance in the 6MWT was explained by %BF and physical activity.
Conclusions: Children with a higher body fat percentage appear to have poorer KE strength, CRF and overall physical functioning. Reducing percent body fat appears to be the best target for improving functioning; however, a combined approach to intervention, targeting reductions in body fat percentage and pain together with improvements in physical activity and CRF, may assist physical functioning.