12 results for Multi-level Analysis

at University of Queensland eSpace - Australia


Relevance: 100.00%

Publisher:

Abstract:

Count data with excess zeros relative to a Poisson distribution are common in many biomedical applications. A popular approach to the analysis of such data is to use a zero-inflated Poisson (ZIP) regression model. Often, because of the hierarchical study design or the data collection procedure, zero-inflation and lack of independence may occur simultaneously, which renders the standard ZIP model inadequate. To account for the preponderance of zero counts and the inherent correlation of observations, a class of multi-level ZIP regression models with random effects is presented. Model fitting is facilitated using an expectation-maximization algorithm, whereas variance components are estimated via residual maximum likelihood estimating equations. A score test for zero-inflation is also presented. The multi-level ZIP model is then generalized to cope with a more complex correlation structure. Application to the analysis of correlated count data from a longitudinal infant feeding study illustrates the usefulness of the approach.
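As a rough illustration of the zero-inflated Poisson model this abstract builds on (a minimal sketch of the basic ZIP probability mass function only; the paper's multi-level random-effects extension and EM fitting are not shown), the ZIP distribution mixes a point mass at zero with an ordinary Poisson distribution:

```python
import math

def zip_pmf(k, lam, pi):
    """Probability mass of a zero-inflated Poisson: with probability pi
    the count is a structural zero, otherwise it is Poisson(lam)."""
    poisson = math.exp(-lam) * lam ** k / math.factorial(k)
    if k == 0:
        return pi + (1 - pi) * poisson  # excess zeros inflate P(0)
    return (1 - pi) * poisson

# Zero-inflation raises P(0) above the plain Poisson value exp(-2) ≈ 0.135
print(zip_pmf(0, lam=2.0, pi=0.3))
```

Regression versions of the model, as in the abstract, let `lam` and `pi` depend on covariates through log and logit links respectively.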

Relevance: 100.00%

Publisher:

Abstract:

This research adopts a resource allocation theoretical framework to generate predictions regarding the relationship between self-efficacy and task performance at two levels of analysis and specificity. Participants were given multiple trials of practice on an air traffic control task. Measures of task-specific self-efficacy and performance were taken at repeated intervals. The authors used multilevel analysis to demonstrate dynamic main effects, dynamic mediation and dynamic moderation. As predicted, the positive effects of overall task-specific self-efficacy and general self-efficacy on task performance strengthened throughout practice. In line with these dynamic main effects, the effect of general self-efficacy was mediated by overall task-specific self-efficacy; however, this pattern emerged over time. Finally, changes in task-specific self-efficacy were negatively associated with changes in performance at the within-person level; however, this effect only emerged towards the end of practice for individuals with high levels of overall task-specific self-efficacy. These novel findings emphasise the importance of conceptualising self-efficacy within a multi-level and multi-specificity framework and make a significant contribution to understanding the way this construct relates to task performance.

Relevance: 90.00%

Publisher:

Abstract:

Slag composition determines the physical and chemical properties as well as the application performance of molten oxide mixtures. Therefore, it is necessary to establish a routine instrumental technique to produce accurate and precise analytical results for better process and production control. In the present paper, a multi-component analysis technique for powdered metallurgical slag samples by X-ray Fluorescence Spectrometer (XRFS) has been demonstrated. This technique provides rapid and accurate results, with minimum sample preparation. It eliminates the requirement for a fused disc, using briquetted samples protected by a layer of Borax®. While the use of theoretical alpha coefficients has allowed accurate calibrations to be made using fewer standard samples, the application of a pseudo-Voigt function to curve fitting makes it possible to resolve overlapped peaks in X-ray spectra that cannot be physically separated. The analytical results of both certified reference materials and industrial slag samples measured using the present technique are comparable to those of the same samples obtained by conventional fused disc measurements.
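The pseudo-Voigt profile mentioned above can be written as a linear mix of a Lorentzian and a Gaussian of equal full width at half maximum. The sketch below is illustrative only; the parameter names and normalisation are our own, not taken from the paper:

```python
import math

def pseudo_voigt(x, x0, fwhm, eta, amplitude=1.0):
    """Pseudo-Voigt peak profile: eta * Lorentzian + (1 - eta) * Gaussian,
    both centred at x0 with the same FWHM (illustrative sketch)."""
    sigma = fwhm / (2 * math.sqrt(2 * math.log(2)))  # Gaussian std dev
    gamma = fwhm / 2                                  # Lorentzian half-width
    gauss = math.exp(-((x - x0) ** 2) / (2 * sigma ** 2))
    lorentz = gamma ** 2 / ((x - x0) ** 2 + gamma ** 2)
    return amplitude * (eta * lorentz + (1 - eta) * gauss)

# Two overlapped peaks are modelled as a sum of profiles and then fitted
# (e.g. by nonlinear least squares) to resolve their individual areas.
def spectrum(x):
    return pseudo_voigt(x, 10.0, 0.5, 0.5) + pseudo_voigt(x, 10.6, 0.5, 0.5)
```

In practice each overlapped region of the X-ray spectrum would be fitted with one such profile per peak, with centre, width, mixing parameter and amplitude as free parameters.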

Relevance: 90.00%

Publisher:

Abstract:

QTL detection experiments in livestock species commonly use the half-sib design. Each male is mated to a number of females, each female producing a limited number of progeny. Analysis consists of attempting to detect associations between phenotype and genotype measured on the progeny. When family sizes are limiting, experimenters may wish to incorporate as much information as possible into a single analysis. However, combining information across sires is problematic because of incomplete linkage disequilibrium between the markers and the QTL in the population. This study describes formulae for obtaining maximum likelihood estimates (MLEs) via the expectation-maximization (EM) algorithm for use in a multiple-trait, multiple-family analysis. A model specifying a QTL with only two alleles, and a common within-sire error variance, is assumed. Compared to single-family analyses, power can be improved up to fourfold with multi-family analyses. The accuracy and precision of QTL location estimates are also substantially improved. With small family sizes, the multi-family, multi-trait analyses substantially reduce, but do not totally remove, biases in QTL effect estimates. In situations where multiple QTL alleles are segregating, the multi-family analysis will average out the effects of the different QTL alleles.
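Under a two-allele QTL model, phenotypes within a family form a mixture of two distributions with a common error variance, and EM alternates between assigning observations to components and re-estimating the parameters. The toy sketch below shows these E- and M-steps for a generic two-component Gaussian mixture; it is our own illustration, not the paper's half-sib formulae, which additionally condition on marker genotypes and pool across families:

```python
import math

def em_two_component(data, n_iter=50):
    """Generic EM for a two-component Gaussian mixture with a common
    variance -- a toy analogue of the two-allele QTL model."""
    mu1, mu2 = min(data), max(data)   # crude initial component means
    var, w = 1.0, 0.5                 # common variance, mixing weight
    for _ in range(n_iter):
        # E-step: posterior probability each observation is from component 1
        resp = []
        for x in data:
            p1 = w * math.exp(-(x - mu1) ** 2 / (2 * var))
            p2 = (1 - w) * math.exp(-(x - mu2) ** 2 / (2 * var))
            resp.append(p1 / (p1 + p2))
        # M-step: re-estimate the weight, the means and the common variance
        s1 = sum(resp)
        w = s1 / len(data)
        mu1 = sum(r * x for r, x in zip(resp, data)) / s1
        mu2 = sum((1 - r) * x for r, x in zip(resp, data)) / (len(data) - s1)
        var = sum(r * (x - mu1) ** 2 + (1 - r) * (x - mu2) ** 2
                  for r, x in zip(resp, data)) / len(data)
    return mu1, mu2, var, w
```

Each iteration cannot decrease the mixture likelihood, which is what makes EM attractive for the incomplete-data structure of QTL genotypes.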

Relevance: 90.00%

Publisher:

Abstract:

The present investigation aimed to critically examine the factor structure and psychometric properties of the Anxiety Sensitivity Index - Revised (ASI-R). Confirmatory factor analysis using a clinical sample of adults (N = 248) revealed that the ASI-R could be improved substantially through the removal of 15 problematic items in order to account for the most robust dimensions of anxiety sensitivity. This modified scale was renamed the 21-item Anxiety Sensitivity Index (21-item ASI) and reanalyzed with a large sample of normative adults (N = 435), revealing configural and metric invariance across groups. Further comparisons with other alternative models, using multi-sample analysis, indicated the 21-item ASI to be the best-fitting model for both groups. There was also evidence of internal consistency, test-retest reliability, and construct validity for both samples, suggesting that the 21-item ASI is a useful assessment device for investigating the construct of anxiety sensitivity in both clinical and normative populations.

Relevance: 90.00%

Publisher:

Abstract:

Large amounts of information can be overwhelming and costly to process, especially when transmitting data over a network. A typical modern Geographical Information System (GIS) brings all types of data together based on the geographic component of the data and provides simple point-and-click query capabilities as well as complex analysis tools. Querying a Geographical Information System, however, can be prohibitively expensive due to the large amounts of data which may need to be processed. Since the use of GIS technology has grown dramatically in the past few years, there is now, more than ever, a need to provide users with the fastest and least expensive query capabilities, especially since an estimated 80% of the data stored in corporate databases has a geographical component. However, not every application requires the same high-quality data for its processing. In this paper we address the issues of reducing the cost and response time of GIS queries by pre-aggregating data at the cost of some accuracy and precision. We present computational issues in the generation of multi-level resolutions of spatial data and show that the problem of finding the best approximation for a given region and a real-valued function on this region, under a predictable error, is in general NP-complete.
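The simplest form of the pre-aggregation idea above is to replace a fine raster of spatial values with a coarser grid whose cells hold block averages, trading accuracy for cheaper queries. The sketch below is our own minimal example of such multi-level coarsening, not the paper's algorithm (which concerns choosing the *best* approximation under an error bound, the NP-complete part):

```python
def coarsen(grid, factor):
    """Down-sample a 2-D grid of values by averaging factor x factor
    blocks -- one level of a multi-resolution pyramid."""
    rows, cols = len(grid), len(grid[0])
    out = []
    for i in range(0, rows, factor):
        row = []
        for j in range(0, cols, factor):
            block = [grid[a][b]
                     for a in range(i, min(i + factor, rows))
                     for b in range(j, min(j + factor, cols))]
            row.append(sum(block) / len(block))  # mean of the block
        out.append(row)
    return out

# A 4x4 grid reduced to 2x2: each coarse cell is the mean of a 2x2 block
fine = [[1, 1, 2, 2],
        [1, 1, 2, 2],
        [3, 3, 4, 4],
        [3, 3, 4, 4]]
print(coarsen(fine, 2))  # [[1.0, 2.0], [3.0, 4.0]]
```

Repeating `coarsen` yields successively lower resolutions, so a query can be answered from the cheapest level whose error is acceptable.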