836 results for Mixed linear models
Abstract:
According to Bandura (1997), efficacy beliefs are a primary determinant of motivation. Still, very little is known about the processes through which people integrate situational factors to form efficacy beliefs (Myers & Feltz, 2007). The aim of this study was to gain insight into the cognitive construction of subjective group-efficacy beliefs; only with a sound understanding of those processes is there a sufficient basis for deriving psychological interventions aimed at group-efficacy beliefs. According to cognitive theories (e.g., Miller, Galanter, & Pribram, 1973), individual group-efficacy beliefs can be seen as the result of a comparison between the demands of a group task and the resources of the performing group. At the center of this comparison are internally represented structures of the group task and plans to perform it. The empirical plausibility of this notion was tested using functional measurement theory (Anderson, 1981). Twenty-three students (M = 23.30 years; SD = 3.39; 35% female) of the University of Bern repeatedly judged the efficacy of groups in different group tasks. The groups consisted of the subject and one or two further fictitious group members, who were manipulated in their level (low, medium, high) of task-relevant abilities. Data obtained from multiple full factorial designs were structured with individuals as second-level units and analyzed using mixed linear models. The task-relevant abilities of group members, specified as fixed factors, all had highly significant effects on subjects' group-efficacy judgments. The effect sizes of the ability factors depended on the respective abilities' importance in a given task. In additive tasks (Steiner, 1972) group resources were integrated in a linear fashion, whereas significant interactions between factors were obtained in interdependent tasks. The results also showed that people take other group members' efficacy beliefs into account when forming their own group-efficacy beliefs. The results support the notion that personal group-efficacy beliefs are obtained by comparing the demands of a task with the performing group's resources. Psychological factors such as other team members' efficacy beliefs are thereby considered task-relevant resources and affect subjective group-efficacy beliefs. This latter finding underlines the adequacy of multidimensional measures. While the validity of collective efficacy measures is usually estimated by how well they predict performance, the results of this study provide an internal validity criterion. It is concluded that Information Integration Theory holds potential to further the understanding of people's cognitive functioning in sport-relevant situations.
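In functional-measurement notation (Anderson, 1981), the competing integration rules at stake can be sketched as follows; the symbols are generic illustrations, not quantities reported by the study. With s_{1j} and s_{2k} the subjective scale values of the two teammates' abilities and w the weights, an adding-type rule predicts parallel factorial curves, while a configural rule with a cross term predicts the interaction observed in interdependent tasks:

```latex
R_{jk} = w_0 + w_1 s_{1j} + w_2 s_{2k}
  \qquad \text{(adding rule: additive tasks)}
R_{jk} = w_0 + w_1 s_{1j} + w_2 s_{2k} + w_{12}\, s_{1j}\, s_{2k}
  \qquad \text{(configural rule: interdependent tasks)}
```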
Abstract:
Introduction Research has shown that individuals infer their group-efficacy beliefs from the group's abilities to perform specific tasks. Group abilities also seem to affect team members' performance motivation, adding a psychological advantage to teams already high on task-relevant abilities. In a recent study we found the effect of group abilities on individual performance motivation to be partially mediated by the team members' individual group-efficacy beliefs, an example of how group-level attributes can affect individual-level parameters. Objectives The study tested whether the direct and mediated effects of low group abilities on performance motivation can be reduced by increasing the visibility of individual contributions to group performance through a separate ranking of individual performances. Method Forty-seven students (M = 22.83 years, SD = 2.83, 34% women) of the University of Bern participated in the study. At three collection points (t1-t3), subjects were provided information about fictitious team members with whom they had to imagine performing a group triathlon. Three levels (low, medium, high) of the other team members' abilities to perform their parts of the triathlon (swimming and biking) were combined in a 3x3 full factorial design, yielding nine groups with different ability profiles. At t1, subjects rated their confidence that the teams would perform well in the triathlon task; at t2 and t3, subjects rated how motivated they were to perform at their best in the respective groups. At t3, the presence of an individual performance ranking was mentioned in the cover story. Mixed linear models (SPSS) and structural equation models for complex survey data (Mplus) were specified to estimate the effects of the individual performance rankings on the relationship between group-efficacy beliefs and performance motivation. Results A significant interaction effect of individual group-efficacy beliefs and the triathlon condition on performance motivation was found, the effect of group-efficacy beliefs on performance motivation being smaller when individual performance rankings were available. The partial mediation of group attributes on performance motivation by group-efficacy beliefs disappeared with the announcement of individual performance rankings. Conclusion In teams low in task-relevant abilities, the disadvantageous effect of group-efficacy beliefs on performance motivation might be reduced by providing means of evaluating individual performances apart from the group's overall performance. While a common group goal is widely believed to be a core criterion of a well-performing sport group, future studies should also examine the possible benefit of individualized goal setting in groups.
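A minimal sketch of the moderation test described in the Results, using Python's statsmodels rather than the SPSS/Mplus pipeline actually used; all variable names and simulated numbers are illustrative assumptions:

```python
# Hypothetical sketch: does the effect of group-efficacy beliefs on motivation
# shrink when an individual ranking is present? Names are illustrative.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
rows = []
for subject in range(47):
    u = rng.normal(0, 0.4)                 # random intercept per subject
    for ranking in (0, 1):                 # 0 = no ranking (t2), 1 = ranking (t3)
        for _ in range(9):                 # nine ability profiles
            efficacy = rng.normal(5, 1)
            slope = 0.7 - 0.4 * ranking    # smaller slope with ranking available
            motivation = 1 + slope * efficacy + u + rng.normal(0, 0.5)
            rows.append((subject, ranking, efficacy, motivation))
df = pd.DataFrame(rows, columns=["subject", "ranking", "efficacy", "motivation"])

# The efficacy x ranking interaction carries the moderation effect of interest.
fit = smf.mixedlm("motivation ~ efficacy * ranking", df, groups=df["subject"]).fit()
print(fit.summary())
```

A negative estimate on the interaction term corresponds to the reported attenuation of the efficacy-motivation link when rankings are announced.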
Abstract:
AIMS To estimate physical activity trajectories for people who quit smoking, and compare them with what would have been expected had smoking continued. DESIGN, SETTING AND PARTICIPANTS A total of 5115 participants in the Coronary Artery Risk Development in Young Adults (CARDIA) study, a population-based study of African American and European American people recruited at age 18-30 years in 1985/6 and followed over 25 years. MEASUREMENTS Physical activity was self-reported during clinical examinations at baseline (1985/6) and at years 2, 5, 7, 10, 15, 20 and 25 (2010/11); smoking status was reported each year (at examinations or by telephone, and imputed where missing). We used mixed linear models to estimate trajectories of physical activity under varying smoking conditions, with adjustment for participant characteristics and secular trends. FINDINGS We found significant interactions by race/sex (P = 0.02 for the interaction with cumulative years of smoking), so we investigated the subgroups separately. Increasing years of smoking were associated with a decline in physical activity in black and white women and black men [e.g. coefficient for 10 years of smoking: -0.14; 95% confidence interval (CI) = -0.20 to -0.07, P < 0.001 for white women]. An increase in physical activity was associated with years since smoking cessation in white men (coefficient 0.06; 95% CI = 0 to 0.13, P = 0.05). The physical activity trajectory for people who quit diverged progressively towards higher physical activity from the expected trajectory had smoking continued. For example, physical activity was 34% higher (95% CI = 18 to 52%; P < 0.001) for white women 10 years after stopping compared with continuing smoking for those 10 years (P = 0.21 for race/sex differences). CONCLUSIONS Smokers who quit have progressively higher levels of physical activity in the years after quitting compared with continuing smokers.
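One hedged way to see how the coefficient and percentage scales can line up, assuming (this is an assumption; the abstract does not state the transformation) that physical activity is modelled on a log scale with per-decade coefficients: the quit-versus-continue contrast after 10 years is the cessation effect minus the forgone smoking effect. With the reported smoking coefficient of -0.14 per decade for white women and a purely hypothetical cessation coefficient of about 0.15 per decade:

```latex
\Delta_{10\,\mathrm{yr}} \;\approx\; 0.15 - (-0.14) \;=\; 0.29,
\qquad e^{0.29} \approx 1.34,
```

which would be consistent with the reported 34% higher physical activity.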
Abstract:
Eucalyptus pellita demonstrated good growth and wood quality traits in this study, with young plantation-grown timber being suitable for both solid and pulp wood products. All traits examined were under moderate levels of genetic control, with little genotype-by-environment interaction when grown on two contrasting sites in Vietnam. Eucalyptus pellita currently has a significant role in reforestation in the tropics. Research to support expanded use of this species is needed: in particular, research to better understand the genetic control of key traits will facilitate the development of genetically improved planting stock. This study aimed to estimate the heritability of diameter at breast height over bark, wood basic density, Kraft pulp yield, modulus of elasticity and microfibril angle, and the genetic correlations among these traits, and to assess the importance of genotype-by-environment interactions in Vietnam. Data for diameter and wood properties were collected from two 10-year-old, open-pollinated progeny trials of E. pellita in Vietnam that evaluated 104 families from six native-range and three orchard sources. Wood properties were estimated from wood samples using near-infrared (NIR) spectroscopy. Data were analysed using mixed linear models to estimate genetic parameters (heritability, proportion of variance between seed sources and genetic correlations). Variation among the nine sources was small compared to additive variance. Narrow-sense heritability and genetic correlation estimates indicated that simultaneous improvement in most traits could be achieved through selection among and within families, as the genetic correlations among traits were either favourable or close to zero. Type B genetic correlations approached one for all traits, suggesting that genotype-by-environment interactions were of little importance. These results support a breeding strategy utilizing a single breeding population advanced by selecting the best individuals across all seed sources. Multi-trait selection for growth and wood properties will lead to populations of E. pellita with both improved productivity and improved timber and pulp properties.
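For orientation, these are the textbook estimators typically used with such open-pollinated trials (standard quantitative-genetics formulas, not taken from this paper): the among-family variance estimated by the mixed model corresponds to a fraction 1/r of the additive variance, and the Type B correlation compares family variance to family-by-site variance:

```latex
\hat{\sigma}^{2}_{A} = r\,\hat{\sigma}^{2}_{f}, \qquad
\hat{h}^{2} = \frac{\hat{\sigma}^{2}_{A}}{\hat{\sigma}^{2}_{f} + \hat{\sigma}^{2}_{e}}, \qquad
r_{B} = \frac{\hat{\sigma}^{2}_{f}}{\hat{\sigma}^{2}_{f} + \hat{\sigma}^{2}_{f \times e}}
```

Here r = 4 for true half-sib families, with values near 2.5 often used for open-pollinated eucalypts to allow for partial selfing; r_B near one, as reported above, indicates negligible genotype-by-environment interaction.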
Abstract:
The objective of this work was to evaluate the genetic gains predicted by different selection indices under the REML/BLUP methodology for five traits of interest to Incaper's conilon coffee breeding program. Eight half-sib progenies of early maturation cycle were evaluated, averaged over two harvests, with three replications, totalling 1368 observations, using the classical, multiplicative and rank-sum selection indices. At harvest, the traits grain size (TG), yield (PRO), plant stature (PT), vegetative vigor (VIG) and degree of inclination (GI) were assessed. The population was evaluated at the Marilândia Experimental Farm, in the northwest region of the state of Espírito Santo. The genetic-statistical analyses were carried out with the Selegen-REML/BLUP software. Analysis of the genetic parameters revealed excellent selective potential among families for all evaluated traits. The Mulamba and Mock index showed the highest selection efficiency among half-sib families of conilon coffee.
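A minimal sketch of the Mulamba and Mock rank-sum index named above, with toy values and assumed selection directions (the study's data and directions may differ): rank each family on every trait, sum the ranks, and select the families with the smallest sums.

```python
# Mulamba & Mock rank-sum index sketch. Trait names follow the abstract (TG,
# PRO, PT); the values and the assumed directions of selection are illustrative.
import pandas as pd

families = pd.DataFrame(
    {"TG": [7.1, 6.8, 7.5, 6.9], "PRO": [62, 70, 58, 66], "PT": [2.1, 2.4, 2.0, 2.3]},
    index=["F1", "F2", "F3", "F4"],
)
# Assumed: higher is better for grain size (TG) and yield (PRO);
# lower is better for plant stature (PT). ascending=True ranks low values best.
directions = {"TG": False, "PRO": False, "PT": True}

ranks = pd.DataFrame({t: families[t].rank(ascending=asc) for t, asc in directions.items()})
rank_sum = ranks.sum(axis=1).sort_values()
print(rank_sum)  # families with the smallest rank sums are selected
```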
Abstract:
Linear mixed effects models have been widely used in the analysis of data where responses are clustered around some random effects, so that it is not reasonable to assume independence between observations in the same cluster. In most biological applications, it is assumed that the distributions of the random effects and of the residuals are Gaussian. This makes inferences vulnerable to the presence of outliers. Here, linear mixed effects models with normal/independent residual distributions for robust inferences are described. Specific distributions examined include univariate and multivariate versions of the Student-t, the slash and the contaminated normal. A Bayesian framework is adopted and Markov chain Monte Carlo is used to carry out the posterior analysis. The procedures are illustrated using birth weight data on rats in a toxicological experiment. Results from the Gaussian and robust models are contrasted, and it is shown how the implementation can be used for outlier detection. The thick-tailed distributions provide an appealing robust alternative to the Gaussian process in linear mixed models, and they are easily implemented using data augmentation and MCMC techniques.
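The normal/independent construction referred to above has a compact form: each residual is a Gaussian scaled by a latent mixing variable, and the choice of mixing distribution yields each named family. In the Student-t case the mixing variable's full conditional is again a Gamma, which is what makes Gibbs-type data augmentation convenient (a standard representation, sketched here rather than quoted from the paper):

```latex
e_i \mid \lambda_i \sim \mathcal{N}\!\left(0,\, \sigma^{2}/\lambda_i\right), \qquad
\text{Student-}t:\ \lambda_i \sim \mathrm{Gamma}\!\left(\tfrac{\nu}{2}, \tfrac{\nu}{2}\right), \quad
\text{slash}:\ \lambda_i \sim \mathrm{Beta}(\nu, 1), \quad
\text{contaminated normal}:\ \lambda_i \in \{\gamma, 1\};
\lambda_i \mid e_i, \sigma^{2} \sim
  \mathrm{Gamma}\!\left(\frac{\nu + 1}{2},\; \frac{\nu + e_i^{2}/\sigma^{2}}{2}\right)
  \quad \text{(Student-}t\text{ case)}.
```

Large residuals yield small posterior weights, which is exactly the downweighting of outliers that makes the inference robust.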
Abstract:
Generalized linear mixed models (GLMMs) provide an elegant framework for the analysis of correlated data. Due to the non-closed form of the likelihood, GLMMs are often fit by computational procedures such as penalized quasi-likelihood (PQL). Special cases of these models are generalized linear models (GLMs), which are often fit using algorithms like iteratively weighted least squares (IWLS). High computational costs and memory space constraints often make it difficult to apply these iterative procedures to data sets with a very large number of cases. This paper proposes a computationally efficient strategy based on the Gauss-Seidel algorithm that iteratively fits sub-models of the GLMM to subsetted versions of the data. Additional gains in efficiency are achieved for Poisson models, commonly used in disease mapping problems, because of their special collapsibility property, which allows data reduction through summaries. Convergence of the proposed iterative procedure is guaranteed for canonical link functions. The strategy is applied to investigate the relationship between ischemic heart disease, socioeconomic status and age/gender category in New South Wales, Australia, based on outcome data consisting of approximately 33 million records. A simulation study demonstrates the algorithm's reliability in analyzing a data set with 12 million records for a (non-collapsible) logistic regression model.
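An illustrative check of the collapsibility property exploited above, not the paper's Gauss-Seidel algorithm itself: a Poisson log-linear model fit to individual records and one fit to counts collapsed over identical covariate patterns (with the cell size as exposure) give the same estimates. Data and variable names below are simulated assumptions:

```python
# Collapsibility of the Poisson log-linear model: individual-level and
# aggregated fits coincide, which is what permits massive data reduction.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
n = 100_000
df = pd.DataFrame({
    "age_band": rng.integers(0, 4, n),
    "ses": rng.integers(0, 3, n),
})
rate = np.exp(-3 + 0.3 * df["age_band"] + 0.2 * df["ses"])
df["y"] = rng.poisson(rate)

# Fit on all individual records.
full = smf.glm("y ~ C(age_band) + C(ses)", df, family=sm.families.Poisson()).fit()

# Collapse to one row per covariate pattern: summed counts, cell size as exposure.
agg = df.groupby(["age_band", "ses"], as_index=False).agg(y=("y", "sum"), n=("y", "size"))
collapsed = smf.glm("y ~ C(age_band) + C(ses)", agg,
                    family=sm.families.Poisson(), exposure=agg["n"]).fit()

print(np.allclose(full.params, collapsed.params))  # True: identical estimates
```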
Abstract:
Linear mixed effects models are frequently used to analyse longitudinal data, due to their flexibility in modelling the covariance structure between and within observations. Further, it is easy to deal with unbalanced data, either with respect to the number of observations per subject or per time period, and with varying time intervals between observations. In most applications of mixed models to biological sciences, a normal distribution is assumed both for the random effects and for the residuals. This, however, makes inferences vulnerable to the presence of outliers. Here, linear mixed models employing thick-tailed distributions for robust inferences in longitudinal data analysis are described. Specific distributions discussed include the Student-t, the slash and the contaminated normal. A Bayesian framework is adopted, and the Gibbs sampler and the Metropolis-Hastings algorithms are used to carry out the posterior analyses. An example with data on orthodontic distance growth in children is discussed to illustrate the methodology. Analyses based on either the Student-t distribution or on the usual Gaussian assumption are contrasted. The thick-tailed distributions provide an appealing robust alternative to the Gaussian process for modelling distributions of the random effects and of residuals in linear mixed models, and the MCMC implementation allows the computations to be performed in a flexible manner.
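A minimal Gibbs sketch of the Student-t data augmentation these models rely on, reduced for brevity to a fixed-effects linear model with a flat prior on the coefficients and with sigma-squared and nu held fixed; the same weighting step carries over to the mixed-model full conditionals:

```python
# Gibbs sampler for y = X beta + e with e_i ~ t_nu(0, sigma2), written as
# e_i | w_i ~ N(0, sigma2 / w_i), w_i ~ Gamma(nu/2, nu/2). Purely illustrative.
import numpy as np

rng = np.random.default_rng(4)
n, nu, sigma2 = 200, 4.0, 1.0
X = np.column_stack([np.ones(n), rng.normal(size=n)])
y = X @ np.array([1.0, 2.0]) + sigma2**0.5 * rng.standard_t(nu, size=n)

beta = np.zeros(2)
draws = []
for it in range(2000):
    # 1) Sample mixing weights: outliers (large residuals) get small w_i.
    e = y - X @ beta
    w = rng.gamma((nu + 1) / 2, 2 / (nu + e**2 / sigma2))
    # 2) Sample beta from its Gaussian full conditional (weighted least squares).
    XtWX = X.T @ (w[:, None] * X)
    mean = np.linalg.solve(XtWX, X.T @ (w * y))
    cov = sigma2 * np.linalg.inv(XtWX)
    beta = rng.multivariate_normal(mean, cov)
    if it >= 500:                       # discard burn-in
        draws.append(beta)
print(np.mean(draws, axis=0))           # posterior mean of (intercept, slope)
```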
Abstract:
A computationally efficient sequential Monte Carlo algorithm is proposed for the sequential design of experiments for the collection of block data described by mixed effects models. The difficulty in applying a sequential Monte Carlo algorithm in such settings is the need to evaluate the observed data likelihood, which is typically intractable for all but linear Gaussian models. To overcome this difficulty, we propose to estimate the likelihood unbiasedly, and to perform inference and make decisions based on an exact-approximate algorithm. Two estimates are proposed: one using quasi-Monte Carlo methods and one using the Laplace approximation with importance sampling. Both of these approaches can be computationally expensive, so we propose exploiting parallel computational architectures to ensure designs can be derived in a timely manner. We also extend our approach to allow for model uncertainty. This research is motivated by important pharmacological studies related to the treatment of critically ill patients.
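A sketch of the unbiased likelihood estimate the exact-approximate scheme rests on, for the simplest case of a Gaussian random-intercept block; the Laplace-style Gaussian proposal and all numbers are toy assumptions:

```python
# Importance-sampling estimate of an observed-data likelihood
# p(y | theta) = E_b[ p(y | b, theta) ] for a one-block random-intercept model
# y_j = b + eps_j, b ~ N(0, sigma_b^2), eps_j ~ N(0, sigma_e^2).
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)

def loglik_hat(y, sigma_b, sigma_e, M=2000):
    """Log of an unbiased IS estimate of p(y | sigma_b, sigma_e)."""
    # Gaussian proposal centred at the conditional mode of the random effect b.
    prec = len(y) / sigma_e**2 + 1 / sigma_b**2
    mode, sd = (y.sum() / sigma_e**2) / prec, prec**-0.5
    b = rng.normal(mode, sd, size=M)
    logw = (stats.norm.logpdf(y[:, None], loc=b, scale=sigma_e).sum(axis=0)
            + stats.norm.logpdf(b, 0.0, sigma_b)
            - stats.norm.logpdf(b, mode, sd))
    return np.logaddexp.reduce(logw) - np.log(M)   # log of the IS average

y = rng.normal(0.7, 1.0, size=8)                    # one block of repeated measures
print(loglik_hat(y, sigma_b=1.0, sigma_e=1.0))
```

Plugging such an estimator into the acceptance ratio of a pseudo-marginal sampler leaves the target distribution exact, which is the sense in which the algorithm is "exact-approximate".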
Abstract:
The effects of the initial height on the temporal persistence probability of steady-state height fluctuations in up-down symmetric linear models of surface growth are investigated. We study the (1 + 1)-dimensional Family model and the (1 + 1)- and (2 + 1)-dimensional larger curvature (LC) model. Both the Family and LC models have up-down symmetry, so the positive and negative persistence probabilities in the steady state, averaged over all values of the initial height h(0), are equal to each other. However, these two probabilities are not equal if one considers a fixed nonzero value of h(0). Plots of the positive persistence probability for negative initial height versus time exhibit power-law behavior if the magnitude of the initial height is larger than the interface width at saturation. By symmetry, the negative persistence probability for positive initial height also exhibits the same behavior. The persistence exponent that describes this power-law decay decreases as the magnitude of the initial height is increased. The dependence of the persistence probability on the initial height, the system size, and the discrete sampling time is found to exhibit scaling behavior.
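An illustrative simulation sketch of the (1+1)-dimensional Family model (random deposition with surface relaxation) and of the steady-state positive persistence probability; the deterministic tie-breaking and the bookkeeping are simplifications. Conditioning the tracked sites on a window of initial heights h(0) would reproduce the initial-height dependence studied above:

```python
# Family model: a particle dropped at a random column relaxes to the lowest of
# the site and its nearest neighbours (ties broken deterministically here).
import numpy as np

rng = np.random.default_rng(6)
L, t_sat, t_obs = 128, 200_000, 2_000
h = np.zeros(L, dtype=np.int64)

def deposit(h):
    i = int(rng.integers(L))
    nbrs = [(i - 1) % L, i, (i + 1) % L]
    j = min(nbrs, key=lambda k: h[k])   # relax to the locally lowest column
    h[j] += 1

for _ in range(t_sat):                   # run into the saturated steady state
    deposit(h)

dh0 = h - h.mean()                       # fluctuations at the reference time
alive = dh0 > 0                          # sites starting with positive fluctuation
n0 = alive.sum()
P = []
for _ in range(t_obs):
    for _ in range(L):                   # one Monte Carlo time step = L depositions
        deposit(h)
    alive &= (h - h.mean()) > 0          # survive only while the sign never flips
    P.append(alive.sum() / n0)

print(P[::200])                          # P_+(t): expected to decay as a power law
```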
Abstract:
The goal of this work is to learn a parsimonious and informative representation for high-dimensional time series. Conceptually, this comprises two distinct yet tightly coupled tasks: learning a low-dimensional manifold and modeling the dynamical process. These two tasks have a complementary relationship as the temporal constraints provide valuable neighborhood information for dimensionality reduction and conversely, the low-dimensional space allows dynamics to be learnt efficiently. Solving these two tasks simultaneously allows important information to be exchanged mutually. If nonlinear models are required to capture the rich complexity of time series, then the learning problem becomes harder as the nonlinearities in both tasks are coupled. The proposed solution approximates the nonlinear manifold and dynamics using piecewise linear models. The interactions among the linear models are captured in a graphical model. The model structure setup and parameter learning are done using a variational Bayesian approach, which enables automatic Bayesian model structure selection, hence solving the problem of over-fitting. By exploiting the model structure, efficient inference and learning algorithms are obtained without oversimplifying the model of the underlying dynamical process. Evaluation of the proposed framework with competing approaches is conducted in three sets of experiments: dimensionality reduction and reconstruction using synthetic time series, video synthesis using a dynamic texture database, and human motion synthesis, classification and tracking on a benchmark data set. In all experiments, the proposed approach provides superior performance.
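A toy sketch of the piecewise-linear idea only: the state space is partitioned (here by nearest prototype) and each region carries its own linear dynamics. The paper's variational Bayesian machinery for jointly learning the manifold, the dynamics, and the model structure is far richer than this illustration; all values below are invented:

```python
# Piecewise-linear dynamics: x_{t+1} = A_k x_t + b_k, with k chosen by the
# region that x_t falls in (hard nearest-prototype assignment).
import numpy as np

rng = np.random.default_rng(7)
protos = np.array([[-1.0, 0.0], [1.0, 0.0]])            # region prototypes
A = np.array([[[0.9, -0.2], [0.2, 0.9]],                # per-region dynamics
              [[0.7,  0.3], [-0.3, 0.7]]])
b = np.array([[0.05, 0.0], [-0.05, 0.0]])

def step(x):
    k = int(np.argmin(((protos - x) ** 2).sum(axis=1))) # hard region assignment
    return A[k] @ x + b[k]

x = np.array([0.5, 0.5])
traj = [x]
for _ in range(50):
    x = step(x)
    traj.append(x)
print(np.array(traj)[-3:])                              # tail of the trajectory
```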
Abstract:
The motivation for this paper is to present procedures for automatically creating idealised finite element models from the 3D CAD solid geometry of a component. The procedures produce an accurate and efficient analysis model with little effort on the part of the user. The technique is applicable to thin walled components with local complex features and automatically creates analysis models where 3D elements representing the complex regions in the component are embedded in an efficient shell mesh representing the mid-faces of the thin sheet regions. As the resulting models contain elements of more than one dimension, they are referred to as mixed dimensional models. Although these models are computationally more expensive than some of the idealisation techniques currently employed in industry, they do allow the structural behaviour of the model to be analysed more accurately, which is essential if appropriate design decisions are to be made. Also, using these procedures, analysis models can be created automatically whereas the current idealisation techniques are mostly manual, have long preparation times, and are based on engineering judgement. In the paper the idealisation approach is first applied to 2D models that are used to approximate axisymmetric components for analysis. For these models 2D elements representing the complex regions are embedded in a 1D mesh representing the midline of the cross section of the thin sheet regions. Also discussed is the coupling, which is necessary to link the elements of different dimensionality together. Analysis results from a 3D mixed dimensional model created using the techniques in this paper are compared to those from a stiffened shell model and a 3D solid model to demonstrate the improved accuracy of the new approach. At the end of the paper a quantitative analysis of the reduction in computational cost due to shell meshing thin sheet regions demonstrates that the reduction in degrees of freedom is proportional to the square of the aspect ratio of the region, and for long slender solids, the reduction can be proportional to the aspect ratio of the region if appropriate meshing algorithms are used.
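One hedged way to see the quoted scaling, under simplifying assumptions not spelled out in the abstract: take a thin square sheet of side L and thickness t with aspect ratio a = L/t. If a well-shaped solid mesh needs elements of size on the order of t, while the shell mesh of the mid-face can use elements of size on the order of L, then

```latex
N_{\text{solid}} \sim \left(\frac{L}{t}\right)^{2} = a^{2}, \qquad
N_{\text{shell}} \sim \mathcal{O}(1), \qquad
\frac{N_{\text{solid}}}{N_{\text{shell}}} \propto a^{2},
```

while anisotropic solid elements stretched in-plane would reduce the solid count to order a, matching the linear reduction quoted for long slender solids with appropriate meshing algorithms.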
Abstract:
Survival studies are concerned with the time that elapses from the start of the study (diagnosis of the disease, start of treatment, ...) until the event of interest occurs (death, cure, improvement, ...). However, this event is often observed more than once in the same individual during the follow-up period (multivariate survival data). In that case, a methodology different from standard survival analysis is required. The main problem posed by this type of data is that the observations may not be independent. Until now, this problem has been addressed in two different ways depending on the dependent variable: if the variable follows a distribution from the exponential family, generalized linear mixed models (GLMMs) are used; if the variable is time, whose probability distribution does not belong to this family, multivariate survival analysis is used. The aim of this thesis is to unify these two approaches, that is, to model a time-to-event dependent variable with groupings of individuals or observations through a GLMM, in order to introduce new methods for handling this type of data.
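One standard bridge between the two approaches described, given as an illustration rather than as the thesis's own construction: recurrent-event survival times can be expanded into (subject, interval) records and modelled as Poisson counts with a log time-at-risk offset; adding a subject-level random effect (frailty) then turns the model into a GLMM. All names and numbers below are simulated assumptions:

```python
# Piecewise-exponential representation of recurrent-event data. Each row holds
# the events and time at risk for one (subject, interval); a Poisson model with
# log-exposure offset is then a piecewise-constant hazard model. Adding a
# per-subject random intercept would make it a GLMM; for brevity we fit only
# the fixed-effects part here.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(9)
rows = []
for subject in range(200):
    frailty = rng.lognormal(0.0, 0.3)          # unobserved subject effect
    treated = subject % 2
    for interval in range(3):                  # three follow-up windows
        exposure = rng.uniform(0.5, 1.0)       # time at risk in the window
        hazard = 0.8 * np.exp(-0.5 * treated) * frailty
        events = rng.poisson(hazard * exposure)
        rows.append((subject, interval, treated, exposure, events))
df = pd.DataFrame(rows, columns=["subject", "interval", "treated", "exposure", "events"])

fit = smf.glm("events ~ C(interval) + treated", df,
              family=sm.families.Poisson(), exposure=df["exposure"]).fit()
print(fit.params)   # 'treated' estimates the log hazard ratio
```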
Abstract:
As in any field of scientific inquiry, advances in the field of second language acquisition (SLA) rely in part on the interpretation and generalizability of study findings using quantitative data analysis and inferential statistics. While statistical techniques such as ANOVA and t-tests are widely used in second language research, this article reviews a class of newer statistical models that has not yet been widely adopted in the field but has garnered interest in other fields of language research. Mixed-effects models are introduced, and the potential benefits of these models for the second language researcher are discussed. A simple example of mixed-effects data analysis using the statistical software package R (R Development Core Team, 2011) is provided as an introduction to the use of these statistical techniques and to exemplify how such analyses can be reported in research articles. It is concluded that mixed-effects models provide the second language researcher with a powerful tool for the analysis of a variety of types of second language acquisition data.
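The article's worked example uses R; as a language-neutral companion, here is an analogous minimal sketch using Python's statsmodels, with a by-learner random intercept and random slope, a structure commonly recommended in second language research. Variable names and numbers are illustrative assumptions:

```python
# Mixed-effects sketch: reaction time modelled with a fixed condition effect
# plus a random intercept and random condition slope per learner.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(8)
rows = []
for learner in range(40):
    u0 = rng.normal(0, 50)                  # learner-specific baseline
    u1 = rng.normal(0, 20)                  # learner-specific condition effect
    for condition in (0, 1):                # e.g., L1 vs L2 trials
        for _ in range(20):
            rt = 600 + (80 + u1) * condition + u0 + rng.normal(0, 60)
            rows.append((learner, condition, rt))
df = pd.DataFrame(rows, columns=["learner", "condition", "rt"])

fit = smf.mixedlm("rt ~ condition", df, groups=df["learner"],
                  re_formula="~condition").fit()
print(fit.summary())                        # fixed effect plus variance components
```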