931 results for Mixed Linear Model
Abstract:
An optimal multiple testing procedure is identified for linear hypotheses under the general linear model, maximizing the expected number of false null hypotheses rejected at any significance level. The optimal procedure depends on the unknown data-generating distribution, but can be consistently estimated. Drawing information together across many hypotheses, the estimated optimal procedure provides an empirical alternative hypothesis by adapting to underlying patterns of departure from the null. Proposed multiple testing procedures based on the empirical alternative are evaluated through simulations and an application to gene expression microarray data. Compared to a standard multiple testing procedure, it is not unusual for use of an empirical alternative hypothesis to increase by 50% or more the number of true positives identified at a given significance level.
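The abstract's comparison baseline is a "standard multiple testing procedure." As a concrete point of reference, here is a minimal sketch of the familiar Benjamini-Hochberg step-up procedure — not the paper's optimal procedure, whose form depends on the estimated empirical alternative:

```python
import numpy as np

def benjamini_hochberg(pvals, alpha=0.05):
    """Standard BH step-up procedure: reject the largest set of
    hypotheses whose sorted p-values fall under the BH line
    alpha * k / m. Returns a boolean rejection mask."""
    p = np.asarray(pvals, dtype=float)
    m = len(p)
    order = np.argsort(p)
    thresh = alpha * np.arange(1, m + 1) / m
    below = p[order] <= thresh
    reject = np.zeros(m, dtype=bool)
    if below.any():
        k = np.max(np.nonzero(below)[0])  # largest index under the line
        reject[order[:k + 1]] = True
    return reject

print(benjamini_hochberg([0.01, 0.02, 0.03, 0.5]))  # rejects the first three
```

An empirical-alternative procedure would replace the uniform ordering of p-values with an ordering informed by the estimated pattern of departures from the null.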
Abstract:
In environmental epidemiology, exposure X and health outcome Y vary in space and time. We present a method to diagnose the possible influence of unmeasured confounders U on the estimated effect of X on Y, and we propose several approaches to robust estimation. The idea is to use space and time as proxy measures for the unmeasured factors U. We start with the time series case, where X and Y are continuous variables at equally spaced times, and assume a linear model. We define matching estimators b(u) that correspond to pairs of observations with specific lag u. Controlling for a smooth function of time, St, using a kernel estimator is roughly equivalent to estimating the association with a linear combination of the b(u), with weights that involve two components: the assumptions about the smoothness of St and the normalized variogram of the X process. When an unmeasured confounder U exists, but the model otherwise correctly controls for measured confounders, the excess variation in the b(u) is evidence of confounding by U. We use the plot of b(u) versus lag u, the lagged-estimator plot (LEP), to diagnose the influence of U on the effect of X on Y. We use appropriate linear combinations of the b(u), or extrapolate to b(0), to obtain novel estimators that are more robust to the influence of smooth U. The methods are extended to time series log-linear models and to spatial analyses. The LEP gives a direct view of the magnitude of the estimators at each lag u and provides evidence when a model does not adequately describe the data.
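As an illustration of the lag-based idea, one plausible difference-based form of the matching estimators is sketched below; the paper's exact definition of b(u) may differ, so treat `lagged_estimators` as an assumption-laden sketch rather than the authors' estimator:

```python
import numpy as np

def lagged_estimators(x, y, max_lag):
    """For each lag u, regress the paired differences Y[t+u] - Y[t]
    on X[t+u] - X[t] (a hypothetical difference-based form of the
    b(u) matching estimators). Inputs are 1-D NumPy arrays."""
    bs = {}
    for u in range(1, max_lag + 1):
        dx = x[u:] - x[:-u]
        dy = y[u:] - y[:-u]
        bs[u] = np.sum(dx * dy) / np.sum(dx * dx)
    return bs
```

Plotting `bs[u]` against u would give the LEP: under no smooth confounding, the b(u) should be roughly flat across lags.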
Abstract:
Mild cognitive impairment (MCI) often refers to the preclinical stage of dementia, where the majority develop Alzheimer's disease (AD). Given that neurodegenerative burden and compensatory mechanisms might exist before accepted clinical symptoms of AD are noticeable, the current prospective study aimed to investigate the functioning of brain regions in the visuospatial networks responsible for preclinical symptoms in AD using event-related functional magnetic resonance imaging (fMRI). Eighteen MCI patients were evaluated and clinically followed for approximately 3 years. Five progressed to AD (PMCI) and eight remained stable (SMCI). Thirteen age-, gender- and education-matched controls also participated. An angle discrimination task with varying task demands was used. Brain activation patterns as well as task demand-dependent and -independent signal changes between the groups were investigated by using an extended general linear model including individual performance (reaction time [RT]) of each single trial. Similar behavioral (RT and accuracy) responses were observed between MCI patients and controls. A network of bilateral activations, e.g. dorsal pathway, which increased linearly with increasing task demand, was engaged in all subjects. Compared with SMCI patients and controls, PMCI patients showed a stronger relation between task demand and brain activity in left superior parietal lobules (SPL) as well as a general task demand-independent increased activation in left precuneus. Altered brain function can be detected at a group level in individuals that progress to AD before changes occur at the behavioral level. Increased parietal activation in PMCI could reflect a reduced neuronal efficacy due to accumulating AD pathology and might predict future clinical decline in patients with MCI.
Abstract:
Neural correlates of electroencephalographic (EEG) alpha rhythm are poorly understood. Here, we related EEG alpha rhythm in awake humans to blood-oxygen-level-dependent (BOLD) signal change determined by functional magnetic resonance imaging (fMRI). Topographical EEG was recorded simultaneously with fMRI during an open versus closed eyes and an auditory stimulation versus silence condition. EEG was separated into spatial components of maximal temporal independence using independent component analysis. Alpha component amplitudes and stimulus conditions served as general linear model regressors of the fMRI signal time course. In both paradigms, EEG alpha component amplitudes were associated with BOLD signal decreases in occipital areas, but not in thalamus, when a standard BOLD response curve (maximum effect at approximately 6 s) was assumed. The part of the alpha regressor independent of the protocol condition, however, revealed significant positive thalamic and mesencephalic correlations with a mean time delay of approximately 2.5 s between EEG and BOLD signals. The inverse relationship between EEG alpha amplitude and BOLD signals in primary and secondary visual areas suggests that widespread thalamocortical synchronization is associated with decreased brain metabolism. While the temporal relationship of this association is consistent with metabolic changes occurring simultaneously with changes in the alpha rhythm, sites in the medial thalamus and in the anterior midbrain were found to correlate with short time lag. Assuming a canonical hemodynamic response function, this finding is indicative of activity preceding the actual EEG change by some seconds.
Abstract:
OBJECTIVE: Resonance frequency analysis (RFA) is a method of measuring implant stability. However, little is known about RFA of implants with long loading periods. The objective of the present study was to determine standard implant stability quotients (ISQs) for clinically successful, osseointegrated 1-stage implants in the edentulous mandible. MATERIALS AND METHODS: Stability measurements by means of RFA were performed in regularly followed patients who had received 1-stage implants for overdenture support. The time interval between implant placement and measurement ranged from 1 year up to 10 years. The short-term group comprised patients who were followed for up to 5 years, while the long-term group included patients with an observation time of > 5 years up to 10 years. For further comparison, RFA measurements were performed in a matching group with unloaded implants at the end of the surgical procedure. For statistical analysis, various parameters that might influence the ISQs of loaded implants were included, and a mixed-effects model was applied (regression analysis, P < .0125). RESULTS: Ninety-four patients were available with a total of 205 loaded implants, and 16 patients with 36 implants immediately after the surgical procedure. The mean ISQ of all measured implants was 64.5 ± 7.9 (range, 58 to 72). Statistical analysis did not reveal significant differences in the mean ISQ related to the observation time. The parameters with overall statistical significance were the diameter of the implants and changes in the attachment level. In the short-term group, gender and the clinically measured attachment level had a significant effect. Implant diameter had a significant effect in the long-term group. CONCLUSIONS: A mean ISQ of 64.5 ± 7.9 was found to be representative for stable asymptomatic interforaminal implants measured by the RFA instrument at any given time point.
No significant differences in ISQ values were found between implants with different postsurgical time intervals. Implant diameter appears to influence the ISQ of interforaminal implants.
Abstract:
Background: The goal of this study was to determine whether site-specific differences in the subgingival microbiota could be detected by the checkerboard method in subjects with periodontitis. Methods: Subjects with at least six periodontal pockets with a probing depth (PD) between 5 and 7 mm were enrolled in the study. Subgingival plaque samples were collected with sterile curets by a single-stroke procedure at six selected periodontal sites from 161 subjects (966 subgingival sites). Subgingival bacterial samples were assayed with the checkerboard DNA-DNA hybridization method identifying 37 species. Results: Probing depths of 5, 6, and 7 mm were found at 50% (n = 483), 34% (n = 328), and 16% (n = 155) of sites, respectively. Statistical analysis failed to demonstrate differences in the sum of bacterial counts by tooth type (P = 0.18) or specific location of the sample (P = 0.78). With the exceptions of Campylobacter gracilis (P < 0.001) and Actinomyces naeslundii (P < 0.001), analysis by general linear model multivariate regression failed to identify subject or sample location factors as explanatory of the microbiologic results. A trend of difference in bacterial load by tooth type was found for Prevotella nigrescens (P < 0.01). At a cutoff level of ≥ 1.0 × 10⁵, Porphyromonas gingivalis and Tannerella forsythia (previously T. forsythensis) were present at 48.0% to 56.3% and 46.0% to 51.2% of sampled sites, respectively. Conclusions: Given the similarities in the clinical evidence of periodontitis, the presence and levels of 37 species commonly studied in periodontitis are similar, with no differences between molar, premolar, and incisor/cuspid subgingival sites. This may facilitate microbiologic sampling strategies in subjects during periodontal therapy.
Abstract:
The flammability zone boundaries are very important properties for preventing explosions in the process industries. Within the boundaries a flame or explosion can occur, so it is important to understand these boundaries to prevent fires and explosions. Very little work has been reported in the literature on modeling the flammability zone boundaries. Two boundaries are defined and studied: the upper flammability zone boundary and the lower flammability zone boundary. Three methods are presented to predict them: the linear model, the extended linear model, and an empirical model. The linear model is a thermodynamic model that uses the upper flammability limit (UFL) and lower flammability limit (LFL) to calculate two adiabatic flame temperatures. When the proper assumptions are applied, the linear model can be reduced to the well-known equation yLOC = z·yLFL for estimating the limiting oxygen concentration. The extended linear model attempts to account for the changes in the reactions along the UFL boundary. Finally, the empirical method fits the boundaries with linear equations between the UFL or LFL and the intercept with the oxygen axis. Comparison of the models to experimental data on the flammability zone shows that the best model for estimating the flammability zone boundaries is the empirical method. It fits the limiting oxygen concentration (LOC), upper oxygen limit (UOL), and lower oxygen limit (LOL) quite well. The regression coefficient values for the fits to the LOC, UOL, and LOL are 0.672, 0.968, and 0.959, respectively. This is better than the fit of the "z·yLFL" method for the LOC, for which the regression coefficient value is 0.416.
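The reduced linear-model relationship yLOC = z·yLFL can be applied directly. The helper below and the methane numbers (z = 2, LFL ≈ 5 vol %) are illustrative, not taken from the abstract's data set:

```python
def estimate_loc(z, y_lfl):
    """Linear-model estimate of the limiting oxygen concentration:
    yLOC = z * yLFL, where z is the stoichiometric moles of O2 per
    mole of fuel and y_lfl is the lower flammability limit (vol %)."""
    return z * y_lfl

# Methane: CH4 + 2 O2 -> CO2 + 2 H2O, so z = 2; LFL is about 5 vol %.
print(estimate_loc(2, 5.0))  # -> 10.0 vol %
```

The gap between such estimates and measured LOC values is what motivates the empirical fits the abstract finds superior.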
Abstract:
Four papers, written in collaboration with the author’s graduate school advisor, are presented. In the first paper, uniform and non-uniform Berry-Esseen (BE) bounds on the convergence to normality of a general class of nonlinear statistics are provided; novel applications to specific statistics, including the non-central Student’s, Pearson’s, and the non-central Hotelling’s, are also stated. In the second paper, a BE bound on the rate of convergence of the F-statistic used in testing hypotheses from a general linear model is given. The third paper considers the asymptotic relative efficiency (ARE) between the Pearson, Spearman, and Kendall correlation statistics; conditions sufficient to ensure that the Spearman and Kendall statistics are equally (asymptotically) efficient are provided, and several models are considered which illustrate the use of such conditions. Lastly, the fourth paper proves that, in the bivariate normal model, the ARE between any of these correlation statistics possesses certain monotonicity properties; quadratic lower and upper bounds on the ARE are stated as direct applications of such monotonicity patterns.
Abstract:
The present study was conducted to determine the effects of different variables on the perception of vehicle speeds in a driving simulator. The motivations of the study include validating the Michigan Technological University Human Factors and Systems Lab driving simulator, obtaining a better understanding of what influences speed perception in a virtual environment, and improving speed perception in future simulations involving driver performance measures. Using a fixed-base driving simulator, two experiments were conducted: the first to evaluate the effects of subject gender, roadway orientation, field of view, barriers along the roadway, opposing traffic speed, and subject speed judgment strategies on speed estimation, and the second to evaluate all of these variables as well as feedback training through use of the speedometer during a practice run. A mixed-procedure model (mixed-model ANOVA) in SAS® 9.2 was used to determine the significance of these variables in relation to subject speed estimates, as both between- and within-subject variables were analyzed. Subject gender, roadway orientation, feedback training, and the type of judgment strategy were all found to significantly affect speed perception. Speed perception in the driving simulator was found to be significantly improved by using curved roadways, feedback training, and speed judgment strategies such as road lines and speed limit experience.
Abstract:
The need for a stronger and more durable building material is becoming more important as the structural engineering field expands and challenges the behavioral limits of current materials. One of the demands for stronger material is rooted in the effects that dynamic loading has on a structure. High strain rates, on the order of 10¹ s⁻¹ to 10³ s⁻¹, are only a small part of the overall range of loading rates that can occur (anywhere between 10⁻⁸ s⁻¹ and 10⁴ s⁻¹, at any point in a structure's life), but they have very important effects when considering dynamic loading on a structure. High strain rates such as these can cause the material and structure to behave differently than at slower strain rates, which necessitates testing materials under such loading to understand their behavior. Ultra high performance concrete (UHPC), a relatively new material in the U.S. construction industry, exhibits many enhanced strength and durability properties compared to standard normal-strength concrete. However, the use of this material for high strain rate applications requires an understanding of UHPC's dynamic properties under corresponding loads. One such dynamic property is the increase in compressive strength under high strain rate load conditions, quantified as the dynamic increase factor (DIF). This factor allows a designer to relate the dynamic compressive strength back to the static compressive strength, which generally is a well-established property. Previous research establishes the relationships for the concept of DIF in design. The generally accepted methodology for obtaining high strain rates to study the enhanced behavior of compressive material strength is the split Hopkinson pressure bar (SHPB). In this research, 83 Cor-Tuf UHPC specimens were tested in dynamic compression using a SHPB at Michigan Technological University.
The specimens were separated into two categories, ambient cured and thermally treated, with aspect ratios of 0.5:1, 1:1, and 2:1 within each category. There was statistically no significant difference in mean DIF across the aspect ratios and cure regimes considered in this study; DIFs ranged from 1.85 to 2.09. Failure modes were observed to be mostly Type 2, Type 4, or combinations thereof for all specimen aspect ratios when classified according to ASTM C39 fracture pattern guidelines. The Comité Euro-International du Béton (CEB) model for DIF versus strain rate does not accurately predict the DIF for the UHPC data gathered in this study. Additionally, a measurement system analysis was conducted to observe variance within the measurement system, and a general linear model analysis was performed to examine the interaction and main effects that aspect ratio, cannon pressure, and cure method have on the maximum dynamic stress.
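The DIF itself is simply the ratio of dynamic to quasi-static compressive strength. The function and the example strengths below are hypothetical illustrations, not measurements from the study:

```python
def dynamic_increase_factor(dynamic_strength, static_strength):
    """DIF = dynamic compressive strength / static compressive strength,
    relating high-strain-rate behavior back to the well-established
    static property. Strengths must share the same units (e.g. MPa)."""
    return dynamic_strength / static_strength

# Hypothetical UHPC specimen: 150 MPa static, 300 MPa at high strain rate.
print(dynamic_increase_factor(300.0, 150.0))  # -> 2.0, within the 1.85-2.09 range reported
```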
Abstract:
INTRODUCTION Our objective was to investigate potential associations between maxillary sinus floor extension and inclination of maxillary second premolars and second molars in patients with Class II Division 1 malocclusion whose orthodontic treatment included maxillary first molar extractions. METHODS The records of 37 patients (18 boys, 19 girls; mean age, 13.2 years; SD, 1.62 years) treated between 1998 and 2004 by 1 orthodontist with full Begg appliances were used in this study. Inclusion criteria were white patients with Class II Division 1 malocclusion, sagittal overjet of ≥4 mm, treatment plan including extraction of the maxillary first permanent molars, no missing teeth, and no agenesis. Maxillary posterior tooth inclination and lower maxillary sinus area in relation to the palatal plane were measured on lateral cephalograms at 3 time points: at the start and end of treatment, and on average 2.5 years posttreatment. Data were analyzed for the second premolar and second molar inclinations by using mixed linear models. RESULTS The analysis showed that the second molar inclination angle decreased by 7° after orthodontic treatment, compared with pretreatment values, and by 11.5° at the latest follow-up, compared with pretreatment. There was evidence that maxillary sinus volume was negatively correlated with second molar inclination angle; the greater the volume, the smaller the inclination angle. For premolars, inclination increased by 15.4° after orthodontic treatment compared with pretreatment, and by 8.1° at the latest follow-up compared with baseline. The volume of the maxillary sinus was not associated with premolar inclination. CONCLUSIONS We found evidence of an association between maxillary second molar inclination and surface area of the lower sinus in patients treated with maxillary first molar extractions. 
Clinicians who undertake such an extraction scheme in Class II patients should be aware of this potential association and consider appropriate biomechanics to control root uprighting.
Abstract:
According to Bandura (1997), efficacy beliefs are a primary determinant of motivation. Still, very little is known about the processes through which people integrate situational factors to form efficacy beliefs (Myers & Feltz, 2007). The aim of this study was to gain insight into the cognitive construction of subjective group-efficacy beliefs; only with a sound understanding of those processes is there a sufficient base from which to derive psychological interventions aimed at group-efficacy beliefs. According to cognitive theories (e.g., Miller, Galanter, & Pribram, 1973), individual group-efficacy beliefs can be seen as the result of a comparison between the demands of a group task and the resources of the performing group. At the center of this comparison are internally represented structures of the group task and plans to perform it. The empirical plausibility of this notion was tested using functional measurement theory (Anderson, 1981). Twenty-three students (M = 23.30 years; SD = 3.39; 35% female) of the University of Bern repeatedly judged the efficacy of groups in different group tasks. The groups consisted of the subject and one or two fictive group members; the latter were manipulated by their level (low, medium, high) of task-relevant abilities. Data obtained from multiple full factorial designs were structured with individuals as second-level units and analyzed using mixed linear models. The task-relevant abilities of group members, specified as fixed factors, all had highly significant effects on subjects' group-efficacy judgments. The effect sizes of the ability factors were found to depend on the respective ability's importance in a given task. In additive tasks (Steiner, 1972), group resources were integrated in a linear fashion, whereas significant interactions between factors were obtained in interdependent tasks.
The results also showed that people take into account other group members' efficacy beliefs when forming their own group-efficacy beliefs. The results support the notion that personal group-efficacy beliefs are obtained by comparing the demands of a task with the performing group's resources. Psychological factors such as other team members' efficacy beliefs are thereby considered task-relevant resources and affect subjective group-efficacy beliefs. This latter finding underlines the adequacy of multidimensional measures. While the validity of collective efficacy measures is usually estimated by how well they predict performance, the results of this study allow for an internal validity criterion. It is concluded that information integration theory holds potential to further our understanding of people's cognitive functioning in sport-relevant situations.
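The contrast drawn above, between linear integration in additive tasks and interactive integration in interdependent tasks, can be sketched with two toy rules. Both functions and their inputs are illustrative assumptions, not the study's fitted models:

```python
def additive_integration(abilities, weights):
    """Additive information-integration rule: the efficacy judgment is
    a weighted sum of members' task-relevant abilities, so each member
    contributes independently of the others."""
    return sum(w * a for w, a in zip(weights, abilities))

def conjunctive_integration(abilities):
    """One simple non-additive rule for interdependent tasks: the
    weakest member dominates, producing interactions between the
    ability factors."""
    return min(abilities)

print(additive_integration([1.0, 2.0], [0.5, 0.5]))  # -> 1.5
print(conjunctive_integration([1.0, 2.0]))           # -> 1.0
```

Under the additive rule, raising one member's ability always shifts the judgment by the same amount; under the conjunctive rule, it matters only when that member is the weakest, which is the kind of interaction functional measurement detects.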
Abstract:
If change over time is compared in several groups, it is important to take into account baseline values so that the comparison is carried out under the same preconditions. As the observed baseline measurements are distorted by measurement error, it may not be sufficient to include them as a covariate. By fitting a longitudinal mixed-effects model to all data including the baseline observations and subsequently calculating the expected change conditional on the underlying baseline value, a solution to this problem has been provided recently so that groups with the same baseline characteristics can be compared. In this article, we present an extended approach where a broader set of models can be used. Specifically, it is possible to include any desired set of interactions between the time variable and the other covariates, and time-dependent covariates can also be included. Additionally, we extend the method to adjust for baseline measurement error of other time-varying covariates. We apply the methodology to data from the Swiss HIV Cohort Study to address the question of whether coinfection with HIV-1 and hepatitis C virus leads to a slower increase of CD4 lymphocyte counts over time after the start of antiretroviral therapy.
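The core measurement-error idea, conditioning on the underlying rather than the observed baseline, can be sketched with a textbook regression-to-the-mean correction. This is an illustrative simplification, not the authors' full mixed-model estimator:

```python
def expected_change_given_baseline(b_obs, mu, var_true, var_err, slope):
    """Shrink the observed baseline b_obs toward the population mean mu
    with reliability var_true / (var_true + var_err), then evaluate a
    linear change model at that estimated underlying baseline value.
    All parameter names here are illustrative."""
    reliability = var_true / (var_true + var_err)
    b_hat = mu + reliability * (b_obs - mu)
    return slope * b_hat

# With no measurement error the observed baseline is used as-is:
print(expected_change_given_baseline(10.0, 0.0, 1.0, 0.0, 2.0))  # -> 20.0
# With error variance equal to true variance, the baseline is shrunk halfway:
print(expected_change_given_baseline(10.0, 0.0, 1.0, 1.0, 2.0))  # -> 10.0
```

Naively plugging in `b_obs` instead of `b_hat` would attenuate the estimated baseline effect, which is exactly the bias the mixed-model approach avoids.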
Abstract:
BACKGROUND Microvascular anastomosis is the cornerstone of free tissue transfers. Irrespective of the microsurgical technique that one seeks to integrate or improve, the time commitment in the laboratory is significant. After extensive previous training on several animal models, we sought to identify an animal model that circumvents the following issues: ethical rules, cost, the time-consuming and expensive anesthesia and surgical preparation of tissues required to access vessels before microsurgical training can begin, and the fact that laboratories are closed on weekends. METHODS Between January 2012 and April 2012, a total of 91 earthworms were used for 150 microsurgical training exercises to simulate vascular end-to-side microanastomosis. The training sessions were divided into ten periods of 7 days. Each training session included 15 simulations of end-to-side vascular microanastomoses: larger than 1.5 mm (n=5), between 1.0 and 1.5 mm (n=5), and smaller than 1.0 mm (n=5). A linear model with the main variables being the number of weeks (as a numerical covariate) and the size of the animal (as a factor) was used to determine the trend in anastomosis time over subsequent weeks as well as the differences between the size groups. RESULTS The linear model shows a significant trend (p<0.001) in anastomosis time over the course of the training, as well as significant differences (p<0.001) between the groups of animals of different sizes. For microanastomoses larger than 1.5 mm, the mean anastomosis time decreased from 19.3±1.0 to 11.1±0.4 min between the first and last week of training (a decrease of 42.5%). For training with smaller diameters, the results showed a decrease in execution time of 43.2% (diameter between 1.0 and 1.5 mm) and 40.9% (diameter<1.0 mm) between the first and last periods. The study demonstrates an improvement in dexterity and in the speed of knot execution.
CONCLUSION The earthworm appears to be a reliable experimental model for microsurgical training of end-to-side microanastomoses. Its numerous advantages are discussed here and we predict training on earthworms will significantly grow and develop in the near future. LEVEL OF EVIDENCE III.
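The abstract's analysis, a linear model with week as a numeric covariate and animal size as a factor, can be sketched with ordinary least squares and dummy coding. The function name, the coding scheme, and the synthetic data are illustrative assumptions, not the authors' SAS/R code or data:

```python
import numpy as np

def fit_linear_model(weeks, size_group, times):
    """OLS fit of anastomosis time on a numeric week covariate and a
    dummy-coded size factor (first group alphabetically is the
    reference). Returns [intercept, week slope, group offsets...]."""
    groups = sorted(set(size_group))
    X = np.column_stack(
        [np.ones(len(weeks)), weeks]
        + [[1.0 if g == grp else 0.0 for g in size_group] for grp in groups[1:]]
    )
    beta, *_ = np.linalg.lstsq(X, np.asarray(times, dtype=float), rcond=None)
    return beta

# Synthetic training curve: time falls 1 min/week; 'small' vessels take 3 min longer.
weeks = list(range(1, 11)) * 2
size = ['large'] * 10 + ['small'] * 10
times = [20.0 - w + (3.0 if g == 'small' else 0.0) for w, g in zip(weeks, size)]
beta = fit_linear_model(weeks, size, times)
print(beta)  # week slope near -1.0, 'small' offset near +3.0
```

A negative week slope corresponds to the learning trend the study reports; the group offsets capture the systematic time differences between vessel-size categories.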