23 results for Multilevel linear model

in DigitalCommons@The Texas Medical Center


Relevance: 100.00%

Abstract:

The interaction effect is an important scientific interest in many areas of research. A common approach to investigating the interaction effect of two continuous covariates on a response variable is through a cross-product term in multiple linear regression. In epidemiological studies, the two-way analysis of variance (ANOVA) type of method has also been used to examine the interaction effect by replacing the continuous covariates with their discretized levels. However, the implications of the model assumptions of either approach have not been examined, and statistical validation has focused only on the general method, not specifically on the interaction effect.

In this dissertation, we investigated the validity of both approaches based on their mathematical assumptions for non-skewed data. We showed that linear regression may not be an appropriate model when an interaction effect exists, because it implies a highly skewed distribution for the response variable. We also showed that the normality and constant-variance assumptions required by ANOVA are not satisfied in the model where the continuous covariates are replaced with their discretized levels. Therefore, naïve application of the ANOVA method may lead to an incorrect conclusion.

Given the problems identified above, we proposed a novel method, modified from the traditional ANOVA approach, to rigorously evaluate the interaction effect. The analytical expression of the interaction effect was derived from the conditional distribution of the response variable given the discretized continuous covariates. A testing procedure that combines the p-values from each level of the discretized covariates was developed to test the overall significance of the interaction effect. In a simulation study, the proposed method was more powerful than least squares regression and the ANOVA method in detecting the interaction effect when the data come from a trivariate normal distribution. The proposed method was applied to a dataset from the National Institute of Neurological Disorders and Stroke (NINDS) tissue plasminogen activator (t-PA) stroke trial; a baseline age-by-weight interaction effect was found to be significant in predicting the change from baseline in NIHSS at Month 3 among patients who received t-PA therapy.
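As a minimal illustration of the two conventional approaches the abstract contrasts, the sketch below fits a cross-product interaction by least squares and a two-way ANOVA on discretized levels, in Python with statsmodels; the variable names and the simulated data-generating model are illustrative assumptions, not the dissertation's.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

rng = np.random.default_rng(0)
n = 500
x1, x2 = rng.normal(size=n), rng.normal(size=n)
y = 1.0 + 0.5 * x1 + 0.5 * x2 + 0.3 * x1 * x2 + rng.normal(size=n)
df = pd.DataFrame({"y": y, "x1": x1, "x2": x2})

# Approach 1: cross-product term in multiple linear regression.
ols_fit = smf.ols("y ~ x1 * x2", data=df).fit()
print(ols_fit.pvalues["x1:x2"])          # p-value of the interaction term

# Approach 2: two-way ANOVA after discretizing both covariates into levels.
df["q1"] = pd.qcut(df["x1"], 3, labels=False)
df["q2"] = pd.qcut(df["x2"], 3, labels=False)
anova_fit = smf.ols("y ~ C(q1) * C(q2)", data=df).fit()
print(anova_lm(anova_fit, typ=2))        # interaction row: C(q1):C(q2)
# For a combining-p-values step like the proposed method's,
# scipy.stats.combine_pvalues(..., method="fisher") is one available tool.
```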

Relevance: 100.00%

Abstract:

With most clinical trials, missing data present a statistical problem in evaluating a treatment's efficacy. Many methods are commonly used to handle missing data, but these methods leave room for bias to enter the study. This thesis was a secondary analysis of data taken from TIME, a phase 2 randomized clinical trial conducted to evaluate the safety and effect of the administration timing of bone marrow mononuclear cells (BMMNC) for subjects with acute myocardial infarction (AMI).

We evaluated the effect of missing data by comparing the variance inflation factor (VIF) of the effect of therapy between all subjects and only subjects with complete data. Through the general linear model, an unbiased solution was derived for the VIF of the treatment's efficacy, using the weighted least squares method to incorporate missing data. Two groups were identified from the TIME data: 1) all subjects and 2) subjects with complete data (baseline and follow-up measurements). After the general solution was found for the VIF, it was migrated to Excel 2010 to evaluate the TIME data. The resulting numerical values from the two groups were compared to assess the effect of missing data.

The VIF values from the TIME study were considerably lower in the group that included subjects with missing data. By design, we varied the correlation factor in order to evaluate the VIFs of both groups. As the correlation factor increased, the VIF values increased at a faster rate in the group with only complete data. Furthermore, while varying the correlation factor, we also varied the number of subjects with missing data to see how missing data affect the VIF. When the number of subjects with only baseline data was increased, we saw a significant rate increase in VIF values in the group with only complete data, while the group including missing data saw a steady and consistent increase in the VIF. The same was seen when we varied the group with follow-up-only data. Essentially, the VIFs increased steadily when missing data were not ignored; when missing data were ignored, as in our comparison group, the VIF values increased sharply as the correlation increased.
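The sketch below is a hedged stand-in for the thesis's weighted-least-squares derivation: it uses a likelihood-based mixed model (which also makes use of subjects with only baseline data) and compares the variance of the treatment-effect estimate against a complete-case fit, yielding a VIF-style ratio. The simulated trial, the correlation factor rho, and all names are assumptions, not TIME data.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n, rho = 120, 0.6
treat = rng.integers(0, 2, n)
base = rng.normal(size=n)
follow = rho * base + 0.5 * treat + np.sqrt(1 - rho**2) * rng.normal(size=n)
follow[rng.random(n) < 0.3] = np.nan        # some subjects lack follow-up
wide = pd.DataFrame({"id": np.arange(n), "treat": treat,
                     "t0": base, "t1": follow})
long = wide.melt(id_vars=["id", "treat"], var_name="visit",
                 value_name="y").dropna()
long["t1"] = (long["visit"] == "t1").astype(int)

def effect_var(data):
    # variance of the time-by-treatment effect from a random-intercept model
    fit = smf.mixedlm("y ~ t1 * treat", data=data, groups=data["id"]).fit()
    return fit.bse["t1:treat"] ** 2

complete_ids = wide.dropna()["id"]
ratio = effect_var(long[long["id"].isin(complete_ids)]) / effect_var(long)
print("VIF-style ratio (complete-case vs all subjects):", ratio)
```

Varying rho and the missingness fractions mimics the sensitivity analysis the abstract describes.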

Relevance: 100.00%

Abstract:

Left ventricular mass (LVM) is a strong predictor of cardiovascular disease (CVD) in adults. However, normal growth of LVM in healthy children is not well understood, and previous results on the independent effects of body size and body fatness on LVM have been inconsistent. The purpose of this study was (1) to establish the normal growth curve of LVM from age 8 to age 18 and evaluate the determinants of change in LVM with age, and (2) to assess the independent effects of body size and body fatness on LVM.

In Project HeartBeat!, 678 healthy children aged 8, 11, and 14 years at baseline were enrolled and examined at 4-monthly intervals for up to 4 years. A synthetic cohort with continuous observations from age 8 to 18 years was constructed. A total of 4608 LVM measurements were made from M-mode echocardiography. A multilevel linear model was used for analysis.

Sex-specific trajectories of normal growth of LVM from age 8 to 18 were displayed. On average, LVM was 15 g higher in males than in females. Average LVM increased linearly in males, from 78 g at age 8 to 145 g at age 18. For females, the trajectory was curvilinear, nearly constant after age 14. No significant racial differences were found. After adjustment for the effects of body size and body fatness, average LVM decreased slightly from age 8 to 18, and sex differences in changes of LVM remained constant.

The impact of body size on LVM was examined by adding one of 9 body size indicators to a basic LVM-sex-age model. The impact of body fatness was tested by further introducing one of 4 body fatness indicators into each of the 9 LVM models (each with one of the body size indicators), yielding 36 models with different combinations of body size and body fatness. The results indicated that the effects of body size on LVM can be distinguished between fat-free body mass and fat body mass, both being independent, positive predictors, with the former the stronger determinant. When a non-fat-free body size indicator is used as the predictor, the estimated residual effect of body fatness on LVM becomes negative.
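A minimal sketch of the kind of multilevel (mixed-effects) growth model the abstract names, assuming simulated stand-in data rather than Project HeartBeat! measurements: repeated LVM values nested within children, a sex-specific age trajectory with a quadratic term for the curvilinear female pattern, and a random intercept and age slope per child.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n_id, n_visits = 200, 8
ids = np.repeat(np.arange(n_id), n_visits)
male = (np.arange(n_id) < 100).astype(int)[ids]
age = np.tile(np.linspace(8, 18, n_visits), n_id)
person = rng.normal(0, 8, n_id)[ids]            # child-level random variation
lvm = (30 + 6 * age - 0.15 * age**2 + 10 * male + 0.6 * male * age
       + person + rng.normal(0, 6, ids.size))
df = pd.DataFrame({"id": ids, "lvm": lvm, "age": age, "male": male})

fit = smf.mixedlm("lvm ~ age + I(age**2) + male + age:male",
                  data=df, groups=df["id"], re_formula="~age").fit()
print(fit.summary())   # fixed effects give the sex-specific growth curves
```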

Relevance: 100.00%

Abstract:

Coronary heart disease remains the leading cause of death in the United States, and increased blood cholesterol level has been found to be a major risk factor with roots in childhood. Tracking of cholesterol, i.e., the tendency to maintain a particular cholesterol level relative to the rest of the population, and variability in blood lipid levels with increasing age have implications for cholesterol screening and for the assessment of lipid levels in children, with a view to preventing further rise and subsequent adult heart disease. In this study, the pattern of change in plasma lipids over time and their tracking were investigated. Within-person variance and retest reliability, defined as the square root of within-person variance, for plasma total cholesterol, HDL-cholesterol, LDL-cholesterol, and triglycerides, and their relation to age, sex, and body mass index among participants from age 8 to 18 years, were also investigated.

In Project HeartBeat!, 678 healthy children aged 8, 11, and 14 years at baseline were enrolled and examined at 4-monthly intervals for up to 4 years. We examined the relationship between repeated observations using Pearson correlations. Age- and sex-specific quintiles were calculated, and the probability of participants remaining in the uppermost quintile of their respective distribution was evaluated with life table methods. Plasma total cholesterol, HDL-C, and LDL-C at baseline were strongly and significantly correlated with measurements at subsequent visits across the sex and age groups. Plasma triglyceride at baseline was also significantly correlated with subsequent measurements, but less strongly than the other plasma lipids. The probability of remaining in the uppermost quintile was also high (60 to 70%) for plasma total cholesterol, HDL-C, and LDL-C.

We used a mixed longitudinal, or synthetic cohort, design with continuous observations from age 8 to 18 years to estimate the within-person variance of plasma total cholesterol, HDL-C, LDL-C, and triglycerides. A total of 5809 measurements were available for both cholesterol and triglycerides. A multilevel linear model was used. Within-person variance among repeated measures over up to four years of follow-up was estimated separately for total cholesterol, HDL-C, LDL-C, and triglycerides. The relationship of within-person and inter-individual variance with age, sex, and body mass index was evaluated. Likelihood ratio tests were conducted by calculating the difference in −2 log(likelihood) between the basic model and alternative models. The square root of the within-person variance provided the retest reliability (within-person standard deviation) for plasma total cholesterol, HDL-C, LDL-C, and triglycerides. We found a retest reliability of 13.6 percent for plasma cholesterol, 6.1 percent for HDL-cholesterol, 11.9 percent for LDL-cholesterol, and 32.4 percent for triglycerides. Retest reliability of plasma lipids was significantly related to age and body mass index, increasing with both. These findings have implications for screening guidelines, as participants in the uppermost quintile tended to maintain their status in each of the age groups during four years of follow-up. The magnitude of within-person variability of plasma lipids influences the ability to classify children into the risk categories recommended by the National Cholesterol Education Program.
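A hedged sketch of extracting within-person variance, and hence the within-person standard deviation used as retest reliability, from a random-intercept multilevel model: in statsmodels MixedLM the residual (within-person) variance is exposed as `fit.scale` and the between-person variance as `fit.cov_re`. The simulated cohort below is an illustrative assumption, not the study's data.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
n_id, n_visits = 300, 8
ids = np.repeat(np.arange(n_id), n_visits)
age = np.tile(np.linspace(8, 18, n_visits), n_id)
person = rng.normal(0, 20, n_id)[ids]                    # between-person spread
tc = 160 + 2.0 * age + person + rng.normal(0, 15, ids.size)  # within-person noise
df = pd.DataFrame({"id": ids, "age": age, "tc": tc})

fit = smf.mixedlm("tc ~ age", data=df, groups=df["id"]).fit()
within_var = fit.scale                       # within-person variance
between_var = float(fit.cov_re.iloc[0, 0])   # between-person variance
print("within-person SD (retest reliability):", np.sqrt(within_var))
```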

Relevance: 90.00%

Abstract:

Unlike infections occurring during periods of chemotherapy-induced neutropenia, postoperative infections in patients with solid malignancy remain largely understudied. The purpose of this population-based study was to evaluate the clinical and economic burden of serious postoperative infection (SPI; bacteremia/sepsis, pneumonia, and wound infection) following resection of common solid tumors, as well as the relationship between hospital surgical volume and SPI outcomes.

From the Texas Discharge Data Research File, we identified all Texas residents who underwent resection of cancer of the lung, esophagus, stomach, pancreas, colon, or rectum between 2002 and 2006. From their billing records, we identified ICD-9 codes indicating SPI, as well as subsequent SPI-related readmissions occurring within 30 days of surgery. Random-effects logistic regression was used to estimate the impact of SPI on mortality and the association between surgical volume and SPI, adjusting for case mix, hospital characteristics, and the clustering of multiple surgical admissions within the same patient and of patients within the same hospital. Excess bed-days and costs were calculated by subtracting the values for patients without infection from those for patients with infection, computed using a multilevel mixed-effects generalized linear model fitting a gamma distribution to the data with a log link.

Serious postoperative infection occurred following 9.4% of the 37,582 eligible tumor resections and was independently associated with an 11-fold increase in the odds of in-hospital mortality (95% confidence interval [CI], 6.7-18.5; P < 0.001). Patients with SPI required 6.3 additional hospital days (95% CI, 6.1-6.5) at an incremental cost of $16,396 (95% CI, $15,927-$16,875). There was a significant trend toward lower overall rates of SPI with higher surgical volume (P = 0.037).

Given the substantial morbidity, mortality, and excess costs associated with SPI following solid tumor resections, and given that, under current reimbursement practices, most of this heavy burden is borne by acute care providers, it is imperative for hospitals to identify more effective prophylactic measures so that these potentially preventable infections and their associated expenditures can be averted. Additional volume-outcomes research is also needed to identify infection prevention processes that can be transferred from higher- to lower-volume providers.
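A minimal sketch of the cost model the abstract names, a generalized linear model with a gamma distribution and log link, fit here to simulated single-level data as a stand-in for the multilevel Texas discharge records; excess cost is read off as the difference in model-predicted means.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(4)
n = 2000
spi = rng.integers(0, 2, n)                  # serious postoperative infection
mu = np.exp(9.5 + 0.8 * spi)                 # log-link mean structure
cost = rng.gamma(shape=2.0, scale=mu / 2.0)  # right-skewed, gamma-like costs
df = pd.DataFrame({"cost": cost, "spi": spi})

fit = smf.glm(
    "cost ~ spi", data=df,
    family=sm.families.Gamma(link=sm.families.links.Log()),
).fit()
# Excess cost attributable to SPI: difference in model-predicted means.
pred = fit.get_prediction(pd.DataFrame({"spi": [0, 1]})).predicted_mean
print("excess cost:", pred[1] - pred[0])
```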

Relevance: 90.00%

Abstract:

Objectives. This paper seeks to assess the effect of regression model misspecification on statistical power in a variety of situations.

Methods and results. The effect of misspecification in regression can be approximated by evaluating the correlation between the correct specification and the misspecification of the outcome variable (Harris 2010). In this paper, three misspecified models (linear, categorical, and fractional polynomial) were considered. In the first section, the mathematical method of calculating the correlation between correct and misspecified models with simple mathematical forms is derived and demonstrated. In the second section, data from the National Health and Nutrition Examination Survey (NHANES 2007-2008) were used to examine such correlations. Our study shows that, compared with linear or categorical models, the fractional polynomial models, with their higher correlations, provided a better approximation of the true relationship, as illustrated by LOESS regression. In the third section, we present the results of simulation studies demonstrating that misspecification in regression can produce marked decreases in power with small sample sizes. The categorical model, however, had the greatest power, ranging from 0.877 to 0.936 depending on the sample size and outcome variable used. The power of the fractional polynomial model was close to that of the linear model, ranging from 0.69 to 0.83, and appeared to be affected by the increased degrees of freedom of this model.

Conclusion. Correlations between alternative model specifications can provide a good approximation of the effect of misspecification on statistical power when the sample size is large. When model specifications have known simple mathematical forms, such correlations can be calculated mathematically. Actual public health data from NHANES 2007-2008 were used as examples to demonstrate situations where the correct model specification is unknown or complex. Simulation of power for misspecified models confirmed the results based on the correlation methods, but also illustrated the effect of model degrees of freedom on power.
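A hedged sketch of the third section's simulation idea: generate a nonlinear true relationship, fit deliberately misspecified linear, categorical, and polynomial specifications, and estimate power as the rejection rate of each model's overall F-test. The quadratic truth and all settings are illustrative assumptions, not the study's simulation design.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(5)

def rejects(n=50, formula="y ~ x"):
    x = rng.uniform(0, 1, n)
    y = 2 * (x - 0.5) ** 2 + rng.normal(0, 0.2, n)   # nonlinear truth
    df = pd.DataFrame({"x": x, "y": y})
    df["xcat"] = pd.qcut(df["x"], 4, labels=False)   # categorical version
    fit = smf.ols(formula, data=df).fit()
    # overall F-test: does the specification detect any association?
    return fit.f_pvalue < 0.05

for f in ("y ~ x", "y ~ C(xcat)", "y ~ x + I(x**2)"):
    power = np.mean([rejects(formula=f) for _ in range(1000)])
    print(f, power)
```

With this symmetric quadratic truth, the purely linear fit has little power while the categorical and polynomial fits retain it, mirroring the kind of contrast the paper investigates.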

Relevance: 90.00%

Abstract:

A Bayesian approach to estimating the regression coefficients of a multinomial logit model with ordinal-scale response categories is presented. A Monte Carlo method is used to construct the posterior distribution of the link function, which is treated as an arbitrary scalar function. The Gauss-Markov theorem is then used to determine a function of the link that produces a random vector of coefficients, and the posterior distribution of this random vector is used to estimate the regression coefficients. The method described is referred to as Bayesian generalized least squares (BGLS) analysis. Two cases involving multinomial logit models are described: Case I involves a cumulative logit model and Case II a proportional-odds model. All inferences about the coefficients for both cases are described in terms of the posterior distribution of the regression coefficients. The results from the BGLS method are compared to maximum likelihood estimates of the regression coefficients. The BGLS method avoids the nonlinear problems encountered when estimating the regression coefficients of a generalized linear model, and it is neither complex nor computationally intensive. The BGLS method offers several advantages over other Bayesian approaches.
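The sketch below reproduces only the maximum-likelihood comparator the abstract mentions (a proportional-odds model, as in Case II), fit with statsmodels' OrderedModel on simulated data; the BGLS procedure itself is not implemented here, and all names and values are illustrative.

```python
import numpy as np
import pandas as pd
from statsmodels.miscmodels.ordinal_model import OrderedModel

rng = np.random.default_rng(6)
n = 500
x = rng.normal(size=n)
latent = 1.2 * x + rng.logistic(size=n)          # proportional-odds mechanism
y = pd.Series(pd.cut(latent, bins=[-np.inf, -1.0, 1.0, np.inf],
                     labels=["low", "mid", "high"]))  # ordered categories

fit = OrderedModel(y, x[:, None], distr="logit").fit(method="bfgs", disp=False)
print(fit.params)   # ML slope and thresholds, the benchmark for BGLS estimates
```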

Relevance: 90.00%

Abstract:

Scholars have found that socioeconomic status is one of the key factors influencing early-stage lung cancer incidence rates in a variety of regions. This thesis examined the association between median household income and lung cancer incidence rates in Texas counties. Lung cancer incidence rates from 2004 to 2008 and median household incomes in 2006 for all 254 Texas counties were collected from the National Cancer Institute Surveillance System. A simple linear model and spatial linear models with two structures, simultaneous autoregressive (SAR) and conditional autoregressive (CAR), were used to link median household income and lung cancer incidence rates in Texas. The residuals of the spatial linear models were analyzed with Moran's I and Geary's C statistics, and the statistical results were used to detect clusters of similar lung cancer incidence rates and disease patterns in Texas.
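A hedged sketch of the residual diagnostic the abstract describes, Moran's I on the residuals of a simple linear model, using the PySAL stack (libpysal and esda), which is an assumed toolchain rather than the software the thesis used; a regular lattice stands in for the Texas county adjacency structure, and esda's Geary class offers the analogous Geary's C statistic.

```python
import numpy as np
import libpysal
from esda.moran import Moran
import statsmodels.api as sm

rng = np.random.default_rng(7)
w = libpysal.weights.lat2W(10, 10)   # 10x10 lattice stand-in for county adjacency
w.transform = "r"                    # row-standardized spatial weights
income = rng.normal(50, 10, 100)
incidence = 80 - 0.5 * income + rng.normal(0, 5, 100)

ols_fit = sm.OLS(incidence, sm.add_constant(income)).fit()
mi = Moran(ols_fit.resid, w)
print(mi.I, mi.p_sim)  # residual spatial autocorrelation and permutation p-value
```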

Relevance: 80.00%

Abstract:

Any functionally important mutation is embedded in an evolutionary matrix of other mutations. Cladistic analysis, based on this observation, is a method of investigating gene effects that uses a haplotype phylogeny to define a set of tests localizing causal mutations to branches of the phylogeny. Previous implementations of cladistic analysis have not addressed the analysis of data from related individuals, although in human studies family data are usually needed to obtain unambiguous haplotypes. In this study, a method of cladistic analysis is described in which haplotype effects are parameterized in a linear model that accounts for familial correlations. The method was used to study the effect of apolipoprotein (Apo) B gene variation on total-, LDL-, and HDL-cholesterol, triglyceride, and Apo B levels in 121 French families. Five polymorphisms defined the Apo B haplotypes: the signal peptide insertion/deletion, Bsp1286I, XbaI, MspI, and EcoRI. Eleven haplotypes were found, and a haplotype phylogeny was constructed and used to define a set of tests of haplotype effects on lipid and Apo B levels.

This new method of cladistic analysis, the parametric method, found significant single-haplotype effects for all variables. For HDL-cholesterol, 3 clusters of evolutionarily related haplotypes affecting levels were found. Haplotype effects accounted for about 10% of the genetic variance of triglyceride and HDL-cholesterol levels. The results of the parametric method were compared to those of a method of cladistic analysis based on permutational testing. The permutational method detected fewer haplotype effects, even when modified to account for correlations within families. Simulation studies exploring these differences found evidence of systematic errors in the permutational method due to the process by which haplotype groups were selected for testing.

The applicability of cladistic analysis to human data was demonstrated, and the parametric method is suggested as an improvement over the permutational method. This study identified candidate haplotypes for sequence comparisons aimed at locating the functional mutations in the Apo B gene that may influence plasma lipid levels.
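A minimal sketch of the parametric idea, assuming simulated stand-in data: haplotype effects enter as fixed effects in a linear model, and a random family effect absorbs the familial correlation. The haplotype labels, effect sizes, and phenotype are illustrative, not the Apo B data.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(8)
n_fam, fam_size = 120, 4
fam = np.repeat(np.arange(n_fam), fam_size)
hap = rng.choice(["H1", "H2", "H3"], size=fam.size)    # carried haplotype
fam_effect = rng.normal(0, 0.4, n_fam)[fam]            # shared family effect
hdl = (1.3 + 0.15 * (hap == "H2")                      # H2 shifts HDL-C
       + fam_effect + rng.normal(0, 0.3, fam.size))
df = pd.DataFrame({"fam": fam, "hap": hap, "hdl": hdl})

fit = smf.mixedlm("hdl ~ C(hap)", data=df, groups=df["fam"]).fit()
print(fit.summary())  # haplotype contrasts with familial correlation accounted for
```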

Relevance: 80.00%

Abstract:

In numerous intervention studies and education field trials, random assignment to treatment occurs in clusters rather than at the level of observation. This departure from unit-level random assignment may be due to logistics, political feasibility, or ecological validity. Data within the same cluster or grouping are often correlated. Applying traditional regression techniques, which assume independence between observations, to clustered data produces consistent parameter estimates; however, such estimators are often inefficient compared to methods that incorporate the clustered nature of the data into the estimation procedure (Neuhaus 1993).¹ Multilevel models, also known as random-effects or random-components models, can be used to account for the clustering of data by estimating higher-level (group) as well as lower-level (individual) variation. Designing a study in which the unit of observation is nested within higher-level groupings requires determining sample sizes at each level. This study investigates the design and analysis of various sampling strategies for a 3-level repeated measures design and their effect on the parameter estimates when the outcome variable of interest follows a Poisson distribution.

The results suggest that second-order PQL estimation produces the least biased estimates in the 3-level multilevel Poisson model, followed by first-order PQL and then second- and first-order MQL. The MQL estimates of both fixed and random parameters are generally satisfactory when the level-2 and level-3 variation is less than 0.10; however, as the higher-level error variance increases, the MQL estimates become increasingly biased. If convergence of the estimation algorithm is not obtained by the PQL procedure and the higher-level error variance is large, the estimates may be significantly biased, and bias-correction techniques such as bootstrapping should be considered as an alternative. For larger sample sizes, structures with 20 or more units sampled at the levels with normally distributed random errors produced more stable estimates with less sampling variance than structures with an increased number of level-1 units. For small sample sizes, sampling fewer units at the level with Poisson variation produces less sampling variation; this criterion is no longer important when sample sizes are large.

¹ Neuhaus J (1993). "Estimation efficiency and tests of covariate effects with clustered binary data." Biometrics, 49, 989-996.
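statsmodels does not implement PQL or MQL, so the sketch below uses generalized estimating equations with an exchangeable working correlation as a named stand-in that respects the clustering of Poisson outcomes; it is not the dissertation's estimator, and the simulated two-level structure is an illustrative assumption.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(9)
n_clusters, m = 30, 20
cluster = np.repeat(np.arange(n_clusters), m)
treat = (cluster % 2 == 0).astype(int)          # cluster-level assignment
u = rng.normal(0, 0.3, n_clusters)[cluster]     # cluster random effect
x = rng.normal(size=cluster.size)
y = rng.poisson(np.exp(0.2 + 0.4 * treat + 0.3 * x + u))
df = pd.DataFrame({"y": y, "treat": treat, "x": x, "cluster": cluster})

fit = smf.gee(
    "y ~ treat + x", groups="cluster", data=df,
    family=sm.families.Poisson(), cov_struct=sm.cov_struct.Exchangeable(),
).fit()
print(fit.summary())   # cluster-robust inference for the Poisson outcome
```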

Relevance: 80.00%

Abstract:

Blood cholesterol and blood pressure development in childhood and adolescence has an important impact on adult levels of cholesterol and blood pressure and on the risk of cardiovascular disease. The U.S. has higher mortality rates from coronary heart disease than Japan. A longitudinal comparison of risk factor development in children in the two countries provides further understanding of the causes of cardiovascular disease and its prevention; such comparisons have not previously been reported.

In Project HeartBeat!, 506 non-Hispanic white, 136 black, and 369 Japanese children participated in the study in the U.S. and Japan from 1991 to 1995. A synthetic cohort of ages 8 to 18 years was composed of three cohorts with starting ages of 8, 11, and 14. A multilevel regression model was used for data analysis.

The study revealed that the Japanese children had significantly higher slopes of mean total cholesterol (TC) and high-density lipoprotein (HDL) cholesterol levels than the U.S. children after adjusting for age and sex. The mean TC level of Japanese children was not significantly different from that of white and black children. The mean HDL level of Japanese children was significantly higher than that of white and black children after adjusting for age and sex. The HDL/TC ratio in Japanese children was significantly higher than in U.S. whites, but not significantly different from that in the black children. The Japanese group had significantly lower mean diastolic blood pressure at phase IV (DBP4) and phase V (DBP5) than the two U.S. groups. The Japanese group also showed significantly higher slopes in systolic blood pressure, DBP5, and DBP4 during the study period than both U.S. groups. These differences were independent of height and body mass index.

The study provided the first longitudinal comparison of blood cholesterol and blood pressure between U.S. and Japanese children and adolescents, revealing the dynamic development of these factors in the three ethnic groups.
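A hedged sketch of how group differences in slopes are estimated in a multilevel model: the age-by-group interaction coefficient is the difference in TC trajectories between the two groups. Groups and values are simulated assumptions, not Project HeartBeat! data.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(10)
n_id, n_visits = 200, 6
ids = np.repeat(np.arange(n_id), n_visits)
grp = (np.arange(n_id) < 100).astype(int)[ids]    # 0 / 1: two illustrative groups
age = np.tile(np.linspace(8, 18, n_visits), n_id)
person = rng.normal(0, 10, n_id)[ids]
tc = 150 + 1.0 * age + 0.8 * grp * age + person + rng.normal(0, 8, ids.size)
df = pd.DataFrame({"id": ids, "grp": grp, "age": age, "tc": tc})

fit = smf.mixedlm("tc ~ age * grp", data=df, groups=df["id"]).fit()
print(fit.params["age:grp"])   # estimated difference in slopes between groups
```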

Relevance: 80.00%

Abstract:

Background. The purpose of this study was to describe the risk factors and demographics of persons with salmonellosis and shigellosis and to investigate both seasonal and spatial variations in the occurrence of these infections in Texas from 2000 to 2004, using time-series analysis and geographic information system (GIS) digital mapping methods.

Methods. Spatial analysis: MapInfo software was used to map the distribution of age-adjusted rates of reported shigellosis and salmonellosis in Texas from 2000 to 2004 by zip code. Census data on poverty status, household income, highest level of educational attainment, race, ethnicity, and urban/rural community status were obtained from the 2000 Decennial Census for each zip code. The zip codes in the upper 10% and lower 10% of rates were compared using t-tests and logistic regression to identify potential risk factors.

Temporal analysis: Seasonal patterns in the prevalence of infections in Texas from 2000 to 2003 were determined by performing time-series analysis on the numbers of cases of salmonellosis and shigellosis. A linear regression was also performed to assess trends in the incidence of each disease, along with autocorrelation and multi-component cosinor analysis.

Results. Spatial analysis: Analysis with a general linear model showed a significant association between infection rates and age, with children aged under 5 years and those aged 5-9 years having an increased risk of infection for both diseases. The data demonstrated that populations with high percentages of people with more than a high school education were less likely to be represented among zip codes with high rates of shigellosis. For salmonellosis, however, logistic regression models indicated that, compared to populations with high percentages of non-high-school graduates, having a high school diploma or equivalent increased the odds of having a high rate of infection.

Temporal analysis: For shigellosis, multi-component cosinor analyses were used to determine an approximating cosine curve that provided a statistically significant representation of the time-series data for all age groups by sex. The shigellosis results show two peaks, a major peak occurring in June and a secondary peak around October. The salmonellosis results showed a single peak and trough in all age groups, with the peak occurring in August and the trough in February.

Conclusion. The results of this study can be used by public health agencies to determine the timing of public health awareness programs and interventions to prevent salmonellosis and shigellosis. Because young children depend on adults for their meals, it is important to increase the awareness of day-care workers and new parents regarding modes of transmission and hygienic methods of food preparation and storage.
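A minimal sketch of a two-component cosinor fit, assuming simulated monthly counts: regressing cases on annual and semi-annual sine/cosine pairs can reproduce a two-peak seasonal pattern like the one reported for shigellosis.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(11)
t = np.arange(48)   # months, 2000-2003
cases = (20 + 6 * np.cos(2 * np.pi * (t - 5) / 12)       # annual component
            + 3 * np.cos(2 * np.pi * (t - 9) / 6)        # semi-annual component
            + rng.normal(0, 2, t.size))
df = pd.DataFrame({
    "cases": cases, "t": t,
    "c12": np.cos(2 * np.pi * t / 12), "s12": np.sin(2 * np.pi * t / 12),
    "c6": np.cos(2 * np.pi * t / 6),  "s6": np.sin(2 * np.pi * t / 6),
})

fit = smf.ols("cases ~ t + c12 + s12 + c6 + s6", data=df).fit()
print(fit.summary())   # harmonic terms capture the seasonal peaks; t is the trend
```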

Relevance: 80.00%

Abstract:

The healthcare industry spends billions on worker injury and employee turnover. Hospitals and healthcare settings have among the highest rates of lost workdays due to injuries. The occupational hazards for healthcare workers can be classified as biological, chemical, ergonomic, physical, organizational, and psychosocial. Interventions addressing this range of occupational health risks are therefore needed to prevent injuries, reduce turnover, and reduce costs.

The Sacred Vocation Program (SVP) seeks to change the content of work, i.e., the meaningfulness of work, to improve work environments. The SVP intervenes at both the individual and organizational levels. First, the SVP attempts to connect healthcare workers with meaning from their work through a series of 5 self-discovery group sessions; in a sixth session the graduates take an oath recommitting them to do their work as a vocation. Once employees are motivated to connect with meaning in their work, a representative employee group meets in a second set of five meetings and suggests organizational changes to create a culture that supports employees in their calling. The employees present their plan to management in the twelfth session, beginning a new phase in the existing dialogue between employees and management.

The SVP was implemented in a large Dallas hospital, Baylor University Medical Center (BUMC), with almost 1000 licensed beds. The BUMC Pastoral Care department invited front-line caregivers (primarily Patient Care Assistants, PCAs, or Patient Care Technicians, PCTs) to participate in the SVP. Participants completed SVP questionnaires at the beginning of the program and following SVP implementation. Following implementation, employer records were collected on injury, absence, and turnover to further evaluate the program's effectiveness on metrics that are meaningful to managers in assessing organizational performance. This provided an opportunity to perform an epidemiological evaluation of the intervention using two sources of information: employee self-reports and employer administrative data.

The ability to evaluate the effectiveness of the SVP on program outcomes could be limited by the strength of the measures used. An ordinal confirmatory factor analysis (CFA) performed on baseline SVP questionnaire measurements examined the construct validity and reliability of the SVP scales. Scales whose item-factor structure was confirmed in the ordinal CFA were evaluated for their psychometric properties (reliability, mean, and ceiling and floor effects). The CFA supported the construct validity of six of the proposed scales: blocks to spirituality, meaning at work, work satisfaction, affective commitment, collaborative communication, and MHI-5. Five of the six confirmed scales had acceptable reliability (all but MHI-5 had α > 0.7). All six scales had a high percentage (>30%) of scores at the ceiling. These findings supported the use of these items in the evaluation of change, although strong ceiling effects may hinder discerning change.

Next, the confirmed SVP scales were used to evaluate whether the intervention improved program constructs. A one-group pretest-posttest design compared participants' self-reports before and after the intervention. It was hypothesized that measurements of reduced blocks to spirituality (α = 0.76), meaning at work (α = 0.86), collaborative communication (α = 0.67), and SVP job tasks (α = 0.97) would improve following SVP implementation. The SVP job tasks scale was included even though it was excluded from the ordinal CFA analysis, owing to a limited sample and high inter-item correlation. Changes in scaled measurements were assessed using multilevel linear regression methods. All post-intervention measurements increased (by less than 0.28 points), but only reduced blocks to spirituality was statistically significant (0.22 points on a scale from 1 to 7, p < 0.05) after adjustment for covariates. Greater intensity of the intervention (stratifying on high-participation units) strengthened the effects, but these were not statistically significant. The findings provide preliminary support for the hypothesis that meaning in work can be improved and, importantly, lend greater credence to any observed improvements in the outcomes. (Abstract shortened by UMI.)
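A minimal sketch of the scale-reliability statistic (Cronbach's α) the abstract reports, computed from an items-by-respondents matrix; the item data below are simulated, not SVP questionnaire responses.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: 2-D array, rows = respondents, columns = scale items."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars / total_var)

rng = np.random.default_rng(12)
trait = rng.normal(size=(500, 1))                   # shared latent construct
items = trait + rng.normal(0, 0.8, size=(500, 5))   # 5 correlated items
print(cronbach_alpha(items))
```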

Relevance: 80.00%

Abstract:

Generalized linear Poisson and logistic regression models were used to examine the relationship of temperature and precipitation with cases of Saint Louis encephalitis virus in the Houston metropolitan area. The models were investigated with and without repeated measures, with a first-order autoregressive (AR(1)) correlation structure used for the repeated measures model. The two types of Poisson regression models, with and without the correlation structure, showed that a one-degree (Fahrenheit) increase in temperature increases the occurrence of the virus 1.7 times and a one-inch increase in precipitation increases the occurrence of the virus 1.5 times. Logistic regression did not show these covariates to be significant predictors of encephalitis activity in Houston under either correlation structure; this discrepancy for the logistic model could be attributed to the small data set.

Keywords: Saint Louis encephalitis; generalized linear model; Poisson; logistic; first-order autoregressive; temperature; precipitation.
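A hedged sketch of a repeated-measures Poisson model with a first-order autoregressive working correlation, fit via statsmodels GEE on simulated data; treating year as the cluster and month as the time index is an illustrative assumption, as are all variable names. Exponentiated coefficients give rate ratios of the kind the abstract quotes.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(13)
years, months = 10, 12
year = np.repeat(np.arange(years), months)
temp = rng.normal(75, 10, year.size)     # degrees Fahrenheit
precip = rng.gamma(2, 1.5, year.size)    # inches
cases = rng.poisson(np.exp(-4 + 0.05 * temp + 0.04 * precip))
df = pd.DataFrame({"cases": cases, "temp": temp, "precip": precip,
                   "year": year, "month": np.tile(np.arange(months), years)})

fit = smf.gee(
    "cases ~ temp + precip", groups="year", data=df,
    time=df["month"].to_numpy(), family=sm.families.Poisson(),
    cov_struct=sm.cov_struct.Autoregressive(),   # AR(1) working correlation
).fit()
print(np.exp(fit.params))   # rate ratios per unit temperature / precipitation
```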

Relevance: 80.00%

Abstract:

Objective: In this secondary data analysis, three statistical methodologies were implemented to handle cases with missing data in a motivational interviewing and feedback study. The aim was to evaluate the impact these methodologies have on the data analysis.

Methods: We first evaluated whether the assumption of missing completely at random held for this study. We then conducted a secondary data analysis using a mixed linear model, handling missing data with three methodologies: (a) complete-case analysis; (b) multiple imputation with an explicit model containing the outcome variables, time, and the time-by-treatment interaction; and (c) multiple imputation with an explicit model containing the outcome variables, time, the time-by-treatment interaction, and additional covariates (e.g., age, gender, smoking, years in school, marital status, housing, race/ethnicity, and whether participants play on an athletic team). Several comparisons were conducted, including: 1) the motivational interviewing with feedback group (MIF) vs. the assessment-only group (AO), the motivational interviewing group (MIO) vs. AO, and the feedback-only group (FBO) vs. AO; 2) MIF vs. FBO; and 3) MIF vs. MIO.

Results: We first evaluated the patterns of missingness in this study: about 13% of participants showed monotone missing patterns, and about 3.5% showed non-monotone missing patterns. We then evaluated the assumption of missing completely at random with Little's MCAR test, in which the chi-square test statistic was 167.8 with 125 degrees of freedom (p = 0.006), indicating that the data could not be assumed to be missing completely at random. We then compared whether the three strategies reached the same results. For the comparison between MIF and AO, as well as the comparison between MIF and FBO, only multiple imputation with additional covariates (under uncongenial and congenial models) reached different results. For the comparison between MIF and MIO, all the methodologies for handling missing values produced different results.

Discussion: The study indicated, first, that missingness was crucial in this study. Second, understanding the model assumptions was important, since we could not determine whether the data were missing at random or missing not at random. Future research should therefore focus on sensitivity analyses under the missing-not-at-random assumption.
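A hedged sketch of multiple imputation followed by a regression fit, using statsmodels' MICE (chained equations) as a stand-in for the thesis's explicit-model imputation; the variables, the missingness mechanism, and the burn-in/imputation counts are illustrative assumptions.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.imputation.mice import MICE, MICEData

rng = np.random.default_rng(14)
n = 300
treat = rng.integers(0, 2, n)
time = rng.integers(0, 3, n)
y = 1 + 0.5 * treat * time + rng.normal(size=n)
y[rng.random(n) < 0.15] = np.nan             # inject missing outcomes
df = pd.DataFrame({"y": y, "treat": treat, "time": time,
                   "tx_time": treat * time})  # time-by-treatment interaction

imp = MICEData(df)                            # chained-equations imputer
fit = MICE("y ~ time + tx_time", sm.OLS, imp).fit(10, 10)  # burn-in, imputations
print(fit.summary())   # estimates pooled across the completed datasets
```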