883 results for "Variable sample size X- control chart"


Relevance: 100.00%

Abstract:

Two studies investigated the influence of juror need for cognition on the systematic and heuristic processing of expert evidence. U.S. citizens reporting for jury duty in South Florida read a 15-page summary of a hostile work environment case containing expert testimony. The expert described a study she had conducted on the effects of viewing sexualized materials on men's behavior toward women. Certain methodological features of the expert's research varied across experimental conditions. In Study 1 (N = 252), the expert's study was valid, contained a confound, or included the potential for experimenter bias (internal validity) and relied on a small or large sample (sample size) of college undergraduates or trucking employees (ecological validity). When the expert's study included trucking employees, high need for cognition jurors in Study 1 rated the expert more credible and trustworthy than did low need for cognition jurors. Jurors were insensitive to variations in the study's internal validity or sample size. Juror ratings of plaintiff credibility, plaintiff trustworthiness, and study quality were positively correlated with verdict. In Study 2 (N = 162), the expert's published or unpublished study (general acceptance) was either valid or lacked an appropriate control group (internal validity) and included a sample of college undergraduates or trucking employees (ecological validity). High need for cognition jurors in Study 2 found the defendant liable more often and evaluated the expert evidence more favorably when the expert's study was internally valid than when an appropriate control group was missing. Low need for cognition jurors did not differentiate between the internally valid and invalid study. Variations in the study's general acceptance and ecological validity did not affect juror judgments. Juror ratings of expert and plaintiff credibility, plaintiff trustworthiness, and study quality were positively correlated with verdict. The present research demonstrated that the need for cognition moderates juror sensitivity to expert evidence quality and that certain message-related heuristics influence juror judgments when ability or motivation to process systematically is low.

Relevance: 100.00%

Abstract:

The primary objective of this proposal was to determine whether mitochondrial oxidative stress and variation in a particular mtDNA lineage contribute to the risk of developing cortical dysplasia and are potential contributing factors in epileptogenesis in children. The occurrence of epilepsy in children is highly associated with malformations of cortical development (MCD). It appears that MCD might arise from developmental errors due to environmental exposures in combination with inherited variation in the response to environmental exposures and in mitochondrial function. Therefore, it is postulated that variation in a particular mtDNA lineage of children contributes to the effects of mitochondrial DNA damage on the MCD phenotype. Quantitative PCR and dot blot were used to examine mitochondrial oxidative damage and single nucleotide polymorphisms (SNPs) in the mitochondrial genome in brain tissue from 48 pediatric intractable epilepsy patients from Miami Children’s Hospital and 11 control samples from the NICHD Brain and Tissue Bank for Developmental Disorders. Epilepsy patients showed higher mtDNA copy number compared to normal healthy subjects (controls). Oxidative mtDNA damage was lower in non-neoplastic but higher in neoplastic epilepsy patients compared to controls. There was a trend toward lower mtDNA oxidative damage in the non-neoplastic (MCD) patients compared to controls, yet the reverse was observed in neoplastic (MCD and non-MCD) epilepsy patients. The presence of mtDNA SNPs and haplogroups did not show any statistically significant relationship with epilepsy phenotypes. However, SNPs G9804A and G9952A were found at higher frequencies in epilepsy samples. Logistic regression analysis showed no relationship between mtDNA oxidative stress, mtDNA copy number, mitochondrial haplogroups, or SNP variation and epilepsy in pediatric patients. The levels of mtDNA copy number and oxidative mtDNA damage and the SNPs G9952A and T10010C predicted neoplastic epilepsy; however, this was not statistically significant due to the small sample size of pediatric subjects. The findings of this study indicate that an increase in mtDNA content may be a compensatory mechanism for defective mitochondria in intractable epilepsy and brain tumors. Further validation of these findings related to mitochondrial genotypes and mitochondrial dysfunction in pediatric epilepsy and MCD may lay the groundwork for the development of new therapies and prevention strategies during embryogenesis.

Relevance: 100.00%

Abstract:

Diabetes self-management, an essential component of diabetes care, includes weight control practices and requires guidance from providers. Minorities are likely to have less access to quality health care than White non-Hispanics (WNH) (American College of Physicians-American Society of Internal Medicine, 2000). Medical advice received and understood may differ by race/ethnicity as a consequence of the patient-provider communication process and may affect diabetes self-management. This study examined the relationships among participants’ reports of (1) medical advice given, (2) diabetes self-management, and (3) health outcomes for Mexican-Americans (MA) and Black non-Hispanics (BNH) as compared to WNH (reference group), using data available through the National Health and Nutrition Examination Survey (NHANES) for the years 2007–2008. This study was a secondary, single-point analysis. Approximately 30 datasets were merged, and their quality and integrity were assured by analysis of frequency, range, and quartiles. Subjects were extracted based on the following inclusion criteria: belonging to the MA, BNH, or WNH category; being 21 years or older; and responding yes to being diagnosed with diabetes. A final sample of 654 adults [MA (131); BNH (223); WNH (300)] was used for the analyses. The findings revealed statistically significant differences in the medical advice reported as given. BNH [OR = 1.83 (1.16, 2.88), p = 0.013] were more likely than WNH to report being told to reduce fat or calories. Similarly, BNH [OR = 2.84 (1.45, 5.59), p = 0.005] were more likely than WNH to report being told to increase their physical activity. Mexican-Americans were less likely to self-monitor their blood glucose than WNH [OR = 2.70 (1.66, 4.38), p < 0.001]. There were differences among ethnicities in reporting recent diabetes education. Black non-Hispanics were twice as likely as WNH to report receiving diabetes education [OR = 2.29 (1.36, 3.85), p = 0.004]. Medical advice reported as given and ethnicity/race together predicted several health outcomes. Having recent diabetes education increased the likelihood of performing several diabetes self-management behaviors, independent of race. These findings indicate a need to assess the effectiveness of patient-provider communication and care, as well as the importance of ongoing diabetes education for persons with diabetes.
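For readers unfamiliar with how odds ratios such as those above are produced, the following minimal Python sketch fits a logistic regression and exponentiates the coefficients to obtain ORs with 95% confidence intervals. It is illustrative only: the synthetic data and the variable names (advice, ethnicity) are assumptions, not the NHANES variables used in the study.

import numpy as np
import pandas as pd
import statsmodels.api as sm

# Synthetic stand-in for survey responses: advice = 1 if told to reduce fat/calories.
rng = np.random.default_rng(0)
n = 600
ethnicity = rng.choice(["WNH", "BNH", "MA"], size=n)
logit = -0.4 + 0.6 * (ethnicity == "BNH") + 0.1 * (ethnicity == "MA")
advice = rng.binomial(1, 1 / (1 + np.exp(-logit)))

df = pd.DataFrame({"advice": advice, "ethnicity": ethnicity})
X = pd.get_dummies(df["ethnicity"], drop_first=False)[["BNH", "MA"]]  # WNH is the reference
X = sm.add_constant(X.astype(float))

fit = sm.Logit(df["advice"], X).fit(disp=0)
or_table = pd.DataFrame({
    "OR": np.exp(fit.params),
    "2.5%": np.exp(fit.conf_int()[0]),
    "97.5%": np.exp(fit.conf_int()[1]),
    "p": fit.pvalues,
})
print(or_table.loc[["BNH", "MA"]])  # odds ratios for BNH and MA relative to WNH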

Relevance: 100.00%

Abstract:

The Highway Safety Manual (HSM) estimates roadway safety performance based on predictive models that were calibrated using national data. Calibration factors are then used to adjust these predictive models to local conditions for local applications. The HSM recommends that local calibration factors be estimated using 30 to 50 randomly selected sites that experienced at least a total of 100 crashes per year. It also recommends that the factors be updated every two to three years, preferably on an annual basis. However, these recommendations are primarily based on expert opinions rather than data-driven research findings. Furthermore, most agencies do not have data for many of the input variables recommended in the HSM. This dissertation is aimed at determining the best way to meet three major data needs affecting the estimation of calibration factors: (1) the required minimum sample sizes for different roadway facilities, (2) the required frequency for calibration factor updates, and (3) the influential variables affecting calibration factors. In this dissertation, statewide segment and intersection data were first collected for most of the HSM recommended calibration variables using a Google Maps application. In addition, eight years (2005-2012) of traffic and crash data were retrieved from existing databases from the Florida Department of Transportation. With these data, the effect of sample size criterion on calibration factor estimates was first studied using a sensitivity analysis. The results showed that the minimum sample sizes not only vary across different roadway facilities, but they are also significantly higher than those recommended in the HSM. In addition, results from paired sample t-tests showed that calibration factors in Florida need to be updated annually. To identify influential variables affecting the calibration factors for roadway segments, the variables were prioritized by combining the results from three different methods: negative binomial regression, random forests, and boosted regression trees. Only a few variables were found to explain most of the variation in the crash data. Traffic volume was consistently found to be the most influential. In addition, roadside object density, major and minor commercial driveway densities, and minor residential driveway density were also identified as influential variables.
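As context for the calibration discussion above, an HSM-style local calibration factor is the ratio of total observed crashes to total SPF-predicted crashes at the calibration sites. The sketch below computes that ratio and adds a simple bootstrap check of how stable the factor is at different sample sizes; the bootstrap criterion and the 10% tolerance are my own illustrative assumptions, not the sensitivity analysis used in the dissertation.

import numpy as np

def hsm_calibration_factor(observed, predicted):
    # Local calibration factor C = (total observed crashes) / (total SPF-predicted crashes).
    observed = np.asarray(observed, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    return observed.sum() / predicted.sum()

def calibration_stability(observed, predicted, candidate_sizes, tol=0.10, n_boot=2000, seed=1):
    # For each candidate number of sites, resample sites with replacement and report how
    # often the resampled factor lands within +/- tol of the full-data factor.
    observed = np.asarray(observed, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    rng = np.random.default_rng(seed)
    full_c = hsm_calibration_factor(observed, predicted)
    results = {}
    for m in candidate_sizes:
        hits = 0
        for _ in range(n_boot):
            idx = rng.choice(len(observed), size=m, replace=True)
            c = hsm_calibration_factor(observed[idx], predicted[idx])
            hits += abs(c - full_c) <= tol * full_c
        results[m] = hits / n_boot
    return full_c, results

For example, calibration_stability(obs, pred, [30, 50, 100, 200]) would show how the HSM's 30-to-50-site guideline compares with larger samples for a given facility type.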

Relevance: 100.00%

Abstract:

Sample preparation technique is critical for valid chemical analyses. A main source of error comes from the fact that the large specific surface area of crusts or nodules enhances their tendency to retain or attract hygroscopic moisture. Variable treatment of this moisture can, in extreme cases, lead to differences in analytical values as great as 40-50%. In order to quantify these influences, samples of ferromanganese oxide-phosphorite pavement from the Blake Plateau were subjected to various drying techniques before analysis by X-ray fluorescence.

Relevance: 100.00%

Abstract:

Many modern applications fall into the category of "large-scale" statistical problems, in which both the number of observations n and the number of features or parameters p may be large. Many existing methods focus on point estimation, despite the continued relevance of uncertainty quantification in the sciences, where the number of parameters to estimate often exceeds the sample size even though the value of n has grown enormously in many fields. Thus, the tendency in some areas of industry to dispense with traditional statistical analysis on the basis that "n = all" is of little relevance outside of certain narrow applications. The main result of the Big Data revolution in most fields has instead been to make computation much harder without reducing the importance of uncertainty quantification. Bayesian methods excel at uncertainty quantification but often scale poorly relative to alternatives. This conflict between the statistical advantages of Bayesian procedures and their substantial computational disadvantages is perhaps the greatest challenge facing modern Bayesian statistics, and it is the primary motivation for the work presented here.

Two general strategies for scaling Bayesian inference are considered. The first is the development of methods that lend themselves to faster computation, and the second is design and characterization of computational algorithms that scale better in n or p. In the first instance, the focus is on joint inference outside of the standard problem of multivariate continuous data that has been a major focus of previous theoretical work in this area. In the second area, we pursue strategies for improving the speed of Markov chain Monte Carlo algorithms, and characterizing their performance in large-scale settings. Throughout, the focus is on rigorous theoretical evaluation combined with empirical demonstrations of performance and concordance with the theory.

One topic we consider is modeling the joint distribution of multivariate categorical data, often summarized in a contingency table. Contingency table analysis routinely relies on log-linear models, with latent structure analysis providing a common alternative. Latent structure models lead to a reduced rank tensor factorization of the probability mass function for multivariate categorical data, while log-linear models achieve dimensionality reduction through sparsity. Little is known about the relationship between these notions of dimensionality reduction in the two paradigms. In Chapter 2, we derive several results relating the support of a log-linear model to nonnegative ranks of the associated probability tensor. Motivated by these findings, we propose a new collapsed Tucker class of tensor decompositions, which bridge existing PARAFAC and Tucker decompositions, providing a more flexible framework for parsimoniously characterizing multivariate categorical data. Taking a Bayesian approach to inference, we illustrate empirical advantages of the new decompositions.
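As a concrete illustration of the latent-structure (PARAFAC-type) factorization mentioned above, the short numpy sketch below builds a joint probability mass function for p categorical variables as a mixture of k product kernels, so the resulting probability tensor has nonnegative rank at most k. The sizes p, k, and d are arbitrary choices for the example, not values from the thesis.

import numpy as np

rng = np.random.default_rng(0)

# P(y1 = c1, ..., yp = cp) = sum_h nu_h * prod_j psi[j][h, c_j]
p, k, d = 4, 3, 5                                            # variables, latent classes, categories
nu = rng.dirichlet(np.ones(k))                               # latent class weights
psi = [rng.dirichlet(np.ones(d), size=k) for _ in range(p)]  # per-class marginals, shape (k, d)

# Assemble the full d^p probability tensor (feasible only for small p and d).
tensor = np.zeros((d,) * p)
for h in range(k):
    component = np.array(1.0)
    for j in range(p):
        component = np.multiply.outer(component, psi[j][h])
    tensor += nu[h] * component

print(tensor.shape, tensor.sum())                            # (5, 5, 5, 5) and ~1.0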

Latent class models for the joint distribution of multivariate categorical data, such as the PARAFAC decomposition, play an important role in the analysis of population structure. In this context, the number of latent classes is interpreted as the number of genetically distinct subpopulations of an organism, an important factor in the analysis of evolutionary processes and conservation status. Existing methods focus on point estimates of the number of subpopulations and lack robust uncertainty quantification. Moreover, whether the number of latent classes in these models is even an identified parameter is an open question. In Chapter 3, we show that when the model is properly specified, the correct number of subpopulations can be recovered almost surely. We then propose an alternative method for estimating the number of latent subpopulations that provides good quantification of uncertainty, and we provide a simple procedure for verifying that the proposed method is consistent for the number of subpopulations. The performance of the model in estimating the number of subpopulations and in other common population structure inference problems is assessed in simulations and a real data application.

In contingency table analysis, sparse data is frequently encountered for even modest numbers of variables, resulting in non-existence of maximum likelihood estimates. A common solution is to obtain regularized estimates of the parameters of a log-linear model. Bayesian methods provide a coherent approach to regularization, but are often computationally intensive. Conjugate priors ease computational demands, but the conjugate Diaconis--Ylvisaker priors for the parameters of log-linear models do not give rise to closed form credible regions, complicating posterior inference. In Chapter 4 we derive the optimal Gaussian approximation to the posterior for log-linear models with Diaconis--Ylvisaker priors, and provide convergence rate and finite-sample bounds for the Kullback-Leibler divergence between the exact posterior and the optimal Gaussian approximation. We demonstrate empirically in simulations and a real data application that the approximation is highly accurate, even in relatively small samples. The proposed approximation provides a computationally scalable and principled approach to regularized estimation and approximate Bayesian inference for log-linear models.
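Chapter 4's optimal (KL-minimizing) Gaussian approximation is derived in the thesis itself; as background, the closely related and more familiar Laplace approximation is sketched below for a Poisson log-linear model with a Gaussian prior. This is an illustration of the general Gaussian-approximation idea, not the Diaconis--Ylvisaker construction, and the simulated data and prior variance are assumptions.

import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
n, p = 200, 3
X = np.column_stack([np.ones(n), rng.normal(size=(n, p - 1))])
beta_true = np.array([0.5, 0.3, -0.2])
y = rng.poisson(np.exp(X @ beta_true))
tau2 = 10.0  # prior variance, beta ~ N(0, tau2 * I)

def neg_log_post(beta):
    # Negative Poisson log-likelihood plus negative Gaussian log-prior (up to constants).
    eta = X @ beta
    return -(y @ eta - np.exp(eta).sum()) + 0.5 * beta @ beta / tau2

res = minimize(neg_log_post, np.zeros(p), method="BFGS")
beta_map = res.x
# Hessian of the negative log posterior at the mode gives the Gaussian precision.
W = np.exp(X @ beta_map)
H = X.T @ (X * W[:, None]) + np.eye(p) / tau2
cov = np.linalg.inv(H)
print("Gaussian approximation: mean", beta_map.round(3), "variances", np.diag(cov).round(4))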

Another challenging and somewhat non-standard joint modeling problem is inference on tail dependence in stochastic processes. In applications where extreme dependence is of interest, data are almost always time-indexed. Existing methods for inference and modeling in this setting often cluster extreme events or choose window sizes with the goal of preserving temporal information. In Chapter 5, we propose an alternative paradigm for inference on tail dependence in stochastic processes with arbitrary temporal dependence structure in the extremes, based on the idea that the information on strength of tail dependence and the temporal structure in this dependence are both encoded in waiting times between exceedances of high thresholds. We construct a class of time-indexed stochastic processes with tail dependence obtained by endowing the support points in de Haan's spectral representation of max-stable processes with velocities and lifetimes. We extend Smith's model to these max-stable velocity processes and obtain the distribution of waiting times between extreme events at multiple locations. Motivated by this result, a new definition of tail dependence is proposed that is a function of the distribution of waiting times between threshold exceedances, and an inferential framework is constructed for estimating the strength of extremal dependence and quantifying uncertainty in this paradigm. The method is applied to climatological, financial, and electrophysiology data.

The remainder of this thesis focuses on posterior computation by Markov chain Monte Carlo. The Markov chain Monte Carlo method is the dominant paradigm for posterior computation in Bayesian analysis. It has long been common to control computation time by making approximations to the Markov transition kernel. Comparatively little attention has been paid to convergence and estimation error in these approximating Markov chains. In Chapter 6, we propose a framework for assessing when to use approximations in MCMC algorithms, and how much error in the transition kernel should be tolerated to obtain optimal estimation performance with respect to a specified loss function and computational budget. The results require only ergodicity of the exact kernel and control of the kernel approximation accuracy. The theoretical framework is applied to approximations based on random subsets of data, low-rank approximations of Gaussian processes, and a novel approximating Markov chain for discrete mixture models.
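To make the idea of an approximating transition kernel concrete, here is a minimal sketch of one such approximation: a random-walk Metropolis sampler for a Gaussian mean in which each log-likelihood evaluation uses a random subset of the data, rescaled by n/m. This is a generic stand-in for the class of kernels analyzed in Chapter 6, not the chapter's own framework, and all numerical settings are assumptions.

import numpy as np

rng = np.random.default_rng(0)
n, m = 100_000, 1_000
data = rng.normal(loc=2.0, scale=1.0, size=n)

def approx_loglik(theta):
    # Unbiased-in-expectation subsample estimate of the (unnormalized) Gaussian log-likelihood.
    sub = rng.choice(data, size=m, replace=False)
    return (n / m) * np.sum(-0.5 * (sub - theta) ** 2)

theta, chain = 0.0, []
current = approx_loglik(theta)
for _ in range(5_000):
    prop = theta + 0.02 * rng.normal()            # random-walk proposal
    prop_ll = approx_loglik(prop)
    if np.log(rng.uniform()) < prop_ll - current: # Metropolis acceptance with noisy log-likelihoods
        theta, current = prop, prop_ll
    chain.append(theta)
print("approximate posterior mean:", np.mean(chain[1_000:]))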

Data augmentation Gibbs samplers are arguably the most popular class of algorithm for approximately sampling from the posterior distribution for the parameters of generalized linear models. The truncated Normal and Polya-Gamma data augmentation samplers are standard examples for probit and logit links, respectively. Motivated by an important problem in quantitative advertising, in Chapter 7 we consider the application of these algorithms to modeling rare events. We show that when the sample size is large but the observed number of successes is small, these data augmentation samplers mix very slowly, with a spectral gap that converges to zero at a rate at least proportional to the reciprocal of the square root of the sample size up to a log factor. In simulation studies, moderate sample sizes result in high autocorrelations and small effective sample sizes. Similar empirical results are observed for related data augmentation samplers for multinomial logit and probit models. When applied to a real quantitative advertising dataset, the data augmentation samplers mix very poorly. Conversely, Hamiltonian Monte Carlo and a type of independence chain Metropolis algorithm show good mixing on the same dataset.
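As a concrete reference point for the truncated-normal data augmentation sampler discussed above, the sketch below implements an Albert–Chib-style Gibbs sampler for probit regression on synthetic data with a rare outcome. The prior variance, sample size, and data-generating parameters are assumptions chosen for illustration; a genuinely rare-event regime is exactly the setting in which Chapter 7 shows such samplers mix slowly.

import numpy as np
from scipy.stats import norm, truncnorm

rng = np.random.default_rng(0)
n, p, tau2 = 2_000, 2, 100.0                       # prior: beta ~ N(0, tau2 * I)
X = np.column_stack([np.ones(n), rng.normal(size=n)])
beta_true = np.array([-2.5, 0.5])                  # intercept chosen so successes are rare
y = rng.binomial(1, norm.cdf(X @ beta_true))

V = np.linalg.inv(X.T @ X + np.eye(p) / tau2)      # posterior covariance of beta given latent z
L = np.linalg.cholesky(V)

beta, draws = np.zeros(p), []
for it in range(2_000):
    # 1) Sample latent z_i ~ N(x_i' beta, 1), truncated to (0, inf) if y_i = 1, else (-inf, 0).
    mu = X @ beta
    lo = np.where(y == 1, -mu, -np.inf)            # standardized truncation bounds
    hi = np.where(y == 1, np.inf, -mu)
    z = mu + truncnorm.rvs(lo, hi, size=n, random_state=rng)
    # 2) Sample beta | z ~ N(V X'z, V).
    beta = V @ (X.T @ z) + L @ rng.normal(size=p)
    draws.append(beta.copy())

draws = np.array(draws[500:])                      # discard burn-in
print("posterior means:", draws.mean(axis=0))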

Relevance: 100.00%

Abstract:

The extremal quantile index refers to a quantile index that drifts to zero (or one) as the sample size increases. The three chapters of my dissertation consist of three applications of this concept to three distinct econometric problems. In Chapter 2, I use the concept of the extremal quantile index to derive new asymptotic properties and an inference method for quantile treatment effect estimators when the quantile index of interest is close to zero. In Chapter 3, I rely on the concept of the extremal quantile index to achieve identification at infinity in sample selection models and propose a new inference method. Last, in Chapter 4, I use the concept of the extremal quantile index to define an asymptotic trimming scheme that can be used to control the convergence rate of the estimator of the intercept in binary response models.

Relevance: 100.00%

Abstract:

Purpose: To investigate the effect of incorporating a beam spreading parameter in a beam angle optimization algorithm and to evaluate its efficacy for creating coplanar IMRT lung plans in conjunction with machine learning generated dose objectives.

Methods: Fifteen anonymized patient cases were each re-planned with ten values over the range of the beam spreading parameter, k, and analyzed with a Wilcoxon signed-rank test to determine whether any particular value resulted in significant improvement over the initially treated plan created by a trained dosimetrist. Dose constraints were generated by a machine learning algorithm and kept constant for each case across all k values. Parameters investigated for potential improvement included mean lung dose, V20 lung, V40 heart, 80% conformity index, and 90% conformity index.
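The paired comparison described above can be reproduced in outline with scipy's Wilcoxon signed-rank test; the sketch below compares a single plan-quality metric (here the 90% conformity index) between the re-optimized and initial plans across the 15 cases. The metric values are synthetic placeholders, not the study's data.

import numpy as np
from scipy.stats import wilcoxon

rng = np.random.default_rng(0)
ci90_initial = rng.normal(0.80, 0.05, size=15)                 # initial clinical plans (synthetic)
ci90_reopt = ci90_initial + rng.normal(0.03, 0.02, size=15)    # re-optimized plans (hypothetical improvement)

stat, p_value = wilcoxon(ci90_reopt, ci90_initial)             # paired, two-sided by default
print(f"Wilcoxon statistic = {stat:.1f}, p = {p_value:.4f}")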

Results: At a significance level of 5%, treatment plans created with this method resulted in significantly better conformity indices. Dose coverage of the PTV improved by an average of 12% over the initial plans. At the same time, these treatment plans showed no significant difference in mean lung dose, V20 lung, or V40 heart when compared to the initial plans; however, it should be noted that these results could be influenced by the small sample size of patient cases.

Conclusions: The beam angle optimization algorithm, with the inclusion of the beam spreading parameter k, increases the dose conformity of the automatically generated treatment plans over that of the initial plans without adversely affecting the dose to organs at risk. This parameter can be varied according to physician preference in order to control the tradeoff between dose conformity and OAR sparing without compromising the integrity of the plan.

Relevance: 100.00%

Abstract:

Fitting statistical models is computationally challenging when the sample size or the dimension of the dataset is huge. An attractive approach for down-scaling the problem size is to first partition the dataset into subsets and then fit using distributed algorithms. The dataset can be partitioned either horizontally (in the sample space) or vertically (in the feature space), and the challenge arises in defining an algorithm with low communication, theoretical guarantees, and excellent practical performance in general settings. For sample space partitioning, I propose a MEdian Selection Subset AGgregation Estimator ({\em message}) algorithm for solving these issues. The algorithm applies feature selection in parallel for each subset using a regularized regression or Bayesian variable selection method, calculates the `median' feature inclusion index, estimates coefficients for the selected features in parallel for each subset, and then averages these estimates. The algorithm is simple, involves very minimal communication, scales efficiently in sample size, and has theoretical guarantees. I provide extensive experiments to show excellent performance in feature selection, estimation, prediction, and computation time relative to usual competitors.
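The following Python sketch outlines the row-partitioned workflow as described above: per-subset sparse selection, a majority ("median") inclusion rule, per-subset refitting on the selected support, and averaging. It is my reading of that description, written with scikit-learn for illustration, not the dissertation's reference implementation; the lasso selector and the 50% inclusion threshold are assumptions.

import numpy as np
from sklearn.linear_model import LassoCV, LinearRegression

def message_sketch(X, y, n_subsets=5, seed=0):
    rng = np.random.default_rng(seed)
    folds = np.array_split(rng.permutation(len(y)), n_subsets)

    # 1) Parallelizable feature selection per subset (here: cross-validated lasso).
    inclusion = np.zeros((n_subsets, X.shape[1]))
    for s, rows in enumerate(folds):
        lasso = LassoCV(cv=3).fit(X[rows], y[rows])
        inclusion[s] = lasso.coef_ != 0

    # 2) "Median" inclusion index: keep features selected in at least half the subsets.
    selected = np.where(np.median(inclusion, axis=0) >= 0.5)[0]

    # 3) Refit on each subset using only the selected features, then average coefficients.
    coefs = np.zeros((n_subsets, len(selected)))
    for s, rows in enumerate(folds):
        ols = LinearRegression().fit(X[rows][:, selected], y[rows])
        coefs[s] = ols.coef_
    return selected, coefs.mean(axis=0)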

While sample space partitioning is useful in handling datasets with large sample size, feature space partitioning is more effective when the data dimension is high. Existing methods for partitioning features, however, are either vulnerable to high correlations or inefficient in reducing the model dimension. In the thesis, I propose a new embarrassingly parallel framework named {\em DECO} for distributed variable selection and parameter estimation. In {\em DECO}, variables are first partitioned and allocated to m distributed workers. The decorrelated subset data within each worker are then fitted via any algorithm designed for high-dimensional problems. We show that by incorporating the decorrelation step, DECO can achieve consistent variable selection and parameter estimation on each subset with (almost) no assumptions. In addition, the convergence rate is nearly minimax optimal for both sparse and weakly sparse models and does NOT depend on the partition number m. Extensive numerical experiments are provided to illustrate the performance of the new framework.
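A very rough sketch of the decorrelation-then-partition idea is given below, under my own simplifying assumptions: the rows are approximately whitened with a ridge-regularized inverse square root of XXᵀ/p before the feature blocks are fitted independently (the exact scaling used in the thesis may differ), and a lasso stands in for whichever high-dimensional method is run on each worker.

import numpy as np
from sklearn.linear_model import LassoCV

def deco_sketch(X, y, n_blocks=4, ridge=1.0, seed=0):
    n, p = X.shape
    # 1) Decorrelate: premultiply X and y by (X X^T / p + ridge * I)^(-1/2).
    G = X @ X.T / p + ridge * np.eye(n)
    vals, vecs = np.linalg.eigh(G)
    G_inv_sqrt = vecs @ np.diag(1.0 / np.sqrt(vals)) @ vecs.T
    X_tilde, y_tilde = G_inv_sqrt @ X, G_inv_sqrt @ y

    # 2) Partition the features into blocks and fit each block independently
    #    (each block could live on a separate worker).
    rng = np.random.default_rng(seed)
    coef = np.zeros(p)
    for block in np.array_split(rng.permutation(p), n_blocks):
        fit = LassoCV(cv=3).fit(X_tilde[:, block], y_tilde)
        coef[block] = fit.coef_
    return coef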

For datasets with both large sample sizes and high dimensionality, I propose a new "divide-and-conquer" framework, {\em DEME} (DECO-message), by leveraging both the {\em DECO} and the {\em message} algorithms. The new framework first partitions the dataset in the sample space into row cubes using {\em message} and then partitions the feature space of the cubes using {\em DECO}. This procedure is equivalent to partitioning the original data matrix into multiple small blocks, each of a feasible size that can be stored and fitted on a computer in parallel. The results are then synthesized via the {\em DECO} and {\em message} algorithms in reverse order to produce the final output. The whole framework is extremely scalable.

Relevance: 100.00%

Abstract:

From a sociocultural perspective, individuals learn best from contextualized experiences. In preservice teacher education, contextualized experiences include authentic literacy experiences, which involve a real reader and writer and replicate real-life communication. To be prepared to teach well, preservice teachers need to gain literacy content knowledge and possess reading maturity. The purpose of this study was to examine the effect of authentic literacy experiences, as Book Buddies with Hispanic fourth graders, on preservice teachers’ literacy content knowledge and reading maturity. The study was a pretest/posttest design conducted over 12 weeks. Preservice teacher participants, the focus of the study, were 43 elementary education majors taking the third of four required reading courses, assigned to non-probabilistic convenience groups (n = 33 experimental, n = 10 comparison). The Survey of Preservice Teachers’ Knowledge of Teaching and Technology (SPTKTT), specifically designed for preservice teachers majoring in elementary or early childhood education, and the Reading Maturity Survey (RMS) were used in this study. Preservice teachers chose either the experimental or comparison group based on the opportunity to earn extra credit points (experimental = 30 points, comparison = 15). After exchanging introductory letters, preservice teachers and Hispanic fourth graders each read four books. After reading each book, preservice teachers wrote letters to their student asking higher-order thinking questions. Preservice teachers received scanned copies of their student’s unedited letters via email, which enabled them to see their student’s authentic answers and writing levels. A series of analyses of covariance was used to determine whether there were significant differences in the dependent variables between the experimental and comparison groups. This quasi-experimental study tested two hypotheses. Using the appropriate pretest scores as covariates for adjusting the posttest means of the Literacy Content Knowledge (LCK) subcategory of the SPTKTT and of the RMS, the adjusted mean posttest scores of the experimental and comparison groups were compared. No significant differences were found on the LCK dependent variable at the .05 level of significance, which may be due to Type II error caused by the small sample size. Significant differences were found on the RMS at the .05 level of significance.

Relevance: 100.00%

Abstract:

During the cleaning of the HPC core surfaces from Hole 480 for photography, the material removed was conserved carefully in approximately 10 cm intervals (by K. Kelts); this material was made available to us in the hope that it would be possible to obtain an oxygen isotope stratigraphy for the site. The samples were, of course, somewhat variable in size, but the majority were probably between 5 and 10 cm³. Had this been a normal marine environment, such sample sizes would have contained abundant planktonic foraminifers together with a small number of benthics. However, this is clearly not the case, for many samples contained no foraminifers, whereas others contained more benthics than planktonics. Among the planktonic foraminifers the commonest species are Globigerina bulloides, Neogloboquadrina dutertrei, and N. pachyderma. A few samples contain a more normal fauna with Globigerinoides spp. and occasional Globorotalia spp. Sample 480-3-3, 20-30 cm contained Globigerina rubescens, isolated specimens of which were noted in a few other samples in Cores 3, 4, and 5. This is a particularly solution-sensitive species; in the open Pacific it is only found widely distributed at horizons of exceptionally low carbonate dissolution, such as the last glacial-to-interglacial transition.

Relevance: 100.00%

Abstract:

The lamination and burrowing patterns in 17 box cores were analyzed with the aid of X-ray photographs and thin sections. A standardized method of log plotting made statistical analysis of the data possible. Several 'structure types' were established, although it was realized that the boundaries are purely arbitrary divisions in what can sometimes be a continuous sequence. In the transition zone between the marginal sand facies and the fine-grained basin facies, muddy sediment is found which contains particularly well differentiated, alternating laminae. This zone is also characterized by layers rich in plant remains. The alternation of laminae shows a high degree of statistical scattering. Even though a small degree of cyclic periodicity could be defined, it was impossible to correlate individual layers from core to core across the bay. However, through a statistical handling of the plots, zones could be separated on the basis of the number of sand layers they contained. These more or less sandy zones clarified the bottom reflections seen in the echograph records from the area. The manner of facies change across the bay suggests that no strong bottom currents are active in Eckernförde Bay. The marked asymmetry between the north and south flanks of the profile can be attributed to the stronger action of waves on the more exposed areas. Grain size analyses were made on the more homogeneous units found in a core from the transition-facies zone. The results indicate that the most pronounced differences between layers appear in the silt range, and although the differences are slight, they are statistically significant. Layers rich in plant remains were wet-sieved in order to separate the plant detritus. This was then analyzed in a sediment settling balance and found to be hydrodynamically equivalent to a well-sorted, fine-grained sand. A special rhythmic cross-bedding type with dimensions in the millimeter range has been named 'crypto-cross-lamination' and is thought to represent rapid sedimentation in an area where only very weak bottom currents are present. It is found only in the deepest part of the basin. Relatively large sand grains, scattered within layers of clayey-silty matrix, appear to have been transported by flotation. Thin-section examination showed that in the inner part of Eckernförde Bay carbonate grains (e.g., foraminifer shells) were preserved throughout the cores, while in the outer part of the bay they were not present. Well-defined tracks and burrows are relatively rare in all of the facies in comparison to the generally strongly developed deformation burrowing. The application of specific measures of deformation burrowing made it possible to plot its intensity in profile for each core. A degree of regularity could be found in these burrowing intensity plots, with higher values appearing in the sandy facies, but with no clear differences between the sand and silt layers in the transition facies. Small sections in the profiles of the deepest part of the bay show no bioturbation at all.

Relevance: 100.00%

Abstract:

In this updated analysis of the EXPERT-C trial we show that, in magnetic resonance imaging-defined, high-risk, locally advanced rectal cancer, adding cetuximab to a treatment strategy with neoadjuvant CAPOX followed by chemoradiotherapy, surgery, and adjuvant CAPOX is not associated with a statistically significant improvement in progression-free survival (PFS) and overall survival (OS) in both KRAS/BRAF wild-type and unselected patients. In a retrospective biomarker analysis, TP53 was not prognostic but emerged as an independent predictive biomarker for cetuximab benefit. After a median follow-up of 65.0 months, TP53 wild-type patients (n = 69) who received cetuximab had a statistically significant better PFS (89.3% vs 65.0% at 5 years; hazard ratio [HR] = 0.23; 95% confidence interval [CI] = 0.07 to 0.78; two-sided P = .02 by Cox regression) and OS (92.7% vs 67.5% at 5 years; HR = 0.16; 95% CI = 0.04 to 0.70; two-sided P = .02 by Cox regression) than TP53 wild-type patients who were treated in the control arm. An interaction between TP53 status and cetuximab effect was found (P <.05) and remained statistically significant after adjusting for statistically significant prognostic factors and KRAS.
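For readers who want to see how a biomarker-by-treatment interaction of this kind is tested, the sketch below fits a Cox proportional hazards model with an interaction term using the lifelines library. The column names (cetuximab, tp53_wt) and the simulated survival data are assumptions for illustration, not the EXPERT-C dataset.

import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(0)
n = 160
df = pd.DataFrame({
    "cetuximab": rng.integers(0, 2, n),          # 1 = cetuximab arm
    "tp53_wt": rng.integers(0, 2, n),            # 1 = TP53 wild-type
})
# Simulate a treatment benefit confined to TP53 wild-type patients.
hazard = 0.05 * np.exp(-1.0 * df["cetuximab"] * df["tp53_wt"])
df["time"] = rng.exponential(1 / hazard)
df["event"] = rng.binomial(1, 0.8, n)            # some censoring
df["interaction"] = df["cetuximab"] * df["tp53_wt"]

cph = CoxPHFitter().fit(df, duration_col="time", event_col="event")
print(cph.summary[["exp(coef)", "p"]])           # hazard ratios and two-sided p-values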