2 results for questionnaire validation procedures

in DigitalCommons@The Texas Medical Center


Relevance:

30.00%

Abstract:

Objectives. Minimal Important Differences (MIDs) establish benchmarks for interpreting mean differences in clinical trials involving quality-of-life outcomes and inform discussions of clinically meaningful change in patient status. The purpose of this study was therefore to assess MIDs for the Functional Assessment of Cancer Therapy–Melanoma (FACT-M).

Methods. A prospective validation study of the FACT-M was performed with 273 patients with stage I to IV melanoma. FACT-M, Karnofsky Performance Status (KPS), and Eastern Cooperative Oncology Group Performance Status (ECOG-PS) scores were obtained at baseline and 3 months after enrollment. Anchor- and distribution-based methods were used to assess MIDs, and the correspondence between the MID ranges derived from each method was evaluated.

Results. This study indicates that approximate MID ranges for the FACT-M subscales are 5 to 8 points for the Trial Outcome Index, 4 to 5 points for the Melanoma Combined Subscale, 2 to 4 points for the Melanoma Subscale, and 1 to 2 points for the Melanoma Surgery Subscale. Each method produced similar but not identical MID ranges.

Conclusions. The properties of the anchor instrument used to derive MIDs directly affect the resulting MID ranges and point values. When MIDs are offered as supporting evidence of a clinically meaningful change, the anchor instrument used to derive the thresholds should be clearly stated, along with evidence supporting its choice as the most appropriate instrument for the domain of interest. In this analysis, the KPS was a more appropriate measure than the ECOG-PS for assessing MIDs.
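For context, the anchor- and distribution-based approaches named above are standard MID estimation techniques. The sketch below is a minimal Python illustration of one common form of each, not the study's actual procedure: the function names, the 0.5-SD fraction, the one-category anchor shift, and all data are hypothetical assumptions.

```python
import numpy as np

def distribution_based_mid(baseline_scores, fraction=0.5):
    """Distribution-based MID: a fixed fraction (commonly 0.5 or 0.33)
    of the baseline standard deviation of the scale."""
    return fraction * np.std(baseline_scores, ddof=1)

def anchor_based_mid(scale_change, anchor_change, minimal_shift=1):
    """Anchor-based MID: mean absolute change on the target scale among
    patients whose anchor (e.g., a performance-status category) moved by
    exactly one minimal level, in either direction."""
    moved_one_level = np.abs(anchor_change) == minimal_shift
    return np.mean(np.abs(scale_change[moved_one_level]))

# Entirely hypothetical data standing in for subscale scores at baseline
# and 3 months, plus a categorized change on the anchor instrument.
rng = np.random.default_rng(0)
baseline = rng.normal(70, 12, size=273)
followup = baseline + rng.normal(2, 6, size=273)
anchor_change = rng.integers(-2, 3, size=273)  # categories -2 .. +2

print("distribution-based MID:", distribution_based_mid(baseline))
print("anchor-based MID:", anchor_based_mid(followup - baseline, anchor_change))
```

The abstract's conclusion, that the choice of anchor instrument drives the resulting thresholds, corresponds here to the fact that `anchor_based_mid` depends entirely on how `anchor_change` is categorized.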

Relevance:

30.00%

Abstract:

Strategies are compared for the development of a linear regression model with stochastic (multivariate normal) regressor variables and the subsequent assessment of its predictive ability. Bias and mean squared error of four estimators of predictive performance are evaluated in samples simulated from 32 population correlation matrices. Models including all of the available predictors are compared with those obtained using selected subsets. The subset selection procedures investigated include two stopping rules, C_p and S_p, each combined with an 'all possible subsets' or 'forward selection' search of variables. The estimators of performance include parametric (MSEP_m) and non-parametric (PRESS) assessments in the entire sample, and two data-splitting estimates restricted to a random or balanced (Snee's DUPLEX) 'validation' half sample. The simulations were performed as a designed experiment, with population correlation matrices representing a broad range of data structures.

The techniques examined for subset selection do not generally result in improved predictions relative to the full model. Approaches using 'forward selection' result in slightly smaller prediction errors and less biased estimators of predictive accuracy than 'all possible subsets' approaches, but no differences are detected between the performances of C_p and S_p. In every case, the prediction errors of models obtained by subset selection in either of the half splits exceed those obtained using all predictors and the entire sample.

Only the random split estimator is conditionally (on β) unbiased; however, MSEP_m is unbiased on average, and PRESS is nearly so in unselected (fixed-form) models. When subset selection techniques are used, MSEP_m and PRESS always underestimate prediction errors, by as much as 27 percent (on average) in small samples. Despite their bias, the mean squared errors (MSE) of these estimators are at least 30 percent less than that of the unbiased random split estimator. The DUPLEX split estimator suffers from large MSE as well as bias and seems of little value in the context of stochastic regressor variables.

To maximize predictive accuracy while retaining a reliable estimate of that accuracy, it is recommended that the entire sample be used for model development and that a leave-one-out statistic (e.g., PRESS) be used for assessment.
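The recommended leave-one-out statistic, PRESS, can be computed for an ordinary least squares fit without refitting the model n times, using the identity PRESS = Σ_i (e_i / (1 − h_ii))², where e_i are the residuals and h_ii the hat-matrix diagonals. The following is a minimal sketch of that computation; the function name and the simulated stochastic regressors are illustrative assumptions, not material from the dissertation.

```python
import numpy as np

def press_statistic(X, y):
    """Leave-one-out PRESS for ordinary least squares, computed without
    refitting: PRESS = sum_i (e_i / (1 - h_ii))^2, where e_i are the OLS
    residuals and h_ii the diagonal entries of the hat matrix."""
    X1 = np.column_stack([np.ones(len(y)), X])      # add intercept column
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)   # OLS coefficients
    residuals = y - X1 @ beta
    # Hat-matrix diagonal: h_ii = x_i (X'X)^{-1} x_i'
    XtX_inv = np.linalg.inv(X1.T @ X1)
    h = np.einsum('ij,jk,ik->i', X1, XtX_inv, X1)
    return np.sum((residuals / (1 - h)) ** 2)

# Hypothetical example with stochastic (multivariate normal) regressors,
# mirroring the setting described in the abstract.
rng = np.random.default_rng(1)
X = rng.multivariate_normal(np.zeros(3), np.eye(3), size=40)
y = X @ np.array([1.0, 0.5, 0.0]) + rng.normal(0, 1, size=40)
print("PRESS:", press_statistic(X, y))
```

Because each held-out residual is recovered analytically from a single full-sample fit, this estimator uses the entire sample for model development while still approximating out-of-sample error, which is exactly the trade-off the abstract's recommendation is aimed at.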