14 results for predictive model

in DigitalCommons@The Texas Medical Center


Relevance:

100.00%

Publisher:

Abstract:

The human cytochrome P450 3A (CYP3A) subfamily is responsible for most of the metabolism of therapeutic drugs; however, an adequate in vivo model has yet to be discovered. This study begins with an investigation of a controversial topic surrounding the human CYP3As--estrogen regulation. A novel approach to this topic was used by defining expression in the estrogen-responsive endometrium. This study shows that estrogen down-regulates CYP3A4 expression in the endometrium. On the other hand, analogous studies showed an increase in CYP3A expression with increasing age in liver tissue. Following the discussion of estrogen regulation, an investigation of the cross-species relationships among all of the CYP3As was completed. The study compares isoforms from piscines, avians, rodents, canines, ovines, bovines, and primates. Using traditional phylogenetic analyses and a novel approach based on exon and intron lengths, the results show that only another primate could be the best animal model for analysis of the regulation of the expression of the human CYP3As. This analysis also demonstrated that the chimpanzee seems to be the best available human model. Moreover, the study showed the presence of an additional isoform in the chimpanzee genome that is absent in humans. Based on these results, initial characterization of the chimpanzee CYP3A subfamily was begun. While the human genome contains four isoforms--CYP3A4, CYP3A5, CYP3A7, and CYP3A43--the chimpanzee genome has five: the four previously mentioned and CYP3A67. Both species express CYP3A4, CYP3A5, and CYP3A43, but humans express CYP3A7 while chimpanzees express CYP3A67. In humans, CYP3A4 is expressed at higher levels than the other isoforms, but some chimpanzee individuals express CYP3A67 at higher levels than CYP3A4. Such a difference is expected to significantly alter total CYP3A metabolism.
On the other hand, any study considering individual isoforms would still constitute a valid method of study for the human CYP3A4, CYP3A5, and CYP3A43 isoforms.

Relevance:

70.00%

Publisher:

Abstract:

The first manuscript, entitled "Time-Series Analysis as Input for Clinical Predictive Modeling: Modeling Cardiac Arrest in a Pediatric ICU," lays out the theoretical background for the project. There are several core concepts presented in this paper. First, traditional multivariate models (where each variable is represented by only one value) provide single point-in-time snapshots of patient status: they are incapable of characterizing deterioration. Since deterioration is consistently identified as a precursor to cardiac arrests, we maintain that the traditional multivariate paradigm is insufficient for predicting arrests. We identify time series analysis as a method capable of characterizing deterioration in an objective, mathematical fashion, and describe how to build a general foundation for predictive modeling using time series analysis results as latent variables. Building a solid foundation for any given modeling task involves addressing a number of issues during the design phase. These include selecting the proper candidate features on which to base the model, and selecting the most appropriate tool to measure them. We also identified several unique design issues that are introduced when time series data elements are added to the set of candidate features. One such issue is defining the duration and resolution of time series elements required to sufficiently characterize the time series phenomena being considered as candidate features for the predictive model. Once the duration and resolution are established, there must also be explicit mathematical or statistical operations that produce the time series analysis result to be used as a latent candidate feature. In synthesizing the comprehensive framework for building a predictive model based on time series data elements, we identified at least four classes of data that can be used in the model design.
The first two classes are shared with traditional multivariate models: multivariate data and clinical latent features. Multivariate data is represented by the standard one-value-per-variable paradigm and is widely employed in a host of clinical models and tools. These are often represented by a number present in a given cell of a table. Clinical latent features are derived, rather than directly measured, data elements that more accurately represent a particular clinical phenomenon than any of the directly measured data elements in isolation. The other two classes are unique to the time series data elements. The first of these is the raw data elements. These are represented by multiple values per variable, and constitute the measured observations that are typically available to end users when they review time series data. These are often represented as dots on a graph. The final class of data results from performing time series analysis. This class of data represents the fundamental concept on which our hypothesis is based. The specific statistical or mathematical operations are up to the modeler to determine, but we generally recommend that a variety of analyses be performed in order to maximize the likelihood that a representation of the time series data elements is produced that is able to distinguish between two or more classes of outcomes. The second manuscript, entitled "Building Clinical Prediction Models Using Time Series Data: Modeling Cardiac Arrest in a Pediatric ICU," provides a detailed description, start to finish, of the methods required to prepare the data, build, and validate a predictive model that uses the time series data elements determined in the first paper. One of the fundamental tenets of the second paper is that manual implementations of time series based models are infeasible due to the relatively large number of data elements and the complexity of preprocessing that must occur before data can be presented to the model.
Each of the seventeen steps is analyzed from the perspective of how it may be automated, when necessary. We identify the general objectives and available strategies of each of the steps, and we present our rationale for choosing a specific strategy for each step in the case of predicting cardiac arrest in a pediatric intensive care unit. Another issue brought to light by the second paper is that the individual steps required to use time series data for predictive modeling are more numerous and more complex than those used for modeling with traditional multivariate data. Even after complexities attributable to the design phase (addressed in our first paper) have been accounted for, the management and manipulation of the time series elements (the preprocessing steps in particular) are issues that are not present in a traditional multivariate modeling paradigm. In our methods, we present the issues that arise from the time series data elements: defining a reference time; imputing and reducing time series data in order to conform to a predefined structure that was specified during the design phase; and normalizing variable families rather than individual variable instances. The final manuscript, entitled "Using Time-Series Analysis to Predict Cardiac Arrest in a Pediatric Intensive Care Unit," presents the results that were obtained by applying the theoretical construct and its associated methods (detailed in the first two papers) to the case of cardiac arrest prediction in a pediatric intensive care unit. Our results showed that utilizing the trend analysis from the time series data elements reduced the number of classification errors by 73%. The area under the Receiver Operating Characteristic curve increased from a baseline of 87% to 98% by including the trend analysis.
In addition to the performance measures, we were also able to demonstrate that adding raw time series data elements without their associated trend analyses improved classification accuracy as compared to the baseline multivariate model, but diminished classification accuracy as compared to when just the trend analysis features were added (i.e., without adding the raw time series data elements). We believe this phenomenon was largely attributable to overfitting, which is known to increase as the ratio of candidate features to class examples rises. Furthermore, although we employed several feature reduction strategies to counteract the overfitting problem, they failed to improve the performance beyond that which was achieved by exclusion of the raw time series elements. Finally, our data demonstrated that pulse oximetry and systolic blood pressure readings tend to start diminishing about 10-20 minutes before an arrest, whereas heart rates tend to diminish rapidly less than 5 minutes before an arrest.
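As a hedged illustration of the kind of time-series analysis result described above (not the dissertation's actual feature pipeline), the trend over a window of raw vital-sign samples can be reduced to a single latent feature, e.g. a least-squares slope:

```python
import numpy as np

# Hypothetical sketch: turn a window of raw vital-sign samples into one
# trend feature (the least-squares slope), the kind of time series
# analysis result used as a latent candidate feature.
def trend_feature(values, times):
    """Least-squares slope of `values` over `times` (units per minute)."""
    slope, _intercept = np.polyfit(times, values, deg=1)
    return slope

# e.g. systolic blood pressure sampled once a minute over a 10-minute window
times = np.arange(10)                          # minutes
sbp = np.array([112, 111, 109, 108, 107, 105, 104, 102, 101, 99])
print(trend_feature(sbp, times))               # negative slope -> deteriorating
```

A persistently negative slope in systolic blood pressure or pulse oximetry over such a window is exactly the sort of deterioration signal the trend analysis features are meant to capture.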

Relevance:

60.00%

Publisher:

Abstract:

The prognosis for lung cancer patients remains poor. Five-year survival rates have been reported to be 15%. Studies have shown that dose escalation to the tumor can lead to better local control and subsequently better overall survival. However, the dose to the lung tumor is limited by normal tissue toxicity. The most prevalent thoracic toxicity is radiation pneumonitis. In order to determine a safe dose that can be delivered to the healthy lung, researchers have turned to mathematical models predicting the rate of radiation pneumonitis. However, these models rely on simple metrics based on the dose-volume histogram and are not yet accurate enough to be used for dose escalation trials. The purpose of this work was to improve the fit of predictive risk models for radiation pneumonitis and to show the dosimetric benefit of using the models to guide patient treatment planning. The study was divided into 3 specific aims. The first two specific aims were focused on improving the fit of the predictive model. In Specific Aim 1, we incorporated information about the spatial location of the lung dose distribution into a predictive model. In Specific Aim 2, we incorporated ventilation-based functional information into a predictive pneumonitis model. In the third specific aim, a proof-of-principle virtual simulation was performed in which a model-determined limit was used to scale the prescription dose. The data showed that for our patient cohort, the fit of the model to the data was not improved by incorporating spatial information. Although we were not able to achieve a significant improvement in model fit using pre-treatment ventilation, we show some promising results indicating that ventilation imaging can provide useful information about lung function in lung cancer patients.
The virtual simulation trial demonstrated that using a personalized lung dose limit derived from a predictive model will result in a different prescription than what was achieved with the clinically used plan, thus showing the utility of a normal tissue toxicity model in personalizing the prescription dose.

Relevance:

60.00%

Publisher:

Abstract:

Coronary artery disease (CAD) is a multifactorial disease process involving behavioral, inflammatory, clinical, thrombotic, and genetic components. Previous epidemiologic studies focused on identifying behavioral and demographic risk factors of CAD, but none focused on platelets. Current platelet literature has not established the effects of platelet function and platelet receptor polymorphisms on CAD. This case-control analysis addressed these issues by analyzing data collected for a previous study. Cases were individuals who had undergone CABG and thus had been diagnosed with CAD, while the controls were volunteers presumed to be CAD free. The platelet function variables analyzed included fibrinogen, Von Willebrand Factor activity (VWF), shear-induced platelet aggregation (SIPA), sCD40L, and mean platelet volume; the platelet polymorphisms studied included PIA, α2 807, Ko, Kozak, and VNTR. Univariate analysis found fibrinogen, VWF, SIPA, and PIA to be independent risk factors of CAD. Logistic regression was used to build a predictive model for CAD using the platelet function and platelet polymorphism data adjusted for age, sex, race, and current smoking status. A model containing only platelet polymorphisms and their respective receptor densities found polymorphisms within GPIbα to be associated with CAD, yielding an 86% (95% C.I. 0.97–3.55) increased risk with the presence of at least 1 polymorphism in Ko, Kozak, or VNTR. Another model included both platelet function and platelet polymorphism data. Fibrinogen, the receptor density of GPIbα, and the polymorphism in GPIa-IIa (α2 807) were all associated with CAD, with odds ratios of 1.10, 1.04, and 2.30 for fibrinogen (10 mg/dl increase), GPIbα receptors (1 MFI increase), and GPIa-IIa, respectively. In addition, risk estimates and 99% confidence intervals adjusted for race were calculated to determine if the presence of a platelet receptor polymorphism was associated with CAD.
The results were as follows: PIA (1.64, 0.74–3.65); α2 807 (1.35, 0.77–2.37); Ko (1.71, 0.70–4.16); Kozak (1.17, 0.54–2.52); and VNTR (1.24, 0.52–2.91). Although not statistically significant, all platelet polymorphisms were associated with an increased risk for CAD. These exploratory findings indicate that platelets do appear to have a role in atherosclerosis and that anti-platelet drugs targeting GPIa-IIa and GPIbα may be better treatment candidates for individuals with CAD.
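For readers interpreting the per-unit odds ratios above, note that they compound multiplicatively over larger increments. A minimal sketch (the 1.10-per-10 mg/dl figure is taken from the text; the 30 mg/dl increment is an arbitrary example):

```python
# Sketch: how a reported per-unit odds ratio scales with larger changes.
# An OR of 1.10 per 10 mg/dl of fibrinogen compounds multiplicatively.
def scaled_odds_ratio(or_per_unit, n_units):
    """Odds ratio for a change of n_units, given the per-unit odds ratio."""
    return or_per_unit ** n_units

# A 30 mg/dl fibrinogen increase corresponds to 1.10 ** 3 ≈ 1.331
print(round(scaled_odds_ratio(1.10, 3), 3))
```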

Relevance:

60.00%

Publisher:

Abstract:

Coronary artery bypass graft (CABG) surgery is among the most common operations performed in the United States and accounts for more resources expended in cardiovascular medicine than any other single procedure. CABG surgery patients initially recover in the Cardiovascular Intensive Care Unit (CVICU). The post-procedure CVICU length of stay (LOS) goal is two days or less. A longer ICU LOS is associated with a prolonged hospital LOS, poor health outcomes, greater use of limited resources, and increased medical costs. Research has shown that experienced clinicians can predict LOS no better than chance. Current CABG surgery LOS risk models differ greatly in generalizability and ease of use in the clinical setting. A predictive model that identified modifiable pre- and intra-operative risk factors for CVICU LOS greater than two days could have major public health implications as modification of these identified factors could decrease CVICU LOS and potentially minimize morbidity and mortality, optimize use of limited health care resources, and decrease medical costs. The primary aim of this study was to identify modifiable pre-and intra-operative predictors of CVICU LOS greater than two days for CABG surgery patients with cardiopulmonary bypass (CPB). A secondary aim was to build a probability equation for CVICU LOS greater than two days. Data were extracted from 416 medical records of CABG surgery patients with CPB, 50 to 80 years of age, recovered in the CVICU of a large teaching, referral hospital in southeastern Texas, during the calendar year 2004 and the first quarter of 2005. Exclusion criteria included Diagnosis Related Group (DRG) 106, CABG surgery without CPB, CABG surgery with other procedures, and operative deaths. The data were analyzed using multivariate logistic regression for an alpha=0.05, power=0.80, and correlation=0.26.
This study found age, history of peripheral arterial disease, and total operative time equal to or greater than four hours to be independent predictors of CVICU LOS greater than two days. The probability of CVICU LOS greater than two days can be calculated from the following equation: -2.872941 + 0.0323081 (age in years) + 0.8177223 (history of peripheral arterial disease) + 0.70379 (operative time ≥ 4 hours).
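A minimal sketch of applying the reported equation, assuming the published coefficients define the linear predictor (logit) of a logistic model and that the two indicator variables are coded 0/1 (the codings and the logistic transform are our reading, not spelled out in the abstract):

```python
import math

# Sketch only: coefficients are from the abstract; the 0/1 indicator
# codings and the logistic transform are assumptions about the model form.
def prob_long_stay(age_years, has_pad, op_time_ge_4h):
    """Estimated probability of CVICU LOS > two days."""
    logit = (-2.872941
             + 0.0323081 * age_years
             + 0.8177223 * (1 if has_pad else 0)
             + 0.70379 * (1 if op_time_ge_4h else 0))
    return 1.0 / (1.0 + math.exp(-logit))   # logistic transform of the logit

print(prob_long_stay(65, has_pad=True, op_time_ge_4h=True))
```

Under these assumptions, a 65-year-old with peripheral arterial disease and a four-hour operation has a logit of about 0.74, i.e., roughly a two-thirds probability of a CVICU stay longer than two days.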

Relevance:

60.00%

Publisher:

Abstract:

Tuberculosis (TB) is an infectious disease of great public health importance, particularly to institutions that provide health care to large numbers of TB patients, such as Parkland Hospital in Dallas, TX. The purpose of this retrospective chart review was to analyze differences between TB positive and TB negative patients to better understand whether there were variables that could be used to develop a predictive model for use in the emergency department to reduce the overall number of suspected TB patients being sent to respiratory isolation for TB testing. This study included patients who presented to the Parkland Hospital emergency department between November 2006 and December 2007 and were isolated and tested for TB. Outcome of TB was defined as a positive sputum AFB test or a positive M. tuberculosis culture result. Data were collected utilizing the UT Southwestern Medical Center computerized database OACIS and included demographic information, TB risk factors, physical symptoms, and clinical results. Only two variables were significantly (P<0.05) related to TB outcome: dyspnea (shortness of breath) (P<0.001) and abnormal x-ray (P<0.001). Marginally significant variables included hemoptysis (P=0.06), weight loss (P=0.11), night sweats (P=0.20), history of homelessness or incarceration (P=0.15), and history of positive skin PPD (P=0.19). Using a combination of significant and marginally significant variables, a predictive model was designed that demonstrated a specificity of 24% and a sensitivity of 70%. In conclusion, a predictive model for TB outcome based on patients who presented to the Parkland Hospital emergency department between November 2006 and December 2007 was unsuccessful given the limited number of variables that differed significantly between TB positive and TB negative patients. It is suggested that a future prospective cohort study be implemented to collect data on TB positive and TB negative patients.
A more thorough prospective collection of data may lead to clearer comparisons between TB positive and TB negative patients and ultimately to the design of a more sensitive predictive model for TB outcome.
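The two screening metrics reported for the model can be stated precisely. A small sketch using hypothetical confusion-matrix counts (chosen only to reproduce the reported 70%/24% figures; they are not the study's data):

```python
# Minimal sketch of the two screening metrics reported for the TB model.
def sensitivity(tp, fn):
    """Fraction of true TB cases the model flags (TP / (TP + FN))."""
    return tp / (tp + fn)

def specificity(tn, fp):
    """Fraction of TB-free patients the model clears (TN / (TN + FP))."""
    return tn / (tn + fp)

# Illustrative counts only, not the study's data:
tp, fn, tn, fp = 7, 3, 24, 76
print(sensitivity(tp, fn))   # 0.7  -> the reported 70% sensitivity
print(specificity(tn, fp))   # 0.24 -> the reported 24% specificity
```

The low specificity is why the model failed its goal: most TB-free patients would still be sent to respiratory isolation.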

Relevance:

60.00%

Publisher:

Abstract:

Privately practicing health care practitioners, such as physicians, dentists, and optometrists, are facing increasing competitive pressures as the health care industry undergoes significant structural change. The eye care field has been affected by this change, and one result has been the establishment of consultation/comanagement centers for optometrists. These centers, staffed primarily by an ophthalmologist, serve community optometrists as a secondary ophthalmic care center and are altering the traditional optometric-ophthalmologic referral system. This study was designed to examine the response of optometrists to the formation of a center by measuring the amount and type of optometric participation in a center and identifying factors affecting participation. A predictive model was specified to determine the probability of center use by practitioners. The results showed that the establishment of a center in a community did not result in its usage by all practitioners, though there were specific practice (organizational) and practitioner (decision-maker) variables that could be used to predict use. Three practice variables and four practitioner variables were found to be important in influencing center use.

Relevance:

40.00%

Publisher:

Abstract:

With substance abuse treatment expanding in prisons and jails, understanding how behavior change interacts with a restricted setting becomes more essential. The Transtheoretical Model (TTM) has been used to understand intentional behavior change in unrestricted settings; however, evidence indicates restrictive settings can affect the measurement and structure of the TTM constructs. The present study examined data from problem drinkers at baseline and end-of-treatment from three studies: (1) Project CARE (n = 187) recruited inmates from a large county jail; (2) Project Check-In (n = 116) recruited inmates from a state prison; (3) Project MATCH, a large multi-site alcohol study, had two recruitment arms, aftercare (n = 724 pre-treatment and 650 post-treatment) and outpatient (n = 912 pre-treatment and 844 post-treatment). The analyses were conducted using cross-sectional data to test for non-invariance of measures of the TTM constructs: readiness, confidence, temptation, and processes of change (Structural Equation Modeling, SEM) across restricted and unrestricted settings. Two restricted groups (jail and aftercare) and one unrestricted group (outpatient) entering treatment, and one restricted group (prison) and two unrestricted groups (aftercare and outpatient) at end-of-treatment, were contrasted. In addition, TTM end-of-treatment profiles were tested as predictors of 12-month drinking outcomes (Profile Analysis). Although SEM did not indicate structural differences in the overall TTM construct model across setting types, there were factor structure differences on the confidence and temptation constructs at pre-treatment and in the factor structure of the behavioral processes at the end-of-treatment. For pre-treatment temptation and confidence, differences were found in the social situations factor loadings and in the variance for the confidence and temptation latent factors.
For the end-of-treatment behavioral processes, differences across the restricted and unrestricted settings were identified in the counter-conditioning and stimulus control factor loadings. The TTM end-of-treatment profiles were not predictive of drinking outcomes in the prison sample. Both pre- and post-treatment differences in structure across setting types involved constructs operationalized with behaviors that are limited for those in restricted settings. These studies suggest the TTM is a viable model for explicating addictive behavior change in restricted settings but call for modification of subscale items that refer to specific behaviors and caution in interpreting the mean differences across setting types for problem drinkers.

Relevance:

30.00%

Publisher:

Abstract:

Empirical evidence and theoretical studies suggest that the phenotype, i.e., cellular- and molecular-scale dynamics, including proliferation rate and adhesiveness due to microenvironmental factors and gene expression that govern tumor growth and invasiveness, also determine gross tumor-scale morphology. It has been difficult to quantify the relative effect of these links on disease progression and prognosis using conventional clinical and experimental methods and observables. As a result, successful individualized treatment of highly malignant and invasive cancers, such as glioblastoma, via surgical resection and chemotherapy cannot be offered and outcomes are generally poor. What is needed is a deterministic, quantifiable method to enable understanding of the connections between phenotype and tumor morphology. Here, we critically assess advantages and disadvantages of recent computational modeling efforts (e.g., continuum, discrete, and cellular automata models) that have pursued this understanding. Based on this assessment, we review a multiscale, i.e., from the molecular to the gross tumor scale, mathematical and computational "first-principle" approach based on mass conservation and other physical laws, such as employed in reaction-diffusion systems. Model variables describe known characteristics of tumor behavior, and parameters and functional relationships across scales are informed from in vitro, in vivo and ex vivo biology. We review the feasibility of this methodology that, once coupled to tumor imaging and tumor biopsy or cell culture data, should enable prediction of tumor growth and therapy outcome through quantification of the relation between the underlying dynamics and morphological characteristics. 
In particular, morphologic stability analysis of this mathematical model reveals that tumor cell patterning at the tumor-host interface is regulated by cell proliferation, adhesion, and other phenotypic characteristics: histopathology information on the tumor boundary can be input into the mathematical model and used as a phenotype-diagnostic tool to predict collective and individual tumor cell invasion of surrounding tissue. This approach further provides a means to deterministically test the effects of novel and hypothetical therapy strategies on tumor behavior.
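As a toy illustration of the reaction-diffusion class of model discussed here (a 1-D Fisher-KPP sketch with made-up parameters, far simpler than the multiscale model reviewed), tumor cell density can be evolved by an explicit finite-difference step combining diffusion with logistic proliferation:

```python
import numpy as np

# Toy 1-D reaction-diffusion sketch (Fisher-KPP form): tumor cell density
# u evolves by diffusion (motility) plus logistic proliferation.
# All parameter values are illustrative, not taken from the reviewed model.
D, rho = 0.1, 1.0        # diffusion coefficient, proliferation rate
dx, dt = 0.5, 0.1        # grid spacing, time step (satisfies dt < dx**2 / (2*D))
x = np.arange(0.0, 20.0, dx)
u = np.where(x < 2.0, 1.0, 0.0)   # small initial tumor mass at the left edge

for _ in range(100):              # integrate to t = 10 (periodic boundaries)
    lap = (np.roll(u, 1) - 2 * u + np.roll(u, -1)) / dx**2
    u = u + dt * (D * lap + rho * u * (1 - u))
```

The front advances at a speed set by the motility and proliferation parameters, which is exactly the kind of phenotype-to-morphology link the stability analysis quantifies.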

Relevance:

30.00%

Publisher:

Abstract:

A historical prospective study was designed to assess the mean weight status of subjects who participated in a behavioral weight reduction program in 1983 and to determine whether there was an association between the dependent variable, weight change, and any of 31 independent variables after a 2-year follow-up period. Data were obtained by abstracting the subjects' records and from a follow-up questionnaire administered 2 years following program participation. Five hundred nine subjects (386 females and 123 males) of the 1460 subjects who participated in the program completed and returned the questionnaire. Results showed that mean weight was significantly different (p < 0.001) between the measurement at baseline and after the 2-year follow-up period. The mean weight loss of the group was 5.8 pounds: 10.7 pounds for males and 4.2 pounds for females after the 2-year follow-up period. A total of 63.9% of the group, 69.9% of males and 61.9% of females, were still below their initial weight after the 2-year follow-up period. Sixteen of the 31 variables assessed utilizing bivariate analyses were found to be significantly (p ≤ 0.05) associated with weight change after the 2-year follow-up period. These variables were then entered into a multivariate linear regression model. A total of 37.9% of the variance of the dependent variable, weight change, was accounted for by all 16 variables. Eight of these variables were found to be significantly (p ≤ 0.05) predictive of weight change in the stepwise multivariate process, accounting for 37.1% of the variance. These variables included two baseline variables (percent over ideal body weight at enrollment and occupation) and six follow-up variables (feeling in control of eating habits, percent of body weight lost during treatment, frequency of weight measurement, physical activity, eating in response to emotions, and number of pounds of weight gain needed to resume a diet).
It was concluded that greater emphasis should be placed on the six follow-up variables by clinicians involved in the treatment of obesity, and by the subjects themselves, to enhance their chances of success at long-term weight loss.

Relevance:

30.00%

Publisher:

Abstract:

Ordinal outcomes are frequently employed in diagnosis and clinical trials. Clinical trials of Alzheimer's disease (AD) treatments are a case in point, using the status of mild, moderate, or severe disease as outcome measures. As in many other outcome-oriented studies, the disease status may be misclassified. This study estimates the extent of misclassification in an ordinal outcome such as disease status. Also, this study estimates the extent of misclassification of a predictor variable such as genotype status. An ordinal logistic regression model is commonly used to model the relationship between disease status, the effect of treatment, and other predictive factors. A simulation study was done. First, data based on a set of hypothetical parameters and hypothetical rates of misclassification were created. Next, the maximum likelihood method was employed to generate likelihood equations accounting for misclassification. The Nelder-Mead Simplex method was used to solve for the misclassification and model parameters. Finally, this method was applied to an AD dataset to detect the amount of misclassification present. The estimates of the ordinal regression model parameters were close to the hypothetical parameters: β1 was hypothesized at 0.50 and the mean estimate was 0.488; β2 was hypothesized at 0.04 and the mean of the estimates was 0.04. Although the estimates for the rates of misclassification of X1 were not as close as β1 and β2, they validate this method. X1's 0-to-1 misclassification was hypothesized as 2.98% and the mean of the simulated estimates was 1.54%; in the best case, the misclassification of k from high to medium was hypothesized at 4.87% and had a sample mean of 3.62%. In the AD dataset, the estimate for the odds ratio of X1 (having both copies of the APOE 4 allele) changed from 1.377 to 1.418, demonstrating that the estimates of the odds ratio change when the analysis includes adjustment for misclassification.
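The Nelder-Mead step can be illustrated with a toy problem (not the dissertation's misclassification-adjusted likelihood): SciPy's simplex implementation minimizing the negative log-likelihood of a two-parameter logistic model on made-up data. Both the data and the simplified model here are assumptions for illustration only.

```python
import numpy as np
from scipy.optimize import minimize

# Made-up data; the real study fit an ordinal logistic model with
# misclassification terms, which has a more involved likelihood.
x = np.array([0.5, 1.2, 1.9, 2.6, 3.3, 4.0, 4.7, 5.4])
y = np.array([0, 0, 0, 1, 0, 1, 1, 1])

def neg_log_likelihood(theta):
    """Negative log-likelihood of a binary logistic model."""
    b0, b1 = theta
    p = 1.0 / (1.0 + np.exp(-(b0 + b1 * x)))
    p = np.clip(p, 1e-12, 1 - 1e-12)   # guard against log(0)
    return -np.sum(y * np.log(p) + (1 - y) * np.log(1 - p))

# Derivative-free simplex search, the same estimation strategy named above
res = minimize(neg_log_likelihood, x0=[0.0, 0.0], method="Nelder-Mead",
               options={"xatol": 1e-8, "fatol": 1e-8, "maxiter": 2000})
b0_hat, b1_hat = res.x
```

Nelder-Mead needs only likelihood evaluations, not gradients, which is why it suits likelihoods with misclassification terms that are awkward to differentiate.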

Relevance:

30.00%

Publisher:

Abstract:

Breast cancer is the most common non-skin cancer and the second leading cause of cancer-related death in women in the United States. Studies on ipsilateral breast tumor relapse (IBTR) status and disease-specific survival will help guide clinic treatment and predict patient prognosis. After breast conservation therapy, patients with breast cancer may experience breast tumor relapse. This relapse is classified into two distinct types: true local recurrence (TR) and new ipsilateral primary tumor (NP). However, the methods used to classify the relapse types are imperfect and are prone to misclassification. In addition, some observed survival data (e.g., time to relapse and time from relapse to death) are strongly correlated with relapse types. The first part of this dissertation presents a Bayesian approach to (1) modeling the potentially misclassified relapse status and the correlated survival information, (2) estimating the sensitivity and specificity of the diagnostic methods, and (3) quantifying the covariate effects on event probabilities. A shared frailty was used to account for the within-subject correlation between survival times. The inference was conducted using a Bayesian framework via Markov Chain Monte Carlo simulation implemented in the software WinBUGS. Simulation was used to validate the Bayesian method and assess its frequentist properties. The new model has two important innovations: (1) it utilizes the additional survival times correlated with the relapse status to improve the parameter estimation, and (2) it provides tools to address the correlation between the two diagnostic methods conditional to the true relapse types. Prediction of patients at highest risk for IBTR after local excision of ductal carcinoma in situ (DCIS) remains a clinical concern.
The goals of the second part of this dissertation were to evaluate a published nomogram from Memorial Sloan-Kettering Cancer Center, to determine the risk of IBTR in patients with DCIS treated with local excision, and to determine whether there is a subset of patients at low risk of IBTR. Patients who had undergone local excision from 1990 through 2007 at MD Anderson Cancer Center with a final diagnosis of DCIS (n=794) were included in this part. Clinicopathologic factors and the performance of the Memorial Sloan-Kettering Cancer Center nomogram for prediction of IBTR were assessed for 734 patients with complete data. The nomogram's predictions of 5- and 10-year IBTR probabilities were found to demonstrate imperfect calibration and discrimination, with an area under the receiver operating characteristic curve of .63 and a concordance index of .63. In conclusion, predictive models for IBTR in DCIS patients treated with local excision are imperfect. Our current ability to accurately predict recurrence based on clinical parameters is limited. The American Joint Committee on Cancer (AJCC) staging of breast cancer is widely used to determine prognosis, yet survival within each AJCC stage shows wide variation and remains unpredictable. For the third part of this dissertation, biologic markers were hypothesized to be responsible for some of this variation, and the addition of biologic markers to current AJCC staging was examined as a possible means of improved prognostication. The initial cohort included patients treated with surgery as first intervention at MDACC from 1997 to 2006. Cox proportional hazards models were used to create prognostic scoring systems. AJCC pathologic staging parameters and biologic tumor markers were investigated to devise the scoring systems. Surveillance Epidemiology and End Results (SEER) data was used as the external cohort to validate the scoring systems.
Binary indicators for pathologic stage (PS), estrogen receptor status (E), and tumor grade (G) were summed to create PS+EG scoring systems devised to predict 5-year patient outcomes. These scoring systems facilitated separation of the study population into more refined subgroups than the current AJCC staging system. The ability of the PS+EG score to stratify outcomes was confirmed in both internal and external validation cohorts. The current study proposes and validates a new staging system that incorporates tumor grade and ER status into current AJCC staging. We recommend that biologic markers be incorporated into revised versions of the AJCC staging system for patients receiving surgery as the first intervention.

Chapter 1 develops the Bayesian method for handling misclassified relapse status and applies it to the breast cancer data. Chapter 2 evaluates the breast cancer nomogram for predicting the risk of IBTR in patients with DCIS after local excision. Chapter 3 validates the novel staging system for disease-specific survival in patients with breast cancer treated with surgery as the first intervention.
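The PS+EG construction described above can be sketched as a sum of binary adverse-feature indicators (the point values and field names here are illustrative, not the published weights):

```python
def ps_eg_score(stage_points, er_negative, high_grade):
    """Illustrative PS+EG score: pathologic-stage points plus one point
    each for ER-negative status and high tumor grade."""
    return stage_points + int(er_negative) + int(high_grade)

def stratify(patients):
    """Group patient ids by total score, yielding finer prognostic
    subgroups than pathologic stage alone."""
    groups = {}
    for p in patients:
        s = ps_eg_score(p["stage_points"], p["er_negative"], p["high_grade"])
        groups.setdefault(s, []).append(p["id"])
    return groups
```

Summing indicators lets two patients with the same pathologic stage land in different risk groups once biology is taken into account, which is the refinement the PS+EG system provides over AJCC staging alone.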

Relevance:

30.00%

Publisher:

Abstract:

Developing a Model

Interruption is a known human factor that contributes to errors and catastrophic events in healthcare as well as other high-risk industries. The landmark Institute of Medicine (IOM) report, To Err Is Human, brought attention to the significance of preventable errors in medicine and suggested that interruptions could be a contributing factor. Previous studies of interruptions in healthcare did not offer a conceptual model by which to study them. Given the serious consequences of interruptions investigated in other high-risk industries, there is a need for a model to describe, understand, explain, and predict interruptions and their consequences in healthcare. Therefore, the purpose of this study was to develop a model grounded in the literature and to use it to describe and explain interruptions in healthcare, specifically those occurring in a Level One Trauma Center. A trauma center was chosen because this environment is characterized as intense, unpredictable, and interrupt-driven. The first step in developing the model was a review of the literature, which revealed that the concept of interruption did not have a consistent definition in either the healthcare or non-healthcare literature. Walker and Avant's method of concept analysis was used to clarify and define the concept. The analysis led to the identification of five defining attributes: (1) a human experience, (2) an intrusion of a secondary, unplanned, and unexpected task, (3) discontinuity, (4) externally or internally initiated, and (5) situated within a context. However, before an interruption can commence, five conditions known as antecedents must occur.
For an interruption to take place, (1) an intent to interrupt is formed by the initiator, (2) a physical signal must pass a threshold test of detection by the recipient, (3) the sensory system of the recipient is stimulated to respond to the initiator, (4) an interruption task is presented to the recipient, and (5) the interruption task is either accepted or rejected by the recipient. An interruption was determined to be quantifiable by (1) the frequency of occurrence of interruptions, (2) the number of times the primary task has been suspended to perform an interrupting task, (3) the length of time the primary task has been suspended, and (4) the frequency of returning or not returning to the primary task. As a result of the concept analysis, a definition of an interruption was derived from the literature: an interruption is a break in the performance of a human activity, initiated internally or externally to the recipient and occurring within the context of a setting or location. This break results in the suspension of the initial task through the performance of an unplanned task, with the assumption that the initial task will be resumed. The definition is inclusive of all the defining attributes of an interruption and can serve as a standard definition for the healthcare industry. From the definition, a visual model of an interruption was developed. The model was used to describe and explain the interruptions recorded in an instrumental case study of physicians and registered nurses (RNs) working in a Level One Trauma Center. Five physicians were observed for a total of 29 hours, 31 minutes. Eight registered nurses were observed for a total of 40 hours, 9 minutes. Observations were made on either the 0700–1500 or the 1500–2300 shift using the shadowing technique and were recorded as field notes. The field notes were analyzed by a hybrid method of categorizing activities and interruptions.
The method combined a deductive a priori classification framework with the inductive process of line-by-line coding and constant comparison as described in Grounded Theory. The following categories were identified as relevant to this study:

Intended Recipient – the person to be interrupted
Unintended Recipient – not the intended recipient of an interruption; e.g., receiving a phone call that was incorrectly dialed
Indirect Recipient – the incidental recipient of an interruption; e.g., talking with another person, thereby suspending the original activity
Recipient Blocked – the intended recipient does not accept the interruption
Recipient Delayed – the intended recipient postpones an interruption
Self-interruption – a person, independent of another person, suspends one activity to perform another; e.g., while walking, stops abruptly and talks to another person
Distraction – briefly disengaging from a task
Organizational Design – the physical layout of the workspace that causes a disruption in workflow
Artifacts Not Available – supplies and equipment that are not available in the workspace, causing a disruption in workflow
Initiator – a person who initiates an interruption

Interruption by Organizational Design and Artifacts Not Available were identified as two new categories of interruption that had not previously been cited in the literature. Analysis of the observations indicated that physicians performed slightly fewer activities per hour than RNs; this variance may be attributed to differing roles and responsibilities. Physicians had more of their activities interrupted than RNs, but RNs experienced more interruptions per hour. Other people were the most common medium through which an interruption was delivered; additional mediums included the telephone, pager, and one's self.
Both physicians and RNs were observed to resume an original interrupted activity more often than not. In most cases, both physicians and RNs performed only one or two interrupting activities before returning to the original interrupted activity. In conclusion, the model was found to explain all interruptions observed during the study. However, the model will require a more comprehensive study in order to establish its predictive value.
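The four quantifiable measures identified in the concept analysis (frequency of occurrence, suspensions of the primary task, time suspended, and rate of return to the primary task) can be summarized from an observation log with a sketch like this (the record fields are hypothetical, not the study's actual coding scheme):

```python
def interruption_metrics(interruptions, observed_hours):
    """Summarize an observation log by the four measures: interruptions
    per hour, number of primary-task suspensions, total seconds the
    primary task was suspended, and the rate of returning to it."""
    n = len(interruptions)
    return {
        "per_hour": n / observed_hours,
        "suspensions": n,
        "suspended_seconds": sum(i["suspended_s"] for i in interruptions),
        "return_rate": (sum(1 for i in interruptions if i["resumed"]) / n
                        if n else 0.0),
    }
```

Metrics of this shape would make the physician-versus-RN comparisons reported above (interruptions per hour, resumption of the original activity) directly computable from shadowing field notes.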

Relevance:

30.00%

Publisher:

Abstract:

Despite continued research and public health efforts to reduce smoking during pregnancy, prenatal cessation rates in the United States decreased and the incidence of low birth weight increased from 1985 to 1991. Lower socioeconomic status women, who are at increased risk for poor pregnancy outcomes, may be resistant to current intervention efforts during pregnancy. The purpose of this dissertation was to investigate the determinants of continued smoking and quitting among low-income pregnant women.

Using data from cross-sectional surveys of 323 low-income pregnant smokers, the first study developed and tested measures of the pros and cons of smoking during pregnancy. The original decisional balance measure for smoking was compared with a new measure that added items thought to be more salient to the target population. Confirmatory factor analysis using structural equation modeling showed that neither the original nor the new measure fit the data adequately. Using behavioral science theory, content from interviews with the population, and statistical evidence, two 7-item scales representing the pros and cons were developed from a portion of the sample (n = 215) and successfully cross-validated on the remainder (n = 108). Logistic regression found that only the pros were significantly associated with continued smoking. In a discriminant function analysis, stage of change was significantly associated with the pros and cons of smoking.

The second study examined the structural relationships between psychosocial constructs representing some of the levels of change and the pros and cons of smoking. Because of the cross-sectional design, statements made regarding prediction cannot establish causation or directionality.
Structural equation modeling found the following: stressors and family criticism were significantly more predictive of negative affect than social support was; a bi-directional relationship existed between negative affect and current nicotine addiction; and negative affect, addiction, stressors, and family criticism were significant predictors of the pros of smoking.

The findings imply that reversing the trend of decreasing smoking cessation during pregnancy may require supplementing current interventions for this population of pregnant smokers with programs addressing nicotine addiction, negative affect, and other psychosocial factors such as family functioning and stressors.
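The first study's logistic-regression finding (pros scores associated with continued smoking) can be sketched as a single-predictor model fit by plain gradient ascent on toy data (the dissertation's analysis used standard statistical software; the numbers below are invented):

```python
import math

def fit_logistic(x, y, lr=0.5, steps=5000):
    """Fit P(continued smoking) = sigmoid(b0 + b1 * pros_score)
    by gradient ascent on the log-likelihood."""
    b0 = b1 = 0.0
    n = len(x)
    for _ in range(steps):
        g0 = g1 = 0.0
        for xi, yi in zip(x, y):
            p = 1.0 / (1.0 + math.exp(-(b0 + b1 * xi)))
            g0 += yi - p          # gradient w.r.t. intercept
            g1 += (yi - p) * xi   # gradient w.r.t. slope
        b0 += lr * g0 / n
        b1 += lr * g1 / n
    return b0, b1

# Toy data: higher pros-of-smoking scores accompany continued smoking.
pros = [1, 2, 3, 4, 6, 7, 8, 9]
smoking = [0, 0, 0, 0, 1, 1, 1, 1]
b0, b1 = fit_logistic(pros, smoking)
```

A positive fitted slope (b1 > 0) is the pattern the study reports: the more strongly a woman endorses the pros of smoking, the higher her modeled probability of continuing to smoke.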