16 results for Elementary Methods In Number Theory
at DigitalCommons@The Texas Medical Center
Abstract:
Background. Similar to parent support in the home environment, teacher support at school may positively influence children's fruit and vegetable (FV) consumption. This study assessed the relationship between teacher support for FV consumption and the FV intake of 4th and 5th grade students in low-income elementary schools in central Texas. Methods. A secondary analysis was performed on baseline data collected from 496 parent-child dyads during the Marathon Kids study carried out by the Michael & Susan Dell Center for Healthy Living at the University of Texas School of Public Health. A hierarchical linear regression analysis adjusting for key demographic variables, parent support, and home FV availability was conducted. In addition, separate linear regression models stratified by quartiles of home FV availability were conducted to assess the relationship between teacher support and FV intake by level of home FV availability. Results. Teacher support was not significantly related to students' FV intake (p = .44). However, the interaction of teacher support and home FV availability was positively associated with students' FV consumption (p < .05). For students in the lowest quartile of home FV availability, teacher support accounted for approximately 6% of the FV intake variance (p = .02). For higher levels of FV availability, teacher support and FV intake were not related. Conclusions. For lower income elementary school-aged children with low FV availability at home, greater teacher support may lead to modest increases in FV consumption.
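A minimal sketch of the two analyses described (an interaction model plus quartile-stratified fits) can be written with `statsmodels` on simulated data; the variable names and values below are illustrative stand-ins, not the study's data:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 496  # matches the dyad count; the data themselves are simulated
df = pd.DataFrame({
    "fv_intake": rng.normal(2.5, 1.0, n),        # servings/day (made up)
    "teacher_support": rng.normal(0.0, 1.0, n),
    "home_fv_avail": rng.normal(0.0, 1.0, n),
    "parent_support": rng.normal(0.0, 1.0, n),
})

# Model with the teacher support x home availability interaction term
inter = smf.ols(
    "fv_intake ~ teacher_support * home_fv_avail + parent_support", data=df
).fit()

# Separate fits stratified by quartile of home FV availability
df["avail_q"] = pd.qcut(df["home_fv_avail"], 4, labels=False)
strata = {
    q: smf.ols("fv_intake ~ teacher_support + parent_support", data=g).fit()
    for q, g in df.groupby("avail_q")
}
```

The `a * b` formula shorthand expands to both main effects plus the `a:b` interaction, which is the term the study reports as significant.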
Abstract:
Complex diseases such as cancer result from multiple genetic changes and environmental exposures. Due to the rapid development of genotyping and sequencing technologies, we are now able to more accurately assess the causal effects of many genetic and environmental factors. Genome-wide association studies have localized many causal genetic variants predisposing to certain diseases. However, these studies explain only a small portion of the variation in the heritability of diseases. More advanced statistical models are urgently needed to identify and characterize additional genetic and environmental factors and their interactions, which will enable us to better understand the causes of complex diseases. In the past decade, thanks to increasing computational capabilities and novel statistical developments, Bayesian methods have been widely applied in genetics and genomics research and have demonstrated superiority over standard approaches in certain research areas. Gene-environment and gene-gene interaction studies are among the areas where Bayesian methods can fully exploit their advantages. This dissertation focuses on developing new Bayesian statistical methods for data analysis with complex gene-environment and gene-gene interactions, as well as extending some existing methods for gene-environment interactions to other related areas. It includes three sections: (1) deriving a Bayesian variable selection framework for hierarchical gene-environment and gene-gene interactions; (2) developing Bayesian Natural and Orthogonal Interaction (NOIA) models for gene-environment interactions; and (3) extending the applications of two Bayesian statistical methods, developed for gene-environment interaction studies, to related problems such as adaptively borrowing historical data.
We propose a Bayesian hierarchical mixture model framework that allows us to investigate genetic and environmental effects, gene-gene interactions (epistasis), and gene-environment interactions in the same model. It is well known that, in many practical situations, there exists a natural hierarchical structure between the main effects and interactions in a linear model. Here we propose a model that incorporates this hierarchical structure into the Bayesian mixture model, so that irrelevant interaction effects can be removed more efficiently, resulting in more robust, parsimonious, and powerful models. We evaluate both the 'strong hierarchical' and 'weak hierarchical' models, which specify that both or at least one, respectively, of the main effects of the interacting factors must be present for the interaction to be included in the model. Extensive simulation results show that the proposed strong and weak hierarchical mixture models control the proportion of false positive discoveries and yield a powerful approach for identifying the predisposing main effects and interactions in studies with complex gene-environment and gene-gene interactions. We also compare these two models with an 'independent' model that does not impose the hierarchical constraint and observe the superior performance of the hierarchical models in most of the situations considered. The proposed models are applied to real data on gene-environment interactions from lung cancer and cutaneous melanoma case-control studies. Bayesian statistical models have the advantage of allowing useful prior information to be incorporated into the modeling process. Moreover, the Bayesian mixture model outperforms the multivariate logistic model in terms of parameter estimation and variable selection in most cases.
Our proposed models impose hierarchical constraints that further improve the Bayesian mixture model by reducing the proportion of false positive findings among the identified interactions and by successfully recovering the reported associations. This is practically appealing for studies that investigate causal factors among a moderate number of candidate genetic and environmental factors together with a relatively large number of interactions. The natural and orthogonal interaction (NOIA) models of genetic effects were previously developed to provide an analysis framework in which the estimated effects for a quantitative trait are statistically orthogonal regardless of whether Hardy-Weinberg Equilibrium (HWE) holds within loci. Ma et al. (2012) recently developed a NOIA model for gene-environment interaction studies and showed its advantages over the usual functional model for detecting the true main effects and interactions. In this project, we propose a novel Bayesian statistical model that combines the Bayesian hierarchical mixture model with both the NOIA statistical model and the usual functional model. The proposed Bayesian NOIA model demonstrates more power for detecting non-null effects, with higher marginal posterior probabilities. We also review two Bayesian statistical models (a Bayesian empirical shrinkage-type estimator and Bayesian model averaging) that were developed for gene-environment interaction studies. Inspired by these models, we develop two novel statistical methods that handle related problems such as borrowing data from historical studies. The proposed methods are analogous to the gene-environment interaction methods in that they balance statistical efficiency and bias within a unified model.
Through extensive simulation studies, we compare the operating characteristics of the proposed models with those of existing models, including the hierarchical meta-analysis model. The results show that the proposed approaches adaptively borrow the historical data in a data-driven way. These novel models may have a broad range of statistical applications in both genetic/genomic and clinical studies.
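The strong and weak hierarchy constraints described above amount to a simple rule on variable-inclusion indicators of the kind used in spike-and-slab variable selection. The fragment below is an illustrative sketch, not the dissertation's implementation; `gamma_main` and `gamma_int` are hypothetical 0/1 inclusion flags:

```python
def hierarchy_ok(gamma_main, gamma_int, pairs, mode="strong"):
    """Check hierarchical constraints on inclusion indicators.

    gamma_main: list of 0/1 flags, one per main effect
    gamma_int:  list of 0/1 flags, one per interaction term
    pairs:      list of (i, j) index pairs linking each interaction
                to its two parent main effects
    mode:       'strong' requires both parent main effects present;
                'weak' requires at least one
    """
    for k, (i, j) in enumerate(pairs):
        if gamma_int[k]:
            if mode == "strong" and not (gamma_main[i] and gamma_main[j]):
                return False
            if mode == "weak" and not (gamma_main[i] or gamma_main[j]):
                return False
    return True
```

In an MCMC sampler, proposals that violate the chosen constraint would simply be rejected (or never proposed), which is how irrelevant interactions get pruned from the model.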
Abstract:
OBJECTIVE: To systematically review the published literature on the complications associated with the use of misoprostol and to compare these complications with those associated with other forms of abortion induction. DATA SOURCES: Studies were identified through searches of medical literature databases including Medline (Ovid), PubMed (NLM), LILACS, SciELO, and AIM (AFRO), and through review of the references of relevant articles. STUDY SELECTION AND METHODS: A descriptive systematic review that included studies reported in English and published before December 2012. Eligibility criteria included: use of misoprostol (with or without other methods) and of any other method of abortion in a developing country, as well as quantitative data on the complications of each method. The following information was extracted from each study: author/year, country/city, study design/study sample, age range, setting of data collection, sample size, the method of abortion induction, the number of cases for each method, and the percentage of complications with each method. RESULTS: A total of 4 studies were identified (all in Latin America) describing post-abortion complications of misoprostol and other methods in countries where abortion is generally considered unsafe and/or illegal. The four studies reported a range of complications including: bleeding, infection, incomplete abortion, intense pelvic pain, uterine perforation, headache, diarrhea, nausea, mechanical lesions, and systemic collapse. The most prevalent reported complications of misoprostol-induced abortion were: bleeding (7-82%), incomplete abortion (33-70%), and infection (0.8-67%). The prevalence of these complications reported for other abortion methods included: bleeding (16-25%), incomplete abortion (15-82%), and infection (13-50%). CONCLUSION: The literature identified by this systematic review is inadequate for determining the complications of misoprostol used in unsafe settings.
Abortion is considered an illicit behavior in these countries, making it difficult to investigate the details needed to conduct a study on abortion complications. Given the differences between the reviewed studies as well as a variety of study limitations, it is not possible to draw firm conclusions about the rates of specific abortion-related complications.
Abstract:
Li-Fraumeni syndrome (LFS) is characterized by a variety of neoplasms occurring at a young age with an apparent autosomal dominant transmission. Individuals in pedigrees with LFS have a high incidence of second malignancies. Recently, LFS has been found to be associated with germline mutations of a tumor-suppressor gene, p53. Because LFS is rare and indeed not a clear-cut disease, it is not known whether all cases of LFS are attributable to p53 germline mutations, nor what role p53 plays in cancer occurrence in such cancer syndrome families. In the present study, DNA from constitutive cells of 233 family members from ten extended pedigrees was screened for p53 mutations. Six of the ten LFS families had germline mutations at the p53 locus, including point and deletion mutations. In these six families, 55 of 146 members were carriers of p53 mutations. Except for one, all mutations occurred in exons 5 to 8 (i.e., the "hot spot" region) of the p53 gene. The age-specific penetrance of cancer was estimated after the genotype of each family member at risk was determined. The penetrance was 0.15, 0.29, 0.35, 0.77, and 0.91 by ages 20, 30, 40, 50, and 60, respectively, in male carriers, and 0.19, 0.44, 0.76, and 0.90 by ages 20, 30, 40, and 50, respectively, in female carriers. These results indicate that one can hardly escape tumorigenesis after inheriting a p53 mutant allele; at least ninety percent of p53 carriers will develop cancer by the age of 60. To evaluate possible bias due to unexamined blood relatives in the LFS families, I performed a simulation analysis in which a p53 genotype was assigned to each unexamined person based on his or her cancer status and liability to cancer. The results showed that the penetrance estimates were not biased by the unexamined relatives. I also determined the sex-, site-, and age-specific penetrance of breast cancer in female carriers and lung cancer in male carriers.
The penetrance of breast cancer in female carriers was 0.81 by age 45; the penetrance of lung cancer in male carriers was 0.78 by age 60, indicating that p53 plays a key role in tumorigenesis in common cancers.
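As a rough illustration of what an age-specific penetrance estimate computes, the sketch below estimates P(cancer by age a | carrier) from hypothetical carrier follow-up data. It is a naive fraction among carriers followed to at least age a, not the (likely survival-analysis-based) estimator used in the study:

```python
import numpy as np

def penetrance_by_age(diag_age, followup_age, ages):
    """Naive age-specific penetrance among mutation carriers.

    diag_age:     age at cancer diagnosis, np.nan if unaffected
    followup_age: age at last follow-up (or death)
    ages:         ages at which to evaluate penetrance
    """
    diag_age = np.asarray(diag_age, dtype=float)
    followup_age = np.asarray(followup_age, dtype=float)
    out = {}
    for a in ages:
        at_risk = followup_age >= a          # followed at least to age a
        affected = (~np.isnan(diag_age)) & (diag_age <= a) & at_risk
        out[a] = affected.sum() / at_risk.sum() if at_risk.sum() else float("nan")
    return out
```

Restricting the denominator to carriers followed past age a is a crude way to handle censoring; a Kaplan-Meier-style estimator would use the information from shorter follow-ups as well.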
Abstract:
Background. With the rapid rise in childhood obesity, physical activity participation among young children has become the subject of much recent attention. Physical education (PE) classes have been specifically targeted as a method of providing opportunities for all children to be active. Unfortunately, student participation in moderate-to-vigorous physical activity during these classes still falls far below current recommendations. While some research to date has reported the activity levels of elementary-aged children, research is limited on the relationship between these activity levels and the environmental characteristics of the PE classroom. Purpose. The purpose of this study is to examine the association of specific classroom and contextual characteristics (lesson context, class size, class location, teacher gender, and teacher encouragement for PA) with elementary-aged children's moderate-to-vigorous activity during PE class. Methods. A secondary analysis of 211 3rd, 4th, and 5th grade physical education classes across 39 elementary schools in Harris County, TX and 35 elementary schools in Travis County, TX was conducted using cross-sectional data from the evaluation of a school-based health program. Lesson context and student activity levels were measured using a direct observation measurement tool. These variables were then analyzed against a number of classroom characteristics to determine any significant associations. Results. Overall, elementary PE classes still exhibit low levels of moderate-to-vigorous physical activity, averaging only 38% of class time. Additionally, close to 25% of class time is spent in classroom management. Male-directed classes spent significantly more time in game activities, and female-directed classes spent more time in fitness, knowledge, and skill activities.
Classes that took place outdoors were more active and spent more time in games than those that took place indoors. Significant correlations were found between class size and time spent in the management context. Time spent in the management context was also correlated with time spent sitting and standing. Additionally, positive correlations were found between time spent very active and teachers who praised students and encouraged physical activity in their classes.
Abstract:
Objective Interruptions are known to have a negative impact on activity performance. Understanding how an interruption contributes to human error is limited because there is not a standard method for analyzing and classifying interruptions. Qualitative data are typically analyzed by either a deductive or an inductive method. Both methods have limitations. In this paper a hybrid method was developed that integrates deductive and inductive methods for the categorization of activities and interruptions recorded during an ethnographic study of physicians and registered nurses in a Level One Trauma Center. Understanding the effects of interruptions is important for designing and evaluating informatics tools in particular and for improving healthcare quality and patient safety in general. Method The hybrid method was developed using a deductive a priori classification framework with the provision of adding new categories discovered inductively in the data. The inductive process utilized line-by-line coding and constant comparison as stated in Grounded Theory. Results The categories of activities and interruptions were organized into a three-tiered hierarchy of activity. Validity and reliability of the categories were tested by categorizing a medical error case external to the study. No new categories of interruptions were identified during analysis of the medical error case. Conclusions Findings from this study provide evidence that the hybrid model of categorization is more complete than either a deductive or an inductive method alone. The hybrid method developed in this study provides the methodical support for understanding, analyzing, and managing interruptions and workflow.
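As a toy illustration of the deductive-plus-inductive scheme (not the authors' software; the category names below are hypothetical), a coder can start from an a priori framework and grow new categories when the data demand them:

```python
class HybridCoder:
    """Deductive a priori categories plus inductively discovered ones."""

    def __init__(self, a_priori):
        self.categories = set(a_priori)   # deductive framework
        self.discovered = set()           # categories that emerged from data

    def code(self, observation, category):
        if category not in self.categories:
            # Inductive step: a new category emerges from line-by-line
            # coding and constant comparison of the field notes.
            self.categories.add(category)
            self.discovered.add(category)
        return category
```

Keeping the discovered categories separate makes it easy to report, as the study does, which categories were new relative to the a priori framework.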
Abstract:
Introduction and objective. A number of prognostic factors have been reported for predicting survival in patients with renal cell carcinoma (RCC). Yet few studies have analyzed the effects of those factors at different stages of the disease process. In this study, different stages of disease progression were evaluated: from nephrectomy to metastasis, from metastasis to death, and from evaluation to death. Methods. In this retrospective follow-up study, records of 97 deceased RCC patients were reviewed between September 2006 and October 2006. Patients with TNM Stage IV disease before nephrectomy or with cancer diagnoses other than RCC were excluded, leaving 64 records for analysis. TNM stage, Fuhrman grade, age, tumor size, tumor volume, histology, and patient gender were analyzed in relation to time to metastasis. Time from nephrectomy to metastasis, TNM stage, Fuhrman grade, age, tumor size, tumor volume, histology, and patient gender were tested for significance in relation to time from metastasis to death. Finally, laboratory values at the time of evaluation, Eastern Cooperative Oncology Group (ECOG) performance status, the UCLA Integrated Staging System (UISS), time from nephrectomy to metastasis, TNM stage, Fuhrman grade, age, tumor size, tumor volume, histology, and patient gender were tested for significance in relation to time from evaluation to death. Linear regression and Cox proportional hazards models (univariate and multivariate) were used to test significance. The Kaplan-Meier log-rank test was used to detect differences between groups at various endpoints. Results. Compared to patients with negative lymph nodes at the time of nephrectomy, those with a single positive lymph node had significantly shorter time to metastasis (p<0.0001). Compared to other histological types, clear cell histology was associated with significantly longer metastasis-free survival (p=0.003).
Clear cell histology compared to other types (p=0.0002 univariate, p=0.038 multivariate) and time to metastasis with log conversion (p=0.028) significantly affected time from metastasis to death. Metastasis-free intervals of greater than one year and greater than two years were associated with a statistically significant survival benefit compared with metastasis before one and two years, respectively (p=0.004 and p=0.0318). Time from evaluation to death was affected by a greater than one year metastasis-free interval (p=0.0459), alcohol consumption (p=0.044), LDH (p=0.006), ECOG performance status (p<0.001), and hemoglobin level (p=0.0092). The UISS risk-stratified the patient population in a statistically significant manner for survival (p=0.001). No other factors were found to be significant. Conclusion. Clear cell histology is predictive for both time to metastasis and time from metastasis to death. Nodal status at the time of nephrectomy may predict risk of metastasis. The time interval to metastasis significantly predicts time from metastasis to death and time from evaluation to death. ECOG performance status and hemoglobin level predict survival outcome at evaluation. Finally, the UISS appropriately stratifies risk in our population.
Abstract:
Screening for latent tuberculosis infection (LTBI) is an integral component of an effective tuberculosis control strategy, but one that is often relegated to the lowest priority. In a state with higher than national average rates of tuberculosis, due consideration should be given to LTBI screening. Recent large-scale contact investigations in the middle school of Del Rio, Texas, raised questions about the status of school screening for LTBI. An evidence-based approach was used to evaluate school screening in high-risk areas of Texas. A review of the literature revealed that current recommendations for LTBI screening in children are based on administration of a risk factor questionnaire addressing the four main risk factors for LTBI that have been identified in children. Six representative areas in Texas were selected to evaluate the occurrence of contact investigations in schools for the period 2006 to 2009 and any use of school screening programs. Of the five reporting areas that responded, only one utilized a school screening program; this reporting area had the lowest percentage of contact investigations occurring in schools. Contact investigations were most common in middle schools and least common in elementary schools. In metropolitan areas, colleges represented up to 42.9% of contact investigations. The number of contact investigations increased from 2006 to 2008. This report represents a small sample, and further research should be done into the frequency, distribution, and risk of contact investigations in schools and the efficacy of screening programs.
Abstract:
Uncertainty has been found to be a major component of the cancer experience and can dramatically affect psychosocial adaptation and outcomes of a patient's disease state (McCormick, 2002). Patients with a diagnosis of Carcinoma of Unknown Primary (CUP) may experience higher levels of uncertainty due to the unpredictability of current and future symptoms, limited treatment options, and an undetermined life expectancy. To date, only one study has touched upon uncertainty and its effects on those with CUP, but no information exists concerning the effects of uncertainty regarding diagnosis and treatment on the distress level and psychosocial adjustment of this population (Parker & Lenzi, 2003). Mishel's Uncertainty in Illness Theory (1984) proposes that uncertainty is preceded by three variables, one of which is Structure Providers. Structure Providers include credible authority (the degree of trust and confidence the patient has in their doctor), education, and social support. The goal of this study was to examine the relationship between uncertainty and Structure Providers to test the following hypotheses: (1) there will be a negative association between credible authority and uncertainty; (2) there will be a negative association between education level and uncertainty; and (3) there will be a negative association between social support and uncertainty. This cross-sectional analysis utilized data from 219 patients following their initial consultation with their oncologist. Data included the Mishel Uncertainty in Illness Scale (MUIS), used to determine patients' uncertainty levels; the Medical Outcomes Study Social Support Scale (MOS-SSS), used to assess patients' levels of social support; the Patient Satisfaction Questionnaire (PSQ-18) and the Cancer Diagnostic Interview Scale (CDIS), used to measure credible authority; and general demographic information on age, education, marital status, and ethnicity.
In this study we found that uncertainty levels were generally higher in this sample than in other cancer populations. While our results seemed to support most of our hypotheses, we were able to show significant associations for only two. The analyses indicated that credible authority, measured by both the CDIS and the PSQ, was a significant predictor of uncertainty, as was social support measured by the MOS-SSS. Education has shown an inconsistent pattern of effect in relation to uncertainty, and in the current study there were not enough data to significantly support our hypothesis. The results of this study generally support Mishel's Uncertainty in Illness Theory and highlight the importance of taking patients' psychosocial factors into consideration, as well as employing proper communication practices between physicians and their patients.
Abstract:
In recent years, disaster preparedness through assessment of medical and special needs persons (MSNP) has taken center stage in the public eye, owing to frequent natural disasters such as hurricanes, storm surge, and tsunamis driven by climate change and increased human activity on our planet. Statistical methods for complex survey design and analysis have gained significance as a consequence. However, many challenges remain in inferring such assessments over the target population for policy-level advocacy and implementation. Objective. This study discusses the use of statistical methods for disaster preparedness and medical needs assessment to facilitate policy-level decision making and logistic support by local and state governments, so as to avoid loss of life and property in future calamities. Methods. To obtain precise and unbiased estimates of medical special needs persons (MSNP) and of disaster preparedness for evacuation in the Rio Grande Valley (RGV) of Texas, a stratified, cluster-randomized multi-stage sampling design was implemented. The UT School of Public Health, Brownsville surveyed 3088 households in three counties, namely Cameron, Hidalgo, and Willacy. Multiple statistical methods were implemented and estimates were obtained taking into account the probability of selection and clustering effects. The statistical methods discussed for data analysis were multivariate linear regression (MLR), survey linear regression (Svy-Reg), generalized estimating equations (GEE), and multilevel mixed models (MLM), all with and without sampling weights. Results. The estimated population of the RGV was 1,146,796: 51.5% female, 90% Hispanic, 73% married, 56% unemployed, and 37% with personal transport. 40% of people attained education only up to elementary school, another 42% reached high school, and only 18% went to college. Median household income was less than $15,000/year. MSNP were estimated at 44,196 (3.98%) [95% CI: 39,029; 51,123].
All statistical models were in concordance, with MSNP estimates ranging from about 44,000 to 48,000. The MSNP estimates by method were: MLR (47,707; 95% CI: 42,462; 52,999), MLR with weights (45,882; 95% CI: 39,792; 51,972), bootstrap regression (47,730; 95% CI: 41,629; 53,785), GEE (47,649; 95% CI: 41,629; 53,670), GEE with weights (45,076; 95% CI: 39,029; 51,123), Svy-Reg (44,196; 95% CI: 40,004; 48,390), and MLM (46,513; 95% CI: 39,869; 53,157). Conclusion. The RGV is a flood zone, highly susceptible to hurricanes and other natural disasters. People in the region are mostly Hispanic and under-educated, with among the lowest income levels in the U.S. In a disaster, much of the population would be incapacitated, with only 37% having personal transport to take care of MSNP. Intervention by local and state governments in terms of planning, preparation, and support for evacuation is necessary in any such disaster to avoid loss of precious human life. Key words: complex surveys, statistical methods, multilevel models, cluster randomized, sampling weights, raking, survey regression, generalized estimating equations (GEE), random effects, intracluster correlation coefficient (ICC).
Abstract:
Background. Breast cancer is the most frequently diagnosed cancer and the leading cause of cancer death among females, accounting for 23% (1.38 million) of the total new cancer cases and 14% (458,400) of the total cancer deaths in 2008. [1] Triple-negative breast cancer (TNBC) is an aggressive phenotype comprising 10-20% of all breast cancers (BCs). [2-4] TNBCs show absence of estrogen, progesterone, and HER2/neu receptors on the tumor cells, and because of the absence of these receptors, TNBCs are not candidates for targeted therapies. Circulating tumor cells (CTCs) are observed in the blood of breast cancer patients even at early stages (Stage I & II) of the disease, and immunological and molecular analysis can be used to detect their presence. These cells may explain relapses in early-stage breast cancer patients even after adequate local control. CTC detection may be useful in identifying patients at risk for disease progression, and therapies targeting CTCs may improve outcomes in patients harboring them. Methods. In this study we evaluated 80 patients with TNBC enrolled in a larger prospective study conducted at MD Anderson Cancer Center, to determine whether the presence of circulating tumor cells is a significant prognostic factor for relapse-free and overall survival. Patients with metastatic disease at the time of presentation were excluded from the study. CTCs were assessed using the CellSearch System™ (Veridex, Raritan, NJ) and were defined as nucleated cells lacking CD45 but expressing cytokeratins 8, 18, or 19. The distribution of patient and tumor characteristics was analyzed using the chi-square test and Fisher's exact test. The log-rank test and Cox regression analysis were applied to establish the association of circulating tumor cells with relapse-free and overall survival. Results. The median age of the study participants was 53 years.
The median duration of follow-up was 40 months. Eighty-eight percent of patients were newly diagnosed (without a previous history of breast cancer), and 60% were chemo-naïve (had not received chemotherapy at the time of their blood draw for CTC analysis). Tumor characteristics such as stage (P=0.40), tumor size (P=0.69), sentinel nodal involvement (P=0.87), axillary lymph node involvement (P=0.13), adjuvant therapy (P=0.83), and high histological grade (P=0.26) did not predict the presence of CTCs. However, CTCs predicted worse relapse-free survival (log-rank P=0.04 at 1 or more CTCs, P=0.02 at 2 or more, and P<0.0001 at 3 or more) and overall survival (log-rank P=0.08 at 1 or more CTCs, P=0.01 at 2 or more, and P=0.0001 at 3 or more). Conclusions. The number of circulating tumor cells predicted worse relapse-free survival and overall survival in TNBC patients.
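The survival comparisons by CTC threshold rest on Kaplan-Meier estimates. A minimal, dependency-light version of the estimator (illustrative only; the study would have used standard statistical software) is:

```python
import numpy as np

def kaplan_meier(time, event):
    """Kaplan-Meier survival estimate.

    time:  follow-up time for each patient
    event: 1 if the event (relapse/death) was observed, 0 if censored
    Returns (event_times, S(t)) evaluated just after each event time.
    """
    time = np.asarray(time, dtype=float)
    event = np.asarray(event, dtype=int)
    uniq = np.unique(time[event == 1])
    surv = []
    s = 1.0
    for t in uniq:
        at_risk = np.sum(time >= t)                   # still under observation
        deaths = np.sum((time == t) & (event == 1))   # events at this time
        s *= 1.0 - deaths / at_risk
        surv.append(s)
    return uniq, np.array(surv)
```

Patients would be split into groups (e.g. fewer than k versus k or more CTCs) and a log-rank test applied to compare the resulting curves.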
Abstract:
Developing a Model. Interruption is a known human factor that contributes to errors and catastrophic events in healthcare as well as other high-risk industries. The landmark Institute of Medicine (IOM) report, To Err is Human, brought attention to the significance of preventable errors in medicine and suggested that interruptions could be a contributing factor. Previous studies of interruptions in healthcare did not offer a conceptual model by which to study them. Given the serious consequences of interruptions documented in other high-risk industries, there is a need for a model to describe, understand, explain, and predict interruptions and their consequences in healthcare. Therefore, the purpose of this study was to develop a model grounded in the literature and to use the model to describe and explain interruptions in healthcare, specifically interruptions occurring in a Level One Trauma Center. A trauma center was chosen because this environment is characterized as intense, unpredictable, and interrupt-driven. The first step in developing the model was a review of the literature, which revealed that the concept of interruption did not have a consistent definition in either the healthcare or non-healthcare literature. Walker and Avant's method of concept analysis was used to clarify and define the concept. The analysis led to the identification of five defining attributes: (1) a human experience, (2) an intrusion of a secondary, unplanned, and unexpected task, (3) discontinuity, (4) externally or internally initiated, and (5) situated within a context. However, before an interruption can commence, five conditions known as antecedents must occur.
For an interruption to take place, (1) an intent to interrupt is formed by the initiator, (2) a physical signal must pass a threshold test of detection by the recipient, (3) the sensory system of the recipient is stimulated to respond to the initiator, (4) an interruption task is presented to the recipient, and (5) the interruption task is either accepted or rejected by the recipient. An interruption was determined to be quantifiable by (1) the frequency of occurrence of interruptions, (2) the number of times the primary task was suspended to perform an interrupting task, (3) the length of time the primary task was suspended, and (4) the frequency of returning or not returning to the primary task. As a result of the concept analysis, a definition of an interruption was derived from the literature: an interruption is a break in the performance of a human activity, initiated internally or externally to the recipient and occurring within the context of a setting or location. This break results in the suspension of the initial task through the performance of an unplanned task, with the assumption that the initial task will be resumed. The definition is inclusive of all the defining attributes of an interruption and is a standard definition that can be used by the healthcare industry. From the definition, a visual model of an interruption was developed. The model was used to describe and explain the interruptions recorded during an instrumental case study of physicians and registered nurses (RNs) working in a Level One Trauma Center. Five physicians were observed for a total of 29 hours, 31 minutes; eight registered nurses were observed for a total of 40 hours, 9 minutes. Observations were made on either the 0700-1500 or the 1500-2300 shift using the shadowing technique and were recorded as field notes. The field notes were analyzed by a hybrid method of categorizing activities and interruptions.
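The four quantifiable dimensions of an interruption described above can be sketched as a small data structure. This is an illustrative sketch only; the class and field names are assumptions, not part of the dissertation's instruments:

```python
from dataclasses import dataclass


@dataclass
class InterruptionLog:
    """Illustrative tracker for the four quantifiable dimensions of
    interruptions identified in the concept analysis (hypothetical names)."""
    interruption_count: int = 0     # (1) frequency of occurrence
    suspensions: int = 0            # (2) times the primary task was suspended
    suspended_seconds: float = 0.0  # (3) total time the primary task was suspended
    resumed: int = 0                # (4) returns to the primary task

    def record(self, suspended_for: float, returned: bool) -> None:
        """Log one interruption: how long the primary task was suspended
        and whether the observee returned to it afterward."""
        self.interruption_count += 1
        self.suspensions += 1
        self.suspended_seconds += suspended_for
        if returned:
            self.resumed += 1

    def resumption_rate(self) -> float:
        """Fraction of suspensions that ended in a return to the primary task."""
        return self.resumed / self.suspensions if self.suspensions else 0.0
```

Under this sketch, an observation session would call `record` once per interruption, and the four dimensions fall out directly as counts, a duration total, and a resumption rate.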
The method combined a deductive a priori classification framework with the inductive process of line-by-line coding and constant comparison as described in Grounded Theory. The following categories were identified as relevant to this study:

Intended Recipient – the person to be interrupted
Unintended Recipient – not the intended recipient of an interruption; e.g., receiving a phone call that was incorrectly dialed
Indirect Recipient – the incidental recipient of an interruption; e.g., talking with another, thereby suspending the original activity
Recipient Blocked – the intended recipient does not accept the interruption
Recipient Delayed – the intended recipient postpones an interruption
Self-interruption – a person, independent of another person, suspends one activity to perform another; e.g., while walking, stops abruptly and talks to another person
Distraction – briefly disengaging from a task
Organizational Design – the physical layout of the workspace that causes a disruption in workflow
Artifacts Not Available – supplies and equipment that are not available in the workspace, causing a disruption in workflow
Initiator – a person who initiates an interruption

Interruption by Organizational Design and Artifacts Not Available were identified as two new categories of interruption that had not previously been cited in the literature. Analysis of the observations indicated that physicians performed slightly fewer activities per hour than RNs, a variance that may be attributed to their differing roles and responsibilities. Physicians had more of their activities interrupted than RNs; however, RNs experienced more interruptions per hour. Other people were the most common medium through which an interruption was delivered; additional media included the telephone, pager, and oneself.
Both physicians and RNs were observed to resume an interrupted activity more often than not. In most cases, both physicians and RNs performed only one or two interrupting activities before returning to the original interrupted activity. In conclusion, the model was found to explain all interruptions observed during the study; however, a more comprehensive study will be required to establish its predictive value.