7 results for detection-by-tracking
in DigitalCommons@The Texas Medical Center
Abstract:
Purpose: To evaluate normal tissue dose reduction in step-and-shoot intensity-modulated radiation therapy (IMRT) on the Varian 2100 platform by tracking the multileaf collimator (MLC) apertures with the accelerator jaws. Methods: Clinical radiation treatment plans for 10 thoracic, 3 pediatric, and 3 head-and-neck patients were converted to plans with the jaws tracking each segment's MLC apertures. Each segment was then renormalized to account for the change in collimator scatter, in order to obtain target coverage within 1% of that in the original plan. The new plans were compared to the original plans in a commercial radiation treatment planning system (TPS). Reduction in normal tissue dose was evaluated in the new plan using the parameters V5, V10, and V20 of the cumulative dose-volume histogram for the following structures: total lung minus GTV (gross tumor volume), heart, esophagus, spinal cord, liver, parotids, and brainstem. To validate the accuracy of our beam model, MLC transmission measurements were made and compared to those predicted by the TPS. Results: The greatest change between the original plan and the new plan occurred at lower dose levels. The reduction in V20 was never more than 6.3% and was typically less than 1% for all patients. The reduction in V5 was at most 16.7% and was typically less than 3% for all patients. The variation in normal tissue dose reduction was not predictable, and we found no clear parameters indicating which patients would benefit most from jaw tracking. Our TPS model of MLC transmission agreed with measurements to within an absolute transmission difference of 0.1%, so uncertainties in the model did not contribute significantly to the uncertainty in the dose determination. Conclusion: The amount of dose reduction achieved by collimating the jaws around each MLC aperture in step-and-shoot IMRT does not appear to be clinically significant.
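For context, the V5, V10, and V20 metrics used above are the fractional volumes of a structure receiving at least 5, 10, and 20 Gy. A minimal sketch of how such values can be computed from per-voxel structure doses is shown below; the function name and the simulated dose values are illustrative assumptions, not part of the study or its planning system.

```python
import numpy as np

def v_x(dose_gy: np.ndarray, threshold_gy: float) -> float:
    """Fraction of structure volume receiving at least `threshold_gy` Gy.

    Assumes `dose_gy` holds the dose to each voxel of one structure
    (e.g., total lung minus GTV) and that all voxels have equal volume.
    """
    return float(np.mean(dose_gy >= threshold_gy))

# Hypothetical per-voxel doses, for illustration only.
rng = np.random.default_rng(0)
structure_dose = rng.gamma(shape=2.0, scale=4.0, size=100_000)

for t in (5, 10, 20):
    print(f"V{t} = {100 * v_x(structure_dose, t):.1f}%")
```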
Abstract:
Objective. In 2003, the State of Texas instituted the Driver Responsibility Program (TDRP), a program consisting of a driving-infraction point system coupled with a series of graded fines and annual surcharges for specific traffic violations such as driving while intoxicated (DWI). Approximately half of the revenues generated are earmarked for disbursement to the state's trauma system to cover uncompensated trauma care costs. This study examined initial program implementation, the impact of trauma system funding, and the initial impact on impaired-driving knowledge, attitudes, and behaviors. A model for targeted media campaigns to improve the program's deterrence effects was developed. Methods. Data from two independent driver survey samples (conducted in 1999 and 2005), department of public safety records, state health department data, and a state auditor's report were used to evaluate the program's initial implementation, impact, and outcome with respect to drivers' impaired-driving knowledge, attitudes, and behavior (based on constructs of social cognitive theory) and hospital uncompensated trauma care funding. Survey results were used to develop a regression model of high-risk drivers who should be targeted to improve the program's outcome with respect to deterring impaired driving. Results. Low driver compliance with fee payment (28%) and program implementation problems were associated with lower surcharge revenues in the first two years ($59.5 million versus $525 million predicted). Program revenue distribution to trauma hospitals was associated with a 16% increase in designated trauma centers. Survey data demonstrated that only 28% of drivers are aware of the TDRP and that there has been no initial impact on impaired-driving behavior. Logistic regression modeling suggested that targeted media campaigns highlighting the likelihood of DWI detection by law enforcement and the increased surcharges associated with the TDRP are required to deter impaired driving. Conclusions. Although the TDRP raised nearly $60 million in surcharge revenue for the Texas trauma system over its first two years, this study did not find evidence of a change in impaired-driving knowledge, attitudes, or behaviors from 1999 to 2005. Further research is required to determine whether the program is associated with decreased alcohol-related traffic fatalities.
Abstract:
Interruption is a known human factor that contributes to errors and catastrophic events in healthcare as well as other high-risk industries. The landmark Institute of Medicine (IOM) report, To Err is Human, brought attention to the significance of preventable errors in medicine and suggested that interruptions could be a contributing factor. Previous studies of interruptions in healthcare did not offer a conceptual model by which to study interruptions. Given the serious consequences of interruptions investigated in other high-risk industries, there is a need for a model to describe, understand, explain, and predict interruptions and their consequences in healthcare. The purpose of this study was therefore to develop a model grounded in the literature and to use the model to describe and explain interruptions in healthcare, specifically interruptions occurring in a Level One Trauma Center. A trauma center was chosen because this environment is characterized as intense, unpredictable, and interrupt-driven. The first step in developing the model was a review of the literature, which revealed that the concept of interruption did not have a consistent definition in either the healthcare or non-healthcare literature. Walker and Avant's method of concept analysis was used to clarify and define the concept. The analysis led to the identification of five defining attributes: (1) a human experience, (2) an intrusion of a secondary, unplanned, and unexpected task, (3) discontinuity, (4) externally or internally initiated, and (5) situated within a context. However, before an interruption can commence, five conditions known as antecedents must occur. For an interruption to take place, (1) an intent to interrupt is formed by the initiator, (2) a physical signal must pass a threshold test of detection by the recipient, (3) the sensory system of the recipient is stimulated to respond to the initiator, (4) an interruption task is presented to the recipient, and (5) the interruption task is either accepted or rejected by the recipient. An interruption was determined to be quantifiable by (1) the frequency of occurrence of an interruption, (2) the number of times the primary task has been suspended to perform an interrupting task, (3) the length of time the primary task has been suspended, and (4) the frequency of returning or not returning to the primary task. As a result of the concept analysis, a definition of an interruption was derived from the literature: an interruption is a break in the performance of a human activity, initiated internally or externally to the recipient and occurring within the context of a setting or location, that results in the suspension of the initial task by initiating the performance of an unplanned task with the assumption that the initial task will be resumed. The definition is inclusive of all the defining attributes of an interruption and is a standard definition that can be used by the healthcare industry. From the definition, a visual model of an interruption was developed. The model was used to describe and explain the interruptions recorded during an instrumental case study of physicians and registered nurses (RNs) working in a Level One Trauma Center. Five physicians were observed for a total of 29 hours, 31 minutes; eight registered nurses were observed for a total of 40 hours, 9 minutes.
Observations were made on either the 0700–1500 or the 1500–2300 shift using the shadowing technique and were recorded as field notes. The field notes were analyzed by a hybrid method of categorizing activities and interruptions, developed using both a deductive a priori classification framework and an inductive process of line-by-line coding and constant comparison as described in Grounded Theory. The following categories were identified as relevant to this study: Intended Recipient – the person to be interrupted; Unintended Recipient – not the intended recipient of an interruption (e.g., receiving a phone call that was incorrectly dialed); Indirect Recipient – the incidental recipient of an interruption (e.g., talking with another person, thereby suspending the original activity); Recipient Blocked – the intended recipient does not accept the interruption; Recipient Delayed – the intended recipient postpones an interruption; Self-interruption – a person, independent of another person, suspends one activity to perform another (e.g., while walking, stops abruptly and talks to another person); Distraction – briefly disengaging from a task; Organizational Design – the physical layout of the workspace that causes a disruption in workflow; Artifacts Not Available – supplies and equipment that are not available in the workspace, causing a disruption in workflow; and Initiator – a person who initiates an interruption. Interruption by Organizational Design and Artifacts Not Available were identified as two new categories of interruption that had not previously been cited in the literature. Analysis of the observations indicated that physicians performed slightly fewer activities per hour than RNs; this variance may be attributed to differing roles and responsibilities. Physicians had more activities interrupted than RNs, but RNs experienced more interruptions per hour. Other people were the most common medium through which an interruption was delivered; additional mediums included the telephone, pager, and one's self. Both physicians and RNs were observed to resume an original interrupted activity more often than not, and in most interruptions performed only one or two interrupting activities before returning to the original interrupted activity. In conclusion, the model was found to explain all interruptions observed during the study, although a more comprehensive study will be required to establish its predictive value.
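To illustrate the quantification criteria named in this abstract (interruption frequency, number of task suspensions, time suspended, and frequency of resuming the primary task), here is a minimal sketch using a hypothetical record format; the field names and example events are invented for illustration and are not the study's coding scheme.

```python
from dataclasses import dataclass

@dataclass
class Interruption:
    """One observed interruption event (hypothetical record format)."""
    recipient_role: str        # e.g., "physician" or "RN"
    medium: str                # e.g., "person", "telephone", "pager", "self"
    suspended_seconds: float   # time the primary task was suspended
    resumed_primary: bool      # whether the primary task was resumed

def summarize(events: list[Interruption], observed_hours: float) -> dict:
    """Compute the four quantification measures named in the abstract."""
    n = len(events)
    return {
        "interruptions_per_hour": n / observed_hours,
        "task_suspensions": n,
        "total_time_suspended_s": sum(e.suspended_seconds for e in events),
        "resumption_rate": sum(e.resumed_primary for e in events) / n if n else 0.0,
    }

# Hypothetical example: three interruptions during one observed hour.
events = [
    Interruption("RN", "person", 45, True),
    Interruption("RN", "telephone", 120, True),
    Interruption("RN", "pager", 30, False),
]
print(summarize(events, observed_hours=1.0))
```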
Abstract:
Early detection by screening is the key to colorectal cancer control. However, colorectal cancer screening and its determinants in rural areas have not been adequately studied. The goal of this study was to investigate screening participation and the determinants of colonoscopy, sigmoidoscopy, and/or fecal occult blood testing (FOBT) among subjects of Project FRONTIER from the rural Texas counties of Cochran, Bailey, and Parmer. Subjects (n = 820: 435 Hispanics, 355 non-Hispanic Whites, 26 African Americans, and 4 of unknown ethnicity; 255 males and 565 females; aged 40 to 92 years) were drawn from Project FRONTIER. Stepwise logistic regression analysis was performed. Explanatory variables included ethnicity (Hispanic, non-Hispanic White, and African American), gender, health insurance, smoking status, household income, education (years), physical activity, overweight, other health screenings, personal physician, family history (first-degree relatives) of cancers, and preferred language (English vs. Spanish) for interview/testing. The screening percentage for ever having had a colonoscopy/sigmoidoscopy (51.8%) in this cohort aged 50 years or older is well below the percentages for the nation (65.2%) and Texas (64.6%), while the percentage for FOBT (29.2%) is higher than in the nation (17.2%) and Texas (14.9%). Hispanics had significantly lower participation than non-Hispanic Whites for both colonoscopy/sigmoidoscopy (37.0% vs. 66.0%) and FOBT (16.5% vs. 41.7%). Stepwise logistic regression showed that predictors for colonoscopy, sigmoidoscopy, or FOBT included Hispanic ethnicity (p = 0.0045), age (p < 0.0001), other screening procedures (p < 0.0001), insurance status (p < 0.0001), and physician status (p = 0.0053). The screening percentage for colonoscopy/sigmoidoscopy in this rural cohort is well below the national and Texas levels, mainly due to the lower participation of Hispanics relative to non-Hispanic Whites. Health insurance, having a personal physician, having had screenings for other cancers, ethnicity, and older age are among the main predictors.
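As an illustration of the kind of logistic regression analysis described above, the following sketch fits a model of screening participation on a few of the named predictors using simulated data; the variable names, coefficients, and data are hypothetical and do not reproduce the study's stepwise model.

```python
import numpy as np
import statsmodels.api as sm

# Simulated data loosely mirroring predictors named in the abstract:
# health insurance, personal physician, other cancer screenings,
# Hispanic ethnicity, and age. All values are hypothetical.
rng = np.random.default_rng(1)
n = 500
X = np.column_stack([
    rng.integers(0, 2, n),    # has health insurance (0/1)
    rng.integers(0, 2, n),    # has a personal physician (0/1)
    rng.integers(0, 2, n),    # screened for other cancers (0/1)
    rng.integers(0, 2, n),    # Hispanic ethnicity (0/1)
    rng.integers(50, 93, n),  # age in years
]).astype(float)

# Generate an outcome (ever screened) from an assumed true model.
logit = -4.0 + 1.0*X[:, 0] + 0.8*X[:, 1] + 1.2*X[:, 2] - 0.9*X[:, 3] + 0.04*X[:, 4]
y = (rng.random(n) < 1.0 / (1.0 + np.exp(-logit))).astype(float)

model = sm.Logit(y, sm.add_constant(X)).fit(disp=0)
print(model.summary(xname=["const", "insurance", "physician",
                           "other_screening", "hispanic", "age"]))
```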
Abstract:
Enterotoxigenic Escherichia coli (ETEC) causes significant morbidity and mortality in infants in developing countries and is the most common cause of diarrhea in travelers to these areas. ETEC infections are commonly caused by ingestion of fecally contaminated food, so a timely method for detecting ETEC in foods would be important in preventing this disease. A multiplex polymerase chain reaction (PCR) assay that has been successful in detecting the heat-labile and heat-stable toxins of ETEC in stool was examined to determine its utility in foods. This PCR assay, preceded by a glass-matrix and chaotropic DNA extraction, was effective in detecting high numbers of ETEC in a variety of foods: ninety percent of 121 spiked food samples yielded positive results. Samples of salsa from Guadalajara, Mexico and Houston, Texas were collected and underwent DNA extraction and PCR; all yielded negative results for both the heat-labile and heat-stable toxins. The same samples were also subjected to oligonucleotide probe analysis, which identified 5 samples as positive for ETEC. Dilution testing showed that positive PCR results occurred only when 12,000 to 1,000,000 bacteria were present in 200 mg of food. Although the DNA extraction and PCR method has been shown to be both sensitive and specific in stool, similar results were not obtained in food samples.
Abstract:
The cause of infection in about a third of all travelers' diarrhea patients studied is not identified. Stools from these patients test negative for known enteric pathogens and are classified as pathogen-negative stools. We proposed that this third of patients might include not only presently unknown pathogens but also known pathogens that go undetected. Conventionally, a probability sample of five E. coli colonies is used to detect enterotoxigenic E. coli (ETEC) and other diarrhea-producing E. coli in stool cultures. We compared this conventional method of testing five E. coli colonies with testing up to twenty E. coli colonies. Testing up to fifteen E. coli colonies detected about twice as many ETEC cases as testing five colonies: when the number of colonies tested was increased from 5 to 15, the detection of ETEC increased from 19.0% to 38.8%. The sensitivity of the assay with 5 E. coli colonies was statistically significantly different from the sensitivity with 10 colonies, suggesting that at least 10 E. coli colonies should be tested for the detection of ETEC.
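The benefit of testing more colonies can be made concrete with a simple sampling model: if a fraction p of the E. coli colonies in a positive stool culture are ETEC, the probability that at least one of n sampled colonies is ETEC is 1 − (1 − p)^n, assuming colonies are sampled independently. A minimal sketch under that simplifying assumption (the value p = 0.10 is purely illustrative and not from the study):

```python
def detection_probability(p_etec_colony: float, n_colonies: int) -> float:
    """Probability that at least one of n sampled colonies is ETEC,
    assuming each colony is independently ETEC with probability p."""
    return 1 - (1 - p_etec_colony) ** n_colonies

# Illustrative only: if 10% of colonies in a positive stool are ETEC,
# sampling 5 vs. 15 colonies changes the chance of catching at least one.
for n in (5, 10, 15, 20):
    print(n, round(detection_probability(0.10, n), 3))
```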
Abstract:
Detection of malarial sporozoites by a double-antibody sandwich enzyme-linked immunosorbent assay (ELISA) is described. This investigation utilized the Anopheles stephensi–Plasmodium berghei malaria model for the generation of sporozoites. Anti-sporozoite antibody was obtained from the sera of rats that had been bitten by An. stephensi carrying salivary gland sporozoites; the mosquitoes were irradiated prior to feeding on the rats to render the sporozoites non-viable. The assay employed microtiter plates coated with either rat anti-sporozoite antiserum or rat anti-sporozoite IgG. Intact and sonicated sporozoites were used as antigens. Initially, sporozoites were detected by an ELISA using staphylococcal protein A conjugated with alkaline phosphatase. Sporozoites were also detected using alkaline phosphatase or horseradish peroxidase conjugated to anti-sporozoite IgG; the best results were obtained with the alkaline phosphatase conjugate. The investigation included titration of the antigen, coating antibody, and labelled antibody, as well as studies of various incubation times. A radioimmunoassay (RIA) was also developed and compared with the ELISA for detecting sporozoites. Finally, the detection of a single infected mosquito in pools of 5 to 10 whole, uninfected mosquitoes was studied using both the ELISA and the RIA. Sonicated sporozoites were more readily detected than intact sporozoites, and the lower limit of detection was approximately 500 sporozoites per ml. Results using the ELISA and the RIA were similar. The ability of the ELISA to detect a single infected mosquito in a pool of uninfected ones indicates that this technique has potential use in entomological field studies aimed at determining the vector status of anopheline mosquitoes. The potential of the ELISA for identifying sporozoites of different malaria species is discussed.