6 results for MOST PROBABLE NUMBER
in DigitalCommons@The Texas Medical Center
Abstract:
A new technique for detecting microbiological fecal pollution in drinking water and in raw surface water was modified and tested against the standard multiple-tube fermentation technique (most probable number, MPN). The performance of the new test in detecting fecal pollution in drinking water was evaluated at different incubation temperatures. The basis for the new test was the detection of hydrogen sulfide produced by hydrogen sulfide-producing bacteria that are usually associated with the coliform group. Positive results are indicated by the appearance of a brown to black color in the contents of the fermentation tube within 18 to 24 hours of incubation at 35 ± 0.5°C. For this study, 158 water samples from different sources were used. The results were analyzed statistically with the paired t-test and one-way analysis of variance. No statistically significant difference was found between the two methods in detecting fecal pollution in drinking water when tested at 35 ± 0.5°C. The new test showed more positive results with raw surface water, which could be due to the presence of hydrogen sulfide-producing bacteria of non-fecal origin such as Desulfovibrio and Desulfotomaculum. The survival of the hydrogen sulfide-producing bacteria and the coliforms was also tested over a 7-day period, and the results showed no significant difference. The two methods showed no significant difference when used to detect fecal pollution at a very low coliform density. The results showed that the new test is most effective in detecting fecal pollution in drinking water when used at 35 ± 0.5°C. The new test is effective, simple, and less expensive when used to detect fecal pollution in drinking water and raw surface water at 35 ± 0.5°C. The method can be used for qualitative and/or quantitative analysis of water in the field and in the laboratory.
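As an illustrative sketch only (the study's raw data are not reproduced here), the paired t-test and one-way ANOVA comparison described above could be run in Python roughly as follows, with hypothetical positive-tube counts standing in for the 158 samples:

```python
# Illustrative sketch (hypothetical data, not the study's measurements):
# paired comparison of positive-tube counts from the H2S test and the MPN
# multiple-tube method on the same water samples.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_samples = 20                                   # hypothetical number of water samples
mpn_positives = rng.poisson(3.0, n_samples)      # positive tubes, MPN method
h2s_positives = np.maximum(0, mpn_positives + rng.integers(-1, 2, n_samples))  # H2S test

# Paired t-test: the same samples were measured by both methods
t_stat, p_paired = stats.ttest_rel(h2s_positives, mpn_positives)
print(f"paired t-test: t = {t_stat:.2f}, p = {p_paired:.3f}")

# One-way analysis of variance across the two methods
f_stat, p_anova = stats.f_oneway(h2s_positives, mpn_positives)
print(f"one-way ANOVA: F = {f_stat:.2f}, p = {p_anova:.3f}")
```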
Abstract:
Background The literature suggests that the distribution of female breast cancer mortality demonstrates spatial concentration. There remains a lack of studies on how the mortality burden may impact racial groups across space and over time. The present study evaluated the geographic variations in breast cancer mortality in Texas females according to three predominant racial groups (non-Hispanic White, Black, and Hispanic females) over a twelve-year period. It sought to clarify whether the spatiotemporal trend might place an uneven burden on particular racial groups, and whether the excess trend has persisted into the current decade. Methods The Spatial Scan Statistic was employed to examine the geographic excess of breast cancer mortality by race in Texas counties between 1990 and 2001. The scan was conducted with a temporal window of at most 90% of the study period and a spatial cluster size of at most 50% of the population at risk. A second scan was conducted with a purely spatial option to verify whether the excess mortality persisted further. Spatial queries were performed to locate the regions of excess mortality affecting multiple racial groups. Results The first scan identified 4 regions with excess breast cancer mortality in both the non-Hispanic White and Hispanic female populations. The most likely excess mortality, with a relative risk of 1.12 (p = 0.001), occurred between 1990 and 1996 for non-Hispanic Whites and included 42 Texas counties along the Gulf Coast and in Central Texas. For Hispanics, West Texas, with a relative risk of 1.18, was the most probable region of excess mortality (p = 0.001). Results of the second scan were identical to the first. This suggested that the excess mortality might not persist into the present decade. Spatial queries found that 3 counties in Southeast Texas and 9 counties in Central Texas had excess mortality involving multiple racial groups. Conclusion Spatiotemporal variations in breast cancer mortality affected racial groups at varying levels. There was neither evidence of hot-spot clusters nor persistent spatiotemporal trends of excess mortality into the present decade. Non-Hispanic Whites in the Gulf Coast and Hispanics in West Texas carried the highest burden of mortality, as evidenced by spatial concentration and temporal persistence.
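For readers unfamiliar with the Spatial Scan Statistic used above, the sketch below shows the Poisson log-likelihood ratio that the scan maximizes over candidate windows; the counts are hypothetical and are not the Texas mortality data analyzed in the study:

```python
# Minimal sketch of the Poisson-based spatial scan statistic's log-likelihood
# ratio for a single candidate window (Kulldorff's formulation). In practice
# the statistic is maximized over all windows and its p-value is obtained by
# Monte Carlo replication under the null; counts below are hypothetical.
import numpy as np

def scan_llr(cases_in, expected_in, cases_total, expected_total):
    """Log-likelihood ratio for one window; larger values indicate stronger clusters."""
    c, e = cases_in, expected_in
    c_out, e_out = cases_total - cases_in, expected_total - expected_in
    if c / e <= c_out / e_out:               # only elevated-risk windows are scored
        return 0.0
    return c * np.log(c / e) + c_out * np.log(c_out / e_out)

# Hypothetical example: 120 deaths observed vs. 100 expected inside the window,
# out of 1000 observed and 1000 expected statewide.
llr = scan_llr(120, 100.0, 1000, 1000.0)
relative_risk = (120 / 100.0) / ((1000 - 120) / (1000 - 100.0))
print(f"LLR = {llr:.2f}, relative risk = {relative_risk:.2f}")
```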
Abstract:
Proton therapy is growing increasingly popular due to its superior dose characteristics compared to conventional photon therapy. Protons travel a finite range in the patient body and stop, thereby delivering no dose beyond their range. However, because the range of a proton beam is heavily dependent on the tissue density along its beam path, uncertainties in patient setup position and in range calculation itself can degrade the dose distribution significantly. Despite these challenges, which are unique to proton therapy, current management of the uncertainties during treatment planning of proton therapy has been similar to that of conventional photon therapy. The goal of this dissertation research was to develop a treatment planning method and a plan-evaluation method that address proton-specific issues regarding setup and range uncertainties. Treatment plan design method adapted to proton therapy: Currently, for proton therapy using a scanning beam delivery system, setup uncertainties are largely accounted for by geometrically expanding a clinical target volume (CTV) to a planning target volume (PTV). However, a PTV alone cannot adequately account for range uncertainties coupled to misaligned patient anatomy in the beam path, since it does not account for the change in tissue density. To remedy this problem, we proposed a beam-specific PTV (bsPTV) that accounts for the change in tissue density along the beam path due to the uncertainties. Our proposed method was successfully implemented, and its superiority over the conventional PTV was shown through a controlled experiment. Furthermore, we showed that the bsPTV concept can be incorporated into beam angle optimization for better target coverage and normal tissue sparing for a selected lung cancer patient. Treatment plan evaluation method adapted to proton therapy: The dose-volume histogram of the clinical target volume (CTV) or any other volume of interest at the time of planning does not represent the most probable dosimetric outcome of a given plan, as it does not include the uncertainties mentioned earlier. Currently, the PTV is used as a surrogate of the CTV's worst-case scenario for target dose estimation. However, because proton dose distributions are subject to change under these uncertainties, the validity of the PTV analysis method is questionable. To remedy this problem, we proposed the use of statistical parameters to quantify uncertainties on both the dose-volume histogram and the dose distribution directly. The robust plan analysis tool was successfully implemented to compute both the expected value and the standard deviation of dosimetric parameters of a treatment plan under the uncertainties. For 15 lung cancer patients, the proposed method was used to quantify the dosimetric difference between the nominal situation and its expected value under the uncertainties.
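A minimal sketch of the robust-evaluation idea described above, assuming synthetic dose values rather than the 15 patients' plans: the expectation and standard deviation of a dose-volume parameter (here D95 of the CTV) are estimated over sampled uncertainty scenarios:

```python
# Minimal sketch (synthetic doses, hypothetical parameters): estimate the
# expected value and standard deviation of CTV D95 over sampled setup/range
# uncertainty scenarios. A real tool would recompute dose for each scenario.
import numpy as np

rng = np.random.default_rng(1)
n_voxels, n_scenarios = 5000, 50

nominal_dose = rng.normal(60.0, 1.0, n_voxels)                       # Gy, nominal plan
scenario_doses = nominal_dose + rng.normal(0.0, 1.5, (n_scenarios, n_voxels))

def d95(dose):
    """Dose received by at least 95% of the volume (5th percentile of voxel doses)."""
    return np.percentile(dose, 5)

d95_per_scenario = np.array([d95(d) for d in scenario_doses])
print(f"nominal D95  = {d95(nominal_dose):.1f} Gy")
print(f"expected D95 = {d95_per_scenario.mean():.1f} Gy "
      f"± {d95_per_scenario.std(ddof=1):.1f} Gy over {n_scenarios} scenarios")
```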
Abstract:
INTRODUCTION: Medical schools are charged with providing both a strong basic science curriculum and a strong clinical curriculum for their students. In most institutions, instruction in performing the core clinical procedures is part of the curriculum, but because of many constraints, do today's medical students practice these procedures as often as medical students did in the past? Several studies have concluded that medical students today feel incompetent to perform basic clinical procedures at the time of graduation. [See PDF for complete abstract]
Abstract:
This study examined the effects of skipping breakfast on selected aspects of children's cognition, specifically memory (both immediate and one week after presentation of stimuli), mental tempo, and problem-solving accuracy. Test instruments included the Hagen Central/Incidental Recall Test, the Matching Familiar Figures Test, and the McCarthy Digit Span and Tapping Tests. The study population consisted of 39 healthy nine- to eleven-year-old children who were admitted for overnight stays at a clinical research setting on two nights approximately one week apart. The study was designed to adequately monitor and control subjects' food consumption. A cross-over design was chosen in which, at random on either the first or second visit, the child skipped breakfast; in this way, subjects acted as their own controls. Subjects were tested at noon on both visits, representing an 18-hour fast. Analysis focused on whether fasting for this period affected an individual's performance. Results indicated that on most of the tests, subjects were not significantly affected by skipping breakfast for one morning. However, on tests of short-term central and incidental recall, subjects who had skipped breakfast recalled significantly more of the incidental cues, although at no apparent expense to their storing of central information. In the area of problem-solving accuracy, subjects skipping breakfast at time two made significantly more errors on hard sections of the MFF Test. It should be noted that although a large number of tests were conducted, these two tests showed the only significant differences. These significant results in short-term incidental memory and in problem-solving accuracy were interpreted as an effect of subject fatigue. That is, when subjects missed breakfast, they were more likely to become fatigued, and in the novel environment presented by the study setting, it is probable that these subjects responded by entering Class II fatigue, which is characterized by behavioral excitability, diffused attention, and altered performance patterns.
Abstract:
The problem of analyzing data with updated measurements in the time-dependent proportional hazards model arises frequently in practice. One available option is to reduce the number of intervals (or updated measurements) included in the Cox regression model. We empirically investigated the bias of the estimator for the time-dependent covariate while varying the failure rate, sample size, true parameter values, and number of intervals. We also evaluated how often a time-dependent covariate needs to be collected and assessed the effect of sample size and failure rate on the power of testing a time-dependent effect. A time-dependent proportional hazards model with two binary covariates was considered. The time axis was partitioned into k intervals. The baseline hazard was assumed to be 1, so that the failure times were exponentially distributed within the ith interval. A type II censoring model was adopted to characterize the failure rate. The factors of interest were sample size (500, 1000), type II censoring with failure rates of 0.05, 0.10, and 0.20, and three values for each of the non-time-dependent and time-dependent covariate coefficients (1/4, 1/2, 3/4). The mean bias of the estimator of the time-dependent covariate's coefficient decreased as sample size and the number of intervals increased, whereas it increased as the failure rate and the true coefficient values increased. The mean bias was smallest when all of the updated measurements were used in the model, compared with two models that used only selected measurements of the time-dependent covariate. For the model that included all the measurements, the coverage rates for the coefficient of the time-dependent covariate were in most cases 90% or more, except when the failure rate was high (0.20). The power associated with testing a time-dependent effect was highest when all of the measurements of the time-dependent covariate were used. An example from the Systolic Hypertension in the Elderly Program Cooperative Research Group is presented.
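As a rough sketch of the simulation setup described above (not the study's exact type II censoring scheme; this version simply censors at the end of follow-up, and the interval width and coefficient values are hypothetical), piecewise-exponential failure times with one fixed and one interval-updated binary covariate can be generated and fit with a time-dependent Cox model, assuming the lifelines package:

```python
# Illustrative sketch: piecewise-exponential failure times (baseline hazard 1)
# with a fixed binary covariate and an interval-updated binary covariate, fit
# in counting-process (start/stop) format with lifelines. Parameters are
# hypothetical; administrative censoring replaces the study's type II scheme.
import numpy as np
import pandas as pd
from lifelines import CoxTimeVaryingFitter

rng = np.random.default_rng(2)
n, k, width = 500, 5, 0.2          # subjects, intervals, interval width
beta_fixed, beta_td = 0.5, 0.5     # true log hazard ratios (hypothetical)

rows = []
for i in range(n):
    z = rng.integers(0, 2)                        # non-time-dependent binary covariate
    for j in range(k):
        x_t = rng.integers(0, 2)                  # updated measurement for this interval
        rate = np.exp(beta_fixed * z + beta_td * x_t)   # hazard, baseline = 1
        t = rng.exponential(1.0 / rate)
        event = t < width
        start, stop = j * width, j * width + min(t, width)
        rows.append((i, start, stop, int(event), z, x_t))
        if event:
            break

df = pd.DataFrame(rows, columns=["id", "start", "stop", "event", "z", "x_td"])

ctv = CoxTimeVaryingFitter()
ctv.fit(df, id_col="id", event_col="event", start_col="start", stop_col="stop")
ctv.print_summary()   # compare estimated coefficients with beta_fixed and beta_td
```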