7 results for mixed verification methods
in DigitalCommons@The Texas Medical Center
Abstract:
Despite extensive research on development in education and psychology, the methodology is not often tested with real data. A major barrier to testing growth models is that the study design involves repeated observations and the growth is nonlinear in nature. Repeated measurements on a nonlinear model require sophisticated statistical methods. In this study, we present a mixed-effects model with a negative exponential curve to describe the development of children's reading skills. This model can describe the nature of growth in children's reading skills and account for both intra-individual and inter-individual variation. We also apply simple techniques, including cross-validation, regression, and graphical methods, to determine the most appropriate curve for the data, to find efficient initial values for the parameters, and to select potential covariates. We illustrate with the example that motivated this research: a longitudinal study of academic skills from grade 1 to grade 12 in Connecticut public schools.
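The two-stage idea behind such a fit can be sketched in Python. This is a minimal illustration on simulated data under assumed parameter values, not the study's actual model or data: a negative exponential curve is fitted to each subject separately, and the subject-level estimates are then summarized to approximate the population curve and the inter-individual variation.

```python
import numpy as np
from scipy.optimize import curve_fit

def neg_exponential(t, asymptote, rate):
    """Negative exponential growth: rises toward `asymptote` at speed `rate`."""
    return asymptote * (1.0 - np.exp(-rate * t))

rng = np.random.default_rng(0)
grades = np.arange(1, 13, dtype=float)  # grade 1 through grade 12

# Simulate subjects whose asymptotes vary around a population mean
# (inter-individual variation), with measurement noise (intra-individual).
true_mean_asymptote, true_rate = 100.0, 0.35
subject_params = []
for _ in range(20):
    asym_i = true_mean_asymptote + rng.normal(0.0, 5.0)  # subject-level random effect
    scores = neg_exponential(grades, asym_i, true_rate) + rng.normal(0.0, 2.0, grades.size)
    # Stage 1: fit each subject separately; good initial values matter
    # for nonlinear fits, as the abstract emphasizes.
    params, _ = curve_fit(neg_exponential, grades, scores, p0=[90.0, 0.5])
    subject_params.append(params)

subject_params = np.array(subject_params)
# Stage 2: summarize the population curve and the between-subject spread.
mean_asymptote = subject_params[:, 0].mean()
mean_rate = subject_params[:, 1].mean()
sd_asymptote = subject_params[:, 0].std(ddof=1)
```

A full nonlinear mixed-effects fit would estimate the population and subject-level parameters jointly; the two-stage version above only conveys the structure the abstract describes.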
Abstract:
The current standard treatment for head and neck cancer at our institution uses intensity-modulated x-ray therapy (IMRT), which improves target coverage and sparing of critical structures by delivering complex fluence patterns from a variety of beam directions to conform dose distributions to the shape of the target volume. The standard treatment for breast patients is field-in-field forward-planned IMRT, with initial tangential fields and additional reduced-weight tangents with blocking to minimize hot spots. For these treatment sites, the addition of electrons has the potential to improve target coverage and sparing of critical structures due to rapid dose falloff with depth and reduced exit dose. In this work, the use of mixed-beam therapy (MBT), i.e., combined intensity-modulated electron and x-ray beams using the x-ray multi-leaf collimator (MLC), was explored. The hypothesis of this study was that the addition of intensity-modulated electron beams to existing clinical IMRT plans would produce MBT plans superior to the original IMRT plans for at least 50% of selected head and neck cases and 50% of breast cases. Dose calculations for electron beams collimated by the MLC were performed with Monte Carlo methods. An automation system was created to facilitate communication between the dose calculation engine and the treatment planning system. Energy and intensity modulation of the electron beams was accomplished by dividing the electron beams into 2 × 2 cm² beamlets, which were then beam-weight optimized along with the intensity-modulated x-ray beams. Treatment plans were optimized to obtain equivalent target dose coverage and then compared with the original treatment plans. MBT treatment plans were evaluated by participating physicians with respect to target coverage, normal structure dose, and overall plan quality in comparison with the original clinical plans.
The physician evaluations did not support the hypothesis for either site, with MBT selected as superior in only 1 of the 15 head and neck cases (p=1) and 6 of the 18 breast cases (p=0.95). While MBT was not shown to be superior to IMRT, reductions were observed in doses to critical structures distal to the target along the electron beam direction and to non-target tissues, at the expense of target coverage and dose homogeneity.
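The beam-weight optimization step described above can be shown in miniature. The sketch below is purely illustrative (random numbers stand in for the Monte Carlo dose-deposition calculations, and all dimensions and names are made up): each column of D holds the dose one beamlet deposits in each voxel, and nonnegative least squares finds beamlet weights that approach a prescribed uniform target dose.

```python
import numpy as np
from scipy.optimize import nnls

# Illustrative stand-in for a dose-deposition matrix: entry (v, b) is the
# dose beamlet b deposits in voxel v. In the study these values came from
# Monte Carlo calculations; here they are random numbers.
rng = np.random.default_rng(1)
n_voxels, n_beamlets = 50, 12
D = rng.uniform(0.0, 1.0, size=(n_voxels, n_beamlets))
prescription = np.full(n_voxels, 2.0)  # uniform prescribed dose (arbitrary units)

# Nonnegative least squares: physical beamlet weights cannot be negative.
weights, residual_norm = nnls(D, prescription)
achieved = D @ weights  # dose actually delivered by the optimized weights
```

Clinical optimizers add dose-volume objectives for targets and critical structures; the nonnegative least-squares fit above shows only the core weighting step.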
Abstract:
Detection of multidrug-resistant tuberculosis (MDR-TB), a frequent cause of treatment failure, takes 2 or more weeks by culture. RIF resistance is a hallmark of MDR-TB, and detection of mutations in the rpoB gene of Mycobacterium tuberculosis using molecular beacon probes with real-time quantitative polymerase chain reaction (qPCR) is a novel approach that takes ≤2 days. However, qPCR identification of resistant isolates, particularly isolates with mixed RIF-susceptible and RIF-resistant bacteria, is reader dependent, which limits its clinical use. The aim of this study was to develop an objective, reader-independent method to define rpoB mutants using beacon qPCR. This would facilitate the transition from a research protocol to the clinical setting, where high-throughput methods with objective interpretation are required. For this, DNAs from 107 M. tuberculosis clinical isolates with known susceptibility to RIF by culture-based methods were obtained from 2 regions where isolates had not previously been evaluated using molecular beacon qPCR: the Texas–Mexico border and Colombia. Using coded DNA specimens, mutations within an 81-bp hot-spot region of rpoB were established by qPCR with 5 beacons spanning this region. Visual and mathematical approaches were used to establish whether the qPCR cycle threshold of the experimental isolate was significantly higher (mutant) than that of a reference wild-type isolate. Visual classification of the beacon qPCR required reader training for strains with a mixture of RIF-susceptible and RIF-resistant bacteria. Only then did visual interpretation by an experienced reader achieve 100% sensitivity and 94.6% specificity versus RIF resistance by culture phenotype, and 98.1% sensitivity and 100% specificity versus mutations based on DNA sequence. The mathematical approach was 98% sensitive and 94.5% specific versus culture, and 96.2% sensitive and 100% specific versus DNA sequence.
Our findings indicate that the mathematical approach has advantages over visual reading: it uses a Microsoft Excel template to eliminate reader bias and inexperience, and it allows objective interpretation in high-throughput analyses, even in the presence of a mixture of RIF-resistant and RIF-susceptible isolates, without the need for reader training.
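The shape of such a reader-independent rule can be sketched as follows. The ΔCt cutoff, Ct values, and function names below are hypothetical, chosen only to show the logic: a beacon over a mutated region hybridizes poorly and crosses threshold later, so a sufficiently large Ct shift relative to the wild-type reference flags a mutation.

```python
def classify_beacon(ct_sample, ct_reference, delta_cutoff=3.0):
    """Call 'mutant' when the sample's Ct lags the wild-type reference
    by more than the cutoff (cutoff value is illustrative)."""
    return "mutant" if (ct_sample - ct_reference) > delta_cutoff else "wild-type"

def classify_isolate(sample_cts, reference_cts, delta_cutoff=3.0):
    """Five beacons span the 81-bp rpoB hot-spot region; a mutation under
    any one of them marks the isolate as RIF-resistant."""
    calls = [classify_beacon(s, r, delta_cutoff)
             for s, r in zip(sample_cts, reference_cts)]
    return "RIF-resistant" if "mutant" in calls else "RIF-susceptible"

# The third beacon crosses threshold ~7 cycles late, so the isolate is resistant.
result = classify_isolate([22.1, 21.8, 28.9, 22.3, 21.9],
                          [21.9, 21.7, 22.0, 22.1, 21.8])
print(result)  # prints RIF-resistant
```

The study's actual method tests whether the Ct shift is statistically significant rather than applying a fixed cutoff; the fixed cutoff here only illustrates the reader-free decision structure.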
Abstract:
Developing countries are heavily burdened by limited access to safe drinking water and by subsequent water-related diseases. Numerous water treatment interventions combat this public health crisis, encompassing both traditional and less common methods. Among these, water disinfection serves as an important means of providing safe drinking water. Existing literature discusses a wide range of traditional treatment options and encourages the use of multi-barrier approaches including coagulation-flocculation, filtration, and disinfection. Most sources do not delve into approaches specifically appropriate for developing countries, nor do they exclusively examine water disinfection methods.

The objective of this review is to focus on an extensive range of chemical, physico-chemical, and physical water disinfection techniques to provide a compilation, description, and evaluation of the options available. Such an objective provides further understanding and knowledge to better inform water treatment interventions and explores alternative means of water disinfection appropriate for developing countries. Appropriateness for developing countries corresponds to the effectiveness of an available, easy-to-use disinfection technique at providing safe drinking water at a low cost.

Among chemical disinfectants, SWS sodium hypochlorite solution is preferred over sodium hypochlorite bleach due to its consistent concentration. Tablet forms are highly recommended chemical disinfectants because they are effective, very easy to use, and stable. Examples include sodium dichloroisocyanurate, calcium hypochlorite, and chlorine dioxide, which vary in cost depending on location and availability. Among physico-chemical disinfection options, electrolysis, which produces mixed oxidants (MIOX), provides a highly effective disinfection option with a higher upfront cost but a very low cost over the long term.
Among physical disinfection options, solar disinfection (SODIS) applications are effective, but they treat only a fixed volume of water at a time. They come with higher initial costs but very low ongoing costs. Additional effective disinfection techniques may be suitable depending on location, availability, and cost.
Abstract:
Mixed longitudinal designs are important study designs for many areas of medical research. Mixed longitudinal studies have several advantages over cross-sectional or pure longitudinal studies, including shorter study completion time and the ability to separate time and age effects, and are thus an attractive choice. Statistical methodology for general longitudinal studies has developed rapidly within the last few decades. A common approach to statistical modeling in studies with mixed longitudinal designs has been the linear mixed-effects model incorporating an age or time effect. The general linear mixed-effects model is considered an appropriate choice for analyzing repeated measurements in longitudinal studies. However, the linear mixed-effects model is often applied to mixed longitudinal studies with age as the only random effect, failing to take the cohort effect into consideration when conducting statistical inferences on age-related trajectories of outcome measurements. We believe special attention should be paid to cohort effects when analyzing data from mixed longitudinal designs with multiple overlapping cohorts; this has therefore become an important statistical issue to address.

This research aims to address statistical issues related to mixed longitudinal studies. The proposed study examined the existing statistical analysis methods for mixed longitudinal designs and developed an alternative analytic method to incorporate effects from multiple overlapping cohorts as well as from subjects of different ages. The proposed study used simulation to evaluate the performance of the proposed analytic method by comparing it with the commonly used model. Finally, the study applied the proposed analytic method to data collected by an existing study, Project HeartBeat!, which had previously been evaluated using traditional analytic techniques. Project HeartBeat!
is a longitudinal study of cardiovascular disease (CVD) risk factors in childhood and adolescence using a mixed longitudinal design. The proposed model was used to evaluate four blood lipids, adjusting for age, gender, race/ethnicity, and endocrine hormones. The results of this dissertation suggest the proposed analytic model could be a more flexible and reliable choice than the traditional model in terms of fitting the data to provide more accurate estimates in mixed longitudinal studies. Conceptually, the proposed model described in this study has useful features, including consideration of effects from multiple overlapping cohorts, and is an attractive approach for analyzing data from mixed longitudinal design studies.
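The cohort issue the dissertation raises can be demonstrated with a small simulation (this illustrates the general point, not the proposed model itself; all numbers are made up). Three overlapping cohorts each carry a baseline shift; a regression on age alone absorbs the shift into the age slope, while adding cohort terms recovers the true slope.

```python
import numpy as np

# Three overlapping cohorts start at ages 8, 10, and 12, each with its own
# baseline shift; the true age slope is 2.0 throughout.
rng = np.random.default_rng(2)
rows = []
for cohort, (start_age, shift) in enumerate([(8, 0.0), (10, 3.0), (12, 6.0)]):
    for _subject in range(30):
        for age in range(start_age, start_age + 4):  # 4 annual measurements
            y = 2.0 * age + shift + rng.normal(0.0, 1.0)
            rows.append((age, cohort, y))
data = np.array(rows)
age, cohort, y = data[:, 0], data[:, 1], data[:, 2]

# Model 1: age only (the common practice the abstract critiques).
X1 = np.column_stack([np.ones_like(age), age])
slope_age_only = np.linalg.lstsq(X1, y, rcond=None)[0][1]

# Model 2: age plus cohort indicators (the kind of adjustment the
# dissertation argues for). Cohort correlates with age by design, so
# omitting it biases the age slope upward here.
X2 = np.column_stack([np.ones_like(age), age,
                      (cohort == 1).astype(float), (cohort == 2).astype(float)])
slope_adjusted = np.linalg.lstsq(X2, y, rcond=None)[0][1]
```

With the shifts chosen above, the age-only slope lands near 3.0 while the cohort-adjusted slope recovers the true value of 2.0; the dissertation's model additionally includes random effects, which ordinary least squares omits.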
New methods for quantification and analysis of quantitative real-time polymerase chain reaction data
Abstract:
Quantitative real-time polymerase chain reaction (qPCR) is a sensitive gene quantitation method that has been widely used in the biological and biomedical fields. The currently used methods for qPCR data analysis, including the threshold cycle (CT) method and linear and nonlinear model-fitting methods, all require subtracting background fluorescence. However, the removal of background fluorescence is usually inaccurate and can therefore distort results. Here, we propose a new method, the taking-difference linear regression method, to overcome this limitation. Briefly, for each pair of consecutive PCR cycles, we subtracted the fluorescence in the earlier cycle from that in the later cycle, transforming the n-cycle raw data into n−1 cycles of data. Linear regression was then applied to the natural logarithm of the transformed data. Finally, amplification efficiencies and initial DNA molecule numbers were calculated for each PCR run. To evaluate this new method, we compared it in terms of accuracy and precision with the original linear regression method under three background corrections: the mean of cycles 1-3, the mean of cycles 3-7, and the minimum. Three criteria, namely threshold identification, max R2, and max slope, were employed to search for target data points. Considering that PCR data are time series data, we also applied linear mixed models. Collectively, when the threshold identification criterion was applied and when the linear mixed model was adopted, the taking-difference linear regression method was superior, as it gave an accurate estimate of the initial DNA amount and a reasonable estimate of PCR amplification efficiencies. When the criteria of max R2 and max slope were used, the original linear regression method gave an accurate estimate of the initial DNA amount. Overall, the taking-difference linear regression method avoids the error in subtracting an unknown background and is thus theoretically more accurate and reliable.
This method is easy to perform, and the taking-difference strategy can be extended to all current methods for qPCR data analysis.
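The taking-difference idea can be sketched on simulated, noise-free data (parameter values are illustrative). In the exponential phase the raw signal is F_n = background + F0·E^n, so the difference of consecutive cycles, F_{n+1} − F_n = F0·(E − 1)·E^n, no longer contains the unknown background; regressing the log-differences on cycle number then recovers the efficiency E and the initial amount F0.

```python
import numpy as np

# Simulated exponential-phase fluorescence with an unknown constant background.
true_F0, true_E, background = 1e-4, 1.95, 5.0
cycles = np.arange(1, 21)
fluorescence = background + true_F0 * true_E**cycles

# Taking differences of consecutive cycles cancels the background entirely:
# n cycles of raw data become n-1 differences.
diffs = np.diff(fluorescence)

# ln(diff_c) = ln(F0 * (E - 1)) + c * ln(E): a straight line in cycle number.
slope, intercept = np.polyfit(cycles[:-1], np.log(diffs), 1)
est_E = np.exp(slope)                       # amplification efficiency
est_F0 = np.exp(intercept) / (est_E - 1.0)  # initial signal (proportional to initial DNA)
print(round(est_E, 2))  # prints 1.95
```

On noise-free data the regression recovers E and F0 exactly; with real data the method's criteria (threshold identification, max R², max slope) select which cycles enter the regression.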
Abstract:
The performance of the Hosmer-Lemeshow global goodness-of-fit statistic for logistic regression models was explored in a wide variety of conditions not previously fully investigated. Computer simulations, each consisting of 500 regression models, were run to assess the statistic in 23 different situations. The factors varied among the situations included the number of observations used in each regression, the number of covariates, the degree of dependence among the covariates, the combinations of continuous and discrete variables, and whether the values of the dependent variable were generated for model fit or lack of fit.

The study found that the Ĉg statistic was adequate in tests of significance for most situations. However, when testing data which deviate from a logistic model, the statistic has low power to detect such deviation. Although grouping the estimated probabilities into anywhere from 8 to 30 quantiles was studied, the deciles-of-risk approach was generally sufficient. Subdividing the estimated probabilities into more than 10 quantiles when there are many covariates in the model is not necessary, despite theoretical reasons suggesting otherwise. Because the statistic does not follow a χ² distribution in models containing only categorical variables with a limited number of covariate patterns, it is not recommended for use in such models.

The statistic performed adequately when there were at least 10 observations per quantile. Large numbers of observations per quantile did not lead to incorrect conclusions that the model did not fit the data when it actually did. However, the statistic failed to detect lack of fit when it existed and should be supplemented with further tests for the influence of individual observations.
Careful examination of the parameter estimates is also essential, since the statistic did not perform as desired when there was moderate to severe collinearity among covariates.

Two methods studied for handling tied values of the estimated probabilities made only a slight difference in conclusions about model fit. Neither method split observations with identical probabilities into different quantiles. Approaches which create equal-sized groups by separating ties should be avoided.
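For reference, the deciles-of-risk version of the statistic is straightforward to compute; the sketch below uses simulated, well-calibrated data (all values illustrative). Observations are ranked by fitted probability and split into 10 groups, observed event counts are compared with expected counts in each group, and the total is referred to a χ² distribution with g − 2 degrees of freedom.

```python
import numpy as np

def hosmer_lemeshow(p_hat, y, n_groups=10):
    """Deciles-of-risk Hosmer-Lemeshow statistic: sum over groups of
    (observed - expected)^2 / (n_g * p_bar * (1 - p_bar))."""
    order = np.argsort(p_hat)
    stat = 0.0
    for g in np.array_split(order, n_groups):  # near-equal groups by rank
        n_g = len(g)
        p_bar = p_hat[g].mean()
        observed = y[g].sum()
        expected = n_g * p_bar
        stat += (observed - expected) ** 2 / (n_g * p_bar * (1.0 - p_bar))
    return stat  # compare against chi-square with n_groups - 2 df

# Well-calibrated fitted probabilities: outcomes are generated from p_hat
# itself, so the statistic should resemble a chi-square(8) draw.
rng = np.random.default_rng(3)
p_hat = rng.uniform(0.05, 0.95, 1000)
y = (rng.uniform(size=1000) < p_hat).astype(float)
stat = hosmer_lemeshow(p_hat, y)
```

Note that with continuous probabilities rank-based splitting creates no tie problem; with many tied probabilities the grouping choice matters, which is the issue the final paragraph addresses.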