12 results for Basophil Degranulation Test -- methods
in DigitalCommons@The Texas Medical Center
Abstract:
BACKGROUND: Early detection of colorectal cancer through timely follow-up of positive Fecal Occult Blood Tests (FOBTs) remains a challenge. In our previous work, we found 40% of positive FOBT results eligible for colonoscopy had no documented response by a treating clinician at two weeks despite procedures for electronic result notification. We determined if technical and/or workflow-related aspects of automated communication in the electronic health record could lead to the lack of response. METHODS: Using both qualitative and quantitative methods, we evaluated positive FOBT communication in the electronic health record of a large, urban facility between May 2008 and March 2009. We identified the source of test result communication breakdown, and developed an intervention to fix the problem. Explicit medical record reviews measured timely follow-up (defined as response within 30 days of positive FOBT) pre- and post-intervention. RESULTS: Data from 11 interviews and tracking information from 490 FOBT alerts revealed that the software intended to alert primary care practitioners (PCPs) of positive FOBT results was not configured correctly and over a third of positive FOBTs were not transmitted to PCPs. Upon correction of the technical problem, lack of timely follow-up decreased immediately from 29.9% to 5.4% (p<0.01) and was sustained at month 4 following the intervention. CONCLUSION: Electronic communication of positive FOBT results should be monitored to avoid limiting colorectal cancer screening benefits. Robust quality assurance and oversight systems are needed to achieve this. Our methods may be useful for others seeking to improve follow-up of FOBTs in their systems.
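The headline result, a drop in lack of timely follow-up from 29.9% to 5.4% with p < 0.01, is a classic two-proportion comparison. Below is a minimal sketch of such a test in Python; the group sizes are assumptions, since the abstract reports only the percentages, not the pre- and post-intervention denominators.

```python
# Two-proportion z-test for the pre- vs. post-intervention comparison.
# NOTE: n_pre and n_post are hypothetical; the abstract reports only
# the percentages (29.9% vs. 5.4%) and p < 0.01, not the denominators.
from statsmodels.stats.proportion import proportions_ztest

n_pre, n_post = 200, 200  # assumed group sizes (not from the abstract)
lacking = [round(0.299 * n_pre), round(0.054 * n_post)]  # counts lacking follow-up
nobs = [n_pre, n_post]

stat, pvalue = proportions_ztest(lacking, nobs)
print(f"z = {stat:.2f}, p = {pvalue:.4f}")  # p < 0.01 at these sample sizes
```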
Abstract:
BACKGROUND: Follow-up of abnormal outpatient laboratory test results is a major patient safety concern. Electronic medical records can potentially address this concern through automated notification. We examined whether automated notifications of abnormal laboratory results (alerts) in an integrated electronic medical record resulted in timely follow-up actions. METHODS: We studied 4 alerts: hemoglobin A1c ≥15%, positive hepatitis C antibody, prostate-specific antigen ≥15 ng/mL, and thyroid-stimulating hormone ≥15 mIU/L. An alert tracking system determined whether the alert was acknowledged (ie, provider clicked on and opened the message) within 2 weeks of transmission; acknowledged alerts were considered read. Within 30 days of result transmission, record review and provider contact determined follow-up actions (eg, patient contact, treatment). Multivariable logistic regression models analyzed predictors for lack of timely follow-up. RESULTS: Between May and December 2008, 78,158 tests (hemoglobin A1c, hepatitis C antibody, thyroid-stimulating hormone, and prostate-specific antigen) were performed, of which 1163 (1.48%) were transmitted as alerts; 10.2% of these (119/1163) were unacknowledged. Timely follow-up was lacking in 79 (6.8%), and was statistically not different for acknowledged and unacknowledged alerts (6.4% vs 10.1%; P = .13). Of 1163 alerts, 202 (17.4%) arose from unnecessarily ordered (redundant) tests. Alerts for a new versus known diagnosis were more likely to lack timely follow-up (odds ratio 7.35; 95% confidence interval, 4.16-12.97), whereas alerts related to redundant tests were less likely to lack timely follow-up (odds ratio 0.24; 95% confidence interval, 0.07-0.84). CONCLUSIONS: Safety concerns related to timely patient follow-up remain despite automated notification of non-life-threatening abnormal laboratory results in the outpatient setting.
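The predictor analysis here is a multivariable logistic regression with lack of timely follow-up as the outcome. Below is a minimal sketch of that kind of model with statsmodels; the dataframe, covariate names, and simulated prevalences are hypothetical stand-ins, so the fitted odds ratios will not reproduce the study's values.

```python
# Logistic regression for predictors of lacking timely follow-up.
# The dataframe and column names are hypothetical; the study's own
# covariates included whether the alert flagged a new vs. known
# diagnosis and whether the test was redundantly ordered. Because the
# data below are random, the fitted odds ratios will be near 1.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 1163  # number of alerts, as in the abstract
df = pd.DataFrame({
    "no_followup": rng.binomial(1, 0.07, n),      # ~6.8% lacked follow-up
    "new_diagnosis": rng.binomial(1, 0.3, n),     # hypothetical prevalence
    "redundant_test": rng.binomial(1, 0.174, n),  # 17.4% redundant
})
model = smf.logit("no_followup ~ new_diagnosis + redundant_test", data=df).fit()
print(np.exp(model.params))      # odds ratios
print(np.exp(model.conf_int()))  # 95% confidence intervals
```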
Abstract:
BACKGROUND: Given the fragmentation of outpatient care, timely follow-up of abnormal diagnostic imaging results remains a challenge. We hypothesized that an electronic medical record (EMR) that facilitates the transmission and availability of critical imaging results through either automated notification (alerting) or direct access to the primary report would eliminate this problem. METHODS: We studied critical imaging alert notifications in the outpatient setting of a tertiary care Department of Veterans Affairs facility from November 2007 to June 2008. Tracking software determined whether the alert was acknowledged (ie, health care practitioner/provider [HCP] opened the message for viewing) within 2 weeks of transmission; acknowledged alerts were considered read. We reviewed medical records and contacted HCPs to determine timely follow-up actions (eg, ordering a follow-up test or consultation) within 4 weeks of transmission. Multivariable logistic regression models accounting for clustering effect by HCPs analyzed predictors for 2 outcomes: lack of acknowledgment and lack of timely follow-up. RESULTS: Of 123,638 studies (including radiographs, computed tomographic scans, ultrasonograms, magnetic resonance images, and mammograms), 1196 images (0.97%) generated alerts; 217 (18.1%) of these were unacknowledged. Alerts had a higher risk of being unacknowledged when the ordering HCPs were trainees (odds ratio [OR], 5.58; 95% confidence interval [CI], 2.86-10.89) and when dual-alert (>1 HCP alerted) as opposed to single-alert communication was used (OR, 2.02; 95% CI, 1.22-3.36). Timely follow-up was lacking in 92 (7.7% of all alerts) and was similar for acknowledged and unacknowledged alerts (7.3% vs 9.7%; P = .22). Risk for lack of timely follow-up was higher with dual-alert communication (OR, 1.99; 95% CI, 1.06-3.48) but lower when additional verbal communication was used by the radiologist (OR, 0.12; 95% CI, 0.04-0.38). Nearly all abnormal results lacking timely follow-up at 4 weeks were eventually found to have measurable clinical impact in terms of further diagnostic testing or treatment. CONCLUSIONS: Critical imaging results may not receive timely follow-up actions even when HCPs receive and read results in an advanced, integrated electronic medical record system. A multidisciplinary approach is needed to improve patient safety in this area.
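This study's models additionally account for clustering of alerts by HCP. The abstract does not say which adjustment was used; a generalized estimating equation (GEE) with an exchangeable working correlation, clustered on provider, is one common choice, sketched below with invented data and column names.

```python
# Logistic GEE clustered on the ordering provider (HCP).
# All data below are simulated placeholders; the point is the model form,
# not reproduction of the study's odds ratios.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 1196  # alerts, as in the abstract
df = pd.DataFrame({
    "unacknowledged": rng.binomial(1, 0.18, n),  # ~18.1% unacknowledged
    "trainee": rng.binomial(1, 0.25, n),         # hypothetical covariate
    "dual_alert": rng.binomial(1, 0.4, n),       # hypothetical covariate
    "hcp_id": rng.integers(0, 150, n),           # clustering variable
})
model = smf.gee("unacknowledged ~ trainee + dual_alert",
                groups="hcp_id", data=df,
                family=sm.families.Binomial(),
                cov_struct=sm.cov_struct.Exchangeable()).fit()
print(np.exp(model.params))  # odds ratios with cluster-robust inference
```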
Abstract:
Calcium levels in spines play a significant role in determining the sign and magnitude of synaptic plasticity. The magnitude of calcium influx into spines is highly dependent on influx through N-methyl D-aspartate (NMDA) receptors, and therefore depends on the number of postsynaptic NMDA receptors in each spine. We have calculated previously how the number of postsynaptic NMDA receptors determines the mean and variance of calcium transients in the postsynaptic density, and how this alters the shape of plasticity curves. However, the number of postsynaptic NMDA receptors in the postsynaptic density is not well known. Anatomical methods for estimating the number of NMDA receptors produce estimates that are very different from those produced by physiological techniques. The physiological techniques are based on the statistics of synaptic transmission, and it is difficult to experimentally estimate their precision. In this paper we use stochastic simulations to test the validity of a physiological estimation technique based on failure analysis. We find that the method is likely to underestimate the number of postsynaptic NMDA receptors, explain the source of the error, and re-derive a more precise estimation technique. We also show that the original failure analysis as well as our improved formulas are not robust to small estimation errors in key parameters.
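The failure analysis under test estimates receptor number from the fraction of trials with no receptor openings: if each of N receptors opens independently with probability p, the failure probability is (1 - p)^N, so N can be estimated as ln(F)/ln(1 - p). Below is a minimal stochastic sketch of that estimator under a simple binomial opening model; the parameters are illustrative, and this toy model omits the biological sources of bias the paper analyzes.

```python
# Failure-analysis estimate of NMDA receptor number.
# Toy binomial model: each of N receptors opens independently with
# probability p_open on a given trial; a "failure" is a trial with
# zero openings. Parameters are illustrative, not from the paper.
import numpy as np

rng = np.random.default_rng(42)
N_true, p_open, n_trials = 20, 0.1, 5000

openings = rng.binomial(N_true, p_open, n_trials)
f_hat = np.mean(openings == 0)               # observed failure fraction
N_hat = np.log(f_hat) / np.log(1 - p_open)   # invert F = (1 - p)^N

print(f"true N = {N_true}, estimated N = {N_hat:.1f}")
```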
Abstract:
Linkage disequilibrium methods can be used to find genes influencing quantitative trait variation in humans, and they can require smaller sample sizes than linkage equilibrium methods, such as the variance component approach, to find loci of a given effect size. The increase in power comes at the expense of requiring more markers to be typed to scan the entire genome. This thesis compares different linkage disequilibrium methods to determine which factors influence the power to detect disequilibrium. The costs of disequilibrium and equilibrium tests were compared to determine whether the savings in phenotyping costs when using disequilibrium methods outweigh the additional genotyping costs. Nine linkage disequilibrium tests were examined by simulation. Five tests involve selecting isolated unrelated individuals, while four involve the selection of parent-child trios (TDT). All nine tests were found to identify disequilibrium with the correct significance level in Hardy-Weinberg populations. Increasing linked genetic variance and trait allele frequency increased the power to detect disequilibrium, while increasing the number of generations and the distance between marker and trait loci decreased it. Discordant sampling was used for several of the tests; the more stringent the sampling, the greater the power to detect disequilibrium in a sample of given size. The power to detect disequilibrium was not affected by the presence of polygenic effects. When the trait locus had more than two trait alleles, the power of the tests reached a maximum of less than one. For the simulation methods used here, when there were more than two trait alleles there was a probability equal to 1 minus the heterozygosity of the marker locus that both trait alleles were in disequilibrium with the same marker allele, making the marker uninformative for disequilibrium. The five tests using isolated unrelated individuals were found to have excess error rates when there was disequilibrium due to population admixture. Increased error rates also resulted from increased unlinked major gene effects, discordant trait allele frequency, and increased disequilibrium. Polygenic effects did not affect the error rates. Tests based on the TDT (Transmission Disequilibrium Test) were not subject to any increase in error rates. For all sample ascertainment costs, for recent mutations (<100 generations), linkage disequilibrium tests were less expensive to carry out than the variance component test. Candidate gene scans saved even more money. The use of recently admixed populations also decreased the cost of performing a linkage disequilibrium test.
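The TDT-based tests mentioned above compare, among heterozygous parents, how often a marker allele is transmitted (b) versus not transmitted (c) to the selected child; the McNemar-type statistic (b - c)^2/(b + c) is chi-square with 1 df under no disequilibrium. A minimal sketch with invented counts:

```python
# Transmission Disequilibrium Test (TDT) from transmission counts.
# b = transmissions of the candidate allele from heterozygous parents,
# c = non-transmissions; counts below are invented for illustration.
from scipy.stats import chi2

b, c = 60, 38
tdt_stat = (b - c) ** 2 / (b + c)   # McNemar-type chi-square, 1 df
p_value = chi2.sf(tdt_stat, df=1)
print(f"TDT chi-square = {tdt_stat:.2f}, p = {p_value:.4f}")
```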
Abstract:
A new technique for the detection of microbiological fecal pollution in drinking and raw surface water was modified and tested against the standard multiple-tube fermentation technique (most probable number, MPN). The performance of the new test in detecting fecal pollution in drinking water was evaluated at different incubation temperatures. The basis of the new test is the detection of hydrogen sulfide produced by hydrogen sulfide-producing bacteria, which are usually associated with the coliform group. Positive results are indicated by the appearance of a brown to black color in the contents of the fermentation tube within 18 to 24 hours of incubation at 35 ± 0.5°C. For this study, 158 water samples from different sources were used. The results were analyzed statistically with the paired t-test and one-way analysis of variance. No statistically significant difference was observed between the two methods, when tested at 35 ± 0.5°C, in detecting fecal pollution in drinking water. The new test showed more positive results with raw surface water, which could be due to the presence of hydrogen sulfide-producing bacteria of non-fecal origin such as Desulfovibrio and Desulfotomaculum. The survival of the hydrogen sulfide-producing bacteria and the coliforms was also tested over a 7-day period, and the results showed no significant difference. The two methods showed no significant difference when used to detect fecal pollution at very low coliform densities. The results showed that the new test is most effective in detecting fecal pollution in drinking water when used at 35 ± 0.5°C. The new test is effective, simple, and less expensive when used to detect fecal pollution in drinking water and raw surface water at 35 ± 0.5°C. The method can be used for qualitative and/or quantitative analysis of water in the field and in the laboratory.
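The paired t-test used here compares the two methods on the same water samples. A minimal sketch with scipy; the simulated counts are placeholders for the paired results from the study's 158 samples.

```python
# Paired t-test comparing H2S-test and MPN results on the same samples.
# The simulated values are placeholders; the study used 158 real samples.
import numpy as np
from scipy.stats import ttest_rel

rng = np.random.default_rng(7)
mpn = rng.poisson(12, 158)             # hypothetical MPN indices
h2s = mpn + rng.integers(-2, 3, 158)   # hypothetical H2S-test values

stat, p = ttest_rel(h2s, mpn)
print(f"t = {stat:.2f}, p = {p:.3f}")  # no significant difference expected
```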
Abstract:
Background. About a third of the world's population is infected with tuberculosis (TB), with sub-Saharan Africa the worst hit. Uganda is ranked 16th among the countries with the highest TB burden; the burden in children, however, has not been determined. The burden of TB has been worsened by the advent of HIV, and TB is the leading cause of mortality in HIV-infected individuals. Development of TB disease can be prevented if TB is diagnosed during its latent stage and treated with isoniazid. For over a century, latent TB infection (LTBI) was diagnosed using the Tuberculin Skin Test (TST). New interferon gamma release assays (IGRAs) have been approved by the FDA for the diagnosis of LTBI, and adult studies have shown that IGRAs are superior to the TST, but there have been few studies in children, especially in areas of high TB and HIV endemicity. Objective. The objective of this study was to examine whether IGRAs have a role in LTBI diagnosis in HIV-infected children in Uganda. Methods. Three hundred eighty-one (381) children were recruited at the Baylor College of Medicine-Bristol-Myers Squibb Children's Clinical Center of Excellence at Mulago Hospital, Kampala, Uganda, between March and August 2010. All children received a TST and the T-SPOT.TB test, the IGRA chosen for this study. Sputum examination and chest x-rays were also done to rule out active TB. Results. There was no statistically significant difference between the tests. The agreement between the two assays was 95.9% and the kappa statistic was 0.7 (95% CI: 0.55-0.85, p-value < 0.05), indicating substantial agreement. The TST was associated with older age and higher weight-for-age z-scores, but the T-SPOT.TB was not. Both tests were associated with a history of taking anti-retroviral therapy (ART). Conclusion. Before promoting the use of IGRAs in children living in HIV/TB-endemic countries, more research needs to be done.
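The agreement figures quoted (95.9% raw agreement, kappa = 0.7) come from a 2x2 cross-classification of TST and T-SPOT.TB results. Below is a minimal sketch of computing Cohen's kappa from such a table; the cell counts are invented, chosen only to give roughly 96% agreement among 381 children, and are not the study's data.

```python
# Cohen's kappa for TST vs. T-SPOT.TB agreement from a 2x2 table.
# Cell counts are invented, chosen only to give ~96% raw agreement
# among 381 children, as in the abstract; they are not the study data.
import numpy as np

#                 T-SPOT.TB +   T-SPOT.TB -
table = np.array([[30,           9],     # TST +
                  [ 7,         335]])    # TST -

n = table.sum()
po = np.trace(table) / n                         # observed agreement
pe = (table.sum(1) * table.sum(0)).sum() / n**2  # chance-expected agreement
kappa = (po - pe) / (1 - pe)
print(f"agreement = {po:.1%}, kappa = {kappa:.2f}")
```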
Abstract:
Purpose. To evaluate the use of the Legionella Urine Antigen Test as a cost-effective method for diagnosing Legionnaires' disease in five San Antonio hospitals from January 2007 to December 2009. Methods. The data reported by five San Antonio hospitals to the San Antonio Metropolitan Health District during a 3-year retrospective study (January 2007 to December 2009) were evaluated for the frequency of non-specific pneumonia infections, the number of Legionella Urine Antigen Tests performed, and the percentage of positive cases of Legionnaires' disease diagnosed by the Legionella Urine Antigen Test. Results. There were a total of 7,087 cases of non-specific pneumonia reported across the five San Antonio hospitals studied from 2007 to 2009. A total of 5,371 Legionella Urine Antigen Tests were performed from January 2007 to December 2009 across the five hospitals in the study. A total of 38 positive cases of Legionnaires' disease were identified by the Legionella Urine Antigen Test from 2007 to 2009. Conclusions. In spite of the limitations of this study in obtaining sufficient relevant data to evaluate the cost-effectiveness of the Legionella Urine Antigen Test in diagnosing Legionnaires' disease, the test is simple, accurate, and fast, as results can be obtained within minutes to hours, and convenient, because it can be performed in the emergency department on any patient who presents with clinical signs or symptoms of pneumonia. Over the long run, it remains to be shown whether this test can decrease mortality, lower total medical costs by decreasing the number of broad-spectrum antibiotics prescribed, shorten patient wait times and hospital stays, decrease the need for unnecessary ancillary testing, and improve overall patient outcomes.
Abstract:
It has been hypothesized that results from short-term bioassays will ultimately provide information useful for human health hazard assessment. Although toxicologic test systems have become increasingly refined, to date no investigator has been able to provide qualitative or quantitative methods that would support the use of short-term tests in this capacity. Historically, the validity of short-term tests has been assessed within the framework of epidemiologic/medical screens. In this context, the results of the carcinogen (long-term) bioassay are generally used as the standard. However, this approach is widely recognized as being biased and, because it employs qualitative data, cannot be used in the setting of priorities. In contrast, the goal of this research was to address the problem of evaluating the utility of short-term tests for hazard assessment using an alternative method of investigation. Chemical carcinogens were selected from the list of carcinogens published by the International Agency for Research on Cancer (IARC). Tumorigenicity and mutagenicity data on fifty-two chemicals were obtained from the Registry of Toxic Effects of Chemical Substances (RTECS) and were analyzed using a relative potency approach. The relative potency framework allows for the standardization of data "relative" to a reference compound. To avoid any bias associated with the choice of the reference compound, fourteen different compounds were used. The data were evaluated in a format that allowed a comparison of the ranking of the mutagenic relative potencies of the compounds (as estimated from short-term data) vs. the ranking of the tumorigenic relative potencies (as estimated from the chronic bioassays). The results were statistically significant (p < .05) for data standardized to thirteen of the fourteen reference compounds. Although this was a preliminary investigation, it offers evidence that short-term test systems may be of utility in ranking the hazards represented by chemicals that may be human carcinogens.
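The evaluation compares the ranking of mutagenic relative potencies against the ranking of tumorigenic relative potencies across the 52 chemicals. The abstract does not name the rank statistic used; a Spearman rank correlation is one natural choice, sketched here with invented potency values.

```python
# Rank comparison of mutagenic vs. tumorigenic relative potencies.
# The abstract does not name the rank statistic used; Spearman's rho is
# one natural choice. Potency values below are invented placeholders.
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(3)
log_tumorigenic = rng.normal(0, 1, 52)                    # 52 chemicals
log_mutagenic = log_tumorigenic + rng.normal(0, 0.8, 52)  # correlated by design

rho, p = spearmanr(log_mutagenic, log_tumorigenic)
print(f"Spearman rho = {rho:.2f}, p = {p:.4f}")
```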
Abstract:
The weekly pattern of births has been reported by many studies: births occurring on weekends are consistently fewer than births occurring on weekdays. This study employed two statistical methods, two-way ANOVA and the two-way Friedman test, to analyze the daily variation among 222,735 births from 2005 to 2007 in Harris County, Texas. The two methods were compared on their assumptions, procedures, and results. Both tests showed a significant result, indicating that births are not uniformly distributed across the week. Multiple comparisons demonstrated that births occurring on weekends differed significantly from births occurring on weekdays, with the fewest births on Sundays.
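Friedman's test here treats each week as a block and the seven days of the week as the repeated groups. A minimal sketch with scipy using simulated weekly counts; the real analysis used the 222,735 Harris County births.

```python
# Two-way (blocked) Friedman test: weeks as blocks, weekdays as groups.
# Weekly birth counts below are simulated placeholders, with a weekend
# dip built in to mimic the reported pattern.
import numpy as np
from scipy.stats import friedmanchisquare

rng = np.random.default_rng(5)
weeks = 156  # roughly 3 years of weeks, 2005-2007
base = np.array([210, 220, 225, 222, 218, 160, 140])  # Mon..Sun means
counts = rng.poisson(base, size=(weeks, 7))           # one row per week

stat, p = friedmanchisquare(*counts.T)  # one sample per weekday
print(f"Friedman chi-square = {stat:.1f}, p = {p:.2e}")
```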
Abstract:
The determination of the size and power of a test is a vital part of clinical trial design. This research focuses on the simulation of clinical trial data with time-to-event as the primary outcome. It investigates the impact of different recruitment patterns and time-dependent hazard structures on the size and power of the log-rank test. A non-homogeneous Poisson process is used to simulate entry times according to the different accrual patterns. A Weibull distribution is employed to simulate survival times according to the different hazard structures. The current study uses simulation methods to evaluate the effect of different recruitment patterns on size and power estimates of the log-rank test. The size of the log-rank test is estimated by simulating survival times with identical hazard rates in the treatment and control arms, yielding a hazard ratio of one. The power of the log-rank test at specific values of the hazard ratio (≠1) is estimated by simulating survival times with different, but proportional, hazard rates for the two arms. Different shapes (constant, decreasing, or increasing) of the Weibull hazard function are also considered to assess the effect of hazard structure on the size and power of the log-rank test.
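The simulation machinery described here (non-homogeneous Poisson accrual, Weibull event times, proportional hazards between arms) can be sketched compactly. Below is a minimal Monte Carlo estimate of log-rank power; all parameter values are illustrative, and the log-rank test itself comes from the lifelines package.

```python
# Monte Carlo power of the log-rank test with NHPP accrual and
# Weibull survival. All parameters are illustrative.
import numpy as np
from lifelines.statistics import logrank_test

rng = np.random.default_rng(11)

def nhpp_entries(n, accrual_end):
    """Entry times from a ramp-up NHPP, lambda(t) ~ t, via inverse CDF."""
    # For lambda(t) proportional to t, the entry-time CDF is (t/T)^2.
    return accrual_end * np.sqrt(rng.uniform(size=n))

def simulate_trial(n_per_arm=100, shape=1.5, scale=10.0, hr=0.6,
                   accrual_end=12.0, study_end=36.0):
    entry = nhpp_entries(2 * n_per_arm, accrual_end)
    # Proportional hazards: treatment scale = scale * hr**(-1/shape).
    scales = np.r_[np.full(n_per_arm, scale),
                   np.full(n_per_arm, scale * hr ** (-1 / shape))]
    event_time = scales * rng.weibull(shape, 2 * n_per_arm)
    follow_up = study_end - entry            # administrative censoring
    time = np.minimum(event_time, follow_up)
    observed = event_time <= follow_up
    res = logrank_test(time[:n_per_arm], time[n_per_arm:],
                       event_observed_A=observed[:n_per_arm],
                       event_observed_B=observed[n_per_arm:])
    return res.p_value

pvals = np.array([simulate_trial() for _ in range(500)])
print("empirical power:", np.mean(pvals < 0.05))
# Setting hr=1.0 instead estimates the empirical size of the test.
```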
Abstract:
An interim analysis is usually applied in later phase II or phase III trials to look for convincing evidence of a significant treatment difference that may allow the trial to be terminated earlier than originally planned. This can save patient resources and shorten drug development and approval time; ethical and economic considerations are additional reasons to stop a trial early. In clinical trials of eyes, ears, knees, arms, kidneys, lungs, and other clustered treatments, data may include distribution-free random variables with matched and unmatched subjects in one study. It is important to properly include both kinds of subjects in the interim and final analyses so that maximum efficiency of statistical and clinical inference can be obtained at different stages of the trial. So far, no publication has applied a statistical method for distribution-free data with matched and unmatched subjects in the interim analysis of clinical trials. In this simulation study, the hybrid statistic was used to estimate empirical powers and empirical type I errors among simulated datasets with different sample sizes, effect sizes, correlation coefficients for matched pairs, and data distributions, in the interim and final analyses with 4 different group sequential methods. Empirical powers and empirical type I errors were also compared to those estimated using the meta-analysis t-test on the same simulated datasets. Results from this simulation study show that, compared to the meta-analysis t-test commonly used for normally distributed observations, the hybrid statistic has greater power for data from normally, log-normally, and multinomially distributed random variables with matched and unmatched subjects and with outliers. Power rose with increases in sample size, effect size, and the correlation coefficient for matched pairs. In addition, lower type I errors were observed with the hybrid statistic, indicating that the test is also conservative for data with outliers in the interim analysis of clinical trials.
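The hybrid statistic itself is specific to this thesis, but the surrounding machinery is generic: simulate many trials, test once at an interim look and once at the final look against group sequential boundaries, and count rejections. Below is a minimal sketch using a Pocock two-look boundary (nominal level about 0.0294 per look for an overall alpha of 0.05) and an ordinary two-sample t-test as a stand-in for the hybrid statistic.

```python
# Empirical power / type I error for a two-look group sequential design.
# An ordinary two-sample t-test stands in for the thesis's hybrid
# statistic; the Pocock two-look nominal level is ~0.0294 per look.
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(13)
ALPHA_POCOCK = 0.0294  # per-look boundary for overall alpha = 0.05

def sequential_trial(effect=0.5, n_final=100):
    x = rng.normal(effect, 1, n_final)   # treatment arm
    y = rng.normal(0.0, 1, n_final)      # control arm
    n_interim = n_final // 2
    for n in (n_interim, n_final):       # interim look, then final look
        if ttest_ind(x[:n], y[:n]).pvalue < ALPHA_POCOCK:
            return True                  # reject and stop
    return False

reps = 2000
power = np.mean([sequential_trial(effect=0.5) for _ in range(reps)])
type1 = np.mean([sequential_trial(effect=0.0) for _ in range(reps)])
print(f"empirical power = {power:.3f}, empirical type I error = {type1:.3f}")
```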