8 results for Accuracy.

in DigitalCommons@The Texas Medical Center


Relevance: 20.00%

Abstract:

This project compared the accuracy of capturing oral pathology diagnoses among different coding systems. Fifty-five diagnoses were selected for comparison across 5 coding systems. The accuracy in capturing oral diagnoses was: AFIP (96.4%), followed by Read 99 (85.5%), SNOMED 98 (74.5%), ICD-9 (43.6%), and CDT-3 (14.5%). These results show that the currently used coding systems, ICD-9 and CDT-3, are inadequate, whereas the AFIP coding system captured the majority of oral diagnoses. In conclusion, the most commonly used medical and dental coding systems lack terms for the diagnosis of oral and dental conditions.
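For reference, the reported percentages reduce to simple counts over the 55 sampled diagnoses. A minimal sketch (the per-system counts are back-calculated from the reported rates, not taken from the study itself):

```python
# Capture rates as counts out of the 55 sampled diagnoses.
# Counts are back-calculated from the reported percentages (hypothetical reconstruction).
captured = {"AFIP": 53, "Read 99": 47, "SNOMED 98": 41, "ICD-9": 24, "CDT-3": 8}
TOTAL = 55

for system, n in captured.items():
    print(f"{system}: {n}/{TOTAL} = {100 * n / TOTAL:.1f}%")  # e.g. AFIP: 53/55 = 96.4%
```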

Relevance: 20.00%

Abstract:

OBJECTIVES: To determine the prevalence of false or misleading statements in messages posted by internet cancer support groups and whether these statements were identified as false or misleading and corrected by other participants in subsequent postings. DESIGN: Analysis of content of postings. SETTING: Internet cancer support group Breast Cancer Mailing List. MAIN OUTCOME MEASURES: Number of false or misleading statements posted from 1 January to 23 April 2005 and whether these were identified and corrected by participants in subsequent postings. RESULTS: 10 of 4600 postings (0.22%) were found to be false or misleading. Of these, seven were identified as false or misleading by other participants and corrected within an average of four hours and 33 minutes (maximum, nine hours and nine minutes). CONCLUSIONS: Most posted information on breast cancer was accurate. Most false or misleading statements were rapidly corrected by participants in subsequent postings.

Relevance: 20.00%

Abstract:

Objective: The PEM Flex Solo II (Naviscan, Inc., San Diego, CA) is currently the only commercially available positron emission mammography (PEM) scanner. This scanner does not apply corrections for count rate effects, attenuation, or scatter during image reconstruction, potentially affecting the quantitative accuracy of images. This work measures the overall quantitative accuracy of the PEM Flex system and determines the contributions of error due to count rate effects, attenuation, and scatter. Materials and Methods: Gelatin phantoms were designed to simulate breasts of different sizes (4–12 cm thick) with varying uniform background activity concentration (0.007–0.5 μCi/cc), cysts, and lesions (2:1, 5:1, and 10:1 lesion-to-background ratios). The overall error was calculated from ROI measurements in the phantoms with a clinically relevant background activity concentration (0.065 μCi/cc). The error due to count rate effects was determined by comparing the overall error at multiple background activity concentrations to the error at 0.007 μCi/cc. A point source and cold gelatin phantoms were used to assess the errors due to attenuation and scatter. The maximum pixel values in gelatin and in air were compared to determine the effect of attenuation. Scatter was evaluated by comparing the sum of all pixel values in gelatin and in air. Results: The overall error in the background was negative in phantoms of all thicknesses except the 4-cm phantoms (0%±7%), and it grew in magnitude with thickness (-34%±6% for the 12-cm phantoms). All lesions exhibited large negative error (-22% for the 2:1 lesions in the 4-cm phantom) that grew in magnitude with thickness and with lesion-to-background ratio (-85% for the 10:1 lesions in the 12-cm phantoms). The error due to count rate in phantoms with 0.065 μCi/cc background was negative (-23%±6% for 4-cm thickness) and diminished in magnitude with thickness (-7%±7% for 12 cm). Attenuation was a substantial source of negative error that grew in magnitude with thickness (-51%±10% to -77%±4% in the 4- to 12-cm phantoms, respectively). Scatter contributed a relatively constant amount of positive error (+23%±11%) at all thicknesses. Conclusion: Corrections for count rate, attenuation, and scatter will be essential for the PEM Flex Solo II to produce quantitatively accurate images.
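The three error components can be read as simple signed percent errors against known references. A sketch of those comparisons, assuming mean ROI values and pixel statistics as inputs; all numeric values and variable names below are hypothetical illustrations, not data from the study:

```python
import numpy as np

def percent_error(measured, true):
    """Signed percent error of a measurement relative to the known value."""
    return 100.0 * (measured - true) / true

# Overall error: mean ROI value in a phantom with known background activity.
true_background = 0.065                        # uCi/cc, clinically relevant level
roi_means = np.array([0.047, 0.049, 0.046])    # hypothetical ROI measurements
overall = percent_error(roi_means.mean(), true_background)

# Attenuation: maximum pixel value of a point source imaged in gelatin
# versus the same source imaged in air (hypothetical pixel maxima).
attenuation = percent_error(measured=2100.0, true=8900.0)

# Scatter: sum of all pixel values in gelatin versus in air (hypothetical sums).
scatter = percent_error(measured=1.23e7, true=1.00e7)

print(f"overall {overall:+.0f}%, attenuation {attenuation:+.0f}%, scatter {scatter:+.0f}%")
```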

Relevance: 20.00%

Abstract:

The prognosis for lung cancer patients remains poor, with five-year survival rates reported to be 15%. Studies have shown that dose escalation to the tumor can lead to better local control and subsequently better overall survival. However, the dose to a lung tumor is limited by normal tissue toxicity, the most prevalent thoracic toxicity being radiation pneumonitis. To determine a safe dose that can be delivered to the healthy lung, researchers have turned to mathematical models that predict the rate of radiation pneumonitis. However, these models rely on simple metrics based on the dose-volume histogram and are not yet accurate enough to be used for dose escalation trials. The purpose of this work was to improve the fit of predictive risk models for radiation pneumonitis and to show the dosimetric benefit of using the models to guide patient treatment planning. The study was divided into three specific aims, the first two focused on improving the fit of the predictive model. In Specific Aim 1 we incorporated information about the spatial location of the lung dose distribution into a predictive model. In Specific Aim 2 we incorporated ventilation-based functional information into a predictive pneumonitis model. In the third specific aim, a proof-of-principle virtual simulation was performed in which a model-determined limit was used to scale the prescription dose. The data showed that, for our patient cohort, the fit of the model to the data was not improved by incorporating spatial information. Although we were not able to achieve a significant improvement in model fit using pre-treatment ventilation, we show some promising results indicating that ventilation imaging can provide useful information about lung function in lung cancer patients. The virtual simulation trial demonstrated that using a personalized lung dose limit derived from a predictive model results in a different prescription than the clinically used plan, demonstrating the utility of a normal tissue toxicity model in personalizing the prescription dose.
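The abstract does not give the model form, but pneumonitis risk models of the kind it describes are commonly logistic functions of a single DVH metric such as mean lung dose (MLD). A hedged sketch of that general approach, with placeholder coefficients rather than the values fitted in this work, showing how a model-determined limit could scale a prescription:

```python
import numpy as np

# Logistic NTCP model on mean lung dose (MLD): a common published form,
# not the specific model developed in this dissertation.
def pneumonitis_risk(mld_gy, b0=-3.87, b1=0.126):
    """Predicted probability of radiation pneumonitis given MLD in Gy."""
    return 1.0 / (1.0 + np.exp(-(b0 + b1 * mld_gy)))

def mean_lung_dose(dose_bins_gy, volume_fractions):
    """MLD from a differential DVH: dose-weighted sum of volume fractions."""
    return float(np.sum(np.asarray(dose_bins_gy) * np.asarray(volume_fractions)))

# Hypothetical differential DVH for the healthy lung.
dvh_dose = np.array([5.0, 15.0, 25.0, 35.0])   # Gy, bin centers
dvh_vol = np.array([0.4, 0.3, 0.2, 0.1])       # fraction of lung volume per bin
mld = mean_lung_dose(dvh_dose, dvh_vol)        # 15.0 Gy for these numbers

# Personalized prescription scaling: largest factor f with risk(f * MLD) <= 20%,
# assuming MLD scales linearly with the prescription dose.
scales = np.linspace(0.5, 1.5, 101)
safe = scales[pneumonitis_risk(scales * mld) <= 0.20]
print(f"risk at plan MLD: {pneumonitis_risk(mld):.1%}; max safe scaling: {safe.max():.2f}")
```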

Relevance: 20.00%

Abstract:

The successful management of cancer with radiation relies on the accurate deposition of a prescribed dose to a prescribed anatomical volume within the patient. Treatment set-up errors are inevitable because the alignment of field-shaping devices with the patient must be repeated daily, up to eighty times during the course of a fractionated radiotherapy treatment. With the invention of electronic portal imaging devices (EPIDs), patients' portal images can be visualized daily in real time after only a small fraction of the radiation dose has been delivered to each treatment field. However, the accuracy of human visual evaluation of low-contrast portal images has been found to be inadequate. The goal of this research is to develop automated image analysis tools to detect both treatment field shape errors and patient anatomy placement errors with an EPID. A moments method has been developed to align treatment field images to compensate for the lack of repositioning precision of the image detector. A figure of merit has also been established to verify the shape and rotation of the treatment fields. Following proper alignment of treatment field boundaries, a cross-correlation method has been developed to detect shifts of the patient's anatomy relative to the treatment field boundary. Phantom studies showed that the moments method aligned the radiation fields to within 0.5 mm of translation and 0.5° of rotation, and that the cross-correlation method aligned anatomical structures inside the radiation field to within 1 mm of translation and 1° of rotation. A new procedure of generating and using digitally reconstructed radiographs (DRRs) at megavoltage energies as reference images was also investigated. The procedure allowed a direct comparison between a designed treatment portal and the actual patient setup positions detected by an EPID. Phantom studies confirmed the feasibility of the methodology. Both the moments method and the cross-correlation technique were implemented within an experimental radiotherapy picture archival and communication system (RT-PACS) and were used clinically to evaluate the setup variability of two groups of cancer patients treated with and without an alpha-cradle immobilization aid. The tools developed in this project have proven to be very effective and have played an important role in detecting patient alignment errors and field-shape errors in treatment fields formed by a multileaf collimator (MLC).
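As an illustration of the moments idea (a sketch in its spirit, not the dissertation's implementation): the centroid of a binary field image gives its translation, and the second central moments give its principal-axis rotation, so a daily portal field can be aligned to a reference by differencing the two.

```python
import numpy as np

def field_centroid_and_angle(mask):
    """Centroid (row, col) and principal-axis angle (degrees) of a binary field image."""
    m = mask.astype(float)
    m00 = m.sum()                                  # zeroth moment: field area
    rows, cols = np.indices(m.shape)
    rc = (rows * m).sum() / m00                    # centroid row
    cc = (cols * m).sum() / m00                    # centroid column
    mu20 = (((cols - cc) ** 2) * m).sum() / m00    # second central moments
    mu02 = (((rows - rc) ** 2) * m).sum() / m00
    mu11 = (((cols - cc) * (rows - rc)) * m).sum() / m00
    angle = 0.5 * np.degrees(np.arctan2(2 * mu11, mu20 - mu02))
    return (rc, cc), angle

# Hypothetical reference and daily portal fields (binary rectangles).
ref = np.zeros((64, 64)); ref[20:40, 15:45] = 1
daily = np.zeros((64, 64)); daily[22:42, 17:47] = 1

(c_ref, a_ref) = field_centroid_and_angle(ref)
(c_day, a_day) = field_centroid_and_angle(daily)
shift = (c_day[0] - c_ref[0], c_day[1] - c_ref[1])
print(f"shift = {shift} pixels, rotation = {a_day - a_ref:.2f} deg")
```

The anatomy-shift step would then cross-correlate the anatomy inside the aligned field boundaries against the reference, taking the correlation peak as the displacement.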

Relevance: 20.00%

Abstract:

Many studies in biostatistics deal with binary data. Some of these studies involve correlated observations, which can complicate the analysis of the resulting data. Studies of this kind typically arise when a high degree of commonality exists between test subjects. If there exists a natural hierarchy in the data, multilevel analysis is an appropriate tool for the analysis. Two examples are measurements on identical twins, or the study of symmetrical organs or appendages, as in ophthalmic studies. Although this type of matching appears ideal for the purposes of comparison, analysis of the resulting data while ignoring the effect of intra-cluster correlation has been shown to produce biased results. This paper will explore the use of multilevel modeling of simulated binary data with predetermined levels of correlation. Data will be generated using the beta-binomial method with varying degrees of correlation between the lower-level observations. The data will be analyzed using the multilevel software package MLwiN (Woodhouse et al., 1995). Comparisons between the specified intra-cluster correlation of these data and the correlations estimated by multilevel analysis will be used to examine the accuracy of this technique in analyzing this type of data.
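A minimal sketch of the beta-binomial generation scheme described above (my reconstruction, not the paper's code): for cluster-level probabilities drawn from Beta(a, b), the intra-cluster correlation of the binary responses is rho = 1/(a + b + 1), so a and b can be solved from a target mean and correlation.

```python
import numpy as np

rng = np.random.default_rng(42)

def simulate_clusters(n_clusters, cluster_size, mean_p, rho):
    """Correlated binary data: one Beta draw per cluster, Bernoulli trials within it."""
    ab = 1.0 / rho - 1.0                  # a + b implied by the target ICC
    a, b = mean_p * ab, (1.0 - mean_p) * ab
    p = rng.beta(a, b, size=n_clusters)   # cluster-level success probabilities
    return rng.binomial(1, p[:, None], size=(n_clusters, cluster_size))

# e.g. 500 twin pairs, 30% overall response rate, ICC of 0.4
y = simulate_clusters(n_clusters=500, cluster_size=2, mean_p=0.3, rho=0.4)
print(y.shape, y.mean())   # overall response rate should be near 0.3
```

The simulated clusters would then be fit with a multilevel logistic model (as in MLwiN) and the estimated intra-cluster correlation compared against the specified rho.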

Relevance: 20.00%

Abstract:

ACCURACY OF THE BRCAPRO RISK ASSESSMENT MODEL IN MALES PRESENTING TO MD ANDERSON FOR BRCA TESTING
Carolyn A. Garby, B.S. Supervisory Professor: Banu Arun, M.D.

Hereditary Breast and Ovarian Cancer (HBOC) syndrome is due to mutations in the BRCA1 and BRCA2 genes. Women with HBOC have high risks of developing breast and ovarian cancers. Males with HBOC are commonly overlooked because male breast cancer is rare and other male cancer risks, such as prostate and pancreatic cancers, are relatively low. BRCA genetic testing is indicated for men, as it is currently estimated that 4-40% of male breast cancers result from a BRCA1 or BRCA2 mutation (Ottini, 2010), and management recommendations can be made based on genetic test results. Risk assessment models are available to provide the individualized likelihood of carrying a BRCA mutation. Only one study has been conducted to date to evaluate the accuracy of BRCAPro in males; it was based on a cohort of Italian males and used an older version of BRCAPro. The objective of this study is to determine whether BRCAPro5.1 is a valid risk assessment model for males who present to MD Anderson Cancer Center for BRCA genetic testing. BRCAPro has previously been validated for determining the probability of carrying a BRCA mutation but has not been further examined specifically in males. The total cohort consisted of 152 males who had undergone BRCA genetic testing. The cohort was stratified by indication for genetic counseling: a known familial BRCA mutation, a personal diagnosis of a BRCA-related cancer, or a family history suggestive of HBOC. Overall, 22 males (14.47%) were BRCA1+ and 25 (16.45%) were BRCA2+. Receiver operating characteristic curves were constructed for the overall cohort, for each indication, and for each cancer subtype. Our findings revealed that the BRCAPro5.1 model had perfect discriminating ability at a threshold of 56.2 for males with breast cancer; however, only 2 (4.35%) of 46 were found to have BRCA2 mutations. This rate is substantially lower than the upper estimate (40%) reported in previous literature. BRCAPro does perform well in certain situations for men. Future investigation of male breast cancer and of men at risk for BRCA mutations is necessary to provide more accurate risk assessment.
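As an illustration of the validation step, a hedged sketch of ROC construction with scikit-learn; the labels and BRCAPro scores below are synthetic stand-ins, not the MD Anderson cohort data:

```python
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

# Synthetic stand-ins: 1 = BRCA mutation found on testing, and the
# model-assigned carrier probability (%) for each male.
carrier = np.array([1, 1, 0, 0, 1, 0, 0, 0, 1, 0])
brcapro = np.array([72, 61, 58, 12, 55, 40, 8, 30, 90, 5])

fpr, tpr, thresholds = roc_curve(carrier, brcapro)
print(f"AUC = {roc_auc_score(carrier, brcapro):.2f}")

# Youden's J statistic picks the threshold best separating carriers
# from non-carriers (the role the 56.2 threshold plays above).
best = thresholds[np.argmax(tpr - fpr)]
print(f"best threshold = {best}")
```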

Relevance: 20.00%

Abstract:

Clinical Research Data Quality Literature Review and Pooled Analysis

We present a literature review and secondary analysis of data accuracy in clinical research and related secondary data uses. A total of 93 papers meeting our inclusion criteria were categorized according to the data processing methods. Quantitative data accuracy information was abstracted from the articles and pooled. Our analysis demonstrates that the accuracy associated with data processing methods varies widely, with error rates ranging from 2 errors per 10,000 fields to 5019 errors per 10,000 fields. Medical record abstraction was associated with the highest error rates (70–5019 errors per 10,000 fields). Data entered and processed at healthcare facilities had error rates comparable to data processed at central data processing centers. Error rates for data processed with single entry in the presence of on-screen checks were comparable to those for double-entered data. While data processing and cleaning methods may explain a significant amount of the variability in data accuracy, additional factors not resolvable here likely exist.

Defining Data Quality for Clinical Research: A Concept Analysis

Despite notable previous attempts by experts to define data quality, the concept remains ambiguous and subject to the vagaries of natural language. This lack of clarity continues to hamper research related to data quality issues. We present a formal concept analysis of data quality, which builds on and synthesizes previously published work. We further posit that discipline-level specificity may be required to achieve the desired definitional clarity. To this end, we combine work from the clinical research domain with findings from the general data quality literature to produce a discipline-specific definition and operationalization of data quality in clinical research. While the results are helpful to clinical research, the methodology of concept analysis may be useful in other fields to clarify data quality attributes and to achieve operational definitions.

Medical Record Abstractors' Perceptions of Factors Impacting the Accuracy of Abstracted Data

Medical record abstraction (MRA) is known to be a significant source of data errors in secondary data uses. Factors impacting the accuracy of abstracted data are not reported consistently in the literature. Two Delphi processes were conducted with experienced medical record abstractors to assess abstractors' perceptions of these factors. The Delphi process identified 9 factors that were not found in the literature and differed from the literature by 5 factors in the top 25%. The Delphi results refuted seven factors reported in the literature as impacting the quality of abstracted data. The results provide insight into, and indicate content validity of, a significant number of the factors reported in the literature. Further, the results indicate general consistency between the perceptions of clinical research medical record abstractors and those of registry and quality improvement abstractors.

Distributed Cognition Artifacts on Clinical Research Data Collection Forms

Medical record abstraction, a primary mode of data collection in secondary data use, is associated with high error rates. Distributed cognition in medical record abstraction has not been studied as a possible explanation for abstraction errors. We employed the theory of distributed representation and representational analysis to systematically evaluate cognitive demands in medical record abstraction and the extent of external cognitive support employed in a sample of clinical research data collection forms. We show that the cognitive load required for abstraction was high for 61% of the sampled data elements, and exceedingly so for 9%. Further, the data collection forms did not support external cognition for the most complex data elements. High working memory demands are a possible explanation for the association of data errors with data elements requiring abstractor interpretation, comparison, mapping, or calculation. The representational analysis used here can be applied to identify data elements with high cognitive demands.
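As a small worked example of the normalization used in the pooled analysis (the counts below are hypothetical):

```python
# Error rates are normalized to errors per 10,000 fields so that studies
# of different sizes can be pooled and compared.
def errors_per_10k(n_errors, n_fields):
    return 10_000 * n_errors / n_fields

# Hypothetical example: 35 discrepant fields among 12,500 abstracted fields.
print(f"{errors_per_10k(35, 12_500):.0f} errors per 10,000 fields")  # -> 28
```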