222 results for Classification accuracy
Abstract:
OBJECTIVE: Accuracy studies of Patient Safety Indicators (PSIs) are critical but limited by the large samples required due to low occurrence of most events. We tested a sampling design based on test results (verification-biased sampling [VBS]) that minimizes the number of subjects to be verified. METHODS: We considered 3 real PSIs, whose rates were calculated using 3 years of discharge data from a university hospital and a hypothetical screen of very rare events. Sample size estimates, based on the expected sensitivity and precision, were compared across 4 study designs: random and VBS, with and without constraints on the size of the population to be screened. RESULTS: Over sensitivities ranging from 0.3 to 0.7 and PSI prevalence levels ranging from 0.02 to 0.2, the optimal VBS strategy makes it possible to reduce sample size by up to 60% in comparison with simple random sampling. For PSI prevalence levels below 1%, the minimal sample size required was still over 5000. CONCLUSIONS: Verification-biased sampling permits substantial savings in the required sample size for PSI validation studies. However, sample sizes still need to be very large for many of the rarer PSIs.
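As a rough illustration of why low event prevalence drives up the required sample size, here is a minimal sketch assuming simple random sampling and a normal-approximation confidence interval for sensitivity; the function name and example numbers are illustrative, not taken from the study.

```python
from math import ceil

def srs_sample_size(sensitivity, prevalence, precision, z=1.96):
    """Total number of records to screen so that the 95% CI half-width on
    sensitivity is `precision`, under simple random sampling.
    True cases are diluted by the prevalence, which is what inflates n."""
    n_cases = (z ** 2) * sensitivity * (1 - sensitivity) / precision ** 2
    return ceil(n_cases / prevalence)

# Example: expected sensitivity 0.5, PSI prevalence 2%, +/-0.10 precision
print(srs_sample_size(0.5, 0.02, 0.10))  # -> 4802 records to screen
```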
Abstract:
The clinical success of total disc replacement (TDR) has been reported to be related to the residual motion of the operated level. Thus, accurate measurement of TDR range of motion (ROM) is of utmost importance. One commonly used tool for measuring ROM is the Oxford Cobbometer. Little is known, however, about its accuracy (precision and bias) in measuring TDR angles. The aim of this study was to assess the ability of the Cobbometer to accurately measure radiographic TDR angles. An anatomically accurate synthetic L4-L5 motion segment was instrumented with a CHARITE artificial disc. The TDR angle and the anatomical position between L4 and L5 were fixed to prohibit motion while the motion segment was radiographically imaged at various degrees of rotation and elevation, representing a sample of possible patient placement positions. An experienced observer made ten readings of the TDR angle with the Cobbometer at each position. The Cobbometer readings were analyzed to determine measurement accuracy at each position. Furthermore, analysis of variance was used to study rotation and elevation of the motion segment as treatment factors. Cobbometer TDR angle measurements were most accurate (highest precision and lowest bias) at the centered position (95.5%), which placed the TDR directly in line with the x-ray beam source without any rotation. In contrast, the lowest accuracy (75.2%) was observed in the most rotated and off-centered view. A difference as high as 4 degrees between readings at any individual position, and as high as 6 degrees across all positions, was observed. Furthermore, the Cobbometer was unable to detect the expected trend in TDR angle projection with changing position. Although the Cobbometer has been reported to be reliable in various clinical applications, it lacks the accuracy needed to measure TDR angles and ROM. More accurate ROM measurement methods need to be developed to help surgeons and researchers assess the radiological success of TDRs.
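For readers unfamiliar with the precision/bias decomposition used here, a minimal sketch with hypothetical readings (not the study's data) of how repeated measurements against a fixed, known angle separate systematic from random error:

```python
# Hypothetical example: ten Cobbometer readings of a fixed, known TDR angle.
import statistics

true_angle = 10.0                                    # fixed TDR angle (degrees), known by design
readings = [10.5, 11.0, 9.5, 10.0, 11.5, 10.5, 9.0, 10.0, 11.0, 10.5]

bias = statistics.mean(readings) - true_angle        # systematic error of the instrument/observer
precision = statistics.stdev(readings)               # random error (sample standard deviation)
print(f"bias: {bias:+.2f} deg, precision (SD): {precision:.2f} deg")
```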
Abstract:
Galton (1907) first demonstrated the "wisdom of crowds" phenomenon by averaging independent estimates of unknown quantities given by many individuals. Herzog and Hertwig (2009, Psychological Science; hereafter H&H) showed that individuals' own estimates can be improved by asking them to make two estimates at separate times and averaging the two. H&H claimed to observe far greater improvement in accuracy when participants received "dialectical" instructions to consider why their first estimate might be wrong before making their second estimate than when they received standard instructions. We reanalyzed H&H's data using measures of accuracy that are unrelated to the frequency of identical first and second responses and found that participants in both conditions improved their accuracy to an equal degree.
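A minimal simulation of the averaging effect itself (not H&H's data or their accuracy measures; the error model and numbers are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
truth = 100.0
# Two noisy estimates per "person"; the second carries error partly independent of the first.
first = truth + rng.normal(0, 20, size=10_000)
second = truth + rng.normal(0, 20, size=10_000)

mae = lambda x: np.mean(np.abs(x - truth))           # mean absolute error
print(mae(first), mae((first + second) / 2))          # the within-person average is closer on average
```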
Abstract:
Focusing first on formalism and methods, this thesis is built on three formalized concepts: a contingency table, a matrix of Euclidean dissimilarities, and an exchange matrix. From these, several data analysis and machine learning methods are expressed and developed: correspondence analysis (CA), seen as a special case of multidimensional scaling; supervised and unsupervised classification combined with Schoenberg transformations; and autocorrelation and cross-autocorrelation indices, adapted to multivariate analyses and allowing various families of neighbourhoods to be considered. In a second phase, these methods lead to exploratory analyses of various textual and musical data. For the textual data, we address the automatic classification of uttered clauses into discourse types, based on the morphosyntactic categories (CMS) they contain. Although the statistical link between the CMS and the discourse types is confirmed, the classification results obtained with the K-means method combined with a Schoenberg transformation, as well as with a fuzzy variant of the K-means algorithm, are harder to interpret. We also address multi-label supervised classification of speech turns into dialogue acts, again based on the CMS they contain, but also on the lemmas and the meanings of the verbs. The results obtained via discriminant analysis combined with a Schoenberg transformation are promising. Finally, we examine textual autocorrelation, from the perspective of the similarities between the various positions of a text, viewed as a sequence of units. In particular, the phenomenon of word-length alternation in a text is observed for neighbourhoods of varying span. We also study similarities as a function of the presence or absence of certain parts of speech, as well as the semantic similarities between the various positions of a text. Regarding the musical data, we propose representing a musical score as a contingency table. We first use CA and the autocorrelation index to uncover the structures present in each score. We then apply the same kind of approach to the individual voices of a score, using a fuzzy variant of multiple correspondence analysis and the cross-autocorrelation index. Whether for the complete score or for the individual voices it contains, repeated structures are indeed detected, provided they are not transposed. Finally, we propose to automatically classify twenty scores by four different composers, each represented by a contingency table, by means of an index measuring the similarity of two configurations. The results make it possible to successfully group most of the works by composer.
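For orientation, a minimal Moran-style sketch of a textual autocorrelation index using a binary neighbourhood ("exchange") matrix of adjustable span; the function name and the uniform weighting are illustrative assumptions, not the thesis's exact formulation:

```python
import numpy as np

def autocorrelation_index(x, span=1):
    """Moran-like autocorrelation of a numeric sequence (e.g. word lengths):
    compares the co-variation of values within a neighbourhood of +/- `span`
    positions with the overall variance. Positive -> neighbours are alike,
    negative -> alternation between neighbouring positions."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    idx = np.arange(n)
    # binary exchange/neighbourhood matrix with the given span, zero diagonal
    W = ((np.abs(idx[:, None] - idx[None, :]) <= span) & (idx[:, None] != idx[None, :])).astype(float)
    W /= W.sum()                         # normalise weights to sum to 1
    z = x - x.mean()
    return (z @ (W @ z)) / (z @ z / n)

word_lengths = [len(w) for w in "the quick brown fox jumps over the lazy dog".split()]
print(autocorrelation_index(word_lengths, span=1))   # negative value suggests length alternation
```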
Abstract:
The aim of this study was to prospectively evaluate the accuracy and predictability of new three-dimensionally preformed AO titanium mesh plates for posttraumatic orbital wall reconstruction. We analyzed the preoperative and postoperative clinical and radiologic data of 10 patients with isolated blow-out orbital fractures. Fracture locations were as follows: floor (N = 7; 70%), medial wall (N = 1; 10%), and floor/medial wall (N = 2; 20%). The floor fractures were exposed by a standard transconjunctival approach, whereas a combined transcaruncular transconjunctival approach was used in patients with medial wall fractures. A three-dimensionally preformed AO titanium mesh plate (0.4 mm in thickness) was selected according to the size of the defect previously measured on the preoperative computed tomographic (CT) scan and fixed at the inferior orbital rim with 1 or 2 screws. The accuracy of plate positioning in the reconstructed orbit was assessed on the postoperative CT scan. Coronal CT slices were used to measure bony orbital volume with the OsiriX Medical Image software. Reconstructed versus uninjured orbital volumes were statistically correlated. Nine patients (90%) had a successful treatment outcome without complications. One patient (10%) developed a mechanical limitation of upward gaze with a resulting handicapping diplopia requiring hardware removal. Postoperative orbital CT showed an anatomic three-dimensional placement of the orbital mesh plates in all of the patients. Volume data of the reconstructed orbit fitted that of the contralateral uninjured orbit to within 2.5 cm³. There was no significant difference in volume between the reconstructed and uninjured orbits. This preliminary study has demonstrated that three-dimensionally preformed AO titanium mesh plates for posttraumatic orbital wall reconstruction result in (1) a high rate of success with an acceptable rate of major clinical complications (10%) and (2) an anatomic restoration of the bony orbital contour and volume that closely approximates that of the contralateral uninjured orbit.
Abstract:
In routine forensic pathology, fatal cases of contrast agent exposure are occasionally encountered. In such situations, beyond the difficulties inherent in establishing the cause of death, owing to nonspecific or absent autopsy and histology findings as well as limited laboratory investigations, pathologists may face other problems in formulating exhaustive, complete, and scientifically accurate reports and conclusions. Indeed, the terminology concerning adverse drug reactions and allergy nomenclature is confusing. Some terms still used in forensic and radiological reports are outdated and should be avoided. Additionally, not all forensic pathologists master contrast material classification and the pathogenesis of contrast agent reactions. We present a review of the literature covering allergic reactions to contrast material exposure in order to update the terminology used, explain the pathophysiology, and list the laboratory investigations currently available for diagnosis in the forensic setting.
Abstract:
Tire traces can be observed at many crime scenes, as vehicles are often used by criminals. Tread abrasion on the road, while braking or skidding, produces small rubber particles that can be collected for comparison purposes. This research focused on the statistical comparison of Py-GC/MS profiles of tire traces and tire treads. The optimisation of the analytical method was carried out using experimental designs. The aim was to determine the best pyrolysis parameters with regard to the repeatability of the results; the effect of each pyrolysis factor could thus also be calculated. The pyrolysis temperature was found to be five times more important than the pyrolysis time. Finally, pyrolysis at 650 °C for 15 s was selected. Ten tires of different manufacturers and models were used for this study. Several samples were collected from each tire, and several replicates were carried out to study the variability within each tire (intravariability). More than eighty compounds were integrated for each analysis, and the variability study showed that more than 75% presented a relative standard deviation (RSD) below 5% for the ten tires, supporting a low intravariability. The variability between the ten tires (intervariability) presented higher values, and the ten most variable compounds had an RSD above 13%, supporting their high potential for discrimination between the tires tested. Principal Component Analysis (PCA) was able to fully discriminate the ten tires using the first three principal components. The ten tires were finally used to perform braking tests on a racetrack with a vehicle equipped with an anti-lock braking system. The resulting tire traces were collected using sheets of white gelatine. As for the tires, the intravariability of the traces was found to be lower than the intervariability. Clustering methods were applied, and Ward's method based on the squared Euclidean distance correctly grouped all tire-trace replicates in the same cluster as the replicates of their corresponding tire. Blind tests were performed on traces, which were correctly assigned to their source tire. These results support the hypothesis that the tested tires, of different manufacturers and models, can be discriminated by a statistical comparison of their chemical profiles. The traces were found to be undifferentiable from their source tire but differentiable from all the other tires in the subset. The results are promising and will be extended to a larger sample set.
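For orientation, a minimal sketch of a PCA-plus-Ward workflow on hypothetical peak-area profiles (rows = replicates, columns = compounds); the simulated data, group sizes, and number of components are assumptions, not the paper's pipeline:

```python
import numpy as np
from sklearn.decomposition import PCA
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(1)
# 10 simulated tires x 5 replicates x 80 compounds, each tire with its own mean profile
profiles = np.vstack([rng.normal(loc=rng.uniform(0, 10, 80), scale=0.3, size=(5, 80))
                      for _ in range(10)])

scores = PCA(n_components=3).fit_transform(profiles)   # keep the first three principal components
Z = linkage(scores, method="ward")                      # Ward agglomerative clustering
labels = fcluster(Z, t=10, criterion="maxclust")        # cut the tree into 10 groups
print(labels.reshape(10, 5))                            # replicates of a tire should share a label
```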
Abstract:
PURPOSE: Late toxicities such as second cancer induction become more important as treatment outcomes improve. Often the dose distribution calculated with a commercial treatment planning system (TPS) is used to estimate radiation carcinogenesis for the radiotherapy patient. However, for locations beyond the treatment field borders, the accuracy is not well known. The aim of this study was to perform detailed out-of-field measurements for a typical radiotherapy treatment plan administered with a CyberKnife and a TomoTherapy machine and to compare the measurements with the predictions of the TPS. MATERIALS AND METHODS: Individually calibrated thermoluminescent dosimeters were used to measure absorbed dose in an anthropomorphic phantom at 184 locations. The measured dose distributions from 6 MV intensity-modulated treatment beams for the CyberKnife and TomoTherapy machines were compared with the dose calculations from the TPS. RESULTS: Both TPSs underestimate the dose far from the target volume. Quantitatively, the CyberKnife TPS underestimates the dose at 40 cm from the PTV border by a factor of 60, and the TomoTherapy TPS by a factor of two. If a 50% dose uncertainty is accepted, the CyberKnife TPS can predict doses down to approximately 10 mGy/treatment Gy and the TomoTherapy TPS down to 0.75 mGy/treatment Gy. The CyberKnife TPS can then be used up to 10 cm from the PTV border, and the TomoTherapy TPS up to 35 cm. CONCLUSIONS: We determined that the CyberKnife and TomoTherapy TPSs substantially underestimate the doses far from the treated volume. It is recommended not to use out-of-field doses from the CyberKnife TPS for applications such as modeling of second cancer induction. The TomoTherapy TPS can be used up to 35 cm from the PTV border (for a 390 cm³ PTV).
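To make the 50% acceptance criterion concrete, a minimal sketch with hypothetical measured and calculated doses (none of these numbers are from the study):

```python
# Hypothetical TLD measurements vs. TPS calculations at increasing distance from the PTV border.
distances_cm = [5, 10, 15, 20, 25, 30, 35, 40]
measured_mgy_per_gy = [120, 40, 18, 9, 5, 3, 2, 1.5]          # hypothetical TLD readings
calculated_mgy_per_gy = [115, 35, 12, 4, 1.5, 0.6, 0.2, 0.03]  # hypothetical TPS output

for d, m, c in zip(distances_cm, measured_mgy_per_gy, calculated_mgy_per_gy):
    rel_err = abs(m - c) / m                                   # relative deviation from measurement
    flag = "OK" if rel_err <= 0.5 else "TPS not reliable"
    print(f"{d:>2} cm: measured {m:6.2f}, calculated {c:6.2f}, error {rel_err:4.0%} -> {flag}")
```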
Abstract:
Melanoma is an aggressive disease with few standard treatment options. The conventional classification system for this disease is based on histological growth patterns, with division into four subtypes: superficial spreading, lentigo maligna, nodular, and acral lentiginous. Major limitations of this classification system are the absence of prognostic importance and little correlation with treatment outcomes. Recent preclinical and clinical findings support the notion that melanoma is not one malignant disorder but rather a family of distinct molecular diseases. Incorporation of genetic signatures into the conventional histopathological classification of melanoma has great implications for the development of new and effective treatments. Genes of the mitogen-activated protein kinase (MAPK) pathway harbour alterations sometimes identified in people with melanoma. The mutation Val600Glu in the BRAF oncogene (designated BRAF(V600E)) has been associated with sensitivity in vitro and in vivo to agents that inhibit BRAF(V600E) or MEK (a kinase in the MAPK pathway). Melanomas arising from mucosal, acral, or chronically sun-damaged surfaces sometimes have oncogenic mutations in KIT, against which several inhibitors have shown clinical efficacy. Some uveal melanomas have activating mutations in GNAQ and GNA11, rendering them potentially susceptible to MEK inhibition. These findings suggest that prospective genotyping of patients with melanoma should be used increasingly as we work to develop new and effective treatments for this disease.
What's so special about conversion disorder? A problem and a proposal for diagnostic classification.
Abstract:
Conversion disorder presents a problem for the revisions of DSM-IV and ICD-10, for reasons that are informative about the difficulties of psychiatric classification more generally. Giving up criteria based on psychological aetiology may be a painful sacrifice but it is still the right thing to do.
Abstract:
Computational anatomy with magnetic resonance imaging (MRI) is well established as a noninvasive biomarker of Alzheimer's disease (AD); however, there is less certainty about its dependency on the staging of AD. We use classical group analyses and automated machine learning classification of standard structural MRI scans to investigate AD diagnostic accuracy from the preclinical phase to clinical dementia. Longitudinal data from the Alzheimer's Disease Neuroimaging Initiative were stratified into 4 groups according to clinical status: (1) AD patients; (2) mild cognitive impairment (MCI) converters; (3) MCI nonconverters; and (4) healthy controls. These data were submitted to a support vector machine. The resulting classifier performed significantly above chance level (62%) in detecting AD as early as 4 years before conversion from MCI. Voxel-based univariate tests confirmed the plausibility of our findings, detecting a distributed network of hippocampal-temporoparietal atrophy in AD patients. We also identified a subgroup of control subjects with brain structure and cognitive changes highly similar to those observed in AD. Our results indicate that computational anatomy can detect AD substantially earlier than suggested by current models. The demonstrated differential spatial pattern of atrophy between correctly and incorrectly classified AD patients challenges the assumption of a uniform pathophysiological process underlying clinically identified AD.
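For orientation, a minimal sketch of cross-validated linear SVM classification on simulated structural features; the feature construction, group sizes, and parameters are illustrative assumptions, not the study's ADNI pipeline:

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(42)
n_per_group, n_features = 50, 200
controls = rng.normal(0.0, 1.0, size=(n_per_group, n_features))
patients = rng.normal(0.3, 1.0, size=(n_per_group, n_features))   # simulated atrophy-related shift
X = np.vstack([controls, patients])
y = np.array([0] * n_per_group + [1] * n_per_group)                # 0 = control, 1 = patient

clf = make_pipeline(StandardScaler(), SVC(kernel="linear", C=1.0))
scores = cross_val_score(clf, X, y, cv=5)                           # 5-fold cross-validated accuracy
print(f"cross-validated accuracy: {scores.mean():.2f}")
```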