898 results for classification accuracy
Abstract:
Focused initially on formalism and methods, this thesis is built on three formalized concepts: a contingency table, a matrix of Euclidean dissimilarities, and an exchange matrix. From these, several data analysis and machine learning methods are expressed and developed: correspondence analysis (CA), viewed as a particular case of multidimensional scaling; supervised and unsupervised classification combined with Schoenberg transformations; and indices of autocorrelation and cross-autocorrelation, adapted to multivariate analyses and allowing various families of neighbourhoods to be considered. In a second phase, these methods lead to an exploratory analysis of various textual and musical data. For the textual data, we address the automatic classification of clauses into discourse types, based on the morphosyntactic categories (POS tags) they contain. Although the statistical link between POS tags and discourse types is confirmed, the classification results obtained with the K-means method combined with a Schoenberg transformation, as well as with a fuzzy variant of the K-means algorithm, are harder to interpret. We also address the supervised multi-label classification of dialogue turns into dialogue acts, based again on the POS tags they contain, but also on the lemmas and the senses of the verbs. The results obtained with discriminant analysis combined with a Schoenberg transformation are promising. Finally, we examine textual autocorrelation, through the similarities between the various positions of a text, conceived as a sequence of units. In particular, the phenomenon of alternation of word length in a text is observed for neighbourhoods of variable span.
We also study the similarities as a function of the occurrence, or not, of certain parts of speech, as well as the semantic similarities of the various positions of a text. Concerning the musical data, we propose a representation of a musical score as a contingency table. We begin by using CA and the autocorrelation index to uncover the structures present in each score. We then apply the same type of approach to the different voices of a score, by means of a fuzzy variant of multiple correspondence analysis and the cross-autocorrelation index. Whether for the complete score or the different voices it contains, repeated structures are indeed detected, provided they are not transposed. Finally, we propose to automatically classify twenty scores by four different composers, each represented by a contingency table, using an index measuring the similarity of two configurations. The results successfully group most of the works by composer.
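The clustering pipeline described above (a Schoenberg transformation applied to a Euclidean dissimilarity matrix, followed by K-means) can be sketched with a minimal, hypothetical example. The power transformation d → d^q and the toy data below are illustrative assumptions, not the thesis's actual datasets or parameters:

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform
from scipy.cluster.vq import kmeans2

# two hypothetical groups of observations
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 1.0, (20, 4)),
               rng.normal(4.0, 1.0, (20, 4))])

# squared Euclidean dissimilarities
D = squareform(pdist(X, "sqeuclidean"))

# Schoenberg transformation: a componentwise power d -> d**q (0 < q <= 1),
# which keeps the transformed dissimilarities Euclidean-embeddable
q = 0.5
Dt = D ** q

# classical MDS (Torgerson double centering) to recover coordinates
n = Dt.shape[0]
J = np.eye(n) - np.ones((n, n)) / n
B = -0.5 * J @ Dt @ J
vals, vecs = np.linalg.eigh(B)
order = np.argsort(vals)[::-1]
coords = vecs[:, order[:2]] * np.sqrt(np.maximum(vals[order[:2]], 0.0))

# K-means in the reconstructed coordinates
centroids, labels = kmeans2(coords, 2, minit="++", seed=0)
```

With clearly separated toy groups, the two clusters recovered in the transformed space coincide with the generating groups.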
Abstract:
The aim of this study was to prospectively evaluate the accuracy and predictability of new three-dimensionally preformed AO titanium mesh plates for posttraumatic orbital wall reconstruction. We analyzed the preoperative and postoperative clinical and radiologic data of 10 patients with isolated blow-out orbital fractures. Fracture locations were as follows: floor (N = 7; 70%), medial wall (N = 1; 10%), and floor/medial wall (N = 2; 20%). The floor fractures were exposed by a standard transconjunctival approach, whereas a combined transcaruncular-transconjunctival approach was used in patients with medial wall fractures. A three-dimensionally preformed AO titanium mesh plate (0.4 mm in thickness) was selected according to the size of the defect, previously measured on the preoperative computed tomographic (CT) scan, and fixed at the inferior orbital rim with 1 or 2 screws. The accuracy of plate positioning in the reconstructed orbit was assessed on the postoperative CT scan. Coronal CT slices were used to measure bony orbital volume with OsiriX Medical Image software, and reconstructed and uninjured orbital volumes were statistically compared. Nine patients (90%) had a successful treatment outcome without complications. One patient (10%) developed a mechanical limitation of upward gaze with a resulting handicapping diplopia requiring hardware removal. The postoperative orbital CT scan showed anatomic three-dimensional placement of the orbital mesh plates in all patients. The volume of the reconstructed orbit matched that of the contralateral uninjured orbit to within 2.5 cm³.
There was no significant difference in volume between the reconstructed and uninjured orbits. This preliminary study has demonstrated that the use of three-dimensionally preformed AO titanium mesh plates for posttraumatic orbital wall reconstruction results in (1) a high rate of success with an acceptable rate of major clinical complications (10%) and (2) an anatomic restoration of the bony orbital contour and volume that closely approximates that of the contralateral uninjured orbit.
Abstract:
In routine forensic pathology, fatal cases of contrast agent exposure are occasionally encountered. In such situations, beyond the difficulty of establishing the cause of death given nonspecific or absent autopsy and histology findings and limited laboratory investigations, pathologists may struggle to formulate exhaustive, complete, and scientifically accurate reports and conclusions. Indeed, the terminology concerning adverse drug reactions and allergy nomenclature is confusing. Some terms still used in forensic and radiological reports are outdated and should be avoided. Additionally, not all forensic pathologists master the classification of contrast materials and the pathogenesis of contrast agent reactions. We present a review of the literature covering allergic reactions to contrast material exposure in order to update the terminology in use, explain the pathophysiology, and list the laboratory investigations currently available for diagnosis in the forensic setting.
Abstract:
Tire traces can be observed on several crime scenes, as vehicles are often used by criminals. Tread abrasion on the road, while braking or skidding, produces small rubber particles which can be collected for comparison purposes. This research focused on the statistical comparison of Py-GC/MS profiles of tire traces and tire treads. The analytical method was optimised using experimental designs, with the aim of determining the pyrolysis parameters that give the most repeatable results; the effect of each pyrolysis factor could thus also be calculated. The effect of pyrolysis temperature was found to be five times greater than that of pyrolysis time. Finally, pyrolysis at 650 °C for 15 s was selected. Ten tires of different manufacturers and models were used for this study. Several samples were collected from each tire, and several replicates were carried out to study the variability within each tire (intravariability). More than eighty compounds were integrated for each analysis, and the variability study showed that more than 75% of them presented a relative standard deviation (RSD) below 5% for the ten tires, supporting low intravariability. The variability between the ten tires (intervariability) was higher, and the ten most variable compounds had RSD values above 13%, supporting their high potential for discrimination between the tires tested. Principal Component Analysis (PCA) was able to fully discriminate the ten tires using the first three principal components. The ten tires were finally used to perform braking tests on a racetrack with a vehicle equipped with an anti-lock braking system. The resulting tire traces were collected using sheets of white gelatine. As for the tires, the intravariability of the traces was found to be lower than the intervariability.
Clustering methods were then applied, and Ward's method based on the squared Euclidean distance correctly grouped all tire trace replicates in the same cluster as the replicates of their corresponding tire. Blind tests on traces were performed, and the traces were correctly assigned to their source tire. These results support the hypothesis that the tested tires, of different manufacturers and models, can be discriminated by a statistical comparison of their chemical profiles. The traces were found to be indistinguishable from their source but differentiable from all the other tires in the subset. The results are promising and will be extended to a larger sample set.
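The PCA and Ward-clustering steps above can be sketched on synthetic data. The profile matrix below (10 tires × 3 replicates × 80 compound areas) is a made-up stand-in for the actual Py-GC/MS measurements:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

# hypothetical Py-GC/MS profiles: 10 tires x 3 replicates, 80 compounds each
rng = np.random.default_rng(1)
tire_means = rng.uniform(10.0, 100.0, (10, 80))     # one mean profile per tire
profiles = np.repeat(tire_means, 3, axis=0)
profiles += rng.normal(0.0, 0.5, profiles.shape)    # low intravariability

# PCA via SVD on the centered data (scores on the first three components)
Xc = profiles - profiles.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
scores = U[:, :3] * s[:3]

# Ward's method (squared Euclidean geometry), cut into 10 clusters
Z = linkage(profiles, method="ward")
labels = fcluster(Z, t=10, criterion="maxclust")

# replicates of the same tire should fall in the same cluster
groups = labels.reshape(10, 3)
```

When intravariability is much smaller than intervariability, as reported in the study, each row of `groups` is constant: every replicate joins its source tire's cluster.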
Abstract:
PURPOSE: Late toxicities such as second cancer induction become more important as treatment outcomes improve. The dose distribution calculated with a commercial treatment planning system (TPS) is often used to estimate radiation carcinogenesis for the radiotherapy patient; however, for locations beyond the treatment field borders, its accuracy is not well known. The aim of this study was to perform detailed out-of-field measurements for a typical radiotherapy treatment plan administered with a CyberKnife and a TomoTherapy machine and to compare the measurements with the predictions of the TPS. MATERIALS AND METHODS: Individually calibrated thermoluminescent dosimeters were used to measure absorbed dose in an anthropomorphic phantom at 184 locations. The measured dose distributions from 6 MV intensity-modulated treatment beams for the CyberKnife and TomoTherapy machines were compared with the dose calculations of the TPS. RESULTS: Both TPSs underestimate the dose far from the target volume. Quantitatively, the CyberKnife TPS underestimates the dose at 40 cm from the PTV border by a factor of 60, the TomoTherapy TPS by a factor of two. If a 50% dose uncertainty is accepted, the CyberKnife TPS can predict doses down to approximately 10 mGy/treatment Gy, the TomoTherapy TPS down to 0.75 mGy/treatment Gy. The CyberKnife TPS can then be used up to 10 cm from the PTV border, the TomoTherapy TPS up to 35 cm. CONCLUSIONS: We determined that the CyberKnife and TomoTherapy TPSs substantially underestimate the doses far from the treated volume. It is recommended not to use out-of-field doses from the CyberKnife TPS for applications such as modeling of second cancer induction. The TomoTherapy TPS can be used up to 35 cm from the PTV border (for a 390 cm³ PTV).
Abstract:
Melanoma is an aggressive disease with few standard treatment options. The conventional classification system for this disease is based on histological growth patterns, with division into four subtypes: superficial spreading, lentigo maligna, nodular, and acral lentiginous. Major limitations of this classification system are its absence of prognostic importance and its weak correlation with treatment outcomes. Recent preclinical and clinical findings support the notion that melanoma is not one malignant disorder but rather a family of distinct molecular diseases. Incorporation of genetic signatures into the conventional histopathological classification of melanoma has great implications for the development of new and effective treatments. Genes of the mitogen-activated protein kinase (MAPK) pathway harbour alterations sometimes identified in people with melanoma. The mutation Val600Glu in the BRAF oncogene (designated BRAF(V600E)) has been associated with sensitivity in vitro and in vivo to agents that inhibit BRAF(V600E) or MEK (a kinase in the MAPK pathway). Melanomas arising from mucosal, acral, or chronically sun-damaged surfaces sometimes have oncogenic mutations in KIT, against which several inhibitors have shown clinical efficacy. Some uveal melanomas have activating mutations in GNAQ and GNA11, rendering them potentially susceptible to MEK inhibition. These findings suggest that prospective genotyping of patients with melanoma should be used increasingly as we work to develop new and effective treatments for this disease.
What's so special about conversion disorder? A problem and a proposal for diagnostic classification.
Abstract:
Conversion disorder presents a problem for the revisions of DSM-IV and ICD-10, for reasons that are informative about the difficulties of psychiatric classification more generally. Giving up criteria based on psychological aetiology may be a painful sacrifice but it is still the right thing to do.
Abstract:
Computational anatomy with magnetic resonance imaging (MRI) is well established as a noninvasive biomarker of Alzheimer's disease (AD); however, there is less certainty about its dependency on the staging of AD. We use classical group analyses and automated machine learning classification of standard structural MRI scans to investigate AD diagnostic accuracy from the preclinical phase to clinical dementia. Longitudinal data from the Alzheimer's Disease Neuroimaging Initiative were stratified into 4 groups according to clinical status: (1) AD patients; (2) mild cognitive impairment (MCI) converters; (3) MCI nonconverters; and (4) healthy controls. The data were submitted to a support vector machine. The obtained classifier was significantly above chance level (62%) for detecting AD as early as 4 years before conversion from MCI. Voxel-based univariate tests confirmed the plausibility of our findings, detecting a distributed network of hippocampal-temporoparietal atrophy in AD patients. We also identified a subgroup of control subjects with brain structure and cognitive changes highly similar to those observed in AD. Our results indicate that computational anatomy can detect AD substantially earlier than suggested by current models. The demonstrated differential spatial pattern of atrophy between correctly and incorrectly classified AD patients challenges the assumption of a uniform pathophysiological process underlying clinically identified AD.
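The classification step can be illustrated with a minimal sketch. The study's actual pipeline (structural MRI preprocessing, feature extraction, their specific SVM implementation) is not reproduced here; instead, a Pegasos-style linear SVM is trained on made-up "atrophy" features, where the feature values, class means, and hyperparameters are all illustrative assumptions:

```python
import numpy as np

def train_linear_svm(X, y, lam=0.01, epochs=50, seed=0):
    """Minimal Pegasos-style linear SVM; labels y must be in {-1, +1}."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.zeros(d)
    t = 0
    for _ in range(epochs):
        for i in rng.permutation(n):
            t += 1
            eta = 1.0 / (lam * t)              # decaying step size
            if y[i] * (X[i] @ w) < 1:          # margin violation: hinge-loss step
                w = (1 - eta * lam) * w + eta * y[i] * X[i]
            else:                              # only shrink (regularization)
                w = (1 - eta * lam) * w
    return w

# hypothetical z-scored regional grey-matter features (atrophy lowers them)
rng = np.random.default_rng(0)
controls = rng.normal(+1.0, 1.0, (50, 5))
patients = rng.normal(-1.0, 1.0, (50, 5))
X = np.vstack([controls, patients])
y = np.array([+1] * 50 + [-1] * 50)

w = train_linear_svm(X, y)
accuracy = float(np.mean(np.sign(X @ w) == y))
```

On these well-separated synthetic classes the resubstitution accuracy is high; real MRI-based classification, as the abstract notes, operates much closer to chance level when predicting conversion years in advance.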
Abstract:
BACKGROUND: Decision curve analysis has been introduced as a method to evaluate prediction models in terms of their clinical consequences when used for a binary classification of subjects into a group who should and a group who should not be treated. The key concept for this type of evaluation is the "net benefit", a concept borrowed from utility theory. METHODS: We recall the foundations of decision curve analysis and discuss some new aspects. First, we stress the formal distinction between the net benefit for the treated and for the untreated and define the concept of the "overall net benefit". Next, we revisit the important distinction between the concept of accuracy, as typically assessed using the Youden index and a receiver operating characteristic (ROC) analysis, and the concept of utility of a prediction model, as assessed using decision curve analysis. Finally, we provide an explicit implementation of decision curve analysis for the context of case-control studies. RESULTS: We show that the overall net benefit, which combines the net benefit for the treated and the untreated, is a natural measure of the benefit achieved by a model, being invariant with respect to the coding of the outcome and conveying a more comprehensive picture of the situation. Further, within the framework of decision curve analysis, we illustrate the important difference between the accuracy and the utility of a model, demonstrating how poor an accurate model may be in terms of its net benefit. Finally, we show that decision curve analysis can be applied to case-control studies, where an accurate estimate of the true prevalence of a disease cannot be obtained from the data, with a few modifications to the original calculation procedure. CONCLUSIONS: We present several interrelated extensions to decision curve analysis that will both facilitate its interpretation and broaden its potential area of application.
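The standard net-benefit quantities underlying decision curve analysis can be made concrete with a small worked example. The cohort counts below are made up; the formulas are the usual ones (net benefit for the treated: TP/n − FP/n × pt/(1 − pt), with the mirror-image form for the untreated):

```python
def net_benefit_treated(tp, fp, n, pt):
    """Net benefit for the treated at threshold probability pt."""
    return tp / n - (fp / n) * pt / (1 - pt)

def net_benefit_untreated(tn, fn, n, pt):
    """Net benefit for the untreated (mirror-image quantity)."""
    return tn / n - (fn / n) * (1 - pt) / pt

# toy cohort of 100 subjects, 25 with the event, threshold pt = 0.2;
# the model flags 30 subjects for treatment, of whom 20 truly have the event
nb_model = net_benefit_treated(tp=20, fp=10, n=100, pt=0.2)      # 0.175
# "treat all" strategy: everyone flagged, so tp = 25 and fp = 75
nb_treat_all = net_benefit_treated(tp=25, fp=75, n=100, pt=0.2)  # 0.0625
```

At this threshold the model's net benefit exceeds that of treating everyone, which is exactly the kind of comparison a decision curve traces out across a range of threshold probabilities.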
Abstract:
BACKGROUND: Surveillance of multiple congenital anomalies is considered more sensitive for the detection of new teratogens than surveillance of all or isolated congenital anomalies. The current literature proposes manual review of all cases for classification into isolated or multiple congenital anomalies. METHODS: Multiple anomalies were defined as two or more major congenital anomalies, excluding sequences and syndromes. A computer algorithm for classifying major congenital anomaly cases in the EUROCAT database according to International Classification of Diseases, 10th revision (ICD-10) codes was programmed, further developed, and implemented for 1 year's data (2004) from 25 registries. The cases classified as potential multiple congenital anomalies were manually reviewed by three geneticists to reach a final agreement on classification as "multiple congenital anomaly" cases. RESULTS: A total of 17,733 cases with major congenital anomalies were reported, giving an overall prevalence of major congenital anomalies of 2.17%. The computer algorithm classified 10.5% of all cases as "potentially multiple congenital anomalies". After manual review of these cases, 7% were agreed to have true multiple congenital anomalies. Furthermore, the algorithm classified 15% of all cases as having chromosomal anomalies, 2% as monogenic syndromes, and 76% as isolated congenital anomalies. The proportion of multiple anomalies varies by congenital anomaly subgroup, reaching 35% for cases with bilateral renal agenesis. CONCLUSIONS: The implementation of the EUROCAT computer algorithm is a feasible, efficient, and transparent way to improve classification of congenital anomalies for surveillance and research.
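The shape of such a code-based triage step can be sketched as follows. This is a heavily simplified, hypothetical stand-in, not the EUROCAT algorithm: the real flowchart relies on curated exclusion lists of minor anomalies, sequences, and syndromes, whereas this sketch uses crude ICD-10 prefix checks purely for illustration:

```python
# ICD-10 rubrics Q90-Q99 cover chromosomal abnormalities (Q94 is unassigned)
CHROMOSOMAL = ("Q90", "Q91", "Q92", "Q93", "Q95", "Q96", "Q97", "Q98", "Q99")

def classify_case(icd10_codes):
    """Crude triage of one case's ICD-10 codes (illustrative only)."""
    codes = {c.upper() for c in icd10_codes}
    if any(c.startswith(CHROMOSOMAL) for c in codes):
        return "chromosomal"
    # count distinct congenital-anomaly rubrics (3-character Q codes),
    # pretending here that every Q code is a major anomaly
    major = {c[:3] for c in codes if c.startswith("Q")}
    if len(major) >= 2:
        return "potential multiple"  # would be sent to manual genetic review
    return "isolated"

print(classify_case(["Q05.1", "Q21.0"]))   # potential multiple
print(classify_case(["Q21.0", "Q21.1"]))   # isolated (same rubric)
print(classify_case(["Q90.9"]))            # chromosomal
```

The key design point mirrored from the abstract is that the algorithm only flags *potential* multiples; the final "multiple congenital anomaly" label is assigned by geneticists on review.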
Abstract:
During the past decades, anticancer immunotherapy has evolved from a promising therapeutic option to a robust clinical reality. Many immunotherapeutic regimens are now approved by the US Food and Drug Administration and the European Medicines Agency for use in cancer patients, and many others are being investigated as standalone therapeutic interventions or combined with conventional treatments in clinical studies. Immunotherapies may be subdivided into "passive" and "active" based on their ability to engage the host immune system against cancer. Since the anticancer activity of most passive immunotherapeutics (including tumor-targeting monoclonal antibodies) also relies on the host immune system, this classification does not properly reflect the complexity of the drug-host-tumor interaction. Alternatively, anticancer immunotherapeutics can be classified according to their antigen specificity. While some immunotherapies specifically target one (or a few) defined tumor-associated antigen(s), others operate in a relatively non-specific manner and boost natural or therapy-elicited anticancer immune responses of unknown and often broad specificity. Here, we propose a critical, integrated classification of anticancer immunotherapies and discuss the clinical relevance of these approaches.