992 results for CLASSIFICATION CRITERIA
Abstract:
Defining an efficient training set is one of the most delicate phases for the success of remote sensing image classification routines. The complexity of the problem, limited temporal and financial resources, as well as high intraclass variance, can make an algorithm fail if it is trained with a suboptimal dataset. Active learning aims at building efficient training sets by iteratively improving model performance through sampling. A user-defined heuristic ranks the unlabeled pixels according to a function of the uncertainty of their class membership, and the user is then asked to provide labels for the most uncertain pixels. This paper reviews and tests the main families of active learning algorithms: committee-based, large-margin and posterior probability-based. For each of them, the most recent advances in the remote sensing community are discussed and some heuristics are detailed and tested. Several challenging remote sensing scenarios are considered, including very high spatial resolution and hyperspectral image classification. Finally, guidelines for choosing a suitable architecture are provided for new and/or inexperienced users.
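As a concrete illustration of the kind of uncertainty heuristic described above, the following sketch ranks unlabeled pixels by the margin between their two highest class posteriors (a posterior probability-based "breaking ties" criterion). The function name and the toy posteriors are illustrative assumptions, not taken from the paper.

```python
# Hedged sketch of a posterior probability-based active learning
# heuristic (smallest-margin / "breaking ties" sampling). The toy
# posteriors below are invented for illustration.

def breaking_ties_rank(posteriors):
    """Rank unlabeled samples by the margin between their two most
    probable classes: the smaller the margin, the more uncertain the
    sample, so it should be queried (labeled by the user) first."""
    margins = []
    for idx, probs in enumerate(posteriors):
        top2 = sorted(probs, reverse=True)[:2]
        margins.append((top2[0] - top2[1], idx))
    # Most uncertain (smallest margin) first
    return [idx for _, idx in sorted(margins)]

# Toy class posteriors for three unlabeled pixels over three classes
posteriors = [
    [0.90, 0.05, 0.05],  # confident prediction: queried last
    [0.40, 0.35, 0.25],  # near tie between two classes: queried first
    [0.60, 0.30, 0.10],
]
print(breaking_ties_rank(posteriors))  # most uncertain pixel first
```

At each active learning iteration, the top-ranked pixels would be labeled by the user, added to the training set, and the classifier retrained.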
Abstract:
This thesis presents the results of research carried out in the indigenous Tsimane' communities of the Bolivian Amazon. The research studies the indigenous people's perception of the ethnoclassification of their territory. A classification key is established, and the importance of the landscape elements of the Tsimane' territory is determined according to local perception. This information will make it possible to integrate local knowledge into integral development and territorial planning programmes in the Bolivian Amazon. The study concludes that the Tsimane' population classifies the landscape elements of their environment into 89 patches, each defined by a dominant tree species and included in one or more of the nine identified landscapes: Därsi Därä, Sajras, Sinues Ojñi', Mayes, Múcúya, Tsäquis Därä, Cum, Tajñi' and Jaman. Using a multicriteria analysis, a total importance was determined for each landscape according to the following importance criteria: patch diversity, feasible economic activities, spiritual presence, individual perception, and relative importance with respect to the other landscapes. The most important landscape was thus found to be the Därsi Därä (primary forest characterized by an upper tree stratum over 50 metres tall). The data were also analysed by the gender of the interviewee and by the proximity of the studied communities to the nearest town.
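The multicriteria aggregation described above can be sketched as a weighted sum of per-criterion scores for each landscape. The criterion names follow the abstract; the scores and the equal weights are invented for illustration and are not the thesis' actual values.

```python
# Hedged sketch of a multicriteria importance score. Criterion names
# come from the abstract; scores (0-1) and equal weights are invented.

CRITERIA = ["patch diversity", "economic activities", "spiritual presence",
            "individual perception", "relative importance"]

def total_importance(scores, weights=None):
    """Aggregate per-criterion scores (each in [0, 1]) into a single
    total importance value; defaults to equal weights."""
    if weights is None:
        weights = [1 / len(scores)] * len(scores)
    return sum(s * w for s, w in zip(scores, weights))

# Invented example: a landscape scoring high on every criterion
print(total_importance([0.9, 0.8, 1.0, 0.9, 0.95]))
```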
Abstract:
BACKGROUND: The aim of this study was to assess, at the European level and using digital technology, the inter-pathologist reproducibility of the ISHLT 2004 system and to compare it with the 1990 system. We also assessed the reproducibility of the morphologic criteria for the diagnosis of antibody-mediated rejection detailed in the 2004 grading system. METHODS: The hematoxylin-eosin-stained sections of 20 sets of endomyocardial biopsies were pre-selected and graded by two pathologists (A.A. and M.B.) and digitized using a telepathology system (Aperio ImageScope System; for details refer to http://aperio.com/). Their diagnoses were considered the index diagnoses, which covered all grades of acute cellular rejection (ACR), early ischemic lesions, Quilty lesions, late ischemic lesions and (in the 2005 system) antibody-mediated rejection (AMR). Eighteen pathologists from 16 heart transplant centers in 7 European countries participated in the study. Inter-observer reproducibility was assessed using Fleiss's kappa and Krippendorff's alpha statistics. RESULTS: The combined kappa value of all grades diagnosed by all 18 pathologists was 0.31 for the 1990 grading system and 0.39 for the 2005 grading system, with alpha statistics at 0.57 and 0.55, respectively. Kappa values by grade for 1990/2005, respectively, were: 0 = 0.52/0.51; 1A/1R = 0.24/0.36; 1B = 0.15; 2 = 0.13; 3A/2R = 0.29/0.29; 3B/3R = 0.13/0.23; and 4 = 0.18. For the 2 cases of AMR, 6 of 18 pathologists correctly suspected AMR on the hematoxylin-eosin slides, whereas in each of 17 of the 18 AMR-negative cases a small percentage of pathologists (range 5% to 33%) overinterpreted the findings as suggestive of AMR. CONCLUSIONS: Reproducibility studies of cardiac biopsies by pathologists in different centers at the international level were feasible using digitized slides rather than conventional histology glass slides.
There was a small improvement in interobserver agreement between pathologists of different European centers when moving from the 1990 ISHLT classification to the "new" 2005 ISHLT classification. Morphologic suspicion of AMR in the 2004 system, based on hematoxylin-eosin-stained slides alone, was poor, highlighting the need for better standardization of the morphologic criteria for AMR. Ongoing educational programs are needed to ensure standardization of the diagnosis of both acute cellular and antibody-mediated rejection.
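The agreement statistic used in this study, Fleiss's kappa, can be computed from a subjects-by-categories count table with the standard formula, sketched below in Python. The toy tables in the test are illustrative only and are not the study's data.

```python
# Standard Fleiss' kappa for multiple raters assigning categorical
# grades to the same set of subjects (e.g. biopsies). This is the
# textbook formula, not code from the study.

def fleiss_kappa(table):
    """Fleiss' kappa for a subjects x categories count table, where
    each row sums to the (constant) number of raters per subject."""
    N = len(table)      # number of subjects (e.g. biopsy sets)
    n = sum(table[0])   # raters per subject (e.g. 18 pathologists)
    k = len(table[0])   # number of categories (e.g. rejection grades)
    # Overall proportion of ratings falling in each category
    p = [sum(row[j] for row in table) / (N * n) for j in range(k)]
    # Per-subject observed agreement
    P = [(sum(c * c for c in row) - n) / (n * (n - 1)) for row in table]
    P_bar = sum(P) / N               # mean observed agreement
    P_e = sum(pj * pj for pj in p)   # chance-expected agreement
    return (P_bar - P_e) / (1 - P_e)
```

A kappa of 1 indicates perfect agreement, 0 indicates chance-level agreement, so the combined values of 0.31 and 0.39 reported above correspond to fair agreement.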
Abstract:
Currently, the most widely used criteria for assessing response to therapy in high-grade gliomas are based on two-dimensional tumor measurements on computed tomography (CT) or magnetic resonance imaging (MRI), in conjunction with clinical assessment and corticosteroid dose (the Macdonald Criteria). It is increasingly apparent that there are significant limitations to these criteria, which only address the contrast-enhancing component of the tumor. For example, chemoradiotherapy for newly diagnosed glioblastomas results in a transient increase in tumor enhancement (pseudoprogression) in 20% to 30% of patients, which is difficult to differentiate from true tumor progression. Antiangiogenic agents produce high radiographic response rates, as defined by a rapid decrease in contrast enhancement on CT/MRI that occurs within days of initiation of treatment and that is partly a result of reduced vascular permeability to contrast agents rather than a true antitumor effect. In addition, a subset of patients treated with antiangiogenic agents develop tumor recurrence characterized by an increase in the nonenhancing component depicted on T2-weighted/fluid-attenuated inversion recovery sequences. The recognition that contrast enhancement is nonspecific and may not always be a true surrogate of tumor response, together with the need to account for the nonenhancing component of the tumor, mandates that new criteria be developed and validated to permit accurate assessment of the efficacy of novel therapies. The Response Assessment in Neuro-Oncology Working Group is an international effort to develop new standardized response criteria for clinical trials in brain tumors. In this proposal, we present the recommendations for updated response criteria for high-grade gliomas.
Abstract:
BACKGROUND: Increasing the appropriateness of use of upper gastrointestinal (GI) endoscopy is important to improve quality of care while at the same time containing costs. This study explored whether detailed explicit appropriateness criteria significantly improve the diagnostic yield of upper GI endoscopy. METHODS: Consecutive patients referred for upper GI endoscopy at 6 centers (1 university hospital, 2 district hospitals, 3 gastroenterology practices) were prospectively included over a 6-month period. After controlling for disease presentation and patient characteristics, the relationship between the appropriateness of upper GI endoscopy, as assessed by explicit Swiss criteria developed by the RAND/UCLA panel method, and the presence of relevant endoscopic lesions was analyzed. RESULTS: A total of 2088 patients (60% outpatients, 57% men) were included. Analysis was restricted to the 1681 patients referred for diagnostic upper GI endoscopy. Forty-six percent of upper GI endoscopies were judged to be appropriate, 15% uncertain, and 39% inappropriate by the explicit criteria. No cancer was found in upper GI endoscopies judged to be inappropriate. Upper GI endoscopies judged appropriate or uncertain yielded significantly more relevant lesions (60%) than did those judged to be inappropriate (37%; odds ratio 2.6, 95% CI [2.2, 3.2]). In multivariate analyses, the diagnostic yield of upper GI endoscopy was significantly influenced by appropriateness, patient gender and age, treatment setting, and symptoms. CONCLUSIONS: Upper GI endoscopies performed for appropriate indications resulted in detecting significantly more clinically relevant lesions than did those performed for inappropriate indications. In addition, no upper GI endoscopy that resulted in a diagnosis of cancer was judged to be inappropriate. The use of such criteria improves patient selection for upper GI endoscopy and can thus contribute to efforts aimed at enhancing the quality and efficiency of care.
(Gastrointest Endosc 2000;52:333-41).
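The effect size reported above (an odds ratio with a 95% confidence interval) can be computed from a 2x2 table with the standard Wald method, sketched below. The cell counts in the usage example are invented for illustration and are not the study's data.

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and Wald 95% CI from a 2x2 table:
        a = exposed with outcome,      b = exposed without outcome,
        c = unexposed with outcome,    d = unexposed without outcome.
    Here 'exposed' could mean 'endoscopy judged appropriate' and
    'outcome' a relevant lesion (illustrative mapping only)."""
    or_ = (a * d) / (b * c)
    # Standard error of log(OR), then exponentiate the log-scale bounds
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Invented counts, not the study's data
or_, lo, hi = odds_ratio_ci(60, 40, 37, 63)
print(f"OR = {or_:.2f}, 95% CI [{lo:.2f}, {hi:.2f}]")
```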
Abstract:
Five to 10% of all fractures present with delayed union, whereas 1 to 5% progress to nonunion, which can be defined as a fracture that is older than 6 months and lacks any potential to heal without further intervention. Different fracture- and patient-related risk factors exist, and the management of a nonunion requires a thorough clinical, radiological and biological workup to classify it into one of two main categories: viable nonunions, which essentially need more stability, usually through more rigid fixation, and non-viable nonunions, which essentially need biological stimulation by decortication and bone grafting. This treatment remains the first choice, with bony healing obtained in 85 to 95% of cases, but it also carries certain risks, and some valuable alternatives exist if chosen on the basis of strict criteria.
Abstract:
Background Individual signs and symptoms are of limited value for the diagnosis of influenza. Objective To develop a decision tree for the diagnosis of influenza based on a classification and regression tree (CART) analysis. Methods Data from two previous similar cohort studies were assembled into a single dataset. The data were randomly divided into a development set (70%) and a validation set (30%). We used CART analysis to develop three models that maximize the number of patients who do not require diagnostic testing prior to treatment decisions. The validation set was used to evaluate overfitting of the model to the training set. Results Model 1 has seven terminal nodes based on temperature, the onset of symptoms and the presence of chills, cough and myalgia. Model 2 was a simpler tree with only two splits based on temperature and the presence of chills. Model 3 was developed with temperature as a dichotomous variable (≥38°C) and had only two splits based on the presence of fever and myalgia. The area under the receiver operating characteristic curves (AUROCC) for the development and validation sets, respectively, were 0.82 and 0.80 for Model 1, 0.75 and 0.76 for Model 2 and 0.76 and 0.77 for Model 3. Model 2 classified 67% of patients in the validation group into a high- or low-risk group compared with only 38% for Model 1 and 54% for Model 3. Conclusions A simple decision tree (Model 2) classified two-thirds of patients as low or high risk and had an AUROCC of 0.76. After further validation in an independent population, this CART model could support clinical decision making regarding influenza, with low-risk patients requiring no further evaluation for influenza and high-risk patients being candidates for empiric symptomatic or drug therapy.
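The structure of a two-split tree like Model 2 can be sketched directly in code. The abstract states that Model 2 splits on temperature and the presence of chills; the 38 °C threshold (taken from Model 3's dichotomization) and the risk labels are illustrative assumptions, not the fitted values of Model 2.

```python
# Hedged sketch of a two-split CART-style decision tree in the spirit
# of Model 2 (splits on temperature and chills). Threshold and labels
# are illustrative assumptions, not the paper's fitted tree.

def classify_influenza_risk(temperature_c, has_chills):
    """Return a coarse influenza risk class from two predictors."""
    if temperature_c >= 38.0:      # first split: fever
        if has_chills:             # second split: chills present
            return "high risk"
        return "intermediate risk"
    return "low risk"

print(classify_influenza_risk(39.0, True))   # febrile with chills
print(classify_influenza_risk(37.0, False))  # afebrile, no chills
```

In the clinical workflow suggested by the conclusions, "low risk" patients would need no further influenza evaluation, while "high risk" patients would be candidates for empiric therapy.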
Abstract:
Sampling issues represent a topic of ongoing interest to the forensic science community, essentially because of their crucial role in laboratory planning and working protocols. For this purpose, the forensic literature has described thorough (Bayesian) probabilistic sampling approaches, which are now widely implemented in practice. They make it possible, for instance, to obtain probability statements that parameters of interest (e.g., the proportion of a seizure of items that present particular features, such as an illegal substance) satisfy particular criteria (e.g., a threshold or an otherwise limiting value). Currently, there are many approaches that allow one to derive probability statements relating to a population proportion, but questions of how a forensic decision maker - typically a client of a forensic examination or a scientist acting on behalf of a client - ought actually to decide about a proportion or a sample size have remained largely unexplored to date. The research presented here addresses methodology from decision theory that may help to cope usefully with the wide range of sampling issues typically encountered in forensic science applications. The procedures explored in this paper enable scientists to address a variety of concepts such as the (net) value of sample information, the (expected) value of sample information or the (expected) decision loss. All of these aspects relate directly to questions that are regularly encountered in casework. Besides probability theory and Bayesian inference, the proposed approach requires some additional elements from decision theory that may increase the effort needed for practical implementation. In view of this challenge, the present paper emphasises the merits of graphical modelling concepts, such as decision trees and Bayesian decision networks, which can support forensic scientists in applying the methodology in practice. How this may be achieved is illustrated with several examples.
The graphical devices invoked here also serve the purpose of supporting the discussion of the similarities, differences and complementary aspects of existing Bayesian probabilistic sampling criteria and the decision-theoretic approach proposed throughout this paper.
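The kind of probability statement about a seizure proportion mentioned above can be sketched with a standard Beta-Binomial model: given k "positive" items in a sample of n, the posterior for the population proportion is Beta(alpha + k, beta + n - k), and the probability that the proportion exceeds a threshold follows directly. The Monte Carlo estimation below and the uniform Beta(1, 1) prior are generic illustrative choices, not the paper's specific procedures.

```python
import random

# Hedged sketch: posterior probability that a seizure's proportion of
# items with a given feature (e.g. an illegal substance) exceeds a
# threshold, under a conjugate Beta-Binomial model. Monte Carlo is
# used here only to stay dependency-free; a closed form also exists.

def prob_proportion_exceeds(k, n, threshold,
                            alpha=1.0, beta=1.0,
                            draws=100_000, seed=0):
    """P(theta > threshold | k positives in n sampled items),
    with a Beta(alpha, beta) prior on the proportion theta."""
    rng = random.Random(seed)
    post_a, post_b = alpha + k, beta + (n - k)
    hits = sum(rng.betavariate(post_a, post_b) > threshold
               for _ in range(draws))
    return hits / draws

# Invented example: 9 of 10 inspected items test positive; how
# probable is it that over half of the whole seizure is positive?
print(prob_proportion_exceeds(9, 10, 0.5))
```

A decision-theoretic extension, as proposed in the paper, would weigh such posterior probabilities against the losses attached to each possible decision and the cost of sampling further items.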
Abstract:
The assessment of medical technologies has to answer several questions ranging from safety and effectiveness to complex economical, social, and health policy issues. The type of data needed to carry out such evaluation depends on the specific questions to be answered, as well as on the stage of development of a technology. Basically two types of data may be distinguished: (a) general demographic, administrative, or financial data which has been collected not specifically for technology assessment; (b) the data collected with respect either to a specific technology or to a disease or medical problem. On the basis of a pilot inquiry in Europe and bibliographic research, the following categories of type (b) data bases have been identified: registries, clinical data bases, banks of factual and bibliographic knowledge, and expert systems. Examples of each category are discussed briefly. The following aims for further research and practical goals are proposed: criteria for the minimal data set required, improvement to the registries and clinical data banks, and development of an international clearinghouse to enhance information diffusion on both existing data bases and available reports on medical technology assessments.
Abstract:
The evolution of policing systems gives a predominant place to information and intelligence. This transformation requires developing and maintaining a set of permanent crime-analysis processes, in particular to deal with repetitive or serious events. In an organization with limited resources, the time devoted to collecting data and to their coding and integration reduces the time available for analysing and disseminating intelligence. The collection and integration phases nevertheless remain indispensable, since analysis is not possible on large volumes of unstructured data. Until now, these analysis problems have been addressed mainly through specialized approaches (hot-spot computations, data mining, ...) or along a single axis (for example, the behavioural sciences). This research takes a different angle: an interdisciplinary approach was adopted. The continual increase in the quantity of data to analyse tends to reduce the capacity to analyse the available information. A good partitioning (classification) of the problems encountered makes it possible to restrict analyses to relevant data. These classes are essential for structuring the memory of the analysis system. Police crime statistics should already have answered these questions of partitioning delinquency (legal classification). This decomposition was compared with the needs of a permanent crime-monitoring system. The research confirms that our efforts to understand the nature and distribution of crime run up against an obstacle, namely that the legal definition of forms of criminality is not suited to their analysis and study. For nearly twenty years, the police forces of French-speaking Switzerland have been using and developing a classification system based on police experience (partitioning by phenomenon).
This research proposes to interpret this system within the framework of situational approaches (theoretical approach) and to confront it with the available "statistical" data in order to verify its ability to distinguish forms of criminality. The research is limited to residential burglaries, a frequent repetitive offence. Opportunity theory holds that at least the following three factors must come together in time and space: a potential offender, an attractive target and the absence of a guardian capable of preventing or stopping the act. The offence is thus only possible in certain circumstances, that is, in a well-defined context. Identifying these contexts makes it possible to categorize criminality. Each case is unique, but a group of cases shows similarities. For example, certain conditions in certain environments attract certain types of burglars. Two hypotheses were tested. The first is that residential burglaries are not uniformly distributed across the classes formed by "situational parameters"; the second is that niches appear when the different parameters are cross-tabulated and that they correspond to the classification put in place by the Vaud judicial coordination and the CICOP. The Vaud database of burglaries recorded by the police between 1997 and 2006 was used (25,369 cases). Specific situations were brought to light, and they correspond to the empirically defined classes. In a second phase, the link between a specific situation and an offender's activity within that same situation was verified. The observations made in this research indicate that burglars are active in niches. Several serial offenders committed offences outside their niche, but the number of these offences is small compared with the number committed within the niche.
A classification system that corresponds to criminal realities makes it possible to decompose events and to set up an "intelligent" alert and monitoring system. A new series within a phenomenon will be detected by an increase in the number of cases of that phenomenon, in particular in a given region and period. Mixed in among all offences, this new series would not necessarily be detectable, especially if it moves. Finally, cooperation between operational criminal intelligence structures in French-speaking Switzerland was improved by the development of a common information platform, into which the classification system was fully integrated.
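The first hypothesis above (that burglaries are not uniformly distributed across situational classes) is the kind of claim commonly checked with a chi-square goodness-of-fit test against a uniform distribution. The sketch below computes only the test statistic; the counts and the critical value (7.815 for 3 degrees of freedom at the 5% level) are illustrative, not the thesis' actual figures.

```python
# Hedged sketch: chi-square statistic of observed case counts per
# situational class against a uniform distribution. Counts below are
# invented; the thesis' own analysis may differ in method and detail.

def chi_square_uniform(counts):
    """Chi-square statistic for observed counts vs. a uniform
    expectation across the same number of classes."""
    total = sum(counts)
    expected = total / len(counts)
    return sum((c - expected) ** 2 / expected for c in counts)

# Invented counts for four situational classes; a large statistic
# (vs. the critical value 7.815 at df=3, alpha=0.05) would reject
# uniformity, i.e. support the existence of niches.
print(chi_square_uniform([400, 50, 30, 20]))
```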
Abstract:
Lipids available in fingermark residue represent important targets for enhancement and dating techniques. While it is well known that lipid composition varies among fingermarks of the same donor (intra-variability) and between fingermarks of different donors (inter-variability), the extent of this variability remains uncharacterised. Thus, this work aimed at studying qualitatively and quantitatively the initial lipid composition of the fingermark residue of 25 different donors. Among the 104 detected lipids, 43 were reported for the first time in the literature. Furthermore, palmitic acid, squalene, cholesterol, myristyl myristate and myristyl myristoleate were quantified, and their correlation within fingermark residue was highlighted. Ten compounds were then selected and further studied as potential targets for dating or enhancement techniques. It was shown that their relative standard deviation was significantly lower for the intra-variability than for the inter-variability. Moreover, the use of data pretreatments could significantly reduce this variability. Based on these observations, an objective donor classification model was proposed. Hierarchical cluster analysis was conducted on the pre-treated data, and the fingermarks of the 25 donors were classified into two main groups, corresponding to "poor" and "rich" lipid donors. The robustness of this classification was tested using fingermark replicates of selected donors: 86% of these replicates were correctly classified, showing the potential of such a donor classification model for research purposes in order to select representative donors based on compounds of interest.
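The grouping step described above (hierarchical cluster analysis splitting donors into "rich" and "poor" lipid groups) can be illustrated with a minimal single-linkage agglomerative clustering on one variable. The study clustered several pre-treated variables; this one-dimensional version with invented values is only a sketch of the principle.

```python
# Hedged sketch: agglomerative (single-linkage) clustering of donors
# into two groups based on a single quantified compound amount.
# Values are invented; the study used hierarchical cluster analysis
# on several pre-treated lipid variables.

def single_linkage_two_groups(values):
    """Start with one cluster per value, repeatedly merge the two
    closest clusters, and stop when two clusters remain."""
    clusters = [[v] for v in sorted(values)]
    while len(clusters) > 2:
        # In 1-D on sorted data, single-linkage distance between
        # adjacent clusters is the gap between their facing endpoints.
        gaps = [clusters[i + 1][0] - clusters[i][-1]
                for i in range(len(clusters) - 1)]
        i = gaps.index(min(gaps))
        clusters[i:i + 2] = [clusters[i] + clusters[i + 1]]
    return clusters

# Invented lipid amounts for five donors: three "poor", two "rich"
print(single_linkage_two_groups([1.0, 1.2, 1.1, 9.0, 8.5]))
```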
Abstract:
In recent years, kernel methods have proven to be very powerful tools in many application domains in general and in remote sensing image classification in particular. The special characteristics of remote sensing images (high dimensionality, few labeled samples and different noise sources) are efficiently dealt with by kernel machines. In this paper, we propose the use of structured output learning to improve kernel-based remote sensing image classification. Structured output learning is concerned with the design of machine learning algorithms that not only implement the input-output mapping, but also take into account the relations between output labels, thus generalizing unstructured kernel methods. We analyze the framework and introduce it to the remote sensing community. Output similarity is here encoded into SVM classifiers by modifying the model loss function and the kernel function, either independently or jointly. Experiments on a very high resolution (VHR) image classification problem show promising results and open a wide field of research with structured output kernel methods.
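One way to encode output similarity into a kernel, in the spirit of the joint input-output kernels used in structured output learning, is to multiply an input kernel by a label-similarity term. The RBF input kernel, the label-similarity matrix and all names below are illustrative assumptions, not the paper's actual formulation.

```python
import math

# Hedged sketch of a joint input-output kernel: input similarity
# (RBF) weighted by a label-similarity matrix, so that related output
# classes are treated as closer than unrelated ones. All values are
# invented for illustration.

def rbf(x, x2, gamma=0.5):
    """Standard RBF kernel between two feature vectors."""
    return math.exp(-gamma * sum((a - b) ** 2 for a, b in zip(x, x2)))

def joint_kernel(x, y, x2, y2, label_sim):
    """Joint kernel over (input, label) pairs: input similarity
    multiplied by the similarity of the two output labels."""
    return rbf(x, x2) * label_sim[y][y2]

# Toy label similarity for three land-cover classes: classes 0 and 1
# are related (e.g. two vegetation types), class 2 is unrelated
label_sim = [
    [1.0, 0.5, 0.0],
    [0.5, 1.0, 0.0],
    [0.0, 0.0, 1.0],
]
print(joint_kernel([0.2, 0.4], 0, [0.2, 0.4], 1, label_sim))
```

Such a kernel would then be plugged into an SVM in place of the unstructured input-only kernel.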
Abstract:
Descriptive set theory is mainly concerned with studying subsets of the space of all countable binary sequences. In this paper we study the generalization where countable is replaced by uncountable. We explore properties of generalized Baire and Cantor spaces, equivalence relations and their Borel reducibility. The study shows that the descriptive set theory looks very different in this generalized setting compared to the classical, countable case. We also draw the connection between the stability theoretic complexity of first-order theories and the descriptive set theoretic complexity of their isomorphism relations. Our results suggest that Borel reducibility on uncountable structures is a model theoretically natural way to compare the complexity of isomorphism relations.
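The central notion invoked here, Borel reducibility, can be recalled in standard notation (this is the classical definition, stated generically rather than taken from the paper): an equivalence relation E on a space X is Borel reducible to an equivalence relation F on a space Y when a Borel map carries E-classes injectively to F-classes.

```latex
% Classical definition of Borel reducibility (standard notation)
E \le_B F
\quad\Longleftrightarrow\quad
\exists\, f : X \to Y \text{ Borel such that }
\forall x, x' \in X:\;
x \mathrel{E} x' \iff f(x) \mathrel{F} f(x').
```

In the generalized setting of the paper, X and Y would be generalized Baire or Cantor spaces of uncountable sequences, with "Borel" interpreted in the corresponding generalized topology.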