930 results for classification algorithm
Abstract:
Chronic liver disease (CLD) is most often an asymptomatic, progressive, and ultimately potentially fatal disease. In this study, an automatic hierarchical procedure to stage CLD using ultrasound images, laboratory tests, and clinical records is described. The first stage of the proposed method, called the clinical-based classifier (CBC), discriminates healthy from pathologic conditions. When nonhealthy conditions are detected, the method refines the results into three exclusive pathologies on a hierarchical basis: 1) chronic hepatitis; 2) compensated cirrhosis; and 3) decompensated cirrhosis. The features used as well as the classifiers (Bayes, Parzen, support vector machine, and k-nearest neighbor) are optimally selected for each stage. A large multimodal feature database was built specifically for this study, containing 30 chronic hepatitis cases, 34 compensated cirrhosis cases, and 36 decompensated cirrhosis cases, all validated after histopathologic analysis by liver biopsy. The CBC classification scheme outperformed the nonhierarchical one-against-all scheme, achieving an overall accuracy of 98.67% for the normal detector, 87.45% for the chronic hepatitis detector, and 95.71% for the cirrhosis detector.
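As a concrete illustration of the hierarchical cascade described above, here is a minimal Python sketch assuming scikit-learn; the per-stage classifier choices (an RBF SVM and a k-NN) are placeholders rather than the paper's optimally selected ones, and the feature arrays are hypothetical.

```python
# Hypothetical sketch of a two-stage hierarchical classifier cascade.
# X is an (n_samples, n_features) array; y uses 0 for healthy and
# 1..3 for the three pathologies. Classifier choices are illustrative.
import numpy as np
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier

class HierarchicalStager:
    def __init__(self):
        self.normal_detector = SVC(kernel="rbf")                      # stage 1: healthy vs pathologic
        self.pathology_stager = KNeighborsClassifier(n_neighbors=5)   # stage 2: refine pathology

    def fit(self, X, y):
        self.normal_detector.fit(X, (y > 0).astype(int))
        sick = y > 0
        self.pathology_stager.fit(X[sick], y[sick])
        return self

    def predict(self, X):
        out = np.zeros(len(X), dtype=int)
        sick = self.normal_detector.predict(X).astype(bool)
        if sick.any():
            out[sick] = self.pathology_stager.predict(X[sick])
        return out
```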
Abstract:
PURPOSE: Fatty liver disease (FLD) is an increasingly prevalent disease that can be reversed if detected early. Ultrasound is the safest and most ubiquitous method for identifying FLD. Since expert sonographers are required to accurately interpret liver ultrasound images, a lack of such expertise leads to interobserver variability. For more objective interpretation, high accuracy, and quick second opinions, computer-aided diagnostic (CAD) techniques may be exploited. The purpose of this work is to develop one such CAD technique for accurate classification of normal livers and abnormal livers affected by FLD. METHODS: In this paper, the authors present a CAD technique (called Symtosis) that uses a novel combination of significant features based on the texture, wavelet transform, and higher-order spectra of liver ultrasound images in various supervised learning-based classifiers, in order to determine parameters that classify normal and FLD-affected abnormal livers. RESULTS: On evaluating the proposed technique on a database of 58 abnormal and 42 normal liver ultrasound images, the authors achieved a high classification accuracy of 93.3% using the decision tree classifier. CONCLUSIONS: This high accuracy, together with the completely automated classification procedure, makes the proposed technique highly suitable for clinical deployment and usage.
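To make the feature-plus-classifier pipeline concrete, the following is a hedged sketch assuming PyWavelets and scikit-learn; the wavelet-energy features stand in for the paper's full texture/wavelet/higher-order-spectra combination, and all variable names are hypothetical.

```python
# Illustrative sketch: wavelet-energy features feeding a decision tree,
# standing in for the paper's texture/wavelet/HOS feature combination.
import numpy as np
import pywt
from sklearn.tree import DecisionTreeClassifier

def wavelet_energy_features(img, wavelet="db4", level=2):
    """Subband energies of a 2-D wavelet decomposition as a feature vector."""
    coeffs = pywt.wavedec2(img, wavelet, level=level)
    feats = [np.mean(coeffs[0] ** 2)]            # approximation energy
    for (cH, cV, cD) in coeffs[1:]:              # detail energies per level
        feats += [np.mean(c ** 2) for c in (cH, cV, cD)]
    return np.array(feats)

# images: list of 2-D ultrasound ROIs; labels: 0 = normal, 1 = FLD (assumed data)
# X = np.stack([wavelet_energy_features(im) for im in images])
# clf = DecisionTreeClassifier().fit(X, labels)
```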
Abstract:
In this work, the identification and diagnosis of various stages of chronic liver disease is addressed. The classification results of a support vector machine, a decision tree, and a k-nearest neighbor classifier are compared. Ultrasound image intensity and textural features are used jointly with clinical and laboratory data in the staging process. The classifiers are trained using a population of 97 patients at six different stages of chronic liver disease and a leave-one-out cross-validation strategy. The best results are obtained with the support vector machine with a radial-basis kernel, reaching 73.20% overall accuracy. The good performance of the method is a promising indicator that it can be used, in a noninvasive way, to provide reliable information about chronic liver disease staging.
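The evaluation protocol can be sketched as follows, assuming scikit-learn; the feature matrix and labels here are random placeholders for the 97-patient multimodal data.

```python
# Sketch of the evaluation protocol: leave-one-out cross-validation of an
# RBF-kernel SVM on a combined feature matrix (placeholder X, y below).
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import LeaveOneOut, cross_val_score

# X: intensity/texture features joined with clinical and laboratory data;
# y: stage labels (0..5) for the 97 patients. Both are synthetic here.
X, y = np.random.rand(97, 20), np.random.randint(0, 6, 97)
acc = cross_val_score(SVC(kernel="rbf"), X, y, cv=LeaveOneOut()).mean()
print(f"LOOCV accuracy: {acc:.2%}")
```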
Abstract:
In this work, the liver contour is semi-automatically segmented and quantified in order to help identify and diagnose diffuse liver disease. The features extracted from the liver contour are used jointly with clinical and laboratory data in the staging process. The classification results of a support vector machine, a Bayesian classifier, and a k-nearest neighbor classifier are compared. A population of 88 patients at five different stages of diffuse liver disease and a leave-one-out cross-validation strategy are used in the classification process. The best results are obtained using the k-nearest neighbor classifier, with an overall accuracy of 80.68%. The good performance of the proposed method is a reliable indicator that it can improve the information available for staging diffuse liver disease.
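A hypothetical sketch of contour quantification feeding a k-NN classifier is shown below; the shape descriptors chosen here are illustrative and may differ from the paper's actual contour features.

```python
# Hypothetical sketch: simple shape descriptors from a segmented liver
# contour, to be classified with k-NN alongside clinical/laboratory data.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def contour_features(points):
    """points: (N, 2) array of contour coordinates (closed curve)."""
    centroid = points.mean(axis=0)
    radii = np.linalg.norm(points - centroid, axis=1)
    # close the contour by appending the first point before differencing
    steps = np.diff(points, axis=0, append=points[:1])
    perimeter = np.sum(np.linalg.norm(steps, axis=1))
    return np.array([radii.mean(), radii.std(), radii.std() / radii.mean(), perimeter])

# X = per-patient contour features concatenated with clinical data;
# clf = KNeighborsClassifier(n_neighbors=5).fit(X, y)
```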
Abstract:
Purpose: To describe and compare the content of instruments that assess environmental factors using the International Classification of Functioning, Disability and Health (ICF). Methods: A systematic search of PubMed, CINAHL and PEDro databases was conducted using a pre-determined search strategy. The identified instruments were screened independently by two investigators, and meaningful concepts were linked to the most precise ICF category according to published linking rules. Results: Six instruments were included, containing 526 meaningful concepts. Instruments had between 20% and 98% of items linked to categories in Chapter 1. The highest percentage of items from one instrument linked to categories in Chapters 2–5 varied between 9% and 50%. The presence or absence of environmental factors in a specific context is assessed in three instruments, while the other three assess the intensity of the impact of environmental factors. Discussion: Instruments differ in their content and type of assessment, and several of their items are linked to the same ICF category. Most instruments primarily assess products and technology (Chapter 1), highlighting the need to deepen the discussion on the theory that supports the measurement of environmental factors. This discussion should be thorough and lead to the development of methodologies and new tools that capture the underlying concepts of the ICF.
Abstract:
OBJECTIVE: To develop a Charlson-like comorbidity index based on the clinical conditions and weights of the original Charlson comorbidity index. METHODS: Clinical conditions and weights were adapted from the International Classification of Diseases, 10th revision, and applied to a single hospital admission diagnosis. The study included 3,733 patients over 18 years of age who were admitted to a public general hospital in the city of Rio de Janeiro, southeast Brazil, between Jan 2001 and Jan 2003. The index distribution was analyzed by gender, type of admission, blood transfusion, intensive care unit admission, age, and length of hospital stay. Two logistic regression models were developed to predict in-hospital mortality: a) the aforementioned variables and the risk-adjustment index (full model); and b) the risk-adjustment index and patient's age (reduced model). RESULTS: Of all patients analyzed, 22.3% had risk scores >1, and the overall mortality rate was 4.5% (66.0% of those who died had scores >1). Except for gender and type of admission, all variables were retained in the logistic regression. The models including the developed risk index had an area under the receiver operating characteristic curve of 0.86 (full model) and 0.76 (reduced model). Each unit increase in the risk score was associated with a nearly 50% increase in the odds of in-hospital death. CONCLUSIONS: The risk index developed was able to effectively discriminate the odds of in-hospital death, which can be useful when limited information is available from hospital databases.
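A minimal sketch of the reduced model, assuming scikit-learn and synthetic placeholder data; it shows how the index-plus-age logistic regression, the ROC AUC, and the per-unit odds ratio reported above fit together.

```python
# Sketch of the reduced model: in-hospital death as a function of the
# comorbidity index and age, with discrimination measured by the ROC AUC.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

# Synthetic placeholders for the 3,733 admissions (not the study data).
rng = np.random.default_rng(0)
index = rng.integers(0, 6, 3733)
age = rng.integers(18, 90, 3733)
died = (rng.random(3733) < 0.045).astype(int)

X = np.column_stack([index, age])
model = LogisticRegression().fit(X, died)
auc = roc_auc_score(died, model.predict_proba(X)[:, 1])
# exp(coefficient) gives the odds ratio per unit increase in the risk score
odds_ratio = np.exp(model.coef_[0][0])
```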
Abstract:
Final Master's project submitted for the degree of Master in Electronics and Telecommunications Engineering
Abstract:
Project work submitted for the degree of Master in Informatics and Computer Engineering
Abstract:
The goal of the project described in this dissertation is the development of the interface between companies and a Business-to-Business (B2B) platform, currently under construction, for the automated negotiation of advertisements. The platform as a whole must ensure that the breaks in the programming schedule are filled with an ad line-up compatible with the expressed interests and the constructed profile of the viewers. The platform operates as an electronic marketplace for automated negotiation intended for advertising agencies (producer companies) and for companies that provide multimedia content and services to final consumers (distributor companies). Once registered on the platform, companies are represented by agents that automatically negotiate the submitted items with the specified behaviour. Architecturally, the platform is a multiagent system organized into three layers: (i) company interface agents; (ii) company modelling agents; and (iii) ephemeral delegate agents, created exclusively to take part in specific negotiations of multimedia content. Besides an undetermined number of delegates engaged in specific negotiations, each company represented on the platform has two agents: (i) the company interface agent, located in the first layer, which exposes a set of interface operations to the outside through a Web service; and (ii) the agent that models the company on the platform, residing in the middle layer, which exposes through a Web service a set of operations to the agents of the remaining layers. This project focused on developing the platform's upper layer, which interfaces with the companies, and on enriching the middle layer. Building the upper layer included specifying the part of the platform ontology that supports the external interface operations, exposing those operations as Web services, and creating and controlling the interface agents. This upper layer must allow companies to upload and download all relevant information to and from the platform, through a graphical interface or automatically, and to present the achieved results graphically and intuitively, namely by displaying the evolution of the transactions. Regarding the middle layer, the knowledge representation supporting the interface operations with the upper layer was added to the platform ontology; taxonomies for classifying viewers, advertisements, and programmes were adopted; a matching algorithm between the available viewers, programmes, and advertisements was developed; and, finally, the negotiation results were stored persistently. From the platform's standpoint, its operation was tested on a single physical platform, and the security and privacy of the communication between company and platform, and between agents representing the same company, were ensured.
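The abstract mentions a matching algorithm between viewers, programmes, and advertisements without detailing it; the following is purely a hypothetical sketch of taxonomy-overlap scoring, not the platform's actual algorithm.

```python
# Hypothetical sketch of taxonomy-based matching between viewer interests
# and ad taxonomy tags; the platform's real algorithm is not described in
# the abstract, so this is only an illustrative scoring scheme.
def match_score(viewer_tags: set[str], ad_tags: set[str]) -> float:
    """Jaccard overlap between a viewer profile and an ad's taxonomy tags."""
    if not viewer_tags or not ad_tags:
        return 0.0
    return len(viewer_tags & ad_tags) / len(viewer_tags | ad_tags)

def rank_ads(viewer_tags, ads):
    """ads: mapping of ad id -> taxonomy tag set; best matches first."""
    return sorted(ads, key=lambda a: match_score(viewer_tags, ads[a]), reverse=True)
```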
Abstract:
Signal subspace identification is a crucial first step in many hyperspectral processing algorithms such as target detection, change detection, classification, and unmixing. The identification of this subspace enables a correct dimensionality reduction, yielding gains in algorithm performance and complexity and in data storage. This paper introduces a new minimum mean square error-based approach to infer the signal subspace in hyperspectral imagery. The method, which is termed hyperspectral signal identification by minimum error, is eigendecomposition-based, unsupervised, and fully automatic (i.e., it does not depend on any tuning parameters). It first estimates the signal and noise correlation matrices and then selects the subset of eigenvalues that best represents the signal subspace in the least-squares sense. State-of-the-art performance of the proposed method is illustrated by using simulated and real hyperspectral images.
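A compressed sketch of the minimum-MSE selection idea follows, assuming a noise estimate is already available (the full method also estimates the noise itself); function and variable names are illustrative.

```python
# Compressed sketch of minimum-MSE subspace selection: keep the
# eigendirections where estimated signal power exceeds estimated noise
# power. A noise estimate W (e.g., from multiple regression) is assumed.
import numpy as np

def signal_subspace(Y, W):
    """Y: (bands, pixels) observed data; W: (bands, pixels) noise estimate."""
    N = Y.shape[1]
    Rn = W @ W.T / N                      # noise correlation
    Rx = (Y - W) @ (Y - W).T / N          # signal correlation estimate
    vals, E = np.linalg.eigh(Rx)          # eigendecomposition (ascending order)
    keep = [i for i in range(len(vals))
            if E[:, i] @ Rx @ E[:, i] > E[:, i] @ Rn @ E[:, i]]
    return E[:, keep]                     # basis of the inferred signal subspace
```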
Abstract:
Linear unmixing decomposes a hyperspectral image into a collection of reflectance spectra of the materials present in the scene, called endmember signatures, and the corresponding abundance fractions at each pixel in a spatial area of interest. This paper introduces a new unmixing method, called Dependent Component Analysis (DECA), which overcomes the limitations of unmixing methods based on Independent Component Analysis (ICA) and on geometrical properties of hyperspectral data. DECA models the abundance fractions as mixtures of Dirichlet densities, thus enforcing the constraints on abundance fractions imposed by the acquisition process, namely non-negativity and constant sum. The mixing matrix is inferred by a generalized expectation-maximization (GEM) type algorithm. The performance of the method is illustrated using simulated and real data.
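The generative side of the model can be illustrated briefly: abundances drawn from a Dirichlet are non-negative and sum to one by construction. The GEM inference itself is omitted, and all names below are illustrative.

```python
# Illustrative generative view of the mixing model: Dirichlet abundances
# (non-negative, constant sum) combined linearly with endmember signatures.
import numpy as np

rng = np.random.default_rng(1)
L, p, N = 200, 3, 1000                   # bands, endmembers, pixels (placeholders)
M = rng.random((L, p))                   # endmember signatures as columns
A = rng.dirichlet(alpha=[2.0, 1.0, 0.5], size=N).T   # abundances, shape (p, N)
Y = M @ A + 0.01 * rng.standard_normal((L, N))       # observed mixed pixels
assert np.allclose(A.sum(axis=0), 1.0)   # constant-sum constraint holds by design
```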
Abstract:
Chapter in book proceedings with peer review. First Iberian Conference, IbPRIA 2003, Puerto de Andratx, Mallorca, Spain, June 4-6, 2003. Proceedings.
Abstract:
Given a set of mixed spectral (multispectral or hyperspectral) vectors, linear spectral mixture analysis, or linear unmixing, aims at estimating the number of reference substances, also called endmembers, their spectral signatures, and their abundance fractions. This paper presents a new method for unsupervised endmember extraction from hyperspectral data, termed vertex component analysis (VCA). The algorithm exploits two facts: (1) the endmembers are the vertices of a simplex and (2) the affine transformation of a simplex is also a simplex. In a series of experiments using simulated and real data, the VCA algorithm competes with state-of-the-art methods, with a computational complexity between one and two orders of magnitude lower than the best available method.
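The core VCA iteration can be sketched as follows; this omits the SNR-dependent projective preprocessing of the full algorithm, so it is an illustrative reduction rather than a faithful implementation.

```python
# Minimal sketch of the VCA iteration: repeatedly project the data onto a
# direction orthogonal to the subspace of the endmembers found so far and
# take the extreme pixel as the next simplex vertex.
import numpy as np

def vca_sketch(Y, p, seed=0):
    """Y: (bands, pixels); p: number of endmembers to extract."""
    rng = np.random.default_rng(seed)
    L = Y.shape[0]
    E = np.zeros((L, p))
    idx = []
    for i in range(p):
        w = rng.standard_normal(L)
        # component of w orthogonal to the span of the current endmembers
        f = w - E @ np.linalg.pinv(E) @ w
        j = int(np.argmax(np.abs(f @ Y)))   # extreme pixel along f
        idx.append(j)
        E[:, i] = Y[:, j]
    return Y[:, idx], idx
```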
Abstract:
Computer-Aided Diagnosis (CAD) systems assist in the detection and differentiation of benign and malignant lesions, improving diagnostic performance for breast cancer. Breast lesions are strongly correlated with contour shape: benign lesions present regular contours, whereas malignant lesions tend to present irregular contours. Quantitative measures, such as the fractal dimension (FD), can therefore help characterize the regular or irregular contours of a lesion. The main goal of this study is to verify whether characterization according to the BIRADS (Breast Imaging Reporting and Data System) scale and the lesion type improves when two (or more) FD measures are used together: one traditionally used, which we call the "contour FD", and one proposed by us, called the "area FD", plus three measures derived from them through dilation/erosion operations and through normalization of one of the previous measures. The FD measures (contour FD and area FD) were computed by applying the box-counting method, directly on segmented lesion images and after applying a dilation/erosion algorithm. The last measure is based on the normalized difference between the two area FD measures before and after applying the dilation/erosion algorithm. The results show that the contour FD is a useful tool for differentiating lesions according to the BIRADS scale and the lesion type; however, some errors occur in certain situations. The combined use of this measure with the four proposed measures may improve lesion classification.
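For reference, here is a minimal sketch of the box-counting estimate of fractal dimension on a binary (segmented) lesion image; the box sizes and names are illustrative.

```python
# Hedged sketch of box-counting fractal dimension on a binary image:
# count occupied boxes at several scales and fit a line in log-log space.
import numpy as np

def box_counting_fd(binary_img, sizes=(2, 4, 8, 16, 32)):
    counts = []
    h, w = binary_img.shape
    for s in sizes:
        # partition the image into s x s boxes and count the non-empty ones
        n = sum(binary_img[i:i + s, j:j + s].any()
                for i in range(0, h, s) for j in range(0, w, s))
        counts.append(max(n, 1))  # guard against log(0) at coarse scales
    # slope of log(count) vs log(1/size) estimates the fractal dimension
    slope, _ = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
    return slope
```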
Abstract:
The calculation of the dose is one of the key steps in radiotherapy planning [1-5]. This calculation should be as accurate as possible, and over the years this became feasible through the implementation of new dose calculation algorithms in the treatment planning systems used in radiotherapy. When a breast tumour is irradiated, a precise dose distribution is fundamental to ensure coverage of the planning target volume (PTV) and to prevent skin complications. Some investigations using breast cases showed that the pencil beam convolution (PBC) algorithm overestimates the dose in the PTV and in the proximal region of the ipsilateral lung, but underestimates the dose in the distal region of the ipsilateral lung, when compared with the analytical anisotropic algorithm (AAA). With this study we aim to compare the performance of the PBC and AAA algorithms in breast tumours.