986 results for Naïve Bayesian Classification


Relevance:

20.00%

Publisher:

Abstract:

Semi-quantification (SQ) in DaTScan® studies is widely used in daily clinical practice; however, there are doubts about its discriminative capability and about its concordance with the diagnostic classification performed by the physician. Aim: to evaluate the discriminative capability of an adapted database of reference values from healthy controls for dopamine transporter (DAT) imaging with 123I-FP-CIT, named DBRV, adapted to the protocol and population of the Nuclear Medicine Department of Hospital Infanta Cristina, and its concordance with the physician's classification.
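
The concordance mentioned in the aim is typically summarized with an agreement statistic; a minimal sketch using Cohen's kappa (the study does not state which statistic it uses, and the labels below are invented for illustration) could look like:

```python
# Illustrative only: agreement between SQ-based and physician classifications,
# summarized with Cohen's kappa. Labels and data are made up for the example.
from sklearn.metrics import cohen_kappa_score, confusion_matrix

sq_labels = ["abnormal", "normal", "abnormal", "abnormal", "normal"]  # from semi-quantification
md_labels = ["abnormal", "normal", "normal",   "abnormal", "normal"]  # physician's reading

kappa = cohen_kappa_score(sq_labels, md_labels)
print("Cohen's kappa:", round(kappa, 3))
print(confusion_matrix(sq_labels, md_labels, labels=["normal", "abnormal"]))
```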

Relevance:

20.00%

Publisher:

Abstract:

Work presented within the scope of the Master's programme in Informatics Engineering, as a partial requirement for obtaining the degree of Master in Informatics Engineering.

Relevance:

20.00%

Publisher:

Abstract:

In the present paper we assess the performance of information-theoretic risk functionals in multilayer perceptrons, with reference to the two most popular ones, mean square error and cross-entropy. The recently proposed information-theoretic risks are: HS and HR2, respectively the Shannon and quadratic Rényi entropies of the error; ZED, a risk reflecting the error density at zero error; and EXP, a generalized exponential risk able to mimic a wide variety of risk functionals, including the information-theoretic ones. The experiments were carried out with multilayer perceptrons on 35 public real-world datasets, all following the same protocol. The statistical tests applied to the experimental results showed that the ubiquitous mean square error was the least interesting risk functional for multilayer perceptrons: it never achieved a significantly better classification performance than the competing risks. Cross-entropy and EXP were the risks found by several tests to be significantly better than their competitors. Counts of significantly better and worse risks also showed the usefulness of HS and HR2 for some datasets.
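
For illustration, the baseline risks and kernel-based estimators of HR2 and ZED can be written compactly. The sketch below uses standard Parzen-window (Gaussian-kernel) estimators from the information-theoretic learning literature; the bandwidth is arbitrary, and HS and EXP are omitted because their exact forms are not reproduced here, so this is not the paper's exact formulation:

```python
# Sketch of some of the compared risk functionals, assuming Parzen-window
# (Gaussian-kernel) estimators; e = y_true - y_pred are the network output errors.
import numpy as np

def mse(e):
    return np.mean(e ** 2)

def cross_entropy(y_true, p_pred, eps=1e-12):
    # Binary cross-entropy with targets in {0, 1} and predicted probabilities p_pred.
    p = np.clip(p_pred, eps, 1 - eps)
    return -np.mean(y_true * np.log(p) + (1 - y_true) * np.log(1 - p))

def renyi_quadratic_error_entropy(e, sigma=0.5):
    # HR2: quadratic Renyi entropy of the error, estimated from the "information
    # potential" (mean pairwise Gaussian kernel over all error pairs).
    d = e[:, None] - e[None, :]
    ip = np.mean(np.exp(-d ** 2 / (4 * sigma ** 2)) / np.sqrt(4 * np.pi * sigma ** 2))
    return -np.log(ip)

def zero_error_density(e, sigma=0.5):
    # ZED: Parzen estimate of the error density at e = 0 (to be maximized).
    return np.mean(np.exp(-e ** 2 / (2 * sigma ** 2)) / np.sqrt(2 * np.pi * sigma ** 2))

errors = np.array([0.1, -0.3, 0.05, 0.2, -0.1])
print(mse(errors), renyi_quadratic_error_entropy(errors), zero_error_density(errors))
```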

Relevance:

20.00%

Publisher:

Abstract:

Dissertation presented at the Faculdade de Ciências e Tecnologia da Universidade Nova de Lisboa to obtain the degree of Master in Informatics Engineering.

Relevance:

20.00%

Publisher:

Abstract:

We define families of aperiodic words associated to Lorenz knots that arise naturally as syllable permutations of symbolic words corresponding to torus knots. An algorithm to construct symbolic words of satellite Lorenz knots is defined. We prove, subject to the validity of a previous conjecture, that Lorenz knots coded by some of these families of words are hyperbolic, by showing that they are neither satellites nor torus knots and making use of Thurston's theorem. Infinite families of hyperbolic Lorenz knots are generated in this way, to our knowledge, for the first time. The techniques used can be generalized to study other families of Lorenz knots.
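
To make the notion of a syllable permutation concrete, the sketch below splits a symbolic word over {L, R} into syllables of the form L^a R^b and enumerates the distinct words obtained by permuting them; the example word is arbitrary and is not claimed to be a torus-knot word:

```python
# Illustrative sketch: syllable decomposition and syllable permutations of a
# symbolic word over {L, R}. The example word is arbitrary.
from itertools import permutations, groupby

def syllables(word):
    # Split into maximal runs, then pair an L-run with the following R-run
    # so that each syllable has the form L...LR...R.
    runs = ["".join(g) for _, g in groupby(word)]
    sylls, i = [], 0
    while i < len(runs):
        if runs[i][0] == "L" and i + 1 < len(runs) and runs[i + 1][0] == "R":
            sylls.append(runs[i] + runs[i + 1]); i += 2
        else:
            sylls.append(runs[i]); i += 1
    return sylls

def syllable_permutations(word):
    return sorted({"".join(p) for p in permutations(syllables(word))})

word = "LLRLRRLR"
print(syllables(word))              # ['LLR', 'LRR', 'LR']
print(syllable_permutations(word))  # all distinct syllable rearrangements
```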

Relevance:

20.00%

Publisher:

Abstract:

Arguably, the most difficult task in text classification is to choose an appropriate set of features that allows machine learning algorithms to provide accurate classification. Most state-of-the-art techniques for this task involve careful feature engineering and a pre-processing stage, which may be too expensive in the emerging context of massive collections of electronic texts. In this paper, we propose efficient methods for text classification based on information-theoretic dissimilarity measures, which are used to define dissimilarity-based representations. These methods dispense with any feature design or engineering by mapping texts into a feature space using universal dissimilarity measures; in this space, classical classifiers (e.g. nearest neighbor or support vector machines) can then be used. The reported experimental evaluation of the proposed methods, on sentiment polarity analysis and authorship attribution problems, reveals that they approximate, and sometimes even outperform, previous state-of-the-art techniques, while being much simpler in the sense that they require no text pre-processing or feature engineering.
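
One well-known universal dissimilarity of this kind is the normalized compression distance (NCD); the paper does not necessarily use this exact measure, so the following sketch is only an illustration of a dissimilarity-based representation fed to a classical classifier:

```python
# Illustrative sketch: dissimilarity-based text representation using the
# normalized compression distance (NCD), then a nearest-neighbor classifier.
# The corpus and labels are toy data; the actual measure in the paper may differ.
import zlib
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def c(s):                       # compressed length of a string
    return len(zlib.compress(s.encode("utf-8")))

def ncd(x, y):                  # NCD(x, y) = (C(xy) - min(C(x), C(y))) / max(C(x), C(y))
    cx, cy, cxy = c(x), c(y), c(x + y)
    return (cxy - min(cx, cy)) / max(cx, cy)

train_texts = ["great movie, loved it", "terrible plot, awful acting", "wonderful and moving"]
train_labels = ["pos", "neg", "pos"]
test_texts = ["awful, a terrible film"]

# Map every text to its vector of dissimilarities to the training texts.
def to_features(texts):
    return np.array([[ncd(t, r) for r in train_texts] for t in texts])

clf = KNeighborsClassifier(n_neighbors=1).fit(to_features(train_texts), train_labels)
print(clf.predict(test := to_features(test_texts)))   # likely ['neg']; NCD on short strings is noisy
```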

Relevance:

20.00%

Publisher:

Abstract:

More than ever, the number of decision support methods and computer-aided diagnostic systems applied to various areas of medicine is increasing. In breast cancer research, much work has been done to reduce false positives when these systems are used as a double-reading method. In this study, we present a set of data mining techniques applied to build a decision support system for breast cancer diagnosis. The method is intended to assist clinical practice in identifying mammographic findings such as microcalcifications, masses and even normal tissue, in order to avoid misdiagnosis. A reliable database was used, with 410 images from about 115 patients, containing findings previously reviewed by radiologists as microcalcifications, masses or normal tissue. Two feature extraction techniques were used: the gray-level co-occurrence matrix and the gray-level run-length matrix. For classification purposes, we considered various scenarios, corresponding to distinct patterns of lesions, and several classifiers, in order to determine the best performance in each case. The classifiers used were Naïve Bayes, Support Vector Machines, k-Nearest Neighbors and Decision Trees (J48 and Random Forests). The results in distinguishing mammographic findings showed high positive predictive values (PPV) and very good accuracy. Related results are also presented for the classification of breast density and of the BI-RADS® scale. The best predictive method for all tested groups was the Random Forest classifier, and the best performance was achieved in the distinction of microcalcifications. The conclusions based on the several tested scenarios represent a new perspective on breast cancer diagnosis using data mining techniques.
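
A minimal sketch of such a pipeline, assuming scikit-image's GLCM utilities (graycomatrix/graycoprops, available in recent versions) and a Random Forest classifier, with synthetic stand-in patches rather than the study's data, could be:

```python
# Illustrative sketch: GLCM texture features from image patches, classified with
# a Random Forest. Patches and labels are synthetic stand-ins, not the study's data.
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from sklearn.ensemble import RandomForestClassifier

def glcm_features(patch):
    # patch: 2-D uint8 array (a region of interest from a mammogram).
    glcm = graycomatrix(patch, distances=[1], angles=[0, np.pi / 2],
                        levels=256, symmetric=True, normed=True)
    props = ["contrast", "homogeneity", "energy", "correlation"]
    return np.hstack([graycoprops(glcm, p).ravel() for p in props])

rng = np.random.default_rng(0)
patches = [rng.integers(0, 256, (64, 64), dtype=np.uint8) for _ in range(20)]
labels = ["mass"] * 10 + ["normal"] * 10          # placeholder labels

X = np.array([glcm_features(p) for p in patches])
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, labels)
print(clf.predict(X[:3]))
```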

Relevance:

20.00%

Publisher:

Abstract:

This paper presents the applicability of a reinforcement learning algorithm based on Bayes' theorem of probability. The proposed reinforcement learning algorithm is an advantageous and indispensable tool for ALBidS (Adaptive Learning strategic Bidding System), a multi-agent system whose purpose is to provide decision support to electricity market negotiating players. ALBidS uses a set of different strategies for providing decision support to market players, and these strategies are used according to their probability of success in each context. The approach proposed in this paper uses a Bayesian network to decide, at each moment and based on past events, which action is most likely to succeed. The performance of the proposed methodology is tested using electricity market simulations in MASCEM (Multi-Agent Simulator of Competitive Electricity Markets), which provides the means to simulate a real electricity market environment based on real data from electricity market operators.
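
A much simplified stand-in for this idea is a Beta-Bernoulli update of each strategy's success probability per context, choosing the strategy with the highest posterior mean; ALBidS itself uses a Bayesian network, so the sketch below (with invented strategy and context names) only illustrates the underlying Bayes-rule update:

```python
# Simplified sketch: Bayesian tracking of each strategy's success probability per
# context, selecting the strategy with the highest posterior mean. Not ALBidS itself.
from collections import defaultdict

class BayesianStrategySelector:
    def __init__(self, strategies):
        self.strategies = strategies
        # Beta(1, 1) prior per (context, strategy): pseudo-counts of successes / failures.
        self.alpha = defaultdict(lambda: 1.0)
        self.beta = defaultdict(lambda: 1.0)

    def select(self, context):
        # Posterior mean of the success probability: alpha / (alpha + beta).
        def mean(s):
            a, b = self.alpha[(context, s)], self.beta[(context, s)]
            return a / (a + b)
        return max(self.strategies, key=mean)

    def update(self, context, strategy, success):
        if success:
            self.alpha[(context, strategy)] += 1.0
        else:
            self.beta[(context, strategy)] += 1.0

selector = BayesianStrategySelector(["regression", "game-theory", "metalearner"])
selector.update("peak-hours", "game-theory", success=True)
print(selector.select("peak-hours"))   # -> 'game-theory'
```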

Relevance:

20.00%

Publisher:

Abstract:

Dissertation submitted in partial fulfilment of the requirements for the Degree of Master of Science in Geospatial Technologies.

Relevance:

20.00%

Publisher:

Abstract:

Several studies have shown that patients with congestive heart failure (CHF) have compromised health-related quality of life (HRQL), which in recent years has become a primary endpoint when considering the impact of treatment of chronic conditions such as CHF. OBJECTIVES: To evaluate the psychometric properties of the Portuguese version of a new specific instrument to measure HRQL in patients hospitalized for CHF: the Kansas City Cardiomyopathy Questionnaire (KCCQ). METHODS: The KCCQ was applied to a sample of 193 consecutive patients hospitalized for CHF. Of these, 105 repeated the assessment 3 months after admission, with no events during this period. Mean age was 64.4 +/- 12.4 years (21-88), and 72.5% were male. CHF was of ischemic etiology in 4% of cases. RESULTS: This version of the KCCQ was subjected to statistical validation, with assessment of reliability and validity, similar to the American version. Reliability was assessed by the internal consistency of the domains and summary scores, which showed comparable Cronbach's alpha values (0.50-0.94). Validity was assessed by convergence, sensitivity to differences between groups and sensitivity to changes in clinical condition. We evaluated the convergent validity of all domains related to functionality through their relationship with a measure of functionality, the New York Heart Association (NYHA) classification. Significant correlations were found (p < 0.01) for this measure of functionality in patients with CHF. Analysis of variance between the physical limitation domain, the summary scores and NYHA class was performed, and statistically significant differences were found (F = 23.4; F = 36.4; F = 37.4, p = 0.0001) in the ability to discriminate severity of clinical condition. A second evaluation was performed on 105 patients at the 3-month follow-up outpatient appointment, and significant changes were observed in the mean scores of the domains assessed between hospital admission and the clinic appointment (differences from 14.9 to 30.6 on a scale of 0-100), indicating that the domains assessed are sensitive to changes in clinical condition. The correlation between dimensions of quality of life in the KCCQ is moderate, suggesting that the dimensions are independent, supporting the multifactorial nature of HRQL and the suitability of this measure for its evaluation. CONCLUSION: The KCCQ is a valid instrument, sensitive to change, and a specific measure of HRQL in a population with dilated cardiomyopathy and CHF.
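
For reference, the internal-consistency measure reported here, Cronbach's alpha, is alpha = k/(k-1) * (1 - sum of item variances / variance of the total score); a minimal sketch with made-up item scores for one domain:

```python
# Minimal sketch of Cronbach's alpha for the items of one questionnaire domain.
# The item matrix below is made-up data: one row per patient, one column per item.
import numpy as np

def cronbach_alpha(items):
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)        # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)    # variance of the summed score
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

scores = [[3, 4, 3], [2, 2, 3], [5, 4, 4], [1, 2, 1], [4, 5, 4]]
print(round(cronbach_alpha(scores), 3))
```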

Relevance:

20.00%

Publisher:

Abstract:

This paper analyzes the signals captured during impacts and vibrations of a mechanical manipulator. The Fourier transforms of eighteen different signals are calculated and approximated by trendlines based on a power-law formula. A sensor classification scheme based on the behavior of the frequency spectrum is presented.
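
Such a trendline amounts to fitting |F(f)| ≈ c·f^m by least squares in log-log coordinates; the sketch below illustrates the idea on a synthetic signal and is not the paper's actual procedure:

```python
# Sketch: amplitude spectrum of a signal and a power-law trendline |F(f)| ~ c * f**m,
# fitted by least squares in log-log coordinates. The signal here is synthetic.
import numpy as np

fs = 1000.0                                  # sampling frequency [Hz]
t = np.arange(0, 1.0, 1.0 / fs)
rng = np.random.default_rng(0)
x = np.cumsum(rng.standard_normal(t.size))   # synthetic vibration-like signal

spectrum = np.abs(np.fft.rfft(x))
freqs = np.fft.rfftfreq(x.size, d=1.0 / fs)

mask = freqs > 0                             # exclude DC before taking logarithms
m, log_c = np.polyfit(np.log(freqs[mask]), np.log(spectrum[mask]), 1)
print(f"trendline: |F(f)| ~ {np.exp(log_c):.3g} * f^{m:.2f}")
```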

Relevance:

20.00%

Publisher:

Abstract:

Signal Processing: Algorithms, Architectures, Arrangements, and Applications (SPA), 2013

Relevance:

20.00%

Publisher:

Abstract:

The rate of mother-to-child transmission (MTCT) of HIV in Nigeria, as well as the implications of the multiple circulating subtypes for MTCT, is not known. This study was therefore undertaken to determine the differential rates of MTCT of the HIV-1 subtypes detected among infected pregnant women before ARV intervention therapy became available in Nigeria. Twenty of the HIV-positive women who signed the informed consent form during pregnancy brought their babies for follow-up testing at age 18-24 months. Plasma samples from both mother and baby were tested for HIV antibody at the Department of Virology, UCH, Ibadan, Nigeria. All positive samples (plasma and peripheral blood mononuclear cells - PBMCs) were shipped to the Institute of Tropical Medicine, Antwerp, Belgium, where the subtype of the infecting virus was determined using the heteroduplex mobility assay (HMA) technique. Overall, a mother-to-child HIV transmission rate of 45% was found in this cohort. Specifically, 36.4%, 66.7% and 100% of the women infected with HIV-1 CRF02 (IbNg), G and B, respectively, transmitted the virus to their babies. As far as can be ascertained, this is the first report on the rate of MTCT of HIV in Nigeria. The findings reported in this paper will form a useful reference for assessing the currently available therapeutic interventions for MTCT in the country.

Relevance:

20.00%

Publisher:

Abstract:

Coaching is a process that helps one or more individuals to define their goals, personal or professional, and to know how to achieve them. Currently, there is growing interest in, and demand for, people with experience in this area (known as coaches) from companies, sports teams, schools and other organizations, with the aim of obtaining better performance. To help the participants in the process, this document demonstrates the need for a support tool that allows coaches to manage their professional activity more effectively. The research and study carried out seek to address this need by developing an intelligent computer system to support the coach, equipped with a user-centered interface. Before starting the development of an intelligent system, it is necessary to carry out and present a survey of the state of the art, specifically on human-computer interaction, user profile modeling and the coaching process, which provides the theoretical foundations for choosing the appropriate development methodology. The phases of the chosen interface development model, usability engineering, are then presented: it begins with a detailed analysis, followed by the structuring of the knowledge obtained and the application of established guidelines, and ends with usage tests and the corresponding user feedback. The prototype developed distinguishes users with different characteristics through a classification by levels, and makes it possible to manage the entire coaching process carried out with other people or with the user himself. Because users are classified, the interaction between the system and its users differs and is adapted to the needs of each one. The results of the usage tests with a practical case, and of the questionnaires, make it possible to determine whether the model was successful and works correctly, and what needs to be changed in the future to facilitate interaction and satisfy the needs of each user.