864 results for Data mining methods
Abstract:
Objective: To illustrate methodological issues involved in estimating dietary trends in populations using data obtained from various sources in Australia in the 1980s and 1990s. Methods: Estimates of absolute and relative change in consumption of selected food items were calculated using national data published annually on the national food supply for 1982-83 to 1992-93 and responses to food frequency questions in two population based risk factor surveys in 1983 and 1994 in the Hunter Region of New South Wales, Australia. The validity of estimated food quantities obtained from these inexpensive sources at the beginning of the period was assessed by comparison with data from a national dietary survey conducted in 1983 using 24 h recall. Results: Trend estimates from the food supply data and risk factor survey data were in good agreement for increases in consumption of fresh fruit, vegetables and breakfast food and decreases in butter, margarine, sugar and alcohol. Estimates for trends in milk, eggs and bread consumption, however, were inconsistent. Conclusions: Both data sources can be used for monitoring progress towards national nutrition goals based on selected food items provided that some limitations are recognized. While data collection methods should be consistent over time they also need to allow for changes in the food supply (for example the introduction of new varieties such as low-fat dairy products). From time to time the trends derived from these inexpensive data sources should be compared with data derived from more detailed and quantitative estimates of dietary intake.
Abstract:
Study Design: Data mining of single nucleotide polymorphisms (SNPs) in gene pathways related to spinal cord injury (SCI). Objectives: To identify gene polymorphisms putatively implicated in neuronal damage evolution pathways and potentially useful for SCI studies. Setting: Departments of Psychiatry and Orthopedics, Faculdade de Medicina, Universidade de Sao Paulo, Brazil. Methods: Genes involved in processes related to SCI, such as apoptosis, inflammatory response, axonogenesis, peripheral nervous system development and axon ensheathment, were determined by evaluating the 'Biological Process' annotation of Gene Ontology (GO). Each gene in these pathways was mapped using MapViewer, and gene coordinates were used to identify their polymorphisms in the SNP database. As a proof of concept, the frequency of a subset of SNPs located in four genes (ALOX12, APOE, BDNF and NINJ1) was evaluated in the DNA of a group of 28 SCI patients and 38 individuals with no SC lesions. Results: We identified a total of 95 276 SNPs in a set of 588 genes associated with the selected GO terms, including 3912 nucleotide alterations located in coding regions of genes. The five non-synonymous SNPs genotyped in our small group of patients showed a significant frequency, reinforcing their potential use for the investigation of SCI evolution. Conclusion: Despite the importance of SNPs in many aspects of gene expression and protein activity, these gene alterations have not been explored in SCI research. Here we describe a set of potentially useful SNPs, some of which could underlie the genetic mechanisms involved in post-trauma spinal cord damage.
Abstract:
We compare Bayesian methodology utilizing the freeware BUGS (Bayesian Inference Using Gibbs Sampling) with the traditional structural equation modelling approach based on another freeware package, Mx. Dichotomous and ordinal (three-category) twin data were simulated according to different additive genetic and common environment models for phenotypic variation. Practical issues are discussed in using Gibbs sampling, as implemented by BUGS, to fit subject-specific Bayesian generalized linear models in which the components of variation may be estimated directly. The simulation study (based on 2000 twin pairs) indicated that there is a consistent advantage in using the Bayesian method to detect a correct model under certain specifications of additive genetic and common environmental effects. For binary data, both methods had difficulty detecting the correct model when the additive genetic effect was low (between 10 and 20%) or of moderate range (between 20 and 40%). Furthermore, neither method could adequately detect a correct model that included a modest common environmental effect (20%) even when the additive genetic effect was large (50%). Power was significantly improved with ordinal data for most scenarios, except for the case of low heritability under a true ACE model. We illustrate and compare both methods using data from 1239 twin pairs over the age of 50 years who were registered with the Australian National Health and Medical Research Council Twin Registry (ATR) and presented symptoms associated with osteoarthritis occurring in joints of the hand.
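As background for the ACE terminology in the abstract above, the classical twin-model variance decomposition (a standard textbook formulation, not taken from the paper itself) is:

\[
\operatorname{Var}(P) = \sigma_A^2 + \sigma_C^2 + \sigma_E^2, \qquad
\operatorname{Cov}_{\mathrm{MZ}}(P_1, P_2) = \sigma_A^2 + \sigma_C^2, \qquad
\operatorname{Cov}_{\mathrm{DZ}}(P_1, P_2) = \tfrac{1}{2}\sigma_A^2 + \sigma_C^2,
\]

with heritability \(h^2 = \sigma_A^2 / (\sigma_A^2 + \sigma_C^2 + \sigma_E^2)\); for the binary and ordinal data considered in the study these components are usually specified on an underlying liability scale.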
Abstract:
Electricity markets are complex environments with very particular characteristics. MASCEM is a market simulator developed to allow in-depth studies of the interactions between the players that take part in electricity market negotiations. This paper presents a new proposal for the definition of MASCEM players' strategies for negotiating in the market. The proposed methodology is multiagent based and uses reinforcement learning algorithms to provide players with the capability to perceive changes in the environment and to adapt their bid formulation accordingly, drawing on a set of different techniques at their disposal.
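The abstract does not name the exact learning algorithm, so the following is only a minimal sketch of how a bidding agent might adapt its price offers with tabular Q-learning; the discrete bid set, toy market-clearing rule, reward definition and parameter values are illustrative assumptions, not MASCEM's actual implementation.

```python
import random
from collections import defaultdict

# Illustrative assumptions: discrete bid prices, a toy market-clearing price,
# and profit-based reward. None of this is taken from MASCEM itself.
BID_PRICES = [30.0, 35.0, 40.0, 45.0, 50.0]   # EUR/MWh candidate offers
PRODUCTION_COST = 28.0                         # EUR/MWh
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.2          # learning rate, discount, exploration

q_table = defaultdict(float)                   # (state, bid) -> estimated value

def choose_bid(state):
    """Epsilon-greedy selection over the candidate bid prices."""
    if random.random() < EPSILON:
        return random.choice(BID_PRICES)
    return max(BID_PRICES, key=lambda b: q_table[(state, b)])

def simulate_market(bid):
    """Toy market: the bid is accepted if it is below a noisy clearing price."""
    clearing_price = random.gauss(42.0, 5.0)
    accepted = bid <= clearing_price
    profit = (bid - PRODUCTION_COST) if accepted else 0.0
    return profit, accepted

def run_episodes(n_episodes=5000):
    state = "peak"                             # single illustrative state label
    for _ in range(n_episodes):
        bid = choose_bid(state)
        reward, _ = simulate_market(bid)
        # Q-learning update; the next state equals the current state in this toy setup.
        best_next = max(q_table[(state, b)] for b in BID_PRICES)
        q_table[(state, bid)] += ALPHA * (reward + GAMMA * best_next - q_table[(state, bid)])
    return max(BID_PRICES, key=lambda b: q_table[(state, b)])

if __name__ == "__main__":
    print("Learned preferred bid:", run_episodes())
```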
Abstract:
Many current e-commerce systems provide personalization when their content is shown to users. In this sense, recommender systems make personalized suggestions and provide information about the items available in the system. Nowadays there is a vast number of methods, including data mining techniques, that can be employed for personalization in recommender systems. However, these methods are still quite vulnerable to some limitations and shortcomings related to the recommender environment. In order to deal with some of them, in this work we implement a recommendation methodology in a recommender system for tourism in which classification based on association is applied. Classification based on association, also named associative classification, is an alternative data mining technique that combines concepts from classification and association in order to allow association rules to be employed in a prediction context. The proposed methodology was evaluated in some case studies, where we could verify that it is able to mitigate limitations present in recommender systems and to enhance recommendation quality.
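As a rough illustration of the associative-classification idea described above (mine class association rules from past interactions, then use the matching rules to predict a recommendation), here is a minimal self-contained sketch; the toy tourism data, the thresholds and the simple highest-confidence prediction rule are assumptions for illustration and do not reproduce the paper's actual system.

```python
from itertools import combinations
from collections import Counter

# Toy transaction data: each visitor's set of visited attraction types,
# labelled with the tour package they booked (all names are made up).
transactions = [
    ({"museum", "old_town"}, "culture_tour"),
    ({"museum", "gallery"}, "culture_tour"),
    ({"beach", "boat"}, "coast_tour"),
    ({"beach", "old_town"}, "coast_tour"),
    ({"museum", "old_town", "gallery"}, "culture_tour"),
]

MIN_SUPPORT, MIN_CONFIDENCE = 0.2, 0.6

def mine_class_association_rules(data):
    """Enumerate rules 'antecedent itemset -> class' above the thresholds."""
    n = len(data)
    rules = []
    items = set().union(*(t for t, _ in data))
    for size in (1, 2):
        for antecedent in combinations(sorted(items), size):
            ant = set(antecedent)
            covered = [(t, c) for t, c in data if ant <= t]
            if len(covered) / n < MIN_SUPPORT:
                continue
            for cls, count in Counter(c for _, c in covered).items():
                confidence = count / len(covered)
                if confidence >= MIN_CONFIDENCE:
                    rules.append((ant, cls, confidence))
    return rules

def recommend(profile, rules):
    """Predict using the highest-confidence rule whose antecedent matches."""
    matching = [r for r in rules if r[0] <= profile]
    if not matching:
        return None
    return max(matching, key=lambda r: r[2])[1]

rules = mine_class_association_rules(transactions)
print(recommend({"museum", "old_town"}, rules))   # expected: culture_tour
```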
Abstract:
Introduction: A major focus of the data mining process - especially of machine learning research - is to automatically learn to recognize complex patterns and to help make adequate decisions based strictly on the acquired data. Since imaging techniques such as Myocardial Perfusion Imaging (MPI) in Nuclear Cardiology can take up a large part of the daily workflow and generate gigabytes of data, computerized analysis of the data may offer advantages over human analysis: shorter time, homogeneity and consistency, automatic recording of analysis results, relatively low cost, etc. Objectives: The aim of this study is to evaluate the efficacy of this methodology in the assessment of MPI Stress studies and in the decision of whether or not to continue the evaluation of each patient. The objective was to automatically classify each patient test into one of three groups: "Positive", "Negative" and "Indeterminate". "Positive" patients would proceed directly to the Rest part of the exam, "Negative" patients would be exempted from continuation, and only the "Indeterminate" group would require the clinician's analysis, thus saving clinician effort, increasing workflow fluidity at the technologist's level and probably saving patients' time. Methods: The WEKA v3.6.2 open source software was used to carry out a comparative analysis of three WEKA algorithms ("OneR", "J48" and "Naïve Bayes") in a retrospective study of the "SPECT Heart Dataset", available at the University of California - Irvine Machine Learning Repository, using the corresponding clinical results, signed by expert nuclear cardiologists, as the reference. For evaluation purposes, criteria such as "Precision", "Incorrectly Classified Instances" and "Receiver Operating Characteristic (ROC) Area" were considered. Results: The interpretation of the data suggests that the Naïve Bayes algorithm had the best performance among the three selected algorithms. Conclusions: It is believed - and apparently supported by the findings - that machine learning algorithms could significantly assist, at an intermediary level, in the analysis of scintigraphic data obtained from MPI, namely after Stress acquisition, eventually increasing the efficiency of the entire system and potentially easing the roles of both Technologists and Nuclear Cardiologists. As a continuation of this study, it is planned to use more patient information and to significantly increase the population under study in order to improve system accuracy.
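The study used WEKA's OneR, J48 and Naïve Bayes; purely as an illustrative analogue (not the authors' actual pipeline), the sketch below runs a comparable comparison in Python with scikit-learn, where DecisionTreeClassifier stands in for J48 (a C4.5-style tree) and, since there is no direct OneR equivalent, a depth-1 decision stump is used as a rough stand-in. The synthetic binary data is only a placeholder for the SPECT Heart Dataset, and the printed scores are not the paper's results.

```python
# Hypothetical analogue of the WEKA comparison using scikit-learn.
# The random dataset below only stands in for the SPECT Heart Dataset
# (22 binary features, binary outcome).
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.naive_bayes import BernoulliNB
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(267, 22))                 # 22 binary features
y = (X[:, :5].sum(axis=1) + rng.integers(0, 2, 267) > 3).astype(int)

models = {
    "stump (OneR-like)": DecisionTreeClassifier(max_depth=1, random_state=0),
    "tree (J48-like)":   DecisionTreeClassifier(random_state=0),
    "Naive Bayes":       BernoulliNB(),
}

for name, model in models.items():
    acc = cross_val_score(model, X, y, cv=10, scoring="accuracy")
    auc = cross_val_score(model, X, y, cv=10, scoring="roc_auc")
    print(f"{name:18s} accuracy={acc.mean():.3f}  ROC AUC={auc.mean():.3f}")
```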
Abstract:
Project submitted to obtain the degree of Mestre em Engenharia Informática e de Computadores.
Abstract:
TPM Vol. 21, No. 4, December 2014, 435-447 – Special Issue © 2014 Cises.
Abstract:
Epidemiological studies have shown the effect of diet on the incidence of chronic diseases; however, proper planning, design, and statistical modeling are necessary to obtain precise and accurate food consumption data. Evaluation methods used for short-term assessment of the food consumption of a population, such as tracking of food intake over 24 h or food diaries, can be affected by random errors or biases inherent to the method. Statistical modeling is used to handle random errors, whereas proper design and sampling are essential for controlling biases. The present study aimed to analyze potential biases and random errors and to determine how they affect the results. We also aimed to identify ways to prevent them and/or to address them with statistical approaches in epidemiological studies involving dietary assessments.
Abstract:
Conference: CONTROLO’2012 - 16-18 July 2012 - Funchal
Abstract:
This study aimed to describe digital disease detection and participatory surveillance in different countries. Systems or platforms consolidated in the scientific field were analyzed by describing their strategy, type of data source, main objectives, and manner of interaction with users. Eleven systems or platforms, developed from 1996 to 2016, were analyzed. There was a higher frequency of data mining on the web and of active crowdsourcing, as well as a trend towards the use of mobile applications. It is important to stimulate debate in academia and in health services in order to advance methods and insights into participatory surveillance in the digital age.
Abstract:
Master's degree in Engenharia Electrotécnica – Sistemas Eléctricos de Energia
Abstract:
Project work carried out to obtain the degree of Mestre em Engenharia Informática e de Computadores
Abstract:
Proceedings of the Information Technology Applications in Biomedicine, Ioannina - Epirus, Greece, October 26-28, 2006
Abstract:
The monitoring of undesirable effects following vaccination is complex. There are several confounding factors that can give rise to merely temporal but spurious associations, which can alter risk perception and lead to a consequent generalized distrust about the safe use of vaccines. Indeed, vaccines are complex drugs with unique characteristics, so their monitoring requires methodological approaches developed specifically for that purpose. It is therefore understandable that, since the development of pharmacovigilance, there has been a drive to develop new methodologies that complement the Spontaneous Reporting Systems already in place. In this work we set out to develop and test a model for monitoring adverse reactions to vaccines, based on users' self-reporting of events occurring after vaccination, and to test its capability to generate disproportionality signals by applying quantitative signal-generation methods to the data through data mining. For that purpose, an uncontrolled cohort of users vaccinated in Healthcare Centers was set up and followed for fifteen days. Adverse vaccine events were registered by the users themselves in a paper diary. The data were analyzed using descriptive statistics and two quantitative methods of signal generation: the Proportional Reporting Ratio and the Information Component.
The methodology used allowed the generation of a sufficient body of evidence for signal generation, and four signals were generated. Regarding the data mining, the use of the Information Component as a method for generating disproportionality signals seems to increase scientific efficiency by reducing the number of events needed for signal detection. The information reported by users appears valid as an indicator of signals of non-serious adverse vaccine reactions, allowing events to be registered without the bias introduced by the reporter's assessment of the causal relation. The main adverse events reported were local injection-site reactions (62.7%) and fever (31.4%).
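For reference, the two disproportionality measures mentioned above are commonly computed from a 2x2 contingency table of report counts; the sketch below shows the standard textbook formulations (PRR, and a simple observed/expected Information Component on a log2 scale), not necessarily the exact shrinkage-based variant used in this study, and the counts are made-up illustration values rather than data from the cohort.

```python
import math

def prr(a, b, c, d):
    """Proportional Reporting Ratio for a 2x2 table of report counts:
    a = target vaccine & target event, b = target vaccine & other events,
    c = other vaccines & target event, d = other vaccines & other events."""
    return (a / (a + b)) / (c / (c + d))

def information_component(a, b, c, d):
    """Simple (non-shrunk) Information Component: log2 of the observed count
    of the vaccine-event pair over its expected count under independence."""
    n = a + b + c + d
    expected = (a + b) * (a + c) / n
    return math.log2(a / expected)

# Made-up illustrative counts, not data from the study:
a, b, c, d = 12, 188, 40, 2760
print(f"PRR = {prr(a, b, c, d):.2f}")
print(f"IC  = {information_component(a, b, c, d):.2f}")
```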