899 results for Computational Intelligence in Medicine


Relevance: 100.00%

Abstract:

We present a novel method for predicting the onset of a spontaneous (paroxysmal) atrial fibrillation episode by representing the electrocardiogram (ECG) output as two time series, corresponding to the interbeat intervals and the lengths of the atrial component of the ECG. We then show how different entropy measures can be calculated from both of these series and combined in a neural network, trained using the Bayesian evidence procedure, to form an effective predictive classifier.
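The abstract does not name the specific entropy measures used; purely as a hedged illustration, sample entropy is one common regularity measure for interbeat-interval series. A minimal sketch, assuming a matching tolerance of r times the series' standard deviation (not the authors' implementation):

```python
import math

def _std(xs):
    mu = sum(xs) / len(xs)
    return (sum((x - mu) ** 2 for x in xs) / len(xs)) ** 0.5

def sample_entropy(series, m=2, r=0.2):
    """Sample entropy -ln(A/B): B counts pairs of m-length templates that
    match within tolerance r*std, A counts pairs of (m+1)-length matches.
    Lower values indicate a more regular (more predictable) series."""
    tol = r * _std(series)
    n = len(series)

    def matches(length):
        templates = [series[i:i + length] for i in range(n - length + 1)]
        return sum(
            1
            for i in range(len(templates))
            for j in range(i + 1, len(templates))
            if max(abs(a - b) for a, b in zip(templates[i], templates[j])) <= tol
        )

    b, a = matches(m), matches(m + 1)
    return -math.log(a / b) if a and b else float("inf")
```

A strictly alternating series yields a low, finite value, while an erratic series with no repeating templates yields infinity; feature vectors of such values from the two ECG-derived series could then feed a classifier.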

Relevance: 100.00%

Abstract:

Data visualization algorithms and feature selection techniques are both widely used in bioinformatics but as distinct analytical approaches. Until now there has been no method of measuring feature saliency while training a data visualization model. We derive a generative topographic mapping (GTM) based data visualization approach which estimates feature saliency simultaneously with the training of the visualization model. The approach not only provides a better projection by modeling irrelevant features with a separate noise model but also gives feature saliency values which help the user to assess the significance of each feature. We compare the quality of projection obtained using the new approach with the projections from traditional GTM and self-organizing maps (SOM) algorithms. The results obtained on a synthetic and a real-life chemoinformatics dataset demonstrate that the proposed approach successfully identifies feature significance and provides coherent (compact) projections. © 2006 IEEE.

Relevance: 100.00%

Abstract:

The evaluation and selection of industrial projects prior to the investment decision is customarily done using marketing, technical, and financial information. Environmental and social impact assessments are then carried out subsequently, mainly to satisfy the statutory agencies. Because of stricter environmental regulations in developed and developing countries, impact assessment quite often suggests alternative sites, technologies, designs, and implementation methods as mitigating measures. This considerably delays project feasibility analysis and selection, as the complete analysis must be repeated until the statutory regulatory authority approves the project. Moreover, project analysis through the above process often results in suboptimal projects, since financial analysis may eliminate better options: the more environmentally friendly alternative will usually be more cost-intensive. This study therefore proposes a decision support system which analyses projects with respect to market, technical, social, and environmental criteria in an integrated framework using the analytic hierarchy process, a multiple-attribute decision-making technique. This not only reduces the duration of project evaluation and selection, but also helps select an optimal project for the organization's sustainable development. The entire methodology has been applied to a cross-country oil pipeline project in India and its effectiveness demonstrated. © 2008, IGI Global.
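The abstract does not reproduce the hierarchy or judgment matrices used for the pipeline project; as a hedged sketch of the generic AHP mechanics it relies on, priority weights can be approximated from a reciprocal pairwise-comparison matrix via the row geometric mean, with the criteria below purely hypothetical:

```python
import math

def ahp_weights(pairwise):
    """Priority weights from a reciprocal pairwise-comparison matrix,
    using the geometric-mean (row) approximation of the principal eigenvector."""
    n = len(pairwise)
    gms = [math.prod(row) ** (1.0 / n) for row in pairwise]
    total = sum(gms)
    return [g / total for g in gms]

def composite_score(ratings, weights):
    """Weighted sum of an alternative's per-criterion ratings."""
    return sum(r * w for r, w in zip(ratings, weights))

# Hypothetical 3-criterion comparison (market vs. technical vs.
# socio-environmental) on Saaty's 1-9 scale; NOT the matrix from the study.
criteria = [
    [1.0, 3.0, 5.0],
    [1 / 3, 1.0, 3.0],
    [1 / 5, 1 / 3, 1.0],
]
weights = ahp_weights(criteria)
```

Candidate projects (sites, routes, designs) would then be ranked by their composite scores under the derived weights, which is how AHP integrates the market, technical, and impact dimensions into one selection.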

Relevance: 100.00%

Abstract:

Visualization of high-dimensional data has always been a challenging task. Here we discuss and propose variants of non-linear data projection methods (Generative Topographic Mapping (GTM) and GTM with simultaneous feature saliency (GTM-FS)) that are adapted to be effective on very high-dimensional data. The adaptations use log-space values at certain steps of the Expectation Maximization (EM) algorithm and during the visualization process. We have tested the proposed algorithms by visualizing electrostatic potential data for Major Histocompatibility Complex (MHC) class-I proteins. The experiments show that the variants of the original GTM and GTM-FS worked successfully with data of more than 2000 dimensions, and we compare the results with other linear/non-linear projection methods: Principal Component Analysis (PCA), Neuroscale (NSC) and the Gaussian Process Latent Variable Model (GPLVM).
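The abstract says only that log-space values are used at certain EM steps; the standard device this implies is the log-sum-exp trick for normalizing E-step responsibilities without floating-point underflow. A minimal sketch (an assumption about the mechanism, not the authors' code):

```python
import math

def log_sum_exp(log_vals):
    """Stable log(sum(exp(v))): shift by the maximum before exponentiating."""
    m = max(log_vals)
    if m == float("-inf"):  # every component has zero probability
        return m
    return m + math.log(sum(math.exp(v - m) for v in log_vals))

def log_responsibilities(log_joint):
    """E-step normalization done entirely in log space: returns log posteriors
    that exponentiate to a proper distribution even for tiny joint densities."""
    z = log_sum_exp(log_joint)
    return [v - z for v in log_joint]
```

With log joint densities around -1000 (routine for Gaussians in 2000 dimensions), direct exponentiation underflows to 0.0 and the responsibility ratio becomes 0/0, whereas the log-space version still normalizes correctly.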

Relevance: 100.00%

Abstract:

An introduction is presented for this issue, which includes the articles "Internationalizing Sales Research: Current Status, Opportunities and Challenges" by Nikolaos G. Panagopoulos and colleagues, "Cultural Intelligence in Cross-Cultural Selling: Propositions and Directions for Future Research" by John D. Hansen and colleagues, and "A New Conceptual Framework of Sales Force Control Systems" by René Y. Darmon and Xavier C. Martin.

Relevance: 100.00%

Abstract:

Objective: Biomedical event extraction concerns events describing changes in the state of bio-molecules, as reported in the literature. Compared to the protein-protein interaction (PPI) extraction task, which often involves only the extraction of binary relations between two proteins, biomedical event extraction is much harder, since it needs to deal with complex events consisting of embedded or hierarchical relations among proteins, events, and their textual triggers. In this paper, we propose an information extraction system based on the hidden vector state (HVS) model, called HVS-BioEvent, for biomedical event extraction, and investigate its capability to extract complex events. Methods and material: HVS has previously been employed for extracting PPIs. In HVS-BioEvent, we propose an automated way to generate abstract annotations for HVS training, and further propose novel machine learning approaches for event trigger word identification and for biomedical event extraction from the HVS parse results. Results: Our proposed system achieves an F-score of 49.57% on the corpus used in the BioNLP'09 shared task, only 2.38% lower than the best-performing system, by UTurku, in the BioNLP'09 shared task. Nevertheless, HVS-BioEvent outperforms UTurku's system on complex event extraction, with 36.57% vs. 30.52% achieved for extracting regulation events, and 40.61% vs. 38.99% for negative regulation events. Conclusions: The results suggest that the HVS model, with its hierarchical hidden state structure, is indeed more suitable for complex event extraction, since it can naturally model embedded structural context in sentences.

Relevance: 100.00%

Abstract:

Objective: Recently, much research has applied nature-inspired algorithms to complex machine learning tasks. Ant colony optimization (ACO) is one such algorithm; it is based on swarm intelligence and derived from a model inspired by the collective foraging behavior of ants. Taking advantage of ACO traits such as self-organization and robustness, this paper investigates ant-based algorithms for gene expression data clustering and associative classification. Methods and material: An ant-based clustering algorithm (Ant-C) and an ant-based association rule mining algorithm (Ant-ARM) are proposed for gene expression data analysis. The proposed algorithms make use of natural ant behaviors, such as cooperation and adaptation, to allow for a flexible, robust search for a good candidate solution. Results: Ant-C has been tested on three datasets selected from the Stanford Genomic Resource Database and achieved relatively high accuracy compared to other classical clustering methods. Ant-ARM has been tested on the acute lymphoblastic leukemia (ALL)/acute myeloid leukemia (AML) dataset and generated about 30 classification rules with high accuracy. Conclusions: Ant-C can generate an optimal number of clusters without incorporating other algorithms such as K-means or agglomerative hierarchical clustering. For associative classification, while well-known algorithms such as Apriori, FP-growth and Magnum Opus are unable to mine any association rules from the ALL/AML dataset within a reasonable period of time, Ant-ARM is able to extract associative classification rules.
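The Ant-C and Ant-ARM algorithms themselves are not specified in the abstract; as a hedged illustration of the generic ACO mechanics they build on, the standard pheromone-biased probabilistic choice and the evaporation/reinforcement update can be sketched as follows (a sketch of textbook ACO, not the paper's method):

```python
import random

def ant_choose(options, pheromone, heuristic, alpha=1.0, beta=2.0, rng=random):
    """Standard ACO transition rule: pick an option with probability
    proportional to pheromone^alpha * heuristic^beta (roulette wheel)."""
    weights = [pheromone[o] ** alpha * heuristic[o] ** beta for o in options]
    pick = rng.random() * sum(weights)
    acc = 0.0
    for option, w in zip(options, weights):
        acc += w
        if acc >= pick:
            return option
    return options[-1]

def evaporate_and_deposit(pheromone, chosen, rho=0.1, deposit=1.0):
    """Global update: evaporate all trails, then reinforce the chosen one,
    so good choices are gradually amplified and stale ones fade."""
    for key in pheromone:
        pheromone[key] *= 1.0 - rho
    pheromone[chosen] += deposit
```

In a clustering or rule-mining setting, the "options" would be candidate cluster assignments or rule items, and repeated choose/update cycles concentrate pheromone on good candidate solutions.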

Relevance: 100.00%

Abstract:

To date, more than 16 million citations of published articles in the biomedical domain are available in the MEDLINE database. These articles describe the new discoveries that have accompanied the tremendous development of biomedicine over the last decade. It is crucial for biomedical researchers to retrieve and mine specific knowledge from this huge quantity of published articles with high efficiency. Researchers have been engaged in the development of text mining tools to find knowledge, such as protein-protein interactions, that is most relevant and useful for specific analysis tasks. This chapter provides a road map to the various information extraction methods in the biomedical domain, such as protein name recognition and the discovery of protein-protein interactions. Disciplines involved in analyzing and processing unstructured text are summarized, current work in biomedical information extraction is categorized, and challenges in the field are presented along with possible solutions.

Relevance: 100.00%

Abstract:

The objective of this study was to investigate the effects of circularity, comorbidity, prevalence and presentation variation on the accuracy of differential diagnoses made in optometric primary care, using a modified form of naïve Bayesian sequential analysis. No such investigation has been reported before. Data were collected for 1422 cases seen over one year. Positive test outcomes were recorded for case history (ethnicity, age, symptoms, and ocular and medical history) and clinical signs in relation to each diagnosis. For this reason, only positive likelihood ratios were used in this modified form of Bayesian analysis, which was carried out with Laplacian correction and Chi-square filtration. Accuracy was expressed as the percentage of cases for which the diagnosis made by the clinician appeared at the top of a list generated by Bayesian analysis. Preliminary analyses were carried out on 10 diagnoses and 15 test outcomes. Accuracy of 100% was achieved in the absence of presentation variation but dropped by 6% when variation existed. Circularity artificially elevated accuracy by 0.5%. Surprisingly, removal of Chi-square filtering increased accuracy by 0.4%. Decision tree analysis showed that accuracy was influenced primarily by prevalence, followed by presentation variation and comorbidity. An analysis of 35 diagnoses and 105 test outcomes followed. This explored the use of positive likelihood ratios, derived from the case history, to recommend signs to look for. Accuracy of 72% was achieved when all clinical signs were entered. The drop in accuracy, compared to the preliminary analysis, was attributed to the fact that some diagnoses lacked strong diagnostic signs; accuracy increased by 1% when only the recommended signs were entered. Chi-square filtering improved recommended test selection. Decision tree analysis showed that accuracy was again influenced primarily by prevalence, followed by comorbidity and presentation variation.
Future work will explore the use of likelihood ratios based on positive and negative test findings before considering naïve Bayesian analysis as a form of artificial intelligence in optometric practice.
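The exact scoring used in the study is not given here; the core of a positive-likelihood-ratio update under the naive (conditional independence) assumption can be sketched as follows, with diagnoses then ranked by posterior probability. The diagnoses, prevalences, and LR+ values below are hypothetical, for illustration only:

```python
def posterior_prob(prior, positive_lrs):
    """Sequential naive-Bayes update: multiply the prior odds by the positive
    likelihood ratio (LR+) of each observed finding, assuming conditional
    independence, then convert the odds back to a probability."""
    odds = prior / (1.0 - prior)
    for lr in positive_lrs:
        odds *= lr
    return odds / (1.0 + odds)

def rank_diagnoses(candidates):
    """candidates: {diagnosis: (prevalence, [LR+ of its positive findings])}.
    Returns diagnoses sorted by descending posterior, matching the accuracy
    criterion of placing the clinician's diagnosis at the top of the list."""
    scored = {d: posterior_prob(p, lrs) for d, (p, lrs) in candidates.items()}
    return sorted(scored, key=scored.get, reverse=True)
```

The sketch makes the study's decision-tree finding plausible: the prior (prevalence) enters every posterior multiplicatively, so it dominates the ranking unless a diagnosis has strongly discriminating signs.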

Relevance: 100.00%

Abstract:

This paper is about two fundamental problems in the field of computer science. Solving these two problems is important because it bears on the creation of Artificial Intelligence. In fact, these two problems are not very famous, because they have few applications outside the field of Artificial Intelligence. In this paper we give a solution to neither the first nor the second problem; our goal is to formulate these two problems and to give some ideas for their solution.

Relevance: 100.00%

Abstract:

Information processing in the human brain has always been considered a source of inspiration in Artificial Intelligence; in particular, it has led researchers to develop tools such as artificial neural networks. Recent findings in neurophysiology provide evidence that not only neurons but also isolated astrocytes and networks of astrocytes are responsible for processing information in the human brain. Artificial neural networks (ANNs) model neuron-neuron communications. Artificial neuron-glia networks (ANGNs) model, in addition to neuron-neuron communications, neuron-astrocyte connections. Continuing the research on ANGNs, we first propose and evaluate a model of adaptive neuro-fuzzy inference systems augmented with artificial astrocytes. Then, we propose a model of ANGNs that captures the communications of astrocytes in the brain; in this model, a network of artificial astrocytes is implemented on top of a typical neural network. The results of the implementation of both networks show that, for certain combinations of the parameter values specifying the astrocytes and their connections, the new networks outperform typical neural networks. This research opens a range of possibilities for future work on designing more powerful artificial neural network architectures based on more realistic models of the human brain.

Relevance: 100.00%

Abstract:

At the moment, the phrases “big data” and “analytics” are often being used as if they were magic incantations that will solve all an organization’s problems at a stroke. The reality is that data on its own, even with the application of analytics, will not solve any problems. The resources that analytics and big data can consume represent a significant strategic risk if applied ineffectively. Any analysis of data needs to be guided, and to lead to action. So while analytics may lead to knowledge and intelligence (in the military sense of that term), it also needs the input of knowledge and intelligence (in the human sense of that term). And somebody then has to do something new or different as a result of the new insights, or it won’t have been done to any purpose. Using an analytics example concerning accounts payable in the public sector in Canada, this paper reviews thinking from the domains of analytics, risk management and knowledge management, to show some of the pitfalls, and to present a holistic picture of how knowledge management might help tackle the challenges of big data and analytics.

Relevance: 100.00%

Abstract:

The main objective of this thesis was to create, implement, and evaluate the effectiveness of a cognitive remediation program acting in a comparable way on the fluid (Gf) and crystallized (Gc) aspects of intelligence, in a population of clinical interest: adolescents with borderline intellectual functioning (BIF). Given the high prevalence of this condition, the GAME remediation program (Gains et Apprentissages Multiples pour Enfant) was built around commercially available games, to make the program easy to access and to implement in a variety of settings. The first article of this thesis, a systematic review of the literature, aimed to take stock of published studies using games as a cognitive remediation tool in the pediatric population. The effectiveness and the quality of the paradigms used were evaluated, and recommendations on the methodological requirements for this type of study were proposed. This article provided a better understanding of the pitfalls to avoid and the methodological strengths to integrate when creating the GAME remediation program, and several of the methodological caveats it identified improved the quality of the cognitive remediation program developed in this thesis project. Given the scarcity of studies in the scientific literature on the population with BIF (70[…] In the cognitive domain, a strength was identified at the language level, for receptive vocabulary and verbal fluency. Moreover, certain aspects of attentional capacities and working memory appeared to be well compensated, possibly through the effect of psychostimulants.
Surprisingly, adaptive functioning was not directly related to overall intellectual level and was heterogeneous, suggesting the importance of assessing this domain to account for the everyday functioning of adolescents with BIF. From a behavioral and psychiatric point of view, adolescents with BIF showed more internalizing and externalizing manifestations reaching a clinically significant threshold than their peers, and these behavioral manifestations explain an important part of the level of parental stress in this population. These results are important to consider in the academic, clinical, and family guidance involved in the care of adolescents with BIF, and underline the importance of offering a thorough neuropsychological assessment. Finally, the central part of this thesis consisted of creating a cognitive remediation program targeting the fluid and crystallized aspects of intelligence, a field of intervention that has been neglected given the long-postulated stability of these processes. This remediation program, entitled GAME, was aimed at adolescents with pure or partial BIF (either both reasoning indices in the borderline zone, or only one of the two), and comprised two arms: GAME-c (targeting crystallized intelligence) and GAME-f (targeting fluid intelligence). The intervention lasted sixteen hours spread over eight weeks. The results indicate that adolescents who followed GAME-f improved their fluid reasoning, while adolescents who followed GAME-c improved both their crystallized and their fluid reasoning. This study thus helps call into question the stability of intellectual processes; it is also the first time that improvements in intelligence have been observed in a population of clinical interest through direct training.
Finally, the cognitive, adaptive, behavioral, and psychiatric variables likely to influence the degree of improvement in each of the GAME programs were the subject of additional analyses in a final chapter, and support the possibility of adapting the GAME program to other populations (e.g., intellectual disability). This thesis thus underlines the relevance of using games as a cognitive remediation tool, given their versatility of use, their ease of access, and their low cost. It also highlights the need to develop a better understanding of the population with borderline intellectual functioning and to carry out exhaustive neuropsychological assessments (cognitive, adaptive, behavioral, and psychiatric) with this population. Finally, it points to the possibility of improving fluid and crystallized intelligence through direct remediation in individuals with subnormal intelligence, and suggests that the same could hold for populations with cognitive deficits, such as mild intellectual disability. Future research avenues and the clinical implications of this work are discussed in relation to the results of these studies.