15 results for Medicine -- Informatics
at Universidad Politécnica de Madrid
Abstract:
This work presents a comparative analysis of three different learning algorithms based on Decision Trees (C4.5) and Artificial Neural Networks (Multilayer Perceptron, MLP, and General Regression Neural Network, GRNN), implemented with the goal of predicting the outcomes of cognitive rehabilitation for people with acquired brain injury. The analysis includes patient demographics, the impairment profile, and the results of the rehabilitation tasks executed by the patients. The models were evaluated using the Institut Guttmann database. Algorithm performance was measured through analysis of specificity, sensitivity and accuracy, and through analysis of the confusion matrix. The results show that the C4.5 implementation achieved a specificity, sensitivity and accuracy of 98.43%, 83.77% and 89.42%, respectively, performing significantly better than the Multilayer Perceptron and the General Regression Network.
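The three figures reported above come straight from a binary confusion matrix. As a minimal illustration of how they relate, the sketch below computes them from hypothetical counts, not the actual Institut Guttmann figures:

```python
# Minimal sketch: deriving the three reported metrics from a binary
# confusion matrix. The counts passed in are hypothetical.
def confusion_metrics(tp, tn, fp, fn):
    sensitivity = tp / (tp + fn)               # true-positive rate
    specificity = tn / (tn + fp)               # true-negative rate
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    return sensitivity, specificity, accuracy

sens, spec, acc = confusion_metrics(tp=129, tn=189, fp=3, fn=25)
print(f"sensitivity={sens:.2%} specificity={spec:.2%} accuracy={acc:.2%}")
```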
Abstract:
The main purpose of this research is the application of Artificial Metaplasticity in a Multilayer Perceptron (AMMLP) as a data-mining tool for prediction and explicit knowledge extraction from the cognitive rehabilitation process of patients with acquired brain injury. The results obtained by the AMMLP, together with the subsequent analysis of the database, would help therapists to understand the characteristics of the patients who improve and the rehabilitation programs they have followed. This would increase knowledge of the rehabilitation process and facilitate the elaboration of therapeutic hypotheses, allowing the optimization and personalization of therapies. The AMMLP was evaluated with data provided by the Institut Guttmann. Its results were compared with those obtained by a backpropagation neural network and by decision trees. The prediction accuracy achieved by the AMMLP on the verbal-visual memory cognitive subfunction was 90.71%, far superior to the results obtained by the other algorithms.
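The AMMLP modulates learning according to how frequent each input pattern is, training harder on rare patterns. The sketch below illustrates that idea only, under the assumption that each update is scaled by the inverse of a crude density estimate; it is not the published AMMLP weighting function, and the single logistic unit stands in for the full multilayer network:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))                  # toy inputs
y = (X[:, 0] + X[:, 1] > 0).astype(float)      # toy binary target
w = np.zeros(4)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for epoch in range(20):
    for x_i, y_i in zip(X, y):
        p_hat = np.exp(-0.5 * x_i @ x_i)       # crude Gaussian density proxy
        m = min(1.0 / (p_hat + 0.05), 20.0)    # metaplasticity factor, capped
        err = sigmoid(w @ x_i) - y_i
        w -= 0.01 * m * err * x_i              # density-weighted gradient step

print("training accuracy:", ((sigmoid(X @ w) > 0.5) == y).mean())
```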
Abstract:
Nowadays, HIV-infected people with access to treatment can delay their entry into the AIDS phase of the disease indefinitely, becoming chronic patients. A better understanding of the behavior of the virus and of how it affects infected people could lead to optimized treatment and thereby improve patients' quality of life. In this context data mining comes into play: a set of methodologies that, applied to large databases, allow us to obtain novel and potentially useful information hidden in them. This research work makes a first approach to the problem by searching for associations in a database containing the electronic health records of infected people treated at the Hospital Clínic de Barcelona.
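As a hedged sketch of the association-search step, the following uses the Apriori implementation from mlxtend on a handful of hypothetical, already-discretized EHR attributes; the real Hospital Clínic variables and thresholds are not described in the abstract:

```python
import pandas as pd
from mlxtend.frequent_patterns import apriori, association_rules

# Hypothetical one-hot clinical attributes, one row per patient visit.
records = pd.DataFrame({
    "cd4_low":         [1, 1, 0, 1, 0, 1],
    "viral_load_high": [1, 1, 0, 1, 0, 0],
    "on_haart":        [0, 0, 1, 0, 1, 1],
    "adherent":        [0, 1, 1, 0, 1, 1],
}).astype(bool)

# Mine frequent itemsets, then derive high-confidence association rules.
itemsets = apriori(records, min_support=0.3, use_colnames=True)
rules = association_rules(itemsets, metric="confidence", min_threshold=0.7)
print(rules[["antecedents", "consequents", "support", "confidence"]])
```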
Abstract:
This contribution reviews the current state of the art in the application of Complex Networks Theory to the analysis of functional brain networks. We briefly overview the main advances in this field during the last decade and explain how graph analysis has increased our knowledge about how the brain behaves when performing a specific task, and how it fails when a certain pathology arises. We also show the limitations of this kind of analysis, which have been a source of confusion and misunderstanding when interpreting the results obtained. Finally, we discuss a possible direction to follow in the coming years.
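A typical pipeline in this literature thresholds a functional connectivity matrix into a binary graph and computes network metrics on it. The sketch below does this with networkx on a random matrix standing in for real connectivity data:

```python
import networkx as nx
import numpy as np

# Hypothetical "connectivity" matrix; real pipelines derive it from
# correlations or synchronization between recorded brain signals.
rng = np.random.default_rng(1)
n = 32
corr = np.abs(rng.normal(size=(n, n)))
corr = (corr + corr.T) / 2          # make it symmetric
np.fill_diagonal(corr, 0)

# Keep the strongest 20% of links and build a binary graph.
G = nx.from_numpy_array((corr > np.percentile(corr, 80)).astype(int))

print("clustering coefficient:", nx.average_clustering(G))
if nx.is_connected(G):
    print("characteristic path length:", nx.average_shortest_path_length(G))
```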
Abstract:
Nanotechnology represents an area of particular promise and significant opportunity across multiple scientific disciplines. Ongoing nanotechnology research ranges from the characterization of nanoparticles and nanomaterials to the analysis and processing of experimental data, seeking correlations between nanoparticles and their functionalities and side effects. Due to their special properties, nanoparticles are suitable for cellular-level diagnostics and therapy, offering numerous applications in medicine, e.g. the development of biomedical devices, tissue repair, drug delivery systems and biosensors. In nanomedicine, recent studies are producing large amounts of structural and property data, highlighting the role of computational approaches in information management. While in vitro and in vivo assays are expensive, the cost of computing is falling. Furthermore, improvements in the accuracy of computational methods (e.g. data mining, knowledge discovery, modeling and simulation) have enabled effective tools to automate the extraction, management and storage of these vast data volumes. Since this information is widely distributed, one major issue is how to locate and access data where it resides (which also poses data-sharing limitations). The novel discipline of nanoinformatics addresses the information challenges related to nanotechnology research. In this paper, we summarize the needs and challenges in the field and present an overview of extant initiatives and efforts.
Abstract:
BACKGROUND: Clinical Trials (CTs) are essential for bridging the gap between experimental research on new drugs and their clinical application. Just as CTs for traditional drugs and biologics have helped accelerate the translation of biomedical findings into medical practice, CTs for nanodrugs and nanodevices could advance novel nanomaterials as agents for diagnosis and therapy. Although there is publicly available information about nanomedicine-related CTs, the online archiving of this information is carried out without adhering to criteria that discriminate between studies involving nanomaterials or nanotechnology-based processes (nano) and CTs that do not involve nanotechnology (non-nano). Finding out from CT summaries alone whether nanodrugs and nanodevices were involved in a study is a challenging task. At the time of writing, CTs archived in the well-known online registry ClinicalTrials.gov cannot easily be told apart as nano or non-nano CTs, even by domain experts, due to the lack of both a common definition of nanotechnology and standards for reporting nanomedical experiments and results. METHODS: We propose a supervised learning approach for classifying CT summaries from ClinicalTrials.gov according to whether they fall into the nano or the non-nano category. Our method involves several stages: i) extraction and manual annotation of CTs as nano vs. non-nano, ii) pre-processing and automatic classification, and iii) performance evaluation using several state-of-the-art classifiers under different transformations of the original dataset. RESULTS AND CONCLUSIONS: The performance of the best automated classifier closely matches that of experts (AUC over 0.95), suggesting that it is feasible to automatically detect the presence of nanotechnology products in CT summaries with a high degree of accuracy. This can significantly speed up the process of finding out whether reports on ClinicalTrials.gov might be relevant to a particular nanoparticle or nanodevice, which is essential for discovering any precedents for nanotoxicity events or advantages for targeted drug therapy.
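A minimal sketch of such a supervised pipeline, assuming TF-IDF features and a logistic regression classifier; the paper's annotated corpus and its best-performing classifier are not reproduced here:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

# Hypothetical CT summaries, repeated so cross-validation has data.
summaries = [
    "liposomal doxorubicin nanoparticle formulation for solid tumors",
    "standard physiotherapy program after knee replacement",
    "gold nanoparticle contrast agent for CT imaging",
    "behavioral intervention for smoking cessation",
] * 10
labels = [1, 0, 1, 0] * 10   # 1 = nano, 0 = non-nano

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
auc = cross_val_score(clf, summaries, labels, cv=5, scoring="roc_auc")
print("mean AUC:", auc.mean())
```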
Abstract:
To support the efficient execution of post-genomic multi-centric clinical trials in breast cancer, we propose a solution that streamlines the assessment of the eligibility of patients for available trials. Assessing a patient's eligibility for a trial requires evaluating whether each eligibility criterion is satisfied, and is often a time-consuming, manual task. The main focus in the literature has been on proposing different methods for modelling and formalizing the eligibility criteria. However, the current adoption of these approaches in clinical care is limited. Less effort has been dedicated to automatically matching criteria against the patient data managed in clinical care. We address both aspects and propose a scalable, efficient and pragmatic patient-screening solution enabling automatic evaluation of patients' eligibility for a relevant set of trials. This covers the flexible formalization of criteria and of other relevant trial metadata, and the efficient management of these representations.
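One simple way to formalize eligibility criteria is as named predicates evaluated against a patient record; the field names and checks below are hypothetical illustrations, not the project's actual criterion model:

```python
# Each criterion is a named predicate over a patient dictionary.
criteria = {
    "age_18_or_older": lambda p: p["age"] >= 18,
    "her2_positive":   lambda p: p["her2_status"] == "positive",
    "no_prior_chemo":  lambda p: not p["prior_chemotherapy"],
}

patient = {"age": 54, "her2_status": "positive", "prior_chemotherapy": False}

# Collect the criteria the patient fails; empty list means eligible.
unmet = [name for name, check in criteria.items() if not check(patient)]
print("eligible" if not unmet else f"not eligible, failed: {unmet}")
```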
Abstract:
Recent commentaries have proposed the advantages of using open exchange of data and informatics resources for improving health-related policies and patient care in Africa. Yet, in many African regions, both private medical and public health information systems are still unaffordable. Open exchange over the social Web 2.0 could encourage more altruistic support of medical initiatives. We have carried out experiments to demonstrate the feasibility of using this approach to disseminate open data and informatics resources in Africa. After the experiments we developed the AFRICA BUILD Portal, the first social network for African biomedical researchers. Through the AFRICA BUILD Portal, users can transparently access several resources. Currently, over 600 researchers are using distributed and open resources through this platform, which is designed to cope with low-bandwidth connections.
Abstract:
The Institute of Tropical Medicine in Antwerp presents here the results of two pilot distance-learning training programmes developed under the umbrella of the AFRICA BUILD project (FP7). The two courses focused on evidence-based medicine (EBM), with the aim of enhancing research and education via novel approaches and of identifying research needs emanating from the field. These pilot experiences, run in both English-speaking (Ghana) and French-speaking (Mali and Cameroon) partner institutions, produced targeted courses for strengthening research methodology and policy. The courses and related study materials are in the public domain and available through the AFRICA BUILD Portal (http://www.africabuild.eu/taxonomy/term/37); the training modules were delivered live via Dudal webcasts. This paper assesses the successes and difficulties of transferring EBM skills with these two training programmes, offered through three different approaches: fully online facultative courses, fully online tutor-supported courses, or a blended approach with both online and face-to-face sessions. Key factors affecting the selection of participants, the accessibility of the courses, how the learning resources are offered, and how interactive online communities are formed are evaluated and discussed.
Capacity Building through education, research and collaboration: AFRICA BUILD, an eHealth Case Study
Abstract:
AFRICA BUILD (AB) is a Coordination Action project under the 7th European Framework Programme with the aim of improving the capacities for health research and education in Africa through Information and Communication Technologies (ICT). The project, started in 2012, has promoted health research, education and evidence-based practice in Africa through the creation of centers of excellence, using ICT, know-how, eLearning and knowledge sharing through Web-enabled virtual communities.
Abstract:
Secure access to patient data is becoming increasingly important as medical informatics grows in significance, both to assist with population health studies and to support patient-specific medicine in treatment. However, assembling the many different types of data emanating from the clinic is in itself a difficulty, and doing so across national borders compounds the problem. In this paper we present our solution: an easy-to-use distributed informatics platform embedding a state-of-the-art data warehouse that incorporates a secure pseudonymisation system protecting access to personal healthcare data. Using this system, a whole range of patient-derived data, from genomics to imaging to clinical records, can be assembled and linked, and then connected with analytics tools that help us to understand the data. Research performed in this environment will have immediate clinical impact for personalised patient healthcare.
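A common building block for such pseudonymisation is a keyed hash: the same identifier always maps to the same pseudonym, but the mapping cannot be reversed without the secret key. A minimal sketch of the principle follows; the platform's actual scheme is not detailed in the abstract:

```python
import hashlib
import hmac

# Hypothetical key; in practice it would live in a managed secret store.
SECRET_KEY = b"replace-with-a-managed-secret"

def pseudonymise(patient_id: str) -> str:
    """Map a patient identifier to a stable, non-reversible pseudonym."""
    digest = hmac.new(SECRET_KEY, patient_id.encode(), hashlib.sha256)
    return digest.hexdigest()[:16]

print(pseudonymise("NHS-1234567"))   # same input, same pseudonym, every time
```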
Abstract:
Semantic interoperability is essential to facilitate efficient collaboration in heterogeneous multi-site healthcare environments. The deployment of a semantic interoperability solution has the potential to enable a wide range of informatics-supported applications in clinical care and research, both within a single healthcare organization and in a network of organizations. At the same time, building and deploying a semantic interoperability solution may require significant effort to carry out data transformation and to harmonize the semantics of the information in the different systems. Our approach to semantic interoperability leverages existing healthcare standards and ontologies, focusing first on specific clinical domains and key applications, and gradually expanding the solution when needed. An important objective of this work is to create a semantic link between clinical research and care environments to enable applications such as streamlining the execution of multi-centric clinical trials, including the identification of eligible patients for the trials. This paper presents an analysis of the suitability of several widely used medical ontologies in the clinical domain (SNOMED-CT, LOINC, MedDRA) to capture the semantics of the clinical trial eligibility criteria, of the clinical trial data (e.g., Case Report Forms), and of the corresponding patient record data that would enable the automatic identification of eligible patients. In addition to the coverage provided by the ontologies, we evaluate and compare the sizes of the sets of relevant concepts and their relative frequency, to estimate the cost of data transformation, of building the necessary semantic mappings, and of extending the solution to new domains. This analysis shows that our approach is both feasible and scalable.
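The coverage comparison can be pictured as simple set arithmetic over concept inventories; the concept sets below are hypothetical stand-ins for real SNOMED-CT, LOINC and MedDRA content:

```python
# Concepts mentioned in the eligibility criteria (hypothetical).
criteria_concepts = {"breast carcinoma", "hemoglobin measurement",
                     "nausea", "estrogen receptor status"}

# Hypothetical slices of each terminology's content.
ontology_concepts = {
    "SNOMED-CT": {"breast carcinoma", "nausea", "estrogen receptor status"},
    "LOINC":     {"hemoglobin measurement", "estrogen receptor status"},
    "MedDRA":    {"nausea"},
}

# Coverage = how many criteria concepts each ontology can represent.
for name, concepts in ontology_concepts.items():
    covered = criteria_concepts & concepts
    print(f"{name}: {len(covered)}/{len(criteria_concepts)} concepts covered")
```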
Abstract:
Clinicians could model the brain injury of a patient through their brain activity. However, how this model is defined and how it changes as the patient recovers are questions that remain unanswered. In this paper, the MedVir framework is proposed with the aim of answering these questions. Based on complex data-mining techniques, it provides not only the differentiation between TBI patients and control subjects (with 72% accuracy using 0.632 bootstrap validation), but also the ability to detect whether a patient may recover or not, all in a quick and easy way through a visualization technique that allows interaction.
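The 0.632 bootstrap mentioned above blends resubstitution and out-of-bag accuracy as 0.368 * acc_train + 0.632 * acc_oob. A minimal sketch on toy data; MedVir's actual features and classifier are not described in the abstract:

```python
import numpy as np
from sklearn.base import clone
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
X = rng.normal(size=(100, 5))        # toy features
y = (X[:, 0] > 0).astype(int)        # toy labels
model = LogisticRegression()

scores = []
for _ in range(200):
    idx = rng.integers(0, len(X), len(X))        # bootstrap sample (with replacement)
    oob = np.setdiff1d(np.arange(len(X)), idx)   # out-of-bag indices
    m = clone(model).fit(X[idx], y[idx])
    acc_train = m.score(X[idx], y[idx])          # resubstitution accuracy
    acc_oob = m.score(X[oob], y[oob])            # out-of-bag accuracy
    scores.append(0.368 * acc_train + 0.632 * acc_oob)

print("0.632 bootstrap accuracy:", np.mean(scores))
```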
Abstract:
The book we present responds to this call, since it deals with some of the roots or foundations of informatics. We wrote it with university students in informatics-related fields in mind, as well as the professionals mentioned above. The latter will find a self-contained text, stripped as far as possible of the usual theoretical apparatus and permanently concerned with opening paths to highly topical questions, such as fuzzy systems or software complexity, and to questions in which a future seems to be taking shape. As for the students, our experience tells us that, due to an accumulation of circumstances that are beside the point here, they are often forced to study the subjects covered by our book, perhaps with greater breadth and mathematical formalism, but not always under optimal conditions: improvised notes, texts in foreign languages, dispersion of these same subjects across different courses and hence fragmentation of their radical sense (their roots), or detachment from the sense of their application. Without calling into question the scientific need for the best possible formalism, it is well established that excessive and exclusive doses of that medicine lead, on the educational level, to sterile discouragement among students.
Abstract:
Over the last few years there has been a huge growth in biomedical data sources. The emergence of new techniques for extracting genomic data, and of databases that contain this information, has created the need to store it so that it can be accessed and worked with. The information produced by research in the biomedical field is stored in databases, because databases allow data to be stored and managed in a simple and fast way. Databases come in a wide variety of formats, such as Excel, CSV or RDF, among others. Currently, this research is based on data analysis, searching for correlations that allow inferring, for example, new treatments or more effective therapies for a given disease or ailment. The volume of data handled is very large and disparate, which makes it necessary to develop automatic methods for integrating and homogenizing the heterogeneous data.
The European project p-medicine (FP7-ICT-2009-270089) aims to assist medical researchers, in this case in cancer-related research, by providing them with new tools for managing data and generating new knowledge from the analysis of the managed data. The ingestion of data into the p-medicine platform, and its processing with the provided methods, seeks to generate new models to support clinical decision making. Within this project there are several tools for the integration of heterogeneous data, the design and management of clinical trials, the simulation and visualization of tumors, and statistical data analysis. Precisely in the field of heterogeneous data integration arises the need to add external information to the system from public databases, and to relate it to the existing data through semantic integration techniques. To meet this need a tool called Term Searcher has been created, which allows this process to be carried out semi-automatically. The work presented here describes its development and the algorithms created for its correct operation. This tool offers new functionalities, which did not previously exist within the project, for adding new data from public sources and semantically integrating them with private data.
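The matching step such a tool performs can be sketched as ranking candidate public-vocabulary terms by string similarity and leaving the final choice to a curator; the terms and threshold below are hypothetical:

```python
from difflib import SequenceMatcher

# Hypothetical slice of a public vocabulary to match against.
public_terms = ["breast neoplasm", "breast carcinoma",
                "lung neoplasm", "carcinoma in situ"]

def candidates(local_term, vocabulary, threshold=0.6):
    """Rank vocabulary terms by similarity to a local term."""
    scored = [(SequenceMatcher(None, local_term.lower(), t.lower()).ratio(), t)
              for t in vocabulary]
    # Keep matches above the threshold, best first; a curator picks one.
    return sorted((s, t) for s, t in scored if s >= threshold)[::-1]

print(candidates("breast cancer", public_terms))
```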