950 results for Online services using open-source NLP tools


Relevance:

100.00%

Publisher:

Abstract:

Antibiotic resistance is a public health problem whose management requires a surveillance system based on the collection and analysis of laboratory epidemiological data. This doctoral project consisted of developing a web application, usable at the hospital level, for managing antibiotic susceptibility data of clinical isolates. A web platform backed by a relational database was built so that the application would be dynamic and easy to update: new data can be entered without manually editing the HTML pages that make up the application itself. The open-source MySQL database was chosen for its many advantages: it is extremely stable, performs well, is supported by a large online community, and is free. The dynamic content of the web application must be generated by a scripting language that automates the insertion, modification, deletion, and display of large amounts of data. PHP was chosen: an open-source language developed specifically for building dynamic web pages and fully compatible with MySQL. The database architecture was defined by creating the tables holding the data and the relationships between them: patient records, sample data, isolated microorganisms, and antibiograms with the interpretive categories for each antibiotic. Once tables and relationships were defined, the code for the main functions was written: manual entry of antibiograms, import of multiple antibiograms from files exported by automated instruments, editing/deletion of antibiograms previously entered into the system, and analysis of the data in the database, showing trends in the prevalence of microbial species and in their drug resistance, accompanied by charts. Development included continuous testing of the functions as they were implemented, using real clinical data; appropriate validation checks were introduced, along with a simple, clean graphical layout.
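The schema described lends itself to a compact illustration. The sketch below mirrors the tables the abstract lists (samples, isolates, antibiograms with S/I/R interpretive categories) and one of the trend queries behind the charts; all table and column names are invented here, and Python's sqlite3 stands in for the MySQL/PHP stack actually used.

```python
# Minimal sketch of the kind of relational schema the thesis describes.
# Table and column names are hypothetical; sqlite3 keeps it self-contained.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE sample (
    id INTEGER PRIMARY KEY,
    patient_id TEXT NOT NULL,
    collected_on TEXT NOT NULL          -- ISO date
);
CREATE TABLE isolate (
    id INTEGER PRIMARY KEY,
    sample_id INTEGER NOT NULL REFERENCES sample(id),
    organism TEXT NOT NULL              -- e.g. 'Escherichia coli'
);
CREATE TABLE antibiogram (
    id INTEGER PRIMARY KEY,
    isolate_id INTEGER NOT NULL REFERENCES isolate(id),
    antibiotic TEXT NOT NULL,
    category TEXT CHECK (category IN ('S', 'I', 'R'))  -- interpretive category
);
""")

# Manual insertion of one antibiogram, as the web form would do.
conn.execute("INSERT INTO sample (patient_id, collected_on) VALUES (?, ?)",
             ("P001", "2024-05-01"))
conn.execute("INSERT INTO isolate (sample_id, organism) "
             "VALUES (1, 'Escherichia coli')")
conn.execute("INSERT INTO antibiogram (isolate_id, antibiotic, category) "
             "VALUES (1, 'ciprofloxacin', 'R')")

# Trend query: resistance rate per antibiotic, the kind of aggregate
# behind the application's prevalence charts.
for row in conn.execute("""
    SELECT antibiotic, AVG(category = 'R') AS resistance_rate
    FROM antibiogram GROUP BY antibiotic
"""):
    print(row)
```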

Relevance:

100.00%

Publisher:

Abstract:

BACKGROUND: Physiologic data display is essential to decision making in critical care. Current displays echo first-generation hemodynamic monitors dating to the 1970s and have not kept pace with new insights into physiology or the needs of clinicians who must make progressively more complex decisions about their patients. The effectiveness of any redesign must be tested before deployment. Tools that compare current displays with novel presentations of processed physiologic data are required. Regenerating conventional physiologic displays from archived physiologic data is an essential first step. OBJECTIVES: The purposes of the study were to (1) describe the SSSI (single sensor single indicator) paradigm that is currently used for physiologic signal displays, (2) identify and discuss possible extensions and enhancements of the SSSI paradigm, and (3) develop a general approach and a software prototype to construct such "extended SSSI displays" from raw data. RESULTS: We present the Multi Wave Animator (MWA) framework, a set of open-source MATLAB (MathWorks, Inc., Natick, MA, USA) scripts that create dynamic visualizations (e.g., video files in AVI format) of patient vital signs recorded from bedside (intensive care unit or operating room) monitors. Multi Wave Animator creates animations in which vital signs are displayed to mimic their appearance on current bedside monitors. The source code of MWA is freely available online together with a detailed tutorial and sample data sets.
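As a rough illustration of what such a tool does, the sketch below recreates a scrolling, monitor-style animation from an archived waveform and writes it to a video file. It is not the MWA code (which is MATLAB); it is a minimal Python/matplotlib analogue, with the sampling rate and signal invented.

```python
# Illustrative sketch only: scrolling vital-sign animation from archived
# samples, mimicking a bedside monitor sweep.
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.animation import FuncAnimation

fs = 125                                  # assumed sampling rate, Hz
t = np.arange(0, 60, 1 / fs)
signal = np.sin(2 * np.pi * 1.2 * t)      # stand-in for an archived waveform

window = 5 * fs                           # 5-second sweep, monitor-style
fig, ax = plt.subplots()
(line,) = ax.plot(t[:window], signal[:window])
ax.set_xlabel("time (s)")
ax.set_ylabel("amplitude")

def update(frame):
    i = frame * fs // 25                  # advance 1/25 s per video frame
    line.set_data(t[i:i + window], signal[i:i + window])
    ax.set_xlim(t[i], t[i + window])
    return (line,)

anim = FuncAnimation(fig, update, frames=200, interval=40, blit=False)
anim.save("vitals.mp4", fps=25)           # needs ffmpeg; AVI is also possible
```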

Relevance:

100.00%

Publisher:

Abstract:

We recently reported on the Multi Wave Animator (MWA), a novel open-source tool with the capability of recreating continuous physiologic signals from archived numerical data and presenting them as they appeared on the patient monitor. In this report, we demonstrate for the first time the power of this technology in a real clinical case: an intraoperative cardiopulmonary arrest following reperfusion of a liver transplant graft. Using the MWA, we animated hemodynamic and ventilator data acquired before, during, and after cardiac arrest and resuscitation. This report is accompanied by an online video that shows the most critical phases of the cardiac arrest and resuscitation and provides a basis for analysis and discussion. The video is extracted from a 33-minute, uninterrupted recording of the cardiac arrest and resuscitation, which is also available online. The unique strength of the MWA, its ability to accurately present discrete and continuous data in a format familiar to clinicians, allowed us this rare glimpse into the events leading to an intraoperative cardiac arrest. Because of its ability to recreate and replay clinical events, this tool should be of great interest to medical educators, researchers, and clinicians involved in quality assurance and patient safety.

Relevance:

100.00%

Publisher:

Abstract:

Submicroscopic changes in chromosomal DNA copy number dosage are common and have been implicated in many heritable diseases and cancers. Recent high-throughput technologies have a resolution that permits the detection of segmental changes in DNA copy number that span thousands of base pairs across the genome. Genome-wide association studies (GWAS) may simultaneously screen for copy number-phenotype and SNP-phenotype associations as part of the analytic strategy. However, genome-wide array analyses are particularly susceptible to batch effects, as the logistics of preparing DNA and processing thousands of arrays often involve multiple laboratories and technicians, or changes over calendar time to the reagents and laboratory equipment. Failure to adjust for batch effects can lead to incorrect inference and requires inefficient post-hoc quality control procedures that exclude regions associated with batch. Our work extends previous model-based approaches to copy number estimation by explicitly modeling batch effects and using shrinkage to improve locus-specific estimates of copy number uncertainty. Key features of this approach include the use of diallelic genotype calls from experimental data to estimate batch- and locus-specific parameters of background and signal without requiring training data. We illustrate these ideas using a study of bipolar disease and a study of chromosome 21 trisomy. The former has batch effects that dominate much of the observed variation in quantile-normalized intensities, while the latter illustrates the robustness of our approach to data sets in which as many as 25% of the samples have altered copy number. Locus-specific estimates of copy number can be plotted on the copy-number scale to investigate mosaicism and to guide the choice of appropriate downstream approaches for smoothing the copy number as a function of physical position. The software is open source and implemented in the R package CRLMM, available at Bioconductor (http://www.bioconductor.org).
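The two statistical ideas highlighted here, batch-specific parameters and shrinkage of locus-level estimates, can be illustrated compactly. The following toy sketch is not the CRLMM implementation (which is in R): it estimates a mean and variance per batch and locus from simulated intensities and shrinks each noisy locus variance toward the batch-wide value, with the prior strength nu chosen arbitrarily for illustration.

```python
# Toy sketch of batch-aware estimation with variance shrinkage.
import numpy as np

rng = np.random.default_rng(0)
# intensities[batch][locus, sample]: simulated quantile-normalized intensities
intensities = {b: rng.normal(loc=b * 0.3, scale=1.0, size=(1000, 40))
               for b in range(3)}

nu = 10.0  # assumed prior strength of the batch-wide variance
for batch, x in intensities.items():
    mu_locus = x.mean(axis=1)             # batch- and locus-specific mean
    s2_locus = x.var(axis=1, ddof=1)      # noisy locus-level variance
    s2_batch = s2_locus.mean()            # batch-wide shrinkage target
    n = x.shape[1]
    # Shrink each locus variance toward the batch value; shrinkage is
    # stronger when few samples per batch are available.
    s2_shrunk = (n * s2_locus + nu * s2_batch) / (n + nu)
    print(batch, mu_locus[:2].round(2), s2_shrunk[:2].round(2))
```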

Relevance:

100.00%

Publisher:

Abstract:

Radiation metabolomics employing mass spectral technologies represents a plausible means of high-throughput, minimally invasive radiation biodosimetry. A simplified metabolomics protocol is described that employs ubiquitous gas chromatography-mass spectrometry and open-source software, including the random forests machine-learning algorithm, to uncover latent biomarkers of 3 Gy gamma radiation in rats. Urine was collected from six male Wistar rats and six sham-irradiated controls for 7 days: 4 prior to irradiation and 3 after irradiation. Water and food consumption, urine volume, body weight, and sodium, potassium, calcium, chloride, phosphate, and urea excretion showed major effects from exposure to gamma radiation. The metabolomics protocol uncovered several urinary metabolites that were significantly up-regulated (glyoxylate, threonate, thymine, uracil, p-cresol) or down-regulated (citrate, 2-oxoglutarate, adipate, pimelate, suberate, azelate) as a result of radiation exposure. Thymine and uracil were shown to derive largely from thymidine and 2'-deoxyuridine, which are known radiation biomarkers in the mouse. The radiation metabolomic phenotype in rats appeared to derive from oxidative stress and effects on kidney function. Gas chromatography-mass spectrometry is a promising platform on which to develop the field of radiation metabolomics further and to assist in the design of instrumentation for detecting the biological consequences of environmental radiation release.
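The analysis pattern, classifying exposed versus sham urine profiles with a random forest and reading biomarker candidates off the feature importances, looks roughly like the sketch below. The data are simulated and scikit-learn stands in for whatever software the study actually used, so this illustrates the approach rather than the study's pipeline.

```python
# Hedged sketch: random-forest ranking of simulated metabolite features.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)
metabolites = ["glyoxylate", "threonate", "thymine", "citrate", "adipate"]
X_control = rng.normal(0.0, 1.0, size=(30, len(metabolites)))
X_irrad = rng.normal(0.0, 1.0, size=(30, len(metabolites)))
X_irrad[:, 2] += 2.0                      # simulate up-regulated thymine

X = np.vstack([X_control, X_irrad])
y = np.array([0] * 30 + [1] * 30)         # 0 = sham, 1 = 3 Gy irradiated

forest = RandomForestClassifier(n_estimators=500, random_state=0).fit(X, y)
# Rank candidate biomarkers by how much each feature drives the split.
ranking = sorted(zip(forest.feature_importances_, metabolites), reverse=True)
for importance, name in ranking:
    print(f"{name:10s} {importance:.3f}")
```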

Relevance:

100.00%

Publisher:

Abstract:

BACKGROUND: Gene expression analysis has emerged as a major biological research area, with real-time quantitative reverse transcription PCR (RT-QPCR) being one of the most accurate and widely used techniques for expression profiling of selected genes. In order to obtain results that are comparable across assays, a stable normalization strategy is required. In general, the normalization of PCR measurements between different samples uses one to several control genes (e.g., housekeeping genes), from which a baseline reference level is constructed. Thus, the choice of control genes is of utmost importance, yet there is no generally accepted standard technique for screening a large number of candidates and identifying the best ones. RESULTS: We propose a novel approach for scoring and ranking candidate genes for their suitability as control genes. Our approach relies on publicly available microarray data and allows the combination of multiple data sets originating from different platforms and/or representing different pathologies. The use of microarray data allows the screening of tens of thousands of genes, producing very comprehensive lists of candidates. We also provide two lists of candidate control genes: one that is breast cancer-specific and one with more general applicability. Two genes from the breast cancer list that had not previously been used as control genes are identified and validated by RT-QPCR. Open-source R functions are available at http://www.isrec.isb-sib.ch/~vpopovic/research/. CONCLUSION: We propose a new method for identifying candidate control genes for RT-QPCR that is able to rank thousands of genes according to predefined suitability criteria, and we apply it to the case of breast cancer. We also show empirically that translating the results from the microarray to the PCR platform is achievable.
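One plausible form of such a score, low and consistent variability across data sets, can be sketched as follows. The paper defines its own suitability criteria; the snippet below only illustrates the general pattern of combining several expression matrices and ranking genes by a worst-case stability measure, using simulated data.

```python
# Sketch of one possible control-gene suitability score (not the paper's
# exact criteria): prefer genes with low variability in every data set.
import numpy as np

rng = np.random.default_rng(2)
genes = [f"gene_{i}" for i in range(500)]
# Two data sets from different platforms, already log-scaled and
# row-matched to the same gene list.
datasets = [rng.normal(8, 1, size=(500, 60)),
            rng.normal(8, 1, size=(500, 40))]

def instability(expr):
    # Coefficient of variation per gene; lower means more stable.
    return expr.std(axis=1) / expr.mean(axis=1)

# Worst-case (max) CV across data sets, so a candidate must be stable
# in all of them to rank well.
score = np.max([instability(d) for d in datasets], axis=0)
best = np.argsort(score)[:10]
print([genes[i] for i in best])
```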

Relevance:

100.00%

Publisher:

Abstract:

PDP++ is a freely available, open source software package designed to support the development, simulation, and analysis of research-grade connectionist models of cognitive processes. It supports most popular parallel distributed processing paradigms and artificial neural network architectures, and it also provides an implementation of the LEABRA computational cognitive neuroscience framework. Models are typically constructed and examined using the PDP++ graphical user interface, but the system may also be extended through the incorporation of user-written C++ code. This article briefly reviews the features of PDP++, focusing on its utility for teaching cognitive modeling concepts and skills to university undergraduate and graduate students. An informal evaluation of the software as a pedagogical tool is provided, based on the author’s classroom experiences at three research universities and several conference-hosted tutorials.

Relevance:

100.00%

Publisher:

Abstract:

Online reputation management deals with monitoring and influencing the online record of a person, an organization, or a product. The Social Web offers increasingly simple ways to publish and disseminate personal or opinionated information, which can rapidly have a disastrous influence on the online reputation of the entities concerned. The author focuses on the Social Web and on the possibilities of its integration with the Semantic Web as a resource for semi-automated tracking of online reputations using imprecise natural-language terms. The inherent structure of natural language supports humans not only in communication but also in the perception of the world, and fuzziness is therefore a promising tool for transforming those human perceptions into computer artifacts. Through fuzzy grassroots ontologies, the Social Semantic Web becomes more natural and can thus streamline online reputation management. For readers interested in the crossover field of computer science, information systems, and the social sciences, this book is an ideal source for becoming acquainted with the evolving field of fuzzy online reputation management in the Social Semantic Web area.

Relevance:

100.00%

Publisher:

Abstract:

An interdisciplinary research unit consisting of 30 teams in the natural, economic and social sciences analyzed biodiversity and ecosystem services of a mountain rainforest ecosystem in the hotspot of the tropical Andes, with special reference to past, current and future environmental changes. The group assessed ecosystem services using data from ecological field and scenario-driven model experiments, and with the help of comparative field surveys of the natural forest and its anthropogenic replacement system for agriculture. The book offers insights into the impacts of environmental change on various service categories mentioned in the Millennium Ecosystem Assessment (2005): cultural, regulating, supporting and provisioning ecosystem services. Examples focus on biodiversity of plants and animals including trophic networks, and abiotic/biotic parameters such as soils, regional climate, water, nutrient and sediment cycles. The types of threats considered include land use and climate changes, as well as atmospheric fertilization. In terms of regulating and provisioning services, the emphasis is primarily on water regulation and supply as well as climate regulation and carbon sequestration. With regard to provisioning services, the synthesis of the book provides science-based recommendations for a sustainable land use portfolio including several options such as forestry, pasture management and the practices of indigenous peoples. In closing, the authors show how they integrated the local society by pursuing capacity building in compliance with the CBD-ABS (Convention on Biological Diversity - Access and Benefit Sharing), in the form of education and knowledge transfer for application.

Relevance:

100.00%

Publisher:

Abstract:

Introduction. Erroneous answers in studies on the misinformation effect (ME) can be reduced in different ways. In some studies, the ME was reduced by source-monitoring (SM) questions, warnings, or low credibility of the source of the post-event information (PEI). Results are inconsistent, however. Of course, a participant can deliberately decide to refrain from reporting a critical item only when the difference between the original event and the PEI is distinguishable in principle. We were interested in the extent to which the influence of erroneous information on a central aspect of the original event can be reduced by different means, applied singly or in combination. Method. Using a 2 (credibility: high vs. low) x 2 (warning: present vs. absent) between-subjects design with an additional control group that received neither misinformation nor a warning (N = 116), we examined the above-mentioned factors' influence on the ME. Participants viewed a short video of a robbery. The critical item suggested in the PEI was that the victim was kicked by the perpetrator (which in fact he was not). The memory test consisted of a two-alternative forced-choice recognition test followed by an SM test. Results. To our surprise, neither a main effect of erroneous PEI nor a main effect of credibility was found. The error rates for the critical item in the control group (50%) and in the high-credibility (65%) and low-credibility (52%) conditions without warning did not differ significantly. A warning about possible misleading information in the PEI significantly reduced the influence of misinformation in both credibility conditions, by 32-37%. Using an SM question also significantly reduced the error rate, but only in the high-credibility, no-warning condition. Conclusion and Future Research. Our results show that, in contrast to a warning or the use of an SM question, low source credibility did not reduce the ME. The most striking finding, however, was the absence of a main effect of erroneous PEI. Given the high error rate in the control group, we suspect that the wrong answers were caused either by the response format (recognition test) or by autosuggestion, possibly promoted by the high schema-consistency of the critical item. First results of a post-study in which we used open-ended questions before the recognition test support the former assumption. Results of a replication of this study using open-ended questions prior to the recognition test will be available by June.

Relevance:

100.00%

Publisher:

Abstract:

When developing new IT products, reusability of existing components is a key factor that can considerably improve the success rate. This has become even more important with the rise of the open-source paradigm. However, integrating different products and technologies is not always an easy task. Different communities employ different standards and tools, and most of the time it is not clear which dependencies a particular piece of software has. This is exacerbated by the transitive nature of these dependencies, making component integration a complicated affair. To help reduce this complexity we propose a model-based repository capable of automatically resolving the required dependencies. This repository needs to be expandable, so that new constraints can be analyzed, and must also support federation, for integration with other sources of artifacts. The solution we propose achieves this by working with OSGi components and using OSGi itself.
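A minimal sketch of the core operation, resolving the transitive closure of declared dependencies, is shown below. The component names and the flat dictionary representation are invented for illustration; a real OSGi resolver must additionally handle versions, capabilities, and optional constraints.

```python
# Toy transitive dependency resolution over a declared-dependency graph.
from collections import deque

dependencies = {
    "app": ["http-client", "logging"],
    "http-client": ["io-utils", "logging"],
    "io-utils": [],
    "logging": [],
}

def resolve(component: str) -> list[str]:
    """Return the component plus all transitive dependencies, breadth-first."""
    seen, queue, order = {component}, deque([component]), []
    while queue:
        current = queue.popleft()
        order.append(current)
        for dep in dependencies.get(current, []):
            if dep not in seen:         # each artifact is resolved once
                seen.add(dep)
                queue.append(dep)
    return order

print(resolve("app"))  # ['app', 'http-client', 'logging', 'io-utils']
```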

Relevance:

100.00%

Publisher:

Abstract:

Since the beginning of digital video coding, both the uncompressed video fed into the encoder and the uncompressed decoded output stream have used 8 bits to represent each sample, regardless of resolution, chroma subsampling scheme, and so on. In the same way, video coding standards require encoders to work internally with 8 bits of precision when operating on samples that have not yet been transformed to the frequency domain. However, the H.264 standard, in wide use today, allows some of its professionally oriented profiles to encode video with more than 8 bits per sample. When these profiles are used, all operations on samples still in the spatial domain are performed with the same precision as the input video. This increase in internal precision has the potential to allow more precise predictions, reducing the residual to be encoded and thus increasing coding efficiency for a given bitrate. The goal of this Project is to study, using the PSNR (Peak Signal to Noise Ratio) and SSIM (Structural Similarity) objective video quality metrics, the effects on coding efficiency and performance of an H.264 10-bit encoding/decoding chain compared with a traditional 8-bit chain.
To achieve this goal the open-source x264 encoder is used, which can encode video with 8 and 10 bits per sample using the H.264 High, High 10, High 4:2:2, and High 4:4:4 Predictive profiles. Given that no proper tools exist for computing PSNR and SSIM values of video with more than 8 bits per sample or with chroma subsampling schemes other than 4:2:0, an analysis application written in the C programming language was also developed as part of this Project. This application computes both metrics from two uncompressed video files in the YUV or Y4M format.
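For reference, the PSNR part of such an analysis tool reduces to a few lines once the peak value is parameterized by bit depth, which is exactly what breaks the common 8-bit-only tools. The sketch below is a Python illustration, not the C application developed in the project.

```python
# PSNR between two planes, generalized beyond 8 bits per sample:
# the peak value is 2**bit_depth - 1, not a fixed 255.
import numpy as np

def psnr(reference: np.ndarray, distorted: np.ndarray, bit_depth: int) -> float:
    """PSNR in dB between two planes of identical shape."""
    peak = (1 << bit_depth) - 1
    diff = reference.astype(np.float64) - distorted.astype(np.float64)
    mse = np.mean(diff ** 2)
    if mse == 0:
        return float("inf")               # identical planes
    return 10.0 * np.log10(peak ** 2 / mse)

rng = np.random.default_rng(3)
ref = rng.integers(0, 1024, size=(720, 1280), dtype=np.uint16)   # 10-bit luma
noise = rng.integers(-2, 3, size=ref.shape)
dec = np.clip(ref + noise, 0, 1023).astype(np.uint16)            # "decoded"
print(f"{psnr(ref, dec, bit_depth=10):.2f} dB")
```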

Relevance:

100.00%

Publisher:

Abstract:

The Internet of Things (IoT), as part of the Future Internet, has become one of the main research topics nowadays, thanks in part to the attention society is paying to the development of a particular class of services (smart metering, smart grids, eHealth, etc.) and to recent business forecasts that place some players, such as telecom operators (which are desperately seeking new opportunities), at the forefront, pushing interrelated technologies like Machine-to-Machine (M2M) communications. In this context, a large number of research activities are currently taking place worldwide at different levels: sensor network communications, information processing, big-data storage, semantics, service-level architectures, etc. All of them, in isolation, are reaching a level of maturity that makes the Internet of Things a tangible goal rather than a dream. However, the aforementioned services cannot wait for holistic research efforts to deliver complete solutions. It is important to produce intermediate results that avoid vertical solutions tailored to particular deployments.
In the present work, we focus on the creation of a service-level platform intended to facilitate, on the one hand, the integration of heterogeneous and geographically dispersed sensor and actuator networks (SANs) and, on the other, the development of horizontal services that use those networks and the information they provide. This enabler will be used for horizontal service development and for IoT experimentation. Prior to the definition of the platform, we carried out an extensive survey covering not only research work and projects but also standardization activities. The results can be summarized in the following assertions: a) the Open Geospatial Consortium (OGC®) Sensor Web Enablement (SWE™) data models currently represent the most complete solution for describing SANs and observations; b) the OGC interfaces, despite limitations that require changes and extensions, could be used as the basis for accessing sensors and data; and c) Next Generation Networks (NGN) offer a good substrate that facilitates the integration of SANs and the development of services.
Consequently, a new service-layer platform, called Ubiquitous Sensor Networks (USN), has been defined in this Thesis to help fill these gaps. The main highlights of the proposed USN Platform are: a) from an architectural point of view, it follows a two-layer approach (Enabler and Gateway), similar to other enablers that run on top of NGN (like OMA Presence); b) its data models and interfaces are based on the OGC SWE standards; c) it is integrated with NGN but can also be used without it over open IP infrastructures; and d) its main functions are sensor discovery, observation storage, publish-subscribe-notify, homogeneous remote execution, security, data dictionary handling, monitoring facilities, authorization support, protocol conversion utilities, synchronous and asynchronous interactions, streaming support, and basic resource arbitration. To demonstrate the functionality that the proposed USN Platform can offer to future IoT scenarios, experimental results are presented from three real-life, small-scale proofs of concept (smart metering, smart places, and environmental monitoring) and a study on semantics (an in-vehicle information system). Furthermore, the platform is currently being used as an enabler to develop both experimentation and real services in the SmartSantander EU project (which aims to integrate around 20,000 IoT devices).
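The publish-subscribe-notify and observation-storage functions can be pictured with a small sketch. The broker below is hypothetical (names and payload shape are invented; the real platform models observations with OGC SWE data models and runs over NGN), but it shows the interaction pattern a horizontal service would rely on.

```python
# Hypothetical publish-subscribe-notify broker with observation storage.
from collections import defaultdict
from typing import Callable

class ObservationBroker:
    def __init__(self):
        self._subscribers = defaultdict(list)  # phenomenon -> callbacks
        self._storage = []                     # minimal observation store

    def subscribe(self, phenomenon: str, callback: Callable[[dict], None]):
        self._subscribers[phenomenon].append(callback)

    def publish(self, observation: dict):
        self._storage.append(observation)      # observation storage
        for notify in self._subscribers[observation["phenomenon"]]:
            notify(observation)                # notify each subscriber

broker = ObservationBroker()
broker.subscribe(
    "temperature",
    lambda obs: print("alert:", obs["value"], "degC at", obs["sensor"]),
)
broker.publish({"phenomenon": "temperature", "sensor": "node-17", "value": 41.5})
```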

Relevance:

100.00%

Publisher:

Abstract:

Over the last few decades, the ever-increasing output of scientific publications has created new challenges for keeping up to date with the literature. In the biomedical area, this growth has introduced new requirements for professionals, e.g., physicians, who have to locate the exact papers they need for their clinical and research work among a huge number of publications. Against this backdrop, novel information retrieval methods are ever more necessary. While web search engines are widespread in many areas, facilitating access to all kinds of information, additional tools are required to automatically link information retrieved from these engines to specific biomedical applications. In the case of clinical environments, this also means considering aspects such as patient data security and confidentiality, and structured contents, e.g., electronic health records (EHRs). In this scenario, we have developed a new tool to facilitate query building for retrieving scientific literature related to EHRs. Results: We have developed CDAPubMed, an open-source web browser extension that integrates EHR features into biomedical literature retrieval. Clinical users can use CDAPubMed to: (i) load patient clinical documents, i.e., EHRs based on the Health Level 7 Clinical Document Architecture standard (HL7-CDA); (ii) identify terms in these documents that are relevant for scientific literature search, i.e., Medical Subject Headings (MeSH), automatically driven by the CDAPubMed configuration, which advanced users can tune to each specific situation; and (iii) generate and launch literature search queries to a major search engine, i.e., PubMed, to retrieve citations related to the EHR under examination. Conclusions: CDAPubMed is a platform-independent tool designed to facilitate literature searching using keywords contained in specific EHRs. CDAPubMed is visually integrated, as an extension of a widespread web browser, within the standard PubMed interface. It has been tested on a public dataset of HL7-CDA documents, returning significantly fewer citations because queries are focused on characteristics identified within the EHR. For instance, compared with the more than 200,000 citations retrieved by the query "breast neoplasm", fewer than ten citations were retrieved when ten patient features were added using CDAPubMed. This is an open-source tool that can be freely used for non-profit purposes and integrated with other existing systems.
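The final step that CDAPubMed automates, turning extracted MeSH terms into a narrowed PubMed query, can be sketched against NCBI's public E-utilities esearch endpoint. The endpoint and its db/term/retmax parameters are real; the extracted terms below are invented, and the actual extension integrates with the standard PubMed web interface rather than E-utilities.

```python
# Sketch: AND-combine patient-specific MeSH terms into one PubMed query.
from urllib.parse import urlencode

# Hypothetical terms, as if extracted from an HL7-CDA document.
mesh_terms = ["Breast Neoplasms", "Tamoxifen", "Postmenopause"]

# AND-combining narrows the result set, which is how adding EHR features
# cuts 200,000+ citations down to a handful.
query = " AND ".join(f'"{t}"[MeSH Terms]' for t in mesh_terms)

url = ("https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi?"
       + urlencode({"db": "pubmed", "term": query, "retmax": 20}))
print(url)  # fetch with urllib/requests to get matching PMIDs as XML
```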

Relevance:

100.00%

Publisher:

Abstract:

The initial goal was to provide a blind person with an autonomous system capable of guiding him or her according to his or her preferences. The result of this project is an autonomous device that the user configures through a software application, developed on the Android mobile platform, which communicates with the autonomous device from the user's own phone. Android was chosen as the software development platform mainly because it is open source, free of charge, and present on about 70 percent of the mobile phones in Europe. The original idea was to integrate both parts into a single device, but once the project was under way and current habits had been evaluated, we adapted the general idea to the present day. To do so we built on the most widely used mobile device of our time, the smartphone, which beyond its original purpose of making calls can perform many operations simultaneously: Internet communication, GPS positioning, file exchange over Bluetooth, and as many others as can be programmed. It is this last capability, information exchange over Bluetooth, that the project exploits as its interface; today about 90% of smartphones include the ability to exchange information via Bluetooth among their connectivity features.
Once the interface between the environment and the user was resolved, the next step was to transform the information so that the mobile device could collect it and discern which messages the user has configured to be notified about. The hardware that makes this possible is a configurable card with a commercial Bluetooth module that sends the information. The final result of this card provides an easy way to configure different messages to be used depending on the situation.
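To make the message exchange concrete, here is a hypothetical sketch of parsing one framed Bluetooth message on the receiving side. The thesis does not specify a frame format; the start byte, length field, message id, and checksum below are invented purely to illustrate how the phone could discern configured messages from noise.

```python
# Invented frame format for illustration only:
# [0x02][payload length][message id][payload...][checksum].
def parse_frame(frame: bytes) -> tuple[int, bytes]:
    """Validate a frame and return (message_id, payload)."""
    if len(frame) < 4 or frame[0] != 0x02:
        raise ValueError("bad start byte or truncated frame")
    length = frame[1]
    if len(frame) != length + 4:           # start + len + id + payload + sum
        raise ValueError("length field mismatch")
    message_id, payload, checksum = frame[2], frame[3:3 + length], frame[-1]
    if sum(frame[:-1]) & 0xFF != checksum: # simple additive checksum
        raise ValueError("checksum mismatch")
    return message_id, payload

# Example: message id 7, which a user might map to "obstacle ahead"
# in the configuration application.
payload = b"\x05"
frame = (bytes([0x02, len(payload), 7]) + payload
         + bytes([(0x02 + len(payload) + 7 + sum(payload)) & 0xFF]))
print(parse_frame(frame))                  # (7, b'\x05')
```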