28 results for pacs: information retrieval techniques
at Université de Lausanne, Switzerland
Abstract:
The advent of retrievable caval filters was a game changer in the sense that the previously irreversible act of implanting a medical device into the main venous blood stream of the body, requiring careful evaluation of the pros and cons prior to execution, suddenly became a "reversible" procedure in which potential late hazards for the patient lost most of their weight at the time of decision making. This review was designed to assess the rate of success of late retrieval of so-called retrievable caval filters, in order to obtain some indication of reasonable implant duration with respect to relatively "easy" implant removal with conventional means, i.e., catheters, hooks and lassos. A PubMed search (www.pubmed.gov) was performed with the search term "cava filter retrieval after 30 days clinical", and 20 reports published between 1994 and 2013 dealing with late retrieval of caval filters were identified, covering approximately 7,000 devices with 600 removed filters. The maximal implant duration reported is 2,599 days, and this is also the longest implant duration among removed filters. The maximal duration reported with standard retrieval techniques, i.e., catheter, hook and/or lasso, is 475 days, whereas retrievals after this period required more sophisticated techniques, including lasers. The maximal implant duration in series with 100% retrieval is 84 days, equivalent to 12 weeks or almost 3 months. We conclude that retrievable caval filters often become permanent despite the initial decision of temporary use. However, such "forgotten" retrievable devices can still be removed with a good chance of success up to three months after implantation. Conventional percutaneous removal techniques may be sufficient up to sixteen months after implantation, whereas more sophisticated catheter techniques have been shown to be successful up to 83 months, or more than seven years, of implant duration. Tilting, migrating, or misplaced devices should be removed early on and replaced, if indicated, with a device that is both efficient and retrievable.
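As an aside, the literature search described above is easy to reproduce programmatically. The following is a minimal sketch using Biopython's Entrez module against the NCBI E-utilities; the search term is the one quoted in the abstract, while the email address is a placeholder required by NCBI's usage policy.

```python
# Minimal sketch: reproducing the review's PubMed query with Biopython's
# Entrez E-utilities wrapper. Only the search term comes from the
# abstract; the email address is a placeholder.
from Bio import Entrez

Entrez.email = "your.name@example.org"  # required by NCBI; placeholder

# esearch returns the PubMed IDs matching the query
handle = Entrez.esearch(db="pubmed",
                        term="cava filter retrieval after 30 days clinical",
                        retmax=100)
record = Entrez.read(handle)
handle.close()

print(f"{record['Count']} matching records")
for pmid in record["IdList"]:
    print(pmid)
```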
Abstract:
The manipulation of DNA is routine practice in botanical research and has made a huge impact on plant breeding, biotechnology and biodiversity evaluation. DNA is easy to extract from most plant tissues and can be stored for long periods in DNA banks. Curation methods are well developed for other botanical resources such as herbaria, seed banks and botanic gardens, but procedures for the establishment and maintenance of DNA banks have not been well documented. This paper reviews the curation of DNA banks for the characterisation and utilisation of biodiversity and provides guidelines for DNA bank management. It surveys existing DNA banks and outlines their operation. It includes a review of plant DNA collection, preservation, isolation, storage, database management and exchange procedures. We stress that DNA banks require full integration with existing collections such as botanic gardens, herbaria and seed banks, and information retrieval systems that link such facilities, bioinformatic resources and other DNA banks. They also require efficient and well-regulated sample exchange procedures. Only with appropriate curation will maximum utilisation of DNA collections be achieved.
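To illustrate the kind of database record such curation implies, here is a hypothetical accession schema linking a DNA sample back to its herbarium voucher, as the paper recommends. All field names and values are illustrative assumptions, not taken from the paper.

```python
# Hypothetical DNA bank accession record, illustrating the curation data
# the paper argues must link DNA banks to herbaria and seed banks.
# All field names and values are illustrative assumptions.
from dataclasses import dataclass, field

@dataclass
class DNAAccession:
    accession_id: str          # unique DNA bank identifier
    taxon: str                 # scientific name of the source plant
    voucher: str               # herbarium voucher linking back to the specimen
    tissue: str                # source tissue, e.g. "silica-dried leaf"
    extraction_method: str     # e.g. "CTAB"
    storage: str               # e.g. "-80 C in TE buffer"
    exchange_restrictions: list[str] = field(default_factory=list)

sample = DNAAccession(
    accession_id="DB-000123",
    taxon="Arabidopsis thaliana",
    voucher="LAU-45678",
    tissue="silica-dried leaf",
    extraction_method="CTAB",
    storage="-80 C in TE buffer",
)
print(sample.taxon, "->", sample.voucher)
```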
Abstract:
In this paper we propose a novel unsupervised approach to learning domain-specific ontologies from large open-domain text collections. The method is based on the joint exploitation of Semantic Domains and Super Sense Tagging for Information Retrieval tasks. Our approach is able to retrieve domain-specific terms and concepts while associating them with a set of high-level ontological types, named supersenses, providing flat ontologies characterized by very high accuracy and pertinence to the domain.
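A rough sketch of the supersense idea, using WordNet's lexicographer file names (which define the standard coarse ontological types that supersense taggers assign). This uses NLTK's WordNet interface rather than the authors' tagger, and the example terms are invented.

```python
# Sketch of supersense lookup via WordNet lexicographer files, the
# coarse ontological types ("supersenses") the paper attaches to
# domain terms. Uses NLTK, not the authors' system.
from collections import Counter
from nltk.corpus import wordnet as wn  # requires: nltk.download('wordnet')

def supersenses(term: str) -> Counter:
    """Count the supersenses over all WordNet senses of a term."""
    return Counter(s.lexname() for s in wn.synsets(term))

# Hypothetical domain terms from a medical collection
for term in ["artery", "biopsy", "infusion"]:
    print(term, supersenses(term).most_common(2))
```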
Abstract:
BACKGROUND: The annotation of protein post-translational modifications (PTMs) is an important task of UniProtKB curators and, with continuing improvements in experimental methodology, an ever greater number of articles are being published on this topic. To help curators cope with this growing body of information we have developed a system which extracts information from the scientific literature for the most frequently annotated PTMs in UniProtKB. RESULTS: The procedure uses a pattern-matching and rule-based approach to extract sentences with information on the type and site of modification. A ranked list of protein candidates for the modification is also provided. For PTM extraction, precision varies from 57% to 94%, and recall from 75% to 95%, according to the type of modification. The procedure was used to track new publications on PTMs and to recover potential supporting evidence for phosphorylation sites annotated based on the results of large scale proteomics experiments. CONCLUSIONS: The information retrieval and extraction method we have developed in this study forms the basis of a simple tool for the manual curation of protein post-translational modifications in UniProtKB/Swiss-Prot. Our work demonstrates that even simple text-mining tools can be effectively adapted for database curation tasks, provided that a thorough understanding of the working process and requirements is first obtained. This system can be accessed at http://eagl.unige.ch/PTM/.
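A toy version of the pattern-matching step may make the approach concrete: a single regular expression pulling the modification type and site out of a sentence. The real system's patterns, rules and candidate ranking are of course far more extensive; the pattern and sentence below are illustrative.

```python
# Toy pattern-matching sketch in the spirit of the paper's rule-based
# extractor: pull the modification type and site from a sentence.
import re

# Matches e.g. "phosphorylation of Ser-15" or "acetylation at Lys120"
PTM_PATTERN = re.compile(
    r"(?P<ptm>phosphorylation|acetylation|methylation|ubiquitination)"
    r"\s+(?:of|at|on)\s+"
    r"(?P<residue>Ser|Thr|Tyr|Lys|Arg)[- ]?(?P<position>\d+)",
    re.IGNORECASE,
)

sentence = ("DNA damage induces phosphorylation of Ser-15 "
            "and acetylation at Lys120 of p53.")

for m in PTM_PATTERN.finditer(sentence):
    print(m.group("ptm"), m.group("residue"), m.group("position"))
```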
Abstract:
Textual autocorrelation is a broad and pervasive concept, referring to the similarity between nearby textual units: lexical repetitions along consecutive sentences, semantic association between neighbouring lexemes, persistence of discourse types (narrative, descriptive, dialogical...) and so on. Textual autocorrelation can also be negative, as illustrated by alternating phonological or morpho-syntactic categories, or the succession of word lengths. This contribution proposes a general Markov formalism for textual navigation, inspired by spatial statistics. The formalism can express well-known constructs in textual data analysis, such as term-document matrices, references and hyperlink navigation, (web) information retrieval, and in particular textual autocorrelation, as measured by Moran's I relative to the exchange matrix associated with neighbourhoods of various possible types. Four case studies (word length alternation, lexical repulsion, part-of-speech autocorrelation, and semantic autocorrelation) illustrate the theory. In particular, one observes a short-range repulsion between nouns together with a short-range attraction between verbs, both at the lexical and semantic levels.
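A worked numeric illustration of the central measure: Moran's I computed over the word lengths of a sentence, with a simple first-neighbour adjacency matrix standing in for the paper's exchange matrix (which admits many neighbourhood types). The sentence and weights are invented for illustration.

```python
# Worked illustration of Moran's I for textual autocorrelation: x holds
# the word lengths of consecutive tokens, and the weight matrix links
# each token to its immediate neighbours.
import numpy as np

def morans_i(x: np.ndarray, w: np.ndarray) -> float:
    """Moran's I = (n/W) * sum_ij w_ij (x_i-m)(x_j-m) / sum_i (x_i-m)^2."""
    n = len(x)
    d = x - x.mean()
    return float((n / w.sum()) * (d @ w @ d) / (d @ d))

words = "the quick brown fox jumps over the lazy dog".split()
x = np.array([len(wd) for wd in words], dtype=float)

# First-neighbour adjacency: w[i, i+1] = w[i+1, i] = 1
n = len(x)
w = np.zeros((n, n))
for i in range(n - 1):
    w[i, i + 1] = w[i + 1, i] = 1.0

print(f"Moran's I = {morans_i(x, w):.3f}")  # negative => alternation
```

Here the alternating short and long words yield a negative I, the "negative autocorrelation" the abstract describes for word length succession.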
Abstract:
Despite the tremendous amount of data collected in the field of ambulatory care, political authorities still lack synthetic indicators to provide them with a global view of health services utilization and the costs related to various types of diseases. Moreover, public health indicators fail to provide useful information for physicians' accountability purposes. The approach is based on the Swiss context, which is characterized by the greatest frequency of medical visits in Europe, the highest rate of growth in care expenditure, and poor public information, but a wealth of structured data (a new fee system was introduced in 2004). The proposed conceptual framework is universal and based on descriptors of six entities: general population, people with poor health, patients, services, resources and effects. We show that most conceptual shortcomings can be overcome and that the proposed indicators can be computed without threatening privacy protection, using modern cryptographic techniques. Twelve indicators are suggested for the surveillance of the ambulatory care system, almost all based on routinely available data: morbidity, accessibility, relevancy, adequacy, productivity, efficacy (from the points of view of the population, people with poor health, and patients), effectiveness, efficiency, health services coverage and financing. The additional costs of this surveillance system should not exceed EUR 2 million per year (EUR 0.30 per capita).
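The abstract invokes "modern cryptographic techniques" without detail; one standard technique for this purpose is keyed pseudonymization, sketched below. The scheme, key handling and identifier format shown here are illustrative assumptions, not the paper's actual design.

```python
# Minimal sketch of keyed pseudonymization, one way to link a patient's
# records across providers without exposing identities: identifiers are
# replaced by HMAC digests computed with a secret key. Illustrative
# assumption only; not the paper's actual design.
import hmac
import hashlib

SECRET_KEY = b"held-by-a-trusted-third-party"  # placeholder key

def pseudonym(patient_id: str) -> str:
    """Deterministic pseudonym: same patient -> same token; key required."""
    return hmac.new(SECRET_KEY, patient_id.encode(), hashlib.sha256).hexdigest()

# Two visits of the same patient at different providers link up,
# while the raw identifier never leaves the provider.
print(pseudonym("AHV-756.1234.5678.97"))
print(pseudonym("AHV-756.1234.5678.97") == pseudonym("AHV-756.9999.0000.00"))
```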
Abstract:
BACKGROUND: DNA sequence integrity, mRNA concentrations and protein-DNA interactions have been subject to genome-wide analyses based on microarrays with ever increasing efficiency and reliability over the past fifteen years. However, novel technologies for Ultra High-Throughput DNA Sequencing (UHTS) have very recently been harnessed to study these phenomena with unprecedented precision. As a consequence, the extensive bioinformatics environment available for array data management, analysis, interpretation and publication must be extended to include these novel sequencing data types. DESCRIPTION: MIMAS was originally conceived as a simple, convenient and local Microarray Information Management and Annotation System focused on GeneChips for expression profiling studies. MIMAS 3.0 enables users to manage data from high-density oligonucleotide SNP Chips, expression arrays (both 3'UTR and tiling) and promoter arrays, BeadArrays as well as UHTS data using MIAME-compliant standardized vocabulary. Importantly, researchers can export data in MAGE-TAB format and upload them to the EBI's ArrayExpress certified data repository using a one-step procedure. CONCLUSION: We have vastly extended the capability of the system such that it processes the data output of six types of GeneChips (Affymetrix), two different BeadArrays for mRNA and miRNA (Illumina) and the Genome Analyzer (a popular Ultra-High Throughput DNA Sequencer, Illumina), without compromising its flexibility and user-friendliness. MIMAS, appropriately renamed to Multiomics Information Management and Annotation System, is currently used by scientists working in approximately 50 academic laboratories and genomics platforms in Switzerland and France. MIMAS 3.0 is freely available via http://multiomics.sourceforge.net/.
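For readers unfamiliar with MAGE-TAB, its sample-and-data component (SDRF) is a plain tab-delimited table, which a sketch like the following could emit. The column names follow the MAGE-TAB specification; the sample values, filename and array design accession are invented, and this is not MIMAS's own export code.

```python
# Sketch of the tab-delimited SDRF component of a MAGE-TAB export, the
# format MIMAS 3.0 uses for one-step upload to ArrayExpress. Column
# names follow the MAGE-TAB spec; the values are invented.
import csv

rows = [
    {"Source Name": "sample_1",
     "Characteristics[organism]": "Saccharomyces cerevisiae",
     "Array Design REF": "A-AFFY-47",
     "Derived Array Data File": "sample_1.CEL"},
]

with open("experiment.sdrf.txt", "w", newline="") as fh:
    writer = csv.DictWriter(fh, fieldnames=list(rows[0]), delimiter="\t")
    writer.writeheader()
    writer.writerows(rows)
```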
Abstract:
How can the chemical analysis of counterfeit products contribute to a better understanding of the counterfeiting phenomenon? To address this question, a novel approach was implemented, based on SPME GC-MS analysis of the volatile compounds of thirty-nine perfumed straps found on counterfeit watches. The detection of several dozen compounds per watch made it possible to define discriminating chemical profiles. In total, three groups of watches with comparable profiles were detected. These groups were set against the physical links detected by the Federation of the Swiss Watch Industry (FH) on the basis of strap markings (brands and logos), and against the spatiotemporal information of the seizures. Watches from the same seizure systematically show physical links, but not necessarily similar chemical profiles. It follows that chemical profiles can provide information complementary to the analysis of markings, that they vary little over time, and that chemically linked watches are found throughout the world. This exploratory study thus reveals the potential of complementary analytical techniques for better understanding the modes of production, and even distribution, of counterfeits. Finally, the analytical method detected plastic-related compounds in addition to perfume constituents. This result suggests that the method could be exploited for a wide range of counterfeit products.
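The profile-linking step could, in essence, look like the sketch below: each strap is represented by a vector of relative peak intensities for the detected volatiles, and straps are linked when their profiles are sufficiently similar. The compound vectors, similarity measure and threshold are illustrative assumptions, not the study's actual method.

```python
# Sketch of chemical-profile comparison: each watch strap is a vector
# of relative peak intensities from SPME GC-MS, and straps are linked
# when their profiles are similar enough. Values and threshold are
# illustrative assumptions.
import numpy as np

profiles = {
    "watch_A": np.array([0.42, 0.31, 0.00, 0.27]),
    "watch_B": np.array([0.40, 0.33, 0.02, 0.25]),
    "watch_C": np.array([0.05, 0.10, 0.60, 0.25]),
}

def cosine(u: np.ndarray, v: np.ndarray) -> float:
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

THRESHOLD = 0.95  # hypothetical link threshold
names = list(profiles)
for i, a in enumerate(names):
    for b in names[i + 1:]:
        s = cosine(profiles[a], profiles[b])
        print(f"{a} ~ {b}: {s:.3f}", "-> linked" if s > THRESHOLD else "")
```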
Abstract:
The Internet is increasingly used as a source of information on health issues and is probably a major source of patient empowerment. This process is, however, limited by the frequently poor quality of web-based health information designed for consumers. Better dissemination of information about the criteria that define the quality of website content, and about useful methods for finding the needed information, could be particularly helpful to patients and their relatives. A brief, six-item DISCERN version, characterized by a high specificity for detecting websites with good or very good content quality, was recently developed. This tool could facilitate the identification of high-quality information on the web by patients and may improve the empowerment process initiated by the development of the health-related web.
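In spirit, a six-item instrument of this kind reduces to summing item ratings and applying a cut-off, as in the toy sketch below. The 1-5 scale and the cut-off value are illustrative assumptions, not the published brief DISCERN scoring rules.

```python
# Toy scoring sketch for a brief six-item quality instrument such as
# the short DISCERN version described in the abstract. The 1-5 scale
# and cut-off are illustrative assumptions, not the published rules.
def passes_quality_screen(ratings: list[int], cutoff: int = 16) -> bool:
    """True when the summed item ratings reach the (hypothetical) cut-off."""
    assert len(ratings) == 6 and all(1 <= r <= 5 for r in ratings)
    return sum(ratings) >= cutoff

print(passes_quality_screen([4, 4, 3, 3, 4, 2]))  # True: 20 >= 16
print(passes_quality_screen([2, 1, 2, 3, 2, 1]))  # False: 11 < 16
```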
Abstract:
Since its creation, the Internet has permeated our daily life. The web is omnipresent for communication, research and organization, and this use has driven the Internet's rapid development. Nowadays, the Internet is the largest repository of resources: information databases such as Wikipedia, Dmoz and the open data available on the net represent great informational potential for humankind. Easy and free web access is one of the major features characterizing Internet culture. Ten years ago, the web was completely dominated by English. Today, the web community is no longer only English speaking; it is becoming a genuinely multilingual community. The availability of content is intertwined with the availability of logical organizations (ontologies), for which multilinguality plays a fundamental role. In this work we introduce a very high-level logical organization fully based on semiotic assumptions. We thus present the theoretical foundations as well as the ontology itself, named Linguistic Meta-Model. The most important feature of the Linguistic Meta-Model is its ability to support the representation of different knowledge sources developed according to different underlying semiotic theories. This is possible because most knowledge representation schemata, either formal or informal, can be put into the context of the so-called semiotic triangle. In order to show the main characteristics of the Linguistic Meta-Model from a practical point of view, we developed VIKI (Virtual Intelligence for Knowledge Induction). VIKI is a work-in-progress system aiming to exploit the Linguistic Meta-Model structure for knowledge expansion. It is a modular system in which each module accomplishes a natural language processing task, from terminology extraction to knowledge retrieval. VIKI is a supporting system for the Linguistic Meta-Model, and its main task is to provide some empirical evidence regarding the use of the Linguistic Meta-Model, without claiming to be exhaustive.
Abstract:
The use of self-calibrating techniques in parallel magnetic resonance imaging eliminates the need for coil sensitivity calibration scans and avoids potential mismatches between calibration scans and subsequent accelerated acquisitions (e.g., as a result of patient motion). Most examples of self-calibrating Cartesian parallel imaging techniques have required the use of modified k-space trajectories that are densely sampled at the center and more sparsely sampled in the periphery. However, spiral and radial trajectories offer inherent self-calibrating characteristics because of their densely sampled center. At no additional cost in acquisition time and with no modification in scanning protocols, in vivo coil sensitivity maps may be extracted from the densely sampled central region of k-space. This work demonstrates the feasibility of self-calibrated spiral and radial parallel imaging using a previously described iterative non-Cartesian sensitivity encoding algorithm.
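The self-calibration idea can be sketched in a few lines: keep only the densely sampled centre of k-space from each coil, inverse-FFT to get low-resolution coil images, and normalise by the root-sum-of-squares image to obtain sensitivity estimates. The sketch below assumes Cartesian-gridded data (the paper applies the idea to spiral/radial trajectories with an iterative non-Cartesian SENSE reconstruction); grid size, window and data are invented.

```python
# Sketch of self-calibrated sensitivity estimation: window the densely
# sampled k-space centre, inverse-FFT per coil, and normalise by the
# root-sum-of-squares image. Assumes Cartesian-gridded data; sizes and
# data are illustrative.
import numpy as np

def sensitivity_maps(kspace: np.ndarray, calib: int = 32) -> np.ndarray:
    """kspace: (ncoils, ny, nx) complex Cartesian-gridded data."""
    ncoils, ny, nx = kspace.shape
    mask = np.zeros((ny, nx))
    mask[ny//2 - calib//2:ny//2 + calib//2,
         nx//2 - calib//2:nx//2 + calib//2] = 1.0  # keep k-space centre
    low_res = np.fft.fftshift(
        np.fft.ifft2(np.fft.ifftshift(kspace * mask, axes=(-2, -1))),
        axes=(-2, -1))
    rss = np.sqrt((np.abs(low_res) ** 2).sum(axis=0)) + 1e-12
    return low_res / rss  # per-coil sensitivity estimates

maps = sensitivity_maps(np.random.randn(8, 128, 128)
                        + 1j * np.random.randn(8, 128, 128))
print(maps.shape)  # (8, 128, 128)
```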
Abstract:
Little is known about how human amnesia affects the activation of cortical networks during memory processing. In this study, we recorded high-density evoked potentials in 12 healthy control subjects and 11 amnesic patients with various types of brain damage affecting the medial temporal lobes, diencephalic structures, or both. Subjects performed a continuous recognition task composed of meaningful designs. Using whole-scalp spatiotemporal mapping techniques, we found that, during the first 200 ms following picture presentation, the map configurations of amnesics and controls were indistinguishable. Beyond this period, processing significantly differed. Between 200 and 350 ms, amnesic patients expressed different topographical maps than controls in response to new and repeated pictures. From 350 to 550 ms, healthy subjects showed modulation of the same maps in response to new and repeated items. In amnesics, by contrast, presentation of repeated items induced different maps, indicating distinct cortical processing of new and old information. The study indicates that cortical mechanisms underlying memory formation and re-activation in amnesia fundamentally differ from normal memory processing.
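A standard quantity behind such topographic comparisons is global map dissimilarity (GMD): the distance between two scalp maps after average-referencing and strength (GFP) normalisation, so that only topography, not amplitude, is compared. The sketch below is a generic implementation of that measure, with an invented electrode count and data, not the study's analysis pipeline.

```python
# Sketch of global map dissimilarity (GMD) between two scalp potential
# maps: average-reference each map, divide by its global field power,
# then take the root-mean-square difference. Data are invented.
import numpy as np

def gmd(u: np.ndarray, v: np.ndarray) -> float:
    """Global map dissimilarity between two scalp maps (0 = identical)."""
    def norm(m):
        m = m - m.mean()                      # average reference
        return m / np.sqrt((m ** 2).mean())   # divide by GFP
    a, b = norm(u), norm(v)
    return float(np.sqrt(((a - b) ** 2).mean()))

rng = np.random.default_rng(0)
map_new, map_repeated = rng.standard_normal(64), rng.standard_normal(64)
print(f"GMD = {gmd(map_new, map_repeated):.3f}")
```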