902 results for Information storage and retrieval systems -- Geography


Relevance:

100.00%

Publisher:

Abstract:

The broad aim of biomedical science in the postgenomic era is to link genomic and phenotype information, allowing a deeper understanding of the processes leading from genomic changes to altered phenotype and disease. The EuroPhenome project (http://www.EuroPhenome.org) is a comprehensive resource for raw and annotated high-throughput phenotyping data arising from projects such as EUMODIC. EUMODIC is gathering data from the EMPReSSslim pipeline (http://www.empress.har.mrc.ac.uk/), which is performed on inbred mouse strains and knock-out lines arising from the EUCOMM project. The EuroPhenome interface allows users to access the data via phenotype or genotype, and in a variety of ways, including graphical display, statistical analysis and access to the raw data via web services. The raw phenotyping data captured in EuroPhenome are processed by an annotation pipeline that automatically identifies mutants that differ statistically from the appropriate baseline and assigns ontology terms for the specific test. Mutant phenotypes can be quickly identified using two EuroPhenome tools: PhenoMap, a graphical representation of statistically relevant phenotypes, and mining for a mutant using ontology terms. To assist with data definition and cross-database comparisons, phenotype data are annotated using combinations of terms from biological ontologies.
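The core step of such an annotation pipeline can be sketched as follows. This is an illustrative example only, not the actual EuroPhenome code: the data, the ontology term, and the fixed threshold are all hypothetical, and the real pipeline's statistics are more elaborate.

```python
# Sketch: flag a mutant parameter as statistically different from the
# wild-type baseline using Welch's t statistic, then attach an ontology term.
from statistics import mean, variance

def welch_t(sample_a, sample_b):
    """Welch's t statistic for two independent samples."""
    na, nb = len(sample_a), len(sample_b)
    va, vb = variance(sample_a), variance(sample_b)
    return (mean(sample_a) - mean(sample_b)) / ((va / na + vb / nb) ** 0.5)

def annotate(mutant, baseline, term, threshold=3.0):
    """Return the ontology term if the mutant deviates strongly, else None."""
    return term if abs(welch_t(mutant, baseline)) > threshold else None

# Hypothetical body-weight measurements (grams) for one knock-out line.
baseline = [24.1, 25.0, 23.8, 24.6, 24.9, 25.2]
mutant = [29.5, 30.1, 28.8, 29.9, 30.4, 29.2]
print(annotate(mutant, baseline, "MP:0001260 increased body weight"))
```

In practice the comparison would use a proper p-value and multiple-testing correction rather than a raw threshold on the statistic.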

Relevance:

100.00%

Publisher:

Abstract:

The main goal of CleanEx is to provide access to public gene expression data via unique gene names. A second objective is to represent heterogeneous expression data produced by different technologies in a way that facilitates joint analysis and cross-data set comparisons. A consistent and up-to-date gene nomenclature is achieved by associating each single experiment with a permanent target identifier consisting of a physical description of the targeted RNA population or the hybridization reagent used. These targets are then mapped at regular intervals to the growing and evolving catalogues of human genes and genes from model organisms. The completely automatic mapping procedure relies partly on external genome information resources such as UniGene and RefSeq. The central part of CleanEx is a weekly built gene index containing cross-references to all public expression data already incorporated into the system. In addition, the expression target database of CleanEx provides gene mapping and quality control information for various types of experimental resources, such as cDNA clones or Affymetrix probe sets. The web-based query interfaces offer access to individual entries via text string searches or quantitative expression criteria. CleanEx is accessible at: http://www.cleanex.isb-sib.ch/.
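The gene-index idea described above can be illustrated with a minimal sketch. All identifiers and data here are hypothetical; CleanEx's real mapping step uses the current UniGene and RefSeq releases rather than a hard-coded table.

```python
# Sketch: a periodic job re-maps permanent target identifiers to current
# gene symbols, then aggregates cross-references into a gene-centric index.

target_to_gene = {              # hypothetical result of the mapping step
    "AFFX:1007_s_at": "DDR1",
    "cDNA:IMAGE:343072": "DDR1",
    "AFFX:1053_at": "RFC2",
}

experiments = [                 # (experiment id, permanent target identifier)
    ("GSE0001", "AFFX:1007_s_at"),
    ("GSE0002", "cDNA:IMAGE:343072"),
    ("GSE0003", "AFFX:1053_at"),
]

gene_index = {}                 # gene symbol -> experiments measuring it
for exp_id, target in experiments:
    gene = target_to_gene.get(target)
    if gene:
        gene_index.setdefault(gene, []).append(exp_id)

print(gene_index["DDR1"])
```

Because the target identifiers are permanent, only `target_to_gene` needs rebuilding when the gene catalogues change; the experiment records themselves are untouched.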

Relevance:

100.00%

Publisher:

Abstract:

The M-Coffee server is a web server that makes it possible to compute multiple sequence alignments (MSAs) by running several MSA methods and combining their output into one single model. This allows the user to simultaneously run all their methods of choice without having to arbitrarily choose one of them. The MSA is delivered along with a local estimation of its consistency with the individual MSAs it was derived from. The computation of the consensus multiple alignment is carried out using a special mode of the T-Coffee package [Notredame, Higgins and Heringa (T-Coffee: a novel method for fast and accurate multiple sequence alignment. J. Mol. Biol. 2000; 302: 205-217); Wallace, O'Sullivan, Higgins and Notredame (M-Coffee: combining multiple sequence alignment methods with T-Coffee. Nucleic Acids Res. 2006; 34: 1692-1699)]. Given a set of sequences (DNA or proteins) in FASTA format, M-Coffee delivers a multiple alignment in the most common formats. M-Coffee is a freeware open source package distributed under a GPL license, available either as a standalone package or as a web service from www.tcoffee.org.
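The consistency idea behind combining several MSAs can be sketched in a toy form. This is not the actual T-Coffee algorithm, and the two input alignments are invented; the point is only that residue pairings supported by more methods receive more weight in the consensus.

```python
# Toy sketch: count, for each pair of aligned residue positions, how many
# input MSAs agree on that pairing (pairwise case, first two sequences).
from collections import Counter

def aligned_pairs(msa):
    """Residue-index pairs aligned between seq 0 and seq 1 in one MSA."""
    pairs, i, j = set(), -1, -1
    for a, b in zip(msa[0], msa[1]):
        if a != "-":
            i += 1
        if b != "-":
            j += 1
        if a != "-" and b != "-":
            pairs.add((i, j))
    return pairs

# Two hypothetical alignments of the same two sequences by different methods.
msa_method_1 = ["GATT-ACA", "GA-TTACA"]
msa_method_2 = ["GATTACA", "GATTACA"]

support = Counter()
for msa in (msa_method_1, msa_method_2):
    support.update(aligned_pairs(msa))

# Pairs supported by both methods score 2; method-specific pairs score 1.
print(support[(0, 0)], support[(3, 2)])
```

A consensus aligner would then prefer columns whose pairings carry high support, which is the intuition behind the "local estimation of consistency" reported alongside the M-Coffee result.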

Relevance:

100.00%

Publisher:

Abstract:

A bibliographic watch on biological monitoring of exposure to chemicals in the workplace (SBEPC MT) has been in place since 2005. It was set up by a multidisciplinary French-speaking network composed of INRS (France), IRSST (Québec) and UCL (Belgium). This article reviews the information collected and analyzed from 2009 to 2012 through 435 selected articles. Several topics of current interest are examined in more depth, notably pesticides, aromatic hydrocarbons, benzene, manganese, biological variability, skin measurements and surface wipes, measurements in exhaled air, and mass spectrometry.

Relevance:

100.00%

Publisher:

Abstract:

In this first of four parts, a multidisciplinary French-speaking network presents the main results of a bibliographic watch on biological monitoring of exposure to chemicals in the workplace.

Relevance:

100.00%

Publisher:

Abstract:

This article is the second in a four-part series on the results of a bibliographic watch on biological monitoring of exposure to chemicals in the workplace (SBEPC MT). While the previous part presented the objectives and organization of the watch, this part and part 3 give an overview of the database according to the keywords used to index the analyzed articles.

Relevance:

100.00%

Publisher:

Abstract:

This is the fourth and final part of the results of a bibliographic watch on biological monitoring of exposure to chemicals in the workplace (SBEPCMT), set up by a multidisciplinary French-speaking network.

Relevance:

100.00%

Publisher:

Abstract:

Optimized bibliographic search filters aim to facilitate the retrieval of information from bibliographic databases, which are almost always the most abundant source of scientific evidence. They help support evidence-based decision making. Most filters available in the literature are methodological filters. To reach their full potential, however, they must be combined with filters that retrieve studies covering a particular subject. In the field of patient safety, it has been shown that deficient information retrieval can have tragic consequences. Optimized search filters covering this field could therefore prove very useful. The present study aims to propose optimized bibliographic search filters for the patient safety field, to evaluate their validity, and to propose a guide for developing search filters. We propose optimized filters for retrieving articles on patient safety in healthcare organizations in the Medline, Embase and CINAHL databases. These filters perform very well and are specifically built for articles whose content is explicitly linked to the patient safety field by their authors. The extent to which their use can be generalized to other contexts depends on how the boundaries of the patient safety field are defined.
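Filter validity of the kind evaluated above is conventionally measured against a hand-built gold standard of relevant records. A minimal sketch, with entirely hypothetical record identifiers:

```python
# Sketch: sensitivity (recall) and precision of a search filter
# against a gold standard of known-relevant records.
gold_relevant = {"pmid:101", "pmid:102", "pmid:103", "pmid:104"}
retrieved     = {"pmid:101", "pmid:102", "pmid:105"}

tp = len(gold_relevant & retrieved)            # true positives
sensitivity = tp / len(gold_relevant)          # share of relevant records found
precision   = tp / len(retrieved)              # share of retrieved records relevant

print(f"sensitivity={sensitivity:.2f} precision={precision:.2f}")
```

A methodological filter tunes this trade-off one way (high sensitivity for systematic reviews, high precision for clinical use); combining it with a subject filter restricts the retrieved set before these metrics are computed.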

Relevance:

100.00%

Publisher:

Abstract:

Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)

Relevance:

100.00%

Publisher:

Abstract:

The biomedical literature is extensively catalogued and indexed in MEDLINE. MEDLINE indexing is done by trained human indexers, who identify the most important concepts in each article; the process is expensive and inconsistent. Automating the indexing task is difficult: the National Library of Medicine produces the Medical Text Indexer (MTI), which suggests potential indexing terms to the indexers, but MTI's output is not reliable enough to be used unattended. In my thesis, I propose a different approach to the indexing task called MEDRank. MEDRank creates graphs representing the concepts in biomedical articles and their relationships within the text, and applies graph-based ranking algorithms to identify the most important concepts in each article. I evaluate the performance of several automated indexing solutions, including my own, by comparing their output to the indexing terms selected by the human indexers. MEDRank outperformed all other evaluated indexing solutions, including MTI, in general indexing performance and precision. MEDRank can be used to cluster documents, index any kind of biomedical text with standard vocabularies, or could become part of MTI itself.
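The graph-based ranking idea can be sketched as follows. This is not MEDRank's actual code: the concept graph is invented, and a plain PageRank iteration stands in for whatever ranking variant the thesis uses.

```python
# Sketch: concepts become nodes, within-text relationships become directed
# edges, and a PageRank-style iteration surfaces the most central concepts.
def pagerank(edges, damping=0.85, iters=50):
    nodes = {n for e in edges for n in e}
    out = {n: [b for a, b in edges if a == n] for n in nodes}
    rank = {n: 1.0 / len(nodes) for n in nodes}
    for _ in range(iters):
        new = {n: (1 - damping) / len(nodes) for n in nodes}
        for n in nodes:
            targets = out[n] or list(nodes)   # dangling node: spread evenly
            for t in targets:
                new[t] += damping * rank[n] / len(targets)
        rank = new
    return rank

# Hypothetical concept graph extracted from one abstract.
edges = [("aspirin", "platelet aggregation"),
         ("aspirin", "myocardial infarction"),
         ("platelet aggregation", "myocardial infarction")]
rank = pagerank(edges)
print(max(rank, key=rank.get))
```

The top-ranked concepts would then be proposed as indexing terms, to be compared against the terms the human indexers actually chose.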

Relevance:

100.00%

Publisher:

Abstract:

Information overload is a significant problem for modern medicine. Searching MEDLINE for common topics often retrieves more relevant documents than users can review. Therefore, we must identify documents that are not only relevant, but also important. Our system ranks articles using citation counts and the PageRank algorithm, incorporating data from the Science Citation Index. However, citation data is usually incomplete. Therefore, we explore the relationship between the quantity of citation information available to the system and the quality of the result ranking. Specifically, we test the ability of citation count and PageRank to identify "important articles" as defined by experts from large result sets with decreasing citation information. We found that PageRank performs better than simple citation counts, but both algorithms are surprisingly robust to information loss. We conclude that even an incomplete citation database is likely to be effective for importance ranking.
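The robustness claim above can be illustrated with a deterministic toy version of the experiment. The citation data here are entirely hypothetical, and simple citation counting stands in for the full PageRank setup.

```python
# Sketch: an incomplete citation database sees only some citing articles,
# yet the ordering of heavily- vs lightly-cited articles often survives.

# Full (hypothetical) citation lists: article -> citing articles.
full = {
    "art1": [f"c{i}" for i in range(40)],
    "art2": [f"c{i}" for i in range(40, 52)],
    "art3": [f"c{i}" for i in range(52, 80)],
}
# Incomplete database: only even-numbered citing articles were indexed.
known = {c for lst in full.values() for c in lst if int(c[1:]) % 2 == 0}
partial = {a: [c for c in lst if c in known] for a, lst in full.items()}

rank_full = sorted(full, key=lambda a: len(full[a]), reverse=True)
rank_part = sorted(partial, key=lambda a: len(partial[a]), reverse=True)
print(rank_full == rank_part, rank_full)
```

As long as the missing citations are spread roughly evenly across articles, the relative ordering is preserved, which matches the paper's finding that importance ranking tolerates substantial information loss.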


Relevance:

100.00%

Publisher:

Abstract:

This paper deals with the classification of news items in ePaper, a prototype system of a future personalized newspaper service on a mobile reading device. The ePaper system aggregates news items from various news providers and delivers to each subscribed user (reader) a personalized electronic newspaper, utilizing content-based and collaborative filtering methods. The ePaper can also provide users "standard" (i.e., not personalized) editions of selected newspapers, as well as browsing capabilities in the repository of news items. This paper concentrates on the automatic classification of incoming news using a hierarchical news ontology. Based on this classification on one hand, and on the users' profiles on the other hand, the personalization engine of the system is able to deliver a personalized paper to each user's mobile reading device.
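Classification against a hierarchical ontology can be sketched in a toy form. The ontology, keywords, and matching rule here are invented for illustration; the real ePaper classifier is considerably more elaborate.

```python
# Toy sketch: match a news item's text against per-category keywords and
# report the best category together with its ancestor path in the ontology.
ontology_parent = {            # child category -> parent category
    "football": "sports",
    "tennis": "sports",
    "sports": "news",
    "elections": "politics",
    "politics": "news",
}
keywords = {                   # leaf category -> indicative terms
    "football": {"goal", "striker", "league"},
    "tennis": {"racket", "serve"},
    "elections": {"ballot", "vote"},
}

def classify(text):
    tokens = set(text.lower().split())
    best = max(keywords, key=lambda c: len(keywords[c] & tokens))
    path = [best]
    while path[-1] in ontology_parent:     # walk up to the root
        path.append(ontology_parent[path[-1]])
    return path

print(classify("Late goal sends the league leaders top"))
```

Returning the full ancestor path is what makes the hierarchy useful for personalization: a user profile expressed at the "sports" level still matches items classified into "football".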

Relevance:

100.00%

Publisher:

Abstract:

A proposal and evaluation of the constructivist "portfolio" model, used as a teaching-learning methodology in two of the courses that students of the Escuela de Bibliotecología, Documentación e Información find most difficult: Almacenamiento y Recuperación de Información 1 (Information Storage and Retrieval 1) and Metodología de la Investigación (Research Methodology). This methodology was implemented in response to the need for a process that allows students to assimilate course content in line with their own experiences and conceptions. The methodology was evaluated to determine its effectiveness in the teaching-learning process; this evaluation was carried out during and at the end of the semester, through a questionnaire, an evaluation workshop, and a review of the portfolios prepared by the students. This paper presents the proposal and the evaluation of the results as a successful experience that was nonetheless affected by negative aspects, which can be improved.