44 results for Information storage and retrieval systems
Abstract:
We propose a method for brain atlas deformation in the presence of large space-occupying tumors, based on an a priori model of lesion growth that assumes radial expansion of the lesion from its starting point. Our approach involves three steps. First, an affine registration brings the atlas and the patient into global correspondence. Then, the seeding of a synthetic tumor into the brain atlas provides a template for the lesion. The last step is the deformation of the seeded atlas, combining a method derived from optical flow principles and a model of lesion growth. Results show that a good registration is performed and that the method can be applied to automatic segmentation of structures and substructures in brains with gross deformation, with important medical applications in neurosurgery, radiosurgery, and radiotherapy.
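A minimal sketch may help make the three-step pipeline concrete. The function names, the 2D nearest-neighbour resampling and the demons-style flow update below are illustrative assumptions standing in for the paper's affine registration and optical-flow-based deformation, not the authors' implementation:

```python
import numpy as np

def affine_register(atlas, A, t):
    """Resample a 2D atlas image under the affine map x -> A @ x + t
    (nearest-neighbour, zero padding). Hypothetical helper for step 1."""
    h, w = atlas.shape
    out = np.zeros_like(atlas)
    inv = np.linalg.inv(A)
    for y in range(h):
        for x in range(w):
            src = inv @ (np.array([x, y], dtype=float) - t)
            sx, sy = int(round(src[0])), int(round(src[1]))
            if 0 <= sx < w and 0 <= sy < h:
                out[y, x] = atlas[sy, sx]
    return out

def demons_step(fixed, moving, eps=1e-6):
    """One optical-flow (demons-like) update: a displacement along the
    fixed-image gradient, scaled by the local intensity mismatch.
    Stands in for step 3; the real method also couples a lesion-growth model."""
    gy, gx = np.gradient(fixed)
    diff = fixed - moving
    denom = gx ** 2 + gy ** 2 + diff ** 2 + eps
    return diff * gx / denom, diff * gy / denom
```

With the identity affine the atlas is returned unchanged, and the flow update vanishes wherever the atlas and patient images already agree.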
Abstract:
The article presents the steps involved in setting up a thematic bibliographic watch (or scientific watch) carried out jointly since 2005 by four French-speaking occupational health institutions: INRS (France), IRSST (Québec), IST (Switzerland) and UCL (Belgium). The topic followed is the biological monitoring of exposure to chemicals in the workplace. The data collected and formatted by the documentalists are used by researchers specializing in the subject not only to follow new developments in the field, but also to document courses and to update biological monitoring guides. The different steps of the project's methodological approach are described: the choice of the databases to query and the development of the search strategy; the setting up of a task-sharing procedure for all the steps of the watch process repeated at each update (querying, creation of databases with the Reference Manager software, formatting and indexing of references, creation and provision to the partners of databases consolidated over time with all the articles analysed); the administrative, human and technical means of file exchange; and trials to extend the watch to the monitoring of selected Web pages. A quantitative assessment of the six years of the watch is also given. The information collected and analysed during the last two years by the project partners will be the subject of a second article focusing on the main trends of the chosen topic.
Abstract:
The Complete Arabidopsis Transcriptome Micro Array (CATMA) database contains gene sequence tag (GST) and gene model sequences for over 70% of the predicted genes in the Arabidopsis thaliana genome as well as primer sequences for GST amplification and a wide range of supplementary information. All CATMA GST sequences are specific to the gene for which they were designed, and all gene models were predicted from a complete reannotation of the genome using uniform parameters. The database is searchable by sequence name, sequence homology or direct SQL query, and is available through the CATMA website at http://www.catma.org/.
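As an illustration of the "direct SQL query" access route mentioned above, the toy schema below is an assumption for demonstration only; CATMA's actual table and column names, and the primer sequences shown, are not given in the abstract:

```python
import sqlite3

# Toy GST table; the schema, identifiers and primer sequences are all
# illustrative assumptions, not CATMA's actual database layout.
con = sqlite3.connect(":memory:")
con.execute(
    "CREATE TABLE gst (gst_id TEXT, gene TEXT, primer_fw TEXT, primer_rv TEXT)"
)
con.executemany(
    "INSERT INTO gst VALUES (?, ?, ?, ?)",
    [
        ("CATMA1a00010", "AT1G01010", "ACGTACGT", "TGCATGCA"),
        ("CATMA1a00020", "AT1G01020", "GGCCGGCC", "CCGGCCGG"),
    ],
)
# Look up the tag and forward primer designed for one gene model.
rows = con.execute(
    "SELECT gst_id, primer_fw FROM gst WHERE gene = ?", ("AT1G01010",)
).fetchall()
```

The parameterized `?` placeholder keeps the query safe to build from user-supplied gene names.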
Abstract:
The first two parts of this article, published previously, presented the methodology and the first elements of the assessment, covering the period from 2009 to 2012, of the bibliographic watch on the biological monitoring of exposure to chemicals in the workplace (SBEPC MT) set up by a multidisciplinary French-speaking network.
Abstract:
PURPOSE: To improve the tag persistence throughout the whole cardiac cycle by providing a constant tag-contrast throughout all the cardiac phases when using balanced steady-state free precession (bSSFP) imaging. MATERIALS AND METHODS: The flip angles of the imaging radiofrequency pulses were optimized to compensate for the tagging contrast-to-noise ratio (Tag-CNR) fading at later cardiac phases in bSSFP imaging. Complementary spatial modulation of magnetization (CSPAMM) tagging was implemented to improve the Tag-CNR. Numerical simulations were performed to examine the behavior of the Tag-CNR with the proposed method, and to compare the resulting Tag-CNR with that obtained from the more commonly used spoiled gradient echo (SPGR) imaging. A gel phantom and five healthy human volunteers were scanned on a 1.5T scanner using bSSFP imaging with and without the proposed technique. The phantom was also scanned with SPGR imaging. RESULTS: With the proposed technique, the Tag-CNR remained almost constant during the whole cardiac cycle. Using bSSFP imaging, the Tag-CNR was about double that of SPGR. CONCLUSION: The tag persistence was significantly improved when the proposed method was applied, with better Tag-CNR during the diastolic cardiac phase. The improved Tag-CNR will support automated tagging analysis and quantification methods.
Abstract:
The broad aim of biomedical science in the postgenomic era is to link genomic and phenotype information to allow a deeper understanding of the processes leading from genomic changes to altered phenotype and disease. The EuroPhenome project (http://www.EuroPhenome.org) is a comprehensive resource for raw and annotated high-throughput phenotyping data arising from projects such as EUMODIC. EUMODIC is gathering data from the EMPReSSslim pipeline (http://www.empress.har.mrc.ac.uk/), which is performed on inbred mouse strains and knock-out lines arising from the EUCOMM project. The EuroPhenome interface allows the user to access the data via the phenotype or genotype. It also allows the user to access the data in a variety of ways, including graphical display, statistical analysis and access to the raw data via web services. The raw phenotyping data captured in EuroPhenome are annotated by an annotation pipeline which automatically identifies statistically different mutants from the appropriate baseline and assigns ontology terms for that specific test. Mutant phenotypes can be quickly identified using two EuroPhenome tools: PhenoMap, a graphical representation of statistically relevant phenotypes, and an ontology-term search for mutants. To assist with data definition and cross-database comparisons, phenotype data are annotated using combinations of terms from biological ontologies.
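The pipeline's core idea of identifying statistically different mutants from the appropriate baseline can be sketched as a simple deviation test; the function name and the z-score criterion below are illustrative assumptions, not EuroPhenome's actual statistics:

```python
from statistics import mean, stdev

def flag_mutant(baseline, mutant, z_threshold=3.0):
    """Flag a mutant line whose phenotype measurements deviate from the
    wild-type baseline by more than z_threshold baseline standard
    deviations. A simplified stand-in for the annotation pipeline's test.

    baseline, mutant: lists of numeric phenotype measurements.
    Returns (flagged, z_score)."""
    mu, sd = mean(baseline), stdev(baseline)
    z = (mean(mutant) - mu) / sd
    return abs(z) > z_threshold, z
```

A flagged line would then be assigned the ontology term associated with that specific test.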
Abstract:
The main goal of CleanEx is to provide access to public gene expression data via unique gene names. A second objective is to represent heterogeneous expression data produced by different technologies in a way that facilitates joint analysis and cross-data set comparisons. A consistent and up-to-date gene nomenclature is achieved by associating each single experiment with a permanent target identifier consisting of a physical description of the targeted RNA population or the hybridization reagent used. These targets are then mapped at regular intervals to the growing and evolving catalogues of human genes and genes from model organisms. The completely automatic mapping procedure relies partly on external genome information resources such as UniGene and RefSeq. The central part of CleanEx is a weekly built gene index containing cross-references to all public expression data already incorporated into the system. In addition, the expression target database of CleanEx provides gene mapping and quality control information for various types of experimental resource, such as cDNA clones or Affymetrix probe sets. The web-based query interfaces offer access to individual entries via text string searches or quantitative expression criteria. CleanEx is accessible at: http://www.cleanex.isb-sib.ch/.
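The mapping procedure can be sketched in miniature: each target carries a permanent physical description, and an external catalogue (here a plain dictionary standing in for UniGene/RefSeq-derived data) maps those descriptions to current gene names. All identifiers below are made up for illustration:

```python
# Permanent target identifiers -> physical description of the hybridization
# reagent (e.g. a cDNA clone or probe set); illustrative values only.
targets = {
    "T1": "IMAGE:12345",
    "T2": "AFFX:1007_s_at",
    "T3": "IMAGE:67890",
}

# External catalogue mapping physical descriptions to official gene names,
# standing in for resources such as UniGene and RefSeq.
catalogue = {
    "IMAGE:12345": "TP53",
    "AFFX:1007_s_at": "DDR1",
    "IMAGE:67890": "TP53",
}

# Build the gene index: gene name -> all experiment targets that map to it.
gene_index = {}
for tid, desc in targets.items():
    gene = catalogue.get(desc)
    if gene:
        gene_index.setdefault(gene, []).append(tid)
```

Rebuilding the index at regular intervals against a refreshed catalogue is what keeps the gene nomenclature up to date while the target identifiers stay permanent.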
Abstract:
The M-Coffee server is a web server that makes it possible to compute multiple sequence alignments (MSAs) by running several MSA methods and combining their output into one single model. This allows the user to simultaneously run all of the chosen methods without having to arbitrarily pick one of them. The MSA is delivered along with a local estimation of its consistency with the individual MSAs it was derived from. The computation of the consensus multiple alignment is carried out using a special mode of the T-Coffee package [Notredame, Higgins and Heringa (T-Coffee: a novel method for fast and accurate multiple sequence alignment. J. Mol. Biol. 2000; 302: 205-217); Wallace, O'Sullivan, Higgins and Notredame (M-Coffee: combining multiple sequence alignment methods with T-Coffee. Nucleic Acids Res. 2006; 34: 1692-1699)]. Given a set of sequences (DNA or proteins) in FASTA format, M-Coffee delivers a multiple alignment in the most common formats. M-Coffee is a freeware open-source package distributed under a GPL license, and it is available either as a standalone package or as a web service from www.tcoffee.org.
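The consistency estimate delivered alongside the consensus can be illustrated with a much-simplified pairwise measure. The sketch below counts residue pairs aligned in the same column and measures how often one MSA's pairs are supported by the others; it is only in the spirit of T-Coffee's consistency scoring, not its actual algorithm:

```python
def aligned_pairs(msa):
    """Residue pairs placed in the same column of an MSA (gaps '-' skipped).
    msa: list of equal-length gapped strings, one per sequence."""
    pos = []
    for row in msa:
        p, cols = -1, []
        for c in row:
            if c != "-":
                p += 1
                cols.append(p)
            else:
                cols.append(None)
        pos.append(cols)
    pairs = set()
    for col in range(len(msa[0])):
        residues = [(i, pos[i][col]) for i in range(len(msa))
                    if pos[i][col] is not None]
        for a in range(len(residues)):
            for b in range(a + 1, len(residues)):
                pairs.add((residues[a], residues[b]))
    return pairs

def consistency(msa, others):
    """Average fraction of msa's aligned pairs supported by each other MSA."""
    ref = aligned_pairs(msa)
    if not ref or not others:
        return 0.0
    return sum(len(ref & aligned_pairs(o)) / len(ref) for o in others) / len(others)
```

A score of 1.0 means every aligned pair in the candidate MSA is reproduced by all the individual MSAs it is compared against.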
Abstract:
A bibliographic watch on the biological monitoring of exposure to chemicals in the workplace (SBEPC MT) has been organized since 2005. It was set up by a multidisciplinary French-speaking network composed of INRS (France), IRSST (Québec) and UCL (Belgium). This article reviews the information collected and analysed from 2009 to 2012 through 435 selected articles. Several topics of interest or current relevance are analysed in greater depth, notably pesticides, aromatic hydrocarbons, benzene, manganese, biological variability, skin measurements and surface wipes, measurements in exhaled air, and mass spectrometry.
Abstract:
In this first of four parts, a multidisciplinary French-speaking network presents the main results of a bibliographic watch on the biological monitoring of exposure to chemicals in the workplace.
Abstract:
This article is the second part of a four-part series devoted to the results of a bibliographic watch on the biological monitoring of exposure to chemicals in the workplace (SBEPC MT). Whereas the previous part presented the objectives and organization of the watch, this part and part 3 present an overview of the database based on the indexing of the analysed articles by different keywords.
Abstract:
Here is the fourth and final part of the results of a bibliographic watch on the biological monitoring of exposure to chemicals in the workplace (SBEPCMT) set up by a multidisciplinary French-speaking network.
Abstract:
In this review, intratumoral drug disposition will be integrated into the wide range of resistance mechanisms to anticancer agents, with particular emphasis on targeted protein kinase inhibitors. Six rules will be established: 1. There is a high variability of extracellular/intracellular drug level ratios; 2. There are three main systems involved in intratumoral drug disposition, composed of SLC, ABC and XME enzymes; 3. There is a synergistic interplay between these three systems; 4. In cancer subclones, there is a strong genomic instability that leads to a highly variable expression of SLC, ABC or XME enzymes; 5. Tumor-expressed metabolizing enzymes play a role in tumor-specific ADME and cell survival; and 6. These three systems are involved in the appearance of resistance (a transient event) or in the resistance itself. In addition, this article will investigate whether the overexpression of some ABC and XME systems in cancer cells is just a random consequence of DNA/chromosomal instability, hypo- or hypermethylation and microRNA deregulation, or a more organized modification induced by transposable elements. Experiments will also have to establish whether these tumor-expressed enzymes participate in cell metabolism or in tumor-specific ADME, or whether they are only markers of clonal evolution and genomic deregulation. Finally, the review underlines that the fate of anticancer agents in cancer cells should be more thoroughly investigated, from drug discovery to clinical studies. Indeed, inhibiting tumor-expressed metabolizing enzymes could strongly increase drug disposition specifically in the target cells, resulting in more efficient therapies.
Abstract:
Thanks to decades of research, gait analysis has become an efficient tool. However, mainly because of the price of motion capture systems, standard gait laboratories can measure only a few consecutive steps of ground walking. Recently, wearable systems were proposed to measure human motion without volume limitation. Although accurate, these systems are incompatible with most existing calibration procedures, and several years of research will be necessary for their validation. A new approach, consisting of using a stationary system with a small capture volume for the calibration procedure and then measuring gait with a wearable system, could be very advantageous. It could benefit from the knowledge related to stationary systems, allow long-distance monitoring and provide new descriptive parameters. The aim of this study was to demonstrate the potential of this approach. Thus, a combined system was proposed to measure the 3D lower body joint angles and segmental angular velocities. It was then assessed in terms of reliability with respect to the calibration procedure, repeatability and concurrent validity. The dispersion of the joint angles across calibrations was comparable to that of stationary systems, and good reliability was obtained for the angular velocities. The repeatability results confirmed that the mean cycle kinematics of long-distance walks could be used for comparing subjects, and highlighted the interest of the variability between cycles. Finally, kinematic differences were observed between participants with different ankle conditions. In conclusion, this study demonstrated the potential of a mixed approach for human movement analysis.
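The kinematic quantities mentioned, joint angles and segmental angular velocities, can be illustrated with a much-simplified planar computation; the 2D formulation and function names are assumptions for illustration, not the study's 3D protocol:

```python
import math

def joint_angle(proximal, distal):
    """Planar joint angle (degrees) between two body-segment vectors,
    e.g. thigh and shank for the knee. Signed via the 2D cross product,
    a simplified 2D stand-in for the study's 3D joint angles."""
    ax, ay = proximal
    bx, by = distal
    dot = ax * bx + ay * by
    cross = ax * by - ay * bx
    return math.degrees(math.atan2(cross, dot))

def angular_velocity(angles, dt):
    """Finite-difference segmental angular velocity (deg/s) from a
    time series of angles sampled every dt seconds."""
    return [(a2 - a1) / dt for a1, a2 in zip(angles, angles[1:])]
```

Applied cycle by cycle to a long walk, such quantities yield both the mean cycle kinematics and the between-cycle variability discussed above.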