931 results for 060102 Bioinformatics
Abstract:
My dissertation focuses on developing methods for detecting gene-gene/gene-environment interactions and imprinting effects for human complex diseases and quantitative traits. It includes three sections: (1) generalizing the Natural and Orthogonal Interaction (NOIA) model, a coding technique originally developed for gene-gene (GxG) interaction, to reduced models; (2) developing a novel statistical approach for modeling gene-environment (GxE) interactions influencing disease risk; and (3) developing a statistical approach for modeling genetic variants displaying parent-of-origin effects (POEs), such as imprinting. In the past decade, genetic researchers have identified a large number of causal variants for human genetic diseases and traits by single-locus analysis, and interaction has become a central topic in the search for the complex networks of multiple genes and environmental exposures contributing to an outcome. Epistasis, also known as gene-gene interaction, is the departure of the joint effect of several genes on a trait from additivity, meaning that the same alleles of one gene can display different genetic effects under different genetic backgrounds. In this study, we propose to implement the NOIA model for association studies with interaction for human complex traits and diseases. We compare the performance of the new statistical models we developed against the usual functional model in both simulation studies and real data analysis. Both revealed higher power of the NOIA GxG interaction model for detecting both main genetic effects and interaction effects. Through application to a melanoma dataset, we confirmed the previously identified regions at 15q13.1, 16q24.3, and 9p21.3 as significant for melanoma risk, and we identified potential interactions with these regions that contribute to melanoma risk. Based on the NOIA model, we developed a novel statistical approach for modeling a genetic factor and a binary environmental exposure jointly influencing disease risk. Both simulation and real data analyses revealed higher power of the NOIA model for detecting both main genetic effects and interaction effects for both quantitative and binary traits. We also found that parameter estimates from logistic regression for binary traits are no longer statistically uncorrelated under the alternative model when there is an association. Applying our novel approach to a lung cancer dataset, we confirmed four SNPs in the 5p15 and 15q25 regions as significantly associated with lung cancer risk in the Caucasian population: rs2736100, rs402710, rs16969968, and rs8034191. We also validated that rs16969968 and rs8034191 in the 15q25 region interact significantly with smoking in the Caucasian population, and our approach identified a potential interaction of SNP rs2256543 in 6p21 with smoking in contributing to lung cancer risk. Genetic imprinting is the best-known cause of parent-of-origin effects (POEs), whereby a gene is differentially expressed depending on the parental origin of the same alleles. Genetic imprinting affects several human disorders, including diabetes, breast cancer, alcoholism, and obesity, and has been shown to be important for normal embryonic development in mammals. Traditional association approaches ignore this important genetic phenomenon.
In this study, we propose a NOIA framework for single-locus association studies that estimates both main allelic effects and POEs. We develop statistical (Stat-POE) and functional (Func-POE) models and demonstrate the conditions for orthogonality of the Stat-POE model. We conducted simulations for both quantitative and qualitative traits to evaluate the performance of the statistical and functional models at different levels of POEs. Our results showed that the newly proposed Stat-POE model, which ensures orthogonality of the variance components if Hardy-Weinberg Equilibrium (HWE) holds or the minor and major allele frequencies are equal, had greater power for detecting the main allelic additive effect than the Func-POE model, which codes according to allelic substitutions, for both quantitative and qualitative traits. The power for detecting the POE was the same for the Stat-POE and Func-POE models under HWE for quantitative traits.
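To make the orthogonality idea behind the NOIA statistical formulation concrete, here is a minimal Python sketch of a single-locus statistical coding. The design matrix follows the form published by Álvarez-Castro and Carlborg (2007) and is an assumption here, not the dissertation's exact Stat-POE parameterization: the additive and dominance predictors it builds are uncorrelated at the observed genotype frequencies.

```python
import numpy as np

def noia_statistical_coding(genotypes):
    """Single-locus NOIA statistical coding (sketch).

    genotypes: array of minor-allele counts coded 0/1/2.
    Returns additive and dominance predictors that are uncorrelated
    at the observed genotype frequencies.
    """
    g = np.asarray(genotypes)
    p0, p1, p2 = np.bincount(g, minlength=3) / g.size  # genotype frequencies

    # Additive predictor: allele count centered at its mean p1 + 2*p2.
    add = g - (p1 + 2 * p2)

    # Dominance predictor (Alvarez-Castro & Carlborg statistical form;
    # an assumption for this sketch).
    denom = p0 + p2 - (p0 - p2) ** 2
    dom = np.array([-2 * p1 * p2, 4 * p0 * p2, -2 * p0 * p1])[g] / denom
    return add, dom

# Orthogonality check on simulated genotypes (HWE, minor allele freq. 0.3)
rng = np.random.default_rng(0)
g = rng.choice([0, 1, 2], size=1000, p=[0.49, 0.42, 0.09])
add, dom = noia_statistical_coding(g)
print(np.corrcoef(add, dom)[0, 1])  # ~0 up to floating-point error
```

The printed correlation is zero up to floating-point error because the coding is built from the observed genotype frequencies themselves, which is what makes the variance components orthogonal.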
Abstract:
Cardiovascular disease (CVD) is a major threat to public health and has been reported to be the leading cause of death in the United States. The invention of next-generation sequencing (NGS) technology has revolutionized biomedical research, and investigating NGS data on CVD-related quantitative traits can help address the unknown etiology and disease mechanisms of CVD. NHLBI's Exome Sequencing Project (ESP) contains CVD-related phenotypes and the associated NGS exome sequence data. Initially, a subset of next-generation sequencing data covering 13 CVD-related quantitative traits was investigated. Only six traits, systolic blood pressure (SBP), diastolic blood pressure (DBP), height, platelet count, waist circumference, and weight, were analyzed with a functional linear model (FLM) and seven existing methods. The FLM outperformed all existing methods, identifying the highest number of significant genes: 96, 139, 756, 1162, 1106, and 298 genes associated with SBP, DBP, height, platelet count, waist circumference, and weight, respectively.
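As context for how an FLM turns a region of variants into a low-dimensional test, here is a minimal Python sketch under simplifying assumptions: a Fourier basis stands in for the B-spline or beta-smooth bases typically used in published FLMs, and a plain F-test is used for the region; the function name and defaults are illustrative.

```python
import numpy as np
from scipy import stats

def flm_region_test(y, G, pos, n_basis=5):
    """Region-based functional linear model test (sketch).

    y: (n,) quantitative trait; G: (n, m) genotype matrix (0/1/2);
    pos: (m,) variant positions rescaled to [0, 1].
    """
    n, m = G.shape
    # Evaluate the basis at the variant positions (Fourier, for brevity).
    cols, k = [np.ones(m)], 1
    while len(cols) < n_basis:
        cols.append(np.sin(2 * np.pi * k * pos))
        if len(cols) < n_basis:
            cols.append(np.cos(2 * np.pi * k * pos))
        k += 1
    Phi = np.column_stack(cols)

    # Functional covariates: W[i, k] approximates the integral of
    # X_i(t) * phi_k(t) over the region.
    W = G @ Phi
    X = np.column_stack([np.ones(n), W])

    # F-test of H0: beta(t) = 0, i.e. all basis coefficients are zero.
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    rss_full = np.sum((y - X @ beta) ** 2)
    rss_null = np.sum((y - y.mean()) ** 2)
    df1, df2 = n_basis, n - n_basis - 1
    F = ((rss_null - rss_full) / df1) / (rss_full / df2)
    return F, stats.f.sf(F, df1, df2)
```

The design choice worth noting is that the test's degrees of freedom depend on the basis size, not on the number of variants, which is what gives FLMs their advantage in dense sequence regions.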
Abstract:
The genomic era brought about by recent advances in next-generation sequencing technology makes genome-wide scans for natural selection a reality. Currently, almost all statistical tests and analytical methods for identifying genes under selection are performed on an individual-gene basis. Although these methods have power to identify genes subject to strong selection, they have limited power to discover genes targeted by moderate or weak selection forces, which are crucial for understanding the molecular mechanisms of complex phenotypes and diseases. The recent availability and rapidly growing completeness of gene network and protein-protein interaction databases open an avenue for enhancing the power of discovering genes under natural selection. The aim of this thesis is to explore and develop normal-mixture-model-based methods that leverage gene network information to enhance the power of discovering the gene targets of natural selection. The results show that the developed statistical method, which combines the posterior log-odds of a standard normal mixture model with the Guilt-By-Association score from the gene network in a naïve Bayes framework, has power to discover genes under moderate or weak selection that bridge the genes under strong selection, helping our understanding of the biology underlying complex diseases and related natural-selection phenotypes.
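A minimal sketch of this kind of evidence combination, under assumptions of my own (a two-component normal mixture with illustrative prior and effect-size values, and a neighbor-averaging Guilt-By-Association score), might look as follows; it is not the thesis's fitted model.

```python
import numpy as np
from scipy import stats

def combined_log_odds(z, adjacency, prior=0.05, mu1=2.0):
    """Combine mixture-model and network evidence (sketch).

    z: per-gene selection statistics, assumed ~N(0, 1) for neutral genes
    and ~N(mu1, 1) for selected genes; adjacency: (g, g) 0/1 gene network.
    prior and mu1 are illustrative values, not the thesis estimates.
    """
    # Posterior log odds of selection from a two-component normal mixture.
    l1 = stats.norm.logpdf(z, mu1, 1.0) + np.log(prior)
    l0 = stats.norm.logpdf(z, 0.0, 1.0) + np.log(1 - prior)
    mix_log_odds = l1 - l0

    # Guilt-By-Association: mean posterior of selection among neighbors.
    post = 1.0 / (1.0 + np.exp(-mix_log_odds))
    degree = adjacency.sum(axis=1).clip(min=1)
    gba = adjacency @ post / degree

    # Naive Bayes step: treat the network score as independent evidence
    # and add its log odds to the mixture log odds.
    eps = 1e-6
    return mix_log_odds + np.log((gba + eps) / (1 - gba + eps))
```

The naïve Bayes framing is what allows the two scores to be combined by simple addition on the log-odds scale: a gene with an unremarkable statistic but strongly selected neighbors can still rise above the detection threshold.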
Abstract:
My dissertation focuses on two aspects of RNA sequencing technology. The first is methodology for modeling the overdispersion inherent in RNA-seq data for differential expression analysis; this aspect is addressed in three sections. The second is the application of RNA-seq data to identify the CpG island methylator phenotype (CIMP) by integrating datasets of mRNA expression level and DNA methylation status. Section 1: The cost of DNA sequencing has dropped dramatically in the past decade, and genomic research consequently depends increasingly on sequencing technology. However, it remains unclear how sequencing capacity influences the accuracy of mRNA expression measurement. We observe that accuracy improves with increasing sequencing depth. To model the overdispersion, we use the beta-binomial distribution with a new parameter capturing the dependency between overdispersion and sequencing depth. Our modified beta-binomial model performs better than the binomial or the pure beta-binomial model, with a lower false discovery rate. Section 2: Although a number of methods have been proposed to accurately analyze differential RNA expression at the gene level, modeling at the base-pair level is also required. Here, we find that the overdispersion rate decreases as the sequencing depth increases at the base-pair level. We propose four models and compare them with each other; as expected, our beta-binomial model with a dynamic overdispersion rate proves superior. Section 3: We investigate biases in RNA-seq by exploring the measurement of an external control, spike-in RNA. This study is based on two datasets with spike-in controls obtained from a recent study. We observe a previously undiscovered bias in the measurement of the spike-in transcripts that arises from the influence of the sample transcripts in RNA-seq, and we find that this influence is related to the local sequence of the random hexamer used in priming. We propose a model of this inequality between samples to correct this type of bias. Section 4: The expression of a gene can be turned off when its promoter is highly methylated. Several studies have reported a clear threshold effect in gene silencing mediated by DNA methylation, and it is reasonable to assume the thresholds are specific to each gene. It is also intriguing to investigate genes that are largely controlled by DNA methylation; we call these “L-shaped” genes. We develop a method to determine the DNA methylation threshold and identify a new CIMP of BRCA. In conclusion, we provide a detailed understanding of the relationship between the overdispersion rate and sequencing depth, reveal a new bias in RNA-seq together with a detailed understanding of its relationship to local sequence, and develop a powerful method to dichotomize methylation status, through which we identify a new CIMP of breast cancer with a distinct classification of molecular characteristics and clinical features.
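A hedged sketch of the core modeling idea, a beta-binomial likelihood whose overdispersion shrinks with depth, is given below. The power-law link rho(n) = rho0 * n^(-gamma) is my illustrative choice of how a depth-dependent parameter could enter, not the dissertation's actual parameterization.

```python
import numpy as np
from scipy import stats
from scipy.optimize import minimize

def neg_log_lik(params, k, n):
    """Negative log-likelihood of a beta-binomial whose overdispersion
    shrinks with sequencing depth (sketch).

    k: read counts for a feature; n: total depths at the feature.
    Parameterization (an illustrative assumption): mean p and
    overdispersion rho(n) = rho0 * n**(-gamma).
    """
    logit_p, log_rho0, gamma = params
    p = 1.0 / (1.0 + np.exp(-logit_p))
    rho = np.clip(np.exp(log_rho0) * n.astype(float) ** (-gamma),
                  1e-8, 1 - 1e-8)
    s = (1 - rho) / rho              # since rho = 1 / (a + b + 1)
    return -np.sum(stats.betabinom.logpmf(k, n, p * s, (1 - p) * s))

# Illustrative fit on simulated counts
rng = np.random.default_rng(1)
n = rng.integers(50, 5000, size=200)
k = rng.binomial(n, 0.3)
fit = minimize(neg_log_lik, x0=[0.0, np.log(0.05), 0.5], args=(k, n),
               method="Nelder-Mead")
print(fit.x)
```

Setting gamma = 0 recovers the pure beta-binomial model, so the two can be compared by a likelihood-ratio test, which mirrors the model comparison the abstract describes.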
Abstract:
DNA extraction was carried out as described on the MICROBIS project pages (http://icomm.mbl.edu/microbis) using a commercially available extraction kit. We amplified the hypervariable regions V4-V6 of archaeal and bacterial 16S rRNA genes using PCR and several sets of forward and reverse primers (http://vamps.mbl.edu/resources/primers.php). Massively parallel tag sequencing of the PCR products was carried out on a 454 Life Sciences GS FLX sequencer at the Marine Biological Laboratory, Woods Hole, MA, following the same experimental conditions for all samples. Sequence reads were subjected to a rigorous quality control procedure based on mothur v30 (doi:10.1128/AEM.01541-09), including denoising of the flowgrams using an algorithm based on PyroNoise (doi:10.1038/nmeth.1361), removal of PCR errors, and a chimera check using uchime (doi:10.1093/bioinformatics/btr381). The reads were taxonomically assigned according to the SILVA taxonomy (SSURef v119, 07-2014; doi:10.1093/nar/gks1219) implemented in mothur and clustered at 98% ribosomal RNA gene V4-V6 sequence identity. V4-V6 amplicon sequence abundance tables were standardized to account for unequal sampling effort by drawing 1000 (Archaea) and 2300 (Bacteria) sequences at random without replacement in mothur, and then used to calculate inverse Simpson diversity indices and Chao1 richness (doi:10.2307/4615964). Bray-Curtis dissimilarities (doi:10.2307/1942268) between all samples were calculated and used for two-dimensional non-metric multidimensional scaling (NMDS) ordinations with 20 random starts (doi:10.1007/BF02289694). Stress values below 0.2 indicated that the multidimensional dataset was well represented by the 2D ordination. NMDS ordinations were compared and tested using Procrustes correlation analysis (doi:10.1007/BF02291478). All analyses were carried out in the R statistical environment with the packages vegan (http://cran.r-project.org/package=vegan) and labdsv (http://cran.r-project.org/package=labdsv), as well as custom R scripts. Operational taxonomic units at 98% sequence identity (OTU0.03) that occurred only once in the whole dataset were termed absolute single-sequence OTUs (SSOabs; doi:10.1038/ismej.2011.132). OTU0.03 sequences that occurred only once in at least one sample, but possibly more often in other samples, were termed relative single-sequence OTUs (SSOrel). SSOrel are particularly interesting for community ecology, since they comprise rare organisms that might become abundant when conditions change. 16S rRNA amplicons and metagenomic reads have been deposited in the Sequence Read Archive under SRA project accession number SRP042162.
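For readers outside R, the core diversity and ordination steps above (the original used vegan and labdsv) can be sketched in Python with standard formulas; this is a minimal stand-in, not the study's pipeline, and scikit-learn's raw NMDS stress is scaled differently from vegan's Kruskal stress.

```python
import numpy as np
from sklearn.manifold import MDS

def bray_curtis(counts):
    """Pairwise Bray-Curtis dissimilarities for a samples-x-OTUs table."""
    c = counts.astype(float)
    num = np.abs(c[:, None, :] - c[None, :, :]).sum(axis=-1)
    den = (c[:, None, :] + c[None, :, :]).sum(axis=-1)
    return num / den

def inverse_simpson(counts):
    """Inverse Simpson diversity index for each sample."""
    p = counts / counts.sum(axis=1, keepdims=True)
    return 1.0 / (p ** 2).sum(axis=1)

# Two-dimensional non-metric MDS with multiple random starts, mirroring
# the vegan-based ordination described above (toy data for illustration).
counts = np.random.default_rng(2).integers(0, 50, size=(12, 300))
d = bray_curtis(counts)
nmds = MDS(n_components=2, metric=False, dissimilarity="precomputed",
           n_init=20, random_state=0)
coords = nmds.fit_transform(d)
```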
Abstract:
The Ocean Sampling Day (OSD) is a simultaneous sampling campaign of the world's oceans, which took place for the first time on the summer solstice (June 21st) of 2014. These cumulative samples, related in time, space, and environmental parameters, provide insights into fundamental rules describing microbial diversity and function, and contribute to the blue economy through the identification of novel, ocean-derived biotechnologies. We see OSD data as a reference dataset for generations of experiments to follow in the coming decade. The present dataset includes a description of each sample collected during Ocean Sampling Day 2014 and provides contextual environmental data measured concurrently with the collection of water samples for genomic analyses.
Abstract:
Molecular computing is a discipline concerned with the design and implementation of information-processing devices built on a biological substrate, such as deoxyribonucleic acid (DNA), ribonucleic acid (RNA), or proteins. Since Watson and Crick discovered the double-helix molecular structure of DNA in the 1950s, further discoveries followed, such as the enzymes that cut DNA and the polymerase chain reaction (PCR), contributing very significantly to the emergence of recombinant DNA technology. Thanks to this technology and to the precipitous drop in the cost of DNA sequencing and synthesis, biomolecular computing was able to move beyond a purely theoretical conception. In 1994, Leonard Adleman solved an NP-complete problem (the Directed Hamiltonian Path Problem) using only DNA molecules. The massive parallel-processing capacity offered by recombinant DNA techniques allowed Adleman to solve the problem in polynomial time, albeit at the cost of an exponential consumption of DNA molecules. Using brute-force algorithms similar to Adleman's, other NP-complete problems were solved (for example, the satisfiability of logical formulas, SAT). It soon became clear that biomolecular computing could not compete with silicon computers in speed or precision, so its focus and goals shifted toward solving biological problems with biomedical applications, leaving aside classical computing problems. Since then, various models of biomolecular devices have been proposed that, autonomously (without a bioengineer performing laboratory operations), can process a biological substrate as input and produce an output also in biological form: processors that exploit polymerase extension, automata driven by restriction enzymes or deoxyribozymes, and competitive hybridization circuits. This thesis presents a set of scalable, time-sensitive, and energy-efficient nucleic acid device models capable of implementing various logical computing operations by exploiting the phenomenon of competitive DNA hybridization. The implicit capacity of these devices to apply inference rules such as modus ponens, modus tollens, resolution, and hypothetical syllogism has great potential. Among other functions, they can represent logical implications (IF/THEN rules), for example, "if symptom 1 and symptom 2 are present, then we are dealing with disease A", or "if we are dealing with disease B, then symptoms 2 and 3 must manifest". Using these logic modules as basic building blocks, the aim is to develop in vitro systems based on DNA sensors, capable of working together to detect a set of input symptoms and produce a diagnostic output. The recent publication in the journal Science of a diagnostic biomolecular automaton, capable of attacking cancer cells without affecting healthy cells, is a good example of the current scientific relevance of this type of automaton.
In addition to the in vitro diagnostic applications just mentioned, the models presented are also useful for the design of intelligent biosensors and for the construction of databases with records in biomolecular format that facilitate genomic analysis. The state-of-the-art review of biomolecular computing presented in this thesis is based on an article recently published in the journal Current Bioinformatics. The new devices presented in the thesis form part of a patent application held by UPM and have been presented at international conferences such as Unconventional Computation 2010 in Tokyo and Synthetic Biology 2010 in Paris.
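As a purely illustrative companion to the IF/THEN rule modules described above, here is a small Python sketch, an in-silico analogue rather than the DNA implementation, of forward chaining with modus ponens over symptom facts of the kind the competitive-hybridization devices are designed to evaluate.

```python
def forward_chain(facts, rules):
    """facts: set of observed symptoms; rules: list of (premises, conclusion).

    Repeatedly applies modus ponens until no new conclusions can be drawn,
    mimicking the cascaded firing of the DNA logic modules.
    """
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if set(premises) <= derived and conclusion not in derived:
                derived.add(conclusion)   # modus ponens
                changed = True
    return derived

rules = [({"symptom1", "symptom2"}, "diseaseA"),   # IF s1 AND s2 THEN A
         ({"diseaseB"}, "symptom2")]               # IF B THEN s2
print(forward_chain({"symptom1", "symptom2"}, rules))  # derives 'diseaseA'
```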
Abstract:
Background: Malignancies arising in the large bowel cause the second-largest number of deaths from cancer in the Western world. Despite progress made during the last decades, colorectal cancer remains one of the most frequent and deadly neoplasias in Western countries. Methods: A genomic study of human colorectal cancer was carried out on a total of 31 tumoral samples, corresponding to different stages of the disease, and 33 non-tumoral samples. The study was carried out by hybridisation of the tumour samples against a reference pool of non-tumoral samples using Agilent Human 1A 60-mer oligo microarrays. The results obtained were validated by qRT-PCR. In the subsequent bioinformatics analysis, gene networks were built by means of Bayesian classifiers, variable selection, and bootstrap resampling. The consensus among all the induced models produced a hierarchy of dependences and, thus, of variables. Results: After an exhaustive pre-processing to ensure data quality (missing-value imputation, probe quality checks, data smoothing, and intraclass variability filtering), the final dataset comprised a total of 8,104 probes. Next, a supervised classification approach and data analysis were carried out to obtain the most relevant genes; two of them are directly involved in cancer progression, and in particular in colorectal cancer. Finally, a supervised classifier was induced to classify new unseen samples. Conclusions: We have developed a tentative model for the diagnosis of colorectal cancer based on a biomarker panel. Our results indicate that the gene profile described herein can discriminate between non-cancerous and cancerous samples with 94.45% accuracy using different supervised classifiers (AUC values in the range of 0.955 to 0.997).
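A hedged sketch of the kind of pipeline described, bootstrap resampling for stable gene selection followed by a supervised classifier evaluated by AUC, is given below; the helper name, the F-statistic ranking, and all parameter values are illustrative assumptions, not the study's actual procedure.

```python
import numpy as np
from sklearn.feature_selection import f_classif
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB

def stable_features(X, y, n_boot=100, top_k=50, seed=0):
    """Count how often each probe ranks in the top_k by F-statistic
    across bootstrap resamples and keep the most consistent ones."""
    rng = np.random.default_rng(seed)
    counts = np.zeros(X.shape[1])
    for _ in range(n_boot):
        idx = rng.integers(0, len(y), len(y))
        if len(np.unique(y[idx])) < 2:   # resample must contain both classes
            continue
        F, _ = f_classif(X[idx], y[idx])
        counts[np.argsort(np.nan_to_num(F))[-top_k:]] += 1
    return np.argsort(counts)[-top_k:]

# Usage sketch: X is a samples-by-probes matrix, y tumour/non-tumour labels.
# selected = stable_features(X, y)
# auc = cross_val_score(GaussianNB(), X[:, selected], y,
#                       scoring="roc_auc").mean()
```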
Abstract:
The Linked Data initiative offers a straightforward method to publish structured data on the World Wide Web and link it to other data, resulting in a worldwide network of semantically codified data known as the Linked Open Data cloud. The size of the Linked Open Data cloud, i.e. the amount of data published using Linked Data principles, is growing exponentially and includes life sciences data. However, key information for biological research is still missing from the Linked Open Data cloud. For example, the relation between orthologous genes and genetic diseases is absent, even though such information can be used for hypothesis generation regarding human diseases. The OGOLOD system, an extension of the OGO Knowledge Base, publishes ortholog/disease information using Linked Data. This gives scientists the ability to query the structured information in connection with other Linked Data and to discover new information related to orthologs and human diseases in the cloud.
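To illustrate how such Linked Data is consumed, here is a minimal SPARQL query sketch in Python. The endpoint URL and the vocabulary terms are hypothetical placeholders, not OGOLOD's actual endpoint or schema; only the SPARQLWrapper calls are real API.

```python
from SPARQLWrapper import SPARQLWrapper, JSON

# Hypothetical endpoint and vocabulary, for illustration only.
endpoint = SPARQLWrapper("http://example.org/ogolod/sparql")
endpoint.setQuery("""
PREFIX ex: <http://example.org/vocab#>
SELECT ?gene ?ortholog ?disease WHERE {
  ?gene ex:hasOrtholog ?ortholog .
  ?ortholog ex:associatedWith ?disease .
} LIMIT 10
""")
endpoint.setReturnFormat(JSON)

# Each binding links a gene, one of its orthologs, and a disease.
for row in endpoint.query().convert()["results"]["bindings"]:
    print(row["gene"]["value"], row["disease"]["value"])
```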
Abstract:
Multi-view microscopy techniques such as Light-Sheet Fluorescence Microscopy (LSFM) are powerful tools for 3D+time studies of live embryos in developmental biology. The sample is imaged from several points of view, acquiring a set of 3D views that are then combined, or fused, to overcome their individual limitations. View fusion is still an open problem despite recent contributions in the field. We developed a wavelet-based multi-view fusion method that, owing to the properties of the wavelet decomposition, is able to combine the complementary directional information from all available views into a single volume. Our method is demonstrated on LSFM acquisitions from live sea urchin and zebrafish embryos. The fusion results show improved overall contrast and detail compared with any of the acquired volumes. The proposed method does not require knowledge of the system's point spread function (PSF) and performs better than other existing PSF-independent fusion methods.
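A minimal sketch of wavelet-domain fusion of two registered views follows, in 2D for brevity; the max-modulus rule for detail coefficients is a common textbook choice and an assumption here, not the thesis's exact fusion rule, which operates on 3D volumes.

```python
import numpy as np
import pywt

def fuse_two_views(a, b, wavelet="db4", level=3):
    """Fuse two registered 2-D views in the wavelet domain (sketch).

    Detail bands keep, per coefficient, whichever view has the larger
    magnitude (locally favoring the sharper view); the coarse
    approximation bands are averaged.
    """
    ca = pywt.wavedec2(a, wavelet, level=level)
    cb = pywt.wavedec2(b, wavelet, level=level)
    fused = [(ca[0] + cb[0]) / 2]  # average the coarse approximation
    for da, db in zip(ca[1:], cb[1:]):
        fused.append(tuple(np.where(np.abs(x) >= np.abs(y), x, y)
                           for x, y in zip(da, db)))
    return pywt.waverec2(fused, wavelet)
```

Because each detail subband is directionally selective (horizontal, vertical, diagonal), this kind of rule is one way to combine complementary directional information from differently oriented views, which is the property the abstract highlights.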
Abstract:
Background. Over recent years, the number of available informatics resources in medicine has grown exponentially. While specific inventories of such resources have already begun to be developed for Bioinformatics (BI), comparable inventories are not yet available for the Medical Informatics (MI) field, so locating and accessing them currently remains a hard and time-consuming task. Description. We have created a repository of MI resources from the scientific literature, providing free access to its contents through a web-based service. Relevant information describing the resources is automatically extracted from manuscripts published in top-ranked MI journals. We used a pattern-matching approach to detect the resources' names and their main features. Detected resources are classified according to three different criteria: functionality, resource type, and domain. To facilitate these tasks, we built three different taxonomies by following a novel approach based on folksonomies and social tagging: we adopted the terminology most frequently used by MI researchers in their publications to create the concepts and hierarchical relationships belonging to the taxonomies. The classification algorithm identifies the categories associated with each resource and annotates it accordingly. The database is then populated with these data after manual curation and validation. Conclusions. We have created an online repository of MI resources to assist researchers in locating and accessing the most suitable resources for specific tasks. The database contained 282 resources at the time of writing. We continue to expand the number of available resources by taking into account further publications as well as suggestions from users and resource developers.
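As a toy illustration of the pattern-matching idea (not the authors' actual patterns), one simple heuristic is to flag a capitalized name immediately followed by a URL in parentheses; the pattern, example text, and tool name below are all made up for illustration.

```python
import re

# Candidate resource: a capitalized token followed by a parenthesized URL.
PATTERN = re.compile(r"\b([A-Z][A-Za-z0-9_-]{2,})\s*\((https?://[^\s)]+)\)")

text = "We developed FooMiner (http://foominer.example.org), a database..."
for name, url in PATTERN.findall(text):
    print(name, url)  # -> FooMiner http://foominer.example.org
```

Real extraction would of course need many such patterns plus the manual curation step the abstract describes, since journal prose names tools in far less regular ways.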
Abstract:
Many macroscopic properties (hardness, corrosion, catalytic activity, etc.) are directly related to the surface structure, that is, to the position and chemical identity of the outermost atoms of the material. Current experimental techniques for its determination produce a “signature” from which the structure must be inferred by solving an inverse problem: a solution is proposed, its corresponding signature computed and then compared to the experiment. This is a challenging optimization problem in which the search space and the number of local minima grow exponentially with the number of atoms, so a solution cannot be achieved for arbitrarily large structures. Nowadays, it is solved using a mixture of human knowledge and local search techniques: an expert proposes a solution that is refined using a local minimizer, and if the outcome does not fit the experiment, a new solution must be proposed. Solving a small surface can take days to weeks of this trial-and-error method. Here we describe our ongoing work on its solution. We use a hybrid algorithm that mixes evolutionary techniques with trust-region methods and reuses knowledge gained during the execution to avoid repeatedly searching the same structures. Its parallelization produces good results even without requiring the gathering of the full population, so it can be used in loosely coupled environments such as grids. With this algorithm, test cases that previously took weeks of expert time can be solved automatically in a day or two of uniprocessor time.
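A hedged sketch of this hybrid strategy, an evolutionary outer loop, trust-region local refinement, and a cache so already-refined structures are not searched again, might look as follows; the function names, mutation scheme, and all parameters are illustrative assumptions, and the objective is a stand-in for the real signature-mismatch function.

```python
import numpy as np
from scipy.optimize import minimize

def hybrid_search(objective, dim, pop_size=20, generations=50, seed=0):
    """Evolutionary loop + trust-region refinement with memoization (sketch)."""
    rng = np.random.default_rng(seed)
    cache = {}  # rounded candidate -> (refined fitness, refined candidate)

    def refine(x):
        key = tuple(np.round(x, 3))
        if key not in cache:                  # reuse knowledge gained so far
            res = minimize(objective, x, method="trust-constr")
            cache[key] = (res.fun, res.x)
        return cache[key]

    pop = rng.uniform(-1.0, 1.0, (pop_size, dim))
    best = min((refine(x) for x in pop), key=lambda t: t[0])
    for _ in range(generations):
        children = pop + rng.normal(0.0, 0.1, pop.shape)  # Gaussian mutation
        for i, child in enumerate(children):
            f_child, x_child = refine(child)
            if f_child < refine(pop[i])[0]:   # elitist replacement
                pop[i] = x_child
            if f_child < best[0]:
                best = (f_child, x_child)
    return best

# Usage sketch with a stand-in objective (the real one would compare a
# computed surface signature against the experimental one):
# best_f, best_x = hybrid_search(lambda x: np.sum(x**2), dim=6)
```

Each member of the population can be refined independently, which is what makes the scheme parallelize well on loosely coupled grids, as noted above.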