819 results for "symbolic machine learning"


Relevance: 80.00%

Abstract:

The developmental processes and functions of an organism are controlled by its genes and the proteins derived from them. Identifying key genes and reconstructing gene networks can provide a model to help us understand the regulatory mechanisms behind the initiation and progression of biological processes or functional abnormalities (e.g. diseases) in living organisms. In this dissertation, I developed statistical methods to identify the genes and transcription factors (TFs) involved in biological processes, constructed their regulatory networks, and evaluated existing association methods to find robust methods for coexpression analyses. Two kinds of data sets were used for this work: genotype data and gene expression microarray data. On the basis of these data sets, the dissertation has two major parts, together comprising six chapters. The first part develops association methods for rare variants using genotype data (chapters 4 and 5). The second part develops and evaluates statistical methods to identify genes and TFs involved in biological processes and to construct their regulatory networks using gene expression data (chapters 2, 3, and 6). For the first part, I developed two methods to find the groupwise association of rare variants with given diseases or traits. The first method is based on kernel machine learning and can be applied to both quantitative and qualitative traits. Simulation results showed that the proposed method has improved power over the existing weighted sum (WS) method in most settings. The second method uses multiple phenotypes to select a few top significant genes. It then finds the association of each gene with each phenotype while controlling for population stratification by adjusting the data for ancestry using principal components. This method was applied to GAW 17 data and identified several disease risk genes. For the second part, I worked on three problems. The first involved evaluating eight gene association methods; a comprehensive comparison, with further analysis, demonstrates both the distinct and the shared performance characteristics of these methods. For the second problem, an algorithm named the bottom-up graphical Gaussian model was developed to identify the TFs that regulate pathway genes and to reconstruct their hierarchical regulatory networks. The algorithm produced highly significant results and is the first reported to produce such hierarchical networks for these pathways. The third problem involved developing a complementary algorithm, the top-down graphical Gaussian model, which identifies the network governed by a specific TF; the network it produces was shown to be highly accurate.
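
The dissertation's code is not given here; the following is a minimal sketch of the idea behind a graphical Gaussian model, in which conditional dependencies between genes are read off the partial correlations implied by the inverse covariance (precision) matrix of the expression data. All data and the edge threshold are illustrative placeholders.

```python
# Minimal graphical-Gaussian-model sketch: edges from partial correlations.
import numpy as np

def partial_correlations(X):
    """X: (n_samples, n_genes) expression matrix -> (n_genes, n_genes)."""
    prec = np.linalg.pinv(np.cov(X, rowvar=False))   # precision matrix
    d = np.sqrt(np.diag(prec))
    pcor = -prec / np.outer(d, d)                    # rho_ij = -p_ij/sqrt(p_ii p_jj)
    np.fill_diagonal(pcor, 1.0)
    return pcor

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 8))                        # toy expression data
edges = np.abs(partial_correlations(X)) > 0.2        # illustrative threshold
```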

Relevance: 80.00%

Abstract:

Important food crops like rice are constantly exposed to various stresses that can have devastating effects on their survival and productivity. Being sessile, these highly evolved organisms have developed elaborate molecular machinery to sense a mixture of stress signals and elicit a precise response that minimizes the damage. However, recent discoveries reveal that the interplay of these stress regulatory and signaling molecules is highly complex and remains largely unknown. In this work, we conducted a large-scale analysis of differential gene expression using advanced computational methods to dissect the regulation of the stress response, which is at the heart of all the molecular changes leading to the observed phenotypic susceptibility. One of the most important stress conditions in terms of lost productivity is drought. We performed genomic and proteomic analyses of epigenetic and miRNA mechanisms in the regulation of drought-responsive genes in rice and found subsets of genes with striking properties. Overexpressed gene sets contained higher numbers of epigenetic marks, miRNA targets, and transcription factors that regulate drought tolerance. Underexpressed gene sets, by contrast, were poor in these features but rich in metabolic genes with multiple co-expression partners that contribute substantially to drought resistance. Identifying and characterizing the patterns exhibited by differentially expressed genes holds the key to uncovering the synergistic and antagonistic components of the crosstalk between stress response mechanisms. We performed a meta-analysis of drought and bacterial stresses in rice and Arabidopsis, identified hundreds of shared genes, and found a high level of conservation of gene expression between these stresses. Weighted co-expression network analysis detected two tight clusters of genes, made up of master transcription factors and signaling genes, showing strikingly opposite expression status. To comprehensively identify the stress-responsive genes shared between multiple abiotic and biotic stresses in rice, we performed separate meta-analyses of microarray studies covering seven abiotic and six biotic stresses and found more than thirteen hundred shared stress-responsive genes. Various machine learning techniques utilizing these genes classified the stresses with high accuracy, both into two major classes, abiotic and biotic, and into multiple classes of individual stresses, and identified the top genes showing distinct patterns of expression. Functional enrichment and co-expression network analysis revealed the different roles of plant hormones and transcription factors in conserved and non-conserved gene sets in the regulation of the stress response.
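
As a hedged sketch of the classification step described above: train a classifier on per-sample expression of the shared stress-responsive genes and estimate accuracy by cross-validation. The data, labels, and choice of a linear SVM are placeholders, not the study's microarray compendium or exact methods.

```python
# Toy abiotic-vs-biotic stress classification from expression profiles.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(1)
X = rng.normal(size=(120, 1300))   # samples x shared stress-responsive genes
y = rng.integers(0, 2, size=120)   # 0 = abiotic, 1 = biotic (toy labels)

clf = make_pipeline(StandardScaler(), SVC(kernel="linear"))
print(cross_val_score(clf, X, y, cv=5).mean())   # cross-validated accuracy
```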

Relevance: 80.00%

Abstract:

Activation of the peroxisome proliferator-activated receptor alpha (PPARalpha) is associated with increased fatty acid catabolism and is commonly targeted for the treatment of hyperlipidemia. To identify latent, endogenous biomarkers of PPARalpha activation, and hence of increased fatty acid beta-oxidation, healthy human volunteers were given fenofibrate orally for 2 weeks and their urine was profiled by UPLC-QTOFMS. Biomarkers identified by the machine learning algorithm random forests included significant depletion by day 14 of both pantothenic acid (>5-fold) and acetylcarnitine (>20-fold), observations consistent with known targets of PPARalpha, including pantothenate kinase and genes encoding proteins involved in the transport and synthesis of acylcarnitines. Serum cholesterol (-12.7%), triglycerides (-25.6%), and uric acid (-34.7%), together with urinary propylcarnitine (>10-fold), isobutyrylcarnitine (>2.5-fold), (S)-(+)-2-methylbutyrylcarnitine (5-fold), and isovalerylcarnitine (>5-fold), were also reduced by day 14. The specificity of these biomarkers as indicators of PPARalpha activation was demonstrated using the Ppara-null mouse. Urinary pantothenic acid and acylcarnitines may prove to be useful indicators of PPARalpha-induced fatty acid beta-oxidation in humans. This study illustrates the utility of a pharmacometabolomic approach to understanding drug effects on lipid metabolism in both human populations and inbred mouse models.
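
The study names random forests as the biomarker-discovery algorithm; the sketch below shows the generic pattern of ranking metabolite features by random forest importance. The feature matrix, group sizes, and hyperparameters are assumptions, not the study's data or settings.

```python
# Rank urinary metabolite features by random forest importance (toy data).
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(2)
X = rng.normal(size=(40, 500))     # subjects x UPLC-QTOFMS ion features
y = np.repeat([0, 1], 20)          # 0 = pre-dose, 1 = day 14

rf = RandomForestClassifier(n_estimators=500, random_state=0).fit(X, y)
ranked = np.argsort(rf.feature_importances_)[::-1]
print(ranked[:10])                 # candidate biomarker features to inspect
```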

Relevance: 80.00%

Abstract:

In this paper, we propose an intelligent method, named the Novelty Detection Power Meter (NodePM), to detect novelties in electronic equipment monitored by a smart grid. Using the entropy of each monitored device, calculated from a Markov chain model, the proposed method identifies novelties through a machine learning algorithm. To this end, the NodePM is integrated into a platform for the remote monitoring of energy consumption, which consists of a wireless sensor network (WSN). It should be stressed that, unlike many related works that are evaluated in simulated environments, our experiments were conducted in real environments. The results show that the NodePM reduced the power consumption of the equipment we monitored by 13.7%. In addition, the NodePM detects novelties more efficiently than an approach from the literature, surpassing it across scenarios in all the evaluations that were carried out.
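
A hedged sketch of the kind of entropy computation the description implies: fit a first-order Markov chain to a device's discretized consumption readings and compute its empirical entropy rate, so that deviations from a device's usual value can flag a novelty. The state discretization and all data below are assumptions, not NodePM's implementation.

```python
# Empirical entropy rate of a first-order Markov chain over device states.
import numpy as np

def entropy_rate(states, n_states):
    """H = -sum_i pi_i sum_j P_ij log2 P_ij for an observed state sequence."""
    P = np.zeros((n_states, n_states))
    for a, b in zip(states[:-1], states[1:]):
        P[a, b] += 1.0                                 # transition counts
    P /= np.maximum(P.sum(axis=1, keepdims=True), 1)   # row-stochastic rows
    pi = np.bincount(states, minlength=n_states) / len(states)
    logP = np.where(P > 0, np.log2(np.where(P > 0, P, 1.0)), 0.0)
    return -np.sum(pi[:, None] * P * logP)

readings = np.random.default_rng(3).integers(0, 4, size=1000)  # toy states
print(entropy_rate(readings, n_states=4))
```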

Relevance: 80.00%

Abstract:

OBJECTIVE: To determine whether algorithms developed for the World Wide Web can be applied to the biomedical literature in order to identify articles that are important as well as relevant. DESIGN AND MEASUREMENTS: A direct comparison of eight algorithms: simple PubMed queries, clinical queries (sensitive and specific versions), vector cosine comparison, citation count, journal impact factor, PageRank, and machine learning based on polynomial support vector machines. The objective was to prioritize important articles, defined as those included in a pre-existing bibliography of important literature in surgical oncology. RESULTS: Citation-based algorithms were more effective than noncitation-based algorithms at identifying important articles. The most effective strategies were simple citation count and PageRank, which on average identified over six important articles in the first 100 results, compared with 0.85 for the best noncitation-based algorithm (p < 0.001). Similar differences between citation-based and noncitation-based algorithms were seen at 10, 20, 50, 200, 500, and 1,000 results (p < 0.001). Citation lag affects the performance of PageRank more than that of simple citation count; in spite of citation lag, however, citation-based algorithms remain more effective than noncitation-based algorithms. CONCLUSION: Algorithms that have proved successful on the World Wide Web can be applied to biomedical information retrieval. Citation-based algorithms can help identify important articles within large sets of relevant results. Further studies are needed to determine whether citation-based algorithms can effectively meet actual user information needs.
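
For readers unfamiliar with PageRank, the sketch below computes it by power iteration over a citation graph; a real system would build the adjacency matrix from citation data rather than this toy example, and dangling nodes are handled only crudely here.

```python
# PageRank by power iteration on a tiny citation graph.
import numpy as np

def pagerank(adj, damping=0.85, tol=1e-10):
    """adj[i, j] = 1 if article i cites article j; returns importance scores."""
    n = adj.shape[0]
    out_deg = np.maximum(adj.sum(axis=1, keepdims=True), 1)
    M = adj / out_deg                      # row-stochastic transition matrix
    r = np.full(n, 1.0 / n)
    while True:
        r_new = (1 - damping) / n + damping * (M.T @ r)
        if np.abs(r_new - r).sum() < tol:
            return r_new
        r = r_new

adj = np.array([[0, 1, 1],
                [0, 0, 1],
                [1, 0, 0]], dtype=float)
print(pagerank(adj))                       # rank articles by score
```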

Relevance: 80.00%

Abstract:

Cognitive event-related potentials (ERPs) are widely employed in the study of dementing disorders. The morphology of the averaged response is known to be influenced by neurodegenerative processes and is exploited for diagnostic purposes. This work builds on the idea that there is additional information in the dynamics of single-trial responses. We introduce a novel way to detect mild cognitive impairment (MCI) from recordings of auditory ERP responses. Using single-trial responses from a cohort of 25 amnestic MCI patients and a group of age-matched controls, we propose a descriptor capable of encapsulating single-trial (ST) response dynamics for the benefit of early diagnosis. A customized vector quantization (VQ) scheme is first employed to summarize the overall set of ST responses by means of a small, semantically organized codebook of brain waves. Each ST response is then treated as a trajectory that can be encoded as a sequence of code vectors, and a subject's set of responses is consequently represented as a histogram of activated code vectors. Discriminating MCI patients from healthy controls is based on the deduced response profiles and carried out by means of a standard machine learning procedure. The novel representation was found to significantly improve MCI detection with respect to the standard alternative obtained via ensemble averaging (by 13% in sensitivity and 6% in specificity). Hence, the role of cognitive ERPs as a biomarker for MCI can be enhanced by adopting the fine-grained description of our VQ scheme.
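
A sketch of the representation under stated assumptions: k-means stands in for the customized VQ scheme, each single-trial response is encoded by its nearest code vector, and a subject becomes a histogram of activated code vectors that can feed a standard classifier. Data and codebook size are illustrative.

```python
# Codebook learning and histogram-of-code-vectors subject profiles.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(4)
trials = rng.normal(size=(200, 64))        # toy single-trial ERP segments
codebook = KMeans(n_clusters=16, n_init=10, random_state=0).fit(trials)

def subject_profile(subject_trials, codebook, n_codes=16):
    codes = codebook.predict(subject_trials)    # trajectory -> code sequence
    hist = np.bincount(codes, minlength=n_codes)
    return hist / hist.sum()                    # normalized response profile

profile = subject_profile(trials[:50], codebook)  # input to the classifier
```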

Relevance: 80.00%

Abstract:

BACKGROUND: Clinical prognostic groupings for localised prostate cancers are imprecise, with 30-50% of patients recurring after image-guided radiotherapy or radical prostatectomy. We aimed to test combined genomic and microenvironmental indices in prostate cancer to improve risk stratification and complement clinical prognostic factors. METHODS: We used DNA-based indices alone or in combination with intra-prostatic hypoxia measurements to develop four prognostic indices in 126 low-risk to intermediate-risk patients (Toronto cohort) treated with image-guided radiotherapy. We validated these indices in two independent cohorts of 154 (Memorial Sloan Kettering Cancer Center [MSKCC] cohort) and 117 (Cambridge cohort) radical prostatectomy specimens from low-risk to high-risk patients. We applied unsupervised and supervised machine learning techniques to the copy-number profiles of the 126 pre-image-guided-radiotherapy diagnostic biopsies to develop prognostic signatures. Our primary endpoint was the development of a set of prognostic measures capable of stratifying patients by risk of biochemical relapse 5 years after primary treatment. FINDINGS: Biochemical relapse was associated with indices of tumour hypoxia, genomic instability, and genomic subtypes based on multivariate analyses. We identified four genomic subtypes of prostate cancer with different 5-year biochemical relapse-free survival. Genomic instability was prognostic for relapse in both image-guided radiotherapy (multivariate analysis hazard ratio [HR] 4·5 [95% CI 2·1-9·8]; p=0·00013; area under the receiver operator curve [AUC] 0·70 [95% CI 0·65-0·76]) and radical prostatectomy (4·0 [1·6-9·7]; p=0·0024; AUC 0·57 [0·52-0·61]) patients, and its effect was magnified by intratumoral hypoxia (3·8 [1·2-12]; p=0·019; AUC 0·67 [0·61-0·73]). A novel 100-loci DNA signature accurately classified treatment outcome in the MSKCC low-risk to intermediate-risk cohort (multivariate analysis HR 6·1 [95% CI 2·0-19]; p=0·0015; AUC 0·74 [95% CI 0·65-0·83]). In the independent MSKCC and Cambridge cohorts, this signature identified low-risk to high-risk patients who were most likely to fail treatment within 18 months (combined cohorts multivariate analysis HR 2·9 [95% CI 1·4-6·0]; p=0·0039; AUC 0·68 [95% CI 0·63-0·73]), and was better at predicting biochemical relapse than 23 previously published RNA signatures. INTERPRETATION: This is the first study of cancer outcome to integrate DNA-based and microenvironment-based failure indices to predict patient outcome. Patients exhibiting these aggressive features after biopsy should be entered into treatment intensification trials. FUNDING: Movember Foundation, Prostate Cancer Canada, Ontario Institute for Cancer Research, Canadian Institute for Health Research, NIHR Cambridge Biomedical Research Centre, The University of Cambridge, Cancer Research UK, Cambridge Cancer Charity, Prostate Cancer UK, Hutchison Whampoa Limited, Terry Fox Research Institute, Princess Margaret Cancer Centre Foundation, PMH-Radiation Medicine Program Academic Enrichment Fund, Motorcycle Ride for Dad (Durham), Canadian Cancer Society.
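
A hedged sketch of how a DNA-signature score and a hypoxia index could stratify patients for biochemical relapse with a Cox proportional hazards model, the workhorse behind the hazard ratios quoted above; all column names and data below are synthetic placeholders, not the study's variables or results.

```python
# Toy Cox proportional hazards fit for signature-based risk stratification.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(5)
df = pd.DataFrame({
    "signature_score": rng.normal(size=150),   # e.g. a 100-loci signature
    "hypoxia_index": rng.normal(size=150),     # intra-prostatic hypoxia
    "months": rng.exponential(60, size=150),   # follow-up time
    "relapse": rng.integers(0, 2, size=150),   # biochemical relapse event
})
cph = CoxPHFitter()
cph.fit(df, duration_col="months", event_col="relapse")
cph.print_summary()                            # hazard ratio per covariate
```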

Relevance: 80.00%

Abstract:

Approximate models (proxies) can be employed to reduce the computational cost of estimating uncertainty. The price to pay is that the approximations introduced by the proxy model can lead to biased estimates. To avoid this problem and ensure reliable uncertainty quantification, we propose to combine functional data analysis and machine learning to build error models that allow us to obtain an accurate prediction of the exact response without solving the exact model for every realization. We build the relationship between the proxy and the exact model on a learning set of geostatistical realizations for which both the exact and the approximate solvers are run. Functional principal component analysis (FPCA) is used to investigate the variability in the two sets of curves and to reduce the dimensionality of the problem while maximizing the retained information. Once obtained, the error model can be used to predict the exact response of any realization on the basis of the proxy response alone. This methodology is purpose-oriented, as the error model is constructed directly for the quantity of interest rather than for the state of the system. Moreover, the dimensionality reduction performed by FPCA enables a diagnostic of the quality of the error model, assessing the informativeness of the learning set and the fidelity of the proxy to the exact model. The possibility of predicting the exact response for any newly generated realization suggests that the methodology can be used effectively beyond uncertainty quantification, in particular for Bayesian inference and optimization.
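
A minimal sketch of the error-model idea, assuming PCA on discretized curves as a stand-in for FPCA: compress the proxy and exact responses of the learning set, regress exact scores on proxy scores, then predict the exact response of a new realization from its proxy run alone. All data are synthetic placeholders.

```python
# Proxy-to-exact error model via PCA score-space regression.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(6)
t = np.linspace(0.0, 1.0, 100)
proxy = rng.normal(size=(50, 1)) * np.sin(2 * np.pi * t)  # learning-set curves
exact = 1.1 * proxy + 0.05 * rng.normal(size=(50, 100))   # exact-solver curves

pca_p, pca_e = PCA(n_components=3), PCA(n_components=3)
reg = LinearRegression().fit(pca_p.fit_transform(proxy),
                             pca_e.fit_transform(exact))  # score-space map

new_proxy = rng.normal() * np.sin(2 * np.pi * t)          # proxy-only run
pred = pca_e.inverse_transform(reg.predict(pca_p.transform(new_proxy[None, :])))
```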

Relevance: 80.00%

Abstract:

This paper presents a non-rigid free-form 2D-3D registration approach using a statistical deformation model (SDM). In our approach, the SDM is first constructed from a set of training data using a non-rigid registration algorithm based on B-spline free-form deformation, encoding a priori information about the underlying anatomy. A novel intensity-based non-rigid 2D-3D registration algorithm is then presented that iteratively fits the 3D B-spline-based SDM to the 2D X-ray images of an unseen subject, which requires a computationally expensive inversion of the instantiated deformation in each iteration. We propose to solve this challenge with a fast B-spline pseudo-inversion algorithm implemented on the graphics processing unit (GPU). Experiments conducted on C-arm and X-ray images of cadaveric femurs demonstrate the efficacy of the proposed approach.
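
The abstract does not detail the fast B-spline pseudo-inversion; as a rough stand-in, the sketch below inverts a generic displacement field by the classic fixed-point iteration v(x) = -u(x + v(x)) on a 1-D grid, to convey what inverting an instantiated deformation involves. The 1-D setting and the field u are illustrative assumptions.

```python
# Fixed-point inversion of a 1-D displacement field (illustrative stand-in).
import numpy as np

def invert_displacement(u, x, n_iter=50):
    """Return v such that (x + u(x)) followed by (x + v(x)) is ~identity."""
    v = np.zeros_like(u)
    for _ in range(n_iter):
        v = -np.interp(x + v, x, u)        # v <- -u(x + v)
    return v

x = np.linspace(0.0, 1.0, 200)
u = 0.05 * np.sin(2 * np.pi * x)           # forward deformation x -> x + u(x)
v = invert_displacement(u, x)
print(np.max(np.abs(u + np.interp(x + u, x, v))))   # residual, ~0 if inverted
```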

Relevance: 80.00%

Abstract:

We present a novel surrogate-model-based global optimization framework allowing a large number of function evaluations. The method, called SpLEGO, is based on a multi-scale expected improvement (EI) framework relying on both sparse and local Gaussian process (GP) models. First, a bi-objective approach relying on a global sparse GP model is used to determine potential next sampling regions. Local GP models are then constructed within each selected region. The method subsequently employs the standard expected improvement criterion to deal with the exploration-exploitation trade-off within the selected local models, leading to a decision on where to perform the next function evaluation(s). The potential of our approach is demonstrated using the so-called sparse pseudo-input GP as the global model. The algorithm is tested on four benchmark problems, whose number of starting points ranges from 10^2 to 10^4. Our results show that SpLEGO is effective and capable of solving problems with a large number of starting points, and it even provides significant advantages when compared with state-of-the-art EI algorithms.
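
For reference, the standard expected improvement criterion used within each local model takes a closed form; the sketch below assumes a minimization problem, with EI(x) = (f_min - m)Phi(z) + s*phi(z) and z = (f_min - m)/s, where m and s are the GP posterior mean and standard deviation. The candidate values are illustrative.

```python
# Closed-form expected improvement for GP-based minimization.
import numpy as np
from scipy.stats import norm

def expected_improvement(mean, std, f_min):
    z = (f_min - mean) / np.maximum(std, 1e-12)   # guard against zero std
    return (f_min - mean) * norm.cdf(z) + std * norm.pdf(z)

mean = np.array([0.20, 0.00, -0.10])   # GP posterior means at candidates
std = np.array([0.05, 0.30, 0.01])     # GP posterior standard deviations
print(expected_improvement(mean, std, f_min=0.05))   # evaluate the argmax next
```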

Relevance: 80.00%

Abstract:

This work deals with the parallel optimization of expensive objective functions that are modelled as sample realizations of Gaussian processes. The study is formalized as a Bayesian optimization problem, or continuous multi-armed bandit problem, where a batch of q > 0 arms is pulled in parallel at each iteration. Several algorithms have been developed for choosing batches by trading off exploitation and exploration. As of today, the maximum Expected Improvement (EI) and Upper Confidence Bound (UCB) selection rules appear to be the most prominent approaches for batch selection. Here, we build upon recent work on the multipoint Expected Improvement criterion, for which an analytic expansion relying on Tallis' formula was recently established. As the computational burden of this selection rule is still an issue in applications, we derive a closed-form expression for the gradient of the multipoint Expected Improvement, which facilitates its maximization using gradient-based ascent algorithms. Substantial computational savings are shown in application. In addition, our algorithms are tested numerically and compared to state-of-the-art UCB-based batch-sequential algorithms. Combining UCB-based starting designs with gradient-based EI local optimization finally appears to be a sound option for batch design in distributed Gaussian process optimization.
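
To illustrate the structure of such a closed-form gradient, the single-point (q = 1) analogue is sketched below from the standard EI formula; the paper's multipoint (q > 1) expression is considerably more involved. Here T is the current best value and m, s the GP posterior mean and standard deviation.

```latex
\[
  \mathrm{EI}(x) = \bigl(T - m(x)\bigr)\,\Phi(z) + s(x)\,\varphi(z),
  \qquad z = \frac{T - m(x)}{s(x)},
\]
\[
  \nabla\,\mathrm{EI}(x) = -\Phi(z)\,\nabla m(x) + \varphi(z)\,\nabla s(x),
\]
% the terms involving \varphi'(z) = -z\,\varphi(z) cancel, leaving only the
% gradients of the posterior mean and standard deviation.
```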