915 results for High-throughput assay method
Abstract:
Metabolomics, one of the most rapidly growing technologies in the "-omics" field, denotes the comprehensive analysis of low molecular-weight compounds and their pathways. Cancer-specific alterations of the metabolome can be detected by high-throughput mass-spectrometric metabolite profiling and serve as a considerable source of new markers for the early differentiation of malignant diseases as well as their distinction from benign states. However, a comprehensive framework for the statistical evaluation of marker panels in a multi-class setting has not yet been established. We collected serum samples of 40 pancreatic carcinoma patients, 40 controls, and 23 pancreatitis patients according to standard protocols and generated amino acid profiles by routine mass spectrometry. In an intrinsic three-class bioinformatic approach we compared these profiles, evaluated their selectivity, and computed multi-marker panels combined with the conventional tumor marker CA 19-9. Additionally, we tested for non-inferiority and superiority to determine the diagnostic surplus value of our multi-metabolite marker panels. Compared to CA 19-9 alone, the combined amino acid-based metabolite panel had a superior selectivity for the discrimination of healthy controls, pancreatitis, and pancreatic carcinoma patients [Formula: see text]. We combined highly standardized samples, a three-class study design, a high-throughput mass-spectrometric technique, and a comprehensive bioinformatic framework to identify metabolite panels selective for all three groups in a single approach. Our results suggest that metabolomic profiling necessitates appropriate evaluation strategies and, despite all its current limitations, can deliver marker panels with high selectivity even in multi-class settings.
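As a hedged illustration of the kind of comparison this abstract describes, the sketch below contrasts the three-class selectivity of a single marker against a combined panel using multinomial logistic regression with cross-validation. The synthetic data, class sizes, and all variable names are assumptions for illustration, not the authors' pipeline.

```python
# A minimal sketch (not the authors' pipeline): comparing the three-class
# selectivity of CA 19-9 alone versus CA 19-9 plus an amino acid panel,
# using multinomial logistic regression and cross-validation.
# The synthetic data and variable names are illustrative assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 103                                   # 40 carcinoma, 40 controls, 23 pancreatitis
y = np.repeat([0, 1, 2], [40, 40, 23])    # class labels
ca19_9 = rng.normal(loc=y * 1.0, scale=1.5, size=n)           # synthetic tumor marker
amino_acids = rng.normal(loc=y[:, None] * 0.5, scale=1.0,
                         size=(n, 5))                          # synthetic 5-AA panel

def cv_accuracy(X):
    """Cross-validated three-class accuracy of a multinomial logistic model."""
    model = LogisticRegression(max_iter=1000)
    return cross_val_score(model, X, y, cv=5).mean()

acc_marker = cv_accuracy(ca19_9[:, None])
acc_panel = cv_accuracy(np.column_stack([ca19_9[:, None], amino_acids]))
print(f"CA 19-9 alone: {acc_marker:.2f}, combined panel: {acc_panel:.2f}")
```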
Abstract:
Point-of-care testing (POCT) remains under scrutiny by healthcare professionals because of its short and largely untried history. POCT methods are being developed by a few major equipment companies on the basis of rapid progress in informatics and nanotechnology. Issues such as POCT quality control, comparability with standard laboratory procedures, standardisation, traceability, and round robin testing are being left to hospitals. As a result, the clinical and operational benefits of POCT first became evident for patients on the operating table. For the management of cardiovascular surgery patients, POCT technology is an indispensable aid. Improvement of the technology has meant that clinical laboratory pathologists now recognise the need for POCT beyond their high-throughput areas.
Abstract:
High-throughput SNP arrays provide estimates of genotypes for up to one million loci and are often used in genome-wide association studies. While these estimates are typically very accurate, genotyping errors do occur and can influence in particular the most extreme test statistics and p-values. Estimates of the genotype uncertainties are also available, although they are typically ignored. In this manuscript, we develop a framework to incorporate these genotype uncertainties in case-control studies for any genetic model. We verify that the assumption of a “local alternative” in the score test is very reasonable for effect sizes typically seen in SNP association studies, and show that the power of the score test is simply a function of the correlation of the genotype probabilities with the true genotypes. We demonstrate that the power to detect a true association can be substantially increased for difficult-to-call genotypes, resulting in improved inference in association studies.
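A minimal sketch of the central idea, under the assumption that genotype probabilities are summarized as expected allele dosages and tested with an additive-model trend (score) test; the Dirichlet-generated probabilities and case labels below are purely illustrative.

```python
# Illustrative sketch: use expected genotype dosages
# E[g] = 0*P(AA) + 1*P(AB) + 2*P(BB) instead of hard genotype calls in a
# case-control trend (score) test. Data and names are made up for illustration.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n = 2000
probs = rng.dirichlet([5, 3, 2], size=n)      # P(AA), P(AB), P(BB) per subject
dosage = probs @ np.array([0.0, 1.0, 2.0])    # expected allele count
case = rng.integers(0, 2, size=n)             # 0 = control, 1 = case

# Score test for association between dosage and case status under an
# additive model; equivalent to a correlation (trend) test here.
r, _ = stats.pearsonr(dosage, case)
score_stat = n * r**2                          # ~ chi-square(1) under H0
p_value = stats.chi2.sf(score_stat, df=1)
print(f"score statistic = {score_stat:.3f}, p = {p_value:.3g}")
```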
Abstract:
The last few years have seen the advent of high-throughput technologies to analyze various properties of the transcriptome and proteome of several organisms. The congruency of these different data sources, or lack thereof, can shed light on the mechanisms that govern cellular function. A central challenge for bioinformatics research is to develop a unified framework for combining the multiple sources of functional genomics information and testing associations between them, thus obtaining a robust and integrated view of the underlying biology. We present a graph theoretic approach to test the significance of the association between multiple disparate sources of functional genomics data by proposing two statistical tests, namely edge permutation and node label permutation tests. We demonstrate the use of the proposed tests by finding significant association between a Gene Ontology-derived "predictome" and data obtained from mRNA expression and phenotypic experiments for Saccharomyces cerevisiae. Moreover, we employ the graph theoretic framework to recast a surprising discrepancy presented in Giaever et al. (2002) between gene expression and knockout phenotype, using expression data from a different set of experiments.
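To make the node label permutation test concrete, here is a minimal sketch under simplifying assumptions: a synthetic random graph standing in for, e.g., a GO-derived "predictome", and a binary node attribute standing in for a phenotype class. It is not the authors' implementation.

```python
# A minimal node-label permutation test: given a graph and a binary node
# attribute from another data source, test whether attribute-sharing nodes
# are connected more often than expected by chance.
import numpy as np

rng = np.random.default_rng(2)
n_nodes = 200
edges = [(i, j) for i in range(n_nodes) for j in range(i + 1, n_nodes)
         if rng.random() < 0.02]               # synthetic sparse graph
labels = rng.integers(0, 2, size=n_nodes)      # e.g. phenotype class per gene

def concordant_edges(labels):
    """Test statistic: number of edges whose endpoints share a label."""
    return sum(labels[i] == labels[j] for i, j in edges)

observed = concordant_edges(labels)
perm_stats = np.array([concordant_edges(rng.permutation(labels))
                       for _ in range(1000)])   # node-label permutations
p_value = (1 + np.sum(perm_stats >= observed)) / (1 + len(perm_stats))
print(f"observed = {observed}, permutation p = {p_value:.3f}")
```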
Abstract:
Advances in computational biology have made simultaneous monitoring of thousands of features possible. High-throughput technologies not only bring about a much richer information context in which to study various aspects of gene function, but they also present the challenge of analyzing data with a large number of covariates and few samples. As an integral part of machine learning, classification of samples into two or more categories is almost always of interest to scientists. In this paper, we address the question of classification in this setting by extending partial least squares (PLS), a popular dimension reduction tool in chemometrics, to the context of generalized linear regression, building on a previous approach, Iteratively ReWeighted Partial Least Squares (IRWPLS; Marx, 1996). We compare our results with two-stage PLS (Nguyen and Rocke, 2002a; Nguyen and Rocke, 2002b) and other classifiers. We show that by phrasing the problem in a generalized linear model setting and by applying bias correction to the likelihood to avoid (quasi)separation, we often obtain lower classification error rates.
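For orientation, the following is a sketch of the two-stage PLS comparator cited above (dimension reduction followed by a logistic fit), not the IRWPLS method itself; the data are synthetic and the component count is an arbitrary choice.

```python
# Two-stage PLS classification in the "many covariates, few samples" regime:
# stage 1 reduces thousands of features to a few PLS components, stage 2
# fits a logistic model on those components. Synthetic data for illustration.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)
n, p, k = 60, 2000, 3                      # few samples, many covariates
X = rng.normal(size=(n, p))
y = (X[:, :10].sum(axis=1) + rng.normal(size=n) > 0).astype(int)

pls = PLSRegression(n_components=k).fit(X, y)    # stage 1: dimension reduction
T = pls.transform(X)                             # latent components
clf = LogisticRegression().fit(T, y)             # stage 2: classification
print("training accuracy:", clf.score(T, y))
```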
Abstract:
Submicroscopic changes in chromosomal DNA copy number are common and have been implicated in many heritable diseases and cancers. Recent high-throughput technologies have a resolution that permits the detection of segmental changes in DNA copy number that span thousands of base pairs across the genome. Genome-wide association studies (GWAS) may simultaneously screen for copy number-phenotype and SNP-phenotype associations as part of the analytic strategy. However, genome-wide array analyses are particularly susceptible to batch effects, as the logistics of preparing DNA and processing thousands of arrays often involve multiple laboratories and technicians, or changes over calendar time to the reagents and laboratory equipment. Failure to adjust for batch effects can lead to incorrect inference and requires inefficient post-hoc quality control procedures that exclude regions associated with batch. Our work extends previous model-based approaches to copy number estimation by explicitly modeling batch effects and using shrinkage to improve locus-specific estimates of copy number uncertainty. Key features of this approach include the use of diallelic genotype calls from experimental data to estimate batch- and locus-specific parameters of background and signal without requiring training data. We illustrate these ideas using a study of bipolar disease and a study of chromosome 21 trisomy. The former has batch effects that dominate much of the observed variation in quantile-normalized intensities, while the latter illustrates the robustness of our approach to datasets in which as many as 25% of the samples have altered copy number. Locus-specific estimates of copy number can be plotted on the copy-number scale to investigate mosaicism and to guide the choice of appropriate downstream approaches for smoothing copy number as a function of physical position. The software is open source and implemented in the R package CRLMM, available from Bioconductor (http://www.bioconductor.org).
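The authors' model is implemented in R/Bioconductor; purely as a toy illustration of one ingredient, the sketch below removes an additive batch effect from synthetic normalized intensities before converting them to copy number. The linear calibration, batch offsets, and copy-number mixture are invented for illustration and are far simpler than the actual multilevel model.

```python
# Toy illustration only: remove additive batch offsets from intensities,
# then invert a (made-up) linear intensity-to-copy-number calibration.
import numpy as np

rng = np.random.default_rng(4)
n = 300
batch = rng.integers(0, 3, size=n)              # three processing batches
true_cn = rng.choice([1, 2, 3], size=n, p=[0.1, 0.8, 0.1])
batch_shift = np.array([0.0, 0.4, -0.3])[batch] # batch-specific offsets
intensity = 0.5 * true_cn + batch_shift + rng.normal(0, 0.1, size=n)

# Estimate and remove batch-specific offsets relative to the overall mean.
adjusted = intensity.copy()
for b in range(3):
    adjusted[batch == b] -= intensity[batch == b].mean() - intensity.mean()

cn_estimate = adjusted / 0.5                    # invert the linear calibration
print("mean abs error:", np.abs(cn_estimate - true_cn).mean().round(3))
```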
Abstract:
Functional neuroimaging techniques enable investigations into the neural basis of human cognition, emotions, and behaviors. In practice, applications of functional magnetic resonance imaging (fMRI) have provided novel insights into the neuropathophysiology of major psychiatric, neurological, and substance abuse disorders, as well as into the neural responses to their treatments. Modern activation studies often compare localized task-induced changes in brain activity between experimental groups. One may also extend voxel-level analyses by simultaneously considering the ensemble of voxels constituting an anatomically defined region of interest (ROI) or by considering means or quantiles of the ROI. In this work we present a Bayesian extension of voxel-level analyses that offers several notable benefits. First, it combines whole-brain voxel-by-voxel modeling and ROI analyses within a unified framework. Second, an unstructured variance/covariance matrix for regional mean parameters allows for the study of inter-regional functional connectivity, provided enough subjects are available to allow for accurate estimation. Finally, an exchangeable correlation structure within regions allows for the consideration of intra-regional functional connectivity. We perform estimation for our model using Markov chain Monte Carlo (MCMC) techniques implemented via Gibbs sampling which, despite the high-throughput nature of the data, can be executed quickly (in less than 30 minutes). We apply our Bayesian hierarchical model to two novel fMRI data sets: one considering inhibitory control in cocaine-dependent men and the second considering verbal memory in subjects at high risk for Alzheimer’s disease. The unifying hierarchical model presented in this manuscript is shown to enhance the interpretability of these data sets.
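As a much-reduced illustration of Gibbs sampling in this setting, the toy sampler below alternates between the full conditionals of a single region's mean and variance. The priors and data are assumptions, and the model is far simpler than the hierarchical model described above.

```python
# Toy two-parameter Gibbs sampler: normal mean and variance for one region's
# voxel contrasts. Priors and data are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(5)
y = rng.normal(loc=1.0, scale=0.5, size=50)   # voxel contrasts in one ROI
n = len(y)
mu0, tau2 = 0.0, 1.0                           # prior: mu   ~ N(mu0, tau2)
a0, b0 = 2.0, 1.0                              # prior: sig2 ~ Inv-Gamma(a0, b0)

mu, sig2, draws = 0.0, 1.0, []
for _ in range(3000):
    # Full conditional for mu given sig2 (conjugate normal update).
    prec = n / sig2 + 1 / tau2
    mu = rng.normal((y.sum() / sig2 + mu0 / tau2) / prec, np.sqrt(1 / prec))
    # Full conditional for sig2 given mu (inverse-gamma update).
    sig2 = 1 / rng.gamma(a0 + n / 2, 1 / (b0 + 0.5 * ((y - mu) ** 2).sum()))
    draws.append((mu, sig2))

mu_post = np.array([d[0] for d in draws[500:]])    # discard burn-in
print(f"posterior mean of mu: {mu_post.mean():.3f}")
```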
Abstract:
Functional magnetic resonance imaging (fMRI) is a non-invasive technique commonly used to quantify changes in blood oxygenation and flow coupled to neuronal activation. One of the primary goals of fMRI studies is to identify localized brain regions where neuronal activation levels vary between groups. Single-voxel t-tests have commonly been used to determine whether activation related to the protocol differs across groups. Because the number of subjects within each study is generally limited, accurate estimation of the variance at each voxel is difficult. Combining information across voxels in the statistical analysis of fMRI data is therefore desirable in order to improve efficiency. Here we construct a hierarchical model and apply an empirical Bayes framework to the analysis of group fMRI data, employing techniques used in high-throughput genomic studies. The key idea is to shrink residual variances by combining information across voxels, and subsequently to construct an improved test statistic in lieu of the classical t-statistic. This hierarchical model results in a shrinkage of voxel-wise residual sample variances towards a common value. The shrunken estimator for voxel-specific variance components in the group analyses outperforms the classical residual error estimator in terms of mean squared error. Moreover, the shrunken test statistic decreases the false positive rate when testing differences in brain contrast maps across a wide range of simulation studies. The methodology was also applied to experimental data from a cognitive activation task.
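A sketch of the variance-shrinkage idea borrowed from high-throughput genomics (in the spirit of the moderated t-statistic): each voxel's residual variance is pulled toward a common value before the test statistic is formed. The prior degrees of freedom and the simulated contrast maps are illustrative assumptions.

```python
# Shrink voxel-wise sample variances toward a common value, then form a
# moderated t-statistic. Data and the prior df d0 are assumptions.
import numpy as np

rng = np.random.default_rng(6)
n_voxels, n_subjects = 10000, 12
contrasts = rng.normal(0, 1, size=(n_voxels, n_subjects))  # group contrast maps

means = contrasts.mean(axis=1)
s2 = contrasts.var(axis=1, ddof=1)                 # voxel-wise sample variances
d = n_subjects - 1                                 # residual degrees of freedom

d0, s0_sq = 4.0, s2.mean()                         # prior df and prior variance
s2_shrunk = (d0 * s0_sq + d * s2) / (d0 + d)       # shrink toward common value

t_classical = means / np.sqrt(s2 / n_subjects)
t_moderated = means / np.sqrt(s2_shrunk / n_subjects)   # extra df: d + d0
print("sd of classical t:", t_classical.std().round(2),
      "| moderated t:", t_moderated.std().round(2))
```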
Abstract:
Genotyping platforms such as Affymetrix can be used to assess genotype-phenotype as well as copy number-phenotype associations at millions of markers. While genotyping algorithms are largely concordant when assessed on HapMap samples, tools to assess copy number changes are more variable and often discordant. One explanation for the discordance is that copy number estimates are susceptible to systematic differences between groups of samples that were processed at different times or by different labs. Analysis algorithms that do not adjust for batch effects are prone to spurious measures of association. The R package crlmm implements a multilevel model that adjusts for batch effects and provides allele-specific estimates of copy number. This paper illustrates a workflow for the estimation of allele-specific copy number, develops marker- and study-level summaries of batch effects, and demonstrates how the marker-level estimates can be integrated with complementary Bioconductor software for inferring regions of copy number gain or loss. All analyses are performed in the statistical environment R. A compendium for reproducing the analysis is available from the author’s website (http://www.biostat.jhsph.edu/~rscharpf/crlmmCompendium/index.html).
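The workflow itself is carried out in R with crlmm; purely as a language-agnostic illustration of what a marker-level batch summary might look like, the sketch below computes a one-way ANOVA F-statistic per marker across synthetic batches. It is not part of, and does not reproduce, the crlmm package.

```python
# Marker-level batch summary (illustrative): one-way ANOVA F per marker
# across batches, on synthetic intensities with a planted batch effect.
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
n_markers, n_samples = 500, 90
batch = np.repeat([0, 1, 2], 30)                      # three batches
X = rng.normal(size=(n_markers, n_samples))
X[:50, batch == 1] += 0.8                             # 50 batch-affected markers

f_stats = np.array([stats.f_oneway(*(x[batch == b] for b in range(3))).statistic
                    for x in X])
print("markers flagged (F > 10):", int((f_stats > 10).sum()))
```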
Abstract:
The detection of virulence determinants harbored by pathogenic Escherichia coli is important for establishing the pathotype responsible for infection. A sensitive and specific miniaturized virulence microarray containing 60 oligonucleotide probes was developed. It detected six E. coli pathotypes and will be suitable in the future for high-throughput use.
Abstract:
Cu is an essential nutrient for man but can be toxic if intakes are too high. In sensitive populations, marginal over- or under-exposure can have detrimental effects. Malnourished children, the elderly, and pregnant or lactating females may be susceptible to Cu deficiency. Cu status and exposure in the population cannot currently be measured easily, as neither plasma Cu nor plasma cuproenzymes reflect Cu status precisely. Some blood markers (such as ceruloplasmin) indicate severe Cu depletion, but do not inversely respond to Cu excess and are not suitable for indicating marginal states. A biomarker of Cu is needed that is sensitive to small changes in Cu status and that responds to Cu excess as well as deficiency. Such a marker will aid in monitoring Cu status in large populations and will help to avoid chronic health effects (for example, liver damage in chronic toxicity; osteoporosis, loss of collagen stability, or increased susceptibility to infections in deficiency). The advent of high-throughput technologies has enabled us to screen for potential biomarkers in the whole proteome of a cell, not excluding markers that have no direct link to Cu. Furthermore, this screening allows us to search for a whole group of proteins that, in combination, reflect Cu status. The present review emphasises the need to find sensitive biomarkers for Cu, examines potential markers of Cu status already available, and discusses methods to identify a novel suite of biomarkers.
Abstract:
Irreversible, nonenzymatic glycation of the haemoglobin A beta chain leads to the formation of haemoglobin A1c (HbA1c), a stable minor haemoglobin component with enhanced electrophoretic mobility. The rate of formation of HbA1c is directly proportional to the ambient glucose concentration. HbA1c is commonly used to assess long-term blood glucose control in patients with diabetes mellitus, because the HbA1c value has been shown to predict the risk of developing many of the chronic complications of diabetes. There are currently four principal glycohaemoglobin assay techniques (ion-exchange chromatography, electrophoresis, affinity chromatography and immunoassays) and over 20 methods that measure different glycated products. The ranges indicating good and poor glycaemic control can vary markedly between different assays. At the moment, values differ between methodologies and even between different laboratories using the same methodology. Optimal use of HbA1c testing requires standardisation. There is progress towards international standardisation and improved precision of HbA1c measurement, which will lead to all assays reporting results in a standardised way. Clinicians ordering HbA1c testing for their patients should be aware of the type of assay method used, the reference interval, potential assay interferences (e.g. haemoglobinopathies, chronic alcohol ingestion, carbamylation products in uraemia) and assay performance. They should also know that a variety of factors have been shown to directly influence HbA1c values, e.g. iron deficiency anaemia, chronic renal failure and shortened red blood cell life span.
Abstract:
Trypanosoma brucei rhodesiense and T. b. gambiense are the causative agents of sleeping sickness, a fatal disease that affects 36 countries in sub-Saharan Africa. Nevertheless, only a handful of clinically useful drugs are available, and these drugs suffer from severe side-effects. The situation is further aggravated by the alarming incidence of treatment failures in several sleeping sickness foci, apparently indicating the occurrence of drug-resistant trypanosomes. For these reasons, and since vaccination does not appear to be feasible owing to the trypanosomes' ever-changing coat of variable surface glycoproteins (VSGs), new drugs are needed urgently. The entry of Trypanosoma brucei into the post-genomic age raises hopes for the identification of novel kinds of drug targets and, in turn, new treatments for sleeping sickness. The pragmatic definition of a drug target is a protein that is essential for the parasite and does not have homologues in the host. Such proteins are identified by comparing the predicted proteomes of T. brucei and Homo sapiens, then validated by large-scale gene disruption or gene silencing experiments in trypanosomes. Once all proteins that are essential and unique to the parasite are identified, inhibitors may be found by high-throughput screening. However powerful, this functional genomics approach is going to miss a number of attractive targets. Several current, successful parasiticides attack proteins that have close homologues in the human proteome. Drugs like DFMO or pyrimethamine inhibit parasite and host enzymes alike; a therapeutic window is opened only by subtle differences in the regulation of the targets, which cannot be recognized in silico. Also working against the post-genomic approach is the fact that essential proteins tend to be more highly conserved between species than non-essential ones. Here we advocate drug targeting, i.e. uptake or activation of a drug via parasite-specific pathways, as a chemotherapeutic strategy to selectively inhibit enzymes that have equally sensitive counterparts in the host. The T. brucei purine salvage machinery offers opportunities for both metabolic and transport-based targeting: unusual nucleoside and nucleobase permeases may be exploited for selective import, and salvage enzymes for selective activation of purine antimetabolites.
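To make the in-silico filtering step concrete, here is a hedged sketch that keeps parasite proteins lacking a significant human homologue. It assumes a tabular BLASTP output with columns (query, subject, percent identity, E-value); the file name, column layout, and E-value threshold are all illustrative assumptions.

```python
# Sketch of a "no close human homologue" filter over tabular BLASTP output
# from searching the T. brucei proteome against the human proteome.
# Column layout and threshold are assumptions for illustration.
import csv

def candidate_targets(blast_tsv, all_proteins, evalue_cut=1e-5):
    """Return parasite proteins lacking a significant human hit."""
    with_homologue = set()
    with open(blast_tsv) as fh:
        for qseqid, _subject, _pident, evalue in csv.reader(fh, delimiter="\t"):
            if float(evalue) < evalue_cut:
                with_homologue.add(qseqid)
    return sorted(set(all_proteins) - with_homologue)

# Hypothetical usage:
# targets = candidate_targets("tbrucei_vs_human.tsv", proteome_ids)
```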