58 results for High-throughput assay method
Abstract:
Real-time PCR protocols were developed to detect and discriminate 11 anastomosis groups (AGs) of Rhizoctonia solani using ribosomal internal transcribed spacer (ITS) regions (AG-1-IA, AG-1-IC, AG-2-1, AG-2-2, AG-4HGI+II, AG-4HGIII, AG-8) or beta-tubulin (AG-3, AG-4HGII, AG-5 and AG-9) sequences. All real-time assays were target group specific, except AG-2-2, which showed a weak cross-reaction with AG-2tabac. In addition, methods were developed for the high-throughput extraction of DNA from soil and compost samples. The DNA extraction method was used with the AG-2-1 assay and shown to be quantitative with a detection threshold of 10⁻⁷ g of R. solani per g of soil. A similar DNA extraction efficiency was observed for samples from three contrasting soil types. The developed methods were then used to investigate the spatial distribution of R. solani AG-2-1 in field soils. Soil from shallow depths of a field planted with Brassica oleracea tested positive for R. solani AG-2-1 more frequently than soil collected from greater depths. Quantification of R. solani inoculum in field samples proved challenging due to low levels of inoculum in naturally occurring soils. The potential uses of real-time PCR and DNA extraction protocols to investigate the epidemiology of R. solani are discussed.
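As an illustration of the quantification step described above, the sketch below shows how a Ct value from a real-time assay is typically converted to an amount of target DNA per gram of soil via a log-linear standard curve. All dilution amounts, Ct values and the fitted curve are hypothetical and are not taken from the study.

```python
import numpy as np

# Hypothetical dilution series: known amounts of R. solani DNA (g per g of soil)
# and the Ct values they produced in the assay (illustrative numbers only).
known_amounts = np.array([1e-3, 1e-4, 1e-5, 1e-6, 1e-7])
known_ct = np.array([18.2, 21.6, 25.1, 28.4, 31.9])

# Fit the standard curve: Ct = slope * log10(amount) + intercept.
slope, intercept = np.polyfit(np.log10(known_amounts), known_ct, 1)
efficiency = 10 ** (-1.0 / slope) - 1  # amplification efficiency implied by the slope
print(f"slope = {slope:.2f}, efficiency = {efficiency:.1%}")

def quantify(ct):
    """Interpolate an unknown Ct back to g of target DNA per g of soil."""
    return 10 ** ((ct - intercept) / slope)

# Unknown field sample (hypothetical Ct value).
print(f"estimated inoculum: {quantify(29.7):.2e} g/g soil")
```

Samples whose Ct falls beyond the lowest standard would be reported as below the detection threshold rather than quantified.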
Abstract:
Developments in high-throughput genotyping provide an opportunity to explore the application of marker technology in distinctness, uniformity and stability (DUS) testing of new varieties. We have used a large set of molecular markers to assess the feasibility of a UPOV Model 2 approach: “Calibration of threshold levels for molecular characteristics against the minimum distance in traditional characteristics”. We have examined 431 winter and spring barley varieties, with data from UK DUS trials comprising 28 characteristics, together with genotype data from 3072 SNP markers. Inter-varietal distances were calculated and we found higher correlations between molecular and morphological distances than have been previously reported. When varieties were grouped by kinship, phenotypic and genotypic distances of these groups correlated well. We estimated the minimum marker numbers required and showed there was a ceiling after which the correlations do not improve. To investigate the possibility of breaking through this ceiling, we attempted genomic prediction of phenotypes from genotypes and higher correlations were achieved. We tested distinctness decisions made using either morphological or genotypic distances and found poor correspondence between the two methods.
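The UPOV Model 2 calibration described above rests on comparing inter-varietal distance matrices computed from markers and from morphological characteristics. A minimal sketch of that comparison follows; the variety and marker counts match the abstract, but the genotype and characteristic values are random placeholders, so the resulting correlation only demonstrates the computation.

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import pearsonr

rng = np.random.default_rng(0)

# Hypothetical stand-ins for the barley set: 431 varieties, 3072 biallelic SNPs
# coded 0/1/2, and 28 scored DUS characteristics.
n_var, n_snp, n_char = 431, 3072, 28
genotypes = rng.integers(0, 3, size=(n_var, n_snp))
morphology = rng.normal(size=(n_var, n_char))

# Pairwise inter-varietal distances from each data type.
geno_dist = pdist(genotypes, metric="hamming")      # proportion of differing SNPs
morph_dist = pdist(morphology, metric="euclidean")  # distance in characteristic space

# Correlation between the two condensed distance matrices.
r, _ = pearsonr(geno_dist, morph_dist)
print(f"genotype-morphology distance correlation: r = {r:.3f}")
```

Because pairwise distances are not independent, a Mantel-type permutation test is usually preferred over a naive significance test when assessing such distance-matrix correlations.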
Abstract:
Background: Targeting Induced Local Lesions IN Genomes (TILLING) is increasingly being used to generate and identify mutations in target genes of crop genomes. TILLING populations of several thousand lines have been generated in a number of crop species including Brassica rapa. Genetic analysis of mutants identified by TILLING requires an efficient, high-throughput and cost-effective genotyping method to track the mutations through numerous generations. High resolution melt (HRM) analysis has been used in a number of systems to identify single nucleotide polymorphisms (SNPs) and insertions/deletions (InDels), enabling the genotyping of different types of samples. HRM is ideally suited to high-throughput genotyping of multiple TILLING mutants in complex crop genomes. To date it has been used to identify mutants and genotype single mutations. The aim of this study was to determine if HRM can facilitate downstream analysis of multiple mutant lines identified by TILLING in order to characterise allelic series of EMS-induced mutations in target genes across a number of generations in complex crop genomes. Results: We demonstrate that HRM can be used to genotype allelic series of mutations in two genes, BraA.CAX1.a and BraA.MET1.a, in Brassica rapa. We analysed 12 mutations in BraA.CAX1.a and five in BraA.MET1.a over two generations including a back-cross to the wild-type. Using a commercially available HRM kit and the Lightscanner™ system we were able to detect mutations in heterozygous and homozygous states for both genes. Conclusions: Using HRM genotyping on TILLING-derived mutants, it is possible to generate an allelic series of mutations within multiple target genes rapidly. Lines suitable for phenotypic analysis can be isolated approximately 8-9 months (3 generations) from receiving M3 seed of Brassica rapa from the RevGenUK TILLING service.
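HRM genotyping of the kind described above rests on comparing normalised melt curves of test samples against a wild-type reference in a difference plot. The toy sketch below is purely illustrative: the curve shapes, thresholds and sample names are synthetic, and real instruments (such as the Lightscanner system mentioned) rely on calibrated controls and curve-shape clustering rather than the simple threshold used here.

```python
import numpy as np

# Hypothetical normalised HRM melt curves (fluorescence vs temperature) for a
# wild-type reference and three test samples; all shapes and numbers are synthetic.
temps = np.linspace(75.0, 85.0, 101)

def melt_curve(tm, het=False):
    """Toy sigmoidal melt curve; in this model heteroduplexes melt earlier and more broadly."""
    curve = 1.0 / (1.0 + np.exp((temps - tm) / 0.4))
    if het:  # heterozygote = mixture of homoduplexes and earlier-melting heteroduplexes
        curve = 0.5 * curve + 0.5 / (1.0 + np.exp((temps - (tm - 1.0)) / 0.6))
    return curve

wild_type = melt_curve(80.0)
samples = {
    "sample_1": melt_curve(80.5),            # homozygous variant with a shifted Tm
    "sample_2": melt_curve(80.0, het=True),  # heterozygous (heteroduplexes present)
    "sample_3": melt_curve(80.0),            # wild type
}

# Difference-plot genotyping: subtract the reference and score the deviation.
for name, curve in samples.items():
    diff = curve - wild_type
    score = np.max(np.abs(diff))
    if score < 0.02:
        call = "wild-type-like"
    elif abs(diff.min()) > abs(diff.max()):
        # In this toy model a predominantly negative deviation means earlier melting,
        # i.e. heteroduplexes, so the sample is called heterozygous-like.
        call = "heterozygous-like"
    else:
        call = "homozygous-variant-like"
    print(f"{name}: max |deviation| = {score:.3f} -> {call}")
```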
Abstract:
Visual exploration of scientific data in the life sciences is a growing research field due to the large amount of available data. Kohonen's Self-Organizing Map (SOM) is a widely used tool for visualization of multidimensional data. In this paper we present a fast learning algorithm for SOMs that uses a simulated annealing method to adapt the learning parameters. The algorithm has been adopted in a data analysis framework for the generation of similarity maps. Such maps provide an effective tool for the visual exploration of large and multi-dimensional input spaces. The approach has been applied to data generated during the High Throughput Screening of molecular compounds; the generated maps allow a visual exploration of molecules with similar topological properties. The experimental analysis on real world data from the National Cancer Institute shows the speed-up of the proposed SOM training process in comparison to a traditional approach. The resulting visual landscape groups molecules with similar chemical properties in densely connected regions.
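A minimal sketch of SOM training in which the learning rate and neighbourhood radius are annealed by a temperature schedule is given below. It is a generic illustration of the idea, not the paper's specific simulated-annealing adaptation, and the grid size, cooling factor and toy data are arbitrary choices.

```python
import numpy as np

def train_som(data, grid=(10, 10), epochs=20, t0=1.0, cooling=0.9, rng=None):
    """Sketch of SOM training where learning rate and neighbourhood radius follow
    a geometric cooling schedule (illustrative; not the published algorithm)."""
    rng = rng or np.random.default_rng(0)
    rows, cols = grid
    weights = rng.random((rows, cols, data.shape[1]))
    # Grid coordinates of each neuron, used by the neighbourhood function.
    coords = np.stack(np.meshgrid(np.arange(rows), np.arange(cols), indexing="ij"), axis=-1)

    temperature = t0
    for _ in range(epochs):
        lr = 0.5 * temperature                            # learning rate tied to temperature
        sigma = max(1.0, 0.5 * max(grid) * temperature)   # neighbourhood radius
        for x in rng.permutation(data):
            # Best-matching unit (BMU): neuron whose weight vector is closest to x.
            dists = np.linalg.norm(weights - x, axis=-1)
            bmu = np.unravel_index(np.argmin(dists), (rows, cols))
            # Gaussian neighbourhood around the BMU on the grid.
            grid_dist = np.linalg.norm(coords - np.array(bmu), axis=-1)
            h = np.exp(-(grid_dist ** 2) / (2 * sigma ** 2))
            weights += lr * h[..., None] * (x - weights)
        temperature *= cooling                            # anneal the parameters
    return weights

# Toy usage: map 500 random 8-dimensional "compound descriptors" onto a 10x10 grid.
som = train_som(np.random.default_rng(1).random((500, 8)))
print(som.shape)  # (10, 10, 8)
```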
Abstract:
Stable isotope labeling combined with MS is a powerful method for measuring relative protein abundances, for instance, by differential metabolic labeling of some or all amino acids with 14N and 15N in cell culture or hydroponic media. These and most other types of quantitative proteomics experiments using high-throughput technologies, such as LC-MS/MS, generate large amounts of raw MS data. This data needs to be processed efficiently and automatically, from the mass spectrometer to statistically evaluated protein identifications and abundance ratios. This paper describes in detail an approach to the automated analysis of uniformly 14N/15N-labeled proteins using MASCOT peptide identification in conjunction with the trans-proteomic pipeline (TPP) and a few scripts to integrate the analysis workflow. Two large proteomic datasets from uniformly labeled Arabidopsis thaliana were used to illustrate the analysis pipeline. The pipeline can be fully automated and uses only common or freely available software.
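The final step of such a pipeline is to collapse per-peptide light/heavy intensity ratios into protein-level abundance ratios. The sketch below illustrates only that summarisation step; the protein accessions and intensities are invented, and in practice the ratios would come from the pipeline's quantification tools rather than being typed in by hand.

```python
import math
import statistics

# Hypothetical quantification output: per protein, (14N intensity, 15N intensity)
# pairs, one per identified peptide (accessions and numbers are illustrative only).
peptide_pairs = {
    "AT1G07890.1": [(3.2e6, 1.5e6), (2.8e6, 1.4e6), (4.1e6, 2.2e6)],
    "AT5G60390.1": [(9.0e5, 9.5e5), (1.1e6, 1.0e6)],
}

for protein, pairs in peptide_pairs.items():
    # Per-peptide light/heavy ratios, summarised at the protein level; log2 ratios
    # make up- and down-regulation symmetric around zero.
    log2_ratios = [math.log2(light / heavy) for light, heavy in pairs]
    median_log2 = statistics.median(log2_ratios)
    spread = statistics.stdev(log2_ratios) if len(log2_ratios) > 1 else 0.0
    print(f"{protein}: median log2(14N/15N) = {median_log2:+.2f} "
          f"(n = {len(log2_ratios)}, sd = {spread:.2f})")
```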
Abstract:
A novel and generic miniaturization methodology for the determination of partition coefficient values of organic compounds in n-octanol/water by using magnetic nanoparticles is, for the first time, described. We have successfully designed, synthesised and characterised new colloidally stable porous silica-encapsulated magnetic nanoparticles of controlled dimensions. These nanoparticles, absorbing a tiny amount of n-octanol in their porous silica over-layer, are homogeneously dispersed into a bulk aqueous phase (pH 7.40) containing an organic compound prior to magnetic separation. The small size of the particles and the efficient mixing allow a rapid establishment of the partition equilibrium of the organic compound between the solid-supported n-octanol nano-droplets and the bulk aqueous phase. UV-vis spectrophotometry is then applied as a quantitative method to determine the concentration of the organic compound in the aqueous phase both before and after partitioning (after magnetic separation). log D values of organic compounds of pharmaceutical interest (0.65-3.50), determined by this novel methodology, were found to be in excellent agreement with the values measured by the shake-flask method in two independent laboratories, which are also consistent with the literature data. This new technique offers a number of advantages, such as accurate measurement of log D values, a much shorter experimental time and a smaller required sample size. With this approach, the formation of a problematic emulsion, commonly encountered in shake-flask experiments, is eliminated. It is envisaged that this method could be applicable to the high-throughput log D screening of drug candidates. (c) 2005 Elsevier B.V. All rights reserved.
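The log D calculation itself follows from a simple mass balance: whatever compound disappears from the aqueous phase after magnetic separation is assumed to reside in the n-octanol held by the particles, and absorbance is taken as proportional to concentration so that the extinction coefficient cancels. The sketch below uses hypothetical volumes and absorbance readings, not values from the paper.

```python
import math

def log_d(abs_before, abs_after, v_aqueous_ml, v_octanol_ul):
    """Partition coefficient from aqueous-phase absorbance before/after partitioning
    (Beer-Lambert: absorbance is proportional to concentration, so the unknown
    extinction coefficient and path length cancel in the ratio)."""
    v_oct_ml = v_octanol_ul / 1000.0
    # Fraction of compound that moved into the n-octanol layer, scaled by the volume ratio.
    d = ((abs_before - abs_after) / abs_after) * (v_aqueous_ml / v_oct_ml)
    return math.log10(d)

# Hypothetical numbers: 2 mL aqueous phase, 5 uL of n-octanol held in the porous
# silica shells, absorbance falling from 0.80 to 0.55 after magnetic separation.
print(f"log D = {log_d(0.80, 0.55, v_aqueous_ml=2.0, v_octanol_ul=5.0):.2f}")
```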
Abstract:
The entropically-driven ring-opening polymerization of macrocyclic monomers (> ca. 14 ring atoms per repeat unit) and/or macrocyclic oligomers is a relatively new method of polymer synthesis that exploits the well-known phenomenon of ring-chain equilibria. It attracts interest because of its novel features. For example, these ring-opening polymerizations emit no volatiles and little or no heat. This review considers the principles of entropically-driven ring-opening polymerizations, gives selected examples and discusses potential applications. The latter include micromolding, high-throughput syntheses and the synthesis of supramolecular polymers. Copyright (c) 2005 John Wiley & Sons, Ltd.
Abstract:
We describe a high-level design method to synthesize multi-phase regular arrays. The method is based on deriving component designs using classical regular (or systolic) array synthesis techniques and composing these separately evolved component designs into a unified global design. Similarity transformations are applied to component designs in the composition stage in order to align data flow between the phases of the computations. Three transformations are considered: rotation, reflection and translation. The technique is aimed at the design of hardware components for high-throughput embedded systems applications and we demonstrate this by deriving a multi-phase regular array for the 2-D DCT algorithm, which is widely used in many video communications applications.
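The 2-D DCT mentioned above is itself a natural multi-phase computation: it can be evaluated as one phase of 1-D DCTs along the rows followed by a second phase along the columns, which is the kind of phase structure the composition technique targets. The sketch below shows that separable decomposition in software only; it says nothing about the derived regular-array hardware.

```python
import numpy as np

def dct_basis(n):
    """Orthonormal 1-D DCT-II matrix B, so that B @ v is the DCT of a length-n vector v."""
    i = np.arange(n)
    k = i[:, None]
    basis = np.cos(np.pi * (2 * i[None, :] + 1) * k / (2 * n))
    scale = np.full((n, 1), np.sqrt(2.0 / n))
    scale[0, 0] = np.sqrt(1.0 / n)
    return scale * basis

def dct_2d(block):
    """Two-phase 2-D DCT: phase 1 transforms the rows, phase 2 the columns."""
    rows, cols = block.shape
    phase1 = block @ dct_basis(cols).T   # 1-D DCTs applied along each row
    phase2 = dct_basis(rows) @ phase1    # 1-D DCTs applied along each column
    return phase2

# Quick check on a random 8x8 block: the DC coefficient of an orthonormal
# 8x8 2-D DCT equals 8 times the mean of the block.
block = np.random.default_rng(0).normal(size=(8, 8))
print(round(dct_2d(block)[0, 0], 6), round(8 * block.mean(), 6))
```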
Abstract:
The animal gastrointestinal tract houses a large microbial community, the gut microbiota, that confers many benefits to its host, such as protection from pathogens and provision of essential metabolites. Metagenomic approaches have defined the chicken fecal microbiota in other studies, but here, we wished to assess the correlation between the metagenome and the bacterial proteome in order to better understand the healthy chicken gut microbiota. We performed high-throughput sequencing of 16S rRNA gene amplicons and metaproteomics analysis of fecal samples to determine microbial gut composition and protein expression. 16S rRNA gene sequencing analysis identified Clostridiales, Bacteroidaceae, and Lactobacillaceae as the most abundant taxa in the gut. For metaproteomics analysis, peptides were generated by using the FASP method and subsequently fractionated by strong anion exchange. Metaproteomics analysis identified 3,673 proteins. Among the most frequently identified proteins, 380 proteins belonged to Lactobacillus spp., 155 belonged to Clostridium spp., and 66 belonged to Streptococcus spp. The most frequently identified proteins were heat shock chaperones, including 349 GroEL proteins, from many bacterial species, whereas the most abundant enzymes were pyruvate kinases, as judged by the number of peptides identified per protein (spectral counting). Gene ontology and KEGG pathway analyses revealed the functions and locations of the identified proteins. The findings of both metaproteomics and 16S rRNA sequencing analyses are discussed.
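Spectral counting, used above to rank pyruvate kinases as the most abundant enzymes, simply tallies the number of peptide-spectrum matches assigned to each protein as a rough proxy for relative abundance. A minimal sketch with invented protein names follows.

```python
from collections import Counter

# Hypothetical peptide-spectrum matches (PSMs): each entry names the protein that a
# fragmentation spectrum was matched to (protein names are illustrative only).
psm_assignments = [
    "Lactobacillus_GroEL", "Lactobacillus_GroEL", "Clostridium_PyruvateKinase",
    "Lactobacillus_PyruvateKinase", "Clostridium_PyruvateKinase",
    "Streptococcus_Enolase", "Lactobacillus_GroEL",
]

# Spectral counting: PSMs per protein, ranked from most to least frequently observed.
spectral_counts = Counter(psm_assignments)
for protein, count in spectral_counts.most_common():
    print(f"{protein}: {count} spectra")
```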
Abstract:
High bandwidth-efficiency quadrature amplitude modulation (QAM) signaling, widely adopted in high-rate communication systems, suffers from the drawback of a high peak-to-average power ratio, which may cause nonlinear saturation of the high power amplifier (HPA) at the transmitter. Thus, practical high-throughput QAM communication systems exhibit nonlinear and dispersive channel characteristics that must be modeled as a Hammerstein channel. Standard linear equalization becomes inadequate for such Hammerstein communication systems. In this paper, we advocate an adaptive B-spline neural network based nonlinear equalizer. Specifically, during the training phase, an efficient alternating least squares (LS) scheme is employed to estimate the parameters of the Hammerstein channel, including both the channel impulse response (CIR) coefficients and the parameters of the B-spline neural network that models the HPA's nonlinearity. In addition, another B-spline neural network is used to model the inversion of the nonlinear HPA, and the parameters of this inverting B-spline model can easily be estimated using the standard LS algorithm based on the pseudo training data obtained as a natural byproduct of the Hammerstein channel identification. Nonlinear equalization of the Hammerstein channel is then accomplished by linear equalization based on the estimated CIR together with the inverse B-spline neural network model. Furthermore, during the data communication phase, decision-directed LS channel estimation is adopted to track the time-varying CIR. Extensive simulation results demonstrate the effectiveness of our proposed B-spline neural network based nonlinear equalization scheme.
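The core of the training phase described above is an alternating least-squares identification of the Hammerstein channel: holding the nonlinearity fixed to estimate the CIR, then holding the CIR fixed to re-estimate the nonlinearity. The sketch below illustrates only that alternation, with two deliberate simplifications: a cubic polynomial basis stands in for the paper's B-spline neural network, and a real-valued toy signal stands in for QAM symbols.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- Simulate a Hammerstein channel: a memoryless HPA nonlinearity followed by a linear CIR.
def hpa(x):
    """Toy memoryless amplifier nonlinearity (mild odd-order compression)."""
    return x - 0.15 * x ** 3

cir_true = np.array([1.0, 0.45, -0.2])                       # channel impulse response
x = rng.uniform(-1.0, 1.0, 2000)                             # real-valued training signal
y = np.convolve(hpa(x), cir_true)[: len(x)] + 0.01 * rng.normal(size=len(x))

# --- Alternating least-squares identification (polynomial basis used instead of B-splines
# --- purely to keep the sketch short).
def basis(sig):
    return np.vstack([sig, sig ** 2, sig ** 3]).T            # phi_1..phi_3

def delayed(mat, lag):
    """Delay the rows of a (time x features) matrix by 'lag' samples, zero-padded."""
    return np.vstack([np.zeros((lag, mat.shape[1])), mat[: len(mat) - lag]])

n_taps, phi = 3, basis(x)
a = np.array([1.0, 0.0, 0.0])                                # start from a linear model
for _ in range(10):
    v_hat = phi @ a                                          # output of the estimated nonlinearity
    conv_mat = np.stack([delayed(v_hat[:, None], l)[:, 0] for l in range(n_taps)], axis=1)
    h, *_ = np.linalg.lstsq(conv_mat, y, rcond=None)         # step 1: fix a, solve for the CIR
    design = sum(h[l] * delayed(phi, l) for l in range(n_taps))
    a, *_ = np.linalg.lstsq(design, y, rcond=None)           # step 2: fix h, solve for the nonlinearity
    h, a = h * a[0], a / a[0]                                # resolve the gain ambiguity

print("estimated CIR:", np.round(h, 3), " true CIR:", cir_true)
print("estimated nonlinearity coefficients:", np.round(a, 3))
```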
Abstract:
Ruminant husbandry is a major source of anthropogenic greenhouse gases (GHG). Filling knowledge gaps and providing expert recommendations are important for defining future research priorities, improving methodologies and establishing science-based GHG mitigation solutions for government and non-governmental organisations, advisory/extension networks, and the ruminant livestock sector. The objective of this review is to summarize the published literature to provide a detailed assessment of the methodologies currently in use for measuring enteric methane (CH4) emission from individual animals under specific conditions, and to give recommendations regarding their application. The methods described include respiration chambers and enclosures, the sulphur hexafluoride (SF6) tracer technique, and techniques based on short-term measurements of gas concentrations in samples of exhaled air. The latter include automated head chambers (e.g. the GreenFeed system), the use of carbon dioxide (CO2) as a marker, and (handheld) laser CH4 detection. Each of the techniques is compared and assessed on its capabilities and limitations, followed by methodology recommendations. It is concluded that there is no ‘one size fits all’ method for measuring CH4 emission by individual animals. Ultimately, the decision as to which method to use should be based on the experimental objectives and resources available. However, the need for high-throughput methodology, e.g. for screening large numbers of animals for genomic studies, does not justify the use of methods that are inaccurate. All CH4 measurement techniques are subject to experimental variation and random errors. Many sources of variation must be considered when measuring CH4 concentration in exhaled air samples without a quantitative or at least regular collection rate, or without use of a marker to indicate (or adjust for) the proportion of exhaled CH4 sampled. Consideration of the number and timing of measurements relative to diurnal patterns of CH4 emission and respiratory exchange is important, as is consideration of feeding patterns and associated patterns of rumen fermentation rate and other aspects of animal behaviour. Regardless of the method chosen, appropriate calibrations and recovery tests are required for both method establishment and routine operation. Successful and correct use of methods requires careful attention to detail, rigour, and routine self-assessment of the quality of the data they provide.
Abstract:
The authors present a systolic design for a simple GA mechanism which provides high throughput and unidirectional pipelining by exploiting the inherent parallelism in the genetic operators. The design computes in O(N+G) time steps using O(N²) cells, where N is the population size and G is the chromosome length. The area of the device is independent of the chromosome length and so can be easily scaled by replicating the arrays or by employing fine-grain migration. The array is generic in the sense that it does not rely on the fitness function and can be used as an accelerator for any GA application using uniform crossover between pairs of chromosomes. The design can also be used in hybrid systems as an add-on to complement existing designs and methods for fitness function acceleration and island-style population management.
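The genetic operator that makes the array generic is uniform crossover, in which each gene position of a chromosome pair is exchanged independently with a fixed probability, irrespective of the fitness function. A minimal software sketch of the operator follows; the swap probability and bit-string chromosomes are illustrative.

```python
import random

def uniform_crossover(parent_a, parent_b, swap_prob=0.5, rng=random):
    """Uniform crossover between a pair of chromosomes: each gene position is
    exchanged independently with probability swap_prob."""
    child_a, child_b = list(parent_a), list(parent_b)
    for i in range(len(child_a)):
        if rng.random() < swap_prob:
            child_a[i], child_b[i] = child_b[i], child_a[i]
    return child_a, child_b

# Toy usage with bit-string chromosomes of length G = 8.
random.seed(42)
print(uniform_crossover([0, 1, 1, 0, 1, 0, 0, 1],
                        [1, 1, 0, 0, 0, 1, 1, 0]))
```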
Abstract:
The development of high throughput techniques ('chip' technology) for measurement of gene expression and gene polymorphisms (genomics), and techniques for measuring global protein expression (proteomics) and metabolite profile (metabolomics) are revolutionising life science research, including research in human nutrition. In particular, the ability to undertake large-scale genotyping and to identify gene polymorphisms that determine risk of chronic disease (candidate genes) could enable definition of an individual's risk at an early age. However, the search for candidate genes has proven to be more complex, and their identification more elusive, than previously thought. This is largely due to the fact that much of the variability in risk results from interactions between the genome and environmental exposures. Whilst the former is now very well defined via the Human Genome Project, the latter (e.g. diet, toxins, physical activity) are poorly characterised, resulting in inability to account for their confounding effects in most large-scale candidate gene studies. The polygenic nature of most chronic diseases offers further complexity, requiring very large studies to disentangle relatively weak impacts of large numbers of potential 'risk' genes. The efficacy of diet as a preventative strategy could also be considerably increased by better information concerning gene polymorphisms that determine variability in responsiveness to specific diet and nutrient changes. Much of the limited available data are based on retrospective genotyping using stored samples from previously conducted intervention trials. Prospective studies are now needed to provide data that can be used as the basis for provision of individualised dietary advice and development of food products that optimise disease prevention. Application of the new technologies in nutrition research offers considerable potential for development of new knowledge and could greatly advance the role of diet as a preventative disease strategy in the 21st century. Given the potential economic and social benefits offered, funding for research in this area needs greater recognition, and a stronger strategic focus, than is presently the case. Application of genomics in human health offers considerable ethical and societal as well as scientific challenges. Economic determinants of health care provision are more likely to resolve such issues than scientific developments or altruistic concerns for human health.
Abstract:
Uncertainties associated with the representation of various physical processes in global climate models (GCMs) mean that, when projections from GCMs are used in climate change impact studies, the uncertainty propagates through to the impact estimates. A complete treatment of this ‘climate model structural uncertainty’ is necessary so that decision-makers are presented with an uncertainty range around the impact estimates. This uncertainty is often underexplored owing to the human and computer processing time required to perform the numerous simulations. Here, we present a 189-member ensemble of global river runoff and water resource stress simulations that adequately address this uncertainty. Following several adaptations and modifications, the ensemble creation time has been reduced from 750 h on a typical single-processor personal computer to 9 h of high-throughput computing on the University of Reading Campus Grid. Here, we outline the changes that had to be made to the hydrological impacts model and to the Campus Grid, and present the main results. We show that, although there is considerable uncertainty in both the magnitude and the sign of regional runoff changes across different GCMs with climate change, there is much less uncertainty in runoff changes for regions that experience large runoff increases (e.g. the high northern latitudes and Central Asia) and large runoff decreases (e.g. the Mediterranean). Furthermore, there is consensus that the percentage of the global population at risk to water resource stress will increase with climate change.