916 results for High-throughput screening


Relevance: 100.00%

Abstract:

Information gained from the human genome project and improvements in compound synthesis have increased the number of both therapeutic targets and potential lead compounds. This has created a need for better screening techniques with the capacity to screen large compound libraries against a growing number of targets. Radioactivity-based assays have traditionally been used in drug screening, but fluorescence-based assays have become more popular in high-throughput screening (HTS) because they avoid the safety and waste problems associated with radioactivity. Compared with conventional fluorescence, more sensitive detection is obtained with time-resolved luminescence, which has increased the popularity of time-resolved fluorescence resonance energy transfer (TR-FRET) based assays. To simplify the current TR-FRET assay concept, a homogeneous luminometric assay technique using a single label, Quenching Resonance Energy Transfer (QRET), was developed. The technique uses a soluble quencher to non-specifically quench the signal of the unbound fraction of lanthanide-labeled ligand. A single labeling procedure and fewer manipulation steps in the assay save resources. The QRET technique is suitable for both biochemical and cell-based assays, as demonstrated in four studies: 1) a ligand screening study of the β2-adrenergic receptor (cell-based), 2) an activation study of Gs-/Gi-protein coupled receptors measuring the intracellular concentration of cyclic adenosine monophosphate (cell-based), 3) an activation study of G-protein coupled receptors observing the binding of guanosine-5’-triphosphate (cell membranes), and 4) an activation study of the small GTP-binding protein Ras (biochemical). Signal-to-background ratios ranged from 2.4 to 10 and coefficients of variation from 0.5 to 17%, indicating the technique's suitability for HTS use.
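The two figures of merit quoted above are standard HTS assay-quality statistics. As an illustrative sketch (the well counts below are invented, not from the study), this is how signal-to-background ratio and coefficient of variation are typically computed from replicate plate readings:

```python
# Illustrative HTS assay statistics; all counts are hypothetical example values.

def signal_to_background(signal_wells, background_wells):
    """Mean specific signal divided by mean background signal (S/B)."""
    return (sum(signal_wells) / len(signal_wells)) / (sum(background_wells) / len(background_wells))

def coefficient_of_variation(wells):
    """Sample standard deviation as a percentage of the mean (CV%)."""
    mean = sum(wells) / len(wells)
    var = sum((w - mean) ** 2 for w in wells) / (len(wells) - 1)
    return 100 * var ** 0.5 / mean

signal = [5200, 4800, 5100, 4900]      # hypothetical bound-ligand counts
background = [2000, 2100, 1900, 2000]  # hypothetical quenched (unbound) counts

print(round(signal_to_background(signal, background), 2))  # S/B ratio
print(round(coefficient_of_variation(signal), 2))          # CV%
```

A higher S/B and a lower CV both widen the assay window, which is why the reported ranges (S/B 2.4–10, CV 0.5–17%) support HTS use.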

Abstract:

The recent rapid development of biotechnological approaches has enabled the production of large whole-genome-level biological data sets. In order to handle these data sets, reliable and efficient automated tools and methods for data processing and result interpretation are required. Bioinformatics, as the field of studying and processing biological data, tries to answer this need by combining methods and approaches across computer science, statistics, mathematics and engineering. The need is also increasing for tools that can be used by biological researchers themselves, who may not have a strong statistical or computational background; this requires creating tools and pipelines with intuitive user interfaces, robust analysis workflows and a strong emphasis on result reporting and visualization. Within this thesis, several data analysis tools and methods have been developed for analyzing high-throughput biological data sets. These approaches, covering several aspects of high-throughput data analysis, are specifically aimed at gene expression and genotyping data, although in principle they are suitable for analyzing other data types as well. Coherent handling of the data across the various analysis steps is highly important in order to ensure robust and reliable results. Thus, robust data analysis workflows are also described, putting the developed tools and methods into a wider context. The choice of the correct analysis method may also depend on the properties of the specific data set, and therefore guidelines for choosing an optimal method are given. The data analysis tools, methods and workflows developed within this thesis have been applied to several research studies, of which two representative examples are included in the thesis. The first study focuses on spermatogenesis in murine testis and the second examines cell lineage specification in mouse embryonic stem cells.
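The workflow idea described above can be sketched in miniature: chaining analysis steps (here, a log transform followed by a low-expression filter) into one pipeline so each data set is handled coherently from raw values onward. The step names and toy data are hypothetical illustrations, not the thesis tools themselves:

```python
# Toy sketch of a composable analysis workflow; data and thresholds are invented.
import math

def log2_transform(samples):
    """Log-transform raw expression values to stabilize variance."""
    return {gene: [math.log2(v) for v in vals] for gene, vals in samples.items()}

def filter_low_expression(samples, min_mean=1.0):
    """Drop genes whose mean (log-scale) expression falls below a threshold."""
    return {g: v for g, v in samples.items() if sum(v) / len(v) >= min_mean}

def run_pipeline(raw, steps):
    """Apply each analysis step in order; data flows through coherently."""
    data = raw
    for step in steps:
        data = step(data)
    return data

raw = {"geneA": [8.0, 16.0, 32.0], "geneB": [1.0, 1.0, 2.0]}
result = run_pipeline(raw, [log2_transform, filter_low_expression])
print(sorted(result))  # genes surviving the filter
```

Structuring the workflow as an explicit list of steps makes the analysis reproducible and easy to report, which matches the emphasis above on robust workflows and result reporting.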

Abstract:

Pan-viral DNA array (PVDA) and high-throughput sequencing (HTS) are useful tools for identifying novel viruses of emerging diseases. However, both techniques have difficulty identifying viruses in clinical samples because of the host genomic nucleic acid content (hg/cont). Both propidium monoazide (PMA) and ethidium bromide monoazide (EMA) have the capacity to bind free DNA/RNA but are cell membrane-impermeable; they are therefore unable to bind protected nucleic acids such as viral genomes within intact virions. EMA/PMA-modified genetic material, however, cannot be amplified by enzymes. In order to assess the potential of EMA/PMA to lower the amount of amplifiable hg/cont in samples and improve virus detection, serum and lung tissue homogenates were spiked with porcine reproductive and respiratory syndrome virus (PRRSV) and processed with EMA/PMA. In addition, PRRSV RT-qPCR-positive clinical samples were also tested. EMA/PMA treatments significantly decreased amplifiable hg/cont and significantly increased the number of PVDA-positive probes and their signal intensity compared with untreated spiked lung samples. EMA/PMA treatments also increased the sensitivity of HTS by increasing the number of specific PRRSV reads and the percentage of PRRSV genome coverage. Interestingly, EMA/PMA treatments significantly increased the sensitivity of PVDA and HTS in two out of three clinical tissue samples. Thus, EMA/PMA treatments offer a new approach to lowering amplifiable hg/cont in clinical samples and increasing the success of PVDA and HTS in identifying viruses.

Abstract:

We have designed and implemented a low-cost digital system that couples closed-circuit television cameras to a digital acquisition system for recording in vivo behavioral data in rodents, allowing simultaneous observation and recording of more than 10 animals at a reduced cost compared with commercially available solutions. The system has been validated using two experimental rodent models: one involving chemically induced seizures and one assessing appetite and feeding. We present observational results showing comparable or improved accuracy and observer consistency between this new system and traditional methods in these models, discuss the advantages of the presented system over conventional analog systems and commercially available digital systems, and propose possible extensions of the system and applications to non-rodent studies.

Abstract:

The tagged microarray marker (TAM) method allows high-throughput differentiation between predicted alternative PCR products. Typically, the method is used as a molecular marker approach to determining the allelic states of single nucleotide polymorphisms (SNPs) or insertion-deletion (indel) alleles at genomic loci in multiple individuals. Biotin-labeled PCR products are spotted, unpurified, onto a streptavidin-coated glass slide and the alternative products are differentiated by hybridization to fluorescent detector oligonucleotides that recognize corresponding allele-specific tags on the PCR primers. The main attractions of this method are its high throughput (thousands of PCRs are analyzed per slide), flexibility of scoring (any combination, from a single marker in thousands of samples to thousands of markers in a single sample, can be analyzed) and flexibility of scale (any experimental scale, from a small lab setting up to a large project). This protocol describes an experiment involving 3,072 PCRs scored on a slide. The whole process from the start of PCR setup to receiving the data spreadsheet takes 2 d.
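The scoring step described above can be pictured as a two-channel intensity call per spot: each spotted PCR carries two allele-specific tag signals, and the genotype follows from their relative strength. The sketch below is a hypothetical illustration of that idea; the channel names, thresholds, and values are assumptions, not part of the published protocol:

```python
# Hypothetical two-channel genotype caller for a TAM-style spot array.
# Thresholds and intensity values are illustrative assumptions.

def call_genotype(intensity_a, intensity_b, min_signal=500, het_ratio=0.3):
    """Call AA, BB, AB, or no-call from two allele-tag channel intensities."""
    total = intensity_a + intensity_b
    if total < min_signal:
        return "no-call"           # spot too weak to score reliably
    fraction_a = intensity_a / total
    if fraction_a > 1 - het_ratio:
        return "AA"                # channel A dominates: homozygous A
    if fraction_a < het_ratio:
        return "BB"                # channel B dominates: homozygous B
    return "AB"                    # both tags detected: heterozygote

# four hypothetical spots: (channel A intensity, channel B intensity)
spots = [(4000, 150), (120, 3800), (2100, 1900), (90, 60)]
print([call_genotype(a, b) for a, b in spots])
```

Because every spot is scored independently from its own pair of tag signals, the same logic scales from a single marker in thousands of samples to thousands of markers in one sample, matching the flexibility of scoring described above.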

Abstract:

BACKGROUND: In order to maintain the most comprehensive structural annotation databases, we must carry out regular updates for each proteome using the latest profile-profile fold recognition methods. The ability to carry out these updates on demand is necessary to keep pace with the regular updates of sequence and structure databases. Providing the highest-quality structural models requires the most intensive profile-profile fold recognition methods running against the very latest sequence databases and fold libraries. However, running these methods on such a regular basis for every sequenced proteome requires large amounts of processing power. In this paper we describe and benchmark the JYDE (Job Yield Distribution Environment) system, a meta-scheduler designed to work above cluster schedulers such as Sun Grid Engine (SGE) or Condor. We demonstrate the ability of JYDE to distribute the load of genomic-scale fold recognition across multiple independent Grid domains. We use the most recent profile-profile version of our mGenTHREADER software to annotate the latest version of the human proteome against the latest sequence and structure databases in as short a time as possible. RESULTS: We show that our JYDE system is able to scale to large numbers of intensive fold recognition jobs running across several independent computer clusters. Using JYDE, we have been able to annotate 99.9% of the protein sequences within the human proteome in less than 24 hours by harnessing over 500 CPUs from 3 independent Grid domains. CONCLUSION: This study clearly demonstrates the feasibility of carrying out on-demand, high-quality structural annotations for the proteomes of major eukaryotic organisms. Specifically, we have shown that it is now possible to provide complete, regular updates of profile-profile-based fold recognition models for entire eukaryotic proteomes through the use of Grid middleware such as JYDE.
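The meta-scheduling idea above can be sketched minimally: independent fold-recognition jobs are farmed out across several cluster queues, with each new job sent to the least-loaded domain. The cluster names and job costs below are hypothetical, and JYDE itself is far more sophisticated (it sits above full schedulers such as SGE or Condor); this only illustrates the load-balancing principle:

```python
# Minimal greedy load-balancer sketch; names and costs are invented.
import heapq

def distribute(jobs, clusters):
    """Assign each (job, cost) to the cluster with the least total work.
    Returns a {cluster: [job, ...]} mapping."""
    heap = [(0, name) for name in clusters]   # (assigned work, cluster name)
    heapq.heapify(heap)
    assignment = {name: [] for name in clusters}
    for job, cost in jobs:
        load, name = heapq.heappop(heap)      # least-loaded domain
        assignment[name].append(job)
        heapq.heappush(heap, (load + cost, name))
    return assignment

# six hypothetical fold-recognition jobs: (sequence id, estimated CPU cost)
jobs = [("seq1", 4), ("seq2", 2), ("seq3", 3), ("seq4", 1), ("seq5", 2), ("seq6", 3)]
plan = distribute(jobs, ["gridA", "gridB", "gridC"])
print(plan)
```

Because fold-recognition jobs for different sequences are independent, this embarrassingly parallel structure is what lets a proteome-scale annotation finish in under 24 hours once enough CPUs are harnessed.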

Abstract:

It has become evident that the mystery of life will not be deciphered just by decoding its blueprint, the genetic code. In the life and biomedical sciences, research efforts are now shifting from pure gene analysis to the analysis of all biomolecules involved in the machinery of life. One of these postgenomic research fields is proteomics. Although proteomics, which basically encompasses the analysis of proteins, is not a new concept, it is far from being a research field that can rely on routine and large-scale analyses. When the term proteomics was coined, a gold-rush mentality arose, promising vast and quick riches (i.e., solutions to the immensely complex questions of life and disease). Predictably, the reality has been quite different. The complexity of proteomes and the wide variations in the abundances and chemical properties of their constituents have rendered the use of systematic analytical approaches only partially successful, and biologically meaningful results have been slow to arrive. However, to learn more about how cells and, hence, life works, it is essential to understand proteins and their complex interactions in their native environment. This is why proteomics will be an important part of the biomedical sciences for the foreseeable future. Therefore, any advances in providing the tools that make protein analysis a more routine and large-scale business, ideally using automated and rapid analytical procedures, are highly sought after. This review provides some basics, thoughts and ideas on the exploitation of matrix-assisted laser desorption/ionization in biological mass spectrometry - one of the most commonly used analytical tools in proteomics - for high-throughput analyses.

Abstract:

Background: Large-scale genetic profiling, mapping and genetic association studies require access to a series of well-characterised and polymorphic microsatellite markers with distinct and broad allele ranges. Selection of complementary microsatellite markers with non-overlapping allele ranges has historically proved to be a bottleneck in the development of multiplex microsatellite assays. The characterisation process for each microsatellite locus can be laborious and costly, given the need for numerous locus-specific fluorescent primers. Results: Here, we describe a simple and inexpensive approach to selecting useful microsatellite markers. The system is based on the pooling of multiple unlabelled PCR amplicons and their subsequent ligation into a standard cloning vector. A second round of amplification, using generic labelled primers targeting the vector and unlabelled locus-specific primers targeting the microsatellite flanking region, yields allelic profiles that are representative of all individuals contained within the pool. The suitability of various DNA pool sizes was then tested: template pools containing between 8 and 96 individuals were assessed for determining the allele ranges of individual microsatellite markers across a broad population. This helped strike a balance between pools large enough to allow the detection of many alleles and the risk of including so many individuals that rare alleles are over-diluted and fail to appear in the pooled microsatellite profile. Pools of DNA from 12 individuals allowed the reliable detection of all alleles present in the pool. Conclusion: The use of generic vector-specific fluorescent primers and unlabelled locus-specific primers provides a high-resolution, rapid and inexpensive approach for selecting highly polymorphic microsatellite loci that possess non-overlapping allele ranges for use in large-scale multiplex assays.
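One side of the pooling trade-off discussed above can be made concrete with a back-of-the-envelope calculation: the probability that at least one copy of a rare allele is present in a pool of n diploid individuals, assuming random sampling from a population where the allele has frequency p. The numbers are purely illustrative, and this only models presence; even an allele that is present can be over-diluted below the detection limit of the pooled profile, which is the other half of the trade-off:

```python
# Back-of-the-envelope pooling model; allele frequency is an assumed example.

def prob_allele_in_pool(p, n_individuals):
    """P(at least one copy in the pool) = 1 - (1 - p)^(2n) for diploids."""
    return 1 - (1 - p) ** (2 * n_individuals)

# chance a 5%-frequency allele is sampled, for the pool sizes tested above
for n in (8, 12, 48, 96):
    print(n, round(prob_allele_in_pool(0.05, n), 3))
```

Larger pools sample more alleles, but each allele's share of the template shrinks as 1/(2n), so a mid-sized pool (12 individuals in the study) balances capture against dilution.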


Abstract:

Background and Aims: Phosphate (Pi) deficiency in soils is a major limiting factor for crop growth worldwide. Plant growth under low-Pi conditions correlates with root architectural traits, and it may therefore be possible to select these traits for crop improvement. The aim of this study was to characterize root architectural traits, and to test for quantitative trait loci (QTL) associated with these traits, under low Pi (LP) and high Pi (HP) availability in Brassica napus. Methods: Root architectural traits were characterized in seedlings of a doubled haploid (DH) mapping population (n = 190) of B. napus 'Tapidor' x 'Ningyou 7' (TNDH) using high-throughput phenotyping methods. Primary root length (PRL), lateral root length (LRL), lateral root number (LRN), lateral root density (LRD) and biomass traits were measured 12 d post-germination in agar at LP and HP. Key Results: In general, root and biomass traits were highly correlated under both LP and HP conditions. 'Ningyou 7' had greater LRL, LRN and LRD than 'Tapidor' at both LP and HP availability, but smaller PRL. A cluster of highly significant QTL for LRN, LRD and biomass traits at LP availability was identified on chromosome A03; QTL for PRL were identified on chromosomes A07 and C06. Conclusions: High-throughput phenotyping of Brassica can be used to identify root architectural traits that correlate with shoot biomass, and it is feasible that these traits could be used in crop improvement strategies. The identification of QTL linked to root traits under LP and HP conditions provides further insight into the genetic basis of plant tolerance to Pi deficiency, and these QTL warrant further dissection.
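Two of the computations behind the results above are simple to sketch: lateral root density is derived from LRN and PRL, and trait-biomass relationships are summarized by correlation across the mapping population. The seedling values below are invented illustrations, not data from the study:

```python
# Illustrative root-trait calculations; all measurements are hypothetical.

def lateral_root_density(lrn, prl):
    """LRD = lateral root number per unit primary root length."""
    return lrn / prl

def pearson_r(xs, ys):
    """Pearson correlation between two trait vectors."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# hypothetical seedlings: (lateral root number, primary root length in cm, biomass in mg)
lines = [(12, 6.0, 14.0), (20, 5.0, 21.0), (8, 7.5, 11.0), (16, 5.5, 18.0)]
lrd = [lateral_root_density(n, l) for n, l, _ in lines]
biomass = [b for _, _, b in lines]
print(round(pearson_r(lrd, biomass), 2))
```

In a real QTL analysis these per-line trait values, scored across all 190 DH lines, are what get mapped against genotype to locate loci such as the A03 cluster.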