56 results for High-throughput sequencing
Abstract:
Many bacterial transcription factors do not behave as per the textbook operon model. We draw on whole genome work, as well as reported diversity across different bacteria, to argue that transcription factors may have evolved from nucleoid-associated proteins. This view would explain a large amount of recent data gleaned from high-throughput sequencing and bioinformatic analyses.
Abstract:
We review the current status of various aspects of biopolymer translocation through nanopores and the challenges and opportunities it offers. Much of the interest generated by nanopores arises from their potential application to third-generation cheap and fast genome sequencing. Although the ultimate goal of single-nucleotide identification has not yet been reached, great advances have been made both from a fundamental and an applied point of view, particularly in controlling the translocation time, fabricating various kinds of synthetic pores or genetically engineering protein nanopores with tailored properties, and in devising methods (used separately or in combination) aimed at discriminating nucleotides based either on ionic or transverse electron currents, optical readout signatures, or on the capabilities of the cellular machinery. Recently, exciting new applications have emerged, for the detection of specific proteins and toxins (stochastic biosensors), and for the study of protein folding pathways and binding constants of protein-protein and protein-DNA complexes. The combined use of nanopores and advanced micromanipulation techniques involving optical/magnetic tweezers with high spatial resolution offers unique opportunities for improving the basic understanding of the physical behavior of biomolecules in confined geometries, with implications for the control of crucial biological processes such as protein import and protein denaturation. We highlight the key works in these areas along with future prospects. Finally, we review theoretical and simulation studies aimed at improving fundamental understanding of the complex microscopic mechanisms involved in the translocation process. Such understanding is a pre-requisite to fruitful application of nanopore technology in high-throughput devices for molecular biomedical diagnostics.
Abstract:
Over the last decade, there has been a growing need for patterned biomolecules for various applications, ranging from diagnostic devices to enabling fundamental biological studies with high throughput. Protein arrays facilitate the study of protein-protein, protein-drug or protein-DNA interactions, as well as highly multiplexed immunosensors based on antibody-antigen recognition. Protein microarrays are typically fabricated using piezoelectric inkjet printing, with a resolution limit of ~70-100 μm limiting the array density. A considerable amount of research has been done on patterning biomolecules using customised biocompatible photoresists. Here, a simple photolithographic process for fabricating protein microarrays on a commercially available diazo-naphthoquinone-novolac positive-tone photoresist functionalised with 3-aminopropyltriethoxysilane is presented. The authors demonstrate that proteins immobilised using this procedure retain their activity and therefore form functional microarrays, with the array density limited only by the resolution of lithography, which is more than an order of magnitude higher than that of inkjet printing. The process described here may be useful in the integration of conventional semiconductor manufacturing processes with biomaterials relevant for the creation of next-generation bio-chips.
Abstract:
Cancer is a complex disease which arises due to a series of genetic changes related to cell division and growth control. Cancer remains the second leading cause of death in humans, after heart disease. As a testimony to our progress in understanding the biology of cancer and developments in cancer diagnosis and treatment methods, the overall median survival time of all cancers has increased six-fold, from one year to six years, during the last four decades. However, while the median survival time has increased dramatically for some cancers like breast and colon, there has been only little change for other cancers like pancreas and brain. Further, not all patients with a single type of tumour respond to the standard treatment. The differential response is due to genetic heterogeneity, which exists not only between tumours (intertumour heterogeneity) but also within individual tumours (intratumoural heterogeneity). Thus it becomes essential to personalize cancer treatment based on the specific genetic changes in a given tumour. It is also possible to stratify cancer patients into low- and high-risk groups based on expression changes or alterations in a group of genes (gene signatures) and choose a more suitable mode of therapy. It is now possible for each tumour to be analysed using various high-throughput methods like gene expression profiling and next-generation sequencing to identify its unique fingerprint, based on which a personalized or tailor-made therapy can be developed. Here, we review the important progress made in recent years towards personalizing cancer treatment with the use of gene signatures.
Abstract:
Recognizing similarities and deriving relationships among protein molecules is a fundamental requirement in present-day biology. Similarities can be present at various levels, which can be detected through comparison of protein sequences or their structural folds. In some cases, similarities obscured at these levels could be present merely in the substructures at their binding sites. Inferring functional similarities between protein molecules by comparing their binding sites is still largely exploratory and not as yet a routine protocol. One of the main reasons for this is the limitation in the choice of appropriate analytical tools that can compare binding sites with high sensitivity. To benefit from the enormous amount of structural data that is being rapidly accumulated, it is essential to have high-throughput tools that enable large-scale binding site comparison. Results: Here we present a new algorithm, PocketMatch, for comparison of binding sites in a frame-invariant manner. Each binding site is represented by 90 lists of sorted distances capturing the shape and chemical nature of the site. The sorted arrays are then aligned using an incremental alignment method and scored to obtain PMScores for pairs of sites. A comprehensive sensitivity analysis and an extensive validation of the algorithm have been carried out. A comparison with other site-matching algorithms is also presented. Perturbation studies, in which the geometry of a given site was retained but the residue types were changed randomly, indicated that chance similarities were virtually non-existent. Our analysis also demonstrates that shape information alone is insufficient to discriminate between diverse binding sites, unless combined with the chemical nature of the amino acids. Conclusion: A new algorithm has been developed to compare binding sites in an accurate, efficient and high-throughput manner.
Though the representation used is conceptually simplistic, we demonstrate that, along with the new alignment strategy used, it is sufficient to enable binding-site comparison with high sensitivity. A novel methodology has also been presented for validating the algorithm for accuracy and sensitivity with respect to the geometry and chemical nature of the site. The method is also fast, taking about 1/250th of a second per comparison on a single processor. A parallel version on BlueGene has also been implemented.
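The core idea of the abstract — representing each site as a sorted list of pairwise distances and aligning two such lists incrementally — can be illustrated with a much-simplified sketch. The coordinates below are hypothetical, and the real PocketMatch uses 90 chemically typed distance lists and its own PMScore definition, neither of which is reproduced here:

```python
from itertools import combinations
import math

def sorted_distance_list(coords):
    """All pairwise Euclidean distances between site points, sorted ascending."""
    return sorted(math.dist(a, b) for a, b in combinations(coords, 2))

def align_score(d1, d2, tol=0.5):
    """Incremental alignment of two sorted distance lists: advance two
    pointers, counting pairs that agree within `tol` Angstroms, and
    normalize by the longer list to get a 0..1 similarity score."""
    i = j = matches = 0
    while i < len(d1) and j < len(d2):
        if abs(d1[i] - d2[j]) <= tol:
            matches += 1
            i += 1
            j += 1
        elif d1[i] < d2[j]:
            i += 1
        else:
            j += 1
    return matches / max(len(d1), len(d2), 1)

# Two toy "binding sites" as 3-D point sets (hypothetical coordinates)
site_a = [(0.0, 0.0, 0.0), (3.8, 0.0, 0.0), (0.0, 5.1, 0.0)]
site_b = [(1.0, 1.0, 1.0), (4.7, 1.0, 1.0), (1.0, 6.2, 1.0)]

score = align_score(sorted_distance_list(site_a), sorted_distance_list(site_b))
```

Because only internal distances are compared, the score is invariant to the frame of reference: `site_b` is `site_a` translated (with small perturbations), so the two sites score as nearly identical.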
Abstract:
Background: Tuberculosis still remains one of the largest killer infectious diseases, warranting the identification of newer targets and drugs. Identification and validation of appropriate targets for designing drugs are critical steps in drug discovery, which are at present major bottlenecks. A majority of drugs in current clinical use for many diseases have been designed without knowledge of their targets, perhaps because standard methodologies to identify such targets in a high-throughput fashion do not really exist. With the different kinds of 'omics' data that are now available, computational approaches can be powerful means of obtaining short-lists of possible targets for further experimental validation. Results: We report a comprehensive in silico target identification pipeline, targetTB, for Mycobacterium tuberculosis. The pipeline incorporates a network analysis of the protein-protein interactome, a flux balance analysis of the reactome, experimentally derived phenotype essentiality data, sequence analyses and a structural assessment of targetability, using novel algorithms recently developed by us. Using flux balance analysis and network analysis, proteins critical for survival of M. tuberculosis are first identified, followed by comparative genomics with the host, finally incorporating a novel structural analysis of the binding sites to assess the feasibility of a protein as a target. Further analyses include correlation with expression data and non-similarity to gut flora proteins as well as 'anti-targets' in the host, leading to the identification of 451 high-confidence targets. Through phylogenetic profiling against 228 pathogen genomes, shortlisted targets have been further explored to identify broad-spectrum antibiotic targets, while also identifying those specific to tuberculosis. Targets that address mycobacterial persistence and drug resistance mechanisms are also analysed.
Conclusion: The pipeline developed provides a rational schema for drug target identification that is likely to have high rates of success, which is expected to save enormous amounts of money, resources and time in the drug discovery process. A thorough comparison with previously suggested targets in the literature demonstrates the usefulness of the integrated approach used in our study, highlighting the importance of systems-level analyses in particular. The method has the potential to be used as a general strategy for target identification and validation and hence to significantly impact most drug discovery programmes.
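The successive-filtering logic of such a pipeline — keep only those proteins that survive every criterion (essentiality, no close human homolog, a druggable binding site, and so on) — can be sketched roughly as below. The gene names, filter sets and outcome here are purely illustrative toy data, not the paper's actual criteria or its 451-target shortlist:

```python
def shortlist(candidates, filters):
    """Apply successive boolean filters and keep only the proteins
    that pass every one of them, mimicking a target-triage cascade."""
    surviving = set(candidates)
    for name, passes in filters:
        surviving = {p for p in surviving if passes(p)}
    return surviving

# Hypothetical miniature candidate set and filter outcomes (illustrative only)
candidates = ["rpoB", "inhA", "katG", "glfT2", "hypX"]
essential = {"rpoB", "inhA", "glfT2", "hypX"}         # e.g. flux balance / phenotype data
no_human_homolog = {"inhA", "glfT2", "hypX", "katG"}  # e.g. comparative genomics
druggable_pocket = {"inhA", "glfT2"}                  # e.g. structural assessment

filters = [
    ("essential", lambda p: p in essential),
    ("no human homolog", lambda p: p in no_human_homolog),
    ("druggable pocket", lambda p: p in druggable_pocket),
]
targets = shortlist(candidates, filters)
```

Each filter only removes candidates, so the order of application does not change the final shortlist, though running the cheapest filters first reduces work.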
Abstract:
This paper presents the architecture of a fault-tolerant, special-purpose multi-microprocessor system for solving Partial Differential Equations (PDEs). The modular nature of the architecture allows the use of hundreds of Processing Elements (PEs) for high throughput. Its performance is evaluated by both analytical and simulation methods. The results indicate that the system can achieve high operation rates and is not sensitive to inter-processor communication delay.
Abstract:
Background: The hot dog fold has been found in more than sixty proteins since the first report of its existence about a decade ago. The fold appears to have a strong association with fatty acid biosynthesis, its regulation and metabolism, as the proteins with this fold are predominantly coenzyme A-binding enzymes with a variety of substrates located at their active sites. Results: We have analyzed the structural features and sequences of proteins having the hot dog fold. This study reveals that though the basic architecture of the fold is well conserved in these proteins, significant differences exist in their sequence, nature of substrate and oligomerization. Segments with certain conserved sequence motifs seem to play crucial structural and functional roles in various classes of these proteins. Conclusion: The analysis led to predictions regarding the functional classification and identification of possible catalytic residues of a number of hot dog fold-containing hypothetical proteins whose structures were determined in high throughput structural genomics projects.
Abstract:
Several metal complexes of three different functionalized salen derivatives have been synthesized. The salens differ in terms of the electrostatic character and the location of the charges. The interactions of such complexes with DNA were first investigated in detail by UV-vis absorption titrimetry. It appears that the DNA binding by most of these compounds is primarily due to a combination of electrostatic and other modes of interaction. The melting temperatures of DNA in the presence of the various metal complexes were higher than that of the pure DNA. The presence of an additional charge on the central metal ion core in the complex, however, alters the nature of binding. Bis-cationic salen complexes containing central Ni(II) or Mn(III) were found to induce DNA strand scission, especially in the presence of a co-oxidant, as revealed by a plasmid DNA cleavage assay and also on the basis of the autoradiograms obtained from their respective high-resolution sequencing gels. Modest base selectivity was observed in the DNA cleavage reactions. Comparisons of the linearized and supercoiled forms of DNA in the metal complex-mediated cleavage reactions reveal that the supercoiled forms are more susceptible to DNA scission. Under suitable conditions, the DNA cleavage reactions can be induced either by preformed metal complexes or by in situ complexation of the ligand in the presence of the appropriate metal ion. The analogous complexes containing Cu(II) or Cr(III), by contrast, did not effect any DNA strand scission under comparable conditions. Salens with pendant negative charges on either side of the precursor salicylaldehyde or ethylenediamine fragments did not bind to DNA. Similarly, metallosalen complexes with net anionic character also failed to induce any DNA modification activities.
Abstract:
The prognosis of patients with glioblastoma, the most malignant adult glial brain tumor, remains poor in spite of advances in treatment procedures, including surgical resection, irradiation and chemotherapy. The genetic heterogeneity of glioblastoma warrants extensive studies in order to gain a thorough understanding of the biology of this tumor. While there have been several studies of global transcript profiling of glioma with the identification of gene signatures for diagnosis and disease management, translation into the clinic is yet to happen. Serum biomarkers have the potential to revolutionize the process of cancer diagnosis, grading, prognostication and treatment response monitoring. Besides the advantage that serum can be obtained through a less invasive procedure, it contains molecules spanning an extraordinary dynamic range of ten orders of magnitude in concentration. While conventional methods, such as 2DE, have been in use for many years, the ability to identify proteins through mass spectrometry techniques such as MALDI-TOF led to an explosion of interest in proteomics. Relatively new high-throughput proteomics methods such as SELDI-TOF and protein microarrays are expected to hasten the process of serum biomarker discovery. This review will highlight the recent advances in proteomics platforms for discovering serum biomarkers and the current status of glioma serum markers. We aim to provide the principles and potential of the latest proteomic approaches and their applications in the biomarker discovery process. Besides providing a comprehensive list of available serum biomarkers of glioma, we will also propose how these markers may revolutionize the clinical management of glioma patients.
Abstract:
Importance of the field: The shift in focus from ligand-based design approaches to target-based discovery over the last two to three decades has been a major milestone in drug discovery research. Currently, the field is witnessing another major paradigm shift, leaning towards holistic systems-based approaches rather than reductionist single-molecule-based methods. The effect of this new trend is likely to be felt strongly in terms of new strategies for therapeutic intervention, new targets individually and in combinations, and the design of specific and safer drugs. Computational modeling and simulation form important constituents of new-age biology because they are essential to comprehend the large-scale data generated by high-throughput experiments and to generate hypotheses, which are typically iterated with experimental validation. Areas covered in this review: This review focuses on the repertoire of systems-level computational approaches currently available for target identification. The review starts with a discussion of the levels of abstraction of biological systems and describes the different modeling methodologies that are available for this purpose. The review then focuses on how such modeling and simulations can be applied for drug target discovery. Finally, it discusses methods for studying other important issues, such as understanding targetability, identifying target combinations and predicting drug resistance, and considering them during the target identification stage itself. What the reader will gain: The reader will get an account of the various approaches for target discovery and the need for systems approaches, followed by an overview of the different modeling and simulation approaches that have been developed. An idea of the promise and limitations of the various approaches and perspectives for future development will also be obtained.
Take home message: Systems thinking has now come of age, enabling a "bird's eye view" of the biological systems under study, while at the same time allowing us to "zoom in", where necessary, for a detailed description of individual components. A number of different methods available for computational modeling and simulation of biological systems can be used effectively for drug target discovery.
Abstract:
Bluetooth is a short-range radio technology operating in the unlicensed industrial-scientific-medical (ISM) band at 2.45 GHz. A piconet is basically a collection of slaves controlled by a master. A scatternet, on the other hand, is established by linking several piconets together in an ad hoc fashion to yield a global wireless ad hoc network. This paper proposes a scheduling policy that aims to achieve increased system throughput and reduced packet delays while providing reasonably good fairness among all traffic flows in bluetooth piconets and scatternets. We propose a novel algorithm for scheduling slots to slaves for both piconets and scatternets using multi-layered parameterized policies. Our scheduling scheme works with real data and obtains an optimal feedback policy within prescribed parameterized classes of these by using an efficient two-timescale simultaneous perturbation stochastic approximation (SPSA) algorithm. We show the convergence of our algorithm to an optimal multi-layered policy. We also propose novel polling schemes for intra- and inter-piconet scheduling that are seen to perform well. We present an extensive set of simulation results and performance comparisons with existing scheduling algorithms. Our results indicate that our proposed scheduling algorithm performs better overall on a wide range of experiments over the existing algorithms for both piconets (Das et al. in INFOCOM, pp. 591–600, 2001; Lapeyrie and Turletti in INFOCOM conference proceedings, San Francisco, US, 2003; Shreedhar and Varghese in SIGCOMM, pp. 231–242, 1995) and scatternets (Har-Shai et al. in OPNETWORK, 2002; Saha and Matsumot in AICT/ICIW, 2006; Tan and Guttag in The 27th annual IEEE conference on local computer networks(LCN). Tampa, 2002). Our studies also confirm that our proposed scheme achieves a high throughput and low packet delays with reasonable fairness among all the connections.
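The SPSA idea at the heart of such a scheme — estimating a full gradient from only two noisy evaluations of the objective by perturbing all parameters simultaneously — can be sketched on a toy problem. This single-timescale version with a simple quadratic loss is an illustration only; the paper's two-timescale variant, its parameterized scheduling policies and its throughput/delay objective are not reproduced here:

```python
import random

def spsa_minimize(loss, theta, iters=2000, a=0.05, c=0.1, seed=0):
    """Simultaneous Perturbation Stochastic Approximation (SPSA):
    at each step, perturb every coordinate at once with a random +/-1
    vector and estimate the whole gradient from just two loss values."""
    rng = random.Random(seed)
    theta = list(theta)
    n = len(theta)
    for k in range(1, iters + 1):
        ak = a / k ** 0.602          # standard SPSA gain schedules
        ck = c / k ** 0.101
        delta = [rng.choice((-1.0, 1.0)) for _ in range(n)]
        plus = [t + ck * d for t, d in zip(theta, delta)]
        minus = [t - ck * d for t, d in zip(theta, delta)]
        diff = loss(plus) - loss(minus)
        # gradient estimate component i is diff / (2 * ck * delta_i)
        theta = [t - ak * diff / (2.0 * ck * d) for t, d in zip(theta, delta)]
    return theta

# Toy objective: quadratic bowl with minimum at (1, -2)
loss = lambda x: (x[0] - 1.0) ** 2 + (x[1] + 2.0) ** 2
theta = spsa_minimize(loss, [0.0, 0.0])
```

The appeal in a scheduling context is that each iteration needs only two measurements of system performance regardless of how many policy parameters are being tuned, which is what makes the approach practical when the objective can only be sampled from a running system or simulation.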
Abstract:
Run-time interoperability between different applications based on H.264/AVC is an emerging need in networked infotainment, where media delivery must match the desired resolution and quality of the end terminals. In this paper, we describe the architecture and design of a polymorphic ASIC to support this. The H.264 decoding flow is partitioned into modules such that the polymorphic ASIC meets the design goals of low power, low area, high flexibility, high throughput and fast interoperability between different profiles and levels of H.264. We demonstrate the idea with a multi-mode decoder that can decode baseline, main and high profile H.264 streams and can interoperate at run-time across these profiles. The decoder is capable of processing frame sizes of up to 1024 × 768 at 30 fps. The design, synthesized with UMC 0.13 μm technology, occupies 250 k gates and runs at 100 MHz.
Abstract:
Computational docking of ligands to protein structures is a key step in structure-based drug design. Currently, the time required for each docking run is high, which limits the use of docking in a high-throughput manner and warrants parallelization of docking algorithms. AutoDock, a widely used tool, was chosen for parallelization. Near-linear increases in speed were observed with 96 processors, reducing the time required for docking ligands to HIV-protease from 81 min, as an example, on a single IBM Power-5 processor (1.65 GHz) to about 1 min on an IBM cluster with 96 such processors. This implementation makes it feasible to perform virtual ligand screening using AutoDock.
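The near-linear speed-up reflects the embarrassingly parallel structure of virtual screening: independent docking runs mapped over a pool of workers, then ranked by predicted binding energy. A minimal sketch of that structure is below; the scoring function is a made-up placeholder (a real screen would invoke AutoDock per ligand), and a thread-backed pool is used only to keep the sketch portable, whereas CPU-bound docking would use separate processes or cluster nodes as in the paper:

```python
from multiprocessing.dummy import Pool  # thread-backed Pool; same API as process Pool

def dock_one(ligand):
    """Stand-in for one docking run: a real screen would launch the docking
    engine for `ligand` against a fixed receptor and parse the best binding
    energy from its output. Here a toy placeholder score is returned."""
    return ligand, -0.1 * len(ligand)   # hypothetical score, more negative = better

def screen(ligands, workers=4):
    """Map independent docking runs over a worker pool and return ligands
    ranked by predicted binding energy, best (most negative) first."""
    with Pool(workers) as pool:
        results = pool.map(dock_one, ligands)
    return sorted(results, key=lambda r: r[1])

ranked = screen(["aspirin", "ibuprofen", "acetaminophen"])
```

Because the runs share no state, throughput scales with the number of workers until I/O or scheduling overhead dominates, which is consistent with the near-linear scaling reported for 96 processors.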
Abstract:
Glycomics is the study of the comprehensive structural elucidation and characterization of all glycoforms found in nature and their dynamic spatiotemporal changes associated with biological processes. The glycocalyx of mammalian cells actively participates in cell-cell, cell-matrix, and cell-pathogen interactions, which impact embryogenesis, growth and development, homeostasis, infection and immunity, signaling, malignancy, and metabolic disorders. Relative to genomics and proteomics, glycomics is just growing out of infancy, with great potential in biomedicine for biomarker discovery, diagnosis, and treatment. However, the immense diversity and complexity of glycan structures and their multiple modes of interaction with proteins pose great challenges for the development of analytical tools for delineating structure-function relationships and understanding the glycocode. Several tools are being developed for glycan profiling based on chromatography, mass spectrometry, glycan microarrays, and glyco-informatics. Lectins, which have long been used in glyco-immunology, printed on a microarray provide a versatile platform for rapid high-throughput analysis of the glycoforms of biological samples. Herein, we summarize technological advances in lectin microarrays and critically review their impact on glycomics analysis. Challenges remain in terms of expansion to include non-plant-derived lectins, standardization for routine clinical use, development of recombinant lectins, and exploration of the plant kingdom for the discovery of novel lectins.