917 results for High-throughput
Abstract:
Reversible phosphorylation plays a key role in numerous biological processes. Mass spectrometry-based approaches are commonly used to analyze protein phosphorylation, but such analysis is challenging, largely due to the low stoichiometry of phosphorylation. Hence, a number of phosphopeptide enrichment strategies have been developed, including metal oxide affinity chromatography (MOAC). Here, we describe a new material for performing MOAC: a magnetite-doped polydimethylsiloxane (PDMS) that is suitable for the creation of microwell arrays and microfluidic systems, enabling low-volume, high-throughput analysis. Incubation time and sample loading were explored and optimized, demonstrating that the embedded magnetite is able to enrich phosphopeptides. This substrate-based approach is rapid, straightforward and suitable for performing multiple low-volume enrichments simultaneously.
Abstract:
Besides their well-described use as delivery systems for water-soluble drugs, liposomes can act as solubilising agents for drugs with low aqueous solubility. However, a key limitation in exploiting liposome technology is the lack of scalable, low-cost production methods for the preparation of liposomes. Here we describe a new microfluidic method to prepare liposomal solubilising systems that can incorporate low-solubility drugs (in this case propofol). The setup, based on a chaotic-advection micromixer, showed high drug loading (41 mol%) of propofol as well as the ability to manufacture vesicles at prescribed sizes (between 50 and 450 nm) in a high-throughput setting. Our results demonstrate that liposome manufacturing and drug encapsulation can be merged into a single process step, leading to an overall reduced process time. These studies emphasise the flexibility and ease of applying lab-on-a-chip microfluidics to the solubilisation of poorly water-soluble drugs.
Abstract:
Aim: To identify the incidence of vitreomacular traction (VMT) and the frequency of reduced vision in the absence of other coexisting macular pathology, using a pragmatic classification system for VMT in a population of patients referred to the hospital eye service. Methods: A detailed survey of consecutive optical coherence tomography (OCT) scans was performed in a high-throughput ocular imaging service to ascertain cases of vitreomacular adhesion (VMA) and VMT using a departmental classification system. The stages of traction, visual acuity, and associations with other macular conditions were analysed. Results: In total, 4384 OCT scan episodes were performed in 2223 patients. Two hundred and fourteen eyes had VMA/VMT, 112 of which had coexisting macular pathology. Of the 102 patients without coexisting pathology, 57 had a VMT grade between 2 and 8, with a negative correlation between VMT grade and number of Snellen lines (r = -0.61717). There was a distinct cutoff in visual function when the VMT grade was higher than 4, with the presence of cysts, subretinal separation and breaks in the retinal layers. Conclusions: VMT is commonly encountered and often associated with other coexisting macular pathology. We estimated an incidence of 0.01% for VMT cases with reduced vision and without coexisting macular pathology that may potentially benefit from intervention. Grading of VMT to select eyes with cyst formation as well as hole formation may be useful for targeting patients at higher risk of visual loss from VMT.
Abstract:
DNA-binding proteins are crucial for various cellular processes and have therefore become an important target for both basic research and drug development. With the avalanche of protein sequences generated in the postgenomic age, an automated method for rapidly and accurately identifying DNA-binding proteins from sequence information alone is highly desirable. Because all biological species have evolved from a very limited number of ancestral species, it is important to take evolutionary information into account when developing such a high-throughput tool. In view of this, a new predictor was proposed by incorporating evolutionary information into the general form of pseudo amino acid composition via the top-n-gram approach. Comparison with existing methods via both the jackknife test and an independent data-set test showed that the new predictor outperformed its counterparts. It is anticipated that the new predictor may become a useful vehicle for identifying DNA-binding proteins. It has not escaped our notice that this novel approach to encoding evolutionary information in the formulation of statistical samples can be used to identify many other protein attributes as well.
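For readers unfamiliar with the top-n-gram idea, the sketch below illustrates one way evolutionary features can be extracted from a position-specific scoring matrix (PSSM, e.g. from PSI-BLAST): each position is replaced by its n highest-scoring residues, and the frequencies of the resulting "top-n-grams" form the feature vector. The function name, normalization and every detail are illustrative assumptions, not the authors' implementation.

```python
# Minimal, assumed sketch of top-n-gram feature extraction from a PSSM.
from itertools import product
import numpy as np

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

def top_n_gram_features(pssm: np.ndarray, n: int = 2) -> np.ndarray:
    """pssm: (L, 20) array of position-specific scores for a protein of
    length L. Each position is mapped to its n top-scoring residues (a
    'top-n-gram'); the output is the normalized frequency of each of the
    20**n possible top-n-grams."""
    grams = []
    for row in pssm:
        top = np.argsort(row)[::-1][:n]          # indices of n top residues
        grams.append("".join(AMINO_ACIDS[i] for i in top))
    vocab = {"".join(p): i for i, p in enumerate(product(AMINO_ACIDS, repeat=n))}
    counts = np.zeros(len(vocab))
    for g in grams:
        counts[vocab[g]] += 1
    return counts / max(len(grams), 1)           # frequencies, not raw counts

# Example: a random profile for a 50-residue protein -> 400-dim vector (n=2)
features = top_n_gram_features(np.random.rand(50, 20), n=2)
```

Such a vector can then be concatenated with pseudo amino acid composition features and fed to any standard classifier.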
Abstract:
The universally conserved translation elongation factor EF-Tu delivers aminoacyl(aa)-tRNA in the form of an aa-tRNA·EF-Tu·GTP ternary complex (TC) to the ribosome, where it binds to the cognate mRNA codon within the ribosomal A-site, leading to formation of a pretranslocation (PRE) complex. Here we describe the preparation of QSY9 and Cy5 derivatives of the variant E348C-EF-Tu that are functional in translation elongation. Together with fluorophore derivatives of aa-tRNA and of ribosomal protein L11, located within the GTPase-associated center (GAC), these labeled EF-Tus allow the development of two new FRET assays that permit the dynamics of distance changes between EF-Tu and both L11 (Tu-L11 assay) and aa-tRNA (Tu-tRNA assay) to be determined during the decoding process. We use these assays to examine: (i) the relative rates of EF-Tu movement away from the GAC and from aa-tRNA during decoding, (ii) the effects of the misreading-inducing antibiotics streptomycin and paromomycin on tRNA selection at the A-site, and (iii) how strengthening the binding of aa-tRNA to EF-Tu affects the rate of EF-Tu movement away from L11 on the ribosome. These FRET assays have the potential to be adapted for high-throughput screening of ribosomal antibiotics.
Abstract:
Cell population heterogeneity has attracted great interest for understanding individual cellular responses to external stimuli and the production of targeted products. We review the physical characterization of single cells and the analysis of dynamic gene expression, synthesized proteins, and cellular metabolites from a single cell. Advanced techniques have been developed to achieve high throughput and ultrahigh resolution or sensitivity, and single-cell capture methods are discussed as well. Exploiting cellular heterogeneity to maximize cellular productivity is still in its infancy; control strategies can be formulated once the causes of heterogeneity have been elucidated.
Abstract:
Ageing is accompanied by many visible characteristics, and other biological and physiological markers are also well described, e.g. the loss of circulating sex hormones and increased inflammatory cytokines. Biomarkers for healthy-ageing studies are presently predicated on existing knowledge of ageing traits. The increasing availability of data-intensive methods enables deep analysis of biological samples for novel biomarkers. We adopted two discrete approaches to biomarker discovery in MARK-AGE Work Package 7: (1) microarray and/or proteomic analyses in cell systems, e.g. endothelial progenitor cells or T cell ageing including a stress model; and (2) investigation of cellular material and plasma directly from tightly defined proband subsets of different ages using proteomic, transcriptomic and miRNA arrays. The first approach provided longitudinal insight into endothelial progenitor and T cell ageing. This review describes the strategy and use of hypothesis-free, data-intensive approaches to explore cellular proteins, miRNA, mRNA and plasma proteins as healthy-ageing biomarkers, using ageing models and samples taken directly from adults of different ages. It considers the challenges associated with integrating multiple models and pilot studies as rational biomarkers for a large cohort study. From this approach, a number of high-throughput methods were developed to evaluate novel, putative biomarkers of ageing in the MARK-AGE cohort.
Abstract:
Wireless Mesh Networks (WMNs) have emerged as a key technology for the next generation of wireless networking. Rather than being just another type of ad-hoc networking, WMNs diversify the capabilities of ad-hoc networks. Protocols that work over WMNs include IEEE 802.11a/b/g, 802.15, 802.16 and LTE-Advanced. To achieve high throughput under varying conditions, these protocols have to adapt their transmission rate. This paper proposes a scheme that responds to channel conditions by performing rate adaptation together with multiple packet transmission, based on packet loss and physical-layer conditions. Dynamic monitoring, multiple packet transmission and adaptation to changes in channel quality, achieved by adjusting packet transmission rates according to certain optimization criteria, provided greater throughput. The key feature of the proposed method is the combination of two factors: 1) detection of intrinsic channel conditions by measuring the fluctuation of the noise-to-signal ratio via its standard deviation, and 2) detection of packet loss induced by congestion. The authors show that the use of such techniques in a WMN can significantly improve performance in terms of the packet sending rate. The effectiveness of the proposed method was demonstrated in a simulated wireless network testbed via packet-level simulation.
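The two-factor decision logic can be illustrated with a short sketch: channel fluctuation is estimated from the standard deviation of recent noise-to-signal samples, and loss on a stable channel is attributed to congestion rather than the radio link. The thresholds, rate table and function below are hypothetical; the paper's exact criteria are not reproduced here.

```python
# Hedged sketch of rate adaptation driven by channel fluctuation + loss.
from statistics import stdev

RATES_MBPS = [6, 9, 12, 18, 24, 36, 48, 54]   # e.g. an IEEE 802.11a/g rate set

def adapt_rate(current_idx, nsr_samples, loss_ratio,
               fluct_threshold=0.15, loss_threshold=0.1):
    """Return a new index into RATES_MBPS.
    nsr_samples: recent noise-to-signal ratio measurements.
    loss_ratio:  fraction of packets lost in the last window."""
    fluctuation = stdev(nsr_samples) if len(nsr_samples) > 1 else 0.0
    if fluctuation > fluct_threshold:
        # Losses likely stem from the channel itself: back off the PHY rate.
        return max(current_idx - 1, 0)
    if loss_ratio > loss_threshold:
        # Stable channel but heavy loss: treat as congestion; keep the PHY
        # rate and let the packet scheduler slow the sending rate instead.
        return current_idx
    # Clean, stable channel: probe a higher rate.
    return min(current_idx + 1, len(RATES_MBPS) - 1)

new_idx = adapt_rate(3, [0.05, 0.07, 0.06, 0.05], loss_ratio=0.02)
print(RATES_MBPS[new_idx], "Mbps")  # -> 24 Mbps (stepped up from 18)
```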
Abstract:
The production of recombinant therapeutic proteins is an active area of research in drug development. These bio-therapeutic drugs target nearly 150 disease states and promise to bring better treatments to patients. However, if new bio-therapeutics are to be made more accessible and affordable, improvements in production performance and optimization of processes are necessary. A major challenge lies in controlling the effect of process conditions on the production of intact functional proteins. To achieve this, improved tools are needed for bio-processing; for example, process modeling and high-throughput technologies can be implemented to achieve quality by design, leading to improvements in productivity. Commercially, the most sought-after targets are secreted proteins, owing to their ease of handling in downstream procedures. This chapter outlines different approaches for the production and optimization of secreted proteins in the host Pichia pastoris.
Abstract:
Since the first S100 members were discovered in 1965, their expression has been associated with numerous biological functions in all cells of the body. In recent years, however, S100A4, a member of this superfamily, has emerged as a central target for new avenues of cancer therapy, as its overexpression correlates with cancer patient mortality and it has established roles as a promoter of motility and metastasis. Since it has no catalytic activity, S100A4 must interact with its target proteins to exert these effects. To date, more than 10 S100A4 target proteins have been identified, but the mechanistic process by which S100A4 induces motility remains vague. In this work, we demonstrated that S100A4 overexpression resulted in disorganisation of actin filaments, reduction in focal adhesions, instability of filopodia and a polarised morphology. These effects were not observed with truncated versions of S100A4, possibly highlighting the importance of the C terminus for S100A4 target recognition. To assess some of the intracellular mechanisms that may be involved in promoting migration, different strategies were used, including active pharmaceutical agents, inhibitors and knockdown experiments. Treatment of S100A4-overexpressing cells with blebbistatin and Y-27632, inhibitors of non-muscle myosin IIA (NMMIIA), as well as knockdown of NMMIIA, resulted in enhanced motility and reduced focal adhesions, suggesting that NMMIIA assists S100A4 in regulating cell motility but that its presence is not essential. Further work using COS-7 cells, which naturally lack NMMIIA, demonstrated that S100A4 is capable of regulating cell motility independently of NMMIIA, possibly through poor maturation of focal adhesions. Given that these experiments highlighted the independence of migration from NMMIIA, a protein that has been at the forefront of S100A4-induced motility, we aimed to gain further understanding of the other molecular mechanisms that may be at play in motility. Using high-content imaging (HCI), three compounds were identified that are capable of inhibiting S100A4-mediated migration. Although we have yet to investigate the underlying mechanism of their effects, these compounds have been shown to target membrane proteins and, for at least one of the compounds, the externalisation of S100 proteins, leading us to speculate that preventing the externalisation of S100A4 could potentially regulate cell motility.
Abstract:
Nanoparticles offer an ideal platform for the delivery of small-molecule drugs, subunit vaccines and genetic constructs. Besides the need for a homogeneous size distribution, defined loading efficiencies and reasonable production and development costs, one of the major bottlenecks in translating nanoparticles into clinical application is the need for rapid, robust and reproducible development techniques. In this thesis, microfluidic methods were investigated for the manufacturing, drug or protein loading, and purification of pharmaceutically relevant nanoparticles. Initially, methods to prepare small liposomes were evaluated and compared with a microfluidics-directed nanoprecipitation method. To support the implementation of statistical process control, design-of-experiments models aided process robustness and validation for the methods investigated, gave an initial overview of the size ranges obtainable with each method, and allowed the advantages and disadvantages of each method to be evaluated. The lab-on-a-chip system enabled high-throughput vesicle manufacturing with a rapid process and a high degree of process control. To investigate this method further, cationic low-transition-temperature lipids, cationic bola-amphiphiles with delocalized charge centers, neutral lipids and polymers were used in the microfluidics-directed nanoprecipitation method to formulate vesicles. Whereas both the total flow rate (TFR) and the ratio of solvent to aqueous stream (flow rate ratio, FRR) were shown to influence vesicle size for high-transition-temperature lipids, the FRR was found to be the most influential factor controlling the size of vesicles consisting of low-transition-temperature lipids and of polymer-based nanoparticles. The biological activity of the resulting constructs was confirmed by in vitro transfection of pDNA constructs using cationic nanoprecipitated vesicles. Design of experiments and multivariate data analysis revealed the mathematical relationship and significance of the factors TFR and FRR in the microfluidics process with respect to liposome size, polydispersity and transfection efficiency. Multivariate tools were used to cluster and predict specific in vivo immune responses, dependent on key liposome adjuvant characteristics, upon delivery of a tuberculosis antigen in a vaccine candidate. The addition of a low-solubility model drug (propofol) in the nanoprecipitation method resulted in significantly higher solubilisation of the drug within the liposomal bilayer compared with the control method. The microfluidics method underwent scale-up by increasing the channel diameter and parallelising the mixers in a planar fashion, resulting in an overall 40-fold increase in throughput. Furthermore, microfluidic tools were developed based on microfluidics-directed tangential flow filtration, allowing continuous manufacturing, purification and concentration of liposomal drug products.
Abstract:
The primary aim of this dissertation is to develop data mining tools for knowledge discovery in biomedical data when multiple (homogeneous or heterogeneous) sources of data are available. The central hypothesis is that, when information from multiple data sources is used appropriately and effectively, knowledge discovery can be achieved better than is possible from a single source. Recent advances in high-throughput technology have enabled biomedical researchers to generate large volumes of diverse types of data on a genome-wide scale. These data include DNA sequences, gene expression measurements, and much more; they provide the motivation for building analysis tools to elucidate the modular organization of the cell. The challenges include efficiently and accurately extracting information from the multiple data sources, representing the information effectively, developing analytical tools, and interpreting the results in the context of the domain. The first part considers the application of feature-level integration to design classifiers that discriminate between soil types. The machine learning tools SVM and KNN were used to successfully distinguish between several soil samples. The second part considers clustering using multiple heterogeneous data sources. The resulting Multi-Source Clustering (MSC) algorithm was shown to perform better than clustering methods that use only a single data source or a simple feature-level integration of heterogeneous data sources. The third part proposes a new approach to effectively incorporate incomplete data into clustering analysis. Adapted from the K-means algorithm, the Generalized Constrained Clustering (GCC) algorithm makes use of incomplete data in the form of constraints to perform exploratory analysis, and novel approaches for extracting such constraints were proposed. For sufficiently large constraint sets, the GCC algorithm outperformed the MSC algorithm. The last part considers the problem of providing a theme-specific environment for mining multi-source biomedical data. The database PlasmoTFBM, focusing on gene regulation in Plasmodium falciparum, contains diverse information and has a simple interface that allows biologists to explore the data. It provided a framework for comparing different analytical tools for predicting regulatory elements and for designing useful data mining tools. The conclusion is that the experiments reported in this dissertation strongly support the central hypothesis.
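To make the constraint-based clustering idea concrete, here is a minimal sketch of how pairwise must-link constraints (derived, say, from an auxiliary incomplete data source) can be folded into a K-means-style loop by penalizing assignments that split constrained pairs. The penalty formulation and all names are illustrative assumptions, not the dissertation's GCC algorithm.

```python
# Assumed sketch: K-means with soft must-link constraints.
import numpy as np

def constrained_kmeans(X, k, must_link, penalty=10.0, iters=50, seed=0):
    """X: (n, d) data; must_link: list of (i, j) index pairs expected to
    share a cluster; penalty: cost added for violating a constraint."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)]
    labels = np.zeros(len(X), dtype=int)
    for _ in range(iters):
        for i, x in enumerate(X):
            cost = np.linalg.norm(centers - x, axis=1) ** 2
            for a, b in must_link:            # penalize splitting linked pairs
                other = b if a == i else a if b == i else None
                if other is not None:
                    cost += penalty * (np.arange(k) != labels[other])
            labels[i] = int(np.argmin(cost))
        for c in range(k):                     # standard centroid update
            if np.any(labels == c):
                centers[c] = X[labels == c].mean(axis=0)
    return labels, centers

X = np.vstack([np.random.randn(20, 2), np.random.randn(20, 2) + 5])
labels, _ = constrained_kmeans(X, k=2, must_link=[(0, 5), (25, 30)])
```

A soft penalty, rather than a hard feasibility rule, lets noisy or conflicting constraints be overridden by the data when necessary.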
Abstract:
Microarray technology provides a high-throughput technique for studying gene expression. Microarrays can help us diagnose different types of cancer, understand biological processes, assess host responses to drugs and pathogens, find markers for specific diseases, and much more. Microarray experiments generate large amounts of data, so effective data processing and analysis are critical for making reliable inferences from the data. The first part of this dissertation addresses the problem of finding an optimal set of genes (biomarkers) to classify a set of samples as diseased or normal. Three statistical gene selection methods (GS, GS-NR, and GS-PCA) were developed to identify sets of genes that best differentiate between samples. A comparative study of different classification tools was performed, and the best combinations of gene selection methods and classifiers for multi-class cancer classification were identified. For most of the benchmark cancer data sets, the gene selection method proposed in this dissertation, GS, outperformed the other gene selection methods. Classifiers based on Random Forests, neural network ensembles, and K-nearest neighbors (KNN) showed consistently good performance. A striking commonality among these classifiers is that they all use a committee-based approach, suggesting that ensemble classification methods are superior. The same biological problem may be studied at different research labs and/or performed using different lab protocols or samples; in such situations, it is important to combine the results from these efforts. The second part of the dissertation addresses the problem of pooling the results from different independent experiments to obtain improved results. Four statistical pooling techniques (Fisher's inverse chi-square method, the Logit method, Stouffer's Z-transform method, and the Liptak-Stouffer weighted Z-method) were investigated and applied to the problem of identifying cell cycle-regulated genes in two different yeast species; as a result, improved sets of cell cycle-regulated genes were identified. The last part of the dissertation explores the effectiveness of wavelet data transforms for the task of clustering. Discrete wavelet transforms, with an appropriate choice of wavelet bases, were shown to be effective in producing clusters that were biologically more meaningful.
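Two of the pooling techniques named above are available off the shelf; the short example below combines p-values for one hypothetical gene measured in three independent experiments. The p-values and the weights (which stand in for per-study sample sizes in the Liptak-Stouffer weighted variant) are made up for illustration.

```python
# Worked example: pooling p-values across independent experiments.
from scipy.stats import combine_pvalues

p_values = [0.04, 0.10, 0.03]                 # one gene, three experiments

stat, p_fisher = combine_pvalues(p_values, method="fisher")
stat, p_stouffer = combine_pvalues(p_values, method="stouffer")
stat, p_weighted = combine_pvalues(p_values, method="stouffer",
                                   weights=[30, 50, 20])   # assumed weights

print(f"Fisher: {p_fisher:.4f}  Stouffer: {p_stouffer:.4f}  "
      f"weighted: {p_weighted:.4f}")
```

Each method rewards consistent moderate evidence across studies, so the pooled p-value can be smaller than any individual one.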
Abstract:
To carry out their specific roles in the cell, genes and gene products often work together in groups, forming many relationships among themselves and with other molecules. Such relationships include physical protein-protein interactions, regulatory relationships, metabolic relationships, genetic relationships, and much more. With advances in science and technology, high-throughput technologies have been developed that can simultaneously detect tens of thousands of pairwise protein-protein and protein-DNA interactions. However, the data generated by high-throughput methods are prone to noise; furthermore, the technology itself has limitations and cannot detect all kinds of relationships between genes and their products. There is thus a pressing need to investigate all kinds of relationships and their roles in a living system using bioinformatic approaches, a central challenge in Computational Biology and Systems Biology. This dissertation focuses on exploring relationships between genes and gene products using bioinformatic approaches; specifically, we consider problems related to regulatory relationships, protein-protein interactions, and semantic relationships between genes. A regulatory element is an important pattern or "signal", often located in the promoter of a gene, which is used in the process of turning a gene "on" or "off". Predicting regulatory elements is a key step in exploring the regulatory relationships between genes and gene products, and in this dissertation we consider the problem of improving their prediction by using comparative genomics data. With regard to protein-protein interactions, we developed bioinformatics techniques to estimate support for the data on these interactions. While protein-protein interactions and regulatory relationships can be detected by high-throughput biological techniques, semantic relationships cannot be detected by a single technique but can be inferred from multiple sources of biological data. The contributions of this thesis are the development and application of a set of bioinformatic approaches that address the challenges mentioned above. These include (i) an EM-based algorithm that improves the prediction of regulatory elements using comparative genomics data, (ii) an approach for estimating the support of protein-protein interaction data, with application to the functional annotation of genes, (iii) a novel method for inferring functional networks of genes, and (iv) techniques for clustering genes using multi-source data.
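As background for contribution (i), the sketch below shows a compact, MEME-like EM loop for motif (regulatory element) discovery, assuming one motif occurrence per promoter. It omits the comparative-genomics weighting and every real-world detail of the dissertation's algorithm; it is a sketch of the generic EM idea, not the thesis method.

```python
# Assumed sketch: EM for a single fixed-width motif, one site per sequence.
import numpy as np

BASES = "ACGT"

def em_motif(seqs, w, iters=30, seed=0):
    rng = np.random.default_rng(seed)
    pwm = rng.dirichlet(np.ones(4), size=w)          # (w, 4) motif model
    for _ in range(iters):
        counts = np.full((w, 4), 0.1)                # pseudocounts
        for s in seqs:
            # E-step: posterior over every start position in this sequence
            scores = np.array([
                np.prod([pwm[j, BASES.index(s[i + j])] for j in range(w)])
                for i in range(len(s) - w + 1)])
            post = scores / scores.sum()
            # M-step contribution: expected base counts under the posterior
            for i, p in enumerate(post):
                for j in range(w):
                    counts[j, BASES.index(s[i + j])] += p
        pwm = counts / counts.sum(axis=1, keepdims=True)
    return pwm

promoters = ["ACGTGACGTTTA", "TTACGTGATTGC", "GGACGTGAGGTA"]
print(em_motif(promoters, w=6).round(2))   # converges near the shared ACGTGA
```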
Abstract:
Buffered crossbar switches have recently attracted considerable attention as the next generation of high-speed interconnects. They are a special type of crossbar switch with a dedicated buffer at each crosspoint of the crossbar. They demonstrate unique advantages over traditional unbuffered crossbar switches, such as high throughput, low latency, and asynchronous packet scheduling. However, since crosspoint buffers are expensive on-chip memories, it is desirable for each crosspoint to have only a small buffer. This dissertation proposes a series of practical algorithms and techniques for efficient packet scheduling in buffered crossbar switches. To reduce the hardware cost of such switches and make them scalable, we considered partially buffered crossbars, whose crosspoint buffers can be of arbitrarily small size. Firstly, we introduced a hybrid scheme called the Packet-mode Asynchronous Scheduling Algorithm (PASA) to schedule best-effort traffic. PASA combines the features of both distributed and centralized scheduling algorithms and can directly handle variable-length packets without Segmentation And Reassembly (SAR). We showed by theoretical analysis that it achieves 100% throughput for any admissible traffic in a crossbar with a speedup of two. Moreover, outputs in PASA have a high probability of avoiding the more time-consuming centralized scheduling process, and thus make fast scheduling decisions. Secondly, we proposed the Fair Asynchronous Segment Scheduling (FASS) algorithm to handle guaranteed-performance traffic with explicit flow rates. FASS reduces the required crosspoint buffer size by dividing packets into shorter segments before transmission. It also provides tight, constant performance guarantees by emulating the ideal Generalized Processor Sharing (GPS) model. Furthermore, FASS requires no speedup for the crossbar, lowering the hardware cost and improving the switch capacity. Thirdly, we presented a bandwidth allocation scheme called Queue Length Proportional (QLP) to apply FASS to best-effort traffic. QLP dynamically obtains a feasible bandwidth allocation matrix based on queue length information, and thus helps the crossbar switch to be more work-conserving. The feasibility and stability of QLP were proved for both uniform and non-uniform traffic distributions. Hence, based on the bandwidth allocation of QLP, FASS can also achieve 100% throughput for best-effort traffic in a crossbar without speedup.
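To illustrate the queue-length-proportional idea, the sketch below allocates each input-output pair a rate share proportional to its queue length, scaled so that no input or output port is oversubscribed (every row and column of the rate matrix sums to at most 1). The normalization rule is our own illustrative choice, not necessarily the dissertation's exact QLP scheme.

```python
# Assumed sketch of queue-length-proportional (QLP) bandwidth allocation.
import numpy as np

def qlp_allocate(queue_len: np.ndarray) -> np.ndarray:
    """queue_len: (N, N) matrix, entry (i, j) = backlog from input i to
    output j. Returns a feasible rate matrix (row and column sums <= 1)."""
    q = queue_len.astype(float)
    row = q.sum(axis=1, keepdims=True)      # load offered by each input
    col = q.sum(axis=0, keepdims=True)      # load destined to each output
    scale = np.maximum(row, col)            # the binding port for each pair
    return np.divide(q, scale, out=np.zeros_like(q), where=scale > 0)

Q = np.array([[4, 0, 2],
              [1, 3, 0],
              [0, 2, 2]])
R = qlp_allocate(Q)
print(R.round(2))
print(R.sum(axis=1).round(2), R.sum(axis=0).round(2))  # all <= 1
```

Dividing each entry by the larger of its row and column totals guarantees feasibility, since each row sum is at most the row total divided by itself, and likewise for columns.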