930 results for Protein Array Analysis -- methods
Abstract:
A broad review of technologically focused work concerning biomolecules at interfaces is presented. The emphasis is on developments in interfacial biomolecular engineering that may have a practical impact in bioanalysis, tissue engineering, emulsion processing or bioseparations. We also review methods for fabrication in an attempt to draw out those approaches that may be useful for product manufacture, and briefly review methods for analysing the resulting interfacial nanostructures. From this review we conclude that the generation of knowledge and innovation at the nanoscale far exceeds our ability to translate this innovation into practical outcomes addressing a market need, and that significant technological challenges exist. A particular challenge in this translation is to understand how the structural properties of biomolecules control the assembled architecture, which in turn defines product performance, and how this relationship is affected by the chosen manufacturing route. This structure-architecture-process-performance (SAPP) interaction problem is the familiar laboratory scale-up challenge in disguise. A further challenge will be to interpret biomolecular self- and directed-assembly reactions using tools of chemical reaction engineering, enabling rigorous manufacturing optimization of self-assembly laboratory techniques. We conclude that many of the technological problems facing this field are addressable using tools of modern chemical and biomolecular engineering, in conjunction with knowledge and skills from the underpinning sciences. (c) 2005 Elsevier Ltd. All rights reserved.
Abstract:
The aim of this study was to identify a set of genetic polymorphisms that efficiently divides methicillin-resistant Staphylococcus aureus (MRSA) strains into groups consistent with the population structure. The rationale was that such polymorphisms could underpin rapid real-time PCR or low-density array-based methods for monitoring MRSA dissemination in a cost-effective manner. Previously, the authors devised a computerized method for identifying sets of single nucleotide polymorphisms (SNPs) with high resolving power that are defined by multilocus sequence typing (MLST) databases, and also developed a real-time PCR method for interrogating a seven-member SNP set for genotyping S. aureus. Here, it is shown that these seven SNPs efficiently resolve the major MRSA lineages and define 27 genotypes. The SNP-based genotypes are consistent with the MRSA population structure as defined by eBURST analysis. The capacity of binary markers to improve resolution was tested using 107 diverse MRSA isolates of Australian origin that encompass nine SNP-based genotypes. The addition of the virulence-associated genes cna, pvl and bbp/sdrE, and the integrated plasmids pT181, pI258 and pUB110, resolved the nine SNP-based genotypes into 21 combinatorial genotypes. Subtyping of the SCCmec locus revealed new SCCmec types and increased the number of combinatorial genotypes to 24. It was concluded that these polymorphisms provide a facile means of assigning MRSA isolates into well-recognized lineages.
Abstract:
Aims: Characterization of the representative protozoan Acanthamoeba polyphaga surface carbohydrate exposure by a novel combination of flow cytometry and ligand-receptor analysis. Methods and Results: Trophozoite and cyst morphological forms were exposed to a panel of FITC-lectins. Population fluorescence associated with FITC-lectin binding to acanthamoebal surface moieties was ascertained by flow cytometry. Increasing concentrations of representative FITC-lectins, saturation binding and determination of Kd and relative Bmax values were employed to characterize carbohydrate residue exposure. FITC-lectins specific for N-acetylglucosamine, N-acetylgalactosamine and mannose/glucose were readily bound by trophozoite and cyst surfaces. Minor incremental increases in FITC-lectin concentration resulted in significant differences in surface fluorescence intensity and supported the calculation of ligand-binding determinants, Kd and relative Bmax, which gave a trophozoite and cyst rank order of lectin affinity and surface receptor presence. Conclusions: Trophozoites and cysts expose similar surface carbohydrate residues, foremost amongst which is N-acetylglucosamine, in varying orientation and availability. Significance and Impact of the Study: The outlined versatile combination of flow cytometry and ligand-receptor analysis allowed the characterization of surface carbohydrate exposure by protozoan morphological forms and in turn will support a valid comparison of carbohydrate exposure by other single-cell protozoa and eukaryotic microbes analysed in the same manner.
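As an illustration of how Kd and relative Bmax can be recovered from saturation binding data of this kind, the sketch below fits a one-site binding model to a hypothetical FITC-lectin titration (the concentrations and fluorescence values are invented, not the study's):

```python
import numpy as np
from scipy.optimize import curve_fit

def one_site_binding(L, Bmax, Kd):
    # Specific binding at free ligand concentration L (one-site model)
    return Bmax * L / (Kd + L)

# Hypothetical FITC-lectin titration: concentration (ug/mL) vs
# median surface fluorescence (arbitrary units)
conc = np.array([0.5, 1.0, 2.0, 4.0, 8.0, 16.0, 32.0])
fluor = np.array([310.0, 545.0, 880.0, 1250.0, 1580.0, 1790.0, 1900.0])

(Bmax, Kd), _ = curve_fit(one_site_binding, conc, fluor, p0=[2000.0, 5.0])
print(f"Bmax ~ {Bmax:.0f} a.u., Kd ~ {Kd:.1f} ug/mL")
```

Relative Bmax values fitted this way for different lectins give the rank order of surface receptor presence described in the abstract.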
Abstract:
The use of quantitative methods has become increasingly important in the study of neurodegenerative disease. Disorders such as Alzheimer's disease (AD) are characterized by the formation of discrete, microscopic, pathological lesions which play an important role in pathological diagnosis. This article reviews the advantages and limitations of the different methods of quantifying the abundance of pathological lesions in histological sections, including estimates of density, frequency, coverage, and the use of semiquantitative scores. The major sampling methods by which these quantitative measures can be obtained from histological sections, including plot or quadrat sampling, transect sampling, and point-quarter sampling, are also described. In addition, the data analysis methods commonly used to analyse quantitative data in neuropathology, including analyses of variance (ANOVA) and principal components analysis (PCA), are discussed. These methods are illustrated with reference to particular problems in the pathological diagnosis of AD and dementia with Lewy bodies (DLB).
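The ANOVA step mentioned above can be sketched minimally as follows; the lesion-density values and region names are invented for illustration:

```python
from scipy.stats import f_oneway

# Hypothetical senile-plaque densities (lesions/mm^2) counted in
# quadrats from three cortical regions
frontal = [12.1, 9.8, 14.3, 11.0, 10.5]
temporal = [18.4, 16.9, 21.2, 17.5, 19.8]
occipital = [8.2, 7.5, 9.9, 6.8, 8.8]

# One-way ANOVA: do mean densities differ between regions?
F, p = f_oneway(frontal, temporal, occipital)
print(f"F = {F:.2f}, p = {p:.4f}")
```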
Abstract:
Stereology and other image analysis methods have enabled rapid and objective quantitative measurements to be made on histological sections. These measurements may include total volumes, surfaces, lengths and numbers of cells and blood vessels or pathological lesions. Histological features, however, may not be randomly distributed across a section but exhibit 'dispersion', a departure from randomness either towards regularity or aggregation. Information on population dispersion may be valuable not only in understanding the two- or three-dimensional structure but also in elucidating the pathogenesis of lesions in pathological conditions. This article reviews some of the statistical methods available for studying dispersion. These range from simple tests of whether the distribution of a histological feature departs significantly from random to more complex methods which can detect the intensity of aggregation and the sizes, distribution and spacing of the clusters.
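The simplest such test, the variance-to-mean (index of dispersion) test, can be sketched as follows; the quadrat counts are invented for illustration:

```python
import numpy as np
from scipy.stats import chi2

def dispersion_index(counts):
    """Variance-to-mean ratio of quadrat counts.

    Under complete spatial randomness (Poisson), I ~ 1; I > 1 suggests
    aggregation, I < 1 regularity. (n - 1) * I is approximately
    chi-square distributed with n - 1 degrees of freedom.
    """
    counts = np.asarray(counts, dtype=float)
    n = counts.size
    I = counts.var(ddof=1) / counts.mean()
    stat = (n - 1) * I
    # two-sided p-value from the chi-square approximation
    p = 2 * min(chi2.cdf(stat, n - 1), chi2.sf(stat, n - 1))
    return I, p

# Hypothetical lesion counts in 12 contiguous quadrats along a transect
counts = [0, 1, 0, 9, 7, 0, 1, 8, 0, 0, 6, 1]
I, p = dispersion_index(counts)
print(f"I = {I:.2f}, p = {p:.4f}")  # I >> 1 indicates clustering
```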
Abstract:
This book is aimed primarily at microbiologists who are undertaking research and who require a basic knowledge of statistics to analyse their experimental data. Computer software employing a wide range of data analysis methods is widely available to experimental scientists. The availability of this software, however, makes it essential that investigators understand the basic principles of statistics. Statistical analysis of data can be complex with many different methods of approach, each of which applies in a particular experimental circumstance. Hence, it is possible to apply an incorrect statistical method to data and to draw the wrong conclusions from an experiment. The purpose of this book, which has its origin in a series of articles published in the Society for Applied Microbiology journal ‘The Microbiologist’, is to present the basic logic of statistics as clearly as possible and thereby to dispel some of the myths that often surround the subject. The 28 ‘Statnotes’ deal with various topics that are likely to be encountered, including the nature of variables, the comparison of means of two or more groups, non-parametric statistics, analysis of variance, correlating variables, and more complex methods such as multiple linear regression and principal components analysis. In each case, the relevant statistical method is illustrated with examples drawn from experiments in microbiological research. The text incorporates a glossary of the most commonly used statistical terms and there are two appendices designed to aid the investigator in the selection of the most appropriate test.
Abstract:
Having access to suitably stable, functional recombinant protein samples underpins diverse academic and industrial research efforts to understand the workings of the cell in health and disease. Synthesising a protein in recombinant host cells typically allows the isolation of the pure protein in quantities much higher than those found in the protein's native source. Yeast is a popular host as it is a eukaryote with similar synthetic machinery to the native human source cells of many proteins of interest, while also being quick, easy, and cheap to grow and process. Even in these cells the production of some proteins can be plagued by low functional yields. We have identified molecular mechanisms and culture parameters underpinning high yields and have consolidated our findings to engineer improved yeast cell factories. In this chapter, we provide an overview of the opportunities available to improve yeast as a host system for recombinant protein production.
Abstract:
The use of quantitative methods has become increasingly important in the study of neuropathology and especially in neurodegenerative disease. Disorders such as Alzheimer's disease (AD) and the frontotemporal dementias (FTD) are characterized by the formation of discrete, microscopic, pathological lesions which play an important role in pathological diagnosis. This chapter reviews the advantages and limitations of the different methods of quantifying pathological lesions in histological sections including estimates of density, frequency, coverage, and the use of semi-quantitative scores. The sampling strategies by which these quantitative measures can be obtained from histological sections, including plot or quadrat sampling, transect sampling, and point-quarter sampling, are described. In addition, data analysis methods commonly used to analyse quantitative data in neuropathology, including analysis of variance (ANOVA), polynomial curve fitting, multiple regression, classification trees, and principal components analysis (PCA), are discussed. These methods are illustrated with reference to quantitative studies of a variety of neurodegenerative disorders.
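A principal components analysis of lesion-density data can be sketched with a centred-and-scaled singular value decomposition; the case-by-lesion-type matrix below is hypothetical:

```python
import numpy as np

# Hypothetical lesion densities (rows = cases; columns = lesion types:
# plaques, tangles, Lewy bodies, glial inclusions)
X = np.array([
    [22.0, 18.0,  1.0, 0.5],
    [25.0, 21.0,  0.5, 0.8],
    [ 4.0,  3.0, 15.0, 2.0],
    [ 6.0,  5.0, 12.0, 1.5],
    [15.0, 12.0,  6.0, 1.0],
])

# Centre and scale each variable, then take the SVD: the right singular
# vectors are the principal components, and the squared singular values
# give the variance explained by each component
Z = (X - X.mean(axis=0)) / X.std(axis=0, ddof=1)
U, s, Vt = np.linalg.svd(Z, full_matrices=False)
explained = s**2 / np.sum(s**2)
print("variance explained per component:", np.round(explained, 3))
```

A dominant first component of this kind is what separates cases along an AD-like versus DLB-like lesion profile in such analyses.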
Abstract:
Current debate within forensic authorship analysis has tended to polarise those who argue that analysis methods should reflect a strong cognitive theory of idiolect and others who see less of a need to look behind the stylistic variation of the texts they are examining. This chapter examines theories of idiolect and asks how useful or necessary they are to the practice of forensic authorship analysis. Taking a specific text messaging case the chapter demonstrates that methodologically rigorous, theoretically informed authorship analysis need not appeal to cognitive theories of idiolect in order to be valid. By considering text messaging forensics, lessons will be drawn which can contribute to wider debates on the role of theories of idiolect in forensic casework.
Abstract:
The hypoxia-inducible factor (HIF) is a key regulator of the transcriptional response to hypoxia. While the mechanism underpinning HIF activation is well understood, little is known about its resolution. Both the protein and the mRNA levels of HIF-1α (but not HIF-2α) were decreased in intestinal epithelial cells exposed to prolonged hypoxia. Coincident with this, microRNA (miRNA) array analysis revealed multiple hypoxia-inducible miRNAs. Among these was miRNA-155 (miR-155), which is predicted to target HIF-1α mRNA. We confirmed the hypoxic upregulation of miR-155 in cultured cells and intestinal tissue from mice exposed to hypoxia. Furthermore, a role for HIF-1α in the induction of miR-155 in hypoxia was suggested by the identification of hypoxia response elements in the miR-155 promoter and confirmed experimentally. Application of miR-155 decreased the HIF-1α mRNA, protein, and transcriptional activity in hypoxia, and neutralization of endogenous miR-155 reversed the resolution of HIF-1α stabilization and activity. Based on these data and a mathematical model of HIF-1α suppression by miR-155, we propose that miR-155 induction contributes to an isoform-specific negative-feedback loop for the resolution of HIF-1α activity in cells exposed to prolonged hypoxia, leading to oscillatory behavior of HIF-1α-dependent transcription. © 2011, American Society for Microbiology.
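The negative-feedback loop described above can be caricatured with a minimal two-variable ODE model (this is an illustrative sketch, not the authors' fitted model; all parameter values are invented): HIF-1α induces miR-155, and miR-155 represses HIF-1α production, so the HIF-1α response peaks and then resolves under sustained hypoxia.

```python
import numpy as np
from scipy.integrate import solve_ivp

def feedback(t, y, k_h=1.0, k_m=0.5, d_h=0.3, d_m=0.2, K=0.5, n=4):
    # H = HIF-1a activity, M = miR-155 level (arbitrary units).
    # Hypoxia drives H; H induces M; M represses H production (Hill term).
    H, M = y
    dH = k_h / (1.0 + (M / K) ** n) - d_h * H
    dM = k_m * H - d_m * M
    return [dH, dM]

sol = solve_ivp(feedback, (0.0, 60.0), [0.0, 0.0], dense_output=True)
H_early = sol.sol(5.0)[0]   # HIF-1a shortly after hypoxia onset
H_late = sol.sol(60.0)[0]   # HIF-1a after prolonged hypoxia
print(f"H(5) = {H_early:.2f}, H(60) = {H_late:.2f}")
```

With these illustrative parameters the trajectory overshoots and then relaxes, the qualitative behaviour the abstract attributes to the isoform-specific feedback loop.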
Abstract:
The article proposes a model for managing information about program flow analysis for conducting computer experiments with program transformations. It describes the architecture and context of the flow analysis subsystem within the Specialized Knowledge Bank on Program Transformations, as well as the language used to represent flow analysis methods in the knowledge bank.
Abstract:
Purpose: To assess the validity and repeatability of objective compared to subjective contact lens fit analysis. Methods: Thirty-five subjects (aged 22.0 ± 3.0 years) wore two different soft contact lens designs. Four lens fit variables: centration, horizontal lag, post-blink movement in up-gaze and push-up recovery speed were assessed subjectively (four observers) and objectively from slit-lamp biomicroscopy captured images and video. The analysis was repeated a week later. Results: The average of the four experienced observers was compared to objective measures, but centration, movement on blink, lag and push-up recovery speed all varied significantly between them (p < 0.001). Horizontal lens centration was on average close to central as assessed both objectively and subjectively (p > 0.05). The 95% confidence interval of subjective repeatability was better than objective assessment (±0.128 mm versus ±0.168 mm, p = 0.417), but utilised only 78% of the objective range. Vertical centration assessed objectively showed a slight inferior decentration (0.371 ± 0.381 mm) with good inter- and intrasession repeatability (p > 0.05). Movement-on-blink was estimated lower subjectively than measured objectively (0.269 ± 0.179 mm versus 0.352 ± 0.355 mm; p = 0.035), but had better repeatability (±0.124 mm versus ±0.314 mm 95% confidence interval) unless correcting for the smaller range (47%). Horizontal lag was estimated lower subjectively (0.562 ± 0.259 mm) than measured objectively (0.708 ± 0.374 mm, p < 0.001), had poorer repeatability (±0.132 mm versus ±0.089 mm 95% confidence interval) and had a smaller range (63%). Subjective categorisation of push-up speed of recovery showed reasonable differentiation relative to objective measurement (p < 0.001).
Conclusions: The objective image analysis allows an accurate, reliable and repeatable assessment of soft contact lens fit characteristics, being a useful tool for research and optimisation of lens fit in clinical practice.
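The ± repeatability figures quoted above correspond to Bland-Altman-style 95% limits on test-retest differences; a minimal sketch with invented centration measurements:

```python
import numpy as np

# Hypothetical test-retest vertical centration measurements (mm) for
# ten lenses, taken one week apart
visit1 = np.array([0.41, 0.35, 0.28, 0.52, 0.33, 0.47, 0.30, 0.39, 0.44, 0.36])
visit2 = np.array([0.38, 0.40, 0.25, 0.49, 0.37, 0.43, 0.34, 0.36, 0.47, 0.33])

diff = visit1 - visit2
bias = diff.mean()                # mean test-retest difference
cor = 1.96 * diff.std(ddof=1)    # 95% coefficient of repeatability
print(f"bias = {bias:+.3f} mm, repeatability = +/-{cor:.3f} mm")
```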
Abstract:
Computing the similarity between two protein structures is a crucial task in molecular biology, and has been extensively investigated. Many protein structure comparison methods can be modeled as maximum weighted clique problems in specific k-partite graphs, referred to here as alignment graphs. In this paper we present both a new integer programming formulation for solving such clique problems and a dedicated branch and bound algorithm for solving the maximum cardinality clique problem. Both approaches have been integrated into VAST, a software for aligning protein 3D structures widely used at the National Center for Biotechnology Information, as an alternative to its original clique solver, which is based on the well-known Bron and Kerbosch algorithm (BK). Our computational results on real protein alignment instances show that our branch and bound algorithm is up to 116 times faster than BK.
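For reference, the basic Bron-Kerbosch enumeration (the BK baseline mentioned above, here without pivoting) can be sketched on a toy graph:

```python
def bron_kerbosch(R, P, X, adj, cliques):
    """Enumerate all maximal cliques (basic Bron-Kerbosch, no pivoting).

    R = current clique, P = candidate vertices, X = excluded vertices.
    """
    if not P and not X:
        cliques.append(set(R))
        return
    for v in list(P):
        bron_kerbosch(R | {v}, P & adj[v], X & adj[v], adj, cliques)
        P.remove(v)
        X.add(v)

# Toy graph as an adjacency map (edges: 0-1, 0-2, 1-2, 1-3, 2-3, 3-4)
adj = {
    0: {1, 2}, 1: {0, 2, 3}, 2: {0, 1, 3},
    3: {1, 2, 4}, 4: {3},
}
cliques = []
bron_kerbosch(set(), set(adj), set(), adj, cliques)
maximum = max(cliques, key=len)
print(maximum)  # a maximum clique of size 3
```

A maximum clique is then simply the largest of the maximal cliques found, which is the quantity the branch and bound algorithm computes directly without enumerating all of them.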
Abstract:
Data fluctuation in multiple measurements of Laser Induced Breakdown Spectroscopy (LIBS) greatly affects the accuracy of quantitative analysis. A new LIBS quantitative analysis method based on the Robust Least Squares Support Vector Machine (RLS-SVM) regression model is proposed. The usual way to enhance the analysis accuracy is to improve the quality and consistency of the emission signal, such as by averaging the spectral signals or spectrum standardization over a number of laser shots. The proposed method focuses more on how to enhance the robustness of the quantitative analysis regression model. The proposed RLS-SVM regression model originates from the Weighted Least Squares Support Vector Machine (WLS-SVM) but has an improved segmented weighting function and residual error calculation according to the statistical distribution of measured spectral data. Through the improved segmented weighting function, the information on the spectral data in the normal distribution will be retained in the regression model while the information on the outliers will be restrained or removed. Copper elemental concentration analysis experiments of 16 certified standard brass samples were carried out. The average value of relative standard deviation obtained from the RLS-SVM model was 3.06% and the root mean square error was 1.537%. The experimental results showed that the proposed method achieved better prediction accuracy and better modeling robustness compared with the quantitative analysis methods based on Partial Least Squares (PLS) regression, standard Support Vector Machine (SVM) and WLS-SVM. It was also demonstrated that the improved weighting function had better comprehensive performance in model robustness and convergence speed, compared with the four known weighting functions.
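A robust weighting scheme of this general kind can be sketched with iteratively reweighted least squares; the Huber weight function below is a generic stand-in for the paper's segmented weighting function, and the calibration data are invented:

```python
import numpy as np

def robust_linear_fit(x, y, c=1.345, iters=20):
    """Iteratively reweighted least squares with Huber weights.

    Points whose residuals look normally distributed keep weight ~1;
    outliers are down-weighted in proportion to their residual size.
    """
    A = np.column_stack([x, np.ones_like(x)])
    w = np.ones_like(y)
    for _ in range(iters):
        W = np.sqrt(w)
        beta, *_ = np.linalg.lstsq(A * W[:, None], y * W, rcond=None)
        r = y - A @ beta
        # robust scale estimate from the median absolute deviation
        s = 1.4826 * np.median(np.abs(r - np.median(r)))
        u = np.abs(r) / max(s, 1e-12)
        w = np.where(u <= c, 1.0, c / u)  # Huber weight function
    return beta

# Hypothetical LIBS calibration: line intensity vs Cu concentration,
# with one outlying laser shot
conc = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
intensity = np.array([2.1, 3.9, 6.2, 8.0, 30.0, 12.1])  # 5th point is an outlier
slope, intercept = robust_linear_fit(conc, intensity)
print(f"slope = {slope:.2f}, intercept = {intercept:.2f}")
```

An ordinary least squares fit of the same data would be pulled strongly toward the outlying shot; the reweighting suppresses it, which is the robustness property the RLS-SVM model aims at for the full regression problem.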
Abstract:
2000 Mathematics Subject Classification: Primary 90C31; Secondary 62C12, 62P05, 93C41.