28 results for MOLECULAR TYPING METHODS
Abstract:
Conotoxins are valuable probes of receptors and ion channels because of their small size and highly selective activity. alpha-Conotoxin EpI, a 16-residue peptide from the mollusk-hunting Conus episcopatus, has the amino acid sequence GCCSDPRCNMNNPDY(SO3H)C-NH2 and appears to be an extremely potent and selective inhibitor of the alpha 3 beta 2 and alpha 3 beta 4 neuronal subtypes of the nicotinic acetylcholine receptor (nAChR). The desulfated form of EpI ([Tyr(15)]EpI) has a potency and selectivity for the nAChR similar to those of EpI. Here we describe the crystal structure of [Tyr(15)]EpI solved at a resolution of 1.1 Angstrom using the direct-methods program SnB. The asymmetric unit has a total of 284 non-hydrogen atoms, making this one of the largest structures solved de novo by direct methods. The [Tyr(15)]EpI structure brings to six the number of alpha-conotoxin structures that have been determined to date. Four of these, [Tyr(15)]EpI, PnIA, PnIB, and MII, have an alpha 4/7 cysteine framework and are selective for the neuronal subtype of the nAChR. The structure of [Tyr(15)]EpI has the same backbone fold as the other alpha 4/7-conotoxin structures, supporting the notion that this conotoxin cysteine framework and spacing give rise to a conserved fold. The surface charge distribution of [Tyr(15)]EpI is similar to that of PnIA and PnIB but is likely to be different from that of MII, suggesting that [Tyr(15)]EpI and MII may have different binding modes for the same receptor subtype.
Abstract:
Over recent years databases have become an extremely important resource for biomedical research. Immunology research is increasingly dependent on access to extensive biological databases to extract existing information, plan experiments, and analyse experimental results. This review describes 15 immunological databases that have appeared over the last 30 years. In addition, important issues regarding database design and the potential for misuse of information contained within these databases are discussed. Access pointers are provided for the major immunological databases and also for a number of other immunological resources accessible over the World Wide Web (WWW). (C) 2000 Elsevier Science B.V. All rights reserved.
Abstract:
A number of techniques have been developed to study the disposition of drugs in the head and, in particular, the role of the blood-brain barrier (BBB) in drug uptake. The techniques can be divided into three groups: in-vitro, in-vivo, and in-situ. The most suitable method depends on the purpose(s) and requirements of the particular study being conducted. In-vitro techniques involve the isolation of cerebral endothelial cells so that direct investigations of these cells can be carried out. The most recent preparations are able to maintain the structural and functional characteristics of the BBB by simultaneously culturing endothelial cells with astrocytic cells. The main advantages of the in-vitro methods are the elimination of anaesthetics and surgery. In-vivo methods consist of a diverse range of techniques and include the traditional Brain Uptake Index and indicator diffusion methods, as well as microdialysis and positron emission tomography. In-vivo methods maintain the cells and vasculature of an organ in their normal physiological states and anatomical position within the animal. However, their shortcomings include renal and hepatic elimination of solutes as well as the inability to control blood flow. In-situ techniques, including the perfused head, are more technically demanding. However, these models allow the composition and flow rate of the artificial perfusate to be varied. This review is intended as a guide for selecting the most appropriate method for studying drug uptake in the brain.
Abstract:
Background: A variety of methods for prediction of peptide binding to major histocompatibility complex (MHC) molecules have been proposed. These methods are based on binding motifs, binding matrices, hidden Markov models (HMM), or artificial neural networks (ANN). There has been little prior work on the comparative analysis of these methods. Materials and Methods: We compared the performance of six methods, including binding matrices and motifs, ANNs, and HMMs, applied to the prediction of peptide binding to two human MHC class I molecules. Results: The selection of the optimal prediction method depends on the amount of available data (the number of peptides of known binding affinity to the MHC molecule of interest), the biases in the data set, and the intended purpose of the prediction (screening of a single protein versus mass screening). When little or no peptide data are available, binding motifs are the most useful alternative to random guessing or to the use of a complete set of overlapping peptides for selection of candidate binders. As the number of known peptide binders increases, binding matrices and HMMs become more useful predictors. ANNs and HMMs are the predictive methods of choice for MHC alleles with more than 100 known binding peptides. Conclusion: The ability of bioinformatic methods to reliably predict MHC-binding peptides, and thereby potential T-cell epitopes, has major implications for clinical immunology, particularly in the area of vaccine design.
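To make the matrix-based approach concrete, below is a minimal sketch of how a binding matrix (position-specific scoring matrix) scores a candidate 9-mer. The matrix entries, the neutral-score convention, and the anchor positions are illustrative assumptions, not measured coefficients for any real MHC allele.

```python
# Minimal sketch of binding-matrix scoring for 9-mer peptides.
# All numeric entries are illustrative placeholders, not measured
# coefficients for any real MHC allele.

NEUTRAL = 0.0  # contribution of a residue with no entry at a position

# One dict per peptide position, mapping amino acid -> additive score.
matrix = [{} for _ in range(9)]
matrix[1] = {"L": 1.5, "M": 1.0}   # position 2: a common primary anchor
matrix[8] = {"V": 1.2, "L": 0.9}   # position 9: a common C-terminal anchor

def score_peptide(peptide: str) -> float:
    """Sum per-position contributions; higher scores suggest stronger binding."""
    if len(peptide) != len(matrix):
        raise ValueError("peptide length must match matrix length")
    return sum(pos.get(aa, NEUTRAL) for aa, pos in zip(peptide, matrix))

# Example: a 9-mer with leucine at position 2 and valine at position 9.
print(score_peptide("LLFGYPVYV"))  # -> 2.7
```

Motif methods can be seen as the binary special case of this scheme (a position either matches the motif or does not), which is why matrices become preferable once enough quantitative binding data exist to estimate per-residue contributions.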
Abstract:
Computational models complement laboratory experimentation for efficient identification of MHC-binding peptides and T-cell epitopes. Methods for prediction of MHC-binding peptides include binding motifs, quantitative matrices, artificial neural networks, hidden Markov models, and molecular modelling. Models derived by these methods have been used successfully for prediction of T-cell epitopes in cancer, autoimmunity, infectious disease, and allergy. For maximum benefit, computer models must be treated as experiments analogous to standard laboratory procedures and performed according to strict standards. This requires careful selection of data for model building, and adequate testing and validation. A range of web-based databases and MHC-binding prediction programs are available. Although some available prediction programs for particular MHC alleles have reasonable accuracy, there is no guarantee that all models produce good-quality predictions. In this article, we present and discuss a framework for the modelling, testing, and application of computational methods used in the prediction of T-cell epitopes. (C) 2004 Elsevier Inc. All rights reserved.
Abstract:
Sound application of molecular epidemiological principles requires working knowledge of both molecular biological and epidemiological methods. Molecular tools have become an increasingly important part of studying the epidemiology of infectious agents. They have allowed the aetiological agent within a population to be diagnosed with greater efficiency and accuracy than conventional diagnostic tools, increased our understanding of the pathogenicity, virulence, and host-parasite relationships of the aetiological agent, provided information on the genetic structure and taxonomy of the parasite, and allowed the zoonotic potential of previously unidentified agents to be determined. This review describes the concept of epidemiology and proper study design, describes the array of currently available molecular biological tools, and provides examples of studies that have integrated both disciplines to successfully unravel zoonotic relationships that would otherwise be impossible to resolve using conventional diagnostic tools. The current limitations of applying these tools, including cautions that need to be addressed during their application, are also discussed. (c) 2005 Australian Society for Parasitology Inc. Published by Elsevier Ltd. All rights reserved.
Abstract:
Smoothing the potential energy surface for structure optimization is a general and commonly applied strategy. We propose a combination of soft-core potential energy functions and a variation of the diffusion equation method to smooth potential energy surfaces, which is applicable to complex systems such as protein structures. The performance of the method was demonstrated by comparison with simulated annealing, using the refinement of the undecapeptide Cyclosporin A as a test case. Simulations were repeated many times using different initial conditions and structures, since the methods are heuristic and the results are only meaningful in a statistical sense.
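As a rough illustration of the diffusion-equation idea (not of the paper's soft-core functions), the sketch below smooths a toy 1-D double-well potential: evolving V under the diffusion equation for time t is equivalent to convolving it with a Gaussian of variance 2t, and the barrier between the minima shrinks as t grows. The potential, grid, and smoothing times are assumptions for illustration.

```python
# Toy 1-D illustration of diffusion-equation smoothing: evolving V under
# dV/dt = d^2V/dx^2 for time t equals convolving V with a Gaussian of
# variance 2t. The double-well potential is a placeholder, not a real
# molecular energy surface.
import numpy as np

x = np.linspace(-3.0, 3.0, 601)
dx = x[1] - x[0]
V = (x**2 - 1.0) ** 2            # double well: minima at x = +/-1, barrier at x = 0

def smooth(V, t, dx):
    """Convolve V with a Gaussian of variance 2*t (diffusion time t)."""
    sigma = np.sqrt(2.0 * t)
    half = int(4.0 * sigma / dx)
    k = np.arange(-half, half + 1) * dx
    kernel = np.exp(-k**2 / (2.0 * sigma**2))
    kernel /= kernel.sum()
    return np.convolve(V, kernel, mode="same")

for t in (0.005, 0.05, 0.2):
    Vs = smooth(V, t, dx)
    barrier = Vs[len(x) // 2] - Vs.min()       # height at x = 0 above the wells
    print(f"t = {t}: barrier = {barrier:.3f}") # shrinks as smoothing increases
```

The practical point of smoothing is visible here: minimizers started anywhere on the smoothed surface find the (merged) basin easily, and the structure can then be tracked back as the smoothing is gradually removed.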
Abstract:
Dispersal, or the amount of dispersion between an individual's birthplace and that of its offspring, is of great importance in population biology, behavioural ecology, and conservation; however, obtaining direct estimates from field data on natural populations can be problematic. The prickly forest skink, Gnypetoscincus queenslandiae, is a rainforest-endemic skink from the Wet Tropics of Australia. Because of its log-dwelling habits and lack of definite nesting sites, a demographic estimate of dispersal distance is difficult to obtain. Neighbourhood size, defined as 4πDσ² (where D is the population density and σ² the mean axial squared parent-offspring dispersal rate), dispersal, and density were estimated directly and indirectly for this species using mark-recapture and microsatellite data, respectively, on lizards captured at a local geographical scale of 3 ha. Mark-recapture data gave a dispersal rate of 843 m²/generation (assuming a generation time of 6.5 years), a time-scaled density of 13,635 individuals·generation/km² and, hence, a neighbourhood size of 144 individuals. A genetic method based on the multilocus (10 loci) microsatellite genotypes of individuals and their geographical locations indicated a significant isolation-by-distance pattern and gave a neighbourhood size of 69 individuals, with a 95% confidence interval of 48 to 184. This translates into a dispersal rate of 404 m²/generation when using the mark-recapture density estimate, or a time-scaled population density of 6,520 individuals·generation/km² when using the mark-recapture dispersal rate estimate. The relationship between the two categories of neighbourhood size, dispersal, and density estimates, and reasons for any disparities, are discussed.
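As an arithmetic check on the mark-recapture figures quoted above, the neighbourhood-size formula can be applied directly; the only subtlety is converting the density to per-square-metre units. A minimal sketch:

```python
# Reproduce the mark-recapture neighbourhood-size estimate Nb = 4*pi*D*sigma^2
# from the quantities quoted in the abstract, converting density to per m^2.
import math

sigma2 = 843.0            # mean axial squared dispersal, m^2/generation
D = 13_635.0 / 1.0e6      # individuals*generation/km^2 -> per m^2

Nb = 4.0 * math.pi * D * sigma2
print(f"Nb = {Nb:.0f}")   # -> 144, matching the value reported above
```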
Abstract:
This paper presents a comparison of the surface diffusivities of hydrocarbons in activated carbon. The surface diffusivities are obtained from the analysis of kinetic data collected using three different kinetics methods: the constant molar flow, differential adsorption bed, and differential permeation methods. In general, the values of surface diffusivity obtained by these methods agree with each other, and the surface diffusivity is found to increase very rapidly with loading. Such a fast increase cannot be accounted for by a thermodynamic Darken factor, and surface heterogeneity only partially accounts for the fast rise of surface diffusivity with loading. Surface diffusivities of methane, ethane, propane, n-butane, n-hexane, benzene, and ethanol on activated carbon are reported.
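For context, the thermodynamic correction mentioned above is the Darken relation, D_s(θ) = D_s(0)·(∂ln p/∂ln q); for a Langmuir isotherm the factor reduces to 1/(1−θ), which grows only modestly at moderate fractional loadings θ. The sketch below (the isotherm choice and the zero-loading diffusivity are assumptions for illustration) shows why this factor alone cannot produce a very fast rise.

```python
# Darken-corrected surface diffusivity under an assumed Langmuir isotherm,
# for which the thermodynamic factor d(ln p)/d(ln q) = 1 / (1 - theta).
# D_S0 is a placeholder zero-loading diffusivity, not a fitted value.

D_S0 = 1.0e-9  # m^2/s, illustrative zero-loading surface diffusivity

def darken_ds(theta: float) -> float:
    """Surface diffusivity at fractional loading theta (Langmuir isotherm)."""
    return D_S0 / (1.0 - theta)

for theta in (0.1, 0.3, 0.5, 0.7):
    print(f"theta = {theta:.1f}: D_s = {darken_ds(theta):.2e} m^2/s")
# Even at theta = 0.7 the factor is only ~3.3x, a comparatively modest increase.
```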
Abstract:
Current serotyping methods classify Pasteurella multocida into five capsular serogroups (serogroups A, B, D, E, and F) and 16 somatic serotypes (serotypes 1 to 16). In the present study, we have developed a multiplex PCR assay as a rapid alternative to the conventional capsular serotyping system. The serogroup-specific primers used in this assay were designed following identification, sequence determination, and analysis of the capsular biosynthetic loci of each capsular serogroup. The entire capsular biosynthetic loci of P. multocida A:1 (X-73) and B:2 (M1404) have been cloned and sequenced previously (J. Y. Chung, Y. M. Zhang, and B. Adler, FEMS Microbiol. Lett. 166:289-296, 1998; J. D. Boyce, J. Y. Chung, and B. Adler, Vet. Microbiol. 72:121-134, 2000). Nucleotide sequence analysis of the biosynthetic region (region 2) from each of the remaining three serogroups, D, E, and F, identified serogroup-specific regions and gave an indication of the capsular polysaccharide composition. The multiplex capsular PCR assay was highly specific, and its results, with the exception of those for some serogroup F strains, correlated well with conventional serotyping results. Sequence analysis of the strains that gave conflicting results confirmed the validity of the multiplex PCR and indicated that these strains were in fact capsular serogroup A. The multiplex PCR will clarify the distinction between the closely related serogroups A and F and constitutes a rapid assay for the definitive classification of P. multocida capsular types.
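To illustrate the logic of such an assay (not the published primers or loci), here is a toy in-silico sketch in which each serogroup is assigned a specific primer pair and an isolate is typed by which pair would amplify. All sequences below are hypothetical placeholders.

```python
# Toy in-silico analogue of multiplex-PCR capsular typing. Each serogroup
# gets one specific primer pair; an isolate is assigned the serogroup whose
# forward primer and reverse-primer binding site both occur in its genome.
# Primer sequences are hypothetical placeholders, NOT the published
# P. multocida serogroup-specific primers.

PRIMERS = {
    "A": ("TGCCAAAATCGCAGTC", "TTGCCATCATTGTCAG"),  # placeholder pair
    "B": ("CATTTATCCAAGCTCC", "GCCCGAGAGTTTCAAT"),  # placeholder pair
    # ... serogroups D, E, and F would each have their own pair ...
}

COMP = str.maketrans("ACGT", "TGCA")

def reverse_complement(seq: str) -> str:
    return seq.translate(COMP)[::-1]

def capsular_type(genome: str) -> str:
    """Return the first serogroup whose primer pair matches, else 'untypable'."""
    for serogroup, (fwd, rev) in PRIMERS.items():
        if fwd in genome and reverse_complement(rev) in genome:
            return serogroup
    return "untypable"

# A fabricated genome fragment carrying the serogroup A primer sites:
genome = "AAA" + PRIMERS["A"][0] + "NNNN" + reverse_complement(PRIMERS["A"][1]) + "TTT"
print(capsular_type(genome))  # -> "A"
```

In a real multiplex reaction all primer pairs are present in one tube and the serogroup is read from the size of the amplified band; the sketch only captures the specificity logic.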
Abstract:
Molecular evolution has been considered to be essentially a stochastic process, little influenced by the pace of phenotypic change. This assumption was challenged by a study (Omland 1997) that demonstrated an association between rates of morphological and molecular change estimated from total-evidence phylogenies, a finding that led some researchers to challenge molecular date estimates of major evolutionary radiations. Here we show that Omland's (1997) result is probably due to methodological bias, particularly phylogenetic nonindependence, rather than being indicative of an underlying evolutionary phenomenon. We apply three new methods specifically designed to overcome phylogenetic bias to 13 published phylogenetic datasets for vertebrate taxa, each of which includes both morphological characters and DNA sequence data. We find no evidence of an association between rates of molecular and morphological change.
Abstract:
There are several competing methods commonly used to solve the energy-grained master equations describing gas-phase reactive systems, but there is little guidance in the literature on selecting an appropriate method for any particular problem. In this paper we directly compare several variants of spectral and numerical-integration methods in terms of the computer time required to calculate the solution and the range of temperature and pressure conditions under which each method succeeds. The test case used in the comparison is an important reaction in combustion chemistry and incorporates reversible and irreversible bimolecular reaction steps as well as isomerizations between multiple unimolecular species. While numerical integration of the ODE system with a stiff ODE integrator is not the fastest method overall, it is the fastest method applicable to all conditions.
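For illustration, the stiff-integrator route can be sketched on a toy three-state master equation dp/dt = Kp using SciPy's BDF method. The rate matrix below is a placeholder with rates spanning eight orders of magnitude (the source of the stiffness), not a real combustion system.

```python
# Toy energy-grained master equation dp/dt = K @ p solved with a stiff
# (BDF) integrator. K is an illustrative probability-conserving rate
# matrix whose rates span eight orders of magnitude, which is what makes
# the system stiff; it is not a real combustion mechanism.
import numpy as np
from scipy.integrate import solve_ivp

K = np.array([
    [-1.0e6,  1.0e2,  0.0   ],
    [ 1.0e6, -1.1e2,  1.0e-2],
    [ 0.0,    1.0e1, -1.0e-2],
])  # each column sums to zero: total population is conserved

p0 = np.array([1.0, 0.0, 0.0])   # all population starts in state 0

sol = solve_ivp(lambda t, p: K @ p, (0.0, 1.0e3), p0,
                method="BDF", rtol=1e-8, atol=1e-12)

print(sol.y[:, -1])  # long-time populations approach the null space of K
```

An explicit integrator would be forced to take steps on the 1e-6 timescale of the fastest process for the whole run; the implicit BDF scheme takes large steps once the fast modes have relaxed, which is why stiff integration remains usable across the full range of conditions even when spectral methods are faster where they apply.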