998 results for Ecological niche modeling
Abstract:
Nuclear receptors are a major component of signal transduction in animals. They mediate the regulatory activities of many hormones, nutrients, and metabolites on the homeostasis and physiology of cells and tissues. Modeling the corresponding regulatory networks is therefore of high interest. While molecular and cell biology studies of individual promoters have provided important mechanistic insight, a more complex picture is emerging from genome-wide studies. The regulatory circuitry of nuclear receptor-regulated gene expression networks, and their response to cellular signaling, appears highly dynamic and involves long- as well as short-range chromatin interactions. We review how progress in understanding the kinetics and regulation of cofactor recruitment, together with the development of new genomic methods, provides opportunities but also a major challenge for modeling nuclear receptor-mediated regulatory networks.
Abstract:
Macroevolutionary and microevolutionary studies provide complementary explanations of the processes shaping the evolution of niche breadth. Macroevolutionary approaches scrutinize factors such as the temporal and spatial environmental heterogeneities that drive differentiation among species. Microevolutionary studies, in contrast, focus on the processes that affect intraspecific variability. We combine these perspectives by using macroevolutionary models in a comparative study of intraspecific variability. We address potential differences in rates of evolution of niche breadth and position in annual and perennial plants of the Eriogonoideae subfamily of the Polygonaceae. We anticipated higher rates of evolution in annuals than in perennials owing to differences in generation time that are paralleled by rates of molecular evolution. Instead, we found that perennial eriogonoid species present greater environmental tolerance (wider climate niche) than annual species. Niche breadth of perennial species has evolved two to four times faster than in annuals, while niche optimum has diversified more rapidly among annual species than among perennials. Niche breadth and average elevation of species are correlated. Moreover, niche breadth increases more rapidly with mean species elevation in perennials than in annuals. Our results suggest that both environmental gradients and life-history strategy influence rates and patterns of niche breadth evolution.
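The rate comparison described above can be illustrated with a toy calculation: under Brownian-motion evolution on a star phylogeny of depth T, tip values are distributed as N(root, sigma^2 * T), so the rate can be estimated from the tip variance. The star-phylogeny setup, simulated data, and rate values below are illustrative assumptions, not the study's Eriogonoideae data or its actual comparative models.

```python
import numpy as np

def bm_rate(tips, depth):
    """Simple rate estimate sigma^2 = var(tips) / depth on a star phylogeny
    (var() uses the ML-style 1/n normalization)."""
    tips = np.asarray(tips, dtype=float)
    return tips.var() / depth

rng = np.random.default_rng(0)
T = 10.0  # depth of the (hypothetical) star phylogeny

# Simulated niche-breadth values for two groups with different true rates.
perennial = rng.normal(0.0, np.sqrt(2.0 * T), size=200)  # true rate 2.0
annual    = rng.normal(0.0, np.sqrt(0.5 * T), size=200)  # true rate 0.5

ratio = bm_rate(perennial, T) / bm_rate(annual, T)  # near the true ratio of 4
```

Real comparative analyses would use the full phylogeny and likelihood-based models rather than this star-tree shortcut, but the variance-to-rate logic is the same.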
Abstract:
PURPOSE: Aerodynamic drag plays an important role in performance for athletes practicing sports that involve high-velocity motions. In giant slalom, the skier continuously changes his/her body posture, which affects the energy dissipated through aerodynamic drag. It is therefore important to quantify this energy to understand the dynamic behavior of the skier. The aims of this study were to model the aerodynamic drag of alpine skiers under simulated giant-slalom conditions and to apply these models in a field experiment to estimate the energy dissipated through aerodynamic drag. METHODS: The aerodynamic characteristics of 15 recreational male and female skiers were measured in a wind tunnel while they held nine different skiing-specific postures. The drag and the frontal area were recorded simultaneously for each posture. Four generalized and two individualized models of the drag coefficient were built, using different sets of parameters. These models were subsequently applied in a field study designed to compare the aerodynamic energy losses between a dynamic and a compact skiing technique. RESULTS: The generalized models estimated aerodynamic drag with an accuracy between 11.00% and 14.28%, and the individualized models with an accuracy between 4.52% and 5.30%. The individualized model used for the field study showed that using a dynamic technique led to 10% more aerodynamic drag energy loss than using a compact technique. DISCUSSION: The individualized models were capable of discriminating between the techniques performed by advanced skiers and appeared more accurate than the generalized models. The models presented here offer a simple yet accurate method to estimate the aerodynamic drag acting upon alpine skiers while they move rapidly through the range of positions typical of turning technique.
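The energy computation described above can be sketched from first principles: drag force is 0.5 * rho * CdA * v^2, so the power dissipated is that force times speed, integrated over the run. The linear dependence of the drag area on a posture variable and all coefficients below are illustrative assumptions, not the paper's fitted models.

```python
import numpy as np

RHO = 1.2  # air density (kg/m^3), assumed

def drag_area(h):
    """Hypothetical drag-area model: a more compact posture (small relative
    hip height h) gives a smaller Cd*A. Coefficients are made up."""
    return 0.2 + 0.4 * h  # m^2

def drag_energy(t, v, h):
    """Energy dissipated by drag: integral of 0.5*rho*CdA(h)*v^3 dt (J)."""
    power = 0.5 * RHO * drag_area(h) * v**3  # drag force times speed
    # trapezoidal integration over the run
    return float(np.sum(0.5 * (power[:-1] + power[1:]) * np.diff(t)))

t = np.linspace(0.0, 10.0, 501)           # a 10 s run
v = np.full_like(t, 20.0)                 # constant 20 m/s, for illustration
E_compact = drag_energy(t, v, np.full_like(t, 0.2))
E_dynamic = drag_energy(t, v, np.full_like(t, 0.6))  # larger posture -> more loss
```

In a real field application, v and the posture variable would come from synchronized kinematic measurements rather than constants.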
Abstract:
The past four decades have witnessed explosive growth in the field of network-based facility location modeling. This is not surprising, since location policy is one of the most profitable areas of applied systems analysis in regional science, offering ample theoretical and applied challenges. Location-allocation models seek the location of facilities and/or services (e.g., schools, hospitals, and warehouses) so as to optimize one or several objectives, generally related to the efficiency of the system or to the allocation of resources. This paper concerns the location of public-sector facilities or services in discrete space or networks, such as emergency services (ambulances, fire stations, and police units), school systems, and postal facilities. The paper is structured as follows: first, we focus on public facility location models that use some type of coverage criterion, with special emphasis on emergency services. The second section examines models based on the P-Median problem and some of the issues faced by planners when implementing this formulation in real-world locational decisions. Finally, the last section examines new trends in public-sector facility location modeling.
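The P-Median problem mentioned above can be sketched with a tiny brute-force solver: choose p sites that minimize total demand-weighted distance to each node's nearest open facility. The distance matrix and demands below are made-up illustrative data; real instances are solved with integer programming or heuristics, not enumeration.

```python
from itertools import combinations

def p_median(dist, demand, p):
    """Brute-force P-Median: dist[i][j] is the distance from demand node i
    to candidate site j; returns the best p-subset of sites and its cost."""
    n_sites = len(dist[0])
    best_cost, best_sites = float("inf"), None
    for sites in combinations(range(n_sites), p):
        # each demand node is served by its nearest open facility
        cost = sum(w * min(row[j] for j in sites)
                   for w, row in zip(demand, dist))
        if cost < best_cost:
            best_cost, best_sites = cost, sites
    return best_sites, best_cost

# 4 demand nodes, 4 co-located candidate sites (hypothetical distances)
dist = [[0, 2, 9, 5],
        [2, 0, 4, 7],
        [9, 4, 0, 3],
        [5, 7, 3, 0]]
sites, cost = p_median(dist, demand=[1, 1, 1, 1], p=2)
```

Coverage models differ in objective (maximize demand within a distance standard instead of minimizing weighted distance) but share this site-selection structure.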
Abstract:
We implemented Biot-type porous wave equations in a pseudo-spectral numerical modeling algorithm for the simulation of Stoneley waves in porous media. Fourier and Chebyshev methods are used to compute the spatial derivatives along the horizontal and vertical directions, respectively. To prevent overly short time steps caused by the small grid spacing at the top and bottom of the model, a consequence of the Chebyshev operator, the mesh is stretched in the vertical direction. A major benefit of the Chebyshev operator is that it allows an explicit treatment of interfaces. Boundary conditions can be implemented with a characteristics approach, with the characteristic variables evaluated at zero viscosity. We use this approach to model seismic wave propagation at the interface between a fluid and a porous medium. Each medium is represented by a different mesh, and the two meshes are connected through the characteristics-based domain-decomposition method described above. We show an experiment with sealed-pore boundary conditions, in which we first compare the numerical solution to an analytical solution. We then show the influence of heterogeneity and viscosity of the pore fluid on the propagation of the Stoneley wave and surface waves in general.
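The vertical Chebyshev operator, and the clustering of collocation points near the boundaries that motivates the mesh stretching, can be illustrated with the standard Chebyshev differentiation matrix. This is a generic sketch of the operator, not the paper's poroelastic implementation.

```python
import numpy as np

def cheb(N):
    """Chebyshev differentiation matrix D and collocation points x on [-1, 1].
    Note that the points cluster near x = +/-1 with spacing O(1/N^2), which
    is what forces small time steps unless the mesh is stretched."""
    if N == 0:
        return np.zeros((1, 1)), np.array([1.0])
    x = np.cos(np.pi * np.arange(N + 1) / N)
    c = np.hstack([2.0, np.ones(N - 1), 2.0]) * (-1.0) ** np.arange(N + 1)
    X = np.tile(x, (N + 1, 1)).T
    dX = X - X.T
    D = np.outer(c, 1.0 / c) / (dX + np.eye(N + 1))
    D -= np.diag(D.sum(axis=1))  # diagonal entries via negative row sums
    return D, x

D, x = cheb(16)
# Spectral accuracy: the derivative of f(x) = x**2 is exact (2x) up to
# machine precision, since f is a low-degree polynomial.
err = np.max(np.abs(D @ x**2 - 2 * x))
```

Applying D to each vertical column of the wavefield gives the vertical derivative; the Fourier direction would instead use FFT-based differentiation on a periodic, evenly spaced grid.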
Abstract:
This paper presents a review of methodology for semi-supervised modeling with kernel methods in settings where the manifold assumption holds. It concerns environmental data modeling on natural manifolds, such as the complex topographies of mountainous regions, where environmental processes are highly influenced by the relief. These relations, possibly regionalized and nonlinear, can be modeled from data with machine learning by using digital elevation models in semi-supervised kernel methods. The tools and methodological issues discussed in the study include feature selection and semi-supervised Support Vector algorithms. A real case study devoted to data-driven modeling of meteorological fields illustrates the approach.
Abstract:
The methodology for generating a homology model of the T1 TCR-PbCS-K(d) major histocompatibility complex (MHC) class I complex is presented. The resulting model provides a qualitative explanation of the effect of over 50 different mutations in the region of the complementarity determining region (CDR) loops of the T cell receptor (TCR), the peptide, and the MHC's alpha(1)/alpha(2) helices. The peptide is modified by an azido benzoic acid photoreactive group, which is part of the epitope recognized by the TCR. The construction of the model makes use of closely related homologs (the A6 TCR-Tax-HLA A2 complex, the 2C TCR, the 14.3.d TCR Vbeta chain, the 1934.4 TCR Valpha chain, and the H-2 K(b)-ovalbumin peptide), ab initio sampling of CDR loop conformations, and experimental data to select from the set of possibilities. The model shows a complex arrangement of the CDR3alpha, CDR1beta, CDR2beta, and CDR3beta loops that leads to the highly specific recognition of the photoreactive group. The protocol can be applied systematically to a series of related sequences, permitting analysis at the structural level of the large TCR repertoire specific for a given peptide-MHC complex.
Abstract:
Synaptic plasticity involves a complex molecular machinery with various protein interactions, but it is not yet clear how its components give rise to the different aspects of synaptic plasticity. Here we ask whether it is possible to mathematically model synaptic plasticity using known substances only. We present a model of a multistable biochemical reaction system and use it to simulate the plasticity of synaptic transmission in long-term potentiation (LTP) or long-term depression (LTD) after repeated excitation of the synapse. According to our model, we can distinguish between two phases: first, a "viscosity" phase after the first excitation, whose effects, such as the activation of NMDA receptors and CaMKII, fade out in the absence of further excitations; second, a "plasticity" phase actuated by an identical subsequent excitation that follows after a short time interval and causes the temporarily altered concentrations of AMPA subunits in the postsynaptic membrane to be stabilized. We show that positive feedback is the crucial element in the core chemical reaction, i.e. the activation of the short-tail AMPA subunit by NEM-sensitive factor, which allows multiple stable equilibria to be generated. Three stable equilibria are related to LTP, LTD, and a third, unfixed state called ACTIVE. Our mathematical approach shows that modeling synaptic multistability is possible using known substances such as NMDA and AMPA receptors, NEM-sensitive factor, glutamate, CaMKII, and brain-derived neurotrophic factor. Furthermore, we show that the heteromeric combination of short- and long-tail AMPA receptor subunits fulfills the function of a memory tag.
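The role of positive feedback in creating multiple stable equilibria can be illustrated with a minimal one-variable toy system: saturating autocatalytic activation minus first-order decay. The variable, the Hill-type feedback term, and all parameter values below are illustrative assumptions, not the paper's actual reaction network.

```python
import numpy as np

def simulate(x0, a=4.0, b=1.0, dt=0.01, steps=5000):
    """Euler-integrate dx/dt = a*x^2/(1+x^2) - b*x, a toy bistable switch:
    saturating positive feedback (autocatalysis) minus linear decay.
    With a=4, b=1 the stable equilibria are x=0 and x=2+sqrt(3)."""
    x = x0
    for _ in range(steps):
        x += dt * (a * x**2 / (1.0 + x**2) - b * x)
    return x

low  = simulate(0.1)  # sub-threshold excitation fades back to the basal state
high = simulate(1.0)  # supra-threshold excitation is stabilized at a high state
```

The unstable equilibrium between the two (here 2 - sqrt(3)) acts as the threshold separating transient "viscosity"-like responses from stabilized "plasticity"-like ones.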
Abstract:
The forensic two-trace problem is a perplexing inference problem introduced by Evett (J Forensic Sci Soc 27:375-381, 1987). Different possible ways of wording the competing pair of propositions (i.e., one proposition advanced by the prosecution and one proposition advanced by the defence) led to different quantifications of the value of the evidence (Meester and Sjerps in Biometrics 59:727-732, 2003). Here, we re-examine this scenario with the aim of clarifying the interrelationships that exist between the different solutions, and in this way, produce a global vision of the problem. We propose to investigate the different expressions for evaluating the value of the evidence by using a graphical approach, i.e. Bayesian networks, to model the rationale behind each of the proposed solutions and the assumptions made on the unknown parameters in this problem.
Abstract:
The interpretation of the Wechsler Intelligence Scale for Children-Fourth Edition (WISC-IV) is based on a 4-factor model, which is only partially compatible with the mainstream Cattell-Horn-Carroll (CHC) model of intelligence measurement. The structure of cognitive batteries is frequently analyzed via exploratory factor analysis and/or confirmatory factor analysis. With classical confirmatory factor analysis, almost all cross-loadings between latent variables and measures are fixed to zero in order to allow the model to be identified. However, inappropriate zero cross-loadings can contribute to poor model fit, distorted factors, and biased factor correlations; most importantly, they do not necessarily faithfully reflect theory. To deal with these methodological and theoretical limitations, we used a new statistical approach, Bayesian structural equation modeling (BSEM), with a sample of 249 French-speaking Swiss children (8-12 years). With BSEM, zero-fixed cross-loadings between latent variables and measures are replaced by approximate zeros, based on informative, small-variance priors. Results indicated that a direct hierarchical CHC-based model with 5 factors plus a general intelligence factor represented the structure of the WISC-IV better than did the 4-factor structure and the higher-order models. Because a direct hierarchical CHC model was more adequate, it was concluded that the general factor should be considered a breadth factor rather than a superordinate factor. Because it was possible for us to estimate the influence of each of the latent variables on the 15 subtest scores, BSEM allowed improvement of the understanding of the structure of intelligence tests and the clinical interpretation of the subtest scores.
Abstract:
This monthly report from the Iowa Department of Natural Resources is about the water quality management of Iowa's rivers, streams and lakes.
Abstract:
The activation of the specific immune response against tumor cells is based on the recognition, by CD8+ Cytotoxic T Lymphocytes (CTL), of antigenic peptides (p) presented at the surface of the cell by the class I major histocompatibility complex (MHC). The ability of the so-called T-Cell Receptors (TCR) to discriminate between self and non-self peptides constitutes the most important specific control mechanism against infected cells. The TCR/pMHC interaction has been the subject of much attention in cancer therapy since the design of the adoptive transfer approach, in which T lymphocytes showing a strong response against tumor cells are extracted from the patient, expanded in vitro, and reinfused after immunodepletion, possibly leading to cancer regression. In the last decade, major progress has been achieved by the introduction of engineered lymphocytes. In the meantime, the understanding of the molecular aspects of the TCRpMHC interaction has become essential to guide in vitro and in vivo studies. In 1996, the determination of the first structure of a TCRpMHC complex by X-ray crystallography revealed the molecular basis of the interaction. Since then, molecular modeling techniques have taken advantage of crystal structures to study the conformational space of the complex and to understand the specificity of the recognition of the pMHC by the TCR. At the same time, experimental techniques used to determine the sequences of TCR that bind to a pMHC complex have been used intensively, leading to the collection of large repertoires of TCR sequences that are specific for a given pMHC. There is a growing need for computational approaches capable of predicting the molecular interactions that occur upon TCR/pMHC binding without relying on the time-consuming resolution of a crystal structure. This work presents new approaches to analyze the molecular principles that govern the recognition of the pMHC by the TCR and the subsequent activation of the T-cell.
We first introduce TCRep 3D, a new method to model and study the structural properties of TCR repertoires, based on homology and ab initio modeling. We discuss the methodology in detail and demonstrate that it outperforms state-of-the-art modeling methods in predicting relevant TCR conformations. Two successful applications of TCRep 3D that supported experimental studies on TCR repertoires are presented. Second, we present a rigid-body study of TCRpMHC complexes that provides insight into the approach of the TCR towards the pMHC. We show that the binding mode of the TCR is correctly described by long-distance interactions. Finally, the last section is dedicated to a detailed analysis of an experimental hydrogen-exchange study, which suggests that some regions of the constant domain of the TCR are subject to conformational changes upon binding to the pMHC. We propose a hypothesis of the structural signaling of TCR molecules leading to the activation of the T-cell, based on the analysis of correlated motions in the TCRpMHC structure.
Abstract:
This paper presents a two-factor (Vasicek-CIR) model of the term structure of interest rates and develops its pricing and empirical properties. We assume that default-free discount bond prices are determined by the time to maturity and two factors: the long-term interest rate and the spread. Assuming a certain process for both factors, a general bond pricing equation is derived and a closed-form expression for bond prices is obtained. Empirical evidence of the model's performance in comparison with a double Vasicek model is presented. The main conclusion is that modeling the volatility of the long-term rate process substantially improves the fit to the observed data and reasonably improves the prediction of future movements in medium- and long-term interest rates. However, for shorter maturities, the pricing errors are essentially negligible, and it is not clear which model is best.
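The Vasicek leg of such models admits the standard closed-form zero-coupon bond price, which can be sketched as follows. The parameter values are illustrative, and this is the generic one-factor Vasicek formula, not the paper's two-factor (Vasicek-CIR) pricing expression.

```python
import math

def vasicek_price(r, tau, a, b, sigma):
    """Closed-form Vasicek zero-coupon bond price P(tau) = A * exp(-B * r)
    for the short-rate dynamics dr = a*(b - r) dt + sigma dW."""
    B = (1.0 - math.exp(-a * tau)) / a
    A = math.exp((B - tau) * (a * a * b - 0.5 * sigma * sigma) / (a * a)
                 - sigma * sigma * B * B / (4.0 * a))
    return A * math.exp(-B * r)

# Illustrative parameters: current rate 3%, 5y maturity, mean level 4%
p = vasicek_price(r=0.03, tau=5.0, a=0.5, b=0.04, sigma=0.01)
```

A useful sanity check is the sigma = 0 limit, where the price reduces to discounting along the deterministic mean-reverting rate path; affine two-factor models combine two such exponential-affine terms.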