890 results for phylogeography, consensus approach, ensemble modeling, Pleistocene, ENM, ecological niche modeling
Abstract:
The nicotinic acetylcholine receptor (nAChR) is the major class of neurotransmitter receptors involved in many neurodegenerative conditions, such as schizophrenia and Alzheimer's and Parkinson's diseases. The N-terminal region or ligand binding domain (LBD) of nAChR is located at pre- and post-synaptic sites in the nervous system and mediates synaptic transmission. nAChR acts as the drug target for agonist and competitive antagonist molecules that modulate signal transmission at the nerve terminals. Using the acetylcholine binding protein (AChBP) from Lymnaea stagnalis as the structural template, a homology modeling approach was carried out to build a three-dimensional model of the N-terminal region of human alpha(7)nAChR. This theoretical model is an assembly of five alpha(7) subunits with 5-fold axial symmetry, constituting a channel, with the binding pocket present at the interface region of the subunits. alpha-Neurotoxin is a potent nAChR competitive antagonist that readily blocks the channel, resulting in paralysis. The molecular interaction of alpha-bungarotoxin, a long-chain alpha-neurotoxin from Bungarus multicinctus, with human alpha(7)nAChR was studied. Agonists such as acetylcholine and nicotine, which are involved in a diverse array of biological activities, including enhancement of cognitive performance, were also docked with the theoretical model of human alpha(7)nAChR. These docked complexes were analyzed further to identify the crucial residues involved in the interactions. These results provide details of the interaction of agonists and competitive antagonists with the three-dimensional model of the N-terminal region of human alpha(7)nAChR and thereby point toward the design of novel lead compounds.
Abstract:
This chapter introduces agent-based models (ABMs), their construction, and the pros and cons of their use. Although relatively new, ABMs have great potential for use in ecotoxicological research – their primary advantage being the realistic simulations that can be constructed, particularly their explicit handling of space and time. Examples of their use in ecotoxicology are provided, primarily exemplified by different implementations of the ALMaSS system. The examples presented demonstrate how multiple stressors, landscape structure, details regarding toxicology, animal behavior, and socioeconomic effects can and should be taken into account when constructing simulations for risk assessment. Like ecological systems, the behavior of an ABM at the system level is not simply the mean of the component responses but the sum of the often nonlinear interactions between components in the system; hence this modeling approach opens the door to implementing and testing much more realistic and holistic ecotoxicological models than are currently used.
Abstract:
We argue that population modeling can add value to ecological risk assessment by reducing uncertainty when extrapolating from ecotoxicological observations to relevant ecological effects. We review other methods of extrapolation, ranging from application factors to species sensitivity distributions to suborganismal (biomarker and "-omics") responses to quantitative structure–activity relationships and model ecosystems, drawing attention to the limitations of each. We suggest a simple classification of population models and critically examine each model in an extrapolation context. We conclude that population models have the potential for adding value to ecological risk assessment by incorporating better understanding of the links between individual responses and population size and structure and by incorporating greater levels of ecological complexity. A number of issues, however, need to be addressed before such models are likely to become more widely used. In a science context, these involve challenges in parameterization, questions about appropriate levels of complexity, issues concerning how specific or general the models need to be, and the extent to which interactions through competition and trophic relationships can be easily incorporated.
Abstract:
In this paper, we ask why so much ecological scientific research does not have a greater policy impact in the UK. We argue that there are two potentially important and related reasons for this failing. First, much current ecological science is not being conducted at a scale that is readily meaningful to policy-makers. Second, to make much of this research policy-relevant requires collaborative interdisciplinary research between ecologists and social scientists. However, the challenge of undertaking useful interdisciplinary research only re-emphasises the problems of scale: ecologists and social scientists traditionally frame their research questions at different scales and consider different facets of natural resource management, setting different objectives and using different language. We argue that if applied ecological research is to have greater impact in informing environmental policy, much greater attention needs to be given to the scale of the research efforts as well as to the interaction with social scientists. Such an approach requires an adjustment in existing research and funding infrastructures.
Abstract:
Bayesian decision procedures have already been proposed for and implemented in Phase I dose-escalation studies in healthy volunteers. The procedures have been based on pharmacokinetic responses reflecting the concentration of the drug in blood plasma and are conducted to learn about the dose-response relationship while avoiding excessive concentrations. However, in many dose-escalation studies, pharmacodynamic endpoints such as heart rate or blood pressure are observed, and it is these that should be used to control dose-escalation. These endpoints introduce additional complexity into the modeling of the problem relative to pharmacokinetic responses. Firstly, there are responses available following placebo administrations. Secondly, the pharmacodynamic responses are related directly to measurable plasma concentrations, which in turn are related to dose. Motivated by experience of data from a real study conducted in a conventional manner, this paper presents and evaluates a Bayesian procedure devised for the simultaneous monitoring of pharmacodynamic and pharmacokinetic responses. Account is also taken of the incidence of adverse events. Following logarithmic transformations, a linear model is used to relate dose to the pharmacokinetic endpoint and a quadratic model to relate the latter to the pharmacodynamic endpoint. A logistic model is used to relate the pharmacokinetic endpoint to the risk of an adverse event.
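As an illustration of the model structure described in this abstract, the following is a minimal simulation sketch: a linear model on the log scale links dose to the pharmacokinetic (PK) endpoint, a quadratic model links log-PK to the pharmacodynamic (PD) endpoint, and a logistic model links log-PK to the risk of an adverse event. All parameter values and noise levels are hypothetical placeholders, not values from the paper, and the Bayesian dose-escalation machinery itself is omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative parameter values (hypothetical, not taken from the paper).
theta = dict(a=0.2, b=0.9,          # log-PK = a + b*log(dose) + noise
             c=1.0, d=0.5, e=-0.1,  # log-PD = c + d*logPK + e*logPK**2 + noise
             g=-4.0, h=1.2)         # logit P(adverse event) = g + h*logPK

def simulate_subject(dose, theta, sd_pk=0.3, sd_pd=0.2):
    """Simulate one subject's PK, PD and adverse-event outcome at a given dose."""
    log_pk = theta['a'] + theta['b'] * np.log(dose) + rng.normal(0, sd_pk)
    log_pd = (theta['c'] + theta['d'] * log_pk
              + theta['e'] * log_pk ** 2 + rng.normal(0, sd_pd))
    p_ae = 1.0 / (1.0 + np.exp(-(theta['g'] + theta['h'] * log_pk)))
    ae = rng.random() < p_ae
    return np.exp(log_pk), np.exp(log_pd), ae

for dose in [1, 2, 5, 10]:
    pk, pd, ae = simulate_subject(dose, theta)
    print(f"dose={dose:>3}: PK={pk:6.2f}, PD={pd:6.2f}, AE={ae}")
```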
Abstract:
This investigation deals with the question of when a particular population can be considered to be disease-free. The motivation is the case of BSE, where specific birth cohorts may present distinct disease-free subpopulations. The specific objective is to develop a statistical approach suitable for documenting freedom from disease, in particular freedom from BSE in birth cohorts. The approach is based upon a geometric waiting time distribution for the occurrence of positive surveillance results and formalizes the relationship between design prevalence, cumulative sample size and statistical power. The simple geometric waiting time model is further modified to account for the diagnostic sensitivity and specificity associated with the detection of disease. This is exemplified for BSE using two different models for the diagnostic sensitivity. The model is furthermore modified so that a set of different values for the design prevalence in the surveillance streams can be accommodated (prevalence heterogeneity), and a general expression for the power function is developed. For illustration, numerical results for BSE suggest that currently (data status September 2004) a birth cohort of Danish cattle born after March 1999 is free from BSE with probability (power) of 0.8746 or 0.8509, depending on the choice of a model for the diagnostic sensitivity.
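The core power calculation follows directly from the geometric waiting time model: if each sampled animal tests positive with probability equal to the design prevalence times the diagnostic sensitivity, the probability of at least one positive among n samples gives the power for documenting freedom from disease. A minimal sketch, using illustrative numbers rather than the paper's BSE surveillance figures and assuming perfect specificity:

```python
import math

def freedom_power(n, design_prev, sensitivity=1.0):
    """Probability of >=1 positive among n tested animals if disease is
    present at the design prevalence; based on a geometric waiting time for
    the first positive result. Perfect specificity is assumed for brevity."""
    p_detect = design_prev * sensitivity   # per-animal detection probability
    return 1.0 - (1.0 - p_detect) ** n

# Cumulative sample size needed for 95% power at a 1-in-10,000 design
# prevalence with an 80%-sensitive test (illustrative numbers only).
p = 1e-4 * 0.8
n_needed = math.ceil(math.log(0.05) / math.log(1.0 - p))
print(n_needed, freedom_power(n_needed, 1e-4, 0.8))
```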
Abstract:
Analyses of high-density single-nucleotide polymorphism (SNP) data, such as genetic mapping and linkage disequilibrium (LD) studies, require phase-known haplotypes to allow for the correlation between tightly linked loci. However, current SNP genotyping technology cannot determine phase, which must be inferred statistically. In this paper, we present a new Bayesian Markov chain Monte Carlo (MCMC) algorithm for population haplotype frequency estimation, particularly in the context of LD assessment. The novel feature of the method is the incorporation of a log-linear prior model for population haplotype frequencies. We present simulations to suggest that 1) the log-linear prior model is more appropriate than the standard coalescent process in the presence of recombination (>0.02 cM between adjacent loci), and 2) there is substantial inflation in measures of LD obtained by a "two-stage" approach to the analysis that treats the "best" haplotype configuration as correct, without regard to uncertainty in the recombination process.
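The paper's MCMC algorithm with a log-linear prior is not reproduced here, but the underlying idea of sampling haplotype frequencies when phase is ambiguous can be sketched for the simplest two-locus biallelic case. The hypothetical Gibbs sampler below alternates between imputing phase for double heterozygotes and drawing frequencies from a Dirichlet posterior; a flat Dirichlet prior is substituted for the paper's log-linear prior, so this is a generic illustration rather than the authors' method.

```python
import numpy as np

rng = np.random.default_rng(1)
H = [(0, 0), (0, 1), (1, 0), (1, 1)]   # the four two-locus haplotypes

def compatible_pairs(g):
    """All unordered haplotype pairs consistent with a two-locus genotype,
    coded as copies of allele 1 at each locus."""
    return [(i, j) for i in range(4) for j in range(i, 4)
            if (H[i][0] + H[j][0], H[i][1] + H[j][1]) == g]

def gibbs_haplotype_freqs(genotypes, n_iter=2000):
    freqs = np.full(4, 0.25)
    samples = []
    for it in range(n_iter):
        counts = np.zeros(4)
        for g in genotypes:                      # impute phase per individual
            pairs = compatible_pairs(g)
            w = np.array([(1 if i == j else 2) * freqs[i] * freqs[j]
                          for i, j in pairs])
            i, j = pairs[rng.choice(len(pairs), p=w / w.sum())]
            counts[i] += 1
            counts[j] += 1
        freqs = rng.dirichlet(counts + 1.0)      # flat Dirichlet prior
        if it >= n_iter // 2:                    # discard burn-in
            samples.append(freqs)
    return np.mean(samples, axis=0)

# Ten individuals, three of them double heterozygotes (phase-ambiguous).
data = [(0, 0)] * 3 + [(1, 0)] * 2 + [(1, 1)] * 3 + [(2, 2)] * 2
print(gibbs_haplotype_freqs(data))
```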
Abstract:
The main objectives of this paper are: firstly, to identify key issues related to sustainable intelligent buildings (environmental, social, economic and technological factors) and develop a conceptual model for the selection of appropriate KPIs; secondly, to critically test stakeholders' perceptions and values of selected intelligent building KPIs; and thirdly, to develop a new model for measuring the level of sustainability of sustainable intelligent buildings. This paper uses a consensus-based model (Sustainable Built Environment Tool, SuBETool), which is analysed using the analytic hierarchy process (AHP) for multi-criteria decision-making. The use of the multi-attribute model for priority setting in the sustainability assessment of intelligent buildings is introduced. The paper commences by reviewing the literature on sustainable intelligent buildings research and presents a pilot study investigating the problems of complexity and subjectivity. This study is based upon a survey of perceptions held by selected stakeholders and the values they attribute to selected KPIs. It is argued that the benefit of the newly proposed model (SuBETool) is as a ‘tool’ for ‘comparative’ rather than absolute measurement. It has the potential to provide useful lessons from current sustainability assessment methods for the strategic future of sustainable intelligent buildings, in order to improve a building's performance and to deliver objective outcomes. The findings of this survey enrich the field of intelligent buildings in two ways. Firstly, they give a detailed insight into the selection of sustainable building indicators and their degree of importance. Secondly, they critically test stakeholders' perceptions and values of selected intelligent building KPIs. It is concluded that the priority levels for the selected criteria are largely dependent on the integrated design team, which includes the client, architects, engineers and facilities managers.
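To make the AHP step concrete: priorities for competing criteria are conventionally obtained as the principal eigenvector of a pairwise-comparison matrix, together with Saaty's consistency index. A minimal sketch with a hypothetical 3x3 comparison of environmental, social and economic criteria (the matrix entries are invented for illustration):

```python
import numpy as np

def ahp_priorities(M):
    """Priority weights from a pairwise-comparison matrix via the principal
    eigenvector, plus the consistency index (lambda_max - n) / (n - 1)."""
    eigvals, eigvecs = np.linalg.eig(M)
    k = np.argmax(eigvals.real)
    w = np.abs(eigvecs[:, k].real)
    w /= w.sum()
    n = M.shape[0]
    ci = (eigvals[k].real - n) / (n - 1)
    return w, ci

# Hypothetical comparison: environmental vs. social vs. economic criteria.
M = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])
weights, ci = ahp_priorities(M)
print(weights, ci)
```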
Abstract:
Inverse problems for dynamical system models of cognitive processes comprise the determination of synaptic weight matrices or kernel functions for neural networks or neural/dynamic field models, respectively. We introduce dynamic cognitive modeling as a three-tier top-down approach in which cognitive processes are first described as algorithms that operate on complex symbolic data structures. Second, symbolic expressions and operations are represented by states and transformations in abstract vector spaces. Third, prescribed trajectories through representation space are implemented in neurodynamical systems. We discuss the Amari equation for a neural/dynamic field theory as a special case and show that the kernel construction problem is particularly ill-posed. We suggest a Tikhonov-Hebbian learning method as a regularization technique and demonstrate its validity and robustness for basic examples of cognitive computations.
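A finite-dimensional analogue of the Tikhonov-regularized kernel construction can be sketched as ridge-regularized Hebbian learning: given snapshots of the field along a prescribed trajectory and their target successors, a weight matrix solves Y ≈ WX in regularized least squares, W = Y X^T (X X^T + lambda*I)^-1. This is a simplified discrete stand-in for the Amari-equation kernel problem; the field size, trajectory and regularization strength below are arbitrary.

```python
import numpy as np

def tikhonov_hebbian(X, Y, lam=1e-2):
    """Solve Y ~ W X with Tikhonov regularization:
    W = Y X^T (X X^T + lam*I)^-1.  Columns of X are network states along
    the prescribed trajectory; columns of Y are their target states."""
    n = X.shape[0]
    return Y @ X.T @ np.linalg.inv(X @ X.T + lam * np.eye(n))

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 20))        # 20 snapshots of a 50-unit field
Y = np.roll(X, -1, axis=1)           # target: the next state along the trajectory
W = tikhonov_hebbian(X, Y, lam=0.1)
print(np.linalg.norm(W @ X - Y) / np.linalg.norm(Y))   # relative fitting error
```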
Abstract:
We consider a physical model of the ultrafast evolution of an initial electron distribution in a quantum wire. The electron evolution is described by a quantum-kinetic equation accounting for the interaction with phonons. A Monte Carlo approach has been developed for solving the equation. The corresponding Monte Carlo algorithm is an NP-hard problem with respect to the evolution time. To obtain solutions for long evolution times with small stochastic error, we combine variance reduction techniques with distributed computation. Grid technologies are employed because of the large computational effort imposed by the quantum character of the model.
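The quantum-kinetic estimator itself is beyond a short sketch, but the variance reduction idea it relies on can be illustrated generically. The following compares a plain Monte Carlo estimate of a simple expectation with an antithetic-variates estimate, one standard variance reduction technique; this is a stand-alone illustration, not the algorithm used for the quantum wire model.

```python
import numpy as np

rng = np.random.default_rng(0)

def mc_plain(f, n):
    """Plain Monte Carlo estimate of E[f(U)], U ~ Uniform(0,1)."""
    x = f(rng.random(n))
    return x.mean(), x.std(ddof=1) / np.sqrt(n)

def mc_antithetic(f, n):
    """Antithetic variates: pair each draw u with 1-u so that the
    fluctuations of a monotone integrand partially cancel."""
    u = rng.random(n // 2)
    x = 0.5 * (f(u) + f(1.0 - u))
    return x.mean(), x.std(ddof=1) / np.sqrt(n // 2)

f = lambda u: np.exp(u)               # exact answer: e - 1
print(mc_plain(f, 100_000))           # larger standard error
print(mc_antithetic(f, 100_000))      # same mean, smaller standard error
```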
Abstract:
Semiotics is the study of signs. The application of semiotics in information systems design is based on the notion that information systems are organizations within which agents deploy signs in the form of actions according to a set of norms. An analysis of the relationships among the agents, their actions and the norms would give a better specification of the system. Distributed multimedia systems (DMMS) can be viewed as systems consisting of many dynamic, self-controlled normative agents engaging in complex interaction and processing of multimedia information. This paper reports work on applying the semiotic approach to the design and modeling of DMMS, with emphasis on using semantic analysis under the semiotic framework. A semantic model of DMMS describing various components and their ontological dependencies is presented, which then serves as a design model and is implemented in a semantic database. The benefits of using the semantic database are discussed with reference to various design scenarios.
Abstract:
The paper introduces an efficient construction algorithm for obtaining sparse linear-in-the-weights regression models based on an approach of directly optimizing model generalization capability. This is achieved by utilizing the delete-1 cross-validation concept and the associated leave-one-out test error, also known as the predicted residual sums of squares (PRESS) statistic, without resorting to any other validation data set for model evaluation in the model construction process. Computational efficiency is ensured by using an orthogonal forward regression, but the algorithm incrementally minimizes the PRESS statistic instead of the usual sum of the squared training errors. A local regularization method can naturally be incorporated into the model selection procedure to further enforce model sparsity. The proposed algorithm is fully automatic, and the user is not required to specify any criterion to terminate the model construction procedure. Comparisons with some of the existing state-of-the-art modeling methods are given, and several examples are included to demonstrate the ability of the proposed algorithm to effectively construct sparse models that generalize well.
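For a fixed linear-in-the-weights model, the delete-1 PRESS statistic has a closed form via the hat matrix, with leave-one-out residuals e_i / (1 - h_ii); the paper's contribution is to minimize it incrementally within orthogonal forward regression, which the direct computation below does not reproduce. A minimal sketch on synthetic data:

```python
import numpy as np

def press_statistic(X, y):
    """Leave-one-out (delete-1) PRESS for a linear least-squares model,
    computed in closed form from the hat matrix: e_loo = e / (1 - h_ii)."""
    H = X @ np.linalg.pinv(X.T @ X) @ X.T    # hat matrix
    e = y - H @ y                            # ordinary residuals
    e_loo = e / (1.0 - np.diag(H))           # leave-one-out residuals
    return np.sum(e_loo ** 2)

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
y = X @ np.array([1.0, 0.5, 0.0, 0.0, -2.0]) + rng.normal(scale=0.1, size=100)

# PRESS ranks candidate models without a separate validation set: the
# sparser design here drops an informative column, so its PRESS is worse.
print(press_statistic(X, y), press_statistic(X[:, :3], y))
```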
Abstract:
We introduce a classification-based approach to finding occluding texture boundaries. The classifier is composed of a set of weak learners, which operate on image intensity discriminative features that are defined on small patches and are fast to compute. A database that is designed to simulate digitized occluding contours of textured objects in natural images is used to train the weak learners. The trained classifier score is then used to obtain a probabilistic model for the presence of texture transitions, which can readily be used for line search texture boundary detection in the direction normal to an initial boundary estimate. This method is fast and therefore suitable for real-time and interactive applications. It works as a robust estimator, which requires a ribbon-like search region and can handle complex texture structures without requiring a large number of observations. We demonstrate results both in the context of interactive 2D delineation and of fast 3D tracking and compare its performance with other existing methods for line search boundary detection.
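The line-search step described above can be sketched as follows: starting from an initial boundary estimate, patches are sampled at offsets along the local normal, and the offset maximizing the classifier's boundary probability is kept. The classifier is abstracted as a callable; the toy gradient-based stand-in below is purely illustrative, not the paper's trained ensemble of weak learners.

```python
import numpy as np

def refine_boundary_point(image, point, normal, classifier_prob,
                          search_range=10, patch=5):
    """Slide along the normal of an initial boundary estimate and return the
    position whose local patch maximizes the classifier's boundary score.
    `classifier_prob` stands in for a trained texture-boundary classifier."""
    best_off, best_p = 0, -1.0
    h = patch // 2
    for off in range(-search_range, search_range + 1):
        r, c = np.round(point + off * normal).astype(int)
        window = image[r - h:r + h + 1, c - h:c + h + 1]
        p = classifier_prob(window)
        if p > best_p:
            best_off, best_p = off, p
    return point + best_off * normal, best_p

# Toy example: a vertical step edge and a gradient-magnitude "classifier".
img = np.zeros((64, 64)); img[:, 32:] = 1.0
prob = lambda w: float(np.abs(np.diff(w, axis=1)).mean())
print(refine_boundary_point(img, np.array([32.0, 28.0]),
                            np.array([0.0, 1.0]), prob))
```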
Abstract:
1. Reductions in resource availability, associated with land-use change and agricultural intensification in the UK and Europe, have been linked with the widespread decline of many farmland bird species over recent decades. However, the underlying ecological processes which link resource availability and population trends are poorly understood. 2. We construct a spatial depletion model to investigate the relationship between the population persistence of granivorous birds within the agricultural landscape and the temporal dynamics of stubble field availability, an important source of winter food for many of those species. 3. The model is capable of accurately predicting the distribution of a given number of finches and buntings amongst patches of different stubble types in an agricultural landscape over the course of a winter and assessing the relative value of different landscapes in terms of resource availability. 4. Sensitivity analyses showed that the model is relatively robust to estimates of energetic requirements, search efficiency and handling time but that daily seed survival estimates have a strong influence on model fit. Understanding resource dynamics in agricultural landscapes is highlighted as a key area for further research. 5. There was a positive relationship between the predicted number of bird days supported by a landscape over-winter and the breeding population trend for yellowhammer Emberiza citrinella, a species for which survival has been identified as the primary driver of population dynamics, but not for linnet Carduelis cannabina, a species for which productivity has been identified as the primary driver of population dynamics. 6. Synthesis and applications. We believe this model can be used to guide the effective delivery of over-winter food resources under agri-environment schemes and to assess the impacts on granivorous birds of changing resource availability associated with novel changes in land use. This could be very important in the future as farming adapts to an increasingly dynamic trading environment, in which demands for increased agricultural production must be reconciled with objectives for environmental protection, including biodiversity conservation.
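A minimal daily-depletion sketch in the spirit of the model described above: birds distribute over stubble patches in proportion to remaining seed stocks (an ideal-free assumption), remove a fixed daily requirement, and seeds additionally decay at a daily survival rate (the parameter the sensitivity analysis found most influential). All parameter values are invented for illustration, and behavioral detail such as search efficiency and handling time is omitted.

```python
import numpy as np

def winter_depletion(seed_init, n_birds, days=150,
                     daily_need=10.0, seed_survival=0.99):
    """Return the total bird-days a landscape of stubble patches supports:
    each day the flock spreads over patches in proportion to current seed
    stocks, consumes its requirement, and remaining seeds decay."""
    seeds = np.asarray(seed_init, dtype=float)
    bird_days = 0
    for _ in range(days):
        if seeds.sum() < n_birds * daily_need:
            break                          # landscape can no longer support the flock
        share = seeds / seeds.sum()        # patch occupancy ~ resource density
        seeds -= share * n_birds * daily_need
        seeds = np.clip(seeds, 0.0, None) * seed_survival
        bird_days += n_birds
    return bird_days

# Three stubble patches with hypothetical initial seed stocks.
print(winter_depletion([50_000, 30_000, 20_000], n_birds=40))
```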
Abstract:
The climate belongs to the class of non-equilibrium forced and dissipative systems, for which most results of quasi-equilibrium statistical mechanics, including the fluctuation-dissipation theorem, do not apply. In this paper we show for the first time how the Ruelle linear response theory, developed for studying rigorously the impact of perturbations on general observables of non-equilibrium statistical mechanical systems, can be applied with great success to analyze the climatic response to general forcings. The crucial value of the Ruelle theory lies in the fact that it allows the response of the system to be computed in terms of expectation values of explicit and computable functions of the phase space, averaged over the invariant measure of the unperturbed state. We choose as a test bed a classical version of the Lorenz 96 model, which, in spite of its simplicity, has well-recognized prototypical value: it is a spatially extended one-dimensional model and presents the basic ingredients, such as dissipation, advection and the presence of an external forcing, of the actual atmosphere. We recapitulate the main aspects of the general response theory and propose some new general results. We then analyze the frequency dependence of the response of both local and global observables to perturbations having localized as well as global spatial patterns. We derive analytically several properties of the corresponding susceptibilities, such as asymptotic behavior, validity of Kramers-Kronig relations, and sum rules, whose main ingredient is the causality principle. We show that all the coefficients of the leading asymptotic expansions, as well as the integral constraints, can be written as linear functions of parameters that describe the unperturbed properties of the system, such as its average energy. Some newly obtained empirical closure equations for such parameters make it possible to express these properties as explicit functions of the unperturbed forcing parameter alone for a general class of chaotic Lorenz 96 models. We then verify the theoretical predictions against the outputs of the simulations to a high degree of precision. The theory is used to explain differences in the response of local and global observables, to define the intensive properties of the system, which do not depend on the spatial resolution of the Lorenz 96 model, and to generalize the concept of climate sensitivity to all time scales. We also show how to reconstruct the linear Green function, which maps perturbations of general time patterns into changes in the expectation value of the considered observable for finite as well as infinite time. Finally, we propose a simple yet general methodology for studying general climate change problems on virtually any time scale, by resorting only to well-selected simulations and by taking full advantage of ensemble methods. The specific case of the globally averaged surface temperature response to a general pattern of change of the CO2 concentration is discussed. We believe that the proposed approach may constitute a mathematically rigorous and practically very effective way to approach the problem of climate sensitivity, climate prediction, and climate change from a radically new perspective.
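The test bed itself is easy to reproduce: the Lorenz 96 model evolves n grid-point variables under advection, dissipation and constant forcing F, with dx_i/dt = (x_{i+1} - x_{i-2}) x_{i-1} - x_i + F and cyclic indices. A minimal integration sketch is below; the response experiments described in the abstract would then compare ensemble averages of observables (such as the average energy printed here) between runs with perturbed and unperturbed forcing. The grid size, time step and spin-up length are arbitrary choices, not the paper's settings.

```python
import numpy as np

def lorenz96_tendency(x, forcing):
    """Tendency of the Lorenz 96 model:
    dx_i/dt = (x_{i+1} - x_{i-2}) * x_{i-1} - x_i + F, cyclic in i."""
    return (np.roll(x, -1) - np.roll(x, 2)) * np.roll(x, 1) - x + forcing

def integrate(x0, forcing, dt=0.01, n_steps=5000):
    """Fourth-order Runge-Kutta integration."""
    x = x0.copy()
    for _ in range(n_steps):
        k1 = lorenz96_tendency(x, forcing)
        k2 = lorenz96_tendency(x + 0.5 * dt * k1, forcing)
        k3 = lorenz96_tendency(x + 0.5 * dt * k2, forcing)
        k4 = lorenz96_tendency(x + dt * k3, forcing)
        x += dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6.0
    return x

rng = np.random.default_rng(0)
n = 36                                            # spatial resolution
x = integrate(8.0 + 0.01 * rng.normal(size=n), forcing=8.0)   # spin-up
print(x.mean(), (x ** 2).mean() / 2)              # mean state and average energy
```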