Abstract:
We survey the population genetic basis of social evolution, using a logically consistent set of arguments to cover a wide range of biological scenarios. We start by reconsidering Hamilton's (Hamilton 1964 J. Theoret. Biol. 7, 1-16 (doi:10.1016/0022-5193(64)90038-4)) results for selection on a social trait under the assumptions of additive gene action, weak selection and constant environment and demography. This yields a prediction for the direction of allele frequency change in terms of phenotypic costs and benefits and genealogical concepts of relatedness, which holds for any frequency of the trait in the population, and provides the foundation for further developments and extensions. We then allow for any type of gene interaction within and between individuals, strong selection and fluctuating environments and demography, which may depend on the evolving trait itself. We reach three conclusions pertaining to selection on social behaviours under broad conditions. (i) Selection can be understood by focusing on a one-generation change in mean allele frequency, a computation which underpins the utility of reproductive value weights; (ii) in large populations under the assumptions of additive gene action and weak selection, this change is of constant sign for any allele frequency and is predicted by a phenotypic selection gradient; (iii) under the assumptions of trait substitution sequences, such phenotypic selection gradients suffice to characterize long-term multi-dimensional stochastic evolution, with almost no knowledge about the genetic details underlying the coevolving traits. Having such simple results about the effect of selection regardless of population structure and type of social interactions can help to delineate the common features of distinct biological processes. Finally, we clarify some persistent divergences within social evolution theory, with respect to exactness, synergies, maximization, dynamic sufficiency and the role of genetic arguments.
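The first conclusion can be made concrete with the familiar weak-selection form of Hamilton's condition. The display below is only a reference formula in standard notation (c for the fitness cost to the actor, b for the benefit to the recipient, r for genealogical relatedness, \bar{p} for the mean frequency of the allele underlying the social trait); it is not reproduced from the abstract itself.

\[
  \Delta\bar{p} \;\propto\; \bar{p}\,(1-\bar{p})\,(-c + r\,b),
  \qquad\text{so}\qquad
  \Delta\bar{p} > 0 \iff -c + r\,b > 0,
\]

which is the sense in which, under additive gene action and weak selection, the sign of the change in mean allele frequency is constant across allele frequencies and is predicted by a phenotypic selection gradient.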
Abstract:
Cannabis use among adolescents and young adults has become a major public health challenge. Several European countries are currently developing short screening instruments to identify 'problematic' forms of cannabis use in general population surveys. One such instrument is the Cannabis Use Disorders Identification Test (CUDIT), a 10-item questionnaire based on the Alcohol Use Disorders Identification Test. Previous research found that some CUDIT items did not perform well psychometrically. In the interests of improving the psychometric properties of the CUDIT, this study replaces the poorly performing items with new items that specifically address cannabis use. Analyses are based on a sub-sample of 558 recent cannabis users from a representative population sample of 5722 individuals (aged 13-32) who were surveyed in the 2007 Swiss Cannabis Monitoring Study. Four new items were added to the original CUDIT. Psychometric properties of all 14 items, as well as the dimensionality of the supplemented CUDIT were then examined using Item Response Theory. Results indicate the unidimensionality of CUDIT and an improvement in its psychometric performance when three original items (usual hours being stoned; injuries; guilt) are replaced by new ones (motives for using cannabis; missing out leisure time activities; difficulties at work/school). However, improvements were limited to cannabis users with a high problem score. For epidemiological purposes, any further revision of CUDIT should therefore include a greater number of 'easier' items.
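For readers unfamiliar with the Item Response Theory vocabulary used above ('easier' items, high problem score), a generic two-parameter logistic model illustrates the terminology; the abstract does not state which IRT model was fitted, so the formula below is only a standard reference, not the study's specification.

\[
  P(X_{ij}=1 \mid \theta_i) \;=\; \frac{1}{1+\exp\{-a_j(\theta_i - b_j)\}},
\]

where \theta_i is respondent i's latent problem severity, a_j the discrimination and b_j the difficulty of item j. 'Easier' items are those with lower difficulty b_j, i.e. items that remain informative for users with low problem scores, which is why adding such items would improve measurement at the lower end of the scale.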
Abstract:
This article builds on the recent policy diffusion literature and attempts to overcome one of its major problems, namely the lack of a coherent theoretical framework. The literature defines policy diffusion as a process where policy choices are interdependent, and identifies several diffusion mechanisms that specify the link between the policy choices of the various actors. As these mechanisms are grounded in different theories, theoretical accounts of diffusion currently have little internal coherence. In this article we put forward an expected-utility model of policy change that is able to subsume all the diffusion mechanisms. We argue that the expected utility of a policy depends on both its effectiveness and the payoffs it yields, and we show that the various diffusion mechanisms operate by altering these two parameters. Each mechanism affects one of the two parameters, and does so in distinct ways. To account for aggregate patterns of diffusion, we embed our model in a simple threshold model of diffusion. Given the high complexity of the process that results, strong analytical conclusions on aggregate patterns cannot be drawn without more extensive analysis which is beyond the scope of this article. However, preliminary considerations indicate that a wide range of diffusion processes may exist and that convergence is only one possible outcome.
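To make the aggregate part of the argument tangible, the sketch below implements a toy threshold model of adoption in Python: expected utility is the product of a policy's effectiveness and its payoff, prior adoptions shift the perceived effectiveness (a stand-in for the diffusion mechanisms), and actors adopt once expected utility exceeds a privately drawn threshold. All names and parameter values are illustrative assumptions, not the authors' model.

import random

def expected_utility(effectiveness, payoff):
    # Illustrative expected utility: probability the policy works times its payoff.
    return effectiveness * payoff

def simulate_diffusion(n_actors=100, rounds=50, base_effectiveness=0.4,
                       base_payoff=1.0, learning_gain=0.5, seed=1):
    # Toy threshold model: each actor adopts once the expected utility,
    # raised by the share of prior adopters, exceeds its private threshold.
    rng = random.Random(seed)
    thresholds = [rng.random() for _ in range(n_actors)]
    adopted = [False] * n_actors
    history = []
    for _ in range(rounds):
        share = sum(adopted) / n_actors
        eff = min(1.0, base_effectiveness + learning_gain * share)
        eu = expected_utility(eff, base_payoff)
        for i in range(n_actors):
            if not adopted[i] and eu >= thresholds[i]:
                adopted[i] = True
        history.append(sum(adopted))
    return history

print(simulate_diffusion()[-1], "adopters after 50 rounds")

Depending on the parameter values, such a process can stall at partial adoption or cascade to full uptake, which is consistent with the observation that convergence is only one possible outcome.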
Abstract:
In recent years there has been explosive growth in the development of adaptive, data-driven methods. One efficient data-driven approach is based on statistical learning theory (SLT) (Vapnik 1998). The theory rests on the Structural Risk Minimisation (SRM) principle and has a solid statistical foundation. When applying SRM we try not only to reduce the training error, i.e. to fit the available data with a model, but also to reduce the complexity of the model and thereby the generalisation error. Many nonlinear learning procedures recently developed in neural networks and statistics can be understood and interpreted in terms of the structural risk minimisation inductive principle. A recent methodology based on SRM is Support Vector Machines (SVM). At present SLT is still under intensive development and SVM are finding new areas of application (www.kernel-machines.org). SVM yield robust, nonlinear data models with excellent generalisation ability, which is very important both for monitoring and for forecasting. SVM perform particularly well when the input space is high-dimensional and the training data set is not large enough to develop a corresponding nonlinear model. Moreover, SVM use only the support vectors to derive decision boundaries, which opens the way to sampling optimisation, estimation of noise in the data, quantification of data redundancy, etc. A presentation of SVM for spatially distributed data is given in (Kanevski and Maignan 2004).
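As a concrete illustration of this modelling style, the sketch below fits a kernel support vector regression to synthetic two-dimensional 'spatial' data with scikit-learn; the data, parameter values and workflow are assumptions made for the example and are not taken from Kanevski and Maignan (2004).

import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVR

# Synthetic spatial data: 2-D coordinates mapped to a noisy field value.
rng = np.random.default_rng(0)
X = rng.uniform(0, 10, size=(500, 2))
y = np.sin(X[:, 0]) * np.cos(X[:, 1]) + rng.normal(0.0, 0.1, size=500)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# RBF-kernel support vector regression; C and epsilon balance data fit
# against model complexity, in the spirit of structural risk minimisation.
model = SVR(kernel="rbf", C=10.0, epsilon=0.05, gamma=0.5)
model.fit(X_train, y_train)

print("test R^2:", model.score(X_test, y_test))
print("support vectors:", len(model.support_vectors_), "of", len(X_train), "training points")

Only the support vectors enter the fitted decision function, which is what makes ideas such as sampling optimisation and the quantification of data redundancy natural in this framework.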
Abstract:
Arising from M. A. Nowak, C. E. Tarnita & E. O. Wilson Nature 466, 1057-1062 (2010); Nowak et al. reply. Nowak et al. argue that inclusive fitness theory has been of little value in explaining the natural world, and that it has led to negligible progress in explaining the evolution of eusociality. However, we believe that their arguments are based upon a misunderstanding of evolutionary theory and a misrepresentation of the empirical literature. We will focus our comments on three general issues.
Abstract:
The objective of this paper is to discuss whether children have a capacity for deontic reasoning that is irreducible to mentalizing. The results of two experiments point to the existence of such non-mentalistic understanding and prediction of the behaviour of others. In Study 1, young children (3- and 4-year-olds) were told different versions of classic false-belief tasks, some of which were modified by the introduction of a rule or a regularity. When the task (a standard change of location task) included a rule, the performance of 3-year-olds, who fail traditional false-belief tasks, significantly improved. In Study 2, 3-year-olds proved to be able to infer a rule from a social situation and to use it in order to predict the behaviour of a character involved in a modified version of the false-belief task. These studies suggest that rules play a central role in the social cognition of young children and that deontic reasoning might not necessarily involve mind reading.
Abstract:
The method of stochastic dynamic programming is widely used in behavioural ecology, but it has some shortcomings stemming from its reliance on finite time horizons. The authors present an alternative approach based on the methods of renewal theory. The suggested method uses the cumulative energy reserve gained per unit of time as the criterion, which leads to stationary cycles in the state space. This approach allows optimal feeding to be studied by analytic methods.
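The abstract gives no formula, but the criterion it describes (cumulative energy gained per unit of time, with stationary cycles) corresponds to the standard renewal-reward objective sketched below; the notation is an assumption made for illustration.

\[
  \gamma^{*} \;=\; \max_{\pi}\; \frac{\mathbb{E}_{\pi}[\,G\,]}{\mathbb{E}_{\pi}[\,T\,]},
\]

where, under feeding policy \pi, G is the net energy gained over one cycle and T is the cycle's duration. Maximising this long-run rate replaces the finite-horizon value function of stochastic dynamic programming, which is what makes the optimal policy stationary and analytically tractable.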
Abstract:
On the efficiency of recursive evaluations with applications to risk theory

This thesis consists of three essays on the efficiency of recursive evaluations of the distribution of the total claim amount of a portfolio of insurance policies over a given period. Computing its probability function, or quantities related to this distribution, arises frequently in most areas of actuarial practice. This is notably the case for the calculation of the solvency capital in Switzerland, or for modelling the one-year loss of a life insurance portfolio. The main problem with recursive evaluations is that the propagation of errors stemming from the computer's representation of real numbers can be disastrous; on the other hand, the time they save by reducing the number of arithmetic operations is substantial compared with other methods. In the first essay, we use certain properties of a high-performance computing tool to optimize computation time while guaranteeing a certain quality of the results with respect to the propagation of these errors during the evaluation. In the second essay, we derive exact expressions and bounds for the errors that occur in cumulative distribution functions of a given order when these are evaluated recursively from an approximation of the associated De Pril transform. These cumulative functions allow essential quantities such as stop-loss premiums to be computed directly. Finally, in the third essay, we study the stability of the recursive evaluations of these cumulative functions with respect to the propagation of the errors mentioned above, and we determine the precision required in the representation of real numbers in order to guarantee satisfactory results. This precision depends to a large extent on the associated De Pril transform.
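The thesis itself works with De Pril transforms, but the flavour of such recursive evaluations, and of the arithmetic savings and error propagation at stake, can be illustrated with the closely related Panjer recursion for a compound Poisson total-claims distribution; the Python sketch below is a generic textbook recursion, not the algorithm studied in the thesis, and its names and parameters are illustrative.

import math

def compound_poisson_pmf(lam, severity_pmf, s_max):
    # Panjer recursion for S = X_1 + ... + X_N with N ~ Poisson(lam) and
    # i.i.d. claim sizes on {1, 2, ...}; returns g[s] = P(S = s) for s = 0..s_max.
    g = [0.0] * (s_max + 1)
    g[0] = math.exp(-lam)  # P(S = 0) = P(N = 0) since claims are strictly positive
    for s in range(1, s_max + 1):
        g[s] = (lam / s) * sum(j * severity_pmf(j) * g[s - j]
                               for j in range(1, s + 1))
    return g

# Example: Poisson(2) claim counts, claim sizes uniform on {1, ..., 5}.
f = lambda j: 0.2 if 1 <= j <= 5 else 0.0
g = compound_poisson_pmf(2.0, f, 30)
print("P(S <= 10) =", sum(g[:11]))

Each probability g[s] is built from all previously computed values, which is exactly why floating-point rounding errors can accumulate along the recursion and why the thesis analyses the precision needed to keep that propagation under control.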
Abstract:
The present paper studies the probability of ruin of an insurer if excess of loss reinsurance with reinstatements is applied. In the setting of the classical Cramér-Lundberg risk model, piecewise deterministic Markov processes are used to describe the free surplus process in this more general situation. It is shown that the finite-time ruin probability is both the solution of a partial integro-differential equation and the fixed point of a contractive integral operator. We exploit the latter representation to develop and implement a recursive algorithm for numerical approximation of the ruin probability that involves high-dimensional integration. Furthermore, we study the behavior of the finite-time ruin probability under various levels of initial surplus and security loadings, and we compare the efficiency of the numerical algorithm with the computational alternative of stochastic simulation of the risk process.
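The computational alternative mentioned at the end, stochastic simulation of the risk process, can be sketched as follows for the classical Cramér-Lundberg model without the excess-of-loss reinstatement layer (the paper's actual setting is more involved); all parameter values are illustrative.

import random

def finite_time_ruin_prob(u, c, lam, claim_sampler, horizon,
                          n_paths=20_000, seed=42):
    # Monte Carlo estimate of the finite-time ruin probability for
    # U(t) = u + c*t - S(t), with claims arriving at Poisson rate lam.
    # The surplus only drops at claim instants, so ruin is checked there.
    rng = random.Random(seed)
    ruined = 0
    for _ in range(n_paths):
        t, total_claims = 0.0, 0.0
        while True:
            t += rng.expovariate(lam)          # time of the next claim
            if t > horizon:
                break                          # survived to the horizon
            total_claims += claim_sampler(rng)
            if u + c * t - total_claims < 0:
                ruined += 1
                break
    return ruined / n_paths

# Example: initial surplus 10, premium rate 1.2, unit Poisson arrivals, Exp(1) claim sizes.
psi = finite_time_ruin_prob(u=10.0, c=1.2, lam=1.0,
                            claim_sampler=lambda rng: rng.expovariate(1.0),
                            horizon=100.0)
print("estimated finite-time ruin probability:", psi)

Such simulation is straightforward to set up, but its accuracy improves only with the square root of the number of simulated paths, which is the usual trade-off against the recursive fixed-point scheme developed in the paper.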