70 results for Ephemeral Computation
Abstract:
While regular religious practice is declining and the major religious groups are losing formally registered members, an 'eventization' of religious belonging has been observable over roughly the last two decades. The author presents two examples to support this argument. In the first, the transnational network of the Murids, a Sufi brotherhood, tightens the relations between its members, which tend to become fragile in a migratory context; the brotherhood does so through a spectacularization of religious practice, occupying public space in Harlem and, to a lesser extent, in Paris, in order to remain attractive to migrants. The second example deals with the pilgrimage dedicated to the African Saints in the Swiss town of Saint Maurice, in Valais. This event was invented by the missionary service of the Swiss Catholic Church in order to attract African migrants, but also to revitalize an ordinary religious practice in decline. The event takes place at the same time as the Ugandan pilgrimage of Namugongo, which gathers more than half a million people. However, are the 'event communities' created in this way sustainable or rather ephemeral?
Abstract:
We survey the population genetic basis of social evolution, using a logically consistent set of arguments to cover a wide range of biological scenarios. We start by reconsidering Hamilton's (Hamilton 1964 J. Theoret. Biol. 7, 1-16 (doi:10.1016/0022-5193(64)90038-4)) results for selection on a social trait under the assumptions of additive gene action, weak selection and constant environment and demography. This yields a prediction for the direction of allele frequency change in terms of phenotypic costs and benefits and genealogical concepts of relatedness, which holds for any frequency of the trait in the population, and provides the foundation for further developments and extensions. We then allow for any type of gene interaction within and between individuals, strong selection and fluctuating environments and demography, which may depend on the evolving trait itself. We reach three conclusions pertaining to selection on social behaviours under broad conditions. (i) Selection can be understood by focusing on a one-generation change in mean allele frequency, a computation which underpins the utility of reproductive value weights; (ii) in large populations under the assumptions of additive gene action and weak selection, this change is of constant sign for any allele frequency and is predicted by a phenotypic selection gradient; (iii) under the assumptions of trait substitution sequences, such phenotypic selection gradients suffice to characterize long-term multi-dimensional stochastic evolution, with almost no knowledge about the genetic details underlying the coevolving traits. Having such simple results about the effect of selection regardless of population structure and type of social interactions can help to delineate the common features of distinct biological processes. Finally, we clarify some persistent divergences within social evolution theory, with respect to exactness, synergies, maximization, dynamic sufficiency and the role of genetic arguments.
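As a reminder of the classical result referred to above, under additive gene action and weak selection the direction of selection on a social allele is given by Hamilton's rule; the notation below (cost c, benefit b, relatedness r) is standard textbook shorthand rather than the paper's own.

```latex
% Hamilton's rule: a social allele is favoured when the inclusive-fitness
% effect is positive (additive gene action, weak selection, constant
% environment and demography).
\[
  -c + r\,b > 0
\]
% Equivalently, the sign of the one-generation change in mean allele
% frequency is given by the inclusive-fitness effect, for any allele frequency:
\[
  \operatorname{sign}\!\left(\Delta \bar{p}\right)
  \;=\;
  \operatorname{sign}\!\left(-c + r\,b\right).
\]
```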
Abstract:
Background: Alcohol is a major risk factor for burden of disease and injuries globally. This paper presents a systematic method to compute the 95% confidence intervals of alcohol-attributable fractions (AAFs) with exposure and risk relations stemming from different sources. Methods: The computation was based on previous work done on modelling drinking prevalence using the gamma distribution and the inherent properties of this distribution. The Monte Carlo approach was applied to derive the variance for each AAF by generating random sets of all the parameters. A large number of random samples were thus created for each AAF to estimate variances. The derivation of the distributions of the different parameters is presented as well as sensitivity analyses which give an estimation of the number of samples required to determine the variance with predetermined precision, and to determine which parameter had the most impact on the variance of the AAFs. Results: The analysis of the five Asian regions showed that 150 000 samples gave a sufficiently accurate estimation of the 95% confidence intervals for each disease. The relative risk functions accounted for most of the variance in the majority of cases. Conclusions: Within reasonable computation time, the method yielded very accurate values for variances of AAFs.
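A minimal sketch of the Monte Carlo step, shown here for a categorical exposure with purely illustrative numbers; the paper itself models continuous drinking exposure with a gamma distribution and continuous relative-risk functions, which this sketch does not reproduce.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative point estimates: prevalence of exposure categories and their
# relative risks (category 0 = unexposed, RR fixed at 1, no uncertainty).
prev = np.array([0.60, 0.25, 0.10, 0.05])        # must sum to 1
rr = np.array([1.0, 1.3, 1.8, 3.2])
rr_se = np.array([0.0, 0.1, 0.2, 0.5])           # uncertainty of the RRs

def aaf(prev, rr):
    """Standard alcohol-attributable fraction for categorical exposure."""
    excess = np.sum(prev * (rr - 1.0))
    return excess / (excess + 1.0)

# Monte Carlo propagation: resample the uncertain parameters, recompute the
# AAF each time, and read the 95% interval off the empirical quantiles.
n_samples = 150_000
rr_draws = np.maximum(rng.normal(rr, rr_se, size=(n_samples, rr.size)), 1e-6)
excess = (prev * (rr_draws - 1.0)).sum(axis=1)
draws = excess / (excess + 1.0)

lo, hi = np.percentile(draws, [2.5, 97.5])
print(f"AAF = {aaf(prev, rr):.3f}, 95% CI ({lo:.3f}, {hi:.3f})")
```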
Abstract:
To provide quantitative support to handwriting evidence evaluation, a new method was developed through the computation of a likelihood ratio based on a Bayesian approach. In the present paper, the methodology is briefly described and applied to data collected within a simulated case of a threatening letter. Fourier descriptors are used to characterise the shape of the loops of handwritten characters "a". The characters "a" of the threatening letter are then compared: 1) with reference characters "a" of the true writer of the threatening letter, and 2) with characters "a" of a writer who did not write the threatening letter. The findings show that the probabilistic methodology correctly supports the hypothesis of authorship in the first comparison and the alternative hypothesis in the second. Further developments will enable the handwriting examiner to use this methodology as helpful assistance to assess the strength of evidence in handwriting casework.
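A hedged sketch of the two ingredients named above: Fourier shape descriptors of a closed loop and a score-based likelihood ratio built from Gaussian models. The normalisation, the number of harmonics and the Gaussian assumption are illustrative simplifications, not the authors' exact procedure.

```python
import numpy as np
from scipy.stats import multivariate_normal

def fourier_descriptors(x, y, n_harmonics=5):
    """Shape descriptors of a closed loop from its boundary coordinates.

    The contour is treated as a complex signal z = x + iy; the magnitudes of
    the low-order Fourier coefficients, normalised by the first harmonic,
    give descriptors invariant to position, scale and starting point.
    """
    z = np.asarray(x, float) + 1j * np.asarray(y, float)
    mags = np.abs(np.fft.fft(z))
    return mags[2:2 + n_harmonics] / mags[1]      # drop DC, normalise by 1st harmonic

def gaussian_lr(questioned, ref_writer, ref_population):
    """Crude likelihood ratio for a set of questioned descriptor vectors.

    Numerator: density under a Gaussian fitted to the candidate writer's
    reference loops; denominator: density under a Gaussian fitted to loops
    from other writers. Assumes enough reference loops for stable covariances.
    """
    num = multivariate_normal(ref_writer.mean(0), np.cov(ref_writer.T)).pdf(questioned)
    den = multivariate_normal(ref_population.mean(0), np.cov(ref_population.T)).pdf(questioned)
    return np.mean(num) / np.mean(den)
```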
Abstract:
Whether or not species participating in specialized and obligate interactions display similar and simultaneous demographic variations at the intraspecific level remains an open question in phylogeography. In the present study, we used the mutualistic nursery pollination occurring between the European globeflower Trollius europaeus and its specialized pollinators in the genus Chiastocheta as a case study. Specifically, we investigated whether the phylogeographies of the pollinating flies are significantly different from the expectation under a scenario of plant-insect congruence. Based on a large-scale sampling, we first used mitochondrial data to infer the phylogeographical history of each fly species. Then, we defined phylogeographical scenarios of congruence with the plant history, and used maximum likelihood and Bayesian approaches to test for plant-insect phylogeographical congruence for the three Chiastocheta species. We show that the phylogeographical histories of the three fly species differ. Only Chiastocheta lophota and Chiastocheta dentifera display strong spatial genetic structures, which do not appear to be statistically different from those expected under scenarios of phylogeographical congruence with the plant. The results of the present study indicate that the fly species responded in independent and different ways to shared evolutionary forces, displaying varying levels of congruence with the plant genetic structure.
Abstract:
Hydrological models developed for extreme precipitation of the PMP (probable maximum precipitation) type are difficult to calibrate because of the scarcity of data available for such events. This article presents the process and results of the calibration of a fine-scale distributed hydrological model developed for the estimation of probable maximum floods resulting from a PMP. The calibration is carried out on two Swiss catchments for two summer storm events. The work concentrates on the estimation of the model parameters, which fall into two groups: the first is needed for the computation of flow velocities, while the second determines the initial and final infiltration capacities for each terrain type. The results, validated with the Nash criterion, show a good correlation between the simulated and observed flows. We also apply the model to two Romanian catchments, presenting the river network and the estimated flows.
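The validation criterion mentioned above is presumably the Nash-Sutcliffe efficiency, which for observed and simulated discharges reads:

```latex
\[
  NSE \;=\; 1 \;-\;
  \frac{\sum_{t=1}^{T}\left(Q^{\mathrm{obs}}_{t} - Q^{\mathrm{sim}}_{t}\right)^{2}}
       {\sum_{t=1}^{T}\left(Q^{\mathrm{obs}}_{t} - \overline{Q}^{\mathrm{obs}}\right)^{2}}
\]
% NSE = 1 indicates a perfect fit; values close to 1 correspond to the good
% agreement between simulated and observed flows reported above.
```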
Abstract:
Tripping is considered a major cause of falls in older people. Therefore, foot clearance (i.e., the height of the foot above the ground during the swing phase) could be a key factor in better understanding the complex relationship between gait and falls. This paper presents a new method to estimate clearance using a wireless, foot-worn inertial sensor system. The method relies on the computation of foot orientation and trajectory from the fusion of the sensor signals, combined with the temporal detection of toe-off and heel-strike events. Based on a kinematic model that automatically estimates the sensor position relative to the foot, heel and toe trajectories are estimated. 2-D and 3-D models are presented with different solving approaches and validated against an optical motion capture system on 12 healthy adults performing short walking trials at self-selected, slow, and fast speeds. Parameters corresponding to local minima and maxima of heel and toe clearance were extracted and showed accuracy ± precision of 4.1 ± 2.3 cm for maximal heel clearance and 1.3 ± 0.9 cm for minimal toe clearance compared to the reference. The system is lightweight, wireless, easy to wear and use, and provides a new and useful tool for routine clinical assessment of gait outside a dedicated laboratory.
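A minimal sketch of the kinematic step described above, assuming the sensor trajectory, sensor orientation and the sensor-to-landmark lever arm are already available; the paper estimates the lever arm automatically and detects the temporal events from the same signals, neither of which is reproduced here.

```python
import numpy as np

def landmark_trajectory(p_sensor, R_sensor, lever_arm):
    """Position of a foot landmark (toe or heel) over time.

    p_sensor  : (T, 3) sensor positions from strapdown integration
    R_sensor  : (T, 3, 3) sensor-to-global rotation matrices (from data fusion)
    lever_arm : (3,) landmark position expressed in the sensor frame
    """
    return p_sensor + np.einsum('tij,j->ti', R_sensor, lever_arm)

def swing_clearance(traj_z, toe_off, heel_strike):
    """Height of the landmark above ground during one swing phase.

    toe_off, heel_strike : sample indices of the temporal events; the ground
    level is taken as the landmark height at heel strike (flat-ground assumption).
    """
    swing = traj_z[toe_off:heel_strike + 1]
    return swing - traj_z[heel_strike]
```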
Abstract:
In this work we present a method for the image analysis of Magnetic Resonance Imaging (MRI) of fetuses. Our goal is to segment the brain surface from multiple volumes (axial, coronal and sagittal acquisitions) of a fetus. To this end we propose a two-step approach: first, a Finite Gaussian Mixture Model (FGMM) will segment the image into 3 classes: brain, non-brain and mixture voxels. Second, a Markov Random Field scheme will be applied to re-distribute mixture voxels into either brain or non-brain tissue. Our main contributions are an adapted energy computation and an extended neighborhood from multiple volumes in the MRF step. Preliminary results on four fetuses of different gestational ages will be shown.
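A simplified, single-volume sketch of the two-step approach: a generic 3-class Gaussian mixture fit followed by one ICM-style relabelling pass over the mixture voxels. The intensity ordering of the classes, the 6-neighbourhood and the weight beta are assumptions of this sketch; the paper's adapted energy and multi-volume neighbourhood are not reproduced here.

```python
import numpy as np
from scipy.stats import norm
from sklearn.mixture import GaussianMixture

def segment_fgmm_mrf(volume, beta=1.0):
    """Step 1: 3-class FGMM on intensities. Step 2: one MRF/ICM-style pass
    re-assigning 'mixture' voxels to brain or non-brain by trading off
    intensity likelihood against 6-neighbour agreement."""
    x = volume.reshape(-1, 1)
    gmm = GaussianMixture(n_components=3, random_state=0).fit(x)
    labels = gmm.predict(x).reshape(volume.shape)

    means = gmm.means_.ravel()
    sds = np.sqrt(gmm.covariances_.ravel())
    # Assumption: the partial-volume ('mixture') class has intermediate mean intensity.
    lo_cls, mix_cls, hi_cls = np.argsort(means)

    def neighbour_votes(lab, cls):
        """Number of 6-connected neighbours carrying label `cls` (edges wrap)."""
        votes = np.zeros(lab.shape)
        for axis in range(3):
            for shift in (-1, 1):
                votes += (np.roll(lab, shift, axis=axis) == cls)
        return votes

    # Score (to maximise) for the two pure classes: data term + spatial prior.
    score = {cls: norm.logpdf(volume, means[cls], sds[cls])
                  + beta * neighbour_votes(labels, cls)
             for cls in (lo_cls, hi_cls)}

    mix = labels == mix_cls
    labels[mix] = np.where(score[hi_cls][mix] > score[lo_cls][mix], hi_cls, lo_cls)
    return labels
```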
Abstract:
Gradients of variation, or clines, have always intrigued biologists. Classically, they have been interpreted as the outcomes of antagonistic interactions between selection and gene flow. Alternatively, clines may also establish neutrally with isolation by distance (IBD) or secondary contact between previously isolated populations. The relative importance of natural selection and these two neutral processes in the establishment of clinal variation can be tested by comparing genetic differentiation at neutral genetic markers and at the studied trait. A third neutral process, surfing of a newly arisen mutation during the colonization of a new habitat, is more difficult to test. Here, we designed a spatially explicit approximate Bayesian computation (ABC) simulation framework to evaluate whether the strong cline in the genetically based reddish coloration observed in the European barn owl (Tyto alba) arose as a by-product of a range expansion or whether selection has to be invoked to explain this colour cline, for which we have previously ruled out the actions of IBD or secondary contact. Using ABC simulations and genetic data on 390 individuals from 20 locations genotyped at 22 microsatellite loci, we first determined how barn owls colonized Europe after the last glaciation. Using these results in new simulations on the evolution of the colour phenotype, and assuming various genetic architectures for the colour trait, we demonstrate that the observed colour cline cannot be due to the surfing of a neutral mutation. Taking advantage of spatially explicit ABC, which proved to be a powerful method to disentangle the respective roles of selection and drift in range expansions, we conclude that the formation of the colour cline observed in the barn owl must be due to natural selection.
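A generic rejection-ABC skeleton of the kind underlying the spatially explicit simulations described above; the simulator, prior, summary statistics and acceptance quantile are placeholders, since the actual study simulates range expansions and microsatellite data.

```python
import numpy as np

def abc_rejection(observed_stats, simulate, prior_sample,
                  n_sims=100_000, quantile=0.01):
    """Keep the parameter draws whose simulated summary statistics fall
    closest to the observed ones (Euclidean distance, fixed acceptance quantile).

    simulate(theta) -> vector of summary statistics for one simulated data set
    prior_sample()  -> one parameter vector drawn from the prior
    """
    obs = np.asarray(observed_stats, float)
    thetas, dists = [], []
    for _ in range(n_sims):
        theta = prior_sample()
        stats = np.asarray(simulate(theta), float)
        thetas.append(theta)
        dists.append(np.linalg.norm(stats - obs))
    thetas, dists = np.array(thetas), np.array(dists)
    eps = np.quantile(dists, quantile)        # tolerance set by the acceptance rate
    return thetas[dists <= eps]               # approximate posterior sample
```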
Abstract:
Optimizing collective behavior in multiagent systems requires algorithms to find not only appropriate individual behaviors but also a suitable composition of agents within a team. Over the last two decades, evolutionary methods have emerged as a promising approach for the design of agents and their compositions into teams. The choice of a crossover operator that facilitates the evolution of optimal team composition is recognized to be crucial, but so far, it has never been thoroughly quantified. Here, we highlight the limitations of two different crossover operators that exchange entire agents between teams: restricted agent swapping (RAS) that exchanges only corresponding agents between teams and free agent swapping (FAS) that allows an arbitrary exchange of agents. Our results show that RAS suffers from premature convergence, whereas FAS entails insufficient convergence. Consequently, in both cases, the exploration and exploitation aspects of the evolutionary algorithm are not well balanced resulting in the evolution of suboptimal team compositions. To overcome this problem, we propose combining the two methods. Our approach first applies FAS to explore the search space and then RAS to exploit it. This mixed approach is a much more efficient strategy for the evolution of team compositions compared to either strategy on its own. Our results suggest that such a mixed agent-swapping algorithm should always be preferred whenever the optimal composition of individuals in a multiagent system is unknown.
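A schematic sketch of the two crossover operators and of the proposed FAS-then-RAS mix, assuming teams are simply lists of agent genomes; the swap probabilities and the switching point are illustrative placeholders, not parameters taken from the paper.

```python
import random

def restricted_agent_swapping(team_a, team_b, p_swap=0.5):
    """RAS: exchange only corresponding agents (same index) between two teams."""
    child_a, child_b = list(team_a), list(team_b)
    for i in range(len(child_a)):
        if random.random() < p_swap:
            child_a[i], child_b[i] = child_b[i], child_a[i]
    return child_a, child_b

def free_agent_swapping(team_a, team_b, p_swap=0.5):
    """FAS: exchange agents between arbitrary positions of the two teams."""
    child_a, child_b = list(team_a), list(team_b)
    for i in range(len(child_a)):
        if random.random() < p_swap:
            j = random.randrange(len(child_b))     # arbitrary partner position
            child_a[i], child_b[j] = child_b[j], child_a[i]
    return child_a, child_b

def mixed_crossover(generation, switch_generation, team_a, team_b):
    """Proposed mix: explore with FAS early in the run, then exploit with RAS."""
    op = free_agent_swapping if generation < switch_generation else restricted_agent_swapping
    return op(team_a, team_b)
```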
Abstract:
In France, the decentralization and territorialization of public action have turned outdoor sports into a legitimate object of public policy, giving rise to new public-management tools dedicated to consultation and to the planning of the uses of natural spaces. Created by article 52 of the Law on sport as amended in 2000, the Departmental Committee for Spaces, Sites and Routes relating to outdoor sports (CDESI) and the Departmental Plan for Spaces, Sites and Routes relating to outdoor sports (PDESI) are tools of territorial consultation dedicated to the public management of outdoor sports at the departmental level. A first aim of this work is to understand the transformations of public action by studying these consultation mechanisms for outdoor sports. A second aim is to highlight the effects of consultation by analysing the interactions and the different modes of engagement of actors during the "public thing in the making" (Cefaï, 2002). Actors engage not only in consultation understood as a social activity made of interactions, but also in consultation as a process of public action. A further aim is therefore to grasp the effects of consultation through a processual analysis of the commitments (Fillieule, 2004) of actors and organizations. Drawing on the conceptual tools of interactionist sociology, pragmatic sociology and structuralist sociology, the analysis of interactional situations identifies the framing procedures and dramaturgical techniques implemented by the interactants, as well as the argumentative repertoires they mobilize during the "test" of consultation. The confrontation of viewpoints and the justification of actors' positions can shift the initial configuration of the actors' interplay, even if, for some, these changes remain only ephemeral. Organizations engage in consultation on the basis of a claim to legitimacy that is to be understood as a form of institutional activism built around the valorization of activist, environmental or institutional expertise, or around their status as institutional partner.
Abstract:
Sex chromosomes are expected to evolve suppressed recombination, which leads to degeneration of the Y and heteromorphism between the X and Y. Some sex chromosomes remain homomorphic, however, and the factors that prevent degeneration of the Y in these cases are not well understood. The homomorphic sex chromosomes of the European tree frogs (Hyla spp.) present an interesting paradox. Recombination in males has never been observed in crossing experiments, but molecular data are suggestive of occasional recombination between the X and Y. The hypothesis that these sex chromosomes recombine has not been tested statistically, however, nor has the X-Y recombination rate been estimated. Here, we use approximate Bayesian computation coupled with coalescent simulations of sex chromosomes to quantify the X-Y recombination rate from existing data. We find that microsatellite data from H. arborea, H. intermedia and H. molleri support a recombination rate between X and Y that is significantly different from zero. We estimate that rate to be approximately 10^5 times smaller than that between X chromosomes. Our findings support the notion that a very low recombination rate may be sufficient to maintain homomorphism in sex chromosomes.
Abstract:
Cortical folding (gyrification) is determined during the first months of life, so that adverse events occurring during this period leave traces that will be identifiable at any age. As recently reviewed by Mangin and colleagues(2), several methods exist to quantify different characteristics of gyrification. For instance, sulcal morphometry can be used to measure shape descriptors such as the depth, length or indices of inter-hemispheric asymmetry(3). These geometrical properties have the advantage of being easy to interpret. However, sulcal morphometry relies tightly on the accurate identification of a given set of sulci and hence provides a fragmented description of gyrification. A more fine-grained quantification of gyrification can be achieved with curvature-based measurements, where smoothed absolute mean curvature is typically computed at thousands of points over the cortical surface(4). The curvature is, however, not straightforward to interpret, as it remains unclear whether there is any direct relationship between curvedness and a biologically meaningful correlate such as cortical volume or surface area. To address the diverse issues raised by the measurement of cortical folding, we previously developed an algorithm to quantify local gyrification with exquisite spatial resolution and a simple interpretation. Our method is inspired by the Gyrification Index(5), a method originally used in comparative neuroanatomy to evaluate cortical folding differences across species. In our implementation, which we name the local Gyrification Index (lGI(1)), we measure the amount of cortex buried within the sulcal folds compared with the amount of visible cortex in circular regions of interest. Given that the cortex grows primarily through radial expansion(6), our method was specifically designed to identify early defects of cortical development. In this article, we detail the computation of the local Gyrification Index, which is now freely distributed as part of the FreeSurfer software (http://surfer.nmr.mgh.harvard.edu/, Martinos Center for Biomedical Imaging, Massachusetts General Hospital). FreeSurfer provides a set of automated tools for the reconstruction of the brain's cortical surface from structural MRI data. The cortical surface, extracted in the native space of the images with sub-millimeter accuracy, is then further used for the creation of an outer surface, which serves as a basis for the lGI calculation. A circular region of interest is then delineated on the outer surface, and its corresponding region of interest on the cortical surface is identified using a matching algorithm as described in our validation study(1). This process is iterated repeatedly with largely overlapping regions of interest, resulting in cortical maps of gyrification for subsequent statistical comparisons (Fig. 1). Of note, another measurement of local gyrification with a similar inspiration was proposed by Toro and colleagues(7), where the folding index at each point is computed as the cortical area contained in a sphere divided by the area of a disc with the same radius. The two implementations differ in that the one by Toro et al. is based on Euclidean distances and thus considers discontinuous patches of cortical area, whereas ours uses a strict geodesic algorithm and includes only the continuous patch of cortical area opening at the brain surface in a circular region of interest.
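Expressed as a formula, the measure described above is an area ratio computed within each circular region of interest on the outer surface; the notation below is a paraphrase of the description, not the authors' exact notation.

```latex
\[
  lGI(v_i) \;=\;
  \frac{A_{\mathrm{cortical}}\!\left(\mathrm{ROI}_i\right)}
       {A_{\mathrm{outer}}\!\left(\mathrm{ROI}_i\right)}
\]
% ROI_i is the circular region of interest centred at vertex v_i of the outer
% surface, A_outer its area on the outer (hull) surface, and A_cortical the
% area of the corresponding patch of the cortical surface, including the
% cortex buried in the sulci. A value of 1 corresponds to a flat patch;
% higher values indicate more folding.
```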
Abstract:
Preface. The starting point for this work, and eventually the subject of the whole thesis, was the question: how to estimate the parameters of affine stochastic volatility jump-diffusion models. These models are very important for contingent claim pricing. Their major advantage, the availability of analytical solutions for their characteristic functions, has made them the models of choice for many theoretical constructions and practical applications. At the same time, estimation of the parameters of stochastic volatility jump-diffusion models is not a straightforward task. The problem comes from the variance process, which is not observable. There are several estimation methodologies that deal with the estimation of latent variables. One appeared particularly interesting: it proposes an estimator that, in contrast to the other methods, requires neither discretization nor simulation of the process, namely the Continuous Empirical Characteristic Function (ECF) estimator based on the unconditional characteristic function. However, the procedure had been derived only for stochastic volatility models without jumps. Thus, it became the subject of my research. This thesis consists of three parts. Each is written as an independent and self-contained article. At the same time, the questions answered by the second and third parts of this work arise naturally from the issues investigated and the results obtained in the first one. The first chapter is the theoretical foundation of the thesis. It proposes an estimation procedure for stochastic volatility models with jumps in both the asset price and the variance processes. The estimation procedure is based on the joint unconditional characteristic function of the stochastic process. The major analytical result of this part, as well as of the whole thesis, is the closed-form expression for the joint unconditional characteristic function of stochastic volatility jump-diffusion models. The empirical part of the chapter suggests that, besides stochastic volatility, jumps in both the mean and the volatility equations are relevant for modelling returns of the S&P500 index, which has been chosen as a general representative of the stock asset class. Hence, the next question is: which jump process should be used to model returns of the S&P500? The decision about the jump process in the framework of affine jump-diffusion models boils down to defining the intensity of the compound Poisson process, a constant or some function of the state variables, and to choosing the distribution of the jump size. While the jump in the variance process is usually assumed to be exponential, there are at least three distributions of the jump size currently used for the asset log-prices: normal, exponential and double exponential. The second part of this thesis shows that normal jumps in the asset log-returns should be used if we are to model the S&P500 index with a stochastic volatility jump-diffusion model. This is a surprising result: the exponential distribution has fatter tails, and for this reason either an exponential or a double-exponential jump size was expected to provide the best fit of the stochastic volatility jump-diffusion models to the data. The idea of testing the efficiency of the Continuous ECF estimator on simulated data had already appeared when the first estimation results of the first chapter were obtained. In the absence of a benchmark or any other ground for comparison, it is unreasonable to be sure that our parameter estimates and the true parameters of the models coincide.
The conclusion of the second chapter provides one more reason to carry out that kind of test. Thus, the third part of this thesis concentrates on the estimation of the parameters of stochastic volatility jump-diffusion models on the basis of asset price time series simulated from various "true" parameter sets. The goal is to show that the Continuous ECF estimator based on the joint unconditional characteristic function is capable of recovering the true parameters, and the third chapter demonstrates that our estimator indeed has this ability. Once it is clear that the Continuous ECF estimator based on the unconditional characteristic function works, the next question immediately arises: can the computational effort be reduced without affecting the efficiency of the estimator, or can the efficiency of the estimator be improved without dramatically increasing the computational burden? The efficiency of the Continuous ECF estimator depends on the number of dimensions of the joint unconditional characteristic function used in its construction. Theoretically, the more dimensions there are, the more efficient the estimation procedure is. In practice, however, this relationship is not so straightforward, owing to the increasing computational difficulties. The second chapter, for example, in addition to the choice of the jump process, discusses the possibility of using the marginal, i.e. one-dimensional, unconditional characteristic function in the estimation instead of the joint, bi-dimensional, unconditional characteristic function. As a result, the preference for one or the other depends on the model to be estimated; the computational effort can thus be reduced in some cases without affecting the efficiency of the estimator. Improving the estimator's efficiency by increasing its dimensionality faces more difficulties. The third chapter of this thesis, in addition to what was discussed above, compares the performance of the estimators with bi- and three-dimensional unconditional characteristic functions on the simulated data. It shows that the theoretical efficiency of the Continuous ECF estimator based on the three-dimensional unconditional characteristic function is not attainable in practice, at least for the moment, because of the limitations of the computing power and optimization toolboxes available to the general public. Thus, the Continuous ECF estimator based on the joint, bi-dimensional, unconditional characteristic function has every reason to exist and to be used for the estimation of the parameters of stochastic volatility jump-diffusion models.
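As a rough illustration of how a characteristic-function estimator of this kind operates: the empirical characteristic function of the data is matched to the model's unconditional characteristic function by minimising a weighted integrated squared distance. The grid, weight function and optimiser below are placeholder choices, and the sketch is one-dimensional; the joint bi-dimensional case used in the thesis works analogously on a grid over two arguments.

```python
import numpy as np
from scipy.optimize import minimize

def empirical_cf(data, u):
    """Empirical characteristic function of a sample, evaluated on a grid u."""
    return np.mean(np.exp(1j * np.outer(u, data)), axis=1)

def ecf_objective(theta, data, u, model_cf, weights):
    """Weighted integrated squared distance between the empirical CF and the
    model's unconditional CF; the estimator minimises this over theta."""
    diff = empirical_cf(data, u) - model_cf(u, theta)
    return np.sum(weights * np.abs(diff) ** 2)

def fit_ecf(data, model_cf, theta0, u_max=10.0, n_grid=200):
    """Minimal one-dimensional continuous-ECF fit.

    model_cf(u, theta) must return the model characteristic function on the
    grid u for the parameter vector theta.
    """
    u = np.linspace(-u_max, u_max, n_grid)
    w = np.exp(-u ** 2)                      # exponential weight, a common choice
    res = minimize(ecf_objective, theta0, args=(data, u, model_cf, w),
                   method="Nelder-Mead")
    return res.x
```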
Abstract:
Whole-body counting is a technique of choice for assessing the intake of gamma-emitting radionuclides. An appropriate calibration is necessary, which is done either by experimental measurement or by Monte Carlo (MC) calculation. The aim of this work was to validate a MC model for calibrating whole-body counters (WBCs) by comparing the results of computations with measurements performed on an anthropomorphic phantom and to investigate the effect of a change in phantom's position on the WBC counting sensitivity. GEANT MC code was used for the calculations, and an IGOR phantom loaded with several types of radionuclides was used for the experimental measurements. The results show a reasonable agreement between measurements and MC computation. A 1-cm error in phantom positioning changes the activity estimation by >2%. Considering that a 5-cm deviation of the positioning of the phantom may occur in a realistic counting scenario, this implies that the uncertainty of the activity measured by a WBC is ∼10-20%.
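For context, the calibration quantity at stake is the counting efficiency, which links the measured count rate to the reported activity; the standard relations below are not quoted from the paper, but they show how an efficiency error caused by mispositioning propagates directly into the activity estimate.

```latex
\[
  \varepsilon(E) \;=\; \frac{N_{\mathrm{detected}}}{N_{\mathrm{simulated}}},
  \qquad
  A \;=\; \frac{R_{\mathrm{net}}}{\varepsilon(E)\, p_{\gamma}}
\]
% epsilon(E): full-energy-peak counting efficiency obtained from the Monte
% Carlo simulation (or from the phantom measurement); R_net: net count rate
% in the peak; p_gamma: gamma emission probability of the radionuclide.
% A shift of the phantom changes epsilon(E) and hence the reported activity A.
```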