14 results for Economies of density

at Université de Lausanne, Switzerland


Relevance:

100.00%

Abstract:

Density-driven instabilities in porous media are of interest for a wide range of applications, for instance geological sequestration of CO2, during which CO2 is injected at high pressure into deep saline aquifers. Due to the density difference between the CO2-saturated brine and the surrounding brine, a downward migration of CO2 into deeper regions, where the risk of leakage is reduced, takes place. Similarly, undesired spontaneous mobilization of potentially hazardous substances that might endanger groundwater quality can be triggered by density differences. In recent years, these effects have been investigated with the help of numerical groundwater models. Major challenges in simulating density-driven instabilities arise from the different scales of interest involved, i.e., the scale at which instabilities are triggered and the aquifer scale over which long-term processes take place. An accurate numerical reproduction is possible only if the finest scale is captured. For large aquifers, this leads to problems with a large number of unknowns, and advanced numerical methods are required to solve them efficiently with today's available computational resources. Besides efficient iterative solvers, multiscale methods are available to solve large numerical systems. Multiscale methods were originally developed as upscaling-downscaling techniques to resolve strong permeability contrasts. In this case, two static grids are used: one is chosen with respect to the resolution of the permeability field (fine grid); the other (coarse grid) is used to approximate the fine-scale problem at low computational cost. The quality of the multiscale solution can be improved iteratively to avoid large errors in case of complex permeability structures. Adaptive formulations, which restrict the iterative update to domains with large gradients, limit the additional computational costs of the iterations. In the case of density-driven instabilities, additional spatial scales appear which change with time, so flexible adaptive methods are required to account for these emerging dynamic scales. The objective of this work is to develop an adaptive multiscale formulation for the efficient and accurate simulation of density-driven instabilities. We consider the Multiscale Finite-Volume (MsFV) method, which is well suited for simulations that include the solution of transport problems, as it guarantees a conservative velocity field. In the first part of this thesis, we investigate the applicability of the standard MsFV method to density-driven flow problems. We demonstrate that approximations in MsFV may trigger unphysical fingers, so that iterative corrections are necessary. Adaptive formulations (e.g., limiting a refined solution to domains with large concentration gradients where fingers form) can be used to balance the extra costs. We also propose to use the MsFV method as a downscaling technique: the coarse discretization is used in areas without significant change in the flow field, whereas the problem is refined in the zones of interest. This enables accounting for the dynamic change in scales of density-driven instabilities. In the second part of the thesis, the MsFV algorithm, which originally employs one coarse level, is extended to an arbitrary number of coarse levels. We prove that this keeps the MsFV method efficient for problems with a large number of unknowns. In the last part of this thesis, we focus on the scales that control the evolution of density fingers. The identification of local and global flow patterns allows a coarse description at late times while conserving fine-scale details during the onset stage. The results presented in this work advance the understanding of the Multiscale Finite-Volume method and offer efficient dynamic multiscale formulations to simulate density-driven instabilities.

Groundwater bodies characterized by porous structures and highly permeable fractures are of particular interest to hydrogeologists and environmental engineers. A wide variety of flows can be observed in these media; the most common are contaminant transport by groundwater, reactive transport, and the simultaneous flow of several immiscible phases, such as oil and water. The scale that characterizes these flows is set by the interaction between geological heterogeneity and the physical processes involved. A fluid at rest in the pore space of a porous medium can be destabilized by density gradients, which may be induced by local temperature changes or by the dissolution of a chemical compound. Density-driven instabilities are of particular concern because they can ultimately compromise water quality; a striking example is the salinization of fresh groundwater by the intrusion of denser salt water into deep regions. For density-driven flows, the characteristic scales range from the pore scale, where the instabilities grow, to the aquifer scale, over which long-term processes take place. Since in-situ investigations are practically impossible, numerical models are used to predict and assess the risks associated with density-driven instabilities. An accurate description of these phenomena requires resolving all flow scales, which may span eight to ten orders of magnitude for large aquifers, leading to very large numerical problems that are expensive to solve. Sophisticated numerical schemes are therefore needed to perform accurate large-scale simulations of hydrodynamic instabilities. In this work, we present numerical methods that simulate density-driven instabilities efficiently and accurately. These new methods are based on multiscale finite volumes: the idea is to project the original problem onto a coarser scale, where it is cheaper to solve, and then to map the coarse solution back to the original scale. This technique is particularly well suited to problems in which a wide range of scales is involved and evolves in space and time, because it reduces computational costs by limiting the detailed description of the problem to the regions containing a moving concentration front. The results are illustrated by the simulation of phenomena such as salt-water intrusion and carbon dioxide sequestration.
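The adaptive strategy described above restricts fine-scale (iterative) updates to regions with large concentration gradients, where fingers form. Below is a minimal sketch of such a selection criterion on a structured 2D grid; it is illustrative only, and the array names and threshold rule are assumptions rather than the MsFV implementation described in the thesis.

```python
import numpy as np

def refinement_mask(conc, dx, dy, threshold):
    """Flag cells whose concentration gradient magnitude exceeds a threshold.

    conc      : 2D array of cell-centred concentrations
    dx, dy    : grid spacings
    threshold : gradient magnitude above which fine-scale updates are applied
    """
    gy, gx = np.gradient(conc, dy, dx)   # finite-difference gradients along rows and columns
    grad_mag = np.sqrt(gx**2 + gy**2)
    return grad_mag > threshold          # True = cell is updated at the fine scale

# Toy usage: a sharp concentration front in the middle of the domain
conc = np.zeros((100, 100))
conc[:, 50:] = 1.0
mask = refinement_mask(conc, dx=1.0, dy=1.0, threshold=0.1)
print("cells flagged for fine-scale update:", int(mask.sum()))
```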

Relevance:

100.00%

Abstract:

The Economics of Urban Diversity explores ethnic and religious minorities in urban economies. In this exciting work, the contributors develop an integrative approach to urban diversity and economy by employing concepts from different fields of study and linking historical and contemporary analyses of economic, societal, demographic, and cultural development. Contributors from a variety of disciplines (geography, economics, history, sociology, anthropology, and planning) make for a transdisciplinary analysis of past and present migration-related economic and social issues, which helps to better understand the situation of ethnic and religious minorities in metropolitan areas today.

Relevance:

90.00%

Abstract:

The populations of the Capercaillie (Tetrao urogallus), the largest European grouse, have seriously declined during the last century over most of their distribution in western and central Europe. In the Jura mountains, the relict population is now isolated and critically endangered (about 500 breeding adults). We developed simulation software (TetrasPool) that accounts for age and spatial structure as well as stochastic processes, to perform a viability analysis and explore management scenarios for this population, capitalizing on a 24-year series of field data. Simulations predict a marked decline and a significant extinction risk over the next century, largely due to environmental and demographic stochasticity (average values of the life-history parameters would otherwise allow stability). Differences among scenarios mainly stem from uncertainties about the shape and intensity of density dependence. Uncertainty analyses suggest focusing conservation efforts not only on enhancing adult survival (as often advocated for long-lived species) but also on recruitment. The juvenile stage matters when local populations undergo extinctions, because it ensures connectivity and recolonization. Besides limiting human disturbance, a silvicultural strategy aimed at opening the forest structure should improve the quality and area of available patches, independently of their size and location. Such measures must be taken urgently if the population is to be saved.
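The viability analysis rests on repeated stochastic projections of the population and on the fraction of trajectories that reach zero within the forecast horizon. The sketch below illustrates that general Monte Carlo logic with a deliberately simplified, unstructured model; the growth-rate distribution, the ceiling-type density dependence and all parameter values are assumptions for illustration and are not the TetrasPool model.

```python
import numpy as np

rng = np.random.default_rng(42)

def extinction_risk(n0, years, runs, mean_r=0.0, sd_r=0.15, capacity=500):
    """Estimate extinction probability from repeated stochastic projections.

    Each year the expected population is scaled by a lognormal growth factor
    (environmental stochasticity), capped at a ceiling carrying capacity, and
    the realized population is a Poisson draw (demographic stochasticity).
    """
    extinct = 0
    for _ in range(runs):
        n = n0
        for _ in range(years):
            growth = np.exp(rng.normal(mean_r, sd_r))   # environmental noise
            expected = min(n * growth, capacity)        # ceiling density dependence
            n = rng.poisson(expected)                   # demographic noise
            if n == 0:
                extinct += 1
                break
    return extinct / runs

print("P(extinction within 100 years):", extinction_risk(n0=500, years=100, runs=2000))
```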

Relevance:

90.00%

Abstract:

The calculation of elasticity parameters from sonic and ultrasonic wave propagation in saturated soils using Biot's theory requires the following variables: formation density and porosity (ρ, φ), compressional and shear wave velocities (Vp, Vs), fluid density, viscosity and compressibility (ρf, ηf, Kf), and matrix density and compressibility (ρm, Km). The first four parameters can be determined in situ using logging probes. Because fluid and matrix characteristics are not modified during core extraction, they can be obtained through laboratory measurements. All parameters require precise calibration in various environments and for the specific range of values encountered in soils. The slim diameter of boreholes in shallow geophysics and the high cost of petroleum equipment demand the use of specific probes, which usually give only qualitative results. Density is measured with a gamma-gamma probe, and the hydrogen index, related to porosity, with a neutron probe. The first step of this work was carried out on synthetic formations in the laboratory, using homogeneous media of known density and porosity. To establish borehole corrections, different casings were used. Finally, a comparison between laboratory and in-situ data was performed in cored holes of known geometry and casing.
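For reference, the bulk (formation) density measured by the gamma-gamma probe is tied to porosity and to the fluid and matrix densities by the standard mixing relation below; this is a textbook relation quoted for context, not a result specific to the probes calibrated in this study. It is the reason the density and neutron logs together constrain φ.

```latex
\rho = \phi\,\rho_f + (1-\phi)\,\rho_m
\qquad\Longleftrightarrow\qquad
\phi = \frac{\rho_m - \rho}{\rho_m - \rho_f}
```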

Relevance:

90.00%

Abstract:

The population density of an organism is one of the main aspects of its environment, and should therefore strongly influence its adaptive strategy. The r/K theory, based on the logistic model, was developed to formalize this influence. K-selection is classically thought to favour large body sizes. This prediction, however, cannot be directly derived from the logistic model: some auxiliary hypotheses are therefore implicit. These must be made explicit if the theory is to be tested. An alternative approach, based on the Euler-Lotka equation, shows that density itself is irrelevant, but that the relative effect of density on adult and juvenile features is crucial. For instance, an increasing population will select for a smaller body size if density affects mainly juvenile growth and/or survival. In this case, density should indeed favour large body sizes. The theory nevertheless appears inconsistent, since a probable consequence of increasing body size will be a decrease in the carrying capacity.
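For context, the two models contrasted in this abstract are the logistic growth model underlying r/K theory and the Euler-Lotka equation, which defines the growth rate r implicitly from age-specific survivorship l(x) and fecundity m(x). These are standard textbook forms, quoted here for reference only:

```latex
\frac{dN}{dt} = r N \left(1 - \frac{N}{K}\right),
\qquad
1 = \int_0^{\infty} e^{-r x}\, l(x)\, m(x)\, dx .
```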

Relevance:

90.00%

Abstract:

General Summary: Although the chapters of this thesis address a variety of issues, the principal aim is common: to test economic ideas in an international context. The intention has been to supply empirical findings using the largest suitable data sets and the most appropriate empirical techniques. This thesis can roughly be divided into two parts: the first, corresponding to the first two chapters, investigates the link between trade and the environment; the second, covering the last three chapters, addresses economic geography issues. Environmental problems are omnipresent in the daily press nowadays, and one of the arguments put forward is that globalisation causes severe environmental problems through the reallocation of investment and production to countries with less stringent environmental regulations. A measure of the magnitude of this undesirable effect is provided in the first part. The third and fourth chapters explore the productivity effects of agglomeration. The computed spillover effects between different sectors indicate how cluster formation might enhance productivity. The last chapter is not about how to better understand the world but about how to measure it, and it was a great pleasure to work on. "The Economist" writes every week about the impressive population and economic growth observed in China and India, and everybody agrees that the world's center of gravity has shifted. But by how much, and how fast, did it shift? An answer is given in the last part, which proposes a global measure for the location of world production and allows us to visualize the results in Google Earth. A short summary of each of the five chapters is provided below. The first chapter, entitled "Unraveling the World-Wide Pollution-Haven Effect", investigates the relative strength of the pollution-haven effect (PH: comparative advantage in dirty products due to differences in environmental regulation) and the factor-endowment effect (FE: comparative advantage in dirty, capital-intensive products due to differences in endowments). We compute the pollution content of imports using the IPPS coefficients provided by the World Bank (for three pollutants, namely biological oxygen demand, sulphur dioxide and toxic pollution intensity, covering all manufacturing sectors) and use a gravity-type framework to isolate the two effects mentioned above. Our study covers 48 countries, which can be classified into 29 Southern and 19 Northern countries, and uses the lead content of gasoline as a proxy for environmental stringency. For North-South trade we find significant PH and FE effects going in the expected, opposite directions and of similar magnitude. However, when looking at world trade, the effects become very small because of the high share of North-North trade, for which we have no a priori expectations about the signs of these effects. Popular fears about the trade effects of differences in environmental regulations might therefore be exaggerated. The second chapter is entitled "Is trade bad for the Environment? Decomposing worldwide SO2 emissions, 1990-2000". We first construct a novel and large database containing reasonable estimates of SO2 emission intensities per unit of labour that vary across countries, periods and manufacturing sectors. We then use these original data (covering 31 developed and 31 developing countries) to decompose worldwide SO2 emissions into the three well-known dynamic effects (scale, technique and composition effects).
We find that the positive scale effect (+9.5%) and the negative technique effect (-12.5%) are the main driving forces of emission changes. Composition effects between countries and sectors are smaller, both negative and of similar magnitude (-3.5% each). Given that trade matters via the composition effects, this means that trade reduces total emissions. In a first experiment, we next construct a hypothetical world in which no trade happens, i.e., each country produces its imports at home and no longer produces its exports. The difference between the actual world and this no-trade world allows us (neglecting price effects) to compute a static first-order trade effect. This effect increases total world emissions because it allows, on average, dirty countries to specialize in dirty products. However, the effect is smaller in 2000 (3.5%) than in 1990 (10%), in line with the negative dynamic composition effect identified in the previous exercise. We then propose a second experiment, comparing actual emissions with the maximum or minimum possible level of SO2 emissions. These hypothetical levels of emissions are obtained by reallocating labour across sectors within each country (subject to country-employment and world industry-production constraints). Using linear programming techniques, we show that emissions are 90% lower than in the worst case, but that they could still be reduced by another 80% if emissions were minimized. The findings of this chapter go together with those of chapter one in the sense that trade-induced composition effects do not seem to be the main source of pollution, at least in the recent past. Turning to the economic geography part of this thesis, the third chapter, entitled "A Dynamic Model with Sectoral Agglomeration Effects", is a short note that derives the theoretical model estimated in the fourth chapter. The derivation is directly based on the multi-regional framework of Ciccone (2002) but extends it to include sectoral disaggregation and a temporal dimension. This allows us to write present productivity formally as a function of past productivity and other contemporaneous and past control variables. The fourth chapter, entitled "Sectoral Agglomeration Effects in a Panel of European Regions", takes the final equation derived in chapter three to the data. We investigate the empirical link between density and labour productivity based on regional data (245 NUTS-2 regions over the period 1980-2003). Dynamic panel techniques allow us to control for the possible endogeneity of density and for region-specific effects. We find a positive long-run elasticity of labour productivity with respect to density of about 13%. When using data at the sectoral level, it appears that positive cross-sector and negative own-sector externalities are present in manufacturing, while financial services display strong positive own-sector effects. The fifth and last chapter, entitled "Is the World's Economic Center of Gravity Already in Asia?", computes the world's economic, demographic and geographic centers of gravity for 1975-2004 and compares them. Based on data for the largest cities in the world and using the physical concept of the center of mass, we find that the world's economic center of gravity is still located in Europe, even though there is a clear shift towards Asia. To sum up, this thesis makes three main contributions.
First, it provides new order-of-magnitude estimates of the role of trade in the globalisation and environment debate. Second, it computes reliable and disaggregated elasticities for the effect of density on labour productivity in European regions. Third, it allows us, in a geometrically rigorous way, to track the path of the world's economic center of gravity.
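The last chapter's center-of-gravity calculation treats each city as a point mass on the globe. A minimal sketch of that idea is shown below: weights (for example city GDP or population) are attached to latitude/longitude points, the points are converted to 3D Cartesian coordinates, the weighted mean is taken, and the mean vector is projected back onto the sphere. The sample cities and weights are invented for illustration; the chapter's actual data and weighting scheme are not reproduced here.

```python
import numpy as np

def center_of_gravity(lats_deg, lons_deg, weights):
    """Weighted centre of mass of points on a unit sphere, returned as (lat, lon) in degrees."""
    lats = np.radians(lats_deg)
    lons = np.radians(lons_deg)
    # Cartesian coordinates on the unit sphere
    x = np.cos(lats) * np.cos(lons)
    y = np.cos(lats) * np.sin(lons)
    z = np.sin(lats)
    w = np.asarray(weights, dtype=float)
    cx = np.average(x, weights=w)
    cy = np.average(y, weights=w)
    cz = np.average(z, weights=w)
    # Project the mean vector back onto the sphere
    lat = np.degrees(np.arctan2(cz, np.hypot(cx, cy)))
    lon = np.degrees(np.arctan2(cy, cx))
    return lat, lon

# Illustrative only: three "cities" with arbitrary weights
print(center_of_gravity([48.9, 40.7, 35.7], [2.3, -74.0, 139.7], [1.0, 1.5, 2.0]))
```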

Relevance:

90.00%

Abstract:

Conservation biology is commonly associated with the protection of small, endangered populations. Nevertheless, large or potentially large populations may also need management to prevent the negative effects of overpopulation. As there are both qualitative and quantitative differences between protecting small populations and controlling large ones, distinct methods and models are needed. The aim of this work was to develop theoretical models to predict large-population dynamics, as well as computer tools to estimate the parameters of these models and to test management scenarios. The Alpine ibex (Capra ibex ibex), which has experienced a spectacular increase since its reintroduction in Switzerland at the beginning of the 20th century, was used as the paradigm species. This task was achieved in three steps. A local population dynamics model was first developed specifically for the ibex: the underlying age- and sex-structured model is based on a Leslie matrix approach with the addition of density dependence, environmental stochasticity and culling. This model was implemented in a management-support software package named SIM-Ibex, allowing census-data maintenance, automated parameter estimation, and the tuning and simulation of culling strategies. However, population dynamics are driven not only by demographic factors but also by dispersal and the colonisation of new areas. Habitat suitability and obstacles to dispersal therefore had to be modelled. To this end, a software package named Biomapper was developed. Its central module is based on the Ecological Niche Factor Analysis (ENFA), whose principle is to compute niche marginality and specialisation factors from a set of environmental predictors and species presence data. All Biomapper modules are linked to Geographic Information Systems (GIS); they cover all operations of data importation, predictor preparation, ENFA and habitat suitability map computation, result validation and further processing; a module also allows the mapping of dispersal barriers and corridors. The application domain of ENFA was then explored by means of a simulated species distribution. It was compared to a commonly used habitat suitability method, the Generalised Linear Model (GLM), and proved better suited for spreading or cryptic species. Demographic and landscape information were finally merged into a global model. To cope with landscape realism and the technical constraints of large-population modelling, a cellular automaton approach was chosen: the study area is modelled by a lattice of hexagonal cells, each characterised by a few fixed properties (a carrying capacity and six impermeability rates quantifying exchanges between adjacent cells) and one variable, the population density. The latter varies according to local reproduction, survival and dispersal dynamics, modified by density dependence and stochasticity. A program named HexaSpace was developed to perform two functions: (1) calibrating the automaton on the basis of local population dynamics models (e.g., computed by SIM-Ibex) and a habitat suitability map (e.g., computed by Biomapper); and (2) running simulations. It allows the spread of an invading species to be studied across a complex landscape made of variously suitable areas and dispersal barriers. This model was applied to the history of the ibex reintroduction in the Bernese Alps (Switzerland). SIM-Ibex is now used by governmental wildlife managers to prepare and verify culling plans. Biomapper has been applied to several species (both plants and animals) around the world. In the same way, whilst HexaSpace was originally designed for terrestrial animal species, it could easily be extended to model plant propagation or the dispersal of flying animals. Because these programs were designed to proceed from low-level data to a complex, realistic model, and because they benefit from an intuitive user interface, they may have many applications in conservation biology. Moreover, theoretical questions in the fields of population and landscape ecology might also be addressed by these approaches.
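A minimal sketch of the kind of update rule described for the cellular automaton is given below: each hexagonal cell holds a density, grows locally with density dependence and noise, and exchanges a fraction of its individuals with its neighbours according to per-edge impermeability rates. The data layout (dictionaries keyed by cell id) and the logistic growth term are assumptions for illustration, not the HexaSpace implementation.

```python
import random

def automaton_step(density, neighbours, carrying_capacity, impermeability,
                   growth_rate=0.3, dispersal=0.2, noise_sd=0.05):
    """One time step of a toy hexagonal-lattice population automaton.

    density           : dict cell_id -> population density
    neighbours        : dict cell_id -> list of up to 6 adjacent cell ids
    carrying_capacity : dict cell_id -> local carrying capacity (habitat quality)
    impermeability    : dict (cell_id, neighbour_id) -> fraction of flow blocked (0..1)
    """
    # 1) Dispersal on current densities: each cell sends an equal share across each
    #    of its (up to six) edges, reduced by the edge's impermeability rate.
    after_dispersal = {}
    for cell in density:
        out = sum(dispersal / 6.0 * (1.0 - impermeability[(cell, nb)]) * density[cell]
                  for nb in neighbours[cell])
        incoming = sum(dispersal / 6.0 * (1.0 - impermeability[(nb, cell)]) * density[nb]
                       for nb in neighbours[cell])
        after_dispersal[cell] = density[cell] - out + incoming
    # 2) Local logistic growth with environmental noise (density dependence + stochasticity).
    new_density = {}
    for cell, n in after_dispersal.items():
        k = max(carrying_capacity[cell], 1e-9)
        grown = n + growth_rate * n * (1.0 - n / k) + random.gauss(0.0, noise_sd) * n
        new_density[cell] = max(grown, 0.0)
    return new_density

# Toy usage with two adjacent cells
density = {"A": 100.0, "B": 0.0}
neigh = {"A": ["B"], "B": ["A"]}
capacity = {"A": 120.0, "B": 80.0}
imp = {("A", "B"): 0.5, ("B", "A"): 0.5}
print(automaton_step(density, neigh, capacity, imp))
```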

Relevance:

90.00%

Abstract:

Many models proposed to study the evolution of collective action rely on a formalism that represents social interactions as n-player games between individuals adopting discrete actions such as cooperate and defect. Despite the importance of spatial structure in biological collective action, the analysis of n-player games in spatially structured populations has so far proved elusive. We address this problem by considering mixed strategies and by integrating discrete-action n-player games into the direct fitness approach of social evolution theory. This allows us to conveniently identify convergence stable strategies and to capture the effect of population structure by a single structure coefficient, namely, the pairwise (scaled) relatedness among interacting individuals. As an application, we use our mathematical framework to investigate collective action problems associated with the provision of three different kinds of collective goods, paradigmatic of a vast array of helping traits in nature: "public goods" (both providers and shirkers can use the good, e.g., alarm calls), "club goods" (only providers can use the good, e.g., participation in collective hunting), and "charity goods" (only shirkers can use the good, e.g., altruistic sacrifice). We show that relatedness promotes the evolution of collective action in different ways depending on the kind of collective good and its economies of scale. Our findings highlight the importance of explicitly accounting for relatedness, the kind of collective good, and the economies of scale in theoretical and empirical studies of the evolution of collective action.
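The mixed-strategy formalism used here assigns each individual a probability z of expressing the discrete action (e.g., cooperate). In a group of n players whose co-players cooperate independently with probability z, the focal individual's expected payoff is a binomial average over the number of cooperating co-players. The sketch below computes this average for a simple linear public good; the payoff function and parameter values are illustrative assumptions, not the specific games analysed in the paper.

```python
from math import comb

def expected_payoff(z_focal, z_others, n, payoff):
    """Expected payoff of a focal player in an n-player game with mixed strategies.

    z_focal      : probability that the focal player cooperates
    z_others     : probability that each of the n-1 co-players cooperates
    payoff(a, k) : payoff to the focal player given its own action a
                   (1 = cooperate, 0 = defect) and k cooperating co-players
    """
    total = 0.0
    for k in range(n):  # k cooperating co-players among the n-1 others
        p_k = comb(n - 1, k) * z_others**k * (1 - z_others)**(n - 1 - k)
        total += p_k * (z_focal * payoff(1, k) + (1 - z_focal) * payoff(0, k))
    return total

# Illustrative linear public good: each cooperator adds b/n to everyone, at personal cost c
b, c, n = 3.0, 1.0, 5
public_good = lambda a, k: (a + k) * b / n - a * c
print(expected_payoff(0.5, 0.5, n, public_good))
```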

Relevance:

80.00%

Abstract:

Background: New ways of representing diffusion data have emerged recently and have made it possible to create structural connectivity maps of healthy brains (Hagmann P et al. (2008)). These maps allow alterations to be studied over the entire brain at the connection and network level, which is of high interest in complex disconnection diseases like schizophrenia. In this pathology, where multiple lines of evidence suggest an association with abnormalities in neural circuitry and impaired structural connectivity, diffusion imaging has been widely applied. Despite the many findings, most research using diffusion imaging relies on scalar maps derived from the diffusion data to show that markers of white-matter integrity are diminished in several areas of the brain (Kyriakopoulos M et al. (2008)). Using the structural connection matrix constructed by whole-brain tractography, we report in this work network connectivity alterations in schizophrenic patients.

Methods: We investigated 13 schizophrenic patients, assessed with the DIGS (Diagnostic Interview for Genetic Studies, DSM-IV criteria), and 13 healthy controls. For each volunteer we acquired a DT-MRI and a Q-ball imaging dataset, as well as a high-resolution anatomical T1 scan, during the same session on a 3 T clinical MRI scanner. The controls were matched on age, gender, handedness, and parental socioeconomic status. For all subjects, a low-resolution connection matrix was obtained by dividing the cortex into 66 gyral-based ROIs. A higher-resolution matrix was constructed using 250 ROIs, as described in Hagmann P et al. (2008). These ROIs were used jointly with the diffusion tractography to construct the high- and low-resolution density connection matrices for each subject. In a first step, the matrices of the two groups were compared in terms of connectivity, not density, to check whether the pathological group shows a loss of global connectivity; to this end, the density connection matrices were binarized. As some local connectivity changes were also suspected, especially in frontal and temporal areas, we also looked for the areas where connectivity showed significant changes.

Results: The statistical analysis revealed a significant loss of global connectivity in the schizophrenic patients' brains at the 5% level. Furthermore, by constructing statistics that represent local connectivity within the anatomical regions (66 ROIs) from the data obtained at the finest resolution (250 ROIs), to improve robustness, we identified the regions that cause this significant loss of connectivity. The significance holds after multiple-testing correction by the False Discovery Rate.

Discussion: The detected regions are largely the same as those reported in the literature as being involved in schizophrenia. Most of the connectivity decreases are found in both hemispheres in fronto-frontal and temporo-temporal regions, as well as between some temporal ROIs and their adjacent ROIs in the parietal and occipital lobes.
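The group comparison sketched in this abstract can be reproduced in outline as follows: binarize each subject's density connection matrix, take a global connectivity summary (here simply the number of existing connections) for a group-level test, then test regional statistics separately and correct with the Benjamini-Hochberg False Discovery Rate. The matrix shapes, the choice of a t-test and the binarization threshold are illustrative assumptions, not the exact pipeline of the study.

```python
import numpy as np
from scipy import stats

def global_connectivity(density_matrix, threshold=0.0):
    """Number of connections in a binarized (undirected) connection matrix."""
    binary = density_matrix > threshold
    return int(np.triu(binary, k=1).sum())   # count each undirected edge once

def fdr_bh(pvals, q=0.05):
    """Benjamini-Hochberg procedure: boolean array of rejected hypotheses."""
    p = np.asarray(pvals)
    order = np.argsort(p)
    m = len(p)
    thresholds = q * np.arange(1, m + 1) / m
    passed = p[order] <= thresholds
    rejected = np.zeros(m, dtype=bool)
    if passed.any():
        rejected[order[: np.nonzero(passed)[0].max() + 1]] = True
    return rejected

# Toy example: 13 patients vs 13 controls, 66x66 density matrices (random data)
rng = np.random.default_rng(0)
patients = rng.random((13, 66, 66))
controls = rng.random((13, 66, 66)) + 0.05
gc_pat = [global_connectivity(m, threshold=0.5) for m in patients]
gc_con = [global_connectivity(m, threshold=0.5) for m in controls]
print("global connectivity t-test:", stats.ttest_ind(gc_con, gc_pat))
print("regions surviving FDR:", fdr_bh(rng.random(66)).sum())
```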

Relevance:

80.00%

Abstract:

Average physical stature has increased dramatically during the 20th century in many populations across the world, with few exceptions. It remains unclear whether social inequalities in height persist despite improvements in living standards in the welfare economies of Western Europe. We examined trends in the association between height and socioeconomic indicators in adults over three decades in France. The data were drawn from the French Decennial Health Surveys: a multistage, stratified, random survey of households, representative of the population, conducted in 1970, 1980, 1991, and 2003. We categorised age into 10-year bands: 25-34, 35-44, 45-54 and 55-64 years. Education and income were the two socioeconomic measures used. The slope index of inequality (SII) was used as a summary index of absolute social inequalities in height. The results show that average height increased over this period; men and women aged 25-34 years were 171.9 and 161.2 cm tall in 1970 and 177.0 and 164.0 cm in 2003, respectively. However, education-related inequalities in height remained unchanged over this period: in men they were 4.48 cm (1970), 4.71 cm (1980), 5.58 cm (1991) and 4.69 cm (2003); the corresponding figures in women were 2.41, 2.37, 3.14 and 2.96 cm. Income-related inequalities in height were smaller and much attenuated after adjustment for education. These results suggest that in France, social inequalities in adult height in absolute terms have remained unchanged across the three decades under examination.
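The slope index of inequality used here is commonly obtained by regressing the outcome (height) on the midpoint of each socioeconomic group's cumulative population share, weighting by group size; the slope is then the absolute height gap between the bottom and the top of the social hierarchy. The sketch below shows that general recipe with invented group means and sizes, not the survey data.

```python
import numpy as np

def slope_index_of_inequality(group_sizes, group_means):
    """SII: weighted regression of group mean outcome on cumulative-rank midpoints.

    Groups must be ordered from lowest to highest socioeconomic position.
    Returns the estimated outcome difference between relative rank 1 and rank 0.
    """
    sizes = np.asarray(group_sizes, dtype=float)
    means = np.asarray(group_means, dtype=float)
    shares = sizes / sizes.sum()
    ridit = np.cumsum(shares) - shares / 2.0    # midpoint of each group's rank interval
    # Weighted least squares of mean outcome on the ridit score
    X = np.column_stack([np.ones_like(ridit), ridit])
    W = np.diag(sizes)
    beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ means)
    return beta[1]                              # slope = SII, in outcome units (cm)

# Invented example: four education groups ordered low -> high
print(slope_index_of_inequality([300, 400, 200, 100], [172.0, 173.5, 175.0, 176.5]))
```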

Relevance:

80.00%

Abstract:

Question: How do clonal traits of a locally dominant grass (Elymus repens (L.) Gould) respond to soil heterogeneity and shape the spatial patterns of its tillers? How do tiller spatial patterns constrain seedling recruitment within the community?

Location: Artificial banks of the River Rhone, France.

Material and Methods: We examined 45 vegetation patches dominated by Elymus repens. In a first phase, we tested relationships between soil variables and three clonal traits (spacer length, number of clumping tillers and branching rate), and between the same clonal traits and spatial patterns (i.e., density and degree of spatial aggregation) of tillers at a very fine scale. In a second phase, we performed a sowing experiment to investigate the effects of density and spatial patterns of E. repens on the recruitment of eight species selected from the regional species pool.

Results: Clonal traits, especially spacer length, had clear effects on the densification and aggregation of E. repens tillers, and these same traits in turn responded clearly to changes in soil granulometry. The density and degree of aggregation of E. repens tillers were positively correlated with total seedling cover and diversity at the finest spatial scales.

Conclusions: Spatial patterning of a dominant perennial grass responds to soil heterogeneity through modifications of its clonal morphology, as a trade-off between phalanx and guerrilla forms. In turn, spatial patterns have strong effects on the abundance and diversity of seedlings. Spatial patterns of tillers most probably led to the formation of endogenous gaps in which the recruitment of new plant individuals was enhanced. Interestingly, we also observed more idiosyncratic effects of tiller spatial patterns on seedling cover and diversity when focusing on the different growth forms of the sown species.
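For quadrat count data, a degree of spatial aggregation of tillers can be summarised by the variance-to-mean ratio (index of dispersion): values near 1 indicate random placement and values above 1 indicate clumping. This is a generic measure offered only as an illustration; the study's own aggregation statistic is not specified in the abstract.

```python
import numpy as np

def index_of_dispersion(counts):
    """Variance-to-mean ratio of per-quadrat tiller counts (>1 suggests aggregation)."""
    counts = np.asarray(counts, dtype=float)
    mean = counts.mean()
    return counts.var(ddof=1) / mean if mean > 0 else float("nan")

# Invented counts of tillers in 10 quadrats of a vegetation patch
print(index_of_dispersion([0, 2, 9, 1, 0, 12, 3, 0, 8, 1]))
```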