945 results for Moretti, Franco: Graphs, Maps, Trees. Abstract models for a literary theory
Abstract:
The self-organizing map (Kohonen 1997) is a type of artificial neural network developed to explore patterns in high-dimensional multivariate data. The conventional version of the algorithm uses the Euclidean metric in the adaptation of the model vectors, thus rendering the whole methodology, in theory, incompatible with non-Euclidean geometries. In this contribution we explore the two main aspects of the problem: 1. Whether the conventional approach using the Euclidean metric can yield valid results with compositional data. 2. Whether a modification of the conventional approach, replacing vector sum and scalar multiplication by the canonical operators in the simplex (i.e. perturbation and powering), can converge to an adequate solution. Preliminary tests showed that both methodologies can be used on compositional data. However, the modified version of the algorithm performs worse than the conventional version, in particular when the data are pathological. Moreover, the conventional approach converges faster to a solution when the data are "well-behaved". Key words: Self-Organizing Map; Artificial Neural Networks; Compositional Data
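The two adaptation rules compared above can be sketched in a few lines. This is a minimal illustration (our own, not the authors' code): the conventional update moves a model vector toward the input along a straight line, while the simplex variant uses perturbation and powering so the updated vector never leaves the simplex.

```python
import numpy as np

def closure(x):
    """Project a positive vector onto the unit-sum simplex."""
    return x / x.sum()

def euclidean_update(w, x, lr):
    """Conventional SOM adaptation: shift the model vector toward x."""
    return w + lr * (x - w)

def simplex_update(w, x, lr):
    """Simplex adaptation: vector sum and scalar multiplication are
    replaced by perturbation (closed component-wise product) and
    powering (closed component-wise exponentiation)."""
    return closure(w * (x / w) ** lr)

w = closure(np.array([0.2, 0.3, 0.5]))   # model vector (a composition)
x = closure(np.array([0.5, 0.3, 0.2]))   # input sample (a composition)
w_euc = euclidean_update(w, x, lr=0.1)
w_sim = simplex_update(w, x, lr=0.1)
# both updates move w toward x; the simplex update is closed by construction
```

Here `lr` stands in for the neighborhood-weighted learning rate of the SOM; a full implementation would loop this update over a whole grid of model vectors.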
Abstract:
INTRODUCTION Associations of hormone-receptor positive breast cancer with excess adiposity are reasonably well characterized; however, uncertainty remains regarding the association of body mass index (BMI) with hormone-receptor negative malignancies, and possible interactions by hormone replacement therapy (HRT) use. METHODS Within the European EPIC cohort, Cox proportional hazards models were used to describe the relationship of BMI, waist and hip circumferences with risk of estrogen-receptor (ER) negative and progesterone-receptor (PR) negative (n = 1,021) and ER+PR+ (n = 3,586) breast tumors within five-year age bands. Among postmenopausal women, the joint effects of BMI and HRT use were analyzed. RESULTS For risk of ER-PR- tumors, there was no association of BMI across the age bands. However, when analyses were restricted to postmenopausal HRT never users, a positive risk association with BMI (third versus first tertile HR = 1.47 (1.01 to 2.15)) was observed. BMI was inversely associated with ER+PR+ tumors among women aged ≤49 years (per 5 kg/m2 increase, HR = 0.79 (95% CI 0.68 to 0.91)), and positively associated with risk among women ≥65 years (HR = 1.25 (1.16 to 1.34)). Adjusting for BMI, waist and hip circumferences showed no further associations with risks of breast cancer subtypes. Current use of HRT was significantly associated with an increased risk of receptor-negative (HRT current use compared to HRT never use HR: 1.30 (1.05 to 1.62)) and positive tumors (HR: 1.74 (1.56 to 1.95)), although this risk increase was weaker for ER-PR- disease (Phet = 0.035). The association of HRT was significantly stronger in leaner women (BMI ≤22.5 kg/m2) than in more overweight women (BMI ≥25.9 kg/m2) for both ER-PR- (HR: 1.74 (1.15 to 2.63)) and ER+PR+ (HR: 2.33 (1.84 to 2.92)) breast cancer, and was not restricted to any particular HRT regimen.
CONCLUSIONS An elevated BMI may be positively associated with risk of ER-PR- tumors among postmenopausal women who never used HRT. Furthermore, postmenopausal HRT users were at an increased risk of ER-PR- as well as ER+PR+ tumors, especially among leaner women. For hormone-receptor positive tumors, but not for hormone-receptor negative tumors, our study confirms an inverse association of risk with BMI among young women of premenopausal age. Our data provide evidence for a possible role of sex hormones in the etiology of hormone-receptor negative tumors.
Abstract:
BACKGROUND AND HYPOTHESIS Although prodromal angina occurring shortly before an acute myocardial infarction (MI) has protective effects against in-hospital complications, this effect has not been well documented after initial hospitalization, especially in older or diabetic patients. We examined whether angina 1 week before a first MI provides protection in these patients. METHODS A total of 290 consecutive patients, 143 elderly (>64 years of age) and 147 adults (<65 years of age), 68 of whom were diabetic (23.4%) and 222 nondiabetic (76.6%), were examined to assess the effect of preceding angina on long-term prognosis (56 months) after initial hospitalization for a first MI. RESULTS No significant differences were found in long-term complications after initial hospitalization in these adult and elderly patients according to whether or not they had prodromal angina (44.4% with angina vs 45.4% without in adults; 45.5% vs 58% in elderly, P < 0.2). Nor were differences found according to their diabetic status (61.5% with angina vs 72.7% without in diabetics; 37.3% vs 38.3% in nondiabetics; P = 0.4). CONCLUSION The occurrence of angina 1 week before a first MI does not confer long-term protection against cardiovascular complications after initial hospitalization in adult or elderly patients, whether or not they have diabetes.
Abstract:
Chikungunya virus (CHIKV) transmission was detected in the Americas in 2013 and recently spread south to Bolivia, Brazil and Paraguay, countries bordering Argentina. The presence of the mosquito Aedes aegypti in half of the country, together with the regional context, drove us to make a rapid assessment of transmission risk. Temperature thresholds for vector breeding and for virus transmission, together with adult activity data from the literature, were mapped on a monthly basis to estimate risk. Transmission of chikungunya by Ae. aegypti in the world has been observed at monthly mean temperatures from 21-34ºC, with the majority occurring between 26-28ºC. In Argentina, temperatures above 21ºC are observed from September in the northeast, expanding south until January and retreating back to the northeast in April. The maximum area under risk encompasses more than half the country and around 32 million inhabitants. Vector adult activity was registered where monthly mean temperatures exceeded 13ºC, in the northeast all year round and in the northern half from September-May. The models proposed herein show that conditions for transmission are already present. Considering the regional context and the historic inability to control dengue in the region, chikungunya fever seems unavoidable.
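The threshold logic described above is simple enough to sketch. The 21-34ºC transmission band and the 13ºC adult-activity threshold are taken from the abstract; the example monthly means below are invented for illustration.

```python
def monthly_risk(mean_temp_c):
    """Classify one monthly mean temperature for chikungunya risk,
    using the thresholds reported in the abstract."""
    if 21.0 <= mean_temp_c <= 34.0:
        return "transmission possible"
    if mean_temp_c > 13.0:
        return "vector activity only"
    return "no risk"

# hypothetical monthly means for a northeastern location (illustration only)
temps = {"Jan": 27.0, "Apr": 22.0, "Jul": 16.0, "Sep": 21.5}
risk = {month: monthly_risk(t) for month, t in temps.items()}
```

Mapping this classification per cell and per month over a temperature grid yields the kind of monthly risk map the study describes.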
Abstract:
Abstract Significance: Schizophrenia (SZ) and bipolar disorder (BD) are classified as two distinct diseases. However, accumulating evidence shows that both disorders share genetic, pathological, and epidemiological characteristics. Based on genetic and functional findings, redox dysregulation due to an imbalance between pro-oxidants and antioxidant defense mechanisms has been proposed as a risk factor contributing to their pathophysiology. Recent Advances: Altered antioxidant systems and signs of increased oxidative stress are observed in peripheral tissues and brains of SZ and BD patients, including abnormal prefrontal levels of glutathione (GSH), the major cellular redox regulator and antioxidant. Here we review experimental data from rodent models demonstrating that permanent as well as transient GSH deficits result in behavioral, morphological, electrophysiological, and neurochemical alterations analogous to pathologies observed in patients. Mice with a GSH deficit display increased stress reactivity, altered social behavior, impaired prepulse inhibition, and exaggerated locomotor responses to psychostimulant injection. These behavioral changes are accompanied by N-methyl-D-aspartate receptor hypofunction, elevated glutamate levels, impairment of parvalbumin GABA interneurons, abnormal neuronal synchronization, altered dopamine neurotransmission, and deficient myelination. Critical Issues: Treatment with the GSH precursor and antioxidant N-acetylcysteine normalizes some of those deficits in mice, and also improves SZ and BD symptoms when given as an adjunct to antipsychotic medication. Future Directions: These data demonstrate the usefulness of GSH-deficient rodent models for identifying the mechanisms by which a redox imbalance could contribute to the development of SZ and BD pathophysiologies, and for developing novel therapeutic approaches based on antioxidant and redox-regulator compounds. Antioxid. Redox Signal. 18, 1428-1443.
Abstract:
An important statistical development of the last 30 years has been the advance in regression analysis provided by generalized linear models (GLMs) and generalized additive models (GAMs). Here we introduce a series of papers prepared within the framework of an international workshop entitled "Advances in GLMs/GAMs modeling: from species distribution to environmental management", held in Riederalp, Switzerland, 6-11 August 2001. We first discuss some general uses of statistical models in ecology, as well as provide a short review of several key examples of the use of GLMs and GAMs in ecological modeling efforts. We next present an overview of GLMs and GAMs, and discuss some of their related statistics used for predictor selection, model diagnostics, and evaluation. Included is a discussion of several new approaches applicable to GLMs and GAMs, such as ridge regression (an alternative to stepwise selection of predictors) and methods for the identification of interactions by a combined use of regression trees and several other approaches. We close with an overview of the papers and how we feel they advance our understanding of the application of these models to ecological modeling.
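As a concrete reminder of what fitting a GLM involves, here is a minimal sketch (ours, not from the workshop papers) of iteratively reweighted least squares for a Poisson GLM with log link, implemented directly in NumPy on simulated data.

```python
import numpy as np

def poisson_glm_irls(X, y, n_iter=25):
    """Fit a Poisson GLM with log link by iteratively reweighted
    least squares, the standard fitting algorithm behind GLMs."""
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        eta = X @ beta                 # linear predictor
        mu = np.exp(eta)               # inverse link
        W = mu                         # Poisson variance function: var = mu
        z = eta + (y - mu) / mu        # working response
        XtW = X.T * W                  # weighted least squares step
        beta = np.linalg.solve(XtW @ X, XtW @ z)
    return beta

rng = np.random.default_rng(0)
n = 500
x = rng.normal(size=n)
X = np.column_stack([np.ones(n), x])   # intercept + one predictor
true_beta = np.array([0.5, 0.8])
y = rng.poisson(np.exp(X @ true_beta))
beta_hat = poisson_glm_irls(X, y)      # should approach true_beta
```

A GAM replaces the linear predictor by a sum of smooth terms but is fitted with the same weighted-least-squares machinery inside a backfitting or penalized loop.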
Abstract:
Abstract Sitting between your past and your future doesn't mean you are in the present. Dakota Skye
Complex systems science is an interdisciplinary field grouping under the same umbrella dynamical phenomena from the social, natural or mathematical sciences. The emergence of a higher-order organization or behavior, transcending that expected of the linear addition of the parts, is a key factor shared by all these systems. Most complex systems can be modeled as networks that represent the interactions amongst the system's components. In addition to the actual nature of the parts' interactions, the intrinsic topological structure of the underlying network is believed to play a crucial role in the remarkable emergent behaviors exhibited by the systems. Moreover, the topology is also a key factor in explaining the extraordinary flexibility and resilience to perturbations when applied to transmission and diffusion phenomena. In this work, we study the effect of different network structures on the performance and on the fault tolerance of systems in two different contexts. In the first part, we study cellular automata, which are a simple paradigm for distributed computation. Cellular automata are made of basic Boolean computational units, the cells, which rely on simple rules and information from the surrounding cells to perform a global task. The limited visibility of the cells can be modeled as a network, where interactions amongst cells are governed by an underlying structure, usually a regular one. In order to increase the performance of cellular automata, we chose to change their topology. We applied computational principles inspired by Darwinian evolution, called evolutionary algorithms, to alter the system's topological structure starting from either a regular or a random one. The outcome is remarkable, as the resulting topologies share properties of both regular and random networks, and display similarities with the Watts-Strogatz small-world networks found in social systems. Moreover, the performance and tolerance to probabilistic faults of our small-world-like cellular automata surpass those of regular ones. In the second part, we use the context of biological genetic regulatory networks and, in particular, Kauffman's random Boolean networks model. In some ways, this model is close to cellular automata, although it is not expected to perform any task. Instead, it simulates the time evolution of genetic regulation within living organisms under strict conditions. The original model, though very attractive in its simplicity, suffered from important shortcomings unveiled by recent advances in genetics and biology. We propose to use these new discoveries to improve the original model. Firstly, we have used artificial topologies believed to be closer to those of gene regulatory networks. We have also studied actual biological organisms, and used parts of their genetic regulatory networks in our models. Secondly, we have addressed the improbable full synchronicity of the events taking place in Boolean networks and proposed a more biologically plausible cascading update scheme. Finally, we tackled the actual Boolean functions of the model, i.e. the specifics of how genes activate according to the activity of upstream genes, and presented a new update function that takes into account the actual promoting and repressing effects of one gene on another. Our improved models demonstrate the expected, biologically sound behavior of previous GRN models, yet with superior resistance to perturbations. We believe they are one step closer to the biological reality.
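A Kauffman-style random Boolean network of the kind discussed above can be simulated in a few lines. This sketch (ours, with arbitrary parameters) builds a synchronous N-K network and measures the length of the attractor cycle reached from a given initial state.

```python
import random

def random_boolean_network(n, k, seed=0):
    """Build a Kauffman-style random Boolean network: each of the n
    genes reads k randomly chosen inputs through a random Boolean
    function, stored as a truth table with 2**k entries."""
    rng = random.Random(seed)
    inputs = [rng.sample(range(n), k) for _ in range(n)]
    tables = [[rng.randint(0, 1) for _ in range(2 ** k)] for _ in range(n)]

    def step(state):
        # synchronous update: every gene reads the previous global state
        return tuple(
            tables[i][sum(state[j] << b for b, j in enumerate(inputs[i]))]
            for i in range(n)
        )
    return step

def attractor_length(step, state):
    """Iterate the update until a state repeats; the repeating segment
    is the attractor cycle (guaranteed: the state space is finite)."""
    seen = {}
    t = 0
    while state not in seen:
        seen[state] = t
        state = step(state)
        t += 1
    return t - seen[state]

step = random_boolean_network(n=12, k=2, seed=42)
length = attractor_length(step, state=(0,) * 12)
```

Replacing the synchronous `step` by a staggered or cascading update, as the thesis proposes, is a matter of changing which genes read fresh versus stale state in each iteration.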
Abstract:
Landslides are one of the main natural hazards in mountainous regions. In Switzerland, landslides cause damage every year that affects infrastructure and carries significant financial costs. An in-depth understanding of sliding mechanisms may help limit their impact. In particular, this can be achieved through better knowledge of the internal structure of the landslide and the determination of its volume and its sliding surface or surfaces. In a landslide, the disorganization and the presence of fractures in the displaced material change the physical parameters, in particular decreasing the seismic velocities and the material density. Therefore, seismic methods are well suited to the study of landslides. Among seismic methods, surface-wave dispersion analysis is easy to implement: shear-wave velocity variations with depth can be estimated without having to resort to an S-wave source and horizontal geophones. Its three-step implementation involves measuring surface-wave dispersion with long arrays, determining the dispersion curves, and finally inverting these curves. Velocity models obtained through this approach are only valid when the investigated medium does not include lateral variations.
In practice, this assumption is seldom correct, in particular for landslides, in which reworked layers are likely to include strong lateral heterogeneities. To assess the possibility of determining dispersion curves from short arrays, we carried out test measurements on a site (Arnex, VD) that includes a borehole. A 190 m long seismic profile was acquired in a valley carved into limestone and filled with about 30 m of glacio-lacustrine sediments. The data acquired along this profile confirmed that the presence of lateral variations under the geophone array influences the dispersion-curve shape, sometimes so much that it prevents the determination of the dispersion curves. Our approach to applying surface-wave dispersion analysis on sites with lateral variations consists in obtaining dispersion curves for a series of short arrays, inverting each curve, and interpolating the resulting velocity models. The choice of the location as well as the length of each geophone array is important: it takes into account the location of the heterogeneities revealed by the seismic refraction interpretation of the data, but also the location of signal-amplitude anomalies observed on maps that represent, for a given frequency, the measured amplitude in the shot position - receiver position domain. The procedure proposed by Lin and Lin (2007) turned out to be an efficient way to determine dispersion curves using short arrays. It consists in building, from an array of geophones and several shot positions, a time-offset gather that covers a wide range of source-to-receiver offsets. When assembling the different data, a phase correction is applied to reduce the static phase error induced by lateral variations. To evaluate this correction, we suggest calculating, for two successive shots, the cross power spectral density of common-offset traces. On the Arnex site, 22 dispersion curves were determined with geophone arrays 10 m in length. We also took advantage of the borehole to acquire an S-wave vertical seismic profile. The S-wave velocity model derived from the interpretation of the vertical seismic profile is used as prior information in the inversion of the dispersion curves. Finally, a 2D velocity model was established from the analysis of the different dispersion curves. It reveals a three-layer structure in good agreement with the lithologies observed in the borehole: a silty clay layer with a shear-wave velocity of about 175 m/s overlies, at 9 m depth, clayey-sandy till deposits characterized by a 300 m/s S-wave velocity down to 14 m and velocities of 400 m/s or more between 14 and 20 m depth. The La Grande Combe landslide (Ballaigues, VD) occurs inside the Quaternary filling of a valley carved into Portlandian limestone. As at the Arnex site, the Quaternary deposits correspond to glacio-lacustrine sediments. In the upper part of the landslide, the sliding surface is located at a depth of about 20 m, coinciding with the discontinuity between Jurassian till and glacio-lacustrine deposits. At the toe of the landslide, we determined 14 dispersion curves along a 144 m long profile using 10 m long geophone arrays. The curves obtained are discontinuous and defined within a frequency range of 7 to 35 Hz. The use of a wide range of offsets (from 8 to 72 m) enabled us to identify 2 to 4 modes of propagation for each dispersion curve. Taking these higher modes into account in the dispersion-curve inversion allowed us to reach an investigation depth of about 20 m. A four-layer 2D model was derived (Vs1 < 175 m/s, 175 m/s < Vs2 < 225 m/s, 225 m/s < Vs3 < 400 m/s, Vs4 > 400 m/s) with variable layer thicknesses. S-wave seismic reflection profiles, acquired with a source built as part of this work, complete and corroborate the velocity model revealed by the surface-wave analysis. In particular, a reflector at a depth of 5 to 10 m, associated with a 180 m/s stacking velocity, images the geometry of the discontinuity between the second and third layers of the model derived from the surface-wave dispersion analysis.
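The suggested phase-correction step, estimating the inter-shot static shift from the cross power spectral density of common-offset traces, can be illustrated on synthetic data. This toy example (ours, not the thesis code) recovers a known 5 ms shift between two sinusoidal "traces" at a frequency inside the 7-35 Hz band used in the study.

```python
import numpy as np

fs = 1000.0                       # sampling rate, Hz
t = np.arange(0, 1.0, 1 / fs)
f0 = 20.0                         # a frequency within the 7-35 Hz band
delay = 0.005                     # 5 ms static shift between the two shots
trace_a = np.sin(2 * np.pi * f0 * t)            # common-offset trace, shot 1
trace_b = np.sin(2 * np.pi * f0 * (t - delay))  # same offset, shot 2

# cross spectrum per FFT bin; its phase at f0 carries the time shift
A = np.fft.rfft(trace_a)
B = np.fft.rfft(trace_b)
cross = A * np.conj(B)
freqs = np.fft.rfftfreq(len(t), 1 / fs)
k = np.argmin(np.abs(freqs - f0))           # bin closest to f0
phase = np.angle(cross[k])                  # radians
estimated_delay = phase / (2 * np.pi * f0)  # back to seconds
```

In practice the cross spectrum would be averaged over the recoverable frequency band, and the resulting phase applied as a correction before stacking the shots into one time-offset gather.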
Abstract:
An important problem in descriptive and prescriptive research in decision making is to identify regions of rationality, i.e., the areas for which heuristics are and are not effective. To map the contours of such regions, we derive probabilities that heuristics identify the best of m alternatives (m > 2) characterized by k attributes or cues (k > 1). The heuristics include a single variable (lexicographic), variations of elimination-by-aspects, equal weighting, hybrids of the preceding, and models exploiting dominance. We use twenty simulated and four empirical datasets for illustration. We further provide an overview by regressing heuristic performance on factors characterizing environments. Overall, sensible heuristics generally yield similar choices in many environments. However, selection of the appropriate heuristic can be important in some regions (e.g., if there is low inter-correlation among attributes/cues). Since our work assumes a hit or miss decision criterion, we conclude by outlining extensions for exploring the effects of different loss functions.
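The hit-or-miss criterion described above is easy to simulate. In this sketch (our illustration, with an arbitrary environment in which the true value is an equal-weighted sum of the cues plus noise), equal weighting should outperform a lexicographic rule keyed to a single cue; in an environment dominated by one cue the ordering could reverse.

```python
import numpy as np

rng = np.random.default_rng(1)

def hit_rate(choose, m=3, k=4, trials=2000):
    """Fraction of trials in which a heuristic picks the alternative
    with the highest true value (a hit-or-miss criterion)."""
    hits = 0
    for _ in range(trials):
        cues = rng.normal(size=(m, k))                      # m alternatives, k cues
        value = cues.sum(axis=1) + rng.normal(scale=0.5, size=m)
        if choose(cues) == int(np.argmax(value)):
            hits += 1
    return hits / trials

lexicographic = lambda cues: int(np.argmax(cues[:, 0]))     # single best cue
equal_weight = lambda cues: int(np.argmax(cues.sum(axis=1)))

p_lex = hit_rate(lexicographic)
p_ew = hit_rate(equal_weight)
```

Varying the cue weights, the inter-cue correlation, m, and k in such simulations is exactly how one maps the "regions" in which each heuristic is or is not effective.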
Abstract:
Eurymetopum is an Andean clerid genus with 22 species. We modeled the ecological niches of 19 species with Maxent and used them as potential distributional maps to identify patterns of richness and endemicity. All modeled species maps were overlapped in a single map in order to determine richness. We performed an optimality analysis with NDM/VNDM in a grid of 1º latitude-longitude in order to identify endemism. We found a highly rich area, located between 32º and 41º south latitude, where the richest pixels have 16 species. One area of endemism was identified, located in the Maule and Valdivian Forest biogeographic provinces, which extends also to the Santiago province of the Central Chilean subregion, and contains four endemic species (E. parallelum, E. prasinum, E. proteus, and E. viride), as well as 16 non-endemic species. The sympatry of these phylogenetically unrelated species might indicate ancient vicariance processes, followed by episodes of dispersal. Based on our results, we suggest a close relationship between these provinces, with the Maule representing a complex area.
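Stacking the thresholded distribution maps to obtain a richness map amounts to summing binary rasters per pixel. A minimal sketch with random stand-in rasters (the real inputs would be the 19 thresholded Maxent maps):

```python
import numpy as np

rng = np.random.default_rng(7)

# 19 hypothetical binary presence/absence rasters on a rows x cols grid,
# standing in for the thresholded Maxent distribution maps
n_species, rows, cols = 19, 10, 12
stack = rng.random((n_species, rows, cols)) < 0.4   # invented 0.4 prevalence

richness = stack.sum(axis=0)                        # species count per pixel
richest = np.unravel_index(np.argmax(richness), richness.shape)
```

Endemicity analysis (as done with NDM/VNDM) then scores candidate grid areas by how well species' ranges coincide with them, rather than by simple per-pixel counts.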
Abstract:
Species' geographic ranges are usually considered as basic units in macroecology and biogeography, yet it is still difficult to measure them accurately for many reasons. About 20 years ago, researchers started using local data on species' occurrences to estimate broad-scale ranges, thereby establishing the niche modeling approach. However, there are still many problems in model evaluation and application, and one of the solutions is to find a consensus among models derived from different mathematical and statistical methods for niche modeling, climatic projections and variable combinations, all of which are sources of uncertainty during niche modeling. In this paper, we discuss this approach of ensemble forecasting and propose that it can be divided into three phases with increasing levels of complexity. Phase I is the simple combination of maps to achieve a consensual and hopefully conservative solution. In Phase II, differences among the maps used are described by multivariate analyses, and Phase III consists of the quantitative evaluation of the relative magnitude of uncertainties from different sources and their mapping. To illustrate these developments, we analyzed the occurrence data of the tiger moth Utetheisa ornatrix (Lepidoptera, Arctiidae), a Neotropical species, and modeled its geographic range in current and future climates.
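Phase I, the simple combination of maps into a consensus, can be sketched as a majority rule over binary range predictions. The maps below are random stand-ins for the outputs of different modeling methods, climatic projections, and variable combinations:

```python
import numpy as np

rng = np.random.default_rng(3)

# seven hypothetical binary range predictions for the same grid,
# one per model/projection/variable combination (invented data)
models = rng.random((7, 8, 8)) < 0.5

# Phase I consensus: a cell belongs to the consensual range when more
# than half of the predictions agree on presence
frequency = models.mean(axis=0)          # per-cell agreement, 0..1
consensus = frequency > 0.5
```

The per-cell `frequency` layer is also the natural input for Phases II and III: its spread across subsets of models is one way to describe and map where the predictions disagree.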
Abstract:
Research on judgment and decision making presents a confusing picture of human abilities. For example, much research has emphasized the dysfunctional aspects of judgmental heuristics, and yet other findings suggest that these can be highly effective. A further line of research has modeled judgment as resulting from "as if" linear models. This paper illuminates the distinctions between these approaches by providing a common analytical framework based on the central theoretical premise that understanding human performance requires specifying how characteristics of the decision rules people use interact with the demands of the tasks they face. Our work synthesizes the analytical tools of lens model research with novel methodology developed to specify the effectiveness of heuristics in different environments, and allows direct comparisons between the different approaches. We illustrate with both theoretical analyses and simulations. We further link our results to the empirical literature by a meta-analysis of lens model studies, and estimate both human and heuristic performance in the same tasks. Our results highlight the trade-off between linear models and heuristics. Whereas the former are cognitively demanding, the latter are simple to use. However, they require knowledge, and thus maps, of when and which heuristic to employ.
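The lens model framework referred to above decomposes a judge's achievement (the correlation between judgments and the criterion) into the matching of two fitted linear models (G) and their predictabilities (R_e for the environment, R_s for the judge). This simulation (ours, with invented weights) checks the lens model equation numerically; residuals are generated independently, so the unmodeled-knowledge term C is negligible and achievement reduces to G*R_e*R_s.

```python
import numpy as np

rng = np.random.default_rng(5)

# simulate an environment (criterion) and a judge, both roughly linear
# in the same k cues, with independent noise
n, k = 5000, 3
X = rng.normal(size=(n, k))
criterion = X @ np.array([0.6, 0.3, 0.1]) + rng.normal(scale=0.8, size=n)
judgment = X @ np.array([0.5, 0.4, 0.1]) + rng.normal(scale=0.6, size=n)

def linear_fit(X, y):
    """Least-squares fitted values of y from the cues (with intercept)."""
    D = np.column_stack([np.ones(len(X)), X])
    coef, *_ = np.linalg.lstsq(D, y, rcond=None)
    return D @ coef

yhat_e = linear_fit(X, criterion)    # linear model of the environment
yhat_s = linear_fit(X, judgment)     # linear model of the judge

r = lambda a, b: np.corrcoef(a, b)[0, 1]
Re, Rs = r(criterion, yhat_e), r(judgment, yhat_s)   # predictabilities
G = r(yhat_e, yhat_s)                                # matching index
ra = r(criterion, judgment)                          # achievement

ra_lme = G * Re * Rs    # lens model equation with the C term dropped
```

Comparing `ra` for a simulated heuristic judge against the same tasks is the kind of direct linear-model-versus-heuristic comparison the paper develops.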
Abstract:
Evolutionary graph theory has been proposed as providing new fundamental rules for the evolution of co-operation and altruism. But how do these results relate to those of inclusive fitness theory? Here, we carry out a retrospective analysis of the models for the evolution of helping on graphs of Ohtsuki et al. [Nature (2006) 441, 502] and Ohtsuki & Nowak [Proc. R. Soc. Lond. Ser. B Biol. Sci (2006) 273, 2249]. We show that it is possible to translate evolutionary graph theory models into classical kin selection models without disturbing at all the mathematics describing the net effect of selection on helping. Model analysis further demonstrates that costly helping evolves on graphs through limited dispersal and overlapping generations. These two factors are well known to promote relatedness between interacting individuals in spatially structured populations. By allowing more than one individual to live at each node of the graph and by allowing interactions to vary with the distance between nodes, our inclusive fitness model allows us to consider a wider range of biological scenarios leading to the evolution of both helping and harming behaviours on graphs.
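One common informal reading of the graph-theoretic result mentioned above, used here purely as a back-of-envelope illustration (not the paper's derivation), is that the b/c > k rule for helping under death-birth updating on a regular graph of degree k is Hamilton's rule rb > c with a graph-induced relatedness of roughly r = 1/k:

```python
def helping_favoured(b, c, k):
    """Hamilton's rule r*b > c with relatedness r = 1/k, which
    rearranges to the b/c > k rule on a degree-k regular graph.
    (Illustrative reading only; the exact conditions depend on the
    update rule and population size.)"""
    r = 1.0 / k
    return r * b > c

# on a degree-3 graph, helping needs a benefit-to-cost ratio above 3
high_benefit = helping_favoured(b=4.0, c=1.0, k=3)   # 4/3 > 1
low_benefit = helping_favoured(b=2.0, c=1.0, k=3)    # 2/3 > 1 is False
```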
Abstract:
In the last two decades, interest in species distribution models (SDMs) of plants and animals has grown dramatically. Recent advances in SDMs allow us to potentially forecast anthropogenic effects on patterns of biodiversity at different spatial scales. However, some limitations still preclude the use of SDMs in many theoretical and practical applications. Here, we provide an overview of recent advances in this field, discuss the ecological principles and assumptions underpinning SDMs, and highlight critical limitations and decisions inherent in the construction and evaluation of SDMs. Particular emphasis is given to the use of SDMs for the assessment of climate change impacts and conservation management issues. We suggest new avenues for incorporating species migration, population dynamics, biotic interactions and community ecology into SDMs at multiple spatial scales. Addressing all these issues requires a better integration of SDMs with ecological theory.