870 results for Agent-Based Models


Relevance: 80.00%

Abstract:

Research project carried out during a stay at the Snider Entrepreneurial Research Center of the Wharton School, University of Pennsylvania, USA, between July and December 2007. The objective of this project is to study the relationship between knowledge management strategies and information and communication technologies (ICT) in the evolution of populations of organizations, and their effects on industrial patterns of spatial agglomeration. To this end, an approach based on an agent-based model is adopted in order to derive significant, testable hypotheses about the evolution of populations of organizations within geographic clusters. The simulation model incorporates the perspectives and assumptions of a conceptual framework, the Information Space or I-Space. This allows an information-based conceptualization of the economic environment that takes its spatial and temporal dimensions into account. The model's parameters make it possible to assign specific knowledge management strategies to the various agents and to place them at a given position in physical space. The simulation shows how the adoption of different knowledge management strategies influences the evolution of organizations and their spatial location, and that this evolution is modified by the development of ICT. By modelling two well-known cases of high-technology geographic clusters, Silicon Valley in California and Route 128 around Boston, the study examines the interrelation between the knowledge management strategies adopted by firms and their choice of spatial location, as well as how this is affected by the evolution of ICT. The results generate a series of rich, testable hypotheses about the impact of ICT development on the dynamics of these geographic clusters. Specifically, it is found that the structuring of knowledge and spatial agglomeration co-evolve, and that this co-evolution is significantly altered by the development of ICT.
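
The abstract gives no implementation detail, so the following is only a minimal, hypothetical sketch of this class of model, not the authors' I-Space implementation: firms carry a knowledge-management strategy, knowledge spills over within a radius that widens with an assumed ict_level parameter, and firms drift toward knowledge-rich neighbours, which produces agglomeration. Every name and coefficient below is an illustrative assumption.

```python
# Minimal sketch of the kind of agent-based model described above. All names
# and dynamics here are illustrative assumptions, not the I-Space model itself.
import random
import math

class Firm:
    def __init__(self, strategy):
        self.strategy = strategy          # e.g. "hoard" vs "share"
        self.x, self.y = random.random(), random.random()
        self.knowledge = random.random()

def step(firms, ict_level):
    """One simulation step: knowledge spillovers within an ICT-dependent radius."""
    radius = 0.05 + 0.3 * ict_level       # ICT development widens spillover reach
    for f in firms:
        neighbours = [g for g in firms if g is not f
                      and math.dist((f.x, f.y), (g.x, g.y)) < radius]
        for g in neighbours:
            if g.strategy == "share":     # sharing firms leak knowledge locally
                f.knowledge += 0.01 * g.knowledge
        # firms drift toward the nearest knowledge-rich neighbour (agglomeration)
        if neighbours:
            best = max(neighbours, key=lambda g: g.knowledge)
            f.x += 0.1 * (best.x - f.x)
            f.y += 0.1 * (best.y - f.y)

firms = [Firm(random.choice(["hoard", "share"])) for _ in range(100)]
for t in range(200):
    step(firms, ict_level=t / 200)        # ICT develops over time
```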

Relevance: 80.00%

Abstract:

Throughout much of the Quaternary Period, inhospitable environmental conditions above the Arctic Circle have been a formidable barrier separating most marine organisms in the North Atlantic from those in the North Pacific(1,2). Rapid warming has begun to lift this barrier(3), potentially facilitating the interchange of marine biota between the two seas(4). Here, we forecast the potential northward progression of 515 fish species following climate change, and report the rate of potential species interchange between the Atlantic and the Pacific via the Northwest Passage and the Northeast Passage. To do so, we projected niche-based models under climate change scenarios and simulated the spread of species through the passages once climatic conditions became suitable. The results reveal a complex range of responses during this century, with accelerated interchange after 2050. By 2100, up to 41 species could enter the Pacific and 44 species could enter the Atlantic via one or both passages. Consistent with historical and recent biodiversity interchanges(5,6), this exchange of fish species may trigger changes in biodiversity and food webs in the North Atlantic and North Pacific, with ecological and economic consequences for ecosystems that at present contribute 39% of global marine fish landings.
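
The gating logic ("simulated the spread of species through the passages once climatic conditions became suitable") can be illustrated with a toy sketch. The warming trajectory, thresholds, and species niche limits below are invented placeholders, not the study's projections.

```python
# Illustrative sketch: a species can cross a passage only once projected
# climate there falls within its thermal niche. All values are placeholders.
def passage_temperature(year, base=-1.5, warming_per_year=0.04):
    """Hypothetical mean summer temperature in the passage for a given year."""
    return base + warming_per_year * (year - 2000)

def first_crossing_year(species_min_temp, years=range(2000, 2101)):
    """First year the passage becomes climatically suitable for the species."""
    for year in years:
        if passage_temperature(year) >= species_min_temp:
            return year
    return None  # passage never becomes suitable this century

species = {"species_A": 0.5, "species_B": 2.0, "species_C": 4.5}  # niche limits, degC
for name, tmin in species.items():
    print(name, "->", first_crossing_year(tmin))
```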

Relevance: 80.00%

Abstract:

Debris flow hazard modelling at medium (regional) scale has been the subject of various studies in recent years. In this study, hazard zonation was carried out, incorporating information about debris flow initiation probability (spatial and temporal) and the delimitation of the potential runout areas. Debris flow hazard zonation was carried out in the area of the Consortium of Mountain Municipalities of Valtellina di Tirano (Central Alps, Italy). The complexity of the phenomenon, the scale of the study, the variability of local conditioning factors, and the lack of data limited the use of process-based models for the runout zone delimitation. Firstly, a map of hazard initiation probabilities was prepared for the study area, based on the available susceptibility zoning information and on the analysis of two sets of aerial photographs for the temporal probability estimation. Afterwards, the hazard initiation map was used as one of the inputs for an empirical GIS-based model (Flow-R), developed at the University of Lausanne (Switzerland). Estimation of the debris flow magnitude was not attempted, as the main aim of the analysis was to prepare a debris flow hazard map at medium scale. A digital elevation model with a 10 m resolution was used, together with land use, geology and debris flow hazard initiation maps, as input to the Flow-R model to restrict potential areas within each hazard initiation probability class to locations where debris flows are most likely to initiate. Afterwards, runout areas were calculated using multiple flow direction and energy-based algorithms. Maximum probable runout zones were calibrated using documented past events and aerial photographs. Finally, two debris flow hazard maps were prepared. The first simply delimits five hazard zones, while the second incorporates information about debris flow spreading direction probabilities, showing areas more likely to be affected by future debris flows. Limitations of the modelling arise mainly from the models applied and the analysis scale, which neglect local controlling factors of debris flow hazard. The presented approach to debris flow hazard analysis, associating automatic detection of the source areas with a simple assessment of debris flow spreading, provided results suitable for subsequent hazard and risk studies. However, more testing is needed for the validation and transferability of the parameters and results to other study areas.
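
The spreading step in models such as Flow-R is based on multiple-flow-direction algorithms; a didactic approximation is sketched below, in which probability is split among lower neighbours in proportion to elevation drop. The real model also applies persistence weights and an energy limitation, which are omitted here, and the DEM and source cells are toy inputs.

```python
# Toy multiple-flow-direction spreading on a DEM grid (didactic sketch only).
import numpy as np

def mfd_spread(dem, sources, n_steps=50):
    """Propagate relative flow probability downslope from source cells."""
    prob = np.zeros_like(dem, dtype=float)
    for (i, j) in sources:
        prob[i, j] = 1.0
    rows, cols = dem.shape
    for _ in range(n_steps):
        new = np.zeros_like(prob)
        for i in range(rows):
            for j in range(cols):
                if prob[i, j] == 0:
                    continue
                # collect lower neighbours and their elevation drops
                drops = {}
                for di in (-1, 0, 1):
                    for dj in (-1, 0, 1):
                        ni, nj = i + di, j + dj
                        if (di, dj) != (0, 0) and 0 <= ni < rows and 0 <= nj < cols:
                            drop = dem[i, j] - dem[ni, nj]
                            if drop > 0:
                                drops[(ni, nj)] = drop
                total = sum(drops.values())
                for (ni, nj), drop in drops.items():
                    new[ni, nj] += prob[i, j] * drop / total
        prob = np.maximum(prob, new)
    return prob

dem = np.add.outer(np.arange(20), np.arange(20))[::-1].astype(float)  # tilted plane
runout = mfd_spread(dem, sources=[(0, 10)])
```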

Relevance: 80.00%

Abstract:

The Wechsler Intelligence Scale for Children, fourth edition (WISC-IV), recognizes a four-factor scoring structure in addition to the Full Scale IQ (FSIQ) score: the Verbal Comprehension (VCI), Perceptual Reasoning (PRI), Working Memory (WMI), and Processing Speed (PSI) indices. However, several authors have suggested that models based on Cattell-Horn-Carroll (CHC) theory with five or six factors fit the data better than the current four-factor solution. By comparing the current four-factor structure to CHC-based models, this research aimed to investigate the factorial structure and the constructs underlying the WISC-IV subtest scores in French-speaking Swiss children (N = 249). To this end, confirmatory factor analyses (CFAs) were conducted. Results showed that a CHC-based model with five factors fitted the French-Swiss data better than the current WISC-IV scoring structure. Altogether, these results support the appropriateness of the CHC model for French-speaking children.
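
Comparisons between CFA models of this kind typically rest on chi-square difference (likelihood-ratio) tests alongside fit indices. Below is a minimal sketch of that step only, assuming the models are nested; the chi-square and degrees-of-freedom values are made up and do not come from the study.

```python
# Likelihood-ratio (chi-square difference) test for nested CFA models.
from scipy.stats import chi2

def chi_square_difference(chi2_restricted, df_restricted, chi2_general, df_general):
    """Return delta chi2, delta df, and the p-value for the nested-model test."""
    delta_chi2 = chi2_restricted - chi2_general
    delta_df = df_restricted - df_general
    return delta_chi2, delta_df, chi2.sf(delta_chi2, delta_df)

# Hypothetical fit values: the less constrained five-factor model fits better.
d_chi2, d_df, p = chi_square_difference(chi2_restricted=240.0, df_restricted=86,
                                        chi2_general=190.0, df_general=80)
print(f"delta chi2 = {d_chi2:.1f}, delta df = {d_df}, p = {p:.4f}")
```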

Relevance: 80.00%

Abstract:

Geographical Information Systems (GIS) facilitate access to epidemiological data through visualization and may be consulted for the development of mathematical models and analysis by spatial statistics. Variables such as land cover, land use, elevation, surface temperature and rainfall derived from Earth-observing satellites complement GIS, as this information allows the analysis of disease distribution based on environmental characteristics. The strength of this approach stems from the specific environmental requirements of those causative infectious agents that depend on intermediate hosts for their transmission. The distribution of these diseases is restricted both by the environmental requirements of their intermediate hosts/vectors and by the ambient temperature inside these hosts, which effectively governs the speed of maturation of the parasite. This paper discusses current capabilities with regard to satellite data collection in terms of the resolution (spatial, temporal and spectral) of the sensor instruments on board, drawing attention to the utility of computer-based models of the Earth for epidemiological research. Virtual globes, available from Google and other commercial firms, are superior to conventional maps as they not only show geographical and man-made features but also allow instant import of datasets of specific interest, e.g. environmental parameters, demographic information etc., from the Internet.

Relevance: 80.00%

Abstract:

Many animals that live in groups maintain competitive relationships yet avoid continual fighting by forming dominance hierarchies. We compare predictions of stochastic, individual-based models with empirical experimental evidence using shore crabs to test competing hypotheses regarding hierarchy development. The models test (1) what information individuals use when deciding to fight or retreat, (2) how past experience affects current resource-holding potential, and (3) how individuals deal with changes to the social environment. First, we conclude that crabs assess only their own state and not their opponent's when deciding to fight or retreat. Second, willingness to enter, and performance in, aggressive contests are influenced by previous contest outcomes. Winning increases the likelihood of both fighting and winning future interactions, while losing has the opposite effect. Third, when groups with established dominance hierarchies dissolve and new groups form, individuals reassess their ranks, showing no memory of previous rank or group affiliation. With every change in group composition, individuals fight for their new ranks. This iterative process carries over as groups dissolve and form, which has important implications for the relationship between ability and hierarchy rank. We conclude that dominance hierarchies emerge through an interaction of individual and social factors, and discuss these findings in terms of an underlying mechanism. Overall, our results are consistent with crabs using a cumulative assessment strategy iterated across changes in group composition, in which aggression is constrained by an absolute threshold in energy spent and damage received while fighting.
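
The winner-loser dynamic described above is easy to illustrate. The sketch below is a hypothetical simulation in the spirit of such individual-based models, not the authors' model: each crab tracks only its own state, fighting is gated by an absolute threshold on that state, and winning raises (losing lowers) both the propensity to fight and the chance of winning again. All coefficients are illustrative assumptions.

```python
# Minimal winner-loser-effect hierarchy simulation (illustrative only).
import random

def contest(state_a, state_b):
    """Return updated states after a fight; higher state is more likely to win."""
    p_a_wins = state_a / (state_a + state_b)
    if random.random() < p_a_wins:
        return state_a + 1.0, max(state_b - 1.0, 0.1)   # winner up, loser down
    return max(state_a - 1.0, 0.1), state_b + 1.0

def form_hierarchy(n=6, rounds=100):
    states = [10.0] * n                  # identical starting abilities
    for _ in range(rounds):
        a, b = random.sample(range(n), 2)
        # an individual fights only if its own state exceeds a fixed threshold
        if states[a] > 5.0 and states[b] > 5.0:
            states[a], states[b] = contest(states[a], states[b])
    return sorted(range(n), key=lambda i: -states[i])   # rank order

print(form_hierarchy())
```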

Relevance: 80.00%

Abstract:

BACKGROUND & AIMS Hy's Law, which states that hepatocellular drug-induced liver injury (DILI) with jaundice indicates a serious reaction, is used widely to determine risk for acute liver failure (ALF). We aimed to optimize the definition of Hy's Law and to develop a model for predicting ALF in patients with DILI. METHODS We collected data from 771 patients with DILI (805 episodes) from the Spanish DILI registry, from April 1994 through August 2012. We analyzed data collected at DILI recognition and at the time of peak levels of alanine aminotransferase (ALT) and total bilirubin (TBL). RESULTS Of the 771 patients with DILI, 32 developed ALF. Hepatocellular injury, female sex, high levels of TBL, and a high ratio of aspartate aminotransferase (AST):ALT were independent risk factors for ALF. We compared 3 ways to use Hy's Law to predict which patients would develop ALF; all included TBL greater than 2-fold the upper limit of normal (×ULN) and either an ALT level greater than 3 ×ULN, a ratio (R) value (ALT in ×ULN divided by alkaline phosphatase in ×ULN) of 5 or greater, or a new ratio (nR) value (ALT or AST, whichever produces the higher ×ULN value, divided by alkaline phosphatase in ×ULN) of 5 or greater. At recognition of DILI, the R- and nR-based models identified patients who developed ALF with 67% and 63% specificity, respectively, whereas use of the ALT level alone identified them with 44% specificity. However, the ALT level and the nR model each identified patients who developed ALF with 90% sensitivity, whereas the R criteria identified them with 83% sensitivity. An equal number of patients who did and did not develop ALF had alkaline phosphatase levels greater than 2 ×ULN. An algorithm based on an AST level greater than 17.3 ×ULN, TBL greater than 6.6 ×ULN, and AST:ALT greater than 1.5 identified patients who developed ALF with 82% specificity and 80% sensitivity. CONCLUSIONS When applied at DILI recognition, the nR criteria for Hy's Law provide the best balance of sensitivity and specificity, whereas our new composite algorithm provides additional specificity in predicting the ultimate development of ALF.
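
The decision rules quoted in the abstract translate directly into code. The thresholds below (TBL > 2 ×ULN with nR >= 5, and the composite rule AST > 17.3 ×ULN, TBL > 6.6 ×ULN, AST:ALT > 1.5) are taken from the abstract itself; the function names and example values are illustrative.

```python
# Thresholds from the abstract, with laboratory values expressed as
# multiples of their upper limit of normal (xULN).
def nR(alt_xuln, ast_xuln, alp_xuln):
    """New ratio (nR): worst of ALT/AST in xULN divided by ALP in xULN."""
    return max(alt_xuln, ast_xuln) / alp_xuln

def nr_hys_law(alt_xuln, ast_xuln, alp_xuln, tbl_xuln):
    """nR-based Hy's Law: TBL > 2 xULN and nR >= 5."""
    return tbl_xuln > 2 and nR(alt_xuln, ast_xuln, alp_xuln) >= 5

def composite_algorithm(ast_xuln, tbl_xuln, ast_alt_ratio):
    """Composite rule: AST > 17.3 xULN, TBL > 6.6 xULN, AST:ALT > 1.5."""
    return ast_xuln > 17.3 and tbl_xuln > 6.6 and ast_alt_ratio > 1.5

# Example patient at DILI recognition (hypothetical values):
print(nr_hys_law(alt_xuln=12, ast_xuln=20, alp_xuln=1.5, tbl_xuln=3))  # True
```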

Relevance: 80.00%

Abstract:

Given the rate of projected environmental change for the 21st century, urgent adaptation and mitigation measures are required to slow down the ongoing erosion of biodiversity. Even though increasing evidence shows that recent human-induced environmental changes have already triggered species' range shifts, changes in phenology and species' extinctions, accurate projections of species' responses to future environmental changes are more difficult to ascertain. This is problematic, since there is a growing awareness of the need to adopt proactive conservation planning measures using forecasts of species' responses to future environmental changes. There is a substantial body of literature describing and assessing the impacts of various scenarios of climate and land-use change on species' distributions. Model predictions include a wide range of assumptions and limitations that are widely acknowledged but compromise their use for developing reliable adaptation and mitigation strategies for biodiversity. Indeed, amongst the most used models, few, if any, explicitly deal with migration processes, the dynamics of populations at the "trailing edge" of shifting distributions, species' interactions, or the interaction between the effects of climate and land use. In this review, we propose two main avenues for advancing the understanding and prediction of the different processes occurring at the leading and trailing edges of species' distributions in response to any global change phenomenon. Deliberately focusing on plant species, we first explore the different ways to incorporate species' migration in existing modelling approaches, given data and knowledge limitations and the dual effects of climate and land-use factors. Secondly, we explore the mechanisms and processes happening at the trailing edge of a shifting species' distribution and how to implement them in a modelling approach. We conclude this review with clear guidelines on how such modelling improvements will benefit conservation strategies in a changing world. (c) 2007 Rubel Foundation, ETH Zurich. Published by Elsevier GmbH. All rights reserved.

Relevance: 80.00%

Abstract:

A major challenge in studying social behaviour stems from the need to disentangle the behaviour of each individual from the resulting collective. One way to overcome this problem is to construct a model of the behaviour of an individual and observe whether combining many such individuals leads to the predicted outcome. This can be achieved by using robots. In this review we discuss the strengths and weaknesses of such an approach for studies of social behaviour. We find that robots, whether studied in groups of simulated or physical robots or used to infiltrate and manipulate groups of living organisms, have important advantages over conventional individual-based models and have contributed greatly to the study of social behaviour. In particular, robots have increased our understanding of self-organization and of the evolution of cooperative behaviour and communication. However, the resulting findings have not had the desired impact on the biological community. We suggest reasons why this may be the case and how the benefits of using robots can be maximized in future research on social behaviour.

Relevance: 80.00%

Abstract:

The subject "Value and prices in Russian economic thought (1890-1920)" should evoke several names and debates in the reader's mind. For a long time, Western scholars have been aware that the Russian economists Tugan-Baranovsky and Bortkiewicz were active participants in the Marxian transformation problem debate, that the mathematical models of Dmitriev prefigured later neo-Ricardian models, and that many Russian economists either supported the Marxian labour theory of value or were revisionists. Moreover, these ideas prepared the ground for Soviet planning. Russian scholars additionally knew that this period saw the introduction of marginalism in Russia and that, during this period, economists were actively thinking about the relation of ethics to economic theory. All these issues are well covered in the existing literature. But there is a big gap that this dissertation intends to fill. The existing literature handles these pieces separately, although they are part of a single, more general, history. All these issues (the labour theory of value, marginalism, the Marxian transformation problem, planning, ethics, mathematical economics) were part of what this dissertation calls "the Russian synthesis". The Russian synthesis (in the singular) designates all the attempts at synthesis between classical political economy and marginalism, between the labour theory of value and marginal utility, and between value and prices that occurred in Russian economic thought between 1890 and 1920, and that embrace the whole set of issues evoked above. This dissertation has the ambition of being the first comprehensive history of that Russian synthesis; in this respect, the contribution is unique. It has always surprised the author of the present dissertation that such a book has not yet been written. Several good reasons, both the scarce availability of sources and ideological restrictions, may account for a reasonable delay of several decades. But it is now urgent to remedy the situation before the protagonists of the Russian synthesis are definitively classified under the wrong labels in the pantheon of economic thought. To accomplish this task, it has seldom been sufficient to gather together the various existing studies on aspects of this story; it has been necessary to return to the primary sources in the Russian language. The most important part of the primary literature has never been translated, and only some of it has been republished in Russian in recent years. Therefore, most translations from the Russian have been made by the author of the present dissertation. The secondary literature has been surveyed in the languages that are familiar (Russian, English and French) or almost familiar (German) to the present author, and which are hopefully the most pertinent to the present investigation. Besides, and in order to increase acquaintance with the texts, which was the objective of all this, some archival sources were used. The analysis consists of careful chronological studies of the authors' writings and their evolution in their historical and intellectual context. As a consequence, the dissertation brings new authors to the foreground, Shaposhnikov and Yurovsky, who were traditionally confined to the substitutes' bench because they only superficially touched the domains quoted above; in the Russian synthesis, however, they played an important part in the story. As a side effect, some authors that used to play in the foreground, Dmitriev and Bortkiewicz, are relegated to the background, but are not forgotten. Besides, the dissertation refreshes the views on authors already known, such as Ziber and, especially, Tugan-Baranovsky. The ultimate objective of this dissertation is to change the opinion that one could have of "value and prices in Russian economic thought" by setting the Russian synthesis at the centre of the debates.

Relevance: 80.00%

Abstract:

Customer choice behavior, such as 'buy-up' and 'buy-down', is an important phenomenon in a wide range of industries. Yet there are few models or methodologies available to exploit this phenomenon within yield management systems. We make some progress on filling this void. Specifically, we develop a model of yield management in which the buyers' behavior is modeled explicitly using a multinomial logit model of demand. The control problem is to decide which subset of fare classes to offer at each point in time. The set of open fare classes then affects the purchase probabilities for each class. We formulate a dynamic program to determine the optimal control policy and show that it reduces to a dynamic nested allocation policy. Thus, the optimal choice-based policy can easily be implemented in reservation systems that use nested allocation controls. We also develop an estimation procedure for our model based on the expectation-maximization (EM) method that jointly estimates arrival rates and choice model parameters when no-purchase outcomes are unobservable. Numerical results show that this combined optimization-estimation approach may significantly improve revenue performance relative to traditional leg-based models that do not account for choice behavior.
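
The demand side of the model can be sketched compactly: given the set of open fare classes, an arriving customer buys each class (or nothing) with multinomial-logit probabilities. The utilities below are invented placeholders, and the class letters are conventional booking-class labels used purely for illustration.

```python
# Multinomial-logit purchase probabilities over the set of open fare classes.
import math

def choice_probabilities(open_classes, utilities, u_no_purchase=0.0):
    """MNL purchase probability for each open fare class, plus no-purchase."""
    weights = {c: math.exp(utilities[c]) for c in open_classes}
    denom = math.exp(u_no_purchase) + sum(weights.values())
    probs = {c: w / denom for c, w in weights.items()}
    probs["no_purchase"] = math.exp(u_no_purchase) / denom
    return probs

utilities = {"Y": 1.0, "M": 1.5, "Q": 2.2}          # cheaper classes more attractive
print(choice_probabilities({"Y", "M"}, utilities))  # closing Q shifts demand
```

Closing a class shifts its probability mass partly to the remaining open classes (buy-up) and partly to the no-purchase option, which is exactly the substitution behavior a choice-based control policy exploits.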

Relevance: 80.00%

Abstract:

We use a two-person 3-stage game to investigate whether people choose to punish or reward another player by sacrificing money to increase or decrease the other person's payoff. One player sends a message indicating an intended play, which is either favorable or unfavorable to the other player in the game. After the message, the sender and the receiver play a simultaneous 2x2 game. A deceptive message may be made in an effort to induce the receiver to make a play favorable to the sender. Our focus is on whether receivers' rates of monetary sacrifice depend on the process and the perceived sender's intention, as is suggested by the literature on deception and procedural satisfaction. Models such as Rabin (1993), Sen (1997), and Charness and Rabin (1999) also permit rates of sacrifice to be sensitive to the sender's perceived intention, while outcome-based models such as Fehr and Schmidt (1999) and Bolton and Ockenfels (1997) predict otherwise. We find that deception substantially increases the punishment rate as a response to an action that is unfavorable to the receiver. We also find that a small but significant percentage of subjects choose to reward a favorable action choice made by the sender.

Relevance: 80.00%

Abstract:

Many studies have forecast the possible impact of climate change on plant distribution using models based on ecological niche theory. In their basic implementation, niche-based models do not constrain predictions by dispersal limitations. Hence, most niche-based modelling studies published so far have assumed dispersal to be either unlimited or null. However, depending on the rate of climatic change, landscape fragmentation and the dispersal capabilities of individual species, these assumptions are likely to prove inaccurate, leading to under- or overestimation of future species distributions and yielding large uncertainty between these two extremes. As a result, the concepts of "potentially suitable" and "potentially colonisable" habitat are expected to differ significantly. To quantify the extent to which these two concepts can differ, we developed MIGCLIM, a model simulating plant dispersal under climate change and landscape fragmentation scenarios. MIGCLIM implements various parameters, such as dispersal distance, increase in reproductive potential over time, barriers to dispersal and long-distance dispersal. Several simulations were run for two virtual species in a study area of the western Swiss Alps, varying dispersal distance and other parameters. Each simulation covered the hundred-year period 2001-2100, and three different IPCC-based temperature warming scenarios were considered. Our results indicate that: (i) using realistic parameter values, the future potential distributions generated using MIGCLIM can differ significantly (up to more than a 95% decrease in colonized surface) from those that ignore dispersal; (ii) this divergence increases both with increasing climate warming and over longer time periods; (iii) the uncertainty associated with the warming scenario can be nearly as large as that related to dispersal parameters; and (iv) accounting for dispersal, even roughly, can substantially reduce uncertainty in projections.
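
The colonisation constraint that this kind of model adds to a plain niche projection can be reduced to a cellular-automaton step: a suitable cell becomes occupied only if an occupied cell lies within dispersal distance. The grid, the moving suitability front and the one-cell dispersal distance below are toy assumptions, not MIGCLIM's actual parameterization.

```python
# Toy dispersal-limited colonisation: suitability AND reachability required.
import numpy as np

def spread_step(occupied, suitable, dispersal_cells=1):
    """One time step: colonise suitable cells within reach of occupied ones."""
    rows, cols = occupied.shape
    new = occupied.copy()
    for i in range(rows):
        for j in range(cols):
            if suitable[i, j] and not occupied[i, j]:
                i0, i1 = max(0, i - dispersal_cells), min(rows, i + dispersal_cells + 1)
                j0, j1 = max(0, j - dispersal_cells), min(cols, j + dispersal_cells + 1)
                if occupied[i0:i1, j0:j1].any():
                    new[i, j] = True
    return new

occupied = np.zeros((50, 50), dtype=bool)
occupied[25, 25] = True                       # initial population
for year in range(2001, 2101):
    # suitability front shifts as the climate warms (toy scenario)
    suitable = np.add.outer(np.arange(50), np.zeros(50)) < 25 + (year - 2000) * 0.2
    occupied = spread_step(occupied, suitable, dispersal_cells=1)
print("colonised cells:", occupied.sum())
```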

Relevance: 80.00%

Abstract:

Niche-based models calibrated in the native range by relating species observations to climatic variables are commonly used to predict the potential spatial extent of species' invasion. This climate matching approach relies on the assumption that invasive species conserve their climatic niche in the invaded ranges. We test this assumption by analysing the climatic niche spaces of Spotted Knapweed in western North America and Europe. We show with robust cross-continental data that a shift of the observed climatic niche occurred between native and non-native ranges, providing the first empirical evidence that an invasive species can occupy climatically distinct niche spaces following its introduction into a new area. The models fail to predict the current invaded distribution, but correctly predict areas of introduction. Climate matching is thus a useful approach to identify areas at risk of introduction and establishment of newly or not-yet-introduced neophytes, but may not predict the full extent of invasions.

Relevance: 80.00%

Abstract:

Empirical studies indicate that the transition to parenthood is influenced by an individual's peer group. To study the mechanisms creating interdependencies across individuals' transitions to parenthood and their timing, we apply an agent-based simulation model. We build a one-sex model and provide agents with three characteristics: age, intended education and parity. Agents endogenously form their network based on social closeness. Network members then may influence the agents' transition to higher parity levels. Our numerical simulations indicate that accounting for social interactions can explain the shift of first-birth probabilities in Austria over the period 1984 to 2004. Moreover, we apply our model to forecast age-specific fertility rates up to 2016.
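
The peer-influence mechanism can be sketched as follows. Baseline rates, the influence weight, the age-closeness network rule and all other specifics are illustrative assumptions, not the paper's calibrated model.

```python
# Minimal sketch: an agent's transition probability to parenthood rises with
# the share of her network peers who already have children.
import random

class Agent:
    def __init__(self, age):
        self.age = age
        self.parity = 0
        self.peers = []                       # filled in by network formation

def transition_probability(agent, base_rate=0.05, influence=0.15):
    """Baseline rate plus a premium from peers who already have children."""
    if not agent.peers:
        return base_rate
    peer_share = sum(p.parity > 0 for p in agent.peers) / len(agent.peers)
    return base_rate + influence * peer_share

agents = [Agent(age=random.randint(18, 40)) for _ in range(500)]
for a in agents:                              # network by social closeness in age
    a.peers = [b for b in agents if b is not a and abs(a.age - b.age) <= 2][:10]

for year in range(20):                        # yearly transitions
    for a in agents:
        if a.parity == 0 and random.random() < transition_probability(a):
            a.parity = 1
print("share with a first birth:", sum(a.parity > 0 for a in agents) / len(agents))
```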