869 results for Agent-based model
Abstract:
Health services are highly complex systems of great importance, especially at certain critical moments, all over the world. Emergency departments are among the most dynamic and changeable areas of any health service, and at the same time among the most vulnerable to such changes. Improving these departments can be considered one of the great challenges facing any hospital administrator, and simulation provides a way to examine such a complex system without endangering the patients being treated. The aim of this work has been to model an emergency department and to develop a simulator that implements this model, in order to explore the behaviour and characteristics of such an emergency service. The simulator makes it possible to visualise the behaviour of the model under different parameters and will serve as the core of a decision-support system for use in emergency departments. The model has been developed with agent-based modelling (ABM) techniques, which allow the creation of models functionally closer to reality than queueing or system-dynamics models, since they make it possible to capture the individuality implied by modelling at the level of people. The agents of the presented model, described internally as state machines, represent all the staff of the emergency department and the patients who use this service. An analysis of the model through its implementation in the simulator shows that the system behaves in a manner similar to a real emergency department.
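For orientation, the sketch below illustrates the kind of state-machine agent the abstract describes: a patient advancing through emergency-department stages as staff become available. The states, transition rules and parameters are illustrative assumptions, not the ones used in the thesis.

```python
# Minimal sketch of a patient agent modeled as a state machine (illustrative only;
# state names, transition rules and parameters are assumptions, not the thesis model).
import random

class PatientAgent:
    def __init__(self, acuity):
        self.state = "arrival"
        self.acuity = acuity          # 1 (critical) .. 5 (minor), hypothetical scale

    def step(self, free_nurses, free_doctors):
        # Each tick the agent tries to advance; transitions depend on staff availability.
        if self.state == "arrival":
            self.state = "triage"
        elif self.state == "triage" and free_nurses > 0:
            self.state = "waiting"
        elif self.state == "waiting" and free_doctors > 0:
            # Higher-acuity patients are more likely to be called first.
            if random.random() < 1.0 / self.acuity:
                self.state = "treatment"
        elif self.state == "treatment":
            if random.random() < 0.3:  # assumed per-tick completion probability
                self.state = "discharged"
        return self.state
```

Stepping many such patient agents together with staff agents is what allows a simulator of this kind to reproduce queueing and saturation behaviour at the department level.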
Abstract:
A parts-based model is a parametrization of an object class using a collection of landmarks following the object structure. The matching of parts-based models is one of the problems where pairwise Conditional Random Fields have been successfully applied. The main reason for their effectiveness is tractable inference and learning due to the simplicity of the involved graphs, usually trees. However, these models do not consider possible patterns of statistics among sets of landmarks, and thus they suffer from using overly myopic information. To overcome this limitation, we propose a novel structure based on hierarchical Conditional Random Fields, which we explain in the first part of this thesis. We build a hierarchy of combinations of landmarks, where matching is performed taking into account the whole hierarchy. To preserve tractable inference we effectively sample the label set. We test our method on facial feature selection and human pose estimation on two challenging datasets: Buffy and MultiPIE. In the second part of this thesis, we present a novel approach to multiple kernel combination that relies on stacked classification. This method can be used to evaluate the landmarks of the parts-based model approach. Our method is based on combining the responses of a set of independent classifiers, one for each individual kernel. Unlike earlier approaches that linearly combine kernel responses, our approach uses them as inputs to another set of classifiers. We show that we outperform state-of-the-art methods on most of the standard benchmark datasets.
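The stacked kernel combination described in the second part can be pictured with a minimal sketch: one classifier per kernel, whose responses become inputs to a second-level classifier. The kernels, classifiers and data below are placeholders, not those used in the thesis.

```python
# Sketch of combining multiple kernels by stacking (illustrative; the actual kernels,
# classifiers and training protocol in the thesis may differ).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_predict
from sklearn.svm import SVC
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=300, n_features=20, random_state=0)

# One independent classifier per kernel; their responses become meta-features.
base_kernels = ["linear", "rbf", "poly"]
meta_features = np.column_stack([
    cross_val_predict(SVC(kernel=k), X, y, cv=5, method="decision_function")
    for k in base_kernels
])

# A second-level classifier learns how to combine the responses, instead of using
# a fixed linear combination of the kernels themselves.
stacker = LogisticRegression().fit(meta_features, y)
print("stacked training accuracy:", stacker.score(meta_features, y))
```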
Abstract:
Summary: Division of labour is one of the most fascinating aspects of social insects. The efficient allocation of individuals to a multitude of different tasks requires a dynamic adjustment in response to the demands of a changing environment. A considerable number of theoretical models have focussed on identifying the mechanisms allowing colonies to perform efficient task allocation. The large majority of these models are built on the observation that individuals in a colony vary in their propensity (response threshold) to perform different tasks. Since individuals with a low threshold for a given task stimulus are more likely to perform that task than individuals with a high threshold, within-colony variation in individual thresholds results in colony division of labour. These theoretical models suggest that variation in individual thresholds is affected by the within-colony genetic diversity. However, the models have not considered the genetic architecture underlying the individual response thresholds. This is important because a better understanding of division of labour requires determining how genotypic variation relates to differences in within-colony response threshold distributions. In this thesis, we investigated the combined influence on task allocation efficiency of both the within-colony genetic variability (stemming from variation in the number of matings by queens) and the number of genes underlying the response thresholds. We used an agent-based simulator to model a situation where workers in a colony had to perform either a regulatory task (where the amount of a given food item in the colony had to be maintained within predefined bounds) or a foraging task (where the quantity of a second type of food item collected had to be as high as possible). The performance of colonies was a function of workers being able to perform both tasks efficiently. To study the effect of within-colony genetic diversity, we compared the performance of colonies with queens mated with varying numbers of males. The influence of genetic architecture, in turn, was investigated by varying the number of loci underlying the response thresholds of the foraging and regulatory tasks. Artificial evolution was used to evolve the allelic values underlying the task thresholds. The results revealed that multiple matings always translated into higher colony performance, whatever the number of loci encoding the thresholds of the regulatory and foraging tasks. However, the beneficial effect of additional matings was particularly important when the genetic architecture of queens comprised one or a few genes for the foraging task's threshold. By contrast, a higher number of genes encoding the foraging task threshold reduced colony performance, with the detrimental effect being stronger when queens had mated with several males. Finally, the number of genes determining the threshold for the regulatory task had only a minor, incremental effect on colony performance. Overall, our numerical experiments indicate the importance of considering the effects of queen mating frequency, the genetic architecture underlying task thresholds and the type of task performed when investigating the factors regulating the efficiency of division of labour in social insects. In this thesis we also investigate the task allocation efficiency of response threshold models and compare them with neural networks.
While response threshold models are widely used amongst theoretical biologists interested in division of labour in social insects, our simulations reveal that they perform poorly compared to a neural network model. A major shortcoming of response thresholds is that they fail at one of the most crucial requirements of division of labour: the ability of individuals in a colony to efficiently switch between tasks under varying environmental conditions. Moreover, an intrinsic property of the threshold models is that they lead to a large proportion of idle workers. Our results highlight these limitations of the response threshold models and provide an adequate substitute. Altogether, the experiments presented in this thesis provide novel contributions to the understanding of how division of labour in social insects is influenced by queen mating frequency and the genetic architecture underlying worker task thresholds. Moreover, the thesis also provides a novel model of the mechanisms underlying worker task allocation that may be more generally applicable than the widely used response threshold models.
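The fixed response-threshold rule that such models build on is commonly written as P(engage) = s^n / (s^n + theta^n) for stimulus s and individual threshold theta. A minimal sketch follows; the parameter values are illustrative and the thesis' simulator and genetic encoding of thresholds are not reproduced here.

```python
# Minimal sketch of the classic fixed response-threshold rule; parameters are illustrative.
def response_probability(stimulus, threshold, n=2):
    # Probability that a worker engages in the task at the current stimulus level.
    return stimulus**n / (stimulus**n + threshold**n)

# Two workers with different thresholds facing the same stimulus: the low-threshold
# worker is far more likely to take up the task, which is the basis of division of labour.
for theta in (0.2, 0.8):
    p = response_probability(stimulus=0.5, threshold=theta)
    print(f"threshold={theta}: engage with probability {p:.2f}")
```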
Abstract:
We examined drivers of article citations using 776 articles that were published from 1990 to 2012 in a broad-based and high-impact social sciences journal, The Leadership Quarterly. These articles had 1,191 unique authors, who had published a total of 16,817 articles and received 284,777 citations (at the time of their most recent article in our dataset). Our models explained 66.6% of the variance in citations and showed that quantitative, review, method, and theory articles were significantly more cited than were qualitative articles or agent-based simulations. For quantitative articles, which constituted the majority of the sample, our model explained 80.3% of the variance in citations; some methods (e.g., use of SEM) and designs (e.g., meta-analysis), as well as theoretical approaches (e.g., use of transformational, charismatic, or visionary type-leadership theories), predicted higher article citations. Regarding the statistical conclusion validity of quantitative articles, articles having endogeneity threats received significantly fewer citations than did those using a more robust design or an estimation procedure that ensured correct causal estimation. We make several general recommendations on how to improve research practice and article citations.
Abstract:
PURPOSE: In the radiopharmaceutical therapy approach to the fight against cancer, in particular when it comes to translating laboratory results to the clinical setting, modeling has served as an invaluable tool for guidance and for understanding the processes operating at the cellular level and how these relate to macroscopic observables. Tumor control probability (TCP) is the dosimetric end point quantity of choice which relates to experimental and clinical data: it requires knowledge of individual cellular absorbed doses since it depends on the assessment of the treatment's ability to kill each and every cell. Macroscopic tumors, seen in both clinical and experimental studies, contain too many cells to be modeled individually in Monte Carlo simulation; yet, in particular for low ratios of decays to cells, a cell-based model that does not smooth away statistical considerations associated with low activity is a necessity. The authors present here an adaptation of the simple sphere-based model from which cellular level dosimetry for macroscopic tumors and their end point quantities, such as TCP, may be extrapolated more reliably. METHODS: Ten homogeneous spheres representing tumors of different sizes were constructed in GEANT4. The radionuclide 131I was randomly allowed to decay for each model size and for seven different ratios of the number of decays to the number of cells, N_r: 1000, 500, 200, 100, 50, 20, and 10 decays per cell. The deposited energy was collected in radial bins and divided by the bin mass to obtain the average bin absorbed dose. To simulate a cellular model, the number of cells present in each bin was calculated and an absorbed dose attributed to each cell equal to the bin average absorbed dose with a randomly determined adjustment based on a Gaussian probability distribution with a width equal to the statistical uncertainty consistent with the ratio of decays to cells, i.e., equal to N_r^(-1/2). From dose-volume histograms the surviving fraction of cells, equivalent uniform dose (EUD), and TCP for the different scenarios were calculated. Comparably sized spherical models containing individual spherical cells (15 μm diameter) in hexagonal lattices were constructed, and Monte Carlo simulations were executed for all of the same scenarios. The dosimetric quantities were calculated and compared to the adjusted simple sphere model results. The model was then applied to the Bortezomib-induced enzyme-targeted radiotherapy (BETR) strategy of targeting Epstein-Barr virus (EBV)-expressing cancers. RESULTS: The TCP values were comparable to within 2% between the adjusted simple sphere and full cellular models. Additionally, models were generated for a nonuniform distribution of activity, and the results showed similar agreement between the adjusted spherical and cellular models. The TCP values for the macroscopic tumor models were consistent with the experimental observations for BETR-treated 1 g EBV-expressing lymphoma tumors in mice. CONCLUSIONS: The adjusted spherical model presented here provides more accurate TCP values than simple spheres, on par with full cellular Monte Carlo simulations, while maintaining the simplicity of the simple sphere model. This model provides a basis for complementing and understanding laboratory and clinical results pertaining to radiopharmaceutical therapy.
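A schematic of the adjusted-sphere calculation described in METHODS is sketched below, with assumed linear-quadratic radiosensitivity parameters and made-up bin data; the paper's actual survival model and values are not reproduced here.

```python
# Sketch of the per-cell dose adjustment and TCP calculation described above
# (alpha/beta radiosensitivity values and bin data are illustrative assumptions).
import numpy as np

rng = np.random.default_rng(0)
alpha, beta = 0.3, 0.03          # Gy^-1, Gy^-2 (assumed linear-quadratic parameters)
decays_per_cell = 50             # N_r

# Hypothetical radial bins: (average absorbed dose in Gy, number of cells in bin)
bins = [(12.0, 4000), (10.5, 9000), (8.0, 15000)]

tcp_log = 0.0
for bin_dose, n_cells in bins:
    # Per-cell dose = bin average with a Gaussian adjustment of relative width N_r^(-1/2)
    cell_doses = bin_dose * (1.0 + rng.normal(0.0, decays_per_cell**-0.5, n_cells))
    survival = np.exp(-alpha * cell_doses - beta * cell_doses**2)
    # TCP = probability that every cell is killed (independence assumed)
    tcp_log += np.sum(np.log1p(-survival))
print("TCP ≈", np.exp(tcp_log))
```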
Abstract:
We construct a utility-based model of fluctuations, with nominal rigidities and unemployment, and draw its implications for the unemployment-inflation trade-off and for the conduct of monetary policy. We proceed in two steps. We first leave nominal rigidities aside. We show that, under a standard utility specification, productivity shocks have no effect on unemployment in the constrained efficient allocation. We then focus on the implications of alternative real wage setting mechanisms for fluctuations in unemployment. We show the role of labor market frictions and real wage rigidities in determining the effects of productivity shocks on unemployment. We then introduce nominal rigidities in the form of staggered price setting by firms. We derive the relation between inflation and unemployment and discuss how it is influenced by the presence of labor market frictions and real wage rigidities. We show the nature of the tradeoff between inflation and unemployment stabilization, and its dependence on labor market characteristics. We draw the implications for optimal monetary policy.
Abstract:
This paper presents and estimates a dynamic choice model in the attribute space considering rational consumers. In light of the evidence of several state-dependence patterns, the standard attribute-based model is extended by considering a general utility function where pure inertia and pure variety-seeking behaviors can be explained in the model as particular linear cases. The dynamics of the model are fully characterized by standard dynamic programming techniques. The model presents a stationary consumption pattern that can be inertial, where the consumer only buys one product, or a variety-seeking one, where the consumer shifts among varied products. We run some simulations to analyze the consumption paths out of the steady state. Under the hybrid utility assumption, the consumer behaves inertially among the unfamiliar brands for several periods, eventually switching to a variety-seeking behavior when the stationary levels are approached. An empirical analysis is run using scanner databases for three different product categories: fabric softener, saltine cracker, and catsup. Non-linear specifications provide the best fit of the data, as hybrid functional forms are found in all the product categories for most attributes and segments. These results reveal the statistical superiority of the non-linear structure and confirm the gradual trend to seek variety as the level of familiarity with the purchased items increases.
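As a toy illustration of the mechanism described above, a hybrid (inverted-U) utility of familiarity can generate an inertial spell followed by variety seeking; the functional form and parameters below are assumptions, not the estimated specification.

```python
# Toy illustration of how a hybrid (non-linear) utility of familiarity can produce
# inertia first and variety seeking later; functional form and parameters are assumed.
brands = ["A", "B", "C"]
familiarity = {b: 0.0 for b in brands}

def utility(f, a=1.0, b=0.15):
    return a * f - b * f * f      # inverted-U: rising (inertia) then falling (variety seeking)

history = []
for t in range(20):
    choice = max(brands, key=lambda br: utility(familiarity[br]))
    history.append(choice)
    for br in brands:             # chosen brand gains familiarity, others decay slowly
        familiarity[br] = familiarity[br] * 0.9 + (1.0 if br == choice else 0.0)
print("".join(history))
```

A run of this sketch prints a long streak of repeat purchases followed by switching among brands, which is the qualitative path the abstract describes.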
Abstract:
Empirical studies indicate that the transition to parenthood is influenced by an individual's peer group. To study the mechanisms creating interdependencies across individuals' transition to parenthood and its timing we apply an agent-based simulation model. We build a one-sex model and provide agents with three different characteristics regarding age, intended education and parity. Agents endogenously form their network based on social closeness. Network members then may influence the agents' transition to higher parity levels. Our numerical simulations indicate that accounting for social interactions can explain the shift of first-birth probabilities in Austria over the period 1984 to 2004. Moreover, we apply our model to forecast age-specific fertility rates up to 2016.
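A highly simplified sketch of the peer-influence mechanism described above follows; the network-formation rule, baseline hazard and influence strength are illustrative assumptions rather than the calibrated model.

```python
# Sketch of peer influence on the first-birth decision (all parameters are assumed).
import random

class Agent:
    def __init__(self, age, education):
        self.age, self.education, self.parity = age, education, 0
        self.peers = []

def first_birth_probability(agent, base=0.05, influence=0.10):
    if not agent.peers:
        return base
    share_parents = sum(p.parity > 0 for p in agent.peers) / len(agent.peers)
    return min(1.0, base + influence * share_parents)

agents = [Agent(age=random.randint(18, 40), education=random.choice([1, 2, 3]))
          for _ in range(200)]
for a in agents:  # crude "social closeness": similar age and same intended education
    a.peers = [b for b in agents
               if b is not a and abs(a.age - b.age) <= 2 and a.education == b.education]

for year in range(5):
    for a in agents:
        if a.parity == 0 and random.random() < first_birth_probability(a):
            a.parity = 1
print("agents with a first birth:", sum(a.parity > 0 for a in agents))
```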
Abstract:
The Soil Nitrogen Availability Predictor (SNAP) model predicts daily and annual rates of net N mineralization (NNM) based on daily weather measurements, daily predictions of soil water and soil temperature, and on temperature and moisture modifiers applied to a basal rate obtained during aerobic incubation. The model was originally based on in situ measurements of NNM in Australian soils under a temperate climate. The purpose of this study was to assess this model for use in tropical soils under eucalyptus plantations in São Paulo State, Brazil. NNM rates were measured at 11 sites (0-20 cm layer) over 21 months, based on field incubations carried out one month in every three. The basal rate was determined from in situ incubations during moist and warm periods (January to March). The annual NNM rates of 150-350 kg ha-1 yr-1 predicted by the SNAP model were reasonably accurate (R2 = 0.84). In other periods, at lower moisture and temperature, NNM rates were overestimated. Therefore, if used carefully, the model can provide adequate predictions of annual NNM and may be useful in practical applications. For NNM predictions over periods shorter than a year, or under suboptimal incubation conditions, the temperature and moisture modifiers need to be recalibrated for tropical conditions.
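The multiplicative structure implied by the abstract, a basal rate scaled by temperature and moisture modifiers, can be sketched as follows. The Q10 temperature response and the moisture function are assumptions for illustration, and they are precisely the components the study suggests recalibrating for tropical conditions.

```python
# Sketch of a basal rate scaled by temperature and moisture modifiers (forms assumed).
def daily_nnm(basal_rate, soil_temp_c, soil_moisture_frac,
              t_ref=25.0, q10=2.0, moisture_optimum=0.30):
    temp_modifier = q10 ** ((soil_temp_c - t_ref) / 10.0)          # assumed Q10 response
    moisture_modifier = min(soil_moisture_frac / moisture_optimum, 1.0)
    return basal_rate * temp_modifier * moisture_modifier          # kg N ha-1 day-1

# Annual prediction from a hypothetical constant daily series of temperature and moisture.
annual_nnm = sum(daily_nnm(0.8, temp, moist)
                 for temp, moist in [(26.0, 0.28)] * 365)
print(f"predicted annual NNM ≈ {annual_nnm:.0f} kg ha-1 yr-1")
```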
Abstract:
Organisations in Multi-Agent Systems (MAS) have proven to be successful in regulating agent societies. Nevertheless, changes in agents' behaviour or in the dynamics of the environment may lead to a poor fulfilment of the system's purposes, and so the entire organisation needs to be adapted. In this paper we focus on endowing the organisation with adaptation capabilities, instead of expecting agents to be capable of adapting the organisation by themselves. We regard this organisational adaptation as an assisting service provided by what we call the Assistance Layer. Our generic Two Level Assisted MAS Architecture (2-LAMA) incorporates such a layer. We empirically evaluate this approach by means of an agent-based simulator we have developed for the P2P sharing network domain. This simulator implements the 2-LAMA architecture and supports the comparison between different adaptation methods, as well as with the standard BitTorrent protocol. In particular, we present two alternatives to perform norm adaptation and one method to adapt agents' relationships. The results show improved performance and demonstrate that the cost of introducing an additional layer in charge of the system's adaptation is lower than its benefits.
Abstract:
In this work, a previously developed, statistics-based damage-detection approach was validated for its ability to autonomously detect damage in bridges. The approach uses statistical differences between the actual and predicted behavior of the bridge under a subset of ambient truck loads. The predicted behavior is derived from a statistics-based model trained with field data from the undamaged bridge (not a finite element model). The differences between actual and predicted responses, called residuals, are then used to construct control charts, which compare undamaged and damaged structure data. Validation of the damage-detection approach was achieved by using sacrificial specimens that were mounted on the bridge, exposed to ambient traffic loads, and designed to simulate actual damage-sensitive locations. Different damage types and levels were introduced to the sacrificial specimens to study the sensitivity and applicability of the approach. The damage-detection algorithm was able to identify damage, but it also had a high false-positive rate. An evaluation of the sub-components of the damage-detection methodology was completed for the purpose of improving the approach. Several of the underlying assumptions within the algorithm were being violated, which was the source of the false positives. Furthermore, the lack of an automatic evaluation process was thought to be a potential impediment to widespread use. Recommendations for the improvement of the methodology were developed and preliminarily evaluated. These recommendations are believed to improve the efficacy of the damage-detection approach.
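The residual-based control chart idea can be sketched with a simple Shewhart individuals chart on synthetic residuals; the project's actual chart type, limits and data are not reproduced here.

```python
# Sketch of a residual-based control chart (Shewhart individuals chart, synthetic data).
import numpy as np

rng = np.random.default_rng(1)
baseline_residuals = rng.normal(0.0, 1.0, 500)        # undamaged-bridge residuals (synthetic)
new_residuals = rng.normal(0.8, 1.0, 100)             # later monitoring data (synthetic shift)

center = baseline_residuals.mean()
sigma = baseline_residuals.std(ddof=1)
ucl, lcl = center + 3 * sigma, center - 3 * sigma     # conventional 3-sigma control limits

# Points outside the limits are flagged as potential damage indications.
alarms = np.sum((new_residuals > ucl) | (new_residuals < lcl))
print(f"points outside control limits: {alarms} of {new_residuals.size}")
```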
Abstract:
With over 68 thousand miles of gravel roads in Iowa and the importance of these roads within the farm-to-market transportation system, proper water management becomes critical for maintaining the integrity of the roadway materials. However, the build-up of water within the aggregate subbase can lead to frost boils and ultimately to potholes forming at the road surface. The aggregate subbase and subgrade soils under these gravel roads are produced with material opportunistically chosen from local sources near the site, and the compositions of these sublayers are often far from ideal in terms of water drainage; the full effects of this shortcut are not well understood. The primary objective of this project was to provide a physically based model for evaluating the drainability of potential subbase and subgrade materials for gravel roads in Iowa. The Richards equation provided the appropriate framework to study the transient unsaturated flow that usually occurs through the subbase and subgrade of a gravel road. From this framework, we identified the saturated hydraulic conductivity, Ks, as a key parameter driving the time to drain of subgrade soils found in Iowa, and thus a good proxy variable for assessing roadway drainability. Using Ks derived from soil texture, we were able to identify potential problem areas in terms of roadway drainage. It was found that there is a threshold for Ks of 15 cm/day that determines whether the roadway will drain efficiently, based on the requirement that the time to drain, Td, of the surface roadway layer does not exceed a 2-hr limit. Two of the three most abundant textures (loam and silty clay loam), which cover nearly 60% of the state of Iowa, were found to have average Td values greater than the 2-hr limit. With such a large percentage of the state at risk of boil formation due to soils with relatively low saturated hydraulic conductivity, it seems pertinent to propose alternative design and/or maintenance practices to limit expensive repair work in Iowa. The addition of drain tiles or French mattresses may help address drainage problems. However, before pursuing this recommendation, a comprehensive cost-benefit analysis is needed.
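For reference, the one-dimensional (vertical) Richards equation underlying the drainage analysis is commonly written in mixed form as:

```latex
\frac{\partial \theta(h)}{\partial t}
  = \frac{\partial}{\partial z}\!\left[ K(h)\left(\frac{\partial h}{\partial z} + 1\right)\right]
```

Here theta is the volumetric water content, h the pressure head, z the vertical coordinate, and K(h) the unsaturated hydraulic conductivity, which equals Ks at saturation; the time to drain Td follows from solving this equation for the roadway layer under the chosen boundary conditions.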
Abstract:
In order to understand the development of non-genetically encoded actions during an animal's lifespan, it is necessary to analyze the dynamics and evolution of learning rules producing behavior. Owing to the intrinsic stochastic and frequency-dependent nature of learning dynamics, these rules are often studied in evolutionary biology via agent-based computer simulations. In this paper, we show that stochastic approximation theory can help to qualitatively understand learning dynamics and formulate analytical models for the evolution of learning rules. We consider a population of individuals repeatedly interacting during their lifespan, and where the stage game faced by the individuals fluctuates according to an environmental stochastic process. Individuals adjust their behavioral actions according to learning rules belonging to the class of experience-weighted attraction learning mechanisms, which includes standard reinforcement and Bayesian learning as special cases. We use stochastic approximation theory in order to derive differential equations governing action play probabilities, which turn out to have qualitative features of mutator-selection equations. We then perform agent-based simulations to find the conditions where the deterministic approximation is closest to the original stochastic learning process for standard 2-action 2-player fluctuating games, where interaction between learning rules and preference reversal may occur. Finally, we analyze a simplified model for the evolution of learning in a producer-scrounger game, which shows that the exploration rate can interact in a non-intuitive way with other features of co-evolving learning rules. Overall, our analyses illustrate the usefulness of applying stochastic approximation theory in the study of animal learning.
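The experience-weighted attraction (EWA) class of learning rules mentioned above can be sketched as follows (Camerer-Ho form; the parameter values are illustrative, and setting delta = 0 recovers standard reinforcement learning while delta = 1 corresponds to belief-based updating).

```python
# Sketch of an experience-weighted attraction (EWA) update with logit choice
# (illustrative parameters; not the specific rules evolved in the paper).
import math, random

def ewa_update(attractions, experience, payoffs, chosen, phi=0.9, delta=0.5, rho=0.9):
    new_experience = rho * experience + 1.0
    new_attractions = []
    for j, a in enumerate(attractions):
        weight = 1.0 if j == chosen else delta      # forgone payoffs are weighted by delta
        new_attractions.append((phi * experience * a + weight * payoffs[j]) / new_experience)
    return new_attractions, new_experience

def logit_choice(attractions, lam=3.0):
    weights = [math.exp(lam * a) for a in attractions]
    return random.choices(range(len(attractions)), weights=weights)[0]

# One learning step in a 2-action game with hypothetical realized/forgone payoffs.
attractions, experience = [0.0, 0.0], 1.0
chosen = logit_choice(attractions)
attractions, experience = ewa_update(attractions, experience, payoffs=[1.0, 0.3], chosen=chosen)
print(chosen, attractions)
```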
Abstract:
In the software industry, long and difficult development cycles can be eased by making use of software frameworks. Frameworks represent a collection of classes that provide general solutions to the needs of a particular problem domain, freeing software developers to concentrate on application-specific requirements. The use of well-designed frameworks increases the reusability of design solutions and source code more than any other design approach. Knowledge of a particular domain can be captured in frameworks, from which finished software products can in turn be specialized. This master's thesis describes the design and implementation of a framework based on software agents. The main emphasis of the work is on describing a design that meets the requirements specification, together with its implementation, for a framework from which applications capable of different kinds of data collection in the Internet environment can be specialized. The experimental part of the work also presents an example application based on the framework developed in the thesis.
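A toy sketch of the framework idea the thesis describes follows: a reusable agent skeleton from which application-specific data-collection agents are specialized. The class and method names are illustrative, not those of the framework developed in the thesis.

```python
# Toy sketch of framework specialization: a fixed skeleton with overridable hot spots.
from abc import ABC, abstractmethod

class InformationAgent(ABC):
    """Framework class: fixes the collect-process-report skeleton (the frozen spots)."""
    def run(self):
        raw = self.collect()
        return self.report(self.process(raw))

    @abstractmethod
    def collect(self): ...
    def process(self, raw):             # default behaviour, overridable hot spot
        return raw
    def report(self, data):
        print(f"{type(self).__name__}: {data}")

class PriceWatcherAgent(InformationAgent):
    """Application class specialized from the framework."""
    def collect(self):
        return {"example.org/widget": 19.90}   # a real agent would fetch from the network

PriceWatcherAgent().run()
```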