93 results for "Agent negotiation strategies"
Abstract:
This paper describes a Computer-Supported Collaborative Learning (CSCL) case study in engineering education carried out within the context of a network management course. The case study shows that the use of two computing tools developed by the authors and based on Free- and Open-Source Software (FOSS) provides significant educational benefits over traditional engineering pedagogical approaches, in terms of the acquisition of both concepts and engineering competencies. First, the Collage authoring tool guides and supports the course teacher in the process of authoring computer-interpretable representations (using the IMS Learning Design standard notation) of effective collaborative pedagogical designs. Second, the Gridcole system supports the enactment of that design by guiding the students throughout the prescribed sequence of learning activities. The paper introduces the goals and context of the case study, elaborates on how Collage and Gridcole were employed, describes the applied evaluation methodology, and discusses the most significant findings derived from the case study.
Abstract:
Plan recognition is the problem of inferring the goals and plans of an agent from partial observations of her behavior. Recently, it has been shown that the problem can be formulated and solved using planners, reducing plan recognition to plan generation. In this work, we extend this model-based approach to plan recognition to the POMDP setting, where actions are stochastic and states are partially observable. The task is to infer a probability distribution over the possible goals of an agent whose behavior results from a POMDP model. The POMDP model is shared between agent and observer except for the true goal of the agent, which is hidden to the observer. The observations are action sequences O that may contain gaps, as some or even most of the actions done by the agent may not be observed. We show that the posterior goal distribution P(G|O) can be computed from the value function V_G(b) over beliefs b generated by the POMDP planner for each possible goal G. Some extensions of the basic framework are discussed, and a number of experiments are reported.
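A minimal sketch of the Bayesian step this abstract describes, assuming hypothetical per-goal scores derived from the POMDP planner's value functions; the score-to-likelihood mapping and the softmax temperature `beta` are illustrative assumptions, not the paper's exact construction:

```python
import math

def goal_posterior(goal_scores, prior=None, beta=1.0):
    """Posterior P(G | O) over goals from per-goal scores.

    goal_scores: dict goal -> score of the observed action sequence O under
    that goal (e.g. a value-function proxy obtained from the POMDP planner);
    higher means O is better explained by the goal.
    prior: optional dict goal -> P(G); defaults to uniform.
    beta: softmax temperature (illustrative assumption).
    """
    goals = list(goal_scores)
    if prior is None:
        prior = {g: 1.0 / len(goals) for g in goals}
    # Unnormalized posterior: softmax over scores, weighted by the prior.
    weights = {g: math.exp(beta * goal_scores[g]) * prior[g] for g in goals}
    total = sum(weights.values())
    return {g: w / total for g, w in weights.items()}

# Example: three candidate goals scored against an observed action sequence.
print(goal_posterior({"G1": 2.0, "G2": 1.5, "G3": 0.2}))
```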
Abstract:
This workshop paper argues that fostering active student participation, both in face-to-face lectures and seminars and outside the classroom (personal and group study at home, in the library, etc.), requires a certain level of teacher-led inquiry. The paper presents a set of strategies drawn from real practice in higher education with teacher-led inquiry ingredients that promote active learning. These practices highlight the role of the syllabus, the importance of iterative learning designs, explicit teacher-led inquiry, and the implications of context, sustainability and practitioners’ creativity. The strategies discussed in this paper can serve as input to the workshop as real cases that need to be represented in design and supported in enactment (with and without technologies).
Abstract:
The topic of this study is to increase the theoretical and empirical understanding of the open-source business strategy in the embedded-systems domain by investigating open-source business models, challenges, resources, and operational and dynamic capabilities.
Abstract:
We present simple procedures for the prediction of a real-valued sequence. The algorithms are based on a combination of several simple predictors. We show that if the sequence is a realization of a bounded stationary and ergodic random process, then the average of squared errors converges, almost surely, to that of the optimum, given by the Bayes predictor. We offer an analogous result for the prediction of stationary Gaussian processes.
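A minimal sketch of one standard way to combine several simple predictors, an exponentially weighted average of experts under squared loss; the learning rate `eta` and the choice of experts are illustrative assumptions, not the paper's specific construction:

```python
import math

def weighted_average_prediction(expert_preds, weights):
    """Aggregate the experts' predictions as a weight-normalized average."""
    total = sum(weights)
    return sum(w * p for w, p in zip(weights, expert_preds)) / total

def update_weights(weights, expert_preds, outcome, eta=0.5):
    """Exponentially down-weight each expert by its squared error."""
    return [w * math.exp(-eta * (p - outcome) ** 2)
            for w, p in zip(weights, expert_preds)]

# Example: two naive experts, "last value" and "running mean", on a toy sequence.
sequence = [1.0, 1.2, 0.9, 1.1, 1.0, 1.3]
weights = [1.0, 1.0]
history = [sequence[0]]
for y in sequence[1:]:
    preds = [history[-1], sum(history) / len(history)]
    forecast = weighted_average_prediction(preds, weights)
    weights = update_weights(weights, preds, y, eta=0.5)
    history.append(y)
    print(f"forecast={forecast:.3f}  actual={y:.3f}")
```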
Resumo:
We exhibit and characterize an entire class of simple adaptive strategies,in the repeated play of a game, having the Hannan-consistency property: In the long-run, the player is guaranteed an average payoff as large as the best-reply payoff to the empirical distribution of play of the otherplayers; i.e., there is no "regret." Smooth fictitious play (Fudenberg and Levine [1995]) and regret-matching (Hart and Mas-Colell [1998]) areparticular cases. The motivation and application of this work come from the study of procedures whose empirical distribution of play is, in thelong-run, (almost) a correlated equilibrium. The basic tool for the analysis is a generalization of Blackwell's [1956a] approachability strategy for games with vector payoffs.
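A minimal sketch of the regret-matching rule mentioned above, for one player with a fixed action set; the payoff table and the opponent's behavior in the example are illustrative assumptions:

```python
import random

def regret_matching_strategy(cum_regret):
    """Mixed strategy proportional to positive cumulative regrets."""
    positive = [max(r, 0.0) for r in cum_regret]
    total = sum(positive)
    n = len(cum_regret)
    if total == 0.0:
        return [1.0 / n] * n            # no positive regret: play uniformly
    return [p / total for p in positive]

def update_regrets(cum_regret, payoff_row, played):
    """Add, for each action a, u(a, opponent) - u(played, opponent)."""
    return [r + payoff_row[a] - payoff_row[played]
            for a, r in enumerate(cum_regret)]

# Example: a 2-action game against an i.i.d. opponent (illustrative payoffs).
payoffs = {0: [1.0, 0.0], 1: [0.0, 1.0]}    # payoffs[opp_action][my_action]
cum_regret = [0.0, 0.0]
for t in range(1000):
    strategy = regret_matching_strategy(cum_regret)
    my_action = random.choices([0, 1], weights=strategy)[0]
    opp_action = random.choices([0, 1], weights=[0.7, 0.3])[0]
    cum_regret = update_regrets(cum_regret, payoffs[opp_action], my_action)
print("long-run strategy:", regret_matching_strategy(cum_regret))
```

In the long run the empirical play concentrates on the best reply to the opponent's empirical distribution, which is the "no regret" property the abstract refers to.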
Abstract:
The problems arising in the logistics of commercial distribution are complex and involve several players and decision levels. One important decision is related to the design of the routes to distribute the products in an efficient and inexpensive way. This article explores three different distribution strategies: the first strategy corresponds to the classical vehicle routing problem; the second is a master route strategy with daily adaptations; and the third is a strategy that takes cross-functional planning into account through a multi-objective model with two objectives. All strategies are analyzed in a multi-period scenario. A metaheuristic based on Iterated Local Search is used to solve the models related to each strategy. A computational experiment is performed to evaluate the three strategies with respect to the two objectives. The cross-functional planning strategy leads to solutions that put into practice the coordination between functional areas and better meet business objectives.
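A minimal skeleton of an Iterated Local Search loop of the kind referred to above; the perturbation, local search, and acceptance rule here are generic placeholders, not the article's specific operators:

```python
import random

def iterated_local_search(initial, local_search, perturb, cost, iterations=100):
    """Generic ILS: repeatedly perturb the incumbent, re-optimize locally,
    and accept the result if it improves the best cost found so far."""
    best = local_search(initial)
    best_cost = cost(best)
    for _ in range(iterations):
        candidate = local_search(perturb(best))
        candidate_cost = cost(candidate)
        if candidate_cost < best_cost:            # better-only acceptance
            best, best_cost = candidate, candidate_cost
    return best, best_cost

# Toy example: order customers on a line to minimize total travel.
customers = list(range(10))
cost = lambda tour: sum(abs(tour[i] - tour[i + 1]) for i in range(len(tour) - 1))

def perturb(tour):                                # random segment reversal
    i, j = sorted(random.sample(range(len(tour)), 2))
    return tour[:i] + tour[i:j + 1][::-1] + tour[j + 1:]

def local_search(tour):                           # first-improvement adjacent swaps
    tour = tour[:]
    for i in range(len(tour) - 1):
        swapped = tour[:]
        swapped[i], swapped[i + 1] = swapped[i + 1], swapped[i]
        if cost(swapped) < cost(tour):
            tour = swapped
    return tour

random.shuffle(customers)
print(iterated_local_search(customers, local_search, perturb, cost))
```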
Abstract:
I study the optimal project choice when the principal relies on the agent in charge of production for project evaluation. The principal has to choose between a safe project generating a fixed revenue and a risky project generating an uncertain revenue. The agent has private information about the production cost under each project, but also about the signal regarding the profitability of the risky project. If the signal favoring the adoption of the risky project is good news to the agent, integrating the production and project-evaluation tasks does not generate any loss compared to the benchmark in which the principal herself receives the signal. By contrast, if it is bad news, task integration creates an endogenous reservation utility which is type-dependent and thereby generates countervailing incentives, which can make a bias toward either project optimal. The results can offer an explanation for why good firms can go bad and a rationale for the separation of day-to-day operating decisions from long-term strategic decisions stressed by Williamson.
Abstract:
The effectiveness of decision rules depends on characteristics of both rules and environments. A theoretical analysis of environments specifies the relative predictive accuracies of the lexicographic rule 'take-the-best' (TTB) and other simple strategies for binary choice. We identify three factors: how the environment weights variables; characteristics of choice sets; and error. For cases involving from three to five binary cues, TTB is effective across many environments. However, hybrids of equal weights (EW) and TTB models are more effective as environments become more compensatory. In the presence of error, TTB and similar models do not predict much better than a naïve model that exploits dominance. We emphasize psychological implications and the need for more complete theories of the environment that include the role of error.
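A minimal sketch of the take-the-best rule analyzed above, for a binary choice over objects described by binary cues; the cue values and validity ordering in the example are illustrative assumptions:

```python
import random

def take_the_best(cues_a, cues_b):
    """Choose between objects A and B using cues ordered by validity.

    cues_a, cues_b: lists of 0/1 cue values, already sorted from the most
    to the least valid cue. The first discriminating cue decides; if no
    cue discriminates, guess at random.
    """
    for a, b in zip(cues_a, cues_b):
        if a != b:
            return "A" if a > b else "B"
    return random.choice(["A", "B"])

# Example with three binary cues (most valid first).
print(take_the_best([1, 0, 1], [1, 1, 0]))   # second cue discriminates -> "B"
```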
Abstract:
This paper is concerned with the realism of mechanisms that implement social choice functions in the traditional sense. Will agents actually play the equilibrium assumed by the analysis? As an example, we study the convergence and stability properties of Sjöström's (1994) mechanism, on the assumption that boundedly rational players find their way to equilibrium using monotonic learning dynamics and also with fictitious play. This mechanism implements most social choice functions in economic environments using as a solution concept the iterated elimination of weakly dominated strategies (only one round of deletion of weakly dominated strategies is needed). There are, however, many sets of Nash equilibria whose payoffs may be very different from those desired by the social choice function. With monotonic dynamics we show that many equilibria in all the sets of equilibria we describe are the limit points of trajectories that have completely mixed initial conditions. The initial conditions that lead to these equilibria need not be very close to the limiting point. Furthermore, even if the dynamics converge to the "right" set of equilibria, they can still converge to quite a poor outcome in welfare terms. With fictitious play, if the agents have completely mixed prior beliefs, beliefs and play converge to the outcome the planner wants to implement.
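A minimal sketch of two-player fictitious play of the kind used in the analysis above, where each player best-responds to the empirical distribution of the opponent's past play; the payoff matrices in the example are illustrative and unrelated to Sjöström's mechanism:

```python
def best_response(payoff, opponent_counts):
    """Index of the action maximizing expected payoff against the
    empirical distribution implied by opponent_counts."""
    total = sum(opponent_counts)
    expected = [sum(payoff[a][b] * c / total
                    for b, c in enumerate(opponent_counts))
                for a in range(len(payoff))]
    return max(range(len(expected)), key=expected.__getitem__)

def fictitious_play(payoff1, payoff2, rounds=200):
    """Each player tracks counts of the other's actions and best-responds."""
    counts1 = [1] * len(payoff1)   # counts of player 1's actions (seen by player 2)
    counts2 = [1] * len(payoff2)   # counts of player 2's actions (seen by player 1)
    for _ in range(rounds):
        a1 = best_response(payoff1, counts2)
        a2 = best_response(payoff2, counts1)
        counts1[a1] += 1
        counts2[a2] += 1
    return counts1, counts2        # empirical play frequencies (plus priors)

# Example: a 2x2 coordination game; play concentrates on one equilibrium.
payoff1 = [[2, 0], [0, 1]]         # payoff1[own_action][opponent_action]
payoff2 = [[2, 0], [0, 1]]
print(fictitious_play(payoff1, payoff2))
```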
Abstract:
Agent-based computational economics is becoming widely used in practice. This paper explores the consistency of some of its standard techniques. We focus in particular on prevailing wholesale electricity trading simulation methods. We include different supply and demand representations and propose the Experience-Weighted Attraction method to include several behavioural algorithms. We compare the results across assumptions and to economic theory predictions. The match is good under best-response and reinforcement learning but not under fictitious play. The simulations perform well under flat and upward-sloping supply bidding, and also for plausible demand elasticity assumptions. Learning is influenced by the number of bids per plant and the initial conditions. The overall conclusion is that agent-based simulation assumptions are far from innocuous. We link their performance to underlying features, and identify those that are better suited to model wholesale electricity markets.
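A minimal sketch of the Experience-Weighted Attraction update in the form popularized by Camerer and Ho (1999), which this line of work draws on; the parameter values and the logit choice rule in the example are illustrative assumptions, not the paper's calibration:

```python
import math

def ewa_update(attractions, N, payoffs, chosen, phi=0.9, rho=0.9, delta=0.5):
    """One EWA step: attractions decay by phi, the experience weight by rho;
    forgone payoffs are weighted by delta, the realized payoff by 1."""
    N_new = rho * N + 1.0
    updated = []
    for j, (A_j, pi_j) in enumerate(zip(attractions, payoffs)):
        weight = 1.0 if j == chosen else delta
        updated.append((phi * N * A_j + weight * pi_j) / N_new)
    return updated, N_new

def logit_choice_probs(attractions, lam=2.0):
    """Logit response to the current attractions."""
    exps = [math.exp(lam * a) for a in attractions]
    total = sum(exps)
    return [e / total for e in exps]

# Example: two bidding strategies; payoffs[j] is what strategy j would have earned.
attractions, N = [0.0, 0.0], 1.0
attractions, N = ewa_update(attractions, N, payoffs=[1.0, 0.4], chosen=0)
print(logit_choice_probs(attractions))
```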
Abstract:
We study the interaction between insurance and capital markets within a single but general framework. We show that capital markets greatly enhance the risk-sharing capacity of insurance markets and the scope of risks that are insurable, because efficiency does not depend on the number of agents at risk, nor on risks being independent, nor on the preferences and endowments of agents at risk being the same. We show that agents share risks by buying full coverage for their individual risks and provide insurance capital through stock markets. We show that aggregate risk enters private insurance as a positive loading on insurance prices and that, despite this, agents buy full coverage. The loading is determined by the risk premium of investors in the stock market and hence does not depend on the agents' willingness to pay. Agents provide insurance capital by trading an equally weighted portfolio of insurance company shares and a riskless asset. We are able to construct agents' optimal trading strategies explicitly and for very general preferences.
Abstract:
The paper proposes a numerical solution method for general equilibrium models with a continuum of heterogeneous agents, which combines elements of projection and of perturbation methods. The basic idea is to first solve for the stationary solution of the model, without aggregate shocks but with fully specified idiosyncratic shocks, and then to compute a first-order perturbation of the solution in the aggregate shocks. This approach makes it possible to include a high-dimensional representation of the cross-sectional distribution in the state vector. The method is applied to a model of household saving with uninsurable income risk and liquidity constraints. The model includes not only productivity shocks, but also shocks to redistributive taxation, which cause substantial short-run variation in the cross-sectional distribution of wealth. If those shocks are operative, it is shown that a solution method based on very few statistics of the distribution is not suitable, while the proposed method can solve the model with high accuracy, at least for the case of small aggregate shocks. Techniques are discussed to reduce the dimension of the state space such that higher-order perturbations are feasible. Matlab programs to solve the model can be downloaded.
Abstract:
This paper characterizes the innovation strategy of manufacturing firms and examines the relation between the innovation strategy and important industry-, firm- and innovation-specific characteristics using Belgian data from the Eurostat Community Innovation Survey. In addition to important size effects explaining innovation, we find that high perceived risks and costs and low appropriability of innovations do not discourage innovation, but rather determine how the innovation sourcing strategy is chosen. With respect to the determinants of the decision of the innovative firm to produce technology itself (Make) or to source technology externally (Buy), we find that small firms are more likely to restrict their innovation strategy to an exclusive make or buy strategy, while large firms are more likely to combine both internal and external knowledge acquisition in their innovation strategy. An interesting result that highlights the complementary nature of the Make and Buy decisions is that, controlling for firm size, companies for which internal information is an important source for innovation are more likely to combine internal and external sources of technology. We take this as evidence that in-house R&D generates the necessary absorptive capacity to profit from external knowledge acquisition. The effectiveness of different mechanisms to appropriate the benefits of innovations and the internal organizational resistance against change are also important determinants of the firm's technology sourcing strategy.