992 results for Adaptive game AI
Abstract:
A number of Game Strategies (GS) have been developed in past decades. They have been used in the fields of economics, engineering, computer science and biology due to their efficiency in solving design optimization problems. In addition, research in multi-objective (MO) and multidisciplinary design optimization (MDO) has focused on developing robust and efficient optimization methods that produce a set of high quality solutions at low computational cost. In this paper, two optimization techniques are considered: the first uses multi-fidelity hierarchical Pareto optimality; the second uses the combination of two Game Strategies, Nash equilibrium and Pareto optimality. The paper shows how Game Strategies can be hybridised and coupled to Multi-Objective Evolutionary Algorithms (MOEA) to accelerate convergence and to produce a set of high quality solutions. Numerical results obtained from both optimization methods are compared in terms of computational expense and model quality. The benefits of using Hybrid-Game Strategies are clearly demonstrated.
Abstract:
We study the rates of growth of the regret in online convex optimization. First, we show that a simple extension of the algorithm of Hazan et al. eliminates the need for a priori knowledge of the lower bound on the second derivatives of the observed functions. We then provide an algorithm, Adaptive Online Gradient Descent, which interpolates between the results of Zinkevich for linear functions and of Hazan et al. for strongly convex functions, achieving intermediate rates between √T and log T. Furthermore, we show the strong optimality of the algorithm. Finally, we provide an extension of our results to general norms.
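As a rough illustration of the idea behind this abstract, the sketch below implements a simplified adaptive online gradient descent step in Python: the step size shrinks with the curvature observed so far, so the method behaves like O(√T)-regret gradient descent for linear losses and like O(log T)-regret descent under strong convexity. The callback names (`grad_fn`, `curvature_fn`) and the fallback step size are illustrative assumptions, not the paper's exact algorithm, which additionally chooses a regularization term adaptively.

```python
import numpy as np

def adaptive_ogd(grad_fn, curvature_fn, x0, T, radius=1.0):
    """Simplified sketch of adaptive online gradient descent.

    grad_fn(t, x)   -> gradient of the round-t loss at x (assumed callback)
    curvature_fn(t) -> lower bound H_t on the round-t strong convexity,
                       0 for linear losses (assumed callback)
    The step size is the inverse of the accumulated curvature, so the
    method interpolates between sqrt(T)- and log(T)-type regret regimes.
    """
    x = np.asarray(x0, dtype=float)
    cum_curvature = 0.0
    for t in range(1, T + 1):
        g = grad_fn(t, x)
        cum_curvature += curvature_fn(t)
        # Fall back to a 1/sqrt(t) step while no curvature has been observed.
        eta = 1.0 / cum_curvature if cum_curvature > 0 else radius / np.sqrt(t)
        x = x - eta * g
        # Project back onto the feasible Euclidean ball of the given radius.
        norm = np.linalg.norm(x)
        if norm > radius:
            x *= radius / norm
    return x
```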
Abstract:
A classical condition for fast learning rates is the margin condition, first introduced by Mammen and Tsybakov. In this paper, we tackle the problem of adaptivity to this condition in the context of model selection, in a general learning framework. In fact, we consider a weaker version of this condition that takes into account that learning within a small model can be much easier than within a large one. Requiring this “strong margin adaptivity” makes the model selection problem more challenging. We first prove, in a general framework, that some penalization procedures (including those based on local Rademacher complexities) exhibit this adaptivity when the models are nested. Contrary to previous results, this holds with penalties that depend only on the data. Our second main result is that strong margin adaptivity is not always possible when the models are not nested: for every model selection procedure (even a randomized one), there is a problem for which it fails to demonstrate strong margin adaptivity.
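The penalization procedures discussed here share a common shape: select the model minimizing empirical risk plus a data-dependent penalty. A minimal sketch of that generic selection rule, with all problem-specific pieces (`fit`, `risk`, `penalty`) left as assumed callbacks, might look like the following; the paper's local Rademacher complexities would be one concrete choice of `penalty`.

```python
import numpy as np

def select_model(candidates, X, y, fit, risk, penalty):
    """Sketch of data-driven penalized model selection.

    candidates       : list of model classes (e.g. nested function classes)
    fit(m, X, y)     -> estimator trained within model m (assumed callback)
    risk(f, X, y)    -> empirical risk of f on the data (assumed callback)
    penalty(m, X, y) -> data-dependent penalty, e.g. an estimate based on
                        local Rademacher complexities (assumed callback)
    Returns the estimator minimizing empirical risk + penalty.
    """
    best_f, best_score = None, np.inf
    for m in candidates:
        f = fit(m, X, y)
        score = risk(f, X, y) + penalty(m, X, y)
        if score < best_score:
            best_f, best_score = f, score
    return best_f
```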
Abstract:
Agents make up an important part of game worlds, ranging from the characters and monsters that live in the world to the armies that the player controls. Despite their importance, agents in current games rarely display an awareness of their environment or react appropriately, which severely detracts from the believability of the game. Some games have included agents with a basic awareness of other agents, but they are still unaware of important game events or environmental conditions. This paper presents an agent design we have developed, which combines cellular automata for environmental modeling with influence maps for agent decision-making. The agents were implemented in a 3D game environment we developed, the EmerGEnT system, and tuned through three experiments. The result is simple, flexible game agents that are able to respond to natural phenomena (e.g. rain or fire) while pursuing a goal.
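As a hedged sketch of how cellular automata and influence maps can be combined in the way this abstract describes (not the EmerGEnT system's actual code), the Python below spreads a fire hazard with a simple CA rule, folds it into an influence map together with an attractive goal layer, and lets an agent greedily follow the map. All grid sizes, weights and the spread probability are illustrative.

```python
import numpy as np

def step_fire_ca(fire, spread_p=0.3, rng=None):
    """One cellular-automaton step: each burning cell may ignite its
    four neighbours with probability spread_p."""
    rng = rng if rng is not None else np.random.default_rng()
    new = fire.copy()
    for r, c in np.argwhere(fire):
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            rr, cc = r + dr, c + dc
            if (0 <= rr < fire.shape[0] and 0 <= cc < fire.shape[1]
                    and not fire[rr, cc] and rng.random() < spread_p):
                new[rr, cc] = True
    return new

def influence_map(fire, goal, danger_w=-5.0, goal_w=1.0):
    """Sum a repulsive fire layer and an attractive goal-distance layer."""
    rows, cols = fire.shape
    r, c = np.indices((rows, cols))
    goal_layer = -(np.abs(r - goal[0]) + np.abs(c - goal[1]))  # closer = higher
    return goal_w * goal_layer + danger_w * fire.astype(float)

def choose_move(pos, imap):
    """Greedy agent: step to the neighbouring cell with the highest influence."""
    best, best_val = pos, imap[pos]
    for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        rr, cc = pos[0] + dr, pos[1] + dc
        if (0 <= rr < imap.shape[0] and 0 <= cc < imap.shape[1]
                and imap[rr, cc] > best_val):
            best, best_val = (rr, cc), imap[rr, cc]
    return best
```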
Abstract:
This paper defines and discusses two contrasting approaches to designing game environments. The first, referred to as scripting, requires developers to anticipate, hand-craft and script specific game objects, events and player interactions. The second, known as emergence, involves defining general, global rules that interact to give rise to emergent gameplay. Each of these approaches is defined, discussed and analyzed with respect to its considerations and effects for game developers and game players. Subsequently, various techniques for implementing these design approaches are identified and discussed. It is concluded that scripting and emergence are two extremes of the same continuum, neither of which is ideal for game development. Rather, there needs to be a compromise in which the boundaries of action (such as story and game objectives) are hardcoded, while non-scripted behavior (such as interactions and strategies) is able to emerge within these boundaries.
Abstract:
This paper investigates a complex aerodynamic design problem, the High Lift System (HLS), using Particle Swarm Optimisation (PSO) coupled to game strategies. Two types of optimisation method are used: the first is a standard PSO based on Pareto dominance; the second hybridises PSO with the well-known Nash game strategy (Hybrid-PSO). These optimisation techniques are coupled to the pre/post-processor GiD, which provides unstructured meshes during the optimisation procedure, and to the transonic analysis software PUMI. The computational efficiency and design quality obtained by PSO and Hybrid-PSO are compared. The numerical results for the multi-objective HLS design optimisation clearly show the benefits of hybridising PSO with the Nash game, and make the methodology promising for solving other, more complex multi-physics optimisation problems in aeronautics.
Abstract:
A number of game strategies have been developed in past decades and used in the fields of economics, engineering, computer science, and biology due to their efficiency in solving design optimization problems. In addition, research in multiobjective and multidisciplinary design optimization has focused on developing robust and efficient optimization methods that can produce a set of high quality solutions at less computational cost. In this paper, two optimization techniques are considered: the first uses multifidelity hierarchical Pareto-optimality; the second uses a combination of the game strategies Nash-equilibrium and Pareto-optimality. This paper shows how game strategies can be coupled to multiobjective evolutionary algorithms and robust design techniques to produce a set of high quality solutions. Numerical results obtained from both optimization methods are compared in terms of computational expense and model quality. The benefits of using Hybrid and non-Hybrid Game strategies are demonstrated.
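Since several of these abstracts hinge on hybridising a Nash game with Pareto-based evolutionary search, a minimal sketch of the Nash half may help. Here two players each own a slice of the design vector and alternately optimize their own objective while the other slice is frozen; `local_opt` stands in for whatever assumed single-objective optimizer (a short PSO or evolution-strategy run, say) the hybrid uses.

```python
import numpy as np

def nash_iteration(f1, f2, x1, x2, local_opt, rounds=20, tol=1e-6):
    """Two-player Nash game over a split design vector (sketch).

    Player 1 owns x1 and minimizes f1(x1, x2) with x2 frozen; player 2
    owns x2 and minimizes f2(x1, x2) with x1 frozen.  local_opt(obj, x)
    is an assumed single-objective optimizer returning an improved slice.
    """
    x1, x2 = np.asarray(x1, float), np.asarray(x2, float)
    for _ in range(rounds):
        # Both players respond to the other's current (frozen) slice.
        x1_new = local_opt(lambda v: f1(v, x2), x1)
        x2_new = local_opt(lambda v: f2(x1, v), x2)
        moved = np.linalg.norm(x1_new - x1) + np.linalg.norm(x2_new - x2)
        x1, x2 = x1_new, x2_new
        if moved < tol:  # no player improves: approximate Nash equilibrium
            break
    return x1, x2
```

In the hybrid schemes described above, the resulting equilibrium point is typically injected into the MOEA/PSO population to accelerate convergence toward the Pareto front.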
Abstract:
Adaptive wing/aerofoil designs are considered promising techniques in aeronautics/aerospace, since they can reduce aircraft emissions and improve the aerodynamic performance of manned or unmanned aircraft. The paper investigates robust design and optimisation for one type of adaptive technique: an Active Flow Control (AFC) bump at transonic flow conditions on a Natural Laminar Flow (NLF) aerofoil designed to increase aerodynamic efficiency (especially a high lift-to-drag ratio). The concept of using a Shock Control Bump (SCB) is to control the supersonic flow on the suction/pressure side of the NLF aerofoil (RAE 5243), delaying shock occurrence or weakening its strength. Such an AFC technique reduces total drag at transonic speeds owing to the reduction of wave drag. The location of Boundary Layer Transition (BLT) can influence the position of the supersonic shock. The BLT position is an uncertainty in aerodynamic design due to many factors, such as surface contamination or surface erosion. The paper studies SCB shape design optimisation using robust Evolutionary Algorithms (EAs) with uncertainty in BLT position. The optimisation method is based on a canonical evolution strategy and incorporates the concepts of hierarchical topology, parallel computing and asynchronous evaluation. Two test cases are conducted: the first assumes the BLT is at 45% of chord from the leading edge; the second considers robust design optimisation of the SCB under variability in BLT position and lift coefficient. Numerical results show that the optimisation method, coupled to uncertainty design techniques, produces Pareto-optimal SCB shapes that have low sensitivity and high aerodynamic performance while achieving significant total drag reduction.
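A common way to realise robust design under BLT uncertainty of the kind described here is to score each candidate SCB shape over several sampled transition positions and let the EA minimize both the mean drag and its spread. The sketch below assumes a `drag_fn(shape, blt)` CFD surrogate and illustrative BLT samples; none of these names or values are the paper's actual settings.

```python
import numpy as np

def robust_fitness(drag_fn, shape, blt_positions=(0.35, 0.45, 0.55),
                   weights=None):
    """Robust fitness evaluation under BLT uncertainty (sketch).

    drag_fn(shape, blt) -> total drag of the SCB shape at a given BLT
    position (assumed CFD surrogate).  The shape is scored by the mean
    and spread of drag over sampled BLT positions; a robust EA would
    treat both as objectives to obtain low-sensitivity Pareto shapes.
    """
    drags = np.array([drag_fn(shape, b) for b in blt_positions])
    w = (np.ones_like(drags) / len(drags)) if weights is None \
        else np.asarray(weights, float)
    mean_drag = float(np.dot(w, drags))
    spread = float(drags.std())  # sensitivity to BLT variability
    return mean_drag, spread     # both minimized by the robust EA
```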
Abstract:
Video games have shown great potential as tools that both engage and motivate players to achieve tasks and build communities in fantasy worlds. We propose that the application of game elements to real world activities can aid in delivering contextual information in interesting ways and help young people to engage in everyday events. Our research will explore how we can unite utility and fun to enhance information delivery, encourage participation, build communities and engage users with utilitarian events situated in the real world. This research aims to identify key game elements that work effectively to engage young digital natives, and provide guidelines to influence the design of interactions and interfaces for event applications in the future. This research will primarily contribute to areas of user experience and pervasive gaming.
Abstract:
Computer vision is an attractive solution for uninhabited aerial vehicle (UAV) collision avoidance, due to the low weight, size and power requirements of the hardware. A two-stage paradigm has emerged in the literature for the detection and tracking of dim targets in images, comprising spatial preprocessing followed by temporal filtering. In this paper, we investigate a hidden Markov model (HMM) based temporal filtering approach. Specifically, we propose an adaptive HMM filter, in which the variance of the model parameters is refined as the quality of the target estimate improves. Filters with high variance (fat filters) are used for target acquisition, and filters with low variance (thin filters) are used for target tracking. The adaptive filter is tested in simulation and with real data (video of a collision-course aircraft). Our test results demonstrate that the adaptive filtering approach improves tracking performance and provides an estimate of target heading not present in previous HMM filtering approaches.
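A minimal sketch of the fat/thin filter idea, assuming a 1-D grid of candidate target positions: the HMM forward recursion is run with a Gaussian random-walk transition whose variance is switched per frame depending on how peaked the current belief is. The sigmas and switching threshold here are illustrative, not the paper's tuned parameters.

```python
import numpy as np

def adaptive_hmm_step(posterior, likelihood, sigma_fat=5.0, sigma_thin=1.0,
                      peak_threshold=0.5):
    """One step of a 1-D adaptive HMM filter sketch for dim-target tracking.

    posterior  : belief over candidate target positions from the last frame
    likelihood : per-position measurement likelihood for the current frame
    A 'fat' (high-variance) transition kernel is used while the belief is
    diffuse (acquisition) and a 'thin' (low-variance) kernel once the
    posterior has a strong peak (tracking).
    """
    n = len(posterior)
    sigma = sigma_thin if posterior.max() > peak_threshold else sigma_fat
    # Build a Gaussian random-walk transition matrix and predict forward.
    idx = np.arange(n)
    trans = np.exp(-0.5 * ((idx[None, :] - idx[:, None]) / sigma) ** 2)
    trans /= trans.sum(axis=1, keepdims=True)
    predicted = posterior @ trans
    # Measurement update and renormalisation (HMM forward recursion).
    updated = predicted * likelihood
    return updated / updated.sum()
```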
Abstract:
Power systems in many countries are stressed towards their stability limits. If these systems experience unexpected serious contingencies or disturbances, there is a significant risk of instability, which may lead to widespread blackouts. Frequency is a reliable indicator that such an instability condition exists in the power system; therefore, under-frequency load shedding (UFLS) is used to stabilise the power system by curtailing some load. In this paper, the SFR-UFLS model is redeveloped to generate an optimal load shedding scheme that optimally sheds load following a single contingency event. The proposed optimal load shedding scheme is then tested on the 39-bus New England test system, and its performance is compared against a random load shedding scheme.
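For contrast with the optimised scheme proposed here, a conventional staged UFLS rule can be sketched in a few lines: each stage sheds a fixed fraction of system load once frequency drops below its threshold. The thresholds and fractions below are illustrative defaults, whereas the paper instead chooses the shed amount per contingency via the SFR model.

```python
def ufls_action(frequency_hz,
                stages=((49.0, 0.10), (48.5, 0.15), (48.0, 0.20))):
    """Staged under-frequency load shedding rule (sketch).

    stages : (threshold_hz, fraction) pairs; each stage sheds a fixed
    fraction of total load once frequency falls below its threshold.
    Returns the cumulative fraction of load to shed now.
    """
    shed_fraction = 0.0
    for threshold, fraction in stages:
        if frequency_hz < threshold:
            shed_fraction += fraction
    return shed_fraction
```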
Abstract:
Statement: Jams, Jelly Beans and the Fruits of Passion. Let us search, instead, for an epistemology of practice implicit in the artistic, intuitive processes which some practitioners do bring to situations of uncertainty, instability, uniqueness, and value conflict. (Schön 1983, p40) Game On was born out of the idea of creative community; finding, networking, supporting and inspiring the people behind the face of an industry, those in the midst of the machine and those intending to join. We understood this moment to be a pivotal opportunity to nurture a new emerging form of game making, in an era of change where the old industry models were proving to be unsustainable. As soon as we started putting people into a room under pressure, to make something in 48hrs, a whole pile of evolutionary creative responses emerged. People refashioned their craft in a moment of intense creativity that demanded different ways of working, an adaptive approach to the craft of making games – small – fast – indie. An event like the 48hrs forces participants’ attention onto the process as much as the outcome. As one game industry professional taking part in a challenge for the first time observed, there are three paths in the genesis from idea to finished work: the path that focuses on mechanics; the path that focuses on team structure and roles; and the path that focuses on the idea, the spirit – and the more successful teams put the spirit of the work first and foremost. The spirit drives the adaptation; it becomes improvisation. As Schön says: “Improvisation consists in varying, combining and recombining a set of figures within the schema which bounds and gives coherence to the performance.” (1983, p55). This improvisational approach is all about those making the games: the people and the principles of their creative process. This documentation evidences the intensity of their passion, their determination, and the shit they are prepared to put themselves through to achieve their goal – to win a cup full of jellybeans and make a working game in 48hrs. 48hr is a project where, on all levels, analogue meets digital. This concept was further explored through the documentation process: all of these pictures were taken with a 1945 Leica III camera. The use of this classic, film-based camera gives the images a granularity and depth; this older, slower technology exposes the very human moments of digital creativity. ____________________________ Schön, D. A. 1983, The Reflective Practitioner: How Professionals Think in Action, Basic Books, New York
Abstract:
The 48-hour game making challenge started out in 2007 as a creative community event. We have run this event each year since and have seen over 120 games made. 2011 was the most remarkable, in that each of the 20 teams made a playable game – the shape of the challenge has changed …. We have invested in the process of reflective practice & action research, with the event being part of a sweep of programs that inform this research. Each year gives us fresh insights into both the creative practice and the essential concerns, processes and trends of the independent games industry’s creative community, to which we then respond within our curatorial development of the subsequent programming. The 2011 48-hour challenge research project focused on the people and the site. We were specifically interested in the manner in which the community occupied the creative space.
Abstract:
Computer resource allocation represents a significant challenge, particularly for multiprocessor systems, which consist of shared computing resources to be allocated among co-runner processes and threads. While efficient resource allocation results in a highly efficient and stable overall multiprocessor system and good individual thread performance, poor resource allocation causes significant performance bottlenecks even for systems with abundant computing resources. This thesis proposes a cache-aware adaptive closed-loop scheduling framework as an efficient resource allocation strategy for this highly dynamic resource management problem, which requires instant estimation of highly uncertain and unpredictable resource patterns. Many different approaches to this problem have been developed, but neither the dynamic nature nor the time-varying and uncertain characteristics of the resource allocation problem is well considered. These approaches employ either static or dynamic optimization methods or advanced scheduling algorithms such as the Proportional Fair (PFair) scheduling algorithm. Some of the approaches that do consider the dynamic nature of multiprocessor systems apply only a basic closed-loop system; hence, they fail to take the time-varying and uncertain character of the system into account. Therefore, further research into multiprocessor resource allocation is required.

Our closed-loop cache-aware adaptive scheduling framework takes resource availability and resource usage patterns into account by measuring time-varying factors such as cache miss counts, stalls and instruction counts. More specifically, the cache usage pattern of a thread is identified using the QR recursive least squares (RLS) algorithm and cache miss count time-series statistics. For the identified cache resource dynamics, the framework enforces instruction fairness for the threads. Fairness, in the context of our research project, is defined as resource allocation equity, which reduces co-runner thread dependence in a shared resource environment. In this way, instruction count degradation due to shared cache resource conflicts is overcome. In this respect, our framework contributes to the research field in two major and three minor aspects. The two major contributions lead to the cache-aware scheduling system. The first major contribution is the development of the execution fairness algorithm, which reduces the co-runner cache impact on thread performance. The second is the development of the relevant mathematical models, such as the thread execution pattern and cache access pattern models, which formulate the execution fairness algorithm in terms of mathematical quantities. Following the development of the cache-aware scheduling system, our adaptive self-tuning control framework is constructed to add an adaptive closed-loop aspect to it. This control framework consists of two main components: the parameter estimator and the controller design module. The first minor contribution is the development of the parameter estimators; the QR RLS algorithm is applied within our framework to estimate the highly uncertain and time-varying cache resource patterns of threads.

The second minor contribution is the design of the controller module; the algebraic controller design algorithm, pole placement, is used to design a controller able to provide the desired time-varying control action. The adaptive self-tuning control framework and the cache-aware scheduling system together constitute our final framework, the closed-loop cache-aware adaptive scheduling framework. The third minor contribution is the validation of this framework's efficiency in overcoming co-runner cache dependency. Time-series statistical counters are developed for the M-Sim multi-core simulator, and the theoretical findings and mathematical formulations are implemented as MATLAB m-file code. In this way, the overall framework is tested and the experimental outcomes are analyzed. According to these outcomes, our closed-loop cache-aware adaptive scheduling framework successfully drives the co-runner-cache-dependent thread instruction count to the co-runner-independent instruction count with an error margin of up to 25% when the cache is highly utilized. In addition, the thread cache access pattern is estimated with 75% accuracy.
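As a rough stand-in for the QR-RLS estimator described above, sketched here in its standard exponentially weighted RLS form rather than the QR-factorized variant the thesis uses, the following Python class recursively fits a linear model relating recent cache-miss counts to the next observation; the model `order`, forgetting factor and initialization are illustrative.

```python
import numpy as np

class RLSEstimator:
    """Exponentially weighted recursive least squares (sketch): estimates
    the coefficients of a linear model relating a regressor of recent
    cache-miss counts to the next observed count."""

    def __init__(self, order, forgetting=0.98, delta=100.0):
        self.w = np.zeros(order)        # estimated model coefficients
        self.P = np.eye(order) * delta  # inverse correlation matrix
        self.lam = forgetting           # forgetting factor (0 < lam <= 1)

    def update(self, phi, y):
        """phi: regressor vector (e.g. the last `order` miss counts);
        y: newly observed count.  Performs one RLS recursion."""
        Pphi = self.P @ phi
        gain = Pphi / (self.lam + phi @ Pphi)
        err = y - self.w @ phi          # a-priori prediction error
        self.w += gain * err
        self.P = (self.P - np.outer(gain, Pphi)) / self.lam
        return self.w @ phi             # prediction with updated coefficients
```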