10 results for Computer Game Testing
in CentAUR: Central Archive University of Reading - UK
Abstract:
A computer game was used to study psychophysiological reactions to emotion-relevant events. Two dimensions proposed by Scherer (1984a, 1984b) in his appraisal theory, the intrinsic pleasantness and goal conduciveness of game events, were studied in a factorial design. The relative level at which a player performed at the moment of an event was also taken into account. A total of 33 participants played the game while cardiac activity, skin conductance, skin temperature, and muscle activity as well as emotion self-reports were assessed. The self-reports indicate that game events altered levels of pride, joy, anger, and surprise. Goal conduciveness had little effect on muscle activity but was associated with significant autonomic effects, including changes to interbeat interval, pulse transit time, skin conductance, and finger temperature. The manipulation of intrinsic pleasantness had little impact on physiological responses. The results show the utility of attempting to manipulate emotion-constituent appraisals and measure their peripheral physiological signatures.
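As an illustrative aside (all numbers and names below are simulated stand-ins, not the study's data), the main effect of such a factorial manipulation on a physiological measure reduces to a simple cell contrast:

    # Sketch: main effect of goal conduciveness on interbeat interval (IBI).
    # Simulated data; the study's actual analysis pipeline is not described here.
    import numpy as np

    rng = np.random.default_rng(0)

    # Per-event IBI change (ms) for 33 players in each cell of the 2x2 design:
    # intrinsic pleasantness x goal conduciveness.
    cells = {
        ('pleasant',   'conducive'):   rng.normal(12, 8, 33),
        ('pleasant',   'obstructive'): rng.normal(-5, 8, 33),
        ('unpleasant', 'conducive'):   rng.normal(10, 8, 33),
        ('unpleasant', 'obstructive'): rng.normal(-6, 8, 33),
    }

    # Main effect: average over conducive cells minus average over obstructive.
    conducive   = np.concatenate([v for k, v in cells.items() if k[1] == 'conducive'])
    obstructive = np.concatenate([v for k, v in cells.items() if k[1] == 'obstructive'])
    print(f"goal conduciveness effect on IBI: {conducive.mean() - obstructive.mean():.1f} ms")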
Abstract:
The Web's link structure (termed the Web Graph) is a richly connected set of Web pages. Current applications use this graph for indexing and information retrieval purposes. In contrast, the relationship between the Web Graph and the application is reversed here by letting the structure of the Web Graph influence the behaviour of an application. Presents a novel Web crawling agent, AlienBot, the output of which is orthogonally coupled to the enemy generation strategy of a computer game. The Web Graph guides AlienBot, causing it to generate a stochastic process. Shows the effectiveness of such an unorthodox coupling for both the playability of the game and the heuristics of the Web crawler. In addition, presents results from the sample of Web pages collected by the crawling process. In particular, shows: how AlienBot was able to identify the power law inherent in the link structure of the Web; that 61.74 per cent of Web pages use some form of scripting technology; that the size of the Web can be estimated at just over 5.2 billion pages; and that less than 7 per cent of Web pages fully comply with some variant of (X)HTML.
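As a hedged aside (the paper's own fitting method is not described here; the data and tail cutoff below are simulated), a power law in crawled link counts is typically verified by a maximum-likelihood fit to the tail of the degree distribution:

    # Sketch: estimating a power-law exponent from crawled out-degree counts.
    import numpy as np

    rng = np.random.default_rng(1)

    # Simulated out-degrees with a heavy tail (true exponent about 2.1).
    degrees = (rng.pareto(1.1, 100_000) + 1).astype(int)

    x_min = 10                                  # tail cutoff, chosen by hand here
    tail = degrees[degrees >= x_min].astype(float)

    # Discrete approximation of the maximum-likelihood estimator
    # (Clauset, Shalizi and Newman): alpha = 1 + n / sum(ln(x / (x_min - 0.5)))
    alpha = 1.0 + len(tail) / np.log(tail / (x_min - 0.5)).sum()
    print(f"estimated exponent: {alpha:.2f} from {len(tail)} tail pages")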
Abstract:
Purpose – The purpose of this paper is to consider Turing's two tests for machine intelligence: the parallel-paired, three-participant game presented in his 1950 paper, and the “jury-service” one-to-one measure described two years later in a radio broadcast. Both versions were instantiated in practical Turing tests during the 18th Loebner Prize for artificial intelligence hosted at the University of Reading, UK, in October 2008, with jury-service tests in the preliminary phase and parallel-paired tests in the final phase. Design/methodology/approach – Almost 100 test results from the final have been evaluated, and this paper reports some intriguing nuances that arose during this unique contest. Findings – In the 2008 competition, Turing's 30 per cent pass rate was not achieved by any machine in the parallel-paired tests, but Turing's modified prediction (“at least in a hundred years time”) is remembered. Originality/value – The paper presents actual responses from “modern Elizas” to human interrogators during contest dialogues, showing considerable improvement in artificial conversational entities (ACE). Unlike their ancestor, Weizenbaum's natural language understanding system, ACE are now able to recall, share information and disclose personal interests.
Abstract:
The 1999 Kasparov-World game for the first time enabled anyone to join a team playing against a World Chess Champion via the web. It included a surprise in the opening, complex middle-game strategy and a deep ending. As the game headed for its mysterious finale, the World Team requested a KQQKQQ endgame table and was provided with two by the authors. This paper describes their work, compares the methods used, examines the issues raised and summarises the concepts involved for the benefit of future workers in the endgame field. It also notes the contribution of this endgame to chess itself.
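For readers new to the endgame field, here is a hedged sketch of the core idea behind such tables, retrograde analysis, worked on a toy game graph rather than real chess positions (the authors' actual construction methods are what the paper compares):

    # Sketch: retrograde analysis on a toy game graph (not real chess).
    # An endgame table labels every position with its game-theoretic value and
    # distance, found by working backwards from terminal positions.
    from collections import deque

    # moves[p] lists the positions the side to move can reach from p.
    moves = {0: [1, 2], 1: [3, 4], 2: [4], 3: [], 4: [5], 5: []}

    preds = {p: [] for p in moves}              # predecessor lists
    for p, succs in moves.items():
        for q in succs:
            preds[q].append(p)

    status = {}                                 # p -> ('WIN' | 'LOSS', plies)
    unproven = {p: len(s) for p, s in moves.items()}
    queue = deque()
    for p, succs in moves.items():
        if not succs:                           # no moves: lost, like checkmate
            status[p] = ('LOSS', 0)
            queue.append(p)

    while queue:
        q = queue.popleft()
        verdict, d = status[q]
        for p in preds[q]:
            if p in status:
                continue
            if verdict == 'LOSS':               # p can move into a lost position
                status[p] = ('WIN', d + 1)
                queue.append(p)
            else:
                unproven[p] -= 1
                if unproven[p] == 0:            # every move reaches a won position
                    status[p] = ('LOSS', d + 1)
                    queue.append(p)

    for p in sorted(status):
        print(p, *status[p])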
Abstract:
An eddy current testing system consists of a multi-sensor probe, a computer, and a special expansion card with software for data collection and analysis. The probe incorporates an excitation coil and sensor coils; at least one sensor coil is a lateral current-normal coil and at least one is a current perturbation coil.
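The abstract does not say how the software treats the coil signals; purely as an illustration, data-collection software for probes of this kind commonly performs lock-in demodulation of each sensor coil against the excitation reference:

    # Sketch: lock-in demodulation of a digitised sensor-coil signal.
    # Hypothetical signal; the patented system's actual software is unspecified.
    import numpy as np

    fs, f0 = 100_000.0, 1_000.0        # sample rate and excitation frequency (Hz)
    t = np.arange(0, 0.05, 1 / fs)     # 50 full excitation cycles

    rng = np.random.default_rng(2)
    signal = 0.8 * np.sin(2 * np.pi * f0 * t + 0.3) + 0.05 * rng.normal(size=t.size)

    # Multiply by in-phase and quadrature references, then average (low-pass).
    i = 2 * np.mean(signal * np.sin(2 * np.pi * f0 * t))
    q = 2 * np.mean(signal * np.cos(2 * np.pi * f0 * t))

    amplitude = np.hypot(i, q)         # ~0.8 V: coil response magnitude
    phase = np.arctan2(q, i)           # ~0.3 rad: shift induced by eddy currents
    print(f"amplitude {amplitude:.3f} V, phase {phase:.3f} rad")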
Abstract:
Theorem-proving is a one-player game. The history of computer programs as players goes back to 1956 and the ‘LT’ Logic Theory Machine of Newell, Shaw and Simon. In game-playing terms, the ‘initial position’ is the core set of axioms chosen for the particular logic and the ‘moves’ are the rules of inference. Now, the Univalent Foundations Program at IAS Princeton and the resulting ‘HoTT’ book on Homotopy Type Theory have demonstrated the success of a new kind of experimental mathematics using computer theorem proving.
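To make the game metaphor concrete, a minimal illustration in Lean 4 (a present-day theorem prover; this example is not taken from the HoTT book): the current goal is the position, and each tactic application is a move licensed by the rules of inference.

    -- A tiny proof "game": each tactic step is a legal move.
    theorem swap (p q : Prop) : p ∧ q → q ∧ p := by
      intro h                   -- move: assume the hypothesis p ∧ q
      exact ⟨h.right, h.left⟩   -- move: reassemble the conjunction, swapped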