2 results for Virtual Memory

in Abertay Research Collections - Abertay University's repository


Relevance: 30.00%

Abstract:

The self-reference effect (SRE) in memory is thought to depend on specialized mechanisms that enhance memory for self-relevant information. We investigated whether these mechanisms can be engaged "by proxy" when we simulate other people, by asking participants to interact with two virtual partners: one similar and one dissimilar to self. Participants viewed pairs of objects and picked one for themselves, for their similar partner, or for their dissimilar partner. A surprise memory test followed that required participants to identify which object of each pair had been chosen, and for whom. Finally, participants were shown both partners' object pairs again and asked to indicate their personal preference. Four key findings were observed. Overlap between participants' own choices and those made for a partner was significantly higher for the similar than for the dissimilar partner, revealing participants' use of their own preferences to simulate the similar partner. Recollection of chosen objects was significantly higher for self than for both partners and, critically, was significantly higher for the similar than the dissimilar partner. Source confusion between self and the similar partner was also higher. These findings suggest that self-reference by proxy enhances memory for non-self-relevant material, and we consider the theoretical implications for the functional interpretation of the SRE.
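
As an illustration of the choice-overlap measure described above, here is a minimal Python sketch. The data structures, names, and example values are assumptions for illustration only, not the authors' materials or analysis code; only the idea of scoring how often the choice made for a partner matches the participant's own preference is taken from the abstract.

```python
# Hypothetical sketch of a choice-overlap measure between a participant's
# own choices and the choices they made for a virtual partner.

def choice_overlap(own_choices: dict[int, str], partner_choices: dict[int, str]) -> float:
    """Proportion of object pairs where the choice made for the partner
    matches the participant's own choice for that pair."""
    shared_pairs = own_choices.keys() & partner_choices.keys()
    if not shared_pairs:
        return 0.0
    matches = sum(own_choices[p] == partner_choices[p] for p in shared_pairs)
    return matches / len(shared_pairs)

# Invented example: pair IDs map to the chosen object label.
own = {1: "mug", 2: "lamp", 3: "book"}
similar_partner = {1: "mug", 2: "lamp", 3: "pen"}
dissimilar_partner = {1: "cup", 2: "vase", 3: "pen"}

print(choice_overlap(own, similar_partner))     # -> 0.666..., higher overlap
print(choice_overlap(own, dissimilar_partner))  # -> 0.0, lower overlap
```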

Relevance: 20.00%

Abstract:

This paper describes an experiment developed to study the performance of virtual agent animated cues within digital interfaces. Increasingly, agents are used in virtual environments as part of the branding process and to guide user interaction. However, the level of agent detail required to establish and enhance efficient allocation of attention remains unclear. Although complex agent motion is now possible, it is costly to implement and so should only be routinely adopted if a clear benefit can be shown. Previous methods of assessing the effect of gaze cueing as a solution to scene complexity have relied principally on two-dimensional static scenes and manual peripheral inputs. Two experiments were run to address the question of agent cues on human-computer interfaces, measuring the efficiency of agent cues through participant responses by gaze and by touch, respectively. In the first experiment, an eye-movement recorder was used to directly assess the immediate overt allocation of attention by capturing participants' eye fixations following presentation of a cueing stimulus. We found that a fully animated agent could speed up user interaction with the interface: when user attention was directed using a fully animated agent cue, users responded 35% faster than with stepped 2-image agent cues and 42% faster than with a static 1-image cue. The second experiment recorded participant responses on a touch screen using the same agent cues. Analysis of the touch inputs confirmed the results of the gaze experiment: the fully animated agent again produced the fastest responses, although the differences between conditions were smaller. Responses to the fully animated agent were 17% and 20% faster than to the 2-image and 1-image cues, respectively. These results demonstrate the benefits of dynamic gaze and head cueing for users' eye movements and touch responses, and they inform techniques for engaging users' attention in complex scenes such as computer games and digital transactions within public or social interaction contexts.
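
To make the reported percentages concrete, here is a minimal Python sketch of one plausible reading of "35% faster": the reduction in mean response time relative to the slower cue condition. The response-time values below are invented for illustration and are not data from the study; they are merely chosen so the formula reproduces the reported 35% and 42% figures.

```python
# Sketch of the relative speed-up comparison between cue conditions.

def percent_faster(fast_ms: float, slow_ms: float) -> float:
    """Percentage reduction in mean response time of `fast_ms` relative to `slow_ms`."""
    return (slow_ms - fast_ms) / slow_ms * 100.0

# Hypothetical mean response times (milliseconds) per cue condition.
animated = 650.0    # fully animated agent cue
two_image = 1000.0  # stepped 2-image agent cue
one_image = 1120.0  # static 1-image cue

print(f"{percent_faster(animated, two_image):.0f}% faster than the 2-image cue")  # 35%
print(f"{percent_faster(animated, one_image):.0f}% faster than the 1-image cue")  # 42%
```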