Abstract:
Estimating the abundance of cetaceans from aerial survey data requires careful attention to survey design and analysis. Once an aerial observer perceives a marine mammal or group of marine mammals, he or she has only a few seconds to identify and enumerate the individuals sighted, to determine the distance to the sighting, and to record this information. Line-transect survey analyses assume that the observer has correctly identified and enumerated the group or individual. We describe methods used to test this assumption and show how survey data should be adjusted to account for observer errors. Harbor porpoises (Phocoena phocoena) were censused during aerial surveys in the summer of 1997 in Southeast Alaska (9,844 km of survey effort), in the summer of 1998 in the Gulf of Alaska (10,127 km), and in the summer of 1999 in the Bering Sea (7,849 km). Sightings of harbor porpoise during a beluga whale (Delphinapterus leucas) survey in 1998 (1,355 km) provided data on harbor porpoise abundance in Cook Inlet for the Gulf of Alaska stock. Sightings by primary observers at side windows were compared with those of an independent observer at a belly window to estimate the probability of misidentification, the degree of underestimation of group size, and the probability that porpoise on the surface at the trackline were missed (perception bias, g(0)). There were 129, 96, and 201 sightings of harbor porpoises in the three stock areas, respectively. Both g(0) and effective strip width (the realized width of the survey track) depended on survey year, and g(0) also depended on the visibility reported by observers. Harbor porpoise abundance in 1997–99 was estimated at 11,146 animals for the Southeast Alaska stock, 31,046 animals for the Gulf of Alaska stock, and 48,515 animals for the Bering Sea stock.
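For context, the quantities named above (sighting counts, survey effort, effective strip width, and the perception-bias correction g(0)) enter the standard line-transect abundance estimator. The Python sketch below only illustrates that general formula with hypothetical group-size, strip-width, g(0), and study-area values; it is not the authors' analysis.

```python
def line_transect_abundance(n_sightings, mean_group_size, effort_km,
                            effective_strip_width_km, g0, study_area_km2):
    """Standard line-transect abundance estimator.

    Density = n * E[group size] / (2 * L * ESW * g(0));
    abundance = density * study area.
    """
    density = (n_sightings * mean_group_size) / (
        2.0 * effort_km * effective_strip_width_km * g0
    )
    return density * study_area_km2

# Hypothetical illustration only: 129 sightings over 9,844 km of effort
# (figures from the Southeast Alaska survey), with assumed values for the
# remaining inputs.
estimate = line_transect_abundance(
    n_sightings=129, mean_group_size=1.5, effort_km=9844,
    effective_strip_width_km=0.3, g0=0.7, study_area_km2=40_000)
print(f"Illustrative abundance estimate: {estimate:.0f} animals")
```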
Abstract:
Most behavioral tasks have time constraints for successful completion, such as catching a ball in flight. Many of these tasks require trading off the time allocated to perception and action, especially when only one of the two is possible at any time. In general, the longer we perceive, the smaller the uncertainty in perceptual estimates. However, a longer perception phase leaves less time for action, which results in less precise movements. Here we examine subjects catching a virtual ball. Critically, as soon as subjects began to move, the ball became invisible. We study how subjects trade off sensory and movement uncertainty by deciding when to initiate their actions. We formulate this task in a probabilistic framework and show that subjects' decisions about when to start moving are statistically near optimal given their individual sensory and motor uncertainties. Moreover, we accurately predict individual subjects' task performance. Thus we show that subjects in a natural task are quantitatively aware of how sensory and motor variability depend on time and act so as to minimize overall task variability.
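The perception-action trade-off described here can be made concrete with a simple variance model: sensory uncertainty shrinks the longer the ball is viewed, motor uncertainty grows as less time remains for the movement, and the optimal moment to start moving minimizes their combined variance. The functional forms and constants in the Python sketch below are assumptions chosen for illustration, not the paper's fitted model.

```python
import numpy as np

TOTAL_TIME = 1.0     # assumed time (s) available before the virtual ball arrives
SIGMA_SENSE = 2.0    # assumed baseline sensory noise (cm)
SIGMA_MOTOR = 0.5    # assumed baseline motor noise (cm)

def sensory_sd(t_view):
    # Assumption: perceptual uncertainty falls off as 1/sqrt(viewing time).
    return SIGMA_SENSE / np.sqrt(t_view)

def motor_sd(t_move):
    # Assumption: endpoint noise grows as movement time shrinks
    # (a speed-accuracy trade-off).
    return SIGMA_MOTOR / t_move

t_view = np.linspace(0.05, 0.95, 500)           # candidate times to start moving
total_var = sensory_sd(t_view)**2 + motor_sd(TOTAL_TIME - t_view)**2

t_opt = t_view[np.argmin(total_var)]
print(f"Variance-minimizing movement onset: {t_opt:.2f} s after launch")
```

Under this toy model, an observer with noisier perception (larger SIGMA_SENSE) should delay movement onset to gather more visual information, which is the kind of individual-specific prediction the abstract describes.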
Abstract:
Human locomotion is known to be influenced by observation of another person's gait. For example, athletes often synchronize their steps in long-distance races. However, how interaction with a virtual runner affects the gait of a real runner has not been studied. We investigated this by creating an illusion of running behind a virtual model (VM) using a treadmill and a large-screen virtual environment showing a video of a VM. We looked at step synchronization between the real and virtual runner and at the role of step frequency (SF) in the real runner's perception of VM speed. We found that subjects matched the VM's SF when asked to match the VM's speed with their own (Figure 1). This indicates that step synchronization may be a strategy for speed matching or speed perception. Subjects chose higher speeds when the VM's SF was higher (though the VM speed was 12 km/h in all videos). This effect was more pronounced when the speed estimate was rated verbally while standing still (Figure 2). This may be due to correlated physical activity affecting the perception of VM speed [Jacobs et al. 2005], or to step synchronization altering the subjects' perception of self speed [Durgin et al. 2007]. Our findings indicate that third-person activity in a collaborative virtual locomotive environment can have a pronounced effect on an observer's gait activity and their perceptual judgments of the activity of others: the SF of others (virtual or real) can potentially influence one's perception of self speed and lead to changes in speed and SF. A better understanding of the underlying mechanisms would support the design of more compelling virtual trainers and may be instructive for competitive athletics in the real world. © 2009 ACM.
Abstract:
We compared early stages of face processing in young and older participants, as indexed by ERPs elicited by faces and non-face stimuli presented in upright and inverted orientations. The P1 and N170 components were larger in older than in young participants.
Abstract:
We explored whether mice perceive the depth of space based on the visual size of object targets when other visual cues, such as perspective and partial occlusion, were excluded. A mouse was placed on a height-adjustable platform inside a box whose walls were all dark except the bottom, through which light was projected as the sole visual cue. The visual object cue consisted of a 4x4 grid, allowing the mouse to estimate the distance of the platform relative to the grid. Three grid sizes, each reduced by a proportion of 2/3, and seven equally spaced distances between the platform and the grid at the bottom were used in the experiments. The time a mouse stayed on the platform at each height was recorded while the different grid sizes were presented randomly, to test whether the mouse's judgment of the depth of the platform above the bottom was affected by the size of the visual target. Under all three object-size conditions, the time mice stayed on the platform increased with height. At distances of 20–30 cm, the mice did not use the size information of the target to judge depth, relying mainly on binocular disparity. At distances of less than 20 cm or more than 30 cm, however, and especially at the greater heights of 50, 60, and 70 cm, the mice were able to use size information to compensate for the lack of binocular disparity, presumably because only about one third of the mouse visual field is binocular. The behavioral paradigm established in the current study is a useful model that can be applied in experiments using transgenic mice to investigate the relationships between behaviors and gene functions.