Abstract:
Perception is linked to action via two routes: a direct route based on affordance information in the environment and an indirect route based on semantic knowledge about objects. The present study explored the factors modulating the recruitment of the two routes, in particular which factors affect the selection of paired objects. In Experiment 1, we presented real objects among semantically related or unrelated distracters. Participants had to select two objects that can interact. The presence of distracters affected selection times, but the semantic relation between the objects and the distracters did not. Furthermore, participants first selected the active object (e.g. teaspoon) with their right hand, followed by the passive object (e.g. mug), often with their left hand. In Experiment 2, we presented pictures of the same objects with no hand grip, a congruent hand grip, or an incongruent hand grip. Participants had to decide whether the two objects can interact. Action decisions were faster when the presentation of the active object preceded that of the passive object, and when the grip was congruent. Interestingly, participants were slower when the objects were semantically but not functionally related; this effect increased with congruently gripped objects. Our data showed that action decisions in the presence of strong affordance cues (real objects, pictures of congruently gripped objects) relied on sensory-motor representations, supporting a direct route from perception to action that bypasses semantic knowledge. In the case of weak affordance cues (pictures), however, semantic information interfered with action decisions, indicating that semantic knowledge impacts action decisions. The data support the dual-route account of perception-to-action.
Video stimuli reduce object-directed imitation accuracy: a novel two-person motion-tracking approach
Abstract:
Imitation is an important form of social behavior, and research has aimed to discover and explain the neural and kinematic aspects of imitation. However, much of this research has featured single participants imitating in response to pre-recorded video stimuli. This is in spite of findings that show reduced neural activation to video vs. real-life movement stimuli, particularly in the motor cortex. We investigated the degree to which video stimuli may affect the imitation process using a novel motion-tracking paradigm with high spatial and temporal resolution. We recorded 14 positions on the hands, arms, and heads of two individuals in an imitation experiment. One individual moved freely within given parameters (moving balls across a series of pegs) and a second participant imitated. This task was performed with either simple (one ball) or complex (three balls) movement difficulty, and either face-to-face or via a live video projection. After an exploratory analysis, three dependent variables were chosen for examination: 3D grip position, joint angles in the arm, and grip aperture. A cross-correlation and multivariate analysis revealed that object-directed imitation task accuracy (as represented by grip position) was reduced in video compared to face-to-face feedback, and in complex compared to simple difficulty. This was most prevalent in the left-right and forward-back motions, relative to the imitator sitting face-to-face with the actor or with a live projected video of the same actor. The results suggest that for tasks which require object-directed imitation, video stimuli may not be an ecologically valid way to present task materials. However, no similar effects were found in the joint-angle and grip-aperture variables, suggesting that there are limits to the influence of video stimuli on imitation. The implications of these results are discussed with regard to previous findings, and with suggestions for future experimentation.
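The cross-correlation analysis mentioned above can be illustrated with a minimal sketch. The abstract does not specify the authors' implementation; the helper below is hypothetical and simply computes the peak normalized cross-correlation (and the lag at which it occurs) between two same-length, same-rate 1-D motion traces, such as one axis of the actor's and imitator's 3D grip position.

```python
import numpy as np

def max_cross_correlation(actor, imitator):
    """Peak normalized cross-correlation between two 1-D motion traces.

    Hypothetical helper, not the authors' code: both inputs are
    position time series of equal length sampled at the same rate
    (e.g. the left-right component of 3D grip position). Returns the
    peak correlation (1.0 = perfect match) and the lag, in samples,
    at which it occurs.
    """
    a = (actor - actor.mean()) / (actor.std() * len(actor))
    b = (imitator - imitator.mean()) / imitator.std()
    corr = np.correlate(a, b, mode="full")  # correlation at every relative lag
    lag = int(corr.argmax()) - (len(b) - 1)  # offset of the best alignment
    return float(corr.max()), lag
```

For identical traces the peak is 1.0 at lag 0; a delayed but otherwise faithful imitation shows a high peak at a nonzero lag, which is one way reduced accuracy in the video condition could manifest as a lower peak.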
Abstract:
This paper describes a new approach to detecting and tracking maritime objects in real time. The approach particularly addresses the highly dynamic maritime environment, panning cameras, and target scale changes, and it operates on both visible and thermal imagery. Object detection is based on agglomerative clustering of temporally stable features. Object extents are first determined based on the persistence of detected features and their relative separation and motion attributes. An explicit cluster merging and splitting process handles object creation and separation. Stable object clusters are tracked frame-to-frame. The effectiveness of the approach is demonstrated on four challenging real-world public datasets.
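The clustering step described above can be sketched in simplified form. The abstract does not give the authors' algorithm details, so the function below is only an illustration: single-linkage agglomerative grouping of 2-D feature points via union-find, merging any features closer than a hypothetical distance threshold `max_dist`. A real system would additionally weight feature persistence and motion similarity, which are omitted here.

```python
import numpy as np

def cluster_features(points, max_dist):
    """Single-linkage agglomerative grouping of feature points.

    Illustrative sketch only (not the paper's implementation):
    `points` is an (N, 2) array of feature coordinates; any two
    features within `max_dist` of each other are merged into the
    same object cluster using a union-find structure. Returns a
    list of clusters, each a list of point indices.
    """
    n = len(points)
    parent = list(range(n))

    def find(i):  # find root with path compression
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    # Merge every pair of features within the distance threshold.
    for i in range(n):
        for j in range(i + 1, n):
            if np.linalg.norm(points[i] - points[j]) <= max_dist:
                parent[find(i)] = find(j)

    # Collect cluster membership by root.
    clusters = {}
    for i in range(n):
        clusters.setdefault(find(i), []).append(i)
    return list(clusters.values())
```

In a frame-to-frame tracker, the resulting clusters would be matched against the previous frame's clusters, with explicit merge and split handling when cluster membership changes, as the abstract describes.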