40 results for Motion perception (Vision)
Abstract:
Laughter is a frequently occurring social signal and an important part of human non-verbal communication. However, it is often overlooked as a serious topic of scientific study. While the lack of research in this area is mostly due to laughter's non-serious nature, laughter is also a particularly difficult social signal to produce on demand in a convincing manner, which makes it a difficult topic for study in laboratory settings. In this paper we provide some techniques and guidance for inducing both hilarious laughter and conversational laughter. These techniques were devised with the goal of capturing motion information related to laughter while the person laughing was either standing or seated. Comments on the value of each of the techniques, and general guidance as to the importance of atmosphere, environment, and social setting, are provided.
Abstract:
Paradoxical kinesia describes the motor improvement in Parkinson's disease (PD) triggered by the presence of external sensory information relevant to the movement. This phenomenon has puzzled scientists for over 60 years, in both neurological and motor control research, and the underpinning mechanism remains the subject of fierce debate. In this paper we present novel evidence supporting the idea that the key to understanding paradoxical kinesia lies in both the spatial and temporal information conveyed by the cues and in the coupling between perception and action. We tested a group of 7 idiopathic PD patients on an upper-limb mediolateral movement task. Movements were performed with and without a visual point-light display travelling at 3 different speeds. The dynamic information presented in the point-light display depicted three different movement speeds of the same amplitude, performed by a healthy adult. The displays were tested and validated on a group of neurologically healthy participants before being used with the PD group. Our data show that the temporal aspects of movement (kinematics) in PD can be moderated by the prescribed temporal information presented in a dynamic environmental cue. Patients demonstrated a significant improvement in movement time and peak velocity when executing the movement in accordance with the information afforded by the point-light display, compared with when a movement of the same amplitude and direction was performed without the display. In all patients we observed the effect of paradoxical kinesia, with a strong relationship between the perceptual information prescribed by the biological motion display and the patients' observed motor performance.
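As a rough illustration of the speed manipulation described above, the sketch below generates a same-amplitude movement trace at three movement times using a minimum-jerk profile. The amplitude, timing values, and kinematic model are illustrative assumptions only; the study's displays were recorded from a healthy adult, not synthesised.

```python
# Minimal sketch: render one movement amplitude at three different speeds, as a
# point-light display would, and report the kinematic measures the abstract
# mentions (movement time, peak velocity). The minimum-jerk profile is a
# stand-in for the recorded human movement, not the paper's actual stimulus.
import numpy as np

def minimum_jerk(amplitude_cm, movement_time_s, fps=60):
    """Position and velocity of a point covering `amplitude_cm` in `movement_time_s`."""
    t = np.linspace(0.0, 1.0, int(movement_time_s * fps))
    pos = amplitude_cm * (10 * t**3 - 15 * t**4 + 6 * t**5)
    vel = np.gradient(pos, movement_time_s / (len(t) - 1))
    return pos, vel

for movement_time in (0.6, 1.0, 1.6):   # three display speeds, same amplitude
    pos, vel = minimum_jerk(amplitude_cm=30.0, movement_time_s=movement_time)
    print(f"MT = {movement_time:.1f} s, peak velocity = {vel.max():.1f} cm/s")
```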
Abstract:
Previous research has shown that prior adaptation to a spatially circumscribed, oscillating grating results in the duration of a subsequent stimulus briefly presented within the adapted region being underestimated. There is an ongoing debate about where in the motion processing pathway the adaptation underlying this distortion of sub-second duration perception occurs. One position is that the LGN and, perhaps, early cortical processing areas are likely sites for the adaptation; an alternative suggestion is that visual area MT+ contains the neural mechanisms for sub-second timing; and a third position proposes that the effect is driven by adaptation at multiple levels of the motion processing pathway. A related issue is the frame of reference, retinotopic or spatiotopic, in which adaptation-induced duration distortion occurs. We addressed these questions by having participants adapt to a unidirectional random dot kinematogram (RDK), and then measuring the perceived duration of a 600 ms test RDK positioned in either the same retinotopic or the same spatiotopic location as the adaptor. We found that, when it did occur, duration distortion of the test stimulus was direction contingent; that is, it occurred when the adaptor and test stimuli drifted in the same direction, but not when they drifted in opposite directions. Furthermore, the duration compression was evident primarily under retinotopic viewing conditions, with little evidence of duration distortion under spatiotopic viewing conditions. Our results support previous research implicating cortical mechanisms in the duration encoding of sub-second visual events, and reveal that these mechanisms encode duration within a retinotopic frame of reference.
Abstract:
Despite the importance of laughter in social interactions, it remains little studied in affective computing. Respiratory, auditory, and facial laughter signals have been investigated, but laughter-related body movements have received almost no attention. The aim of this study is twofold: first, an investigation into observers' perception of laughter states (hilarious, social, awkward, fake, and non-laughter) based on body movements alone, through their categorization of avatars animated with natural and acted motion capture data. Significant differences in torso and limb movements were found between animations perceived as containing laughter and those perceived as non-laughter. Hilarious laughter also differed from social laughter in the amount of bending of the spine, the amount of shoulder rotation, and the amount of hand movement. The body movement features indicative of laughter differed between sitting and standing avatar postures. Based on the positive findings of this perceptual study, the second aim is to investigate the possibility of automatically predicting the distributions of observers' ratings for the laughter states. The findings show that automated laughter recognition rates approach human rating levels, with the Random Forest method yielding the best performance.
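As a rough illustration of the second aim, the sketch below trains a multi-output Random Forest to map a handful of body-movement descriptors onto a distribution of ratings over the five laughter states. The feature names, synthetic data, and scikit-learn setup are illustrative assumptions rather than the study's actual pipeline.

```python
# Minimal sketch of predicting observers' rating distributions over laughter
# states from body-movement features with a Random Forest. The features and
# the synthetic data below are illustrative assumptions, not the study's data.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

STATES = ["hilarious", "social", "awkward", "fake", "non-laughter"]

rng = np.random.default_rng(0)
n_clips = 200
# per-clip movement descriptors, e.g. spine bend, shoulder rotation, hand motion
X = rng.normal(size=(n_clips, 3))
# per-clip distribution of observer ratings over the five states (rows sum to 1)
raw = rng.random(size=(n_clips, len(STATES)))
y = raw / raw.sum(axis=1, keepdims=True)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# RandomForestRegressor handles multi-output targets natively
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)

pred = np.clip(model.predict(X_te), 0, None)
pred /= pred.sum(axis=1, keepdims=True)   # renormalise to a proper distribution
print("mean absolute error:", np.abs(pred - y_te).mean())
```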
Abstract:
In this paper we extend the minimum-cost network flow approach to multi-target tracking by incorporating a motion model, allowing the tracker to better cope with long-term occlusions and missed detections. In our new method, the tracking problem is solved iteratively: first, an initial tracking solution is found without the help of motion information. Given this initial set of tracklets, the motion at each detection is estimated and used to refine the tracking solution. Finally, special edges are added to the tracking graph, allowing a further revised tracking solution to be found in which distant tracklets may be linked based on motion similarity. Our system has been tested on the PETS S2.L1 and Oxford town-center sequences, outperforming the baseline system and achieving results comparable with the current state of the art.
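The abstract builds on the standard minimum-cost network flow formulation of data association. Below is a minimal sketch of that baseline formulation using networkx; the detection positions, cost values, and two-track toy scene are illustrative assumptions, and the authors' motion-model edges and iterative refinement are not reproduced here.

```python
# Minimal sketch of min-cost network-flow data association for multi-target
# tracking. Each detection is split into an (in, out) node pair; the in->out
# edge carries a (negative) detection cost, and edges between consecutive
# frames carry a transition cost based on distance. Costs are scaled to ints
# because networkx's min_cost_flow expects integer weights.
import networkx as nx

# toy detections: frame index -> list of (x, y) positions
detections = {
    0: [(10.0, 10.0), (50.0, 50.0)],
    1: [(12.0, 11.0), (48.0, 52.0)],
    2: [(14.0, 12.0), (46.0, 54.0)],
}

def link_cost(p, q):
    """Transition cost between two detections (squared distance, scaled to int)."""
    return int(((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2) * 10)

def build_graph(dets_by_frame, n_tracks, det_cost=-100, entry_cost=0, exit_cost=0):
    G = nx.DiGraph()
    G.add_node("S", demand=-n_tracks)   # source supplies one unit of flow per track
    G.add_node("T", demand=n_tracks)    # sink absorbs them
    for t, dets in dets_by_frame.items():
        for i, p in enumerate(dets):
            u, v = ("in", t, i), ("out", t, i)
            # negative detection cost rewards explaining a detection with a track
            G.add_edge(u, v, capacity=1, weight=det_cost)
            G.add_edge("S", u, capacity=1, weight=entry_cost)   # track birth
            G.add_edge(v, "T", capacity=1, weight=exit_cost)    # track death
            if t + 1 in dets_by_frame:                          # frame-to-frame links
                for j, q in enumerate(dets_by_frame[t + 1]):
                    G.add_edge(v, ("in", t + 1, j), capacity=1,
                               weight=link_cost(p, q))
    return G

flow = nx.min_cost_flow(build_graph(detections, n_tracks=2))
# edges carrying one unit of flow define the tracks; chain them from "S" to read them off
links = [(u, v) for u, nbrs in flow.items() for v, f in nbrs.items() if f > 0]
```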
Abstract:
Accurately encoding the duration and temporal order of events is essential for survival and important to everyday activities, from holding conversations to driving in fast-flowing traffic. Although there is a growing body of evidence that the timing of brief events (<1 s) is encoded by modality-specific mechanisms, it is not clear how such mechanisms register event duration. One approach gaining traction is a channel-based model; this envisages narrowly tuned, overlapping timing mechanisms that respond preferentially to different durations. The channel-based model predicts that adapting to a given event duration will result in overestimating and underestimating the duration of longer and shorter events, respectively. We tested the model by having observers judge the duration of a brief (600 ms) visual test stimulus following adaptation to longer (860 ms) and shorter (340 ms) stimulus durations. The channel-based model predicts perceived duration compression of the test stimulus in the former condition and perceived duration expansion in the latter condition. Duration compression occurred in both conditions, suggesting that the channel-based model does not adequately account for the perceived duration of visual events.
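The channel-based prediction can be made concrete with a small simulation: a bank of duration-tuned channels whose gains are reduced around the adapted duration, with perceived duration read out from the population response. The channel spacing, tuning width, gain model, and readout below are illustrative assumptions, not parameters from the paper.

```python
# Minimal sketch of the channel-based model's prediction: adaptation suppresses
# channels tuned near the adapted duration, shifting the population readout for
# a 600 ms test away from the adaptor. All parameters here are illustrative.
import numpy as np

channels = np.array([200.0, 340.0, 600.0, 860.0, 1200.0])   # preferred durations (ms)
sigma = 0.35                                                  # tuning width in log-duration units

def responses(duration, gains):
    """Population response of the channel bank to a stimulus of `duration` ms."""
    return gains * np.exp(-0.5 * ((np.log(duration) - np.log(channels)) / sigma) ** 2)

def perceived(duration, gains):
    """Read perceived duration out of the population response (log-weighted mean)."""
    r = responses(duration, gains)
    return np.exp(np.sum(r * np.log(channels)) / np.sum(r))

def adapted_gains(adapt_duration, suppression=0.5):
    """Reduce each channel's gain in proportion to how strongly it was adapted."""
    return 1.0 - suppression * np.exp(
        -0.5 * ((np.log(adapt_duration) - np.log(channels)) / sigma) ** 2)

test = 600.0
baseline = perceived(test, np.ones_like(channels))
for adaptor in (860.0, 340.0):
    shift = perceived(test, adapted_gains(adaptor)) - baseline
    print(f"adapt to {adaptor:.0f} ms: 600 ms test shifts by {shift:+.0f} ms")
# Prints a negative shift (compression) after the 860 ms adaptor and a positive
# shift (expansion) after the 340 ms adaptor; the paper instead found compression
# in both conditions, which is the evidence against the model.
```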
Abstract:
The duration compression effect is a phenomenon in which prior adaptation to a spatially circumscribed dynamic stimulus results in the duration of subsequent sub-second stimuli presented in the adapted region being underestimated. There is disagreement over the frame of reference within which the duration compression phenomenon occurs. One view holds that the effect is driven by retinotopically tuned mechanisms located at early stages of visual processing; an alternative position is that the mechanisms are spatiotopic and occur at later stages of visual processing (MT+). We addressed the retinotopic-spatiotopic question by using adapting stimuli (drifting plaids) that are known to activate global-motion mechanisms in area MT. If spatiotopic mechanisms contribute to the duration compression effect, drifting plaid adaptors should be well suited to revealing them. Following adaptation, participants were tasked with estimating the duration of a 600 ms random dot stimulus, whose direction was identical to the pattern direction of the adapting plaid, presented at either the same retinotopic or the same spatiotopic location as the adaptor. Our results reveal significant duration compression in both conditions, pointing to the involvement of both retinotopically tuned and spatiotopically tuned mechanisms in the duration compression effect.