942 results for Direction of motion
Abstract:
This article describes further evidence for a new neural network theory of biological motion perception. The theory clarifies why parallel streams V1 → V2, V1 → MT, and V1 → V2 → MT exist for static form and motion form processing among the areas V1, V2, and MT of visual cortex. The theory suggests that the static form system (Static BCS) generates emergent boundary segmentations whose outputs are insensitive to direction-of-contrast and insensitive to direction-of-motion, whereas the motion form system (Motion BCS) generates emergent boundary segmentations whose outputs are insensitive to direction-of-contrast but sensitive to direction-of-motion. The theory is used to explain classical and recent data about short-range and long-range apparent motion percepts that have not yet been explained by alternative models. These data include beta motion; split motion; gamma motion and reverse-contrast gamma motion; delta motion; visual inertia; the transition from group motion to element motion in response to a Ternus display as the interstimulus interval (ISI) decreases; group motion in response to a reverse-contrast Ternus display even at short ISIs; speed-up of motion velocity as interflash distance increases or flash duration decreases; dependence of the transition from element motion to group motion on stimulus duration and size; various classical dependencies between flash duration, spatial separation, ISI, and motion threshold known as Korte's Laws; dependence of motion strength on stimulus orientation and spatial frequency; short-range and long-range form-color interactions; and binocular interactions of flashes to different eyes.
Abstract:
How do human observers perceive a coherent pattern of motion from a disparate set of local motion measures? Our research has examined how ambiguous motion signals along straight contours are spatially integrated to obtain a globally coherent perception of motion. Observers viewed displays containing a large number of apertures, with each aperture containing one or more contours whose orientations and velocities could be independently specified. The total pattern of the contour trajectories across the individual apertures was manipulated to produce globally coherent motions, such as rotations, expansions, or translations. For displays containing only straight contours extending to the circumferences of the apertures, observers' reports of global motion direction were biased whenever the sampling of contour orientations was asymmetric relative to the direction of motion. Performance was improved by the presence of identifiable features, such as line ends or crossings, whose trajectories could be tracked over time. The reports of our observers were consistent with a pooling process involving a vector average of measures of the component of velocity normal to contour orientation, rather than with the predictions of the intersection-of-constraints analysis in velocity space.
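As a concrete illustration of the pooling comparison described above, the following Python sketch (illustrative only, not the authors' analysis code; the orientation sample and function names are assumptions) computes the normal velocity component visible through each aperture for a known global translation, pools those components by vector averaging, and contrasts the result with a least-squares intersection-of-constraints estimate. With an asymmetric sample of contour orientations the vector average is biased away from the true direction, while the IOC estimate recovers it.

```python
import numpy as np

def normal_components(v_global, orientations_deg):
    """Normal velocity component measured through each aperture.

    The unit normal of each contour is perpendicular to its orientation;
    only the projection of the global velocity onto that normal is visible.
    """
    theta = np.radians(np.asarray(orientations_deg, float) + 90.0)
    normals = np.column_stack([np.cos(theta), np.sin(theta)])
    speeds = normals @ np.asarray(v_global, float)   # signed normal speeds
    return speeds[:, None] * normals                 # normal velocity vectors

def vector_average(normal_vecs):
    """Pooling by averaging the local normal-velocity vectors."""
    return normal_vecs.mean(axis=0)

def ioc_estimate(normal_vecs):
    """Least-squares intersection of constraints: solve v . n_i = s_i."""
    speeds = np.linalg.norm(normal_vecs, axis=1)
    normals = normal_vecs / speeds[:, None]
    v, *_ = np.linalg.lstsq(normals, speeds, rcond=None)
    return v

v_true = np.array([1.0, 0.0])                          # rightward global translation
local = normal_components(v_true, [20.0, 30.0, 40.0])  # asymmetric orientation sample
print(vector_average(local))   # biased away from the true direction
print(ioc_estimate(local))     # recovers [1, 0]
```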
Abstract:
Our understanding of how the visual system processes motion transparency, the phenomenon by which multiple directions of motion are perceived to co-exist in the same spatial region, has grown considerably in the past decade. There is compelling evidence that the process is driven by global-motion mechanisms. Consequently, although transparently moving surfaces are readily segmented over an extended space, the visual system cannot separate two motion signals that co-exist in the same local region. A related issue is whether the visual system can detect transparently moving surfaces simultaneously, or whether the component signals encounter a serial 'bottleneck' during their processing. Our initial results show that, at sufficiently short stimulus durations, observers cannot accurately detect two superimposed directions; yet they have no difficulty in detecting one pattern direction in noise, supporting the serial-bottleneck scenario. However, in a second experiment, the difference in performance between the two tasks disappears when the component patterns are segregated. This discrepancy between the processing of transparent and non-overlapping patterns may be a consequence of suppressed activity of global-motion mechanisms when the transparent surfaces are presented in the same depth plane. To test this explanation, we repeated our initial experiment while separating the motion components in depth. The marked improvement in performance leads us to conclude that transparent motion signals are represented simultaneously.
Abstract:
Respiratory motion introduces complex spatio-temporal variations in the dosimetry of radiotherapy and may contribute towards uncertainties in radiotherapy planning. This study investigates the potential radiobiological implications occurring due to tumour motion in areas of geometric miss in lung cancer radiotherapy. A bespoke phantom and motor-driven platform to replicate respiratory motion and study the consequences on tumour cell survival in vitro was constructed. Human non-small-cell lung cancer cell lines H460 and H1299 were irradiated in modulated radiotherapy configurations in the presence and absence of respiratory motion. Clonogenic survival was calculated for irradiated and shielded regions. Direction of motion, replication of dosimetry by multi-leaf collimator (MLC) manipulation and oscillating lead shielding were investigated to confirm differences in cell survival. Respiratory motion was shown to significantly increase survival for out-of-field regions for H460/H1299 cell lines when compared with static irradiation (p < 0.001). Significantly higher survival was found in the in-field region for the H460 cell line (p < 0.030). Oscillating lead shielding also produced these significant differences. Respiratory motion and oscillatory delivery of radiation dose to human tumour cells has a significant impact on in- and out-of-field survival in the presence of non-uniform irradiation in this in vitro set-up. This may have important radiobiological consequences for modulated radiotherapy in lung cancer.
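For readers unfamiliar with the clonogenic assay mentioned above, the sketch below shows the standard surviving-fraction calculation (colonies formed divided by cells seeded, normalised to the plating efficiency of unirradiated controls). It is a generic illustration with invented numbers, not the study's data or analysis scripts.

```python
def plating_efficiency(colonies_control, cells_seeded_control):
    """Fraction of unirradiated cells that grow into colonies."""
    return colonies_control / cells_seeded_control

def surviving_fraction(colonies, cells_seeded, pe):
    """Clonogenic surviving fraction, normalised to plating efficiency."""
    return colonies / (cells_seeded * pe)

# Invented numbers for illustration only (not data from the study).
pe = plating_efficiency(colonies_control=180, cells_seeded_control=300)
sf_in_field = surviving_fraction(colonies=25, cells_seeded=500, pe=pe)
sf_out_of_field = surviving_fraction(colonies=210, cells_seeded=500, pe=pe)
print(f"PE = {pe:.2f}; in-field SF = {sf_in_field:.3f}; "
      f"out-of-field SF = {sf_out_of_field:.3f}")
```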
Abstract:
The effect of multiple haptic distractors on target selection performance was examined in terms of times to select the target and the associated cursor movement patterns. Two experiments examined: a) The effect of multiple haptic distractors around a single target and b) the effect of inter-item spacing in a linear selection task. It was found that certain target-distractor arrangements hindered performance and that this could be associated with specific, explanatory cursor patterns. In particular, it was found that the presence of distractors along the task axis in front of the target was detrimental to performance, and that there was evidence to suggest that this could sometimes be associated with consequent cursor oscillation between distractors adjacent to a desired target. A further experiment examined the effect of target-distractor spacing in two orientations on a user’s ability to select a target when caught in the gravity well of a distractor. Times for movements in the vertical direction were found to be faster than those in the horizontal direction. In addition, although times for the vertical direction appeared equivalent across five target-distractor distances, times for the horizontal direction exhibited peaks at certain distances. The implications of these results for the design and implementation of haptically enhanced interfaces using the force feedback mouse are discussed.
Abstract:
The local speeds of object contours vary systematically with the cosine of the angle between the normal component of the local velocity and the global object motion direction. An array of Gabor elements whose speed changes with local spatial orientation in accordance with this pattern can appear to move as a single surface. The apparent direction of motion of plaids and Gabor arrays has variously been proposed to result from feature tracking, vector addition, and vector averaging, in addition to the geometrically correct global velocity as indicated by the intersection of constraints (IOC) solution. Here a new combination rule, the harmonic vector average (HVA), is introduced, as well as a new algorithm for computing the IOC solution. The vector sum can be discounted as an integration strategy because its magnitude increases with the number of elements. The vector average over local vectors that vary in direction always provides an underestimate of the true global speed. The HVA, however, provides the correct global speed and direction for an unbiased sample of local velocities with respect to the global motion direction, as is the case for a simple closed contour. The HVA over biased samples provides an aggregate velocity estimate that can still be combined through an IOC computation to give an accurate estimate of the global velocity, which is not true of the vector average. Psychophysical results for type II Gabor arrays show that perceived direction and speed fall close to the IOC solution for Gabor arrays having a wide range of orientations, but the IOC prediction fails as the mean orientation shifts away from the global motion direction and the orientation range narrows. In this case, perceived velocity generally defaults to the HVA.
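A minimal numerical sketch of the combination rules discussed above is given below, assuming the harmonic vector average is computed by reciprocating each local normal-velocity vector (speed inverted, direction preserved), averaging the reciprocals, and reciprocating the result; the function names and the orientation sample are illustrative, not the authors' code. For an unbiased (symmetric) sample of contour orientations the vector average underestimates the true speed, whereas the HVA returns the correct global velocity, consistent with the properties stated in the abstract.

```python
import numpy as np

def normal_velocities(speed, direction_deg, orientations_deg):
    """Local normal velocities produced by a rigid translation.

    Each contour only reveals the component of the global velocity along
    its normal: v_i = (V . n_i) n_i.
    """
    phi = np.radians(direction_deg)
    theta = np.radians(np.asarray(orientations_deg, float) + 90.0)  # contour normals
    s = speed * np.cos(theta - phi)                                 # signed normal speeds
    return np.column_stack([s * np.cos(theta), s * np.sin(theta)])

def vector_average(v):
    return v.mean(axis=0)

def harmonic_vector_average(v):
    """Reciprocate each vector (1/speed, same direction), average, reciprocate back."""
    recip = v / np.sum(v**2, axis=1, keepdims=True)
    mean_recip = recip.mean(axis=0)
    return mean_recip / np.sum(mean_recip**2)

# Unbiased sample of contour orientations around a rightward translation at speed 2.
local = normal_velocities(2.0, 0.0, [60.0, 75.0, 90.0, 105.0, 120.0])
print(vector_average(local))           # underestimates the true speed
print(harmonic_vector_average(local))  # ~[2, 0]: correct global speed and direction
```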
Abstract:
The aim of this thesis was to describe the development of motion analysis protocols for upper and lower limb applications using inertial sensor-based systems. Inertial sensor-based systems are relatively recent, so knowledge and development of methods and algorithms for using such systems for clinical purposes are limited compared with stereophotogrammetry. However, their advantages in terms of low cost, portability, and small size are a valid reason to follow this direction. When developing motion analysis protocols based on inertial sensors, attention must be given to several aspects, such as the accuracy of inertial sensor-based systems and their reliability. The need to develop specific algorithms, methods, and software for using these systems in specific applications is as important as the development of the motion analysis protocols themselves. For this reason, the goal of the three-year research project described in this thesis was achieved first of all by correctly designing the protocols based on inertial sensors, exploring and establishing which features were suitable for the specific application of each protocol. The use of optoelectronic systems was necessary because they provided a gold-standard, accurate measurement, which was used as a reference for the validation of the protocols based on inertial sensors. The protocols described in this thesis can be particularly helpful for rehabilitation centers in which the high cost of instrumentation or the limited working areas do not allow the use of stereophotogrammetry. Moreover, many applications requiring upper and lower limb motion analysis to be performed outside the laboratory will benefit from these protocols, for example performing gait analysis along corridors. Outside of buildings, steady-state walking or the behavior of prosthetic devices when encountering slopes or obstacles during walking can also be assessed. The application of inertial sensors to lower limb amputees presents conditions which are challenging for magnetometer-based systems, owing to the ferromagnetic materials commonly adopted in the construction of hydraulic components or motors. The INAIL Prostheses Centre stimulated and, together with Xsens Technologies B.V., supported the development of additional methods for improving the accuracy of the MTx in measuring the 3D kinematics of lower limb prostheses, with the results provided in this thesis. In the author's opinion, this thesis and the motion analysis protocols based on inertial sensors described here demonstrate how close collaboration between industry, clinical centers, and research laboratories can improve knowledge and exchange know-how, with the common goal of developing new application-oriented systems.
Abstract:
Perceptual learning can occur when stimuli are only imagined, i.e., without actual stimulus presentation. For example, perceptual learning improved bisection discrimination when only the two outer lines of the bisection stimulus were presented and the central line had to be imagined. Performance also improved with other static stimuli. In non-learning imagery experiments, imagining static stimuli is different from imagining motion stimuli. We hypothesized that those differences also affect imagery perceptual learning. Here, we show that imagery training also improves motion direction discrimination. Learning occurs when no stimulus at all is presented during training, whereas no learning occurs when only noise is presented. Interference between the noise and the mental imagery possibly hinders learning. For static bisection stimuli, the pattern is just the opposite: learning occurs when observers are presented with the two outer lines of the bisection stimulus, i.e., with only a part of the visual stimulus, while no learning occurs when no stimulus at all is presented.
Abstract:
The recurrent interaction among orientation-selective neurons in the primary visual cortex (V1) is suited to enhance contours in a noisy visual scene. Motion is known to have a strong pop-out effect in the perception of contours, but how motion-sensitive neurons in V1 support contour detection remains largely elusive. Here we suggest how the various types of motion-sensitive neurons observed in V1 should be wired together in a micro-circuitry to optimally extract contours in the visual scene. Motion-sensitive neurons can be selective for the direction of motion occurring at some spot or respond equally to all directions (pandirectional). We show that, in the light of figure-ground segregation, direction-selective motion neurons should additively modulate the corresponding orientation-selective neurons with preferred orientation orthogonal to the motion direction. In turn, to maximally enhance contours, pandirectional motion neurons should multiplicatively modulate all orientation-selective neurons with co-localized receptive fields. This multiplicative modulation amplifies the local V1 circuitry among co-aligned orientation-selective neurons for detecting elongated contours. We suggest that the additive modulation by direction-selective motion neurons is achieved through synaptic projections to the somatic region, and the multiplicative modulation by pandirectional motion neurons through projections to the apical region of orientation-selective pyramidal neurons. For the purpose of contour detection, the V1-intrinsic integration of motion information is advantageous over downstream integration because it exploits the recurrent V1 circuitry designed for that task.
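The proposed wiring can be caricatured with a toy rate model, sketched below under assumptions not in the abstract (a cos² orientation-tuning profile for the additive drive and arbitrary parameter values): direction-selective motion input adds drive to orientation-selective cells tuned orthogonal to the motion direction, while pandirectional motion input applies a multiplicative gain to the whole co-localized population.

```python
import numpy as np

def modulated_responses(baseline, orientation_prefs_deg,
                        motion_direction_deg, dir_drive, pan_gain):
    """Toy rate model of the proposed modulation scheme.

    baseline:              responses of co-localized orientation-selective cells
    orientation_prefs_deg: their preferred orientations (period 180 deg)
    dir_drive:             additive drive from direction-selective motion neurons,
                           targeted at the orientation orthogonal to the motion
    pan_gain:              multiplicative gain from pandirectional motion neurons,
                           applied to the whole co-localized population
    """
    prefs = np.asarray(orientation_prefs_deg, float)
    orthogonal = (motion_direction_deg + 90.0) % 180.0
    # Assumed cos^2 orientation tuning of the additive drive (period 180 deg).
    tuning = np.cos(np.radians(prefs - orthogonal)) ** 2
    return pan_gain * (np.asarray(baseline, float) + dir_drive * tuning)

prefs = [0.0, 45.0, 90.0, 135.0]
print(modulated_responses([1.0, 1.0, 1.0, 1.0], prefs,
                          motion_direction_deg=0.0, dir_drive=0.5, pan_gain=1.5))
# The cell tuned orthogonal to the motion direction (90 deg) receives the
# largest additive boost; the multiplicative gain scales all cells equally.
```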
Abstract:
We demonstrate performance-related changes in cortical and cerebellar activity. The largest learning-dependent changes were observed in the anterior lateral cerebellum, where the extent and intensity of activation correlated inversely with psychophysical performance. After learning had occurred (a few minutes), the cerebellar activation almost disappeared; however, it was restored when the subjects were presented with a novel, untrained direction of motion for which psychophysical performance also reverted to chance level. Similar reductions in the extent and intensity of brain activations in relation to learning occurred in the superior colliculus, anterior cingulate, and parts of the extrastriate cortex. The motion direction-sensitive middle temporal visual complex was a notable exception, where there was an expansion of the cortical territory activated by the trained stimulus. Together, these results indicate that the learning and representation of visual motion discrimination are mediated by different, but probably interacting, neuronal subsystems.
Abstract:
We sought to determine the extent to which red–green, colour-opponent mechanisms in the human visual system play a role in the perception of drifting luminance-modulated targets. Contrast sensitivity for the directional discrimination of drifting luminance-modulated (yellow–black) test sinusoids was measured following adaptation to isoluminant red–green sinusoids drifting in either the same or opposite direction. When the test and adapt stimuli drifted in the same direction, large sensitivity losses were evident at all test temporal frequencies employed (1–16 Hz). The magnitude of the loss was independent of temporal frequency. When adapt and test stimuli drifted in opposing directions, large sensitivity losses were evident at lower temporal frequencies (1–4 Hz) and declined with increasing temporal frequency. Control studies showed that this temporal-frequency-dependent effect could not reflect the activity of achromatic units. Our results provide evidence that chromatic mechanisms contribute to the perception of luminance-modulated motion targets drifting at speeds of up to at least 32°/s. We argue that such mechanisms most probably lie within a parvocellular-dominated cortical visual pathway, sensitive to both chromatic and luminance modulation, but only weakly selective for the direction of stimulus motion.
Abstract:
The Silk Road Project was a practice-based research project investigating the potential of motion capture technology to inform perceptions of embodiment in dance performance. The project created a multi-disciplinary collaborative performance event using dance performance and real-time motion capture at Deakin University’s Deakin Motion Lab. Several new technological advances in producing real-time motion capture performance were produced, along with a performance event that examined the aesthetic interplay between a dancer’s movement and the precise mappings of its trajectories created by motion capture and real-time motion graphic visualisations.
Abstract:
The accuracy of marker placement on palpable surface anatomical landmarks is an important consideration in biomechanics. Although marker placement reliability has been studied in some depth, it remains unclear whether or not the markers are accurately positioned over the intended landmark in order to define the static position and orientation of the segment. A novel method using commonly available X-ray imaging was developed to identify the accuracy of markers placed on the shoe surface by palpating landmarks through the shoe. Anterior–posterior and lateral–medial X-rays were taken of 24 participants with a newly developed marker set applied to both the skin and the shoe. The vector magnitude of both skin- and shoe-mounted markers from the anatomical landmark was calculated, as well as the mean marker offset between skin- and shoe-mounted markers. The accuracy of placing markers on the shoe relative to the skin-mounted markers, accounting for shoe thickness, was less than 5 mm for all markers studied. Further, when using the guidelines provided in this study, the method was deemed reliable (intra-rater ICCs = 0.50–0.92). In conclusion, the method proposed here can reliably assess marker placement accuracy on the shoe surface relative to chosen anatomical landmarks beneath the skin.
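A minimal sketch of the vector-magnitude (offset) calculation described above is shown below; the coordinates are invented for illustration and are not data from the study.

```python
import numpy as np

def marker_offset(marker_xyz, landmark_xyz):
    """Vector magnitude (Euclidean distance) between a marker and a landmark."""
    return np.linalg.norm(np.asarray(marker_xyz, float) - np.asarray(landmark_xyz, float))

# Invented coordinates in mm, digitised from the two X-ray views; not study data.
landmark    = [12.0, 48.5, 3.0]
skin_marker = [13.1, 49.0, 4.2]
shoe_marker = [15.4, 50.2, 6.1]

print(f"skin-marker offset:  {marker_offset(skin_marker, landmark):.1f} mm")
print(f"shoe-marker offset:  {marker_offset(shoe_marker, landmark):.1f} mm")
print(f"skin-to-shoe offset: {marker_offset(shoe_marker, skin_marker):.1f} mm")
```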