183 results for Visual Saliency
Abstract:
Augmented visual feedback can have a profound bearing on the stability of bimanual coordination. Indeed, it has been used to render tractable the study of patterns of coordination that cannot otherwise be produced in a stable fashion. In previous investigations (Carson et al. 1999), we have shown that rhythmic movements, brought about by the contraction of muscles on one side of the body, lead to phase-locked changes in the excitability of homologous motor pathways of the opposite limb. The present study was conducted to assess whether these changes are influenced by the presence of visual feedback of the moving limb. Eight participants performed rhythmic flexion-extension movements of the left wrist to the beat of a metronome (1.5 Hz). In 50% of trials, visual feedback of wrist displacement was provided in relation to a target amplitude, defined by the mean movement amplitude generated during the immediately preceding no-feedback trial. Motor evoked potentials (MEPs) were evoked in the quiescent muscles of the right limb by magnetic stimulation of the left motor cortex. Consistent with our previous observations, MEP amplitudes were modulated during the movement cycle of the opposite limb. The extent of this modulation was, however, smaller in the presence of visual feedback of the moving limb (FCR ω² = 0.41; ECR ω² = 0.29) than in trials in which there was no visual feedback (FCR ω² = 0.51; ECR ω² = 0.48). In addition, the relationship between the level of FCR activation and the excitability of the homologous corticospinal pathway of the opposite limb was sensitive to the vision condition; the degree of correlation between the two variables was larger when there was no visual feedback of the moving limb. The results of the present study support the view that increases in the stability of bimanual coordination brought about by augmented feedback may be mediated by changes in the crossed modulation of excitability in homologous motor pathways.
Abstract:
It is well known that context influences our perception of visual motion direction. For example, spatial and temporal context manipulations can be used to induce two well-known motion illusions: direction repulsion and the direction after-effect (DAE). Both result in inaccurate perception of direction when a moving pattern is either superimposed on (direction repulsion), or presented following adaptation to (DAE), another pattern moving in a different direction. Remarkable similarities in tuning characteristics suggest that common processes underlie the two illusions. What is not clear, however, is whether the processes driving the two illusions are expressions of the same or different neural substrates. Here we report two experiments demonstrating that direction repulsion and the DAE are, in fact, expressions of different neural substrates. Our strategy was to use each of the illusions to create a distorted perceptual representation upon which the mechanisms generating the other illusion could potentially operate. We found that the processes mediating direction repulsion did indeed access the distorted perceptual representation induced by the DAE. Conversely, the DAE was unaffected by direction repulsion. Thus parallels in perceptual phenomenology do not necessarily imply common neural substrates. Our results also demonstrate that the neural processes driving the DAE occur at an earlier stage of motion processing than those underlying direction repulsion.
Abstract:
In this paper we present the application of Hidden Conditional Random Fields (HCRFs) to modelling speech for visual speech recognition. HCRFs can readily be adapted to model long-range dependencies across an observation sequence; as a result, visual word recognition performance can be improved because the model exploits more contextual information when generating state sequences. Results are presented for a speaker-dependent, isolated-digit, visual speech recognition task and compared against a baseline HMM system. We first show that word recognition rates on clean video using HCRFs can be improved by increasing the number of past and future observations taken into account by each state. Secondly, we compare model performance at various levels of video compression on the test set. To our knowledge, this is the first use of HCRFs for visual speech recognition.
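The contextual-window idea above, where each state conditions on a number of past and future observations, can be sketched with a simple feature-stacking function. This is an illustrative reconstruction, not the authors' code, and the feature vectors are hypothetical:

```python
def stack_window(frames, w):
    """Edge-pad and concatenate each frame with its w past and w future
    neighbours, so a state at time t sees 2*w + 1 observations of context."""
    T = len(frames)
    out = []
    for t in range(T):
        window = []
        for k in range(t - w, t + w + 1):
            k = min(max(k, 0), T - 1)  # clamp indices at the sequence edges
            window.extend(frames[k])
        out.append(window)
    return out

# Hypothetical 2-D visual feature vectors for a 4-frame sequence
feats = [[0.1, 0.2], [0.3, 0.4], [0.5, 0.6], [0.7, 0.8]]
ctx = stack_window(feats, w=1)  # each output vector now holds 3 frames (6 values)
```

Widening `w` corresponds to the paper's manipulation of how many past and future observations each state takes into account.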
Abstract:
Accurate estimates of the time-to-contact (TTC) of approaching objects are crucial for survival. We used an ecologically valid driving simulation to compare and contrast the neural substrates of egocentric (head-on approach) and allocentric (lateral approach) TTC tasks in a fully factorial, event-related fMRI design. Compared to colour control tasks, both egocentric and allocentric TTC tasks activated left ventral premotor cortex/frontal operculum and inferior parietal cortex, the same areas that have previously been implicated in temporal attentional orienting. Despite differences in visual and cognitive demands, both TTC and temporal orienting paradigms encourage the use of temporally predictive information to guide behaviour, suggesting these areas may form a core network for temporal prediction. We also demonstrated that the temporal derivative of the perceptual index tau (tau-dot) held predictive value for making collision judgements and varied inversely with activity in primary visual cortex (V1). Specifically, V1 activity increased with the increasing likelihood of reporting a collision, suggesting top-down attentional modulation of early visual processing areas as a function of subjective collision. Finally, egocentric viewpoints provoked a response bias for reporting collisions, rather than no-collisions, reflecting increased caution for head-on approaches. Associated increases in SMA activity suggest motor preparation mechanisms were engaged, despite the perceptual nature of the task.
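The perceptual index tau and its temporal derivative can be illustrated with a small numerical sketch. Under the small-angle approximation, an object of physical size S at distance Z subtends a visual angle θ ≈ S/Z, and τ = θ/θ̇ estimates time-to-contact; for a constant-velocity approach, τ̇ ≈ -1. The trajectory below is hypothetical and not taken from the study:

```python
def tau_series(distances, dt, S=1.0):
    """Optical tau = theta / theta_dot, computed from a sampled distance
    trajectory via the small-angle approximation theta ≈ S / Z and a
    finite-difference estimate of theta_dot."""
    thetas = [S / Z for Z in distances]
    taus = []
    for t in range(1, len(thetas)):
        theta_dot = (thetas[t] - thetas[t - 1]) / dt
        taus.append(thetas[t] / theta_dot)
    return taus

# Constant-velocity head-on approach: Z(t) = 10 m - (1 m/s) * t, sampled at 10 Hz
Z = [10 - 0.1 * i for i in range(50)]
taus = tau_series(Z, dt=0.1)
tau_dot = (taus[1] - taus[0]) / 0.1  # ≈ -1 for a constant-velocity approach
```

Departures of τ̇ from this constant-velocity baseline are what carry the predictive information about collision referred to in the abstract.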
Abstract:
In this survey, we summarize the latest developments in visual cryptography (VC) since its inception in 1994, introduce the main research topics in this area, and outline the current problems and possible solutions. Directions and trends for future VC work are also examined, along with possible VC applications.
Abstract:
In this paper, we take multiple secrets into consideration and generate a single key share for all the secrets; correspondingly, we share each secret using this key share. The secrets are recovered by superimposing the key share on the combined share at different locations under the proposed scheme. We also discuss and illustrate how to embed a share of visual cryptography into halftone and colour images; the remaining share is used as a key share to perform the decryption. It is worth noting that no information about the secrets is leaked in any of the proposed schemes. Corresponding experimental results are provided.
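The multi-secret scheme described here builds on visual cryptography's classic (2,2) construction, in which each secret pixel is expanded into subpixel patterns and physically superimposing two transparencies acts as a logical OR. A minimal sketch of that base construction (not the authors' multi-secret scheme) follows:

```python
import random

def make_shares(secret, seed=0):
    """Classic (2,2) visual cryptography: each secret pixel (0 = white,
    1 = black) expands to a pair of subpixels in each of two shares."""
    rng = random.Random(seed)
    share1, share2 = [], []
    for pixel in secret:
        pattern = rng.choice([(0, 1), (1, 0)])  # each share alone looks random
        share1.append(pattern)
        if pixel == 0:
            share2.append(pattern)                           # white: identical
        else:
            share2.append((1 - pattern[0], 1 - pattern[1]))  # black: complementary
    return share1, share2

def stack(share1, share2):
    """Superimposing printed transparencies behaves like a logical OR."""
    return [(a[0] | b[0], a[1] | b[1]) for a, b in zip(share1, share2)]

secret = [1, 0, 1, 1, 0]
s1, s2 = make_shares(secret)
recovered = stack(s1, s2)
# Black secret pixels stack to fully black (1, 1); white pixels stay half black,
# so the secret appears by contrast while each share alone reveals nothing.
```

The "no information is leaked" property of the abstract corresponds here to each individual share being a uniformly random subpixel pattern regardless of the secret.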
Abstract:
OBJECTIVE:
To elucidate the contribution of environmental versus genetic factors to the significant losses in visual function associated with normal aging.
DESIGN:
A classical twin study.
PARTICIPANTS:
Forty-two twin pairs (21 monozygotic and 21 dizygotic; age 57-75 years) with normal visual acuity recruited through the Australian Twin Registry.
METHODS:
Cone function was evaluated by establishing absolute cone contrast thresholds to flicker (4 and 14 Hz) and isoluminant red and blue colors under steady state adaptation. Adaptation dynamics were determined for both cones and rods. Bootstrap resampling was used to return robust intrapair correlations for each parameter.
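The pair-level bootstrap mentioned above can be sketched as follows: resample twin pairs with replacement and recompute the intrapair (Pearson) correlation on each replicate. The data and the estimator details below are hypothetical; the study's exact resampling procedure is not specified here:

```python
import random

def pearson(xs, ys):
    """Pearson correlation of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

def bootstrap_corr(pairs, n_boot=1000, seed=42):
    """Resample whole twin pairs with replacement; return the mean
    intrapair correlation and a percentile 95% confidence interval."""
    rng = random.Random(seed)
    rs = []
    for _ in range(n_boot):
        sample = [rng.choice(pairs) for _ in pairs]
        xs, ys = zip(*sample)
        if len(set(xs)) < 2 or len(set(ys)) < 2:
            continue  # degenerate resample: correlation undefined
        rs.append(pearson(xs, ys))
    rs.sort()
    return sum(rs) / len(rs), (rs[int(0.025 * len(rs))], rs[int(0.975 * len(rs))])

# Hypothetical log cone-threshold data for 8 monozygotic pairs (twin1, twin2)
mz_pairs = [(1.02, 1.05), (0.88, 0.91), (1.10, 1.04), (0.95, 0.99),
            (1.20, 1.15), (0.80, 0.86), (1.00, 1.03), (0.92, 0.90)]
mean_r, ci95 = bootstrap_corr(mz_pairs, n_boot=2000)
```

Resampling whole pairs (rather than individual twins) preserves the within-pair dependence that the intrapair correlation is meant to measure.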
MAIN OUTCOME MEASURES:
Psychophysical thresholds and adaptational time constants.
RESULTS:
The intrapair correlations for all color and flicker thresholds, as well as cone absolute threshold, were significantly higher in monozygotic compared with dizygotic twin pairs (P<0.05). Rod absolute thresholds (P = 0.28) and rod and cone recovery rate (P = 0.83; P = 0.79, respectively) did not show significant differences between monozygotic and dizygotic twins in their intrapair correlations. This indicates that steady-state cone thresholds and flicker thresholds have a marked genetic contribution, in contrast with rod thresholds and adaptive processes, which are influenced more by environmental factors over a lifetime.
CONCLUSIONS:
Genes and the environment contribute differently to important neuronal processes in the retina and, consequently, to the role these processes may play in the decline of visual function with age. Retinal structures involved in rod thresholds and adaptive processes may therefore be responsive to appropriate environmental manipulation. Because the functions tested are commonly impaired in the early stages of age-related macular degeneration, which is known to have a multifactorial etiology, this study supports the view that pathogenic pathways early in the disease may be altered by appropriate environmental intervention.