47 results for visual information

in CentAUR: Central Archive, University of Reading - UK


Relevance:

100.00%

Publisher:

Abstract:

A large volume of visual content remains inaccessible until effective and efficient indexing and retrieval of such data are achieved. In this paper, we introduce the DREAM system, a knowledge-assisted, semantic-driven, context-aware visual information retrieval system applied in the film post-production domain. We focus mainly on the automatic labelling and topic-map-related aspects of the framework. The use of context-related collateral knowledge, represented by a novel probabilistic visual keyword co-occurrence matrix, proved effective in the experiments conducted during system evaluation. The automatically generated semantic labels were fed into the Topic Map Engine, which automatically constructs ontological networks using Topic Maps technology, substantially enhancing the indexing and retrieval performance of the system at a higher semantic level.
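A visual keyword co-occurrence matrix of the kind described above can be sketched as a conditional-probability table estimated from annotated shots. The function name, input format, and normalisation below are illustrative assumptions, not the DREAM implementation.

```python
from collections import defaultdict

def cooccurrence_matrix(shots):
    """Estimate P(kw_b | kw_a) from the keyword sets of annotated shots.

    `shots` is a list of sets of visual keywords (illustrative input
    format). Returns a dict mapping (kw_a, kw_b) to the conditional
    probability of seeing kw_b in a shot labelled with kw_a."""
    pair = defaultdict(int)    # joint counts of keyword pairs
    single = defaultdict(int)  # marginal counts of single keywords
    for kws in shots:
        for a in kws:
            single[a] += 1
            for b in kws:
                if a != b:
                    pair[(a, b)] += 1
    return {p: c / single[p[0]] for p, c in pair.items()}

# Toy annotated shots; co-occurrence statistics can then boost
# candidate labels that fit the surrounding context.
shots = [{"indoor", "face"}, {"indoor", "face", "night"}, {"outdoor", "car"}]
m = cooccurrence_matrix(shots)
```

Such a table lets an automatic labeller prefer, say, "face" for an ambiguous region when "indoor" is already confidently assigned.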

Relevance:

100.00%

Publisher:

Abstract:

Two experiments were undertaken to examine whether there is an age-related change in the speed with which readers can capture visual information during fixations in reading. Children’s and adults’ eye movements were recorded as they read sentences that were presented either normally or as “disappearing text”. The disappearing text manipulation had a surprisingly small effect on the children, inconsistent with the notion of an age-related change in the speed with which readers can capture visual information from the page. Instead, we suggest that differences between adults and children are related to the level of difficulty of the sentences for readers of different ages.

Relevance:

100.00%

Publisher:

Abstract:

Human observers exhibit large systematic distance-dependent biases when estimating the three-dimensional (3D) shape of objects defined by binocular image disparities. This has led some to question the utility of disparity as a cue to 3D shape, and whether accurate estimation of 3D shape is possible at all. Others have argued that accurate perception is possible, but only with large continuous perspective transformations of an object. Using a stimulus known to elicit large distance-dependent perceptual bias (random-dot stereograms of elliptical cylinders), we show that, contrary to these findings, simply adopting a more naturalistic viewing angle completely eliminates the bias. Using behavioural psychophysics coupled with a novel surface-based reverse-correlation methodology, we show that it is binocular edge and contour information that allows for accurate and precise perception, and that observers actively exploit and sample this information when it is available.

Relevance:

80.00%

Publisher:

Abstract:

During locomotion, retinal flow, gaze angle, and vestibular information can contribute to one's perception of self-motion. Their respective roles were investigated during active steering: Retinal flow and gaze angle were biased by altering the visual information during computer-simulated locomotion, and vestibular information was controlled through use of a motorized chair that rotated the participant around his or her vertical axis. Chair rotation was made appropriate for the steering response of the participant or made inappropriate by rotating a proportion of the veridical amount. Large steering errors resulted from selective manipulation of retinal flow and gaze angle, and the pattern of errors provided strong evidence for an additive model of combination. Vestibular information had little or no effect on steering performance, suggesting that vestibular signals are not integrated with visual information for the control of steering at these speeds.
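The additive model of cue combination supported by the error pattern above can be expressed as a weighted sum of the two visual signals. The function and weights below are a hypothetical illustration, not fitted values from the study.

```python
def perceived_direction(flow_signal, gaze_signal, w_flow=0.6, w_gaze=0.4):
    """Additive-combination sketch: the perceived direction of
    self-motion is a weighted sum of the retinal-flow and gaze-angle
    signals (both in degrees). Under this model, biasing either cue
    shifts the percept by that cue's weight times the bias, which is
    the signature the steering-error pattern would reveal."""
    return w_flow * flow_signal + w_gaze * gaze_signal

# A 5-degree bias added to retinal flow alone shifts the percept by
# w_flow * 5 degrees, independently of the gaze-angle signal.
unbiased = perceived_direction(10.0, 10.0)
flow_biased = perceived_direction(15.0, 10.0)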

Relevance:

70.00%

Publisher:

Abstract:

Defensive behaviors, such as withdrawing your hand to avoid potentially harmful approaching objects, rely on rapid sensorimotor transformations between visual and motor coordinates. We examined the reference frame for coding visual information about objects approaching the hand during motor preparation. Subjects performed a simple visuomanual task while a task-irrelevant distractor ball rapidly approached a location either near to or far from their hand. After the distractor ball appeared, single pulses of transcranial magnetic stimulation were delivered over the subject's primary motor cortex, eliciting motor evoked potentials (MEPs) in the responding hand. MEP amplitude was reduced when the ball approached near the responding hand, both when the hand was to the left and to the right of the midline. Strikingly, this suppression occurred very early, 70-80 ms after ball appearance, and was not modified by visual fixation location. Furthermore, it was selective for approaching balls, since static visual distractors did not modulate MEP amplitude. Together with additional behavioral measurements, we provide converging evidence for automatic hand-centered coding of visual space in the human brain.

Relevance:

70.00%

Publisher:

Abstract:

A prediction mechanism is necessary in human visuomotor control to compensate for delays in the sensorimotor system. A previous study discussed "proactive control" as one example of this predictive function, in which hand motion preceded the virtual moving target in visual tracking experiments. To study the respective roles of the positional-error correction mechanism and the prediction mechanism, we carried out an intermittently visible tracking experiment in which a circular orbit was segmented into target-visible and target-invisible regions. The main findings were as follows. A rhythmic component appeared in the tracer velocity when the target velocity was relatively high, and the period of the rhythm acquired from the environmental stimuli was shortened by more than 10%. This shortening accelerates the hand motion as soon as the visual information is cut off, causing the hand motion to precede the target motion. Although this precedence in the blind region is reset by environmental information when the target re-enters the visible region, the hand motion precedes the target on average when the predictive mechanism dominates the error-corrective mechanism.

Relevance:

70.00%

Publisher:

Abstract:

Near-ground maneuvers, such as hover, approach, and landing, are key elements of autonomy in unmanned aerial vehicles. Such maneuvers have conventionally been tackled by measuring or estimating the velocity and the height above the ground, often using ultrasonic or laser range finders. Near-ground maneuvers are naturally mastered by flying birds and insects, because objects below may be of interest for food or shelter. These animals perform such maneuvers efficiently using only the available visual and vestibular sensory information. In this paper, the time-to-contact (tau) theory, which conceptualizes the visual strategy with which many species are believed to approach objects, is presented as a solution for relative ground distance control in unmanned aerial vehicles (UAVs). The paper shows how such an approach can be visually guided without knowledge of height and velocity relative to the ground. A control scheme that implements the tau strategy is developed employing only visual information from a monocular camera and an inertial measurement unit. To achieve reliable visual information at a high rate, a novel filtering system is proposed to complement the control system. The proposed system is implemented on board an experimental quadrotor UAV and is shown not only to land and approach the ground successfully, but also to enable the user to choose the dynamic characteristics of the approach. The methods presented in this paper are applicable to both aerial and space autonomous vehicles.
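The constant tau-dot strategy commonly associated with tau theory can be sketched in a few lines: the controller drives the time-to-contact tau = z/|v| down at a constant rate k, which for 0 < k < 0.5 yields a soft landing (velocity tends to zero at contact). The gain, time step, and stopping height below are illustrative assumptions, not the controller developed in the paper.

```python
def simulate_tau_landing(z0, v0, k=0.4, dt=0.01):
    """Simulate a constant tau-dot descent.

    z0: initial height above ground; v0: initial vertical velocity
    (negative = descending). Each step, tau is reduced at the constant
    rate k and the commanded velocity v = -z/tau realises that tau.
    Returns the height trajectory (illustrative parameters)."""
    z, v = z0, v0
    tau = z / -v          # initial time-to-contact
    traj = [z]
    while z > 0.01:       # stop just above the ground
        tau = max(tau - k * dt, 1e-6)  # drive tau down at rate k
        v = -z / tau                   # velocity command for that tau
        z = max(z + v * dt, 0.0)
        traj.append(z)
    return traj

# Descend from 10 m with an initial sink rate of 1 m/s.
traj = simulate_tau_landing(10.0, -1.0)
```

Choosing a smaller k slows the final approach, which is one way a user-selectable gain can shape the dynamic characteristics of the descent.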

Relevance:

60.00%

Publisher:

Abstract:

Tactile discrimination performance depends on the receptive field (RF) size of somatosensory cortical (SI) neurons. Psychophysical masking effects can reveal the RF of an idealized "virtual" somatosensory neuron. Previous studies show that top-down factors strongly affect tactile discrimination performance. Here, we show that non-informative vision of the touched body part influences tactile discrimination by modulating tactile RFs. Ten subjects performed spatial discrimination between touch locations on the forearm. Performance was improved when subjects saw their forearm compared to viewing a neutral object in the same location. The extent of visual information was relevant, since restricted view of the forearm did not have this enhancing effect. Vibrotactile maskers were placed symmetrically on either side of the tactile target locations, at two different distances. Overall, masking significantly impaired discrimination performance, but the spatial gradient of masking depended on what subjects viewed. Viewing the body reduced the effect of distant maskers, but enhanced the effect of close maskers, as compared to viewing a neutral object. We propose that viewing the body improves functional touch by sharpening tactile RFs in an early somatosensory map. Top-down modulation of lateral inhibition could underlie these effects.

Relevance:

60.00%

Publisher:

Abstract:

Visual information is vital for fast and accurate hand movements. It has been demonstrated that allowing free eye movements results in greater accuracy than when the eyes remain centrally fixated. Three explanations as to why free gaze improves accuracy are: shifting gaze to a target allows visual feedback to guide the hand to the target (feedback loop), shifting gaze generates ocular proprioception which can be used to update a movement (feedback-feedforward), or efference copy could be used to direct hand movements (feedforward). In this experiment we used a double-step task and manipulated the utility of ocular-proprioceptive feedback about eye-in-head position by removing the second target during the saccade. We confirm the advantage of free gaze for sequential movements with a double-step pointing task and document eye-hand lead times of approximately 200 ms for both initial and secondary movements. The observation that participants move gaze well ahead of the current hand target rules out foveal feedback as a major contributor. We argue for a feedforward model based on eye movement efference as the major factor in enabling accurate hand movements. The results with the double-step target task also suggest the need for some buffering of efference and ocular-proprioceptive signals to cope with the situation where the eye has moved to a location ahead of the current target for the hand movement. We estimate that this buffer period may range between 120 and 200 ms without significant impact on hand movement accuracy.

Relevance:

60.00%

Publisher:

Abstract:

To steer a course through the world, people are almost entirely dependent on visual information, of which a key component is optic flow. In many models of locomotion, heading is described as the fundamental control variable; however, it has also been shown that fixating points along or near one's future path could be the basis of an efficient control solution. Here, the authors aim to establish how well observers can pinpoint instantaneous heading and path, by measuring their accuracy when looking at these features while traveling along straight and curved paths. The results showed that observers could identify both heading and path accurately (~3°) when traveling along straight paths, but on curved paths they were more accurate at identifying a point on their future path (~5°) than indicating their instantaneous heading (~13°). Furthermore, whereas participants could track changes in the tightness of their path, they were unable to accurately track the rate of change of heading. In light of these results, the authors suggest it is unlikely that heading is primarily used by the visual system to support active steering.

Relevance:

60.00%

Publisher:

Abstract:

Rationale: Liking, cravings and addiction for chocolate ("chocoholism") are often explained through the presence of pharmacologically active compounds. However, mere "presence" does not guarantee psycho-activity. Objectives: Two double-blind, placebo-controlled studies measured the effects on cognitive performance and mood of the amounts of cocoa powder and methylxanthines found in a 50 g bar of dark chocolate. Methods: In study 1, participants (n=20) completed a test battery once before and twice after treatment administration. Treatments included 11.6 g cocoa powder and a caffeine and theobromine combination (19 and 250 mg, respectively). Study 2 (n=22) comprised three post-treatment test batteries and investigated the effects of "milk" and "dark" chocolate levels of these methylxanthines. The test battery consisted of a long duration simple reaction time task, a rapid visual information processing task, and a mood questionnaire. Results: Identical improvements on the mood construct "energetic arousal" and cognitive function were found for cocoa powder and the caffeine+theobromine combination versus placebo. In chocolate, both "milk chocolate" and "dark chocolate" methylxanthine doses improved cognitive function compared with "white chocolate". The effects of white chocolate did not differ significantly from those of water. Conclusion: A normal portion of chocolate exhibits psychopharmacological activity. The identical profile of effects exerted by cocoa powder and its methylxanthine constituents shows this activity to be confined to the combination of caffeine and theobromine. Methylxanthines may contribute to the popularity of chocolate; however, other attributes are probably much more important in determining chocolate's special appeal and in explaining related self-reports of chocolate cravings and "chocoholism".

Relevance:

60.00%

Publisher:

Abstract:

Models of perceptual decision making often assume that sensory evidence is accumulated over time in favor of the various possible decisions, until the evidence in favor of one of them outweighs the evidence for the others. Saccadic eye movements are among the most frequent perceptual decisions that the human brain performs. We used stochastic visual stimuli to identify the temporal impulse response underlying saccadic eye movement decisions. Observers performed a contrast search task, with temporal variability in the visual signals. In experiment 1, we derived the temporal filter observers used to integrate the visual information. The integration window was restricted to the first ~100 ms after display onset. In experiment 2, we showed that observers cannot perform the task if there is no useful information to distinguish the target from the distractor within this time epoch. We conclude that (1) observers did not integrate sensory evidence up to a criterion level, (2) observers did not integrate visual information up to the start of the saccadic dead time, and (3) variability in saccade latency does not correspond to variability in the visual integration period. Instead, our results support a temporal filter model of saccadic decision making. The temporal impulse response identified by our methods corresponds well with estimates of integration times of V1 output neurons.
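The temporal-filter account can be contrasted with integration-to-bound in a toy sketch: here the decision depends only on the contrast energy inside a fixed ~100 ms window after display onset, so evidence arriving later cannot reverse it. The function, the boxcar window shape, and the sample signals are illustrative assumptions, not the filter estimated in the study.

```python
import numpy as np

def fixed_window_decision(left, right, dt=0.001, window=0.1):
    """Temporal-filter model sketch: choose the stream with more
    contrast energy inside a fixed `window` (seconds) after onset.
    Samples after the window are ignored, unlike an
    integrate-to-criterion account, which would keep accumulating."""
    n = int(window / dt)
    return "left" if left[:n].sum() > right[:n].sum() else "right"

# The left stream carries all its contrast in the first 100 ms; the
# right stream carries far more total contrast, but only later.
left = np.concatenate([np.ones(100), np.zeros(900)])
right = np.concatenate([np.zeros(100), np.ones(900)])
choice = fixed_window_decision(left, right)
```

Because the later, stronger evidence in the right stream falls outside the window, the model still picks the left stream, mirroring the finding that information beyond the early epoch did not drive the saccadic decision.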

Relevance:

60.00%

Publisher:

Abstract:

We investigated whether it is possible to control the temporal window of attention used to rapidly integrate visual information. To study the underlying neural mechanisms, we recorded ERPs in an attentional blink task, known to elicit Lag-1 sparing. Lag-1 sparing fosters joint integration of the two targets, evidenced by increased order errors. Short versus long integration windows were induced by showing participants mostly fast or slow stimuli. Participants expecting slow speed used a longer integration window, increasing joint integration. Difference waves showed an early (200 ms post-T2) negative and a late positive modulation (390 ms) in the fast group, but not in the slow group. The modulations suggest the creation of a separate event for T2, which is not needed in the slow group, where targets were often jointly integrated. This suggests that attention can be guided by global expectations of presentation speed within tens of milliseconds.

Relevance:

60.00%

Publisher:

Abstract:

This paper describes a region-based algorithm for deriving a concise description of a first-order optical flow field. The algorithm achieves performance improvements over existing algorithms without compromising the accuracy of the computed flow field values. These improvements are brought about by not computing the entire flow field between two consecutive images, but by considering only the flow vectors of a selected subset of image regions. The algorithm is presented in the context of a project to balance a bipedal robot using visual information.
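A first-order flow field over a region is an affine model v(x) = A x + b, so a concise description needs only six parameters per region. A minimal least-squares fit of those parameters to sparse flow vectors can be sketched as follows; this is an illustration of the representation, not the algorithm described in the paper.

```python
import numpy as np

def fit_first_order_flow(points, flows):
    """Least-squares fit of a first-order (affine) flow model
    v(x, y) = A @ [x, y] + b to sparse flow samples.

    points, flows: (N, 2) arrays of sample positions and their flow
    vectors. Returns the 2x2 matrix A and the 2-vector b."""
    X = np.hstack([points, np.ones((len(points), 1))])  # rows [x, y, 1]
    P, *_ = np.linalg.lstsq(X, flows, rcond=None)       # P stacks [A.T; b]
    return P[:2].T, P[2]

# Recover known affine parameters from exact synthetic flow samples.
points = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [2.0, 1.0]])
A_true = np.array([[0.1, 0.02], [0.0, -0.05]])
b_true = np.array([1.0, -1.0])
A_fit, b_fit = fit_first_order_flow(points, points @ A_true.T + b_true)
```

Fitting only a selected subset of sample points per region, rather than dense flow everywhere, is what makes this kind of concise description cheap to compute.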