Abstract:
Objects in an environment are often encountered sequentially during spatial learning, forming a path along which object locations are experienced. The present study investigated the effect of spatial information conveyed through the path in visual and proprioceptive learning of a room-sized spatial layout, exploring whether different modalities differentially depend on the integrity of the path. Learning object locations along a coherent path was compared with learning them in a spatially random manner. Path integrity had little effect on visual learning, whereas in proprioceptive learning the coherent path produced better memory performance than random-order learning. These results suggest that path information has differential effects in visual and proprioceptive spatial learning, perhaps due to a difference in the way one establishes a reference frame for representing relative locations of objects.
Abstract:
It has been shown that spatial information can be acquired from both visual and nonvisual modalities. The present study explored how spatial information from vision and proprioception was represented in memory, investigating orientation dependence of spatial memories acquired through visual and proprioceptive spatial learning. Experiment 1 examined whether visual learning alone and proprioceptive learning alone yielded orientation-dependent spatial memory. Results showed that spatial memories from both types of learning were orientation dependent. Experiment 2 explored how different orientations of the same environment were represented when they were learned visually and proprioceptively. Results showed that both visually and proprioceptively learned orientations were represented in spatial memory, suggesting that participants established two different reference systems based on each type of learning experience and interpreted the environment in terms of these two reference systems. The results provide some initial clues to how different modalities make unique contributions to spatial representations.
Abstract:
Sensing the mental, physical and emotional demand of a driving task is of primary importance in road safety research and for effectively designing in-vehicle information systems (IVIS). In particular, the need for cars capable of sensing and reacting to the emotional state of the driver has been repeatedly advocated in the literature. Algorithms and sensors that identify patterns of human behavior, such as gestures, speech, eye gaze and facial expression, are becoming available using low-cost hardware. This paper presents a new system that uses surrogate measures, namely facial expression (emotion) and head pose and movements (intention), to infer task difficulty in a driving situation. Eleven drivers were recruited and observed in a simulated driving task that involved several pre-programmed events aimed at eliciting emotive reactions, such as being stuck behind slower vehicles, intersections and roundabouts, and potentially dangerous situations. The resulting system, combining facial expression and head pose classification, is capable of recognizing dangerous events (such as crashes and near misses) and stressful situations (e.g. intersections and giving way) that occur during the simulated drive.
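As a rough illustration of how such surrogate measures could be combined, the sketch below fuses a per-frame expression score with head-movement magnitude to flag stressful driving windows. It is hypothetical: the paper does not publish its classifiers, and all names, thresholds and the fusion rule here are assumptions.

```python
# Hypothetical sketch: fusing a facial-expression score with head-movement
# magnitude to label a window of frames. Thresholds are illustrative only.
from dataclasses import dataclass

@dataclass
class FrameObservation:
    emotion_score: float  # probability of a stressed/negative expression, 0..1
    head_motion: float    # head rotation rate between frames, rad/s

def infer_task_difficulty(frames, emotion_thresh=0.6, motion_thresh=0.5):
    """Label a window as a stressful event when expression and head
    movement jointly exceed their thresholds often enough."""
    stressed = sum(f.emotion_score > emotion_thresh for f in frames)
    agitated = sum(abs(f.head_motion) > motion_thresh for f in frames)
    if stressed / len(frames) > 0.5 and agitated / len(frames) > 0.3:
        return "stressful event"
    return "normal driving"
```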
Abstract:
Previous behavioral studies reported a robust effect of increased naming latencies when objects to be named were blocked by semantic category, compared with items blocked across categories. This semantic context effect has been attributed to various mechanisms including inhibition or excitation of lexico-semantic representations and incremental learning of associations between semantic features and names, and is hypothesized to increase demands on verbal self-monitoring during speech production. Objects within categories also share many visual structural features, introducing a potential confound when interpreting the level at which the context effect might occur. Consistent with previous findings, we report a significant increase in response latencies when naming categorically related objects within blocks, an effect associated with increased perfusion fMRI signal bilaterally in the hippocampus and in the left middle to posterior superior temporal cortex. No perfusion changes were observed in the middle section of the left middle temporal cortex, a region associated with retrieval of lexical-semantic information in previous object naming studies. Although a manipulation of visual feature similarity did not influence naming latencies, we observed perfusion increases in the perirhinal cortex for naming objects with similar visual features that interacted with the semantic context in which objects were named. These results provide support for the view that the semantic context effect in object naming occurs due to an incremental learning mechanism, and involves increased demands on verbal self-monitoring.
Abstract:
This paper investigates how neuronal activation for naming photographs of objects is influenced by the addition of appropriate colour or sound. Behaviourally, both colour and sound are known to facilitate object recognition from visual form. However, previous functional imaging studies have shown inconsistent effects. For example, the addition of appropriate colour has been shown to reduce antero-medial temporal activation whereas the addition of sound has been shown to increase posterior superior temporal activation. Here we compared the effect of adding colour or sound cues in the same experiment. We found that the addition of either the appropriate colour or sound increased activation for naming photographs of objects in bilateral occipital regions and the right anterior fusiform. Moreover, the addition of colour reduced left antero-medial temporal activation but this effect was not observed for the addition of object sound. We propose that activation in bilateral occipital and right fusiform areas precedes the integration of visual form with either its colour or associated sound. In contrast, left antero-medial temporal activation is reduced because object recognition is facilitated after colour and form have been integrated.
Abstract:
By virtue of its widespread afferent projections, perirhinal cortex is thought to bind polymodal information into abstract object-level representations. Consistent with this proposal, deficits in cross-modal integration have been reported after perirhinal lesions in nonhuman primates. It is therefore surprising that imaging studies of humans have not observed perirhinal activation during visual-tactile object matching. Critically, however, these studies did not differentiate between congruent and incongruent trials. This is important because successful integration can only occur when polymodal information indicates a single object (congruent) rather than different objects (incongruent). We scanned neurologically intact individuals using functional magnetic resonance imaging (fMRI) while they matched shapes. We found higher perirhinal activation bilaterally for cross-modal (visual-tactile) than unimodal (visual-visual or tactile-tactile) matching, but only when visual and tactile attributes were congruent. Our results demonstrate that the human perirhinal cortex is involved in cross-modal (visual-tactile) integration and thus indicate a functional homology between human and monkey perirhinal cortices.
Abstract:
To identify and categorize complex stimuli such as familiar objects or speech, the human brain integrates information that is abstracted at multiple levels from its sensory inputs. Using cross-modal priming for spoken words and sounds, this functional magnetic resonance imaging study identified 3 distinct classes of visuoauditory incongruency effects: visuoauditory incongruency effects were selective for 1) spoken words in the left superior temporal sulcus (STS), 2) environmental sounds in the left angular gyrus (AG), and 3) both words and sounds in the lateral and medial prefrontal cortices (IFS/mPFC). From a cognitive perspective, these incongruency effects suggest that prior visual information influences the neural processes underlying speech and sound recognition at multiple levels, with the STS being involved in phonological, AG in semantic, and mPFC/IFS in higher conceptual processing. In terms of neural mechanisms, effective connectivity analyses (dynamic causal modeling) suggest that these incongruency effects may emerge via greater bottom-up effects from early auditory regions to intermediate multisensory integration areas (i.e., STS and AG). This is consistent with a predictive coding perspective on hierarchical Bayesian inference in the cortex where the domain of the prediction error (phonological vs. semantic) determines its regional expression (middle temporal gyrus/STS vs. AG/intraparietal sulcus).
Abstract:
This paper provides a preliminary analysis of an autonomous uncooperative collision avoidance strategy for unmanned aircraft using image-based visual control. Assuming target detection, the approach consists of three parts. First, a novel decision strategy is used to determine appropriate reference image features to track for safe avoidance. This is achieved by considering the current rules of the air (regulations), the properties of spiral motion and the expected visual tracking errors. Second, a spherical visual predictive control (VPC) scheme is used to guide the aircraft along a safe spiral-like trajectory about the object. Lastly, a stopping decision based on thresholding a cost function is used to determine when to stop the avoidance behaviour. The approach does not require estimation of range or time to collision, and instead relies on tuning two mutually exclusive decision thresholds to ensure satisfactory performance.
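The stopping rule above thresholds a cost function on visual tracking error. Below is a minimal sketch of one plausible reading of the two mutually exclusive decision thresholds, as a hysteresis band that starts and stops the avoidance behaviour; the quadratic cost and the threshold semantics are assumptions, not the paper's specification.

```python
# Sketch of a two-threshold (hysteresis) supervisor for the avoidance
# behaviour. Cost form and threshold meanings are assumed, not the paper's.
def tracking_cost(current_features, reference_features):
    # Sum of squared visual tracking errors between current and reference
    # image features (a common VPC-style cost; the paper's may differ).
    return sum((c - r) ** 2 for c, r in zip(current_features, reference_features))

class AvoidanceSupervisor:
    def __init__(self, start_thresh, stop_thresh):
        assert stop_thresh < start_thresh  # the two thresholds never overlap
        self.start_thresh = start_thresh
        self.stop_thresh = stop_thresh
        self.avoiding = False

    def update(self, cost):
        if not self.avoiding and cost > self.start_thresh:
            self.avoiding = True   # begin the spiral avoidance behaviour
        elif self.avoiding and cost < self.stop_thresh:
            self.avoiding = False  # safe to resume nominal guidance
        return self.avoiding
```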
Abstract:
This paper presents a 100 Hz monocular position-based visual servoing system to control a quadrotor flying in close proximity to vertical structures approximating a narrow, locally linear shape. Assuming the object boundaries are represented by parallel vertical lines in the image, detection and tracking are achieved using a Plücker line representation and a line tracker. The visual information is fused with IMU data in an EKF framework to provide fast and accurate state estimation. A nested control design provides position and velocity control with respect to the object. Our approach is aimed at high-performance on-board control for applications allowing only small error margins and without a motion capture system, as required for real-world infrastructure inspection. Simulated and ground-truthed experimental results are presented.
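A generic EKF predict/update pair in the spirit of this visual-inertial fusion is sketched below. The one-dimensional position/velocity state, the noise matrices and the position-only measurement model are illustrative assumptions, not the authors' implementation.

```python
# Minimal EKF sketch: IMU acceleration drives the prediction, a visual
# position measurement (e.g. from the line tracker) drives the correction.
import numpy as np

def ekf_predict(x, P, accel, dt, Q):
    """Propagate a [position, velocity] state with a measured acceleration."""
    F = np.array([[1.0, dt], [0.0, 1.0]])   # 1-D kinematics for brevity
    B = np.array([0.5 * dt**2, dt])
    x = F @ x + B * accel
    P = F @ P @ F.T + Q
    return x, P

def ekf_update(x, P, z, R):
    """Correct the state with a visual position measurement z."""
    H = np.array([[1.0, 0.0]])              # position is observed directly
    y = z - H @ x                           # innovation
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)          # Kalman gain
    x = x + (K @ y).ravel()
    P = (np.eye(2) - K @ H) @ P
    return x, P
```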
Abstract:
We have developed a Hierarchical Look-Ahead Trajectory Model (HiLAM) that incorporates the firing pattern of medial entorhinal grid cells in a planning circuit that includes interactions with hippocampus and prefrontal cortex. We show the model’s flexibility in representing large real world environments using odometry information obtained from challenging video sequences. We acquire the visual data from a camera mounted on a small tele-operated vehicle. The camera has a panoramic field of view with its focal point approximately 5 cm above the ground level, similar to what would be expected from a rat’s point of view. Using established algorithms for calculating perceptual speed from the apparent rate of visual change over time, we generate raw dead reckoning information which loses spatial fidelity over time due to error accumulation. We rectify the loss of fidelity by exploiting the loop-closure detection ability of a biologically inspired, robot navigation model termed RatSLAM. The rectified motion information serves as a velocity input to the HiLAM to encode the environment in the form of grid cell and place cell maps. Finally, we show goal directed path planning results of HiLAM in two different environments, an indoor square maze used in rodent experiments and an outdoor arena more than two orders of magnitude larger than the indoor maze. Together these results bridge for the first time the gap between higher fidelity bio-inspired navigation models (HiLAM) and more abstracted but highly functional bio-inspired robotic mapping systems (RatSLAM), and move from simulated environments into real-world studies in rodent-sized arenas and beyond.
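The error accumulation described above can be made concrete with a minimal dead-reckoning integrator; the Gaussian noise model and variable names are assumptions, and the RatSLAM loop-closure rectification is only indicated in the docstring.

```python
# Sketch of raw dead reckoning from perceptual speed and turn-rate estimates.
# Per-step noise compounds, which is why loop closure is needed to rectify it.
import math, random

def dead_reckon(stream, noise=0.02):
    """Integrate (speed, turn_rate, dt) tuples into a 2-D path. Drift grows
    without bound until a loop-closure event (RatSLAM) corrects the map."""
    x = y = theta = 0.0
    path = [(x, y)]
    for speed, turn_rate, dt in stream:
        speed += random.gauss(0.0, noise)            # visual-speed error
        theta += (turn_rate + random.gauss(0.0, noise)) * dt
        x += speed * math.cos(theta) * dt
        y += speed * math.sin(theta) * dt
        path.append((x, y))
    return path
```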
Abstract:
In this chapter Knight & Dooley discuss arts learning and issues of educational authenticity via children’s engagement with iPads (O’Mara & Laidlaw 2011; Shifflet, Toledo & Mattoon 2012). The chapter begins by considering common perceptions about art and how these popular beliefs and conditions affect and influence how children’s art is defined and valorized. The art produced by children using iPads is then discussed through key observations and reflections, and the chapter concludes with some recommendations when selecting apps for making art.
Using a visually-based assignment to reinforce and assess design history knowledge and understanding
Abstract:
This paper presents a visual timeline-based assignment used in an undergraduate Industrial Design History, Theory and Criticism unit. The assignment was developed in order to find a better way of supporting design history learning than an exam or essay assessment. It was developed using constructive alignment, and it allows design students to use their strong visual thinking skills to understand unfamiliar content, develop their visual literacy of design history, and think deeply about the links between the designs, styles, movements, events and people in their timeline. The task produced a variety of responses, from websites and electronic presentations to large paper timelines, scrolls and 3D models. These have been admired by peers and used for end-of-year shows and permanent displays. Questionnaires were issued to students to gain feedback about the assessment. Students stated that the visual nature of the assignment helped them to understand how different aspects of design history related to each other, assisted with retaining the information, and made the task more interesting and fun than a report or an exam. This paper explores the theories behind, and the benefits of, using such methods of assessment for design history courses.
Abstract:
Background: At Queensland University of Technology, student radiation therapists receive regular feedback from clinical staff relating to clinical interpersonal skills. Although this is of great value, there is anecdotal evidence that students communicate differently with patients when under observation.
Purpose: The aim of this pilot was to counter this perceived observer effect by allowing patients to provide students with additional feedback.
Materials and methods: Radiotherapy patients from two departments were provided with anonymous feedback forms relating to aspects of student interpersonal skills. Clinical assessors, mentors and students were also provided with feedback forms, including questions about the role of patient feedback. Patient perceptions of student performance were correlated with staff feedback and assessment scores.
Results: Results indicated that the feedback was valued by both students and patients. Students reported that the additional dimension focused them on communication, set goals for development and increased motivation. These changes derived from both feedback and study participation, suggesting that the questionnaires could be a useful teaching tool. Patients scored more generously than mentors, although there was agreement in relative grading.
Conclusions: The anonymous questionnaire is a convenient and valuable method of gathering patient feedback on students. Future iterations will determine the optimum timing for this method of feedback.
Abstract:
A method for calculating visual odometry for ground vehicles with car-like kinematic motion constraints, similar to the Ackermann steering model, is presented. By taking advantage of this non-holonomic driving constraint, we show a simple and practical solution to the odometry calculation through clever placement of a single camera. The method has been implemented successfully on a large industrial forklift and a Toyota Prado SUV. Results from our industrial test site are presented, demonstrating the applicability of this method as a replacement for wheel-encoder-based odometry for these vehicles.
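A minimal sketch of a pose update under this non-holonomic constraint follows, using the standard bicycle/Ackermann arc model rather than the authors' exact formulation; the arc length ds would come from the camera-derived visual odometry.

```python
# Pose update for a vehicle constrained to circular arcs (bicycle/Ackermann
# model). Symbols are illustrative; ds is supplied by visual odometry.
import math

def ackermann_update(x, y, theta, ds, steer_angle, wheelbase):
    """Advance planar pose (x, y, theta) by arc length ds at a fixed steer."""
    if abs(steer_angle) < 1e-6:                # effectively straight motion
        return x + ds * math.cos(theta), y + ds * math.sin(theta), theta
    R = wheelbase / math.tan(steer_angle)      # turning radius
    dtheta = ds / R                            # heading change along the arc
    x += R * (math.sin(theta + dtheta) - math.sin(theta))
    y -= R * (math.cos(theta + dtheta) - math.cos(theta))
    return x, y, theta + dtheta
```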