991 results for Visual presentation
Abstract:
When people monitor the rapid serial visual presentation (RSVP) of stimuli for two targets (T1 and T2), they often miss T2 if it falls into a time window of about half a second after T1 onset, a phenomenon known as the attentional blink (AB). We found that overall performance in an RSVP task was impaired by a concurrent short-term memory (STM) task and, furthermore, that this effect increased when STM load was higher and when its content was more task relevant. Loading visually defined stimuli and adding articulatory suppression further impaired performance on the RSVP task, but the size of the AB over time (i.e., T1-T2 lag) remained unaffected by load or content. This suggested that at least part of the performance in an RSVP task reflects interference between competing codes within STM, as interference models have held, whereas the AB proper reflects capacity limitations in the transfer to STM, as consolidation models have claimed.
Abstract:
The recency effect found in free recall can be accounted for almost entirely in terms of the recall of ordered sequences of items. It is such sequences, presented at the end of the stimulus list but recalled at the very beginning of the response protocol, which produce a recency effect. Such sequences are recalled at the beginning of the response protocol equally often following auditory and visual presentation. These same stimulus sequences are also frequently recalled other than initially in the response protocol following auditory presentation. However, such responses are rarely found following visual presentation. The modality effect in free recall, the advantage of auditory over visual presentation, can be substantially accounted for in these terms. Theoretical and procedural implications of these data are discussed.
Abstract:
Background: Some studies have shown that a conventional visual brain-computer interface (BCI) based on overt attention cannot be used effectively when eye movement control is not possible. To solve this problem, a novel visual BCI system based on covert attention and feature attention, called the gaze-independent BCI, has been proposed. Color and shape differences between stimuli and backgrounds have generally been used in examples of gaze-independent BCIs. Recently, a new paradigm based on facial expression changes was presented and achieved high performance. However, some facial expressions were so similar that users couldn't tell them apart, especially when they were presented at the same position in a rapid serial visual presentation (RSVP) paradigm, which reduces the performance of the BCI. New Method: In this paper, we combined facial expressions and colors to optimize the stimulus presentation in the gaze-independent BCI. This optimized paradigm was called the colored dummy face pattern. It is suggested that different colors and facial expressions could help users to locate the target and evoke larger event-related potentials (ERPs). In order to evaluate the performance of this new paradigm, two other paradigms were presented, called the gray dummy face pattern and the colored ball pattern. Comparison with Existing Method(s): The key point determining the value of the colored dummy face stimuli in BCI systems was whether they could achieve higher performance than gray face or colored ball stimuli. Ten healthy participants (seven male, aged 21–26 years, mean 24.5 ± 1.25) took part in our experiment. Online and offline results of four different paradigms were obtained and comparatively analyzed. Results: The results showed that the colored dummy face pattern evoked larger P300 and N400 ERP amplitudes than the gray dummy face pattern and the colored ball pattern. Online results showed that the colored dummy face pattern had a significant advantage in terms of classification accuracy (p < 0.05) and information transfer rate (p < 0.05) compared to the other two patterns. Conclusions: The stimuli used in the colored dummy face paradigm combined color and facial expressions. Compared with the colored ball and gray dummy face stimuli, this yielded a significant advantage in terms of the evoked P300 and N400 amplitudes and resulted in high classification accuracies and information transfer rates.
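The abstract reports information transfer rate (ITR) alongside classification accuracy but does not spell out the computation. A common choice in the BCI literature is Wolpaw's formula, sketched below; the class count, accuracy, and selection time in the example are illustrative assumptions, not figures from the study.

```python
import math

def wolpaw_itr(n_classes: int, accuracy: float, trial_seconds: float) -> float:
    """Information transfer rate in bits per minute under Wolpaw's formula."""
    if accuracy >= 1.0:
        bits = math.log2(n_classes)  # perfect accuracy yields log2(N) bits per selection
    else:
        bits = (math.log2(n_classes)
                + accuracy * math.log2(accuracy)
                + (1 - accuracy) * math.log2((1 - accuracy) / (n_classes - 1)))
    return bits * (60.0 / trial_seconds)  # bits per selection x selections per minute

# Illustrative values only: a 4-class paradigm at 85% accuracy, 3 s per selection.
print(f"{wolpaw_itr(4, 0.85, 3.0):.1f} bits/min")  # ~23.0 bits/min
```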
Abstract:
This paper presents the use of a learning object (CADILAG) developed to facilitate the understanding of data structure operations through visual presentations and animations. CADILAG allows visualizing the behavior of algorithms usually discussed in Computer Science and Information Systems courses. For each data structure, it is possible to visualize its content and its operation dynamically. Its use was evaluated and the results are presented. © 2012 AISTI.
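CADILAG itself is not reproduced here, but the idea it implements, showing a structure's content after every operation, is easy to sketch in text form. The toy class below is a hypothetical illustration (not the tool's code) that prints a stack's state after each push and pop:

```python
# Hypothetical sketch of a step-by-step data structure visualization,
# in the spirit of what CADILAG animates graphically.

class VisualStack:
    def __init__(self):
        self._items = []

    def _show(self, op: str) -> None:
        # Display the full content after every operation (top element last).
        print(f"{op:>8} -> {self._items}")

    def push(self, value) -> None:
        self._items.append(value)
        self._show(f"push {value}")

    def pop(self):
        value = self._items.pop()
        self._show(f"pop {value}")
        return value

s = VisualStack()
for v in (1, 2, 3):
    s.push(v)   # push 1 -> [1], push 2 -> [1, 2], push 3 -> [1, 2, 3]
s.pop()         # pop 3  -> [1, 2]
```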
Abstract:
Digital repositories are currently used by education and research institutions in Brazil as an alternative means of disseminating scientific and academic results, mainly to build institutional memory and visibility. However, the way these results are presented may influence their use, affecting user-system interaction through the interface components. Thus, a single digital information environment can offer different forms of visual presentation, customizing informational and visual components for specific user communities. In this context, tools are being developed to make access to and use of information easier and to increase the usability of digital information environments. One of these tools, Manakin, is presented in this paper, along with its integration with the DSpace platform, to enable multiple visual presentations, stressing the importance of differentiating and directing the interfaces of a single repository toward the various knowledge fields. Results and examples of repositories with multiple visual presentations are introduced, by means of literary and exploratory analysis, to facilitate the use of the presented tool and to reinforce the importance of a differentiated visual identity by area of knowledge within a single repository.
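For readers unfamiliar with Manakin, its multiple-presentation mechanism in DSpace is driven by theme mappings in the XMLUI configuration file (config/xmlui.xconf). The snippet below is a sketch under that assumption; the theme names, handles and paths are invented examples, not taken from the paper:

```xml
<!-- Sketch of a Manakin/DSpace theme mapping (xmlui.xconf). Each <theme>
     binds a visual presentation to a community or collection handle, so a
     single repository can serve a distinct interface per knowledge field.
     Names, handles and paths here are illustrative only. -->
<themes>
    <theme name="Engineering" handle="123456789/10" path="Engineering/" />
    <theme name="Humanities"  handle="123456789/20" path="Humanities/" />
    <!-- Fallback: every other page uses the default theme -->
    <theme name="Default" regex=".*" path="Reference/" />
</themes>
```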
Abstract:
The main aim of this thesis is strongly interdisciplinary: it involves and presumes knowledge of Neurophysiology, to understand the mechanisms that underlie the studied phenomena; knowledge and experience in Electronics, necessary during the hardware experimental set-up to acquire neuronal data; and knowledge of Informatics and programming, to write the code necessary to control the behaviour of the subjects during experiments and the visual presentation of stimuli. Finally, neuronal and statistical models should be well known to help in interpreting the data. The project started with an accurate bibliographic research: until now, the mechanisms of heading perception (perception of the direction of motion) remain poorly known. The main interest is to understand how visual information relative to our motion is integrated with eye position information. To investigate the cortical response to visual stimuli in motion and its integration with eye position, we decided to study an animal model, using optic flow expansions and contractions as visual stimuli. In the first chapter of the thesis, the basic aims of the research project are presented, together with the reasons why it is interesting and important to study the perception of motion. Moreover, this chapter describes the methods my research group considered most adequate to contribute to the scientific community and underlines my personal contribution to the project. The second chapter presents an overview of the knowledge needed to follow the main part of the thesis: it starts with a brief introduction to the central nervous system and cortical functions, then presents in more depth the association areas, which are the main target of our study. Furthermore, it explains why studies on animal models are necessary to understand mechanisms at the cellular level that could not be addressed in any other way. In the second part of the chapter, the basics of electrophysiology and cellular communication are presented, together with traditional neuronal data analysis methods. The third chapter is intended to be a helpful resource for future work in the laboratory: it presents the hardware used for the experimental sessions, how to control animal behaviour during the experiments by means of C routines and dedicated software, and how to present visual stimuli on a screen. The fourth chapter is the main core of the research project and of the thesis. In the methods, the experimental paradigms, visual stimuli and data analysis are presented. In the results, the responses of cells in area PEc to visual stimuli in motion combined with different eye positions are shown. In brief, this study led to the identification of different cellular behaviours in relation to the focus of expansion (the direction of motion given by the optic flow pattern) and eye position. The originality and importance of the results are pointed out in the conclusions: this is the first study aimed at investigating the perception of motion in this particular cortical area. In the last paragraph, a neuronal network model is presented, whose aim is to simulate the pre-saccadic and post-saccadic responses of neurons in area PEc during eye movement tasks. The same data presented in chapter four are further analysed in chapter five. The analysis started from the observation of the neuronal responses during a 1 s time period in which the visual stimulation was the same. It was clear that the cells' activities showed oscillations in time that had been neglected by the previous analysis based on mean firing frequency. The results distinguished two cellular behaviours by their response characteristics: some neurons showed oscillations that changed depending on eye and optic flow position, while others kept the same oscillation characteristics independent of the stimulus. The last chapter discusses the results of the research project, comments on the originality and interdisciplinarity of the study, and proposes some future developments.
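As an aside on the "mean firing frequency" analysis that the fifth chapter moves beyond: the measure is simply the spike count within a window divided by the window's duration, which is why it is blind to oscillations inside the window. The sketch below is a generic illustration with invented spike times, not code from the thesis:

```python
import numpy as np

def mean_firing_rate(spike_times_s: np.ndarray, t_start: float, t_stop: float) -> float:
    """Mean firing rate (spikes/s) within [t_start, t_stop) for one trial."""
    in_window = (spike_times_s >= t_start) & (spike_times_s < t_stop)
    return in_window.sum() / (t_stop - t_start)

# Invented example: rate over a 1 s stimulation epoch; any oscillatory
# structure among the six spikes inside the window is averaged away.
spikes = np.array([0.05, 0.12, 0.30, 0.31, 0.58, 0.90, 1.40])
print(mean_firing_rate(spikes, 0.0, 1.0), "spikes/s")  # -> 6.0 spikes/s
```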
Abstract:
OBJECTIVE To analyze speech reading through Internet video calls by profoundly hearing-impaired individuals and cochlear implant (CI) users. METHODS Speech reading skills of 14 deaf adults and 21 CI users were assessed using the Hochmair Schulz Moser (HSM) sentence test. We presented video simulations using different video resolutions (1280 × 720, 640 × 480, 320 × 240, 160 × 120 px), frame rates (30, 20, 10, 7, 5 frames per second (fps)), speech velocities (three different speakers), webcams (Logitech Pro9000, C600 and C500) and image/sound delays (0-500 ms). All video simulations were presented with and without sound and in two screen sizes. Additionally, scores for a live Skype™ video connection and live face-to-face communication were assessed. RESULTS Higher frame rates (>7 fps), higher camera resolutions (>640 × 480 px) and shorter picture/sound delays (<100 ms) were associated with increased speech perception scores. Scores were strongly dependent on the speaker but were not influenced by the physical properties of the camera optics or the full screen mode. There was a significant median gain of +8.5%pts (p = 0.009) in speech perception for all 21 CI users when visual cues were additionally shown. CI users with poor open-set speech perception scores (n = 11) showed the greatest benefit under combined audio-visual presentation (median speech perception +11.8%pts, p = 0.032). CONCLUSION Webcams have the potential to improve telecommunication for hearing-impaired individuals.
Abstract:
Action selection and organization are very complex processes that need to exploit contextual information and the retrieval of previously memorized information, as well as the integration of these different types of data. On the basis of its anatomical connections with premotor and parietal areas involved in action goal coding, and of the data in the literature, it seems appropriate to suppose that one of the best candidates for the selection of neuronal pools underlying the organization of intentional actions is the prefrontal cortex. We recorded the activity of single ventrolateral prefrontal (VLPF) neurons while monkeys performed simple and complex manipulative actions aimed at distinct final goals, employing a modified and more strictly controlled version of the grasp-to-eat (a food pellet) / grasp-to-place (an object) paradigm used in previous studies on parietal (Fogassi et al., 2005) and premotor neurons (Bonini et al., 2010). With this task we were able both to evaluate the processing and integration of distinct (visual and auditory) contextual information presented sequentially in order to select the forthcoming action to perform, and to examine the possible presence of goal-related activity in this portion of cortex. Moreover, we performed an observation task to clarify the possible contribution of VLPF neurons to the understanding of others' goal-directed actions. Simple Visuo-Motor Task (sVMT). We found four main types of neurons: unimodal sensory-driven, motor-related, unimodal sensory-and-motor, and multisensory neurons. We found a substantial number of VLPF neurons showing both a motor-related discharge and a visual presentation response (sensory-and-motor neurons), with remarkable visuo-motor congruence for the preferred target. Interestingly, the discharge of multisensory neurons reflected a behavioural decision independently of the sensory modality of the stimulus allowing the monkey to make it: some encoded a decision to act or refrain from acting (the majority), while others specified one among the four behavioural alternatives. Complex Visuo-Motor Task (cVMT). The cVMT was similar to the sVMT, but included a further grasping motor act (grasping a lid in order to remove it, before grasping the target) and was run in two modalities: randomized and in blocks. Substantially, the motor-related and sensory-and-motor neurons tested in the randomized cVMT were activated already during the first grasping motor act, but their selectivity for one of the two graspable targets emerged only during the execution of the second grasping. In contrast, when the cVMT was run in blocks, almost all these neurons not only discharged during the first grasping motor act, but also displayed the same target selectivity shown at the moment of hand contact with the target. Observation Task (OT). A large part of the neurons active during the OT showed a firing rate modulation in correspondence with the action performed by the experimenter. Among them, we found neurons significantly activated during the observation of the experimenter's action (action observation-related neurons) and neurons responding not only to the action observation, but also to the presented cue stimuli (sensory-and-action observation-related neurons). Among the neurons of the first set, almost half displayed target selectivity, with no clear difference between the two presented targets; concerning the second neuronal set, the sensory-and-action observation-related neurons, we found low target selectivity and no strict congruence between the selectivity exhibited in the visual response and that in the action observation.
Abstract:
When a visual stimulus is continuously moved behind a small stationary window, the window appears displaced in the direction of motion of the stimulus. In this study we showed that the magnitude of this illusion is dependent on (i) whether a perceptual or visuomotor task is used for judging the location of the window, (ii) the directional signature of the stimulus, and (iii) whether or not there is a significant delay between the end of the visual presentation and the initiation of the localization measure. Our stimulus was a drifting sinusoidal grating windowed in space by a stationary, two-dimensional, Gaussian envelope (σ=1 cycle of sinusoid). Localization measures were made following either a short (200 ms) or long (4.2 s) post-stimulus delay. The visuomotor localization error was up to three times greater than the perceptual error for a short delay. However, the visuomotor and perceptual localization measures were similar for a long delay. Our results provide evidence in support of the hypothesis that separate cortical pathways exist for visual perception and visually guided action and that delayed actions rely on stored perceptual information.
Abstract:
We investigated the nature of resource limitations during visual target processing by imposing high temporal processing demands on the cognitive system. This was achieved by embedding target stimuli into rapid serial visual presentation (RSVP) streams. In RSVP streams, it is difficult to report the second of two targets (T2) if it follows the first (T1) within 500 ms. This effect is known as the attentional blink (AB). For the AB to occur, it is essential that T1 is followed by a mask, as without such a stimulus the AB is significantly attenuated. Usually, it is thought that T1 processing is delayed by the mask, which in turn delays T2 processing, increasing the likelihood of T2 failures (AB). Predictions regarding the amplitudes and latencies of cortical responses to targets (M300, the magnetic counterpart of the P300) were tested by investigating the neurophysiological effects of the post-T1 item (mask) by means of magnetoencephalography (MEG). Cortical M300 responses to targets, localized to prefrontal sources – areas associated with working memory – revealed accelerated T1 yet delayed T2 processing with an intervening mask. The explanation we propose assumes that the "protection" of ongoing T1 processing necessitated by the occurrence of the mask suppresses other activation patterns, which boosts T1 yet also hinders further processing. Our data shed light on the mechanisms employed by the human brain to ensure visual target processing under high temporal processing demands, which is hypothesized to occur at the expense of subsequently presented information.
Abstract:
Holistic face perception, i.e. the mandatory integration of featural information across the face, has been considered to play a key role when recognizing emotional face expressions (e.g., Tanaka et al., 2002). However, despite their early onset, holistic processing skills continue to improve throughout adolescence (e.g., Schwarzer et al., 2010) and therefore might modulate the evaluation of facial expressions. We tested this hypothesis using an attentional blink (AB) paradigm to compare the impact of happy, fearful and neutral faces in adolescents (10–13 years) and adults on subsequently presented neutral target stimuli (animals, plants and objects) in a rapid serial visual presentation stream. Adolescents and adults were found to be equally reliable when reporting the emotional expression of the face stimuli. However, the detection of emotional but not neutral faces imposed a significantly stronger AB effect on the detection of the neutral targets in adults compared to adolescents. In a control experiment we confirmed that adolescents rated emotional faces lower in terms of valence and arousal than adults. The results suggest a protracted development of the ability to evaluate facial expressions that might be attributed to the late maturation of holistic processing skills.
Abstract:
MOVE is a composition for string quartet, piano, percussion and electronics of approximately 15-16 minutes' duration in three movements. The work incorporates electronic samples either synthesized by the composer or recorded from acoustic instruments. The work aims to use electronic sounds as an expansion of the tonal palette of the chamber group (rather like an extended percussion setup) as opposed to a dominating sonic feature of the music. This is done by limiting the use of electronics to specific sections of the work, and by prioritizing blend and sonic coherence in the synthesized samples. The work uses fixed electronics in a way that allows for tempo variations in the music. Generally, a difficulty arises in that fixed "tape" parts don't allow tempo variations, while truly "live" software algorithms sacrifice rhythmic accuracy. Sample pads, such as the Roland SPD-SX, provide an elegant solution: the latency of such a device is close enough to zero that individual samples can be triggered in real time at a range of tempi. The percussion setup in this work (vibraphone and sample pad) allows one player to cover both parts, eliminating the need for an external musician to trigger the electronics. Compositionally, momentum is used as a constructing principle. The first movement makes prominent use of ostinato and shifting meter. The second is a set of variations on a repeated harmonic pattern, with a polymetric middle section. The third is a type of passacaglia, wherein the bassline is not introduced right away, but becomes more significant later in the movement. Given the importance of visual presentation in the Internet age, the final goal of the project was to shoot HD video of a studio performance of the work for publication online. The composer recorded audio and video in two separate sessions and edited the production using Logic X and Adobe Premiere Pro. The final video presentation can be seen at geoffsheil.com/move.
Abstract:
A main prediction of the zoom lens model of visual attention is that performance is an inverse function of the size of the attended area. The "attention shift paradigm" developed by Sperling and Reeves (1980) was adapted here to test predictions of the zoom lens model. In two experiments, two lists of items were presented simultaneously using the rapid serial visual presentation technique. Subjects were to report the first item they were able to identify, after they saw the target (the letter T), in the series that did not include the target. In one condition, subjects knew in which list the target would appear; in another condition, they did not have this knowledge and had to attend to both positions in order to detect the target. The zoom lens model predicts an interaction between this variable and the distance separating the two positions where the lists are presented. In both experiments, this interaction was observed. The results are also discussed as a solution to the apparently contradictory results regarding the analog movement model.
Abstract:
Testing contexts have been shown to critically influence experimental results in psychophysical studies. One of these contexts, which shows an important modulation of the behavioral effects of different stimulatory conditions, is the separate (blocked) versus mixed presentation of those conditions. This study presents evidence that the apparent discriminabilities of two target stimuli can change according to which of these two testing contexts is used. A cross inside a ring and a vertical line inside a ring were presented as go stimuli in a go/no-go reaction time task. In one experiment, each of these stimuli was presented to a different group of volunteers; in another experiment, they were presented to the same group of volunteers, randomly mixed within the blocks of trials. Similar reaction times were obtained for the two stimuli in the first experiment, and different reaction times (faster for the cross) in the second experiment. The latter result indicates that the two stimuli have different discriminabilities from the no-go stimulus, the cross having the greater discriminability. This difference is, however, masked in the separate testing context, presumably by the adoption of specific compensatory attentional sets.