935 results for Visual Object Identification Task
Abstract:
Reaching and grasping an object is an action that can be performed in the light, under visual guidance, as well as in darkness, under proprioceptive control only. Area V6A is a visuomotor area involved in the control of reaching movements. Besides neurons activated by the execution of reaching movements, V6A shows passive somatosensory and visual responses. This suggests for V6A a multimodal capability of integrating sensory and motor-related information. We wanted to know whether this integration occurs during reaching movements, and in the present study we tested whether visual feedback influenced the reaching activity of V6A neurons. To better address this question, we interpreted the neural data in light of the kinematics of reaching performance. We used an experimental paradigm that could examine V6A responses against two different visual backgrounds, light and dark. In these conditions, the monkey performed an instructed-delay reaching task, moving the hand towards different target positions located in the peripersonal space. During the execution of the reaching task, visual feedback produced a variety of patterns of modulation, some of them unexpected. Having already demonstrated reach-related discharges in V6A in the absence of visual feedback, we expected two types of neural modulation: 1) the addition of light in the environment would enhance the reach-related discharges recorded in the dark; or 2) the light would leave the neural response unmodified. Unexpectedly, the results show a complex pattern of modulation that argues against a simple additive interaction between visual and motor-related signals.
Abstract:
Visual perception relies on a two-dimensional projection of the viewed scene on the retinas of both eyes. Thus, visual depth has to be reconstructed from a number of different cues that are subsequently integrated to obtain robust depth percepts. Existing models of sensory integration are mainly based on the reliabilities of individual cues and disregard potential cue interactions. In the current study, an extended Bayesian model is proposed that takes into account both cue reliability and cue consistency. Four experiments were carried out to test this model's predictions. Observers had to judge visual displays of hemi-cylinders with an elliptical cross-section, which were constructed to allow for an orthogonal variation of several competing depth cues. In Experiments 1 and 2, observers estimated the cylinder's depth as defined by shading, texture, and motion gradients. The degree of consistency among these cues was systematically varied. It turned out that the extended Bayesian model provided a better fit to the empirical data than the traditional model, which disregards covariations among cues. To circumvent the potentially problematic assessment of single-cue reliabilities, Experiment 3 used a multiple-observation task, which allowed perceptual weights to be estimated from multiple-cue stimuli. Using the same multiple-observation task, the integration of stereoscopic disparity, shading, and texture gradients was examined in Experiment 4. It turned out that less reliable cues were downweighted in the combined percept. Moreover, a specific influence of cue consistency was revealed. Shading and disparity seemed to be processed interactively, while other cue combinations could be well described by additive integration rules. These results suggest that cue combination in visual depth perception is highly flexible and depends on single-cue properties as well as on interrelations among cues.
The extension of the traditional cue-combination model is defended in terms of the necessity for robust perception in ecologically valid environments, and the current findings are discussed in light of emerging computational theories and neuroscientific approaches.
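The reliability-weighted core of the traditional cue-combination model described above can be sketched in a few lines. This is a hypothetical illustration (the function name and numbers are invented, not taken from the study): each cue is weighted by its inverse variance, and the combined estimate is more reliable than any single cue. The extended model discussed above would additionally adjust these weights according to cue consistency.

```python
import numpy as np

def combine_cues(estimates, variances):
    """Reliability-weighted (maximum-likelihood) cue combination.

    Each cue's weight is proportional to its reliability (inverse
    variance), and the combined estimate has a lower variance than
    any single cue. This is the 'traditional' model that ignores
    covariation between cues.
    """
    estimates = np.asarray(estimates, dtype=float)
    variances = np.asarray(variances, dtype=float)
    reliabilities = 1.0 / variances
    weights = reliabilities / reliabilities.sum()
    combined_estimate = np.dot(weights, estimates)
    combined_variance = 1.0 / reliabilities.sum()
    return combined_estimate, combined_variance, weights

# Example: depth from a reliable texture cue and a noisy shading cue
est, var, w = combine_cues([10.0, 14.0], [1.0, 4.0])
```

With these invented numbers, the reliable cue receives four times the weight of the noisy one, and the combined variance (0.8) falls below the best single-cue variance (1.0).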
Abstract:
We usually perform actions in a dynamic environment, and changes in the location of a target for an upcoming action require both covert shifts of attention and updates of the motor plan. In this study we tested whether, similarly to oculomotor areas that provide signals for overt and covert attention shifts, covert attention shifts modulate activity in cortical area V6A, which provides a bridge between visual signals and arm-motor control. We performed single-cell recordings in monkeys trained to fixate straight ahead while shifting attention outward to a peripheral cue and inward again to the fixation point. We found that neurons in V6A are influenced by spatial attention, demonstrating that visual, motor, and attentional responses can occur in combination in single neurons of V6A. This modulation in an area primarily involved in the visuomotor transformation for reaching suggests that reach-related regions could also contribute directly to the shifts of spatial attention necessary to plan and control goal-directed arm movements. Moreover, to test whether V6A is causally involved in these processes, we performed a human study using online repetitive transcranial magnetic stimulation over the putative human V6A (pV6A) during an attention task and a reaching task requiring covert shifts of attention and reaching movements towards cued targets in space. We demonstrate that pV6A is causally involved in the reorienting of attention for target detection and that this process interferes with the execution of reaching movements towards unattended targets. The current findings suggest a direct involvement of the action-related dorsomedial visual stream in attentional processes, and a more specific role of V6A in attention reorienting. We therefore propose that attention signals are used by V6A to rapidly update the current motor plan or the ongoing action when a behaviorally relevant object unexpectedly appears at an unattended location.
Abstract:
In the present study, several techniques were used to investigate three samples (4, 7, and 8) of corrosion products from depleted uranium (DU) munition rounds originating from the Kosovo war. Raman spectroscopy was applied first. It revealed, through a characteristic double peak, the presence of schoepite (UO2)8O2(OH)12(H2O)12 in the samples. The first and second peaks appeared in the spectral ranges 840.3-842.5 cm-1 and 853.6-855.8 cm-1, respectively. These values agree with the literature values for the Raman peaks of schoepite. This method also revealed becquerelite Ca(UO2)6O4(OH)6(H2O)8, with a peak in the range between 829 and 836 cm-1. Because a becquerelite spectrum was missing from the spectral library, a naturally occurring specimen was analyzed and its peak determined at 829 cm-1, which corresponds to the results found in the samples. X-ray diffraction (XRD) showed similar patterns in all samples, suggesting that the powdered material is the same in all of them; there was very good agreement with schoepite and/or meta-schoepite (UO2)8O2(OH)12(H2O)10, as well as becquerelite. Furthermore, neither autunite, sabugalite, nor uranyl phosphate was present, which contradicts the results of another study carried out on the same samples. The presence of P, C, or Ca in the sample material could be excluded. In the case of calcium, this can be explained by the presence of uranium, which, owing to its atomic radius, is preferentially incorporated into becquerelite (1:6). The two main uranium peaks were located at 382.0 eV for U 4f 7/2 and at 392 eV for U 4f 5/2. These values agree with the literature values for schoepite and meta-schoepite.
The results of the electron microscopy investigations show U, O, Ca, and Ti as dominant components in all measurements. Elements such as Si, Al, Fe, S, Na, and C were also detected; however, it cannot be excluded that these elements originate from the soil in the immediate vicinity of the munition rounds. Gold was also measured, but this can be attributed to the gold coating of the sample-preparation containers. Electron microscopy also revealed some spots in which elemental uranium and soil minerals as well as secondary uranium minerals occurred. The element maps show a direct correlation between U and Ca and, at the same time, no correlation between U and Si or Mg. On the other hand, there was a correlation between Si and Al, since both are constituents of soil minerals. A quantitative analysis performed by electron probe microanalysis put the mass fraction of uranium at about 78-80%, which corresponds to the values of 78.2% and 79.47% expected from the chemical formulas of becquerelite and schoepite, respectively. In addition, calcium showed a mass fraction of 2%, which agrees quite well with the value for becquerelite (2.19%). The mass fraction of Ti was in some cases 0.77%, which is attributable to not yet corroded DU alloy. A dissolution experiment was also carried out, in which the remaining sample substance of the corrosion products was dissolved in a 0.01 M NaClO4 solution; sodium perchlorate was used to keep the ionic strength at 0.01. To avoid contamination by atmospheric CO2, the 15 sample containers used in the experiment for the three main samples were flushed with nitrogen gas.
A model calculation for the described experimental setup was carried out with Visual MINTEQ v.3.0 for the mineral phases identified by the aforementioned analytical methods, covering the pH range 6-10 for becquerelite and schoepite. The modeled dissolution curves were calculated in the presence and in the absence of atmospheric CO2. At the end of the dissolution experiment (duration about 6 months), the concentrations of dissolved uranium, measured by ICP-OES, showed good agreement with the modeled schoepite and becquerelite curves. Owing to their similar dissolution behavior, it was not possible to distinguish between the two minerals. Schoepite controls the solubility of uranium in the acidic range, whereas becquerelite dissolves least in the basic range. Furthermore, it should be noted that some CO2 penetrated into the sealed sample containers, which is consistent with the prediction of the model data. The solubility of uranium in solution as a function of pH showed the lowest concentrations when the pH increased from 5 to 7 (about 5.1 x 10-6 mol/l) and when the pH increased further to 8 (about 1.5 x 10-6 mol/l). Above this range, every further increase in pH results in an increase of dissolved uranium in the solution. The pH of the solution as well as its pCO2 control the amount of dissolved uranium. On the other hand, in the case of becquerelite the Ca concentrations showed higher values than expected, which can probably be attributed to mixing of the samples with soil material. Finally, taking the above results into account, a case study from Basrah (Iraq) was discussed, where uranium munitions were used in two military conflicts, in two regions with different environmental conditions.
Abstract:
In recent years, Deep Learning techniques have been shown to perform well on a large variety of problems in both Computer Vision and Natural Language Processing, reaching and often surpassing the state of the art on many tasks. The rise of deep learning is also revolutionizing the entire field of Machine Learning and Pattern Recognition, pushing forward the concepts of automatic feature extraction and unsupervised learning in general. However, despite its strong success both in science and in business, deep learning has its own limitations. It is often questioned whether such techniques are merely brute-force statistical approaches and whether they can only work in the context of High Performance Computing with vast amounts of data. Another important question is whether they are really biologically inspired, as claimed in certain cases, and whether they can scale well in terms of "intelligence". The dissertation focuses on trying to answer these key questions in the context of Computer Vision and, in particular, Object Recognition, a task that has been heavily revolutionized by recent advances in the field. Practically speaking, these answers are based on an exhaustive comparison between two very different deep learning techniques on the aforementioned task: the Convolutional Neural Network (CNN) and Hierarchical Temporal Memory (HTM). They represent two different approaches and points of view under the broad umbrella of deep learning, and are good choices for understanding and pointing out the strengths and weaknesses of each. The CNN is considered one of the most classic and powerful supervised methods used today in machine learning and pattern recognition, especially in object recognition. CNNs are well received and accepted by the scientific community and are already deployed at large corporations like Google and Facebook to solve face recognition and image auto-tagging problems.
HTM, on the other hand, is an emerging paradigm and a new, mainly unsupervised method that is more biologically inspired. It tries to gain insights from the computational neuroscience community in order to incorporate concepts such as time, context, and attention during the learning process, which are typical of the human brain. In the end, the thesis aims to show that in certain cases, with a smaller quantity of data, HTM can outperform the CNN.
Core networks for visual-concrete and abstract thought content: a brain electric microstate analysis
Abstract:
Commonality of activation of spontaneously forming and stimulus-induced mental representations is an often made but rarely tested assumption in neuroscience. In a conjunction analysis of two earlier studies, brain electric activity during visual-concrete and abstract thoughts was studied. The conditions were: in study 1, spontaneous stimulus-independent thinking (post hoc, visual imagery or abstract thought was identified); in study 2, reading of single nouns ranking high or low on a visual imagery scale. In both studies, the subjects' tasks were similar: when prompted, they had to recall the last thought (study 1) or the last word (study 2). In both studies, subjects had no instruction to classify or to visually imagine their thoughts, and accordingly were not aware of the studies' aim. The brain electric data were analyzed into functional topographic brain images (using LORETA) of the last microstate before the prompt (study 1) and of the word-type-discriminating event-related microstate after word onset (study 2). Conjunction analysis across the two studies yielded commonality of activation of core networks for abstract thought content in left anterior superior regions, and for visual-concrete thought content in right temporal-posterior inferior regions. The results suggest that two different core networks are automatically activated when abstract or visual-concrete information, respectively, enters working memory, without any subject task or instruction concerning the two classes of information, and regardless of internal or external origin and of input modality. These core machineries of working memory are thus invariant to the source or modality of input when treating the two types of information.
Abstract:
Perceptual closure refers to the coherent perception of an object under circumstances when the visual information is incomplete. Although the perceptual closure index observed in electroencephalography reflects that an object has been recognized, the full spatiotemporal dynamics of cortical source activity underlying perceptual closure processing remain unknown so far. To address this question, we recorded magnetoencephalographic activity in 15 subjects (11 females) during a visual closure task and performed beamforming over a sequence of successive short time windows to localize high-frequency gamma-band activity (60–100 Hz). Two-tone images of human faces (Mooney faces) were used to examine perceptual closure. Event-related fields exhibited a magnetic closure index between 250 and 325 ms. Time-frequency analyses revealed sustained high-frequency gamma-band activity associated with the processing of Mooney stimuli; closure-related gamma-band activity was observed between 200 and 300 ms over occipitotemporal channels. Time-resolved source reconstruction revealed an early (0–200 ms) coactivation of caudal inferior temporal gyrus (cITG) and regions in posterior parietal cortex (PPC). At the time of perceptual closure (200–400 ms), the activation in cITG extended to the fusiform gyrus, if a face was perceived. Our data provide the first electrophysiological evidence that perceptual closure for Mooney faces starts with an interaction between areas related to processing of three-dimensional structure from shading cues (cITG) and areas associated with the activation of long-term memory templates (PPC). Later, at the moment of perceptual closure, inferior temporal cortex areas specialized for the perceived object are activated, i.e., the fusiform gyrus related to face processing for Mooney stimuli.
Abstract:
In the present multimodal study we aimed to investigate the role of visual exploration in relation to neuronal activity and performance during visuospatial processing. To this end, event-related functional magnetic resonance imaging (er-fMRI) was combined with simultaneous eye-tracking recordings and transcranial magnetic stimulation (TMS). Two groups of twenty healthy subjects each performed an angle discrimination task with different levels of difficulty during er-fMRI. The number of fixations, as a measure of visual exploration effort, was chosen to predict blood oxygen level-dependent (BOLD) signal changes using the general linear model (GLM). Without TMS, a positive linear relationship between visual exploration effort and the BOLD signal was found in a bilateral fronto-parietal cortical network, indicating that these regions reflect the increased number of fixations and the higher brain activity due to higher task demands. Furthermore, the relationship found between the number of fixations and performance demonstrates the relevance of visual exploration for visuospatial task solving. In the TMS group, offline theta-burst TMS (TBS) was applied over the right posterior parietal cortex (PPC) before the fMRI experiment started. Compared to controls, TBS led to a reduced correlation between visual exploration and BOLD signal change in regions of the fronto-parietal network of the right hemisphere, indicating a disruption of the network. In contrast, an increased correlation was found in regions of the left hemisphere, suggesting an attempt to compensate for the function of the disrupted areas. TBS led to fewer fixations and faster response times while accuracy remained at the same level, indicating that, without stimulation, subjects had explored more than was actually needed.
Abstract:
Background: Visuoperceptual deficits in dementia are common and can reduce quality of life. Testing of visuoperceptual function is often confounded by impairments in other cognitive domains and motor dysfunction. We aimed to develop, pilot, and test a novel visuocognitive prototype test battery which addressed these issues, suitable for both clinical and functional imaging use. Methods: We recruited 23 participants (14 with dementia, 6 of whom had extrapyramidal motor features, and 9 age-matched controls). The novel Newcastle visual perception prototype battery (NEVIP-B-Prototype) included angle, color, face, motion and form perception tasks, and an adapted response system. It allows for individualized task difficulties. Participants were tested outside and inside the 3T functional magnetic resonance imaging (fMRI) scanner. Functional magnetic resonance imaging data were analyzed using SPM8. Results: All participants successfully completed the task inside and outside the scanner. Functional magnetic resonance imaging analysis showed activation regions corresponding well to the regional specializations of the visual association cortex. In both groups, there was significant activity in the ventral occipital-temporal region in the face and color tasks, whereas the motion task activated the V5 region. In the control group, the angle task activated the occipitoparietal cortex. Patients and controls showed similar levels of activation, except on the angle task for which occipitoparietal activation was lower in patients than controls. Conclusion: Distinct visuoperceptual functions can be tested in patients with dementia and extrapyramidal motor features when tests use individualized thresholds, adapted tasks, and specialized response systems.
Abstract:
Pain and the conscious mind (or the self) are experienced in our body. Both are intimately linked to the subjective quality of conscious experience. Here, we used virtual reality technology and visuo-tactile conflicts in healthy subjects to test whether experimentally induced changes of bodily self-consciousness (self-location; self-identification) lead to changes in pain perception. We found that visuo-tactile stroking of a virtual body, but not of a control object, led to increased pressure pain thresholds and changes in self-location. Contrary to predictions based on earlier work, this increase was not modulated by the synchrony of stroking. This differed for self-identification, where, as predicted, synchrony of stroking increased self-identification with the virtual body (but not with a control object) and correlated positively with an increase in pain thresholds. We discuss the functional mechanisms of self-identification, self-location, and the visual perception of human bodies with respect to pain perception.
Abstract:
According to the broaden-and-build theory of positive emotions, positive emotions broaden while negative emotions narrow thought-action repertoires. These processes reflect changes in attentional scope, which is the focus of this research. The present study tested the hypothesis that participants in negative mood would be better able to focus on a target figure and separate it from its context in a perceptual task, and would also be better able to focus on the task amid a distracting environment than participants in a positive mood. An undergraduate sample of 77 participants watched video clips selected to induce either fear or amusement, and completed an Embedded Figures Test either in a quiet setting or in a noisy setting. A higher-order ANOVA revealed that Mood had a marginally significant effect on task performance, F(1, 73) = 3.94, p = .051, and that Distraction, F(1, 72) = 4.61, p = .035 and the Mood x Distraction interaction, F(1, 73) = 9.12, p = .003 did significantly affect task performance. However, contrary to the hypothesis, the effect of the distraction manipulation was greater for participants in a negative mood than it was for participants in a positive mood. The author suggests future directions to clarify the relationship between emotions, attentional scope, and susceptibility to environmental distraction.
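The F statistics reported above come from a factorial ANOVA of Mood and Distraction. As a rough illustration of how such F values arise, here is a minimal two-way ANOVA for a balanced 2 × 2 between-subjects design; the function and the example data are invented for illustration and do not reproduce the study's (unbalanced, higher-order) analysis:

```python
import numpy as np

def two_way_anova(data):
    """F statistics for a balanced 2x2 between-subjects design.

    `data[i][j]` holds the scores for level i of factor A (e.g. Mood)
    and level j of factor B (e.g. Distraction); all cells have equal n.
    Each effect has 1 degree of freedom, so F = SS_effect / MS_error.
    """
    data = np.asarray(data, dtype=float)          # shape (2, 2, n)
    n = data.shape[2]
    grand = data.mean()
    a_means = data.mean(axis=(1, 2))              # factor A marginals
    b_means = data.mean(axis=(0, 2))              # factor B marginals
    cell_means = data.mean(axis=2)
    ss_a = 2 * n * np.sum((a_means - grand) ** 2)
    ss_b = 2 * n * np.sum((b_means - grand) ** 2)
    ss_cells = n * np.sum((cell_means - grand) ** 2)
    ss_ab = ss_cells - ss_a - ss_b                # interaction SS
    ss_err = np.sum((data - cell_means[:, :, None]) ** 2)
    ms_err = ss_err / (4 * (n - 1))               # error df = 4(n-1)
    return {"A": ss_a / ms_err, "B": ss_b / ms_err, "AxB": ss_ab / ms_err}

# Toy data: 3 invented scores per cell
F = two_way_anova([[[1, 2, 3], [2, 3, 4]], [[3, 4, 5], [5, 6, 7]]])
```

A large F for the interaction term, as in the study, means the effect of one factor (distraction) differs across the levels of the other (mood).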
Abstract:
To assess the reliability of radiologic identification using visual comparison of ante and post mortem paranasal sinus computed tomography (CT).
Abstract:
Searching for the neural correlates of visuospatial processing using functional magnetic resonance imaging (fMRI) is usually done in an event-related framework of cognitive subtraction, applying a paradigm comprising visuospatial cognitive components and a corresponding control task. Besides methodological caveats of the cognitive subtraction approach, the standard general linear model with fixed hemodynamic response predictors bears the risk of being underspecified. It does not take into account the variability of the blood oxygen level-dependent signal response due to variable task demand and performance on the level of each single trial. This underspecification may result in reduced sensitivity regarding the identification of task-related brain regions. In a rapid event-related fMRI study, we used an extended general linear model including single-trial reaction-time-dependent hemodynamic response predictors for the analysis of an angle discrimination task. In addition to the already known regions in superior and inferior parietal lobule, mapping the reaction-time-dependent hemodynamic response predictor revealed a more specific network including task demand-dependent regions not being detectable using the cognitive subtraction method, such as bilateral caudate nucleus and insula, right inferior frontal gyrus and left precentral gyrus.
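The idea of an extended GLM with single-trial reaction-time-dependent predictors can be sketched as follows. This is a minimal toy version, assuming a simplified single-gamma HRF and invented function names; it is not the SPM-style model the study actually used. The trial onsets are entered twice: once as a fixed task regressor and once as a parametric regressor scaled by each trial's demeaned reaction time, so trial-to-trial demand variability receives its own beta weight.

```python
import numpy as np

def simple_hrf(t):
    # Crude single-gamma hemodynamic response (a stand-in for the
    # canonical double-gamma HRF used by SPM-style packages).
    h = t ** 5 * np.exp(-t)
    return h / h.max()

def design_matrix(n_scans, tr, onsets, rts):
    """Design with a fixed task regressor plus a single-trial,
    reaction-time-modulated parametric regressor (illustrative)."""
    t = np.arange(0.0, 32.0, tr)
    h = simple_hrf(t)
    task = np.zeros(n_scans)
    rt_mod = np.zeros(n_scans)
    demeaned = np.asarray(rts, dtype=float) - np.mean(rts)
    for onset, d in zip(onsets, demeaned):
        idx = int(round(onset / tr))
        task[idx] += 1.0          # unit impulse at each trial onset
        rt_mod[idx] += d          # impulse scaled by demeaned RT
    task = np.convolve(task, h)[:n_scans]
    rt_mod = np.convolve(rt_mod, h)[:n_scans]
    return np.column_stack([np.ones(n_scans), task, rt_mod])

def fit_glm(y, X):
    # Ordinary least squares: beta = argmin ||y - X @ beta||^2
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta
```

A voxel with a large beta on the third column responds more strongly on slow (high-demand) trials, which is exactly the task-demand sensitivity a fixed-response model cannot detect.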
Abstract:
Edges are crucial for the formation of coherent objects from sequential sensory inputs within a single modality. Moreover, temporally coincident boundaries of perceptual objects across different sensory modalities facilitate crossmodal integration. Here, we used functional magnetic resonance imaging in order to examine the neural basis of temporal edge detection across modalities. Onsets of sensory inputs are not only related to the detection of an edge but also to the processing of novel sensory inputs. Thus, we used transitions from input to rest (offsets) as convenient stimuli for studying the neural underpinnings of visual and acoustic edge detection per se. We found, besides modality-specific patterns, shared visual and auditory offset-related activity in the superior temporal sulcus and insula of the right hemisphere. Our data suggest that right hemispheric regions known to be involved in multisensory processing are crucial for detection of edges in the temporal domain across both visual and auditory modalities. This operation is likely to facilitate cross-modal object feature binding based on temporal coincidence. Hum Brain Mapp, 2008. (c) 2008 Wiley-Liss, Inc.
Abstract:
Autism has been associated with enhanced local processing on visual tasks. Originally, this was based on findings that individuals with autism exhibited peak performance on the block design test (BDT) from the Wechsler Intelligence Scales. In autism, the neurofunctional correlates of local bias on this test have not yet been established, although there is evidence of alterations in the early visual cortex. Functional MRI was used to analyze hemodynamic responses in the striate and extrastriate visual cortex during BDT performance and a color-counting control task in subjects with autism compared to healthy controls. In autism, BDT processing was accompanied by low blood oxygenation level-dependent signal changes in the right ventral quadrant of V2. The findings indicate that, in autism, locally oriented processing of the BDT is associated with altered responses of angle- and grating-selective neurons that contribute to shape representation, figure-ground segregation, and gestalt organization. The findings favor a low-level explanation of BDT performance in autism.