998 results for Contextual visual localization
Abstract:
The integration of the auditory modality in virtual reality environments is known to promote sensations of immersion and presence. However, psychophysics studies show that auditory-visual interactions obey complex rules and that multisensory conflicts may disrupt the participant's engagement with the presented virtual scene. It is thus important to measure the accuracy of the auditory spatial cues reproduced by the auditory display and their consistency with the spatial visual cues. This study evaluates auditory localization performance under various unimodal and auditory-visual bimodal conditions in a virtual reality (VR) setup using a stereoscopic display and binaural reproduction over headphones in static conditions. The auditory localization performance observed in the present study is in line with that reported in real conditions, suggesting that VR gives rise to consistent auditory and visual spatial cues. These results validate the use of VR for future psychophysics experiments with auditory and visual stimuli. They also emphasize the importance of spatially accurate auditory and visual rendering for VR setups.
Abstract:
In patients diagnosed with pharmaco-resistant epilepsy, the cerebral areas responsible for seizure generation can be defined by implanting intracranial electrodes. The identification of the epileptogenic zone (EZ) is based on visual inspection of the intracranial electroencephalogram (IEEG) by highly qualified neurophysiologists. New computer-based quantitative EEG analyses have been developed in collaboration with the signal analysis community to expedite EZ detection. The aim of the present report is to compare different signal analysis approaches developed in four European laboratories working in close collaboration with four European Epilepsy Centers. Computer-based signal analysis methods were retrospectively applied to IEEG recordings from four patients undergoing pre-surgical exploration of pharmaco-resistant epilepsy. The four methods elaborated by the different teams to identify the EZ are based on frequency analysis, nonlinear signal analysis, connectivity measures, or statistical parametric mapping of epileptogenicity indices. All methods converge on the identification of the EZ in patients who present with fast activity at seizure onset. When traditional visual inspection was not successful in detecting the EZ on IEEG, the different signal analysis methods produced highly discordant results. Quantitative analysis of IEEG recordings complements clinical evaluation by contributing to the study of epileptogenic networks during seizures. We demonstrate that the sensitivity of different computer-based methods in detecting the EZ, relative to visual EEG inspection, depends on the specific seizure pattern.
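The abstract does not specify any of the four pipelines, but the frequency-analysis family it mentions typically ranks channels by the prominence of fast activity at seizure onset. The sketch below illustrates that idea only: it scores each IEEG channel by the ratio of high- to low-frequency band energy. The function names and band limits are hypothetical, not taken from any of the four laboratories' methods.

```python
import numpy as np

def band_power(x, fs, lo, hi):
    """Mean periodogram power of signal x in the [lo, hi] Hz band."""
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(x)) ** 2 / len(x)
    mask = (freqs >= lo) & (freqs <= hi)
    return psd[mask].mean()

def fast_activity_ratio(ieeg, fs, fast=(30.0, 100.0), slow=(1.0, 12.0)):
    """Score channels by fast/slow band-energy ratio around seizure onset.

    ieeg: array of shape (n_channels, n_samples).
    Returns one ratio per channel; higher values suggest fast onset activity.
    """
    return np.array([
        band_power(ch, fs, *fast) / (band_power(ch, fs, *slow) + 1e-12)
        for ch in ieeg
    ])
```

A channel dominated by low-voltage fast activity would score far above a channel dominated by slow rhythms, which is the intuition behind frequency-based EZ indices.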
Abstract:
Meditation is a self-induced and willfully initiated practice that alters the state of consciousness. The meditation practice of Zazen, like many other meditation practices, aims at disregarding intrusive thoughts while controlling body posture. It is an open monitoring meditation characterized by detached moment-to-moment awareness and reduced conceptual thinking and self-reference. Which brain areas differ in electric activity during Zazen compared to task-free resting? Since scalp electroencephalography (EEG) waveforms are reference-dependent, conclusions about the localization of active brain areas are ambiguous. Computing intracerebral source models from the scalp EEG data solves this problem. In the present study, we applied source modeling using low resolution brain electromagnetic tomography (LORETA) to 58-channel scalp EEG data recorded from 15 experienced Zen meditators during Zazen and no-task resting. Zazen compared to no-task resting showed increased alpha-1 and alpha-2 frequency activity in an exclusively right-lateralized cluster extending from prefrontal areas including the insula to parts of the somatosensory and motor cortices and temporal areas. Zazen also showed decreased alpha and beta-2 activity in the left angular gyrus and decreased beta-1 and beta-2 activity in a large bilateral posterior cluster comprising the visual cortex, the posterior cingulate cortex and the parietal cortex. The results include parts of the default mode network and suggest enhanced automatic memory and emotion processing, reduced conceptual thinking and self-reference on a less judgmental, i.e., more detached moment-to-moment basis during Zazen compared to no-task resting.
Abstract:
Adult monkeys (Macaca mulatta) with lesions of the hippocampal formation, the perirhinal cortex, or areas TH/TF, as well as controls, were tested on tasks of object, spatial and contextual recognition memory.

Using a visual paired-comparison (VPC) task, all experimental groups showed impaired object recognition relative to controls, although this impairment emerged at 10 sec with perirhinal lesions, 30 sec with areas TH/TF lesions and 60 sec with hippocampal lesions. In contrast, only perirhinal lesions impaired performance on delayed nonmatching-to-sample (DNMS), another task of object recognition memory. All groups were tested on DNMS with distraction (dDNMS) to examine whether the use of active cognitive strategies during the delay period could enable good performance on DNMS in spite of impaired recognition memory (revealed by the VPC task). Distractors affected performance of animals with perirhinal lesions at the 10-sec delay (the only delay at which their DNMS performance was above chance). They did not affect performance of animals with areas TH/TF lesions. Hippocampectomized animals were impaired at the 600-sec delay (the only delay at which prevention of active strategies would likely affect their behavior).

While lesions of areas TH/TF impaired spatial location memory and object-in-place memory, hippocampal lesions impaired only object-in-place memory. The pattern of results for perirhinal cortex lesions on the different task conditions indicated that this cortical area is not critical for spatial memory.

Finally, all three lesions impaired contextual recognition memory processes. The pattern of impairment appeared to result from the formation of only a global representation of the object and background, and suggests that all three areas are recruited for associating information across sources.

These results support the view that (1) the perirhinal cortex maintains storage of information about an object and the context in which it is learned for a brief period of time, (2) areas TH/TF maintain information about spatial location and form associations between objects and their spatial relationships (a process that likely requires additional time), and (3) the hippocampal formation mediates associations between objects, their spatial relationships and the general context in which these associations are formed (an integrative function that requires additional time).
Abstract:
The perceived speed of motion in one part of the visual field is influenced by the speed of motion in its surrounding fields. Little is known about the cellular mechanisms underlying this phenomenon. Recordings from mammalian visual cortex revealed that the speed preference of cortical cells could be changed by displaying a contrasting speed in the field surrounding the cell's classical receptive field: a neuron's selectivity shifted to prefer faster speeds if the contextual surround motion was set at a relatively lower speed, and vice versa. These specific center–surround interactions may underlie the perceptual enhancement of speed contrast between adjacent fields.
Abstract:
N-methyl-d-aspartate receptor (NMDAR) activation has been implicated in forms of synaptic plasticity involving long-term changes in neuronal structure, function, or protein expression. Transcriptional alterations have been correlated with NMDAR-mediated synaptic plasticity, but the problem of rapidly targeting new proteins to particular synapses is unsolved. One potential solution is synapse-specific protein translation, which is suggested by dendritic localization of numerous transcripts and subsynaptic polyribosomes. We report here a mechanism by which NMDAR activation at synapses may control this protein synthetic machinery. In intact tadpole tecta, NMDAR activation leads to phosphorylation of a subset of proteins, one of which we now identify as the eukaryotic translation elongation factor 2 (eEF2). Phosphorylation of eEF2 halts protein synthesis and may prepare cells to translate a new set of mRNAs. We show that NMDAR activation-induced eEF2 phosphorylation is widespread in tadpole tecta. In contrast, in adult tecta, where synaptic plasticity is reduced, this phosphorylation is restricted to short dendritic regions that process binocular information. Biochemical and anatomical evidence shows that this NMDAR activation-induced eEF2 phosphorylation is localized to subsynaptic sites. Moreover, eEF2 phosphorylation is induced by visual stimulation, and NMDAR blockade before stimulation eliminates this effect. Thus, NMDAR activation, which is known to mediate synaptic changes in the developing frog, could produce local postsynaptic alterations in protein synthesis by inducing eEF2 phosphorylation.
Abstract:
Event-related brain potentials (ERPs) provide high-resolution measures of the time course of neuronal activity patterns associated with perceptual and cognitive processes. New techniques for ERP source analysis and comparisons with data from blood-flow neuroimaging studies enable improved localization of cortical activity during visual selective attention. ERP modulations during spatial attention point toward a mechanism of gain control over information flow in extrastriate visual cortical pathways, starting about 80 ms after stimulus onset. Paying attention to nonspatial features such as color, motion, or shape is manifested by qualitatively different ERP patterns in multiple cortical areas that begin with latencies of 100–150 ms. The processing of nonspatial features seems to be contingent upon the prior selection of location, consistent with early selection theories of attention and with the hypothesis that spatial attention is “special.”
Abstract:
One of the fascinating properties of the central nervous system is its ability to learn: the ability to alter its functional properties adaptively as a consequence of the interactions of an animal with the environment. The auditory localization pathway provides an opportunity to observe such adaptive changes and to study the cellular mechanisms that underlie them. The midbrain localization pathway creates a multimodal map of space that represents the nervous system's associations of auditory cues with locations in visual space. Various manipulations of auditory or visual experience, especially during early life, that change the relationship between auditory cues and locations in space lead to adaptive changes in auditory localization behavior and to corresponding changes in the functional and anatomical properties of this pathway. Traces of this early learning persist into adulthood, enabling adults to reacquire patterns of connectivity that were learned initially during the juvenile period.
Abstract:
Action selection and organization are complex processes that need to exploit contextual information and the retrieval of previously memorized information, as well as the integration of these different types of data. On the basis of its anatomical connections with premotor and parietal areas involved in action goal coding, and of data from the literature, the prefrontal cortex appears to be one of the best candidates for selecting the neuronal pools underlying the selection and organization of intentional actions. We recorded the activity of single ventrolateral prefrontal (VLPF) neurons while monkeys performed simple and complex manipulative actions aimed at distinct final goals, employing a modified and more strictly controlled version of the grasp-to-eat (a food pellet)/grasp-to-place (an object) paradigm used in previous studies on parietal (Fogassi et al., 2005) and premotor neurons (Bonini et al., 2010). With this task we were able both to evaluate the processing and integration of distinct (visual and auditory) contextual information, presented sequentially, for selecting the forthcoming action, and to examine the possible presence of goal-related activity in this portion of cortex. Moreover, we performed an observation task to clarify the possible contribution of VLPF neurons to the understanding of others' goal-directed actions. Simple Visuo-Motor Task (sVMT). We found four main types of neurons: unimodal sensory-driven, motor-related, unimodal sensory-and-motor, and multisensory neurons. A substantial number of VLPF neurons showed both a motor-related discharge and a visual presentation response (sensory-and-motor neurons), with remarkable visuo-motor congruence for the preferred target.
Interestingly, the discharge of multisensory neurons reflected a behavioural decision independently of the sensory modality of the stimulus that allowed the monkey to make it: most encoded a decision to act or to refrain from acting, while others specified one among the four behavioural alternatives. Complex Visuo-Motor Task (cVMT). The cVMT was similar to the sVMT but included a further grasping motor act (grasping a lid in order to remove it before grasping the target) and was run in two modalities: randomized and in blocks. Motor-related and sensory-and-motor neurons tested in the randomized cVMT were activated already during the first grasping motor act, but selectivity for one of the two graspable targets emerged only during the execution of the second grasping. In contrast, when the cVMT was run in blocks, almost all of these neurons not only discharged during the first grasping motor act but also displayed the same target selectivity shown at hand contact with the target. Observation Task (OT). A large proportion of the neurons active during the OT showed a firing-rate modulation in correspondence with the action performed by the experimenter. Among them, we found neurons significantly activated during the observation of the experimenter's action (action observation-related neurons) and neurons responding not only to action observation but also to the presented cue stimuli (sensory-and-action observation-related neurons). Among the first set, almost half displayed target selectivity, with no clear difference between the two presented targets; in the second set, we found low target selectivity and no strict congruence between the selectivity exhibited in the visual response and that in the action observation response.
Abstract:
New low-cost sensors and free open-source libraries for 3D image processing are making important advances possible in robot vision applications such as three-dimensional object recognition, semantic mapping, robot navigation and localization, human detection, and gesture recognition for human-machine interaction. In this paper, a novel method for recognizing and tracking the fingers of a human hand is presented. The method is based on point clouds from range images captured by an RGBD sensor. It works in real time and requires no visual markers, camera calibration or prior knowledge of the environment. Moreover, it works successfully even when multiple objects appear in the scene or when the ambient light changes. The method was designed to provide a human interface for remotely controlling domestic or industrial devices; here, it was tested by operating a robotic hand. First, the human hand was recognized and the fingers were detected. Second, the movement of the fingers was analysed and mapped to be imitated by a robotic hand.
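The abstract does not detail the paper's point-cloud pipeline; as an illustration of the kind of depth-based segmentation such a marker-free system might start from, the sketch below keeps the points nearest the sensor, under the assumption that the user's hand is the closest object. The function name and depth margin are hypothetical.

```python
import numpy as np

def segment_hand(points, depth_margin=0.15):
    """Crude nearest-blob segmentation of an RGBD point cloud.

    Keeps the points lying within depth_margin metres of the closest
    point to the sensor, assuming the hand is the nearest object.

    points: (N, 3) array of x, y, z coordinates from the range image.
    Returns the subset of points belonging to the presumed hand.
    """
    z = points[:, 2]                     # depth axis of the sensor
    z_min = z.min()                      # closest point to the camera
    return points[(z >= z_min) & (z <= z_min + depth_margin)]
```

A real system would follow this with clustering and contour analysis to isolate individual fingers; this sketch only shows the initial hand/background split.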
Abstract:
The use of 3D data in mobile robotics provides valuable information about the robot's environment. Traditionally, stereo cameras have been used as a low-cost 3D sensor, but their lack of precision on some surfaces, particularly textureless ones, suggests that other 3D sensors could be more suitable. In this work, we examine the use of two sensors: an infrared SR4000 and a Kinect camera. We combine the 3D data obtained by these cameras with features extracted from the 2D images they acquire, applying a Growing Neural Gas (GNG) network to the 3D data. The goal is to obtain a robust egomotion technique, with the GNG network used to reduce the camera error. To calculate the egomotion, we test two methods for 3D registration: one based on an iterative closest point algorithm, the other employing random sample consensus. Finally, a simultaneous localization and mapping method is applied to the complete sequence to reduce the global error. The error from each sensor and the mapping results of the proposed method are examined.
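The abstract names two registration methods (ICP and RANSAC) without giving their details. The core alignment step shared by ICP variants, the least-squares rigid transform between two matched point sets computed via SVD (the Kabsch solution), can be sketched as follows; this is an illustrative single step with known correspondences, not the thesis pipeline.

```python
import numpy as np

def rigid_transform(src, dst):
    """Least-squares rigid transform (R, t) aligning src onto dst.

    src, dst: (N, 3) arrays of corresponding points (e.g. one ICP
    iteration after nearest-neighbour matching). Returns rotation R
    and translation t such that dst ~= src @ R.T + t.
    """
    c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
    H = (src - c_src).T @ (dst - c_dst)      # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                 # correct an improper reflection
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = c_dst - R @ c_src
    return R, t
```

Full ICP alternates this closed-form step with re-matching points between consecutive frames; chaining the recovered transforms gives the egomotion estimate that SLAM then refines globally.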
Abstract:
High-voltage-activated calcium channels are hetero-oligomeric protein complexes that mediate multiple cellular processes, including the influx of extracellular Ca2+, neurotransmitter release, gene transcription, and synaptic plasticity. These channels consist of a primary α1 pore-forming subunit, which is associated with an extracellular α2δ subunit and an intracellular β auxiliary subunit, which alter the gating properties and trafficking of the calcium channel. The cellular localization of the α2δ3 subunit in the mouse and rat retina is unknown. In this study using RT-PCR, a single band at ∼305 bp corresponding to the predicted size of the α2δ3 subunit fragment was found in mouse and rat retina and brain homogenates. Western blotting of rodent retina and brain homogenates showed a single 123-kDa band. Immunohistochemistry with an affinity-purified antibody to the α2δ3 subunit revealed immunoreactive cell bodies in the ganglion cell layer and inner nuclear layer and immunoreactive processes in the inner plexiform layer and the outer plexiform layer. α2δ3 immunoreactivity was localized to multiple cell types, including ganglion, amacrine, and bipolar cells and photoreceptors, but not horizontal cells. The expression of the α2δ3 calcium channel subunit to multiple cell types suggests that this subunit participates widely in Ca-channel-mediated signaling in the retina.
Abstract:
Autonomous piloting systems for quadcopters are currently being developed for navigation in outdoor spaces, where the GPS signal can be used to define navigation waypoints and to implement position and altitude hold, return-to-home and other modes. However, autonomous navigation in enclosed spaces, without a global positioning system available inside a room, remains a challenging problem with no closed solution. Most solutions are based on expensive sensors, such as LIDAR, or on external positioning systems (e.g. Vicon, Optitrack). Some of these solutions offload the processing of sensor data and of the most demanding algorithms to computing systems external to the vehicle, which also removes the full autonomy intended for a vehicle of this kind. The goal of this thesis is thus to prepare a small unmanned aerial system, namely a quadcopter, integrating different modules that allow simultaneous localization and mapping in indoor spaces where the GPS signal is denied, using an RGB-D camera together with the quadcopter's other internal and external sensors, integrated into a system that computes vision-based positioning and which is intended, in the near future, to perform motion planning for navigation. The result of this work was an integrated architecture for the analysis of localization, mapping and navigation modules, based on open, inexpensive hardware and on state-of-the-art open-source frameworks. It was also possible to partially test some localization modules under certain test conditions and algorithm parameters. The mapping capability of the framework was also tested and approved. The resulting framework is ready for navigation, requiring only some adjustments and tests.
Abstract:
Thesis (Ph.D.)--University of Washington, 2016-06
Abstract:
The spatial character of our reaching movements is extremely sensitive to potential obstacles in the workspace. We recently found that this sensitivity was retained by most patients with left visual neglect when reaching between two objects, despite the fact that they tended to ignore the leftward object when asked to bisect the space between them. This raises the possibility that obstacle avoidance does not require a conscious awareness of the obstacle avoided. We have now tested this hypothesis in a patient with visual extinction following right temporoparietal damage. Extinction is an attentional disorder in which patients fail to report stimuli on the side of space opposite a brain lesion under conditions of bilateral stimulation. Our patient avoided obstacles during reaching, to exactly the same degree, regardless of whether he was able to report their presence. This implicit processing of object location, which may depend on spared superior parietal-lobe pathways, demonstrates that conscious awareness is not necessary for normal obstacle avoidance.