780 results for Primary Visual-cortex
Abstract:
903 pages, bibliography on pages 854-895, glossary on pages 896-903
Abstract:
The aim of the present study was to evaluate, in vivo, caries detection using the ICDAS visual examination, fiber-optic transillumination combined with ICDAS, and radiographic examination. A total of 2,279 proximal and pit-and-fissure surfaces in permanent maxillary incisors, premolars, and molars, and 272 surfaces in primary molars, were assessed in 72 schoolchildren (8 to 18 years old) by a trained examiner. The seven ICDAS visual scores for primary caries detection were applied. Two fiber-optic transillumination devices were evaluated: the Schott FOTI (SCH), with a 0.5 mm diameter fiber-optic tip, and the Microlux FOTI (MIC), with a 3 mm tip diameter. During the combined FOTI/ICDAS examination, the optical fiber was used both to illuminate and to transilluminate the surface under evaluation. The radiographic examination (RX) consisted of posterior bitewing and anterior periapical radiographs. Examinations were performed in a dental office after supervised toothbrushing. On the first examination day, the ICDAS visual examination was performed, followed by the examination combined with MIC or SCH, and then by the radiographic examination. One week later, the ICDAS was repeated, followed by the combined examination with the FOTI device not used the previous week. The examinations were repeated in 10 patients after a minimum one-week interval to assess intra-examiner reproducibility, which yielded weighted kappa values of 0.95 (ICDAS), 0.94 (MIC), 0.95 (SCH), and 0.99 (RX). In pits and fissures of permanent teeth, RX judged more surfaces as having dentine lesions (53) than the other methods (34 to 36), but detected no enamel lesions, which were identified by ICDAS (94), SCH (107), and MIC (91).
On proximal surfaces of permanent teeth, fiber-optic transillumination identified more surfaces as enamel lesions, 150 (SCH) and 139 (MIC), than the visual examination (106), while RX identified only 43. On occlusal surfaces of primary teeth, the four methods judged approximately similar numbers of surfaces as sound (52 to 59) or as having dentine lesions (21 to 26), as they did for proximal dentine lesions (31 to 36). However, the radiographic examination judged far fewer primary proximal enamel lesions (3) than the other methods (15 to 16). In primary teeth, ICDAS and FOTI combined with the visual examination judged more proximal enamel lesions than the radiographic examination, while similar numbers of dentine lesions were classified by the four methods on occlusal and proximal surfaces of primary molars. In pits and fissures of permanent teeth, both the ICDAS visual examination and its combination with the two transillumination devices showed greater similarity in the numbers of surfaces judged as enamel or as dentine lesions, whereas the radiographic examination classified more surfaces as dentine lesions and none as enamel lesions. Adding fiber-optic transillumination to the visual examination increased by one third the number of proximal carious lesions judged to be in dentine relative to ICDAS alone, and roughly quadrupled the number so classified relative to the radiographic assessment in permanent teeth.
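The intra-examiner reproducibility figures above are weighted kappa statistics. A minimal sketch of how such a coefficient can be computed follows; the function name and the toy ratings are illustrative, not data from the study:

```python
import numpy as np

def weighted_kappa(r1, r2, n_cat, power=2):
    """Weighted Cohen's kappa for two ratings of the same surfaces on an
    ordinal scale 0..n_cat-1 (power=2 gives quadratic weights, power=1 linear)."""
    O = np.zeros((n_cat, n_cat))               # observed rating-pair matrix
    for a, b in zip(r1, r2):
        O[a, b] += 1
    O /= O.sum()                               # joint proportions
    E = np.outer(O.sum(axis=1), O.sum(axis=0)) # expected under independence
    i, j = np.indices((n_cat, n_cat))
    W = (np.abs(i - j) ** power) / float((n_cat - 1) ** power)  # disagreement weights
    return 1.0 - (W * O).sum() / (W * E).sum()

# A perfectly repeated scoring yields kappa = 1.0.
print(weighted_kappa([0, 1, 2, 3, 0], [0, 1, 2, 3, 0], n_cat=7))  # 1.0
```

With seven ordinal ICDAS scores, weighting penalizes large disagreements (e.g. sound vs. dentine lesion) more than adjacent-score disagreements, which is why weighted rather than simple kappa is the usual choice here.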
Abstract:
In the last decade, research efforts into directly interfacing with the neurons of individuals with motor deficits have increased. The goal of such research is clear: enable individuals affected by paralysis or amputation to regain control of their environments by manipulating external devices with thought alone. Though the motor cortices are the usual brain areas upon which neural prosthetics depend, research into the parietal lobe and its subregions, primarily in non-human primates, has uncovered alternative areas that could also benefit neural interfaces. Like the motor cortical areas, parietal regions can supply information about the trajectories of movements. In addition, the parietal lobe contains cognitive signals such as movement goals and intentions. However, these areas are also known to be tuned to saccadic eye movements, which could interfere with the function of a prosthetic designed to capture motor intentions only. In this thesis, we develop a neural prosthetic in a non-human primate model using the superior parietal lobe, examining both the effectiveness of such an interface and the effects of unconstrained eye movements in a task that more closely simulates clinical applications. Additionally, we examine methods for improving the usability of such interfaces.
The parietal cortex is also believed to contain neural signals relating to monitoring the state of the limbs through visual and somatosensory feedback. In one of the world's first clinical neural prosthetics based on the human parietal lobe, we examine the extent to which feedback regarding the state of a movement effector alters parietal neural signals, what the implications are for motor neural prosthetics, and how this informs our understanding of this area of the human brain.
Abstract:
Prefrontal cortical function is one of the important topics in neurobiology. Based on experimental results from neuroanatomy, neurophysiology, and the behavioral sciences, and on the principles of cybernetics and information theory, this paper constructed a simple model simulating prefrontal control function and used it to simulate the behavior of Macaca mulatta completing delayed-response tasks both before and after damage to the prefrontal cortex. The results indicated an obvious difference in the capacity to complete delayed-response tasks between normal monkeys and those whose prefrontal cortex had been removed, in agreement with the experiments. The authors suggest that the factors affecting completion of delayed-response tasks may lie in the keeping and extraction of information in memory, encompassing the storing, keeping, and extracting procedures, rather than in the information-storing process alone.
Abstract:
Monkeys with lesions of areas 9 and 46 performed three variants of the spatial delayed response (SDR) task. There were no impairments in allocentric spatial memory, in which geometrical relationships between environmental cues were used to identify spatial location; thus, memory of a 3D environmental map is intact. In contrast, there were severe impairments in egocentric spatial memory guided by visual or tactile cues that monkeys can relate to their viewing perspective during testing. These results strongly suggest that dorsolateral prefrontal cortex selectively mediates spatial memory tasks that are solved by referencing the location of targets to the body's orientation. (C) 2003 Lippincott Williams & Wilkins.
On the generality of crowding: visual crowding in size, saturation, and hue compared to orientation.
Abstract:
Perception of peripherally viewed shapes is impaired when surrounded by similar shapes. This phenomenon is commonly referred to as "crowding". Although studied extensively for perception of characters (mainly letters) and, to a lesser extent, for orientation, little is known about whether and how crowding affects perception of other features. Nevertheless, current crowding models suggest that the effect should be rather general and thus not restricted to letters and orientation. Here, we report on a series of experiments investigating crowding in the following elementary feature dimensions: size, hue, and saturation. Crowding effects in these dimensions were benchmarked against those in the orientation domain. Our primary finding is that all features studied show clear signs of crowding. First, identification thresholds increase with decreasing mask spacing. Second, for all tested features, critical spacing appears to be roughly half the viewing eccentricity and independent of stimulus size, a property previously proposed as the hallmark of crowding. Interestingly, although critical spacings are highly comparable, crowding magnitude differs across features: Size crowding is almost as strong as orientation crowding, whereas the effect is much weaker for saturation and hue. We suggest that future theories and models of crowding should be able to accommodate these differences in crowding effects.
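The critical-spacing regularity reported above (roughly half the viewing eccentricity, independent of stimulus size) is often called Bouma's rule. A one-line sketch, where the 0.5 fraction is the approximate value from the abstract:

```python
def critical_spacing(eccentricity_deg, bouma_fraction=0.5):
    """Approximate radius of the crowding zone, in degrees of visual angle:
    a fixed fraction of eccentricity, independent of target size."""
    return bouma_fraction * eccentricity_deg

# A target at 10 deg eccentricity is crowded by flankers within roughly 5 deg.
print(critical_spacing(10.0))  # 5.0
```

The abstract's finding is that this proportionality holds across size, hue, and saturation as well as orientation, even though the magnitude of crowding differs between those features.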
Abstract:
A system for visual recognition is described, with implications for the general problem of representation of knowledge to assist control. The immediate objective is a computer system that will recognize objects in a visual scene, specifically hammers. The computer receives an array of light intensities from a device like a television camera. It is to locate and identify the hammer if one is present. The computer must produce from the numerical "sensory data" a symbolic description that constitutes its perception of the scene. Of primary concern is the control of the recognition process. Control decisions should be guided by the partial results obtained on the scene. If a hammer handle is observed this should suggest that the handle is part of a hammer and advise where to look for the hammer head. The particular knowledge that a handle has been found combines with general knowledge about hammers to influence the recognition process. This use of knowledge to direct control is denoted here by the term "active knowledge". A descriptive formalism is presented for visual knowledge which identifies the relationships relevant to the active use of the knowledge. A control structure is provided which can apply knowledge organized in this fashion actively to the processing of a given scene.
Abstract:
How do visual form and motion processes cooperate to compute object motion when each process separately is insufficient? A 3D FORMOTION model specifies how 3D boundary representations, which separate figures from backgrounds within cortical area V2, capture motion signals at the appropriate depths in MT; how motion signals in MT disambiguate boundaries in V2 via MT-to-V1-to-V2 feedback; how sparse feature tracking signals are amplified; and how a spatially anisotropic motion grouping process propagates across perceptual space via MT-MST feedback to integrate feature-tracking and ambiguous motion signals to determine a global object motion percept. Simulated data include: the degree of motion coherence of rotating shapes observed through apertures, the coherent vs. element motion percepts separated in depth during the chopsticks illusion, and the rigid vs. non-rigid appearance of rotating ellipses.
Abstract:
Anterior inferotemporal cortex (ITa) plays a key role in visual object recognition. Recognition is tolerant to object position, size, and view changes, yet recent neurophysiological data show ITa cells with high object selectivity often have low position tolerance, and vice versa. A neural model learns to simulate both this tradeoff and ITa responses to image morphs using large-scale and small-scale IT cells whose population properties may support invariant recognition.
Abstract:
How do humans use predictive contextual information to facilitate visual search? How are consistently paired scenic objects and positions learned and used to more efficiently guide search in familiar scenes? For example, a certain combination of objects can define a context for a kitchen and trigger a more efficient search for a typical object, such as a sink, in that context. A neural model, ARTSCENE Search, is developed to illustrate the neural mechanisms of such memory-based contextual learning and guidance, and to explain challenging behavioral data on positive/negative, spatial/object, and local/distant global cueing effects during visual search. The model proposes how global scene layout at a first glance rapidly forms a hypothesis about the target location. This hypothesis is then incrementally refined by enhancing target-like objects in space as a scene is scanned with saccadic eye movements. The model clarifies the functional roles of neuroanatomical, neurophysiological, and neuroimaging data in visual search for a desired goal object. In particular, the model simulates the interactive dynamics of spatial and object contextual cueing in the cortical What and Where streams starting from early visual areas through medial temporal lobe to prefrontal cortex. After learning, model dorsolateral prefrontal cortical cells (area 46) prime possible target locations in posterior parietal cortex based on goal-modulated percepts of spatial scene gist represented in parahippocampal cortex, whereas model ventral prefrontal cortical cells (area 47/12) prime possible target object representations in inferior temporal cortex based on the history of viewed objects represented in perirhinal cortex. The model hereby predicts how the cortical What and Where streams cooperate during scene perception, learning, and memory to accumulate evidence over time to drive efficient visual search of familiar scenes.
Abstract:
Visual search data are given a unified quantitative explanation by a model of how spatial maps in the parietal cortex and object recognition categories in the inferotemporal cortex deploy attentional resources as they reciprocally interact with visual representations in the prestriate cortex. The model visual representations are organized into multiple boundary and surface representations. Visual search in the model is initiated by organizing multiple items that lie within a given boundary or surface representation into a candidate search grouping. These items are compared with object recognition categories to test for matches or mismatches. Mismatches can trigger deeper searches and recursive selection of new groupings until a target object is identified. This search model is algorithmically specified to quantitatively simulate search data using a single set of parameters, as well as to qualitatively explain a still larger data base, including data of Aks and Enns (1992), Bravo and Blake (1990), Chelazzi, Miller, Duncan, and Desimone (1993), Egeth, Virzi, and Garbart (1984), Cohen and Ivry (1991), Enns and Rensink (1990), He and Nakayama (1992), Humphreys, Quinlan, and Riddoch (1989), Mordkoff, Yantis, and Egeth (1990), Nakayama and Silverman (1986), Treisman and Gelade (1980), Treisman and Sato (1990), Wolfe, Cave, and Franzel (1989), and Wolfe and Friedman-Hill (1992). The model hereby provides an alternative to recent variations on the Feature Integration and Guided Search models, and grounds the analysis of visual search in neural models of preattentive vision, attentive object learning and categorization, and attentive spatial localization and orientation.
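The search cycle described above (organize items sharing a boundary or surface property into a candidate grouping, compare the grouping's members against object recognition categories, and move on when a grouping mismatches) can be sketched as follows. The item encoding and function names are illustrative, not the model's actual algorithm:

```python
from collections import defaultdict

def grouped_search(items, group_key, is_target):
    """Sketch of a grouping-based search loop: items sharing a property
    (group_key) form candidate groupings; each grouping's members are then
    matched against the target category, and a mismatching grouping hands
    off to the next candidate grouping."""
    groups = defaultdict(list)
    for item in items:
        groups[group_key(item)].append(item)   # preattentive grouping
    for members in groups.values():            # attentive comparison
        for item in members:
            if is_target(item):
                return item
    return None                                # no grouping matched

# Toy conjunction search: find the red X among green Xs and red Os,
# grouping candidate items by color.
items = [("green", "X"), ("green", "X"), ("red", "O"), ("red", "X")]
found = grouped_search(items, group_key=lambda it: it[0],
                       is_target=lambda it: it == ("red", "X"))
print(found)  # ('red', 'X')
```

Grouping first means comparisons run over a few candidate sets rather than every item independently, which is the intuition behind the model's efficiency on conjunction searches.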
Abstract:
How do visual form and motion processes cooperate to compute object motion when each process separately is insufficient? Consider, for example, a deer moving behind a bush. Here the partially occluded fragments of motion signals available to an observer must be coherently grouped into the motion of a single object. A 3D FORMOTION model comprises five important functional interactions involving the brain’s form and motion systems that address such situations. Because the model’s stages are analogous to areas of the primate visual system, we refer to the stages by corresponding anatomical names. In one of these functional interactions, 3D boundary representations, in which figures are separated from their backgrounds, are formed in cortical area V2. These depth-selective V2 boundaries select motion signals at the appropriate depths in MT via V2-to-MT signals. In another, motion signals in MT disambiguate locally incomplete or ambiguous boundary signals in V2 via MT-to-V1-to-V2 feedback. The third functional property concerns resolution of the aperture problem along straight moving contours by propagating the influence of unambiguous motion signals generated at contour terminators or corners. Here, sparse “feature tracking signals” from, e.g., line ends, are amplified to overwhelm numerically superior ambiguous motion signals along line segment interiors. In the fourth, a spatially anisotropic motion grouping process takes place across perceptual space via MT-MST feedback to integrate veridical feature-tracking and ambiguous motion signals to determine a global object motion percept. The fifth property uses the MT-MST feedback loop to convey an attentional priming signal from higher brain areas back to V1 and V2. The model's use of mechanisms such as divisive normalization, endstopping, cross-orientation inhibition, and long-range cooperation is described.
Simulated data include: the degree of motion coherence of rotating shapes observed through apertures, the coherent vs. element motion percepts separated in depth during the chopsticks illusion, and the rigid vs. non-rigid appearance of rotating ellipses.
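Of the mechanisms this abstract lists, divisive normalization is the most compact to illustrate. A minimal sketch, in which the parameter names and values are illustrative rather than taken from the model:

```python
import numpy as np

def divisive_normalization(drive, sigma=1.0, gain=1.0):
    """Each unit's response is its feedforward drive divided by a constant
    (sigma) plus the pooled activity of the whole population, so strong
    inputs are compressed while relative preferences are preserved."""
    drive = np.asarray(drive, dtype=float)
    return gain * drive / (sigma + drive.sum())

# Pooled division bounds total output while keeping the 1:1:2 ratio intact.
print(divisive_normalization([1.0, 1.0, 2.0]))  # [0.2 0.2 0.4]
```

In the model's context, this kind of pooled gain control is one way ambiguous motion signals along contour interiors can be kept from overwhelming the sparse feature-tracking signals that are amplified at terminators.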
Abstract:
Amnesia typically results from trauma to the medial temporal regions that coordinate activation among the disparate areas of cortex that represent the information that make up autobiographical memories. We proposed that amnesia should also result from damage to these regions, particularly regions that subserve long-term visual memory [Rubin, D. C., & Greenberg, D. L. (1998). Visual memory-deficit amnesia: A distinct amnesic presentation and etiology. Proceedings of the National Academy of Sciences of the USA, 95, 5413-5416]. We previously found 11 such cases in the literature, and all 11 had amnesia. We now present a detailed investigation of one of these patients. M.S. suffers from long-term visual memory loss along with some semantic deficits; he also manifests a severe retrograde amnesia and moderate anterograde amnesia. The presentation of his amnesia differs from that of the typical medial-temporal or lateral-temporal amnesic; we suggest that his visual deficits may be contributing to his autobiographical amnesia.
Abstract:
Successful interaction with the world depends on accurate perception of the timing of external events. Neurons at early stages of the primate visual system represent time-varying stimuli with high precision. However, it is unknown whether this temporal fidelity is maintained in the prefrontal cortex, where changes in neuronal activity generally correlate with changes in perception. One reason to suspect that it is not maintained is that humans experience surprisingly large fluctuations in the perception of time. To investigate the neuronal correlates of time perception, we recorded from neurons in the prefrontal cortex and midbrain of monkeys performing a temporal-discrimination task. Visual time intervals were presented at a timescale relevant to natural behavior (<500 ms). At this brief timescale, neuronal adaptation--time-dependent changes in the size of successive responses--occurs. We found that visual activity fluctuated with timing judgments in the prefrontal cortex but not in comparable midbrain areas. Surprisingly, only response strength, not timing, predicted task performance. Intervals perceived as longer were associated with larger visual responses and shorter intervals with smaller responses, matching the dynamics of adaptation. These results suggest that the magnitude of prefrontal activity may be read out to provide temporal information that contributes to judging the passage of time.
Abstract:
Humans are metacognitive: they monitor and control their cognition. Our hypothesis was that neuronal correlates of metacognition reside in the same brain areas responsible for cognition, including frontal cortex. Recent work demonstrated that nonhuman primates are capable of metacognition, so we recorded from single neurons in the frontal eye field, dorsolateral prefrontal cortex, and supplementary eye field of monkeys (Macaca mulatta) that performed a metacognitive visual-oculomotor task. The animals made a decision and reported it with a saccade, but received no immediate reward or feedback. Instead, they had to monitor their decision and bet whether it was correct. Activity was correlated with decisions and bets in all three brain areas, but putative metacognitive activity that linked decisions to appropriate bets occurred exclusively in the SEF. Our results offer a survey of neuronal correlates of metacognition and implicate the SEF in linking cognitive functions over short periods of time.