4 results for Special effects
in Boston University Digital Common
Abstract:
The perception of a glossy surface in a static monochromatic image can occur when a bright highlight is embedded in a compatible context of shading and a bounding contour. Some images naturally give rise to the impression that a surface has a uniform reflectance, characteristic of a shiny object, even though the highlight may cover only a small portion of the surface. Nonetheless, an observer may adopt an attitude of scrutiny in viewing a glossy surface, whereby the impression of gloss is partial and nonuniform at image regions outside of a highlight. Using a rating scale and small probe points to indicate image locations, the present study investigates differential perception of gloss within a single object. Observers' gloss ratings are not uniform across the surface, but decrease as a function of distance from the highlight. When, by design, the distance from a highlight is uncoupled from the luminance value at corresponding probe points, the decrease in rated gloss correlates more with the distance than with the luminance change. Experiments also indicate that gloss ratings change as a function of estimated surface distance rather than as a function of image distance. Surface continuity affects gloss ratings, suggesting that apprehension of 3D surface structure is crucial for gloss perception.
Abstract:
The giant cholinergic interneurons of the striatum are tonically active neurons (TANs) that respond with characteristic pauses to novel events and to appetitive and aversive conditioned stimuli. Fluctuations in acetylcholine release by TANs modulate performance- and learning-related dynamics in the striatum. Whereas tonic activity emerges from intrinsic properties of these neurons, glutamatergic inputs from thalamic centromedian-parafascicular nuclei, and dopaminergic inputs from midbrain, are required for the generation of pause responses. No prior computational models encompass both intrinsic and synaptically gated dynamics. We present a mathematical model that robustly accounts for behavior-related electrophysiological properties of TANs in terms of their intrinsic physiological properties and known afferents. In the model, balanced intrinsic hyperpolarizing and depolarizing currents engender tonic firing, and glutamatergic inputs from thalamus (and cortex) both directly excite and indirectly inhibit TANs. If the latter inhibition, presumably mediated by GABAergic interneurons, exceeds a threshold, its effect is amplified by a KIR current to generate a prolonged pause. In the model, the intrinsic mechanisms and external inputs are both modulated by learning-dependent dopamine (DA) signals, and our simulations reveal that many learning-dependent behaviors of TANs are explicable without recourse to learning-dependent changes in synapses onto TANs. The "teaching signal" that modulates reinforcement learning at cortico-striatal synapses may be a sequence composed of an adaptively scaled DA burst, a brief ACh burst, and a scaled ACh pause. Such an interpretation is consistent with recent data on cholinergic control of LTD of cortical synapses onto striatal spiny projection neurons.
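The pause mechanism described in this abstract can be caricatured as a toy rate model: balanced intrinsic drive sets a tonic baseline, and an inhibitory pulse that crosses a threshold is amplified by a KIR-like gain, deepening and prolonging the pause. All parameter values, the pulse shape, and the `kir_gain` factor below are illustrative assumptions, not the paper's fitted model.

```python
import numpy as np

def simulate_tan(t_max=2.0, dt=1e-3, inhib_onset=1.0, inhib_amp=0.0,
                 inhib_thresh=0.5, kir_gain=4.0, tau=0.05):
    """Toy rate sketch of a TAN. Balanced intrinsic currents set a tonic
    baseline firing rate; supra-threshold inhibition is amplified by a
    KIR-like gain, producing a deeper, longer pause. Illustrative only."""
    n = int(t_max / dt)
    rates = np.empty(n)
    r = 5.0                      # tonic baseline rate (Hz), illustrative
    for i in range(n):
        t = i * dt
        # brief feed-forward inhibitory pulse (e.g. GABAergic interneurons)
        inh = inhib_amp if inhib_onset <= t < inhib_onset + 0.05 else 0.0
        if inh > inhib_thresh:   # supra-threshold inhibition engages KIR
            inh *= kir_gain
        drive = 5.0 - inh        # balanced intrinsic drive minus inhibition
        r += (dt / tau) * (max(drive, 0.0) - r)
        rates[i] = r
    return rates
```

Comparing a supra-threshold pulse (`inhib_amp=1.0`) with a sub-threshold one (`inhib_amp=0.4`) shows the nonlinear amplification: only the former produces a pronounced dip in the firing rate.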
Abstract:
We present a neural network that adapts and integrates several preexisting or new modules to categorize events in short-term memory (STM), encode temporal order in working memory, and evaluate timing and probability context in medium- and long-term memory. The model shows how processed contextual information modulates event recognition and categorization, focal attention, and incentive motivation. The model is based on a compendium of Event-Related Potentials (ERPs) and behavioral results, either collected by the authors or compiled from the classical ERP literature. Its hallmark is, at the functional level, the interplay of memory registers endowed with widely different dynamical ranges and, at the structural level, the attempt to relate the different modules to known anatomical structures.
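The interplay of memory registers with widely different dynamical ranges can be illustrated with leaky integrators driven by a shared event stream: a fast register tracks events transiently (STM-like) while a slow one accumulates and retains them (long-term-context-like). The time constants and the integrator form below are illustrative assumptions, not the paper's modules.

```python
import numpy as np

def leaky_registers(events, dt=0.01, taus=(0.05, 1.0, 30.0)):
    """Drive leaky integrators with widely different time constants
    (fast ~ STM, intermediate ~ working memory, slow ~ long-term context)
    with the same event stream. Values are illustrative, not fitted."""
    taus = np.array(taus, dtype=float)
    state = np.zeros(len(taus))
    traces = []
    for e in events:
        state = state + (dt / taus) * (e - state)  # leaky integration
        traces.append(state.copy())
    return np.array(traces)                        # shape (T, len(taus))
```

For a brief input pulse, the fast register rises and decays quickly, while the slow register responds weakly but retains its trace long after the event, which is the kind of dynamic-range separation the abstract emphasizes.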
Abstract:
This paper studies several applications of genetic algorithms (GAs) within the neural networks field. After a robust GA engine was built, the system was used to generate neural network circuit architectures. This was accomplished by using the GA to determine the weights in a fully interconnected network. The importance of the internal genetic representation was shown by testing different approaches. The effects on optimization speed of varying the constraints imposed upon the desired network were also studied. It was observed that relatively loose constraints provided results comparable to a fully constrained system. The neural network circuits generated were recurrent competitive fields, as described by Grossberg (1982).
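As a rough illustration of the weight-determination step, the sketch below runs a minimal GA over real-valued weight vectors for a tiny one-layer network. The encoding, fitness function, and operators (truncation selection, Gaussian mutation, elitism) are placeholders chosen for brevity, not the paper's representation; the paper's point that the internal genetic representation matters is precisely why this choice is nontrivial.

```python
import numpy as np

rng = np.random.default_rng(0)

def fitness(w, x, y):
    """Negative mean squared error of a one-layer tanh net with weights w."""
    return -np.mean((np.tanh(x @ w) - y) ** 2)

def evolve_weights(x, y, n_weights, pop_size=40, n_gen=60, sigma=0.3):
    """Toy GA: truncation selection over real-valued weight vectors with
    Gaussian mutation and elitism. Illustrative operators and encoding."""
    pop = rng.normal(0.0, 1.0, (pop_size, n_weights))
    for _ in range(n_gen):
        scores = np.array([fitness(w, x, y) for w in pop])
        elite = pop[np.argsort(scores)[-pop_size // 4:]]   # keep top quarter
        parents = elite[rng.integers(0, len(elite), pop_size)]
        pop = parents + rng.normal(0.0, sigma, parents.shape)  # mutate
        pop[0] = elite[-1]   # elitism: carry the best forward unchanged
    scores = np.array([fitness(w, x, y) for w in pop])
    return pop[np.argmax(scores)], float(scores.max())
```

On a small regression target the evolved weights drive the error close to zero; in the paper's setting the same loop would instead evaluate candidate weight sets for a recurrent competitive field under the chosen architectural constraints.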