8 results for nativity scene

at Duke University


Relevance: 10.00%

Abstract:

We apply a coded aperture snapshot spectral imager (CASSI) to fluorescence microscopy. CASSI records a two-dimensional (2D) spectrally filtered projection of a three-dimensional (3D) spectral data cube. We minimize a convex quadratic function with total variation (TV) constraints for data cube estimation from the 2D snapshot. We adapt the TV minimization algorithm for direct fluorescent bead identification from CASSI measurements by combining a priori knowledge of the spectra associated with each bead type. Our proposed method creates a 2D bead identity image. Simulated fluorescence CASSI measurements are used to evaluate the behavior of the algorithm. We also record real CASSI measurements of a fluorescence scene containing ten bead types and create a 2D bead identity map. A baseline image from a filtered-array imaging system verifies CASSI's 2D bead identity map.
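The reconstruction step can be sketched as follows. This is a minimal TV-penalized least-squares solver, not the authors' constrained quadratic program: the random binary mask, the one-pixel-per-band spectral shear, and all parameter values are illustrative assumptions.

```python
import numpy as np

def cassi_forward(cube, mask):
    """Code each band with the aperture mask, shear by one pixel per band,
    and integrate over wavelength onto the 2D detector."""
    H, W, L = cube.shape
    y = np.zeros((H, W + L - 1))
    for l in range(L):
        y[:, l:l + W] += mask * cube[:, :, l]
    return y

def cassi_adjoint(y, mask, L):
    """Adjoint of the forward operator: un-shear and re-apply the mask."""
    H, W = mask.shape
    cube = np.empty((H, W, L))
    for l in range(L):
        cube[:, :, l] = mask * y[:, l:l + W]
    return cube

def tv_subgrad(x):
    """Subgradient of anisotropic total variation within each band."""
    g = np.zeros_like(x)
    sx = np.sign(np.diff(x, axis=1))
    sy = np.sign(np.diff(x, axis=0))
    g[:, :-1, :] -= sx; g[:, 1:, :] += sx
    g[:-1, :, :] -= sy; g[1:, :, :] += sy
    return g

def reconstruct(y, mask, L, lam=0.05, step=0.2, iters=500):
    """TV-penalized least squares via projected (sub)gradient descent."""
    x = cassi_adjoint(y, mask, L)                  # adjoint initialization
    for _ in range(iters):
        r = cassi_forward(x, mask) - y             # data-fit residual
        x -= step * (cassi_adjoint(r, mask, L) + lam * tv_subgrad(x))
        np.clip(x, 0.0, None, out=x)               # fluorescence is nonnegative
    return x

# Toy example: random binary mask, smooth 3-band cube
rng = np.random.default_rng(0)
mask = (rng.random((32, 32)) > 0.5).astype(float)
cube = np.stack([np.full((32, 32), v) for v in (0.2, 0.7, 0.4)], axis=-1)
snapshot = cassi_forward(cube, mask)
estimate = reconstruct(snapshot, mask, L=3)
```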

Relevance: 10.00%

Abstract:

Although people do not normally try to remember associations between faces and physical contexts, these associations are established automatically, as indicated by the difficulty of recognizing familiar faces in different contexts ("butcher-on-the-bus" phenomenon). The present fMRI study investigated the automatic binding of faces and scenes. In the face-face (F-F) condition, faces were presented alone during both encoding and retrieval, whereas in the face/scene-face (FS-F) condition, they were presented overlaid on scenes during encoding but alone during retrieval (context change). Although participants were instructed to focus only on the faces during both encoding and retrieval, recognition performance was worse in the FS-F than in the F-F condition ("context shift decrement" [CSD]), confirming automatic face-scene binding during encoding. This binding was mediated by the hippocampus as indicated by greater subsequent memory effects (remembered > forgotten) in this region for the FS-F than the F-F condition. Scene memory was mediated by right parahippocampal cortex, which was reactivated during successful retrieval when the faces were associated with a scene during encoding (FS-F condition). Analyses using the CSD as a regressor yielded a clear hemispheric asymmetry in medial temporal lobe activity during encoding: Left hippocampal and parahippocampal activity was associated with a smaller CSD, indicating more flexible memory representations immune to context changes, whereas right hippocampal/rhinal activity was associated with a larger CSD, indicating less flexible representations sensitive to context change. Taken together, the results clarify the neural mechanisms of context effects on face recognition.

Relevance: 10.00%

Abstract:

This paper introduces the concept of adaptive temporal compressive sensing (CS) for video. We propose a CS algorithm that adapts the compression ratio based on the scene's temporal complexity, computed from the compressed data, without compromising the quality of the reconstructed video. The temporal adaptivity is achieved by manipulating the integration time of the camera, opening the possibility of real-time implementation. The proposed algorithm is a generalized temporal CS approach that can be incorporated into a diverse set of existing hardware systems.
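The adaptation loop the abstract describes can be sketched as follows. The complexity proxy (mean absolute difference between consecutive coded snapshots) and the thresholds are illustrative assumptions, not the paper's metric; the point is that the compression ratio is driven by the compressed data alone, with no intermediate reconstruction.

```python
import numpy as np

def temporal_complexity(y_prev, y_curr):
    """Motion proxy computed directly from consecutive coded snapshots."""
    return np.mean(np.abs(y_curr - y_prev)) / (np.mean(np.abs(y_prev)) + 1e-8)

def adapt_integration(n_frames, c, low=0.05, high=0.20, n_min=4, n_max=32):
    """Integrate fewer frames per snapshot (lower compression ratio) when the
    scene moves quickly; integrate more when it is calm."""
    if c > high:
        return max(n_min, n_frames // 2)
    if c < low:
        return min(n_max, n_frames * 2)
    return n_frames

# Toy loop: noisy snapshots stand in for the camera's coded measurements
rng = np.random.default_rng(1)
n_frames, y_prev = 16, rng.random((64, 64))
for _ in range(10):
    y_curr = y_prev + rng.normal(0.0, 0.05, (64, 64))  # slowly changing scene
    n_frames = adapt_integration(n_frames, temporal_complexity(y_prev, y_curr))
    y_prev = y_curr
print(f"frames per snapshot after adaptation: {n_frames}")
```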

Relevance: 10.00%

Abstract:

An event memory is a mental construction of a scene recalled as a single occurrence. It therefore requires the hippocampus and ventral visual stream needed for all scene construction. The construction need not come with a sense of reliving or be made by a participant in the event, and it can be a summary of occurrences from more than one encoding. The mental construction, or physical rendering, of any scene must be done from a specific location and time; this introduces a "self" located in space and time, which is a necessary, but need not be a sufficient, condition for a sense of reliving. We base our theory on scene construction rather than reliving because this allows the integration of many literatures and because there is more accumulated knowledge about scene construction's phenomenology, behavior, and neural basis. Event memory differs from episodic memory in that it does not conflate the independent dimensions of whether or not a memory is relived, is about the self, is recalled voluntarily, or is based on a single encoding with whether it is recalled as a single occurrence of a scene. Thus, we argue that event memory provides a clearer contrast to semantic memory, which also can be about the self, be recalled voluntarily, and be from a unique encoding; allows for a more comprehensive dimensional account of the structure of explicit memory; and better accounts for laboratory and real-world behavioral and neural results, including those from neuropsychology and neuroimaging, than does episodic memory.

Relevance: 10.00%

Abstract:

Autobiographical memories may be recalled from two different perspectives: field memories, in which the person seems to remember the scene from his/her original point of view, and observer memories, in which the rememberer sees him/herself in the memory image. Here, 122 undergraduates participated in an experiment examining the relation between field and observer perspectives in memories of 10 different emotional states, including both positive and negative emotions and emotions of high and low intensity. Observer perspective was associated with reduced sensory and emotional reliving across all emotions. This effect was observed for naturally occurring memory perspective and when participants were instructed to change their perspective from field to observer, but not when participants were instructed to change perspective from observer to field.

Relevance: 10.00%

Abstract:

Individuals with autistic spectrum disorders exhibit distinct personality traits linked to attentional, social, and affective functions, and those traits are expressed with varying levels of severity in the neurotypical and subclinical population. Variation in autistic traits has been linked to reduced functional and structural connectivity (i.e., underconnectivity, or reduced synchrony) with neural networks modulated by attentional, social, and affective functions. Yet, it remains unclear whether reduced synchrony between these neural networks contributes to autistic traits. To investigate this issue, we used functional magnetic resonance imaging to record brain activation while neurotypical participants who varied in their subclinical scores on the Autism-Spectrum Quotient (AQ) viewed alternating blocks of social and nonsocial stimuli (i.e., images of faces and of landscape scenes). We used independent component analysis (ICA) combined with a spatiotemporal regression to quantify synchrony between neural networks. Our results indicated that decreased synchrony between the executive control network (ECN) and a face-scene network (FSN) predicted higher scores on the AQ. This relationship was not explained by individual differences in head motion, preferences for faces, or personality variables related to social cognition. Our findings build on clinical reports by demonstrating that reduced synchrony between distinct neural networks contributes to a range of subclinical autistic traits.
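The ICA-plus-spatiotemporal-regression pipeline can be sketched on simulated data. The data shapes, the dual-regression formulation, and the component indices chosen to stand in for the ECN and FSN are all assumptions for illustration, not the study's actual analysis parameters.

```python
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
true_maps = rng.laplace(size=(10, 500))            # sparse-ish spatial networks
true_tcs = rng.standard_normal((200, 10))          # network time courses
group_data = true_tcs @ true_maps + 0.1 * rng.standard_normal((200, 500))

# Step 1: spatial ICA on the group data (voxels as samples) yields network maps
ica = FastICA(n_components=10, random_state=0, max_iter=1000)
spatial_maps = ica.fit_transform(group_data.T).T   # components x voxels

# Step 2: spatiotemporal (dual) regression of one subject's scan onto the maps,
# giving that subject's time course for each network
subject = true_tcs @ true_maps + 0.1 * rng.standard_normal((200, 500))
tc, *_ = np.linalg.lstsq(spatial_maps.T, subject.T, rcond=None)
tc = tc.T                                          # time x components

# Step 3: synchrony between two networks (placeholder ECN and FSN indices)
ecn, fsn = tc[:, 0], tc[:, 1]
synchrony = np.corrcoef(ecn, fsn)[0, 1]
print(f"ECN-FSN synchrony: {synchrony:.3f}")
```

In the study's framing, a per-participant synchrony value like this would then be related to AQ scores across participants.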

Relevance: 10.00%

Abstract:

The image on the retina may move because the eyes move, or because something in the visual scene moves. The brain is not fooled by this ambiguity. Even as we make saccades, we are able to detect whether visual objects remain stable or move. Here we test whether this ability to assess visual stability across saccades is present at the single-neuron level in the frontal eye field (FEF), an area that receives both visual input and information about imminent saccades. Our hypothesis was that neurons in the FEF report whether a visual stimulus remains stable or moves as a saccade is made. Monkeys made saccades in the presence of a visual stimulus outside of the receptive field. In some trials, the stimulus remained stable, but in other trials, it moved during the saccade. In every trial, the stimulus occupied the center of the receptive field after the saccade, thus evoking a reafferent visual response. We found that many FEF neurons signaled, in the strength and timing of their reafferent response, whether the stimulus had remained stable or moved. Reafferent responses were tuned for the amount of stimulus translation, and, in accordance with human psychophysics, tuning was better (more prevalent, stronger, and quicker) for stimuli that moved perpendicular, rather than parallel, to the saccade. Tuning was sometimes present as well for nonspatial transaccadic changes (in color, size, or both). Our results indicate that FEF neurons evaluate visual stability during saccades and may be general purpose detectors of transaccadic visual change.

Relevance: 10.00%

Abstract:

As we look around a scene, we perceive it as continuous and stable even though each saccadic eye movement changes the visual input to the retinas. How the brain achieves this perceptual stabilization is unknown, but a major hypothesis is that it relies on presaccadic remapping, a process in which neurons shift their visual sensitivity to a new location in the scene just before each saccade. This hypothesis is difficult to test in vivo because complete, selective inactivation of remapping is currently intractable. We tested it in silico with a hierarchical, sheet-based neural network model of the visual and oculomotor system. The model generated saccadic commands to move a video camera abruptly. Visual input from the camera and internal copies of the saccadic movement commands, or corollary discharge, converged at a map-level simulation of the frontal eye field (FEF), a primate brain area known to receive such inputs. FEF output was combined with eye position signals to yield a suitable coordinate frame for guiding arm movements of a robot. Our operational definition of perceptual stability was "useful stability," quantified as continuously accurate pointing to a visual object despite camera saccades. During training, the emergence of useful stability was correlated tightly with the emergence of presaccadic remapping in the FEF. Remapping depended on corollary discharge but its timing was synchronized to the updating of eye position. When coupled to predictive eye position signals, remapping served to stabilize the target representation for continuously accurate pointing. Graded inactivations of pathways in the model replicated, and helped to interpret, previous in vivo experiments. The results support the hypothesis that visual stability requires presaccadic remapping, provide explanations for the function and timing of remapping, and offer testable hypotheses for in vivo studies. We conclude that remapping allows for seamless coordinate frame transformations and quick actions despite visual afferent lags. With visual remapping in place for behavior, it may be exploited for perceptual continuity.
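The remapping computation at the heart of this model can be illustrated with a toy retinotopic map: a corollary discharge of the imminent saccade shifts the activity pattern so that, in a stable world, the prediction matches the post-saccadic input. The grid size, the sign convention, and the use of a simple array shift in place of the paper's sheet-based network are assumptions.

```python
import numpy as np

def remap(activity, saccade):
    """Presaccadic remapping: shift the retinotopic activity map by the
    corollary discharge of the imminent saccade. A stable object's retinal
    position moves opposite to the eye, hence the negative shift."""
    return np.roll(activity, shift=(-saccade[0], -saccade[1]), axis=(0, 1))

# Toy retinotopic map with a single visual target
fov = np.zeros((64, 64))
fov[40, 20] = 1.0

saccade = (10, -5)                   # imminent eye movement, in map cells
predicted = remap(fov, saccade)      # internal prediction, made presaccadically

# After the saccade, a stable world yields input shifted by the same amount
post_saccadic_input = np.roll(fov, shift=(-saccade[0], -saccade[1]), axis=(0, 1))

# Prediction matches input, so the target is judged stable ("useful stability")
assert np.allclose(predicted, post_saccadic_input)
```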