22 results for Bag-of-visual Words
at Duke University
Abstract:
When recalling autobiographical memories, individuals often experience visual images associated with the event. These images can be constructed from two different perspectives: first person, in which the event is visualized from the viewpoint experienced at encoding, or third person, in which the event is visualized from an external vantage point. Using a novel technique to measure visual perspective, we examined where the external vantage point is situated in third-person images. Individuals in two studies were asked to recall either 10 or 15 events from their lives and describe the perspectives they experienced. Wide variation in spatial locations was observed within third-person perspectives, with the location of these perspectives relating to the event being recalled. Results suggest remembering from an external viewpoint may be more common than previous studies have demonstrated.
Abstract:
Successful interaction with the world depends on accurate perception of the timing of external events. Neurons at early stages of the primate visual system represent time-varying stimuli with high precision. However, it is unknown whether this temporal fidelity is maintained in the prefrontal cortex, where changes in neuronal activity generally correlate with changes in perception. One reason to suspect that it is not maintained is that humans experience surprisingly large fluctuations in the perception of time. To investigate the neuronal correlates of time perception, we recorded from neurons in the prefrontal cortex and midbrain of monkeys performing a temporal-discrimination task. Visual time intervals were presented at a timescale relevant to natural behavior (<500 ms). At this brief timescale, neuronal adaptation (time-dependent changes in the size of successive responses) occurs. We found that visual activity fluctuated with timing judgments in the prefrontal cortex but not in comparable midbrain areas. Surprisingly, only response strength, not timing, predicted task performance. Intervals perceived as longer were associated with larger visual responses and shorter intervals with smaller responses, matching the dynamics of adaptation. These results suggest that the magnitude of prefrontal activity may be read out to provide temporal information that contributes to judging the passage of time.
Abstract:
Our percept of visual stability across saccadic eye movements may be mediated by presaccadic remapping. Just before a saccade, neurons that remap become visually responsive at a future field (FF), which anticipates the saccade vector. Hence, the neurons use corollary discharge of saccades. Many of the neurons also decrease their response at the receptive field (RF). Presaccadic remapping occurs in several brain areas including the frontal eye field (FEF), which receives corollary discharge of saccades in its layer IV from a collicular-thalamic pathway. We studied, at two levels, the microcircuitry of remapping in the FEF. At the laminar level, we compared remapping between layers IV and V. At the cellular level, we compared remapping between different neuron types of layer IV. In the FEF in four monkeys (Macaca mulatta), we identified 27 layer IV neurons with orthodromic stimulation and 57 layer V neurons with antidromic stimulation from the superior colliculus. With the use of established criteria, we classified the layer IV neurons as putative excitatory (n = 11), putative inhibitory (n = 12), or ambiguous (n = 4). We found that just before a saccade, putative excitatory neurons increased their visual response at the RF, putative inhibitory neurons showed no change, and ambiguous neurons increased their visual response at the FF. None of the neurons showed presaccadic visual changes at both RF and FF. In contrast, neurons in layer V showed full remapping (at both the RF and FF). Our data suggest that elemental signals for remapping are distributed across neuron types in early cortical processing and combined in later stages of cortical microcircuitry.
Abstract:
Neuronal receptive fields (RFs) provide the foundation for understanding systems-level sensory processing. In early visual areas, investigators have mapped RFs in detail using stochastic stimuli and sophisticated analytical approaches. Much less is known about RFs in prefrontal cortex. Visual stimuli used for mapping RFs in prefrontal cortex tend to cover a small range of spatial and temporal parameters, making it difficult to understand their role in visual processing. To address these shortcomings, we implemented a generalized linear model to measure the RFs of neurons in the macaque frontal eye field (FEF) in response to sparse, full-field stimuli. Our high-resolution, probabilistic approach tracked the evolution of RFs during passive fixation, and we validated our results against conventional measures. We found that FEF neurons exhibited a surprising level of sensitivity to stimuli presented as briefly as 10 ms or to multiple dots presented simultaneously, suggesting that FEF visual responses are more precise than previously appreciated. FEF RF spatial structures were largely maintained over time and between stimulus conditions. Our results demonstrate that the application of probabilistic RF mapping to FEF and similar association areas is an important tool for clarifying the neuronal mechanisms of cognition.
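The core of the approach described above, fitting a generalized linear model to sparse stimuli and spike counts, can be sketched in a few lines. The grid size, sparsity, filter weights, and fitting procedure below are illustrative assumptions, not the study's actual parameters or implementation; this is a minimal Poisson GLM recovered by Newton's method on simulated data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate sparse full-field dot stimuli on a flattened 5x5 grid
# (sizes are invented for illustration).
n_pix, n_frames = 25, 5000
X = (rng.random((n_frames, n_pix)) < 0.05).astype(float)   # ~5% dot occupancy
Xb = np.column_stack([X, np.ones(n_frames)])               # intercept column

# Hypothetical ground-truth spatial RF: two "hot" pixels plus a baseline term.
w_true = np.zeros(n_pix + 1)
w_true[12], w_true[7], w_true[-1] = 2.0, 1.0, -2.0
y = rng.poisson(np.exp(Xb @ w_true))                       # spike counts

# Fit the Poisson GLM by Newton's method; the Poisson log-likelihood is
# concave in the weights, so a handful of iterations suffices at this size.
w = np.zeros(n_pix + 1)
for _ in range(25):
    mu = np.exp(Xb @ w)                                    # predicted rate
    grad = Xb.T @ (y - mu)                                 # score vector
    hess = Xb.T @ (Xb * mu[:, None]) + 1e-6 * np.eye(n_pix + 1)
    w += np.linalg.solve(hess, grad)

rf = w[:n_pix]                                             # recovered RF map
print(rf.argmax())                                         # strongest pixel
```

Because the stimuli are sparse and uncorrelated, the fitted weights recover the simulated hot pixels; the same machinery extends to time-lagged design matrices for tracking RF dynamics.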
Abstract:
Many neurons in the frontal eye field (FEF) exhibit visual responses and are thought to play important roles in visuosaccadic behavior. The FEF, however, is far removed from striate cortex. Where do the FEF's visual signals come from? Usually they are reasonably assumed to enter the FEF through afferents from extrastriate cortex. Here we show that, surprisingly, visual signals also enter the FEF through a subcortical route: a disynaptic, ascending pathway originating in the intermediate layers of the superior colliculus (SC). We recorded from identified neurons at all three stages of this pathway (n=30-40 in each sample): FEF recipient neurons, orthodromically activated from the SC; mediodorsal thalamus (MD) relay neurons, antidromically activated from FEF and orthodromically activated from SC; and SC source neurons, antidromically activated from MD. We studied the neurons while monkeys performed delayed saccade tasks designed to temporally resolve visual responses from presaccadic discharges. We found, first, that most neurons at every stage in the pathway had visual responses, presaccadic bursts, or both. Second, we found marked similarities between the SC source neurons and MD relay neurons: in both samples, about 15% of the neurons had only a visual response, 10% had only a presaccadic burst, and 75% had both. In contrast, FEF recipient neurons tended to be more visual in nature: 50% had only a visual response, none had only a presaccadic burst, and 50% had both a visual response and a presaccadic burst. This suggests that in addition to their subcortical inputs, these FEF neurons also receive other visual inputs, e.g. from extrastriate cortex. We conclude that visual activity in the FEF results not only from cortical afferents but also from subcortical inputs. Intriguingly, this implies that some of the visual signals in FEF are pre-processed by the SC.
Abstract:
Young infants' learning of words for abstract concepts like 'all gone' and 'eat,' in contrast to their learning of more concrete words like 'apple' and 'shoe,' may follow a relatively protracted developmental course. We examined whether infants know such abstract words. Parents named one of two events shown in side-by-side videos while their 6-16-month-old infants (n=98) watched. On average, infants successfully looked at the named video by 10 months, but not earlier, and infants' looking at the named referent increased robustly at around 14 months. Six-month-olds already understand concrete words in this task (Bergelson & Swingley, 2012). A video-corpus analysis of unscripted mother-infant interaction showed that mothers used the tested abstract words less often in the presence of their referent events than they used concrete words in the presence of their referent objects. We suggest that referential uncertainty in abstract words' teaching conditions may explain the later acquisition of abstract than concrete words, and we discuss the possible role of changes in social-cognitive abilities over the 6-14 month period.
Abstract:
Visual Inspection with Acetic Acid (VIA) and Visual Inspection with Lugol’s Iodine (VILI) are increasingly recommended in various cervical cancer screening protocols in low-resource settings. Although VIA is more widely used, VILI has been advocated as an easier and more specific screening test. VILI has not been well-validated as a stand-alone screening test, compared to VIA, or validated for use in HIV-infected women. We carried out a randomized clinical trial to compare the diagnostic accuracy of VIA and VILI among HIV-infected women. Women attending the Family AIDS Care and Education Services (FACES) clinic in western Kenya were enrolled and randomized to undergo either VIA or VILI with colposcopy. Lesions suspicious for cervical intraepithelial neoplasia 2 or greater (CIN2+) were biopsied. Between October 2011 and June 2012, 654 women were randomized to undergo VIA or VILI. The test positivity rates were 26.2% for VIA and 30.6% for VILI (p = 0.22). The rate of detection of CIN2+ was 7.7% in the VIA arm and 11.5% in the VILI arm (p = 0.10). There was no significant difference in the diagnostic performance of VIA and VILI for the detection of CIN2+. Sensitivity and specificity were 84.0% and 78.6%, respectively, for VIA and 84.2% and 76.4% for VILI. The positive and negative predictive values were 24.7% and 98.3% for VIA, and 31.7% and 97.4% for VILI. Among women with CD4+ count < 350, VILI had a significantly decreased specificity (66.2%) compared to VIA in the same group (83.9%, p = 0.02) and compared to VILI performed among women with CD4+ count ≥ 350 (79.7%, p = 0.02). VIA and VILI had similar diagnostic accuracy and rates of CIN2+ detection among HIV-infected women.
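The accuracy measures quoted above all follow from a standard 2x2 screening table. As a sketch, the counts below are hypothetical, chosen only so the outputs roughly mirror the VIA arm's reported figures; they are not the trial's actual data, which the abstract does not give at the cell level.

```python
# Standard screening-test accuracy measures from a 2x2 table.
# tp/fp/fn/tn counts here are invented for illustration.
def diagnostic_metrics(tp, fp, fn, tn):
    return {
        "sensitivity": tp / (tp + fn),   # true positives among CIN2+ cases
        "specificity": tn / (tn + fp),   # true negatives among non-cases
        "ppv": tp / (tp + fp),           # positive predictive value
        "npv": tn / (tn + fn),           # negative predictive value
    }

m = diagnostic_metrics(tp=21, fp=64, fn=4, tn=235)
print({k: round(v, 3) for k, v in m.items()})
```

With these invented counts the function returns a sensitivity of 0.84 and specificity of about 0.786, matching the VIA values reported above; the low PPV alongside a high NPV is the expected pattern when disease prevalence is modest.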
Abstract:
Recent memories are generally recalled from a first-person perspective whereas older memories are often recalled from a third-person perspective. We investigated how repeated retrieval affects the availability of visual information, and whether it could explain the observed shift in perspective with time. In Experiment 1, participants performed mini-events and nominated memories of recent autobiographical events in response to cue words. Next, they described their memory for each event and rated its phenomenological characteristics. Over the following three weeks, they repeatedly retrieved half of the mini-event and cue-word memories. No instructions were given about how to retrieve the memories. In Experiment 2, participants were asked to adopt either a first- or third-person perspective during retrieval. One month later, participants retrieved all of the memories and again provided phenomenology ratings. When first-person visual details from the event were repeatedly retrieved, this information was retained better and the shift in perspective was slowed.
Abstract:
It is widely accepted that infants begin learning their native language not by learning words, but by discovering features of the speech signal: consonants, vowels, and combinations of these sounds. Learning to understand words, as opposed to just perceiving their sounds, is said to come later, between 9 and 15 mo of age, when infants develop a capacity for interpreting others' goals and intentions. Here, we demonstrate that this consensus about the developmental sequence of human language learning is flawed: in fact, infants already know the meanings of several common words from the age of 6 mo onward. We presented 6- to 9-mo-old infants with sets of pictures to view while their parent named a picture in each set. Over this entire age range, infants directed their gaze to the named pictures, indicating their understanding of spoken words. Because the words were not trained in the laboratory, the results show that even young infants learn ordinary words through daily experience with language. This surprising accomplishment indicates that, contrary to prevailing beliefs, either infants can already grasp the referential intentions of adults at 6 mo or infants can learn words before this ability emerges. The precocious discovery of word meanings suggests a perspective in which learning vocabulary and learning the sound structure of spoken language go hand in hand as language acquisition begins.
Abstract:
Recently, a number of investigators have examined the neural loci of psychological processes enabling the control of visual spatial attention using cued-attention paradigms in combination with event-related functional magnetic resonance imaging. Findings from these studies have provided strong evidence for the involvement of a fronto-parietal network in attentional control. In the present study, we build upon this previous work to further investigate these attentional control systems. In particular, we employed additional controls for nonattentional sensory and interpretative aspects of cue processing to determine whether distinct regions in the fronto-parietal network are involved in different aspects of cue processing, such as cue-symbol interpretation and attentional orienting. In addition, we used shorter cue-target intervals that were closer to those used in the behavioral and event-related potential cueing literatures. Twenty participants performed a cued spatial attention task while brain activity was recorded with functional magnetic resonance imaging. We found functional specialization for different aspects of cue processing in the lateral and medial subregions of the frontal and parietal cortex. In particular, the medial subregions were more specific to the orienting of visual spatial attention, while the lateral subregions were associated with more general aspects of cue processing, such as cue-symbol interpretation. Additional cue-related effects included differential activations in midline frontal regions and pretarget enhancements in the thalamus and early visual cortical areas.
Abstract:
We have isolated and sequenced a cDNA encoding the human beta 2-adrenergic receptor. The deduced amino acid sequence (413 residues) is that of a protein containing seven clusters of hydrophobic amino acids suggestive of membrane-spanning domains. While the protein is 87% identical overall with the previously cloned hamster beta 2-adrenergic receptor, the most highly conserved regions are the putative transmembrane helices (95% identical) and cytoplasmic loops (93% identical), suggesting that these regions of the molecule harbor important functional domains. Several of the transmembrane helices also share lesser degrees of identity with comparable regions of select members of the opsin family of visual pigments. We have localized the gene for the beta 2-adrenergic receptor to q31-q32 on chromosome 5. This is the same position recently determined for the gene encoding the receptor for platelet-derived growth factor and is adjacent to that for the FMS protooncogene, which encodes the receptor for the macrophage colony-stimulating factor.
Abstract:
Maps are a mainstay of visual, somatosensory, and motor coding in many species. However, auditory maps of space have not been reported in the primate brain. Instead, recent studies have suggested that sound location may be encoded via broadly responsive neurons whose firing rates vary roughly proportionately with sound azimuth. Within frontal space, maps and such rate codes involve different response patterns at the level of individual neurons. Maps consist of neurons exhibiting circumscribed receptive fields, whereas rate codes involve open-ended response patterns that peak in the periphery. This coding format discrepancy therefore poses a potential problem for brain regions responsible for representing both visual and auditory information. Here, we investigated the coding of auditory space in the primate superior colliculus (SC), a structure known to contain visual and oculomotor maps for guiding saccades. We report that, for visual stimuli, neurons showed circumscribed receptive fields consistent with a map, but for auditory stimuli, they had open-ended response patterns consistent with a rate or level-of-activity code for location. The discrepant response patterns were not segregated into different neural populations but occurred in the same neurons. We show that a read-out algorithm in which the site and level of SC activity both contribute to the computation of stimulus location is successful at evaluating the discrepant visual and auditory codes, and can account for subtle but systematic differences in the accuracy of auditory compared to visual saccades. This suggests that a given population of neurons can use different codes to support appropriate multimodal behavior.
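The abstract does not specify the read-out algorithm, but the general "site and level" idea can be illustrated with an activity-weighted centroid, in which both which neurons fire and how strongly they fire shape the location estimate. The tuning curves and parameters below are invented for illustration and are not the study's model.

```python
import numpy as np

pref = np.linspace(-40, 40, 17)   # hypothetical preferred azimuths (deg)

def visual_response(stim_deg):
    # Map-like code: circumscribed RFs modeled as narrow Gaussian tuning.
    return np.exp(-0.5 * ((stim_deg - pref) / 6.0) ** 2)

def auditory_response(stim_deg):
    # Rate-like code: open-ended, monotonic tuning saturating in the periphery.
    return 1.0 / (1.0 + np.exp(-(stim_deg - pref) / 15.0))

def read_out(activity):
    # Site-and-level read-out: centroid of preferred locations
    # weighted by each neuron's activity level.
    return float(np.sum(activity * pref) / np.sum(activity))

print(read_out(visual_response(10.0)))   # ~10 for a symmetric map-like response
```

For the map-like code the centroid recovers the stimulus azimuth directly; for the rate-like code it still shifts monotonically with azimuth, so a downstream stage could in principle evaluate both formats with one mechanism, which is the gist of the result described above.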
Abstract:
BACKGROUND: Few educational resources have been developed to inform patients' renal replacement therapy (RRT) selection decisions. Patients progressing toward end stage renal disease (ESRD) must decide among multiple treatment options with varying characteristics. Complex information about treatments must be adequately conveyed to patients with different educational backgrounds and informational needs. Decisions about treatment options also require family input, as families often participate in patients' treatment and support patients' decisions. We describe the development, design, and preliminary evaluation of an informational, evidence-based, and patient- and family-centered decision aid for patients with ESRD and varying levels of health literacy, health numeracy, and cognitive function. METHODS: We designed a decision aid comprising a complementary video and informational handbook. We based our development process on data previously obtained from qualitative focus groups and systematic literature reviews. We simultaneously developed the video and handbook in "stages." For the video, stages included (1) directed interviews with culturally appropriate patients and families and preliminary script development, (2) video production, and (3) screening the video with patients and their families. For the handbook, stages comprised (1) preliminary content design, (2) a mixed-methods pilot study among diverse patients to assess comprehension of handbook material, and (3) screening the handbook with patients and their families. RESULTS: The video and handbook both addressed potential benefits and trade-offs of treatment selections. The 50-minute video consisted of demographically diverse patients and their families describing their positive and negative experiences with selecting a treatment option. The video also incorporated health professionals' testimonials regarding various considerations that might influence patients' and families' treatment selections.
The handbook comprised written words, pictures of patients and health care providers, and diagrams describing the findings and quality of scientific studies comparing treatments. The handbook text was written at a 4th to 6th grade reading level. Pilot study results demonstrated that a majority of patients could understand information presented in the handbook. Patients and families screening the nearly completed video and handbook reviewed the materials favorably. CONCLUSIONS: This rigorously designed decision aid may help patients and families make informed decisions about their treatment options for RRT that are well aligned with their values.
Abstract:
Research on future episodic thought has produced compelling theories and results in cognitive psychology, cognitive neuroscience, and clinical psychology. In experiments aimed to integrate these with basic concepts and methods from autobiographical memory research, 76 undergraduates remembered past and imagined future positive and negative events that had or would have a major impact on them. Correlations of the online ratings of visual and auditory imagery, emotion, and other measures demonstrated that individuals used the same processes to the same extent to remember past and construct future events. These measures predicted the theoretically important metacognitive judgment of past reliving and future "preliving" in similar ways. On standardized tests of reactions to traumatic events, scores for future negative events were much higher than scores for past negative events. The scores for future negative events were in the range that would qualify for a diagnosis of posttraumatic stress disorder (PTSD); the test was replicated (n = 52) to check for order effects. Consistent with earlier work, future events had less sensory vividness. Thus, the imagined symptoms of future events were unlikely to be caused by sensory vividness. In a second experiment, to confirm this, 63 undergraduates produced numerous added details between 2 constructions of the same negative future events; deficits in rated vividness were removed with no increase in the standardized tests of reactions to traumatic events. Neuroticism predicted individuals' reactions to negative past events but did not predict imagined reactions to future events. This set of novel methods and findings is interpreted in the contexts of the literatures of episodic future thought, autobiographical memory, PTSD, and classic schema theory.
Abstract:
We investigated the effects of visual input at encoding and retrieval on the phenomenology of memory. In Experiment 1, participants took part in events with and without wearing blindfolds, and later were shown a video of the events. Both blindfolding and later viewing of the video tended to decrease recollection. In Experiment 2, participants were played videos, with and without the visual component, of events involving other people. Events listened to without visual input were recalled with less recollection; adding the visual component later increased recollection. In Experiment 3, participants were provided with progressively more information about events that they had experienced, either in the form of photographs that they had taken of the events or narrative descriptions of those photographs. In comparison with manipulations at encoding, the addition of more visual or narrative cues at recall had similar but smaller effects on recollection.