86 results for Image Processing, Visual Prostheses, Visual Information, Artificial Human Vision, Visual Perception
Abstract:
OBJECTIVE: The aim of this article was to apply psychometric theory to develop and validate a visual grading scale for assessing the perceived quality of digital anteroposterior (AP) pelvis images. METHODS: Psychometric theory was used to guide scale development. Seven phantom and seven cadaver images of visually and objectively predetermined quality were used to help assess scale reliability and validity. A total of 151 volunteers scored the phantom images, and 184 volunteers scored the cadaver images. Factor analysis and Cronbach's alpha were used to assess scale validity and reliability. RESULTS: A 24-item scale was produced. Aggregated mean volunteer scores for each image correlated with the rank order of the visually and objectively predetermined image qualities. Scale items had good interitem correlation (≥0.2) and high factor loadings (≥0.3). Cronbach's alpha (reliability) revealed that the scale has acceptable internal reliability for both phantom and cadaver images (α = 0.8 and 0.9, respectively). Factor analysis suggested that the scale is multidimensional (assessing multiple quality themes). CONCLUSION: This study represents the first full development and validation of a visual image quality scale using psychometric theory. The scale is likely to have clinical, training and research applications. ADVANCES IN KNOWLEDGE: This article presents data to create and validate visual grading scales for radiographic examinations. The visual grading scale for AP pelvis examinations can act as a validated tool for future research, teaching and clinical evaluations of image quality.
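The reliability statistic reported above follows a standard formula. As an illustrative sketch only (the function name and toy score matrix are assumptions, not the authors' analysis pipeline), Cronbach's alpha for a respondents-by-items score matrix can be computed as:

```python
from statistics import variance  # sample variance (n - 1 denominator)

def cronbach_alpha(scores):
    """Cronbach's alpha for a list of respondents' item-score rows.

    alpha = k/(k-1) * (1 - sum(item variances) / variance(summed scores))
    where k is the number of scale items.
    """
    k = len(scores[0])                                   # number of items
    item_vars = [variance(col) for col in zip(*scores)]  # per-item variance
    total_var = variance([sum(row) for row in scores])   # variance of sum scores
    return (k / (k - 1)) * (1 - sum(item_vars) / total_var)

# Perfectly consistent items yield alpha = 1.0
perfect = [[1, 1], [2, 2], [3, 3], [4, 4]]
print(cronbach_alpha(perfect))  # 1.0
```

Values around 0.8-0.9, as reported for the phantom and cadaver images, are conventionally taken as acceptable-to-good internal consistency.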
Abstract:
Past multisensory experiences can influence current unisensory processing and memory performance. Repeated images are better discriminated if initially presented as auditory-visual pairs, rather than only visually. An experience's context thus plays a role in how well repetitions of certain aspects are later recognized. Here, we investigated factors during the initial multisensory experience that are essential for generating improved memory performance. Subjects discriminated repeated versus initial image presentations intermixed within a continuous recognition task. Half of initial presentations were multisensory, and all repetitions were only visual. Experiment 1 examined whether purely episodic multisensory information suffices for enhancing later discrimination performance by pairing visual objects with either tones or vibrations. We could therefore also assess whether effects can be elicited with different sensory pairings. Experiment 2 examined semantic context by manipulating the congruence between auditory and visual object stimuli within blocks of trials. Relative to images only encountered visually, accuracy in discriminating image repetitions was significantly impaired by auditory-visual, yet unaffected by somatosensory-visual multisensory memory traces. By contrast, this accuracy was selectively enhanced for visual stimuli with semantically congruent multisensory pasts and unchanged for those with semantically incongruent multisensory pasts. The collective results reveal opposing effects of purely episodic versus semantic information from auditory-visual multisensory events. Nonetheless, both types of multisensory memory traces are accessible for processing incoming stimuli and indeed result in distinct visual object processing, leading to either impaired or enhanced performance relative to unisensory memory traces. We discuss these results as supporting a model of object-based multisensory interactions.
Abstract:
An object's motion relative to an observer can confer ethologically meaningful information. Approaching or looming stimuli can signal threats/collisions to be avoided or prey to be confronted, whereas receding stimuli can signal successful escape or failed pursuit. Using movement detection and subjective ratings, we investigated the multisensory integration of looming and receding auditory and visual information by humans. While prior research has demonstrated a perceptual bias for unisensory and more recently multisensory looming stimuli, none has investigated whether there is integration of looming signals between modalities. Our findings reveal selective integration of multisensory looming stimuli. Performance was significantly enhanced for looming stimuli over all other multisensory conditions. Contrasts with static multisensory conditions indicate that only multisensory looming stimuli resulted in facilitation beyond that induced by the sheer presence of auditory-visual stimuli. Controlling for variation in physical energy replicated the advantage for multisensory looming stimuli. Finally, only looming stimuli exhibited a negative linear relationship between enhancement indices for detection speed and for subjective ratings. Maximal detection speed was attained when motion perception was already robust under unisensory conditions. The preferential integration of multisensory looming stimuli highlights that complex ethologically salient stimuli likely require synergistic cooperation between existing principles of multisensory integration. A new conceptualization of the neurophysiologic mechanisms mediating real-world multisensory perceptions and action is therefore supported.
Abstract:
Evaluation of segmentation methods is a crucial aspect of image processing, especially in medical imaging, where small differences between segmented anatomical regions can be of paramount importance. Usually, segmentation evaluation is based on a measure that depends on the number of segmented voxels inside and outside of reference regions called gold standards. Although other measures have also been used, in this work we propose a set of new similarity measures based on different features, such as the location and intensity values of the misclassified voxels, and the connectivity and boundaries of the segmented data. Using the multidimensional information provided by these measures, we propose a new evaluation method whose results are visualized by applying a Principal Component Analysis of the data, yielding a simplified graphical method for comparing different segmentation results. We carried out an intensive study using several classic segmentation methods applied to simulated MRI data of the brain with several noise and RF inhomogeneity levels, as well as to real data. The results show that the new measures proposed here, together with the multidimensional evaluation, improve the robustness of the evaluation and provide a better understanding of the differences between segmentation methods.
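The voxel-count overlap baseline that the new measures extend is conventionally expressed as a Dice coefficient against the gold standard. A minimal sketch (illustrative only; the function name and toy masks are assumptions, not the authors' code):

```python
def dice_coefficient(seg, gold):
    """Dice overlap between a segmentation and a gold-standard mask.

    `seg` and `gold` are equal-length sequences of 0/1 voxel labels
    (a flattened binary mask); 1.0 means perfect agreement.
    """
    inter = sum(s and g for s, g in zip(seg, gold))  # true-positive voxels
    size = sum(seg) + sum(gold)                      # total foreground voxels
    return 2 * inter / size if size else 1.0         # empty masks agree trivially

seg  = [1, 1, 0, 0, 1, 0]
gold = [1, 0, 0, 0, 1, 1]
print(dice_coefficient(seg, gold))  # 2*2/(3+3) = 0.666...
```

Because such counts ignore where the misclassified voxels lie, measures incorporating location, intensity, connectivity, and boundary information, as proposed above, can discriminate between segmentations that share the same raw overlap.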
Abstract:
Current models of brain organization include multisensory interactions at early processing stages and within low-level, including primary, cortices. Embracing this model with regard to auditory-visual (AV) interactions in humans remains problematic. Controversy surrounds the application of an additive model to the analysis of event-related potentials (ERPs), and conventional ERP analysis methods have yielded discordant latencies of effects and permitted limited neurophysiologic interpretability. While hemodynamic imaging and transcranial magnetic stimulation studies provide general support for the above model, the precise timing, superadditive/subadditive directionality, topographic stability, and sources remain unresolved. We recorded ERPs in humans to attended, but task-irrelevant stimuli that did not require an overt motor response, thereby circumventing paradigmatic caveats. We applied novel ERP signal analysis methods to provide details concerning the likely bases of AV interactions. First, nonlinear interactions occur at 60-95 ms after stimulus and are the consequence of topographic, rather than pure strength, modulations in the ERP. AV stimuli engage distinct configurations of intracranial generators, rather than simply modulating the amplitude of unisensory responses. Second, source estimations (and statistical analyses thereof) identified primary visual, primary auditory, and posterior superior temporal regions as mediating these effects. Finally, scalar values of current densities in all of these regions exhibited functionally coupled, subadditive nonlinear effects, a pattern increasingly consistent with the mounting evidence in nonhuman primates. In these ways, we demonstrate how neurophysiologic bases of multisensory interactions can be noninvasively identified in humans, allowing for a synthesis across imaging methods on the one hand and species on the other.
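The additive model debated above compares the multisensory response against the sum of the unisensory responses; a nonzero residual indicates super- or sub-additive interaction. A minimal sketch with toy ERP amplitudes (the function name and values are hypothetical, not the study's data):

```python
def additive_model_residual(av, a, v):
    """Nonlinear multisensory interaction under the additive model.

    residual(t) = AV(t) - [A(t) + V(t)] at each time point; positive values
    are superadditive, negative values subadditive, zero is purely additive.
    """
    return [av_t - (a_t + v_t) for av_t, a_t, v_t in zip(av, a, v)]

# Toy ERP amplitudes at three post-stimulus time points (arbitrary units)
av = [1.0, 2.5, 3.0]  # auditory-visual condition
a  = [0.5, 1.0, 1.5]  # auditory alone
v  = [0.5, 1.0, 1.0]  # visual alone
print(additive_model_residual(av, a, v))  # [0.0, 0.5, 0.5]
```

The study's point is that such residuals can reflect topographic changes (different generator configurations) rather than pure strength modulation, which is why the authors pair the additive comparison with topographic and source analyses.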
Abstract:
Repetition of environmental sounds, like their visual counterparts, can facilitate behavior and modulate neural responses, exemplifying plasticity in how auditory objects are represented or accessed. It remains controversial whether such repetition priming/suppression involves solely plasticity based on acoustic features and/or also access to semantic features. To evaluate contributions of physical and semantic features in eliciting repetition-induced plasticity, the present functional magnetic resonance imaging (fMRI) study repeated either identical or different exemplars of the initially presented object; reasoning that identical exemplars share both physical and semantic features, whereas different exemplars share only semantic features. Participants performed a living/man-made categorization task while being scanned at 3T. Repeated stimuli of both types significantly facilitated reaction times versus initial presentations, demonstrating perceptual and semantic repetition priming. There was also repetition suppression of fMRI activity within overlapping temporal, premotor, and prefrontal regions of the auditory "what" pathway. Importantly, the magnitude of suppression effects was equivalent for both physically identical and semantically related exemplars. That the degree of repetition suppression was irrespective of whether or not both perceptual and semantic information was repeated is suggestive of a degree of acoustically independent semantic analysis in how object representations are maintained and retrieved.
Abstract:
Whether different brain networks are involved in generating unimanual responses to a simple visual stimulus presented in the ipsilateral versus contralateral hemifield remains a controversial issue. Visuo-motor routing was investigated with event-related functional magnetic resonance imaging (fMRI) using the Poffenberger reaction time task. A 2 hemifield x 2 response hand design generated the "crossed" and "uncrossed" conditions, describing the spatial relation between these factors. Both conditions, with responses executed by the left or right hand, showed a similar spatial pattern of activated areas, including striate and extrastriate areas bilaterally, SMA, and M1 contralateral to the responding hand. These results demonstrated that visual information is processed bilaterally in striate and extrastriate visual areas, even in the "uncrossed" condition. Additional analyses based on sorting data according to subjects' reaction times revealed differential crossed versus uncrossed activity only for the slowest trials, with response strength in infero-temporal cortices significantly correlating with crossed-uncrossed differences (CUD) in reaction times. Collectively, the data favor a parallel, distributed model of brain activation. The presence of interhemispheric interactions and its consequent bilateral activity is not determined by the crossed anatomic projections of the primary visual and motor pathways. Distinct visuo-motor networks need not be engaged to mediate behavioral responses for the crossed visual field/response hand condition. While anatomical connectivity heavily influences the spatial pattern of activated visuo-motor pathways, behavioral and functional parameters appear to also affect the strength and dynamics of responses within these pathways.
Abstract:
Modern cochlear implantation technologies allow deaf patients to understand auditory speech; however, the implants deliver only a coarse auditory input, and patients must rely on long-term adaptive processes to achieve coherent percepts. In adults with post-lingual deafness, the greatest progress in speech recovery is observed during the first year after cochlear implantation, but there is a large range of variability in the level of cochlear implant outcomes and in the temporal evolution of recovery. It has been proposed that when profoundly deaf subjects receive a cochlear implant, visual cross-modal reorganization of the brain is deleterious for auditory speech recovery. We tested this hypothesis in post-lingually deaf adults by analysing whether brain activity shortly after implantation correlated with the level of auditory recovery 6 months later. Based on brain activity induced by a speech-processing task, we found strong positive correlations in areas outside the auditory cortex. The highest positive correlations were found in the occipital cortex involved in visual processing, as well as in the posterior-temporal cortex known for audio-visual integration. The other area that positively correlated with auditory speech recovery was localized in the left inferior frontal area known for speech processing. Our results demonstrate that the functional level of the visual modality is related to the proficiency of auditory recovery. Based on the positive correlation of visual activity with auditory speech recovery, we suggest that the visual modality may facilitate the perception of a word's auditory counterpart in communicative situations. The link demonstrated between visual activity and auditory speech perception indicates that visuoauditory synergy is crucial for cross-modal plasticity and for fostering speech-comprehension recovery in adult cochlear-implanted deaf patients.
Abstract:
Recent findings suggest that the visuo-spatial sketchpad (VSSP) may be divided into two sub-components processing dynamic or static visual information. This model may help resolve the conflicting data concerning the functioning of the VSSP in schizophrenia. The present study examined patients with schizophrenia and matched controls in a new working memory paradigm involving dynamic (the Ball Flight Task - BFT) or static (the Static Pattern Task - SPT) visual stimuli. In the BFT, the responses of the patients were apparently based on the retention of the last set of segments of the perceived trajectory, whereas control subjects relied on a more global strategy. We propose that the patients' performance reflects a reduced capacity for chunking visual information, since they relied mainly on the retention of the last set of segments. This interpretation is supported by the patients' poor performance on the static task (SPT), which requires combining stimulus components into object representations. We suggest that the static/dynamic distinction may help us to understand the VSSP deficits in schizophrenia. This distinction also raises questions about the hypothesis that visuo-spatial working memory can simply be dissociated into visual and spatial sub-components.
Abstract:
The current state of empirical investigations refers to consciousness as an all-or-none phenomenon. However, a recent theoretical account opens up this perspective by proposing a partial level (between nil and full) of conscious perception. In the well-studied case of single-word reading, short-lived exposure can trigger incomplete word-form recognition wherein letters fall short of forming a whole word in one's conscious perception, thereby hindering word-meaning access and report. Hence, the processing from incomplete to complete word-form recognition straightforwardly mirrors a transition from partial to full-blown consciousness. We therefore hypothesized that this putative functional bottleneck to consciousness (i.e. the perceptual boundary between partial and full conscious perception) would emerge at a major key hub region for word-form recognition during reading, namely the left occipito-temporal junction. We applied a real-time staircase procedure and titrated subjective reports at the threshold between partial (letters) and full (whole word) conscious perception. This experimental approach allowed us to collect trials with identical physical stimulation, yet reflecting distinct perceptual experience levels. Oscillatory brain activity was monitored with magnetoencephalography and revealed that the transition from partial-to-full word-form perception was accompanied by alpha-band (7-11 Hz) power suppression in the posterior left occipito-temporal cortex. This modulation of rhythmic activity extended anteriorly towards the visual word form area (VWFA), a region whose selectivity for word-forms in perception is highly debated. The current findings provide electrophysiological evidence for a functional bottleneck to consciousness, thereby empirically instantiating a recently proposed partial perspective on consciousness. Moreover, the findings provide an entirely new outlook on the functioning of the VWFA as a late bottleneck to full-blown conscious word-form perception.
Abstract:
Rats, like other crepuscular animals, have excellent auditory capacities and discriminate well between different sounds [Heffner HE, Heffner RS. Hearing in two cricetid rodents: wood rats (Neotoma floridana) and grasshopper mouse (Onychomys leucogaster). J Comp Psychol 1985;99(3):275-88]. However, most experimental literature concerning spatial orientation almost exclusively emphasizes the use of visual landmarks [Cressant A, Muller RU, Poucet B. Failure of centrally placed objects to control the firing fields of hippocampal place cells. J Neurosci 1997;17(7):2531-42; and Goodridge JP, Taube JS. Preferential use of the landmark navigational system by head direction cells in rats. Behav Neurosci 1995;109(1):49-61]. To address the important issue of whether rats are able to achieve a place navigation task relative to auditory beacons, we designed a place learning task in the water maze. We controlled cue availability by conducting the experiment in total darkness. Three auditory cues did not allow place navigation, whereas three visual cues in the same positions did support place navigation. One auditory beacon directly associated with the goal location did not support taxon navigation (a beacon strategy allowing the animal to find the goal simply by swimming toward the cue). Replacing the auditory beacons with a single visual beacon did support taxon navigation. A multimodal configuration of two auditory cues and one visual cue allowed correct place navigation. Deleting either the two auditory cues or the single visual cue disrupted spatial performance. Thus rats can combine information from different sensory modalities to achieve a place navigation task. In particular, auditory cues support place navigation when associated with a visual one.
Abstract:
BACKGROUND AND OBJECTIVES: The thalamus exerts a pivotal role in pain processing and cortical excitability control, and migraine is characterized by repeated pain attacks and abnormal cortical habituation to excitatory stimuli. This work aimed at studying the microstructure of the thalamus in migraine patients using an innovative multiparametric approach at high-field magnetic resonance imaging (MRI). DESIGN: We examined 37 migraineurs (22 without aura, MWoA, and 15 with aura, MWA) as well as 20 healthy controls (HC) in a 3-T MRI equipped with a 32-channel coil. We acquired whole-brain T1 relaxation maps and computed magnetization transfer ratio (MTR), generalized fractional anisotropy, and T2* maps to probe microstructural and connectivity integrity and to assess iron deposition. We also correlated the obtained parametric values with the average monthly frequency of migraine attacks and disease duration. RESULTS: T1 relaxation time was significantly shorter in the thalamus of MWA patients compared with MWoA (P < 0.001) and HC (P ≤ 0.01); in addition, MTR was higher and T2* relaxation time was shorter in MWA than in MWoA patients (P < 0.05, respectively). These data reveal broad microstructural alterations in the thalamus of MWA patients compared with MWoA and HC, suggesting increased iron deposition and myelin content/cellularity. However, MWA and MWoA patients did not show any differences in the thalamic nucleus involved in pain processing in migraine. CONCLUSIONS: There are broad microstructural alterations in the thalamus of MWA patients that may underlie abnormal cortical excitability control leading to cortical spreading depression and visual aura.
Abstract:
Whether the somatosensory system, like its visual and auditory counterparts, comprises parallel functional pathways for processing identity and spatial attributes (so-called what and where pathways, respectively) has hitherto been studied in humans using neuropsychological and hemodynamic methods. Here, electrical neuroimaging of somatosensory evoked potentials (SEPs) identified the spatio-temporal mechanisms subserving vibrotactile processing during two types of blocks of trials. What blocks varied stimuli in their frequency (22.5 Hz vs. 110 Hz) independently of their location (left vs. right hand). Where blocks varied the same stimuli in their location independently of their frequency. In this way, there was a 2x2 within-subjects factorial design, counterbalancing the hand stimulated (left/right) and trial type (what/where). Responses to physically identical somatosensory stimuli differed within 200 ms post-stimulus onset, which is within the same timeframe we previously identified for audition (De Santis, L., Clarke, S., Murray, M.M., 2007. Automatic and intrinsic auditory "what" and "where" processing in humans revealed by electrical neuroimaging. Cereb Cortex 17, 9-17.). Initially (100-147 ms), responses to each hand were stronger to the what than where condition in a statistically indistinguishable network within the hemisphere contralateral to the stimulated hand, arguing against hemispheric specialization as the principal basis for somatosensory what and where pathways. Later (149-189 ms) responses differed topographically, indicative of the engagement of distinct configurations of brain networks. A common topography described responses to the where condition irrespective of the hand stimulated. By contrast, different topographies accounted for the what condition and also as a function of the hand stimulated. Parallel, functionally specialized pathways are observed across sensory systems and may be indicative of a computationally advantageous organization for processing spatial and identity information.
Abstract:
Images obtained from high-throughput mass spectrometry (MS) contain information that remains hidden when looking at a single spectrum at a time. Image processing of liquid chromatography-MS datasets can be extremely useful for quality control, experimental monitoring and knowledge extraction. The importance of imaging in the differential analysis of proteomic experiments has already been established through two-dimensional gels and can now be foreseen with MS images. We present MSight, new software designed to construct and manipulate MS images, as well as to facilitate their analysis and comparison.