Abstract:
Recent findings in neuroscience suggest that adult brain structure changes in response to environmental alterations and skill learning. Whereas much is known about structural changes after intensive practice for several months, little is known about the effects of single practice sessions on macroscopic brain structure and about progressive (dynamic) morphological alterations relative to improved task proficiency during learning over several weeks. Using T1-weighted and diffusion tensor imaging in humans, we demonstrate significant gray matter volume increases in frontal and parietal brain areas following only two sessions of practice in a complex whole-body balancing task. Gray matter volume increase in the prefrontal cortex correlated positively with subjects' performance improvements during a 6-week learning period. Furthermore, we found that microstructural changes of fractional anisotropy in corresponding white matter regions followed the same temporal dynamic in relation to task performance. The results make clear how marginal alterations in our ever-changing environment affect adult brain structure and elucidate the interrelated reorganization in cortical areas and associated fiber connections in correlation with improvements in task performance.
Learning-induced plasticity in auditory spatial representations revealed by electrical neuroimaging.
Abstract:
Auditory spatial representations are likely encoded at a population level within human auditory cortices. We investigated learning-induced plasticity of spatial discrimination in healthy subjects using auditory-evoked potentials (AEPs) and electrical neuroimaging analyses. Stimuli were 100 ms white-noise bursts lateralized with varying interaural time differences. In three experiments, plasticity was induced with 40 min of discrimination training. During training, accuracy significantly improved from near-chance levels to approximately 75%. Before and after training, AEPs were recorded to stimuli presented passively with a more medial sound lateralization outnumbering a more lateral one (7:1). In experiment 1, the same lateralizations were used for training and AEP sessions. Significant AEP modulations to the different lateralizations were evident only after training, indicative of a learning-induced mismatch negativity (MMN). More precisely, this MMN at 195-250 ms after stimulus onset followed from differences in the AEP topography to each stimulus position, indicative of changes in the underlying brain network. In experiment 2, mirror-symmetric locations were used for training and AEP sessions; no training-related AEP modulations or MMN were observed. In experiment 3, the discrimination of trained plus equidistant untrained separations was tested psychophysically before and 0, 6, 24, and 48 h after training. Learning-induced plasticity lasted <6 h, did not generalize to untrained lateralizations, and was not the simple result of strengthening the representation of the trained lateralizations. Thus, learning-induced plasticity of auditory spatial discrimination relies on spatial comparisons, rather than a spatial anchor or a general comparator. Furthermore, cortical auditory representations of space are dynamic and subject to rapid reorganization.
Abstract:
Among the types of remote sensing acquisitions, optical images are certainly one of the most widely relied-upon data sources for Earth observation. They provide detailed measurements of the electromagnetic radiation reflected or emitted by each pixel in the scene. Through a process termed supervised land-cover classification, this allows objects at the surface of our planet to be distinguished automatically yet accurately. In this respect, when producing a land-cover map of the surveyed area, the availability of training examples representative of each thematic class is crucial to the success of the classification procedure. However, in real applications, due to several constraints on the sample collection process, labeled pixels are usually scarce. When analyzing an image for which those key samples are unavailable, a viable solution is to resort to the ground truth data of other, previously acquired images. This option is attractive, but several factors such as atmospheric, ground, and acquisition conditions can cause radiometric differences between the images, thereby hindering the transfer of knowledge from one image to another. The goal of this Thesis is to supply remote sensing image analysts with suitable processing techniques to ensure robust portability of classification models across different images. The ultimate purpose is to map the land-cover classes over large spatial and temporal extents with minimal ground information. To overcome, or simply quantify, the observed shifts in the statistical distribution of the spectra of the materials, we study four approaches drawn from the field of machine learning. First, we propose a strategy to intelligently sample the image of interest, collecting labels only for the most useful pixels. This iterative routine is based on a continual evaluation of how pertinent the initial training data, which actually belong to a different image, remain for the new image.
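The abstract does not specify how "the most useful pixels" are selected; margin-based uncertainty sampling is one common active-learning criterion. The sketch below is a minimal, hypothetical illustration (the function names, the nearest-centroid surrogate classifier, and the toy spectra are all assumptions, not the thesis's actual method): it ranks unlabeled pixels by how ambiguously they sit between class centroids fit on labels transferred from another image, and returns the most ambiguous one as the next candidate to annotate.

```python
# Hedged sketch of margin-based active sampling for cross-image labeling.
# All names and data are illustrative; a nearest-centroid model stands in
# for whatever classifier the source image's training data would support.
import math

def centroid(points):
    # Component-wise mean of a list of equal-length spectra.
    n = len(points)
    return [sum(p[i] for p in points) / n for i in range(len(points[0]))]

def dist(a, b):
    # Euclidean distance between two spectra.
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def most_uncertain(unlabeled, labeled):
    """Return the unlabeled pixel whose distances to the two nearest
    class centroids differ the least (smallest margin), i.e. the most
    ambiguous candidate to hand to a human annotator next."""
    cents = {c: centroid(pts) for c, pts in labeled.items()}
    def margin(x):
        d = sorted(dist(x, cent) for cent in cents.values())
        return d[1] - d[0]
    return min(unlabeled, key=margin)

# Toy example: labels transferred from a source image, candidates from
# the new image. The mid-point between the classes is the obvious pick.
labeled = {"water": [[0.1, 0.2], [0.0, 0.3]],
           "soil":  [[0.9, 0.8], [1.0, 0.7]]}
candidates = [[0.5, 0.5], [0.05, 0.25], [0.95, 0.75]]
print(most_uncertain(candidates, labeled))  # -> [0.5, 0.5]
```

Iterating this selection, retraining after each new label, gives the kind of routine the abstract describes: the transferred training set is continually re-evaluated as target-image labels accumulate.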
Second, we present an approach that reduces the radiometric differences among the images by projecting the respective pixels into a common new data space. We analyze a kernel-based feature extraction framework suited to such problems, showing that, after this relative normalization, the cross-image generalization ability of a classifier is greatly increased. Third, we test a new data-driven measure of distance between probability distributions to assess the distortions caused by differences in acquisition geometry affecting series of multi-angle images. We also gauge the portability of classification models through the sequences. In both exercises, the efficacy of classic physically and statistically based normalization methods is discussed. Finally, we explore a new family of approaches based on sparse representations of the samples to reciprocally convert the data space of two images. The projection function bridging the images allows the synthesis of new pixels with more similar characteristics, ultimately facilitating land-cover mapping across images.
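The abstract does not name its data-driven distance between probability distributions; Maximum Mean Discrepancy (MMD) is one widely used kernel-based choice, so the sketch below should be read as a generic illustration of the idea rather than the thesis's actual measure. It compares two sets of pixel spectra through a biased estimate of squared MMD with an RBF kernel: zero when the empirical distributions coincide, larger as radiometric shift grows.

```python
# Illustrative sketch of a data-driven distance between two samples of
# pixel spectra (squared MMD with an RBF kernel); names are assumptions.
import math

def rbf(x, y, gamma=1.0):
    # Gaussian (RBF) kernel between two spectra.
    return math.exp(-gamma * sum((a - b) ** 2 for a, b in zip(x, y)))

def mmd2(X, Y, gamma=1.0):
    """Biased estimate of squared Maximum Mean Discrepancy:
    mean within-X similarity + mean within-Y similarity
    - 2 * mean cross similarity. Zero when X and Y are identical."""
    kxx = sum(rbf(a, b, gamma) for a in X for b in X) / (len(X) ** 2)
    kyy = sum(rbf(a, b, gamma) for a in Y for b in Y) / (len(Y) ** 2)
    kxy = sum(rbf(a, b, gamma) for a in X for b in Y) / (len(X) * len(Y))
    return kxx + kyy - 2 * kxy

# Toy usage: a shifted sample scores a clearly positive distance,
# an identical sample scores (numerically) zero.
X = [[0.0], [0.1], [0.2]]
Y = [[1.0], [1.1], [1.2]]
print(mmd2(X, Y))  # positive: the distributions differ
print(mmd2(X, X))  # ~0: identical samples
```

A measure of this kind can be evaluated before and after a normalization step, which is exactly the role the abstract assigns it: quantifying the distortion between images in a multi-angle sequence and judging whether a correction has brought their spectra closer together.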