2 results for visual world eye-tracking
in the Illinois Digital Environment for Access to Learning and Scholarship Repository
Abstract:
Eye-tracking was used to examine how younger and older adults use syntactic and semantic information to disambiguate noun/verb (NV) homographs (e.g., park). We find that young adults exhibit inflated first fixations to NV homographs when only syntactic cues are available for disambiguation (i.e., in syntactic prose). This effect is eliminated by the addition of disambiguating semantic information. Older adults (60+) as a group fail to show the first fixation effect in syntactic prose; instead, they reread NV homographs longer. This pattern mirrors prior event-related potential (ERP) work (Lee & Federmeier, 2009, 2011), which reported a sustained frontal negativity to NV homographs in syntactic prose for young adults that was eliminated by semantic constraints. The frontal negativity was not observed in older adults as a group, although older adults with high verbal fluency showed the young-like pattern. Analyses of individual differences in eye-tracking patterns revealed a similar effect of verbal fluency in both age groups: high-verbal-fluency groups of both ages show larger first fixation effects, while low-verbal-fluency groups show larger downstream costs (rereading and/or refixating NV homographs). Jointly, the eye-tracking and ERP data suggest that effortful meaning selection recruits frontal brain areas important for suppressing contextually inappropriate meanings, which also slows eye movements. The efficacy of fronto-temporal circuitry, as captured by verbal fluency, predicts the success of engaging these mechanisms in both young and older adults. Failure to recruit these processes requires compensatory rereading or leads to comprehension failures (Lee & Federmeier, in press).
Abstract:
Visual recognition is a fundamental research topic in computer vision. This dissertation explores the datasets, features, learning methods, and models used for visual recognition. To train visual models and evaluate different recognition algorithms, the dissertation develops an approach to collecting object image datasets from web pages by jointly analyzing the text around each image and the image's appearance. The method exploits established online knowledge resources (Wikipedia pages for text; the Flickr and Caltech datasets for images), which provide rich text and object appearance information. Results are reported on two datasets. The first is Berg's collection of 10 animal categories, on which the approach significantly outperforms previous methods; on an additional set of 5 categories, experimental results likewise show the method's effectiveness. Images must be represented as features for visual recognition. This dissertation introduces a text-based image feature and demonstrates that it consistently improves performance on hard object classification problems. The feature is built using an auxiliary dataset of tag-annotated images downloaded from the Internet. Image tags are noisy; the method therefore obtains the text feature of an unannotated image by pooling the tags of its k nearest neighbors in this auxiliary collection. A visual classifier presented with an object viewed under novel circumstances (say, a new viewing direction) must rely on its visual examples, but the text feature may change little, because the auxiliary dataset likely contains a similar picture: while image tags are noisy, they are more stable than appearance when viewing conditions change. The performance of this feature is tested on the PASCAL VOC 2006 and 2007 datasets. It performs well: it consistently improves the performance of visual object classifiers, and it is particularly effective when the training dataset is small.
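As a loose illustration of the k-nearest-neighbor tag pooling described above, here is a minimal sketch in Python. The distance metric, pooling rule, function names, and toy data are assumptions for illustration, not the dissertation's implementation.

```python
import math
from collections import Counter

def knn_text_feature(query, aux_feats, aux_tags, vocab, k=3):
    """Build a text feature for an unannotated image by pooling the tags
    of its k nearest neighbors in an auxiliary tag-annotated collection."""
    # Rank auxiliary images by Euclidean distance to the query's visual feature.
    order = sorted(range(len(aux_feats)),
                   key=lambda i: math.dist(query, aux_feats[i]))
    # Pool the neighbors' tags into counts, then normalize over a fixed vocabulary.
    counts = Counter(tag for i in order[:k] for tag in aux_tags[i])
    total = sum(counts[t] for t in vocab) or 1
    return [counts[t] / total for t in vocab]

# Toy auxiliary collection: 2-D visual features with associated tags.
aux_feats = [[0.0, 0.0], [0.1, 0.0], [5.0, 5.0], [5.1, 5.0]]
aux_tags = [["dog", "park"], ["dog", "grass"], ["car", "road"], ["car"]]
vocab = ["dog", "park", "grass", "car", "road"]

# The query's two nearest neighbors are both "dog" images, so its text
# feature concentrates on dog-related tags.
feat = knn_text_feature([0.05, 0.0], aux_feats, aux_tags, vocab, k=2)
```

Even if the query image were photographed from a new viewpoint, its nearest tagged neighbors would tend to carry the same tags, which is the stability under appearance change that the abstract describes.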
As more and more training data are collected, computational cost becomes a bottleneck, especially when training sophisticated classifiers such as kernelized SVMs. This dissertation proposes a fast training algorithm called the Stochastic Intersection Kernel Machine (SIKMA). The method should be useful for many vision problems: it produces a kernel classifier that is more accurate than a linear classifier, and it can be trained on tens of thousands of examples in two minutes. Because it processes training examples one at a time, in sequence, memory cost is no longer a bottleneck for large-scale datasets. The dissertation applies this approach to train classifiers for Flickr groups with many training examples per group. The resulting Flickr group prediction scores can be used to measure the similarity between two images. Experimental results on the Corel dataset and a PASCAL VOC dataset show that the learned Flickr features outperform conventional visual features on image matching, retrieval, and classification.

Visual models are usually trained to best separate positive and negative training examples. However, when recognizing a large number of object categories, there may not be enough training examples for most objects, due to the intrinsic long-tailed distribution of objects in the real world. This dissertation proposes an approach that uses comparative object similarity. The key insight is that, given a set of categories similar to the target and a set of dissimilar categories, a good object model should respond more strongly to examples from the similar categories than to examples from the dissimilar ones. The dissertation develops a regularized kernel machine algorithm that applies this category-dependent similarity regularization. Experiments on hundreds of categories show that the method yields significant improvements for categories with few or even no positive examples.
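The abstract does not spell out SIKMA's update rule, so as a stand-in for the one-example-at-a-time style of kernel training it describes, here is a kernel perceptron using the histogram intersection kernel. This is a simpler substitute algorithm, not SIKMA itself, and all names and data are hypothetical; it does show why streaming through examples keeps memory bounded by the mistakes stored, not the dataset size.

```python
def intersection_kernel(x, y):
    # Histogram intersection kernel: sum of elementwise minima.
    return sum(min(a, b) for a, b in zip(x, y))

def train_online(stream):
    """Kernel perceptron trained one example at a time: memory holds only
    the misclassified examples (support set), never the full dataset."""
    support = []  # (label, feature) pairs for mistakes
    for x, y in stream:  # labels y are -1 or +1
        score = sum(l * intersection_kernel(s, x) for l, s in support)
        if y * score <= 0:          # mistake: remember this example
            support.append((y, x))
    return support

def predict(support, x):
    score = sum(l * intersection_kernel(s, x) for l, s in support)
    return 1 if score > 0 else -1

# Toy histograms: class +1 concentrates mass in the first bin.
data = [([0.9, 0.1], 1), ([0.8, 0.2], 1),
        ([0.1, 0.9], -1), ([0.2, 0.8], -1)]
model = train_online(data)
```

A linear classifier would also separate this toy data; the intersection kernel's advantage, which SIKMA retains while training fast, shows up on real histogram features such as bags of visual words.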
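The comparative-similarity insight can be illustrated with a toy linear model: alongside the usual loss on positive examples, add updates that push the score of similar-category examples above the score of dissimilar-category examples. This is a perceptron-style sketch under assumptions (linear model, invented data, hypothetical names), not the dissertation's regularized kernel machine.

```python
def train_with_similarity(pos, similar, dissimilar, dim, epochs=50, lr=0.1):
    """Similarity-regularized sketch: the model should score positives
    above zero AND score similar-category examples above dissimilar ones."""
    w = [0.0] * dim

    def score(x):
        return sum(wi * xi for wi, xi in zip(w, x))

    for _ in range(epochs):
        for x in pos:                        # standard hinge on positives
            if score(x) < 1:
                w = [wi + lr * xi for wi, xi in zip(w, x)]
        for s in similar:                    # similarity constraint:
            for d in dissimilar:             # score(s) > score(d) + margin
                if score(s) - score(d) < 1:
                    w = [wi + lr * (si - di) for wi, si, di in zip(w, s, d)]
    return w

# Toy use: learn a model for a category with NO positive examples, using
# examples from a similar category and a dissimilar one (data hypothetical).
w_zebra = train_with_similarity(
    pos=[], similar=[[1.0, 0.8], [0.9, 1.0]],
    dissimilar=[[0.0, 0.1], [0.1, 0.0]], dim=2)
```

Even with an empty positive set, the pairwise constraints alone give the model a sensible decision direction, which is the point of the approach for long-tailed categories.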