918 results for Visual Word-recognition
Abstract:
This paper studies the effectiveness of the recorded books and teaching method developed by Dr. Marie Carbo in the aural habilitation of prelingually deaf children with cochlear implants.
Abstract:
The ability of individuals with hearing loss to accurately recognize correct versus incorrect verbal responses during traditional word recognition testing was assessed across four different listening conditions.
Abstract:
This workshop paper reports recent developments to a vision system for traffic interpretation which relies extensively on the use of geometrical and scene context. Firstly, a new approach to pose refinement is reported, based on forces derived from prominent image derivatives found close to an initial hypothesis. Secondly, a parameterised vehicle model is reported, able to represent different vehicle classes. This general vehicle model has been fitted to sample data, and subjected to a Principal Component Analysis to create a deformable model of common car types having 6 parameters. We show that the new pose recovery technique is also able to operate on the PCA model, to allow the structure of an initial vehicle hypothesis to be adapted to fit the prevailing context. We report initial experiments with the model, which demonstrate significant improvements to pose recovery.
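The abstract's deformable model can be illustrated with a minimal sketch: fit PCA to a set of sample shape vectors and keep the top 6 modes as deformation parameters. The flattened-vector representation and function names here are illustrative assumptions, not the paper's actual parameterisation.

```python
import numpy as np

def build_deformable_model(samples, n_params=6):
    """Fit a PCA deformable model to sample shape vectors.

    samples: (N, D) array, each row one vehicle instance flattened to a
    vector (a hypothetical representation for illustration only).
    Returns the mean shape and the top principal deformation modes.
    """
    mean = samples.mean(axis=0)
    centered = samples - mean
    # SVD of the centered data yields the principal deformation modes.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    modes = vt[:n_params]          # (n_params, D) deformation basis
    return mean, modes

def instantiate(mean, modes, params):
    """Generate a shape from low-dimensional deformation parameters."""
    return mean + params @ modes

# Toy example: 50 random "vehicle" shape vectors of dimension 30.
rng = np.random.default_rng(0)
samples = rng.normal(size=(50, 30))
mean, modes = build_deformable_model(samples, n_params=6)
shape = instantiate(mean, modes, np.zeros(6))  # zero params -> mean shape
```

Pose recovery on such a model then optimises the 6 deformation parameters jointly with the pose, so an initial hypothesis can adapt its structure to the observed car type.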
Abstract:
Spoken word recognition, during gating, appears intact in specific language impairment (SLI). This study used gating to investigate the process in adolescents with autism spectrum disorders plus language impairment (ALI). Adolescents with ALI, SLI, and typical language development (TLD), matched on nonverbal IQ, listened to gated words that varied in frequency (low/high) and number of phonological onset neighbors (low/high density). Adolescents with ALI required more speech input to initially identify low-frequency words with low competitor density than those with SLI and those with TLD, who did not differ. These differences may be due to less well specified word form representations in ALI.
Abstract:
Training a system to recognize handwritten words is a task that requires a large amount of data with their correct transcription. However, the creation of such a training set, including the generation of the ground truth, is tedious and costly. One way of reducing the high cost of labeled training data acquisition is to exploit unlabeled data, which can be gathered easily. Making use of both labeled and unlabeled data is known as semi-supervised learning. One of the most general versions of semi-supervised learning is self-training, where a recognizer iteratively retrains itself on its own output on new, unlabeled data. In this paper we propose to apply semi-supervised learning, and in particular self-training, to the problem of cursive, handwritten word recognition. The special focus of the paper is on retraining rules that define what data are actually being used in the retraining phase. In a series of experiments it is shown that the performance of a neural network based recognizer can be significantly improved through the use of unlabeled data and self-training if appropriate retraining rules are applied.
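The self-training scheme the abstract describes reduces to a short loop: train on the labeled set, label the unlabeled data, keep the outputs that pass a retraining rule, and retrain. The sketch below is a minimal illustration under assumed interfaces (the `MajorityRecognizer` toy and the confidence-threshold rule are invented stand-ins, not the paper's neural recognizer or its actual rules).

```python
class MajorityRecognizer:
    """Toy stand-in for a recognizer (hypothetical API): it predicts
    the majority training label with full confidence."""

    def fit(self, data):
        labels = [y for _, y in data]
        self.label = max(set(labels), key=labels.count)

    def predict(self, x):
        return self.label, 1.0


def self_train(recognizer, labeled, unlabeled, rounds=3, threshold=0.9):
    """Iteratively retrain a recognizer on its own confident output.

    The retraining rule used here -- keep only predictions whose
    confidence meets `threshold` -- is just one simple example of the
    family of rules the paper investigates.
    """
    recognizer.fit(labeled)
    for _ in range(rounds):
        pseudo = []
        for x in unlabeled:
            label, confidence = recognizer.predict(x)
            if confidence >= threshold:         # the retraining rule
                pseudo.append((x, label))
        recognizer.fit(list(labeled) + pseudo)  # retrain on own output
    return recognizer


labeled = [("img0", "cat"), ("img1", "cat"), ("img2", "dog")]
rec = self_train(MajorityRecognizer(), labeled, unlabeled=["img3", "img4"])
```

The choice of retraining rule matters because overly permissive rules feed the recognizer its own mistakes, while overly strict ones discard most of the unlabeled data.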
Abstract:
Coordinated eye and head movements simultaneously occur to scan the visual world for relevant targets. However, measuring both eye and head movements in experiments allowing natural head movements may be challenging. This paper provides an approach to study eye-head coordination: First, we demonstrate the capabilities and limits of the eye-head tracking system used, and compare it to other technologies. Second, a behavioral task is introduced to invoke eye-head coordination. Third, a method is introduced to reconstruct signal loss in video-based oculography caused by cornea reflection artifacts in order to extend the tracking range. Finally, parameters of eye-head coordination are identified using EHCA (eye-head coordination analyzer), a MATLAB software which was developed to analyze eye-head shifts. To demonstrate the capabilities of the approach, a study with 11 healthy subjects was performed to investigate motion behavior. The approach presented here is discussed as an instrument to explore eye-head coordination, which may lead to further insights into attentional and motor symptoms of certain neurological or psychiatric diseases, e.g., schizophrenia.
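Reconstructing short signal dropouts of the kind the abstract mentions is often done by bridging the gap from the surrounding valid samples. The following is a generic linear-interpolation sketch, assuming dropouts are marked as `None`; the paper's actual reconstruction method for cornea-reflection artifacts is not specified here.

```python
def interpolate_gaps(signal):
    """Linearly interpolate dropouts (None values) in a 1-D trace.

    Gaps touching either end of the trace are filled by holding the
    nearest valid sample (an assumption of this sketch).
    """
    out = list(signal)
    n = len(out)
    i = 0
    while i < n:
        if out[i] is None:
            j = i
            while j < n and out[j] is None:
                j += 1                       # j = first valid sample after gap
            if i == 0 or j == n:
                # Gap touches an edge: hold the nearest valid sample.
                fill = out[j] if i == 0 and j < n else out[i - 1]
                for k in range(i, j):
                    out[k] = fill
            else:
                a, b = out[i - 1], out[j]
                for k in range(i, j):
                    t = (k - i + 1) / (j - i + 1)
                    out[k] = a + t * (b - a)  # linear bridge across the gap
            i = j
        else:
            i += 1
    return out
```

In practice one would cap the length of gaps bridged this way, since long dropouts cannot be recovered reliably from the endpoints alone.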
Abstract:
Vision extracts useful information from images. Reconstructing the three-dimensional structure of our environment and recognizing the objects that populate it are among the most important functions of our visual system. Computer vision researchers study the computational principles of vision and aim at designing algorithms that reproduce these functions. Vision is difficult: the same scene may give rise to very different images depending on illumination and viewpoint. Typically, an astronomical number of hypotheses exist that in principle have to be analyzed to infer a correct scene description. Moreover, image information might be extracted at different levels of spatial and logical resolution dependent on the image processing task. Knowledge of the world allows the visual system to limit the amount of ambiguity and to greatly simplify visual computations. We discuss how simple properties of the world are captured by the Gestalt rules of grouping, how the visual system may learn and organize models of objects for recognition, and how one may control the complexity of the description that the visual system computes.
Abstract:
Automaticity (in this essay defined as short response time) and fluency in language use are closely connected to each other, and some research has been conducted on the aspects involved. In fact, the notion of automaticity is still debated, and many definitions and opinions on what automaticity is have been suggested (Andersson, 1987, 1992, 1993; Logan, 1988; Segalowitz, 2010). One aspect that still needs more research is the correlation between vocabulary proficiency (a person's knowledge about words and ability to use them correctly) and response time in word recognition. Therefore, the aim of this study has been to investigate this correlation using two different tests: one vocabulary size test (Paul Nation) and one lexical decision task (SuperLab) that measures both response time and accuracy. 23 Swedish students taking the English 7 course in upper secondary Swedish school were tested. The data were analyzed using a quantitative method in which the average values and correlations from the tests were used to compare the results. The correlations were calculated using Pearson's correlation coefficient. The empirical study indicates that vocabulary proficiency is not strongly correlated with shorter response times in word recognition. Rather, the data indicate that L2 learners are instead sensitive to the frequency levels of the vocabulary. The accuracy (number of correctly recognized words) and response times correlate with the frequency level of the tested words. This indicates that factors other than vocabulary proficiency are important for the ability to recognize words quickly.
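The correlation statistic used in the study above is Pearson's product-moment coefficient, which can be computed directly from the paired scores. The sketch below shows the computation; the sample data are invented for illustration, not the study's results.

```python
import math

def pearson_r(xs, ys):
    """Pearson product-moment correlation coefficient of two samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Illustrative (invented) data: vocabulary-size scores vs. mean
# lexical-decision response times in ms for five learners.
vocab = [4200, 5100, 6100, 4800, 5600]
rt = [712, 698, 701, 705, 695]
r = pearson_r(vocab, rt)  # a value near 0 would indicate a weak correlation
```

A value of r near zero, as the study reports for proficiency versus response time, indicates little linear association between the two measures.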
Abstract:
The aim of this work is to evaluate the roles of age and emotional valence in word recognition in terms of ex-Gaussian distribution components. To do so, a word recognition task in which emotional valence was manipulated was carried out with two age groups. Older participants did not present a clear trend for reaction times. The younger participants showed significant statistical differences in negative words for target and distracting conditions. With respect to the ex-Gaussian tau parameter, often related to attentional demands in the literature, age-related differences in emotional valence seem to have no effect for negative words. Focusing on emotional valence within each group, the younger participants showed an effect only on negative distracting words. The older participants showed an effect regarding negative and positive target words, and negative distracting words. This suggests that the attentional demand is higher for emotional words, in particular for the older participants.
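The ex-Gaussian decomposition referred to above models a reaction-time distribution as a Gaussian (mu, sigma) convolved with an exponential tail (tau). A simple way to recover the three components is the textbook method-of-moments estimator sketched below; this is a generic illustration, not necessarily the fitting procedure used in the study.

```python
import math
import random

def exgauss_moments(rts):
    """Method-of-moments estimates for ex-Gaussian parameters.

    mu/sigma describe the Gaussian component; tau is the exponential
    tail, often linked to attentional demands in the RT literature.
    """
    n = len(rts)
    mean = sum(rts) / n
    var = sum((x - mean) ** 2 for x in rts) / n
    sd = math.sqrt(var)
    skew = sum((x - mean) ** 3 for x in rts) / (n * sd ** 3)
    tau = sd * (max(skew, 0.0) / 2) ** (1 / 3)   # tail from the skewness
    mu = mean - tau                              # Gaussian mean
    sigma = math.sqrt(max(var - tau ** 2, 0.0))  # Gaussian spread
    return mu, sigma, tau

# Recover parameters from synthetic RTs (true mu=500, sigma=50, tau=150).
random.seed(1)
rts = [random.gauss(500, 50) + random.expovariate(1 / 150)
       for _ in range(10000)]
mu_hat, sigma_hat, tau_hat = exgauss_moments(rts)
```

Because tau absorbs the slow tail of the distribution, group differences in tau (as opposed to mu or sigma) are the component the study interprets as differences in attentional demand.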
Abstract:
What helps us determine whether a word is a noun or a verb, without conscious awareness? We report on cues in the way individual English words are spelled, and, for the first time, identify their neural correlates via functional magnetic resonance imaging (fMRI). We used a lexical decision task with trisyllabic nouns and verbs containing orthographic cues that are either consistent or inconsistent with the spelling patterns of words from that grammatical category. Significant linear increases in response times and error rates were observed as orthography became less consistent, paralleled by significant linear decreases in blood oxygen level dependent (BOLD) signal in the left supramarginal gyrus of the left inferior parietal lobule, a brain region implicated in visual word recognition. A similar pattern was observed in the left superior parietal lobule. These findings align with an emergentist view of grammatical category processing which results from sensitivity to multiple probabilistic cues.
Abstract:
Does language-specific orthography help language detection and lexical access in naturalistic bilingual contexts? This study investigates how L2 orthotactic properties influence bilingual language detection in bilingual societies and the extent to which they modulate lexical access and single word processing. Language specificity of naturalistically learnt L2 words was manipulated by including bigram combinations that could be either L2 language-specific or common to the two languages known by bilinguals. A group of balanced bilinguals and a group of highly proficient but unbalanced bilinguals who grew up in a bilingual society were tested, together with a group of monolinguals (for control purposes). All the participants completed a speeded language detection task and a progressive demasking task. Results showed that the use of the information of orthotactic rules across languages depends on the task demands at hand and on participants' proficiency in the second language. The influence of language orthotactic rules during language detection, lexical access and word identification is discussed according to the most prominent models of bilingual word recognition.
Abstract:
Based on the theoretical framework of Dressler and Dziubalska-Kołaczyk (2006a,b), the Strong Morphonotactic Hypothesis will be tested. It assumes that phonotactics helps in the decomposition of words into morphemes: if a certain sequence occurs exclusively, or by default, over a morpheme boundary and is thus a prototypical morphonotactic sequence, it should be processed faster and more accurately than a purely phonotactic sequence. Studies on typical and atypical first language acquisition in English, Lithuanian and Polish have shown significant differences between the acquisition of morphonotactic and phonotactic consonant clusters: morphonotactic clusters are acquired earlier and faster by typically developing children, but are more problematic for children with Specific Language Impairment. However, results on acquisition are less clear for German. The focus of this contribution is whether and how German-speaking adults differentiate between morphonotactic and phonotactic consonant clusters and vowel-consonant sequences in visual word recognition. It investigates whether sub-lexical letter sequences are found faster when the target sequence is separated from the word stem by a morphological boundary than when it is part of a morphological root. An additional factor that is addressed concerns the position of the target cluster in the word. Due to the bathtub effect, sequences in peripheral positions in a word are more salient and thus facilitate processing more than word-internal positions. Moreover, for adults the primacy effect most favors word-initial position (whereas for young children the recency effect most favors word-final position). Our study discusses effects of phonotactic vs. morphonotactic cluster status and of position within the word.
Abstract:
Background: Few studies have investigated how individuals diagnosed with post-stroke Broca’s aphasia decompose words into their constituent morphemes in real-time processing. Previous research has focused on morphologically complex words in non-time-constrained settings or in syntactic frames, but not in the lexicon. Aims: We examined real-time processing of morphologically complex words in a group of five Greek-speaking individuals with Broca’s aphasia to determine: (1) whether their morphological decomposition mechanisms are sensitive to lexical (orthography and frequency) vs. morphological (stem-suffix combinatory features) factors during visual word recognition, (2) whether these mechanisms are different in inflected vs. derived forms during lexical access, and (3) whether there is a preferred unit of lexical access (syllables vs. morphemes) for inflected vs. derived forms. Methods & Procedures: The study included two real-time experiments. The first was a semantic judgment task necessitating participants’ categorical judgments for high- and low-frequency inflected real words and pseudohomophones of the real words created by either an orthographic error at the stem or a homophonous (but incorrect) inflectional suffix. The second experiment was a letter-priming task at the syllabic or morphemic boundary of morphologically transparent inflected and derived words whose stems and suffixes were matched for length, lemma and surface frequency. Outcomes & Results: The majority of the individuals with Broca’s aphasia were sensitive to lexical frequency and stem orthography, while ignoring the morphological combinatory information encoded in the inflectional suffix that control participants were sensitive to. 
The letter-priming task, on the other hand, showed that individuals with aphasia—in contrast to controls—showed preferences with regard to the unit of lexical access, i.e., they were overall faster on syllabically than morphemically parsed words and their morphological decomposition mechanisms for inflected and derived forms were modulated by the unit of lexical access. Conclusions: Our results show that in morphological processing, Greek-speaking persons with aphasia rely mainly on stem access and thus are only sensitive to orthographic violations of the stem morphemes, but not to illegal morphological combinations of stems and suffixes. This possibly indicates an intact orthographic lexicon but deficient morphological decomposition mechanisms, possibly stemming from an underspecification of inflectional suffixes in the participants’ grammar. Syllabic information, however, appears to facilitate lexical access and elicits repair mechanisms that compensate for deviant morphological parsing procedures.
Abstract:
Neuronal models predict that retrieval of specific event information reactivates brain regions that were active during encoding of this information. Consistent with this prediction, this positron-emission tomography study showed that remembering that visual words had been paired with sounds at encoding activated some of the auditory brain regions that were engaged during encoding. After word-sound encoding, activation of auditory brain regions was also observed during visual word recognition when there was no demand to retrieve auditory information. Collectively, these observations suggest that information about the auditory components of multisensory event information is stored in auditory responsive cortex and reactivated at retrieval, in keeping with classical ideas about “redintegration,” that is, the power of part of an encoded stimulus complex to evoke the whole experience.