968 results for 380207 Linguistic Structures (incl. Grammar, Phonology, Lexicon, Semantics)


Relevance: 100.00%

Publisher:

Abstract:

At the core of our uniquely human cognitive abilities is the capacity to see things from different perspectives, or to place them in a new context. We propose that this was made possible by two cognitive transitions. First, the large brain of Homo erectus facilitated the onset of recursive recall: the ability to string thoughts together into a stream of potentially abstract or imaginative thought. This hypothesis is supported by a set of computational models in which an artificial society of agents evolved to generate more diverse and valuable cultural outputs under conditions of recursive recall. We propose that the capacity to see things in context arose much later, following the appearance of anatomically modern humans. This second transition was brought on by the onset of contextual focus: the capacity to shift between a minimally contextual analytic mode of thought and a highly contextual associative mode of thought, conducive to combining concepts in new ways and ‘breaking out of a rut’. When contextual focus is implemented in an art-generating computer program, the resulting artworks are seen as more creative and appealing. We summarize how both transitions can be modeled using a theory of concepts that highlights the manner in which different contexts can lead modern humans to attribute very different meanings to the same concept.
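
The computational claim here (that stringing remembered ideas together yields more diverse cultural outputs) can be illustrated with a toy agent-based simulation. The sketch below is only a minimal illustration under assumed parameters (agent count, ideas represented as symbol tuples, diversity measured as the count of distinct ideas in the population); it is not the authors' model:

    import random

    def simulate(recursive_recall, n_agents=50, n_rounds=2000, seed=0):
        """Toy cultural-transmission loop: each agent holds a repertoire of 'ideas'
        (tuples of primitive symbols). With recursive recall, a speaker may chain
        two remembered ideas into a new compound idea before sharing it; without
        it, agents only pass on single ideas."""
        rng = random.Random(seed)
        primitives = list("abcdefgh")
        agents = [[(rng.choice(primitives),)] for _ in range(n_agents)]
        for _ in range(n_rounds):
            speaker, listener = rng.sample(range(n_agents), 2)
            idea = rng.choice(agents[speaker])
            if recursive_recall:
                # Chain two remembered ideas into one compound thought.
                idea = idea + rng.choice(agents[speaker])
            if idea not in agents[listener]:
                agents[listener].append(idea)
        # Diversity of the culture: number of distinct ideas across all agents.
        return len({idea for repertoire in agents for idea in repertoire})

    if __name__ == "__main__":
        print("diversity without recursive recall:", simulate(False))
        print("diversity with recursive recall:   ", simulate(True))

Under these assumptions the recursive condition accumulates compound ideas and therefore a larger set of distinct outputs, which is the qualitative pattern the abstract reports; the sketch says nothing about the 'value' of the outputs, which the authors' models also assess.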

Relevance: 100.00%

Publisher:

Abstract:

Studies of orthographic skills transfer between languages focus mostly on working memory (WM) ability in alphabetic first language (L1) speakers when learning another, often alphabetically congruent, language. We report two studies that, instead, explored the transferability of L1 orthographic processing skills in WM in logographic-L1 and alphabetic-L1 speakers. English-French bilingual and English monolingual (alphabetic-L1) speakers, and Chinese-English (logographic-L1) speakers, learned a set of artificial logographs and associated meanings (Study 1). The logographs were used in WM tasks with and without concurrent articulatory or visuo-spatial suppression. The logographic-L1 bilinguals were markedly less affected by articulatory suppression than alphabetic-L1 monolinguals (who did not differ from their bilingual peers). Bilinguals overall were less affected by spatial interference, reflecting superior phonological processing skills or, conceivably, greater executive control. A comparison of span sizes for meaningful and meaningless logographs (Study 2) replicated these findings. However, the logographic-L1 bilinguals’ spans in L1 were measurably greater than those of their alphabetic-L1 (bilingual and monolingual) peers; a finding unaccounted for by faster articulation rates or differences in general intelligence. The overall pattern of results suggests an advantage (possibly perceptual) for logographic-L1 speakers, over and above the bilingual advantage also seen elsewhere in third language (L3) acquisition.

Relevance: 100.00%

Publisher:

Abstract:

The term “vagueness” describes a property of natural concepts, which normally have fuzzy boundaries, admit borderline cases, and are susceptible to Zeno’s sorites paradox. We will discuss the psychology of vagueness, especially experiments investigating the judgment of borderline cases and contradictions. In the theoretical part, we will propose a probabilistic model that describes the quantitative characteristics of the experimental findings and extends Alxatib and Pelletier’s (2011) theoretical analysis. The model is based on a Hopfield network for predicting truth values. Powerful as this classical perspective is, we show that it falls short of providing an adequate coverage of the relevant empirical results. In the final part, we will argue that a substantial modification of the analysis put forward by Alxatib and Pelletier, and of its probabilistic counterpart, is needed. The proposed modification replaces standard probabilities with quantum probabilities. The crucial phenomenon of borderline contradictions can then be explained as an effect of quantum interference.
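
The quantum-probability move the abstract alludes to can be stated with a generic textbook identity; this is not the paper's specific model, and the state \psi and the projectors P_A (a judgment such as "x is tall") and P_B (a second judgment) are placeholders:

    % Decomposition of a judgment probability into two classical paths plus interference.
    % P_A projects onto "x is tall", I - P_A onto "x is not tall", P_B onto the second judgment.
    \[
    \Pr(B) \;=\; \lVert P_B P_A \psi \rVert^{2}
           \;+\; \lVert P_B (I - P_A)\,\psi \rVert^{2}
           \;+\; 2\,\operatorname{Re}\,\langle P_B P_A \psi,\; P_B (I - P_A)\,\psi \rangle .
    \]

When the projectors commute, the last (interference) term vanishes and the classical law of total probability is recovered; when they do not, it can be nonzero, which is the formal room a quantum model has for assigning non-trivial probability to borderline contradictions such as "x is tall and not tall".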

Relevance: 100.00%

Publisher:

Abstract:

A fear of imminent information overload predates the World Wide Web by decades. Yet that fear has never abated. Worse, as the World Wide Web today takes the lion’s share of the information we deal with, both in amount and in time spent gathering it, the situation has only become more precarious. This chapter analyses new issues in information overload that have emerged with the advent of the Web, which emphasizes written communication, defined in this context as the exchange of ideas expressed informally, often casually, as in verbal language. The chapter focuses on three ways to mitigate these issues: first, helping us, the users, to be more specific in what we ask for; second, helping us amend our request when we don't get what we think we asked for; and third, since only we, the human users, can judge whether the information received is what we want, making retrieval techniques more effective by basing them on how humans structure information. This chapter reports on extensive experiments we conducted in all three areas. First, to let users be more specific in describing an information need, they were allowed to express themselves in an unrestricted conversational style. This way, they could convey their information need as if they were talking to a fellow human instead of using the two or three words typically supplied to a search engine. Second, users were provided with effective ways to zoom in on the desired information once potentially relevant information became available. Third, a variety of experiments focused on the search engine itself as the mediator between request and delivery of information. All examples that are explained in detail have actually been implemented. The results of our experiments demonstrate how a human-centered approach can reduce information overload in an area that grows in importance with each day that passes. By actually having built these applications, I present an operational, not merely aspirational, approach.
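
As a concrete (and deliberately simplistic) illustration of the three-step workflow described here, the sketch below takes a conversational request, strips it down to content terms, ranks documents by term overlap, and lets the user amend the request once results are visible. Everything in it (the stopword list, the overlap score, the refine step, the sample documents) is an assumption for illustration, not the retrieval techniques actually built for the chapter:

    import re
    from collections import Counter

    STOPWORDS = {"the", "a", "an", "of", "to", "in", "on", "and", "i", "am",
                 "for", "about", "is", "are", "me", "something", "looking"}

    def content_terms(text):
        """Reduce a free-form, conversational request to its content words."""
        return [w for w in re.findall(r"[a-z]+", text.lower()) if w not in STOPWORDS]

    def rank(documents, query):
        """Score documents by overlap with the query's content terms."""
        q = Counter(content_terms(query))
        scored = []
        for doc_id, text in documents.items():
            d = Counter(content_terms(text))
            scored.append((sum(min(q[t], d[t]) for t in q), doc_id))
        return [doc_id for score, doc_id in sorted(scored, reverse=True) if score > 0]

    def refine(query, add=(), exclude=()):
        """Amend the request after seeing results: add terms, drop misleading ones."""
        kept = [t for t in content_terms(query) if t not in set(exclude)]
        return " ".join(kept + list(add))

    docs = {"d1": "sensory overload and information processing in infants",
            "d2": "coping with information overload on the world wide web"}
    query = "I am looking for something about information overload on the web"
    print(rank(docs, query))                          # d2 ranks first
    print(rank(docs, refine(query, add=["coping"])))  # the user zooms in on coping strategies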

Relevance: 100.00%

Publisher:

Abstract:

The primary aim of this paper was to investigate heterogeneity in the language abilities of children with a confirmed diagnosis of an ASD (N = 20) and children with typical development (TD; N = 15). Group comparisons revealed no differences between ASD and TD participants on standard clinical assessments of language ability, reading ability or nonverbal intelligence. However, a hierarchical cluster analysis based on spoken nonword repetition and sentence repetition identified two clusters within the combined group of ASD and TD participants. The first cluster (N = 6) showed significantly poorer performance than the second cluster (N = 29) on both of the clustering variables as well as on single word and nonword reading. These differences between the two clusters occurred in the context of Cluster 1 showing language impairment and a tendency towards more severe autistic symptomatology. Differences between the oral language abilities of the first and second clusters are considered in light of diagnosis, attention and verbal short-term memory skills, and reading impairment.
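
For readers unfamiliar with the method, the kind of analysis described (hierarchical clustering of participants on two repetition measures, cut at two clusters) looks roughly like the sketch below. The data are synthetic placeholders, and Ward linkage plus the two-cluster cut are assumptions, not necessarily the study's choices:

    import numpy as np
    from scipy.cluster.hierarchy import linkage, fcluster

    rng = np.random.default_rng(0)
    # Columns: nonword-repetition score, sentence-repetition score (z-scored, synthetic).
    scores = np.vstack([
        rng.normal(loc=-1.5, scale=0.5, size=(6, 2)),   # a lower-performing subgroup
        rng.normal(loc=0.3, scale=0.5, size=(29, 2)),   # the remaining participants
    ])

    tree = linkage(scores, method="ward")
    labels = fcluster(tree, t=2, criterion="maxclust")  # cut the dendrogram into 2 clusters
    for k in (1, 2):
        members = scores[labels == k]
        print(f"cluster {k}: n={len(members)}, mean scores={members.mean(axis=0).round(2)}")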

Relevance: 100.00%

Publisher:

Abstract:

It is well established that the time to name target objects can be influenced by the presence of categorically related versus unrelated distractor items. A variety of paradigms have been developed to determine the level at which this semantic interference effect occurs in the speech production system. In this study, we investigated one of these tasks, the postcue naming paradigm, for the first time with fMRI. Previous behavioural studies using this paradigm have produced conflicting interpretations of the processing level at which the semantic interference effect takes place, ranging from pre- to post-lexical. Here we used fMRI with a sparse, event-related design to adjudicate between these competing explanations. We replicated the behavioural postcue naming effect for categorically related target/distractor pairs, and observed a corresponding increase in neuronal activation in the right lingual and fusiform gyri, regions previously associated with visual object processing and colour-form integration. We interpret these findings as being consistent with an account that places the semantic interference effect in the postcue paradigm at a processing level involving integration of object attributes in short-term memory.

Relevance: 100.00%

Publisher:

Abstract:

This paper investigates how neuronal activation for naming photographs of objects is influenced by the addition of appropriate colour or sound. Behaviourally, both colour and sound are known to facilitate object recognition from visual form. However, previous functional imaging studies have shown inconsistent effects. For example, the addition of appropriate colour has been shown to reduce antero-medial temporal activation whereas the addition of sound has been shown to increase posterior superior temporal activation. Here we compared the effect of adding colour or sound cues in the same experiment. We found that the addition of either the appropriate colour or sound increased activation for naming photographs of objects in bilateral occipital regions and the right anterior fusiform. Moreover, the addition of colour reduced left antero-medial temporal activation but this effect was not observed for the addition of object sound. We propose that activation in bilateral occipital and right fusiform areas precedes the integration of visual form with either its colour or associated sound. In contrast, left antero-medial temporal activation is reduced because object recognition is facilitated after colour and form have been integrated.

Relevance: 100.00%

Publisher:

Abstract:

In this study we investigate previous claims that a region in the left posterior superior temporal sulcus (pSTS) is more activated by audiovisual than unimodal processing. First, we compare audiovisual to visual-visual and auditory-auditory conceptual matching using auditory or visual object names that are paired with pictures of objects or their environmental sounds. Second, we compare congruent and incongruent audiovisual trials when presentation is simultaneous or sequential. Third, we compare audiovisual stimuli that are either verbal (auditory and visual words) or nonverbal (pictures of objects and their associated sounds). The results demonstrate that, when task, attention, and stimuli are controlled, pSTS activation for audiovisual conceptual matching is 1) identical to that observed for intramodal conceptual matching, 2) greater for incongruent than congruent trials when auditory and visual stimuli are simultaneously presented, and 3) identical for verbal and nonverbal stimuli. These results are not consistent with previous claims that pSTS activation reflects the active formation of an integrated audiovisual representation. After a discussion of the stimulus and task factors that modulate activation, we conclude that, when stimulus input, task, and attention are controlled, pSTS is part of a distributed set of regions involved in conceptual matching, irrespective of whether the stimuli are audiovisual, auditory-auditory or visual-visual.

Relevance: 100.00%

Publisher:

Abstract:

This fMRI study investigates how audiovisual integration differs for verbal stimuli that can be matched at a phonological level and nonverbal stimuli that can be matched at a semantic level. Subjects were presented simultaneously with one visual and one auditory stimulus and were instructed to decide whether these stimuli referred to the same object or not. Verbal stimuli were simultaneously presented spoken and written object names, and nonverbal stimuli were photographs of objects simultaneously presented with naturally occurring object sounds. Stimulus differences were controlled by including two further conditions that paired photographs of objects with spoken words and object sounds with written words. Verbal matching, relative to all other conditions, increased activation in a region of the left superior temporal sulcus that has previously been associated with phonological processing. Nonverbal matching, relative to all other conditions, increased activation in a right fusiform region that has previously been associated with structural and conceptual object processing. Thus, we demonstrate how brain activation for audiovisual integration depends on the verbal content of the stimuli, even when stimulus and task processing differences are controlled.

Relevance: 100.00%

Publisher:

Abstract:

To identify and categorize complex stimuli such as familiar objects or speech, the human brain integrates information that is abstracted at multiple levels from its sensory inputs. Using cross-modal priming for spoken words and sounds, this functional magnetic resonance imaging study identified three distinct classes of visuoauditory incongruency effects, selective for 1) spoken words in the left superior temporal sulcus (STS), 2) environmental sounds in the left angular gyrus (AG), and 3) both words and sounds in the lateral and medial prefrontal cortices (IFS/mPFC). From a cognitive perspective, these incongruency effects suggest that prior visual information influences the neural processes underlying speech and sound recognition at multiple levels, with the STS being involved in phonological, the AG in semantic, and the mPFC/IFS in higher conceptual processing. In terms of neural mechanisms, effective connectivity analyses (dynamic causal modeling) suggest that these incongruency effects may emerge via greater bottom-up effects from early auditory regions to intermediate multisensory integration areas (i.e., the STS and AG). This is consistent with a predictive coding perspective on hierarchical Bayesian inference in the cortex, where the domain of the prediction error (phonological vs. semantic) determines its regional expression (middle temporal gyrus/STS vs. AG/intraparietal sulcus).
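
The predictive-coding reading can be made explicit with the generic (textbook) formulation below; this illustrates the framework the authors invoke, not the equations of their dynamic causal models. Here u_i is the bottom-up input at level i of the hierarchy, v_{i+1} the higher-level cause, g a top-down generative mapping, and \Pi_i the precision of level-i errors:

    \[
    \varepsilon_i \;=\; u_i - g\!\left(v_{i+1}\right), \qquad
    F \;=\; \sum_i \tfrac{1}{2}\, \varepsilon_i^{\top} \Pi_i\, \varepsilon_i .
    \]

Recognition minimizes F, so a visual prime that correctly predicts the upcoming auditory input leaves little residual error, whereas an incongruent prime leaves a large \varepsilon_i at whichever level the prediction fails, phonological or semantic, matching the regionally specific incongruency effects reported above.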

Relevance: 100.00%

Publisher:

Abstract:

Previous studies have found that the lateral posterior fusiform gyri respond more robustly to pictures of animals than pictures of manmade objects and suggested that these regions encode the visual properties characteristic of animals. We suggest that such effects actually reflect processing demands arising when items with similar representations must be finely discriminated. In a positron emission tomography (PET) study of category verification with colored photographs of animals and vehicles, there was robust animal-specific activation in the lateral posterior fusiform gyri when stimuli were categorized at an intermediate level of specificity (e.g., dog or car). However, when the same photographs were categorized at a more specific level (e.g., Labrador or BMW), these regions responded equally strongly to animals and vehicles. We conclude that the lateral posterior fusiform does not encode domain-specific representations of animals or visual properties characteristic of animals. Instead, these regions are strongly activated whenever an item must be discriminated from many close visual or semantic competitors. Apparent category effects arise because, at an intermediate level of specificity, animals have more visual and semantic competitors than do artifacts.

Relevance: 100.00%

Publisher:

Abstract:

Asking why is an important foundation of inquiry and fundamental to the development of reasoning skills and learning. Despite this, and despite the relentless and often disruptive nature of innovations in information and communications technology (ICT), sophisticated tools that directly support this basic act of learning appear to be undeveloped, not yet recognized, or in the very early stages of development. Why is this so? To this question, there is no single factual answer. In response, however, plausible explanations and further questions arise, and such responses are shown to be typical consequences of why-questioning. A range of contemporary scenarios are presented to highlight the problem. Consideration of the various inputs into the evolution of digital learning is introduced to provide historical context and this serves to situate further discussion regarding innovation that supports inquiry-based learning. This theme is further contextualized by narratives on openness in education, in which openness is also shown to be an evolving construct. Explanatory and descriptive contents are differentiated in order to scope out the kinds of digital tools that might support inquiry instigated by why-questioning and which move beyond the search paradigm. Probing why from a linguistic perspective reveals versatile and ambiguous semantics. The why dimension—asking, learning, knowing, understanding, and explaining why—is introduced as a construct that highlights challenges and opportunities for ICT innovation. By linking reflective practice and dialogue with cognitive engagement, this chapter points to specific frontiers for the design and development of digital learning tools, frontiers in which inquiry may find new openings for support.