856 results for Linguistic typology
Abstract:
The advanced era of knowledge-based urban development has led to an unprecedented increase in the mobility of people and, with it, the growth of a new typology of agglomerated knowledge enclaves such as urban knowledge precincts. The contemporary public spaces of these precincts have been assigned a new role: to attract and retain the mobile knowledge workforce over the long term by creating a sense of place for them. This paper sheds light on place making in the globalised knowledge economy, which develops a spatio-temporal sense of permanence for knowledge workers by displaying a set of particular characteristics, while at the same time being process-dependent: it is shaped by internal and external flows and contributes substantially to the development of the broader context with which it stands in relation. The paper draws on observations from Australia's new world city, Brisbane, to outline the application of urban design as a tool to create and sustain this bipartite place making in urban knowledge precincts, catering to a diverse range of social, cultural and democratic needs. It analyses the modified permeable typology of public spaces that makes them more viable and adaptive to the changing needs of the contemporary globalised, or knowledge, society. The research takes an overall process-based approach, reflecting how urban design is an assemblage of the encompassing processes that underlie the resultant place making. It explores how the permeable design typology of these contemporary precincts in Brisbane develops a progressive sense of place that makes them stimulating, effervescent and vibrant.
Abstract:
The primary aim of this paper was to investigate heterogeneity in the language abilities of children with a confirmed diagnosis of an ASD (N = 20) and children with typical development (TD; N = 15). Group comparisons revealed no differences between ASD and TD participants on standard clinical assessments of language ability, reading ability or nonverbal intelligence. However, a hierarchical cluster analysis based on spoken nonword repetition and sentence repetition identified two clusters within the combined group of ASD and TD participants. The first cluster (N = 6) performed significantly more poorly than the second cluster (N = 29) on both of the clustering variables as well as on single word and nonword reading. These differences occurred in the context of Cluster 1 having language impairment and a tendency towards more severe autistic symptomatology. Differences between the oral language abilities of the two clusters are considered in light of diagnosis, attention, verbal short-term memory skills and reading impairment.
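For readers unfamiliar with the method, a hierarchical cluster analysis of this kind can be sketched roughly as follows. The simulated, z-scored repetition scores and the choice of Ward linkage are assumptions made for the illustration, not details reported in the abstract.

```python
# Minimal sketch of a hierarchical cluster analysis on two repetition
# measures for 35 children (20 ASD + 15 TD). Scores are simulated and
# z-scored; Ward linkage is an assumption, not the authors' pipeline.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(0)
# Simulated scores: a small low-performing group and a larger typical group
# (columns: nonword repetition, sentence repetition).
low = rng.normal(loc=-1.5, scale=0.4, size=(6, 2))
high = rng.normal(loc=0.3, scale=0.6, size=(29, 2))
scores = np.vstack([low, high])

# Agglomerative clustering on the two variables, cut into two clusters.
tree = linkage(scores, method="ward")
labels = fcluster(tree, t=2, criterion="maxclust")

for c in (1, 2):
    members = scores[labels == c]
    print(f"Cluster {c}: n = {len(members)}, "
          f"mean nonword = {members[:, 0].mean():.2f}, "
          f"mean sentence = {members[:, 1].mean():.2f}")
```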
Abstract:
The provision of visual support to individuals with an autism spectrum disorder (ASD) is widely recommended. We explored one mechanism underlying the use of visual supports: efficiency of language processing. Two groups of children, one with and one without an ASD, participated. The groups had comparable oral and written language skills and nonverbal cognitive abilities. In two semantic priming experiments, prime modality and prime–target relatedness were manipulated, and the response time and accuracy of lexical decisions on spoken word targets were measured. In the first, uni-modal experiment, both groups demonstrated significant priming effects. In the second, cross-modal experiment, no effect of relatedness or group was found. This result is considered in the light of the attentional capacity required for access to the lexicon via written stimuli within the developing semantic system. These preliminary findings are also considered with respect to the use of visual support for children with ASD.
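As a rough illustration of how such priming effects are typically quantified, the sketch below computes the mean lexical-decision response-time difference (unrelated minus related) per group and prime modality. The column names and toy numbers are hypothetical and only mimic the reported pattern (uni-modal priming, no cross-modal effect); they are not the study's data.

```python
# Sketch of a standard priming-effect computation on toy data.
import pandas as pd

trials = pd.DataFrame({
    "group":       ["ASD"] * 4 + ["TD"] * 4,
    "modality":    ["uni", "uni", "cross", "cross"] * 2,
    "relatedness": ["related", "unrelated"] * 4,
    "rt_ms":       [620, 672, 698, 702, 605, 660, 690, 693],
})

# Mean RT per cell, then priming effect = unrelated - related.
means = (trials.groupby(["group", "modality", "relatedness"])["rt_ms"]
               .mean()
               .unstack("relatedness"))
means["priming_ms"] = means["unrelated"] - means["related"]
print(means)
```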
Abstract:
It is well established that the time to name target objects can be influenced by the presence of categorically related versus unrelated distractor items. A variety of paradigms have been developed to determine the level at which this semantic interference effect occurs in the speech production system. In this study, we investigated one of these tasks, the postcue naming paradigm, for the first time with fMRI. Previous behavioural studies using this paradigm have produced conflicting interpretations of the processing level at which the semantic interference effect takes place, ranging from pre- to post-lexical. Here we used fMRI with a sparse, event-related design to adjudicate between these competing explanations. We replicated the behavioural postcue naming effect for categorically related target/distractor pairs, and observed a corresponding increase in neuronal activation in the right lingual and fusiform gyri, regions previously associated with visual object processing and colour-form integration. We interpret these findings as being consistent with an account that places the semantic interference effect in the postcue paradigm at a processing level involving integration of object attributes in short-term memory.
Abstract:
Previous behavioral studies reported a robust effect of increased naming latencies when objects to be named were blocked within semantic category, compared to items blocked between categories. This semantic context effect has been attributed to various mechanisms including inhibition or excitation of lexico-semantic representations and incremental learning of associations between semantic features and names, and is hypothesized to increase demands on verbal self-monitoring during speech production. Objects within categories also share many visual structural features, introducing a potential confound when interpreting the level at which the context effect might occur. Consistent with previous findings, we report a significant increase in response latencies when naming categorically related objects within blocks, an effect associated with increased perfusion fMRI signal bilaterally in the hippocampus and in the left middle to posterior superior temporal cortex. No perfusion changes were observed in the middle section of the left middle temporal cortex, a region associated with retrieval of lexical-semantic information in previous object naming studies. Although a manipulation of visual feature similarity did not influence naming latencies, we observed perfusion increases in the perirhinal cortex for naming objects with similar visual features that interacted with the semantic context in which objects were named. These results provide support for the view that the semantic context effect in object naming occurs due to an incremental learning mechanism, and involves increased demands on verbal self-monitoring.
Abstract:
Semantic knowledge is supported by a widely distributed neuronal network, with differential patterns of activation depending upon experimental stimulus or task demands. Despite a wide body of knowledge on semantic object processing from the visual modality, the response of this semantic network to environmental sounds remains relatively unknown. Here, we used fMRI to investigate how access to different conceptual attributes from environmental sound input modulates this semantic network. Using a range of living and manmade sounds, we scanned participants whilst they carried out an object attribute verification task. Specifically, we tested visual perceptual, encyclopedic, and categorical attributes about living and manmade objects relative to a high-level auditory perceptual baseline to investigate the differential patterns of response to these contrasting types of object-related attributes, whilst keeping stimulus input constant across conditions. Within the bilateral distributed network engaged for processing environmental sounds across all conditions, we report here a highly significant dissociation within the left hemisphere between the processing of visual perceptual and encyclopedic attributes of objects.
Abstract:
This paper investigates how neuronal activation for naming photographs of objects is influenced by the addition of appropriate colour or sound. Behaviourally, both colour and sound are known to facilitate object recognition from visual form. However, previous functional imaging studies have shown inconsistent effects. For example, the addition of appropriate colour has been shown to reduce antero-medial temporal activation whereas the addition of sound has been shown to increase posterior superior temporal activation. Here we compared the effect of adding colour or sound cues in the same experiment. We found that the addition of either the appropriate colour or sound increased activation for naming photographs of objects in bilateral occipital regions and the right anterior fusiform. Moreover, the addition of colour reduced left antero-medial temporal activation but this effect was not observed for the addition of object sound. We propose that activation in bilateral occipital and right fusiform areas precedes the integration of visual form with either its colour or associated sound. In contrast, left antero-medial temporal activation is reduced because object recognition is facilitated after colour and form have been integrated.
Abstract:
In this study we investigate previous claims that a region in the left posterior superior temporal sulcus (pSTS) is more activated by audiovisual than unimodal processing. First, we compare audiovisual to visual-visual and auditory-auditory conceptual matching using auditory or visual object names that are paired with pictures of objects or their environmental sounds. Second, we compare congruent and incongruent audiovisual trials when presentation is simultaneous or sequential. Third, we compare audiovisual stimuli that are either verbal (auditory and visual words) or nonverbal (pictures of objects and their associated sounds). The results demonstrate that, when task, attention, and stimuli are controlled, pSTS activation for audiovisual conceptual matching is 1) identical to that observed for intramodal conceptual matching, 2) greater for incongruent than congruent trials when auditory and visual stimuli are simultaneously presented, and 3) identical for verbal and nonverbal stimuli. These results are not consistent with previous claims that pSTS activation reflects the active formation of an integrated audiovisual representation. After a discussion of the stimulus and task factors that modulate activation, we conclude that, when stimulus input, task, and attention are controlled, pSTS is part of a distributed set of regions involved in conceptual matching, irrespective of whether the stimuli are audiovisual, auditory-auditory or visual-visual.
Abstract:
This fMRI study investigates how audiovisual integration differs for verbal stimuli that can be matched at a phonological level and nonverbal stimuli that can be matched at a semantic level. Subjects were presented simultaneously with one visual and one auditory stimulus and were instructed to decide whether these stimuli referred to the same object or not. Verbal stimuli were simultaneously presented spoken and written object names, and nonverbal stimuli were photographs of objects simultaneously presented with naturally occurring object sounds. Stimulus differences were controlled by including two further conditions that paired photographs of objects with spoken words and object sounds with written words. Verbal matching, relative to all other conditions, increased activation in a region of the left superior temporal sulcus that has previously been associated with phonological processing. Nonverbal matching, relative to all other conditions, increased activation in a right fusiform region that has previously been associated with structural and conceptual object processing. Thus, we demonstrate how brain activation for audiovisual integration depends on the verbal content of the stimuli, even when stimulus and task processing differences are controlled.
Abstract:
To identify and categorize complex stimuli such as familiar objects or speech, the human brain integrates information that is abstracted at multiple levels from its sensory inputs. Using cross-modal priming for spoken words and sounds, this functional magnetic resonance imaging study identified 3 distinct classes of visuoauditory incongruency effects: visuoauditory incongruency effects were selective for 1) spoken words in the left superior temporal sulcus (STS), 2) environmental sounds in the left angular gyrus (AG), and 3) both words and sounds in the lateral and medial prefrontal cortices (IFS/mPFC). From a cognitive perspective, these incongruency effects suggest that prior visual information influences the neural processes underlying speech and sound recognition at multiple levels, with the STS being involved in phonological, AG in semantic, and mPFC/IFS in higher conceptual processing. In terms of neural mechanisms, effective connectivity analyses (dynamic causal modeling) suggest that these incongruency effects may emerge via greater bottom-up effects from early auditory regions to intermediate multisensory integration areas (i.e., STS and AG). This is consistent with a predictive coding perspective on hierarchical Bayesian inference in the cortex where the domain of the prediction error (phonological vs. semantic) determines its regional expression (middle temporal gyrus/STS vs. AG/intraparietal sulcus).
Abstract:
Previous studies have found that the lateral posterior fusiform gyri respond more robustly to pictures of animals than pictures of manmade objects and suggested that these regions encode the visual properties characteristic of animals. We suggest that such effects actually reflect processing demands arising when items with similar representations must be finely discriminated. In a positron emission tomography (PET) study of category verification with colored photographs of animals and vehicles, there was robust animal-specific activation in the lateral posterior fusiform gyri when stimuli were categorized at an intermediate level of specificity (e.g., dog or car). However, when the same photographs were categorized at a more specific level (e.g., Labrador or BMW), these regions responded equally strongly to animals and vehicles. We conclude that the lateral posterior fusiform does not encode domain-specific representations of animals or visual properties characteristic of animals. Instead, these regions are strongly activated whenever an item must be discriminated from many close visual or semantic competitors. Apparent category effects arise because, at an intermediate level of specificity, animals have more visual and semantic competitors than do artifacts.
Abstract:
Studies of semantic impairment arising from brain disease suggest that the anterior temporal lobes are critical for semantic abilities in humans; yet activation of these regions is rarely reported in functional imaging studies of healthy controls performing semantic tasks. Here, we combined neuropsychological and PET functional imaging data to show that when healthy subjects identify concepts at a specific level, the regions activated correspond to the site of maximal atrophy in patients with relatively pure semantic impairment. The stimuli were color photographs of common animals or vehicles, and the task was category verification at specific (e.g., robin), intermediate (e.g., bird), or general (e.g., animal) levels. Specific, relative to general, categorization activated the antero-lateral temporal cortices bilaterally, despite matching of these experimental conditions for difficulty. Critically, in patients with atrophy in precisely these areas, the most pronounced deficit was in the retrieval of specific semantic information.
Abstract:
The design and development of process-aware information systems is often supported by specifying requirements as business process models. Although this approach is generally accepted as an effective strategy, it remains a fundamental challenge to adequately validate these models given the diverging skill set of domain experts and system analysts. As domain experts often do not feel confident in judging the correctness and completeness of process models that system analysts create, the validation often has to regress to a discourse using natural language. In order to support such a discourse appropriately, so-called verbalization techniques have been defined for different types of conceptual models. However, there is currently no sophisticated technique available that is capable of generating natural-looking text from process models. In this paper, we address this research gap and propose a technique for generating natural language texts from business process models. A comparison with manually created process descriptions demonstrates that the generated texts are superior in terms of completeness, structure, and linguistic complexity. An evaluation with users further demonstrates that the texts are very understandable and effectively allow the reader to infer the process model semantics. Hence, the generated texts represent a useful input for process model validation.
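As a rough illustration of the general idea of verbalizing a process model (not the specific technique proposed in the paper), the sketch below renders a small sequence of activities and a conditional branch as natural-language sentences using simple templates; all role, verb and object names are hypothetical.

```python
# Template-based verbalization of a toy process model: each activity
# (role, verb, object) becomes a sentence, and an activity that follows
# an exclusive gateway becomes a conditional clause. Illustration only.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Activity:
    role: str
    verb: str
    obj: str
    condition: Optional[str] = None  # set when the activity follows an exclusive gateway

def verbalize(activities: list) -> str:
    sentences = []
    for i, a in enumerate(activities):
        clause = f"the {a.role} {a.verb}s the {a.obj}"
        if a.condition:
            sentence = f"If {a.condition}, {clause}."
        elif i == 0:
            sentence = f"The process starts when {clause}."
        else:
            sentence = f"Afterwards, {clause}."
        sentences.append(sentence)
    return " ".join(sentences)

print(verbalize([
    Activity("clerk", "record", "claim"),
    Activity("manager", "review", "claim"),
    Activity("clerk", "reject", "claim", condition="the claim is incomplete"),
]))
```

Real approaches additionally handle parallelism, loops and sentence aggregation, but the mapping from model elements to textual templates shown here conveys the basic principle.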
Abstract:
This paper examines a practically ubiquitous, yet largely overlooked, source of city marketing, the official city homepage. The extent to which local governments use the Web as a marketing tool is explored through a comparative analysis of the images featured on the city, convention, and visitors bureau homepages in large and medium-sized U.S. cities. The article goes on to analyze the ways in which the city homepages reflect the population, geography, and built environment of a city and, through a typology of marketing themes found on the city homepages, to suggest the range of ways they may package images of city spaces to communicate a brand identity. The research contributes to an understanding of the ways in which municipalities may attempt to represent the city and suggests that most city homepage imagery is oriented toward marketing goals of tourism and attracting and retaining residents and businesses.
Abstract:
The data-oriented empirical research on the Chinese adverb "ke" has led to the conclusion that the semantics of the word as a modal adverb is always two-fold: it marks both "contrast" and "emphasis". "Adversativity" as used in the literature on "ke" is but one type of contrast marked by "ke". Other types of contrast marked by "ke" in declarative sentences include: a) what is assumed by the hearer and what the truth of the matter is; b) what the sentence literally talks about and what it also implicitly conveys; and c) the original wishful nature of the stated action and its final realization. In all declarative sentences, what the adverb emphasizes is the "factuality" of what is stated. Chinese Abstract [提要]: The practice of teaching Chinese as a foreign language shows that the adverb 可 ("ke") is a difficult point of instruction, which is related to the lack of a comprehensive and accurate understanding of its semantic content. In order to fully reveal the core semantics of the adverb 可, the author took the first twenty episodes of the television series 《渴望》 as the primary corpus, supplemented it with material from other television dramas, television programmes and examples already available in the literature, and carried out an extensive, inductive study of 可 across a wide range of contexts. The results show that the core semantics of 可 as a modal adverb is not singular: it always marks "contrast" (i.e. "difference") while simultaneously expressing emphasis, and what it emphasizes is the "factuality" or "finality" of the stated content. Owing to space limitations, this paper discusses the modal adverb 可 only in declarative sentences.