171 results for Linguistic alternance
Abstract:
We present an approach to automatically de-identify health records. In our approach, personal health information is identified using a Conditional Random Fields machine learning classifier, a large set of linguistic and lexical features, and pattern matching techniques. Identified personal information is then removed from the reports. The de-identification of personal health information is fundamental for the sharing and secondary use of electronic health records, for example for data mining and disease monitoring. The effectiveness of our approach is first evaluated on the 2007 i2b2 Shared Task dataset, a widely adopted dataset for evaluating de-identification techniques. Subsequently, we investigate the robustness of the approach to limited training data, and we study its effectiveness on data of different types and quality by evaluating the approach on scanned pathology reports from an Australian institution. These data contain optical character recognition errors, as well as linguistic conventions that differ from those in the i2b2 dataset, for example different date formats. The findings suggest that our approach is comparable to the best approach from the 2007 i2b2 Shared Task; in addition, the approach is found to be robust to variations in training size, data type and quality, in the presence of sufficient training data.
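The pattern-matching side of a pipeline like this can be illustrated with a few regular expressions. The sketch below is hypothetical: the categories, regexes, and placeholder format are illustrative and are not taken from the system described in the abstract, which combines such rules with a Conditional Random Fields classifier.

```python
import re

# Illustrative rule-based patterns for a few personal-health-identifier
# types; a real system would pair these with a trained CRF classifier.
PHI_PATTERNS = {
    "DATE": re.compile(
        r"\b(?:\d{1,2}[/-]\d{1,2}[/-]\d{2,4}"   # e.g. 12/03/2007 or 12-03-07
        r"|\d{4}-\d{2}-\d{2})\b"                 # e.g. 2007-03-12
    ),
    "PHONE": re.compile(r"\b\d{3}[- ]\d{3}[- ]\d{4}\b"),
    "MRN": re.compile(r"\bMRN[: ]\s*\d{6,}\b"),  # hypothetical record-number format
}

def de_identify(text: str) -> str:
    """Replace each matched identifier with its category placeholder."""
    for label, pattern in PHI_PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text
```

For example, `de_identify("Seen on 12/03/2007, MRN: 1234567")` returns `"Seen on [DATE], [MRN]"`. Rules of this kind handle the regular, format-driven identifiers (such as the differing date formats the abstract mentions), leaving irregular identifiers such as names to the statistical classifier.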
Abstract:
The affective communication patterns of conversations on Twitter can provide insights into the culture of online communities. In this paper we apply a combined quantitative and qualitative approach to investigate the structural make-up and emotional content of tweeting activity around the hashtag #auspol (for Australian politics) in order to highlight the polarity and conservatism that characterise this highly active community of politically engaged individuals. We document the centralised structure of this particular community, which is based around a deeply committed core of contributors. Through in-depth content analysis of the tweets of participants in the online debate we explore the communicative tone, patterns of engagement and thematic drivers that shape the affective character of the community and their effect on its cohesiveness. In this way we provide a comprehensive account of the complex techno-social, linguistic and cultural factors involved in conversations that are shaped in the Twittersphere.
Abstract:
Objective: To evaluate the effectiveness and robustness of Anonym, a tool for de-identifying free-text health records based on conditional random fields classifiers informed by linguistic and lexical features, as well as features extracted by pattern matching techniques. De-identification of personal health information in electronic health records is essential for the sharing and secondary use of clinical data. De-identification tools that adapt to different sources of clinical data are attractive as they would require minimal intervention to guarantee high effectiveness. Methods and Materials: The effectiveness and robustness of Anonym are evaluated across multiple datasets, including the widely adopted Integrating Biology and the Bedside (i2b2) dataset, used for evaluation in a de-identification challenge. The datasets used here vary in the type of health records, source of data, and their quality, with one of the datasets containing optical character recognition errors. Results: Anonym identifies and removes up to 96.6% of personal health identifiers (recall) with a precision of up to 98.2% on the i2b2 dataset, outperforming the best system proposed in the i2b2 challenge. The effectiveness of Anonym across datasets is found to depend on the amount of information available for training. Conclusion: Findings show that Anonym is comparable to the best approach from the 2006 i2b2 shared task. It is easy to retrain Anonym with new datasets; if retrained, the system is robust to variations in training size, data type and quality, in the presence of sufficient training data.
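The "linguistic and lexical features" that inform a CRF de-identifier of this kind are typically computed per token. The extractor below is a hypothetical sketch of such features (the feature names and choices are illustrative, not Anonym's actual feature set):

```python
# Hypothetical token-level feature extractor of the kind that commonly
# feeds a CRF sequence labeller for de-identification.
def token_features(tokens, i):
    tok = tokens[i]
    return {
        "lower": tok.lower(),
        "is_title": tok.istitle(),   # capitalised tokens often begin names
        "is_digit": tok.isdigit(),   # bare numbers: ages, record IDs
        "suffix3": tok[-3:],         # lexical suffix feature
        # word shape: uppercase -> X, lowercase -> x, digit -> d
        "shape": "".join(
            "X" if c.isupper() else "x" if c.islower() else
            "d" if c.isdigit() else c
            for c in tok
        ),
        # immediate left/right context, with sentence-boundary markers
        "prev": tokens[i - 1].lower() if i > 0 else "<s>",
        "next": tokens[i + 1].lower() if i < len(tokens) - 1 else "</s>",
    }
```

Feature dictionaries like these (one per token) are what a CRF implementation consumes as input when learning to label each token as a personal health identifier or not.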
Abstract:
Guanxi has become a common term in the wider business environment and has attracted the increasing attention of researchers. Despite this, a consistent understanding of the concept continues to prove elusive. We review the extant business literature to highlight the major inconsistencies in the way guanxi is currently conceptualized: its breadth, linguistic-cultural depth, temporality, and level of analysis. We conclude with a clearer conceptualization of guanxi which separates the core elements from the antecedents and consequences of guanxi. Furthermore, we compare and contrast guanxi with western correlates such as social networks and social capital to further consolidate our understanding of guanxi.
Abstract:
It is well established that there are inherent difficulties involved in communicating across cultural boundaries. When these difficulties are encountered within the justice system, the innocent can be convicted and witnesses undermined. A large amount of research has been undertaken regarding the implications of miscommunication within the courtroom, but far less has been carried out on language and interactions between police and Indigenous Australians. It is necessary that officers of the law be made aware of linguistic issues to ensure they conduct their investigations in a fair, effective and therefore ethical manner. This paper draws on Cultural Schema Theory to illustrate how this could be achieved. The justice system is reliant upon the skills and knowledge of the police; this paper therefore highlights the need for research to focus on the linguistic and non-verbal differences between Australian Aboriginal English and Australian Standard English in order to develop techniques to facilitate effective communication.
Abstract:
The primary aim of this paper was to investigate heterogeneity in the language abilities of children with a confirmed diagnosis of an ASD (N = 20) and children with typical development (TD; N = 15). Group comparisons revealed no differences between ASD and TD participants on standard clinical assessments of language ability, reading ability or nonverbal intelligence. However, a hierarchical cluster analysis based on spoken nonword repetition and sentence repetition identified two clusters within the combined group of ASD and TD participants. The first cluster (N = 6) presented with significantly poorer performances than the second cluster (N = 29) on both of the clustering variables in addition to single word and nonword reading. The significant differences between the two clusters occur in a context where Cluster 1 shows language impairment and a tendency towards more severe autistic symptomatology. Differences between the oral language abilities of the first and second clusters are considered in light of diagnosis, attention and verbal short term memory skills and reading impairment.
Abstract:
The provision of visual support to individuals with an autism spectrum disorder (ASD) is widely recommended. We explored one mechanism underlying the use of visual supports: efficiency of language processing. Two groups of children, one with and one without an ASD, participated. The groups had comparable oral and written language skills and nonverbal cognitive abilities. In two semantic priming experiments, prime modality and prime-target relatedness were manipulated. Response time and accuracy of lexical decisions on the spoken word targets were measured. In the first, uni-modal experiment, both groups demonstrated significant priming effects. In the second, cross-modal experiment, no effect for relatedness or group was found. This result is considered in the light of the attentional capacity required for access to the lexicon via written stimuli within the developing semantic system. These preliminary findings are also considered with respect to the use of visual support for children with ASD.
Abstract:
It is well established that the time to name target objects can be influenced by the presence of categorically related versus unrelated distractor items. A variety of paradigms have been developed to determine the level at which this semantic interference effect occurs in the speech production system. In this study, we investigated one of these tasks, the postcue naming paradigm, for the first time with fMRI. Previous behavioural studies using this paradigm have produced conflicting interpretations of the processing level at which the semantic interference effect takes place, ranging from pre- to post-lexical. Here we used fMRI with a sparse, event-related design to adjudicate between these competing explanations. We replicated the behavioural postcue naming effect for categorically related target/distractor pairs, and observed a corresponding increase in neuronal activation in the right lingual and fusiform gyri, regions previously associated with visual object processing and colour-form integration. We interpret these findings as being consistent with an account that places the semantic interference effect in the postcue paradigm at a processing level involving integration of object attributes in short-term memory.
Abstract:
Previous behavioral studies reported a robust effect of increased naming latencies when objects to be named were blocked within semantic category, compared to items blocked between category. This semantic context effect has been attributed to various mechanisms including inhibition or excitation of lexico-semantic representations and incremental learning of associations between semantic features and names, and is hypothesized to increase demands on verbal self-monitoring during speech production. Objects within categories also share many visual structural features, introducing a potential confound when interpreting the level at which the context effect might occur. Consistent with previous findings, we report a significant increase in response latencies when naming categorically related objects within blocks, an effect associated with increased perfusion fMRI signal bilaterally in the hippocampus and in the left middle to posterior superior temporal cortex. No perfusion changes were observed in the middle section of the left middle temporal cortex, a region associated with retrieval of lexical-semantic information in previous object naming studies. Although a manipulation of visual feature similarity did not influence naming latencies, we observed perfusion increases in the perirhinal cortex for naming objects with similar visual features that interacted with the semantic context in which objects were named. These results provide support for the view that the semantic context effect in object naming occurs due to an incremental learning mechanism, and involves increased demands on verbal self-monitoring.
Abstract:
Semantic knowledge is supported by a widely distributed neuronal network, with differential patterns of activation depending upon experimental stimulus or task demands. Despite a wide body of knowledge on semantic object processing from the visual modality, the response of this semantic network to environmental sounds remains relatively unknown. Here, we used fMRI to investigate how access to different conceptual attributes from environmental sound input modulates this semantic network. Using a range of living and manmade sounds, we scanned participants whilst they carried out an object attribute verification task. Specifically, we tested visual perceptual, encyclopedic, and categorical attributes about living and manmade objects relative to a high-level auditory perceptual baseline to investigate the differential patterns of response to these contrasting types of object-related attributes, whilst keeping stimulus input constant across conditions. Within the bilateral distributed network engaged for processing environmental sounds across all conditions, we report here a highly significant dissociation within the left hemisphere between the processing of visual perceptual and encyclopedic attributes of objects.
Abstract:
This paper investigates how neuronal activation for naming photographs of objects is influenced by the addition of appropriate colour or sound. Behaviourally, both colour and sound are known to facilitate object recognition from visual form. However, previous functional imaging studies have shown inconsistent effects. For example, the addition of appropriate colour has been shown to reduce antero-medial temporal activation whereas the addition of sound has been shown to increase posterior superior temporal activation. Here we compared the effect of adding colour or sound cues in the same experiment. We found that the addition of either the appropriate colour or sound increased activation for naming photographs of objects in bilateral occipital regions and the right anterior fusiform. Moreover, the addition of colour reduced left antero-medial temporal activation but this effect was not observed for the addition of object sound. We propose that activation in bilateral occipital and right fusiform areas precedes the integration of visual form with either its colour or associated sound. In contrast, left antero-medial temporal activation is reduced because object recognition is facilitated after colour and form have been integrated.
Abstract:
In this study we investigate previous claims that a region in the left posterior superior temporal sulcus (pSTS) is more activated by audiovisual than unimodal processing. First, we compare audiovisual to visual-visual and auditory-auditory conceptual matching using auditory or visual object names that are paired with pictures of objects or their environmental sounds. Second, we compare congruent and incongruent audiovisual trials when presentation is simultaneous or sequential. Third, we compare audiovisual stimuli that are either verbal (auditory and visual words) or nonverbal (pictures of objects and their associated sounds). The results demonstrate that, when task, attention, and stimuli are controlled, pSTS activation for audiovisual conceptual matching is 1) identical to that observed for intramodal conceptual matching, 2) greater for incongruent than congruent trials when auditory and visual stimuli are simultaneously presented, and 3) identical for verbal and nonverbal stimuli. These results are not consistent with previous claims that pSTS activation reflects the active formation of an integrated audiovisual representation. After a discussion of the stimulus and task factors that modulate activation, we conclude that, when stimulus input, task, and attention are controlled, pSTS is part of a distributed set of regions involved in conceptual matching, irrespective of whether the stimuli are audiovisual, auditory-auditory or visual-visual.
Abstract:
This fMRI study investigates how audiovisual integration differs for verbal stimuli that can be matched at a phonological level and nonverbal stimuli that can be matched at a semantic level. Subjects were presented simultaneously with one visual and one auditory stimulus and were instructed to decide whether these stimuli referred to the same object or not. Verbal stimuli were simultaneously presented spoken and written object names, and nonverbal stimuli were photographs of objects simultaneously presented with naturally occurring object sounds. Stimulus differences were controlled by including two further conditions that paired photographs of objects with spoken words and object sounds with written words. Verbal matching, relative to all other conditions, increased activation in a region of the left superior temporal sulcus that has previously been associated with phonological processing. Nonverbal matching, relative to all other conditions, increased activation in a right fusiform region that has previously been associated with structural and conceptual object processing. Thus, we demonstrate how brain activation for audiovisual integration depends on the verbal content of the stimuli, even when stimulus and task processing differences are controlled.
Abstract:
To identify and categorize complex stimuli such as familiar objects or speech, the human brain integrates information that is abstracted at multiple levels from its sensory inputs. Using cross-modal priming for spoken words and sounds, this functional magnetic resonance imaging study identified 3 distinct classes of visuoauditory incongruency effects: visuoauditory incongruency effects were selective for 1) spoken words in the left superior temporal sulcus (STS), 2) environmental sounds in the left angular gyrus (AG), and 3) both words and sounds in the lateral and medial prefrontal cortices (IFS/mPFC). From a cognitive perspective, these incongruency effects suggest that prior visual information influences the neural processes underlying speech and sound recognition at multiple levels, with the STS being involved in phonological, AG in semantic, and mPFC/IFS in higher conceptual processing. In terms of neural mechanisms, effective connectivity analyses (dynamic causal modeling) suggest that these incongruency effects may emerge via greater bottom-up effects from early auditory regions to intermediate multisensory integration areas (i.e., STS and AG). This is consistent with a predictive coding perspective on hierarchical Bayesian inference in the cortex where the domain of the prediction error (phonological vs. semantic) determines its regional expression (middle temporal gyrus/STS vs. AG/intraparietal sulcus).
Abstract:
Previous studies have found that the lateral posterior fusiform gyri respond more robustly to pictures of animals than pictures of manmade objects and suggested that these regions encode the visual properties characteristic of animals. We suggest that such effects actually reflect processing demands arising when items with similar representations must be finely discriminated. In a positron emission tomography (PET) study of category verification with colored photographs of animals and vehicles, there was robust animal-specific activation in the lateral posterior fusiform gyri when stimuli were categorized at an intermediate level of specificity (e.g., dog or car). However, when the same photographs were categorized at a more specific level (e.g., Labrador or BMW), these regions responded equally strongly to animals and vehicles. We conclude that the lateral posterior fusiform does not encode domain-specific representations of animals or visual properties characteristic of animals. Instead, these regions are strongly activated whenever an item must be discriminated from many close visual or semantic competitors. Apparent category effects arise because, at an intermediate level of specificity, animals have more visual and semantic competitors than do artifacts.