866 results for Lexical semantics
Abstract:
Achieving a clearer picture of categorial distinctions in the brain is essential for our understanding of the conceptual lexicon, but much more fine-grained investigations are required in order for this evidence to contribute to lexical research. Here we present a collection of advanced data-mining techniques that allows the category of individual concepts to be decoded from single trials of EEG data. Neural activity was recorded while participants silently named images of mammals and tools, and category could be detected in single trials with an accuracy well above chance, both when considering data from single participants, and when group-training across participants. By aggregating across all trials, single concepts could be correctly assigned to their category with an accuracy of 98%. The pattern of classifications made by the algorithm confirmed that the neural patterns identified are due to conceptual category, and not any of a series of processing-related confounds. The time intervals, frequency bands and scalp locations that proved most informative for prediction permit physiological interpretation: the widespread activation shortly after appearance of the stimulus (from 100 ms) is consistent both with accounts of multi-pass processing, and distributed representations of categories. These methods provide an alternative to fMRI for fine-grained, large-scale investigations of the conceptual lexicon. © 2010 Elsevier Inc.
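To make the decoding-and-aggregation procedure concrete, the following is a minimal, hypothetical sketch (not the authors' pipeline): a scikit-learn classifier predicts a category for each EEG trial, and each concept is then assigned the label most often predicted across its trials. Data shapes, features and variable names are all assumptions.

```python
# Minimal sketch (not the paper's pipeline): single-trial category decoding
# followed by per-concept aggregation via majority vote across trials.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(0)

# Assumed toy data: 40 concepts (20 mammals, 20 tools), 12 trials each,
# each trial summarised as a 300-dimensional EEG feature vector.
n_concepts, n_trials, n_features = 40, 12, 300
concept_ids = np.repeat(np.arange(n_concepts), n_trials)
y = np.repeat(np.array([0] * 20 + [1] * 20), n_trials)   # 0 = mammal, 1 = tool
X = rng.normal(size=(n_concepts * n_trials, n_features))
X[y == 1, :50] += 0.3                                     # inject a weak class signal

# Single-trial predictions obtained via cross-validation.
clf = LogisticRegression(max_iter=1000)
trial_pred = cross_val_predict(clf, X, y, cv=5)
print("single-trial accuracy:", (trial_pred == y).mean())

# Aggregate: assign each concept the majority label over its trials.
concept_pred = np.array([np.bincount(trial_pred[concept_ids == c]).argmax()
                         for c in range(n_concepts)])
concept_true = np.array([0] * 20 + [1] * 20)
print("per-concept accuracy:", (concept_pred == concept_true).mean())
```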
Abstract:
We present three natural language marking strategies based on fast and reliable shallow parsing techniques, and on widely available lexical resources: lexical substitution, adjective conjunction swaps, and relativiser switching. We test these techniques on a random sample of the British National Corpus. Individual candidate marks are checked for goodness of structural and semantic fit, using both lexical resources and the web as a corpus. A representative sample of marks is given to 25 human judges to evaluate for acceptability and preservation of meaning. This establishes a correlation between corpus-based felicity measures and perceived quality, and allows qualified predictions to be made. Grammatical acceptability correlates strongly with our automatic measure (Pearson's r = 0.795, p = 0.001), allowing us to account for about two thirds of the variability in human judgements. A moderate but statistically non-significant correlation (Pearson's r = 0.422, p = 0.356) is found with judgements of meaning preservation, indicating that the contextual window of five content words used for our automatic measure may need to be extended. © 2007 SPIE-IS&T.
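The reported effect size can be unpacked: r = 0.795 gives r² ≈ 0.63, i.e. the automatic measure explains roughly two thirds of the variance in acceptability judgements, as claimed above. A short sketch with made-up scores shows the computation; only the final 0.795² figure comes from the abstract.

```python
# Sketch: relating Pearson's r to explained variance (r**2), with made-up data.
from scipy.stats import pearsonr

# Hypothetical automatic felicity scores and mean human acceptability ratings
# for ten candidate marks (illustrative numbers only).
automatic = [0.91, 0.82, 0.77, 0.69, 0.66, 0.58, 0.51, 0.44, 0.38, 0.30]
human = [4.8, 4.5, 4.6, 4.0, 3.7, 3.9, 3.1, 2.9, 2.6, 2.2]

r, p = pearsonr(automatic, human)
print(f"r = {r:.3f}, p = {p:.4f}, r^2 = {r * r:.2f}")

# For the paper's reported value: 0.795**2 ~= 0.63, i.e. the automatic measure
# accounts for about two thirds of the variability in acceptability judgements.
print(0.795 ** 2)
```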
Abstract:
Both embodied and symbolic accounts of conceptual organization would predict partial sharing and partial differentiation between the neural activations seen for concepts activated via different stimulus modalities. But cross-participant and cross-session variability in BOLD activity patterns makes analyses of such patterns with MVPA methods challenging. Here, we examine the effect of cross-modal and individual variation on the machine learning analysis of fMRI data recorded during a word property generation task. We presented the same set of living and non-living concepts (land mammals or work tools) to a cohort of Japanese participants in two sessions: the first using auditory presentation of spoken words; the second using visual presentation of words written in Japanese characters. Classification accuracies confirmed that these semantic categories could be detected in single trials, with within-session predictive accuracies of 80-90%. However, cross-session prediction (learning from auditory-task data to classify data from the written-word task, or vice versa) suffered a performance penalty, achieving 65-75% (still individually significant at p ≪ 0.05). We carried out several follow-on analyses to investigate the reason for this shortfall, concluding that distributional differences in neither time nor space alone could account for it. Rather, combined spatio-temporal patterns of activity need to be identified for successful cross-session learning, suggesting that feature selection strategies could be modified to take advantage of such patterns.
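The cross-session transfer setting can be sketched as follows: train a classifier on feature vectors from one (assumed) session and test on the other, comparing against within-session cross-validation. Everything below, including the data shapes, the injected session shift and the linear SVM, is an illustrative assumption, not the authors' analysis.

```python
# Sketch (illustrative only): within-session vs cross-session decoding.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import LinearSVC

rng = np.random.default_rng(1)

def fake_session(shift):
    """Toy fMRI-like features: 120 trials x 500 voxels, 2 categories, plus a
    session-specific distribution shift to mimic cross-session variability."""
    y = np.repeat([0, 1], 60)                      # 0 = living, 1 = non-living
    X = rng.normal(size=(120, 500))
    X[y == 1, :40] += 0.5                          # shared category signal
    X += shift                                     # session-specific offset
    return X, y

X_aud, y_aud = fake_session(shift=0.0)             # "auditory" session
X_vis, y_vis = fake_session(shift=0.4)             # "written-word" session

clf = make_pipeline(StandardScaler(), LinearSVC())

within = cross_val_score(clf, X_aud, y_aud, cv=5).mean()
clf.fit(X_aud, y_aud)                              # train on one session...
cross = clf.score(X_vis, y_vis)                    # ...test on the other
print(f"within-session ~ {within:.2f}, cross-session ~ {cross:.2f}")
```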
Abstract:
Most studies of conceptual knowledge in the brain focus on a narrow range of concrete conceptual categories, rely on the researchers' intuitions about which objects belong to these categories, and assume a broadly taxonomic organization of knowledge. In this fMRI study, we focus on concepts with a variety of concreteness levels; we use a state-of-the-art lexical resource (WordNet 3.1) as the source for a relatively large number of category distinctions, and compare a taxonomic style of organization with a domain-based model (associating concepts with scenarios). Participants mentally simulated situations associated with concepts when cued by text stimuli. Using multivariate pattern analysis, we find evidence that all Taxonomic categories and Domains can be distinguished from fMRI data, and also observe a clear concreteness effect: Tools and Locations can be reliably predicted for unseen participants, but less concrete categories (e.g., Attributes, Communications, Events, Social Roles) can only be reliably discriminated within participants. A second concreteness effect relates to the interaction of Domain and Taxonomic category membership: Domain (e.g., relation to Law vs. Music) can be better predicted for less concrete categories. We repeated the analysis within anatomical regions, observing discrimination between all or most categories in the left middle occipital and temporal gyri, and more specialized discrimination for the concrete categories Tool and Location in the left precentral and fusiform gyri, respectively. Highly concrete/abstract Taxonomic categories and Domain were segregated in frontal regions. We conclude that both Taxonomic and Domain class distinctions are relevant for interpreting the neural structuring of concrete and abstract concepts.
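As a rough illustration of using WordNet as a source of category distinctions (the study uses WordNet 3.1; the copy bundled with NLTK is typically WordNet 3.0, which is sufficient for a sketch), the snippet below reads off coarse taxonomic categories and hypernym chains for a few arbitrarily chosen nouns.

```python
# Sketch: reading off taxonomic category information for concepts from WordNet.
# Requires: pip install nltk; then nltk.download("wordnet") once.
from nltk.corpus import wordnet as wn

for word in ["hammer", "courthouse", "melody", "lawyer"]:
    synset = wn.synsets(word, pos=wn.NOUN)[0]      # first noun sense
    # WordNet lexicographer files act as coarse taxonomic categories,
    # e.g. noun.artifact, noun.location, noun.communication, noun.person.
    print(word, "->", synset.lexname())
    # Full hypernym path up to the root, for a finer-grained taxonomy.
    path = [s.name() for s in synset.hypernym_paths()[0]]
    print("   ", " > ".join(path))
```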
Abstract:
A core activity in information systems development involves understanding the conceptual model of the domain that the information system supports. Any conceptual model is ultimately created using a conceptual-modeling (CM) grammar. Accordingly, just as high-quality conceptual models facilitate high-quality systems development, high-quality CM grammars facilitate high-quality conceptual modeling. This paper seeks to provide a new perspective on improving the quality of CM grammar semantics. For the past twenty years, the leading approach to this topic has drawn on ontological theory. However, the ontological approach captures just half of the story; it needs to be coupled with a logical approach. We show how ontological quality and logical quality interrelate, and we outline three contributions of a logical approach: the ability to see familiar conceptual-modeling problems in simpler ways, the illumination of new problems, and the ability to prove the benefit of modifying CM grammars.
Abstract:
University classroom talk is a collaborative struggle to make meaning. Taking the perspectival nature of interaction as central, this paper presents an investigation of the genre of spoken academic discourse, and in particular the types of activities which are orientated towards collaborative work on ideas or tasks, such as seminars, tutorials and workshops. The purpose of the investigation was to identify examples of dialogicality through an examination of stance-taking. The data used in this study is a spoken corpus of academic English created from recordings of a range of subject-discipline classrooms at a UK university. A frequency-based approach to recurrent word sequences (lexical bundles) was used to identify signals of epistemic and attitudinal stance and to initiate an exploration of the features of elaboration. Findings of quantitative and qualitative analyses reveal some similarities and differences between this study and studies of US-based classroom contexts in relation to the use and frequency of lexical bundles. Findings also highlight the role that elaboration plays in grounding perspectives and negotiating alignment between interactants. Elaboration seems to afford the space for the enactment of student stance in relation to the tutor's embodiment of discipline knowledge.
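A frequency-based lexical-bundle analysis of the kind described can be sketched as follows: count recurrent four-word sequences in a transcript and keep those above a frequency threshold. The toy transcript, the bundle length and the cut-off are illustrative assumptions, not the study's settings.

```python
# Sketch: extracting recurrent 4-word sequences ("lexical bundles") by raw frequency.
from collections import Counter

# Toy transcript standing in for classroom talk (illustrative only).
transcript = (
    "so i think what you mean by that is important and "
    "if we look at the data we can see what you mean by the term and "
    "do you want to say a bit more about what you mean by it"
).split()

n = 4                                   # bundle length
bundles = Counter(
    " ".join(transcript[i:i + n]) for i in range(len(transcript) - n + 1)
)

# Real studies normalise frequencies per million words and require dispersion
# across texts; here we simply keep bundles occurring at least twice.
for bundle, freq in bundles.most_common():
    if freq >= 2:
        print(freq, bundle)
```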
Abstract:
We present the results of exploratory experiments using lexical valence extracted from the brain using electroencephalography (EEG) for sentiment analysis. We selected 78 English words (36 for training and 42 for testing), presented as stimuli to three native English speakers. EEG signals were recorded from the subjects while they performed a mental imagery task for each word stimulus. Wavelet decomposition was employed to extract EEG features from the time-frequency domain. The extracted features were used, after univariate ANOVA feature selection, as inputs to a sparse multinomial logistic regression (SMLR) classifier for valence classification. After mapping EEG signals to sentiment valences, we exploited the lexical polarity extracted from brain data for the prediction of the valence of 12 sentences taken from the SemEval-2007 shared task, and compared it against existing lexical resources.
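The pipeline described (wavelet features, univariate ANOVA feature selection, sparse multinomial logistic regression) might be sketched roughly as below. PyWavelets and scikit-learn stand in for whatever toolchain was actually used, an L1-penalised multinomial logistic regression stands in for SMLR, and the data shapes, channel counts and labels are invented.

```python
# Sketch of the described pipeline: wavelet features -> ANOVA selection ->
# sparse multinomial logistic regression. Stand-in libraries and toy data only.
import numpy as np
import pywt
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(2)

# Toy EEG: 78 word epochs x 32 channels x 512 samples; valence labels 0/1/2.
epochs = rng.normal(size=(78, 32, 512))
valence = rng.integers(0, 3, size=78)

def wavelet_features(epoch):
    """Concatenate wavelet coefficients of every channel (time-frequency features)."""
    feats = []
    for channel in epoch:
        coeffs = pywt.wavedec(channel, "db4", level=4)
        feats.extend(np.concatenate(coeffs))
    return np.asarray(feats)

X = np.array([wavelet_features(e) for e in epochs])

# ANOVA feature selection followed by an L1-penalised multinomial classifier
# (a stand-in for sparse multinomial logistic regression, SMLR).
clf = make_pipeline(
    SelectKBest(f_classif, k=500),
    LogisticRegression(penalty="l1", solver="saga", C=0.1, max_iter=2000),
)
clf.fit(X[:36], valence[:36])                 # train split (36 words)
print("held-out accuracy:", clf.score(X[36:], valence[36:]))
```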
Abstract:
The BDI architecture, in which agents are modelled based on their beliefs, desires and intentions, provides a practical approach to developing large-scale systems. However, it is not well suited to modelling complex Supervisory Control And Data Acquisition (SCADA) systems pervaded by uncertainty. In this paper we address this issue by extending the operational semantics of Can(Plan) into Can(Plan)+. We start by modelling the beliefs of an agent as a set of epistemic states where each state, possibly using a different representation, models part of the agent's beliefs. These epistemic states are stratified to make them commensurable and to reason about the uncertain beliefs of the agent. The syntax and semantics of a BDI agent are extended accordingly, and we identify fragments with computationally efficient semantics. Finally, we examine how primitive actions are affected by uncertainty and we define an appropriate form of lookahead planning.
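The notion of stratified epistemic states can be loosely illustrated with a toy belief base in which heterogeneous states (one probabilistic, one crisp) are made commensurable by mapping onto a shared ordinal certainty scale. This is a hypothetical illustration of the general idea only, not the Can(Plan)+ semantics; all class and atom names are invented.

```python
# Toy illustration (not the paper's formalism): heterogeneous epistemic states
# made commensurable by mapping each onto a shared ordinal certainty scale.
from dataclasses import dataclass, field

SCALE = ["unknown", "plausible", "probable", "certain"]   # shared strata

@dataclass
class ProbabilisticState:
    probs: dict                                            # atom -> probability
    def rank(self, atom):
        p = self.probs.get(atom, 0.0)
        if p > 0.95:
            return "certain"
        if p > 0.7:
            return "probable"
        if p > 0.4:
            return "plausible"
        return "unknown"

@dataclass
class CrispState:
    facts: set = field(default_factory=set)                # e.g. sensor readings
    def rank(self, atom):
        return "certain" if atom in self.facts else "unknown"

def agent_believes(states, atom):
    """Overall rank of an atom: the strongest stratum assigned by any epistemic state."""
    return max((s.rank(atom) for s in states), key=SCALE.index)

states = [ProbabilisticState({"pump_failing": 0.8}), CrispState({"valve_open"})]
for atom in ["pump_failing", "valve_open", "pipe_leaking"]:
    print(atom, "->", agent_believes(states, atom))
```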
Abstract:
Over the past decade the concept of ‘resilience’ has been mobilised across an increasingly wide range of policy arenas. For example, it has featured prominently within recent discussions on the nature of warfare, the purpose of urban and regional planning, the effectiveness of development policies, the intent of welfare reform and the stability of the international financial system. The term’s origins can be traced back to the work of the ecologist Crawford S. Holling and his formulation of a science of complexity. This paper reflects on the origins of these ideas and their travels from the field of natural resource management, which it now dominates, to contemporary social practices and policy arenas. It reflects on the ways in which a lexicon of complex adaptive systems, grounded in an epistemology of limited knowledge and uncertain futures, seeks to displace ongoing ‘dependence’ on professionals by valorising self-reliance and responsibility as techniques to be applied by subjects in the making of the resilient self. In so doing, resilience is being mobilised to govern a wide range of threats and sources of uncertainty, from climate change, financial crises and terrorism, to the sustainability of development, the financing of welfare and providing for an aging population. As such, ‘resilience’ risks becoming a measure of its subjects’ ‘fitness’ to survive in what are pre-figured as natural, turbulent orders of things.
Abstract:
This book provides a comprehensive tutorial on similarity operators. The authors systematically survey the set of similarity operators, primarily focusing on their semantics, while also touching upon mechanisms for processing them effectively.
The book starts by providing introductory material on similarity search systems, highlighting the central role of similarity operators in such systems. This is followed by a systematic, categorized overview of the variety of similarity operators that have been proposed in the literature over the last two decades, including advanced operators such as RkNN, Reverse k-Ranks, Skyline k-Groups and K-N-Match. Since indexing is a core technology in the practical implementation of similarity operators, various indexing mechanisms are summarized. Finally, current research challenges are outlined, so as to enable interested readers to identify potential directions for future investigations.
In summary, this book offers a comprehensive overview of the field of similarity search operators, allowing readers to understand the area as it stands today and providing them with the background needed to understand recent novel approaches.
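To make one of the advanced operators mentioned above concrete, here is a brute-force sketch of a reverse k-nearest-neighbour (RkNN) query: it returns the points that count the query among their own k nearest neighbours. The data and parameters are illustrative, and real systems would use the indexing mechanisms the book surveys rather than this quadratic scan.

```python
# Brute-force RkNN sketch: which points count the query among their k nearest
# neighbours? Illustrative only; real engines rely on index structures instead.
import numpy as np

def rknn(data, query, k):
    results = []
    for i, p in enumerate(data):
        # k nearest neighbours of p within data (excluding p itself) plus the query.
        candidates = np.vstack([np.delete(data, i, axis=0), query])
        dists = np.linalg.norm(candidates - p, axis=1)
        kth = np.sort(dists)[k - 1]
        if np.linalg.norm(query - p) <= kth:
            results.append(i)
    return results

rng = np.random.default_rng(3)
data = rng.uniform(size=(200, 2))        # 200 points in the unit square
query = np.array([0.5, 0.5])
print("RkNN(k=3):", rknn(data, query, 3))
```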
Abstract:
Although Answer Set Programming (ASP) is a powerful framework for declarative problem solving, it cannot handle, in an intuitive way, situations in which some rules are uncertain, or in which it is more important to satisfy some constraints than others. Possibilistic ASP (PASP) is a natural extension of ASP in which a certainty weight is associated with each rule. In this paper we contrast two different views on interpreting the weights attached to rules. Under the first view, a weight reflects the certainty with which we can conclude the head of a rule when its body is satisfied. Under the second view, a weight reflects the certainty that a given rule restricts the considered epistemic states of an agent in a valid way, i.e. the certainty that the rule itself is correct. The first view gives rise to a set of weighted answer sets, whereas the second view gives rise to a weighted set of classical answer sets.
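The contrast between the two views might be illustrated, very loosely, on a negation-free toy program: under the first view, certainty propagates through derivations and attaches to individual conclusions; under the second view, each certainty level cuts the program down to its sufficiently certain rules, and the resulting classical answer set is weighted by that level. The alpha-cut reading of the second view, and all rules and names below, are assumptions made for illustration only, not the paper's definitions.

```python
# Loose sketch of the two readings of rule weights in possibilistic ASP,
# restricted to definite (negation-free) programs, where the classical
# semantics is just the least model computed by forward chaining.

# Each rule: (head, [body atoms], certainty weight) -- toy program.
program = [
    ("bird(tweety)", [], 1.0),
    ("flies(tweety)", ["bird(tweety)"], 0.8),
    ("nests(tweety)", ["flies(tweety)"], 0.6),
]

def least_model(rules):
    """Classical least model of a definite program (weights ignored)."""
    model, changed = set(), True
    while changed:
        changed = False
        for head, body, _ in rules:
            if head not in model and all(b in model for b in body):
                model.add(head)
                changed = True
    return model

def view1_weighted_answer_set(rules):
    """View 1: weights qualify conclusions; min-certainty propagates to heads."""
    certainty, changed = {}, True
    while changed:
        changed = False
        for head, body, w in rules:
            if all(b in certainty for b in body):
                c = min([w] + [certainty[b] for b in body])
                if certainty.get(head, 0.0) < c:
                    certainty[head] = c
                    changed = True
    return certainty                       # one answer set with graded conclusions

def view2_weighted_set_of_answer_sets(rules):
    """View 2: weights qualify rules; each alpha-cut yields a classical answer set."""
    alphas = sorted({w for _, _, w in rules}, reverse=True)
    return {alpha: least_model([r for r in rules if r[2] >= alpha]) for alpha in alphas}

print(view1_weighted_answer_set(program))
# e.g. {'bird(tweety)': 1.0, 'flies(tweety)': 0.8, 'nests(tweety)': 0.6}
for alpha, answers in view2_weighted_set_of_answer_sets(program).items():
    print(alpha, sorted(answers))
# e.g. 1.0 -> {bird}, 0.8 -> {bird, flies}, 0.6 -> {bird, flies, nests}
```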