992 results for Semantic space
Abstract:
Presentation about information modelling and artificial intelligence, semantic structure, cognitive processing and quantum theory.
Abstract:
Vector Space Models (VSMs) of Semantics are useful tools for exploring the semantics of single words, and the composition of words to make phrasal meaning. While many methods can estimate the meaning (i.e. vector) of a phrase, few do so in an interpretable way. We introduce a new method (CNNSE) that allows word and phrase vectors to adapt to the notion of composition. Our method learns a VSM that is both tailored to support a chosen semantic composition operation, and whose resulting features have an intuitive interpretation. Interpretability allows for the exploration of phrasal semantics, which we leverage to analyze performance on a behavioral task.
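The abstract leaves the composition operation unspecified; the simplest operation a VSM can be tailored to support is vector addition. The sketch below illustrates only that baseline idea with a toy three-dimensional space and hypothetical vectors (CNNSE itself learns the space jointly with the chosen operation, which is not reproduced here):

```python
import numpy as np

# Toy vector space model: each word is a row vector of latent features.
# Values are hypothetical, for illustration only.
vsm = {
    "red":   np.array([0.9, 0.1, 0.0]),
    "apple": np.array([0.2, 0.8, 0.3]),
}

def compose(words, vsm):
    """Estimate a phrase vector by additive composition of its word vectors."""
    return np.sum([vsm[w] for w in words], axis=0)

phrase = compose(["red", "apple"], vsm)
```

With an interpretable space, each coordinate of `phrase` can be read as the strength of a nameable semantic feature, which is what makes exploration of phrasal semantics possible.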
Abstract:
In this paper, we compare a well-known semantic space model, Latent Semantic Analysis (LSA), with another model, Hyperspace Analogue to Language (HAL), which is widely used in different areas, especially in automatic query refinement. We conduct this comparative analysis to test our hypothesis that, with respect to the ability to extract lexical information from a corpus of text, LSA is quite similar to HAL. We regard HAL and LSA as black boxes. Through a Pearson's correlation analysis of the outputs of these two black boxes, we conclude that LSA correlates highly with HAL, and thus there is justification that LSA and HAL can potentially play a similar role in facilitating automatic query refinement. This paper evaluates LSA in a new application area and contributes an effective way to compare different semantic space models.
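The black-box comparison reduces to correlating the two models' outputs for the same inputs. A minimal sketch with hypothetical similarity scores (in practice these would be, e.g., cosine similarities for the same word pairs computed in the LSA and HAL spaces):

```python
import numpy as np

# Hypothetical similarity scores produced by the two "black boxes"
# for the same five word pairs.
lsa_sims = np.array([0.82, 0.35, 0.67, 0.12, 0.55])
hal_sims = np.array([0.78, 0.40, 0.70, 0.08, 0.60])

# Pearson correlation between the two models' outputs: a value near 1
# supports the hypothesis that the models extract similar lexical information.
r = np.corrcoef(lsa_sims, hal_sims)[0, 1]
```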
Abstract:
Consider a person searching electronic health records: a search for the term ‘cracked skull’ should return documents that contain the term ‘cranium fracture’. An information retrieval system is required that matches concepts, not just keywords. Furthermore, determining the relevance of a query to a document requires inference; it is not simply a matter of matching concepts. For example, a document containing ‘dialysis machine’ should align with a query for ‘kidney disease’. Collectively, we describe this problem as the ‘semantic gap’: the difference between the raw medical data and the way a human interprets it. This paper presents an approach to semantic search of health records by combining two previous approaches: an ontological approach using the SNOMED CT medical ontology, and a distributional approach using semantic space vector space models. Our approach will be applied to a specific problem in health informatics: the matching of electronic patient records to clinical trials.
Abstract:
Electronic services are a leitmotif in ‘hot’ topics like Software as a Service, Service-Oriented Architecture (SOA), Service-Oriented Computing, Cloud Computing, application markets and smart devices. We propose to consider these in what has been termed the Service Ecosystem (SES). The SES encompasses all levels of electronic services and their interaction, with human consumption and initiation on its periphery, in much the same way that the ‘Web’ describes a plethora of technologies that eventuate to connect information and expose it to humans. Presently, the SES is heterogeneous, fragmented and confined to semi-closed systems. A key issue hampering the emergence of an integrated SES is Service Discovery (SD). An SES will be dynamic, with areas of structured and unstructured information within which service providers and ‘lay’ human consumers interact; until now the two have been disjointed, e.g., SOA-enabled organisations, industries and domains are choreographed by domain experts or ‘hard-wired’ to smart device application markets and web applications. In an SES, services are accessible, comparable and exchangeable for human consumers, closing the gap to the providers. This requires a new SD with which humans can discover services transparently and effectively without special knowledge or training. We propose two modes of discovery: directed search, which follows an agenda, and explorative search, which speculatively expands knowledge of an area of interest by means of categories. Inspired by conceptual space theory from cognitive science, we propose to implement the modes of discovery using concepts to map a lay consumer’s service need to terminologically sophisticated descriptions of services. To this end, we reframe SD as an information retrieval task on the information attached to services, such as descriptions, reviews, documentation and web sites: the Service Information Shadow.
The Semantic Space model transforms the shadow's unstructured semantic information into a geometric, concept-like representation. We introduce an improved and extended Semantic Space that includes categorization, calling it the Semantic Service Discovery model. We evaluate our model with a highly relevant, service-related corpus simulating a Service Information Shadow, including manually constructed complex service agendas as well as manual groupings of services. We compare our model against state-of-the-art information retrieval systems and clustering algorithms. By means of an extensive series of empirical evaluations, we establish optimal parameter settings for the semantic space model. The evaluations demonstrate the model’s effectiveness for SD in terms of retrieval precision over state-of-the-art information retrieval models (directed search), and the meaningful, automatic categorization of service-related information, which shows potential to form the basis of a useful, cognitively motivated map of the SES for exploratory search.
Abstract:
Semantic space models of word meaning derived from co-occurrence statistics within a corpus of documents, such as the Hyperspace Analogue to Language (HAL) model, have been proposed in the past. While word similarity can be computed using these models, it is not clear how semantic spaces derived from different sets of documents can be compared. In this paper, we focus on this problem and revisit the proposal of using semantic subspace distance measurements [1]. In particular, we outline the research questions that still need to be addressed to investigate and validate these distance measures. Then, we describe our plans for future research.
Abstract:
Semantic Space models, which provide a numerical representation of words’ meaning extracted from a corpus of documents, have been formalized in terms of Hermitian operators over real-valued Hilbert spaces by Bruza et al. [1]. The collapse of a word into a particular meaning has been investigated by applying the notion of quantum collapse of superpositional states [2]. While the semantic association between words in a Semantic Space can be computed by means of the Minkowski distance [3] or the cosine of the angle between the vector representations of each pair of words, a new procedure is needed in order to establish relations between two or more Semantic Spaces. We address the question: how can the distance between different Semantic Spaces be computed? By representing each Semantic Space as a subspace of a more general Hilbert space, the relationship between Semantic Spaces can be computed by means of the subspace distance. Such a distance needs to take into account the difference in dimensions between the subspaces. The availability of a distance for comparing different Semantic Subspaces would make it possible to achieve a deeper understanding of the geometry of Semantic Spaces, which could translate into better effectiveness in Information Retrieval tasks.
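The abstract does not fix which subspace distance to use. A standard candidate from matrix analysis that accommodates subspaces of different dimension, offered here purely as an assumption, is the chordal distance derived from principal angles:

```python
import numpy as np

def subspace_distance(A, B):
    """Chordal distance between the column spans of A and B.

    Orthonormalise each basis with QR; the singular values of Q_A^T Q_B
    are the cosines of the principal angles between the two subspaces.
    Summing sin^2 over the principal angles yields a distance that is
    well defined even when the subspaces have different dimensions.
    """
    Qa, _ = np.linalg.qr(A)
    Qb, _ = np.linalg.qr(B)
    cosines = np.clip(np.linalg.svd(Qa.T @ Qb, compute_uv=False), -1.0, 1.0)
    return float(np.sqrt(np.sum(1.0 - cosines ** 2)))

# Sanity checks: identical subspaces are at distance 0;
# orthogonal one-dimensional subspaces are at distance 1.
e1 = np.array([[1.0], [0.0], [0.0]])
e2 = np.array([[0.0], [1.0], [0.0]])
d_same = subspace_distance(e1, e1)
d_orth = subspace_distance(e1, e2)
```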
Abstract:
Alzheimer's disease (AD) is characterized by an impairment of the semantic memory responsible for processing meaning-related knowledge. This study was aimed at examining how Finnish-speaking healthy elderly subjects (n = 30) and mildly (n = 20) and moderately (n = 20) demented AD patients utilize semantic knowledge to perform a semantic fluency task, a method of studying semantic memory. In this task, subjects are typically given 60 seconds to generate words belonging to the semantic category of animals. Successful task performance requires fast retrieval of subcategory exemplars in clusters (e.g., farm animals: 'cow', 'horse', 'sheep') and switching between subcategories (e.g., pets, water animals, birds, rodents). In this study, the scope of the task was extended to cover various noun and verb categories. The results indicated that, compared with normal controls, both mildly and moderately demented AD patients showed reduced word production, limited clustering and switching, narrowed semantic space, and an increase in errors, particularly perseverations. However, the size of the clusters, the proportion of clustered words, and the frequency and prototypicality of words remained relatively similar across the subject groups. Although the moderately demented patients showed a poorer overall performance than the mildly demented patients in the individual categories, the error analysis appeared unaffected by the severity of AD. The results indicate a semantically rather coherent performance but less specific, effective, and flexible functioning of the semantic memory in mild and moderate AD patients. The findings are discussed in relation to recent theories of word production and semantic representation. Keywords: semantic fluency, clustering, switching, semantic category, nouns, verbs, Alzheimer's disease
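Once each produced word is assigned a subcategory, clustering and switching can be counted mechanically. A minimal sketch (the scoring convention, counting maximal same-subcategory runs as clusters and transitions as switches, and the animal assignments are illustrative, not the study's actual protocol):

```python
def clusters_and_switches(words, category):
    """Count subcategory clusters and switches in a fluency word list.

    A cluster is a maximal run of consecutive words sharing a subcategory;
    a switch is a transition between two different subcategories.
    """
    if not words:
        return 0, 0
    runs = 1
    for prev, cur in zip(words, words[1:]):
        if category[cur] != category[prev]:
            runs += 1
    return runs, runs - 1  # (clusters, switches)

# Hypothetical 60-second animal-fluency output with subcategory labels.
cats = {"cow": "farm", "horse": "farm", "sheep": "farm",
        "cat": "pet", "dog": "pet", "fish": "water"}
n_clusters, n_switches = clusters_and_switches(
    ["cow", "horse", "sheep", "cat", "dog", "fish"], cats)
```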
Semantic Discriminant mapping for classification and browsing of remote sensing textures and objects
Abstract:
We present a new approach based on Discriminant Analysis to map a high-dimensional image feature space onto a subspace which has the following advantages: (1) each dimension corresponds to a semantic likelihood; (2) an efficient and simple multiclass classifier is proposed; and (3) it is low-dimensional. This mapping is learnt from a given set of labeled images with a class ground truth. In the new space, a classifier is naturally derived which performs as well as a linear SVM. We show that projecting images into this new space provides a database browsing tool which is meaningful to the user. Results are presented on a remote sensing database with eight classes, made available online. The output semantic space is a low-dimensional feature space which opens perspectives for other recognition tasks. © 2005 IEEE.
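To illustrate the underlying idea of a discriminant mapping, here is a two-class Fisher discriminant direction in plain NumPy; the paper's multiclass construction and its calibration of each dimension as a semantic likelihood are not reproduced, and the toy data are hypothetical:

```python
import numpy as np

def fisher_direction(X1, X2):
    """Fisher discriminant direction separating two labeled feature sets.

    w maximises between-class over within-class scatter, which gives the
    closed form w ~ S_w^{-1} (mu1 - mu2). Projecting features onto such
    directions yields a low-dimensional, class-aligned subspace.
    """
    m1, m2 = X1.mean(axis=0), X2.mean(axis=0)
    # Pooled within-class scatter (biased covariances scaled back to sums).
    Sw = np.cov(X1.T, bias=True) * len(X1) + np.cov(X2.T, bias=True) * len(X2)
    # Small ridge term keeps the solve stable for near-singular scatter.
    w = np.linalg.solve(Sw + 1e-6 * np.eye(len(m1)), m1 - m2)
    return w / np.linalg.norm(w)

# Two tight, well-separated toy classes in a 2-D feature space.
X1 = np.array([[0.0, 0.0], [0.1, 0.0], [0.0, 0.1]])
X2 = np.array([[1.0, 1.0], [1.1, 1.0], [1.0, 1.1]])
w = fisher_direction(X1, X2)
```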
Abstract:
Relative (comparative) attributes are promising for thematic ranking of visual entities, which also aids recognition tasks. However, attribute rank learning often requires a substantial amount of relational supervision, which is highly tedious and apparently impractical for real-world applications. In this paper, we introduce the Semantic Transform, which, under minimal supervision, adaptively finds a semantic feature space along with a class ordering that are related in the best possible way. Such a semantic space is found for every attribute category. To relate the classes under weak supervision, the class ordering needs to be refined according to a cost function in an iterative procedure. This problem is in general NP-hard, and we thus propose a constrained search tree formulation for it. Driven by the adaptive semantic feature space representation, our model achieves the best results to date on the tasks of relative, absolute and zero-shot classification on two popular datasets. © 2013 IEEE.
Broadly speaking: vocabulary in semantic dementia shifts towards general, semantically diverse words
Abstract:
One of the cardinal features of semantic dementia (SD) is a steady reduction in expressive vocabulary. We investigated the nature of this breakdown by assessing the psycholinguistic characteristics of words produced spontaneously by SD patients during an autobiographical memory interview. Speech was analysed with respect to frequency and imageability, and a recently-developed measure called semantic diversity. This measure quantifies the degree to which a word can be used in a broad range of different linguistic contexts. We used this measure in a formal exploration of the tendency for SD patients to replace specific terms with more vague and general words, on the assumption that more specific words are used in a more constrained set of contexts. Relative to healthy controls, patients were less likely to produce low-frequency, high-imageability words, and more likely to produce highly frequent, abstract words. These changes in the lexical-semantic landscape were related to semantic diversity: the highly frequent and abstract words most prevalent in the patients' speech were also the most semantically diverse. In fact, when the speech samples of healthy controls were artificially engineered such that low semantic diversity words (e.g., garage, spanner) were replaced with broader terms (e.g., place, thing), the characteristics of their speech production came to closely resemble that of SD patients. A similar simulation in which low-frequency words were replaced was less successful in replicating the patient data. These findings indicate systematic biases in the deterioration of lexical-semantic space in SD. As conceptual knowledge degrades, speech increasingly consists of general terms that can be applied in a broad range of linguistic contexts and convey less specific information.
Abstract:
Recent empirical work on the semantics of emotion terms across many different cultures and languages, using a theoretical componential approach, suggested that four dimensions are needed to parsimoniously describe the semantic space of the emotion domain as reflected in emotion terms (Fontaine, Scherer, Roesch, & Ellsworth, 2007; Fontaine, Scherer, & Soriano, 2013). In addition to valence, power, and arousal, a novelty dimension was discovered that mostly differentiated surprise from other emotions. Here, we further explore the existence and nature of the fourth dimension in semantic emotion space using a much larger and much more representative set of emotion terms. A group of 156 participants each rated 10 out of a set of 80 French emotion terms with respect to semantic meaning. The meaning of an emotion term was evaluated with respect to 68 emotion features representing the appraisal, action tendency, bodily reaction, expression, and feeling components of the emotion process. A principal component analysis confirmed the four-dimensional valence, power, arousal, and novelty structure. Moreover, this larger and much more representative set of emotion terms revealed that the novelty dimension not only differentiates surprise terms from other emotion terms, but also identifies substantial variation within the fear and joy emotion families.
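The dimensional structure is recovered by principal component analysis of the term-by-feature rating matrix. The core computation can be sketched as follows; the synthetic ratings with one dominant direction stand in for the actual 80-term by 68-feature French data, which are not available here:

```python
import numpy as np

def principal_components(R, k):
    """PCA via SVD of the centred term-by-feature rating matrix R.

    Rows: emotion terms; columns: mean ratings on emotion features.
    Returns the top-k component loadings and the proportion of
    variance each component explains.
    """
    Rc = R - R.mean(axis=0)                      # centre each feature
    U, s, Vt = np.linalg.svd(Rc, full_matrices=False)
    explained = s ** 2 / np.sum(s ** 2)
    return Vt[:k], explained[:k]

# Synthetic stand-in: 10 "terms" x 5 "features" with one dominant
# latent dimension plus small noise.
rng = np.random.default_rng(0)
signal = rng.normal(size=(10, 1)) @ rng.normal(size=(1, 5))
R = signal + 0.01 * rng.normal(size=(10, 5))
comps, var = principal_components(R, 2)
```

On the real data, inspecting the loadings of the third and fourth components is what lets the authors label them arousal and novelty.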
Abstract:
The World Wide Web provides plentiful content for Web-based learning, but its hyperlink-based architecture connects Web resources for free browsing rather than for effective learning. To support effective learning, an e-learning system should be able to discover and make use of the semantic communities and the emerging semantic relations in a dynamic complex network of learning resources. Previous graph-based community discovery approaches are limited in their ability to discover semantic communities. This paper first suggests the Semantic Link Network (SLN), a loosely coupled semantic data model that can semantically link resources and derive implicit semantic links according to a set of relational reasoning rules. By studying the intrinsic relationship between semantic communities and the semantic space of the SLN, approaches to discovering reasoning-constraint, rule-constraint, and classification-constraint semantic communities are proposed. Further, the approaches, principles, and strategies for discovering emerging semantics in dynamic SLNs are studied. The basic laws of semantic link network motion are revealed for the first time. An e-learning environment incorporating the proposed approaches, principles, and strategies to support effective discovery and learning is suggested.
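The idea of deriving implicit links from explicit ones via relational reasoning rules can be sketched as a fixed-point closure. The link type (`partOf`) and the transitivity rule below are illustrative assumptions, not the cited SLN model's actual rule set:

```python
def derive(links, transitive_types=frozenset({"partOf"})):
    """Close a set of (source, type, target) links under transitivity.

    Repeatedly applies the rule "a -t-> b and b -t-> d implies a -t-> d"
    for transitive link types until no new link can be derived.
    """
    links = set(links)
    changed = True
    while changed:
        changed = False
        for (a, t1, b) in list(links):
            for (c, t2, d) in list(links):
                if t1 == t2 and t1 in transitive_types and b == c:
                    new = (a, t1, d)
                    if new not in links:
                        links.add(new)
                        changed = True
    return links

# Hypothetical learning resources: a slide belongs to a lecture,
# the lecture belongs to a course.
explicit = {("slide", "partOf", "lecture"), ("lecture", "partOf", "course")}
derived = derive(explicit)
```

The derived link ("slide", "partOf", "course") is exactly the kind of implicit relation a community-discovery step could then exploit.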
Abstract:
In this paper, we propose an unsupervised methodology to automatically discover pairs of semantically related words by highlighting their local environment and evaluating their semantic similarity in local and global semantic spaces. This proposal differs from previous research in that it tries to take the best of two different methodologies, i.e. semantic space models and information extraction models. It can be applied to extract close semantic relations, it limits the search space, and it is unsupervised.
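The semantic-space side of such a methodology can be illustrated with the simplest possible construction: co-occurrence vectors over a fixed window, compared by cosine. The two-sentence corpus and window size are illustrative only:

```python
import numpy as np

def cooccurrence_vectors(sentences, window=2):
    """Build simple co-occurrence vectors: each word is represented by
    counts of the words appearing within a fixed window around it."""
    vocab = sorted({w for s in sentences for w in s})
    index = {w: i for i, w in enumerate(vocab)}
    vecs = {w: np.zeros(len(vocab)) for w in vocab}
    for s in sentences:
        for i, w in enumerate(s):
            for j in range(max(0, i - window), min(len(s), i + window + 1)):
                if j != i:
                    vecs[w][index[s[j]]] += 1
    return vecs

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

# "dog" and "cat" share local environments, so they come out more
# similar to each other than to a word with a different environment.
vecs = cooccurrence_vectors([["dog", "chases", "ball"],
                             ["cat", "chases", "ball"]])
sim_related = cosine(vecs["dog"], vecs["cat"])
sim_unrelated = cosine(vecs["dog"], vecs["chases"])
```

Restricting the vectors to a word pair's local environment, rather than the whole corpus, is what limits the search space in the proposal above.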