35 results for Semantic annotations
Abstract:
Web databases are now pervasive. Such a database can be accessed only via its query interface, usually an HTML query form. Extracting Web query interfaces is a critical step in data integration across multiple Web databases: it creates a formal representation of a query form by extracting the set of query conditions in it. This paper presents a novel approach to extracting Web query interfaces. In this approach, a generic set of query condition rules is created to define query conditions that are semantically equivalent to SQL search conditions. Query condition rules represent the semantic roles that labels and form elements play in query conditions, and how they are hierarchically grouped into the constructs of query conditions. To group labels and form elements in a query form, we exploit both their structural proximity in the hierarchy of structures in the query form, captured by the tree of nested tags in the form's HTML code, and their semantic similarity, captured by the various short texts used in labels, form elements, and their properties. We have implemented the proposed approach, and our experimental results show that it is highly effective.
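As an illustration of the structural-proximity idea described above, here is a minimal sketch in Python, not the paper's implementation: it pairs each form element with the nearest preceding label text while walking the form's tag stream with the standard-library html.parser. The FormPairer class and the toy form are hypothetical.

```python
# Minimal sketch (not the paper's implementation) of pairing form
# elements with their nearest preceding label text, a simple stand-in
# for the "structural proximity in the tag tree" heuristic.
from html.parser import HTMLParser

class FormPairer(HTMLParser):
    """Collects (label text, form element) pairs in document order."""
    def __init__(self):
        super().__init__()
        self.last_text = ""          # most recent free text seen
        self.pairs = []              # (label, element description)

    def handle_data(self, data):
        text = data.strip()
        if text:
            self.last_text = text    # candidate label for the next element

    def handle_starttag(self, tag, attrs):
        if tag in ("input", "select", "textarea"):
            attrs = dict(attrs)
            desc = f"{tag}[name={attrs.get('name', '?')}]"
            self.pairs.append((self.last_text, desc))

html_form = """
<form>
  Title: <input name="title" type="text">
  Price from <input name="min_price"> to <input name="max_price">
</form>
"""

pairer = FormPairer()
pairer.feed(html_form)
for label, element in pairer.pairs:
    print(label, "->", element)
# Title: -> input[name=title]
# Price from -> input[name=min_price]
# to -> input[name=max_price]
```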
Abstract:
We describe an approach, based on annotations and refactoring, that addresses the joint exploitation of control (stream) and data parallelism in a skeleton-based parallel programming environment. Annotations drive the efficient implementation of a parallel computation. Refactoring is used to transform the associated skeleton tree into a more efficient, functionally equivalent skeleton tree. In most cases, cost models are used to drive the refactoring process. We show how sample use-case applications/kernels may be optimized, and we discuss preliminary experiments with FastFlow assessing the theoretical results. © 2013 Springer-Verlag.
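To make the refactoring idea concrete, the following is a minimal sketch under simplified assumptions: the skeleton constructors (Seq, Farm, Pipe), the service-time cost model, and the single rewrite rule are illustrative stand-ins, not the FastFlow API.

```python
# Minimal sketch (hypothetical names, not the FastFlow API) of
# cost-model-driven skeleton refactoring: rewrite a pipeline stage
# into a task farm when the predicted service time improves.
from dataclasses import dataclass

@dataclass
class Seq:
    name: str
    service_time: float          # time to process one item

@dataclass
class Farm:
    worker: Seq
    nw: int                      # number of workers

@dataclass
class Pipe:
    stages: list

def service_time(sk):
    """Classic skeleton cost model: a pipe is bounded by its slowest
    stage; a farm divides a stage's service time by its workers."""
    if isinstance(sk, Seq):
        return sk.service_time
    if isinstance(sk, Farm):
        return service_time(sk.worker) / sk.nw
    if isinstance(sk, Pipe):
        return max(service_time(s) for s in sk.stages)

def refactor(sk, nw=4):
    """Farm out a pipeline stage only if the cost model predicts a win."""
    if isinstance(sk, Pipe):
        new_stages = []
        for s in sk.stages:
            cand = Farm(s, nw) if isinstance(s, Seq) else s
            new_stages.append(cand if service_time(cand) < service_time(s) else s)
        return Pipe(new_stages)
    return sk

p = Pipe([Seq("read", 1.0), Seq("transform", 8.0), Seq("write", 1.0)])
print(service_time(p))            # 8.0: bottleneck is "transform"
print(service_time(refactor(p)))  # 2.0: stages farmed with 4 workers
```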
Abstract:
In this paper, we introduce an application of matrix factorization to produce corpus-derived, distributional models of semantics that demonstrate cognitive plausibility. We find that word representations learned by Non-Negative Sparse Embedding (NNSE), a variant of matrix factorization, are sparse, effective, and highly interpretable. To the best of our knowledge, this is the first approach that yields semantic representations of words satisfying these three desirable properties. Through extensive experimental evaluations on multiple real-world tasks and datasets, we demonstrate the superiority of semantic models learned by NNSE over other state-of-the-art baselines.
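For concreteness, here is a minimal sketch of an NNSE-style objective, factorizing a word-by-context matrix X into a non-negative, sparse embedding A and a dictionary D. The alternating projected-gradient loop and all parameter values are illustrative assumptions, not the authors' solver.

```python
# Minimal sketch (not the authors' solver) of an NNSE-style objective:
#   min ||X - A D||_F^2 + lam * sum|A|   s.t.  A >= 0, rows of D bounded,
# solved here by a plain alternating projected-gradient loop.
import numpy as np

rng = np.random.default_rng(0)
n_words, n_ctx, k = 50, 30, 10
X = np.abs(rng.standard_normal((n_words, n_ctx)))  # toy co-occurrence data

A = np.abs(rng.standard_normal((n_words, k)))
D = rng.standard_normal((k, n_ctx))
lam, lr = 0.1, 1e-3

for _ in range(500):
    R = A @ D - X                              # residual
    # gradient step on A, then soft-threshold and clip at zero:
    # the proximal step for the L1 penalty under non-negativity
    A = np.maximum(A - lr * (R @ D.T) - lr * lam, 0.0)
    # gradient step on D, then project rows back into the unit ball
    D -= lr * (A.T @ R)
    D /= np.maximum(np.linalg.norm(D, axis=1, keepdims=True), 1.0)

print("reconstruction error:", np.linalg.norm(X - A @ D))
print("fraction of zeros in A:", np.mean(A == 0.0))
```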
Abstract:
Computational models of meaning trained on naturally occurring text successfully model human performance on tasks involving simple similarity measures, but they characterize meaning in terms of undifferentiated bags of words or topical dimensions. This has led some to question their psychological plausibility (Murphy, 2002; Schunn, 1999). We present here a fully automatic method for extracting a structured and comprehensive set of concept descriptions directly from an English part-of-speech-tagged corpus. Concepts are characterized by weighted properties, enriched with concept-property types that approximate classical relations such as hypernymy and function. Our model outperforms comparable algorithms in cognitive tasks pertaining not only to concept-internal structures (discovering properties of concepts, grouping properties by property type) but also to inter-concept relations (clustering into superordinates), suggesting the empirical validity of the property-based approach. Copyright © 2009 Cognitive Science Society, Inc. All rights reserved.
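A toy sketch of the property-extraction step follows, assuming a single adjective-noun adjacency pattern over POS tags and a PMI-style association weight; the paper's model uses a far richer set of patterns, property types, and weighting.

```python
# Toy sketch (not the authors' pipeline) of mining weighted concept
# properties from a POS-tagged corpus: adjectives that modify a noun
# are candidate properties, scored by a PMI-style association weight.
import math
from collections import Counter

# toy "corpus": sentences as (token, POS) pairs
corpus = [
    [("the", "DT"), ("red", "JJ"), ("apple", "NN")],
    [("a", "DT"), ("sweet", "JJ"), ("apple", "NN")],
    [("the", "DT"), ("red", "JJ"), ("car", "NN")],
    [("a", "DT"), ("fast", "JJ"), ("car", "NN")],
]

pair_counts, noun_counts, adj_counts = Counter(), Counter(), Counter()
for sent in corpus:
    for (w1, t1), (w2, t2) in zip(sent, sent[1:]):
        if t1 == "JJ" and t2 == "NN":          # adjective modifying a noun
            pair_counts[(w2, w1)] += 1
            noun_counts[w2] += 1
            adj_counts[w1] += 1

total = sum(pair_counts.values())
for (noun, adj), c in pair_counts.items():
    pmi = math.log((c / total) /
                   ((noun_counts[noun] / total) * (adj_counts[adj] / total)))
    print(f"{noun} -- {adj}: weight {pmi:.2f}")
```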
Abstract:
Achieving a clearer picture of categorial distinctions in the brain is essential for our understanding of the conceptual lexicon, but much more fine-grained investigations are required in order for this evidence to contribute to lexical research. Here we present a collection of advanced data-mining techniques that allows the category of individual concepts to be decoded from single trials of EEG data. Neural activity was recorded while participants silently named images of mammals and tools, and category could be detected in single trials with an accuracy well above chance, both when considering data from single participants and when group-training across participants. By aggregating across all trials, single concepts could be correctly assigned to their category with an accuracy of 98%. The pattern of classifications made by the algorithm confirmed that the neural patterns identified are due to conceptual category, and not to any of a series of processing-related confounds. The time intervals, frequency bands and scalp locations that proved most informative for prediction permit physiological interpretation: the widespread activation shortly after the appearance of the stimulus (from 100 ms) is consistent both with accounts of multi-pass processing and with distributed representations of categories. These methods provide an alternative to fMRI for fine-grained, large-scale investigations of the conceptual lexicon. © 2010 Elsevier Inc.
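The single-trial decoding setup can be summarized in a few lines; the sketch below is a stand-in with synthetic band-power-like features and a generic linear classifier from scikit-learn, not the authors' features or data-mining techniques.

```python
# Minimal sketch (synthetic data, not the authors' pipeline) of
# single-trial category decoding: band-power-like features per trial,
# a linear classifier, and cross-validated accuracy against chance.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials, n_features = 200, 64          # e.g. channels x frequency bands
y = rng.integers(0, 2, n_trials)        # 0 = mammal, 1 = tool

# synthetic features: a small category-dependent shift on a few features
X = rng.standard_normal((n_trials, n_features))
X[:, :5] += 0.8 * y[:, None]

clf = LogisticRegression(max_iter=1000)
scores = cross_val_score(clf, X, y, cv=5)
print(f"decoding accuracy: {scores.mean():.2f} (chance = 0.50)")
```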
Abstract:
Many studies suggest a large-capacity memory for briefly presented pictures of whole scenes. At the same time, visual working memory (WM) of scene elements is limited to only a few items. We examined the role of retroactive interference in limiting memory for visual details. Participants viewed a scene for 5 s and then, after a short delay containing either a blank screen or 10 distracter scenes, answered questions about the location, color, and identity of objects in the scene. We found that the influence of the distracters depended on whether they were from a similar semantic domain, such as "kitchen" or "airport." Increasing the number of similar scenes reduced, and eventually eliminated, memory for scene details. Although scene memory was firmly established over the initial study period, this memory was fragile and susceptible to interference. This may help to explain the discrepancy in the literature between studies showing limited visual WM and those showing a large-capacity memory for scenes.
Abstract:
In most previous research on distributional semantics, Vector Space Models (VSMs) of words are built either from topical information (e.g., the documents in which a word is present) or from syntactic/semantic types of words (e.g., the dependency parse links of a word in sentences), but not both. In this paper, we explore the utility of combining these two representations to build VSMs for the task of semantic composition of adjective-noun phrases. Through extensive experiments on benchmark datasets, we find that even though a type-based VSM is effective for semantic composition, it is often outperformed by a VSM built using a combination of topic- and type-based statistics. We also introduce a new evaluation task wherein we predict the composed vector representation of a phrase from the brain activity of a human subject reading that phrase. We exploit a large syntactically parsed corpus of 16 billion tokens to build our VSMs, with vectors for both phrases and words, and make them publicly available.
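A toy sketch of the combine-then-compose idea, with random vectors standing in for corpus-derived topic- and type-based representations; weighted vector addition is a common baseline composition function and is assumed here for illustration.

```python
# Toy sketch (not the paper's corpus-scale model) of combining topic-
# and type-based word representations and composing an adjective-noun
# phrase by weighted vector addition.
import numpy as np

def combine(topic_vec, type_vec):
    """Concatenate the two views after L2-normalizing each."""
    t = topic_vec / np.linalg.norm(topic_vec)
    s = type_vec / np.linalg.norm(type_vec)
    return np.concatenate([t, s])

rng = np.random.default_rng(0)
adj  = combine(rng.random(5), rng.random(8))    # e.g. "red"
noun = combine(rng.random(5), rng.random(8))    # e.g. "car"

# weighted additive composition: one common baseline composition model
alpha = 0.4
phrase = alpha * adj + (1 - alpha) * noun
print(phrase.round(3))
```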
Abstract:
The Supreme Court of the United States in Feist v. Rural (Feist, 1991) specified that compilations or databases, and other works, must have a minimal degree of creativity to be copyrightable. The significance and global diffusion of the decision are only matched by the difficulties it has posed for interpretation. The judgment does not specify what is to be understood by creativity, although it does give a full account of the negative of creativity, as ‘so mechanical or routine as to require no creativity whatsoever’ (Feist, 1991, p.362). The negative of creativity as highly mechanical has particularly diffused globally.
A recent interpretation has correlated ‘so mechanical’ (Feist, 1991) with an automatic mechanical procedure or computational process, using a rigorous exegesis to fully correlate the two uses of ‘mechanical’. The negative of creativity is then understood as an automatic computation and as a highly routine process. Creativity itself is conversely understood as non-computational activity above a certain level of routinicity (Warner, 2013).
The distinction between the negative of creativity and creativity is strongly analogous to an independently developed distinction between forms of mental labour, between semantic and syntactic labour. Semantic labour is understood as human labour motivated by considerations of meaning and syntactic labour as concerned solely with patterns. Semantic labour is distinctively human while syntactic labour can be directly humanly conducted or delegated to machine, as an automatic computational process (Warner, 2005; 2010, pp.33-41).
The value of the analogy is to greatly increase the intersubjective scope of the distinction between semantic and syntactic mental labour. The global diffusion of the standard for extreme absence of copyrightability embodied in the judgment also indicates the possibility that the distinction fully captures the current transformation in the distribution of mental labour, where syntactic tasks which were previously humanly performed are now increasingly conducted by machine.
The paper has substantive and methodological relevance to the conference themes. Substantively, it is concerned with human creativity and with rationality as not reducible to computation, and it has relevance to the language myth through its indirect endorsement of a non-computable, or not mechanical, semantics. These themes are supported by the underlying idea of technology as a human construction. Methodologically, the paper is rooted in the humanities and conducts critical thinking through exegesis and empirically tested theoretical development.
References
Feist. (1991). Feist Publications, Inc. v. Rural Tel. Service Co., Inc., 499 U.S. 340.
Warner, J. (2005). Labor in information systems. Annual Review of Information Science and Technology, 39, 551-573.
Warner, J. (2010). Human Information Retrieval (History and Foundations of Information Science Series). Cambridge, MA: MIT Press.
Warner, J. (2013). Creativity for Feist. Journal of the American Society for Information Science and Technology, 64(6), 1173-1192.
Abstract:
No abstract available
Abstract:
Vector space models (VSMs) represent word meanings as points in a high-dimensional space. VSMs are typically created from large text corpora, and so represent word semantics as observed in text. We present a new algorithm (JNNSE) that can incorporate a measure of semantics not previously used to create VSMs: brain activation data recorded while people read words. The resulting model takes advantage of the complementary strengths and weaknesses of corpus and brain activation data to give a more complete representation of semantics. Evaluations show that the model 1) matches a behavioral measure of semantics more closely, 2) can be used to predict corpus data for unseen words, and 3) has predictive power that generalizes across brain imaging technologies and across subjects. We believe that the model is thus a more faithful representation of mental vocabularies.
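The joint-factorization idea can be sketched as two reconstruction losses sharing one embedding matrix, so that the corpus view and the brain view constrain the same word representations; the projected-gradient loop and synthetic matrices below are illustrative assumptions, not the published JNNSE solver.

```python
# Minimal sketch (not the published JNNSE solver) of the joint idea:
# one shared non-negative embedding A for the same words, reconstructing
# both a corpus matrix X and a brain-activation matrix Y:
#   min ||X - A Dx||^2 + ||Y - A Dy||^2   s.t.  A >= 0.
import numpy as np

rng = np.random.default_rng(0)
n_words, k = 40, 8
X = np.abs(rng.standard_normal((n_words, 30)))   # corpus statistics
Y = np.abs(rng.standard_normal((n_words, 20)))   # brain activation features

A  = np.abs(rng.standard_normal((n_words, k)))
Dx = rng.standard_normal((k, 30))
Dy = rng.standard_normal((k, 20))
lr = 1e-3

for _ in range(500):
    Rx, Ry = A @ Dx - X, A @ Dy - Y
    # A receives gradients from BOTH views, so the embedding is shared
    A = np.maximum(A - lr * (Rx @ Dx.T + Ry @ Dy.T), 0.0)
    Dx -= lr * (A.T @ Rx)
    Dy -= lr * (A.T @ Ry)

print("corpus fit:", np.linalg.norm(X - A @ Dx).round(2))
print("brain fit :", np.linalg.norm(Y - A @ Dy).round(2))
```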