898 results for Latent Semantic Indexing
Abstract:
Relative (comparative) attributes are promising for thematic ranking of visual entities, which also aids in recognition tasks. However, attribute rank learning often requires a substantial amount of relational supervision, which is highly tedious and impractical for real-world applications. In this paper, we introduce the Semantic Transform, which, under minimal supervision, adaptively finds a semantic feature space along with a class ordering that are related in the best possible way. Such a semantic space is found for every attribute category. To relate the classes under weak supervision, the class ordering needs to be refined according to a cost function in an iterative procedure. This problem is NP-hard, and we therefore propose a constrained search-tree formulation for it. Driven by the adaptive semantic feature space representation, our model achieves the best results to date on the tasks of relative, absolute, and zero-shot classification on two popular datasets. © 2013 IEEE.
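To make the search component above concrete, here is a hedged sketch of a generic branch-and-bound over class orderings; the cost and bound callables stand in for the paper's semantic-space objective and search-tree constraints, which are not reproduced here, and all names are illustrative.

```python
# Generic branch-and-bound over class orderings; `partial_cost` and
# `lower_bound` are placeholders for the paper's actual cost function
# and search-tree constraints.
import heapq
from itertools import count

def best_ordering(classes, partial_cost, lower_bound):
    best, best_cost = None, float("inf")
    tie = count()                      # tie-breaker keeps heap tuples comparable
    heap = [(0.0, next(tie), [])]
    while heap:
        bound, _, prefix = heapq.heappop(heap)
        if bound >= best_cost:
            continue                   # pruned by the current best solution
        if len(prefix) == len(classes):
            cost = partial_cost(prefix)
            if cost < best_cost:
                best, best_cost = prefix, cost
            continue
        for c in classes:
            if c not in prefix:
                nxt = prefix + [c]
                b = partial_cost(nxt) + lower_bound(nxt)
                if b < best_cost:      # expand only promising branches
                    heapq.heappush(heap, (b, next(tie), nxt))
    return best
```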
Abstract:
Ontologies play a core role in providing shared knowledge models to the semantic-driven applications targeted by the Semantic Web. Ontology metrics have become an important research area because they help ontology engineers assess ontologies and better control the management and development of ontology-based systems, thereby reducing the risk of project failure. In this paper, we propose a set of ontology cohesion metrics focused on measuring (possibly inconsistent) ontologies in the context of the dynamic and changing Web: Number of Ontology Partitions (NOP), Number of Minimally Inconsistent Subsets (NMIS), and Average Value of Axiom Inconsistencies (AVAI). These metrics measure ontological semantics rather than ontological structure. They are theoretically validated to ensure their soundness, and further empirically validated on a standard test set of debugging ontologies. Algorithms to compute these metrics are also discussed. The metrics proposed in this paper can serve as a useful complement to existing ontology cohesion metrics.
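A minimal sketch of how such metrics might be computed, assuming axioms are represented as sets of the terms they mention and that minimally inconsistent subsets come from an external reasoner; the shared-term partitioning criterion and the averaging used for AVAI are assumptions, not the authors' published algorithms.

```python
# Hypothetical sketch: axioms are frozensets of term names; minimally
# inconsistent subsets (MIS) are assumed to come from an external reasoner.
from itertools import combinations

def nop(axioms):
    """NOP: connected components of the axiom graph, where two axioms
    are linked if they share a term (assumed criterion)."""
    parent = list(range(len(axioms)))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i
    for i, j in combinations(range(len(axioms)), 2):
        if axioms[i] & axioms[j]:
            parent[find(i)] = find(j)
    return len({find(i) for i in range(len(axioms))})

def nmis(mis_list):
    """NMIS: the number of minimally inconsistent subsets."""
    return len(mis_list)

def avai(axioms, mis_list):
    """AVAI: average number of MIS memberships per axiom (one plausible
    reading of 'average value of axiom inconsistencies')."""
    return sum(len(m) for m in mis_list) / len(axioms) if axioms else 0.0

axioms = [frozenset({"A", "B"}), frozenset({"B", "C"}), frozenset({"D"})]
print(nop(axioms))  # 2: the first two axioms connect through B; {D} is alone
```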
Abstract:
The Gaussian process latent variable model (GP-LVM) has been shown to be an effective probabilistic approach for dimensionality reduction because it can obtain a low-dimensional manifold of a data set in an unsupervised fashion. However, the GP-LVM is insufficient for supervised learning tasks (e.g., classification and regression) because it ignores class label information during dimensionality reduction. In this paper, a supervised GP-LVM is developed for supervised learning tasks, and a maximum a posteriori algorithm is introduced to estimate the positions of all samples in the latent variable space. We present experimental evidence suggesting that the supervised GP-LVM is able to use class label information effectively, and thus consistently outperforms both the GP-LVM and its discriminative extension. A comparison with supervised classification methods, such as Gaussian process classification and support vector machines, is also given to illustrate the advantage of the proposed method.
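A rough numerical sketch of MAP estimation of latent positions in a GP-LVM, with a simple same-class attraction term standing in for the paper's label-dependent prior; the surrogate term, kernel choice, and all parameter names are assumptions for illustration only.

```python
# MAP estimation of GP-LVM latent positions on tiny data. The supervised
# term below (pulling same-class points together) is an illustrative
# surrogate, not the paper's exact formulation.
import numpy as np
from scipy.optimize import minimize

def rbf_kernel(X, lengthscale=1.0, variance=1.0):
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    return variance * np.exp(-0.5 * sq / lengthscale ** 2)

def neg_log_posterior(x_flat, Y, labels, Q, noise=0.1, label_weight=1.0):
    N, D = Y.shape
    X = x_flat.reshape(N, Q)
    K = rbf_kernel(X) + noise * np.eye(N)
    _, logdet = np.linalg.slogdet(K)
    Kinv_Y = np.linalg.solve(K, Y)
    nll = 0.5 * D * logdet + 0.5 * np.sum(Y * Kinv_Y)  # GP-LVM likelihood
    nll += 0.5 * np.sum(X ** 2)                  # standard normal prior on X
    same = labels[:, None] == labels[None, :]    # surrogate supervised term:
    dists = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    nll += label_weight * np.sum(dists[same])    # keep same-class points close
    return nll

def fit(Y, labels, Q=2, seed=0):
    rng = np.random.default_rng(seed)
    x0 = 0.1 * rng.standard_normal((Y.shape[0], Q)).ravel()
    res = minimize(neg_log_posterior, x0, args=(Y, labels, Q),
                   method="L-BFGS-B")            # finite-difference gradients
    return res.x.reshape(Y.shape[0], Q)

# Tiny usage on synthetic two-class data:
Y = np.vstack([np.random.randn(5, 10) + 3, np.random.randn(5, 10)])
labels = np.array([0] * 5 + [1] * 5)
X = fit(Y, labels)  # 10 x 2 latent positions informed by the labels
```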
Abstract:
A simple and environmentally friendly chemical route for detecting latent fingermarks by a one-step single-metal nanoparticle deposition (SND) method was successfully demonstrated on several non-porous items. Gold nanoparticles (AuNPs), synthesized using sodium borohydride as the reducing agent in the presence of glucose, were used as the working solution for latent fingermark detection. The SND technique requires only one step to obtain clear ridge detail over a wide pH range (2.5-5.0), whereas the standard multi-metal deposition (MMD) technique requires six baths within a narrow pH range (2.5-2.8). This makes SND very convenient for forensic operators detecting latent fingermarks at the scene or in the laboratory. The SND technique provided sharp and clear development of latent fingermarks without background staining, while dramatically reducing the number of bath steps.
Abstract:
This paper describes START, a natural language system. The system analyzes English text and automatically transforms it into an appropriate representation, the knowledge base, which incorporates the information found in the text. The user gains access to information stored in the knowledge base by querying it in English. The system analyzes the query and decides, through a matching process, what information in the knowledge base is relevant to the question. It then retrieves this information and formulates its response, also in English.
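A toy sketch of the analyze/store/match cycle the abstract describes, using a drastically simplified triple representation; START's actual linguistic analysis and knowledge-base machinery are far richer than this.

```python
# Toy analyze/store/match cycle: declaratives become triples, questions
# are matched against the stored triples. Illustrative only.
import re

kb = set()  # (subject, relation, object) triples

def tell(sentence):
    """Store a declarative of the form 'X is a Y.' as a triple."""
    m = re.match(r"(\w+) is an? (\w+)\.?$", sentence, re.I)
    if m:
        kb.add((m.group(1).lower(), "is-a", m.group(2).lower()))

def ask(question):
    """Answer 'What is X?' by matching against stored triples."""
    m = re.match(r"what is (\w+)\??$", question, re.I)
    if not m:
        return "I don't understand."
    subj = m.group(1).lower()
    answers = [o for s, r, o in kb if s == subj and r == "is-a"]
    return (f"{subj.capitalize()} is a {answers[0]}." if answers
            else f"I don't know what {subj} is.")

tell("Rex is a dog.")
print(ask("What is Rex?"))  # -> Rex is a dog.
```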
Abstract:
Many current recognition systems use constrained search to locate objects in cluttered environments. Previous formal analysis has shown that the expected amount of search is quadratic in the number of model and data features if all the data is known to come from a single object, but is exponential when spurious data is included. If one can group the data into subsets likely to have come from a single object, then terminating the search once a "good enough" interpretation is found reduces the expected search to cubic. Without successful grouping, terminated search is still exponential. These results apply to finding instances of a known object in the data. In this paper, we turn to the problem of selecting models from a library, and examine the combinatorics of determining that a candidate object is not present in the data. We show that the expected search is again exponential, implying that naïve approaches to indexing are likely to carry an expensive overhead, since an exponential amount of work is needed to weed out each of the incorrect models. The analytic results are shown to be in agreement with empirical data for cluttered object recognition.
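A small sketch of the constrained-search (interpretation-tree) model underlying this analysis: each data feature is matched to a model feature or to a wildcard for spurious data, partial interpretations are pruned by a pairwise consistency test, and search stops at the first "good enough" interpretation. The consistency test and termination threshold are user-supplied placeholders.

```python
# Interpretation-tree search with early ("good enough") termination.
# `consistent` and `good_enough` are placeholders supplied by the caller.
def interpret(model, data, consistent, good_enough, pairing=()):
    if good_enough(pairing):
        return pairing                     # early termination
    if len(pairing) == len(data):
        return None                        # data exhausted without success
    d = data[len(pairing)]
    for m in list(model) + [None]:         # None = wildcard (spurious datum)
        ok = m is None or all(
            consistent(d, m, d2, m2)
            for d2, m2 in pairing if m2 is not None)
        if ok:
            result = interpret(model, data, consistent, good_enough,
                               pairing + ((d, m),))
            if result is not None:
                return result
    return None
```

The wildcard branch is what makes the search exponential in the presence of spurious data: every datum can always be dismissed, so failing interpretations are abandoned only after deep exploration.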
Abstract:
SIR is a computer system, programmed in the LISP language, which accepts information and answers questions expressed in a restricted form of English. This system demonstrates what can reasonably be called an ability to "understand" semantic information. SIR's semantic and deductive ability is based on the construction of an internal model, which uses word associations and property lists, for the relational information normally conveyed in conversational statements. A format-matching procedure extracts semantic content from English sentences. If an input sentence is declarative, the system adds appropriate information to the model. If an input sentence is a question, the system searches the model until it either finds the answer or determines why it cannot find the answer. In all cases SIR reports its conclusions. The system has some capacity to recognize exceptions to general rules, resolve certain semantic ambiguities, and modify its model structure in order to save computer memory space. Judging from its conversational ability, SIR is a first step toward intelligent man-machine communication. The author proposes a next step by describing how to construct a more general system which is less complex and yet more powerful than SIR. This proposed system contains a generalized version of the SIR model, a formal logical system called SIR1, and a computer program for testing the truth of SIR1 statements with respect to the generalized model by using partial proof procedures in the predicate calculus. The thesis also describes the formal properties of SIR1 and how they relate to the logical structure of SIR.
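A minimal sketch of SIR-style deduction over property lists, assuming the model stores set-inclusion links on each word's property list and answers questions by chasing them; the actual LISP system handles many more relation types (part-whole, ownership, equivalence, and so on).

```python
# Property-list model with one relation: set inclusion. Deduction is a
# search along the inclusion chain, as in SIR's classic examples.
model = {}  # word -> property list, here just {"superset": [...]}

def add_inclusion(sub, sup):
    """Record 'every SUB is a SUP' on SUB's property list."""
    model.setdefault(sub, {}).setdefault("superset", []).append(sup)

def is_a(sub, sup, seen=None):
    """Deduce whether SUB is a SUP by chasing inclusion links."""
    seen = seen or set()
    if sub == sup:
        return True
    for s in model.get(sub, {}).get("superset", []):
        if s not in seen:
            seen.add(s)
            if is_a(s, sup, seen):
                return True
    return False

add_inclusion("boy", "person")
add_inclusion("person", "animal")
print(is_a("boy", "animal"))  # True, via boy -> person -> animal
```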
Abstract:
The research reported here concerns the principles used to automatically generate three-dimensional representations from line drawings of scenes. The computer programs involved look at scenes which consist of polyhedra and which may contain shadows and various kinds of coincidentally aligned scene features. Each generated description includes information about edge shape (convex, concave, occluding, shadow, etc.), about the type of illumination for each region (illuminated, projected shadow, or oriented away from the light source), and about the spatial orientation of regions. The methods used are based on the labeling schemes of Huffman and Clowes; this research provides a considerable extension to their work and also gives theoretical explanations for the heuristic scene analysis work of Guzman, Winston, and others.
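A compact sketch of catalogue-based junction labeling in the Huffman-Clowes style this work extends: each edge receives a label (convex, concave, or occluding) and a labeling is accepted only if every junction's label tuple appears in that junction type's catalogue. The catalogue contents must come from Huffman's and Clowes's tables (extended in this research with shadow labels), so they are left as an input here.

```python
# Catalogue-based edge labeling for line drawings; brute force, so only
# suitable for small drawings. The catalogue itself is caller-supplied.
from itertools import product

LABELS = ["+", "-", "->", "<-"]  # convex, concave, two occluding senses

def label_drawing(edges, junctions, catalogue):
    """edges: list of edge ids.
    junctions: list of (junction_type, tuple_of_edge_ids).
    catalogue: dict junction_type -> set of allowed label tuples.
    Returns one globally consistent labeling, or None."""
    for assignment in product(LABELS, repeat=len(edges)):
        lab = dict(zip(edges, assignment))
        if all(tuple(lab[e] for e in js) in catalogue[jt]
               for jt, js in junctions):
            return lab
    return None
```

The enumeration here is exponential in the number of edges; Waltz-style constraint propagation over the same catalogues is what makes labeling practical for larger drawings.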
Abstract:
Eckerdal, A., McCartney, R., Moström, J.E., Ratcliffe, M., Zander, C., 'Comparing Student Software Designs Using Semantic Categorization', Proceedings of the Fifth Finnish/Baltic Sea Conference on Computer Science Education, 2005.
Abstract:
To be presented at the SIG/ISMB07 ontology workshop (http://bio-ontologies.org.uk/index.php). To be published in BMC Bioinformatics. Sponsorship: JISC.
Abstract:
Rothwell, W., 'Synonymity and Semantic Variability in Medieval French and Middle English', Modern Language Review (2007) 102(2), pp. 363-380. RAE2008.
Abstract:
Two new notions of reduction for terms of the λ-calculus are introduced and the question of whether a λ-term is beta-strongly normalizing is reduced to the question of whether a λ-term is merely normalizing under one of the new notions of reduction. This leads to a new way to prove beta-strong normalization for typed λ-calculi. Instead of the usual semantic proof style based on Girard's "candidats de réductibilité'', termination can be proved using a decreasing metric over a well-founded ordering in a style more common in the field of term rewriting. This new proof method is applied to the simply-typed λ-calculus and the system of intersection types.
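To make the reduction machinery concrete, here is a small sketch of ordinary beta reduction on de Bruijn-indexed terms; the paper's two new notions of reduction are variations on this relation and are not reproduced here.

```python
# Normal-order beta reduction on de Bruijn-indexed lambda terms.
# Terms: ('var', n) | ('lam', body) | ('app', f, a).

def shift(t, d, cutoff=0):
    """Shift free indices in t by d."""
    if t[0] == 'var':
        return ('var', t[1] + d) if t[1] >= cutoff else t
    if t[0] == 'lam':
        return ('lam', shift(t[1], d, cutoff + 1))
    return ('app', shift(t[1], d, cutoff), shift(t[2], d, cutoff))

def subst(t, j, s):
    """Substitute s for index j in t (capture-avoiding via shifting)."""
    if t[0] == 'var':
        return s if t[1] == j else t
    if t[0] == 'lam':
        return ('lam', subst(t[1], j + 1, shift(s, 1)))
    return ('app', subst(t[1], j, s), subst(t[2], j, s))

def step(t):
    """One normal-order beta step, or None if t is in normal form."""
    if t[0] == 'app' and t[1][0] == 'lam':          # beta redex
        return shift(subst(t[1][1], 0, shift(t[2], 1)), -1)
    if t[0] == 'app':
        r = step(t[1])
        if r is not None:
            return ('app', r, t[2])
        r = step(t[2])
        return ('app', t[1], r) if r is not None else None
    if t[0] == 'lam':
        r = step(t[1])
        return ('lam', r) if r is not None else None
    return None

# (\x. x) (\y. y) reduces to \y. y in one step:
ident = ('lam', ('var', 0))
print(step(('app', ident, ident)))  # -> ('lam', ('var', 0))
```

A term is normalizing when iterating `step` reaches `None`; strong normalization additionally demands that every reduction order terminates, which is the property the paper's new reduction notions are designed to capture.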