880 results for Feature ontology
Abstract:
This paper presents a robust stochastic framework for the incorporation of visual observations into conventional estimation, data fusion, navigation and control algorithms. The representation combines Isomap, a non-linear dimensionality reduction algorithm, with expectation maximization, a statistical learning scheme. The joint probability distribution of this representation is computed offline based on existing training data. The training phase of the algorithm results in a nonlinear and non-Gaussian likelihood model of natural features conditioned on the underlying visual states. This generative model can be used online to instantiate likelihoods corresponding to observed visual features in real-time. The instantiated likelihoods are expressed as a Gaussian mixture model and are conveniently integrated within existing non-linear filtering algorithms. Example applications based on real visual data from heterogeneous, unstructured environments demonstrate the versatility of the generative models.
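To make the offline training stage concrete, the following is a minimal sketch assuming synthetic training data (the actual imagery, feature extraction and model sizes are not given in the abstract): Isomap reduces the visual features to a low-dimensional embedding, and an EM-fitted Gaussian mixture models the joint density of states and embedded features.

```python
# Sketch of the offline training stage, using synthetic stand-in data:
# high-dimensional visual feature vectors paired with low-dimensional
# visual states. Isomap reduces the feature dimension; a Gaussian mixture
# fitted by EM models the joint density, from which per-observation
# likelihoods can later be instantiated online.
import numpy as np
from sklearn.manifold import Isomap
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
states = rng.uniform(-1.0, 1.0, size=(500, 2))           # hypothetical visual states
features = np.hstack([np.sin(3 * states), states ** 2,   # hypothetical visual features
                      rng.normal(scale=0.05, size=(500, 4))])

# Non-linear dimensionality reduction of the raw visual features.
embedded = Isomap(n_components=3, n_neighbors=10).fit_transform(features)

# EM-fitted joint density over (state, embedded feature); stored offline.
joint = np.hstack([states, embedded])
gmm = GaussianMixture(n_components=8, covariance_type='full', random_state=0).fit(joint)
print("per-sample joint log-likelihood:", gmm.score(joint))
```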
Abstract:
This paper presents a robust stochastic model for the incorporation of natural features within data fusion algorithms. The representation combines Isomap, a non-linear manifold learning algorithm, with Expectation Maximization, a statistical learning scheme. The representation is computed offline and results in a non-linear, non-Gaussian likelihood model relating visual observations such as color and texture to the underlying visual states. The likelihood model can be used online to instantiate likelihoods corresponding to observed visual features in real-time. The likelihoods are expressed as a Gaussian Mixture Model so as to permit convenient integration within existing nonlinear filtering algorithms. The compactness of the resulting representation makes it especially suitable for decentralized sensor networks. Real visual data consisting of natural imagery acquired from an Unmanned Aerial Vehicle is used to demonstrate the versatility of the feature representation.
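The online use of such a likelihood within a nonlinear filter might look like the following sketch. The mixture parameters are placeholders standing in for what a trained model would instantiate from one observed colour/texture feature; the update shown is a generic particle-filter re-weighting, not the paper's specific fusion algorithm.

```python
# Sketch of the online stage: an observed feature instantiates a
# Gaussian-mixture likelihood over the state, which then re-weights
# particles in a generic particle filter measurement update.
import numpy as np
from scipy.stats import multivariate_normal

def gmm_likelihood(x, weights, means, covs):
    """Evaluate a Gaussian mixture likelihood at the state(s) x."""
    return sum(w * multivariate_normal.pdf(x, mean=m, cov=c)
               for w, m, c in zip(weights, means, covs))

# Hypothetical 2-component likelihood instantiated from one observation.
weights = np.array([0.6, 0.4])
means = [np.array([0.2, -0.1]), np.array([0.5, 0.3])]
covs = [0.05 * np.eye(2), 0.08 * np.eye(2)]

# Measurement update: weight particles by the instantiated likelihood.
particles = np.random.default_rng(1).normal(size=(1000, 2))
w = gmm_likelihood(particles, weights, means, covs)
w /= w.sum()
print("effective sample size:", 1.0 / np.sum(w ** 2))
```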
Abstract:
"How do you film a punch?" This question can be posed by actors, make-up artists, directors and cameramen. Though they can all ask the same question, they are not all seeking the same answer. Within a given domain, based on the roles they play, agents of the domain have different perspectives and they want the answers to their question from their perspective. In this example, an actor wants to know how to act when filming a scene involving a punch. A make-up artist is interested in how to do the make-up of the actor to show bruises that may result from the punch. Likewise, a director wants to know how to direct such a scene and a cameraman is seeking guidance on how best to film such a scene. This role-based difference in perspective is the underpinning of the Loculus framework for information management for the Motion Picture Industry. The Loculus framework exploits the perspective of agent for information extraction and classification within a given domain. The framework uses the positioning of the agent’s role within the domain ontology and its relatedness to other concepts in the ontology to determine the perspective of the agent. Domain ontology had to be developed for the motion picture industry as the domain lacked one. A rule-based relatedness score was developed to calculate the relative relatedness of concepts with the ontology, which were then used in the Loculus system for information exploitation and classification. The evaluation undertaken to date have yielded promising results and have indicated that exploiting perspective can lead to novel methods of information extraction and classifications.
Abstract:
Uncooperative iris identification systems at a distance suffer from poor resolution of the captured iris images, which significantly degrades iris recognition performance. Super-resolution techniques have been employed to enhance the resolution of iris images and improve the recognition performance. However, all existing super-resolution approaches proposed for the iris biometric super-resolve pixel intensity values. This paper considers transferring super-resolution of iris images from the intensity domain to the feature domain. By directly super-resolving only the features essential for recognition, and by incorporating domain-specific information from iris models, improved recognition performance compared to pixel-domain super-resolution can be achieved. This is the first paper to investigate the possibility of feature-domain super-resolution for iris recognition, and experiments confirm the validity of the proposed approach.
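The paper's feature-domain super-resolution method is not detailed in the abstract; the sketch below only illustrates the general idea of fusing several low-resolution captures in feature space rather than in the pixel domain, with a random projection standing in for a real (e.g. Gabor-based) iris feature extractor.

```python
# Illustrative feature-domain fusion: encode each low-resolution capture
# into a feature vector, then fuse the feature vectors (quality-weighted
# average) instead of first fusing pixel intensities.
import numpy as np

rng = np.random.default_rng(0)
frames = rng.normal(size=(5, 64, 64))            # hypothetical low-res iris captures
proj = rng.normal(size=(64 * 64, 128)) / 64.0    # stand-in feature extractor

def extract_features(img):
    """Placeholder feature encoder (real iris systems use Gabor filtering)."""
    return img.ravel() @ proj

feature_stack = np.array([extract_features(f) for f in frames])

# Quality-weighted fusion performed entirely in the feature domain.
quality = np.array([0.9, 0.7, 0.8, 0.6, 0.95])
fused_feature = (quality[:, None] * feature_stack).sum(axis=0) / quality.sum()
print(fused_feature.shape)
```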
Abstract:
Review of Suicide: Foucault, History and Truth, by Ian Marsh
Abstract:
Automated analysis of the sentiments presented in online consumer feedback can facilitate both organizations’ business strategy development and individual consumers’ comparison shopping. Nevertheless, existing opinion mining methods either adopt a context-free sentiment classification approach or rely on a large number of manually annotated training examples to perform context-sensitive sentiment classification. Guided by the design science research methodology, we illustrate the design, development, and evaluation of a novel fuzzy domain ontology based context-sensitive opinion mining system. Our novel ontology extraction mechanism, underpinned by a variant of Kullback-Leibler divergence, can automatically acquire contextual sentiment knowledge across various product domains to improve the sentiment analysis processes. Evaluated on a benchmark dataset and real consumer reviews collected from Amazon.com, our system shows remarkable performance improvement over the context-free baseline.
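As an illustration of the divergence-based term scoring the abstract alludes to, the sketch below ranks terms by a smoothed pointwise Kullback-Leibler contribution between positive and negative review language. The system's actual KL variant and fuzzy ontology construction are not reproduced here, and the toy reviews are invented.

```python
# Terms whose distribution differs most between positive and negative
# reviews of a domain are treated as candidate context-sensitive
# sentiment indicators.
import math
from collections import Counter

pos_reviews = ["long battery life", "battery lasts long", "great screen"]
neg_reviews = ["battery died fast", "screen cracked", "died after a week"]

def term_dist(docs):
    counts = Counter(w for d in docs for w in d.split())
    return counts, sum(counts.values())

pos_counts, pos_total = term_dist(pos_reviews)
neg_counts, neg_total = term_dist(neg_reviews)
vocab = set(pos_counts) | set(neg_counts)

def kl_contribution(term, alpha=1.0):
    """Smoothed pointwise contribution of `term` to KL(P_pos || P_neg)."""
    p = (pos_counts[term] + alpha) / (pos_total + alpha * len(vocab))
    q = (neg_counts[term] + alpha) / (neg_total + alpha * len(vocab))
    return p * math.log(p / q)

for term, score in sorted(((t, kl_contribution(t)) for t in vocab),
                          key=lambda x: -x[1])[:5]:
    print(f"{term}\t{score:.4f}")
```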
Abstract:
It is a big challenge to guarantee the quality of discovered relevance features in text documents for describing user preferences because of the large number of terms, patterns, and noise. Most existing popular text mining and classification methods have adopted term-based approaches. However, they have all suffered from the problems of polysemy and synonymy. Over the years, people have often held the hypothesis that pattern-based methods should perform better than term-based ones in describing user preferences, but many experiments do not support this hypothesis. The innovative technique presented in this paper makes a breakthrough on this difficulty. This technique discovers both positive and negative patterns in text documents as higher-level features in order to accurately weight low-level features (terms) based on their specificity and their distributions in the higher-level features. Substantial experiments using this technique on Reuters Corpus Volume 1 and TREC topics show that the proposed approach significantly outperforms both the state-of-the-art term-based methods underpinned by Okapi BM25, Rocchio or Support Vector Machine and pattern-based methods on precision, recall and F measures.
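A minimal sketch of the pattern-deploying idea follows, under the assumption that the higher-level features are simply frequent term sets mined from relevant documents: each term's weight is accumulated from the patterns it occurs in, with longer (more specific) patterns contributing more. This is an illustration of the general technique, not the paper's exact weighting scheme.

```python
# Mine term-set "patterns" from relevant documents, then deploy them onto
# individual terms so low-level features inherit weight from the
# higher-level patterns they participate in.
from collections import defaultdict
from itertools import combinations

relevant_docs = [
    {"ontology", "feature", "mining"},
    {"ontology", "feature", "profile"},
    {"feature", "mining", "text"},
]

# Toy frequent-itemset step with minimum support 2.
min_support = 2
pattern_counts = defaultdict(int)
for doc in relevant_docs:
    for size in (2, 3):
        for combo in combinations(sorted(doc), size):
            pattern_counts[combo] += 1
patterns = {p: s for p, s in pattern_counts.items() if s >= min_support}

# Deploy patterns onto terms: longer patterns are treated as more specific.
term_weight = defaultdict(float)
for pattern, support in patterns.items():
    for term in pattern:
        term_weight[term] += support * len(pattern) / len(relevant_docs)

for term, w in sorted(term_weight.items(), key=lambda x: -x[1]):
    print(f"{term}\t{w:.2f}")
```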
Abstract:
Intelligent agents are an advanced technology utilized in Web Intelligence. When searching for information in a distributed Web environment, information is retrieved by multi-agents on the client site and fused on the broker site. Current information fusion techniques rely on the cooperation of agents to provide statistics. Such techniques are computationally expensive and unrealistic in the real world. In this paper, we introduce a model that uses a world ontology constructed from the Dewey Decimal Classification to acquire user profiles. By searching with specific and exhaustive user profiles, information fusion techniques no longer need to rely on the statistics provided by agents. The model has been successfully evaluated using the large INEX data set, which simulates the distributed Web environment.
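A rough sketch of an ontology-based user profile of the kind described: the Dewey Decimal Classification itself is not reproduced, so a tiny stand-in subject hierarchy is used, and the profile is simply the normalised weight of matched subjects with part of each weight propagated to ancestors.

```python
# Build a user profile as weights over ontology subjects matched in the
# user's documents, propagating part of each weight up the hierarchy.
from collections import Counter

# Toy stand-in for a fragment of a world ontology (subject -> parent).
parent = {
    "Machine learning": "Computer science",
    "Information retrieval": "Computer science",
    "Computer science": "Technology",
}

user_docs = [
    "notes on machine learning and information retrieval",
    "an information retrieval experiment",
]

def matched_subjects(text):
    return [s for s in parent if s.lower() in text.lower()]

counts = Counter(s for d in user_docs for s in matched_subjects(d))

profile = Counter()
for subject, c in counts.items():
    profile[subject] += float(c)
    node = parent.get(subject)
    while node:
        profile[node] += 0.5 * c    # ancestors receive half the weight
        node = parent.get(node)

total = sum(profile.values())
for subject, w in profile.most_common():
    print(f"{subject}\t{w / total:.2f}")
```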
Abstract:
The journalism revolution is upon us. In a world where we are constantly being told that everyone can be a publisher and challenges are emerging from bloggers, Twitterers and podcasters, journalism educators are inevitably reassessing what skills we now need to teach to keep our graduates ahead of the game. QUT this year tackled that question head-on as a curriculum review and program restructure resulted in a greater emphasis on online journalism. The author spent a week in the online newsrooms of two of the major players, ABC online news and thecouriermail.com, to watch, listen and interview some of the key players. This, in addition to interviews with industry leaders from Fairfax and news.com, led to the conclusion that while there are some new skills involved in new media, much of what the industry is demanding is in fact good old-fashioned journalism. Themes of good spelling, grammar, accuracy and writing skills and a nose for news recurred when industry players were asked what it was that they would like to see in new graduates. While speed was cited as one of the big attributes needed in online journalism, the conclusion of many of the players was that the skills of a good down-table sub or a journalist working for a wire service were not unlike those most used in online newsrooms.
Abstract:
Despite many arguments to the contrary, the three-act story structure, as propounded and refined by Hollywood, continues to dominate the blockbuster and independent film markets. Recent successes in post-modern cinema could indicate new directions and opportunities for low-budget national cinemas.
Abstract:
As a model for knowledge description and formalization, ontologies are widely used to represent user profiles in personalized web information gathering. However, when representing user profiles, many models have utilized only knowledge from either a global knowledge base or a user's local information. In this paper, a personalized ontology model is proposed for knowledge representation and reasoning over user profiles. This model learns ontological user profiles from both a world knowledge base and user local instance repositories. The ontology model is evaluated by comparing it against benchmark models in web information gathering. The results show that this ontology model is successful.
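The exact formulation that combines the world knowledge base with local instance repositories is not given in the abstract; the sketch below only illustrates one plausible reading, mixing a subject's depth in a global taxonomy (as a crude specificity measure) with its support in the user's local documents. All subjects and numbers are invented.

```python
# Weight each profile subject by combining global specificity (taxonomy
# depth) with local support (document counts in the user's repository).
taxonomy_depth = {          # stand-in for a world knowledge base
    "Technology": 1,
    "Computer science": 2,
    "Web information gathering": 3,
}
local_support = {           # documents in the user's local instance repository
    "Technology": 2,
    "Computer science": 5,
    "Web information gathering": 4,
}

max_depth = max(taxonomy_depth.values())
total_local = sum(local_support.values())

def profile_weight(subject, mix=0.5):
    specificity = taxonomy_depth[subject] / max_depth
    support = local_support.get(subject, 0) / total_local
    return mix * specificity + (1 - mix) * support

for s in taxonomy_depth:
    print(f"{s}\t{profile_weight(s):.2f}")
```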
Abstract:
The use of appropriate features to characterise an output class or object is critical for all classification problems. In order to find optimal feature descriptors for vegetation species classification in a power line corridor monitoring application, this article evaluates the capability of several spectral and texture features. A new spectral–texture feature descriptor is proposed by incorporating spectral vegetation indices in statistical moment features. The proposed method is evaluated against several classic texture feature descriptors. An object-based classification method is used and a support vector machine is employed as the benchmark classifier. Individual tree crowns are first detected and segmented from aerial images and different feature vectors are extracted to represent each tree crown. The experimental results showed that the proposed spectral moment features outperform, or are at least comparable to, the state-of-the-art texture descriptors in terms of classification accuracy. A comprehensive quantitative evaluation using receiver operating characteristic space analysis further demonstrates the strength of the proposed feature descriptors.
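A minimal sketch of the spectral-moment idea as described: a spectral vegetation index (NDVI is assumed here, which presumes near-infrared and red bands) is computed per pixel inside a segmented tree crown, its statistical moments form the feature vector, and a support vector machine is trained on those vectors. Crown segmentation and the simulated data are placeholders.

```python
# Spectral-texture descriptor sketch: statistical moments of a spectral
# vegetation index over each segmented tree crown, fed to an SVM.
import numpy as np
from scipy.stats import skew, kurtosis
from sklearn.svm import SVC

rng = np.random.default_rng(0)

def spectral_moment_features(nir, red):
    """Moments of an NDVI-like index over one segmented tree crown."""
    ndvi = (nir - red) / (nir + red + 1e-6)
    v = ndvi.ravel()
    return np.array([v.mean(), v.std(), skew(v), kurtosis(v)])

# Hypothetical crowns from two species, simulated with different NIR levels.
def fake_crown(nir_level):
    return rng.normal(nir_level, 0.05, (32, 32)), rng.normal(0.3, 0.05, (32, 32))

X = np.array([spectral_moment_features(*fake_crown(0.8)) for _ in range(20)] +
             [spectral_moment_features(*fake_crown(0.6)) for _ in range(20)])
y = np.array([0] * 20 + [1] * 20)

clf = SVC(kernel='rbf').fit(X, y)
print("training accuracy:", clf.score(X, y))
```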
Abstract:
Recently, user tagging systems have grown in popularity on the web. The tagging process is quite simple for ordinary users, which contributes to its popularity. However, the free vocabulary lacks standardization and suffers from semantic ambiguity. It is possible to capture the semantics from user tagging and represent them in the form of an ontology, but the application of the learned ontology to recommendation making has not flourished. In this paper we discuss our approach to learning a domain ontology from user tagging information and apply the extracted tag ontology in a pilot tag recommendation experiment. The initial result shows that by using the tag ontology to re-rank the recommended tags, the accuracy of the tag recommendation can be improved.
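A minimal sketch of the re-ranking step, assuming the learned tag ontology can be reduced to a relatedness lookup between tags: candidates from a base recommender are boosted when they are related to tags the resource already carries. Tag names, scores and the boost factor are illustrative only.

```python
# Re-rank recommended tags using ontology relatedness to existing tags.
tag_ontology = {            # stand-in for the learned ontology (tag -> related tags)
    "python": {"programming", "scripting"},
    "programming": {"python", "software"},
    "recipe": {"cooking"},
}

def related(a, b):
    return b in tag_ontology.get(a, set()) or a in tag_ontology.get(b, set())

def rerank(candidates, existing_tags, boost=0.3):
    """candidates: list of (tag, base_score) pairs from the base recommender."""
    rescored = []
    for tag, score in candidates:
        bonus = boost * sum(related(tag, t) for t in existing_tags)
        rescored.append((tag, score + bonus))
    return sorted(rescored, key=lambda x: -x[1])

candidates = [("recipe", 0.55), ("programming", 0.50), ("software", 0.45)]
print(rerank(candidates, existing_tags=["python"]))
```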