46 results for Work Domain Ontology (WDO)


Relevance:

30.00%

Publisher:

Abstract:

Tagging has become one of the key activities in next-generation websites, which allow users to select short labels to annotate, manage, and share multimedia information such as photos, videos and bookmarks. Tagging requires no prior training from users before they participate in annotation activities, as they can freely choose whichever terms best represent the semantics of the content, without worrying about any formal structure or ontology. However, free-form tagging can lead to several problems, such as synonymy, polysemy and ambiguity, which potentially increase the complexity of managing tags and retrieving information. To solve these problems, this research aims to construct a lightweight indexing scheme that structures tags by identifying and disambiguating the meaning of terms, and to construct a knowledge base or dictionary. News has been chosen as the primary application domain to demonstrate the benefits of using structured tags for managing the rapidly changing and dynamic nature of news information. One of the main outcomes of this work is an automatically constructed vocabulary that defines the meaning of each named-entity tag that can be extracted from a news article (including persons, locations and organisations), based on expert suggestions from major search engines and knowledge from public databases such as Wikipedia. To demonstrate the potential applications of the vocabulary, we have used it to provide additional functionality on an online news website, including topic-based news reading, intuitive tagging, clipping and sharing of interesting news, and news filtering and searching based on named-entity tags. The evaluation of the impact of disambiguating tags has shown that the vocabulary can significantly improve news-search performance. The preliminary results from our user study demonstrate that users benefit from the additional functionality on the news website, as they are able to retrieve more relevant news and to clip and share news with friends and family effectively.
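As a minimal illustration of the disambiguation idea (not the thesis's actual algorithm), a named-entity tag can be resolved against a sense dictionary by overlapping the article's words with each sense's context words. The dictionary entries below are invented for the sketch:

```python
# Hypothetical sketch: disambiguating a tag via a small sense dictionary.
# Sense names and context words are made up for illustration.

SENSE_DICTIONARY = {
    "jaguar": {
        "Jaguar (car maker)": {"car", "vehicle", "luxury", "engine"},
        "jaguar (animal)": {"cat", "jungle", "predator", "wildlife"},
    },
}

def disambiguate(tag, article_words):
    """Pick the sense whose context words overlap the article most."""
    senses = SENSE_DICTIONARY.get(tag)
    if not senses:
        return tag  # unknown tag: leave it unstructured
    words = set(article_words)
    return max(senses, key=lambda s: len(senses[s] & words))

article = "the new luxury car has a quiet engine".split()
print(disambiguate("jaguar", article))  # picks the car-maker sense
```

A real system would draw the context words from search-engine suggestions and Wikipedia, as the abstract describes, rather than from a hand-made dictionary.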

Relevance:

30.00%

Publisher:

Abstract:

The ways in which the "traditional" tension between words and artwork can be perceived have huge implications for understanding the relationship between critical or theoretical interpretation, art and practice, and research. Within the practice-led PhD this can generate a strange sense of disjuncture for the artist-researcher, particularly when engaged in writing the exegesis. This paper aims to explore this tension through an introductory investigation of the work of the philosopher Andrew Benjamin. For Benjamin, criticism completes the work of art. Criticism is, with the artwork, at the centre of our experience and theoretical understanding of art; in this way the work of art and criticism are co-productive. The reality of this co-productivity can be seen in three related articles on the work of the American painter Marcia Hafif. In each of these articles there are critical negotiations of just how the work of art operates as art, and theoretically, within the field of art. This focus has important ramifications for the writing and reading of the exegesis within the practice-led research higher degree. By including art as a significant part of the research reporting process, the artist-researcher is also staking a claim as to the critical value of their work. Rather than resisting the tension between word and artwork, the practice-led artist-researcher needs to embrace the co-productive nature of critical word and creative work to more completely articulate their practice and its significance as research. The ideal venue and opportunity for this is the exegesis.

Relevance:

30.00%

Publisher:

Abstract:

With regard to the long-standing problem of the semantic gap between low-level image features and high-level human knowledge, the image retrieval community has recently shifted its emphasis from low-level feature analysis to high-level image semantics extraction. User studies reveal that users tend to seek information using high-level semantics. Therefore, image semantics extraction is of great importance to content-based image retrieval because it allows users to freely express what images they want. Semantic content annotation is the basis for semantic content retrieval. The aim of image annotation is to automatically obtain keywords that can be used to represent the content of images. The major research challenges in image semantic annotation are: What is the basic unit of semantic representation? How can the semantic unit be linked to high-level image knowledge? How can contextual information be stored and utilized for image annotation? In this thesis, Semantic Web technology (i.e. ontology) is introduced to the image semantic annotation problem. The Semantic Web, the next-generation Web, aims at making the content of any type of media understandable not only to humans but also to machines. Due to the large amounts of multimedia data prevalent on the Web, researchers and industry are beginning to pay more attention to the Multimedia Semantic Web. Semantic Web technology provides a new opportunity for multimedia-based applications, but research in this area is still in its infancy. Whether ontology can be used to improve image annotation, and how best to use ontology in semantic representation and extraction, are still worthwhile investigations. This thesis deals with the problem of image semantic annotation using ontology and machine learning techniques in four phases, as below. 1) Salient object extraction.
A salient object serves as the basic unit in image semantic extraction as it captures the common visual property of the objects. Image segmentation is often used as the first step for detecting salient objects, but most segmentation algorithms often fail to generate meaningful regions due to over-segmentation and under-segmentation. We develop a new salient object detection algorithm by combining multiple homogeneity criteria in a region merging framework. 2) Ontology construction. Since real-world objects tend to exist in a context within their environment, contextual information has been increasingly used for improving object recognition. In the ontology construction phase, visual-contextual ontologies are built from a large set of fully segmented and annotated images. The ontologies are composed of several types of concepts (i.e. mid-level and high-level concepts) and domain contextual knowledge. The visual-contextual ontologies stand as a user-friendly interface between low-level features and high-level concepts. 3) Image object annotation. In this phase, each object is labelled with a mid-level concept from the ontologies. First, a set of candidate labels is obtained by training Support Vector Machines with features extracted from salient objects. After that, contextual knowledge contained in the ontologies is used to obtain the final labels by removing ambiguous concepts. 4) Scene semantic annotation. The scene semantic extraction phase derives the scene type by using both mid-level concepts and domain contextual knowledge in the ontologies. Domain contextual knowledge is used to create a scene configuration that describes which objects co-occur with which scene types more frequently. The scene configuration is represented in a probabilistic graph model, and probabilistic inference is employed to calculate the scene type given an annotated image.
To evaluate the proposed methods, a series of experiments has been conducted on a large set of fully annotated outdoor scene images. These include a subset of the Corel database, a subset of the LabelMe dataset, the evaluation dataset of localized semantics in images, the spatial context evaluation dataset, and the segmented and annotated IAPR TC-12 benchmark.
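The object annotation phase (candidate labels from a classifier, then contextual disambiguation) can be sketched roughly as below. The concept names, classifier scores, co-occurrence values and the mixing weight `alpha` are all fabricated stand-ins for the knowledge the thesis stores in its visual-contextual ontologies:

```python
# Hedged sketch of phase 3: re-ranking classifier candidates with
# co-occurrence knowledge. All names and numbers are illustrative.

CO_OCCURRENCE = {  # how often two concepts appear together in training images
    frozenset({"sky", "sea"}): 0.9,
    frozenset({"sky", "road"}): 0.4,
    frozenset({"sea", "road"}): 0.1,
}

def context_score(label, context_labels):
    return sum(CO_OCCURRENCE.get(frozenset({label, c}), 0.0)
               for c in context_labels)

def annotate(candidates, context_labels, alpha=0.5):
    """Combine classifier confidence with contextual consistency."""
    return max(candidates, key=lambda lc: (1 - alpha) * candidates[lc]
               + alpha * context_score(lc, context_labels))

# "sea" vs "road" for an ambiguous blue region, given "sky" already detected:
candidates = {"sea": 0.55, "road": 0.60}
print(annotate(candidates, ["sky"]))  # context flips the decision to "sea"
```

The point of the sketch is only the division of labour: the classifier proposes, the contextual knowledge disposes.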

Relevance:

30.00%

Publisher:

Abstract:

This paper presents a method of voice activity detection (VAD) suitable for high-noise scenarios, based on the fusion of two complementary systems. The first system uses a proposed non-Gaussianity score (NGS) feature based on normal probability testing. The second system employs a histogram distance score (HDS) feature that detects changes in the signal by conducting a template-based similarity measure between adjacent frames. The decision outputs of the two systems are then merged using an opening-by-reconstruction fusion stage. The accuracy of the proposed method was compared to several baseline VAD methods on a database created from real recordings of a variety of high-noise environments.
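The two per-frame scores can be sketched with the standard library alone. The frame sizes, histogram bins and value ranges below are placeholders, and the paper's morphological opening-by-reconstruction fusion stage is not reproduced; only the scores themselves are shown:

```python
# Illustrative sketch of the NGS and HDS features (not the paper's exact
# implementation; all parameters are assumptions).
from statistics import NormalDist, mean

def ngs(frame):
    """Non-Gaussianity score: 1 minus the correlation between the sorted
    samples and the theoretical normal quantiles (higher = less Gaussian)."""
    n = len(frame)
    xs = sorted(frame)
    qs = [NormalDist().inv_cdf((i + 0.5) / n) for i in range(n)]
    mx, mq = mean(xs), mean(qs)
    num = sum((x - mx) * (q - mq) for x, q in zip(xs, qs))
    den = (sum((x - mx) ** 2 for x in xs)
           * sum((q - mq) ** 2 for q in qs)) ** 0.5
    return 1.0 - num / den if den else 0.0

def hds(frame_a, frame_b, bins=8, lo=-1.0, hi=1.0):
    """Histogram distance score between adjacent frames (L1 distance)."""
    def hist(frame):
        h = [0] * bins
        for x in frame:
            i = min(bins - 1, max(0, int((x - lo) / (hi - lo) * bins)))
            h[i] += 1
        return [c / len(frame) for c in h]
    return sum(abs(a - b) for a, b in zip(hist(frame_a), hist(frame_b)))
```

Thresholding each score stream yields two binary decision streams, which the paper then fuses; a simple logical combination would stand in here for the morphological stage.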

Relevance:

30.00%

Publisher:

Abstract:

Intelligent agents are an advanced technology utilized in Web Intelligence. When searching for information in a distributed Web environment, information is retrieved by multiple agents on the client side and fused on the broker side. Current information fusion techniques rely on the cooperation of agents to provide statistics. Such techniques are computationally expensive and unrealistic in the real world. In this paper, we introduce a model that uses a world ontology constructed from the Dewey Decimal Classification to acquire user profiles. By searching with specific and exhaustive user profiles, information fusion techniques no longer rely on the statistics provided by agents. The model has been successfully evaluated using the large INEX data set, simulating the distributed Web environment.
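One way to picture how such a profile removes the need for per-agent statistics: documents returned by any agent can be scored directly against the user's weighted classification categories. The class codes and weights below are fabricated, and the prefix matching is only a crude stand-in for the ontology's hierarchy:

```python
# Hypothetical sketch: scoring retrieved documents against a user profile
# expressed over Dewey-Decimal-style class codes. Codes/weights are made up.

PROFILE = {"006": 0.7, "025": 0.3}  # class code -> user interest weight

def score(doc_classes):
    """Sum the profile weight of every class the document matches,
    treating a shared code prefix as a broader/narrower-class match."""
    return sum(w for c, w in PROFILE.items()
               if any(d.startswith(c) or c.startswith(d) for d in doc_classes))

docs = {"doc_ai": ["006.3"], "doc_library": ["025.04"], "doc_history": ["900"]}
ranked = sorted(docs, key=lambda d: score(docs[d]), reverse=True)
print(ranked[0])  # the AI document matches the strongest interest class
```

Because the ranking depends only on the profile, no agent has to report collection statistics for the fusion step.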

Relevance:

30.00%

Publisher:

Abstract:

Even though there is substantial agreement about the nature of rural contexts, practice principles, and factors influencing practice, we still do not have a framework for organising this knowledge in a way that can directly inform practitioners in their day-to-day work. In this paper, we introduce the concepts of 'practice domains', 'domain location', and 'domain alignment' which, taken together, provide such a framework. We suggest that each practitioner works within a number of practice domains. A domain is a discourse about practice comprising narratives about how a social worker should practise and which factors they should take most account of in their practice decision making. Each practitioner, and each practice process, can be located somewhere within each domain (domain location) and also situated among domains according to their relative alignment with each of them (domain alignment). In this paper, we present this framework and show how it is useful for practitioners in understanding practice, identifying factors influencing it, and making practice decisions in immediate, concrete situations.

Relevance:

30.00%

Publisher:

Abstract:

This presentation explores the molarization and overcoding of social machines and relationality within an assemblage consisting of empirical data on immigrant families in Australia. Immigration is key to the sustainable development of Western societies like Australia and Canada. Newly arrived immigrants enter a country and are literally taken over by the Ministry of Immigration regarding housing, health, education and access to job possibilities. If immigrants do not know the official language(s) of the country, they enroll in language classes for new immigrants. Language classes do more than simply teach language. Language is presented in local contexts (celebrating the national day, what to do to get a job) and, in control societies, language classes foreground the values of a nation state in order for immigrants to integrate. In the current project, policy documents from Australia reveal that while immigration is the domain of government, the subject/immigrant is nevertheless at the core of policy. While support is provided, it is the transcendent view of the subject/immigrant that prevails. The onus remains on the immigrant to "succeed". My perspective lies within transcendental empiricism and deploys Deleuzian ontology (how one might live) to examine how segmentary lines of power (pouvoir), reflected in policy documents and operationalized in language classes, rupture into lines of flight of nomad immigrants. The theoretical framework is Multiple Literacies Theory (MLT); reading is intensive and immanent. The participants are one Korean and one Sudanese family and their children, who have recently immigrated to Australia. Classroom observations were conducted, followed by interviews based on the observations. Families also borrowed small video cameras and filmed places, people and things relevant to them in terms of becoming citizen and immigrating to and living in a different country. Interviews followed. Rhizoanalysis informs the process of reading data.
Rhizoanalysis is a research event performed with an assemblage (MLT, data/vignettes, researcher, etc.). It is a way to work with transgressive data. Based on the concept of the rhizome, a bloc of data has no beginning and no ending. A researcher enters in the middle and exits somewhere in the middle, an intermezzo suggesting that the challenges to molar immigration lie in experimenting and creating molecular processes of becoming citizen.

Relevance:

30.00%

Publisher:

Abstract:

Generalized fractional partial differential equations have now found wide application in describing important physical phenomena, such as subdiffusive and superdiffusive processes. However, studies of generalized multi-term time and space fractional partial differential equations are still under development. In this paper, the multi-term time-space Caputo-Riesz fractional advection-diffusion equations (MT-TSCR-FADE) with nonhomogeneous Dirichlet boundary conditions are considered. The multi-term time-fractional derivatives are defined in the Caputo sense, with orders belonging to the intervals [0, 1], [1, 2] and [0, 2], respectively; these cases are called, respectively, the multi-term time-fractional diffusion terms, the multi-term time-fractional wave terms and the multi-term time-fractional mixed diffusion-wave terms. The space fractional derivatives are defined as Riesz fractional derivatives. Analytical solutions of the three types of the MT-TSCR-FADE are derived with Dirichlet boundary conditions. Using Luchko's theorem (Acta Math. Vietnam., 1999), we propose some new techniques, such as a spectral representation of the fractional Laplacian operator and the equivalence relationship between the fractional Laplacian operator and the Riesz fractional derivative, which enable the derivation of the analytical solutions for the multi-term time-space Caputo-Riesz fractional advection-diffusion equations.
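A representative two-term instance of the equation family just described may help fix the notation; the symbols below are illustrative, and the paper treats the general multi-term case:

```latex
\[
  a_1\, {}^{C}_{0}D_{t}^{\alpha_1} u(x,t)
  + a_2\, {}^{C}_{0}D_{t}^{\alpha_2} u(x,t)
  = K_\beta \,\frac{\partial^{\beta} u(x,t)}{\partial |x|^{\beta}}
  + K_\gamma \,\frac{\partial^{\gamma} u(x,t)}{\partial |x|^{\gamma}}
  + f(x,t),
\]
```

where ${}^{C}_{0}D_{t}^{\alpha}$ denotes the Caputo time-fractional derivative (the orders $\alpha_i$ falling in $[0,1]$, $[1,2]$ or $[0,2]$ in the three cases named above), and $\partial^{\beta}/\partial|x|^{\beta}$ is the Riesz space-fractional derivative, with $1<\beta\le 2$ playing the diffusion role and $0<\gamma\le 1$ the advection role, on a finite interval with Dirichlet boundary conditions.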

Relevance:

30.00%

Publisher:

Abstract:

Multi-term time-fractional differential equations have been used for describing important physical phenomena. However, studies of the multi-term time-fractional partial differential equations with three kinds of nonhomogeneous boundary conditions are still limited. In this paper, a method of separating variables is used to solve the multi-term time-fractional diffusion-wave equation and the multi-term time-fractional diffusion equation in a finite domain. In the two equations, the time-fractional derivative is defined in the Caputo sense. We discuss and derive the analytical solutions of the two equations with three kinds of nonhomogeneous boundary conditions, namely, Dirichlet, Neumann and Robin conditions, respectively.
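The separation-of-variables step can be sketched as follows, in illustrative notation (a single spatial dimension, diffusion coefficient $k$, separation constant $\lambda$):

```latex
% Substituting u(x,t) = X(x) T(t) into
%   \sum_i a_i {}^{C}D_t^{\alpha_i} u = k \, u_{xx}
% and dividing by k X T gives
\[
  \frac{\sum_{i} a_i\, {}^{C}D_{t}^{\alpha_i} T(t)}{k\,T(t)}
  = \frac{X''(x)}{X(x)} = -\lambda .
\]
```

The spatial factor $X$ then solves a Sturm-Liouville eigenvalue problem whose eigenfunctions are fixed by the boundary conditions (Dirichlet, Neumann or Robin), while the temporal factor solves the multi-term fractional ODE $\sum_i a_i\, {}^{C}D_{t}^{\alpha_i} T(t) + \lambda k\, T(t) = 0$, whose solutions are typically expressed via multivariate Mittag-Leffler functions.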

Relevance:

30.00%

Publisher:

Abstract:

Generic sentiment lexicons have been widely used for sentiment analysis. However, manually constructing sentiment lexicons is very time-consuming, and it may not be feasible for application domains where annotation expertise is not available. One contribution of this paper is the development of a statistical-learning-based computational method for the automatic construction of domain-specific sentiment lexicons to enhance cross-domain sentiment analysis. Our initial experiments show that the proposed methodology can automatically generate domain-specific sentiment lexicons which help improve the effectiveness of opinion retrieval at the document level. Another contribution of our work is that we show the feasibility of applying a sentiment metric, derived from the automatically constructed sentiment lexicons, to predict product sales for certain product categories. Our research contributes to the development of more effective sentiment analysis systems for extracting business intelligence from the numerous opinionated expressions posted to the Web.
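One common statistical-learning recipe for inducing such a lexicon, sketched here without any claim that it is the paper's exact method, is to score each candidate word by its smoothed log-odds of appearing in positive versus negative reviews of the target domain:

```python
# Illustrative lexicon induction via smoothed log-odds association.
# The toy review data below is fabricated.
from math import log

def build_lexicon(pos_docs, neg_docs, smoothing=1.0):
    vocab = {w for d in pos_docs + neg_docs for w in d}
    pos_counts = {w: sum(d.count(w) for d in pos_docs) for w in vocab}
    neg_counts = {w: sum(d.count(w) for d in neg_docs) for w in vocab}
    pos_total = sum(pos_counts.values()) + smoothing * len(vocab)
    neg_total = sum(neg_counts.values()) + smoothing * len(vocab)
    # positive score -> word leans positive in this domain, and vice versa
    return {w: log((pos_counts[w] + smoothing) / pos_total)
             - log((neg_counts[w] + smoothing) / neg_total) for w in vocab}

pos = [["battery", "lasts", "long"], ["great", "battery"]]
neg = [["battery", "died", "fast"], ["poor", "screen"]]
lex = build_lexicon(pos, neg)
print(lex["great"] > 0 > lex["poor"])  # prints True
```

Because the scores are estimated from in-domain documents, the same word can legitimately receive different polarities in different domains, which is the point of domain-specific lexicons.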

Relevance:

30.00%

Publisher:

Abstract:

Due to the explosive growth of the Web, the domain of Web personalization has gained great momentum in both research and commercial areas. Recommender systems are among the most popular Web personalization systems, and in recommender systems choosing the user information used to build profiles is crucial. In Web 2.0, one facility that can help users organize Web resources of interest to them is the user tagging system. Exploring user tagging behavior provides a promising way of understanding users' information needs, since tags are given directly by users. However, the free and relatively uncontrolled vocabulary leaves user-defined tags lacking standardization and semantically ambiguous. The rich relationships among tags also need to be explored, since they could provide valuable information for better understanding users. In this paper, we propose a novel approach for learning a tag ontology, based on the widely used lexical database WordNet, to capture the semantics and the structural relationships of tags. We present personalization strategies that disambiguate the semantics of tags by combining the opinions of WordNet lexicographers with users' tagging behavior. To personalize further, clustering of users is performed to generate a more accurate ontology for a particular group of users. To evaluate the usefulness of the tag ontology, we use it in a pilot tag recommendation experiment, exploiting the semantic information in the tag ontology to improve recommendation performance. The initial results show that the personalized information has improved the accuracy of tag recommendation.
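The user-clustering step can be pictured with a simple sketch: users are grouped when their tag vocabularies are similar enough, so that a per-group ontology can then be built. The greedy single-pass scheme, the Jaccard threshold and the profiles below are all assumptions made for illustration, not the paper's algorithm:

```python
# Illustrative greedy clustering of user tag sets by Jaccard similarity.

def jaccard(a, b):
    return len(a & b) / len(a | b)

def cluster_users(profiles, threshold=0.3):
    """Assign each user to the first cluster similar enough, else start one."""
    clusters = []  # each cluster: [union of member tags, member names]
    for user, tags in profiles.items():
        for cluster in clusters:
            if jaccard(cluster[0], tags) >= threshold:
                cluster[0] |= tags
                cluster[1].append(user)
                break
        else:
            clusters.append([set(tags), [user]])
    return clusters

profiles = {
    "u1": {"python", "code", "web"},
    "u2": {"python", "django", "web"},
    "u3": {"cooking", "recipes"},
}
print([members for _, members in cluster_users(profiles)])
```

Each resulting group's pooled tags would then feed the WordNet-based ontology construction for that group.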

Relevance:

30.00%

Publisher:

Abstract:

This paper presents a graph-based method of weighting medical concepts in documents for the purposes of information retrieval. Medical concepts are extracted from free-text documents using a state-of-the-art technique that maps n-grams to concepts from the SNOMED CT medical ontology. In our graph-based concept representation, concepts are vertices in a graph built from a document, and edges represent associations between concepts. This representation naturally captures dependencies between concepts, an important requirement for interpreting medical text and a feature lacking in bag-of-words representations. We apply existing graph-based term weighting methods to weight medical concepts. Using concepts rather than terms addresses vocabulary mismatch, as well as encapsulating terms belonging to a single medical entity into a single concept. In addition, we extend previous graph-based approaches by injecting domain knowledge that estimates the importance of a concept within the global medical domain. Retrieval experiments on the TREC Medical Records collection show our method outperforms both term and concept baselines. More generally, this work provides a means of integrating background knowledge contained in medical ontologies into data-driven information retrieval approaches.
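PageRank-style random walks are one standard family of graph-based term weighting methods of the kind the abstract refers to; the sketch below applies one to a tiny fabricated concept graph (the concept names and edges are invented, and the paper may use a different weighting scheme):

```python
# Illustrative PageRank over a document's concept co-occurrence graph.
# A central concept (here "infection") ends up weighted highest.

def pagerank(edges, damping=0.85, iters=50):
    nodes = sorted({n for e in edges for n in e})
    out = {n: [b for a, b in edges if a == n] for n in nodes}
    rank = {n: 1.0 / len(nodes) for n in nodes}
    for _ in range(iters):
        rank = {n: (1 - damping) / len(nodes)
                + damping * sum(rank[m] / len(out[m])
                                for m in nodes if n in out[m])
                for n in nodes}
    return rank

# undirected co-occurrence encoded as directed pairs (a star around "infection")
edges = [("fever", "infection"), ("infection", "fever"),
         ("antibiotic", "infection"), ("infection", "antibiotic")]
weights = pagerank(edges)
print(max(weights, key=weights.get))  # prints infection
```

Injecting domain knowledge, as the paper describes, could then amount to biasing the teleport distribution or edge weights toward concepts deemed globally important in the medical domain.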

Relevance:

30.00%

Publisher:

Abstract:

Uncooperative iris identification systems at a distance suffer from poor resolution of the acquired iris images, which significantly degrades iris recognition performance. Super-resolution techniques have been employed to enhance the resolution of iris images and improve the recognition performance. However, most existing super-resolution approaches proposed for the iris biometric super-resolve pixel intensity values, rather than the actual features used for recognition. This paper thoroughly investigates transferring super-resolution of iris images from the intensity domain to the feature domain. By directly super-resolving only the features essential for recognition, and by incorporating domain specific information from iris models, improved recognition performance compared to pixel domain super-resolution can be achieved. A framework for applying super-resolution to nonlinear features in the feature-domain is proposed. Based on this framework, a novel feature-domain super-resolution approach for the iris biometric employing 2D Gabor phase-quadrant features is proposed. The approach is shown to outperform its pixel domain counterpart, as well as other feature domain super-resolution approaches and fusion techniques.
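The 2D Gabor phase-quadrant feature mentioned above quantizes each complex filter response to two bits from the signs of its real and imaginary parts. The sketch below shows that encoding and, purely for illustration, fuses several low-resolution captures by averaging their complex responses before quantizing; the actual feature-domain super-resolution framework in the paper is more sophisticated, and the response values here are toy numbers:

```python
# Illustrative phase-quadrant encoding and a naive feature-domain fusion.

def phase_quadrant(z):
    """Encode one complex Gabor response as two bits (Re sign, Im sign)."""
    return (1 if z.real >= 0 else 0, 1 if z.imag >= 0 else 0)

def encode(responses):
    return [phase_quadrant(z) for z in responses]

def fuse_feature_domain(feature_vectors):
    """Average the complex responses of several low-resolution captures
    *before* quantizing -- fusion in the feature domain, not the pixel domain."""
    n = len(feature_vectors)
    fused = [sum(col) / n for col in zip(*feature_vectors)]
    return encode(fused)

lr1 = [0.9 + 0.2j, -0.4 - 0.8j]  # toy responses from two low-res captures
lr2 = [0.7 - 0.1j, -0.6 - 0.5j]
print(fuse_feature_domain([lr1, lr2]))  # [(1, 1), (0, 0)]
```

Working on the responses rather than the pixels is the essential contrast the abstract draws between feature-domain and intensity-domain super-resolution.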

Relevance:

30.00%

Publisher:

Abstract:

Different reputation models are used on the Web to generate reputation values for products using users' review data. Most current reputation models use review ratings and neglect users' textual reviews, because text is more difficult to process. However, we argue that an overall reputation score for an item does not reflect the actual reputation of all of its features, which is why the use of users' textual reviews is necessary. In our work we introduce a new reputation model that defines a new aggregation method for users' opinions about product features, extracted from users' text. Our model uses a feature ontology to define the general features and sub-features of a product. It also reflects the frequencies of positive and negative opinions. We provide a case study showing how our results compare with other reputation models.
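The feature-ontology idea can be pictured as opinion counts on sub-features rolling up to their parent feature, with each feature scored from its positive and negative frequencies. The ontology, the counts and the scoring rule below are all illustrative assumptions, not the paper's model:

```python
# Illustrative roll-up of feature-level opinion frequencies over a
# one-level feature ontology. All data is fabricated.

ONTOLOGY = {"camera": ["lens", "flash"], "battery": []}  # feature -> sub-features
OPINIONS = {  # feature -> (positive mentions, negative mentions)
    "lens": (8, 2), "flash": (1, 4), "battery": (5, 5),
}

def feature_score(feature):
    """Score in [-1, 1] combining a feature's own and sub-features' opinions."""
    pos, neg = OPINIONS.get(feature, (0, 0))
    for sub in ONTOLOGY.get(feature, []):
        p, n = OPINIONS.get(sub, (0, 0))
        pos, neg = pos + p, neg + n
    return (pos - neg) / (pos + neg) if pos + neg else 0.0

print(round(feature_score("camera"), 2))  # 0.2
```

The per-feature scores make visible what a single overall rating hides: here the camera is dragged down by its flash even though its lens is well liked.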