846 results for Semantic neighbour discovery


Relevance:

20.00%

Abstract:

This article outlines the impact that a conspiracy of silence and denial of difference has had on some adopted and donor-conceived persons who have been lied to or misled about their origins. Factors discussed include deceit, expressed as a central secret that undermines the fabric of a family and, through distortion, mystifies communication processes; the shock of discovery, which often happens accidentally, and the associated sense of betrayal; and a series of losses, for example of kinship, medical history, culture and agency, which force the rebuilding of personal identity. By providing those affected with a voice, validation and vindication, healing can begin. Any feelings of disregard, of betrayal of trust, of anger, frustration, sorrow or loss need to be regarded as real, expected and, above all, valid reactions to what has occurred. The author is a 'late discoverer' of her own adoption and draws on her doctoral research on the same topic, completed in 2012.

Relevance:

20.00%

Abstract:

In this paper we propose a method to generate a large-scale, accurate, dense 3D semantic map of street scenes. A dense 3D semantic model of the environment can significantly improve a number of robotic applications such as autonomous driving, navigation and localisation. Instead of using offline-trained classifiers for semantic segmentation, our approach employs a data-driven, nonparametric method to parse scenes, which scales easily to large environments and generalises to different scenes. We use stereo image pairs collected from cameras mounted on a moving car to produce dense depth maps, which are combined into a global 3D reconstruction using camera poses from stereo visual odometry. Simultaneously, 2D automatic semantic segmentation using a nonparametric scene-parsing method is fused into the 3D model. Furthermore, the resulting 3D semantic model is improved by taking moving objects in the scene into account. We demonstrate our method on the publicly available KITTI dataset and evaluate its performance against manually generated ground truth.
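
As an illustrative sketch only (not the authors' code), the snippet below shows the core fusion step such a pipeline needs: back-projecting each depth map through the camera intrinsics, transforming the points into the world frame with the visual-odometry pose, and accumulating per-voxel semantic label votes. The intrinsics K, the pose matrix T_world_cam, the voxel size and all variable names are assumptions.

```python
import numpy as np
from collections import defaultdict

VOXEL = 0.2  # assumed voxel edge length in metres

def backproject(depth, K):
    """Lift a dense depth map to an (H*W, 3) array of camera-frame points."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth.ravel()
    x = (u.ravel() - K[0, 2]) * z / K[0, 0]
    y = (v.ravel() - K[1, 2]) * z / K[1, 1]
    return np.stack([x, y, z], axis=1)

def fuse_frame(votes, depth, labels, K, T_world_cam):
    """Accumulate one frame's per-pixel semantic labels into voxel votes."""
    pts_cam = backproject(depth, K)
    pts_world = pts_cam @ T_world_cam[:3, :3].T + T_world_cam[:3, 3]
    keys = np.floor(pts_world / VOXEL).astype(int)
    for key, lab in zip(map(tuple, keys), labels.ravel()):
        votes[key][lab] = votes[key].get(lab, 0) + 1

votes = defaultdict(dict)  # voxel index -> {label: vote count}
# after fusing all frames, label each voxel by majority vote
semantic_map = {k: max(v, key=v.get) for k, v in votes.items()}
```

A full system would additionally down-weight votes falling on detected moving objects, as the abstract describes.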

Relevance:

20.00%

Abstract:

Some children adopted during the now discredited period of closed adoption were never told of their adoptive status until it was revealed to them in adulthood. Yet to date, this ‘late-discovery’ experience has received little research attention. Now a new generation of ‘late discoverers’ is emerging as a result of (heterosexual couple) donor insemination (DI) practices. This study of 25 late-discovery participants, of either adoptive or (heterosexual couple) DI offspring status, reveals ethical concerns particular to the lateness of discovery. Most of the participants were Australian, with the remainder from the UK, USA and Canada. All were asked to give an ‘open’ account of their experience, with four themes or suggestions provided on request. These accounts were added to those available in relevant publications. The analysis employed a hermeneutic phenomenological methodology, and all accounts were analysed using an ethical perspective developed by Walker (2006, 2007). The main themes that emerged were disrupted personal autonomy, betrayal of deep levels of trust, and feelings of injustice and diminished self-worth. The lack of recognition of concerns particular to late discovery has left late discoverers (i) feeling unable to regain a sense of personal control, (ii) experiencing significantly disrupted relationships with those closest to them and with others, including community and institutions, and (iii) feeling diminished in value and self-worth.

Relevance:

20.00%

Abstract:

Text categorisation is challenging due to the complex structure and heterogeneous, changing topics of documents. The performance of text categorisation relies on the quality of samples, the effectiveness of document features, and the topic coverage of categories, all of which depend on the strategies employed: supervised or unsupervised, single-labelled or multi-labelled. To deal with these reliability issues, we propose an unsupervised multi-labelled text categorisation approach that maps local knowledge in documents to global knowledge in a world ontology to optimise the categorisation result. The conceptual framework of the approach consists of three modules: pattern mining for feature extraction, feature-subject mapping for categorisation, and concept generalisation for optimised categorisation. The approach was evaluated promisingly against typical text categorisation methods, using ground truth encoded by human experts.
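
A minimal sketch of how the three modules could fit together, against a toy subject ontology; the data structures, lexicon and support threshold are illustrative assumptions, not the paper's implementation:

```python
from collections import Counter
from itertools import combinations

# toy ontology: subject -> parent subject (None at the root)
PARENT = {"machine learning": "computer science",
          "data mining": "computer science",
          "computer science": None}
# subject -> indicative terms (an assumed lexicon)
SUBJECT_TERMS = {"machine learning": {"classifier", "training"},
                 "data mining": {"pattern", "mining"}}

def mine_patterns(docs, min_support=2):
    """Module 1: frequent term pairs as simple patterns."""
    counts = Counter()
    for terms in docs:
        counts.update(combinations(sorted(set(terms)), 2))
    return [p for p, c in counts.items() if c >= min_support]

def map_to_subjects(pattern):
    """Module 2: subjects whose lexicon overlaps the pattern's terms."""
    return {s for s, terms in SUBJECT_TERMS.items() if terms & set(pattern)}

def generalise(subjects):
    """Module 3: lift sibling subjects to their common parent concept."""
    parents = {PARENT[s] for s in subjects if PARENT.get(s)}
    return parents if len(parents) == 1 and len(subjects) > 1 else subjects

docs = [["classifier", "training", "pattern"],
        ["pattern", "mining", "training"]]
for p in mine_patterns(docs):
    print(p, "->", generalise(map_to_subjects(p)))
# ('pattern', 'training') -> {'computer science'}
```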

Relevance:

20.00%

Abstract:

Cross-Lingual Link Discovery (CLLD) is a new problem in Information Retrieval. The aim is to automatically identify meaningful and relevant hypertext links between documents in different languages. This is particularly helpful for knowledge discovery when a multilingual knowledge base is sparse in one language or another, or when the topical coverage in each language differs, as is the case with Wikipedia. Techniques for identifying new and topically relevant cross-lingual links are a current topic of interest at NTCIR, where the CrossLink task has been running since NTCIR-9 in 2011. This paper presents the evaluation framework for benchmarking cross-lingual link discovery algorithms in the context of NTCIR-9. The framework includes topics, document collections, assessments, metrics, and a toolkit for pooling, assessment and evaluation. The assessments are divided into two separate sets: manual assessments performed by human assessors, and automatic assessments based on links extracted from Wikipedia itself. Using this framework, we show that manual assessment is more robust than automatic assessment in the context of cross-lingual link discovery.
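
The contrast between manual and automatic assessment can be made concrete with a small precision-at-N sketch. The qrel-like sets, run format and document identifiers below are assumptions for illustration, not the toolkit's actual formats:

```python
def precision_at_n(recommended, relevant, n):
    """Fraction of the top-n recommended target documents judged relevant."""
    return sum(1 for target in recommended[:n] if target in relevant) / n

manual = {"zh/12", "zh/98"}        # judged relevant by human assessors
automatic = {"zh/12", "zh/31"}     # links mined from Wikipedia itself
run = ["zh/12", "zh/31", "zh/77"]  # one system's ranked links for a topic

print(precision_at_n(run, manual, 3))     # ~0.33
print(precision_at_n(run, automatic, 3))  # ~0.67
```

The same run can score quite differently under the two assessment sets, which is why the robustness comparison matters.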

Relevance:

20.00%

Abstract:

This paper presents an overview of the NTCIR-10 Cross-lingual Link Discovery (CrossLink-2) task. For this task we continued to use the evaluation framework developed for the NTCIR-9 CrossLink-1 task. Recommended links were evaluated at two levels (file-to-file and anchor-to-file), and system performance was measured with three metrics: LMAP, R-Prec and P@N.
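
As a rough sketch of two of the named metrics: below is plain average precision (LMAP is a link-based mean over topics; treating it as ordinary MAP over per-topic link lists is an assumed simplification, not the task's exact definition) and R-Prec. The input format is also assumed:

```python
def average_precision(ranked, relevant):
    """AP of one topic's ranked link list against its relevant set."""
    hits, score = 0, 0.0
    for i, link in enumerate(ranked, start=1):
        if link in relevant:
            hits += 1
            score += hits / i
    return score / len(relevant) if relevant else 0.0

def r_prec(ranked, relevant):
    """Precision at rank R, where R = number of relevant links."""
    r = len(relevant)
    return sum(1 for link in ranked[:r] if link in relevant) / r if r else 0.0

def mean_over_topics(metric, runs, qrels):
    """Average a per-topic metric over all assessed topics."""
    return sum(metric(runs[t], qrels[t]) for t in qrels) / len(qrels)
```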

Relevance:

20.00%

Abstract:

To enhance the therapeutic efficacy and reduce the adverse effects of traditional Chinese medicine, practitioners often prescribe combinations of plant species and/or minerals, called formulae. Unfortunately, the working mechanisms of most of these combinations are difficult to determine and thus remain unknown. In an attempt to explain the benefits of formulae using current biomedical approaches, we analyzed the components of Yinchenhao Tang, a classical formula that has been shown to be clinically effective for treating hepatic injury syndrome. The three principal components of Yinchenhao Tang are Artemisia annua L., Gardenia jasminoides Ellis and Rheum palmatum L., whose major active ingredients are 6,7-dimethylesculetin (D), geniposide (G) and rhein (R), respectively. To determine the mechanisms underlying the efficacy of this formula, we conducted a systematic analysis of the therapeutic effects of the DGR compound using immunohistochemistry, biochemistry, metabolomics, and proteomics. Here, we report that the DGR combination exerts a more robust therapeutic effect than any one or two of the three individual compounds by hitting multiple targets in a rat model of hepatic injury. Thus, DGR synergistically intensifies dynamic changes in metabolic biomarkers, regulates molecular networks through target proteins, produces a synergistic/additive effect, and activates both intrinsic and extrinsic pathways.

Relevance:

20.00%

Abstract:

The Web is a steadily evolving resource comprising much more than mere HTML pages. With its ever-growing data sources in a variety of formats, it provides great potential for knowledge discovery. In this article, we shed light on some interesting phenomena of the Web: the deep Web, which surfaces database records as Web pages; the Semantic Web, which defines meaningful data exchange formats; XML, which has established itself as a lingua franca for Web data exchange; and domain-specific markup languages, which are designed based on XML syntax with the goal of preserving semantics in targeted domains. We detail these four developments in Web technology and explain how they can be used for data mining. Our goal is to show that all these areas can be as useful for knowledge discovery as the HTML-based part of the Web.

Relevance:

20.00%

Abstract:

This thesis is a study of the automatic discovery of text features for describing user information needs. It presents an innovative data-mining approach that discovers useful knowledge from both relevance and non-relevance feedback information. The proposed approach can greatly reduce the noise in discovered patterns and significantly improve the performance of text mining systems, offering a promising method for research in Data Mining and Web Intelligence.

Relevance:

20.00%

Abstract:

Organisations presently engage in what are termed Global Business Transformation Projects (GBTPs) to consolidate, innovate, transform and restructure their processes and business strategies while undergoing fundamental change. Culture plays an important role in GBTPs, as they involve people of different cultural backgrounds and span countries, industries and disciplinary boundaries. Nevertheless, there is scant empirical research on how culture is conceptualised beyond national and organisational cultures, and on how culture is to be taken into account and dealt with within GBTPs. This research is situated in a business context and develops a theory that aids in describing and dealing with culture. It draws on the lived experiences of thirty-two senior management practitioners, reporting on more than sixty-one GBTPs in which they were actively involved. The research method is qualitative and interpretive and applies a grounded theory approach, with rich data generated through interviews. In addition, vignettes were developed to illustrate the derived theoretical models. The findings contribute to knowledge in multiple ways. First, the study provides a holistic account of GBTPs that describes the construct of culture through the elements of culture types, cultural differences and cultural diversity. A typology of culture types was developed that enlarges the view of culture beyond national and organisational culture to include industry culture, professional service firm culture and 'theme' culture; the amalgamation of the culture types instantiated in a GBTP comprises its project culture. Second, the empirically grounded process for managing culture in GBTPs integrates the stages of recognition, understanding and management, as well as their enablement, providing a roadmap for dealing with culture in such projects. Third, the study identifies contextual variables of GBTPs, which describe the environment in which these projects are situated, influence the construct of culture, and inform the process for managing culture. Fourth, the contribution to research method lies in positioning interview research as a strategy for data generation and in the detailed documentation of applying grounded theory to discover theory.

Relevance:

20.00%

Abstract:

Over the last decade, the majority of existing search techniques have been either keyword-based or category-based, resulting in unsatisfactory effectiveness. Meanwhile, studies have shown that more than 80% of users prefer personalized search results. As a result, many studies have invested a great deal of effort (for example, in collaborative filtering) in personalized notions for enhancing retrieval performance. One of the fundamental yet most challenging steps is to capture precise user information needs. Most Web users are inexperienced or lack the capability to express their needs properly, whereas existing retrieval systems are highly sensitive to vocabulary.

Researchers have increasingly proposed ontology-based techniques to improve current mining approaches. These techniques are not only able to refine search intentions within specific generic domains, but also to access new knowledge by tracking semantic relations. In recent years, some researchers have attempted to build ontological user profiles from discovered user background knowledge. The knowledge is drawn from both global and local analyses, which aim to produce tailored ontologies from a group of concepts. However, a key problem that has not been addressed is how to accurately match diverse local information to universal global knowledge.

This research conducts a theoretical study on the use of personalized ontologies to enhance text mining performance. The objective is to understand user information needs through a "bag-of-concepts" rather than "words". The concepts are gathered from a general world knowledge base, the Library of Congress Subject Headings. To return desirable search results, a novel ontology-based mining approach is introduced to discover accurate search intentions and learn personalized ontologies as user profiles. The approach can not only pinpoint users' individual intentions within a rough hierarchical structure, but can also interpret their needs through a set of acknowledged concepts. Alongside the global and local analyses, a concept matching approach is developed to address the mismatch between local information and world knowledge. Relevance features produced by the Relevance Feature Discovery model are taken as representatives of local information; these features have been shown to be the best alternative to user queries for avoiding ambiguity, and they consistently outperform the features extracted by other filtering models.

The two proposed approaches are evaluated scientifically on the standard Reuters Corpus Volume 1 test collection. A comprehensive comparison is made with a number of state-of-the-art baseline models, including TF-IDF, Rocchio, Okapi BM25, the Pattern Taxonomy Model, and an ontology-based model. The results indicate that top precision can be improved remarkably with the proposed ontology mining approach, and that the matching approach is successful, achieving significant improvements on most information filtering measures.

This research contributes to the fields of ontological filtering, user profiling and knowledge representation. The outputs are critical when systems are expected to return proper mining results and provide personalized services. The findings have the potential to inform the design of advanced preference mining models, with impact on people's daily lives.
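
A hedged sketch of the concept-matching idea described above: local relevance features (weighted terms) are matched to global subjects by lexical overlap. The two-subject table, the weights and all names are illustrative assumptions, not the thesis's actual model or the real Library of Congress Subject Headings:

```python
# mini LCSH-style subject table: subject heading -> indicative terms (assumed)
SUBJECTS = {"Information storage and retrieval": {"retrieval", "indexing"},
            "Data mining": {"mining", "pattern", "knowledge"}}

def match_concepts(features):
    """Score each global subject by the weight of local features it covers."""
    scores = {}
    for subject, terms in SUBJECTS.items():
        score = sum(w for t, w in features.items() if t in terms)
        if score > 0:
            scores[subject] = score
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

# local relevance features discovered from user feedback (assumed weights)
features = {"mining": 0.5, "pattern": 0.25, "retrieval": 0.5}
print(match_concepts(features))
# [('Data mining', 0.75), ('Information storage and retrieval', 0.5)]
```

The top-ranked subjects would then seed the personalized ontology that serves as the user profile.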

Relevance:

20.00%

Abstract:

This paper evaluates the efficiency of a number of popular corpus-based distributional models in performing discovery on very large document sets, including online collections. Literature-based discovery is the process of identifying previously unknown connections in text, often published literature, that could lead to the development of new techniques or technologies. Literature-based discovery has attracted growing research interest ever since Swanson's serendipitous discovery of the therapeutic effects of fish oil on Raynaud's disease in 1986. The successful application of distributional models in automating the identification of the indirect associations underpinning literature-based discovery has been amply demonstrated in the medical domain. However, we wish to investigate the computational complexity of distributional models for literature-based discovery on much larger document collections, as they may provide computationally tractable solutions to tasks such as predicting future disruptive innovations. In this paper we perform a computational complexity analysis of four successful corpus-based distributional models to evaluate their fit for such tasks. Our results indicate that corpus-based distributional models that store their representations in fixed dimensions provide superior efficiency on literature-based discovery tasks.
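
The fixed-dimension property the result points to can be illustrated with random indexing, used here as an assumed example (the abstract does not name the four models): every term receives a fixed-width context vector, so memory stays O(vocabulary x D) and each training step costs O(window x D) regardless of how large the collection grows.

```python
import numpy as np

D = 512  # fixed dimensionality (assumed)
rng = np.random.default_rng(0)

index_vectors = {}    # term -> sparse ternary random signature
context_vectors = {}  # term -> accumulated fixed-width context vector

def index_vector(term):
    """Lazily assign each term a near-orthogonal random signature."""
    if term not in index_vectors:
        v = np.zeros(D)
        pos = rng.choice(D, size=8, replace=False)
        v[pos] = rng.choice([-1.0, 1.0], size=8)
        index_vectors[term] = v
    return index_vectors[term]

def train(tokens, window=2):
    """Single streaming pass; no global co-occurrence matrix is built."""
    for i, term in enumerate(tokens):
        ctx = context_vectors.setdefault(term, np.zeros(D))
        for j in range(max(0, i - window), min(len(tokens), i + window + 1)):
            if j != i:
                ctx += index_vector(tokens[j])
```

Indirect associations of the kind Swanson exploited can then be probed by comparing the context vectors (for example, by cosine similarity) of terms that never co-occur directly.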

Relevance:

20.00%

Abstract:

In vivo small molecules, as necessary intermediates, are involved in numerous critical metabolic pathways and biological processes associated with many essential biological functions and events. There is growing evidence that MS-based metabolomics is emerging as a powerful tool for discovering functional small molecules that can better our understanding of development, infection, nutrition, disease, toxicity, drug therapeutics, gene modifications and host-pathogen interactions from a metabolic perspective. However, further progress must still be made in MS-based metabolomics because of the shortcomings of current technologies and knowledge. This technique-driven review explores the discovery of in vivo functional small molecules facilitated by MS-based metabolomics and highlights the analytical capabilities and promising applications of this discovery strategy. Moreover, the biological significance of discovering in vivo functional small molecules in different biological contexts is also examined from a metabolic perspective.

Relevance:

20.00%

Abstract:

Guaranteeing the quality of extracted features that describe relevant knowledge to users or topics is a challenge because of the large number of extracted features. Most popular term-based feature selection methods suffer from extracting noisy features that are irrelevant to the user's needs. One popular alternative is to extract phrases or n-grams to describe the relevant knowledge; however, extracted n-grams and phrases usually contain a great deal of noise. This paper proposes a method for reducing the noise in n-grams. The method first extracts more specific features (terms) to remove noisy features. It then uses an extended random set to accurately weight n-grams based on their distribution in the documents and the distribution of their terms across n-grams. The proposed approach not only reduces the number of extracted n-grams but also improves performance. Experimental results on the Reuters Corpus Volume 1 (RCV1) data collection and TREC topics show that the proposed method significantly outperforms state-of-the-art methods underpinned by Okapi BM25, tf*idf and Rocchio.
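
A hedged sketch of the two-stage weighting idea, using a plain frequency-based stand-in rather than the paper's extended-random-set formulation: an n-gram's weight combines its distribution over relevant documents with how its constituent terms are distributed across all extracted n-grams, and any n-gram containing a term that failed the first, term-level pass is discarded as noise.

```python
from collections import Counter

def weigh_ngrams(docs_ngrams, specific_terms):
    """docs_ngrams: one list of n-gram strings per relevant document.
    specific_terms: term -> weight surviving the first, term-level pass."""
    # distribution of n-grams over documents
    doc_freq = Counter()
    for doc in docs_ngrams:
        doc_freq.update(set(doc))
    # distribution of terms across all extracted n-grams
    term_freq = Counter(t for doc in docs_ngrams for g in doc for t in g.split())
    weights = {}
    for gram, df in doc_freq.items():
        terms = gram.split()
        if not all(t in specific_terms for t in terms):
            continue  # noisy n-gram: contains a filtered-out term
        term_score = sum(specific_terms[t] / term_freq[t] for t in terms)
        weights[gram] = df * term_score / len(terms)
    return weights

docs = [["text mining", "noisy feature"], ["text mining"]]
print(weigh_ngrams(docs, {"text": 1.0, "mining": 0.8}))
# {'text mining': 0.9}; "noisy feature" is dropped
```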