851 results for Topic discovery


Relevance: 60.00%

Publisher:

Abstract:

Local spatio-temporal features combined with a bag-of-visual-words model form a popular approach to human action recognition. Bag-of-features methods face several challenges, such as extracting appropriate appearance and motion features from videos, converting the extracted features into a representation suitable for classification, and designing a suitable classification framework. In this paper we address the problem of efficiently representing the extracted features for classification so as to improve overall performance. We introduce two generative supervised topic models, maximum entropy discrimination LDA (MedLDA) and class-specific simplex LDA (css-LDA), to encode the raw features for discriminative SVM-based classification. Unsupervised LDA models disconnect topic discovery from the classification task and hence yield poor results compared to the baseline bag-of-words framework. Supervised LDA techniques, on the other hand, learn the topic structure by taking the class labels into account and improve recognition accuracy significantly. MedLDA maximizes likelihood and within-class margins using max-margin techniques and yields a sparse, highly discriminative topic structure, while css-LDA learns separate class-specific topics instead of a common set of topics across the entire dataset. In our representation, topics are first learned and each video is then represented as a topic proportion vector, comparable to a histogram of topics. Finally, SVM classification is performed on the learned topic proportion vectors. We demonstrate the efficiency of these two representation techniques through experiments on two popular datasets. Experimental results show significantly improved performance compared to the baseline bag-of-features framework, which uses k-means to construct a histogram of words from the feature vectors.
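
As a rough illustration of the final step described above, the sketch below represents each sample as a topic proportion vector and feeds it to an SVM. It uses scikit-learn's unsupervised LDA as a stand-in for MedLDA/css-LDA (which are not implemented here), and the synthetic bag-of-visual-words counts, topic count and labels are assumptions for illustration only:

```python
# Sketch: represent each video as a topic-proportion vector, then classify with an SVM.
# Unsupervised LDA stands in for MedLDA / css-LDA, which this snippet does not implement.
import numpy as np
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# Hypothetical data: bag-of-visual-words counts (videos x visual words) and action labels.
X_counts = rng.integers(0, 5, size=(200, 1000))
y = rng.integers(0, 6, size=200)

lda = LatentDirichletAllocation(n_components=30, random_state=0)
theta = lda.fit_transform(X_counts)          # topic-proportion vector per video

X_tr, X_te, y_tr, y_te = train_test_split(theta, y, test_size=0.3, random_state=0)
clf = SVC(kernel="rbf").fit(X_tr, y_tr)      # SVM trained on topic proportions
print("accuracy:", clf.score(X_te, y_te))
```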

Relevance: 60.00%

Publisher:

Abstract:

The combined use of algorithms for topic discovery in document collections with techniques for visualizing the evolution of those topics over time allows thematic patterns in large corpora to be explored through compact visual representations. The research presented here investigated the visualization requirements of the document topic-composition data obtained through topic modeling, which is sparse and multi-attribute, at different levels of detail, both by developing a purpose-built visualization technique and by using an open-source data visualization library, in a comparative fashion. For the topic-flow visualization problem under study, conflicting visualization requirements were observed at different data resolutions, which led to a detailed investigation of how that data can be manipulated and displayed. From this investigation, the hypothesis defended was that the integrated use of more than one visualization technique, chosen according to the data resolution, broadens the possibilities for exploring the object under study compared with what a single technique would allow. Exposing the limits of these techniques according to the resolution at which the data is explored is the main contribution of this work, intended to support the development of new applications.

Relevance: 30.00%

Publisher:

Abstract:

Discovering proper search intents is a vital process for returning desired results and has been a consistently active research topic in information retrieval in recent years. Existing methods mainly rely on context-based mining, query expansion, and user profiling techniques, and still suffer from the ambiguity of search queries. In this paper, we introduce a novel ontology-based approach that uses a world knowledge base to construct personalized ontologies and identify the concept levels that best match user search intents. An iterative mining algorithm is designed to evaluate potential intents level by level until the best result is reached. The proposed approach is evaluated on the large RCV1 data set, and experimental results indicate a distinct improvement in top precision compared with baseline models.
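
The level-by-level evaluation idea can be pictured with the rough sketch below; the concept hierarchy, the scoring function and the stopping rule are hypothetical placeholders rather than the paper's actual algorithm:

```python
# Hypothetical level-by-level search over an ontology: score the concepts at each
# depth against the query and keep the best-scoring depth as the intent candidate.
from typing import Dict, List

def best_concept_level(levels: Dict[int, List[str]], score) -> int:
    """levels maps depth -> concept labels; score(concepts) -> float (higher is better)."""
    best_depth, best_score = -1, float("-inf")
    for depth in sorted(levels):
        s = score(levels[depth])
        if s > best_score:
            best_depth, best_score = depth, s
        else:
            break  # assumed stopping rule: stop once the score no longer improves
    return best_depth

# Toy usage with a made-up scorer that prefers more specific (longer) concept labels.
toy_levels = {0: ["science"], 1: ["computer science"], 2: ["information retrieval"]}
print(best_concept_level(toy_levels, score=lambda cs: len(cs[0])))
```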

Relevance: 30.00%

Publisher:

Abstract:

In this paper we examine automated Chinese-to-English link discovery in Wikipedia and the effects of Chinese segmentation and Chinese-to-English translation on hyperlink recommendation. Our experimental results show that the implemented link discovery framework can effectively recommend Chinese-to-English cross-lingual links. The techniques described here can assist bilingual users when a particular topic is not covered in Chinese, is not equally covered in both languages, or is biased in one language, as well as support language learning.
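
A minimal illustration of the two preprocessing steps the abstract highlights, segmentation and translation, is sketched below using the jieba segmenter and a toy Chinese-to-English title dictionary; the dictionary and the matching rule are invented for the example:

```python
# Sketch: segment Chinese text, then propose English Wikipedia targets for segments
# that appear in a (toy) Chinese-to-English title dictionary.
import jieba  # third-party Chinese word segmenter

zh_to_en_titles = {"机器学习": "Machine learning", "维基百科": "Wikipedia"}  # toy mapping

def recommend_links(zh_text: str):
    segments = jieba.cut(zh_text, cut_all=False)
    return [(seg, zh_to_en_titles[seg]) for seg in segments if seg in zh_to_en_titles]

print(recommend_links("维基百科上的机器学习条目"))
```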

Relevance: 30.00%

Publisher:

Abstract:

Topic modeling has been widely used in information retrieval, text mining, and text classification. Most existing statistical topic modeling methods, such as LDA and pLSA, represent a topic by selecting single words from the multinomial word distribution over that topic. This term-based representation has two main shortcomings: first, popular or common words occur frequently across different topics, which makes topics ambiguous and hard to interpret; second, single words lack the coherent semantic meaning needed to represent topics accurately. To overcome these problems, in this paper we propose a two-stage model that combines text mining and pattern mining with statistical modeling to generate more discriminative and semantically rich topic representations. Experiments show that the optimized topic representations generated by the proposed methods outperform the typical statistical topic modeling method LDA in terms of accuracy and certainty.
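
A small sketch of the two-stage idea follows, with plain LDA as the first stage and a simple frequent word-pair count standing in for the pattern-mining stage; the toy documents and the support threshold are illustrative assumptions:

```python
# Stage 1: LDA topics over toy documents. Stage 2: within each topic's top documents,
# count co-occurring word pairs and keep the frequent ones as richer topic descriptors.
from collections import Counter
from itertools import combinations
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer

docs = ["topic model text mining", "pattern mining text data",
        "image feature extraction", "deep image recognition"]
vec = CountVectorizer()
X = vec.fit_transform(docs)
lda = LatentDirichletAllocation(n_components=2, random_state=0)
theta = lda.fit_transform(X)                       # document-topic proportions

min_support = 2                                    # illustrative support threshold
for k in range(2):
    top_docs = [d for d, row in zip(docs, theta) if row.argmax() == k]
    pairs = Counter(p for d in top_docs
                    for p in combinations(sorted(set(d.split())), 2))
    frequent = [p for p, c in pairs.items() if c >= min_support]
    print(f"topic {k}:", frequent)
```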

Relevance: 30.00%

Publisher:

Abstract:

This article outlines the impact that a conspiracy of silence and denial of difference has had on some adopted and donor-conceived persons who have been lied to or misled about their origins. Factors discussed include deceit, expressed as a central secret which undermines the fabric of a family and mystifies communication processes through distortion; the shock of discovery, often revealed accidentally, and the associated sense of betrayal when this occurs; and a series of losses, for example of kinship, medical history, culture and agency, which result in having to rebuild personal identity. By providing those affected with a voice, validation and vindication, healing can begin. Any feelings of disregard, of betrayal of trust, of anger, frustration, sorrow or loss need to be regarded as real, expected and, above all, a valid reaction to what has occurred. The author is a 'late discoverer' of her adoption and draws on information from her doctoral research on the same topic, completed in 2012.

Relevance: 30.00%

Publisher:

Abstract:

Cross-Lingual Link Discovery (CLLD) is a new problem in information retrieval. The aim is to automatically identify meaningful and relevant hypertext links between documents in different languages. This is particularly helpful in knowledge discovery when a multilingual knowledge base is sparse in one language or another, or when the topical coverage in each language differs, as is the case with Wikipedia. Techniques for identifying new and topically relevant cross-lingual links are a current topic of interest at NTCIR, where the CrossLink task has been running since NTCIR-9 in 2011. This paper presents the evaluation framework for benchmarking cross-lingual link discovery algorithms in the context of NTCIR-9. The framework includes topics, document collections, assessments, metrics, and a toolkit for pooling, assessment, and evaluation. The assessments are divided into two separate sets: manual assessments performed by human assessors, and automatic assessments based on links extracted from Wikipedia itself. Using this framework we show that manual assessment is more robust than automatic assessment in the context of cross-lingual link discovery.
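
As a toy illustration of how recommendations can be scored against the two assessment sets, the sketch below computes precision at k under manual and automatic relevance judgements; the metric choice, link identifiers and data are placeholders, not the actual CrossLink toolkit:

```python
# Sketch: score one system's ranked cross-lingual link recommendations for a topic
# against two relevance sets: manual and automatic (e.g. links mined from Wikipedia).
def precision_at_k(ranked_links, relevant, k=10):
    top = ranked_links[:k]
    return sum(1 for link in top if link in relevant) / max(len(top), 1)

ranked = ["en:Topic_model", "en:Tf-idf", "en:PageRank"]          # toy system output
manual_qrels = {"en:Topic_model", "en:Tf-idf"}                   # human-assessed links
auto_qrels = {"en:Topic_model"}                                  # Wikipedia-extracted links

print("P@3 (manual):   ", precision_at_k(ranked, manual_qrels, k=3))
print("P@3 (automatic):", precision_at_k(ranked, auto_qrels, k=3))
```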

Relevance: 30.00%

Publisher:

Abstract:

Collections of biological specimens are fundamental to scientific understanding and characterization of natural diversity - past, present and future. This paper presents a system for liberating useful information from physical collections by bringing specimens into the digital domain so they can be more readily shared, analyzed, annotated and compared. It focuses on insects and is strongly motivated by the desire to accelerate and augment current practices in insect taxonomy which predominantly use text, 2D diagrams and images to describe and characterize species. While these traditional kinds of descriptions are informative and useful, they cannot cover insect specimens "from all angles" and precious specimens are still exchanged between researchers and collections for this reason. Furthermore, insects can be complex in structure and pose many challenges to computer vision systems. We present a new prototype for a practical, cost-effective system of off-the-shelf components to acquire natural-colour 3D models of insects from around 3 mm to 30 mm in length. ("Natural-colour" is used to contrast with "false-colour", i.e., colour generated from, or applied to, gray-scale data post-acquisition.) Colour images are captured from different angles and focal depths using a digital single lens reflex (DSLR) camera rig and two-axis turntable. These 2D images are processed into 3D reconstructions using software based on a visual hull algorithm. The resulting models are compact (around 10 megabytes), afford excellent optical resolution, and can be readily embedded into documents and web pages, as well as viewed on mobile devices. The system is portable, safe, relatively affordable, and complements the sort of volumetric data that can be acquired by computed tomography. This system provides a new way to augment the description and documentation of insect species holotypes, reducing the need to handle or ship specimens. It opens up new opportunities to collect data for research, education, art, entertainment, biodiversity assessment and biosecurity control. © 2014 Nguyen et al.

Relevance: 30.00%

Publisher:

Abstract:

The past five years have seen many scientific and biological discoveries made through the experimental design of genome-wide association studies (GWASs). These studies were aimed at detecting variants at genomic loci that are associated with complex traits in the population and, in particular, at detecting associations between common single-nucleotide polymorphisms (SNPs) and common diseases such as heart disease, diabetes, auto-immune diseases, and psychiatric disorders. We start by giving a number of quotes from scientists and journalists about perceived problems with GWASs. We will then briefly give the history of GWASs and focus on the discoveries made through this experimental design, what those discoveries tell us and do not tell us about the genetics and biology of complex traits, and what immediate utility has come out of these studies. Rather than giving an exhaustive review of all reported findings for all diseases and other complex traits, we focus on the results for auto-immune diseases and metabolic diseases. We return to the perceived failure or disappointment about GWASs in the concluding section. © 2012 The American Society of Human Genetics.

Relevance: 30.00%

Publisher:

Abstract:

A common challenge that users of academic databases face is making sense of their query outputs for knowledge discovery. This is exacerbated by the size and growth of modern databases. PubMed, a central index of biomedical literature, contains over 25 million citations, and can output search results containing hundreds of thousands of citations. Under these conditions, efficient knowledge discovery requires a different data structure than a chronological list of articles. It requires a method of conveying what the important ideas are, where they are located, and how they are connected; a method of allowing users to see the underlying topical structure of their search. This paper presents VizMaps, a PubMed search interface that addresses some of these problems. Given search terms, our main backend pipeline extracts relevant words from the title and abstract, and clusters them into discovered topics using Bayesian topic models, in particular the Latent Dirichlet Allocation (LDA). It then outputs a visual, navigable map of the query results.
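
A compact sketch of the kind of backend pipeline described above (vectorize the abstracts, fit LDA, report the top words per discovered topic) is shown below; the documents and parameter values are placeholders, not the VizMaps implementation:

```python
# Sketch: cluster query results into topics with LDA and list each topic's top words,
# the kind of structure a navigable topic map could then be drawn from.
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer

abstracts = ["sickle cell disease treatment hydroxyurea",
             "genome wide association study snp disease",
             "hemoglobin inducers drug design",
             "snp heritability complex traits"]          # placeholder query results

vec = CountVectorizer(stop_words="english")
X = vec.fit_transform(abstracts)
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)

terms = vec.get_feature_names_out()
for k, comp in enumerate(lda.components_):
    top = [terms[i] for i in comp.argsort()[::-1][:5]]   # five highest-weight words
    print(f"topic {k}:", ", ".join(top))
```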

Relevance: 30.00%

Publisher:

Abstract:

Sickle cell disease (SCD) is one of the most prevalent hematological diseases in the world. Despite the immense progress in molecular knowledge about SCD in recent years, few therapeutic options are currently available. Treatment is currently performed mainly with drugs such as hydroxyurea or other fetal hemoglobin inducers, together with chelating agents. This review summarizes current knowledge about treatment and recent advances in drug design aimed at discovering more effective and safer drugs. Patient monitoring methods in SCD are also discussed. © 2011 Bentham Science Publishers Ltd.

Relevance: 30.00%

Publisher:

Abstract:

Thanks to constant technological evolution, in recent years more and more everyday objects have been connecting to the Internet. The proliferation of "smart" devices has triggered a new technological revolution, the Internet of Things (IoT), which is putting into users' hands an enormous amount of information capable of bringing considerable benefits to everyday life. To access the available data, a service is needed that allows the discovery of, access to, and interaction with the network nodes that manage the information. Some such mechanisms are already available in the literature, but they have shortcomings that would be further accentuated by the limited computational capabilities of IoT terminals. This thesis project presents a discovery service for Kura-based IoT gateways, designed, through the use of the MQTT messaging protocol, to operate with low-performance terminals and under poor connectivity. In the proposed service, Android smartphones ask all the gateways in a given location for the parameters needed to join their network. The request is sent as an MQTT message published to a location-specific topic on a remote broker. Gateways that receive the message and are interested in the client's characteristics reply with the network access data, so that the device can configure itself automatically to join the network. Once access is obtained, client and gateway communicate directly through a local broker. The testing phase evaluates the performance of the service by analyzing response times and resource usage on the gateway side, and power consumption on the client side.
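
A minimal paho-mqtt sketch of the request/response exchange described above follows: a client publishes a discovery request on a location-specific topic and listens for gateway replies. The broker address, topic names and payload format are assumptions for illustration, and the snippet assumes the paho-mqtt 1.x callback API:

```python
# Sketch: client-side discovery request over MQTT. Gateways subscribed to the location
# topic would answer on the reply topic with network access parameters.
import json
import paho.mqtt.client as mqtt  # assumes the paho-mqtt 1.x callback API

BROKER = "broker.example.org"                 # hypothetical remote broker
LOCATION = "campus/building-a"                # hypothetical location
REQUEST_TOPIC = f"discovery/{LOCATION}/request"
REPLY_TOPIC = f"discovery/{LOCATION}/reply/client-42"

def on_connect(client, userdata, flags, rc):
    client.subscribe(REPLY_TOPIC)
    payload = json.dumps({"client_id": "client-42", "reply_to": REPLY_TOPIC})
    client.publish(REQUEST_TOPIC, payload)    # ask nearby gateways for access parameters

def on_message(client, userdata, msg):
    params = json.loads(msg.payload)          # e.g. local broker host/port, credentials
    print("gateway offered network parameters:", params)

client = mqtt.Client()
client.on_connect = on_connect
client.on_message = on_message
client.connect(BROKER, 1883)
client.loop_forever()
```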

Relevance: 30.00%

Publisher:

Abstract:

Peer reviewed

Relevance: 30.00%

Publisher:

Abstract:

Peer reviewed

Relevance: 30.00%

Publisher:

Abstract:

Peer reviewed