938 results for Question-Answering System
Abstract:
In this paper, we explore the idea of social role theory (SRT) and propose a novel regularized topic model which incorporates SRT into the generative process of social media content. We assume that a user can play multiple social roles, and each social role serves to fulfil different duties and is associated with a role-driven distribution over latent topics. In particular, we focus on social roles corresponding to the most common social activities on social networks. Our model is instantiated on microblogs (Twitter) and community question answering (cQA; Yahoo! Answers), where social roles on Twitter include "originators" and "propagators", and roles on cQA are "askers" and "answerers". Both explicit and implicit interactions between users are taken into account and modeled as regularization factors. To evaluate the performance of our proposed method, we conducted extensive experiments on two Twitter datasets and two cQA datasets. Furthermore, we consider multi-role modeling for scientific papers, where an author's research expertise area is treated as a social role. We also present a novel application that detects users' research interests through topical keyword labeling based on the results of our multi-role model. The evaluation results show the feasibility and effectiveness of our model.
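The role-driven generative step described in this abstract (a user mixes over social roles, and each role carries its own distribution over latent topics) can be illustrated with a minimal sketch. All names and probabilities below are invented for illustration; this is not the paper's actual model or inference procedure.

```python
# Toy sketch of a role-driven generative step (invented numbers, not
# the paper's model): sample a role from the user's role mixture, then
# sample a latent topic from that role's topic distribution.
import random

random.seed(0)

# Hypothetical cQA user: mostly an "answerer", sometimes an "asker".
user_roles = {"asker": 0.3, "answerer": 0.7}

# Each role is associated with its own distribution over latent topics.
role_topics = {
    "asker":    {"howto": 0.6, "opinion": 0.4},
    "answerer": {"howto": 0.2, "opinion": 0.8},
}

def sample(dist):
    """Draw one key from a {key: probability} distribution."""
    r, acc = random.random(), 0.0
    for key, p in dist.items():
        acc += p
        if r < acc:
            return key
    return key  # guard against floating-point rounding

role = sample(user_roles)          # pick a role for this post
topic = sample(role_topics[role])  # pick a topic given the role
print(role, topic)
```

A full topic model would then sample words from the chosen topic and add the regularization factors the abstract mentions; this sketch only shows the role-conditioned topic draw.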
Abstract:
This paper provides a summary of the Social Media and Linked Data for Emergency Response (SMILE) workshop, co-located with the Extended Semantic Web Conference in Montpellier, France, in 2013. Following paper presentations and question-and-answer sessions, an extensive discussion and roadmapping session was organised involving the workshop chairs and attendees. Three main topics guided the discussion: challenges, opportunities and showstoppers. In this paper, we present our roadmap towards effectively exploiting social media and semantic web techniques for emergency response and crisis management.
Abstract:
Natural language processing has achieved great success in a wide range of applications, producing both commercial language services and open-source language tools. However, most methods take a static or batch approach, assuming that the model has all the information it needs and makes a one-time prediction. In this dissertation, we study dynamic problems where the input comes in a sequence instead of all at once, and the output must be produced while the input is arriving. In these problems, predictions are often made based only on partial information. We see this dynamic setting in many real-time, interactive applications. These problems usually involve a trade-off between the amount of input received (cost) and the quality of the output prediction (accuracy). Therefore, the evaluation considers both objectives (e.g., plotting a Pareto curve). Our goal is to develop a formal understanding of sequential prediction and decision-making problems in natural language processing and to propose efficient solutions. Toward this end, we present meta-algorithms that take an existing batch model and produce a dynamic model to handle sequential inputs and outputs. We build our framework upon the theory of Markov Decision Processes (MDPs), which allows learning to trade off competing objectives in a principled way. The main machine learning techniques we use are from imitation learning and reinforcement learning, and we advance current techniques to tackle problems arising in our settings. We evaluate our algorithms on a variety of applications, including dependency parsing, machine translation, and question answering. We show that our approach achieves a better cost-accuracy trade-off than the batch approach and heuristic-based decision-making approaches. We first propose a general framework for cost-sensitive prediction, where different parts of the input come at different costs.
We formulate a decision-making process that selects pieces of the input sequentially, and the selection is adaptive to each instance. Our approach is evaluated on both standard classification tasks and a structured prediction task (dependency parsing). We show that it achieves similar prediction quality to methods that use all input, while inducing a much smaller cost. Next, we extend the framework to problems where the input is revealed incrementally in a fixed order. We study two applications: simultaneous machine translation and quiz bowl (incremental text classification). We discuss challenges in this setting and show that adding domain knowledge eases the decision-making problem. A central theme throughout the chapters is an MDP formulation of a challenging problem with sequential input/output and trade-off decisions, accompanied by a learning algorithm that solves the MDP.
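The cost-accuracy trade-off at the heart of this dissertation can be illustrated with a minimal sketch. The threshold policy, scorer, and all names below are hypothetical stand-ins, not the dissertation's actual imitation- or reinforcement-learning algorithms: an agent consumes input pieces one at a time and stops once its confidence margin clears a threshold, trading input cost against prediction quality.

```python
# Minimal sketch of a sequential stop-or-continue policy (hypothetical,
# not the dissertation's algorithm): at each step, either consume the
# next piece of input (paying a cost) or commit to a prediction.

def margin(scores):
    """Confidence margin: gap between the top two class scores."""
    top = sorted(scores, reverse=True)
    return top[0] - top[1]

def run_episode(pieces, score_fn, threshold, cost_per_piece=1.0):
    """Consume input pieces until the margin clears the threshold.

    score_fn maps the observed prefix to a list of per-class scores.
    Returns (predicted class index, total cost paid).
    """
    observed = []
    for piece in pieces:
        observed.append(piece)
        scores = score_fn(observed)
        if margin(scores) >= threshold:
            break  # confident enough: stop early and save cost
    prediction = max(range(len(scores)), key=scores.__getitem__)
    return prediction, cost_per_piece * len(observed)

# Toy scorer: confidence in class 0 grows with the amount of input seen.
toy_score = lambda obs: [0.2 * len(obs), 0.1]
pred, cost = run_episode(["w1", "w2", "w3", "w4"], toy_score, threshold=0.4)
print(pred, cost)
```

Sweeping the threshold traces out exactly the kind of cost-accuracy Pareto curve the abstract describes; the dissertation replaces this fixed threshold with a learned MDP policy.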
Abstract:
[EU] In natural language processing, automatically detecting and distinguishing the causal group of relations (CAUSE, RESULT and PURPOSE) in coherent texts is useful when building automatic question-answering systems. For this purpose we use Rhetorical Structure Theory (RST) and its relations, taking as our corpus the RST Treebank (Iruskieta et al., 2013), a corpus composed of scientific abstract texts. We download that corpus in XML format and extract the most relevant information from it with the XPATH tool. This work has three main goals: first, to distinguish the causal-group relations from one another; second, to distinguish these causal-group relations from all other relations; and finally, to distinguish the EVALUATION and INTERPRETATION relations so that they can be applied in sentiment analysis. To carry out these tasks, we used the most significant patterns obtained with the RhetDB tool and developed two applications: on the one hand, a search tool that takes the patterns we want to find and searches any text with relational structure, and on the other, a tagger that labels relations given the most significant patterns. We developed both applications to be as parameterizable as possible, so that anyone can use them for similar tasks without changing the code. After evaluating the tagger, we found that the easiest relation to identify is PURPOSE, and we concluded that we have more difficulty distinguishing CAUSE from RESULT. Likewise, we found that EVALUATION and INTERPRETATION can also be distinguished from each other.
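The pattern-based tagger this abstract describes can be sketched roughly as follows. The cue patterns and example sentences below are illustrative English stand-ins, not the study's actual Basque patterns extracted with RhetDB:

```python
# Rough sketch of a pattern-based relation tagger (illustrative English
# cue phrases, not the study's RhetDB patterns): each relation is
# associated with cue-phrase regexes, and a text span is labeled with
# the first relation whose pattern matches.
import re

PATTERNS = {
    "CAUSE":   [r"\bbecause\b", r"\bsince\b"],
    "RESULT":  [r"\btherefore\b", r"\bas a result\b"],
    "PURPOSE": [r"\bin order to\b", r"\bso that\b"],
}

def tag(span):
    """Return the first relation whose cue pattern matches, else None."""
    for relation, patterns in PATTERNS.items():
        if any(re.search(p, span, re.IGNORECASE) for p in patterns):
            return relation
    return None

print(tag("We tested it in order to measure accuracy."))
print(tag("The corpus is small; therefore results vary."))
```

Because the patterns are plain data rather than code, swapping in a different pattern table covers similar tasks without changing the program, which mirrors the parameterizable design the abstract emphasizes.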
Abstract:
Our media is saturated with claims of "facts" made from data. Database research has in the past focused on how to answer queries, but has not devoted much attention to discerning more subtle qualities of the resulting claims, e.g., is a claim "cherry-picking"? This paper proposes a Query Response Surface (QRS) based framework that models claims based on structured data as parameterized queries. A key insight is that we can learn a lot about a claim by perturbing its parameters and seeing how its conclusion changes. This framework lets us formulate and tackle practical fact-checking tasks, such as reverse-engineering vague claims and countering questionable claims, as computational problems. Within the QRS-based framework, we take one step further and propose a problem, along with efficient algorithms, for finding high-quality claims of a given form from data, i.e., raising good questions in the first place. This is achieved by using a limited number of high-valued claims to represent high-valued regions of the QRS. Besides the general-purpose high-quality claim finding problem, lead-finding can be tailored towards specific claim quality measures, also defined within the QRS framework. An example of uniqueness-based lead-finding is presented for "one-of-the-few" claims, yielding interpretable high-quality claims and an adjustable mechanism for ranking objects, e.g., NBA players, based on what claims can be made for them. Finally, we study the use of visualization as a powerful way of conveying the results of a large number of claims. An efficient two-stage sampling algorithm is proposed for generating the input of a 2D scatter plot with heatmap, evaluating a limited amount of data while preserving the two essential visual features, namely outliers and clusters. For all the problems, we present real-world examples and experiments that demonstrate the power of our model, the efficiency of our algorithms, and the usefulness of their results.
Abstract:
There is a dearth of evidence focusing on student preferences for computer-based testing versus testing via student response systems for summative assessment in undergraduate education. This quantitative study compared the preference and acceptability of computer-based testing and a student response system for completing multiple choice questions in undergraduate nursing education. After using both computer-based testing and a student response system to complete multiple choice questions, 192 first-year undergraduate nursing students rated their preferences and attitudes towards using computer-based testing and a student response system. Results indicated that seventy-four percent felt the student response system was easy to use. Fifty-six percent felt the student response system took more time than computer-based testing to become familiar with. Sixty percent felt computer-based testing was more user-friendly. Seventy percent of students would prefer to take a multiple choice question summative exam via computer-based testing, although fifty percent would be happy to take it using the student response system. The results are useful for undergraduate educators in relation to students' preferences for using computer-based testing or a student response system to undertake a summative multiple choice question exam.
Abstract:
In 2013, an opportunity arose in England to develop an agri-environment package for wild pollinators, as part of the new Countryside Stewardship scheme launched in 2015. It can be understood as a 'policy window', a rare and time-limited opportunity to change policy, supported by a narrative about pollinator decline and widely supported mitigating actions. An agri-environment package is a bundle of management options that together supply sufficient resources to support a target group of species. This paper documents information that was available at the time to develop such a package for wild pollinators. Four questions needed answering: (1) Which pollinator species should be targeted? (2) Which resources limit these species in farmland? (3) Which management options provide these resources? (4) What area of each option is needed to support populations of the target species? Focussing on wild bees, we provide tentative answers that were used to inform development of the package. There is strong evidence that floral resources can limit wild bee populations, and several sources of evidence identify a set of agri-environment options that provide flowers and other resources for pollinators. The final question could only be answered for floral resources, with a wide range of uncertainty. We show that the areas of some floral resource options in the basic Wild Pollinator and Farmland Wildlife Package (2% flower-rich habitat and 1 km flowering hedgerow) are sufficient to supply a set of six common pollinator species with enough pollen to feed their larvae at lowest estimates, using minimum values for estimated parameters where a range was available. We identify key sources of uncertainty, and stress the importance of keeping the Package flexible, so it can be revised as new evidence emerges about how to achieve the policy aim of supporting pollinators on farmland.
Abstract:
Objectives: Despite the growing use of online databases by clinicians, there has been very little research documenting how effectively they are used. This study assessed the ability of medical and nurse-practitioner students to answer clinical questions using an information retrieval system. It also attempted to identify the demographic, experience, cognitive, personality, search mechanics, and user-satisfaction factors associated with successful use of a retrieval system.