820 results for Best match
Abstract:
The World Wide Web has become a medium for people to share information. People use Web-based collaborative tools such as question answering (QA) portals, blogs/forums, email and instant messaging to acquire information and to form online communities. In an online QA portal, a user asks a question and other users can provide answers based on their knowledge, with the question usually being answered by many users. It can become overwhelming and time- and resource-consuming for a user to read all of the answers provided for a given question. Thus, there exists a need for a mechanism to rank the provided answers so users can focus on reading only the good-quality answers. The majority of online QA systems use user feedback to rank users’ answers, and the user who asked the question can decide on the best answer. Other users who did not participate in answering the question can also vote to determine the best answer. However, ranking the best answer via this collaborative method is time-consuming and requires ongoing involvement of users to provide the needed feedback. The objective of this research is to discover a way to automatically recommend the best answer, as part of a ranked list of answers for a posted question, without the need for user feedback. The proposed approach combines a non-content-based reputation method and a content-based method to solve the problem of recommending the best answer to the user who posted the question. The non-content method assigns a score to each user which reflects that user’s reputation level in using the QA portal system. Each user is assigned two types of non-content-based reputation scores: a local reputation score and a global reputation score. The local reputation score plays an important role in deciding the reputation level of a user for the category in which the question is asked. The global reputation score indicates the prestige of a user across all of the categories in the QA system. Due to the possibility of user cheating, such as awarding the best answer to a friend regardless of the answer quality, a content-based method for determining the quality of a given answer is proposed alongside the non-content-based reputation method. Answers for a question from different users are compared with an ideal (or expert) answer using traditional Information Retrieval and Natural Language Processing techniques. Each answer provided for a question is assigned a content score according to how well it matches the ideal answer. To evaluate the performance of the proposed methods, each recommended best answer is compared with the best answer determined by one of the most popular link analysis methods, Hyperlink-Induced Topic Search (HITS). The proposed methods yield high accuracy, as shown by Kendall and Spearman correlation scores. The reputation method outperforms the HITS method in terms of recommending the best answer. The inclusion of the reputation score with the content score improves the overall performance, which is measured through the use of Top-n match scores.
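A minimal sketch of the kind of score combination this abstract describes: a content score from comparing each answer to an ideal answer, mixed with the answering user's reputation score. The bag-of-words cosine similarity, the 0.5/0.5 weighting, and all function names below are illustrative assumptions, not the thesis's actual formulation.

```python
# Sketch: rank answers by a weighted mix of a content score (similarity to an
# "ideal" answer) and the answering user's reputation score.
# The term-frequency representation and the weights are assumptions.
import math
from collections import Counter

def cosine_similarity(text_a, text_b):
    """Cosine similarity between simple term-frequency vectors."""
    a, b = Counter(text_a.lower().split()), Counter(text_b.lower().split())
    dot = sum(a[t] * b[t] for t in set(a) & set(b))
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def rank_answers(answers, ideal_answer, reputation, w_content=0.5):
    """answers: list of (user, answer_text); reputation: user -> score in [0, 1]."""
    scored = []
    for user, text in answers:
        content = cosine_similarity(text, ideal_answer)
        combined = w_content * content + (1 - w_content) * reputation.get(user, 0.0)
        scored.append((combined, user, text))
    return sorted(scored, reverse=True)  # recommended best answer first

# Example usage with made-up data
answers = [("alice", "Use binary search on the sorted list."),
           ("bob", "Just loop over everything.")]
reputation = {"alice": 0.9, "bob": 0.4}
print(rank_answers(answers, "Binary search finds items in a sorted list.", reputation))
```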
Abstract:
In this work, we evaluate the performance of a real-world image processing application that uses a cross-correlation algorithm to compare a given image with a reference one. The algorithm processes individual images represented as 2-dimensional matrices of single-precision floating-point values using O(n^4) operations involving dot products and additions. We implement this algorithm on an nVidia GTX 285 GPU using CUDA, and also parallelize it for the Intel Xeon (Nehalem) and IBM Power7 processors, using both manual and automatic techniques. Pthreads and OpenMP with SSE and VSX vector intrinsics are used for the manually parallelized version, while a state-of-the-art optimization framework based on the polyhedral model is used for automatic compiler parallelization and optimization. The performance of this algorithm on the nVidia GPU suffers from: (1) a smaller shared memory, (2) unaligned device memory access patterns, (3) expensive atomic operations, and (4) weaker single-thread performance. On commodity multi-core processors, the application dataset is small enough to fit in caches, and when parallelized using a combination of task and short-vector data parallelism (via SSE/VSX) or through fully automatic optimization from the compiler, the application matches or beats the performance of the GPU version. The primary reasons for better multi-core performance include larger and faster caches, higher clock frequency, higher on-chip memory bandwidth, and better compiler optimization and support for parallelization. The best performing versions on the Power7, Nehalem, and GTX 285 run in 1.02s, 1.82s, and 1.75s, respectively. These results conclusively demonstrate that, under certain conditions, it is possible for a FLOP-intensive structured application running on a multi-core processor to match or even beat the performance of an equivalent GPU version.
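A minimal sketch of where the O(n^4) cost comes from: sliding the reference over the image and computing a dot product at every 2-D offset gives O(n^2) offsets times O(n^2) work per offset. The array shapes, the "valid" offset range, and the function name are assumptions for illustration, not the paper's actual kernel.

```python
# Sketch: naive 2-D cross-correlation with the O(n^4) structure described above.
import numpy as np

def cross_correlate(image, reference):
    """image: (H, W) float32; reference: (h, w) float32 with h <= H and w <= W."""
    H, W = image.shape
    h, w = reference.shape
    out = np.zeros((H - h + 1, W - w + 1), dtype=np.float32)
    for dy in range(H - h + 1):          # O(n) vertical offsets
        for dx in range(W - w + 1):      # O(n) horizontal offsets
            window = image[dy:dy + h, dx:dx + w]
            out[dy, dx] = np.dot(window.ravel(), reference.ravel())  # O(n^2) per offset
    return out

# Example usage with random single-precision data
rng = np.random.default_rng(0)
img = rng.random((64, 64), dtype=np.float32)
ref = rng.random((16, 16), dtype=np.float32)
scores = cross_correlate(img, ref)
print(scores.shape, float(scores.max()))
```

The two outer offset loops are the natural dimension to partition across threads or GPU blocks, while the inner dot product is where short-vector (SSE/VSX) parallelism applies.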
Abstract:
A recent trend in spoken dialogue research is the use of reinforcement learning to train dialogue systems in a simulated environment. Past researchers have shown that the types of errors that are simulated can have a significant effect on simulated dialogue performance. Since modern systems typically receive an N-best list of possible user utterances, it is important to be able to simulate a full N-best list of hypotheses. This paper presents a new method for simulating such errors based on logistic regression, as well as a new method for simulating the structure of N-best lists of semantics and their probabilities, based on the Dirichlet distribution. Off-line evaluations show that the new Dirichlet model results in a much closer match to the receiver operating characteristics (ROC) of the live data. Experiments also show that the logistic model gives confusions closer to those observed in live situations. The hope is that these new error models will improve the resulting performance of trained dialogue systems. © 2012 IEEE.
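A minimal sketch of the Dirichlet idea mentioned in this abstract: drawing the confidence mass of a simulated N-best list from a Dirichlet distribution so the scores sum to one. The concentration parameter, list length, and hypothesis labels are assumptions for illustration, not the paper's fitted model.

```python
# Sketch: simulate N-best confidence scores for candidate semantic hypotheses
# by sampling from a Dirichlet distribution. alpha is an assumed concentration.
import numpy as np

def simulate_nbest(hypotheses, alpha=0.5, rng=None):
    """Assign each hypothesis a simulated confidence; confidences sum to 1."""
    rng = rng or np.random.default_rng()
    probs = rng.dirichlet([alpha] * len(hypotheses))
    return sorted(zip(hypotheses, probs), key=lambda x: x[1], reverse=True)

# Example usage: three candidate hypotheses for one simulated user turn
nbest = simulate_nbest(["inform(food=italian)", "inform(food=indian)", "request(price)"])
for hyp, p in nbest:
    print(f"{p:.3f}  {hyp}")
```

A small alpha concentrates most of the mass on one hypothesis (a confident recognizer), while a larger alpha spreads it more evenly, which is the kind of list structure such a model can control.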
Abstract:
This report concerns the provisions and practices on betting-related match fixing in sports within the 28 Member States. Carried out in late 2013 and early 2014, the study asked respondents in each Member State to report on that state’s gambling-related provisions in respect of football, tennis and, in each country, a third sport determined on the basis of either its popularity (in terms of participation or television viewing) or the existence of betting-related “scandals” in that sport within that particular jurisdiction. Those reports helped the authors to compare the Member States’ regulatory and self-regulatory frameworks relating to risk assessment and conflict-of-interest management, with a view to indicating areas of best practice, identifying particularly good legislative frameworks and highlighting areas where change was either desirable or necessary. While some individual Member States have legislation which might provide templates that others could adapt for their own use, the authors were not convinced that “more law”, whether at the national or European level, was desirable. Rather, more effective cooperation among the stakeholders was identified as more likely to provide tangible benefits than new legal frameworks.
Abstract:
The Kasparov-World match was initiated by Microsoft with sponsorship from the bank First USA. The concept was that Garry Kasparov as White would play the rest of the world on the Web: one ply would be played per day and the World Team was to vote for its move. The Kasparov-World game was a success from many points of view. It certainly gave thousands the feeling of facing the world’s best player across the board and did much for the future of the game. Described by Kasparov as “phenomenal ... the most complex in chess history”, it is probably a worthy ‘Greatest Game’ candidate. Computer technology has given chess a new mode of play and taken it to new heights: the experiment deserves to be repeated. We look forward to another game and experience of this quality although it will be difficult to surpass the event we have just enjoyed. We salute and thank all those who contributed - sponsors, moderator, coaches, unofficial analysts, organisers, technologists, voters and our new friends.
Abstract:
The performances of two parametrized functionals (namely B3LYP and B2PLYP) have been compared with those of two non-parametrized functionals (PBE0 and PBE0-DH) on a relatively large benchmark set when three different types of dispersion corrections are applied [namely the D2, D3 and D3(BJ) models]. Globally, the mean absolute deviation (MAD) computed using non-parametrized functionals decreases when adding dispersion terms, although the accuracy does not necessarily increase with the complexity of the dispersion-correction model used. In particular, the D2 correction is found to improve the performances of both PBE0 and PBE0-DH, while no systematic improvement is observed going from D2 to D3 or D3(BJ) corrections. Indeed, when including dispersion, the number of sets for which PBE0-DH is the best-performing functional decreases to the benefit of B2PLYP. Overall, our results clearly show that the inclusion of dispersion corrections is more beneficial to parametrized double-hybrid functionals than to non-parametrized ones. The same conclusions globally hold for the corresponding global hybrids, showing that the marriage between non-parametrized functionals and empirical corrections may be a difficult deal.
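For readers unfamiliar with the metric, a minimal sketch of how a mean absolute deviation against benchmark reference values might be computed. The variable names, example numbers, and kcal/mol units are assumptions for illustration, not data from this study.

```python
# Sketch: MAD between computed and reference energies for one benchmark set.
def mean_absolute_deviation(computed, reference):
    """Both arguments are equal-length lists of energies (e.g. kcal/mol)."""
    return sum(abs(c - r) for c, r in zip(computed, reference)) / len(reference)

# Example usage with made-up energies
computed_energies = [12.1, -3.4, 25.8]
reference_values  = [11.5, -3.0, 26.9]
print(f"MAD = {mean_absolute_deviation(computed_energies, reference_values):.2f} kcal/mol")
```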
Abstract:
Waterfalls attract tourists because they are aesthetically appealing landscape features that are not part of everyday experience. It is generally understood that falls are usually seen at their best when there is a copious flow of water, especially after heavy rain. Guidebooks often contain this observation when referring to waterfalls, sometimes warning readers that the flow may be severely reduced during dry periods. Indeed, many visitors are disappointed when they see falls at such times. Some are saddened when the discharge of a waterfall has been depleted by the abstraction of water upstream for power generation or other purposes. While, for those in search of the Sublime or merely the superlative, size is often important, small waterfalls can give great pleasure to lovers of landscape beauty. According to guidebooks, however, even these falls are usually best seen after rain. Drawing on tourist and travel literature and personal journals from the eighteenth century to the present, and with reference to examples from different parts of the world, this paper discusses the importance of discharge in the tourist experience of waterfalls.
Abstract:
This paper outlines a process for fleet safety training based on research and management development programmes undertaken at the University of Huddersfield in the UK (www.hud.ac.uk/sas/trans/transnews.htm) and CARRS-Q in Australia (www.carrsq.qut.edu.au/staff/Murray.jsp) over the past 10 years.
Who Should Bear the Risk - The Party Least Able to Refuse or the Party Best Able to Manage the Risk?