938 results for Page Rank
Abstract:
Traditional area-based matching techniques make use of similarity metrics such as the Sum of Absolute Differences (SAD), the Sum of Squared Differences (SSD) and Normalised Cross Correlation (NCC). Non-parametric matching algorithms such as the rank and census transforms rely on the relative ordering of pixel values, rather than the pixel values themselves, as a similarity measure. Both traditional area-based and non-parametric stereo matching techniques have an algorithmic structure which is amenable to fast hardware realisation. This investigation undertakes a performance assessment of these two families of algorithms for robustness to radiometric distortion and random noise. A generic implementation framework for the stereo matching problem is presented, and the relative hardware requirements of the various metrics are investigated.
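As an illustration of the three area-based measures named above, here is a minimal NumPy sketch (window contents and function names are illustrative, not taken from the paper). Note how NCC, unlike SAD and SSD, is unaffected by a gain-and-offset radiometric change:

```python
import numpy as np

def sad(a, b):
    """Sum of Absolute Differences: lower means more similar."""
    return np.abs(a - b).sum()

def ssd(a, b):
    """Sum of Squared Differences: lower means more similar."""
    return ((a - b) ** 2).sum()

def ncc(a, b):
    """Normalised Cross Correlation: values near 1 mean very similar."""
    a0, b0 = a - a.mean(), b - b.mean()
    return (a0 * b0).sum() / np.sqrt((a0 ** 2).sum() * (b0 ** 2).sum())

# Compare a reference window against a radiometrically distorted copy:
# SAD and SSD change, while NCC is invariant to gain and offset.
left = np.array([[10., 12.], [11., 13.]])
right = 2.0 * left + 5.0
print(sad(left, right), ssd(left, right), ncc(left, right))
```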
Abstract:
A fundamental problem faced by stereo matching algorithms is the matching or correspondence problem, and a wide range of algorithms have been proposed to address it. For any matching algorithm, it would be useful to be able to compute a measure of the probability of correctness, or reliability, of a match. This paper focuses in particular on one class of matching algorithms, those based on the rank transform. The interest in these algorithms for stereo matching stems from their invariance to radiometric distortion and their amenability to fast hardware implementation. This work differs from previous work in that it derives, from first principles, an expression for the probability of a correct match, based on an enumeration of all possible symbols for matching. The theoretical results for disparity error prediction obtained using this method were found to agree well with experimental results. However, disadvantages of the technique developed here are that it is not easily applicable to real images and that it is too computationally expensive for practical window sizes. Nevertheless, the exercise provides an interesting and novel analysis of match reliability.
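For context, the rank transform on which this class of algorithms is based can be sketched in a few lines (this illustrates the transform itself, not the paper's reliability derivation; the window radius and test image are illustrative):

```python
import numpy as np

def rank_transform(img, radius=1):
    """Rank transform: each pixel becomes the count of neighbours in its
    (2*radius+1)**2 window whose intensity is strictly below the centre.
    The output depends only on the ordering of intensities, which is why
    the transform is invariant to monotonic radiometric distortion."""
    h, w = img.shape
    out = np.zeros((h, w), dtype=np.int32)
    for y in range(radius, h - radius):
        for x in range(radius, w - radius):
            window = img[y - radius:y + radius + 1, x - radius:x + radius + 1]
            out[y, x] = np.count_nonzero(window < img[y, x])
    return out

img = np.array([[1, 2, 3],
                [4, 9, 5],
                [6, 7, 8]])
print(rank_transform(img))  # interior pixel maps to 8: all 8 neighbours are darker
```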
Abstract:
In this paper, the author describes recent developments in the assessment of research activity and publication in Australia. Of particular interest to readers will be the move to rank academic journals. Educational Philosophy and Theory (EPAT) received the highest possible ranking; however, the process is far from complete. Some implications for the field, for this journal and, particularly, for the educational foundations are discussed.
Abstract:
This chapter considers the ways in which contemporary children’s literature depicts reading in changing times, with a particular eye on the cultural definitions of ‘reading’ being offered to young people in the age of the tablet computer. A number of picture books, in codex and app form, speak to changing times for reading by their emphasis on the value of books and reading as technologies of literature and of the self. Attending to valuations of literacy and literature within children’s texts provides insight into anxieties about books in the electronic age.
Abstract:
This paper argues that governments around the world need to take immediate coordinated action to reverse the 'book famine.' There are over 129 million book titles in the world, but persons with print disabilities can obtain less than 7% of these titles in formats that they can read. The situation is most acute in developing countries, where less than 1% of books are accessible. Two recent international developments – the United Nations Convention on the Rights of Persons with Disabilities (‘CRPD’) and the new Marrakesh Treaty to Facilitate Access to Published Works for Persons who are Blind, Visually Impaired, or otherwise Print Disabled (somewhat ironically nicknamed the ‘VIP Treaty’) – suggest that nation states are increasingly willing to take action to reverse the book famine. The Marrakesh Treaty promises to level out some of the disparity of access between people in developed and developing nations and to remove the need for each jurisdiction to digitise a separate copy of each book. This is a remarkable advance, and suggests the beginnings of a possible paradigm shift in global copyright politics, made all the more remarkable in the face of heated opposition from global copyright industry representatives. Now that the Marrakesh Treaty has been concluded, however, we argue that a substantial exercise of global political will is required to (a) invest the funds required to digitise existing books; and (b) avert any further harm by ensuring that books published in the future are made accessible upon their release.
Abstract:
This practice-led project has two outcomes: a collection of short stories titled 'Corkscrew Section', and an exegesis. The short stories combine written narrative with visual elements such as images and typographic devices, while the exegesis analyses the function of these graphic devices within adult literary fiction. My creative writing explores a variety of genres and literary styles, but almost all of the stories are concerned with fusing verbal and visual modes of communication. The exegesis adopts the interpretive paradigm of multimodal stylistics, which aims to analyse graphic devices with the same level of detail as linguistic analysis. Within this framework, the exegesis compares and extends previous studies to develop a systematic method for analysing how the interactions between language, images and typography create meaning within multimodal literature.
Abstract:
Active Appearance Models (AAMs) employ a paradigm of inverting a synthesis model of how an object can vary in terms of shape and appearance. As a result, the ability of AAMs to register an unseen object image is intrinsically linked to two factors: first, how well the synthesis model can reconstruct the object image; and second, the degrees of freedom in the model, where fewer degrees of freedom yield a higher likelihood of good fitting performance. In this paper we look at how these seemingly contrasting factors can complement one another for the problem of AAM fitting of an ensemble of images stemming from a constrained set (e.g. an ensemble of face images of the same person).
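The trade-off described here, reconstruction fidelity versus degrees of freedom, can be illustrated with a toy linear (PCA) synthesis model; the random data and variable names below are purely illustrative and this is not the authors' AAM implementation:

```python
import numpy as np

def pca_model(ensemble, k):
    """Fit a k-mode linear synthesis model to an ensemble of vectorised
    images, returning the mean and the k leading modes of variation."""
    mean = ensemble.mean(axis=0)
    _, _, vt = np.linalg.svd(ensemble - mean, full_matrices=False)
    return mean, vt[:k]

def reconstruct(sample, mean, modes):
    """Project a sample into the model subspace and synthesise it back;
    the number of rows in `modes` is the model's degrees of freedom."""
    params = modes @ (sample - mean)
    return mean + modes.T @ params

rng = np.random.default_rng(0)
faces = rng.normal(size=(50, 64))    # 50 vectorised 'images' of 64 pixels each
mean, modes = pca_model(faces, k=5)  # fewer modes: easier fitting, coarser reconstruction
error = np.linalg.norm(faces[0] - reconstruct(faces[0], mean, modes))
print(error)
```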
Abstract:
Purpose – This study aims to evaluate the usefulness of a university unit Facebook page, which was established to support a first-year university justice unit. The study pays particular regard to the Facebook page's impact on students' learning outcomes and on communication amongst students and between students and teaching staff. Design/methodology/approach – All students enrolled in the unit were asked to complete an online survey, which sought to determine whether they used the unit Facebook page and, if so, the nature and extent of their use. Findings – The study found that the unit Facebook page was useful in achieving most learning objectives for the unit. This included enhancing students' knowledge and understanding of unit content, as well as their ability to critically analyse unit materials. Students also indicated that they found the Facebook page better than the university's central learning management system across a range of areas. It was particularly useful for facilitating unit-related discussions. Research limitations/implications – The survey results reported in this paper are based on a relatively small sample of students (n=67) from a first-year university justice unit. Future studies should seek to garner evidence from broader and larger samples that transcend specific unit populations. However, the findings of this study do indicate further support for the use of Facebook as a supplementary tool in university education. Originality/value – This study focuses on two aspects of social networking technologies that have not been previously researched and thus contributes to the growing literature on the uses and benefits of Facebook in tertiary education.
Abstract:
The detection and correction of defects remains among the most time-consuming and expensive aspects of software development. Extensive automated testing and code inspections may mitigate their effect, but some code fragments are necessarily more likely to be faulty than others, and automated identification of fault-prone modules helps to focus testing and inspections, thus limiting wasted effort and potentially improving detection rates. However, software metrics data are often extremely noisy, with enormous imbalances in the size of the positive and negative classes. In this work, we present a new approach to predictive modelling of fault proneness in software modules, introducing a new feature representation to overcome some of these issues. This rank sum representation offers improved, or at worst comparable, performance relative to earlier approaches on standard data sets, and readily allows the user to choose an appropriate trade-off between precision and recall to optimise inspection effort for different testing environments. The method is evaluated using the NASA Metrics Data Program (MDP) data sets, and performance is compared with existing studies based on the Support Vector Machine (SVM) and Naïve Bayes (NB) classifiers, and with our own comprehensive evaluation of these methods.
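The abstract does not spell out the rank sum representation, so the sketch below is only one plausible reading: replace each raw metric by its within-column rank across all modules, then sum the ranks per module to obtain an inspection-priority score (names and metric values are illustrative):

```python
import numpy as np
from scipy.stats import rankdata

def rank_sum_scores(metrics):
    """Rank each metric column across modules, then sum the ranks per module.
    High scores flag modules for earlier inspection; varying the score
    threshold trades precision against recall. A guess at the flavour of
    the representation, not the paper's exact construction."""
    ranks = np.apply_along_axis(rankdata, 0, metrics)
    return ranks.sum(axis=1)

metrics = np.array([[120, 4, 0.8],   # e.g. LOC, cyclomatic complexity, coupling
                    [300, 9, 0.5],
                    [ 80, 2, 0.1]])
scores = rank_sum_scores(metrics)
print(scores, np.argsort(-scores))   # inspect highest-scoring modules first
```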
Abstract:
A known limitation of the Probability Ranking Principle (PRP) is that it does not cater for dependence between documents. Recently, the Quantum Probability Ranking Principle (QPRP) has been proposed, which implicitly captures dependencies between documents through “quantum interference”. This paper explores whether this new ranking principle leads to improved performance for subtopic retrieval, where novelty and diversity are required. In a thorough empirical investigation, models based on the PRP, as well as other recently proposed ranking strategies for subtopic retrieval (i.e. Maximal Marginal Relevance (MMR) and Portfolio Theory (PT)), are compared against the QPRP. On the given task, it is shown that the QPRP outperforms these other ranking strategies. Unlike MMR and PT, the QPRP requires no parameter estimation or tuning, making it both simple and effective. This research demonstrates that the application of quantum theory to problems within information retrieval can lead to significant improvements.
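A minimal sketch of a QPRP-style greedy ranking follows; the interference phase is approximated here with a pairwise similarity matrix, a heuristic stand-in rather than the authors' estimator, and all numbers are illustrative:

```python
import math

def qprp_rank(probs, sim, k):
    """Greedy QPRP-style ranking: at each step pick the document maximising
    its relevance probability plus the quantum-interference terms with the
    documents already ranked (similar documents interfere destructively)."""
    ranked, remaining = [], list(range(len(probs)))
    for _ in range(min(k, len(probs))):
        def score(d):
            interference = sum(-2.0 * math.sqrt(probs[d] * probs[e]) * sim[d][e]
                               for e in ranked)
            return probs[d] + interference
        best = max(remaining, key=score)
        ranked.append(best)
        remaining.remove(best)
    return ranked

probs = [0.90, 0.85, 0.40]       # relevance probabilities
sim = [[1.0, 0.9, 0.1],          # pairwise document similarity
       [0.9, 1.0, 0.1],
       [0.1, 0.1, 1.0]]
print(qprp_rank(probs, sim, 3))  # [0, 2, 1]: the near-duplicate is demoted
```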
Abstract:
In this paper we define two models of users who require diversity in search results; these models are theoretically grounded in the notions of intrinsic and extrinsic diversity. We then examine Intent-Aware Expected Reciprocal Rank (ERR-IA), one of the official measures used to assess diversity in TREC 2011-12, with respect to the proposed user models. By analyzing ranking preferences as expressed by the user models and those estimated by ERR-IA, we investigate whether ERR-IA assesses document rankings according to the requirements of the diversity retrieval task expressed by the two models. Empirical results demonstrate that ERR-IA neglects query-intent coverage by attributing excessive importance to redundant relevant documents. ERR-IA's behavior is contrary to the user models, which require measures to first assess diversity through the coverage of intents, and then assess the redundancy of relevant intents. Furthermore, diversity should be considered separately from document relevance and the documents' positions in the ranking.
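ERR-IA itself is well defined in the literature as an intent-probability-weighted sum of per-intent Expected Reciprocal Rank scores. A compact sketch, with illustrative intent probabilities and binary relevance gains:

```python
def err(gains, g_max=1):
    """Expected Reciprocal Rank for a single intent; gains[r] is the graded
    relevance of the document at rank r + 1."""
    score, p_reach = 0.0, 1.0
    for r, g in enumerate(gains, start=1):
        p_stop = (2 ** g - 1) / (2 ** g_max)  # probability the user stops here
        score += p_reach * p_stop / r
        p_reach *= 1 - p_stop
    return score

def err_ia(intent_probs, gains_per_intent, g_max=1):
    """Intent-aware ERR: weight each intent's ERR by its probability."""
    return sum(p * err(g, g_max) for p, g in zip(intent_probs, gains_per_intent))

# Two intents (p = 0.7 and 0.3); each row gives per-intent gains for a
# three-document ranking. The second, redundant relevant document for
# intent 1 still raises the score, which is the behaviour criticised above.
print(err_ia([0.7, 0.3], [[1, 1, 0], [0, 0, 1]]))
```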
Abstract:
It has been called “the world’s worst recorded natural disaster” and “the largest earthquake in 40 years,” galvanizing the largest global relief effort in history. Those of us involved in the discipline and/or the practice of communications realized that it presented a unique case study from a number of perspectives. Both the media and the public became so enraptured and enmeshed in the story of the tsunami of December 26, 2004, bringing to the fore a geography and peoples too rarely considered prior to the tragedy, that we felt compelled to examine the phenomenon. The overwhelming significance of this volume comes from its combination of academic scholars and development practitioners in the field. Its poignancy is underscored by their wide-ranging perspectives, with 21 chapters representing some 14 different countries. Their realities provide not only credibility but also an unprecedented sensitivity to communication issues. Our approach here considers Tsunami 2004 from five communication perspectives: 1.) Interpersonal/intercultural; 2.) Mass media; 3.) Telecommunications; 4.) Ethics, philanthropy, and development communication; and 5.) Personal testimonies and observations. You will learn even more here about the theory and practice of disaster/crisis communication.
Abstract:
A great football novel is like a perfectly executed bicycle-kick goal, like players such as Argentine legends Diego Maradona and Lionel Messi; they come along once in a generation. Against the accumulated volume of non-fiction football literature (some people still call it soccer), which could fill and spill out of a World Cup Stadium, football novels are comparatively rare. That said, football or soccer fiction is a genre with a very real and important historical longevity...
Abstract:
Many websites offer customers the opportunity to rate items, and then use those ratings to generate item reputations, which other users can later draw on for decision-making purposes. The aggregated value of the ratings for an item represents that item's reputation. The accuracy of reputation scores is important, as they are used to rank items. Most aggregation methods do not consider the frequency of distinct ratings, nor have they been tested for accuracy over datasets of differing sparsity. In this work we propose a new aggregation method, which can be described as a weighted average whose weights are generated using the normal distribution. The evaluation results show that the proposed method outperforms state-of-the-art methods over datasets of differing sparsity.
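The abstract does not give the formula, so the following is only a plausible sketch of 'a weighted average, where weights are generated using the normal distribution' (assuming SciPy; not the authors' published method): sorted ratings are placed at evenly spaced quantiles and weighted by the standard normal density, so mid-distribution ratings count most.

```python
import numpy as np
from scipy.stats import norm

def normal_weighted_reputation(ratings):
    """Aggregate an item's ratings into a reputation score via a weighted
    average with normal-density weights. A plausible reading of the
    abstract, not the authors' exact formula."""
    r = np.sort(np.asarray(ratings, dtype=float))
    n = len(r)
    z = norm.ppf((np.arange(1, n + 1) - 0.5) / n)  # quantile positions
    w = norm.pdf(z)                                # normal-density weights
    return float((w * r).sum() / w.sum())

print(normal_weighted_reputation([5, 5, 4, 1, 5, 3]))  # tail ratings get lower weight
```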
Abstract:
Twitter is a very popular social network website that allows users to publish short posts called tweets. Users on Twitter can follow other users, called followees, and a user sees the posts of his followees on his Twitter profile home page. As the number of followees, and hence the number of tweets on the user's page, increases, an information overload problem arises. Twitter, like other social network websites, attempts to elevate the tweets a user is expected to be interested in, to increase overall user engagement; however, Twitter still ranks tweets in chronological order. The tweet ranking problem has been addressed in much recent research; a sub-problem is to rank the tweets of a single followee. In this paper we represent tweets using several features and propose a weighted version of the well-known Borda Count (BC) voting system to combine several ranked lists into one. A gradient descent method and a collaborative filtering method are employed to learn the optimal weights. We also employ the Baldwin voting system for blending features (or predictors). Finally, we use a greedy feature selection algorithm to select the best combination of features to ensure the best results.
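A minimal sketch of the weighted Borda Count fusion step follows; the weights would come from the gradient descent or collaborative filtering step described above, but are given directly here for illustration:

```python
def weighted_borda(rankings, weights):
    """Fuse several ranked lists with a weighted Borda Count: an item at
    position p in a list of length n earns (n - p) points, scaled by that
    list's weight; items are then sorted by total points."""
    scores = {}
    for ranking, w in zip(rankings, weights):
        n = len(ranking)
        for pos, item in enumerate(ranking):
            scores[item] = scores.get(item, 0.0) + w * (n - pos)
    return sorted(scores, key=scores.get, reverse=True)

# Three feature-based rankings of four tweets, with illustrative weights.
rankings = [["t1", "t2", "t3", "t4"],
            ["t2", "t1", "t4", "t3"],
            ["t3", "t2", "t1", "t4"]]
print(weighted_borda(rankings, [0.6, 0.3, 0.1]))  # ['t1', 't2', 't3', 't4']
```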