113 results for Text Editing
Abstract:
What if you lost someone you loved? What if you had to let go for the sake of your own sanity? Lachlan Philpott's Colder and Dennis Kelly's Orphans, playing as part of La Boite's and Queensland Theatre Company's independents programs, are emotionally and textually dense theatrical works...
Abstract:
The development of text classification techniques has been driven largely by the increasing availability and widespread use of digital documents over the past decade. Usually, the performance of text classification relies on the quality of the categories and the accuracy of classifiers learned from samples. When training samples are unavailable or the categories are of poor quality, classification performance degrades. In this paper, we propose an unsupervised multi-label text classification method that classifies documents using a large set of categories stored in a world ontology. The approach has been promisingly evaluated by comparing it with typical text classification methods, using a real-world document collection and ground truth encoded by human experts.
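As a rough illustration of labelling documents against a fixed category set without training samples, the sketch below matches documents to category descriptions by TF-IDF cosine similarity. It assumes scikit-learn is available; the category names, descriptions, documents, and threshold are invented for illustration and are not the paper's ontology or method.

```python
# Minimal sketch: assign multiple labels by similarity to category descriptions,
# with no labelled training data. All data below is illustrative.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

categories = {
    "Machine learning": "algorithms models training classification learning",
    "Information retrieval": "search query ranking documents retrieval index",
    "Ontologies": "concepts relations knowledge representation taxonomy",
}
documents = [
    "We train a classifier to rank documents returned by a search query.",
]

vectorizer = TfidfVectorizer()
# Fit on category descriptions and documents together so they share a vocabulary.
matrix = vectorizer.fit_transform(list(categories.values()) + documents)
cat_vecs, doc_vecs = matrix[: len(categories)], matrix[len(categories):]

similarity = cosine_similarity(doc_vecs, cat_vecs)
threshold = 0.1  # illustrative cut-off for assigning a label
for doc, sims in zip(documents, similarity):
    labels = [name for name, s in zip(categories, sims) if s >= threshold]
    print(doc, "->", labels)
```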
Abstract:
It is a big challenge to clearly identify the boundary between positive and negative streams. Several attempts have used negative feedback to address this challenge; however, there are two issues in using negative relevance feedback to improve the effectiveness of information filtering. The first is how to select constructive negative samples in order to reduce the space of negative documents. The second is how to decide which noisy extracted features should be updated based on the selected negative samples. This paper proposes a pattern-mining-based approach to select offenders from the negative documents, where an offender can be used to reduce the side effects of noisy features. It also classifies extracted features (i.e., terms) into three categories: positive specific terms, general terms, and negative specific terms. In this way, multiple revising strategies can be used to update the extracted features. An iterative learning algorithm is also proposed to implement this approach on RCV1, and extensive experiments show that the proposed approach achieves encouraging performance.
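The sketch below illustrates, under simplified assumptions, how extracted terms might be split into positive specific, general, and negative specific groups according to where they occur. The document sets and the frequency rule are invented for illustration; the paper's actual selection of offenders and its revising strategies are pattern-based and more involved.

```python
# Toy sketch: group terms by occurrence in positive documents versus selected
# negative samples (offenders). The rule below is illustrative only.
positive_docs = [
    {"stream", "filtering", "relevance"},
    {"filtering", "feedback", "relevance"},
]
negative_offenders = [
    {"stream", "noise", "spam"},
    {"noise", "spam", "feedback"},
]

def doc_freq(term, docs):
    """Fraction of documents in `docs` that contain `term`."""
    return sum(term in d for d in docs) / len(docs)

terms = set().union(*positive_docs, *negative_offenders)
groups = {"positive_specific": [], "general": [], "negative_specific": []}
for t in sorted(terms):
    pos, neg = doc_freq(t, positive_docs), doc_freq(t, negative_offenders)
    if pos > 0 and neg == 0:
        groups["positive_specific"].append(t)
    elif neg > 0 and pos == 0:
        groups["negative_specific"].append(t)
    else:
        groups["general"].append(t)

print(groups)
```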
Abstract:
This paper examines the effects of an eco-driving message on driver distraction. Two in-vehicle distracter tasks were compared with an eco-driving task and a baseline task in an advanced driving simulator. N = 22 subjects were asked to perform an eco-driving task, a CD-changing task, and a navigation task while engaged in critical manoeuvres, during which they were expected to respond to a peripheral detection task (PDT), with a total duration of 3.5 h. The study involved two sessions over two consecutive days. The results show that drivers’ mental workload is significantly higher during the navigation and CD-changing tasks than in the two other scenarios. The eco-driving workload is, however, still marginally significant (p ∼ .05) across different manoeuvres. Similarly, the event detection results show that drivers miss significantly more events in the navigation and CD-changing scenarios than in both the baseline and eco-driving scenarios. Analysis of the practice effect shows that the baseline and navigation scenarios impose significantly less demand on the second day, and drivers detect significantly more events on the second day in all scenarios. The authors conclude that even reading a simple message while driving could lead to missing an important event, especially when executing critical manoeuvres. However, there is some evidence of a practice effect, which suggests that future research should focus on performance with habitual rather than novel tasks. It is recommended that text-based eco-driving messages of the kind used in this study not be delivered to drivers online while the vehicle is in motion.
Abstract:
A contentious issue in the field of destination marketing has been the recent tendency of some authors to refer to destination marketing organisations (DMOs) as destination management organisations. This nomenclature implies control over destination resources, a level of influence that in reality is held by few DMOs. This lack of control over the destination ‘amalgam’ is acknowledged by a number of the contributors, including the editors and the discussion on destination competitiveness by J.R. Brent Ritchie and Geoffrey Crouch, and is perhaps best summed up by Alan Fyall in the concluding chapter: “...unless all elements are owned by the same body, then the ability to control and influence the direction, quality and development of the destination pose very real challenges” (p. 343). The title of the text acknowledges both marketing and management, in relation to theories and applications. While there are insightful propositions about the ideals of destination management, readers will find a lack of coverage of destination management in practice by DMOs. This represents fertile ground for future research.
Abstract:
Much has been written on Michel Foucault’s reluctance to clearly delineate a research method, particularly with respect to genealogy (Harwood 2000; Meadmore, Hatcher, & McWilliam 2000; Tamboukou 1999). Foucault (1994, p. 288) himself disliked prescription, stating, “I take care not to dictate how things should be”, and wrote provocatively to disrupt equilibrium and certainty, so that “all those who speak for others or to others” no longer know what to do. It is doubtful, however, that Foucault ever intended researchers to be stricken by that malaise to the point of being unwilling to make an intellectual commitment to methodological possibilities. Taking criticism of “Foucauldian” discourse analysis as a convenient point of departure to discuss the objectives of poststructural analyses of language, this paper develops what might be called a discursive analytic: a methodological plan for approaching the analysis of discourses through the location of statements that function with constitutive effects.
Abstract:
Our everyday environment is full of text but this rich source of information remains largely inaccessible to mobile robots. In this paper we describe an active text spotting system that uses a small number of wide-angle views to locate putative text in the environment and then foveates and zooms onto that text in order to improve the reliability of text recognition. We present extensive experimental results obtained with a pan/tilt/zoom camera and a ROS-based mobile robot operating in an indoor environment.
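The sketch below shows only the control flow such a system implies: detect putative text in a wide-angle frame, command the pan/tilt/zoom camera to foveate on it, then run recognition on the zoomed view. The helper functions are hypothetical stubs standing in for the detector, camera driver, and OCR; they are not the paper's ROS interfaces.

```python
# Schematic active text-spotting loop; every helper is a placeholder stub.
from dataclasses import dataclass

@dataclass
class Region:
    pan: float   # degrees to centre the region horizontally
    tilt: float  # degrees to centre the region vertically
    zoom: float  # zoom factor needed for legible text

def detect_text_regions(wide_angle_frame):
    """Placeholder text detector: returns putative text regions."""
    return [Region(pan=12.0, tilt=-3.5, zoom=4.0)]

def point_camera(pan, tilt, zoom):
    """Placeholder for the pan/tilt/zoom camera command."""
    print(f"PTZ -> pan={pan}, tilt={tilt}, zoom={zoom}")

def recognise_text(foveated_frame):
    """Placeholder OCR call on the zoomed-in view."""
    return "EXIT"

def spot_text(wide_angle_frame):
    results = []
    for region in detect_text_regions(wide_angle_frame):
        point_camera(region.pan, region.tilt, region.zoom)
        results.append(recognise_text(None))  # frame grab omitted in this stub
    return results

print(spot_text(wide_angle_frame=None))
```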
Abstract:
Information retrieval (IR) by clinicians in the healthcare setting is critical for informing clinical decision-making. However, a large part of this information exists as free text, which inhibits clinical decision support and effective healthcare services. This makes meaningful use of clinical free text in electronic health records (EHRs) for patient care a difficult task. Within the context of IR, given a repository of free-text clinical reports, one might want to retrieve and analyse data for patients who have a known clinical finding.
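As a bare-bones illustration of why free text makes such retrieval hard, the sketch below matches clinical reports against a finding by simple string search. The report texts and query are invented; real systems must also handle negation, abbreviations, and synonyms, which is exactly where naive matching breaks down.

```python
# Naive free-text retrieval over clinical reports; data is illustrative only.
reports = {
    "patient_001": "CT chest shows a small pleural effusion on the right.",
    "patient_002": "No evidence of pleural effusion. Lungs are clear.",
    "patient_003": "Follow-up for hypertension; medication adjusted.",
}

def retrieve(finding, reports):
    """Return patient ids whose report mentions the finding (case-insensitive)."""
    finding = finding.lower()
    return [pid for pid, text in reports.items() if finding in text.lower()]

print(retrieve("pleural effusion", reports))
# Note: this also returns patient_002, whose report negates the finding --
# one reason clinical free text inhibits straightforward decision support.
```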
Abstract:
Tax law and policy is a vital part of Australian society. Australian society insists that the Federal Government provide extensive public programs, such as health services, education, social security, foreign aid, legal infrastructure, regulation, police services, national defence and funding for sports development. These programs are costly to provide and are funded by taxation. The aim of this book is to introduce and explain the principles of tax law and tax policy in plain English. The book contains detailed commentary on tax principles together with extracts from cases and materials that illustrate the application of the principles. The book considers tax policy and the economic and social aspects of tax law. While tax students must develop technical competence in tax law, given the speed with which changes are made to the technical details of tax law, it is also important to grasp tax principles and policy to understand why tax law has changed or why it should change. The chapters are structured to direct readers to the key provisions of the tax law. Each case is introduced by an explanation of the facts, followed by the taxpayer’s arguments, the Commissioner’s assertions and the decision of the Administrative Appeals Tribunal or a court. The commentary guides readers through the issues considered in the judgments. The book contains extracts from: articles; materials dealing with tax policy; and the Commissioner’s rulings. The book also has references for further reading and medium-neutral citations (Internet citations) for cases decided since 1998.
Abstract:
With the overwhelming increase in the amount of text on the web, it is almost impossible for people to keep abreast of up-to-date information. Text mining is a process by which interesting information is derived from text through the discovery of patterns and trends. Text mining algorithms are used to guarantee the quality of extracted knowledge. However, the patterns extracted by text or data mining algorithms are often noisy and inconsistent. This raises several challenges: how to understand these patterns, whether the model that has been used is suitable, and whether all the patterns that have been extracted are relevant. A further question is how to assign a correct weight to the extracted knowledge. To address these issues, this paper presents a text post-processing method that uses a pattern co-occurrence matrix to find the relations between extracted patterns in order to reduce noisy patterns. The main objective of this paper is not only to reduce the number of closed sequential patterns but also to improve the performance of pattern mining. The experimental results on the Reuters Corpus Volume 1 data collection and TREC filtering topics show that the proposed method is promising.
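The sketch below gives a toy version of a pattern co-occurrence matrix: count how often pairs of extracted patterns appear in the same document and prune patterns that never co-occur with another pattern. The patterns, documents, and pruning rule are invented for illustration and are not the paper's method or parameters.

```python
# Toy pattern co-occurrence matrix used to prune likely-noisy patterns.
import numpy as np

patterns = ["interest rate", "stock market", "rate rise", "lottery win"]
documents = [
    "the central bank signalled an interest rate rise and the stock market fell",
    "stock market gains follow the interest rate decision",
    "a lottery win story unrelated to finance",
]

# contains[i][j] = 1 if pattern j occurs in document i
contains = np.array([[int(p in d) for p in patterns] for d in documents])

# Co-occurrence matrix: number of documents containing both pattern i and j.
cooc = contains.T @ contains
np.fill_diagonal(cooc, 0)

# Keep patterns that co-occur with at least one other pattern in some document.
kept = [p for p, row in zip(patterns, cooc) if row.sum() > 0]
print(kept)  # "lottery win" is pruned as an isolated (likely noisy) pattern
```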
Abstract:
Internet services are an important part of daily activities for most of us. These services come with sophisticated authentication requirements that may not be handled well by average Internet users. The management of secure passwords, for example, creates extra overhead that is often neglected for usability reasons. Furthermore, password-based approaches apply only to initial logins and do not protect against unlocked-workstation attacks. In this paper, we provide a non-intrusive identity verification scheme based on behavioural biometrics, in which keystroke dynamics on free text is used continuously to verify the identity of a user in real time. We improve existing keystroke-dynamics-based verification schemes in four respects. First, we improve scalability by using a constant number of users, instead of the whole user space, to verify the identity of the target user. Second, we provide an adaptive user model that enables our solution to take changes in user behaviour into account in the verification decision. Third, we identify a new distance measure that enables us to verify the identity of a user with shorter text. Fourth, we decrease the number of false results. Our solution is evaluated on a data set collected from users while they interacted with their mailboxes during their daily activities.
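The sketch below illustrates the general shape of free-text keystroke verification: build a profile of mean digraph (key-pair) latencies from enrolment data and score a new session with a distance over shared digraphs. The distance used here is a generic mean relative difference, not the new measure proposed in the paper, and the timing values are invented.

```python
# Illustrative free-text keystroke-dynamics comparison; data and distance are
# generic examples, not the paper's scheme.
def digraph_means(samples):
    """samples: list of (digraph, latency_ms) pairs -> mean latency per digraph."""
    totals, counts = {}, {}
    for digraph, latency in samples:
        totals[digraph] = totals.get(digraph, 0.0) + latency
        counts[digraph] = counts.get(digraph, 0) + 1
    return {d: totals[d] / counts[d] for d in totals}

def distance(profile, sample):
    """Mean relative difference over digraphs shared by profile and sample."""
    shared = profile.keys() & sample.keys()
    if not shared:
        return float("inf")
    return sum(abs(profile[d] - sample[d]) / profile[d] for d in shared) / len(shared)

enrolment = [("th", 110), ("he", 95), ("in", 130), ("th", 120), ("er", 105)]
session = [("th", 118), ("he", 99), ("er", 140)]

profile = digraph_means(enrolment)
sample = digraph_means(session)
print(distance(profile, sample))  # lower values indicate behaviour closer to the profile
```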
Abstract:
A big challenge for text classification is the noisiness of text data, which lowers classification quality. Many classification processes can be divided into two sequential steps: scoring and threshold setting (thresholding). To deal with the noisy-data problem, it is therefore important to describe positive features effectively during scoring and to set a suitable threshold. Most existing text classifiers do not concentrate on these two tasks. In this paper, we propose a novel text classifier with pattern-based scoring that describes positive features effectively, followed by threshold setting. The thresholding is based on the scores of the training set, which makes it simple to apply to other scoring methods. Experiments show that our pattern-based classifier is promising.
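The sketch below illustrates the two-step scheme the abstract describes, under invented patterns and weights: score a document by the positive patterns it contains, then accept it if the score clears a threshold derived from training-set scores. It is not the paper's pattern-based scoring function or thresholding rule.

```python
# Toy scoring-then-thresholding classifier; patterns, weights, and the
# training-minimum threshold are illustrative only.
positive_patterns = {"machine learning": 2.0, "text mining": 1.5, "classifier": 1.0}

def score(document):
    """Sum the weights of the positive patterns found in the document."""
    text = document.lower()
    return sum(w for pattern, w in positive_patterns.items() if pattern in text)

training_positives = [
    "a text mining classifier for news articles",
    "machine learning methods for filtering",
]
# Threshold derived from training-set scores: here the minimum positive score.
threshold = min(score(d) for d in training_positives)

test_doc = "we present a classifier built with machine learning"
print(score(test_doc), ">=", threshold, "->", score(test_doc) >= threshold)
```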