889 results for Text Editing


Relevance:

20.00%

Publisher:

Abstract:

This magazine, written by Melissa Giles, features three Brisbane-based media organisations: Radio 4RPH, Queensland Pride and 98.9FM. The PDF file on this website contains a text-only version of the magazine. Contact the author if you would like a copy of the text-only EPUB file or a copy of the full digital magazine with images. An audio version of the magazine is available at http://eprints.qut.edu.au/41729/


From the late sixteenth century, in response to the problem of how best to teach children to read, a variety of texts such as primers, spellers and readers were produced in England for vernacular instruction. This paper describes how these materials were used by teachers to develop first, a specific religious understanding according to the stricture of the time and second, a moral reading practice that provided the child with a guide to secular conduct. The analysis focuses on the use of these texts as a productive means for shaping the child-reader in the context of newly emerging educational spaces which fostered a particular, morally formative relation among teacher, child and text.


In ‘something 2.0’, a section of a Hollywood film is re-edited to include textual elements that appear as ‘Pinocchio-ish’ protrusions from the actors’ faces. As they sit in what appears to be an interview or therapy session, this re-editing and looping imposes a new fictionalized narrative upon the characters. The histrionic yet vague nature of the text, and its imperfect integration into the footage, can be read as both a comical imposition and a failed critical gesture, both speaking to the complications involved in the relationship of the artist and the fan as they engage with popular culture. The work was included in the group show 'Perfection', part of the Metro Arts Artistic Program 2008.


It is a big challenge to guarantee the quality of discovered relevance features in text documents for describing user preferences because of the large number of terms, patterns, and noise. Most existing popular text mining and classification methods have adopted term-based approaches. However, they have all suffered from the problems of polysemy and synonymy. Over the years, people have often held the hypothesis that pattern-based methods should perform better than term-based ones in describing user preferences, but many experiments do not support this hypothesis. This research presents a promising method, Relevance Feature Discovery (RFD), for solving this challenging issue. It discovers both positive and negative patterns in text documents as high-level features in order to accurately weight low-level features (terms) based on their specificity and their distributions in the high-level features. The thesis also introduces an adaptive model (called ARFD) to enhance the flexibility of using RFD in adaptive environments. ARFD automatically updates the system's knowledge based on a sliding window over new incoming feedback documents. It can efficiently decide which incoming documents can bring new knowledge into the system. Substantial experiments using the proposed models on Reuters Corpus Volume 1 and TREC topics show that the proposed models significantly outperform both the state-of-the-art term-based methods underpinned by Okapi BM25, Rocchio or Support Vector Machine and other pattern-based methods.
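The core weighting idea of the abstract above, scoring a term by how it is distributed across positive versus negative high-level patterns, can be sketched roughly as follows. This is a minimal illustration, not the authors' RFD implementation; the function name, the pattern representation as term sets, and the simple support-difference formula are all assumptions.

```python
from collections import Counter

def rfd_term_weights(pos_patterns, neg_patterns):
    """Weight low-level terms (a simplified RFD-style scheme):
    terms frequent in positive patterns but rare in negative ones
    receive high weight; the reverse yields negative weight."""
    pos_support = Counter(t for p in pos_patterns for t in p)
    neg_support = Counter(t for p in neg_patterns for t in p)
    weights = {}
    for term in set(pos_support) | set(neg_support):
        weights[term] = (pos_support[term] / len(pos_patterns)
                         - neg_support[term] / max(len(neg_patterns), 1))
    return weights

# Toy patterns: sets of terms mined from relevant / irrelevant documents.
pos = [{"text", "mining"}, {"text", "pattern"}]
neg = [{"noise", "pattern"}]
w = rfd_term_weights(pos, neg)
```

Here "text" scores highest because it appears in every positive pattern and no negative one, while "pattern" is penalised for appearing on both sides.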


Information mismatch and overload are two fundamental issues influencing the effectiveness of information filtering systems. Even though both term-based and pattern-based approaches have been proposed to address the issues, neither of these approaches alone can provide a satisfactory decision for determining the relevant information. This paper presents a novel two-stage decision model for solving the issues. The first stage is a novel rough analysis model to address the overload problem. The second stage is a pattern taxonomy mining model to address the mismatch problem. The experimental results on RCV1 and TREC filtering topics show that the proposed model significantly outperforms the state-of-the-art filtering systems.
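The two-stage decision described above can be pictured as a cheap coarse filter followed by a more precise pattern check. The sketch below is only an analogy for that pipeline, under assumed representations (documents as term lists, patterns as term sets); it is not the paper's rough analysis or pattern taxonomy model.

```python
def two_stage_filter(docs, term_scores, patterns, threshold):
    """Stage 1: a rough term-based score discards clearly irrelevant
    documents (addressing overload). Stage 2: pattern matching confirms
    relevance on the survivors (addressing mismatch)."""
    stage1 = [d for d in docs
              if sum(term_scores.get(t, 0) for t in d) >= threshold]
    return [d for d in stage1
            if any(p <= set(d) for p in patterns)]

kept = two_stage_filter(
    [["text", "filter"], ["spam"], ["text", "mining", "filter"]],
    {"text": 1, "filter": 1, "mining": 1},
    [{"text", "filter"}],
    threshold=2)
```

The point of the split is that the expensive pattern test only runs on documents that already passed the cheap score.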


“Turtle Twilight” is a two-screen video installation. Paragraphs of text adapted from a travel blog type across the left-hand screen. A computer-generated image of a tropical sunset is slowly animated on the right-hand screen. The two screens are accompanied by an atmospheric stock music track. This work examines how we construct, represent and deploy ‘nature’ in our contemporary lives. It mixes cinematic codes with image, text and sound gleaned from online sources. By extending on Nicolas Bourriaud’s understanding of ‘postproduction’ and the creative and critical strategies of ‘editing’, it questions the relationship between contemporary screen culture, nature, desire and contemplation.


In this video, text sourced from dream description websites is combined into a narrative. The words floating against an animated cloud background are set to a stock music track. This work examines the nature of consciousness and identity in a contemporary context. It mixes the languages of dream description and cinematic narrative. By extending on some of Nicolas Bourriaud’s ideas around “postproduction” and the creative and critical strategies of ‘editing’, this work draws attention to the ways popular culture and private anxieties continually mix together in our experiences of lived and imagined realities.


A rule-based approach for classifying previously identified medical concepts in clinical free text into an assertion category is presented. There are six different categories of assertions for the task: Present, Absent, Possible, Conditional, Hypothetical and Not associated with the patient. The assertion classification algorithms were largely based on extending the popular NegEx and Context algorithms. In addition, the clinical terminology SNOMED CT and other publicly available dictionaries were used to classify assertions that did not fit the NegEx/Context model. The data for this task includes discharge summaries from Partners HealthCare and from Beth Israel Deaconess Medical Centre, as well as discharge summaries and progress notes from University of Pittsburgh Medical Centre. The set consists of 349 discharge reports, each with pairs of ground truth concept and assertion files for system development, and 477 reports for evaluation. The system’s performance on the evaluation data set was 0.83, 0.83 and 0.83 for recall, precision and F1-measure, respectively. Although the rule-based system shows promise, further improvements can be made by incorporating machine learning approaches.
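The NegEx family of algorithms that the abstract builds on works by scanning a window of tokens around a concept for trigger phrases. A heavily simplified sketch of that idea, with an illustrative trigger list and window size rather than the authors' full rule set, and covering only the Present/Absent distinction:

```python
# Tiny illustrative negation trigger list (NegEx uses a much larger one).
NEG_TRIGGERS = {"no", "denies", "without", "not"}

def assert_concept(tokens, concept_index, window=5):
    """Label a concept 'Absent' when a negation trigger occurs within
    `window` tokens before it, else 'Present' (NegEx-style, simplified)."""
    start = max(0, concept_index - window)
    if NEG_TRIGGERS & set(tokens[start:concept_index]):
        return "Absent"
    return "Present"

label = assert_concept("patient denies chest pain".split(), 2)
```

The real algorithms also handle post-concept triggers, pseudo-negations and scope termination terms, which is where the dictionary extensions mentioned above come in.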


Process modeling is an important design practice in organizational improvement projects. In this paper, we examine the design of business process diagrams in contexts where novice analysts only have basic design tools such as paper and pencils available, and little to no understanding of formalized modeling approaches. Based on a quasi-experimental study with 89 BPM students, we identify five distinct process design archetypes ranging from textual to hybrid and graphical representation forms. We examine the quality of the designs and identify which representation formats enable an analyst to articulate business rules, states, events, activities, temporal and geospatial information in a process model. We found that the quality of the process designs decreases with the increased use of graphics and that hybrid designs featuring appropriate text labels and abstract graphical forms appear well-suited to describe business processes. We further examine how process design preferences predict formalized process modeling ability. Our research has implications for practical process design work in industry as well as for academic curricula on process design.


In the era of Web 2.0, huge volumes of consumer reviews are posted to the Internet every day. Manual approaches to detecting and analyzing fake reviews (i.e., spam) are not practical due to the problem of information overload. However, the design and development of automated methods of detecting fake reviews is a challenging research problem. The main reason is that fake reviews are specifically composed to mislead readers, so they may appear the same as legitimate reviews (i.e., ham). As a result, discriminatory features that would enable individual reviews to be classified as spam or ham may not be available. Guided by the design science research methodology, the main contribution of this study is the design and instantiation of novel computational models for detecting fake reviews. In particular, a novel text mining model is developed and integrated into a semantic language model for the detection of untruthful reviews. The models are then evaluated based on a real-world dataset collected from amazon.com. The results of our experiments confirm that the proposed models outperform other well-known baseline models in detecting fake reviews. To the best of our knowledge, the work discussed in this article represents the first successful attempt to apply text mining methods and semantic language models to the detection of fake consumer reviews. A managerial implication of our research is that firms can apply our design artifacts to monitor online consumer reviews to develop effective marketing or product design strategies based on genuine consumer feedback posted to the Internet.


It is a big challenge to acquire correct user profiles for personalized text classification since users may be unsure when describing their interests. Traditional approaches to user profiling adopt machine learning (ML) to automatically discover classification knowledge from explicit user feedback describing personal interests. However, the accuracy of ML-based methods cannot be significantly improved in many cases due to the term independence assumption and the uncertainties associated with them. This paper presents a novel relevance feedback approach for personalized text classification. It applies data mining to discover knowledge from relevant and non-relevant text, and constrains the specific knowledge with reasoning rules to eliminate conflicting information. We also developed a Dempster-Shafer (DS) approach as the means to utilise the specific knowledge to build high-quality data models for classification. The experimental results conducted on Reuters Corpus Volume 1 and TREC topics show that the proposed technique achieves encouraging performance compared with the state-of-the-art relevance feedback models.
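At the heart of any Dempster-Shafer approach is Dempster's rule of combination, which fuses two bodies of evidence while normalising out their conflict. The sketch below shows the standard rule on a toy two-element frame (relevant / non-relevant); the frame, mass values and function name are illustrative assumptions, not the paper's model.

```python
from itertools import product

def combine(m1, m2):
    """Dempster's rule of combination. Mass functions map frozenset
    focal elements to masses; mass assigned to empty intersections
    (conflict) is normalised away."""
    combined = {}
    conflict = 0.0
    for (a, x), (b, y) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + x * y
        else:
            conflict += x * y
    k = 1.0 - conflict  # assumes the sources are not totally conflicting
    return {s: v / k for s, v in combined.items()}

# Two evidence sources over the frame {"rel", "non"}; mass on the whole
# frame expresses uncertainty rather than support for either outcome.
m1 = {frozenset({"rel"}): 0.6, frozenset({"rel", "non"}): 0.4}
m2 = {frozenset({"rel"}): 0.5, frozenset({"rel", "non"}): 0.5}
m = combine(m1, m2)
```

Combining the two sources concentrates mass on {"rel"} (0.8) while the residual uncertainty shrinks to 0.2.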


In this video, a thumping house-music track is accompanied by lines of rotating text, which resemble computer screen-savers. The text is sourced from websites offering tips for dating and seducing potential lovers. This work engages with the language of online forums. It reworks text from online advice forums and mixes them with visual codes of computer graphics. By extending on some of Nicolas Bourriaud’s ideas around ‘postproduction’ and the creative and critical strategies of ‘editing’, it offers new speculative perspectives on the relationship between screen realities, desire and romance.


WHAT if you lost someone you loved? What if you had to let go for the sake of your own sanity? Lachlan Philpott's Colder and Dennis Kelly's Orphans, playing as part of La Boite's and Queensland Theatre Company's independents programs, are emotionally and textually dense theatrical works...


The development of text classification techniques has been largely promoted in the past decade due to the increasing availability and widespread use of digital documents. Usually, the performance of text classification relies on the quality of categories and the accuracy of classifiers learned from samples. When training samples are unavailable or categories are unqualified, text classification performance is degraded. In this paper, we propose an unsupervised multi-label text classification method to classify documents using a large set of categories stored in a world ontology. The approach has shown promising results when evaluated against typical text classification methods, using a real-world document collection and the ground truth encoded by human experts.
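Because the method above needs no training samples, its essence is matching document terms against category descriptions drawn from the ontology. A minimal sketch of that matching step, assuming categories are represented as term sets and using a simple overlap threshold (the real method's ontology and scoring are richer):

```python
def classify(doc_terms, ontology, min_overlap=2):
    """Assign every ontology category whose label terms overlap the
    document by at least `min_overlap` terms; multi-label, unsupervised,
    no training data required."""
    doc = set(doc_terms)
    return sorted(cat for cat, terms in ontology.items()
                  if len(doc & terms) >= min_overlap)

# Toy ontology: category -> characteristic terms.
labels = classify(["team", "goal", "stock"],
                  {"Sports": {"match", "team", "goal"},
                   "Finance": {"stock", "market", "bank"}})
```

A document can receive several labels at once; here only "Sports" clears the overlap threshold.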


It is a big challenge to clearly identify the boundary between positive and negative streams. Several attempts have used negative feedback to solve this challenge; however, there are two issues in using negative relevance feedback to improve the effectiveness of information filtering. The first is how to select constructive negative samples in order to reduce the space of negative documents. The second is how to decide which noisy extracted features should be updated based on the selected negative samples. This paper proposes a pattern mining based approach to select some offenders from the negative documents, where an offender can be used to reduce the side effects of noisy features. It also classifies extracted features (i.e., terms) into three categories: positive specific terms, general terms, and negative specific terms. In this way, multiple revising strategies can be used to update extracted features. An iterative learning algorithm is also proposed to implement this approach on RCV1, and substantial experiments show that the proposed approach achieves encouraging performance.
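The three-way split of terms described above can be illustrated by comparing a term's document frequency in the positive and negative streams. This is a hypothetical sketch, with an assumed threshold and frequency measure rather than the paper's actual revising strategies:

```python
def categorise_terms(terms, pos_docs, neg_docs, theta=0.5):
    """Split terms into positive specific, general and negative specific
    by the gap between their document frequencies in the two streams."""
    def df(term, docs):
        return sum(term in d for d in docs) / max(len(docs), 1)
    cats = {"positive": [], "general": [], "negative": []}
    for t in terms:
        p, n = df(t, pos_docs), df(t, neg_docs)
        if p - n > theta:
            cats["positive"].append(t)
        elif n - p > theta:
            cats["negative"].append(t)
        else:
            cats["general"].append(t)
    return cats

# Toy streams: documents as sets of terms.
cats = categorise_terms(["text", "rank", "spam"],
                        [{"text", "mining"}, {"text", "rank"}],
                        [{"spam", "rank"}])
```

Each category can then be revised with a different strategy, e.g. boosting positive specific terms while discounting negative specific ones.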