219 results for Automatic writing
Abstract:
Horror and redemption in Holocaust writing for young adults: Markus Zusak’s The Book Thief and John Boyne’s The Boy in the Striped Pyjamas. While it has long been thought that the Holocaust is not an appropriate subject for young audiences, from The Diary of Anne Frank onwards it has always been part of their reading matter. Never, however, has there been so much interest as in the recent best-selling publications by Zusak and Boyne (the latter of which has been made into a film). This chapter examines the politics of crafting stories for young people about the unspeakable events of the recent past, who has the right to ‘speak for’ the victims, and whether some genres (for example, fairy stories or fabulism) work best, given the horrific nature of the subject matter.
Abstract:
A large number of methods have been published that aim to evaluate various components of multi-view geometry systems. Most of these have focused on the feature extraction, description and matching stages (the visual front end), since geometry computation can be evaluated through simulation. Many data sets are constrained to small-scale or planar scenes that are not challenging to new algorithms, or require special equipment. This paper presents a method for automatically generating geometry ground truth and challenging test cases from high spatio-temporal resolution video. The objective of the system is to enable data collection at any physical scale, in any location and in various parts of the electromagnetic spectrum. The data generation process consists of collecting high-resolution video, computing an accurate sparse 3D reconstruction, culling and downsampling video frames, and selecting test cases. The evaluation process consists of applying a test two-view geometry method to every test case and comparing the results to the ground truth. This system facilitates the evaluation of the whole geometry computation process, or any part thereof, against data compatible with a realistic application. A collection of example data sets and evaluations is included to demonstrate the range of applications of the proposed system.
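The comparison step can be illustrated with a minimal sketch (not the authors’ implementation): for two-view geometry, standard error metrics are the geodesic angle between the estimated and ground-truth rotations, and the angle between the translation directions (translation is only recoverable up to scale).

```python
import math

def rotation_error_deg(R_est, R_gt):
    """Geodesic distance (degrees) between two 3x3 rotation matrices.

    Uses trace(R_est * R_gt^T) = 1 + 2*cos(theta).
    """
    tr = sum(R_est[i][j] * R_gt[i][j] for i in range(3) for j in range(3))
    c = max(-1.0, min(1.0, (tr - 1.0) / 2.0))
    return math.degrees(math.acos(c))

def translation_angle_deg(t_est, t_gt):
    """Angle (degrees) between estimated and ground-truth translation directions."""
    dot = sum(a * b for a, b in zip(t_est, t_gt))
    norm = math.sqrt(sum(a * a for a in t_est)) * math.sqrt(sum(b * b for b in t_gt))
    c = max(-1.0, min(1.0, dot / norm))
    return math.degrees(math.acos(c))
```

Applied to every automatically generated test case, such per-case errors can be aggregated into the kind of evaluation the paper describes.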
Abstract:
This paper discusses the teaching of writing within the competing and often contradictory spaces of high-stakes testing and the practices and priorities around writing pedagogy in diverse school communities. It uses socio-spatial theory to examine the real-and-imagined spaces (Soja, 1996) that influence and are influenced by teachers’ pedagogical priorities for writing in two linguistically diverse elementary school case studies. Methods of critical discourse analysis are used to examine rich data sets to make visible the discourses and power relations at play in the case schools. Findings show that when teachers’ practices focus on the teaching of structure and skills alongside identity building and voice, students with diverse linguistic backgrounds can produce dramatic, authoritative and resonant texts. The paper argues that “thirdspaces” can be forged that both attend to accountability requirements, yet also give the necessary attention to more complex aspects of writing necessary for students from diverse and multilingual backgrounds to invest in writing as a creative and critical form of communication for participation in society and the knowledge economy.
Abstract:
This work aims at developing a planetary rover capable of acting as an assistant astrobiologist: making a preliminary analysis of the collected visual images that will help to make better use of the scientists’ time by pointing out the most interesting pieces of data. This paper focuses on the problem of detecting and recognising particular types of stromatolites. Inspired by the processes actual astrobiologists go through in the field when identifying stromatolites, the processes we investigate focus on recognising characteristics associated with biogenicity. The extraction of these characteristics is based on the analysis of geometrical structure, enhanced by passing the images of stromatolites through an edge-detection filter and computing their Fourier transform, which reveals typical spatial frequency patterns. The proposed analysis is performed on both simulated images of stromatolite structures and images of real stromatolites taken in the field by astrobiologists.
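A minimal one-dimensional sketch of that filter-then-transform idea (an illustration only, not the paper’s pipeline): a gradient filter highlights layer boundaries, and a discrete Fourier transform of the result exposes the dominant spatial frequency of periodic laminations.

```python
import cmath
import math

def edge_filter(signal):
    # central-difference gradient: a 1D stand-in for an image edge detector
    return [signal[i + 1] - signal[i - 1] for i in range(1, len(signal) - 1)]

def dft_magnitudes(x):
    # magnitudes of the discrete Fourier transform; peaks mark dominant
    # spatial frequencies, such as regularly spaced laminations
    N = len(x)
    return [abs(sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N)
                    for n in range(N)))
            for k in range(N)]
```

A strictly periodic intensity profile produces a sharp peak at its fundamental frequency bin, which is the kind of signature the biogenicity analysis looks for.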
Abstract:
Camera-laser calibration is necessary for many robotics and computer vision applications. However, existing calibration toolboxes still require laborious effort from the operator in order to achieve reliable and accurate results. This paper proposes algorithms that augment two existing trusted calibration methods with automatic extraction of the calibration object from the sensor data. The result is a complete procedure that allows for automatic camera-laser calibration. The first stage of the procedure is automatic camera calibration, which is useful in its own right for many applications. The chessboard extraction algorithm it provides is shown to outperform openly available techniques. The second stage completes the procedure by providing automatic camera-laser calibration. The procedure has been verified by extensive experimental tests, with the proposed algorithms providing a major reduction in the time required from an operator in comparison to manual methods.
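Camera-laser extrinsic calibration ultimately seeks the rigid transform (R, t) that maps laser points into the camera frame. A minimal sketch of the projection model being fitted (illustrative only; the pinhole intrinsics fx, fy, cx, cy are assumed parameter names, not from the paper):

```python
def project_laser_point(p, R, t, fx, fy, cx, cy):
    """Map a 3D point p from the laser frame to camera pixel coordinates.

    R (3x3) and t (3-vector) are the extrinsics being calibrated;
    fx, fy, cx, cy are pinhole camera intrinsics.
    """
    # rigid transform into the camera frame
    X = [sum(R[i][j] * p[j] for j in range(3)) + t[i] for i in range(3)]
    if X[2] <= 0:
        raise ValueError("point is behind the camera")
    # pinhole projection
    return fx * X[0] / X[2] + cx, fy * X[1] / X[2] + cy
```

Calibration then amounts to choosing R and t so that projected laser returns from the automatically extracted chessboard agree with its detected image corners.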
Abstract:
The teaching of writing, particularly in the middle years of schooling, is shaped by converging, and at times contradictory, pedagogical spaces. Perceptions about the way in which writing should be taught are clearly affected by standardised testing regimes in Australia. That is, much writing is taught as a genre process, yet results on standardised tests such as the National Assessment Program in Literacy and Numeracy (NAPLAN) show that the writing component consistently receives the lowest scores (ACARA, 2013). Research shows that creative and individualised approaches are necessary for quality writing (Grainger, Goouch & Lambirth, 2005). This paper investigates the writing practices of students in years 5 to 7 in two culturally and linguistically diverse schools. It shows that the writing practices of these students are greatly influenced by teachers’ perceptions about what is required by external testing bodies such as the Australian Curriculum, Assessment and Reporting Authority (ACARA). The paper then highlights how socio-spatial theory (Lefebvre, 1991) can be applied to explain these practices and offers the notion of a more productive ‘thirdspace’ (Soja, 1996) for improvement in the teaching of writing.
Abstract:
The production of culture is today a matter of ‘user generated content’, and young people are vital participants as ‘prosumers’, i.e. both producers and consumers, of cultural products. Among other things, they are busy creating fan works (stories, pictures, films) based on already published material. Using the genre of fan fiction as a point of departure, this article explores the drivers behind net communities organised around fan culture and argues that fan fiction sites can in many respects be regarded as informal learning settings. By turning to the rhetorical principle of imitatio, the article shows how, in the collective interactive processes between readers and writers, such fans develop literacies and construct gendered identities.
Abstract:
A long query provides more useful hints for searching relevant documents, but it is likely to introduce noise that harms retrieval performance. To mitigate this adverse effect, it is important to reduce noisy terms and to introduce and boost additional relevant terms. This paper presents a comprehensive framework, called the Aspect Hidden Markov Model (AHMM), which integrates query reduction and expansion for retrieval with long queries. It optimizes the probability distribution of query terms by utilizing intra-query term dependencies as well as the relationships between query terms and words observed in relevance feedback documents. Empirical evaluation on three large-scale TREC collections demonstrates that our approach, which is automatic, achieves significant improvements over various strong baselines, and also reaches performance comparable to a state-of-the-art method based on users’ interactive query term reduction and expansion.
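As a generic illustration of the hidden Markov machinery underlying such a model (a textbook forward pass, not the AHMM itself), the likelihood of an observed term sequence under hidden aspect states can be computed as:

```python
def forward_likelihood(obs, states, start_p, trans_p, emit_p):
    """Standard HMM forward algorithm: P(obs), summed over hidden state paths.

    start_p[s]  : initial probability of hidden state s
    trans_p[r][s]: probability of moving from state r to state s
    emit_p[s][o] : probability that state s emits observation o
    """
    alpha = {s: start_p[s] * emit_p[s][obs[0]] for s in states}
    for o in obs[1:]:
        alpha = {s: emit_p[s][o] * sum(alpha[r] * trans_p[r][s] for r in states)
                 for s in states}
    return sum(alpha.values())
```

In an aspect model, the hidden states would correspond to query aspects and the emissions to query terms and feedback-document words; the same dynamic programming recursion applies.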
Abstract:
This is a discussion of the journal article: "Constructing summary statistics for approximate Bayesian computation: semi-automatic approximate Bayesian computation". The article and discussion have appeared in the Journal of the Royal Statistical Society: Series B (Statistical Methodology).
Abstract:
In the context of modern western psychologised, techno-social hybrid realities, where individuals are incited constantly to work on themselves and perform their self-development in public, the use of online social networking sites (SNSs) can be conceptualised as what Foucault has described as a ‘technique of self’. This article explores examples of status updates on Facebook to reveal that writing on Facebook is a tool for self-formation with historical roots. Exploring examples of self-writing from the past, and considering some of the continuities and discontinuities between these age-old practices and their modern translations, provides a non-technologically deterministic and historically aware way of thinking about the use of new media technologies in modern societies that understands them to be more than mere tools for communication.
Abstract:
In the first Modern Language Association newsletter for 2006, renowned poetry critic and MLA President, Marjorie Perloff, remarked on the growing ascendancy of Creative Writing within English Studies in North America. In her column, Perloff notes that "[i]n studying the English Job Information List (JIL) so as to advise my own students and others I know currently on the market, I noticed what struck me as a curious trend: there are, in 2005, almost three times as many positions in creative writing as in the study of twentieth-century literature" (3). The dominance of Creative Writing in the English Studies job list in turn reflects the growing student demand for undergraduate and postgraduate degrees in the field—over the past 20 years, BA and MA degrees in Creative Writing in North American tertiary institutions have quadrupled (3)...
Abstract:
We present an approach to automatically de-identify health records. In our approach, personal health information is identified using a Conditional Random Fields machine learning classifier, a large set of linguistic and lexical features, and pattern matching techniques. Identified personal information is then removed from the reports. The de-identification of personal health information is fundamental for the sharing and secondary use of electronic health records, for example for data mining and disease monitoring. The effectiveness of our approach is first evaluated on the 2007 i2b2 Shared Task dataset, a widely adopted dataset for evaluating de-identification techniques. Subsequently, we investigate the robustness of the approach to limited training data; we study its effectiveness on different types and quality of data by evaluating the approach on scanned pathology reports from an Australian institution. This data contains optical character recognition errors, as well as linguistic conventions that differ from those contained in the i2b2 dataset, for example different date formats. The findings suggest that our approach performs comparably to the best approach from the 2007 i2b2 Shared Task; in addition, the approach is found to be robust to variations in training size, data type and quality in the presence of sufficient training data.
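The pattern-matching component of such a system can be sketched as follows (illustrative regular expressions only; the patterns and replacement tags are invented for this example, and the full approach additionally relies on a trained Conditional Random Fields classifier with linguistic and lexical features):

```python
import re

# Toy patterns for common protected health information (PHI) categories.
PATTERNS = [
    (re.compile(r"\b\d{1,2}[/-]\d{1,2}[/-]\d{2,4}\b"), "[DATE]"),
    (re.compile(r"\b(?:Dr|Mr|Mrs|Ms)\.?\s+[A-Z][a-z]+\b"), "[NAME]"),
    (re.compile(r"\b\d{10}\b"), "[PHONE]"),
]

def deidentify(text):
    """Replace matched PHI spans with category placeholders."""
    for pattern, tag in PATTERNS:
        text = pattern.sub(tag, text)
    return text
```

Replacing spans with category placeholders, rather than deleting them outright, preserves the report structure that downstream data mining depends on.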
Abstract:
Objective To develop and evaluate machine learning techniques that identify limb fractures and other abnormalities (e.g. dislocations) from radiology reports. Materials and Methods 99 free-text reports of limb radiology examinations were acquired from an Australian public hospital. Two clinicians were employed to identify fractures and abnormalities from the reports; a third senior clinician resolved disagreements. These assessors found that, of the 99 reports, 48 referred to fractures or abnormalities of limb structures. Automated methods were then used to extract features from these reports that could be useful for their automatic classification. The Naive Bayes classification algorithm and two implementations of the support vector machine algorithm were formally evaluated using cross-validation over the 99 reports. Results Results show that the Naive Bayes classifier accurately identifies fractures and other abnormalities from the radiology reports. These results were achieved when extracting stemmed token bigram and negation features, as well as using these features in combination with SNOMED CT concepts related to abnormalities and disorders. The latter feature has not been used in previous works that attempted to classify free-text radiology reports. Discussion Automated classification methods have proven effective at identifying fractures and other abnormalities from radiology reports (F-measure up to 92.31%). Key to the success of these techniques are features such as stemmed token bigrams, negations, and SNOMED CT concepts associated with morphologic abnormalities and disorders. Conclusion This investigation shows early promising results, and future work will further validate and strengthen the proposed approaches.
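The bigram-feature Naive Bayes idea can be sketched in a few lines (a toy illustration with invented example reports, not the evaluated system, which also used stemming, negation features and SNOMED CT concepts):

```python
import math
from collections import Counter, defaultdict

def bigrams(tokens):
    # token bigram features, e.g. ("fracture", "of")
    return list(zip(tokens, tokens[1:]))

def train_nb(docs):
    """docs: list of (tokens, label). Multinomial Naive Bayes counts."""
    counts = defaultdict(Counter)   # per-label bigram counts
    labels = Counter()              # label frequencies (the prior)
    vocab = set()
    for tokens, label in docs:
        labels[label] += 1
        for bg in bigrams(tokens):
            counts[label][bg] += 1
            vocab.add(bg)
    return counts, labels, vocab

def predict(counts, labels, vocab, tokens):
    """Most probable label under add-one-smoothed log likelihoods."""
    total = sum(labels.values())
    best, best_lp = None, float("-inf")
    for label in labels:
        lp = math.log(labels[label] / total)
        denom = sum(counts[label].values()) + len(vocab)
        for bg in bigrams(tokens):
            lp += math.log((counts[label][bg] + 1) / denom)
        if lp > best_lp:
            best, best_lp = label, lp
    return best
```

Training on a handful of labelled reports is enough to see the mechanism: bigrams seen mostly in one class pull new reports towards that class.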
Abstract:
In recent years, the trade-off between flexibility and support has become a leading issue in workflow technology. In this paper we show how an imperative modeling approach used to define stable and well-understood processes can be complemented by a modeling approach that enables automatic process adaptation and exploits planning techniques to deal with environmental changes and exceptions that may occur during process execution. To this end, we designed and implemented a Custom Service that allows the Yawl execution environment to delegate the execution of subprocesses and activities to the SmartPM execution environment, which is able to automatically adapt a process to deal with emerging changes and exceptions. We demonstrate the feasibility and validity of the approach by showing the design and execution of an emergency management process defined for train derailments.