839 results for Text feature extraction
Abstract:
There is growing scholarly interest in the everyday work undertaken by screen producers, prompted in part by disciplinary shifts (the ‘material turn’, the rise of creative industries research) and in part by major transformations in the business of media production and consumption in recent years. However, the production cultures and motivations of screen producers, particularly those working in emergent online and convergent media markets, remain poorly understood. The 2012 Australian Screen Producer Survey, building upon the Australian Screen Content Producer Survey conducted in 2009, was a nation-wide survey-based study of screen content producers working in four industry segments: film, television, corporate and new media production. The broad objectives of the 2012 study were to:
• Provide deeper and more detailed analysis of the nature of digital media producers and their practices, and how these findings compare to the practices of established screen media producers;
• Interrogate issues around the pace of industry change, industry sentiment and how producers are adapting to a changing marketplace; and
• Offer insight into the transitional pathways of established media producers into production for digital media markets.
The Australian Screen Producer Survey Online Interactive provides users (principally filmmakers, scholars and policymakers) with direct access to raw survey data through an interactive website that allows them to customise queries according to particular interests. The Online Interactive therefore provides customisable findings – unlike ‘static’ research outputs – delineating the practices, attitudes, strategies, and aspirations of screen producers working in feature film, television and corporate production as well as those operating in an increasingly convergent digital media marketplace.
The survey was developed by researchers at the ARC Centre of Excellence for Creative Industries and Innovation (CCI), Queensland University of Technology, Deakin University, the Centre for Screen Business at Australian Film Television and Radio School (AFTRS) and was undertaken in association with Bergent Research. The Online Interactive website (http://screenproducersurvey.com/) was developed with support from the Centre for Memory Imagination and Invention (CMII).
Abstract:
Different reputation models are used on the web to generate reputation values for products from users' review data. Most current reputation models use review ratings and neglect users' textual reviews, because text is more difficult to process. However, we argue that an overall reputation score for an item does not reflect the actual reputation of each of its features, which is why using users' textual reviews is necessary. In our work we introduce a new reputation model that defines a new aggregation method for users' opinions about product features, extracted from review text. Our model uses a feature ontology to define the general features and sub-features of a product, and it reflects the frequencies of positive and negative opinions. We provide a case study to show how our results compare with other reputation models.
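As a rough sketch of the kind of feature-level aggregation this abstract describes (the function name, polarity encoding, and frequency weighting are illustrative assumptions, not the paper's actual model):

```python
from collections import defaultdict

def feature_reputation(opinions):
    """Aggregate extracted (feature, polarity) opinions into per-feature
    scores and a frequency-weighted overall reputation score.
    `opinions` is a list of (feature, polarity) pairs, polarity +1/-1.
    Hypothetical aggregation -- for illustration only."""
    pos = defaultdict(int)
    neg = defaultdict(int)
    for feature, polarity in opinions:
        if polarity > 0:
            pos[feature] += 1
        else:
            neg[feature] += 1
    scores = {}
    for feature in set(pos) | set(neg):
        p, n = pos[feature], neg[feature]
        scores[feature] = p / (p + n)  # share of positive opinions
    # weight each feature by how often it is mentioned
    total = sum(pos[f] + neg[f] for f in scores)
    overall = sum(scores[f] * (pos[f] + neg[f]) / total for f in scores)
    return scores, overall
```

In this sketch a frequently mentioned feature moves the overall score more than a rarely mentioned one, which captures the abstract's point that a single overall rating hides per-feature reputation.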
Abstract:
Online business, or Electronic Commerce (EC), is growing popular among customers today, and as a result a large number of product reviews have been posted online. This information is valuable not only for prospective customers deciding whether to buy a product, but also for companies gathering information about customers' satisfaction with their products. Opinion mining is used to capture customer reviews and separate them into subjective expressions (sentiment words) and objective expressions (non-sentiment words). This paper proposes a novel multi-dimensional model for opinion mining, which integrates customers' characteristics and their opinions about products. The model captures subjective expressions from product reviews and transfers them to a fact table before representing them in four dimensions: customer, product, time and location. Data warehouse techniques such as OLAP and data cubes were used to analyse opinionated sentences. A comprehensive way to calculate customers' orientation towards products' features and attributes is also presented.
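A minimal sketch of how opinion facts might be rolled up along chosen dimensions, in the spirit of the OLAP cube described above; the field names and sentiment encoding are assumptions for illustration:

```python
from collections import defaultdict

def roll_up(fact_table, dims):
    """Aggregate sentiment counts from an opinion fact table along the
    requested dimensions, mimicking an OLAP roll-up. Each fact is a dict
    with hypothetical 'customer', 'product', 'time', 'location' keys and
    a 'sentiment' measure (+1 positive, -1 negative)."""
    cube = defaultdict(lambda: {"pos": 0, "neg": 0})
    for fact in fact_table:
        key = tuple(fact[d] for d in dims)  # group by the chosen dims
        if fact["sentiment"] > 0:
            cube[key]["pos"] += 1
        else:
            cube[key]["neg"] += 1
    return dict(cube)
```

Querying the same fact table with different `dims` lists gives the different "slices" (per product, per location per year, and so on) that the multi-dimensional model supports.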
Abstract:
The Australian Curriculum: English (AC:E) is being implemented in Queensland and asks teachers and curriculum designers to incorporate the cross-curriculum priority of Sustainability. This paper examines some texts suitable for inclusion in classroom study and suggests companion texts that may be studied alongside them, including online resources by the ABC and those developed online for the Australian Curriculum. We also suggest some formative and summative assessment possibilities for responding to the selected works in this guide. We have endeavoured to investigate literature that enables students to explore and produce text types across the three AC:E categories: persuasive, imaginative and informative. The selected texts cover traditional novels, novellas, sci-fi and speculative fiction, non-fiction, documentary, feature film and animation. Some of the texts reviewed here also address the other cross-curriculum priorities, including texts by Aboriginal and Torres Strait Islander writers and some which include Asian representations. We have also indicated which of the AC:E general capabilities are addressed by each text.
Abstract:
A significant minority of young job-seekers remain unemployed for many months, and are at risk of developing depression. Both empirical studies and theoretical models suggest that cognitive, behavioural and social isolation factors interact to increase this risk. Thus, interventions that reduce or prevent depression in young unemployed job-seekers by boosting their resilience are required. Mobile phones may be an effective medium to deliver resilience-boosting support to young unemployed people by using SMS messages to interrupt the feedback loop of depression and social isolation. Three focus groups were conducted to explore young unemployed job-seekers’ attitudes to receiving and requesting regular SMS messages that would help them to feel supported and motivated while job-seeking. Participants reacted favourably to this proposal, and thought that it would be useful to continue to receive and request SMS messages for a few months after commencing employment as well.
Abstract:
Flows of cultural heritage in textual practices are vital to sustaining Indigenous communities. Indigenous heritage, whether passed on by oral tradition or ubiquitous social media, can be seen as a “conversation between the past and the future” (Fairclough, 2012, xv). Indigenous heritage involves appropriating memories within a cultural flow to pass on a spiritual legacy. This presentation reports ethnographic research on social media practices in a small independent Aboriginal school in Southeast Queensland, Australia, presided over by the Yugambeh elders and an Aboriginal principal. The purpose was to rupture existing notions of white literacies in schools, and to deterritorialize the uses of digital media by dominant cultures in the public sphere. Examples of learning experiences included the following:
i. Integrating Indigenous language and knowledge into media text production;
ii. Using conversations with Indigenous elders and material artifacts as an entry point for storytelling;
iii. Dadirri – spiritual listening in the yarning circle to develop storytelling (Ungunmerr-Baumann, 2002); and
iv. Writing and publicly sharing oral histories through digital scrapbooking shared via social media.
The program aligned with the Australian National Curriculum English (ACARA, 2012), which mandates the teaching of multimodal text creation. Data sources included a class set of digital scrapbooks collaboratively created in a multi-age primary classroom. The digital scrapbooks combined digitally encoded words, images of material artifacts, and digital music files. A key feature of the writing and digital design task was to retell, digitally display, and archive a cultural narrative of significance to the Indigenous Australian community and its memories and material traces of the past for the future. Data analysis of the students' digital stories involved the application of key themes of negotiated, material, and digitally mediated forms of heritage practice.
It drew on Australian Indigenous research by Keddie et al. (2013) to guard against the homogenizing of culture that can arise from a static view of culture. The interpretation of findings located Indigenous appropriation of social media within broader racialized politics that enables Indigenous literacy to be understood as dynamic, negotiated, and transgenerational flows of practice. The findings demonstrate that Indigenous children's use of media production reflects “shifting and negotiated identities” in response to changing media environments that can function to sustain Indigenous cultural heritages (Appadurai, 1996, xv). It demonstrated how the children's experiences of culture are layered over time, as successive generations inherit, interweave, and hear others' cultural stories or maps. It also demonstrated how the children's production of narratives through multimedia can provide a platform for the flow and reconstruction of performative collective memories and “lived traces of a common past” (Giaccardi, 2012). It disrupts notions of cultural reductionism and racial incommensurability that fix and homogenize Indigenous practices within and against a dominant White norm. Recommendations are provided for an approach to appropriating social media in schools that explicitly attends to the dynamic nature of Indigenous practices, negotiated through intercultural constructions and flows, opening space for a critical anti-racist approach to multimodal text production.
Abstract:
Building on and bringing up to date the material presented in the first installment of Directory of World Cinema: Australia and New Zealand, this volume continues the exploration of the cinema produced in Australia and New Zealand since the beginning of the twentieth century. Among the additions to this volume are in-depth treatments of the locations that feature prominently in the countries' cinema. Essays by leading critics and film scholars consider the significance in films of the outback and the beach, which is evoked as a liminal space in Long Weekend and a symbol of death in Heaven's Burning, among other films. Other contributions turn the spotlight on previously unexplored genres and key filmmakers, including Jane Campion, Rolf de Heer, Charles Chauvel, and Gillian Armstrong.
Abstract:
Changes in the molecular structure of polymer antioxidants such as hindered amine light stabilisers (HALS) are central to their efficacy in retarding polymer degradation and therefore require careful monitoring during their in-service lifetime. The HALS bis-(1-octyloxy-2,2,6,6-tetramethyl-4-piperidinyl) sebacate (TIN123) and bis-(1,2,2,6,6-pentamethyl-4-piperidinyl) sebacate (TIN292) were formulated in different polymer systems and then exposed to various curing and ageing treatments to simulate in-service use. Samples of these coatings were then analysed directly using liquid extraction surface analysis (LESA) coupled with a triple quadrupole mass spectrometer. Analysis of TIN123 formulated in a cross-linked polyester revealed that the polymer matrix protected TIN123 from the extensive thermal degradation that would normally occur at 292 degrees C, specifically changes at the 1- and 4-positions of the piperidine groups. The effect of thermal versus photo-oxidative degradation was also compared for TIN292 formulated in polyacrylate films by monitoring the in situ conversion of N-CH3 substituted piperidines to N-H. The analysis confirmed that UV light was required for the conversion of N-CH3 moieties to N-H – a major pathway in the antioxidant protection of polymers – whereas this conversion was not observed with thermal degradation. The use of tandem mass spectrometric techniques, including precursor-ion scanning, is shown to be highly sensitive and specific for detecting molecular-level changes in HALS compounds and, when coupled with LESA, able to monitor these changes in situ with speed and reproducibility. (C) 2013 Elsevier B.V. All rights reserved.
Abstract:
Several websites utilise a rule-based recommendation system, which generates choices based on a series of questionnaires, to recommend products to users. This approach carries a high risk of customer attrition, and the bottleneck is the questionnaire set. If the questioning process is too long, complex or tedious, users are likely to quit the questionnaire before a product is recommended to them; if it is too short, the users' intentions cannot be gathered. Commonly used feature selection methods do not provide a satisfactory solution. We propose a novel process combining clustering, decision trees and association rule mining for group-oriented question reduction. The question set is reduced according to common properties that are shared by a specific group of users. When applied to a real-world website, the proposed combined method outperforms methods where the reduction of questions is done only by association rule mining or only by observing distribution within the group.
Abstract:
Objective: To evaluate the effects of Optical Character Recognition (OCR) on the automatic cancer classification of pathology reports. Method: Scanned images of pathology reports were converted to electronic free-text using a commercial OCR system. A state-of-the-art cancer classification system, the Medical Text Extraction (MEDTEX) system, was used to automatically classify the OCR reports. Classifications produced by MEDTEX on the OCR versions of the reports were compared with classifications from a human-amended version of the OCR reports. Results: The employed OCR system was found to recognise scanned pathology reports with up to 99.12% character accuracy and up to 98.95% word accuracy. Errors in the OCR processing were found to have minimal impact on the automatic classification of scanned pathology reports into notifiable groups. However, the impact of OCR errors is not negligible when considering the extraction of cancer notification items, such as primary site, histological type, etc. Conclusions: The automatic cancer classification system used in this work, MEDTEX, has proven robust to errors produced by the acquisition of free-text pathology reports from scanned images through OCR software. However, issues emerge when considering the extraction of cancer notification items.
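Character-accuracy figures such as the 99.12% quoted above are typically derived from the edit distance between the OCR output and a reference transcription; a plain dynamic-programming sketch, assuming that definition:

```python
def char_accuracy(ocr_text, reference):
    """Character-level OCR accuracy, 1 - edit_distance / len(reference),
    using a standard Levenshtein dynamic program. A common definition,
    assumed here; the commercial OCR system's exact metric is not given."""
    m, n = len(ocr_text), len(reference)
    prev = list(range(n + 1))
    for i in range(1, m + 1):
        curr = [i] + [0] * n
        for j in range(1, n + 1):
            cost = 0 if ocr_text[i - 1] == reference[j - 1] else 1
            curr[j] = min(prev[j] + 1,         # deletion
                          curr[j - 1] + 1,     # insertion
                          prev[j - 1] + cost)  # substitution
        prev = curr
    return 1 - prev[n] / n
```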
Abstract:
The aim of this research is to report initial experimental results and an evaluation of a clinician-driven automated method that can address the issue of misdiagnosis from unstructured radiology reports. Timely diagnosis and reporting of patient symptoms in hospital emergency departments (ED) is a critical component of health services delivery. However, due to dispersed information resources and vast amounts of manual processing of unstructured information, an accurate point-of-care diagnosis is often difficult. A rule-based method that considers the occurrence of clinician-specified keywords related to radiological findings was developed to identify limb abnormalities, such as fractures. A dataset containing 99 narrative reports of radiological findings was sourced from a tertiary hospital. The rule-based method achieved an F-measure of 0.80 and an accuracy of 0.80. While our method achieves promising performance, a number of avenues for improvement were identified using advanced natural language processing (NLP) techniques.
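A minimal sketch of a keyword-spotting rule of the kind described, with a hypothetical two-term gazetteer standing in for the clinician-specified keyword list, which the abstract does not enumerate:

```python
import re

def classify_report(report, gazetteer):
    """Flag a radiology report as abnormal if it mentions any keyword
    from the gazetteer (whole-word, case-insensitive match).
    Returns the label and the matched keywords."""
    text = report.lower()
    hits = [kw for kw in gazetteer
            if re.search(r"\b" + re.escape(kw) + r"\b", text)]
    return ("abnormal" if hits else "normal"), hits
```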
Abstract:
Aims: Pathology notification to a Cancer Registry is regarded as the most valid information for the confirmation of a diagnosis of cancer. In view of the importance of pathology data, an automatic medical text analysis system (Medtex) is being developed to perform electronic Cancer Registry data extraction and coding of important clinical information embedded within pathology reports. Methods: The system automatically scans HL7 messages received from a Queensland pathology information system and analyses the reports for terms and concepts relevant to a cancer notification. A multitude of data items for cancer notification, such as primary site, histological type, stage, and other synoptic data, are classified by the system. The underlying extraction and classification technology is based on SNOMED CT. The Queensland Cancer Registry business rules and the International Classification of Diseases – Oncology – Version 3 have been incorporated. Results: The cancer notification services show that the classification of notifiable reports can be achieved with sensitivities of 98% and specificities of 96%, while the coding of cancer notification items such as basis of diagnosis, histological type and grade, primary site and laterality can be extracted with an overall accuracy of 80%. In the case of lung cancer staging, the automated stages produced were accurate enough for the purposes of population-level research and indicative staging prior to multi-disciplinary team meetings. Medtex also allows for detailed tumour stream synoptic reporting. Conclusions: Medtex demonstrates how medical free-text processing can enable the automation of some Cancer Registry processes. Over 70% of Cancer Registry coding resources are devoted to information acquisition.
The development of a clinical decision support system to unlock information from medical free-text could significantly reduce costs arising from duplicated processes and enhance the efficiency and timeliness of cancer information for Cancer Registries.
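The sensitivity and specificity figures quoted above follow from the standard confusion-matrix definitions; a small sketch with illustrative counts (the actual counts behind the 98%/96% results are not given in the abstract):

```python
def sensitivity_specificity(tp, fp, tn, fn):
    """Sensitivity (true positive rate over notifiable reports) and
    specificity (true negative rate over non-notifiable reports)
    from confusion-matrix counts."""
    sensitivity = tp / (tp + fn)  # notifiable reports correctly flagged
    specificity = tn / (tn + fp)  # non-notifiable reports correctly rejected
    return sensitivity, specificity
```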
Abstract:
Background: Timely diagnosis and reporting of patient symptoms in hospital emergency departments (ED) is a critical component of health services delivery. However, due to dispersed information resources and a vast amount of manual processing of unstructured information, accurate point-of-care diagnosis is often difficult. Aims: The aim of this research is to report an initial experimental evaluation of a clinician-informed automated method addressing initial misdiagnoses associated with delayed receipt of unstructured radiology reports. Method: A method was developed that resembles clinical reasoning for identifying limb abnormalities. The method consists of a gazetteer of keywords related to radiological findings; it classifies an X-ray report as abnormal if the report contains evidence found in the gazetteer. A set of 99 narrative reports of radiological findings was sourced from a tertiary hospital. Reports were manually assessed by two clinicians, and discrepancies were validated by a third expert ED clinician; the final manual classification generated by the expert ED clinician was used as ground truth to empirically evaluate the approach. Results: The automated method, which identifies limb abnormalities by searching for keywords specified by clinicians, achieved an F-measure of 0.80 and an accuracy of 0.80. Conclusion: While the automated clinician-driven method achieved promising performance, a number of avenues for improvement were identified using advanced natural language processing (NLP) and machine learning techniques.
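Accuracy and F-measure as reported above can be computed from predicted labels and the expert ground truth; a sketch using the standard definitions, with illustrative label names and data:

```python
def evaluate(predictions, ground_truth, positive="abnormal"):
    """Accuracy and F-measure of predicted labels against ground truth,
    treating `positive` as the class of interest. Standard definitions,
    assumed to match those used in the study."""
    pairs = list(zip(predictions, ground_truth))
    tp = sum(p == positive == g for p, g in pairs)
    fp = sum(p == positive and g != positive for p, g in pairs)
    fn = sum(g == positive and p != positive for p, g in pairs)
    accuracy = sum(p == g for p, g in pairs) / len(pairs)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f_measure = (2 * precision * recall / (precision + recall)
                 if precision + recall else 0.0)
    return accuracy, f_measure
```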
Abstract:
Background: Cancer monitoring and prevention relies on the timely notification of cancer cases. However, the abstraction and classification of cancer from the free text of pathology reports and other relevant documents, such as death certificates, are complex and time-consuming activities. Aims: In this paper, approaches for the automatic detection of notifiable cancer cases as the cause of death from free-text death certificates supplied to Cancer Registries are investigated. Method: A number of machine learning classifiers were studied. Features were extracted using natural language processing techniques and the Medtex toolkit, and encompassed stemmed words, bi-grams, and concepts from the SNOMED CT medical terminology. The baseline consisted of a keyword spotter using keywords extracted from the long descriptions of ICD-10 cancer-related codes. Results: Death certificates with notifiable cancer listed as the cause of death can be effectively identified with the methods studied in this paper. A Support Vector Machine (SVM) classifier achieved the best performance, with an overall F-measure of 0.9866 when evaluated on a set of 5,000 free-text death certificates using the token stem feature set. The SNOMED CT concept plus token stem feature set reached the lowest variance (0.0032) and false negative rate (0.0297) while achieving an F-measure of 0.9864. The SVM classifier accounts for the first 18 of the top 40 evaluated runs and was the most robust classifier, with a variance of 0.001141, half that of the other classifiers. Conclusion: The selection of features had the most significant influence on classifier performance, although the type of classifier employed also affected performance. In contrast, the feature weighting schema had a negligible effect.
Specifically, stemmed tokens, with or without SNOMED CT concepts, were found to be the most effective features when combined with an SVM classifier.
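A sketch of the token stem plus bi-gram feature extraction described in the Method, using a crude suffix-stripping rule as a stand-in for a proper stemmer (the study's actual stemmer and tokenizer are not specified in the abstract):

```python
import re

def extract_features(text):
    """Extract stemmed-token and bi-gram features from free text.
    The suffix-stripping 'stem' here is a deliberately crude
    illustration; a real system would use a proper stemmer."""
    tokens = re.findall(r"[a-z]+", text.lower())
    stems = [re.sub(r"(ing|ed|s)$", "", t) for t in tokens]
    bigrams = [f"{a}_{b}" for a, b in zip(stems, stems[1:])]
    return stems + bigrams
```

The combined list would then be vectorized (e.g. as token counts) and fed to the SVM or other classifiers compared in the study.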