Abstract:
Sonic Loom is a purpose-built classroom tool for teachers and students of drama that will enable them to explore the use of music in live performance in theory and practice. It is intended as a resource for drama classrooms, to encourage communication and exchange about the way music works on us so that we can find new ways to make it work for us. Working to consciously attend to music and how it is used, particularly in cinema (a popular way into styles of western theatre and live performance), will allow students and teachers to use music in more subtle and complex ways as an aid to narrative in performance. Sonic Loom encourages active listening (aided but not encumbered by traditional musicology) so that students and teachers can develop a ‘critical ear’ in the transformation and adaptation of music for their own artistic purposes, whether soundtracking existing scene work or acting as a pre-text for scenes which have yet to be created.
Abstract:
Many data mining techniques have been proposed for mining useful patterns in text documents. However, how to effectively use and update discovered patterns is still an open research issue, especially in the domain of text mining. Since most existing text mining methods have adopted term-based approaches, they all suffer from the problems of polysemy and synonymy. Over the years, people have often held the hypothesis that pattern (or phrase) based approaches should perform better than term-based ones, but many experiments did not support this hypothesis. This paper presents an innovative technique, effective pattern discovery, which includes the processes of pattern deploying and pattern evolving, to improve the effectiveness of using and updating discovered patterns for finding relevant and interesting information. Substantial experiments on the RCV1 data collection and TREC topics demonstrate that the proposed solution achieves encouraging performance.
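The pattern deploying step mentioned above can be illustrated with a minimal sketch (this is our own simplification, not the authors' implementation; the function name and the toy data are invented): discovered patterns are assumed to be given as (term set, support) pairs, and each pattern's support is spread over its terms so that a term's weight reflects the patterns it occurs in.

```python
from collections import defaultdict

def deploy_patterns(patterns):
    """Deploy discovered patterns onto the term space.

    `patterns` is a list of (termset, support) pairs mined from the
    relevant documents; each pattern's support is spread evenly over
    its terms, so terms occurring in many strong patterns accumulate
    higher weights (a simplified deployed-pattern vector).
    """
    weights = defaultdict(float)
    for termset, support in patterns:
        share = support / len(termset)
        for term in termset:
            weights[term] += share
    total = sum(weights.values()) or 1.0   # normalise to a distribution
    return {term: w / total for term, w in weights.items()}

# toy example: two patterns discovered in relevant documents
patterns = [({"pattern", "mining"}, 4), ({"pattern", "taxonomy"}, 2)]
print(deploy_patterns(patterns))
```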
Abstract:
This magazine, written by Melissa Giles, features three Brisbane-based media organisations: Radio 4RPH, Queensland Pride and 98.9FM. The PDF file on this website contains a text-only version of the magazine. Contact the author if you would like a copy of the text-only EPUB file or a copy of the full digital magazine with images. An audio version of the magazine is available at http://eprints.qut.edu.au/41729/
Abstract:
This study is conducted within the IS-Impact Research Track at Queensland University of Technology (QUT). The goal of the IS-Impact Track is "to develop the most widely employed model for benchmarking information systems in organizations for the joint benefit of both research and practice" (Gable et al., 2006). IS-Impact is defined as "a measure at a point in time, of the stream of net benefits from the IS, to date and anticipated, as perceived by all key-user-groups" (Gable, Sedera and Chan, 2008). Track efforts have yielded the bicameral IS-Impact measurement model; the "impact" half includes the Organizational-Impact and Individual-Impact dimensions, while the "quality" half includes the System-Quality and Information-Quality dimensions. The IS-Impact model, by design, is intended to be robust, simple and generalizable, yielding results that are comparable across time, stakeholders, different systems and system contexts. The model and measurement approach employ perceptual measures and an instrument that is relevant to key stakeholder groups, thereby enabling the combination or comparison of stakeholder perspectives. Such a validated and widely accepted IS-Impact measurement model has both academic and practical value. It facilitates systematic operationalization of a main dependent variable in research (IS-Impact), which can also serve as an important independent variable. For IS management practice it provides a means to benchmark and track the performance of information systems in use. The objective of this study is to develop a Mandarin-version IS-Impact model, encompassing a list of China-specific IS-Impact measures, to aid a better understanding of the IS-Impact phenomenon in a Chinese organizational context. The IS-Impact model provides much-needed theoretical guidance for this investigation of Enterprise Systems (ES) and ES impacts in a Chinese context. The appropriateness and soundness of employing the IS-Impact model as a theoretical foundation are evident: the model originated from a sound theory of IS Success (1992), was developed through rigorous validation, and was derived in the context of Enterprise Systems. Based on the IS-Impact model, this study investigates a number of research questions (RQs). First, the study investigates what essential impacts Chinese users and organizations have derived from ES [RQ1]. Second, it investigates which salient quality features of ES are perceived by Chinese users [RQ2]. Third, it seeks to answer whether the quality and impact measures are sufficient to assess ES success in general [RQ3]. Lastly, it addresses whether the IS-Impact measurement model is appropriate for Chinese organizations in terms of evaluating their ES [RQ4]. An open-ended, qualitative identification survey was employed in the study. A large body of short text data was gathered from 144 Chinese users, and 633 valid IS-Impact statements were generated from the data set. A general inductive approach was applied in the qualitative data analysis. Rigorous qualitative data coding resulted in 50 first-order categories and 6 second-order categories grounded in the context of Chinese organizations. The six second-order categories are: 1) System Quality; 2) Information Quality; 3) Individual Impacts; 4) Organizational Impacts; 5) User Quality; and 6) IS Support Quality.
The final research finding of the study is the contextualized Mandarin-version IS-Impact measurement model, which includes 38 measures organized into 4 dimensions: System Quality, Information Quality, Individual Impacts and Organizational Impacts. The study also proposes two conceptual models to harmonize the IS-Impact model with the two emergent constructs, User Quality and IS Support Quality, by drawing on previous IS effectiveness literature and the Work System theory proposed by Alter (1999), respectively. The study is significant as it is the first effort to empirically and comprehensively investigate IS-Impact in China. Specifically, the research contributions can be classified into theoretical contributions and practical contributions. From the theoretical perspective, through qualitative evidence, the study tests and consolidates the IS-Impact measurement model in terms of robustness, completeness and generalizability. The unconventional research design exhibits the creativity of the study: the theoretical model is not applied top-down as an a priori framework seeking evidence of its credibility; rather, the study allows a competing model to emerge from bottom-up, open-coding analysis. The study is also an example of extending and localizing a pre-existing theory developed in a Western context when that theory is introduced to a different context. From the practical perspective, this is the first time prominent research findings in the field of IS Success have been introduced to Chinese academia and practitioners. The study provides a guideline for Chinese organizations to assess their Enterprise Systems and to leverage IT investment in the future. As a research effort in the ITPS track, this study provides the research team with an alternative operationalization of the dependent variable. Future research can adopt the contextualized Mandarin-version IS-Impact framework as an a priori theoretical model and further test its validity quantitatively and empirically.
Abstract:
Information has no value unless it is accessible. Information must be connected so that a knowledge network can be built. Such a knowledge base is a key resource for Internet users to interlink information from documents. Information retrieval, a key technology for knowledge management, guarantees access to large corpora of unstructured text. Collaborative knowledge management systems such as Wikipedia are becoming more popular than ever; however, their link creation function is not optimized for discovering possible links in the collection, and the quality of automatically generated links has never been quantified. This research begins with an evaluation forum intended to support experiments in focused link discovery in a collaborative way, as well as the investigation of link discovery applications. The research focus was on the evaluation strategy: the proposed evaluation framework, covering rules, formats, pooling, validation and assessment, has proved efficient for conducting evaluation and reusable for further extension. A collection-split approach is used to reconstruct the Wikipedia collection into a split collection comprising single-passage files. This split collection is shown to be feasible for improving relevant passage discovery and serves as a corpus for focused link discovery. Following these experiments, a mobile client-side prototype built on the iPhone is developed to address the mobile search issue by using focused link discovery technology. According to the interview survey, the proposed mobile interactive UI does improve the experience of mobile information seeking. Based on this evaluation framework, a novel cross-language link discovery proposal using multiple text collections is developed. A dynamic evaluation approach is proposed to enhance both the collaborative effort and the interaction between submission and evaluation. A realistic evaluation scheme has been implemented at NTCIR for cross-language link discovery tasks.
Abstract:
From the late sixteenth century, in response to the problem of how best to teach children to read, a variety of texts such as primers, spellers and readers were produced in England for vernacular instruction. This paper describes how these materials were used by teachers to develop, first, a specific religious understanding according to the strictures of the time and, second, a moral reading practice that provided the child with a guide to secular conduct. The analysis focuses on the use of these texts as a productive means for shaping the child-reader in the context of newly emerging educational spaces which fostered a particular, morally formative relation among teacher, child and text.
Abstract:
It is a big challenge to guarantee the quality of discovered relevance features in text documents for describing user preferences, because of the large number of terms, patterns, and noise. Most existing popular text mining and classification methods have adopted term-based approaches. However, they have all suffered from the problems of polysemy and synonymy. Over the years, people have often held the hypothesis that pattern-based methods should perform better than term-based ones in describing user preferences, but many experiments do not support this hypothesis. This research presents a promising method, Relevance Feature Discovery (RFD), for solving this challenging issue. It discovers both positive and negative patterns in text documents as high-level features in order to accurately weight low-level features (terms) based on their specificity and their distributions in the high-level features. The thesis also introduces an adaptive model (called ARFD) to enhance the flexibility of using RFD in an adaptive environment. ARFD automatically updates the system's knowledge based on a sliding window over new incoming feedback documents. It can efficiently decide which incoming documents bring new knowledge into the system. Substantial experiments using the proposed models on Reuters Corpus Volume 1 and TREC topics show that the proposed models significantly outperform both the state-of-the-art term-based methods underpinned by Okapi BM25, Rocchio or Support Vector Machines, and other pattern-based methods.
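To make the weighting idea concrete, here is a minimal sketch under our own simplifying assumptions (it is not the RFD algorithm itself; the function name, the penalty parameter mu and the toy patterns are invented): patterns mined from relevant and irrelevant documents are given as (term set, support) pairs, and a term gains weight from the positive patterns it occurs in while being penalised for occurring in negative patterns.

```python
from collections import defaultdict

def weight_terms(pos_patterns, neg_patterns, mu=0.5):
    """Weight low-level terms from high-level patterns.

    pos_patterns / neg_patterns: lists of (termset, support) pairs mined
    from relevant and irrelevant feedback documents.  A term gains
    weight for every positive pattern it occurs in and is penalised
    (scaled by mu) for every negative pattern it occurs in.
    """
    weights = defaultdict(float)
    for termset, support in pos_patterns:
        for term in termset:
            weights[term] += support / len(termset)
    for termset, support in neg_patterns:
        for term in termset:
            weights[term] -= mu * support / len(termset)
    return dict(weights)

pos = [({"global", "economy"}, 3), ({"economy", "policy"}, 2)]
neg = [({"sport", "economy"}, 2)]
print(weight_terms(pos, neg))
```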
Abstract:
In June 2011, a research project team from the Institute for Ethics, Governance and Law (IEGL), Queensland University of Technology, the United Nations University, and the Australian Government’s Asia Pacific Civil-Military Centre of Excellence (APCMCOE) held three Capacity-Building Workshops (the Workshops) on the Responsibility to Protect (R2P) and the Protection of Civilians (POC) in Armed Conflict in Manila, Kuala Lumpur, and Jakarta. The research project is funded by the Australian Responsibility to Protect Fund, with support from APCMCOE. Developments in Libya and Cote d’Ivoire and the actions of the United Nations Security Council have given new significance to the relationship between R2P and POC, providing impetus to the relevance and application of the POC principle recognised in numerous Security Council resolutions, and the R2P principle, which was recognised by the United Nations General Assembly in 2005 and, now, by the Security Council. The Workshops considered the relationship between R2P and POC. The project team presented the preliminary findings of their study and sought contributions and feedback from Workshop participants. Prior to the Workshops, members of the project team undertook interviews with UN offices and agencies, international organisations (IOs) and non-government organisations (NGOs) in Geneva and New York as part of the process of mapping the relationship between R2P and POC. Initial findings were considered at an Academic-Practitioner Workshop held at the University of Sydney in November 2010. In addition to an extensive literature review and a series of academic publications, the project team is preparing a practical guidance text (the Guide) on the relationship between R2P and POC to assist the United Nations, governments, regional bodies, IOs and NGOs in considering and applying appropriate protection strategies. It is intended that the Guide be presented to the United Nations Secretariat in New York in early 2012. The primary aim of the Workshops was to test the project’s initial findings among an audience of diplomats, military, police, civilian policy-makers, practitioners, researchers and experts from within the region. Through dialogue and discussion, the project team gathered feedback – comments, questions, critique and suggestions – to help shape the development of practical guidance about when, how and by whom R2P and POC might be implemented.
Abstract:
A rule-based approach is presented for classifying previously identified medical concepts in clinical free text into an assertion category. There are six different categories of assertions for the task: Present, Absent, Possible, Conditional, Hypothetical and Not associated with the patient. The assertion classification algorithms were largely based on extending the popular NegEx and ConText algorithms. In addition, the clinical terminology SNOMED CT and other publicly available dictionaries were used to classify assertions which did not fit the NegEx/ConText model. The data for this task include discharge summaries from Partners HealthCare and from Beth Israel Deaconess Medical Centre, as well as discharge summaries and progress notes from the University of Pittsburgh Medical Centre. The set consists of 349 discharge reports, each with pairs of ground-truth concept and assertion files for system development, and 477 reports for evaluation. The system's performance on the evaluation data set was 0.83, 0.83 and 0.83 for recall, precision and F1-measure, respectively. Although the rule-based system shows promise, further improvements can be made by incorporating machine learning approaches.
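A toy illustration of the NegEx/ConText style of rule described above is given below. The trigger phrases, the six-word window and the example sentence are illustrative placeholders, not the published trigger lists or this system's actual rules.

```python
# illustrative trigger phrases only; real NegEx/ConText use much larger,
# curated trigger lists and richer scope rules
PRE_NEGATION = ["no", "denies", "without", "no evidence of"]
POSSIBLE = ["possible", "probable", "suspected"]
HYPOTHETICAL = ["if", "should", "return if"]

def classify_assertion(sentence, concept):
    """Tiny rule-based assertion classifier.

    Looks for trigger phrases in a window of up to six words before the
    concept mention and maps the first match to an assertion category,
    defaulting to 'present' when no trigger fires.
    """
    text = sentence.lower()
    idx = text.find(concept.lower())
    if idx == -1:
        return "present"
    window = " " + " ".join(text[:idx].split()[-6:]) + " "
    for triggers, label in [(PRE_NEGATION, "absent"),
                            (POSSIBLE, "possible"),
                            (HYPOTHETICAL, "hypothetical")]:
        if any(" " + t + " " in window for t in triggers):
            return label
    return "present"

print(classify_assertion("The patient denies chest pain on exertion.", "chest pain"))  # absent
```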
Abstract:
In the era of Web 2.0, huge volumes of consumer reviews are posted to the Internet every day. Manual approaches to detecting and analyzing fake reviews (i.e., spam) are not practical due to the problem of information overload. However, the design and development of automated methods of detecting fake reviews is a challenging research problem. The main reason is that fake reviews are specifically composed to mislead readers, so they may appear the same as legitimate reviews (i.e., ham). As a result, discriminatory features that would enable individual reviews to be classified as spam or ham may not be available. Guided by the design science research methodology, the main contribution of this study is the design and instantiation of novel computational models for detecting fake reviews. In particular, a novel text mining model is developed and integrated into a semantic language model for the detection of untruthful reviews. The models are then evaluated based on a real-world dataset collected from amazon.com. The results of our experiments confirm that the proposed models outperform other well-known baseline models in detecting fake reviews. To the best of our knowledge, the work discussed in this article represents the first successful attempt to apply text mining methods and semantic language models to the detection of fake consumer reviews. A managerial implication of our research is that firms can apply our design artifacts to monitor online consumer reviews to develop effective marketing or product design strategies based on genuine consumer feedback posted to the Internet.
Abstract:
It is a big challenge to acquire correct user profiles for personalized text classification, since users may be unsure when describing their interests. Traditional approaches to user profiling adopt machine learning (ML) to automatically discover classification knowledge from explicit user feedback describing personal interests. However, the accuracy of ML-based methods cannot be significantly improved in many cases due to the term independence assumption and the uncertainties associated with them. This paper presents a novel relevance feedback approach for personalized text classification. It applies data mining to discover knowledge from relevant and non-relevant text, and constrains the specific knowledge with reasoning rules to eliminate conflicting information. We also developed a Dempster-Shafer (DS) approach as the means to utilise the specific knowledge to build high-quality data models for classification. Experimental results on Reuters Corpus Volume 1 and TREC topics show that the proposed technique achieves encouraging performance compared with state-of-the-art relevance feedback models.
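For readers unfamiliar with Dempster-Shafer theory, the sketch below shows Dempster's rule of combination in its standard textbook form. The mass values and the choice of evidence sources are invented for illustration; the paper's actual mass assignments and knowledge structures are not reproduced here.

```python
from itertools import product

def combine(m1, m2):
    """Dempster's rule of combination for two mass functions.

    Masses are dicts mapping frozensets of hypotheses to belief mass
    (e.g. evidence from two feature sources about whether a document
    is relevant).  Intersecting focal elements are multiplied, and the
    mass assigned to the empty set (conflict) is renormalised away.
    """
    combined = {}
    conflict = 0.0
    for (a, x), (b, y) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + x * y
        else:
            conflict += x * y
    k = 1.0 - conflict
    return {s: v / k for s, v in combined.items()}

REL, NON = frozenset({"rel"}), frozenset({"non"})
THETA = REL | NON                              # total ignorance
m_terms = {REL: 0.6, THETA: 0.4}               # evidence from term features
m_patterns = {REL: 0.5, NON: 0.2, THETA: 0.3}  # evidence from mined patterns
print(combine(m_terms, m_patterns))
```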
Abstract:
WHAT if you lost someone you loved? What if you had to let go for the sake of your own sanity? Lachlan Philpott's Colder and Dennis Kelly's Orphans, playing as part of La Boite's and Queensland Theatre Company's independents programs, are emotionally and textually dense theatrical works...
Abstract:
The development of text classification techniques has been greatly stimulated in the past decade by the increasing availability and widespread use of digital documents. Usually, the performance of text classification relies on the quality of the categories and the accuracy of classifiers learned from samples. When training samples are unavailable or the categories are unqualified, text classification performance degrades. In this paper, we propose an unsupervised multi-label text classification method to classify documents using a large set of categories stored in a world ontology. The approach has been evaluated promisingly by comparison with typical text classification methods, using a real-world document collection and ground truth encoded by human experts.
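A minimal sketch of the unsupervised idea follows, under our own simplifying assumptions: the ontology is reduced here to a dictionary mapping category labels to descriptive terms, and matching is plain Jaccard overlap, whereas the paper's world ontology and matching are considerably richer.

```python
def classify(doc_terms, categories, top_k=3):
    """Label a document with ontology categories, using no training data.

    `categories` maps each category label to the set of terms describing
    it in the ontology (label, synonyms, subject terms); the document is
    scored against every category by Jaccard overlap and the top-k
    categories become its labels (multi-label output).
    """
    scores = {}
    for label, cat_terms in categories.items():
        union = len(doc_terms | cat_terms) or 1
        scores[label] = len(doc_terms & cat_terms) / union
    return sorted(scores, key=scores.get, reverse=True)[:top_k]

categories = {
    "Economics": {"market", "trade", "economy", "finance"},
    "Sport": {"match", "team", "league", "player"},
}
print(classify({"trade", "market", "policy"}, categories, top_k=1))
```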
Abstract:
It is a big challenge to clearly identify the boundary between positive and negative streams. Several attempts have used negative feedback to address this challenge; however, there are two issues in using negative relevance feedback to improve the effectiveness of information filtering. The first is how to select constructive negative samples in order to reduce the space of negative documents. The second is how to decide which noisy extracted features should be updated based on the selected negative samples. This paper proposes a pattern-mining-based approach to select some offenders from the negative documents, where an offender can be used to reduce the side effects of noisy features. It also classifies extracted features (i.e., terms) into three categories: positive specific terms, general terms, and negative specific terms. In this way, multiple revising strategies can be used to update the extracted features. An iterative learning algorithm is also proposed to implement this approach on RCV1, and substantial experiments show that the proposed approach achieves encouraging performance.
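The three-way split of terms can be illustrated with a small sketch (our own simplification; how the paper actually selects offenders and revises weights is not reproduced here): terms seen only in the positive documents are treated as positive specific, terms seen only in the selected negative offenders as negative specific, and shared terms as general, so each group can receive a different revising strategy.

```python
def categorise_terms(pos_docs, offender_docs):
    """Split extracted terms into three revision groups.

    pos_docs: term sets of the relevant (positive) documents.
    offender_docs: term sets of the selected negative documents
    (the 'offenders').  Terms unique to the positive side are
    'positive specific', terms unique to the offenders are
    'negative specific', and shared terms are 'general'.
    """
    pos_terms = set().union(*pos_docs) if pos_docs else set()
    neg_terms = set().union(*offender_docs) if offender_docs else set()
    return {
        "positive_specific": pos_terms - neg_terms,
        "general": pos_terms & neg_terms,
        "negative_specific": neg_terms - pos_terms,
    }

pos = [{"oil", "price", "market"}, {"market", "export"}]
offenders = [{"market", "football"}]
print(categorise_terms(pos, offenders))
```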
Abstract:
The Australian e-Health Research Centre and Queensland University of Technology recently participated in the TREC 2011 Medical Records Track. This paper reports on our methods, results and experience using a concept-based information retrieval approach. Our concept-based approach is intended to overcome specific challenges we identify in searching medical records. Queries and documents are transformed from their term-based originals into medical concepts as defined by the SNOMED-CT ontology. Results show our concept-based approach performed above the median in all three performance metrics: bpref (+12%), R-prec (+18%) and Prec@10 (+6%).
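To illustrate the term-to-concept transformation, here is a minimal sketch under our own assumptions: the lookup table below uses hand-made placeholder concept IDs, whereas a real system would resolve terms against a SNOMED CT terminology service; none of this reproduces the paper's actual mapping pipeline.

```python
# hand-made, illustrative mapping with placeholder concept IDs; a real
# system would resolve terms against a SNOMED CT terminology service
TERM_TO_CONCEPT = {
    "heart attack": "CONCEPT_MI",
    "myocardial infarction": "CONCEPT_MI",
    "renal failure": "CONCEPT_RF",
}

def to_concepts(text):
    """Replace recognised surface terms with concept IDs so that
    synonymous wordings (e.g. 'heart attack' and 'myocardial
    infarction') index and retrieve against the same concept."""
    lowered = text.lower()
    return [cid for term, cid in TERM_TO_CONCEPT.items() if term in lowered]

print(to_concepts("Patient admitted with acute myocardial infarction"))
# ['CONCEPT_MI']
```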