97 results for Thematic Text Analysis
at Queensland University of Technology - ePrints Archive
Abstract:
It is a big challenge to guarantee the quality of discovered relevance features in text documents for describing user preferences because of the large number of terms, patterns, and noise. Most existing popular text mining and classification methods have adopted term-based approaches. However, they have all suffered from the problems of polysemy and synonymy. Over the years, people have often held the hypothesis that pattern-based methods should perform better than term-based ones in describing user preferences, but many experiments do not support this hypothesis. This research presents a promising method, Relevance Feature Discovery (RFD), for solving this challenging issue. It discovers both positive and negative patterns in text documents as high-level features in order to accurately weight low-level features (terms) based on their specificity and their distributions in the high-level features. The thesis also introduces an adaptive model (called ARFD) to enhance the flexibility of using RFD in adaptive environments. ARFD automatically updates the system's knowledge based on a sliding window over new incoming feedback documents, efficiently deciding which incoming documents bring new knowledge into the system. Substantial experiments using the proposed models on Reuters Corpus Volume 1 and TREC topics show that the proposed models significantly outperform both the state-of-the-art term-based methods underpinned by Okapi BM25, Rocchio or Support Vector Machine and other pattern-based methods.
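The pattern-to-term weighting idea can be illustrated with a small, hedged sketch. This is not the thesis's actual RFD algorithm (which mines far richer patterns than the toy pair-based miner below); the example documents and the weighting formula are simplifying assumptions made purely for illustration:

```python
from collections import Counter
from itertools import combinations

def mine_patterns(docs, min_support=2):
    """Toy pattern miner: frequent term pairs stand in for the
    higher-order patterns RFD discovers in feedback documents."""
    counts = Counter()
    for doc in docs:
        for pair in combinations(sorted(set(doc.split())), 2):
            counts[pair] += 1
    return {p: c for p, c in counts.items() if c >= min_support}

def term_weights(pos_docs, neg_docs):
    """Weight low-level terms by their distribution across high-level
    positive and negative patterns (a rough analogue of the
    specificity-based weighting described above)."""
    weights = Counter()
    for pattern, support in mine_patterns(pos_docs).items():
        for term in pattern:
            weights[term] += support / len(pattern)  # boost terms in positive patterns
    for pattern, support in mine_patterns(neg_docs).items():
        for term in pattern:
            weights[term] -= support / len(pattern)  # penalise terms in negative patterns
    return weights

pos_docs = ["data mining text patterns", "text mining finds patterns"]
neg_docs = ["mining coal and ore", "coal mining industry"]
print(term_weights(pos_docs, neg_docs).most_common(3))
```

Note how "mining" is rewarded by the positive patterns but penalised by the negative ones, which is the intuition behind using negative patterns to sharpen term weights.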
Abstract:
In late 2007, newly elected Prime Minister Kevin Rudd placed education reform on centre stage as a key policy in the Labor Party's agenda for social reform in Australia. A major policy strategy within this 'Education Revolution' was the development of a national curriculum, the Australian Curriculum. Within this political context, this study is an investigation into how social justice and equity have been used in political speeches to justify the need for, and the nature of, Australia's first official national curriculum. The aim is to provide understandings into what is said or not said; who is included or excluded, represented or misrepresented; for what purpose; and for whose benefit. The study investigates political speeches made by Education Ministers between 2008 and 2010; that is, from the inception of the Australian Curriculum to the release of the Phase 1 F-10 draft curriculum documents in English, mathematics, science and history. Curriculum development is defined here as an ongoing process of complex conversations. To contextualise the process of curriculum development within Australia, the thesis commences with an initial review of curriculum development in this nation over the past three decades. It then frames this review within contemporary curriculum theory; in particular it calls upon the work of William Pinar and the key notions of currere and reconceptualised curriculum. This contextualisation work is then used as a foundation to examine how social justice and equity have been represented in political speeches delivered by the respective Education Ministers Julia Gillard and Peter Garrett at key junctures of Australian Curriculum document releases. A critical thematic policy analysis is the approach used to examine selected official speech transcripts released by the ministerial media centre through the DEEWR website. This approach provides a way to enable insights and understandings of representations of social justice and equity issues in the policy agenda. Broader social implications are also discussed. The project develops an analytic framework that enables an investigation into the framing of social justice and equity issues such as inclusion, equality, quality education, sharing of resources and access to learning opportunities in political speeches aligned with the development of the Australian Curriculum. Through this analysis, the study adopts a focus on constructions of educationally disadvantaged students and how the solutions of 'fixing' teachers and providing the 'right' curriculum are presented as resolutions to the perceived problem. In this way, it aims to work towards offering insights into political justifications for a national curriculum in Australia from a social justice perspective.
Abstract:
Background: A major challenge for assessing students' conceptual understanding of STEM subjects is the capacity of assessment tools to reliably and robustly evaluate student thinking and reasoning. Multiple-choice tests are typically used to assess student learning and are designed to include distractors that can indicate students' incomplete understanding of a topic or concept based on which distractor the student selects. However, these tests fail to provide the critical information uncovering the how and why of students' reasoning for their multiple-choice selections. Open-ended or structured response questions are one method for capturing higher-level thinking, but are often costly in terms of time and attention to properly assess student responses.
Purpose: The goal of this study is to evaluate methods for automatically assessing open-ended responses, e.g. students' written explanations and reasoning for multiple-choice selections.
Design/Method: We incorporated an open response component for an online signals and systems multiple-choice test to capture written explanations of students' selections. The effectiveness of an automated approach for identifying and assessing student conceptual understanding was evaluated by comparing results of lexical analysis software packages (Leximancer and NVivo) to expert human analysis of student responses. In order to understand and delineate the process for effectively analysing text provided by students, the researchers evaluated strengths and weaknesses of both the human and automated approaches.
Results: Human and automated analyses revealed both correct and incorrect associations for certain conceptual areas. For some questions, the analyses revealed associations that were not anticipated or included in the distractor selections, showing how multiple-choice questions alone fail to capture the comprehensive picture of student understanding. The comparison of textual analysis methods revealed the capability of automated lexical analysis software to assist in the identification of concepts and their relationships for large textual data sets. We also identified several challenges to using automated analysis, as well as to manual and computer-assisted analysis.
Conclusions: This study highlighted the usefulness of incorporating and analysing students' reasoning or explanations in understanding how students think about certain conceptual ideas. The ultimate value of automating the evaluation of written explanations is that it can be applied more frequently and at various stages of instruction to formatively evaluate conceptual understanding and engage students in reflective learning.
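One way to quantify the human-versus-automated comparison described above is chance-corrected inter-rater agreement. The sketch below is an illustrative assumption, not the study's procedure (which compared Leximancer and NVivo output against expert analysis); the concept labels are invented:

```python
from sklearn.metrics import cohen_kappa_score

# Hypothetical concept labels assigned to the same five student
# explanations by an expert coder and by automated lexical analysis.
human_codes     = ["frequency", "phase", "frequency", "sampling", "phase"]
automated_codes = ["frequency", "phase", "sampling",  "sampling", "phase"]

# Cohen's kappa corrects raw percent agreement for agreement
# expected by chance alone.
print(f"kappa = {cohen_kappa_score(human_codes, automated_codes):.2f}")
```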
Abstract:
Assessing students' conceptual understanding of technical content is important for instructors as well as students to learn content and apply knowledge in various contexts. Concept inventories that identify possible misconceptions through validated multiple-choice questions are helpful in identifying a misconception that may exist, but do not provide a meaningful assessment of why it exists or the nature of the students' understanding. We conducted a case study with undergraduate students in an electrical engineering course by testing a validated multiple-choice concept inventory that we augmented with a component for students to provide written explanations for their multiple-choice selections. Results revealed that correctly chosen multiple-choice selections did not always match correct conceptual understanding for questions testing a specific concept. The addition of a text response to multiple-choice concept inventory questions provided an enhanced and meaningful assessment of students' conceptual understanding and highlighted variables associated with current concept inventories or multiple-choice questions.
Abstract:
This thesis addressed issues that have prevented qualitative researchers from using thematic discovery algorithms. The central hypothesis evaluated whether allowing qualitative researchers to interact with thematic discovery algorithms and incorporate domain knowledge improved their ability to address research questions and trust the derived themes. Non-negative Matrix Factorisation and Latent Dirichlet Allocation find latent themes within document collections, but these algorithms are rarely used because qualitative researchers do not trust, and cannot interact with, the themes that are automatically generated. The research determined the types of interactivity that qualitative researchers require and then evaluated interactive algorithms that matched these requirements. Theoretical contributions included the articulation of design guidelines for interactive thematic discovery algorithms, the development of an Evaluation Model and a Conceptual Framework for Interactive Content Analysis.
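As a rough illustration of the non-interactive baseline the thesis starts from, the sketch below runs Non-negative Matrix Factorisation over a toy document collection with scikit-learn; the documents and the theme count are invented, and the interactive seeding, splitting and merging of themes that the thesis evaluates is only noted in comments:

```python
from sklearn.decomposition import NMF
from sklearn.feature_extraction.text import TfidfVectorizer

docs = [
    "nurses reported workload stress and fatigue on night shifts",
    "patients valued clear communication from nursing staff",
    "shift fatigue and stress affected patient communication",
    "staff workload rosters and night shift scheduling",
]

# Plain NMF theme discovery; an interactive variant would let the
# researcher seed themes with domain terms, or split/merge themes,
# and then re-run the factorisation.
tfidf = TfidfVectorizer(stop_words="english")
X = tfidf.fit_transform(docs)
model = NMF(n_components=2, init="nndsvda", random_state=0)
doc_theme = model.fit_transform(X)  # document-by-theme loadings

terms = tfidf.get_feature_names_out()
for k, weights in enumerate(model.components_):
    top = weights.argsort()[::-1][:4]
    print(f"theme {k}:", ", ".join(terms[i] for i in top))
```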
Abstract:
Internet services are an important part of daily activities for most of us. These services come with sophisticated authentication requirements which may not be handled by average Internet users. The management of secure passwords, for example, creates an extra overhead which is often neglected for usability reasons. Furthermore, password-based approaches are applicable only for initial logins and do not protect against unlocked-workstation attacks. In this paper, we provide a non-intrusive identity verification scheme based on behavioural biometrics, where keystroke dynamics based on free text is used continuously to verify the identity of a user in real time. We improve existing keystroke dynamics based verification schemes in four aspects. First, we improve scalability by using a constant number of users, instead of the whole user space, to verify the identity of the target user. Second, we provide an adaptive user model which enables our solution to take changes in user behaviour into consideration in the verification decision. Third, we identify a new distance measure which enables us to verify the identity of a user with shorter text. Fourth, we decrease the number of false results. Our solution is evaluated on a data set which we collected from users while they were interacting with their mailboxes during their daily activities.
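A minimal sketch of the free-text idea, assuming digraph (key-pair) latencies as the behavioural features; the paper's actual features, distance measure and thresholds are not reproduced here, and all timings below are invented:

```python
import statistics

def digraph_profile(timings):
    """Reduce observed key-to-key latencies (ms) per digraph,
    e.g. {('t', 'h'): [110, 95]}, to mean latencies."""
    return {dg: statistics.mean(vals) for dg, vals in timings.items()}

def profile_distance(enrolled, sample):
    """Mean absolute latency difference over shared digraphs; a
    measure needing few shared digraphs can verify on shorter text."""
    shared = enrolled.keys() & sample.keys()
    if not shared:
        return float("inf")
    return sum(abs(enrolled[d] - sample[d]) for d in shared) / len(shared)

enrolled = digraph_profile({("t", "h"): [105, 110], ("h", "e"): [90, 95]})
sample   = digraph_profile({("t", "h"): [180], ("h", "e"): [160]})

THRESHOLD = 40  # ms; in practice tuned on held-out data
distance = profile_distance(enrolled, sample)
print("verified" if distance < THRESHOLD else "possible intruder")
```

An adaptive user model, as described above, would fold each verified sample back into the enrolled profile so that gradual changes in typing behaviour do not trigger false rejections.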
Abstract:
Background: Prescription medicine samples provided by pharmaceutical companies are predominantly newer and more expensive products. The range of samples provided to practices may not represent the drugs that the doctors desire to have available. Few studies have used a qualitative design to explore the reasons behind sample use.
Objective: The aim of this study was to explore the opinions of a variety of Australian key informants about prescription medicine samples, using a qualitative methodology.
Methods: Twenty-three organizations involved in quality use of medicines in Australia were identified, based on the authors' previous knowledge. Each organization was invited to nominate 1 or 2 representatives to participate in semistructured interviews utilizing seeding questions. Each interview was recorded and transcribed verbatim. Leximancer v2.25 text analysis software (Leximancer Pty Ltd., Jindalee, Queensland, Australia) was used for textual analysis. The top 10 concepts from each analysis group were interrogated back to the original transcript text to determine the main emergent opinions.
Results: A total of 18 key interviewees representing 16 organizations participated. Samples, patient, doctor, and medicines were the major concepts among general opinions about samples. The concept drug became more frequent and the concept companies appeared when marketing issues were discussed. The Australian Pharmaceutical Benefits Scheme and cost were more prevalent in discussions about alternative sample distribution models, indicating interviewees were cognizant of budgetary implications. Key interviewee opinions added richness to the single-word concepts extracted by Leximancer.
Conclusions: Participants recognized that prescription medicine samples have an influence on quality use of medicines and play a role in the marketing of medicines. They also believed that alternative distribution systems for samples could provide benefits. The cost of a noncommercial system for distributing samples or starter packs was a concern. These data will be used to design further research investigating alternative models for distribution of samples.
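Leximancer's concept extraction is proprietary, but the flavour of its ranked concept lists and concept co-occurrence can be sketched crudely; the transcript snippet and stop-word list below are invented for illustration:

```python
from collections import Counter
from itertools import combinations

transcript = (
    "Doctors said samples help patients start medicines quickly. "
    "Companies use samples for marketing to doctors. "
    "Patients worry about the cost of medicines."
)

stop = {"the", "of", "to", "for", "about", "said", "use", "help", "start"}
sentences = [s.lower().rstrip(".").split() for s in transcript.split(". ")]

# Concept frequency, roughly analogous to a ranked concept list.
freq = Counter(w for s in sentences for w in s if w not in stop)
print("top concepts:", freq.most_common(4))

# Sentence-level co-occurrence, a crude stand-in for a concept map.
cooc = Counter()
for s in sentences:
    for a, b in combinations(sorted(set(s) - stop), 2):
        cooc[(a, b)] += 1
print("strongest links:", cooc.most_common(3))
```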
Abstract:
In the aftermath of the global financial crisis, effective risk management (RM) and its communication to stakeholders are now considered essential components of corporate governance. However, despite the importance of RM communication, it is still unclear how and to what extent disclosures in financial reports can achieve effective communication of RM activities. The situation is hampered by the paucity of international RM research that captures institutional differences in corporate governance standards. The Australian setting provides an ideal environment in which to examine RM communication because the Australian Securities Exchange (ASX) has since 2007 recommended RM disclosures under its principle-based governance rules. The recommendations are contained in Principle 7 of the Corporate Governance Principles and Recommendations (ASX CGPR). Accordingly, to assess the effectiveness of the ASX's RM governance principle, this study examines the nature and extent of RM disclosures reported by major ASX-listed firms. Using a mixed-method approach (thematic content analysis and a series of regression analyses), we find widespread divergence in disclosure practices and low conformance with the Principle 7 recommendations. Certain corporate governance mechanisms appear to influence some categories of RM disclosure, but equity risk has surprisingly little explanatory power. These results suggest that the RM disclosure practices observed in the Australian setting may not be meeting the objectives of regulators and the needs of stakeholders.
Abstract:
Engineers must have a deep and accurate conceptual understanding of their field, and concept inventories (CIs) are one method of assessing conceptual understanding and providing formative feedback. Current CI tests use multiple-choice questions (MCQs) to identify misconceptions and have undergone reliability and validity testing to assess conceptual understanding. However, they do not readily provide diagnostic information about students' reasoning and therefore do not effectively point to specific actions that can be taken to improve student learning. We piloted the textual component of our diagnostic CI on electrical engineering students using items from the signals and systems CI. We then analysed the textual responses using automated lexical analysis software to test the effectiveness of these types of software, and interviewed the students regarding their experience using the textual component. Results from the automated text analysis revealed that students held both incorrect and correct ideas for certain conceptual areas and provided indications of student misconceptions. User feedback also revealed that the inclusion of the textual component is helpful to students in assessing and reflecting on their own understanding.
Abstract:
We explored how people negotiate, and respond to, identity transitions following a diagnosis of pancreatic cancer. Interviews with 19 people with pancreatic cancer were analysed using thematic discourse analysis. While discursively negotiating two transitions, “moving from healthy to ill” and “moving from active treatment to end-of-life care”, participants positioned themselves as “in control”, “optimistic” and managing their health and illness. In the absence of other discourses or “models” of life post-cancer, many people draw on the promise of survival. Moving away from “survivorship” may assist people with advanced cancer to make sense of their lives in a short timeframe.
Abstract:
Experience has shown that developing business applications based on text analysis normally requires a lot of time and expertise in the field of computational linguistics. Several approaches to integrating text analysis systems with business applications have been proposed, but so far there has been no coordinated approach that would enable building scalable and flexible applications of text analysis in enterprise scenarios. In this paper, a service-oriented architecture for text processing applications in the business domain is introduced. It comprises various groups of processing components and knowledge resources. The architecture, created as a result of our experiences with building natural language processing applications in business scenarios, allows for the reuse of text analysis and other components, and facilitates the development of business applications. We verify our approach by showing how the proposed architecture can be applied to create a text-analytics-enabled business application that addresses a concrete business scenario. © 2010 IEEE.
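A hedged sketch of the component-and-pipeline idea behind such a service-oriented architecture; the component names, the document dictionary shape, and the placeholder heuristics are invented rather than taken from the paper:

```python
from typing import Protocol

class TextProcessor(Protocol):
    """Common interface that lets analysis components be reused and
    recomposed across business applications."""
    def process(self, doc: dict) -> dict: ...

class LanguageDetector:
    def process(self, doc: dict) -> dict:
        # Placeholder heuristic; a real component would call an NLP service.
        doc["lang"] = "de" if " der " in f' {doc["text"]} ' else "en"
        return doc

class KeywordExtractor:
    def process(self, doc: dict) -> dict:
        doc["keywords"] = [w for w in doc["text"].split() if len(w) > 7]
        return doc

class Pipeline:
    """Composes processing components for a concrete business scenario,
    mirroring the reuse the architecture above is meant to enable."""
    def __init__(self, *stages: TextProcessor):
        self.stages = stages
    def run(self, text: str) -> dict:
        doc = {"text": text}
        for stage in self.stages:
            doc = stage.process(doc)
        return doc

result = Pipeline(LanguageDetector(), KeywordExtractor()).run(
    "Customer complaint regarding delayed shipment processing")
print(result["lang"], result["keywords"])
```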
Abstract:
Purpose: Communication of risk management (RM) practices is a critical component of good corporate governance. Research to date has been of little benefit in informing regulators internationally. This paper seeks to contribute to the literature by investigating how listed Australian companies, in a setting where disclosures are explicitly required by the ASX corporate governance framework, disclose RM information in the corporate governance statements within annual reports.
Design/methodology/approach: To address the study's research questions and related hypotheses, we examine the top 300 ASX-listed companies by market capitalisation at 30 June 2010. For these firms, we identify, code and categorise RM disclosures made in the annual reports according to the disclosure categories specified in the Australian Securities Exchange Corporate Governance Principles and Recommendations (ASX CGPR). The derived data are then examined using a comprehensive approach comprising thematic content analysis and regression analysis.
Findings: The results indicate widespread divergence in disclosure practices and low conformance with Principle 7 of the ASX CGPR. This result suggests that companies are not disclosing all 'material business risks', possibly due to ignorance at the board level or to the intentional withholding of sensitive information from financial statement users. The findings also show mixed results across the factors expected to influence disclosure behaviour. Notably, the presence of a risk committee (RC) (in particular, a standalone RC) and a technology committee (TC) are found to be associated with improved levels of disclosure. We do not find evidence that company risk measures (as proxied by equity beta and the market-to-book ratio) are significantly associated with greater levels of RM disclosure. Also, contrary to common findings in the disclosure literature, factors such as board independence and expertise, audit committee independence, and the use of a Big-4 auditor do not seem to affect the level of RM disclosure in the Australian context.
Research limitations/implications: The study is limited by the sample and study-period selection, as the RM disclosures of only the largest (top 300) ASX firms are examined for the fiscal year 2010. Thus, the findings may not be generalisable to smaller firms or to earlier/later years. Also, the findings may have limited applicability in other jurisdictions with different regulatory environments.
Practical implications: The study's findings suggest that insufficient attention has been applied to RM disclosures by listed companies in Australia. These results suggest that the RM disclosure practices observed in the Australian setting may not be meeting the objectives of regulators and the needs of stakeholders.
Originality/value: Despite the importance of RM communication, it is unclear whether disclosures in annual financial reports achieve this communication. The Australian setting provides an ideal environment in which to examine the nature and extent of RM communication, as the Australian Securities Exchange (ASX) has recommended that RM disclosures follow Principle 7 of its principle-based governance rules since 2007.
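The regression side of such a study can be sketched as follows, assuming a coded RM disclosure score regressed on governance and risk variables; the data are synthetic and the variable names illustrative, not the study's actual model:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 300  # the study examined the top 300 ASX-listed firms

# Synthetic stand-ins for coded disclosure scores and the
# governance/risk variables discussed above.
df = pd.DataFrame({
    "risk_committee":     rng.integers(0, 2, n),
    "tech_committee":     rng.integers(0, 2, n),
    "board_independence": rng.uniform(0.3, 1.0, n),
    "equity_beta":        rng.normal(1.0, 0.3, n),
})
df["rm_disclosure"] = (2 + 3 * df["risk_committee"]
                       + 1.5 * df["tech_committee"] + rng.normal(0, 1, n))

X = sm.add_constant(df[["risk_committee", "tech_committee",
                        "board_independence", "equity_beta"]])
fit = sm.OLS(df["rm_disclosure"], X).fit()
print(fit.summary().tables[1])  # coefficient on each explanatory variable
```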
Abstract:
Concept inventory tests are one method to evaluate conceptual understanding and identify possible misconceptions. The multiple-choice question format, offering a choice between a correct selection and common misconceptions, can provide an assessment of students' conceptual understanding in various dimensions. Misconceptions of some engineering concepts exist due to a lack of mental frameworks, or schemas, for these types of concepts or conceptual areas. This study incorporated an open textual response component in a multiple-choice concept inventory test to capture written explanations of students' selections. The study's goal was to identify, through text analysis of student responses, the types and categorizations of concepts in these explanations that had not been uncovered by the distractor selections. The analysis of the textual explanations of a subset of the discrete-time signals and systems concept inventory questions revealed that students have difficulty conceptually explaining several dimensions of signal processing. This contributed to their inability to provide a clear explanation of the underlying concepts, such as mathematical concepts. The methods used in this study evaluate students' understanding of signals and systems concepts through their ability to express understanding in written text. This may present a bias for students with strong written communication skills. This study presents a framework for extracting and identifying the types of concepts students use to express their reasoning when answering conceptual questions.
Abstract:
Historically, asset management focused primarily on the reliability and maintainability of assets; organisations have since accepted the notion that a much larger array of processes governs the life and use of an asset. Accordingly, asset management's new paradigm seeks a holistic, multi-disciplinary approach to the management of physical assets. A growing number of organisations now seek to develop integrated asset management frameworks and bodies of knowledge. This research seeks to complement the existing outputs of these organisations through the development of an asset management ontology. Ontologies define a common vocabulary for both researchers and practitioners who need to share information in a chosen domain. A by-product of ontology development is the realisation of a process architecture, of which there is also no evidence in the published literature. To develop the ontology and the subsequent asset management process architecture, a standard knowledge-engineering methodology is followed. This involves text analysis, definition and classification of terms, and visualisation through an appropriate tool (in this case, the Protégé application was used). The result of this research is a first attempt at developing an asset management ontology and process architecture.
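A tiny, hedged sketch of what the term-classification step might yield, expressed with rdflib rather than Protégé (the tool the study actually used); the class names and IRI are invented:

```python
from rdflib import Graph, Literal, Namespace, RDF, RDFS

AM = Namespace("http://example.org/asset-management#")  # illustrative IRI
g = Graph()
g.bind("am", AM)

# A small fragment of the kind of class hierarchy an asset
# management ontology might define.
for cls in ("Asset", "PhysicalAsset", "MaintenanceProcess", "Stakeholder"):
    g.add((AM[cls], RDF.type, RDFS.Class))
g.add((AM.PhysicalAsset, RDFS.subClassOf, AM.Asset))
g.add((AM.MaintenanceProcess, RDFS.comment,
       Literal("A process governing the life and use of an asset")))

print(g.serialize(format="turtle"))
```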
Abstract:
Short-termism among firms, the tendency to excessively discount long-term benefits and favour less valuable short-term benefits, has been a prominent issue in business and public policy debates, but research to date has been inconclusive. We study how managers frame, interpret, and resolve problems of intertemporal choice in actual decisions by using computer-aided text analysis to measure the frequency of top-team temporal references in 1653 listed Australian firms between 1992 and 2005. Contrary to short-termism arguments, we find evidence of a significant general increase in Future orientation and a significant decrease in Current/Past orientation. We also show that top teams' temporal orientation is related to their strategic orientation, specifically the extent to which they focus on Innovation-Expansion and Capacity Building.
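Dictionary-based computer-aided text analysis of this kind can be sketched minimally; the word lists below are invented placeholders for the validated temporal-reference dictionaries such studies rely on:

```python
# Tiny placeholder dictionaries; real studies use validated word lists.
FUTURE = {"will", "shall", "plan", "expect", "forecast", "anticipate"}
PAST   = {"was", "were", "had", "achieved", "delivered", "completed"}

def temporal_orientation(text: str) -> dict:
    """Share of words signalling future versus current/past focus."""
    words = [w.strip(".,;") for w in text.lower().split()]
    total = len(words) or 1
    return {
        "future": sum(w in FUTURE for w in words) / total,
        "past":   sum(w in PAST for w in words) / total,
    }

letter = ("We expect strong growth and plan to expand capacity. "
          "Last year we delivered record results.")
print(temporal_orientation(letter))
```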