428 results for electronic information


Relevance: 20.00%

Abstract:

This qualitative study views international students as information-using learners, through an information literacy lens. Focusing on the experiences of 25 international students at two Australian universities, the study investigates how international students use online information resources to learn, and identifies associated information literacy learning needs. An expanded critical incident approach provided the methodological framework for the study. Building on critical incident technique, this approach integrated a variety of concepts and research strategies. The investigation centred on real-life critical incidents experienced by the international students whilst using online resources for assignment purposes. Data collection involved semi-structured interviews and an observed online resource-using task. Inductive data analysis and interpretation enabled the creation of a multifaceted word picture of international students using online resources and a set of critical findings about their information literacy learning needs. The study’s key findings reveal:

• the complexity of the international students’ experience of using online information resources to learn, which involves an interplay of their interactions with online resources, their affective and reflective responses to using them, and the cultural and linguistic dimensions of their information use;
• the array of strengths as well as challenges that the international students experience in their information use and learning;
• an apparent information literacy imbalance between the international students’ more developed information skills and less developed critical and strategic approaches to using information;
• the need for enhanced information literacy education that responds to international students’ identified information literacy needs.
Responding to the findings, the study proposes an inclusive informed learning approach to support reflective information use and inclusive information literacy learning in culturally diverse higher education environments.

Relevance: 20.00%

Abstract:

This paper investigates self-Googling through the monitoring of users’ search engine activities, and adds to the few existing quantitative studies on this topic. We explore this phenomenon by answering the following questions: To what extent is self-Googling visible in search engine usage? Is any significant difference measurable between self-Googling queries and generic search queries? To what extent do self-Googling search requests match the selected personalised Web pages? To address these questions we draw on the theory of narcissism to help define self-Googling, and present the results of a 14-month online experiment using Google search engine usage data.

Relevance: 20.00%

Abstract:

An information filtering (IF) system monitors an incoming document stream to find the documents that match the information needs specified by the user profiles. Learning to use the user profiles effectively is one of the most challenging tasks when developing an IF system. With the document selection criteria better defined based on the users’ needs, filtering large streams of information can be more efficient and effective. To learn the user profiles, term-based approaches have been widely used in the IF community because of their simplicity and directness, and they are relatively well established. However, these approaches have problems when dealing with polysemy and synonymy, which often lead to an information overload problem. Recently, pattern-based approaches (or Pattern Taxonomy Models (PTM) [160]) have been proposed for IF by the data mining community. These approaches are better at capturing semantic information and have shown encouraging results for improving the effectiveness of IF systems. On the other hand, pattern discovery from large data streams is not computationally efficient, and these approaches must also deal with low-frequency pattern issues. The measures used by data mining techniques (for example, “support” and “confidence”) to learn the profile have turned out to be unsuitable for filtering, as they can lead to a mismatch problem. This thesis uses rough set-based (term-based) reasoning and a pattern mining approach as a unified framework for information filtering to overcome the aforementioned problems. The system consists of two stages: a topic filtering stage and a pattern mining stage. The topic filtering stage is intended to minimise information overload by filtering out the most likely irrelevant information based on the user profiles. A novel user-profile learning method and a theoretical model of the threshold setting have been developed using rough set decision theory.
The second stage (pattern mining) aims at solving the problem of information mismatch. This stage is precision-oriented. A new document-ranking function has been derived by exploiting the patterns in the pattern taxonomy, with the most likely relevant documents assigned higher scores by the ranking function. Because relatively few documents remain after the first stage, the computational cost is markedly reduced; at the same time, pattern discovery yields more accurate results. The overall performance of the system is improved significantly. The new two-stage information filtering model has been evaluated by extensive experiments. Tests were based on well-known IR benchmarking processes, using the latest version of the Reuters dataset, namely the Reuters Corpus Volume 1 (RCV1). The performance of the new two-stage model was compared with both term-based and data mining-based IF models. The results demonstrate that the proposed information filtering system significantly outperforms other IF systems, such as the traditional Rocchio IF model, state-of-the-art term-based models including BM25 and Support Vector Machines (SVM), and the Pattern Taxonomy Model (PTM).
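The two-stage idea can be illustrated with a toy sketch. The function names, the overlap score and the phrase "patterns" below are assumptions for illustration only, not the thesis's actual rough-set or pattern-taxonomy algorithms:

```python
# Hypothetical two-stage filter: stage 1 discards likely irrelevant
# documents by term overlap with the user profile; stage 2 ranks the
# survivors by how many profile phrases ("patterns") they contain.

def topic_filter(docs, profile_terms, threshold=0.2):
    """Stage 1: keep documents whose overlap with the profile terms
    meets a threshold (minimises information overload)."""
    kept = []
    for doc in docs:
        words = set(doc.lower().split())
        overlap = len(words & profile_terms) / len(profile_terms)
        if overlap >= threshold:
            kept.append(doc)
    return kept

def pattern_rank(docs, patterns):
    """Stage 2: rank the surviving documents by phrase matches
    (precision-oriented)."""
    scored = [(sum(p in doc.lower() for p in patterns), doc) for doc in docs]
    return [d for s, d in sorted(scored, key=lambda x: -x[0])]

docs = [
    "Rough set theory applied to information filtering systems",
    "Football match results from the weekend",
    "Pattern mining improves information filtering precision",
]
profile = {"information", "filtering", "pattern", "mining"}
stage1 = topic_filter(docs, profile)
ranked = pattern_rank(stage1, ["pattern mining", "information filtering"])
```

Because stage 1 removes the off-topic document before any pattern matching happens, the more expensive stage 2 only runs on a reduced set, which mirrors the computational argument made above.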

Relevance: 20.00%

Abstract:

In this paper, we propose an unsupervised segmentation approach, named "n-gram mutual information" (NGMI), which segments Chinese documents into n-character words or phrases using language statistics drawn from the Chinese Wikipedia corpus. The approach alleviates the tremendous effort required to prepare and maintain manually segmented Chinese text for training, and to manually maintain ever-expanding lexicons. Previously, mutual information was used to achieve automated segmentation into 2-character words. NGMI extends this approach to handle longer n-character words. Experiments with heterogeneous documents from the Chinese Wikipedia collection show good results.
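The underlying bigram case can be sketched as follows. The toy Latin-letter corpus and the PMI threshold are assumptions for illustration only; NGMI itself generalises beyond 2-character pairs and draws its statistics from Chinese Wikipedia:

```python
import math
from collections import Counter

def char_pmi(corpus):
    """Pointwise mutual information for each adjacent character pair,
    estimated from raw corpus counts (a toy stand-in for corpus-scale
    language statistics)."""
    chars = Counter(corpus)
    pairs = Counter(corpus[i:i + 2] for i in range(len(corpus) - 1))
    n_c, n_p = sum(chars.values()), sum(pairs.values())
    return {
        pair: math.log2((cnt / n_p) /
                        ((chars[pair[0]] / n_c) * (chars[pair[1]] / n_c)))
        for pair, cnt in pairs.items()
    }

def segment(text, pmi, threshold=0.0):
    """Insert a boundary wherever the adjacent-pair PMI falls below the
    threshold; pairs never seen in the corpus always split."""
    out = [text[0]]
    for i in range(1, len(text)):
        if pmi.get(text[i - 1] + text[i], float("-inf")) < threshold:
            out.append(" ")
        out.append(text[i])
    return "".join(out)

# "ab" and "cd" co-occur strongly in this toy corpus; "bc" does not,
# so the boundary falls between them.
pmi = char_pmi("ababababcdcd")
result = segment("abcd", pmi, threshold=1.0)
```

Characters that habitually occur together score high PMI and stay joined; a low-PMI pair marks a likely word boundary, which is the intuition the n-gram extension builds on.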

Relevance: 20.00%

Abstract:

The CDIO (Conceive-Design-Implement-Operate) Initiative has been globally recognised as an enabler for engineering education reform. With the CDIO process, the CDIO Standards and the CDIO Syllabus, many scholarly contributions have been made around cultural change, curriculum reform and learning environments. In the Australasian region, reform is gaining significant momentum within the engineering education community, the profession, and higher education institutions. This paper presents the CDIO Syllabus cast into the Australian context by mapping it to the Engineers Australia Graduate Attributes, the Washington Accord Graduate Attributes and the Queensland University of Technology Graduate Capabilities. Furthermore, in recognition that many secondary schools and technical training institutions offer introductory engineering technology subjects, this paper presents an extended self-rating framework suited to recognising developing levels of proficiency at a preparatory level. A demonstrator mapping tool has been created to illustrate the application of this extended graduate attribute mapping framework as a precursor to an integrated curriculum information model.

Relevance: 20.00%

Abstract:

The evolution of organisms that cause healthcare acquired infections (HAI) puts extra stress on hospitals already struggling with rising costs and demands for greater productivity and cost containment. Infection control can save scarce resources, lives, and possibly a facility’s reputation, but statistics and epidemiology are not always sufficient to make the case for the added expense. Economics and Preventing Healthcare Acquired Infection presents a rigorous analytic framework for dealing with this increasingly serious problem.

Engagingly written for the economics non-specialist, and brimming with tables, charts, and case examples, the book lays out the concepts of economic analysis in clear, real-world terms so that infection control professionals or infection preventionists will gain competence in developing analyses of their own, and be confident in the arguments they present to decision-makers. The authors:

• ground the reader in the basic principles and language of economics;
• explain the role of health economists in general and in terms of infection prevention and control;
• introduce the concept of economic appraisal, showing how to frame the problem, evaluate and use data, and account for uncertainty;
• review methods of estimating and interpreting the costs and health benefits of HAI control programs and prevention methods;
• walk the reader through a published economic appraisal of an infection reduction program;
• identify current and emerging applications of economics in infection control.

Economics and Preventing Healthcare Acquired Infection is a unique resource for practitioners and researchers in infection prevention, control and healthcare economics. It offers a valuable alternate perspective for professionals in health services research, healthcare epidemiology, healthcare management, and hospital administration.

Written for: professionals and researchers in infection control, health services research, hospital epidemiology, healthcare economics, healthcare management and hospital administration; Association of Professionals in Infection Control (APIC); Society for Healthcare Epidemiologists of America (SHEA).

Relevance: 20.00%

Abstract:

Objective: To summarise the extent to which narrative text fields in administrative health data are used to gather information about the event resulting in presentation to a health care provider for treatment of an injury, and to highlight best-practice approaches to interrogating narrative text for injury surveillance purposes.

Design: Systematic review.

Data sources: Electronic databases searched included CINAHL, Google Scholar, Medline, Proquest, PubMed and PubMed Central. Snowballing strategies were employed by searching the bibliographies of retrieved references to identify relevant associated articles.

Selection criteria: Papers were selected if the study used a health-related database and if the study objectives were to (a) use text fields to identify injury cases or to extract additional information on injury circumstances not available from coded data, (b) use text fields to assess the accuracy of coded data fields for injury-related cases, or (c) describe methods or approaches for extracting injury information from text fields.

Methods: The papers identified through the search were independently screened by two authors for inclusion, resulting in 41 papers selected for review. Due to heterogeneity between studies, meta-analysis was not performed.

Results: The majority of papers reviewed focused on describing injury epidemiology trends using coded data supplemented by text fields (28 papers), with these studies demonstrating the value of text data for providing more specific information beyond what had been coded, enabling case selection or providing circumstantial information. Caveats were expressed about the consistency and completeness of recorded text information, which result in underestimates when using these data. Four coding validation papers were reviewed, showing the utility of text data for validating and checking the accuracy of coded data. Seven studies (9 papers) described methods for interrogating injury text fields for systematic extraction of information, using a combination of manual and semi-automated methods to refine and develop algorithms for extracting and classifying coded data from text. Quality assurance approaches to assessing the robustness of the extraction methods were discussed in only 8 of the epidemiology papers and 1 of the coding validation papers. All of the text interrogation methodology papers described systematic approaches to ensuring the quality of the approach.

Conclusions: Manual review and coding approaches, text search methods, and statistical tools have been used to extract data from narrative text and translate it into useable, detailed injury event information. These techniques can be, and have been, applied to administrative datasets to identify specific injury types and add value to previously coded injury datasets. Only a few studies thoroughly described the methods used for text mining, and fewer than half of the reviewed studies described quality assurance methods for ensuring the robustness of the approach. New techniques utilising semi-automated computerised approaches and Bayesian/clustering statistical methods offer the potential to further develop and standardise the analysis of narrative text for injury surveillance.
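As a minimal illustration of the keyword-based case-identification methods the review describes, the sketch below flags possible fall-related cases in free-text narratives. The keyword list and record layout are hypothetical; real studies refine such rules iteratively against manually coded cases:

```python
import re

# Hypothetical keyword rule for fall-related injuries; word boundaries
# prevent matches inside longer words (e.g. "rainfall").
FALL_PATTERN = re.compile(r"\b(fell|fall|slipped|tripped)\b", re.IGNORECASE)

def flag_fall_cases(records):
    """Return the records whose narrative text matches the fall keywords."""
    return [r for r in records if FALL_PATTERN.search(r["narrative"])]

records = [
    {"id": 1, "narrative": "Patient slipped on wet floor at home"},
    {"id": 2, "narrative": "Burn from hot oil while cooking"},
    {"id": 3, "narrative": "Fell from ladder while cleaning gutters"},
]
falls = flag_fall_cases(records)
```

In practice such a rule would only be a first pass; the reviewed studies pair text search with manual review or statistical classification precisely because narrative recording is inconsistent and incomplete.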

Relevance: 20.00%

Abstract:

The Queensland Injury Surveillance Unit (QISU) has been collecting and analysing injury data in Queensland since 1988. QISU data is collected from participating emergency departments (EDs) in urban, rural and remote areas of Queensland. Using this data, QISU produces several injury bulletins per year on selected topics, providing a picture of Queensland injury, and setting this in the context of relevant local, national and international research and policy. These bulletins are used by numerous government and non-government groups to inform injury prevention and practice throughout the state. QISU bulletins are also used by local and state media to inform the general public of injury risk and prevention strategies. In addition to producing the bulletins, QISU regularly responds to requests for information from a variety of sources. These requests often require additional analysis of QISU data to tailor the response to the needs of the end user. This edition of the bulletin reviews 5 years of information requests to QISU.

Relevance: 20.00%

Abstract:

1. Ecological data sets often use clustered measurements or repeated sampling in a longitudinal design. Choosing the correct covariance structure is an important step in the analysis of such data, as the covariance describes the degree of similarity among the repeated observations.
2. Three methods for choosing the covariance are the Akaike information criterion (AIC), the quasi-information criterion (QIC), and the deviance information criterion (DIC). We compared the methods using a simulation study and a data set that explored the effects of forest fragmentation on avian species richness over 15 years.
3. The overall success rate was 80.6% for the AIC, 29.4% for the QIC and 81.6% for the DIC. For the forest fragmentation study, the AIC and DIC selected the unstructured covariance, whereas the QIC selected the simpler autoregressive covariance. Graphical diagnostics suggested that the unstructured covariance was probably correct.
4. We recommend using the DIC for selecting the correct covariance structure.
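As a reminder of the mechanics behind one of these criteria: the AIC for a fitted model is 2k - 2 ln L, where k is the number of parameters and ln L the maximised log-likelihood, with lower values preferred. The log-likelihoods below are invented for illustration and are not from the study:

```python
def aic(log_likelihood, n_params):
    """Akaike information criterion: 2k - 2 ln L (lower is better)."""
    return 2 * n_params - 2 * log_likelihood

# Hypothetical fits: an unstructured covariance spends many parameters
# but fits better; the autoregressive covariance is thriftier.
candidates = {
    "unstructured": aic(log_likelihood=-120.0, n_params=15),
    "autoregressive": aic(log_likelihood=-135.0, n_params=2),
}
best = min(candidates, key=candidates.get)
```

The criterion trades goodness of fit against parameter count, which is why a richly parameterised unstructured covariance only wins when the likelihood gain is large enough, as in this invented example.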

Relevance: 20.00%

Abstract:

Recommender systems are one of the most effective tools for dealing with the information overload issue. Like explicit ratings and other implicit rating behaviours such as purchases, click streams, and browsing history, tagging information implies a user’s important personal interests and preferences, which can be used to recommend personalised items. This paper explores how to utilise tagging information to make personalised recommendations. Based on the distinctive three-dimensional relationships among users, tags and items, a new user-profiling and similarity-measure method is proposed. The experiments suggest that the proposed approach is better than traditional collaborative filtering recommender systems that use only rating data.
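A minimal sketch of tag-based user profiling and similarity, assuming a simple tag-frequency profile and cosine similarity; the paper's actual measure over the three-dimensional user-tag-item relationship is more elaborate:

```python
import math
from collections import Counter

def tag_profile(user_tags):
    """Build a user profile as tag frequencies from the user's
    tagging events."""
    return Counter(user_tags)

def cosine(p, q):
    """Cosine similarity between two tag-frequency profiles."""
    num = sum(p[t] * q[t] for t in set(p) & set(q))
    den = (math.sqrt(sum(v * v for v in p.values())) *
           math.sqrt(sum(v * v for v in q.values())))
    return num / den if den else 0.0

# Hypothetical users: alice and bob share tagging interests; carol
# does not, so items tagged by bob would be recommended to alice.
alice = tag_profile(["python", "ml", "python"])
bob = tag_profile(["python", "ml"])
carol = tag_profile(["cooking", "travel"])
```

Neighbours found this way can then contribute their items as candidate recommendations, the same neighbourhood idea as rating-based collaborative filtering but driven by tags.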

Relevance: 20.00%

Abstract:

An asset registry arguably forms the core system that needs to be in place before other systems can operate or interoperate. Most systems have rudimentary asset registry functionality that stores assets, relationships, or characteristics, and this leads to different asset management systems storing similar sets of data in multiple locations in an organisation. As organisations have slowly moved their information architecture toward a service-oriented architecture, they have also been consolidating their multiple data stores to form a “single point of truth”. As part of a strategy to integrate several asset management systems in an Australian railway organisation, a case study for developing a consolidated asset registry was conducted. A decision was made to build the platform on the MIMOSA OSA-EAI CRIS data model and the OSA-EAI Reference Data, due to the standard’s relative maturity and completeness. A pilot study of electrical traction equipment was selected, and the data sources feeding into the asset registry were primarily diagram-based. This paper presents the pitfalls encountered, approaches taken, and lessons learned during the development of the asset registry.
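The three record kinds named above (assets, relationships, characteristics) can be sketched as a minimal registry. This is an illustration only, not the MIMOSA OSA-EAI CRIS schema, and the equipment names are hypothetical:

```python
from dataclasses import dataclass, field

@dataclass
class Asset:
    """One registered asset with free-form characteristics."""
    asset_id: str
    asset_type: str
    characteristics: dict = field(default_factory=dict)

class AssetRegistry:
    """A single-point-of-truth store for assets and their relationships."""
    def __init__(self):
        self.assets = {}
        self.relationships = []  # (parent_id, child_id, relation)

    def register(self, asset):
        if asset.asset_id in self.assets:
            raise ValueError(f"duplicate asset {asset.asset_id}")
        self.assets[asset.asset_id] = asset

    def relate(self, parent_id, child_id, relation="contains"):
        self.relationships.append((parent_id, child_id, relation))

    def children(self, parent_id):
        return [c for p, c, _ in self.relationships if p == parent_id]

reg = AssetRegistry()
reg.register(Asset("SUB-01", "traction substation", {"voltage": "25 kV"}))
reg.register(Asset("TX-01", "traction transformer"))
reg.relate("SUB-01", "TX-01")
```

Enforcing uniqueness at registration is what makes consolidation meaningful: once every system reads and writes through one registry, the duplicated per-system stores described above can be retired.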

Relevance: 20.00%

Abstract:

This chapter considers how open content licences of copyright-protected materials, specifically Creative Commons (CC) licences, can be used by governments as a simple and effective mechanism to enable reuse of their public sector information (PSI), particularly where materials are made available online in digital form or distributed on disk.

Relevance: 20.00%

Abstract:

Information system (IS) success may be the most debated and important dependent variable in the IS field. The purpose of the present study is to address IS success by empirically assessing and comparing DeLone and McLean’s (1992) and Gable et al.’s (2008) models of IS success in the context of Australian universities. The two models have some commonalities and several important distinctions, and both integrate and interrelate multiple dimensions of IS success. It would therefore be useful to compare the models to see which is superior, as it is not clear how IS researchers should respond to this controversy.

Relevance: 20.00%

Abstract:

To assess the effects of information interventions which orient patients and their carers/family to a cancer care facility and the services available in the facility.