977 results for information metrics
Abstract:
What are the information practices of teen content creators? In the United States, over two thirds of teens have participated in creating and sharing content in online communities developed for the purpose of allowing users to be producers of content. This study investigates how teens participating in digital participatory communities find and use information, as well as how they experience that information. From this investigation emerged a model of their information practices while creating and sharing content such as film-making, visual artwork, storytelling, music, programming, and website design in digital participatory communities. The research uses grounded theory methodology in a social constructionist framework to investigate the research problem: what are the information practices of teen content creators? Data were gathered through semi-structured interviews and observation of teens' digital communities. Analysis occurred concurrently with data collection, and the principle of constant comparison was applied in analysis. As findings were constructed from the data, additional data were collected until a substantive theory was constructed and no new information emerged from data collection. The theory constructed from the data describes five information practices of teen content creators: learning community, negotiating aesthetic, negotiating control, negotiating capacity, and representing knowledge. Describing the five information practices requires three descriptive components: the community of practice, the experiences of information, and the information actions. The experiences of information include information as participation, inspiration, collaboration, process, and artifact. Information actions include activities in the categories of gathering, thinking, and creating. The experiences of information and the information actions intersect in the information practices, which are situated within a specific community of practice, such as a digital participatory community. Finally, the information practices interact with and build upon one another, and this is represented in a graphic model and accompanying explanation.
Abstract:
Background: High levels of distress and need for self-care information among patients commencing chemotherapy suggest that current prechemotherapy education is suboptimal. We conducted a randomised, controlled trial of a prechemotherapy education intervention (ChemoEd) to assess its impact on patient distress, treatment-related concerns, and the prevalence, severity, and bother of six chemotherapy side-effects. Patients and methods: One hundred and ninety-two breast, gastrointestinal, and haematologic cancer patients were recruited before the trial closed prematurely (original target 352). ChemoEd patients received a DVD, a question-prompt list, self-care information, an education consultation ≥24 h before first treatment (intervention 1), telephone follow-up 48 h after first treatment (intervention 2), and a face-to-face review immediately before second treatment (intervention 3). Patient outcomes were measured at baseline (T1: pre-education) and immediately preceding treatment cycles 1 (T2) and 3 (T3). Results: ChemoEd did not significantly reduce patient distress. However, a significant decrease in sensory/psychological (P = 0.027) and procedural (P = 0.03) concerns, as well as in the prevalence, severity, and bother of vomiting (all P = 0.001), was observed at T3. In addition, subgroup analysis of patients with elevated distress at T1 indicated a significant decrease (P = 0.035) at T2, but not at T3 (P = 0.055), in ChemoEd patients. Conclusions: ChemoEd holds promise for improving patient treatment-related concerns and some physical/psychological outcomes; however, further research on more diverse patient populations is required to ensure generalisability.
Abstract:
This chapter presents a comparative survey of recent key management (key distribution, discovery, establishment and update) solutions for wireless sensor networks. We consider both distributed and hierarchical sensor network architectures where unicast, multicast and broadcast types of communication take place. Probabilistic, deterministic and hybrid key management solutions are presented, and we determine a set of metrics to quantify their security properties and resource usage such as processing, storage and communication overheads. We provide a taxonomy of solutions, and identify trade-offs in these schemes to conclude that there is no one-size-fits-all solution.
Abstract:
This paper presents a graph-based method to weight medical concepts in documents for the purposes of information retrieval. Medical concepts are extracted from free-text documents using a state-of-the-art technique that maps n-grams to concepts from the SNOMED CT medical ontology. In our graph-based concept representation, concepts are vertices in a graph built from a document, and edges represent associations between concepts. This representation naturally captures dependencies between concepts, an important requirement for interpreting medical text and a feature lacking in bag-of-words representations. We apply existing graph-based term weighting methods to weight medical concepts. Using concepts rather than terms addresses vocabulary mismatch and encapsulates terms belonging to a single medical entity within a single concept. In addition, we further extend previous graph-based approaches by injecting domain knowledge that estimates the importance of a concept within the global medical domain. Retrieval experiments on the TREC Medical Records collection show our method outperforms both term and concept baselines. More generally, this work provides a means of integrating background knowledge contained in medical ontologies into data-driven information retrieval approaches.
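To make the graph-based weighting idea concrete, here is a minimal sketch that treats extracted concepts as vertices, co-occurrence as edges, and a PageRank-style centrality (one of several possible graph-based weighting methods) as the concept weight. The concepts, edges, and the use of networkx are illustrative assumptions; the SNOMED CT extraction step is not shown.

```python
# Toy sketch: weight a document's medical "concepts" by centrality in a
# concept co-occurrence graph. PageRank stands in for the graph-based term
# weighting methods mentioned above; concept extraction is mocked.
import networkx as nx

# Pretend these concept pairs co-occur within one document.
concept_edges = [
    ("myocardial infarction", "chest pain"),
    ("chest pain", "troponin"),
    ("myocardial infarction", "troponin"),
    ("troponin", "blood test"),
]

graph = nx.Graph()
graph.add_edges_from(concept_edges)

weights = nx.pagerank(graph)  # concept weights usable in retrieval scoring
for concept, weight in sorted(weights.items(), key=lambda kv: -kv[1]):
    print(f"{concept}: {weight:.3f}")
```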
Abstract:
Retrieving information from Twitter is always challenging due to its large volume, inconsistent writing and noise. Most existing information retrieval (IR) and text mining methods focus on term-based approaches, but suffer from problems of term variation such as polysemy and synonymy. This problem worsens when such methods are applied to Twitter because of the length limit on tweets. Over the years, people have held the hypothesis that pattern-based methods should perform better than term-based methods because they provide more context, but limited studies have been conducted to support this hypothesis, especially on Twitter. This paper presents an innovative framework to address the issue of performing IR on microblogs. The proposed framework discovers patterns in tweets as higher-level features and uses them to assign weights to low-level features (i.e. terms) based on their distributions in the higher-level features. We present experimental results on the TREC11 microblog dataset, showing that our proposed approach significantly outperforms the term-based methods Okapi BM25 and TF-IDF as well as pattern-based methods, measured by precision, recall and F-measure.
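For reference, the term-based baseline mentioned above (Okapi BM25) can be sketched in a few lines; the toy "tweet" collection and the default k1/b parameters below are illustrative, not the paper's experimental setup.

```python
# Minimal Okapi BM25 sketch over a toy collection; k1 and b are common
# defaults, and the documents and query are illustrative only.
import math
from collections import Counter

docs = [
    "earthquake hits city power out".split(),
    "new phone release battery life".split(),
    "city marathon road closures".split(),
]
N = len(docs)
avgdl = sum(len(d) for d in docs) / N
df = Counter(term for d in docs for term in set(d))  # document frequencies

def bm25(query, doc, k1=1.2, b=0.75):
    tf = Counter(doc)
    score = 0.0
    for term in query:
        if term not in tf:
            continue
        idf = math.log((N - df[term] + 0.5) / (df[term] + 0.5) + 1)
        norm = tf[term] + k1 * (1 - b + b * len(doc) / avgdl)
        score += idf * tf[term] * (k1 + 1) / norm
    return score

query = "city power".split()
for i, doc in enumerate(docs):
    print(i, round(bm25(query, doc), 3))
```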
Abstract:
New technologies and the pace of change in modern society mean changes for classroom teaching and learning. Information and communication technologies (ICTs) feature in everyday life and provide ample opportunities for enhancing classroom programs. This article outlines how ICTs complement curriculum implementation in one year two classroom. It suggests practical strategies demonstrating how teachers can make ICTs work for them and progressively teach children how to make ICTs work for them.
Abstract:
Entity-oriented search has become an essential component of modern search engines. It focuses on retrieving a list of entities, or information about specific entities, rather than documents. In this paper, we study the problem of finding entity-related information, referred to as attribute-value pairs, which plays a significant role in searching for target entities. We propose a novel decomposition framework combining reduced relations and a discriminative model, the Conditional Random Field (CRF), for automatically finding entity-related attribute-value pairs in free-text documents. This decomposition framework allows us to locate potential text fragments and identify the hidden semantics, in the form of attribute-value pairs, for user queries. Empirical analysis shows that the decomposition framework outperforms pattern-based approaches due to its effective integration of syntactic and semantic features.
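As a rough illustration of the sequence-labelling step in a pipeline like the one described, the sketch below builds per-token features and BIO-style attribute/value labels for a candidate text fragment. The sentence, feature set, and tag set are assumptions for illustration, and the CRF training itself (with any linear-chain CRF implementation) is omitted.

```python
# Toy sketch: build per-token feature dicts and BIO-style labels for a
# linear-chain CRF that tags attribute and value spans in free text.
# The fragment, features and labels are illustrative only; training is omitted.
def token_features(tokens, i):
    word = tokens[i]
    return {
        "lower": word.lower(),
        "is_number": word.replace(".", "", 1).isdigit(),
        "is_title": word.istitle(),
        "prev": tokens[i - 1].lower() if i > 0 else "<s>",
        "next": tokens[i + 1].lower() if i < len(tokens) - 1 else "</s>",
    }

# Candidate fragment located by the decomposition step, e.g. "screen size : 6.1 inches"
tokens = ["screen", "size", ":", "6.1", "inches"]
labels = ["B-ATTR", "I-ATTR", "O", "B-VAL", "I-VAL"]  # hand-labelled example

features = [token_features(tokens, i) for i in range(len(tokens))]
for feat, lab in zip(features, labels):
    print(lab, feat)
# A linear-chain CRF would be trained on many such (features, labels)
# sequences and then used to predict labels for new fragments.
```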
Abstract:
Phenomenography is a research approach devised to allow the investigation of varying ways in which people experience aspects of their world. Whilst growing attention is being paid to interpretative research in LIS, it is not always clear how the outcomes of such research can be used in practice. This article explores the potential contribution of phenomenography in advancing the application of phenomenological and hermeneutic frameworks to LIS theory, research and practice. In phenomenography we find a research tool which, in revealing variation, uncovers everyday understandings of phenomena and provides outcomes which are readily applicable to professional practice. The outcomes may be used in human-computer interface design, enhancement, implementation and training, in the design and evaluation of services, and in education and training for both end users and information professionals. A proposed research territory for phenomenography in LIS includes investigating qualitative variation in the experienced meaning of: 1) information and its role in society; 2) LIS concepts and principles; 3) LIS processes; and 4) LIS elements.
Abstract:
Educators are faced with many challenging questions in designing an effective curriculum. What prerequisite knowledge do students have before commencing a new subject? At what level of mastery? What is the spread of capabilities between bare-passing students and the top-performing group? How does the intended learning specification compare to student performance at the end of a subject? In this paper we present a conceptual model that helps in answering some of these questions. It has the following main capabilities: capturing the learning specification in terms of syllabus topics and outcomes; capturing mastery levels to model progression; capturing the minimal vs. aspirational learning design; capturing confidence and reliability metrics for each of these mappings; and finally, comparing and reflecting on the learning specification against actual student performance. We present a web-based implementation of the model, and validate it by mapping the final exams from four programming subjects against the ACM/IEEE CS2013 topics and outcomes, using Bloom's Taxonomy as the mastery scale. We then import the itemised exam grades from 632 students across the four subjects and compare the demonstrated student performance against the expected learning for each subject. Key contributions of this work are the validated conceptual model for capturing and comparing expected learning vs. demonstrated performance, and a web-based implementation of this model, which is made freely available online as a community resource.
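A minimal sketch of the kind of mapping such a model captures: exam items linked to syllabus topics with an intended Bloom's level, compared against demonstrated student scores. The topics, levels, thresholds, and numbers below are illustrative placeholders, not data from the study.

```python
# Toy sketch: exam items mapped to topics with an intended Bloom's level,
# compared against mean student scores. All values are illustrative.
from dataclasses import dataclass

BLOOM = ["Remember", "Understand", "Apply", "Analyse", "Evaluate", "Create"]

@dataclass
class ExamItem:
    topic: str            # e.g. an ACM/IEEE CS2013 topic
    intended_bloom: int   # index into BLOOM (expected mastery level)
    mean_score: float     # demonstrated performance, 0..1

items = [
    ExamItem("Fundamental Programming Concepts", 2, 0.81),
    ExamItem("Algorithms and Design", 3, 0.54),
    ExamItem("Basic Data Structures", 2, 0.67),
]

for item in items:
    gap = "below expectation" if item.mean_score < 0.6 else "on track"
    print(f"{item.topic}: intended {BLOOM[item.intended_bloom]}, "
          f"mean score {item.mean_score:.0%} ({gap})")
```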
Abstract:
Process mining encompasses the research area concerned with knowledge discovery from information system event logs. Within the process mining research area, two prominent tasks can be discerned. First, process discovery deals with the automatic construction of a process model from an event log. Second, conformance checking focuses on assessing the quality of a discovered or designed process model with respect to the actual behavior captured in event logs. To this end, multiple techniques and metrics have been developed and described in the literature. However, the process mining domain still lacks a comprehensive framework for assessing the goodness of a process model from a quantitative perspective. In this study, we describe the architecture of an extensible framework within ProM, allowing for the consistent, comparative and repeatable calculation of conformance metrics. Such a framework is of great value for the development and assessment of both process discovery and conformance checking techniques.
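As a drastically simplified illustration of a quantitative conformance metric, the sketch below computes trace-level fitness as the fraction of event-log traces that a toy model can reproduce; real conformance checking in ProM relies on token replay or alignments over Petri nets, which this does not attempt.

```python
# Drastically simplified sketch of a conformance metric: the fraction of
# event-log traces that a (toy) model can reproduce. The model here is just
# the set of traces it allows; real models and metrics are far richer.
model_language = {
    ("register", "check", "approve", "archive"),
    ("register", "check", "reject", "archive"),
}

event_log = [
    ("register", "check", "approve", "archive"),
    ("register", "approve", "archive"),            # skips "check": non-conforming
    ("register", "check", "reject", "archive"),
]

fitness = sum(trace in model_language for trace in event_log) / len(event_log)
print(f"trace-level fitness: {fitness:.2f}")
```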
Abstract:
This item provides supplementary materials for the paper mentioned in the title, specifically a range of organisms used in the study. The full abstract for the main paper is as follows: Next Generation Sequencing (NGS) technologies have revolutionised molecular biology, allowing clinical sequencing to become a matter of routine. NGS data sets consist of short sequence reads obtained from the machine, given context and meaning through downstream assembly and annotation. For these techniques to operate successfully, the collected reads must be consistent with the assumed species or species group, and not corrupted in some way. The common bacterium Staphylococcus aureus may cause severe and life-threatening infections in humans, with some strains exhibiting antibiotic resistance. In this paper, we apply an SVM classifier to the important problem of distinguishing S. aureus sequencing projects from alternative pathogens, including closely related Staphylococci. Using a sequence k-mer representation, we achieve precision and recall above 95%, implicating features with important functional associations.
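A minimal sketch of k-mer based sequence classification in the spirit described above, using scikit-learn's LinearSVC; k, the toy reads, and the labels are illustrative, not the study's data or feature pipeline.

```python
# Toy sketch: represent reads as bags of k-mers and train a linear SVM.
# Sequences, labels and k are illustrative only.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.svm import LinearSVC

def kmers(seq, k=4):
    # Turn a read into a space-separated string of overlapping k-mers.
    return " ".join(seq[i:i + k] for i in range(len(seq) - k + 1))

reads = ["ATGCGTACGTTAG", "ATGCGTACGTAAG", "TTTGCACGATCGA", "TTTGCACGATGGA"]
labels = ["S. aureus", "S. aureus", "other", "other"]

X = CountVectorizer().fit_transform([kmers(r) for r in reads])
clf = LinearSVC().fit(X, labels)
print(clf.predict(X))
```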
Abstract:
The Therapeutic Advice and Information Service was funded by the National Prescribing Service to provide a national drug information service for health professionals working in the community. For ten years the service achieved high levels of client satisfaction and reached its contracted target of 6000 enquiries about medicines per year; however, the service ceased on 30 June 2010.
Abstract:
Twitter is an important and influential social media platform, but much research into its uses remains centred around isolated cases – e.g. of events in political communication, crisis communication, or popular culture, often coordinated by shared hashtags (brief keywords, prefixed with the symbol ‘#’). In particular, a lack of standard metrics for comparing communicative patterns across cases prevents researchers from developing a more comprehensive perspective on the diverse, sometimes crucial roles which hashtags play in Twitter-based communication. We address this problem by outlining a catalogue of widely applicable, standardised metrics for analysing Twitter-based communication, with particular focus on hashtagged exchanges. We also point to potential uses for such metrics, presenting an indication of what broader comparisons of diverse cases can achieve.
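To indicate what such standardised metrics might look like in practice, the sketch below computes a few simple per-hashtag statistics (unique users, retweet share, mention share) from a toy set of tweets; the field names and the metric set are illustrative assumptions, not the catalogue proposed in the paper.

```python
# Toy sketch: simple per-hashtag metrics over a small tweet sample.
# Field names and metrics are illustrative only.
tweets = [
    {"user": "a", "text": "Quake update #eqnz", "is_retweet": False, "mentions": 0},
    {"user": "b", "text": "RT @a: Quake update #eqnz", "is_retweet": True, "mentions": 1},
    {"user": "c", "text": "Stay safe everyone #eqnz", "is_retweet": False, "mentions": 0},
    {"user": "a", "text": "Roads closed #eqnz @council", "is_retweet": False, "mentions": 1},
]

n = len(tweets)
unique_users = len({t["user"] for t in tweets})
metrics = {
    "tweets": n,
    "unique_users": unique_users,
    "retweet_share": sum(t["is_retweet"] for t in tweets) / n,
    "mention_share": sum(t["mentions"] > 0 for t in tweets) / n,
    "tweets_per_user": n / unique_users,
}
print(metrics)
```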
Abstract:
A building information model (BIM) provides a rich representation of a building's design. However, there are many challenges in getting construction-specific information from a BIM, limiting the usability of BIM for construction and other downstream processes. This paper describes a novel approach that utilizes ontology-based feature modeling, automatic feature extraction based on ifcXML, and query processing to extract information relevant to construction practitioners from a given BIM. The feature ontology generically represents construction-specific information that is useful for a broad range of construction management functions. The software prototype uses the ontology to transform the designer-focused BIM into a construction-specific feature-based model (FBM). The formal query methods operate on the FBM to further help construction users to quickly extract the necessary information from a BIM. Our tests demonstrate that this approach provides a richer representation of construction-specific information compared to existing BIM tools.
Abstract:
The design and construction community has shown increasing interest in adopting building information models (BIMs). The richness of information provided by BIMs has the potential to streamline the design and construction processes by enabling enhanced communication, coordination, automation and analysis. However, there are many challenges in extracting construction-specific information out of BIMs. In most cases, construction practitioners have to manually identify the required information, which is inefficient and prone to error, particularly for complex, large-scale projects. This paper describes the process and methods we have formalized to partially automate the extraction and querying of construction-specific information from a BIM. We describe methods for analyzing a BIM to query for spatial information that is relevant for construction practitioners, and that is typically represented implicitly in a BIM. Our approach integrates ifcXML data and other spatial data to develop a richer model for construction users. We employ custom 2D topological XQuery predicates to answer a variety of spatial queries. The validation results demonstrate that this approach provides a richer representation of construction-specific information compared to existing BIM tools.
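As a very rough illustration of querying building data held in an XML serialisation, the sketch below uses standard XML tooling and an XPath-style predicate on a toy document; the element and attribute names are placeholders, not the actual ifcXML schema, and it does not attempt the custom 2D topological XQuery predicates described above.

```python
# Highly simplified sketch: pull construction-relevant elements out of an
# XML-serialised building model with standard tooling. Element and attribute
# names are illustrative placeholders, not the real ifcXML schema.
import xml.etree.ElementTree as ET

sample = """
<building>
  <wall id="w1" level="1" material="concrete"/>
  <wall id="w2" level="2" material="brick"/>
  <slab id="s1" level="1"/>
</building>
"""

root = ET.fromstring(sample)
# "Query": all walls on level 1, a stand-in for the construction-specific
# queries discussed above.
for wall in root.findall("./wall[@level='1']"):
    print(wall.get("id"), wall.get("material"))
```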