904 results for information filtering
Abstract:
In vivo confocal microscopy (IVCM) is an emerging technology that provides minimally invasive, high-resolution, steady-state assessment of the ocular surface at the cellular level. Several challenges still remain but, at present, IVCM may be considered a promising technique for clinical diagnosis and management. This mini-review summarizes some key findings in IVCM of the ocular surface, focusing on recent and promising attempts to move “from bench to bedside”. IVCM allows prompt diagnosis, disease course follow-up, and management of potentially blinding atypical forms of infectious processes, such as Acanthamoeba and fungal keratitis. This technology has improved our knowledge of corneal alterations and some of the processes that affect the visual outcome after lamellar keratoplasty and excimer keratorefractive surgery. In dry eye disease, IVCM has provided new information on the whole ocular surface morphofunctional unit. It has also improved understanding of pathophysiologic mechanisms and helped in the assessment of prognosis and treatment. IVCM is particularly useful in the study of corneal nerves, enabling description of the morphology, density, and disease- or surgery-induced alterations of nerves, particularly the subbasal nerve plexus. In glaucoma, IVCM constitutes an important aid to evaluate filtering blebs, to better understand the conjunctival wound healing process, and to assess corneal changes induced by topical antiglaucoma medications and their preservatives. IVCM has significantly enhanced our understanding of the ocular response to contact lens wear. It has provided new perspectives at a cellular level on a wide range of contact lens complications, revealing findings that were not previously possible to image in the living human eye. The final section of this mini-review focuses on advances in confocal microscopy imaging, including 2D wide-field mapping, 3D reconstruction of the cornea, and automated image analysis.
Abstract:
The integration of separate, yet complementary, cortical pathways appears to play a role in visual perception and action when intercepting objects. The ventral system is responsible for object recognition and identification, while the dorsal system facilitates continuous regulation of action. This dual-system model implies that empirically manipulating different visual information sources during performance of an interceptive action might lead to the emergence of distinct gaze and movement pattern profiles. To test this idea, we recorded hand kinematics and eye movements of participants as they attempted to catch balls projected from a novel apparatus that synchronised or de-synchronised accompanying video images of a throwing action and ball trajectory. Results revealed that ball-catching performance was less successful when patterns of hand movements and gaze behaviours were constrained by the absence of advance perceptual information from the thrower's actions. Under these task constraints, participants began tracking the ball later, followed less of its trajectory, and adapted their actions by initiating movements later and moving the hand faster. There were no performance differences when the throwing action image and ball speed were synchronised or de-synchronised, since hand movements were closely linked to information from the ball trajectory. Results are interpreted relative to the two visual systems hypothesis, demonstrating that accurate interception requires integration of advance visual information from the kinematics of the throwing action and from the ball's flight trajectory.
Abstract:
This study aimed to determine whether systematic variation of the diagnostic terminology embedded within written discharge information (i.e., concussion or mild traumatic brain injury, mTBI) would produce different expected symptoms and illness perceptions. We hypothesized that, compared to concussion advice, mTBI advice would be associated with worse outcomes. Sixty-two volunteers with no history of brain injury or neurological disease were randomly allocated to one of two conditions in which they read an mTBI vignette followed by information that varied only by use of the embedded term concussion (n = 28) or mTBI (n = 34). Both groups reported illness perceptions (timeline and consequences subscales of the Illness Perception Questionnaire-Revised) and expected Postconcussion Syndrome (PCS) symptoms 6 months post-injury (Neurobehavioral Symptom Inventory, NSI). Statistically significant group differences due to terminology were found on selected NSI scores (total, cognitive, and sensory symptom cluster scores; concussion > mTBI), but there was no effect of terminology on illness perceptions. When embedded in discharge advice, diagnostic terminology affects some, but not all, expected outcomes. Given that such expectations are a known contributor to poor mTBI outcome, clinicians should consider the potential impact of varied terminology on their patients.
Abstract:
Genomic sequences are fundamentally text documents, admitting various representations according to need and tokenization. Gene expression depends crucially on the binding of enzymes to the DNA sequence at small, poorly conserved binding sites, limiting the utility of standard pattern search. However, one may exploit the regular syntactic structure of the enzyme's component proteins and the corresponding binding sites, framing the problem as one of detecting grammatically correct genomic phrases. In this paper we propose new kernels based on weighted tree structures, traversing the paths within them to capture the features that underpin the task. Experimentally, we find that these kernels provide performance comparable with state-of-the-art approaches for this problem, while offering significant computational advantages over earlier methods. The methods proposed may be applied to a broad range of sequence- or tree-structured data in molecular biology and other domains.
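The abstract leaves the kernel construction unspecified; as a rough sketch of the path-traversal idea (the tree encoding, decay weighting, and toy labels below are assumptions for illustration, not the paper's method), a minimal path-based tree kernel counts the weighted label paths two trees share:

```python
from collections import Counter

def paths(tree, prefix=()):
    """Enumerate root-to-leaf label paths in a tree given as (label, [children])."""
    label, children = tree
    if not children:
        yield prefix + (label,)
    for child in children:
        yield from paths(child, prefix + (label,))

def path_kernel(t1, t2, decay=0.5):
    """Weighted count of the label paths shared by two trees; longer shared
    paths are down-weighted by decay ** len(path) (an illustrative choice)."""
    c1, c2 = Counter(paths(t1)), Counter(paths(t2))
    return sum(c1[p] * c2[p] * decay ** len(p) for p in c1.keys() & c2.keys())

# Toy parse trees over nucleotide-like tokens
t_a = ("S", [("A", []), ("B", [("C", []), ("G", [])])])
t_b = ("S", [("A", []), ("B", [("C", []), ("T", [])])])
print(path_kernel(t_a, t_b))  # 0.375: two shared paths, weighted by length
```

Because only paths, not all subtrees, are compared, the kernel can be evaluated from path multisets alone, which is the kind of computational advantage the abstract alludes to.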
Abstract:
In the evolving knowledge societies of today, some people are overloaded with information while others are starved for it. Everywhere, people are yearning to express themselves freely, to participate actively in governance processes and cultural exchanges. Universally, there is a deep thirst to understand the complex world around us. Media and Information Literacy (MIL) is a basis for enhancing access to information and knowledge, freedom of expression, and quality education. It describes the skills and attitudes needed to value the functions of media and other information providers, including those on the Internet, in societies, and to find, evaluate, and produce information and media content; in other words, it covers the competencies that are vital for people to be effectively engaged in all aspects of development.
Abstract:
This article presents a content analysis of music in tourism TV commercials from 95 regions and countries to identify general acoustic characteristics. The objective is to offer general guidelines for the post-production of tourism TV commercials. It is found that tourism TV commercials tend to be produced at a faster tempo, with beats per minute close to 120, a tempo rarely found in general TV commercials. To compensate for the faster tempo (an increased aural information load), fewer scenes (longer duration per scene) were edited into the footage. Production recommendations and directions for future research are presented.
Abstract:
Introduction This paper reports on university students' experiences of learning information literacy. Method Phenomenography was selected as the research approach as it describes the experience from the perspective of the study participants, which in this case is a mixture of undergraduate and postgraduate students studying education at an Australian university. Semi-structured, one-on-one interviews were conducted with fifteen students. Analysis The interview transcripts were iteratively reviewed for similarities and differences in students' experiences of learning information literacy. Categories were constructed from an analysis of the distinct features of the experiences that students reported. The categories were grouped into a hierarchical structure that represents students' increasingly sophisticated experiences of learning information literacy. Results The study reveals that students experience learning information literacy in six ways: learning to find information; learning a process to use information; learning to use information to create a product; learning to use information to build a personal knowledge base; learning to use information to advance disciplinary knowledge; and learning to use information to grow as a person and to contribute to others. Conclusions Understanding the complexity of the concept of information literacy, and the collective and diverse range of ways students experience learning information literacy, enables academics and librarians to draw on the range of experiences reported by students to design academic curricula and information literacy education that targets more powerful ways of learning to find and use information.
Abstract:
Introduction In a connected world youth are participating in digital content creating communities. This paper introduces a description of teens' information practices in digital content creating and sharing communities. Method The research design was a constructivist grounded theory methodology. Seventeen interviews with eleven teens were collected and observation of their digital communities occurred over a two-year period. Analysis The data were analysed iteratively to describe teens' interactions with information through open and then focused coding. Emergent categories were shared with participants to confirm conceptual categories. Focused coding provided connections between conceptual categories resulting in the theory, which was also shared with participants for feedback. Results The paper posits a substantive theory of teens' information practices as they create and share content. It highlights that teens engage in the information actions of accessing, evaluating, and using information. They experienced information in five ways: participation, information, collaboration, process, and artefact. The intersection of enacting information actions and experiences of information resulted in five information practices: learning community, negotiating aesthetic, negotiating control, negotiating capacity, and representing knowledge. Conclusion This study contributes to our understanding of youth information actions, experiences, and practices. Further research into these communities might indicate what information practices are foundational to digital communities.
Abstract:
Term-based approaches can extract many features from text documents, but most include noise. Many popular text-mining strategies have been adapted to reduce noisy information from extracted features; however, these techniques suffer from the low-frequency problem: many useful features appear too rarely to be weighted reliably. The key issue is how to discover relevance features in text documents that fulfil user information needs. To address this issue, we propose a new method to extract specific features from user relevance feedback. The proposed approach includes two stages. The first stage extracts topics (or patterns) from text documents in order to focus on the interesting ones. In the second stage, topics are deployed to lower-level terms to address the low-frequency problem and find specific terms. The specific terms are determined based on their appearances in relevance feedback and their distribution in topics or high-level patterns. We test the proposed method with extensive experiments on the Reuters Corpus Volume 1 dataset and TREC topics. Results show that our approach significantly outperforms state-of-the-art models.
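The two stages are described only at a high level; a toy sketch of the idea (co-occurrence patterns standing in for topics, with an invented support-sharing weight) might look like:

```python
from collections import Counter
from itertools import combinations

def extract_patterns(docs, min_support=2, size=2):
    """Stage 1: frequent term co-occurrence patterns from relevance-feedback documents."""
    counts = Counter()
    for doc in docs:
        counts.update(combinations(sorted(set(doc.split())), size))
    return {p: c for p, c in counts.items() if c >= min_support}

def deploy_to_terms(patterns):
    """Stage 2: push each pattern's support down to its terms, so specific
    low-frequency terms inherit weight from the patterns that contain them."""
    weights = Counter()
    for pattern, support in patterns.items():
        for term in pattern:
            weights[term] += support / len(pattern)
    return weights

feedback_docs = ["quantum retrieval model", "quantum probability retrieval", "music tempo"]
print(deploy_to_terms(extract_patterns(feedback_docs)).most_common())
```

The paper's actual weighting of specific terms by their distribution over topics is surely richer than this, but the stage structure is the same: patterns first, then term-level deployment.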
Abstract:
In this paper we present a unified sequential Monte Carlo (SMC) framework for performing sequential experimental design for discriminating between a set of models. The model discrimination utility that we advocate is fully Bayesian and based upon the mutual information, which SMC provides a convenient way to estimate. Our experience suggests that the approach works well on sets of either discrete or continuous models and outperforms other model discrimination approaches.
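As a simplified illustration of the utility (plain prior sampling in place of a full SMC implementation, with invented models, priors, and noise level), the mutual information between the model indicator and the data predicted at a candidate design point can be estimated by nested Monte Carlo:

```python
import numpy as np

rng = np.random.default_rng(0)

# Two hypothetical candidate models for y at design point x, each with a
# standard-normal prior on its single parameter (all assumptions).
models = [lambda th, x: th * x, lambda th, x: th * x**2]
sigma = 0.5  # assumed observation noise

def mutual_information(x, n=2000):
    """Nested Monte Carlo estimate of I(model; y | design x), equal model priors."""
    m = rng.integers(len(models), size=n)       # sample model indicators
    theta = rng.standard_normal(n)              # sample parameters from the prior
    mu = np.where(m == 0, models[0](theta, x), models[1](theta, x))
    y = mu + sigma * rng.standard_normal(n)     # simulate data at design x
    # Estimate p(y | model) by averaging the likelihood over fresh prior draws;
    # the Gaussian normalising constants cancel in the log-ratio below.
    th2 = rng.standard_normal(n)
    lik = lambda mu_: np.exp(-(y[:, None] - mu_[None, :]) ** 2 / (2 * sigma**2))
    p_y_m = [lik(mod(th2, x)).mean(axis=1) for mod in models]
    p_y = 0.5 * (p_y_m[0] + p_y_m[1])
    return np.mean(np.log(np.where(m == 0, p_y_m[0], p_y_m[1]) / p_y))

for x in [0.5, 1.0, 2.0]:
    print(x, mutual_information(x))
```

At x = 1 the two toy models make identical predictions, so the estimated information is near zero and the utility would steer the design away from that point; an SMC implementation would reuse weighted particles across design steps instead of re-sampling from scratch as done here.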
Abstract:
"Students transitioning from vocational education and training (VET) to university can experience a number of challenges. This small research project explored the information literacy needs of VET and university students and how they differ. Students studying early childhood related VET and university courses reported differences in how and where they searched for information in their studies. These differences reflect the more practical focus of VET compared with the more academic and theoretical approach of university. The author proposes a framework of support that could be provided to transitioning students to enable them to develop the necessary information literacy skills for university study."--publisher website
Abstract:
In this paper we introduce a formalization of Logical Imaging applied to IR in terms of Quantum Theory, through the use of an analogy between the states of a quantum system and the terms in text documents. Our formalization relies upon the Schrödinger picture, creating an analogy between the dynamics of a physical system and the kinematics of probabilities generated by Logical Imaging. By using Quantum Theory, it is possible to model contextual information more precisely, in a seamless and principled fashion, within the Logical Imaging process. While further work is needed to empirically validate this, the foundations for doing so are provided.
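For readers unfamiliar with the underlying mechanism, a toy sketch of the "kinematics of probabilities" follows; it is reconstructed from the general Logical Imaging literature rather than from this paper's quantum formalization, and the similarity function, uniform prior, and vocabulary are invented. Each term, treated as a possible world, transfers its prior probability to the most similar term satisfying the antecedent; the implication probability is the mass landing on terms of the consequent:

```python
def bigram_sim(a, b):
    """Toy term similarity: Jaccard overlap of character bigrams (a stand-in
    for the co-occurrence similarity a real IR system would use)."""
    A = {a[i:i + 2] for i in range(len(a) - 1)}
    B = {b[i:i + 2] for i in range(len(b) - 1)}
    return len(A & B) / len(A | B) if A | B else 0.0

def image(prior, condition, sim):
    """Move each term's probability mass to its closest term satisfying the condition."""
    posterior = {t: 0.0 for t in prior}
    for t, p in prior.items():
        target = t if t in condition else max(sorted(condition), key=lambda u: sim(t, u))
        posterior[target] += p
    return posterior

def p_implication(antecedent, consequent, prior, sim):
    """P(antecedent -> consequent) estimated by imaging on the antecedent."""
    posterior = image(prior, antecedent, sim)
    return sum(p for t, p in posterior.items() if t in consequent)

vocab = ["quantum", "theory", "retrieval", "logic", "imaging", "music"]
prior = {t: 1 / len(vocab) for t in vocab}      # uniform prior (assumption)
query, doc = {"quantum", "retrieval"}, {"quantum", "logic", "imaging"}
print(p_implication(query, doc, prior, bigram_sim))
```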
Abstract:
Retrieval with Logical Imaging is derived from belief revision and provides a novel mechanism for estimating the relevance of a document through logical implication (i.e., P(q -> d)). In this poster, we perform the first comprehensive evaluation of Logical Imaging (LI) in Information Retrieval (IR) across several TREC test collections. When compared against standard baseline models, we show that LI fails to improve performance. This failure can be attributed to a nuance within the model whereby non-relevant documents are promoted in the ranking while relevant documents are demoted. This is an important contribution because it not only contextualizes the effectiveness of LI, but crucially explains why it fails. By addressing this nuance, future LI models could be significantly improved.
Abstract:
Quantum-inspired models have recently attracted increasing attention in Information Retrieval. An intriguing characteristic of the mathematical framework of quantum theory is the presence of complex numbers. However, it is unclear what such numbers actually represent or mean in Information Retrieval. The goal of this paper is to discuss the role of complex numbers within the context of Information Retrieval. First, we introduce how complex numbers are used in quantum probability theory. Then, we examine van Rijsbergen’s proposal of evoking complex-valued representations of information objects. We empirically show that such a representation is unlikely to be effective in practice (confuting its usefulness in Information Retrieval). We then explore alternative proposals that may be more successful at realising the power of complex numbers.
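For context on what the complex numbers contribute, quantum probability obtains probabilities from squared moduli of inner products (the Born rule), so two representations with identical moduli but different phases can behave oppositely through interference. A minimal illustration (the vectors are invented, not the paper's representations):

```python
import numpy as np

# Two-dimensional "information object" amplitudes with equal moduli
# but different relative phases (purely illustrative).
psi_query = np.array([1, 1j]) / np.sqrt(2)
psi_doc_a = np.array([1, 1j]) / np.sqrt(2)    # same relative phase
psi_doc_b = np.array([1, -1j]) / np.sqrt(2)   # opposite relative phase

def born_probability(psi, phi):
    """Born rule: probability as the squared modulus of the inner product."""
    return abs(np.vdot(psi, phi)) ** 2

print(born_probability(psi_query, psi_doc_a))  # 1.0: constructive interference
print(born_probability(psi_query, psi_doc_b))  # 0.0: the phases cancel
```

Any purely real-valued encoding of the same moduli discards exactly this phase information, which is why the question of what the phases represent in Information Retrieval matters.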
Abstract:
The presence of spam in a document ranking is a major issue for Web search engines. Common approaches to coping with spam remove from document rankings those pages that are likely to contain spam. These approaches are implemented as post-retrieval processes that filter out spam pages only after documents have been retrieved with respect to a user’s query. In this paper we suggest removing spam pages at indexing time, thereby obtaining a pruned index that is virtually “spam-free”. We investigate the benefits of this approach from three points of view: indexing time, index size, and retrieval performance. Not surprisingly, we found that the strategy decreases both the time required by the indexing process and the space required for storing the index. Surprisingly, however, we found that retrieval over a spam-pruned version of a collection’s index performs no differently from traditional post-retrieval spam filtering approaches.
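A minimal sketch of the index-time pruning idea (the toy spam detector and the dictionary-of-sets index are placeholders for the real classifier and index structures such a system would use):

```python
def build_pruned_index(pages, is_spam):
    """Build an inverted index, skipping pages flagged as spam at indexing
    time rather than filtering them out of rankings after retrieval."""
    index = {}
    for doc_id, text in pages.items():
        if is_spam(text):
            continue  # pruned: never stored, never retrieved
        for term in set(text.lower().split()):
            index.setdefault(term, set()).add(doc_id)
    return index

def is_spam(text):
    """Toy stand-in for a real spam classifier (assumption)."""
    return text.lower().count("cheap") > 1

pages = {
    1: "quantum models for information retrieval",
    2: "cheap pills cheap watches cheap everything",
    3: "spam filtering at indexing time",
}
print(build_pruned_index(pages, is_spam))
```

Pruning at indexing time shrinks the index and speeds up indexing; the abstract's finding is that it also leaves retrieval effectiveness on a par with post-retrieval filtering.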