883 results for Multimodal retrieval
Abstract:
Reports of children and teachers taking transformative social action in schools are becoming rare. This session illustrates how teachers, while feeling the weight of accountability testing in schools, are active agents who can re-imagine literacy pedagogy to change elements of their community. It reports the critical dimensions of a movie-making unit with Year 5 students within a school reform project. The students filmed interviews with people in the local shops to gather lay-knowledge and experiences of the community. The short documentaries challenged stereotypes about what it is like to live in Logan, and critically identified potential improvements to public spaces in the local community. A student panel presented these multimodal texts at a national conference of social activists and community leaders. The report does not valorize or privilege local or lay knowledge over dominant knowledge, but argues that prescribed curriculum should not hinder the capacity for critical consciousness.
Abstract:
Globalised communication in society today is characterised by multimodal forms of meaning making in the context of increased cultural and linguistic diversity. This research paper responds to these imperatives, applying Halliday's (1978, 1994) categories of systemic functional linguistics - representational or ideational, interactive or interpersonal, and compositional or textual meanings. Following the work of Kress (2000), van Leeuwen (Kress and van Leeuwen, 1996), and Jewitt (2006), multimodal semiotic analysis is applied to claymation movies that were collaboratively designed by Year 6 students. The significance of this analysis is the metalanguage for textual work in the kineikonic mode - moving images.
Abstract:
Information skills instruction for research candidates has recently been formalised as coursework at the Queensland University of Technology. Feedback solicited from participants suggests that students benefit from such coursework in a number of ways. Their perception of the value of specific content areas to their literature review and thesis presentation is favourable. A small group of students who participated in interviews identified five ways in which the coursework assisted the research process. As instructors continue to work with the postgraduate community, it would be useful to deepen our understanding of how such instruction is perceived and the benefits which can be derived from it.
Abstract:
Big Data is a rising IT trend similar to cloud computing, social networking or ubiquitous computing. Big Data can offer beneficial scenarios in the e-health arena. However, to gain benefits such as finding cures for infectious diseases while protecting patient privacy, Big Data may need to be kept secure for long periods of time. It is therefore desirable to analyse Big Data and extract meaningful information while the data is stored securely, which makes an analysis of the various database encryption techniques essential. In this study, we simulated three types of technical environments, namely plain-text, Microsoft built-in encryption, and custom Advanced Encryption Standard, using Bucket Index in Data-as-a-Service. The results showed that custom AES-DaaS has a faster range query response time than MS built-in encryption. Furthermore, while carrying out the scalability test, we found that there are performance thresholds depending on physical IT resources. Therefore, for the purpose of efficient Big Data management in e-health, it is important to examine these scalability limits, even under a cloud computing environment. In addition, when designing an e-health database, both patient privacy and system performance need to be treated as top priorities.
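As a rough illustration of the bucket-index idea used in the comparison above, the sketch below stores an AES-encrypted value alongside a coarse bucket id and answers range queries by fetching candidate buckets and post-filtering after decryption (Python with the cryptography package; the in-memory table, bucket width and field names are illustrative assumptions, not the study's actual DaaS implementation).

# Illustrative bucketized range query over encrypted values.
# Toy in-memory "table"; a real DaaS deployment stores this server-side.
from cryptography.fernet import Fernet

key = Fernet.generate_key()
cipher = Fernet(key)                 # AES-based authenticated encryption
BUCKET_WIDTH = 10                    # coarser buckets leak less but filter less

def insert(table, patient_id, heart_rate):
    # Store only the ciphertext plus a coarse bucket id of the value.
    table.append({
        "id": patient_id,
        "hr_bucket": heart_rate // BUCKET_WIDTH,
        "hr_enc": cipher.encrypt(str(heart_rate).encode()),
    })

def range_query(table, low, high):
    # Server-side step: fetch every row whose bucket overlaps [low, high].
    buckets = set(range(low // BUCKET_WIDTH, high // BUCKET_WIDTH + 1))
    candidates = [row for row in table if row["hr_bucket"] in buckets]
    # Client-side step: decrypt candidates and discard false positives.
    results = []
    for row in candidates:
        value = int(cipher.decrypt(row["hr_enc"]).decode())
        if low <= value <= high:
            results.append((row["id"], value))
    return results

table = []
for pid, hr in [(1, 62), (2, 118), (3, 87), (4, 95)]:
    insert(table, pid, hr)
print(range_query(table, 80, 100))   # -> [(3, 87), (4, 95)]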
Abstract:
The Bluetooth technology is being increasingly used to track vehicles throughout their trips, within urban networks and across freeway stretches. One important opportunity offered by this type of data is the measurement of Origin-Destination patterns, emerging from the aggregation and clustering of individual trips. In order to obtain accurate estimations, however, a number of issues need to be addressed through data filtering and correction techniques. These issues mainly stem from the use of the Bluetooth technology amongst drivers, and the physical properties of the Bluetooth sensors themselves. First, not all cars are equipped with discoverable Bluetooth devices, and the Bluetooth-enabled vehicles may be concentrated within particular socio-economic groups of users. Second, the Bluetooth datasets include data from various transport modes, such as pedestrians, bicycles, cars, taxis, buses and trains. Third, the Bluetooth sensors may fail to detect all of the nearby Bluetooth-enabled vehicles. As a consequence, the exact journey for some vehicles may become a latent pattern that will need to be extracted from the data. Finally, sensors that are in close proximity to each other may have overlapping detection areas, thus making the task of retrieving the correct travelled path even more challenging. The aim of this paper is twofold. We first give a comprehensive overview of the aforementioned issues. Further, we propose a methodology that can be followed in order to cleanse, correct and aggregate Bluetooth data. We postulate that the methods introduced by this paper are the first crucial steps that need to be followed in order to compute accurate Origin-Destination matrices in urban road networks.
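The abstract does not spell out the pipeline, but a minimal sketch of the final aggregation step, turning per-device detection logs into an Origin-Destination matrix, might look like the following (Python; the field names, the 20-minute gap used to split trips, and the toy detections are assumptions for illustration, not the paper's calibrated methodology).

# Build a toy Origin-Destination matrix from Bluetooth detections.
# Each detection is (device_id, sensor_id, unix_time).
from collections import defaultdict

TRIP_GAP = 20 * 60        # assumed: a 20-minute silence starts a new trip
detections = [
    ("aa:01", "S1", 1000), ("aa:01", "S3", 1400), ("aa:01", "S5", 1900),
    ("aa:01", "S5", 9000), ("aa:01", "S2", 9600),     # second trip, same device
    ("bb:02", "S1", 1100), ("bb:02", "S5", 2100),
]

def od_matrix(detections):
    by_device = defaultdict(list)
    for dev, sensor, ts in detections:
        by_device[dev].append((ts, sensor))
    od = defaultdict(int)
    for dev, obs in by_device.items():
        obs.sort()                               # order each device's sightings in time
        trip = [obs[0]]
        for prev, cur in zip(obs, obs[1:]):
            if cur[0] - prev[0] > TRIP_GAP:      # long gap -> previous trip ended
                od[(trip[0][1], trip[-1][1])] += 1
                trip = [cur]
            else:
                trip.append(cur)
        od[(trip[0][1], trip[-1][1])] += 1       # close the last trip
    return dict(od)

print(od_matrix(detections))
# {('S1', 'S5'): 2, ('S5', 'S2'): 1}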
Abstract:
This paper details the participation of the Australian e-Health Research Centre (AEHRC) in the ShARe/CLEF 2013 eHealth Evaluation Lab – Task 3. This task aims to evaluate the use of information retrieval (IR) systems to aid consumers (e.g. patients and their relatives) in seeking health advice on the Web. Our submissions to the ShARe/CLEF challenge are based on language models generated from the web corpus provided by the organisers. Our baseline system is a standard Dirichlet smoothed language model. We enhance the baseline by identifying and correcting spelling mistakes in queries, as well as expanding acronyms using AEHRC's Medtex medical text analysis platform. We then consider the readability and the authoritativeness of web pages to further enhance the quality of the document ranking. Measures of readability are integrated in the language models used for retrieval via prior probabilities. Prior probabilities are also used to encode authoritativeness information derived from a list of top-100 consumer health websites. Empirical results show that correcting spelling mistakes and expanding acronyms found in queries significantly improves the effectiveness of the language model baseline. Readability priors seem to increase retrieval effectiveness for graded relevance at early ranks (nDCG@5, but not precision), but no improvements are found at later ranks and when considering binary relevance. The authoritativeness prior does not appear to provide retrieval gains over the baseline: this is likely to be because of the small overlap between websites in the corpus and those in the top-100 consumer-health websites we acquired.
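A minimal sketch of the core scoring idea, Dirichlet-smoothed query likelihood combined with a log document prior, is shown below (Python; the toy corpus, the µ value, and the way a readability score is turned into a prior are illustrative assumptions, not the AEHRC system itself).

# Query likelihood with Dirichlet smoothing plus a log document prior.
import math
from collections import Counter

MU = 2000.0   # Dirichlet smoothing parameter (a typical default)

def score(query_terms, doc_terms, collection_tf, collection_len, log_prior=0.0):
    tf = Counter(doc_terms)
    dlen = len(doc_terms)
    s = log_prior                        # e.g. log of a readability-based prior
    for t in query_terms:
        # Add-one floor on the collection estimate so unseen terms stay finite.
        p_coll = (collection_tf.get(t, 0) + 1) / (collection_len + 1)
        s += math.log((tf.get(t, 0) + MU * p_coll) / (dlen + MU))
    return s

docs = {
    "d1": "diabetes insulin blood sugar treatment".split(),
    "d2": "flu fever cough rest fluids".split(),
}
collection = [t for d in docs.values() for t in d]
coll_tf, coll_len = Counter(collection), len(collection)

# Assume some external readability estimate in (0, 1] per document.
readability = {"d1": 0.8, "d2": 0.4}
query = "insulin treatment".split()
ranked = sorted(
    docs,
    key=lambda d: score(query, docs[d], coll_tf, coll_len,
                        log_prior=math.log(readability[d])),
    reverse=True,
)
print(ranked)   # d1 ranks first: it matches the query and has the higher prior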
Abstract:
This practice-led project has two outcomes: a collection of short stories titled 'Corkscrew Section', and an exegesis. The short stories combine written narrative with visual elements such as images and typographic devices, while the exegesis analyses the function of these graphic devices within adult literary fiction. My creative writing explores a variety of genres and literary styles, but almost all of the stories are concerned with fusing verbal and visual modes of communication. The exegesis adopts the interpretive paradigm of multimodal stylistics, which aims to analyse graphic devices with the same level of detail as linguistic analysis. Within this framework, the exegesis compares and extends previous studies to develop a systematic method for analysing how the interactions between language, images and typography create meaning within multimodal literature.
Abstract:
Literacy Theories for the Digital Age insightfully brings together six essential approaches to literacy research and educational practice. The book provides powerful and accessible theories for readers, including Socio-cultural, Critical, Multimodal, Socio-spatial, Socio-material and Sensory Literacies. The brand new Sensory Literacies approach is an original and visionary contribution to the field, coupled with a provocative foreword from leading sensory anthropologist David Howes. This dynamic collection explores a legacy of literacy research while showing the relationships between each paradigm, highlighting their complementarity and distinctions. This highly relevant compendium will inspire readers to explore new frontiers of thought and practice in times of diversity and technological change.
Abstract:
Entity-oriented retrieval aims to return a list of relevant entities rather than documents to provide exact answers for user queries. The nature of entity-oriented retrieval requires identifying the semantic intent of user queries, i.e., understanding the semantic role of query terms and determining the semantic categories which indicate the class of target entities. Existing methods are not able to exploit the semantic intent by capturing the semantic relationship between terms in a query and in a document that contains entity-related information. To improve the understanding of the semantic intent of user queries, we propose a concept-based retrieval method that not only automatically identifies the semantic intent of user queries, i.e., Intent Type and Intent Modifier, but also introduces concepts represented by Wikipedia articles into user queries. We evaluate our proposed method on entity profile documents annotated with concepts from Wikipedia category and list structure. Empirical analysis reveals that the proposed method outperforms several state-of-the-art approaches.
Abstract:
Process-Aware Information Systems (PAISs) support executions of operational processes that involve people, resources, and software applications on the basis of process models. Process models describe vast, often infinite, amounts of process instances, i.e., workflows supported by the systems. With the increasing adoption of PAISs, large process model repositories emerged in companies and public organizations. These repositories constitute significant information resources. Accurate and efficient retrieval of process models and/or process instances from such repositories is interesting for multiple reasons, e.g., searching for similar models/instances, filtering, reuse, standardization, process compliance checking, verification of formal properties, etc. This paper proposes a technique for indexing process models that relies on their alternative representations, called untanglings. We show the use of untanglings for retrieval of process models based on process instances that they specify via a solution to the total executability problem. Experiments with industrial process models testify that the proposed retrieval approach is up to three orders of magnitude faster than the state of the art.
Abstract:
This paper describes a new method of indexing and searching large binary signature collections to efficiently find similar signatures, addressing the scalability problem in signature search. Signatures offer efficient computation with an acceptable measure of similarity in numerous applications. However, performing a complete search with a given search argument (a signature) requires a Hamming distance calculation against every signature in the collection. This quickly becomes excessive when dealing with large collections, presenting issues of scalability that limit their applicability. Our method efficiently finds similar signatures in very large collections, trading memory use and precision for greatly improved search speed. Experimental results demonstrate that our approach is capable of finding a set of nearest signatures to a given search argument with a high degree of speed and fidelity.
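One well-known way to avoid the exhaustive Hamming scan is multi-index hashing over signature chunks; the sketch below (Python, toy 64-bit signatures) illustrates that general family of memory-for-speed trade-offs rather than the paper's specific index, and unlike the paper's approach it is exact up to the chosen radius.

# Hamming-distance search over 64-bit signatures with a chunk-based prefilter.
# Pigeonhole: if two signatures differ in at most r bits and we cut them into
# m > r chunks, at least one chunk is identical, so exact chunk lookups yield
# a candidate set that is then verified with the full Hamming distance.
import random
from collections import defaultdict

BITS, CHUNKS, RADIUS = 64, 4, 3          # 4 chunks of 16 bits, radius 3 < 4

def chunks_of(sig):
    return [(sig >> (16 * i)) & 0xFFFF for i in range(CHUNKS)]

def hamming(a, b):
    return bin(a ^ b).count("1")

def build_index(signatures):
    index = [defaultdict(list) for _ in range(CHUNKS)]
    for pos, sig in enumerate(signatures):
        for i, c in enumerate(chunks_of(sig)):
            index[i][c].append(pos)
    return index

def search(index, signatures, query, radius=RADIUS):
    candidates = set()
    for i, c in enumerate(chunks_of(query)):
        candidates.update(index[i].get(c, []))   # exact match on any chunk
    return sorted(p for p in candidates if hamming(signatures[p], query) <= radius)

random.seed(1)
sigs = [random.getrandbits(BITS) for _ in range(10000)]
query = sigs[42] ^ 0b101                         # flip 2 bits of a stored signature
idx = build_index(sigs)
print(search(idx, sigs, query))                  # position 42 (distance 2) is found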
Abstract:
Background The transfer and/or retrieval of a critically ill patient is inherently dangerous, not only for the patient but for staff as well. The quality and experience of unplanned transfers can influence patient mortality and morbidity. However, international evidence suggests that dedicated transfer/retrieval teams can improve mortality and morbidity outcomes. Aims The initial aim of this paper is to describe an in-house competency-based training programme, which encompasses the STaR approach, to develop members of our existing nursing team to be part of the dedicated transfer/retrieval service. The paper also presents audit data findings which examined the source of referrals, the number of patients actually transferred and the clinical status of those being transferred. Results Audit data illustrate that the most frequent source of referrals comes from Accident and Emergency and the Surgical Directorate, with the most common presenting condition being cardio-respiratory failure or arrest. Audit data reveal that the number of patients actually transferred or retrieved is relatively small (33%) compared with the overall number of requests for assistance. However, 36% of those patients transferred had a level 2 or level 3 acuity status that necessitated admission to a critical care area. Conclusions A number of studies have concluded that an inexperienced and ill-equipped transfer team can place patients at serious risk of harm. Whether planned or unplanned, dedicated critical care transfer/retrieval teams have been shown to reduce patient mortality and morbidity.
Abstract:
We revisit the venerable question of access credentials management, which concerns the techniques that we, humans with limited memory, must employ to safeguard our various access keys and tokens in a connected world. Although many existing solutions can be employed to protect a long secret using a short password, those solutions typically require certain assumptions on the distribution of the secret and/or the password, and are helpful against only a subset of the possible attackers. After briefly reviewing a variety of approaches, we propose a user-centric comprehensive model to capture the possible threats posed by online and offline attackers, from the outside and the inside, against the security of both the plaintext and the password. We then propose a few very simple protocols, adapted from the Ford-Kaliski server-assisted password generator and the Boldyreva unique blind signature in particular, that provide the best protection against all kinds of threats, for all distributions of secrets. We also quantify the concrete security of our approach in terms of online and offline password guesses made by outsiders and insiders, in the random-oracle model. The main contribution of this paper lies not in the technical novelty of the proposed solution, but in the identification of the problem and its model. Our results have an immediate and practical application for the real world: they show how to implement single-sign-on stateless roaming authentication for the internet, in an ad-hoc, user-driven fashion that requires no change to protocols or infrastructure.
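As a rough, conceptual illustration of the server-assisted password-generator idea in the spirit of Ford-Kaliski, the sketch below shows blinded exponentiation in a toy group: the client learns H(pw)^k without revealing the password, and the server (holding k) never learns the derived key. The parameters are deliberately tiny and insecure, and this is not the protocol proposed in the paper.

# Toy Ford-Kaliski-style password hardening via blinded exponentiation.
# WARNING: p = 23 is a toy safe prime; real systems use large groups or curves.
import hashlib, secrets

P, Q = 23, 11                          # safe prime p = 2q + 1 (toy parameters)

def hash_to_group(password: str) -> int:
    # Map the password into the quadratic-residue subgroup of order q.
    ctr = 0
    while True:
        digest = hashlib.sha256(f"{password}|{ctr}".encode()).digest()
        h = pow(int.from_bytes(digest, "big") % P, 2, P)
        if h not in (0, 1):
            return h
        ctr += 1

def client_blind(password):
    r = secrets.randbelow(Q - 1) + 1           # blinding exponent in [1, q-1]
    return pow(hash_to_group(password), r, P), r

def server_evaluate(blinded, k):
    return pow(blinded, k, P)                  # server applies its secret k

def client_unblind(response, r):
    r_inv = pow(r, -1, Q)                      # exponents live mod q (Python 3.8+)
    return pow(response, r_inv, P)

server_key = secrets.randbelow(Q - 1) + 1
blinded, r = client_blind("correct horse battery staple")
hardened = client_unblind(server_evaluate(blinded, server_key), server_key if False else r)
# Sanity check: the unblinded value equals H(pw)^k, computed without blinding.
assert hardened == pow(hash_to_group("correct horse battery staple"), server_key, P)
master_key = hashlib.sha256(str(hardened).encode()).hexdigest()
print(master_key[:16])                         # long-term secret derived from H(pw)^k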
Abstract:
In this paper we introduce a formalization of Logical Imaging applied to IR in terms of Quantum Theory, through the use of an analogy between states of a quantum system and terms in text documents. Our formalization relies upon the Schrödinger Picture, creating an analogy between the dynamics of a physical system and the kinematics of probabilities generated by Logical Imaging. By using Quantum Theory, it is possible to model contextual information more precisely, in a seamless and principled fashion, within the Logical Imaging process. While further work is needed to empirically validate this, the foundations for doing so are provided.
Abstract:
The Quantum Probability Ranking Principle (QPRP) has recently been proposed, and accounts for interdependent document relevance when ranking. However, to be instantiated, the QPRP requires a method to approximate the "interference" between two documents. In this poster, we empirically evaluate a number of different methods of approximation on two TREC test collections for subtopic retrieval. It is shown that these approximations can lead to significantly better retrieval performance over the state of the art.
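A minimal sketch of one plausible approximation in the spirit of this work is shown below: greedy re-ranking under the QPRP where the interference between two documents is approximated from their relevance probabilities and cosine similarity (Python; the toy probabilities and vectors are illustrative, and the specific approximations evaluated in the poster may differ).

# Greedy QPRP-style ranking: pick the next document maximising its relevance
# probability plus interference with the documents already ranked, where the
# interference is approximated as -2 * sqrt(P(a)P(b)) * cos_sim(a, b).
import math

def cos_sim(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def qprp_rank(prob, vec):
    remaining, ranking = set(prob), []
    while remaining:
        def utility(d):
            interference = sum(
                -2 * math.sqrt(prob[d] * prob[r]) * cos_sim(vec[d], vec[r])
                for r in ranking)
            return prob[d] + interference
        best = max(remaining, key=utility)
        ranking.append(best)
        remaining.remove(best)
    return ranking

# Toy example: d2 is slightly less relevant than d1 but covers a different subtopic.
prob = {"d0": 0.9, "d1": 0.8, "d2": 0.7}
vec = {"d0": [1.0, 0.0], "d1": [0.9, 0.1], "d2": [0.1, 0.9]}
print(qprp_rank(prob, vec))   # ['d0', 'd2', 'd1'] -- the novel d2 is promoted over d1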