932 results for Knowledge Information Objects
Abstract:
Objectives This paper examines health IT implementation processes, including the barriers to and facilitators of successful implementation; identifies a beginning set of implementation best practices; identifies gaps in the health IT implementation body of knowledge; and offers recommendations for future study and application. Methods A literature review identified six health IT-related implementation best practices, which were subsequently debated and clarified by participants attending the NI2012 Research Post Conference held in Montreal in the summer of 2012. Using the Consolidated Framework for Implementation Research (CFIR) to guide their application, the six best practices were applied to two distinct health IT implementation studies to assess their applicability. Results Assessing the implementation processes from two markedly diverse settings illustrated both the challenges and the potential of using standardized implementation processes. In support of what was discovered in the review of the literature, “one size fits all” in health IT implementation is a fallacy, particularly when global diversity is added into the mix. At the same time, several frameworks show promise for use as “scaffolding” to begin to assess best practices, their distinct dimensions, and their applicability for use. Conclusions Health IT innovations, regardless of the implementation setting, require a close assessment of many dimensions. While there is no “one size fits all”, there are commonalities and best practices that can be blended, adapted, and utilized to improve the process of implementation. This paper examines health IT implementation processes and identifies a beginning set of implementation best practices that could begin to address gaps in the health IT implementation body of knowledge.
Abstract:
Process models are used to convey semantics about business operations that are to be supported by an information system. A wide variety of professionals is expected to use such models, including people who have little modeling or domain expertise. We identify important user characteristics that influence the comprehension of process models. Through a free simulation experiment, we provide evidence that selected cognitive abilities, learning style, and learning strategy influence the development of process model comprehension. These insights draw attention to the importance of research that views process model comprehension as an emergent learning process rather than as an attribute of the models as objects. Based on our findings, we identify a set of organizational intervention strategies that can lead to more successful process modeling workshops.
Abstract:
Delirium is a significant problem for older hospitalized people and is associated with poor outcomes. It is poorly recognized, and evidence suggests that a major reason is lack of education. Nurses who are educated about delirium can play a significant role in improving delirium recognition. This study evaluated the impact of a delirium-specific educational website. A cluster randomized controlled trial, with a pretest/post-test time series design, was conducted to measure delirium knowledge (DK) and delirium recognition (DR) over three time-points. Statistically significant differences were found between the intervention and non-intervention groups. The intervention group's DK scores were higher, and the changes over time were statistically significant [T3 vs. T1 (t=3.78, p<0.001) and T2 vs. T1 baseline (t=5.83, p<0.001)]. Statistically significant improvements were also seen for DR when comparing T2 and T1 results (t=2.56, p=0.011) between the groups, but not for changes in DR scores between T3 and T1 (t=1.80, p=0.074). Participants rated the website highly on its visual, functional, and content elements. This study supports the concept that web-based delirium learning is an effective and satisfying method of information delivery for registered nurses. Future research is required to investigate clinical outcomes resulting from this web-based education.
Abstract:
Road traffic injuries are one of the major public health burdens worldwide. The United Nations Decade of Action for Road Safety (2011-2020) implores all nations to work to reduce this burden, and represents a unique and historic period in the field of road safety. Information exchange and co-operation between nations is an important step in achieving the goal. The burden of road crashes, fatalities, and injuries is not equally distributed: low- and middle-income countries experience the majority of the road trauma burden. It is therefore imperative that these countries learn from the successes of others that have developed and implemented road safety laws, public education campaigns, and countermeasures over many years and have achieved significant road trauma reductions as a result. China is one of the countries experiencing a large road trauma burden. Vulnerable road users such as pedestrians and cyclists make up a large proportion of fatalities and injuries in China. Speeding, impaired/drug driving, distracted driving, vehicle overloading, inadequate road infrastructure, limited use of safety restraints and helmets, and limited road safety training have all been identified as contributing to the problem. Some important steps have been taken to strengthen China's approach, including increased penalties for drunk driving in May 2011 and increased attention to school bus safety in 2011/12. However, a large amount of work is still needed to improve the current road safety position in China. This paper provides details of a program, funded by the Australian Government and undertaken in the latter part of 2012, to assist with road safety knowledge exchange between China and Australia. The four-month program provided the opportunity for the first author to work closely with key agencies in Australia that are responsible for policy development and implementation of a broad range of road safety initiatives.
In doing so, an in-depth understanding was gained about key road safety strategies in Australia and processes for developing and implementing them. Insights were also gained into the mechanisms used for road safety policy development, implementation and evaluation in several Australian jurisdictions. Road traffic law and enforcement issues were explored with the relevant jurisdictional transport and police agencies to provide a greater understanding of how Chinese laws and practices could be enhanced. Working with agencies responsible for public education and awareness campaigns about road safety in Australia also provided relevant information about how to promote road safety at the broader community level in China. Finally, the program provided opportunities to work closely with several world-renowned Australian research centres and key expert researchers to enhance opportunities for ongoing road safety research in China. The overall program provided the opportunity for the first author to develop knowledge in key areas of road safety strategy development, implementation and management which are directly relevant to the current situation in China. This paper describes some main observations and findings from participation in the program.
Abstract:
Big Data presents many challenges related to volume, whether one is interested in studying past datasets or, even more problematically, attempting to work with live streams of data. The most obvious challenge, in a ‘noisy’ environment such as contemporary social media, is to collect the pertinent information, whether that is information for a specific study, tweets that can inform emergency services or other responders to an ongoing crisis, or material that gives an advantage to those involved in prediction markets. Often, such a process is iterative, with keywords and hashtags changing over time, and both collection and analytic methodologies need to be continually adapted to respond to this changing information. While many of the datasets collected and analyzed are preformed, that is, built around a particular keyword, hashtag, or set of authors, they still contain a large volume of information, much of which is unnecessary for the current purpose and/or potentially useful for future projects. Accordingly, this panel considers methods for separating and combining data to optimize big data research and report findings to stakeholders. The first paper considers possible coding mechanisms for incoming tweets during a crisis, taking a large stream of incoming tweets and selecting which of those need to be immediately placed in front of responders, for manual filtering and possible action. The paper suggests two solutions for this: content analysis and user profiling. In the former case, aspects of the tweet are assigned a score to assess its likely relationship to the topic at hand and the urgency of the information, whilst the latter attempts to identify users who either serve as amplifiers of information or are known as an authoritative source.
Through these techniques, the information contained in a large dataset can be filtered down to match the expected capacity of emergency responders, and knowledge of the core keywords or hashtags relating to the current event is constantly refined for future data collection. The second paper is also concerned with identifying significant tweets, but in this case tweets relevant to a particular prediction market: tennis betting. As increasing numbers of professional sportsmen and sportswomen create Twitter accounts to communicate with their fans, information is being shared regarding injuries, form, and emotions that has the potential to impact on future results. As has already been demonstrated with leading US sports, such information is extremely valuable. Tennis, like American Football (NFL) and Baseball (MLB), has paid subscription services which manually filter incoming news sources, including tweets, for information valuable to gamblers, gambling operators, and fantasy sports players. However, whilst such services are still niche operations, much of the value of the information is lost by the time it reaches one of these services. The paper thus considers how information could be filtered from Twitter user lists and hashtag or keyword monitoring, assessing the value of the source, the information, and the prediction markets to which it may relate. The third paper examines methods for collecting Twitter data and following changes in an ongoing, dynamic social movement, such as the Occupy Wall Street movement. It involves the development of technical infrastructure to collect the tweets and make them available for exploration and analysis. A strategy to respond to changes in the social movement is also required, or the resulting tweets will only reflect the discussions and strategies the movement used at the time the keyword list was created; in a way, keyword creation is part strategy and part art.
In this paper we describe strategies for the creation of a social media archive, specifically tweets related to the Occupy Wall Street movement, and methods for continuing to adapt data collection strategies as the movement's presence on Twitter changes over time. We also discuss opportunities and methods to extract smaller slices of data from an archive of social media data to support a multitude of research projects in multiple fields of study. The common theme amongst these papers is that of constructing a dataset, filtering it for a specific purpose, and then using the resulting information to aid future data collection. The intention is that, through the papers presented and subsequent discussion, the panel will inform the wider research community not only about the objectives and limitations of data collection, live analytics, and filtering, but also about current and in-development methodologies that could be adopted by those working with such datasets, and how such approaches could be customized depending on the project stakeholders.
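The content-analysis triage the first paper describes can be sketched in a few lines. This is a minimal illustration only: the keyword list, weights, and capacity cut-off below are invented for the sketch and are not taken from the panel papers.

```python
# Hedged sketch of content-based tweet triage: score tweets against a
# hand-built keyword/weight list, then keep only as many as responders
# can handle. The weights here are illustrative assumptions.
URGENCY_WEIGHTS = {
    "trapped": 3.0, "injured": 3.0, "help": 2.0,
    "fire": 2.0, "flood": 2.0, "road closed": 1.0,
}

def urgency_score(tweet_text: str) -> float:
    """Sum the weights of matched keywords/phrases in a tweet."""
    text = tweet_text.lower()
    return sum(w for kw, w in URGENCY_WEIGHTS.items() if kw in text)

def triage(tweets: list[str], capacity: int) -> list[str]:
    """Return the top-scoring tweets, capped at responder capacity,
    dropping anything that matched no keyword at all."""
    ranked = sorted(tweets, key=urgency_score, reverse=True)
    return [t for t in ranked[:capacity] if urgency_score(t) > 0]
```

In practice the score would combine further signals such as user profiling (amplifiers, authoritative sources), but the capacity-matching step is the key idea: responders see only as many tweets as they can act on.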
Abstract:
Business Process Management (BPM) is rapidly evolving as an established discipline. A number of efforts are underway to formalize the various aspects of BPM practice; creating a formal Body of Knowledge (BoK) is one such effort. Bodies of knowledge are artifacts with a proven track record of accelerating the professionalization of various disciplines. For this to succeed in BPM, it is vital to involve the broader business process community and derive a BoK with the essential characteristics that address the discipline's needs. We argue for the necessity of a comprehensive BoK for the BPM domain, and present a core list of essential features to consider when developing a BoK, based on preliminary empirical evidence. The paper identifies and critiques existing Bodies of Knowledge related to BPM, and firmly calls for an effort to develop a more accurate and sustainable BoK for BPM. An approach for this effort is presented, along with preliminary outcomes.
Abstract:
Many mature term-based or pattern-based approaches have been used in the field of information filtering to generate users' information needs from a collection of documents. A fundamental assumption of these approaches is that all documents in the collection are about a single topic. However, in reality users' interests can be diverse, and the documents in a collection often involve multiple topics. Topic modelling, such as Latent Dirichlet Allocation (LDA), was proposed to generate statistical models that represent multiple topics in a collection of documents, and it has been widely utilized in fields such as machine learning and information retrieval. But its effectiveness in information filtering has not been so well explored. Patterns are generally thought to be more discriminative than single terms for describing documents. However, the enormous number of discovered patterns hinders their effective and efficient use in real applications; selecting the most discriminative and representative patterns from the huge number discovered therefore becomes crucial. To deal with these limitations and problems, this paper proposes a novel information filtering model, the Maximum matched Pattern-based Topic Model (MPBTM). The main distinctive features of the proposed model are: (1) user information needs are generated in terms of multiple topics; (2) each topic is represented by patterns; (3) patterns are generated from topic models and organized in terms of their statistical and taxonomic features; and (4) the most discriminative and representative patterns, called Maximum Matched Patterns, are used to estimate document relevance to the user's information needs in order to filter out irrelevant documents. Extensive experiments are conducted to evaluate the effectiveness of the proposed model using the TREC data collection Reuters Corpus Volume 1.
The results show that the proposed model significantly outperforms both state-of-the-art term-based models and pattern-based models.
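The maximum-matched-pattern idea can be pictured with a minimal sketch. The pattern and weight structures below are invented for illustration and greatly simplify the paper's actual topic-model-based construction: each topic is a list of (term-set pattern, support weight) pairs, and a document's score sums, per topic, the most specific pattern it fully contains.

```python
# Hedged sketch in the spirit of MPBTM-style relevance scoring:
# for each topic, find the largest pattern wholly contained in the
# document and credit its support weight. Data shapes are assumptions.
def max_matched_score(doc_terms: set[str],
                      topic_patterns: list[list[tuple[frozenset, float]]]) -> float:
    """Score a document against per-topic pattern lists."""
    score = 0.0
    for patterns in topic_patterns:
        matched = [(p, w) for p, w in patterns if p <= doc_terms]
        if matched:
            # prefer the longest (most specific) matched pattern,
            # breaking ties by higher support
            _, best_w = max(matched, key=lambda pw: (len(pw[0]), pw[1]))
            score += best_w
    return score
```

Documents scoring below a threshold would then be filtered out as irrelevant; the thresholding policy is not shown here.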
Abstract:
Utilising quantitative and qualitative research methods, the thesis explored how movement patterns were coordinated under different conditions in elite athletes. Results revealed each elite athlete's ability to use multiple, varied information sources to guide successful task performance, highlighting the specific role of surrounding objects in the performance environment in perceptually guiding behaviour. Combining elite coaching knowledge with empirical research enhanced understanding of the role of vision in regulating interceptive behaviours, improving the representative design of training environments. The main findings have been applied to the training design of the Athletics Australia National Jumps Centre at the Queensland Academy of Sport in preparation for the World Indoor Championships, World Championships, and Olympic Games for Australian long and triple jumpers.
Abstract:
The integration of separate, yet complementary, cortical pathways appears to play a role in visual perception and action when intercepting objects. The ventral system is responsible for object recognition and identification, while the dorsal system facilitates continuous regulation of action. This dual-system model implies that empirically manipulating different visual information sources during performance of an interceptive action might lead to the emergence of distinct gaze and movement pattern profiles. To test this idea, we recorded the hand kinematics and eye movements of participants as they attempted to catch balls projected from a novel apparatus that synchronised or de-synchronised accompanying video images of a throwing action and ball trajectory. Results revealed that ball catching performance was less successful when patterns of hand movements and gaze behaviours were constrained by the absence of advanced perceptual information from the thrower's actions. Under these task constraints, participants began tracking the ball later, followed less of its trajectory, and adapted their actions by initiating movements later and moving the hand faster. There were no performance differences when the throwing action image and ball speed were synchronised or de-synchronised, since hand movements were closely linked to information from the ball trajectory. Results are interpreted relative to the two-visual-system hypothesis, demonstrating that accurate interception requires integration of advanced visual information from the kinematics of the throwing action and from the ball flight trajectory.
Abstract:
In the evolving knowledge societies of today, some people are overloaded with information while others are starved of it. Everywhere, people are yearning to freely express themselves and to actively participate in governance processes and cultural exchanges. Universally, there is a deep thirst to understand the complex world around us. Media and Information Literacy (MIL) is a basis for enhancing access to information and knowledge, freedom of expression, and quality education. It describes the skills and attitudes that are needed to value the functions of media and other information providers, including those on the Internet, in societies, and to find, evaluate, and produce information and media content; in other words, it covers the competencies that are vital for people to be effectively engaged in all aspects of development.
Abstract:
Introduction This paper reports on university students' experiences of learning information literacy. Method Phenomenography was selected as the research approach as it describes the experience from the perspective of the study participants, which in this case is a mixture of undergraduate and postgraduate students studying education at an Australian university. Semi-structured, one-on-one interviews were conducted with fifteen students. Analysis The interview transcripts were iteratively reviewed for similarities and differences in students' experiences of learning information literacy. Categories were constructed from an analysis of the distinct features of the experiences that students reported. The categories were grouped into a hierarchical structure that represents students' increasingly sophisticated experiences of learning information literacy. Results The study reveals that students experience learning information literacy in six ways: learning to find information; learning a process to use information; learning to use information to create a product; learning to use information to build a personal knowledge base; learning to use information to advance disciplinary knowledge; and learning to use information to grow as a person and to contribute to others. Conclusions Understanding the complexity of the concept of information literacy, and the collective and diverse range of ways students experience learning information literacy, enables academics and librarians to draw on the range of experiences reported by students to design academic curricula and information literacy education that targets more powerful ways of learning to find and use information.
Abstract:
Introduction In a connected world youth are participating in digital content creating communities. This paper introduces a description of teens' information practices in digital content creating and sharing communities. Method The research design was a constructivist grounded theory methodology. Seventeen interviews with eleven teens were collected and observation of their digital communities occurred over a two-year period. Analysis The data were analysed iteratively to describe teens' interactions with information through open and then focused coding. Emergent categories were shared with participants to confirm conceptual categories. Focused coding provided connections between conceptual categories resulting in the theory, which was also shared with participants for feedback. Results The paper posits a substantive theory of teens' information practices as they create and share content. It highlights that teens engage in the information actions of accessing, evaluating, and using information. They experienced information in five ways: participation, information, collaboration, process, and artefact. The intersection of enacting information actions and experiences of information resulted in five information practices: learning community, negotiating aesthetic, negotiating control, negotiating capacity, and representing knowledge. Conclusion This study contributes to our understanding of youth information actions, experiences, and practices. Further research into these communities might indicate what information practices are foundational to digital communities.
Abstract:
Term-based approaches can extract many features from text documents, but most of these features include noise. Many popular text-mining strategies have been adapted to reduce noisy information in the extracted features; however, text-mining techniques also suffer from the low-frequency problem. The key issue is how to discover relevance features in text documents that fulfil user information needs. To address this issue, we propose a new method to extract specific features from user relevance feedback. The proposed approach comprises two stages. The first stage extracts topics (or patterns) from text documents to focus on interesting topics. In the second stage, topics are deployed to lower-level terms to address the low-frequency problem and find specific terms. The specific terms are determined based on their appearances in relevance feedback and their distribution in topics or high-level patterns. We test the proposed method with extensive experiments on the Reuters Corpus Volume 1 dataset and TREC topics. Results show that the proposed approach significantly outperforms state-of-the-art models.
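One way to picture the second stage is a simple weighting of topic terms by relevance feedback. The frequency-over-spread heuristic below is an invented illustration, not the paper's actual estimator: terms that appear often in relevant documents but spread across few topics are treated as more specific.

```python
from collections import Counter

# Illustrative sketch of a two-stage idea: topics (term groups) focus the
# search, then term weights are derived from relevance-feedback documents.
# The weighting scheme here is an assumption made for the sketch.
def specific_term_weights(topics: list[set[str]],
                          relevant_docs: list[list[str]]) -> dict[str, float]:
    """Weight each topic term by its frequency in relevance feedback,
    normalised by how many topics it appears in (less spread = more specific)."""
    tf = Counter(t for doc in relevant_docs for t in doc)
    spread = Counter(t for topic in topics for t in topic)
    weights = {}
    for topic in topics:
        for term in topic:
            if tf[term]:  # ignore terms absent from the feedback documents
                weights[term] = tf[term] / spread[term]
    return weights
```

A term like "engine" confined to one topic but frequent in the feedback would outrank a term shared across topics, which is the intuition behind deploying topics to lower-level terms.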
Abstract:
Tacit knowledge sharing amongst physicians is known to have a significant impact on the quality of medical decisions. This thesis posits that social media can provide new opportunities for tacit knowledge sharing amongst physicians, and demonstrates this by presenting findings from a review of relevant literature and a qualitative survey conducted with physicians. Using thematic analysis, the study revealed five major themes and over twenty sub-themes as potential contributions of social media to tacit knowledge flow amongst physicians.
Abstract:
Quantum-inspired models have recently attracted increasing attention in Information Retrieval. An intriguing characteristic of the mathematical framework of quantum theory is the presence of complex numbers; however, it is unclear what such numbers would actually represent or mean in Information Retrieval. The goal of this paper is to discuss the role of complex numbers within the context of Information Retrieval. First, we introduce how complex numbers are used in quantum probability theory. Then, we examine van Rijsbergen's proposal of evoking complex-valued representations of information objects. We empirically show that such a representation is unlikely to be effective in practice (confuting its usefulness in Information Retrieval). We then explore alternative proposals which may be more successful at realising the power of complex numbers.
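To make the discussion concrete, here is a toy complex-valued representation with a Hermitian inner-product score. The encoding is invented for illustration and is not van Rijsbergen's proposal. It also exhibits one structural limitation consistent with the paper's scepticism: ranking by the modulus of the inner product is invariant to a global phase, so that degree of freedom cannot affect the ranking.

```python
import cmath

# Toy sketch: documents and queries as vectors of complex amplitudes
# (magnitude = term weight, phase = some extra attribute). Assumed
# encoding, for illustration only.
def complex_inner(u: list[complex], v: list[complex]) -> complex:
    """Hermitian inner product <u, v> = sum_i u_i * conj(v_i)."""
    return sum(a * b.conjugate() for a, b in zip(u, v))

def score(query: list[complex], doc: list[complex]) -> float:
    # Ranking by the modulus discards the global phase of the inner
    # product, so rotating every component of a document by the same
    # phase leaves its score unchanged.
    return abs(complex_inner(query, doc))
```

With real-valued vectors this reduces to the absolute dot product, which is one way to see that the complex degrees of freedom must be used relationally (relative phases between components) to add anything at all.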