999 results for Effective rigidity
Abstract:
With the emergence of patient-centered care, consumers are becoming more effective managers of their care, in other words, "effective consumers." To help patients become effective consumers, a number of strategies to translate knowledge to action (KTA) have been used with varying success. A KTA framework can help researchers and implementers frame, plan, and evaluate knowledge translation activities, and can potentially lead to more successful activities. This article briefly describes the KTA framework and its use by a team based at the University of Ottawa to translate evidence-based knowledge to consumers. Using the framework, tailored consumer summaries, decision aids, and a scale to measure consumer effectiveness were created in collaboration with consumers. Strategies to translate the products into action were then selected and implemented. Evaluation of the knowledge tools and products indicates that they are useful to consumers. Research is under way to monitor the use of these products, and future research is planned to evaluate the effect of using the knowledge on health outcomes. The KTA framework provides a useful and valuable approach to knowledge translation.
Abstract:
As the popularity of video as an information medium rises, the amount of video content that we produce and archive keeps growing. This creates a demand for shorter representations of videos to assist the task of video retrieval. The traditional solution is to let humans watch these videos and write textual summaries based on what they saw. This summarisation process, however, is time-consuming, and much of the useful audio-visual information contained in the original video can be lost. Video summarisation aims to turn a full-length video into a more concise version that preserves as much information as possible. The core problem of video summarisation is to manage the trade-off between how concise and how representative a summary is. There are also usability concerns that need to be addressed in a video summarisation scheme. To solve these problems, this research aims to create an automatic video summarisation framework that combines and improves on existing video summarisation techniques, with a focus on practicality and user satisfaction. We also investigate the need for different summarisation strategies in different kinds of videos, for example news, sports, or TV series. Finally, we develop a video summarisation system based on the framework, which is validated by subjective and objective evaluation. The evaluation results show that the proposed framework is effective for creating video skims, producing a high user satisfaction rate with reasonably low computing requirements. We also demonstrate that the techniques presented in this research can be used to visualise video summaries in the form of web pages showing useful information from both the video itself and external sources.
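To make the conciseness/representativeness trade-off concrete, here is a minimal sketch (not the thesis's actual system) of greedy shot selection under a duration budget: each shot is added only if it buys the largest additional coverage of the full video's content. The feature vectors, durations, and budget are hypothetical inputs.

```python
import numpy as np

def greedy_video_skim(shot_features, shot_durations, budget):
    """Greedily pick shots that best cover the whole video's content
    (representativeness) without exceeding a duration budget (conciseness).
    shot_features: (n_shots, d) array of per-shot descriptors."""
    n = len(shot_features)
    # Cosine similarity between every pair of shots.
    unit = shot_features / np.linalg.norm(shot_features, axis=1, keepdims=True)
    sim = unit @ unit.T
    selected, covered, total = [], np.zeros(n), 0.0
    while True:
        best, best_gain = None, 0.0
        for i in range(n):
            if i in selected or total + shot_durations[i] > budget:
                continue
            # Marginal gain: how much better shot i "explains" all shots.
            gain = np.maximum(sim[i], covered).sum() - covered.sum()
            if gain > best_gain:
                best, best_gain = i, gain
        if best is None:
            break
        selected.append(best)
        covered = np.maximum(sim[best], covered)
        total += shot_durations[best]
    return sorted(selected)  # keep temporal order in the skim
```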
Abstract:
Multivariate volatility forecasts are an important input to many financial applications, in particular portfolio optimisation problems. Given the number of models available and the range of loss functions used to discriminate between them, selecting the optimal forecasting model is challenging. The aim of this thesis is to thoroughly investigate how effective commonly used statistical (MSE and QLIKE) and economic (portfolio variance and portfolio utility) loss functions are at discriminating between competing multivariate volatility forecasts. An analytical investigation of the loss functions is performed to determine whether they identify the correct forecast as the best forecast. This is followed by an extensive simulation study that examines the ability of the loss functions to consistently rank forecasts, and their statistical power within tests of predictive ability. For the tests of predictive ability, the model confidence set (MCS) approach of Hansen, Lunde and Nason (2003, 2011) is employed. An empirical study then investigates whether the simulation findings hold in a realistic setting. Building on these earlier studies, a major empirical study seeks to identify the set of superior multivariate volatility forecasting models from 43 models that use either daily squared returns or realised volatility to generate forecasts. This study also assesses how the choice of volatility proxy affects the ability of the statistical loss functions to discriminate between forecasts. Analysis of the loss functions shows that QLIKE, MSE and portfolio variance can discriminate between multivariate volatility forecasts, while portfolio utility cannot. An examination of the effective loss functions shows that all of them can identify the correct forecast at a point in time; however, their ability to discriminate between competing forecasts varies. QLIKE is identified as the most effective loss function, followed by portfolio variance and then MSE. The major empirical analysis reports that the optimal set of multivariate volatility forecasting models includes forecasts generated from both daily squared returns and realised volatility. Furthermore, it finds that the volatility proxy affects the statistical loss functions' ability to discriminate between forecasts in tests of predictive ability. These findings deepen our understanding of how to choose between competing multivariate volatility forecasts.
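For reference, a minimal sketch of the three loss functions the analysis finds effective, using their standard definitions from the forecast-evaluation literature; the forecast H and the volatility proxy Sigma (e.g. a realised covariance matrix) are hypothetical inputs.

```python
import numpy as np

def mse_loss(H, Sigma):
    """Matrix MSE: squared Frobenius distance between the forecast H
    and the volatility proxy Sigma."""
    return np.sum((H - Sigma) ** 2)

def qlike_loss(H, Sigma):
    """Multivariate QLIKE: log|H| + tr(H^{-1} Sigma), minimised in
    expectation by the true conditional covariance."""
    _, logdet = np.linalg.slogdet(H)          # H assumed positive definite
    return logdet + np.trace(np.linalg.solve(H, Sigma))

def portfolio_variance_loss(H, Sigma):
    """Economic loss: realised variance of the global minimum-variance
    portfolio built from the forecast H, evaluated under the proxy."""
    ones = np.ones(H.shape[0])
    w = np.linalg.solve(H, ones)
    w /= w.sum()                              # w = H^{-1} 1 / (1' H^{-1} 1)
    return w @ Sigma @ w
```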
Abstract:
Individuals, community organisations and industry have always been involved to varying degrees in efforts to address the Queensland road toll. Traditionally, road crash prevention efforts have been led by state and local government organisations. While community and industry groups have sometimes become involved (e.g. the Driver Reviver campaign), their efforts have largely been uncoordinated and under-resourced. A common strength of these initiatives lies in the energy, enthusiasm and persistence of community-based efforts. Conversely, a weakness has sometimes been a lack of knowledge, awareness or prioritisation of evidence-based interventions, or of the capacity to build on collaborative efforts. In 2000, the Queensland University of Technology's Centre for Accident Research and Road Safety – Queensland (CARRS-Q) identified this issue as an opportunity to bridge practice and research, and began acknowledging a selection of these initiatives, in partnership with the RACQ, through the Queensland Road Safety Awards program. After nine years it became apparent there was a need to strengthen this connection, and the Centre established a Community Engagement Workshop in 2009 as part of the overall Awards program. With the aim of giving community participants opportunities to see, hear and discuss the experiences of others, this event was further developed in 2010, and, with the collaboration of the Queensland Department of Transport and Main Roads, the RACQ, the Queensland Police Service and Leighton Contractors Pty Ltd, a stand-alone Queensland Road Safety Awards Community Engagement Workshop was held that year. Each collaborating organisation recognised a need to mobilise the community through effective information and knowledge sharing, and recognised that learning and discussion can influence lasting behaviour change and action in this often emotive, yet not always evidence-based, area. This free event featured a number of speakers representing successful projects from around Australia and overseas. Attendees were encouraged to interact with the speakers, to ask questions and, most importantly, to build connections with other attendees, forming a 'community road safety army' working throughout Australia on projects underpinned by evaluated research. The workshop facilitated the integration of research, policy and grass-roots action, enhancing the success of community road safety initiatives. For the collaboration partners, the event enabled them to transfer their knowledge in an engaged, more personal communication process. An analysis of the success factors for this event identified openness to community groups and individuals, relevance of content to local initiatives, generous support through the provision of online materials, and ongoing communication with key staff members as critical. It also supports the view that the university can directly provide both the leadership and the research needed for effective and credible community-based initiatives to address injury and death on the roads.
Abstract:
Evasive change-of-direction manoeuvres (agility skills) are a fundamental ability in rugby union. In this study, we explored the attributes of agility skill execution as they relate to effective attacking strategies in rugby union. Seven Super 14 games were coded using variables that assessed team patterns and individual movement characteristics during attacking ball carries. The results indicated that tackle-breaks are a key determinant of try-scoring ability and team success in rugby union. Tackle-breaks were promoted when the attacking ball carrier received the ball at high speed, at least two body lengths from the defence line, against an isolated defender. Furthermore, executing a side-step evasive manoeuvre at a change-of-direction angle of 20–60° and a distance of one to two body lengths from the defence, and then straightening the running line after the initial direction change at an angle of 20–60°, was associated with tackle-breaks. This study provides critical insight into the attributes of agility skill execution that are associated with effective ball carries in rugby union.
Abstract:
Intuitively, any 'bag of words' approach in IR should benefit from taking term dependencies into account. Unfortunately, for years the results of exploiting such dependencies have been mixed or inconclusive. To improve the situation, this paper shows how the natural language properties of the target documents can be used to transform and enrich the term dependencies into more useful statistics. This is done in three steps. First, the term co-occurrence statistics of queries and documents are each represented by a Markov chain. The paper proves that such a chain is ergodic, and therefore its asymptotic behavior is unique, stationary, and independent of the initial state. Next, the stationary distribution is taken to model queries and documents, rather than their initial distributions. Finally, ranking is achieved following the customary language modeling paradigm. The main contribution of this paper is to argue why the asymptotic behavior of the document model is a better representation than just the document's initial distribution. A secondary contribution is to investigate the practical application of this representation as queries become increasingly verbose. In the experiments (based on Lemur's search engine substrate), the default query model was replaced by the stable distribution of the query. Modeling the query this way already resulted in significant improvements over a standard language model baseline. The results were on a par with, or better than, more sophisticated algorithms that use fine-tuned parameters or extensive training. Moreover, the more verbose the query, the more effective the approach seems to become.
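A minimal sketch of the core step, computing the stationary distribution of a term co-occurrence Markov chain by power iteration; the co-occurrence matrix is a hypothetical input, and the paper's actual construction of the chain may differ.

```python
import numpy as np

def stationary_distribution(cooc, tol=1e-10, max_iter=1000):
    """Stationary distribution of the Markov chain whose transition
    probabilities are row-normalised term co-occurrence counts.
    For an ergodic chain this limit is unique and independent of the
    initial state, which is what justifies using it as the model.
    cooc: (V, V) nonnegative count matrix; assumes every row has at
    least one nonzero entry (smooth the counts first otherwise)."""
    P = cooc / cooc.sum(axis=1, keepdims=True)   # row-stochastic matrix
    pi = np.full(P.shape[0], 1.0 / P.shape[0])   # arbitrary start state
    for _ in range(max_iter):
        new = pi @ P
        if np.abs(new - pi).sum() < tol:
            break
        pi = new
    return pi   # used in place of the raw query/document distribution
```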
Abstract:
Building an efficient and effective search engine is a challenging task. In this paper, we present the efficiency and effectiveness of our search engine at the INEX 2009 Efficiency and Ad Hoc Tracks. We developed a simple and effective pruning method for fast query evaluation, and used a two-step process for Ad Hoc retrieval. The overall results from both tracks show that our search engine performs very competitively in terms of both efficiency and effectiveness.
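The abstract does not spell out the pruning method, so the following is only a generic illustration of score-bound pruning for fast top-k query evaluation (a MaxScore-style idea), not the authors' technique; all names and data structures are hypothetical.

```python
import heapq

def topk_with_pruning(query_terms, postings, max_score, k=10):
    """Generic top-k evaluation with term-bound pruning.
    postings[t]  : list of (doc_id, score) pairs for term t.
    max_score[t] : upper bound on any single score for term t.
    Terms are processed in decreasing impact order; once the remaining
    terms' combined bound cannot lift a new document into the current
    top k, those terms only rescore documents already seen."""
    terms = sorted(query_terms, key=lambda t: max_score[t], reverse=True)
    acc = {}                                    # partial document scores
    for i, t in enumerate(terms):
        remaining = sum(max_score[u] for u in terms[i:])
        threshold = (sorted(acc.values(), reverse=True)[k - 1]
                     if len(acc) >= k else 0.0)
        if remaining <= threshold:
            # Pruning step: no unseen document can reach the top k now.
            for d, s in postings[t]:
                if d in acc:
                    acc[d] += s
        else:
            for d, s in postings[t]:
                acc[d] = acc.get(d, 0.0) + s
    return heapq.nlargest(k, acc.items(), key=lambda x: x[1])
```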
Abstract:
Most recommendation methods employ item-item similarity measures or use ratings data to generate recommendations. These methods rely on traditional two-dimensional models to find interrelationships between similar users and products. This paper proposes a novel recommendation method that uses a multi-dimensional model, the tensor, to group similar users based on common search behaviour, and then finds associations within such groups for making effective inter-group recommendations. Web log data is multi-dimensional. Unlike vector-based methods, tensors can strongly correlate and uncover latent relationships between similar instances consisting of users and searches. Non-redundant rules from such user-search associations are then used for making recommendations to the users.
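As a minimal sketch of the multi-dimensional model this method starts from, the following builds a user x query-term x item tensor from web log records instead of flattening them into a two-dimensional user-item matrix; the log format and dimension sizes are hypothetical.

```python
import numpy as np

def build_search_tensor(log_rows, n_users, n_terms, n_items):
    """Build a user x query-term x item tensor from web log records.
    Each cell counts how often a user issued a term and interacted
    with an item, keeping all three dimensions intact.
    log_rows: iterable of (user_id, term_id, item_id) tuples."""
    T = np.zeros((n_users, n_terms, n_items))
    for user, term, item in log_rows:
        T[user, term, item] += 1.0
    return T

# Hypothetical toy log: user 0 searched term 2 and viewed item 1 twice, etc.
logs = [(0, 2, 1), (0, 2, 1), (1, 0, 3)]
tensor = build_search_tensor(logs, n_users=2, n_terms=3, n_items=4)
```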
Abstract:
With the growth of the Web, E-commerce activities are also becoming popular. Product recommendation is an effective way of marketing a product to potential customers. Based on a user's previous searches, most recommendation methods employ two-dimensional models to find relevant items, which are then recommended to the user. Too many irrelevant recommendations, moreover, worsen the information overload problem for the user. This happens because such vector- and matrix-based models are unable to find the latent relationships that exist between users and searches. Identifying user behaviour is a complex process, and usually involves comparing the searches a user has made. In most cases, traditional vector- and matrix-based methods are used to find the prominent features searched by a user. In this research we employ tensors to find the relevant features searched by users; these features are then used for making recommendations. Evaluation on real datasets shows that such recommendations are more effective than those produced by vector- and matrix-based methods.
Abstract:
Search log data is multi-dimensional data consisting of the searches of multiple users across many searched parameters. This data can be used to identify a user's interest in an item or object being searched. Identifying the strongest interests of a Web user from search log data is a complex process. Based on a user's previous searches, most recommendation methods employ two-dimensional models to find relevant items, which are then recommended to the user. Two-dimensional data models, when used to mine knowledge from such multi-dimensional data, may not give good mappings between users and their searches. The major problem with such models is that they are unable to find the latent relationships that exist between the different searched dimensions. In this research work, we utilize tensors to model the various searches made by a user. This high-dimensional data model is then used to extract the relationships between the various dimensions and to find the prominent searched components. To achieve this, we used the popular tensor decomposition methods PARAFAC, Tucker and HOSVD. All experiments and evaluations were done on real datasets, and they clearly show the effectiveness of tensor models in finding prominent searched components compared with widely used two-dimensional data models. The top-rated searched components are then given as recommendations to users.
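A minimal numpy-only sketch of truncated HOSVD, one of the decompositions named above, as it might be applied to such a search tensor; the ranks and the tensor itself are hypothetical inputs, and this is not the authors' implementation.

```python
import numpy as np

def unfold(T, mode):
    """Matricise tensor T along the given mode."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def hosvd(T, ranks):
    """Truncated higher-order SVD. Returns the core tensor and one
    factor matrix per mode; the leading columns of each factor are
    the prominent components (e.g. users, searched parameters, items)
    of that dimension."""
    factors = []
    for mode, r in enumerate(ranks):
        # Leading left singular vectors of each mode unfolding.
        U, _, _ = np.linalg.svd(unfold(T, mode), full_matrices=False)
        factors.append(U[:, :r])
    core = T
    for mode, U in enumerate(factors):
        # n-mode product: multiply the core by U^T along this mode.
        core = np.moveaxis(
            np.tensordot(U.T, np.moveaxis(core, mode, 0), axes=1), 0, mode)
    return core, factors

# Hypothetical usage on a small user x term x item tensor.
T = np.random.rand(5, 4, 6)
core, (users, terms, items) = hosvd(T, ranks=(2, 2, 2))
```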