920 results for "Swear words"
Abstract:
Aim: To determine whether telephone support using an evidence-based protocol for chronic heart failure (CHF) management will improve patient outcomes and will reduce hospital readmission rates in patients without access to hospital-based management programs. Methods: The rationale and protocol for a cluster-design randomised controlled trial (RCT) of a semi-automated telephone intervention for the management of CHF, the Chronic Heart-failure Assistance by Telephone (CHAT) Study, is described. Care is coordinated by trained cardiac nurses located in Heartline, the national call center of the National Heart Foundation of Australia, in partnership with patients' general practitioners (GPs). Conclusions: The CHAT Study model represents a potentially cost-effective and accessible model for the Australian health system in caring for CHF patients in rural and remote areas. The system of care could also be readily adapted for a range of chronic diseases and health systems. Key words: chronic disease management; chronic heart failure; integrated health care systems; nursing care; rural health services; telemedicine; telenursing
Abstract:
Background Alcoholism imposes a tremendous social and economic burden. There are relatively few pharmacological treatments for alcoholism, with only moderate efficacy, and there is considerable interest in identifying additional therapeutic options. Alcohol exposure alters SK-type potassium channel (SK) function in limbic brain regions. Thus, positive SK modulators such as chlorzoxazone (CZX), a US Food and Drug Administration–approved centrally acting myorelaxant, might enhance SK function and decrease neuronal activity, resulting in reduced alcohol intake. Methods We examined whether CZX reduced alcohol consumption under two-bottle choice (20% alcohol and water) in rats with intermittent access to alcohol (IAA) or continuous access to alcohol (CAA). In addition, we used ex vivo electrophysiology to determine whether SK inhibition and activation can alter firing of nucleus accumbens (NAcb) core medium spiny neurons. Results Chlorzoxazone significantly and dose-dependently decreased alcohol but not water intake in IAA rats, with no effects in CAA rats. Chlorzoxazone also reduced alcohol preference in IAA but not CAA rats and reduced the tendency for rapid initial alcohol consumption in IAA rats. Chlorzoxazone reduction of IAA drinking was not explained by locomotor effects. Finally, NAcb core neurons ex vivo showed enhanced firing, reduced SK regulation of firing, and greater CZX inhibition of firing in IAA versus CAA rats. Conclusions The potent CZX-induced reduction of excessive IAA alcohol intake, with no effect on the more moderate intake in CAA rats, might reflect the greater CZX reduction in IAA NAcb core firing observed ex vivo. Thus, CZX could represent a novel and immediately accessible pharmacotherapeutic intervention for human alcoholism. Key Words: Alcohol intake; intermittent; neuro-adaptation; nucleus accumbens; SK potassium channel
Abstract:
With the emergence of patient-centered care, consumers are becoming more effective managers of their care—in other words, “effective consumers.” To support patients to become effective consumers, a number of strategies to translate knowledge to action (KTA) have been used with varying success. The use of a KTA framework can be helpful to researchers and implementers when framing, planning, and evaluating knowledge translation activities and can potentially lead to more successful activities. This article briefly describes the KTA framework and its use by a team based out of the University of Ottawa to translate evidence-based knowledge to consumers. Using the framework, tailored consumer summaries, decision aids, and a scale to measure consumer effectiveness were created in collaboration with consumers. Strategies to translate the products into action then were selected and implemented. Evaluation of the knowledge tools and products indicates that the products are useful to consumers. Current research is in place to monitor the use of these products, and future research is planned to evaluate the effect of using the knowledge on health outcomes. The KTA framework provides a useful and valuable approach to knowledge translation.
Abstract:
In this research we examined, by means of case studies, the mechanisms by which relationships can be managed and by which communication and cooperation can be enhanced in developing sustainable supply chains. The research was predicated on the contention that the development of a sustainable supply chain depends, in part, on the transfer of knowledge and capabilities from the larger players in the supply chain. A sustainable supply chain requires proactive relationship management and the development of an appropriate organisational culture, and trust. By legitimising individuals' expectations of the type of culture which is appropriate to their company and empowering employees to address mismatches that may occur, a situation can be created whereby the collaborating organisations develop their competences symbiotically and so facilitate a sustainable supply chain. Effective supply chain management enhances organisation performance and competitiveness through the management of operations across organisational boundaries. Relational contracting approaches facilitate the exchange of information and knowledge and build capacity in the supply chain, thus enhancing its sustainability. Relationship management also provides the conditions necessary for the development of collaborative and cooperative relationships. However, subcontractors and suppliers are often not empowered to attend project meetings or to communicate directly with project-based staff. As this is a common phenomenon in the construction industry, one might ask: what are the barriers to implementation of relationship management through the supply chain? In other words, the problem addressed in this research is the engagement of the supply chain through relationship management.
Abstract:
Probabilistic topic models have recently been used for activity analysis in video processing, due to their strong capacity to model both local activities and interactions in crowded scenes. In those applications, a video sequence is divided into a collection of uniform non-overlapping video clips, and the high-dimensional continuous inputs are quantized into a bag of discrete visual words. The hard division of video clips and the hard assignment of visual words lead to problems when an activity is split over multiple clips, or when the most appropriate visual word for quantization is unclear. In this paper, we propose a novel algorithm, which makes use of a soft histogram technique to compensate for the loss of information in the quantization process, and a soft cut technique in the temporal domain to overcome problems caused by separating an activity into two video clips. In the detection process, we also apply a soft decision strategy to detect unusual events. We show that the proposed soft decision approach outperforms its hard decision counterpart in both local and global activity modelling.
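As a rough illustration of the soft-histogram idea above, the sketch below spreads each descriptor's mass over its few nearest visual words instead of committing to a single one. The function name, parameters, and Gaussian weighting are assumptions for illustration, not the paper's exact formulation.

```python
import numpy as np

def soft_histogram(descriptors, codebook, sigma=1.0, k=3):
    """Build a soft bag-of-visual-words histogram (illustrative sketch).

    Each descriptor contributes one unit of mass, split over its k nearest
    codebook entries with Gaussian distance weights, rather than being
    hard-assigned to the single nearest visual word.
    """
    hist = np.zeros(len(codebook))
    for d in descriptors:
        dists = np.linalg.norm(codebook - d, axis=1)
        nearest = np.argsort(dists)[:k]                     # k closest words
        w = np.exp(-dists[nearest] ** 2 / (2 * sigma ** 2))
        hist[nearest] += w / w.sum()                        # distribute the mass
    return hist
```

A descriptor lying between two cluster centres then contributes to both, so the histogram degrades gracefully when the best quantization is ambiguous.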
Abstract:
Models of word meaning, built from a corpus of text, have demonstrated success in emulating human performance on a number of cognitive tasks. Many of these models use geometric representations of words to store semantic associations between words. Often, word order information is not captured in these models. The lack of structural information used by these models has been raised as a weakness when performing cognitive tasks. This paper presents an efficient tensor-based approach to modelling word meaning that builds on recent attempts to encode word order information, while providing flexible methods for extracting task-specific semantic information.
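To make the order-sensitivity point concrete, the hedged sketch below binds word vectors with a toy outer-product (tensor) operation. This is one common tensor encoding of order, not necessarily the paper's exact construction; vector sizes and names are illustrative.

```python
import numpy as np

def bind(a, b):
    """Order-sensitive tensor binding of two word vectors (toy sketch).

    Unlike vector addition, the outer product is not commutative, so the
    representation of "river bank" differs from that of "bank river".
    """
    return np.outer(a, b).ravel()
```

Additive bag-of-words models lose this distinction, which is exactly the structural weakness the abstract points to.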
Abstract:
Continuous user authentication with keystroke dynamics uses character sequences as features. Since users can type characters in any order, it is imperative to find character sequences (n-graphs) that are representative of user typing behavior. Contemporary feature selection approaches do not guarantee selecting frequently typed features, which may cause less accurate statistical user representation. Furthermore, the selected features do not inherently reflect user typing behavior. We propose four statistics-based feature selection techniques that mitigate the limitations of existing approaches. The first technique selects the most frequently occurring features. The other three consider different user typing behaviors by selecting: n-graphs that are typed quickly; n-graphs that are typed with consistent time; and n-graphs that have large time variance among users. We use Gunetti's keystroke dataset and the k-means clustering algorithm for our experiments. The results show that among the proposed techniques, the most-frequent feature selection technique can effectively find user-representative features. We further substantiate our results by comparing the most-frequent feature selection technique with three existing approaches (popular Italian words, common n-graphs, and least-frequent n-graphs). We find that it performs better than the existing approaches after selecting a certain number of most-frequent n-graphs.
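The most-frequent feature selection technique can be sketched roughly as follows. The event format, function names, and restriction to digraphs (2-graphs) are illustrative assumptions, not details taken from the paper.

```python
from collections import defaultdict

def digraph_latencies(events):
    """Collect digraph latencies from (key, timestamp_ms) keystroke events."""
    feats = defaultdict(list)
    for (k1, t1), (k2, t2) in zip(events, events[1:]):
        feats[k1 + k2].append(t2 - t1)   # latency of the digraph k1k2
    return feats

def most_frequent(feats, n):
    """Keep the n digraphs the user typed most often (most-frequent selection)."""
    ranked = sorted(feats, key=lambda g: len(feats[g]), reverse=True)
    return ranked[:n]
```

Selecting by frequency ensures each retained feature has enough latency samples to estimate a stable per-user statistic, which is the intuition behind the technique's accuracy advantage.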
Abstract:
Both Knowledge Management (KM) and Project Management (PM) are known as crucial factors in developing competitive advantage (CA). The PM Office (PMO) is recognized as a strong solution for institutionalizing PM practices in organizations. However, according to the literature there is a significant gap in addressing KM practices in the PMO. In other words, existing PMO maturity models have not been addressed from a KM perspective. This paper discusses investigations of both KM and PM undertaken as an initial part of PhD research on the role of knowledge in the PMO.
Abstract:
In “Thinking Feeling” a camera zooms in and around an animated constellation of words. There are ten words, each repeated one hundred times. The individual words independently pulse and orbit an invisible nucleus. The slow movements of the words and camera are reinforced by an airy, synthesised soundtrack. Over time, various phrasal combinations form and dissolve on screen. A bit like forcing oneself to sleep, “Thinking Feeling” picks at that fine line between controlling and letting go of thoughts. It creates small mantric loops that slip in and out of focus, playing with the liminal zones between the conscious and unconscious, between language and sensation, between gripping and releasing, and between calm and irritation.
Abstract:
It is recognised that individuals do not always respond honestly when completing psychological tests. One of the foremost issues for research in this area is the inability to detect individuals attempting to fake. While a number of strategies have been identified in faking, a commonality of these strategies is the latent role of long term memory. Seven studies were conducted in order to examine whether it is possible to detect the activation of faking related cognitions using a lexical decision task. Study 1 found that engagement with experiential processing styles predicted the ability to fake successfully, confirming the role of associative processing styles in faking. After identifying appropriate stimuli for the lexical decision task (Studies 2A and 2B), Studies 3 to 5 examined whether a cognitive state of faking could be primed and subsequently identified, using a lexical decision task. Throughout the course of these studies, the experimental methodology was increasingly refined in an attempt to successfully identify the relevant priming mechanisms. The results were consistent and robust throughout the three priming studies: faking good on a personality test primed positive faking related words in the lexical decision tasks. Faking bad, however, did not result in reliable priming of negative faking related cognitions. To more completely address potential issues with the stimuli and the possible role of affective priming, two additional studies were conducted. Studies 6A and 6B revealed that negative faking related words were more arousing than positive faking related words, and that positive faking related words were more abstract than negative faking related words and neutral words. Study 7 examined whether the priming effects evident in the lexical decision tasks occurred as a result of an unintentional mood induction while faking the psychological tests. Results were equivocal in this regard. 
This program of research aligned the fields of psychological assessment and cognition to inform the preliminary development and validation of a new tool to detect faking. Consequently, an implicit technique to identify attempts to fake good on a psychological test has been identified, using long established and robust cognitive theories in a novel and innovative way. This approach represents a new paradigm for the detection of individuals responding strategically to psychological testing. With continuing development and validation, this technique may have immense utility in the field of psychological assessment.
Abstract:
Language Modeling (LM) has been successfully applied to Information Retrieval (IR). However, most of the existing LM approaches rely only on term occurrences in documents, queries and document collections. In traditional unigram-based models, terms (or words) are usually considered to be independent. In some recent studies, dependence models have been proposed to incorporate term relationships into LM, so that links can be created between words in the same sentence, and term relationships (e.g. synonymy) can be used to expand the document model. In this study, we further extend this family of dependence models in the following two ways: (1) Term relationships are used to expand the query model instead of the document model, so that the query expansion process can be naturally implemented; (2) We exploit more sophisticated inferential relationships extracted with Information Flow (IF). Information Flow relationships are not simply pairwise term relationships as those used in previous studies, but hold between a set of terms and another term. They allow for context-dependent query expansion. Our experiments conducted on TREC collections show that we can obtain large and significant improvements with our approach. This study shows that LM is an appropriate framework to implement effective query expansion.
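A minimal sketch of the query-model expansion step, assuming a simple linear interpolation between the original query model and related terms. The toy `related` mapping stands in for Information Flow relationships (keyed by a *set* of terms, making the expansion context-dependent); the sketch does not compute IF itself, and all names and the mixing form are assumptions.

```python
def expand_query_model(query_terms, related, lam=0.7):
    """Return an expanded query model P'(w|Q) (illustrative sketch).

    Mixes the maximum-likelihood query model with a distribution over
    related terms looked up for the *whole* query term set, so the same
    word can expand differently in different query contexts.
    """
    p_orig = {t: 1.0 / len(query_terms) for t in query_terms}
    expansion = related.get(frozenset(query_terms), {})   # must sum to 1
    model = {}
    for w in set(p_orig) | set(expansion):
        model[w] = lam * p_orig.get(w, 0.0) + (1 - lam) * expansion.get(w, 0.0)
    return model
```

Because the lookup key is the full term set rather than individual terms, "space program" can pull in terms that neither "space" nor "program" would alone.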
Abstract:
PURPOSE: To examine the visual predictors of falls and injurious falls among older adults with glaucoma. METHODS: Prospective falls data were collected for 71 community-dwelling adults with primary open-angle glaucoma, mean age 73.9 ± 5.7 years, for one year using monthly falls diaries. Baseline assessment of central visual function included high-contrast visual acuity and Pelli-Robson contrast sensitivity. Binocular integrated visual fields were derived from monocular Humphrey Field Analyser plots. Rate ratios (RR) for falls and injurious falls with 95% confidence intervals (CIs) were based on negative binomial regression models. RESULTS: During the one year follow-up, 31 (44%) participants experienced at least one fall and 22 (31%) experienced falls that resulted in an injury. Greater visual impairment was associated with increased falls rate, independent of age and gender. In a multivariate model, more extensive field loss in the inferior region was associated with higher rate of falls (RR 1.57, 95%CI 1.06, 2.32) and falls with injury (RR 1.80, 95%CI 1.12, 2.98), adjusted for all other vision measures and potential confounding factors. Visual acuity, contrast sensitivity, and superior field loss were not associated with the rate of falls; topical beta-blocker use was also not associated with increased falls risk. CONCLUSIONS: Falls are common among older adults with glaucoma and occur more frequently in those with greater visual impairment, particularly in the inferior field region. This finding highlights the importance of the inferior visual field region in falls risk and assists in identifying older adults with glaucoma at risk of future falls, for whom potential interventions should be targeted. KEY WORDS: glaucoma, visual field, visual impairment, falls, injury
Abstract:
This paper presents an experiment designed to investigate whether redundancy in an interface has any impact on the use of complex interfaces by older people and people with low prior experience with technology. The important findings of this study were that older people (65+ years) completed the tasks on the Words-only interface faster than on the Redundant (text and symbols) interface. The rest of the participants completed tasks significantly faster on the Redundant interface. From a cognitive processing perspective, sustained attention (one of the functions of the Central Executive) has emerged as one of the important factors in completing tasks on complex interfaces faster and with fewer errors.
Abstract:
Intuitively, any 'bag of words' approach in IR should benefit from taking term dependencies into account. Unfortunately, for years the results of exploiting such dependencies have been mixed or inconclusive. To improve the situation, this paper shows how the natural language properties of the target documents can be used to transform and enrich the term dependencies into more useful statistics. This is done in three steps. The term co-occurrence statistics of queries and documents are each represented by a Markov chain. The paper proves that such a chain is ergodic, and therefore its asymptotic behavior is unique, stationary, and independent of the initial state. Next, the stationary distribution is taken to model queries and documents, rather than their initial distributions. Finally, ranking is achieved following the customary language modeling paradigm. The main contribution of this paper is to argue why the asymptotic behavior of the document model is a better representation than just the document's initial distribution. A secondary contribution is to investigate the practical application of this representation as queries become increasingly verbose. In the experiments (based on Lemur's search engine substrate) the default query model was replaced by the stable distribution of the query. Just modeling the query this way already resulted in significant improvements over a standard language model baseline. The results were on a par with or better than more sophisticated algorithms that use fine-tuned parameters or extensive training. Moreover, the more verbose the query, the more effective the approach seems to become.
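The stationary-distribution idea can be sketched as follows, assuming a toy row-normalised co-occurrence matrix. When every transition probability is positive the chain is ergodic, so power iteration converges to the unique stationary distribution regardless of the starting model; the matrix values and function name here are illustrative.

```python
import numpy as np

def stationary_distribution(cooc, tol=1e-10, max_iter=1000):
    """Iterate a term co-occurrence Markov chain to its stationary model.

    The co-occurrence counts are row-normalised into a transition matrix P,
    and the query/document model pi is iterated via pi <- pi @ P until it
    stops changing; that fixed point replaces the initial distribution.
    """
    P = cooc / cooc.sum(axis=1, keepdims=True)   # row-stochastic transitions
    pi = np.full(P.shape[0], 1.0 / P.shape[0])   # start from the uniform model
    for _ in range(max_iter):
        nxt = pi @ P
        if np.abs(nxt - pi).max() < tol:
            break
        pi = nxt
    return pi
```

Ergodicity is what licenses the substitution: since the limit does not depend on the initial state, the asymptotic model is a property of the co-occurrence structure itself rather than of any particular term-frequency snapshot.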