890 results for Pain Threshold
Abstract:
Expert searchers engage with information as information brokers, researchers, reference librarians, information architects, faculty who teach advanced search, and in a variety of other information-intensive professions. Their experiences are characterized by a profound understanding of information concepts and skills, and by an agile ability to apply this knowledge to interacting with and having an impact on the information environment. This study explored the learning experiences of searchers to understand the acquisition of search expertise. The research question was: What can be learned about becoming an expert searcher from the learning experiences of proficient novice searchers and highly experienced searchers? The key objectives were: (1) to explore the existence of threshold concepts in search expertise; and (2) to improve our understanding of how search expertise is acquired and how novice searchers, intent on becoming experts, can learn to search in more expertlike ways. The participant sample drew from two population groups: (1) highly experienced searchers with a minimum of 20 years of relevant professional experience, including LIS faculty who teach advanced search, information brokers, and search engine developers (11 subjects); and (2) MLIS students who had completed coursework in information retrieval and online searching and demonstrated exceptional ability (9 subjects). Using these two groups allowed a nuanced understanding of the experience of learning to search in expertlike ways, with data from those who search at a very high level as well as from those who may be actively developing expertise. The study used semi-structured interviews, search tasks with think-aloud narratives, and talk-after protocols. Searches were screen-captured with simultaneous audio-recording of the think-aloud narrative. Data were coded and analyzed both manually and with NVivo 9. Grounded theory allowed categories and themes to emerge from the data.
Categories represented conceptual knowledge and attributes of expert searchers. In accord with grounded theory method, once theoretical saturation was achieved, during the final stage of analysis the data were viewed through the lenses of existing theoretical frameworks. For this study, threshold concept theory (Meyer & Land, 2003) was used to explore which concepts might be threshold concepts. Threshold concepts have been used to explore transformative learning portals in subjects ranging from economics to mathematics. A threshold concept has five defining characteristics: transformative (causing a shift in perception), irreversible (unlikely to be forgotten), integrative (unifying separate concepts), troublesome (initially counter-intuitive), and may be bounded. Themes that emerged provided evidence of four concepts that had the characteristics of threshold concepts. The first three were: information environment (the total information environment is perceived and understood); information structures (content, index structures, and retrieval algorithms are understood); and information vocabularies (fluency in search behaviors related to language, including natural language, controlled vocabulary, and finesse using proximity, truncation, and other language-based tools). The fourth threshold concept was concept fusion, the integration of the other three threshold concepts, further defined by three properties: visioning (anticipating next moves), being light on one's 'search feet' (the dancing property), and a profound ontological shift (identity as searcher). In addition to the threshold concepts, findings were reported that were not concept-based, including praxes and traits of expert searchers. A model of search expertise is proposed with the four threshold concepts at its core that also integrates the traits and praxes elicited from the study, attributes which are likewise long recognized in LIS research as present in professional searchers.
The research provides a deeper understanding of the transformative learning experiences involved in the acquisition of search expertise. It adds to our understanding of search expertise in the context of today's information environment and has implications for teaching advanced search, for research more broadly within library and information science, and for methodologies used to explore threshold concepts.
Abstract:
Introduction. The purpose of this chapter is to address the question raised in the chapter title. Specifically, how can models of motor control help us understand low back pain (LBP)? There are several classes of models that have been used in the past for studying spinal loading, stability, and risk of injury (see Reeves and Cholewicki (2003) for a review of past modeling approaches), but for the purpose of this chapter we will focus primarily on models used to assess motor control and its effect on spine behavior. This chapter consists of four sections. The first section discusses why a shift in modeling approaches is needed to study motor control issues. We will argue that the current approach for studying the spine system is limited and not well-suited for assessing motor control issues related to spine function and dysfunction. The second section will explore how models can be used to gain insight into how the central nervous system (CNS) controls the spine. This segues nicely into the next section, which will address how models of motor control can be used in the diagnosis and treatment of LBP. Finally, the last section will deal with the issue of model verification and validity. This issue is important since modeling accuracy is critical for obtaining useful insight into the behavior of the system being studied. This chapter is not intended to be a critical review of the literature, but instead is intended to capture some of the discussion raised during the 2009 Spinal Control Symposium, with some elaboration on certain issues. Readers interested in more details are referred to the cited publications.
Abstract:
Over the past few decades a major paradigm shift has occurred in the conceptualisation of chronic pain as a complex multidimensional phenomenon. Yet, pain experienced by individuals with a primary disability continues to be understood largely from a traditional biomedical model, despite its inherent limitations. This is reflected in the body of literature on the topic that is primarily driven by positivist assumptions and the search for etiologic pain mechanisms. Conversely, little is known about the experiences of, and meanings attributed to, disability-related pain. Thus, the purpose of this paper is to discuss the use of focus group methodology in elucidating the meanings and experiences of this population. Here, a distinction is made between the method of the focus group and focus group research as methodology. Typically, the focus group is presented as a seemingly atheoretical method of research. Drawing on research undertaken on the impact of chronic pain in people with multiple sclerosis, this paper seeks to theorise the focus group in arguing the methodological congruence of focus group research and the study of pain experience. It is argued that the contributions of group interaction and shared experiences in focus group discussions produce data and insights less accessible through more structured research methods. It is concluded that a biopsychosocial perspective of chronic pain may only ever be appreciated when the person-in-context is the unit of investigation.
Abstract:
This paper explores the theoretical framework of threshold concepts and its potential for LIS education. Threshold concepts are key ideas, often troublesome and counter-intuitive, that are critical to profound understanding of a domain. Once understood, they allow mastery of significant aspects of the domain, opening up new, previously inaccessible ways of thinking. The paper is developed in three parts. First, threshold concept theory is introduced and studies of its use in higher education are described, including emergent work related to LIS. Second, results of a recent study on learning experiences integral to learning to search are presented along with their implications for search expertise and search education, forming a case illustration of what threshold concept theory may contribute to this and other areas of LIS education. Third, the potential of threshold concept theory for LIS education is discussed. The paper concludes that threshold concept theory has much to offer LIS education, particularly for researching critical concepts and competencies, and considerations for a research agenda are put forth.
Abstract:
This chapter describes an innovative method of curriculum design based on combining phenomenographic research, and the associated variation theory of learning, with the notion of disciplinary threshold concepts to focus specialised design attention on the most significant and difficult parts of the curriculum. The method involves three primary stages: (i) identification of disciplinary concepts worthy of intensive curriculum design attention, using the criteria for threshold concepts; (ii) action research into variation in students’ understandings/misunderstandings of those concepts, using phenomenography as the research approach; (iii) design of learning activities to address the poorer understandings identified in the second stage, using variation theory as a guiding framework. The curriculum design method is inherently theory- and evidence-based. It was developed and trialed during a two-year project funded by the Australian Learning and Teaching Council, using the physics and law disciplines as case studies. Disciplinary teachers perceived the method as having a profound impact on their teaching and on their understanding of student learning. Attempts to measure the impact on student learning were less conclusive; teachers often unintentionally deviated from the design when putting it into practice for the first time. Suggestions for improved implementation of the method are discussed.
Abstract:
Dengue virus (DENV) transmission in Australia is driven by weather factors and imported dengue fever (DF) cases. However, uncertainty remains regarding the threshold effects of high-order interactions among weather factors and imported DF cases and the impact of these factors on autochthonous DF. A time-series regression tree model was used to assess the threshold effects of natural temporal variations of weekly weather factors and weekly imported DF cases in relation to incidence of weekly autochthonous DF from 1 January 2000 to 31 December 2009 in Townsville and Cairns, Australia. In Cairns, mean weekly autochthonous DF incidence increased 16.3-fold when the 3-week lagged moving average maximum temperature was <32 °C, the 4-week lagged moving average minimum temperature was ≥24 °C and the sum of imported DF cases in the previous 2 weeks was >0. When the 3-week lagged moving average maximum temperature was ≥32 °C and the other two conditions mentioned above remained the same, mean weekly autochthonous DF incidence only increased 4.6-fold. In Townsville, the mean weekly incidence of autochthonous DF increased 10-fold when 3-week lagged moving average rainfall was ≥27 mm, but it only increased 1.8-fold when rainfall was <27 mm during January to June. Thus, we found different responses of autochthonous DF incidence to weather factors and imported DF cases in Townsville and Cairns. Imported DF cases may also trigger and enhance local outbreaks under favorable climate conditions.
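The threshold effects described above come from a regression tree, which repeatedly splits the data at the predictor value that best separates outcome levels. A minimal sketch of a single such split is shown below; the rainfall and case numbers are hypothetical stand-ins (loosely echoing the Townsville rainfall finding), and `best_threshold_split` is an illustrative helper, not the study's code:

```python
def best_threshold_split(x, y):
    """Single regression-tree split: find the threshold on x that minimizes
    the within-group sum of squared errors (SSE) of y."""
    best_t, best_sse = None, float("inf")
    for t in sorted(set(x)):
        left  = [yi for xi, yi in zip(x, y) if xi < t]
        right = [yi for xi, yi in zip(x, y) if xi >= t]
        if not left or not right:
            continue
        sse = sum((yi - sum(left) / len(left)) ** 2 for yi in left) \
            + sum((yi - sum(right) / len(right)) ** 2 for yi in right)
        if sse < best_sse:
            best_t, best_sse = t, sse
    return best_t, best_sse

# Hypothetical 3-week lagged moving-average rainfall (mm) and weekly case counts
rain  = [5, 10, 15, 20, 25, 30, 35, 40, 45, 50]
cases = [1, 1, 2, 1, 2, 9, 11, 10, 12, 11]
print(best_threshold_split(rain, cases)[0])   # → 30
```

A full regression tree applies this split recursively to each branch, which is how interactions among several lagged weather variables and imported cases emerge as nested thresholds.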
Abstract:
Background: Appropriate disposition of emergency department (ED) patients with chest pain is dependent on clinical evaluation of risk. A number of chest pain risk stratification tools have been proposed. The aim of this study was to compare the predictive performance for major adverse cardiac events (MACE) using risk assessment tools from the National Heart Foundation of Australia (HFA), the Goldman risk score and the Thrombolysis in Myocardial Infarction risk score (TIMI RS). Methods: This prospective observational study evaluated ED patients aged ≥30 years with non-traumatic chest pain for which no definitive non-ischemic cause was found. Data collected included demographic and clinical information, investigation findings and occurrence of MACE by 30 days. The outcome of interest was the comparative predictive performance of the risk tools for MACE at 30 days, as analyzed by receiver operating characteristic (ROC) curves. Results: Two hundred eighty-one patients were studied; the rate of MACE was 14.1%. The area under the curve (AUC) of the HFA, TIMI RS and Goldman tools for the endpoint of MACE was 0.54, 0.71 and 0.67, respectively, with the difference between the tools in predictive ability for MACE being highly significant [χ²(3) = 67.21, N = 276, p < 0.0001]. Conclusion: The TIMI RS and Goldman tools performed better than the HFA in this undifferentiated ED chest pain population, but selection of cutoffs balancing sensitivity and specificity was problematic. There is an urgent need for validated risk stratification tools specific to the ED chest pain population.
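The AUC comparison reported above can be sketched using the rank-statistic form of the AUC: the probability that a randomly chosen MACE case receives a higher score than a randomly chosen non-case, with ties counted as half. The scores and labels below are hypothetical stand-ins, not study data:

```python
def auc(scores, labels):
    """AUC via its rank-statistic form: P(case score > non-case score),
    counting ties as half. labels: 1 = event (MACE), 0 = no event."""
    pos = [s for s, l in zip(scores, labels) if l == 1]
    neg = [s for s, l in zip(scores, labels) if l == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

labels   = [1, 1, 1, 0, 0, 0, 0, 0]   # 1 = MACE within 30 days (hypothetical)
timi_rs  = [5, 4, 3, 2, 1, 3, 0, 1]   # hypothetical TIMI RS values
hfa_risk = [2, 1, 2, 2, 1, 2, 1, 2]   # hypothetical HFA risk tiers

print(round(auc(timi_rs, labels), 2), round(auc(hfa_risk, labels), 2))  # → 0.97 0.53
```

The toy output mirrors the study's pattern (TIMI RS discriminating well, HFA near chance); the cutoff problem noted in the Conclusion arises because a single AUC says nothing about which score threshold gives an acceptable sensitivity/specificity trade-off.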
Abstract:
In 2012 the New Zealand government spent $3.4 billion, or nearly $800 per person, on responses to crime via the justice system. Research shows that much of this spending does little to reduce the chances of re-offending. Relatively little money is spent on victims, the rehabilitation of offenders or to support the families of offenders. This book is based on papers presented at the Costs of Crime forum held by the Institute of Policy Studies in February 2011. It presents lessons from what is happening in Australia, Britain and the United States and focuses on how best to manage crime, respond to victims, and reduce offending in a cost-effective manner in a New Zealand context. It is clear that strategies are needed that are based on better research and a more informed approach to policy development. Such strategies must assist victims constructively while also reducing offending. Using public resources to lock as many people in our prisons as possible cannot be justified by the evidence and is fiscally unsustainable; nor does such an approach make society safer. To reduce the costs of crime we need to reinvest resources in effective strategies to build positive futures for those at risk and the communities needed to sustain them.
Abstract:
This book showcases the development and evaluation of innovative examples of pain management initiatives by advanced practitioners. It considers each service development or community initiative both in terms of advanced practice nursing and pain management. There is a wide range of examples of innovation in pain management included - from the introduction of ketamine use in one trust, to wider issues around meeting the needs of pain management in the community. The book considers issues including use of research, education and interprofessional working in the advanced practitioner role. Each chapter looks at development of the service, challenges of implementation, evaluation of the service's success and justifying the importance of the advanced nurse in the service's achievements.
Abstract:
This chapter contains sections titled: Introduction; Advanced practice; The context of pain management: definitions and prevalence; Advancing practice in pain management; Bringing together advanced practice and pain management; Conclusions; References.
Abstract:
This chapter contains sections titled: Introduction; Acute pain; Chronic pain; Rationale for service development; Evaluation: use of audit and CPD; Justifying the advanced nursing contribution to develop nurse prescribing in pain management; Conclusions; References.
Abstract:
This chapter contains sections titled: Introduction; Advancing practice in pain management; Conclusions; References.
Abstract:
Cryptosystems based on the hardness of lattice problems have recently acquired much importance due to their average-case to worst-case equivalence, their conjectured resistance to quantum cryptanalysis, their ease of implementation and increasing practicality, and, lately, their promising potential as a platform for constructing advanced functionalities. In this work, we construct “Fuzzy” Identity Based Encryption from the hardness of the Learning With Errors (LWE) problem. We note that for our parameters, the underlying lattice problems (such as gapSVP or SIVP) are assumed to be hard to approximate within subexponential factors for adversaries running in subexponential time. We give CPA and CCA secure variants of our construction, for small and large universes of attributes. All our constructions are secure against selective-identity attacks in the standard model. Our construction is made possible by observing certain special properties that secret sharing schemes need to satisfy in order to be useful for Fuzzy IBE. We also discuss some obstacles towards realizing lattice-based attribute-based encryption (ABE).
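As a rough illustration of the LWE assumption underlying such constructions, the sketch below implements a toy, deliberately insecure, symmetric-key Regev-style encryption of a single bit: the receiver can remove the noisy inner products using the secret, while an eavesdropper faces the LWE problem. The parameters are illustratively tiny; this is not the paper's Fuzzy IBE scheme:

```python
import random

q, n, m = 257, 8, 10        # modulus, secret dimension, sample count (toy sizes)
s = [random.randrange(q) for _ in range(n)]      # secret vector

def lwe_sample():
    """One noisy sample (a, b) with b = <a, s> + e (mod q), e small."""
    a = [random.randrange(q) for _ in range(n)]
    e = random.randrange(-1, 2)                  # noise in {-1, 0, 1}
    return a, (sum(x * y for x, y in zip(a, s)) + e) % q

def encrypt_bit(bit, samples):
    """Regev-style: sum a random subset of samples, shift b by bit * q//2."""
    a_sum, b_sum = [0] * n, bit * (q // 2)
    for a, b in samples:
        if random.random() < 0.5:
            a_sum = [(x + y) % q for x, y in zip(a_sum, a)]
            b_sum = (b_sum + b) % q
    return a_sum, b_sum

def decrypt_bit(ct):
    a, b = ct
    d = (b - sum(x * y for x, y in zip(a, s))) % q
    # Accumulated noise is at most m in magnitude, far below q//4,
    # so d lands near 0 for bit 0 and near q//2 for bit 1.
    return 1 if q // 4 < d < 3 * q // 4 else 0

samples = [lwe_sample() for _ in range(m)]
print(decrypt_bit(encrypt_bit(1, samples)))      # → 1
```

Real lattice schemes, including the one in this paper, use far larger dimensions and moduli, discrete Gaussian noise, and public-key (and here identity-based) machinery on top of this basic noisy-inner-product structure.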