989 results for Standardised testing
Predicting intentions and behaviours in populations with or at-risk of diabetes: A systematic review
Abstract:
Purpose To systematically review Theory of Planned Behaviour (TPB) studies predicting self-care intentions and behaviours in populations with or at risk of diabetes. Methods A systematic review using six electronic databases was conducted in 2013. A standardised protocol was used for appraisal. Study eligibility criteria included (i) a measure of behaviour for healthy eating, physical activity, glucose monitoring, or medication use; (ii) the TPB variables; and (iii) the TPB tested in populations with or at risk of diabetes. Results Sixteen studies were appraised for testing the utility of the TPB. Studies included cross-sectional (n=7), prospective (n=5) and randomised controlled trials (n=4). Intention (18%-76%) was the most predictive construct for all behaviours. Explained variance for intentions was similar across cross-sectional (28%-76%), prospective (28%-73%) and RCT studies (18%-63%). RCTs (18%-43%) provided slightly stronger evidence for predicting behaviour. Conclusions Few studies tested the predictability of the TPB in populations with or at risk of diabetes. This review highlighted differences in the predictive utility of the TPB, suggesting that the model is behaviour and population specific. Findings on key determinants of specific behaviours contribute to a better understanding of mechanisms of behaviour change and are useful in designing targeted behavioural interventions for different diabetes populations.
Abstract:
Aromatherapy has been found to have some effectiveness in treating conditions such as postoperative nausea and vomiting; however, unless clinicians are aware of and convinced by this evidence, it is unlikely they will choose to use it with their patients. The aim of this study was to test and modify an existing tool, Martin and Furnham’s Beliefs About Aromatherapy Scale, in order to make it relevant and meaningful for use with a population of nurses and midwives working in an acute hospital setting. A Delphi process was used to modify the tool; it was then tested in a population of nurses and midwives, and exploratory factor analysis was conducted. The modified tool is reliable and valid for measuring beliefs about aromatherapy in this population.
Abstract:
Many researchers in the field of civil structural health monitoring have developed and tested their methods on simple to moderately complex laboratory structures such as beams, plates, frames, and trusses. Field work has also been conducted by many researchers and practitioners on more complex operating bridges. Most laboratory structures do not adequately replicate the complexity of truss bridges. This paper presents some preliminary results of experimental modal testing and analysis of the bridge model presented in the companion paper, using the peak picking method, and compares these results with those of a simple numerical model of the structure. Three dominant modes of vibration were experimentally identified below 15 Hz. The mode shapes and order of the modes matched those of the numerical model; however, the frequencies did not match.
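The peak picking method mentioned above identifies natural frequencies as local maxima of the measured response spectrum. A minimal sketch of the idea is shown below; the sampling parameters, the synthetic three-mode signal, and the `pick_peaks` helper are illustrative assumptions, not the bridge study's actual data or software.

```python
# Hypothetical illustration of peak picking: natural frequencies are taken
# as the peaks of the magnitude spectrum of a measured response.
import numpy as np

def pick_peaks(freqs, spectrum, threshold=0.1):
    """Return frequencies at local maxima exceeding a fraction of the peak."""
    peaks = []
    for i in range(1, len(spectrum) - 1):
        if (spectrum[i] > spectrum[i - 1] and spectrum[i] > spectrum[i + 1]
                and spectrum[i] > threshold * spectrum.max()):
            peaks.append(freqs[i])
    return peaks

# Synthetic acceleration record with three dominant modes below 15 Hz
fs = 200.0                                  # sampling rate (Hz), assumed
t = np.arange(0, 20, 1 / fs)                # 20 s record
signal = sum(np.sin(2 * np.pi * f * t) for f in (4.0, 8.0, 12.5))

spectrum = np.abs(np.fft.rfft(signal))
freqs = np.fft.rfftfreq(len(t), 1 / fs)
modes = pick_peaks(freqs, spectrum)         # ≈ [4.0, 8.0, 12.5] Hz
```

In practice the spectrum would be averaged over several records, and closely spaced or heavily damped modes would require more robust identification methods than simple peak picking.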
Abstract:
While the implementation of the IEC 61850 standard has significantly enhanced the performance of communications in electrical substations, it has also increased the complexity of the system. Subsequently, these added elaborations have introduced new challenges in relation to the skills and tools required for the design, testing and maintenance of 61850-compatible substations. This paper describes a practical experience of testing a protection relay using non-conventional test equipment; in addition, it proposes a third-party software technique to reveal the contents of the packets transferred on the substation network. Using this approach, the standard objects can be linked to, and interpreted in terms of, what end-users normally see in the IED and test equipment proprietary software programs.
Abstract:
The increasing amount of information that is annotated against standardised semantic resources offers opportunities to incorporate sophisticated levels of reasoning, or inference, into the retrieval process. In this position paper, we reflect on the need to incorporate semantic inference into retrieval (in particular for medical information retrieval) as well as previous attempts that have been made so far with mixed success. Medical information retrieval is a fertile ground for testing inference mechanisms to augment retrieval. The medical domain offers a plethora of carefully curated, structured, semantic resources, along with well established entity extraction and linking tools, and search topics that intuitively require a number of different inferential processes (e.g., conceptual similarity, conceptual implication, etc.). We argue that integrating semantic inference in information retrieval has the potential to uncover a large amount of information that otherwise would be inaccessible; but inference is also risky and, if not used cautiously, can harm retrieval.
Abstract:
One of the objectives of this study was to evaluate soil testing equipment based on its capability of measuring in-place stiffness or modulus values. As design criteria transition from empirical to mechanistic-empirical, soil test methods and equipment that measure properties such as stiffness and modulus and how they relate to Florida materials are needed. Requirements for the selected equipment are that it be portable, cost effective, reliable, accurate, and repeatable. A second objective is that the selected equipment measures soil properties without the use of nuclear materials. The current device used to measure soil compaction is the nuclear density gauge (NDG). Equipment evaluated in this research included lightweight deflectometers (LWD) from different manufacturers, a dynamic cone penetrometer (DCP), a GeoGauge, a Clegg impact soil tester (CIST), a Briaud compaction device (BCD), and a seismic pavement analyzer (SPA). Evaluations were conducted over ranges of measured densities and moisture contents. Testing (Phases I and II) was conducted in a test box and test pits. Phase III testing was conducted on materials found on five construction projects located in the Jacksonville, Florida, area. Phase I analyses determined that the GeoGauge had the lowest overall coefficient of variance (COV). In ascending order of COV were the accelerometer-type LWD, the geophone-type LWD, the DCP, the BCD, and the SPA, which had the highest overall COV. As a result, the BCD and the SPA were excluded from Phase II testing. In Phase II, measurements obtained from the selected equipment were compared to the modulus values obtained by the static plate load test (PLT), the resilient modulus (MR) from laboratory testing, and the NDG measurements. To minimize soil and moisture content variability, the single spot testing sequence was developed.
At each location, test results obtained from the portable equipment under evaluation were compared to the values from adjacent NDG, PLT, and laboratory MR measurements. Correlations were developed through statistical analysis. Target values were developed for various soils for verification on similar soils that were field tested in Phase III. The single spot testing sequence also was employed in Phase III, in which field testing was performed on A-3 and A-2-4 embankments, limerock-stabilized subgrade, limerock base, and graded aggregate base found on Florida Department of Transportation construction projects. The Phase II and Phase III results provided potential trend information for future research: specifically, data collection for in-depth statistical analysis for correlations with the laboratory MR for specific soil types under specific moisture conditions. With the collection of enough data, stronger relationships could be expected between measurements from the portable equipment and the MR values. Based on the statistical analyses and the experience gained from extensive use of the equipment, the combination of the DCP and the LWD was selected for in-place soil testing for compaction control acceptance. Test methods and developmental specifications were written for the DCP and the LWD. The developmental specifications include target values for the compaction control of embankment, subgrade, and base materials.
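The Phase I ranking above rests on the coefficient of variation (COV), i.e. the standard deviation of repeated readings divided by their mean. A minimal sketch of such a ranking is below; the device names are taken from the abstract, but the repeated stiffness readings are invented purely for illustration.

```python
# Illustrative COV ranking of devices from repeated readings (values invented).
import statistics

def cov_percent(values):
    """COV = sample standard deviation / mean, as a percentage."""
    return 100.0 * statistics.stdev(values) / statistics.mean(values)

# Hypothetical repeated stiffness readings (MPa) per device
readings = {
    "GeoGauge": [102, 104, 103, 101, 105],
    "LWD":      [98, 110, 94, 105, 100],
    "DCP":      [90, 115, 102, 88, 108],
}

# Ascending COV: the most repeatable device comes first
ranked = sorted(readings, key=lambda d: cov_percent(readings[d]))
```

A lower COV indicates better repeatability, which is why the study excluded the highest-COV devices (the BCD and SPA) from later phases.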
Abstract:
Cold water immersion and ice baths are popular methods of recovery used by athletes. From the simple wheelie bin with water and ice, to inflatable baths with complex water cooling units, to recovery sessions in the ocean, the practice of cold water immersion is wide and varied. Research into cold water immersion was conducted as early as 1963, when Clarke examined the influence of cold water on performance recovery after a sustained handgrip exercise. Research has been conducted to understand how cold water immersion might affect the body’s physiological systems and how factors such as water temperature and the duration of immersion might enhance recovery after training and/or competition. Despite this research activity, how are we to know if research is being put into practice? In more serious situations, where guidelines and policies need to be standardised for the safe use of a product, one would expect that there is a straightforward follow-on from research into practice. Although cold water immersion may not need the rigor of testing applied to drug treatments, for example, the decision on whether to use cold water immersion in specific situations (e.g. after training or competition) may rest with one or two of the staff associated with the athlete/team. Therefore, it would be expected that these staff are well-informed on the current literature regarding cold water immersion.
Abstract:
Background In the emergency department, portable point-of-care testing (POCT) coagulation devices may facilitate stroke patient care by providing rapid International Normalized Ratio (INR) measurement. The objective of this study was to evaluate the reliability, validity, and impact on clinical decision-making of a POCT device for INR testing in the setting of acute ischemic stroke (AIS). Methods A total of 150 patients (50 healthy volunteers, 51 anticoagulated patients, 49 AIS patients) were assessed in a tertiary care facility. INRs were measured using the Roche CoaguChek S and the standard laboratory technique. Results The intraclass correlation coefficient (95% confidence interval) between overall POCT device and standard laboratory INR values was high: 0.932 (0.69 - 0.78). In the AIS group alone, the correlation coefficient (95% CI) was also high, 0.937 (0.59 - 0.74), and the diagnostic accuracy of the POCT device was 94%. Conclusions When used by a trained health professional in the emergency department to assess INR in acute ischemic stroke patients, the CoaguChek S is reliable and provides rapid results. However, as concordance with laboratory INR values decreases with higher INR values, it is recommended that with CoaguChek S INRs in the > 1.5 range, a standard laboratory measurement be used to confirm the results.
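Agreement between a POCT device and a laboratory reference is typically summarised with an intraclass correlation coefficient (ICC). A minimal sketch of a one-way ICC(1,1) computation is below; the paired INR values are invented for illustration and are not the study's data, and the study's exact ICC model is an assumption.

```python
# Minimal one-way intraclass correlation ICC(1,1) between two methods,
# computed from the between- and within-subject mean squares.
import numpy as np

def icc_oneway(data):
    """data: (n_subjects, k_methods) array; returns ICC(1,1)."""
    data = np.asarray(data, dtype=float)
    n, k = data.shape
    grand = data.mean()
    row_means = data.mean(axis=1)
    msb = k * ((row_means - grand) ** 2).sum() / (n - 1)            # between subjects
    msw = ((data - row_means[:, None]) ** 2).sum() / (n * (k - 1))  # within subjects
    return (msb - msw) / (msb + (k - 1) * msw)

# Hypothetical paired (POCT, laboratory) INR readings
pairs = [(1.0, 1.1), (2.3, 2.2), (3.1, 3.3), (1.5, 1.4), (2.8, 2.9)]
icc = icc_oneway(pairs)   # close to 1 when the two methods agree well
```

An ICC near 1, as reported in the study, indicates that nearly all the variance in readings is between patients rather than between the two measurement methods.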
Abstract:
Background The aim of this study was to compare surface electromyographic (sEMG) recordings of maximum voluntary contraction (MVC), obtained by manual muscle testing (MMT), on dry land and in water. Method Sixteen healthy right-handed subjects (8 males and 8 females) participated in measurement of muscle activation of the right shoulder. The selected muscles were the cervical erector spinae, trapezius, pectoralis, anterior deltoid, middle deltoid, infraspinatus and latissimus dorsi. The order of the MVC test conditions (on land/in water) was randomised. Results For each muscle, the MVC test was performed and measured through sEMG to determine differences in muscle activation in both conditions. For all muscles except the latissimus dorsi, no significant differences were observed between land and water MVC scores (p = 0.063-0.679), and good precision (%Diff = 7-10%) was observed between MVC conditions in the trapezius, anterior deltoid and middle deltoid. Conclusions If the procedure for data collection is optimal, it appears that comparable MVC sEMG values can be achieved under MMT conditions on land and in water, and that the integrity of the EMG recordings is maintained during water immersion.
Abstract:
This paper provides an important and timely overview of a conceptual framework designed to assist with the development and evaluation of message content for persuasive health messages. While an earlier version of this framework was presented in a prior publication by the authors in 2009, important refinements have seen it evolve in recent years, warranting an updated review. This paper outlines the Step approach to Message Design and Testing (SatMDT) in accordance with the theoretical evidence which underpins each of the framework’s steps, as well as the empirical evidence which demonstrates their relevance and feasibility. The development and testing of the framework have thus far been based exclusively within the road safety advertising context; however, the view expressed herein is that the framework may have broader appeal and application to the health persuasion context.
Abstract:
To this point, the collection has provided research-based, empirical accounts of the various and multiple effects of the National Assessment Program – Literacy and Numeracy (NAPLAN) in Australian schooling as a specific example of the global phenomenon of national testing. In this chapter, we want to develop a more theoretical analysis of national testing systems, globalising education policy and the promise of national testing as adaptive, online tests. These future moves claim to provide faster feedback and more useful diagnostic help for teachers. There is a utopian testing dream that one day adaptive, online tests will be responsive in real time providing an integrated personalised testing, pedagogy and intervention for each student. The moves towards these next generation assessments are well advanced, including the work of Pearson’s NextGen Learning and Assessment research group, the Organization for Economic Co-operation and Development’s (OECD) move into assessing affective skills and the Australian Curriculum, Assessment and Reporting Authority’s (ACARA) decision to phase in NAPLAN as an online, adaptive test from 2017...
Abstract:
Introduction This book examines a pressing educational issue: the global phenomenon of national testing in schooling and its vernacular development in Australia. The Australian National Assessment Program – Literacy and Numeracy (NAPLAN), introduced in 2008, involves annual census testing of students in Years 3, 5, 7 and 9 in nearly all Australian schools. In a variety of ways, NAPLAN affects the lives of Australia’s 3.5 million school students and their families, as well as more than 350,000 school staff and many other stakeholders in education. This book is organised in relation to a simple question: What are the effects of national testing for systems, schools and individuals? Of course, this simple question requires complex answers. The chapters in this edited collection consider issues relating to national testing policy, the construction of the test, usages of the testing data and various effects of testing in systems, schools and classrooms. Each chapter examines an aspect of national testing in Australia using evidence drawn from research. The final chapter by the editors of this collection provides a broader reflection on this phenomenon and situates developments in testing globally...
Abstract:
Since 2008, Australian schoolchildren in Years 3, 5, 7 and 9 have sat a series of tests each May designed to assess their attainment of basic skills in literacy and numeracy. These tests are known as the National Assessment Program – Literacy and Numeracy (NAPLAN). In 2010, individual school NAPLAN data were first published on the MySchool website, which enables comparisons to be made between individual schools and statistically like schools across Australia. NAPLAN represents the increased centrality of the federal government in education, particularly in regard to education policy. One effect of this has been a recast emphasis of education as an economic, rather than democratic, good. As Reid (2009) suggests, this recasting of education within national productivity agendas mobilises commonsense discourses of accountability and transparency. These are common articles of faith for many involved in education administration and bureaucracy: more and better data, and holding people to account for that data, must improve education...
Abstract:
This paper explores Rizvi and Lingard’s (2010) idea of the “local vernacular” of the global education policy trend of using high-stakes testing to increase accountability and transparency, and by extension quality, within schools and education systems in Australia. In the first part of the paper a brief context of the policy trajectory of the National Assessment Program – Literacy and Numeracy (NAPLAN) in Australia is given. In the second part, empirical evidence drawn from a survey of teachers in Western Australia (WA) and South Australia (SA) is used to explore teacher perceptions of the impacts a high-stakes testing regime is having on student learning, relationships with parents and pedagogy in specific sites. After the 2007 Australian Federal election, one of Labor’s policy objectives was to deliver an “Education Revolution” designed to improve both equity and excellence in the Australian school system (Rudd & Gillard, 2008). This reform agenda aims to “deliver real changes” through “raising the quality of teaching in our schools” and “improving transparency and accountability of schools and school systems” (Rudd & Gillard, 2008, p. 5). Central to this linking of accountability, the transparency of schools and school systems and raising teaching quality was the creation of a regime of testing (NAPLAN) that would generate data about the attainment of basic literacy and numeracy skills by students in Australian schools.