820 results for Environmental assessment tools


Relevance:

80.00%

Abstract:

Introduction The suitability of video conferencing (VC) technology for clinical purposes relevant to geriatric medicine is still being established. This project aimed to determine the validity of the diagnosis of dementia via VC. Methods This was a multisite, noninferiority, prospective cohort study. Patients, aged 50 years and older, referred by their primary care physician for cognitive assessment, were assessed at 4 memory disorder clinics. All patients were assessed independently by 2 specialist physicians. They were allocated one face-to-face (FTF) assessment (reference standard, usual clinical practice) and an additional assessment (either a usual FTF assessment or a VC assessment) on the same day. Each specialist physician had access to the patient chart and the results of a battery of standardized cognitive assessments administered FTF by the clinic nurse. Percentage agreement (P0) and the weighted kappa statistic with linear weights (Kw) were used to assess inter-rater reliability across the 2 study groups on the diagnosis of dementia (cognition normal, impaired, or demented). Results The 205 patients were allocated to a group: Videoconference (n = 100) or Standard practice (n = 105); 106 were men. The average age was 76 years (SD 9, range 51–95) and the average Standardized Mini-Mental State Examination score was 23.9 (SD 4.7, range 9–30). Agreement for the Videoconference group (P0 = 0.71; Kw = 0.52; P < .0001) and agreement for the Standard Practice group (P0 = 0.70; Kw = 0.50; P < .0001) were both statistically significant (P < .05). The summary kappa statistic of 0.51 (P = .84) indicated that VC was not inferior to FTF assessment. Conclusions Previous studies have shown that preliminary standardized assessment tools can be reliably administered and scored via VC. This study focused on the geriatric assessment component of the interview (interpretation of standardized assessments, taking a history and formulating a diagnosis by a medical specialist) and identified high levels of agreement for diagnosing dementia. A model of service incorporating either locally or remotely administered standardized assessments, together with remote specialist assessment, is a reliable process for enabling the diagnosis of dementia for isolated older adults.
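
The analysis above reports simple percentage agreement (P0) and a linearly weighted kappa (Kw) over three ordered diagnostic categories. A minimal sketch of how those two statistics are computed, assuming hypothetical ratings from two raters rather than the study data:

```python
import numpy as np

def linear_weighted_kappa(rater1, rater2, categories):
    """Cohen's kappa with linear weights for two raters on ordered categories."""
    k = len(categories)
    idx = {c: i for i, c in enumerate(categories)}
    # Observed agreement table as proportions.
    obs = np.zeros((k, k))
    for a, b in zip(rater1, rater2):
        obs[idx[a], idx[b]] += 1
    obs /= obs.sum()
    # Expected table under chance agreement (outer product of the marginals).
    exp = np.outer(obs.sum(axis=1), obs.sum(axis=0))
    # Linear disagreement weights: 0 on the diagonal, |i - j| / (k - 1) off it.
    i, j = np.indices((k, k))
    w = np.abs(i - j) / (k - 1)
    return 1 - (w * obs).sum() / (w * exp).sum()

# Hypothetical ordinal diagnoses (normal < impaired < demented); not study data.
cats = ["normal", "impaired", "demented"]
ftf = ["normal", "impaired", "demented", "impaired", "normal", "demented"]
vc  = ["normal", "impaired", "impaired", "impaired", "normal", "demented"]

p0 = np.mean([a == b for a, b in zip(ftf, vc)])  # simple percentage agreement
kw = linear_weighted_kappa(ftf, vc, cats)
print(f"P0 = {p0:.2f}, Kw = {kw:.2f}")
```

For larger datasets, scikit-learn's cohen_kappa_score with weights='linear' computes the same statistic.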

Relevance:

80.00%

Abstract:

Objective - This study examined the clinical utility and precision of routine screening for alcohol and other drug use among women attending a public antenatal service. Study design - A survey of clients and an audit of clinical charts. Participants and setting - Clients attending an antenatal clinic of a large tertiary hospital in Queensland, Australia, from October to December 2009. Measurements and findings - Data were collected from two sources. First, 32 women who reported use of alcohol or other drugs during pregnancy at initial screening were asked to complete a full substance use survey. Second, data were collected from the charts of 349 new clients who attended the antenatal clinic during the study period. Both sensitivity (86% and 67%) and positive predictive value (100% and 92%), for alcohol and other drug use respectively, were high. Only 15% of surveyed women were uncomfortable about being screened for substance use in pregnancy, yet the chart audit revealed poor staff compliance: during the study period, 25% of clients were either not screened adequately or not screened at all. Key conclusions and implications for practice - Despite recommended universal screening in pregnancy and the apparent acceptance by our participants, alcohol and other drug (A&OD) screening in the antenatal setting remains problematic. Investigation into the reasons behind, and ways to overcome, the low screening rate could improve health outcomes for mothers and children in this at-risk group. Targeted education and training for midwives may form part of the solution, as these clinicians have a key role in implementing prevention and early intervention strategies.
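
The precision figures above (sensitivity and positive predictive value of the initial screen against the full survey) reduce to simple ratios from a 2x2 comparison. A minimal sketch, using invented counts rather than the study's figures:

```python
def screening_metrics(tp, fn, fp):
    """Sensitivity = TP / (TP + FN); positive predictive value = TP / (TP + FP)."""
    return tp / (tp + fn), tp / (tp + fp)

# Invented counts for the alcohol screen, with the full survey treated as the
# reference standard (illustrative only, not the study's data).
sens, ppv = screening_metrics(tp=18, fn=3, fp=0)
print(f"sensitivity = {sens:.0%}, PPV = {ppv:.0%}")
```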

Relevance:

80.00%

Abstract:

Food modelling systems such as the Core Foods and the Australian Guide to Healthy Eating are frequently used as nutritional assessment tools for menus in ‘well’ groups (such as boarding schools, prisons and mental health facilities), with the draft Foundation and Total Diets (FATD) being the latest revision. The aim of this paper is to apply the FATD to an assessment of food provision in a long-stay, ‘well’ group setting to determine its usefulness as a tool. A detailed menu review was conducted in a 1000-bed male prison, including verification of all recipes. Full diet histories were collected from 106 prisoners, which included foods consumed from the menu and self-funded snacks. Both the menu and the diet histories were analysed according to core foods, with recipes used to assist in quantification of mixed dishes. Average core food servings were compared with the Foundation Diet recommendations (FDR) for males. Results showed that the standard menu provided sufficient quantity for 8 of the 13 FDRs; however, it was low in nuts, legumes and refined cereals, and marginally low in fruits and orange vegetables. The average prisoner diet achieved 9 of the 13 FDRs, notably with margarines and oils at less than half, and legumes at one-seventh, of the recommended amounts. Overall, although the menu and prisoner diets could easily be assessed using the FDRs, they were not consistent with the recommendations. In long-stay settings, other Nutrient Reference Values not modelled in the FATD, in particular the Suggested Dietary Targets, need consideration, and professional judgement is required in interpretation.
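
The assessment step above compares average daily core-food servings (from the menu and from diet histories) against the Foundation Diet recommendations for males. A minimal sketch of that comparison, using illustrative serving figures rather than the study's menu data or the actual FDR values:

```python
# Illustrative serving counts (serves/day); not the study's data or real FDRs.
menu_servings = {"fruit": 1.8, "vegetables": 5.5, "legumes": 0.1,
                 "nuts": 0.0, "refined cereals": 1.0, "dairy": 2.6}
fdr_targets   = {"fruit": 2.0, "vegetables": 5.5, "legumes": 0.7,
                 "nuts": 0.7, "refined cereals": 2.0, "dairy": 2.5}

for group, provided in menu_servings.items():
    target = fdr_targets[group]
    status = "meets" if provided >= target else "below"
    print(f"{group:16s} {provided:>4.1f} vs {target:>4.1f} serves/day -> {status}")
```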

Relevance:

80.00%

Abstract:

Current discussions regarding the relationship between welfare governance systems and employment promotion in disability policy appeal to a rejuvenated neo-liberal and paternalistic understanding of welfare governance. At the core of this rationality is the argument that people with disabilities not only have rights, but also duties, in relation to the State. In the Australian welfare system, policy tools are deployed to produce a form of self-discipline, whereby the State emphasises personal responsibility via assessment tools, ‘mutual obligation’ policy and motivational strategies. Drawing on a two-year semi-longitudinal study with 80 people with a disability accessing welfare benefits, we examine how welfare governance subjects recipients to strategies intended to produce productive citizens who are able to contribute to the national goal of maintaining competitiveness in the global economy. Participants’ interviews reveal the intended and unintended effects of this activation policy, including some acceptance of the logic of welfare-to-work and counter-hegemonic resistance to devalued social identities.

Relevance:

80.00%

Abstract:

Dear Editor, We thank Dr Klek for his interest in our article and for giving us the opportunity to clarify our study and share our thoughts. Our study examined the prevalence of malnutrition in an acute tertiary hospital and tracked outcomes prospectively.1 There are a number of reasons why we chose Subjective Global Assessment (SGA) to determine the nutritional status of patients. Firstly, we took the view that nutrition assessment tools should be used to determine nutrition status and to diagnose the presence and severity of malnutrition, whereas the purpose of nutrition screening tools is to identify individuals who are at risk of malnutrition. Nutritional assessment, rather than screening, should be used as the basis for planning and evaluating nutrition interventions for those diagnosed with malnutrition. Secondly, SGA has been well accepted and validated as an assessment tool to diagnose the presence and severity of malnutrition in clinical practice.2, 3 It has been used in many studies as a valid prognostic indicator of a range of nutritional and clinical outcomes.4, 5, 6 On the other hand, the Malnutrition Universal Screening Tool (MUST)7 and Nutrition Risk Screening 2002 (NRS 2002)8 have been established as screening rather than assessment tools.

Relevance:

80.00%

Abstract:

It has been reported that poor nutritional status, in the form of weight loss and resulting body mass index (BMI) changes, is an issue in people with Parkinson's disease (PWP). The symptoms resulting from Parkinson's disease (PD) and the side effects of PD medication have been implicated in the aetiology of nutritional decline. However, the evidence on which these claims are based is, on one hand, contradictory and, on the other, restricted primarily to otherwise healthy PWP. Despite the claims that PWP suffer from poor nutritional status, evidence is lacking to inform nutrition-related care for the management of malnutrition in PWP. The aims of this thesis were to better quantify the extent of poor nutritional status in PWP, determine the important factors differentiating the well-nourished from the malnourished, and evaluate the effectiveness of an individualised nutrition intervention on nutritional status. Phase DBS: Nutritional status in people with Parkinson's disease scheduled for deep-brain stimulation surgery. The pre-operative rate of malnutrition in a convenience sample of PWP scheduled for deep-brain stimulation (DBS) surgery was determined. Poorly controlled PD symptoms may result in a higher risk of malnutrition in this sub-group of PWP. Fifteen patients (11 male, median age 68.0 (42.0–78.0) years, median PD duration 6.75 (0.5–24.0) years) participated, and data were collected during hospital admission for the DBS surgery. The scored Patient-Generated Subjective Global Assessment (PG-SGA) was used to assess nutritional status, anthropometric measures (weight, height, mid-arm circumference, waist circumference, BMI) were taken, and body composition was measured using bioelectrical impedance spectroscopy (BIS). Six (40%) of the participants were malnourished (SGA-B), while 53% reported significant weight loss following diagnosis. BMI was significantly different between SGA-A and SGA-B (25.6 vs 23.0 kg/m2, p<.05). There were no differences in any other variables, including PG-SGA score and the presence of non-motor symptoms. The conclusion was that the rate of malnutrition in this group is higher than that reported in other studies of malnutrition in PWP, and that it is under-recognised. As poorer surgical outcomes are associated with poorer pre-operative nutritional status in other surgeries, it might be beneficial to identify patients at nutritional risk prior to surgery so that appropriate nutrition interventions can be implemented. Phase I: Nutritional status in community-dwelling adults with Parkinson's disease. The rate of malnutrition in community-dwelling adults (>18 years) with Parkinson's disease was determined. One hundred and twenty-five PWP (74 male, median age 70.0 (35.0–92.0) years, median PD duration 6.0 (0.0–31.0) years) participated. The scored PG-SGA was used to assess nutritional status, and anthropometric measures (weight, height, mid-arm circumference (MAC), calf circumference, waist circumference, BMI) were taken. Nineteen (15%) of the participants were malnourished (SGA-B). All anthropometric indices were significantly different between SGA-A and SGA-B (BMI 25.9 vs 20.0 kg/m2; MAC 29.1 vs 25.5 cm; waist circumference 95.5 vs 82.5 cm; calf circumference 36.5 vs 32.5 cm; all p<.05). The PG-SGA score was also significantly different between SGA-A and SGA-B (2 vs 8, p<.05). The nutrition impact symptoms which differentiated between the well-nourished and the malnourished were no appetite, constipation, diarrhoea, problems swallowing and feeling full quickly.
This study concluded that malnutrition in community-dwelling PWP is higher than that documented in the community-dwelling elderly (2–11%), yet is likely to be under-recognised. Nutrition impact symptoms play a role in reduced intake. Appropriate screening and referral processes should be established for early detection of those at risk. Phase I: Nutrition assessment tools in people with Parkinson's disease. There are a number of validated and reliable nutrition screening and assessment tools available for use. None of these tools had been evaluated in PWP. In the sample described above, the use of the World Health Organisation (WHO) BMI cut-off (≤18.5 kg/m2), age-specific BMI cut-offs (≤18.5 kg/m2 for those under 65 years, ≤23.5 kg/m2 for those 65 years and older) and the revised Mini-Nutritional Assessment short form (MNA-SF) were evaluated as nutrition screening tools. The PG-SGA (including the SGA classification) and the MNA full form were evaluated as nutrition assessment tools, using the SGA classification as the gold standard. For screening, the MNA-SF performed best, with sensitivity (Sn) of 94.7% and specificity (Sp) of 78.3%. For assessment, the PG-SGA with a cut-off score of 4 (Sn 100%, Sp 69.8%) performed better than the MNA (Sn 84.2%, Sp 87.7%). As the MNA has been recommended more for use as a nutrition screening tool, the MNA-SF might be more appropriate and take less time to complete. The PG-SGA might be useful to inform and monitor nutrition interventions. Phase I: Predictors of poor nutritional status in people with Parkinson's disease. A number of assessments were conducted as part of the Phase I research, including those for the severity of PD motor symptoms, cognitive function, depression, anxiety, non-motor symptoms, constipation, freezing of gait and the ability to carry out activities of daily living. A higher score in all of these assessments indicates greater impairment. In addition, information about medical conditions, medications, age, age at PD diagnosis and living situation was collected. These were compared between those classified as SGA-A and those classified as SGA-B. Regression analysis was used to identify which factors were predictive of malnutrition (SGA-B). Differences between the groups included disease severity (more severe disease in 4% of SGA-A vs 21% of SGA-B, p<.05), activities of daily living score (13 SGA-A vs 18 SGA-B, p<.05), depressive symptom score (8 SGA-A vs 14 SGA-B, p<.05) and gastrointestinal symptoms (4 SGA-A vs 6 SGA-B, p<.05). Significant predictors of malnutrition according to SGA were age at diagnosis (OR 1.09, 95% CI 1.01–1.18), amount of dopaminergic medication per kg body weight (mg/kg) (OR 1.17, 95% CI 1.04–1.31), more severe motor symptoms (OR 1.10, 95% CI 1.02–1.19), less anxiety (OR 0.90, 95% CI 0.82–0.98) and more depressive symptoms (OR 1.23, 95% CI 1.07–1.41). Significant predictors of a higher PG-SGA score included living alone (β=0.14, 95% CI 0.01–0.26), more depressive symptoms (β=0.02, 95% CI 0.01–0.02) and more severe motor symptoms (β=0.01, 95% CI 0.01–0.02). More severe disease is associated with malnutrition, and this may be compounded by a lack of social support. Phase II: Nutrition intervention. Nineteen of the people identified in Phase I as requiring nutrition support were included in Phase II, in which a nutrition intervention was conducted.
Nine participants were in the standard care group (SC), which received an information sheet only, and the other 10 participants were in the intervention group (INT), which received individualised nutrition information and weekly follow-up. The INT group gained 2.2% of starting body weight over the 12-week intervention period, resulting in significant increases in weight, BMI, mid-arm circumference and waist circumference. The SC group gained 1% of starting weight over the 12 weeks, which did not result in any significant changes in anthropometric indices. Energy and protein intake increased in both groups (energy: 18.3 kJ/kg vs 3.8 kJ/kg; protein: 0.3 g/kg vs 0.15 g/kg). The increase in protein intake was only significant in the SC group. When compared between the groups, the changes in intake did not differ. There were no significant changes in any motor or non-motor symptoms, or in "off" times or dyskinesias, in either group. Aspects of quality of life, especially emotional well-being, also improved over the 12 weeks. This thesis makes a significant contribution to the evidence base for the presence of malnutrition in Parkinson's disease, as well as for the identification of those who would potentially benefit from nutrition screening and assessment. The nutrition intervention demonstrated that a traditional high-protein, high-energy approach to the management of malnutrition resulted in improved nutritional status and anthropometric indices, with no effect on the presence of Parkinson's disease symptoms and a positive effect on quality of life.
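
Phase I above evaluates candidate screening and assessment tools against the SGA classification as the gold standard, including choosing a PG-SGA cut-off score by its sensitivity and specificity. A minimal sketch of that kind of cut-off evaluation, using invented scores and classifications rather than the thesis data:

```python
# Hedged sketch: evaluating candidate cut-offs of a continuous screening score
# (e.g. PG-SGA) against a binary gold standard (e.g. SGA-B = malnourished).
# The scores and labels below are invented for illustration, not thesis data.

def sens_spec(scores, malnourished, cutoff):
    """Flag score >= cutoff as 'at risk' and compare with the gold standard."""
    tp = sum(s >= cutoff and m for s, m in zip(scores, malnourished))
    fn = sum(s < cutoff and m for s, m in zip(scores, malnourished))
    tn = sum(s < cutoff and not m for s, m in zip(scores, malnourished))
    fp = sum(s >= cutoff and not m for s, m in zip(scores, malnourished))
    return tp / (tp + fn), tn / (tn + fp)

scores       = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 2, 3]   # hypothetical PG-SGA scores
malnourished = [0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 0, 0]    # hypothetical SGA-B labels

for cutoff in range(2, 7):
    sn, sp = sens_spec(scores, malnourished, cutoff)
    print(f"cut-off {cutoff}: sensitivity {sn:.0%}, specificity {sp:.0%}")
```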

Relevance:

80.00%

Abstract:

Background The implementation of the Australian Consumer Law in 2011 highlighted the need for better use of injury data to improve the effectiveness and responsiveness of product safety (PS) initiatives. In the PS system, resources are allocated to different priority issues using risk assessment tools. The rapid exchange of information (RAPEX) tool for prioritising hazards, developed by the European Commission, is currently being adopted in Australia. Injury data are required as a basic input to the RAPEX tool in the risk assessment process. One of the challenges in utilising injury data in the PS system is the complexity of translating detailed clinically coded data into broad categories such as those used in the RAPEX tool. Aims This study aims to translate hospital burns data into a simplified format by mapping International Statistical Classification of Diseases and Related Health Problems, Tenth Revision, Australian Modification (ICD-10-AM) burn codes into RAPEX severity rankings, and to use these rankings to identify priority areas in childhood product-related burns data. Methods ICD-10-AM burn codes were mapped into four levels of severity using the RAPEX guide table by assigning rankings from 1 to 4, in order of increasing severity. RAPEX rankings were determined by the thickness of the burn and the burn surface area (BSA), with information extracted from the fourth character of T20-T30 codes for burn thickness and the fourth and fifth characters of T31 codes for the BSA. Following the mapping process, secondary data analysis of 2008-2010 Queensland Hospital Admitted Patient Data Collection (QHAPDC) paediatric data was conducted to identify priority areas in product-related burns. Results The application of RAPEX rankings to the QHAPDC burn data showed that approximately 70% of paediatric burns in Queensland hospitals were categorised under RAPEX levels 1 and 2, 25% under RAPEX levels 3 and 4, and the remaining 5% were unclassifiable. In the PS system, priority is given to issues categorised under RAPEX levels 3 and 4. Analysis of external cause codes within these levels showed that flammable materials (for children aged 10-15 years) and hot substances (for children aged under 2 years) were the most frequently identified products. Discussion and conclusions The mapping of ICD-10-AM burn codes into RAPEX rankings showed a favourable degree of compatibility between the two classification systems, suggesting that ICD-10-AM coded burn data can be simplified to more effectively support PS initiatives. Additionally, the secondary data analysis showed that only 25% of all admitted burn cases in Queensland were severe enough to trigger a PS response.
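
The mapping step above derives a RAPEX severity ranking from the burn-thickness character of T20-T30 codes and the burn-surface-area characters of T31 codes. A minimal sketch of such a code-to-ranking lookup is shown below; the character positions follow the abstract, but the specific character-to-ranking rules are illustrative assumptions, not the study's actual RAPEX guide table.

```python
# Illustrative mapping of ICD-10-AM burn codes to a RAPEX-style 1-4 severity
# ranking. The ranking rules are assumptions for illustration only.

# Assumed: 4th character of T20-T30 encodes burn thickness
# (1 = erythema, 2 = partial thickness, 3 = full thickness).
THICKNESS_RANK = {"1": 1, "2": 2, "3": 3}

def bsa_rank(bsa_decile):
    """Assumed ranking from the T31 body-surface-area decile (4th character)."""
    if bsa_decile >= 4:   # 40% BSA or more
        return 4
    if bsa_decile >= 2:   # 20-39% BSA
        return 3
    return 2              # under 20% BSA

def rapex_rank(icd_code):
    """Return an illustrative RAPEX ranking (1-4) for a burn code, else None."""
    code = icd_code.replace(".", "")
    if code.startswith("T31") and len(code) > 3:
        return bsa_rank(int(code[3]))
    if "T20" <= code[:3] <= "T30" and len(code) > 3:
        return THICKNESS_RANK.get(code[3])   # unmapped characters -> None
    return None

print(rapex_rank("T24.3"))   # full-thickness burn of lower limb -> 3 (illustrative)
print(rapex_rank("T31.5"))   # burns involving 50-59% of BSA    -> 4 (illustrative)
```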

Relevance:

80.00%

Abstract:

Aim The aim of this study was to establish intensive care unit (ICU) nurses’ knowledge of delirium in an acute tertiary hospital in South East Asia. Background Delirium is a common, life-threatening and often preventable cause of morbidity and mortality among older patients. Undetected and untreated delirium leads to increased mortality, morbidity and functional decline, and results in an increased requirement for nursing care, greater healthcare expense and longer hospital length of stay. However, despite the availability of effective assessment tools to identify delirium in the acute setting, ICU nurses are still often unable to accurately identify delirium in the critically ill patient, especially hypoactive delirium. Method A purposive sample of 53 staff nurses from a 13-bed medical intensive care unit within an acute tertiary teaching hospital in South East Asia was asked to participate. A 40-item, 5-point Likert scale questionnaire was employed to determine the participants’ knowledge of the signs and symptoms, risk factors and negative outcomes of delirium. Results The overall mean score of positively answered questions was 27 (67.3%) out of a possible 40. Mean scores for knowledge of signs and symptoms, risk factors and negative outcomes were 9.52 (63.5%, n = 15), 11.43 (63.5%, n = 17) and 6.0 (75%, n = 8), respectively. Conclusion Whilst the results of this study are similar to others taken from a western perspective, the ICU nurses in this study demonstrated limited knowledge of the signs and symptoms, risk factors and negative outcomes of delirium in the critically ill patient. The implications of this for practice are important, given the outcomes of untreated delirium.

Relevance:

80.00%

Abstract:

Clinical experience, or experience in the ‘real world’ of practice, is a fundamental component of many health professional courses. It often involves students undertaking practical experience in clinical workplace settings, typically referred to as clinical placements, under the supervision of health professionals. Broadly speaking, the role of clinical supervisors, or teachers, is to assist students to integrate the theoretical and skills-based components of the curriculum within the context of patient/client care (Erstzen et al 2009). Clinical experience also provides students with the opportunity to assimilate the attitudes, values and skills which they require to become appropriately skilled professionals in the environments in which they will eventually practise. However, clinical settings are particularly challenging learning environments for students. Unlike in the classroom, students in the clinical setting frequently find themselves involved in unplanned and often complex activities with patients and other health care providers, being supervised by a variety of clinical staff who have very different methods and styles of teaching, and negotiating bureaucratic or hierarchical structures in busy clinical workplaces where they may only be spending a limited amount of time. Kilminster et al (2007) also draw attention to tensions that may exist between the learning needs of students and the provision of quality care or the need to prevent harm to the patient (e.g. Elkind et al 2007). All of these factors complicate the realisation of clinical education goals and underscore the need for effective clinical teaching practices that maximise student learning in clinical environments. This report provides a summary of work that has been achieved in relation to ALTC projects and fellowships associated with clinical teaching, and a review of scholarly publications relevant to this field. The report also makes recommendations based on issues identified and/or areas where further work is indicated. The projects and fellowships reviewed cover a range of discipline areas including Biology, Paramedic Practice, Clinical Exercise Physiology, Occupational Therapy, Speech Pathology, Physiotherapy, Pharmacy, Nursing and Veterinary Science. The main areas of focus cover issues related to curriculum, particularly in relation to industry expectations of ‘work-ready’ graduates and the implications for theoretical and practical, or clinical, preparation; the development of competency assessment tools that are nationally applicable across discipline-specific courses; and the improvement of clinical learning through strategies targeting the clinical learning environment, building the teaching capacity of clinical supervisors and/or enhancing the clinical learning/teaching process.

Relevance:

80.00%

Abstract:

Background Nurses play a substantial role in the prevention and management of chemotherapy-induced nausea and vomiting (CINV). Objectives This study set out to describe nurses’ roles in the prevention and management of CINV and to identify any gaps that exist across countries. Methods A self-reported survey was completed by 458 registered nurses who administered chemotherapy to cancer patients in Australia, China, Hong Kong, and 9 Latin American countries. Results More than one-third of participants regarded their own knowledge of CINV as fair to poor. Most participants (>65%) agreed that chemotherapy-induced nausea and chemotherapy-induced vomiting should be considered separately (79%), but only 35% were confident in their ability to manage chemotherapy-induced nausea (53%) or chemotherapy-induced vomiting (59%). Only one-fifth reported frequent use of a standardized CINV assessment tool, and only a quarter used international clinical guidelines to manage CINV. Conclusions Participants perceived their own knowledge of CINV management to be insufficient. They recognized the need to develop and use a standardized CINV assessment tool and the importance of adopting international guidelines to inform the management of CINV. Implications for Practice Findings indicate that international guidelines should be made available to nurses in clinically relevant and easily accessible formats, that a review of chemotherapy assessment tools should be undertaken to identify reliable and valid measures amenable to use in clinical settings, and that a CINV risk screening tool should be developed as a prompt for nurses to enable timely identification of, and intervention for, patients at high risk of CINV.

Relevance:

80.00%

Abstract:

There is an increasing emphasis on integrating assessment tools into the everyday training environment of athletes. These tools are intended to fine-tune athlete development, enhance performance and aid in the development of individualised programmes for athletes. The areas of workload monitoring, skill development and injury assessment are expected to benefit from such tools. This paper describes the development of an instrumented leg press and its application to testing leg dominance with a cohort of athletes. The developed instrumented leg press is a 45° reclining sled-type leg press with dual force plates, a displacement sensor and a CCD camera. A custom software client was developed using C#. The software client enabled near-real-time display of the forces beneath each limb, together with the displacement of the quad track roller system and video feedback of the exercise. In recording mode, the collection of athlete particulars is prompted at the start of the exercise, and pre-set thresholds are subsequently used to separate the data into epochs corresponding to each exercise repetition. The leg press was evaluated in a controlled study of a cohort of physically active adults who performed a series of leg press exercises. The leg press exercises were undertaken at a set cadence with nominal applied loads of 50%, 100% and 150% of body weight, without feedback. A significant asymmetry in loading of the limbs was observed in healthy adults during both the eccentric and concentric phases of the leg press exercise (P < .05). Mean forces were significantly higher beneath the non-dominant limb (4–10%) and during the concentric phase of the muscle action (5%). Given that symmetrical loading is often emphasised during strength training and remains a common goal in sports rehabilitation, these findings highlight the clinical potential for this instrumented leg press system to monitor symmetry in lower-limb loading during progressive strength training and sports rehabilitation protocols.
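
The recording-mode behaviour described above (thresholding the force signal into repetition epochs, then comparing loading beneath each limb) can be sketched as follows. The study's client was written in C#; this Python sketch, with an invented threshold, synthetic force signals and a simple asymmetry index, only illustrates the idea and is not the study's implementation.

```python
import numpy as np

def repetition_epochs(total_force, threshold):
    """Return (start, end) index pairs where total force exceeds the threshold."""
    above = total_force > threshold
    # Rising/falling edges of the boolean mask mark epoch boundaries.
    d = np.diff(above.astype(int))
    starts = np.flatnonzero(d == 1) + 1
    ends = np.flatnonzero(d == -1) + 1
    if above[0]:
        starts = np.r_[0, starts]
    if above[-1]:
        ends = np.r_[ends, above.size]
    return list(zip(starts, ends))

def limb_asymmetry(left, right):
    """Percent difference in mean force between limbs (positive = right higher)."""
    ml, mr = left.mean(), right.mean()
    return 100.0 * (mr - ml) / ((mr + ml) / 2)

# Synthetic example: two force-plate channels sampled at 100 Hz (illustrative).
t = np.arange(0, 10, 0.01)
left = 400 + 300 * np.clip(np.sin(2 * np.pi * 0.5 * t), 0, None)
right = 0.95 * left          # a 5% lower-loaded limb, purely as an example
total = left + right

for start, end in repetition_epochs(total, threshold=1000.0):
    asym = limb_asymmetry(left[start:end], right[start:end])
    print(f"epoch {start}-{end}: asymmetry {asym:+.1f}%")
```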

Relevance:

80.00%

Abstract:

Legislation giving prominence to psychosocial risk factors at work has changed the role of government occupational health and safety (OHS) inspectors in many countries. Yet little is known about how inspectorates have responded to these changes. Between 2003 and 2007 an Australian study was undertaken on OHS standards, entailing detailed documentary analysis, interviews with 36 inspectorate managers and 89 inspectors, and observations made when researchers accompanied inspectors on 120 typical workplace visits. Our study found that general duty provisions in OHS legislation clearly incorporated psychosocial hazards, and that inspectorates had introduced guidance material, pursued campaigns and increased interventions in this area. However, the regulatory framework remained narrow (focused on bullying/harassment, occupational violence and work stress), and workplace visits revealed psychosocial hazards to be a marginal area of inspectorate activity. These findings were reinforced in interviews. While aware of psychosocial hazards, inspectors often saw the issue as problematic due to limited training, resourcing constraints, deficiencies in regulation and fears of victimisation amongst workers. In order to address these problems, a number of changes are required that recognise the distinctiveness of psychosocial hazards, including their ‘invisibility’. Notable here are revisions to regulation (both general duty provisions and specific codes), the development of comprehensive guidance and assessment tools to be used by inspectors, greater use of procedural enforcement, and enhanced inspectorate resourcing and training. There is also a need to recognise the complex inter-linkages between psychosocial hazards and the industrial relations context.