815 results for Symptom reporting
Abstract:
This paper describes the work being conducted in the baseline rail level crossing project, supported by the Australian rail industry and the Cooperative Research Centre for Rail Innovation. The paper discusses the limitations of near-miss data for analysis obtained using current level crossing occurrence reporting practices. The project is addressing these limitations through the development of a data collection and analysis system with an underlying level crossing accident causation model. An overview of the methodology and the improved data recording process is provided. The paper concludes with a brief discussion of the benefits this project is expected to provide to the Australian rail industry.
Abstract:
Recently there has been significant interest among researchers and practitioners in the use of Bluetooth as a complementary source of transport data. However, the literature offers only a limited understanding of the Bluetooth MAC Scanner (BMS) data acquisition process and the properties of the data being collected. This paper first provides insight into the BMS data acquisition process. It then presents findings from analysis of real BMS data from both motorway and arterial networks in Brisbane, Australia. The knowledge gained helps researchers and practitioners understand the BMS data being collected, which is vital to the development of management and control algorithms using these data.
Abstract:
Organisations are constantly seeking efficiency gains for their business processes in terms of time and cost. Management accounting enables detailed cost reporting of business operations for decision making purposes, although significant effort is required to gather accurate operational data. Process mining, on the other hand, may provide valuable insight into processes through analysis of events recorded in logs by IT systems, but its primary focus is not on cost implications. In this paper, a framework is proposed which aims to exploit the strengths of both fields in order to better support management decisions on cost control. This is achieved by automatically merging cost data with historical data from event logs for the purposes of monitoring, predicting, and reporting process-related costs. The on-demand generation of accurate, relevant and timely cost reports, in a style akin to reports in the area of management accounting, will also be illustrated. This is achieved through extending the open-source process mining framework ProM.
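The core idea above — automatically merging cost data with event-log history to report process costs — can be illustrated with a minimal sketch. The actual work extends the Java-based ProM framework; the event log, cost-rate table, and field names below are hypothetical, chosen only to show the merge-and-aggregate step.

```python
from collections import defaultdict

# Hypothetical event log: one record per completed activity (not ProM's format).
event_log = [
    {"case": "C1", "activity": "Review",  "resource": "clerk",   "hours": 2.0},
    {"case": "C1", "activity": "Approve", "resource": "manager", "hours": 0.5},
    {"case": "C2", "activity": "Review",  "resource": "clerk",   "hours": 3.0},
]

# Hypothetical cost data: hourly rates per resource role (management accounting side).
hourly_rate = {"clerk": 40.0, "manager": 90.0}

def cost_report(log, rates):
    """Merge cost rates into the event log and total the cost per case."""
    per_case = defaultdict(float)
    for event in log:
        per_case[event["case"]] += event["hours"] * rates[event["resource"]]
    return dict(per_case)

print(cost_report(event_log, hourly_rate))  # {'C1': 125.0, 'C2': 120.0}
```

The same join could drive monitoring (flag cases exceeding a budget) or prediction (extrapolate cost from a partial trace), which is the style of report the paper describes.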
Abstract:
It has been reported that poor nutritional status, in the form of weight loss and resulting body mass index (BMI) changes, is an issue in people with Parkinson's disease (PWP). The symptoms resulting from Parkinson's disease (PD) and the side effects of PD medication have been implicated in the aetiology of nutritional decline. However, the evidence on which these claims are based is, on one hand, contradictory, and on the other, restricted primarily to otherwise healthy PWP. Despite the claims that PWP suffer from poor nutritional status, evidence is lacking to inform nutrition-related care for the management of malnutrition in PWP. The aims of this thesis were to better quantify the extent of poor nutritional status in PWP, determine the important factors differentiating the well-nourished from the malnourished and evaluate the effectiveness of an individualised nutrition intervention on nutritional status.

Phase DBS: Nutritional status in people with Parkinson's disease scheduled for deep-brain stimulation surgery
The pre-operative rate of malnutrition in a convenience sample of PWP scheduled for deep-brain stimulation (DBS) surgery was determined. Poorly controlled PD symptoms may result in a higher risk of malnutrition in this sub-group of PWP. Fifteen patients (11 male, median age 68.0 (42.0 – 78.0) years, median PD duration 6.75 (0.5 – 24.0) years) participated and data were collected during hospital admission for the DBS surgery. The scored PG-SGA was used to assess nutritional status, anthropometric measures (weight, height, mid-arm circumference, waist circumference, body mass index (BMI)) were taken, and body composition was measured using bioelectrical impedance spectroscopy (BIS). Six (40%) of the participants were malnourished (SGA-B) while 53% reported significant weight loss following diagnosis. BMI was significantly different between SGA-A and SGA-B (25.6 vs 23.0kg/m2, p<.05).
There were no differences in any other variables, including PG-SGA score and the presence of non-motor symptoms. The conclusion was that malnutrition in this group is higher than that in other studies reporting malnutrition in PWP, and it is under-recognised. As poorer surgical outcomes are associated with poorer pre-operative nutritional status in other surgeries, it might be beneficial to identify patients at nutritional risk prior to surgery so that appropriate nutrition interventions can be implemented.

Phase I: Nutritional status in community-dwelling adults with Parkinson's disease
The rate of malnutrition in community-dwelling adults (>18 years) with Parkinson's disease was determined. One hundred twenty-five PWP (74 male, median age 70.0 (35.0 – 92.0) years, median PD duration 6.0 (0.0 – 31.0) years) participated. The scored PG-SGA was used to assess nutritional status, and anthropometric measures (weight, height, mid-arm circumference (MAC), calf circumference, waist circumference, body mass index (BMI)) were taken. Nineteen (15%) of the participants were malnourished (SGA-B). All anthropometric indices were significantly different between SGA-A and SGA-B (BMI 25.9 vs 20.0kg/m2; MAC 29.1 vs 25.5cm; waist circumference 95.5 vs 82.5cm; calf circumference 36.5 vs 32.5cm; all p<.05). The PG-SGA score was also significantly higher in the malnourished (8 vs 2, p<.05). The nutrition impact symptoms which differentiated between well-nourished and malnourished were no appetite, constipation, diarrhoea, problems swallowing and feel full quickly. This study concluded that malnutrition in community-dwelling PWP is higher than that documented in community-dwelling elderly (2 – 11%), yet is likely to be under-recognised. Nutrition impact symptoms play a role in reduced intake. Appropriate screening and referral processes should be established for early detection of those at risk.
Phase I: Nutrition assessment tools in people with Parkinson's disease
There are a number of validated and reliable nutrition screening and assessment tools available for use. None of these tools has been evaluated in PWP. In the sample described above, the use of the World Health Organisation (WHO) cut-off (≤18.5kg/m2), age-specific BMI cut-offs (≤18.5kg/m2 for under 65 years, ≤23.5kg/m2 for 65 years and older) and the revised Mini-Nutritional Assessment short form (MNA-SF) were evaluated as nutrition screening tools. The PG-SGA (including the SGA classification) and the MNA full form were evaluated as nutrition assessment tools using the SGA classification as the gold standard. For screening, the MNA-SF performed the best, with sensitivity (Sn) of 94.7% and specificity (Sp) of 78.3%. For assessment, the PG-SGA with a cut-off score of 4 (Sn 100%, Sp 69.8%) performed better than the MNA (Sn 84.2%, Sp 87.7%). As the MNA has been recommended more for use as a nutrition screening tool, the MNA-SF might be more appropriate and takes less time to complete. The PG-SGA might be useful to inform and monitor nutrition interventions.

Phase I: Predictors of poor nutritional status in people with Parkinson's disease
A number of assessments were conducted as part of the Phase I research, including those for the severity of PD motor symptoms, cognitive function, depression, anxiety, non-motor symptoms, constipation, freezing of gait and the ability to carry out activities of daily living. A higher score in all of these assessments indicates greater impairment. In addition, information about medical conditions, medications, age, age at PD diagnosis and living situation was collected. These were compared between those classified as SGA-A and as SGA-B. Regression analysis was used to identify which factors were predictive of malnutrition (SGA-B).
Differences between the groups included disease severity (4% more severe SGA-A vs 21% SGA-B, p<.05), activities of daily living score (13 SGA-A vs 18 SGA-B, p<.05), depressive symptom score (8 SGA-A vs 14 SGA-B, p<.05) and gastrointestinal symptoms (4 SGA-A vs 6 SGA-B, p<.05). Significant predictors of malnutrition according to SGA were age at diagnosis (OR 1.09, 95% CI 1.01 – 1.18), amount of dopaminergic medication per kg body weight (mg/kg) (OR 1.17, 95% CI 1.04 – 1.31), more severe motor symptoms (OR 1.10, 95% CI 1.02 – 1.19), less anxiety (OR 0.90, 95% CI 0.82 – 0.98) and more depressive symptoms (OR 1.23, 95% CI 1.07 – 1.41). Significant predictors of a higher PG-SGA score included living alone (β=0.14, 95% CI 0.01 – 0.26), more depressive symptoms (β=0.02, 95% CI 0.01 – 0.02) and more severe motor symptoms (β=0.01, 95% CI 0.01 – 0.02). More severe disease is associated with malnutrition, and this may be compounded by lack of social support.

Phase II: Nutrition intervention
Nineteen of the people identified in Phase I as requiring nutrition support were included in Phase II, in which a nutrition intervention was conducted. Nine participants were in the standard care group (SC), which received an information sheet only, and the other 10 participants were in the intervention group (INT), which received individualised nutrition information and weekly follow-up. INT gained 2.2% of starting body weight over the 12 week intervention period, resulting in significant increases in weight, BMI, mid-arm circumference and waist circumference. The SC group gained 1% of starting weight over the 12 weeks, which did not result in any significant changes in anthropometric indices. Energy and protein intake (18.3kJ/kg vs 3.8kJ/kg and 0.3g/kg vs 0.15g/kg) increased in both groups. The increase in protein intake was only significant in the SC group. The changes in intake did not differ significantly between the groups.
There were no significant changes in any motor or non-motor symptoms or in "off" times or dyskinesias in either group. Aspects of quality of life improved over the 12 weeks as well, especially emotional well-being. This thesis makes a significant contribution to the evidence base for the presence of malnutrition in Parkinson's disease as well as for the identification of those who would potentially benefit from nutrition screening and assessment. The nutrition intervention demonstrated that a traditional high protein, high energy approach to the management of malnutrition resulted in improved nutritional status and anthropometric indices with no effect on the presence of Parkinson's disease symptoms and a positive effect on quality of life.
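The screening-tool comparison in the thesis above rests on sensitivity and specificity against the SGA gold standard. A minimal sketch of that calculation follows; the labels are illustrative only, not study data.

```python
def sensitivity_specificity(screen_positive, gold_positive):
    """Sn = TP/(TP+FN); Sp = TN/(TN+FP), given parallel boolean labels."""
    pairs = list(zip(screen_positive, gold_positive))
    tp = sum(s and g for s, g in pairs)          # screened and truly malnourished
    fn = sum((not s) and g for s, g in pairs)    # missed by the screen
    tn = sum((not s) and (not g) for s, g in pairs)
    fp = sum(s and (not g) for s, g in pairs)    # false alarms
    return tp / (tp + fn), tn / (tn + fp)

# Illustrative labels (True = malnourished / screened at risk), not study data.
screen = [True, True, False, True, False]
gold   = [True, True, False, False, False]
sn, sp = sensitivity_specificity(screen, gold)
print(f"Sn {sn:.0%}, Sp {sp:.1%}")  # Sn 100%, Sp 66.7%
```

Sweeping a cut-off score (e.g. PG-SGA ≥ 4) over such labels is how the reported Sn/Sp trade-offs for the MNA-SF and PG-SGA would be derived.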
Abstract:
Despite plentiful efforts to identify perpetrator, victim, and incident characteristics correlated with reporting violence against women to police, few studies have addressed the contexts that shape such reporting. Even fewer have examined variations in these contexts across geographic areas. Drawing upon National Crime Victimization Survey data from 1992 through 2009, this paper uses conjunctive analysis of case configurations to identify and investigate the dominant situational contexts of reporting of violence against women to police across rural, suburban, and urban areas. Our findings show that context matters and that the importance of incident, perpetrator, and victim characteristics varies across geographic areas.
Abstract:
Background: Critical care units are designed and resourced to save lives, yet the provision of end-of-life care is a significant component of nursing work in these settings. Limited research has investigated the actual practices of critical care nurses in the provision of end-of-life care, or the factors influencing these practices. To improve the care that patients at the end of life and their families receive, and to support nurses in the provision of this care, further research is needed. The purpose of this study was to identify critical care nurses' end-of-life care practices, the factors influencing the provision of end-of-life care and the factors associated with specific end-of-life care practices. Methods: A three-phase exploratory sequential mixed-methods design was utilised. Phase one used a qualitative approach involving interviews with a convenience sample of five intensive care nurses to identify their end-of-life care experiences and practices. In phase two, an online survey instrument was developed, based on a review of the literature and the findings of phase one. The survey instrument was reviewed by six content experts and pilot tested with a convenience sample of 28 critical care nurses (response rate 45%) enrolled in a postgraduate critical care nursing subject. The refined survey instrument was used in phase three of this study to conduct a national survey of critical care nurses. Descriptive analyses, exploratory factor analysis and univariate general linear modelling were undertaken on completed survey responses from 392 critical care nurses (response rate 25%). Results: Six end-of-life care practice areas were identified in this study: information sharing, environmental modification, emotional support, patient and family-centred decision making, symptom management and spiritual support.
The items most frequently identified as always undertaken by critical care nurses in the provision of end-of-life care were from the information sharing and environmental modification practice areas. Items least frequently identified as always undertaken included items from the emotional support practice area. Eight factors influencing the provision of end-of-life care were identified: palliative values, patient and family preferences, knowledge, preparedness, organisational culture, resources, care planning, and emotional support for nurses. Strong agreement was noted with items reflecting values consistent with a palliative approach and inclusion of patient and family preferences. Variation was noted in agreement for items regarding opportunities for knowledge acquisition in the workplace and formal education, yet most respondents agreed that they felt adequately prepared. A context of nurse-led practice was identified, with variation in access to resources noted. Collegial support networks were identified as a source of emotional support for critical care nurses. Critical care nurses reporting values consistent with a palliative approach and/or those who scored higher on support for patient and family preferences were more likely to be engaged in end-of-life care practice areas identified in this study. Nurses who reported higher levels of preparedness and access to opportunities for knowledge acquisition were more likely to report engaging in interpersonal practices that supported patient and family-centred decision making and emotional support of patients and their families. A negative relationship was identified between the explanatory variables of emotional support for nurses and death anxiety, and the patient and family-centred decision making practice area. Contextual factors had a limited influence as explanatory variables of specific end-of-life care practice areas.
Gender was identified as a significant explanatory variable in the emotional and spiritual support practice areas, with male gender associated with lower summated scores on these practice scales. Conclusions: Critical care nurses engage in practices to share control with and support inclusion of families experiencing death and dying. The most frequently identified end-of-life care practices were those that are easily implemented, practical strategies aimed at supporting the patient at the end of life and the patient's family. These practices arguably require less emotional engagement by the nurse. Critical care nurses' responses reflected values consistent with a palliative approach and a strong commitment to the inclusion of families in end-of-life care, and these factors were associated with engagement in all end-of-life care practice areas. Perceived preparedness or confidence with the provision of end-of-life care was associated with engagement in interpersonal caring practices. Critical care nurses autonomously engage in the provision of end-of-life care within the constraints of an environment designed for curative care and rely on their colleagues for emotional support. Critical care nurses must be adequately prepared and supported to provide comprehensive care in all areas of end-of-life care practice. The findings of this study raise important implications and inform recommendations for practice, education and further research.
Abstract:
In 1963, the National Institutes of Health (NIH) first issued guidelines for animal housing and husbandry. The most recent 2010 revision emphasizes animal care “in ways judged to be scientifically, technically, and humanely appropriate” (National Institutes of Health, 2010, p. XIII). The goal of these guidelines is to ensure humane treatment of animals and to optimize the quality of research. Although these animal care guidelines cover a substantial amount of information regarding animal housing and husbandry, researchers generally do not report all these variables (see Table 1). The importance of housing and husbandry conditions with respect to standardization across different research laboratories has been debated previously (Crabbe et al., 1999; Van Der Staay and Steckler, 2002; Wahlsten et al., 2003; Wolfer et al., 2004; Van Der Staay, 2006; Richter et al., 2010, 2011). This paper focuses on several animal husbandry and housing issues that are particularly relevant to stress responses in rats, including transportation, handling, cage changing, housing conditions, light levels and the light–dark cycle. We argue that these key animal housing and husbandry variables should be reported in greater detail in an effort to raise awareness about extraneous experimental variables, especially those that have the potential to interact with the stress response.
Abstract:
This Technical and Background Paper summarises the results of a project funded by the Australian Government Attorney-General’s Department. The project aimed to clarify the contribution of the community night patrol program in the Northern Territory (NT) to improving the community safety of Indigenous communities. The paper recommends an improved framework for performance monitoring and reporting. Community night patrols or similar services operate in many other areas of Australia and internationally. The paper concludes that the core business of community night patrols is (non-crisis) crime prevention, not de facto policing. It also concludes that an unrecognised outcome of patrols is capturing and sharing local knowledge about community safety issues and solutions. Over time, community night patrols should focus on working with other services to reduce the need for repeat assistance to persons at risk and for risky incidents. The recently released Northern Territory Emergency Response Evaluation Report (2011) confirmed that communities and service providers surveyed largely support night patrols, but better data are required to more comprehensively assess their performance.
Abstract:
To improve detection of child sexual abuse, many jurisdictions have enacted mandatory reporting laws requiring selected persons to report known and suspected cases. In Ireland, the Children First approach previously incorporated only a policy-based approach to reporting. Due to a perceived lack of efficacy, the Children First Bill was drafted in 2012 to shift this policy guidance to a legislative approach. What effects will the new legislative reporting duties have on the numbers of reports, and the outcomes of reports, of suspected child sexual abuse? This paper will shed light on these important questions by presenting results of analyses of the introduction of legislative reporting obligations in two Australian states. Three questions will be explored: 1. Does introducing reporting legislation result in enhanced detection of child sexual abuse? 2. Do different reporter groups have different patterns of reporting? 3. What do the patterns of report numbers and outcomes indicate for child protection systems and communities?
Abstract:
This study examined elementary school teachers’ knowledge of their legislative and policy-based reporting duties with respect to child sexual abuse. Data were collected from 470 elementary school teachers from urban and rural government and nongovernment schools in 3 Australian states, which at the time of the study had 3 different legislative reporting duties for teachers. Teachers completed the 8-part Teacher Reporting Questionnaire (TRQ). Multinomial logistic regression analysis was used to determine factors associated with (a) teachers’ legislation knowledge and (b) teachers’ policy knowledge. Teachers with higher levels of knowledge had a combination of pre- and in-service training about child sexual abuse and more positive attitudes toward reporting, held administration positions in their school, and had reported child sexual abuse at least once during their teaching career. They were also more likely to work in the state with the strongest legislative reporting duty, which had been in place the longest.
Abstract:
Taiwanese nurses are mandated to report known or suspected child abuse and neglect (CAN), and self-efficacy is known to have an important influence on professional behaviors. The aim of this study was to develop and test the CAN reporting self-efficacy (CANRSE) scale as a measure of nurses’ self-efficacy to report CAN. A sample of 496 nurses from southern Taiwanese hospitals completed the CANRSE scale. The psychometric evaluation of the scale included content validity, exploratory and confirmatory factor analyses, convergent validity, as well as Cronbach’s α and test-retest reliability. Satisfactory internal consistency (Cronbach’s α = 0.92) and test-retest reliability were demonstrated. Confirmatory factor analysis supported the proposed models as having acceptable model fit. Exploratory factor analysis and regression analyses showed that the CANRSE scale had good construct validity and criterion-related validity, respectively. Convergent validity was tested using the general self-efficacy scale and was found to be satisfactory (r = 0.53). The results indicate the CANRSE is reliable and valid, and further testing of its predictive validity is recommended. It can be used to examine the influence of professional self-efficacy in recognizing and reporting CAN cases and to evaluate the impact of training programs aimed at improving CAN reporting.
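The internal-consistency figure reported above (Cronbach's α = 0.92) follows the standard formula α = k/(k−1) · (1 − Σ item variances / total-score variance). A minimal sketch with made-up item scores (not the CANRSE data):

```python
def cronbach_alpha(items):
    """Cronbach's alpha; each inner list holds one item's scores
    across the same respondents, in the same order."""
    k = len(items)

    def var(xs):  # sample variance (n - 1 denominator)
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    totals = [sum(scores) for scores in zip(*items)]  # per-respondent total score
    return k / (k - 1) * (1 - sum(var(it) for it in items) / var(totals))

# Made-up responses: 3 items, 4 respondents (illustrative only).
items = [[4, 5, 3, 5], [4, 4, 3, 5], [5, 5, 3, 4]]
print(f"alpha = {cronbach_alpha(items):.2f}")  # alpha = 0.84
```

When the items are perfectly consistent (identical columns), the formula yields α = 1.0, which is a handy sanity check on any implementation.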
Abstract:
Context The relatively low number of older patients in cancer trials limits knowledge of how older adults experience symptoms associated with cancer and its treatment. Objectives This study evaluated differences in the symptom experience across four older age groups (60–64, 65–69, 70–74, ≥75 years). Methods Demographic, clinical, and symptom data from 330 patients aged ≥60 years who participated in one Australian and two U.S. studies were evaluated. The Memorial Symptom Assessment Scale was used to evaluate the occurrence, severity, frequency, and distress of 32 symptoms commonly associated with cancer and its treatment. Results On average, regardless of the age group, patients reported 10 concurrent symptoms. The most prevalent symptoms were physical in nature. Worrying was the most common psychological symptom. For 28 (87.5%) of the 32 Memorial Symptom Assessment Scale symptoms, no age-related differences were found in symptom occurrence rates. For symptom severity ratings, an age-related trend was found for difficulty swallowing. As age increased, severity of difficulty swallowing decreased. For symptom frequency, age-related trends were found for feeling irritable and diarrhea, with both decreasing in frequency as age increased. For symptom distress, age-related trends were found for lack of energy, shortness of breath, feeling bloated, and difficulty swallowing. As age increased, these symptoms received lower average distress ratings. Conclusion Additional research is warranted to examine how age differences in symptom experience are influenced by treatment differences, aging-related changes in biological or psychological processes, or age-related response shift.
Abstract:
Occupational exposures of healthcare workers tend to occur because of inconsistent compliance with standard precautions. Also, the incidence of occupational exposure is underreported among operating room personnel. The purpose of this project was to develop national estimates for compliance with standard precautions and occupational exposure reporting practices among operating room nurses in Australia. Data were obtained using a 96-item self-report survey. The Standard Precautions and Occupational Exposure Reporting survey was distributed anonymously to 500 members of the Australian College of Operating Room Nurses. The Health Belief Model was the theoretical framework used to guide the analysis of data. Data were analysed to examine relationships between specific constructs of the Health Belief Model to identify factors that might influence the operating room nurse to undertake particular health behaviours to comply with standard precautions and occupational exposure reporting. Results of the study revealed compliance rates of 55.6% with double gloving, 59.1% with announcing sharps transfers, 71.9% with using a hands-free sharps pass technique, 81.9% with no needle recapping and 92.0% with adequate eye protection. Although 31.6% of respondents indicated receiving an occupational exposure in the past 12 months, only 82.6% of them reported their exposures. The results of this study provide national estimates of compliance with standard precautions and occupational exposure reporting among operating room nurses in Australia. These estimates can now be used as support for the development and implementation of measures to improve practices in order to reduce occupational exposures and, ultimately, disease transmission rates among this high-risk group.