243 results for Risk Assessment Methods


Relevance: 30.00%

Abstract:

Successful healing of long bone fractures depends on the mechanical environment created within the fracture, which in turn depends on the fixation strategy. Recent literature has suggested that locked plating devices are too stiff to reliably promote healing. However, in vitro testing of these devices has been inconsistent in both method of constraint and reported outcomes, making comparisons between studies and the assessment of construct stiffness problematic. Each of the constraint methods previously used in the literature was assessed for its effect on the bending of the sample and the resulting stiffness. The choice of outcome measures used in in vitro fracture studies was also assessed. Mechanical testing was conducted on seven-hole locked plate constructs under each method for comparison. Based on the assessment of each method, the use of spherical bearings, ball joints or similar is recommended at both ends of the sample. The use of near and far cortex movement was found to be more comprehensive and more accurate than traditional centrally calculated interfragmentary movement values; stiffness was found to be highly susceptible to the accuracy of deformation measurements and to the constraint method, and should only be used as a within-study comparison measure. The stiffness values of locked plate constructs reported from in vitro mechanical testing are highly susceptible to testing constraints and output measures, with many standard techniques overestimating the stiffness of the construct. This raises the need for further investigation into the actual mechanical behaviour within the fracture gap of these devices.
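The abstract's point that stiffness is highly susceptible to the accuracy of deformation measurements can be illustrated with a minimal sketch. The load, displacement and error values below are hypothetical, not taken from the study.

```python
def stiffness(load_n, displacement_mm):
    """Construct stiffness (N/mm) from a load-displacement pair."""
    return load_n / displacement_mm

# Hypothetical test point: a stiff construct deflecting 0.50 mm under 200 N,
# with a 0.05 mm systematic over-read of the deformation measurement.
load = 200.0        # N
true_disp = 0.50    # mm
over_read = 0.05    # mm

k_true = stiffness(load, true_disp)                   # 400 N/mm
k_measured = stiffness(load, true_disp + over_read)
relative_error = (k_true - k_measured) / k_true       # ~9% error in stiffness
```

A small absolute error in measured deformation produces a disproportionately large relative error in stiffness, which is why the abstract recommends stiffness only as a within-study comparison measure.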

Relevance: 30.00%

Abstract:

- Objective This study examined chronic disease risks and the use of a smartphone activity tracking application during an intervention in Australian truck drivers (April-October 2014). - Methods Forty-four men (mean age=47.5 [SD 9.8] years) completed baseline health measures, and were subsequently offered access to a free wrist-worn activity tracker and smartphone application (Jawbone UP) to monitor step counts and dietary choices during a 20-week intervention. Chronic disease risks were evaluated against guidelines; weekly step count and dietary logs registered by drivers in the application were analysed to evaluate use of the Jawbone UP. - Results Chronic disease risks were high (e.g. 97% had a high waist circumference [≥94 cm]). Eighteen drivers (41%) did not start the intervention; smartphone technical barriers were the main reason for drop out. Across 20 weeks, drivers who used the Jawbone UP logged step counts for an average of 6 [SD 1] days/week; mean step counts remained consistent across the intervention (weeks 1–4=8,743 [SD 2,867] steps/day; weeks 17–20=8,994 [SD 3,478] steps/day). The median number of dietary logs decreased significantly from the start (17 [IQR 38] logs/week) to the end of the intervention (0 [IQR 23] logs/week; p<0.01); the median proportion of healthy diet choices relative to total diet choices logged increased across the intervention (weeks 1–4=38 [IQR 21]%; weeks 17–20=58 [IQR 18]%). - Conclusions Step counts were more successfully monitored than dietary choices among the drivers who used the Jawbone UP. - Implications Smartphone technology facilitated active living and healthy dietary choices, but technical barriers also impeded intervention engagement in a number of these high-risk Australian truck drivers.

Relevance: 30.00%

Abstract:

- Objectives To identify the psychological effects of false-positive screening mammograms in the UK. - Methods Systematic review of all controlled studies and qualitative studies of women with a false-positive screening mammogram. Control group participants had normal mammograms. All psychological outcomes, including return for routine screening, were eligible. Findings were combined in a narrative synthesis. - Results The searches returned seven includable studies (7/4423). Heterogeneity was such that meta-analysis was not possible. Studies using disease-specific measures found that, compared with normal results, false-positive mammograms could cause enduring psychological distress lasting up to 3 years, with the level of distress related to the invasiveness of the assessment. At 3 years the relative risks of distress were 1.28 (95% CI 0.82 to 2.00) after further mammography, 1.80 (95% CI 1.17 to 2.77) after fine needle aspiration, 2.07 (95% CI 1.22 to 3.52) after biopsy and 1.82 (95% CI 1.22 to 2.72) after early recall. Studies that used generic measures of anxiety and depression found no such impact up to 3 months after screening. Evidence suggests that women with false-positive mammograms are less likely to reattend for routine screening than women with normal mammograms (relative risk of reattendance 0.97, 95% CI 0.96 to 0.98). - Conclusions Having a false-positive screening mammogram can cause breast cancer-specific distress for up to 3 years. The degree of distress is related to the invasiveness of the assessment. Women with false-positive mammograms are less likely to return for routine screening than those with normal ones.

Relevance: 30.00%

Abstract:

Background Recent estimates suggest that high body mass index (BMI), smoking, high blood pressure (BP) and physical inactivity are leading risk factors for the overall burden of disease in Australia. The aim was to examine the population attributable risk (PAR) of heart disease for each of these risk factors, across the adult lifespan in Australian women. Methods PARs were estimated using relative risks (RRs) for each of the four risk factors, as used in the Global Burden of Disease Study, and prevalence estimates from the Australian Longitudinal Study on Women's Health, in 15 age groups from 22–27 (N=9608) to 85–90 (N=3901). Results RRs and prevalence estimates varied across the lifespan. RRs ranged from 6.15 for smoking in the younger women to 1.20 for high BMI and high BP in the older women. Prevalence of risk exposure ranged from 2% for high BP in the younger women to 79% for high BMI in mid-age women. In young adult women up to age 30, the highest population risk was attributed to smoking. From age 31 to 90, PARs were highest for physical inactivity. Conclusions From about age 30, the population risk of heart disease attributable to inactivity outweighs that of other risk factors, including high BMI. Programmes for the promotion and maintenance of physical activity deserve to be a much higher public health priority for women than they are now, across the adult lifespan.
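The PAR estimates described above conventionally follow Levin's formula, PAR = p(RR − 1) / (1 + p(RR − 1)), where p is the prevalence of exposure. In this sketch the RR values echo those quoted in the abstract, but the prevalence figures are hypothetical placeholders, not the study's estimates.

```python
def population_attributable_risk(prevalence, relative_risk):
    """Levin's formula: PAR = p(RR - 1) / (1 + p(RR - 1))."""
    excess = prevalence * (relative_risk - 1.0)
    return excess / (1.0 + excess)

# RRs echo the abstract (6.15 for smoking in younger women, 1.20 for
# high BMI in older women); the prevalences are hypothetical placeholders.
par_smoking = population_attributable_risk(0.25, 6.15)
par_high_bmi = population_attributable_risk(0.79, 1.20)
```

The formula makes the abstract's point concrete: a modestly prevalent exposure with a large RR can carry a higher population risk than a very prevalent exposure with a small RR.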

Relevance: 30.00%

Abstract:

Intrusion (unauthorized stepping into or staying in a hazardous area), as a common type of near-miss, is the prime cause of the majority of incidents on construction sites, including falls from height and striking against or being struck by moving objects. Accidents often occur because workers take shortcuts moving about the site without fully perceiving the potential dangers. A number of studies have been devoted to developing methods to prevent such behaviors, mainly based on the theory of Behavior-Based Safety (BBS), which aims to cultivate safe behaviors among workers in accordance with safety regulations. In current BBS practice, trained observers and safety supervisors are responsible for safety behavior inspections following safety plans and operation regulations. The observation process is time-consuming, and its effectiveness depends largely on the observer's safety knowledge and experience, which often results in omissions or bias. This paper presents an improved safety behavior modification approach that integrates a location-based technology with BBS. Firstly, a detailed background is provided, covering current intrusion problems on site, existing use of BBS for behavior improvement, difficulties in achieving widespread adoption, and potential technologies for location tracking and real-time feedback. Then, a conceptual framework of positioning technology-enhanced BBS is developed, followed by details of the corresponding online supporting system, Real Time Location System (RTLS) and Virtual Construction System (VCS). The application of the system is then demonstrated and tested on a construction site in Hong Kong. Final comments are made concerning further research directions and prospects for wider adoption.
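At its core, detecting an intrusion from RTLS data is a geofencing check: is a worker's position fix inside a hazardous area? A minimal sketch, assuming 2-D site coordinates and a hypothetical zone polygon (the paper's actual RTLS/VCS implementation is not described here):

```python
def point_in_polygon(x, y, polygon):
    """Ray-casting test: is point (x, y) inside the polygon
    (given as a list of (x, y) vertices)?"""
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        # Does the edge cross the horizontal ray to the right of the point?
        if (y1 > y) != (y2 > y):
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

# Hypothetical hazardous zone (metres); real zones would come from the VCS.
hazard_zone = [(0.0, 0.0), (10.0, 0.0), (10.0, 5.0), (0.0, 5.0)]

def is_intrusion(worker_xy, zone=hazard_zone):
    """Flag an intrusion when a worker's RTLS fix falls inside the zone."""
    return point_in_polygon(worker_xy[0], worker_xy[1], zone)
```

In a deployed system this check would run on each incoming position fix, triggering the in-time feedback the framework envisages.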

Relevance: 30.00%

Abstract:

Aim: The aim was to investigate whether sleep practices in early childhood education (ECE) settings align with current evidence on optimal practice to support sleep. Background: Internationally, scheduled sleep times are a common feature of daily schedules in ECE settings, yet little is known about the degree to which care practices in these settings align with the evidence regarding appropriate support of sleep. Methods: Observations were conducted in 130 Australian ECE rooms attended by preschool children (mean age = 4.9 years). Of these rooms, 118 had daily scheduled sleep times. Observed practices were scored against an optimality index, the Sleep Environment and Practices Optimality Score, developed with reference to current evidence regarding sleep scheduling, routines, environmental stimuli, and emotional climate. Cluster analysis was applied to identify patterns and prevalence of care practices during sleep time. Results: Three sleep practice types were identified. Supportive rooms (36%) engaged in practices that maintained regular schedules, promoted routine, reduced environmental stimulation, and maintained a positive emotional climate. The majority of ECE rooms (64%), although offering opportunity for sleep, did not engage in supportive practices: Ambivalent rooms (45%) were emotionally positive but did not support sleep; Unsupportive rooms (19%) were both emotionally negative and unsupportive in their practices. Conclusions: Although ECE rooms schedule sleep time, many do not adopt practices that are supportive of sleep. Our results underscore the need for education about sleep-supporting practice and for research to ascertain the impact of sleep practices in ECE settings on children's sleep health and broader well-being.

Relevance: 30.00%

Abstract:

Purpose To determine the association between conjunctival goblet cell density (GCD) assessed using in vivo laser scanning confocal microscopy and conjunctival impression cytology in a healthy population. Methods Ninety healthy participants completed a validated 5-item dry eye questionnaire, non-invasive tear film break-up time measurement, ocular surface fluorescein staining and the phenol red thread test. These tests were undertaken to diagnose and exclude participants with dry eye. The nasal bulbar conjunctiva was imaged using laser scanning confocal microscopy (LSCM). Conjunctival impression cytology (CIC) was performed in the same region a few minutes later. Conjunctival goblet cell density was calculated in cells/mm2. Results There was a strong positive correlation of conjunctival GCD between LSCM and CIC (ρ = 0.66). Conjunctival goblet cell density was 475 ± 41 cells/mm2 and 466 ± 51 cells/mm2 measured by LSCM and CIC, respectively. Conclusions The strong association between in vivo and in vitro cellular analysis for measuring conjunctival GCD suggests that the more invasive CIC can be replaced by the less invasive LSCM in research and clinical practice.
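The reported agreement (ρ = 0.66) is a Spearman rank correlation, i.e. a Pearson correlation computed on ranks. A pure-Python sketch, using made-up GCD values rather than the study's data:

```python
def rank(values):
    """Average ranks (1-based), with ties sharing the mean rank."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman_rho(x, y):
    """Spearman's rank correlation: Pearson correlation of the ranks."""
    rx, ry = rank(x), rank(y)
    mx, my = sum(rx) / len(rx), sum(ry) / len(ry)
    num = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    den = (sum((a - mx) ** 2 for a in rx) * sum((b - my) ** 2 for b in ry)) ** 0.5
    return num / den

# Hypothetical paired GCD measurements (cells/mm2); perfectly concordant
# in this toy data, so rho comes out as 1.0.
lscm = [450, 520, 380, 610, 475]
cic = [440, 500, 395, 590, 466]
rho = spearman_rho(lscm, cic)
```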

Relevance: 30.00%

Abstract:

Background Medication incident reporting (MIR) is a key safety-critical care process in residential aged care facilities (RACFs). Retrospective studies of medication incident reports in aged care have identified the inability of existing MIR processes to generate information that can be used to enhance residents' safety. However, there is little existing research that investigates the limitations of the information exchange process that underpins MIR, despite the considerable resources that RACFs devote to the MIR process. The aim of this study was to undertake an in-depth exploration of the information exchange process involved in MIR and to identify factors that inhibit the collection of meaningful information in RACFs. Methods The study was undertaken in three RACFs (part of a large non-profit organisation) in NSW, Australia. A total of 23 semi-structured interviews and 62 hours of observation sessions were conducted between May and July 2011. The qualitative data were iteratively analysed using a grounded theory approach. Results The findings highlight significant gaps in the design of the MIR artefacts as well as information exchange issues in MIR process execution. The results emphasise the need to: a) design MIR artefacts that facilitate identification of the root causes of medication incidents, b) integrate the MIR process within existing information systems to overcome key gaps in information exchange execution, and c) support exchange of information that can facilitate a multi-disciplinary approach to medication incident management in RACFs. Conclusions This study highlights the advantages of viewing the MIR process holistically rather than as segregated tasks, as a means to identify gaps in information exchange that need to be addressed in practice to improve safety-critical processes.

Relevance: 30.00%

Abstract:

Project evaluation is a process of measuring costs, benefits, risks and uncertainties for the purpose of decision-making by estimating and assessing the impacts of a project on the community. The impacts of toll roads are similar to, but distinct from, those of general non-tolled roads. Project evaluation methodologies have been extensively studied and applied to various transport infrastructure projects; however, there is no definitive methodology for evaluating toll roads. This review discusses the impacts of toll roads and then examines the limitations of existing project evaluation methodologies when evaluating those impacts. The review identified gaps in knowledge of toll road evaluation. First, the treatment of tolls in project evaluation, particularly in Cost-Benefit Analysis, requires further study to establish an appropriate methodology. Secondly, the project evaluation methodology needs to place strong emphasis on empirically based risk and uncertainty assessment. Addressing the limitations of the existing project evaluation methodologies improves the methodology at a practical level and fills the knowledge gap in project evaluation for toll roads with respect to net impacts on the community.

Relevance: 30.00%

Abstract:

Background Skin temperature assessment is a promising modality for early detection of diabetic foot problems, but its diagnostic value has not been studied. Our aims were to investigate the diagnostic value of different cutoff skin temperature values for detecting diabetes-related foot complications such as ulceration, infection, and Charcot foot, and to determine urgency of treatment in case of diagnosed infection or a red-hot swollen foot. Materials and Methods The plantar foot surfaces of 54 patients with diabetes visiting the outpatient foot clinic were imaged with an infrared camera. Nine patients had complications requiring immediate treatment, 25 patients had complications requiring non-immediate treatment, and 20 patients had no complications requiring treatment. Average pixel temperature was calculated for six predefined spots and for the whole foot. We calculated the area under the receiver operating characteristic curve for different cutoff skin temperature values using clinical assessment as the reference, and determined the sensitivity and specificity for the optimal cutoff temperature value. Mean temperature difference between feet was analyzed using the Kruskal–Wallis test. Results The optimal cutoff skin temperature value for detection of diabetes-related foot complications was a 2.2°C difference between contralateral spots (sensitivity, 76%; specificity, 40%). The optimal cutoff skin temperature value for determining urgency of treatment was a 1.35°C difference between the mean temperature of the left and right foot (sensitivity, 89%; specificity, 78%). Conclusions Detection of diabetes-related foot complications based on local skin temperature assessment is hindered by low diagnostic values. Mean temperature difference between the two feet may be an adequate marker for determining urgency of treatment.
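The sensitivity and specificity reported for a given cutoff can be sketched as a simple confusion-matrix calculation. The temperature differences and clinical labels below are invented for illustration, not the study's data.

```python
def sensitivity_specificity(temp_diffs_c, has_complication, cutoff_c):
    """Flag a complication when the contralateral temperature difference
    (°C) meets or exceeds the cutoff; return (sensitivity, specificity)."""
    tp = fp = tn = fn = 0
    for diff, positive in zip(temp_diffs_c, has_complication):
        flagged = diff >= cutoff_c
        if flagged and positive:
            tp += 1
        elif flagged and not positive:
            fp += 1
        elif not flagged and positive:
            fn += 1
        else:
            tn += 1
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical patients: inter-spot differences (°C) and clinical labels,
# evaluated at the 2.2°C cutoff quoted in the abstract.
diffs = [3.0, 2.5, 2.0, 1.0, 0.5]
labels = [True, True, False, True, False]
sens, spec = sensitivity_specificity(diffs, labels, cutoff_c=2.2)
```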

Relevance: 30.00%

Abstract:

Background Patients with diabetic foot disease require frequent screening to prevent complications and may be helped through telemedical home monitoring. Within this context, the goal was to determine the validity and reliability of assessing diabetic foot infection using photographic foot imaging and infrared thermography. Subjects and Methods For 38 patients with diabetes who presented with a foot infection or were admitted to the hospital with a foot-related complication, photographs of the plantar foot surface were obtained using a photographic imaging device, and temperature data from six plantar regions were obtained using an infrared thermometer. A temperature difference between feet of >2.2°C defined a "hotspot." Two independent observers assessed each foot for presence of foot infection, both live (using the Perfusion-Extent-Depth-Infection-Sensation classification) and from photographs 2 and 4 weeks later (for presence of erythema and ulcers). Agreement in diagnosis between live assessment and (the combination of) photographic assessment and temperature recordings was calculated. Results Diagnosis of infection from photographs was specific (>85%) but not very sensitive (<60%). Diagnosis based on hotspots present was sensitive (>90%) but not very specific (<25%). Diagnosis based on the combination of photographic and temperature assessments was both sensitive (>60%) and specific (>79%). Intra-observer agreement between photographic assessments was good (Cohen's κ = 0.77 and 0.52 for the two observers). Conclusions Diagnosis of foot infection in patients with diabetes seems valid and reliable using photographic imaging in combination with infrared thermography. This supports the intended use of these modalities for the home monitoring of high-risk patients with diabetes to facilitate early diagnosis of signs of foot infection.
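The intra-observer agreement statistic quoted above is Cohen's kappa, which corrects raw agreement for agreement expected by chance. A sketch for two binary rating lists; the ratings shown are hypothetical.

```python
def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two binary (0/1) rating lists of equal length."""
    n = len(rater_a)
    p_observed = sum(1 for x, y in zip(rater_a, rater_b) if x == y) / n
    p_a = sum(rater_a) / n  # proportion rated 'infected' in the first pass
    p_b = sum(rater_b) / n
    p_expected = p_a * p_b + (1 - p_a) * (1 - p_b)  # chance agreement
    return (p_observed - p_expected) / (1 - p_expected)

# Hypothetical first and repeat photographic assessments by one observer
# (1 = infection present, 0 = absent).
first = [1, 1, 0, 0, 1, 0, 1, 0]
second = [1, 1, 0, 0, 1, 0, 0, 0]
kappa = cohens_kappa(first, second)
```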

Relevance: 30.00%

Abstract:

Introduction Recent reports have highlighted the prevalence of vitamin D deficiency and suggested an association with excess mortality in critically ill patients. Serum vitamin D concentrations in these studies were measured following resuscitation. It is unclear whether aggressive fluid resuscitation independently influences serum vitamin D. Methods Nineteen patients undergoing cardiopulmonary bypass were studied. Serum 25(OH)D3, 1α,25(OH)2D3, parathyroid hormone, C-reactive protein (CRP), and ionised calcium (iCa) were measured at five defined timepoints: T1, baseline; T2, 5 minutes after onset of cardiopulmonary bypass (CPB) (time of maximal fluid effect); T3, on return to the intensive care unit; T4, 24 hours after surgery; and T5, 5 days after surgery. Linear mixed models were used to compare measures at T2-T5 with baseline measures. Results Acute fluid loading resulted in a 35% reduction in 25(OH)D3 (59 ± 16 to 38 ± 14 nmol/L, P < 0.0001), a 45% reduction in 1α,25(OH)2D3 (99 ± 40 to 54 ± 22 pmol/L, P < 0.0001) and a reduction in iCa (P < 0.01), with elevation in parathyroid hormone (P < 0.0001). Serum 25(OH)D3 returned to baseline only at T5, while 1α,25(OH)2D3 demonstrated an overshoot above baseline at T5 (P < 0.0001). There was a delayed rise in CRP at T4 and T5; this was not associated with a reduction in vitamin D levels at these time points. Conclusions Hemodilution significantly lowers serum 25(OH)D3 and 1α,25(OH)2D3, which may take up to 24 hours to resolve. Moreover, the delayed overshoot of 1α,25(OH)2D3 needs consideration. We urge caution in interpreting serum vitamin D in critically ill patients in the context of major resuscitation, and would advocate repeating the measurement once the effects of the resuscitation have abated.

Relevance: 30.00%

Abstract:

Introduction The Elaborated Intrusion (EI) Theory of Desire holds that desires for functional and dysfunctional goals share a common form. Both are embodied cognitive events, characterised by affective intensity and frequency. Accordingly, we developed scales to measure motivational cognitions for functional goals (Motivational Thought Frequency, MTF; State Motivation, SM), based on the existing Craving Experience Questionnaire (CEQ). When applied to increasing exercise, the MTF and SM showed the same three-factor structure as the CEQ (Intensity, Imagery, Availability). The current study tested the internal structure and concurrent validity of the MTF and SM scales when applied to control of alcohol consumption (MTF-A; SM-A). Methods Participants (N = 417) were adult tertiary students, staff or community members who had recently engaged in high-risk drinking or were currently trying to control alcohol consumption. They completed an online survey comprising the MTF-A, SM-A, Alcohol Use Disorders Identification Test (AUDIT), Readiness to Change Questionnaire (RCQ) and demographics. Results Confirmatory Factor Analysis gave acceptable fit for the MTF-A, but required the loss of one SM-A item, and was improved by intercorrelations of error terms. Higher scores were associated with more severe problems on the AUDIT and with higher Contemplation and Action scores on the RCQ. Conclusions The MTF-A and SM-A show potential as measures of motivation to control drinking. Future research will examine their predictive validity and sensitivity to change. The scales' application to both increasing functional and decreasing dysfunctional behaviours is consistent with EI Theory's contention that both goal types operate in similar ways.

Relevance: 30.00%

Abstract:

Purpose In the oncology population, where malnutrition prevalence is high, more descriptive screening tools can provide further information to assist triaging and capture acute change. The Patient-Generated Subjective Global Assessment Short Form (PG-SGA SF) is a component of a nutritional assessment tool that could be used for descriptive nutrition screening. The purpose of this study was to conduct a secondary analysis of nutrition screening and assessment data to identify the most relevant information contributing to the PG-SGA SF to identify malnutrition risk with high sensitivity and specificity. Methods This was an observational, cross-sectional study of 300 consecutive adult patients receiving ambulatory anti-cancer treatment at an Australian tertiary hospital. Anthropometric and patient descriptive data were collected. The scored PG-SGA generated a score for nutritional risk (PG-SGA SF) and a global rating for nutrition status. Receiver operating characteristic (ROC) curves were generated to determine optimal cut-off scores for combinations of the PG-SGA SF boxes with the greatest sensitivity and specificity for predicting malnutrition according to the scored PG-SGA global rating. Results The additive scores of boxes 1–3 had the highest sensitivity (90.2%) while maintaining satisfactory specificity (67.5%) and demonstrating high diagnostic value (AUC = 0.85, 95% CI = 0.81–0.89). The inclusion of box 4 (PG-SGA SF) did not add further value as a screening tool (AUC = 0.85, 95% CI = 0.80–0.89; sensitivity 80.4%; specificity 72.3%). Conclusions The validity of the PG-SGA SF in chemotherapy outpatients was confirmed. The present study nevertheless demonstrated that the functional capacity question (box 4) does not improve the overall discriminatory value of the PG-SGA SF.
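Optimal cut-off scores in a ROC analysis like the one above are commonly chosen by maximising Youden's J (sensitivity + specificity − 1) across candidate cutoffs. A sketch over hypothetical screening scores and malnutrition labels, not the study's data:

```python
def youden_optimal_cutoff(scores, malnourished):
    """Scan candidate cutoffs and return (cutoff, J) maximising
    Youden's J = sensitivity + specificity - 1, where a score >= cutoff
    flags malnutrition risk."""
    best = (None, -1.0)
    for cutoff in sorted(set(scores)):
        tp = sum(s >= cutoff and m for s, m in zip(scores, malnourished))
        fn = sum(s < cutoff and m for s, m in zip(scores, malnourished))
        fp = sum(s >= cutoff and not m for s, m in zip(scores, malnourished))
        tn = sum(s < cutoff and not m for s, m in zip(scores, malnourished))
        j = tp / (tp + fn) + tn / (tn + fp) - 1
        if j > best[1]:
            best = (cutoff, j)
    return best

# Hypothetical additive box scores and global-rating labels.
scores = [2, 3, 5, 7, 9]
malnourished = [False, False, True, True, True]
cutoff, j = youden_optimal_cutoff(scores, malnourished)
```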

Relevance: 30.00%

Abstract:

Toxic chemical pollutants such as heavy metals (HMs) are commonly present in urban stormwater. These pollutants can pose a significant risk to human health and hence a significant barrier to urban stormwater reuse. The primary aim of this study was to develop an approach for quantitatively assessing the risk to human health due to the presence of HMs in stormwater. This approach will lead to informed decision-making in relation to risk management of urban stormwater reuse, enabling efficient implementation of appropriate treatment strategies. In this study, risks to human health from heavy metals were assessed as a hazard index (HI) and quantified as a function of traffic and land use related parameters. Traffic and land use are the primary factors influencing heavy metal loads in the urban environment. The risks posed by heavy metals associated with total solids and fine solids (<150 µm) were considered to represent the maximum and minimum risk levels, respectively. The study outcomes confirmed that Cr, Mn and Pb pose the highest risks, although these elements are generally present in low concentrations. The study also found that even though the presence of a single heavy metal may not pose a significant risk, the presence of multiple heavy metals could be detrimental to human health. These findings suggest that stormwater guidelines should consider the combined risk from multiple heavy metals rather than the threshold concentration of an individual species. Furthermore, it was found that the risk to human health from heavy metals in stormwater is significantly influenced by traffic volume, and that the risk associated with stormwater from industrial areas is generally higher than that from commercial and residential areas.
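A hazard index is conventionally the sum of per-metal hazard quotients, each the estimated intake divided by a reference dose (RfD). The sketch below uses placeholder intakes and RfD values, not the study's data or regulatory figures, to illustrate the combined-risk argument in the abstract.

```python
def hazard_index(intake_mg_kg_day, rfd_mg_kg_day):
    """HI = sum over metals of the hazard quotient HQ = intake / RfD."""
    return sum(intake_mg_kg_day[m] / rfd_mg_kg_day[m]
               for m in intake_mg_kg_day)

# Hypothetical daily intakes and reference doses (mg/kg/day); placeholders
# only, chosen so each individual HQ stays below 1.
intake = {"Cr": 0.001, "Mn": 0.02, "Pb": 0.002}
rfd = {"Cr": 0.003, "Mn": 0.14, "Pb": 0.0035}
hi = hazard_index(intake, rfd)
# No single HQ exceeds 1, yet the combined HI does: multiple metals
# together can pose a risk that no individual metal poses alone.
```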