243 results for Risks Assessment Methods
Abstract:
Background: Research involving incapacitated persons with dementia entails complex scientific, legal, and ethical issues, making traditional surveys of layperson views on the ethics of such research challenging. We therefore assessed the impact of democratic deliberation (DD), involving balanced, detailed education and peer deliberation, on the views of those responsible for persons with dementia. Methods: One hundred and seventy-eight community-recruited caregivers or primary decision-makers for persons with dementia were randomly assigned to either an all-day DD session group or a control group. Educational materials used for the DD session were vetted for balance and accuracy by an interdisciplinary advisory panel. We assessed the acceptability of family-surrogate consent for dementia research (“surrogate-based research”) from a societal policy perspective as well as from the more personal perspectives of deciding for a loved one or for oneself (surrogate and self-perspectives), assessed at baseline, immediately after the DD session, and 1 month later, for four research scenarios of varying risk-benefit profiles. Results: At baseline, a majority in both the DD and control groups supported a policy of family consent for dementia research in all research scenarios. Support for a policy of family consent for surrogate-based research increased in the DD group, but not in the control group, and the change in the DD group was maintained 1 month later. In the DD group, there were transient changes in attitudes from the surrogate and self-perspectives. In the control group, there were no changes from baseline in attitude toward surrogate consent from any perspective. Conclusions: Intensive, balanced, and accurate education, along with the peer deliberation provided by democratic deliberation, led to a sustained increase in support for a societal policy of family consent in dementia research among those responsible for persons with dementia.
Abstract:
Of the numerous factors that play a role in fatal pedestrian collisions, the time of day, day of the week, and time of year can be significant determinants. More than 60% of all pedestrian collisions in 2007 occurred at night, despite the presumed decrease in both pedestrian and automobile exposure during the night. Although this trend is partially explained by factors such as fatigue and alcohol consumption, prior analysis of the Fatality Analysis Reporting System database suggests that pedestrian fatalities increase as light decreases, after controlling for other factors. This study applies graphical cross-tabulation, a novel visual assessment approach, to explore the relationships among collision variables. The results reveal that twilight and the first hour of darkness typically see the greatest frequency of fatal pedestrian collisions. These hours are not necessarily the riskiest on a per-mile-travelled basis, however, because pedestrian volumes are often still high. Additional analysis is needed to quantify the extent to which pedestrian exposure (walking/crossing activity) in these time periods plays a role in pedestrian crash involvement. Weekly patterns of fatal pedestrian collisions vary by time of year due to seasonal changes in sunset time. In December, collisions are concentrated around twilight and the first hour of darkness throughout the week, while in June they are most heavily concentrated around twilight and the first hours of darkness on Friday and Saturday. Friday and Saturday nights in June may therefore be the most dangerous times for pedestrians. Knowing when pedestrian risk is highest is critically important for formulating effective mitigation strategies and for efficiently investing safety funds. This applied visual approach is a helpful tool for researchers intending to communicate with policy-makers and to identify relationships that can then be tested with more sophisticated statistical tools.
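For readers unfamiliar with graphical cross-tabulation, the following is a minimal sketch of the idea in Python, using hypothetical collision records rather than the FARS data analysed in the study: counts are cross-tabulated by weekday and hour, then rendered as a shaded grid so that temporal clusters stand out visually.

```python
# A minimal sketch of graphical cross-tabulation for collision timing, using
# hypothetical records; this is not the authors' actual FARS processing pipeline.
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
# Hypothetical records: one row per fatal pedestrian collision.
records = pd.DataFrame({
    "hour": rng.integers(0, 24, size=500),     # hour of day (0-23)
    "weekday": rng.integers(0, 7, size=500),   # 0 = Monday ... 6 = Sunday
})

# Cross-tabulate counts by weekday and hour.
table = pd.crosstab(records["weekday"], records["hour"])

# Render the cross-tabulation graphically: darker cells = more collisions.
fig, ax = plt.subplots(figsize=(10, 3))
im = ax.imshow(table.to_numpy(), aspect="auto", cmap="Reds")
ax.set_xlabel("Hour of day")
ax.set_ylabel("Day of week (0 = Mon)")
fig.colorbar(im, ax=ax, label="Fatal collisions")
plt.show()
```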
Abstract:
Currently in Australia, there are no decision support tools for traffic and transport engineers to assess the crash risk potential of proposed road projects at the design level. A selection of equivalent tools already exists for traffic performance assessment, e.g., aaSIDRA or VISSIM. The Urban Crash Risk Assessment Tool (UCRAT) was developed for VicRoads by ARRB Group to promote methodical identification of future crash risks arising from proposed road infrastructure, where safety cannot be evaluated from past crash history. The tool will assist practitioners with key design decisions to arrive at the safest and most cost-optimal design options. This paper details the development and application of the UCRAT software. This professional tool may be used to calculate an expected mean number of casualty crashes for an intersection, a road link, or a defined road network consisting of a number of such elements. The mean number of crashes provides a measure of risk associated with the proposed functional design and allows evaluation of alternative options. The tool is based on historical data for existing road infrastructure in metropolitan Melbourne and takes into account the influence of key design features, traffic volumes, road function, and the speed environment. Crash prediction modelling and risk assessment approaches were combined to develop its unique algorithms. The tool has application in projects such as road access proposals associated with land use developments, public transport integration projects, and new road corridor upgrade proposals.
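UCRAT's algorithms themselves are not published in this paper, but the general approach of summing element-level crash prediction models over a network can be sketched as follows; the safety-performance-function form and all coefficients below are hypothetical illustrations, not VicRoads/ARRB values.

```python
# A generic illustration of how a crash-risk tool of this kind can combine
# element-level crash prediction models; the model form and coefficients are
# hypothetical, not UCRAT's actual algorithms.
import math

def expected_casualty_crashes(aadt: float, beta0: float, beta1: float,
                              design_modifier: float = 1.0) -> float:
    """Generic safety-performance-function form: exp(b0) * AADT^b1 * modifier."""
    return math.exp(beta0) * aadt ** beta1 * design_modifier

# Hypothetical network: two links and one intersection, each with its own model.
network = [
    {"aadt": 18_000, "beta0": -8.2, "beta1": 0.85, "design_modifier": 1.10},
    {"aadt": 12_500, "beta0": -8.2, "beta1": 0.85, "design_modifier": 0.95},
    {"aadt": 25_000, "beta0": -7.5, "beta1": 0.70, "design_modifier": 1.00},
]

total = sum(expected_casualty_crashes(**e) for e in network)
print(f"Expected mean casualty crashes per year: {total:.2f}")
```

Alternative design options would be compared by re-running the same summation with their respective design modifiers.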
Abstract:
A total of 214 rainwater samples from 82 tanks were collected in urban Southeast Queensland (SEQ), Australia, and analysed for zoonotic bacterial and protozoan pathogens using real-time binary PCR and quantitative PCR (qPCR). Quantitative Microbial Risk Assessment (QMRA) was used to quantify the risk of infection associated with exposure to potential pathogens from potable and non-potable uses of roof-harvested rainwater. Of the 214 samples tested, 10.7%, 9.8%, 5.6%, and 0.4% were positive for the Salmonella invA, Giardia lamblia β-giardin, Legionella pneumophila mip, and Campylobacter jejuni mapA genes, respectively. Cryptosporidium parvum could not be detected. The estimated numbers of viable Salmonella spp., G. lamblia β-giardin, and L. pneumophila ranged from 1.6 × 10¹ to 9.5 × 10¹ cells, 1.4 × 10⁻¹ to 9.0 × 10⁻¹ cysts, and 1.5 × 10¹ to 4.3 × 10¹ organisms per 1000 ml of water, respectively. Six risk scenarios were considered for exposure to Salmonella spp., G. lamblia, and L. pneumophila. For Salmonella spp. and G. lamblia, these scenarios were: (1) liquid ingestion due to drinking of rainwater on a daily basis, (2) accidental liquid ingestion due to garden hosing twice a week, (3) aerosol ingestion due to showering on a daily basis, and (4) aerosol ingestion due to hosing twice a week. For L. pneumophila, these scenarios were: (5) aerosol inhalation due to showering on a daily basis, and (6) aerosol inhalation due to hosing twice a week. The risk of infection from Salmonella spp., G. lamblia, and L. pneumophila associated with the use of rainwater for showering and garden hosing was calculated to be well below the threshold of one extra infection per 10,000 persons per year in urban SEQ. However, the risk of infection from ingesting Salmonella spp. and G. lamblia via drinking exceeds this threshold, indicating that if undisinfected rainwater were ingested by drinking, cases of salmonellosis and giardiasis would be expected to range from 5.0 × 10⁰ to 2.8 × 10¹ and from 1.0 × 10¹ to 6.4 × 10¹ per 10,000 persons per year, respectively. Since this health risk seems higher than that expected from the reported incidences of gastroenteritis, the assumptions used to estimate these infection risks are critically examined. Nonetheless, it would seem prudent to disinfect rainwater for potable use.
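As a rough illustration of the QMRA arithmetic, the sketch below applies the single-hit exponential dose-response model, P = 1 - exp(-r·dose), and converts a per-event infection risk into an annual risk over repeated exposures; the dose-response parameter r, concentration, and ingestion volume are illustrative assumptions, not the values used in this study.

```python
# A minimal QMRA sketch, assuming the single-hit exponential dose-response model;
# r and the exposure values below are illustrative placeholders.
import math

def infection_probability(dose: float, r: float) -> float:
    """Single-hit exponential dose-response model: P = 1 - exp(-r * dose)."""
    return 1.0 - math.exp(-r * dose)

def annual_risk(per_event_risk: float, events_per_year: int) -> float:
    """Risk of at least one infection over repeated independent exposures."""
    return 1.0 - (1.0 - per_event_risk) ** events_per_year

# Hypothetical scenario: daily drinking of 1 L of rainwater containing
# 20 Salmonella cells per litre, with an assumed r of 0.00752.
dose_per_day = 20.0 * 1.0  # cells ingested per day
p_daily = infection_probability(dose_per_day, r=0.00752)
p_annual = annual_risk(p_daily, events_per_year=365)

print(f"Per-event risk: {p_daily:.2e}, annual risk: {p_annual:.2e}")
# Compare against the benchmark of 1e-4 infections per person per year.
print("Exceeds 1e-4 benchmark:", p_annual > 1e-4)
```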
Abstract:
This study used the Australian Environmental Health Risk Assessment Framework to assess the human health risk of dioxin exposure through foods for local residents in two wards of Bien Hoa City, Vietnam. These wards are known dioxin hot-spots, and a range of stakeholders from central government to local levels were involved in the process. Publications on dioxin characteristics and toxicity were reviewed, and dioxin concentrations in local soil, mud, food, milk, and blood samples were used as data for this risk assessment. A food frequency survey of 400 randomly selected households in these wards was conducted to provide data for exposure assessment. Results showed that local residents who had consumed locally cultivated foods, especially freshwater fish and bottom-feeding fish, free-ranging chicken, duck, and beef, were at very high risk, with their daily dioxin intake far exceeding the tolerable daily intake recommended by the WHO. Based on the results of this assessment, a multifaceted risk management program was developed; it has been recognized as the first public health program implemented in Vietnam to reduce the risks of dioxin exposure at dioxin hot-spots.
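The core exposure-assessment arithmetic can be sketched as below: daily intake is the sum of concentration times consumption over food items, divided by body weight, and compared to the WHO tolerable daily intake range of 1-4 pg TEQ/kg bw/day. The food concentrations, intake rates, and body weight here are hypothetical, not the study's measured values.

```python
# A simplified sketch of the exposure-assessment arithmetic with hypothetical
# food concentrations and intake rates; the WHO tolerable daily intake for
# dioxins (1-4 pg TEQ/kg body weight/day) is used as the benchmark.
foods = {
    # food item: (dioxin concentration in pg TEQ/g, daily consumption in g)
    "bottom-feeding fish": (2.5, 60.0),
    "free-ranging chicken": (1.2, 40.0),
    "beef": (0.8, 30.0),
}
body_weight_kg = 55.0

daily_intake = sum(conc * grams for conc, grams in foods.values()) / body_weight_kg
print(f"Estimated intake: {daily_intake:.1f} pg TEQ/kg bw/day")
print("Exceeds WHO TDI (1-4 pg TEQ/kg bw/day):", daily_intake > 4.0)
```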
Abstract:
Modern statistical models and computational methods can now incorporate uncertainty of the parameters used in Quantitative Microbial Risk Assessments (QMRA). Many QMRAs use Monte Carlo methods, but work from fixed estimates for means, variances and other parameters. We illustrate the ease of estimating all parameters contemporaneously with the risk assessment, incorporating all the parameter uncertainty arising from the experiments from which these parameters are estimated. A Bayesian approach is adopted, using Markov Chain Monte Carlo Gibbs sampling (MCMC) via the freely available software, WinBUGS. The method and its ease of implementation are illustrated by a case study that involves incorporating three disparate datasets into an MCMC framework. The probabilities of infection when the uncertainty associated with parameter estimation is incorporated into a QMRA are shown to be considerably more variable over various dose ranges than the analogous probabilities obtained when constants from the literature are simply ‘plugged’ in as is done in most QMRAs. Neglecting these sources of uncertainty may lead to erroneous decisions for public health and risk management.
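The study itself uses Gibbs sampling via WinBUGS; the sketch below illustrates the same principle with a simple random-walk Metropolis sampler on hypothetical dose-response trial data, showing how posterior draws of a dose-response parameter, rather than a single plug-in constant, propagate into the risk estimate.

```python
# A simplified stand-in for the paper's WinBUGS/Gibbs approach: a random-walk
# Metropolis sampler (not Gibbs) for the exponential dose-response parameter r,
# fitted to hypothetical trial data, with posterior draws propagated into risk.
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical trial data: dose administered, subjects, number infected.
doses    = np.array([10.0, 100.0, 1000.0])
subjects = np.array([20, 20, 20])
infected = np.array([2, 9, 19])

def log_posterior(log_r: float) -> float:
    """Binomial likelihood under P(d) = 1 - exp(-r d), flat prior on log r."""
    r = np.exp(log_r)
    p = np.clip(1.0 - np.exp(-r * doses), 1e-12, 1 - 1e-12)
    return float(np.sum(infected * np.log(p) + (subjects - infected) * np.log1p(-p)))

# Random-walk Metropolis over log r.
draws, log_r, lp = [], np.log(0.01), log_posterior(np.log(0.01))
for _ in range(20_000):
    prop = log_r + rng.normal(scale=0.3)
    lp_prop = log_posterior(prop)
    if np.log(rng.uniform()) < lp_prop - lp:
        log_r, lp = prop, lp_prop
    draws.append(log_r)
r_post = np.exp(np.array(draws[5_000:]))  # discard burn-in

# Propagate parameter uncertainty: risk distribution at a dose of 50 organisms.
risk = 1.0 - np.exp(-r_post * 50.0)
print(f"Median risk: {np.median(risk):.3f}, "
      f"95% CrI: ({np.percentile(risk, 2.5):.3f}, {np.percentile(risk, 97.5):.3f})")
```

The width of the credible interval, compared with the single number a plug-in analysis would report, is exactly the extra variability the abstract describes.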
Abstract:
Collagen fibrillation within articular cartilage (AC) plays a key role in the progression of joint osteoarthritis (OA); changes in collagen synthesis could therefore serve as an indicator for the assessment of OA. Various staining techniques have been developed and used to examine the transformation of the collagen network under microscopy. However, because collagen and proteoglycan coexist and have the same index of refraction, specific visualization of collagen tissue with conventional methods is difficult. This study aimed to develop an advanced staining technique to distinguish collagen from proteoglycan and to determine its evolution in relation to OA progression using optical and laser scanning confocal microscopy (LSCM). A number of AC samples were obtained from sheep joints, including both healthy joints and abnormal joints with OA grades 1 to 3. The samples were stained using two different trichrome methods and immunohistochemistry (IHC), providing both colourimetric and fluorescent staining. Using optical microscopy and LSCM, the present authors demonstrated that the IHC technique stains collagen only, allowing the collagen network to be separated and directly investigated. Fluorescently stained IHC samples were also subjected to LSCM to obtain three-dimensional images of the collagen fibres. Changes in the collagen fibres were then correlated with the grade of OA in the tissue. This study is the first to successfully utilize the IHC staining technique in conjunction with laser scanning confocal microscopy, providing a valuable tool for assessing changes to articular cartilage in OA.
Abstract:
The tear film plays an important role in preserving the health of the ocular surface and maintaining the optimal refractive power of the cornea. Moreover, dry eye syndrome, one of the most commonly reported eye health problems, is caused by abnormalities in the properties of the tear film. Current clinical tools to assess tear film properties have shown certain limitations. The traditional invasive methods for the assessment of tear film quality, which are used by most clinicians, have been criticized for their lack of reliability and/or repeatability. A range of non-invasive methods of tear assessment have been investigated, but these also present limitations. Hence, no “gold standard” test is currently available to assess tear film integrity, and improving techniques for the assessment of tear film quality is of clinical significance and the main motivation for the work described in this thesis. In this study, tear film surface quality (TFSQ) changes were investigated by means of high-speed videokeratoscopy (HSV). In this technique, a set of concentric rings formed in an illuminated cone or bowl is projected onto the anterior cornea and its reflection from the ocular surface is imaged on a charge-coupled device (CCD). The reflection of the light is produced in the outermost layer of the cornea, the tear film. Hence, when the tear film is smooth, the reflected image presents a well-structured pattern; when the tear film surface presents irregularities, the pattern becomes irregular due to scatter and deviation of the reflected light. The videokeratoscope provides an estimate of the corneal topography associated with each Placido disk image. Topographical estimates, which have been used in the past to quantify tear film changes, may not always be suitable for the evaluation of all the dynamic phases of the tear film. However, the Placido disk image itself, which contains the reflected pattern, may be more appropriate for assessing tear film dynamics. A set of novel routines was purposely developed to quantify changes in the reflected pattern and to extract a time series estimate of TFSQ from the video recording. The routine extracts from each frame of the video recording a maximized area of analysis, within which a metric of TFSQ is calculated. Initially, two metrics, based on Gabor filter and Gaussian gradient-based techniques, were used to quantify the consistency of the pattern's local orientation as a measure of TFSQ. These metrics helped to demonstrate the applicability of HSV to tear film assessment and the influence of contact lens wear on TFSQ. The results suggest that the dynamic-area analysis method of HSV was able to distinguish and quantify the subtle but systematic degradation of tear film surface quality in the inter-blink interval during contact lens wear, and to clearly show a difference between bare-eye and contact lens wearing conditions. Thus, the HSV method appears to be a useful technique for quantitatively investigating the effects of contact lens wear on TFSQ. Subsequently, a larger clinical study was conducted to compare HSV with two other non-invasive techniques, lateral shearing interferometry (LSI) and dynamic wavefront sensing (DWS). Of these non-invasive techniques, HSV appeared to be the most precise method for measuring TFSQ, by virtue of its lower coefficient of variation.
The LSI, in turn, appeared to be the most sensitive method for analyzing tear break-up time (TBUT). The capability of each of the non-invasive methods to discriminate dry eye from normal subjects was also investigated. Receiver operating characteristic (ROC) curves were calculated to assess the ability of each method to predict dry eye syndrome. The LSI technique gave the best results under both natural and suppressed blinking conditions, followed closely by HSV; the DWS did not perform as well as LSI or HSV. The main limitation of the HSV technique identified during this clinical study was its lack of sensitivity in quantifying the build-up/formation phase of the tear film cycle. For that reason, an additional metric based on image transformation and block processing was proposed. In this metric, the area of analysis is transformed from Cartesian to polar coordinates, converting the concentric-ring pattern into an image of quasi-straight lines from which a block statistics value is extracted (see the sketch after this abstract). This metric showed better sensitivity under low pattern disturbance and improved the performance of the ROC curves. Additionally, a theoretical study based on ray-tracing techniques and topographical models of the tear film was undertaken to fully comprehend the HSV measurement and the instrument's potential limitations. Of special interest was the assessment of the instrument's sensitivity to subtle topographic changes. The theoretical simulations helped to provide some understanding of tear film dynamics; for instance, the model extracted for the build-up phase offered insight into the dynamics of this initial phase. Finally, some aspects of the mathematical modeling of TFSQ time series are reported in this thesis. Over the years, different functions have been used to model such time series and to extract the key clinical parameters (i.e., timing). Unfortunately, those techniques do not simultaneously consider the underlying physiological mechanism and the parameter extraction methods, so a set of guidelines is proposed to meet both criteria. Special attention was given to a commonly used fit, the polynomial function, and to considerations for selecting the appropriate model order to ensure that the true derivative of the signal is accurately represented. The work described in this thesis has shown the potential of using high-speed videokeratoscopy to assess tear film surface quality. A set of novel image and signal processing techniques has been proposed to quantify different aspects of tear film assessment, analysis, and modeling. The dynamic-area HSV has shown good performance in a broad range of conditions (i.e., contact lens wear, normal, and dry eye subjects). As a result, this technique could be a useful clinical tool for assessing tear film surface quality in the future.
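A minimal sketch of the polar-transform block-statistics idea is given below, assuming a grayscale Placido-ring image centred on the pattern; the resampling resolution, block size, and choice of statistic are illustrative, not the thesis's exact routine.

```python
# A minimal sketch of the polar-transform block-statistics metric: the ring
# image is resampled onto a (radius, angle) grid so concentric rings become
# quasi-straight lines, then per-block variability along the angular axis is
# averaged. Regular patterns give low values; disrupted patterns give high ones.
import numpy as np

def polar_unwrap(img: np.ndarray, n_radii: int = 128, n_angles: int = 360) -> np.ndarray:
    """Resample the image onto a (radius, angle) grid (nearest-neighbour)."""
    cy, cx = (np.asarray(img.shape) - 1) / 2.0
    radii = np.linspace(0, min(cx, cy), n_radii)
    angles = np.linspace(0, 2 * np.pi, n_angles, endpoint=False)
    rr, aa = np.meshgrid(radii, angles, indexing="ij")
    ys = np.clip(np.round(cy + rr * np.sin(aa)).astype(int), 0, img.shape[0] - 1)
    xs = np.clip(np.round(cx + rr * np.cos(aa)).astype(int), 0, img.shape[1] - 1)
    return img[ys, xs]

def tfsq_block_metric(polar_img: np.ndarray, block: int = 16) -> float:
    """Mean row-wise std along the angular axis within each block: low when
    the unwrapped rings are straight and uniform, higher when disturbed."""
    h, w = polar_img.shape
    vals = []
    for i in range(0, h - block + 1, block):
        for j in range(0, w - block + 1, block):
            b = polar_img[i:i + block, j:j + block]
            vals.append(b.std(axis=1).mean())
    return float(np.mean(vals))

# Usage on a synthetic concentric-ring pattern:
y, x = np.mgrid[0:256, 0:256]
rings = np.sin(0.5 * np.hypot(y - 127.5, x - 127.5))
print(f"TFSQ metric (smooth pattern): {tfsq_block_metric(polar_unwrap(rings)):.3f}")
```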
Abstract:
This paper presents an automated image-based safety assessment method for earthmoving and surface mining activities. The literature review revealed the possible causes of accidents in earthmoving operations, investigated the spatial risk factors of these types of accident, and identified spatial data needs for automated safety assessment based on current safety regulations. Image-based data collection devices and algorithms for safety assessment were then evaluated, and analysis methods and rules for monitoring safety violations were discussed. The experimental results showed that the safety assessment method collected spatial data using stereo vision cameras, applied object identification and tracking algorithms, and finally utilized the identified and tracked object information for safety decision making.
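The rule-checking stage might look like the following sketch, which assumes object identification and tracking have already produced labelled 3D positions (e.g., from the stereo cameras); the 5 m clearance threshold is an illustrative value, not a cited regulation.

```python
# A hedged sketch of proximity-rule checking over tracked objects; the labels,
# positions, and clearance threshold are hypothetical.
from dataclasses import dataclass
from itertools import product
import math

@dataclass
class TrackedObject:
    label: str  # "worker" or "equipment"
    x: float
    y: float
    z: float

def clearance_violations(objects, min_clearance_m: float = 5.0):
    """Flag worker-equipment pairs closer than the minimum clearance."""
    workers = [o for o in objects if o.label == "worker"]
    machines = [o for o in objects if o.label == "equipment"]
    violations = []
    for w, m in product(workers, machines):
        d = math.dist((w.x, w.y, w.z), (m.x, m.y, m.z))
        if d < min_clearance_m:
            violations.append((w, m, d))
    return violations

# One frame of hypothetical tracking output:
frame = [
    TrackedObject("worker", 2.0, 1.0, 0.0),
    TrackedObject("equipment", 4.5, 2.0, 0.0),
]
for w, m, d in clearance_violations(frame):
    print(f"Violation: worker within {d:.1f} m of {m.label}")
```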
Abstract:
Australian construction and building workers are exposed to serious workplace risks - including injury, illness and death - and although there have been improvements in occupational health and safety (OHS) performance over the past 20 years, the injury and fatality rate in the Australian construction industry remains a matter of concern. The concept of safety culture is rapidly being adopted in the industry, including recognition of the critical role that organisational leaders play in overall safety performance. This paper reviews recent research in construction safety leadership and provides some examples and applications relevant to risk reduction in the workforce. By developing safety competency in those who fulfil safety-critical roles, and by clearly articulating the relevant safety management tasks, leaders can positively influence the organisation's safety culture. Finally, promising research on Safety Effectiveness Indicators (SEIs) may offer an industry-friendly means of reducing workplace risks by providing a credible, accurate, and timely measure of safety performance.
Abstract:
A letter in response to an article by David Rojas-Rueda, Audrey de Nazelle, Marko Tainio, and Mark J Nieuwenhuijsen, "The health risks and benefits of cycling in urban environments compared with car use: health impact assessment study", BMJ 2011;343, doi:10.1136/bmj.d4521 (published 4 August 2011). The paper sets out to compare the health benefits of the Bicing scheme (Barcelona's public bicycle share scheme) with possible risks associated with increased bicycle riding. The key variables used by the researchers are physical activity, exposure to air pollution, and road traffic injury. The authors rightly identify that although traffic congestion is often a major motivator behind the establishment of public bicycle share schemes (PBSS), the health benefits may well be the largest single benefit of such schemes. Certainly, PBSS appear to be one of the most effective methods of increasing the number of bicycle trips across a population, providing additional transport options and improving awareness of the possibilities bicycles offer urban transport systems. Overall, the paper is a useful addition to the literature, in that it has attempted to assess the health benefits of a large-scale PBSS and weigh these against potential risks related to cyclists' exposure to air pollution and road traffic injuries. Unfortunately, a fundamentally flawed assumption about the proportion of Bicing trips replacing car journeys invalidates the results of the paper. A future paper with up-to-date data would make a significant contribution to this emerging area within the field of sustainable transport.
Abstract:
Background: Not all cancer patients receive state-of-the-art care, and providing regular feedback to clinicians might reduce this problem. The purpose of this study was to assess the utility of various data sources in providing feedback on the quality of cancer care. Methods: Published clinical practice guidelines were used to obtain a list of processes-of-care of interest to clinicians. These were assigned to one of four data categories according to their availability and the marginal cost of using them for feedback. Results: Only 8 (3%) of 243 processes-of-care could be measured using population-based registry or administrative inpatient data (lowest cost). A further 119 (49%) could be measured using a core clinical registry, which contains information on important prognostic factors (e.g., clinical stage, physiological reserve, hormone-receptor status). Another 88 (36%) required an expanded clinical registry or medical record review, mainly because they concerned long-term management of disease progression (recurrences and metastases). The remaining 28 (11.5%) required patient interview or audio-taping of consultations because they involved information sharing between clinician and patient. Conclusion: The advantages of population-based cancer registries and administrative inpatient data are wide coverage and low cost; the disadvantage is that they currently contain information on only a few processes-of-care. In most jurisdictions, clinical cancer registries, which can be used to report on many more processes-of-care, do not cover smaller hospitals. If we are to provide feedback about all patients, not just those in larger academic hospitals with the most developed data systems, then we need to develop sustainable population-based data systems that capture information on prognostic factors at the time of initial diagnosis and information on management of disease progression.
Abstract:
Aim: To determine whether telephone support using an evidence-based protocol for chronic heart failure (CHF) management will improve patient outcomes and reduce hospital readmission rates in patients without access to hospital-based management programs. Methods: The rationale and protocol for a cluster-design randomised controlled trial (RCT) of a semi-automated telephone intervention for the management of CHF, the Chronic Heart-failure Assistance by Telephone (CHAT) Study, are described. Care is coordinated by trained cardiac nurses located in Heartline, the national call center of the National Heart Foundation of Australia, in partnership with patients' general practitioners (GPs). Conclusions: The CHAT Study model represents a potentially cost-effective and accessible model for the Australian health system in caring for CHF patients in rural and remote areas. The system of care could also be readily adapted for a range of chronic diseases and health systems. Key words: chronic disease management; chronic heart failure; integrated health care systems; nursing care; rural health services; telemedicine; telenursing
Abstract:
Diabetes is an increasingly prevalent disease worldwide, and early management of its complications can prevent morbidity and mortality in this population. Peripheral neuropathy, a significant complication of diabetes, is the major cause of foot ulceration and amputation in diabetes, and delay in attending to complications of the disease contributes to significant medical expenses for diabetic patients and the community. Early structural changes to the neural components of the retina have been demonstrated to occur prior to the clinically visible retinal vascular complications of diabetic retinopathy. Additionally, visual function loss has been shown to exist before the ophthalmoscopic manifestations of vascular damage. The purpose of this thesis was to evaluate the relationship between diabetic peripheral neuropathy and both retinal structure and visual function. The key question was whether diabetic peripheral neuropathy is the potential underlying factor responsible for retinal anatomical change and visual function loss in people with diabetes. This study was conducted on a cohort with type 2 diabetes. Retinal nerve fibre layer (RNFL) thickness was assessed by means of Optical Coherence Tomography (OCT). Visual function was assessed using two different methods, Standard Automated Perimetry (SAP) and flicker perimetry, both performed within the central 30 degrees of fixation. The level of diabetic peripheral neuropathy (DPN) was assessed using two techniques, Quantitative Sensory Testing (QST) and the Neuropathy Disability Score (NDS). These techniques are known to be capable of detecting DPN at very early stages, and NDS has also been shown to be a gold standard for detecting risk of foot ulceration. Findings reported in this thesis showed that RNFL thickness, particularly in the inferior quadrant, has a significant association with severity of DPN when the condition is assessed using NDS. More specifically, it was observed that inferior RNFL thickness can differentiate individuals at higher risk of foot ulceration from those at lower risk, indicating that RNFL thickness can predict late-stage DPN. Investigating the association between RNFL thickness and QST did not show any meaningful interaction, indicating that RNFL thickness was not as predictive of neuropathy status in this cohort as NDS. In both of these studies, control participants did not have different results from the type 2 cohort without DPN, suggesting that RNFL thickness is not a marker for diagnosing DPN at early stages. The latter finding also indicated that diabetes per se is unlikely to affect RNFL thickness. Visual function as measured by SAP and flicker perimetry was found to be associated with severity of peripheral neuropathy as measured by NDS. These findings were also capable of differentiating individuals at higher risk of foot ulceration; however, visual function also proved not to be a marker for early diagnosis of DPN. Neither SAP nor flicker sensitivity had meaningful associations with DPN when neuropathy status was measured using QST. Importantly, diabetic retinopathy did not explain any of the findings in these experiments. The work described here is valuable, as no other research to date has investigated the association between diabetic peripheral neuropathy and either retinal structure or visual function.
Abstract:
Background: Bioimpedance techniques provide a reliable method of assessing unilateral lymphedema in a clinical setting. Bioimpedance devices are traditionally used to assess body composition at a current frequency of 50 kHz. However, these devices are not directly transferable to the assessment of lymphedema, as the sensitivity of measuring the impedance of extracellular fluid is frequency dependent. It has previously been shown that the best frequency for detecting extracellular fluid is 0 kHz (DC). However, measurement at this frequency is not possible in practice due to the high skin impedance at DC, and an estimate is usually determined from low-frequency measurements. This study investigated the efficacy of various low-frequency ranges for the detection of lymphedema. Methods and Results: Limb impedance was measured at 256 frequencies between 3 kHz and 1000 kHz for control, arm lymphedema, and leg lymphedema populations. Limb impedance was measured using the ImpediMed SFB7 and ImpediMed L-Dex® U400 with equipotential electrode placement on the wrists and ankles. The contralateral limb impedance ratio for arms and legs was used to calculate a lymphedema index (L-Dex) at each measurement frequency. The standard deviation of the limb impedance ratio in a healthy control population has been shown to increase with frequency for both the arm and leg. Box-and-whisker plots of the spread of the control and lymphedema populations show good differentiation between the arm and leg L-Dex measured for lymphedema subjects and that measured for control subjects up to a frequency of about 30 kHz. Conclusions: It can be concluded that impedance measurements above a frequency of 30 kHz have decreased sensitivity to extracellular fluid and are not reliable for early detection of lymphedema.
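The contralateral-ratio computation can be sketched as follows; the impedance readings are hypothetical, and the standardization is a generic z-score against control statistics, not ImpediMed's proprietary L-Dex formula.

```python
# A minimal sketch of the contralateral-ratio approach for single-frequency
# impedance readings; the scaling is a generic z-score, not the L-Dex formula.
def impedance_ratio(unaffected_ohms: float, at_risk_ohms: float) -> float:
    """Extracellular fluid accumulation lowers impedance, so this ratio
    rises as lymphedema develops in the at-risk limb."""
    return unaffected_ohms / at_risk_ohms

def lymphedema_index(ratio: float, control_mean: float, control_sd: float) -> float:
    """Standardize the ratio against a healthy control population."""
    return (ratio - control_mean) / control_sd

# Hypothetical readings at a low frequency (below ~30 kHz), where sensitivity
# to extracellular fluid is highest:
ratio = impedance_ratio(unaffected_ohms=340.0, at_risk_ohms=295.0)
index = lymphedema_index(ratio, control_mean=1.0, control_sd=0.05)
print(f"Ratio: {ratio:.3f}, standardized index: {index:+.1f} SD")
```

The finding that the control-population spread grows with frequency is what makes such an index less reliable above about 30 kHz: the same ratio shift corresponds to fewer standard deviations.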