929 results for Fatal Injuries.
Abstract:
The Queensland Coal Industry Employees Health Scheme was implemented in 1993 to provide health surveillance for all Queensland coal industry workers. The government, mining employers and mining unions agreed that the scheme should operate for seven years. At the expiry of the scheme, an assessment of the contribution of health surveillance to meeting coal industry needs would be an essential part of determining a future health surveillance program. This research project has analysed the data made available between 1993 and 1998. All current coal industry employees have had at least one health assessment. The project examined how the centralised nature of the Health Scheme benefits industry by identifying key health issues and exploring their dimensions on a scale not possible for corporate-based health surveillance programs. There is a body of evidence that indicates that health awareness - on the scale of the individual, the work group and the industry - is not a part of the mining industry culture. There is also growing evidence that there is a need for this culture to change and that some change is in progress. One element of this changing culture is a growth in the interest of the individual and the community in information on health status and benchmarks that are reasonably attainable. This interest opens the way for health education which contains personal, community and occupational elements. An important element of such education is the data on mine site health status. This project examined the role of health surveillance in the coal mining industry as a tool for generating the necessary information to promote an interest in health awareness. The Health Scheme Database provides the material for the bulk of the analysis of this project. After a preliminary scan of the data set, more detailed analysis was undertaken on key health and related safety issues that include respiratory disorders, hearing loss and high blood pressure.
The data set facilitates control for confounding factors such as age and smoking status. Mines can be benchmarked to identify those mines with effective health management and those with particular challenges. While the study has confirmed the very low prevalence of restrictive airway disease such as pneumoconiosis, it has demonstrated a need to examine in detail the emergence of obstructive airway disease such as bronchitis and emphysema, which may be a consequence of the increasing use of high dust longwall technology. The power of the Health Database's electronic data management is demonstrated by linking the health data to other data sets such as injury data that is collected by the Department of Mines and Energy. The analysis examines serious strain-sprain injuries and has identified a marked difference between the underground and open cut sectors of the industry. The analysis also considers productivity and OHS data to examine the extent to which there is correlation between any pairs of these and previously analysed health parameters. This project has demonstrated that the current structure of the Coal Industry Employees Health Scheme has largely delivered to mines an effective health screening process. At the same time, the centralised nature of data collection and analysis has provided to the mines, the unions and the government substantial statistical cross-sectional data upon which strategies to more effectively manage health and related safety issues can be based.
Abstract:
Background Wandering represents a major problem in the management of patients with Alzheimer’s disease (AD). In this study we examined the utility of the Algase Wandering Scale (AWS), a newly developed psychometric instrument that asks caregivers to assess the likelihood of wandering behavior. Methods The AWS was administered to the caregivers of 40 AD patients, and total and subscale scores were examined in relation to measures of mental and functional status, depressive symptoms and medication usage. Results AWS scores were comparable to, though slightly lower than, previously published normative values. Higher scores were associated with more severe dementia. The Negative Outcome subscale showed a significant increase in reported falls or injuries in association with anti-depressant use. Conclusions These data provide some construct validation for the AWS as a potentially useful scale to assess wandering behaviors in AD.
Abstract:
Regional safety program managers face a daunting challenge in the attempt to reduce deaths, injuries, and economic losses that result from motor vehicle crashes. This difficult mission is complicated by the combination of a large perceived need, small budget, and uncertainty about how effective each proposed countermeasure would be if implemented. A manager can turn to the research record for insight, but the measured effect of a single countermeasure often varies widely from study to study and across jurisdictions. The challenge of converting widespread and conflicting research results into a regionally meaningful conclusion can be addressed by incorporating "subjective" information into a Bayesian analysis framework. Engineering evaluations of crashes provide the subjective input on countermeasure effectiveness in the proposed Bayesian analysis framework. Empirical Bayes approaches are widely used in before-and-after studies and "hot-spot" identification; however, in these cases, the prior information was typically obtained from the data (empirically), not subjective sources. The power and advantages of Bayesian methods for assessing countermeasure effectiveness are presented. Also, an engineering evaluation approach developed at the Georgia Institute of Technology is described. Results are presented from an experiment conducted to assess the repeatability and objectivity of subjective engineering evaluations. In particular, the focus is on the importance, methodology, and feasibility of the subjective engineering evaluation for assessing countermeasures.
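The core idea of folding a subjective engineering evaluation into a Bayesian analysis can be illustrated with a minimal sketch. The numbers, prior weights, and conjugate gamma-Poisson form below are hypothetical illustrations, not the framework actually developed at Georgia Tech:

```python
# Illustrative sketch (hypothetical values): a subjective engineering
# prior on a site's mean annual crash frequency, updated with observed
# yearly crash counts via a conjugate gamma-Poisson Bayesian update.

def posterior_gamma(prior_shape, prior_rate, crash_counts):
    """Update a Gamma(shape, rate) prior on the mean annual crash
    frequency with a list of observed yearly Poisson counts."""
    shape = prior_shape + sum(crash_counts)
    rate = prior_rate + len(crash_counts)
    return shape, rate

# Subjective prior from an engineering evaluation: mean of 6 crashes/yr,
# held with the weight of 2 "pseudo-years" of data.
prior_shape, prior_rate = 12.0, 2.0      # prior mean = shape/rate = 6.0

# Observed yearly counts after a countermeasure is installed.
after = [3, 4, 2]

shape, rate = posterior_gamma(prior_shape, prior_rate, after)
print(f"posterior mean crash frequency: {shape / rate:.1f}/yr")
```

The prior's "pseudo-years" weight is the lever that lets an analyst decide how strongly the subjective evaluation should pull against the observed data.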
Abstract:
Persistent use of safety restraints prevents deaths and reduces the severity and number of injuries resulting from motor vehicle crashes. However, safety-restraint use rates in the United States have been below those of other nations with safety-restraint enforcement laws. With a better understanding of the relationship between safety-restraint law enforcement and safety-restraint use, programs can be implemented to decrease the number of deaths and injuries resulting from motor vehicle crashes. Does safety-restraint use increase as enforcement increases? Do motorists increase their safety-restraint use in response to the general presence of law enforcement or to targeted law enforcement efforts? Does a relationship between enforcement and restraint use exist at the countywide level? A logistic regression model was estimated by using county-level safety-restraint use data and traffic citation statistics collected in 13 counties within the state of Florida in 1997. The model results suggest that safety-restraint use is positively correlated with enforcement intensity, is negatively correlated with safety-restraint enforcement coverage (in lane-miles of enforcement coverage), and is greater in urban than rural areas. The quantification of these relationships may assist Florida and other law enforcement agencies in raising safety-restraint use rates by allocating limited funds more efficiently, either by allocating additional time for enforcement activities of the existing force or by increasing enforcement staff. In addition, the research supports a commonsense notion that enforcement activities do result in behavioral response.
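The kind of logistic regression described above can be sketched in a few lines. This is a toy reconstruction on synthetic data with a single enforcement-intensity covariate, not the paper's actual model, data, or coefficients:

```python
# Hedged sketch (synthetic data): logistic regression of restraint use
# on an enforcement-intensity covariate, fitted by plain gradient
# ascent on the Bernoulli log-likelihood, standard library only.
import math
import random

def fit_logistic(xs, ys, lr=0.1, iters=5000):
    """Return (intercept, slope) for a one-covariate logistic model."""
    b0 = b1 = 0.0
    n = len(xs)
    for _ in range(iters):
        g0 = g1 = 0.0
        for x, y in zip(xs, ys):
            p = 1.0 / (1.0 + math.exp(-(b0 + b1 * x)))
            g0 += y - p          # gradient w.r.t. intercept
            g1 += (y - p) * x    # gradient w.r.t. slope
        b0 += lr * g0 / n
        b1 += lr * g1 / n
    return b0, b1

# Synthetic counties: higher citation rate -> higher belt-use odds.
random.seed(0)
xs = [random.uniform(0, 3) for _ in range(400)]   # citations per 1,000 drivers
ys = [1 if random.random() < 1 / (1 + math.exp(-(-1.0 + 1.2 * x))) else 0
      for x in xs]

b0, b1 = fit_logistic(xs, ys)
print(f"estimated slope: {b1:.2f} (positive => enforcement raises use)")
```

A positive fitted slope is the synthetic analogue of the paper's finding that restraint use rises with enforcement intensity.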
Abstract:
Staphylococcus aureus is a common pathogen that causes a variety of infections including soft tissue infections, impetigo, septicemia, toxic shock and scalded skin syndrome. Traditionally, Methicillin-Resistant Staphylococcus aureus (MRSA) was considered a Hospital-Acquired (HA) infection. It is now recognised that the frequency of infections with MRSA is increasing in the community, and that these infections are not originating from hospital environments. A 2007 report by the Centers for Disease Control and Prevention (CDC) stated that Staphylococcus aureus is the most important cause of serious and fatal infections in the USA. Community-Acquired MRSA (CA-MRSA) strains are genetically diverse and distinct, meaning they are able to be identified and tracked by way of genotyping. Genotyping of MRSA using Single Nucleotide Polymorphisms (SNPs) is a rapid and robust method for monitoring MRSA, specifically ST93 (Queensland clone) dissemination in the community. It has been shown that a large proportion of CA-MRSA infections in Queensland and New South Wales are caused by ST93. The rationale for this project was that SNP analysis of MLST genes is a rapid and cost-effective method for genotyping and monitoring MRSA dissemination in the community. In this study, 16 different sequence types (STs) were identified, with 41% of isolates identified as ST93, making it the predominant clone. Males and females were infected equally, with an average patient age of 45 years. Phenotypically, all of the ST93 isolates had an identical antimicrobial resistance pattern. They were resistant to the β-lactams - Penicillin, Flu(di)cloxacillin and Cephalothin - but sensitive to all other antibiotics tested. Virulence factors play an important role in allowing S. aureus to cause disease by way of colonisation, replication and damage to the host. One virulence factor of particular interest is the toxin Panton-Valentine leukocidin (PVL), which is composed of two separate proteins encoded by two adjacent genes.
PVL positive CA-MRSA strains are shown to cause recurrent, chronic or severe skin and soft tissue infections. As a result, it is important that PVL positive CA-MRSA is genotyped and tracked, especially now that CA-MRSA infections are more prevalent than HA-MRSA infections and are deemed endemic in Australia. In this study, 98% of all isolates tested positive for the PVL toxin gene. This study showed that PVL is present in many different community-based STs, not just ST93, all of which were PVL positive. With this toxin becoming entrenched in CA-MRSA, genotyping would provide more accurate data and a way of tracking its dissemination. The PVL gene can be sub-typed using an allele-specific Real-Time PCR (RT-PCR) followed by high-resolution melt analysis. This allows the identification of PVL subtypes within the CA-MRSA population and the tracking of these clones in the community.
Abstract:
The treatment of challenging fractures and large osseous defects presents a formidable problem for orthopaedic surgeons. Tissue engineering/regenerative medicine approaches seek to solve this problem by delivering osteogenic signals within scaffolding biomaterials. In this study, we introduce a hybrid growth factor delivery system that consists of an electrospun nanofiber mesh tube for guiding bone regeneration combined with peptide-modified alginate hydrogel injected inside the tube for sustained growth factor release. We tested the ability of this system to deliver recombinant bone morphogenetic protein-2 (rhBMP-2) for the repair of critically-sized segmental bone defects in a rat model. Longitudinal μCT analysis and torsional testing provided quantitative assessment of bone regeneration. Our results indicate that the hybrid delivery system resulted in consistent bony bridging of the challenging bone defects. However, in the absence of rhBMP-2, the use of nanofiber mesh tube and alginate did not result in substantial bone formation. Perforations in the nanofiber mesh accelerated the rhBMP-2 mediated bone repair, and resulted in functional restoration of the regenerated bone. μCT-based angiography indicated that perforations did not significantly affect the revascularization of defects, suggesting that some other interaction with the tissue surrounding the defect, such as improved infiltration of osteoprogenitor cells, contributed to the observed differences in repair. Overall, our results indicate that the hybrid alginate/nanofiber mesh system is a promising growth factor delivery strategy for the repair of challenging bone injuries.
Abstract:
Introduction: Floods are the most common hazard to cause disasters and have led to extensive morbidity and mortality throughout the world. The impact of floods on the human community is related directly to the location and topography of the area, as well as human demographics and characteristics of the built environment. Objectives: The aim of this study is to identify the health impacts of disasters and the underlying causes of health impacts associated with floods. A conceptual framework is developed that may assist with the development of a rational and comprehensive approach to prevention, mitigation, and management. Methods: This study involved an extensive literature review that located >500 references, which were analyzed to identify common themes, findings, and expert views. The findings then were distilled into common themes. Results: The health impacts of floods are wide ranging, and depend on a number of factors. However, the health impacts of a particular flood are specific to the particular context. The immediate health impacts of floods include drowning, injuries, hypothermia, and animal bites. Health risks also are associated with the evacuation of patients, loss of health workers, and loss of health infrastructure including essential drugs and supplies. In the medium-term, infected wounds, complications of injury, poisoning, poor mental health, communicable diseases, and starvation are indirect effects of flooding. In the long-term, chronic disease, disability, poor mental health, and poverty-related diseases including malnutrition are the potential legacy. Conclusions: This article proposes a structured approach to the classification of the health impacts of floods and a conceptual framework that demonstrates the relationships between floods and the direct and indirect health consequences.
Abstract:
Many studies focused on the development of crash prediction models have resulted in aggregate crash prediction models to quantify the safety effects of geometric, traffic, and environmental factors on the expected number of total, fatal, injury, and/or property damage crashes at specific locations. Crash prediction models focused on predicting different crash types, however, have rarely been developed. Crash type models are useful for at least three reasons. The first is motivated by the need to identify sites that are high risk with respect to specific crash types but that may not be revealed through crash totals. Second, countermeasures are likely to affect only a subset of all crashes—usually called target crashes—and so examination of crash types will lead to improved ability to identify effective countermeasures. Finally, there is a priori reason to believe that different crash types (e.g., rear-end, angle, etc.) are associated with road geometry, the environment, and traffic variables in different ways and as a result justify the estimation of individual predictive models. The objectives of this paper are to (1) demonstrate that different crash types are associated with predictor variables in different ways (as theorized) and (2) show that estimation of crash type models may lead to greater insights regarding crash occurrence and countermeasure effectiveness. This paper first describes the estimation results of crash prediction models for angle, head-on, rear-end, sideswipe (same direction and opposite direction), and pedestrian-involved crash types. Serving as a basis for comparison, a crash prediction model is estimated for total crashes. Based on 837 motor vehicle crashes collected on two-lane rural intersections in the state of Georgia, six prediction models are estimated, resulting in two Poisson (P) models and four negative binomial (NB) models.
The analysis reveals that factors such as the annual average daily traffic, the presence of turning lanes, and the number of driveways have a positive association with each type of crash, whereas median widths and the presence of lighting are negatively associated. For the best-fitting models, covariates are related to crash types in different ways, suggesting that crash types are associated with different precrash conditions and that modeling total crash frequency may not be helpful for identifying specific countermeasures.
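The choice between Poisson and negative binomial models above hinges on overdispersion: a Poisson model forces the variance to equal the mean, so when crash counts vary more than that, an NB model is preferred. A minimal check can be sketched as follows; the site counts are hypothetical, not the Georgia data:

```python
# Illustrative sketch (hypothetical counts): testing whether crash
# counts are overdispersed, which is what motivates a negative
# binomial (NB) model over a Poisson model.

def dispersion(counts):
    """Sample variance-to-mean ratio: ~1 for Poisson data,
    substantially >1 indicates overdispersion."""
    n = len(counts)
    mean = sum(counts) / n
    var = sum((c - mean) ** 2 for c in counts) / (n - 1)
    return var / mean

rear_end = [0, 1, 0, 3, 7, 2, 0, 9, 1, 0, 4, 12]   # hypothetical per-site counts
print(f"variance/mean = {dispersion(rear_end):.1f}")
# A ratio well above 1 violates the Poisson equal-variance assumption,
# so an NB model with an estimated dispersion parameter fits better.
```

In practice the same decision is made formally, e.g. by testing the NB dispersion parameter against zero, but the variance-to-mean ratio conveys the intuition.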
Abstract:
National estimates of the prevalence of child abuse-related injuries are obtained from a variety of sectors including welfare, justice, and health, resulting in inconsistent estimates across sectors. The International Classification of Diseases (ICD) is used as the international standard for categorising health data and aggregating data for statistical purposes, though there has been limited validation of the quality, completeness or concordance of these data with other sectors. This research study examined the quality of documentation and coding of child abuse recorded in hospital records in Queensland and the concordance of these data with child welfare records. A retrospective medical record review was used to examine the clinical documentation of over 1000 hospitalised injured children from 20 hospitals in Queensland. A data linkage methodology was used to link these records with records in the child welfare database. Cases were sampled from three sub-groups according to the presence of target ICD codes: definite abuse, possible abuse, and unintentional injury. Less than 2% of cases coded as being unintentional were recoded after review as being possible abuse, and only 5% of cases coded as possible abuse were reclassified as unintentional, though there was greater variation in the classification of cases as definite abuse compared to possible abuse. Concordance of health data with child welfare data varied across patient subgroups. This study will inform the development of strategies to improve the quality, consistency and concordance of information between health and welfare agencies to ensure adequate system responses to children at risk of abuse.
Abstract:
Of the numerous factors that play a role in fatal pedestrian collisions, the time of day, day of the week, and time of year can be significant determinants. More than 60% of all pedestrian collisions in 2007 occurred at night, despite the presumed decrease in both pedestrian and automobile exposure during the night. Although this trend is partially explained by factors such as fatigue and alcohol consumption, prior analysis of the Fatality Analysis Reporting System database suggests that pedestrian fatalities increase as light decreases after controlling for other factors. This study applies graphical cross-tabulation, a novel visual assessment approach, to explore the relationships among collision variables. The results reveal that twilight and the first hour of darkness typically observe the greatest frequency of pedestrian fatal collisions. These hours are not necessarily the most risky on a per mile travelled basis, however, because pedestrian volumes are often still high. Additional analysis is needed to quantify the extent to which pedestrian exposure (walking/crossing activity) in these time periods plays a role in pedestrian crash involvement. Weekly patterns of pedestrian fatal collisions vary by time of year due to the seasonal changes in sunset time. In December, collisions are concentrated around twilight and the first hour of darkness throughout the week while, in June, collisions are most heavily concentrated around twilight and the first hours of darkness on Friday and Saturday. Friday and Saturday nights in June may be the most dangerous times for pedestrians. Knowing when pedestrian risk is highest is critically important for formulating effective mitigation strategies and for efficiently investing safety funds. This applied visual approach is a helpful tool for researchers intending to communicate with policy-makers and to identify relationships that can then be tested with more sophisticated statistical tools.
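The graphical cross-tabulation approach described above rests on an ordinary cross-tabulation of collision records by pairs of variables. A minimal sketch on hypothetical records (not the FARS data) looks like this:

```python
# Illustrative sketch (hypothetical records): cross-tabulating fatal
# pedestrian collisions by day of week and light condition, the table
# underlying a graphical cross-tabulation.
from collections import Counter

crashes = [  # (day, light_condition) pairs, hypothetical
    ("Fri", "dark"), ("Sat", "dark"), ("Fri", "twilight"),
    ("Mon", "daylight"), ("Sat", "dark"), ("Fri", "dark"),
]

table = Counter(crashes)
for (day, light), n in sorted(table.items()):
    print(f"{day:3} {light:9} {n}")
```

The visual method then maps such cell counts to colour or size, which is what makes clusters such as "Friday and Saturday around twilight" stand out to policy-makers.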
Abstract:
One of the main causes of above knee or transfemoral amputation (TFA) in the developed world is trauma to the limb. The number of people undergoing TFA due to limb trauma, particularly due to war injuries, has been increasing. Typically, the trauma amputee population, including war-related amputees, is otherwise healthy and active and desires to return to employment and their usual lifestyle. Consequently there is a growing need to restore long-term mobility and limb function to this population. Traditionally transfemoral amputees are provided with an artificial or prosthetic leg that consists of a fabricated socket, knee joint mechanism and a prosthetic foot. Amputees have reported several problems related to the socket of their prosthetic limb. These include pain in the residual limb, poor socket fit, discomfort and poor mobility. Removing the socket from the prosthetic limb could eliminate or reduce these problems. A solution to this is the direct attachment of the prosthesis to the residual bone (femur) inside the residual limb. This technique has been used on a small population of transfemoral amputees since 1990. A threaded titanium implant is screwed into the shaft of the femur and a second component connects between the implant and the prosthesis. A period of time is required to allow the implant to become fully attached to the bone, called osseointegration (OI), and be able to withstand applied load; then the prosthesis can be attached. The advantages of transfemoral osseointegration (TFOI) over conventional prosthetic sockets include better hip mobility, sitting comfort and prosthetic retention and fewer skin problems on the residual limb. However, due to the length of time required for OI to progress and to complete the rehabilitation exercises, it can take up to twelve months after implant insertion for an amputee to be able to load bear and to walk unaided.
The long rehabilitation time is a significant disadvantage of TFOI and may be impeding the wider adoption of the technique. There is a need for a non-invasive method of assessing the degree of osseointegration between the bone and the implant. If such a method was capable of determining the progression of TFOI and assessing when the implant was able to withstand physiological load, it could reduce the overall rehabilitation time. Vibration analysis has been suggested as a potential technique: it is a non-destructive method of assessing the dynamic properties of a structure. Changes in the physical properties of a structure can be identified from changes in its dynamic properties. Consequently vibration analysis, both experimental and computational, has been used to assess bone fracture healing, prosthetic hip loosening and dental implant OI with varying degrees of success. More recently, experimental vibration analysis has been used in TFOI. However, further work is needed to assess the potential of the technique and fully characterise the femur-implant system. The overall aim of this study was to develop physical and computational models of the TFOI femur-implant system and use these models to investigate the feasibility of vibration analysis to detect the process of OI. Femur-implant physical models were developed and manufactured using synthetic materials to represent four key stages of OI development (identified from a physiological model), simulated using different interface conditions between the implant and femur. Experimental vibration analysis (modal analysis) was then conducted using the physical models. The femur-implant models, representing stage one to stage four of OI development, were excited and the modal parameters obtained over the range 0-5 kHz. The results indicated the technique had limited capability in distinguishing between different interface conditions. The fundamental bending mode did not alter with interfacial changes.
However, higher modes were able to track chronological changes in interface condition by the change in natural frequency, although no one modal parameter could uniquely distinguish between each interface condition. The importance of the model boundary condition (how the model is constrained) was the key finding; variations in the boundary condition altered the modal parameters obtained. Therefore the boundary conditions need to be held constant between tests in order for the detected modal parameter changes to be attributed to interface condition changes. A three-dimensional Finite Element (FE) model of the femur-implant model was then developed and used to explore the sensitivity of the modal parameters to more subtle interfacial and boundary condition changes. The FE model was created using the synthetic femur geometry and an approximation of the implant geometry. The natural frequencies of the FE model were found to match the experimental frequencies within 20%, and the FE and experimental mode shapes were similar. Therefore the FE model was shown to successfully capture the dynamic response of the physical system. As was found with the experimental modal analysis, the fundamental bending mode of the FE model did not alter due to changes in interface elastic modulus. Axial and torsional modes were identified by the FE model that were not detected experimentally; the torsional mode exhibited the largest frequency change due to interfacial changes (103% between the lower and upper limits of the interface modulus range). Therefore the FE model provided additional information on the dynamic response of the system and was complementary to the experimental model. The small changes in natural frequency over a large range of interface region elastic moduli indicated the method may only be able to distinguish between early and late OI progression.
The boundary conditions applied to the FE model influenced the modal parameters to a far greater extent than the interface condition variations. Therefore the FE model, as well as the experimental modal analysis, indicated that the boundary conditions need to be held constant between tests in order for the detected changes in modal parameters to be attributed to interface condition changes alone. The results of this study suggest that in a clinical setting it is unlikely that the in vivo boundary conditions of the amputated femur could be adequately controlled or replicated over time and consequently it is unlikely that any longitudinal change in frequency detected by the modal analysis technique could be attributed exclusively to changes at the femur-implant interface. Therefore further development of the modal analysis technique would require significant consideration of the clinical boundary conditions and investigation of modes other than the bending modes.
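The central finding, that large changes in interface stiffness can produce only small natural-frequency shifts, can be illustrated with a lumped-parameter analogue. This is a toy model with hypothetical values, not the thesis's FE model: the bone-implant interface is treated as a spring in series with the implant stiffness, so once the interface is stiff relative to the rest of the structure, further stiffening barely moves the effective stiffness or the frequency.

```python
# Toy sketch (hypothetical values): a single mass on two springs in
# series, standing in for the femur-implant system with the interface
# as one of the springs.
import math

def natural_freq_hz(mass_kg, k_implant, k_interface):
    """Fundamental frequency of a mass on two springs in series."""
    k_eff = (k_implant * k_interface) / (k_implant + k_interface)
    return math.sqrt(k_eff / mass_kg) / (2 * math.pi)

m, k_imp = 0.5, 5e6                      # hypothetical mass and implant stiffness
for k_int in (1e5, 1e6, 1e7, 1e8):       # early -> late osseointegration
    f = natural_freq_hz(m, k_imp, k_int)
    print(f"k_int = {k_int:.0e} N/m  ->  f = {f:7.1f} Hz")
```

Frequency rises steeply across the early stiffness decades but flattens once the interface stiffness dominates, mirroring why the modal technique could separate early from late OI but not resolve the later stages.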
Abstract:
Fatigue has been recognised as the primary contributing factor in approximately 15% of all fatal road crashes in Australia. To develop effective countermeasures for managing fatigue, this study investigates why drivers continue to drive when sleepy, and driver perceptions and behaviours with regard to countermeasures. Based on responses from 305 Australian drivers, it was identified that the major reasons why these participants continued to drive when sleepy were: wanting to get to their destination; being close to home; and time factors. Participants’ perceptions and use of 18 fatigue countermeasures were investigated. It was found that participants perceived the safest strategies, including stopping and sleeping, swapping drivers and stopping for a quick nap, to be the most effective countermeasures. However, it appeared that their knowledge of safe countermeasures did not translate into their use of these strategies. For example, although the drivers perceived stopping for a quick nap to be an effective countermeasure, they reported more frequent use of less safe methods such as stopping to eat or drink and winding down the window. This finding suggests that, while practitioners should continue educating drivers, they may need a greater focus on motivating drivers to implement safe fatigue countermeasures.