392 results for Injury Prediction.
Abstract:
High-risk adolescents are a population particularly vulnerable to harm from injury due to their increased engagement in risk-taking behaviour. There is a gap in the literature regarding how universal school-based injury prevention programs apply to high-risk adolescents. This study involves a component of the process evaluation of a school-based injury prevention program as it relates to high-risk adolescents (13-14 years)...
Abstract:
Unsafe acts of workers (e.g. misjudgment, inappropriate operation) become the major root causes of construction accidents when they are combined with unsafe working conditions (e.g. working surface conditions, weather) on a construction site. The overarching goal of the research presented in this paper is to explore ways to prevent unsafe acts of workers and reduce the likelihood of construction accidents occurring. The study specifically aims to (1) understand the relationships between human behavior-related and working condition-related risk factors, (2) identify the significant behavior and condition factors and their impacts on accident types (e.g. struck by/against, caught in/between, falling, shock, inhalation/ingestion/absorption, respiratory failure) and injury severity (e.g. fatality, hospitalized, non-hospitalized), and (3) analyze the fundamental accident-injury relationship, i.e. how each accident type contributes to injury severity. The study reviewed 9,358 accidents that occurred in the U.S. construction industry between 2002 and 2011. The large number of accident samples supported reliable statistical analyses. The analysis identified a total of 17 significant correlations between behavior and condition factors and distinguished key risk factors that strongly influenced the determination of accident types and injury severity. The research outcomes will assist safety managers in controlling specific unsafe acts of workers by eliminating the associated unsafe working conditions, and vice versa. They can also prioritize risk factors and pay more attention to controlling them in order to achieve a safer working environment.
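The behavior-condition correlations described above can be illustrated with a chi-square test of independence on a contingency table. The counts below are invented for illustration; they are not drawn from the paper's 9,358-accident dataset.

```python
from scipy.stats import chi2_contingency

# Hypothetical 2x2 contingency table: rows = unsafe act present/absent,
# columns = unsafe working condition present/absent (illustrative counts only).
table = [[120, 30],
         [40, 110]]

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2={chi2:.2f}, p={p:.4f}, dof={dof}")
# A small p-value suggests the behavior factor and the condition factor
# co-occur more often than chance, i.e. a significant correlation.
```

Repeating such tests over all factor pairs (with an appropriate multiple-testing correction) is one way a set of "significant correlations" like the 17 reported here could be screened.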
Abstract:
Objectives: To compare measures of fat-free mass (FFM) by three different bioelectrical impedance analysis (BIA) devices and to assess the agreement between three different equations validated in older adult and/or overweight populations. Design: Cross-sectional study. Setting: Orthopaedics ward of a Brisbane public hospital, Australia. Participants: Twenty-two overweight, older Australians (72 yr ± 6.4, BMI 34 kg/m2 ± 5.5) with knee osteoarthritis. Measurements: Body composition was measured using three BIA devices: Tanita 300-GS (foot-to-foot), Impedimed DF50 (hand-to-foot) and Impedimed SFB7 (bioelectrical impedance spectroscopy (BIS)). Three equations for predicting FFM were selected based on their applicability to an older adult and/or overweight population. Impedance values were extracted from the hand-to-foot BIA device and included in the equations to estimate FFM. Results: The mean FFM measured by BIS (57.6 kg ± 9.1) differed significantly from those measured by foot-to-foot (54.6 kg ± 8.7) and hand-to-foot BIA (53.2 kg ± 10.5) (P < 0.001). The mean ± SD FFM predicted by the three equations using raw data from hand-to-foot BIA were 54.7 kg ± 8.9, 54.7 kg ± 7.9 and 52.9 kg ± 11.05 respectively. These results did not differ from the FFM predicted by the hand-to-foot device (F = 2.66, P = 0.118). Conclusions: Our results suggest that foot-to-foot and hand-to-foot BIA may be used interchangeably in overweight older adults at the group level, but the large limits of agreement may lead to unacceptable error in individuals. There was no difference between the three prediction equations; however, these results should be confirmed in a larger sample and against a reference standard.
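The "limits of agreement" mentioned in the conclusion are typically Bland-Altman limits: the mean difference between paired device readings ± 1.96 standard deviations. A minimal sketch, using invented paired FFM readings rather than the study's data:

```python
import statistics

def limits_of_agreement(a, b):
    """Bland-Altman bias and 95% limits of agreement for paired readings."""
    diffs = [x - y for x, y in zip(a, b)]
    bias = statistics.mean(diffs)
    sd = statistics.stdev(diffs)  # sample SD of the paired differences
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

# Hypothetical paired FFM estimates (kg) from two BIA devices.
foot_to_foot = [54.0, 56.5, 52.0, 58.0]
hand_to_foot = [52.5, 55.0, 51.0, 56.0]
bias, (lower, upper) = limits_of_agreement(foot_to_foot, hand_to_foot)
```

A small bias with wide limits is exactly the pattern the abstract describes: acceptable agreement at the group level but potentially large error for any individual.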
Abstract:
• Road crashes as a cause of disability
• Disability in the study of road safety
• Thai spinal injury study
  – Contextual information: beliefs and community
  – Transport system and hidden safety costs
  – Cambodia experience
  – Pakistan fatalism study
• Feedback to policies and programs
Abstract:
Exponential growth of genomic data in the last two decades has made manual analyses impractical for all but trial studies. As genomic analyses have become more sophisticated and moved toward comparisons across large datasets, computational approaches have become essential. One of the most important biological questions is to understand the mechanisms underlying gene regulation. Genetic regulation is commonly investigated and modelled through the use of transcriptional regulatory network (TRN) structures. These model the regulatory interactions between two key components: transcription factors (TFs) and the target genes (TGs) they regulate. Transcriptional regulatory networks have proven to be invaluable scientific tools in Bioinformatics. When used in conjunction with comparative genomics, they have provided substantial insights into the evolution of regulatory interactions. Current approaches to regulatory network inference, however, omit two additional key entities: promoters and transcription factor binding sites (TFBSs). In this study, we attempted to explore the relationships among these regulatory components in bacteria. Our primary goal was to identify relationships that can assist in reducing the high false positive rates associated with transcription factor binding site predictions and thereby enhance the reliability of the inferred transcription regulatory networks. In our preliminary exploration of relationships between the key regulatory components in Escherichia coli transcription, we discovered a number of potentially useful features. The combination of location score and sequence dissimilarity scores increased de novo binding site prediction accuracy by 13.6%. Another important observation concerned the relationship between transcription factors, grouped by their regulatory role, and corresponding promoter strength.
Our study of E.coli σ70 promoters found support at the 0.1 significance level for our hypothesis that weak promoters are preferentially associated with activator binding sites to enhance gene expression, whilst strong promoters have more repressor binding sites to repress or inhibit gene transcription. Although the observations were specific to σ70, they nevertheless strongly encourage additional investigations when more experimentally confirmed data are available. In our preliminary exploration of relationships between the key regulatory components in E.coli transcription, we discovered a number of potentially useful features, some of which proved successful in reducing the number of false positives when applied to re-evaluate binding site predictions. Of chief interest was the relationship observed between promoter strength and TFs with respect to their regulatory role. Based on the common assumption that promoter homology positively correlates with transcription rate, we hypothesised that weak promoters would have more transcription factors that enhance gene expression, whilst strong promoters would have more repressor binding sites. The t-tests assessed for E.coli σ70 promoters returned a p-value of 0.072, which at the 0.1 significance level suggested support for our (alternative) hypothesis, albeit this trend may only be present for promoters whose corresponding TFBSs are either all repressors or all activators. Nevertheless, such suggestive results strongly encourage additional investigation as more experimentally confirmed data become available. Much of the remainder of the thesis concerns a machine learning study of binding site prediction, using the SVM and kernel methods, principally the spectrum kernel.
Spectrum kernels have been successfully applied in previous studies of protein classification [91, 92], as well as the related problem of promoter predictions [59], and we have here successfully applied the technique to refining TFBS predictions. The advantages provided by the SVM classifier were best seen in 'moderately'-conserved transcription factor binding sites as represented by our E.coli CRP case study. Inclusion of additional position feature attributes further increased accuracy by 9.1% but more notable was the considerable decrease in false positive rate from 0.8 to 0.5 while retaining 0.9 sensitivity. Improved prediction of transcription factor binding sites is in turn extremely valuable in improving inference of regulatory relationships, a problem notoriously prone to false positive predictions. Here, the number of false regulatory interactions inferred using the conventional two-component model was substantially reduced when we integrated de novo transcription factor binding site predictions as an additional criterion for acceptance in a case study of inference in the Fur regulon. This initial work was extended to a comparative study of the iron regulatory system across 20 Yersinia strains. This work revealed interesting, strain-specific differences, especially between pathogenic and non-pathogenic strains. Such differences were made clear through interactive visualisations using the TRNDifi software developed as part of this work, and would have remained undetected using conventional methods. This approach led to the nomination of the Yfe iron-uptake system as a candidate for further wet-lab experimentation due to its potential active functionality in non-pathogens and its known participation in full virulence of the bubonic plague strain. Building on this work, we introduced novel structures we have labelled as 'regulatory trees', inspired by the phylogenetic tree concept.
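The spectrum kernel used in the SVM work above compares two sequences by the k-mers they share. A minimal pure-Python sketch of the idea (an illustration, not the thesis implementation, which would feed such a kernel into an SVM):

```python
from collections import Counter

def spectrum(seq, k):
    """Map a sequence to its k-mer count vector (the k-spectrum)."""
    return Counter(seq[i:i + k] for i in range(len(seq) - k + 1))

def spectrum_kernel(s1, s2, k):
    """Inner product of the two k-spectra: a measure of shared k-mer content."""
    sp1, sp2 = spectrum(s1, k), spectrum(s2, k)
    return sum(sp1[kmer] * sp2[kmer] for kmer in sp1)

# Two short binding-site-like sequences differing at one position:
# they share the 3-mers ACG, CGT and GTG, so the kernel value is 3.
value = spectrum_kernel("ACGTGCA", "ACGTGGA", k=3)
```

Because the kernel is a plain inner product over k-mer counts, it can be plugged directly into any kernelised classifier such as an SVM.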
Instead of using gene or protein sequence similarity, the regulatory trees were constructed based on the number of similar regulatory interactions. While common phylogenetic trees convey information regarding changes in gene repertoire, which we might regard as analogous to 'hardware', the regulatory tree informs us of changes in regulatory circuitry, in some respects analogous to 'software'. In this context, we explored the 'pan-regulatory network' for the Fur system: the entire set of regulatory interactions found for the Fur transcription factor across a group of genomes. In the pan-regulatory network, emphasis is placed on how the regulatory network for each target genome is inferred from multiple sources instead of a single source, as is the common approach. The benefit of using multiple reference networks is a more comprehensive survey of the relationships and increased confidence in the regulatory interactions predicted. In the present study, we distinguish between relationships found across the full set of genomes, the 'core-regulatory-set', and interactions found only in a subset of the genomes explored, the 'sub-regulatory-set'. We found nine Fur target gene clusters present across the four genomes studied; this core set potentially identifies basic regulatory processes essential for survival. Species-level differences are seen at the sub-regulatory-set level; for example, the known virulence factors YbtA and PchR were found in Y.pestis and P. aeruginosa respectively, but were not present in either E.coli or B.subtilis. Such factors, and the iron-uptake systems they regulate, are ideal candidates for wet-lab investigation to determine whether or not they are pathogen specific. In this study, we employed a broad range of approaches to address our goals and assessed these methods using the Fur regulon as our initial case study.
We identified a set of promising feature attributes, demonstrated their success in increasing transcription factor binding site prediction specificity while retaining sensitivity, and showed the importance of binding site predictions in enhancing the reliability of regulatory interaction inferences. Most importantly, these outcomes led to the introduction of a range of visualisations and techniques which are applicable across the entire bacterial spectrum and can be utilised in studies beyond the understanding of transcriptional regulatory networks.
Abstract:
Intracellular Flightless I (Flii), a gelsolin family member, has been found to have roles modulating actin regulation, transcriptional regulation and inflammation. In vivo Flii can regulate wound healing responses. We have recently shown that a pool of Flii is secreted by fibroblasts and macrophages, cells typically found in wounds, and its secretion can be upregulated upon wounding. We show that secreted Flii can bind to the bacterial cell wall component lipopolysaccharide and has the potential to regulate inflammation. We now show that secreted Flii is present in both acute and chronic wound fluid.
Abstract:
Objective: To determine if systematic variation of diagnostic terminology (i.e. concussion, minor head injury [MHI], mild traumatic brain injury [mTBI]) following a standardized injury description produced different expected symptoms and illness perceptions. We hypothesized that worse outcomes would be expected of mTBI, compared to the other diagnoses, and that MHI would be perceived as worse than concussion. Method: 108 volunteers were randomly allocated to conditions in which they read a vignette describing a motor vehicle accident-related mTBI followed by: a diagnosis of mTBI (n=27), MHI (n=24), or concussion (n=31); or no diagnosis (n=26). All groups rated: a) event ‘undesirability’; b) illness perception; and c) expected Postconcussion Syndrome (PCS) and Posttraumatic Stress Disorder (PTSD) symptoms six months post injury. Results: On average, more PCS symptomatology was expected following mTBI compared to the other diagnoses, but this difference was not statistically significant. There was a statistically significant group effect on undesirability (mTBI>concussion & MHI), PTSD symptomatology (mTBI & no diagnosis>concussion), and negative illness perception (mTBI & no diagnosis>concussion). Conclusion: In general, diagnostic terminology did not affect anticipated PCS symptoms six months post injury, but other outcomes were affected. Given that these diagnostic terms are used interchangeably, this study suggests that changing terminology can influence known contributors to poor mTBI outcome.
Abstract:
Brief self-report symptom checklists are often used to screen for postconcussional disorder (PCD) and posttraumatic stress disorder (PTSD) and are highly susceptible to symptom exaggeration. This study examined the utility of the five-item Mild Brain Injury Atypical Symptoms Scale (mBIAS) designed for use with the Neurobehavioral Symptom Inventory (NSI) and the PTSD Checklist–Civilian (PCL–C). Participants were 85 Australian undergraduate students who completed a battery of self-report measures under one of three experimental conditions: control (i.e., honest responding, n = 24), feign PCD (n = 29), and feign PTSD (n = 32). Measures were the mBIAS, NSI, PCL–C, Minnesota Multiphasic Personality Inventory–2, Restructured Form (MMPI–2–RF), and the Structured Inventory of Malingered Symptomatology (SIMS). Participants instructed to feign PTSD and PCD had significantly higher scores on the mBIAS, NSI, PCL–C, and MMPI–2–RF than did controls. Few differences were found between the feign PCD and feign PTSD groups, with the exception of scores on the NSI (feign PCD > feign PTSD) and PCL–C (feign PTSD > feign PCD). Optimal cutoff scores on the mBIAS of ≥8 and ≥6 were found to reflect “probable exaggeration” (sensitivity = .34; specificity = 1.0; positive predictive power, PPP = 1.0; negative predictive power, NPP = .74) and “possible exaggeration” (sensitivity = .72; specificity = .88; PPP = .76; NPP = .85), respectively. Findings provide preliminary support for the use of the mBIAS as a tool to detect symptom exaggeration when administering the NSI and PCL–C.
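The cutoff statistics reported above (sensitivity, specificity, positive and negative predictive power) all follow from a 2×2 classification table of feigners versus honest responders at a given score cutoff. A small sketch with hypothetical counts, not the study's raw data:

```python
def cutoff_metrics(tp, fp, tn, fn):
    """Screening metrics for a score cutoff applied to feigners vs. controls."""
    sensitivity = tp / (tp + fn)  # feigners correctly flagged
    specificity = tn / (tn + fp)  # honest responders correctly passed
    ppp = tp / (tp + fp) if tp + fp else float("nan")  # positive predictive power
    npp = tn / (tn + fn) if tn + fn else float("nan")  # negative predictive power
    return sensitivity, specificity, ppp, npp

# Hypothetical counts at an mBIAS cutoff of >= 8.
sens, spec, ppp, npp = cutoff_metrics(tp=8, fp=1, tn=9, fn=2)
```

The pattern in the study, where the "probable exaggeration" cutoff (≥8) trades sensitivity (.34) for perfect specificity (1.0) while the "possible exaggeration" cutoff (≥6) does the reverse, is the usual effect of sliding a single cutoff along a score distribution.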
Abstract:
OBJECTIVE: To review and compare the mild traumatic brain injury (mTBI) vignettes used in postconcussion syndrome (PCS) research, and to develop 3 new vignettes. METHOD: The new vignettes were devised using World Health Organization (WHO) mTBI diagnostic criteria [1]. Each vignette depicted a very mild (VM), mild (M), or severe (S) brain injury. Expert review (N = 27) and readability analysis were used to validate the new vignettes and compare them to 5 existing vignettes. RESULTS: The response rate was 44%. The M vignette and the existing vignettes were rated as depicting a mTBI; however, the fit-to-criteria of these vignettes differed significantly. The fit-to-criteria of the M vignette was as good as that of 3 existing vignettes and significantly better than that of 2 others. As expected, the VM and S vignettes were a poor fit-to-criteria. CONCLUSIONS: These new vignettes will assist PCS researchers to test the limits of important etiology factors by varying the severity of depicted injuries.
Abstract:
This study investigated the specificity of the post-concussion syndrome (PCS) expectation-as-etiology hypothesis. Undergraduate students (n = 551) were randomly allocated to one of three vignette conditions. Vignettes depicted either a very mild (VMI), mild (MI), or moderate-to-severe (MSI) motor vehicle-related traumatic brain injury (TBI). Participants reported the PCS and PTSD symptoms that they imagined the depicted injury would produce. Secondary outcomes (knowledge of mild TBI, and the perceived undesirability of TBI) were also assessed. After data screening, the distribution of participants by condition was: VMI (n = 100), MI (n = 96), and MSI (n = 71). There was a significant effect of condition on PCS symptomatology, F(2, 264) = 16.55, p < .001. Significantly greater PCS symptomatology was expected in the MSI condition compared to the other conditions (MSI > VMI; medium effect, r = .33; MSI > MI; small-to-medium effect, r = .22). The same pattern of group differences was found for PTSD symptoms, F(2, 264) = 17.12, p < .001. Knowledge of mild TBI was not related to differences in expected PCS symptoms by condition; and the perceived undesirability of TBI was only associated with reported PCS symptomatology in the MSI condition. Systematic variation in the severity of a depicted TBI produces different PCS and PTSD symptom expectations. Even a very mild TBI vignette can elicit expectations of PCS symptoms.
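The group comparisons above combine a one-way ANOVA with pairwise effect sizes r. A sketch of that workflow with invented symptom scores (the F and r values in the abstract come from the actual sample, not from this illustration):

```python
from math import sqrt
from scipy import stats

# Hypothetical expected-PCS-symptom scores per vignette condition.
vmi = [10, 12, 11, 9, 13]   # very mild injury
mi  = [11, 13, 12, 10, 14]  # mild injury
msi = [18, 20, 19, 17, 21]  # moderate-to-severe injury

# Omnibus one-way ANOVA across the three conditions.
f_stat, p = stats.f_oneway(vmi, mi, msi)

# Pairwise effect size r derived from an independent-samples t-test,
# using r = sqrt(t^2 / (t^2 + df)).
t, _ = stats.ttest_ind(msi, vmi)
df = len(msi) + len(vmi) - 2
r = sqrt(t**2 / (t**2 + df))
```

By the common benchmarks used in this literature, r around .1 is small, .3 medium, and .5 large, which is how the abstract's r = .33 and r = .22 map onto "medium" and "small-to-medium" effects.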
Abstract:
An advanced rule-based Transit Signal Priority (TSP) control method is presented in this paper. An on-line transit travel time prediction model is the key component of the proposed method, enabling selection of the most appropriate TSP plan for the prevailing traffic and transit conditions. The new method also adopts a priority plan re-development feature that enables modifying, or even switching, an already implemented priority plan to accommodate changes in traffic conditions. The proposed method utilizes conventional green extension and red truncation strategies, as well as two new strategies: green truncation and queue clearance. The new method is evaluated in microsimulation against a typical active TSP strategy and a base case scenario assuming no TSP control. The evaluation results indicate that the proposed method can produce significant benefits in reducing bus delay time and improving service regularity, with negligible adverse impacts on non-transit street traffic.
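A rule-based TSP decision of the kind described can be sketched as a simple function of the predicted bus arrival time versus the signal state. This is a deliberately simplified illustration, not the authors' algorithm, which also handles plan re-development, green truncation and queue clearance:

```python
def tsp_plan(predicted_arrival_s, green_remaining_s, max_extension_s=10):
    """Pick a transit signal priority plan from a predicted bus arrival time.

    Illustrative rules only: extend the green if the bus would just miss it,
    otherwise truncate the red (bring the next green forward).
    """
    if predicted_arrival_s <= green_remaining_s:
        return "no action"            # bus clears on the current green
    if predicted_arrival_s <= green_remaining_s + max_extension_s:
        return "green extension"      # a small extension lets the bus through
    return "red truncation"           # bus arrives in red; start green earlier

plan = tsp_plan(predicted_arrival_s=12, green_remaining_s=5)
```

The paper's key point is that the quality of such decisions hinges on the on-line travel time prediction feeding `predicted_arrival_s`, and that plans may be revised as that prediction is updated.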
Abstract:
The ability to estimate asset reliability and the probability of failure is critical to reducing maintenance costs, operation downtime, and safety hazards. Predicting the survival time and the probability of failure at a future time is an indispensable requirement in prognostics and asset health management. In traditional reliability models, the lifetime of an asset is estimated using failure event data alone; however, statistically sufficient failure event data are often difficult to attain in real-life situations due to poor data management, effective preventive maintenance, and the small population of identical assets in use. Condition indicators and operating environment indicators are two types of covariate data that are normally obtained in addition to failure event and suspended data. These data contain significant information about the state and health of an asset. Condition indicators reflect the level of degradation of assets, while operating environment indicators accelerate or decelerate the lifetime of assets. When these data are available, an alternative approach to traditional reliability analysis is the modelling of condition indicators and operating environment indicators and their failure-generating mechanisms using a covariate-based hazard model. The literature review indicates that a number of covariate-based hazard models have been developed. All of these existing covariate-based hazard models were developed based on the theory of the Proportional Hazard Model (PHM). However, most of these models have not attracted much attention in the field of machinery prognostics. Moreover, due to the prominence of PHM, attempts at developing alternative models have, to some extent, been stifled, although a number of alternatives to PHM have been suggested. The existing covariate-based hazard models fail to fully utilise all three types of asset health information (failure event data (i.e. observed and/or suspended), condition data, and operating environment data) in a single model for more effective hazard and reliability predictions. In addition, current research shows that condition indicators and operating environment indicators have different characteristics and are non-homogeneous covariate data. Condition indicators act as response variables (dependent variables), whereas operating environment indicators act as explanatory variables (independent variables). However, these non-homogeneous covariate data were modelled in the same way for hazard prediction in the existing covariate-based hazard models. The related and yet more imperative question is how both of these indicators should be effectively modelled and integrated into a covariate-based hazard model.
This work presents a new approach for addressing the aforementioned challenges. The new covariate-based hazard model, termed the Explicit Hazard Model (EHM), explicitly and effectively incorporates all three sources of asset health information into the modelling of hazard and reliability predictions, and also captures the relationship between actual asset health and both condition measurements and operating environment measurements. The theoretical development of the model and its parameter estimation method are demonstrated in this work. EHM assumes that the baseline hazard is a function of both time and condition indicators. Condition indicators provide information about the health condition of an asset; they therefore update and reform the baseline hazard of EHM according to the health state of the asset at a given time t. Some examples of condition indicators are the vibration of rotating machinery, the level of metal particles in engine oil analysis, and wear in a component, to name but a few.
Operating environment indicators in this model are failure accelerators and/or decelerators that are included in the covariate function of EHM and may increase or decrease the value of the hazard relative to the baseline hazard. These indicators arise from the environment in which an asset operates and have not been explicitly captured by the condition indicators (e.g. loads, environmental stresses, and other dynamically changing environmental factors). While the effects of the operating environment indicators could be nil in EHM, the condition indicators are always present, because they are observed and measured for as long as an asset is operational. EHM has several advantages over the existing covariate-based hazard models. One is that the model utilises three different sources of asset health data (i.e. population characteristics, condition indicators, and operating environment indicators) to effectively predict hazard and reliability. Another is that EHM explicitly investigates the relationship between the condition and operating environment indicators associated with the hazard of an asset. Furthermore, the proportionality assumption, from which most covariate-based hazard models suffer, does not exist in EHM. Depending on the sample size of failure/suspension times, EHM is extended into two forms: semi-parametric and non-parametric. The semi-parametric EHM assumes a specified lifetime distribution (i.e. the Weibull distribution) in the form of the baseline hazard. However, in many industrial applications, due to sparse failure event data, the analysis of such data often involves complex distributional shapes about which little is known. Therefore, to avoid the restrictive assumption of a specified lifetime distribution for failure event histories, the non-parametric EHM, a distribution-free model, has been developed. The development of EHM in two forms is another merit of the model.
A case study was conducted using laboratory experiment data to validate the practicality of both the semi-parametric and non-parametric EHMs. The performance of the newly developed models is appraised by comparing their estimates with those of the other existing covariate-based hazard models. The comparison results demonstrated that both the semi-parametric and non-parametric EHMs outperform the existing covariate-based hazard models. Future research directions are also identified, regarding the new parameter estimation method in the case of time-dependent covariate effects and missing data, the application of EHM to both repairable and non-repairable systems using field data, and a decision support model linked to the estimated reliability results.
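For the semi-parametric form described above, a covariate-based hazard with a Weibull baseline can be written as h(t | z) = (beta/eta) * (t/eta)^(beta-1) * exp(gamma . z). The sketch below is a generic PHM-style illustration under that assumption, not the EHM itself (EHM additionally folds condition indicators into the baseline):

```python
from math import exp

def weibull_hazard(t, beta, eta):
    """Weibull baseline hazard h0(t) with shape beta and scale eta."""
    return (beta / eta) * (t / eta) ** (beta - 1)

def covariate_hazard(t, beta, eta, gammas, covariates):
    """PHM-style hazard: the baseline scaled by exp(sum of gamma_i * z_i)."""
    link = exp(sum(g * z for g, z in zip(gammas, covariates)))
    return weibull_hazard(t, beta, eta) * link

# With shape beta = 1 the Weibull baseline reduces to a constant rate 1/eta,
# so with no covariates the hazard is simply 1/eta = 0.5 here.
h = covariate_hazard(t=5.0, beta=1.0, eta=2.0, gammas=[], covariates=[])
```

The multiplicative `exp(...)` link is exactly where the proportionality assumption enters PHM; the thesis's point is that EHM relaxes this by letting condition indicators reshape the baseline itself.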
Abstract:
Despite a considerable amount of research on traffic injury severities, relatively little is known about the factors influencing traffic injury severity in developing countries, and in particular in Bangladesh. Road traffic crashes are a common headline in the daily newspapers of Bangladesh, which has also recorded one of the highest road fatality rates in the world. This research identifies significant factors contributing to traffic injury severity in Dhaka, a mega city and the capital of Bangladesh. Road traffic crash data for the 5 years from 2007 to 2011 were collected from the Dhaka Metropolitan Police (DMP), comprising about 2714 traffic crashes. The severity level of these crashes was documented on a 4-point ordinal scale: no injury (property damage), minor injury, severe injury, and death. An ordered probit regression model was estimated to identify factors contributing to injury severity. Results show that night time is associated with higher injury severity, as is involvement in single-vehicle crashes. Crashes on highway sections within the city are found to be more injurious than crashes along arterial and feeder roads. There is a lower likelihood of severe injury, however, on road sections monitored and enforced by the traffic police. The likelihood of injury is lower under two-way traffic arrangements than one-way, and at four-legged intersections and roundabouts compared to road segments. The findings are compared with those from developed countries, and the implications of this research are discussed in terms of policy settings for developing countries.
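The ordered probit model above maps a linear predictor of crash attributes, together with a set of estimated thresholds, to probabilities over the four ordered severity levels. A minimal sketch of those category probabilities; the thresholds and linear predictor value below are invented for illustration, not estimated from the DMP data:

```python
from statistics import NormalDist

def ordered_probit_probs(xb, thresholds):
    """P(y = j) = Phi(tau_j - xb) - Phi(tau_{j-1} - xb) over ordered categories.

    xb is the linear predictor x'beta; thresholds are the ordered cut points
    tau_1 < ... < tau_{K-1} separating the K severity categories.
    """
    phi = NormalDist().cdf
    cuts = [float("-inf")] + list(thresholds) + [float("inf")]

    def F(c):
        if c == float("-inf"):
            return 0.0
        if c == float("inf"):
            return 1.0
        return phi(c - xb)

    return [F(cuts[j + 1]) - F(cuts[j]) for j in range(len(cuts) - 1)]

# Four severity levels (no injury, minor, severe, death) need three thresholds.
probs = ordered_probit_probs(xb=0.4, thresholds=[-0.5, 0.5, 1.5])
```

A positive coefficient on a factor such as night time raises xb and shifts probability mass toward the more severe categories, which is how the model expresses "associated with higher injury severity".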