941 results for injury data
Abstract:
Introduction: Falls are the most frequent adverse event reported in hospitals. Approximately 30% of in-hospital falls lead to an injury and up to 2% result in a fracture. A large randomised trial found that a trained health professional providing individualised falls prevention education to older inpatients reduced falls in a cognitively intact subgroup. This study aims to investigate whether this efficacious intervention can reduce falls and be clinically useful and cost-effective when delivered in a real-life clinical environment.

Methods: A stepped-wedge cluster randomised trial will be used across eight subacute units (clusters), which will be randomised to one of four dates to start the intervention. Usual care on these units includes patient screening, assessment and implementation of individualised falls prevention strategies, ongoing staff training and environmental strategies. Patients with better levels of cognition (Mini-Mental State Examination >23/30) will receive the individualised education from a trained health professional in addition to usual care, and patient feedback received during education sessions will be passed on to unit staff. Unit staff will receive training to assist in intervention delivery and to enhance uptake of strategies by patients. Falls data will be collected by two methods: case note audit by research assistants and the hospital falls reporting system. Cluster-level data, including patient admissions, length of stay and diagnoses, will be collected from hospital systems. Data will be analysed allowing for the correlation of outcomes (clustering) within units. An economic analysis will be undertaken, including an incremental cost-effectiveness analysis.

Ethics and dissemination: The study was approved by The University of Notre Dame Australia Human Research Ethics Committee and local hospital ethics committees. The results will be disseminated through local site networks and will inform future funding and delivery of falls prevention programmes within WA Health. Results will also be disseminated through peer-reviewed publications and medical conferences.
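To make the analysis plan concrete, the sketch below shows one common way to analyse a stepped-wedge trial of this shape: a Poisson GEE with an exchangeable working correlation within units, which allows for the clustering of outcomes the protocol mentions. This is an illustration, not the trial's registered analysis; the file name, the column names (falls, intervention, period, unit, occupied_bed_days) and the exposure measure are all assumptions.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Hypothetical audit extract: one row per unit-period with a falls count
# and occupied bed-days as the exposure denominator.
df = pd.read_csv("falls_audit.csv")

model = smf.gee(
    "falls ~ intervention + C(period)",       # period effect, standard for stepped-wedge designs
    groups="unit",                            # the subacute units are the clusters
    data=df,
    family=sm.families.Poisson(),
    offset=np.log(df["occupied_bed_days"]),   # models falls per occupied bed-day
    cov_struct=sm.cov_struct.Exchangeable(),  # within-unit correlation of outcomes
)
print(model.fit().summary())
```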
Abstract:
There is little research on off-road motorcycle and all-terrain vehicle riders, even though injury levels among them are high. This thesis identified assigning formal responsibility for monitoring injuries, targeting young male and recreational riders, promoting family members as role models, and providing controlled and accessible riding locations as ways to increase safety. These recommendations were based on analysis of Queensland hospitalisation records, riders' personal reports and survey responses.
Abstract:
Digital human modeling (DHM) systems have undergone significant development in recent years. They have become increasingly important in ergonomic workplace design, product development, product usability, ergonomic research, ergonomic education, audiovisual marketing and the entertainment industry, helping to design ergonomic products as well as healthy and safe socio-technical work systems. In the domain of scientific DHM systems, however, no industry-specific standard interfaces are defined that could facilitate the exchange of 3D solid body data, anthropometric data or motion data. The focus of this article is to provide an overview of the requirements for reliable data exchange between different DHM systems in order to identify suitable file formats. Examples from the literature are discussed in detail. Methods: As a first step, a literature review is conducted on existing studies and file formats for exchanging data between different DHM systems. The file formats found can be structured into different categories: static 3D solid body data exchange, anthropometric data exchange, motion data exchange and comprehensive data exchange. Each file format is discussed and its advantages and disadvantages for the DHM context are pointed out. Case studies are also presented which show first approaches to exchanging data between DHM systems. Lessons learnt are briefly summarized. Results: A selection of suitable file formats for data exchange between DHM systems is determined from the literature review.
Abstract:
Exposure control or case-control methodologies are common techniques for estimating crash risks; however, they require either observational data on control cases or exogenous exposure data, such as vehicle-kilometres travelled. This study proposes an alternative methodology for estimating the crash risk of road user groups, whilst controlling for exposure under a variety of roadway, traffic and environmental factors, by using readily available police-reported crash data. In particular, the proposed method employs a combination of a log-linear model and the quasi-induced exposure technique to identify significant interactions among a range of roadway, environmental and traffic conditions and to estimate the associated crash risks. The proposed methodology is illustrated using a set of police-reported crash data from January 2004 to June 2009 on roadways in Queensland, Australia. Exposure-controlled crash risks of motorcyclists involved in multi-vehicle crashes at intersections were estimated under various combinations of variables such as posted speed limit, intersection control type, intersection configuration, and lighting condition. Results show that the crash risk of motorcycles at three-legged intersections is high if the posted speed limits along the approaches are greater than 60 km/h. The crash risk at three-legged intersections is also high when they are unsignalized. Dark lighting conditions appear to increase the crash risk of motorcycles at signalized intersections, but the problem of night-time conspicuity of motorcyclists at intersections is lessened on approaches with lower speed limits. This study demonstrates that this combined methodology is a promising tool for gaining new insights into the crash risks of road user groups, and is transferable to other road users.
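The log-linear component of such a method can be sketched as a Poisson GLM over crash counts cross-classified by the factors above. This is a hedged illustration, not the authors' exact specification: the file and variable names are invented, and the quasi-induced exposure step (not-at-fault parties standing in as the exposure proxy) is reduced to a single at_fault factor.

```python
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Hypothetical aggregated table: crash counts (n) cross-classified by factors.
counts = pd.read_csv("crash_counts.csv")

model = smf.glm(
    "n ~ road_user * at_fault"        # quasi-induced exposure contrast
    " + speed_limit * control_type"   # interactions of interest
    " + intersection_config + lighting",
    data=counts,
    family=sm.families.Poisson(),     # log-linear model for counts
).fit()
print(model.summary())
```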
Abstract:
As support grows for greater access to information and data held by governments, so does awareness of the need for appropriate policy, technical and legal frameworks to achieve the desired economic and societal outcomes. Since the late 2000s, numerous international organizations, inter-governmental bodies and governments have issued open government data policies, which set out the key principles underpinning access to, and the release and reuse of, data. These policies reiterate the value of government data and establish the default position that it should be openly accessible to the public under transparent and non-discriminatory conditions conducive to innovative reuse. A key principle stated in open government data policies is that legal rights in government information must be exercised in a manner that is consistent with, and supports, the open accessibility and reusability of the data. In particular, where government information and data are protected by copyright, access should be provided under licensing terms which clearly permit reuse and dissemination. Australian Governments have developed this principle into a specific requirement that government agencies apply the Creative Commons Attribution licence (CC BY) as the default licensing position when releasing government information and data. A wide-ranging survey of the practices of Australian Government agencies in managing their information and data, commissioned by the Office of the Australian Information Commissioner in 2012, provides valuable insights into progress towards the achievement of open government policy objectives and the adoption of open licensing practices. The survey results indicate that Australian Government agencies are embracing open access and a proactive disclosure culture, and that open licensing under Creative Commons licences is increasingly prevalent. However, the finding that '[t]he default position of open access licensing is not clearly or robustly stated, nor properly reflected in the practice of Government agencies' points to the need to further develop the policy framework and the principles governing information access and reuse, and to provide practical guidance tools on open licensing, if the broadest range of government information and data is to be made available for innovative reuse.
Abstract:
This study considered the problem of predicting survival based on three alternative models: a single Weibull, a mixture of Weibulls, and a cure model. Instead of the common procedure of choosing a single "best" model, where "best" is defined in terms of goodness of fit to the data, a Bayesian model averaging (BMA) approach was adopted to account for model uncertainty. This was illustrated using a case study in which the aim was the description of lymphoma cancer survival with covariates given by phenotypes and gene expression. The results of this study indicate that if the sample size is sufficiently large, one of the three models emerges as having the highest probability given the data, as indicated by the goodness-of-fit measure, the Bayesian information criterion (BIC). However, when the sample size was reduced, no single model was revealed as "best", suggesting that a BMA approach would be appropriate. Although a BMA approach can compromise on goodness of fit to the data (when compared to the true model), it can provide robust predictions and facilitate more detailed investigation of the relationships between gene expression and patient survival.

Keywords: Bayesian modelling; Bayesian model averaging; cure model; Markov chain Monte Carlo; mixture model; survival analysis; Weibull distribution
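The BIC-based averaging step can be illustrated in a few lines: each candidate model's posterior probability is approximated from its BIC, and predictions are combined under those weights. The BIC values below are placeholders, not results from the study.

```python
import numpy as np

# Placeholder BICs for the three candidates: single Weibull, Weibull mixture,
# cure model (NOT values from the study).
bic = np.array([1480.2, 1475.9, 1476.4])

# exp(-BIC/2) approximates each model's marginal likelihood up to a constant,
# so normalising gives approximate posterior model probabilities.
w = np.exp(-0.5 * (bic - bic.min()))
weights = w / w.sum()

def bma_survival(t, survival_fns):
    """Model-averaged survival: sum_k w_k * S_k(t)."""
    return sum(wk * Sk(t) for wk, Sk in zip(weights, survival_fns))
```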
Abstract:
Bluetooth technology is increasingly used, among the automated vehicle identification systems, to retrieve important information about urban networks. Because the movement of Bluetooth-equipped vehicles can be monitored throughout the network of Bluetooth sensors, this technology represents an effective means of acquiring accurate time-dependent origin-destination information. In order to obtain reliable estimations, however, a number of issues need to be addressed through data filtering and correction techniques. Among the main challenges inherent to Bluetooth data are, first, that Bluetooth sensors may fail to detect all of the nearby Bluetooth-enabled vehicles; as a consequence, the exact journey for some vehicles becomes a latent pattern that needs to be estimated. Second, sensors that are in close proximity to each other may have overlapping detection areas, making the task of retrieving the correct travelled path even more challenging. The aim of this paper is twofold: to give an overview of the issues inherent to Bluetooth technology, through the analysis of the data available from the Bluetooth sensors in Brisbane, and to propose a method for retrieving the itineraries of individual Bluetooth-equipped vehicles. We argue that estimating these latent itineraries accurately is a crucial step toward the retrieval of accurate dynamic origin-destination matrices.
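One simple way to begin recovering such latent itineraries, sketched under stated assumptions below, is to sort each device's detections by time, collapse repeated hits at the same sensor, and merge near-simultaneous hits at different sensors, which are likely artefacts of overlapping detection areas. The input columns (mac, sensor_id, timestamp) and the 60-second merge window are assumptions, and this is far simpler than the paper's method.

```python
import pandas as pd

# Hypothetical detection log: hashed device id, sensor id, detection time.
det = pd.read_csv("bt_detections.csv", parse_dates=["timestamp"])

def itinerary(group, min_gap="60s"):
    g = group.sort_values("timestamp")
    # Collapse consecutive re-detections at the same sensor.
    g = g[g["sensor_id"].ne(g["sensor_id"].shift())]
    # Drop sensor changes faster than min_gap: likely overlapping antennas.
    dt = g["timestamp"].diff()
    g = g[dt.isna() | (dt >= pd.Timedelta(min_gap))]
    return list(g["sensor_id"])

itineraries = det.groupby("mac").apply(itinerary)
```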
Abstract:
Transit passenger market segmentation enables transit operators to target different classes of transit users and provide customized information and services. Smart card (SC) data from automated fare collection systems facilitate the understanding of the multiday travel regularity of transit passengers and can be used to segment them into identifiable classes with similar behaviors and needs. However, the use of SC data for market segmentation has attracted very limited attention in the literature. This paper proposes a novel methodology for mining spatial and temporal travel regularity from each individual passenger's historical SC transactions and segmenting passengers into four classes of transit users. After reconstructing travel itineraries from historical SC transactions, the paper adopts the Density-Based Spatial Clustering of Applications with Noise (DBSCAN) algorithm to mine the travel regularity of each SC user. The travel regularity is then used to segment SC users through an a priori market segmentation approach. The methodology proposed in this paper helps transit operators to understand their passengers and provide them with targeted information and services.
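The regularity-mining step can be sketched with an off-the-shelf DBSCAN: each passenger's historical trips are clustered on boarding time and stop location, and the share of trips falling in some dense cluster serves as a crude regularity score. The column names, the scaling, and the eps/min_samples values are illustrative assumptions, not the paper's parameters.

```python
import numpy as np
import pandas as pd
from sklearn.cluster import DBSCAN

# Hypothetical trip table reconstructed from SC transactions:
# card_id, board_hour, stop_x, stop_y.
trips = pd.read_csv("sc_trips.csv")

def travel_regularity(card_trips, eps=0.5, min_samples=3):
    X = card_trips[["board_hour", "stop_x", "stop_y"]].to_numpy(float)
    X = (X - X.mean(axis=0)) / (X.std(axis=0) + 1e-9)  # crude standardisation
    labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(X)
    # Share of trips in some dense cluster = a simple regularity score.
    return float(np.mean(labels != -1))

regularity = trips.groupby("card_id").apply(travel_regularity)
```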
Abstract:
Over the past decade, vision-based tracking systems have been successfully deployed in professional sports such as tennis and cricket, both for enhanced broadcast visualizations and for aiding umpiring decisions. Despite the high accuracy of these tracking systems and the sheer volume of spatiotemporal data they generate, the use of this high-quality data for quantitative analysis of player performance and for prediction has been lacking. In this paper, we present a method which predicts the location of a future shot based on the spatiotemporal parameters of the incoming shots (i.e. shot speed, location, angle and feet location) from such a vision system. The ability to accurately predict future short-term events has enormous implications for automatic sports broadcasting, as well as for the coaching and commentary domains. Using Hawk-Eye data from the 2012 Australian Open men's draw, we utilize a Dynamic Bayesian Network to model player behaviors and use an online model adaptation method to match each player's behavior and enhance shot predictability. To show the utility of our approach, we analyze the shot predictability of the top three seeds in the tournament (Djokovic, Federer and Nadal), as they played the most matches.
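At its simplest, predicting the next shot from the incoming shot's parameters reduces to a conditional probability table, a degenerate special case of the Dynamic Bayesian Network used in the paper. The sketch below estimates P(next zone | incoming zone, speed bin) from counts; the court discretisation, zone labels and column names are all assumptions.

```python
import pandas as pd

# Hypothetical shot log: incoming shot zone, speed bin, and observed next zone.
shots = pd.read_csv("shots.csv")

table = (
    shots.groupby(["in_zone", "speed_bin"])["next_zone"]
    .value_counts(normalize=True)  # P(next_zone | in_zone, speed_bin)
    .rename("p")
)
# Most likely next location given a fast incoming shot to the "ad_deep" zone.
print(table.loc[("ad_deep", "fast")].idxmax())
```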
Abstract:
Aim/Background: Transfusion-related acute lung injury (TRALI) is a potentially fatal adverse transfusion reaction. It is hypothesised to occur via a two-insult mechanism: the recipient's underlying co-morbidity, together with the transfusion of blood products, activates neutrophils in the lung, resulting in damaged endothelium and capillary leakage. Neutrophil activation may occur by antibody or non-antibody related mechanisms, with the length of storage of cellular blood products implicated in the latter. This study investigated non-antibody mediated priming and/or activation of the neutrophil oxidative burst.

Methods: A cytochrome C reduction assay was used to assess priming and activation of the neutrophil oxidative burst by pooled supernatant (SN) from day 1 (D1; n=75) and day 42 (D42; n=113) packed red blood cells (PRBC). Pooled PRBC-SN were assessed in parallel with PAF (priming), fMLP (activating), PAF + fMLP (priming + activating) and buffer-only (negative) controls. Cytochrome C reduction was measured over 30 min at 37°C (inclusive of 10 min priming). Neutrophil activation by PRBC-SN was assessed against the buffer-only control, and neutrophil priming by PRBC-SN was assessed by co-incubation with fMLP compared with fMLP alone. One-way ANOVA; Newman-Keuls post-test; p<0.05; n=10 independent assays.

Results: Neither D1- nor D42-PRBC-SN alone activated the neutrophil oxidative burst. In addition, D1-PRBC-SN did not prime the fMLP-activated neutrophil oxidative burst. D42-PRBC-SN did, however, prime neutrophils for subsequent activation of the oxidative burst by fMLP, with a magnitude of response similar to PAF (a known neutrophil priming agonist).

Conclusion: These findings are consistent with the two-insult mechanism of TRALI. Factors released into the SN during PRBC storage contributed to neutrophil priming synergistically with other neutrophil-stimulating agonists. This implicates PRBC storage duration as a key factor contributing to non-immune neutrophil activation in the development of TRALI in patients with predisposing inflammatory conditions.
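The stated analysis (one-way ANOVA with a post-hoc test across the assay conditions) can be sketched as follows. Neither SciPy nor statsmodels ships the Newman-Keuls test, so Tukey's HSD stands in for it here; the data file and column names are assumptions.

```python
import pandas as pd
from scipy.stats import f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Hypothetical assay readings: one absorbance value per replicate per
# condition (e.g. buffer, PAF, fMLP, PAF+fMLP, D1-SN, D42-SN, ...).
df = pd.read_csv("oxidative_burst.csv")

groups = [g["absorbance"].to_numpy() for _, g in df.groupby("condition")]
print(f_oneway(*groups))  # one-way ANOVA across conditions
print(pairwise_tukeyhsd(df["absorbance"], df["condition"], alpha=0.05))
```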
Abstract:
Technological advances have led to an influx of affordable hardware that supports sensing, computation and communication. This hardware is increasingly deployed in public and private spaces, tracking and aggregating a wealth of real-time environmental data. Although these technologies are the focus of several research areas, there is a lack of research dealing with the problem of making these capabilities accessible to everyday users. This thesis represents a first step towards developing systems that allow users to leverage the available infrastructure and create custom-tailored solutions. It explores how this notion can be utilized in the context of energy monitoring to improve conventional approaches. The project adopted a user-centered design process to inform the development of a flexible system for real-time data stream composition and visualization. The system features an extensible architecture and defines a unified API for heterogeneous data streams. Rather than displaying the data in a predetermined fashion, it makes this information available as building blocks that can be combined and shared. It is based on the insight that individual users have diverse information needs and presentation preferences, and it therefore allows users to compose rich information displays incorporating personally relevant data from an extensive information ecosystem. The prototype was evaluated in an exploratory study to observe its natural use in a real-world setting, gathering empirical usage statistics and conducting semi-structured interviews. The results show that a high degree of customization does not by itself ensure sustained usage; other factors were identified, yielding recommendations for increasing the impact on energy consumption.
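The "unified API for heterogeneous data streams" idea can be sketched as a small common interface that every source is wrapped in, so that composition and visualization blocks need only a single contract. The class and method names below are invented for illustration; the thesis's actual API is not reproduced here.

```python
from abc import ABC, abstractmethod
from typing import Iterable, Iterator, Tuple

class DataStream(ABC):
    """Uniform contract: (timestamp, value) pairs regardless of the source."""

    @abstractmethod
    def readings(self) -> Iterator[Tuple[float, float]]: ...

class PowerMeterStream(DataStream):
    """One concrete wrapper; a real one would poll the device."""

    def __init__(self, device_id: str):
        self.device_id = device_id

    def readings(self) -> Iterator[Tuple[float, float]]:
        yield (0.0, 42.0)  # stubbed sample

def compose(streams: Iterable[DataStream], combine) -> Iterator[Tuple[float, float]]:
    """Building-block composition: merge several streams through a function."""
    for samples in zip(*(s.readings() for s in streams)):
        ts = samples[0][0]
        yield ts, combine([v for _, v in samples])
```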