389 results for risk need responsivity model
at Queensland University of Technology - ePrints Archive
Abstract:
Crashes at any particular transport network location consist of a chain of events arising from a multitude of potential causes and/or contributing factors, whose nature is likely to reflect geometric characteristics of the road, spatial effects of the surrounding environment, and human behavioural factors. It is postulated that these potential contributing factors do not arise from the same underlying risk process, and thus should be explicitly modelled and understood. The state of the practice in road safety network management applies a safety performance function (SPF) that represents a single risk process to explain crash variability across network sites. This study aims to elucidate the importance of differentiating among various underlying risk processes contributing to the observed crash count at any particular network location. To demonstrate the principle of this theoretical and corresponding methodological approach, the study explores engineering (e.g. segment length, speed limit) and unobserved spatial factors (e.g. climatic factors, presence of schools) as two explicit sources of crash contributing factors. A Bayesian Latent Class (BLC) analysis is used to explore these two sources and to incorporate prior information about their contribution to crash occurrence. The methodology is applied to the state-controlled roads in Queensland, Australia, and the results are compared with the traditional Negative Binomial (NB) model. A comparison of goodness-of-fit measures indicates that the model with a double risk process outperforms the single risk process NB model, indicating the need for further research to capture all three crash generation processes in the SPFs.
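As background to the comparison above, the Negative Binomial SPF is usually motivated as a gamma-Poisson mixture: site-level crash rates vary, so counts are overdispersed relative to a Poisson model. A minimal simulation sketch of that motivation (the rate and dispersion values are invented for illustration, not taken from the study):

```python
import numpy as np

rng = np.random.default_rng(42)

# Gamma-Poisson mixture: each site's Poisson rate is drawn from a Gamma
# distribution, which yields Negative Binomial (overdispersed) crash counts.
n_sites = 10_000
mean_rate, dispersion = 4.0, 2.0  # illustrative values only

site_rates = rng.gamma(shape=dispersion, scale=mean_rate / dispersion, size=n_sites)
crashes = rng.poisson(site_rates)

# A pure Poisson model forces variance == mean; the mixture inflates the
# variance to mean + mean**2 / dispersion, which is what the NB SPF captures.
print(round(crashes.mean(), 2), round(crashes.var(), 2))
```

With these settings the sample variance comes out near 12 against a mean near 4, the kind of overdispersion a single-risk-process NB model absorbs with one extra parameter.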
Abstract:
Objective: The aim of this paper is to propose a ‘Perceived barriers and lifestyle risk factor modification model’ that could be incorporated into existing frameworks for diabetes education to enhance lifestyle risk factor education in women. Setting: Diabetes education, community health. Primary argument: ‘Perceived barriers’ is a health promotion concept that has been found to be a significant predictor of health promotion behaviour. There is evidence that women face a range of perceived barriers that prevent them from engaging in healthy lifestyle activities. Despite this, current evidence based models of diabetes education do not explicitly incorporate the concept of perceived barriers. A model of risk factor reduction that incorporates ‘perceived barriers’ is proposed. Conclusion: Although further research is required, current approaches to risk factor reduction in type 2 diabetes could be enhanced by identification and goal setting to reduce an individual’s perceived barriers.
Abstract:
Background: Critically ill patients are at high risk for pressure ulcer (PrU) development due to their high acuity and the invasive nature of the multiple interventions and therapies they receive. With reported incidence rates of PrU development in the adult critical care population as high as 56%, the identification of patients at high risk of PrU development is essential. This paper will explore the association between PrU development and risk factors. It will also explore PrU development and the use of risk assessment scales for critically ill patients in adult intensive care units. Method: A literature search from 2000 to 2012 using the CINAHL, Cochrane Library, EBSCOhost, Medline (via EBSCOhost), PubMed, ProQuest and Google Scholar databases was conducted. Key words used were: pressure ulcer/s; pressure sore/s; decubitus ulcer/s; bed sore/s; critical care; intensive care; critical illness; prevalence; incidence; prevention; management; risk factor; risk assessment scale. Results: Nineteen articles were included in this review: eight studies addressing PrU risk factors, eight studies addressing risk assessment scales and three studies overlapping both. Results from the studies reviewed identified 28 intrinsic and extrinsic risk factors which may lead to PrU development. Development of a risk factor prediction model in this patient population, although beneficial, appears problematic due to many issues, such as diverse diagnoses and subsequent patient needs. Additionally, several risk assessment instruments have been developed for early screening of patients at higher risk of developing PrU in the ICU. No existing risk assessment scale is valid for identifying high-risk critically ill patients, with the majority of scales potentially over-predicting patients at risk of PrU development. Conclusion: Research findings on the risk factors for pressure ulcer development are inconsistent.
Additionally, there is no consistent or clear evidence demonstrating any scale to be better or more effective than another when used to identify patients at risk of PrU development. Furthermore, robust research is needed to identify the risk factors and develop valid scales for measuring the risk of PrU development in the ICU.
Abstract:
Aims: This paper describes the development of a risk adjustment (RA) model predictive of individual lesion treatment failure in percutaneous coronary interventions (PCI) for use in a quality monitoring and improvement program. Methods and results: Prospectively collected data for 3972 consecutive revascularisation procedures (5601 lesions) performed between January 2003 and September 2011 were studied. Data on procedures to September 2009 (n = 3100) were used to identify factors predictive of lesion treatment failure. Factors identified included lesion risk class (p < 0.001), occlusion type (p < 0.001), patient age (p = 0.001), vessel system (p < 0.04), vessel diameter (p < 0.001), unstable angina (p = 0.003) and presence of major cardiac risk factors (p = 0.01). A Bayesian RA model was built using these factors, with the predictive performance of the model tested on the remaining procedures (area under the receiver operating characteristic curve: 0.765, Hosmer–Lemeshow p value: 0.11). Cumulative sum, exponentially weighted moving average and funnel plots were constructed using the RA model and subjectively evaluated. Conclusion: An RA model was developed and applied to statistical process control (SPC) monitoring for lesion failure in a PCI database. If linked to appropriate quality improvement governance response protocols, SPC using this RA tool might improve quality control and risk management by identifying variation in performance based on a comparison of observed and expected outcomes.
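The SPC charts mentioned in the conclusion can be illustrated with a simplified risk-adjusted CUSUM. This observed-minus-expected, reset-at-zero variant is only a sketch of the general idea; the study's actual charting scheme may differ, and the predicted risks below are invented:

```python
# Risk-adjusted CUSUM: accumulate (observed outcome - model-predicted risk)
# per case; a sustained upward drift flags more failures than the RA model
# expects. Probabilities and outcomes here are made up for illustration.
predicted_risk = [0.05, 0.10, 0.20, 0.05, 0.15, 0.30, 0.10]
observed_fail = [0,    0,    1,    0,    1,    1,    0]

cusum, trace = 0.0, []
for p, o in zip(predicted_risk, observed_fail):
    cusum = max(0.0, cusum + (o - p))  # one-sided, reset-at-zero CUSUM
    trace.append(round(cusum, 2))

print(trace)  # [0.0, 0.0, 0.8, 0.75, 1.6, 2.3, 2.2]
```

In practice a chart would signal when the trace crosses a control limit chosen for the desired false-alarm rate, prompting the governance response protocols the abstract describes.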
Abstract:
A presentation about research projects that build understanding of urban design and interactions, and that plan for future opportunities. What do we need to model?
Abstract:
The research objectives of this thesis were to contribute to Bayesian statistical methodology, both in risk assessment and in spatial and spatio-temporal methodology, by modelling error structures using complex hierarchical models. Specifically, I hoped to consider two applied areas, and use these applications as a springboard for developing new statistical methods as well as undertaking analyses which might give answers to particular applied questions. Thus, this thesis considers a series of models, firstly in the context of risk assessments for recycled water, and secondly in the context of water usage by crops. The research objective was to model error structures using hierarchical models in two problems: firstly, risk assessment analyses for wastewater, and secondly, a four-dimensional dataset assessing differences between cropping systems over time and over three spatial dimensions. The aim was to use the simplicity and insight afforded by Bayesian networks to develop appropriate models for risk scenarios, and again to use Bayesian hierarchical models to explore the necessarily complex modelling of four-dimensional agricultural data. The specific objectives of the research were to develop a method for the calculation of credible intervals for the point estimates of Bayesian networks; to develop a model structure to incorporate all the experimental uncertainty associated with various constants, thereby allowing the calculation of more credible credible intervals for a risk assessment; to model a single day’s data from the agricultural dataset in a way that satisfactorily captured the complexities of the data; to build a model for several days’ data, in order to consider how the full data might be modelled; and finally to build a model for the full four-dimensional dataset and to consider the time-varying nature of the contrast of interest, having satisfactorily accounted for possible spatial and temporal autocorrelations.
This work forms five papers, two of which have been published, two submitted, and the final paper still in draft. The first two objectives were met by recasting the risk assessments as directed acyclic graphs (DAGs). In the first case, we elicited uncertainty for the conditional probabilities needed by the Bayesian net, incorporated these into a corresponding DAG, and used Markov chain Monte Carlo (MCMC) to find credible intervals for all the scenarios and outcomes of interest. In the second case, we incorporated the experimental data underlying the risk assessment constants into the DAG, and also treated some of that data as needing to be modelled as an ‘errors-in-variables’ problem [Fuller, 1987]. This illustrated a simple method for the incorporation of experimental error into risk assessments. In considering one day of the three-dimensional agricultural data, it became clear that geostatistical models or conditional autoregressive (CAR) models over the three dimensions were not the best way to approach the data. Instead, CAR models are used with neighbours only in the same depth layer. This gave flexibility to the model, allowing both the spatially structured and non-structured variances to differ at all depths. We call this model the CAR layered model. Given the experimental design, the fixed part of the model could have been modelled as a set of means by treatment and by depth, but doing so would allow little insight into how the treatment effects vary with depth. Hence, a number of essentially non-parametric approaches were taken to see the effects of depth on treatment, with the model of choice incorporating an errors-in-variables approach for depth in addition to a non-parametric smooth. The statistical contribution here was the introduction of the CAR layered model, and the applied contribution the analysis of moisture over depth and estimation of the contrast of interest together with its credible intervals.
These models were fitted using WinBUGS [Lunn et al., 2000]. The work in the fifth paper deals with the fact that with large datasets, the use of WinBUGS becomes more problematic because of its highly correlated term-by-term updating. In this work, we introduce a Gibbs sampler with block updating for the CAR layered model. The Gibbs sampler was implemented by Chris Strickland using pyMCMC [Strickland, 2010]. This framework is then used to consider five days’ data, and we show that moisture in the soil for all the various treatments reaches levels particular to each treatment at a depth of 200 cm and thereafter stays constant, albeit with variances that increase with depth. In an analysis across three spatial dimensions and across time, there are many interactions of time and the spatial dimensions to be considered. Hence, we chose to use a daily model and to repeat the analysis at all time points, effectively creating an interaction model of time by the daily model. Such an approach allows great flexibility. However, it does not allow insight into the way in which the parameter of interest varies over time. Hence, a two-stage approach was also used, with estimates from the first stage being analysed as a set of time series. We see this spatio-temporal interaction model as a useful approach to data measured across three spatial dimensions and time, since it does not assume additivity of the random spatial or temporal effects.
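The credible intervals pursued throughout this thesis abstract are, at the reporting stage, simply posterior percentiles of MCMC draws. A hedged sketch of that final step, using stand-in Beta samples in place of real WinBUGS or pyMCMC output:

```python
import numpy as np

rng = np.random.default_rng(7)

# Once MCMC draws for a quantity are available (here, stand-in Beta samples
# for a conditional probability in a Bayesian net), an equal-tailed 95%
# credible interval is just a pair of posterior percentiles.
posterior_draws = rng.beta(a=8, b=2, size=50_000)

lo, hi = np.percentile(posterior_draws, [2.5, 97.5])
print(round(lo, 2), round(hi, 2))
```

The thesis's contribution lies in producing valid draws for the Bayesian-net quantities in the first place; the interval extraction itself is this one-liner regardless of the sampler used.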
Abstract:
We develop a stochastic endogenous growth model to explain the diversity in growth and inequality patterns and the non-convergence of incomes in transitional economies where an underdeveloped financial sector imposes an implicit, fixed cost on the diversification of idiosyncratic risk. In the model, endogenous growth occurs through physical and human capital deepening, with the latter being the more dominant element. We interpret the fixed cost as a ‘learning by doing’ cost for entrepreneurs who undertake risk in the absence of well-developed financial markets and institutions that help diversify such risk. As such, this cost may be interpreted as the implicit returns foregone due to the lack of diversification opportunities that would otherwise have been available, had such institutions been present. The analytical and numerical results of the model suggest three growth outcomes depending on the productivity differences between the projects and the fixed cost associated with the more productive project. We label these outcomes poverty trap, dual economy and balanced growth. Further analysis of these three outcomes highlights the existence of a diversity within diversity. Specifically, within the ‘poverty trap’ and ‘dual economy’ scenarios, growth and inequality patterns differ depending on the initial conditions. This additional diversity allows the model to capture a richer range of outcomes that are consistent with the empirical experience of several transitional economies.
Abstract:
Integer ambiguity resolution is an indispensable procedure for all high-precision GNSS applications. The correctness of the estimated integer ambiguities is the key to achieving highly reliable positioning, but the solution cannot be validated with classical hypothesis testing methods. The integer aperture estimation theory unifies all existing ambiguity validation tests and provides a new perspective from which to review existing methods, enabling a better understanding of the ambiguity validation problem. This contribution analyses two simple but efficient ambiguity validation tests, the ratio test and the difference test, from three aspects: acceptance region, probability basis and numerical results. The major contributions of this paper can be summarised as follows: (1) The ratio test acceptance region is an overlap of ellipsoids, while the difference test acceptance region is an overlap of half-spaces. (2) The probability basis of these two popular tests is analysed for the first time. The difference test is an approximation to the optimal integer aperture, while the ratio test follows an exponential relationship in probability. (3) The limitations of the two tests are identified for the first time. Both tests may under-evaluate the failure risk if the model is not strong enough or the float ambiguities fall in a particular region. (4) Extensive numerical results are used to compare the performance of the two tests. The simulation results show the ratio test outperforms the difference test in some models, while the difference test performs better in others. In particular, in the medium-baseline kinematic model, the difference test outperforms the ratio test; this superiority is independent of frequency number, observation noise and satellite geometry, but depends on the success rate and failure rate tolerance. A smaller failure rate leads to a larger performance discrepancy.
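The two validation tests analysed above have one-line decision rules once the squared-norm distances from the float solution to the best and second-best integer candidates are in hand. A sketch with invented critical values (real thresholds depend on model strength and the tolerated failure rate, which is the paper's point):

```python
# q1, q2: squared-norm distances from the float ambiguity solution to the
# best and second-best integer candidates. Thresholds c are illustrative
# placeholders, not values recommended by the paper.
def ratio_test(q1, q2, c=2.0):
    # accept the best candidate only if the second-best is sufficiently worse
    return q2 / q1 >= c

def difference_test(q1, q2, c=1.5):
    # same idea, but on the difference rather than the ratio
    return q2 - q1 >= c

q1, q2 = 0.8, 2.4
print(ratio_test(q1, q2), difference_test(q1, q2))  # True True
```

The geometric contrast in the abstract follows directly: fixing q2/q1 = c bounds the acceptance region by ellipsoid-like surfaces, while q2 - q1 = c bounds it by half-spaces.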
Abstract:
Informed broadly by the theory of planned behaviour, this study used qualitative methodology to understand Australian adults' sun-protective decisions. Forty-two adults participated in focus groups where they discussed behavioural (advantages and disadvantages), normative (important referents), and control (barriers and facilitators) beliefs, as well as potential social influences and images of tanned and non-tanned people. Responses were analysed using the consensual qualitative research approach to determine the dominant themes. Themes of fashion and comfort were prominent, the important role of friends and family in sun safe decision-making was highlighted, as was the availability of sun-protective measures (e.g., in an accessible place or in the environment). Additional themes included the need to model sound sun-protective behaviours to (current and future) children, the emphasis on personal choice and personal responsibility to be sun safe, and the influence of Australian identity and culture on tanning and socially acceptable forms of sun protection. These beliefs can be used to inform interventions and public health campaigns targeting sun safety among Australians, a population with the highest skin cancer incidence in the world.
Abstract:
This chapter describes the evolution of a model to propose the relationship between food literacy and nutrition. This model can also be used as a framework for program planning, implementation and evaluation. Practitioners and policy makers invest in food literacy with outcome expectations beyond diet quality. For this reason, a second model was developed to conceptualise the role of food literacy with respect to food security, body weight and chronic disease risk. This second model is useful in positioning food literacy within multi-strategic public health nutrition and chronic disease plans.
Abstract:
Recent decisions of the Family Court of Australia reflect concerns over the adversarial nature of the legal process. The processes and procedures of the judicial system militate against a detailed examination of the issues and rights of the parties in dispute. The limitations of the family law framework are particularly demonstrated in disputes over the custody of children, where the Court has tended to neglect the rights and interests of the primary carer. An alternative "unified family court" framework will be examined in which the Court pursues a more active and interventionist approach in the determination of family law disputes.
Abstract:
Ultraviolet radiation (UV) is the carcinogen that causes the most common malignancy in humans – skin cancer. However, moderate UV exposure is essential for producing vitamin D in our skin. Vitamin D increases the absorption of calcium from the diet, and adequate calcium is necessary for the building and maintenance of bones. Thus, low levels of vitamin D can cause osteomalacia and rickets and contribute to osteoporosis. Emerging evidence also suggests vitamin D may protect against falls, internal cancers, psychiatric conditions, autoimmune diseases and cardiovascular diseases. Since the dominant source of vitamin D is sunlight exposure, there is a need to understand what is a “balanced” level of sun exposure to maintain an adequate level of vitamin D but minimise the risks of eye damage, skin damage and skin cancer resulting from excessive UV exposure. There are many steps in the pathway from incoming solar UV to the eventual vitamin D status of humans (measured as 25-hydroxyvitamin D in the blood), and our knowledge about many of these steps is currently incomplete. This project begins by investigating the levels of UV available for synthesising vitamin D, and how these levels vary across seasons, latitudes and times of the day. The thesis then covers experiments conducted with an in vitro model, which was developed to study several aspects of vitamin D synthesis. Results from the model suggest the relationship between UV dose and vitamin D is not linear. This is an important input into public health messages regarding ‘safe’ UV exposure: larger doses of UV, beyond a certain limit, may not continue to produce vitamin D; however, they will increase the risk of skin cancers and eye damage. The model also showed that, when given identical doses of UV, the amount of vitamin D produced was impacted by temperature. In humans, a temperature-dependent reaction must occur in the top layers of human skin, prior to vitamin D entering the bloodstream.
The hypothesis will be raised that cooler temperatures (occurring in winter and at high latitudes) may reduce vitamin D production in humans. Finally, the model has also been used to study the wavelengths of UV thought to be responsible for producing vitamin D. It appears that vitamin D production is limited to a small range of UV wavelengths, which may be narrower than previously thought. Together, these results suggest that further research is needed into the ability of humans to synthesise vitamin D from sunlight. In particular, more information is needed about the dose-response relationship in humans and about the proposed impact of temperature. Having an accurate action spectrum will also be essential for measuring the available levels of vitamin D-effective UV. As this research continues, it will contribute to the scientific evidence base needed for devising a public health message that will balance the risks of excessive UV exposure with maintaining adequate vitamin D.
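The non-linear dose-response described above can be caricatured with a saturating curve. The functional form and constants here are illustrative assumptions only, not fitted to the thesis data; the point is simply that beyond a certain dose, extra UV adds almost no vitamin D while still adding damage risk:

```python
import math

# Saturating dose-response sketch: yield plateaus at v_max, so doubling the
# UV dose beyond a point produces little extra vitamin D. The exponential
# form and the constants v_max and k are assumptions for illustration.
def vitamin_d_yield(dose, v_max=100.0, k=0.5):
    return v_max * (1.0 - math.exp(-k * dose))

print(round(vitamin_d_yield(2.0), 1), round(vitamin_d_yield(8.0), 1))  # 63.2 98.2
```

Quadrupling the dose from 2 to 8 units raises the (hypothetical) yield by only about a third, and further increases change it barely at all, which is the shape of argument behind the 'safe exposure' message in the abstract.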
Abstract:
Background: The quality of stormwater runoff from ports is significant as it can be an important source of pollution to the marine environment. This is also a significant issue for the Port of Brisbane as it is located in an area of high environmental values. Therefore, it is imperative to develop an in-depth understanding of stormwater runoff quality to ensure that appropriate strategies are in place for quality improvement, where necessary. To this end, the Port of Brisbane Corporation aimed to develop a port specific stormwater model for the Fisherman Islands facility. The need has to be considered in the context of the proposed future developments of the Port area. ----------------- The Project: The research project is an outcome of the collaborative Partnership between the Port of Brisbane Corporation (POBC) and Queensland University of Technology (QUT). A key feature of this Partnership is that it seeks to undertake research to assist the Port in strengthening the environmental custodianship of the Port area through ‘cutting edge’ research and its translation into practical application. ------------------ The project was separated into two stages. The first stage developed a quantitative understanding of the generation potential of pollutant loads in the existing land uses. This knowledge was then used as input for the stormwater quality model developed in the subsequent stage. The aim is to expand this model across the yet to be developed port expansion area. This is in order to predict pollutant loads associated with stormwater flows from this area with the longer term objective of contributing to the development of ecological risk mitigation strategies for future expansion scenarios. ----------------- Study approach: Stage 1 of the overall study confirmed that Port land uses are unique in terms of the anthropogenic activities occurring on them. 
This uniqueness in land use results in distinctive stormwater quality characteristics different to other conventional urban land uses. Therefore, it was not scientifically valid to consider the Port as belonging to a single land use category or as being similar to any typical urban land use. The approach adopted in this study was very different to conventional modelling studies where modelling parameters are developed using calibration. The field investigations undertaken in Stage 1 of the overall study helped to create fundamental knowledge on pollutant build-up and wash-off in different Port land uses. This knowledge was then used in computer modelling so that the specific characteristics of pollutant build-up and wash-off could be replicated. This meant that no calibration processes were involved due to the use of measured parameters for build-up and wash-off. ---------------- Conclusions: Stage 2 of the study was primarily undertaken using the SWMM stormwater quality model. It is a physically based model which replicates natural processes as closely as possible. The time step used and the catchment variability considered were adequate to accommodate the temporal and spatial variability of input parameters, and the parameters used in the modelling reflect the true nature of rainfall-runoff and pollutant processes to the best of currently available knowledge. In this study, the initial loss values adopted for the impervious surfaces are relatively high compared to values noted in the research literature. However, given the scientifically valid approach used for the field investigations, it is appropriate to adopt the initial losses derived from this study for future modelling of Port land uses. The relatively high initial losses will reduce the runoff volume generated as well as the frequency of runoff events significantly. Apart from initial losses, most of the other parameters used in SWMM modelling are generic to most modelling studies.
Development of parameters for MUSIC model source nodes was one of the primary objectives of this study. MUSIC uses the mean and standard deviation of pollutant parameters based on a normal distribution. However, based on the values generated in this study, the variation of Event Mean Concentrations (EMCs) for Port land uses within the given investigation period does not fit a normal distribution. This is possibly because only one specific location, namely the Port of Brisbane, was considered, unlike the MUSIC model, where a range of areas with different geographic and climatic conditions was investigated. Consequently, the assumptions used in MUSIC are not entirely applicable to the analysis of water quality in Port land uses. Therefore, in using the parameters included in this report for MUSIC modelling, it is important to note that this may result in under- or over-estimation of annual pollutant loads. It is recommended that the annual pollutant load values given in the report be used as a guide to assess the accuracy of the modelling outcomes. A step-by-step guide for using the knowledge generated from this study for MUSIC modelling is given in Table 4.6. ------------------ Recommendations: The following recommendations are provided to further strengthen the cutting edge nature of the work undertaken: * It is important to further validate the approach recommended for stormwater quality modelling at the Port. Validation will require data collection in relation to rainfall, runoff and water quality from the selected Port land uses. Additionally, the recommended modelling approach could be applied to a soon-to-be-developed area to assess ‘before’ and ‘after’ scenarios. * In the modelling study, TSS was adopted as the surrogate parameter for other pollutants. This approach was based on other urban water quality research undertaken at QUT. The validity of this approach should be further assessed for Port land uses.
* The adoption of TSS as a surrogate parameter for other pollutants, and the confirmation that the <150 μm particle size range was predominant in suspended solids for pollutant wash-off, give rise to a number of important considerations. The ability of the existing structural stormwater mitigation measures to remove the <150 μm particle size range needs to be assessed. The feasibility of introducing source control measures, as opposed to end-of-pipe measures, for stormwater quality improvement may also need to be considered.
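The annual pollutant loads discussed above follow from a simple EMC-times-volume calculation, which is the kind of relationship MUSIC source-node parameters encode. All numbers below are hypothetical, for illustration only:

```python
# Annual pollutant load from an event mean concentration (EMC), the kind of
# source-node input MUSIC expects. Both values are purely illustrative,
# not measurements from the Port of Brisbane study.
emc_tss_mg_per_l = 120.0      # hypothetical TSS event mean concentration
annual_runoff_m3 = 50_000.0   # hypothetical annual runoff for one land use

# 1 mg/L == 1 g/m^3, so load in grams is EMC * volume; divide by 1000 for kg
load_kg = emc_tss_mg_per_l * annual_runoff_m3 / 1000.0
print(load_kg)  # 6000.0
```

This also shows why the high initial losses matter: they shrink the annual runoff volume term directly, and the load estimate with it.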
Abstract:
The high morbidity and mortality associated with atherosclerotic coronary vascular disease (CVD) and its complications are being lessened by increased knowledge of risk factors, effective preventative measures and proven therapeutic interventions. However, significant CVD morbidity remains, and sudden cardiac death continues to be a presenting feature for some subsequently diagnosed with CVD. Coronary vascular disease is also the leading cause of anaesthesia-related complications. Stress electrocardiography/exercise testing is predictive of 10-year risk of CVD events, and the cardiovascular variables used to score this test are monitored peri-operatively. Similar physiological time-series datasets are being subjected to data mining methods for the prediction of medical diagnoses and outcomes. This study aims to find predictors of CVD using anaesthesia time-series data and patient risk factor data. Several pre-processing and predictive data mining methods are applied to this data. Physiological time-series data related to anaesthetic procedures are subjected to pre-processing methods for removal of outliers, calculation of moving averages, as well as data summarisation and data abstraction methods. Feature selection methods of both wrapper and filter types are applied to derived physiological time-series variable sets alone and to the same variables combined with risk factor variables. The ability of these methods to identify subsets of highly correlated but non-redundant variables is assessed. The major dataset is derived from the entire anaesthesia population, and subsets of this population are considered to be at increased anaesthesia risk based on their need for more intensive monitoring (invasive haemodynamic monitoring and additional ECG leads).
Because of the unbalanced class distribution in the data, majority class under-sampling and the Kappa statistic, together with misclassification rate and area under the ROC curve (AUC), are used for evaluation of models generated using different prediction algorithms. The performance of models derived from feature-reduced datasets reveals the filter method, Cfs subset evaluation, to be most consistently effective, although Consistency-derived subsets tended to slightly increase accuracy but markedly increase complexity. The use of misclassification rate (MR) for model performance evaluation is influenced by class distribution. This could be eliminated by consideration of the AUC or Kappa statistic, as well as by evaluation of subsets with an under-sampled majority class. The noise and outlier removal pre-processing methods produced models with MR ranging from 10.69 to 12.62, with the lowest value being for data from which both outliers and noise were removed (MR 10.69). For the raw time-series dataset, MR is 12.34. Feature selection reduces MR to between 9.8 and 10.16, with time-segmented summary data (dataset F) having an MR of 9.8 and raw time-series summary data (dataset A) 9.92. However, for all datasets based on time-series data alone, the complexity is high. For most pre-processing methods, Cfs could identify a subset of correlated and non-redundant variables from the time-series-only datasets, but models derived from these subsets are of one leaf only. MR values are consistent with class distribution in the subset folds evaluated in the n-fold cross-validation method. For models based on Cfs-selected time-series-derived and risk factor (RF) variables, the MR ranges from 8.83 to 10.36, with dataset RF_A (raw time-series data and RF) being 8.85 and dataset RF_F (time-segmented time-series variables and RF) being 9.09.
The models based on counts of outliers and counts of data points outside the normal range (Dataset RF_E), and on derived variables based on time series transformed using Symbolic Aggregate Approximation (SAX) with associated time-series pattern cluster membership (Dataset RF_G), perform the least well, with MRs of 10.25 and 10.36 respectively. For coronary vascular disease prediction, nearest neighbour (NNge) and the support vector machine based method, SMO, have the highest MRs of 10.1 and 10.28, while logistic regression (LR) and the decision tree (DT) method, J48, have MRs of 8.85 and 9.0 respectively. DT rules are the most comprehensible and clinically relevant. The predictive accuracy increase achieved by addition of risk factor variables to time-series variable based models is significant. The addition of time-series derived variables to models based on risk factor variables alone is associated with a trend to improved performance. Data mining of feature-reduced anaesthesia time-series variables together with risk factor variables can produce compact and moderately accurate models able to predict coronary vascular disease. Decision tree analysis of time-series data combined with risk factor variables yields rules which are more accurate than models based on time-series data alone. The limited additional value provided by electrocardiographic variables when compared to use of risk factors alone is similar to recent suggestions that exercise electrocardiography (exECG) under standardised conditions has limited additional diagnostic value over risk factor analysis and symptom pattern. The pre-processing used in this study had limited effect when time-series variables and risk factor variables are used as model input.
In the absence of risk factor input, the use of time-series variables after outlier removal, and of time-series variables based on physiological values falling outside the accepted normal range, is associated with some improvement in model performance.
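The Kappa statistic used above precisely because of the unbalanced class distribution can be computed directly from a confusion matrix; the counts below are invented for illustration:

```python
# Cohen's kappa from a 2x2 confusion matrix: chance-corrected agreement,
# useful when class distribution is unbalanced and raw accuracy misleads.
# These counts are invented, not results from the anaesthesia study.
tp, fp, fn, tn = 40, 10, 20, 130
n = tp + fp + fn + tn

observed = (tp + tn) / n  # raw accuracy (1 - misclassification rate)
# expected agreement by chance, from the marginal totals
p_yes = ((tp + fn) / n) * ((tp + fp) / n)
p_no = ((fp + tn) / n) * ((fn + tn) / n)
expected = p_yes + p_no

kappa = (observed - expected) / (1 - expected)
print(round(kappa, 3))  # 0.625
```

Here raw accuracy is 0.85 but kappa is only 0.625, illustrating why the abstract pairs MR with kappa and AUC: much of the apparent accuracy on an unbalanced dataset is attainable by chance alone.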