251 results for Modelling lifetime data
at Queensland University of Technology - ePrints Archive
Abstract:
An educational priority of many nations is to enhance mathematical learning in early childhood. One area in need of special attention is that of statistics. This paper argues for a renewed focus on statistical reasoning in the beginning school years, with opportunities for children to engage in data modelling activities. Such modelling involves investigations of meaningful phenomena, deciding what is worthy of attention (i.e., identifying complex attributes), and then progressing to organising, structuring, visualising, and representing data. Results are reported from the first year of a three-year longitudinal study in which three classes of first-grade children and their teachers engaged in activities that required the creation of data models. The theme of “Looking after our Environment,” a component of the children’s science curriculum at the time, provided the context for the activities. Findings focus on how the children dealt with given complex attributes and how they generated their own attributes in classifying broad data sets, and the nature of the models the children created in organising, structuring, and representing their data.
Abstract:
This study considered the problem of predicting survival, based on three alternative models: a single Weibull, a mixture of Weibulls and a cure model. Instead of the common procedure of choosing a single “best” model, where “best” is defined in terms of goodness of fit to the data, a Bayesian model averaging (BMA) approach was adopted to account for model uncertainty. This was illustrated using a case study in which the aim was the description of lymphoma cancer survival with covariates given by phenotypes and gene expression. The results of this study indicate that when the sample size was sufficiently large, one of the three models emerged as having the highest probability given the data, as indicated by the goodness-of-fit measure, the Bayesian information criterion (BIC). However, when the sample size was reduced, no single model was revealed as “best”, suggesting that a BMA approach would be appropriate. Although a BMA approach can compromise on goodness of fit to the data (when compared to the true model), it can provide robust predictions and facilitate more detailed investigation of the relationships between gene expression and patient survival. Keywords: Bayesian modelling; Bayesian model averaging; Cure model; Markov Chain Monte Carlo; Mixture model; Survival analysis; Weibull distribution
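The abstract does not spell out the computations, but the BIC-based weighting it alludes to can be sketched briefly. The following is a minimal illustration, assuming the standard approximation p(M_k | data) ∝ exp(-BIC_k / 2) with equal model priors; the BIC values, Weibull parameters and the mixture/cure forms below are invented for illustration, not the study's fitted models.

```python
import numpy as np
from scipy.stats import weibull_min

# Hypothetical BIC values for the three candidate survival models
# (single Weibull, Weibull mixture, cure model); lower BIC = better fit.
bic = {"weibull": 1520.4, "weibull_mixture": 1514.8, "cure": 1517.1}

# Approximate posterior model probabilities from BIC (equal prior weights):
# p(M_k | data) ~ exp(-BIC_k / 2) / sum_j exp(-BIC_j / 2)
vals = np.array(list(bic.values()))
rel = np.exp(-(vals - vals.min()) / 2.0)   # subtract the minimum for numerical stability
weights = dict(zip(bic.keys(), rel / rel.sum()))

def bma_survival(t, survival_fns, weights):
    """Model-averaged survival curve: weighted sum of each model's S(t)."""
    return sum(weights[name] * fn(t) for name, fn in survival_fns.items())

# Illustrative fitted survival functions (parameters are placeholders only).
survival_fns = {
    "weibull": lambda t: weibull_min.sf(t, c=1.2, scale=36.0),
    "weibull_mixture": lambda t: (0.6 * weibull_min.sf(t, c=1.5, scale=30.0)
                                  + 0.4 * weibull_min.sf(t, c=0.9, scale=60.0)),
    "cure": lambda t: 0.2 + 0.8 * weibull_min.sf(t, c=1.3, scale=34.0),
}
print(weights)
print(bma_survival(np.array([12.0, 24.0, 60.0]), survival_fns, weights))
```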
Abstract:
A central tenet in the theory of reliability modelling is the quantification of the probability of asset failure. In general, reliability depends on asset age and the maintenance policy applied. Usually, failure and maintenance times are the primary inputs to reliability models. However, for many organisations, different aspects of these data are often recorded in different databases (e.g. work order notifications, event logs, condition monitoring data, and process control data). These recorded data cannot be interpreted individually, since they typically do not have all the information necessary to ascertain failure and preventive maintenance times. This paper presents a methodology for the extraction of failure and preventive maintenance times using commonly-available, real-world data sources. A text-mining approach is employed to extract keywords indicative of the source of the maintenance event. Using these keywords, a Naïve Bayes classifier is then applied to attribute each machine stoppage to one of two classes: failure or preventive. The accuracy of the algorithm is assessed and the classified failure time data are then presented. The applicability of the methodology is demonstrated on a maintenance data set from an Australian electricity company.
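As a rough sketch of the classification step described above (not the paper's implementation), a bag-of-words Naïve Bayes classifier can attribute each stoppage record to "failure" or "preventive". The notification texts, labels and library choice below are assumptions for illustration only.

```python
# Minimal sketch: keyword-based classification of maintenance stoppages into
# "failure" vs "preventive" using a bag-of-words Naive Bayes classifier.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

train_text = [
    "pump tripped on high vibration, unplanned outage",
    "scheduled lubrication and inspection of conveyor",
    "bearing seized, emergency replacement",
    "routine calibration and inspection of protection relay",
]
train_label = ["failure", "preventive", "failure", "preventive"]

model = make_pipeline(CountVectorizer(lowercase=True, stop_words="english"),
                      MultinomialNB())
model.fit(train_text, train_label)

new_events = ["routine scheduled inspection of motor",
              "unplanned outage after bearing seized"]
print(model.predict(new_events))   # with this toy data: ['preventive', 'failure']
```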
Abstract:
Seasonal patterns have been found in a remarkable range of health conditions, including birth defects, respiratory infections and cardiovascular disease. Accurately estimating the size and timing of seasonal peaks in disease incidence is an aid to understanding the causes and possibly to developing interventions. With global warming increasing the intensity of seasonal weather patterns around the world, a review of the methods for estimating seasonal effects on health is timely. This is the first book on statistical methods for seasonal data written for a health audience. It describes methods for a range of outcomes (including continuous, count and binomial data) and demonstrates appropriate techniques for summarising and modelling these data. It has a practical focus and uses interesting examples to motivate and illustrate the methods. The statistical procedures and example data sets are available in an R package called ‘season’. Adrian Barnett is a senior research fellow at Queensland University of Technology, Australia. Annette Dobson is a Professor of Biostatistics at The University of Queensland, Australia. Both are experienced medical statisticians with a commitment to statistical education and have previously collaborated in research in the methodological developments and applications of biostatistics, especially to time series data. Among other projects, they worked together on revising the well-known textbook "An Introduction to Generalized Linear Models," third edition, Chapman Hall/CRC, 2008. In their new book they share their knowledge of statistical methods for examining seasonal patterns in health.
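The book's worked examples are available in the R package 'season'; purely as a rough illustration (not the authors' code), a cosinor-style sine/cosine regression of the kind used to summarise the size and timing of a seasonal peak might look like the following Python sketch on simulated monthly counts.

```python
# Illustrative cosinor-style sketch: estimating the amplitude and timing of a
# seasonal peak in monthly counts with a Poisson GLM and sine/cosine terms.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
months = np.arange(120)                               # ten years of monthly data
season_term = 0.4 * np.cos(2 * np.pi * (months % 12) / 12 - 1.0)
counts = rng.poisson(np.exp(3.0 + season_term))       # simulated seasonal counts

X = sm.add_constant(np.column_stack([np.cos(2 * np.pi * months / 12),
                                     np.sin(2 * np.pi * months / 12)]))
fit = sm.GLM(counts, X, family=sm.families.Poisson()).fit()

b_cos, b_sin = fit.params[1], fit.params[2]
amplitude = np.hypot(b_cos, b_sin)                    # size of the seasonal effect
phase = np.arctan2(b_sin, b_cos)                      # timing of the peak (radians)
peak_month = (phase / (2 * np.pi)) * 12 % 12
print(f"amplitude={amplitude:.3f}, peak at month index ~{peak_month:.1f}")
```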
Abstract:
Background Exercise referral schemes (ERS) aim to identify inactive adults in the primary care setting. The primary care professional refers the patient to a third-party service, with this service taking responsibility for prescribing and monitoring an exercise programme tailored to the needs of the patient. This paper examines the cost-effectiveness of ERS in promoting physical activity compared with usual care in the primary care setting. Methods A decision analytic model was developed to estimate the cost-effectiveness of ERS from a UK NHS perspective. The costs and outcomes of ERS were modelled over the patient's lifetime. Data were derived from a systematic review of the literature on the clinical and cost-effectiveness of ERS, and on parameter inputs in the modelling framework. Outcomes were expressed as incremental cost per quality-adjusted life-year (QALY). Deterministic and probabilistic sensitivity analyses investigated the impact of varying ERS cost and effectiveness assumptions. Sub-group analyses explored the cost-effectiveness of ERS in sedentary people with an underlying condition. Results Compared with usual care, the mean incremental lifetime cost per patient for ERS was £169 and the mean incremental QALY gain was 0.008, generating a base-case incremental cost-effectiveness ratio (ICER) for ERS of £20,876 per QALY in sedentary individuals without a diagnosed medical condition. There was a 51% probability that ERS was cost-effective at £20,000 per QALY and an 88% probability that ERS was cost-effective at £30,000 per QALY. In sub-group analyses, the cost per QALY for ERS in sedentary obese individuals was £14,618, and in sedentary hypertensives and sedentary individuals with depression the estimated cost per QALY was £12,834 and £8,414, respectively. Incremental lifetime costs and benefits associated with ERS were small, reflecting the preventative public health context of the intervention, resulting in estimates of cost-effectiveness that are sensitive to variations in the relative risk of becoming physically active and the cost of ERS. Conclusions ERS is associated with a modest increase in lifetime costs and benefits. The cost-effectiveness of ERS is highly sensitive to small changes in the effectiveness and cost of ERS and is subject to significant uncertainty, mainly due to limitations in the clinical effectiveness evidence base.
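For readers unfamiliar with the metric, the base-case figure follows from the standard ICER definition (assumed here, since the abstract quotes only the result); the rounded means quoted above give approximately the reported value.

```latex
% Standard ICER definition (assumed; the abstract quotes only the result):
\[
  \mathrm{ICER} \;=\; \frac{\Delta C}{\Delta E}
  \;=\; \frac{\text{mean incremental lifetime cost}}{\text{mean incremental QALYs}}
  \;\approx\; \frac{169}{0.008}
  \;\approx\; 21{,}100 \ \text{GBP per QALY}.
\]
```

The reported base case of £20,876 per QALY is consistent with this; the small gap presumably reflects rounding of the quoted means. ERS is then deemed cost-effective at a willingness-to-pay threshold (£20,000 or £30,000 per QALY above) when the ICER falls below that threshold.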
Abstract:
In this work, we examine unbalanced computation between an initiator and a responder that leads to resource exhaustion attacks in key exchange protocols. We construct models for two cryptographic protocols: one is the well-known Internet protocol Secure Sockets Layer (SSL), and the other is the Host Identity Protocol (HIP), which has built-in DoS-resistant mechanisms. To examine such protocols, we develop a formal framework based on Timed Coloured Petri Nets (Timed CPNs) and use a simulation approach provided in CPN Tools to achieve a formal analysis. By adopting the key idea of Meadows' cost-based framework and refining the definition of operational costs during protocol execution, our simulation provides an accurate estimate of the cost of protocol execution, compared across principals, as well as the percentage of successful connections from legitimate users, under four different strategies of DoS attack.
Abstract:
Hazard and reliability prediction of an engineering asset is one of the significant fields of research in Engineering Asset Health Management (EAHM). In real-life situations where an engineering asset operates under dynamic operational and environmental conditions, the lifetime of an engineering asset can be influenced and/or indicated by different factors that are termed covariates. The Explicit Hazard Model (EHM), a covariate-based hazard model, is a new approach to hazard prediction that explicitly incorporates both internal and external covariates into one model. EHM is an appropriate model for the analysis of lifetime data in the presence of both internal and external covariates in the reliability field. This paper presents applications of the methodology that is introduced and illustrated in the theory part of this study. In this paper, the semi-parametric EHM is applied to a case study to predict the hazard and reliability of resistance elements on a Resistance Corrosion Sensor Board (RCSB).
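The abstract does not reproduce the EHM's functional form. Purely as a hedged illustration of the semi-parametric, covariate-based class of hazard models it belongs to, a Cox-style formulation with separate external and internal covariates might be written as below, with reliability obtained from the cumulative hazard; this generic form is an assumption for exposition, not the EHM itself.

```latex
% Generic semi-parametric covariate-based hazard (illustrative form only, not the EHM):
\[
  h\big(t \mid \mathbf{z}(t)\big) \;=\; h_0(t)\,
  \exp\!\big(\boldsymbol{\beta}^{\top}\mathbf{z}_{\mathrm{ext}}(t)
           + \boldsymbol{\gamma}^{\top}\mathbf{z}_{\mathrm{int}}(t)\big),
  \qquad
  R(t) \;=\; \exp\!\Big(-\int_0^{t} h\big(u \mid \mathbf{z}(u)\big)\,\mathrm{d}u\Big),
\]
% where h_0(t) is a baseline hazard, z_ext(t) are external covariates (operating and
% environmental conditions) and z_int(t) are internal covariates (condition indicators).
```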
Abstract:
Purpose: Although the branding literature emerged during the 1940s, research relating to tourism destination branding has only gained momentum since the late 1990s. In particular, there remains a lack of theory that addresses the measurement of the effectiveness of destination branding over time. The purpose of the research was to test the effectiveness of a model of consumer-based brand equity (CBBE) for a country destination.---------- Design/methodology: A model of consumer-based brand equity was adapted from the marketing literature and applied to a nation context. The model was tested using structural equation modelling with data from a large Chilean sample (n=845), comprising a mix of previous visitors and non-visitors. The model fits the data well.---------- Findings: This paper reports the results of an investigation into brand equity for Australia as a long-haul destination in an emerging market. The research took place just before the launch of the nation’s fourth new brand campaign in six years. The results indicate Australia is a well-known but not compelling destination brand for tourists in Chile, which reflects the lower priority the South American market has been given by the national tourism office (NTO).---------- Practical implications: It is suggested that CBBE measures could be analysed at various points in time to track any strengthening or weakening of market perceptions in relation to brand objectives. A standard CBBE instrument could provide long-term effectiveness performance measures regardless of changes in destination marketing organisation (DMO) staff, advertising agency, other stakeholders, and budget.---------- Originality/value: This study contributes to the nation-branding literature by being one of the first to test the efficacy of a model of consumer-based brand equity for a tourism destination brand.
Abstract:
This thesis presents the outcomes of a comprehensive research study undertaken to investigate the influence of rainfall and catchment characteristics on urban stormwater quality. The knowledge created is expected to contribute to a greater understanding of urban stormwater quality and thereby enhance the design of stormwater quality treatment systems. The research study was undertaken based on selected urban catchments in Gold Coast, Australia. The research methodology included field investigations, laboratory testing, computer modelling and data analysis. Both univariate and multivariate data analysis techniques were used to investigate the influence of rainfall and catchment characteristics on urban stormwater quality. The rainfall characteristics investigated included average rainfall intensity and rainfall duration, whilst catchment characteristics included land use, impervious area percentage, urban form and pervious area location. The catchment-scale data for the analysis were obtained from four residential catchments, including rainfall-runoff records, drainage network data, stormwater quality data and land use and land cover data. Pollutant build-up samples were collected from twelve road surfaces in residential, commercial and industrial land use areas. The relationships between rainfall characteristics, catchment characteristics and urban stormwater quality were investigated based on residential catchments and then extended to other land uses. Based on the influence rainfall characteristics exert on urban stormwater quality, rainfall events can be classified into three different types, namely, high average intensity-short duration (Type 1), high average intensity-long duration (Type 2) and low average intensity-long duration (Type 3). This provides an innovative approach to conventional modelling, which does not commonly relate stormwater quality to rainfall characteristics. Additionally, it was found that the threshold intensity for pollutant wash-off from urban catchments is much less than for rural catchments. High average intensity-short duration rainfall events are cumulatively responsible for the generation of a major fraction of the annual pollutants load compared to the other rainfall event types. Additionally, rainfall events of less than 1-year ARI, such as 6-month ARI, should be considered for treatment design as they generate a significant fraction of the annual runoff volume and, by implication, a significant fraction of the pollutants load. This implies that stormwater treatment designs based on larger rainfall events would not be feasible in the context of cost-effectiveness, efficiency in treatment performance and possible savings in the land area needed. This also suggests that the simulation of long-term continuous rainfall events for stormwater treatment design may not be needed and that event-based simulations would be adequate. The investigations into the relationship between catchment characteristics and urban stormwater quality found that, other than conventional catchment characteristics such as land use and impervious area percentage, other catchment characteristics such as urban form and pervious area location also play important roles in influencing urban stormwater quality. These outcomes point to the fact that the conventional modelling approach in the design of stormwater quality treatment systems, which is commonly based on land use and impervious area percentage, would be inadequate.
It was also noted that small, uniformly urbanised areas within a larger mixed catchment produce relatively lower variations in stormwater quality and, as expected, lower runoff volume, with the opposite being the case for large mixed-use urbanised catchments. Therefore, a decentralised approach to water quality treatment would be more effective than an "end-of-pipe" approach. The investigation of pollutant build-up on different land uses showed that pollutant build-up characteristics vary even within the same land use. Therefore, the conventional approach in stormwater quality modelling, which is based solely on land use, may prove to be inappropriate. Industrial land use had relatively higher variability in maximum pollutant build-up, build-up rate and particle size distribution than the other two land uses. However, commercial and residential land uses had relatively higher variations in nutrient and organic carbon build-up. Additionally, it was found that particle size distribution had a relatively higher variability for all three land uses compared to the other build-up parameters. The high variability in particle size distribution for all land uses illustrates the dissimilarities associated with the fine and coarse particle size fractions even within the same land use and hence the variations in stormwater quality in relation to pollutants adsorbing to particles of different sizes.
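The three-way rainfall event typology described in the abstract above does not come with numeric cut-offs; purely to make the classification concrete, a hedged sketch with placeholder intensity and duration thresholds might look like this.

```python
# Hedged sketch of the Type 1/2/3 rainfall event typology; the thresholds are
# placeholders, not values quoted in the thesis abstract.
from dataclasses import dataclass

@dataclass
class RainfallEvent:
    avg_intensity_mm_per_h: float
    duration_h: float

def classify_event(event: RainfallEvent,
                   high_intensity: float = 20.0,      # assumed threshold (mm/h)
                   long_duration: float = 2.0) -> str:  # assumed threshold (h)
    """Return the event type following the typology described above."""
    high = event.avg_intensity_mm_per_h >= high_intensity
    long_ = event.duration_h >= long_duration
    if high and not long_:
        return "Type 1: high average intensity, short duration"
    if high and long_:
        return "Type 2: high average intensity, long duration"
    if not high and long_:
        return "Type 3: low average intensity, long duration"
    return "unclassified: low average intensity, short duration"

print(classify_event(RainfallEvent(35.0, 0.5)))   # -> Type 1
```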
Abstract:
This thesis has contributed to the advancement of knowledge in disease modelling by addressing interesting and crucial issues relevant to modelling health data over space and time. The research has led to an increased understanding of spatial scales, temporal scales, and spatial smoothing for modelling diseases, in terms of both their methodology and their applications. This research is of particular significance to researchers seeking to employ statistical modelling techniques over space and time in various disciplines. A broad class of statistical models is employed to assess what impact spatial and temporal scales have on simulated and real data.
Abstract:
Suboptimal restraint use, particularly the incorrect use of restraints, is a significant and widespread problem among child vehicle occupants, and increases the risk of injury. Previous research has identified comfort as a potential factor influencing suboptimal restraint use. Both the real comfort experienced by the child and the parent’s perception of the child’s comfort are reported to influence the optimal use of restraints. Problems with real comfort may lead the child to misuse the restraint in their attempt to achieve better comfort, whilst parent-perceived discomfort has been reported as a driver for premature graduation and inappropriate restraint choice. However, this work has largely been qualitative. There has been no research that objectively studies either the association between real and parent-perceived comfort, or any association between comfort and suboptimal restraint use. One barrier to such studies is the absence of validated tools for quantifying real comfort in children. We aimed to develop methods to examine both real and parent-perceived comfort and to examine their effects on suboptimal restraint use. We conducted online parent surveys (n=470) to explore what drives parental perceptions of their child’s comfort in restraint systems (study 1) and used data from field observation studies (n=497) to examine parent-perceived comfort and its relationship with observed restraint use (study 2). We developed methods to measure comfort in children in a laboratory setting (n=14), using video analysis to estimate a Discomfort Avoidance Behaviour (DAB) score, pressure mapping, and adapted survey tools to differentiate between comfortable and induced-discomfort conditions (study 3). Preliminary analysis of our recent online survey of Australian parents (study 1) indicates that 23% of parents report comfort as a consideration when making a decision to change restraints. Logistic regression modelling of data collected during the field observation study (study 2) revealed that parent-perceived discomfort was not significantly associated with premature graduation. Contrary to expectation, children of parents who reported that their child was comfortable were almost twice as likely to have been incorrectly restrained (p<0.01, 95% CI 1.24–2.77). In the laboratory study (study 3), we found that our adapted survey tools did not provide a reliable measurement of real comfort among children. However, our DAB score was able to differentiate between comfortable and induced-discomfort conditions and correlated well with pressure mapping. Our results suggest that while some parents report concern about their child’s comfort, parent-reported comfort levels were not associated with restraint choice. If comfort is important for optimal restraint use, it is likely to be the real comfort of the child rather than that reported by the parent. The method we have developed for studying real comfort can be used in naturalistic studies involving child occupants to further understand this relationship. This work will be of interest to vehicle and child restraint manufacturers interested in improving restraint design for young occupants, as well as to researchers and other stakeholders interested in reducing the incidence of restraint misuse among children.
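As an illustration of the kind of logistic regression reported for study 2 (incorrect restraint use modelled against parent-reported comfort), the sketch below uses simulated data; the variable names, effect size and resulting odds ratio are assumptions, not the study's estimates.

```python
# Hedged sketch: logistic regression of incorrect restraint use on parent-reported
# comfort, with the odds ratio and 95% CI extracted on the exponentiated scale.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 497
parent_reports_comfortable = rng.binomial(1, 0.7, size=n)
# Simulate higher odds of incorrect use when the parent reports the child is comfortable.
p_incorrect = 1 / (1 + np.exp(-(-0.4 + 0.6 * parent_reports_comfortable)))
incorrect_use = rng.binomial(1, p_incorrect)

X = sm.add_constant(parent_reports_comfortable)
fit = sm.Logit(incorrect_use, X).fit(disp=False)

odds_ratio = np.exp(fit.params[1])
ci_low, ci_high = np.exp(fit.conf_int()[1])   # 95% CI on the odds-ratio scale
print(f"OR = {odds_ratio:.2f}, 95% CI {ci_low:.2f}-{ci_high:.2f}")
```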
Abstract:
Climbing Mountains, Building Bridges is a rich theme for exploring some of the “challenges, obstacles, links, and connections” facing mathematics education within the current STEM climate (Science, Technology, Engineering and Mathematics). This paper first considers some of the issues and debates surrounding the nature of STEM education, including perspectives on its interdisciplinary nature. It is next argued that mathematics is in danger of being overshadowed, in particular by science, in the global urgency to advance STEM competencies in schools and the workforce. Some suggestions are offered for lifting the profile of mathematics education within an integrated STEM context, with examples drawn from modelling with data in the sixth grade.
Abstract:
This study examines a matrix of synthetic water samples designed to include conditions that favour brominated disinfection by-product (Br-DBP) formation, in order to provide predictive models suitable for high Br-DBP-forming waters such as salinity-impacted waters. Br-DBPs are, in general, known to be more toxic than their chlorinated analogues, and their formation may be favoured by routine water treatment practices such as coagulation/flocculation under specific conditions; therefore, the circumstances surrounding their formation must be understood. The chosen factors were bromide concentration, mineral alkalinity, the bromide to dissolved organic carbon (Br/DOC) ratio and Suwannee River natural organic matter concentration. The relationships between these parameters and DBP formation were evaluated by response surface modelling of data generated using a face-centred central composite experimental design. Predictive models for ten brominated and/or chlorinated DBPs are presented, as well as models for total trihalomethanes (tTHMs) and total dihaloacetonitriles (tDHANs), and bromide substitution factors for the THM and DHAN classes. The relationships described revealed that increasing alkalinity and an increasing Br/DOC ratio were associated with increasing bromination of THMs and DHANs, suggesting that DOC-lowering treatment methods that do not also remove bromide, such as enhanced coagulation, may create optimal conditions for Br-DBP formation in waters in which bromide is present.
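As a hedged, scaled-down illustration of the response-surface step (two coded factors instead of the study's four, with invented response values), a face-centred design and a full quadratic model might be fitted as follows; this is not the study's model or data.

```python
# Hedged two-factor illustration: a face-centred central composite design
# (coded levels -1, 0, +1) and a full quadratic response-surface model.
import numpy as np
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression

# Face-centred CCD for two coded factors: factorial corners, face points, centre point.
design = np.array([[-1, -1], [1, -1], [-1, 1], [1, 1],   # factorial points
                   [-1, 0], [1, 0], [0, -1], [0, 1],     # face (axial) points
                   [0, 0]])                               # centre point
# Invented response values (e.g. a THM concentration in ug/L) at each design point.
response = np.array([12.0, 25.0, 18.0, 44.0, 15.0, 33.0, 14.0, 30.0, 21.0])

quad = PolynomialFeatures(degree=2, include_bias=False)
X = quad.fit_transform(design)             # x1, x2, x1^2, x1*x2, x2^2
rsm = LinearRegression().fit(X, response)

print(dict(zip(quad.get_feature_names_out(["br", "alk"]), rsm.coef_.round(2))))
print("predicted response at (br=+1, alk=+0.5):",
      rsm.predict(quad.transform([[1.0, 0.5]])).round(1))
```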