867 results for Bayesian hierarchical model
Abstract:
Precise identification of the time when a change in a hospital outcome has occurred enables clinical experts to search for a potential special cause more effectively. In this paper, we develop change point estimation methods for the survival time of a clinical procedure in the presence of patient mix, within a Bayesian framework. We apply Bayesian hierarchical models to formulate the change point where there exists a step change in the mean survival time of patients who underwent cardiac surgery. The data are right censored since monitoring is conducted over a limited follow-up period. We capture the effect of risk factors prior to surgery using a Weibull accelerated failure time regression model. Markov chain Monte Carlo is used to obtain posterior distributions of the change point parameters, including the location and magnitude of changes, as well as the corresponding probabilistic intervals and inferences. The performance of the Bayesian estimator is investigated through simulations, and the results show that precise estimates can be obtained when the estimator is used in conjunction with risk-adjusted survival time CUSUM control charts across different magnitude scenarios. The proposed estimator performs better when a longer follow-up period (censoring time) is applied. In comparison with the alternative built-in CUSUM estimator, the Bayesian estimator yields more accurate and precise estimates. These advantages are reinforced when the probability quantification, flexibility and generalizability of the Bayesian change point detection model are also considered.
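As a rough sketch of the ingredients described above, the following computes a Weibull accelerated failure time log-likelihood with right censoring and an optional step change in the log-scale. It is an illustrative simplification, not the authors' hierarchical model; the names `tau` and `delta` for the change point location and magnitude are assumptions.

```python
import numpy as np
from scipy import stats

def weibull_aft_loglik(times, censored, X, beta, shape, tau=None, delta=0.0):
    """Log-likelihood of a Weibull AFT model with right censoring.

    log(scale_i) = X_i @ beta, plus an illustrative step change of
    size `delta` for observations with index >= `tau`.
    """
    log_scale = X @ beta
    if tau is not None:
        log_scale = log_scale + delta * (np.arange(len(times)) >= tau)
    scale = np.exp(log_scale)
    # Observed events contribute the log-density; censored
    # observations contribute the log-survival function.
    logpdf = stats.weibull_min.logpdf(times, shape, scale=scale)
    logsf = stats.weibull_min.logsf(times, shape, scale=scale)
    return float(np.sum(np.where(censored, logsf, logpdf)))
```

In a Bayesian analysis this likelihood would be combined with priors on `beta`, `shape`, `tau` and `delta`, and the posterior explored by MCMC.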
Abstract:
Navigational collisions are one of the major safety concerns in many seaports. Despite the extent of recent work on port navigational safety, little is known about harbor pilots' perception of collision risks in port fairways. This paper uses a hierarchical ordered probit model to investigate associations between perceived risks and the geometric and traffic characteristics of fairways as well as pilot attributes. Perceived risk data, collected through a risk perception survey of Singapore port pilots, are used to calibrate the model. The intra-class correlation coefficient justifies use of the hierarchical model over an ordinary model. Results show higher perceived risks in fairways attached to anchorages, and in those featuring sharper bends and higher traffic operating speeds. Lower risks are perceived in fairways attached to shorelines and confined waters, and in those with one-way traffic, traffic separation schemes, cardinal marks and isolated danger marks. Risk is also perceived to be higher at night.
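The core of an ordered probit model can be sketched in a few lines: a latent normal variable is compared against ordered cutpoints to yield category probabilities. The hierarchy in the paper's model (pilot-level random effects inside the linear predictor `eta`) is omitted here for brevity.

```python
import numpy as np
from scipy.stats import norm

def ordered_probit_probs(eta, cutpoints):
    """Category probabilities for an ordered probit model:
    P(y = k) = Phi(c_k - eta) - Phi(c_{k-1} - eta),
    with c_0 = -inf and c_K = +inf."""
    cuts = np.concatenate(([-np.inf], np.asarray(cutpoints, float), [np.inf]))
    return np.diff(norm.cdf(cuts - eta))
```

In the hierarchical version, `eta` would be the fairway covariates times their coefficients plus a pilot-specific random intercept, which is what the intra-class correlation coefficient assesses.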
Abstract:
Navigational collisions are a major safety concern in many seaports. Despite recent advances in port navigational safety research, little is known about harbor pilots' perception of collision risks in anchorages. This study models such risks with a hierarchical ordered probit model, calibrated using data collected through a risk perception survey of Singapore port pilots. The hierarchical model proves useful for accounting for correlations in the risks perceived by individual pilots. Results show higher perceived risks in anchorages attached to intersections and to local and international fairways, becoming more critical at night. Lower risks are perceived in anchorages featuring a shoreline boundary, greater water depth, a lower density of stationary ships, cardinal marks and isolated danger marks. Pilotage experience shows a negative effect on perceived risks. This study indicates that hierarchical modeling would be useful for treating correlations in navigational safety data.
Abstract:
A novel in-cylinder pressure method for determining ignition delay has been proposed and demonstrated. The method uses a new Bayesian statistical model to resolve the start of combustion, defined as the point at which the band-pass in-cylinder pressure deviates from background noise and combustion resonance begins. It is further demonstrated that the method remains accurate when noise is present. The start of combustion can be resolved for each cycle without ad hoc techniques such as cycle averaging, allowing analysis of consecutive cycles and inter-cycle variability studies. Ignition delays obtained by this method and by the net rate of heat release show good agreement. However, using combustion resonance to determine the start of combustion is preferable to the net rate of heat release method because it does not rely on knowledge of heat losses and still functions accurately in the presence of noise. Results are presented for a six-cylinder turbo-charged common-rail diesel engine run on neat diesel fuel at full, three-quarter and half load. Under these conditions the ignition delay increased as the load decreased, with a significant increase at half load compared with three-quarter and full loads.
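As a deliberately simplified stand-in for the Bayesian model described above (not the authors' method), a threshold detector conveys the underlying idea: flag the first sample at which the band-pass pressure departs from the background noise.

```python
import numpy as np

def start_of_combustion(bandpass, noise_window=200, k=4.0):
    """Return the index of the first sample whose deviation from the
    pre-combustion noise mean exceeds k noise standard deviations,
    or None if no such sample exists. The first `noise_window`
    samples are assumed to contain background noise only."""
    noise = np.asarray(bandpass[:noise_window], dtype=float)
    mu, sd = noise.mean(), noise.std()
    above = np.flatnonzero(np.abs(np.asarray(bandpass) - mu) > k * sd)
    return int(above[0]) if above.size else None
```

The abstract's Bayesian approach replaces this hard threshold with a posterior over the change point, which is what makes it robust to noise and usable cycle by cycle.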
Abstract:
This thesis developed and applied Bayesian models for the analysis of survival data. Gene expression values were considered as explanatory variables within the Bayesian survival model, which constitutes a new contribution to the analysis of such data. The censoring inherent in survival data was also addressed in terms of its impact on fitting a finite mixture of Weibull distributions, with and without covariates. To investigate this, simulation studies were carried out under several censoring percentages. Censoring percentages as high as 80% are acceptable here because the work involved high-dimensional data. Lastly, a Bayesian model averaging approach was developed to incorporate model uncertainty into the prediction of survival.
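The censoring issue for a finite Weibull mixture can be made concrete with a small likelihood sketch (illustrative only; covariates and the Bayesian machinery are omitted):

```python
import numpy as np
from scipy import stats

def weibull_mixture_loglik(times, censored, weights, shapes, scales):
    """Log-likelihood of a K-component Weibull mixture under right
    censoring: observed events contribute the mixture density,
    censored observations the mixture survival function."""
    t = np.asarray(times, dtype=float)[:, None]  # shape (n, 1) for broadcasting
    pdf = np.sum(weights * stats.weibull_min.pdf(t, shapes, scale=scales), axis=1)
    sf = np.sum(weights * stats.weibull_min.sf(t, shapes, scale=scales), axis=1)
    return float(np.sum(np.where(censored, np.log(sf), np.log(pdf))))
```

Higher censoring percentages thin out the event terms of this likelihood, which is why fitting degrades as censoring grows; the thesis reports acceptable behaviour up to about 80% in its high-dimensional setting.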
Abstract:
Passenger flow studies in airport terminals have shown consistent statistical relationships between airport spatial layout and pedestrian movement, facilitating prediction of movement from terminal designs. However, these studies are conducted at an aggregate level and do not incorporate how individual passengers make decisions at a microscopic level; therefore, they cannot explain the formation of complex movement flows. In addition, existing models mostly focus on standard airport processing procedures such as immigration and security, but seldom consider the discretionary activities of passengers, and thus cannot truly describe the full range of passenger flows within airport terminals. Because the route-choice decision-making of passengers involves many uncertain factors within airport terminals, the mechanisms governing route choice have proven difficult to identify and quantify. Could the study of cognitive factors of passengers (i.e. the mental preferences by which people decide which on-airport facility to use) help tackle these issues? Assuming that movement in simulated virtual environments is analogous to movement in real environments, passenger behaviour dynamics can be approximated by those generated in virtual experiments. Three levels of dynamics have been devised for motion control: the localised field, the tactical level, and the strategic level. The localised field covers basic motion capabilities, such as walking speed, direction and obstacle avoidance. The other two levels represent cognitive route-choice decision-making. This research views passenger flow problems via a "bottom-up approach", regarding individual passengers as independent intelligent agents who behave autonomously and interact with others and the ambient environment. In this regard, passenger flow formation becomes an emergent phenomenon arising from large numbers of passengers interacting with one another.
In the thesis, the passenger flow in airport terminals was first investigated. Discretionary activities of passengers were integrated with standard processing procedures. The localised field for passenger motion dynamics was constructed with a devised force-based model. Next, advanced traits of passengers (such as their desire to shop, their comfort with technology and their willingness to ask for assistance) were formulated to support tactical route-choice decision-making. The traits consist of quantified measures of passengers' mental preferences as they travel through airport terminals, and each category of trait indicates a decision which passengers may take. The traits were inferred through a Bayesian network model by analysing probabilities based on currently available data, and route-choice decisions were finalised by calculating utilities from those inferred probabilities. Three types of simulation outcome were generated: queue length before checkpoints, average dwell time of passengers at service facilities, and instantaneous space utilisation. Queue length reflects the number of passengers in a queue; long queues cause significant delays in processing procedures. The dwell time of each passenger agent at the service facilities was recorded, and the overall dwell times at typical facility areas were analysed to show utilisation in the temporal dimension. For the spatial dimension, the number of passenger agents dwelling within specific terminal areas can be used to estimate service rates. All outcomes demonstrated specific results for typical simulated passenger flows and directly reflect terminal capacity.
The simulation results strongly suggest that integrating the discretionary activities of passengers makes the simulated flows more intuitive, and that inferring advanced traits to obtain probabilities of mental preferences provides a workable approach to tactical route-choice decision-making. On the whole, the research studied passenger flows in airport terminals with an agent-based model that investigated individual characteristics of passengers and their impact on passengers' psychological route-choice decisions. Finally, intuitive passenger flows in airport terminals could be realised in simulation.
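The utility-based finalisation of route choices described above can be sketched as follows; the function and option names are hypothetical, and the probabilities stand in for those inferred from the Bayesian network over passenger traits.

```python
def choose_route(options):
    """Pick the option with the highest expected utility.

    `options` maps an option name to a list of (probability, utility)
    pairs for its possible outcomes; the probabilities would come from
    the trait-inference step, the utilities from the modeller."""
    def expected_utility(outcomes):
        return sum(p * u for p, u in outcomes)
    return max(options, key=lambda name: expected_utility(options[name]))
```

For example, a passenger agent whose inferred desire to shop is high would see a retail detour dominate the direct route to the gate.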
Abstract:
The use of hierarchical Bayesian spatial models in the analysis of ecological data is increasingly prevalent. The implementation of these models has heretofore been limited to specifically written software that required extensive programming knowledge to create. The advent of WinBUGS provides access to Bayesian hierarchical models for those without the programming expertise to create their own models, and allows for the more rapid implementation of new models and data analyses. This facility is demonstrated here using data collected by the Missouri Department of Conservation for the Missouri Turkey Hunting Survey of 1996. Three models are considered. The first uses the collected data to estimate the success rate of individual hunters at the county level and incorporates a conditional autoregressive (CAR) spatial effect. The second model builds upon the first by simultaneously estimating the success rate and harvest at the county level, while the third estimates the success rate and hunting pressure at the county level. These models are discussed in detail, as are their implementation in WinBUGS and the issues arising therein. Future areas of application and the latest developments in WinBUGS are also discussed.
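The CAR spatial effect mentioned above is, in its intrinsic form, a multivariate normal prior whose precision matrix is built from the county adjacency structure (WinBUGS handles this internally via its CAR distribution). A minimal sketch of that precision matrix:

```python
import numpy as np

def icar_precision(adjacency, tau=1.0):
    """Precision matrix of an intrinsic CAR prior, Q = tau * (D - W),
    where W is the symmetric 0/1 adjacency matrix of areas and D is
    the diagonal matrix of neighbour counts. Q is singular (the prior
    is improper), so a sum-to-zero constraint is typically imposed."""
    W = np.asarray(adjacency, dtype=float)
    D = np.diag(W.sum(axis=1))
    return tau * (D - W)
```

The rank deficiency (rows of Q sum to zero) is why the CAR effect is identified only up to an overall level, which the model's intercept absorbs.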
Abstract:
We consider the problem of combining opinions from different experts in an explicitly model-based way to construct a valid subjective prior for a Bayesian statistical approach. We propose a generic approach based on a hierarchical model that accounts for various sources of variation as well as potential dependence between experts. We apply this approach to two problems. The first deals with a food risk assessment problem involving dose-response modelling for Listeria monocytogenes contamination of mice. Two hierarchical levels of variation are considered (between and within experts), with a complex mathematical situation due to the use of an indirect probit regression. The second concerns the time taken by PhD students to submit their theses in a particular school. It illustrates a complex situation in which three hierarchical levels of variation are modelled, but with a simpler underlying probability distribution (log-normal).
Abstract:
Ecological studies are based on characteristics of groups of individuals and are common in various disciplines, including epidemiology. It is of great interest to epidemiologists to study the geographical variation of a disease while accounting for the positive spatial dependence between neighbouring areas. However, the choice of scale for the spatial correlation requires much attention. In view of a lack of studies in this area, this study investigates the impact of differing definitions of geographical scale using a multilevel model. We propose a new approach, grid-based partitioning, and compare it with the popular census region approach. Unexplained geographical variation is accounted for via area-specific unstructured random effects and spatially structured random effects specified as an intrinsic conditional autoregressive process. Using grid-based modelling of random effects in contrast to the census region approach, we illustrate conditions under which improvements are observed in the estimation of the linear predictor, random effects and parameters, and in the identification of the distribution of residual risk and aggregate risk in a study region. The study found that grid-based modelling is a valuable approach for spatially sparse data, while the SLA-based census region approach and the grid-based approach perform equally well for spatially dense data.
Abstract:
Conservation planning and management programs typically assume relatively homogeneous ecological landscapes. Such “ecoregions” serve multiple purposes: they support assessments of competing environmental values, reveal priorities for allocating scarce resources, and guide effective on-ground actions such as the acquisition of a protected area and habitat restoration. Ecoregions have evolved from a history of organism–environment interactions, and are delineated at the scale or level of detail required to support planning. Depending on the delineation method, scale, or purpose, they have been described as provinces, zones, systems, land units, classes, facets, domains, subregions, and ecological, biological, biogeographical, or environmental regions. In each case, they are essential to the development of conservation strategies and are embedded in government policies at multiple scales.
Abstract:
The use of expert knowledge to quantify a Bayesian Network (BN) is necessary when data are not available. This, however, raises questions about how opinions from multiple experts can be used in a BN. Linear pooling is a popular method for combining probability assessments from multiple experts. In particular, Prior Linear Pooling (PrLP), which pools opinions and then places them into the BN, is a common method. This paper first proposes an alternative pooling method, Posterior Linear Pooling (PoLP), which constructs a BN for each expert and then pools the resulting probabilities at the nodes of interest. Second, it investigates the advantages and disadvantages of using these pooling methods to combine the opinions of multiple experts. Finally, the methods are applied to an existing BN, the Wayfinding Bayesian Network Model, to investigate the behaviour of different groups of people and how the different methods may capture such differences. The paper focuses on six nodes (Human Factors, Environmental Factors, Wayfinding, Communication, Visual Elements of Communication, and Navigation Pathway) and three subgroups (Gender: female, male; Travel Experience: experienced, inexperienced; Travel Purpose: business, personal), and finds that different behaviours can indeed be captured by the different methods.
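The difference between PrLP and PoLP can be shown on a toy two-node network A -> B (an illustrative reduction, not the Wayfinding model): PrLP averages the experts' probability tables before inference, PoLP averages the inferred posteriors, and the two generally disagree.

```python
import numpy as np

def p_b(p_a, p_b_given_a, p_b_given_not_a):
    """Marginal P(B) in a two-node Bayesian network A -> B."""
    return p_a * p_b_given_a + (1 - p_a) * p_b_given_not_a

def prlp(experts, weights):
    """Prior Linear Pooling: pool each CPT entry across experts,
    then run inference once on the pooled network."""
    pooled = [np.average([e[i] for e in experts], weights=weights)
              for i in range(3)]
    return p_b(*pooled)

def polp(experts, weights):
    """Posterior Linear Pooling: run inference per expert's network,
    then pool the resulting marginals at the node of interest."""
    return np.average([p_b(*e) for e in experts], weights=weights)
```

With two experts holding opposed beliefs, e.g. (0.9, 0.8, 0.1) and (0.1, 0.2, 0.9), equal-weight PoLP gives P(B) = 0.78 while PrLP gives 0.5, which is why the two methods can capture different group behaviours.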
Abstract:
Cancer is the leading contributor to the disease burden in Australia. This thesis develops and applies Bayesian hierarchical models to facilitate an investigation of the spatial and temporal associations for cancer diagnosis and survival among Queenslanders. The key objectives are to document and quantify the importance of spatial inequalities, explore factors influencing these inequalities, and investigate how spatial inequalities change over time. Existing Bayesian hierarchical models are refined, new models and methods developed, and tangible benefits obtained for cancer patients in Queensland. The versatility of Bayesian models in cancer control is clearly demonstrated through these detailed and comprehensive analyses.
Abstract:
Microarrays are high-throughput biological assays that allow the screening of thousands of genes for their expression. The main idea behind microarrays is to compute, for each gene, a unique signal that is directly proportional to the quantity of mRNA hybridized on the chip. The large number of steps involved, and the errors associated with each step, make the generated expression signal noisy. As a result, microarray data need careful pre-processing before their analysis can be assumed to lead to reliable and biologically relevant conclusions. This thesis focuses on developing methods for improving the gene signal and further utilizing this improved signal for higher-level analysis. To achieve this, first, approaches for designing microarray experiments using various optimality criteria, considering both biological and technical replicates, are described. A carefully designed experiment yields signal with low noise, as the effect of unwanted variation is minimized and the precision of the estimates of the parameters of interest is maximized. Second, a system for improving the gene signal by using three scans at varying scanner sensitivities is developed. A novel Bayesian latent intensity model is then applied to the three sets of expression values, corresponding to the three scans, to estimate the suitably calibrated true signal of genes. Third, a novel image segmentation approach that separates the fluorescent signal from undesired noise is developed using an additional dye, SYBR green RNA II. This technique helps identify signal corresponding only to the hybridized DNA, while signal from dust, scratches, dye spills and other noise sources is excluded. Fourth, an integrated statistical model is developed in which signal correction, systematic array effects, dye effects and differential expression are modelled jointly, as opposed to a sequential application of several methods of analysis.
The methods described here have been tested only on cDNA microarrays but can, with some modifications, also be applied to other high-throughput technologies. Keywords: high-throughput technology, microarray, cDNA, multiple scans, Bayesian hierarchical models, image analysis, experimental design, MCMC, WinBUGS.
Abstract:
Strong statistical evidence was found for differences in tolerance to natural infections of Tobacco streak virus (TSV) among sunflower hybrids. Data from 470 plots involving 23 different sunflower hybrids tested in multiple trials over 5 years in Australia were analysed. Using a Bayesian hierarchical logistic regression model for the analysis provided: (i) a rigorous method for investigating the relative effects of hybrid, seasonal rainfall and proximity to the inoculum source on the incidence of severe TSV disease; (ii) a natural method for estimating the probability distributions of disease incidence in different hybrids under historical rainfall conditions; and (iii) a method for undertaking all pairwise comparisons of disease incidence between hybrids while controlling the familywise error rate without any drastic reduction in statistical power. The tolerance identified in field trials was effective against the main TSV strain associated with disease outbreaks, TSV-parthenium. Glasshouse tests indicate that this tolerance is also effective against the other TSV strain found in central Queensland, TSV-crownbeard. The use of tolerant germplasm is critical to minimise the risk of TSV epidemics in sunflower in this region. We found strong statistical evidence that rainfall during the early growing months of March and April had a negative effect on the incidence of severe infection, with greatly reduced disease incidence in years with high rainfall during this period.
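The core of a hierarchical logistic regression can be illustrated with the linear predictor on the logit scale; the parameter names below are hypothetical, with `hybrid_effect` standing in for the model's hierarchical hybrid-level term.

```python
import math

def severe_incidence_prob(intercept, hybrid_effect, rainfall_coef, rainfall):
    """P(severe TSV infection) under a simplified logistic model:
    logit(p) = intercept + hybrid_effect + rainfall_coef * rainfall."""
    eta = intercept + hybrid_effect + rainfall_coef * rainfall
    return 1.0 / (1.0 + math.exp(-eta))
```

A negative `rainfall_coef`, as found for March-April rainfall, pushes the probability of severe infection down in wet years; hybrid-specific effects shift each hybrid's incidence curve up or down.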
Abstract:
We introduce a Gaussian process model of functions that are additive. An additive function is one which decomposes into a sum of low-dimensional functions, each depending on only a subset of the input variables. Additive GPs generalize both generalized additive models and the standard GP models that use squared-exponential kernels. Hyperparameter learning in this model can be seen as Bayesian Hierarchical Kernel Learning (HKL). We introduce an expressive but tractable parameterization of the kernel function, which allows efficient evaluation of all input interaction terms, whose number is exponential in the input dimension. The additional structure discoverable by this model results in increased interpretability, as well as state-of-the-art predictive power in regression tasks.
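The efficient evaluation of all interaction terms rests on the fact that the order-n additive kernels are elementary symmetric polynomials of the one-dimensional base kernels, computable by the Newton-Girard recursion in O(D^2) time rather than by enumerating the 2^D terms. A sketch, taking the base kernel evaluations as given:

```python
import numpy as np

def additive_kernels(base_values):
    """Given the one-dimensional base kernel evaluations z_1..z_D for
    one pair of inputs, return [e_1, ..., e_D]: the elementary
    symmetric polynomials that form the additive kernel of each
    order, computed via the Newton-Girard recursion from power sums."""
    z = np.asarray(base_values, dtype=float)
    D = len(z)
    s = [float(np.sum(z ** k)) for k in range(D + 1)]  # power sums s_k
    e = [1.0]                                          # e_0 = 1
    for n in range(1, D + 1):
        e.append(sum((-1) ** (k - 1) * e[n - k] * s[k]
                     for k in range(1, n + 1)) / n)
    return e[1:]
```

The full additive kernel is then a weighted sum of these orders, with one variance hyperparameter per order learned alongside the base kernels' lengthscales.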