244 results for Exponential random graph models

in Queensland University of Technology - ePrints Archive


Relevance: 100.00%

Abstract:

A computationally efficient sequential Monte Carlo algorithm is proposed for the sequential design of experiments for the collection of block data described by mixed effects models. The difficulty in applying a sequential Monte Carlo algorithm in such settings is the need to evaluate the observed data likelihood, which is typically intractable for all but linear Gaussian models. To overcome this difficulty, we propose to estimate the likelihood unbiasedly, and to perform inference and make decisions based on an exact-approximate algorithm. Two estimates are proposed: one using quasi-Monte Carlo methods and one using the Laplace approximation with importance sampling. Both of these approaches can be computationally expensive, so we propose exploiting parallel computational architectures to ensure designs can be derived in a timely manner. We also extend our approach to allow for model uncertainty. This research is motivated by important pharmacological studies related to the treatment of critically ill patients.
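The core idea of the exact-approximate step is that an unbiased Monte Carlo estimate of the intractable observed-data likelihood can stand in for the exact value. A minimal sketch for a toy random-intercept model follows; the model, parameter values, and the plain Monte Carlo estimator are illustrative assumptions (the paper itself uses quasi-Monte Carlo and Laplace-with-importance-sampling estimators):

```python
import math, random

def likelihood_hat(y, theta, n_draws=1000, sigma_b=1.0, sigma_e=0.5, rng=None):
    """Unbiased Monte Carlo estimate of the observed-data likelihood
    p(y | theta) = E_b[p(y | theta, b)] for a toy random-intercept model:
    y ~ N(theta + b, sigma_e^2), with random effect b ~ N(0, sigma_b^2).
    All parameter values are illustrative assumptions for this sketch."""
    rng = rng or random.Random(0)
    total = 0.0
    for _ in range(n_draws):
        b = rng.gauss(0.0, sigma_b)  # draw the random effect from its prior
        # conditional density p(y | theta, b), a Gaussian in y
        total += math.exp(-0.5 * ((y - theta - b) / sigma_e) ** 2) / (
            sigma_e * math.sqrt(2.0 * math.pi))
    return total / n_draws  # an average of unbiased terms is itself unbiased
```

Averaging the conditional density over prior draws of the random effect gives an unbiased estimate of the marginal likelihood, which is exactly what the exact-approximate machinery requires.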

Relevance: 100.00%

Abstract:

Longitudinal data, in which subjects are repeatedly observed or measured over time or age, provide the foundation for the analysis of processes that evolve over time; models of such processes can be referred to as growth or trajectory models. One of the traditional ways of specifying growth models is to employ linear or polynomial functional forms to model trajectory shape, and to account for variation around an overall mean trend by including random effects, or individual variation, on the functional shape parameters. The identification of distinct subgroups or sub-classes (latent classes) within these trajectory models, which are not based on some pre-existing individual classification, provides an important methodology with substantive implications. The identification of subgroups or classes has wide application in the medical arena, where responder/non-responder identification based on distinctly differing trajectories delivers further information for clinical processes. This thesis develops Bayesian statistical models and techniques for the identification of subgroups in the analysis of longitudinal data where the number of time intervals is limited. These models are then applied to a single case study which investigates neuropsychological cognition for early stage breast cancer patients undergoing adjuvant chemotherapy treatment, from the Cognition in Breast Cancer Study undertaken by the Wesley Research Institute of Brisbane, Queensland. Alternative formulations to the linear or polynomial approach are taken, using piecewise linear models with a single turning point (change-point or knot) at a known time point, and latent basis models for the non-linear trajectories found for the verbal memory domain of cognitive function before and after chemotherapy treatment.
Hierarchical Bayesian random effects models are used as a starting point for the latent class modelling process and are extended with the incorporation of covariates in the trajectory profiles and as predictors of class membership. The Bayesian latent basis models enable the degree of recovery post-chemotherapy to be estimated for short- and long-term follow-up occasions, and the distinct class trajectories assist in the identification of breast cancer patients who may be at risk of long-term verbal memory impairment.
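The piecewise linear formulation with a single known knot amounts to a simple design-matrix construction: an intercept, a time term, and a post-knot excess-time term. The coefficient values and knot location below are illustrative, not taken from the thesis:

```python
def piecewise_basis(t, knot):
    """Design row for a piecewise linear trajectory with a single known
    change-point (knot): intercept, time, and post-knot excess time."""
    return [1.0, t, max(0.0, t - knot)]

def trajectory(t, beta, knot):
    # mean trajectory at time t; beta = (intercept, slope, change in slope)
    return sum(x * w for x, w in zip(piecewise_basis(t, knot), beta))
```

Before the knot the trajectory follows the first slope; after the knot the third coefficient adds a change in slope, so a post-treatment recovery can be modelled as a turn in the trajectory at the known time point.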

Relevance: 100.00%

Abstract:

Velocity jump processes are discrete random walk models that have many applications including the study of biological and ecological collective motion. In particular, velocity jump models are often used to represent a type of persistent motion, known as a “run and tumble”, which is exhibited by some isolated bacteria cells. All previous velocity jump processes are non-interacting, which means that crowding effects and agent-to-agent interactions are neglected. By neglecting these agent-to-agent interactions, traditional velocity jump models are only applicable to very dilute systems. Our work is motivated by the fact that many applications in cell biology, such as wound healing, cancer invasion and development, often involve tissues that are densely packed with cells where cell-to-cell contact and crowding effects can be important. To describe these kinds of high cell density problems using a velocity jump process we introduce three different classes of crowding interactions into a one-dimensional model. Simulation data and averaging arguments lead to a suite of continuum descriptions of the interacting velocity jump processes. We show that the resulting systems of hyperbolic partial differential equations predict the mean behavior of the stochastic simulations very well.
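A minimal non-interacting run-and-tumble walk of the kind this work extends can be sketched as follows; the unit lattice spacing, fixed speed, and turning probability are illustrative assumptions:

```python
import random

def run_and_tumble(steps, p_turn=0.1, rng=None):
    """Minimal non-interacting 1D velocity jump ('run and tumble') walk:
    the agent moves with fixed velocity +1 or -1 and reverses direction
    ('tumbles') with probability p_turn per step. A sketch of the baseline
    model; the paper's contribution is adding crowding interactions."""
    rng = rng or random.Random(0)
    x, v = 0, 1
    for _ in range(steps):
        if rng.random() < p_turn:  # tumble: reverse the velocity
            v = -v
        x += v  # run: move one lattice site in the current direction
    return x
```

Small p_turn gives long persistent runs; the interacting variants in the paper would additionally abort moves into occupied sites.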

Relevance: 100.00%

Abstract:

The Poisson distribution has often been used for count data such as accident counts. The Negative Binomial (NB) distribution has been adopted for count data to address the over-dispersion problem. However, Poisson and NB distributions are incapable of taking into account some unobserved heterogeneities due to spatial and temporal effects in accident data. To overcome this problem, random effect models have been developed. Another challenge with existing traffic accident prediction models is the presence of excess zero accident observations in some accident data. Although the Zero-Inflated Poisson (ZIP) model is capable of handling the dual-state system in accident data with excess zero observations, it does not accommodate the within-location and between-location correlation heterogeneities which are the basic motivations for the random effect models. This paper proposes an effective way of fitting the ZIP model with location-specific random effects, and Bayesian analysis is recommended for model calibration and assessment.
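The dual-state structure of the ZIP model can be written down directly: a site is in an "always zero" state with probability pi0, and otherwise produces Poisson counts. This sketch shows only the fixed-effect form; the paper's location-specific random effects would enter by letting the Poisson mean vary by site:

```python
import math

def zip_pmf(k, lam, pi0):
    """Probability mass function of the zero-inflated Poisson (ZIP) model:
    with probability pi0 the observation comes from the 'always zero'
    state, otherwise from Poisson(lam). Parameter values are illustrative."""
    poisson = math.exp(-lam) * lam ** k / math.factorial(k)
    if k == 0:
        return pi0 + (1 - pi0) * poisson  # excess zeros plus Poisson zeros
    return (1 - pi0) * poisson
```

The zero class mixes two sources (structural zeros and sampling zeros), which is why ZIP can fit data with many more zeros than a Poisson or NB model allows.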

Relevance: 100.00%

Abstract:

Statistical methods are often used to analyse commercial catch and effort data to provide standardised fishing effort and/or a relative index of fish abundance for input into stock assessment models. Achieving reliable results has proved difficult in Australia's Northern Prawn Fishery (NPF), due to a combination of such factors as the biological characteristics of the animals, some aspects of the fleet dynamics, and the changes in fishing technology. For this set of data, we compared four modelling approaches (linear models, mixed models, generalised estimating equations, and generalised linear models) with respect to the outcomes of the standardised fishing effort or the relative index of abundance. We also varied the number and form of vessel covariates in the models. Within a subset of data from this fishery, modelling correlation structures did not alter the conclusions from simpler statistical models. The random-effects models also yielded similar results. This is because the estimators are all consistent even if the correlation structure is mis-specified, and the data set is very large. However, the standard errors from different models differed, suggesting that different methods have different statistical efficiency. We suggest that there is value in modelling the variance function and the correlation structure, to make valid and efficient statistical inferences and gain insight into the data. We found that fishing power was separable from the indices of prawn abundance only when we offset the impact of vessel characteristics at assumed values from external sources. This may be due to the large degree of confounding within the data, and the extreme temporal changes in certain aspects of individual vessels, the fleet and the fleet dynamics.

Relevance: 100.00%

Abstract:

Random walk models are often used to interpret experimental observations of the motion of biological cells and molecules. A key aim in applying a random walk model to mimic an in vitro experiment is to estimate the Fickian diffusivity (or Fickian diffusion coefficient), D. However, many in vivo experiments are complicated by the fact that the motion of cells and molecules is hindered by the presence of obstacles. Crowded transport processes have been modeled using repeated stochastic simulations in which a motile agent undergoes a random walk on a lattice that is populated by immobile obstacles. Early studies considered the most straightforward case in which the motile agent and the obstacles are the same size. More recent studies considered stochastic random walk simulations describing the motion of an agent through an environment populated by obstacles of different shapes and sizes. Here, we build on previous simulation studies by analyzing a general class of lattice-based random walk models with agents and obstacles of various shapes and sizes. Our analysis provides exact calculations of the Fickian diffusivity, allowing us to draw conclusions about the role of the size, shape and density of the obstacles, as well as examining the role of the size and shape of the motile agent. Since our analysis is exact, we calculate D directly without the need for random walk simulations. In summary, we find that the shape, size and density of obstacles have a major influence on the exact Fickian diffusivity. Furthermore, our results indicate that the difference in diffusivity for symmetric and asymmetric obstacles is significant.
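The simulation-based estimate that the paper's exact analysis avoids can be sketched as follows: D is estimated from the mean squared displacement (MSD) of a walker on a periodic lattice populated by immobile point obstacles, using D ≈ MSD/(4t) for an unbiased 2D walk. Same-size agent and obstacles, unit lattice spacing and time step, and all parameter values are assumptions of this sketch:

```python
import random

def estimate_diffusivity(obstacle_density, steps=200, walks=500, rng=None):
    """Estimate the Fickian diffusivity D on a 2D periodic lattice with
    immobile point obstacles, via D ~ MSD / (4 t). An illustrative
    simulation sketch, not the paper's exact calculation."""
    rng = rng or random.Random(1)
    size = 50
    obstacles = {(i, j) for i in range(size) for j in range(size)
                 if rng.random() < obstacle_density}
    obstacles.discard((0, 0))  # keep the start site free
    msd = 0.0
    for _ in range(walks):
        x = y = 0  # true (unwrapped) position
        for _ in range(steps):
            dx, dy = rng.choice([(1, 0), (-1, 0), (0, 1), (0, -1)])
            if ((x + dx) % size, (y + dy) % size) not in obstacles:
                x, y = x + dx, y + dy  # move only if the target site is free
        msd += x * x + y * y
    return msd / walks / (4 * steps)
```

With no obstacles the estimate recovers the free-lattice value D = 1/4; increasing the obstacle density hinders the walker and reduces D, which is the qualitative effect the exact analysis quantifies.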

Relevance: 100.00%

Abstract:

In this paper, the train scheduling problem is modelled as a blocking parallel-machine job shop scheduling (BPMJSS) problem. In the model, trains, single-track sections and multiple-track sections are synonymous with jobs, single machines and parallel machines, respectively, and an operation is regarded as the movement/traversal of a train across a section. Due to the lack of buffer space, the real-life case must consider blocking or hold-while-wait constraints, which means that a track section cannot release a train, and must hold it, until the next section on the route becomes available. Based on a literature review and our analysis, it is very hard to find a feasible complete schedule directly for BPMJSS problems. Firstly, a parallel-machine job-shop-scheduling (PMJSS) problem is solved by an improved shifting bottleneck procedure (SBP) algorithm without considering blocking conditions. Inspired by the proposed SBP algorithm, a feasibility satisfaction procedure (FSP) algorithm is developed to solve and analyse the BPMJSS problem, using an alternative graph model that is an extension of the classical disjunctive graph models. The proposed algorithms have been implemented and validated using real-world data from Queensland Rail. Sensitivity analysis has been applied by considering train length, upgrading track sections, increasing train speed and changing bottleneck sections. The outcomes show that the proposed methodology would be a very useful tool for real-life train scheduling problems.
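The hold-while-wait constraint can be illustrated for a single train on single-track sections: even after finishing its traversal, the train cannot release a section until the next section on its route is free. The data and field names below are illustrative, not from the Queensland Rail case study:

```python
def blocking_release_times(route_sections, run_times, section_free_at):
    """Sketch of the hold-while-wait (blocking) constraint for one train:
    occupation of each section may exceed the pure running time because the
    train must hold a section until the next one becomes available.
    route_sections: ordered section names; run_times: traversal time per
    section; section_free_at: earliest time each section is available."""
    t = 0.0
    release = {}
    for i, sec in enumerate(route_sections):
        start = max(t, section_free_at.get(sec, 0.0))
        done = start + run_times[sec]
        if i + 1 < len(route_sections):
            nxt = route_sections[i + 1]
            # blocking: hold the current section until the next one is free
            done = max(done, section_free_at.get(nxt, 0.0))
        release[sec] = done
        t = done
    return release
```

This is why scheduling without blocking (as in the PMJSS step) can produce schedules that are infeasible once the hold-while-wait constraint is reinstated, motivating the FSP repair step.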

Relevance: 100.00%

Abstract:

We introduce the concept of attribute-based authenticated key exchange (AB-AKE) within the framework of ciphertext policy attribute-based systems. A notion of AKE-security for AB-AKE is presented based on the security models for group key exchange protocols and also taking into account the security requirements generally considered in the ciphertext policy attribute-based setting. We also extend the paradigm of hybrid encryption to the ciphertext policy attribute-based encryption schemes. A new primitive called encapsulation policy attribute-based key encapsulation mechanism (EP-AB-KEM) is introduced and a notion of chosen ciphertext security is defined for EP-AB-KEMs. We propose an EP-AB-KEM from an existing attribute-based encryption scheme and show that it achieves chosen ciphertext security in the generic group and random oracle models. We present a generic one-round AB-AKE protocol that satisfies our AKE-security notion. The protocol is generically constructed from any EP-AB-KEM that satisfies chosen ciphertext security. Instantiating the generic AB-AKE protocol with our EP-AB-KEM will result in a concrete one-round AB-AKE protocol also secure in the generic group and random oracle models.

Relevance: 100.00%

Abstract:

Statisticians along with other scientists have made significant computational advances that enable the estimation of formerly complex statistical models. The Bayesian inference framework combined with Markov chain Monte Carlo estimation methods such as the Gibbs sampler enables the estimation of discrete choice models such as the multinomial logit (MNL) model. MNL models are frequently applied in transportation research to model choice outcomes such as mode, destination, or route choices, or to model categorical outcomes such as crash outcomes. Recent developments allow for the modification of the potentially limiting assumptions of MNL such as the independence from irrelevant alternatives (IIA) property. However, relatively little transportation-related research has focused on Bayesian MNL models, the tractability of which is of great value to researchers and practitioners alike. This paper addresses MNL model specification issues in the Bayesian framework, such as the value of including prior information on parameters, allowing for nonlinear covariate effects, and extensions to random parameter models, thereby relaxing the usual limiting IIA assumption. This paper also provides an example that demonstrates, using route-choice data, the considerable potential of the Bayesian MNL approach for many transportation applications. The paper concludes with a discussion of the pros and cons of this Bayesian approach and identifies when its application is worthwhile.
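The MNL choice probabilities, and the IIA property the paper discusses relaxing, follow directly from the logit form: the ratio of any two alternatives' probabilities depends only on their own utilities. A minimal sketch (the Bayesian estimation machinery and random-parameter extensions are not shown):

```python
import math

def mnl_probabilities(utilities):
    """Multinomial logit choice probabilities:
    P(i) = exp(V_i) / sum_j exp(V_j).
    IIA follows because P(i)/P(j) = exp(V_i - V_j) regardless of which
    other alternatives are in the choice set."""
    m = max(utilities)  # subtract the max for numerical stability
    exps = [math.exp(v - m) for v in utilities]
    s = sum(exps)
    return [e / s for e in exps]
```

Removing or adding an alternative rescales all probabilities but leaves pairwise ratios unchanged, which is exactly the limiting behaviour that random parameter (mixed logit) extensions relax.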

Relevance: 100.00%

Abstract:

Discrete Markov random field models provide a natural framework for representing images or spatial datasets. They model the spatial association present while providing a convenient Markovian dependency structure and strong edge-preservation properties. However, parameter estimation for discrete Markov random field models is difficult due to the complex form of the associated normalizing constant for the likelihood function. For large lattices, the reduced dependence approximation to the normalizing constant is based on the concept of performing computationally efficient and feasible forward recursions on smaller sublattices which are then suitably combined to estimate the constant for the whole lattice. We present an efficient computational extension of the forward recursion approach for the autologistic model to lattices that have an irregularly shaped boundary and which may contain regions with no data; these lattices are typical in applications. Consequently, we also extend the reduced dependence approximation to these scenarios enabling us to implement a practical and efficient non-simulation based approach for spatial data analysis within the variational Bayesian framework. The methodology is illustrated through application to simulated data and example images. The supplemental materials include our C++ source code for computing the approximate normalizing constant and simulation studies.
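The forward recursion that the reduced dependence approximation builds on can be illustrated on a one-dimensional autologistic (Ising-type) chain, where summing out one site at a time yields the exact normalizing constant in linear rather than exponential time. The parameterization below is one common convention, assumed for illustration:

```python
import math

def autologistic_log_z(n, alpha, beta):
    """Exact log normalizing constant of a 1D autologistic chain with n
    binary sites, weight exp(alpha * sum(s_i) + beta * sum(s_i * s_{i+1})),
    computed by a forward recursion over site states (a sketch of the
    recursion idea; the paper works on 2D lattices with sublattices)."""
    # f[s] = total unnormalized weight of all chains ending in state s
    f = [1.0, math.exp(alpha)]  # first site: weight exp(alpha * s)
    for _ in range(n - 1):
        g0 = f[0] + f[1]                                       # next site = 0
        g1 = math.exp(alpha) * (f[0] + math.exp(beta) * f[1])  # next site = 1
        f = [g0, g1]
    return math.log(f[0] + f[1])

def brute_force_log_z(n, alpha, beta):
    # direct summation over all 2**n configurations, to check small n
    total = 0.0
    for m in range(2 ** n):
        s = [(m >> i) & 1 for i in range(n)]
        e = alpha * sum(s) + beta * sum(a * b for a, b in zip(s, s[1:]))
        total += math.exp(e)
    return math.log(total)
```

On a 2D lattice the same idea applies with whole lattice rows as the recursion state, which is why the paper restricts the recursion to smaller sublattices and then combines them.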

Relevance: 100.00%

Abstract:

Random walk models based on an exclusion process with contact effects are often used to represent collective migration where individual agents are affected by agent-to-agent adhesion. Traditional mean field representations of these processes take the form of a nonlinear diffusion equation which, for strong adhesion, does not predict the averaged discrete behavior. We propose an alternative suite of mean-field representations, showing that collective migration with strong adhesion can be accurately represented using a moment closure approach.
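A minimal discrete sketch of an exclusion process with contact (adhesion) effects: each sampled agent attempts a nearest-neighbour move, the move is aborted if the target site is occupied (exclusion), and with some probability it is also aborted whenever the agent has an occupied neighbour (adhesion). The update rule and parameter names are illustrative assumptions, not the paper's exact model:

```python
import random

def adhesive_exclusion_step(occupied, size, p_adhesion, rng):
    """One random sequential update sweep of a 1D exclusion process with
    adhesion on a periodic lattice. occupied: set of occupied sites."""
    for _ in range(len(occupied)):
        x = rng.choice(sorted(occupied))  # pick an agent at random
        left, right = (x - 1) % size, (x + 1) % size
        if (left in occupied or right in occupied) and rng.random() < p_adhesion:
            continue  # adhesion: the agent stays attached to a neighbour
        target = rng.choice((left, right))
        if target not in occupied:  # exclusion: at most one agent per site
            occupied.remove(x)
            occupied.add(target)
    return occupied
```

Strong adhesion makes clusters of agents nearly immobile, which is the regime where the standard nonlinear diffusion mean-field description fails and a moment closure approach is needed.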

Relevance: 100.00%

Abstract:

Objective Factors associated with the development of hallux valgus (HV) are multifactorial and remain unclear. The objective of this systematic review and meta-analysis was to investigate characteristics of foot structure and footwear associated with HV. Design Electronic databases (Medline, Embase, and CINAHL) were searched to December 2010. Cross-sectional studies with a valid definition of HV and a non-HV comparison group were included. Two independent investigators quality rated all included papers. Effect sizes and 95% confidence intervals (CIs) were calculated (standardized mean differences (SMDs) for continuous data and risk ratios (RRs) for dichotomous data). Where studies were homogeneous, pooling of SMDs was conducted using random effects models. Results A total of 37 papers (34 unique studies) were quality rated. After exclusion of studies without reported measurement reliability for associated factors, data were extracted and analysed from 16 studies reporting results for 45 different factors. Significant factors included: greater first intermetatarsal angle (pooled SMD = 1.5, CI: 0.88–2.1), longer first metatarsal (pooled SMD = 1.0, CI: 0.48–1.6), round first metatarsal head (RR: 3.1–5.4), and lateral sesamoid displacement (RR: 5.1–5.5). Results for clinical factors (e.g., first ray mobility, pes planus, footwear) were less conclusive regarding their association with HV. Conclusions Although conclusions regarding causality cannot be made from cross-sectional studies, this systematic review highlights important factors to monitor in HV assessment and management. Further studies with rigorous methodology are warranted to investigate clinical factors associated with HV.
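The pooling of standardized mean differences under a random effects model is commonly done with the DerSimonian-Laird estimator of the between-study variance. A standard stdlib sketch of that calculation (not code from the review itself):

```python
def dersimonian_laird(effects, variances):
    """Random-effects pooling of study effect sizes (e.g. SMDs) with the
    DerSimonian-Laird estimate of between-study variance tau^2.
    Returns the pooled effect and its standard error."""
    w = [1.0 / v for v in variances]  # fixed-effect (inverse-variance) weights
    sw = sum(w)
    fixed = sum(wi * e for wi, e in zip(w, effects)) / sw
    q = sum(wi * (e - fixed) ** 2 for wi, e in zip(w, effects))  # Cochran's Q
    c = sw - sum(wi * wi for wi in w) / sw
    tau2 = max(0.0, (q - (len(effects) - 1)) / c)  # between-study variance
    w_re = [1.0 / (v + tau2) for v in variances]   # random-effects weights
    pooled = sum(wi * e for wi, e in zip(w_re, effects)) / sum(w_re)
    se = (1.0 / sum(w_re)) ** 0.5
    return pooled, se
```

When studies are homogeneous (Q below its degrees of freedom), tau^2 is truncated to zero and the estimate reduces to the fixed-effect inverse-variance pool.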

Relevance: 100.00%

Abstract:

Objective: To calculate pooled risk estimates of the association between pigmentary characteristics and basal cell carcinoma (BCC) of the skin. Methods: We searched three electronic databases and reviewed the reference lists of the retrieved articles until July 2012 to identify eligible epidemiologic studies. Eligible studies were those published between 1965 and July 2012 that permitted quantitative assessment of the association between histologically-confirmed BCC and any of the following characteristics: hair colour, eye colour, skin colour, skin phototype, tanning and burning ability, and presence of freckling or melanocytic nevi. We included 29 studies from 2236 initially identified. We calculated summary odds ratios (ORs) using weighted averages of the log OR, using random effects models. Results: We found strongest associations with red hair (OR 2.02; 95% CI: 1.68, 2.44), fair skin colour (OR 2.11; 95% CI: 1.56, 2.86), and having skin that burns and never tans (OR 2.03; 95% CI: 1.73, 2.38). All other factors had weaker but positive associations with BCC, with the exception of freckling of the face in adulthood, which showed no association. Conclusions: Although most studies report risk estimates that are in the same direction, there is significant heterogeneity in the size of the estimates. The associations were quite modest and remarkably similar, with ORs between about 1.5 and 2.5 for the highest risk level for each factor. Given the public health impact of BCC, this meta-analysis will make a valuable contribution to our understanding of BCC.

Relevance: 100.00%

Abstract:

Background The body of evidence related to breast-cancer-related lymphoedema incidence and risk factors has substantially grown and improved in quality over the past decade. We assessed the incidence of unilateral arm lymphoedema after breast cancer and explored the evidence available for lymphoedema risk factors. Methods We searched Academic Search Elite, Cumulative Index to Nursing and Allied Health, Cochrane Central Register of Controlled Trials (clinical trials), and Medline for research articles that assessed the incidence or prevalence of, or risk factors for, arm lymphoedema after breast cancer, published between January 1, 2000, and June 30, 2012. We extracted incidence data and calculated corresponding exact binomial 95% CIs. We used random effects models to calculate a pooled overall estimate of lymphoedema incidence, with subgroup analyses to assess the effect of different study designs, countries of study origin, diagnostic methods, time since diagnosis, and extent of axillary surgery. We assessed risk factors and collated them into four levels of evidence, depending on consistency of findings and quality and quantity of studies contributing to findings. Findings 72 studies met the inclusion criteria for the assessment of lymphoedema incidence, giving a pooled estimate of 16·6% (95% CI 13·6–20·2). Our estimate was 21·4% (14·9–29·8) when restricted to data from prospective cohort studies (30 studies). The incidence of arm lymphoedema seemed to increase up to 2 years after diagnosis or surgery of breast cancer (24 studies with time since diagnosis or surgery of 12 to <24 months; 18·9%, 14·2–24·7), was highest when assessed by more than one diagnostic method (nine studies; 28·2%, 11·8–53·5), and was about four times higher in women who had an axillary-lymph-node dissection (18 studies; 19·9%, 13·5–28·2) than it was in those who had sentinel-node biopsy (18 studies; 5·6%, 6·1–7·9). 29 studies met the inclusion criteria for the assessment of risk factors. 
Risk factors that had a strong level of evidence were extensive surgery (ie, axillary-lymph-node dissection, greater number of lymph nodes dissected, mastectomy) and being overweight or obese. Interpretation Our findings suggest that more than one in five women who survive breast cancer will develop arm lymphoedema. A clear need exists for improved understanding of contributing risk factors, as well as of prevention and management strategies to reduce the individual and public health burden of this disabling and distressing disorder.
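The exact binomial 95% CIs the review computes for each study's incidence proportion are Clopper-Pearson intervals, obtainable by inverting the binomial tail probabilities. A standard-library sketch using bisection (an illustrative implementation, not the review's software):

```python
import math

def exact_binomial_ci(x, n, alpha=0.05):
    """Exact (Clopper-Pearson) binomial confidence interval for a
    proportion x/n, solved by bisection on the binomial tails."""
    def binom_tail(lo_k, hi_k, p):
        # P(lo_k <= X <= hi_k) for X ~ Binomial(n, p)
        return sum(math.comb(n, k) * p ** k * (1 - p) ** (n - k)
                   for k in range(lo_k, hi_k + 1))
    def solve(f, increasing):
        # find p in [0, 1] with f(p) = alpha / 2 for monotone f
        lo, hi = 0.0, 1.0
        for _ in range(60):
            mid = (lo + hi) / 2
            if (f(mid) < alpha / 2) == increasing:
                lo = mid
            else:
                hi = mid
        return (lo + hi) / 2
    lower = 0.0 if x == 0 else solve(lambda p: binom_tail(x, n, p), True)
    upper = 1.0 if x == n else solve(lambda p: binom_tail(0, x, p), False)
    return lower, upper
```

For example, a study observing 5 cases among 10 women yields an exact 95% CI of roughly 0.19 to 0.81, illustrating why the per-study intervals in the review are wide for small cohorts.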