519 results for models, theoretical

in Queensland University of Technology - ePrints Archive


Relevance:

60.00%

Abstract:

The epidermal growth factor receptor (EGFR) is commonly expressed in non-small-cell lung cancer (NSCLC) and promotes a host of mechanisms involved in tumorigenesis. However, EGFR expression does not reliably predict prognosis or response to EGFR-targeted therapies. The data from two previous studies of a series of 181 consecutive surgically resected stage I-IIIA NSCLC patients who had survived in excess of 60 days were explored. Of these patients, tissue was available for evaluation of EGFR in 179 patients, carbonic anhydrase (CA) IX in 177 patients and matrix metalloproteinase-9 (MMP-9) in 169 patients. We have previously reported an association between EGFR expression and MMP-9 expression. We have also reported that MMP-9 (P=0.001) and perinuclear (p)CA IX (P=0.03) but not EGFR expression were associated with a poor prognosis. Perinuclear CA IX expression was also associated with EGFR expression (P<0.001). Multivariate analysis demonstrated that coexpression of MMP-9 with EGFR conferred a worse prognosis than the expression of MMP-9 alone (P<0.001) and coexpression of EGFR and pCA IX conferred a worse prognosis than pCA IX alone (P=0.05). A model was then developed where the study population was divided into three groups: group 1 had expression of EGFR without coexpression of MMP-9 or pCA IX (number=21); group 2 had no expression of EGFR (number=75); and group 3 had coexpression of EGFR with pCA IX or MMP-9 or both (number=70). Group 3 had a worse prognosis than either group 1 or group 2 (P=0.0003 and 0.027, respectively) and group 1 had a better prognosis than group 2 (P=0.036). These data identify two cohorts of EGFR-positive patients with diametrically opposite prognoses. The group expressing EGFR together with MMP-9, pCA IX or both may identify a group of patients with activated EGFR, which is of clinical relevance with the advent of EGFR-targeted therapies. © 2004 Cancer Research UK.
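The three-group stratification described above can be expressed as a small decision rule. This is an illustrative sketch, not code from the study; the boolean marker flags are hypothetical inputs.

```python
def egfr_risk_group(egfr: bool, mmp9: bool, pca9: bool) -> int:
    """Assign the prognostic group defined in the study:
    group 1 -- EGFR-positive, no MMP-9 or pCA IX (best prognosis, n=21);
    group 2 -- EGFR-negative (n=75);
    group 3 -- EGFR-positive with MMP-9, pCA IX or both (worst prognosis, n=70)."""
    if not egfr:
        return 2
    return 3 if (mmp9 or pca9) else 1

print(egfr_risk_group(egfr=True, mmp9=True, pca9=False))  # group 3
```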

Relevance:

60.00%

Abstract:

Objective To evaluate a conceptual model linking parent physical activity (PA) orientations, parental support for PA, and PA behavior in preschool children. Methods Participants were 156 parent-child dyads from 13 child care centers in Queensland, Australia. Parents completed a questionnaire measuring parental PA, parental enjoyment of PA, perceived importance of PA, parental support for PA, parents' perceptions of competence, and child PA at home. Moderate-to-vigorous physical activity (MVPA) while attending child care was measured via accelerometry. Data were collected between May and August of 2003. The relationships between the study variables and child PA were tested using observed-variable path analysis. Results Parental PA and parents' perceptions of competence were positively associated with parental support for PA (β= 0.23 and 0.18, respectively, p<0.05). Parental support, in turn, was positively associated with child PA at home (β= 0.16, p<0.05), but not at child care (β= 0.01, p= 0.94). Parents' perceptions of competence were positively associated with both home-based and child care PA (β= 0.20 and 0.28, respectively, p<0.05). Conclusions Family-based interventions targeting preschoolers should include strategies to increase parental support for PA. Parents who perceive their child to have low physical competence should be encouraged to provide adequate support for PA. © 2009 Elsevier Inc.
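An observed-variable path model of this shape can be approximated by one least-squares regression per endogenous variable. The sketch below fits the two stages on synthetic data generated with the reported coefficients; everything here is simulated, not the study's data.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 156  # same number of parent-child dyads as the study

# Hypothetical synthetic stand-ins for the questionnaire measures.
parent_pa = rng.normal(size=n)     # parental physical activity
competence = rng.normal(size=n)    # parents' perception of child competence
support = 0.23 * parent_pa + 0.18 * competence + rng.normal(scale=0.9, size=n)
child_pa_home = 0.16 * support + 0.20 * competence + rng.normal(scale=0.9, size=n)

def path_coefs(y, *xs):
    """Least-squares coefficients of y on predictors xs (intercept dropped)."""
    X = np.column_stack([np.ones_like(y), *xs])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta[1:]

b_support = path_coefs(support, parent_pa, competence)    # paths into support
b_child = path_coefs(child_pa_home, support, competence)  # paths into child PA
print(b_support, b_child)
```

With this sample size the fitted coefficients land close to the generating values (0.23, 0.18, 0.16, 0.20), which is the basic consistency check behind path analysis of observed variables.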

Relevance:

60.00%

Abstract:

Background Although the detrimental impact of major depressive disorder (MDD) at the individual level has been described, its global epidemiology remains unclear given limitations in the data. Here we present the modelled epidemiological profile of MDD, dealing with heterogeneity in the data, enforcing internal consistency between epidemiological parameters, and making estimates for world regions with no empirical data. These estimates were used to quantify the burden of MDD for the Global Burden of Disease Study 2010 (GBD 2010). Method Analyses drew on data from our existing literature review of the epidemiology of MDD. DisMod-MR, the latest version of the generic disease modelling system redesigned as a Bayesian meta-regression tool, derived prevalence by age, year and sex for 21 regions. Prior epidemiological knowledge and study- and country-level covariates adjusted sub-optimal raw data. Results There were over 298 million cases of MDD globally at any point in time in 2010, with the highest proportion of cases occurring between 25 and 34 years. Global point prevalence was very similar across time (4.4% (95% uncertainty: 4.2–4.7%) in 1990 and 4.4% (4.1–4.7%) in 2005 and 2010), but higher in females (5.5% (5.0–6.0%)) than in males (3.2% (3.0–3.6%)) in 2010. Regions in conflict had higher prevalence than those with no conflict. The annual incidence of an episode of MDD followed a similar age and regional pattern to prevalence but was about one and a half times higher, consistent with an average episode duration of 37.7 weeks. Conclusion We were able to integrate available data, including those from high-quality surveys and sub-optimal studies, into a model adjusting for known methodological sources of heterogeneity. We were also able to estimate the epidemiology of MDD in regions with no available data. This informed GBD 2010 and the public health field with a clearer understanding of the global distribution of MDD.

Relevance:

60.00%

Abstract:

We present a systematic, practical approach to developing risk prediction systems, suitable for use with large databases of medical information. An important part of this approach is a novel feature selection algorithm which uses the area under the receiver operating characteristic (ROC) curve to measure the expected discriminative power of different sets of predictor variables. We describe this algorithm and use it to select variables to predict risk of a specific adverse pregnancy outcome: failure to progress in labour. Neural network, logistic regression and hierarchical Bayesian risk prediction models are constructed, all of which achieve close to the limit of performance attainable on this prediction task. We show that better prediction performance requires more discriminative clinical information rather than improved modelling techniques. It is also shown that better diagnostic criteria in clinical records would greatly assist the development of systems to predict risk in pregnancy.
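The feature selection idea in this abstract, ranking candidate variable sets by ROC AUC, can be sketched as a greedy forward search. This is an illustrative reconstruction on synthetic data, not the authors' algorithm; the least-squares scoring model and all data are invented for the example.

```python
import numpy as np

def auc(scores, labels):
    """Area under the ROC curve via the Mann-Whitney rank statistic."""
    order = np.argsort(scores)
    ranks = np.empty(len(scores))
    ranks[order] = np.arange(1, len(scores) + 1)
    pos = labels == 1
    n_pos, n_neg = pos.sum(), (~pos).sum()
    return (ranks[pos].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

def fit_auc(X_sub, y):
    """AUC of a least-squares linear score built on the chosen columns."""
    A = np.column_stack([np.ones(len(y)), X_sub])
    w, *_ = np.linalg.lstsq(A, y, rcond=None)
    return auc(A @ w, y)

def greedy_auc_selection(X, y, k):
    """Forward selection: at each step add the feature whose inclusion
    maximizes the AUC of the resulting linear score."""
    chosen = []
    for _ in range(k):
        best = max((f for f in range(X.shape[1]) if f not in chosen),
                   key=lambda f: fit_auc(X[:, chosen + [f]], y))
        chosen.append(best)
    return chosen

# Synthetic check: only features 0 and 2 carry signal.
rng = np.random.default_rng(1)
X = rng.normal(size=(500, 5))
y = (X[:, 0] + X[:, 2] + rng.normal(scale=0.5, size=500) > 0).astype(int)
print(greedy_auc_selection(X, y, 2))
```

On this synthetic data the search recovers the two informative features, which is the behaviour the abstract relies on when comparing predictor sets by expected discriminative power.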

Relevance:

60.00%

Abstract:

One of the problems to be solved in attaining the full potential of hematopoietic stem cell (HSC) applications is the limited availability of the cells. Growing HSCs in a bioreactor offers an alternative solution to this problem. It also eliminates the labour-intensive process, and the contamination risk, involved in the periodic nutrient replenishments of traditional T-flask stem cell cultivation. In spite of this, the optimization of HSC cultivation in a bioreactor has barely been explored. This manuscript discusses the development of a mathematical model to describe the dynamics of nutrient distribution and cell concentration in an ex vivo HSC cultivation in a microchannel perfusion bioreactor. The model was further used to optimize the cultivation by proposing three alternative feeding strategies to prevent the occurrence of nutrient limitation in the bioreactor. The evaluation of these strategies (a periodic step increase in the inlet oxygen concentration, a periodic step increase in the media inflow, and feedback control of the media inflow) shows that they can successfully improve the cell yield of the bioreactor. In general, the developed model is useful for the design and optimization of bioreactor operation.
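The third feeding strategy, feedback control of the media inflow, can be illustrated with a deliberately simplified well-mixed Monod-growth model. The paper's model resolves spatial nutrient gradients in the microchannel; all parameter values here are invented for the sketch.

```python
# Euler simulation of a well-mixed perfusion culture with Monod growth.
mu_max, K_s, yield_c = 0.04, 0.5, 0.5   # 1/h, mmol/L, cells per mmol (invented)
s_in, dt, hours = 5.0, 0.1, 240.0       # feed concentration, step, duration

def simulate(feedback, d0=0.02):
    cells, s, d = 1.0, s_in, d0         # biomass, nutrient, dilution rate
    for _ in range(int(hours / dt)):
        if feedback and s < 1.0:        # feedback control of media inflow:
            d = min(d * 1.05, 0.5)      # raise perfusion when nutrient runs low
        mu = mu_max * s / (K_s + s)     # Monod specific growth rate
        cells += mu * cells * dt
        s += (d * (s_in - s) - mu * cells / yield_c) * dt
        s = max(s, 0.0)
    return cells

print(simulate(feedback=False), simulate(feedback=True))
```

The fixed-inflow run becomes nutrient-limited as the culture grows, while the feedback-controlled run raises perfusion to match demand and ends with a higher cell yield, mirroring the qualitative conclusion of the abstract.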

Relevance:

60.00%

Abstract:

Moose populations are managed for sustainable yield balanced against costs caused by damage to forestry or agriculture and collisions with vehicles. Optimal harvests can be calculated based on a structured population model driven by data on abundance and the composition of bulls, cows, and calves obtained by aerial-survey monitoring during winter. Quotas are established by the respective government agency and licenses are issued to hunters to harvest an animal of specified age or sex during the following autumn. Because the cost of aerial monitoring is high, we use a Management Strategy Evaluation to evaluate the costs and benefits of periodic aerial surveys in the context of moose management. Our on-the-fly "seat of your pants" alternative to independent monitoring is management based solely on the kill of moose by hunters, which is usually sufficient to alert the manager to declines in moose abundance that warrant adjustments to harvest strategies. Harvests are relatively cheap to monitor; therefore, data can be obtained each year facilitating annual adjustments to quotas. Other sources of "cheap" monitoring data such as records of the number of moose seen by hunters while hunting also might be obtained, and may provide further useful insight into population abundance, structure and health. Because conservation dollars are usually limited, the high cost of aerial surveys is difficult to justify when alternative methods exist. © 2012 Elsevier Inc.
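The Management Strategy Evaluation idea, testing a harvest-based feedback rule against a known operating model, can be sketched as follows. The logistic dynamics, the hunter-success curve and the quota rule are all invented for illustration.

```python
def run_mse(harvest_feedback, years=30):
    """Toy Management Strategy Evaluation: logistic moose dynamics with a
    hunter-success curve that rises with abundance. The manager observes only
    the kill and, if the feedback rule is on, cuts the quota by 20% whenever
    the kill falls more than 10% below its historical peak. All numbers are
    invented for illustration."""
    n, k, r = 4000.0, 6000.0, 0.25        # abundance, carrying capacity, growth
    quota, peak_kill = 900.0, 0.0
    for _ in range(years):
        kill = min(quota * n / (n + 2000.0), n)  # hunters fill more of the quota
        n = max(n + r * n * (1.0 - n / k) - kill, 0.0)  # when moose are abundant
        peak_kill = max(peak_kill, kill)
        if harvest_feedback and kill < 0.9 * peak_kill:
            quota *= 0.8
    return n

print(run_mse(True), run_mse(False))   # with vs without harvest-based feedback
```

In this toy, the fixed quota drives the population down over 30 years, while the kill-based feedback rule detects the decline through falling harvests and cuts quotas in time for the population to recover; no aerial survey data are used at all.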

Relevance:

40.00%

Abstract:

Our objective was to determine the factors that lead users to continue working with process modeling grammars after their initial adoption. We examined the explanatory power of three theoretical models of IT usage by applying them to two popular process modeling grammars. We found that a hybrid model of technology acceptance and expectation-confirmation best explained user intentions to continue using the grammars. We examined differences in the model results and used them to provide three contributions. First, the study confirmed the applicability of IT usage models to the domain of process modeling. Second, we discovered that differences in continued usage intentions depended on the grammar type rather than on user characteristics. Third, we suggest implications for research and practice.

Relevance:

40.00%

Abstract:

In this paper, two ideal formation models of serrated chips, the symmetric formation model and the unilateral right-angle formation model, are established for the first time. Based on these ideal models and the related adiabatic shear theory of serrated chip formation, the theoretical relationship among average tooth pitch, average tooth height and chip thickness is obtained. Further, the theoretical relation between the passivation coefficient of the chip's sawtooth and the chip thickness compression ratio is deduced as well. The comparison between these theoretical prediction curves and experimental data shows good agreement, which validates the robustness of the ideal chip formation models and the correctness of the theoretical derivation. The proposed ideal models may provide a simple but effective theoretical basis for subsequent research on serrated chip morphology. Finally, the influences of the principal cutting factors on serrated chip formation are discussed on the basis of a series of finite element simulation results, to give practical advice on controlling serrated chips in engineering applications.

Relevance:

40.00%

Abstract:

Changing the topology of a railway network can greatly affect its capacity. Railway networks, however, can be altered in a multitude of different ways. As each way has significant immediate and long-term financial ramifications, it is a difficult task to decide how and where to expand the network. In response, railway capacity expansion models (RCEM) have been developed to help capacity planning activities and to remove physical bottlenecks in the current railway system. The exact purpose of these models is to decide, given a fixed budget, where track duplications and track sub-divisions should be made in order to increase theoretical capacity most. These models are high-level and strategic, which is why increases to theoretical capacity are concentrated upon. The optimization models have been applied to a case study to demonstrate their application and their worth. The case study clearly shows how automated approaches of this nature could be a formidable alternative to current manual planning techniques and simulation. If the exact effect of track duplications and sub-divisions can be sufficiently approximated, this approach will be very applicable.
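At its core, the budgeted choice of track duplications and sub-divisions is a knapsack-style selection problem. Below is a brute-force sketch with hypothetical costs and capacity gains; the actual RCEM formulations are richer optimization models.

```python
from itertools import combinations

def best_upgrades(options, budget):
    """Exhaustive 0/1 selection: choose the subset of candidate track
    duplications / sub-divisions that maximizes the estimated capacity
    gain without exceeding the budget. Costs and gains are hypothetical."""
    best, best_gain = (), 0.0
    for r in range(len(options) + 1):
        for subset in combinations(options, r):
            cost = sum(o[1] for o in subset)
            gain = sum(o[2] for o in subset)
            if cost <= budget and gain > best_gain:
                best, best_gain = subset, gain
    return [o[0] for o in best], best_gain

# (section, cost in $M, extra trains/day) -- invented numbers
options = [("duplicate A-B", 40, 12), ("duplicate B-C", 55, 15),
           ("subdivide C-D", 20, 6), ("subdivide D-E", 25, 5)]
print(best_upgrades(options, budget=100))  # → (['duplicate A-B', 'duplicate B-C'], 27)
```

Exhaustive search is fine for a handful of candidate works; for realistic networks the same selection would be posed as an integer program.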

Relevance:

30.00%

Abstract:

Modern Engineering Asset Management (EAM) requires the accurate assessment of current, and the prediction of future, asset health condition. Appropriate mathematical models that are capable of estimating times to failure and the probability of failures in the future are essential in EAM. In most real-life situations, the lifetime of an engineering asset is influenced and/or indicated by different factors, termed covariates. Hazard prediction with covariates is an elemental notion in reliability theory: estimating the tendency of an engineering asset to fail instantaneously beyond the current time, given that it has already survived up to the current time. A number of statistical covariate-based hazard models have been developed. However, none of them explicitly incorporates both external and internal covariates into one model. This paper introduces a novel covariate-based hazard model to address this concern. The model is named the Explicit Hazard Model (EHM). Both the semi-parametric and non-parametric forms of the model are presented in the paper. The major purpose of this paper is to illustrate the theoretical development of EHM. Due to page limitation, a case study with reliability field data is presented in the applications part of this study.
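The general shape of a covariate-based hazard model can be illustrated with the familiar proportional-hazards form. This sketch is not EHM itself (which explicitly separates internal and external covariates); the covariate names, coefficients and baseline rate are hypothetical.

```python
import math

def survival(t, beta, covariates, h0=0.01):
    """Proportional-hazards sketch: h(t | x) = h0 * exp(beta . x) with a
    constant (exponential) baseline hazard h0, so S(t) = exp(-h0 * e^(beta.x) * t).
    All names and numbers here are hypothetical."""
    risk = math.exp(sum(b * x for b, x in zip(beta, covariates)))
    return math.exp(-h0 * risk * t)

# e.g. external covariate = vibration level, internal covariate = wear level
s_mild = survival(100, beta=[0.8, 1.2], covariates=[0.1, 0.2])
s_harsh = survival(100, beta=[0.8, 1.2], covariates=[0.9, 0.8])
print(s_mild, s_harsh)
```

Harsher covariate values multiply the hazard and shrink the survival probability at any horizon, which is the mechanism any covariate-based hazard model, EHM included, builds on.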

Relevance:

30.00%

Abstract:

This paper first presents an extended ambiguity resolution model that deals with an ill-posed problem and constraints among the estimated parameters. In the extended model, a regularization criterion is used instead of traditional least squares in order to better estimate the float ambiguities. The existing models can be derived from this general model. Second, the paper examines the existing ambiguity searching methods from four aspects: exclusion of nuisance integer candidates based on the available integer constraints, integer rounding, integer bootstrapping, and integer least squares estimation. Finally, the paper systematically addresses the similarities and differences between the generalized TCAR and decorrelation methods from both theoretical and practical aspects.
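The difference between simple integer rounding and a full integer least squares search is easy to demonstrate when the float ambiguities are strongly correlated. Below is a brute-force toy with invented numbers; real GNSS software would use a LAMBDA-style decorrelated search rather than enumerating offsets.

```python
import numpy as np
from itertools import product

def ils(a_float, Q, radius=2):
    """Brute-force integer least squares: search integer vectors near the
    float solution and minimize (a - z)^T Q^{-1} (a - z). Only practical in
    toy dimensions; shown here to contrast with componentwise rounding."""
    Qinv = np.linalg.inv(Q)
    base = np.round(a_float).astype(int)
    best, best_cost = None, np.inf
    for offset in product(range(-radius, radius + 1), repeat=len(a_float)):
        z = base + np.array(offset)
        r = a_float - z
        cost = r @ Qinv @ r
        if cost < best_cost:
            best, best_cost = z, cost
    return best

# Strongly correlated float ambiguities: rounding and ILS disagree.
a = np.array([0.60, 0.45])
Q = np.array([[0.50, 0.45],
              [0.45, 0.50]])
print(np.round(a).astype(int), ils(a, Q))
```

Componentwise rounding gives [1, 0], but the high positive correlation in Q makes the jointly optimal integer vector [1, 1]; this is exactly why decorrelation before search matters.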

Relevance:

30.00%

Abstract:

In this paper, the problems of three-carrier phase ambiguity resolution (TCAR) and position estimation (PE) are generalized as real-time GNSS data processing problems for a continuously observing network on a large scale. To describe these problems, a general linear equation system is presented to unify the various geometry-free, geometry-based and geometry-constrained TCAR models, along with state transition equations between observation times. With this general formulation, generalized TCAR solutions are given to cover different real-time GNSS data processing scenarios, together with various simplified integer solutions, such as geometry-free rounding and geometry-based LAMBDA solutions with single- and multiple-epoch measurements. In fact, the various ambiguity resolution (AR) solutions differ in their floating ambiguity estimation and integer ambiguity search processes, but they remain theoretically equivalent under the same observational system models and statistical assumptions. TCAR performance benefits outlined in data analyses from the recent literature are reviewed, showing profound implications for future GNSS development from both technology and application perspectives.

Relevance:

30.00%

Abstract:

In this thesis an investigation into theoretical models for the formation and interaction of nanoparticles is presented. The work includes a literature review of current models followed by five chapters of original research. The thesis has been submitted in partial fulfilment of the requirements for the degree of Doctor of Philosophy by publication, and each of the five chapters therefore consists of a peer-reviewed journal article. The thesis concludes with a discussion of what has been achieved during the PhD candidature, the potential applications of this research, and ways in which the research could be extended in the future.

In this thesis we explore stochastic models pertaining to the interaction and evolution mechanisms of nanoparticles. In particular, we explore in depth the stochastic evaporation of molecules due to thermal activation and its ultimate effect on nanoparticle sizes and concentrations. Secondly, we analyse the thermal vibrations of nanoparticles suspended in a fluid and subject to standing oscillating drag forces (as would occur in a standing sound wave), and finally on lattice surfaces in the presence of high heat gradients.

We describe a number of new models for multicompartment networks joined by multiple, stochastically evaporating links. The primary motivation for this work is the description of thermal fragmentation, in which multiple molecules holding parts of a carbonaceous nanoparticle may evaporate. Ultimately, these models predict the rate at which the network or aggregate fragments into smaller networks/aggregates, and with what aggregate size distribution. The models are highly analytic and describe the fragmentation of a link holding multiple bonds using Markov processes that best describe different physical situations; these processes have been analysed using a number of mathematical methods. The fragmentation of the network/aggregate is then predicted using combinatorial arguments. Whilst there is some scepticism in the scientific community about the proposed mechanism of thermal fragmentation, we present compelling evidence in this thesis supporting the currently proposed mechanism and show that our models can accurately match experimental results. This was achieved using a realistic simulation of the fragmentation of the fractal carbonaceous aggregate structure using our models.

Furthermore, a method of manipulation using acoustic standing waves is investigated. We analysed the effect of frequency and particle size on the ability of a particle to be manipulated by a standing acoustic wave. We report the existence of a critical frequency for a particular particle size; this frequency is inversely proportional to the Stokes time of the particle in the fluid. We also find that at large frequencies the subtle Brownian motion of even larger particles plays a significant role in the efficacy of the manipulation, due to the decreasing size of the boundary layer between acoustic nodes. Our model uses a multiple-time-scale approach to calculate the long-term effects of the standing acoustic field on particles interacting with the sound. These effects are then combined with the effects of Brownian motion to obtain a complete mathematical description of the particle dynamics in such acoustic fields.

Finally, we develop a numerical routine for the description of "thermal tweezers". Currently the technique of thermal tweezers is predominantly theoretical, although a handful of successful experiments have demonstrated the effect in practice. Thermal tweezers is the name given to the way in which particles can be easily manipulated on a lattice surface by careful selection of a heat distribution over the surface. Typically, theoretical simulations of the effect can be rather time consuming, with supercomputer facilities processing data over days or even weeks. Our alternative numerical method for simulating particle distributions in the thermal tweezers effect uses the Fokker-Planck equation to derive a quick numerical method for calculating the effective diffusion constant that results from the lattice and the temperature. We then use this diffusion constant and solve the diffusion equation numerically using the finite volume method. This spares the algorithm from calculating many individual particle trajectories, since it describes the flow of the probability distribution of particles in a continuous manner. The alternative method outlined in this thesis can produce a larger quantity of accurate results on a household PC in a matter of hours, which is much better than was previously achievable.
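The link-evaporation idea above, a link held by several molecules that evaporate stochastically, can be sketched as a simple Markov chain whose mean absorption time has a closed form when the evaporations are independent and exponential. The rate and bond count below are illustrative, not taken from the thesis.

```python
import random

def mean_fragmentation_time(bonds, rate, trials=20000, seed=3):
    """Monte Carlo estimate of the mean time for a link held by `bonds`
    molecules, each evaporating independently at `rate`, to fully detach.
    While m bonds remain, the next evaporation arrives at rate m * rate,
    so the exact mean is the harmonic sum H_bonds / rate."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        t, m = 0.0, bonds
        while m > 0:
            t += rng.expovariate(m * rate)   # waiting time to next evaporation
            m -= 1
        total += t
    return total / trials

exact = sum(1.0 / j for j in range(1, 6))    # H_5 for a 5-bond link, rate = 1
print(mean_fragmentation_time(5, 1.0), exact)
```

The simulated mean matches the analytic harmonic-sum result, the kind of closed-form check that makes these Markov link models attractive before layering on the combinatorial fragmentation arguments.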

Relevance:

30.00%

Abstract:

This thesis addresses computational challenges arising from Bayesian analysis of complex real-world problems. Many of the models and algorithms designed for such analysis are 'hybrid' in nature, in that they are a composition of components whose individual properties may be easily described, while the performance of the model or algorithm as a whole is less well understood. The aim of this research project is to offer a better understanding of the performance of hybrid models and algorithms. The goal of this thesis is to analyse the computational aspects of hybrid models and hybrid algorithms in the Bayesian context.

The first objective of the research focuses on computational aspects of hybrid models, notably a continuous finite mixture of t-distributions. In the mixture model, an inference of interest is the number of components, as this may relate to both the quality of model fit to data and the computational workload. The analysis of t-mixtures using Markov chain Monte Carlo (MCMC) is described and the model is compared to the Normal case based on goodness of fit. Through simulation studies, it is demonstrated that the t-mixture model can be more flexible and more parsimonious in terms of the number of components, particularly for skewed and heavy-tailed data. The study also reveals important computational issues associated with the use of t-mixtures, which have not been adequately considered in the literature.

The second objective of the research focuses on computational aspects of hybrid algorithms for Bayesian analysis. Two approaches are considered: a formal comparison of the performance of a range of hybrid algorithms, and a theoretical investigation of the performance of one of these algorithms in high dimensions. For the first approach, the delayed rejection algorithm, the pinball sampler, the Metropolis-adjusted Langevin algorithm, and the hybrid version of the population Monte Carlo (PMC) algorithm are selected as examples of hybrid algorithms. The statistical literature often treats statistical efficiency as the only criterion for an efficient algorithm; in this thesis the algorithms are also considered and compared from a more practical perspective. This extends to the study of how individual components contribute to the overall efficiency of hybrid algorithms, and highlights weaknesses that may be introduced by the process of combining these components in a single algorithm.

The second approach to considering computational aspects of hybrid algorithms involves an investigation of the performance of the PMC in high dimensions. It is well known that as a model becomes more complex, computation may become increasingly difficult in real time. In particular, importance-sampling-based algorithms, including the PMC, are known to be unstable in high dimensions. This thesis examines the PMC algorithm in a simplified setting, a single step of the general sampling scheme, and explores a fundamental problem that occurs in applying importance sampling to a high-dimensional problem. The precision of the computed estimate from the simplified setting is measured by the asymptotic variance of the estimate under conditions on the importance function. The exponential growth of the asymptotic variance with the dimension is demonstrated, and we illustrate that the optimal covariance matrix for the importance function can be estimated in a special case.
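The instability of importance sampling in high dimensions can be demonstrated directly: with a fixed mismatched proposal, the variance of the importance weights grows exponentially with dimension and the effective sample size collapses. The Gaussian target, overdispersed Gaussian proposal and all settings below are assumptions made for this small sketch, not the thesis's setup.

```python
import numpy as np

def ess(dim, n=5000, sigma=1.5, seed=0):
    """Effective sample size of importance sampling with target N(0, I_d)
    and overdispersed proposal N(0, sigma^2 I_d). The weight variance grows
    geometrically with dimension, so the ESS collapses as d increases."""
    rng = np.random.default_rng(seed)
    x = rng.normal(scale=sigma, size=(n, dim))
    # log weight = log target - log proposal (constants cancel in the ESS)
    log_w = (-0.5 * (x ** 2).sum(1)
             + 0.5 * (x ** 2).sum(1) / sigma ** 2
             + dim * np.log(sigma))
    w = np.exp(log_w - log_w.max())
    return w.sum() ** 2 / (w ** 2).sum()

for d in (1, 10, 50):
    print(d, ess(d))
```

With 5000 samples the ESS is close to the sample size at d = 1 but drops by orders of magnitude by d = 50, the phenomenon the asymptotic-variance analysis formalizes.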

Relevance:

30.00%

Abstract:

This dissertation is primarily an applied statistical modelling investigation, motivated by a case study comprising real data and real questions. Theoretical questions on the modelling and computation of normalization constants arose from pursuit of these data-analytic questions. The essence of the thesis can be described as follows.

Consider binary data observed on a two-dimensional lattice. A common problem with such data is the ambiguity of the zeroes recorded: these may represent a zero response given some threshold (presence), or that the threshold has not been triggered (absence). Suppose that the researcher wishes to estimate the effects of covariates on the binary responses, whilst taking into account underlying spatial variation, which is itself of some interest. This situation arises in many contexts; the dingo, cypress and toad case studies described in the motivation chapter are examples.

Two main approaches to modelling and inference are investigated in this thesis. The first is frequentist and based on generalized linear models, with spatial variation modelled using a block structure or by smoothing the residuals spatially. The EM algorithm can be used to obtain point estimates, coupled with bootstrapping or asymptotic MLE estimates for standard errors. The second approach is Bayesian and based on a three- or four-tier hierarchical model, comprising a logistic regression with covariates for the data layer, a binary Markov random field (MRF) for the underlying spatial process, and suitable priors for the parameters of these main models. The three-parameter autologistic model is a particular MRF of interest. Markov chain Monte Carlo (MCMC) methods comprising hybrid Metropolis/Gibbs samplers are suitable for computation in this situation. Model performance can be gauged by MCMC diagnostics, and model choice can be assessed by incorporating another tier in the modelling hierarchy. This requires evaluation of a normalization constant, a notoriously difficult problem.

The difficulty of estimating the normalization constant for the MRF can be overcome by using a path integral approach, although this is a highly computationally intensive method. Different methods of estimating ratios of normalization constants (NCs) are investigated, including importance sampling Monte Carlo (ISMC), dependent Monte Carlo based on MCMC simulations (MCMC), and reverse logistic regression (RLR). I develop an idea present, though not fully developed, in the literature, and propose the integrated mean canonical statistic (IMCS) method for estimating log NC ratios for binary MRFs. The IMCS method falls within the framework of the newly identified path sampling methods of Gelman & Meng (1998) and outperforms ISMC, MCMC and RLR. It also does not rely on simplifying assumptions, such as ignoring spatio-temporal dependence in the process. A thorough investigation is made of the application of IMCS to the three-parameter autologistic model. This work introduces the background computations required for the full implementation of the four-tier model in Chapter 7.

Two different extensions of the three-tier model to a four-tier version are investigated. The first extension incorporates temporal dependence in the underlying spatio-temporal process. The second extension allows the successes and failures in the data layer to depend on time. The MCMC computational method is extended to incorporate the extra layer. A major contribution of the thesis is the development, for the first time, of a fully Bayesian approach to inference for these hierarchical models.

Note: the author of this thesis has agreed to make it open access but invites people downloading the thesis to send her an email via the 'Contact Author' function.
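The binary MRF layer in such a hierarchy can be simulated with a single-site Gibbs sampler. The sketch below uses a two-parameter autologistic (a logistic regression on the four-neighbour sum) rather than the three-parameter model of the thesis, and all parameter values are invented.

```python
import math
import random

def gibbs_autologistic(size=16, beta0=0.0, beta1=0.4, sweeps=200, seed=1):
    """Single-site Gibbs sampler for a two-parameter autologistic model on a
    square lattice: logit P(x_ij = 1 | rest) = beta0 + beta1 * (sum of the
    four neighbours). A toy stand-in for the thesis's three-parameter model."""
    rng = random.Random(seed)
    x = [[rng.randint(0, 1) for _ in range(size)] for _ in range(size)]
    for _ in range(sweeps):
        for i in range(size):
            for j in range(size):
                s = sum(x[a][b]
                        for a, b in ((i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1))
                        if 0 <= a < size and 0 <= b < size)
                p = 1.0 / (1.0 + math.exp(-(beta0 + beta1 * s)))
                x[i][j] = 1 if rng.random() < p else 0
    return x

field = gibbs_autologistic()
print(sum(map(sum, field)) / 16 ** 2)   # proportion of ones
```

With a positive neighbour coefficient the sampler produces spatially clumped fields with density above one half; the full-conditional form used here is exactly why Gibbs updates are convenient for MRFs even though the joint normalization constant is intractable.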