820 results for EXPONENTIAL MODEL
in Queensland University of Technology - ePrints Archive
Abstract:
In recent years, a number of phylogenetic methods have been developed for estimating molecular rates and divergence dates under models that relax the molecular clock constraint by allowing rate change throughout the tree. These methods are being used with increasing frequency, but there have been few studies into their accuracy. We tested the accuracy of several relaxed-clock methods (penalized likelihood and Bayesian inference using various models of rate change) using nucleotide sequences simulated on a nine-taxon tree. When the sequences evolved with a constant rate, the methods were able to infer rates accurately, but estimates were more precise when a molecular clock was assumed. When the sequences evolved under a model of autocorrelated rate change, rates were accurately estimated using penalized likelihood and by Bayesian inference using lognormal and exponential models of rate change, while other models did not perform as well. When the sequences evolved under a model of uncorrelated rate change, only Bayesian inference using an exponential rate model performed well. Collectively, the results provide a strong recommendation for using the exponential model of rate change if a conservative approach to divergence time estimation is required. A case study is presented in which we use a simulation-based approach to examine the hypothesis of elevated rates in the Cambrian period, and it is found that these high rate estimates might be an artifact of the rate estimation method. If this bias is present, then the ages of metazoan divergences would be systematically underestimated. The results of this study have implications for studies of molecular rates and divergence dates.
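The rate-change models compared above can be illustrated by drawing per-branch rates for a hypothetical tree. This is a minimal sketch with illustrative parameter values (the mean rate, branch count, and lognormal spread are assumptions, not values from the study); it shows how the strict clock, a lognormal model, and the recommended exponential model differ in rate variability.

```python
import numpy as np

rng = np.random.default_rng(42)
mean_rate = 1e-3   # substitutions/site/Myr (illustrative)
n_branches = 16    # branches of a hypothetical tree

# Strict clock: every branch shares one rate.
clock_rates = np.full(n_branches, mean_rate)

# Lognormal rate model: each branch's rate drawn independently,
# parameterised so the expected rate equals mean_rate.
sigma = 0.5
mu = np.log(mean_rate) - sigma**2 / 2
lognormal_rates = rng.lognormal(mu, sigma, n_branches)

# Exponential rate model: rates drawn from an exponential distribution
# with mean mean_rate (the model the study recommends as conservative).
exponential_rates = rng.exponential(mean_rate, n_branches)

for name, r in [("clock", clock_rates),
                ("lognormal", lognormal_rates),
                ("exponential", exponential_rates)]:
    print(f"{name:12s} mean={r.mean():.2e}  cv={r.std() / r.mean():.2f}")
```

The exponential model's coefficient of variation is fixed at 1, which is why it accommodates large rate heterogeneity and acts conservatively in divergence-time estimation.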
Abstract:
The method of generalised estimating equations for regression modelling of clustered outcomes allows for specification of a working matrix that is intended to approximate the true correlation matrix of the observations. We investigate the asymptotic relative efficiency of the generalised estimating equation for the mean parameters when the correlation parameters are estimated by various methods. The asymptotic relative efficiency depends on three features of the analysis, namely (i) the discrepancy between the working correlation structure and the unobservable true correlation structure, (ii) the method by which the correlation parameters are estimated and (iii) the 'design', by which we refer to both the structures of the predictor matrices within clusters and the distribution of cluster sizes. Analytical and numerical studies of realistic data-analysis scenarios show that choice of working covariance model has a substantial impact on regression estimator efficiency. Protection against avoidable loss of efficiency associated with covariance misspecification is obtained when a 'Gaussian estimation' pseudolikelihood procedure is used with an AR(1) structure.
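A minimal sketch of how the working correlation enters the estimating equation, for a Gaussian identity-link model with a fixed AR(1) correlation parameter (the function names and the fixed-alpha simplification are assumptions for illustration; the paper's point concerns how that parameter is estimated):

```python
import numpy as np

def ar1_corr(n, alpha):
    """AR(1) working correlation matrix R[j, k] = alpha**|j - k|."""
    idx = np.arange(n)
    return alpha ** np.abs(idx[:, None] - idx[None, :])

def gee_gaussian(X_clusters, y_clusters, alpha):
    """Solve sum_i X_i' R_i^{-1} (y_i - X_i beta) = 0 for an
    identity-link Gaussian model with AR(1) working correlation.
    For a linear mean this reduces to one weighted-least-squares solve."""
    A, b = 0.0, 0.0
    for X, y in zip(X_clusters, y_clusters):
        Rinv = np.linalg.inv(ar1_corr(len(y), alpha))
        A = A + X.T @ Rinv @ X
        b = b + X.T @ Rinv @ y
    return np.linalg.solve(A, b)

rng = np.random.default_rng(0)
beta_true = np.array([1.0, 2.0])
X_clusters, y_clusters = [], []
for _ in range(50):                 # 50 clusters of size 4
    X = np.column_stack([np.ones(4), rng.normal(size=4)])
    y = X @ beta_true + rng.normal(size=4)
    X_clusters.append(X); y_clusters.append(y)

beta_ind = gee_gaussian(X_clusters, y_clusters, alpha=0.0)  # independence
beta_ar1 = gee_gaussian(X_clusters, y_clusters, alpha=0.5)  # AR(1)
print(beta_ind, beta_ar1)
```

Both working structures give consistent estimates of the mean parameters; the abstract's efficiency question is about how much their sampling variance differs when the working structure or the estimator of alpha is a poor match for the truth.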
Abstract:
Here, we present the results of two genome-wide scans in two diverse populations in which a consistent use of recently introduced migraine-phenotyping methods detects and replicates a locus on 10q22-q23, with an additional independent replication. No genetic variants have been convincingly established in migraine, and although several loci have been reported, none of them has been consistently replicated. We employed the three known migraine-phenotyping methods (clinical end diagnosis, latent-class analysis, and trait-component analysis) with robust multiple testing correction in a large sample set of 1675 individuals from 210 migraine families from Finland and Australia. Genome-wide multipoint linkage analysis that used the Kong and Cox exponential model in Finns detected a locus on 10q22-q23 with highly significant evidence of linkage (LOD 7.68 at 103 cM in female-specific analysis). The Australian sample showed a LOD score of 3.50 at the same locus (100 cM), as did the independent Finnish replication study (LOD score 2.41, at 102 cM). In addition, four previously reported loci on 8q21, 14q21, 18q12, and Xp21 were also replicated. A shared-segment analysis of 10q22-q23 linked Finnish families identified a 1.6-9.5 cM segment, centered on 101 cM, which shows in-family homology in 95% of affected Finns. This region was further studied with 1323 SNPs. Although no significant association was observed, four regions warranting follow-up studies were identified. These results support the use of symptomology-based phenotyping in migraine and suggest that the 10q22-q23 locus probably contains one or more migraine susceptibility variants.
Abstract:
Passive air samplers (PAS) consisting of polyurethane foam (PUF) disks were deployed at 6 outdoor air monitoring stations in different land use categories (commercial, industrial, residential and semi-rural) to assess the spatial distribution of polybrominated diphenyl ethers (PBDEs) in the Brisbane airshed. Air monitoring sites covered an area of 1143 km2 and PAS were allowed to accumulate PBDEs in the city's airshed over three consecutive seasons commencing in the winter of 2008. The average sum of five PBDEs (∑5; BDEs 28, 47, 99, 100 and 209) was highest at the commercial and industrial sites (12.7 ± 5.2 ng PUF−1), which were relatively close to the city centre and were a factor of 8 higher than at the residential and semi-rural sites located in outer Brisbane. To estimate the magnitude of the urban 'plume', an empirical exponential decay model was used to fit PAS data vs. distance from the CBD, with the best correlation observed when the particulate-bound BDE-209 was excluded (∑5-209) (r2 = 0.99), rather than included (∑5; r2 = 0.84). At 95% confidence intervals the model predicts that, regardless of site characterisation, ∑5-209 concentrations in a PAS sample taken between 4 and 10 km from the city centre would be half those from a sample taken at the city centre, and would reach a baseline, or plateau (0.6 to 1.3 ng PUF−1), approximately 30 km from the CBD. The observed exponential decay in ∑5-209 levels over distance corresponded with Brisbane's decreasing population density (persons/km2) away from the city centre. The residual error associated with the model increased significantly when BDE-209 levels were included, primarily due to the highest level (11.4 ± 1.8 ng PUF−1) being consistently detected at the industrial site, indicating a potential primary source at this site. Active air samples collected alongside the PAS at the industrial air monitoring site (B) indicated that BDE-209 dominated the congener composition and was entirely associated with the particulate phase.
This study demonstrates that PAS are effective tools for monitoring citywide regional differences; however, interpretation of spatial trends for POPs that are predominantly associated with the particulate phase, such as BDE-209, may be restricted to identifying 'hotspots' rather than broad spatial trends.
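The empirical exponential-decay-to-a-plateau model described above can be sketched as follows. The data points here are synthetic illustrative values, not the study's measurements, and the functional form is one common parameterisation of "decay to a baseline":

```python
import numpy as np
from scipy.optimize import curve_fit

def decay(d, c0, baseline, k):
    """Concentration vs distance: exponential decay to a plateau."""
    return (c0 - baseline) * np.exp(-k * d) + baseline

# Synthetic (illustrative) PAS data: ng per PUF disk vs km from the CBD.
dist = np.array([0.0, 2.0, 5.0, 10.0, 18.0, 30.0])
conc = np.array([12.0, 8.5, 5.0, 2.5, 1.4, 1.0])

popt, _ = curve_fit(decay, dist, conc, p0=[12.0, 1.0, 0.2])
c0, baseline, k = popt

# Distance at which the above-baseline signal halves.
half_dist = np.log(2.0) / k
print(f"k = {k:.3f} /km, half-distance = {half_dist:.1f} km, "
      f"plateau = {baseline:.2f} ng PUF^-1")
```

The fitted decay constant k directly yields the half-distance (ln 2 / k) and the plateau parameter corresponds to the baseline concentration the abstract reports about 30 km from the CBD.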
Abstract:
For the timber industry, the ability to simulate the drying of wood is invaluable for manufacturing high quality wood products. Mathematically, however, modelling the drying of a wet porous material, such as wood, is a difficult task due to its heterogeneous and anisotropic nature, and the complex geometry of the underlying pore structure. The well-developed macroscopic modelling approach involves writing down classical conservation equations at a length scale where physical quantities (e.g., porosity) can be interpreted as averaged values over a small volume (typically containing hundreds or thousands of pores). This averaging procedure produces balance equations that resemble those of a continuum, with the exception that effective coefficients appear in their definitions. Exponential integrators are numerical schemes for initial value problems involving a system of ordinary differential equations. These methods differ from popular Newton–Krylov implicit methods (i.e., those based on the backward differentiation formulae (BDF)) in that they do not require the solution of a system of nonlinear equations at each time step, but rather require computation of matrix–vector products involving the exponential of the Jacobian matrix. Although originally appearing in the 1960s, exponential integrators have recently experienced a resurgence in interest due to a greater undertaking of research in Krylov subspace methods for matrix function approximation. One of the simplest examples of an exponential integrator is the exponential Euler method (EEM), which requires, at each time step, approximation of φ(A)b, where φ(z) = (e^z − 1)/z, A ∈ ℝ^(n×n) and b ∈ ℝ^n. For drying in porous media, the most comprehensive macroscopic formulation is TransPore [Perre and Turner, Chem. Eng. J., 86: 117–131, 2002], which features three coupled, nonlinear partial differential equations.
The focus of the first part of this thesis is the use of the exponential Euler method (EEM) for performing the time integration of the macroscopic set of equations featured in TransPore. In particular, a new variable-stepsize algorithm for EEM is presented within a Krylov subspace framework, which allows control of the error during the integration process. The performance of the new algorithm highlights the great potential of exponential integrators not only for drying applications but across all disciplines of transport phenomena. For example, when applied to well-known benchmark problems involving single-phase liquid flow in heterogeneous soils, the proposed algorithm requires half the number of function evaluations required by an equivalent (sophisticated) Newton–Krylov BDF implementation. Furthermore, for all drying configurations tested, the new algorithm always produces, in less computational time, a solution of higher accuracy than the existing backward Euler module featured in TransPore. Some new results relating to Krylov subspace approximation of φ(A)b are also developed in this thesis. Most notably, an alternative derivation of the approximation error estimate of Hochbruck, Lubich and Selhofer [SIAM J. Sci. Comput., 19(5): 1552–1574, 1998] is provided, which reveals why it performs well in the error control procedure. The two main drawbacks of the macroscopic approach outlined above are that the effective coefficients must be supplied to the model, and that it fails for some drying configurations in which typical dual-scale mechanisms occur. In the second part of this thesis, a new dual-scale approach for simulating wood drying is proposed that couples the porous medium (macroscale) with the underlying pore structure (microscale).
The proposed model is applied to the convective drying of softwood at low temperatures and is valid in the so-called hygroscopic range, where hygroscopically held liquid water is present in the solid phase and water exists only as vapour in the pores. Coupling between scales is achieved by imposing the macroscopic gradient on the microscopic field using suitably defined periodic boundary conditions, which allows the macroscopic flux to be defined as an average of the microscopic flux over the unit cell. This formulation provides a first step for moving from the macroscopic formulation featured in TransPore to a comprehensive dual-scale formulation capable of addressing any drying configuration. Simulation results reported for a sample of spruce highlight the potential and flexibility of the new dual-scale approach. In particular, for a given unit cell configuration it is not necessary to supply the effective coefficients prior to each simulation.
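The EEM building block described above, approximation of φ(A)b with φ(z) = (e^z − 1)/z, can be sketched with dense linear algebra via the standard augmented-matrix identity. This is a stand-in for the Krylov machinery the thesis develops, adequate only for small systems:

```python
import numpy as np
from scipy.linalg import expm

def phi1_times_b(A, b):
    """Compute phi(A) b with phi(z) = (e^z - 1)/z, using the identity
    expm([[A, b], [0, 0]]) = [[expm(A), phi(A) b], [0, 1]].
    A dense stand-in for the Krylov approximation used in practice."""
    n = len(b)
    M = np.zeros((n + 1, n + 1))
    M[:n, :n] = A
    M[:n, n] = b
    return expm(M)[:n, n]

def exponential_euler_step(f, jac, y, h):
    """One EEM step: y_{n+1} = y_n + h * phi(h J) f(y_n)."""
    return y + h * phi1_times_b(h * jac(y), f(y))

# For a linear problem y' = A y the method is exact, which gives a
# convenient sanity check against the matrix exponential.
A = np.array([[-2.0, 1.0],
              [0.0, -0.5]])
f = lambda y: A @ y
jac = lambda y: A
y0 = np.array([1.0, 1.0])
h = 0.1
y1 = exponential_euler_step(f, jac, y0, h)
print(y1, expm(h * A) @ y0)
```

The comparison against expm(hA) y0 illustrates why EEM avoids the nonlinear solves of BDF methods: the implicitness is absorbed into the φ(A)b product, which is exactly the quantity the Krylov subspace approximation targets.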
Abstract:
We have used electronic structure calculations to investigate the 1,2-dehydration of alcohols as a model for water loss during the pyrolysis of carbohydrates found in biomass. Reaction enthalpies and energy barriers have been calculated for neat alcohols, protonated alcohols and alcohols complexed to alkali metal ions (Li+ and Na+). We have estimated pre-exponential A factors in order to obtain gas phase rate constants. For neat alcohols, the barrier to 1,2-dehydration is about 67 kcal mol−1, which is consistent with the limited experimental data. Protonation and metal complexation significantly reduce this activation barrier and thus facilitate more rapid reaction. With the addition of alkali metals, the rate of dehydration can increase by a factor of 10^8, while addition of a proton can lead to an increase of a factor of 10^23.
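The connection between barrier reduction and rate enhancement follows from the Arrhenius relation with an unchanged pre-exponential A factor: k2/k1 = exp(ΔEa/RT). A short worked sketch (the 800 K temperature is an assumed illustrative pyrolysis temperature, not a value from the abstract):

```python
import math

R = 1.987e-3  # gas constant, kcal mol^-1 K^-1
T = 800.0     # assumed illustrative pyrolysis temperature, K

def rate_ratio(delta_ea):
    """Factor by which the rate increases when the activation barrier
    drops by delta_ea (kcal/mol), assuming the pre-exponential A factor
    is unchanged: k2/k1 = exp(delta_ea / (R T))."""
    return math.exp(delta_ea / (R * T))

# At 800 K, each 2.303*R*T = 3.66 kcal/mol of barrier reduction is one
# order of magnitude in rate.
print(rate_ratio(2.303 * R * T))   # one decade in rate

# Barrier reduction implied by a 10^8 enhancement at this temperature.
print(math.log(1e8) * R * T)       # kcal/mol
```

Under this assumed temperature, the reported 10^8 enhancement from alkali metal complexation corresponds to a barrier reduction of roughly 29 kcal/mol, consistent in spirit with the large reductions relative to the 67 kcal/mol neat-alcohol barrier.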
Abstract:
This paper develops maximum likelihood (ML) estimation schemes for finite-state semi-Markov chains in white Gaussian noise. We assume that the semi-Markov chain is characterised by transition probabilities of known parametric form with unknown parameters. We reformulate this hidden semi-Markov model (HSM) problem in the scalar case as a two-vector homogeneous hidden Markov model (HMM) problem in which the state consists of the signal augmented by the time since the last transition. With this reformulation we apply the expectation-maximisation (EM) algorithm to obtain ML estimates of the transition probability parameters, Markov state levels and noise variance. To demonstrate our proposed schemes, motivated by neuro-biological applications, we use a damped sinusoidal parameterised function for the transition probabilities.
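The abstract specifies the transition-probability family only as "damped sinusoidal"; one minimal illustrative parameterisation, with entirely assumed functional form and parameter values, might look like this, where tau is the time-since-last-transition component appended to the state in the HMM reformulation:

```python
import numpy as np

def damped_sin_prob(tau, a=0.8, lam=0.1, omega=0.5):
    """Illustrative damped-sinusoidal transition probability as a
    function of tau, the time since the last transition. The squared
    sine keeps it non-negative; clipping keeps it a valid probability.
    Form and parameters are assumptions, not the paper's."""
    p = a * np.exp(-lam * tau) * np.sin(omega * tau) ** 2
    return np.clip(p, 0.0, 1.0)

taus = np.arange(0, 20)
probs = damped_sin_prob(taus)
print(np.round(probs, 3))
```

Because the probability of leaving a state depends on tau rather than being constant, sojourn times are non-geometric, which is precisely what distinguishes the semi-Markov chain from an ordinary Markov chain and motivates the state augmentation.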
Abstract:
Background The benign reputation of Plasmodium vivax is at odds with the burden and severity of the disease. This reputation, combined with restricted in vitro techniques, has slowed efforts to gain an understanding of the parasite biology and its interaction with the human host. Methods A simulation model of the within-host dynamics of P. vivax infection is described, incorporating distinctive characteristics of the parasite such as the preferential invasion of reticulocytes and hypnozoite production. The developed model is fitted using digitized time series from historic neurosyphilis studies, and subsequently validated against summary statistics from a larger study of the same population. The Chesson relapse pattern was used to demonstrate the impact of released hypnozoites. Results The typical pattern for the dynamics of the parasite population is a rapid exponential increase in the first 10 days, followed by a gradual decline. Gametocyte counts follow a similar trend, but are approximately two orders of magnitude lower. The model predicts that, on average, an infected naïve host in the absence of treatment becomes infectious 7.9 days post patency and is infectious for a mean of 34.4 days. In the absence of treatment, the effect of hypnozoite release was not apparent, as newly released parasites were obscured by the existing infection. Conclusions The results from the model provide useful insights into the dynamics of P. vivax infection in human hosts, in particular the timing of host infectiousness and the role of the hypnozoite in perpetuating infection.
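The qualitative pattern the model produces, rapid early exponential growth followed by gradual immune-mediated decline, can be caricatured in a few lines. This is a toy difference equation with assumed parameter values, not the fitted within-host model:

```python
import numpy as np

# Toy sketch: parasites multiply each 48 h erythrocytic cycle while a
# growing immune response removes an increasing fraction, giving the
# rapid early rise and gradual decline. All parameters are assumed.
days = np.arange(0, 60, 2)
growth = 8.0           # effective multiplication per 48 h cycle
clearance_rate = 0.3   # per-cycle growth of the immune kill fraction

parasites = np.empty(len(days))
parasites[0] = 10.0    # initial parasite load (arbitrary units)
for t in range(1, len(days)):
    kill = 1.0 - np.exp(-clearance_rate * t)   # immune kill fraction
    parasites[t] = parasites[t - 1] * growth * (1.0 - kill)

# Gametocytes trail the asexual counts by roughly two orders of
# magnitude, as in the abstract.
gametocytes = 0.01 * parasites

peak_day = days[np.argmax(parasites)]
print(f"peak parasitaemia on day {peak_day}")
```

Even this caricature reproduces the shape described in the Results: growth dominates while the kill fraction is small, and the trajectory turns over once per-cycle clearance outpaces multiplication.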
Abstract:
Spatial data analysis has become increasingly important in the studies of ecology and economics during the last decade. One focus of spatial data analysis is how to select predictors, variance functions and correlation functions. In general, however, the true covariance function is unknown and the working covariance structure is often misspecified. In this paper, our target is to find a good strategy for identifying the best model from the candidate set using model selection criteria. This paper evaluates the ability of several information criteria (the corrected Akaike information criterion, the Bayesian information criterion (BIC) and the residual information criterion (RIC)) to choose the optimal model when the working correlation function, the working variance function and the working mean function are correct or misspecified. Simulations are carried out for small to moderate sample sizes. Four candidate covariance functions (exponential, Gaussian, Matérn and rational quadratic) are used in the simulation studies. From the simulation results, we find that a misspecified working correlation structure can still capture some spatial correlation information in model fitting. When the sample size is large enough, BIC and RIC perform well even if the working covariance is misspecified. Moreover, the performance of these information criteria is related to the average level of model fit, which can be indicated by the average adjusted R-squared, and overall RIC performs well.
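The four candidate covariance families can be written as correlation functions of distance h with a range parameter r. These are common parameterisations (the paper's exact forms, and its Matérn smoothness, may differ), together with the standard BIC formula used for the comparison:

```python
import numpy as np

# Common parameterisations of the four candidate spatial correlation
# functions, as functions of distance h and range parameter r.
def cov_exponential(h, r):
    return np.exp(-h / r)

def cov_gaussian(h, r):
    return np.exp(-(h / r) ** 2)

def cov_rational_quadratic(h, r):
    return 1.0 / (1.0 + (h / r) ** 2)

def cov_matern_32(h, r):
    """Matern with smoothness nu = 3/2, one common special case."""
    s = np.sqrt(3.0) * h / r
    return (1.0 + s) * np.exp(-s)

def bic(loglik, n_params, n_obs):
    """Bayesian information criterion: smaller is better."""
    return -2.0 * loglik + n_params * np.log(n_obs)

h = np.linspace(0.0, 3.0, 7)
for f in (cov_exponential, cov_gaussian,
          cov_matern_32, cov_rational_quadratic):
    print(f"{f.__name__:24s}", np.round(f(h, 1.0), 3))
```

All four equal 1 at h = 0 and decay monotonically, but at different smoothness near the origin, which is exactly what a misspecified working choice gets wrong and what the information criteria are being asked to detect.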
Abstract:
Fleck and Johnson (Int. J. Mech. Sci. 29 (1987) 507) and Fleck et al. (Proc. Inst. Mech. Eng. 206 (1992) 119) have developed foil rolling models which allow for large deformations in the roll profile, including the possibility that the rolls flatten completely. However, these models require computationally expensive iterative solution techniques. A new approach to the approximate solution of the Fleck et al. (1992) Influence Function Model has been developed using both analytic and approximation techniques. The numerical difficulties arising from solving an integral equation in the flattened region have been reduced by applying an inverse Hilbert transform to obtain an analytic expression for the pressure. The method described in this paper is applicable both to cases where a flattened region is present and to cases where it is not.