116 results for Asymptotic Normality
Abstract:
The exchange of iron species from iron(III) chloride solutions with a strong acid cation resin has been investigated in relation to a variety of water and wastewater applications. A detailed equilibrium isotherm analysis was conducted wherein models such as Langmuir-Vageler, Competitive Langmuir, Freundlich, Temkin, Dubinin-Astakhov, Sips and Brouers-Sotolongo were applied to the experimental data. An important conclusion was that both the bottle-point method and the solution normality used to generate the ion exchange equilibrium information influenced which sorption model fitted the isotherm profiles optimally. Invariably, the calculated value for the maximum loading of iron on the strong acid cation resin was substantially higher than the value of 47.1 g/kg of resin which would occur if one Fe3+ ion exchanged for three H+ sites on the resin surface. Consequently, it was suggested that above pH 1, various iron complexes sorbed to the resin in a manner which required fewer than three sites per iron moiety. Column trials indicated an iron loading of 86.6 g/kg of resin when a solution containing 1342 mg/L Fe(III) ions was passed through the bed at 31.7 bed volumes per hour. Regeneration with 5 to 10% HCl solutions reclaimed approximately 90% of the exchange sites.
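The 47.1 g/kg ceiling follows directly from exchange stoichiometry; a minimal sketch of the arithmetic (iron's molar mass taken as 55.845 g/mol):

```python
M_FE = 55.845                  # molar mass of iron, g/mol

q_max = 47.1                   # stoichiometric maximum loading, g Fe / kg resin
mol_fe = q_max / M_FE          # ~0.843 mol Fe per kg resin
capacity = 3 * mol_fe          # ~2.53 eq/kg if each Fe3+ occupies three H+ sites

print(f"Implied resin capacity: {capacity:.2f} eq/kg")
# Loadings above 47.1 g/kg therefore imply fewer than three sites per iron
# moiety, consistent with sorption of hydrolysed complexes above pH 1.
```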
Abstract:
The mathematical model of a steadily propagating Saffman-Taylor finger in a Hele-Shaw channel has applications to two-dimensional interacting streamer discharges which are aligned in a periodic array. In the streamer context, the relevant regularisation on the interface is not provided by surface tension, but instead has been postulated to involve a mechanism equivalent to kinetic undercooling, which acts to penalise high velocities and prevent blow-up of the unregularised solution. Previous asymptotic results for the Hele-Shaw finger problem with kinetic undercooling suggest that for a given value of the kinetic undercooling parameter, there is a discrete set of possible finger shapes, each analytic at the nose and occupying a different fraction of the channel width. In the limit in which the kinetic undercooling parameter vanishes, the fraction for each family approaches 1/2, suggesting that this selection of 1/2 by kinetic undercooling is qualitatively similar to the well-known analogue with surface tension. We treat the numerical problem of computing these Saffman-Taylor fingers with kinetic undercooling, which turns out to be more subtle than the analogue with surface tension, since kinetic undercooling permits finger shapes which are corner-free but not analytic. We provide numerical evidence for the selection mechanism by setting up a problem with both kinetic undercooling and surface tension, and numerically taking the limit in which the surface tension vanishes.
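For context, the unregularised problem has the classical closed-form Saffman-Taylor family, with one finger for every width fraction; regularisation is what selects discrete members of this family. A sketch of the classical profile (channel walls at y = ±1), under the standard Saffman-Taylor (1958) parametrisation:

```python
import numpy as np

def saffman_taylor_finger(lam, n=400):
    """Classical unregularised Saffman-Taylor finger occupying a fraction
    lam of a channel with walls at y = +/-1 (Saffman & Taylor, 1958)."""
    y = np.linspace(-lam, lam, n, endpoint=False)[1:]   # avoid log(0) at y = +/-lam
    x = (1.0 - lam) / np.pi * np.log(0.5 * (1.0 + np.cos(np.pi * y / lam)))
    return x, y

# Selection: as surface tension or kinetic undercooling vanishes, the
# admissible width fractions approach lam = 1/2.
x, y = saffman_taylor_finger(0.5)
```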
Abstract:
This paper introduces the smooth transition logit (STL) model, which is designed to detect and model situations in which there is structural change in the behaviour underlying the latent index from which the binary dependent variable is constructed. The maximum likelihood estimators of the parameters of the model are derived along with their asymptotic properties, together with a Lagrange multiplier test of the null hypothesis of linearity in the underlying latent index. The development of the STL model is motivated by the desire to assess the impact of deregulation in the Queensland electricity market and to ascertain whether increased competition has resulted in significant changes in the behaviour of the spot price of electricity, specifically with respect to the occurrence of periodic, abnormally high prices. The model allows the timing of any change to be endogenously determined, and also allows market participants' behaviour to change gradually over time. The main results provide clear evidence in support of a structural change in the nature of price events, and the endogenously determined timing of the change is consistent with the process of deregulation in Queensland.
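A hedged sketch of what such a specification can look like (the exact parameterisation in the paper may differ): the latent-index coefficients drift from beta1 to beta1 + beta2 as a logistic transition function in time passes a location c, with gamma controlling the speed of the change.

```python
import numpy as np

def logistic(z):
    return 1.0 / (1.0 + np.exp(-z))

def stl_prob(x, t, beta1, beta2, gamma, c):
    """Illustrative smooth transition logit: P(y_t = 1 | x_t) with a
    latent index whose coefficients shift gradually over time."""
    G = logistic(gamma * (t - c))           # transition function in [0, 1]
    index = x @ beta1 + G * (x @ beta2)     # time-varying latent index
    return logistic(index)

# gamma -> infinity recovers an abrupt structural break at time c;
# beta2 = 0 recovers the linear (no-change) logit, the null of the LM test.
```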
Abstract:
Common to many types of water and wastewater is the presence of sodium ions, which can be removed by desalination technologies such as reverse osmosis and ion exchange. The focus of this investigation was ion exchange, as it potentially offered several advantages compared to competing methods. The equilibrium and column behaviour of a strong acid cation (SAC) resin was examined for the removal of sodium ions from aqueous sodium chloride solutions of varying normality as well as a coal seam gas water sample. The influence of the bottle-point method used to generate the sorption isotherms was evaluated, and the data were interpreted with the Langmuir-Vageler, Competitive Langmuir, Freundlich, and Dubinin-Astakhov models. With the constant concentration bottle-point method, the predicted maximum exchange levels of sodium ions on the resin ranged from 61.7 to 67.5 g Na/kg resin. The general trend was that the lower the initial concentration of sodium ions in the solution, the lower the maximum capacity of the resin for sodium ions. In contrast, the constant mass bottle-point method was found to be problematic in that the isotherm profiles may not be complete if experimental parameters are not chosen carefully. Column studies supported the observations of the equilibrium studies, with a maximum sodium loading of ca. 62.9 g Na/kg resin measured, which was in excellent agreement with the predictions from the constant concentration bottle-point method. Equilibria involving coal seam gas water were more complex due to the presence of sodium bicarbonate in solution, although the maximum loading capacity for sodium ions was in agreement with the results from the simpler sodium chloride solutions.
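Column loadings such as the 62.9 g Na/kg figure are conventionally obtained by integrating the breakthrough curve; a minimal sketch of that mass balance (all data below are hypothetical placeholders):

```python
import numpy as np

def column_loading(t_min, c_out, c0, flow_l_min, resin_kg):
    """Sodium loading (g Na / kg resin) from a breakthrough curve: the area
    between feed and effluent concentrations, by the trapezoidal rule.
    Concentrations in g/L, time in minutes, flow in L/min."""
    gap = c0 - np.asarray(c_out)
    uptake_g = flow_l_min * np.sum(0.5 * (gap[1:] + gap[:-1]) * np.diff(t_min))
    return uptake_g / resin_kg

# Hypothetical breakthrough data: effluent rises from 0 to the 1.0 g/L feed.
t = np.linspace(0.0, 600.0, 61)
c = 1.0 / (1.0 + np.exp(-(t - 300.0) / 40.0))
print(column_loading(t, c, c0=1.0, flow_l_min=0.05, resin_kg=0.25))
```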
Abstract:
In this paper, the issue of finding uncertainty intervals for queries in a Bayesian Network is reconsidered. The investigation focuses on Bayesian Nets with discrete nodes and finite populations. An earlier asymptotic approach is compared with a simulation-based approach, together with two further alternatives: one based on a single sample of the Bayesian Net of a particular finite population size, and another which uses expected population sizes together with exact probabilities. We conclude that a query of a Bayesian Net should be expressed as a probability embedded in an uncertainty interval. Based on an investigation of two Bayesian Net structures, the preferred method is the simulation method. However, both the single sample method and the expected sample size method may be useful and are simpler to compute. Any of these methods is more useful than none when assessing a Bayesian Net under development, or when drawing conclusions from an 'expert' system.
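A minimal sketch of the simulation idea on a hypothetical two-node net A -> B (all probabilities invented for illustration): repeatedly generate finite populations of size n, collect the empirical query value, and report a percentile interval around the exact probability (about 0.66 for these numbers).

```python
import numpy as np

rng = np.random.default_rng(1)

def query_interval(n, p_a=0.3, p_b=(0.9, 0.2), reps=2000):
    """95% uncertainty interval for P(A | B) in a net A -> B, over
    repeated finite populations of size n."""
    values = []
    for _ in range(reps):
        a = rng.random(n) < p_a                      # root node A
        b = rng.random(n) < np.where(a, *p_b)        # B | A
        if b.any():
            values.append((a & b).sum() / b.sum())   # empirical P(A | B)
    return np.percentile(values, [2.5, 97.5])

print(query_interval(n=100))   # exact value: 0.27 / 0.41 ~ 0.659
```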
Abstract:
Spatial data analysis has become increasingly important in studies of ecology and economics during the last decade. One focus of spatial data analysis is how to select predictors, variance functions and correlation functions. In general, however, the true covariance function is unknown and the working covariance structure is often misspecified. In this paper, our target is to find a good strategy for identifying the best model from the candidate set using model selection criteria. We evaluate the ability of several information criteria (the corrected Akaike information criterion, the Bayesian information criterion (BIC) and the residual information criterion (RIC)) to choose the optimal model when the working correlation function, the working variance function and the working mean function are correct or misspecified. Simulations are carried out for small to moderate sample sizes. Four candidate covariance functions (exponential, Gaussian, Matérn and rational quadratic) are used in the simulation studies. The simulation results show that a misspecified working correlation structure can still capture some spatial correlation information in model fitting. When the sample size is large enough, BIC and RIC perform well even if the working covariance is misspecified. Moreover, the performance of these information criteria is related to the average level of model fit, as indicated by the average adjusted R-squared, and overall RIC performs well.
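For reference, the four candidate correlation families have standard closed forms (the parameterisations below are common textbook ones and may differ from those used in the paper); a criterion such as BIC = -2 log L + k log n would then be computed from the maximised likelihood under each candidate.

```python
import numpy as np
from scipy.special import gamma, kv   # kv: modified Bessel function K_nu

def spatial_corr(d, rho, kind, nu=1.5, alpha=1.0):
    """Standard forms of the four candidate correlation functions,
    as functions of distance d and range parameter rho."""
    if kind == "exponential":
        return np.exp(-d / rho)
    if kind == "gaussian":
        return np.exp(-(d / rho) ** 2)
    if kind == "matern":
        u = np.sqrt(2.0 * nu) * np.maximum(d, 1e-12) / rho
        c = (2.0 ** (1.0 - nu) / gamma(nu)) * u ** nu * kv(nu, u)
        return np.where(d > 0, c, 1.0)
    if kind == "rational_quadratic":
        return (1.0 + (d / rho) ** 2 / (2.0 * alpha)) ** (-alpha)
    raise ValueError(kind)
```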
Abstract:
This paper presents a maximum likelihood method for estimating growth parameters for an aquatic species that incorporates growth covariates and takes into consideration multiple tag-recapture data. Individual variability in asymptotic length, age-at-tagging, and measurement error are also considered in the model structure. Using distribution theory, the log-likelihood function is derived under a generalised framework for the von Bertalanffy and Gompertz growth models. Due to the generality of the derivation, covariate effects can be included for both models, with seasonality and tagging effects investigated. Method robustness is established via comparison with the Fabens, improved Fabens, James and non-linear mixed-effects growth models, with the maximum likelihood method performing best. The method is further illustrated with an application to blacklip abalone (Haliotis rubra), for which a strong growth-retarding tagging effect that persisted for several months was detected.
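A hedged sketch of the building block involved: under the von Bertalanffy model, the Fabens form gives the expected increment for an animal recaptured after time dt, and individual variability enters by treating the asymptotic length as random across animals (all numbers below are hypothetical).

```python
import numpy as np

def vb_increment(l1, dt, linf, k):
    """Fabens form of the von Bertalanffy model: expected length increment
    for an animal of length l1 at liberty for dt years."""
    return (linf - l1) * (1.0 - np.exp(-k * dt))

# Crude Monte Carlo analogue of the paper's ingredients: random individual
# asymptotic lengths plus additive measurement error on the recapture length.
rng = np.random.default_rng(0)
linf_i = rng.normal(120.0, 10.0, size=10_000)            # individual L-infinity
dl = vb_increment(80.0, 1.0, linf_i, k=0.3) + rng.normal(0.0, 2.0, size=10_000)
print(dl.mean(), dl.std())
```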
Abstract:
We derive a new method for determining size-transition matrices (STMs) that eliminates probabilities of negative growth and accounts for individual variability. STMs are an important part of size-structured models, which are used in the stock assessment of aquatic species. The elements of STMs represent the probability of growth from one size class to another, given a time step. The growth increment over this time step can be modelled with a variety of methods, but when a population construct is assumed for the underlying growth model, the resulting STM may contain entries that predict negative growth. To solve this problem, we use a maximum likelihood method that incorporates individual variability in the asymptotic length, relative age at tagging, and measurement error to obtain von Bertalanffy growth model parameter estimates. The statistical moments for the future length, given an individual's previous length measurement and time at liberty, are then derived. We moment-match the true conditional distributions with skewed-normal distributions and use these to accurately estimate the elements of the STMs. The method is investigated with simulated data and with tag-recapture data gathered from the Australian eastern king prawn (Melicertus plebejus).
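A minimal sketch of the final step, under invented parameters: each STM row is obtained by integrating a skew-normal future-length distribution over the size-class bins (the paper derives the skew-normal parameters by moment matching; here they are simply assumed).

```python
import numpy as np
from scipy.stats import skewnorm

def stm(bins, mean_growth, scale=4.0, shape=2.0):
    """Size-transition matrix: entry [i, j] is the probability of moving
    from size class i to size class j in one time step, computed from a
    skew-normal distribution for the future length."""
    mids = 0.5 * (bins[:-1] + bins[1:])
    P = np.zeros((len(mids), len(mids)))
    for i, l1 in enumerate(mids):
        cdf = skewnorm(shape, loc=l1 + mean_growth(l1), scale=scale).cdf(bins)
        P[i] = np.diff(cdf) / (cdf[-1] - cdf[0])     # renormalise over the bins
    return P

bins = np.arange(20.0, 61.0, 5.0)                    # hypothetical size classes
P = stm(bins, mean_growth=lambda l1: 0.25 * (60.0 - l1))   # VB-style increment
print(P.round(2))                                    # rows sum to 1
```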
Abstract:
Ordinal qualitative data are often collected for phenotypical measurements in plant pathology and other biological sciences. Statistical methods such as t-tests or analysis of variance are usually used to analyze ordinal data when comparing two or more groups. However, the underlying assumptions, such as normality and homogeneous variances, are often violated for qualitative data. We therefore investigated an alternative methodology, rank regression, for analyzing ordinal data. Rank-based methods are essentially based on pairwise comparisons and can therefore deal with qualitative data naturally. They require neither a normality assumption nor data transformation. Apart from robustness against outliers and high efficiency, rank regression can also incorporate covariate effects in the same way as ordinary regression. By reanalyzing a dataset from a wheat Fusarium crown rot study, we illustrated the use of the rank regression methodology and demonstrated that rank regression models appear to be more appropriate and sensible for analyzing nonnormal data and data with outliers.
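One standard formulation of rank regression minimises Jaeckel's rank-based dispersion with Wilcoxon scores; a small self-contained sketch on simulated heavy-tailed data (whether this matches the paper's exact estimator is an assumption):

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import rankdata

def jaeckel_dispersion(beta, X, y):
    """Jaeckel's dispersion with Wilcoxon scores:
    D(beta) = sum_i a(R(e_i)) e_i,  a(i) = sqrt(12) (i/(n+1) - 1/2)."""
    e = y - X @ beta
    a = np.sqrt(12.0) * (rankdata(e) / (len(e) + 1) - 0.5)
    return np.sum(a * e)

# Hypothetical data: one covariate, heavy-tailed (t with 2 df) errors.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 1))
y = X @ np.array([2.0]) + rng.standard_t(df=2, size=100)

fit = minimize(jaeckel_dispersion, x0=np.zeros(1), args=(X, y), method="Nelder-Mead")
print(fit.x)   # slope estimate; the intercept is not identified by D(beta)
```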
Abstract:
For clustered survival data, the traditional Gehan-type estimator is asymptotically equivalent to using only the between-cluster ranks, while the within-cluster ranks are ignored. The contribution of this paper is twofold: (i) incorporating within-cluster ranks in censored data analysis, and (ii) applying the induced smoothing of Brown and Wang (2005, Biometrika) for computational convenience. Asymptotic properties of the resulting estimating functions are given. We also carry out numerical studies to assess the performance of the proposed approach, and conclude that it can lead to much improved estimators when strong clustering effects exist. A dataset from a litter-matched tumorigenesis experiment is used for illustration.
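For orientation, a sketch of the induced-smoothed Gehan estimating function for an accelerated failure time model (ignoring the paper's within-cluster extension): the indicator in the exact Gehan score is replaced by a normal CDF with pairwise bandwidths, so the function becomes differentiable. The bandwidth choice below is a simplifying assumption.

```python
import numpy as np
from scipy.stats import norm

def gehan_smooth(beta, X, log_t, delta):
    """Induced-smoothed Gehan estimating function for an AFT model:
    U(beta) = sum_{i,j} delta_i (x_i - x_j) Phi((e_j - e_i) / r_ij),
    with e = log T - X beta.  beta_hat solves U(beta) = 0."""
    n = len(log_t)
    e = log_t - X @ beta
    dX = X[:, None, :] - X[None, :, :]                      # x_i - x_j
    de = e[None, :] - e[:, None]                            # e_j - e_i
    r = np.sqrt(np.maximum((dX ** 2).sum(-1) / n, 1e-12))   # pairwise bandwidths
    w = delta[:, None] * norm.cdf(de / r)                   # uncensored i only
    return (w[..., None] * dX).sum((0, 1)) / n ** 2
```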
Abstract:
We consider rank regression for clustered data analysis and investigate the induced smoothing method for obtaining the asymptotic covariance matrices of the parameter estimators. We prove that the induced estimating functions are asymptotically unbiased and that the resulting estimators are strongly consistent and asymptotically normal. The induced smoothing approach provides an effective way of obtaining asymptotic covariance matrices for between- and within-cluster estimators, and for a combined estimator that takes account of within-cluster correlations. We also carry out extensive simulation studies to assess the performance of the different estimators. The proposed methodology is substantially faster in computation and more stable in numerical results than existing methods. We apply the proposed methodology to a dataset from a randomized clinical trial.
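The practical payoff of smoothing is that the estimating function becomes differentiable, so a sandwich covariance A^{-1} V A^{-1} can be computed directly rather than by resampling. A generic numerical sketch (the user supplies the smoothed estimating function S and an estimate V of its variance, e.g. from cluster-wise contributions):

```python
import numpy as np

def sandwich_cov(S, beta_hat, V, eps=1e-5):
    """Plug-in covariance A^{-1} V A^{-1} for an estimator solving
    S(beta) = 0, with A = dS/dbeta approximated by central differences."""
    p = len(beta_hat)
    A = np.empty((p, p))
    for j in range(p):
        step = np.zeros(p)
        step[j] = eps
        A[:, j] = (S(beta_hat + step) - S(beta_hat - step)) / (2.0 * eps)
    A_inv = np.linalg.inv(A)
    return A_inv @ V @ A_inv.T
```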
Abstract:
In this paper, we address the problem of stabilisation of robots subject to nonholonomic constraints and external disturbances using port-Hamiltonian theory and smooth time-invariant control laws. This should be contrasted with the commonly used switched or time-varying laws. We propose a control design that provides asymptotic stability of an equilibrium manifold (also called a set of relative equilibria); by Brockett's condition, this is the only type of stabilisation possible using smooth time-invariant control laws. The equilibrium manifold can be shaped to a certain extent to satisfy specific control objectives. The proposed control law also incorporates integral action, and thus the closed-loop system is robust to unknown constant disturbances. A key step in the proposed design is a change of coordinates not only in the momentum, but also in the position vector, which differs from the coordinate transformations previously proposed in the literature for the control of nonholonomic systems. The theoretical properties of the control law are verified via numerical simulation based on a robotic ground vehicle model with differential traction wheels and a non-coaxial centre of mass and point of contact.
Abstract:
Robust estimation often relies on a dispersion function that is more slowly varying at large values than the square function. However, the choice of tuning constant in the dispersion function can affect the estimation efficiency to a great extent. For a given family of dispersion functions, such as the Huber family, we suggest obtaining the "best" tuning constant from the data so that the asymptotic efficiency is maximized. This data-driven approach can automatically adjust the value of the tuning constant to provide the necessary resistance against outliers. Simulation studies show that substantial efficiency can be gained by this data-dependent approach compared with the traditional approach in which the tuning constant is fixed. We briefly illustrate the proposed method using two datasets.
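A minimal sketch of the data-driven idea for a location model with Huber's family (assumptions: standardised residuals and a simple grid search): the empirical asymptotic variance V(c) = E[psi_c^2] / (E[psi_c'])^2 is evaluated on the residuals, and the constant minimising it is kept.

```python
import numpy as np

def huber_psi(r, c):
    return np.clip(r, -c, c)                 # psi = rho' for the Huber family

def avar(resid, c):
    """Empirical plug-in for the asymptotic variance of the M-estimator:
    V(c) = mean(psi_c^2) / mean(psi_c')^2."""
    psi = huber_psi(resid, c)
    dpsi = (np.abs(resid) <= c).astype(float)
    return np.mean(psi ** 2) / np.mean(dpsi) ** 2

# Contaminated sample: mostly standard normal, 5% gross outliers.
rng = np.random.default_rng(0)
resid = np.concatenate([rng.normal(size=950), rng.normal(0.0, 10.0, size=50)])
resid /= np.median(np.abs(resid)) / 0.6745   # robust (MAD) standardisation

grid = np.linspace(0.5, 3.0, 26)
c_best = grid[np.argmin([avar(resid, c) for c in grid])]
print(c_best)                                # data-driven tuning constant
```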