965 results for deduced optical model parameters
Abstract:
1. Management decisions regarding invasive plants often have to be made quickly and in the face of fragmentary knowledge of their population dynamics. However, recommendations are commonly made on the basis of only a restricted set of parameters. Without addressing uncertainty and variability in model parameters we risk ineffective management, resulting in wasted resources and an escalating problem if early chances to control spread are missed. 2. Using available data for Pinus nigra in ungrazed and grazed grassland and shrubland in New Zealand, we parameterized a stage-structured spread model to calculate invasion wave speed, population growth rate and their sensitivities and elasticities to population parameters. Uncertainty distributions of parameters were used with the model to generate confidence intervals (CI) about the model predictions. 3. Ungrazed grassland environments were most vulnerable to invasion and the highest elasticities and sensitivities of invasion speed were to long-distance dispersal parameters. However, there was overlap between the elasticity and sensitivity CI on juvenile survival, seedling establishment and long-distance dispersal parameters, indicating overlap in their effects on invasion speed. 4. While elasticity of invasion speed to long-distance dispersal was highest in shrubland environments, there was overlap with the CI of elasticity to juvenile survival. In shrubland invasion speed was most sensitive to the probability of establishment, especially when establishment was low. In the grazed environment elasticity and sensitivity of invasion speed to the severity of grazing were consistently highest. Management recommendations based on elasticities and sensitivities depend on the vulnerability of the habitat. 5. Synthesis and applications. Despite considerable uncertainty in demography and dispersal, robust management recommendations emerged from the model. 
Proportional or absolute reductions in long-distance dispersal, juvenile survival and seedling establishment parameters have the potential to reduce wave speed substantially. Plantations of wind-dispersed invasive conifers should not be sited on exposed sites vulnerable to long-distance dispersal events, and trees in these sites should be removed. Invasion speed can also be reduced by removing seedlings, establishing competitive shrubs and grazing. Incorporating uncertainty into the modelling process increases our confidence in the wide applicability of the management strategies recommended here.
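The core quantities in this abstract, the population growth rate and the elasticities of that rate to the demographic parameters, can be sketched numerically with a stage-structured matrix model. The 3-stage matrix below is a hypothetical illustration, not the P. nigra parameterisation from the study:

```python
import numpy as np

# Hypothetical 3-stage matrix (seedling, juvenile, adult) for an invading
# conifer; the entries are illustrative, not the P. nigra estimates.
A = np.array([
    [0.00, 0.00, 50.00],   # fecundity: new recruits per adult
    [0.05, 0.60,  0.00],   # establishment; juveniles remaining juvenile
    [0.00, 0.10,  0.95],   # maturation; adult survival
])

def dominant_eig(M):
    """Dominant eigenvalue and its (non-negative) eigenvector."""
    vals, vecs = np.linalg.eig(M)
    i = np.argmax(vals.real)
    return vals.real[i], np.abs(vecs[:, i].real)

lam, w = dominant_eig(A)      # population growth rate, stable stage structure
_, v = dominant_eig(A.T)      # reproductive values (left eigenvector)

# Sensitivity s_ij = v_i * w_j / <v, w>;  elasticity e_ij = (a_ij / lam) * s_ij
S = np.outer(v, w) / (v @ w)
E = (A / lam) * S
```

Because the elasticities sum to one they can be ranked directly, which is how the abstract compares the influence of dispersal, survival and establishment parameters; extending growth rate to invasion wave speed additionally requires the moment generating function of the dispersal kernel.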
Abstract:
The rate of generation of fluctuations with respect to the scalar values conditioned on the mixture fraction, which significantly affects turbulent nonpremixed combustion processes, is examined. Simulation of the rate in a major mixing model is investigated and the derived equations can assist in selecting the model parameters so that the level of conditional fluctuations is better reproduced by the models. A more general formulation of the multiple mapping conditioning (MMC) model that distinguishes the reference and conditioning variables is suggested. This formulation can be viewed as a methodology of enforcing certain desired conditional properties onto conventional mixing models. Examples of constructing consistent MMC models with dissipation and velocity conditioning as well as of combining MMC with large eddy simulations (LES) are also provided. (c) 2005 The Combustion Institute. Published by Elsevier Inc. All rights reserved.
Abstract:
Water recovery is one of the key parameters in flotation modelling for the purposes of plant design and process control, as it determines the circulating flow and residence time in the individual process units in the plant and has a significant effect on entrainment and froth recovery. This paper reviews some of the water recovery models available in the literature, including both empirical and fundamental models. The selected models are tested using data obtained from experimental work conducted in an Outokumpu 3 m(3) tank cell at the Xstrata Mt Isa copper concentrator. It is found that all the models fit the experimental data reasonably well for a given flotation system. However, the empirical models are either unable to distinguish the effects of different cell operating conditions or require the empirical model parameters to be determined from an existing flotation system. The model developed by [Neethling, S.J., Lee, H.T., Cilliers, J.J., 2003. Simple relationships for predicting the recovery of liquid from flowing foams and froths. Minerals Engineering 16, 1123-1130] is based on a fundamental understanding of the froth structure and the transfer of water in the froth. It describes water recovery as a function of the cell operating conditions and the froth properties, all of which can be determined on-line. Hence, the fundamental model can be used for process control purposes in practice. By incorporating additional models to relate the air recovery and surface bubble size directly to the cell operating conditions, the fundamental model can also be used for prediction purposes. (C) 2005 Elsevier Ltd. All rights reserved.
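As a minimal illustration of the empirical class of water recovery models the review discusses, a two-parameter power law relating water recovery rate to superficial gas velocity can be fitted in log space. The data and the functional form here are invented for the sketch, not taken from the Mt Isa experiments:

```python
import numpy as np

# Illustrative synthetic data: superficial gas velocity Jg (cm/s) against a
# measured water recovery rate Qw (kg/s).  The numbers are made up.
Jg = np.array([0.5, 0.8, 1.1, 1.4, 1.7])
Qw = np.array([0.12, 0.25, 0.41, 0.60, 0.83])

# Fit the empirical power law Qw = a * Jg**b by linear regression in log space.
b, log_a = np.polyfit(np.log(Jg), np.log(Qw), 1)
a = np.exp(log_a)

def water_recovery(jg):
    """Predicted water recovery rate at superficial gas velocity jg."""
    return a * jg ** b
```

Such a fit reproduces one operating condition well but, as the review notes, its parameters carry over poorly to other cells, which is the argument for the fundamental Neethling-type model.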
Abstract:
An expanding human population and associated demands for goods and services continue to exert increasing pressure on ecological systems. Although the rate of expansion of agricultural lands has slowed since 1960, rapid deforestation still occurs in many tropical countries, including Colombia. However, the location and extent of deforestation, and the associated ecological impacts within tropical countries, are often not well known. The primary aim of this study was to obtain an understanding of the spatial patterns of forest conversion for agricultural land uses in Colombia. We modeled native forest conversion in Colombia at regional and national levels using logistic regression and classification trees. We investigated the impact of ignoring the regional variability of model parameters, and identified the biophysical and socioeconomic factors that best explain the current spatial pattern and inter-regional variation in forest cover. We validated our predictions for the Amazon region using MODIS satellite imagery. The regional-level classification tree that accounted for regional heterogeneity had the greatest discrimination ability. Factors related to accessibility (distance to roads and towns) were related to the presence of forest cover, although this relationship varied regionally. In order to identify areas with a high risk of deforestation, we used predictions from the best model, refined by areas with rural population growth rates of > 2%. We ranked forest ecosystem types in terms of their level of threat of conversion. Our results provide useful inputs to planning for biodiversity conservation in Colombia by identifying areas and ecosystem types that are vulnerable to deforestation. Several of the predicted deforestation hotspots coincide with areas that are outstanding in terms of biodiversity value.
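The logistic-regression step can be sketched as follows, fitting forest presence against accessibility covariates by Newton-Raphson (iteratively reweighted least squares). The data are synthetic and the coefficients invented, not the Colombian estimates:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for the data: forest presence (1/0) as a function of
# distance to roads and towns (km).  True coefficients are invented.
n = 500
dist_road = rng.uniform(0, 50, n)
dist_town = rng.uniform(0, 80, n)
logit = -2.0 + 0.08 * dist_road + 0.03 * dist_town
forest = (rng.random(n) < 1.0 / (1.0 + np.exp(-logit))).astype(float)

# Logistic regression by Newton-Raphson (iteratively reweighted least squares)
X = np.column_stack([np.ones(n), dist_road, dist_town])
beta = np.zeros(3)
for _ in range(25):
    p = 1.0 / (1.0 + np.exp(-X @ beta))      # fitted probabilities
    W = p * (1.0 - p)                        # IRLS weights
    beta += np.linalg.solve(X.T @ (X * W[:, None]), X.T @ (forest - p))
```

Positive fitted coefficients on the distance covariates correspond to the abstract's finding that forest cover is more likely far from roads and towns.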
Abstract:
Industrial flotation plant design is a complex process involving many aspects, one of which is the use of pilot-scale plants to test industrial plant flow sheets. Once test work on a pilot-scale has been performed, scale-up of these results to the full-scale plant must be performed. This paper describes scale-up test work performed on the Floatability Characterisation Test Rig (FCTR). The FCTR is a self-contained, highly instrumented mobile pilot plant designed to determine flotation model parameters and to develop and validate flotation plant modelling, scale-up and simulation methodologies.
Abstract:
Many variables that are of interest in social science research are nominal variables with two or more categories, such as employment status, occupation, political preference, or self-reported health status. With longitudinal survey data it is possible to analyse the transitions of individuals between different employment states or occupations (for example). In the statistical literature, models for analysing categorical dependent variables with repeated observations belong to the family of models known as generalized linear mixed models (GLMMs). The specific GLMM for a dependent variable with three or more categories is the multinomial logit random effects model. For these models, the marginal distribution of the response does not have a closed-form solution, and hence numerical integration must be used to obtain maximum likelihood estimates of the model parameters. Techniques for implementing the numerical integration are available but are computationally intensive, requiring a large amount of computer processing time that increases with the number of clusters (or individuals) in the data, and are not always readily accessible to the practitioner in standard software. For the purposes of analysing categorical response data from a longitudinal social survey, there is clearly a need to evaluate the existing procedures for estimating multinomial logit random effects models in terms of accuracy, efficiency and computing time. Computing time has significant implications for which approach researchers prefer. In this paper we evaluate statistical software procedures that utilise adaptive Gaussian quadrature and MCMC methods, with specific application to modelling the employment status of women using a GLMM, over three waves of the HILDA survey.
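The numerical integration the abstract refers to can be illustrated with (non-adaptive) Gauss-Hermite quadrature for a random-intercept logistic model. The binary case is shown for brevity; the multinomial logit model extends it with one linear predictor per response category:

```python
import numpy as np

# Probabilists' Gauss-Hermite rule: nodes/weights for weight exp(-z**2 / 2)
nodes, weights = np.polynomial.hermite_e.hermegauss(15)

def marginal_loglik(beta, sigma, clusters):
    """Marginal log-likelihood of a random-intercept logistic model.
    clusters: list of (X, y) per individual; random intercept ~ N(0, sigma^2)."""
    ll = 0.0
    for X, y in clusters:
        # Integrate the conditional likelihood over u = sigma * z
        contrib = 0.0
        for z, w in zip(nodes, weights):
            eta = X @ beta + sigma * z
            p = 1.0 / (1.0 + np.exp(-eta))
            contrib += w * np.prod(p ** y * (1 - p) ** (1 - y))
        ll += np.log(contrib / np.sqrt(2 * np.pi))
    return ll
```

Each cluster's integral costs one likelihood evaluation per quadrature node, so total cost grows with the number of clusters, which is the computational burden the abstract highlights; adaptive quadrature recentres the nodes per cluster to reduce the number needed.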
Abstract:
Background: Oral itraconazole (ITRA) is used for the treatment of allergic bronchopulmonary aspergillosis in patients with cystic fibrosis (CF) because of its antifungal activity against Aspergillus species. ITRA has an active hydroxy-metabolite (OH-ITRA) which has similar antifungal activity. ITRA is a highly lipophilic drug which is available in two different oral formulations, a capsule and an oral solution. It is reported that the oral solution has a 60% higher relative bioavailability. The influence of altered gastric physiology associated with CF on the pharmacokinetics (PK) of ITRA and its metabolite has not been previously evaluated. Objectives: 1) To estimate the population (pop) PK parameters for ITRA and its active metabolite OH-ITRA including relative bioavailability of the parent after administration of the parent by both capsule and solution and 2) to assess the performance of the optimal design. Methods: The study was a cross-over design in which 30 patients received the capsule on the first occasion and 3 days later the solution formulation. The design was constrained to have a maximum of 4 blood samples per occasion for estimation of the popPK of both ITRA and OH-ITRA. The sampling times for the population model were optimized previously using POPT v.2.0.[1] POPT is a series of applications that run under MATLAB and provide an evaluation of the information matrix for a nonlinear mixed effects model given a particular design. In addition it can be used to optimize the design based on evaluation of the determinant of the information matrix. The model details for the design were based on prior information obtained from the literature, which suggested that ITRA may have either linear or non-linear elimination. The optimal sampling times were evaluated to provide information for both competing models for the parent and metabolite and for both capsule and solution simultaneously. 
Blood samples were assayed by validated HPLC.[2] PopPK modelling was performed using FOCE with interaction under NONMEM, version 5 (level 1.1; GloboMax LLC, Hanover, MD, USA). The PK of ITRA and OH-ITRA was modelled simultaneously using ADVAN 5. Subsequently, three methods were assessed for modelling concentrations below the LOD (limit of detection). These methods (corresponding to methods 5, 6 & 4 from Beal[3], respectively) were (a) all values below the LOD assigned to half the LOD; (b) the closest missing value below the LOD assigned to half the LOD, with all previous (if during absorption) or subsequent (if during elimination) missing samples deleted; and (c) the contribution of the expectation of each missing concentration to the likelihood estimated directly. The LOD was 0.04 mg/L. The final model evaluation was performed via bootstrap with re-sampling and a visual predictive check. The optimal design and the sampling windows of the study were evaluated for execution errors and for agreement between the observed and predicted standard errors. Dosing regimens were simulated for the capsules and the oral solution to assess their ability to achieve the ITRA target trough concentration (Cmin,ss of 0.5-2 mg/L) or a combined Cmin,ss for ITRA and OH-ITRA above 1.5 mg/L. Results and Discussion: A total of 241 blood samples were collected and analysed; 94% of them were taken within the defined optimal sampling windows, of which 31% were taken within 5 min of the exact optimal times. Forty-six per cent of the ITRA values and 28% of the OH-ITRA values were below the LOD. The entire profile after administration of the capsule was below the LOD for five patients, and the data from these occasions were therefore omitted from estimation. A 2-compartment model with 1st-order absorption and elimination best described ITRA PK, with 1st-order metabolism of the parent to OH-ITRA.
For ITRA the clearance (ClItra/F) was 31.5 L/h; apparent volumes of the central and peripheral compartments were 56.7 L and 2090 L, respectively. Absorption rate constants for capsule (kacap) and solution (kasol) were 0.0315 h-1 and 0.125 h-1, respectively. Relative bioavailability of the capsule was 0.82. There was no evidence of nonlinearity in the popPK of ITRA. No screened covariate significantly improved the fit to the data. Parameter estimates from the final model were comparable between the different methods for handling missing data (M4, 5 and 6).[3] The prospective application of an optimal design was found to be successful. Owing to the sampling windows, most of the samples could be collected within the daily hospital routine, yet still at times that were near optimal for estimating the popPK parameters. The final model was one of the potential competing models considered in the original design. The asymptotic standard errors provided by NONMEM for the final model and the empirical values from the bootstrap were similar in magnitude to those predicted from the Fisher information matrix associated with the D-optimal design. Simulations from the final model showed that the current dosing regimen of 200 mg twice daily (bd) would achieve the target Cmin,ss (0.5-2 mg/L) in only 35% of patients when administered as the solution and 31% when administered as capsules. The optimal dosing schedule was 500 mg bd for both formulations. The target success for this regimen was 87% for the solution, with an NNT of 4 compared to capsules: for every 4 patients treated with the solution, one additional patient will achieve target success compared with the capsule, but at an additional cost of AUD $220 per day. The therapeutic target itself remains doubtful, however, and the potential risks of these dosing schedules need to be assessed on an individual basis.
Conclusion: A model was developed that described the popPK of ITRA and its main active metabolite OH-ITRA in adult CF patients after administration of both capsule and solution. The relative bioavailability of ITRA from the capsule was 82% of that of the solution, but considerably more variable. For incorporating missing data, the simple Beal method 5 (half the LOD for all samples below the LOD) provided results comparable to the more complex but theoretically better Beal method 4 (integration method). The optimal sparse design performed well for estimation of the model parameters and provided a good fit to the data.
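Beal method 5, the simple substitution scheme the conclusion recommends as adequate, reduces to a one-line transformation. This is an illustrative sketch only, not the NONMEM implementation used in the study:

```python
import numpy as np

LOD = 0.04  # mg/L, the limit of detection reported in the study

def apply_m5(concs, lod=LOD):
    """Beal method 5: every concentration below the LOD is set to LOD/2.
    Methods 6 and 4 (dropping adjacent BLQ samples; integrating over the
    censored likelihood) are the more elaborate alternatives assessed."""
    out = np.asarray(concs, dtype=float).copy()
    out[out < lod] = lod / 2
    return out
```

With 46% of ITRA observations below the LOD, the choice of method matters in principle; the study's finding was that M5 and M4 gave comparable parameter estimates for these data.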
Abstract:
Objective: To investigate the population pharmacokinetics and the enteral bioavailability of phenytoin in neonates and infants with seizures. Methods: Data (5 mg kg-1 day-1) from 83 patients were obtained retrospectively from the medical records following written ethical approval. A one-compartment model was fitted to the data using NONMEM with FOCE-interaction. Between-subject variability (BSV) and interoccasion variability (IOV) were modelled exponentially, together with a log transform-both-sides exponential residual unexplained variance (RUV) model. Covariates in nested models were screened for significance (χ2, 1 df, α = 0.01). Model validity was determined by bootstrapping with replacement (N = 500 samples) from the dataset. Results: The parameters of the final pharmacokinetic model were: clearance (L h-1) = 0.826 × (current weight [kg]/70)^0.75 × (1 + 0.0692 × (postnatal age [days] - 11)); volume of distribution (L) = 74.2 × (current weight [kg]/70); enteral bioavailability = 0.76; absorption rate constant (h-1) = 0.167. BSV for clearance and volume of distribution was 74.2% and 65.6%, respectively. The IOV in clearance was 54.4%. The RUV was 51.1%. Final model parameters deviated from mean bootstrap estimates by
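The reported final-model expressions for typical clearance and volume can be written directly as functions of the covariates; the 3.5 kg, 11-day-old example below is hypothetical, not a patient from the study:

```python
def phenytoin_clearance(weight_kg, postnatal_age_days):
    """Typical clearance (L/h) from the reported final model:
    CL = 0.826 * (weight/70)**0.75 * (1 + 0.0692 * (age - 11))."""
    return 0.826 * (weight_kg / 70) ** 0.75 * (1 + 0.0692 * (postnatal_age_days - 11))

def phenytoin_volume(weight_kg):
    """Typical volume of distribution (L): V = 74.2 * (weight/70)."""
    return 74.2 * (weight_kg / 70)

# Hypothetical example: a 3.5 kg neonate at 11 days postnatal age
cl = phenytoin_clearance(3.5, 11)   # about 0.087 L/h
v = phenytoin_volume(3.5)           # about 3.71 L
```

The allometric (weight/70)^0.75 term scales clearance from a 70 kg reference, while the linear postnatal-age term captures the rapid maturation of clearance over the first weeks of life.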
Abstract:
Two probabilistic interpretations of the n-tuple recognition method are put forward in order to allow this technique to be analysed with the same Bayesian methods used in connection with other neural network models. Elementary demonstrations are then given of the use of maximum likelihood and maximum entropy methods for tuning the model parameters and assisting their interpretation. One of the models can be used to illustrate the significance of overlapping n-tuple samples with respect to correlations in the patterns.
Abstract:
The generative topographic mapping (GTM) model was introduced by Bishop et al. (1998, Neural Comput. 10(1), 215-234) as a probabilistic reformulation of the self-organizing map (SOM). It offers a number of advantages compared with the standard SOM, and has already been used in a variety of applications. In this paper we report on several extensions of the GTM, including an incremental version of the EM algorithm for estimating the model parameters, the use of local subspace models, extensions to mixed discrete and continuous data, semi-linear models which permit the use of high-dimensional manifolds whilst avoiding computational intractability, Bayesian inference applied to hyper-parameters, and an alternative framework for the GTM based on Gaussian processes. All of these developments directly exploit the probabilistic structure of the GTM, thereby allowing the underlying modelling assumptions to be made explicit. They also highlight the advantages of adopting a consistent probabilistic framework for the formulation of pattern recognition algorithms.
Abstract:
Gaussian Processes provide good prior models for spatial data, but can be too smooth. In many physical situations there are discontinuities along bounding surfaces, for example fronts in near-surface wind fields. We describe a modelling method for such a constrained discontinuity and demonstrate how to infer the model parameters in wind fields with MCMC sampling.
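A toy 1-D analogue of inferring a discontinuity location by MCMC: a change point in noisy data, sampled with random-walk Metropolis. The jump size, noise level and proposal scale are invented for illustration and this is far simpler than a 2-D wind-field front:

```python
import numpy as np

rng = np.random.default_rng(1)

# Noisy observations of a signal that jumps at an unknown location c (true 0.6)
x = np.linspace(0.0, 1.0, 100)
y = np.where(x < 0.6, 0.0, 2.0) + rng.normal(0.0, 0.3, x.size)

def log_lik(c, sigma=0.3):
    """Gaussian likelihood with separate segment means either side of c."""
    left, right = x < c, x >= c
    if not left.any() or not right.any():
        return -np.inf
    resid = np.concatenate([y[left] - y[left].mean(), y[right] - y[right].mean()])
    return -0.5 * np.sum(resid ** 2) / sigma ** 2

# Random-walk Metropolis over the discontinuity location
c, cur, samples = 0.5, log_lik(0.5), []
for _ in range(3000):
    prop = c + rng.normal(0.0, 0.05)
    lp = log_lik(prop)
    if np.log(rng.random()) < lp - cur:   # accept with prob min(1, exp(lp - cur))
        c, cur = prop, lp
    samples.append(c)

posterior_mean = float(np.mean(samples[500:]))
```

The posterior over the change-point location concentrates near the true jump; in the wind-field setting the analogous unknown is the bounding surface of the front, sampled jointly with the field parameters.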
Abstract:
A fundamental problem for any visual system with binocular overlap is the combination of information from the two eyes. Electrophysiology shows that binocular integration of luminance contrast occurs early in visual cortex, but a specific systems architecture has not been established for human vision. Here, we address this by performing binocular summation and monocular, binocular, and dichoptic masking experiments for horizontal 1 cycle per degree test and masking gratings. These data reject three previously published proposals, each of which predict too little binocular summation and insufficient dichoptic facilitation. However, a simple development of one of the rejected models (the twin summation model) and a completely new model (the two-stage model) provide very good fits to the data. Two features common to both models are gently accelerating (almost linear) contrast transduction prior to binocular summation and suppressive ocular interactions that contribute to contrast gain control. With all model parameters fixed, both models correctly predict (1) systematic variation in psychometric slopes, (2) dichoptic contrast matching, and (3) high levels of binocular summation for various levels of binocular pedestal contrast. A review of evidence from elsewhere leads us to favor the two-stage model. © 2006 ARVO.
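A generic two-stage gain-control computation of the kind described, nearly linear monocular transduction with interocular suppression followed by binocular summation and accelerating transduction, can be sketched as follows. The exponents and constants are placeholders, not the fitted values from the paper:

```python
def two_stage_response(cl, cr, m=1.3, p=2.4, q=2.0, s=1.0, z=0.01):
    """Response of a generic two-stage binocular gain-control model to left
    and right eye contrasts cl, cr.  All constants are hypothetical."""
    # Stage 1: nearly linear monocular transduction (exponent m ~ 1.3) with
    # interocular suppression in the denominator
    left = cl ** m / (s + cl + cr)
    right = cr ** m / (s + cl + cr)
    # Stage 2: binocular summation, then accelerating transduction subject
    # to divisive contrast gain control
    b = left + right
    return b ** p / (z + b ** q)
```

With this structure the binocular response to equal inputs in both eyes exceeds the monocular response, which is the summation advantage the psychophysics measures.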
Abstract:
This thesis is concerned with approximate inference in dynamical systems, from a variational Bayesian perspective. When modelling real world dynamical systems, stochastic differential equations appear as a natural choice, mainly because of their ability to model the noise of the system by adding a variant of some stochastic process to the deterministic dynamics. Hence, inference in such processes has drawn much attention. Here two new extended frameworks are derived and presented that are based on basis function expansions and local polynomial approximations of a recently proposed variational Bayesian algorithm. It is shown that the new extensions converge to the original variational algorithm and can be used for state estimation (smoothing). However, the main focus is on estimating the (hyper-) parameters of these systems (i.e. drift parameters and diffusion coefficients). The new methods are numerically validated on a range of different systems which vary in dimensionality and non-linearity. These are the Ornstein-Uhlenbeck process, for which the exact likelihood can be computed analytically, the univariate and highly non-linear stochastic double well, and the multivariate chaotic stochastic Lorenz '63 (3-dimensional model). The algorithms are also applied to the 40-dimensional stochastic Lorenz '96 system. In this investigation these new approaches are compared with a variety of other well-known methods, such as the ensemble Kalman filter / smoother, a hybrid Monte Carlo sampler, the dual unscented Kalman filter (for jointly estimating the system states and model parameters) and full weak-constraint 4D-Var. An empirical analysis of their asymptotic behaviour as the observation density or the length of the time window increases is also provided.
Abstract:
Computer models, or simulators, are widely used in a range of scientific fields to aid understanding of the processes involved and make predictions. Such simulators are often computationally demanding and are thus not amenable to statistical analysis. Emulators provide a statistical approximation, or surrogate, for the simulator, accounting for the additional approximation uncertainty. This thesis develops a novel sequential screening method to reduce the set of simulator variables considered during emulation. This screening method is shown to require fewer simulator evaluations than existing approaches. Utilising the lower-dimensional active variable set simplifies subsequent emulation analysis. For random output, or stochastic, simulators the output dispersion, and thus variance, is typically a function of the inputs. This work extends the emulator framework to account for such heteroscedasticity by constructing two new heteroscedastic Gaussian process representations and proposes an experimental design technique to optimally learn the model parameters. The design criterion is an extension of Fisher information to heteroscedastic variance models. Replicated observations are efficiently handled in both the design and model inference stages. Through a series of simulation experiments on both synthetic and real world simulators, the emulators inferred on optimal designs with replicated observations are shown to outperform equivalent models inferred on space-filling replicate-free designs in terms of both model parameter uncertainty and predictive variance.
Abstract:
Our understanding of early spatial vision owes much to contrast masking and summation paradigms. In particular, the deep region of facilitation at low mask contrasts is thought to indicate a rapidly accelerating contrast transducer (e.g. a square-law or greater). In experiment 1, we tapped an early stage of this process by measuring monocular and binocular thresholds for patches of 1 cycle deg-1 sine-wave grating. Threshold ratios were around 1.7, implying a nearly linear transducer with an exponent around 1.3. With this form of transducer, two previous models (Legge, 1984 Vision Research 24 385-394; Meese et al, 2004 Perception 33 Supplement, 41) failed to fit the monocular, binocular, and dichoptic masking functions measured in experiment 2. However, a new model with two stages of divisive gain control fits the data very well. Stage 1 incorporates nearly linear monocular transducers (to account for the high level of binocular summation and slight dichoptic facilitation), and monocular and interocular suppression (to fit the profound dichoptic masking). Stage 2 incorporates steeply accelerating transduction (to fit the deep regions of monocular and binocular facilitation), and binocular summation and suppression (to fit the monocular and binocular masking). With all model parameters fixed from the discrimination thresholds, we examined the slopes of the psychometric functions. The monocular and binocular slopes were steep (Weibull β ≈ 3-4) at very low mask contrasts and shallow (β ≈ 1.2) at all higher contrasts, as predicted by all three models. The dichoptic slopes were steep (β ≈ 3-4) at very low contrasts, and very steep (β > 5.5) at high contrasts (confirming Meese et al, loc. cit.). A crucial new result was that intermediate dichoptic mask contrasts produced shallow slopes (β ≈ 2).
Only the two-stage model predicted the observed pattern of slope variation, so providing good empirical support for a two-stage process of binocular contrast transduction. [Supported by EPSRC GR/S74515/01]
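The link between the threshold ratio and the transducer exponent quoted in experiment 1 follows from assuming Minkowski-style summation of the two eyes' transduced signals at threshold, which predicts a binocular advantage of 2^(1/p):

```python
import math

def transducer_exponent(summation_ratio):
    """Exponent p implied by a binocular:monocular threshold ratio of
    2**(1/p) under power-law summation of the two eyes' signals."""
    return math.log(2) / math.log(summation_ratio)
```

A ratio of 1.7 gives p ≈ 1.31, matching the "nearly linear" exponent of about 1.3 in the abstract, whereas quadratic summation (p = 2) would predict a ratio of only √2 ≈ 1.41.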