53 results for MCDONALD EXTENDED EXPONENTIAL MODEL
in CentAUR: Central Archive University of Reading - UK
Abstract:
This paper presents the theoretical development of a nonlinear adaptive filter based on the concept of filtering by approximated densities (FAD). The most common procedures for nonlinear estimation apply the extended Kalman filter. As opposed to conventional techniques, the proposed recursive algorithm does not require any linearisation. The prediction uses a maximum entropy principle subject to constraints; the densities created are therefore of exponential type and depend on a finite number of parameters. The filtering yields recursive equations involving these parameters. The update applies Bayes' theorem. Through simulation on a generic exponential model, the proposed nonlinear filter is implemented and its results prove superior to those of the extended Kalman filter and of a class of nonlinear filters based on partitioning algorithms.
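For context, a minimal scalar extended Kalman filter, the baseline technique the abstract compares against, is sketched below; the specific exponential state model and quadratic measurement used here are illustrative assumptions, not the paper's test case.

```python
# Minimal scalar extended Kalman filter (EKF) -- the baseline the abstract
# compares against, not the FAD filter itself. The exponential state model
# and quadratic measurement below are illustrative assumptions.
import numpy as np

def ekf_step(x_est, P, y, a=1.0, b=0.5, Q=0.01, R=0.1):
    """One EKF cycle for x[k+1] = a*exp(-b*x[k]) + w,  y[k] = x[k]**2 + v."""
    # Prediction: propagate the state and linearise f around the estimate.
    x_pred = a * np.exp(-b * x_est)
    F = -a * b * np.exp(-b * x_est)        # df/dx evaluated at x_est
    P_pred = F * P * F + Q

    # Update: linearise h(x) = x**2 around the prediction and apply the gain.
    H = 2.0 * x_pred                       # dh/dx evaluated at x_pred
    S = H * P_pred * H + R                 # innovation variance
    K = P_pred * H / S                     # Kalman gain
    x_new = x_pred + K * (y - x_pred**2)
    P_new = (1.0 - K * H) * P_pred
    return x_new, P_new
```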
Abstract:
Brief periods of high temperature occurring near flowering can severely reduce the yield of annual crops such as wheat and groundnut. A parameterisation of this well-documented effect is presented for groundnut (i.e. peanut; Arachis hypogaea L.). This parameterisation was combined with an existing crop model, allowing the impact of season-mean temperature, and of brief high-temperature episodes at various times near flowering, to be examined both independently and jointly. The extended crop model was tested with independent data from controlled-environment and field experiments. The impact of total crop duration was captured, with simulated duration being within 5% of observations over the range of season-mean temperatures used (20-28 degrees C). In simulations across nine differently timed high-temperature events, eight of the absolute differences between observed and simulated yield were less than 10% of the control (no-stress) yield. The parameterisation of high-temperature stress also allows the simulation of heat tolerance across different genotypes. Three parameter sets, representing tolerant, moderately sensitive and sensitive genotypes, were developed and assessed. The new parameterisation can be used in climate change studies to estimate the impact of heat stress on yield. It can also be used to assess the potential for adapting cropping systems to increased temperature threshold exceedance via the choice of genotype characteristics.
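A hedged sketch of the kind of high-temperature stress factor described above: pod-set declines linearly between a critical and a zero-pod-set temperature. The threshold values and linear form are illustrative placeholders, not the published parameter sets.

```python
# Hypothetical high-temperature stress factor near flowering. The threshold
# temperatures and the linear ramp are placeholders, not the published
# parameter sets for tolerant/moderately sensitive/sensitive genotypes.
def pod_set_fraction(t_max, t_crit=34.0, t_zero=42.0):
    """Fraction of potential pod-set retained for a daily maximum canopy
    temperature t_max (degrees C) during the sensitive window."""
    if t_max <= t_crit:
        return 1.0                                    # no heat stress
    if t_max >= t_zero:
        return 0.0                                    # complete pod-set failure
    return 1.0 - (t_max - t_crit) / (t_zero - t_crit)  # linear decline
```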
Abstract:
The primary purpose of this study was to model the partitioning of evapotranspiration in a maize-sunflower intercrop at various canopy covers. The Shuttleworth-Wallace (SW) model was extended for intercropping systems to include both crop transpiration and soil evaporation and to allow interaction between the two. To test the accuracy of the extended SW model, two field experiments on a maize-sunflower intercrop were conducted in 1998 and 1999. Plant transpiration and soil evaporation were measured using sap flow gauges and lysimeters, respectively. The mean prediction error (simulated minus measured values) for transpiration was zero, indicating no overall bias, and its accuracy was not affected by plant growth stage, although simulated transpiration tended to be slightly underestimated during periods of high measured transpiration. Overall, the predictions of daily soil evaporation were also accurate. Model estimation errors were probably due to the simplified modelling of soil water content, stomatal resistances and soil heat flux, as well as to uncertainties in characterising the micrometeorological conditions. The SW model's prediction of transpiration was most sensitive to the parameters most directly related to canopy characteristics, such as the partitioning of captured solar radiation, canopy resistance, and bulk boundary layer resistance.
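For clarity, the error statistics quoted above (mean prediction error as a bias measure) can be computed as in the short sketch below; the RMSE line is an added illustration of scatter, not a quantity reported in the abstract.

```python
# Sketch of the error statistics referred to above: mean prediction error
# (simulated minus measured) measures overall bias; RMSE is an extra
# illustration of scatter, not a value quoted in the abstract.
import numpy as np

def prediction_errors(simulated, measured):
    residuals = np.asarray(simulated, float) - np.asarray(measured, float)
    mpe = residuals.mean()                    # ~0 indicates no overall bias
    rmse = np.sqrt((residuals ** 2).mean())   # typical error magnitude
    return mpe, rmse
```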
Abstract:
Models of windblown pollen or spore movement are required to predict gene flow from genetically modified (GM) crops and the spread of fungal diseases. We suggest a simple form for a function describing the distance moved by a pollen grain or fungal spore, for use in generic models of dispersal. The function has power-law behaviour over sub-continental distances. We show that airborne dispersal of rapeseed pollen in two experiments was inconsistent with an exponential model, but was fitted by power-law models, implying a large contribution from distant fields to the catches observed. After allowing for this 'background' by applying Fourier transforms to deconvolve the mixture of distant and local sources, the data were best fitted by power laws with exponents between 1.5 and 2. We also demonstrate that for a simple model of area sources, the median dispersal distance is a function of field radius and that measurement from the source edge can be misleading. Using an inverse-square dispersal distribution deduced from the experimental data and the distribution of rapeseed fields deduced by remote sensing, we successfully predict observed rapeseed pollen density in the city centres of Derby and Leicester (UK).
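A brief sketch, under simplifying assumptions, of how the exponential and power-law kernels mentioned above can be fitted to catch data by linear regression in (semi-)log space; the paper's Fourier deconvolution of background sources is not reproduced here.

```python
# Illustrative least-squares fits of the two kernel shapes discussed above
# to pollen catches y at distances d (> 0). Fitting in (semi-)log space is a
# simplifying shortcut, not the paper's estimation procedure.
import numpy as np

def fit_kernels(d, y):
    d, log_y = np.asarray(d, float), np.log(np.asarray(y, float))

    # Exponential kernel y = A * exp(-k*d):  log y is linear in d.
    slope_e, intercept_e = np.polyfit(d, log_y, 1)
    exponential = {"A": np.exp(intercept_e), "k": -slope_e}

    # Power-law kernel y = C * d**(-b):  log y is linear in log d.
    slope_p, intercept_p = np.polyfit(np.log(d), log_y, 1)
    power_law = {"C": np.exp(intercept_p), "b": -slope_p}  # b was 1.5-2 in the paper's data
    return exponential, power_law
```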
Abstract:
In this paper a support vector machine (SVM) approach for characterizing the feasible parameter set (FPS) in non-linear set-membership estimation problems is presented. It iteratively solves a regression problem from which an approximation of the boundary of the FPS can be determined. To guarantee convergence to the boundary, the procedure includes a no-derivative line search; for an appropriate coverage of points on the FPS boundary, it is suggested to start with a sequential box pavement procedure. The SVM approach is illustrated on a simple sine and exponential model with two parameters and on an agro-forestry simulation model.
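The sketch below illustrates the general idea under simplified assumptions: parameters of a hypothetical sine-and-exponential model are labelled feasible or infeasible by a set-membership test, and an SVM decision boundary then approximates the FPS boundary. Note that this uses a classification formulation rather than the paper's iterative regression and line-search procedure.

```python
# Simplified illustration of approximating an FPS boundary with an SVM.
# The model y = p1*sin(t)*exp(-p2*t), the error bound and the sampling box
# are all assumptions made here for illustration.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
t = np.linspace(0.0, 5.0, 20)
p_true = np.array([1.2, 0.7])
y_obs = p_true[0] * np.sin(t) * np.exp(-p_true[1] * t)
eps = 0.1                                    # assumed error bound

def feasible(p):
    y = p[0] * np.sin(t) * np.exp(-p[1] * t)
    return np.all(np.abs(y - y_obs) <= eps)  # set-membership test

samples = rng.uniform([0.5, 0.3], [2.0, 1.2], size=(2000, 2))
labels = np.array([feasible(p) for p in samples]).astype(int)

svm = SVC(kernel="rbf", C=100.0, gamma="scale").fit(samples, labels)
# Points where svm.decision_function(...) == 0 approximate the FPS boundary.
```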
Abstract:
Ensembles of extended Atmospheric Model Intercomparison Project (AMIP) runs from the general circulation models of the National Centers for Environmental Prediction (formerly the National Meteorological Center) and the Max-Planck Institute (Hamburg, Germany) are used to estimate the potential predictability (PP) of an index of the Pacific–North America (PNA) mode of climate variability. The PP of this pattern in “perfect” prediction experiments is 20%–25% of the index’s variance. The models, particularly that from MPI, capture virtually all of this variance in their hindcasts of the winter PNA for the period 1970–93. The high levels of internally generated model noise in the PNA simulations reconfirm the need for an ensemble-averaging approach to climate prediction; this means that the forecasts ought to be expressed in a probabilistic manner. It is shown that the models’ skill is about 50% higher during strong SST events in the tropical Pacific, so the probabilistic forecasts need to be conditional on the tropical SST. Taken together with earlier studies, the present results suggest that the original set of AMIP integrations (single 10-yr runs) is not adequate to reliably test the participating models’ simulations of interannual climate variability in the midlatitudes.
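One common way to estimate potential predictability from an ensemble is the fraction of total index variance carried by the ensemble-mean (forced) signal; the schematic below assumes this estimator, which may differ in detail from the one used in the study.

```python
# Schematic estimate of potential predictability (PP): the fraction of total
# index variance carried by the ensemble-mean (SST-forced) signal. Values of
# ~0.20-0.25 correspond to the "perfect" prediction experiments above.
import numpy as np

def potential_predictability(index):
    """index: array of shape (n_members, n_years), e.g. a winter PNA index
    from an ensemble of AMIP runs with identical prescribed SSTs."""
    forced = index.mean(axis=0)            # ensemble mean for each year
    signal_var = forced.var(ddof=1)        # SST-forced variance
    total_var = index.var(ddof=1)          # forced signal + internal noise
    return signal_var / total_var
```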
Abstract:
Activating transcription factor 3 (Atf3) is rapidly and transiently upregulated in numerous systems and is associated with various disease states. Atf3 is required for negative feedback regulation of other genes, but is itself subject to negative feedback regulation, possibly by autorepression. In cardiomyocytes, Atf3 and Egr1 mRNAs are upregulated via ERK1/2 signalling and Atf3 suppresses Egr1 expression. We previously developed a mathematical model for the Atf3-Egr1 system. Here, we adjusted and extended the model to explore mechanisms of Atf3 feedback regulation. Introducing an autorepressive loop for Atf3 tuned down its expression and the inhibition of Egr1 was lost, demonstrating that negative feedback regulation of Atf3 by Atf3 itself is implausible in this context. Experimentally, signals downstream from ERK1/2 suppress Atf3 expression. Mathematical modelling indicated that this cannot occur through phosphorylation of pre-existing inhibitory transcriptional regulators because the time delay is too short. De novo synthesis of an inhibitory transcription factor (ITF) with a high affinity for the Atf3 promoter could suppress Atf3 expression, but (as with the Atf3 autorepression loop) inhibition of Egr1 was lost. Extending the model to include newly synthesised miRNAs terminated Atf3 protein expression very efficiently and, with a 4-fold increase in the rate of degradation of mRNA from the mRNA/miRNA complex, the profiles for Atf3 mRNA, Atf3 protein and Egr1 mRNA approximated to the experimental data. Combining the ITF model with the miRNA model did not improve the profiles, suggesting that miRNAs are likely to play a dominant role in switching off Atf3 expression post-induction.
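A schematic ODE sketch of the miRNA switch-off mechanism favoured above: induced Atf3 mRNA is translated into protein while a newly synthesised miRNA sequesters the mRNA into a faster-degrading complex. All rate constants are illustrative, not the fitted values of the published model.

```python
# Schematic ODE model of miRNA-mediated switch-off of Atf3: the mRNA/miRNA
# complex is degraded ~4-fold faster than free mRNA, as in the scenario
# described above. Rate constants are illustrative placeholders.
import numpy as np
from scipy.integrate import solve_ivp

def atf3_mirna(t, y, k_tx=1.0, k_tl=0.5, d_m=0.2, d_p=0.1,
               k_mi=0.3, d_mi=0.05, k_bind=2.0, d_c=0.8):
    m, p, mi, c = y                       # Atf3 mRNA, protein, miRNA, complex
    stimulus = np.exp(-0.5 * t)           # transient ERK1/2-driven induction
    dm = k_tx * stimulus - d_m * m - k_bind * m * mi
    dp = k_tl * m - d_p * p
    dmi = k_mi * stimulus - d_mi * mi - k_bind * m * mi
    dc = k_bind * m * mi - d_c * c        # d_c = 4 * d_m (4-fold faster decay)
    return [dm, dp, dmi, dc]

sol = solve_ivp(atf3_mirna, (0.0, 24.0), [0.0, 0.0, 0.0, 0.0], max_step=0.1)
# sol.y rows: Atf3 mRNA, Atf3 protein, miRNA, mRNA:miRNA complex over time.
```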
Abstract:
There are well-known difficulties in making measurements of the moisture content of baked goods (such as bread, buns, biscuits, crackers and cake) during baking or at the oven exit; in this paper several sensing methods are discussed, but none of them are able to provide direct measurement with sufficient precision. An alternative is to use indirect inferential methods. Some of these methods involve dynamic modelling, with incorporation of thermal properties and using techniques familiar in computational fluid dynamics (CFD); a method of this class that has been used for the modelling of heat and mass transfer in one direction during baking is summarized, which may be extended to model transport of moisture within the product and also within the surrounding atmosphere. The concept of injecting heat during the baking process proportional to the calculated heat load on the oven has been implemented in a control scheme based on heat balance zone by zone through a continuous baking oven, taking advantage of the high latent heat of evaporation of water. Tests on biscuit production ovens are reported, with results that support a claim that the scheme gives more reproducible water distribution in the final product than conventional closed loop control of zone ambient temperatures, thus enabling water content to be held more closely within tolerance.
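A rough sketch of the zone-by-zone heat-balance idea: the heat injected into a zone is made proportional to the calculated product load, dominated by the latent heat of the water to be evaporated. The property values and the simple product model are assumptions for illustration, not the tested control scheme.

```python
# Rough sketch of a zone heat-load calculation for a continuous baking oven.
# Property values and the simple sensible+latent model are illustrative
# assumptions, not the implemented control scheme.
CP_PRODUCT = 2500.0    # J/(kg K), assumed effective specific heat of the dough piece
L_WATER = 2.26e6       # J/kg, latent heat of evaporation of water

def zone_heat_load(mass_flow, delta_temp, moisture_loss):
    """Heat load (W) for one oven zone.

    mass_flow     -- product mass entering the zone per second (kg/s)
    delta_temp    -- product temperature rise across the zone (K)
    moisture_loss -- water evaporated per kg of product in the zone (kg/kg)
    """
    sensible = mass_flow * CP_PRODUCT * delta_temp
    latent = mass_flow * moisture_loss * L_WATER   # usually the dominant term
    return sensible + latent
```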
Abstract:
This paper introduces a new neurofuzzy model construction and parameter estimation algorithm from observed finite data sets, based on a Takagi and Sugeno (T-S) inference mechanism and a new extended Gram-Schmidt orthogonal decomposition algorithm, for the modeling of a priori unknown dynamical systems in the form of a set of fuzzy rules. The first contribution of the paper is the introduction of a one-to-one mapping between a fuzzy rule-base and a model matrix feature subspace using the T-S inference mechanism. This link enables the numerical properties associated with a rule-based matrix subspace, the relationships amongst these matrix subspaces, and the correlation between the output vector and a rule-base matrix subspace to be investigated and extracted as rule-based knowledge to enhance model transparency. The matrix subspace spanned by a fuzzy rule is initially derived as the input regression matrix multiplied by a weighting matrix that consists of the corresponding fuzzy membership functions over the training data set. Model transparency is explored through the derivation of an equivalence between an A-optimality experimental design criterion of the weighting matrix and the average model output sensitivity to the fuzzy rule, so that rule-bases can be effectively measured by their identifiability via the A-optimality experimental design criterion. The A-optimality experimental design criterion of the weighting matrices of fuzzy rules is used to construct an initial model rule-base. An extended Gram-Schmidt algorithm is then developed to estimate the parameter vector for each rule. This new algorithm decomposes the model rule-bases via an orthogonal subspace decomposition approach, so as to enhance model transparency with the capability of interpreting the derived rule-base energy level. This new approach is computationally simpler than the conventional Gram-Schmidt algorithm for resolving high-dimensional regression problems, where it is computationally desirable to decompose complex models into a few submodels rather than a single model with a large number of input variables and the associated curse-of-dimensionality problem. Numerical examples are included to demonstrate the effectiveness of the proposed new algorithm.
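As background, the conventional Gram-Schmidt orthogonal decomposition that the extended algorithm builds on is sketched below: the regression matrix is factorised into orthogonal columns and a unit upper-triangular matrix, and the least-squares parameters follow by back-substitution. The rule-wise subspace decomposition and A-optimality selection described above are not reproduced here.

```python
# Conventional Gram-Schmidt orthogonal decomposition for least squares:
# factorise the regression matrix P = W @ A (W with orthogonal columns,
# A unit upper triangular), then recover the parameters from A @ theta = g.
# Assumes P has full column rank.
import numpy as np

def gram_schmidt_ls(P, y):
    n, m = P.shape
    W = np.zeros((n, m))
    A = np.eye(m)
    for k in range(m):
        w = P[:, k].astype(float).copy()
        for j in range(k):
            A[j, k] = (W[:, j] @ P[:, k]) / (W[:, j] @ W[:, j])
            w -= A[j, k] * W[:, j]
        W[:, k] = w
    g = np.array([(W[:, k] @ y) / (W[:, k] @ W[:, k]) for k in range(m)])
    theta = np.linalg.solve(A, g)   # back-substitution for A @ theta = g
    return theta, W, A
```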
Abstract:
We consider a non-local version of the NJL model, based on a separable quark-quark interaction. The interaction is extended to include terms that bind vector and axial-vector mesons. The non-locality means that no further regulator is required. Moreover the model is able to confine the quarks by generating a quark propagator without poles at real energies. Working in the ladder approximation, we calculate amplitudes in Euclidean space and discuss features of their continuation to Minkowski energies. Conserved currents are constructed and we demonstrate their consistency with various Ward identities. Various meson masses are calculated, along with their strong and electromagnetic decay amplitudes. We also calculate the electromagnetic form factor of the pion, as well as form factors associated with the processes γγ* → π0 and ω → π0γ*. The results are found to lead to a satisfactory phenomenology and lend some dynamical support to the idea of vector-meson dominance.
Abstract:
A nonlocal version of the NJL model is investigated. It is based on a separable quark-quark interaction, as suggested by the instanton liquid picture of the QCD vacuum. The interaction is extended to include terms that bind vector and axial-vector mesons. The nonlocality means that no further regulator is required. Moreover the model is able to confine the quarks by generating a quark propagator without poles at real energies. Features of the continuation of amplitudes from Euclidean space to Minkowski energies are discussed. These features lead to restrictions on the model parameters as well as on the range of applicability of the model. Conserved currents are constructed, and their consistency with various Ward identities is demonstrated. In particular, the Gell-Mann-Oakes-Renner relation is derived both in the ladder approximation and at meson loop level. The importance of maintaining chiral symmetry in the calculations is stressed throughout. Calculations with the model are performed to all orders in momentum. Meson masses are determined, along with their strong and electromagnetic decay amplitudes. Also calculated are the electromagnetic form factor of the pion and form factors associated with the processes γγ* → π0 and ω → π0γ*. The results are found to lead to a satisfactory phenomenology and demonstrate a possible dynamical origin for vector-meson dominance. In addition, the results produced at meson loop level validate the use of 1/Nc as an expansion parameter and indicate that a light and broad scalar state is inherent in models of the NJL type.
Abstract:
The climatology of a stratosphere-resolving version of the Met Office’s climate model is studied and validated against ECMWF reanalysis data. Ensemble integrations are carried out at two different horizontal resolutions. Along with a realistic climatology and annual cycle in zonal mean zonal wind and temperature, several physical effects are noted in the model. The time of final warming of the winter polar vortex is found to descend monotonically in the Southern Hemisphere, as would be expected for purely radiative forcing. In the Northern Hemisphere, however, the time of final warming is driven largely by dynamical effects in the lower stratosphere and radiative effects in the upper stratosphere, leading to the earliest transition to westward winds being seen in the midstratosphere. A realistic annual cycle in stratospheric water vapor concentrations—the tropical “tape recorder”—is captured. Tropical variability in the zonal mean zonal wind is found to be in better agreement with the reanalysis for the model run at higher horizontal resolution because the simulated quasi-biennial oscillation has a more realistic amplitude. Unexpectedly, variability in the extratropics becomes less realistic under increased resolution because of reduced resolved wave drag and increased orographic gravity wave drag. Overall, the differences in climatology between the simulations at high and moderate horizontal resolution are found to be small.
Abstract:
Pardo, Patie, and Savov derived, under mild conditions, a Wiener-Hopf type factorization for the exponential functional of proper Lévy processes. In this paper, we extend this factorization by relaxing a finite moment assumption as well as by considering the exponential functional for killed Lévy processes. As a by-product, we derive some interesting fine distributional properties enjoyed by this random variable for a large class of Lévy processes, such as the absolute continuity of its distribution and the smoothness, boundedness or complete monotonicity of its density. This type of result is then used to derive similar properties for the law of the maximum and of the first passage time of some stable Lévy processes. Thus, for example, we show that for any stable process with $\rho\in(0,\frac{1}{\alpha}-1]$, where $\rho\in[0,1]$ is the positivity parameter and $\alpha$ is the stable index, the first passage time has a bounded and non-increasing density on $\mathbb{R}_+$. We also generate many instances of integral or power series representations for the law of the exponential functional of Lévy processes with one- or two-sided jumps. The proof of our main results requires different devices from those developed by Pardo, Patie, and Savov. It relies in particular on a generalization of a transform recently introduced by Chazal et al. together with some extensions to killed Lévy processes of Wiener-Hopf techniques. The factorizations developed here also allow for further applications, which we only indicate.
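For readers unfamiliar with the object, the exponential functional referred to above is, up to sign conventions that vary between papers, the random variable

```latex
% The exponential functional of a Levy process \xi killed at an independent
% exponential time e_q of rate q >= 0 (with e_0 = +\infty); sign conventions
% vary between papers.
\[
  I_{\xi,q} \;=\; \int_0^{\mathbf{e}_q} e^{-\xi_t}\,\mathrm{d}t .
\]
```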
Abstract:
An in vitro colon-extended physiologically based extraction test (CE-PBET), which incorporates human gastrointestinal tract (GIT) parameters (including pH and chemistry, solid-to-fluid ratio, and mixing and emptying rates), was applied for the first time to study the bioaccessibility of brominated flame retardants (BFRs) from the three main GIT compartments (stomach, small intestine and colon) following ingestion of indoor dust. Results revealed that the bioaccessibility of γ-HBCD (72%) was less than that of the α- and β-isomers (92% and 80%, respectively), which may be attributed to the lower aqueous solubility of the γ-isomer (2 μg L−1) compared to the α- and β-isomers (45 and 15 μg L−1, respectively). No significant change in the enantiomeric fractions of HBCDs was observed in any of the studied samples. However, this does not completely exclude the possibility of in vivo enantioselective absorption of HBCDs, as the GIT cell lining and bacterial flora, which may act enantioselectively, are not included in the current CE-PBET model. While TBBP-A was almost completely (94%) bioaccessible, BDE-209 was the least bioaccessible (14%) of the studied BFRs. The bioaccessibility of tri- to hepta-BDEs ranged from 32% to 58%. No decrease in bioaccessibility with increasing level of bromination was observed for the studied PBDEs.
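For reference, the bioaccessibility percentages quoted above follow the standard operational definition of the bioaccessible fraction; the specific extraction conditions are those of the CE-PBET protocol.

```latex
% Standard operational definition of the bioaccessible fraction underlying
% the percentages reported above.
\[
  \text{Bioaccessibility}\,(\%) \;=\;
  \frac{m_{\text{released into GIT fluid}}}{m_{\text{total in ingested dust}}}
  \times 100 .
\]
```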