20 results for conventional model

in CentAUR: Central Archive University of Reading - UK


Relevance: 30.00%

Abstract:

There are at least three distinct time scales that are relevant for the evolution of atmospheric convection. These are the time scale of the forcing mechanism, the time scale governing the response to a steady forcing, and the time scale of the response to variations in the forcing. The last of these, t_mem, is associated with convective life cycles, which provide an element of memory in the system. A highly simplified model of convection is introduced, which allows for investigation of the character of convection as a function of the three time scales. For short t_mem, the convective response is strongly tied to the forcing, as in conventional equilibrium parameterization. For long t_mem, the convection responds only to the slowly evolving component of forcing, and any fluctuations in the forcing are essentially suppressed. At intermediate t_mem, convection becomes less predictable: conventional equilibrium closure breaks down and current levels of convection modify the subsequent response.
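
As a rough illustration of the three-time-scale argument (not the authors' model), the sketch below integrates a toy relaxation equation dC/dt = (F(t) - C)/t_mem against a forcing with steady and fluctuating parts; all names and parameter values are assumptions. Short memory tracks the forcing; long memory filters the fluctuations out.

```python
# A minimal sketch, assuming a linear relaxation toward the forcing on the
# memory time scale t_mem. Parameter values are illustrative only.
import numpy as np

def run(tau_mem, tau_forcing=10.0, dt=0.01, t_end=100.0):
    t = np.arange(0.0, t_end, dt)
    forcing = 1.0 + 0.5 * np.sin(2 * np.pi * t / tau_forcing)  # steady + fluctuating part
    c = np.zeros_like(t)
    for i in range(1, len(t)):
        # relax toward the instantaneous equilibrium set by the forcing
        c[i] = c[i - 1] + dt * (forcing[i - 1] - c[i - 1]) / tau_mem
    return t, forcing, c

for tau_mem in (0.1, 3.0, 100.0):  # short, intermediate, long memory
    _, f, c = run(tau_mem)
    # correlation with the forcing drops as memory lengthens
    print(f"tau_mem={tau_mem:6.1f}  corr(C, F)={np.corrcoef(c[1000:], f[1000:])[0, 1]:.3f}")
```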

Relevance: 30.00%

Abstract:

A Bayesian method of classifying observations that are assumed to come from a number of distinct subpopulations is outlined. The method is illustrated with simulated data and applied to the classification of farms according to their level and variability of income. The resultant classification shows a greater diversity of technical characteristics within farm types than is conventionally the case. The range of mean farm income between groups in the new classification is wider than that of the conventional method, and the variability of income within groups is narrower. Results show that the highest income group in 2000 included large specialist dairy farmers and pig and poultry producers, whilst in 2001 it included large and small specialist dairy farms and large mixed dairy and arable farms. In both years the lowest income group is dominated by non-milk-producing livestock farms.
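
To illustrate the general idea of model-based classification into latent subpopulations, here is a minimal sketch using an EM-fitted two-component Gaussian mixture on simulated (mean income, income variability) pairs. The paper's treatment is Bayesian; this frequentist stand-in, the feature choice, and the simulated data are illustrative assumptions only.

```python
# A minimal EM sketch for classifying farms into latent income groups.
# All data and settings are simulated assumptions, not the paper's data.
import numpy as np

rng = np.random.default_rng(0)
X = np.vstack([rng.normal([20, 5], [3, 1], (100, 2)),    # low-income farms
               rng.normal([60, 15], [8, 4], (100, 2))])  # high-income farms

k, n, d = 2, *X.shape
pi = np.full(k, 1 / k)                      # mixing weights
mu = X[rng.choice(n, k, replace=False)]     # random initial means
var = np.full((k, d), X.var(axis=0))        # diagonal covariances

for _ in range(100):
    # E-step: responsibility of each component for each farm
    logp = -0.5 * (((X[:, None] - mu) ** 2 / var).sum(-1)
                   + np.log(var).sum(-1)) + np.log(pi)
    logp -= logp.max(axis=1, keepdims=True)
    r = np.exp(logp)
    r /= r.sum(axis=1, keepdims=True)
    # M-step: update weights, means and variances
    nk = r.sum(axis=0)
    pi = nk / n
    mu = (r.T @ X) / nk[:, None]
    var = np.maximum((r.T @ X ** 2) / nk[:, None] - mu ** 2, 1e-6)

print("group means (income, variability):\n", mu.round(1))
```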

Relevance: 30.00%

Abstract:

We describe a general likelihood-based 'mixture model' for inferring phylogenetic trees from gene-sequence or other character-state data. The model accommodates cases in which different sites in the alignment evolve in qualitatively distinct ways, but does not require prior knowledge of these patterns or partitioning of the data. We call this qualitative variability in the pattern of evolution across sites "pattern-heterogeneity" to distinguish it both from a homogeneous process of evolution and from one characterized principally by differences in rates of evolution. We present studies showing that the model correctly retrieves the signals of pattern-heterogeneity from simulated gene-sequence data, and we apply the method to protein-coding genes and to a ribosomal 12S data set. The mixture model outperforms conventional partitioning in both these data sets. We implement the mixture model such that it can simultaneously detect rate- and pattern-heterogeneity. The model simplifies to a homogeneous model or a rate-variability model as special cases, and therefore always performs at least as well as these two approaches, and often considerably improves upon them. We make the model available within a Bayesian Markov-chain Monte Carlo framework for phylogenetic inference, as an easy-to-use computer program.
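
A minimal sketch of the core mixture-likelihood idea: each site's likelihood is a weighted sum over candidate substitution processes, so the data need not be partitioned in advance. The toy below uses two sequences under Jukes-Cantor with two rate classes; the alignment, class weights, and branch length are assumptions, not the paper's setup.

```python
# A minimal sketch of a per-site mixture likelihood, assuming a toy
# two-sequence Jukes-Cantor setting with illustrative rate classes.
import numpy as np

def jc_site_lik(match, t):
    """Jukes-Cantor probability that a site matches (True) or mismatches
    (any other base, False) between two sequences at distance t."""
    e = np.exp(-4.0 * t / 3.0)
    return 0.25 + 0.75 * e if match else 0.75 - 0.75 * e

def mixture_loglik(matches, t, rates, weights):
    # per-site likelihood = sum over classes of (weight * class likelihood)
    site_liks = [sum(w * jc_site_lik(m, t * r) for r, w in zip(rates, weights))
                 for m in matches]
    return float(np.sum(np.log(site_liks)))

matches = [True] * 80 + [False] * 20          # toy alignment: 80 identical sites
print("one class  :", mixture_loglik(matches, 0.1, [1.0], [1.0]))
print("two classes:", mixture_loglik(matches, 0.1, [0.2, 3.0], [0.5, 0.5]))
```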

Relevance: 30.00%

Abstract:

A fermentation system was designed to model the human colonic microflora in vitro. The system provided a framework of mucin beads to encourage the adhesion of bacteria, which was encased within a dialysis membrane. The void between the beads was inoculated with faeces from human donors. Water and metabolites were removed from the fermentation by osmosis using a solution of polyethylene glycol (PEG). A conventional single-stage chemostat was inoculated concomitantly for comparison. Three fermentations were carried out using inocula from three healthy human donors. Bacterial populations from the chemostat and biofilm system were enumerated using fluorescence in situ hybridization. The culture fluid was also analysed for its short-chain fatty acid (SCFA) content. A higher cell density was achieved in the biofilm fermentation system (taking into account the contribution made by the bead-associated bacteria) as compared with the chemostat, owing to the removal of water and metabolites. Evaluation of the bacterial populations revealed that the biofilm system was able to support two distinct groups of bacteria: bacteria growing in association with the mucin beads and planktonic bacteria in the culture fluid. Furthermore, distinct differences were observed between populations in the biofilm fermenter system and the chemostat, with the former supporting higher populations of clostridia and Escherichia coli. SCFA levels were lower in the biofilm system than in the chemostat, as in the former they were removed via the osmotic effect of the PEG. These experiments demonstrated the potential usefulness of the biofilm system for investigating the complexity of the human colonic microflora and the contribution made by sessile bacterial populations.

Relevance: 30.00%

Abstract:

An efficient model identification algorithm for a large class of linear-in-the-parameters models is introduced that simultaneously optimises the model approximation ability, sparsity and robustness. The model parameters in each forward regression step are initially estimated via orthogonal least squares (OLS) and are then tuned with a new gradient-descent learning algorithm, based on basis pursuit, that minimises the ℓ1 norm of the parameter estimate vector. The model subset selection cost function includes a D-optimality design criterion that maximises the determinant of the design matrix of the subset, to ensure model robustness and to enable the model selection procedure to terminate automatically at a sparse model. The proposed approach is based on the forward OLS algorithm using the modified Gram-Schmidt procedure. Both the parameter tuning procedure, based on basis pursuit, and the model selection criterion, based on D-optimality, are integrated with the forward regression, so the inherent computational efficiency associated with the conventional forward OLS approach is maintained. Examples demonstrate the effectiveness of the new approach.
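
A minimal sketch, under assumed details, of the selection loop: each candidate regressor is orthogonalised against the already-selected ones via modified Gram-Schmidt and scored by its error reduction plus a D-optimality contribution (beta * log ||w||^2); the basis-pursuit parameter-tuning stage of the paper is omitted. The data, beta, and stopping rule are illustrative assumptions.

```python
# A sketch of forward OLS subset selection with a D-optimality term,
# under assumed details; not the paper's exact algorithm.
import numpy as np

def forward_ols_dopt(P, y, beta=1e-3, max_terms=10):
    n, m = P.shape
    selected, W = [], []
    r = y.copy()
    for _ in range(max_terms):
        best, best_score = None, -np.inf
        for j in set(range(m)) - set(selected):
            w = P[:, j].copy()
            for wk in W:                         # orthogonalise against chosen terms
                w -= (wk @ w) / (wk @ wk) * wk
            ww = w @ w
            if ww < 1e-12:
                continue
            err_red = (w @ r) ** 2 / ww          # error reduction of this term
            score = err_red + beta * np.log(ww)  # + D-optimality contribution
            if score > best_score:
                best, best_score, best_w = j, score, w
        if best is None:
            break
        selected.append(best)
        W.append(best_w)
        r -= (best_w @ r) / (best_w @ best_w) * best_w
    return selected

rng = np.random.default_rng(1)
P = rng.normal(size=(200, 20))
y = 2 * P[:, 3] - P[:, 7] + 0.1 * rng.normal(size=200)
print("selected regressors:", forward_ols_dopt(P, y, max_terms=4))
```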

Relevance: 30.00%

Abstract:

This paper introduces a new neurofuzzy model construction and parameter estimation algorithm from observed finite data sets, based on a Takagi and Sugeno (T-S) inference mechanism and a new extended Gram-Schmidt orthogonal decomposition algorithm, for the modeling of a priori unknown dynamical systems in the form of a set of fuzzy rules. The first contribution of the paper is the introduction of a one-to-one mapping between a fuzzy rule-base and a model matrix feature subspace using the T-S inference mechanism. This link enables the numerical properties associated with a rule-based matrix subspace, the relationships amongst these matrix subspaces, and the correlation between the output vector and a rule-base matrix subspace to be investigated and extracted as rule-based knowledge to enhance model transparency. The matrix subspace spanned by a fuzzy rule is initially derived as the input regression matrix multiplied by a weighting matrix that consists of the corresponding fuzzy membership functions over the training data set. Model transparency is explored by deriving an equivalence between an A-optimality experimental design criterion of the weighting matrix and the average model output sensitivity to the fuzzy rule, so that rule-bases can be effectively measured by their identifiability via the A-optimality criterion. The A-optimality criterion of the weighting matrices of fuzzy rules is used to construct an initial model rule-base. An extended Gram-Schmidt algorithm is then developed to estimate the parameter vector for each rule. This new algorithm decomposes the model rule-bases via an orthogonal subspace decomposition approach, so as to enhance model transparency with the capability of interpreting the derived rule-base energy level. The new approach is computationally simpler than the conventional Gram-Schmidt algorithm for high-dimensional regression problems, for which it is computationally desirable to decompose a complex model into a few submodels rather than a single model with a large number of input variables and the associated curse of dimensionality. Numerical examples are included to demonstrate the effectiveness of the proposed new algorithm.
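
To make the rule-to-subspace link concrete, the sketch below forms, for each of three assumed Gaussian fuzzy sets, the weighting matrix A_i = diag(mu_i(x)) X implied by the T-S mechanism and ranks rules by the A-optimality measure trace((A_i' A_i)^-1) (smaller means better identifiable). Membership functions, data, and the exact criterion form are illustrative assumptions.

```python
# A sketch of ranking T-S fuzzy rules by an A-optimality measure of their
# weighting matrices; fuzzy sets and data are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(2)
x = rng.uniform(-3, 3, 300)
X = np.column_stack([np.ones_like(x), x])          # local linear T-S consequents

centres, width = np.array([-2.0, 0.0, 2.0]), 1.0   # three Gaussian fuzzy sets
mu = np.exp(-0.5 * ((x[:, None] - centres) / width) ** 2)
mu /= mu.sum(axis=1, keepdims=True)                # normalised firing strengths

for i, c in enumerate(centres):
    A = mu[:, i:i + 1] * X                         # rule's weighted design matrix
    a_opt = np.trace(np.linalg.inv(A.T @ A))       # A-optimality: total parameter variance
    print(f"rule {i} (centre {c:+.0f}): trace((A'A)^-1) = {a_opt:.2f}")
```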

Relevance: 30.00%

Abstract:

New construction algorithms for radial basis function (RBF) network modelling are introduced, based on the A-optimality and D-optimality experimental design criteria respectively. We utilize new cost functions, based on experimental design criteria, for model selection that simultaneously optimize model approximation and either parameter variance (A-optimality) or model robustness (D-optimality). The proposed approaches are based on the forward orthogonal least-squares (OLS) algorithm, such that the new A-optimality- and D-optimality-based cost functions are constructed within the orthogonalization process; this gains computational advantages and hence maintains the inherent computational efficiency associated with the conventional forward OLS approach. The approach enhances the very popular forward-OLS-based RBF model construction method, since the resultant RBF models are constructed in such a way that system-dynamics approximation capability, model adequacy and robustness are optimized simultaneously. The numerical examples provided show significant improvement under the D-optimality design criterion, demonstrating that there is significant room for improvement in modelling via the popular RBF neural network.
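
A minimal sketch contrasting the two composite costs described, under assumed weightings: for an orthogonalised RBF regressor with energy ||w||^2, the A-optimality variant adds a parameter-variance term (proportional to 1/||w||^2), while the D-optimality variant adds -lambda * log ||w||^2, the term's contribution to the log-determinant of the design matrix. Centres, widths, weightings, and data are illustrative assumptions.

```python
# A sketch scoring candidate RBF regressors under assumed A- and D-optimality
# composite costs (lower is better); not the paper's exact cost functions.
import numpy as np

rng = np.random.default_rng(3)
x = rng.uniform(0, 1, 150)
y = np.sin(2 * np.pi * x) + 0.05 * rng.normal(size=150)

centres = np.linspace(0, 1, 12)
Phi = np.exp(-((x[:, None] - centres) ** 2) / (2 * 0.1 ** 2))  # Gaussian RBFs

# score every candidate as if it were the first selected term
for j, c in enumerate(centres):
    w = Phi[:, j]
    ww = w @ w
    err_red = (w @ y) ** 2 / ww              # approximation ability
    cost_A = -err_red + 1.0 / ww             # + parameter-variance penalty
    cost_D = -err_red - 1e-2 * np.log(ww)    # + D-optimality reward
    print(f"centre {c:.2f}: cost_A={cost_A:9.3f}  cost_D={cost_D:9.3f}")
```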

Relevance: 30.00%

Abstract:

A very efficient learning algorithm for model subset selection is introduced, based on a new composite cost function that simultaneously optimizes the model approximation ability and model robustness and adequacy. The model parameters are estimated via forward orthogonal least squares (OLS), but the subset selection cost function includes a D-optimality design criterion that maximizes the determinant of the design matrix of the subset, to ensure the robustness, adequacy, and parsimony of the final model. The new D-optimality-based cost function is constructed within the orthogonalization process to gain computational advantages, and hence the inherent computational efficiency associated with the conventional forward OLS approach is maintained. Illustrative examples are included to demonstrate the effectiveness of the new approach.
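
One reason the D-optimality term is cheap inside forward OLS is the identity det(P'P) = prod_k ||w_k||^2 after orthogonal decomposition, so each selected term simply adds log ||w_k||^2 to the log-determinant. A small numerical check on random data (an illustration, not the paper's code):

```python
# Numerical check: log det of the design matrix equals the sum of the
# log energies of the orthogonalised columns. Data are random.
import numpy as np

rng = np.random.default_rng(8)
P = rng.normal(size=(50, 5))
Q, R = np.linalg.qr(P)                 # Gram-Schmidt columns are w_k = q_k * R_kk
w_energies = np.diag(R) ** 2           # ||w_k||^2 for each orthogonalised column
print(np.log(np.linalg.det(P.T @ P)))  # log det of the design matrix
print(np.log(w_energies).sum())        # sum of per-term log energies: identical
```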

Relevance: 30.00%

Abstract:

Pollen-mediated gene flow is one of the main concerns associated with the introduction of genetically modified (GM) crops. Should a premium for non-GM varieties emerge on the market, ‘contamination’ by GM pollen would generate a revenue loss for growers of non-GM varieties. This paper analyses the problem of pollen-mediated gene flow as a particular type of production externality. The model, although simple, provides useful insights into coexistence policies. Following on from this and taking GM herbicide-tolerant oilseed rape (Brassica napus) as a model crop, a Monte Carlo simulation is used to generate data and then estimate the effect of several important policy variables (including width of buffer zones and spatial aggregation) on the magnitude of the externality associated with pollen-mediated gene flow.
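
As a sketch of the Monte Carlo idea (with an assumed distance-decay curve, premium, and variability, not the paper's fitted model), one can draw random gene-flow outcomes and estimate the expected revenue loss to a non-GM grower as a function of buffer width:

```python
# A sketch of Monte Carlo estimation of the gene-flow externality,
# under assumed decay and premium parameters.
import numpy as np

rng = np.random.default_rng(4)

def expected_loss(buffer_m, n=20_000, premium=0.15):
    # gene flow (GM mass fraction) falls off with distance from the source
    baseline = rng.lognormal(mean=np.log(0.004), sigma=0.5, size=n)  # at 0 m
    gm_frac = baseline * np.exp(-buffer_m / 20.0)
    # the premium is lost whenever the 0.9% labelling threshold is exceeded
    return premium * np.mean(gm_frac > 0.009)

for b in (0, 10, 25, 50):
    print(f"buffer {b:3d} m: expected premium loss = {expected_loss(b):.4f}")
```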

Relevance: 30.00%

Abstract:

Experiments assimilating the RAPID dataset of deep temperature and salinity profiles at 26.5°N on the western and eastern Atlantic boundaries into a 1° global NEMO ocean model have been performed. The meridional overturning circulation (MOC) is then assessed against the transports calculated directly from observations. The best initialization found for this short period was obtained by assimilating the EN3 upper-ocean hydrography database prior to 2004, after which different methods of assimilating 5-day average RAPID profiles at the western boundary were tested. The model MOC is strengthened by ∼2 Sv, giving closer agreement with the RAPID array transports, when the western boundary profiles are assimilated only below 900 m (the approximate depth of the Florida Straits, which are not well resolved) and when the T,S observations are spread meridionally from 10 to 35°N along the deep western boundary. The use of boundary-focused covariances has the largest impact on the assimilation results; using more conventional Gaussian covariances has a very local impact on the MOC at 26°N, with strong adverse impacts on the MOC stream function at higher and lower latitudes. Even with boundary-focused covariances the MOC can only be strengthened for ∼2 years, after which the increased transport of warm waters leads to a negative feedback on water formation in the subpolar gyre, which then reduces the MOC. This negative feedback can be mitigated if EN3 hydrography data continue to be assimilated along with the RAPID array boundary data.
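
To illustrate the covariance choice at stake (with assumed length scales and functional forms), the sketch below contrasts an isotropic Gaussian, which confines a western-boundary T,S increment to the neighbourhood of 26.5°N, with a boundary-focused weighting that spreads it meridionally from 10 to 35°N:

```python
# A sketch of the two meridional covariance shapes; forms and scales
# are illustrative assumptions, not the paper's covariance model.
import numpy as np

lat = np.linspace(0, 50, 101)
obs_lat = 26.5

# conventional isotropic Gaussian: influence confined near the observation
gauss = np.exp(-0.5 * ((lat - obs_lat) / 3.0) ** 2)

# boundary-focused: spread along the deep western boundary from 10 to 35N,
# with a gentle meridional taper
boundary = np.where((lat >= 10.0) & (lat <= 35.0), 1.0, 0.0)
boundary *= np.exp(-0.5 * ((lat - obs_lat) / 15.0) ** 2)

for name, w in (("isotropic Gaussian", gauss), ("boundary-focused", boundary)):
    span = lat[w > 0.1]
    print(f"{name:18s}: weight > 0.1 from {span.min():.1f} to {span.max():.1f} deg N")
```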

Relevance: 30.00%

Abstract:

The recent global tropospheric temperature trend can be reproduced by climate models that are forced only by observed sea surface temperature (SST) anomalies. In this study, simulations with the Hamburg climate model (ECHAM) are compared to temperatures from microwave sounding units (MSU) and to reanalyses from the European Centre for Medium-Range Weather Forecasts. There is overall agreement between observed and simulated tropospheric temperature anomalies in many regions, in particular in the tropics and over the oceans, which lack conventional observing systems. This provides the opportunity to link physically different quantities, such as surface observations or analyses (SST) and satellite soundings (MSU), by means of a general circulation model. The proposed method can indicate inconsistencies between MSU temperatures and SSTs, and has apparently done so. Differences between observed and simulated tropospheric temperature anomalies can partly be attributed to stratospheric aerosol variations due to major volcanic eruptions.

Relevance: 30.00%

Abstract:

Radiometric data in the visible domain acquired by satellite remote sensing have proven to be powerful for monitoring the states of the ocean, both physical and biological. With the help of these data it is possible to understand certain variations in biological responses of marine phytoplankton on ecological time scales. Here, we implement a sequential data-assimilation technique to estimate from a conventional nutrient–phytoplankton–zooplankton (NPZ) model the time variations of observed and unobserved variables. In addition, we estimate the time evolution of two biological parameters, namely, the specific growth rate and specific mortality of phytoplankton. Our study demonstrates that: (i) the series of time-varying estimates of specific growth rate obtained by sequential data assimilation improves the fitting of the NPZ model to the satellite-derived time series: the model trajectories are closer to the observations than those obtained by implementing static values of the parameter; (ii) the estimates of unobserved variables, i.e., nutrient and zooplankton, obtained from an NPZ model by implementation of a pre-defined parameter evolution can be different from those obtained on applying the sequences of parameters estimated by assimilation; and (iii) the maximum estimated specific growth rate of phytoplankton in the study area is more sensitive to the sea-surface temperature than would be predicted by temperature-dependent functions reported previously. The overall results of the study are potentially useful for enhancing our understanding of the biological response of phytoplankton in a changing environment.
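
A minimal sketch of the sequential idea: run an NPZ ordinary-differential-equation model as an ensemble with the phytoplankton growth rate appended to the state, and apply a perturbed-observation EnKF-style update to both state and parameter whenever a (synthetic) satellite estimate of P arrives. The model form, rate constants, and noise levels are assumptions, not the paper's configuration.

```python
# A sketch of joint state-parameter estimation in a toy NPZ model via an
# ensemble Kalman-style update; all settings are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(5)

def npz_step(N, P, Z, mu, dt=0.1, m=0.1, g=0.4, k=0.5):
    up = mu * N / (k + N) * P              # nutrient-limited phytoplankton growth
    gr = g * P * Z                         # zooplankton grazing
    N, P, Z = N + dt * (-up + m * Z), P + dt * (up - gr - m * P), Z + dt * (gr - m * Z)
    return max(N, 1e-6), max(P, 1e-6), max(Z, 1e-6)

ne = 100
ens = np.column_stack([rng.uniform(0.5, 1.5, ne),    # N
                       rng.uniform(0.1, 0.5, ne),    # P
                       rng.uniform(0.1, 0.5, ne),    # Z
                       rng.uniform(0.5, 2.0, ne)])   # mu, the augmented parameter
obs_err = 0.02

for t in range(200):
    for i in range(ne):
        ens[i, :3] = npz_step(*ens[i, :3], ens[i, 3])
    ens[:, 3] = np.clip(ens[:, 3] + 0.01 * rng.normal(size=ne), 0.1, 3.0)
    if t % 20 == 19:                                  # a satellite P observation arrives
        y = 0.3                                       # synthetic observed P
        hx = ens[:, 1]
        cov = (ens - ens.mean(0)).T @ (hx - hx.mean()) / (ne - 1)  # state-obs covariance
        gain = cov / (hx.var(ddof=1) + obs_err ** 2)               # Kalman gain
        innov = y + obs_err * rng.normal(size=ne) - hx             # perturbed-obs EnKF
        ens += np.outer(innov, gain)

print("estimated growth rate mu: %.2f +/- %.2f" % (ens[:, 3].mean(), ens[:, 3].std()))
```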

Relevance: 30.00%

Abstract:

An updated empirical approach is proposed for specifying coexistence requirements for genetically modified (GM) maize (Zea mays L.) production, to ensure compliance with the 0.9% labeling threshold for food and feed in the European Union. The model improves on a previously published empirical model (Gustafson et al., 2006) by adding recent data sources to supplement the original database and by including the following additional cases: (i) more than one GM maize source field adjacent to the conventional or organic field, (ii) the possibility of so-called “stacked” varieties with more than one GM trait, and (iii) lower pollen shed in the non-GM receptor field. These additional factors allow somewhat wider combinations of isolation distance and border rows than required in the original version of the empirical model. For instance, in the very conservative case of a 1-ha square non-GM maize field surrounded on all four sides by homozygous GM maize with 12 m isolation (the effective isolation distance for a single GM field), non-GM border rows of 12 m are required to be 95% confident of gene flow less than 0.9% in the non-GM field (with adventitious presence of 0.3%). Stacked traits of higher GM mass fraction and receptor fields of lower pollen shed would require a greater number of border rows to comply with the 0.9% threshold, and an updated extension to the model is provided to quantify these effects.
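
A rough sketch of the compliance calculation (assumed decay curve, row width, and variability; not the fitted empirical model): combine distance-decaying gene flow with 0.3% adventitious presence and a trait-stacking multiplier, then check how many border rows give 95% confidence of staying under the 0.9% threshold.

```python
# A sketch of a Monte Carlo compliance check under assumed gene-flow
# parameters; the empirical model's fitted curves are not reproduced here.
import numpy as np

rng = np.random.default_rng(6)
ROW_WIDTH = 0.75  # metres per maize border row (assumed)

def p_compliant(isolation_m, border_rows, stack_factor=1.0, n=20_000):
    d = isolation_m + border_rows * ROW_WIDTH        # effective separation
    flow = rng.lognormal(np.log(0.006), 0.4, n) * np.exp(-d / 15.0)
    total = 0.003 + stack_factor * flow              # 0.3% adventitious presence
    return np.mean(total < 0.009)                    # fraction under 0.9%

for rows in (0, 4, 8, 16):
    p1 = p_compliant(12, rows, stack_factor=1.0)     # homozygous single trait
    p2 = p_compliant(12, rows, stack_factor=2.0)     # stacked traits: more GM pollen
    print(f"{rows:2d} border rows: P(<0.9%) single={p1:.3f}  stacked={p2:.3f}")
```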

Relevance: 30.00%

Abstract:

In order to examine metacognitive accuracy (i.e., the relationship between metacognitive judgment and memory performance), researchers often rely on by-participant analysis, where metacognitive accuracy (e.g., resolution, as measured by the gamma coefficient or signal detection measures) is computed for each participant and the computed values are entered into group-level statistical tests such as the t-test. In the current work, we argue that by-participant analysis, regardless of the accuracy measure used, produces a substantial inflation of Type-1 error rates when a random item effect is present. A mixed-effects model is proposed as a way to address the issue effectively, and our simulation studies examining Type-1 error rates indeed showed superior performance of the mixed-effects model analysis compared with the conventional by-participant analysis. We also present real-data applications to illustrate further strengths of the mixed-effects model analysis. Our findings imply that caution is needed when using by-participant analysis, and we recommend the mixed-effects model analysis instead.
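
A minimal simulation of the inflation argument (design and effect sizes are assumptions): when a shared random item effect drives both confidence judgments and accuracy, a by-participant t-test of Goodman-Kruskal gamma against zero rejects far above the nominal 5% rate even though no participant-level metacognitive link was built in.

```python
# A sketch of the random-item-effect problem for by-participant gamma;
# the simulated design is an illustrative assumption.
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)

def gamma_coef(judge, acc):
    """Goodman-Kruskal gamma over all item pairs."""
    conc = disc = 0
    n = len(judge)
    for i in range(n):
        for j in range(i + 1, n):
            s = np.sign(judge[i] - judge[j]) * np.sign(acc[i] - acc[j])
            conc += s > 0
            disc += s < 0
    return (conc - disc) / max(conc + disc, 1)

def experiment(n_sub=20, n_item=30):
    item = rng.normal(0, 1, n_item)                      # random item easiness, shared
    gammas = []
    for _ in range(n_sub):
        judge = item + rng.normal(0, 1, n_item)          # confidence judgments
        acc = (item + rng.normal(0, 1, n_item) > 0) * 1  # recall accuracy (0/1)
        gammas.append(gamma_coef(judge, acc))
    return stats.ttest_1samp(gammas, 0).pvalue           # by-participant analysis

rejections = np.mean([experiment() < 0.05 for _ in range(100)])
print(f"by-participant rejection rate with item effects only: {rejections:.2f}")
```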