957 results for Minimum Variance Model
Abstract:
A more complete understanding of amino acid (AA) metabolism by the various tissues of the body is required to improve upon current systems for predicting the use of absorbed AA. The objective of this work was to construct and parameterize a model of net removal of AA by the portal-drained viscera (PDV). Six cows were prepared with arterial, portal, and hepatic catheters and infused abomasally with 0, 200, 400, or 600 g of casein daily. Casein infusion increased milk yield quadratically and tended to increase milk protein yield quadratically. Arterial concentrations of a number of essential AA increased linearly with infusion amount. When infused casein was assumed to have a true digestion coefficient of 0.95, the minimum likely true digestion coefficient for noninfused duodenal protein was found to be 0.80. Net PDV use of AA appeared to be linearly related to total supply (arterial plus absorption), and extraction percentages ranged from 0.5 to 7.25% for essential AA. Prediction errors for portal vein AA concentrations ranged from 4 to 9% of the observed mean concentrations. Removal of AA by the PDV represented approximately 33% of total postabsorptive catabolic use, including use during absorption but excluding use for milk protein synthesis, and was apparently adequate to support endogenous fecal N losses of 18.4 g/d. As 69% of this use was from arterial blood, increased PDV catabolism of AA in part represents increased absorption of AA in excess of the amounts required by other body tissues. Based on the present model, increased anabolic use of AA in the mammary and other tissues would reduce the catabolic use of AA by the PDV.
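A minimal algebraic reading of the linear supply relationship described in this abstract, assuming a single extraction fraction per amino acid; the symbols are illustrative and not taken from the paper:

```latex
% Net PDV removal of amino acid i, sketched as a fixed extraction fraction k_i
% applied to total supply (arterial delivery plus absorption); illustrative only.
\[
  R_{\mathrm{PDV},i} = k_i \left( F_a \, C_{a,i} + A_i \right),
  \qquad 0.005 \le k_i \le 0.0725 \ \text{(essential AA)}
\]
% F_a: arterial plasma flow, C_{a,i}: arterial concentration of AA i, A_i: absorbed flux of AA i.
```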
Abstract:
Diebold and Lamb (1997) argue that since the long-run elasticity of supply derived from the Nerlovian model entails a ratio of random variables, it is without moments. They propose minimum expected loss estimation to correct this problem but, in so doing, ignore the fact that a non-white-noise error is implicit in the model. We show that, as a consequence, the estimator is biased and demonstrate that Bayesian estimation, which fully accounts for the error structure, is preferable.
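For readers unfamiliar with why the long-run elasticity is a ratio of random variables, a sketch of the standard Nerlovian partial-adjustment form; the notation is assumed rather than quoted from the paper:

```latex
% Standard Nerlovian partial-adjustment supply model and its long-run elasticity:
\[
  y_t = \alpha + \beta x_t + \lambda y_{t-1} + \varepsilon_t ,
  \qquad
  \hat{\eta}_{\mathrm{LR}} = \frac{\hat{\beta}}{1 - \hat{\lambda}}
\]
% The long-run elasticity is a ratio of estimated (random) coefficients,
% so its sampling distribution has no finite moments.
```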
Abstract:
Disease–weather relationships influencing Septoria leaf blotch (SLB) preceding growth stage (GS) 31 were identified using data from 12 sites in the UK covering 8 years. Based on these relationships, an early-warning predictive model for SLB on winter wheat was formulated to predict the occurrence of a damaging epidemic (defined as disease severity of 5% or more on the top three leaf layers). The final model was based on accumulated rain > 3 mm in the 80-day period preceding GS 31 (roughly from early February to the end of April) and accumulated minimum temperature with a 0°C base in the 50-day period starting from 120 days preceding GS 31 (approximately January and February). The model was validated on an independent data set, on which the prediction accuracy was influenced by cultivar resistance. Over all observations, the model had a true positive proportion of 0.61, a true negative proportion of 0.73, a sensitivity of 0.83, and a specificity of 0.18. True negative proportion increased to 0.85 for resistant cultivars and decreased to 0.50 for susceptible cultivars. Potential fungicide savings are most likely to be made with resistant cultivars, but such benefits would need to be identified with an in-depth evaluation.
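A sketch of how the two weather predictors described above could be computed from daily station data; the function name, the data layout, and the way the 3 mm rain threshold is applied are assumptions, not the published implementation:

```python
import numpy as np

def slb_predictors(rain_mm, tmin_c, gs31_index):
    """Compute the two weather predictors of the early-warning SLB model.
    rain_mm, tmin_c: daily numpy arrays aligned in time; gs31_index: day index of GS 31.
    Sketch only: how the > 3 mm rain threshold enters the accumulation is an assumption."""
    rain_window = rain_mm[gs31_index - 80: gs31_index]        # 80 days preceding GS 31
    acc_rain = rain_window[rain_window > 3.0].sum()           # accumulate rain on days with > 3 mm

    temp_window = tmin_c[gs31_index - 120: gs31_index - 70]   # 50-day window starting 120 days before GS 31
    acc_tmin = np.clip(temp_window, 0.0, None).sum()          # accumulated minimum temperature, 0 degC base
    return acc_rain, acc_tmin
```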
Abstract:
This note considers variance estimation for population size estimators based on capture–recapture experiments. Whereas a diversity of estimators of the population size has been suggested, the question of estimating the associated variances is less frequently addressed. This note points out that the technique of conditioning can be applied here successfully, which also allows us to identify the sources of variation: the variance due to estimation of the model parameters and the binomial variance due to sampling n units from a population of size N. The technique is applied to estimators typically used in capture–recapture experiments in continuous time, including the estimators of Zelterman and Chao, and improves upon previously used variance estimators. In addition, knowledge of the variances associated with the estimators of Zelterman and Chao allows the suggestion of a new estimator as the weighted sum of the two. The decomposition of the variance into the two sources also allows a new understanding of how resampling techniques such as the bootstrap could be used appropriately. Finally, the sample size question for capture–recapture experiments is addressed. Since the variance of population size estimators increases with the sample size, it is suggested that relative measures, such as the observed-to-hidden ratio or the completeness-of-identification proportion, be used when approaching the question of sample size choice.
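The conditioning argument is essentially the law of total variance; a sketch in generic notation, where the identification of the two terms with the two named variance components follows my reading of the abstract rather than a quotation:

```latex
% Law of total variance applied to a population size estimator \hat{N},
% conditioning on the number n of observed (captured) units:
\[
  \operatorname{Var}(\hat{N})
  = \underbrace{\mathbb{E}\!\left[\operatorname{Var}(\hat{N}\mid n)\right]}_{\text{variance from estimating model parameters}}
  \;+\;
  \underbrace{\operatorname{Var}\!\left(\mathbb{E}[\hat{N}\mid n]\right)}_{\text{binomial variance from sampling } n \text{ of } N}
\]
```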
Abstract:
This paper highlights the key role played by solubility in influencing gelation and demonstrates that many facets of the gelation process depend on this vital parameter. In particular, we relate the thermal stability (Tgel) and minimum gelation concentration (MGC) values of small-molecule gels to the solubility and cooperative self-assembly of the gelator building blocks. By employing a van't Hoff analysis of solubility data, determined from simple NMR measurements, we are able to generate Tcalc values that reflect the calculated temperature for complete solubilization of the networked gelator. The concentration dependence of Tcalc allows the previously difficult-to-rationalize "plateau-region" thermal stability values to be elucidated in terms of gelator molecular design. This is demonstrated for a family of four gelators with lysine units attached to each end of an aliphatic diamine, with different peripheral groups (Z or Boc) in different locations on the periphery of the molecule. By tuning the peripheral protecting groups of the gelators, the solubility of the system is modified, which in turn controls the saturation point of the system and hence the concentration at which network formation takes place. We report that the critical concentration (Ccrit) of gelator incorporated into the solid-phase, sample-spanning network within the gel is invariant of gelator structural design. However, because some systems have higher solubilities, they are less effective gelators and require the application of higher total concentrations to achieve gelation, hence shedding light on the role of the MGC parameter in gelation. Furthermore, gelator structural design also modulates the level of cooperative self-assembly through solubility effects, as determined by applying a cooperative binding model to NMR data. Finally, the effect of gelator chemical design on the spatial organization of the networked gelator was probed by small-angle neutron and X-ray scattering (SANS/SAXS) on the native gel, and a tentative self-assembly model was proposed.
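A sketch of the van't Hoff treatment implied above: fit the temperature dependence of the dissolved (NMR-visible) gelator concentration, then invert it to obtain Tcalc at a given total loading. The sign conventions, symbols, and concentration units are assumptions:

```latex
% van't Hoff fit of the mobile (dissolved) gelator concentration c versus temperature,
% inverted to give the temperature at which a total loading c_0 is fully dissolved:
\[
  \ln c = -\frac{\Delta H_{\mathrm{diss}}}{R}\,\frac{1}{T} + \frac{\Delta S_{\mathrm{diss}}}{R}
  \quad\Longrightarrow\quad
  T_{\mathrm{calc}}(c_0) = \frac{\Delta H_{\mathrm{diss}}}{\Delta S_{\mathrm{diss}} - R \ln c_0}
\]
```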
Abstract:
New construction algorithms for radial basis function (RBF) network modelling are introduced, based on the A-optimality and D-optimality experimental design criteria respectively. We utilize new cost functions, based on experimental design criteria, for model selection that simultaneously optimize model approximation together with parameter variance (A-optimality) or model robustness (D-optimality). The proposed approaches are based on the forward orthogonal least-squares (OLS) algorithm: the new A-optimality- and D-optimality-based cost functions are constructed via an orthogonalization process that gains computational advantages and hence maintains the inherent computational efficiency of the conventional forward OLS approach. The proposed approach enhances the popular forward-OLS-based RBF model construction method, since the resultant RBF models are constructed in a manner such that the system dynamics approximation capability, model adequacy, and robustness are optimized simultaneously. The numerical examples provided show significant improvement under the D-optimality design criterion, demonstrating that there is significant room for improvement in modelling via the popular RBF neural network.
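A compact sketch of a forward OLS selection loop with a D-optimality-style term in the selection cost. The combined cost, the weight `beta`, and the function name are illustrative assumptions rather than the published criterion:

```python
import numpy as np

def forward_ols_doptimality(X, y, n_terms, beta=1e-2):
    """Greedy forward selection of regressor columns of X, combining squared-error
    reduction with a D-optimality-style term. Sketch under stated assumptions only."""
    n, m = X.shape
    selected, Q = [], []                      # chosen column indices and their orthogonalized versions
    residual = y.astype(float).copy()
    for _ in range(n_terms):
        best_j, best_q, best_cost = None, None, np.inf
        for j in range(m):
            if j in selected:
                continue
            q = X[:, j].astype(float).copy()
            for qk in Q:                      # classical Gram-Schmidt against already-selected terms
                q = q - (qk @ X[:, j]) / (qk @ qk) * qk
            if q @ q < 1e-12:                 # column (nearly) dependent on the selected set
                continue
            err_red = (q @ residual) ** 2 / (q @ q)   # squared-error reduction from adding this term
            cost = -err_red - beta * np.log(q @ q)    # D-optimality favours large |q|^2 (log-det of info matrix)
            if cost < best_cost:
                best_j, best_q, best_cost = j, q, cost
        if best_j is None:
            break
        selected.append(best_j)
        Q.append(best_q)
        residual = residual - (best_q @ residual) / (best_q @ best_q) * best_q
    return selected
```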
Abstract:
In this paper, observations by a ground-based vertically pointing Doppler lidar and a sonic anemometer are used to investigate the diurnal evolution of boundary-layer turbulence in cloudless, cumulus, and stratocumulus conditions. When turbulence is driven primarily by surface heating, such as in cloudless and cumulus-topped boundary layers, both the vertical velocity variance and skewness follow, on average, similar profiles to previous observational studies of turbulence in convective conditions, with a peak skewness of around 0.8 in the upper third of the mixed layer. When the turbulence is driven primarily by cloud-top radiative cooling, such as in the presence of nocturnal stratocumulus, the skewness is found to be inverted in both sign and height: its minimum value of around −0.9 occurs in the lower third of the mixed layer. The profile of variance is consistent with a cloud-top cooling rate of around 30 W m−2. This is also consistent with the evolution of the thermodynamic profile and the rate of growth of the mixed layer into the stable nocturnal boundary layer from above. In conditions where surface heating occurs simultaneously with cloud-top cooling, the skewness is found to be useful for diagnosing the source of the turbulence, suggesting that long-term Doppler lidar observations would be valuable for evaluating boundary-layer parametrization schemes.
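For reference, the two turbulence statistics discussed above are the second moment and the normalized third moment of the vertical velocity fluctuations:

```latex
% Vertical-velocity variance and skewness, from fluctuations w' about the mean:
\[
  \sigma_w^2 = \overline{w'^2},
  \qquad
  \mathrm{Sk}_w = \frac{\overline{w'^3}}{\left(\overline{w'^2}\right)^{3/2}}
\]
```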
Abstract:
In molecular mechanics simulations of biological systems, the solvation water is typically represented by a default water model which is an integral part of the force field. Indeed, protein nonbonding parameters are chosen in order to obtain a balance between water-water and protein-water interactions and hence a reliable description of protein solvation. However, less attention has been paid to the question of whether the water model provides a reliable description of the water properties under the chosen simulation conditions, for which more accurate water models often exist. Here we consider the case of the CHARMM protein force field, which was parametrized for use with a modified TIP3P model. Using quantum mechanical and molecular mechanical calculations, we investigate whether the CHARMM force field can be used with other water models: TIP4P and TIP5P. Solvation properties of N-methylacetamide (NMA), other small solute molecules, and a small protein are examined. The results indicate differences in binding energies and minimum energy geometries, especially for TIP5P, but the overall description of solvation is found to be similar for all models tested. The results provide an indication that molecular mechanics simulations with the CHARMM force field can be performed with water models other than TIP3P, thus enabling an improved description of the solvent water properties.
Abstract:
A polynomial-based ARMA model, when posed in a state-space framework, can be regarded in many different ways. In this paper two particular state-space forms of the ARMA model are considered; although both are canonical in structure, they differ in the way the disturbances are fed into the state and output equations. For both forms a solution is found to the optimal discrete-time observer problem, and algebraic connections between the two optimal observers are shown. The purpose of the paper is to highlight the fact that the optimal observer obtained from the first state-space form, commonly known as the innovations form, is not the one employed in an optimal controller in the minimum-output-variance sense, whereas the optimal observer obtained from the second form is. Hence the second form is a much more appropriate state-space description to use for controller design, particularly when employed in self-tuning control schemes.
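A sketch of the first of the two forms mentioned, the innovations form, in standard notation; the second form, with its different disturbance structure, is not reproduced here:

```latex
% Innovations form of an ARMA model in state space: a single disturbance e_k
% (the innovation) drives both the state and the output equations.
\[
  x_{k+1} = A\,x_k + K\,e_k, \qquad y_k = C\,x_k + e_k
\]
```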
Abstract:
A common problem in many data-based modelling algorithms, such as associative memory networks, is the curse of dimensionality. In this paper, a new two-stage neurofuzzy system design and construction algorithm (NeuDeC) for nonlinear dynamical processes is introduced to tackle this problem effectively. A new, simple preprocessing method is first derived and applied to reduce the rule base, followed by a fine model detection process based on the reduced rule set using forward orthogonal least squares model structure detection. In both stages, new A-optimality experimental-design-based criteria are used. In the preprocessing stage, a lower bound of the A-optimality design criterion is derived and applied as a subset selection metric, while in the later stage the A-optimality design criterion is incorporated into a new composite cost function that minimises the model prediction error while penalising the model parameter variance. The use of NeuDeC leads to unbiased model parameters with low parameter variance and the additional benefit of a parsimonious model structure. Numerical examples are included to demonstrate the effectiveness of this new modelling approach for high-dimensional inputs.
Abstract:
A very efficient learning algorithm for model subset selection is introduced, based on a new composite cost function that simultaneously optimizes the model approximation ability and model adequacy. The derived model parameters are estimated via forward orthogonal least squares, while the subset selection cost function includes an A-optimality design criterion to minimize the variance of the parameter estimates, which ensures the adequacy and parsimony of the final model. An illustrative example is included to demonstrate the effectiveness of the new approach.
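One way to write such a composite selection cost, sketched in generic notation; the trade-off weight β and the exact combination are assumptions, not the paper's definition:

```latex
% Composite subset-selection cost: squared-error term plus an A-optimality penalty
% (trace of the parameter covariance up to \sigma^2), with illustrative weight \beta.
\[
  J(\mathcal{S}) = \left\lVert y - \Phi_{\mathcal{S}}\,\hat{\theta}_{\mathcal{S}} \right\rVert^2
  + \beta \,\operatorname{tr}\!\left[\left(\Phi_{\mathcal{S}}^{\top}\Phi_{\mathcal{S}}\right)^{-1}\right]
\]
```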
Abstract:
An alternative blind deconvolution algorithm for white-noise-driven minimum-phase systems is presented and verified by computer simulation. The algorithm uses a cost function based on a novel idea, variance approximation and series decoupling (VASD), and suggests that not all autocorrelation function values are necessary to implement blind deconvolution.
Abstract:
Smooth trajectories are essential for safe interaction between a human and a haptic interface. Different methods and strategies have been introduced to create such smooth trajectories. This paper studies the creation of human-like movements in haptic interfaces, based on the study of human arm motion. These motions are intended to retrain the upper-limb movements of patients who have lost manipulation function following a stroke. We present a model that uses higher-degree polynomials to define a trajectory and control the robot arm so as to achieve minimum-jerk movements. The paper also studies different methods, derived from polynomials, for creating more realistic human-like movements for therapeutic purposes.
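The classic minimum-jerk point-to-point profile is the quintic polynomial below; a minimal sketch of that baseline, not of the paper's higher-degree variants:

```python
import numpy as np

def minimum_jerk(x0, xf, duration, n=101):
    """Minimum-jerk point-to-point trajectory: the standard quintic profile
    with zero velocity and acceleration at both endpoints."""
    t = np.linspace(0.0, duration, n)
    tau = t / duration
    x = x0 + (xf - x0) * (10 * tau**3 - 15 * tau**4 + 6 * tau**5)
    return t, x

# Example: a 0.4 m reach completed in 1.5 s.
t, x = minimum_jerk(0.0, 0.4, 1.5)
```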
Abstract:
We explore the potential for making statistical decadal predictions of sea surface temperatures (SSTs) in a perfect-model analysis, with a focus on the Atlantic basin. Various statistical methods (lagged correlations, Linear Inverse Modelling, and Constructed Analogue) are found to have significant skill in predicting the internal variability of Atlantic SSTs up to a decade ahead in control integrations of two different global climate models (GCMs), namely HadCM3 and HadGEM1. Statistical methods that consider non-local information tend to perform best, but the most successful method depends on the region considered, the GCM data used, and the prediction lead time. However, the Constructed Analogue method tends to have the highest skill at longer lead times. Importantly, the regions of greatest prediction skill can be very different from the regions identified as potentially predictable from variance-explained arguments. This finding suggests that significant local decadal variability is not necessarily a prerequisite for skillful decadal predictions, and that the statistical methods are capturing some of the dynamics of low-frequency SST evolution. In particular, using data from HadGEM1, significant skill at lead times of 6–10 years is found in the tropical North Atlantic, a region with relatively little decadal variability compared to interannual variability. This skill appears to come from reconstructing the SSTs in the far north Atlantic, suggesting that the more northern latitudes are optimal for SST observations to improve predictions. We additionally explore whether adding sub-surface temperature data improves these decadal statistical predictions and find that, again, it depends on the region, prediction lead time, and GCM data used. Overall, we argue that the estimated prediction skill motivates the further development of statistical decadal predictions of SSTs as a benchmark for current and future GCM-based decadal climate predictions.
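A minimal sketch of the Constructed Analogue step, assuming anomaly fields stored as columns of a library matrix; the variable layout and the plain least-squares fit are assumptions, and operational versions typically truncate to leading EOFs first:

```python
import numpy as np

def constructed_analogue(library_now, library_later, state_now):
    """Fit the current anomaly field as a least-squares combination of past fields,
    then apply the same weights to the fields observed one lead time later."""
    # library_now, library_later: (n_gridpoints, n_past_states); state_now: (n_gridpoints,)
    weights, *_ = np.linalg.lstsq(library_now, state_now, rcond=None)
    return library_later @ weights
```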
Abstract:
It took the solar polar passage of Ulysses in the early 1990s to establish the global structure of the solar wind speed during solar minimum. However, it remains unclear whether the solar wind is composed of two distinct populations from different sources (e.g., closed loops which open up to produce the slow solar wind) or whether both the fast and slow solar wind rely on the superradial expansion of the magnetic field to account for the observed speed variation. We investigate the solar wind in the inner corona using the Wang-Sheeley-Arge (WSA) coronal model incorporating a new empirical magnetic topology–velocity relationship calibrated for use at 0.1 AU. In this study the empirical solar wind speed relationship was determined using Helios perihelion observations, along with results from Riley et al. (2003) and Schwadron et al. (2005) as constraints. The new relationship was tested by using it to drive the ENLIL 3-D MHD solar wind model and obtain solar wind parameters at Earth (1.0 AU) and Ulysses (1.4 AU). The improvements in speed, its variability, and the occurrence of high-speed enhancements provide confidence that the new velocity relationship better determines the solar wind speed in the outer corona (0.1 AU). An analysis of this improved velocity field within the WSA model suggests the existence of two distinct mechanisms of solar wind generation, one for the fast and one for the slow solar wind, implying that a combination of present theories may be necessary to explain solar wind observations.
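For orientation, WSA-type relationships map two coronal quantities, the flux-tube expansion factor f_s and the minimum angular distance of a field line footpoint from a coronal-hole boundary theta_b, to a wind speed at 0.1 AU. The functional form and every coefficient below are placeholders for illustration only; the paper derives its own calibration:

```python
import numpy as np

def wsa_like_speed(fs, theta_b, v_slow=250.0, v_fast=650.0, alpha=0.4, width=3.0, p=2.0):
    """Illustrative WSA-style speed function (km/s): slow wind for large expansion
    factors or footpoints near coronal-hole boundaries, faster wind otherwise.
    All coefficients are hypothetical placeholders, not the calibrated values."""
    return v_slow + v_fast / (1.0 + fs) ** alpha * (1.0 - np.exp(-(theta_b / width) ** p))
```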