67 results for Minimum Variance Model


Relevance: 30.00%

Abstract:

Climate model simulations consistently show that, in response to greenhouse gas forcing, surface temperatures over land increase more rapidly than over sea. The enhanced warming over land is not simply a transient effect, since it is also present in equilibrium conditions. We examine 20 models from the IPCC AR4 database. The global land/sea warming ratio varies in the range 1.36–1.84, independent of global mean temperature change. In the presence of increasing radiative forcing, the warming ratio for a single model is fairly constant in time, implying that the land/sea temperature difference increases with time. The warming ratio varies with latitude, with a minimum in equatorial latitudes and maxima in the subtropics. A simple explanation for these findings is provided, and comparisons are made with observations. For the low-latitude (40°S–40°N) mean, the models suggest a warming ratio of 1.51 ± 0.13, while recent observations suggest a ratio of 1.54 ± 0.09.
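The headline quantity here is a ratio of area-weighted mean warming over land to that over sea. A minimal sketch of that calculation, using entirely synthetic arrays (none of the values below come from the study):

```python
import numpy as np

def warming_ratio(dT, land_mask, lat):
    """Area-weighted land/sea warming ratio from a lat x lon change field.

    dT        : 2-D temperature change (K), shape (nlat, nlon)
    land_mask : boolean array of the same shape, True over land
    lat       : 1-D latitudes (degrees) for the rows of dT
    """
    w = np.cos(np.deg2rad(lat))[:, None] * np.ones_like(dT)  # area weights
    land = np.average(dT[land_mask], weights=w[land_mask])
    sea = np.average(dT[~land_mask], weights=w[~land_mask])
    return land / sea

# Synthetic check: impose 1.5 K warming over a mock continent, 1.0 K at sea,
# so the ratio recovers the imposed value of 1.5.
lat = np.linspace(-40.0, 40.0, 9)
dT = np.full((9, 18), 1.0)
land = np.zeros((9, 18), dtype=bool)
land[:, :6] = True
dT[land] = 1.5
print(warming_ratio(dT, land, lat))
```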

Relevance: 30.00%

Abstract:

Models of the dynamics of nitrogen in soil (soil-N) can be used to aid the fertilizer management of a crop. The predictions of soil-N models can be validated by comparison with observed data. Validation generally involves calculating non-spatial statistics of the observations and predictions, such as their means, their mean squared-difference, and their correlation. However, when the model predictions are spatially distributed across a landscape the model requires validation with spatial statistics. There are three reasons for this: (i) the model may be more or less successful at reproducing the variance of the observations at different spatial scales; (ii) the correlation of the predictions with the observations may be different at different spatial scales; (iii) the spatial pattern of model error may be informative. In this study we used a model, parameterized with spatially variable input information about the soil, to predict the mineral-N content of soil in an arable field, and compared the results with observed data. We validated the performance of the N model spatially with a linear mixed model of the observations and model predictions, estimated by residual maximum likelihood. This novel approach allowed us to describe the joint variation of the observations and predictions as: (i) independent random variation that occurred at a fine spatial scale; (ii) correlated random variation that occurred at a coarse spatial scale; (iii) systematic variation associated with a spatial trend. The linear mixed model revealed that, in general, the performance of the N model changed depending on the spatial scale of interest. At the scales associated with random variation, the N model underestimated the variance of the observations, and the predictions were correlated poorly with the observations. At the scale of the trend, the predictions and observations shared a common surface. 
The spatial pattern of the error of the N model suggested that the observations were affected by the local soil condition, but this was not accounted for by the N model. In summary, the N model would be well suited to field-scale management of soil nitrogen, but poorly suited to management at finer spatial scales. This information was not apparent from a non-spatial validation.
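The REML linear mixed model itself requires specialist geostatistical software, but the scale dependence described above can be illustrated crudely by recomputing validation statistics after aggregating predictions and observations to a coarser support. This block-averaging stand-in is not the paper's method, and all data below are synthetic:

```python
import numpy as np

def scale_stats(obs, pred, block):
    """Prediction/observation variance ratio and correlation after
    averaging both series into blocks of `block` cells."""
    n = (len(obs) // block) * block
    o = obs[:n].reshape(-1, block).mean(axis=1)
    p = pred[:n].reshape(-1, block).mean(axis=1)
    return np.var(p) / np.var(o), np.corrcoef(o, p)[0, 1]

# Synthetic transect: a shared coarse-scale trend plus fine-scale noise
# that the "model" cannot reproduce.
rng = np.random.default_rng(0)
trend = np.linspace(0.0, 10.0, 200)
obs = trend + rng.normal(0.0, 2.0, 200)
pred = trend + rng.normal(0.0, 0.5, 200)
print(scale_stats(obs, pred, 1))    # fine scale: variance underestimated, weaker r
print(scale_stats(obs, pred, 20))   # coarse scale: both statistics improve
```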

Relevance: 30.00%

Abstract:

Climate change science is increasingly concerned with methods for managing and integrating sources of uncertainty from emission storylines, climate model projections, and ecosystem model parameterizations. In tropical ecosystems, regional climate projections and modeled ecosystem responses vary greatly, leading to a significant source of uncertainty in global biogeochemical accounting and possible future climate feedbacks. Here, we combine an ensemble of IPCC-AR4 climate change projections for the Amazon Basin (eight general circulation models) with alternative ecosystem parameter sets for the dynamic global vegetation model LPJmL. We evaluate LPJmL simulations of carbon stocks and fluxes against flux tower and aboveground biomass datasets for individual sites and the entire basin. Variability in LPJmL model sensitivity to future climate change is primarily related to light and water limitations through biochemical and water-balance-related parameters. Temperature-dependent parameters related to plant respiration and photosynthesis appear to be less important than vegetation dynamics (and their parameters) for determining the magnitude of ecosystem response to climate change. Variance partitioning approaches reveal that relationships between uncertainty from ecosystem dynamics and climate projections depend on geographic location and the targeted ecosystem process. Parameter uncertainty from the LPJmL model does not affect the trajectory of ecosystem response for a given climate change scenario, and the primary source of uncertainty for Amazon 'dieback' results from the uncertainty among climate projections. Our approach for describing uncertainty is applicable for informing and prioritizing policy options related to mitigation and adaptation where long-term investments are required.
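The variance-partitioning idea can be sketched as a two-way analysis of variance over a hypothetical GCM-by-parameter-set ensemble. None of the numbers below come from the study; the illustrative spreads are simply chosen with the climate-projection term larger, mirroring the paper's conclusion:

```python
import numpy as np

# Hypothetical 8 GCM x 3 parameter-set ensemble of a scalar response
# (e.g. change in basin carbon stock); all magnitudes are invented.
rng = np.random.default_rng(1)
gcm_effect = rng.normal(0.0, 3.0, 8)[:, None]
par_effect = rng.normal(0.0, 1.0, 3)[None, :]
y = gcm_effect + par_effect + rng.normal(0.0, 0.3, (8, 3))

grand = y.mean()
ss_gcm = 3 * ((y.mean(axis=1) - grand) ** 2).sum()  # between-GCM sum of squares
ss_par = 8 * ((y.mean(axis=0) - grand) ** 2).sum()  # between-parameter-set SS
ss_tot = ((y - grand) ** 2).sum()
print("climate-projection fraction:", ss_gcm / ss_tot)
print("parameter fraction:", ss_par / ss_tot)
```

The two fractions can be computed per grid cell or per process, which is what makes the attribution geographically resolved.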

Relevance: 30.00%

Abstract:

The performance of the atmospheric component of the new Hadley Centre Global Environmental Model (HadGEM1) is assessed in terms of its ability to represent a selection of key aspects of variability in the Tropics and extratropics. These include midlatitude storm tracks and blocking activity, synoptic variability over Europe, and the North Atlantic Oscillation together with tropical convection, the Madden-Julian oscillation, and the Asian summer monsoon. Comparisons with the previous model, the Third Hadley Centre Coupled Ocean-Atmosphere GCM (HadCM3), demonstrate that there has been a considerable increase in the transient eddy kinetic energy (EKE), bringing HadGEM1 into closer agreement with current reanalyses. This increase in EKE results from the increased horizontal resolution and, in combination with the improved physical parameterizations, leads to improvements in the representation of Northern Hemisphere storm tracks and blocking. The simulation of synoptic weather regimes over Europe is also greatly improved compared to HadCM3, again due to both increased resolution and other model developments. The variability of convection in the equatorial region is generally stronger and closer to observations than in HadCM3. There is, however, still limited convective variance coincident with several of the observed equatorial wave modes. Simulation of the Madden-Julian oscillation is improved in HadGEM1: both the activity and interannual variability are increased and the eastward propagation, although slower than observed, is much better simulated. While some aspects of the climatology of the Asian summer monsoon are improved in HadGEM1, the upper-level winds are too weak and the simulation of precipitation deteriorates. The dominant modes of monsoon interannual variability are similar in the two models, although in HadCM3 this is linked to SST forcing, while in HadGEM1 internal variability dominates. 
Overall, analysis of the phenomena considered here indicates that HadGEM1 performs well and, in many important respects, improves upon HadCM3. Together with the improved representation of the mean climate, this improvement in the simulation of atmospheric variability suggests that HadGEM1 provides a sound basis for future studies of climate and climate change.
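Transient eddy kinetic energy, the diagnostic behind the HadGEM1-HadCM3 comparison above, is just half the variance of the wind anomalies. A minimal sketch with invented winds:

```python
import numpy as np

def transient_eke(u, v):
    """Transient eddy kinetic energy per unit mass, 0.5 * mean(u'^2 + v'^2),
    with primes denoting anomalies about the time mean."""
    up, vp = u - u.mean(), v - v.mean()
    return 0.5 * np.mean(up ** 2 + vp ** 2)

# Invented wind time series (m/s): mean flow plus transient anomalies.
u = np.array([10.0, 14.0, 6.0, 10.0])
v = np.array([0.0, 2.0, -2.0, 0.0])
print(transient_eke(u, v))  # 0.5 * (8 + 2) = 5.0 m^2 s^-2
```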

Relevance: 30.00%

Abstract:

Ice clouds are an important yet largely unvalidated component of weather forecasting and climate models, but radar offers the potential to provide the necessary data to evaluate them. First, coordinated aircraft in situ measurements and scans by a 3-GHz radar are presented, demonstrating that, for stratiform midlatitude ice clouds, radar reflectivity in the Rayleigh-scattering regime may be reliably calculated from aircraft size spectra if the "Brown and Francis" mass-size relationship is used. The comparisons spanned radar reflectivity values from −15 to +20 dBZ, ice water contents (IWCs) from 0.01 to 0.4 g m−3, and median volumetric diameters between 0.2 and 3 mm. In mixed-phase conditions the agreement is much poorer because of the higher-density ice particles present. A large midlatitude aircraft dataset is then used to derive expressions that relate radar reflectivity and temperature to ice water content and visible extinction coefficient. The analysis is an advance over previous work in several ways: the retrievals vary smoothly with both input parameters, different relationships are derived for the common radar frequencies of 3, 35, and 94 GHz, and the problem of retrieving the long-term mean and the horizontal variance of ice cloud parameters is considered separately. It is shown that the dependence on temperature arises because of the temperature dependence of the number concentration "intercept parameter" rather than of the mean particle size. A comparison is presented of ice water content derived from scanning 3-GHz radar with the values held in the Met Office mesoscale forecast model, for eight precipitating cases spanning 39 h over southern England. It is found that the model predicted mean IWC to within 10% of the observations at temperatures between −30°C and −10°C but tended to underestimate it by around a factor of 2 at colder temperatures.
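In the Rayleigh regime, reflectivity from a binned size spectrum scales with the sum of squared particle masses, via the melted-equivalent diameter. A sketch of that calculation: the exponent 1.9 follows the commonly quoted Brown-and-Francis form, but the prefactor and the spectrum are illustrative placeholders, and the ice/water dielectric ratio is ignored, so the values are only indicative:

```python
import numpy as np

RHO_W = 1000.0  # density of liquid water, kg m-3

def rayleigh_Z(D, N, a=0.0185, b=1.9):
    """Rayleigh-regime radar reflectivity (mm^6 m^-3) from a binned ice
    size spectrum.

    D : bin-centre particle dimensions (m)
    N : number concentration per bin (m^-3)
    a, b : mass-size relation m = a * D**b; placeholder coefficients, not
           the published Brown and Francis values.
    """
    m = a * D ** b                                   # particle mass, kg
    D_melt = (6.0 * m / (np.pi * RHO_W)) ** (1 / 3)  # melted diameter, m
    return float(np.sum(N * (D_melt * 1e3) ** 6))    # (mm)^6 per m^3

D = np.linspace(1e-4, 3e-3, 30)          # 0.1 to 3 mm
N = 1e4 * np.exp(-2000.0 * D)            # made-up exponential spectrum
print(10 * np.log10(rayleigh_Z(D, N)), "dBZ")
```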

Relevance: 30.00%

Abstract:

Observations suggest a possible link between the Atlantic Multidecadal Oscillation (AMO) and El Niño-Southern Oscillation (ENSO) variability, with the warm AMO phase being related to weaker ENSO variability. A coupled ocean-atmosphere model is used to investigate this relationship and to elucidate the mechanisms responsible for it. Anomalous sea surface temperatures (SSTs) associated with the positive AMO lead to a change in the basic state of the tropical Pacific Ocean. This basic state change is associated with a deepened thermocline and reduced vertical stratification of the equatorial Pacific Ocean, which in turn leads to weakened ENSO variability. We suggest a role for an atmospheric bridge that rapidly conveys the influence of the Atlantic Ocean to the tropical Pacific. The results suggest a non-local mechanism for changes in ENSO statistics and imply that anomalous Atlantic Ocean SSTs can modulate both the mean climate and climate variability over the Pacific.

Relevance: 30.00%

Abstract:

One of the primary goals of the Center for Integrated Space Weather Modeling (CISM) effort is to assess and improve prediction of the solar wind conditions in near-Earth space, arising from both quasi-steady and transient structures. We compare 8 years of L1 in situ observations to predictions of the solar wind speed made by the Wang-Sheeley-Arge (WSA) empirical model. The mean-square error (MSE) between the observed and model predictions is used to reach a number of useful conclusions: there is no systematic lag in the WSA predictions; the MSE is found to be highest at solar minimum and lowest during the rise to solar maximum; and the optimal lead time for 1 AU solar wind speed predictions is found to be 3 days. However, MSE is shown frequently to be an inadequate "figure of merit" for assessing solar wind speed predictions. A complementary, event-based analysis technique is developed in which high-speed enhancements (HSEs) are systematically selected and associated from observed and model time series. The WSA model is validated using comparisons of the number of hit, missed, and false HSEs, along with the timing and speed magnitude errors between the forecasted and observed events. Morphological differences between the different HSE populations are investigated to aid interpretation of the results and improvements to the model. Finally, by defining discrete events in the time series, model predictions from above and below the ecliptic plane can be used to estimate an uncertainty in the predicted HSE arrival times.
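The two validation styles described above, pointwise MSE and event-based hit/miss/false-alarm counting, can be sketched together. The threshold-crossing event definition and fixed matching window below are simplifications of the paper's HSE selection and association procedure, and both series are invented:

```python
import numpy as np

def mse(obs, pred):
    """Pointwise mean-square error between two series."""
    return float(np.mean((np.asarray(obs) - np.asarray(pred)) ** 2))

def hse_events(speed, thresh=500.0):
    """Event starts: upward crossings of a speed threshold (a simplified
    stand-in for the paper's HSE selection procedure)."""
    s = np.asarray(speed)
    return np.flatnonzero((s[1:] >= thresh) & (s[:-1] < thresh)) + 1

def match_events(obs_ev, mod_ev, window=2):
    """Hits, misses and false alarms, matching events within `window` samples."""
    hits = sum(any(abs(o - m) <= window for m in mod_ev) for o in obs_ev)
    return hits, len(obs_ev) - hits, len(mod_ev) - hits

obs = np.array([400, 420, 550, 600, 450, 400, 410, 430])  # "observed" km/s
mod = np.array([410, 520, 580, 590, 460, 400, 560, 500])  # "model" km/s
print(mse(obs, mod))                                   # → 4825.0
print(match_events(hse_events(obs), hse_events(mod)))  # → (1, 0, 1)
```

Here the model catches the one observed enhancement (a hit, one sample early) but also produces a spurious late-series crossing (a false alarm) that the large MSE alone cannot distinguish from a timing error.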

Relevance: 30.00%

Abstract:

This note considers variance estimation for population size estimators based on capture–recapture experiments. Whereas a diversity of estimators of the population size has been suggested, the question of estimating the associated variances is less frequently addressed. This note points out that the technique of conditioning can be applied here successfully, which also allows us to identify the sources of variation: the variance due to estimation of the model parameters and the binomial variance due to sampling n units from a population of size N. The technique is applied to estimators typically used in capture–recapture experiments in continuous time, including the estimators of Zelterman and Chao, and improves upon previously used variance estimators. In addition, knowledge of the variances associated with the estimators of Zelterman and Chao allows the suggestion of a new estimator as the weighted sum of the two. The decomposition of the variance into the two sources also allows a new understanding of how resampling techniques like the bootstrap could be used appropriately. Finally, the sample size question for capture–recapture experiments is addressed. Since the variance of population size estimators increases with the sample size, it is suggested to use relative measures, such as the observed-to-hidden ratio or the completeness of identification proportion, for approaching the question of sample size choice.
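The conditioning technique rests on the law of total variance, Var(N̂) = E[Var(N̂ | n)] + Var(E[N̂ | n]), whose two terms are exactly the parameter-estimation and binomial-sampling components named above. A Monte-Carlo illustration with a toy estimator (not Zelterman's or Chao's):

```python
import numpy as np

# Toy estimator N_hat = n / p_hat, with n ~ Binomial(N, p) the observed
# count and p_hat = k / m estimated from an independent calibration
# sample k ~ Binomial(m, p).  All parameter values are invented.
rng = np.random.default_rng(2)
N, p, m, R = 1000, 0.3, 400, 200_000

n = rng.binomial(N, p, R)
k = rng.binomial(m, p, R)
inv_phat = m / np.maximum(k, 1)          # 1 / p_hat (guard against k = 0)
N_hat = n * inv_phat

mu, v = inv_phat.mean(), inv_phat.var()
binomial_part = np.var(n * mu)           # Var(E[N_hat | n]) = mu^2 * Var(n)
parameter_part = np.mean(n.astype(float) ** 2) * v   # E[Var(N_hat | n)]
print(np.var(N_hat), binomial_part + parameter_part)  # the two sides agree
```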

Relevance: 30.00%

Abstract:

A more complete understanding of amino acid (AA) metabolism by the various tissues of the body is required to improve upon current systems for predicting the use of absorbed AA. The objective of this work was to construct and parameterize a model of net removal of AA by the portal-drained viscera (PDV). Six cows were prepared with arterial, portal, and hepatic catheters and infused abomasally with 0, 200, 400, or 600 g of casein daily. Casein infusion increased milk yield quadratically and tended to increase milk protein yield quadratically. Arterial concentrations of a number of essential AA increased linearly with respect to infusion amount. When infused casein was assumed to have a true digestion coefficient of 0.95, the minimum likely true digestion coefficient for noninfused duodenal protein was found to be 0.80. Net PDV use of AA appeared to be linearly related to total supply (arterial plus absorption), and extraction percentages ranged from 0.5 to 7.25% for essential AA. Prediction errors for portal vein AA concentrations ranged from 4 to 9% of the observed mean concentrations. Removal of AA by the PDV represented approximately 33% of total postabsorptive catabolic use, including use during absorption but excluding use for milk protein synthesis, and was apparently adequate to support endogenous N losses in feces of 18.4 g/d. As 69% of this use was from arterial blood, increased PDV catabolism of AA in part represents increased absorption of AA in excess of amounts required by other body tissues. Based on the present model, increased anabolic use of AA in the mammary and other tissues would reduce the catabolic use of AA by the PDV.

Relevance: 30.00%

Abstract:

Diebold and Lamb (1997) argue that, since the long-run elasticity of supply derived from the Nerlovian model entails a ratio of random variables, it is without moments. They propose minimum expected loss estimation to correct this problem but, in so doing, ignore the fact that a non-white-noise error is implicit in the model. We show that, as a consequence, the estimator is biased, and demonstrate that Bayesian estimation, which fully accounts for the error structure, is preferable.
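The no-moments point is easy to see numerically: a ratio of normal estimates has no finite mean (the denominator's density is positive at zero), so the running sample mean of the ratio never converges. A sketch in the starkest, mean-zero case, where the ratio is exactly standard Cauchy:

```python
import numpy as np

# The Nerlovian elasticity is a ratio of correlated, non-centred normal
# estimates, but the moment problem is the same; mean-zero numerator and
# denominator just give the cleanest picture (a standard Cauchy ratio).
rng = np.random.default_rng(3)
ratio = rng.normal(size=1_000_000) / rng.normal(size=1_000_000)
running = np.cumsum(ratio) / np.arange(1, ratio.size + 1)
print(running[999], running[99_999], running[-1])  # no sign of settling down
```

Occasional near-zero denominators produce enormous draws, which is precisely why point estimators of the ratio behave badly and why estimation that models the full error structure is attractive.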

Relevance: 30.00%

Abstract:

Disease-weather relationships influencing Septoria leaf blotch (SLB) preceding growth stage (GS) 31 were identified using data from 12 sites in the UK covering 8 years. Based on these relationships, an early-warning predictive model for SLB on winter wheat was formulated to predict the occurrence of a damaging epidemic (defined as disease severity of 5% or more on the top three leaf layers). The final model was based on accumulated rain > 3 mm in the 80-day period preceding GS 31 (roughly from early February to the end of April) and accumulated minimum temperature with a 0°C base in the 50-day period starting from 120 days preceding GS 31 (approximately January and February). The model was validated on an independent data set, on which the prediction accuracy was influenced by cultivar resistance. Over all observations, the model had a true positive proportion of 0.61, a true negative proportion of 0.73, a sensitivity of 0.83, and a specificity of 0.18. The true negative proportion increased to 0.85 for resistant cultivars and decreased to 0.50 for susceptible cultivars. Potential fungicide savings are most likely to be made with resistant cultivars, but such benefits would need to be identified with an in-depth evaluation.
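A sketch of how the two weather predictors could be assembled from daily series. The windowing follows the description above, but the helper is hypothetical, the data are invented, and "accumulated rain > 3 mm" is read here as the rainfall total over days exceeding 3 mm (one plausible interpretation):

```python
import numpy as np

def slb_predictors(rain, tmin, gs31):
    """Assemble the two weather inputs for the early-warning model.

    rain, tmin : daily rainfall (mm) and minimum air temperature (deg C)
    gs31       : index of the day on which growth stage 31 is reached
    """
    r = rain[gs31 - 80:gs31]                    # 80 days preceding GS 31
    acc_rain = float(r[r > 3.0].sum())          # rain accumulated on >3 mm days
    t = tmin[gs31 - 120:gs31 - 70]              # 50 days from 120 days before GS 31
    acc_tmin = float(np.maximum(t, 0.0).sum())  # thermal sum above a 0 C base
    return acc_rain, acc_tmin

# Invented daily series: one wet day (5 mm) inside the rain window and a
# constant 2 C minimum temperature.
rain = np.zeros(200); rain[150] = 5.0; rain[160] = 2.0
tmin = np.full(200, 2.0)
print(slb_predictors(rain, tmin, 190))  # → (5.0, 100.0)
```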


Relevance: 30.00%

Abstract:

This paper highlights the key role played by solubility in influencing gelation and demonstrates that many facets of the gelation process depend on this vital parameter. In particular, we relate the thermal stability (Tgel) and minimum gelation concentration (MGC) values of small-molecule gels to the solubility and cooperative self-assembly of the gelator building blocks. By employing a van't Hoff analysis of solubility data, determined from simple NMR measurements, we are able to generate Tcalc values that reflect the calculated temperature for complete solubilization of the networked gelator. The concentration dependence of Tcalc allows the previously difficult-to-rationalize "plateau-region" thermal stability values to be elucidated in terms of gelator molecular design. This is demonstrated for a family of four gelators with lysine units attached to each end of an aliphatic diamine, with different peripheral protecting groups (Z or Boc) in different locations on the periphery of the molecule. By tuning the peripheral protecting groups of the gelators, the solubility of the system is modified, which in turn controls the saturation point of the system and hence the concentration at which network formation takes place. We report that the critical concentration (Ccrit) of gelator incorporated into the solid-phase sample-spanning network within the gel is invariant of gelator structural design. However, because some systems have higher solubilities, they are less effective gelators and require higher total concentrations to achieve gelation, shedding light on the role of the MGC parameter in gelation. Furthermore, gelator structural design also modulates the level of cooperative self-assembly through solubility effects, as determined by applying a cooperative binding model to NMR data.
Finally, the effect of gelator chemical design on the spatial organization of the networked gelator was probed by small-angle neutron and X-ray scattering (SANS/SAXS) on the native gel, and a tentative self-assembly model was proposed.
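The van't Hoff analysis amounts to fitting ln(solubility) against 1/T and inverting the fit to obtain Tcalc for a given applied concentration. A sketch with invented solubility data (no values from the study):

```python
import numpy as np

R_GAS = 8.314  # J mol-1 K-1

# Invented NMR solubility data (mol/L) at four temperatures (K):
T = np.array([290.0, 300.0, 310.0, 320.0])
s = np.array([0.010, 0.022, 0.045, 0.088])

# van't Hoff fit: ln(s) = slope * (1/T) + intercept, with slope = -dH/R.
slope, intercept = np.polyfit(1.0 / T, np.log(s), 1)
dH = -slope * R_GAS                       # dissolution enthalpy, J/mol
print("dH ~", round(dH / 1000, 1), "kJ/mol")

def t_calc(c):
    """Temperature at which the solubility equals concentration c (mol/L)."""
    return slope / (np.log(c) - intercept)

print(t_calc(0.045))  # ~310 K: recovers the temperature of that data point
```

Raising the applied concentration c raises Tcalc along the same fitted line, which is how the concentration dependence of the plateau-region thermal stability follows from solubility alone.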

Relevance: 30.00%

Abstract:

New construction algorithms for radial basis function (RBF) network modelling are introduced, based on the A-optimality and D-optimality experimental design criteria respectively. We utilize new cost functions, based on experimental design criteria, for model selection that simultaneously optimize model approximation and either parameter variance (A-optimality) or model robustness (D-optimality). The proposed approaches are based on the forward orthogonal least-squares (OLS) algorithm, such that the new A-optimality- and D-optimality-based cost functions are constructed on the basis of an orthogonalization process that gains computational advantages and hence maintains the inherent computational efficiency associated with the conventional forward OLS approach. The proposed approach enhances the very popular forward-OLS-based RBF model construction method, since the resultant RBF models are constructed in a manner in which the system dynamics approximation capability, model adequacy, and robustness are optimized simultaneously. The numerical examples provided show significant improvement based on the D-optimality design criterion, demonstrating that there is significant room for improvement in modelling via the popular RBF neural network.
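The forward OLS backbone that the A- and D-optimality costs augment can be sketched as greedy selection by error-reduction ratio. This plain version omits the design-criterion term and uses invented data; the paper's variants add an A- or D-optimality penalty to the same per-column selection cost:

```python
import numpy as np

def forward_ols(P, y, n_select):
    """Forward orthogonal least-squares subset selection.

    P : (N, M) candidate regressor matrix (e.g. RBF activations)
    y : (N,) target vector
    Greedily picks n_select columns by largest error reduction, using
    Gram-Schmidt orthogonalization against the already-chosen set.
    """
    P = np.asarray(P, dtype=float)
    y = np.asarray(y, dtype=float)
    selected, W = [], []
    for _ in range(n_select):
        best_j, best_err, best_w = None, -1.0, None
        for j in range(P.shape[1]):
            if j in selected:
                continue
            w = P[:, j].copy()
            for wk in W:                   # orthogonalize against chosen set
                w -= (wk @ P[:, j]) / (wk @ wk) * wk
            denom = w @ w
            if denom < 1e-12:
                continue                   # numerically dependent column
            err = (w @ y) ** 2 / denom     # (unnormalized) error reduction
            if err > best_err:
                best_j, best_err, best_w = j, err, w
        selected.append(best_j)
        W.append(best_w)
    return selected

# Noise-free check: the target is built from columns 1 and 4 of a random
# candidate matrix, so the selection should recover exactly those columns.
rng = np.random.default_rng(5)
P = rng.normal(size=(60, 6))
y = 2.0 * P[:, 1] + 3.0 * P[:, 4]
print(forward_ols(P, y, 2))  # the two true columns, 1 and 4
```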

Relevance: 30.00%

Abstract:

In this paper, observations by a ground-based vertically pointing Doppler lidar and a sonic anemometer are used to investigate the diurnal evolution of boundary-layer turbulence in cloudless, cumulus and stratocumulus conditions. When turbulence is driven primarily by surface heating, such as in cloudless and cumulus-topped boundary layers, both the vertical velocity variance and skewness follow similar profiles, on average, to previous observational studies of turbulence in convective conditions, with a peak skewness of around 0.8 in the upper third of the mixed layer. When the turbulence is driven primarily by cloud-top radiative cooling, such as in the presence of nocturnal stratocumulus, it is found that the skewness is inverted in both sign and height: its minimum value of around −0.9 occurs in the lower third of the mixed layer. The profile of variance is consistent with a cloud-top cooling rate of around 30 W m−2. This is also consistent with the evolution of the thermodynamic profile and the rate of growth of the mixed layer into the stable nocturnal boundary layer from above. In conditions where surface heating occurs simultaneously with cloud-top cooling, the skewness is found to be useful for diagnosing the source of the turbulence, suggesting that long-term Doppler lidar observations would be valuable for evaluating boundary-layer parametrization schemes.
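The two diagnostics used throughout, vertical-velocity variance and skewness, are simple moments of the Doppler time series at each height. A sketch with a synthetic, positively skewed series of the kind expected in surface-heated conditions (cloud-top cooling would flip the sign):

```python
import numpy as np

def w_moments(w):
    """Variance and skewness of a vertical-velocity series at one height."""
    wp = w - w.mean()
    var = np.mean(wp ** 2)
    skew = np.mean(wp ** 3) / var ** 1.5
    return var, skew

# Synthetic surface-heated case: rare strong updrafts (20% of samples)
# amid frequent weak downdrafts, giving positive skewness.
rng = np.random.default_rng(4)
up = rng.random(100_000) < 0.2
w = np.where(up, rng.normal(2.0, 1.0, 100_000),
                 rng.normal(-0.5, 0.5, 100_000))
var, skew = w_moments(w)
print(var, skew)
```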