46 results for mean-variance estimation


Relevance: 30.00%

Publisher:

Abstract:

The sensitivity of the UK Universities Global Atmospheric Modelling Programme (UGAMP) General Circulation Model (UGCM) to two very different approaches to convective parametrization is described. Comparison is made between a Kuo scheme, which is constrained by large-scale moisture convergence, and a convective-adjustment scheme, which relaxes to observed thermodynamic states. Results from 360-day integrations with perpetual January conditions are used to describe the model's tropical time-mean climate and its variability. Both convection schemes give reasonable simulations of the time-mean climate, but the representation of the main modes of tropical variability is markedly different. The Kuo scheme has much weaker variance, confined to synoptic frequencies near 4 days, and a poor simulation of intraseasonal variability. In contrast, the convective-adjustment scheme has much more transient activity at all time-scales. The various aspects of the two schemes which might explain this difference are discussed. The particular closure on moisture convergence used in this version of the Kuo scheme is identified as being inappropriate.

Relevance: 30.00%

Publisher:

Abstract:

21st century climate change is projected to result in an intensification of the global hydrological cycle, but there is substantial uncertainty in how this will impact freshwater availability. A relatively overlooked aspect of this uncertainty pertains to how different methods of estimating potential evapotranspiration (PET) respond to a changing climate. Here we investigate the global response of six different PET methods to a 2 °C rise in global mean temperature. All methods suggest an increase in PET associated with a warming climate. However, differences of over 100% in the PET climate change signal are found between methods. Analysis of a precipitation/PET aridity index and regional water surplus indicates that, for certain regions and GCMs, the choice of PET method can actually determine the direction of projections of future water resources. As such, method dependence of the PET climate change signal is an important source of uncertainty in projections of future freshwater availability.
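
As a hedged illustration of why the choice of PET method matters (a minimal sketch, not the six methods evaluated in the paper; all parameter values are illustrative assumptions), the example below contrasts a temperature-only formulation (one common form of Hamon's method) with a radiation-based one (Priestley-Taylor) under a uniform +2 °C warming:

```python
import numpy as np

def e_sat(T):
    """Saturation vapour pressure (kPa) via the Tetens formula; T in deg C."""
    return 0.6108 * np.exp(17.27 * T / (T + 237.3))

def pet_hamon(T, daylength_h=12.0):
    """One common form of Hamon PET (mm/day): temperature-only method."""
    return 29.8 * daylength_h * e_sat(T) / (T + 273.2)

def pet_priestley_taylor(T, net_rad=10.0, alpha=1.26, gamma=0.066):
    """Priestley-Taylor PET (mm/day): radiation-based method.
    net_rad is net radiation in MJ m-2 day-1, held fixed here (an
    assumption); ground heat flux is ignored for simplicity."""
    delta = 4098.0 * e_sat(T) / (T + 237.3) ** 2  # slope of e_sat curve (kPa/K)
    lam = 2.45  # latent heat of vaporization (MJ/kg)
    return alpha * delta / (delta + gamma) * net_rad / lam

for T in (5.0, 15.0, 25.0):
    dh = pet_hamon(T + 2) / pet_hamon(T) - 1
    dpt = pet_priestley_taylor(T + 2) / pet_priestley_taylor(T) - 1
    print(f"T={T:4.1f}C  Hamon +{100*dh:.1f}%  Priestley-Taylor +{100*dpt:.1f}%")
```

With net radiation held fixed, the temperature-only method yields a PET warming signal several times larger than the radiation-based one, which is the kind of method dependence the abstract describes.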

Relevance: 30.00%

Publisher:

Abstract:

The variogram is essential for local estimation and mapping of any variable by kriging. The variogram itself must usually be estimated from sample data. The sampling density is a compromise between precision and cost, but it must be sufficiently dense to encompass the principal spatial sources of variance. A nested, multi-stage sampling scheme with separating distances increasing in geometric progression from stage to stage will do that. The data may then be analyzed by a hierarchical analysis of variance to estimate the components of variance for every stage, and hence for every lag. By accumulating the components starting from the shortest lag, one obtains a rough variogram for modest effort. For balanced designs the analysis of variance is optimal; for unbalanced ones, however, these estimators are not necessarily the best, and analysis by residual maximum likelihood (REML) will usually be preferable. The paper summarizes the underlying theory and illustrates its application with data from three surveys: one in which the design had four stages and was balanced, and two implemented with unbalanced designs to economize when there were more stages. A Fortran program is available for the analysis of variance, and code for the REML analysis is listed in the paper.
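
A minimal sketch of the accumulation step described above (the rough-variogram construction only, not the hierarchical ANOVA or REML estimation; the stage distances and variance components are made up for illustration):

```python
import numpy as np

# Hypothetical stage separating distances (m), finest stage first, and
# variance components already estimated from a nested analysis of variance.
lags = np.array([1.0, 10.0, 100.0, 1000.0])
components = np.array([0.8, 1.3, 2.1, 0.9])

# Accumulating the components from the shortest lag upward gives a rough
# variogram value at each stage's separating distance.
rough_variogram = np.cumsum(components)
for d, g in zip(lags, rough_variogram):
    print(f"lag {d:7.1f} m : gamma ~ {g:.2f}")
```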

Relevance: 30.00%

Publisher:

Abstract:

Cloud cover is conventionally estimated from satellite images as the observed fraction of cloudy pixels. Active instruments such as radar and Lidar observe in narrow transects that sample only a small percentage of the area over which the cloud fraction is estimated. As a consequence, the fraction estimate has an associated sampling uncertainty, which usually remains unspecified. This paper extends a Bayesian method of cloud fraction estimation, which also provides an analytical estimate of the sampling error. This method is applied to test the sensitivity of this error to sampling characteristics, such as the number of observed transects and the variability of the underlying cloud field. The dependence of the uncertainty on these characteristics is investigated using synthetic data simulated to have properties closely resembling observations of the spaceborne Lidar NASA-LITE mission. Results suggest that the variance of the cloud fraction is greatest for medium cloud cover and least when conditions are mostly cloudy or clear. However, there is a bias in the estimation, which is greatest around 25% and 75% cloud cover. The sampling uncertainty is also affected by the mean lengths of clouds and of clear intervals; shorter lengths decrease uncertainty, primarily because there are more cloud observations in a transect of a given length. Uncertainty also falls with increasing number of transects. Therefore a sampling strategy aimed at minimizing the uncertainty in transect derived cloud fraction will have to take into account both the cloud and clear sky length distributions as well as the cloud fraction of the observed field. These conclusions have implications for the design of future satellite missions. This paper describes the first integrated methodology for the analytical assessment of sampling uncertainty in cloud fraction observations from forthcoming spaceborne radar and Lidar missions such as NASA's Calipso and CloudSat.
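
A hedged sketch of the flavour of such a Bayesian estimate (assuming independent transect samples and a uniform prior, so ignoring the along-transect correlation that the paper's extended method accounts for): with k cloudy samples out of n, the posterior for the cloud fraction is Beta(k+1, n-k+1), and its variance is largest near 50% cover, echoing the finding above.

```python
# Beta-posterior cloud-fraction estimate from transect samples,
# assuming independent samples and a uniform Beta(1, 1) prior.
def cloud_fraction_posterior(k, n):
    a, b = k + 1, n - k + 1                      # Beta posterior parameters
    mean = a / (a + b)                           # posterior mean cloud fraction
    var = a * b / ((a + b) ** 2 * (a + b + 1))   # sampling variance
    return mean, var

n = 200  # number of transect samples
for k in (10, 50, 100, 150, 190):
    mean, var = cloud_fraction_posterior(k, n)
    print(f"k={k:3d}: fraction {mean:.3f}, std {var**0.5:.4f}")
```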

Relevance: 30.00%

Publisher:

Abstract:

Observations suggest a possible link between the Atlantic Multidecadal Oscillation (AMO) and El Niño-Southern Oscillation (ENSO) variability, with the warm AMO phase being related to weaker ENSO variability. A coupled ocean-atmosphere model is used to investigate this relationship and to elucidate the mechanisms responsible for it. Anomalous sea surface temperatures (SSTs) associated with the positive AMO lead to a change in the basic state of the tropical Pacific Ocean. This basic-state change is associated with a deepened thermocline and reduced vertical stratification of the equatorial Pacific Ocean, which in turn leads to weakened ENSO variability. We suggest a role for an atmospheric bridge that rapidly conveys the influence of the Atlantic Ocean to the tropical Pacific. The results suggest a non-local mechanism for changes in ENSO statistics and imply that anomalous Atlantic Ocean SSTs can modulate both mean climate and climate variability over the Pacific.

Relevance: 30.00%

Publisher:

Abstract:

This paper presents a first attempt to estimate mixing parameters from sea level observations using a particle method based on importance sampling. The method is applied to an ensemble of 128 members of model simulations with a global ocean general circulation model of high complexity. Idealized twin experiments demonstrate that the method is able to accurately reconstruct mixing parameters from an observed mean sea level field when mixing is assumed to be spatially homogeneous. An experiment with inhomogeneous eddy coefficients fails because of the limited ensemble size. This is overcome by the introduction of local weighting, which is able to capture spatial variations in mixing qualitatively. As the sensitivity of sea level to variations in mixing is higher for low values of the mixing coefficients, the method works relatively well in regions of low eddy activity.
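
A minimal sketch of the importance-sampling step under simplifying assumptions (a scalar mixing coefficient per member, Gaussian observation error, and a toy stand-in for the ocean model; all names and values are illustrative, and the local-weighting extension is omitted):

```python
import numpy as np

rng = np.random.default_rng(0)
n_members, n_points, sigma_obs = 128, 500, 0.05

# Hypothetical ensemble: each member runs with a different mixing
# coefficient kappa and yields a simulated mean sea level field
# (a toy stand-in model is used here instead of an ocean GCM).
kappa = rng.uniform(500.0, 2000.0, n_members)
fields = np.outer(np.log(kappa), np.ones(n_points)) * 0.01
fields += rng.normal(0.0, 0.02, fields.shape)

# "Observed" field generated with a known true coefficient (twin experiment).
kappa_true = 1000.0
obs = np.log(kappa_true) * 0.01 * np.ones(n_points) \
      + rng.normal(0.0, sigma_obs, n_points)

# Importance weights: Gaussian likelihood of the observations per member.
log_w = -0.5 * np.sum((obs - fields) ** 2, axis=1) / sigma_obs**2
w = np.exp(log_w - log_w.max())
w /= w.sum()

print("posterior mean kappa:", np.sum(w * kappa))
print("effective ensemble size:", 1.0 / np.sum(w**2))
```

The small effective ensemble size printed at the end is the symptom behind the failure with inhomogeneous coefficients noted above, which the paper's local weighting mitigates.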

Relevance: 30.00%

Publisher:

Abstract:

A study was carried out on 92 smallholder farms in Kwale district in Coast Province of Kenya to estimate the milk yield. The effect of concentrate feed supplementation on milk yield was also evaluated. Data were collected during a one-year observational longitudinal study. Analysis was done for 371 observations following 63 calving events. The mean annual milk offtake was estimated at 2021 kg/cow. Forty-nine (77.8%) of the lactating cows were supplemented with concentrate feeds at varying rates of 0.5-3.0 kg/cow per day. Supplementary feeding of lactating cows led to a significantly higher mean daily milk yield compared to non-supplemented cows throughout the year (p<0.05). The mean annual milk offtake from supplemented cows (2195 kg/cow) was 18.6% more than offtake from non-supplemented cows, a difference that was statistically significant (p<0.05). Therefore, supplementary feeding of commercial feed concentrates was a rational management practice. It was also concluded that milk production from smallholder dairy cows in the coastal lowlands of Kenya was comparable to that from similar production systems but lower than national targets.

Relevance: 30.00%

Publisher:

Abstract:

Proportion estimators are quite frequently used in many application areas. The conventional proportion estimator (number of events divided by sample size) encounters a number of problems when the data are sparse, as will be demonstrated in various settings. The problem of estimating its variance when sample sizes become small is rarely addressed in a satisfactory framework. Specifically, we have in mind applications like the weighted risk difference in multicenter trials or stratified risk ratio estimators (to adjust for potential confounders) in epidemiological studies. It is suggested to estimate p using the parametric family p̂_c = (X + c)/(n + 2c) and p(1 - p) using p̂_c(1 - p̂_c). We investigate the estimation problem of choosing c ≥ 0 from various perspectives, including minimizing the average mean squared error of p̂_c, and the average bias and average mean squared error of p̂_c(1 - p̂_c). The optimal value of c for minimizing the average mean squared error of p̂_c is found to be independent of n and equals c = 1. The optimal value of c for minimizing the average mean squared error of p̂_c(1 - p̂_c) is found to depend on n, with limiting value c = 0.833. This may justify using the near-optimal value c = 1 in practice, which also turns out to be beneficial when constructing confidence intervals of the form p̂_c ± z_{1-α/2} SE(p̂_c).
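
A minimal sketch of the shrinkage family as reconstructed above (notation follows the abstract; the sparse-data example is illustrative):

```python
def p_hat(x, n, c=1.0):
    """Shrinkage proportion estimator p_c = (X + c) / (n + 2c).
    c = 1 is the near-optimal choice suggested in the abstract."""
    return (x + c) / (n + 2 * c)

def var_hat(x, n, c=1.0):
    """Plug-in estimate of p(1 - p) based on the shrunken proportion."""
    p = p_hat(x, n, c)
    return p * (1 - p)

# Sparse-data example: zero events out of three trials.
x, n = 0, 3
print(p_hat(x, n))    # 0.2 rather than the degenerate 0.0
print(var_hat(x, n))  # positive, unlike the conventional estimate of 0.0
```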

Relevance: 30.00%

Publisher:

Abstract:

We have studied growth and estimated recruitment of massive coral colonies at three sites, Kaledupa, Hoga and Sampela, separated by about 1.5 km in the Wakatobi Marine National Park, S.E. Sulawesi, Indonesia. There was significantly higher species richness (P<0.05), coral cover (P<0.05) and rugosity (P<0.01) at Kaledupa than at Sampela. A model for coral reef growth has been developed based on a rational polynomial function, where dW/dt is an index of coral growth with time; W is the variable (for example, coral weight, coral length or coral area), raised to powers up to n in the numerator and m in the denominator; and a_1, …, a_n and b_1, …, b_m are constants. The values of n and m represent the degrees of the polynomials, and can relate to the morphology of the coral. The model was used to simulate typical coral growth curves, and tested using published data obtained by weighing coral colonies underwater in reefs on the south-west coast of Curaçao [‘Neth. J. Sea Res. 10 (1976) 285’]. The model proved an accurate fit to the data, and parameters were obtained for a number of coral species. Surface area data were obtained on over 1200 massive corals at three different sites in the Wakatobi Marine National Park, S.E. Sulawesi, Indonesia. The year of an individual's recruitment was calculated from knowledge of the growth rate, modified by application of the rational polynomial model. The estimated pattern of recruitment was variable, with few massive corals settling and growing before 1950 at the heavily used site, Sampela, relative to the reef site with little or no human use, Kaledupa, and the intermediate site, Hoga. There was a significantly greater sedimentation rate at Sampela than at either Kaledupa (P<0.0001) or Hoga (P<0.0005). The relative mean abundance of fish families present at the reef crests at the three sites, determined using digital video photography, did not correlate with sedimentation rates, underwater visibility or lack of large non-branching coral colonies. Radial growth rates of three genera of non-branching corals were significantly lower at Sampela than at Kaledupa or at Hoga, and there was a high correlation (r=0.89) between radial growth rates and underwater visibility. Porites spp. was the most abundant coral over all the sites and at all depths, followed by Favites (P<0.04) and Favia spp. (P<0.03). Colony ages of Porites corals were significantly lower at the 5 m reef flat on the Sampela reef than at the same depth on both other reefs (P<0.005). At Sampela, only 2.8% of corals on the 5 m reef crest are of a size to have survived from before 1950. The scleractinian coral community of Sampela is severely impacted by depositing sediments, which can lead to the suffocation of corals whilst also decreasing light penetration, resulting in decreased growth and calcification rates. The net loss of material from Sampela, if not checked, could result in the loss of this protective barrier, which would be to the detriment of the sublittoral sand flats and hence the Sampela village.
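
As a hedged illustration only (a low-order instance of a rational-polynomial growth law with made-up constants, not the paper's fitted model), the sketch below integrates dW/dt = a1·W/(1 + b1·W), whose growth rate is nearly proportional to size for small colonies and approaches the constant a1/b1 for large ones:

```python
# Forward-Euler integration of a low-order rational-polynomial growth model,
# dW/dt = a1*W / (1 + b1*W), with illustrative (not fitted) constants.
a1, b1 = 0.5, 0.02   # per-year rate constants (assumed values)
W, dt = 1.0, 0.01    # initial colony size index; time step in years

trajectory = []
for step in range(int(60 / dt)):         # simulate 60 years
    W += dt * a1 * W / (1.0 + b1 * W)    # growth rate saturates at a1/b1
    if step % int(10 / dt) == 0:
        trajectory.append((step * dt, W))

for t, w in trajectory:
    print(f"year {t:4.0f}: size index {w:10.2f}")
```

Inverting such a fitted curve (size to age) is how the year of an individual colony's recruitment can be back-calculated from its measured surface area.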

Relevance: 30.00%

Publisher:

Abstract:

Presented herein is an experimental design that allows the effects of several radiative forcing factors on climate to be estimated as precisely as possible from a limited suite of atmosphere-only general circulation model (GCM) integrations. The forcings include the combined effect of observed changes in sea surface temperatures, sea ice extent, stratospheric (volcanic) aerosols, and solar output, plus the individual effects of several anthropogenic forcings. A single linear statistical model is used to estimate the forcing effects, each of which is represented by its global mean radiative forcing. The strong collinearity in time between the various anthropogenic forcings poses a technical problem that is overcome through the design of the experiment. This design uses every combination of anthropogenic forcing, rather than the few highly replicated ensembles more commonly used in climate studies. Not only is this design highly efficient for a given number of integrations, but it also allows the estimation of (nonadditive) interactions between pairs of anthropogenic forcings. The simulated land surface air temperature changes since 1871 have been analyzed. The changes in natural and oceanic forcing, the latter of which itself contains some anthropogenic and natural influences, have the greatest effect. For the global mean, increasing greenhouse gases and the indirect aerosol effect had the largest anthropogenic effects. An interaction between these two anthropogenic effects was also found to exist in the atmosphere-only GCM. This interaction is similar in magnitude to the individual effects of changing tropospheric and stratospheric ozone concentrations or to the direct (sulfate) aerosol effect. Various diagnostics are used to evaluate the fit of the statistical model. For the global mean, these show that the land temperature response is proportional to the global mean radiative forcing, reinforcing the use of radiative forcing as a measure of climate change. The diagnostic tests also show that the linear model is suitable for analyses of land surface air temperature at each GCM grid point. Therefore, the linear model provides precise estimates of the space-time signals for all forcing factors under consideration. For simulated 50-hPa temperatures, results show that tropospheric ozone increases have contributed to stratospheric cooling over the twentieth century almost as much as changes in well-mixed greenhouse gases.
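
A minimal sketch of the design idea under stated assumptions (three hypothetical anthropogenic forcings and a synthetic response, not the paper's GCM analysis): a full 2^k factorial design, augmented with pairwise product columns, lets a single least-squares fit separate main effects from (nonadditive) interactions even when the forcings are collinear in time.

```python
import itertools
import numpy as np

rng = np.random.default_rng(1)

# Full factorial over three hypothetical forcings: each run switches each
# forcing off (0) or on (1); 2**3 = 8 runs, no replication required.
runs = np.array(list(itertools.product([0, 1], repeat=3)), dtype=float)

# Design matrix: intercept, main effects, and all pairwise interactions.
pairs = [(0, 1), (0, 2), (1, 2)]
X = np.column_stack([np.ones(len(runs)), runs] +
                    [runs[:, i] * runs[:, j] for i, j in pairs])

# Synthetic temperature response: known effects, one interaction, noise.
true_beta = np.array([0.0, 0.6, -0.3, 0.1, -0.2, 0.0, 0.0])
y = X @ true_beta + rng.normal(0.0, 0.05, len(runs))

beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
print(np.round(beta_hat, 2))  # recovers main effects and interactions
```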

Relevance: 30.00%

Publisher:

Abstract:

Finding an estimate of the channel impulse response (CIR) by correlating a received known (training) sequence with the sent training sequence is commonplace. Where required, it is also common to truncate the longer correlation to a sub-set of correlation coefficients by finding the set of N sequential correlation coefficients with the maximum power. This paper presents a new approach to selecting the optimal set of N CIR coefficients from the correlation, rather than relying on power. The algorithm reconstructs a set of predicted symbols using the training sequence and various sub-sets of the correlation, to find the sub-set that results in the minimum mean squared error between the actual received symbols and the reconstructed symbols. The application of the algorithm is presented in the context of the TDMA-based GSM/GPRS system, and the results presented in the paper demonstrate an improvement in system performance with the new algorithm. However, the approach lends itself to any training-sequence-based communication system, such as those often found in wireless consumer electronic devices.
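
A hedged sketch of the selection rule (synthetic channel and random BPSK training sequence, not GSM's actual midamble): slide an N-tap window over the correlation-based CIR estimate, reconstruct the received symbols from each candidate sub-set, and keep the window with the minimum mean squared reconstruction error rather than the maximum power.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic setup: BPSK training sequence through a sparse multipath channel.
train = rng.choice([-1.0, 1.0], size=64)
h_true = np.zeros(16)
h_true[[3, 4, 6]] = [0.9, 0.5, 0.3]  # true CIR taps
rx = np.convolve(train, h_true)[:len(train)] + rng.normal(0, 0.1, len(train))

# Correlation-based CIR estimate over 16 candidate lags.
corr = np.correlate(rx, train, mode="full")[len(train) - 1:len(train) - 1 + 16]
corr /= np.dot(train, train)

# Slide an N-tap window; pick the offset minimising reconstruction MSE.
N, best = 4, (None, np.inf)
for off in range(len(corr) - N + 1):
    h = np.zeros_like(corr)
    h[off:off + N] = corr[off:off + N]          # candidate sub-set of taps
    rec = np.convolve(train, h)[:len(train)]    # reconstructed symbols
    mse = np.mean((rx - rec) ** 2)
    if mse < best[1]:
        best = (off, mse)
print("chosen window offset:", best[0], "MSE:", round(best[1], 4))
```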

Relevance: 30.00%

Publisher:

Abstract:

This study investigates a superposition-based cooperative transmission system. In this system, a key point is for the relay node to detect the data transmitted from the source node. This issue has received little attention in the existing literature, as the channel is usually assumed to be flat-fading and a priori known. In practice, however, the channel is not only a priori unknown but also subject to frequency-selective fading. Channel estimation is thus necessary. Of particular interest is channel estimation at the relay node, which imposes extra requirements on the system resources. The authors propose a novel turbo least-squares channel estimator that exploits the superposition structure of the transmission data. The proposed channel estimator not only requires no pilot symbols but also has significantly better performance than the classic approach. The soft-in-soft-out minimum mean square error (MMSE) equaliser is also re-derived to match the superimposed data structure. Finally, computer simulation results are shown to verify the proposed algorithm.
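
A minimal sketch of the least-squares building block only (the classic estimator the paper improves on, not the turbo iteration or the superimposed data structure; all values are illustrative): stack the known symbols into a Toeplitz convolution matrix S and solve min_h ||y - S·h||².

```python
import numpy as np
from scipy.linalg import toeplitz

rng = np.random.default_rng(3)

L, n = 4, 100                        # channel length, symbol count
s = rng.choice([-1.0, 1.0], size=n)  # known (pilot or detected) symbols
h_true = np.array([0.8, 0.4, -0.2, 0.1])
y = np.convolve(s, h_true)[:n] + rng.normal(0, 0.05, n)

# Toeplitz convolution matrix: row k holds s[k], s[k-1], ..., s[k-L+1].
S = toeplitz(s, np.r_[s[0], np.zeros(L - 1)])

# Least-squares channel estimate h = argmin ||y - S h||^2.
h_ls, *_ = np.linalg.lstsq(S, y, rcond=None)
print(np.round(h_ls, 3))  # close to h_true
```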

Relevance: 30.00%

Publisher:

Abstract:

A generalized or tunable-kernel model is proposed for probability density function estimation, based on an orthogonal forward regression procedure. Each stage of the density estimation process determines a tunable kernel, namely its center vector and diagonal covariance matrix, by minimizing a leave-one-out test criterion. The kernel mixing weights of the constructed sparse density estimate are finally updated using the multiplicative nonnegative quadratic programming algorithm to enforce the nonnegativity and unity constraints, and this weight-updating process additionally has the desired ability to further reduce the model size. The proposed tunable-kernel model has advantages, in terms of model generalization capability and model sparsity, over the standard fixed-kernel model, which restricts kernel centers to the training data points and employs a single common kernel variance for every kernel. On the other hand, it does not optimize all the model parameters together, and thus avoids the problems of high-dimensional, ill-conditioned nonlinear optimization associated with the conventional finite mixture model. Several examples are included to demonstrate the ability of the proposed tunable-kernel model to construct very compact and accurate density estimates.
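
A hedged sketch of the leave-one-out idea in its simplest fixed-kernel form (the baseline the paper improves on, not the tunable-kernel procedure; data and candidate widths are illustrative): choose a single common Gaussian kernel width by maximising the leave-one-out log-likelihood of a Parzen density estimate.

```python
import numpy as np

rng = np.random.default_rng(4)
x = np.concatenate([rng.normal(-2, 0.5, 100), rng.normal(1, 1.0, 100)])
n = len(x)

def loo_log_likelihood(x, sigma):
    """Leave-one-out log-likelihood of a Gaussian Parzen density estimate."""
    d2 = (x[:, None] - x[None, :]) ** 2
    K = np.exp(-0.5 * d2 / sigma**2) / (np.sqrt(2 * np.pi) * sigma)
    np.fill_diagonal(K, 0.0)          # exclude each held-out point
    dens = K.sum(axis=1) / (n - 1)    # LOO density at every point
    return np.log(dens).sum()

widths = np.linspace(0.05, 1.5, 30)
scores = [loo_log_likelihood(x, s) for s in widths]
print("LOO-optimal common width:", round(widths[int(np.argmax(scores))], 3))
```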

Relevance: 30.00%

Publisher:

Abstract:

A new parameter-estimation algorithm, which minimises the cross-validated prediction error for linear-in-the-parameters models, is proposed, based on stacked regression and an evolutionary algorithm. It is first shown, using a criterion called the mean dispersion error (MDE), that cross-validation is very important for prediction in linear-in-the-parameters models. Stacked regression, which can be regarded as a sophisticated form of cross-validation, is then combined with an evolutionary algorithm to produce the new parameter-estimation algorithm, which preserves the parsimony of a concise model structure determined using the forward orthogonal least-squares (OLS) algorithm. The PRESS prediction errors are used for cross-validation, and the sunspot and Canadian lynx time series are used to demonstrate the new algorithms.
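
A minimal sketch of the PRESS computation for a linear-in-the-parameters model (the standard hat-matrix identity e_i/(1 - h_ii), with illustrative data; this is not the paper's stacked-regression algorithm):

```python
import numpy as np

rng = np.random.default_rng(5)

# Illustrative linear-in-the-parameters model: polynomial terms of x.
x = rng.uniform(-1, 1, 80)
X = np.column_stack([np.ones_like(x), x, x**2, x**3])
y = 1.0 + 2.0 * x - 0.5 * x**3 + rng.normal(0, 0.1, len(x))

beta, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ beta

# Hat-matrix leverages h_ii; the leave-one-out (PRESS) residual for point i
# is resid_i / (1 - h_ii), so one fit yields the full cross-validated error.
H = X @ np.linalg.inv(X.T @ X) @ X.T
press = np.sum((resid / (1 - np.diag(H))) ** 2)
print("PRESS:", round(press, 4))
```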