12 results for size-extensivity error

in CentAUR: Central Archive at the University of Reading - UK


Relevance:

30.00%

Publisher:

Abstract:

A method to estimate the size and liquid water content of drizzle drops using lidar measurements at two wavelengths is described. The method exploits the differential absorption of infrared light by liquid water at 905 nm and 1.5 μm, which leads to a different backscatter cross section for water drops larger than ≈50 μm. The ratio of backscatter measured from drizzle samples below cloud base at these two wavelengths (the colour ratio) provides a measure of the median volume drop diameter D0. This is a strong effect: for D0=200 μm, a colour ratio of ≈6 dB is predicted. Once D0 is known, the measured backscatter at 905 nm can be used to calculate the liquid water content (LWC) and other moments of the drizzle drop distribution. The method is applied to observations of drizzle falling from stratocumulus and stratus clouds. High resolution (32 s, 36 m) profiles of D0, LWC and precipitation rate R are derived. The main sources of error in the technique are the need to assume a value for the dispersion parameter μ in the drop size spectrum (leading to at most a 35% error in R) and the influence of aerosol returns on the retrieval (≈10% error in R for the cases considered here). Radar reflectivities are also computed from the lidar data, and compared to independent measurements from a colocated cloud radar, offering independent validation of the derived drop size distributions.
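
To make the colour-ratio idea concrete, here is a minimal numerical sketch (not the authors' code): it assumes a hypothetical monotonic lookup curve relating D0 to the colour ratio — in reality this curve comes from scattering calculations — and uses invented backscatter values.

```python
import numpy as np

# Hypothetical lookup curve relating median volume diameter D0 (um) to the
# colour ratio (dB); in practice this comes from scattering calculations.
d0_grid = np.linspace(50.0, 400.0, 200)          # um
cr_grid = 6.0 * (d0_grid / 200.0)                # toy curve: ~6 dB at D0 = 200 um

def colour_ratio_db(beta_905, beta_1550):
    """Colour ratio (dB) of backscatters measured at 905 nm and 1.5 um."""
    return 10.0 * np.log10(beta_905 / beta_1550)

def retrieve_d0(beta_905, beta_1550):
    """Invert the monotonic colour-ratio curve to obtain D0 (um)."""
    return np.interp(colour_ratio_db(beta_905, beta_1550), cr_grid, d0_grid)

# Illustrative drizzle sample below cloud base.
beta_905, beta_1550 = 2.0e-5, 5.0e-6             # backscatter, sr^-1 m^-1 (invented)
print(f"colour ratio = {colour_ratio_db(beta_905, beta_1550):.1f} dB, "
      f"D0 = {retrieve_d0(beta_905, beta_1550):.0f} um")
```

Once D0 is retrieved this way, the 905 nm backscatter fixes the remaining scale of the drop size distribution, from which LWC and rain rate follow.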

Relevance:

30.00%

Publisher:

Abstract:

Background: The present paper investigates the question of a suitable basic model for the number of scrapie cases in a holding, and applications of this knowledge to the estimation of scrapie-affected holding population sizes and the adequacy of control measures within holdings. Is the number of scrapie cases proportional to the size of the holding, in which case it should be incorporated into the parameter of the error distribution for the scrapie counts? Or is there a different, potentially more complex, relationship between case count and holding size, in which case the information about the size of the holding would be better incorporated as a covariate in the modelling? Methods: We show that this question can be appropriately addressed via a simple zero-truncated Poisson model in which the hypothesis of proportionality enters as a special offset model. Model comparisons can be achieved by means of likelihood ratio testing. The procedure is illustrated by means of surveillance data on classical scrapie in Great Britain. Furthermore, the model with the best fit is used to estimate the size of the scrapie-affected holding population in Great Britain by means of two capture-recapture estimators: the Poisson estimator and the generalized Zelterman estimator. Results: No evidence could be found for the hypothesis of proportionality. In fact, there is some evidence that this relationship follows a curved line which increases for small holdings up to a maximum, after which it declines again. Furthermore, it is pointed out how crucial the correct model choice is when applied to capture-recapture estimation on the basis of zero-truncated Poisson models as well as on the basis of the generalized Zelterman estimator: estimators based on the proportionality model return very different and unreasonable estimates for the population sizes. Conclusion: Our results stress the importance of an adequate modelling approach to the association between holding size and the number of cases of classical scrapie within a holding. Reporting artefacts and speculative biological effects are hypothesized as the underlying causes of the observed curved relationship. The lack of adjustment for these artefacts might well render ineffective the current strategies for the control of the disease.
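
The model-comparison step can be sketched in a few lines. The following is a self-contained illustration on synthetic data (not the surveillance data used in the paper): a zero-truncated Poisson regression in which the proportionality hypothesis is the offset special case (coefficient of log holding size fixed at 1), tested against the free-slope alternative by a likelihood ratio test.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import gammaln
from scipy.stats import chi2

rng = np.random.default_rng(0)

# Synthetic holdings: sizes and case counts; the true slope (0.5) is
# deliberately non-proportional. Only affected holdings (y > 0) are observed.
size = rng.integers(20, 2000, 400)
y = rng.poisson(np.exp(-4.0 + 0.5 * np.log(size)))
size, y = size[y > 0], y[y > 0]

def ztp_negloglik(beta, fix_slope=None):
    """Negative log-likelihood of a zero-truncated Poisson regression."""
    b0 = beta[0]
    b1 = fix_slope if fix_slope is not None else beta[1]
    mu = np.exp(b0 + b1 * np.log(size))
    ll = y * np.log(mu) - mu - gammaln(y + 1) - np.log1p(-np.exp(-mu))
    return -ll.sum()

full = minimize(ztp_negloglik, x0=[0.0, 1.0], method="Nelder-Mead")
offset = minimize(lambda b: ztp_negloglik(b, fix_slope=1.0),
                  x0=[0.0], method="Nelder-Mead")

lr = 2.0 * (offset.fun - full.fun)               # likelihood ratio statistic, 1 df
print(f"slope = {full.x[1]:.2f}, LR = {lr:.1f}, p = {chi2.sf(lr, df=1):.3g}")
```

A small p-value rejects the offset (proportionality) model in favour of the covariate model, mirroring the comparison described in the abstract.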

Relevance:

30.00%

Publisher:

Abstract:

The theta-logistic is a widely used generalisation of the logistic model of regulated biological processes, used in particular to model population regulation. The parameter theta gives the shape of the relationship between per-capita population growth rate and population size. Estimation of theta from population counts is, however, subject to bias, particularly when there are measurement errors. Here we identify factors disposing towards accurate estimation of theta by simulation of populations regulated according to the theta-logistic model. The factors investigated were measurement error, environmental perturbation and length of time series. Large measurement errors bias estimates of theta towards zero. Where estimated theta is close to zero, the estimated annual return rate may help resolve whether this is due to bias. Environmental perturbations help yield unbiased estimates of theta. Where environmental perturbations are large, estimates of theta are likely to be reliable even when measurement errors are also large. By contrast, where the environment is relatively constant, unbiased estimates of theta can only be obtained if populations are counted precisely. Our results have practical conclusions for the design of long-term population surveys. Estimation of the precision of population counts would be valuable, and could be achieved in practice by repeating counts in at least some years. Increasing the length of time series beyond 10 or 20 years yields only small benefits. If populations are measured with appropriate accuracy, given the level of environmental perturbation, unbiased estimates can be obtained from relatively short censuses. These conclusions are optimistic for the estimation of theta. (C) 2008 Elsevier B.V. All rights reserved.
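
As a concrete reference for the setting studied, the sketch below simulates a theta-logistic population with environmental noise and lognormal measurement error, then recovers theta by nonlinear least squares. All parameter values are illustrative, and this is not the simulation machinery used in the study.

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(1)
r0, K, theta = 0.5, 1000.0, 1.5                  # true growth rate, capacity, shape
T, sd_env, sd_obs = 40, 0.15, 0.05               # series length and noise levels

# Simulate the theta-logistic with environmental perturbations ...
N = np.empty(T)
N[0] = 200.0
for t in range(T - 1):
    r = r0 * (1.0 - (N[t] / K) ** theta) + rng.normal(0.0, sd_env)
    N[t + 1] = N[t] * np.exp(r)

# ... observed with multiplicative (lognormal) measurement error.
N_obs = N * np.exp(rng.normal(0.0, sd_obs, T))

# Estimate theta by fitting the per-capita growth rate against count.
growth = np.diff(np.log(N_obs))
model = lambda n, r0_, K_, th_: r0_ * (1.0 - (n / K_) ** th_)
p, _ = curve_fit(model, N_obs[:-1], growth, p0=[0.4, 900.0, 1.0],
                 bounds=([0.01, 100.0, 0.1], [2.0, 5000.0, 10.0]))
print(f"estimated theta = {p[2]:.2f} (true value {theta})")
```

Increasing sd_obs in this sketch reproduces the qualitative effect described above: estimates of theta are dragged towards zero as measurement error grows.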

Relevance:

30.00%

Publisher:

Abstract:

We describe and evaluate a new estimator of the effective population size (N-e), a critical parameter in evolutionary and conservation biology. This new "SummStat" N-e estimator is based upon the use of summary statistics in an approximate Bayesian computation framework to infer N-e. Simulations of a Wright-Fisher population with known N-e show that the SummStat estimator is useful across a realistic range of individuals and loci sampled, generations between samples, and N-e values. We also address the paucity of information about the relative performance of N-e estimators by comparing the SummStat estimator to two recently developed likelihood-based estimators and a traditional moment-based estimator. The SummStat estimator is the least biased of the four estimators compared: in 32 of the 36 parameter combinations investigated, using initial allele frequencies drawn from a Dirichlet distribution, it has the lowest bias. The relative mean square error (RMSE) of the SummStat estimator was generally intermediate to the others. All of the estimators had RMSE > 1 when small samples (n = 20, five loci) were collected a generation apart. In contrast, when samples were separated by three or more generations and N-e ≤ 50, the SummStat and likelihood-based estimators all had greatly reduced RMSE. Under the conditions simulated, SummStat confidence intervals were more conservative than those of the likelihood-based estimators and more likely to include the true N-e. The greatest strength of the SummStat estimator is its flexible structure: this flexibility allows it to incorporate any potentially informative summary statistic from population genetic data.
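
The core of an ABC estimator of N-e can be sketched compactly. The toy below (not the authors' implementation) uses rejection ABC with a single summary statistic — the standardised variance of allele-frequency change between two samples — to infer N-e from simulated Wright-Fisher data; loci counts, sample sizes and the prior are all illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)

def drift_and_sample(p0, ne, gens, n2):
    """Wright-Fisher drift for `gens` generations, then binomial sampling."""
    p = p0.copy()
    for _ in range(gens):
        p = rng.binomial(2 * ne, p) / (2.0 * ne)
    return rng.binomial(n2, p) / n2

def summary(p0, p1):
    """Standardised variance of allele-frequency change (an Fc-like statistic)."""
    return np.mean((p1 - p0) ** 2 / (p0 * (1.0 - p0)))

# "Observed" data: 20 loci, true Ne = 50, samples five generations apart.
n_loci, gens, n2, ne_true = 20, 5, 100, 50
p0 = rng.uniform(0.2, 0.8, n_loci)
s_obs = summary(p0, drift_and_sample(p0, ne_true, gens, n2))

# Rejection ABC: draw Ne from a flat prior and keep the draws whose
# simulated summary statistic lies closest to the observed one.
prior = rng.integers(10, 500, 10000)
s_sim = np.array([summary(p0, drift_and_sample(p0, ne, gens, n2))
                  for ne in prior])
accepted = prior[np.argsort(np.abs(s_sim - s_obs))[:100]]
print(f"posterior median Ne = {np.median(accepted):.0f} (true value {ne_true})")
```

The flexibility noted in the abstract corresponds to swapping or extending `summary`: any informative statistic of the genetic data can be added without changing the inference machinery.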

Relevance:

30.00%

Publisher:

Abstract:

The convergence speed of the standard least mean square (LMS) adaptive array may be degraded in mobile communication environments. Various conventional variable step size LMS algorithms have been proposed to enhance the convergence speed while maintaining a low steady-state error. In this paper, a new variable step size LMS algorithm, based on the concept of the accumulated instantaneous error, is proposed. In the proposed algorithm, the accumulated instantaneous error is used to vary the step size parameter of the standard LMS algorithm. Simulation results show that the proposed algorithm is simpler and yields better performance than conventional variable step size LMS algorithms.
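
The abstract does not give the exact update rule, so the following is only a plausible sketch of the idea: a leaky accumulator of the instantaneous squared error drives the step size, so large recent errors produce a large step (fast convergence) and small errors a small step (low steady-state misadjustment). The accumulator form and all constants here are assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)

def vss_lms(x, d, order=8, mu_min=1e-4, mu_max=0.05, rho=0.9, gamma=0.1):
    """Variable step size LMS: step size driven by a leaky accumulation of
    the instantaneous squared error (illustrative rule, not the paper's)."""
    w = np.zeros(order)
    acc = 0.0
    err = []
    for n in range(order - 1, len(x)):
        u = x[n - order + 1:n + 1][::-1]           # tap-input vector
        e = d[n] - w @ u                           # instantaneous error
        acc = rho * acc + (1.0 - rho) * e * e      # accumulated error energy
        mu = np.clip(gamma * acc, mu_min, mu_max)  # large error -> large step
        w += mu * e * u
        err.append(e)
    return w, np.array(err)

# System identification of an unknown FIR response from noisy observations.
h = np.array([0.7, -0.4, 0.2, 0.1, 0.0, 0.0, 0.0, 0.0])
x = rng.standard_normal(5000)
d = np.convolve(x, h)[:len(x)] + 0.01 * rng.standard_normal(len(x))
w, err = vss_lms(x, d)
print("steady-state MSE:", np.mean(err[-500:] ** 2))
```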

Relevance:

30.00%

Publisher:

Abstract:

In this paper, we generalise a previously described model of the error-prone polymerase chain reaction (PCR) to conditions of arbitrarily variable amplification efficiency and initial population size. Generalisation of the model to these conditions improves the correspondence to observed and expected behaviours of PCR, and restricts the extent to which the model may explore sequence space for a prescribed set of parameters. Error-prone PCR in realistic reaction conditions is predicted to be less effective at generating grossly divergent sequences than the original model. The estimate of the mutation rate per cycle obtained by sampling sequences from an in vitro PCR experiment is correspondingly affected by the choice of model and parameters. (c) 2005 Elsevier Ltd. All rights reserved.
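
A minimal stochastic sketch of such a model (illustrative, not the authors' formulation): in each cycle every template is duplicated with a cycle-dependent efficiency, and each new copy acquires a Poisson number of point mutations. Only the mutation count per molecule is tracked, not the sequences; the efficiency-decay curve and rates are assumed.

```python
import numpy as np

rng = np.random.default_rng(4)

def error_prone_pcr(n0=200, cycles=20, mut_per_base=0.003, seq_len=500):
    """Track the number of mutations carried by each molecule through an
    error-prone PCR with cycle-dependent amplification efficiency."""
    muts = np.zeros(n0, dtype=int)               # mutations per molecule
    for k in range(cycles):
        eff = 0.9 * np.exp(-0.1 * k)             # efficiency decays as reagents deplete
        copied = rng.random(len(muts)) < eff     # templates duplicated this cycle
        new = muts[copied] + rng.poisson(mut_per_base * seq_len, copied.sum())
        muts = np.concatenate([muts, new])
    return muts

muts = error_prone_pcr()
print(f"final pool: {len(muts)} molecules, "
      f"mean mutations per molecule: {muts.mean():.2f}")
```

Lowering the efficiency curve (or the initial population size) in this sketch reduces the number of duplication events per lineage, and hence the spread of the mutation-count distribution — the mechanism by which realistic conditions limit the exploration of sequence space.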

Relevance:

30.00%

Publisher:

Abstract:

For a targeted observations case, the dependence of the size of the forecast impact on the targeted dropsonde observation error in the data assimilation is assessed. The targeted observations were made in the lee of Greenland; the dependence of the impact on the proximity of the observations to the Greenland coast is also investigated. Experiments were conducted using the Met Office Unified Model (MetUM), over a limited-area domain at 24-km grid spacing, with a four-dimensional variational data assimilation (4D-Var) scheme. Reducing the operational dropsonde observation errors by one-half increases the maximum forecast improvement from 5% to 7%–10%, measured in terms of total energy. However, the largest impact is seen by replacing two dropsondes on the Greenland coast with two farther from the steep orography; this increases the maximum forecast improvement from 5% to 18% for an 18-h forecast (using operational observation errors). Forecast degradation caused by two dropsonde observations on the Greenland coast is shown to arise from spreading of data by the background errors up the steep slope of Greenland. Removing boundary layer data from these dropsondes reduces the forecast degradation, but it is only a partial solution to this problem. Although only from one case study, these results suggest that observations positioned within a correlation length scale of steep orography may degrade the forecast through the anomalous upslope spreading of analysis increments along terrain-following model levels.

Relevance:

30.00%

Publisher:

Abstract:

Adaptive least mean square (LMS) filters with or without training sequences, known as training-based and blind detectors respectively, have been formulated to counter interference in CDMA systems. The convergence characteristics of these two LMS detectors are analyzed and compared in this paper. We show that the blind detector is superior to the training-based detector with respect to convergence rate. On the other hand, the training-based detector performs better in the steady state, giving a lower excess mean-square error (MSE) for a given adaptation step size. A novel decision-directed LMS detector is proposed which achieves both the low excess MSE of the training-based detector and the superior convergence performance of the blind detector.
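
The decision-directed idea can be shown with a toy linear detector on BPSK symbols (a deliberate simplification of the CDMA setting in the paper, with an assumed channel, step size and training length): the filter first adapts against known training symbols, then substitutes its own hard decisions sign(y) for the missing reference.

```python
import numpy as np

rng = np.random.default_rng(5)

# BPSK symbols through a short interference channel, plus noise (values assumed).
n, taps = 4000, 5
s = rng.choice([-1.0, 1.0], n)
h = np.array([1.0, 0.3, -0.1])
r = np.convolve(s, h)[:n] + 0.05 * rng.standard_normal(n)

w = np.zeros(taps)
mu, n_train = 0.01, 500                          # step size and training length
errors, decided = 0, 0
for k in range(taps - 1, n):
    u = r[k - taps + 1:k + 1][::-1]              # received-signal vector
    y = w @ u                                    # detector output
    # Training phase uses known symbols; afterwards the detector is
    # decision-directed: its own hard decision replaces the training symbol.
    ref = s[k] if k < n_train else np.sign(y)
    w += mu * (ref - y) * u
    if k >= n_train:
        decided += 1
        errors += np.sign(y) != s[k]
print(f"decision-directed symbol error rate: {errors / decided:.4f}")
```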

Relevance:

30.00%

Publisher:

Abstract:

This paper presents practical approaches to the problem of sample size re-estimation in the case of clinical trials with survival data when proportional hazards can be assumed. When data are readily available at the time of the review on a full range of survival experiences across the recruited patients, it is shown that, as expected, performing a blinded re-estimation procedure is straightforward and can help to maintain the trial's pre-specified error rates. Two alternative methods for dealing with the situation where limited survival experiences are available at the time of the sample size review are then presented and compared; in this instance, extrapolation is required in order to undertake the sample size re-estimation. Worked examples, together with results from a simulation study, are described. It is concluded that, as in the standard case, use of either extrapolation approach successfully protects the trial error rates. Copyright © 2012 John Wiley & Sons, Ltd.
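
For context, the event-driven calculation that such a re-estimation updates can be written down directly. The sketch below uses Schoenfeld's approximation for the required number of events under proportional hazards with 1:1 allocation, then converts events into patients using a blinded (pooled, treatment-blind) event probability; the hazard ratio and probabilities are illustrative, and this is not the paper's procedure for the limited-data case.

```python
import math
from scipy.stats import norm

def required_events(hr, alpha=0.05, power=0.9):
    """Schoenfeld's total-events formula: two-sided level alpha, 1:1 allocation."""
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    return 4.0 * z ** 2 / math.log(hr) ** 2

def reestimated_n(hr, pooled_event_prob):
    """Blinded re-estimation: convert required events into patients using the
    pooled (treatment-blind) probability of observing an event by study end."""
    return math.ceil(required_events(hr) / pooled_event_prob)

# Design assumed 60% of patients would have an event; the blinded review
# suggests only 45%, so the sample size is inflated accordingly.
print("events needed:", round(required_events(hr=0.75)))
print("n at design  :", reestimated_n(0.75, 0.60))
print("n at review  :", reestimated_n(0.75, 0.45))
```

Because only the pooled event probability is re-estimated, treatment allocation stays masked, which is what preserves the pre-specified error rates in the straightforward case described above.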

Relevance:

30.00%

Publisher:

Abstract:

The ground-based Atmospheric Radiation Measurement Program (ARM) and NASA Aerosol Robotic Network (AERONET) routinely monitor clouds using zenith radiances at visible and near-infrared wavelengths. Using the transmittance calculated from such measurements, we have developed a new retrieval method for cloud effective droplet size and conducted extensive tests for non-precipitating liquid water clouds. The underlying principle is to combine a liquid-water-absorbing wavelength (i.e., 1640 nm) with a non-water-absorbing wavelength for acquiring information on cloud droplet size and optical depth. For simulated stratocumulus clouds with liquid water path less than 300 g m−2 and horizontal resolution of 201 m, the retrieval method underestimates the mean effective radius by 0.8 μm, with a root-mean-squared error of 1.7 μm and a relative deviation of 13%. For actual observations with a liquid water path less than 450 g m−2 at the ARM Oklahoma site during 2007–2008, our 1.5-min-averaged retrievals are generally larger by around 1 μm than those from combined ground-based cloud radar and microwave radiometer at a 5-min temporal resolution. We also compared our retrievals to those from combined shortwave flux and microwave observations for relatively homogeneous clouds, showing that the bias between these two retrieval sets is negligible, but the error of 2.6 μm and the relative deviation of 22% are larger than those found in our simulation case. Finally, the transmittance-based cloud effective droplet radii agree to better than 11% with satellite observations and have a negative bias of 1 μm. Overall, the retrieval method provides reasonable cloud effective radius estimates, which can enhance the cloud products of both ARM and AERONET.
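
Conceptually the retrieval is a two-wavelength table inversion: transmittance at the non-absorbing wavelength constrains optical depth, and the extra dimming at the water-absorbing wavelength constrains droplet size. The sketch below uses a deliberately toy forward model in place of real radiative-transfer calculations; all coefficients are invented for illustration.

```python
import numpy as np

# Toy forward model: transmittance at a non-absorbing and a water-absorbing
# (1640 nm) wavelength as a function of cloud optical depth tau and effective
# radius r_eff (um). A real retrieval uses radiative-transfer calculations here.
def forward(tau, reff):
    t_nonabs = np.exp(-tau / 10.0)                       # depends mainly on tau
    t_abs = np.exp(-tau / 10.0) * np.exp(-0.02 * reff)   # also dimmed by absorption
    return t_nonabs, t_abs

tau_grid = np.linspace(5.0, 100.0, 96)
reff_grid = np.linspace(2.0, 30.0, 57)
TAU, REFF = np.meshgrid(tau_grid, reff_grid, indexing="ij")
T1, T2 = forward(TAU, REFF)                              # precomputed lookup table

def retrieve(t1_obs, t2_obs):
    """Nearest lookup-table entry to the observed transmittance pair."""
    cost = (T1 - t1_obs) ** 2 + (T2 - t2_obs) ** 2
    i, j = np.unravel_index(np.argmin(cost), cost.shape)
    return tau_grid[i], reff_grid[j]

t1, t2 = forward(30.0, 10.0)                             # synthetic "measurement"
tau_hat, reff_hat = retrieve(t1, t2)
print(f"retrieved tau = {tau_hat:.1f}, r_eff = {reff_hat:.1f} um")
```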

Relevance:

30.00%

Publisher:

Abstract:

We investigate the error dynamics for cycled data assimilation systems, in which the inverse problem of state determination is solved at times t_k, k = 1, 2, 3, ..., with a first guess given by the state propagated via a dynamical system model from time t_{k-1} to time t_k. In particular, for nonlinear dynamical systems that are Lipschitz continuous with respect to their initial states, we provide deterministic estimates for the development of the error ||e_k|| := ||x_k^(a) − x_k^(t)|| between the estimated state x^(a) and the true state x^(t) over time. Clearly, an observation error of size δ > 0 leads to an estimation error in every assimilation step. These errors can accumulate if they are not (a) controlled in the reconstruction and (b) damped by the dynamical system under consideration. A data assimilation method is called stable if the error in the estimate is bounded in time by some constant C. The key task of this work is to provide estimates for the error ||e_k||, depending on the size δ of the observation error, the reconstruction operator R_α, the observation operator H, and the Lipschitz constants K^(1) and K^(2) controlling the damping behaviour of the dynamics on the lower and higher modes. We show that systems can be stabilized by choosing α sufficiently small, but the bound C will then depend on the data error δ in the form c ||R_α|| δ for some constant c. Since ||R_α|| → ∞ as α → 0, this constant might be large. Numerical examples of this behaviour in the nonlinear case are provided using a (low-dimensional) Lorenz '63 system.
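
A minimal numerical illustration of this setting (a sketch with assumed operators and noise levels, not the paper's experiments): cycled assimilation on the Lorenz '63 system with a Tikhonov-regularised reconstruction operator R_α = (H^T H + αI)^(-1) H^T, tracking the error ||e_k|| over the cycles.

```python
import numpy as np

rng = np.random.default_rng(6)

def l63(x, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    return np.array([sigma * (x[1] - x[0]),
                     x[0] * (rho - x[2]) - x[1],
                     x[0] * x[1] - beta * x[2]])

def model(x, steps=25, dt=0.01):
    """Propagate the state between assimilation times (RK4)."""
    for _ in range(steps):
        k1 = l63(x); k2 = l63(x + 0.5 * dt * k1)
        k3 = l63(x + 0.5 * dt * k2); k4 = l63(x + dt * k3)
        x = x + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)
    return x

H = np.array([[1.0, 0.0, 0.0],                   # observe x and y, but not z
              [0.0, 1.0, 0.0]])
delta, alpha = 0.2, 0.1                          # observation error size, regularisation
R_alpha = np.linalg.solve(H.T @ H + alpha * np.eye(3), H.T)

x_true = np.array([1.0, 1.0, 1.0])
x_a = x_true + rng.normal(0.0, 1.0, 3)           # poor initial state estimate
for k in range(1, 201):
    x_true = model(x_true)                       # truth evolves
    x_b = model(x_a)                             # first guess from previous analysis
    y = H @ x_true + rng.normal(0.0, delta, 2)   # noisy observation at t_k
    x_a = x_b + R_alpha @ (y - H @ x_b)          # reconstruction / analysis step
    if k % 50 == 0:
        print(f"k = {k:3d}   ||e_k|| = {np.linalg.norm(x_a - x_true):.3f}")
```

Shrinking alpha strengthens the pull towards the (noisy) data: the cycle remains stable, but the achievable error floor scales with ||R_α||δ, the trade-off described above.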

Relevance:

30.00%

Publisher:

Abstract:

The optimal utilisation of hyper-spectral satellite observations in numerical weather prediction is often inhibited by incorrectly assuming independent interchannel observation errors. However, in order to represent these observation-error covariance structures, an accurate knowledge of the true variances and correlations is needed. This structure is likely to vary with observation type and assimilation system. The work in this article presents the initial results for the estimation of IASI interchannel observation-error correlations when the data are processed in the Met Office one-dimensional (1D-Var) and four-dimensional (4D-Var) variational assimilation systems. The method used to calculate the observation errors is a post-analysis diagnostic which utilises the background and analysis departures from the two systems. The results show significant differences in the source and structure of the observation errors when processed in the two different assimilation systems, but also highlight some common features. When the observations are processed in 1D-Var, the diagnosed error variances are approximately half the size of the error variances used in the current operational system and are very close in size to the instrument noise, suggesting that this is the main source of error. The errors contain no consistent correlations, with the exception of a handful of spectrally close channels. When the observations are processed in 4D-Var, we again find that the observation errors are being overestimated operationally, but the overestimation is significantly larger for many channels. In contrast to 1D-Var, the diagnosed error variances are often larger than the instrument noise in 4D-Var. It is postulated that horizontal errors of representation, not seen in 1D-Var, are a significant contributor to the overall error here. Finally, observation errors diagnosed from 4D-Var are found to contain strong, consistent correlation structures for channels sensitive to water vapour and surface properties.
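
The departure-based diagnostic referred to here is, in this literature, typically of the Desroziers type, which estimates the observation-error covariance as the sample mean of the product of analysis and background departures, R ≈ E[d_a d_b^T]. The sketch below demonstrates the estimator on synthetic, statistically consistent departures for five channels; the covariance model and all numbers are invented.

```python
import numpy as np

rng = np.random.default_rng(7)

# Synthetic setup in observation space: correlated "true" observation error
# R_true over 5 channels, unit background error covariance B = I.
n_obs, n_chan = 50000, 5
idx = np.arange(n_chan)
R_true = 0.25 * 0.6 ** np.abs(np.subtract.outer(idx, idx))
B = np.eye(n_chan)
eps_o = rng.standard_normal((n_obs, n_chan)) @ np.linalg.cholesky(R_true).T
eps_b = rng.standard_normal((n_obs, n_chan))
d_b = eps_o - eps_b                              # background departures y - H(x_b)

# A statistically consistent analysis gives d_a = R (B + R)^{-1} d_b,
# so the analysis departures can be generated directly (rows = cases).
d_a = d_b @ np.linalg.solve(B + R_true, R_true)

# Desroziers-style diagnostic: sample mean of d_a d_b^T estimates R.
R_hat = d_a.T @ d_b / n_obs
corr = R_hat / np.sqrt(np.outer(np.diag(R_hat), np.diag(R_hat)))
print(np.round(corr, 2))                         # recovers the interchannel correlations
```

Applied to real departures, the same estimator yields both the diagnosed error variances and the interchannel correlation structures discussed in the abstract.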