906 results for Classical measurement error model


Relevance:

30.00%

Publisher:

Abstract:

The canopy interception capacity is a small but key part of the surface hydrology, which affects the amount of water intercepted by vegetation and therefore the partitioning of evaporation and transpiration. However, little research with climate models has been done to understand the effects of a range of possible canopy interception capacity parameter values, in part because of the assumption that it does not significantly affect climate. Near-global evapotranspiration products now make evaluation of canopy interception capacity parameterisations possible. We use a range of canopy water interception capacity values from the literature to investigate the effect on climate within the climate model HadCM3. We find that temperature is affected by up to -0.64 K in the global mean and by up to -1.9 K regionally. These temperature impacts are predominantly due to changes in the evaporative fraction and top-of-atmosphere albedo. In the tropics, the variations in evapotranspiration affect precipitation, significantly enhancing rainfall. Comparing the model output to measurements, we find that the default canopy interception capacity parameterisation overestimates canopy interception loss (i.e. canopy evaporation) and underestimates transpiration. Overall, decreasing canopy interception capacity improves the evapotranspiration partitioning in HadCM3, though the measurement literature more strongly supports an increase. The high sensitivity of climate to the parameterisation of canopy interception capacity is partly due to the high number of light-rain days in the climate model, which leads to overestimated interception. This work highlights the hitherto underestimated importance of canopy interception capacity in climate model hydroclimatology and the need to acknowledge the role of precipitation representation limitations in determining parameterisations.
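The mechanism at issue can be illustrated with a minimal single-bucket interception scheme (a generic sketch under simple assumptions, not HadCM3's actual parameterisation; all names are illustrative):

```python
def canopy_interception_step(store, capacity, rainfall, evap_demand):
    """One step of a single-bucket canopy interception scheme: rainfall fills
    the canopy store up to its interception capacity, the excess falls through,
    and stored water evaporates before any transpiration demand is met."""
    intercepted = min(capacity - store, rainfall)   # canopy catches rain until full
    store += intercepted
    throughfall = rainfall - intercepted            # excess reaches the ground
    canopy_evap = min(store, evap_demand)           # interception loss
    store -= canopy_evap
    return store, throughfall, canopy_evap
```

Lowering `capacity` in this toy model reduces the interception loss `canopy_evap` and increases throughfall, which is the direction of the partitioning change discussed in the abstract.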

Relevance:

30.00%

Publisher:

Abstract:

A new class of parameter estimation algorithms is introduced for Gaussian process regression (GPR) models. It is shown that integrating the GPR model with either of two probability distance measures, (i) the integrated square error and (ii) the Kullback–Leibler (K–L) divergence, is analytically tractable. An efficient coordinate descent algorithm is proposed that iteratively estimates the kernel width using golden section search, with a fast gradient descent algorithm as an inner loop to estimate the noise variance. Numerical examples are included to demonstrate the effectiveness of the new identification approaches.
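The outer step of such a coordinate descent, a one-dimensional search over the kernel width, can be sketched with a textbook golden section search (a toy objective stands in for the paper's GPR fitting criterion; all names are ours):

```python
import math

def golden_section_search(f, a, b, tol=1e-6):
    """Minimise a unimodal 1-D function f on [a, b] by golden section search."""
    invphi = (math.sqrt(5) - 1) / 2              # 1/phi ~ 0.618
    c, d = b - invphi * (b - a), a + invphi * (b - a)
    while abs(b - a) > tol:
        if f(c) < f(d):                          # minimum lies in [a, d]
            b, d = d, c
            c = b - invphi * (b - a)
        else:                                    # minimum lies in [c, b]
            a, c = c, d
            d = a + invphi * (b - a)
    return (a + b) / 2

# Toy stand-in for the GPR fit criterion, with its minimum at width = 2.0.
width = golden_section_search(lambda w: (w - 2.0) ** 2, 0.1, 10.0)
```

In the paper's scheme this outer search would alternate with the inner gradient descent on the noise variance until both estimates stabilise.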

Relevance:

30.00%

Publisher:

Abstract:

We develop an on-line Gaussian mixture density estimator (OGMDE) in the complex-valued domain to facilitate an adaptive minimum bit-error-rate (MBER) beamforming receiver for multiple-antenna-based space-division multiple access systems. Specifically, the novel OGMDE adaptively models the probability density function of the beamformer's output by tracking the incoming data sample by sample. With the aid of the proposed OGMDE, our adaptive beamformer is capable of updating the beamformer's weights sample by sample to directly minimize the achievable bit error rate (BER). We show that this OGMDE-based MBER beamformer outperforms the existing on-line MBER beamformer, known as the least-BER beamformer, in terms of both convergence speed and achievable BER.

Relevance:

30.00%

Publisher:

Abstract:

We systematically compare the performance of ETKF-4DVAR, 4DVAR-BEN and 4DENVAR with respect to two traditional methods (4DVAR and ETKF) and an ensemble transform Kalman smoother (ETKS) on the Lorenz 1963 model. We specifically investigate this performance with increasing nonlinearity, using a quasi-static variational assimilation algorithm as a comparison. Using the analysis root mean square error (RMSE) as a metric, the methods are compared with respect to (1) assimilation window length and observation interval size and (2) ensemble size, to investigate the influence of hybrid background error covariance matrices and nonlinearity on their performance. For short assimilation windows with close to linear dynamics, all hybrid methods show an improvement in RMSE compared to the traditional methods. For long assimilation windows in which nonlinear dynamics are substantial, the variational framework can have difficulties finding the global minimum of the cost function, so we explore a quasi-static variational assimilation (QSVA) framework. Of the hybrid methods, under certain parameters those which do not use a climatological background error covariance do not need QSVA to perform accurately. Generally, the results show that the ETKS and the hybrid methods that do not use a climatological background error covariance matrix with QSVA outperform all other methods, owing to the full flow dependency of the background error covariance matrix, which also accommodates the most nonlinearity.
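The comparison metric is the standard analysis root mean square error; a minimal version for scalar trajectories (names ours) is:

```python
import math

def analysis_rmse(analysis, truth):
    """Root mean square error of an analysis against the true trajectory."""
    if len(analysis) != len(truth):
        raise ValueError("trajectories must have equal length")
    return math.sqrt(sum((a - t) ** 2 for a, t in zip(analysis, truth)) / len(truth))
```

In a Lorenz 1963 experiment the same formula would be applied component-wise over the three state variables and averaged over assimilation cycles.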

Relevance:

30.00%

Publisher:

Abstract:

Representation error arises from the inability of the forecast model to accurately simulate the climatology of the truth. We present a rigorous framework for understanding this kind of error of representation. This framework shows that the lack of an inverse in the relationship between the true climatology (true attractor) and the forecast climatology (forecast attractor) leads to the error of representation. A new gain matrix for the data assimilation problem is derived that illustrates the proper approaches one may take to perform Bayesian data assimilation when the observations are of states on one attractor but the forecast model resides on another. This new data assimilation algorithm is the optimal scheme for the situation in which the distributions on the true and forecast attractors are separately Gaussian and there exists a linear map between them. The results of this theory are illustrated in a simple Gaussian multivariate model.
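The flavour of such a gain can be shown in a scalar reduction, where a linear map m relates the forecast-attractor state to the observed true-attractor state (an illustrative sketch, not the paper's full matrix derivation; all names are ours):

```python
def attractor_aware_update(x_f, P_f, y, R, m):
    """One Bayesian update of a forecast-attractor state x_f (variance P_f)
    given an observation y (error variance R) of the mapped state m * x_f."""
    K = P_f * m / (m * m * P_f + R)   # gain maps the innovation back to the forecast attractor
    x_a = x_f + K * (y - m * x_f)     # analysis state
    P_a = (1 - K * m) * P_f           # analysis variance
    return x_a, P_a
```

With m = 1 this collapses to the ordinary scalar Kalman update; m != 1 encodes the linear map between the two attractors.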

Relevance:

30.00%

Publisher:

Abstract:

The disadvantage of the majority of data assimilation schemes is the assumption that the conditional probability density function of the state of the system given the observations [the posterior probability density function (PDF)] is distributed either locally or globally as a Gaussian. The advantage, however, is that through various mechanisms they ensure initial conditions that are predominantly in linear balance, so that spurious gravity wave generation is suppressed. The equivalent-weights particle filter is a data assimilation scheme that allows for a representation of a potentially multimodal posterior PDF. It does this via proposal densities that add extra terms to the model equations, which means the advantage of the traditional data assimilation schemes, in generating predominantly balanced initial conditions, is no longer guaranteed. This paper looks in detail at the impact the equivalent-weights particle filter has on dynamical balance and gravity wave generation in a primitive equation model. The primary conclusions are that (i) provided the model error covariance matrix imposes geostrophic balance, each additional term required by the equivalent-weights particle filter is also geostrophically balanced; (ii) the relaxation term required to keep the particles in the locality of the observations has little effect on gravity waves and actually induces a reduction in gravity wave energy if sufficiently large; and (iii) the equivalent-weights term, which gives the particles equivalent significance in the posterior PDF, produces a change in gravity wave energy comparable to the stochastic model error. Thus, the scheme does not produce significant spurious gravity wave energy and so has potential for application in real high-dimensional geophysical applications.

Relevance:

30.00%

Publisher:

Abstract:

Recent work has shown that both the amplitude of upper-level Rossby waves and the tropopause sharpness decrease with forecast lead time for several days in some operational weather forecast systems. In this contribution, the evolution of error growth in a case study of this forecast error type is diagnosed through analysis of operational forecasts and hindcast simulations. Potential vorticity (PV) on the 320-K isentropic surface is used to diagnose Rossby waves. The Rossby-wave forecast error in the operational ECMWF high-resolution forecast is shown to be associated with errors in the forecast of a warm conveyor belt (WCB), through trajectory analysis and an error metric for WCB outflows. The WCB forecast error is characterised by an overestimation of WCB amplitude, a location of the WCB outflow regions that is too far to the southeast, and a resulting underestimation of the magnitude of the negative PV anomaly in the outflow. Essentially the same forecast error development also occurred in all members of the ECMWF Ensemble Prediction System and the Met Office MOGREPS-15 ensemble, suggesting that in this case model error made an important contribution to the development of forecast error in addition to initial condition error. Exploiting this forecast error robustness, a comparison was performed between the realised flow evolution, proxied by a sequence of short-range simulations, and a contemporaneous forecast. Both the proxy to the realised flow and the contemporaneous forecast were produced with the Met Office Unified Model, enhanced with tracers of diabatic processes modifying potential temperature and PV. Clear differences were found in the way potential temperature and PV are modified in the WCB between proxy and forecast. These results demonstrate that differences in potential temperature and PV modification in the WCB can be responsible for forecast errors in Rossby waves.

Relevance:

30.00%

Publisher:

Abstract:

A basic data requirement of a river flood inundation model is a Digital Terrain Model (DTM) of the reach being studied. The scale at which modeling is required determines the accuracy required of the DTM. For modeling floods in urban areas, a high resolution DTM such as that produced by airborne LiDAR (Light Detection And Ranging) is most useful, and large parts of many developed countries have now been mapped using LiDAR. In more remote areas, it is possible to model flooding on a larger scale using a lower resolution DTM, and in the near future the DTM of choice is likely to be that derived from the TanDEM-X Digital Elevation Model (DEM). A variable-resolution global DTM obtained by combining existing high and low resolution data sets would be useful for modeling flood water dynamics globally: at high resolution wherever possible, and at lower resolution over larger rivers in remote areas. A further important data resource used in flood modeling is the flood extent, commonly derived from Synthetic Aperture Radar (SAR) images. Flood extents become more useful if they are intersected with the DTM, so that water level observations (WLOs) at the flood boundary can be estimated at various points along the river reach. To illustrate the utility of such a global DTM, two examples of recent research involving WLOs at opposite ends of the spatial scale are discussed. The first requires high resolution spatial data, and involves the assimilation of WLOs from a real sequence of high resolution SAR images into a flood model to update the model state with observations over time, and to estimate river discharge and model parameters, including river bathymetry and friction. The results indicate the feasibility of such an Earth Observation-based flood forecasting system. The second example is at a larger scale, and uses SAR-derived WLOs to improve the lower-resolution TanDEM-X DEM in the area covered by the flood extents. The resulting reduction in random height error is significant.

Relevance:

30.00%

Publisher:

Abstract:

Georeferencing is one of the major tasks of satellite-borne remote sensing. Compared to traditional indirect methods, direct georeferencing through a Global Positioning System/inertial navigation system requires fewer and simpler steps to obtain exterior orientation parameters of remotely sensed images. However, the pixel shift caused by geographic positioning error, which generally derives from boresight angle error as well as terrain topography variation, can have a great impact on the precision of georeferencing. The distribution of pixel shifts introduced by positioning error in a satellite linear push-broom image is quantitatively analyzed. We use variations of the object space coordinates to simulate different kinds of positioning errors and terrain topography. A total differential method is then applied to establish a rigorous sensor model and mathematically obtain the relationship between pixel shift and positioning error. Finally, two simulation experiments are conducted using the imaging parameters of the Chang’E-1 satellite to evaluate two different kinds of positioning errors. The experimental results show that, with the experimental parameters, the maximum pixel shift can reach 1.74 pixels. The proposed approach can be extended to a generic application for imaging error modeling in remote sensing with terrain variation.
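A back-of-envelope version of the geometry (not the paper's rigorous total-differential sensor model) relates planimetric and terrain-height errors to a pixel shift through the ground sample distance; all names here are illustrative:

```python
import math

def pixel_shift(horiz_err_m, height_err_m, off_nadir_deg, gsd_m):
    """Rough pixel shift for a push-broom image with ground sample distance
    gsd_m: a horizontal positioning error maps directly into pixels, while a
    terrain-height error displaces pixels in proportion to tan(off-nadir angle)."""
    planimetric = horiz_err_m / gsd_m
    relief = height_err_m * math.tan(math.radians(off_nadir_deg)) / gsd_m
    return planimetric + relief
```

The rigorous model in the paper differentiates the full collinearity equations instead, so the two shift components do not simply add in general.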

Relevance:

30.00%

Publisher:

Abstract:

The topography of many floodplains in the developed world has now been surveyed with high resolution sensors such as airborne LiDAR (Light Detection and Ranging), giving accurate Digital Elevation Models (DEMs) that facilitate accurate flood inundation modelling. This is not always the case for remote rivers in developing countries. However, the accuracy of DEMs produced for modelling studies on such rivers should be enhanced in the near future by the high resolution TanDEM-X WorldDEM. In a parallel development, increasing use is now being made of flood extents derived from high resolution Synthetic Aperture Radar (SAR) images for calibrating, validating and assimilating observations into flood inundation models in order to improve these models. This paper discusses an additional use of SAR flood extents, namely to improve the accuracy of the TanDEM-X DEM in the floodplain covered by the flood extents, thereby permanently improving this DEM for future flood modelling and other studies. The method is based on the fact that for larger rivers the water elevation generally changes only slowly along a reach, so that the boundary of the flood extent (the waterline) can be regarded locally as a quasi-contour. As a result, heights of adjacent pixels along a small section of waterline can be regarded as samples with a common population mean. The height of the central pixel in the section can be replaced with the average of these heights, leading to a more accurate estimate. While this will result in a reduction in the height errors along a waterline, the waterline is a linear feature in a two-dimensional space.
However, improvements to the DEM heights between adjacent pairs of waterlines can also be made, because DEM heights enclosed by the higher waterline of a pair must in general be no higher than the corrected heights along the higher waterline, whereas DEM heights not enclosed by the lower waterline must in general be no lower than the corrected heights along the lower waterline. In addition, DEM heights between the higher and lower waterlines can also be assigned smaller errors because of the reduced errors on the corrected waterline heights. The method was tested on a section of the TanDEM-X Intermediate DEM (IDEM) covering an 11 km reach of the Warwickshire Avon, England. Flood extents from four COSMO-SkyMed images were available at various stages of a flood in November 2012, and a LiDAR DEM was available for validation. In the area covered by the flood extents, the original IDEM heights had a mean difference from the corresponding LiDAR heights of 0.5 m with a standard deviation of 2.0 m, while the corrected heights had a mean difference of 0.3 m with a standard deviation of 1.2 m. These figures show that significant reductions in IDEM height bias and error can be made using the method, the corrected error being only 60% of the original. Even if only a single SAR image obtained near the peak of the flood was used, the corrected error was only 66% of the original. The method should also be capable of improving the final TanDEM-X DEM and other DEMs, and may also be of use with data from the SWOT (Surface Water and Ocean Topography) satellite.
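The core correction step, replacing each waterline pixel's height with the mean over a short waterline section, can be sketched in one dimension (the real method works on pixel chains extracted from a 2-D DEM; names ours):

```python
def correct_waterline_heights(heights, half_window=2):
    """Replace each waterline pixel height with the mean over a moving window,
    treating nearby waterline heights as samples with a common population mean."""
    n = len(heights)
    corrected = []
    for i in range(n):
        lo, hi = max(0, i - half_window), min(n, i + half_window + 1)
        window = heights[lo:hi]          # window shrinks at the ends of the section
        corrected.append(sum(window) / len(window))
    return corrected
```

Averaging k samples that share a population mean reduces the random height error roughly by a factor of sqrt(k), which is the statistical basis of the error reductions the abstract reports.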

Relevance:

30.00%

Publisher:

Abstract:

In this work, the Cloud Feedback Model Intercomparison Project (CFMIP) Observation Simulator Package (COSP) is expanded to include scattering and emission effects of clouds and precipitation at passive microwave frequencies. This represents an advance over the official version of COSP (version 1.4.0), in which only clear-sky brightness temperatures are simulated. To highlight the potential utility of this new microwave simulator, COSP results generated using the atmosphere of version 3 of the climate model EC-Earth as input are compared with Microwave Humidity Sounder (MHS) channel (190.311 GHz) observations. Specifically, simulated seasonal brightness temperatures (TB) are contrasted with MHS observations for the period December 2005 to November 2006 to identify possible biases in EC-Earth's cloud and atmosphere fields. EC-Earth's atmosphere closely reproduces the microwave signature of many of the major large-scale and regional-scale features of the atmosphere and surface. Moreover, more than 60% of the simulated TB are within 3 K of the NOAA-18 observations. However, COSP is unable to simulate sufficiently low TB in areas of frequent deep convection. Within the tropics, the model's atmosphere can yield an underestimation of TB by nearly 30 K for cloudy areas in the ITCZ. Possible reasons for this discrepancy include both an incorrect amount of cloud ice water in the model simulations and incorrect ice particle scattering assumptions used in the COSP microwave forward model. These multiple sources of error highlight the non-unique nature of the simulated satellite measurements, a problem exacerbated by the fact that EC-Earth lacks the detailed microphysical parameters necessary for accurate forward model calculations. Such issues limit the robustness of our evaluation and suggest a general note of caution when making COSP-satellite observation evaluations.

Relevance:

30.00%

Publisher:

Abstract:

A smoother introduced earlier by van Leeuwen and Evensen is applied to a problem in which real observations are used in an area with strongly nonlinear dynamics. The derivation is new, but it resembles an earlier derivation by van Leeuwen and Evensen. Again a Bayesian view is taken in which the prior probability density of the model and the probability density of the observations are combined to form a posterior density. The mean and the covariance of this density give the variance-minimizing model evolution and its errors. The assumption is made that the prior probability density is a Gaussian, leading to a linear update equation. Critical evaluation shows when the assumption is justified. This also sheds light on why Kalman filters, in which the same approximation is made, work for nonlinear models. By reference to the derivation, the impact of model and observational biases on the equations is discussed, and it is shown that Bayes's formulation can still be used. A practical advantage of the ensemble smoother is that no adjoint equations have to be integrated and that error estimates are easily obtained. The present application shows that for process studies a smoother will give superior results compared to a filter, not only owing to the smooth transitions at observation points, but also because the origin of features can be followed back in time. Also its preference over a strong-constraint method is highlighted. Furthermore, it is argued that the proposed smoother is more efficient than gradient descent methods or than the representer method when error estimates are taken into account.
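The Gaussian-prior assumption yields a linear update; for a scalar state observed directly, the shift of the ensemble mean can be sketched as follows (mean update only, with an illustrative scalar observation; the smoother itself also updates covariances and operates over a time window):

```python
def ensemble_mean_update(ensemble, y, obs_var):
    """Variance-minimising shift of an ensemble toward a scalar observation y,
    with the Gaussian prior estimated from the ensemble itself."""
    n = len(ensemble)
    mean = sum(ensemble) / n
    prior_var = sum((x - mean) ** 2 for x in ensemble) / (n - 1)
    gain = prior_var / (prior_var + obs_var)   # Kalman-like scalar gain
    shift = gain * (y - mean)
    return [x + shift for x in ensemble]       # every member shifted equally
```

The gain is the same quantity that appears in the Kalman filter, which is why the text notes that the linear update sheds light on when Kalman filters work for nonlinear models.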

Relevance:

30.00%

Publisher:

Abstract:

Initializing the ocean for decadal predictability studies is a challenge, as it requires reconstructing the little-observed subsurface trajectory of ocean variability. In this study we explore to what extent surface nudging using well-observed sea surface temperature (SST) can reconstruct the deeper ocean variations for the 1949–2005 period. An ensemble is made with a nudged version of the IPSLCM5A model and compared to ocean reanalyses and reconstructed datasets. The SST is restored to observations using a physically based relaxation coefficient, in contrast to earlier studies, which use a much larger value. The assessment is restricted to the regions where the ocean reanalyses agree, i.e. in the upper 500 m of the ocean, although this can be latitude and basin dependent. Significant reconstruction of the subsurface is achieved in specific regions, namely the subduction region of the subtropical Atlantic, below the thermocline in the equatorial Pacific and, in some cases, in the North Atlantic deep convection regions. Beyond the mean correlations, ocean integrals are used to explore the time evolution of the correlation over 20-year windows. Classical fixed-depth heat content diagnostics do not exhibit any significant reconstruction between the different existing observation-based references and therefore cannot be used to assess global average time-varying correlations in the nudged simulations. Using the physically based average temperature above an isotherm (14 °C) alleviates this issue in the tropics and subtropics and shows significant reconstruction of these quantities in the nudged simulations for several decades. This skill is attributed to the wind stress reconstruction in the tropics, as already demonstrated in a perfect model study using the same model. Thus, we also show here the robustness of this result in a historical and observational context.
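Surface nudging amounts to a relaxation term in the SST tendency; one explicit time step might look like this (a schematic scalar sketch, not the model's actual flux-based implementation; names ours):

```python
def nudge_sst(t_model, t_obs, dt, tau):
    """One explicit relaxation (nudging) step pulling model SST toward the
    observed SST with restoring timescale tau (relaxation coefficient 1/tau)."""
    return t_model + dt * (t_obs - t_model) / tau
```

A "physically based" relaxation coefficient corresponds to a longer tau (weaker pull) than the strong restoring used in the earlier studies the abstract contrasts with.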

Relevance:

30.00%

Publisher:

Abstract:

A comprehensive atmospheric boundary layer (ABL) data set was collected in eight field experiments (two during each season) over open water and sea ice in the Baltic Sea during 1998–2001 with the primary objective to validate the coupled atmospheric-ice-ocean-land surface model BALTIMOS (BALTEX Integrated Model System). Measurements were taken by aircraft, ships and surface stations and cover the mean and turbulent structure of the ABL including turbulent fluxes, radiation fluxes, and cloud conditions. Measurement examples of the spatial variability of the ABL over the ice edge zone and of the stable ABL over open water demonstrate the wide range of ABL conditions collected and the strength of the data set which can also be used to validate other regional models.

Relevance:

30.00%

Publisher:

Abstract:

Background Underweight and severe and morbid obesity are associated with highly elevated risks of adverse health outcomes. We estimated trends in mean body-mass index (BMI), which characterises its population distribution, and in the prevalences of a complete set of BMI categories for adults in all countries. Methods We analysed, with use of a consistent protocol, population-based studies that had measured height and weight in adults aged 18 years and older. We applied a Bayesian hierarchical model to these data to estimate trends from 1975 to 2014 in mean BMI and in the prevalences of BMI categories (<18·5 kg/m2 [underweight], 18·5 kg/m2 to <20 kg/m2, 20 kg/m2 to <25 kg/m2, 25 kg/m2 to <30 kg/m2, 30 kg/m2 to <35 kg/m2, 35 kg/m2 to <40 kg/m2, ≥40 kg/m2 [morbid obesity]), by sex in 200 countries and territories, organised in 21 regions. We calculated the posterior probability of meeting the target of halting by 2025 the rise in obesity at its 2010 levels, if post-2000 trends continue. Findings We used 1698 population-based data sources, with more than 19·2 million adult participants (9·9 million men and 9·3 million women) in 186 of 200 countries for which estimates were made. Global age-standardised mean BMI increased from 21·7 kg/m2 (95% credible interval 21·3–22·1) in 1975 to 24·2 kg/m2 (24·0–24·4) in 2014 in men, and from 22·1 kg/m2 (21·7–22·5) in 1975 to 24·4 kg/m2 (24·2–24·6) in 2014 in women. Regional mean BMIs in 2014 for men ranged from 21·4 kg/m2 in central Africa and south Asia to 29·2 kg/m2 (28·6–29·8) in Polynesia and Micronesia; for women the range was from 21·8 kg/m2 (21·4–22·3) in south Asia to 32·2 kg/m2 (31·5–32·8) in Polynesia and Micronesia. Over these four decades, age-standardised global prevalence of underweight decreased from 13·8% (10·5–17·4) to 8·8% (7·4–10·3) in men and from 14·6% (11·6–17·9) to 9·7% (8·3–11·1) in women. South Asia had the highest prevalence of underweight in 2014, 23·4% (17·8–29·2) in men and 24·0% (18·9–29·3) in women. 
Age-standardised prevalence of obesity increased from 3·2% (2·4–4·1) in 1975 to 10·8% (9·7–12·0) in 2014 in men, and from 6·4% (5·1–7·8) to 14·9% (13·6–16·1) in women. In 2014, 2·3% (2·0–2·7) of the world's men and 5·0% (4·4–5·6) of women were severely obese (ie, BMI ≥35 kg/m2). Globally, prevalence of morbid obesity was 0·64% (0·46–0·86) in men and 1·6% (1·3–1·9) in women. Interpretation If post-2000 trends continue, the probability of meeting the global obesity target is virtually zero. Rather, if these trends continue, by 2025 global obesity prevalence will reach 18% in men and surpass 21% in women; severe obesity will surpass 6% in men and 9% in women. Nonetheless, underweight remains prevalent in the world's poorest regions, especially in south Asia.
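The BMI categories used in the analysis map directly onto a small classifier (labels paraphrased from the category list in the Methods; a sketch, not the study's code):

```python
def bmi_category(bmi):
    """Classify a BMI value (kg/m2) into the study's seven categories."""
    cutpoints = [(18.5, "<18.5 (underweight)"),
                 (20.0, "18.5 to <20"),
                 (25.0, "20 to <25"),
                 (30.0, "25 to <30"),
                 (35.0, "30 to <35"),
                 (40.0, "35 to <40")]
    for upper, label in cutpoints:
        if bmi < upper:          # categories are half-open: lower <= bmi < upper
            return label
    return ">=40 (morbid obesity)"
```

For example, the 2014 global mean BMI for men of 24·2 kg/m2 falls in the 20 to <25 category.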