100 results for DIMENSIONAL MODEL
Abstract:
We present a simple, generic model of annual tree growth, called "T". This model accepts input from a first-principles light-use efficiency model (the "P" model). The P model provides values for gross primary production (GPP) per unit of absorbed photosynthetically active radiation (PAR). Absorbed PAR is estimated from the current leaf area. GPP is allocated to foliage, transport tissue, and fine-root production and respiration in such a way as to satisfy well-understood dimensional and functional relationships. Our approach thereby integrates two modelling approaches separately developed in the global carbon-cycle and forest-science literature. The T model can represent both ontogenetic effects (the impact of ageing) and the effects of environmental variations and trends (climate and CO2) on growth. Driven by local climate records, the model was applied to simulate ring widths during the period 1958–2006 for multiple trees of Pinus koraiensis from the Changbai Mountains in northeastern China. Each tree was initialised at its actual diameter at the time when local climate records started. The model produces realistic simulations of the interannual variability in ring width for different age cohorts (young, mature, and old). Both the simulations and observations show a significant positive response of tree-ring width to growing-season total photosynthetically active radiation (PAR0) and the ratio of actual to potential evapotranspiration (α), and a significant negative response to mean annual temperature (MAT). The slopes of the simulated and observed relationships with PAR0 and α are similar; the negative response to MAT is underestimated by the model. Comparison of simulations with fixed and changing atmospheric CO2 concentration shows that CO2 fertilisation over the past 50 years is too small to be distinguished in the ring-width data, given ontogenetic trends and interannual variability in climate.
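The light-use-efficiency chain that drives the T model can be illustrated with a minimal sketch: GPP is proportional to absorbed PAR, which in turn follows from leaf area via Beer's law. The function names, the extinction coefficient `k`, and the efficiency `lue` below are illustrative assumptions, not the paper's calibration.

```python
import math

def absorbed_par(par_total, leaf_area_index, k=0.5):
    """Absorbed PAR from Beer's law: fAPAR = 1 - exp(-k * LAI).

    k is an assumed canopy extinction coefficient, not the paper's value.
    """
    fapar = 1.0 - math.exp(-k * leaf_area_index)
    return fapar * par_total

def gpp(par_total, leaf_area_index, lue=0.7):
    """Gross primary production = light-use efficiency x absorbed PAR."""
    return lue * absorbed_par(par_total, leaf_area_index)
```

In the full model this GPP would then be allocated to foliage, transport tissue, and fine roots; the sketch stops at the light-use-efficiency step the abstract describes.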
Abstract:
In general, particle filters need large numbers of model runs in order to avoid filter degeneracy in high-dimensional systems. The recently proposed, fully nonlinear equivalent-weights particle filter overcomes this requirement by replacing the standard model transition density with two different proposal transition densities. The first proposal density is used to relax all particles towards the high-probability regions of state space as defined by the observations. The crucial second proposal density is then used to ensure that the majority of particles have equivalent weights at observation time. Here, the performance of the scheme is explored in a high-dimensional (65,500-variable) simplified ocean model. The success of the equivalent-weights particle filter in matching the true model state is shown using the mean of just 32 particles in twin experiments. It is of particular significance that this remains true even as the number and spatial variability of the observations are changed. The results from rank histograms are harder to interpret and can be influenced considerably by the parameter values used. This article also explores the sensitivity of the scheme's performance to the chosen parameter values, and the effect of using different model error parameters in the truth run than in the ensemble model runs.
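The first proposal stage, relaxing particles towards an observation, can be sketched for a scalar toy state. The names, the Gaussian weights, and the relaxation strength are illustrative assumptions, not the paper's implementation; the equivalent-weights stage is only mimicked here by reporting the spread of log-weights that the full scheme would (nearly) equalise.

```python
import random

def relax_towards_obs(particles, obs, strength=0.2):
    """First proposal: nudge every particle towards the observation.

    strength is an assumed relaxation coefficient, not the paper's value.
    """
    return [x + strength * (obs - x) for x in particles]

def log_weight_spread(particles, obs, obs_var=1.0):
    """Spread of Gaussian log-weights across the ensemble; the full
    equivalent-weights step would drive this spread towards zero."""
    logw = [-0.5 * (obs - x) ** 2 / obs_var for x in particles]
    return max(logw) - min(logw)

random.seed(0)
ens = [random.gauss(0.0, 1.0) for _ in range(32)]   # 32 particles, as in the twin experiments
before = log_weight_spread(ens, obs=2.0)
after = log_weight_spread(relax_towards_obs(ens, obs=2.0), obs=2.0)
```

Because relaxation contracts every particle's distance to the observation by the same factor, the log-weight spread shrinks, which is what keeps the filter from degenerating.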
Abstract:
The disadvantage of the majority of data assimilation schemes is the assumption that the conditional probability density function of the state of the system given the observations [the posterior probability density function (PDF)] is, either locally or globally, Gaussian. The advantage, however, is that through various mechanisms they ensure initial conditions that are predominantly in linear balance, so that spurious gravity wave generation is suppressed. The equivalent-weights particle filter is a data assimilation scheme that can represent a potentially multimodal posterior PDF. It does this via proposal densities that add extra terms to the model equations, which means that the advantage of traditional data assimilation schemes, namely generating predominantly balanced initial conditions, is no longer guaranteed. This paper looks in detail at the impact the equivalent-weights particle filter has on dynamical balance and gravity wave generation in a primitive equation model. The primary conclusions are that (i) provided the model error covariance matrix imposes geostrophic balance, each additional term required by the equivalent-weights particle filter is also geostrophically balanced; (ii) the relaxation term required to keep the particles in the locality of the observations has little effect on gravity waves, and if sufficiently large actually reduces gravity wave energy; and (iii) the equivalent-weights term, which gives the particles equivalent significance in the posterior PDF, produces a change in gravity wave energy comparable to that of the stochastic model error. Thus, the scheme does not produce significant spurious gravity wave energy and so has potential for real, high-dimensional geophysical applications.
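The geostrophic balance invoked in conclusion (i) is a simple diagnostic relation: for a free-surface height field eta(y), the balanced zonal flow is u_g = -(g/f) d(eta)/dy. A sketch of that diagnostic follows; the grid spacing, Coriolis parameter, and function names are assumed for illustration and are unrelated to the paper's model.

```python
G = 9.81           # m s^-2, gravitational acceleration
F_CORIOLIS = 1e-4  # s^-1, a typical midlatitude Coriolis parameter (assumed)

def geostrophic_u(eta, dy):
    """Geostrophic zonal velocity from a 1-D surface height profile,
    using centred differences in the interior and one-sided at the ends."""
    n = len(eta)
    out = []
    for j in range(n):
        jp, jm = min(j + 1, n - 1), max(j - 1, 0)
        slope = (eta[jp] - eta[jm]) / ((jp - jm) * dy)
        out.append(-(G / F_CORIOLIS) * slope)
    return out
```

For a linear height slope the diagnosed velocity is uniform, the hallmark of a balanced state with no gravity-wave signal.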
Abstract:
We present a general approach based on nonequilibrium thermodynamics for bridging the gap between a well-defined microscopic model and the macroscopic rheology of particle-stabilised interfaces. Our approach is illustrated by starting with a microscopic model of hard ellipsoids confined to a planar surface, which is intended to simply represent a particle-stabilised fluid–fluid interface. More complex microscopic models can be readily handled using the methods outlined in this paper. From the aforementioned microscopic starting point, we obtain the macroscopic, constitutive equations using a combination of systematic coarse-graining, computer experiments and Hamiltonian dynamics. Exemplary numerical solutions of the constitutive equations are given for a variety of experimentally relevant flow situations to explore the rheological behaviour of our model. In particular, we calculate the shear and dilatational moduli of the interface over a wide range of surface coverages, ranging from the dilute isotropic regime, to the concentrated nematic regime.
Abstract:
Operational forecasting centres are currently developing data assimilation systems for coupled atmosphere-ocean models. Strongly coupled assimilation, in which a single assimilation system is applied to a coupled model, presents significant technical and scientific challenges. Hence weakly coupled assimilation systems are being developed as a first step, in which the coupled model is used to compare the current state estimate with observations, but corrections to the atmosphere and ocean initial conditions are then calculated independently. In this paper we provide a comprehensive description of the different coupled assimilation methodologies in the context of four dimensional variational assimilation (4D-Var) and use an idealised framework to assess the expected benefits of moving towards coupled data assimilation. We implement an incremental 4D-Var system within an idealised single column atmosphere-ocean model. The system has the capability to run both strongly and weakly coupled assimilations as well as uncoupled atmosphere or ocean only assimilations, thus allowing a systematic comparison of the different strategies for treating the coupled data assimilation problem. We present results from a series of identical twin experiments devised to investigate the behaviour and sensitivities of the different approaches. Overall, our study demonstrates the potential benefits that may be expected from coupled data assimilation. When compared to uncoupled initialisation, coupled assimilation is able to produce more balanced initial analysis fields, thus reducing initialisation shock and its impact on the subsequent forecast. Single observation experiments demonstrate how coupled assimilation systems are able to pass information between the atmosphere and ocean and therefore use near-surface data to greater effect. 
We show that much of this benefit may also be gained from a weakly coupled assimilation system, but that this can be sensitive to the parameters used in the assimilation.
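The analysis step at the heart of any variational scheme can be reduced to a scalar illustration: minimise a quadratic cost J(x) = (x - xb)^2 / (2b) + (y - Hx)^2 / (2r) balancing a background xb (variance b) against an observation y (variance r). This is only the 3D-Var-like core, not the paper's incremental coupled 4D-Var; all names are generic assumptions.

```python
def scalar_var_analysis(xb, b, y, r, h=1.0):
    """Closed-form minimiser of the scalar background+observation cost.

    Setting dJ/dx = (x - xb)/b - h*(y - h*x)/r = 0 gives a
    precision-weighted mean of background and observation.
    """
    return (xb / b + h * y / r) / (1.0 / b + h * h / r)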
Abstract:
The topography of many floodplains in the developed world has now been surveyed with high-resolution sensors such as airborne LiDAR (Light Detection and Ranging), giving accurate Digital Elevation Models (DEMs) that facilitate accurate flood inundation modelling. This is not always the case for remote rivers in developing countries. However, the accuracy of DEMs produced for modelling studies on such rivers should be enhanced in the near future by the high-resolution TanDEM-X WorldDEM. In a parallel development, increasing use is now being made of flood extents derived from high-resolution Synthetic Aperture Radar (SAR) images for calibrating, validating and assimilating observations into flood inundation models in order to improve them. This paper discusses an additional use of SAR flood extents, namely to improve the accuracy of the TanDEM-X DEM in the floodplain covered by the flood extents, thereby permanently improving this DEM for future flood modelling and other studies. The method is based on the fact that for larger rivers the water elevation generally changes only slowly along a reach, so that the boundary of the flood extent (the waterline) can be regarded locally as a quasi-contour. As a result, heights of adjacent pixels along a small section of waterline can be regarded as samples with a common population mean. The height of the central pixel in the section can then be replaced with the average of these heights, giving a more accurate estimate. While this will result in a reduction in the height errors along a waterline, the waterline is a linear feature in a two-dimensional space.
However, improvements to the DEM heights between adjacent pairs of waterlines can also be made, because DEM heights enclosed by the higher waterline of a pair must in general be no higher than the corrected heights along the higher waterline, whereas DEM heights not enclosed by the lower waterline must in general be no lower than the corrected heights along the lower waterline. In addition, DEM heights between the higher and lower waterlines can be assigned smaller errors because of the reduced errors on the corrected waterline heights. The method was tested on a section of the TanDEM-X Intermediate DEM (IDEM) covering an 11 km reach of the Warwickshire Avon, England. Flood extents from four COSMO-SkyMed images were available at various stages of a flood in November 2012, and a LiDAR DEM was available for validation. In the area covered by the flood extents, the original IDEM heights had a mean difference from the corresponding LiDAR heights of 0.5 m with a standard deviation of 2.0 m, while the corrected heights had a mean difference of 0.3 m with a standard deviation of 1.2 m. These figures show that significant reductions in IDEM height bias and error can be made using the method, with the corrected error being only 60% of the original. Even if only a single SAR image obtained near the peak of the flood was used, the corrected error was only 66% of the original. The method should also be capable of improving the final TanDEM-X DEM and other DEMs, and may also be of use with data from the SWOT (Surface Water and Ocean Topography) satellite.
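The quasi-contour averaging step can be sketched directly: along a short section of waterline the pixel heights are treated as samples of a common mean, so each height is replaced by the mean over a sliding window, shrinking the random error by roughly 1/sqrt(n). The function name and window size are illustrative assumptions.

```python
def smooth_waterline_heights(heights, half_window=2):
    """Replace each waterline pixel height by the mean over a sliding
    window of its neighbours (window truncated at the line's ends)."""
    n = len(heights)
    out = []
    for i in range(n):
        lo, hi = max(0, i - half_window), min(n, i + half_window + 1)
        window = heights[lo:hi]
        out.append(sum(window) / len(window))
    return out
```

A constant waterline is left unchanged, while noisy heights are pulled towards the local mean, which is the mechanism behind the reported error reduction.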
Abstract:
Nonlinear data assimilation is high on the agenda in all fields of the geosciences: with ever-increasing model resolution, the inclusion of more physical (and biological) processes, and more complex observation operators, the data-assimilation problem becomes more and more nonlinear. The suitability of particle filters for solving the nonlinear data assimilation problem in high-dimensional geophysical systems is discussed. Several existing and new schemes are presented, and it is shown that at least one of them, the equivalent-weights particle filter, does indeed beat the curse of dimensionality and provides a way forward for nonlinear data assimilation in high-dimensional systems.
Abstract:
Initialising the ocean internal variability for decadal predictability studies is a new area of research, and a variety of ad hoc methods have been proposed. In this study, we explore how nudging with sea surface temperature (SST) and sea surface salinity (SSS) can reconstruct the three-dimensional variability of the ocean in a perfect model framework. This approach builds on the hypothesis that oceanic processes themselves transport the surface information into the ocean interior, as seen in ocean-only simulations. Five nudged simulations are designed to reconstruct a 150-year "target" simulation, defined as a portion of a long control simulation. The nudged simulations differ in the variables restored (SST or SST + SSS) and in the area where the nudging is applied. The strength of the heat flux feedback is diagnosed from observations, and the restoring coefficients for SSS use the same timescale. This choice prevents spurious convection at high latitudes and near the sea-ice edge when nudging both SST and SSS. In the tropics, nudging the SST alone is enough to reconstruct the tropical atmosphere circulation and the associated dynamical and thermodynamical impacts on the underlying ocean. In the tropical Pacific Ocean, the temperature profiles show a significant correlation from the surface down to 2,000 m, due to dynamical adjustment of the isopycnals. At mid-to-high latitudes, SSS nudging is required to reconstruct both the temperature and the salinity below the seasonal thermocline. This is particularly true in the North Atlantic, where adding SSS nudging enables reconstruction of the deep convection regions of the target. By initiating a previously documented 20-year cycle of the model, the SST + SSS nudging is also able to reproduce most of the variations in the Atlantic meridional overturning circulation (AMOC), a key source of decadal predictability.
Reconstruction at depth does not significantly improve with the amount of time spent nudging; the efficiency of the surface nudging depends instead on the period and events considered. The joint SST + SSS nudging applied everywhere is the most efficient approach. It ensures that the right water masses are formed at the right surface density, with the subsequent circulation, subduction and deep convection transporting them to depth. The results of this study underline the potential key role of SSS for decadal predictability and further make the case for sustained large-scale observations of this field.
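Surface nudging itself is Newtonian relaxation: the model SST is relaxed towards a target with a restoring timescale tau. The scalar "model", the timestep, and the timescale below are illustrative assumptions, not the study's configuration.

```python
def nudge(sst_model, sst_target, dt, tau):
    """One explicit relaxation step: dT/dt = (T_target - T) / tau."""
    return sst_model + dt * (sst_target - sst_model) / tau

def run_nudged(sst0, sst_target, dt=1.0, tau=10.0, steps=100):
    """Integrate the relaxation, pulling the state towards the target."""
    sst = sst0
    for _ in range(steps):
        sst = nudge(sst, sst_target, dt, tau)
    return sst
```

The choice of tau matters: a timescale consistent with the diagnosed heat flux feedback, as in the study, keeps the restoring from overpowering the model's own dynamics.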
Abstract:
We test the ability of a two-dimensional flux model to simulate polynya events with narrow open-water zones by comparing model results to ice-thickness and ice-production estimates derived from thermal infrared Moderate Resolution Imaging Spectroradiometer (MODIS) observations in conjunction with an atmospheric dataset. Given a polynya boundary and an atmospheric dataset, the model correctly reproduces the shape of an 11-day event using only a few simple conservation laws. Ice production is slightly overestimated by the model, owing to an underestimated ice thickness. We achieved the best model results with the consolidation-thickness parameterization developed by Biggs and others (2000). Observed regional discrepancies between model and satellite estimates might be a consequence of the missing representation of dynamic thin-ice thickening (e.g. rafting). We conclude that this simplified polynya model is a valuable tool for studying polynya dynamics and for estimating the associated fluxes of individual polynya events.
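The conservation law behind thermodynamic ice production in a polynya can be written in a few lines: net heat loss to the atmosphere is balanced by the latent heat released on freezing, so the growth rate is Q / (rho_ice * L_f). The constants are standard textbook values and the flux value in the usage note is illustrative; this sketch is not the flux model itself.

```python
RHO_ICE = 917.0    # kg m^-3, density of sea ice (textbook value)
L_FUSION = 3.34e5  # J kg^-1, latent heat of fusion (textbook value)

def ice_growth_rate(net_heat_loss_w_m2):
    """Thermodynamic ice growth rate (m s^-1) from net surface heat loss."""
    return net_heat_loss_w_m2 / (RHO_ICE * L_FUSION)

def daily_ice_production_cm(net_heat_loss_w_m2):
    """Ice produced over one day, in centimetres."""
    return ice_growth_rate(net_heat_loss_w_m2) * 86400.0 * 100.0
```

With an assumed winter heat loss of 300 W m^-2 this yields growth of order 10 cm per day, the magnitude that makes thin-ice polynyas such strong ice factories.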
Abstract:
Terrain-following coordinates are widely used in operational models, but the cut-cell method has been proposed as an alternative that can more accurately represent atmospheric dynamics over steep orography. Because the type of grid is usually chosen during model implementation, comparing the accuracy of different grids normally requires comparing different models. In contrast, here a C-grid finite-volume model enables a like-for-like comparison of terrain-following and cut-cell grids. A series of standard two-dimensional tests using idealised terrain are performed: tracer advection in a prescribed horizontal velocity field, a test starting from resting initial conditions, and orographically induced gravity waves described by nonhydrostatic dynamics. In addition, three new tests are formulated: a more challenging resting-atmosphere case, and two new advection tests with a velocity field that is everywhere tangential to the terrain-following coordinate surfaces. These new tests are challenging on cut-cell grids. The results of the advection tests demonstrate that accuracy depends primarily on the alignment of the flow with the grid rather than on grid orthogonality. A resting atmosphere is well maintained on all grids. In the gravity-wave test, results on all grids agree well with existing results from the literature, although terrain-following velocity fields lead to errors on cut-cell grids. Owing to semi-implicit timestepping and an upwind-biased, explicit advection scheme, there are no timestep restrictions associated with small cut cells. We do not find the significant advantages of cut cells or smoothed coordinates that other authors have reported.
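An upwind-biased, explicit advection scheme of the kind the comparison relies on can be sketched in one dimension on a periodic grid. The grid size, Courant number, and function names are illustrative assumptions, not the paper's setup.

```python
def upwind_step(q, u, dt, dx):
    """Advance tracer q one step with first-order upwinding, for
    constant velocity u > 0 on a periodic grid (q[-1] wraps around)."""
    c = u * dt / dx  # Courant number; explicit stability needs c <= 1
    return [q[i] - c * (q[i] - q[i - 1]) for i in range(len(q))]

def advect(q, u, dt, dx, steps):
    """Run several upwind steps."""
    for _ in range(steps):
        q = upwind_step(q, u, dt, dx)
    return q
```

On a periodic grid the scheme conserves the tracer total exactly and, for Courant numbers up to one, is monotone, so no new extrema are created. These are the properties that make accuracy hinge on how well the flow aligns with the grid.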