191 results for diffusive viscoelastic model, global weak solution, error estimate


Relevance:

50.00%

Publisher:

Abstract:

In this paper, we generalise a previously described model of the error-prone polymerase chain reaction (PCR) to conditions of arbitrarily variable amplification efficiency and initial population size. Generalisation of the model to these conditions improves the correspondence to observed and expected behaviours of PCR, and restricts the extent to which the model may explore sequence space for a prescribed set of parameters. Error-prone PCR in realistic reaction conditions is predicted to be less effective at generating grossly divergent sequences than the original model suggested. The estimate of the mutation rate per cycle obtained by sampling sequences from an in vitro PCR experiment is correspondingly affected by the choice of model and parameters. (c) 2005 Elsevier Ltd. All rights reserved.


The Newton‐Raphson method is proposed for the solution of the nonlinear equation arising from a theoretical model of an acid/base titration. It is shown that it is necessary to modify the form of the equation in order that the iteration is guaranteed to converge. A particular example is considered to illustrate the analysis and method, and a BASIC program is included that can be used to predict the pH of any weak acid/weak base titration.
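The approach the abstract describes can be sketched as follows; iterating on pH rather than on [H+] is one reformulation of the equation that tames the iteration. The charge-balance form and all equilibrium constants and concentrations below are illustrative textbook values, not taken from the paper (which used a BASIC program):

```python
# Newton-Raphson solution of the charge-balance equation for a weak
# acid / weak base mixture, iterating on pH rather than [H+].
# All constants below are illustrative assumptions, not the paper's.

KW = 1.0e-14  # water autoionisation constant at 25 C

def charge_balance(h, ca, ka, cb, kb):
    """Net charge as a function of [H+] for ca mol/L of weak acid HA
    (acid constant ka) mixed with cb mol/L of weak base B (base
    constant kb)."""
    a_minus = ca * ka / (ka + h)          # conjugate base of the acid
    bh_plus = cb * h / (h + KW / kb)      # protonated base
    return h + bh_plus - a_minus - KW / h

def titration_ph(ca, ka, cb, kb, ph0=7.0, tol=1e-10, max_iter=100):
    """Newton-Raphson in pH with a numerical derivative."""
    ph = ph0
    for _ in range(max_iter):
        h = 10.0 ** -ph
        f = charge_balance(h, ca, ka, cb, kb)
        dp = 1e-6
        df = (charge_balance(10.0 ** -(ph + dp), ca, ka, cb, kb) - f) / dp
        step = f / df
        ph -= step
        if abs(step) < tol:
            return ph
    raise RuntimeError("did not converge")

# 0.1 M acetic acid (Ka ~ 1.8e-5) mixed with an equal amount of
# 0.1 M ammonia (Kb ~ 1.8e-5): with Ka = Kb the pH is neutral.
print(round(titration_ph(0.1, 1.8e-5, 0.1, 1.8e-5), 2))
```

Working in pH keeps the iterate positive and well scaled, which is the kind of reformulation the abstract argues is needed for guaranteed convergence.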


The objective of this paper is to reconsider the Maximum Entropy Production (MEP) conjecture in the context of a very simple two-dimensional zonal-vertical climate model able to represent the total material entropy production due simultaneously to both horizontal and vertical heat fluxes. MEP is applied first to a simple four-box model of climate which accounts for both horizontal and vertical material heat fluxes. It is shown that, under conditions of fixed insolation, a MEP solution is found with reasonably realistic temperatures and heat fluxes, thus generalising results from independent two-box horizontal or vertical models. It is also shown that the meridional and vertical entropy production terms are independently involved in the maximisation, and thus MEP can be applied to each subsystem with fixed boundary conditions. We then extend the four-box model by increasing its resolution, and compare it with GCM output. A MEP solution is found which is fairly realistic as far as the horizontal large-scale organisation of the climate is concerned, whereas the vertical structure is unrealistic and presents seriously unstable features. This study suggests that the thermal meridional structure of the atmosphere is predicted fairly well by MEP once the insolation is given, but that the vertical structure of the atmosphere cannot be predicted satisfactorily by MEP unless constraints are imposed to represent the determination of longwave absorption by water vapour and clouds as a function of the state of the climate. Furthermore, an order-of-magnitude estimate of the contributions to the material entropy production due to horizontal and vertical processes within the climate system is provided using two different methods. In both cases we find that approximately 40 mW m−2 K−1 of material entropy production is due to vertical heat transport and 5–7 mW m−2 K−1 to horizontal heat transport.
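A minimal illustration of the MEP principle applied to a horizontal two-box model, the kind of independent model whose results the abstract says its four-box model generalises. The Budyko-type radiation law and insolation values are invented for the sketch, not the paper's:

```python
# Toy two-box MEP calculation: given fixed insolation, find the
# meridional heat flux F that maximises material entropy production.
# All numbers (OLR = A + B*T linearisation, absorbed solar radiation)
# are illustrative assumptions, not the paper's model.

A, B = 203.3, 2.09                   # OLR law, W m-2 and W m-2 C-1 (T in deg C)
I_TROPICS, I_POLES = 300.0, 160.0    # absorbed solar radiation, W m-2

def temperatures(f):
    """Energy-balance temperatures (deg C) for a meridional flux f (W m-2)."""
    t_tropics = (I_TROPICS - A - f) / B
    t_poles = (I_POLES - A + f) / B
    return t_tropics, t_poles

def entropy_production(f):
    """Material entropy production of the heat transport, W m-2 K-1."""
    t1, t2 = temperatures(f)
    return f * (1.0 / (t2 + 273.15) - 1.0 / (t1 + 273.15))

# Brute-force maximisation over the admissible flux range.
fluxes = [0.01 * i for i in range(7000)]
f_mep = max(fluxes, key=entropy_production)
t1, t2 = temperatures(f_mep)
print(f"MEP flux {f_mep:.1f} W m-2, T_tropics {t1:.1f} C, T_poles {t2:.1f} C")
```

The maximum lies strictly between zero transport (no entropy production) and the transport that equalises the two temperatures (also no entropy production), and the selected flux moderates the pole-to-tropics temperature contrast, as in the two-box results the abstract refers to.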


The potential for spatial dependence in models of voter turnout, although plausible from a theoretical perspective, has not been adequately addressed in the literature. Using recent advances in Bayesian computation, we formulate and estimate the previously unutilized spatial Durbin error model and apply this model to the question of whether spillovers and unobserved spatial dependence in voter turnout matter from an empirical perspective. Formal Bayesian model comparison techniques are employed to compare the normal linear model, the spatially lagged X (SLX) model, the spatial Durbin model, and the spatial Durbin error model. The results overwhelmingly support the spatial Durbin error model as the appropriate empirical model.
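The structure of the spatial Durbin error model (SDEM) can be sketched generatively: y = Xβ + WXθ + u with spatially autocorrelated errors u = λWu + ε. This is an illustrative numpy simulation with an invented ring-contiguity W, invented parameters, and a known-λ GLS fit, not the paper's Bayesian estimation:

```python
import numpy as np

# Generative sketch of the SDEM: y = X*beta + theta*(W x) + u,
# u = lam*W*u + eps. W, the parameters, and the data are invented.

rng = np.random.default_rng(0)
n = 50

# Row-normalised contiguity matrix for units on a ring (two neighbours each).
W = np.zeros((n, n))
for i in range(n):
    W[i, (i - 1) % n] = 0.5
    W[i, (i + 1) % n] = 0.5

x = rng.normal(size=n)
X = np.column_stack([np.ones(n), x])
beta = np.array([1.0, 2.0])   # direct effects
theta = 0.5                   # spillover from neighbours' covariate
lam = 0.6                     # spatial autocorrelation of the errors

eps = rng.normal(scale=0.5, size=n)
u = np.linalg.solve(np.eye(n) - lam * W, eps)   # u = (I - lam W)^-1 eps
y = X @ beta + theta * (W @ x) + u

# With lam and W known, whitening by (I - lam W) and least squares on
# [X, Wx] recovers beta and theta (a GLS fit, for illustration only).
Z = np.column_stack([X, W @ x])
A = np.eye(n) - lam * W
coef, *_ = np.linalg.lstsq(A @ Z, A @ y, rcond=None)
print(np.round(coef, 2))
```

In the SDEM the spillovers enter through the lagged covariates WX while the spatial dependence is confined to the disturbance, which is what distinguishes it from the spatial Durbin model compared in the paper.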


Global flood hazard maps can be used in the assessment of flood risk in a number of different applications, including (re)insurance and large-scale flood preparedness. Such global hazard maps can be generated using large-scale, physically based models of rainfall-runoff and river routing, when used in conjunction with a number of post-processing methods. In this study, the European Centre for Medium-Range Weather Forecasts (ECMWF) land surface model is driven by ERA-Interim reanalysis meteorological forcing data, and the resultant runoff is passed to a river routing algorithm which simulates floodplains and flood flow across the global land area. The global hazard map is based on a 30 yr (1979–2010) simulation period. A Gumbel distribution is fitted to the annual maxima flows to derive a number of flood return periods. The return periods are calculated initially for a 25×25 km grid, which is then reprojected onto a 1×1 km grid to derive maps of higher resolution and to estimate the flooded fractional area for the individual 25×25 km cells. Several global and regional maps of flood return periods ranging from 2 to 500 yr are presented. The results compare reasonably well with a benchmark data set of global flood hazard. The developed methodology can be applied to other datasets on a global or regional scale.
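The Gumbel post-processing step can be sketched as follows, using method-of-moments fitting and invented annual maxima; the paper's fitting procedure may differ:

```python
import math

# Fit a Gumbel distribution to annual maximum flows and read off the
# flows for chosen return periods, as in the abstract's post-processing
# step. Method-of-moments fitting is used for brevity; the 30 flows
# below are invented (one per year, echoing the 1979-2010 period).

def gumbel_fit(annual_maxima):
    """Method-of-moments Gumbel parameters (location mu, scale beta)."""
    n = len(annual_maxima)
    mean = sum(annual_maxima) / n
    var = sum((x - mean) ** 2 for x in annual_maxima) / (n - 1)
    beta = math.sqrt(6.0 * var) / math.pi
    mu = mean - 0.5772 * beta        # 0.5772: Euler-Mascheroni constant
    return mu, beta

def return_level(mu, beta, t_years):
    """Flow exceeded on average once every t_years."""
    return mu - beta * math.log(-math.log(1.0 - 1.0 / t_years))

flows = [820, 950, 760, 1100, 890, 1020, 980, 700, 1250, 940,
         860, 1010, 790, 1180, 900, 970, 1060, 830, 1300, 920,
         880, 1040, 810, 1140, 960, 990, 1080, 840, 1220, 930]

mu, beta = gumbel_fit(flows)
for t in (2, 10, 100, 500):
    print(f"{t:>3}-yr return level: {return_level(mu, beta, t):.0f} m3/s")
```

On the grid described in the abstract, this fit is repeated independently for every 25×25 km cell before reprojection to 1×1 km.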


The fourth assessment report of the Intergovernmental Panel on Climate Change (IPCC) includes a comparison of observation-based and modeling-based estimates of the aerosol direct radiative forcing. In this comparison, satellite-based studies suggest a more negative aerosol direct radiative forcing than modeling studies. A previous satellite-based study, part of the IPCC comparison, uses aerosol optical depths and accumulation-mode fractions retrieved by the Moderate Resolution Imaging Spectroradiometer (MODIS) at collection 4. The latest version of MODIS products, named collection 5, improves aerosol retrievals. Using these products, the direct forcing in the shortwave spectrum defined with respect to present-day natural aerosols is now estimated at −1.30 and −0.65 W m−2 on a global clear-sky and all-sky average, respectively, for 2002. These values are still significantly more negative than the numbers reported by modeling studies. By accounting for differences between present-day natural and preindustrial aerosol concentrations, sampling biases, and investigating the impact of differences in the zonal distribution of anthropogenic aerosols, good agreement is reached between the direct forcing derived from MODIS and the Hadley Centre climate model HadGEM2-A over clear-sky oceans. Results also suggest that satellite estimates of anthropogenic aerosol optical depth over land should be coupled with a robust validation strategy in order to refine the observation-based estimate of aerosol direct radiative forcing. In addition, the complex problem of deriving the aerosol direct radiative forcing when aerosols are located above cloud still needs to be addressed.


The Wetland and Wetland CH4 Intercomparison of Models Project (WETCHIMP) was created to evaluate our present ability to simulate large-scale wetland characteristics and corresponding methane (CH4) emissions. A multi-model comparison is essential to evaluate the key uncertainties in the mechanisms and parameters leading to methane emissions. Ten modelling groups joined WETCHIMP to run eight global and two regional models with a common experimental protocol using the same climate and atmospheric carbon dioxide (CO2) forcing datasets. We reported the main conclusions from the intercomparison effort in a companion paper (Melton et al., 2013). Here we provide technical details for the six experiments, which included an equilibrium, a transient, and an optimized run plus three sensitivity experiments (temperature, precipitation, and atmospheric CO2 concentration). The diversity of approaches used by the models is summarized through a series of conceptual figures, and is used to evaluate the wide range of wetland extent and CH4 fluxes predicted by the models in the equilibrium run. We discuss relationships among the various approaches and patterns in consistencies of these model predictions. Within this group of models, there are three broad classes of methods used to estimate wetland extent: prescribed based on wetland distribution maps, prognostic relationships between hydrological states based on satellite observations, and explicit hydrological mass balances. A larger variety of approaches was used to estimate the net CH4 fluxes from wetland systems. Even though modelling of wetland extent and CH4 emissions has progressed significantly over recent decades, large uncertainties still exist when estimating CH4 emissions: there is little consensus on model structure or complexity due to knowledge gaps, different aims of the models, and the range of temporal and spatial resolutions of the models.


Atmospheric aerosols cause scattering and absorption of incoming solar radiation. Additional anthropogenic aerosols released into the atmosphere thus exert a direct radiative forcing on the climate system [1]. The degree of present-day aerosol forcing is estimated from global models that incorporate a representation of the aerosol cycles [1–3]. Although the models are compared and validated against observations, these estimates remain uncertain. Previous satellite measurements of the direct effect of aerosols contained limited information about aerosol type, and were confined to oceans only [4,5]. Here we use state-of-the-art satellite-based measurements of aerosols [6–8] and surface wind speed [9] to estimate the clear-sky direct radiative forcing for 2002, incorporating measurements over land and ocean. We use a Monte Carlo approach to account for uncertainties in aerosol measurements and in the algorithm used. Probability density functions obtained for the direct radiative forcing at the top of the atmosphere give a clear-sky, global, annual average of −1.9 W m−2 with a standard deviation of ±0.3 W m−2. These results suggest that present-day direct radiative forcing is stronger than present model estimates, implying future atmospheric warming greater than is presently predicted, as aerosol emissions continue to decline [10].
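The Monte Carlo step can be illustrated with a deliberately over-simplified forcing calculation (forcing = anthropogenic optical depth × a forcing efficiency), perturbing both inputs within assumed uncertainties and summarising the resulting probability density function. The input distributions are invented, although the toy mean comes out of the same order as the value quoted above:

```python
import random
import statistics

# Monte Carlo propagation of input uncertainties through a toy
# clear-sky direct-forcing calculation. The distributions below are
# invented for illustration, not the paper's retrieval uncertainties.

random.seed(42)

def sample_forcing():
    """One Monte Carlo realisation of a toy direct forcing (W m-2)."""
    aod_anth = random.gauss(0.04, 0.008)    # anthropogenic optical depth
    efficiency = random.gauss(-45.0, 8.0)   # forcing per unit AOD, W m-2
    return aod_anth * efficiency

samples = [sample_forcing() for _ in range(100_000)]
mean = statistics.mean(samples)
std = statistics.stdev(samples)
print(f"forcing: {mean:.2f} +/- {std:.2f} W m-2")
```

The spread of the sampled forcings plays the role of the probability density function described in the abstract; a real application would perturb every retrieved quantity and algorithmic choice, not just two scalars.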


In this paper, ensembles of forecasts (of up to six hours) are studied from a convection-permitting model with a representation of model error due to unresolved processes. The ensemble prediction system (EPS) used is an experimental convection-permitting version of the UK Met Office’s 24-member Global and Regional Ensemble Prediction System (MOGREPS). The method of representing model error variability, which perturbs parameters within the model’s parameterisation schemes, has been modified, and we investigate the impact of applying this scheme in different ways. These are: a control ensemble where all ensemble members have the same parameter values; an ensemble where the parameters differ between members, but are fixed in time; and ensembles where the parameters are updated randomly every 30 or 60 min. The choice of parameters and their ranges of variability have been determined from expert opinion and parameter sensitivity tests. A case of frontal rain over the southern UK has been chosen, which has a multi-banded rainfall structure. The consequences of including model error variability in the case studied are mixed and are summarised as follows. The multiple banding, evident in the radar, is not captured by any single member. However, the single band is positioned in some members where a secondary band is present in the radar. This is found for all ensembles studied. Adding model error variability with parameters fixed in time does increase the ensemble spread for near-surface variables like wind and temperature, but can actually decrease the spread of the rainfall. Perturbing the parameters periodically throughout the forecast does not further increase the spread and exhibits “jumpiness” in the spread at times when the parameters are perturbed. Adding model error variability gives an improvement in forecast skill after the first 2–3 h of the forecast for near-surface temperature and relative humidity. For precipitation skill scores, adding model error variability improves the skill in the first 1–2 h of the forecast, but then reduces it. Complementary experiments were performed in which the only difference between members was the set of parameter values (i.e. no initial condition variability); the resulting spread was found to be significantly less than the spread from initial condition variability alone.
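The configurations compared above (parameters fixed in time versus re-drawn periodically) can be sketched as follows; the parameter names and ranges are invented stand-ins for the expert-elicited ones used in the study:

```python
import random

# Sketch of the parameter-perturbation scheme: each ensemble member
# draws parameterisation parameters from prescribed ranges, either
# fixed for the whole forecast or re-drawn every k timesteps.
# Parameter names and ranges are invented for illustration.

PARAM_RANGES = {
    "entrainment_rate": (0.5, 1.5),   # multiplier on a default value
    "mixing_length": (0.8, 1.2),
}

def draw_parameters(rng):
    return {name: rng.uniform(lo, hi)
            for name, (lo, hi) in PARAM_RANGES.items()}

def run_member(member_id, n_steps, update_every=None, seed=0):
    """Return the parameter sets used by one member over a forecast.

    update_every=None reproduces the fixed-in-time configuration;
    update_every=k re-draws the parameters every k timesteps."""
    rng = random.Random(seed * 1000 + member_id)
    params = draw_parameters(rng)
    history = []
    for step in range(n_steps):
        if update_every and step > 0 and step % update_every == 0:
            params = draw_parameters(rng)   # periodic re-draw
        history.append(dict(params))
    return history

fixed = run_member(1, 6)                    # same parameters throughout
varying = run_member(1, 6, update_every=3)  # re-drawn once, at step 3
print(len({h["mixing_length"] for h in fixed}),
      len({h["mixing_length"] for h in varying}))
```

Seeding by member identifier keeps members distinct but reproducible; the periodic re-draw is the mechanism the abstract associates with "jumpiness" in the spread.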


We investigate the initialization of Northern-hemisphere sea ice in the global climate model ECHAM5/MPI-OM by assimilating sea-ice concentration data. The analysis updates for concentration are given by Newtonian relaxation, and we discuss different ways of specifying the analysis updates for mean thickness. Because the conservation of mean ice thickness or actual ice thickness in the analysis updates leads to poor assimilation performance, we introduce a proportional dependence between concentration and mean thickness analysis updates. Assimilation with these proportional mean-thickness analysis updates significantly reduces assimilation error both in identical-twin experiments and when assimilating sea-ice observations, reducing the concentration error by a factor of four to six, and the thickness error by a factor of two. To understand the physical aspects of assimilation errors, we construct a simple prognostic model of the sea-ice thermodynamics, and analyse its response to the assimilation. We find that the strong dependence of thermodynamic ice growth on ice concentration necessitates an adjustment of mean ice thickness in the analysis update. To understand the statistical aspects of assimilation errors, we study the model background error covariance between ice concentration and ice thickness. We find that the spatial structure of covariances is best represented by the proportional mean-thickness analysis updates. Both physical and statistical evidence supports the experimental finding that proportional mean-thickness updates are superior to the other two methods considered and enable us to assimilate sea ice in a global climate model using simple Newtonian relaxation.


We investigate the initialisation of Northern Hemisphere sea ice in the global climate model ECHAM5/MPI-OM by assimilating sea-ice concentration data. The analysis updates for concentration are given by Newtonian relaxation, and we discuss different ways of specifying the analysis updates for mean thickness. Because the conservation of mean ice thickness or actual ice thickness in the analysis updates leads to poor assimilation performance, we introduce a proportional dependence between concentration and mean thickness analysis updates. Assimilation with these proportional mean-thickness analysis updates leads to good assimilation performance for sea-ice concentration and thickness, both in identical-twin experiments and when assimilating sea-ice observations. The simulation of other Arctic surface fields in the coupled model is, however, not significantly improved by the assimilation. To understand the physical aspects of assimilation errors, we construct a simple prognostic model of the sea-ice thermodynamics, and analyse its response to the assimilation. We find that an adjustment of mean ice thickness in the analysis update is essential to arrive at plausible state estimates. To understand the statistical aspects of assimilation errors, we study the model background error covariance between ice concentration and ice thickness. We find that the spatial structure of covariances is best represented by the proportional mean-thickness analysis updates. Both physical and statistical evidence supports the experimental finding that assimilation with proportional mean-thickness updates outperforms the other two methods considered. The method described here is very simple to implement, and gives results that are sufficiently good to be used for initialising sea ice in a global climate model for seasonal to decadal predictions.
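A minimal sketch of the proportional mean-thickness analysis update described above, with an invented relaxation constant; the key property is that the actual (per-area) ice thickness is left unchanged by the update:

```python
# Newtonian relaxation of sea-ice concentration towards observations,
# with the mean-thickness update made proportional to the concentration
# update. The relaxation constant and state values are illustrative.

def assimilate(concentration, mean_thickness, obs_concentration, k=0.5):
    """Return the analysed (concentration, mean thickness)."""
    dc = k * (obs_concentration - concentration)   # Newtonian relaxation
    # Proportional update: d(mean thickness)/(mean thickness) = dc/c,
    # which leaves the actual thickness h/c unchanged.
    dh = mean_thickness * dc / concentration if concentration > 0 else 0.0
    return concentration + dc, mean_thickness + dh

c, h = assimilate(0.8, 2.0, 0.6)   # model too icy: 80% vs observed 60%
print(f"{c:.2f} {h:.2f} {h / c:.2f}")   # actual thickness h/c stays 2.50 m
```

The alternatives the abstract finds inferior would instead hold the mean thickness fixed (dh = 0) or hold the actual thickness fixed by a different rule than the one imposed here.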


Snow provides large seasonal storage of freshwater, and information about the distribution of snow mass as Snow Water Equivalent (SWE) is important for hydrological planning and detecting climate change impacts. Large regional disagreements remain between estimates from reanalyses, remote sensing and modelling. Assimilating passive microwave information improves SWE estimates in many regions but the assimilation must account for how microwave scattering depends on snow stratigraphy. Physical snow models can estimate snow stratigraphy, but users must consider the computational expense of model complexity versus acceptable errors. Using data from the National Aeronautics and Space Administration Cold Land Processes Experiment (NASA CLPX) and the Helsinki University of Technology (HUT) microwave emission model of layered snowpacks, it is shown that simulations of the brightness temperature difference between 19 GHz and 37 GHz vertically polarised microwaves are consistent with Advanced Microwave Scanning Radiometer-Earth Observing System (AMSR-E) and Special Sensor Microwave Imager (SSM/I) retrievals once known stratigraphic information is used. Simulated brightness temperature differences for an individual snow profile depend on the provided stratigraphic detail. Relative to a profile defined at the 10 cm resolution of density and temperature measurements, the error introduced by simplification to a single layer of average properties increases approximately linearly with snow mass. If this brightness temperature error is converted into SWE using a traditional retrieval method then it is equivalent to ±13 mm SWE (7% of total) at a depth of 100 cm. This error is reduced to ±5.6 mm SWE (3% of total) for a two-layer model.
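The "traditional retrieval method" used above to translate a brightness temperature error into an SWE error can be illustrated with a Chang-type linear algorithm; the 4.8 mm/K coefficient is a commonly quoted textbook value, not necessarily the one used in the study:

```python
# Linear Tb-difference SWE retrieval (Chang-type algorithm). The
# coefficient is an illustrative textbook value, not the study's.

CHANG_COEFF = 4.8   # mm SWE per K of 19V-37V brightness temperature difference

def swe_retrieval(tb19v, tb37v):
    """Snow water equivalent (mm) from a linear Tb-difference retrieval."""
    return max(0.0, CHANG_COEFF * (tb19v - tb37v))

# With a linear retrieval, a few K of stratigraphy-induced error in the
# simulated Tb difference maps directly to tens of mm of SWE error.
print(swe_retrieval(250.0, 230.0))
```

This linearity is why the approximately linear growth of the single-layer brightness temperature error with snow mass translates directly into the ±13 mm and ±5.6 mm SWE figures quoted above.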


Diabatic processes can alter Rossby wave structure; consequently errors arising from model processes propagate downstream. However, the chaotic spread of forecasts from initial condition uncertainty renders it difficult to trace back from root mean square forecast errors to model errors. Here diagnostics unaffected by phase errors are used, enabling investigation of systematic errors in Rossby waves in winter-season forecasts from three operational centers. Tropopause sharpness adjacent to ridges decreases with forecast lead time. It depends strongly on model resolution, even though models are examined on a common grid. Rossby wave amplitude reduces with lead time up to about five days, consistent with under-representation of diabatic modification and transport of air from the lower troposphere into upper-tropospheric ridges, and with too weak humidity gradients across the tropopause. However, amplitude also decreases when resolution is decreased. Further work is necessary to isolate the contribution from errors in the representation of diabatic processes.


We compare the quasi-equilibrium heat balances, as well as their responses to a 4×CO2 perturbation, among three global climate models, with the aim of identifying and explaining inter-model differences in ocean heat uptake (OHU) processes. We find that, in quasi-equilibrium, convective and mixed-layer processes, as well as eddy-related processes, cause cooling of the subsurface ocean. The cooling is balanced by warming caused by advective and diapycnally diffusive processes. We also find that in the CO2-perturbed climates the largest contribution to OHU comes from changes in vertical mixing processes and the mean circulation, particularly in the extra-tropics, caused both by changes in wind forcing and by changes in high-latitude buoyancy forcing. There is a substantial warming in the tropics, a significant part of which occurs because of changes in horizontal advection in the extra-tropics. Diapycnal diffusion makes only a weak contribution to the OHU, mainly in the tropics, due to increased stratification. There are important qualitative differences in the contributions of eddy-induced advection and isopycnal diffusion to the OHU among the models. The former is related to the different values of the coefficients used in the corresponding scheme; the latter to the different tapering formulations of the isopycnal diffusion scheme. These differences affect the OHU in the deep ocean, which is substantial in two of the models, the dominant region of deep warming being the Southern Ocean. However, most of the OHU takes place above 2000 m, and the three models are quantitatively similar in their global OHU efficiency and its breakdown among processes and as a function of latitude.


Two methods are developed to estimate net surface energy fluxes based upon satellite-based reconstructions of radiative fluxes at the top of the atmosphere and the atmospheric energy tendencies and transports from the ERA-Interim reanalysis. Method 1 applies the mass-adjusted energy divergence from ERA-Interim, while method 2 estimates the energy divergence from the difference between the net energy at the top of the atmosphere and at the surface in ERA-Interim. To optimise the surface flux and its variability over the ocean, the divergences over land are constrained to match the monthly area-mean surface net energy flux variability derived from a simple relationship between the surface net energy flux and the surface temperature change. The energy divergences over the oceans are then adjusted to remove an unphysical residual global-mean atmospheric energy divergence. The estimated net surface energy fluxes are compared with other data sets from reanalyses and atmospheric model simulations. The spatial correlation coefficients of multi-annual means between the estimates made here and the other data sets are all around 0.9. There is good agreement in area-mean anomaly variability over the global ocean, but discrepancies in the trend over the eastern Pacific are apparent.
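The budget residual underlying both methods can be written as a one-line calculation, F_sfc = R_toa − dE/dt − div(F_atm); the numbers below are invented monthly means for a single grid cell:

```python
# Net surface energy flux as the residual of the atmospheric energy
# budget: TOA net radiation minus the atmospheric energy tendency and
# the lateral energy flux divergence. Values are invented.

def surface_flux(r_toa, energy_tendency, divergence):
    """Net downward surface energy flux, W m-2."""
    return r_toa - energy_tendency - divergence

# e.g. 20 W m-2 absorbed at TOA, with the atmospheric column warming at
# 2 W m-2 and exporting 8 W m-2 laterally, leaves 10 W m-2 entering
# the surface.
print(surface_flux(20.0, 2.0, 8.0))   # -> 10.0
```

The two methods differ only in how the divergence term is obtained (mass-adjusted transports versus the TOA-minus-surface difference), which is why the land constraint and the global-mean adjustment described above act on that term.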