951 results for Nonlinear simulations
Abstract:
We present a new parameterisation that relates surface mass balance (SMB: the sum of surface accumulation and surface ablation) to changes in surface elevation of the Greenland ice sheet (GrIS) for the MAR (Modèle Atmosphérique Régional: Fettweis, 2007) regional climate model. The motivation is to dynamically adjust SMB as the GrIS evolves, allowing us to force ice sheet models with SMB simulated by MAR while incorporating the SMB–elevation feedback, without the substantial technical challenges of coupling ice sheet and climate models. This also allows us to assess the effect of elevation feedback uncertainty on the GrIS contribution to sea level, using multiple global climate and ice sheet models, without the need for additional, expensive MAR simulations. We estimate this relationship separately below and above the equilibrium line altitude (ELA, separating negative and positive SMB) and for regions north and south of 77° N, from a set of MAR simulations in which we alter the ice sheet surface elevation. These give four “SMB lapse rates”, gradients that relate SMB changes to elevation changes. We assess uncertainties within a Bayesian framework, estimating probability distributions for each gradient from which we present best estimates and credibility intervals (CI) that bound 95% of the probability. Below the ELA our gradient estimates are mostly positive, because SMB usually increases with elevation: 0.56 (95% CI: −0.22 to 1.33) kg m−3 a−1 for the north, and 1.91 (1.03 to 2.61) kg m−3 a−1 for the south. Above the ELA, the gradients are much smaller in magnitude: 0.09 (−0.03 to 0.23) kg m−3 a−1 in the north, and 0.07 (−0.07 to 0.59) kg m−3 a−1 in the south, because SMB can either increase or decrease in response to increased elevation. Our statistically founded approach allows us to make probabilistic assessments for the effect of elevation feedback uncertainty on sea level projections (Edwards et al., 2014).
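The four-gradient parameterisation described above lends itself to a very small sketch. The gradient values are the best estimates quoted in the abstract; the function name, interface, and the hard 77° N region split are illustrative assumptions, not code from the paper.

```python
# Sketch of the SMB-elevation parameterisation: an elevation change dh (m)
# maps to an SMB change via one of four gradients (kg m^-3 a^-1), selected
# by region (north/south of 77 N) and by whether the local SMB is negative
# (below the ELA) or positive (above it). Values are the abstract's best
# estimates; names and interface are illustrative only.

GRADIENTS = {
    ("north", "below_ela"): 0.56,
    ("south", "below_ela"): 1.91,
    ("north", "above_ela"): 0.09,
    ("south", "above_ela"): 0.07,
}

def smb_correction(smb, lat, dh):
    """Return SMB (kg m^-2 a^-1) adjusted for an elevation change dh (m)."""
    region = "north" if lat >= 77.0 else "south"
    zone = "below_ela" if smb < 0 else "above_ela"
    return smb + GRADIENTS[(region, zone)] * dh
```

For example, a southern ablation-zone point with SMB of −500 kg m−2 a−1 that thins by 100 m gets a further 191 kg m−2 a−1 of ablation under the southern below-ELA gradient.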
Abstract:
This article shows how one can formulate the representation problem starting from Bayes’ theorem. The purpose of this article is to raise awareness of the formal solutions, so that approximations can be placed in a proper context. The representation errors appear in the likelihood, and the different possibilities for the representation of reality in model and observations are discussed, including nonlinear representation probability density functions. Specifically, the assumptions needed in the usual procedure to add a representation error covariance to the error covariance of the observations are discussed, and it is shown that, when several sub-grid observations are present, their mean still has a representation error; so-called ‘superobbing’ does not resolve the issue. Connection is made to the off-line or on-line retrieval problem, providing a new simple proof of the equivalence of assimilating linear retrievals and original observations. Furthermore, it is shown how nonlinear retrievals can be assimilated without loss of information. Finally, we discuss how errors in the observation operator model can be treated consistently in the Bayesian framework, connecting to previous work in this area.
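The superobbing point above can be illustrated with a toy simulation, under the simplifying assumption of independent, zero-mean sub-grid deviations (the paper's formal treatment is more general). The model resolves only the grid-cell mean; each point observation sees that mean plus sub-grid variability plus instrument noise, so the average of K observations still carries a representation error well above the pure instrument level.

```python
import numpy as np

# Toy check: averaging K sub-grid point observations reduces, but does not
# remove, the representation error. Each observation = grid mean + sub-grid
# deviation (std sigma_rep) + instrument noise (std sigma_inst).
rng = np.random.default_rng(0)
n_trials, K = 20000, 4
grid_mean = 0.0
sigma_rep, sigma_inst = 1.0, 0.1

subgrid = rng.normal(0.0, sigma_rep, (n_trials, K))   # sub-grid deviations
noise = rng.normal(0.0, sigma_inst, (n_trials, K))    # instrument noise
superob = (grid_mean + subgrid + noise).mean(axis=1)  # averaged observation

# Error variance of the superob relative to the grid mean: roughly
# (sigma_rep^2 + sigma_inst^2) / K, dominated by the representation term
# and far above the pure instrument floor sigma_inst^2 / K.
err_var = superob.var()
```

Under these assumptions `err_var` comes out near 0.25, two orders of magnitude above the instrument-only level of 0.0025.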
Abstract:
We use a stratosphere–troposphere composition–climate model with interactive sulfur chemistry and aerosol microphysics, to investigate the effect of the 1991 Mount Pinatubo eruption on stratospheric aerosol properties. Satellite measurements indicate that shortly after the eruption, between 14 and 23 Tg of SO2 (7 to 11.5 Tg of sulfur) was present in the tropical stratosphere. Best estimates of the peak global stratospheric aerosol burden are in the range 19 to 26 Tg, or 3.7 to 6.7 Tg of sulfur assuming a composition of between 59 and 77 % H2SO4. In light of this large uncertainty range, we performed two main simulations with 10 and 20 Tg of SO2 injected into the tropical lower stratosphere. Simulated stratospheric aerosol properties through the 1991 to 1995 period are compared against a range of available satellite and in situ measurements. Stratospheric aerosol optical depth (sAOD) and effective radius from both simulations show good qualitative agreement with the observations, with the timing of peak sAOD and decay timescale matching well with the observations in the tropics and mid-latitudes. However, injecting 20 Tg gives a stratospheric aerosol mass burden a factor of 2 higher than the satellite data, with consequent strong high biases in simulated sAOD and surface area density; the 10 Tg injection is in much better agreement. Our model cannot explain the large fraction of the injected sulfur that the satellite-derived SO2 and aerosol burdens indicate was removed within the first few months after the eruption. We suggest that either there is an additional alternative loss pathway for the SO2 not included in our model (e.g. via accommodation into ash or ice in the volcanic cloud) or that a larger proportion of the injected sulfur was removed via cross-tropopause transport than in our simulations.
We also critically evaluate the simulated evolution of the particle size distribution, comparing in detail to balloon-borne optical particle counter (OPC) measurements from Laramie, Wyoming, USA (41° N). Overall, the model captures remarkably well the complex variations in particle concentration profiles across the different OPC size channels. However, for the 19 to 27 km injection height-range used here, both runs have a modest high bias in the lowermost stratosphere for the finest particles (radii less than 250 nm), and the decay timescale is longer in the model for these particles, with a much later return to background conditions. Also, whereas the 10 Tg run compared best to the satellite measurements, a significant low bias is apparent in the coarser size channels in the volcanically perturbed lower stratosphere. Overall, our results suggest that, with appropriate calibration, aerosol microphysics models are capable of capturing the observed variation in particle size distribution in the stratosphere across both volcanically perturbed and quiescent conditions. Furthermore, additional sensitivity simulations suggest that predictions with the models are robust to uncertainties in sub-grid particle formation and nucleation rates in the stratosphere.
Abstract:
In this paper the origin and evolution of the Sun’s open magnetic flux is considered by conducting magnetic flux transport simulations over many solar cycles. The simulations include the effects of differential rotation, meridional flow and supergranular diffusion on the radial magnetic field at the surface of the Sun as new magnetic bipoles emerge and are transported poleward. In each cycle the emergence of roughly 2100 bipoles is considered. The net open flux produced by the surface distribution is calculated by constructing potential coronal fields with a source surface from the surface distribution at regular intervals. In the simulations the net open magnetic flux closely follows the total dipole component at the source surface and evolves independently from the surface flux. The behaviour of the open flux is highly dependent on meridional flow and many observed features are reproduced by the model. However, when meridional flow is present at observed values the maximum value of the open flux occurs at cycle minimum when the polar caps it helps produce are the strongest. This is inconsistent with observations by Lockwood, Stamper and Wild (1999) and Wang, Sheeley, and Lean (2000) who find the open flux peaking 1–2 years after cycle maximum. Only in unrealistic simulations where meridional flow is much smaller than diffusion does a maximum in open flux consistent with observations occur. It is therefore deduced that there is no realistic parameter range of the flux transport variables that can produce the correct magnitude variation in open flux under the present approximations. As a result the present standard model does not contain the correct physics to describe the evolution of the Sun’s open magnetic flux over an entire solar cycle. Future possible improvements in modeling are suggested.
Abstract:
In this review I summarise some of the most significant advances of the last decade in the analysis and solution of boundary value problems for integrable partial differential equations in two independent variables. These equations arise widely in mathematical physics, and in order to model realistic applications, it is essential to consider bounded domains and inhomogeneous boundary conditions. I focus specifically on a general and widely applicable approach, usually referred to as the Unified Transform or Fokas Transform, that provides a substantial generalisation of the classical Inverse Scattering Transform. This approach preserves the conceptual efficiency and aesthetic appeal of the more classical transform approaches, but presents a distinctive and important difference. While the Inverse Scattering Transform follows the "separation of variables" philosophy, albeit in a nonlinear setting, the Unified Transform is based on the idea of synthesis, rather than separation, of variables. I will outline the main ideas in the case of linear evolution equations, and then illustrate their generalisation to certain nonlinear cases of particular significance.
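To fix ideas for the linear case mentioned above, the heat equation on the half-line is the standard illustrative example of the method (the notation here is the common textbook one and may differ in detail from the review).

```latex
% Unified Transform sketch for $q_t = q_{xx}$, $x > 0$. Define
\hat{q}(k,t) = \int_0^\infty e^{-ikx}\, q(x,t)\, dx, \qquad
\tilde{g}_j(k,t) = \int_0^t e^{k^2 s}\, \partial_x^j q(0,s)\, ds .
% Integrating by parts gives the global relation (valid for $\mathrm{Im}\,k \le 0$):
e^{k^2 t}\,\hat{q}(k,t) = \hat{q}_0(k) - \tilde{g}_1(k,t) - ik\,\tilde{g}_0(k,t),
% and the solution is synthesised as contour integrals,
q(x,t) = \frac{1}{2\pi}\int_{-\infty}^{\infty} e^{ikx-k^2 t}\,\hat{q}_0(k)\, dk
       - \frac{1}{2\pi}\int_{\partial D^+} e^{ikx-k^2 t}
         \bigl[\tilde{g}_1(k,t) + ik\,\tilde{g}_0(k,t)\bigr]\, dk,
\qquad D^+ = \{k : \mathrm{Im}\,k > 0,\ \mathrm{Re}\,k^2 < 0\}.
```

All variables appear together in a single integral representation ("synthesis" rather than "separation" of variables), and the global relation is used to eliminate the unknown boundary values.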
Abstract:
In this paper, we summarise recent progress to underline the features specific to this nonlinear elliptic case, and we give a new classification of boundary conditions on the semistrip that satisfy a necessary condition for yielding a boundary value problem that can be effectively linearised. This classification is based on a formulation of the equation in terms of an alternative Lax pair.
Abstract:
When considering adaptation measures and global climate mitigation goals, stakeholders need regional-scale climate projections, including the range of plausible warming rates. To assist these stakeholders, it is important to understand whether some locations may see disproportionately high or low warming from additional forcing above targets such as 2 K (ref. 1). There is a need to narrow uncertainty (ref. 2) in this nonlinear warming, which requires understanding how climate changes as forcings increase from medium to high levels. However, quantifying and understanding regional nonlinear processes is challenging. Here we show that regional-scale warming can be strongly superlinear to successive CO2 doublings, using five different climate models. Ensemble-mean warming is superlinear over most land locations. Further, the inter-model spread tends to be amplified at higher forcing levels, as nonlinearities grow—especially when considering changes per kelvin of global warming. Regional nonlinearities in surface warming arise from nonlinearities in global-mean radiative balance, the Atlantic meridional overturning circulation, surface snow/ice cover and evapotranspiration. For robust adaptation and mitigation advice, therefore, potentially avoidable climate change (the difference between business-as-usual and mitigation scenarios) and unavoidable climate change (change under strong mitigation scenarios) may need different analysis methods.
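The "superlinear to successive CO2 doublings" diagnostic above can be written down directly. The numbers in the example are invented for illustration, not model output.

```python
# Superlinearity check: under linear scaling, the warming at 4xCO2 would be
# exactly twice the warming at 2xCO2, i.e. the second doubling would add the
# same warming as the first. A positive residual means the response at that
# location is superlinear in successive doublings.

def nonlinearity(dT_2x, dT_4x):
    """Warming from the second doubling minus warming from the first (K)."""
    return (dT_4x - dT_2x) - dT_2x

# Hypothetical land grid point: 3.0 K at 2xCO2, 6.9 K at 4xCO2 -> the second
# doubling contributes 0.9 K more than the first (superlinear).
resid = nonlinearity(3.0, 6.9)
```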
Abstract:
The study of the mechanical energy budget of the oceans using Lorenz available potential energy (APE) theory is based on knowledge of the adiabatically re-arranged Lorenz reference state of minimum potential energy. The compressible and nonlinear character of the equation of state for seawater has been thought to cause the reference state to be ill-defined, casting doubt on the usefulness of APE theory for investigating ocean energetics under realistic conditions. Using a method based on the volume frequency distribution of parcels as a function of temperature and salinity in the context of the seawater Boussinesq approximation, which we illustrate using climatological data, we show that compressibility effects are in fact minor. The reference state can be regarded as a well-defined one-dimensional function of depth, which forms a surface in temperature, salinity and density space between the surface and the bottom of the ocean. For a very small proportion of water masses, this surface can be multivalued and water parcels can have up to two statically stable levels in the reference density profile, of which the shallowest is energetically more accessible. Classifying parcels from the surface to the bottom gives a different reference density profile than classifying in the opposite direction. However, this difference is negligible. We show that the reference state obtained by standard sorting methods is equivalent to that obtained from the volume frequency distribution approach, though computationally more expensive. The approach we present can be applied systematically and in a computationally efficient manner to investigate the APE budget of the ocean circulation using models or climatological data.
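The "standard sorting method" mentioned above is easy to sketch in its simplest setting: an incompressible (fixed-density) column where the Lorenz reference state is just the adiabatic rearrangement with density increasing monotonically downward. The full seawater problem handled in the paper is more subtle (compressibility, the volume frequency distribution in temperature–salinity space); function and variable names here are illustrative.

```python
import numpy as np

# Minimal sorting sketch of a Lorenz reference state: stack parcels so that
# density increases with depth. With unequal parcel volumes, sort by density
# and accumulate volume over the basin area to find each parcel's reference
# depth. Compressibility is deliberately ignored in this toy version.

def reference_profile(density, volume, area):
    """Sorted reference density profile and reference depths.

    density : parcel densities (kg m^-3)
    volume  : parcel volumes (m^3)
    area    : horizontal area of the basin (m^2)
    """
    order = np.argsort(density)            # lightest first -> top of column
    rho_ref = density[order]
    # depth of the bottom of each parcel's slab in the reference column
    depth = np.cumsum(volume[order]) / area
    return rho_ref, depth
```

This is the O(n log n) sort the paper compares against; the volume frequency distribution approach reaches the same reference profile more cheaply by binning parcels in temperature–salinity space.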
Abstract:
The new Max-Planck-Institute Earth System Model (MPI-ESM) is used in the Coupled Model Intercomparison Project phase 5 (CMIP5) in a series of climate change experiments for either idealized CO2-only forcing or forcings based on observations and the Representative Concentration Pathway (RCP) scenarios. The paper gives an overview of the model configurations, experiments, related forcings, and initialization procedures and presents results for the simulated changes in climate and carbon cycle. It is found that the climate feedback depends on the global warming and possibly the forcing history. The global warming from climatological 1850 conditions to 2080–2100 ranges from 1.5°C under the RCP2.6 scenario to 4.4°C under the RCP8.5 scenario. Over this range, the patterns of temperature and precipitation change are nearly independent of the global warming. The model shows a tendency to reduce the ocean heat uptake efficiency toward a warmer climate, and hence an acceleration of warming in the later years. The precipitation sensitivity can be as high as 2.5% K−1 if the CO2 concentration is constant, or as small as 1.6% K−1 if the CO2 concentration is increasing. The oceanic uptake of anthropogenic carbon increases over time in all scenarios, being smallest in the experiment forced by RCP2.6 and largest in that for RCP8.5. The land also serves as a net carbon sink in all scenarios, predominantly in boreal regions. The strong tropical carbon sources found in the RCP2.6 and RCP8.5 experiments are almost absent in the RCP4.5 experiment, which can be explained by reforestation in the RCP4.5 scenario.
Abstract:
Neural stem cells (NSCs) are early precursors of neuronal and glial cells. NSCs are capable of generating identical progeny through virtually unlimited numbers of cell divisions (cell proliferation), producing daughter cells committed to differentiation. Nuclear factor kappa B (NF-kappaB) is an inducible, ubiquitous transcription factor also expressed in neurones, glia and neural stem cells. Recently, several pieces of evidence have been provided for a central role of NF-kappaB in NSC proliferation control. Here, we propose a novel mathematical model for NF-kappaB-driven proliferation of NSCs. We have been able to reconstruct the molecular pathway of activation and inactivation of NF-kappaB and its influence on cell proliferation by a system of nonlinear ordinary differential equations. We then use a combination of analytical and numerical techniques to study the model dynamics. The results obtained are illustrated by computer simulations and are, in general, in accordance with biological findings reported by several independent laboratories. The model is able to both explain and predict experimental data. Understanding of proliferation mechanisms in NSCs may provide a novel outlook for both potential therapeutic applications and basic research.
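The modelling style described above can be sketched generically. The two-variable system below is an illustrative placeholder, not the published model: it captures only the classic negative-feedback core of NF-kappaB signalling (NF-kappaB induces its inhibitor IkB, which in turn suppresses NF-kappaB activity), with invented rate constants, integrated by a hand-rolled Runge–Kutta scheme.

```python
import numpy as np

# Toy nonlinear ODE system: n = nuclear NF-kappaB activity, i = IkB inhibitor.
# Equations and rate constants are illustrative, not from the paper.

def rhs(y, k1=1.0, k2=0.5, d1=0.2, d2=0.3):
    n, i = y
    dn = k1 / (1.0 + i) - d1 * n   # activation, repressed by the inhibitor
    di = k2 * n - d2 * i           # inhibitor induced by NF-kappaB
    return np.array([dn, di])

def integrate(y0, t_end=50.0, dt=0.01):
    """Fixed-step 4th-order Runge-Kutta; returns the final state."""
    y = np.array(y0, dtype=float)
    for _ in range(int(t_end / dt)):
        s1 = rhs(y)
        s2 = rhs(y + 0.5 * dt * s1)
        s3 = rhs(y + 0.5 * dt * s2)
        s4 = rhs(y + dt * s3)
        y = y + (dt / 6.0) * (s1 + 2 * s2 + 2 * s3 + s4)
    return y
```

With these constants the system relaxes, via damped oscillations, to the stable steady state solving k1/(1+i) = d1·n and k2·n = d2·i; the combination of such analytical fixed-point arguments with numerical integration mirrors the analytical-plus-numerical approach the abstract describes.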
Abstract:
A ground source heat pump assisted by an array of photovoltaic (PV)-thermal modules was studied in this work. Extracting heat from an array of PV modules should improve the performance of both the PV cells and the heat pump. A series of computer simulations compare the performance of a ground source heat pump with a short ground circuit, used to provide space heating and domestic hot water at a house in southern England. The results indicate that extracting heat from an array of PV-thermal modules would improve the performance of a ground source heat pump with an undersized ground loop. Nevertheless, open air thermal collectors could be more effective, especially during winter. In one model more electricity was saved in ohmic heating than was generated by cooling the PV cells. Cooling the PV modules was found to increase their electrical output by up to 4%, but much of the extra electricity was consumed by the cooling pumps.
Abstract:
Climate models are potentially useful tools for addressing human dispersals and demographic change. The Arabian Peninsula is becoming increasingly significant in the story of human dispersals out of Africa during the Late Pleistocene. Although characterised largely by arid environments today, emerging climate records indicate that the peninsula was wetter many times in the past, suggesting that the region may have been inhabited considerably more than hitherto thought. Explaining the origins and spatial distribution of increased rainfall is challenging because palaeoenvironmental research in the region is in an early developmental stage. We address environmental oscillations by assembling and analysing an ensemble of five global climate models (CCSM3, COSMOS, HadCM3, KCM, and NorESM). We focus on precipitation, as this variable is key for the development of lakes, rivers and savannas. The climate models generated here were compared with published palaeoenvironmental data such as palaeolakes, speleothems and alluvial fan records as a means of validation. All five models showed, to varying degrees, that the Arabian Peninsula was significantly wetter than today during the Last Interglacial (130 ka and 126/125 ka timeslices), and that the main source of increased rainfall was from the North African summer monsoon rather than the Indian Ocean monsoon or from Mediterranean climate patterns. Where available, 104 ka (MIS 5c), 56 ka (early MIS 3) and 21 ka (LGM) timeslices showed rainfall was present but not as extensive as during the Last Interglacial. The results favour the hypothesis that humans potentially moved out of Africa and into Arabia on multiple occasions during pluvial phases of the Late Pleistocene.
Abstract:
An efficient data-based modeling algorithm for nonlinear system identification is introduced for radial basis function (RBF) neural networks with the aim of maximizing generalization capability based on the concept of leave-one-out (LOO) cross validation. Each of the RBF kernels has its own kernel width parameter and the basic idea is to optimize the multiple pairs of regularization parameters and kernel widths, each of which is associated with a kernel, one at a time within the orthogonal forward regression (OFR) procedure. Thus, each OFR step consists of one model term selection based on the LOO mean square error (LOOMSE), followed by the optimization of the associated kernel width and regularization parameter, also based on the LOOMSE. Since the same LOOMSE is adopted for model selection as in our previous state-of-the-art local regularization assisted orthogonal least squares (LROLS) algorithm, our proposed new OFR algorithm is also capable of producing a very sparse RBF model with excellent generalization performance. Unlike our previous LROLS algorithm, which requires an additional iterative loop to optimize the regularization parameters as well as an additional procedure to optimize the kernel width, the proposed new OFR algorithm optimizes both the kernel widths and regularization parameters within the single OFR procedure, and consequently the required computational complexity is dramatically reduced. Nonlinear system identification examples are included to demonstrate the effectiveness of this new approach in comparison to the well-known approaches of support vector machine and least absolute shrinkage and selection operator as well as the LROLS algorithm.
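The LOOMSE criterion at the heart of the procedure above has a well-known closed form for regularised linear-in-the-parameters models, which avoids refitting n times: with hat matrix H = P(PᵀP + λI)⁻¹Pᵀ, the leave-one-out residual at point i is eᵢ/(1 − Hᵢᵢ). The sketch below applies that identity to a full Gaussian RBF model to score (width, λ) pairs; it illustrates the selection criterion only, not the paper's orthogonal forward regression procedure.

```python
import numpy as np

def loomse(x, y, width, lam):
    """Leave-one-out mean square error of a Gaussian RBF ridge model."""
    # Design matrix: one Gaussian kernel centred on each training point.
    P = np.exp(-((x[:, None] - x[None, :]) ** 2) / (2.0 * width ** 2))
    n = len(x)
    H = P @ np.linalg.solve(P.T @ P + lam * np.eye(n), P.T)  # hat matrix
    resid = y - H @ y
    # Closed-form LOO residuals: e_i / (1 - H_ii), no refitting needed.
    return np.mean((resid / (1.0 - np.diag(H))) ** 2)

# Score a small grid of kernel widths and regularisers on toy data.
rng = np.random.default_rng(1)
x = np.linspace(-3.0, 3.0, 40)
y = np.sin(x) + 0.1 * rng.standard_normal(40)
best = min(((loomse(x, y, w, l), w, l)
            for w in (0.2, 0.5, 1.0) for l in (1e-4, 1e-2)),
           key=lambda t: t[0])
```

Overfitted combinations punish themselves automatically here: as Hᵢᵢ approaches 1 the LOO residuals blow up, so the grid search favours genuinely generalising settings, which is exactly the behaviour the OFR procedure exploits term by term.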
Abstract:
A practical orthogonal frequency-division multiplexing (OFDM) system can generally be modelled by the Hammerstein system that includes the nonlinear distortion effects of the high power amplifier (HPA) at the transmitter. In this contribution, we advocate a novel nonlinear equalization scheme for OFDM Hammerstein systems. We model the nonlinear HPA, which represents the static nonlinearity of the OFDM Hammerstein channel, by a B-spline neural network, and we develop a highly effective alternating least squares algorithm for estimating the parameters of the OFDM Hammerstein channel, including channel impulse response coefficients and the parameters of the B-spline model. Moreover, we use another B-spline neural network to model the inversion of the HPA’s nonlinearity, and the parameters of this inverting B-spline model can easily be estimated using the standard least squares algorithm based on the pseudo training data obtained as a byproduct of the Hammerstein channel identification. Equalization of the OFDM Hammerstein channel can then be accomplished by the usual one-tap linear equalization together with the inverse B-spline neural network model obtained. The effectiveness of our nonlinear equalization scheme for OFDM Hammerstein channels is demonstrated by simulation results.
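The alternating least squares idea above can be sketched for a real-valued Hammerstein channel y(t) = Σₖ h[k]·f(x(t−k)), where f is the static HPA nonlinearity. For brevity the sketch expands f in a polynomial basis rather than the paper's B-splines; the alternation is the same: fix the basis weights and solve for the channel taps h, then fix h and solve for the weights, repeating to convergence.

```python
import numpy as np

def hammerstein_als(x, y, n_taps=3, degree=3, n_iter=20):
    """Alternating LS identification of a toy Hammerstein channel."""
    B = np.vander(x, degree + 1, increasing=True)  # polynomial basis of x
    w = np.zeros(degree + 1); w[1] = 1.0           # start from f(x) = x
    T = len(x)
    h = np.zeros(n_taps)
    for _ in range(n_iter):
        u = B @ w                                  # current nonlinearity output
        # Rows t = n_taps-1 .. T-1 of the convolution: [u[t], u[t-1], ...]
        U = np.column_stack([u[n_taps - 1 - k: T - k] for k in range(n_taps)])
        yt = y[n_taps - 1:]
        h, *_ = np.linalg.lstsq(U, yt, rcond=None)     # taps, given f
        Z = sum(h[k] * B[n_taps - 1 - k: T - k] for k in range(n_taps))
        w, *_ = np.linalg.lstsq(Z, yt, rcond=None)     # weights, given taps
    return h, w

# Identify a toy noiseless channel h = [1, 0.5, 0.2] with f(x) = x + 0.2 x^3.
rng = np.random.default_rng(0)
x = rng.uniform(-1.0, 1.0, 400)
y = np.convolve(x + 0.2 * x ** 3, [1.0, 0.5, 0.2])[:400]
h, w = hammerstein_als(x, y)
u = np.vander(x, 4, increasing=True) @ w
rel_err = (np.linalg.norm(np.convolve(u, h)[:400][2:] - y[2:])
           / np.linalg.norm(y[2:]))
```

Note the usual bilinear scale ambiguity: (c·h, w/c) fits equally well, so the quality measure is the reconstruction error rather than the raw parameter values; the B-spline formulation in the paper shares this structure.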