54 results for First order autoregressive model AR(1)
Abstract:
Accurate decadal climate predictions could be used to inform adaptation actions to a changing climate. The skill of such predictions from initialised dynamical global climate models (GCMs) may be assessed by comparing them with predictions from statistical models based solely on historical observations. This paper presents two benchmark statistical models for predicting both the radiatively forced trend and the internal variability of annual mean sea surface temperatures (SSTs) on a decadal timescale, based on the gridded observation data set HadISST. For both statistical models, the trend related to radiative forcing is modelled by a linear regression of the SST time series at each grid box on the time series of equivalent global mean atmospheric CO2 concentration. The residual internal variability is then modelled by (1) a first-order autoregressive model (AR1) and (2) a constructed analogue model (CA). From the verification of 46 retrospective forecasts with start years from 1960 to 2005, the correlation coefficient for anomaly forecasts using trend with AR1 is greater than 0.7 over parts of the extra-tropical North Atlantic, the Indian Ocean and the western Pacific. This is primarily related to the prediction of the forced trend. More importantly, both CA and AR1 give skillful predictions of the internal variability of SSTs in the subpolar gyre region of the far North Atlantic for lead times of 2 to 5 years, with correlation coefficients greater than 0.5. For the subpolar gyre and parts of the South Atlantic, CA is superior to AR1 for lead times of 6 to 9 years. These statistical forecasts are also compared with ensemble mean retrospective forecasts by DePreSys, an initialised GCM.
DePreSys is found to outperform the statistical models over large parts of the North Atlantic for lead times of 2 to 5 years and 6 to 9 years. However, trend with AR1 is generally superior to DePreSys in the North Atlantic Current region, while trend with CA is superior to DePreSys in parts of the South Atlantic for lead times of 6 to 9 years. These findings encourage further development of benchmark statistical decadal prediction models, and of methods to combine different predictions.
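The trend-plus-AR1 benchmark described above can be sketched in a few lines. Everything below (the synthetic SST series, the CO2 ramp, and all coefficients) is illustrative, not taken from the paper; only the two-step structure — regress out the forced trend, then fit an AR(1) to the residual and damp it forward — follows the abstract:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for a grid-box SST series: a linear CO2-forced trend
# plus AR(1) internal variability (phi and sigma are illustrative).
years = np.arange(1960, 2006)
co2 = 315.0 + 1.5 * (years - years[0])   # toy CO2-equivalent concentration
noise = np.zeros(years.size)
for t in range(1, years.size):
    noise[t] = 0.6 * noise[t - 1] + rng.normal(0.0, 0.15)
sst = 14.0 + 0.01 * co2 + noise

# Step 1: regress SST on CO2 to estimate the forced trend.
slope, intercept = np.polyfit(co2, sst, 1)
residual = sst - (intercept + slope * co2)

# Step 2: fit AR(1) to the residual via lag-1 regression.
phi_hat = residual[1:] @ residual[:-1] / (residual[:-1] @ residual[:-1])

# Forecast for lead times 1-9 years: extrapolated trend plus the last
# observed residual damped by phi_hat**lead.
lead = np.arange(1, 10)
co2_future = co2[-1] + 1.5 * lead
forecast = intercept + slope * co2_future + residual[-1] * phi_hat ** lead
```

The AR(1) contribution decays geometrically with lead time, which is why this benchmark's skill beyond the forced trend is concentrated at short leads.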
Abstract:
Geophysical time series sometimes exhibit serial correlations that are stronger than can be captured by the commonly used first‐order autoregressive model. In this study we demonstrate that a power law statistical model serves as a useful upper bound for the persistence of total ozone anomalies on monthly to interannual timescales. Such a model is usually characterized by the Hurst exponent. We show that the estimation of the Hurst exponent in time series of total ozone is sensitive to various choices made in the statistical analysis, especially whether and how the deterministic (including periodic) signals are filtered from the time series, and the frequency range over which the estimation is made. In particular, care must be taken to ensure that the estimate of the Hurst exponent accurately represents the low‐frequency limit of the spectrum, which is the part that is relevant to long‐term correlations and the uncertainty of estimated trends. Otherwise, spurious results can be obtained. Based on this analysis, and using an updated equivalent effective stratospheric chlorine (EESC) function, we predict that an increase in total ozone attributable to EESC should be detectable at the 95% confidence level by 2015 at the latest in southern midlatitudes, and by 2020–2025 at the latest over 30°–45°N, with the time to detection increasing rapidly with latitude north of this range.
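A minimal sketch of the spectral route to the Hurst exponent discussed above, assuming the series has already been detrended and deseasonalised as the abstract stresses. The white-noise input and the 10% low-frequency cutoff are illustrative choices, not the paper's settings:

```python
import numpy as np

rng = np.random.default_rng(1)

def hurst_from_spectrum(x, low_frac=0.1):
    """Estimate H from the low-frequency slope of the periodogram.

    For stationary long-memory noise the spectrum behaves as
    S(f) ~ f**(1 - 2H), so H = (1 - slope) / 2. The fit is restricted
    to low frequencies because, as noted above, only the low-frequency
    limit of the spectrum is relevant to long-term correlations.
    """
    x = np.asarray(x, float) - np.mean(x)
    f = np.fft.rfftfreq(x.size)[1:]           # drop the zero frequency
    p = np.abs(np.fft.rfft(x))[1:] ** 2       # periodogram
    n = max(int(low_frac * f.size), 8)
    slope = np.polyfit(np.log(f[:n]), np.log(p[:n]), 1)[0]
    return (1.0 - slope) / 2.0

# White noise has a flat spectrum (slope ~ 0), so H should be near 0.5.
h = hurst_from_spectrum(rng.normal(size=8192))
```

Changing `low_frac` illustrates the abstract's warning: fitting over the wrong frequency range can produce spurious Hurst exponents.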
First order k-th moment finite element analysis of nonlinear operator equations with stochastic data
Abstract:
We develop and analyze a class of efficient Galerkin approximation methods for uncertainty quantification of nonlinear operator equations. The algorithms are based on sparse Galerkin discretizations of tensorized linearizations at nominal parameters. Specifically, we consider abstract, nonlinear, parametric operator equations J(α, u) = 0 for random input α(ω) with almost sure realizations in a neighborhood of a nominal input parameter α₀. Under some structural assumptions on the parameter dependence, we prove existence and uniqueness of a random solution u(ω) = S(α(ω)). We derive a multilinear, tensorized operator equation for the deterministic computation of k-th order statistical moments of the random solution's fluctuations u(ω) − S(α₀). We introduce and analyze sparse tensor Galerkin discretization schemes for the efficient, deterministic computation of the k-th statistical moment equation. We prove a shift theorem for the k-point correlation equation in anisotropic smoothness scales and deduce that sparse tensor Galerkin discretizations of this equation converge in accuracy vs. complexity which equals, up to logarithmic terms, that of the Galerkin discretization of a single instance of the mean field problem. We illustrate the abstract theory for nonstationary diffusion problems in random domains.
Abstract:
The frequency of persistent atmospheric blocking events in the 40-yr ECMWF Re-Analysis (ERA-40) is compared with the blocking frequency produced by a simple first-order Markov model designed to predict the time evolution of a blocking index [defined by the meridional contrast of potential temperature on the 2-PVU surface (1 PVU ≡ 1 × 10⁻⁶ K m² kg⁻¹ s⁻¹)]. With the observed spatial coherence built into the model, it is able to reproduce the main regions of blocking occurrence and the frequencies of sector blocking very well. This underlines the importance of the climatological background flow in determining the locations of high blocking occurrence as being the regions where the mean midlatitude meridional potential vorticity (PV) gradient is weak. However, when only persistent blocking episodes are considered, the model is unable to simulate the observed frequencies. It is proposed that this persistence beyond that given by a red noise model is due to the self-sustaining nature of the blocking phenomenon.
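A red-noise (first-order Markov) stand-in for a blocking index can be simulated directly to see how rare long exceedance episodes are under such a model. The persistence parameter and threshold below are illustrative, not fitted to ERA-40:

```python
import numpy as np

rng = np.random.default_rng(2)

def episode_lengths(index, threshold):
    """Lengths of consecutive runs where the index exceeds the threshold."""
    lengths, run = [], 0
    for above in index > threshold:
        if above:
            run += 1
        elif run:
            lengths.append(run)
            run = 0
    if run:
        lengths.append(run)
    return np.array(lengths)

# Red-noise (AR(1) / first-order Markov) index; phi is illustrative.
phi, n = 0.8, 100_000
e = rng.normal(size=n)
x = np.zeros(n)
for t in range(1, n):
    x[t] = phi * x[t - 1] + e[t]

lengths = episode_lengths(x, threshold=1.0)
short = np.mean(lengths >= 2)    # fraction of episodes lasting 2+ steps
long_ = np.mean(lengths >= 10)   # persistent episodes are far rarer
```

Under red noise, episode frequency decays roughly geometrically with duration, which is consistent with the finding above that a first-order Markov model underpredicts the frequency of the most persistent blocking episodes.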
Abstract:
Topography influences many aspects of forest-atmosphere carbon exchange; yet only a small number of studies have considered the role of topography on the structure of turbulence within and above vegetation and its effect on canopy photosynthesis and the measurement of net ecosystem exchange of CO2 (NEE) using flux towers. Here, we focus on the interplay between radiative transfer, flow dynamics for neutral stratification, and ecophysiological controls on CO2 sources and sinks within a canopy on a gentle cosine hill. We examine how topography alters the forest-atmosphere CO2 exchange rate when compared to uniform flat terrain using a newly developed first-order closure model that explicitly accounts for the flow dynamics, radiative transfer, and nonlinear ecophysiological processes within a plant canopy. We show that variation in radiation and airflow due to topography causes only a minor departure in horizontally averaged and vertically integrated photosynthesis from their flat terrain values. However, topography perturbs the airflow and concentration fields in and above plant canopies, leading to significant horizontal and vertical advection of CO2. Advection terms in the conservation equation may be neglected in flow over homogeneous, flat terrain, and then NEE = F_c, the vertical turbulent flux of CO2. Model results suggest that vertical and horizontal advection terms are generally of opposite sign and of the same order as the biological sources and sinks. We show that, close to the hilltop, F_c departs by a factor of three compared to its flat terrain counterpart and that the horizontally averaged F_c at canopy top differs by more than 20% compared to the flat-terrain case.
Abstract:
The combined effect of pressure and temperature on the rate of gelatinisation of starch present in Thai glutinous rice was investigated. Pressure was found to initiate gelatinisation when its value exceeded 200 MPa at ambient temperature. On the other hand, complete gelatinisation was observed at 500 and 600 MPa at 70 °C, when the rice was soaked in water under these conditions for 120 min. A first-order kinetic model describing the rate of gelatinisation was developed to estimate the values of the rate constants as a function of pressure and temperature in the range 0.1–600 MPa and 20–70 °C. The model, based on the well-known Arrhenius and Eyring equations, assumed the form [GRAPHICS] The constants k₀, E_a, and ΔV were found to take values of 31.19 s⁻¹, 37.89 kJ mol⁻¹ and −9.98 cm³ mol⁻¹, respectively. It was further noted that the extent of gelatinisation occurring at any time, temperature and pressure could be exclusively correlated with the grain moisture content.
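The [GRAPHICS] placeholder stands for the combined rate expression, which cannot be recovered exactly from this listing. Assuming the standard combination of the Arrhenius (temperature) and Eyring (pressure) dependences named in the abstract, with a hypothetical reference pressure P_ref, the rate constant likely takes a form such as:

```latex
k(T, P) = k_0 \,\exp\!\left(-\frac{E_a}{R T}\right)\exp\!\left(-\frac{\Delta V \,(P - P_{\mathrm{ref}})}{R T}\right)
```

Here the negative ΔV reported above would make the rate increase with pressure, consistent with pressure initiating gelatinisation.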
Abstract:
Current methods for estimating vegetation parameters are generally sub-optimal in the way they exploit information and do not generally consider uncertainties. We look forward to a future where operational data assimilation schemes improve estimates by tracking land surface processes and exploiting multiple types of observations. Data assimilation schemes seek to combine observations and models in a statistically optimal way, taking into account uncertainty in both, but have not yet been much exploited in this area. The EO-LDAS scheme and prototype, developed under ESA funding, is designed to exploit the anticipated wealth of data that will be available under GMES missions, such as the Sentinel family of satellites, to provide improved mapping of land surface biophysical parameters. This paper describes the EO-LDAS implementation, and explores some of its core functionality. EO-LDAS is a weak constraint variational data assimilation system. The prototype provides a mechanism for constraint based on a prior estimate of the state vector, a linear dynamic model, and Earth Observation data (top-of-canopy reflectance here). The observation operator is a non-linear optical radiative transfer model for a vegetation canopy with a soil lower boundary, operating over the range 400 to 2500 nm. Adjoint codes for all model and operator components are provided in the prototype by automatic differentiation of the computer codes. In this paper, EO-LDAS is applied to the problem of daily estimation of six of the parameters controlling the radiative transfer operator over the course of a year (> 2000 state vector elements). Zero- and first-order process model constraints are implemented and explored as the dynamic model. The assimilation estimates all state vector elements simultaneously.
This is performed in the context of a typical Sentinel-2 MSI operating scenario, using synthetic MSI observations simulated with the observation operator, with uncertainties typical of those achieved by the optical sensors assumed for the data. The experiments consider a baseline state vector estimation case where dynamic constraints are applied, and assess the impact of dynamic constraints on the a posteriori uncertainties. The results demonstrate that reductions in uncertainty by a factor of up to two might be obtained by applying the sorts of dynamic constraints used here. The hyperparameter (dynamic model uncertainty) required to control the assimilation is estimated by a cross-validation exercise. The result of the assimilation is seen to be robust to missing observations, even with quite large data gaps.
Abstract:
We investigate Fréchet differentiability of the scattered field with respect to variation in the boundary in the case of time-harmonic acoustic scattering by an unbounded, sound-soft, one-dimensional rough surface. We rigorously prove the differentiability of the scattered field and derive a characterization of the Fréchet derivative as the solution to a Dirichlet boundary value problem. As an application of these results we give rigorous error estimates for first-order perturbation theory, justifying small perturbation methods that have a long history in the engineering literature. As an application of our rigorous estimates we show that a plane acoustic wave incident on a sound-soft rough surface can produce an unbounded scattered field.
Abstract:
We present an efficient method of combining wide angle neutron scattering data with detailed atomistic models, allowing us to perform a quantitative and qualitative mapping of the organisation of the chain conformation in both glass and liquid phases. The structural refinement method presented in this work is based on the exploitation of the intrachain features of the diffraction pattern and its intimate linkage with atomistic models by the use of internal coordinates for bond lengths, valence angles and torsion rotations. Atomic connectivity is defined through these coordinates, which are in turn assigned by pre-defined probability distributions, thus allowing the models in question to be built stochastically. Incremental variation of these coordinates allows for the construction of models that minimise the differences between the observed and calculated structure factors. We present a series of neutron scattering data of 1,2-polybutadiene over the temperature range 120–400 K. Analysis of the experimental data yields bond lengths for C-C and C=C of 1.54 Å and 1.35 Å, respectively. Valence angles of the backbone were found to be at 112° and the torsion distributions are characterised by five rotational states, a three-fold trans-skew± for the backbone and gauche± for the vinyl group. Rotational states of the vinyl group were found to be equally populated, indicating a largely atactic chain. The two backbone torsion angles exhibit different behaviour with respect to temperature of their trans population, with one of them adopting an almost all-trans sequence. Consequently the resulting configuration leads to a rather persistent chain, as indicated by the value of the characteristic ratio extrapolated from the model. We compare our results with theoretical predictions, computer simulations, RIS models and previously reported experimental results.
Abstract:
The general circulation models used to simulate global climate typically feature resolution too coarse to reproduce many smaller-scale processes, which are crucial to determining the regional responses to climate change. A novel approach to downscale climate change scenarios is presented which includes the interactions between the North Atlantic Ocean and the European shelves as well as their impact on the North Atlantic and European climate. The goal of this paper is to introduce the global ocean-regional atmosphere coupling concept and to show the potential benefits of this model system to simulate present-day climate. A global ocean-sea ice-marine biogeochemistry model (MPIOM/HAMOCC) with regionally high horizontal resolution is coupled to an atmospheric regional model (REMO) and a global terrestrial hydrology model (HD) via the OASIS coupler. Moreover, results obtained with ROM using NCEP/NCAR reanalysis and ECHAM5/MPIOM CMIP3 historical simulations as boundary conditions are presented and discussed for the North Atlantic and North European region. The validation of all the model components, i.e., ocean, atmosphere, terrestrial hydrology, and ocean biogeochemistry, is performed and discussed. The careful and detailed validation of ROM provides evidence that the proposed model system improves the simulation of many aspects of the regional climate, particularly the ocean, even though some biases persist in other model components, thus leaving potential for future improvement. We conclude that ROM is a powerful tool to estimate possible impacts of climate change on the regional scale.
Abstract:
Simple first-order closure remains an attractive way of formulating equations for complex canopy flows when the aim is to find analytic or simple numerical solutions to illustrate fundamental physical processes. Nevertheless, the limitations of such closures must be understood if the resulting models are to illuminate rather than mislead. We propose five conditions that first-order closures must satisfy, then test two widely used closures against them. The first is the eddy diffusivity based on a mixing length. We discuss the origins of this approach, its use in simple canopy flows and extensions to more complex flows. We find that it satisfies most of the conditions and, because the reasons for its failures are well understood, it is a reliable methodology. The second is the velocity-squared closure that relates shear stress to the square of mean velocity. Again we discuss the origins of this closure and show that it is based on incorrect physical principles and fails to satisfy any of the five conditions in complex canopy flows; consequently its use can lead to actively misleading conclusions.
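The two closures contrasted above can be written side by side in a few lines. The velocity profile, mixing length and drag coefficient below are illustrative stand-ins, not values from the paper:

```python
import numpy as np

# Two first-order closures for shear stress in a canopy flow.
z = np.linspace(0.1, 2.0, 50)    # height, normalised by canopy height
U = np.log(1.0 + 5.0 * z)        # toy mean velocity profile (monotonic)
dUdz = np.gradient(U, z)         # mean shear

# (1) Mixing-length eddy diffusivity: K = l^2 |dU/dz|, tau = K dU/dz.
# The stress follows the local mean-velocity gradient.
l = 0.1                          # mixing length (illustrative)
tau_mixing_length = l**2 * np.abs(dUdz) * dUdz

# (2) Velocity-squared closure: tau = c U^2. The stress depends only on
# the local mean speed, not its gradient -- the formulation the paper
# shows rests on incorrect physical principles in complex canopy flows.
c = 0.01
tau_velocity_sq = c * U**2
```

The contrast is visible immediately: where the shear vanishes the mixing-length stress vanishes too, while the velocity-squared closure still predicts nonzero stress, one symptom of its failure in non-trivial flows.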
Abstract:
If stock and stock index futures markets are functioning properly, price movements in these markets should best be described by a first-order vector error correction model with the error correction term being the price differential between the two markets (the basis). Recent evidence suggests that there are more dynamics present than should be in effectively functioning markets. Using self-exciting threshold autoregressive (SETAR) models, this study analyses whether such dynamics can be related to different regimes within which the basis can fluctuate in a predictable manner without triggering arbitrage. These findings reveal that the basis shows strong evidence of autoregressive behaviour when its value is between the two thresholds, but that the extra dynamics disappear once the basis moves above the upper threshold, and their persistence is reduced, although not eradicated, once the basis moves below the lower threshold. This suggests that once nonlinearity associated with transactions costs is accounted for, stock and stock index futures markets function more effectively than is suggested by linear models of the pricing relationship.
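The regime structure described above can be illustrated with a toy SETAR simulation. The thresholds, persistence parameters and noise level are illustrative, not estimates from the study:

```python
import numpy as np

rng = np.random.default_rng(3)

def setar_step(b, lower, upper, phi_mid, phi_out, sigma):
    """One step of a toy SETAR model for the basis b.

    Inside the no-arbitrage band (lower, upper) the basis follows a
    persistent AR(1); outside the band, arbitrage trades push it back,
    modelled here as much weaker persistence.
    """
    phi = phi_mid if lower < b < upper else phi_out
    return phi * b + rng.normal(0.0, sigma)

b = np.zeros(1000)
for t in range(1, b.size):
    b[t] = setar_step(b[t - 1], lower=-0.5, upper=0.5,
                      phi_mid=0.9, phi_out=0.3, sigma=0.1)
```

A fully symmetric two-regime model like this one is a simplification: the study finds asymmetric behaviour, with persistence eliminated above the upper threshold but only reduced below the lower one, which would require different `phi_out` values per side.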
Abstract:
The effect of temperature on the degradation of blackcurrant anthocyanins in a model juice system was determined over a temperature range of 4–140 °C. The thermal degradation of anthocyanins followed pseudo first-order kinetics. From 4 to 100 °C an isothermal method was used to determine the kinetic parameters. In order to mimic the temperature profile in retort systems, a non-isothermal method was applied to determine the kinetic parameters in the model juice over the temperature range 110–140 °C. The results from both isothermal and non-isothermal methods fit well together, indicating that the non-isothermal procedure is a reliable mathematical method to determine the kinetics of anthocyanin degradation. The reaction rate constant (k) increased from 0.16 (±0.01) × 10⁻³ h⁻¹ at 4 °C to 9.954 (±0.004) h⁻¹ at 140 °C. The temperature dependence of the rate of anthocyanin degradation was modelled by an extension of the Arrhenius equation, which showed a linear increase in the activation energy with temperature.
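The isothermal workflow described above — fit a first-order decay at each temperature, then regress ln k against 1/T — can be sketched as follows. The synthetic rate constants use a plain Arrhenius law with a fixed activation energy for simplicity (the paper's extension allows E_a to vary with temperature), and all numbers are illustrative:

```python
import numpy as np

R = 8.314  # gas constant, J mol^-1 K^-1

def first_order(C0, k, t):
    """Pseudo first-order decay: C(t) = C0 * exp(-k t)."""
    return C0 * np.exp(-k * t)

# Synthetic isothermal decay curves at several temperatures (K).
temps = np.array([277.0, 323.0, 373.0])
k_true = 1e6 * np.exp(-45_000.0 / (R * temps))   # toy Arrhenius law
t = np.linspace(0.0, 50.0, 20)

# Step 1: recover k at each temperature. For first-order kinetics
# ln C is linear in t with slope -k.
k_fit = np.array([-np.polyfit(t, np.log(first_order(1.0, k, t)), 1)[0]
                  for k in k_true])

# Step 2: Arrhenius plot -- ln k against 1/T has slope -Ea/R.
Ea = -np.polyfit(1.0 / temps, np.log(k_fit), 1)[0] * R
```

With noise-free synthetic data the fit returns the activation energy used to generate it; with real concentration measurements, the same two regressions give the estimates, with scatter.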