93 results for Eigenvalue Bounds
Abstract:
Black carbon aerosol plays a unique and important role in Earth’s climate system. Black carbon is a type of carbonaceous material with a unique combination of physical properties. This assessment provides an evaluation of black-carbon climate forcing that is comprehensive in its inclusion of all known and relevant processes and that is quantitative in providing best estimates and uncertainties of the main forcing terms: direct solar absorption; influence on liquid, mixed phase, and ice clouds; and deposition on snow and ice. These effects are calculated with climate models, but when possible, they are evaluated with both microphysical measurements and field observations. Predominant sources are combustion related, namely, fossil fuels for transportation, solid fuels for industrial and residential uses, and open burning of biomass. Total global emissions of black carbon using bottom-up inventory methods are 7500 Gg yr⁻¹ in the year 2000 with an uncertainty range of 2000 to 29000. However, global atmospheric absorption attributable to black carbon is too low in many models and should be increased by a factor of almost 3. After this scaling, the best estimate for the industrial-era (1750 to 2005) direct radiative forcing of atmospheric black carbon is +0.71 W m⁻² with 90% uncertainty bounds of (+0.08, +1.27) W m⁻². Total direct forcing by all black carbon sources, without subtracting the preindustrial background, is estimated as +0.88 (+0.17, +1.48) W m⁻². Direct radiative forcing alone does not capture important rapid adjustment mechanisms. A framework is described and used for quantifying climate forcings, including rapid adjustments. The best estimate of industrial-era climate forcing of black carbon through all forcing mechanisms, including clouds and cryosphere forcing, is +1.1 W m⁻² with 90% uncertainty bounds of +0.17 to +2.1 W m⁻². Thus, there is a very high probability that black carbon emissions, independent of co-emitted species, have a positive forcing and warm the climate. We estimate that black carbon, with a total climate forcing of +1.1 W m⁻², is the second most important human emission in terms of its climate forcing in the present-day atmosphere; only carbon dioxide is estimated to have a greater forcing. Sources that emit black carbon also emit other short-lived species that may either cool or warm climate. Climate forcings from co-emitted species are estimated and used in the framework described herein. When the principal effects of short-lived co-emissions, including cooling agents such as sulfur dioxide, are included in net forcing, energy-related sources (fossil fuel and biofuel) have an industrial-era climate forcing of +0.22 (−0.50 to +1.08) W m⁻² during the first year after emission. For a few of these sources, such as diesel engines and possibly residential biofuels, warming is strong enough that eliminating all short-lived emissions from these sources would reduce net climate forcing (i.e., produce cooling). When open burning emissions, which emit high levels of organic matter, are included in the total, the best estimate of net industrial-era climate forcing by all short-lived species from black-carbon-rich sources becomes slightly negative (−0.06 W m⁻² with 90% uncertainty bounds of −1.45 to +1.29 W m⁻²). The uncertainties in net climate forcing from black-carbon-rich sources are substantial, largely due to lack of knowledge about cloud interactions with both black carbon and co-emitted organic carbon.
In prioritizing potential black-carbon mitigation actions, non-science factors, such as technical feasibility, costs, policy design, and implementation feasibility, play important roles. The major sources of black carbon are presently at different stages with regard to the feasibility of near-term mitigation. This assessment, by evaluating the large number and complexity of the associated physical and radiative processes in black-carbon climate forcing, sets a baseline from which to improve future climate forcing estimates.
Abstract:
Consider the massless Dirac operator on a 3-torus equipped with Euclidean metric and standard spin structure. It is known that the eigenvalues can be calculated explicitly: the spectrum is symmetric about zero and zero itself is a double eigenvalue. The aim of the paper is to develop a perturbation theory for the eigenvalue with smallest modulus with respect to perturbations of the metric. Here the application of perturbation techniques is hindered by the fact that eigenvalues of the massless Dirac operator have even multiplicity, which is a consequence of this operator commuting with the antilinear operator of charge conjugation (a peculiar feature of dimension 3). We derive an asymptotic formula for the eigenvalue with smallest modulus for arbitrary perturbations of the metric and present two particular families of Riemannian metrics for which the eigenvalue with smallest modulus can be evaluated explicitly. We also establish a relation between our asymptotic formula and the eta invariant.
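For orientation, the unperturbed spectrum referred to above can be written down explicitly. The following is a brief sketch under the usual normalisation in which the flat 3-torus has side length 2π; the normalisation is an assumption here, not taken from the abstract.

```latex
% Massless Dirac operator W on the flat 3-torus T^3 = R^3/(2\pi Z)^3 with the
% standard spin structure: plane-wave spinors u e^{i m \cdot x}, m \in Z^3,
% diagonalise W, giving the explicit spectrum
\[
  \operatorname{spec}(W) \;=\; \bigl\{\, \pm\lvert m\rvert \;:\; m \in \mathbb{Z}^3 \,\bigr\}.
\]
% Each eigenvalue has even multiplicity (a consequence of charge conjugation
% symmetry); in particular the eigenvalue 0, attained only at m = 0, is a
% double eigenvalue spanned by the two constant spinors.
```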
Abstract:
The semi-distributed, dynamic INCA-N model was used to simulate the behaviour of dissolved inorganic nitrogen (DIN) in two Finnish research catchments. Parameter sensitivity and model structural uncertainty were analysed using generalized sensitivity analysis. The Mustajoki catchment is a forested upstream catchment, while the Savijoki catchment represents intensively cultivated lowlands. In general, there were more influential parameters in Savijoki than in Mustajoki. Model results were sensitive to N-transformation rates, vegetation dynamics, and soil and river hydrology. Values of the sensitive parameters were based on long-term measurements covering both warm and cold years. The highest measured DIN concentrations fell between the minimum and maximum values estimated during the uncertainty analysis. The lowest measured concentrations fell outside these bounds, suggesting that some retention processes may be missing from the current model structure. The lowest concentrations occurred mainly during low-flow periods, so the effects on total loads were small.
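As a rough illustration of the generalized sensitivity analysis referred to above, the sketch below applies a Hornberger–Spear-style regional sensitivity analysis to a toy model. The parameter names, ranges, model, and behavioural threshold are hypothetical and are not the INCA-N setup.

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)
n_samples = 5000

# Sample two illustrative parameters from uniform priors (assumed ranges).
denitrification_rate = rng.uniform(0.0, 0.5, n_samples)   # d^-1
plant_uptake_rate    = rng.uniform(0.0, 0.2, n_samples)   # d^-1

def toy_model(k_denit, k_uptake):
    """Stand-in for a model run: returns a scalar performance score."""
    # In practice this would be a goodness-of-fit statistic against DIN data.
    return -((k_denit - 0.2) ** 2 + (k_uptake - 0.05) ** 2)

scores = toy_model(denitrification_rate, plant_uptake_rate)

# Split runs into behavioural / non-behavioural by a score threshold.
behavioural = scores > np.quantile(scores, 0.7)

# A parameter is deemed influential if the behavioural and non-behavioural
# distributions of its sampled values differ (large Kolmogorov-Smirnov statistic).
for name, values in [("denitrification_rate", denitrification_rate),
                     ("plant_uptake_rate", plant_uptake_rate)]:
    ks = ks_2samp(values[behavioural], values[~behavioural]).statistic
    print(f"{name}: KS = {ks:.3f}")
```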
Abstract:
The stability of stationary flow of a two-dimensional ice sheet is studied when the ice obeys a power flow law (Glen's flow law). The mass accumulation rate at the top is assumed to depend on elevation and span, and the bed supporting the ice sheet consists of an elastic layer lying on a rigid surface. The normal perturbation of the free surface of the ice sheet leads to a singular eigenvalue problem. The singularity of the perturbation at the front of the ice sheet is treated using matched asymptotic expansions, and the eigenvalue problem is shown to reduce to one with a fixed ice front. Numerical solution of the perturbation eigenvalue problem shows that the dependence of the accumulation rate on elevation permits the existence of unstable solutions when the equilibrium line is higher than the bed at the ice divide. Conversely, when the equilibrium line is lower than the bed, there are only stable solutions. Softening of the bed, expressed through a decrease of its elastic modulus, has a stabilising effect on the ice sheet.
Abstract:
Speculative bubbles are generated when investors include the expectation of the future price in their information set. Under these conditions, the actual market price of the security, which is set according to demand and supply, will be a function of the future price and vice versa. In the presence of speculative bubbles, positive expected bubble returns will lead to increased demand and will thus force prices to diverge from their fundamental value. This paper investigates whether the prices of UK equity-traded property stocks over the past 15 years contain evidence of a speculative bubble. The analysis draws upon the methodologies adopted in various studies examining price bubbles in the general stock market. Fundamental values are generated using two models: the dividend discount model and the Gordon growth model. Variance bounds tests are then applied to test for bubbles in UK property asset prices. Finally, cointegration analysis is conducted to provide further evidence on the presence of bubbles. Evidence of the existence of bubbles is found, although these appear to be transitory and concentrated in the mid-to-late 1990s.
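As a minimal illustration of the fundamental-value and variance bounds ideas above, the sketch below computes Gordon growth fundamental values and a simple Shiller-type variance comparison. The dividend and price series, discount rate, and growth rate are hypothetical, and this is not the paper's exact test.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical annual dividend and observed price series.
dividends = 5.0 * np.cumprod(1 + rng.normal(0.04, 0.02, 60))
prices    = 100.0 * np.cumprod(1 + rng.normal(0.05, 0.15, 60))

r, g = 0.08, 0.04                               # assumed discount and growth rates (r > g)
fundamental = dividends * (1 + g) / (r - g)     # Gordon growth fundamental value

# Under the no-bubble null, observed prices should not be more volatile
# than fundamental values: Var(P) <= Var(P*).
print("Var(price)       =", round(prices.var(ddof=1), 1))
print("Var(fundamental) =", round(fundamental.var(ddof=1), 1))
print("variance bound violated:", prices.var(ddof=1) > fundamental.var(ddof=1))
```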
Abstract:
We study the approximation of harmonic functions by means of harmonic polynomials in two-dimensional, bounded, star-shaped domains. Assuming that the functions possess analytic extensions to a δ-neighbourhood of the domain, we prove exponential convergence of the approximation error with respect to the degree of the approximating harmonic polynomial. All the constants appearing in the bounds are explicit and depend only on the shape-regularity of the domain and on δ. We apply the obtained estimates to show exponential convergence with rate O(exp(−bN^{1/2})), where N is the number of degrees of freedom and b>0, of an hp-dGFEM discretisation of the Laplace equation based on piecewise harmonic polynomials. This result is an improvement over the classical rate O(exp(−bN^{1/3})), and is due to the use of harmonic polynomial spaces, as opposed to complete polynomial spaces.
Abstract:
Many key economic and financial series are bounded either by construction or through policy controls. Conventional unit root tests are potentially unreliable in the presence of bounds, since they tend to over-reject the null hypothesis of a unit root, even asymptotically. So far, very little work has been undertaken to develop unit root tests which can be applied to bounded time series. In this paper we address this gap in the literature by proposing unit root tests which are valid in the presence of bounds. We present new augmented Dickey–Fuller type tests as well as new versions of the modified ‘M’ tests developed by Ng and Perron [Ng, S., Perron, P., 2001. Lag length selection and the construction of unit root tests with good size and power. Econometrica 69, 1519–1554] and demonstrate how these tests, combined with a simulation-based method to retrieve the relevant critical values, make it possible to control size asymptotically. A Monte Carlo study suggests that the proposed tests perform well in finite samples. Moreover, the tests outperform the Phillips–Perron type tests originally proposed in Cavaliere [Cavaliere, G., 2005. Limited time series with a unit root. Econometric Theory 21, 907–945]. An illustrative application to U.S. interest rate data is provided.
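The simulation-based retrieval of critical values mentioned above can be sketched as follows. The bounded null process (a reflected random walk), the bounds, and the sample size are illustrative assumptions rather than the paper's specification, and the standard Dickey–Fuller statistic from statsmodels stands in for the tests developed in the paper.

```python
import numpy as np
from statsmodels.tsa.stattools import adfuller

rng = np.random.default_rng(2)
T, lower, upper = 250, -5.0, 5.0

def bounded_random_walk(T, lower, upper, sigma=1.0):
    """Unit root process kept within [lower, upper] by reflection at the bounds."""
    x = np.empty(T)
    x[0] = 0.0
    for t in range(1, T):
        step = x[t - 1] + sigma * rng.standard_normal()
        if step > upper:
            step = 2 * upper - step
        if step < lower:
            step = 2 * lower - step
        x[t] = step
    return x

# Null distribution of the Dickey-Fuller t-statistic for the bounded process.
null_stats = np.array([
    adfuller(bounded_random_walk(T, lower, upper),
             maxlag=0, regression="c", autolag=None)[0]
    for _ in range(2000)
])

# Left-tailed test: the 5% critical value is the 5th percentile of the null draws.
crit_5pct = np.quantile(null_stats, 0.05)
print("simulated 5% critical value for the bounded series:", round(crit_5pct, 3))
```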
Abstract:
In this article, we investigate how the choice of the attenuation factor in an extended version of Katz centrality influences the centrality of the nodes in evolving communication networks. For given snapshots of a network, observed over a period of time, recently developed communicability indices aim to identify the best broadcasters and listeners (receivers) in the network. Here we explore the attenuation factor constraint, in relation to the spectral radius (the largest eigenvalue) of the network at any point in time and its computation in the case of large networks. We compare three different communicability measures: standard, exponential, and relaxed (where the spectral radius bound on the attenuation factor is relaxed and the adjacency matrix is normalised, in order to maintain the convergence of the measure). Furthermore, using a vitality-based measure of both standard and relaxed communicability indices, we look at ways of establishing the most important individuals for broadcasting and receiving of messages related to community bridging roles. We compare those measures with the scores produced by an iterative version of the PageRank algorithm and illustrate our findings with three examples of real-life evolving networks: the MIT reality mining data set, consisting of daily communications between 106 individuals over a period of one year; a UK Twitter mentions network, constructed from the direct tweets between 12.4k individuals during one week; and a subset of the Enron email data set.
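As a minimal sketch of the attenuation-factor constraint discussed above, the snippet below computes static Katz centrality with the attenuation factor set below the reciprocal of the spectral radius. The small adjacency matrix is invented, and the dynamic communicability indices studied in the paper chain such resolvents across network snapshots rather than using a single snapshot.

```python
import numpy as np

# Illustrative undirected network of four nodes.
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)

spectral_radius = max(abs(np.linalg.eigvals(A)))
alpha = 0.85 / spectral_radius      # attenuation factor: must satisfy alpha < 1/lambda_max
n = A.shape[0]

# Katz centrality: x = (I - alpha * A)^{-1} * 1; the Neumann series behind this
# resolvent converges only when alpha is below the reciprocal of the spectral radius.
katz = np.linalg.solve(np.eye(n) - alpha * A, np.ones(n))
print("spectral radius:", round(spectral_radius, 3))
print("Katz scores:", np.round(katz, 3))
```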
Abstract:
In this paper we propose methods for computing Fresnel integrals based on truncated trapezium rule approximations to integrals on the real line, with the trapezium rules modified to take into account poles of the integrand near the real axis. Our starting point is a method for computation of the error function of complex argument due to Matta and Reichel (J Math Phys 34:298–307, 1956) and Hunter and Regan (Math Comp 26:539–541, 1972). We construct approximations which we prove are exponentially convergent as a function of N, the number of quadrature points, obtaining explicit error bounds which show that accuracies of 10⁻¹⁵ uniformly on the real line are achieved with N=12, as confirmed by computations. The approximations we obtain are attractive, additionally, in that they maintain small relative errors for small and large argument, are analytic on the real axis (echoing the analyticity of the Fresnel integrals), and are straightforward to implement.
Abstract:
Enrichment in resource availability theoretically destabilizes predator–prey dynamics (the paradox of enrichment). However, a minor change in the resource stoichiometry may make a prey toxic for the predator, and the presence of toxic prey affects the dynamics significantly. Here we explore theoretically how, at increased carrying capacity, a toxic prey affects the oscillation or destabilization of predator–prey dynamics, and how its presence influences the growth of the predator as well as that of a palatable prey. Mathematical analysis determines the bounds on the food toxicity that allow the coexistence of a predator along with a palatable and a toxic prey. The overall results demonstrate that toxic food counteracts the oscillation (destabilization) arising from enrichment of resource availability. Moreover, our results show that, at increased resource availability, toxic food that acts as a source of extra mortality may increase the abundance of the predator as well as that of the palatable prey.
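For context on the enrichment-driven destabilisation discussed above, the following sketch integrates the classical Rosenzweig–MacArthur predator–prey model at two carrying capacities. The model and all parameter values are illustrative assumptions and omit the toxic second prey that is the focus of the paper.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Hypothetical parameters: prey growth, attack rate, handling time,
# conversion efficiency, predator mortality.
r, a, h, e, m = 1.0, 1.0, 0.5, 0.5, 0.3

def rosenzweig_macarthur(t, y, K):
    prey, pred = y
    feeding = a * prey / (1 + a * h * prey)          # Holling type II response
    dprey = r * prey * (1 - prey / K) - feeding * pred
    dpred = e * feeding * pred - m * pred
    return [dprey, dpred]

for K in (1.5, 6.0):                                 # low vs. enriched carrying capacity
    sol = solve_ivp(rosenzweig_macarthur, (0, 500), [0.5, 0.5], args=(K,), max_step=0.1)
    late = sol.y[:, sol.t > 400]                     # discard transients
    amplitude = late.max(axis=1) - late.min(axis=1)  # near zero if equilibrium is stable
    print(f"K = {K}: late-time oscillation amplitude (prey, predator) =",
          np.round(amplitude, 3))
```

With these illustrative values the coexistence equilibrium is stable at the lower carrying capacity and gives way to sustained oscillations at the higher one, which is the enrichment-driven destabilisation that the toxic prey is shown to counteract.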
Abstract:
In this paper we propose and analyse a hybrid numerical-asymptotic boundary element method for the solution of problems of high frequency acoustic scattering by a class of sound-soft nonconvex polygons. The approximation space is enriched with carefully chosen oscillatory basis functions; these are selected via a study of the high frequency asymptotic behaviour of the solution. We demonstrate via a rigorous error analysis, supported by numerical examples, that to achieve any desired accuracy it is sufficient for the number of degrees of freedom to grow only in proportion to the logarithm of the frequency as the frequency increases, in contrast to the at least linear growth required by conventional methods. This appears to be the first such numerical analysis result for any problem of scattering by a nonconvex obstacle. Our analysis is based on new frequency-explicit bounds on the normal derivative of the solution on the boundary and on its analytic continuation into the complex plane.
Abstract:
We apply a new parameterisation of the Greenland ice sheet (GrIS) feedback between surface mass balance (SMB: the sum of surface accumulation and surface ablation) and surface elevation in the MAR regional climate model (Edwards et al., 2014) to projections of future climate change using five ice sheet models (ISMs). The MAR (Modèle Atmosphérique Régional: Fettweis, 2007) climate projections are for 2000–2199, forced by the ECHAM5 and HadCM3 global climate models (GCMs) under the SRES A1B emissions scenario. The additional sea level contribution due to the SMB–elevation feedback averaged over five ISM projections for ECHAM5 and three for HadCM3 is 4.3% (best estimate; 95% credibility interval 1.8–6.9%) at 2100, and 9.6% (best estimate; 95% credibility interval 3.6–16.0%) at 2200. In all results the elevation feedback is significantly positive, amplifying the GrIS sea level contribution relative to the MAR projections in which the ice sheet topography is fixed: the lower bounds of our 95% credibility intervals (CIs) for sea level contributions are larger than the “no feedback” case for all ISMs and GCMs. Our method is novel in sea level projections because we propagate three types of modelling uncertainty – GCM and ISM structural uncertainties, and elevation feedback parameterisation uncertainty – along the causal chain, from SRES scenario to sea level, within a coherent experimental design and statistical framework. The relative contributions to uncertainty depend on the timescale of interest. At 2100, the GCM uncertainty is largest, but by 2200 both the ISM and parameterisation uncertainties are larger. We also perform a perturbed parameter ensemble with one ISM to estimate the shape of the projected sea level probability distribution; our results indicate that the probability density is slightly skewed towards higher sea level contributions.
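A schematic sketch of the uncertainty propagation described above: sample a GCM, an ISM, and a feedback-parameter value, and accumulate the resulting distribution of the extra sea level contribution. All numbers and model labels here are hypothetical placeholders, not values from the study.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 20000

# Hypothetical structural offsets (in percentage points of extra contribution).
gcm_offsets = {"GCM_A": 0.0, "GCM_B": 1.0}
ism_offsets = {"ISM_1": -0.5, "ISM_2": 0.0, "ISM_3": 0.5}

gcm = rng.choice(list(gcm_offsets), n)         # structural uncertainty: GCM choice
ism = rng.choice(list(ism_offsets), n)         # structural uncertainty: ISM choice
param = rng.normal(4.0, 1.0, n)                # feedback-parameterisation uncertainty

extra_slr_percent = (param
                     + np.vectorize(gcm_offsets.get)(gcm)
                     + np.vectorize(ism_offsets.get)(ism))

lo, mid, hi = np.percentile(extra_slr_percent, [2.5, 50, 97.5])
print(f"best estimate {mid:.1f}%, 95% interval ({lo:.1f}%, {hi:.1f}%)")
```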
Abstract:
Using the GlobAEROSOL-AATSR dataset, estimates of the instantaneous, clear-sky, direct aerosol radiative effect and radiative forcing have been produced for the year 2006. Aerosol Robotic Network sun-photometer measurements have been used to characterise the random and systematic error in the GlobAEROSOL product for 22 regions covering the globe. Representative aerosol properties for each region were derived from the results of a wide range of literature sources and, along with the de-biased GlobAEROSOL AODs, were used to drive an offline version of the Met Office unified model radiation scheme. In addition to the mean AOD, best-estimate run of the radiation scheme, a range of additional calculations were done to propagate uncertainty estimates in the AOD, optical properties, surface albedo and errors due to the temporal and spatial averaging of the AOD fields. This analysis produced monthly, regional estimates of the clear-sky aerosol radiative effect and its uncertainty, which were combined to produce annual, global mean values of (−6.7 ± 3.9) W m⁻² at the top of atmosphere (TOA) and (−12 ± 6) W m⁻² at the surface. These results were then used to give estimates of regional, clear-sky aerosol direct radiative forcing, using modelled pre-industrial AOD fields for the year 1750 calculated for the AEROCOM PRE experiment. However, as it was not possible to quantify the uncertainty in the pre-industrial aerosol loading, these figures can only be taken as indicative and their uncertainties as lower bounds on the likely errors. Although the uncertainty on the aerosol radiative effect presented here is considerably larger than most previous estimates, the explicit inclusion of the major sources of error in the calculations suggests that they are closer to the true constraint on this figure from similar methodologies, and points to the need for more, improved estimates of both global aerosol loading and aerosol optical properties.
Abstract:
To improve the quantity and impact of observations used in data assimilation it is necessary to take into account the full, potentially correlated, observation error statistics. A number of methods for estimating correlated observation errors exist, but a popular method is a diagnostic that makes use of statistical averages of observation-minus-background and observation-minus-analysis residuals. The accuracy of the results it yields is unknown, as the diagnostic is sensitive to the difference between the exact background and exact observation error covariances and those that are chosen for use within the assimilation. It has often been stated in the literature that the results using this diagnostic are only valid when the background and observation error correlation length scales are well separated. Here we develop new theory relating to the diagnostic. For observations on a 1D periodic domain we are able to show the effect of changes in the assumed error statistics used in the assimilation on the estimated observation error covariance matrix. We also provide bounds for the estimated observation error variance and eigenvalues of the estimated observation error correlation matrix. We demonstrate that it is still possible to obtain useful results from the diagnostic when the background and observation error length scales are similar. In general, our results suggest that when correlated observation errors are treated as uncorrelated in the assimilation, the diagnostic will underestimate the correlation length scale. We support our theoretical results with simple illustrative examples. These results have potential use for interpreting the derived covariances estimated using an operational system.
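A minimal sketch of the residual-based diagnostic described above, in the spirit of Desroziers-type estimates: the observation error covariance is estimated from averages of observation-minus-analysis and observation-minus-background residuals in a toy 1D periodic setup with H = I. The covariance shapes, length scales, and assimilation settings are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)
p, n_samples = 40, 20000

def circulant_cov(p, length_scale, variance):
    """Gaussian-shaped covariance on a periodic 1D grid of p points."""
    d = np.minimum(np.arange(p), p - np.arange(p))      # periodic distance
    row = variance * np.exp(-0.5 * (d / length_scale) ** 2)
    return np.array([np.roll(row, i) for i in range(p)])

B_true = circulant_cov(p, length_scale=5.0, variance=1.0)   # true background errors
R_true = circulant_cov(p, length_scale=2.0, variance=0.5)   # true correlated obs errors

# The assimilation uses (possibly wrong) assumed covariances; here R is assumed
# diagonal, i.e. correlated observation errors treated as uncorrelated.
B_assumed = B_true
R_assumed = 0.5 * np.eye(p)
K = B_assumed @ np.linalg.inv(B_assumed + R_assumed)        # gain with H = I

# Observation-minus-background residuals: y - H x_b = eps_o - eps_b.
d_b = (rng.multivariate_normal(np.zeros(p), R_true, n_samples)
       - rng.multivariate_normal(np.zeros(p), B_true, n_samples))
# Observation-minus-analysis residuals: y - H x_a = (I - K)(y - H x_b).
d_a = d_b @ (np.eye(p) - K).T

R_est = d_a.T @ d_b / n_samples            # diagnostic estimate of R
print("true obs error variance:     ", R_true[0, 0])
print("estimated obs error variance:", round(R_est.diagonal().mean(), 3))
```

Because the assumed observation error covariance differs from the true one, the estimate deviates from R_true, which is exactly the sensitivity the new theory quantifies.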
Abstract:
We present and analyse a space–time discontinuous Galerkin method for wave propagation problems. The special feature of the scheme is that it is a Trefftz method, namely that trial and test functions are solutions of the partial differential equation to be discretised in each element of the (space–time) mesh. The method considered is a modification of the discontinuous Galerkin schemes of Kretzschmar et al. (2014) and of Monk & Richter (2005). For Maxwell’s equations in one space dimension, we prove stability of the method, quasi-optimality, best approximation estimates for polynomial Trefftz spaces and (fully explicit) error bounds with high order in the meshwidth and in the polynomial degree. The analysis framework also applies to scalar wave problems and Maxwell’s equations in higher space dimensions. Numerical experiments demonstrate the theoretical results and the faster convergence compared to the non-Trefftz version of the scheme.