42 results for Taylor Approximation
Abstract:
A cloud-resolving model is modified to implement the weak temperature gradient approximation in order to simulate the interactions between tropical convection and the large-scale tropical circulation. The instantaneous domain-mean potential temperature is relaxed toward a reference profile obtained from a radiative–convective equilibrium simulation of the cloud-resolving model. For homogeneous surface conditions, the model state at equilibrium is a large-scale circulation with its descending branch in the simulated column. This is similar to the equilibrium state found in some other studies, but not all. For this model, the development of such a circulation is insensitive to the relaxation profile and the initial conditions. Two columns of the cloud-resolving model are fully coupled by relaxing the instantaneous domain-mean potential temperature in both columns toward each other. This configuration is energetically closed, in contrast to the reference-column configuration. No mean large-scale circulation develops over homogeneous surface conditions, regardless of the relative area of the two columns. The sensitivity to nonuniform surface conditions is similar to that obtained in the reference-column configuration if the two simulated columns have very different areas, but it is markedly weaker for columns of comparable area. The weaker sensitivity can be understood as a consequence of a formulation for which the energy budget is closed. The reference-column configuration has been used to study the convection in a local region under the influence of a large-scale circulation. The extension to a two-column configuration is proposed as a methodology for studying the influence of changes in remote convection on local convection.
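As a rough illustration of the relaxation step described in this abstract, the following sketch (assuming a single vertical profile, an idealised reference state and a hypothetical relaxation time scale, none of which are specified above) shows how a domain-mean potential temperature profile would be nudged toward a reference profile each time step:

    import numpy as np

    def wtg_relaxation_tendency(theta_mean, theta_ref, tau=3.0 * 3600.0):
        """Tendency (K/s) relaxing the domain-mean potential temperature profile
        theta_mean toward the reference profile theta_ref over time scale tau (s)."""
        return -(theta_mean - theta_ref) / tau

    # Illustrative column that is 0.5 K warmer than an idealised reference profile.
    z = np.linspace(0.0, 15.0e3, 50)           # height levels (m)
    theta_ref = 300.0 + 4.0e-3 * z             # idealised reference profile (K)
    theta_mean = theta_ref + 0.5
    dt = 60.0                                   # model time step (s)
    theta_mean = theta_mean + dt * wtg_relaxation_tendency(theta_mean, theta_ref)

In the two-column configuration described above, the same tendency would instead be computed from the difference between the two columns' mean profiles, so that they relax toward each other.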
Abstract:
The validity of approximating radiative heating rates in the middle atmosphere by a local linear relaxation to a reference temperature state (i.e., "Newtonian cooling") is investigated. Using radiative heating rate and temperature output from a chemistry–climate model with realistic spatiotemporal variability and realistic chemical and radiative parameterizations, it is found that a linear regression model can capture more than 80% of the variance in longwave heating rates throughout most of the stratosphere and mesosphere, provided that the damping rate is allowed to vary with height, latitude, and season. The linear model describes departures from the climatological mean, not from radiative equilibrium. Photochemical damping rates in the upper stratosphere are similarly diagnosed. Three important exceptions, however, are found. The approximation of linearity breaks down near the edges of the polar vortices in both hemispheres. This nonlinearity can be well captured by including a quadratic term. The use of a scale-independent damping rate is not well justified in the lower tropical stratosphere because of the presence of a broad spectrum of vertical scales. The local assumption fails entirely during the breakup of the Antarctic vortex, where large fluctuations in temperature near the top of the vortex influence longwave heating rates within the quiescent region below. These results are relevant for mechanistic modeling studies of the middle atmosphere, particularly those investigating the final Antarctic warming.
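As a hedged sketch of the diagnostic described in this abstract, a local damping rate can be obtained by regressing longwave heating-rate departures on temperature departures at each height and latitude; the function and synthetic data below are purely illustrative and not taken from the study:

    import numpy as np

    def newtonian_damping_rate(T_anom, Q_anom):
        """Least-squares fit of Q' = -alpha * T' at one (height, latitude, season) point.
        T_anom and Q_anom are time series of temperature (K) and longwave heating-rate
        (K/day) departures from the climatological mean.  Returns the damping rate
        alpha (1/day) and the fraction of heating-rate variance it explains."""
        alpha = -np.sum(T_anom * Q_anom) / np.sum(T_anom ** 2)
        residual = Q_anom + alpha * T_anom
        explained = 1.0 - np.var(residual) / np.var(Q_anom)
        return alpha, explained

    # Synthetic example: roughly a 20-day damping time plus non-radiative noise.
    rng = np.random.default_rng(0)
    T_anom = rng.normal(0.0, 5.0, size=1000)
    Q_anom = -0.05 * T_anom + rng.normal(0.0, 0.05, size=1000)
    alpha, explained = newtonian_damping_rate(T_anom, Q_anom)

The quadratic correction mentioned for the vortex edges would simply add a T'^2 term to the same regression.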
Abstract:
A landmark case is one which stands out from other less remarkable cases. Landmark status is generally accorded because the case marks the beginning or the end of a course of legal development. Taylor v Caldwell is regarded as a landmark case because it marks the beginning of a legal development: the introduction of the doctrine of frustration into English contract law. This chapter explores the legal and historical background to the case to ascertain if it is a genuine landmark. A closer scrutiny reveals that while the legal significance of the case is exaggerated, its historical significance reveals an unexpected irony: the case is a suitable landmark to the frustration of human endeavours. While the existence of the Surrey Music Hall was brief, it brought insanity, imprisonment, bankruptcy and death to its creators.
Abstract:
In the present paper we study the approximation of functions with bounded mixed derivatives by sparse tensor product polynomials in positive-order tensor product Sobolev spaces. We introduce a new sparse polynomial approximation operator which exhibits optimal convergence properties in L2 and the tensorized Sobolev norm simultaneously on a standard k-dimensional cube. In the special case k=2 the suggested approximation operator is also optimal in L2 and tensorized H1 (without essential boundary conditions). This allows the construction of an optimal sparse p-version FEM with sparse piecewise continuous polynomial splines, reducing the number of unknowns from O(p^2), needed for the full tensor product computation, to the substantially smaller number required for the suggested sparse technique, while preserving the same optimal convergence rate in terms of p. We apply this result to an elliptic differential equation and an elliptic integral equation with random loading and compute the covariances of the solutions with this reduced number of unknowns. Several numerical examples support the theoretical estimates.
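To make the reduction in the number of unknowns concrete, the sketch below counts the members of a full tensor-product polynomial space against a sparse (hyperbolic-cross-type) index set for k = 2; the particular sparse set used here is a generic illustration, not necessarily the operator constructed in the paper:

    def full_tensor_count(p, k=2):
        """Number of tensor-product basis polynomials with degree <= p in each direction."""
        return (p + 1) ** k

    def sparse_count(p, k=2):
        """Size of the hyperbolic-cross index set {i : prod_j (i_j + 1) <= p + 1},
        a common sparse alternative that grows like p*log(p) for k = 2."""
        if k == 1:
            return p + 1
        return sum(sparse_count((p + 1) // (i + 1) - 1, k - 1) for i in range(p + 1))

    for p in (7, 15, 31, 63):
        print(p, full_tensor_count(p), sparse_count(p))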
Abstract:
Using a linear factor model, we study the behaviour of French, German, Italian and British sovereign yield curves in the run-up to EMU. This allows us to determine which of these yield curves might best approximate a benchmark yield curve post EMU. We find that the best approximation for the risk-free yield is the UK three-month T-bill yield, followed by the German three-month T-bill yield. As no one sovereign yield curve dominates all others, we find that a composite yield curve, consisting of French, Italian and UK bonds at different maturity points along the yield curve, should be the benchmark post EMU.
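One generic way to implement a linear factor model for a panel of yields (not necessarily the authors' specification) is to extract principal-component factors, commonly read as level, slope and curvature; the sketch below uses synthetic data for illustration:

    import numpy as np

    def yield_curve_factors(yields, n_factors=3):
        """Extract n_factors principal-component factors from a T x M panel of
        yields observed at M maturities.  Returns the factor loadings (one row
        per factor) and the factor time series."""
        anomalies = yields - yields.mean(axis=0)
        _, _, vt = np.linalg.svd(anomalies, full_matrices=False)
        loadings = vt[:n_factors]
        factors = anomalies @ loadings.T
        return loadings, factors

    # Synthetic panel: 500 days of yields at six maturities driven by two factors.
    rng = np.random.default_rng(1)
    maturities = np.array([0.25, 1.0, 2.0, 5.0, 10.0, 30.0])
    level = rng.normal(0.0, 0.10, size=(500, 1))
    slope = rng.normal(0.0, 0.05, size=(500, 1))
    yields = 0.04 + level + slope * (maturities / 30.0) + rng.normal(0.0, 0.002, size=(500, 6))
    loadings, factors = yield_curve_factors(yields)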
Abstract:
We study the approximation of harmonic functions by means of harmonic polynomials in two-dimensional, bounded, star-shaped domains. Assuming that the functions possess analytic extensions to a δ-neighbourhood of the domain, we prove exponential convergence of the approximation error with respect to the degree of the approximating harmonic polynomial. All the constants appearing in the bounds are explicit and depend only on the shape-regularity of the domain and on δ. We apply the obtained estimates to show exponential convergence, with rate O(exp(−b√N)), N being the number of degrees of freedom and b>0, of an hp-dGFEM discretisation of the Laplace equation based on piecewise harmonic polynomials. This result is an improvement over the classical rate O(exp(−b∛N)), and is due to the use of harmonic polynomial spaces, as opposed to complete polynomial spaces.
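A small numerical sketch of the kind of approximation studied in this abstract: harmonic polynomials in 2D are spanned by the real and imaginary parts of z^n, so a harmonic function can be fitted by least squares and the error observed to fall rapidly with the degree (the target function and domain below are arbitrary illustrative choices):

    import numpy as np

    def harmonic_poly_basis(z, degree):
        """Columns are the 2D harmonic polynomials 1, Re z^n, Im z^n for n = 1..degree."""
        cols = [np.ones_like(z.real)]
        for n in range(1, degree + 1):
            zn = z ** n
            cols.extend([zn.real, zn.imag])
        return np.column_stack(cols)

    # Target: u = Re(exp(z)) = exp(x) * cos(y), harmonic on the unit square.
    x, y = np.meshgrid(np.linspace(0.0, 1.0, 40), np.linspace(0.0, 1.0, 40))
    z = (x + 1j * y).ravel()
    u = np.exp(z).real

    for degree in (2, 4, 8):
        A = harmonic_poly_basis(z, degree)
        coef, *_ = np.linalg.lstsq(A, u, rcond=None)
        print(degree, np.max(np.abs(A @ coef - u)))   # error shrinks rapidly with degree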
Abstract:
This paper examines the determinacy implications of forecast-based monetary policy rules that set the interest rate in response to expected future inflation in a Neo-Wicksellian model that incorporates real balance effects. We show that the presence of such effects in closed economies restricts the ability of the Taylor principle to prevent indeterminacy of the rational expectations equilibrium. The problem is exacerbated in open economies, particularly if the policy rule reacts to consumer-price, rather than domestic-price, inflation. However, determinacy can be restored in both closed and open economies with the addition of monetary policy inertia.
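For orientation, a forecast-based rule of the kind examined in this abstract can be written in standard notation (not the paper's own) as i_t = \bar{\imath} + \phi_\pi E_t[\pi_{t+1}]; the Taylor principle is the requirement \phi_\pi > 1, i.e. a more-than-one-for-one response of the nominal rate to expected inflation, and it is precisely the power of this condition to rule out indeterminacy that the real balance effects and open-economy features studied here weaken.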
Abstract:
The Monte Carlo Independent Column Approximation (McICA) is a flexible method for representing subgrid-scale cloud inhomogeneity in radiative transfer schemes. It does, however, introduce conditional random errors, but these have been shown to have little effect on climate simulations, where the spatial and temporal scales of interest are large enough for the effects of noise to be averaged out. This article considers the effect of McICA noise on a numerical weather prediction (NWP) model, where the time and spatial scales of interest are much closer to those at which the errors manifest themselves; this, as we show, means that the noise is more significant. We suggest methods for efficiently reducing the magnitude of McICA noise and test these methods in a global NWP version of the UK Met Office Unified Model (MetUM). The resultant errors are put into context by comparison with errors due to the widely used assumption of maximum-random overlap of plane-parallel homogeneous cloud. For a simple implementation of the McICA scheme, forecasts of near-surface temperature are found to be worse than those obtained using the plane-parallel, maximum-random-overlap representation of clouds. However, by applying the methods suggested in this article, we can reduce the noise enough to give forecasts of near-surface temperature that are an improvement on the plane-parallel, maximum-random-overlap forecasts. We conclude that the McICA scheme can be used to improve the representation of clouds in NWP models, provided that the associated noise is sufficiently small.
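The toy sketch below is not the article's method, but it illustrates why McICA estimates carry conditional random noise and why averaging over more stochastic subcolumns drives that noise down (the cloud fraction and fluxes are arbitrary):

    import numpy as np

    def mcica_flux_estimate(cloud_fraction, flux_cloudy, flux_clear, n_samples, rng):
        """Toy McICA-style estimate: each spectral sample sees one randomly drawn
        subcolumn (cloudy with probability cloud_fraction); the flux is the average
        over samples.  The exact independent-column answer is the cloud-fraction-
        weighted mean of the cloudy and clear fluxes."""
        cloudy = rng.random(n_samples) < cloud_fraction
        return np.where(cloudy, flux_cloudy, flux_clear).mean()

    rng = np.random.default_rng(2)
    exact = 0.4 * 100.0 + 0.6 * 300.0        # reference answer for 40% cloud cover
    for n_samples in (10, 100, 1000):
        estimates = [mcica_flux_estimate(0.4, 100.0, 300.0, n_samples, rng)
                     for _ in range(500)]
        # random error falls roughly as 1/sqrt(n_samples); the mean stays unbiased
        print(n_samples, np.std(estimates), np.mean(estimates) - exact)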
Abstract:
In this paper an equation is derived for the mean backscatter cross section of an ensemble of snowflakes at centimeter and millimeter wavelengths. It uses the Rayleigh–Gans approximation, which has previously been found to be applicable at these wavelengths due to the low density of snow aggregates. Although the internal structure of an individual snowflake is random and unpredictable, the authors find from simulations of the aggregation process that their structure is “self-similar” and can be described by a power law. This enables an analytic expression to be derived for the backscatter cross section of an ensemble of particles as a function of their maximum dimension in the direction of propagation of the radiation, the volume of ice they contain, a variable describing their mean shape, and two variables describing the shape of the power spectrum. The exponent of the power law is found to be −. In the case of 1-cm snowflakes observed by a 3.2-mm-wavelength radar, the backscatter is 40–100 times larger than that of a homogeneous ice–air spheroid with the same mass, size, and aspect ratio.
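The structural core of the result described in this abstract can be sketched as follows: in the Rayleigh–Gans approximation the backscatter cross section is controlled by |∫ A(s) exp(2iks) ds|^2, where A(s) is the volume of ice per unit length along the propagation direction. The code below evaluates that factor for an invented clumpy distribution (constant prefactors are omitted, and the clump geometry is purely illustrative, not taken from the paper):

    import numpy as np

    def rayleigh_gans_backscatter_factor(s, A, wavelength):
        """Structural factor |integral A(s) exp(2iks) ds|^2 of the Rayleigh-Gans
        backscatter cross section, with A(s) the ice volume per unit length along
        the propagation direction and k = 2*pi/wavelength.  Dielectric and k^4
        prefactors are omitted, so only relative values are meaningful."""
        k = 2.0 * np.pi / wavelength
        ds = s[1] - s[0]
        return np.abs(np.sum(A * np.exp(2j * k * s)) * ds) ** 2

    # A 1 cm particle observed at 3.2 mm wavelength, with the ice distributed as
    # a few Gaussian clumps along the propagation direction.
    D = 1.0e-2
    s = np.linspace(-D / 2.0, D / 2.0, 4096)
    clump_centres = np.array([-0.40, -0.10, 0.15, 0.35]) * D
    A = sum(np.exp(-0.5 * ((s - c) / (0.03 * D)) ** 2) for c in clump_centres)
    print(rayleigh_gans_backscatter_factor(s, A, 3.2e-3))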
Abstract:
In this work, we prove a weak Noether-type theorem for a class of variational problems that admit broken extremals. We use this result to prove discrete Noether-type conservation laws for a conforming finite element discretisation of a model elliptic problem. In addition, we study how well the finite element scheme satisfies the continuous conservation laws arising from the application of Noether’s first theorem (1918). We summarise extensive numerical tests illustrating the conservation of the discrete Noether law, using the p-Laplacian as an example, and derive a geometry-based adaptive algorithm in which an appropriate Noether quantity is the goal functional.
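For reference, the kind of continuous conservation law at issue can be stated in standard form (not necessarily the weak form used in the paper): for J(u) = ∫_Ω L(∇u) dx with L independent of x, such as the p-Laplacian Lagrangian L = (1/p)|∇u|^p, invariance under translations gives, along smooth extremals,

    \partial_j\!\left( L(\nabla u)\,\delta_{ij} - u_{,i}\,\frac{\partial L}{\partial u_{,j}} \right) = 0,
    \qquad i = 1, \dots, d,

i.e. the energy-momentum tensor is divergence-free; the paper's discrete Noether laws measure how well a finite element solution respects identities of this type.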