54 results for Hyperbolic smoothing


Relevance: 10.00%

Abstract:

The ASTER Global Digital Elevation Model (GDEM) has made elevation data at 30 m spatial resolution freely available, enabling reinvestigation of morphometric relationships derived from limited field data using much larger sample sizes. These data are used to analyse a range of morphometric relationships derived for dunes (between dune height, spacing, and equivalent sand thickness) in the Namib Sand Sea, which was chosen because there are a number of extant studies that could be used for comparison with the results. The relative accuracy of GDEM for capturing dune height and shape was tested against multiple individual ASTER DEM scenes and against field surveys, highlighting the smoothing of the dune crest and resultant underestimation of dune height, and the omission of the smallest dunes, because of the 30 m sampling of ASTER DEM products. It is demonstrated that morphometric relationships derived from GDEM data are broadly comparable with relationships derived by previous methods, across a range of different dune types. The data confirm patterns of dune height, spacing and equivalent sand thickness mapped previously in the Namib Sand Sea, but add new detail to these patterns.
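
As a minimal illustration of how such morphometric quantities can be extracted from gridded elevation data, the sketch below picks dune crests along a single DEM transect and derives height and crest-to-crest spacing. The synthetic profile and the processing choices are illustrative assumptions, not the paper's workflow; only the 30 m grid spacing echoes the ASTER GDEM resolution.

    # Minimal sketch: dune height and spacing from a single DEM transect.
    # The synthetic profile and the peak-picking choices are illustrative assumptions.
    import numpy as np
    from scipy.signal import find_peaks

    dx = 30.0                                    # grid spacing in metres (ASTER GDEM)
    x = np.arange(0, 30000, dx)                  # 30 km transect
    elevation = 40 * np.sin(2 * np.pi * x / 2000) + 0.002 * x   # synthetic dunes on a ramp

    crests, _ = find_peaks(elevation, distance=10)     # dune crests
    troughs, _ = find_peaks(-elevation, distance=10)   # interdune troughs

    spacing = np.diff(x[crests])                 # crest-to-crest spacing (m)
    heights = []                                 # crest minus mean of adjacent troughs
    for c in crests:
        left = troughs[troughs < c]
        right = troughs[troughs > c]
        if len(left) and len(right):
            base = 0.5 * (elevation[left[-1]] + elevation[right[0]])
            heights.append(elevation[c] - base)

    print(f"mean spacing: {spacing.mean():.0f} m, mean height: {np.mean(heights):.1f} m")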

Relevance: 10.00%

Abstract:

This study proposes a utility-based framework for determining optimal hedge ratios (OHRs) that allows for the impact of higher moments on hedging decisions. We examine the entire hyperbolic absolute risk aversion family of utilities, which includes quadratic, logarithmic, power, and exponential utility functions. We find that for both moderate and large spot (commodity) exposures, out-of-sample hedges constructed allowing for nonzero higher moments perform better than the simpler OLS hedge ratio. The picture is, however, not uniform across our seven spot commodities, as there is one instance (cotton) for which modeling higher moments decreases out-of-sample welfare relative to the simpler OLS. We support our empirical findings with a theoretical analysis of optimal hedging decisions and uncover a novel link between OHRs and the minimax hedge ratio, that is, the ratio that minimizes the largest loss of the hedged position.
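
For reference, the OLS benchmark mentioned above is simply the minimum-variance hedge ratio, the slope of a regression of spot returns on futures returns. The sketch below computes it on synthetic return series; the data are a hypothetical stand-in, not the paper's commodity sample.

    # Minimal sketch of the OLS (minimum-variance) hedge ratio used as the benchmark:
    # h* = Cov(spot returns, futures returns) / Var(futures returns).
    # The synthetic return series are illustrative, not the paper's commodity data.
    import numpy as np

    rng = np.random.default_rng(0)
    futures = rng.normal(0.0, 0.02, size=1000)                # futures returns
    spot = 0.9 * futures + rng.normal(0.0, 0.01, size=1000)   # correlated spot returns

    h_ols = np.cov(spot, futures)[0, 1] / np.var(futures, ddof=1)
    hedged = spot - h_ols * futures                            # hedged position P&L

    print(f"OLS hedge ratio: {h_ols:.3f}")
    print(f"unhedged std: {spot.std(ddof=1):.4f}, hedged std: {hedged.std(ddof=1):.4f}")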

Relevance: 10.00%

Abstract:

Starting from the classical Saltzman two-dimensional convection equations, we derive, via a severe spectral truncation, a minimal 10-ODE system that includes the thermal effect of viscous dissipation. Neglecting this process leads to a dynamical system that includes a decoupled generalized Lorenz system. Retaining it breaks an important symmetry and couples the dynamics of fast and slow variables, with ensuing modifications to the structural properties of the attractor and to the spectral features. When the relevant nondimensional number (the Eckert number Ec) is nonzero, an additional time scale of O(Ec^(-1)) is introduced into the system, as shown by standard multiscale analysis and confirmed by extensive numerical evidence. Moreover, the system is ergodic and hyperbolic, the slow variables feature long-term memory with 1/f^(3/2) power spectra, and the fast variables feature amplitude modulation. Increasing the strength of the thermal-viscous feedback has a stabilizing effect, as both the metric entropy and the Kaplan-Yorke attractor dimension decrease monotonically with Ec. The analyzed system features very rich dynamics: it overcomes some of the limitations of the Lorenz system and might have prototypical value for relevant processes in complex-system dynamics, such as the interaction between slow and fast variables, the presence of long-term memory, and the associated extreme-value statistics. This analysis shows how neglecting the coupling of slow and fast variables purely on the basis of scale analysis can be catastrophic: it leads to spurious invariances that affect essential dynamical properties (ergodicity, hyperbolicity) and cause the model to lose its ability to describe intrinsically multiscale processes.
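
As a point of reference only, the sketch below integrates the classical Lorenz-63 equations (standard parameter values), which are the simplest Saltzman-derived truncation of this kind; it is not the paper's 10-ODE model and does not include the viscous-dissipation coupling discussed above.

    # Minimal sketch: the classical Lorenz-63 system, used here only as a familiar
    # reference point for Saltzman-derived truncations (not the paper's 10-ODE model).
    import numpy as np
    from scipy.integrate import solve_ivp

    sigma, rho, beta = 10.0, 28.0, 8.0 / 3.0   # standard parameter values

    def lorenz(t, state):
        x, y, z = state
        return [sigma * (y - x), x * (rho - z) - y, x * y - beta * z]

    sol = solve_ivp(lorenz, (0.0, 50.0), [1.0, 1.0, 1.0], rtol=1e-9, atol=1e-12)
    print("final state:", sol.y[:, -1])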

Relevance: 10.00%

Abstract:

The case for property has typically rested on the application of modern portfolio theory (MPT), in that property has been shown to offer increased diversification benefits within a multi-asset portfolio without hurting portfolio returns, especially for lower-risk portfolios. However, this view is based on the use of historic, usually appraisal-based, data for property. Recent research suggests strongly that such data significantly underestimate the risk characteristics of property, because appraisals explicitly or implicitly smooth out much of the real volatility in property returns. This paper examines the portfolio diversification effects of including property in a multi-asset portfolio, using UK appraisal-based (smoothed) data and several derived de-smoothed series. Having considered the effects of de-smoothing, we then consider the inclusion of a further low-risk asset (cash) in order to investigate whether property's place in a low-risk portfolio is maintained. The conclusion of this study is that the previously supposed benefits of including property have been overstated. Although property may still have a place in a 'balanced' institutional portfolio, the case for property needs to be reassessed and should not rest simplistically on the application of MPT.
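
To make the de-smoothing idea concrete, the sketch below applies a first-order, Geltner-style de-smoothing model to a synthetic appraisal return series: appraisals are assumed to be an exponentially weighted average of current "true" and past appraised returns, which is then inverted. The smoothing parameter alpha and the data are illustrative assumptions, not the series derived in the paper.

    # Minimal sketch of first-order (Geltner-style) de-smoothing of appraisal returns:
    # observed_t = alpha * true_t + (1 - alpha) * observed_{t-1}
    # => true_t = (observed_t - (1 - alpha) * observed_{t-1}) / alpha
    # alpha and the synthetic series are illustrative, not the paper's data.
    import numpy as np

    rng = np.random.default_rng(1)
    true_returns = rng.normal(0.02, 0.08, size=120)   # quarterly "true" property returns

    alpha = 0.4
    smoothed = np.empty_like(true_returns)
    smoothed[0] = true_returns[0]
    for t in range(1, len(true_returns)):
        smoothed[t] = alpha * true_returns[t] + (1 - alpha) * smoothed[t - 1]

    de_smoothed = (smoothed[1:] - (1 - alpha) * smoothed[:-1]) / alpha

    print(f"smoothed volatility:    {smoothed.std(ddof=1):.4f}")
    print(f"de-smoothed volatility: {de_smoothed.std(ddof=1):.4f}")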

Relevance: 10.00%

Abstract:

Methods for producing nonuniform transformations, or regradings, of discrete data are discussed. The transformations are useful in image processing, principally for enhancement and normalization of scenes. Regradings which “equidistribute” the histogram of the data, that is, which transform it into a constant function, are determined. Techniques for smoothing the regrading, dependent upon a continuously variable parameter, are presented. Generalized methods for constructing regradings such that the histogram of the data is transformed into any prescribed function are also discussed. Numerical algorithms for implementing the procedures and applications to specific examples are described.
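
An "equidistributing" regrading is, in its discrete form, the familiar histogram-equalization transform built from the cumulative histogram. The sketch below implements it and adds a continuously variable parameter that blends the equalizing transform with the identity; that blending is an illustrative smoothing choice, not necessarily the technique described in the paper.

    # Minimal sketch of an "equidistributing" regrading (histogram equalization) with a
    # continuously variable parameter lam blending it with the identity transform.
    # The blending is an illustrative choice, not the paper's smoothing scheme.
    import numpy as np

    def regrade(image, levels=256, lam=1.0):
        hist, _ = np.histogram(image, bins=levels, range=(0, levels))
        cdf = np.cumsum(hist) / image.size
        equalized = (levels - 1) * cdf                 # equidistributing transform T(k)
        identity = np.arange(levels, dtype=float)
        transform = (1 - lam) * identity + lam * equalized
        return transform[image]                        # apply the regrading lookup table

    rng = np.random.default_rng(2)
    img = rng.integers(0, 256, size=(64, 64))          # synthetic 8-bit scene
    out = regrade(img, lam=0.8)
    print(out.min(), out.max())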

Relevance: 10.00%

Abstract:

Dissolved organic carbon (DOC) concentrations in surface waters have increased across much of Europe and North America, with implications for the terrestrial carbon balance, aquatic ecosystem functioning, water treatment costs and human health. Over the past decade, many hypotheses have been put forward to explain this phenomenon, from changing climate and land-management to eutrophication and acid deposition. Resolution of this debate has been hindered by a reliance on correlative analyses of time-series data, and a lack of robust experimental testing of proposed mechanisms. In a four-year, four-site replicated field experiment involving both acidifying and de-acidifying treatments, we tested the hypothesis that DOC leaching was previously suppressed by high levels of soil acidity in peat and organo-mineral soils, and therefore that observed DOC increases are a consequence of decreasing soil acidity. We observed a consistent, positive relationship between DOC and acidity change at all sites. Responses were described by similar hyperbolic relationships between standardised changes in DOC and hydrogen ion concentrations at all sites, suggesting potentially general applicability. These relationships explained a substantial proportion of observed changes in peak DOC concentrations in nearby monitoring streams, and application to a UK-wide upland soil pH dataset suggests that recovery from acidification alone could have led to soil solution DOC increases in the range 46-126%, depending on habitat type, since 1978. Our findings raise the possibility that changing soil acidity may have wider impacts on ecosystem carbon balances. Decreasing sulphur deposition may be accelerating terrestrial carbon loss and returning surface waters to a natural, high-DOC condition.
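
As a generic illustration of fitting a hyperbolic (saturating) response of the kind described above, the sketch below fits y = a*x/(b + x) to synthetic data relating standardised DOC change to hydrogen-ion change. The functional form, sign convention and data are illustrative assumptions, not the relationships reported in the paper.

    # Minimal sketch: fitting a hyperbolic (saturating) curve to standardised DOC change
    # versus hydrogen-ion change. Functional form and data are illustrative assumptions.
    import numpy as np
    from scipy.optimize import curve_fit

    def hyperbolic(delta_h, a, b):
        return a * delta_h / (b + delta_h)     # saturates at a as delta_h grows

    delta_h = np.linspace(0.1, 50.0, 40)       # change in [H+] (arbitrary units)
    rng = np.random.default_rng(3)
    delta_doc = hyperbolic(delta_h, 1.2, 8.0) + rng.normal(0, 0.05, delta_h.size)

    popt, _ = curve_fit(hyperbolic, delta_h, delta_doc, p0=(1.0, 5.0))
    print(f"fitted a = {popt[0]:.2f}, b = {popt[1]:.2f}")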

Relevance: 10.00%

Abstract:

We prove essential self-adjointness of a class of Dirichlet operators in ℝn using the hyperbolic equation approach. This method allows one to prove essential self-adjointness under minimal conditions on the logarithmic derivative of the density and a condition of Muckenhoupt type on the density itself.

Relevance: 10.00%

Abstract:

We study the boundedness and compactness of Toeplitz operators T_a on Bergman spaces A^p, 1 < p < ∞. The novelty is that we allow distributional symbols. It turns out that membership of the symbol in a weighted Sobolev space of negative order is sufficient for the boundedness of T_a. We show the natural relation between the hyperbolic geometry of the disc and the order of the distribution. A corresponding sufficient condition for compactness is also derived.

Relevance: 10.00%

Abstract:

We study the boundedness of Toeplitz operators $T_a$ with locally integrable symbols on Bergman spaces $A^p(\mathbb{D})$, $1 < p < \infty$. Our main result gives a sufficient condition for the boundedness of $T_a$ in terms of some ``averages'' (related to hyperbolic rectangles) of its symbol. If the averages satisfy an ${o}$-type condition on the boundary of $\mathbb{D}$, we show that the corresponding Toeplitz operator is compact on $A^p$. Both conditions coincide with the known necessary conditions in the case of nonnegative symbols and $p=2$. We also show that Toeplitz operators with symbols of vanishing mean oscillation are Fredholm on $A^p$ provided that the averages are bounded away from zero, and derive an index formula for these operators.
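
For orientation, the objects involved can be written schematically as follows (standard, unweighted definitions on the disc; the rectangles and averages are shown in generic form rather than exactly as in the paper):

    T_a f(z) = \int_{\mathbb{D}} \frac{a(w)\, f(w)}{(1 - z\bar{w})^{2}} \, dA(w),
    \qquad
    \hat{a}(Q) = \frac{1}{|Q|} \int_{Q} a \, dA ,

where $Q$ ranges over hyperbolic rectangles. Roughly speaking, the sufficient conditions referred to above take the form $\sup_Q |\hat{a}(Q)| < \infty$ for boundedness and $\hat{a}(Q) \to 0$ as $Q$ approaches $\partial\mathbb{D}$ for compactness.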

Relevance: 10.00%

Abstract:

The plume of Ice Shelf Water (ISW) flowing into the Weddell Sea over the Filchner sill contributes to the formation of Antarctic Bottom Water. The Filchner overflow is simulated using a hydrostatic, primitive equation three-dimensional ocean model with a 0.5–2 Sv ISW influx above the Filchner sill. The best fit to mooring temperature observations is found with influxes of 0.5 and 1 Sv, below a previous estimate of 1.6 ± 0.5 Sv based on sparse mooring velocities. The plume first moves north over the continental shelf and then turns west along the slope of the continental shelf break, where it breaks up into subplumes and domes, some of which then move downslope. Other subplumes run into the eastern submarine ridge and propagate downslope along the ridge in a chaotic manner. The next, western ridge is crossed by the plume along several paths. Despite a number of discrepancies with observational data, the model reproduces many attributes of the flow. In particular, we argue that the temporal variability shown by the observations can largely be attributed to the unstable structure of the flow, where the temperature fluctuations are determined by the motion of the domes past the moorings. Our sensitivity studies show that while thermobaricity plays a role, its effect is small for the flows considered. Smoothing the ridges out demonstrates that their presence strongly affects the plume shape around the ridges. An increase in the bottom drag or viscosity slows the plume down and hence thickens and widens it.

Relevance: 10.00%

Abstract:

We present a new sparse shape modeling framework on the Laplace-Beltrami (LB) eigenfunctions. Traditionally, the LB-eigenfunctions are used as a basis for intrinsically representing surface shapes by forming a Fourier series expansion. To reduce high frequency noise, only the first few terms are used in the expansion and the higher frequency terms are simply thrown away. However, some lower frequency terms may not necessarily contribute significantly to reconstructing the surfaces. Motivated by this idea, we propose to filter out only the significant eigenfunctions by imposing an l1-penalty. The new sparse framework can further avoid the additional surface-based smoothing often used in the field. The proposed approach is applied to investigate the influence of age (38-79 years) and gender on amygdala and hippocampus shapes in the normal population. In addition, we show how the emotional response is related to the anatomy of the subcortical structures.
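
As a minimal illustration of the l1 selection step, the sketch below fits a Lasso regression of one surface coordinate function on a basis matrix and keeps only the eigenfunctions with nonzero coefficients. In practice the basis would hold LB eigenfunctions evaluated at mesh vertices; here it is random, purely for illustration.

    # Minimal sketch: sparse selection of Laplace-Beltrami eigenfunction coefficients
    # via an l1 (Lasso) penalty. The basis matrix is a random stand-in for LB
    # eigenfunctions evaluated at mesh vertices; all values are illustrative.
    import numpy as np
    from sklearn.linear_model import Lasso

    rng = np.random.default_rng(4)
    n_vertices, n_basis = 2000, 100
    basis = rng.normal(size=(n_vertices, n_basis))     # stand-in for LB eigenfunctions

    true_coef = np.zeros(n_basis)
    true_coef[[2, 7, 40]] = [5.0, -3.0, 1.5]           # only a few terms matter
    coords = basis @ true_coef + rng.normal(0, 0.1, n_vertices)   # one coordinate function

    model = Lasso(alpha=0.05).fit(basis, coords)
    selected = np.flatnonzero(model.coef_)
    print("selected eigenfunctions:", selected)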

Relevance: 10.00%

Abstract:

Evolutionary meta-algorithms for pulse shaping of broadband femtosecond-duration laser pulses are proposed. The genetic algorithm searching the evolutionary landscape for desired pulse shapes consists of a population of waveforms (genes), each made from two concatenated vectors specifying phases and magnitudes, respectively, over a range of frequencies. Frequency-domain operators such as mutation, two-point crossover, average crossover, polynomial phase mutation, creep and three-point smoothing, as well as a time-domain crossover, are combined to produce fitter offspring at each iteration step. The algorithm applies roulette-wheel selection, elitism and linear fitness scaling to the gene population. A differential evolution (DE) operator that provides a source of directed mutation, and new wavelet operators, are also proposed. Using properly tuned parameters for DE, the meta-algorithm is used to solve a waveform-matching problem. Tuning allows either a greedy directed search near the best known solution or a robust search across the entire parameter space.
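
The sketch below is a stripped-down version of such a genetic algorithm for waveform matching: each gene concatenates a phase vector and a magnitude vector, and the loop applies roulette-wheel selection, two-point crossover, Gaussian mutation and elitism. The paper's other operators (creep, three-point smoothing, DE, wavelets, fitness scaling) are omitted, and all parameter values are illustrative.

    # Minimal sketch of a GA for waveform matching with concatenated phase/magnitude
    # genes. Only roulette-wheel selection, two-point crossover, Gaussian mutation and
    # elitism are implemented; all parameters are illustrative assumptions.
    import numpy as np

    rng = np.random.default_rng(5)
    n_freq, pop_size, n_gen = 32, 60, 200
    target = np.concatenate([rng.uniform(-np.pi, np.pi, n_freq),   # target phases
                             rng.uniform(0, 1, n_freq)])           # target magnitudes

    def fitness(pop):
        # Higher is better: negative squared distance to the target waveform.
        return -np.sum((pop - target) ** 2, axis=1)

    def roulette(pop, fit):
        weights = fit - fit.min() + 1e-12
        idx = rng.choice(len(pop), size=len(pop), p=weights / weights.sum())
        return pop[idx]

    def two_point_crossover(pop):
        out = pop.copy()
        for i in range(0, len(pop) - 1, 2):
            a, b = sorted(rng.integers(0, pop.shape[1], size=2))
            out[i, a:b], out[i + 1, a:b] = pop[i + 1, a:b].copy(), pop[i, a:b].copy()
        return out

    pop = rng.uniform(-np.pi, np.pi, (pop_size, 2 * n_freq))
    for _ in range(n_gen):
        fit = fitness(pop)
        elite = pop[np.argmax(fit)].copy()              # elitism: keep the best gene
        pop = two_point_crossover(roulette(pop, fit))
        pop += rng.normal(0, 0.05, pop.shape)           # Gaussian mutation
        pop[0] = elite

    print("best residual:", -fitness(pop).max())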

Relevance: 10.00%

Abstract:

The objective of this paper is to show that the group SE(3) with an imposed Lie-Poisson structure can be used to determine the trajectory in a spatial frame of a rigid body in Euclidean space. Identical results for the trajectory are obtained in spherical and hyperbolic space by scaling the linear displacements appropriately since the influence of the moments of inertia on the trajectories tends to zero as the scaling factor increases. The semidirect product of the linear and rotational motions gives the trajectory from a body frame perspective. It is shown that this cannot be used to determine the trajectory in the spatial frame. The body frame trajectory is thus independent of the velocity coupling. In addition, it is shown that the analysis can be greatly simplified by aligning the axes of the spatial frame with the axis of symmetry which is unchanging for a natural system with no forces and rotation about an axis of symmetry.

Relevance: 10.00%

Abstract:

Many operational weather forecasting centres use semi-implicit time-stepping schemes because of their good efficiency. However, as computers become ever more parallel, horizontally explicit solutions of the equations of atmospheric motion may become an attractive alternative because of the additional inter-processor communication that implicit methods require. Implicit-explicit (IMEX) time stepping has long been used in models of the atmosphere in the form of semi-implicit, split-explicit or HEVI (horizontally explicit, vertically implicit) splitting. However, most studies of the accuracy and stability of IMEX schemes have been limited to the parabolic case of advection-diffusion equations. We demonstrate how a number of Runge-Kutta IMEX schemes can be used to solve hyperbolic wave equations either semi-implicitly or with HEVI splitting. A new form of HEVI splitting is proposed, UfPreb, which dramatically improves the accuracy and stability of simulations of gravity waves in stratified flow. As a consequence, it is found that there are HEVI schemes that do not lose accuracy in comparison with semi-implicit ones. The stability limits of a number of variations of trapezoidal implicit and some Runge-Kutta IMEX schemes are found, and the schemes are tested on two vertical slice cases using the compressible Boussinesq equations split into various combinations of implicit and explicit terms. Some of the Runge-Kutta schemes are found to be advantageous compared with trapezoidal, especially since they damp high frequencies without dropping to first-order accuracy. We also test, in a stiff (nearly incompressible) limit, schemes that are not formally accurate for stiff systems and find that they can perform well. The scheme ARK2(2,3,2) performs the best in the tests.
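
To illustrate the basic idea of IMEX splitting on a hyperbolic problem, the sketch below integrates a linear two-variable oscillation split into a fast part treated implicitly (trapezoidal) and a slow part treated explicitly (forward Euler). This is a generic illustration of the splitting principle, not the ARK2 or UfPreb schemes discussed above, and the frequencies and step size are illustrative.

    # Minimal sketch of IMEX time stepping on a linear oscillation u'' = -(wf^2 + ws^2) u,
    # written as a 2x2 system y = [u, u'] and split into a fast part treated implicitly
    # (trapezoidal) and a slow part treated explicitly (forward Euler).
    import numpy as np

    w_fast, w_slow = 50.0, 1.0                     # fast (stiff) and slow frequencies
    L_fast = np.array([[0.0, 1.0], [-w_fast**2, 0.0]])
    L_slow = np.array([[0.0, 0.0], [-w_slow**2, 0.0]])

    dt, n_steps = 0.02, 500                        # step resolves the slow wave, not the fast one
    I = np.eye(2)
    A = np.linalg.inv(I - 0.5 * dt * L_fast)       # trapezoidal (implicit) factor

    u = np.array([1.0, 0.0])                       # initial displacement and velocity
    for _ in range(n_steps):
        rhs = (I + 0.5 * dt * L_fast) @ u + dt * (L_slow @ u)
        u = A @ rhs

    # Oscillation amplitude: sqrt(u^2 + (u'/w)^2), with w the combined frequency.
    print("amplitude after integration:", np.hypot(u[0], u[1] / np.hypot(w_fast, w_slow)))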

Relevance: 10.00%

Abstract:

Sea surface temperature (SST) can be estimated from day and night observations of the Spinning Enhanced Visible and Infra-Red Imager (SEVIRI) by optimal estimation (OE). We show that exploiting the 8.7 μm channel, in addition to the "traditional" wavelengths of 10.8 and 12.0 μm, improves OE SST retrieval statistics in validation. However, the main benefit is an improvement in the sensitivity of the SST estimate to variability in true SST. In a fair, single-pixel comparison, the 3-channel OE gives better results than the SST estimation technique presently operational within the Ocean and Sea Ice Satellite Application Facility. The operational technique uses SST retrieval coefficients, followed by a bias-correction step informed by radiative transfer simulation. However, the operational technique also applies an additional "atmospheric correction smoothing", which improves its noise performance and which hitherto had no analogue within the OE framework. Here, we propose such an analogue, based on the expectation that atmospheric total column water vapour has a longer spatial correlation length scale than SST features. The approach extends the observations input to the OE to include the averaged brightness temperatures (BTs) of nearby clear-sky pixels, in addition to the BTs of the pixel for which SST is being retrieved. The retrieved quantities are then the single-pixel SST and the clear-sky total column water vapour averaged over the vicinity of the pixel. This reduces the noise in the retrieved SST significantly. The robust standard deviation of the new OE SST compared to matched drifting buoys becomes 0.39 K for all data. The smoothed OE gives an SST sensitivity of 98% on average. This means that diurnal temperature variability and ocean frontal gradients are more faithfully estimated, and that the influence of the prior SST used is minimal (2%). This benefit is not available using traditional atmospheric correction smoothing.
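
For reference, a single-pixel OE retrieval of this kind follows the standard (Rodgers-style) linearised form; the notation below is generic and does not reproduce the operational SEVIRI state vector. With x_a the prior state (covariance S_a), y the observed BTs (error covariance S_eps), F the forward model and K its Jacobian:

    \hat{x} = x_a + \left( K^{\mathrm{T}} S_\epsilon^{-1} K + S_a^{-1} \right)^{-1}
              K^{\mathrm{T}} S_\epsilon^{-1} \, \bigl( y - F(x_a) \bigr) .

The quoted SST sensitivity is the corresponding diagonal element of the averaging kernel A = (K^T S_eps^{-1} K + S_a^{-1})^{-1} K^T S_eps^{-1} K, and the extension described above augments y, F and K with the averaged clear-sky BTs of neighbouring pixels while adding the locally averaged total column water vapour to the state vector.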