54 results for Hyperbolic smoothing


Relevance: 10.00%

Abstract:

The use of Bayesian inference in the estimation of time-frequency representations has, thus far, been limited to offline analysis of signals, using a smoothing-spline-based model of the time-frequency plane. In this paper we introduce a new framework that allows the routine use of Bayesian inference for online estimation of the time-varying spectral density of a locally stationary Gaussian process. The core of our approach is the use of a likelihood inspired by a local Whittle approximation. This choice, along with the use of a recursive algorithm for non-parametric estimation of the local spectral density, permits the use of a particle filter for estimating the time-varying spectral density online. We provide demonstrations of the algorithm through tracking chirps and the analysis of musical data.
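The combination of a recursive state model with a particle filter can be illustrated with a toy example. The sketch below is illustrative only, not the authors' algorithm: a bootstrap particle filter tracks the slowly varying variance (a flat spectral level) of a Gaussian signal online, standing in for the full non-parametric local spectral density with a local Whittle likelihood.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy signal: white Gaussian noise whose variance drifts slowly over time.
T = 400
true_var = 1.0 + 0.5 * np.sin(2 * np.pi * np.arange(T) / T)
y = rng.normal(0.0, np.sqrt(true_var))

# Bootstrap particle filter over the log-variance state.
N = 500
particles = rng.normal(0.0, 0.5, size=N)   # initial log-variance particles
est = np.empty(T)
for t in range(T):
    particles += rng.normal(0.0, 0.05, size=N)   # random-walk state proposal
    var = np.exp(particles)
    # Gaussian log-likelihood of the new observation under each particle
    logw = -0.5 * (np.log(2 * np.pi * var) + y[t] ** 2 / var)
    w = np.exp(logw - logw.max())
    w /= w.sum()
    est[t] = np.sum(w * var)                     # filtered variance estimate
    particles = particles[rng.choice(N, size=N, p=w)]   # multinomial resampling

# After a burn-in, the filtered variance should track the true variance.
err = np.mean(np.abs(est[100:] - true_var[100:]))
```

A full implementation would replace the scalar log-variance state with a recursively updated non-parametric local spectral density, as described in the abstract.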

Relevance: 10.00%

Abstract:

Flood simulation models and hazard maps are only as good as the underlying data against which they are calibrated and tested. However, extreme flood events are by definition rare, so the observational data of flood inundation extent are limited in both quality and quantity. The relative importance of these observational uncertainties has increased now that computing power and accurate lidar scans make it possible to run high-resolution 2D models to simulate floods in urban areas. However, the value of these simulations is limited by the uncertainty in the true extent of the flood. This paper addresses that challenge by analyzing a point dataset of maximum water extent from a flood event on the River Eden at Carlisle, United Kingdom, in January 2005. The observation dataset is based on a collection of wrack and water marks from two postevent surveys. A smoothing algorithm for identifying, quantifying, and reducing localized inconsistencies in the dataset is proposed and evaluated, showing positive results. The proposed smoothing algorithm can be applied to improve flood inundation modeling assessment and the determination of risk zones on the floodplain.
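A minimal sketch of this kind of consistency smoothing of surveyed marks is shown below. The function name, neighbour window, and tolerance are assumptions for illustration; the paper's actual algorithm is not reproduced here.

```python
import numpy as np

def smooth_inconsistencies(chainage, level, window=3, tol=0.25):
    """Reduce localized inconsistencies in surveyed water-mark levels.

    Each point's level is compared with the median of its neighbours
    (by rank along the channel chainage); values deviating by more than
    `tol` metres are replaced by that local median. Illustrative only.
    """
    level = np.asarray(level, dtype=float)
    order = np.argsort(chainage)          # walk points in along-channel order
    smoothed = level.copy()
    for i in order:
        rank = np.where(order == i)[0][0]
        lo, hi = max(0, rank - window), min(len(order), rank + window + 1)
        neigh = [order[j] for j in range(lo, hi) if order[j] != i]
        local_med = np.median(level[neigh])
        if abs(level[i] - local_med) > tol:   # localized inconsistency
            smoothed[i] = local_med
    return smoothed

# One spuriously high wrack mark among otherwise consistent levels (m)
marks = [10.2, 10.25, 11.4, 10.3, 10.28]
fixed = smooth_inconsistencies(np.arange(5.0), marks)
```

The outlying third mark is pulled back to the local median while consistent points are left untouched.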

Relevance: 10.00%

Abstract:

Considerable effort is presently being devoted to producing high-resolution sea surface temperature (SST) analyses with a goal of spatial grid resolutions as low as 1 km. Because grid resolution is not the same as feature resolution, a method is needed to objectively determine the resolution capability and accuracy of SST analysis products. Ocean model SST fields are used in this study as simulated “true” SST data and subsampled based on actual infrared and microwave satellite data coverage. The subsampled data are used to simulate sampling errors due to missing data. Two different SST analyses are considered and run using both the full and the subsampled model SST fields, with and without additional noise. The results are compared as a function of spatial scales of variability using wavenumber auto- and cross-spectral analysis. The spectral variance at high wavenumbers (smallest wavelengths) is shown to be attenuated relative to the true SST because of smoothing that is inherent to both analysis procedures. Comparisons of the two analyses (both having comparable grid sizes) show important differences. One analysis tends to reproduce small-scale features more accurately when the high-resolution data coverage is good but produces more spurious small-scale noise when the high-resolution data coverage is poor. Analysis procedures can thus generate small-scale features with and without data, but the small-scale features in an SST analysis may be just noise when high-resolution data are sparse. Users must therefore be skeptical of high-resolution SST products, especially in regions where high-resolution (~5 km) infrared satellite data are limited because of cloud cover.
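The attenuation of high-wavenumber variance by analysis smoothing is easy to demonstrate in one dimension. In the hedged sketch below, a simple boxcar filter stands in for an analysis procedure (an assumption; the paper's analyses are more sophisticated), and the power spectra of the "true" and analysed fields are compared by wavenumber band.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1024
true_sst = rng.normal(size=n)            # "true" 1-D field, white in wavenumber

# Stand-in analysis step: a running-mean (boxcar) smoother of width 9
kernel = np.ones(9) / 9.0
analysed = np.convolve(true_sst, kernel, mode="same")

def power_spectrum(x):
    """One-sided power spectrum of a demeaned series."""
    X = np.fft.rfft(x - x.mean())
    return (np.abs(X) ** 2) / len(x)

p_true = power_spectrum(true_sst)
p_anal = power_spectrum(analysed)

# Ratio of analysed to true variance in low- and high-wavenumber bands
lo = slice(1, len(p_true) // 20)         # largest resolved scales
hi = slice(len(p_true) // 2, None)       # smallest resolved scales
attenuation_lo = p_anal[lo].sum() / p_true[lo].sum()
attenuation_hi = p_anal[hi].sum() / p_true[hi].sum()
```

The large scales pass through nearly unchanged while the high-wavenumber band is strongly damped, which is the signature the study diagnoses with auto- and cross-spectral analysis.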

Relevance: 10.00%

Abstract:

Advanced forecasting of space weather requires simulation of the whole Sun-to-Earth system, which necessitates driving magnetospheric models with the outputs from solar wind models. This presents a fundamental difficulty, as the magnetosphere is sensitive to both large-scale solar wind structures, which can be captured by solar wind models, and small-scale solar wind “noise,” which is far below typical solar wind model resolution and results primarily from stochastic processes. Following similar approaches in terrestrial climate modeling, we propose statistical “downscaling” of solar wind model results prior to their use as input to a magnetospheric model. As magnetospheric response can be highly nonlinear, this is preferable to downscaling the results of magnetospheric modeling. To demonstrate the benefit of this approach, we first approximate solar wind model output by smoothing solar wind observations with an 8 h filter, then add small-scale structure back in through the addition of random noise with the observed spectral characteristics. Here we use a very simple parameterization of noise based upon the observed probability distribution functions of solar wind parameters, but more sophisticated methods will be developed in the future. An ensemble of results from the simple downscaling scheme is tested using a model-independent method and shown to add value to the magnetospheric forecast, both improving the best estimate and quantifying the uncertainty. We suggest a number of features desirable in an operational solar wind downscaling scheme.
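The downscaling demonstration described above can be sketched as follows. This is a simplified stand-in: synthetic observations replace real solar wind data, and Gaussian noise with the residual standard deviation replaces the paper's parameterization based on observed probability distribution functions.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic 1-min "solar wind speed" observations (km/s): a slow large-scale
# variation plus small-scale stochastic noise
t = np.arange(24 * 60)                   # one day at minute cadence
obs = 400 + 50 * np.sin(2 * np.pi * t / t.size) + rng.normal(0, 10, t.size)

# Step 1: approximate a solar wind *model* output by smoothing the
# observations with an 8 h (480 min) boxcar filter, as in the paper's demo.
# Smoothing the anomaly about the mean limits zero-padding edge artefacts.
w = 480
mean = obs.mean()
model_like = mean + np.convolve(obs - mean, np.ones(w) / w, mode="same")

# Step 2: "downscale" by adding back random small-scale structure whose
# amplitude matches the observed residuals, forming an ensemble of
# plausible high-frequency drivers for a magnetospheric model.
resid_std = np.std(obs - model_like)
n_members = 20
ensemble = model_like + rng.normal(0, resid_std, (n_members, t.size))
```

Each ensemble member is a plausible realization of the unresolved small-scale variability; running the magnetospheric model on all members yields both a best estimate and an uncertainty range.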

Relevance: 10.00%

Abstract:

We present an analysis of the accuracy of the method introduced by Lockwood et al. (1994) for the determination of the magnetopause reconnection rate from the dispersion of precipitating ions in the ionospheric cusp region. Tests are made by applying the method to synthesised data. The simulated cusp ion precipitation data are produced by an analytic model of the evolution of newly-opened field lines, along which magnetosheath ions are first injected across the magnetopause and then dispersed as they propagate into the ionosphere. The rate at which these newly opened field lines are generated by reconnection can be varied. The derived reconnection rate estimates are then compared with the input variation to the model and the accuracy of the method assessed. Results are presented for steady-state reconnection, for continuous reconnection with a sine-wave variation in rate, and for reconnection which occurs only in square-wave pulses. It is found that the method always yields the total flux reconnected (per unit length of the open-closed field-line boundary) to an accuracy of better than 5%, but that pulses tend to be smoothed, so that the peak reconnection rate within a pulse is underestimated and the pulse length is overestimated. This smoothing is reduced if the separation between energy channels of the instrument is reduced; however, this also acts to increase the experimental uncertainty in the estimates, an effect which can be countered by improving the time resolution of the observations. The limited time resolution of the data is shown to set a minimum reconnection rate below which the method gives spurious short-period oscillations about the true value. Various examples of reconnection rate variations derived from cusp observations are discussed in the light of this analysis.
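The key smoothing behaviour, that total reconnected flux is preserved while pulse peaks are underestimated and pulse lengths overestimated, can be illustrated with a square pulse. A boxcar filter stands in here for the effect of finite energy-channel separation; this is an illustrative assumption, not the paper's instrument model.

```python
import numpy as np

# Square-wave reconnection-rate pulse (arbitrary units, arbitrary cadence)
rate = np.zeros(200)
rate[90:110] = 1.0                       # 20-sample pulse with peak 1.0

# Model the finite energy-channel separation as a boxcar smoother
w = 31
estimated = np.convolve(rate, np.ones(w) / w, mode="same")

total_true = rate.sum()                  # total reconnected flux (true)
total_est = estimated.sum()              # total flux is preserved by smoothing
peak_est = estimated.max()               # peak rate is underestimated (< 1.0)
width_est = np.count_nonzero(estimated > 0.05)   # pulse length is overestimated
```

The integral under the pulse survives the smoothing (matching the abstract's better-than-5% total-flux accuracy), while the peak drops to roughly 20/31 of its true value and the apparent pulse duration more than doubles.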

Relevance: 10.00%

Abstract:

This paper investigates the effect on balance of a number of Schur product-type localization schemes which have been designed with the primary function of reducing spurious far-field correlations in forecast error statistics. The localization schemes studied comprise a non-adaptive scheme (where the moderation matrix is decomposed in a spectral basis), and two adaptive schemes, namely a simplified version of SENCORP (Smoothed ENsemble COrrelations Raised to a Power) and ECO-RAP (Ensemble COrrelations Raised to A Power). The paper shows, we believe for the first time, how the degree of balance (geostrophic and hydrostatic) implied by the error covariance matrices localized by these schemes can be diagnosed. Here it is considered that an effective localization scheme is one that reduces spurious correlations adequately but also minimizes disruption of balance (where the 'correct' degree of balance or imbalance is assumed to be possessed by the unlocalized ensemble). By varying free parameters that describe each scheme (e.g. the degree of truncation in the schemes that use the spectral basis, the 'order' of each scheme, and the degree of ensemble smoothing), it is found that a particular configuration of the ECO-RAP scheme is best suited to the convective-scale system studied. According to our diagnostics this ECO-RAP configuration still weakens geostrophic and hydrostatic balance, but overall this is less so than for other schemes.
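A minimal sketch of Schur-product localization in the spirit of ECO-RAP is given below. The ensemble size, the power, and the absence of ensemble smoothing are simplifying assumptions; the schemes studied in the paper are more elaborate.

```python
import numpy as np

rng = np.random.default_rng(3)

# Small ensemble of forecast perturbations: n variables, m members
n, m = 30, 10
X = rng.normal(size=(n, m))
B = np.cov(X)                            # raw (rank-deficient, noisy) covariance

# Ensemble correlation matrix
d = np.sqrt(np.diag(B))
C = B / np.outer(d, d)

# ECO-RAP-style moderation: element-wise (Schur) product of B with the
# correlation matrix raised to an even power. Weak (spurious far-field)
# correlations are damped much faster than strong local ones.
order = 4
B_loc = B * C ** order                   # Schur-product localization

# Diagnose the damping of off-diagonal correlations
C_loc = B_loc / np.outer(np.sqrt(np.diag(B_loc)), np.sqrt(np.diag(B_loc)))
iu = np.triu_indices(n, 1)
mean_offdiag_raw = np.mean(np.abs(C[iu]))
mean_offdiag_loc = np.mean(np.abs(C_loc[iu]))
```

Because the diagonal of C is exactly 1, the variances on the diagonal of B are untouched; only off-diagonal terms are moderated, which is the sense in which such schemes can disturb (geostrophic or hydrostatic) balance relationships encoded in the cross-covariances.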

Relevance: 10.00%

Abstract:

We consider the billiard dynamics in a non-compact subset of ℝ^d that is constructed as a bi-infinite chain of translated copies of the same d-dimensional polytope. A random configuration of semi-dispersing scatterers is placed in each copy. The ensemble of dynamical systems thus defined, one for each global realization of the scatterers, is called a quenched random Lorentz tube. Under some fairly general conditions, we prove that every system in the ensemble is hyperbolic and almost every system is recurrent, ergodic, and enjoys some higher chaotic properties.

Relevance: 10.00%

Abstract:

Quantitative palaeoclimate reconstructions are widely used to evaluate climate model performance. Here, as part of an effort to provide such a data set for Australia, we examine the impact of analytical decisions and sampling assumptions on modern-analogue reconstructions using a continent-wide pollen data set. There is a high degree of correlation between temperature variables in the modern climate of Australia, but there is sufficient orthogonality in the variations of precipitation, summer and winter temperature, and plant-available moisture to allow independent reconstructions of these four variables to be made. The method of analogue selection does not affect the reconstructions, although bootstrap resampling provides a more reliable technique for obtaining robust measures of uncertainty. The number of analogues used affects the quality of the reconstructions: the most robust reconstructions are obtained using five analogues. The quality of reconstructions based on post-1850 CE pollen samples differs little from those using samples from between 1450 and 1849 CE, showing that European post-settlement modification of vegetation has no impact on the fidelity of the reconstructions, although it substantially increases the availability of potential analogues. Reconstructions based on core-top samples are more realistic than those using surface samples, but using only core-top samples would substantially reduce the number of available analogues and therefore increase the uncertainty of the reconstructions. Spatial and/or temporal averaging of pollen assemblages prior to analysis negatively affects the subsequent reconstructions for some variables and increases the associated uncertainties. In addition, the quality of the reconstructions is affected by the degree of spatial smoothing of the original climate data, with the best reconstructions obtained using climate data from a 0.5° resolution grid, which corresponds to the typical size of the pollen catchment.
This study provides a methodology that can be used to produce reliable palaeoclimate reconstructions for Australia, filling a major gap in the data sets used to evaluate climate models.
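The modern-analogue method with five analogues can be sketched as follows. The squared-chord dissimilarity is a common choice for pollen data but is an assumption here, as is the tiny synthetic training set; the study's data set is continent-wide.

```python
import numpy as np

def squared_chord(a, b):
    """Squared-chord dissimilarity between two pollen proportion spectra."""
    return np.sum((np.sqrt(a) - np.sqrt(b)) ** 2)

def analogue_reconstruction(fossil, modern_spectra, modern_climate, k=5):
    """Reconstruct a climate variable as the mean over the k most similar
    modern analogues (k = 5 was the most robust choice in the study)."""
    d = np.array([squared_chord(fossil, s) for s in modern_spectra])
    nearest = np.argsort(d)[:k]
    return modern_climate[nearest].mean(), d[nearest]

# Hypothetical training set: 8 modern pollen spectra (proportions over
# 6 taxa) and their observed mean annual temperature (degrees C)
rng = np.random.default_rng(4)
modern = rng.dirichlet(np.ones(6), size=8)
temps = np.array([12.0, 14.5, 9.0, 18.2, 11.1, 16.0, 13.3, 10.4])

# A fossil sample closely resembling modern site 1
fossil = modern[1] * 0.9 + modern[5] * 0.1
t_hat, dists = analogue_reconstruction(fossil, modern, temps, k=5)
```

Bootstrap resampling of the training set around this core calculation is what the study recommends for robust uncertainty estimates.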

Relevance: 10.00%

Abstract:

We provide new evidence on sea surface temperature (SST) variations and paleoceanographic/paleoenvironmental changes over the past 1500 years for the north Aegean Sea (NE Mediterranean). The reconstructions are based on multiproxy analyses obtained from the high-resolution (decadal to multidecadal) marine record M2, retrieved from the Athos basin. Reconstructed SSTs show an increase from ca. 850 to 950 AD and from ca. 1100 to 1300 AD. A cooling phase of almost 1.5 °C is observed from ca. 1600 to 1700 AD. This seems to have been the starting point of a continuous SST warming trend that persisted until the end of the reconstructed period, interrupted by two prominent cooling events at 1832 ± 15 AD and 1995 ± 1 AD. Application of adaptive kernel smoothing suggests that the current warming in the reconstructed SSTs of the north Aegean might be unprecedented in the context of the past 1500 years. Internal variability in atmospheric/oceanic circulation systems as well as external forcing such as solar radiation and volcanic activity could have affected temperature variations in the north Aegean Sea over the past 1500 years. The marked temperature drop of ~2 °C at 1832 ± 15 AD could be related to the 1809 AD ‘unknown’ and the 1815 AD Tambora volcanic eruptions. Paleoenvironmental proxy indices of the M2 record show enhanced riverine/continental inputs into the northern Aegean after ca. 1450 AD. The paleoclimatic evidence derived from the M2 record is combined with a socio-environmental study of the history of the north Aegean region. We show that the cultivation of temperature-sensitive crops, i.e. walnut, vine and olive, co-occurred with stable and warmer temperatures, while its end coincided with a significant episode of cooler temperatures. Periods of agricultural growth in Macedonia coincide with periods of warmer and more stable SSTs, but further exploration is required to identify the causal links behind the observed phenomena.
The Black Death likely caused major changes in agricultural activity in the north Aegean region, as reflected in the pollen data from land sites of Macedonia and in the M2 proxy reconstructions. Finally, we conclude that the early modern peaks in mountain vegetation in the Rhodope and Macedonia highlands, visible also in the M2 record, were very likely climate-driven.
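Kernel smoothing of a reconstructed SST series can be sketched as follows. A fixed-bandwidth Gaussian (Nadaraya–Watson) smoother is used here as a simplification of the adaptive kernel smoothing applied in the study, and the decadal anomaly series is synthetic.

```python
import numpy as np

def kernel_smooth(t, y, bandwidth):
    """Nadaraya-Watson smoother with a Gaussian kernel: each output value
    is a locally weighted mean of the series (fixed bandwidth here, a
    simplification of adaptive kernel smoothing)."""
    t = np.asarray(t, dtype=float)
    y = np.asarray(y, dtype=float)
    out = np.empty_like(y)
    for i, ti in enumerate(t):
        w = np.exp(-0.5 * ((t - ti) / bandwidth) ** 2)
        out[i] = np.sum(w * y) / np.sum(w)
    return out

# Synthetic decadal SST anomalies: a slow warming trend plus proxy noise
rng = np.random.default_rng(5)
years = np.arange(500, 2000, 10)
sst = 0.001 * (years - 500) + rng.normal(0, 0.3, years.size)
trend = kernel_smooth(years, sst, bandwidth=100.0)
```

Comparing the smoothed recent values with the smoothed earlier part of the series is the kind of diagnostic that underlies statements about whether current warming is unprecedented in the record.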