903 results for Filters


Relevance: 10.00%

Abstract:

This paper describes a method for dynamic data reconciliation of nonlinear systems that are simulated using the sequential modular approach and whose individual modules are represented by a class of differential-algebraic equations. The estimation technique consists of a bank of extended Kalman filters integrated with the modules. The paper reports a study based on experimental data obtained from a pilot-scale mixing process.
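The abstract names the extended Kalman filter as the building block of the estimator bank but gives no implementation details. As a rough illustration, a minimal sketch of one EKF predict/update cycle for a single module follows; all function names and arguments are generic placeholders, not the authors' interface:

```python
import numpy as np

def ekf_step(x, P, u, y, f, h, F, H, Q, R):
    """One extended Kalman filter cycle for a single module.

    x, P : state estimate and its covariance
    u, y : module input and measurement
    f, h : nonlinear transition and measurement functions
    F, H : their Jacobians, evaluated at the current estimate
    Q, R : process and measurement noise covariances
    """
    # Predict: propagate the module state and covariance.
    x_pred = f(x, u)
    Fx = F(x, u)
    P_pred = Fx @ P @ Fx.T + Q
    # Update: correct the prediction with the measurement.
    Hx = H(x_pred)
    S = Hx @ P_pred @ Hx.T + R
    K = P_pred @ Hx.T @ np.linalg.inv(S)
    x_new = x_pred + K @ (y - h(x_pred))
    P_new = (np.eye(len(x)) - K @ Hx) @ P_pred
    return x_new, P_new
```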

Relevance: 10.00%

Abstract:

Adaptive least mean square (LMS) filters with and without training sequences, known as training-based and blind detectors respectively, have been formulated to counter interference in CDMA systems. The convergence characteristics of these two LMS detectors are analyzed and compared in this paper. We show that the blind detector is superior to the training-based detector with respect to convergence rate. On the other hand, the training-based detector performs better in the steady state, giving a lower excess mean-square error (MSE) for a given adaptation step size. A novel decision-directed LMS detector is proposed that achieves the low excess MSE of the training-based detector and the superior convergence performance of the blind detector.
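To make the comparison concrete, the sketch below implements the LMS tap update in both training-based and decision-directed modes for real-valued (BPSK-like) symbols. It is a hypothetical illustration, not the paper's formulation; the blind detector and all CDMA-specific signal models are omitted. In decision-directed mode the taps should be initialised from a brief training phase, mirroring the proposed detector's switch-over:

```python
import numpy as np

def lms_detector(r, w0, d=None, mu=0.01):
    """LMS tap adaptation over a block of received vectors.

    r  : (num_symbols, N) received signal vectors
    w0 : initial taps (e.g. from a short training phase)
    d  : known training symbols, or None for decision-directed
         operation using the sign of the filter output
    mu : step size (trades excess MSE against convergence rate)
    """
    w = w0.copy()
    for n, rn in enumerate(r):
        y = w @ rn                                   # filter output
        ref = d[n] if d is not None else np.sign(y)  # training or decision
        e = ref - y                                  # error signal
        w = w + mu * e * rn                          # LMS update
    return w
```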

Relevance: 10.00%

Abstract:

Three experiments measured constancy in speech perception, using natural-speech messages or noise-band vocoder versions of them. The eight vocoder bands had equally log-spaced center frequencies and the shapes of the corresponding “auditory” filters; consequently, the bands had the temporal envelopes that arise in these auditory filters when the speech is played. The test words “sir” and “stir” were distinguished by their degree of amplitude modulation and were played in the context “next you’ll get _ to click on.” Listeners identified the test words appropriately, even in the vocoder conditions where the speech had a “noise-like” quality. Constancy was assessed by comparing the identification of test words with low or high levels of room reflections across conditions where the context had either a low or a high level of reflections. Constancy was obtained with both the natural and the vocoded speech, indicating that the effect arises through temporal-envelope processing. Two further experiments assessed perceptual weighting of the different bands, both in the test word and in the context. The resulting weighting functions both increase monotonically with frequency, following the spectral characteristics of the test word’s [s]. It is suggested that these two weighting functions are similar because they both come about through the perceptual grouping of the test word’s bands.
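A noise-band vocoder of the kind described keeps each band's temporal envelope while replacing its fine structure with noise. The sketch below processes one band, substituting a Butterworth bandpass and a Hilbert envelope for the auditory filter shapes used in the experiments; it is an illustrative approximation only:

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def vocode_band(speech, noise, low, high, fs):
    """Replace one band of `speech` with envelope-modulated noise."""
    sos = butter(4, [low, high], btype="bandpass", fs=fs, output="sos")
    band = sosfiltfilt(sos, speech)
    envelope = np.abs(hilbert(band))   # temporal envelope of the band
    carrier = sosfiltfilt(sos, noise)  # noise confined to the same band
    return envelope * carrier          # envelope kept, fine structure lost
```

Summing `vocode_band` over the eight bands would give a vocoded message.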

Relevance: 10.00%

Abstract:

We describe a model-data fusion (MDF) inter-comparison project (REFLEX), which compared various algorithms for estimating carbon (C) model parameters consistent with both measured carbon fluxes and states and a simple C model. Participants were provided with the model and with both synthetic net ecosystem exchange (NEE) of CO2 and leaf area index (LAI) data, generated from the model with added noise, and observed NEE and LAI data from two eddy covariance sites. Participants endeavoured to estimate model parameters and states consistent with the model for all cases over the two years for which data were provided, and to generate predictions for one additional year without observations. Nine participants contributed results using Metropolis algorithms, Kalman filters and a genetic algorithm. For the synthetic data case, parameter estimates compared well with the true values. The analyses indicated that parameters linked directly to gross primary production (GPP) and ecosystem respiration, such as those related to foliage allocation and turnover or to the temperature sensitivity of heterotrophic respiration, were best constrained and characterised. Poorly estimated parameters were those related to the allocation to, and turnover of, the fine root and wood pools. Estimates of confidence intervals varied among algorithms, but several algorithms successfully located the true values of annual fluxes in the synthetic experiments within relatively narrow 90% confidence intervals, achieving a >80% success rate and mean NEE confidence intervals of <110 g C m⁻² year⁻¹. Annual C flux estimates generated by participants generally agreed with gap-filling approaches using half-hourly data, and ecosystem respiration and GPP estimated through MDF agreed well with outputs from partitioning studies using half-hourly data. Confidence limits on annual NEE increased by an average of 88% in the prediction year compared to the previous year, when data were available, and increased by 30% when observed data were used instead of synthetic data, reflecting and quantifying the addition of model error. Finally, our analyses indicated that incorporating additional constraints, using data on C pools (wood, soil and fine roots), would help to reduce uncertainties for model parameters poorly served by eddy covariance data.
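Of the algorithms named, the Metropolis sampler is the simplest to sketch. Below is a generic random-walk Metropolis routine for drawing model parameters given a log-posterior (data misfit plus prior); it is not any participant's implementation, and `log_post` is a placeholder:

```python
import numpy as np

def metropolis(log_post, theta0, n_iter=10000, step=0.05, seed=0):
    """Random-walk Metropolis sampling of model parameters."""
    rng = np.random.default_rng(seed)
    theta, lp = theta0.copy(), log_post(theta0)
    chain = []
    for _ in range(n_iter):
        prop = theta + step * rng.standard_normal(theta.size)
        lp_prop = log_post(prop)
        if np.log(rng.uniform()) < lp_prop - lp:  # accept/reject
            theta, lp = prop, lp_prop
        chain.append(theta.copy())
    return np.array(chain)  # e.g. 90% CIs from chain quantiles
```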

Relevance: 10.00%

Abstract:

In this study, we compare two different cyclone-tracking algorithms to detect North Atlantic polar lows, which are very intense mesoscale cyclones. Both approaches include spatial filtering, detection, tracking and constraints specific to polar lows. The first method uses digitally bandpass-filtered mean sea level pressure (MSLP) fields in the spatial range of 200–600 km and is especially designed for polar lows. The second method also uses a bandpass filter, but one based on the discrete cosine transform (DCT), and can be applied to MSLP and vorticity fields. The latter was originally designed for cyclones in general and has been adapted to polar lows for this study. Both algorithms are applied to the same regional climate model output fields from October 1993 to September 1995, produced by dynamical downscaling of the NCEP/NCAR reanalysis data. Comparisons between these two methods show that different filters lead to different numbers and locations of tracks. The DCT is more precise in scale separation than the digital filter, and the results of this study suggest that it is better suited to the bandpass filtering of MSLP fields. The detection and tracking steps also influence the number of tracks, although less critically. After a selection process that applies criteria to identify tracks of potential polar lows, differences between both methods are still visible, though the major systems are identified by both.
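The DCT-based scale separation is compact to illustrate: transform the field, zero the coefficients whose equivalent wavelength falls outside the 200–600 km band, and transform back. The sketch below uses a hard spectral cut-off for brevity, whereas practical filters typically use smoother transfer functions:

```python
import numpy as np
from scipy.fft import dctn, idctn

def dct_bandpass(field, dx, lmin=200e3, lmax=600e3):
    """Bandpass a 2-D field (e.g. MSLP; grid spacing dx in metres)."""
    ny, nx = field.shape
    coeff = dctn(field, norm="ortho")
    m, n = np.meshgrid(np.arange(nx), np.arange(ny))
    k = np.sqrt((m / nx) ** 2 + (n / ny) ** 2)  # nondimensional wavenumber
    wavelength = np.where(k > 0, 2 * dx / np.maximum(k, 1e-12), np.inf)
    coeff[(wavelength < lmin) | (wavelength > lmax)] = 0.0  # hard cut-off
    return idctn(coeff, norm="ortho")
```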

Relevance: 10.00%

Abstract:

The problem of spurious excitation of gravity waves in the context of four-dimensional data assimilation is investigated using a simple model of balanced dynamics. The model admits a chaotic vortical mode coupled to a comparatively fast gravity wave mode, and can be initialized such that the model evolves on a so-called slow manifold, where the fast motion is suppressed. Identical twin assimilation experiments are performed, comparing the extended and ensemble Kalman filters (EKF and EnKF, respectively). The EKF uses a tangent linear model (TLM) to estimate the evolution of forecast error statistics in time, whereas the EnKF uses the statistics of an ensemble of nonlinear model integrations. Specifically, the case is examined where the true state is balanced, but observation errors project onto all degrees of freedom, including the fast modes. It is shown that the EKF and EnKF will assimilate observations in a balanced way only if certain assumptions hold, and that, outside of ideal cases (i.e., with very frequent observations), dynamical balance can easily be lost in the assimilation. For the EKF, the repeated adjustment of the covariances by the assimilation of observations can easily unbalance the TLM, and destroy the assumptions on which balanced assimilation rests. It is shown that an important factor is the choice of initial forecast error covariance matrix. A balance-constrained EKF is described and compared to the standard EKF, and shown to offer significant improvement for observation frequencies where balance in the standard EKF is lost. The EnKF is advantageous in that balance in the error covariances relies only on a balanced forecast ensemble, and that the analysis step is an ensemble-mean operation. Numerical experiments show that the EnKF may be preferable to the EKF in terms of balance, though its validity is limited by ensemble size. It is also found that overobserving can lead to a more unbalanced forecast ensemble and thus to an unbalanced analysis.
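For reference, the EnKF analysis step discussed here is, in its stochastic (perturbed-observation) form, a linear operation on the ensemble. A minimal sketch, not tied to the paper's experimental setup:

```python
import numpy as np

def enkf_analysis(E, y, H, R, rng):
    """Stochastic EnKF analysis.

    E : (n, N) forecast ensemble; y : observations;
    H : (p, n) observation operator; R : (p, p) obs error covariance.
    """
    n, N = E.shape
    A = E - E.mean(axis=1, keepdims=True)            # perturbations
    Pf = A @ A.T / (N - 1)                           # sample covariance
    K = Pf @ H.T @ np.linalg.inv(H @ Pf @ H.T + R)   # Kalman gain
    Y = y[:, None] + rng.multivariate_normal(np.zeros(len(y)), R, size=N).T
    return E + K @ (Y - H @ E)                       # analysis ensemble
```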

Relevance: 10.00%

Abstract:

We report here the construction and characterisation of a BAC library from the maize flint inbred line F2, which is widely used in European maize breeding programs. The library contains 86,858 clones with an average insert size of approximately 90 kb, giving roughly 3.2-fold genome coverage. High-efficiency BAC cloning was achieved through the use of a single size selection for the high-molecular-weight genomic DNA, and co-transformation of the ligation with yeast tRNA to optimise transformation efficiency. Characterisation of the library showed that less than 0.5% of the clones contained no inserts, while 5.52% of clones consisted of chloroplast DNA. The library was gridded onto 29 nylon filters in a double-spotted 8 × 8 array and screened by hybridisation with a number of single-copy and gene-family probes. A three-dimensional DNA pooling scheme was used to allow rapid PCR screening of the library based on primer pairs from simple sequence repeat (SSR) and expressed sequence tag (EST) markers. Positive clones were obtained in all hybridisation and PCR screens carried out so far. Six BAC clones, which hybridised to a portion of the cloned Rp1-D rust resistance gene, were further characterised and found to form contigs covering most of this complex resistance locus.
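The three-dimensional pooling scheme can be made concrete with a short sketch: each clone belongs to exactly one pool per axis, so one PCR-positive pool on each axis identifies a single clone. The pool geometry below is illustrative, not the one used for this library:

```python
def pool_coordinates(clone_index, dims=(24, 16, 24)):
    """Map a clone index to its (x, y, z) pool coordinates."""
    nx, ny, _ = dims
    x = clone_index % nx
    y = (clone_index // nx) % ny
    z = clone_index // (nx * ny)
    return x, y, z

def identify_clone(pos_x, pos_y, pos_z, dims=(24, 16, 24)):
    """Recover a clone index from one positive pool per axis."""
    nx, ny, _ = dims
    return pos_z * nx * ny + pos_y * nx + pos_x
```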

Relevance: 10.00%

Abstract:

Ensemble-based data assimilation is rapidly proving itself as a computationally efficient and skilful assimilation method for numerical weather prediction, and can provide a viable alternative to more established variational assimilation techniques. However, a fundamental shortcoming of ensemble techniques is that the resulting analysis increments can only span a limited subspace of the state space, whose dimension is less than the ensemble size. This limits the amount of observational information that can effectively constrain the analysis. In this paper, a data selection strategy is presented that aims to assimilate only the observational components that matter most and that can be used with both stochastic and deterministic ensemble filters. This avoids unnecessary computations, reduces round-off errors and minimizes the risk of importing observation bias into the analysis. When an ensemble-based assimilation technique is used to assimilate high-density observations, the data selection procedure allows the use of larger localization domains, which may lead to a more balanced analysis. Results from the use of this data selection technique with a two-dimensional linear and a nonlinear advection model, using both in situ and remote sounding observations, are discussed.
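One plausible way to realise such a selection, under assumptions of our own rather than the authors' criterion, is to rotate the observations into the directions spanned by the ensemble and keep only those with a usable signal-to-noise ratio:

```python
import numpy as np

def select_obs_components(A, H, R_sqrt_inv, tol=1.0):
    """Retain observation-space directions the ensemble can constrain.

    A : (n, N) forecast ensemble perturbations; H : (p, n) obs operator;
    R_sqrt_inv : inverse square root of the obs error covariance;
    tol : signal-to-noise threshold.
    """
    N = A.shape[1]
    S = R_sqrt_inv @ H @ A / np.sqrt(N - 1)  # scaled obs-space spread
    U, s, _ = np.linalg.svd(S, full_matrices=False)
    return U[:, s > tol]  # columns span the retained components
```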

Relevance: 10.00%

Abstract:

We study the degree to which Kraichnan–Leith–Batchelor (KLB) phenomenology describes two-dimensional energy cascades in α turbulence, governed by ∂θ/∂t + J(ψ, θ) = ν∇²θ + f, where θ = (−Δ)^(α/2)ψ is the generalized vorticity and ψ̂(k) = k^(−α) θ̂(k) in Fourier space. These models differ in spectral non-locality, and include surface quasigeostrophic flow (α = 1), regular two-dimensional flow (α = 2) and rotating shallow flow (α = 3), which is the isotropic limit of a mantle convection model. We re-examine arguments for dual inverse energy and direct enstrophy cascades, including Fjørtoft analysis, which we extend to general α, and point out their limitations. Using an α-dependent eddy-damped quasinormal Markovian (EDQNM) closure, we seek self-similar inertial range solutions and study their characteristics. Our present focus is not on coherent structures, which the EDQNM filters out, but on any self-similar and approximately Gaussian turbulent component that may exist in the flow and be described by KLB phenomenology. For this, the EDQNM is an appropriate tool. Non-local triads contribute increasingly to the energy flux as α increases. More importantly, the energy cascade is downscale in the self-similar inertial range for 2.5 < α < 10. At α = 2.5 and α = 10, the KLB spectra correspond, respectively, to enstrophy and energy equipartition, and the triad energy transfers and flux vanish identically. Eddy turnover time and strain rate arguments suggest the inverse energy cascade should obey KLB phenomenology and be self-similar for α < 4. However, downscale energy flux in the EDQNM self-similar inertial range for α > 2.5 leads us to predict that any inverse cascade for α ≥ 2.5 will not exhibit KLB phenomenology, and specifically the KLB energy spectrum. Numerical simulations confirm this: the inverse cascade energy spectrum for α ≥ 2.5 is significantly steeper than the KLB prediction, while for α < 2.5 we obtain the KLB spectrum.
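The defining relation ψ̂(k) = k^(−α) θ̂(k) is easy to make concrete. A short sketch inverting θ = (−Δ)^(α/2)ψ on a doubly periodic grid, for arbitrary α:

```python
import numpy as np

def streamfunction_from_theta(theta, alpha, L=2 * np.pi):
    """Recover psi from generalized vorticity theta = (-Delta)^(alpha/2) psi."""
    n = theta.shape[0]
    k1d = 2 * np.pi / L * np.fft.fftfreq(n, d=1.0 / n)
    kx, ky = np.meshgrid(k1d, k1d)
    k = np.sqrt(kx ** 2 + ky ** 2)
    k[0, 0] = 1.0                    # avoid dividing the mean mode by zero
    psi_hat = np.fft.fft2(theta) / k ** alpha  # psi_hat = k^(-alpha) theta_hat
    psi_hat[0, 0] = 0.0              # zero-mean streamfunction
    return np.real(np.fft.ifft2(psi_hat))
```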

Relevance: 10.00%

Abstract:

Anthropogenic emissions of heat and exhaust gases play an important role in the atmospheric boundary layer, altering air quality, greenhouse gas concentrations and the transport of heat and moisture at various scales. This is particularly evident in urban areas, where emission sources are integrated in the highly heterogeneous urban canopy layer and directly linked to human activities, which exhibit significant temporal variability. It is common practice to use eddy covariance observations to estimate turbulent surface fluxes of latent heat, sensible heat and carbon dioxide, which can be attributed to a local-scale source area. This study provides a method to assess the influence of micro-scale anthropogenic emissions on heat, moisture and carbon dioxide exchange in a highly urbanised environment, applied at two sites in central London, UK. A new algorithm for the Identification of Micro-scale Anthropogenic Sources (IMAS) is presented, with two aims. Firstly, IMAS filters out the influence of micro-scale emissions and allows for the analysis of the turbulent fluxes representative of the local-scale source area. Secondly, it is used to give a first-order estimate of the anthropogenic heat flux and carbon dioxide flux representative of the building scale. The algorithm is evaluated using directional and temporal analysis, and is then applied at a second site that was not used in its development. The local-scale spatial and temporal patterns, as well as the micro-scale fluxes, appear physically reasonable and can be incorporated in the analysis of long-term eddy covariance measurements at the sites in central London. In addition to the new IMAS technique, further steps in quality control and quality assurance used for the flux processing are presented. The methods and results have implications for urban flux measurements in dense urbanised settings with significant sources of heat and greenhouse gases.
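IMAS combines directional and temporal criteria; as a heavily simplified illustration of the directional ingredient alone, the sketch below masks flux intervals whose mean wind direction crosses known micro-scale sources (the sector definitions are hypothetical):

```python
import numpy as np

def directional_mask(wind_dir, source_sectors):
    """True where a flux interval is free of flagged micro-scale sources.

    wind_dir       : mean wind directions per interval (degrees)
    source_sectors : list of (lo, hi) sectors, degrees, around sources
    """
    affected = np.zeros_like(wind_dir, dtype=bool)
    for lo, hi in source_sectors:
        if lo <= hi:
            affected |= (wind_dir >= lo) & (wind_dir <= hi)
        else:  # sector wrapping through north, e.g. (350, 10)
            affected |= (wind_dir >= lo) | (wind_dir <= hi)
    return ~affected
```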

Relevance: 10.00%

Abstract:

We consider the problem of discrete-time filtering (intermittent data assimilation) for differential equation models and discuss methods for its numerical approximation. The focus is on methods based on ensemble/particle techniques, and on the ensemble Kalman filter technique in particular. We summarize and extend recent work on continuous ensemble Kalman filter formulations, which provide a concise dynamical-systems formulation of the combined dynamics–assimilation problem. Possible extensions to fully nonlinear ensemble/particle-based filters are also outlined using the framework of optimal transportation theory.
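In a continuous formulation of the kind summarized here, the assimilation appears as an extra term alongside the model dynamics in a single ensemble ODE. A sketch of one explicit Euler step of a Kalman–Bucy-type ensemble update, under simplifying assumptions of our own:

```python
import numpy as np

def continuous_enkf_step(E, y, H, Rinv, f, dt):
    """Euler step of a continuous ensemble Kalman filter.

    E : (n, N) ensemble; y : observation; H : (p, n) obs operator;
    Rinv : inverse obs error covariance; f : model vector field,
    applied columnwise to the ensemble.
    """
    N = E.shape[1]
    xbar = E.mean(axis=1, keepdims=True)
    A = E - xbar
    P = A @ A.T / (N - 1)
    D = 0.5 * (H @ E + H @ xbar) - y[:, None]  # averaged innovation
    return E + dt * (f(E) - P @ H.T @ Rinv @ D)
```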

Relevance: 10.00%

Abstract:

Many applications, such as intermittent data assimilation, lead to a recursive application of Bayesian inference within a Monte Carlo context. Popular data assimilation algorithms include sequential Monte Carlo methods and ensemble Kalman filters (EnKFs). These methods differ in the way Bayesian inference is implemented. Sequential Monte Carlo methods rely on importance sampling combined with a resampling step, while EnKFs utilize a linear transformation of Monte Carlo samples based on the classic Kalman filter. While EnKFs have proven to be quite robust even for small ensemble sizes, they are not consistent since their derivation relies on a linear regression ansatz. In this paper, we propose another transform method, which does not rely on any a priori assumptions on the underlying prior and posterior distributions. The new method is based on solving an optimal transportation problem for discrete random variables.
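The transform can be sketched with a small linear program: compute the optimal-transport coupling between the importance-weighted particles and uniform weights, then move each particle to the coupling-weighted average. This illustrates the concept only (the dense formulation is practical just for modest ensemble sizes), not the paper's implementation:

```python
import numpy as np
from scipy.optimize import linprog

def ot_transform(X, w):
    """Map weighted particles X (N, d) with weights w to equal weights."""
    N = len(X)
    C = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1).ravel()  # sq. distances
    A_eq = np.zeros((2 * N, N * N))
    for i in range(N):
        A_eq[i, i * N:(i + 1) * N] = 1.0  # row sums:    sum_j T_ij = w_i
        A_eq[N + i, i::N] = 1.0           # column sums: sum_i T_ij = 1/N
    b_eq = np.concatenate([w, np.full(N, 1.0 / N)])
    res = linprog(C, A_eq=A_eq, b_eq=b_eq, bounds=(0, None), method="highs")
    T = res.x.reshape(N, N)
    return N * (T.T @ X)  # each new particle: weighted average of old ones
```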

Relevance: 10.00%

Abstract:

The discrete Fourier transform spread OFDM (DFT-S-OFDM) based single-carrier frequency division multiple access (SC-FDMA) has been widely adopted due to the lower peak-to-average power ratio (PAPR) of its transmit signals compared with OFDM. However, offset modulation, which has lower PAPR than general modulation, cannot be directly applied to the existing SC-FDMA. When pulse-shaping filters are employed to further reduce the envelope fluctuation of SC-FDMA transmit signals, the spectral efficiency degrades as well. In order to overcome these limitations of conventional SC-FDMA, this paper investigates, for the first time, cyclic prefixed OQAM-OFDM (CP-OQAM-OFDM) based SC-FDMA transmission with adjustable user bandwidth and space-time coding. Firstly, we propose CP-OQAM-OFDM transmission with unequally spaced subbands. We then apply it to SC-FDMA transmission and propose an SC-FDMA scheme with the following features: a) the transmit signal of each user is offset-modulated single-carrier with frequency-domain pulse-shaping; b) the bandwidth of each user is adjustable; c) the spectral efficiency does not decrease with increasing roll-off factors. To combat both inter-symbol interference and multiple access interference in frequency-selective fading channels, a joint linear minimum mean square error frequency-domain equalization using a priori information with low complexity is developed. Subsequently, we construct space-time codes for the proposed SC-FDMA. Simulation results confirm the effectiveness and low complexity of the proposed CP-OQAM-OFDM scheme.
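The PAPR advantage of DFT-precoded transmission over plain OFDM, which motivates the construction, can be shown in a few lines. The toy below uses QPSK, localized subcarrier mapping and no pulse shaping or OQAM, so it demonstrates only the baseline effect:

```python
import numpy as np

def papr_db(x):
    """Peak-to-average power ratio of a complex baseband block, in dB."""
    p = np.abs(x) ** 2
    return 10 * np.log10(p.max() / p.mean())

rng = np.random.default_rng(1)
M, Nfft = 64, 512  # user symbols per block, total subcarriers
sym = (rng.choice([-1, 1], M) + 1j * rng.choice([-1, 1], M)) / np.sqrt(2)

X_ofdm = np.zeros(Nfft, complex)
X_ofdm[:M] = sym            # plain OFDM: symbols straight onto subcarriers
X_sc = np.zeros(Nfft, complex)
X_sc[:M] = np.fft.fft(sym)  # SC-FDMA: DFT-precode, then map

print(f"OFDM PAPR:    {papr_db(np.fft.ifft(X_ofdm)):.1f} dB")
print(f"SC-FDMA PAPR: {papr_db(np.fft.ifft(X_sc)):.1f} dB")  # a few dB lower
```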

Relevance: 10.00%

Abstract:

A potential problem with the ensemble Kalman filter is the implicit Gaussian assumption at analysis times. Here we explore the performance of a recently proposed fully nonlinear particle filter, in which this Gaussian assumption is not made, on a high-dimensional but simplified ocean model. The model simulates the evolution of the vorticity field in time, described by the barotropic vorticity equation, in a highly nonlinear flow regime. While it is common knowledge that particle filters are inefficient and need large numbers of model runs to avoid degeneracy, the newly developed particle filter needs only of the order of 10–100 particles on large-scale problems. The crucial new ingredient is that the proposal density can be used not only to ensure that all particles end up in high-probability regions of state space as defined by the observations, but also to ensure that most of the particles have similar weights. Using identical-twin experiments, we found that the ensemble mean follows the truth reliably and that the difference from the truth is captured by the ensemble spread. A rank histogram is used to show that the truth run is indistinguishable from any of the particles, demonstrating the statistical consistency of the method.
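The rank histogram diagnostic mentioned at the end is simple to compute: rank the truth within the particle ensemble at each verification time and histogram the ranks, which should be close to flat for a statistically consistent ensemble. A minimal sketch:

```python
import numpy as np

def rank_histogram(truth, ensembles):
    """truth : (T,) true values; ensembles : (T, N) particle values."""
    ranks = (ensembles < truth[:, None]).sum(axis=1)  # rank of truth, 0..N
    return np.bincount(ranks, minlength=ensembles.shape[1] + 1)
```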

Relevance: 10.00%

Abstract:

Particle filters are fully non-linear data assimilation techniques that aim to represent the probability distribution of the model state given the observations (the posterior) by a number of particles. In high-dimensional geophysical applications, the number of particles required by the sequential importance resampling (SIR) particle filter to capture the high-probability region of the posterior is too large for the filter to be usable. However, particle filters can be formulated using proposal densities, which give greater freedom in how particles are sampled and allow for a much smaller number of particles. Here a particle filter is presented that uses the proposal density to ensure that all particles end up in the high-probability region of the posterior probability density function. This opens the possibility of non-linear data assimilation in high-dimensional systems. The particle filter formulation is compared to the optimal proposal density particle filter and the implicit particle filter, both of which also utilise a proposal density. We show that when observations are available at every time step, both of these schemes will be degenerate when the number of independent observations is large, unlike the new scheme. The sensitivity of the new scheme to its parameter values is explored theoretically and demonstrated using the Lorenz (1963) model.
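A generic particle filter step with an arbitrary proposal density, showing where the weights w ∝ w · p(y|x) p(x|x_prev) / q(x|x_prev, y) enter, is sketched below; every function argument is a placeholder rather than the paper's scheme:

```python
import numpy as np

def pf_step(particles, w, y, sample_q, log_lik, log_trans, log_q, rng):
    """One step of a particle filter with proposal density q.

    particles : (N, d) current particles, w : their weights,
    sample_q  : draws x_new ~ q(. | x_old, y),
    log_lik, log_trans, log_q : log densities of p(y|x), p(x|x_old), q.
    """
    new = np.array([sample_q(x, y, rng) for x in particles])
    logw = (np.log(w)
            + np.array([log_lik(y, xn) for xn in new])
            + np.array([log_trans(xn, x) for xn, x in zip(new, particles)])
            - np.array([log_q(xn, x, y) for xn, x in zip(new, particles)]))
    logw -= logw.max()
    w_new = np.exp(logw)
    w_new /= w_new.sum()
    # Resample when the effective ensemble size collapses.
    if 1.0 / np.sum(w_new ** 2) < 0.5 * len(w_new):
        idx = rng.choice(len(new), size=len(new), p=w_new)
        new, w_new = new[idx], np.full(len(new), 1.0 / len(new))
    return new, w_new
```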