951 results for "vector quantization based Gaussian modeling"


Relevance: 30.00%

Abstract:

We generalize the popular ensemble Kalman filter to an ensemble transform filter, in which the prior distribution can take the form of a Gaussian mixture or a Gaussian kernel density estimator. The design of the filter is based on a continuous formulation of the Bayesian filter analysis step. We call the new filter algorithm the ensemble Gaussian-mixture filter (EGMF). The EGMF is implemented for three simple test problems (Brownian dynamics in one dimension, Langevin dynamics in two dimensions and the three-dimensional Lorenz-63 model). It is demonstrated that the EGMF is capable of tracking systems with non-Gaussian uni- and multimodal ensemble distributions. Copyright © 2011 Royal Meteorological Society
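
As a rough illustration of the analysis step that the EGMF generalizes, the sketch below performs a Bayesian update of a one-dimensional Gaussian-mixture prior against a Gaussian observation, i.e. a plain Gaussian-mixture filter step rather than the paper's continuous ensemble-transform formulation; the ensemble, the observation value y and the error variance R are all made up.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

# Made-up bimodal prior ensemble (e.g. from Brownian dynamics in 1D).
ensemble = np.concatenate([rng.normal(-2.0, 0.5, 50), rng.normal(2.0, 0.5, 50)])

# Fit a two-component Gaussian mixture to the prior ensemble (crude split by sign).
means = np.array([ensemble[ensemble < 0].mean(), ensemble[ensemble >= 0].mean()])
varis = np.array([ensemble[ensemble < 0].var(), ensemble[ensemble >= 0].var()])
weights = np.array([(ensemble < 0).mean(), (ensemble >= 0).mean()])

# Gaussian observation y = x + noise, with error variance R (assumed values).
y, R = 1.5, 0.25

# Bayes' rule for a Gaussian-mixture prior: each component is Kalman-updated
# and its weight is rescaled by the marginal likelihood of y under that component.
gains = varis / (varis + R)
post_means = means + gains * (y - means)
post_varis = (1.0 - gains) * varis
post_weights = weights * norm.pdf(y, loc=means, scale=np.sqrt(varis + R))
post_weights /= post_weights.sum()

print("posterior weights:", post_weights)
print("posterior means:  ", post_means)
print("posterior vars:   ", post_varis)
```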

Relevance: 30.00%

Abstract:

In this paper a modified algorithm is suggested for developing polynomial neural network (PNN) models. Optimal partial description (PD) modeling is introduced at each layer of the PNN expansion, a task accomplished using the orthogonal least squares (OLS) method. Based on the initial PD models determined by the polynomial order and the number of PD inputs, OLS selects the most significant regressor terms, reducing the output error variance. The method produces PNN models exhibiting a high level of accuracy and superior generalization capabilities. Additionally, parsimonious models are obtained, comprising a considerably smaller number of parameters than those generated by the conventional PNN algorithm. Three benchmark examples are examined, including modeling of the gas furnace process as well as the iris and wine classification problems. Extensive simulation results and comparisons with other methods in the literature demonstrate the effectiveness of the suggested modeling approach.
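
A minimal sketch of the kind of forward selection used above, under the simplifying assumption that the candidate partial-description terms are plain polynomial regressors: terms are added greedily, keeping at each step the one that most reduces the residual (output error) variance. The data, the candidate set and the number of selected terms are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Made-up two-input data from an unknown nonlinear system.
x1, x2 = rng.uniform(-1, 1, 200), rng.uniform(-1, 1, 200)
y = 0.8 * x1 - 0.5 * x1 * x2 + 0.3 * x2**2 + 0.05 * rng.standard_normal(200)

# Candidate second-order polynomial regressors (a simple stand-in for PD terms).
candidates = {
    "x1": x1, "x2": x2, "x1^2": x1**2, "x2^2": x2**2, "x1*x2": x1 * x2,
}

selected, X = [], np.ones((len(y), 1))           # start from a bias term
for _ in range(3):                               # pick the 3 most significant terms
    best_name, best_rss = None, np.inf
    for name, col in candidates.items():
        if name in selected:
            continue
        Xc = np.column_stack([X, col])
        theta, *_ = np.linalg.lstsq(Xc, y, rcond=None)
        rss = np.sum((y - Xc @ theta) ** 2)      # residual sum of squares
        if rss < best_rss:
            best_name, best_rss = name, rss
    selected.append(best_name)
    X = np.column_stack([X, candidates[best_name]])

print("selected terms:", selected)
```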

Relevance: 30.00%

Abstract:

Vintage-based vector autoregressive models of a single macroeconomic variable are shown to be a useful vehicle for obtaining forecasts of different maturities of future and past observations, including estimates of post-revision values. The forecasting performance of models which include information on annual revisions is superior to that of models which only include the first two data releases. However, the empirical results indicate that a model which reflects the seasonal nature of data releases more closely does not offer much improvement over an unrestricted vintage-based model which includes three rounds of annual revisions.
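
One simple way to lay out such a vintage-based VAR is sketched below: the observation vector at each vintage stacks releases of increasing maturity, a VAR(1) is fitted by least squares, and iterating it forward yields forecasts of later releases, including post-revision estimates. The data are simulated and the three-release layout is an assumption made for illustration, not the paper's exact specification.

```python
import numpy as np

rng = np.random.default_rng(2)

# Simulate "true" post-revision values and three data releases per period:
# each release adds revision noise that shrinks with maturity.
T = 200
truth = np.cumsum(0.1 * rng.standard_normal(T)) + 2.0
releases = np.column_stack([truth + s * rng.standard_normal(T) for s in (0.30, 0.15, 0.05)])

# Vintage vector at time t: (first release of t, second release of t-1, third release of t-2).
Z = np.column_stack([releases[2:, 0], releases[1:-1, 1], releases[:-2, 2]])

# Fit a VAR(1)  Z_t = c + A Z_{t-1} + e_t  by least squares.
Y, X = Z[1:], np.column_stack([np.ones(len(Z) - 1), Z[:-1]])
B, *_ = np.linalg.lstsq(X, Y, rcond=None)        # rows: intercept, then A'

# One- and two-step-ahead forecasts: the last element approaches the
# post-revision estimate of earlier observations as the horizon grows.
z = Z[-1]
for h in (1, 2):
    z = B[0] + z @ B[1:]
    print(f"h={h} forecast of vintage vector:", np.round(z, 3))
```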

Relevance: 30.00%

Abstract:

Many communication signal processing applications involve modelling and inverting complex-valued (CV) Hammerstein systems. We develop a new CV B-spline neural network approach for efficient identification of the CV Hammerstein system and effective inversion of the estimated CV Hammerstein model. Specifically, the CV nonlinear static function in the Hammerstein system is represented using the tensor product of two univariate B-spline neural networks. An efficient alternating least squares estimation method is adopted for identifying the CV linear dynamic model's coefficients and the CV B-spline neural network's weights; it yields closed-form solutions for both sets of parameters, and the estimation process is guaranteed to converge very fast to a unique minimum solution. Furthermore, an accurate inversion of the CV Hammerstein system can readily be obtained using the estimated model. In particular, the inversion of the CV nonlinear static function in the Hammerstein system can be calculated effectively using a Gauss-Newton algorithm, which naturally incorporates the efficient De Boor algorithm with both the B-spline curve and first-order derivative recursions. The effectiveness of our approach is demonstrated through its application to the equalisation of Hammerstein channels.
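
The alternating least-squares structure described above can be sketched with a much simpler real-valued stand-in: here the static nonlinearity is parameterised by a small polynomial basis instead of the paper's complex-valued B-spline network, and the system, noise level and basis are all invented. Note that the nonlinearity and the linear filter are only identified up to a shared gain factor.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hammerstein system: v = f(u) (static nonlinearity), then y = FIR filter of v.
u = rng.uniform(-1, 1, 500)
v_true = np.tanh(2.0 * u)
h_true = np.array([1.0, 0.5, 0.2])
y = np.convolve(v_true, h_true)[: len(u)] + 0.01 * rng.standard_normal(len(u))

# Parameterize f(u) with a polynomial basis (stand-in for the B-spline network).
def basis(u):
    return np.column_stack([u, u**3, u**5])

def delayed(v, taps=3):
    # Matrix whose columns are v delayed by 0, 1, ..., taps-1 samples.
    return np.column_stack([np.concatenate([np.zeros(k), v[: len(v) - k]]) for k in range(taps)])

w = np.array([1.0, 0.0, 0.0])                    # initial nonlinearity weights
for _ in range(10):                              # alternating least squares
    # Step 1: with f fixed, estimate the FIR coefficients h by least squares.
    v = basis(u) @ w
    h, *_ = np.linalg.lstsq(delayed(v), y, rcond=None)
    # Step 2: with h fixed, estimate the nonlinearity weights w by least squares;
    # y is linear in w because filtering and the basis expansion commute.
    Phi = np.stack([delayed(b) @ h for b in basis(u).T], axis=1)
    w, *_ = np.linalg.lstsq(Phi, y, rcond=None)

# Note: f and h are only identified up to a common gain factor.
print("estimated FIR taps:", np.round(h, 3))
print("nonlinearity weights:", np.round(w, 3))
```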

Relevance: 30.00%

Abstract:

This paper proposes a new reconstruction method for diffuse optical tomography using reduced-order models of light transport in tissue. The models, which directly map optical tissue parameters to optical flux measurements at the detector locations, are derived based on data generated by numerical simulation of a reference model. The reconstruction algorithm based on the reduced-order models is a few orders of magnitude faster than the one based on a finite element approximation on a fine mesh incorporating a priori anatomical information acquired by magnetic resonance imaging. We demonstrate the accuracy and speed of the approach using a phantom experiment and through numerical simulation of brain activation in a rat's head. The applicability of the approach for real-time monitoring of brain hemodynamics is demonstrated through a hypercapnic experiment. We show that our results agree with the expected physiological changes and with results of a similar experimental study. However, by using our approach, a three-dimensional tomographic reconstruction can be performed in ∼3 s per time point, instead of the 1 to 2 h it takes when using the conventional finite element modeling approach.
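
A generic sketch of the reduced-order idea, under heavy simplification: a cheap placeholder forward model stands in for the FEM light-transport model, a polynomial surrogate mapping tissue parameters directly to detector measurements is fitted to simulated training data, and reconstruction is then a small least-squares fit against the surrogate. The forward model, parameter ranges and surrogate form are all assumptions for illustration.

```python
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(4)

# Placeholder "reference model": maps 3 tissue absorption parameters to 8
# detector fluxes.  In the paper this role is played by an expensive FEM model.
A = rng.standard_normal((8, 3))
def reference_model(mu):
    return np.exp(-A @ mu)                       # stand-in for photon attenuation

# Build the reduced-order model from simulated training data: sample parameters,
# run the reference model, and fit a direct quadratic map parameters -> fluxes.
mus = rng.uniform(0.0, 1.0, size=(300, 3))
flux = np.array([reference_model(m) for m in mus])
design = np.column_stack([np.ones(300), mus, mus**2])
coef, *_ = np.linalg.lstsq(design, flux, rcond=None)

def surrogate(mu):
    d = np.concatenate([[1.0], mu, mu**2])
    return d @ coef

# Reconstruction: fit the surrogate's parameters to a (noisy) measurement.
mu_true = np.array([0.3, 0.7, 0.5])
measured = reference_model(mu_true) + 0.001 * rng.standard_normal(8)
fit = least_squares(lambda m: surrogate(m) - measured, x0=np.full(3, 0.5),
                    bounds=(0.0, 1.0))
print("true parameters:     ", mu_true)
print("reconstructed (ROM): ", np.round(fit.x, 3))
```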

Relevance: 30.00%

Abstract:

This paper presents a hierarchical clustering method for semantic Web service discovery. The method aims to improve the accuracy and efficiency of traditional service discovery based on the vector space model. Each Web service is converted into a standard vector format from its Web service description document. With the help of WordNet, a semantic analysis is conducted to reduce the dimension of the term vector and to perform semantic expansion to meet the user's service request. The process and algorithm of hierarchical-clustering-based semantic Web service discovery are discussed, and validation is carried out on the dataset.
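
A compact sketch of the vector-space plus hierarchical-clustering pipeline, with made-up service descriptions and without the WordNet-based dimension reduction and semantic expansion step: services are converted to TF-IDF term vectors, clustered agglomeratively on cosine distance, and a request is matched first to a cluster and then to its members.

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity
from scipy.cluster.hierarchy import linkage, fcluster

# Made-up Web service descriptions (in practice extracted from the service
# description documents; the paper additionally uses WordNet for expansion).
services = [
    "book flight ticket airline reservation",
    "reserve airplane seat travel booking",
    "weather forecast temperature city",
    "city climate rain temperature report",
    "currency exchange rate conversion money",
]

vec = TfidfVectorizer()
X = vec.fit_transform(services).toarray()        # services as term vectors

# Agglomerative (hierarchical) clustering on cosine distance.
Z = linkage(X, method="average", metric="cosine")
labels = fcluster(Z, t=3, criterion="maxclust")  # cut the dendrogram at 3 clusters
print("cluster labels:", labels)

# Service request: match against cluster centroids first, then cluster members.
request = vec.transform(["flight booking service"]).toarray()
centroids = np.array([X[labels == c].mean(axis=0) for c in np.unique(labels)])
best_cluster = np.unique(labels)[cosine_similarity(request, centroids).argmax()]
members = np.where(labels == best_cluster)[0]
best = members[cosine_similarity(request, X[members]).argmax()]
print("best matching service:", services[best])
```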

Relevance: 30.00%

Abstract:

We consider the forecasting of macroeconomic variables that are subject to revisions, using Bayesian vintage-based vector autoregressions. The prior incorporates the belief that, after the first few data releases, subsequent ones are likely to consist of revisions that are largely unpredictable. The Bayesian approach allows the joint modelling of the data revisions of more than one variable, while keeping the concomitant increase in parameter estimation uncertainty manageable. Our model provides markedly more accurate forecasts of post-revision values of inflation than do other models in the literature.
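
Building on the vintage-vector layout sketched after the earlier abstract, the snippet below adds a ridge-style prior whose mean encodes the belief that later releases simply carry forward the earlier ones plus unpredictable revisions; the prior tightness lam and the simulated data are illustrative assumptions, not the paper's specification.

```python
import numpy as np

rng = np.random.default_rng(5)

# Vintage vectors Z_t as in the earlier sketch: columns are releases of
# increasing maturity (first release, one revision, two revisions).
T = 200
truth = np.cumsum(0.1 * rng.standard_normal(T)) + 2.0
releases = np.column_stack([truth + s * rng.standard_normal(T) for s in (0.30, 0.15, 0.05)])
Z = np.column_stack([releases[2:, 0], releases[1:-1, 1], releases[:-2, 2]])
Y, X = Z[1:], np.column_stack([np.ones(len(Z) - 1), Z[:-1]])

# Prior mean for the VAR(1) coefficients: revisions are unpredictable, so a
# mature release is expected simply to carry forward the previous, less mature
# release of the same observation (a random-walk-style prior).
B0 = np.zeros((4, 3))
B0[1, 1] = 1.0      # second release of y_{t-1} ~ its first release seen at t-1
B0[2, 2] = 1.0      # third release of y_{t-2} ~ its second release seen at t-1

# Ridge-style posterior mean with prior tightness lam:
#   B_post = (X'X + lam I)^(-1) (X'Y + lam B0)
lam = 50.0
B_post = np.linalg.solve(X.T @ X + lam * np.eye(4), X.T @ Y + lam * B0)

z = Z[-1]
forecast = B_post[0] + z @ B_post[1:]
print("forecast vintage vector (last entry ~ post-revision value):",
      np.round(forecast, 3))
```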

Relevance: 30.00%

Abstract:

Current European Union regulatory risk assessment allows application of pesticides provided that recovery of nontarget arthropods in-crop occurs within a year. Despite the long-established theory of source-sink dynamics, risk assessment ignores depletion of surrounding populations and typical field trials are restricted to plot-scale experiments. In the present study, the authors used agent-based modeling of 2 contrasting invertebrates, a spider and a beetle, to assess how the area of pesticide application and environmental half-life affect the assessment of recovery at the plot scale and impact the population at the landscape scale. Small-scale plot experiments were simulated for pesticides with different application rates and environmental half-lives. The same pesticides were then evaluated at the landscape scale (10 km × 10 km) assuming continuous year-on-year usage. The authors' results show that recovery time estimated from plot experiments is a poor indicator of long-term population impact at the landscape level and that the spatial scale of pesticide application strongly determines population-level impact. This raises serious doubts as to the utility of plot-recovery experiments in pesticide regulatory risk assessment for population-level protection. Predictions from the model are supported by empirical evidence from a series of studies carried out in the decade starting in 1988. The issues raised then can now be addressed using simulation. Prediction of impacts at landscape scales should be more widely used in assessing the risks posed by environmental stressors.

Relevance: 30.00%

Abstract:

Advanced forecasting of space weather requires simulation of the whole Sun-to-Earth system, which necessitates driving magnetospheric models with the outputs from solar wind models. This presents a fundamental difficulty, as the magnetosphere is sensitive to both large-scale solar wind structures, which can be captured by solar wind models, and small-scale solar wind “noise,” which is far below typical solar wind model resolution and results primarily from stochastic processes. Following similar approaches in terrestrial climate modeling, we propose statistical “downscaling” of solar wind model results prior to their use as input to a magnetospheric model. As magnetospheric response can be highly nonlinear, this is preferable to downscaling the results of magnetospheric modeling. To demonstrate the benefit of this approach, we first approximate solar wind model output by smoothing solar wind observations with an 8 h filter, then add small-scale structure back in through the addition of random noise with the observed spectral characteristics. Here we use a very simple parameterization of noise based upon the observed probability distribution functions of solar wind parameters, but more sophisticated methods will be developed in the future. An ensemble of results from the simple downscaling scheme is tested using a model-independent method and shown to add value to the magnetospheric forecast, both improving the best estimate and quantifying the uncertainty. We suggest a number of features desirable in an operational solar wind downscaling scheme.
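
A toy version of the proposed downscaling: solar wind model output is approximated by smoothing synthetic "observations" with an 8 h boxcar filter, and an ensemble is generated by adding back noise resampled from the empirical distribution of the residuals (the paper uses the observed probability distribution functions and spectral characteristics; plain residual resampling is a simplification).

```python
import numpy as np

rng = np.random.default_rng(6)

# One-minute "observed" solar wind speed (synthetic stand-in for real data).
n = 24 * 60
t = np.arange(n)
observed = 400 + 50 * np.sin(2 * np.pi * t / n) + 20 * rng.standard_normal(n)

# Approximate a solar wind model's output by smoothing with an 8-hour boxcar.
window = 8 * 60
kernel = np.ones(window) / window
model_like = np.convolve(observed, kernel, mode="same")

# Downscaling: add back small-scale structure drawn from the empirical
# distribution of the residuals to form an ensemble of plausible inputs.
residuals = observed - model_like
ensemble = np.array([model_like + rng.choice(residuals, size=n, replace=True)
                     for _ in range(20)])

print("std of smoothed (model-like) input:", round(model_like.std(), 1))
print("std of downscaled ensemble:        ", round(ensemble.std(), 1),
      " (observed:", round(observed.std(), 1), ")")
```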

Relevance: 30.00%

Abstract:

We apply a numerical model of time-dependent ionospheric convection to two directly driven reconnection pulses during a 15-min interval of southward IMF on 26 November 2000. The model requires an input magnetopause reconnection rate variation, which is here derived from the observed variation in the upstream IMF clock angle, θ. The reconnection rate is mapped to an ionospheric merging gap, the MLT extent of which is inferred from the Doppler-shifted Lyman-α emission on newly opened field lines, as observed by the FUV instrument on the IMAGE spacecraft. The model is used to reproduce a variety of features observed during this event: SuperDARN observations of the ionospheric convection pattern and transpolar voltage; FUV observations of the growth of patches of newly opened flux; and FUV and in situ observations of the location of the open-closed field line boundary (OCB) and a cusp ion step. We adopt a clock angle dependence of the magnetopause reconnection electric field, mapped to the ionosphere, of the form E_N0 sin^4(θ/2) and estimate the peak value, E_N0, by matching observed and modeled variations of both the latitude, Λ_OCB, of the dayside OCB (as inferred from the equatorward edge of cusp proton emissions seen by FUV) and the transpolar voltage Φ_PC (as derived using the mapped potential technique from SuperDARN HF radar data). This analysis also yields the time constant τ_OCB with which the open-closed boundary relaxes back toward its equilibrium configuration. For the case studied here, we find τ_OCB = 9.7 ± 1.3 min, consistent with previous inferences from the observed response of ionospheric flow to southward turnings of the IMF. The analysis confirms quantitatively the concepts of ionospheric flow excitation on which the model is based and explains some otherwise anomalous features of the cusp precipitation morphology.
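
The two quantitative ingredients quoted above are easy to illustrate: the clock-angle dependence E = E_N0 sin^4(θ/2) of the mapped reconnection electric field, and a first-order relaxation of the OCB latitude toward its equilibrium with time constant τ_OCB ≈ 9.7 min. The clock-angle series, the peak value E_N0 and the equilibrium-latitude relation in the sketch are invented for illustration.

```python
import numpy as np

# IMF clock angle theta (radians) sampled every minute during a 15-minute
# interval (made-up values for illustration).
theta = np.deg2rad(np.array([30, 60, 120, 160, 170, 150, 100, 60,
                             80, 130, 165, 170, 140, 90, 50], dtype=float))

# Clock-angle dependence of the magnetopause reconnection electric field
# mapped to the ionosphere, E = E_N0 * sin^4(theta/2), with an assumed peak value.
E_N0 = 30.0                                  # mV/m, illustrative only
E = E_N0 * np.sin(theta / 2.0) ** 4

# First-order relaxation of the open-closed boundary latitude toward its
# equilibrium for the prevailing reconnection rate, with time constant tau_OCB.
tau = 9.7                                    # minutes, value inferred in the study
lat_eq = 78.0 - 0.1 * E                      # illustrative equilibrium latitude (deg)
lat = np.empty_like(E)
lat[0] = 78.0
for i in range(1, len(E)):
    lat[i] = lat[i - 1] + (lat_eq[i] - lat[i - 1]) / tau   # Euler step, dt = 1 min

print("reconnection E-field (mV/m):", np.round(E, 1))
print("modeled OCB latitude (deg): ", np.round(lat, 2))
```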

Relevance: 30.00%

Abstract:

We employ a numerical model of cusp ion precipitation and proton aurora emission to fit variations of the peak Doppler-shifted Lyman-α intensity observed on 26 November 2000 by the SI-12 channel of the FUV instrument on the IMAGE satellite. The major features of this event appeared in response to two brief swings of the interplanetary magnetic field (IMF) toward a southward orientation. We reproduce the observed spatial distributions of this emission on newly opened field lines by combining the proton emission model with a model of the response of ionospheric convection. The simulations are based on the observed variations of the solar wind proton temperature and concentration and the IMF clock angle. They also allow for the efficiency, sampling rate, integration time and spatial resolution of the FUV instrument. The good match (correlation coefficient 0.91, significant at the 98% level) between observed and modeled variations confirms the time constant (about 4 min) for the rise and decay of the proton emissions predicted by the model for southward IMF conditions. The implications for the detection of pulsed magnetopause reconnection using proton aurora are discussed for a range of interplanetary conditions.

Relevance: 30.00%

Abstract:

Advances in hardware and software technologies make it possible to capture streaming data. The area of Data Stream Mining (DSM) is concerned with the analysis of these vast amounts of data as they are generated in real time. Data stream classification is one of the most important DSM techniques, allowing previously unseen data instances to be classified. Unlike traditional classifiers for static data, data stream classifiers need to adapt to concept changes (concept drift) in the stream in real time in order to reflect the most recent concept in the data as accurately as possible. A recent addition to the data stream classifier toolbox is eRules, which induces and updates a set of expressive rules that can easily be interpreted by humans. However, like most rule-based data stream classifiers, eRules exhibits poor computational performance when confronted with continuous attributes. In this work, we propose an approach to deal with continuous data effectively and accurately in rule-based classifiers by using the Gaussian distribution as a heuristic for building rule terms on continuous attributes. We show, on the example of eRules, that incorporating our method for continuous attributes indeed speeds up the real-time rule induction process while maintaining a similar level of accuracy compared with the original eRules classifier. We term this new version of eRules with our approach G-eRules.
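
A minimal sketch of the Gaussian heuristic for continuous attributes (not the eRules induction algorithm itself): a normal distribution is fitted to an attribute per class, an interval rule term is built around the class mean, and candidate terms can be compared via the class-conditional densities; the data, the width factor k and the ranking criterion are illustrative assumptions.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(7)

# Made-up continuous attribute with two classes.
x_a = rng.normal(5.0, 1.0, 300)     # class "A"
x_b = rng.normal(8.0, 1.5, 300)     # class "B"

# Gaussian heuristic: fit a normal distribution to the attribute per class and
# turn it into an interval rule term around the class mean.
def gaussian_term(values, k=1.0):
    mu, sigma = values.mean(), values.std()
    return mu - k * sigma, mu + k * sigma

lo, hi = gaussian_term(x_a)
print(f"rule term: IF 'attr' in [{lo:.2f}, {hi:.2f}] THEN class = A")

# Compare candidate terms by the class-conditional density at the term's centre.
mu_a, sd_a = x_a.mean(), x_a.std()
mu_b, sd_b = x_b.mean(), x_b.std()
centre = 0.5 * (lo + hi)
print("p(centre | A) =", round(norm.pdf(centre, mu_a, sd_a), 3),
      " p(centre | B) =", round(norm.pdf(centre, mu_b, sd_b), 3))
```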

Relevance: 30.00%

Abstract:

Accurate estimates of how soil water stress affects plant transpiration are crucial for reliable land surface model (LSM) predictions. Current LSMs generally use a water stress factor, β, dependent on soil moisture content, θ, that ranges linearly between β = 1 for unstressed vegetation and β = 0 when wilting point is reached. This paper explores the feasibility of replacing the current approach with equations that use soil water potential as their independent variable, or with a set of equations that involve hydraulic and chemical signaling, thereby ensuring feedbacks between the entire soil–root–xylem–leaf system. A comparison with the original linear θ-based water stress parameterization, and with its improved curvilinear version, was conducted. Assessment of model suitability was focused on their ability to simulate the correct (as derived from experimental data) curve shape of relative transpiration versus fraction of transpirable soil water. We used model sensitivity analyses under progressive soil drying conditions, employing two commonly used approaches to calculate water retention and hydraulic conductivity curves. Furthermore, for each of these hydraulic parameterizations we used two different parameter sets, for three soil texture types; a total of 12 soil hydraulic permutations. Results showed that the resulting transpiration reduction functions (TRFs) varied considerably among the models. The fact that soil hydraulic conductivity played a major role in the model that involved hydraulic and chemical signaling led to unrealistic values of β, and hence of the TRF, for many soil hydraulic parameter sets. However, this model is much better equipped to simulate the behavior of different plant species. Based on these findings, we only recommend implementation of this approach into LSMs if great care with the choice of soil hydraulic parameters is taken.
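
The baseline parameterizations discussed above are simple to write down; the sketch below shows a linear β(θ) between wilting point and critical moisture content together with one possible curvilinear variant (the exponent form and the soil parameters are illustrative assumptions, not the curves used in the paper).

```python
import numpy as np

# Linear and curvilinear soil water stress factors beta(theta) of the kind used
# in land surface models.  theta_w is the wilting point, theta_c the critical
# moisture content above which vegetation is unstressed (illustrative values).
theta_w, theta_c = 0.10, 0.30

def beta_linear(theta):
    return np.clip((theta - theta_w) / (theta_c - theta_w), 0.0, 1.0)

def beta_curvilinear(theta, p=2.0):
    # Same endpoints; p > 1 bends the curve below the linear ramp.
    # The exponent form is illustrative, not the specific curve used in the paper.
    return beta_linear(theta) ** p

theta = np.linspace(0.05, 0.35, 7)
print("theta           :", np.round(theta, 3))
print("beta (linear)   :", np.round(beta_linear(theta), 2))
print("beta (curvi-lin):", np.round(beta_curvilinear(theta), 2))
```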

Relevance: 30.00%

Abstract:

An efficient data-based modeling algorithm for nonlinear system identification is introduced for radial basis function (RBF) neural networks, with the aim of maximizing generalization capability based on the concept of leave-one-out (LOO) cross validation. Each RBF kernel has its own kernel width parameter, and the basic idea is to optimize the multiple pairs of regularization parameters and kernel widths, each pair associated with one kernel, one at a time within the orthogonal forward regression (OFR) procedure. Thus, each OFR step consists of one model term selection based on the LOO mean square error (LOOMSE), followed by the optimization of the associated kernel width and regularization parameter, also based on the LOOMSE. Since, as in our previous state-of-the-art local regularization assisted orthogonal least squares (LROLS) algorithm, the same LOOMSE is adopted for model selection, the proposed new OFR algorithm is also capable of producing a very sparse RBF model with excellent generalization performance. Unlike the previous LROLS algorithm, which requires an additional iterative loop to optimize the regularization parameters as well as an additional procedure to optimize the kernel width, the proposed new OFR algorithm optimizes both the kernel widths and the regularization parameters within the single OFR procedure, and consequently the required computational complexity is dramatically reduced. Nonlinear system identification examples are included to demonstrate the effectiveness of this new approach in comparison with the well-known support vector machine and least absolute shrinkage and selection operator approaches, as well as the LROLS algorithm.
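
A simplified sketch of LOOMSE-driven forward selection: candidate RBF centres are added greedily, scoring each candidate by the leave-one-out mean square error computed through the PRESS identity e_loo_i = e_i / (1 - h_ii), and stopping when the LOOMSE no longer improves. Unlike the algorithm described above, the kernel width and regularization parameter are held fixed rather than optimized per kernel, and the data are made up.

```python
import numpy as np

rng = np.random.default_rng(8)

# Scalar nonlinear system to identify (made-up).
x = rng.uniform(-3, 3, 150)
y = np.sinc(x) + 0.05 * rng.standard_normal(150)

def rbf(x, centre, width=0.8):
    return np.exp(-((x - centre) ** 2) / (2 * width ** 2))

def loomse(X, y, lam=1e-3):
    # LOO mean square error of a regularized least-squares fit, via the
    # PRESS identity e_loo_i = e_i / (1 - h_ii), where H is the hat matrix.
    A = X.T @ X + lam * np.eye(X.shape[1])
    H = X @ np.linalg.solve(A, X.T)
    e = y - H @ y
    return np.mean((e / (1.0 - np.diag(H))) ** 2)

candidates = list(x)                     # every data point is a candidate centre
selected, X = [], np.ones((len(y), 0))
best_score = np.inf
while candidates:
    scores = [(loomse(np.column_stack([X, rbf(x, c)]), y), c) for c in candidates]
    score, centre = min(scores)
    if score >= best_score:              # stop when the LOOMSE no longer improves
        break
    best_score = score
    selected.append(centre)
    candidates.remove(centre)
    X = np.column_stack([X, rbf(x, centre)])

print(f"selected {len(selected)} RBF centres, LOOMSE = {best_score:.5f}")
```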

Relevance: 30.00%

Abstract:

Learning a low-dimensional manifold from highly nonlinear data of high dimensionality has become increasingly important for discovering intrinsic representations that can be used for data visualization and preprocessing. The autoencoder is a powerful dimensionality reduction technique based on minimizing reconstruction error, and it has regained popularity because it has been used efficiently for greedy pretraining of deep neural networks. Compared to the Neural Network (NN), the superiority of the Gaussian Process (GP) has been shown in model inference, optimization and performance. GPs have been successfully applied in nonlinear Dimensionality Reduction (DR) algorithms, such as the Gaussian Process Latent Variable Model (GPLVM). In this paper we propose the Gaussian Processes Autoencoder Model (GPAM) for dimensionality reduction, obtained by extending the classic NN-based autoencoder to a GP-based autoencoder. More interestingly, the novel model can also be viewed as a back-constrained GPLVM (BC-GPLVM) in which the back-constraint smooth function is represented by a GP. Experiments verify the performance of the newly proposed model.
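
A very loose sketch of the idea, assuming scikit-learn and without the joint optimization that defines the actual GPAM/BC-GPLVM: latent coordinates are initialized with PCA, one GP is fitted as a decoder from latent to data space and another as the smooth encoder (back constraint) from data to latent space, giving an autoencoder-style round trip.

```python
import numpy as np
from sklearn.datasets import make_swiss_roll
from sklearn.decomposition import PCA
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

# Highly nonlinear data to be embedded in a low-dimensional latent space.
X, _ = make_swiss_roll(n_samples=300, noise=0.05, random_state=0)

# Initialize 2-D latent coordinates with PCA (common for GPLVM-style models).
Z = PCA(n_components=2).fit_transform(X)

kernel = RBF(length_scale=1.0) + WhiteKernel(noise_level=1e-2)

# "Decoder": GP mapping latent coordinates back to data space.
decoder = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(Z, X)

# "Encoder" / back constraint: GP mapping data to latent coordinates, i.e. the
# smooth mapping that the back-constrained GPLVM view represents with a GP.
encoder = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X, Z)

# Round trip: encode some points and reconstruct them, as an autoencoder would.
Z_hat = encoder.predict(X[:5])
X_hat = decoder.predict(Z_hat)
print("reconstruction error on 5 points:",
      np.round(np.linalg.norm(X_hat - X[:5], axis=1), 3))
```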