39 results for Smoothed bootstrap
Abstract:
We develop four algorithms for simulation-based optimization under multiple inequality constraints. Both the cost and the constraint functions are considered to be long-run averages of certain state-dependent single-stage functions. We pose the problem in the simulation optimization framework by using the Lagrange multiplier method. Two of our algorithms estimate only the gradient of the Lagrangian, while the other two estimate both its gradient and its Hessian. In the process, we also develop various new estimators for the gradient and Hessian. All our algorithms use two simulations each. Two of these algorithms are based on the smoothed functional (SF) technique, while the other two are based on the simultaneous perturbation stochastic approximation (SPSA) method. We prove the convergence of our algorithms and show numerical experiments on a setting involving an open Jackson network. The Newton-based SF algorithm is seen to show the best overall performance.
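As a rough illustration of the two-simulation idea, a minimal SPSA gradient estimator is sketched below. The quadratic cost, gain sequences, and descent loop are illustrative assumptions, not the paper's constrained Jackson-network setting, where the cost would be a noisy long-run average estimated from two simulations.

```python
import random

random.seed(0)  # reproducibility of the demo below

def spsa_gradient(f, theta, c):
    """Two-simulation SPSA gradient estimate at theta: all coordinates
    are perturbed at once with +/-1 Bernoulli components, so only two
    evaluations of f are needed regardless of the dimension."""
    delta = [random.choice((-1.0, 1.0)) for _ in theta]
    f_plus = f([t + c * d for t, d in zip(theta, delta)])
    f_minus = f([t - c * d for t, d in zip(theta, delta)])
    return [(f_plus - f_minus) / (2.0 * c * d) for d in delta]

def cost(x):
    # illustrative quadratic stand-in for a long-run average cost
    return sum(v * v for v in x)

theta = [2.0, -1.5]
for k in range(1, 2001):
    g = spsa_gradient(cost, theta, c=0.1 / k ** 0.101)
    a = 0.1 / k ** 0.602  # standard decaying gain
    theta = [t - a * gi for t, gi in zip(theta, g)]
```

The decaying gains `a` and perturbation sizes `c` follow common SPSA practice; in this smooth quadratic example the iterate contracts toward the minimizer at the origin.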
Abstract:
A graphical display of the frequency content of background electroencephalogram (EEG) activity is obtained by calculating spectral estimates using the autocorrelation-based autoregressive method and the classical Fourier transform method. The spectral content of consecutive data segments is displayed using a hidden-line suppression technique so as to obtain a spectral array. The autoregressive spectral array (ASA) is found to be sensitive to baseline drift. Following baseline correction, the autoregressive technique is found to be superior to the Fourier method of compressed spectral array (CSA) in detecting transitions in the frequencies of the signal. The smoothed ASA gives a better picture of transitions and changes in the background activity. The ASA can be made to adapt to specific changes of dominant frequencies while eliminating unnecessary peaks in the spectrum. The utility of the ASA for background EEG analysis is discussed.
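A minimal sketch of the autoregressive spectral estimation pipeline (sample autocorrelation, Levinson-Durbin solution of the Yule-Walker equations, AR power spectrum) is given below. The AR(1) demo signal is an assumption for illustration, not EEG data, and this is not the authors' implementation.

```python
import math
import random

def autocorr(x, maxlag):
    """Biased sample autocorrelation after mean removal."""
    n = len(x)
    mean = sum(x) / n
    xc = [v - mean for v in x]
    return [sum(xc[t] * xc[t + k] for t in range(n - k)) / n
            for k in range(maxlag + 1)]

def levinson_durbin(r, order):
    """Solve the Yule-Walker equations r[k] = sum_j a[j] r[k-j]
    recursively; returns (AR coefficients a[1..order], noise variance)."""
    a = [0.0] * (order + 1)
    e = r[0]
    for k in range(1, order + 1):
        refl = (r[k] - sum(a[j] * r[k - j] for j in range(1, k))) / e
        new_a = a[:]
        new_a[k] = refl
        for j in range(1, k):
            new_a[j] = a[j] - refl * a[k - j]
        a = new_a
        e *= 1.0 - refl * refl
    return a[1:], e

def ar_spectrum(a, e, f):
    """AR power spectrum P(f) = e / |1 - sum_k a_k exp(-2*pi*i*f*k)|^2."""
    z = complex(1.0, 0.0)
    for k, ak in enumerate(a, start=1):
        z -= ak * complex(math.cos(2 * math.pi * f * k),
                          -math.sin(2 * math.pi * f * k))
    return e / abs(z) ** 2

# demo on a synthetic AR(1) signal x[t] = 0.8 x[t-1] + white noise
random.seed(1)
x, prev = [], 0.0
for _ in range(4000):
    prev = 0.8 * prev + random.gauss(0.0, 1.0)
    x.append(prev)
coeffs, err = levinson_durbin(autocorr(x, 1), 1)
```

For the positively correlated AR(1) demo signal, the fitted spectrum concentrates power at low frequencies, which is the kind of dominant-frequency structure a spectral array would display segment by segment.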
Abstract:
The problem of structural system identification when measurements originate from multiple tests and multiple sensors is considered. An offline solution to this problem using bootstrap particle filtering is proposed. The central idea of the proposed method is the introduction of a dummy independent variable that allows for simultaneous assimilation of multiple measurements in a sequential manner. The method can treat linear/nonlinear structural models and allows for measurements on strains and displacements under static/dynamic loads. Illustrative examples consider measurement data from numerical models and also from laboratory experiments. The results from the proposed method are compared with those from a Kalman filter-based approach and the superior performance of the proposed method is demonstrated. Copyright (C) 2009 John Wiley & Sons, Ltd.
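The dummy-independent-variable assimilation scheme is specific to the paper, but the underlying bootstrap particle filter can be sketched generically. The scalar linear-Gaussian state-space model and all parameter values below are assumptions chosen for a self-contained demo, not the paper's structural models.

```python
import math
import random

def bootstrap_pf(ys, n_particles=500, a=0.9, q=0.5, r=0.5, seed=0):
    """Bootstrap particle filter for the toy model
    x[t] = a*x[t-1] + N(0, q^2),  y[t] = x[t] + N(0, r^2).
    Returns the filtered posterior mean at each time step."""
    rng = random.Random(seed)
    parts = [rng.gauss(0.0, 1.0) for _ in range(n_particles)]
    means = []
    for y in ys:
        # propagate through the state equation (the bootstrap proposal)
        parts = [a * p + rng.gauss(0.0, q) for p in parts]
        # weight by the measurement likelihood
        w = [math.exp(-0.5 * ((y - p) / r) ** 2) for p in parts]
        total = sum(w)
        w = [wi / total for wi in w]
        # multinomial resampling
        parts = rng.choices(parts, weights=w, k=n_particles)
        means.append(sum(parts) / n_particles)
    return means

# synthetic measurements from the same (assumed) model
rng = random.Random(42)
truth, obs, x = [], [], 0.0
for _ in range(100):
    x = 0.9 * x + rng.gauss(0.0, 0.5)
    truth.append(x)
    obs.append(x + rng.gauss(0.0, 0.5))
est = bootstrap_pf(obs)
rmse = math.sqrt(sum((e - t) ** 2 for e, t in zip(est, truth)) / 100)
```

Because the bootstrap proposal is just the state equation, the filter needs no derivatives of the model, which is also why such filters suit the nonlinear structural models mentioned above.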
Abstract:
The effect of using a spatially smoothed forward-backward covariance matrix on the performance of weighted eigen-based state space methods/ESPRIT, and weighted MUSIC for direction-of-arrival (DOA) estimation is analyzed. Expressions for the mean-squared error in the estimates of the signal zeros and the DOA estimates, along with some general properties of the estimates and optimal weighting matrices, are derived. A key result is that optimally weighted MUSIC and weighted state-space methods/ESPRIT have identical asymptotic performance. Moreover, by properly choosing the number of subarrays, the performance of unweighted state space methods can be significantly improved. It is also shown that the mean-squared error in the DOA estimates is independent of the exact distribution of the source amplitudes. This results in a unified framework for dealing with DOA estimation using a uniformly spaced linear sensor array and the time series frequency estimation problems.
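A minimal sketch of forming a spatially smoothed forward-backward covariance matrix for a uniform linear array is given below. The single-source ideal covariance used in the demo is a hypothetical example, not the analyzed estimators themselves.

```python
import cmath

def fb_spatial_smooth(R, m):
    """Forward-backward spatially smoothed covariance: average the
    forward-backward covariance over the L = N - m + 1 overlapping
    m-element subarrays of an N-element uniform linear array."""
    N = len(R)
    L = N - m + 1
    # forward-backward average (R + J conj(R) J) / 2, J the exchange matrix
    Rfb = [[0.5 * (R[i][j] + R[N - 1 - i][N - 1 - j].conjugate())
            for j in range(N)] for i in range(N)]
    # average the L overlapping m x m diagonal blocks
    return [[sum(Rfb[k + i][k + j] for k in range(L)) / L
             for j in range(m)] for i in range(m)]

# ideal covariance of one unit-power source at electrical angle 2*pi*0.1,
# plus unit-power noise on the diagonal (a hypothetical example)
N = 6
R = [[cmath.exp(2j * cmath.pi * 0.1 * (i - j)) + (1.0 if i == j else 0.0)
      for j in range(N)] for i in range(N)]
S = fb_spatial_smooth(R, 4)
```

Choosing the subarray size `m` (and hence the number of subarrays `L`) is exactly the design trade-off the abstract refers to when it notes that performance can be improved by properly choosing the number of subarrays.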
Abstract:
By using the strain smoothing technique proposed by Chen et al. (Comput. Mech. 2000; 25: 137-156) for meshless methods in the context of the finite element method (FEM), Liu et al. (Comput. Mech. 2007; 39(6): 859-877) developed the Smoothed FEM (SFEM). Although the SFEM is not yet well understood mathematically, numerical experiments point to potentially useful features of this particularly simple modification of the FEM. To date, the SFEM has only been investigated for bilinear and Wachspress approximations and is limited to linear reproducing conditions. The goal of this paper is to extend strain smoothing to higher-order elements and to investigate numerically under which conditions strain smoothing is beneficial to the accuracy and convergence of enriched finite element approximations. We focus on three widely used enrichment schemes, namely: (a) weak discontinuities; (b) strong discontinuities; (c) near-tip linear elastic fracture mechanics functions. The main conclusion is that strain smoothing in enriched approximations is only beneficial when the enrichment functions are polynomial (cases (a) and (b)), but that non-polynomial enrichment of type (c) leads to inferior methods compared to the standard enriched FEM (e.g. XFEM). Copyright (C) 2011 John Wiley & Sons, Ltd.
Abstract:
A statistical performance analysis of the ESPRIT, root-MUSIC, and minimum-norm methods for direction estimation under finite-data perturbations, using the modified spatially smoothed covariance matrix, is developed. Expressions for the mean-squared error in the direction estimates are derived based on a common framework. The analysis shows that the use of the modified smoothed covariance matrix improves the performance of these methods when the sources are fully correlated. Moreover, the performance remains good even when the number of subarrays is large, unlike in the case of the conventionally smoothed covariance matrix. However, the performance for uncorrelated sources deteriorates due to an artificial correlation introduced by the modified smoothing. The theoretical expressions are validated using extensive simulations. (C) 1999 Elsevier Science B.V. All rights reserved.
Abstract:
We present two efficient discrete parameter simulation optimization (DPSO) algorithms for the long-run average cost objective. One of these algorithms uses the smoothed functional approximation (SFA) procedure, while the other is based on simultaneous perturbation stochastic approximation (SPSA). The use of SFA for DPSO had not been proposed previously in the literature. Further, both algorithms adopt an interesting technique of random projections that we present here for the first time. We give a proof of convergence of our algorithms. Next, we present detailed numerical experiments on a problem of admission control with dependent service times. We consider two different settings involving parameter sets of moderate and large size, respectively. In the first setting, we also show performance comparisons with the well-studied optimal computing budget allocation (OCBA) algorithm and with the equal allocation algorithm. Note to Practitioners-Even though SPSA and SFA were devised in the literature for continuous optimization problems, our results indicate that they can be powerful techniques even when adapted to discrete optimization settings. OCBA is widely recognized as one of the most powerful methods for discrete optimization when the parameter sets are of small or moderate size. In a setting involving a parameter set of size 100, we observe that when the computing budget is small, both SPSA and OCBA show similar performance and are better than SFA; however, as the computing budget is increased, SPSA and SFA show better performance than OCBA. Both our algorithms also show good performance when the parameter set has a size of 10^8. SFA is seen to show the best overall performance. Unlike most other DPSO algorithms in the literature, an advantage of our algorithms is that they are easily implementable regardless of the size of the parameter sets and show good performance in both scenarios.
Abstract:
We recast the reconstruction problem of diffuse optical tomography (DOT) in a pseudo-dynamical framework and develop a method to recover the optical parameters using particle filters, i.e., stochastic filters based on Monte Carlo simulations. In particular, we have implemented two such filters, viz., the bootstrap (BS) filter and the Gaussian-sum (GS) filter and employed them to recover optical absorption coefficient distribution from both numerically simulated and experimentally generated photon fluence data. Using either indicator functions or compactly supported continuous kernels to represent the unknown property distribution within the inhomogeneous inclusions, we have drastically reduced the number of parameters to be recovered and thus brought the overall computation time to within reasonable limits. Even though the GS filter outperformed the BS filter in terms of accuracy of reconstruction, both gave fairly accurate recovery of the height, radius, and location of the inclusions. Since the present filtering algorithms do not use derivatives, we could demonstrate accurate contrast recovery even in the middle of the object where the usual deterministic algorithms perform poorly owing to the poor sensitivity of measurement of the parameters. Consistent with the fact that the DOT recovery, being ill posed, admits multiple solutions, both the filters gave solutions that were verified to be admissible by the closeness of the data computed through them to the data used in the filtering step (either numerically simulated or experimentally generated). (C) 2011 Optical Society of America
Abstract:
In this paper, we present an unrestricted Kannada online handwritten character recognizer which is viable for real-time applications. It handles Kannada and Indo-Arabic numerals, punctuation marks, and special symbols such as $, &, and #, apart from all the aksharas of the Kannada script. The dataset used contains handwriting from 69 people at four different locations, making the recognition writer-independent. For the DTW classifier, using smoothed first derivatives as features enhanced the performance to 89%, compared to 85% with preprocessed coordinates, but was too inefficient in terms of time. To overcome this, we used Statistical Dynamic Time Warping (SDTW) and achieved 46-times-faster classification with comparable accuracy (88%), making it fast enough for practical applications. The accuracies reported are raw symbol recognition results from the classifier; thus, there is good scope for improvement in actual applications, where domain constraints such as a fixed vocabulary, language models, and post-processing can be employed. A working demo is also available on a tablet PC for recognition of Kannada words.
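The SDTW variant is the paper's contribution; the sketch below shows only the classic DTW recursion that it accelerates, computing the minimum-cost alignment between two feature sequences.

```python
def dtw_distance(a, b):
    """Classic dynamic-time-warping distance between two sequences:
    D[i][j] = local cost + min of the three admissible predecessors."""
    INF = float("inf")
    n, m = len(a), len(b)
    D = [[INF] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i][j] = cost + min(D[i - 1][j],      # insertion
                                 D[i][j - 1],      # deletion
                                 D[i - 1][j - 1])  # match
    return D[n][m]
```

Because the warping path may repeat samples, a sequence and its time-stretched copy have distance zero, which is why DTW tolerates the speed variations of online handwriting; its O(nm) cost per comparison is also why the authors needed a faster variant.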
Abstract:
We address the problem of recognition and retrieval of relatively weak industrial signals, such as partial discharges (PD), buried in excessive noise. The major bottleneck is the recognition and suppression of stochastic pulsive interference (PI), which has time-frequency characteristics similar to those of the PD pulse; therefore, conventional frequency-based DSP techniques are not useful in retrieving PD pulses. We employ statistical signal modeling based on a combination of a long-memory process and probabilistic principal component analysis (PPCA). A parametric analysis of the signal is exercised for extracting the features of the desired pulses. We incorporate a wavelet-based bootstrap method for obtaining the noise training vectors from observed data. The procedure adopted in this work is completely different from the research reported in the literature, which is generally based on the desired signal frequency and the noise frequency.
Abstract:
The van der Waals and Platteeuw (vdWP) theory has been successfully used to model the thermodynamics of gas hydrates. However, earlier studies have shown that this success could be due to the presence of a large number of adjustable parameters whose values are obtained through regression with experimental data. To test this assertion, we carry out a systematic and rigorous study of the performance of various models of the vdWP theory that have been proposed over the years. The hydrate phase equilibrium data used for this study are obtained from Monte Carlo molecular simulations of methane hydrates. The parameters of the vdWP theory are regressed from these equilibrium data and compared with their true values obtained directly from simulations. This comparison reveals that (i) methane-water interactions beyond the first cage and methane-methane interactions make a significant contribution to the partition function and thus cannot be neglected, (ii) rigorous Monte Carlo integration should be used to evaluate the Langmuir constant instead of the spherical smoothed cell approximation, (iii) the parameter values describing the methane-water interactions cannot be correctly regressed from the equilibrium data using the vdWP theory in its present form, (iv) the regressed empty hydrate property values closely match their true values irrespective of the level of rigor in the theory, and (v) the flexibility of the water lattice forming the hydrate phase needs to be incorporated in the vdWP theory. Since methane is among the simplest of hydrate-forming molecules, the conclusions from this study should also hold true for more complicated hydrate guest molecules.
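As a toy illustration of point (ii), the sketch below estimates a configurational integral by uniform Monte Carlo sampling over a cubic cell, the kind of rigorous integration contrasted with a spherically smoothed cell approximation. The harmonic potential and all parameters are assumptions chosen so the integral has a closed form; this is not a hydrate model.

```python
import math
import random

def mc_configurational_integral(u, beta, half_box, n, seed=0):
    """Estimate Z = integral of exp(-beta * u(x, y, z)) over the cube
    [-half_box, half_box]^3 by uniform Monte Carlo sampling:
    Z ~= volume * average of the Boltzmann factor."""
    rng = random.Random(seed)
    vol = (2.0 * half_box) ** 3
    acc = 0.0
    for _ in range(n):
        x = rng.uniform(-half_box, half_box)
        y = rng.uniform(-half_box, half_box)
        z = rng.uniform(-half_box, half_box)
        acc += math.exp(-beta * u(x, y, z))
    return vol * acc / n

# toy isotropic potential with a known closed form:
# integral of exp(-r^2/2) over all space = (2*pi)^(3/2)
harmonic = lambda x, y, z: 0.5 * (x * x + y * y + z * z)
Z = mc_configurational_integral(harmonic, beta=1.0, half_box=4.0, n=200_000)
```

For an anisotropic cage potential the same sampler still applies unchanged, whereas the spherically smoothed cell approximation would first replace the potential by its angular average.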
Abstract:
This paper presents a singular edge-based smoothed finite element method (sES-FEM) for mechanics problems with singular stress fields of arbitrary order. The sES-FEM uses a basic mesh of three-noded linear triangular (T3) elements and a special layer of five-noded singular triangular elements (sT5) connected to the singular point of the stress field. The sT5 element has an additional node on each of the two edges connected to the singular point, which allows a simple and efficient enrichment of the displacement field near the singular point with the desired terms while satisfying the partition-of-unity property. The stiffness matrix of the discretized system is then obtained using the assumed displacement values (not the derivatives) over smoothing domains associated with the edges of the elements. An adaptive procedure for the sES-FEM is proposed to enhance the quality of the solution with a minimized number of nodes. Several numerical examples are provided to validate the reliability of the present sES-FEM. (C) 2012 Elsevier B.V. All rights reserved.
Abstract:
We consider the problem of secure communication in mobile Wireless Sensor Networks (WSNs). Achieving security in WSNs requires robust encryption and authentication standards among the sensor nodes. Severe resource constraints in typical wireless sensor nodes hinder them from achieving key agreement. Past studies have shown that many notable key management schemes do not work well in sensor networks due to their limited capacities. The idea of key predistribution is not feasible considering the fact that the network could scale to millions of nodes. We propose a novel algorithm that provides a robust and secure communication channel in WSNs. Our Double Encryption with Validation Time (DEV) key management protocol works on the basis of timed sessions within which a secure secret key remains valid. A mobile node is used to bootstrap and exchange secure keys among communicating pairs of nodes. Analysis and simulation results show that the performance of the DEV key management protocol is better than that of the SEV scheme and other related work.
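The DEV protocol itself is not specified in the abstract. The sketch below only illustrates the timed-session idea — a session key that refuses to authenticate once its validation time has elapsed, forcing a re-bootstrap. The class, its parameters, and the injectable clock are all hypothetical.

```python
import hashlib
import hmac
import os
import time

class SessionKey:
    """A secret key valid only within a fixed time window, illustrating
    the timed-session idea (not the DEV protocol itself). The clock is
    injectable so expiry can be tested deterministically."""

    def __init__(self, lifetime_s, now=time.time):
        self._now = now
        self.key = os.urandom(32)          # fresh random session secret
        self.expires = now() + lifetime_s  # absolute expiry time

    def valid(self):
        return self._now() < self.expires

    def mac(self, message: bytes) -> bytes:
        """Authenticate a message; fails once the session has expired."""
        if not self.valid():
            raise ValueError("session key expired; re-bootstrap required")
        return hmac.new(self.key, message, hashlib.sha256).digest()

# deterministic demo with a fake clock
t = [0.0]
k = SessionKey(10, now=lambda: t[0])
tag = k.mac(b"hello")
t[0] = 11.0  # advance past the validation time
expired = not k.valid()
```

Bounding a key's lifetime limits the window an attacker has to exploit a compromised node, at the cost of periodic re-keying via the mobile bootstrap node.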
Abstract:
The problem of updating the reliability of instrumented structures based on measured response under random dynamic loading is considered. A solution strategy within the framework of Monte Carlo simulation based dynamic state estimation method and Girsanov's transformation for variance reduction is developed. For linear Gaussian state space models, the solution is developed based on continuous version of the Kalman filter, while, for non-linear and (or) non-Gaussian state space models, bootstrap particle filters are adopted. The controls to implement the Girsanov transformation are developed by solving a constrained non-linear optimization problem. Numerical illustrations include studies on a multi degree of freedom linear system and non-linear systems with geometric and (or) hereditary non-linearities and non-stationary random excitations.
Abstract:
Using a Girsanov change of measures, we propose novel variations within a particle-filtering algorithm, as applied to the inverse problem of state and parameter estimations of nonlinear dynamical systems of engineering interest, toward weakly correcting for the linearization or integration errors that almost invariably occur whilst numerically propagating the process dynamics, typically governed by nonlinear stochastic differential equations (SDEs). Specifically, the correction for linearization, provided by the likelihood or the Radon-Nikodym derivative, is incorporated within the evolving flow in two steps. Once the likelihood, an exponential martingale, is split into a product of two factors, correction owing to the first factor is implemented via rejection sampling in the first step. The second factor, which is directly computable, is accounted for via two different schemes, one employing resampling and the other using a gain-weighted innovation term added to the drift field of the process dynamics thereby overcoming the problem of sample dispersion posed by resampling. The proposed strategies, employed as add-ons to existing particle filters, the bootstrap and auxiliary SIR filters in this work, are found to non-trivially improve the convergence and accuracy of the estimates and also yield reduced mean square errors of such estimates vis-a-vis those obtained through the parent-filtering schemes.
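The resampling scheme mentioned above can be illustrated by systematic resampling, a common low-variance alternative to multinomial resampling in particle filters. This is a generic sketch, not the paper's weight-splitting or gain-weighted constructions.

```python
import random

def systematic_resample(weights, rng=random):
    """Systematic resampling: a single uniform draw u0 in [0, 1/n) is
    stretched into n evenly spaced positions, which are walked through
    the cumulative weights; compared with multinomial resampling this
    lowers the variance of the resampled particle counts."""
    n = len(weights)
    u0 = rng.random() / n
    idx, cum, j = [], weights[0], 0
    for i in range(n):
        p = u0 + i / n
        while p > cum:
            j += 1
            cum += weights[j]
        idx.append(j)
    return idx

# two particles share all the weight: each is kept exactly twice
rng = random.Random(7)
idx = systematic_resample([0.5, 0.5, 0.0, 0.0], rng=rng)
```

Even a low-variance scheme still collapses distinct particles onto copies of a few ancestors, which is the sample-dispersion problem the gain-weighted innovation term in the abstract is designed to avoid.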