994 results for Variance estimation


Relevance: 30.00%

Summary:

In the past few years there have been attempts to develop subspace methods for DoA (direction of arrival) estimation using a fourth-order cumulant, which is known to de-emphasize Gaussian background noise. To gauge the relative performance of the cumulant MUSIC (MUltiple SIgnal Classification) (c-MUSIC) and the standard MUSIC, based on the covariance function, an extensive numerical study has been carried out, where a narrow-band signal source has been considered and Gaussian noise sources, which produce a spatially correlated background noise, have been distributed. These simulations indicate that, even though the cumulant approach is capable of de-emphasizing the Gaussian noise, both bias and variance of the DoA estimates are higher than those for MUSIC. To achieve comparable results the cumulant approach requires much more data, three to ten times as much as MUSIC, depending upon the number of sources and how close they are. This is attributed to the fact that estimating the cumulant requires averaging a product of four random variables. Therefore, compared to those in the evaluation of the covariance function, there are more cross terms which do not go to zero unless the data length is very large. It is felt that these cross terms contribute to the large bias and variance observed in c-MUSIC. However, the ability to de-emphasize Gaussian noise, white or colored, is of great significance, since the standard MUSIC fails when there is colored background noise. Through simulation it is shown that c-MUSIC does yield good results, but only at the cost of more data.
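
For orientation, a minimal covariance-based MUSIC pseudospectrum for a uniform linear array is sketched below in Python; the array geometry, spacing and parameter names are illustrative assumptions, not details taken from the paper. The cumulant variant would replace the sample covariance R with an estimated fourth-order cumulant matrix, which is exactly where the extra cross terms discussed above enter.

```python
import numpy as np

def music_spectrum(X, n_sources, scan_deg, d=0.5):
    """Covariance-based MUSIC pseudospectrum for a uniform linear array.

    X         : (n_sensors, n_snapshots) complex snapshot matrix
    n_sources : assumed number of narrow-band sources
    scan_deg  : candidate DoAs in degrees (hypothetical scan grid)
    d         : sensor spacing in wavelengths
    """
    m, n = X.shape
    R = X @ X.conj().T / n                       # sample covariance
    w, V = np.linalg.eigh(R)                     # eigenvalues ascending
    En = V[:, : m - n_sources]                   # noise subspace
    k = np.arange(m)
    p = []
    for theta in np.deg2rad(scan_deg):
        a = np.exp(2j * np.pi * d * k * np.sin(theta))   # steering vector
        p.append(1.0 / np.real(a.conj() @ En @ En.conj().T @ a))
    return np.array(p)                           # peaks mark the DoAs
```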

Relevance: 30.00%

Summary:

Many problems of state estimation in structural dynamics permit a partitioning of system states into nonlinear and conditionally linear substructures. This enables a part of the problem to be solved exactly, using the Kalman filter, and the remainder using Monte Carlo simulations. The present study develops an algorithm that combines sequential importance sampling based particle filtering with Kalman filtering for a fairly general form of process equations, and demonstrates the application of a substructuring scheme to problems of hidden state estimation in structures with local nonlinearities, response sensitivity model updating in nonlinear systems, and characterization of residual displacements in instrumented inelastic structures. The paper also theoretically demonstrates that the sampling variance associated with the substructuring scheme used does not exceed the sampling variance corresponding to Monte Carlo filtering without substructuring. (C) 2012 Elsevier Ltd. All rights reserved.
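
The variance claim in the last sentence is an instance of the Rao-Blackwell inequality, Var(E[h(X, Y) | Y]) <= Var(h(X, Y)). The toy numpy check below uses a hypothetical two-variable model, not the paper's structural dynamics setting, to show that the conditional (substructured) estimator never has larger sampling variance than plain Monte Carlo:

```python
import numpy as np

rng = np.random.default_rng(0)
n, reps = 10_000, 500

plain, rb = [], []
for _ in range(reps):
    y = rng.normal(0.0, 1.0, n)        # "nonlinear" substate, sampled
    x = rng.normal(y, 1.0)             # conditionally linear substate
    plain.append(np.mean(x + y))       # pure Monte Carlo estimate of E[x + y]
    rb.append(np.mean(2.0 * y))        # E[x + y | y] = 2y computed exactly
print(np.var(plain), np.var(rb))       # rb variance is never larger
```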

Relevance: 30.00%

Summary:

Estimating program worst-case execution time (WCET) accurately and efficiently is a challenging task. Several programs exhibit phase behavior, wherein cycles per instruction (CPI) varies in phases during execution. Recent work has suggested the use of phases in such programs to estimate WCET with minimal instrumentation. However, the suggested model uses a function of mean CPI that has no probabilistic guarantees. We propose to use Chebyshev's inequality, which can be applied to any arbitrary distribution of CPI samples, to probabilistically bound the CPI of a phase. Applying Chebyshev's inequality to phases that exhibit high CPI variation leads to pessimistic upper bounds. We propose a mechanism that refines such phases into sub-phases based on program counter (PC) signatures collected using profiling, and that also allows the user to control the variance of CPI within a sub-phase. We describe a WCET analyzer built on these lines and evaluate it with standard WCET and embedded benchmark suites on two different architectures for three chosen probabilities, p = {0.9, 0.95, 0.99}. For p = 0.99, refinement based on PC signatures alone reduces the average pessimism of the WCET estimate by 36% (77%) on Arch1 (Arch2). Compared to Chronos, an open-source static WCET analyzer, the average improvement in estimates obtained by refinement is 5% (125%) on Arch1 (Arch2). On limiting the variance of CPI within a sub-phase to {50%, 10%, 5%, 1%} of its original value, the average accuracy of the WCET estimate improves further to {9%, 11%, 12%, 13%}, respectively, on Arch1. On Arch2, the average accuracy of WCET improves to 159% when CPI variance is limited to 50% of its original value, and the improvement is marginal beyond that point.
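
As a sketch of the bounding step, the following snippet derives a distribution-free upper bound on a phase's CPI from profiled samples using the two-sided Chebyshev inequality; the sample values are hypothetical, and the paper may well use a one-sided (Cantelli-type) variant:

```python
import numpy as np

def cpi_upper_bound(cpi_samples, p):
    """Upper bound b such that P(CPI <= b) >= p, for any distribution.

    Chebyshev: P(|X - mu| >= k*sigma) <= 1/k**2, so k = 1/sqrt(1 - p)
    guarantees P(X < mu + k*sigma) >= p regardless of the CPI law.
    """
    mu = np.mean(cpi_samples)
    sigma = np.std(cpi_samples, ddof=1)
    k = 1.0 / np.sqrt(1.0 - p)
    return mu + k * sigma

# a high-variance phase inflates the bound, motivating sub-phase refinement
samples = np.array([1.1, 1.3, 2.9, 1.2, 3.1, 1.2])   # hypothetical CPI profile
print(cpi_upper_bound(samples, 0.99))                # pessimistic when variance is large
```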

Relevance: 30.00%

Summary:

The problem of identification of multi-component and (or) spatially varying earthquake support motions based on measured responses in instrumented structures is considered. The governing equations of motion are cast in state space form, and a time-domain solution to the input identification problem is developed based on the Kalman and particle filtering methods. The method allows for noise in measured responses, imperfections in the mathematical model for the structure, and possible nonlinear behavior of the structure. The unknown support motions are treated as hypothetical additional system states, and a prior model for these motions is taken to be given in terms of white noise processes. For linear systems, the solution is developed within the Kalman filtering framework, while for nonlinear systems Monte Carlo simulation based particle filtering tools are employed. In the latter case, the question of controlling the sampling variance based on the idea of Rao-Blackwellization is also explored. Illustrative examples include identification of multi-component and spatially varying support motions in linear/nonlinear structures.
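
A minimal sketch of the augmented-state idea follows, for a hypothetical single-degree-of-freedom oscillator with the unknown support acceleration appended to the state vector as a white-noise-driven random walk; the model matrices and noise levels are assumptions for illustration only:

```python
import numpy as np
from scipy.linalg import expm

# hypothetical SDOF oscillator: m*u'' + c*u' + k*u = -m*a_g
m, c, k, dt = 1.0, 0.6, 40.0, 0.01
Ac = np.array([[0.0, 1.0, 0.0],
               [-k / m, -c / m, -1.0],
               [0.0, 0.0, 0.0]])         # a_g modelled as a random walk
A = expm(Ac * dt)                        # discrete-time transition
H = np.array([[-k / m, -c / m, 0.0]])    # measured absolute acceleration
Q = np.diag([0.0, 0.0, 1e-1])            # white-noise drive on the a_g state
R = np.array([[1e-3]])                   # measurement noise variance

def kf_input_identification(y):
    """Track displacement, velocity and the unknown support motion a_g."""
    z, P, est = np.zeros(3), np.eye(3), []
    for yk in y:
        z = A @ z                        # predict
        P = A @ P @ A.T + Q
        S = H @ P @ H.T + R              # update
        K = P @ H.T @ np.linalg.inv(S)
        z = z + K @ (yk - H @ z)
        P = (np.eye(3) - K @ H) @ P
        est.append(z.copy())
    return np.array(est)                 # column 2 is the identified a_g
```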

Relevance: 30.00%

Summary:

The problem of time-variant reliability analysis of randomly parametered and randomly driven nonlinear vibrating systems is considered. The study combines two Monte Carlo variance reduction strategies into a single framework to tackle the problem. The first of these strategies is based on the application of the Girsanov transformation to account for the randomness in dynamic excitations, and the second is fashioned after the subset simulation method to deal with randomness in system parameters. Illustrative examples include studies of single/multi-degree-of-freedom linear/nonlinear inelastic randomly parametered building frame models driven by stationary/non-stationary, white/filtered white noise support acceleration. The estimated reliability measures are demonstrated to compare well with results from direct Monte Carlo simulations. (C) 2014 Elsevier Ltd. All rights reserved.
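
In discrete time, the Girsanov transformation amounts to an exponential change of measure on the Gaussian increments of the excitation. The sketch below applies this to a toy first-passage probability for a random walk; the threshold, drift choice and sample sizes are arbitrary, and the paper's combination with subset simulation is not reproduced:

```python
import numpy as np

rng = np.random.default_rng(1)
n_steps, n_sim, b = 100, 5_000, 35.0     # threshold b is rarely crossed

def failure_prob(shift):
    """P(max random-walk excursion > b) by importance sampling.

    Gaussian increments are drawn with an added drift `shift`; each
    trajectory is re-weighted by the Radon-Nikodym derivative, the
    discrete analogue of the Girsanov correction.
    """
    xi = rng.normal(shift, 1.0, (n_sim, n_steps))
    # likelihood ratio N(0,1)/N(shift,1) accumulated over the path
    log_w = -shift * xi.sum(axis=1) + 0.5 * n_steps * shift**2
    hit = np.cumsum(xi, axis=1).max(axis=1) > b
    return np.mean(hit * np.exp(log_w))

print(failure_prob(0.0))                 # crude Monte Carlo: mostly zero hits
print(failure_prob(b / n_steps))         # tilted sampling: far lower variance
```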

Relevance: 30.00%

Summary:

Monte Carlo simulation methods involving splitting of Markov chains have been used in the evaluation of multi-fold integrals in different application areas. We examine in this paper the performance of these methods in the context of evaluating reliability integrals, from the point of view of characterizing the sampling fluctuations. The methods discussed include the Au-Beck subset simulation, the Holmes-Diaconis-Ross method, and the generalized splitting algorithm. A few improvisations based on the first-order reliability method are suggested to select algorithmic parameters of the latter two methods. The bias and sampling variance of the alternative estimators are discussed. Also, an approximation to the sampling distribution of some of these estimators is obtained. Illustrative examples involving component and series system reliability analyses are presented with a view to bringing out the relative merits of the alternative methods. (C) 2015 Elsevier Ltd. All rights reserved.
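
A compact subset-simulation estimator in the spirit of the Au-Beck method is sketched below for a hypothetical linear limit state in standard normal space, with a fixed conditional probability p0 and a simple componentwise Metropolis move; all tuning constants are assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)

def g(x):
    """Hypothetical limit state: failure when the coordinate sum is large."""
    return np.asarray(x).sum(axis=-1)

def subset_simulation(dim=10, b=12.0, n=1_000, p0=0.1, step=1.0):
    """Estimate P(g(X) > b), X ~ N(0, I), as a product of conditional
    probabilities over adaptively chosen intermediate levels."""
    x = rng.normal(size=(n, dim))
    y = g(x)
    prob = 1.0
    while True:
        level = np.quantile(y, 1.0 - p0)          # next intermediate level
        if level >= b:                            # final level reached
            return prob * np.mean(y > b)
        prob *= p0
        seeds = x[y > level]
        per_seed = int(np.ceil(n / len(seeds)))
        xs, ys = [], []
        for s in seeds:                           # modified Metropolis moves
            cur = s
            for _ in range(per_seed):
                cand = cur + step * rng.normal(size=dim)
                keep = rng.random(dim) < np.exp(0.5 * (cur**2 - cand**2))
                cand = np.where(keep, cand, cur)  # componentwise acceptance
                if g(cand) > level:               # stay above the level
                    cur = cand
                xs.append(cur.copy()); ys.append(g(cur))
        x, y = np.array(xs), np.array(ys)

print(subset_simulation())                        # exact answer is about 7e-5
```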

Relevance: 30.00%

Summary:

The study follows an approach to estimating phytomass using recent techniques of remote sensing and digital photogrammetry. It involved a tree inventory of forest plantations in the Bhakra forest range of Nainital district. A panchromatic stereo dataset from Cartosat-1 was evaluated for mean stand height retrieval. Texture analysis and tree-top detection analyses were performed on QuickBird PAN data. A composite texture image of mean, variance and contrast with a 5×5 pixel window was found best for separating tree crowns for assessment of crown areas. The tree-top count obtained by local maxima filtering was found to be 83.4% efficient, with an RMSE of ±13 for 35 sample plots. The predicted phytomass ranged from 27.01 to 35.08 t/ha for Eucalyptus sp. and from 26.52 to 156 t/ha for Tectona grandis. The correlation between observed and predicted phytomass in Eucalyptus sp. was 0.468, with an RMSE of 5.12. However, the phytomass prediction for Tectona grandis was fairly strong, with R² = 0.65 and an RMSE of 9.89, as there was no undergrowth and the crowns were clearly visible. Results of the study show the potential of the Cartosat-1-derived DSM and QuickBird texture imagery for the estimation of stand height, stem diameter, tree count and phytomass of important timber species.
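
The tree-top step can be sketched with a plain local-maxima filter, and the mean/variance texture layers with moving-window statistics; the snippet below uses scipy.ndimage on a hypothetical canopy raster and omits the contrast layer and the crown-separation logic of the actual study:

```python
import numpy as np
from scipy import ndimage

def detect_tree_tops(canopy, window=5, floor=2.0):
    """Local-maxima filtering: a pixel is a tree top if it equals the
    maximum of its window x window neighbourhood and exceeds a floor.

    canopy : 2-D array (e.g. a brightness or canopy-height surface)
    """
    local_max = ndimage.maximum_filter(canopy, size=window)
    tops = (canopy == local_max) & (canopy > floor)
    return np.nonzero(tops)                       # (rows, cols) of tree tops

def texture_layers(img, window=5):
    """Moving-window mean and variance, analogous to the composite above."""
    mean = ndimage.uniform_filter(img.astype(float), size=window)
    sq_mean = ndimage.uniform_filter(img.astype(float) ** 2, size=window)
    variance = np.clip(sq_mean - mean**2, 0.0, None)
    return mean, variance
```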

Relevance: 30.00%

Summary:

We consider carrier frequency offset (CFO) estimation in the context of multiple-input multiple-output (MIMO) orthogonal frequency-division multiplexing (OFDM) systems over noisy frequency-selective wireless channels, in both single- and multiuser scenarios. We conceived a new approach to parameter estimation by discretizing the continuous-valued CFO parameter into a discrete set of bins and then invoking detection theory, analogous to the minimum-bit-error-ratio optimization framework for detecting the finite-alphabet received signal. Using this radical approach, we propose a novel CFO estimation method and study its performance using both analytical results and Monte Carlo simulations. We obtain expressions for the variance of the CFO estimation error and the resultant BER degradation in the single-user scenario. Our simulations demonstrate that the overall BER performance of a MIMO-OFDM system using the proposed method is substantially improved for all the modulation schemes considered, albeit at increased complexity.
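
The bin-discretization idea can be caricatured as a grid search: de-rotate the received samples by each candidate CFO and keep the bin with the best matched-filter statistic. The sketch below does this for a hypothetical single-antenna pilot, not the MIMO-OFDM detection-theoretic formulation of the paper; the grid step sets the residual-offset floor:

```python
import numpy as np

def estimate_cfo(rx, pilot, fs, grid_hz):
    """Pick the CFO bin whose de-rotated receive signal correlates best
    with the known pilot sequence (a simple matched-filter statistic)."""
    n = np.arange(len(rx))
    best, best_metric = None, -np.inf
    for f in grid_hz:
        derot = rx * np.exp(-2j * np.pi * f * n / fs)
        metric = np.abs(np.vdot(pilot, derot))
        if metric > best_metric:
            best, best_metric = f, metric
    return best

# toy check with a hypothetical 500 Hz offset on a random QPSK pilot
rng = np.random.default_rng(3)
fs, f0 = 1e5, 500.0
pilot = (rng.choice([1, -1], 256) + 1j * rng.choice([1, -1], 256)) / np.sqrt(2)
rx = pilot * np.exp(2j * np.pi * f0 * np.arange(256) / fs)
rx += 0.05 * (rng.normal(size=256) + 1j * rng.normal(size=256))
print(estimate_cfo(rx, pilot, fs, np.arange(-2000.0, 2001.0, 50.0)))
```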

Relevance: 30.00%

Summary:

Sequential Monte Carlo (SMC) methods are popular computational tools for Bayesian inference in non-linear non-Gaussian state-space models. For this class of models, we propose SMC algorithms to compute the score vector and observed information matrix recursively in time. We propose two different SMC implementations, one with computational complexity $\mathcal{O}(N)$ and the other with complexity $\mathcal{O}(N^{2})$, where $N$ is the number of importance sampling draws. Although cheaper, the performance of the $\mathcal{O}(N)$ method degrades quickly in time, as it inherently relies on the SMC approximation of a sequence of probability distributions whose dimension increases linearly with time. In particular, even under strong mixing assumptions, the variance of the estimates computed with the $\mathcal{O}(N)$ method increases at least quadratically in time. The $\mathcal{O}(N^{2})$ method is a non-standard SMC implementation that does not suffer from this rapid degradation. We then show how both methods can be used to perform batch and recursive parameter estimation.
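
A minimal sketch of the $\mathcal{O}(N)$ path-space approach for a scalar linear-Gaussian model follows; the model, parameter and tuning values are illustrative assumptions. Each particle carries a running sum of transition log-density gradients (Fisher's identity) inherited through resampling, which is precisely the mechanism behind the variance growth noted above:

```python
import numpy as np

rng = np.random.default_rng(4)

def particle_score(y, theta, n=500, sv=1.0, se=1.0):
    """O(N) SMC estimate of d/dtheta log p(y_{1:T}) for the model
    x_t = theta * x_{t-1} + v_t,  y_t = x_t + e_t  (v, e Gaussian)."""
    x = rng.normal(0.0, 1.0, n)
    alpha = np.zeros(n)                               # per-path score statistic
    for yt in y:
        xp = theta * x + sv * rng.normal(size=n)      # propagate
        alpha = alpha + (xp - theta * x) * x / sv**2  # grad of log f(x_t|x_{t-1})
        logw = -0.5 * ((yt - xp) / se) ** 2           # observation weight
        w = np.exp(logw - logw.max()); w /= w.sum()
        idx = rng.choice(n, size=n, p=w)              # resample whole paths
        x, alpha = xp[idx], alpha[idx]                # alpha follows its path
    return alpha.mean()

# quick check on simulated data: the score at the true theta is near zero
T, theta0 = 200, 0.8
xs = np.zeros(T)
for t in range(1, T):
    xs[t] = theta0 * xs[t - 1] + rng.normal()
ys = xs + rng.normal(size=T)
print(particle_score(ys, theta0))
```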

Relevance: 30.00%

Summary:

Revisions of US macroeconomic data are not white noise. They are persistent, correlated with real-time data, and highly variable (their volatility is around 80% of that observed in US real-time data). Their business cycle effects are examined in an estimated DSGE model extended with both real-time and final data. After implementing a Bayesian estimation approach, the role of both habit formation and price indexation falls significantly in the extended model. The results show how revision shocks to both output and inflation are expansionary, because they occur when real-time published data are too low and the Fed reacts by cutting interest rates. Consumption revisions, by contrast, are countercyclical, as consumption habits mirror the observed reduction in real-time consumption. In turn, revisions of the three variables explain 9.3% of output changes in its long-run variance decomposition.

Relevance: 30.00%

Summary:

A method of identifying the beaks and estimating the body weight and mantle length of 18 species of cephalopods from the Pacific Ocean is presented. Twenty specimens were selected from each of the following cephalopod species: Symplectoteuthis oualaniensis, Dosidicus gigas, Ommastrephes bartramii, S. luminosa, Todarodes pacificus, Nototodarus hawaiiensis, Ornithoteuthis volatilis, Hyaloteuthis pelagica, Onychoteuthis banksii, Pterygioteuthis giardi, Abraliopsis affinis, A. felis, Liocranchia reinhardti, Leachia danae, Histioteuthis heteropsis, H. dofleini, Gonatus onyx, and Loligo opalescens. Dimensions measured on the upper and lower beaks are converted to ratios and compared individually among the species using an analysis of variance procedure with Tukey's omega and Duncan's multiple range tests. Significant differences (P = 0.05) observed among the species' beak ratio means and structural characteristics are used to construct artificial keys for the upper and lower beaks of the 18 species. Upper and lower beak dimensions are used as independent variables in a linear regression model with mantle length and body weight (log-transformed). (PDF file contains 56 pages.)
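
The regression step can be illustrated with a one-predictor, log-transformed fit; the numbers below are hypothetical stand-ins for a single lower-beak dimension and body weight, not data from the study:

```python
import numpy as np

# hypothetical measurements: lower rostral length (mm) and body weight (g)
lrl = np.array([2.1, 2.8, 3.5, 4.2, 5.0, 5.9])
weight = np.array([18.0, 41.0, 85.0, 150.0, 260.0, 430.0])

# fit log10(W) = a + b * LRL, the log-transformed linear model
b, a = np.polyfit(lrl, np.log10(weight), 1)
predicted = 10 ** (a + b * lrl)            # back-transform to grams
print(a, b, predicted.round(1))
```

With several beak dimensions as independent variables, the same fit would go through an ordinary least-squares solve (e.g. np.linalg.lstsq) on the design matrix instead of np.polyfit.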

Relevance: 30.00%

Summary:

Demersal groundfish densities were estimated by conducting a visual strip-transect survey via manned submersible on the continental shelf off Cape Flattery, Washington. The purpose of this study was to evaluate the statistical sampling power of the submersible survey as a tool to discriminate density differences between trawlable and untrawlable habitats. A geophysical map of the study area was prepared with side-scan sonar imagery, multibeam bathymetry data, and known locations of historical NMFS trawl survey events. Submersible transects were completed at randomly selected dive sites located in each habitat type. Significant differences in density between habitats were observed for lingcod (Ophiodon elongatus), yelloweye rockfish (Sebastes ruberrimus), and tiger rockfish (S. nigrocinctus) individually, and for “all rockfish” and “all flatfish” in the aggregate. Flatfish were more than ten times as abundant in the trawlable habitat samples as in the untrawlable samples, whereas rockfish as a group were over three times as abundant in the untrawlable habitat samples. Guidelines for sample sizes and implications for the estimation of the continental shelf trawl-survey habitat bias are considered. We demonstrate an approach that can be used to establish sample size guidelines for future work by illustrating the interplay between statistical sampling power and 1) habitat-specific density differences, 2) the variance of density differences, and 3) the proportion of untrawlable area in a habitat.
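
The power interplay the authors describe can be sketched with a normal-approximation power curve for a two-sample comparison of habitat densities; the effect size, spread and significance level below are hypothetical:

```python
import numpy as np
from scipy.stats import norm

def power_two_sample(delta, sd, n_per_group, alpha=0.05):
    """Approximate power to detect a density difference `delta` between
    two habitats with n transects per habitat (two-sided z-test)."""
    se = sd * np.sqrt(2.0 / n_per_group)       # SE of the mean difference
    z_crit = norm.ppf(1.0 - alpha / 2.0)
    d = abs(delta) / se
    return norm.sf(z_crit - d) + norm.cdf(-z_crit - d)

# e.g. transects needed to detect a given density difference reliably
for n in (5, 10, 20, 40):
    print(n, round(power_two_sample(delta=2.0, sd=2.0, n_per_group=n), 3))
```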

Relevance: 30.00%

Summary:

In recent years there has been growing interest among the speech research community in the use of spectral estimators which circumvent the traditional quasi-stationary assumption and provide greater time-frequency (t-f) resolution than conventional spectral estimators, such as the short-time Fourier power spectrum (STFPS). One distribution in particular, the Wigner distribution (WD), has attracted considerable interest. However, experimental studies have indicated that, despite its improved t-f resolution, employing the WD as the front end of a speech recognition system actually reduces recognition performance; only by explicitly re-introducing t-f smoothing into the WD are recognition rates improved. In this paper we provide an explanation for these findings. By treating the spectral estimation problem as one of optimizing a bias-variance trade-off, we show why additional t-f smoothing improves recognition rates, despite reducing the t-f resolution of the spectral estimator. A practical adaptive smoothing algorithm is presented, which attempts to match the degree of smoothing introduced into the WD to the time-varying quasi-stationary regions within the speech waveform. The recognition performance of the resulting adaptively smoothed estimator is found to be comparable to that of conventional filterbank estimators, yet the average temporal sampling rate of the resulting spectral vectors is reduced by around a factor of 10. © 1992.
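
The bias-variance trade-off can be illustrated with ordinary short-time spectra, where window length plays the role of the smoothing parameter: short windows track time-varying structure (low bias) but give noisy estimates, while long windows do the reverse. The snippet below, on a synthetic noisy chirp, is only an analogy and not the adaptive Wigner smoothing algorithm of the paper:

```python
import numpy as np
from scipy.signal import spectrogram

fs = 8000.0
t = np.arange(0, 1.0, 1.0 / fs)
x = np.cos(2 * np.pi * (500 * t + 1500 * t**2))      # chirp: 500 -> 3500 Hz
x += 0.5 * np.random.default_rng(5).normal(size=t.size)

for nperseg in (64, 256, 1024):
    f, tt, S = spectrogram(x, fs=fs, nperseg=nperseg)
    ridge = f[S.argmax(axis=0)]                      # estimated frequency track
    err = np.abs(ridge - (500 + 3000 * tt)).mean()   # vs. true instantaneous freq.
    # short windows: noisy ridge (variance); long windows: smeared sweep (bias);
    # the mid-length window typically wins this trade-off
    print(nperseg, round(float(err), 1))
```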

Relevance: 30.00%

Summary:

Accurate estimation of the instantaneous frequency of speech resonances is a hard problem mainly due to phase discontinuities in the speech signal associated with excitation instants. We review a variety of approaches for enhanced frequency and bandwidth estimation in the time-domain and propose a new cognitively motivated approach using filterbank arrays. We show that by filtering speech resonances using filters of different center frequency, bandwidth and shape, the ambiguity in instantaneous frequency estimation associated with amplitude envelope minima and phase discontinuities can be significantly reduced. The novel estimators are shown to perform well on synthetic speech signals with frequency and bandwidth micro-modulations (i.e., modulations within a pitch period), as well as on real speech signals. Filterbank arrays, when applied to frequency and bandwidth modulation index estimation, are shown to reduce the estimation error variance by 85% and 70% respectively. © 2013 IEEE.
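
A rough Python rendering of the filterbank-array idea follows: filter the signal through several bandpass channels, compute each channel's analytic signal, and at each sample trust the instantaneous frequency of the channel with the largest envelope. The centre frequencies, bandwidth and filter length are assumptions, not the paper's design:

```python
import numpy as np
from scipy.signal import firwin, filtfilt, hilbert

def filterbank_if(x, fs, centers, bw=400.0, ntaps=101):
    """Instantaneous-frequency estimate from a bank of bandpass filters:
    per sample, keep the channel with the largest envelope, which
    suppresses the phase-discontinuity ambiguities described above."""
    env, inst_f = [], []
    for fc in centers:
        h = firwin(ntaps, [fc - bw / 2, fc + bw / 2], pass_zero=False, fs=fs)
        a = hilbert(filtfilt(h, [1.0], x))            # analytic channel signal
        phase = np.unwrap(np.angle(a))
        inst_f.append(np.gradient(phase) * fs / (2 * np.pi))  # IF in Hz
        env.append(np.abs(a))                         # channel envelope
    env, inst_f = np.array(env), np.array(inst_f)
    pick = env.argmax(axis=0)                         # dominant channel per sample
    return inst_f[pick, np.arange(x.size)]
```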

Relevance: 30.00%

Summary:

In the biological sciences, stereological techniques are frequently used to infer changes in structural parameters (volume fraction, for example) between samples from different populations or subject to differing treatment regimes. Non-homogeneity of these parameters is virtually guaranteed, both between experimental animals and within the organ under consideration. A two-stage strategy is then desirable, the first stage involving unbiased estimation of the required parameter, separately for each experimental unit, the latter being defined as a subset of the organ for which homogeneity can reasonably be assumed. In the second stage, these point estimates are used as data inputs to a hierarchical analysis of variance, to distinguish treatment effects from variability between animals, for example. Techniques are therefore required for unbiased estimation of parameters from potentially small numbers of sample profiles. This paper derives unbiased estimates of linear properties in one special case: the sampling of spherical particles by transmission microscopy, when the section thickness is not negligible and the resulting circular profiles are subject to lower truncation. The derivation uses the general integral equation formulation of Nicholson (1970); the resulting formulae are simplified algebraically, and their efficient computation is discussed. Bias arising from variability in slice thickness is shown to be negligible in typical cases. The strategy is illustrated with data examining the effects of exposure of the common mussel to hydrocarbons on the secondary lysosomes in its digestive cells. Prolonged exposure, at 30 μg l⁻¹ total oil-derived hydrocarbons, is seen to increase the average volume of a lysosome and the volume fraction that lysosomes occupy, but to reduce their number.
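
The geometry behind the truncation problem can be made concrete with a small simulation: spheres sampled by a slice of finite thickness yield circular profiles that over-represent large spheres and are censored below the truncation radius. The sketch below is pure sampling geometry, not Nicholson's integral-equation estimator, and every numeric value is hypothetical:

```python
import numpy as np

rng = np.random.default_rng(6)

def observed_profiles(radii, slice_t, r_min, n=100_000):
    """Simulate profile radii when spheres with the given radius values
    are sampled by a slice of thickness slice_t, with lower truncation.

    A sphere of radius R centred at distance z from the slice mid-plane
    is cut whenever |z| < R + slice_t / 2; the projected profile radius
    is R if the centre lies inside the slice, else sqrt(R^2 - d^2) with
    d the distance from the centre to the nearer slice face.
    """
    R = rng.choice(np.asarray(radii, dtype=float), n)
    span = R.max() + slice_t / 2
    z = rng.uniform(-span, span, n)              # random centre offsets
    hit = np.abs(z) < R + slice_t / 2            # sphere intersects the slice
    R, z = R[hit], z[hit]
    d = np.clip(np.abs(z) - slice_t / 2, 0.0, None)
    r = np.sqrt(np.clip(R**2 - d**2, 0.0, None))
    return r[r >= r_min]                         # lower truncation of profiles

profiles = observed_profiles(radii=[1.0, 2.0, 3.0], slice_t=0.5, r_min=0.3)
print(profiles.mean(), profiles.size)            # biased toward large spheres
```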