126 results for Motion-based estimation


Relevance: 30.00%

Abstract:

It is reported in the literature that distances from the observer are underestimated more in virtual environments (VEs) than under physical-world conditions. On the other hand, estimation of size in VEs is quite accurate and follows a size-constancy law when rich cues are present. This study investigates how estimation of distance in a CAVE™ environment is affected by poor and rich cue conditions, subject experience, and environmental learning when the position of the objects is estimated using an experimental paradigm that exploits size constancy. A group of 18 healthy participants was asked to move a virtual sphere, controlled with the wand joystick, to the position where they thought a previously displayed virtual cube (stimulus) had appeared. Real-size physical models of the virtual objects were also presented to the participants as a reference for real physical distance during the trials. An accurate estimation of distance implied that the participants assessed the relative size of sphere and cube correctly. The cube appeared at depths between 0.6 m and 3 m, measured along the depth direction of the CAVE. The task was carried out in two environments: a poor-cue one with limited background cues, and a rich-cue one with textured background surfaces. It was found that distances were underestimated in both poor- and rich-cue conditions, with greater underestimation in the poor-cue environment. The analysis also indicated that factors such as subject experience and environmental learning were not influential. However, least-squares fitting of Stevens' power law indicated a high degree of accuracy in the estimation of object locations, higher than in other studies that were not based on a size-estimation paradigm. Thus, as an indirect result, this study suggests that accuracy when estimating egocentric distances may be increased by using an experimental method that provides information on the relative size of the objects used.
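
The fitting step mentioned above lends itself to a short illustration. The sketch below fits Stevens' power law, d_perceived = k · d_true^n, by least squares in log-log space; the response data are synthetic placeholders rather than measurements from the study, and an exponent n near 1 with k near 1 would indicate nearly veridical estimation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stimulus depths span the 0.6-3 m range quoted in the abstract; the
# "perceived" responses below are invented for illustration only.
d_true = np.linspace(0.6, 3.0, 10)
d_perceived = 0.9 * d_true**0.95 * rng.lognormal(0.0, 0.03, d_true.size)

# Stevens' power law d_p = k * d_t**n becomes linear in log-log space:
# log d_p = n * log d_t + log k, so a degree-1 least-squares fit recovers n and k.
slope, intercept = np.polyfit(np.log(d_true), np.log(d_perceived), 1)
n, k = slope, np.exp(intercept)
print(f"exponent n = {n:.3f}, scale k = {k:.3f}")
```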

Relevance: 30.00%

Abstract:

Estimation of a population size by means of capture-recapture techniques is an important problem occurring in many areas of the life and social sciences. We consider the frequencies-of-frequencies situation, where a count variable is used to summarize how often a unit has been identified in the target population of interest. The distribution of this count variable is zero-truncated, since zero identifications do not occur in the sample. As an application we consider the surveillance of scrapie in Great Britain. In this case study, holdings with scrapie that are not identified (zero counts) do not enter the surveillance database. The count variable of interest is the number of scrapie cases per holding. For count distributions a common model is the Poisson distribution, and, to adjust for potential heterogeneity, a discrete mixture of Poisson distributions is used. Mixtures of Poissons usually provide an excellent fit, as will be demonstrated in the application of interest. However, as has recently been demonstrated, mixtures also suffer from the so-called boundary problem, resulting in overestimation of population size. It is suggested here to select the mixture model on the basis of the Bayesian Information Criterion. This strategy is further refined by employing a bagging procedure leading to a series of estimates of population size. Using the median of this series, highly influential size estimates are avoided. In limited simulation studies it is shown that the procedure leads to estimates with remarkably small bias.
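
For orientation, here is a minimal sketch of the basic population-size idea: fit a zero-truncated Poisson to the observed counts and scale the number of observed units by the estimated probability of being seen at least once (a Horvitz-Thompson-type correction). The counts are invented placeholders, not the scrapie surveillance data, and the paper's refinements (Poisson mixtures selected by BIC plus bagging with the median) are not reproduced.

```python
import numpy as np

# Invented example: number of identified cases per observed holding (all >= 1).
counts = np.array([1, 1, 1, 2, 1, 3, 1, 2, 1, 1, 4, 1, 2, 1, 1])
n_obs, xbar = counts.size, counts.mean()

# MLE of lambda for a zero-truncated Poisson solves xbar = lambda / (1 - exp(-lambda)),
# which can be found by a simple fixed-point iteration.
lam = xbar
for _ in range(200):
    lam = xbar * (1.0 - np.exp(-lam))

p_zero = np.exp(-lam)                 # estimated probability of a zero count
N_hat = n_obs / (1.0 - p_zero)        # scale up the observed units (Horvitz-Thompson style)
print(f"lambda = {lam:.3f}, estimated population size = {N_hat:.1f}")
```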

Relevance: 30.00%

Abstract:

A generalized or tunable-kernel model is proposed for probability density function estimation based on an orthogonal forward regression procedure. Each stage of the density estimation process determines a tunable kernel, namely its center vector and diagonal covariance matrix, by minimizing a leave-one-out test criterion. The kernel mixing weights of the constructed sparse density estimate are finally updated using the multiplicative nonnegative quadratic programming algorithm to satisfy the nonnegativity and unity constraints, and this weight-updating process additionally has the desired ability to further reduce the model size. The proposed tunable-kernel model has advantages, in terms of model generalization capability and model sparsity, over the standard fixed-kernel model that restricts kernel centers to the training data points and employs a single common kernel variance for every kernel. On the other hand, it does not optimize all the model parameters together and thus avoids the problems of high-dimensional ill-conditioned nonlinear optimization associated with the conventional finite mixture model. Several examples are included to demonstrate the ability of the proposed tunable-kernel model to construct very compact and accurate density estimates.
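
To make the final weight-update stage concrete, the sketch below applies an EM-style multiplicative update to the mixing weights of a Gaussian-kernel density estimate; like the MNQP update described above it keeps the weights nonnegative and summing to one, but it is only a stand-in. The orthogonal forward kernel selection, the leave-one-out criterion, and the exact MNQP iteration are not reproduced, and the data, centres, and kernel width are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic data from a two-component Gaussian mixture.
data = np.concatenate([rng.normal(-2.0, 0.5, 150), rng.normal(1.0, 1.0, 150)])

centres = data[::10]                         # crude stand-in for selected kernel centres
width = 0.4                                  # single common kernel width for the sketch
# N x M matrix of Gaussian kernel responses K[n, j] = phi(x_n; c_j, width^2).
K = np.exp(-0.5 * ((data[:, None] - centres[None, :]) / width) ** 2) \
    / (width * np.sqrt(2.0 * np.pi))

w = np.full(centres.size, 1.0 / centres.size)
for _ in range(300):
    resp = K * w                             # unnormalised responsibilities
    resp /= resp.sum(axis=1, keepdims=True)
    w = resp.mean(axis=0)                    # multiplicative update: stays >= 0, sums to 1

print("weights sum to", round(w.sum(), 6))
print("kernels carrying weight > 1e-3:", int(np.sum(w > 1e-3)), "of", centres.size)
```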

Relevance: 30.00%

Abstract:

Estimating snow mass at continental scales is difficult but important for understanding land-atmosphere interactions, biogeochemical cycles, and the hydrology of northern latitudes. Remote sensing provides the only consistent global observations, but the uncertainty in the measurements is poorly understood. Existing techniques for the remote sensing of snow mass are based on the Chang algorithm, which relates the absorption of Earth-emitted microwave radiation by a snow layer to the snow mass within the layer. The absorption also depends on other factors, such as the snow grain size and density, which are assumed and fixed within the algorithm. We examine these assumptions, compare them to field measurements made at the NASA Cold Land Processes Experiment (CLPX) Colorado field site in 2002–2003, and evaluate the consequences of deviation and variability for snow mass retrieval. The accuracy of the emission model used to devise the algorithm also affects the retrieval, so we test this model with the CLPX measurements of snow properties against SSM/I and AMSR-E satellite measurements.
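
As a point of reference for the Chang-type retrieval discussed above, the sketch below computes snow depth from the difference between the ~18/19 GHz and 37 GHz horizontally polarised brightness temperatures. The 1.59 cm/K coefficient is the commonly quoted Chang et al. value and rests on exactly the fixed grain-size (~0.3 mm) and density (~300 kg m⁻³) assumptions examined here; the brightness temperatures are invented example values, so treat the whole thing as illustrative.

```python
import numpy as np

def chang_snow_depth_cm(tb_18h, tb_37h, coeff=1.59):
    """Snow depth (cm) from the spectral brightness-temperature difference (K)."""
    # Negative differences (no dry snow signal) are clipped to zero.
    return coeff * np.maximum(tb_18h - tb_37h, 0.0)

def swe_mm(depth_cm, density_kg_m3=300.0):
    """Snow water equivalent (mm) from depth, assuming a fixed snowpack density."""
    return depth_cm * 10.0 * density_kg_m3 / 1000.0

tb18, tb37 = 245.0, 230.0        # example brightness temperatures (K), not real data
depth = chang_snow_depth_cm(tb18, tb37)
print(f"snow depth ~ {depth:.1f} cm, SWE ~ {swe_mm(depth):.0f} mm")
```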

Relevance: 30.00%

Abstract:

In a world of almost permanent and rapidly increasing electronic data availability, techniques for filtering, compressing, and interpreting these data to transform them into valuable and easily comprehensible information are of utmost importance. One key topic in this area is the capability to deduce future system behavior from a given data input. This book brings together for the first time the complete theory of data-based neurofuzzy modelling and the linguistic attributes of fuzzy logic in a single cohesive mathematical framework. After introducing the basic theory of data-based modelling, new concepts including extended additive and multiplicative submodels are developed, and their extensions to state estimation and data fusion are derived. All these algorithms are illustrated with benchmark and real-life examples to demonstrate their efficiency. Chris Harris and his group have carried out pioneering work which has tied together the fields of neural networks and linguistic rule-based algorithms. This book is aimed at researchers and scientists in time series modelling, empirical data modelling, knowledge discovery, data mining, and data fusion.

Relevance: 30.00%

Abstract:

Multi-rate multicarrier DS/CDMA is a potentially attractive multiple access method for future wireless communications networks that must support multimedia, and thus multi-rate, traffic. Several receiver structures exist for single-rate multicarrier systems, but little has been reported on multi-rate multicarrier systems. Considering that high-performance detection such as coherent demodulation requires explicit knowledge of the channel, this paper proposes a subspace-based scheme, built on finite-length chip-waveform truncation, for timing and channel estimation in multi-rate multicarrier DS/CDMA systems; the scheme is applicable to both multicode and variable-spreading-factor systems. The performance of the proposed scheme for these two multi-rate systems is validated via numerical simulations. The effects of the finite-length chip-waveform truncation on the performance of the proposed scheme are also analyzed theoretically.
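
The core subspace idea, that the channel can be recovered blindly from the noise subspace of the received-signal covariance, can be illustrated with a much-simplified single-user, symbol-synchronous sketch. The spreading gain, channel length, noise level, and code-matrix construction below are arbitrary assumptions, and none of the multi-rate, multicarrier, or timing-acquisition machinery of the paper is reproduced.

```python
import numpy as np

rng = np.random.default_rng(0)
N, L, M = 31, 4, 2000            # spreading gain, channel taps, number of snapshots

# Binary spreading code and the N x L "code matrix" whose columns are chip-shifted
# copies of the code, so the received signature is C @ h for a chip-spaced channel h.
code = rng.choice([-1.0, 1.0], size=N)
C = np.zeros((N, L))
for l in range(L):
    C[l:, l] = code[:N - l]

h_true = rng.standard_normal(L) + 1j * rng.standard_normal(L)
h_true /= np.linalg.norm(h_true)
signature = C @ h_true

# Received snapshots: r = signature * symbol + noise (single user, synchronous).
symbols = rng.choice([-1.0, 1.0], size=M)
noise = 0.1 * (rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N)))
R_data = symbols[:, None] * signature[None, :] + noise

# Sample covariance, its eigendecomposition, and the noise subspace.
R = (R_data.conj().T @ R_data) / M
_, eigvec = np.linalg.eigh(R)
U_noise = eigvec[:, :-1]          # everything but the dominant (signal) eigenvector

# Blind estimate: h minimising ||U_noise^H C h||, i.e. the smallest eigenvector
# of Q = C^H U_noise U_noise^H C, known only up to a complex scalar.
Q = C.conj().T @ U_noise @ U_noise.conj().T @ C
_, V = np.linalg.eigh(Q)
h_est = V[:, 0]

alpha = (h_est.conj() @ h_true) / (h_est.conj() @ h_est)   # resolve the scalar ambiguity
print("channel estimation error:", np.linalg.norm(h_true - alpha * h_est))
```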

Relevance: 30.00%

Abstract:

This letter proposes a subspace-based blind adaptive channel estimation algorithm for dual-rate quasi-synchronous DS/CDMA systems, which can operate in either the low-rate (LR) or the high-rate (HR) mode. Simulation results show that the proposed blind adaptive algorithm performs better in the LR mode than in the HR mode, at the cost of increased computational complexity.

Relevance: 30.00%

Abstract:

Multi-rate multicarrier DS-CDMA is a potentially attractive multiple access method for future broadband wireless multimedia networks that must support integrated voice/data traffic. This paper proposes a subspace-based channel estimation scheme for multi-rate multicarrier DS-CDMA, which is applicable to both multicode and variable-spreading-factor systems. The performance of the proposed scheme for these two multi-rate systems is compared via numerical simulations.

Relevance: 30.00%

Abstract:

This paper proposes a subspace-based blind adaptive channel estimation algorithm for dual-rate DS-CDMA systems, which can operate in either the low-rate (LR) or the high-rate (HR) mode. Simulation results show that the proposed blind adaptive algorithm performs better in the LR mode than in the HR mode, at the cost of increased computational complexity.

Relevance: 30.00%

Abstract:

Multi-rate multicarrier DS-CDMA is a potentially attractive multiple access method for future wireless networks that must support multimedia, and thus multi-rate, traffic. Considering that high-performance detection such as coherent demodulation requires explicit knowledge of the channel, this paper proposes a subspace-based blind adaptive algorithm for timing acquisition and channel estimation in asynchronous multi-rate multicarrier DS-CDMA systems, which is applicable to both multicode and variable-spreading-factor systems.

Relevance: 30.00%

Abstract:

This paper presents the theoretical development of a nonlinear adaptive filter based on the concept of filtering by approximated densities (FAD). The most common procedures for nonlinear estimation apply the extended Kalman filter. As opposed to conventional techniques, the proposed recursive algorithm does not require any linearisation. The prediction uses a maximum entropy principle subject to constraints; the densities created are therefore of exponential type and depend on a finite number of parameters. The filtering yields recursive equations involving these parameters, and the update applies Bayes' theorem. Through simulation on a generic exponential model, the proposed nonlinear filter is implemented and the results prove superior to those of the extended Kalman filter and of a class of nonlinear filters based on partitioning algorithms.
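
The linearisation-free prediction/update recursion referred to above is the exact Bayesian filtering recursion; the sketch below evaluates it on a grid for an invented scalar nonlinear model. This only illustrates the recursion that FAD approximates with constrained exponential-family densities, it is not the FAD algorithm itself, and the model, grid, and noise levels are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Invented scalar state-space model (not the "generic exponential model" of the paper):
#   x_k = 0.9 x_{k-1} + 0.5 sin(x_{k-1}) + w_k,   w_k ~ N(0, 0.1)
#   y_k = x_k + 0.2 x_k^3 + v_k,                  v_k ~ N(0, 0.2)
f = lambda x: 0.9 * x + 0.5 * np.sin(x)
h = lambda x: x + 0.2 * x ** 3
q_var, r_var = 0.1, 0.2

grid = np.linspace(-5.0, 5.0, 601)               # discretised state space
dx = grid[1] - grid[0]
gauss = lambda z, var: np.exp(-0.5 * z ** 2 / var) / np.sqrt(2.0 * np.pi * var)
K = gauss(grid[:, None] - f(grid)[None, :], q_var)   # transition kernel on the grid

# Simulate a short trajectory to filter.
T, x = 30, 0.5
xs, ys = [], []
for _ in range(T):
    x = f(x) + rng.normal(0.0, np.sqrt(q_var))
    xs.append(x)
    ys.append(h(x) + rng.normal(0.0, np.sqrt(r_var)))

# Exact grid-based recursion: Chapman-Kolmogorov prediction, then Bayes update.
post = gauss(grid - 0.5, 1.0)
post /= post.sum() * dx
for y in ys:
    pred = K @ post * dx                          # prediction step (no linearisation)
    post = pred * gauss(y - h(grid), r_var)       # Bayes update with the likelihood
    post /= post.sum() * dx

print("true final state :", round(xs[-1], 3))
print("posterior mean   :", round(float(np.sum(grid * post) * dx), 3))
```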

Relevance: 30.00%

Abstract:

An algorithm for solving nonlinear discrete-time optimal control problems with model-reality differences is presented. The technique uses Dynamic Integrated System Optimization and Parameter Estimation (DISOPE), which achieves the correct optimal solution in spite of deficiencies in the mathematical model employed in the optimization procedure. A version of the algorithm with a linear-quadratic model-based problem, implemented in the C++ programming language, is developed and applied to illustrative simulation examples. An analysis of the optimality and convergence properties of the algorithm is also presented.
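
To give a feel for how iterating on a modified model-based problem can reach the real optimum despite a wrong model, here is a minimal static ISOPE-style sketch (the static ancestor of this family of methods, not the dynamic DISOPE algorithm of the paper). The plant, the deliberately wrong linear model, the cost, and the relaxation gain are all invented; the modifier term is built from the mismatch between the real and model gradients.

```python
import numpy as np

# "Reality": an unknown nonlinear plant; "model": a deliberately wrong linear one.
f_real = lambda u: 2.0 * u + 0.5 * u ** 2
df_real = lambda u: 2.0 + u
a = 1.5                                     # fixed, incorrect model gain
cost = lambda u, y: (y - 3.0) ** 2 + 0.1 * u ** 2
dcost_dy = lambda u, y: 2.0 * (y - 3.0)

u, eps = 0.0, 0.2
for _ in range(200):
    y = f_real(u)                           # measure the real plant at the current point
    alpha = y - a * u                       # parameter estimation: model matches reality at u
    lam = (a - df_real(u)) * dcost_dy(u, y) # modifier from the gradient mismatch
    # Modified model-based problem: min_u (a*u + alpha - 3)^2 + 0.1*u^2 - lam*u,
    # which for this linear-quadratic model has the closed-form minimiser below.
    u_hat = (lam - 2.0 * a * (alpha - 3.0)) / (2.0 * a ** 2 + 0.2)
    u = u + eps * (u_hat - u)               # relaxation step

# Compare with a brute-force search over the *real* objective.
grid = np.linspace(-2.0, 4.0, 20001)
u_star = grid[np.argmin(cost(grid, f_real(grid)))]
print(f"iterate u = {u:.4f}, brute-force real optimum u = {u_star:.4f}")
```

In this toy example the iteration settles near the brute-force optimum of the real objective even though the model gain is wrong, which is the fixed-point property the abstract describes.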

Relevance: 30.00%

Abstract:

DISOPE is a technique for solving optimal control problems where there are differences in structure and parameter values between reality and the model employed in the computations. The model-reality differences can also allow for deliberate simplification of model characteristics and performance indices in order to facilitate the solution of the optimal control problem. The technique was developed originally in continuous time and later extended to discrete time. The main property of the procedure is that, by iterating on appropriately modified model-based problems, the correct optimal solution is achieved in spite of the model-reality differences. Algorithms have been developed in both continuous and discrete time for a general nonlinear optimal control problem with terminal weighting, bounded controls and terminal constraints. The aim of this paper is to show how the DISOPE technique can aid receding-horizon optimal control computation in nonlinear model predictive control.
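
For context, the receding-horizon pattern that DISOPE is meant to support looks like the toy loop below: at every step a finite-horizon problem is solved with an (imperfect) model and only the first move is applied to the real plant. The scalar plant/model pair, weights, and horizon are invented, and DISOPE's modifier iterations are deliberately not included.

```python
# Toy receding-horizon (MPC) loop with a model-reality mismatch.
A_real, B_real = 1.05, 0.9      # "reality"
A_model, B_model = 1.0, 1.0     # deliberately simplified model
Q, R, H = 1.0, 0.1, 10          # stage weights and prediction horizon

def first_step_gain():
    """Backward Riccati recursion for the H-step scalar LQ problem on the model."""
    P, gains = Q, []
    for _ in range(H):
        K = (B_model * P * A_model) / (R + B_model * P * B_model)
        gains.append(K)
        P = Q + A_model * P * (A_model - B_model * K)
    return gains[-1]            # gain for the first step of the horizon

x = 5.0
for t in range(40):
    # For this linear-quadratic toy the gain is the same every step, but the loop
    # shows the receding-horizon structure: re-plan, apply the first input, repeat.
    u = -first_step_gain() * x
    x = A_real * x + B_real * u # the plan is applied to the mismatched real plant
print("final state of the real plant:", x)
```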

Relevance: 30.00%

Abstract:

Accurate estimates of the fall speed of natural hydrometeors are vital if their evolution in clouds is to be understood quantitatively. In this study, laboratory measurements of the terminal velocity v_t for a variety of ice particle models settling in viscous fluids, along with wind-tunnel and field measurements of ice particles settling in air, have been analyzed and compared to common methods of computing v_t from the literature. It is observed that while these methods work well for a number of particle types, they fail for particles with open geometries, specifically those for which the area ratio Ar is small (Ar is defined as the area of the particle projected normal to the flow divided by the area of a circumscribing disc). In particular, the fall speeds of stellar and dendritic crystals, needles, open bullet rosettes, and low-density aggregates are all overestimated. These particle types are important in many cloud types: aggregates in particular often dominate snow precipitation at the ground and vertically pointing Doppler radar measurements. Motivated by the laboratory data, a simple modification to previous computational methods is proposed, based on the area ratio. This new method collapses the available drag data onto an approximately universal curve, and the resulting errors in the computed fall speeds relative to the tank data are less than 25% in all cases. Comparison with the (much more scattered) measurements of ice particles falling in air shows strong support for this new method, with the area ratio bias apparently eliminated.
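
One way to realise an area-ratio-modified fall-speed calculation of this kind is sketched below: the Best (Davies) number is reduced by a factor of the square root of Ar before it is converted to a Reynolds number and then to v_t. The delta_0 and C_0 values follow the commonly used Heymsfield-Westbrook formulation and, like the example mass, diameter, and area ratio, should be read as assumptions rather than the coefficients derived in this study.

```python
import numpy as np

rho_air = 1.0        # air density (kg m^-3)
eta = 1.7e-5         # dynamic viscosity of air (kg m^-1 s^-1)
g = 9.81             # gravitational acceleration (m s^-2)
delta_0, C_0 = 8.0, 0.35   # boundary-layer drag constants (assumed values)

def fall_speed(mass, diameter, area_ratio):
    """Terminal velocity (m/s) from a Best number modified by the area ratio Ar."""
    # Area-ratio-modified Best (Davies) number: smaller Ar -> smaller X -> slower fall.
    X = rho_air / eta**2 * 8.0 * mass * g / (np.pi * np.sqrt(area_ratio))
    # Reynolds number from the Best number via a boundary-layer drag parameterisation.
    Re = delta_0**2 / 4.0 * (np.sqrt(1.0 + 4.0 * np.sqrt(X) / (delta_0**2 * np.sqrt(C_0))) - 1.0)**2
    return eta * Re / (rho_air * diameter)

# Example: a 2 mm low-density aggregate (mass and Ar are made-up illustrative values).
print(f"{fall_speed(mass=2e-8, diameter=2e-3, area_ratio=0.2):.2f} m/s")
```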