951 results for Covariance matrix estimation


Relevance: 80.00%

Abstract:

Real-time kinematic (RTK) GPS techniques have been extensively developed for applications including surveying, structural monitoring, and machine automation. Limitations of the existing RTK techniques that hinder their application for geodynamics purposes are twofold: (1) the achievable RTK accuracy is on the level of a few centimeters, and the uncertainty of the vertical component is 1.5–2 times worse than that of the horizontal components, and (2) the RTK position uncertainty grows in proportion to the base-to-rover distance. The key limiting factor behind these problems is the significant effect of residual tropospheric errors on the positioning solutions, especially on the highly correlated height component. This paper develops a geometry-specified troposphere decorrelation strategy to achieve subcentimeter kinematic positioning accuracy in all three components. The key is to set up a relative zenith tropospheric delay (RZTD) parameter to absorb the residual tropospheric effects and to solve the resulting model as an ill-posed problem using the regularization method. In order to compute a reasonable regularization parameter and thus obtain an optimal regularized solution, the covariance matrix of the positional parameters estimated without the RZTD parameter, which is characterized by the observation geometry, is used to replace the quadratic matrix of their “true” values. As a result, the regularization parameter is computed adaptively as the observation geometry varies. The experimental results show that the new method can efficiently alleviate the model's ill-conditioning and stabilize the solution from a single data epoch. Compared with the results from the conventional least squares method, the new method improves long-range RTK solution precision from several centimeters to the subcentimeter level in all components. More significantly, the precision of the height component is even higher. Several geoscience applications that require subcentimeter real-time solutions can benefit greatly from the proposed approach, such as real-time monitoring of earthquakes and large dams, high-precision GPS leveling, and refinement of the vertical datum. In addition, the high-resolution RZTD solutions can contribute to the effective recovery of tropospheric slant path delays in order to establish 4-D troposphere tomography.
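
The core numerical step is a Tikhonov-type regularized solution. The sketch below illustrates the general idea under simplifying assumptions; the regularization matrix `R`, the trace-based heuristic for `alpha`, and all names are illustrative, not the authors' exact formulation.

```python
import numpy as np

def regularized_solution(A, y, R, alpha):
    """Tikhonov-regularized least squares: x = (A'A + alpha*R)^(-1) A'y."""
    return np.linalg.solve(A.T @ A + alpha * R, A.T @ y)

def adaptive_alpha(A0, sigma2):
    """Heuristic regularization parameter driven by observation geometry.

    Following the idea above, the covariance of the position-only solution,
    Q = sigma^2 * (A0'A0)^(-1), stands in for the (unknown) quadratic matrix
    of the true parameter values. The scalar reduction via the trace is an
    assumption of this sketch.
    """
    Q = sigma2 * np.linalg.inv(A0.T @ A0)
    return sigma2 / np.trace(Q)
```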

Relevance: 80.00%

Abstract:

This paper presents a general, global approach to the problem of robot exploration, utilizing a topological data structure to guide an underlying Simultaneous Localization and Mapping (SLAM) process. A Gap Navigation Tree (GNT) is used to motivate global target selection, and occluded regions of the environment (called “gaps”) are tracked probabilistically. The process of map construction and the motion of the vehicle alter both the shape and the location of these regions. The use of online mapping is shown to reduce the difficulties of implementing the GNT.
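
As a rough illustration of the data structure involved, the sketch below models a gap tracked with an existence probability; all field names are hypothetical and not taken from the paper.

```python
from dataclasses import dataclass, field

@dataclass
class Gap:
    """An occluded region ("gap") at the frontier of the known map."""
    bearing: float                    # direction of the gap from the robot
    p_exists: float = 1.0             # probability the gap still exists
    children: list["Gap"] = field(default_factory=list)  # gaps revealed behind it

# The Gap Navigation Tree grows from the set of currently visible gaps;
# exploration targets the gap whose resolution promises the most new map.
root_gaps: list[Gap] = []
```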

Relevance: 80.00%

Abstract:

We consider quantile regression models and investigate the induced smoothing method for obtaining the covariance matrix of the regression parameter estimates. We show that the difference between the smoothed and unsmoothed estimating functions in quantile regression is negligible. Detailed and simple computational algorithms for calculating the asymptotic covariance are provided. Intensive simulation studies indicate that the proposed method performs very well. We also illustrate the algorithm by analyzing rainfall–runoff data from Murray Upland, Australia.
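
A minimal sketch of the induced-smoothing sandwich covariance in the iid-error case; the bandwidth choice and the simplified B term are assumptions of this sketch, not the paper's exact algorithm.

```python
import numpy as np
from scipy.stats import norm

def induced_smoothing_cov(X, y, beta, tau):
    """Sandwich covariance A^(-1) B A^(-1) for quantile regression.

    The indicator in the estimating function is replaced by a normal CDF
    with bandwidth sqrt(x_i' H x_i), H = (X'X)^(-1) -- the "induced"
    smoothing. Assumes iid errors for the B term.
    """
    n, _ = X.shape
    H = np.linalg.inv(X.T @ X)
    h = np.sqrt(np.einsum("ij,jk,ik->i", X, H, X))   # per-observation bandwidth
    r = y - X @ beta
    w = norm.pdf(r / h) / h                          # derivative of smoothed indicator
    A = (X * w[:, None]).T @ X / n                   # slope of estimating function
    B = tau * (1 - tau) * X.T @ X / n                # variance of the score
    Ainv = np.linalg.inv(A)
    return Ainv @ B @ Ainv / n
```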

Relevance: 80.00%

Abstract:

In the context of ambiguity resolution (AR) for Global Navigation Satellite Systems (GNSS), decorrelation of the entries of an ambiguity vector, integer ambiguity search, and ambiguity validation are the three standard procedures for solving integer least-squares problems. This paper contributes to AR in three respects. First, the orthogonality defect is introduced as a new measure of the performance of ambiguity decorrelation methods and is compared with the decorrelation number and the condition number, which are currently used as criteria for measuring the correlation of the ambiguity variance-covariance matrix. Numerically, the orthogonality defect captures the relationship between decorrelation impact and computational efficiency slightly better than the condition number. Second, the paper examines how the decorrelation number, the condition number, the orthogonality defect, and the size of the ambiguity search space relate to the number of ambiguity search candidates and search nodes. The size of the ambiguity search space can be estimated properly if the ambiguity matrix is well decorrelated, and it is shown to be a significant parameter in the ambiguity search process. Third, a new ambiguity resolution scheme is proposed to improve ambiguity search efficiency through control of the size of the ambiguity search space. The new AR scheme combines the LAMBDA search and validation procedures, which results in a much smaller search space and higher computational efficiency while retaining the same AR validation outcomes. In fact, the new scheme can deal with the case in which there is only one candidate, while the existing search methods require at least two candidates. If there is more than one candidate, the new scheme reduces to the usual ratio-test procedure. Experimental results indicate that the combined method can indeed improve ambiguity search efficiency for both single-constellation and dual-constellation cases, showing its potential for processing high-dimension integer parameters in a multi-GNSS environment.
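
For reference, the orthogonality defect of a basis is the product of the basis-vector norms divided by the basis volume; for a variance-covariance matrix it can be computed directly, as in this sketch (the example matrix is illustrative, not taken from the paper):

```python
import numpy as np

def orthogonality_defect(Q):
    """Orthogonality defect of an ambiguity variance-covariance matrix Q.

    Writing Q = B'B for a basis B, the defect prod_i ||b_i|| / |det(B)|
    reduces to sqrt(prod_i Q_ii / det(Q)); it equals 1 exactly when Q is
    diagonal (fully decorrelated) and grows with correlation.
    """
    return np.sqrt(np.prod(np.diag(Q)) / np.linalg.det(Q))

# Illustrative example: strong correlation gives a defect well above 1
Q = np.array([[6.29, 5.97, 0.54],
              [5.97, 6.29, 2.34],
              [0.54, 2.34, 6.29]])
print(orthogonality_defect(Q))   # >> 1, i.e. poorly decorrelated
```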

Relevance: 80.00%

Abstract:

Computer experiments, consisting of a number of runs of a computer model with different inputs, are now commonplace in scientific research. Using a simple fire model for illustration, some guidelines are given for the size of a computer experiment. A graph is provided relating prediction error to sample size, which should be of use when designing computer experiments. Methods for augmenting computer experiments with extra runs are also described and illustrated. The simplest method involves adding one point at a time, choosing the point with the maximum prediction variance. Another method that appears to work well is to choose points from a candidate set with the maximum determinant of the variance-covariance matrix of predictions.
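
The one-at-a-time augmentation rule is easy to sketch with a Gaussian-process emulator standing in for the computer model's predictor; `run_computer_model` is a hypothetical stand-in for the fire model, and the choice of emulator is an assumption of this sketch.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

def run_computer_model(x):
    """Hypothetical stand-in for the computer model (e.g. the fire model)."""
    return float(np.sin(3 * x[0]) + x[1] ** 2)

def augment_by_max_variance(X, y, candidates, n_new):
    """Add, one at a time, the candidate point with maximum prediction variance."""
    gp = GaussianProcessRegressor()
    for _ in range(n_new):
        gp.fit(X, y)
        _, std = gp.predict(candidates, return_std=True)
        i = int(np.argmax(std))                        # most uncertain candidate
        X = np.vstack([X, candidates[i]])
        y = np.append(y, run_computer_model(candidates[i]))
        candidates = np.delete(candidates, i, axis=0)  # don't pick it twice
    return X, y
```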

Relevance: 80.00%

Abstract:

Spectroscopic studies of complex clinical fluids have made a more holistic approach to their chemical analysis increasingly popular and widely employed. The efficient and effective interpretation of multidimensional spectroscopic data relies on many chemometric techniques, and one such group of tools comprises the so-called correlation analysis methods. Typical of these techniques are two-dimensional correlation analysis and statistical total correlation spectroscopy (STOCSY). Whilst the former has largely been applied to optical spectroscopic analysis, STOCSY was developed and has been applied almost exclusively in NMR metabonomic studies. Using a 1H NMR study of human blood plasma from subjects recovering from exhaustive exercise trials, the basic concepts and applications of these techniques are examined. Typical information from their application to NMR-based metabonomics is presented, and their value in aiding the interpretation of NMR data obtained from biological systems is illustrated. Major energy metabolites are identified in the NMR spectra, and the dynamics of their appearance in and removal from plasma during exercise recovery are illustrated and discussed. The complementary nature of two-dimensional correlation analysis and statistical total correlation spectroscopy is highlighted.
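
In its basic form, STOCSY correlates the intensity at one "driver" spectral point with every other point across a set of spectra; a minimal sketch follows (the array layout and preprocessing are assumptions, not the study's pipeline):

```python
import numpy as np

def stocsy(spectra, driver_index):
    """Basic STOCSY trace: correlation of every spectral point with a driver.

    `spectra` is an (n_samples, n_points) array of 1H NMR intensities.
    Points whose correlation with the driver approaches 1 are likely to
    belong to the same molecule; weaker correlations suggest metabolic links.
    """
    X = spectra - spectra.mean(axis=0)                 # mean-center each point
    driver = X[:, driver_index]
    cov = X.T @ driver / (len(X) - 1)
    corr = cov / (X.std(axis=0, ddof=1) * driver.std(ddof=1))
    return corr
```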

Relevance: 80.00%

Abstract:

This paper investigates how best to forecast optimal portfolio weights in the context of a volatility timing strategy. It measures the economic value of a number of methods for forming optimal portfolios on the basis of realized volatility. These include the traditional econometric approach of forming portfolios from forecasts of the covariance matrix, and a novel method in which a time series of optimal portfolio weights is constructed from observed realized volatility and forecast directly. The approach proposed here of directly forecasting portfolio weights shows a great deal of merit. The resulting portfolios are of equivalent economic benefit to those from a number of competing approaches and are more stable across time. These findings have obvious implications for the manner in which volatility timing is undertaken in a portfolio allocation context.
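
A sketch of the two routes the abstract contrasts, using global minimum-variance weights for concreteness; the exponential smoother used to forecast the weight series directly is an illustrative stand-in, not the paper's forecasting model.

```python
import numpy as np

def gmv_weights(cov):
    """Global minimum-variance weights: w = S^(-1) 1 / (1' S^(-1) 1)."""
    w = np.linalg.solve(cov, np.ones(cov.shape[0]))
    return w / w.sum()

def forecast_weights_directly(realized_covs, lam=0.9):
    """Build the time series of optimal weights from realized covariance
    matrices, then forecast the weights themselves by smoothing."""
    w_hat = gmv_weights(realized_covs[0])
    for S in realized_covs[1:]:
        w_hat = lam * w_hat + (1 - lam) * gmv_weights(S)
    return w_hat / w_hat.sum()          # renormalize to sum to one
```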

Relevance: 80.00%

Abstract:

Techniques for evaluating and selecting multivariate volatility forecasts are not yet understood as well as their univariate counterparts. This paper considers the ability of different loss functions to discriminate between a set of competing forecasting models which are subsequently applied in a portfolio allocation context. It is found that a likelihood-based loss function outperforms its competitors, including those based on the given portfolio application. This result indicates that considering the particular application of forecasts is not necessarily the most effective basis on which to select models.
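
One standard likelihood-based loss of the kind the abstract refers to evaluates a covariance forecast against realized returns; a sketch (the paper's exact variant may differ):

```python
import numpy as np

def qlike_loss(Sigma, r):
    """Likelihood-based (QLIKE-type) loss for a covariance forecast Sigma
    against a realized return vector r: log|Sigma| + r' Sigma^(-1) r.
    Lower is better; in expectation the loss is minimized by the true
    conditional covariance.
    """
    _, logdet = np.linalg.slogdet(Sigma)
    return logdet + r @ np.linalg.solve(Sigma, r)
```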

Relevance: 80.00%

Abstract:

Some statistical procedures already available in the literature are employed in developing the water quality index (WQI). The complexity and interdependency that occur in the physical and chemical processes of water could be explained more easily if statistical approaches were applied to water quality indexing. The most popular statistical method used in developing a WQI is principal component analysis (PCA). In the literature, WQI development based on classical PCA has mostly used water quality data that have been transformed and normalized. Outliers may be included in or eliminated from the analysis. However, the classical mean and sample covariance matrix used in the classical PCA methodology are not reliable if outliers exist in the data. Since the presence of outliers may affect the computation of the principal components, robust principal component analysis (RPCA) should be used. Focusing on the Langat River, the RPCA-WQI was introduced for the first time in this study to recalculate the DOE-WQI. Results show that the RPCA-WQI is capable of capturing a distribution similar to that of the existing DOE-WQI.
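
One common way to robustify the PCA step is to replace the sample mean and covariance with a robust scatter estimate such as the Minimum Covariance Determinant; the sketch below shows that idea, not necessarily the study's exact RPCA procedure.

```python
import numpy as np
from sklearn.covariance import MinCovDet

def robust_principal_components(X, n_components):
    """PCA on a robust (MCD) covariance estimate instead of the sample one.

    Outlying samples get little weight in the MCD scatter, so the
    principal components reflect the bulk of the water quality data.
    """
    mcd = MinCovDet().fit(X)                      # robust location and scatter
    eigvals, eigvecs = np.linalg.eigh(mcd.covariance_)
    order = np.argsort(eigvals)[::-1]             # largest variance first
    components = eigvecs[:, order[:n_components]]
    scores = (X - mcd.location_) @ components
    return scores, components
```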

Relevance: 80.00%

Abstract:

A smoothed rank-based procedure is developed for the accelerated failure time model to overcome computational issues. The proposed estimator is based on an EM-type procedure coupled with induced smoothing. The proposed iterative approach converges provided the initial value is based on a consistent estimator, and the limiting covariance matrix can be obtained from a sandwich-type formula. The consistency and asymptotic normality of the proposed estimator are also established. Extensive simulations show that the new estimator is not only computationally less demanding but also more reliable than the other existing estimators.
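
The sandwich-type formula has the familiar generic form (standard notation, not the paper's exact expressions), where $\tilde U$ is the smoothed rank-based estimating function:

```latex
\widehat{\operatorname{Var}}\big(\hat\beta\big)
  = \hat A^{-1}\,\hat B\,\hat A^{-\top},
\qquad
\hat A = \left.\frac{\partial \tilde U(\beta)}{\partial \beta^{\top}}\right|_{\beta=\hat\beta},
\qquad
\hat B = \widehat{\operatorname{Var}}\big(\tilde U(\hat\beta)\big).
```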

Relevance: 80.00%

Abstract:

Modeling of cultivar × trial effects for multienvironment trials (METs) within a mixed model framework is now common practice in many plant breeding programs. The factor analytic (FA) model is a parsimonious form used to approximate the fully unstructured form of the genetic variance-covariance matrix in the model for MET data. In this study, we demonstrate that the FA model is generally the model of best fit across a range of data sets taken from early-generation trials in a breeding program. In addition, we demonstrate the superiority of the FA model in achieving the most common aim of METs, namely the selection of superior genotypes. Selection is achieved using best linear unbiased predictions (BLUPs) of cultivar effects at each environment, considered either individually or as a weighted average across environments. In practice, empirical BLUPs (E-BLUPs) of cultivar effects must be used instead of BLUPs, since the variance parameters in the model must be estimated rather than assumed known. While the optimal properties of minimum mean squared error of prediction (MSEP) and maximum correlation between true and predicted effects possessed by BLUPs do not hold for E-BLUPs, a simulation study shows that E-BLUPs perform well in terms of MSEP.
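
The parsimony of the FA form comes from approximating the environment-by-environment genetic covariance with a few loadings plus specific variances; a small sketch (the loadings shown are illustrative numbers only):

```python
import numpy as np

def fa_covariance(Lambda, psi):
    """Factor analytic covariance: G = Lambda Lambda' + diag(psi).

    Lambda is (n_environments x k) with k small and psi holds the specific
    variances; the parameter count grows linearly in the number of
    environments rather than quadratically as in the unstructured form.
    """
    return Lambda @ Lambda.T + np.diag(psi)

# Illustrative k = 2 structure for 5 environments
Lambda = np.array([[1.0, 0.2], [0.9, -0.1], [0.8, 0.5], [0.4, 0.9], [0.3, 1.0]])
psi = np.full(5, 0.3)
G = fa_covariance(Lambda, psi)
```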

Relevance: 80.00%

Abstract:

Stationary processes are random variables whose value is a signal and whose distribution is invariant to translation in the domain of the signal. They are intimately connected to convolution, and therefore to the Fourier transform, since the covariance matrix of a stationary process is a Toeplitz matrix, and Toeplitz matrices are the expression of convolution as a linear operator. This thesis utilises this connection in the study of i) efficient training algorithms for object detection and ii) trajectory-based non-rigid structure-from-motion.
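
The Toeplitz-convolution connection can be made concrete in a few lines: applying a symmetric Toeplitz covariance operator is a convolution, so it can be computed with the FFT via circulant embedding (a textbook identity, sketched here with illustrative numbers):

```python
import numpy as np
from scipy.linalg import toeplitz

c = np.array([1.0, 0.6, 0.3, 0.1])        # first column of the covariance
x = np.random.default_rng(0).standard_normal(len(c))

# Direct application of the Toeplitz (stationary covariance) operator
y_matrix = toeplitz(c) @ x

# Same result via the FFT: embed the Toeplitz matrix in a circulant, whose
# eigenvectors are the Fourier basis, and multiply in the frequency domain.
m = 2 * len(c) - 2
v = np.concatenate([c, c[-2:0:-1]])        # circulant first column
x_pad = np.concatenate([x, np.zeros(m - len(x))])
y_fft = np.fft.ifft(np.fft.fft(v) * np.fft.fft(x_pad)).real[: len(x)]

assert np.allclose(y_matrix, y_fft)        # the same convolution, two ways
```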

Relevance: 80.00%

Abstract:

Single-symbol maximum likelihood (ML) decodable distributed orthogonal space-time block codes (DOSTBCs) have been introduced recently for cooperative networks, and an upper bound on the maximal rate of such codes, along with code constructions, has been presented. In this paper, we introduce a new class of distributed space-time block codes (DSTBCs) called semi-orthogonal precoded distributed single-symbol decodable space-time block codes (semi-SSD-PDSTBCs), wherein the source performs precoding on the information symbols before transmitting them to all the relays. A set of necessary and sufficient conditions on the relay matrices for the existence of semi-SSD-PDSTBCs is proved. It is shown that the DOSTBCs are a special case of semi-SSD-PDSTBCs. A subset of semi-SSD-PDSTBCs having a diagonal covariance matrix at the destination is studied, and an upper bound on the maximal rate of such codes is derived. The bounds obtained are approximately twice those of the DOSTBCs. A systematic construction of semi-SSD-PDSTBCs is presented for the number of relays K ≥ 4, and the constructed codes are shown to have higher rates than the DOSTBCs.
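
Single-symbol ML decodability in the orthogonal-design setting the abstract builds on is easiest to see with the classic 2x1 Alamouti code, sketched below as a noiseless demonstration (a standard textbook example, not the distributed construction of the paper):

```python
import numpy as np

rng = np.random.default_rng(1)
h = (rng.standard_normal(2) + 1j * rng.standard_normal(2)) / np.sqrt(2)
s = np.array([1 + 1j, -1 + 1j]) / np.sqrt(2)     # two QPSK symbols

# Received over two slots: y1 = h1*s1 + h2*s2,  y2 = -h1*conj(s2) + h2*conj(s1)
y1 = h[0] * s[0] + h[1] * s[1]
y2 = -h[0] * np.conj(s[1]) + h[1] * np.conj(s[0])

# Matched filtering decouples the symbols: each estimate depends on one
# symbol only, which is what makes single-symbol ML decoding possible.
g = np.abs(h[0]) ** 2 + np.abs(h[1]) ** 2
s1_hat = (np.conj(h[0]) * y1 + h[1] * np.conj(y2)) / g
s2_hat = (np.conj(h[1]) * y1 - h[0] * np.conj(y2)) / g
assert np.allclose([s1_hat, s2_hat], s)
```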

Relevance: 80.00%

Abstract:

Four algorithms, all variants of Simultaneous Perturbation Stochastic Approximation (SPSA), are proposed. The original one-measurement SPSA uses an estimate of the gradient of the objective function L that contains an additional bias term not seen in two-measurement SPSA. As a result, the asymptotic covariance matrix of the iterate convergence process has a bias term. We propose a one-measurement algorithm that eliminates this bias and has asymptotic convergence properties that make for easier comparison with two-measurement SPSA. Under certain conditions, the algorithm outperforms both forms of SPSA, with the only overhead being the storage of a single measurement. We also propose a similar algorithm that uses perturbations obtained from normalized Hadamard matrices. The convergence w.p. 1 of both algorithms is established. We extend measurement reuse to design two second-order SPSA algorithms and sketch their convergence analysis. Finally, we present simulation results on an illustrative minimization problem.
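
For orientation, the baseline one-measurement SPSA that the paper improves upon can be sketched as follows; the gain sequences and the test function are illustrative choices, not the paper's settings.

```python
import numpy as np

def one_measurement_spsa(L, theta0, n_iter=5000, a=0.02, c=0.2, seed=0):
    """Baseline one-measurement SPSA.

    Each iteration uses a single (possibly noisy) evaluation of L at a
    randomly perturbed point; the gradient estimate
    g_k = L(theta + c_k*delta) / (c_k * delta) carries the extra bias
    term discussed above.
    """
    rng = np.random.default_rng(seed)
    theta = np.asarray(theta0, dtype=float)
    for k in range(1, n_iter + 1):
        ak, ck = a / k, c / k ** 0.25             # standard gain sequences
        delta = rng.choice([-1.0, 1.0], size=theta.size)
        g = (L(theta + ck * delta) / ck) * delta  # 1/delta_i = delta_i for +-1
        theta -= ak * g
    return theta

# Illustrative quadratic: the iterates drift toward the minimizer at 0
print(one_measurement_spsa(lambda t: float(np.sum(t ** 2)), np.ones(4)))
```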

Relevance: 80.00%

Abstract:

A method of source localization in shallow water, based on the subspace concept, is described. It is shown that a vector representing the source in the image space spanned by the direction vectors of the source images is orthogonal to the noise eigenspace of the covariance matrix. Computer simulation has shown that a horizontal array of eight sensors can accurately localize one or more uncorrelated sources in shallow water dominated by multipath propagation.
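
The orthogonality property is the same one exploited by MUSIC-type estimators: candidate direction vectors nearly orthogonal to the noise eigenspace mark source locations. A generic sketch of that test follows (not the paper's image-space formulation):

```python
import numpy as np

def subspace_spectrum(R, steering, n_sources):
    """Pseudo-spectrum from noise-subspace orthogonality.

    R is the (n_sensors x n_sensors) covariance matrix and `steering`
    an (n_sensors x n_candidates) matrix of candidate direction vectors;
    true source positions give near-zero projections onto the noise
    eigenspace, hence sharp peaks in the returned spectrum.
    """
    eigvals, eigvecs = np.linalg.eigh(R)           # ascending eigenvalues
    En = eigvecs[:, : R.shape[0] - n_sources]      # noise eigenspace
    proj = np.linalg.norm(En.conj().T @ steering, axis=0)
    return 1.0 / proj ** 2
```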