49 results for Maximum-likelihood-estimation


Relevance: 100.00%

Abstract:

Analytical q-ball imaging is widely used to reconstruct the orientation distribution function (ODF) from diffusion-weighted MRI data. Estimating the spherical harmonic coefficients is a critical step in this method. Least squares (LS) is widely used for this purpose under the assumption of additive Gaussian noise. However, Rician noise is a more appropriate model for noise in the MR signal, so current estimation techniques are valid only at high SNRs, where the Gaussian distribution approximates the Rician distribution. The aim of this study is to present an estimation approach based on the actual distribution of the data, providing reliable results particularly at low SNR values. Maximum likelihood (ML) is investigated as a more effective estimation method. No closed-form estimator exists, however, because the likelihood becomes nonlinear under the Rician noise assumption. Consequently, the LS estimate is used as an initial guess and a refined answer is obtained with iterative numerical methods. According to the results, when the Rician distribution is taken into account, ODFs reconstructed from low-SNR data agree closely with those reconstructed from high-SNR data. The error between the estimated and actual fiber orientations was also compared for the ML and LS estimators: at low SNRs, the ML estimator achieves lower error than the LS estimator.
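As a rough sketch of this refinement strategy, the Python fragment below maximizes a Rician log-likelihood for a single signal amplitude, starting from an LS-style initial guess; the signal model, noise level, and optimizer choice are illustrative assumptions, not the authors' implementation.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import rice

rng = np.random.default_rng(0)

# Illustrative setup (assumed): one true amplitude observed under Rician noise.
sigma = 1.0                       # noise level, taken as known here
true_amplitude = 1.5              # low-SNR regime
m = rice.rvs(true_amplitude / sigma, scale=sigma, size=200, random_state=rng)

# LS-style initial guess: the sample mean (biased upward for Rician data).
ls_estimate = m.mean()

# Negative Rician log-likelihood of the amplitude. No closed-form maximizer
# exists, so the LS guess is refined with an iterative numerical optimizer.
def neg_log_lik(nu):
    return -rice.logpdf(m, nu / sigma, scale=sigma).sum()

ml_estimate = minimize(neg_log_lik, x0=ls_estimate, method="Nelder-Mead").x[0]
print(f"LS: {ls_estimate:.3f}  ML: {ml_estimate:.3f}  true: {true_amplitude:.3f}")
```

At low SNR the LS estimate overshoots because the Rician mean exceeds the underlying amplitude, which is exactly the bias the ML refinement removes.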

Relevance: 100.00%

Abstract:

This paper considers tracking mobile agents with a Doppler radar system mounted on a moving vehicle. The Doppler shifts that the mobile agents impose on single-frequency continuous-wave signals are analyzed in order to estimate the positions and velocities of multiple mobile agents. The measurement noise is assumed to be Gaussian, and maximum likelihood estimation is used to enhance the localization accuracy.
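Under the Gaussian noise assumption, ML estimation of the agent state amounts to nonlinear least squares on the Doppler residuals. The toy single-agent sketch below illustrates this; the geometry, wavelength, and noise level are assumptions made for the example, not values from the paper.

```python
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(1)
wavelength = 0.03                                   # assumed CW wavelength (m)

# Illustrative geometry: radar positions along the vehicle's track, one per ping.
radar_pos = np.column_stack([np.linspace(0, 50, 12), np.zeros(12)])

def doppler(state, radars):
    """Doppler shift of the agent's echo at each radar position."""
    pos, vel = state[:2], state[2:]
    los = pos - radars                               # line-of-sight vectors
    unit = los / np.linalg.norm(los, axis=1, keepdims=True)
    return (unit @ vel) / wavelength                 # radial velocity / wavelength

truth = np.array([30.0, 40.0, -2.0, 1.0])            # agent position and velocity
meas = doppler(truth, radar_pos) + rng.normal(0, 1.0, len(radar_pos))

# Gaussian noise: the ML estimate minimizes the squared Doppler residuals.
fit = least_squares(lambda s: doppler(s, radar_pos) - meas,
                    x0=np.array([20.0, 30.0, -1.0, 0.5]))
print("estimated [x, y, vx, vy]:", np.round(fit.x, 2))
```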

Relevance: 100.00%

Abstract:

Q-ball imaging has been presented to reconstruct the diffusion orientation distribution function from diffusion-weighted MRI. In this thesis, we present a novel and robust approach to satisfying the smoothness constraint required in Q-ball imaging. Moreover, we develop an improved estimator based on the actual distribution of the MR data.

Relevance: 100.00%

Abstract:

Software reliability growth models (SRGMs) are extensively employed in software engineering to assess the reliability of software before its release for operational use. These models are usually parametric functions obtained by statistically fitting parametric curves, using maximum likelihood estimation or least-squares methods, to plots of the cumulative number of failures N(t) observed against a period of systematic testing time t. Since the 1970s, a very large number of SRGMs have been proposed in the reliability and software engineering literature, and these are often very complex, reflecting the involved testing regimes that took place during software development. In this paper we extend some of our previous work by adopting a nonparametric approach to SRGM modeling based on local polynomial modeling with kernel smoothing. These models require very few assumptions, which simplifies the estimation process and also makes them applicable under a wide variety of situations. Finally, we provide numerical examples in which these models are evaluated and compared.
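A minimal sketch of the nonparametric idea, assuming synthetic failure data: a local-linear (degree-1 local polynomial) smoother with a Gaussian kernel fitted to cumulative failure counts. The data-generating curve and the bandwidth are illustrative choices, not the paper's.

```python
import numpy as np

def local_linear(t_query, t, y, bandwidth):
    """Local-linear (degree-1 local polynomial) kernel smoother."""
    est = np.empty(len(t_query))
    for i, t0 in enumerate(t_query):
        w = np.exp(-0.5 * ((t - t0) / bandwidth) ** 2)   # Gaussian kernel weights
        X = np.column_stack([np.ones_like(t), t - t0])   # local linear design
        beta = np.linalg.solve(X.T @ (w[:, None] * X), X.T @ (w * y))
        est[i] = beta[0]                                 # intercept = fit at t0
    return est

# Illustrative testing data: noisy cumulative failure counts N(t).
rng = np.random.default_rng(2)
t = np.linspace(0, 100, 60)
n_t = 80 * (1 - np.exp(-0.04 * t)) + rng.normal(0, 2, t.size)

smoothed = local_linear(t, t, n_t, bandwidth=8.0)
```

Unlike a parametric SRGM, nothing here commits to a particular failure-intensity law; only the bandwidth must be chosen.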

Relevance: 100.00%

Abstract:

Activity recognition is an important issue in building intelligent monitoring systems. In this paper we address the recognition of multilevel activities via a conditional Markov random field (MRF), known as the dynamic conditional random field (DCRF). Parameter estimation in general MRFs using maximum likelihood is known to be computationally challenging (except in extreme cases), so we propose an efficient boosting-based algorithm, AdaBoost.MRF, for this task. Distinct from most existing work, our algorithm can handle hidden variables (missing labels) and is particularly attractive for smart-house domains, where reliable labels are often sparsely observed. Furthermore, our method works exclusively on trees and thus is guaranteed to converge. We apply the AdaBoost.MRF algorithm to a home video surveillance application and demonstrate its efficacy.
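For readers unfamiliar with the boosting principle the algorithm builds on, the sketch below is plain discrete AdaBoost with decision stumps; it is not AdaBoost.MRF itself, which replaces the stumps with tree-structured CRF weak learners and copes with missing labels.

```python
import numpy as np

def adaboost(X, y, n_rounds=20):
    """Plain discrete AdaBoost with decision stumps, for labels y in {-1, +1}."""
    n = len(y)
    w = np.full(n, 1.0 / n)                    # per-sample weights
    learners = []
    for _ in range(n_rounds):
        best = None
        for j in range(X.shape[1]):            # exhaustive stump search
            for thr in np.unique(X[:, j]):
                for sign in (1, -1):
                    pred = sign * np.where(X[:, j] > thr, 1, -1)
                    err = w[pred != y].sum()
                    if best is None or err < best[0]:
                        best = (err, j, thr, sign)
        err, j, thr, sign = best
        alpha = 0.5 * np.log((1 - err) / max(err, 1e-12))
        pred = sign * np.where(X[:, j] > thr, 1, -1)
        w *= np.exp(-alpha * y * pred)         # upweight the mistakes
        w /= w.sum()
        learners.append((alpha, j, thr, sign))
    return learners

def predict(learners, X):
    score = sum(a * s * np.where(X[:, j] > t, 1, -1) for a, j, t, s in learners)
    return np.sign(score)

# Toy usage on linearly separable data.
rng = np.random.default_rng(3)
X = rng.normal(size=(200, 2))
y = np.where(X[:, 0] + X[:, 1] > 0, 1, -1)
model = adaboost(X, y)
print("training accuracy:", (predict(model, X) == y).mean())
```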

Relevance: 100.00%

Abstract:

Statistics-based Internet traffic classification using machine learning techniques has attracted extensive research interest lately, because of the increasing ineffectiveness of traditional port-based and payload-based approaches. In particular, unsupervised learning, that is, traffic clustering, is very important in real-life applications, where labeled training data are difficult to obtain and new patterns keep emerging. Although previous studies have applied some classic clustering algorithms such as K-Means and EM for the task, the quality of resultant traffic clusters was far from satisfactory. In order to improve the accuracy of traffic clustering, we propose a constrained clustering scheme that makes decisions with consideration of some background information in addition to the observed traffic statistics. Specifically, we make use of equivalence set constraints indicating that particular sets of flows are using the same application layer protocols, which can be efficiently inferred from packet headers according to the background knowledge of TCP/IP networking. We model the observed data and constraints using Gaussian mixture density and adapt an approximate algorithm for the maximum likelihood estimation of model parameters. Moreover, we study the effects of unsupervised feature discretization on traffic clustering by using a fundamental binning method. A number of real-world Internet traffic traces have been used in our evaluation, and the results show that the proposed approach not only improves the quality of traffic clusters in terms of overall accuracy and per-class metrics, but also speeds up the convergence.
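One common way to fold such equivalence-set (must-link) constraints into a Gaussian mixture is to make the E-step assign a single shared responsibility vector to every flow in a set, treating the set as a block drawn from one component. The sketch below shows that constrained E-step; it is a simplification under stated assumptions, not necessarily the approximate algorithm used in the paper.

```python
import numpy as np
from scipy.stats import multivariate_normal

def constrained_e_step(X, set_ids, weights, means, covs):
    """E-step of a GMM in which flows sharing a set id share one component.

    All flows with the same set id are forced to take identical
    responsibilities, implementing the equivalence-set constraint.
    """
    K = len(weights)
    log_p = np.column_stack([
        np.log(weights[k]) + multivariate_normal.logpdf(X, means[k], covs[k])
        for k in range(K)
    ])
    resp = np.empty_like(log_p)
    for s in np.unique(set_ids):
        idx = set_ids == s
        joint = log_p[idx].sum(axis=0)        # set-level unnormalized log posterior
        joint -= joint.max()                  # stabilize before exponentiating
        r = np.exp(joint)
        resp[idx] = r / r.sum()               # every flow in the set shares r
    return resp
```

The M-step is then the standard mixture update computed from these shared responsibilities.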

Relevance: 100.00%

Abstract:

This letter addresses joint space-time trellis decoding and channel estimation in time-varying fading channels that are spatially and temporally correlated. A recursive space-time receiver that incorporates per-survivor processing (PSP) and Kalman filtering into the Viterbi algorithm is proposed. This approach generalizes existing work to the correlated fading channel case. The channel time-evolution is modeled by a multichannel autoregressive process, and a bank of Kalman filters is used to track the channel variations. Computer simulation results show that performance close to that of a maximum likelihood receiver with perfect channel state information (CSI) can be obtained. The effects of spatial correlation on the performance of a receiver that assumes independent fading channels are also examined.
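As a stripped-down illustration of the tracking component, the sketch below runs a scalar Kalman filter along one hypothesized symbol sequence for a flat-fading tap modeled as an AR(1) process; the multichannel AR model, the bank of filters, and the PSP/Viterbi machinery of the letter are omitted, and all parameter values are assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)

# AR(1) model of a time-varying flat-fading tap: h[n] = a*h[n-1] + w[n].
a, q, r = 0.98, 1 - 0.98**2, 0.1   # AR coefficient, process var, measurement var
n_steps = 500
h = np.empty(n_steps, dtype=complex)
h[0] = rng.normal() + 1j * rng.normal()
for n in range(1, n_steps):
    h[n] = a * h[n-1] + np.sqrt(q / 2) * (rng.normal() + 1j * rng.normal())

s = rng.choice([-1.0, 1.0], n_steps)   # symbols along one survivor path
y = h * s + np.sqrt(r / 2) * (rng.normal(size=n_steps) + 1j * rng.normal(size=n_steps))

# Scalar complex Kalman filter tracking the channel along that path.
h_hat, p = 0.0 + 0.0j, 1.0
estimates = []
for n in range(n_steps):
    h_pred, p_pred = a * h_hat, a * a * p + q             # time update (AR model)
    k_gain = p_pred * s[n] / (s[n]**2 * p_pred + r)       # Kalman gain
    h_hat = h_pred + k_gain * (y[n] - s[n] * h_pred)      # measurement update
    p = (1 - k_gain * s[n]) * p_pred
    estimates.append(h_hat)
```

In the receiver proper, one such filter runs per survivor, and its innovation feeds the branch metric of the Viterbi recursion.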

Relevance: 100.00%

Abstract:

In this paper, an algorithm is proposed for approximating the path of a moving autonomous mobile sensor with an unknown position, using Received Signal Strength (RSS) measurements. With a Least Squares (LS) estimate as an input, a Maximum-Likelihood (ML) approach is used to determine the location of the unknown mobile sensor. As the mobile sensor changes position, the characteristics of the RSS measurements also change; the proposed method therefore adapts the RSS measurement model by dynamically changing the path loss value alpha to aid position estimation. Secondly, a Recursive Least-Squares (RLS) algorithm is used to estimate the path of the moving mobile sensor, using the Maximum-Likelihood position estimate as an input. The performance of the proposed algorithm is evaluated via simulation, and it is shown that this method can accurately determine the position of the mobile sensor and efficiently track it during motion.
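The sketch below illustrates the LS-then-ML stage for a single position fix under a log-distance path loss model, jointly refining the path loss exponent alpha; the anchor layout, model parameters, and weighted-centroid initializer are assumptions made for the example.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(5)

anchors = np.array([[0., 0.], [50., 0.], [0., 50.], [50., 50.]])
p0, d0 = -40.0, 1.0                 # assumed reference power (dBm) and distance (m)
target, alpha_true = np.array([18.0, 31.0]), 3.0

def rss_model(pos, alpha):
    """Log-distance path loss model: RSS falls off as 10*alpha*log10(d)."""
    d = np.linalg.norm(anchors - pos, axis=1)
    return p0 - 10.0 * alpha * np.log10(np.maximum(d, d0) / d0)

rss = rss_model(target, alpha_true) + rng.normal(0, 2.0, len(anchors))

# Crude LS-style initial guess: centroid of anchors weighted by received power.
w = 10 ** (rss / 10.0)
x0 = (anchors * w[:, None]).sum(axis=0) / w.sum()

# ML refinement under Gaussian noise: minimize squared RSS residuals,
# estimating the path loss exponent alpha along with the position.
def cost(theta):
    pos, a = theta[:2], theta[2]
    return ((rss - rss_model(pos, a)) ** 2).sum()

fit = minimize(cost, x0=np.append(x0, 3.0), method="Nelder-Mead")
print("position:", np.round(fit.x[:2], 2), " alpha:", round(fit.x[2], 2))
```

In the full algorithm, a sequence of such ML fixes would then feed the RLS path tracker.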

Relevance: 90.00%

Abstract:

In this paper we generalize Besag's pseudo-likelihood function for spatial statistical models on a region of a lattice. The correspondingly defined maximum generalized pseudo-likelihood estimates (MGPLEs) are natural extensions of Besag's maximum pseudo-likelihood estimate (MPLE). The MGPLEs connect the MPLE and the maximum likelihood estimate. We carry out experimental calculations of the MGPLEs for spatial processes on the lattice. These simulation results clearly show that the MGPLEs perform better than the MPLE, and the performances of differently defined MGPLEs are compared. The methods are also illustrated by application to two real data sets.
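For context, the sketch below computes Besag's original MPLE, the baseline that the MGPLEs extend, for a nearest-neighbour Ising model on a square lattice; the lattice size, the true interaction parameter, and the short Gibbs run that generates the data are all illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def neighbor_sums(x):
    """Sum of the four nearest-neighbour spins at every interior lattice site."""
    return x[:-2, 1:-1] + x[2:, 1:-1] + x[1:-1, :-2] + x[1:-1, 2:]

def neg_log_pseudolikelihood(beta, x):
    s = neighbor_sums(x)
    core = x[1:-1, 1:-1]
    # Besag: sum over sites of log P(x_s | neighbours) for +/-1 Ising spins.
    return -(beta * core * s - np.log(2 * np.cosh(beta * s))).sum()

# Illustrative data: spins from a short Gibbs run at an assumed beta = 0.3.
rng = np.random.default_rng(6)
x = rng.choice([-1, 1], size=(40, 40))
beta_true = 0.3
for _ in range(30):                      # a few Gibbs sweeps, enough for a demo
    for i in range(1, 39):
        for j in range(1, 39):
            s = x[i-1, j] + x[i+1, j] + x[i, j-1] + x[i, j+1]
            p_up = 1.0 / (1.0 + np.exp(-2 * beta_true * s))
            x[i, j] = 1 if rng.random() < p_up else -1

mple = minimize_scalar(neg_log_pseudolikelihood, args=(x,), bounds=(0, 1),
                       method="bounded")
print("MPLE of beta:", round(mple.x, 3))
```

The pseudo-likelihood replaces the intractable joint normalizing constant with a product of tractable site-wise conditionals, which is what makes this maximization cheap.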

Relevance: 90.00%

Abstract:

Purpose – The purpose of this article is to present an empirical analysis of complex sample data with regard to the biasing effect of non-independence of observations on the standard errors (SEs) of parameter estimates. Using field data structured as repeated measurements, it is shown, in a two-factor confirmatory factor analysis model, how the SE bias arises when the non-independence is ignored.

Design/methodology/approach – Three estimation procedures are compared: normal asymptotic theory (maximum likelihood); non-parametric standard error estimation (naïve bootstrap); and sandwich (robust covariance matrix) estimation (pseudo-maximum likelihood).

Findings – The study reveals that, when using either normal asymptotic theory or non-parametric standard error estimation, the SE bias produced by the non-independence of observations can be noteworthy.

Research limitations/implications – Considering the methodological constraints in employing field data, the three analyses examined must be interpreted independently, and as a result taxonomic generalisations are limited. However, the study still provides "case study" evidence suggesting the existence of a relationship between non-independence of observations and biased standard error estimates.

Originality/value – Given the increasing popularity of structural equation models in the social sciences and in particular in the marketing discipline, the paper provides a theoretical and practical insight into how to treat repeated measures and clustered data in general, adding to previous methodological research. Some conclusions and suggestions for researchers who make use of partial least squares modelling are also drawn.
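To make the sandwich idea concrete, the sketch below contrasts naive and cluster-robust (sandwich) standard errors for a slope in a deliberately clustered linear model; the article's setting is a confirmatory factor analysis, so this simple regression is only an assumption-laden stand-in for illustration.

```python
import numpy as np

rng = np.random.default_rng(7)

# Clustered data: repeated measurements share a cluster-level effect,
# so observations within a cluster are not independent.
n_clusters, per_cluster = 50, 8
cluster = np.repeat(np.arange(n_clusters), per_cluster)
x = rng.normal(size=cluster.size)
u = rng.normal(size=n_clusters)[cluster]          # cluster effect -> dependence
y = 1.0 + 2.0 * x + u + rng.normal(size=cluster.size)

X = np.column_stack([np.ones_like(x), x])
beta = np.linalg.solve(X.T @ X, X.T @ y)
resid = y - X @ beta

# Naive (independence-assuming) covariance vs. cluster-robust sandwich.
bread = np.linalg.inv(X.T @ X)
naive_cov = bread * resid.var(ddof=X.shape[1])
meat = np.zeros((2, 2))
for g in range(n_clusters):
    idx = cluster == g
    score = X[idx].T @ resid[idx]                 # summed score per cluster
    meat += np.outer(score, score)
sandwich_cov = bread @ meat @ bread

print("naive SE(slope):   ", np.sqrt(naive_cov[1, 1]).round(4))
print("sandwich SE(slope):", np.sqrt(sandwich_cov[1, 1]).round(4))
```

The gap between the two printed values is the SE bias the article attributes to ignoring non-independence.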

Relevance: 90.00%

Abstract:

We examine the problem of optimal bearing-only localization of a single target using synchronous measurements from multiple sensors. We approach the problem by forming geometric relationships between the measured parameters and their corresponding errors in the relevant emitter localization scenarios. Specifically, we derive a geometric constraint equation on the measurement errors in such a scenario. Using this constraint, we formulate the localization task as a constrained optimization problem that can be performed on the measurements in order to provide the optimal values such that the solution is consistent with the underlying geometry. We illustrate and confirm the advantages of our approach through simulation, offering detailed comparison with traditional maximum likelihood (TML) estimation.
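The sketch below implements the traditional maximum likelihood (TML) baseline that the paper compares against: with Gaussian bearing errors, ML reduces to nonlinear least squares on wrapped angular residuals. The sensor layout and noise level are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(8)

sensors = np.array([[0., 0.], [100., 0.], [50., 80.]])
target = np.array([60.0, 40.0])

def bearings(pos):
    d = pos - sensors
    return np.arctan2(d[:, 1], d[:, 0])           # bearing from each sensor

meas = bearings(target) + rng.normal(0, np.deg2rad(1.0), len(sensors))

# Gaussian bearing errors: ML is nonlinear least squares on the residuals,
# wrapped so that angle differences near +/-pi behave correctly.
def residuals(pos):
    r = bearings(pos) - meas
    return np.arctan2(np.sin(r), np.cos(r))

fit = least_squares(residuals, x0=sensors.mean(axis=0))
print("estimated target:", np.round(fit.x, 2))
```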

Relevance: 90.00%

Abstract:

The effective management of our marine ecosystems requires the capability to identify, characterise and predict the distribution of benthic biological communities within the overall seascape architecture. The rapid expansion of seabed mapping studies has seen an increase in the application of automated classification techniques to map benthic habitats efficiently, and in the need for techniques to assess the confidence of model outputs. We use towed video observations and 11 seafloor complexity variables derived from multibeam echosounder (MBES) bathymetry and backscatter to predict the distribution of 8 dominant benthic biological communities in a 54 km² site off the central coast of Victoria, Australia. The same training and evaluation datasets were used to compare the accuracies of a Maximum Likelihood Classifier (MLC) and two new-generation decision tree methods, QUEST (Quick Unbiased Efficient Statistical Tree) and CRUISE (Classification Rule with Unbiased Interaction Selection and Estimation), for predicting dominant biological communities. The QUEST classifier produced significantly better results than the CRUISE and MLC model runs, with an overall accuracy of 80% (Kappa 0.75). We found that how accuracy varies with training set size differs between algorithms: the QUEST results generally increased in a linear fashion, CRUISE performed well with smaller training sets, and MLC performed least favourably overall, generating anomalous results as the training size changed. We also demonstrate how predicted habitat maps can provide insights into habitat spatial complexity on the continental shelf. Significant variation in patch size between habitat types, and significant correlations between patch size and depth, were also observed.
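Of the three classifiers compared, the MLC is simple enough to sketch: fit one Gaussian per class and assign each sample to the class with the highest likelihood. The toy features below merely stand in for MBES-derived predictors and are pure assumptions.

```python
import numpy as np
from scipy.stats import multivariate_normal

def mlc_fit(X, y):
    """Gaussian maximum likelihood classifier: one Gaussian per class."""
    return {c: (X[y == c].mean(axis=0), np.cov(X[y == c], rowvar=False))
            for c in np.unique(y)}

def mlc_predict(model, X):
    """Assign each sample to the class with the highest Gaussian likelihood."""
    classes = sorted(model)
    ll = np.column_stack([multivariate_normal.logpdf(X, *model[c])
                          for c in classes])
    return np.asarray(classes)[ll.argmax(axis=1)]

# Toy stand-in for seafloor predictors (e.g., depth, rugosity, backscatter).
rng = np.random.default_rng(9)
X0 = rng.normal([0, 0, 0], 1.0, size=(100, 3))
X1 = rng.normal([3, 1, -2], 1.0, size=(100, 3))
X, y = np.vstack([X0, X1]), np.repeat([0, 1], 100)

model = mlc_fit(X, y)
print("training accuracy:", (mlc_predict(model, X) == y).mean())
```

QUEST and CRUISE are specialized decision-tree packages and are not reproduced here.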

Relevance: 90.00%

Abstract:

In this paper, under a proportional model, two families of robust estimates for the proportionality constants, the common principal axes and their size are discussed. The first approach is obtained by plugging robust scatter matrices into the maximum likelihood equations for normal data. A projection-pursuit approach and a modified projection-pursuit approach, adapted to the proportional setting, are also considered. For all families of estimates, partial influence functions are obtained and asymptotic variances are derived from them. The performance of the estimates is compared through a Monte Carlo study. © 2006 Springer-Verlag.

Relevance: 80.00%

Abstract:

A retrospective assessment of exposure to benzene was carried out for a nested case-control study of lympho-haematopoietic cancers, including leukaemia, in the Australian petroleum industry. Each job or task in the industry was assigned a Base Estimate (BE) of exposure derived from task-based personal exposure assessments carried out by the company occupational hygienists. The BEs corresponded to the estimated arithmetic mean exposure to benzene for each job or task and were used in a deterministic algorithm to estimate the exposure of subjects in the study.

Nearly all of the data sets underlying the BEs were found to contain some values below the limit of detection (LOD) of the sampling and analytical methods, and some were very heavily censored: up to 95% of the data were below the LOD in some data sets. It was therefore necessary to use a method of calculating the arithmetic mean exposures that took the censored data into account. Three different methods were compared in order to select the most appropriate for the data in this study. A common method is to replace the missing (censored) values with half the detection limit; this has been recommended for data sets where much of the data are below the limit of detection or where the data are highly skewed, with a geometric standard deviation of 3 or more. Another method, replacing the censored data with the limit of detection divided by the square root of 2, has been recommended when relatively few data are below the detection limit or the data are not highly skewed. The third method examined was Cohen's method, which involves mathematical extrapolation of the left-hand tail of the distribution, based on the distribution of the uncensored data, and calculation of the maximum likelihood estimate of the arithmetic mean.

When these three methods were applied to the data in this study, the first two simple methods gave similar results in most cases. Cohen's method, on the other hand, gave results that were generally, but not always, higher than the simpler methods, and in some cases gave extremely high and even implausible estimates of the mean. It appears that if the data deviate substantially from a simple log-normal distribution, particularly if high outliers are present, Cohen's method produces erratic and unreliable estimates. After examining these results, and both the distributions and proportions of censored data, it was decided that the half limit of detection method was most suitable in this particular study.
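The three approaches are easy to contrast numerically. In the sketch below, the first two are simple substitution rules, while the Cohen-style estimate is computed as a maximum likelihood fit of a left-censored lognormal followed by the arithmetic mean exp(mu + sigma^2/2); the simulated exposure distribution and LOD are assumptions made for the demonstration.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(10)

# Illustrative exposure data: lognormal measurements censored at an LOD.
true = rng.lognormal(mean=-1.0, sigma=1.2, size=200)
lod = 0.3
observed = np.where(true >= lod, true, np.nan)     # values below LOD censored
censored = np.isnan(observed)

# Methods 1 and 2: simple substitution for the censored values.
half_lod = np.where(censored, lod / 2, observed).mean()
lod_sqrt2 = np.where(censored, lod / np.sqrt(2), observed).mean()

# Method 3 (Cohen-style): lognormal ML accounting for left-censoring,
# then the arithmetic mean from the fitted parameters.
def neg_log_lik(theta):
    mu, log_sig = theta
    sig = np.exp(log_sig)
    detects = np.log(observed[~censored])
    ll = norm.logpdf(detects, mu, sig).sum()
    ll += censored.sum() * norm.logcdf((np.log(lod) - mu) / sig)
    return -ll

mu, log_sig = minimize(neg_log_lik, x0=[0.0, 0.0], method="Nelder-Mead").x
mle_mean = np.exp(mu + np.exp(log_sig) ** 2 / 2)

print(f"LOD/2: {half_lod:.3f}  LOD/sqrt(2): {lod_sqrt2:.3f}  MLE: {mle_mean:.3f}")
```

Because the MLE extrapolates the unobserved left tail through exp(mu + sigma^2/2), a poor lognormal fit or high outliers inflate it, which matches the erratic behaviour of Cohen's method reported above.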