935 results for HTS - Hough Transform Statistics
Abstract:
New algorithms for the continuous wavelet transform are developed that are easy to apply, each consisting of a single-pass finite impulse response (FIR) filter, and that run several times faster than the fastest existing algorithms. The single-pass filter, named WT-FIR-1, is made possible by applying constraint equations to the least-squares estimation of the filter coefficients, which removes the need for separate low-pass and high-pass filters. Non-dyadic two-scale relations are developed, and it is shown that filters based on them can work more efficiently than dyadic ones. Example applications to the Mexican hat wavelet are presented.
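A minimal sketch of the single-pass idea, assuming NumPy/SciPy: each CWT scale is computed as one FIR convolution with a sampled Mexican hat kernel. The kernel here is simply the discretized wavelet, not the paper's WT-FIR-1 coefficients, which are obtained by constrained least-squares estimation.

```python
import numpy as np
from scipy.signal import fftconvolve

def mexican_hat(t, scale):
    """Sampled Mexican hat (Ricker) wavelet at a given scale."""
    x = t / scale
    return (2.0 / (np.sqrt(3.0 * scale) * np.pi ** 0.25)) * (1 - x ** 2) * np.exp(-x ** 2 / 2)

def cwt_fir(signal, scales, dt=1.0):
    """One FIR convolution per scale: row i holds the CWT at scales[i]."""
    out = np.empty((len(scales), len(signal)))
    for i, s in enumerate(scales):
        half = int(np.ceil(5 * s / dt))        # truncate the kernel at ~5 scale widths
        t = np.arange(-half, half + 1) * dt
        kernel = mexican_hat(t, s) * dt        # dt acts as the integration weight
        out[i] = fftconvolve(signal, kernel, mode="same")
    return out

# usage: coefficients of a Gaussian pulse at scales 1..32
sig = np.exp(-0.5 * ((np.arange(512) - 256) / 20.0) ** 2)
coeffs = cwt_fir(sig, scales=np.arange(1, 33))
```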
Abstract:
Near infrared (NIR) spectroscopy was investigated as a potential rapid method of estimating fish age from whole otoliths of Saddletail snapper (Lutjanus malabaricus). Whole otoliths from 209 Saddletail snapper were extracted and their NIR spectral characteristics were acquired over a spectral range of 800–2780 nm. Partial least-squares (PLS) models were developed from the diffuse reflectance spectra and reference-validated age estimates (based on traditional sectioned otolith increments) to predict age for independent otolith samples. Predictive models developed for a specific season and geographical location performed poorly against a different season and geographical location. However, the overall PLS regression statistics for predicting a combined population incorporating both geographic location and season variables were: coefficient of determination (R²) = 0.94 and root mean square error of prediction (RMSEP) = 1.54 for age estimation, indicating that Saddletail age could be predicted to within 1.5 increment counts. This level of accuracy suggests the method warrants further development for Saddletail snapper and may have potential for other fish species. A rapid method of fish age estimation could greatly reduce both the time and material costs of assessing and managing commercial fisheries.
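A hedged sketch of the modeling pipeline, assuming scikit-learn and synthetic stand-in data (the real inputs would be the diffuse reflectance spectra and the sectioned-otolith reference ages):

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score, mean_squared_error

# synthetic stand-in for diffuse reflectance spectra (samples x wavelength channels)
rng = np.random.default_rng(0)
X = rng.normal(size=(209, 500))            # 209 otoliths, 500 spectral channels
age = rng.integers(1, 20, size=209)        # reference ages from sectioned otoliths
y = age + rng.normal(scale=0.5, size=209)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
pls = PLSRegression(n_components=10).fit(X_tr, y_tr)
pred = pls.predict(X_te).ravel()

print("R2    =", r2_score(y_te, pred))
print("RMSEP =", np.sqrt(mean_squared_error(y_te, pred)))
```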
Abstract:
A parametric regression model for right-censored data with a log-linear median regression function and a transformation in both the response and regression parts, named the parametric Transform-Both-Sides (TBS) model, is presented. The TBS model has a parameter that handles data asymmetry while allowing various distributions for the error, as long as they are unimodal symmetric distributions centered at zero. The discussion focuses on the estimation procedure with five important error distributions (normal, double-exponential, Student's t, Cauchy, and logistic) and presents properties, associated functions (i.e., survival and hazard functions), and estimation methods based on maximum likelihood and on the Bayesian paradigm. These procedures are implemented in TBSSurvival, an open-source, fully documented R package. The use of the package is illustrated, and the performance of the model is analyzed using both simulated and real data sets.
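A rough sketch of a transform-both-sides likelihood with normal errors, assuming SciPy; the signed power transform g below is one plausible parameterization and may well differ from the exact form used in TBSSurvival:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

# Signed power transform; a common choice, not necessarily the package's exact form.
def g(u, lam):
    return np.sign(u) * np.abs(u) ** lam

def dg(u, lam):
    return lam * np.abs(u) ** (lam - 1)

def neg_loglik(theta, logt, X, event):
    """Right-censored TBS likelihood with normal errors on the transformed scale."""
    lam, sigma = np.exp(theta[0]), np.exp(theta[1])   # keep lambda, sigma > 0
    beta = theta[2:]
    z = (g(logt, lam) - g(X @ beta, lam)) / sigma
    # events contribute the density (with the transform's Jacobian); censored
    # observations contribute the survival function
    ll = np.where(event,
                  norm.logpdf(z) + np.log(dg(logt, lam)) - np.log(sigma),
                  norm.logsf(z))
    return -ll.sum()

# usage with simulated data (toy: the indicator marks which terms are censored)
rng = np.random.default_rng(1)
n = 300
X = np.column_stack([np.ones(n), rng.normal(size=n)])
logt = X @ np.array([1.0, 0.5]) + 0.3 * rng.normal(size=n)
event = rng.random(n) < 0.7
fit = minimize(neg_loglik, x0=np.zeros(2 + X.shape[1]),
               args=(logt, X, event), method="Nelder-Mead")
```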
Abstract:
The Aitchison vector space structure for the simplex is generalized to a Hilbert space structure A²(P) for distributions and likelihoods on arbitrary spaces. Central notions of statistics, such as information or likelihood, can be identified in the algebraic structure of A²(P) with their corresponding notions in compositional data analysis, such as the Aitchison distance or the centered log-ratio transform. In this way, rather elaborate aspects of mathematical statistics can be understood easily in the light of a simple vector space structure and of compositional data analysis. For example, combinations of statistical information, such as Bayesian updating, the combination of likelihoods, and robust M-estimation functions, are simple additions/perturbations in A²(P_prior). Weighting observations corresponds to a weighted addition of the corresponding evidence. Likelihood-based statistics for general exponential families turn out to have a particularly easy interpretation in terms of A²(P). Regular exponential families form finite-dimensional linear subspaces of A²(P), and they correspond to finite-dimensional subspaces formed by their posteriors in the dual information space A²(P_prior). The Aitchison norm can be identified with mean Fisher information. The closing constant itself is identified with a generalization of the cumulant function and shown to be the Kullback-Leibler directed information. Fisher information is the local geometry of the manifold induced by the A²(P) derivative of the Kullback-Leibler information, and the space A²(P) can therefore be seen as the tangential geometry of statistical inference at the distribution P. The discussion of A²(P)-valued random variables, such as estimation functions or likelihoods, gives a further interpretation of Fisher information as the expected squared norm of evidence and a scale-free understanding of unbiased reasoning.
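In the finite discrete case the key objects are easy to compute. A small sketch, assuming NumPy, of the centered log-ratio transform, the Aitchison distance, and Bayesian updating as a perturbation (the simplex's "addition"):

```python
import numpy as np

def clr(x):
    """Centered log-ratio transform of a composition (strictly positive parts)."""
    logx = np.log(x)
    return logx - logx.mean(axis=-1, keepdims=True)

def aitchison_distance(x, y):
    """Aitchison distance = Euclidean distance between clr coordinates."""
    return np.linalg.norm(clr(x) - clr(y))

def perturb(x, y):
    """Perturbation: componentwise product, renormalized to the simplex."""
    p = x * y
    return p / p.sum()

# Bayesian updating as perturbation: posterior ∝ prior * likelihood
prior = np.array([0.5, 0.3, 0.2])
likelihood = np.array([0.2, 0.5, 0.3])
posterior = perturb(prior, likelihood)
print(posterior, aitchison_distance(prior, posterior))
```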
Abstract:
A 24-member ensemble of 1-h high-resolution forecasts over the southern United Kingdom is used to study short-range forecast error statistics. The initial conditions are derived from perturbations produced by an ensemble transform Kalman filter. Forecasts from this system are assumed to lie within the bounds of forecast error of an operational forecast system. Although noisy, the system is capable of producing physically reasonable statistics, which are analysed and compared with statistics implied by a variational assimilation system. The variances of temperature errors, for instance, show structures that reflect convective activity. Some variables, notably potential temperature and specific humidity perturbations, have autocorrelation functions that deviate from 3-D isotropy at the convective scale (horizontal scales less than 10 km). Other variables, notably the velocity potential for horizontal divergence perturbations, maintain 3-D isotropy at all scales. Geostrophic and hydrostatic balances are studied by examining correlations between terms in the divergence and vertical momentum equations, respectively. Both balances are found to decay as the horizontal scale decreases. It is estimated that geostrophic balance becomes less important at scales smaller than 75 km, and hydrostatic balance at scales smaller than 35 km, although more work is required to validate these findings. The implications of these results for high-resolution data assimilation are discussed.
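A toy sketch, assuming NumPy, of how such error statistics are estimated from ensemble perturbations: variances and a horizontal autocorrelation function on a 1-D row of grid points (the real system works on full 3-D fields):

```python
import numpy as np

def ensemble_perturbations(ens):
    """ens: (members, points) field values; deviations from the ensemble mean."""
    return ens - ens.mean(axis=0, keepdims=True)

def error_variance(ens):
    """Pointwise forecast error variance implied by the ensemble spread."""
    return ensemble_perturbations(ens).var(axis=0, ddof=1)

def horizontal_autocorrelation(ens, max_lag):
    """Correlation of perturbations between points `lag` grid cells apart."""
    p = ensemble_perturbations(ens)
    corr = []
    for lag in range(1, max_lag + 1):
        a, b = p[:, :-lag].ravel(), p[:, lag:].ravel()
        corr.append(np.corrcoef(a, b)[0, 1])
    return np.array(corr)

# usage: 24-member toy ensemble on a row of 200 grid points
rng = np.random.default_rng(2)
base = rng.normal(size=200)
ens = base + 0.1 * rng.normal(size=(24, 200))
print(error_variance(ens).mean(), horizontal_autocorrelation(ens, 5))
```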
Abstract:
Sensory thresholds are often collected through ascending forced-choice methods. Group thresholds are important for comparing stimuli or populations, yet the method has two problems: an individual may guess the correct answer by chance at any concentration step, and may detect correctly at low concentrations but become adapted or fatigued at higher ones. The survival analysis method deals with both issues. Individual sequences of incorrect and correct answers are adjusted, taking into account the group performance at each concentration; this reduces the probability of chance successes where there are consecutive correct answers. The adjusted sequences are then submitted to survival analysis to determine group thresholds. The technique was applied to an aroma threshold study and a taste threshold study, and it produced group thresholds similar to those of ASTM or logarithmic regression procedures. Significant differences in taste thresholds between younger and older adults were detected. The approach provides a more robust technique than previous estimation methods.
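A simplified sketch, assuming NumPy and the lifelines package; the guess adjustment below (requiring sustained correct answers from the detection step onward) is a crude stand-in for the paper's group-performance adjustment:

```python
import numpy as np
from lifelines import KaplanMeierFitter  # third-party survival analysis package

# Each row: one panelist's incorrect (0) / correct (1) answers over ascending concentrations.
responses = np.array([
    [0, 0, 1, 1, 1],
    [0, 1, 1, 1, 1],
    [1, 0, 0, 1, 1],   # early lucky guess, then sustained detection from step 4
    [0, 0, 0, 0, 1],
    [0, 0, 0, 0, 0],   # never detects: right-censored at the last step
])

def detection_step(row):
    """First step from which all remaining answers are correct; (step, observed)."""
    for i in range(len(row)):
        if row[i:].all():
            return i + 1, True
    return len(row), False          # censored observation

steps, observed = zip(*(detection_step(r) for r in responses))
kmf = KaplanMeierFitter().fit(list(steps), event_observed=list(observed))
print(kmf.median_survival_time_)    # group threshold as the median detection step
```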
Abstract:
Vortex-induced motion (VIM) is a highly nonlinear dynamic phenomenon. Usual spectral analysis methods, based on the Fourier transform, rely on the hypotheses of linear and stationary dynamics. The Hilbert-Huang transform (HHT) is a method for treating nonstationary signals that emerge from nonlinear systems. The development of an analysis methodology to study the VIM of a monocolumn production, storage, and offloading system using the HHT is presented; its purpose is to improve the statistical analysis of VIM. The results proved comparable to those obtained from a traditional analysis (mean of the 10% highest peaks), particularly for the motions in the transverse direction, although the results for the motions in the in-line direction differed from the traditional analysis by around 25%. The results from the HHT analysis are more reliable than the traditional ones, owing to the larger number of points used to calculate the statistical characteristics. These results may be used to design risers and mooring lines, as well as to obtain VIM parameters to calibrate numerical predictions. [DOI: 10.1115/1.4003493]
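A minimal sketch of Hilbert spectral analysis, assuming SciPy and the third-party PyEMD package for the empirical mode decomposition; the toy record below stands in for a measured VIM motion time series:

```python
import numpy as np
from scipy.signal import hilbert
from PyEMD import EMD   # third-party empirical mode decomposition package

def hht_amplitude(signal, dt):
    """Instantaneous amplitude and frequency of each IMF via the Hilbert transform."""
    imfs = EMD().emd(signal)
    analytic = hilbert(imfs, axis=-1)
    amp = np.abs(analytic)
    freq = np.gradient(np.unwrap(np.angle(analytic), axis=-1), dt, axis=-1) / (2 * np.pi)
    return imfs, amp, freq

# toy VIM-like record: slowly modulated oscillation plus noise
dt = 0.1
t = np.arange(0, 600, dt)
motion = ((1 + 0.3 * np.sin(0.01 * t)) * np.sin(0.8 * t)
          + 0.1 * np.random.default_rng(3).normal(size=t.size))
imfs, amp, _ = hht_amplitude(motion, dt)

# statistics from ALL instantaneous amplitudes of the dominant IMF, rather than
# from the 10% highest peaks used in the traditional analysis
dom = amp[np.argmax(amp.std(axis=1))]
print("mean amplitude:", dom.mean(), " std:", dom.std())
```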
Abstract:
Intuitively, any `bag of words' approach in IR should benefit from taking term dependencies into account. Unfortunately, for years the results of exploiting such dependencies have been mixed or inconclusive. To improve the situation, this paper shows how the natural language properties of the target documents can be used to transform and enrich the term dependencies into more useful statistics. This is done in three steps. First, the term co-occurrence statistics of queries and documents are each represented by a Markov chain; the paper proves that such a chain is ergodic, and therefore its asymptotic behavior is unique, stationary, and independent of the initial state. Next, the stationary distribution is taken to model queries and documents, rather than their initial distributions. Finally, ranking is achieved following the customary language modeling paradigm. The main contribution of this paper is to argue why the asymptotic behavior of the document model is a better representation than just the document's initial distribution. A secondary contribution is to investigate the practical application of this representation as queries become increasingly verbose. In the experiments (based on Lemur's search engine substrate), the default query model was replaced by the stable distribution of the query. Modeling the query this way already resulted in significant improvements over a standard language model baseline; the results were on a par with or better than more sophisticated algorithms that use fine-tuned parameters or extensive training. Moreover, the more verbose the query, the more effective the approach seems to become.
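A small sketch, assuming NumPy, of the core step: build a smoothed (hence ergodic) term co-occurrence chain and replace raw counts with its unique stationary distribution:

```python
import numpy as np

def cooccurrence_chain(docs, vocab):
    """Row-stochastic transition matrix from adjacent-term co-occurrence counts."""
    idx = {w: i for i, w in enumerate(vocab)}
    C = np.ones((len(vocab), len(vocab)))    # add-one smoothing keeps the chain ergodic
    for doc in docs:
        for a, b in zip(doc, doc[1:]):
            C[idx[a], idx[b]] += 1
            C[idx[b], idx[a]] += 1
    return C / C.sum(axis=1, keepdims=True)

def stationary(P, tol=1e-12):
    """Power iteration: an ergodic chain's unique stationary distribution."""
    pi = np.full(P.shape[0], 1.0 / P.shape[0])
    while True:
        nxt = pi @ P
        if np.abs(nxt - pi).sum() < tol:
            return nxt
        pi = nxt

vocab = ["hough", "transform", "statistics", "image"]
docs = [["hough", "transform", "statistics"], ["image", "transform", "image"]]
pi = stationary(cooccurrence_chain(docs, vocab))
print(dict(zip(vocab, pi.round(3))))   # stationary term weights replace the initial distribution
```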