12 results for kernel estimators
at Cochin University of Science
Abstract:
An improved color video super-resolution technique using kernel regression and fuzzy enhancement is presented in this paper. A high-resolution frame is computed from a set of low-resolution video frames by kernel regression using an adaptive Gaussian kernel. A fuzzy smoothing filter is proposed to enhance the regression output. The proposed technique is a low-cost software solution for resolution enhancement of color video in multimedia applications. Its performance is evaluated on several color videos and is found to be better than that of other techniques in producing high-quality, high-resolution color videos.
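As a rough illustration of the regression step only (not the paper's exact algorithm), the sketch below performs classical Gaussian kernel regression on low-resolution samples that are assumed to be already motion-registered onto the high-resolution grid, with the bandwidth widened where samples are sparse. Function names and parameters are illustrative assumptions.

```python
# Illustrative sketch: Gaussian kernel regression of scattered, pre-registered
# LR samples onto an HR grid, with a crudely adaptive bandwidth.
import numpy as np

def kernel_regression_sr(sample_xy, sample_val, hr_shape, base_h=1.0):
    """Estimate one HR color channel from scattered LR samples (N >= 4)."""
    sample_xy = np.asarray(sample_xy, float)    # (N, 2) positions in HR pixel units
    sample_val = np.asarray(sample_val, float)  # (N,) sample intensities
    hr = np.zeros(hr_shape)
    for r in range(hr_shape[0]):
        for c in range(hr_shape[1]):
            d2 = np.sum((sample_xy - [r, c]) ** 2, axis=1)   # squared distances to samples
            # adaptive bandwidth: widen the kernel where samples are sparse
            h = base_h * max(np.sqrt(np.sort(d2)[3]), 1.0)   # distance to 4th-nearest sample
            w = np.exp(-d2 / (2.0 * h ** 2))                 # Gaussian kernel weights
            hr[r, c] = np.sum(w * sample_val) / (np.sum(w) + 1e-12)
    return hr
```

The fuzzy smoothing stage described in the abstract would then be applied to the output of this regression; it is not reproduced here.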
Abstract:
The changes occurring in cashew kernels during storage at two humidity levels (80% and 20%) were studied with respect to organoleptic characteristics, protein content, carbohydrate content, oil content, and iodine and peroxide values. The study concludes that the organoleptic characteristics of cashew kernels deteriorate with increasing humidity, that the decrease in protein and carbohydrate content of stored kernels depends on humidity, and that higher humidity accelerated oxidative rancidification.
Abstract:
In this article it is proved that the stationary Markov sequences generated by minification models are ergodic and uniformly mixing. These results are used to establish the optimal properties of estimators for the parameters in the model. The problem of estimating the parameters in the exponential minification model is discussed in detail.
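For concreteness, here is a minimal sketch of the exponential minification model in its commonly used form X_n = K·min(X_{n-1}, e_n) with K > 1, which has an Exp(lam) stationary marginal when e_n ~ Exp(lam·(K-1)), together with simple moment-type estimators. This is an illustration of the model, not the article's estimation theory.

```python
# Simulate an exponential minification process and recover (lam, K).
import numpy as np

def simulate_exp_minification(n, K, lam, seed=None):
    rng = np.random.default_rng(seed)
    x = np.empty(n)
    x[0] = rng.exponential(1.0 / lam)                     # start in the stationary law
    innov = rng.exponential(1.0 / (lam * (K - 1)), size=n)
    for t in range(1, n):
        x[t] = K * min(x[t - 1], innov[t])
    return x

x = simulate_exp_minification(20_000, K=1.5, lam=2.0, seed=0)
lam_hat = 1.0 / x.mean()              # Exp(lam) marginal => E[X] = 1/lam
K_hat = np.max(x[1:] / x[:-1])        # the ratio equals K whenever e_n >= X_{n-1}
print(lam_hat, K_hat)
```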
Abstract:
This paper presents gamma stochastic volatility models and investigates their distributional and time series properties. The parameter estimators obtained by the method of moments are shown analytically to be consistent and asymptotically normal. Simulation results indicate that the estimators behave well. The in-sample analysis shows that return models with gamma autoregressive stochastic volatility processes capture the leptokurtic nature of return distributions and the slowly decaying autocorrelation functions of squared stock index returns for the USA and the UK. In comparison with GARCH and EGARCH models, the gamma autoregressive model picks up the persistence in volatility for the US and UK index returns but not for the Canadian and Japanese index returns. The out-of-sample analysis indicates that the gamma autoregressive model has superior volatility forecasting performance compared to GARCH and EGARCH models.
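A minimal sketch of where moment estimators come from in this setting, under the simplifying assumption that r_t = sqrt(h_t)·z_t with z_t standard normal and a Gamma(alpha, rate beta) stationary marginal for the volatility h_t (the paper's exact specification may differ): the second and fourth moments of the returns identify (alpha, beta), and the implied excess kurtosis 3/alpha reflects the leptokurtosis mentioned above.

```python
# Method-of-moments sketch for a gamma-volatility return model.
import numpy as np

def gamma_sv_moment_estimates(r):
    m2 = np.mean(r ** 2)          # E[r^2] = alpha / beta
    m4 = np.mean(r ** 4)          # E[r^4] = 3 * alpha * (alpha + 1) / beta**2
    alpha = 3.0 * m2 ** 2 / (m4 - 3.0 * m2 ** 2)
    beta = alpha / m2
    return alpha, beta

# quick check on simulated returns with gamma volatility (alpha=2, beta=4);
# the estimator uses only marginal moments, so dependence in h_t is irrelevant here
rng = np.random.default_rng(0)
h = rng.gamma(shape=2.0, scale=1.0 / 4.0, size=200_000)
r = np.sqrt(h) * rng.standard_normal(h.size)
print(gamma_sv_moment_estimates(r))
```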
Abstract:
This paper proposes different estimators for the parameters of SemiPareto and Pareto autoregressive minification processes. The asymptotic properties of the estimators are established by showing that the SemiPareto process is α-mixing. Asymptotic variances of different moment and maximum likelihood estimators are compared.
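Purely as an illustration of the moment-versus-maximum-likelihood comparison, and ignoring the dependence structure of the minification process, the two estimators of a Pareto shape parameter can be contrasted on iid Pareto data:

```python
# Moment vs. ML estimators of the shape of a Pareto(alpha, x_m=1) sample.
import numpy as np

def pareto_shape_estimators(x, xm=1.0):
    alpha_mom = x.mean() / (x.mean() - xm)      # from E[X] = alpha*xm/(alpha-1), alpha > 1
    alpha_mle = x.size / np.sum(np.log(x / xm)) # maximum likelihood estimator
    return alpha_mom, alpha_mle

rng = np.random.default_rng(1)
x = 1.0 + rng.pareto(a=3.0, size=50_000)        # numpy draws the Lomax form; shift to Pareto I
print(pareto_shape_estimators(x))
```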
Abstract:
Fourier transform methods are employed heavily in digital signal processing. The Discrete Fourier Transform (DFT) is among the most commonly used digital signal transforms. The exponential kernel of the DFT has the properties of symmetry and periodicity, and Fast Fourier Transform (FFT) methods for fast DFT computation exploit these kernel properties in different ways. In this thesis, an approach of grouping data on the basis of the corresponding phase of the exponential kernel of the DFT is exploited to introduce a new digital signal transform, named the M-dimensional Real Transform (MRT), for 1-D and 2-D signals. The specific features of the new transform are developed using number-theoretic principles. A few properties of the transform are explored, and an inverse transform is presented. A fundamental assumption is that the size of the input signal is even. The transform computation involves only real additions, and the MRT is an integer-to-integer transform. There are two kinds of redundancy in the MRT: complete redundancy and derived redundancy. This redundancy is analyzed and removed to arrive at a more compact version called the Unique MRT (UMRT). The 1-D UMRT is a non-expansive transform for all signal sizes, while the 2-D UMRT is non-expansive for signal sizes that are powers of 2. The 2-D UMRT is applied in image processing applications such as image compression and orientation analysis. The MRT and UMRT, being general transforms, will find potential applications in various fields of signal and image processing.
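The MRT/UMRT construction itself is specific to the thesis; as a plain illustration of how the symmetry and periodicity of the exponential kernel are exploited for fast DFT computation, here is a standard radix-2 decimation-in-time FFT (power-of-2 length assumed):

```python
# Radix-2 DIT FFT: uses W_N^(k+N) = W_N^k (periodicity) and
# W_N^(k+N/2) = -W_N^k (symmetry) to halve the work at each stage.
import numpy as np

def fft_radix2(x):
    x = np.asarray(x, dtype=complex)
    n = x.size
    if n == 1:
        return x
    even = fft_radix2(x[0::2])                    # DFT of even-indexed samples
    odd = fft_radix2(x[1::2])                     # DFT of odd-indexed samples
    twiddle = np.exp(-2j * np.pi * np.arange(n // 2) / n)
    # the symmetry of the kernel gives the butterfly structure
    return np.concatenate([even + twiddle * odd, even - twiddle * odd])

x = np.random.default_rng(2).standard_normal(16)
print(np.allclose(fft_radix2(x), np.fft.fft(x)))  # True
```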
Abstract:
Multivariate lifetime data arise in various forms, including recurrent event data, when individuals are followed to observe the sequence of occurrences of a certain type of event, and correlated lifetimes, when an individual is followed for the occurrence of two or more types of events or when distinct individuals have dependent event times. In most studies there are covariates such as treatments, group indicators, individual characteristics, or environmental conditions whose relationship to lifetime is of interest, and this leads to a consideration of regression models. The well-known Cox proportional hazards model and its variations, using the marginal hazard functions employed for the analysis of multivariate survival data in the literature, are not sufficient to explain the complete dependence structure of a pair of lifetimes on the covariate vector. Motivated by this, in Chapter 2 we introduce a bivariate proportional hazards model using the vector hazard function of Johnson and Kotz (1975), in which the covariates under study have different effects on the two components of the vector hazard function. The proposed model is useful in real-life situations for studying the dependence structure of a pair of lifetimes on the covariate vector. The well-known partial likelihood approach is used for the estimation of the parameter vectors. We then introduce a bivariate proportional hazards model for gap times of recurrent events in Chapter 3; the model incorporates both marginal and joint dependence of the distribution of gap times on the covariate vector. In many fields of application, the mean residual life function is considered a superior concept to the hazard function. Motivated by this, in Chapter 4 we consider a new semi-parametric model, the bivariate proportional mean residual lifetime model, to assess the relationship between mean residual life and covariates for gap times of recurrent events; the counting process approach is used for the inference procedures. In many survival studies the distribution of lifetime may depend on the distribution of censoring time, so in Chapter 5 we introduce a proportional hazards model for duration times and develop inference procedures under dependent (informative) censoring. In Chapter 6 we introduce a bivariate proportional hazards model for competing risks data under right censoring. The asymptotic properties of the estimators of the parameters of the models developed in the previous chapters are studied, and the proposed models are applied to various real-life situations.
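As background for the partial likelihood machinery used throughout the thesis, here is a minimal sketch of the ordinary (univariate) Cox partial likelihood fitted by numerical optimization; the bivariate vector-hazard models themselves are not reproduced, and the toy data below are assumptions.

```python
# Negative log partial likelihood for the Cox model (ties ignored), optimized
# numerically; the true log hazard ratio in the toy data is log(2).
import numpy as np
from scipy.optimize import minimize

def neg_log_partial_likelihood(beta, time, event, X):
    beta = np.atleast_1d(beta)
    eta = X @ beta
    nll = 0.0
    for i in np.where(event == 1)[0]:
        at_risk = time >= time[i]                          # risk set at the i-th event time
        nll -= eta[i] - np.log(np.sum(np.exp(eta[at_risk])))
    return nll

rng = np.random.default_rng(3)
X = rng.integers(0, 2, size=(300, 1)).astype(float)        # one binary covariate
t = rng.exponential(1.0 / np.exp(np.log(2.0) * X[:, 0]))   # hazard doubled when X = 1
c = rng.exponential(2.0, size=300)                          # independent censoring times
time, event = np.minimum(t, c), (t <= c).astype(int)

fit = minimize(neg_log_partial_likelihood, x0=np.zeros(1), args=(time, event, X))
print(fit.x)                                                # roughly log(2) ~ 0.69
```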
Abstract:
The average availability of a repairable system is the expected proportion of time that the system is operating in the interval [0, t]. The present article discusses the nonparametric estimation of the average availability when (i) data on n complete cycles of system operation are available, (ii) the data are subject to right censorship, and (iii) the process is observed up to a specified time T. In each case, a nonparametric confidence interval for the average availability is also constructed. Simulations are conducted to assess the performance of the estimators.
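For case (i), the usual renewal-reward plug-in (which may differ from the article's estimator) is easy to sketch together with a percentile-bootstrap confidence interval; the cycle data below are simulated placeholders.

```python
# Cycle-based plug-in estimate of long-run average availability with a
# nonparametric bootstrap confidence interval.
import numpy as np

def avg_availability(up, down):
    up, down = np.asarray(up, float), np.asarray(down, float)
    return up.sum() / (up.sum() + down.sum())

def bootstrap_ci(up, down, level=0.95, n_boot=2000, seed=0):
    up, down = np.asarray(up, float), np.asarray(down, float)
    rng = np.random.default_rng(seed)
    stats = []
    for _ in range(n_boot):
        idx = rng.integers(0, len(up), size=len(up))   # resample whole cycles
        stats.append(avg_availability(up[idx], down[idx]))
    return np.percentile(stats, [(1 - level) / 2 * 100, (1 + level) / 2 * 100])

rng = np.random.default_rng(4)
up = rng.exponential(10.0, size=50)    # operating times of 50 complete cycles
down = rng.exponential(1.0, size=50)   # repair times
print(avg_availability(up, down), bootstrap_ci(up, down))
```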
Abstract:
This thesis is entitled "Modelling and Analysis of Recurrent Event Data with Multiple Causes". Survival data is a term used to describe data that measure the time to occurrence of an event; in survival studies, the time to occurrence of an event is generally referred to as lifetime. Recurrent event data are commonly encountered in longitudinal studies when individuals are followed to observe the repeated occurrences of certain events. In many practical situations, individuals under study are exposed to failure due to more than one cause, and the eventual failure can be attributed to exactly one of these causes. The proposed model was useful in real-life situations for studying the effect of covariates on recurrences of certain events due to different causes. In Chapter 3, an additive hazards model for gap time distributions of recurrent event data with multiple causes was introduced, and the parameter estimation and asymptotic properties were discussed. In Chapter 4, a shared frailty model for the analysis of bivariate competing risks data was presented, and estimation procedures for the shared gamma frailty model, with and without covariates, using the EM algorithm were discussed. In Chapter 6, two nonparametric estimators for the bivariate survivor function of paired recurrent event data were developed; the asymptotic properties of the estimators were studied, the proposed estimators were applied to a real-life data set, and simulation studies were carried out to assess the efficiency of the proposed estimators.
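As a baseline for the Chapter 6 estimators (which additionally handle censoring), the empirical bivariate survivor function for fully observed pairs can be sketched as follows; the paired data are simulated placeholders.

```python
# Empirical bivariate survivor function S(t1, t2) = P(T1 > t1, T2 > t2)
# for uncensored paired event times.
import numpy as np

def empirical_bivariate_survivor(t1, t2):
    t1, t2 = np.asarray(t1, float), np.asarray(t2, float)
    def S(a, b):
        return np.mean((t1 > a) & (t2 > b))
    return S

rng = np.random.default_rng(5)
shared = rng.exponential(1.0, size=1000)                       # shared component induces dependence
t1 = shared + rng.exponential(1.0, size=1000)
t2 = shared + rng.exponential(1.0, size=1000)
S = empirical_bivariate_survivor(t1, t2)
print(S(1.0, 1.0))
```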
Abstract:
This paper presents the application of wavelet processing to handwritten character recognition. To attain a high recognition rate, robust feature extractors and powerful classifiers that are invariant to the variability of human writing are needed. The proposed scheme consists of two stages: a feature extraction stage based on the Haar wavelet transform, and a classification stage that uses a support vector machine classifier. Experimental results show that the proposed method is effective.
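A minimal sketch of the two-stage pipeline described above, using a single-level 2-D Haar transform for features and an SVM classifier; the dataset, image size, and hyper-parameters below are placeholders, not the paper's.

```python
# Haar wavelet features + SVM classification (placeholder data).
import numpy as np
import pywt
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

def haar_features(img):
    # single-level 2-D Haar DWT; keep the low-pass (approximation) sub-band
    cA, (cH, cV, cD) = pywt.dwt2(img, 'haar')
    return cA.ravel()

rng = np.random.default_rng(6)
X_imgs = rng.random((200, 32, 32))                 # placeholder 32x32 character images
y = rng.integers(0, 10, size=200)                  # placeholder class labels

X = np.array([haar_features(im) for im in X_imgs])
Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, random_state=0)
clf = SVC(kernel='rbf', C=10.0, gamma='scale').fit(Xtr, ytr)
print(clf.score(Xte, yte))
```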
Abstract:
In this paper, we propose a handwritten character recognition system for the Malayalam language. The feature extraction phase consists of gradient and curvature calculation and dimensionality reduction using Principal Component Analysis. Directional information from the arc tangent of the gradient is used as the gradient feature, and the strength of the gradient in the curvature direction is used as the curvature feature. The proposed system uses a combination of the gradient and curvature features in reduced dimension as the feature vector. For classification, the discriminative power of the Support Vector Machine (SVM) is evaluated. The results reveal that an SVM with a Radial Basis Function (RBF) kernel yields the best performance, with accuracies of 96.28% and 97.96% on two different datasets. This is the highest accuracy ever reported on these datasets.
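A simplified sketch of this kind of pipeline, using gradient-direction histograms only (the paper's curvature feature and exact binning are not reproduced), PCA for dimensionality reduction, and an RBF-kernel SVM; data shapes and class counts are assumptions.

```python
# Gradient-direction features -> PCA -> RBF SVM (placeholder data).
import numpy as np
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

def gradient_direction_features(img, n_bins=8, grid=4):
    gy, gx = np.gradient(img.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.arctan2(gy, gx)                       # gradient direction in (-pi, pi]
    h, w = img.shape
    feats = []
    for i in range(grid):                          # coarse spatial grid of direction histograms
        for j in range(grid):
            sl = (slice(i * h // grid, (i + 1) * h // grid),
                  slice(j * w // grid, (j + 1) * w // grid))
            hist, _ = np.histogram(ang[sl], bins=n_bins, range=(-np.pi, np.pi),
                                   weights=mag[sl])
            feats.append(hist)
    return np.concatenate(feats)

rng = np.random.default_rng(7)
X_imgs = rng.random((300, 32, 32))                 # placeholder character images
y = rng.integers(0, 44, size=300)                  # placeholder class labels
X = np.array([gradient_direction_features(im) for im in X_imgs])
model = make_pipeline(PCA(n_components=50), SVC(kernel='rbf', gamma='scale'))
model.fit(X, y)
```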
Abstract:
In our study we use a kernel-based regression technique, Support Vector Machine regression, for predicting the melting point of drug-like compounds in terms of topological descriptors, topological charge indices, connectivity indices and 2D autocorrelations. The machine learning model was designed, trained and tested using a dataset of 100 compounds, and it was found that an SVMReg model with an RBF kernel could predict the melting point with a mean absolute error of 15.5854 and a root mean squared error of 19.7576.
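A sketch of this modelling setup with scikit-learn; the descriptor matrix and melting points below are random placeholders standing in for the 100-compound dataset, so the reported errors are not comparable to the study's.

```python
# RBF-kernel support vector regression evaluated by MAE and RMSE.
import numpy as np
from sklearn.svm import SVR
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error, mean_squared_error

rng = np.random.default_rng(8)
X = rng.random((100, 30))                                   # 100 compounds x 30 descriptors (placeholder)
y = 150 + 100 * X[:, 0] + 10 * rng.standard_normal(100)    # placeholder melting points

Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.2, random_state=0)
model = make_pipeline(StandardScaler(), SVR(kernel='rbf', C=100.0, epsilon=1.0))
model.fit(Xtr, ytr)
pred = model.predict(Xte)
print("MAE :", mean_absolute_error(yte, pred))
print("RMSE:", np.sqrt(mean_squared_error(yte, pred)))
```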