87 results for Tridiagonal Kernel


Relevance:

20.00%

Publisher:

Abstract:

Describes the design and implementation of an operating system kernel specifically designed to support real-time applications. It emphasises portability and aims to support state-of-the-art concepts in real-time programming. Discusses architectural aspects of the ARTOS kernel and introduces new concepts in the areas of interrupt processing, scheduling, mutual exclusion and inter-task communication. Also explains the programming environment of the ARTOS kernel and its task model, defines the real-time task states and system data structures, and discusses the exception handling mechanisms used to detect missed deadlines and take corrective action.
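
The abstract stays at the level of concepts, so as a rough illustration only: a minimal Python sketch of real-time task states and a deadline-miss exception in the spirit of the mechanisms described. All names, states and the suspend-on-miss policy here are hypothetical, not taken from the ARTOS paper (a real kernel would be written in a systems language, not Python).

```python
import time
from enum import Enum

class TaskState(Enum):
    READY = 1
    RUNNING = 2
    BLOCKED = 3
    SUSPENDED = 4

class DeadlineMissed(Exception):
    """Raised when a task overruns its deadline."""

class Task:
    def __init__(self, name, deadline_s):
        self.name = name
        self.deadline = time.monotonic() + deadline_s
        self.state = TaskState.READY

    def check_deadline(self):
        # A kernel would run a check like this on every scheduling tick,
        # raising an exception so a handler can take corrective action.
        if time.monotonic() > self.deadline:
            self.state = TaskState.SUSPENDED  # hypothetical corrective action
            raise DeadlineMissed(self.name)
```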

Relevance:

20.00%

Publisher:

Abstract:

We consider a random design model based on independent and identically distributed (iid) pairs of observations (Xi, Yi), where the regression function m(x) is given by m(x) = E(Yi | Xi = x) with one independent variable. In a nonparametric setting the aim is to produce a reasonable approximation to the unknown function m(x) when we have no precise information about the form of the true density f(x) of X. We describe a procedure for estimating the nonparametric regression function at a given point by an appropriately constructed fixed-width (2d) confidence interval with confidence coefficient of at least 1 − α. Here, d (> 0) and α ∈ (0, 1) are two preassigned values. Fixed-width confidence intervals are developed using both the Nadaraya-Watson and local linear kernel estimators of nonparametric regression with data-driven bandwidths.
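
For concreteness, a minimal sketch of the two kernel estimators named above, using a Gaussian kernel and a user-supplied bandwidth h; the paper's data-driven bandwidth selection is not reproduced here.

```python
import numpy as np

def gaussian_kernel(u):
    return np.exp(-0.5 * u**2) / np.sqrt(2 * np.pi)

def nadaraya_watson(x0, X, Y, h):
    # m_hat(x0) = sum_i K((x0 - X_i)/h) * Y_i / sum_i K((x0 - X_i)/h)
    w = gaussian_kernel((x0 - X) / h)
    return np.sum(w * Y) / np.sum(w)

def local_linear(x0, X, Y, h):
    # Kernel-weighted least-squares line fitted around x0;
    # the intercept is the estimate of m(x0).
    sw = np.sqrt(gaussian_kernel((x0 - X) / h))
    Z = np.column_stack([np.ones_like(X), X - x0])
    beta, *_ = np.linalg.lstsq(Z * sw[:, None], sw * Y, rcond=None)
    return beta[0]

rng = np.random.default_rng(0)
X = rng.uniform(0, 1, 200)
Y = np.sin(2 * np.pi * X) + rng.normal(0, 0.3, 200)
print(nadaraya_watson(0.5, X, Y, h=0.05))  # both should be near sin(pi) = 0
print(local_linear(0.5, X, Y, h=0.05))
```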

The sample size was optimized using the purely sequential and two-stage procedures together with asymptotic properties of the Nadaraya-Watson and local linear estimators. A large-scale simulation study was performed to compare their coverage accuracy. The numerical results indicate that the confidence bands based on the local linear estimator perform better than those constructed using the Nadaraya-Watson estimator. However, both estimators are shown to have asymptotically correct coverage properties.
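
The purely sequential idea can be illustrated with the classical Chow-Robbins-type stopping rule for a fixed-width interval for a mean; the paper adapts rules of this kind to the kernel regression setting, so the sketch below is an analogue, not the authors' procedure.

```python
import numpy as np

def sequential_fixed_width(sample, d, z=1.96, n0=10):
    # Purely sequential rule: keep sampling until
    # n >= (z/d)^2 * (s_n^2 + 1/n), then report the 2d-width interval.
    data = [sample() for _ in range(n0)]
    while True:
        n = len(data)
        s2 = np.var(data, ddof=1)
        if n >= (z / d) ** 2 * (s2 + 1.0 / n):
            mean = float(np.mean(data))
            return n, (mean - d, mean + d)
        data.append(sample())

rng = np.random.default_rng(1)
n, ci = sequential_fixed_width(lambda: rng.normal(5.0, 2.0), d=0.5)
print(n, ci)  # stops near (1.96/0.5)^2 * 4 ≈ 61 observations
```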

Relevance:

20.00%

Publisher:

Abstract:

Software reliability growth models (SRGMs) are extensively employed in software engineering to assess the reliability of software before its release for operational use. These models are usually parametric functions obtained by statistically fitting parametric curves, using maximum likelihood estimation or the least-squares method, to plots of the cumulative number of observed failures N(t) against a period of systematic testing time t. Since the 1970s, a very large number of SRGMs have been proposed in the reliability and software engineering literature, and these are often very complex, reflecting the involved testing regimes that often took place during the software development process. In this paper we extend some of our previous work by adopting a nonparametric approach to SRGM modelling based on local polynomial modelling with kernel smoothing. These models require very few assumptions, which simplifies the estimation process and makes them applicable under a wide variety of situations. Finally, we provide numerical examples in which these models are evaluated and compared.
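
A minimal sketch of the nonparametric alternative: instead of fitting a parametric SRGM curve, smooth the cumulative failure counts N(t) with a local linear (degree-1 local polynomial) kernel fit. The synthetic failure data and bandwidth below are illustrative only.

```python
import numpy as np

def local_poly_smooth(t0, t, N, h):
    # Local linear fit of N(t) around t0 with a Gaussian kernel; the intercept
    # is the smoothed N(t0) and the slope estimates the failure intensity.
    sw = np.sqrt(np.exp(-0.5 * ((t0 - t) / h) ** 2))
    Z = np.column_stack([np.ones_like(t), t - t0])
    beta, *_ = np.linalg.lstsq(Z * sw[:, None], sw * N, rcond=None)
    return beta[0], beta[1]

# Illustrative data: 100 failure times and their cumulative counts.
rng = np.random.default_rng(2)
times = rng.exponential(10.0, 100).cumsum() / 50
counts = np.arange(1, 101, dtype=float)
N_hat, intensity = local_poly_smooth(5.0, times, counts, h=1.0)
print(N_hat, intensity)
```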

Relevance:

20.00%

Publisher:

Abstract:

In this paper, we present novel ridge regression (RR) and kernel ridge regression (KRR) techniques for multivariate labels and apply the methods to the problem of face recognition. Motivated by the fact that the vertices of a regular simplex are separated points with the highest degree of symmetry, we choose such vertices as the targets for the distinct individuals to be recognized and apply RR or KRR to map the training face images into a face subspace where the training images from each individual lie near their individual target. We identify a new face image by mapping it into this face subspace and comparing its distance to all individual targets. An efficient cross-validation algorithm is also provided for selecting the regularization and kernel parameters. Experiments were conducted on two face databases, and the results demonstrate that the proposed algorithm significantly outperforms three popular linear face recognition techniques (Eigenfaces, Fisherfaces and Laplacianfaces) and also performs comparably with the recently developed Orthogonal Laplacianfaces, with the advantage of computational speed. Experimental results also demonstrate that KRR outperforms RR, as expected, since KRR can exploit the nonlinear structure of the face images. Although we concentrate on face recognition in this paper, the proposed method is general and may be applied to general multi-category classification problems.
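
A minimal sketch of the simplex-target idea, assuming an RBF kernel: for c classes, the centered one-hot vectors e_i − (1/c)1 are pairwise equidistant and so form the vertices of a regular simplex; KRR maps inputs toward these targets, and a new sample is assigned to the nearest vertex. The kernel choice and parameter values are assumptions, not the paper's settings.

```python
import numpy as np

def rbf(A, B, gamma=1.0):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def krr_simplex_fit(X, y, c, lam=1e-2, gamma=1.0):
    T = np.eye(c) - 1.0 / c              # rows: regular simplex vertices
    Y = T[y]                             # one target row per training sample
    alpha = np.linalg.solve(rbf(X, X, gamma) + lam * np.eye(len(X)), Y)
    return alpha, T

def krr_simplex_predict(Xnew, X, alpha, T, gamma=1.0):
    F = rbf(Xnew, X, gamma) @ alpha      # map into the target space
    d2 = ((F[:, None, :] - T[None, :, :]) ** 2).sum(-1)
    return d2.argmin(axis=1)             # nearest simplex vertex = class

rng = np.random.default_rng(3)
X = np.vstack([rng.normal(m, 0.5, (20, 4)) for m in (0.0, 1.5, 3.0)])
y = np.repeat([0, 1, 2], 20)
alpha, T = krr_simplex_fit(X, y, c=3)
print((krr_simplex_predict(X, X, alpha, T) == y).mean())  # training accuracy
```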

Relevance:

20.00%

Publisher:

Abstract:

Given n training examples, training a Kernel Fisher Discriminant (KFD) classifier corresponds to solving a linear system of dimension n. In cross-validating KFD, the training examples are split into two distinct subsets L times, where a subset of m examples is used for validation and the remaining (n − m) examples are used for training the classifier. In this case L linear systems of dimension (n − m) need to be solved. We propose a novel method for cross-validation of KFD in which, instead of solving L linear systems of dimension (n − m), we compute the inverse of an n × n matrix once and solve L linear systems of dimension 2m, thereby reducing the complexity when L is large and/or m is small. For typical 10-fold and leave-one-out cross-validations, the proposed algorithm is approximately 4 and (4/9)n times as efficient, respectively, as the naive implementations. Simulations are provided to demonstrate the efficiency of the proposed algorithms.
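
The flavour of the speed-up can be shown with the standard block-inverse (Schur complement) downdate: once B = A⁻¹ has been computed for the full n × n system, solving a fold's (n − m)-dimensional training system needs only an m-dimensional solve. This sketch captures the spirit of the method; the paper's exact 2m-dimensional formulation is not reproduced here.

```python
import numpy as np

def solve_without_fold(B, b, val_idx):
    # Solve A[tr, tr] x = b[tr] using only B = inv(A), via the identity
    # inv(A11) = B11 - B12 inv(B22) B21; the only new solve is m-dimensional.
    tr = np.setdiff1d(np.arange(B.shape[0]), val_idx)
    b1 = b[tr]
    correction = B[np.ix_(tr, val_idx)] @ np.linalg.solve(
        B[np.ix_(val_idx, val_idx)], B[np.ix_(val_idx, tr)] @ b1)
    return B[np.ix_(tr, tr)] @ b1 - correction

# Check one fold against the naive solve on a random SPD system.
rng = np.random.default_rng(4)
M = rng.normal(size=(50, 50))
A = M @ M.T + 50 * np.eye(50)            # stand-in for the KFD system matrix
b = rng.normal(size=50)
B = np.linalg.inv(A)                     # computed once, reused for every fold
val = np.arange(45, 50)                  # one validation fold, m = 5
x_fast = solve_without_fold(B, b, val)
x_naive = np.linalg.solve(A[:45, :45], b[:45])
print(np.allclose(x_fast, x_naive))      # True
```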

Relevance:

20.00%

Publisher:

Abstract:

This paper presents a novel dimensionality reduction algorithm for kernel-based classification. In the feature space, the proposed algorithm maximizes the ratio of the squared between-class distance to the sum of the within-class variances of the training samples for a given reduced dimension. This algorithm has lower complexity than the recently reported kernel dimension reduction (KDR) algorithm for supervised learning. We conducted several simulations with large training datasets, which demonstrate that the proposed algorithm has similar performance to, or is marginally better than, KDR, while having the advantage of computational efficiency. Further, we applied the proposed dimension reduction algorithm to face recognition, in which the number of training samples is very small. The proposed face recognition approach based on the new algorithm outperforms the eigenface approach based on principal component analysis (PCA) when the training data are complete, that is, representative of the whole dataset.
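
A linear analogue of the criterion (the paper works in a kernel-induced feature space): pick projection directions that maximize between-class scatter relative to within-class scatter via a generalized eigenproblem. This is a standard Fisher-style reduction sketched for illustration, not the authors' exact algorithm.

```python
import numpy as np

def fisher_reduce(X, y, r, ridge=1e-6):
    # Projection maximizing between-class over within-class scatter.
    d = X.shape[1]
    mu = X.mean(axis=0)
    Sb = np.zeros((d, d))
    Sw = np.zeros((d, d))
    for c in np.unique(y):
        Xc = X[y == c]
        mc = Xc.mean(axis=0)
        Sb += len(Xc) * np.outer(mc - mu, mc - mu)   # between-class scatter
        Sw += (Xc - mc).T @ (Xc - mc)                # within-class scatter
    # Generalized eigenproblem Sb v = lambda Sw v (ridge keeps Sw invertible).
    evals, evecs = np.linalg.eig(np.linalg.solve(Sw + ridge * np.eye(d), Sb))
    order = np.argsort(evals.real)[::-1]
    return evecs[:, order[:r]].real

rng = np.random.default_rng(5)
X = np.vstack([rng.normal(m, 1.0, (30, 10)) for m in (0.0, 2.0, 4.0)])
y = np.repeat([0, 1, 2], 30)
Z = X @ fisher_reduce(X, y, r=2)          # 10-dimensional data reduced to 2
print(Z.shape)
```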