1000 results for pseudo-orthogonal Latin squares
Abstract:
We consider a fully complex-valued radial basis function (RBF) network for regression applications. The locally regularised orthogonal least squares (LROLS) algorithm with the D-optimality experimental design, originally derived for constructing parsimonious real-valued RBF network models, is extended to the fully complex-valued RBF network. Like its real-valued counterpart, the proposed algorithm aims to achieve maximised model robustness and sparsity by combining two effective and complementary approaches. The LROLS algorithm alone is capable of producing a very parsimonious model with excellent generalisation performance, while the D-optimality design criterion further enhances the model efficiency and robustness. By specifying an appropriate weighting for the D-optimality cost in the combined model selection criterion, the entire model construction procedure becomes automatic. An example of identifying a complex-valued nonlinear channel is used to illustrate the regression application of the proposed fully complex-valued RBF network.
Abstract:
The note proposes an efficient nonlinear identification algorithm by combining a locally regularized orthogonal least squares (LROLS) model selection with a D-optimality experimental design. The proposed algorithm aims to achieve maximized model robustness and sparsity via two effective and complementary approaches. The LROLS method alone is capable of producing a very parsimonious model with excellent generalization performance. The D-optimality design criterion further enhances the model efficiency and robustness. An added advantage is that the user only needs to specify a weighting for the D-optimality cost in the combined model selection criterion, and the entire model construction procedure becomes automatic. The value of this weighting does not influence the model selection procedure critically, and it can be chosen with ease from a wide range of values.
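Editor's note: to make the kind of selection rule described in the abstract above concrete, the sketch below shows forward orthogonal least squares in which each candidate regressor is scored by its error reduction plus a weighted log term that plays the role of a D-optimality penalty. This is a minimal illustration under assumed conventions, not the authors' algorithm; the function name, the simple additive combined score, and the weighting beta are all assumptions.

```python
import numpy as np

def forward_ols_doptimality(P, y, n_terms, beta=1e-4):
    """Greedy forward subset selection over candidate regressors (columns of P).

    Each remaining candidate is orthogonalised against the already-selected
    columns (classical Gram-Schmidt); it is scored by its error reduction
    (w'y)^2 / (w'w) plus beta * log(w'w), a D-optimality-style term that
    penalises nearly dependent, low-energy directions.
    """
    n, m = P.shape
    selected, W = [], []                 # chosen indices and their orthogonalised columns
    residual = y.copy()
    for _ in range(n_terms):
        best_score, best_k, best_w = -np.inf, None, None
        for k in range(m):
            if k in selected:
                continue
            w = P[:, k].copy()
            for wj in W:                 # orthogonalise against selected columns
                w -= (wj @ P[:, k]) / (wj @ wj) * wj
            kappa = w @ w
            if kappa < 1e-12:            # numerically dependent candidate, skip
                continue
            err_red = (w @ residual) ** 2 / kappa      # error reduction of this term
            score = err_red + beta * np.log(kappa)     # combined selection score
            if score > best_score:
                best_score, best_k, best_w = score, k, w
        if best_k is None:
            break
        selected.append(best_k)
        W.append(best_w)
        residual -= (best_w @ residual) / (best_w @ best_w) * best_w
    return selected

# Toy usage: 50 noisy samples, 10 candidate regressors, select 3 terms
rng = np.random.default_rng(0)
P = rng.standard_normal((50, 10))
y = 2.0 * P[:, 3] - 1.5 * P[:, 7] + 0.1 * rng.standard_normal(50)
print(forward_ols_doptimality(P, y, n_terms=3))
```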
Abstract:
We consider a fully complex-valued radial basis function (RBF) network for regression and classification applications. For regression problems, the locally regularised orthogonal least squares (LROLS) algorithm aided with the D-optimality experimental design, originally derived for constructing parsimonious real-valued RBF models, is extended to the fully complex-valued RBF (CVRBF) network. Like its real-valued counterpart, the proposed algorithm aims to achieve maximised model robustness and sparsity by combining two effective and complementary approaches. The LROLS algorithm alone is capable of producing a very parsimonious model with excellent generalisation performance, while the D-optimality design criterion further enhances the model efficiency and robustness. By specifying an appropriate weighting for the D-optimality cost in the combined model selection criterion, the entire model construction procedure becomes automatic. An example of identifying a complex-valued nonlinear channel is used to illustrate the regression application of the proposed fully CVRBF network. The proposed fully CVRBF network is also applied to four-class classification problems that are typically encountered in communication systems. A complex-valued orthogonal forward selection algorithm based on the multi-class Fisher ratio of class separability measure is derived for constructing sparse CVRBF classifiers that generalise well. The effectiveness of the proposed algorithm is demonstrated using the example of nonlinear beamforming for multiple-antenna aided communication systems that employ the complex-valued quadrature phase shift keying modulation scheme. (C) 2007 Elsevier B.V. All rights reserved.
Abstract:
A unified approach is proposed for data modelling that includes supervised regression and classification applications as well as unsupervised probability density function estimation. The orthogonal-least-squares regression based on the leave-one-out test criteria is formulated within this unified data-modelling framework to construct sparse kernel models that generalise well. Examples from regression, classification and density estimation applications are used to illustrate the effectiveness of this generic data-modelling approach for constructing parsimonious kernel models with excellent generalisation capability. (C) 2008 Elsevier B.V. All rights reserved.
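Editor's note: the leave-one-out test criterion mentioned in the abstract above relies on the standard PRESS identity for linear-in-the-parameters least-squares models, namely that the i-th leave-one-out residual equals e_i / (1 - h_ii), where h_ii is the i-th diagonal entry of the hat matrix. The snippet below (variable names and data are illustrative, not the paper's code) verifies the identity numerically against brute-force refitting.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.standard_normal((30, 4))             # design matrix: 30 samples, 4 regressors
y = X @ np.array([1.0, -2.0, 0.5, 0.0]) + 0.1 * rng.standard_normal(30)

# Full least-squares fit and hat matrix H = X (X'X)^{-1} X'
theta = np.linalg.lstsq(X, y, rcond=None)[0]
H = X @ np.linalg.inv(X.T @ X) @ X.T
e = y - X @ theta

# PRESS identity: leave-one-out residual_i = e_i / (1 - H_ii)
loo_fast = e / (1.0 - np.diag(H))

# Brute-force check: refit with each sample held out in turn
loo_slow = np.empty_like(y)
for i in range(len(y)):
    mask = np.arange(len(y)) != i
    th_i = np.linalg.lstsq(X[mask], y[mask], rcond=None)[0]
    loo_slow[i] = y[i] - X[i] @ th_i

print(np.allclose(loo_fast, loo_slow))       # True: the mean of squares gives the LOO criterion
```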
Abstract:
A very efficient learning algorithm for model subset selection is introduced based on a new composite cost function that simultaneously optimizes the model approximation ability and model robustness and adequacy. The derived model parameters are estimated via forward orthogonal least squares, but the model subset selection cost function includes a D-optimality design criterion that maximizes the determinant of the design matrix of the subset to ensure the robustness, adequacy, and parsimony of the final model. The proposed approach is based on the forward orthogonal least squares (OLS) algorithm, and the new D-optimality-based cost function is constructed within the orthogonalization process so as to retain the inherent computational efficiency of the conventional forward OLS approach. Illustrative examples are included to demonstrate the effectiveness of the new approach.
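Editor's note: a useful fact behind orthogonalisation-based schemes of this kind is that, once the selected columns are written as P_S = W A with W orthogonal-column and A unit upper triangular, the D-optimality determinant factorises as det(P_S' P_S) = prod_k (w_k' w_k), so it can be accumulated term by term during forward selection. A small numerical check of that identity (illustrative only, not the paper's code):

```python
import numpy as np

rng = np.random.default_rng(2)
P = rng.standard_normal((40, 5))               # a selected subset of regressor columns

# QR factorisation stands in for Gram-Schmidt: P = Q R with orthonormal Q.
# Writing P = W A with W = Q * diag(R) and A = diag(R)^{-1} R (unit upper triangular).
Q, R = np.linalg.qr(P)
W = Q * np.diag(R)                             # column k of W is r_kk * q_k

det_direct = np.linalg.det(P.T @ P)            # D-optimality determinant, computed directly
det_factored = np.prod(np.sum(W * W, axis=0))  # product of w_k' w_k, accumulable term by term

print(np.isclose(det_direct, det_factored))    # True (up to floating-point error)
```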
Abstract:
A very efficient learning algorithm for model subset selection is introduced based on a new composite cost function that simultaneously optimizes the model approximation ability and model adequacy. The derived model parameters are estimated via forward orthogonal least squares, but the subset selection cost function includes an A-optimality design criterion to minimize the variance of the parameter estimates that ensures the adequacy and parsimony of the final model. An illustrative example is included to demonstrate the effectiveness of the new approach.
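Editor's note: for contrast with the D-optimality determinant in the preceding abstract, the A-optimality cost referred to here is the trace of (P'P)^{-1}, i.e. the sum of the parameter-estimate variances up to the noise variance. The snippet below simply evaluates both criteria on an assumed random design matrix so the difference in what they measure is concrete; it is an illustration, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(3)
P = rng.standard_normal((40, 5))             # design matrix of a candidate model subset
G = P.T @ P                                  # information (Gram) matrix

a_cost = np.trace(np.linalg.inv(G))          # A-optimality: sum of parameter-estimate variances
d_cost = -np.log(np.linalg.det(G))           # D-optimality: maximise det(G), i.e. minimise -log det

print(f"A-optimality cost: {a_cost:.4f}, D-optimality cost: {d_cost:.4f}")
```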
Abstract:
In this paper a modified algorithm is suggested for developing polynomial neural network (PNN) models. Optimal partial description (PD) modeling is introduced at each layer of the PNN expansion, a task accomplished using the orthogonal least squares (OLS) method. Based on the initial PD models determined by the polynomial order and the number of PD inputs, OLS selects the most significant regressor terms, reducing the output error variance. The method produces PNN models exhibiting a high level of accuracy and superior generalization capabilities. Additionally, parsimonious models are obtained comprising a considerably smaller number of parameters compared to the ones generated by means of the conventional PNN algorithm. Three benchmark examples are elaborated, including modeling of the gas furnace process as well as the iris and wine classification problems. Extensive simulation results and comparisons with other methods in the literature demonstrate the effectiveness of the suggested modeling approach.
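Editor's note: to make the "partial description" terminology concrete, in GMDH-style polynomial neural networks a PD is typically a quadratic polynomial of two inputs, y ≈ a0 + a1·xi + a2·xj + a3·xi·xj + a4·xi² + a5·xj², and the OLS step described above keeps only the most significant of these six regressor terms. A minimal sketch of building and fitting one such PD (names and data are illustrative assumptions):

```python
import numpy as np

def pd_regressors(xi, xj):
    """Regressor matrix of one quadratic partial description in inputs xi, xj."""
    return np.column_stack([np.ones_like(xi), xi, xj, xi * xj, xi**2, xj**2])

rng = np.random.default_rng(4)
xi, xj = rng.standard_normal(100), rng.standard_normal(100)
y = 1.0 + 0.5 * xi * xj - 0.3 * xj**2 + 0.05 * rng.standard_normal(100)

P = pd_regressors(xi, xj)
coeffs = np.linalg.lstsq(P, y, rcond=None)[0]  # full PD fit; an OLS selection step would keep
print(np.round(coeffs, 2))                     # only the terms with large error reduction
```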
Abstract:
In this note we first introduce balanced critical sets and near balanced critical sets in Latin squares. Then we prove that there exist balanced critical sets in the back circulant Latin squares of order 3n for n even. Using this result we decompose the back circulant Latin squares of order 3n, n even, into three isotopic and disjoint balanced critical sets each of size 3n. We also find near balanced critical sets in the back circulant Latin squares of order 3n for n odd. Finally, we examine representatives of each main class of Latin squares of order up to six in order to determine which main classes contain balanced or near balanced critical sets.
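Editor's note: the back circulant Latin square of order n referred to above is the square with entry (i + j) mod n in cell (i, j), so each row is the previous row shifted cyclically by one. The sketch below constructs an order-3n example (n = 3) and checks the Latin property; it is an illustration, not part of the paper.

```python
import numpy as np

def back_circulant(n):
    """Back circulant Latin square of order n: entry (i + j) mod n in cell (i, j)."""
    i, j = np.indices((n, n))
    return (i + j) % n

L = back_circulant(9)                          # order 3n with n = 3 (the odd case in the abstract)
# Latin property: every symbol appears exactly once in each row and each column
assert all(sorted(row.tolist()) == list(range(9)) for row in L)
assert all(sorted(col.tolist()) == list(range(9)) for col in L.T)
print(L)
```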
Abstract:
A critical set in a Latin square of order n is a set of entries from the square which can be embedded in precisely one Latin square of order n, such that if any element of the critical set is deleted, the remaining set can be embedded in more than one Latin square of order n. In this paper we find all the critical sets of different sizes in the Latin squares of order at most six. We count the number of main and isotopy classes of these critical sets and classify critical sets from the main classes into various strengths. Some observations are made about the relationship between the numbers of classes, particularly in the 6 x 6 case. Finally some examples are given of each type of critical set.
Abstract:
We find necessary and sufficient conditions for completing an arbitrary 2 by n latin rectangle to an n by n symmetric latin square, for completing an arbitrary 2 by n latin rectangle to an n by n unipotent symmetric latin square, and for completing an arbitrary 1 by n latin rectangle to an n by n idempotent symmetric latin square. Equivalently, we prove necessary and sufficient conditions for the existence of an (n - 1)-edge colouring of K_n (n even), and for an n-edge colouring of K_n (n odd), in which the colours assigned to the edges incident with two vertices are specified in advance.
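Editor's note: as a concrete instance of the edge-colouring side of this equivalence, the standard round-robin construction yields a proper (n - 1)-edge colouring of K_n for even n: edge {i, j} with i, j < n - 1 gets colour (i + j) mod (n - 1), and edge {i, n - 1} gets colour 2i mod (n - 1). The sketch below (an illustration of that classical construction, not the authors' result) checks properness for n = 8.

```python
from itertools import combinations

def edge_colouring(n):
    """Proper (n-1)-edge colouring of K_n for even n (round-robin construction)."""
    assert n % 2 == 0
    colour = {}
    for i, j in combinations(range(n), 2):
        if j == n - 1:
            colour[(i, j)] = (2 * i) % (n - 1)   # edges to the last vertex
        else:
            colour[(i, j)] = (i + j) % (n - 1)   # edges within the first n-1 vertices
    return colour

n = 8
c = edge_colouring(n)
# Properness: the n-1 edges at each vertex all receive distinct colours
for v in range(n):
    incident = [c[e] for e in c if v in e]
    assert len(incident) == len(set(incident)) == n - 1
print("proper (n-1)-edge colouring of K_%d" % n)
```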
Abstract:
In this paper we focus on the existence of 2-critical sets in the Latin square corresponding to the elementary abelian 2-group of order 2^n. It has been shown by Stinson and van Rees that this Latin square contains a 2-critical set of volume 4^n - 3^n. We provide constructions for 2-critical sets containing 4^n - 3^n + 1 - (2^(k-1) + 2^(m-1) + 2^(n-(k+m+1))) entries, where 1 ≤ k ≤ n and 1 ≤ m ≤ n - k. That is, we construct 2-critical sets for certain values less than 4^n - 3^n + 1 - 3·2^([n/3]-1). The results raise the interesting question of whether, for the given Latin square, it is possible to construct 2-critical sets of volume m, where 4^n - 3^n + 1 - 3·2^([n/3]-1) < m < 4^n - 3^n.
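Editor's note: the Latin square in question is simply the Cayley table of (Z_2)^n, i.e. the XOR table L(i, j) = i XOR j on {0, ..., 2^n - 1}. The snippet below constructs it and evaluates the Stinson–van Rees critical-set volume 4^n - 3^n for small n; the critical-set constructions themselves are beyond a short sketch, so this is context only.

```python
import numpy as np

def xor_square(n):
    """Cayley table of the elementary abelian 2-group of order 2**n: L[i, j] = i ^ j."""
    N = 2 ** n
    i, j = np.indices((N, N))
    return i ^ j

for n in range(1, 5):
    N = 2 ** n
    L = xor_square(n)
    assert all(sorted(row.tolist()) == list(range(N)) for row in L)   # Latin property
    print(n, "Stinson-van Rees 2-critical set volume:", 4**n - 3**n)
```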
Abstract:
Previously, the process of finding critical sets in Latin squares has been made cumbersome by the complexity and number of Latin trades that must be constructed. In this paper we develop a theory of Latin trades that yields more transparent constructions. We use these Latin trades to find a new class of critical sets for Latin squares which are a product of the Latin square of order 2 with a back circulant Latin square of odd order.
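Editor's note: the product referred to here is the usual direct product of Latin squares: if L1 has order p and L2 has order q, the square L((i1, i2), (j1, j2)) = (L1(i1, j1), L2(i2, j2)) is a Latin square of order pq. A minimal sketch (illustrative names, not the paper's construction) of forming the product of the order-2 square with a back circulant square of odd order and checking the Latin property:

```python
import numpy as np

def back_circulant(n):
    """Back circulant Latin square of order n: entry (i + j) mod n."""
    i, j = np.indices((n, n))
    return (i + j) % n

def product_square(L1, L2):
    """Direct product: cell ((i1,i2),(j1,j2)) holds the symbol L1[i1,j1]*q + L2[i2,j2]."""
    p, q = len(L1), len(L2)
    L = np.empty((p * q, p * q), dtype=int)
    for i1 in range(p):
        for i2 in range(q):
            for j1 in range(p):
                for j2 in range(q):
                    L[i1 * q + i2, j1 * q + j2] = L1[i1, j1] * q + L2[i2, j2]
    return L

L = product_square(back_circulant(2), back_circulant(5))   # order 2 times odd order 5
n = len(L)
assert all(sorted(row.tolist()) == list(range(n)) for row in L)
assert all(sorted(col.tolist()) == list(range(n)) for col in L.T)
print("Latin square of order", n)
```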