1000 results for pseudo-orthogonal Latin squares
Abstract:
In this paper we focus on the representation of Steiner trades of volume less than or equal to nine and identify those for which the associated partial Latin square can be decomposed into six disjoint Latin interchanges.
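For readers unfamiliar with the terminology, here is a minimal illustrative check of what a Latin interchange is (not taken from the paper; the function and example are hypothetical): two partial Latin squares that fill the same cells, disagree in every filled cell, and use the same symbols in each row and each column. The smallest example is an intercalate.

from collections import Counter

def is_latin_interchange(t1, t2):
    """t1, t2: dicts mapping (row, col) -> symbol for the filled cells only."""
    if t1.keys() != t2.keys():                        # same shape (same filled cells)
        return False
    if any(t1[cell] == t2[cell] for cell in t1):      # must disagree in every filled cell
        return False
    for axis in (0, 1):                               # rows, then columns
        for line in {cell[axis] for cell in t1}:
            s1 = Counter(v for cell, v in t1.items() if cell[axis] == line)
            s2 = Counter(v for cell, v in t2.items() if cell[axis] == line)
            if s1 != s2:                              # same symbols per row / column
                return False
    return True

# Smallest example: an intercalate (a Latin interchange of volume 4).
t1 = {(0, 0): 1, (0, 1): 2, (1, 0): 2, (1, 1): 1}
t2 = {(0, 0): 2, (0, 1): 1, (1, 0): 1, (1, 1): 2}
print(is_latin_interchange(t1, t2))                   # True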
Abstract:
In this paper we focus on the identification of Latin interchanges in Latin squares that are the direct product of Latin squares of smaller orders. The results we obtain on Latin interchanges will be used to identify critical sets in direct products. This work is an extension of research carried out by Stinson and van Rees in 1982.
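As a small sketch of the direct product construction the abstract refers to (assuming the standard definition; the function name and symbol encoding are illustrative, not code from the paper): the direct product of Latin squares of orders m and n is a Latin square of order mn whose cell indexed by ((i, k), (j, l)) holds the pair (A[i][j], B[k][l]).

def direct_product(A, B):
    """Direct product of Latin squares A (order m) and B (order n), order m*n."""
    m, n = len(A), len(B)
    # Encode the symbol pair (a, b) as the single symbol a*n + b.
    return [[A[i][j] * n + B[k][l] for j in range(m) for l in range(n)]
            for i in range(m) for k in range(n)]

A = [[0, 1], [1, 0]]                      # Latin square of order 2
B = [[0, 1, 2], [1, 2, 0], [2, 0, 1]]     # Latin square of order 3
P = direct_product(A, B)                  # Latin square of order 6
print(all(sorted(row) == list(range(6)) for row in P))   # True (columns check similarly)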
Abstract:
A construction algorithm for multioutput radial basis function (RBF) network modelling is introduced by combining a locally regularised orthogonal least squares (LROLS) model selection with a D-optimality experimental design. The proposed algorithm aims to achieve maximised model robustness and sparsity via two effective and complementary approaches. The LROLS method alone is capable of producing a very parsimonious RBF network model with excellent generalisation performance. The D-optimality design criterion enhances the model efficiency and robustness. A further advantage of the combined approach is that the user only needs to specify a weighting for the D-optimality cost in the combined RBF model selection criterion, and the entire model construction procedure becomes automatic. The value of this weighting does not critically influence the model selection procedure, and it can be chosen easily from a wide range of values.
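A minimal sketch of the general idea, under a simplified assumption about the combined criterion (the score below adds a D-optimality reward beta*log(w'w) to each orthogonalised candidate's error reduction; the function name, score form, and default beta are illustrative, not the authors' LROLS code):

import numpy as np

def ols_doptimality_select(P, y, n_terms, beta=1e-3):
    """P: (N, M) candidate regressor matrix; y: (N,) target vector."""
    selected, W = [], []
    for _ in range(n_terms):
        best_k, best_w, best_score = None, None, -np.inf
        for k in range(P.shape[1]):
            if k in selected:
                continue
            w = P[:, k].astype(float)
            for u in W:                               # orthogonalise against chosen terms
                w -= (u @ w) / (u @ u) * u
            energy = w @ w
            if energy < 1e-12:                        # numerically dependent column
                continue
            err_reduction = (w @ y) ** 2 / energy     # OLS error reduction of this term
            score = err_reduction + beta * np.log(energy)   # D-optimality weighting
            if score > best_score:
                best_k, best_w, best_score = k, w, score
        if best_k is None:
            break
        selected.append(best_k)
        W.append(best_w)
    return selected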
Abstract:
An efficient model identification algorithm for a large class of linear-in-the-parameters models is introduced that simultaneously optimises the model approximation ability, sparsity and robustness. The derived model parameters in each forward regression step are initially estimated via orthogonal least squares (OLS) and are then tuned with a new gradient-descent learning algorithm based on basis pursuit that minimises the ℓ1 norm of the parameter estimate vector. The model subset selection cost function includes a D-optimality design criterion that maximises the determinant of the design matrix of the subset, to ensure model robustness and to enable the model selection procedure to automatically terminate at a sparse model. The proposed approach is based on the forward OLS algorithm using the modified Gram-Schmidt procedure. Both the parameter tuning procedure, based on basis pursuit, and the model selection criterion, based on D-optimality, which is effective in ensuring model robustness, are integrated with the forward regression. As a consequence, the inherent computational efficiency associated with the conventional forward OLS approach is maintained in the proposed algorithm. Examples demonstrate the effectiveness of the new approach.
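As a hedged illustration of the ℓ1-norm tuning idea only (a generic proximal-gradient soft-thresholding update, not the paper's exact gradient-descent rule; the function name, default lam and iteration count are illustrative):

import numpy as np

def l1_tune(P, y, theta0, lam=0.01, iters=200):
    """Shrink the initial OLS estimate theta0 with proximal-gradient (ISTA) steps."""
    step = 1.0 / (np.linalg.norm(P, 2) ** 2)          # safe step size 1/L
    theta = theta0.astype(float).copy()
    for _ in range(iters):
        grad = P.T @ (P @ theta - y)                  # gradient of 0.5*||y - P theta||^2
        z = theta - step * grad
        theta = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)   # soft-threshold
    return theta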
Abstract:
A new structure of Radial Basis Function (RBF) neural network called the Dual-orthogonal RBF Network (DRBF) is introduced for nonlinear time series prediction. The hidden nodes of a conventional RBF network respond to the Euclidean distance between the network input vector and the centres, and the node responses are radially symmetric. But in time series prediction, where the system input vectors are lagged system outputs that are usually highly correlated, the Euclidean distance measure may not be appropriate. The DRBF network modifies the distance metric by introducing a classification function which is based on the estimation data set. Training the DRBF network consists of two stages: learning the classification-related basis functions and the important input nodes, followed by selecting the regressors and learning the weights of the hidden nodes. In both cases, a forward Orthogonal Least Squares (OLS) selection procedure is applied, initially to select the important input nodes and then to select the important centres. Simulation results of single-step and multi-step-ahead predictions over a test data set are included to demonstrate the effectiveness of the new approach.
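For context, a minimal sketch of the conventional Euclidean-distance RBF network that the DRBF modifies (a generic Gaussian-kernel implementation, not the DRBF itself; function and parameter names are illustrative):

import numpy as np

def rbf_predict(X, centres, widths, weights):
    """X: (N, d) inputs; centres: (M, d); widths, weights: (M,) arrays."""
    d2 = ((X[:, None, :] - centres[None, :, :]) ** 2).sum(axis=2)   # squared Euclidean distances
    Phi = np.exp(-d2 / (2.0 * widths ** 2))                         # Gaussian node responses
    return Phi @ weights                                            # linear output layer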
Abstract:
An efficient data-based modeling algorithm for nonlinear system identification is introduced for radial basis function (RBF) neural networks with the aim of maximizing generalization capability based on the concept of leave-one-out (LOO) cross validation. Each of the RBF kernels has its own kernel width parameter, and the basic idea is to optimize the multiple pairs of regularization parameters and kernel widths, each of which is associated with a kernel, one at a time within the orthogonal forward regression (OFR) procedure. Thus, each OFR step consists of one model term selection based on the LOO mean square error (LOOMSE), followed by the optimization of the associated kernel width and regularization parameter, also based on the LOOMSE. Since the same LOOMSE is adopted for model selection as in our previous state-of-the-art local regularization assisted orthogonal least squares (LROLS) algorithm, the proposed new OFR algorithm is likewise capable of producing a very sparse RBF model with excellent generalization performance. Unlike the LROLS algorithm, which requires an additional iterative loop to optimize the regularization parameters as well as an additional procedure to optimize the kernel width, the proposed new OFR algorithm optimizes both the kernel widths and the regularization parameters within the single OFR procedure, and consequently the required computational complexity is dramatically reduced. Nonlinear system identification examples are included to demonstrate the effectiveness of this new approach in comparison to the well-known approaches of the support vector machine and the least absolute shrinkage and selection operator, as well as the LROLS algorithm.
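A minimal sketch of the leave-one-out criterion for a linear-in-the-parameters model, using the standard identity e_loo,i = e_i / (1 - h_ii) with h_ii the diagonal of the hat matrix (the unregularised textbook form, not the paper's regularised, orthogonalised variant; names are illustrative):

import numpy as np

def loo_mse(Phi, y):
    """Phi: (N, M) design matrix of selected kernels; y: (N,) targets."""
    theta, *_ = np.linalg.lstsq(Phi, y, rcond=None)
    resid = y - Phi @ theta
    H = Phi @ np.linalg.pinv(Phi.T @ Phi) @ Phi.T     # hat matrix
    loo_resid = resid / (1.0 - np.diag(H))            # LOO residuals without refitting
    return np.mean(loo_resid ** 2)                    # LOOMSE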
Abstract:
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)
Abstract:
Graduate Program in Animal Science - FCAV
Abstract:
Chapter 1 introduces the basic tools and mechanics used within this thesis, touches on some historical uses and background, and contains the majority of the definitions. In Chapter 2 we consider the question of whether one can decompose λ copies of a monochromatic K_v into copies of K_k such that each copy of K_k contains at most one edge from each K_v. This is called a proper edge coloring (Hurd, Sarvate, [29]). The majority of the content in this chapter is a wide variety of examples that explain the constructions used in Chapters 3 and 4. In Chapters 3 and 4 we investigate how to properly color a BIBD(v, k, λ) for k = 4 and 5. Not only do we give direct constructions of relatively small BIBDs, we also prove some generalized constructions used within. In Chapter 5 we discuss an alternative solution to Chapters 3 and 4: a purely graph-theoretical approach using matchings, augmenting paths, and theorems about the edge-chromatic number is used to develop a theorem that then covers all possible cases. We also discuss how this method performed compared to the methods in Chapters 3 and 4. In Chapter 6 we switch topics to Latin rectangles that have the same number of symbols, and the same number of cells, as Latin squares. Suppose ab = n². We define an equitable Latin rectangle as an a × b matrix on a set of n symbols where each symbol appears either ⌈b/n⌉ or ⌊b/n⌋ times in each row of the matrix and either ⌈a/n⌉ or ⌊a/n⌋ times in each column of the matrix. Two equitable Latin rectangles are orthogonal in the usual way. Denote a set of k mutually orthogonal equitable a × b Latin rectangles as a k-MOELR(a, b; n). We show that there exists a k-MOELR(a, b; n) with k ≥ 3 for all a, b, n, with some exceptions.
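An illustrative checker built only from the definitions in this abstract (not code from the thesis; symbols are assumed to be 0..n-1, and the orthogonality test assumes ab = n² so every ordered pair should occur exactly once):

from collections import Counter
from math import ceil, floor

def counts_ok(cells, n, length):
    """Every symbol 0..n-1 must occur floor(length/n) or ceil(length/n) times."""
    c = Counter(cells)
    lo, hi = floor(length / n), ceil(length / n)
    return all(c.get(s, 0) in (lo, hi) for s in range(n))

def is_equitable(L, n):
    """L: a x b matrix (list of lists) over symbols 0..n-1."""
    a, b = len(L), len(L[0])
    rows_ok = all(counts_ok(row, n, b) for row in L)
    cols_ok = all(counts_ok([L[i][j] for i in range(a)], n, a) for j in range(b))
    return rows_ok and cols_ok

def are_orthogonal(L1, L2):
    """With ab = n^2, superposition must produce every ordered pair exactly once."""
    pairs = [(L1[i][j], L2[i][j]) for i in range(len(L1)) for j in range(len(L1[0]))]
    return len(pairs) == len(set(pairs))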