244 results for Harris, Marvin
Abstract:
This paper introduces a new neurofuzzy model construction and parameter estimation algorithm from observed finite data sets, based on a Takagi and Sugeno (T-S) inference mechanism and a new extended Gram-Schmidt orthogonal decomposition algorithm, for the modeling of a priori unknown dynamical systems in the form of a set of fuzzy rules. The first contribution of the paper is the introduction of a one-to-one mapping between a fuzzy rule-base and a model matrix feature subspace using the T-S inference mechanism. This link enables the numerical properties associated with a rule-based matrix subspace, the relationships amongst these matrix subspaces, and the correlation between the output vector and a rule-base matrix subspace to be investigated and extracted as rule-based knowledge to enhance model transparency. The matrix subspace spanned by a fuzzy rule is initially derived as the input regression matrix multiplied by a weighting matrix consisting of the corresponding fuzzy membership functions over the training data set. Model transparency is explored by deriving an equivalence between an A-optimality experimental design criterion of the weighting matrix and the average model output sensitivity to the fuzzy rule, so that rule-bases can be effectively measured by their identifiability via the A-optimality experimental design criterion. The A-optimality experimental design criterion of the weighting matrices of fuzzy rules is used to construct an initial model rule-base. An extended Gram-Schmidt algorithm is then developed to estimate the parameter vector for each rule. This new algorithm decomposes the model rule-bases via an orthogonal subspace decomposition approach, so as to enhance model transparency with the capability of interpreting the derived rule-base energy level. This new approach is computationally simpler than the conventional Gram-Schmidt algorithm for resolving high-dimensional regression problems, where it is computationally desirable to decompose complex models into a few submodels rather than a single model with a large number of input variables and the associated curse-of-dimensionality problem. Numerical examples are included to demonstrate the effectiveness of the proposed new algorithm.
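To make the rule-ranking step above concrete, the sketch below scores candidate fuzzy rules by an A-optimality-style measure computed on each rule's weighted regression matrix (the input regression matrix multiplied by the diagonal matrix of that rule's membership values over the training data). This is an illustrative reading of the criterion rather than the paper's exact formulation; the function names, array shapes and the small regularising term eps are assumptions.

```python
# Hedged sketch: ranking candidate fuzzy rules by an A-optimality-style score
# of their weighted regression matrices (names and shapes are illustrative).
import numpy as np

def a_optimality_score(Phi, mu_k, eps=1e-8):
    """Phi: (N, m) input regression matrix over the training set.
    mu_k: (N,) membership values of rule k at each training sample.
    Returns the trace of the inverse information matrix of the weighted
    regressors; smaller values suggest a better-identified rule."""
    W = np.diag(mu_k)                      # weighting matrix of rule k
    Phi_k = W @ Phi                        # matrix subspace spanned by rule k
    M = Phi_k.T @ Phi_k                    # information matrix
    return np.trace(np.linalg.inv(M + eps * np.eye(M.shape[0])))

def rank_rules(Phi, memberships):
    """memberships: (K, N) array, one row of membership values per rule.
    Returns rule indices ordered from most to least identifiable."""
    scores = [a_optimality_score(Phi, mu) for mu in memberships]
    return np.argsort(scores)              # smallest score first
```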
Abstract:
New construction algorithms for radial basis function (RBF) network modelling are introduced, based on the A-optimality and D-optimality experimental design criteria respectively. We utilize new cost functions, based on experimental design criteria, for model selection that simultaneously optimize model approximation and either parameter variance (A-optimality) or model robustness (D-optimality). The proposed approaches are based on the forward orthogonal least-squares (OLS) algorithm, such that the new A-optimality- and D-optimality-based cost functions are constructed within an orthogonalization process that gains computational advantages and hence maintains the inherent computational efficiency associated with the conventional forward OLS approach. The proposed approach enhances the very popular forward OLS-based RBF model construction method, since the resultant RBF models are constructed in such a way that system dynamics approximation capability, model adequacy and robustness are optimized simultaneously. The numerical examples provided show significant improvement based on the D-optimality design criterion, demonstrating that there is considerable room for improvement in modelling via the popular RBF neural network.
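As a rough illustration of how a D-optimality term can be folded into forward OLS selection, the sketch below greedily picks regressors by an error-reduction ratio augmented with log(w'w), the candidate's contribution to the log-determinant of the orthogonalised design. The combination weight beta and the stopping rule are assumptions; the paper's cost functions may differ in detail.

```python
# Hedged sketch of forward OLS selection with a D-optimality-style term
# added to the selection score (beta is an assumed trade-off weight).
import numpy as np

def forward_ols_doptimality(P, y, n_terms, beta=1e-3):
    """P: (N, M) candidate regressor matrix, y: (N,) target vector.
    Greedily selects n_terms columns; each step scores a candidate by its
    error-reduction ratio plus beta * log(w'w), where w is the candidate
    orthogonalised against the already-selected regressors."""
    M = P.shape[1]
    selected, W = [], []
    for _ in range(n_terms):
        best_j, best_w, best_score = None, None, -np.inf
        for j in range(M):
            if j in selected:
                continue
            w = P[:, j].astype(float).copy()
            for wq in W:                      # modified Gram-Schmidt step
                w -= (wq @ w) / (wq @ wq) * wq
            energy = w @ w
            if energy < 1e-12:                # lies in the chosen subspace
                continue
            err = (w @ y) ** 2 / (energy * (y @ y))   # error reduction ratio
            score = err + beta * np.log(energy)       # D-optimality term
            if score > best_score:
                best_j, best_w, best_score = j, w, score
        if best_j is None:
            break
        selected.append(best_j)
        W.append(best_w)
    return selected
```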
Abstract:
A new robust neurofuzzy model construction algorithm has been introduced for the modeling of a priori unknown dynamical systems from observed finite data sets in the form of a set of fuzzy rules. Based on a Takagi-Sugeno (T-S) inference mechanism, a one-to-one mapping between a fuzzy rule base and a model matrix feature subspace is established. This link enables rule-based knowledge to be extracted from the matrix subspace to enhance model transparency. In order to achieve maximal model robustness and sparsity, a new robust extended Gram-Schmidt (G-S) method has been introduced via two effective and complementary approaches: regularization and D-optimality experimental design. Model rule bases are decomposed into orthogonal subspaces, so as to enhance model transparency with the capability of interpreting the derived rule base energy level. A locally regularized orthogonal least squares algorithm, combined with a D-optimality criterion for subspace-based rule selection, has been extended for fuzzy rule regularization and subspace-based information extraction. By using a weighting for the D-optimality cost function, the entire model construction procedure becomes automatic. Numerical examples are included to demonstrate the effectiveness of the proposed new algorithm.
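The locally regularised estimate of the weights of the orthogonalised rule subspaces admits a simple closed form; a minimal sketch is given below, assuming the individual regularisation parameters are supplied externally (e.g. by an evidence-type update not shown here).

```python
# Minimal sketch: locally regularised estimation of the orthogonal weights,
# one ridge parameter per orthogonalised regressor (lambdas assumed given).
import numpy as np

def locally_regularised_weights(W, y, lambdas):
    """W: (N, k) matrix of orthogonalised regressors, y: (N,) target,
    lambdas: (k,) individual regularisation parameters.
    Returns g with g[i] = (w_i' y) / (w_i' w_i + lambda_i)."""
    W = np.asarray(W, dtype=float)
    num = W.T @ y                                   # w_i' y
    den = np.einsum('ij,ij->j', W, W) + np.asarray(lambdas)  # w_i'w_i + lambda_i
    return num / den
```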
Abstract:
In this correspondence, new robust nonlinear model construction algorithms for a large class of linear-in-the-parameters models are introduced to enhance model robustness via combined parameter regularization and new robust structural selection criteria. In parallel to parameter regularization, we use two classes of robust model selection criteria, based either on experimental design criteria that optimize model adequacy, or on the predicted residual sums of squares (PRESS) statistic that optimizes model generalization capability. Three robust identification algorithms are introduced: the A-optimality and D-optimality criteria, respectively, each combined with the regularized orthogonal least squares algorithm, and the PRESS statistic combined with the regularized orthogonal least squares algorithm. A common characteristic of these algorithms is that the inherent computational efficiency associated with the orthogonalization scheme in orthogonal least squares or regularized orthogonal least squares carries over to the new algorithms, so that they remain computationally efficient. Numerical examples are included to demonstrate the effectiveness of the algorithms.
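For reference, the PRESS statistic of a linear-in-the-parameters model can be computed without refitting the model once per left-out sample, using the standard hat-matrix shortcut; a minimal sketch (with an optional ridge term included as an assumption) is shown below.

```python
# Sketch: PRESS (leave-one-out) statistic via the hat-matrix shortcut.
import numpy as np

def press_statistic(Phi, y, lam=0.0):
    """Phi: (N, m) design matrix, y: (N,) target, lam: ridge parameter.
    Returns the predicted residual sum of squares."""
    N, m = Phi.shape
    A = Phi.T @ Phi + lam * np.eye(m)
    theta = np.linalg.solve(A, Phi.T @ y)           # (regularised) LS estimate
    resid = y - Phi @ theta                         # ordinary residuals
    h = np.einsum('ij,ji->i', Phi, np.linalg.solve(A, Phi.T))  # hat diagonal
    loo_resid = resid / (1.0 - h)                   # leave-one-out residuals
    return np.sum(loo_resid ** 2)
```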
Abstract:
The identification of non-linear systems using only observed finite datasets has become a mature research area over the last two decades. A class of linear-in-the-parameter models with universal approximation capabilities has been intensively studied and widely used due to the availability of many linear-learning algorithms and their inherent convergence conditions. This article presents a systematic overview of basic research on model selection approaches for linear-in-the-parameter models. One of the fundamental problems in non-linear system identification is to find the minimal model with the best model generalisation performance from observational data only. The important concepts in achieving good model generalisation used in various non-linear system-identification algorithms are first reviewed, including Bayesian parameter regularisation and model selection criteria based on cross validation and experimental design. A significant advance in machine learning has been the development of the support vector machine as a means of identifying kernel models based on the structural risk minimisation principle. Developments in convex optimisation-based model construction algorithms, including support vector regression algorithms, are outlined. Input selection algorithms and on-line system identification algorithms are also included in this review. Finally, some industrial applications of non-linear models are discussed.
Abstract:
The ability to resist or avoid natural enemy attack is a critically important insect life history trait, yet little is understood of how these traits may be affected by temperature. This study investigated how different genotypes of the pea aphid Acyrthosiphon pisum Harris, a pest of leguminous crops, varied in resistance to three different natural enemies (a fungal pathogen, two species of parasitoid wasp and a coccinellid beetle), and whether expression of resistance was influenced by temperature. Substantial clonal variation in resistance to the three natural enemies was found. Temperature influenced the number of aphids succumbing to the fungal pathogen Erynia neoaphidis Remaudière & Hennebert, with resistance increasing at higher temperatures (18 vs. 28 °C). A temperature difference of 5 °C (18 vs. 23 °C) did not affect the ability of A. pisum to resist attack by the parasitoids Aphidius ervi Haliday and A. eadyi Starý, González & Hall. Escape behaviour from foraging coccinellid beetles (Hippodamia convergens Guérin-Méneville) was not directly influenced by aphid clone or temperature (16 vs. 21 °C). However, there were significant interactions between clone and temperature (while most clones did not respond to temperature, one was less likely to escape at 16 °C), and between aphid clone and ladybird presence (some clones showed greater changes in escape behaviour in response to the presence of foraging coccinellids than others). Therefore, while larger temperature differences may alter interactions between Acyrthosiphon pisum and an entomopathogen, there is little evidence to suggest that smaller changes in temperature will alter interactions between the pea aphid and its natural enemies.
Abstract:
Urban areas have both positive and negative influences on wildlife. For terrestrial mammals, one of the principal problems is the risk associated with moving through the environment whilst foraging. In this study, we examined nocturnal patterns of movement of urban-dwelling hedgehogs (Erinaceus europaeus) in relation to (i) the risks posed by predators and motor vehicles and (ii) nightly weather patterns. Hedgehogs preferentially utilised the gardens of semi-detached and terraced houses. However, females, but not males, avoided the larger back gardens of detached houses, which contain more of the habitat features selected by badgers. This difference in the avoidance of predation risk is probably associated with sex differences in breeding behaviour. Differences in nightly movement patterns were consistent with strategies associated with mating behaviour and the accumulation of fat reserves for hibernation. Hedgehogs also exhibited differences in behaviour associated with the risks posed by humans; they avoided actively foraging near roads and road verges, but did not avoid crossing roads per se. They were, however, significantly more active after midnight, when there was a marked reduction in vehicle and foot traffic. In particular, responses to increased temperature, which is associated with increased abundance of invertebrate prey, were only observed after midnight. This variation in the timing of bouts of activity would reduce the risks associated with human activities. There were also profound differences in both area ranged and activity with chronological year, which warrant further investigation.
Abstract:
Studies on exposure of non-targets to anticoagulant rodenticides have largely focussed on predatory birds and mammals; insectivores have rarely been studied. We investigated the exposure of 120 European hedgehogs (Erinaceus europaeus) from throughout Britain to first- and second-generation anticoagulant rodenticides (FGARs and SGARs) using high-performance liquid chromatography coupled with fluorescence detection (HPLC) and liquid chromatography-mass spectrometry (LCMS). The proportion of hedgehogs with liver SGAR concentrations detected by HPLC was 3-13% per compound, 23% overall. LCMS identified a much higher prevalence for difenacoum and bromadiolone, mainly because of its greater ability to detect low-level contamination. The overall proportion of hedgehogs with LCMS-detected residues was 57.5% (SGARs alone) and 66.7% (FGARs and SGARs combined); 27 (22.5%) hedgehogs contained >1 rodenticide. Exposure of insectivores and predators to anticoagulant rodenticides appears to be similar. The greater sensitivity of LCMS suggests that exposure of non-targets is likely to have hitherto been under-estimated using HPLC techniques.
Abstract:
A sparse kernel density estimator is derived based on a zero-norm constraint, in which the zero-norm of the kernel weights is incorporated to enhance model sparsity. The classical Parzen window estimate is adopted as the desired response for density estimation, and an approximate function of the zero-norm is used to achieve mathematical tractability and algorithmic efficiency. Under the mild condition of a positive definite design matrix, the kernel weights of the proposed density estimator based on the zero-norm approximation can be obtained using the multiplicative nonnegative quadratic programming algorithm. Using the D-optimality-based selection algorithm as a preprocessing step to select a small significant subset of the design matrix, the proposed zero-norm-based approach offers an effective means of constructing very sparse kernel density estimates with excellent generalisation performance.
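The "Parzen window estimate as the desired response" step can be pictured as follows: evaluate a standard Gaussian Parzen window density at each training point and use those values as regression targets for the sparse kernel model. The sketch below shows only this set-up, with an illustrative bandwidth; the zero-norm approximation and the multiplicative nonnegative quadratic programming updates are not reproduced here.

```python
# Sketch of the set-up only: a Gaussian Parzen window estimate used as the
# desired response for sparse kernel density fitting (bandwidth illustrative).
import numpy as np

def gaussian_kernel(x, c, sigma):
    """Isotropic Gaussian kernel of width sigma centred at c, evaluated at x."""
    d = x - c
    norm = (2 * np.pi) ** (len(x) / 2) * sigma ** len(x)
    return np.exp(-0.5 * (d @ d) / sigma ** 2) / norm

def parzen_desired_response(X, sigma):
    """X: (N, d) training data. Returns the (N,) vector of Parzen window
    density values at the training points, used as the fitting target."""
    return np.array([np.mean([gaussian_kernel(x, c, sigma) for c in X])
                     for x in X])
```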
Abstract:
We propose a unified data modeling approach that is equally applicable to supervised regression and classification applications, as well as to unsupervised probability density function estimation. A particle swarm optimization (PSO) aided orthogonal forward regression (OFR) algorithm based on leave-one-out (LOO) criteria is developed to construct parsimonious radial basis function (RBF) networks with tunable nodes. Each stage of the construction process determines the center vector and diagonal covariance matrix of one RBF node by minimizing the LOO statistics. For regression applications, the LOO criterion is chosen to be the LOO mean square error, while the LOO misclassification rate is adopted in two-class classification applications. By adopting the Parzen window estimate as the desired response, the unsupervised density estimation problem is transformed into a constrained regression problem. This PSO-aided OFR algorithm for tunable-node RBF networks is capable of constructing very parsimonious RBF models that generalize well, and our analysis and experimental results demonstrate that the algorithm is computationally even simpler than the efficient regularization-assisted orthogonal least squares algorithm based on LOO criteria for selecting fixed-node RBF models. Another significant advantage of the proposed learning procedure is that it does not have learning hyperparameters that must be tuned using costly cross validation. The effectiveness of the proposed PSO-aided OFR construction procedure is illustrated using several examples taken from regression and classification, as well as density estimation applications.
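At each construction stage, the search over one node's centre vector and diagonal covariance can be carried out by a generic particle swarm optimiser whose fitness is the chosen LOO criterion. The sketch below is such a generic PSO minimiser; the swarm size, inertia and acceleration constants are assumptions rather than the paper's settings, and the fitness callable would wrap the LOO statistic of the partially constructed RBF model with the candidate node decoded from the particle position.

```python
# Generic particle swarm minimiser (illustrative hyperparameters), intended
# to be driven by a leave-one-out fitness supplied by the caller.
import numpy as np

def pso_minimise(fitness, dim, n_particles=20, iters=50, bounds=(-1.0, 1.0),
                 w=0.7, c1=1.5, c2=1.5, rng=np.random.default_rng(0)):
    lo, hi = bounds
    x = rng.uniform(lo, hi, (n_particles, dim))       # particle positions
    v = np.zeros_like(x)                              # particle velocities
    pbest = x.copy()                                  # personal best positions
    pbest_f = np.array([fitness(p) for p in x])       # personal best fitness
    g = pbest[np.argmin(pbest_f)].copy()              # global best position
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        f = np.array([fitness(p) for p in x])
        improved = f < pbest_f
        pbest[improved], pbest_f[improved] = x[improved], f[improved]
        g = pbest[np.argmin(pbest_f)].copy()
    return g, pbest_f.min()
```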