244 results for Harris, Marvin
Abstract:
A generalized or tunable-kernel model is proposed for probability density function estimation based on an orthogonal forward regression procedure. Each stage of the density estimation process determines a tunable kernel, namely its center vector and diagonal covariance matrix, by minimizing a leave-one-out test criterion. The kernel mixing weights of the constructed sparse density estimate are finally updated using the multiplicative nonnegative quadratic programming algorithm to satisfy the nonnegativity and unit-sum constraints, and this weight-updating process additionally has the desired ability to further reduce the model size. The proposed tunable-kernel model has advantages, in terms of model generalization capability and model sparsity, over the standard fixed-kernel model that restricts kernel centers to the training data points and employs a single common kernel variance for every kernel. On the other hand, it does not optimize all the model parameters together and thus avoids the high-dimensional, ill-conditioned nonlinear optimization associated with the conventional finite mixture model. Several examples are included to demonstrate the ability of the proposed tunable-kernel model to construct very compact and accurate density estimates.
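A minimal sketch of the multiplicative nonnegative quadratic programming weight update described above, assuming the standard form that minimises ½wᵀBw − vᵀw subject to w ≥ 0 and Σᵢwᵢ = 1, where B and v derive from the density-fit objective; all names here are illustrative rather than the paper's:

```python
import numpy as np

def mnqp_weights(B, v, n_iter=200, prune_tol=1e-6):
    """Multiplicative nonnegative QP: min 0.5*w'Bw - v'w  s.t. w >= 0, sum(w) = 1.
    Assumes B has positive entries, as for a Gaussian-kernel Gram matrix."""
    m = len(v)
    w = np.full(m, 1.0 / m)               # feasible uniform starting point
    for _ in range(n_iter):
        c = w / (B @ w)                   # per-kernel multiplicative factor
        h = (1.0 - c @ v) / c.sum()       # Lagrange multiplier enforcing sum(w) = 1
        w = np.maximum(c * (v + h), 0.0)  # multiplicative update; clip round-off
        w /= w.sum()
    w[w < prune_tol] = 0.0                # weights driven to zero prune their kernels
    return w / w.sum()
```

Weights driven to zero by the update remove their kernels from the mixture, which is the model-size reduction the abstract refers to.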
Abstract:
This paper derives an efficient algorithm for constructing sparse kernel density (SKD) estimates. The algorithm first selects a very small subset of significant kernels using an orthogonal forward regression (OFR) procedure based on the D-optimality experimental design criterion. The weights of the resulting sparse kernel model are then calculated using a modified multiplicative nonnegative quadratic programming algorithm. Unlike most SKD estimators, the proposed D-optimality regression approach is an unsupervised construction algorithm and does not require an empirical desired response for the kernel selection task. The strength of the D-optimality OFR lies in the fact that the algorithm automatically selects a small subset of the most significant kernels associated with the largest eigenvalues of the kernel design matrix, which accounts for most of the energy of the kernel training data and also guarantees the most accurate kernel weight estimate. The proposed method is also computationally attractive in comparison with many existing SKD construction algorithms. Extensive numerical investigation demonstrates the ability of this regression-based approach to efficiently construct a very sparse kernel density estimate with excellent test accuracy, and our results show that the proposed method compares favourably with other existing sparse methods, in terms of test accuracy, model sparsity and complexity, for constructing kernel density estimates.
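A sketch of one greedy step of this kind of selection, under the assumption that Phi is the kernel design matrix whose columns are the candidate kernels evaluated at the training points. Because the orthogonal decomposition leaves the determinant unchanged, choosing the candidate with the largest orthogonalised energy gives the largest multiplicative increase in the determinant of the selected design matrix; names are illustrative:

```python
import numpy as np

def d_optimal_ofr(Phi, n_select):
    """Greedy D-optimality OFR: repeatedly pick the candidate column with the
    largest orthogonalised energy, then deflate the remaining candidates."""
    R = np.array(Phi, dtype=float)        # residual (orthogonalised) candidates
    selected = []
    for _ in range(n_select):
        energy = (R ** 2).sum(axis=0)     # squared norm of each residual column
        energy[selected] = -np.inf        # never re-select a chosen column
        k = int(np.argmax(energy))
        selected.append(k)
        q = R[:, k] / np.sqrt(energy[k])  # unit vector of the chosen column
        R -= np.outer(q, q @ R)           # modified Gram-Schmidt deflation
    return selected
```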
Abstract:
We develop a particle swarm optimisation (PSO) aided orthogonal forward regression (OFR) approach for constructing radial basis function (RBF) classifiers with tunable nodes. At each stage of the OFR construction process, the centre vector and diagonal covariance matrix of one RBF node are determined efficiently by minimising the leave-one-out (LOO) misclassification rate (MR) using a PSO algorithm. Compared with the state-of-the-art regularisation-assisted orthogonal least squares algorithm based on the LOO MR for selecting fixed-node RBF classifiers, the proposed PSO aided OFR algorithm for constructing tunable-node RBF classifiers offers significant advantages in terms of better generalisation performance and smaller model size, as well as lower computational complexity in the classifier construction process. Moreover, the proposed algorithm does not have any hyperparameter that requires costly tuning based on cross validation.
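A generic particle swarm routine of the kind that could drive each OFR stage; here `cost` would evaluate the LOO misclassification rate of the classifier with the candidate node's centre/width vector appended. The inertia and acceleration constants, bounds, and names are illustrative assumptions:

```python
import numpy as np

def pso_minimise(cost, dim, n_particles=20, n_iter=50, lb=-1.0, ub=1.0, seed=0):
    """Minimise cost(x) over x in [lb, ub]^dim with a basic particle swarm."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(lb, ub, (n_particles, dim))   # particle positions
    v = np.zeros_like(x)                          # particle velocities
    pbest = x.copy()                              # personal best positions
    pbest_f = np.array([cost(p) for p in x])      # personal best costs
    gbest = pbest[pbest_f.argmin()].copy()        # global best position
    for _ in range(n_iter):
        r1, r2 = rng.random((2, n_particles, dim))
        v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (gbest - x)
        x = np.clip(x + v, lb, ub)
        f = np.array([cost(p) for p in x])
        improved = f < pbest_f
        pbest[improved], pbest_f[improved] = x[improved], f[improved]
        gbest = pbest[pbest_f.argmin()].copy()
    return gbest, pbest_f.min()
```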
Abstract:
Analysis and modeling of X-ray and neutron Bragg and total diffraction data show that the compounds referred to in the literature as “Pd(CN)2” and “Pt(CN)2” are nanocrystalline materials consisting of small sheets of vertex-sharing square-planar M(CN)4 units, layered in a disordered manner with an intersheet separation of 3.44 Å at 300 K. The small size of the crystallites means that the sheets’ edges form a significant fraction of each material. The Pd(CN)2 nanocrystallites studied using total neutron diffraction are terminated by water and the Pt(CN)2 nanocrystallites by ammonia, in place of half of the terminal cyanide groups, thus maintaining charge neutrality. The neutron samples contain sheets of approximate dimensions 30 Å × 30 Å. For sheets of the size we describe, our structural models predict compositions of Pd(CN)2·xH2O and Pt(CN)2·yNH3 (x = y = 0.29). These values are in good agreement with those obtained from total neutron diffraction and thermal analysis, and are also supported by infrared and Raman spectroscopy measurements. It is also possible to prepare related compounds Pd(CN)2·pNH3 and Pt(CN)2·qH2O, in which the terminating groups are exchanged. Additional samples showing sheet sizes in the range 10 Å × 10 Å (y = 0.67) to 80 Å × 80 Å (p = q = 0.12), as determined by X-ray diffraction, have been prepared. The related mixed-metal phase, Pd1/2Pt1/2(CN)2·qH2O (q = 0.50), is also nanocrystalline (sheet size 15 Å × 15 Å). In all cases, the interiors of the sheets are isostructural with those found in Ni(CN)2. Removal of the final traces of water or ammonia by heating results in decomposition of the compounds to Pd and Pt metal, or, in the case of the mixed-metal cyanide, the alloy Pd1/2Pt1/2, making it impossible to prepare the simple cyanides Pd(CN)2, Pt(CN)2 or Pd1/2Pt1/2(CN)2 by this method.
Abstract:
1. Declines in the area and quality of species-rich mesotrophic and calcareous grasslands have occurred all across Europe. While the European Union has promoted schemes to restore these grasslands, the emphasis for management has remained largely focused on plants. Here we focus on restoration of the phytophagous beetles of these grasslands. Although local management, particularly that which promotes the establishment of host plants, is key to restoration success, dispersal limitation is also likely to be an important limiting factor during the restoration of phytophagous beetle assemblages. 2. Using a 3-year multi-site experiment, we investigated how restoration success of phytophagous beetles was affected by hay-spreading management (intended to introduce target plant species), success in restoration of the plant communities, and the landscape context within which restoration was attempted. 3. Restoration success of the plants was greatest where green hay spreading had been used to introduce seeds into restoration sites. Beetle restoration success increased over time, although hay-spreading had no direct effect. However, restoration success of the beetles was positively correlated with restoration success of the plants. 4. Overall restoration success of the phytophagous beetles was positively correlated with the proportion of species-rich grassland in the landscape, as was the restoration success of the polyphagous beetles. Restoration success for beetles capable of flight and for those showing oligophagous host-plant specialism was also positively correlated with connectivity to species-rich grasslands. There was no indication that beetles incapable of flight showed greater dependence on landscape-scale factors than flying species. 5. Synthesis and applications. Increasing the similarity of the plant community at restoration sites to target species-rich grasslands will promote restoration success for the phytophagous beetles. However, landscape context is also important, with restoration being approximately twice as successful in landscapes containing high as opposed to low proportions of species-rich grassland. By targeting grassland restoration within landscapes containing high proportions of species-rich grassland, dispersal limitation problems associated with restoration for invertebrate assemblages are more likely to be overcome.
Abstract:
This study focuses on the restoration of chalk grasslands over a 6-year period and tests the efficacy of two management practices, hay spreading and soil disturbance, in promoting this process for phytophagous beetles. Restoration success for the beetles, measured as similarity to target species-rich chalk grassland, was not found to be influenced by either management practice. In contrast, restoration success for the plants did increase in response to hay-spreading management. Although the presence of suitable host plants was considered to dictate the earliest point at which phytophagous beetles could successfully colonize, few beetle species colonized as soon as their host plants became established. Morphological characteristics and feeding habits of 27 phytophagous beetle species were therefore tested to identify factors that limited their colonization and persistence. The lag time between host plant establishment and colonization was greatest for flightless beetles. Beetles with foliage-feeding larvae both colonized at slower rates than seed-, stem-, or root-feeding species and persisted within the swards for shorter periods. Although the use of hay spreading may benefit plant communities during chalk grassland restoration, it did not directly benefit the phytophagous beetles. Without techniques for overcoming colonization limitation for invertebrate taxa, short-term success of restoration may be limited to the plants only.
Abstract:
Three main changes to current risk analysis processes are proposed to improve their transparency, openness, and accountability. First, the addition of a formal framing stage would allow interested parties, experts, and officials to work together as needed to gain an initial shared understanding of the issue, the objectives of regulatory action, and alternative risk management measures. Second, the scope of the risk assessment would be expanded to include the assessment of health and environmental benefits as well as risks, and the explicit consideration of the economic and social impacts of risk management action and their distribution. Moreover, approaches were developed for deriving improved information from genomic, proteomic and metabolomic profiling methods and for probabilistic modelling of health impacts for risk assessment purposes. Third, in an added evaluation stage, interested parties, experts, and officials may compare and weigh the risks, costs, and benefits and their distribution. As part of a set of recommendations on risk communication, we propose that reports on each stage be made public.
Abstract:
A very efficient learning algorithm for model subset selection is introduced based on a new composite cost function that simultaneously optimizes the model approximation ability and the model robustness and adequacy. The model parameters are estimated via forward orthogonal least squares, but the model subset selection cost function includes a D-optimality design criterion that maximizes the determinant of the design matrix of the subset to ensure the robustness, adequacy, and parsimony of the final model. Based on the forward orthogonal least squares (OLS) algorithm, the new D-optimality-based cost function is constructed on the orthogonalization process, so that the approach retains the inherent computational efficiency of the conventional forward OLS approach. Illustrative examples are included to demonstrate the effectiveness of the new approach.
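The abstract does not state the composite cost explicitly; one form consistent with it follows from the orthogonal decomposition Φ = WA, where A is unit upper triangular, so the determinant of the selected design matrix factorises over the orthogonalised columns:

```latex
\det(\Phi^{\top}\Phi) = \det(W^{\top}W)
  = \prod_{k} \mathbf{w}_k^{\top}\mathbf{w}_k ,
\qquad
J_k = \mathrm{ERR}_k + \beta \,\log\bigl(\mathbf{w}_k^{\top}\mathbf{w}_k\bigr),
```

Maximising a combined score such as J_k at each forward step trades the error reduction ratio ERR_k against the D-optimality increment; the weighting β is an assumption here, not a value given in the abstract.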
Abstract:
This paper introduces a new fast, effective and practical model structure construction algorithm for a mixture of experts network system utilising only process data. The algorithm is based on a novel forward constrained regression procedure. Given a full set of experts as potential model bases, the structure construction algorithm, built on the forward constrained regression procedure, selects the most significant model base one by one so as to minimise the overall system approximation error at each iteration, while the gate parameters in the mixture of experts network system are adjusted accordingly so as to satisfy the convex constraints required in the derivation of the forward constrained regression procedure. The procedure continues until a proper system model is constructed that utilises some or all of the experts. A pruning algorithm for the resulting mixture of experts network system is also derived, yielding an overall parsimonious construction algorithm. Numerical examples are provided to demonstrate the effectiveness of the new algorithms. The mixture of experts network framework can be applied to a wide variety of applications ranging from multiple-model controller synthesis to multi-sensor data fusion.
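A minimal sketch of a forward constrained regression step under the convex gate constraint, assuming Y holds each candidate expert's output as a column and d is the desired output; the optimal convex step length has a closed form, so the gate weights stay nonnegative and sum to one throughout. Names are illustrative:

```python
import numpy as np

def forward_constrained_regression(Y, d, n_steps):
    """Greedily build a convex combination of expert outputs (columns of Y)
    approximating target d, keeping the gate weights on the simplex."""
    N, M = Y.shape
    weights = np.zeros(M)
    k0 = int(np.argmin(((Y - d[:, None]) ** 2).sum(axis=0)))
    weights[k0] = 1.0                    # start from the single best expert
    f = Y[:, k0].copy()                  # current system output
    for _ in range(n_steps - 1):
        best_err, best_j, best_a = np.inf, None, None
        for j in range(M):
            g = Y[:, j] - f
            denom = g @ g
            if denom == 0.0:
                continue
            a = float(np.clip((d - f) @ g / denom, 0.0, 1.0))  # optimal convex step
            err = ((d - f - a * g) ** 2).sum()
            if err < best_err:
                best_err, best_j, best_a = err, j, a
        if best_j is None:
            break
        weights *= 1.0 - best_a          # shrink existing gate weights
        weights[best_j] += best_a        # grow the newly selected expert
        f += best_a * (Y[:, best_j] - f)
    return weights
```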
Abstract:
A connection between a fuzzy neural network model and the mixture of experts network (MEN) modelling approach is established. Based on this linkage, two new neuro-fuzzy MEN construction algorithms are proposed to overcome the curse of dimensionality that is inherent in the majority of associative memory networks and other rule-based systems. The first construction algorithm employs a function selection manager module in an MEN system. The second construction algorithm is based on a new parallel learning algorithm in which each model rule is trained independently, and for which the parameter convergence property of the new learning method is established. As with the first approach, an expert selection criterion is utilised in this algorithm. These two construction methods are equally effective in overcoming the curse of dimensionality by reducing the dimensionality of the regression vector, but the latter has the additional computational advantage of parallel processing. The proposed algorithms are analysed for effectiveness, followed by numerical examples that illustrate their efficacy for some difficult data-based modelling problems.
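To make the fuzzy/MEN linkage concrete, a toy prediction routine in which normalised Gaussian rule activations play the role of the MEN gate and each rule's local linear model acts as an expert; the membership choice and every name here are illustrative, not the paper's:

```python
import numpy as np

def fuzzy_men_predict(x, centres, widths, expert_params):
    """centres: (R, d) rule centres; widths: (R,) spreads;
    expert_params: (R, d+1) local linear models, one per rule/expert."""
    act = np.exp(-((x - centres) ** 2).sum(axis=1) / widths)  # rule activations
    gate = act / act.sum()                       # normalised memberships = MEN gate
    experts = expert_params @ np.append(x, 1.0)  # each expert's local output
    return gate @ experts                        # gated mixture of expert outputs
```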
Abstract:
A common problem in many data-based modelling algorithms, such as associative memory networks, is the curse of dimensionality. In this paper, a new two-stage neurofuzzy system design and construction algorithm (NeuDeC) for nonlinear dynamical processes is introduced to tackle this problem effectively. A new, simple preprocessing method is first derived and applied to reduce the rule base, followed by a fine model detection process on the reduced rule set using forward orthogonal least squares model structure detection. In both stages, new A-optimality experimental-design-based criteria are used. In the preprocessing stage, a lower bound of the A-optimality design criterion is derived and applied as a subset selection metric; in the later stage, the A-optimality design criterion is incorporated into a new composite cost function that minimises the model prediction error and penalises the model parameter variance. The use of NeuDeC leads to unbiased model parameters with low parameter variance, with the additional benefit of a parsimonious model structure. Numerical examples are included to demonstrate the effectiveness of this new modelling approach for high-dimensional inputs.
Abstract:
A very efficient learning algorithm for model subset selection is introduced based on a new composite cost function that simultaneously optimizes the model approximation ability and model adequacy. The model parameters are estimated via forward orthogonal least squares, but the subset selection cost function includes an A-optimality design criterion that minimizes the variance of the parameter estimates, ensuring the adequacy and parsimony of the final model. An illustrative example is included to demonstrate the effectiveness of the new approach.
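A sketch of the kind of composite cost implied, with the A-optimality term written directly as the total variance of the parameter estimates; the trade-off weight lam and all names are assumptions for illustration:

```python
import numpy as np

def composite_cost(Phi, y, lam=1e-2):
    """Squared residual of the least squares fit plus lam * trace((Phi'Phi)^-1),
    the A-optimality measure of total parameter-estimate variance."""
    theta, *_ = np.linalg.lstsq(Phi, y, rcond=None)
    e = y - Phi @ theta
    return e @ e + lam * np.trace(np.linalg.inv(Phi.T @ Phi))
```

Comparing candidate subsets by such a cost trades approximation error against parameter variance, which is the balance the abstract describes.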
Abstract:
An input variable selection procedure is introduced for the identification and construction of multi-input multi-output (MIMO) neurofuzzy operating-point-dependent models. The algorithm extends a forward modified Gram-Schmidt orthogonal least squares procedure for linear model structures to nonlinear system modeling by incorporating piecewise locally linear model fitting. The proposed input node selection procedure effectively tackles the curse of dimensionality associated with lattice-based modeling algorithms such as radial basis function neurofuzzy networks, enabling the resulting neurofuzzy operating-point-dependent model to be widely applied in control and estimation. Numerical examples are given to demonstrate the effectiveness of the proposed construction algorithm.
Abstract:
Neurofuzzy modelling systems combine fuzzy logic with quantitative artificial neural networks via fuzzification, typically using fuzzy membership functions based on B-splines, together with algebraic operators for inference. The paper introduces a neurofuzzy model construction algorithm using Bezier-Bernstein polynomial functions as basis functions. The new network maintains most of the properties of the B-spline expansion based neurofuzzy system, such as the non-negativity of the basis functions and unity of support, but with the additional advantages of structural parsimony and Delaunay input space partitioning, avoiding the inherent computational problems of lattice networks. This new modelling network is based on the idea that an input vector can be mapped into barycentric co-ordinates with respect to a set of predetermined knots forming the vertices of a polygon (a set of tiled Delaunay triangles) over the input space. The network output is expressed as the Bezier-Bernstein polynomial function of the barycentric co-ordinates of the input vector. An inverse de Casteljau procedure using backpropagation is developed to obtain the input vector's barycentric co-ordinates, which form the basis functions. Extension of the Bezier-Bernstein neurofuzzy algorithm to n-dimensional inputs is discussed, followed by numerical examples that demonstrate the effectiveness of this new data-based modelling approach.
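A minimal sketch of the two geometric ingredients for a single Delaunay triangle: mapping an input to barycentric co-ordinates and evaluating the degree-n bivariate Bernstein basis at them. The inverse de Casteljau/backpropagation step is omitted and all names are illustrative:

```python
import numpy as np
from math import comb

def barycentric(p, tri):
    """Barycentric co-ordinates of 2-D point p w.r.t. triangle vertices tri (3x2)."""
    T = np.column_stack((tri[1] - tri[0], tri[2] - tri[0]))
    u, v = np.linalg.solve(T, p - tri[0])
    return np.array([1.0 - u - v, u, v])   # non-negative iff p lies inside tri

def bernstein_basis(lam, n):
    """Degree-n bivariate Bernstein basis values at barycentric co-ordinates lam;
    they are non-negative and sum to one, like fuzzy memberships."""
    i, j, k = np.meshgrid(*[range(n + 1)] * 3, indexing="ij")
    m = (i + j + k) == n
    coeff = np.array([comb(n, a) * comb(n - a, b)
                      for a, b in zip(i[m], j[m])])  # multinomial coefficients
    return coeff * lam[0] ** i[m] * lam[1] ** j[m] * lam[2] ** k[m]
```

Summing these basis values against learned control coefficients gives the network output over that triangle.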
Abstract:
This paper introduces a new neurofuzzy model construction algorithm for nonlinear dynamic systems whose basis functions are Bezier-Bernstein polynomial functions. The algorithm is generalized in that it copes with n-dimensional inputs by utilising an additive decomposition construction to overcome the curse of dimensionality associated with high n. This construction algorithm also introduces univariate Bezier-Bernstein polynomial functions for the completeness of the generalized procedure. Like B-spline expansion based neurofuzzy systems, Bezier-Bernstein polynomial function based neurofuzzy networks hold desirable properties such as non-negativity of the basis functions, unity of support, and interpretability of the basis functions as fuzzy membership functions, with the additional advantages of structural parsimony and Delaunay input space partitioning, essentially overcoming the curse of dimensionality associated with conventional fuzzy and RBF networks. This new modeling network is based on the additive decomposition approach, together with two separate basis function formation approaches for the univariate and bivariate Bezier-Bernstein polynomial functions used in model construction. The overall network weights are then learnt using conventional least squares methods. Numerical examples are included to demonstrate the effectiveness of this new data-based modeling approach.
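One standard form of the additive decomposition referred to above, hedged as an illustration consistent with the abstract's use of univariate and bivariate basis functions:

```latex
f(\mathbf{x}) \;\approx\; f_0 \;+\; \sum_{i=1}^{n} f_i(x_i)
  \;+\; \sum_{i<j} f_{ij}(x_i, x_j),
```

with each f_i built from univariate and each f_ij from bivariate Bezier-Bernstein basis functions, so that only one- and two-dimensional subnetworks need be constructed regardless of n.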