997 results for Pruning method
Abstract:
Historically, grapevine (Vitis vinifera L.) leaf characterisation has been a driving force in the identification of cultivars. In this study, ampelometric (foliometric) analysis was performed on leaf samples collected from hand-pruned, mechanically pruned and minimally pruned ‘Sauvignon blanc’ and ‘Syrah’ vines to estimate the impact of within-vineyard variability and of a change in bud load on the stability of leaf properties. The results showed that within-vineyard variability of ampelometric characteristics was high within a cultivar, irrespective of bud load. In terms of the O.I.V. coding system, zero to four class differences were observed between the minimum and maximum values of each characteristic. The degree of variability of each characteristic differed between the three levels of bud load and between the two cultivars. With respect to bud load, the number of shoots per vine had a significant effect on the characteristics of the leaf laminae. Single leaf area and the lengths of the veins changed significantly for both cultivars, irrespective of treatment, while the angle between veins proved to be a stable characteristic. A large number of biometric data can be recorded on a single leaf; the data measured on several leaves, however, are not necessarily unique to a specific cultivar. The leaf characteristics analysed in this study can be divided into two groups according to their response to a change in bud load, i.e. stable (angles between the veins, depths of sinuses) and variable (lengths of the veins, length of the petiole, single leaf area). The variable characteristics are not recommended for cultivar identification unless the pruning method/bud load is known.
Abstract:
This paper deals with the design of optimal multiple gravity assist trajectories with deep space manoeuvres. A pruning method which considers the sequential nature of the problem is presented. The method locates feasible vectors using local optimization and applies a clustering algorithm to find reduced bounding boxes which can be used in a subsequent optimization step. Since multiple local minima remain within the pruned search space, the use of a global optimization method, such as Differential Evolution, is suggested for finding solutions which are likely to be close to the global optimum. Two case studies are presented.
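The pruning pipeline described in this abstract (local optimisation of sampled vectors, clustering of the feasible optima into reduced bounding boxes, then a global optimiser such as Differential Evolution inside the pruned space) can be sketched generically. The code below is an illustrative reconstruction, not the authors' implementation: `trajectory_cost`, the bounds, the box margin and the DBSCAN settings are placeholder assumptions.

```python
# Illustrative sketch of the pruning pipeline described above (not the authors' implementation):
# local optimisation -> clustering of feasible optima -> reduced bounding boxes -> Differential Evolution.
import numpy as np
from scipy.optimize import minimize, differential_evolution
from sklearn.cluster import DBSCAN

def trajectory_cost(x):
    # Placeholder objective standing in for a multiple-gravity-assist cost with deep space manoeuvres.
    return np.sum((x - 0.3) ** 2) + 0.1 * np.sum(np.sin(10 * x))

bounds = np.array([[0.0, 1.0], [0.0, 1.0]])      # placeholder decision-vector bounds
rng = np.random.default_rng(0)

# 1) Locate locally optimal vectors from random starting points (all assumed feasible here).
starts = rng.uniform(bounds[:, 0], bounds[:, 1], size=(300, len(bounds)))
optima = np.array([minimize(trajectory_cost, s, bounds=bounds).x for s in starts])

# 2) Cluster the local optima and build a reduced bounding box (with a small margin) per cluster.
labels = DBSCAN(eps=0.1, min_samples=5).fit_predict(optima)
boxes = []
for k in set(labels) - {-1}:
    pts = optima[labels == k]
    lo = np.clip(pts.min(axis=0) - 0.02, bounds[:, 0], bounds[:, 1])
    hi = np.clip(pts.max(axis=0) + 0.02, bounds[:, 0], bounds[:, 1])
    boxes.append(list(zip(lo, hi)))

# 3) Run a global optimiser (Differential Evolution) inside each pruned box, keep the best result.
results = [differential_evolution(trajectory_cost, box, seed=0) for box in boxes]
best = min(results, key=lambda r: r.fun)
print("best solution:", best.x, "cost:", best.fun)
```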
Abstract:
The Prism family of algorithms induces modular classification rules which, in contrast to decision tree induction algorithms, do not necessarily fit together into a decision tree structure. Classifiers induced by Prism algorithms achieve accuracy comparable to that of decision trees and in some cases even outperform decision trees. Both kinds of algorithms tend to overfit on large and noisy datasets, and this has led to the development of pruning methods. Pruning methods use various metrics to truncate decision trees or to eliminate whole rules or single rule terms from a Prism rule set. For decision trees, many pre-pruning and post-pruning methods exist; for Prism algorithms, however, only one pre-pruning method has been developed, J-pruning. Recent work with Prism algorithms examined J-pruning in the context of very large datasets and found that the current method does not use its full potential. This paper revisits the J-pruning method for the Prism family of algorithms, develops a new pruning method, Jmax-pruning, discusses it in theoretical terms and evaluates it empirically.
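J-pruning and Jmax-pruning are based on the J-measure, an information-theoretic rule-quality metric (see the later abstract on Jmid-pruning in this listing). The sketch below computes the J-measure as it is commonly defined in the literature; it is purely an illustration of the metric, not of the Prism algorithms, and the example probabilities are invented.

```python
import math

def j_measure(p_class, p_class_given_cond, p_cond):
    """Information content (in bits) of the rule 'IF condition THEN class',
    as commonly defined in the literature: J = p(cond) * j(class; cond)."""
    def term(p, q):
        # Contribution p*log2(p/q), with the usual 0*log(0) = 0 convention.
        return 0.0 if p == 0 else p * math.log2(p / q)
    cross_entropy = (term(p_class_given_cond, p_class)
                     + term(1 - p_class_given_cond, 1 - p_class))
    return p_cond * cross_entropy

# Example: the rule's condition covers 20% of the data, the class prior is 30%,
# and within the covered examples the class frequency is 90%.
print(round(j_measure(p_class=0.3, p_class_given_cond=0.9, p_cond=0.2), 4))
```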
Abstract:
This article deals with classification problems involving unequal probabilities in each class and discusses metrics for systems that use multilayer perceptron neural networks (MLP) for the task of classifying new patterns. In addition, we propose three new pruning methods, which were compared to seven other methods existing in the literature for MLP networks. All pruning algorithms presented in this paper have been modified by the authors to prune neurons, in order to produce MLP networks that remain fully connected but have a small intermediate layer. Experiments were carried out involving the E. coli unbalanced classification problem and ten pruning methods. The proposed methods obtained good results, in fact better results than other pruning methods previously defined in the MLP neural network area. (C) 2014 Elsevier Ltd. All rights reserved.
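The abstract does not detail the individual methods, so the following is only a minimal sketch of the general idea of pruning whole neurons from the hidden layer of a fully connected MLP, here with an assumed magnitude criterion on each neuron's outgoing weights; it is not one of the ten methods evaluated in the paper, and the weights are random stand-ins.

```python
import numpy as np

rng = np.random.default_rng(1)
n_in, n_hidden, n_out = 8, 20, 2           # toy fully connected MLP dimensions
W1, b1 = rng.normal(size=(n_hidden, n_in)), np.zeros(n_hidden)
W2, b2 = rng.normal(size=(n_out, n_hidden)), np.zeros(n_out)

def prune_hidden_neurons(W1, b1, W2, keep_ratio=0.5):
    """Remove whole hidden neurons by a simple saliency: the L2 norm of each
    neuron's outgoing weights. Returns a smaller, still fully connected MLP."""
    saliency = np.linalg.norm(W2, axis=0)            # one score per hidden neuron
    n_keep = max(1, int(keep_ratio * W1.shape[0]))
    keep = np.argsort(saliency)[-n_keep:]            # indices of neurons to retain
    return W1[keep], b1[keep], W2[:, keep]

W1p, b1p, W2p = prune_hidden_neurons(W1, b1, W2, keep_ratio=0.25)
print("hidden layer size:", W1.shape[0], "->", W1p.shape[0])
```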
Abstract:
The introduction of dwarfed rootstocks in apple crops has led to a new concept of intensive planting systems, with the aim of producing early high yields and a return on the initial high investment. Although yield is an important aspect for the grower, the consumer has become demanding with regard to fruit quality and is generally attracted by appearance. To fulfil the consumer’s expectations, the grower may need to choose a proper training system along with an ideal pruning technique, which together ensure good light distribution in different parts of the canopy and a marketable fruit quality in terms of size and skin colour. Even when these aspects are addressed, fruits within the canopy might not all reach the proper ripening stage, because they are often heterogeneous. To describe the variability present in a tree, a software tool (PlantToon®) was used to recreate the tree architecture in 3D in the two training systems. The ripening stage of each fruit was determined using a non-destructive device (DA-Meter), thus allowing the fruit ripening variability to be estimated. This study deals with some of the main parameters that can influence fruit quality and ripening stage within the canopy, and with orchard management techniques that can improve fruit ripening homogeneity. Significant differences in fruit quality were found within the canopies due to fruit position, flowering time and bud wood age. The Bi-axis appeared to be suitable for high-density planting, even though its fruit quality traits were often similar to those obtained with a Slender Spindle, suggesting similar light availability to the fruit within the canopies. Crop load was confirmed to be an important factor influencing fruit quality, as was the interesting innovative pruning method “Click”, in intensive planting systems.
Abstract:
There are several papers on pruning methods in the artificial neural networks area. However, with rare exceptions, none of them presents an appropriate statistical evaluation of such methods. In this article, we statistically verify the ability of some methods to reduce the number of neurons in the hidden layer of a multilayer perceptron neural network (MLP) while maintaining the same level of classification error as the initial network. Seven pruning methods are evaluated. The experimental investigation was carried out on five sets of generated data and two sets of real data. Three variables were monitored in the study: the apparent classification error rate on the test set (REA); the number of hidden neurons obtained after application of the pruning method; and the number of training/retraining epochs, used to evaluate the computational effort. The non-parametric Friedman test was used for the statistical analysis.
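Friedman's test, used here for the statistical comparison, ranks the competing pruning methods within each dataset and tests whether their mean ranks differ. A minimal sketch with placeholder error rates (the numbers are invented, not results from the paper):

```python
import numpy as np
from scipy.stats import friedmanchisquare

# Rows: datasets, columns: pruning methods. Values are placeholder test-error rates.
errors = np.array([
    [0.12, 0.15, 0.11],
    [0.20, 0.22, 0.19],
    [0.08, 0.10, 0.07],
    [0.30, 0.33, 0.29],
    [0.18, 0.21, 0.17],
    [0.25, 0.27, 0.24],
    [0.14, 0.16, 0.13],
])

stat, p_value = friedmanchisquare(*errors.T)   # one sample per method across datasets
print(f"Friedman chi-square = {stat:.3f}, p = {p_value:.4f}")
```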
Abstract:
This report addresses speculative parallelism (the assignment of spare processing resources to tasks which are not known to be strictly required for the successful completion of a computation) at the user and application level. At this level, the execution of a program is seen as a (dynamic) tree, or more generally a graph. A solution for a problem is a traversal of this graph from the initial state to a node known to be the answer. Speculative parallelism then represents the assignment of resources to multiple branches of this graph even if they are not positively known to be on the path to a solution. In highly non-deterministic programs the branching factor can be very high and a naive assignment will very soon use up all the resources. This report presents work assignment strategies other than the usual depth-first and breadth-first. Instead, best-first strategies are used. Since their definition is application-dependent, the application language contains primitives that allow the user (or application programmer) to a) indicate when intelligent OR-parallelism should be used; b) provide the functions that define "best"; and c) indicate when to use them. An abstract architecture enables those primitives to perform the search in a "speculative" way, using several processors, synchronizing them, killing the siblings of the path leading to the answer, etc. The user is freed from worrying about these interactions. Several search strategies are proposed and their implementation issues are addressed. "Armageddon," a global pruning method, is introduced, together with both a software and a hardware implementation for it. The concepts exposed are applicable to areas of Artificial Intelligence such as extensive expert systems, planning, game playing, and in general to large search problems. The proposed strategies, although showing promise, have not been evaluated by simulation or experimentation.
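The best-first strategies described above depend on a user-supplied definition of "best". As a purely sequential illustration (the report's contribution, the speculative parallel execution of such searches, is not reproduced here), a best-first search with a pluggable scoring function might look as follows; the toy problem is an assumption for demonstration only.

```python
import heapq

def best_first_search(start, expand, is_goal, score):
    """Best-first search over an implicit graph.
    `expand(state)` yields successor states; `score(state)` is the
    user-supplied definition of 'best' (lower is better)."""
    frontier = [(score(start), start)]
    seen = {start}
    while frontier:
        _, state = heapq.heappop(frontier)
        if is_goal(state):
            return state
        for nxt in expand(state):
            if nxt not in seen:
                seen.add(nxt)
                heapq.heappush(frontier, (score(nxt), nxt))
    return None

# Toy example: search the integers for 42, preferring states closer to it.
goal = best_first_search(
    start=0,
    expand=lambda n: (n + 1, n - 1, n * 2),
    is_goal=lambda n: n == 42,
    score=lambda n: abs(42 - n),
)
print(goal)
```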
Abstract:
Includes bibliographical references.
Abstract:
Transactions of the Horticultural Society of London, 1822, iv, 145-152.
Abstract:
Genome-wide association studies (GWAS) are conducted with the promise of discovering novel genetic variants associated with diverse traits. For most traits, associated markers individually explain just a modest fraction of the phenotypic variation, but their number can well be in the hundreds. We developed a maximum likelihood method that allows us to infer the distribution of associated variants even when many of them were missed by chance. Compared to previous approaches, the novelty of our method is that it (a) does not require an independent (unbiased) estimate of the effect sizes; (b) makes use of the complete distribution of P-values while allowing for the false discovery rate; and (c) takes into account allelic heterogeneity and the SNP pruning strategy. We applied our method to the latest GWAS meta-analysis results of the GIANT consortium. It revealed that while the explained variance of genome-wide (GW) significant SNPs is around 1% for waist-hip ratio (WHR), the observed P-values provide evidence for the existence of variants explaining 10% (CI=[8.5-11.5%]) of the phenotypic variance in total. Similarly, the total explained variance likely to exist for height is estimated to be 29% (CI=[28-30%]), three times higher than that attributable to the observed GW significant SNPs. This methodology also enables us to predict the benefit of future GWA studies that aim to reveal more associated genetic markers via increased sample size.
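For orientation only: the variance explained by individual GW-significant SNPs, which the abstract contrasts with the larger totals inferred by the likelihood method, is often approximated from the association statistic via the identity r^2 = t^2 / (t^2 + n - 2). The sketch below illustrates that bookkeeping with invented z-scores and sample size; it is not the maximum likelihood method proposed in the paper.

```python
import numpy as np

def variance_explained(z, n):
    """Approximate per-SNP phenotypic variance explained from a 1-d.f.
    association statistic, using r^2 = t^2 / (t^2 + n - 2) with z ~ t."""
    z = np.asarray(z, dtype=float)
    return z ** 2 / (z ** 2 + n - 2)

# Made-up z-scores for a handful of genome-wide significant SNPs in a study of n individuals.
z_scores = [6.1, 5.8, 7.4, 5.6, 9.0]
n = 100_000
print(f"summed variance explained: {variance_explained(z_scores, n).sum():.4%}")
```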
Abstract:
This paper analyzes the use of linear and neural network models for financial distress classification, with emphasis on the issues of input variable selection and model pruning. A data-driven method for selecting input variables (financial ratios, in this case) is proposed. A case study involving 60 British firms in the period 1997-2000 is used for illustration. It is shown that the use of the Optimal Brain Damage pruning technique can considerably improve the generalization ability of a neural model. Moreover, the set of financial ratios obtained with the proposed selection procedure is shown to be an appropriate alternative to the ratios usually employed by practitioners.
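Optimal Brain Damage ranks weights by a saliency derived from a diagonal approximation of the loss Hessian, roughly s_k = 0.5 * h_kk * w_k^2, and removes the least salient ones before retraining. The sketch below is a generic illustration that further approximates the diagonal Hessian by averaged squared gradients (an empirical-Fisher simplification) on a toy network; it is not the model, data or exact procedure of the case study.

```python
import torch
import torch.nn as nn

# Toy network and data standing in for the financial-distress classifier.
torch.manual_seed(0)
model = nn.Sequential(nn.Linear(10, 16), nn.Tanh(), nn.Linear(16, 2))
X, y = torch.randn(256, 10), torch.randint(0, 2, (256,))
loss_fn = nn.CrossEntropyLoss()

# Diagonal "Hessian" estimate via averaged squared gradients (empirical-Fisher simplification).
h_diag = [torch.zeros_like(p) for p in model.parameters()]
n_batches = len(X) // 32
for i in range(0, len(X), 32):
    model.zero_grad()
    loss_fn(model(X[i:i + 32]), y[i:i + 32]).backward()
    for h, p in zip(h_diag, model.parameters()):
        h += p.grad ** 2 / n_batches

# OBD-style saliency s_k = 0.5 * h_kk * w_k^2; zero out the least salient half of the weights.
saliency = torch.cat([(0.5 * h * p ** 2).flatten()
                      for h, p in zip(h_diag, model.parameters())])
threshold = saliency.quantile(0.5)
with torch.no_grad():
    for h, p in zip(h_diag, model.parameters()):
        p *= (0.5 * h * p ** 2 > threshold).float()
# In practice the pruned network would then be retrained and the cycle repeated.
```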
Abstract:
Prism is a modular classification rule generation method based on the ‘separate and conquer’ approach, an alternative to the rule induction approach using decision trees, also known as ‘divide and conquer’. Prism often achieves a level of classification accuracy similar to that of decision trees, but tends to produce a more compact, noise-tolerant set of classification rules. As with other classification rule generation methods, a principal problem arising with Prism is that of overfitting due to over-specialised rules. In addition, over-specialised rules increase the associated computational complexity. These problems can be addressed by pruning methods. For the Prism method, two pruning algorithms have recently been introduced for reducing overfitting of classification rules: J-pruning and Jmax-pruning. Both algorithms are based on the J-measure, an information-theoretic means of quantifying the theoretical information content of a rule. Jmax-pruning attempts to exploit the J-measure to its full potential, because J-pruning does not actually achieve this and may even lead to underfitting. A series of experiments has shown that Jmax-pruning may outperform J-pruning in reducing overfitting. However, Jmax-pruning is computationally relatively expensive and may also lead to underfitting. This paper reviews the Prism method and the two existing pruning algorithms above. It also proposes a novel pruning algorithm called Jmid-pruning. The latter is based on the J-measure and reduces overfitting to a similar level as the other two algorithms, but is better at avoiding underfitting and unnecessary computational effort. The authors conduct an experimental study of the performance of the Jmid-pruning algorithm in terms of classification accuracy and computational efficiency. The algorithm is also evaluated comparatively with the J-pruning and Jmax-pruning algorithms.
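Jmax-pruning exploits an upper bound on the J-measure attainable by any further specialisation of a rule; the bound commonly quoted in the J-pruning literature is J_max = p(cond) * max{ p(class|cond) * log2(1/p(class)), (1 - p(class|cond)) * log2(1/(1 - p(class))) }. The self-contained sketch below computes that bound alongside the current J value; it illustrates the metric only, not the Prism algorithms or the proposed Jmid-pruning.

```python
import math

def j_measure(p_class, p_class_given_cond, p_cond):
    # Same J-measure as in the earlier sketch, repeated here to keep the example self-contained.
    def term(p, q):
        return 0.0 if p == 0 else p * math.log2(p / q)
    return p_cond * (term(p_class_given_cond, p_class) + term(1 - p_class_given_cond, 1 - p_class))

def j_max_bound(p_class, p_class_given_cond, p_cond):
    """Commonly quoted upper bound on the J-measure attainable by any further
    specialisation of the rule 'IF condition THEN class'."""
    return p_cond * max(p_class_given_cond * math.log2(1 / p_class),
                        (1 - p_class_given_cond) * math.log2(1 / (1 - p_class)))

p_class, p_acc, p_cond = 0.3, 0.9, 0.2
current, bound = j_measure(p_class, p_acc, p_cond), j_max_bound(p_class, p_acc, p_cond)
# Jmax-style pruning exploits this bound: if it does not exceed the J value already
# achieved, specialising the rule further cannot increase its information content.
print(f"J = {current:.4f}, attainable maximum = {bound:.4f}")
```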
Abstract:
In this article we describe a feature extraction algorithm for pattern classification based on Bayesian decision boundaries and pruning techniques. The proposed method is capable of optimizing MLP neural classifiers by retaining those neurons in the hidden layer that really contribute to correct classification. We also propose a method which defines a plausible number of neurons in the hidden layer based on stem-and-leaf plots of the training samples. Experimental investigation reveals the efficiency of the proposed method. © 2002 IEEE.
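The abstract does not spell out the Bayes-boundary criterion, so the sketch below only illustrates the broader idea of retaining the hidden neurons that actually contribute to correct classification, here via a crude ablation test (drop in validation accuracy when a neuron is silenced) on random stand-in weights and data; it is an assumption, not the authors' method.

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy validation data and a toy "trained" single-hidden-layer MLP (weights are random stand-ins).
X_val, y_val = rng.normal(size=(200, 6)), rng.integers(0, 2, 200)
W1, b1 = rng.normal(size=(12, 6)), rng.normal(size=12)
W2, b2 = rng.normal(size=(2, 12)), rng.normal(size=2)

def accuracy(mask):
    """Validation accuracy with the masked hidden neurons silenced (mask: 1 keep, 0 silence)."""
    h = np.tanh(X_val @ W1.T + b1) * mask
    return np.mean(np.argmax(h @ W2.T + b2, axis=1) == y_val)

baseline = accuracy(np.ones(12))
# Ablate each hidden neuron in turn; keep only those whose removal hurts accuracy.
contribution = np.array([baseline - accuracy(np.eye(12)[i] == 0) for i in range(12)])
keep = contribution > 0.0
print(f"baseline accuracy {baseline:.3f}; keeping {keep.sum()} of 12 hidden neurons")
```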
Abstract:
Granger causality (GC) is a statistical technique used to estimate temporal associations in multivariate time series. Many applications and extensions of GC have been proposed since its formulation by Granger in 1969. Here we control for potentially mediating or confounding associations between time series in the context of event-related electrocorticographic (ECoG) time series. A pruning approach to remove spurious connections and simultaneously reduce the required number of estimations to fit the effective connectivity graph is proposed. Additionally, we consider the potential of adjusted GC applied to independent components as a method to explore temporal relationships between underlying source signals. Both approaches overcome limitations encountered when estimating many parameters in multivariate time-series data, an increasingly common predicament in today's brain mapping studies.
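For context, a plain pairwise Granger-causality connectivity graph, with non-significant edges pruned, can be computed with statsmodels as below. The adjusted (conditional) GC and the independent-component variant described in the abstract are not reproduced here; the surrogate data, lag order and significance threshold are assumptions.

```python
import numpy as np
from statsmodels.tsa.stattools import grangercausalitytests

rng = np.random.default_rng(0)
# Surrogate data standing in for event-related ECoG channels: channel 1 drives channel 0.
n, lag = 500, 2
data = rng.normal(size=(n, 3))
data[lag:, 0] += 0.8 * data[:-lag, 1]

edges = []
for src in range(data.shape[1]):
    for dst in range(data.shape[1]):
        if src == dst:
            continue
        # Column order required by statsmodels: [effect, cause].
        res = grangercausalitytests(data[:, [dst, src]], maxlag=lag)
        p = res[lag][0]["ssr_ftest"][1]
        if p < 0.01:                      # prune non-significant connections
            edges.append((src, dst, p))

print("retained directed edges (src -> dst, p):", edges)
```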
Abstract:
The problem of recognition on a finite set of events is considered. The generalization ability of classifiers for this problem is studied within the Bayesian approach. A method for specifying a non-uniform prior distribution over recognition tasks is suggested, which takes into account the assumed degree of intersection between classes. The results of the analysis are applied to the pruning of classification trees.
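For reference, classification-tree pruning as it is typically driven in practice can be sketched with scikit-learn's minimal cost-complexity pruning on a standard dataset; this is a generic illustration, not the Bayesian prior-based criterion analysed in the paper.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Candidate pruning strengths from the cost-complexity pruning path of the full tree.
path = DecisionTreeClassifier(random_state=0).cost_complexity_pruning_path(X_tr, y_tr)

best = None
for alpha in path.ccp_alphas:
    tree = DecisionTreeClassifier(random_state=0, ccp_alpha=alpha).fit(X_tr, y_tr)
    score = tree.score(X_te, y_te)
    if best is None or score > best[0]:
        best = (score, alpha, tree.get_n_leaves())

print(f"best held-out accuracy {best[0]:.3f} at ccp_alpha={best[1]:.5f} ({best[2]} leaves)")
```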