981 results for Decision trees


Relevance:

100.00%

Publisher:

Abstract:

The economiser is a critical component for efficient operation of coal-fired power stations. It consists of a large system of water-filled tubes which extract heat from the exhaust gases. When it fails, usually due to erosion causing a leak, the entire power station must be shut down to effect repairs. Not only are such repairs highly expensive, but the overall repair costs are significantly affected by fluctuations in electricity market prices, owing to the revenue lost during the outage. As a result, decisions about when to repair an economiser can alter the repair costs by millions of dollars. Economiser repair decisions are therefore critical and must be optimised. However, making optimal repair decisions is difficult because economiser leaks are a type of interactive failure: if left unfixed, a leak in one tube can cause additional leaks in adjacent tubes, which take more time to repair. In addition, when choosing repair times, one must also consider a number of other uncertain inputs such as future electricity market prices and demand. Although many decision models and methodologies have been developed, an effective decision-making method specifically for economiser repairs has yet to be defined. In this paper, we describe a decision-tree-based method to meet this need. An industrial case study demonstrates the application of our method.
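The core of such a repair decision tree is rolling uncertain outcomes back to expected costs at each decision node. The following minimal sketch illustrates the idea only; all probabilities, prices, and outage durations are invented for illustration and are not taken from the paper's case study.

```python
# Hypothetical sketch: a two-branch decision tree comparing "repair now"
# against "defer repair", weighting each chance outcome by its probability.
# Every number below is illustrative, not from the paper.

def expected_cost(branches):
    """Expected cost of a chance node: sum of probability * cost."""
    return sum(p * cost for p, cost in branches)

# Repair now: fixed 3-day outage; revenue loss depends on uncertain prices.
repair_now = expected_cost([
    (0.6, 3 * 1.0e6),   # 60% chance of normal prices: $1M/day lost revenue
    (0.4, 3 * 2.5e6),   # 40% chance of a price spike: $2.5M/day
])

# Defer: the leak spreads to adjacent tubes, lengthening the outage to
# 5 days, but prices may have settled by then.
defer = expected_cost([
    (0.7, 5 * 1.0e6),   # deferred repair during normal prices
    (0.3, 5 * 0.5e6),   # deferred repair during a low-demand period
])

best = min(("repair now", repair_now), ("defer", defer), key=lambda t: t[1])
print(f"repair now: ${repair_now:,.0f}  defer: ${defer:,.0f}  -> {best[0]}")
```

With these made-up figures the longer-but-cheaper deferred outage wins; in practice each branch's probabilities and prices would come from market forecasts and the leak-propagation model.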

Relevance:

100.00%

Publisher:

Abstract:

This paper presents a fault-diagnosis method based on an adaptive neuro-fuzzy inference system (ANFIS) in combination with decision trees. Classification and regression tree (CART), one of the decision-tree methods, is used as a feature-selection procedure to select pertinent features from the data set. The crisp rules obtained from the decision tree are then converted into fuzzy if-then rules that are employed to identify the structure of the ANFIS classifier. A hybrid of back-propagation and least-squares algorithms is utilized to tune the parameters of the membership functions. To evaluate the proposed algorithm, data sets obtained from vibration signals and current signals of induction motors are used. The results indicate that the CART-ANFIS model has potential for fault diagnosis of induction motors.
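The crisp-to-fuzzy conversion step can be sketched as follows: a hard CART threshold becomes a graded membership function centred on the same threshold. The feature name, threshold, and slope below are invented for illustration; the paper's actual membership functions are tuned by the hybrid learning algorithm.

```python
import math

# Hypothetical sketch: a crisp CART split such as
# "vibration_rms <= 0.8 -> healthy" becomes a fuzzy rule whose antecedent
# is a sigmoid membership function centred on the same threshold.

def crisp_rule(x, threshold=0.8):
    """Crisp CART rule: fires fully (1.0) or not at all (0.0)."""
    return 1.0 if x <= threshold else 0.0

def fuzzy_rule(x, threshold=0.8, slope=10.0):
    """Fuzzified version: degree of membership in 'low vibration'."""
    return 1.0 / (1.0 + math.exp(slope * (x - threshold)))

for x in (0.5, 0.8, 1.1):
    print(f"x={x}: crisp={crisp_rule(x)}, fuzzy={fuzzy_rule(x):.3f}")
```

Near the threshold the fuzzy rule fires partially (0.5 exactly at it) instead of flipping abruptly, which is what allows the subsequent gradient-based parameter tuning.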

Relevance:

100.00%

Publisher:

Abstract:

Objectives: To demonstrate the application of decision trees (classification and regression trees (CARTs) and their cousins, boosted regression trees (BRTs)) to understanding structure in missing data. Setting: Data taken from employees at three different industry sites in Australia. Participants: 7915 observations were included. Materials and Methods: The approach was evaluated using an occupational health dataset comprising results of questionnaires, medical tests, and environmental monitoring. Statistical methods included standard statistical tests and the 'rpart' and 'gbm' packages, for CART and BRT analyses respectively, from the statistical software 'R'. A simulation study was conducted to explore the capability of decision tree models to describe data with artificially introduced missingness. Results: CART and BRT models were effective in highlighting a missingness structure in the data, related to the type of data (medical or environmental), the site at which it was collected, the number of visits, and the presence of extreme values. The simulation study revealed that CART models were able to identify the variables and values responsible for inducing missingness. There was greater variation in variable importance for unstructured than for structured missingness. Discussion: Both CART and BRT models were effective in describing structural missingness in data. CART models may be preferred over BRT models for exploratory analysis of missing data and for selecting variables important for predicting missingness. BRT models can show how values of other variables influence missingness, which may prove useful for researchers. Conclusion: Researchers are encouraged to use CART and BRT models to explore and understand missing data.
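The core move is to recode each value as missing (1) or observed (0) and let a tree's split search find which covariates predict missingness. This sketch builds that indicator on synthetic records and finds the best single split by Gini gain; the "site B medical tests are missing" pattern is constructed for illustration and is not the study's finding.

```python
# Hypothetical sketch: a one-level decision tree (stump) fitted to a
# missingness indicator. Synthetic data; the site effect is built in.

def gini(labels):
    if not labels:
        return 0.0
    p = sum(labels) / len(labels)
    return 2 * p * (1 - p)

def best_split(rows, y):
    """Find the (feature, value) equality split with the largest Gini gain."""
    base, best = gini(y), (None, None, 0.0)
    for f in rows[0]:
        for v in {r[f] for r in rows}:
            left = [yi for r, yi in zip(rows, y) if r[f] == v]
            right = [yi for r, yi in zip(rows, y) if r[f] != v]
            gain = base - (len(left) * gini(left)
                           + len(right) * gini(right)) / len(y)
            if gain > best[2]:
                best = (f, v, gain)
    return best

# Synthetic records: medical tests at site "B" are missing half the time;
# everything else is fully observed.
rows = [{"site": s, "type": t}
        for s in "ABC" * 10 for t in ("medical", "environmental")]
y = [1 if (r["site"] == "B" and r["type"] == "medical") else 0 for r in rows]

feature, value, gain = best_split(rows, y)
print(feature, value, round(gain, 3))
```

The stump recovers `site == "B"` as the strongest predictor of missingness, which is exactly the kind of structure the 'rpart' models surface in the real dataset.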

Relevance:

100.00%

Publisher:

Abstract:

PURPOSE To 1) develop and test decision tree (DT) models to classify physical activity (PA) intensity from accelerometer output and Gross Motor Function Classification System (GMFCS) level in ambulatory youth with cerebral palsy (CP); and 2) compare the classification accuracy of the new DT models with that achieved by previously published cut-points for youth with CP. METHODS Youth with CP (GMFCS levels I-III; N=51) completed seven activity trials of increasing PA intensity while wearing a portable metabolic system and ActiGraph GT3X accelerometers. DT models were used to identify vertical axis (VA) and vector magnitude (VM) count thresholds corresponding to sedentary (SED; <1.5 METs), light PA (LPA; ≥1.5 and <3 METs) and moderate-to-vigorous PA (MVPA; ≥3 METs). Models were trained and cross-validated using the 'rpart' and 'caret' packages within R. RESULTS For both the VA decision tree (VA_DT) and the VM decision tree (VM_DT), a single threshold differentiated LPA from SED, while the threshold differentiating MVPA from LPA decreased as the level of impairment increased. The average cross-validation accuracy for the VA_DT was 81.1%, 76.7%, and 82.9% for GMFCS levels I, II, and III, respectively. The corresponding cross-validation accuracy for the VM_DT was 80.5%, 75.6%, and 84.2%. Within each GMFCS level, the decision tree models achieved better PA intensity recognition than previously published cut-points. The accuracy differential was greatest among GMFCS level III participants, in whom the previously published cut-points misclassified 40% of the MVPA activity trials. CONCLUSION GMFCS-specific cut-points provide more accurate assessments of MVPA in youth with CP across the full spectrum of ambulatory ability.
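Applying such a model at scoring time reduces to a small set of count thresholds, one of which varies by GMFCS level. The sketch below shows only the shape of the classifier; every threshold value is made up for illustration, since the real cut-points come from the paper's trained 'rpart' models.

```python
# Hypothetical sketch: GMFCS-specific cut-points from a decision tree.
# A single threshold separates SED from LPA; a level-specific threshold
# separates LPA from MVPA. All count values are illustrative.

SED_THRESHOLD = 100          # counts/epoch below this -> sedentary
MVPA_THRESHOLD = {           # decreases as impairment (GMFCS level) rises
    "I": 3000,
    "II": 2500,
    "III": 1500,
}

def classify(counts, gmfcs_level):
    """Map a vertical-axis count value to an intensity class."""
    if counts < SED_THRESHOLD:
        return "SED"
    if counts < MVPA_THRESHOLD[gmfcs_level]:
        return "LPA"
    return "MVPA"

print(classify(2000, "I"), classify(2000, "III"))
```

The same count value can be LPA for a GMFCS level I participant but MVPA for a level III participant, which is why a single set of cut-points misclassifies so many level III trials.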

Relevance:

100.00%

Publisher:

Abstract:

In this paper we present a novel algorithm for learning oblique decision trees. Most current decision tree algorithms rely on impurity measures to assess the goodness of hyperplanes at each node, and these impurity measures do not properly capture the geometric structure of the data. Motivated by this, our algorithm uses a strategy, based on some recent variants of SVM, to assess hyperplanes in such a way that the geometric structure of the data is taken into account. We show through empirical studies that our method is effective.
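The difference between axis-aligned and oblique splits can be shown on toy data whose class boundary is the diagonal x1 = x2. In this sketch the hyperplane weights are chosen by hand purely for illustration; the paper obtains them from an SVM-like optimisation at each node.

```python
# Hypothetical sketch: an axis-aligned split (threshold on one feature)
# versus an oblique split (threshold on a linear combination) on toy
# 2-D data separated by the diagonal x1 = x2.

def misclassified(points, labels, predict):
    return sum(1 for p, y in zip(points, labels) if predict(p) != y)

# Points on either side of the diagonal boundary.
points = [(1, 0), (2, 1), (3, 2), (0, 1), (1, 2), (2, 3)]
labels = [1, 1, 1, 0, 0, 0]

axis_aligned = lambda p: 1 if p[0] > 1.5 else 0      # split on x1 only
oblique = lambda p: 1 if p[0] - p[1] > 0 else 0      # split on x1 - x2

print(misclassified(points, labels, axis_aligned),
      misclassified(points, labels, oblique))
```

No single axis-aligned threshold separates these classes, while one oblique hyperplane does; an axis-aligned tree would need several nodes to approximate the diagonal staircase-fashion.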

Relevance:

100.00%

Publisher:

Abstract:

Decision trees require training samples from the training data set in order to derive classification rules. If the training set is too small, important information may be missed and the resulting model cannot capture the classification rules in the data; at the same time, a large training set does not guarantee a good model. This paper analyses the relationship between decision trees and the scale of the training data. We use nine decision tree algorithms to examine the accuracy, complexity, and robustness of the resulting trees, and demonstrate the results.
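The kind of experiment described can be sketched as follows: train the same simple tree learner (here a one-feature decision stump standing in for the paper's nine algorithms) on growing training sets and record test accuracy. The data are synthetic and the noise rate is an assumption; only the experimental setup, not any result of the paper, is illustrated.

```python
import random

# Hypothetical sketch: test accuracy of a decision stump as a function
# of training-set size, on synthetic data with 10% label noise.

random.seed(0)

def make_data(n):
    xs = [random.uniform(0, 1) for _ in range(n)]
    # True rule: class 1 iff x > 0.5, with 10% of labels flipped.
    ys = [(1 if x > 0.5 else 0) ^ (1 if random.random() < 0.1 else 0)
          for x in xs]
    return xs, ys

def train_stump(xs, ys):
    """Pick the threshold among the training points with fewest errors."""
    best_t, best_err = 0.5, len(ys) + 1
    for t in sorted(xs):
        err = sum(1 for x, y in zip(xs, ys) if (1 if x > t else 0) != y)
        if err < best_err:
            best_t, best_err = t, err
    return best_t

def accuracy(t, xs, ys):
    return sum(1 for x, y in zip(xs, ys) if (1 if x > t else 0) == y) / len(ys)

test_x, test_y = make_data(1000)
for n in (5, 50, 500):
    train_x, train_y = make_data(n)
    print(n, round(accuracy(train_stump(train_x, train_y), test_x, test_y), 3))
```

With very few training samples the learned threshold is dominated by noise, while beyond some size extra data yields diminishing returns, mirroring the paper's question about training-data scale.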

Relevance:

100.00%

Publisher: