905 results for Method of a Decision-Tree
Abstract:
The new European Standard EN 301 549 “Accessibility requirements suitable for public procurement of ICT products and services in Europe” is the response by CEN, CENELEC and ETSI to the European Commission’s Mandate 376. Today, ICT products and services are converging, and the boundaries between product categories are constantly being blurred. For that reason, EN 301 549 has been drafted using a feature-based approach instead of being based on product categories. The result is a standard that can be applied to any ICT product or service by identifying the applicable requirements according to the features of the ICT. This demonstration presents ongoing work at the CETTICO research group of the Technical University of Madrid. CETTICO is developing a workgroup-based support tool in which teams can annotate the results of a conformity assessment of a given ICT product or service against the requirements of the EN. One of the functions of the tool is creating evaluation projects: during that task the user defines the features of the corresponding ICT product or service by answering questions presented by the tool, and as a result the tool produces a list of applicable requirements and recommendations.
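As a rough illustration of the feature-driven question flow described in this abstract, the following sketch maps hypothetical yes/no feature answers to placeholder requirement identifiers. The feature names and clause labels are invented for illustration and are not taken from EN 301 549 or the CETTICO tool.

```python
# Minimal sketch (not the CETTICO tool itself): collecting applicable
# requirement clauses from the ICT features the evaluator answered "yes" to.
# Feature names and clause identifiers are illustrative placeholders.
FEATURE_REQUIREMENTS = {
    "has_two_way_voice": ["clause-A.1", "clause-A.2"],
    "displays_video": ["clause-B.1"],
    "is_web_content": ["clause-C.1", "clause-C.2", "clause-C.3"],
    "has_hardware_keys": ["clause-D.1"],
}

def applicable_requirements(answers):
    """Collect requirement clauses for every feature answered 'yes'."""
    applicable = []
    for feature, present in answers.items():
        if present:
            applicable.extend(FEATURE_REQUIREMENTS.get(feature, []))
    return sorted(set(applicable))

if __name__ == "__main__":
    # The evaluator answers the tool's questions about the product's features.
    answers = {"has_two_way_voice": True, "displays_video": False,
               "is_web_content": True, "has_hardware_keys": False}
    print(applicable_requirements(answers))
```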
Abstract:
Retrospective clinical data presents many challenges for data mining and machine learning. The transcription of patient records from paper charts and the subsequent manipulation of the data often result in high volumes of noise as well as a loss of other important information. In addition, such datasets often fail to represent expert medical knowledge and reasoning in any explicit manner. In this research we describe applying data mining methods to retrospective clinical data to build a prediction model for asthma exacerbation severity for pediatric patients in the emergency department. Difficulties in building such a model forced us to investigate alternative strategies for analyzing and processing retrospective data. This paper describes this process together with an approach to mining retrospective clinical data by incorporating formalized external expert knowledge (secondary knowledge sources) into the classification task. This knowledge is used to partition the data into a number of coherent sets, where each set is explicitly described in terms of the secondary knowledge source. Instances from each set are then classified in a manner appropriate for the characteristics of the particular set. We present our methodology and outline a set of experimental results that demonstrate some advantages and some limitations of our approach. © 2008 Springer-Verlag Berlin Heidelberg.
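A minimal sketch of the partition-then-classify idea described above, using scikit-learn and synthetic data. The partitioning rule below is a hypothetical stand-in for a formalized secondary knowledge source (e.g., a clinical guideline category) and is not the authors' actual knowledge base.

```python
# Sketch: partition instances with an external knowledge rule, then train and
# apply a separate decision tree per partition. Data and rule are synthetic.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 4))                    # synthetic patient features
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)    # synthetic severity label

def knowledge_partition(x):
    """Assign an instance to a coherent set using an external (hypothetical) rule."""
    return 0 if x[2] < 0 else 1                  # e.g., an age or triage category

groups = np.array([knowledge_partition(x) for x in X])

# Train one classifier per knowledge-defined partition.
models = {}
for g in np.unique(groups):
    mask = groups == g
    models[g] = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X[mask], y[mask])

def predict(x):
    """Route a new instance to the model of its partition."""
    return models[knowledge_partition(x)].predict(x.reshape(1, -1))[0]

print([predict(x) for x in X[:5]])
```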
Abstract:
This study proposes an integrated analytical framework for effective management of project risks using a combined multiple-criteria decision-making technique and decision tree analysis. First, a conceptual risk management model was developed through a thorough literature review. The model was then applied through action research on a petroleum oil refinery construction project in the central part of India in order to demonstrate its effectiveness. Oil refinery construction projects are risky because of technical complexity, resource unavailability, involvement of many stakeholders and strict environmental requirements. Although project risk management has been researched extensively, a practical and easily adoptable framework is missing. In the proposed framework, risks are identified using a cause-and-effect diagram, analysed using the analytic hierarchy process, and responses are developed using a risk map. Additionally, decision tree analysis allows various options for risk response development to be modelled and optimises the selection of the risk-mitigating strategy. The proposed risk management framework can be easily adopted and applied in any project and integrated with other project management knowledge areas.
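The decision-tree step in such frameworks is commonly an expected monetary value (EMV) comparison of response options. The sketch below illustrates that calculation with invented costs and probabilities; it is not data from the refinery project described in the abstract.

```python
# Sketch: choose a risk response by expected monetary value (EMV).
# Each option carries a fixed response cost plus chance branches of
# (probability, impact cost). All numbers are illustrative assumptions.
options = {
    "mitigate": {"cost": 120_000, "branches": [(0.10, 500_000), (0.90, 0)]},
    "transfer": {"cost": 200_000, "branches": [(0.05, 100_000), (0.95, 0)]},
    "accept":   {"cost": 0,       "branches": [(0.40, 500_000), (0.60, 0)]},
}

def expected_monetary_value(option):
    """Fixed response cost plus probability-weighted impact of the chance node."""
    chance = sum(p * impact for p, impact in option["branches"])
    return option["cost"] + chance

for name, opt in options.items():
    print(f"{name}: EMV = {expected_monetary_value(opt):,.0f}")

best = min(options, key=lambda n: expected_monetary_value(options[n]))
print("Preferred response:", best)
```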
Abstract:
Data mining projects that use decision trees to classify test cases usually rank the classified cases by the probabilities provided by those trees. A better method is needed for ranking test cases that have already been classified by a binary decision tree, because these probabilities are not always accurate and reliable enough. One reason is that the probability estimates computed by existing decision tree algorithms are the same for all cases that fall in a particular leaf of the tree; this is only one reason why such estimates cannot be used as an accurate means of deciding whether a test case has been correctly classified. Isabelle Alvarez has proposed a new method for ranking the test cases classified by a binary decision tree [Alvarez, 2004]. In this paper we give the results of a comparison of different ranking methods based on the probability estimate, the sensitivity of a particular case, or both.
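The tie problem described above follows directly from how leaf-based probability estimates are computed. The sketch below uses made-up leaf counts and also shows a common Laplace correction; it is an illustration of the issue, not the Alvarez (2004) ranking method itself.

```python
# Sketch: leaf-based class probability estimates, with and without a Laplace
# correction. Every case routed to the same leaf receives the same score,
# so a ranking based only on these estimates cannot separate those cases.
def leaf_probability(positives, negatives, laplace=False):
    """Probability of the positive class estimated from a leaf's class counts."""
    if laplace:
        return (positives + 1) / (positives + negatives + 2)
    return positives / (positives + negatives)

# Hypothetical class counts in three leaves of a trained tree.
leaves = {"leaf_1": (9, 1), "leaf_2": (2, 0), "leaf_3": (45, 15)}

for name, (pos, neg) in leaves.items():
    print(name,
          "raw:", round(leaf_probability(pos, neg), 3),
          "laplace:", round(leaf_probability(pos, neg, laplace=True), 3))
```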
Abstract:
We study the star/galaxy classification efficiency of 13 different decision tree algorithms applied to photometric objects in the Sloan Digital Sky Survey Data Release Seven (SDSS-DR7). Each algorithm is defined by a set of parameters which, when varied, produce different final classification trees. We extensively explore the parameter space of each algorithm, using the set of 884,126 SDSS objects with spectroscopic data as the training set. The efficiency of star-galaxy separation is measured using the completeness function. We find that the Functional Tree algorithm (FT) yields the best results as measured by the mean completeness in two magnitude intervals: 14 <= r <= 21 (85.2%) and r >= 19 (82.1%). We compare the performance of the tree generated with the optimal FT configuration to the classifications provided by the SDSS parametric classifier, 2DPHOT, and Ball et al. We find that our FT classifier is comparable to or better in completeness over the full magnitude range 15 <= r <= 21, with much lower contamination than all but the Ball et al. classifier. At the faintest magnitudes (r > 19), our classifier is the only one that maintains high completeness (> 80%) while simultaneously achieving low contamination (approximately 2.5%). We also examine the SDSS parametric classifier (psfMag - modelMag) to see if the dividing line between stars and galaxies can be adjusted to improve the classifier. We find that currently stars in close pairs are often misclassified as galaxies, and suggest a new cut to improve the classifier. Finally, we apply our FT classifier to separate stars from galaxies in the full set of 69,545,326 SDSS photometric objects in the magnitude range 14 <= r <= 21.
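For readers unfamiliar with the metrics, the sketch below computes completeness and contamination per magnitude bin on synthetic labels. It only illustrates the definitions; it is not the SDSS pipeline or the FT classifier evaluated in the abstract.

```python
# Sketch: completeness and contamination of a galaxy/star classifier per
# magnitude bin, on synthetic data with a deliberately degrading classifier.
import numpy as np

rng = np.random.default_rng(1)
r_mag = rng.uniform(14, 21, size=5000)                  # synthetic r magnitudes
is_galaxy = rng.random(5000) < 0.6                      # synthetic true labels
# A fake classifier whose accuracy drops at faint magnitudes.
accuracy = np.clip(1.2 - 0.03 * r_mag, 0.7, 1.0)
pred_galaxy = np.where(rng.random(5000) < accuracy, is_galaxy, ~is_galaxy)

def completeness(true, pred):
    """Fraction of true galaxies that were recovered as galaxies."""
    return np.sum(true & pred) / np.sum(true)

def contamination(true, pred):
    """Fraction of objects labelled galaxy that are actually stars."""
    return np.sum(~true & pred) / np.sum(pred)

for lo, hi in [(14, 19), (19, 21)]:
    m = (r_mag >= lo) & (r_mag < hi)
    print(f"{lo} <= r < {hi}: "
          f"completeness={completeness(is_galaxy[m], pred_galaxy[m]):.2f}, "
          f"contamination={contamination(is_galaxy[m], pred_galaxy[m]):.2f}")
```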
Abstract:
Increases in vascular permeability and angiogenesis are crucial events in wound repair, tumoral growth and the revascularization of tissues submitted to ischemia. Increased vascular permeability allows a variety of cytokines and growth factors to reach the damaged tissue, while angiogenesis supplies tissues with a wide variety of nutrients and is also important for metabolite clearance. It has been suggested that the natural latex from Hevea brasiliensis has wound healing properties and angiogenic activity. Thus, the purpose of this work was to characterize its angiogenic activity and its effects on vascular permeability and wound healing. The serum fraction of the latex was separated from the rubber by reduction of the pH. The activity of the dialyzed serum fraction on vascular permeability, injected into subcutaneous tissue, was assayed according to Miles' method. The angiogenic activity was determined using the chick embryo chorioallantoic membrane assay, and its effects on the wound-healing process were determined using the rabbit ear dermal ulcer model. The serum fraction showed an evident angiogenic effect and was effective in enhancing vascular permeability. In dermal ulcers, this material significantly accelerated wound healing. Moreover, the serum fraction boiled and treated with proteases lost these activities. These results are in accordance with the enhancement of wound healing observed in clinical trials carried out with a biomembrane prepared from the same natural latex. Copyright (C) 2009 John Wiley & Sons, Ltd.
Abstract:
Dissertation presented as a partial requirement for obtaining the degree of Master in Statistics and Information Management.
Abstract:
Dissertation submitted for obtaining the degree of Master in Electrical and Computer Engineering.
Abstract:
In patients undergoing non-cardiac surgery, cardiac events are the most common cause of perioperative morbidity and mortality. It is often difficult to choose adequate cardiologic examinations before surgery. This paper, inspired by the guidelines of the European and American societies of cardiology (ESC, AHA, ACC), discusses the place of standard ECG, echocardiography, treadmill or bicycle ergometer and pharmacological stress testing in preoperative evaluations. The role of coronary angiography and prophylactic revascularization will also be discussed. Finally, we provide a decision tree which will be helpful to both general practitioners and specialists.
Abstract:
A new aggregation method for decision making is presented using induced aggregation operators and the index of maximum and minimum level. Its main advantage is that it can assess complex reordering processes in the aggregation that represent complex attitudinal characters of the decision maker, such as psychological or personal factors. A wide range of properties and particular cases of this new approach are studied. A further generalization using hybrid averages and immediate weights is also presented. The key advantage of this approach over the previous model is that the weighted average and the ordered weighted average can be used in the same formulation. Thus, we are able to consider the subjective attitude and the degree of optimism of the decision maker in the decision process. The paper ends with an application to a decision-making problem based on the use of assignment theory.
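A much-simplified sketch of the building blocks mentioned above: a weighted average, an ordered weighted average, and one simple convex combination of the two. This illustrates the general idea of using both in one formulation; it is not the paper's induced operators, immediate weights, or index of maximum and minimum level.

```python
# Sketch: weighted average (WA), ordered weighted average (OWA), and a simple
# convex combination of both. All weights and scores are illustrative.
def weighted_average(values, weights):
    """Importance-weighted mean over the criteria, in their given order."""
    return sum(v * w for v, w in zip(values, weights))

def owa(values, weights):
    """OWA: weights are applied to the values sorted in descending order."""
    ordered = sorted(values, reverse=True)
    return sum(v * w for v, w in zip(ordered, weights))

def combined(values, wa_weights, owa_weights, beta=0.5):
    """Blend the WA and OWA components; beta tunes their relative importance."""
    return beta * weighted_average(values, wa_weights) + (1 - beta) * owa(values, owa_weights)

scores = [70, 40, 90, 60]        # ratings of one alternative on four criteria
wa_w = [0.4, 0.3, 0.2, 0.1]      # importance of each criterion
owa_w = [0.1, 0.2, 0.3, 0.4]     # attitudinal weights (pessimistic here)

print(weighted_average(scores, wa_w), owa(scores, owa_w), combined(scores, wa_w, owa_w))
```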
Abstract:
Background: Cardiac magnetic resonance (CMR) is accepted as a method to assess suspected coronary artery disease (CAD). Nonetheless, invasive coronary angiography (CXA), with or without fractional flow reserve (FFR), remains the main diagnostic test to evaluate CAD. Little data exist on the economic impact of the use of these procedures in a population with a low to intermediate pre-test probability. Objective: To compare the costs of three decision strategies to revascularize a patient with suspected CAD: 1) a strategy guided by CMR; 2) a hypothetical strategy guided by CXA-FFR; 3) a hypothetical strategy guided by CXA alone.
Abstract:
Yellow mombin is a fruit tree that grows spontaneously in the semi-arid region of Northeastern Brazil. Its fruits are still exploited extractively. The pulp of the yellow mombin fruit stands out commercially due to the characteristic flavor and aroma perceived when it is consumed in diverse ways. This study aimed to evaluate the bioactive compounds, total extractable polyphenols, and antioxidant activity of yellow mombin fruits (Spondias mombin, L.) from clone and ungrafted genotypes. The fruits were harvested at commercial maturity from twelve yellow mombin tree genotypes from an experimental orchard located in the municipality of João Pessoa, Paraíba, Brazil, and evaluated for chlorophyll, carotenoids, yellow flavonoids, total extractable polyphenols, and antioxidant activity, the latter measured by the β-carotene/linoleic acid method. The antioxidant activity showed a percentage of inhibition of oxidation higher than 75% for all genotypes evaluated at 120 minutes. The fruits from clone genotypes showed a higher percentage of antioxidant activity.
Abstract:
Decision trees are very powerful tools for classification in data mining tasks that involve different types of attributes. When handling numeric data sets, the attributes are usually converted first to categorical types and then classified using information gain concepts. Information gain is a very popular and useful measure that indicates whether any benefit, in terms of information content, is obtained by splitting on a given attribute. However, this process is computationally intensive for large data sets, and popular decision tree algorithms such as ID3 cannot handle numeric data sets. This paper proposes statistical variance as an alternative to information gain, together with the statistical mean as the split point for attributes in completely numerical data sets. The new algorithm has been shown to be competitive with its information gain counterpart C4.5 and with many existing decision tree algorithms on the standard UCI benchmark datasets, using the ANOVA statistical test. The specific advantages of the proposed algorithm are that it avoids the computational overhead of information gain computation for large data sets with many attributes, and that it avoids the conversion of huge numeric data sets to categorical data, which is also a time-consuming task. In summary, huge numeric datasets can be submitted directly to this algorithm without any attribute mappings or information gain computations. The approach also blends the two closely related fields of statistics and data mining.
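A minimal sketch of one plausible reading of the proposed splitting criterion: split each numeric attribute at its mean and score the split by the reduction in variance of numerically encoded class labels. The data are synthetic and the details may differ from the authors' exact algorithm.

```python
# Sketch: mean-split of numeric attributes scored by weighted variance
# reduction, as an alternative to information gain. Data are synthetic.
import numpy as np

def variance_reduction(attribute, labels):
    """Score a mean-split of a numeric attribute by weighted variance reduction."""
    split = attribute.mean()                       # statistical mean as the split point
    left, right = labels[attribute <= split], labels[attribute > split]
    if len(left) == 0 or len(right) == 0:
        return 0.0
    n = len(labels)
    weighted = (len(left) / n) * left.var() + (len(right) / n) * right.var()
    return labels.var() - weighted

rng = np.random.default_rng(2)
X = rng.normal(size=(200, 3))                      # purely numeric attributes
y = (X[:, 1] > 0.2).astype(float)                  # numerically encoded class labels

scores = [variance_reduction(X[:, j], y) for j in range(X.shape[1])]
best = int(np.argmax(scores))
print("variance reduction per attribute:", np.round(scores, 3), "-> split on attribute", best)
```

No conversion of the numeric attributes to categorical values is needed, which is the time saving the abstract emphasizes.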
Abstract:
This letter tests the canopy height profile (CHP) methodology as a way of retrieving the effective leaf area index (LAIe) and the vertical vegetation profile at the single-tree level. Waveform and discrete airborne LiDAR data from six swaths, as well as from the combined data of the six swaths, were used to extract the LAIe of a single live Callitris glaucophylla tree. LAIe was extracted from the raw waveform as an intermediate step in the CHP methodology, with two different vegetation-ground reflectance ratios. Discrete point LAIe estimates were derived from the gap probability using the following: 1) single ground returns and 2) all ground returns. LiDAR LAIe retrievals were subsequently compared to hemispherical photography estimates, yielding mean values within ±7% of the latter, depending on the method used. The CHP of a single dead Callitris glaucophylla tree, representing the distribution of vegetation material, was verified with a field profile manually reconstructed from convergent photographs taken with a fixed-focal-length camera. A binwise comparison of the two profiles showed very high correlation between the data, reaching an R2 of 0.86 for the CHP from the combined swaths. Using a study-area-adjusted reflectance ratio improved the correlation between the profiles, but only marginally in comparison to using an arbitrary ratio of 0.5 for the laser wavelength of 1550 nm.
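A minimal sketch of the discrete-return pathway mentioned above, assuming the gap probability is estimated from the fraction of ground returns and inverted with a Beer-Lambert-style relation. The return counts and the extinction coefficient are illustrative assumptions, and the waveform pathway with vegetation-ground reflectance ratios is not shown.

```python
# Sketch: gap probability from discrete LiDAR return counts, then a
# Beer-Lambert-style inversion to an effective LAI. All numbers are invented.
import math

def gap_probability(ground_returns, canopy_returns):
    """P_gap estimated as the fraction of returns that reached the ground."""
    return ground_returns / (ground_returns + canopy_returns)

def effective_lai(p_gap, k=0.5):
    """Invert P_gap = exp(-k * LAIe) for the effective leaf area index."""
    return -math.log(p_gap) / k

p = gap_probability(ground_returns=420, canopy_returns=980)
print(f"P_gap = {p:.3f}, LAIe = {effective_lai(p):.2f}")
```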