897 results for Decision tree method


Relevance:

80.00%

Publisher:

Abstract:

Graduate Program in Animal Science (Zootecnia) - FCAV

Relevance:

80.00%

Publisher:

Abstract:

Green buildings are becoming the new focus of the building industry because of their reduced carbon footprint and the savings they offer on utility costs. Encouraged by these successes, governments have begun to produce policies and regulations that implement and mandate green buildings. However, these policies have had trouble increasing the popularity and number of green buildings. A way is needed to produce better policies and regulations that will increase both the number of green buildings and their popularity. A decision-making tool, such as a decision tree, should be created to help policymakers who lack the background to produce well-thought-out regulations. By researching the green building industry and its current status, key points can be mapped out in a decision tool that gives policymakers the grounding they need to produce better green building regulations.

Relevance:

80.00%

Publisher:

Abstract:

The competitive regime faced by individuals is fundamental to modelling the evolution of social organization. In this paper, we assess the relative importance of contest and scramble food competition on the social dynamics of a provisioned semi-free-ranging Cebus apella group (n=18). Individuals competed directly for provisioned and clumped foods. Effects of indirect competition were apparent with individuals foraging in different areas and with increased group dispersion during periods of low food abundance. We suggest that both forms of competition can act simultaneously and to some extent synergistically in their influence on social dynamics; the combination of social and ecological opportunities for competition and how those opportunities are exploited both influence the nature of the relationships within social groups of primates and underlie the evolved social structure. Copyright (c) 2008 S. Karger AG, Basel

Relevance:

80.00%

Publisher:

Abstract:

Background: Cost-effectiveness studies have been increasingly part of decision processes for incorporating new vaccines into the Brazilian National Immunisation Program. This study aimed to evaluate the cost-effectiveness of the 10-valent pneumococcal conjugate vaccine (PCV10) in the universal childhood immunisation programme in Brazil.
Methods: A decision-tree analytical model based on the ProVac Initiative pneumococcus model was used, following 25 successive cohorts from birth until 5 years of age. Two strategies were compared: (1) status quo and (2) universal childhood immunisation programme with PCV10. Epidemiological and cost estimates for pneumococcal disease were based on National Health Information Systems and the literature. A 'top-down' costing approach was employed. Costs are reported in 2004 Brazilian reals. Costs and benefits were discounted at 3%.
Results: 25 years after implementing the PCV10 immunisation programme, 10 226 deaths, 360 657 disability-adjusted life years (DALYs), 433 808 hospitalisations and 5 117 109 outpatient visits would be avoided. The cost of the immunisation programme would be R$10 674 478 765, and the expected savings on direct medical costs and family costs would be R$1 036 958 639 and R$209 919 404, respectively. This resulted in an incremental cost-effectiveness ratio of R$778 145 per death avoided and R$22 066 per DALY avoided from the societal perspective.
Conclusion: The PCV10 universal infant immunisation programme is a cost-effective intervention (between 1 and 3 times GDP per capita per DALY avoided). Owing to the uncertain burden-of-disease data, as well as unclear long-term vaccine effects, surveillance systems to monitor the long-term effects of this programme will be essential.
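
As a reading aid, the headline ratios combine the figures above as (programme cost minus averted direct medical and family costs) divided by deaths or DALYs avoided. A minimal sketch using only the abstract's own numbers; note that the published ratios also reflect the model's 3% discounting of costs and benefits, so this back-of-envelope division will not reproduce them exactly.

```python
# figures quoted in the abstract, in 2004 Brazilian reals (R$)
programme_cost  = 10_674_478_765
medical_savings = 1_036_958_639
family_savings  = 209_919_404
deaths_avoided  = 10_226
dalys_avoided   = 360_657

net_cost = programme_cost - medical_savings - family_savings
print(f"ICER per death avoided: R${net_cost / deaths_avoided:,.0f}")
print(f"ICER per DALY avoided:  R${net_cost / dalys_avoided:,.0f}")
# the published figures (R$778 145/death, R$22 066/DALY) additionally
# incorporate the 3% discounting described in the Methods
```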

Relevance:

80.00%

Publisher:

Abstract:

In this work we propose a new approach for preliminary epidemiological studies of Standardized Mortality Ratios (SMRs) collected over many spatial regions. A preliminary study of SMRs aims to formulate hypotheses to be investigated via individual epidemiological studies, which avoid the biases carried by aggregated analyses. Starting from the collected disease counts, and from expected disease counts calculated by means of reference population disease rates, an SMR is derived in each area as the MLE under a Poisson assumption on each observation. Such estimators have high standard errors in small areas, i.e. where the expected count is low, either because of the small population underlying the area or because of the rarity of the disease under study. Disease mapping models and other techniques for screening disease rates across the map, aiming to detect anomalies and possible high-risk areas, have been proposed in the literature under both the classical and the Bayesian paradigm. Our proposal approaches this issue with a decision-oriented method focused on multiple testing control, without, however, abandoning the preliminary-study perspective that an analysis of SMR indicators is meant to serve. We implement control of the False Discovery Rate (FDR), a quantity widely used to address multiple comparison problems in the field of microarray data analysis but not usually employed in disease mapping. Controlling the FDR means providing an estimate of the FDR for a set of rejected null hypotheses. The small-areas issue raises difficulties in applying traditional methods for FDR estimation, which are usually based only on knowledge of the p-values (Benjamini and Hochberg, 1995; Storey, 2003). Tests evaluated by a traditional p-value have weak power in small areas, where the expected number of disease cases is small. Moreover, the tests cannot be assumed independent when spatial correlation between SMRs is expected, nor are they identically distributed when the population underlying the map is heterogeneous. The Bayesian paradigm offers a way to overcome the inappropriateness of p-value based methods. Another peculiarity of the present work is to propose a hierarchical fully Bayesian model for FDR estimation when testing many null hypotheses of absence of risk. We use concepts from Bayesian disease mapping models, referring in particular to the Besag, York and Mollié (1991) model, often used in practice for its flexible prior assumption on the distribution of risks across regions. The borrowing of strength between prior and likelihood, typical of a hierarchical Bayesian model, has the advantage of evaluating a single test (i.e. a test in a single area) by means of all the observations in the map under study, rather than just the single observation. This improves the power of the test in small areas and addresses more appropriately the spatial correlation issue, which suggests that relative risks are closer in spatially contiguous regions. The proposed model estimates the FDR by means of the MCMC-estimated posterior probabilities p_i of the null hypothesis (absence of risk) in each area. An estimate of the expected FDR conditional on the data, denoted FDR-hat, can be calculated for any set of areas declared at high risk (where the null hypothesis is rejected) by averaging the corresponding p_i values. FDR-hat provides an easy decision rule for selecting high-risk areas, namely selecting as many areas as possible such that FDR-hat does not exceed a prefixed value; we call these FDR-hat based decision (or selection) rules.
The sensitivity and specificity of such a rule depend on the accuracy of the FDR estimate: over-estimation of the FDR causes a loss of power, while under-estimation produces a loss of specificity. Moreover, our model has the interesting feature of still being able to provide estimates of the relative risk values, as in the Besag, York and Mollié (1991) model. A simulation study was set up to evaluate the model's performance in terms of FDR estimation accuracy, sensitivity and specificity of the decision rule, and goodness of the relative risk estimates. We chose a real map from which we generated several spatial scenarios whose disease counts vary according to the degree of spatial correlation, the size of the areas, the number of areas where the null hypothesis is true, and the risk level in the areas where it is false. In summarizing the simulation results we always consider FDR estimation in sets constituted by all areas whose p_i falls below a threshold t. We show graphs of FDR-hat and the true FDR (known by simulation) plotted against the threshold t to assess the FDR estimation. By varying the threshold we can learn which FDR values can be accurately estimated by a practitioner wishing to apply the model (judged by the closeness between FDR-hat and the true FDR). By plotting the calculated sensitivity and specificity (both known by simulation) against FDR-hat we can check the sensitivity and specificity of the corresponding FDR-hat based decision rules. To investigate the degree of over-smoothing of the relative risk estimates, we compare box-plots of such estimates in high-risk areas (known by simulation) obtained by both our model and the classic Besag, York and Mollié model. All the summary tools are worked out for all simulated scenarios (54 scenarios in total). Results show that the FDR is well estimated (in the worst case we get an over-estimation, hence conservative FDR control) in scenarios with small areas, low risk levels and spatially correlated risks, which are our primary concern. In such scenarios we obtain good estimates of the FDR for all values less than or equal to 0.10. The sensitivity of FDR-hat based decision rules is generally low, but specificity is high; in these scenarios selection rules based on FDR-hat = 0.05 or FDR-hat = 0.10 can be recommended. In cases where the number of true alternative hypotheses (the number of truly high-risk areas) is small, FDR values up to 0.15 are also well estimated, and decision rules based on FDR-hat = 0.15 gain power while maintaining high specificity. On the other hand, in scenarios with non-small areas and non-small risk levels, the FDR is under-estimated except at very small values (much lower than 0.05); this results in a loss of specificity of a decision rule based on FDR-hat = 0.05. In such scenarios, decision rules based on FDR-hat = 0.05 or, even worse, FDR-hat = 0.10 cannot be recommended because the true FDR is actually much higher. As regards relative risk estimation, our model achieves almost the same results as the classic Besag, York and Mollié model. For this reason, our model is interesting for its ability to perform both the estimation of relative risk values and FDR control, except in non-small area and large risk level scenarios. A case study is finally presented to show how the method can be used in epidemiology.
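
A minimal sketch of the FDR-hat selection rule just described, assuming the posterior null probabilities p_i have already been estimated (e.g. from MCMC output of the hierarchical model); function and variable names are illustrative.

```python
import numpy as np

def fdr_hat_selection(p_null, alpha=0.05):
    """Select high-risk areas with estimated FDR (FDR-hat) at most alpha.

    p_null: posterior probability of the null hypothesis (absence of risk)
    for each area. Areas are added in order of increasing p_null; since
    FDR-hat of a rejection set is the mean of its p_null values, we keep
    the largest prefix whose running mean stays at or below alpha.
    """
    p_null = np.asarray(p_null, dtype=float)
    order = np.argsort(p_null)                       # most convincing areas first
    running_mean = np.cumsum(p_null[order]) / np.arange(1, p_null.size + 1)
    k = int(np.sum(running_mean <= alpha))           # size of the largest valid set
    return order[:k], (running_mean[k - 1] if k else 0.0)

# toy example: posterior null probabilities for 8 areas
selected, fdr_hat = fdr_hat_selection([0.01, 0.02, 0.30, 0.04, 0.90, 0.03, 0.60, 0.08])
print(selected, round(fdr_hat, 3))   # 5 areas selected, FDR-hat = 0.036
```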

Relevance:

80.00%

Publisher:

Abstract:

The primary aim of this dissertation is to identify subgroups of patients with chronic kidney disease (CKD) who have a differential risk of progression of illness; the secondary aim is to compare two equations for estimating the glomerular filtration rate (GFR). To this purpose, the PIRP (Prevention of Progressive Kidney Disease) registry was linked with the dialysis and mortality registries. The outcome of interest is the mean annual variation of GFR, estimated using the Chronic Kidney Disease Epidemiology Collaboration (CKD-EPI) equation. A decision tree model was used to subtype CKD patients, based on the non-parametric CHAID procedure (Chi-squared Automatic Interaction Detector). The independent variables of the model include gender, age, diabetes, hypertension, cardiac diseases, body mass index, baseline serum creatinine, haemoglobin, proteinuria, LDL cholesterol, triglycerides, serum phosphates, glycemia, parathyroid hormone and uricemia. The decision tree model classified patients into 10 terminal nodes using 6 variables (gender, age, proteinuria, diabetes, serum phosphates and ischemic cardiac disease) that predict a differential progression of kidney disease. Specifically, age <=53 years, male gender, proteinuria, diabetes and serum phosphates >3.70 mg/dl predict a faster decrease of GFR, while ischemic cardiac disease predicts a slower decrease. The comparison between GFR estimates obtained using the MDRD4 and CKD-EPI equations shows a high percentage agreement (>90%), with modest discrepancies at high and low values of age and serum creatinine. The study results underscore the need for a tight follow-up schedule in patients with age <53, and in patients aged 54 to 67 with diabetes, to try to slow down the progression of the disease. The results also point to the effective management of patients aged >67, in whom the estimated decrease in glomerular filtration rate corresponds to the physiological decrease observed in the absence of kidney disease, except for the subgroup of patients with proteinuria, in whom the GFR decline is more pronounced.
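
For reference, a sketch of the 2009 CKD-EPI creatinine equation used for the outcome above, with its standard published coefficients; this is not code from the dissertation itself.

```python
def ckd_epi_2009(scr_mg_dl: float, age: float, female: bool, black: bool = False) -> float:
    """2009 CKD-EPI creatinine equation; returns eGFR in mL/min/1.73 m^2.

    scr_mg_dl: serum creatinine in mg/dL.
    """
    kappa = 0.7 if female else 0.9
    alpha = -0.329 if female else -0.411
    egfr = (141
            * min(scr_mg_dl / kappa, 1.0) ** alpha
            * max(scr_mg_dl / kappa, 1.0) ** -1.209
            * 0.993 ** age)
    if female:
        egfr *= 1.018
    if black:
        egfr *= 1.159
    return egfr

# e.g. a 60-year-old woman with serum creatinine 1.1 mg/dL
print(round(ckd_epi_2009(1.1, 60, female=True), 1))
```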

Relevance:

80.00%

Publisher:

Abstract:

Accurate diagnosis of the causes of chest pain and dyspnea remains challenging. In this preliminary observational study with a 5-year follow-up, we attempted to find a simplified approach to selecting patients with chest pain who need immediate care, based on the initial evaluation in the emergency department (ED). Over a 24-month period, 301 patients were randomly selected, and a conditional inference tree (CIT) was used as the basis of the prognostic rule. Common diagnoses were musculoskeletal chest pain (27%), acute coronary syndrome (ACS, 19%) and panic attack (12%). Using variables reflecting ACS symptoms, we estimated the CIT-based likelihood of ACS to be high (91%) in 32 patients, low (4%) in 198 patients and intermediate (20.5-40%) in 71 patients. Coronary catheterization was performed within 24 hours in 91% of the patients with ACS, and a culprit lesion was found in 79%. Follow-up information (median 4.2 years) was available for 70% of the patients. Of the 164 patients without ACS who were followed up, 5 were treated with revascularization for stable angina pectoris, 2 were treated with revascularization for myocardial infarction, and 25 died. Although a simple triage decision tree could in theory help to select patients needing immediate care efficiently, we must also remain vigilant for those presenting with atypical symptoms.
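
The fitted tree itself is not reproduced in the abstract, so the following is only a hypothetical illustration of how a conditional inference tree collapses into bedside if/else triage rules; the three strata mirror the abstract's high/intermediate/low groups, but the split variables and conditions are invented.

```python
def acs_triage(typical_symptoms: bool, ecg_ischemia: bool, troponin_elevated: bool) -> str:
    """Hypothetical CIT-style triage rule (illustrative only; the study's
    actual splits and thresholds are not given in the abstract)."""
    if ecg_ischemia and troponin_elevated:
        return "high"          # immediate care pathway
    if typical_symptoms or ecg_ischemia or troponin_elevated:
        return "intermediate"  # further work-up
    return "low"               # consider non-cardiac causes

print(acs_triage(True, False, True))  # -> "intermediate"
```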

Relevance:

80.00%

Publisher:

Abstract:

Efficient planning of soil conservation measures requires, first, an understanding of the impact of soil erosion on soil fertility with regard to local land cover classes, and second, the identification of hot spots of soil erosion and bright spots of soil conservation in a spatially explicit manner. Soil organic carbon (SOC) is an important indicator of soil fertility. The aim of this study was to conduct a spatial assessment of erosion and its impact on SOC for specific land cover classes. Input data consisted of extensive ground truth, a digital elevation model and Landsat 7 imagery from two different seasons. Soil spectral reflectance readings were taken from soil samples in the laboratory and calibrated against the results of SOC chemical analysis using regression tree modelling. The resulting model statistics for soil degradation assessment are promising (R2 = 0.71, RMSEV = 0.32). Given that the area includes rugged terrain and small agricultural plots, the accuracy achieved by the decision tree models in mapping land cover classes, soil erosion incidence and SOC content classes is acceptable for preliminary studies. The various datasets were linked in the hot-bright spot matrix, which was developed to combine soil erosion incidence information and SOC content levels (for uniform land cover classes) in a scatter plot. The quarters of the plot show different stages of degradation, from well-conserved land to hot spots of soil degradation. The approach helps to gain a better understanding of the impact of soil erosion on soil fertility and to identify hot and bright spots in a spatially explicit manner. The results show distinctly lower SOC content levels on large parts of the test areas where annual crop cultivation was dominant in the 1990s and where cultivation has now been abandoned. On the other hand, there are strong indications that the afforestations and fruit orchards established in the 1980s have been successful in conserving soil resources.
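
A minimal sklearn sketch of the calibration step described above, regressing SOC measurements on laboratory spectral reflectance with a regression tree; the data here are synthetic stand-ins, and the study's actual preprocessing is not reproduced.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score, mean_squared_error

rng = np.random.default_rng(0)
X = rng.uniform(0, 1, size=(300, 50))   # stand-in spectral reflectance bands
y = 2.0 * X[:, 10] - 1.5 * X[:, 25] + rng.normal(0, 0.1, 300)  # stand-in SOC values

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
tree = DecisionTreeRegressor(max_depth=5, random_state=0).fit(X_tr, y_tr)
pred = tree.predict(X_te)
print("R2 =", round(r2_score(y_te, pred), 2),
      "RMSE =", round(mean_squared_error(y_te, pred) ** 0.5, 2))
```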

Relevance:

80.00%

Publisher:

Abstract:

The development of a clinical decision tree based on knowledge about risks and reported outcomes of therapy is a necessity for successful planning and outcome of periodontal therapy. This requires well-founded knowledge of the disease entity and a broad knowledge of how different risk conditions contribute to periodontitis. The infectious etiology, a complex immune response, and the influence of a large number of co-factors make clinical periodontal risk assessment challenging. The difficult relationship between independent and dependent risk conditions, paired with limited information on periodontitis prevalence, adds to the difficulty of periodontal risk assessment. The current information on periodontitis risk attributed to smoking habits, socio-economic conditions, general health and subjects' self-perception of health is not comprehensive, which contributes to the limited success of periodontal risk assessment. New models for risk analysis have been advocated; their utility for periodontal risk assessment and prognosis should be tested. The present review addresses several of these issues associated with periodontal risk assessment.

Relevance:

80.00%

Publisher:

Abstract:

The municipality of San Juan La Laguna, Guatemala is home to approximately 5,200 people and is located on the western side of the Lake Atitlán caldera. Steep slopes surround all but the eastern side of San Juan. The Lake Atitlán watershed is susceptible to many natural hazards, but the most predictable are the landslides that can occur with each rainy season, especially during high-intensity precipitation events. Hurricane Stan hit Guatemala in October 2005; the resulting flooding and landslides devastated the Atitlán region. Locations of landslide and non-landslide points were obtained from field observations and orthophotos taken following Hurricane Stan. This study used data on multiple attributes at every landslide and non-landslide point and applied different multivariate analyses to optimize a model for landslide prediction during high-intensity precipitation events like Hurricane Stan. The attributes considered in this study are: geology, geomorphology, distance to faults and streams, land use, slope, aspect, curvature, plan curvature, profile curvature and topographic wetness index. The attributes were pre-evaluated for their ability to predict landslides using four different attribute evaluators, all available in the open source data mining software Weka: filtered subset, information gain, gain ratio and chi-squared. Three multivariate algorithms (decision tree J48, logistic regression and BayesNet) were optimized for landslide prediction using different attributes. The following statistical parameters were used to evaluate model accuracy: precision, recall, F-measure and area under the receiver operating characteristic (ROC) curve. The algorithm BayesNet yielded the most accurate model and was used to build a probability map of landslide initiation points. The probability map developed in this study was also compared to the results of a bivariate landslide susceptibility analysis conducted for the watershed encompassing Lake Atitlán and San Juan. Landslides from Tropical Storm Agatha in 2010 were used to independently validate this study's multivariate model and the bivariate model. The ultimate aim of this study is to share the methodology and results with municipal contacts from the author's time as a U.S. Peace Corps volunteer, to facilitate more effective landslide hazard planning and mitigation in the future.
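
The study ran its attribute evaluation and classifiers in Weka; the sketch below is a rough Python analogue of that workflow on synthetic stand-in data, with mutual information standing in for Weka's information-gain evaluator and a decision tree and logistic regression standing in for J48 and logistic regression (scikit-learn has no BayesNet).

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import mutual_info_classif
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier
from sklearn.linear_model import LogisticRegression

# stand-in for the landslide/non-landslide table (slope, aspect, curvature, ...)
X, y = make_classification(n_samples=400, n_features=11, n_informative=5, random_state=1)

# information-gain style attribute ranking
scores = mutual_info_classif(X, y, random_state=1)
print("attribute ranking:", np.argsort(scores)[::-1])

# compare candidate models by area under the ROC curve, as in the study
for name, model in [("J48-like tree", DecisionTreeClassifier(random_state=1)),
                    ("logistic regression", LogisticRegression(max_iter=1000))]:
    auc = cross_val_score(model, X, y, cv=5, scoring="roc_auc").mean()
    print(name, "ROC AUC =", round(auc, 3))
```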

Relevance:

80.00%

Publisher:

Abstract:

BACKGROUND: Co-speech gestures are omnipresent in human interaction and facilitate language comprehension, making them a crucial element of communication. However, it is unclear whether gestures also support language comprehension in aphasic patients. Using visual exploration behavior analysis, the present study investigated the influence of congruence between speech and co-speech gestures on comprehension, measured as accuracy in a decision task. METHOD: Twenty aphasic patients and thirty healthy controls watched videos in which speech was combined with meaningless (baseline condition), congruent, or incongruent gestures. Comprehension was assessed with a decision task, while remote eye-tracking allowed analysis of visual exploration. RESULTS: In aphasic patients, the incongruent condition resulted in a significant decrease in accuracy, while the congruent condition led to a significant increase in accuracy compared to baseline. In the control group, the incongruent condition resulted in a decrease in accuracy, while the congruent condition did not significantly increase accuracy. Visual exploration analysis showed that patients fixated significantly less on the face and tended to fixate more on the gesturing hands than controls did. CONCLUSION: Co-speech gestures play an important role for aphasic patients, as they modulate comprehension. Incongruent gestures evoke significant interference and deteriorate patients' comprehension. In contrast, congruent gestures enhance comprehension in aphasic patients, which might be valuable for clinical and therapeutic purposes.

Relevance:

80.00%

Publisher:

Abstract:

This paper presents a shallow dialogue analysis model aimed at human-human dialogues in the context of staff or business meetings. Four components of the model are defined, and several machine learning techniques are used to extract features from dialogue transcripts: maximum entropy classifiers for dialogue acts, latent semantic analysis for topic segmentation, and decision tree classifiers for discourse markers. A rule-based approach is proposed for resolving cross-modal references to meeting documents. The methods are trained and evaluated on a common data set and annotation format. The integration of the components into an automated shallow dialogue parser opens the way to multimodal meeting processing and retrieval applications.
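
As an illustration of one component, a maximum entropy classifier for dialogue acts is equivalent to multinomial logistic regression over lexical features; a toy sketch, with the utterances and act labels invented.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# toy utterances labelled with dialogue acts (stand-ins for meeting transcripts)
utterances = ["can you repeat that", "we will ship on friday",
              "is the budget approved", "the budget is approved",
              "please send the slides", "what time is the meeting"]
acts = ["question", "statement", "question", "statement", "request", "question"]

# a maximum entropy classifier is multinomial logistic regression over features
model = make_pipeline(CountVectorizer(ngram_range=(1, 2)),
                      LogisticRegression(max_iter=1000))
model.fit(utterances, acts)
print(model.predict(["can you send the budget"]))
```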

Relevance:

80.00%

Publisher:

Abstract:

This article discusses the detection of discourse markers (DMs) in dialogue transcriptions, by human annotators and by automated means. After a theoretical discussion of the definition of DMs and their relevance to natural language processing, we focus on the role of 'like' as a DM. Results from experiments with human annotators show that detection of DMs is a difficult but reliable task, and one that requires prosodic information from the soundtracks. Several types of features are then defined for automatic disambiguation of 'like': collocations, part-of-speech tags and duration-based features. Decision-tree learning shows that for 'like', nearly 70% precision can be reached with near 100% recall, mainly using collocation filters. Similar results hold for 'well', with about 91% precision at 100% recall.
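
A toy sketch of the decision-tree disambiguation step, with hand-made feature rows standing in for the article's collocation, part-of-speech and duration features; all values are invented.

```python
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import cross_val_score

# one row per token of "like": [appears in a collocation such as "I like" /
# "like this", previous tag is a pronoun, token duration in seconds]
X = [[1, 1, 0.10], [1, 0, 0.12], [0, 0, 0.25], [0, 1, 0.30],
     [1, 1, 0.11], [0, 0, 0.28], [0, 1, 0.22], [1, 0, 0.09]]
y = [0, 0, 1, 1, 0, 1, 1, 0]   # 1 = discourse marker, 0 = verb/preposition use

clf = DecisionTreeClassifier(max_depth=2, random_state=0)
print(cross_val_score(clf, X, y, cv=4).mean())   # accuracy on the toy data
```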

Relevance:

80.00%

Publisher:

Abstract:

The Centers for Disease Control and Prevention (CDC) estimates that more than 2 million patients annually acquire an infection while hospitalized in U.S. hospitals for other health problems, and that 88,000 die as a direct or indirect result of these infections. Infection with Clostridium difficile is the most important cause of health care-associated infectious diarrhea in industrialized countries. The purpose of this study was to explore the cost of the current treatment practice of beginning empiric metronidazole for hospitalized patients with diarrhea prior to identification of an infectious agent. The records of 70 hospitalized patients were retrospectively analyzed to determine the pharmacologic treatment, laboratory testing, and radiographic studies ordered, and the median cost of each was determined. All patients in the study were tested for C. difficile and concurrently started on empiric metronidazole. The median direct cost for metronidazole was $7.25 per patient (95% CI 5.00, 12.721). The median direct cost for laboratory charges was $468.00 (95% CI 339.26, 552.58), and for radiology the median direct cost was $970.00 (95% CI 738.00, 3406.91). Indirect costs, which are far greater than direct costs, were not studied. At St. Luke's, if every hospitalized patient with diarrhea were empirically treated with metronidazole at a median cost of $7.25, the annual direct cost is estimated to be over $9,000.00, plus uncalculated indirect costs. In the U.S., the estimated annual direct cost may be as much as $21,750,000.00, plus indirect costs. An unexpected and significant finding of this study was the inconsistency in the testing and treatment of patients with health care-associated diarrhea. A best-practice model for C. difficile testing and treatment was not found in the literature review. In addition to the cost savings gained by not routinely beginning empiric treatment with metronidazole, significant savings and improvement in patient care may result from a more consistent approach to the diagnosis and treatment of all patients with health care-associated diarrhea. A decision tree model for C. difficile testing and treatment is proposed, but further research is needed to evaluate the decision arms before a validated best-practice model can be proposed.
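
The national estimate follows from simple scaling of the per-patient median; backing out the implied patient count (not stated in the abstract) makes the assumption explicit.

```python
median_cost_per_patient = 7.25         # US$, empiric metronidazole (abstract)
us_annual_direct_cost = 21_750_000.00  # US$, abstract's national estimate

print(us_annual_direct_cost / median_cost_per_patient)  # 3000000.0
# i.e. the figure implicitly assumes about 3 million hospitalized patients
# with diarrhea empirically treated with metronidazole per year
```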

Relevance:

80.00%

Publisher:

Abstract:

Group sequential methods and response adaptive randomization (RAR) procedures have been applied in clinical trials for economic and ethical reasons. Group sequential methods are able to reduce the average sample size by allowing early stopping, but patients are allocated equally, so half are assigned to the inferior arm. RAR procedures tend to allocate more patients to the better arm; however, they require a larger sample size to attain a given power. This study intended to combine these two procedures. We applied the Bayesian decision theory approach to define our group sequential stopping rules and evaluated their operating characteristics under an RAR setting. The results showed that the Bayesian decision theory method was able to preserve the type I error rate while achieving favorable power; furthermore, by comparison with the error spending function method, we concluded that the Bayesian decision theory approach was more effective at reducing the average sample size.
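
A toy simulation of the combined design, assuming a two-arm binary endpoint with Beta-Binomial updating; the stopping rule here is a simple posterior-probability boundary rather than the study's decision-theoretic loss, and all thresholds and names are illustrative.

```python
import numpy as np
rng = np.random.default_rng(7)

def trial(p_a, p_b, n_max=200, group=20, stop_prob=0.975):
    """One simulated two-arm trial combining group-sequential stopping with
    response adaptive randomization (RAR). Illustrative only: the abstract's
    stopping rules come from a Bayesian decision-theoretic loss, replaced
    here by a posterior-probability boundary."""
    s = np.zeros(2)   # successes per arm
    n = np.zeros(2)   # patients per arm
    while n.sum() < n_max:
        # posterior P(arm B better) under independent Beta(1, 1) priors
        draws_a = rng.beta(1 + s[0], 1 + n[0] - s[0], 4000)
        draws_b = rng.beta(1 + s[1], 1 + n[1] - s[1], 4000)
        p_b_better = (draws_b > draws_a).mean()
        if p_b_better > stop_prob or p_b_better < 1 - stop_prob:
            return "stopped early", int(n.sum())     # group-sequential stop
        # RAR: allocate the next group, favouring the currently better arm
        to_b = rng.random(group) < p_b_better
        for arm, p, k in ((0, p_a, int((~to_b).sum())), (1, p_b, int(to_b.sum()))):
            n[arm] += k
            s[arm] += rng.binomial(k, p)
    return "reached n_max", int(n.sum())

print(trial(0.30, 0.50))   # e.g. ('stopped early', <average sample size saved>)
```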