941 results for Predictive
Abstract:
This work deals with a procedure for model re-identification of a process in closed loop with an already existing commercial MPC. The controller considered here has a two-layer structure where the upper layer performs a target calculation based on a simplified steady-state optimization of the process. Here, a methodology is proposed in which a test signal is introduced into a tuning parameter of the target calculation layer. When the outputs are controlled within zones instead of at fixed set points, the approach allows continuous operation of the process without excessive disruption of the operating objectives, as process constraints and product specifications remain satisfied during the identification test. The application of the method is illustrated through the simulation of two processes from the oil refining industry. (c) 2008 Elsevier Ltd. All rights reserved.
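A minimal sketch (in Python, with an invented first-order plant and invented parameters; not the authors' algorithm) of the central idea: a test signal is added to a tuning parameter of the steady-state target-calculation layer, the output is kept inside its control zone, and the resulting closed-loop data are used to re-identify the model.

```python
import numpy as np

# Hypothetical first-order plant y[k+1] = a*y[k] + b*u[k]; true values unknown to the controller.
a_true, b_true = 0.9, 0.5
rng = np.random.default_rng(0)

y_zone = (1.0, 3.0)        # output controlled within a zone, not at a fixed set point
u_nominal = 0.4            # nominal steady-state input target from the economic layer
g_hat = 5.0                # controller's rough estimate of the steady-state gain b/(1-a)
dither = 0.15 * np.sign(rng.standard_normal(200))   # PRBS-like test signal

y, u = 0.0, 0.0
log_u, log_y = [], []
for k in range(200):
    # Target-calculation layer: test signal injected into a tuning parameter
    # (here, for simplicity, the steady-state input target itself)
    u_ss = u_nominal + dither[k]
    # Keep the predicted steady-state output inside its zone
    u_ss = float(np.clip(u_ss, y_zone[0] / g_hat, y_zone[1] / g_hat))
    # Lower (dynamic) layer: simple first-order move towards the target
    u += 0.5 * (u_ss - u)
    y = a_true * y + b_true * u + 0.01 * rng.standard_normal()
    log_u.append(u)
    log_y.append(y)

# Re-identify the model from the closed-loop data: least squares on y[k+1] = a*y[k] + b*u[k]
Y = np.array(log_y[1:])
X = np.column_stack([log_y[:-1], log_u[:-1]])
a_hat, b_hat = np.linalg.lstsq(X, Y, rcond=None)[0]
print(f"identified a = {a_hat:.3f}, b = {b_hat:.3f}")
```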
Abstract:
The objective of this project was to evaluate the protein extraction from soybean flour in dairy whey using a multivariate statistical method with a 2³ factorial experiment. The influence of three variables was considered: temperature, pH and percentage of sodium chloride, against the process response variable (percentage of protein extraction). It was observed that, during protein extraction over time and temperature, the treatments at 80 degrees C for 2 h presented the greatest total protein values (5.99%). The percentage of protein extracted increased with heating time. Therefore, the maximum of the function representing protein extraction was analysed through the 2³ factorial experiment. The results showed that all variables were important to the extraction. After the statistical analysis, it was observed that pH, temperature and percentage of sodium chloride were not sufficient for the extraction process, since it was not possible to obtain the inflection point of the mathematical function; on the other hand, the mathematical model was significant as well as predictive.
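A minimal sketch (illustrative responses, not the study's measurements) of how effects are estimated from a 2³ factorial design by least squares on coded factor levels:

```python
import itertools
import numpy as np

# Coded levels (-1, +1) for temperature, pH and NaCl percentage; responses are invented.
levels = np.array(list(itertools.product([-1, 1], repeat=3)), dtype=float)
y = np.array([3.1, 3.8, 3.4, 4.2, 4.0, 5.2, 4.5, 5.9])  # % protein extracted (illustrative)

x1, x2, x3 = levels.T
# Design matrix: intercept, main effects, two- and three-factor interactions
X = np.column_stack([np.ones(8), x1, x2, x3, x1*x2, x1*x3, x2*x3, x1*x2*x3])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
for name, b in zip(["b0", "b1", "b2", "b3", "b12", "b13", "b23", "b123"], coef):
    print(f"{name} = {b:+.3f}")
```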
Abstract:
This paper applies hierarchical Bayesian models to price farm-level yield insurance contracts. The methodology considers temporal effects, spatial dependence and spatio-temporal models. One of the major advantages of this framework is that an estimate of the premium rate is obtained directly from the posterior distribution. These methods were applied to a farm-level data set of soybean in the State of Paraná (Brazil) for the period between 1994 and 2003. Model selection was based on a posterior predictive criterion. This study considerably improves the estimation of fair premium rates given the small number of observations.
Abstract:
This article presents a statistical model of agricultural yield data based on a set of hierarchical Bayesian models that allows joint modeling of temporal and spatial autocorrelation. This method captures a comprehensive range of the uncertainties involved in predicting crop insurance premium rates, as opposed to the more traditional ad hoc, two-stage methods that are typically based on independent estimation and prediction. A panel data set of county-average yield data was analyzed for 290 counties in the State of Paraná (Brazil) for the period of 1990 through 2002. Posterior predictive criteria are used to evaluate different model specifications. This article provides substantial improvements in the statistical and actuarial methods often applied to the calculation of insurance premium rates. These improvements are especially relevant to situations where data are limited.
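A minimal sketch (synthetic yields and a deliberately simplified single-level normal model, not the hierarchical spatio-temporal specification of these two papers) of the key pricing step: the premium rate is read directly off posterior predictive draws of yield.

```python
import numpy as np

rng = np.random.default_rng(1)
yields = np.array([2.9, 3.4, 2.1, 3.0, 3.3, 2.6, 3.1, 2.8, 3.5, 2.4])  # t/ha, illustrative

# Conjugate normal model with known variance: a simplification of the hierarchical model.
sigma = 0.5
mu0, tau0 = 3.0, 1.0                        # prior mean and sd of the mean yield
n = len(yields)
tau_n = 1.0 / np.sqrt(1.0 / tau0**2 + n / sigma**2)
mu_n = tau_n**2 * (mu0 / tau0**2 + yields.sum() / sigma**2)

# Posterior predictive draws of next season's yield
mu_draws = rng.normal(mu_n, tau_n, size=50_000)
y_rep = rng.normal(mu_draws, sigma)

# Fair premium rate for a contract guaranteeing a fraction of the expected yield
coverage = 0.7
guarantee = coverage * y_rep.mean()
premium_rate = np.maximum(guarantee - y_rep, 0.0).mean() / guarantee
print(f"fair premium rate ~ {premium_rate:.3%}")
```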
Abstract:
A warning system for sooty blotch and flyspeck (SBFS) of apple, developed in the southeastern United States, uses cumulative hours of leaf wetness duration (LWD) to predict the timing of the first appearance of signs. In the Upper Midwest United States, however, this warning system has resulted in sporadic disease control failures. The purpose of the present study was to determine whether the warning system's algorithm could be modified to provide a more reliable assessment of SBFS risk. Hourly LWD, rainfall, relative humidity (RH), and temperature data were collected from orchards in Iowa, North Carolina, and Wisconsin in 2005 and 2006. Timing of the first appearance of SBFS signs was determined by weekly scouting. Preliminary analysis using scatterplots and boxplots suggested that cumulative hours of RH >= 97% could be a useful predictor of SBFS appearance. Receiver operating characteristic curve analysis was used to compare the predictive performance of cumulative LWD and cumulative hours of RH >= 97%. Cumulative hours of RH >= 97% was a more conservative and accurate predictor than cumulative LWD for 15 site-years in the Upper Midwest, but not for four site-years in North Carolina. Performance of the SBFS warning system in the Upper Midwest and climatically similar regions may be improved if cumulative hours of RH >= 97% were substituted for cumulative LWD to predict the first appearance of SBFS.
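A minimal sketch (synthetic predictor values and labels, not the orchard data) of the receiver operating characteristic comparison between two cumulative predictors, using scikit-learn:

```python
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

rng = np.random.default_rng(2)
n = 120
signs_appeared = rng.integers(0, 2, n)                       # 1 = SBFS signs observed
cum_lwd = 200 + 60 * signs_appeared + rng.normal(0, 50, n)   # cumulative leaf wetness hours
cum_rh97 = 150 + 90 * signs_appeared + rng.normal(0, 50, n)  # cumulative hours of RH >= 97%

for name, x in [("cumulative LWD", cum_lwd), ("cumulative RH >= 97%", cum_rh97)]:
    auc = roc_auc_score(signs_appeared, x)
    fpr, tpr, thr = roc_curve(signs_appeared, x)
    best = np.argmax(tpr - fpr)                               # Youden's J as an action threshold
    print(f"{name}: AUC = {auc:.2f}, threshold ~ {thr[best]:.0f} h")
```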
Abstract:
Tuberculosis (TB) is the primary cause of mortality among infectious diseases. Mycobacterium tuberculosis thymidine monophosphate kinase (TMPKmt) is essential to DNA replication. Thus, this enzyme represents a promising target for developing new drugs against TB. In the present study, the receptor-independent (RI) 4D-QSAR method has been used to develop QSAR models, and corresponding 3D-pharmacophores, for a set of 81 thymidine analogues, and two corresponding subsets, reported as inhibitors of TMPKmt. The resulting optimized models are not only statistically significant, with r² ranging from 0.83 to 0.92 and q² from 0.78 to 0.88, but are also robustly predictive based on test set predictions. The most and the least potent inhibitors, in their respective postulated active conformations derived from each of the models, were docked in the active site of the TMPKmt crystal structure. There is solid consistency between the 3D-pharmacophore sites defined by the QSAR models and interactions with binding site residues. Moreover, the QSAR models provide insights regarding a probable mechanism of action of the analogues.
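A minimal sketch (generic linear regression on random descriptors, not the RI 4D-QSAR models) of how the reported q², the leave-one-out cross-validated counterpart of r², is typically computed:

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score
from sklearn.model_selection import LeaveOneOut, cross_val_predict

rng = np.random.default_rng(3)
X = rng.normal(size=(81, 5))                     # descriptor matrix (81 analogues, hypothetical)
y = X @ np.array([1.0, -0.5, 0.3, 0.0, 0.8]) + rng.normal(0, 0.3, 81)  # activity-like response

model = LinearRegression()
r2 = r2_score(y, model.fit(X, y).predict(X))                      # fitted r^2
y_loo = cross_val_predict(model, X, y, cv=LeaveOneOut())          # leave-one-out predictions
q2 = 1 - np.sum((y - y_loo) ** 2) / np.sum((y - y.mean()) ** 2)   # cross-validated q^2
print(f"r2 = {r2:.2f}, q2 = {q2:.2f}")
```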
Abstract:
Despite the increase in the use of natural compounds in place of synthetic derivatives as antioxidants in food products, the extent of this substitution is limited by cost constraints. Thus, the objective of this study was to explore synergism in the antioxidant activity of natural compounds, for further application in food products. Three hydrosoluble compounds (x₁ = caffeic acid, x₂ = carnosic acid, and x₃ = glutathione) and three liposoluble compounds (x₁ = quercetin, x₂ = rutin, and x₃ = genistein) were mixed according to a simplex-centroid design. The antioxidant activity of the mixtures was analyzed by the ferric reducing antioxidant power (FRAP) and oxygen radical absorbance capacity (ORAC) methodologies, and activity was also evaluated in an oxidized mixed micelle prepared with linoleic acid (LAOX). Cubic polynomial models with predictive capacity were obtained when the mixtures were submitted to the LAOX methodology (ŷ = 0.56x₁ + 0.59x₂ + 0.04x₃ + 0.41x₁x₂ - 0.41x₁x₃ - 1.12x₂x₃ - 4.01x₁x₂x₃) for the hydrosoluble compounds, and to the FRAP methodology (ŷ = 3.26x₁ + 2.39x₂ + 0.04x₃ + 1.51x₁x₂ + 1.03x₁x₃ + 0.29x₂x₃ + 3.20x₁x₂x₃) for the liposoluble compounds. Optimization of the models suggested that a mixture containing 47% caffeic acid + 53% carnosic acid and a mixture containing 67% quercetin + 33% rutin were potential synergistic combinations for further evaluation using a food matrix.
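A minimal sketch evaluating the special-cubic model for the hydrosoluble mixture over the simplex; the coefficients are taken from the abstract, while the grid search simply stands in for whatever optimization procedure the authors used.

```python
import numpy as np

def y_hat(x1, x2, x3):
    # Special-cubic mixture model for the hydrosoluble compounds (coefficients from the abstract)
    return (0.56*x1 + 0.59*x2 + 0.04*x3
            + 0.41*x1*x2 - 0.41*x1*x3 - 1.12*x2*x3 - 4.01*x1*x2*x3)

# Grid over the simplex x1 + x2 + x3 = 1, x_i >= 0
best = (None, -np.inf)
for x1 in np.linspace(0, 1, 101):
    for x2 in np.linspace(0, 1 - x1, 101):
        x3 = 1 - x1 - x2
        val = y_hat(x1, x2, x3)
        if val > best[1]:
            best = ((x1, x2, x3), val)
print("best mixture (caffeic, carnosic, glutathione):", np.round(best[0], 2), "->", round(best[1], 2))
```

With these coefficients the grid optimum falls at roughly 46-47% caffeic acid and the balance carnosic acid, consistent with the 47% + 53% blend reported in the abstract.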
Abstract:
In this work, total reflection X-ray fluorescence spectrometry was employed to determine trace element concentrations in different human breast tissues (normal, normal adjacent, benign and malignant). A multivariate discriminant analysis of the observed levels was performed in order to build a predictive model and perform tissue-type classifications. A total of 83 breast tissue samples were studied. Results showed the presence of Ca, Ti, Fe, Cu and Zn in all analyzed samples. All trace elements, except Ti, were found in higher concentrations in both malignant and benign tissues when compared to normal tissues and normal adjacent tissues. In addition, the concentration of Fe was higher in malignant tissues than in benign neoplastic tissues; the opposite behavior was observed for Ca, Cu and Zn. Results showed that discriminant analysis was able to successfully identify differences between trace element distributions from normal and malignant tissues, with an overall accuracy of 80% and 65% for independent and paired breast samples, respectively, and of 87% for benign and malignant tissues. (C) 2009 Elsevier B.V. All rights reserved.
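A minimal sketch (simulated concentrations, not the measured data set) of the discriminant-analysis classification step, with cross-validated accuracy as the performance figure:

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(4)
n = 83
labels = rng.integers(0, 2, n)                       # 0 = normal, 1 = malignant (illustrative)
# Columns: Ca, Ti, Fe, Cu, Zn; malignant samples shifted upward for all elements except Ti
shift = np.array([0.8, 0.0, 1.0, 0.6, 0.7])
X = rng.normal(0, 1, (n, 5)) + labels[:, None] * shift

lda = LinearDiscriminantAnalysis()
acc = cross_val_score(lda, X, labels, cv=5).mean()   # cross-validated overall accuracy
print(f"discriminant analysis accuracy ~ {acc:.0%}")
```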
Abstract:
In recent years, the phrase 'genomic medicine' has increasingly been used to describe a new development in medicine that holds great promise for human health. This new approach to health care uses the knowledge of an individual's genetic make-up to identify those that are at a higher risk of developing certain diseases and to intervene at an earlier stage to prevent these diseases. Identifying genes that are involved in disease aetiology will provide researchers with tools to develop better treatments and cures. A major role within this field is attributed to 'predictive genomic medicine', which proposes screening healthy individuals to identify those who carry alleles that increase their susceptibility to common diseases, such as cancers and heart disease. Physicians could then intervene even before the disease manifests and advise individuals with a higher genetic risk to change their behaviour - for instance, to exercise or to eat a healthier diet - or offer drugs or other medical treatment to reduce their chances of developing these diseases. These promises have fallen on fertile ground among politicians, health-care providers and the general public, particularly in light of the increasing costs of health care in developed societies. Various countries have established databases on the DNA and health information of whole populations as a first step towards genomic medicine. Biomedical research has also identified a large number of genes that could be used to predict someone's risk of developing a certain disorder. But it would be premature to assume that genomic medicine will soon become reality, as many problems remain to be solved. Our knowledge about most disease genes and their roles is far from sufficient to make reliable predictions about a patient’s risk of actually developing a disease. In addition, genomic medicine will create new political, social, ethical and economic challenges that will have to be addressed in the near future.
Abstract:
This study investigated the ability of negatively versus positively perceived stress to predict the outcome of treatment for binge eating disorder (BED). Participants were 62 obese women satisfying the DSM-IV research criteria for BED. Stress was measured using an instrument based on the Recent Life Change Questionnaire (RLCQ). Participants experiencing high negative stress during the study period reported a binge eating frequency three times greater than that reported by subjects experiencing low negative stress (2.14 vs. 0.65 binge-days/week). Negative stress predicted how fast an individual would reduce binge eating and demonstrated more predictive power than positive stress.
Abstract:
In this second counterpoint article, we refute the claims of Landy, Locke, and Conte, and make the more specific case for our perspective, which is that ability-based models of emotional intelligence have value to add in the domain of organizational psychology. In this article, we address remaining issues, such as general concerns about the tenor and tone of the debates on this topic, a tendency for detractors to collapse across emotional intelligence models when reviewing the evidence and making judgments, and a subsequent penchant to thereby discount all models, including the ability-based one, as lacking validity. We specifically refute the following three claims from our critics with the most recent empirically based evidence: (1) emotional intelligence is dominated by opportunistic academics-turned-consultants who have amassed much fame and fortune based on a concept that is shabby science at best; (2) the measurement of emotional intelligence is grounded in unstable, psychometrically flawed instruments, which have not demonstrated appropriate discriminant and predictive validity to warrant their use; and (3) there is weak empirical evidence that emotional intelligence is related to anything of importance in organizations. We thus end with an overview of the empirical evidence supporting the role of emotional intelligence in organizational and social behavior.
Abstract:
Argumentation is modelled as a game where the payoffs are measured in terms of the probability that the claimed conclusion is, or is not, defeasibly provable, given a history of arguments that have actually been exchanged, and given the probability of the factual premises. The probability of a conclusion is calculated using a standard variant of Defeasible Logic, in combination with standard probability calculus. A new element of the present approach is that the exchange of arguments is analysed with game-theoretical tools, yielding a prescriptive and, to some extent, even predictive account of the actual course of play. A brief comparison with existing argument-based dialogue approaches confirms that such a prescriptive account of actual argumentation has been almost entirely lacking in the approaches proposed so far.
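A minimal sketch (a toy "birds fly unless penguins" theory, not the paper's framework) of the probability calculation: enumerate truth assignments of the independent factual premises, weight each by its probability, and sum over the worlds in which the conclusion is defeasibly derivable.

```python
from itertools import product

# Independent factual premises with their probabilities (hypothetical values)
premises = {"bird": 0.9, "penguin": 0.2}

def defeasibly_provable(facts):
    # Toy defeasible theory: bird => flies, but penguin defeats that rule
    return facts.get("bird", False) and not facts.get("penguin", False)

prob = 0.0
names = list(premises)
for values in product([True, False], repeat=len(names)):
    facts = dict(zip(names, values))
    p_world = 1.0
    for name, v in facts.items():
        p_world *= premises[name] if v else 1 - premises[name]
    if defeasibly_provable(facts):
        prob += p_world
print(f"P(conclusion 'flies' is defeasibly provable) = {prob:.2f}")
```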
Abstract:
This review reflects the state of the art in the study of contact and dynamic phenomena occurring in cold roll forming. The importance of taking these phenomena into account stems from the significant machine time and tooling costs spent on replacing worn-out forming rolls and adjusting equipment in cold roll forming. Predictive modelling of the tool wear caused by contact and dynamic phenomena can reduce production losses in this technological process.
Abstract:
Data mining is the process of identifying valid, implicit, previously unknown, potentially useful and understandable information from large databases. It is an important step in the process of knowledge discovery in databases (Olaru & Wehenkel, 1999). In a data mining process, input data can be structured, semi-structured, or unstructured, and can consist of text, categorical or numerical values. One of the important characteristics of data mining is its ability to handle data that are large in volume, distributed, time-variant, noisy, and high-dimensional. A large number of data mining algorithms have been developed for different applications. For example, association rule mining can be useful for market basket problems, clustering algorithms can be used to discover trends in unsupervised learning problems, classification algorithms can be applied to decision-making problems, and sequential and time series mining algorithms can be used in predicting events, fault detection, and other supervised learning problems (Vapnik, 1999). Classification is among the most important tasks in data mining, particularly for applications in engineering fields. Together with regression, classification is mainly used for predictive modelling. A number of classification algorithms are now in practical use. According to Sebastiani (2002), the main classification algorithms can be categorized as: decision tree and rule-based approaches such as C4.5 (Quinlan, 1996); probabilistic methods such as the Bayesian classifier (Lewis, 1998); on-line methods such as Winnow (Littlestone, 1988) and CVFDT (Hulten, 2001); neural network methods (Rumelhart, Hinton & Williams, 1986); example-based methods such as k-nearest neighbors (Duda & Hart, 1973); and SVM (Cortes & Vapnik, 1995). Other important techniques for classification tasks include Associative Classification (Liu et al., 1998) and Ensemble Classification (Tumer, 1996).
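A minimal sketch (scikit-learn stand-ins on a toy dataset, not tied to any particular application above) comparing several of the classifier families just listed:

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)
classifiers = {
    "decision tree (C4.5-like)": DecisionTreeClassifier(random_state=0),
    "naive Bayes": GaussianNB(),
    "k-nearest neighbors": KNeighborsClassifier(n_neighbors=5),
    "SVM": SVC(kernel="rbf"),
}
for name, clf in classifiers.items():
    score = cross_val_score(clf, X, y, cv=5).mean()   # 5-fold cross-validated accuracy
    print(f"{name}: {score:.2f}")
```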
Abstract:
Globalisation, increasing complexity, and the need to address triple-bottom-line sustainability have seen the proliferation of Learning Organisations (LOs) which, by definition, have the capacity to anticipate environmental changes and economic opportunities and adapt accordingly. Such organisations use system dynamics modelling (SDM) for both strategic planning and the promotion of organisational learning. Although SDM has been applied in the context of tourism destination management for predictive reasons, the current literature does not analyse or recognise how it could be used as a foundation for an LO. This study introduces the concept of the Learning Tourism Destination (LTD) and discusses, on the basis of a review of six case studies, the potential of SDM as a tool for the implementation and enhancement of collective learning processes. The results reveal that SDM is capable of promoting communication between stakeholders and stimulating organisational learning. It is suggested that the LTD approach be further utilised and explored.
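A minimal stock-and-flow sketch (invented destination and parameters, not drawn from the six case studies) of the kind of system dynamics model the review refers to, integrated with a simple Euler step:

```python
# Simple stock-and-flow system dynamics sketch: visitor stock grows through word-of-mouth
# and is limited by carrying capacity; all parameters are hypothetical.
visitors = 1_000.0        # stock
capacity = 10_000.0       # carrying capacity of the destination
attract_rate = 0.30       # word-of-mouth growth per year
churn_rate = 0.10         # baseline loss of repeat visitors per year
dt = 0.25                 # years per step

for quarter in range(40):                            # 10 simulated years
    inflow = attract_rate * visitors * (1 - visitors / capacity)
    outflow = churn_rate * visitors
    visitors += dt * (inflow - outflow)               # Euler integration of the stock
    if quarter % 4 == 3:
        print(f"year {quarter // 4 + 1}: {visitors:,.0f} visitors")
```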