932 results for Predictive controllers
Abstract:
Despite the increase in the use of natural compounds in place of synthetic derivatives as antioxidants in food products, the extent of this substitution is limited by cost constraints. Thus, the objective of this study was to explore synergism in the antioxidant activity of natural compounds, for further application in food products. Three hydrosoluble compounds (x1 = caffeic acid, x2 = carnosic acid, and x3 = glutathione) and three liposoluble compounds (x1 = quercetin, x2 = rutin, and x3 = genistein) were mixed according to a simplex centroid design. The antioxidant activity of the mixtures was analyzed by the ferric reducing antioxidant power (FRAP) and oxygen radical absorbance capacity (ORAC) methodologies, and activity was also evaluated in an oxidized mixed micelle prepared with linoleic acid (LAOX). Cubic polynomial models with predictive capacity were obtained when the mixtures were submitted to the LAOX methodology (ŷ = 0.56x1 + 0.59x2 + 0.04x3 + 0.41x1x2 - 0.41x1x3 - 1.12x2x3 - 4.01x1x2x3) for the hydrosoluble compounds, and to the FRAP methodology (ŷ = 3.26x1 + 2.39x2 + 0.04x3 + 1.51x1x2 + 1.03x1x3 + 0.29x2x3 + 3.20x1x2x3) for the liposoluble compounds. Optimization of the models suggested that a mixture containing 47% caffeic acid + 53% carnosic acid and a mixture containing 67% quercetin + 33% rutin were potential synergistic combinations for further evaluation using a food matrix.
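The fitted cubic mixture model for the hydrosoluble compounds (LAOX assay) can be evaluated and optimized with a simple grid search over the mixture simplex. A minimal sketch using the coefficients quoted above; the grid-search optimizer is my own illustration, not the authors' procedure:

```python
def laox_hat(x1, x2, x3):
    # Fitted cubic mixture model (hydrosoluble compounds, LAOX assay)
    return (0.56 * x1 + 0.59 * x2 + 0.04 * x3
            + 0.41 * x1 * x2 - 0.41 * x1 * x3 - 1.12 * x2 * x3
            - 4.01 * x1 * x2 * x3)

# Grid search over the simplex x1 + x2 + x3 = 1 (step 0.01)
best = max((laox_hat(i / 100, j / 100, (100 - i - j) / 100), i / 100, j / 100)
           for i in range(101) for j in range(101 - i))
print(best)  # (best predicted activity, x1*, x2*)
```

On this grid the maximizer lands at roughly 46-47% caffeic acid with no glutathione, consistent with the 47% caffeic acid + 53% carnosic acid blend reported above.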
Abstract:
In this work total reflection X-ray fluorescence spectrometry has been employed to determine trace element concentrations in different human breast tissues (normal, normal adjacent, benign and malignant). A multivariate discriminant analysis of observed levels was performed in order to build a predictive model and perform tissue-type classifications. A total of 83 breast tissue samples were studied. Results showed the presence of Ca, Ti, Fe, Cu and Zn in all analyzed samples. All trace elements, except Ti, were found in higher concentrations in both malignant and benign tissues, when compared to normal tissues and normal adjacent tissues. In addition, the concentration of Fe was higher in malignant tissues than in benign neoplastic tissues. An opposite behavior was observed for Ca, Cu and Zn. Results have shown that discriminant analysis was able to successfully identify differences between trace element distributions from normal and malignant tissues with an overall accuracy of 80% and 65% for independent and paired breast samples respectively, and of 87% for benign and malignant tissues. (C) 2009 Elsevier B.V. All rights reserved.
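As a toy illustration of the tissue-type classification step, a nearest-centroid rule (a much simpler stand-in for the multivariate discriminant analysis used in the study, with made-up concentration values) might look like:

```python
import math

def centroid(rows):
    # Component-wise mean of a list of feature vectors
    return [sum(col) / len(rows) for col in zip(*rows)]

def classify(sample, classes):
    # Nearest-centroid rule in (Ca, Ti, Fe, Cu, Zn) concentration space
    return min(classes, key=lambda c: math.dist(sample, centroid(classes[c])))

# Hypothetical training concentrations (arbitrary units, not study data)
classes = {
    "normal":    [[1.0, 0.5, 2.0, 0.3, 1.1], [1.2, 0.6, 2.2, 0.4, 1.0]],
    "malignant": [[2.5, 0.4, 4.0, 0.9, 2.3], [2.7, 0.5, 4.4, 1.0, 2.5]],
}
print(classify([2.6, 0.45, 4.1, 0.95, 2.4], classes))
```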
Abstract:
In recent years, the phrase 'genomic medicine' has increasingly been used to describe a new development in medicine that holds great promise for human health. This new approach to health care uses the knowledge of an individual's genetic make-up to identify those that are at a higher risk of developing certain diseases and to intervene at an earlier stage to prevent these diseases. Identifying genes that are involved in disease aetiology will provide researchers with tools to develop better treatments and cures. A major role within this field is attributed to 'predictive genomic medicine', which proposes screening healthy individuals to identify those who carry alleles that increase their susceptibility to common diseases, such as cancers and heart disease. Physicians could then intervene even before the disease manifests and advise individuals with a higher genetic risk to change their behaviour - for instance, to exercise or to eat a healthier diet - or offer drugs or other medical treatment to reduce their chances of developing these diseases. These promises have fallen on fertile ground among politicians, health-care providers and the general public, particularly in light of the increasing costs of health care in developed societies. Various countries have established databases on the DNA and health information of whole populations as a first step towards genomic medicine. Biomedical research has also identified a large number of genes that could be used to predict someone's risk of developing a certain disorder. But it would be premature to assume that genomic medicine will soon become reality, as many problems remain to be solved. Our knowledge about most disease genes and their roles is far from sufficient to make reliable predictions about a patient’s risk of actually developing a disease. In addition, genomic medicine will create new political, social, ethical and economic challenges that will have to be addressed in the near future.
Abstract:
This study investigated the ability of negatively versus positively perceived stress to predict the outcome of treatment for binge eating disorder (BED). Participants were 62 obese women satisfying the DSM-IV research criteria for BED. Stress was measured using an instrument based on the Recent Life Change Questionnaire (RLCQ). Participants experiencing high negative stress during the study period reported a binge eating frequency three times greater than that reported by participants experiencing low negative stress (2.14 vs. 0.65 binge-days/week). Negative stress predicted how fast an individual would reduce binge eating and demonstrated more predictive power than positive stress.
Abstract:
In this second counterpoint article, we refute the claims of Landy, Locke, and Conte, and make the more specific case for our perspective, which is that ability-based models of emotional intelligence have value to add in the domain of organizational psychology. In this article, we address remaining issues, such as general concerns about the tenor and tone of the debates on this topic, a tendency for detractors to collapse across emotional intelligence models when reviewing the evidence and making judgments, and a subsequent penchant to discount all models, including the ability-based one, as lacking validity. We specifically refute the following three claims from our critics with the most recent empirically based evidence: (1) emotional intelligence is dominated by opportunistic academics-turned-consultants who have amassed much fame and fortune based on a concept that is shabby science at best; (2) the measurement of emotional intelligence is grounded in unstable, psychometrically flawed instruments, which have not demonstrated appropriate discriminant and predictive validity to warrant their use; and (3) there is weak empirical evidence that emotional intelligence is related to anything of importance in organizations. We thus end with an overview of the empirical evidence supporting the role of emotional intelligence in organizational and social behavior.
Abstract:
Argumentation is modelled as a game where the payoffs are measured in terms of the probability that the claimed conclusion is, or is not, defeasibly provable, given a history of arguments that have actually been exchanged, and given the probability of the factual premises. The probability of a conclusion is calculated using a standard variant of Defeasible Logic, in combination with standard probability calculus. It is a new element of the present approach that the exchange of arguments is analysed with game-theoretical tools, yielding a prescriptive and to some extent even predictive account of the actual course of play. A brief comparison with existing argument-based dialogue approaches confirms that such a prescriptive account of the actual argumentation has been largely absent from the approaches proposed so far.
Abstract:
This review reflects the state of the art in the study of contact and dynamic phenomena occurring in cold roll forming. The importance of taking these phenomena into account is determined by the significant machine time and tooling costs spent on replacing worn-out forming rolls and adjusting equipment in cold roll forming. Predictive modelling of the tool wear caused by contact and dynamic phenomena can reduce production losses in this technological process.
Abstract:
Data mining is the process of identifying valid, implicit, previously unknown, potentially useful and understandable information from large databases. It is an important step in the process of knowledge discovery in databases (Olaru & Wehenkel, 1999). In a data mining process, input data can be structured, semi-structured, or unstructured. Data can be in text, categorical or numerical values. One of the important characteristics of data mining is its ability to handle data that are large in volume, distributed, time-variant, noisy, and high-dimensional. A large number of data mining algorithms have been developed for different applications. For example, association rules mining can be useful for market basket problems, clustering algorithms can be used to discover trends in unsupervised learning problems, classification algorithms can be applied in decision-making problems, and sequential and time series mining algorithms can be used in predicting events, fault detection, and other supervised learning problems (Vapnik, 1999). Classification is among the most important tasks in data mining, particularly for data mining applications in engineering fields. Together with regression, classification is mainly used for predictive modelling. So far, there have been a number of classification algorithms in practice. According to Sebastiani (2002), the main classification algorithms can be categorized as: decision tree and rule-based approaches such as C4.5 (Quinlan, 1996); probability methods such as the Bayesian classifier (Lewis, 1998); on-line methods such as Winnow (Littlestone, 1988) and CVFDT (Hulten, 2001); neural network methods (Rumelhart, Hinton & Williams, 1986); example-based methods such as k-nearest neighbours (Duda & Hart, 1973); and SVM (Cortes & Vapnik, 1995). Other important techniques for classification tasks include Associative Classification (Liu et al., 1998) and Ensemble Classification (Tumer, 1996).
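One of the example-based methods named above, k-nearest neighbours, fits in a few lines. A minimal sketch; the training data and choice of k are illustrative:

```python
import math
from collections import Counter

def knn_predict(train, query, k=3):
    # train: list of (feature_vector, label) pairs.
    # Majority vote among the k nearest neighbours by Euclidean distance.
    neighbours = sorted(train, key=lambda t: math.dist(t[0], query))[:k]
    return Counter(label for _, label in neighbours).most_common(1)[0][0]

train = [([0.0, 0.0], "a"), ([0.0, 1.0], "a"), ([0.0, 0.5], "a"),
         ([5.0, 5.0], "b"), ([5.0, 6.0], "b")]
print(knn_predict(train, [0.0, 0.2]))
```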
Abstract:
Globalisation, increasing complexity, and the need to address triple-bottom-line sustainability have seen the proliferation of Learning Organisations (LOs) which, by definition, have the capacity to anticipate environmental changes and economic opportunities and adapt accordingly. Such organisations use system dynamics modelling (SDM) for both strategic planning and the promotion of organisational learning. Although SDM has been applied in the context of tourism destination management for predictive reasons, the current literature does not analyse or recognise how this could be used as a foundation for an LO. This study introduces the concept of the Learning Tourism Destination (LTD) and discusses, on the basis of a review of six case studies, the potential of SDM as a tool for the implementation and enhancement of collective learning processes. The results reveal that SDM is capable of promoting communication between stakeholders and stimulating organisational learning. It is suggested that the LTD approach be further utilised and explored.
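At the core of system dynamics modelling is the stock-and-flow update. A minimal sketch; the variable names and parameter values are illustrative and not drawn from any of the six case studies:

```python
def simulate_stock(stock, inflow, outflow_rate, steps, dt=1.0):
    # Euler-step stock-and-flow update: d(stock)/dt = inflow - outflow_rate * stock
    history = [stock]
    for _ in range(steps):
        stock += dt * (inflow - outflow_rate * stock)
        history.append(stock)
    return history

# e.g. a visitor-numbers stock approaching the equilibrium inflow / outflow_rate
trajectory = simulate_stock(stock=0.0, inflow=10.0, outflow_rate=0.1, steps=100)
print(trajectory[-1])
```

Real SDM tools add feedback loops between many such stocks; this single-stock update is only the basic mechanism.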
Abstract:
T cells recognize peptide epitopes bound to major histocompatibility complex molecules. Human T-cell epitopes have diagnostic and therapeutic applications in autoimmune diseases. However, their accurate definition within an autoantigen by T-cell bioassay, usually proliferation, involves many costly peptides and a large amount of blood. We have therefore developed a strategy to predict T-cell epitopes and applied it to tyrosine phosphatase IA-2, an autoantigen in IDDM, and HLA-DR4(*0401). First, the binding of synthetic overlapping peptides encompassing IA-2 to purified DR4 was measured directly. Secondly, a large amount of HLA-DR4 binding data was analysed by alignment using a genetic algorithm and was used to train an artificial neural network to predict the affinity of binding. This bioinformatic prediction method was then validated experimentally and used to predict DR4-binding peptides in IA-2. The binding set encompassed 85% of experimentally determined T-cell epitopes. Both the experimental and bioinformatic methods had high negative predictive values, 92% and 95%, indicating that this strategy of combining experimental results with computer modelling should lead to a significant reduction in the amount of blood and the number of peptides required to define T-cell epitopes in humans.
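The negative predictive values quoted above come straight from a 2x2 confusion table. A minimal helper; the example counts are hypothetical, chosen only to reproduce a 92% NPV:

```python
def predictive_values(tp, fp, tn, fn):
    # PPV = TP / (TP + FP); NPV = TN / (TN + FN)
    return tp / (tp + fp), tn / (tn + fn)

# Hypothetical counts, not the study's data
ppv, npv = predictive_values(tp=40, fp=10, tn=92, fn=8)
print(ppv, npv)
```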
Abstract:
Previous work has identified several shortcomings in the ability of four spring wheat models and one barley model to simulate crop processes and resource utilization. This can have important implications when such models are used within systems models where the final soil water and nitrogen conditions of one crop define the starting conditions of the following crop. In an attempt to overcome these limitations and to reconcile a range of modelling approaches, existing model components that worked demonstrably well were combined with new components for aspects where existing capabilities were inadequate. This resulted in the Integrated Wheat Model (I_WHEAT), which was developed as a module of the cropping systems model APSIM. To increase the predictive capability of the model, process detail was reduced, where possible, by replacing groups of processes with conservative, biologically meaningful parameters. I_WHEAT does not contain a soil water or soil nitrogen balance. These are present as other modules of APSIM. In I_WHEAT, yield is simulated using a linear increase in harvest index whereby nitrogen or water limitations can lead to early termination of grain filling and hence cessation of harvest index increase. Dry matter increase is calculated either from the amount of intercepted radiation and radiation conversion efficiency or from the amount of water transpired and transpiration efficiency, depending on the most limiting resource. Leaf area and tiller formation are calculated from thermal time and a cultivar-specific phyllochron interval. Nitrogen limitation first reduces leaf area and then affects radiation conversion efficiency as it becomes more severe. Water or nitrogen limitations result in reduced leaf expansion, accelerated leaf senescence or tiller death. This reduces the radiation load on the crop canopy (i.e. demand for water) and can make nitrogen available for translocation to other organs.
Sensitive feedbacks between light interception and dry matter accumulation are avoided by having environmental effects acting directly on leaf area development, rather than via biomass production. This makes the model more stable across environments without losing the interactions between the different external influences. When comparing model output with models tested previously using data from a wide range of agro-climatic conditions, yield and biomass predictions were equal to the best of those models, but improvements could be demonstrated for simulating leaf area dynamics in response to water and nitrogen supply, kernel nitrogen content, and total water and nitrogen use. I_WHEAT does not require calibration for any of the environments tested. Further model improvement should concentrate on improving phenology simulations, a more thorough derivation of coefficients to describe leaf area development and a better quantification of some processes related to nitrogen dynamics. (C) 1998 Elsevier Science B.V.
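The linear harvest-index yield logic described above can be sketched as follows. Parameter values and the stress trigger are invented for illustration; I_WHEAT's actual routines are more involved:

```python
def simulate_yield(biomass, days, hi_rate=0.011, hi_max=0.55, stress_day=None):
    # Yield = biomass * harvest index (HI); HI rises linearly each day of
    # grain filling, and water/N stress terminates grain filling early.
    hi = 0.0
    for day in range(days):
        if stress_day is not None and day >= stress_day:
            break  # early termination of grain filling under stress
        hi = min(hi_max, hi + hi_rate)
    return biomass * hi

print(simulate_yield(1000.0, 40))                 # unstressed
print(simulate_yield(1000.0, 40, stress_day=20))  # grain filling cut short
```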
Abstract:
Multi-frequency bioimpedance analysis (MFBIA) was used to determine the impedance, reactance and resistance of 103 lamb carcasses (17.1-34.2 kg) immediately after slaughter and evisceration. Carcasses were halved, frozen and one half subsequently homogenized and analysed for water, crude protein and fat content. Three measures of carcass length were obtained. Diagonal length between the electrodes (right side biceps femoris to left side of neck) explained a greater proportion of the variance in water mass than did estimates of spinal length and was selected for use in the index L^2/Z to predict the mass of chemical components in the carcass. Use of impedance (Z) measured at the characteristic frequency (Zc) instead of 50 kHz (Z50) did not improve the power of the model to predict the mass of water, protein or fat in the carcass. While L^2/Z50 explained a significant proportion of variation in the masses of body water (r^2 = 0.64), protein (r^2 = 0.34) and fat (r^2 = 0.35), its inclusion in multi-variate indices offered small or no increases in predictive capacity when hot carcass weight (HCW) and a measure of rib fat-depth (GR) were present in the model. Optimized equations were able to account for 65-90% of the variance observed in the weight of chemical components in the carcass. It is concluded that single-frequency impedance data do not provide better prediction of carcass composition than can be obtained from measures of HCW and GR. Indices of intracellular water mass derived from impedance at zero frequency and the characteristic frequency explained a similar proportion of the variance in carcass protein mass as did the index L^2/Z50.
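The predictor at the heart of this study, the index L^2/Z, and the quoted r^2 values rest on two small computations. A sketch with fabricated example data:

```python
def impedance_index(length_cm, z_ohm):
    # L^2/Z, the classic bioimpedance predictor of body-water mass
    return length_cm ** 2 / z_ohm

def r_squared(xs, ys):
    # Coefficient of determination for a simple linear fit of ys on xs
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    return sxy * sxy / (sxx * syy)

# Fabricated carcass data: index values vs. measured water mass (kg)
indices = [impedance_index(L, z) for L, z in [(90.0, 60.0), (95.0, 55.0), (100.0, 50.0)]]
water = [12.0, 14.5, 17.5]
print(r_squared(indices, water))
```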
Abstract:
A study was conducted to examine the relationships among eating pathology, weight dissatisfaction and dieting, and unwanted sexual experiences in childhood. An unselected community sample of 201 young and 268 middle-aged women were administered questionnaires assessing eating behaviors and attitudes, and past and current sexual abuse. Results showed differential relationships among these factors for the two age cohorts: for young women, past sexual abuse predicted weight dissatisfaction, but not dieting or disordered eating behaviors, whereas for middle-aged women, past abuse was predictive of disordered eating, but not dieting or weight dissatisfaction. Current physical or sexual abuse was also found to be predictive of disordered eating for the young women. These findings underscore the complexity of the relationships among unwanted sexual experiences and eating and weight pathology, and suggest that the timing of sexual abuse, and the age of the woman, are important mediating factors. (C) 1998 Elsevier Science Inc.
Abstract:
Bioelectrical impedance analysis (BIA) offers the potential for a simple, portable and relatively inexpensive technique for the in vivo measurement of total body water (TBW). The potential of BIA as a technique of body composition analysis is even greater when one considers that body water can be used as a surrogate measure of lean body mass. However, BIA has not found universal acceptance even with the introduction of multi-frequency BIA (MFBIA) which, potentially, may improve the predictive accuracy of the measurement. There are a number of reasons for this lack of acceptance, although perhaps the major reason is that no single algorithm has been developed which can be applied to all subject groups. This may be due, in part, to the commonly used wrist-to-ankle protocol which is not indicated by the basic theory of bioimpedance, where the body is considered as five interconnecting cylinders. Several workers have suggested the use of segmental BIA measurements to provide a protocol more in keeping with basic theory. However, there are other difficulties associated with the application of BIA, such as effects of hydration and ion status, posture and fluid distribution. A further putative advantage of MFBIA is the independent assessment not only of TBW but also of the extracellular fluid volume (ECW), hence heralding the possibility of being able to assess the fluid distribution between these compartments. Results of studies in this area have been, to date, mixed. Whereas strong relationships of impedance values at low frequencies with ECW, and at high frequencies with TBW, have been reported, changes in impedance are not always well correlated with changes in the size of the fluid compartments (assessed by alternative and more direct means) in pathological conditions. Furthermore, the theoretical advantages of Cole-Cole modelling over selected frequency prediction have not always been apparent.
This review will consider the principles, methodology and applications of BIA. The principles and methodology will be considered in relation to the basic theory of BIA and difficulties experienced in its application. The relative merits of single and multiple frequency BIA will be addressed, with particular attention to the latter's role in the assessment of compartmental fluid volumes. (C) 1998 Elsevier Science Ltd. All rights reserved.
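The Cole-Cole modelling mentioned above fits measured impedance spectra to the Cole equation, in which impedance tends to R0 at low frequency (current confined to the ECW) and to R_inf at high frequency (current also penetrates cells, reflecting TBW). A minimal sketch with illustrative parameter values:

```python
def cole_impedance(f_khz, r0=750.0, r_inf=450.0, fc=50.0, alpha=0.7):
    # Cole model: Z(f) = R_inf + (R0 - R_inf) / (1 + (j*f/fc)**alpha)
    # f_khz: measurement frequency; fc: characteristic frequency (both kHz)
    return r_inf + (r0 - r_inf) / (1 + (1j * f_khz / fc) ** alpha)

print(abs(cole_impedance(0.001)))  # near R0: low-frequency current sees only ECW
print(abs(cole_impedance(1e6)))    # near R_inf: high-frequency current sees TBW
```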
Abstract:
Motivation: Prediction methods for identifying binding peptides could minimize the number of peptides required to be synthesized and assayed, and thereby facilitate the identification of potential T-cell epitopes. We developed a bioinformatic method for the prediction of peptide binding to MHC class II molecules. Results: Experimental binding data and expert knowledge of anchor positions and binding motifs were combined with an evolutionary algorithm (EA) and an artificial neural network (ANN): binding data extraction --> peptide alignment --> ANN training and classification. This method, termed PERUN, was implemented for the prediction of peptides that bind to HLA-DR4(B1*0401). The respective positive predictive values of PERUN predictions of high-, moderate-, low- and zero-affinity binders were assessed as 0.8, 0.7, 0.5 and 0.8 by cross-validation, and 1.0, 0.8, 0.3 and 0.7 by experimental binding. This illustrates the synergy between experimentation and computer modeling, and its application to the identification of potential immunotherapeutic peptides.