905 results for "Mathematical prediction".
Abstract:
Superheater corrosion causes vast annual losses for power companies. With a reliable corrosion prediction method, plants can be designed accordingly, and knowledge of fuel selection and determination of process conditions may be utilized to minimize superheater corrosion. Growing interest in using recycled fuels creates additional demands on the prediction of corrosion potential. Models depending on corrosion theories will fail if the relations between the inputs and the output are poorly known. A prediction model based on fuzzy logic and an artificial neural network is able to improve its performance as the amount of data increases. The corrosion rate of a superheater material can most reliably be detected with a test done in a test combustor or in a commercial boiler. The steel samples can be located in a special, temperature-controlled probe and exposed to the corrosive environment for a desired time. These tests give information about the average corrosion potential in that environment. Samples may also be cut from superheaters during shutdowns. The analysis of samples taken from probes or superheaters after exposure to a corrosive environment is a demanding task: if the corrosive contaminants can be reliably analyzed, the corrosion chemistry can be determined and an estimate of the material lifetime can be given. In cases where the reason for corrosion is not clear, the determination of the corrosion chemistry and the lifetime estimation are more demanding. In order to provide a laboratory tool for the analysis and prediction, a new approach was chosen. During this study, the following tools were generated: · A model for the prediction of superheater fireside corrosion, based on fuzzy logic and an artificial neural network, built upon a corrosion database developed from fuel and bed material analyses and measured corrosion data. The developed model predicts superheater corrosion with high accuracy at the early stages of a project.
· An adaptive corrosion analysis tool based on image analysis, constructed as an expert system. This system utilizes the implementation of user-defined algorithms, which allows the development of an artificially intelligent system for the task. According to the results of the analyses, several new rules were developed for the determination of the degree and type of corrosion. By combining these two tools, a user-friendly expert system for the prediction and analysis of superheater fireside corrosion was developed. This tool may also be used to minimize corrosion risks in the design of fluidized bed boilers.
Abstract:
Fuzzy set theory and fuzzy logic are studied from a mathematical point of view. The main goal is to investigate common mathematical structures in various fuzzy logical inference systems and to establish a general mathematical basis for fuzzy logic when considered as a multi-valued logic. The study is composed of six distinct publications. The first paper deals with Mattila's LPC+Ch Calculus. This fuzzy inference system is an attempt to introduce linguistic objects into mathematical logic without defining these objects mathematically. LPC+Ch Calculus is analyzed from an algebraic point of view, and it is demonstrated that a suitable factorization of the set of well-formed formulae (in fact, the Lindenbaum algebra) leads to a structure called ET-algebra, introduced in the beginning of the paper. On its basis, all the theorems presented by Mattila, and many others, can be proved in a simple way, as demonstrated in Lemmas 1 and 2 and Propositions 1-3. The conclusion critically discusses some other issues of LPC+Ch Calculus, especially that no formal semantics for it is given. In the second paper, Sanchez's characterization of the solvability of the relational equation RoX=T, where R, X, T are fuzzy relations, X the unknown one, and o the minimum-induced composition, is extended to compositions induced by more general products in a general value lattice. Moreover, the procedure also applies to systems of equations. In the third publication, common features in various fuzzy logical systems are investigated. It turns out that adjoint couples and residuated lattices are very often present, though not always explicitly expressed. Some minor new results are also proved. The fourth study concerns Novak's paper, in which Novak introduced first-order fuzzy logic and proved, among other things, the semantico-syntactical completeness of this logic. He also demonstrated that the algebra of his logic is a generalized residuated lattice.
It is proved that the examination of Novak's logic can be reduced to the examination of locally finite MV-algebras. In the fifth paper, a multi-valued sentential logic with values of truth in an injective MV-algebra is introduced and the axiomatizability of this logic is proved. The paper develops some ideas of Goguen and generalizes the results of Pavelka on the unit interval. Our proof of completeness is purely algebraic. A corollary of the Completeness Theorem is that fuzzy logic on the unit interval is semantically complete if, and only if, the algebra of the values of truth is a complete MV-algebra. The Compactness Theorem holds in our well-defined fuzzy sentential logic, while the Deduction Theorem and the Finiteness Theorem do not. Because of its generality and good behaviour, MV-valued logic can be regarded as a mathematical basis of fuzzy reasoning. The last paper is a continuation of the fifth study. The semantics and syntax of fuzzy predicate logic with values of truth in an injective MV-algebra are introduced, and a list of universally valid sentences is established. The system is proved to be semantically complete. This proof is based on an idea utilizing some elementary properties of injective MV-algebras and MV-homomorphisms, and is purely algebraic.
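The relational equation RoX=T mentioned above can be illustrated numerically. The sketch below, with illustrative values not taken from the paper, shows the classical sup-min case: when the equation is solvable, Sanchez's candidate built from the Gödel implication is its greatest solution.

```python
# Sketch of the fuzzy relational equation R o X = T under sup-min
# composition. Sanchez's candidate greatest solution is
# X^(j,k) = min_i (R(i,j) -> T(i,k)), with the Goedel implication
# a -> b = 1 if a <= b else b. Values are illustrative.

def godel_imp(a, b):
    return 1.0 if a <= b else b

def sup_min(R, X):
    """Sup-min composition: (R o X)(i,k) = max_j min(R[i][j], X[j][k])."""
    return [[max(min(R[i][j], X[j][k]) for j in range(len(X)))
             for k in range(len(X[0]))] for i in range(len(R))]

def greatest_solution(R, T):
    """Sanchez's candidate greatest solution of R o X = T."""
    m, n, p = len(R), len(R[0]), len(T[0])
    return [[min(godel_imp(R[i][j], T[i][k]) for i in range(m))
             for k in range(p)] for j in range(n)]

R = [[0.8, 0.3],
     [0.2, 0.9]]
T = [[0.8, 0.3],
     [0.2, 0.9]]   # T = R o I here, so the equation is solvable

X = greatest_solution(R, T)
assert sup_min(R, X) == T   # the candidate actually solves the equation
```

The extension treated in the paper replaces min by more general lattice products; the same candidate-and-check structure carries over.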
Abstract:
Near-infrared spectroscopy (NIRS) was used to analyse the crude protein content of dried and milled samples of wheat and to discriminate samples according to their stage of growth. A calibration set of 72 samples from three growth stages of wheat (tillering, heading and harvest) and a validation set of 28 samples were collected for this purpose. Principal components analysis (PCA) of the calibration set discriminated groups of samples according to the growth stage of the wheat. Based on these differences, a classification procedure (SIMCA) showed a very accurate classification of the validation set samples: all of them were successfully classified in each group using this procedure when both the residual and the leverage were used in the classification criteria. Looking only at the residuals, all the samples were also correctly classified, except one of the tillering stage that was assigned to both the tillering and heading stages. Finally, the determination of the crude protein content of these samples was considered in two ways: building up a global model for all the growth stages, and building up local models for each stage separately. The best prediction results for crude protein were obtained using a global model for samples in the first two growth stages (tillering and heading), and using a local model for the harvest stage samples.
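The PCA step that separates the growth stages can be sketched in a few lines. The tiny "spectra" below are synthetic, not the wheat data: two groups of samples are mean-centered and projected onto the first principal component, which cleanly separates them.

```python
# Minimal PCA sketch in the spirit of the analysis above, on synthetic
# data (NOT the wheat spectra): project samples onto the first
# principal component obtained from the SVD of the centered data.
import numpy as np

X = np.array([[1.0, 0.9, 0.1, 0.0],   # group 1 (e.g. one growth stage)
              [0.9, 1.1, 0.0, 0.1],
              [1.1, 1.0, 0.1, 0.1],
              [0.1, 0.0, 1.0, 0.9],   # group 2 (another growth stage)
              [0.0, 0.1, 0.9, 1.1],
              [0.1, 0.1, 1.1, 1.0]])

Xc = X - X.mean(axis=0)          # mean-center each variable
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
scores = Xc @ Vt[0]              # sample scores on the first PC

g1, g2 = scores[:3], scores[3:]  # the groups do not overlap on PC1
assert max(g1) < min(g2) or max(g2) < min(g1)
```

SIMCA then builds one PCA model per class and assigns new samples by their residual and leverage with respect to each class model, as described above.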
Abstract:
Regression equations predicting dissectable muscle weight in rabbits from external measurements are presented. Bone weight and the weight of muscle groups in the carcass were also predicted. The predictive capacity of external measurements, retail cuts and muscle groups for total muscle, percent muscle, total bone and the muscle-to-bone ratio was studied separately. Measurements on dissected retail cuts should be included in order to obtain good equations for the prediction of percent muscle in the carcass. Equations for predicting the muscle-to-bone ratio using external measurements and data from the dissection of one hind leg are suggested. The equations generally had high coefficients of determination: 0.91 for the prediction of dissectable muscle, and 0.79 for percent muscle in the carcass.
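The prediction equations above are ordinary least-squares regressions judged by their coefficient of determination. A minimal sketch with invented numbers (not the rabbit data) shows the computation of R²:

```python
# Sketch of a prediction equation of the kind described above: least
# squares predicting a weight from one external measurement, with the
# coefficient of determination R^2 as the quality measure. The data
# below are synthetic, for illustration only.
import numpy as np

length = np.array([30.0, 32.0, 35.0, 37.0, 40.0])       # external measurement
muscle = np.array([610.0, 655.0, 730.0, 770.0, 845.0])  # dissected muscle, g

A = np.column_stack([np.ones_like(length), length])  # intercept + slope
coef, *_ = np.linalg.lstsq(A, muscle, rcond=None)

pred = A @ coef
ss_res = ((muscle - pred) ** 2).sum()
ss_tot = ((muscle - muscle.mean()) ** 2).sum()
r2 = 1 - ss_res / ss_tot             # coefficient of determination
assert 0.99 < r2 <= 1.0              # near-linear synthetic data
```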
Abstract:
BACKGROUND: Obesity is strongly associated with major depressive disorder (MDD) and various other diseases. Genome-wide association studies have identified multiple risk loci robustly associated with body mass index (BMI). In this study, we aimed to investigate whether a genetic risk score (GRS) combining multiple BMI risk loci might have utility in the prediction of obesity in patients with MDD. METHODS: Linear and logistic regression models were conducted to predict BMI and obesity, respectively, in three independent large case-control studies of major depression (Radiant, GSK-Munich, PsyCoLaus). The analyses were first performed in the whole sample and then separately in depressed cases and controls. An unweighted GRS was calculated by summation of the number of risk alleles. A weighted GRS was calculated as the sum of risk alleles at each locus multiplied by their effect sizes. Receiver operating characteristic (ROC) analysis was used to compare the discriminatory ability of predictors of obesity. RESULTS: In the discovery phase, a total of 2,521 participants (1,895 depressed patients and 626 controls) were included from the Radiant study. Both unweighted and weighted GRS were highly associated with BMI (P < 0.001) but explained only a modest amount of variance. Adding 'traditional' risk factors to the GRS significantly improved the predictive ability, with the area under the curve (AUC) in the ROC analysis increasing from 0.58 to 0.66 (95% CI, 0.62-0.68; χ² = 27.68; P < 0.0001). Although there was no formal evidence of interaction between depression status and GRS, there was further improvement in AUC when depression status was added to the model (AUC = 0.71; 95% CI, 0.68-0.73; χ² = 28.64; P < 0.0001). We further found that the GRS accounted for more variance of BMI in depressed patients than in healthy controls. Again, the GRS discriminated obesity better in depressed patients compared to healthy controls.
We later replicated these analyses in two independent samples (GSK-Munich and PsyCoLaus) and found similar results. CONCLUSIONS: A GRS proved to be a highly significant predictor of obesity in people with MDD but accounted for only a modest amount of variance. Nevertheless, as more risk loci are identified, combining a GRS approach with information on non-genetic risk factors could become a useful strategy for identifying MDD patients at higher risk of developing obesity.
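The two score constructions described in the Methods can be sketched directly. Allele counts and effect sizes below are illustrative, not values from the study:

```python
# Sketch of the two genetic risk scores described above: the unweighted
# GRS sums risk-allele counts (0, 1 or 2 per locus); the weighted GRS
# multiplies each count by its per-locus effect size.

def unweighted_grs(allele_counts):
    return sum(allele_counts)

def weighted_grs(allele_counts, effect_sizes):
    return sum(c * b for c, b in zip(allele_counts, effect_sizes))

counts = [2, 1, 0, 1]             # risk alleles at four loci (illustrative)
betas = [0.30, 0.12, 0.25, 0.08]  # per-allele effect sizes (illustrative)

assert unweighted_grs(counts) == 4
assert abs(weighted_grs(counts, betas) - 0.80) < 1e-9
```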
Abstract:
Ultrasonographic detection of subclinical atherosclerosis improves cardiovascular risk stratification, but uncertainty persists about the most discriminative method to apply. In this study, we found that the "atherosclerosis burden score" (ABS), a novel straightforward ultrasonographic score that sums the number of carotid and femoral arterial bifurcations with plaques, significantly outperformed common carotid intima-media thickness, carotid mean/maximal thickness, and carotid/femoral plaque scores for the detection of coronary artery disease (CAD) (receiver operating characteristic (ROC) area under the curve (AUC) = 0.79; P = 0.027 to < 0.001 versus the other five ultrasonographic endpoints) in 203 patients undergoing coronary angiography. ABS was also more strongly correlated with CAD extension (R = 0.55; P < 0.001). Furthermore, in a second group of 1128 patients without cardiovascular disease, ABS was weakly correlated with the European Society of Cardiology chart risk categories (R² = 0.21), indicating that ABS provides information beyond usual cardiovascular risk factor-based risk stratification. Pending prospective studies on hard cardiovascular endpoints, ABS appears to be a promising tool in primary prevention.
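As defined above, ABS is simply a count of plaque-bearing bifurcations. A minimal sketch (the field names for the four bifurcations are illustrative):

```python
# Sketch of the atherosclerosis burden score (ABS) as described above:
# the number of carotid and femoral bifurcations carrying at least one
# plaque on ultrasound. Bifurcation names are illustrative.

def abs_score(plaque_present):
    """plaque_present: mapping bifurcation -> bool (plaque seen on US)."""
    return sum(plaque_present.values())

patient = {"right_carotid": True, "left_carotid": False,
           "right_femoral": True, "left_femoral": True}
assert abs_score(patient) == 3   # ABS ranges from 0 to 4
```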
Abstract:
In this commentary, we argue that the term 'prediction' is overused when in fact, referring to the foundational writings of de Finetti, the correct term should be 'inference'. In particular, we intend (i) to summarize and clarify relevant subject matter on prediction from established statistical theory, and (ii) to point out the logic of this understanding with respect to practical uses of the term prediction. Written from an interdisciplinary perspective, associating statistics and forensic science as an example, this discussion also connects to related fields such as medical diagnosis and other areas of application where reasoning based on scientific results is practiced in societally relevant contexts. This includes forensic psychology, which uses prediction as part of its vocabulary when dealing with matters that arise in the course of legal proceedings.
Abstract:
BACKGROUND: After cardiac surgery with cardiopulmonary bypass (CPB), acquired coagulopathy often leads to post-CPB bleeding. Though multifactorial in origin, this coagulopathy is often aggravated by deficient fibrinogen levels. OBJECTIVE: To assess whether laboratory and thrombelastometric testing on CPB can predict plasma fibrinogen immediately after CPB weaning. PATIENTS / METHODS: This prospective study in 110 patients undergoing major cardiovascular surgery at risk of post-CPB bleeding compares fibrinogen level (Clauss method) and function (fibrin-specific thrombelastometry) in order to study the predictability of their course early after termination of CPB. Linear regression analysis and receiver operating characteristics were used to determine correlations and predictive accuracy. RESULTS: Quantitative estimation of post-CPB Clauss fibrinogen from on-CPB fibrinogen was feasible with small bias (+0.19 g/l), but with poor precision and a percentage of error >30%. A clinically useful alternative approach was developed by using on-CPB A10 to predict a Clauss fibrinogen range of interest instead of a discrete level. An on-CPB A10 ≤10 mm identified patients with a post-CPB Clauss fibrinogen of ≤1.5 g/l with a sensitivity of 0.99 and a positive predictive value of 0.60; it also identified those without a post-CPB Clauss fibrinogen <2.0 g/l with a specificity of 0.83. CONCLUSIONS: When measured on CPB prior to weaning, a FIBTEM A10 ≤10 mm is an early alert for post-CPB fibrinogen levels below or within the substitution range (1.5-2.0 g/l) recommended in case of post-CPB coagulopathic bleeding. This helps to minimize the delay to data-based hemostatic management after weaning from CPB.
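The reported test characteristics arise from dichotomizing both measurements, as sketched below. The six patients are invented to illustrate the arithmetic of sensitivity and positive predictive value; they are not study data and do not reproduce the study's exact figures.

```python
# Sketch of the cutoff logic described above: flag patients with
# on-CPB FIBTEM A10 <= 10 mm and compare against the condition
# post-CPB Clauss fibrinogen <= 1.5 g/l. Invented illustrative data.
a10 = [6, 8, 9, 12, 14, 7]            # on-CPB FIBTEM A10, mm
fib = [1.1, 1.3, 1.7, 2.1, 2.3, 1.4]  # post-CPB Clauss fibrinogen, g/l

flagged = [a <= 10 for a in a10]      # test positive
low_fib = [f <= 1.5 for f in fib]     # condition present

tp = sum(t and c for t, c in zip(flagged, low_fib))
fp = sum(t and not c for t, c in zip(flagged, low_fib))
fn = sum(not t and c for t, c in zip(flagged, low_fib))

sensitivity = tp / (tp + fn)
ppv = tp / (tp + fp)
assert sensitivity == 1.0   # every low-fibrinogen patient was flagged
assert ppv == 0.75          # 3 of 4 flagged patients truly low
```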
Abstract:
The heat transfer of recovery boilers delivered by Andritz-Ahlstrom was studied with the ANITA 2.20 design program, using feedback calculation as an aid. The data for the calculation were obtained from values measured during the boilers' guarantee tests. The measurements were carried out by Andritz-Ahlstrom personnel with the assistance of mill staff. Since the feedback calculation was based on the measurement results, a certain amount of error was naturally present. First, the balances over both economizers were calculated, separately and together, with the Excel spreadsheet program; this gave the assumed flue gas flow in the boiler. The heat transfer surfaces were then adjusted to match reality by changing the overall fouling factor. The factors varied between about 0.4 and 1.6, depending on the boiler type and on ANITA's assumption for the fouling of the heat transfer surfaces. It was found that no single definite reason for the deviation of the heat transfer surfaces from the assumed behaviour could be identified; there were many causes for the deviations. For example, the size of the front cavity was found to have even a large effect on the performance of the superheaters, especially the first superheater in the flue gas flow. In general, the other superheaters were found to behave as expected. The performance of the boiler bank and the economizers was studied somewhat more narrowly, and they were found to operate considerably more stably than the superheaters. The fouling factors varied about ±20 % from the assumed values.
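The feedback-calculation idea above can be sketched in one formula: the modelled duty of a heat transfer surface is matched to the measured duty by scaling the clean heat-transfer coefficient with an overall fouling factor. All numbers below are illustrative assumptions, not values from the guarantee tests, and the simple Q = U·A·LMTD form is a generic textbook model, not ANITA's internal one.

```python
# Sketch: tune the overall fouling factor so that the modelled heat
# duty Q = (fouling_factor * U_clean) * A * LMTD reproduces the
# measured duty. Illustrative numbers only.

def duty(u_clean, fouling_factor, area, lmtd):
    """Heat duty in kW for a surface with a fouling-scaled coefficient."""
    return fouling_factor * u_clean * area * lmtd / 1000.0

u_clean = 60.0       # W/(m^2 K), assumed clean coefficient
area = 500.0         # m^2, assumed surface area
lmtd = 120.0         # K, assumed log-mean temperature difference
q_measured = 2880.0  # kW, assumed measured duty

# the fouling factor that reproduces the measured duty
ff = q_measured * 1000.0 / (u_clean * area * lmtd)
assert abs(duty(u_clean, ff, area, lmtd) - q_measured) < 1e-6
assert 0.4 <= ff <= 1.6   # within the range reported above
```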
Abstract:
The purpose of the research is to determine the practical profit that can be achieved using neural network methods as a prediction instrument. The thesis investigates the ability of neural networks to forecast future events, tested on the example of price prediction during intraday trading on the stock market. The experiments predict average 1, 2, 5 and 10 minute prices based on one day of data, using two different types of forecasting systems: one based on recurrent neural networks and one on backpropagation neural networks. The precision of the predictions is assessed by the absolute error and the error of market direction. The economic effectiveness is estimated by a special trading system. In conclusion, the best structures of neural networks are tested with data over a 31-day interval. The best results for the average percent of profit from one transaction (buying + selling) are 0.06668654, 0.188299453, 0.349854787 and 0.453178626, achieved for prediction periods of 1, 2, 5 and 10 minutes, respectively. The investigation may be of interest to investors who have access to a fast information channel with the possibility of every-minute data refreshment.
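The two error measures used above to judge the forecasts can be sketched directly; prices below are invented for illustration:

```python
# Sketch of the two evaluation measures described above: the absolute
# error of a predicted price, and the direction error (whether the
# predicted move and the real move have the same sign).

def absolute_error(pred, actual):
    return abs(pred - actual)

def direction_correct(last, pred, actual):
    """True when the forecast move and the real move agree in sign."""
    return (pred - last) * (actual - last) > 0

last_price = 100.0               # illustrative prices
predicted, actual = 100.8, 100.5

assert abs(absolute_error(predicted, actual) - 0.3) < 1e-9
assert direction_correct(last_price, predicted, actual)  # both moved up
```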
Abstract:
Objectives: Pancreatic surgery remains associated with important postoperative morbidity. Efforts are most commonly concentrated on decreasing this morbidity, but early detection of patients at risk of complications could be another valuable strategy. A simple score predicting complications after pancreaticoduodenectomy has recently been published by Braga et al. This study aimed to validate this score and discuss its possible clinical implications. Methods: From 2000 to 2012, 245 patients underwent pancreaticoduodenectomy in our department. Postoperative complications were graded according to the Dindo-Clavien classification. The Braga score is based on four parameters: the American Society of Anesthesiologists (ASA) score, pancreatic texture, Wirsung duct (main pancreatic duct) diameter, and intraoperative blood loss. An overall risk score (from 0 to 15) can be calculated for each patient. The discriminant power of the score was calculated using a receiver operating characteristic (ROC) curve. Results: Major complications occurred in 31% of patients, compared to 17% in Braga's data. Pancreatic texture and blood loss were independently statistically significant for increased morbidity. The areas under the curve were 0.95 and 0.99 for the four risk categories (0-3, 4-7, 8-11 and 12-15) and for individual scores (0-15), respectively. Conclusions: The Braga score discriminates well between minor and major complications. Our validation suggests that it can be used as a prognostic tool for major complications after pancreaticoduodenectomy. The clinical implications, i.e. whether postoperative treatment strategies should be adapted according to the patient's individual risk, remain to be elucidated.
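The ROC analysis used to assess discriminant power has a convenient probabilistic reading: the AUC equals the probability that a randomly chosen patient with a major complication receives a higher score than one without (the Mann-Whitney formulation). The scores below are illustrative, not study data:

```python
# Sketch of AUC as pairwise comparison of scores between the two
# outcome groups (Mann-Whitney formulation); ties count one half.

def roc_auc(scores_pos, scores_neg):
    wins = sum((p > n) + 0.5 * (p == n)
               for p in scores_pos for n in scores_neg)
    return wins / (len(scores_pos) * len(scores_neg))

major = [12, 9, 14, 8]   # illustrative scores, major complications
minor = [3, 5, 2, 7, 4]  # illustrative scores, no major complication

auc = roc_auc(major, minor)
assert auc == 1.0   # perfectly separated illustrative scores
```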
Abstract:
Objective: We aimed to determine the validity of two risk scores for patients with non-muscle invasive bladder cancer in different European settings, in patients with primary tumours. Methods: We included 1,892 patients with primary stage Ta or T1 non-muscle invasive bladder cancer who underwent a transurethral resection in Spain (n = 973), the Netherlands (n = 639), or Denmark (n = 280). We evaluated recurrence-free survival and progression-free survival according to the European Organisation for Research and Treatment of Cancer (EORTC) and the Spanish Urological Club for Oncological Treatment (CUETO) risk scores for each patient and used the concordance index (c-index) to indicate discriminative ability. Results: The three cohorts were comparable according to age and sex, but the Danish cohort had a larger proportion of patients with high stage and grade at diagnosis (p < 0.01). At least one recurrence occurred in 839 (44%) patients, and 258 (14%) patients had a progression during a median follow-up of 74 months. Patients from Denmark had the highest 10-year recurrence and progression rates (75% and 24%, respectively), whereas patients from Spain had the lowest rates (34% and 10%, respectively). The EORTC and CUETO risk scores both predicted progression better than recurrence, with c-indices ranging from 0.72 to 0.82 for progression and from 0.55 to 0.61 for recurrence. Conclusion: The EORTC and CUETO risk scores can reasonably predict progression, while prediction of recurrence is more difficult. New prognostic markers are needed to better predict recurrence of tumours in primary non-muscle invasive bladder cancer patients.
Abstract:
Prediction filters are well-known models for signal estimation in communications, control and many other areas. The classical method for deriving linear prediction coding (LPC) filters is often based on the minimization of a mean square error (MSE). Consequently, only second-order statistics are required, but the estimation is optimal only if the residue is independent and identically distributed (iid) Gaussian. In this paper, we derive the ML estimate of the prediction filter. Relationships with robust estimation of auto-regressive (AR) processes, with blind deconvolution and with source separation based on mutual information minimization are then detailed. The algorithm, based on the minimization of a high-order statistics criterion, uses on-line estimation of the residue statistics. Experimental results emphasize the interest of this approach.
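The classical MSE baseline that the paper improves on can be sketched as a least-squares fit of the predictor coefficients on lagged samples. The AR(2) signal below is synthetic; this shows the second-order-statistics method, not the paper's ML estimator.

```python
# Sketch of the classical MSE-based LPC estimate discussed above: the
# coefficients minimizing the mean square prediction error are found
# by least squares on lagged samples. Synthetic AR(2) data.
import numpy as np

rng = np.random.default_rng(0)
a_true = [0.6, -0.2]                 # true AR(2) coefficients
x = np.zeros(2000)
for t in range(2, len(x)):
    x[t] = a_true[0] * x[t - 1] + a_true[1] * x[t - 2] + rng.normal(scale=0.1)

# predict x[t] from x[t-1] and x[t-2]
A = np.column_stack([x[1:-1], x[:-2]])
b = x[2:]
a_hat, *_ = np.linalg.lstsq(A, b, rcond=None)

# the MSE estimate recovers the true coefficients closely here,
# because the synthetic residue really is iid Gaussian
assert abs(a_hat[0] - 0.6) < 0.1 and abs(a_hat[1] + 0.2) < 0.1
```

When the residue is non-Gaussian, this estimate is no longer ML-optimal, which is exactly the gap the high-order statistics criterion addresses.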
Abstract:
Linear prediction coding of speech is based on the assumption that the generation model is autoregressive. In this paper we propose a structure to cope with the nonlinear effects present in the generation of the speech signal. This structure consists of two stages: the first is a classical linear prediction filter, and the second models the residual signal by means of a linear filter placed between two nonlinearities. The coefficients of this filter are computed by means of a gradient search on the score function, in order to deal with the fact that the probability distribution of the residual signal is still not Gaussian; this fact is taken into account when the coefficients are computed by an ML estimate. The algorithm, based on the minimization of a high-order statistics criterion, uses on-line estimation of the residue statistics and is based on blind deconvolution of Wiener systems [1]. Improvements in the experimental results with speech signals emphasize the interest of this approach.
Abstract:
In this paper we show how a nonlinear preprocessing of highly noisy speech signals, based on morphological filters, improves the performance of robust algorithms for pitch tracking (RAPT). This result is obtained with a very simple morphological filter; more sophisticated ones could improve the results even further. Mathematical morphology is widely used in image processing and has a great number of applications. Almost all of its formulations, derived in the two-dimensional framework, are easily adapted to the one-dimensional context.
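The one-dimensional adaptation is straightforward: erosion and dilation with a flat structuring element reduce to running minimum and maximum, and their composition (an opening) removes impulsive noise narrower than the element while preserving wider structure. The signal values below are invented; this is a generic morphological opening, not necessarily the exact filter used in the paper.

```python
# Sketch of 1-D mathematical morphology with a flat structuring
# element: erosion (running min), dilation (running max), and their
# composition into an opening. Windows are truncated at the borders.

def erode(x, width=3):
    h, n = width // 2, len(x)
    return [min(x[max(i - h, 0):min(i + h + 1, n)]) for i in range(n)]

def dilate(x, width=3):
    h, n = width // 2, len(x)
    return [max(x[max(i - h, 0):min(i + h + 1, n)]) for i in range(n)]

def opening(x, width=3):
    return dilate(erode(x, width), width)

sig = [0, 0, 0, 5, 0, 0, 3, 3, 3, 3, 3, 0, 0, 0]  # spike + plateau
out = opening(sig)
assert out[3] == 0                   # 1-sample spike removed
assert out[6:11] == [3, 3, 3, 3, 3]  # 5-sample plateau preserved
```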