971 results for Parameter-estimation
Abstract:
With the aim of increasing the profit of chemical plants, Real Time Optimization (RTO) is a tool that seeks to determine the optimal steady-state operating conditions of a process while respecting the established operating constraints. In this work, a practical implementation of an RTO cycle was carried out for a propylene-propane vapor recompression distillation (VRD) process at the Paulínia Refinery (Petrobras S.A.), using historical plant data. The main steps of a classical RTO cycle were considered: steady-state identification, data reconciliation, parameter estimation and economic optimization. The unit was modeled, simulated and optimized in EMSO (Environment for Modeling, Simulation and Optimization), an equation-oriented process simulator developed in Brazil. Two steady-state identification methods were analyzed and compared, one based on the statistical F-test and the other based on wavelets. Both methods produced similar results and proved capable of identifying steady states satisfactorily, although their implementation requires parameter tuning. Several steady-state points were identified and submitted to the RTO cycle, and it was possible to verify the importance of starting from a steady state for the continuity of the cycle, since this is a premise of the method. For the points analyzed, the results of this study show that RTO can increase the economic gain by 2.5-24%, depending on the initial conditions considered, which can represent gains of up to 18 million dollars per year. Furthermore, for this unit, the compressor was found to be a limiting piece of equipment for increasing the economic gain of the process.
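The F-test-based steady-state identification mentioned above is not detailed in the abstract; a minimal Python sketch of one common variant is given below, comparing a variance estimate around the window mean with one computed from successive differences. The window length, significance level and degrees of freedom are illustrative assumptions, not the thesis's actual settings.

```python
import numpy as np
from scipy import stats

def is_steady_state(window, alpha=0.05):
    """F-test steady-state check on one measurement window.

    Compares the variance estimated around the window mean with the variance
    estimated from successive differences (which is insensitive to a slow
    drift); a drifting signal inflates the first estimate and the test
    rejects steady state.
    """
    window = np.asarray(window, dtype=float)
    n = len(window)
    var_mean = np.var(window, ddof=1)                  # spread around the mean
    var_diff = np.var(np.diff(window), ddof=1) / 2.0   # from successive differences
    f_ratio = var_mean / var_diff
    f_crit = stats.f.ppf(1.0 - alpha, n - 1, n - 1)    # approximate dof
    return f_ratio <= f_crit

# Synthetic illustration: noisy plateau vs. slow ramp
rng = np.random.default_rng(0)
plateau = 50.0 + rng.normal(0, 0.2, 120)
ramp = 50.0 + 0.05 * np.arange(120) + rng.normal(0, 0.2, 120)
print(is_steady_state(plateau))  # expected: True
print(is_steady_state(ramp))     # expected: False
```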
Abstract:
Increasing economic competition drives industry to implement tools that improve process efficiency. Process automation is one of these tools, and Real Time Optimization (RTO) is an automation methodology that considers economic aspects to update the process control in accordance with market prices and disturbances. Basically, RTO uses a steady-state phenomenological model to predict the process behavior and then optimizes an economic objective function subject to this model. Although widely implemented in industry, there is no general agreement about the benefits of implementing RTO, due to some limitations discussed in the present work: structural plant/model mismatch, identifiability issues and low frequency of set-point updates. Some alternative RTO approaches have been proposed in the literature to handle the problem of structural plant/model mismatch. However, there is no comprehensive comparison evaluating the scope and limitations of these RTO approaches under different aspects. For this reason, the classical two-step method is compared to more recent derivative-based methods (Modifier Adaptation, Integrated System Optimization and Parameter Estimation, and Sufficient Conditions of Feasibility and Optimality) using a Monte Carlo methodology. The results of this comparison show that the classical RTO method is consistent, provided that the model is flexible enough to represent the process topology, that the parameter estimation method is appropriate to the measurement noise characteristics, and that a method to improve the quality of the sample information is available. At each iteration, the RTO methodology updates some key parameters of the model, and identifiability issues caused by lack of measurements and by measurement noise can be observed, resulting in poor prediction ability. Therefore, four different parameter estimation approaches (Rotational Discrimination, Automatic Selection and Parameter Estimation, Reparametrization via Differential Geometry and classical nonlinear Least Squares) are evaluated with respect to their prediction accuracy, robustness and speed. The results show that the Rotational Discrimination method is the most suitable to be implemented in an RTO framework, since it requires less a priori information, it is simple to implement and it avoids the overfitting caused by the Least Squares method. The third RTO drawback discussed in the present thesis is the low frequency of set-point updates, which increases the period in which the process operates at suboptimal conditions. An alternative to handle this problem is proposed in this thesis by integrating the classical RTO and Self-Optimizing Control (SOC) using a new Model Predictive Control strategy. The new approach demonstrates that it is possible to reduce the problem of low frequency of set-point updates, improving the economic performance. Finally, the practical aspects of the RTO implementation are examined in an industrial case study, a Vapor Recompression Distillation (VRD) process located at the Paulínia refinery of Petrobras. The conclusions of this study suggest that the model parameters are successfully estimated by the Rotational Discrimination method; that the RTO is able to improve the process profit by about 3%, equivalent to 2 million dollars per year; and that the integration of SOC and RTO may be an interesting control alternative for the VRD process.
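As an illustration of the classical two-step RTO cycle discussed above (parameter estimation followed by economic optimization), here is a minimal sketch on a toy single-set-point process; the quadratic profit functions, bounds and noise level are assumptions chosen only to show the iteration, not the thesis's VRD model.

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Toy "plant": true steady-state profit as a function of a single set point u
def plant_profit(u):
    return -(u - 3.0) ** 2 + 9.0

# Simple model with one adjustable parameter theta: profit ~ -(u - theta)^2 + 9
def model_profit(u, theta):
    return -(u - theta) ** 2 + 9.0

def two_step_rto(u0, n_cycles=5, noise=0.05, seed=0):
    rng = np.random.default_rng(seed)
    u, theta = u0, 1.0
    for _ in range(n_cycles):
        # Step 1: parameter estimation - fit theta so the model matches the
        # measured profit at the current operating point (1 point, 1 parameter).
        y_meas = plant_profit(u) + rng.normal(0.0, noise)
        fit = minimize_scalar(lambda th: (model_profit(u, th) - y_meas) ** 2,
                              bounds=(0.0, 6.0), method="bounded")
        theta = fit.x
        # Step 2: economic optimization on the updated model.
        opt = minimize_scalar(lambda v: -model_profit(v, theta),
                              bounds=(0.0, 6.0), method="bounded")
        u = opt.x
    return u

print(two_step_rto(u0=1.0))  # moves the set point toward the plant optimum u* = 3
```

In this toy case the model is structurally exact, so the cycle converges to the plant optimum; the structural plant/model mismatch discussed in the abstract is precisely what breaks this guarantee in practice.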
Abstract:
Statistical machine translation (SMT) is an approach to Machine Translation (MT) that uses statistical models whose parameter estimation is based on the analysis of existing human translations (contained in bilingual corpora). From a translation student's standpoint, this dissertation aims to explain how a phrase-based SMT system works, to determine the role of the statistical models it uses in the translation process, and to assess the quality of the translations it provides when trained with in-domain, good-quality corpora. To that end, a phrase-based SMT system based on Moses was trained and subsequently used for the English-to-Spanish translation of two texts related in topic to the training data. Finally, the quality of the output texts produced by the system was assessed through a quantitative evaluation carried out with three different automatic evaluation measures and a qualitative evaluation based on the Multidimensional Quality Metrics (MQM).
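The automatic evaluation measures are not named in the abstract; BLEU is the most common such metric, so a hedged illustration of a corpus-level BLEU computation with NLTK follows, using hypothetical token lists in place of the real system output and references.

```python
# Corpus-level BLEU with NLTK; `references` and `hypotheses` are
# hypothetical token lists standing in for the real evaluation data.
from nltk.translate.bleu_score import corpus_bleu, SmoothingFunction

references = [[["la", "estimación", "de", "parámetros", "es", "clave"]]]
hypotheses = [["la", "estimación", "de", "parámetros", "es", "fundamental"]]

bleu = corpus_bleu(references, hypotheses,
                   smoothing_function=SmoothingFunction().method1)
print(f"BLEU = {bleu:.3f}")
```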
Abstract:
With the development of embedded applications and driving assistance systems, it has become relevant to develop parallel mechanisms in order to check and diagnose these new systems. In this thesis we focus our research on one such parallel mechanism, analytical redundancy, for fault diagnosis of an automotive suspension system. We considered a quarter-car passive suspension model and used a parameter estimation method based on an ARX model to detect faults occurring in the damper and spring of the system. We then deployed a neural network classifier to isolate the faults and identify where each fault is occurring. On that basis, safety measures and redundancies can take effect to prevent failure of the system. It is shown that the ARX estimator can quickly detect the fault online using vertical acceleration and displacement sensor data, which come from sensors commonly found in today's vehicles. The clear divergence in the ARX response makes it easy to set a threshold that alerts the vehicle's intelligent system, and the neural classifier can quickly indicate where the fault occurred.
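A minimal sketch of ARX parameter estimation by least squares and residual-based fault detection, in the spirit of the approach described above; the model orders, synthetic signals and alarm threshold are assumptions for illustration, not the thesis's tuned values.

```python
import numpy as np

def arx_regressors(y, u, na=2, nb=2):
    """Stack ARX regressors phi[t] = [-y[t-1..t-na], u[t-1..t-nb]] and targets y[t]."""
    n0 = max(na, nb)
    phi = np.array([np.concatenate([-y[t - na:t][::-1], u[t - nb:t][::-1]])
                    for t in range(n0, len(y))])
    return phi, y[n0:]

def fit_arx(y, u, na=2, nb=2):
    """Least-squares ARX parameter estimate (a1..a_na, b1..b_nb)."""
    phi, target = arx_regressors(y, u, na, nb)
    theta, *_ = np.linalg.lstsq(phi, target, rcond=None)
    return theta

def detection_signal(y, u, theta, na=2, nb=2):
    """One-step-ahead prediction residuals; a fault shows up as a sustained divergence."""
    phi, target = arx_regressors(y, u, na, nb)
    return target - phi @ theta

# Hypothetical usage: fit on healthy data, monitor new data with a fixed threshold.
rng = np.random.default_rng(1)
u = rng.normal(size=500)                                     # road excitation (stand-in)
y = np.convolve(u, [0.0, 0.5, 0.3], mode="full")[:500] + 0.01 * rng.normal(size=500)
theta = fit_arx(y, u)
alarm = np.abs(detection_signal(y, u, theta)) > 0.1          # threshold chosen by assumption
print(np.round(theta, 3), alarm.any())
```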
Abstract:
Use of nonlinear parameter estimation techniques is now commonplace in ground water model calibration. However, there is still ample room for further development of these techniques in order to enable them to extract more information from calibration datasets, to more thoroughly explore the uncertainty associated with model predictions, and to make them easier to implement in various modeling contexts. This paper describes the use of pilot points as a methodology for spatial hydraulic property characterization. When used in conjunction with nonlinear parameter estimation software that incorporates advanced regularization functionality (such as PEST), use of pilot points can add a great deal of flexibility to the calibration process at the same time as it makes this process easier to implement. Pilot points can be used either as a substitute for zones of piecewise parameter uniformity, or in conjunction with such zones. In either case, they allow the disposition of areas of high and low hydraulic property value to be inferred through the calibration process, without the need for the modeler to guess the geometry of such areas prior to estimating the parameters that pertain to them. Pilot points and regularization can also be used as an adjunct to geostatistically based stochastic parameterization methods. Using the techniques described herein, a series of hydraulic property fields can be generated, all of which recognize the stochastic characterization of an area at the same time that they satisfy the constraints imposed on hydraulic property values by the need to ensure that model outputs match field measurements. Model predictions can then be made using all of these fields as a mechanism for exploring predictive uncertainty.
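A minimal sketch of the pilot-point idea described above: property values are adjusted only at a few pilot locations and spread onto the model grid by spatial interpolation. PEST workflows typically use kriging; plain inverse-distance weighting is used here to keep the example short, and the grid, pilot locations and values are hypothetical.

```python
import numpy as np

def field_from_pilot_points(grid_xy, pilot_xy, pilot_logk, power=2.0):
    """Spread log-hydraulic-conductivity values defined at pilot points onto a model
    grid by inverse-distance weighting. Only the pilot-point values are adjusted
    during calibration; the interpolation fixes the rest of the field."""
    d = np.linalg.norm(grid_xy[:, None, :] - pilot_xy[None, :, :], axis=2)
    d = np.maximum(d, 1e-12)                  # avoid division by zero at pilot cells
    w = 1.0 / d ** power
    w /= w.sum(axis=1, keepdims=True)
    return w @ pilot_logk

# Hypothetical 20x20 grid and 4 pilot points
gx, gy = np.meshgrid(np.linspace(0, 1, 20), np.linspace(0, 1, 20))
grid_xy = np.column_stack([gx.ravel(), gy.ravel()])
pilot_xy = np.array([[0.2, 0.2], [0.8, 0.2], [0.2, 0.8], [0.8, 0.8]])
pilot_logk = np.array([-3.0, -4.0, -2.5, -3.5])      # parameters the inversion adjusts
logk_field = field_from_pilot_points(grid_xy, pilot_xy, pilot_logk)
print(logk_field.reshape(20, 20).round(2))
```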
Finite mixture regression model with random effects: application to neonatal hospital length of stay
Abstract:
A two-component mixture regression model that allows simultaneously for heterogeneity and dependency among observations is proposed. By specifying random effects explicitly in the linear predictor of the mixture probability and the mixture components, parameter estimation is achieved by maximising the corresponding best linear unbiased prediction type log-likelihood. Approximate residual maximum likelihood estimates are obtained via an EM algorithm in the manner of generalised linear mixed model (GLMM). The method can be extended to a g-component mixture regression model with the component density from the exponential family, leading to the development of the class of finite mixture GLMM. For illustration, the method is applied to analyse neonatal length of stay (LOS). It is shown that identification of pertinent factors that influence hospital LOS can provide important information for health care planning and resource allocation. (C) 2002 Elsevier Science B.V. All rights reserved.
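A minimal sketch of EM for a two-component mixture of Gaussian linear regressions; the random effects and GLMM machinery of the paper are omitted for brevity, and the simulated short-stay/long-stay data are purely illustrative.

```python
import numpy as np

def em_mixture_regression(X, y, n_iter=200, seed=0):
    """EM for a two-component mixture of Gaussian linear regressions
    (the random effects of the paper's GLMM formulation are omitted for brevity)."""
    rng = np.random.default_rng(seed)
    n, p = X.shape
    beta = rng.normal(size=(2, p))
    sigma2 = np.array([y.var(), y.var()])
    mix = np.array([0.5, 0.5])
    for _ in range(n_iter):
        # E-step: posterior component responsibilities.
        dens = np.empty((n, 2))
        for k in range(2):
            r = y - X @ beta[k]
            dens[:, k] = mix[k] * np.exp(-0.5 * r**2 / sigma2[k]) / np.sqrt(2 * np.pi * sigma2[k])
        resp = dens / dens.sum(axis=1, keepdims=True)
        # M-step: weighted least squares per component, plus variance and weight updates.
        for k in range(2):
            w = resp[:, k]
            W = np.diag(w)
            beta[k] = np.linalg.solve(X.T @ W @ X, X.T @ (w * y))
            sigma2[k] = np.sum(w * (y - X @ beta[k])**2) / w.sum()
        mix = resp.mean(axis=0)
    return beta, sigma2, mix, resp

# Hypothetical illustration: short-stay vs. long-stay groups with different slopes.
rng = np.random.default_rng(1)
x = rng.uniform(0, 1, 300)
X = np.column_stack([np.ones_like(x), x])
group = rng.random(300) < 0.4
y = np.where(group, 2 + 5 * x, 1 + 1 * x) + rng.normal(0, 0.3, 300)
print(em_mixture_regression(X, y)[0])
```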
Abstract:
Mixture models implemented via the expectation-maximization (EM) algorithm are being increasingly used in a wide range of problems in pattern recognition such as image segmentation. However, the EM algorithm requires considerable computational time in its application to huge data sets such as a three-dimensional magnetic resonance (MR) image of over 10 million voxels. Recently, it was shown that a sparse, incremental version of the EM algorithm could improve its rate of convergence. In this paper, we show how this modified EM algorithm can be speeded up further by adopting a multiresolution kd-tree structure in performing the E-step. The proposed algorithm outperforms some other variants of the EM algorithm for segmenting MR images of the human brain. (C) 2004 Pattern Recognition Society. Published by Elsevier Ltd. All rights reserved.
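The paper's contribution is the kd-tree-accelerated, sparse incremental E-step; the sketch below only shows the baseline mixture-model segmentation it speeds up, using scikit-learn's EM-fitted Gaussian mixture on a synthetic stand-in for an MR volume.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Hypothetical stand-in for an MR volume: a 3-D array of voxel intensities.
rng = np.random.default_rng(0)
volume = np.concatenate([rng.normal(0.2, 0.05, 40_000),   # e.g. CSF
                         rng.normal(0.5, 0.05, 40_000),   # grey matter
                         rng.normal(0.8, 0.05, 40_000)])  # white matter
rng.shuffle(volume)
volume = volume.reshape(40, 50, 60)

# Fit a 3-component Gaussian mixture by EM and label each voxel
# with its most probable tissue class.
intensities = volume.reshape(-1, 1)
gmm = GaussianMixture(n_components=3, covariance_type="full", random_state=0)
labels = gmm.fit_predict(intensities).reshape(volume.shape)
print(gmm.means_.ravel().round(2), np.bincount(labels.ravel()))
```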
Abstract:
The goal of this manuscript is to introduce a framework for consideration of designs for population pharmacokinetic or pharmacokinetic-pharmacodynamic studies. A standard one-compartment pharmacokinetic model with first-order input and elimination is considered. A series of theoretical designs are considered that explore the influence of optimizing the allocation of sampling times, allocating patients to elementary designs, consideration of sparse sampling and unbalanced designs, and also the influence of single vs. multiple dose designs. It was found that what appears to be relatively sparse sampling (fewer blood samples per patient than the number of fixed-effects parameters to estimate) can also be highly informative. Overall, it is evident that exploring the population design space can yield many parsimonious designs that are efficient for parameter estimation and that may not otherwise have been considered without the aid of optimal design theory.
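A minimal sketch of how sampling designs can be compared for parameter-estimation efficiency: the one-compartment model with first-order input and elimination, a finite-difference Fisher information matrix, and the D-optimality criterion (log-determinant). The parameter values, noise level and candidate schedules are assumptions, not the manuscript's designs.

```python
import numpy as np

def conc(t, ka, ke, V, dose=100.0):
    """One-compartment model with first-order absorption (input) and elimination."""
    return dose * ka / (V * (ka - ke)) * (np.exp(-ke * t) - np.exp(-ka * t))

def log_det_fim(times, theta, sigma=0.1, eps=1e-5):
    """Log-determinant of the (fixed-effects) Fisher information, using
    finite-difference sensitivities and additive noise of s.d. sigma."""
    times = np.asarray(times, dtype=float)
    J = np.empty((len(times), len(theta)))
    for j in range(len(theta)):
        up, dn = np.array(theta), np.array(theta)
        up[j] += eps
        dn[j] -= eps
        J[:, j] = (conc(times, *up) - conc(times, *dn)) / (2 * eps)
    fim = J.T @ J / sigma**2
    return np.linalg.slogdet(fim)[1]

theta = [1.5, 0.2, 20.0]                       # ka, ke, V (assumed values)
dense = np.linspace(0.25, 24, 12)              # 12 samples per subject
sparse = [0.5, 2.0, 12.0]                      # 3 well-placed samples
print(log_det_fim(dense, theta), log_det_fim(sparse, theta))
```

Comparing the two log-determinants shows how much (or how little) information is lost by the sparse schedule, which is the kind of trade-off the design framework explores.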
Abstract:
A calibration methodology based on an efficient and stable mathematical regularization scheme is described. This scheme is a variant of so-called Tikhonov regularization, in which the parameter estimation process is formulated as a constrained minimization problem. Use of the methodology eliminates the need for a modeler to formulate a parsimonious inverse problem in which a handful of parameters are designated for estimation prior to initiating the calibration process. Instead, the level of parameter parsimony required to achieve a stable solution to the inverse problem is determined by the inversion algorithm itself. Where parameters, or combinations of parameters, cannot be uniquely estimated, they are provided with values, or assigned relationships with other parameters, that are decreed to be realistic by the modeler. Conversely, where the information content of a calibration dataset is sufficient to allow estimates to be made of the values of many parameters, the making of such estimates is not precluded by preemptive parsimonizing ahead of the calibration process. While Tikhonov schemes are very attractive and hence widely used, problems with numerical stability can sometimes arise because the strength with which regularization constraints are applied throughout the regularized inversion process cannot be guaranteed to exactly complement inadequacies in the information content of a given calibration dataset. A new technique overcomes this problem by allowing relative regularization weights to be estimated as parameters through the calibration process itself. The technique is applied to the simultaneous calibration of five subwatershed models, and it is demonstrated that the new scheme results in a more efficient inversion, and better enforcement of regularization constraints, than traditional Tikhonov regularization methodologies. Moreover, it is argued that a joint calibration exercise of this type results in a more meaningful set of parameters than can be achieved by individual subwatershed model calibration. (c) 2005 Elsevier B.V. All rights reserved.
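A minimal sketch of Tikhonov-regularized least squares on a deliberately ill-posed toy problem, showing how the regularization weight controls the trade-off between fitting the data and honoring the preferred condition; the estimation of relative regularization weights as parameters, which is the paper's contribution, is not shown.

```python
import numpy as np

def tikhonov_solve(A, b, L, lam):
    """Solve min ||A x - b||^2 + lam^2 ||L x||^2 via the stacked normal equations.
    A maps parameters to simulated observations (a linearised model), L encodes the
    preferred condition (here: small differences between neighbouring parameters),
    and lam controls how strongly that preference is enforced."""
    lhs = A.T @ A + lam**2 * (L.T @ L)
    rhs = A.T @ b
    return np.linalg.solve(lhs, rhs)

# Hypothetical, ill-posed toy problem: 5 observations, 10 parameters.
rng = np.random.default_rng(0)
A = rng.normal(size=(5, 10))
x_true = np.linspace(1.0, 2.0, 10)                   # smooth "reality"
b = A @ x_true + rng.normal(0, 0.01, 5)
L = np.eye(10)[:-1] - np.eye(10, k=1)[:-1]           # first-difference operator
for lam in (1e-3, 1e-1, 1e1):
    x = tikhonov_solve(A, b, L, lam)
    print(lam, np.round(x, 2))
```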
Abstract:
The Gauss-Marquardt-Levenberg (GML) method of computer-based parameter estimation, in common with other gradient-based approaches, suffers from the drawback that it may become trapped in local objective function minima, and thus report optimized parameter values that are not, in fact, optimized at all. This can seriously degrade its utility in the calibration of watershed models where local optima abound. Nevertheless, the method also has advantages, chief among these being its model-run efficiency, and its ability to report useful information on parameter sensitivities and covariances as a by-product of its use. It is also easily adapted to maintain this efficiency in the face of potential numerical problems (that adversely affect all parameter estimation methodologies) caused by parameter insensitivity and/or parameter correlation. The present paper presents two algorithmic enhancements to the GML method that retain its strengths, but which overcome its weaknesses in the face of local optima. Using the first of these methods an intelligent search for better parameter sets is conducted in parameter subspaces of decreasing dimensionality when progress of the parameter estimation process is slowed either by numerical instability incurred through problem ill-posedness, or when a local objective function minimum is encountered. The second methodology minimizes the chance of successive GML parameter estimation runs finding the same objective function minimum by starting successive runs at points that are maximally removed from previous parameter trajectories. As well as enhancing the ability of a GML-based method to find the global objective function minimum, the latter technique can also be used to find the locations of many non-global optima (should they exist) in parameter space. This can provide a useful means of inquiring into the well-posedness of a parameter estimation problem, and for detecting the presence of bimodal parameter and predictive probability distributions. The new methodologies are demonstrated by calibrating a Hydrological Simulation Program-FORTRAN (HSPF) model against a time series of daily flows. Comparison with the SCE-UA method in this calibration context demonstrates a high level of comparative model run efficiency for the new method. (c) 2006 Elsevier B.V. All rights reserved.
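A minimal sketch of a multi-start gradient-based (Levenberg-Marquardt) calibration on a toy problem with many local minima; the restart points are simply spread over the parameter axis rather than placed maximally far from previous trajectories as in the paper's second enhancement.

```python
import numpy as np
from scipy.optimize import least_squares

# Toy calibration problem with many local minima: fit y = sin(a * t).
rng = np.random.default_rng(0)
t = np.linspace(0, 10, 200)
y_obs = np.sin(2.5 * t) + rng.normal(0, 0.05, t.size)
resid = lambda a: np.sin(a[0] * t) - y_obs

def multistart_lm(starts):
    """Run a Levenberg-Marquardt search from several start points and keep the best
    fit; a crude stand-in for the paper's strategy of restarting far from previous
    parameter trajectories."""
    best = None
    for a0 in starts:
        sol = least_squares(resid, x0=[a0], method="lm")
        if best is None or sol.cost < best.cost:
            best = sol
    return best

# Widely spread start points reduce the chance of every run falling in the same basin.
print(multistart_lm(starts=[0.5, 1.5, 2.4, 3.5, 4.5]).x)
```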
Abstract:
This paper investigates the performance analysis of the separation of mutually independent sources in nonlinear models. Nonlinear mappings in which an unsupervised linear mixture is followed by an unknown, invertible nonlinear distortion are found in many signal processing applications. In general, blind separation of sources from their nonlinear mixtures is rather difficult. We propose using a kernel density estimator, combined with equivariant gradient analysis, to separate sources subject to nonlinear distortion. The parameters of the kernel density estimator are iteratively updated to minimize the output independence expressed as a mutual information criterion. The equivariant gradient algorithm has the form of a nonlinear decorrelation, which is used to perform the convergence analysis. Experiments are presented to illustrate these results.
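The mutual information criterion that drives the separation can be estimated nonparametrically with a kernel density estimator; a minimal sketch using SciPy's Gaussian KDE follows (the equivariant gradient update itself is not shown), with synthetic Laplacian sources as stand-ins.

```python
import numpy as np
from scipy.stats import gaussian_kde

def kde_mutual_information(x, y):
    """Sample-average estimate of I(X;Y) = E[log p(x,y) / (p(x) p(y))]
    with Gaussian kernel density estimates for the joint and the marginals."""
    p_xy = gaussian_kde(np.vstack([x, y]))(np.vstack([x, y]))
    p_x = gaussian_kde(x)(x)
    p_y = gaussian_kde(y)(y)
    return float(np.mean(np.log(p_xy / (p_x * p_y))))

rng = np.random.default_rng(0)
s = rng.laplace(size=1000)
independent = rng.laplace(size=1000)
mixed = 0.7 * s + 0.3 * independent
print(kde_mutual_information(s, independent))  # close to 0
print(kde_mutual_information(s, mixed))        # clearly positive
```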
Abstract:
Intumescent paints are used as passive fire protection in the construction sector. In particular, they are used to increase the fire resistance of steel elements. The thermal properties of these coatings are often unknown or difficult to estimate, because they vary considerably during the expansion process that the intumescent undergoes when exposed to the heat of a fire. For this reason, the validation of the fire resistance of a commercially available coating relies on methods that are expensive and time-consuming, in which each beam and column covered with the protective coating must be tested one at a time in the cellulosic-curve fire resistance test. In this thesis, an approach based on thermal modeling of the intumescent coating is adopted instead, which helps to simplify the testing procedure and supports the fire-resistance design of structures. The common thread through the various steps of this thesis is the methodology used to estimate the unknown thermal behavior, namely Inverse Parameter Estimation. In the first phase, the physico-chemical characterization of the paint was carried out with different instruments, such as DSC, TGA and FT-IR, which allowed us to obtain the qualitative composition and the temperatures at which the main chemical and physical processes undergone by the paint occur, as well as the enthalpies associated with these events. In the second phase, the thermal characterization of the paints was carried out in order to obtain their equivalent thermal conductivity. To this end, the steel temperatures from furnace tests heated according to the ISO-834 standard were used first; subsequently, to better define the boundary conditions, a cone calorimeter was used as the heat source, with the temperature measured directly within the thickness of the intumescent layer. The conductivity values obtained are consistent with the scientific literature and show a dependence on temperature, while varying little with the deposited paint thickness and with the geometry of the sample used.
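A minimal sketch of the inverse parameter estimation step for the equivalent thermal conductivity: a lumped-capacitance model of a protected steel section heated by the ISO 834 curve is fitted to (here synthetic) steel temperature measurements. The section factor, coating thickness, material properties and "true" conductivity are assumed values, not those of the tested coating.

```python
import numpy as np
from scipy.optimize import least_squares

# Simple lumped-capacitance model of a protected steel section: the coating acts as
# a thermal resistance d_p / k_eq between the furnace gas and the steel.
rho_s, c_s = 7850.0, 600.0        # steel density [kg/m3] and specific heat [J/kg K]
Am_V, d_p = 150.0, 0.01           # section factor [1/m] and coating thickness [m]
dt = 5.0                          # time step [s]

def iso834_gas(t):
    """ISO 834 standard fire curve [deg C], t in seconds."""
    return 20.0 + 345.0 * np.log10(8.0 * t / 60.0 + 1.0)

def steel_temperature(k_eq, t_grid):
    """March the lumped model forward in time for a given equivalent conductivity."""
    T = np.empty_like(t_grid)
    T[0] = 20.0
    for i in range(1, len(t_grid)):
        q = k_eq / d_p * (iso834_gas(t_grid[i]) - T[i - 1])      # heat flux through coating
        T[i] = T[i - 1] + dt * q * Am_V / (rho_s * c_s)
    return T

# Inverse parameter estimation: find k_eq that best reproduces the measured
# steel temperatures (synthetic "measurements" are used here for illustration).
t_grid = np.arange(0.0, 1800.0 + dt, dt)
T_meas = steel_temperature(0.10, t_grid) + np.random.default_rng(0).normal(0, 2.0, t_grid.size)
fit = least_squares(lambda p: steel_temperature(p[0], t_grid) - T_meas,
                    x0=[0.05], bounds=([1e-3], [1.0]))
print(f"estimated k_eq = {fit.x[0]:.3f} W/m K")
```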
Abstract:
This paper reports preliminary progress on a principled approach to modelling nonstationary phenomena using neural networks. We are concerned with both parameter and model-order complexity estimation. The basic methodology assumes a Bayesian foundation. However, to allow the construction of pragmatic models, successive approximations have to be made to permit computational tractability. The lowest order corresponds to the (Extended) Kalman filter approach to parameter estimation, which has already been applied to neural networks. We illustrate some of the deficiencies of the existing approaches and discuss our preliminary generalisations by considering the application to nonstationary time series.
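A minimal sketch of the Extended Kalman filter approach to parameter estimation mentioned above, applied to a toy one-neuron model whose weights are treated as a random-walk state; the process and measurement noise levels are assumptions.

```python
import numpy as np

def ekf_parameter_estimation(xs, ys, q=1e-5, r=0.01):
    """Extended Kalman filter treating the weights of a tiny model
    y = tanh(w1 * x + w0) as the state (random-walk dynamics), so each new
    observation updates the parameter estimate on-line."""
    w = np.zeros(2)                       # [w0, w1]
    P = np.eye(2)                         # parameter covariance
    Q, R = q * np.eye(2), r
    for x, y in zip(xs, ys):
        P = P + Q                         # predict (random-walk parameters)
        a = w[1] * x + w[0]
        h = np.tanh(a)
        H = (1.0 - h**2) * np.array([1.0, x])   # Jacobian of h w.r.t. [w0, w1]
        S = H @ P @ H + R
        K = P @ H / S
        w = w + K * (y - h)               # correct with the innovation
        P = P - np.outer(K, H @ P)
    return w

rng = np.random.default_rng(0)
xs = rng.uniform(-2, 2, 2000)
ys = np.tanh(1.5 * xs - 0.3) + rng.normal(0, 0.05, xs.size)
print(ekf_parameter_estimation(xs, ys))   # should approach [-0.3, 1.5]
```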
Abstract:
It is well known that one of the obstacles to effective forecasting of exchange rates is heteroscedasticity (non-stationary conditional variance). The autoregressive conditional heteroscedastic (ARCH) model and its variants have been used to estimate a time dependent variance for many financial time series. However, such models are essentially linear in form and we can ask whether a non-linear model for variance can improve results just as non-linear models (such as neural networks) for the mean have done. In this paper we consider two neural network models for variance estimation. Mixture Density Networks (Bishop 1994, Nix and Weigend 1994) combine a Multi-Layer Perceptron (MLP) and a mixture model to estimate the conditional data density. They are trained using a maximum likelihood approach. However, it is known that maximum likelihood estimates are biased and lead to a systematic under-estimate of variance. More recently, a Bayesian approach to parameter estimation has been developed (Bishop and Qazaz 1996) that shows promise in removing the maximum likelihood bias. However, up to now, this model has not been used for time series prediction. Here we compare these algorithms with two other models to provide benchmark results: a linear model (from the ARIMA family), and a conventional neural network trained with a sum-of-squares error function (which estimates the conditional mean of the time series with a constant variance noise model). This comparison is carried out on daily exchange rate data for five currencies.
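A minimal sketch of the classical ARCH(1) baseline against which the neural variance models are compared: the conditional variance recursion and its Gaussian maximum-likelihood fit on synthetic returns; the parameter values are assumed for illustration.

```python
import numpy as np
from scipy.optimize import minimize

def arch1_neg_loglik(params, r):
    """Negative Gaussian log-likelihood of an ARCH(1) model
    sigma2[t] = omega + alpha * r[t-1]^2 for zero-mean returns r."""
    omega, alpha = params
    if omega <= 0 or alpha < 0 or alpha >= 1:
        return np.inf
    sigma2 = np.empty_like(r)
    sigma2[0] = r.var()
    sigma2[1:] = omega + alpha * r[:-1] ** 2
    return 0.5 * np.sum(np.log(2 * np.pi * sigma2) + r ** 2 / sigma2)

# Synthetic daily returns with ARCH effects (stand-in for exchange-rate data).
rng = np.random.default_rng(0)
n, omega_true, alpha_true = 2000, 0.05, 0.4
r = np.empty(n)
r[0] = rng.normal(0, np.sqrt(omega_true / (1 - alpha_true)))
for t in range(1, n):
    r[t] = rng.normal(0, np.sqrt(omega_true + alpha_true * r[t - 1] ** 2))

fit = minimize(arch1_neg_loglik, x0=np.array([0.1, 0.2]), args=(r,), method="Nelder-Mead")
print(fit.x)   # should be close to (0.05, 0.4)
```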
Abstract:
The automatic interpolation of environmental monitoring network data, such as air quality or radiation levels, in a real-time setting poses a number of practical and theoretical questions. Among the problems found are (i) dealing with and communicating the uncertainty of predictions, (ii) automatic (hyper)parameter estimation, (iii) monitoring network heterogeneity, (iv) dealing with outlying extremes, and (v) quality control. In this paper we discuss these issues in light of the spatial interpolation comparison exercise held in 2004.
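A minimal sketch of automatic (hyper)parameter estimation for spatial interpolation: a Gaussian-process (kriging-like) model whose kernel length scale and noise level are tuned by maximizing the marginal likelihood; the station coordinates and measured levels are hypothetical.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

# Hypothetical monitoring network: 50 stations with coordinates and measured levels.
rng = np.random.default_rng(0)
stations = rng.uniform(0, 100, size=(50, 2))
levels = (np.sin(stations[:, 0] / 15.0) + 0.1 * stations[:, 1] / 100.0
          + rng.normal(0, 0.05, 50))

# The kernel length scale and noise level (the "hyperparameters") are estimated
# automatically by maximising the marginal likelihood during fit().
kernel = 1.0 * RBF(length_scale=10.0) + WhiteKernel(noise_level=0.01)
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(stations, levels)

# Predictions with an uncertainty estimate at unsampled locations.
grid = np.column_stack([np.linspace(0, 100, 5), np.full(5, 50.0)])
mean, std = gp.predict(grid, return_std=True)
print(gp.kernel_)                 # fitted hyperparameters
print(np.round(mean, 2), np.round(std, 2))
```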