971 results for PARAMETER ESTIMATION
Abstract:
A new iterative algorithm based on the inexact-restoration (IR) approach combined with the filter strategy to solve nonlinear constrained optimization problems is presented. The high-level algorithm was suggested by Gonzaga et al. (SIAM J. Optim. 14:646–669, 2003) but not yet implemented, since the internal algorithms were not proposed there. The filter, a new concept introduced by Fletcher and Leyffer (Math. Program. Ser. A 91:239–269, 2002), replaces the merit function, avoiding the estimation of a penalty parameter and the difficulties related to nondifferentiability. In the IR approach, two independent phases are performed in each iteration: the feasibility phase and the optimality phase. A line search with filter is used first in the feasibility phase to generate a "more feasible" point, and then in the optimality phase to reach an "optimal" point. Numerical experiments with a collection of AMPL problems and a performance comparison with IPOPT are provided.
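The filter mechanism replacing the merit function can be illustrated with a minimal sketch (the names and the sufficient-decrease margins below are assumptions for illustration, not the authors' implementation):

# A trial point is acceptable if it is not dominated by any filter entry
# in the (constraint violation h, objective f) plane.

def acceptable(h_trial, f_trial, filter_entries, gamma=1e-5):
    """Return True if (h_trial, f_trial) is acceptable to the filter."""
    for h_k, f_k in filter_entries:
        # dominated: no sufficient decrease in either measure
        if h_trial >= (1 - gamma) * h_k and f_trial >= f_k - gamma * h_k:
            return False
    return True

# Example: the trial point improves feasibility enough to be accepted.
entries = [(1.0, 5.0), (0.5, 7.0)]
print(acceptable(0.3, 6.0, entries))  # True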
Abstract:
This paper presents a methodology based on Bayesian data fusion techniques applied to non-destructive and destructive tests for the structural assessment of historical constructions. The aim of the methodology is to reduce the uncertainty of the parameter estimation. The Young's modulus of granite stones was chosen as an example for the present paper. The methodology considers several levels of uncertainty, since the parameters of interest are treated as random variables with random moments. A new concept, the Trust Factor, was introduced to scale the uncertainty associated with each test result, expressed by its standard deviation, according to the higher or lower reliability of each test in predicting a given parameter.
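The role of the Trust Factor can be sketched with a toy precision-weighted fusion of two Gaussian test results (the weighting rule and all numbers are illustrative assumptions, not the paper's exact formulation):

import math

# Each test's standard deviation is inflated by a trust factor (larger
# means less reliable); the fused estimate weights tests by precision.

def fuse(means, stds, trust_factors):
    variances = [(s * t) ** 2 for s, t in zip(stds, trust_factors)]
    weights = [1.0 / v for v in variances]  # precisions
    mean = sum(w * m for w, m in zip(weights, means)) / sum(weights)
    return mean, math.sqrt(1.0 / sum(weights))

# Non-destructive (less trusted) vs. destructive test of Young's modulus
mean, std = fuse(means=[20.0, 25.0], stds=[4.0, 2.0],
                 trust_factors=[1.5, 1.0])
print(f"fused E = {mean:.1f} +/- {std:.1f} GPa")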
Abstract:
The Electromagnetism-like (EM) algorithm is a population-based stochastic global optimization algorithm that uses an attraction-repulsion mechanism to move sample points towards the optimum. In this paper, an implementation of the EM algorithm in the Matlab environment is proposed, as a useful function for practitioners and for those who want to experiment with a new global optimization solver. A set of benchmark problems is solved in order to evaluate the performance of the implemented method against other stochastic methods available in the Matlab environment. The results confirm that our implementation is a competitive alternative both in terms of numerical results and performance. Finally, a case study based on a parameter estimation problem for a biological system shows that the EM implementation can be applied with promising results in the control optimization area.
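The attraction-repulsion mechanism at the core of the EM algorithm can be sketched as a single population move (a simplified Python sketch using the usual charge formula, not the Matlab implementation described above):

import numpy as np

def em_step(X, f, rng, lb, ub):
    """One attraction-repulsion move for a population X (n x d)
    with objective values f (lower is better)."""
    n, d = X.shape
    best = f.min()
    # charges: better points get larger charge
    q = np.exp(-d * (f - best) / max(f.sum() - n * best, 1e-12))
    F = np.zeros_like(X)
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            diff = X[j] - X[i]
            dist2 = diff @ diff + 1e-12
            if f[j] < f[i]:   # attraction towards the better point
                F[i] += diff * q[i] * q[j] / dist2
            else:             # repulsion from the worse point
                F[i] -= diff * q[i] * q[j] / dist2
    step = rng.random((n, 1))
    X_new = X + step * F / (np.linalg.norm(F, axis=1, keepdims=True) + 1e-12)
    return np.clip(X_new, lb, ub)

rng = np.random.default_rng(0)
X = rng.uniform(-5, 5, size=(20, 2))
X = em_step(X, (X ** 2).sum(axis=1), rng, -5, 5)  # sphere function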
Abstract:
Telecommunications and network technology is now the driving force that ensures the continued progress of world civilization. The design of new network infrastructures, and the expansion of existing ones, requires improving the quality of service (QoS). Modeling the probabilistic and temporal characteristics of telecommunication systems is an integral part of modern algorithms for QoS administration. At present, besides simulation models, analytical models in the form of queuing systems and queuing networks are widely used for the assessment of quality parameters. Because of the limited mathematical tools available for these classes of models, the corresponding estimates of quality-of-service parameters are inadequate by definition. This holds especially for models of telecommunication systems with packet transmission of multimedia real-time traffic.
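For reference, the simplest analytical model of this class, the M/M/1 queue, yields closed-form QoS characteristics (a textbook illustration, not tied to any particular system above):

# M/M/1 queue: Poisson arrivals at rate lam, exponential service at
# rate mu, utilization rho = lam / mu (stability requires rho < 1).

def mm1_metrics(lam, mu):
    rho = lam / mu
    assert rho < 1, "queue is unstable"
    L = rho / (1 - rho)   # mean number in system
    W = L / lam           # mean sojourn time (Little's law)
    Wq = W - 1 / mu       # mean waiting time in queue
    return {"utilization": rho, "mean_in_system": L,
            "mean_sojourn": W, "mean_wait": Wq}

print(mm1_metrics(lam=80.0, mu=100.0))  # e.g. packets per second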
Abstract:
Report for the scientific stay at the California Institute of Technology during the summer of 2005. ByoDyn is a tool for simulating the dynamical expression of gene regulatory networks (GRNs) and for parameter estimation in uni- and multicellular models. Software support was developed for describing GRNs in the Systems Biology Markup Language (SBML), a computer format for representing and storing computational models of biochemical pathways in software tools and databases. Supporting this format gives ByoDyn a wide range of possibilities to study the dynamical properties of multiple regulatory pathways.
Abstract:
We present a real data set of claim amounts where costs related to damage are recorded separately from those related to medical expenses. Only claims with positive costs are considered here. Two approaches to density estimation are presented: a classical parametric method and a semi-parametric method based on transformation kernel density estimation. We explore the data set with standard univariate methods. We also propose ways to select the bandwidth and transformation parameters in the univariate case based on Bayesian methods. We indicate how to compare the results of alternative methods, both by looking at the shape of the density over its whole domain and by exploring the density estimates in the right tail.
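A minimal sketch of transformation kernel density estimation for positive, heavy-tailed costs (using a log transform and scipy's default bandwidth; the paper's transformation family and Bayesian bandwidth selection are more elaborate):

import numpy as np
from scipy.stats import gaussian_kde

# Estimate the density on a transformed scale, then map back with the
# Jacobian of the transformation: f(x) = g(log x) / x.

rng = np.random.default_rng(1)
claims = rng.lognormal(mean=7.0, sigma=1.2, size=500)  # positive costs

kde = gaussian_kde(np.log(claims))  # kernel density on the log scale

def density(x):
    x = np.asarray(x, dtype=float)
    return kde(np.log(x)) / x       # 1/x is the Jacobian of log

print(density(np.array([500.0, 1500.0, 5000.0])))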
Abstract:
Factor analysis, a frequent technique for multivariate data inspection, is also widely used for compositional data analysis. The usual way is to use a centered logratio (clr) transformation to obtain the random vector y of dimension D. The factor model is then

y = Λf + e    (1)

with the factors f of dimension k ≪ D, the error term e, and the loadings matrix Λ. Using the usual model assumptions (see, e.g., Basilevsky, 1994), the factor analysis model (1) can be written as

Cov(y) = ΛΛ^T + ψ    (2)

where ψ = Cov(e) has diagonal form. The diagonal elements of ψ as well as the loadings matrix Λ are estimated from an estimate of Cov(y). Given observed clr transformed data Y as realizations of the random vector y, outliers or deviations from the idealized model assumptions of factor analysis can severely affect the parameter estimation. As a way out, robust estimation of the covariance matrix of Y will lead to robust estimates of Λ and ψ in (2), see Pison et al. (2003). Well-known robust covariance estimators with good statistical properties, like the MCD or the S-estimators (see, e.g., Maronna et al., 2006), rely on a full-rank data matrix Y, which is not the case for clr transformed data (see, e.g., Aitchison, 1986). The isometric logratio (ilr) transformation (Egozcue et al., 2003) solves this singularity problem. The data matrix Y is transformed to a matrix Z by using an orthonormal basis of lower dimension. Using the ilr transformed data, a robust covariance matrix C(Z) can be estimated. The result can be back-transformed to the clr space by

C(Y) = V C(Z) V^T

where the matrix V with orthonormal columns comes from the relation between the clr and the ilr transformation. Now the parameters in model (2) can be estimated (Basilevsky, 1994), and the results have a direct interpretation since the links to the original variables are still preserved. The above procedure will be applied to data from geochemistry. Our special interest is in comparing the results with those of Reimann et al. (2002) for the Kola project data.
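A compact sketch of the back-transformation step (with the sample covariance standing in for a robust estimator such as the MCD; the basis construction below is one standard choice):

import numpy as np

def ilr_basis(D):
    """Orthonormal D x (D-1) matrix V relating clr and ilr coordinates."""
    V = np.zeros((D, D - 1))
    for j in range(1, D):
        V[:j, j - 1] = 1.0 / j
        V[j, j - 1] = -1.0
        V[:, j - 1] *= np.sqrt(j / (j + 1.0))
    return V

rng = np.random.default_rng(2)
X = rng.dirichlet(np.ones(5), size=200)  # compositional data, D = 5
clr = np.log(X) - np.log(X).mean(axis=1, keepdims=True)
V = ilr_basis(5)
Z = clr @ V                        # full-rank ilr coordinates
C_Z = np.cov(Z, rowvar=False)      # a robust estimator would go here
C_Y = V @ C_Z @ V.T                # back-transformed: C(Y) = V C(Z) V^T
print(C_Y.shape)                   # (5, 5), singular by construction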
Abstract:
This paper presents and discusses the use of Bayesian procedures - introduced through the use of Bayesian networks in Part I of this series of papers - for 'learning' probabilities from data. The discussion will relate to a set of real data on characteristics of black toners commonly used in printing and copying devices. Particular attention is drawn to the incorporation of the proposed procedures as an integral part in probabilistic inference schemes (notably in the form of Bayesian networks) that are intended to address uncertainties related to particular propositions of interest (e.g., whether or not a sample originates from a particular source). The conceptual tenets of the proposed methodologies are presented along with aspects of their practical implementation using currently available Bayesian network software.
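A toy illustration of "learning" a probability from data in this Bayesian spirit, via a Beta-Binomial update (the counts and the toner characteristic are invented for illustration):

# Beta(a, b) prior on a proportion, updated with k "successes"
# in n observations; the posterior is Beta(a + k, b + n - k).

def posterior(a, b, k, n):
    a_post, b_post = a + k, b + (n - k)
    return a_post, b_post, a_post / (a_post + b_post)  # posterior mean

# e.g. 18 of 25 examined toner samples show a given characteristic
print(posterior(a=1, b=1, k=18, n=25))  # posterior mean ~ 0.70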
Abstract:
We investigate identifiability issues in DSGE models and their consequences for parameter estimation and model evaluation when the objective function measures the distance between estimated and model impulse responses. We show that observational equivalence, partial and weak identification problems are widespread, that they lead to biased estimates, unreliable t-statistics and may induce investigators to select false models. We examine whether different objective functions affect identification and study how small samples interact with parameters and shock identification. We provide diagnostics and tests to detect identification failures and apply them to a state-of-the-art model.
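Schematically, an objective of this kind measures a weighted distance between estimated and model-implied impulse responses; a minimal sketch (model_irf and all numbers are hypothetical stand-ins, not the paper's model):

import numpy as np

# Impulse-response matching: choose theta to minimize the weighted
# distance between estimated IRFs and those implied by the model.

def irf_distance(theta, irf_hat, W, model_irf):
    g = (irf_hat - model_irf(theta)).ravel()
    return g @ W @ g

def model_irf(theta):          # toy model: AR(1) responses, 8 horizons
    return theta[0] ** np.arange(8)

irf_hat = 0.9 ** np.arange(8) + 0.01   # "estimated" responses
print(irf_distance(np.array([0.9]), irf_hat, np.eye(8), model_irf))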
Abstract:
When preparing an article on image restoration in astronomy, it is obvious that some topics have to be dropped to keep the work at a reasonable length. We have decided to concentrate on image and noise models and on the algorithms used to find the restoration. We start by describing the Bayesian paradigm and then proceed to study the noise and blur models used by the astronomical community. The prior models used to restore astronomical images are then examined. We describe the algorithms used to find the restoration for the most common combinations of degradation and image models, and then comment on important issues such as acceleration of algorithms, stopping rules, and parameter estimation. We also comment on the huge amount of information available to, and made available by, the astronomical community.
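As one concrete example of the algorithms in question, the Richardson-Lucy iteration, a classical restoration method for Poisson noise in astronomy, fits in a few lines (a minimal sketch assuming a known point spread function, not the article's specific models):

import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy(image, psf, n_iter=30):
    """Multiplicative RL updates for Poisson noise with known PSF."""
    x = np.full_like(image, image.mean())  # flat initial estimate
    psf_flip = psf[::-1, ::-1]
    for _ in range(n_iter):
        blurred = fftconvolve(x, psf, mode="same")
        ratio = image / np.maximum(blurred, 1e-12)
        x = x * fftconvolve(ratio, psf_flip, mode="same")
    return x

rng = np.random.default_rng(3)
g = np.exp(-0.5 * (np.arange(-3, 4) / 1.5) ** 2)
psf = np.outer(g, g); psf /= psf.sum()             # Gaussian blur kernel
truth = np.zeros((64, 64)); truth[32, 32] = 100.0  # a point source
data = rng.poisson(fftconvolve(truth, psf, mode="same") + 1.0).astype(float)
restored = richardson_lucy(data, psf, n_iter=20)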
Abstract:
[cat] In Navas and Marín Solano, the coincidence of the Nash and Stackelberg equilibria was proved for a modified version of the differential game proposed by Lancaster (1973). In order to obtain an interior solution, strong restrictions were imposed on the values of the model parameters. In this work we extend that result, in the limit in which the discount rate equals zero, removing the restrictions and considering all possible solutions.
Abstract:
Microstructure imaging from diffusion magnetic resonance (MR) data represents an invaluable tool to study non-invasively the morphology of tissues and to provide a biological insight into their microstructural organization. In recent years, a variety of biophysical models have been proposed to associate particular patterns observed in the measured signal with specific microstructural properties of the neuronal tissue, such as axon diameter and fiber density. Despite very appealing results showing that the estimated microstructure indices agree very well with histological examinations, existing techniques require computationally very expensive non-linear procedures to fit the models to the data which, in practice, demand the use of powerful computer clusters for large-scale applications. In this work, we present a general framework for Accelerated Microstructure Imaging via Convex Optimization (AMICO) and show how to re-formulate this class of techniques as convenient linear systems which, then, can be efficiently solved using very fast algorithms. We demonstrate this linearization of the fitting problem for two specific models, i.e. ActiveAx and NODDI, providing a very attractive alternative for parameter estimation in those techniques; however, the AMICO framework is general and flexible enough to work also for the wider space of microstructure imaging methods. Results demonstrate that AMICO represents an effective means to accelerate the fit of existing techniques drastically (up to four orders of magnitude faster) while preserving accuracy and precision in the estimated model parameters (correlation above 0.9). We believe that the availability of such ultrafast algorithms will help to accelerate the spread of microstructure imaging to larger cohorts of patients and to study a wider spectrum of neurological disorders.
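The computational core, expressing each voxel's signal as a non-negative combination of precomputed response atoms, reduces to one fast convex solve per voxel; a schematic sketch with a synthetic dictionary (not an actual ActiveAx or NODDI kernel):

import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(4)
n_meas, n_atoms = 60, 12
# synthetic dictionary: each column is one precomputed atom response
decays = rng.uniform(0.1, 3.0, (n_atoms, 1))
A = np.exp(-decays * np.linspace(0, 1, n_meas)).T

w_true = np.zeros(n_atoms); w_true[[2, 7]] = [0.7, 0.3]
y = A @ w_true + 0.01 * rng.standard_normal(n_meas)  # noisy signal

w_hat, _ = nnls(A, y)   # non-negative least squares, one voxel
print(np.round(w_hat, 2))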
Abstract:
[spa] A new model for decision making based on the use of distance measures and induced aggregation operators is presented. The induced ordered weighted averaging distance (IOWAD) operator is introduced. It is a new aggregation operator that extends the OWA operator through the use of distances and a reordering of the arguments based on order-inducing variables. The main advantage of the IOWAD operator is the possibility of using a parameterized family of aggregation operators between the maximum and the minimum individual distance. Some of its main properties and particular cases are studied. A numerical example is developed for a decision-making problem on investment selection. The main advantage of this model in decision making is that it provides a more complete view of the process, so that the decision maker is able to select the alternative closest to his or her interests.
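A small sketch of the IOWAD aggregation: individual distances are reordered by their order-inducing variables and then combined with the OWA weights (the numbers are illustrative):

def iowad(inducing, distances, weights):
    """Induced OWA distance: reorder the distance arguments by
    decreasing order-inducing variable, then take a weighted sum."""
    ordered = [d for _, d in sorted(zip(inducing, distances),
                                    key=lambda p: -p[0])]
    return sum(w * d for w, d in zip(weights, ordered))

# Distances of one alternative to the ideal on three criteria.
print(iowad(inducing=[3, 1, 2],
            distances=[0.4, 0.1, 0.3],
            weights=[0.5, 0.3, 0.2]))  # 0.5*0.4 + 0.3*0.3 + 0.2*0.1 = 0.31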