110 results for nonlinear least-square fit
at Consorci de Serveis Universitaris de Catalunya (CSUC), Spain
Abstract:
The parameterized expectations algorithm (PEA) involves a long simulation and a nonlinear least squares (NLS) fit, both embedded in a loop. Both steps are natural candidates for parallelization. This note shows that parallelization can yield substantial speedups for the PEA. I provide example code for a simple model that can serve as a template for parallelizing more interesting models, as well as a download link for a bootable CD image that allows a cluster to be created and the example code to be run in minutes, with no need to install any software.
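A minimal sketch of that loop structure, assuming a toy law of motion and standard Python tooling (multiprocessing for the simulation step, scipy for the NLS step); the note's own example code targets a cluster, and none of the names below are taken from it:

```python
# Minimal sketch of the two parallelizable PEA steps, assuming a toy law of
# motion; the model, parameter names, and chunking scheme are illustrative
# and are not taken from the note's example code.
import numpy as np
from multiprocessing import Pool
from scipy.optimize import least_squares

def simulate_chunk(args):
    # Simulate one segment of the long simulation under the current rule
    beta, seed, n = args
    rng = np.random.default_rng(seed)
    k, rows = 1.0, []
    for _ in range(n):
        e = np.exp(beta[0] + beta[1] * np.log(k))   # parameterized expectation
        c = 1.0 / e                                 # toy optimality condition
        k = np.exp(0.9 * np.log(k) + 0.1 * rng.standard_normal())  # state motion
        rows.append((k, c))
    return np.array(rows)

def residuals(beta, k, realized):
    # NLS step: match the expectation rule to next-period realizations
    return np.exp(beta[0] + beta[1] * np.log(k)) - realized

if __name__ == "__main__":
    beta = np.array([0.0, -0.5])
    for _ in range(10):                             # outer PEA loop
        with Pool(4) as pool:                       # parallel simulation step
            chunks = pool.map(simulate_chunk, [(beta, s, 2500) for s in range(4)])
        data = np.vstack(chunks)
        k, realized = data[:-1, 0], 1.0 / data[1:, 1]
        fit = least_squares(residuals, beta, args=(k, realized))
        beta = 0.5 * beta + 0.5 * fit.x             # damped fixed-point update
    print("fixed-point coefficients:", beta)
```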
Identification-commitment inventory (ICI-Model): confirmatory factor analysis and construct validity
Abstract:
The aim of this study is to confirm the factorial structure of the Identification-Commitment Inventory (ICI), developed within the frame of the Human System Audit (HSA) (Quijano et al. in Revist Psicol Soc Apl 10(2):27-61, 2000; Pap Psicól Revist Col Of Psicó 29:92-106, 2008). Commitment and identification are understood by the HSA at the individual level as part of the quality of human processes and resources in an organization, and therefore as antecedents of important organizational outcomes such as personnel turnover intentions and organizational citizenship behavior (Meyer et al. in J Org Behav 27:665-683, 2006). The theoretical integrative model underlying the ICI (Quijano et al. 2000) was tested in a sample (N = 625) of workers in a Spanish public hospital. Confirmatory factor analysis was performed through structural equation modeling. An elliptical least squares solution was chosen as the estimation procedure on account of the non-normal distribution of the variables. The results confirm the goodness of fit of an integrative model in which Commitment and Identification are related yet operationally distinct constructs.
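For readers who want to reproduce this kind of analysis, a minimal CFA sketch with the semopy package follows. The two-factor structure mirrors the Commitment/Identification distinction, but the indicator names (c1..i3) and the data file are hypothetical, and GLS is used only as a stand-in because the elliptical least squares estimator does not, to my knowledge, ship with semopy:

```python
# Minimal two-factor CFA sketch with semopy; indicator names and the CSV are
# hypothetical, and obj="GLS" is a stand-in for the paper's elliptical
# least squares estimator, which semopy does not appear to implement.
import pandas as pd
from semopy import Model, calc_stats

desc = """
Commitment     =~ c1 + c2 + c3
Identification =~ i1 + i2 + i3
Commitment ~~ Identification
"""

data = pd.read_csv("ici_items.csv")      # hypothetical item-level data
model = Model(desc)
model.fit(data, obj="GLS")               # stand-in estimator, see note above
print(calc_stats(model).T)               # chi-square, CFI, RMSEA, etc.
```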
Abstract:
The classical theory of collision induced emission (CIE) from pairs of dissimilar rare gas atoms was developed in Paper I [D. Reguera and G. Birnbaum, J. Chem. Phys. 125, 184304 (2006)] from a knowledge of the straight line collision trajectory and the assumption that the magnitude of the dipole could be represented by an exponential function of the internuclear distance. This theory is extended here to deal with other functional forms of the induced dipole as revealed by ab initio calculations. Accurate analytical expressions for the CIE can be obtained by least squares fitting of the ab initio values of the dipole as a function of interatomic separation using a sum of exponentials and then proceeding as in Paper I. However, we also show how the multi-exponential fit can be replaced by a simpler fit using only two analytic functions. Our analysis is applied to the polar molecules HF and HBr. Unlike the rare gas atoms considered previously, these atomic pairs form stable bound diatomic molecules. We show that, interestingly, the spectra of these reactive molecules are characterized by the presence of multiple peaks. We also discuss the CIE arising from half collisions in excited electronic states, which in principle could be probed in photo-dissociation experiments.
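A sketch of the multi-exponential dipole fit described above, with synthetic placeholder points standing in for the ab initio HF/HBr values:

```python
# Least squares fit of the induced dipole mu(r) by a sum of two exponentials;
# the (r, mu) points below are synthetic placeholders, not ab initio data.
import numpy as np
from scipy.optimize import curve_fit

def dipole_model(r, a1, b1, a2, b2):
    # Sum of two exponentials in the internuclear distance r
    return a1 * np.exp(-b1 * r) + a2 * np.exp(-b2 * r)

r = np.linspace(1.0, 8.0, 40)            # synthetic grid of separations
mu = (1.2 * np.exp(-0.8 * r) - 0.3 * np.exp(-1.5 * r)
      + 0.005 * np.random.randn(r.size))

params, _ = curve_fit(dipole_model, r, mu, p0=(1.0, 1.0, -0.5, 2.0), maxfev=10000)
print("fitted (a1, b1, a2, b2):", params)
```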
Abstract:
This paper analyses intergenerational earnings mobility in Spain, correcting for several selection biases. We address the co-residence selection problem by combining information from two samples and using the two-sample two-stage least squares (TS2SLS) estimator. We find a small decrease in elasticity when we move to younger cohorts. Furthermore, we find a higher correlation in the case of daughters than in the case of sons; however, when we account for employment selection in the case of daughters by adopting a Heckman-type correction method, the difference between sons and daughters disappears. By decomposing the sources of earnings elasticity across generations, we find that the correlation between the child's and the father's occupation is the most important component. Finally, quantile regression estimates show that the influence of the father's earnings is greater when we move toward the lower tail of the offspring's earnings distribution, especially in the case of daughters' earnings.
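A compact illustration of the TS2SLS idea on simulated data; the variable names and the plain-OLS implementation are mine, not the paper's specification:

```python
# TS2SLS sketch: fathers' earnings are unobserved in the child sample, so
# they are first predicted from covariates estimated on an auxiliary sample.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)

# Sample A (auxiliary): fathers' log earnings and occupation/education codes.
z_a = rng.normal(size=(1000, 2))
y_father_a = z_a @ np.array([0.5, 0.3]) + rng.normal(scale=0.4, size=1000)
first_stage = sm.OLS(y_father_a, sm.add_constant(z_a)).fit()

# Sample B (main): children's log earnings plus the SAME father covariates.
z_b = rng.normal(size=(800, 2))
y_father_b = z_b @ np.array([0.5, 0.3]) + rng.normal(scale=0.4, size=800)
y_child = 0.4 * y_father_b + rng.normal(scale=0.5, size=800)

# Second stage: regress child earnings on the predicted father earnings.
y_father_hat = first_stage.predict(sm.add_constant(z_b))
second_stage = sm.OLS(y_child, sm.add_constant(y_father_hat)).fit()
print("intergenerational elasticity estimate:", second_stage.params[1])
```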
Abstract:
Lean meat percentage (LMP) is an important carcass quality parameter. The aim of this work is to obtain a calibration equation for Computed Tomography (CT) scans with the Partial Least Squares (PLS) regression technique in order to predict the LMP of the carcass and of the different cuts, and to study and compare two methodologies for selecting the variables (Variable Importance for Projection, VIP, and stepwise) to be included in the prediction equation. The error of prediction with cross-validation (RMSEPCV) of the LMP obtained with PLS and VIP-based selection was 0.82%, and with stepwise selection it was 0.83%. Predicting the LMP by scanning only the ham gave an RMSEPCV of 0.97%; if both the ham and the loin were scanned, the RMSEPCV was 0.90%. The results indicate that for CT data both VIP and stepwise selection are good methods. Moreover, scanning only the ham allowed us to obtain a good prediction of the LMP of the whole carcass.
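A sketch of PLS calibration with VIP-based variable selection; the predictors are random placeholders for CT variables, and the VIP formula is the usual one (keep variables with VIP > 1), which may differ in detail from the paper's implementation:

```python
# PLS regression plus Variable Importance in/for Projection (VIP) selection;
# X and y are random placeholders, not the paper's CT density variables.
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(1)
X = rng.normal(size=(120, 30))                   # e.g., CT density spectrum bins
y = X[:, :5] @ rng.normal(size=5) + rng.normal(scale=0.5, size=120)  # "LMP"

pls = PLSRegression(n_components=5).fit(X, y)

def vip_scores(pls):
    # VIP from X-weights, X-scores, and y-loadings (standard definition)
    W, T, Q = pls.x_weights_, pls.x_scores_, pls.y_loadings_
    p = W.shape[0]
    ss = (T ** 2).sum(axis=0) * (Q ** 2).ravel()  # variance explained per component
    wnorm2 = (W / np.linalg.norm(W, axis=0)) ** 2
    return np.sqrt(p * (wnorm2 * ss).sum(axis=1) / ss.sum())

selected = np.where(vip_scores(pls) > 1.0)[0]
print("variables kept by the VIP > 1 rule:", selected)
```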
Abstract:
When dealing with sustainability we are concerned with the biophysical as well as the monetary aspects of economic and ecological interactions. This multidimensional approach requires that special attention be given to dimensional issues in curve fitting practice in economics. Unfortunately, many empirical and theoretical studies in economics, as well as in ecological economics, apply dimensional numbers in exponential or logarithmic functions. We show that it is an analytical error to put a dimensional quantity x into exponential functions ($a^x$) or logarithmic functions ($\log_a x$). Second, we investigate the conditions on data sets under which a particular logarithmic specification is superior to the usual regression specification. This analysis shows that the superiority of a logarithmic specification in terms of the least squares norm depends heavily on the available data set. The last section deals with economists’ “curve fitting fetishism”. We propose that a distinction be made between curve fitting over past observations and the development of a theoretical or empirical law capable of maintaining its fitting power for any future observations. Finally, we conclude the paper with several epistemological issues relating to dimensions and curve fitting practice in economics.
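The standard dimensional-homogeneity argument behind that claim fits in one display; the series below is a textbook fact, not a quotation from the paper:

```latex
% If x carries a physical unit, the power series of a^x mixes incommensurable
% dimensions (a pure number, metres, square metres, ...):
\[
  a^{x} = e^{x \ln a} = 1 + x \ln a + \frac{(x \ln a)^{2}}{2!} + \cdots
\]
% Only a dimensionless ratio such as x / x_0 can legitimately enter a^x or \log_a x.
```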
Abstract:
Whereas numerical modeling using finite-element methods (FEM) can provide the transient temperature distribution in a component with sufficient accuracy, the development of compact dynamic thermal models that can be used for electrothermal simulation is of the utmost importance. While in most cases single power sources are considered, here we focus on the simultaneous presence of multiple sources. The thermal model takes the form of a thermal impedance matrix containing the thermal impedance transfer functions between two arbitrary ports. Each individual transfer function element $Z_{ij}$ is obtained from the analysis of the temperature transient at node $i$ after a power step at node $j$. Different options for multiexponential transient analysis are detailed and compared. Among the options explored, small thermal models can be obtained by constrained nonlinear least squares (NLSQ) methods if the order is selected properly using validation signals. The methods are applied to the extraction of dynamic compact thermal models for a new ultrathin chip stack technology (UTCS).
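A sketch of a constrained multiexponential fit of a thermal step response, using nonnegativity bounds as a simple stand-in for the paper's constrained NLSQ; the model order, time grid, and values are illustrative:

```python
# Constrained multiexponential fit of a thermal step response
# Zth(t) = sum_i R_i * (1 - exp(-t / tau_i)); positivity bounds stand in for
# the constrained NLSQ of the paper, and order selection against a separate
# validation transient is only indicated by the comment below.
import numpy as np
from scipy.optimize import least_squares

def zth(params, t, order):
    R, tau = params[:order], params[order:]
    return (R[:, None] * (1 - np.exp(-t[None, :] / tau[:, None]))).sum(axis=0)

t = np.logspace(-5, 1, 200)                          # seconds
z_meas = zth(np.array([2.0, 5.0, 1e-4, 1e-2]), t, 2) + 0.01 * np.random.randn(t.size)

order = 2                                            # pick using a validation signal
x0 = np.concatenate([np.ones(order), np.logspace(-4, -1, order)])
fit = least_squares(lambda p: zth(p, t, order) - z_meas, x0,
                    bounds=(1e-9, np.inf))           # R_i, tau_i > 0 constraint
print("R:", fit.x[:order], "tau:", fit.x[order:])
```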
Abstract:
We consider nonlinear elliptic problems involving a nonlocal operator: the square root of the Laplacian in a bounded domain with zero Dirichlet boundary conditions. For positive solutions to problems with power nonlinearities, we establish existence and regularity results, as well as a priori estimates of Gidas-Spruck type. In addition, among other results, we prove a symmetry theorem of Gidas-Ni-Nirenberg type.
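Read literally, the abstract concerns the following model problem; the display is my paraphrase, with $(-\Delta)^{1/2}$ the spectral square root of the Dirichlet Laplacian and $p > 1$ a power nonlinearity:

```latex
\[
  (-\Delta)^{1/2} u = u^{p} \quad \text{in } \Omega, \qquad
  u > 0 \quad \text{in } \Omega, \qquad
  u = 0 \quad \text{on } \partial\Omega,
\]
% with \Omega a bounded domain; existence, regularity, Gidas-Spruck a priori
% bounds, and Gidas-Ni-Nirenberg symmetry are established in this setting.
```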
Abstract:
Multiexponential decays may contain time constants differing by several orders of magnitude. In such cases, uniform sampling results in very long records featuring a high degree of oversampling in the final part of the transient. Here, we analyze a nonlinear time scale transformation that reduces the total number of samples with minimum signal distortion, achieving a substantial reduction in the computational cost of subsequent analyses. We propose a time-varying filter whose length is optimized for minimum mean square error.
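A sketch of the idea: resample a uniformly sampled multiexponential record onto a logarithmic time grid, averaging each output sample over a window that grows with t (a crude length-varying antialiasing filter). The window sizing and the log map are illustrative choices, not the paper's optimized filter:

```python
# Log-time resampling of a multiexponential record with a bin-averaging
# (length-varying) filter; parameters are illustrative.
import numpy as np

fs = 1e6                                         # uniform sampling rate, Hz
t = np.arange(int(2e6)) / fs                     # 2 s record
x = 1.0 * np.exp(-t / 1e-4) + 0.5 * np.exp(-t / 0.3)   # decades-apart taus

t_log = np.logspace(-6, np.log10(t[-1]), 200)    # compressed time grid
edges = np.concatenate([[0.0], np.sqrt(t_log[:-1] * t_log[1:]), [t[-1]]])
idx = np.searchsorted(t, edges)

# Average all raw samples falling in each log-spaced bin
x_log = np.array([x[i:j].mean() if j > i else x[min(i, x.size - 1)]
                  for i, j in zip(idx[:-1], idx[1:])])
print("samples: %d -> %d" % (x.size, x_log.size))
```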
Abstract:
Weak solutions of the spatially inhomogeneous (diffusive) Aizenman-Bak model of coagulation-breakup within a bounded domain with homogeneous Neumann boundary conditions are shown to converge, in the fast reaction limit, towards local equilibria determined by their mass. Moreover, this mass is the solution of a nonlinear diffusion equation whose nonlinearity depends on the (size-dependent) diffusion coefficient. Initial data are assumed to have an integrable zero order moment and a square integrable first order moment in size, and finite entropy. In contrast to our previous result [CDF2], we are able to show the convergence without assuming uniform bounds from above and below on the number density of clusters.
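For orientation, the diffusive Aizenman-Bak model with constant coagulation and fragmentation rates is usually written as below; this display is my reconstruction from the standard literature, not taken from the paper:

```latex
% f = f(t, x, y): number density of clusters of size y at position x
\[
  \partial_t f - d(y)\,\Delta_x f
  = \int_0^y f(y - y')\, f(y')\, dy' - 2 f(y) \int_0^\infty f(y')\, dy'
  + 2 \int_y^\infty f(y')\, dy' - y\, f(y),
\]
% with homogeneous Neumann conditions in x; the mass \rho = \int_0^\infty y f\, dy
% is what solves the limiting nonlinear diffusion equation.
```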
Abstract:
This paper fills a gap in the existing literature on least squares learning in linear rational expectations models by studying a setup in which agents learn by fitting ARMA models to a subset of the state variables. This is a natural specification in models with private information because, in the presence of hidden state variables, agents have an incentive to condition forecasts on the infinite past record of observables. We study a particular setting in which it suffices for agents to fit a first order ARMA process, which preserves the tractability of a finite dimensional parameterization while permitting conditioning on the infinite past record. We describe how previous results (Marcet and Sargent [1989a, 1989b]) can be adapted to handle the convergence of estimators of an ARMA process in our self-referential environment. We also study “rates” of convergence analytically and via computer simulation.
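A sketch of the agents' forecasting step: fitting a first-order ARMA to a simulated observable, whose MA component conditions the forecast on the infinite past. statsmodels' exact-likelihood estimator stands in here for the recursive least squares learning scheme analyzed in the paper:

```python
# Fit a first-order ARMA to a simulated observable and form a one-step
# forecast; the series and coefficients are illustrative.
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(2)
e = rng.normal(size=1001)
y = np.zeros(1001)
for t in range(1, 1001):                 # simulate an ARMA(1,1) observable
    y[t] = 0.7 * y[t - 1] + e[t] + 0.4 * e[t - 1]

fit = ARIMA(y[1:], order=(1, 0, 1)).fit()
print(fit.params)                        # const, AR(1), MA(1), sigma2
print("one-step forecast:", fit.forecast(steps=1))
```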
Abstract:
A family of scaling corrections aimed at improving the chi-square approximation of goodness-of-fit test statistics in small samples, large models, and nonnormal data was proposed in Satorra and Bentler (1994). For structural equation models, Satorra-Bentler's (SB) scaling corrections are available in standard computer software. Often, however, the interest is not in the overall fit of a model, but in a test of the restrictions that a null model, say ${\cal M}_0$, implies on a less restricted one, ${\cal M}_1$. If $T_0$ and $T_1$ denote the goodness-of-fit test statistics associated with ${\cal M}_0$ and ${\cal M}_1$, respectively, then typically the difference $T_d = T_0 - T_1$ is used as a chi-square test statistic with degrees of freedom equal to the difference in the number of independent parameters estimated under the models ${\cal M}_0$ and ${\cal M}_1$. As in the case of the goodness-of-fit test, it is of interest to scale the statistic $T_d$ in order to improve its chi-square approximation in realistic, i.e., nonasymptotic and nonnormal, applications. In a recent paper, Satorra (1999) shows that the difference between two Satorra-Bentler scaled test statistics for overall model fit does not yield the correct SB scaled difference test statistic. Satorra developed an expression that permits scaling the difference test statistic, but his formula has some practical limitations, since it requires heavy computations that are not available in standard computer software. The purpose of the present paper is to provide an easy way to compute the scaled difference chi-square statistic from the scaled goodness-of-fit test statistics of models ${\cal M}_0$ and ${\cal M}_1$. A Monte Carlo study is provided to illustrate the performance of the competing statistics.
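For concreteness, a sketch of the "easy way" computation, using the scaled-difference formula I associate with this line of work (the scaling constant of the difference is recovered from the two scaled statistics and their degrees of freedom); the numbers are invented for illustration:

```python
# Scaled difference chi-square from two fitted models: T0, T1 are the
# UNSCALED statistics, Ts0, Ts1 the Satorra-Bentler scaled ones, df0 > df1.
def scaled_difference(T0, Ts0, df0, T1, Ts1, df1):
    c0, c1 = T0 / Ts0, T1 / Ts1          # implied scaling constants
    cd = (df0 * c0 - df1 * c1) / (df0 - df1)
    return (T0 - T1) / cd, df0 - df1     # scaled T_d and its df

Td, df = scaled_difference(T0=95.2, Ts0=80.1, df0=40, T1=60.4, Ts1=52.7, df1=35)
print("scaled difference chi-square: %.2f on %d df" % (Td, df))
```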
Abstract:
We propose an iterative procedure to minimize the sum of squares function that avoids the nonlinear nature of estimating the first order moving average parameter and provides a closed form for the estimator. The asymptotic properties of the method are discussed and the consistency of the linear least squares estimator is proved for the invertible case. We perform various Monte Carlo experiments in order to compare the sample properties of the linear least squares estimator with those of its nonlinear counterpart for the conditional and unconditional cases. Some examples are also discussed.
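One classical linear least squares route to the MA(1) parameter, in the Hannan-Rissanen style, shown only to make the "linear instead of nonlinear" idea concrete; the paper's own iterative closed-form procedure may differ:

```python
# Two-step linear estimation of theta in y_t = e_t + theta * e_{t-1}:
# a long autoregression approximates the innovations, then theta is
# recovered by an ordinary linear regression.
import numpy as np

rng = np.random.default_rng(3)
e = rng.normal(size=2001)
y = e[1:] + 0.6 * e[:-1]                          # invertible MA(1), theta = 0.6

# Step 1: long AR(p) to estimate the innovations
p = 20
Yp = np.column_stack([y[p - k - 1:-k - 1] for k in range(p)])
phi, *_ = np.linalg.lstsq(Yp, y[p:], rcond=None)
e_hat = y[p:] - Yp @ phi

# Step 2: linear regression of y_t on the lagged estimated innovation
theta_hat, *_ = np.linalg.lstsq(e_hat[:-1, None], y[p + 1:], rcond=None)
print("linear LS estimate of theta:", theta_hat[0])
```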
Abstract:
In this paper we describe the results of a simulation study performed to elucidate the robustness of the Lindstrom and Bates (1990) approximation method under non-normality of the residuals, in a variety of situations. Concerning the fixed effects, the observed coverage probabilities and the true bias and mean square error values show that some aspects of this inferential approach are not completely reliable. When the true distribution of the residuals is asymmetrical, the true coverage is markedly lower than the nominal one. The best results are obtained for the skew normal distribution, not for the normal distribution. On the other hand, the results are partially reversed concerning the random effects. Soybean genotype data are used to illustrate the methods and to motivate the simulation scenarios.
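A small Monte Carlo in the spirit of the study, checking Wald-interval coverage for a nonlinear (but fixed-effects-only) curve fit under skew-normal residuals; it is a template for this kind of check, not a reproduction of the Lindstrom-Bates mixed-model setting:

```python
# Empirical coverage of a nominal 95% Wald interval for a decay rate when
# residuals are skew normal; model and settings are illustrative.
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import skewnorm

rng = np.random.default_rng(4)
t = np.linspace(0, 5, 30)
true_rate, hits, reps = 0.8, 0, 500

for _ in range(reps):
    eps = skewnorm.rvs(a=8, size=t.size, random_state=rng)
    y = 3.0 * np.exp(-true_rate * t) + 0.3 * (eps - eps.mean())
    popt, pcov = curve_fit(lambda t, A, r: A * np.exp(-r * t), t, y,
                           p0=(2.0, 1.0), maxfev=2000)
    half = 1.96 * np.sqrt(pcov[1, 1])
    hits += (popt[1] - half <= true_rate <= popt[1] + half)

print("empirical coverage of the nominal 95%% interval: %.3f" % (hits / reps))
```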
Abstract:
In this paper we analyse, using Monte Carlo simulation, the possible consequences of incorrect assumptions about the true structure of the random effects covariance matrix and the true correlation pattern of the residuals for the performance of an estimation method for nonlinear mixed models. The procedure under study is the well known linearization method due to Lindstrom and Bates (1990), implemented in the nlme library of S-Plus and R. Its performance is studied in terms of bias, mean square error (MSE), and true coverage of the associated asymptotic confidence intervals. Leaving aside other criteria, such as the convenience of avoiding over-parameterised models, it seems worse to erroneously assume some structure than to assume no structure at all when the latter would be adequate.
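A companion sketch of the abstract's theme in a deliberately simplified linear setting: wrongly assuming independent residuals when observations are correlated within groups distorts nominal coverage. The nonlinear mixed-model machinery of the paper is not reproduced here:

```python
# Coverage of a nominal 95% OLS interval when a group-level random effect is
# ignored; x is constant within groups, so the iid-error SE is understated.
import numpy as np

rng = np.random.default_rng(5)
n_groups, per, beta, hits, reps = 40, 5, 1.0, 0, 1000

for _ in range(reps):
    x = np.repeat(rng.normal(size=n_groups), per)              # group covariate
    u = np.repeat(rng.normal(scale=1.0, size=n_groups), per)   # group effect
    y = beta * x + u + rng.normal(scale=0.5, size=x.size)
    X = np.column_stack([np.ones_like(x), x])
    bhat = np.linalg.solve(X.T @ X, X.T @ y)
    resid = y - X @ bhat
    s2 = resid @ resid / (x.size - 2)
    se = np.sqrt(s2 * np.linalg.inv(X.T @ X)[1, 1])            # iid-error SE
    hits += abs(bhat[1] - beta) <= 1.96 * se

print("coverage when independence is wrongly assumed: %.3f" % (hits / reps))
```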