25 results for quasi-least squares
in Consorci de Serveis Universitaris de Catalunya (CSUC), Spain
Abstract:
This paper fills a gap in the existing literature on least squares learning in linear rational expectations models by studying a setup in which agents learn by fitting ARMA models to a subset of the state variables. This is a natural specification in models with private information because in the presence of hidden state variables, agents have an incentive to condition forecasts on the infinite past record of observables. We study a particular setting in which it suffices for agents to fit a first order ARMA process, which preserves the tractability of a finite dimensional parameterization while permitting conditioning on the infinite past record. We describe how previous results (Marcet and Sargent [1989a, 1989b]) can be adapted to handle the convergence of estimators of an ARMA process in our self-referential environment. We also study "rates" of convergence analytically and via computer simulation.
Abstract:
This paper analyses the robustness of Least-Squares Monte Carlo, a technique recently proposed by Longstaff and Schwartz (2001) for pricing American options. This method is based on least-squares regressions in which the explanatory variables are certain polynomial functions. We analyze the impact of different basis functions on option prices. Numerical results for American put options provide evidence that a) this approach is very robust to the choice of different alternative polynomials and b) few basis functions are required. However, these conclusions are not reached when analyzing more complex derivatives.
Abstract:
We propose an iterative procedure to minimize the sum of squares function, which avoids the nonlinear nature of estimating the first-order moving average parameter and provides a closed form of the estimator. The asymptotic properties of the method are discussed and the consistency of the linear least squares estimator is proved for the invertible case. We perform various Monte Carlo experiments in order to compare the sample properties of the linear least squares estimator with its nonlinear counterpart for the conditional and unconditional cases. Some examples are also discussed.
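The abstract does not spell out the estimator, but the idea of replacing the nonlinear MA(1) fit with linear regressions has a well-known two-step analogue in the Hannan-Rissanen spirit: fit a long autoregression to proxy the innovations, then regress on the lagged residual proxy. A minimal sketch under that assumption (function names are mine, not the paper's), for an invertible MA(1) y_t = e_t + theta*e_{t-1}:

```python
import numpy as np

def ma1_linear_ls(y, p=10):
    """Two-step linear least-squares estimate of theta in
    y_t = e_t + theta * e_{t-1} (Hannan-Rissanen-style sketch).
    Step 1: fit a long AR(p) by OLS to proxy the innovations.
    Step 2: regress y_t on the lagged residual proxy."""
    y = np.asarray(y, dtype=float)
    n = len(y)
    # Step 1: long autoregression y_t on y_{t-1}, ..., y_{t-p}
    X = np.column_stack([y[p - k - 1:n - k - 1] for k in range(p)])
    yy = y[p:]
    phi, *_ = np.linalg.lstsq(X, yy, rcond=None)
    e_hat = yy - X @ phi                      # innovation proxies
    # Step 2: closed-form OLS slope of y_t on e_hat_{t-1}
    return np.dot(e_hat[:-1], yy[1:]) / np.dot(e_hat[:-1], e_hat[:-1])

# Simulate an invertible MA(1) and check the estimate is close
rng = np.random.default_rng(0)
e = rng.standard_normal(20000)
theta_true = 0.5
y = e[1:] + theta_true * e[:-1]
print(round(ma1_linear_ls(y), 1))  # → 0.5
```

Both steps are plain OLS, so no nonlinear optimization is needed; the price is the truncation of the autoregression at p lags.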
Abstract:
The present study focuses on single-case data analysis and specifically on two procedures for quantifying differences between baseline and treatment measurements. The first technique tested is based on generalized least squares regression analysis and is compared to a proposed non-regression technique, which allows obtaining similar information. The comparison is carried out in the context of generated data representing a variety of patterns (i.e., independent measurements, different serial dependence underlying processes, constant or phase-specific autocorrelation and data variability, different types of trend, and slope and level change). The results suggest that the two techniques perform adequately for a wide range of conditions and researchers can use both of them with certain guarantees. The regression-based procedure offers more efficient estimates, whereas the proposed non-regression procedure is more sensitive to intervention effects. Considering current and previous findings, some tentative recommendations are offered to applied researchers to help them choose among the plurality of single-case data analysis techniques.
Abstract:
Customer satisfaction and retention are key issues for organizations in today's competitive marketplace. As such, much research and revenue have been invested in developing accurate ways of assessing consumer satisfaction at both the macro (national) and micro (organizational) level, facilitating comparisons in performance both within and between industries. Since the inception of the national customer satisfaction indices (CSI), partial least squares (PLS) has been used to estimate the CSI models in preference to structural equation models (SEM) because it does not rely on strict assumptions about the data. However, this choice was based upon some misconceptions about the use of SEMs and does not take into consideration more recent advances in SEM, including estimation methods that are robust to non-normality and missing data. In this paper, the SEM and PLS approaches were compared by evaluating perceptions of the Isle of Man Post Office's products and customer service using a CSI format. The new robust SEM procedures were found to be advantageous over PLS. Product quality was found to be the only driver of customer satisfaction, while image and satisfaction were the only predictors of loyalty, thus arguing for the specificity of postal services.
Abstract:
The parameterized expectations algorithm (PEA) involves a long simulation and a nonlinear least squares (NLS) fit, both embedded in a loop. Both steps are natural candidates for parallelization. This note shows that parallelization can lead to important speedups for the PEA. I provide example code for a simple model that can serve as a template for parallelization of more interesting models, as well as a download link for an image of a bootable CD that allows creation of a cluster and execution of the example code in minutes, with no need to install any software.
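The note's own example code and cluster image are not reproduced here. Purely as an illustration of the general pattern it describes — farming independent pieces of a long simulation out to workers and pooling per-block statistics for the fitting step — a minimal multiprocessing sketch with a toy simulation (all names are mine, not the paper's):

```python
import numpy as np
from multiprocessing import Pool

def simulate_block(args):
    """Toy stand-in for one worker's share of the long simulation:
    each block is simulated independently with its own seed, and only
    the sufficient statistics needed by the fitting step are returned."""
    seed, n = args
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(n)
    return x.sum(), (x ** 2).sum(), n

def second_moment(parts):
    """Pool the per-block statistics (the 'reduce' step)."""
    return sum(p[1] for p in parts) / sum(p[2] for p in parts)

if __name__ == "__main__":
    blocks = [(seed, 250_000) for seed in range(8)]
    with Pool(4) as pool:                    # 4 worker processes
        parts = pool.map(simulate_block, blocks)
    print(round(second_moment(parts), 1))    # second moment of N(0,1) → 1.0
```

Because the blocks are independent and only small summary statistics cross process boundaries, the speedup scales roughly with the number of workers; the actual PEA simulation is recursive in the state, so the paper's decomposition is more involved than this toy.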
Abstract:
The main purpose of this paper is to build a research model that integrates the socioeconomic concept of social capital within intentional models of new firm creation. In addition, some researchers have found cultural differences between countries and regions to have an effect on economic development. Therefore, a second objective of this study is to explore whether those cultural differences affect entrepreneurial cognitions. Research design and methodology: Two samples of final-year university students from Spain and Taiwan are studied through an Entrepreneurial Intention Questionnaire (EIQ). Structural equation models (partial least squares) are used to test the hypotheses. The possible existence of differences between the two sub-samples is also empirically explored through a multigroup analysis. Main outcomes and results: The proposed model explains 54.5% of the variance in entrepreneurial intention. In addition, there are some significant differences between the two sub-samples that could be attributed to cultural diversity. Conclusions: This paper has shown the relevance of cognitive social capital in shaping individuals' entrepreneurial intentions across different countries. Furthermore, it suggests that national culture may shape entrepreneurial perceptions, but not cognitive social capital. Therefore, both cognitive social capital and culture (made up essentially of values and beliefs) may act together to reinforce entrepreneurial intention.
Abstract:
The Republic of Haiti is the leading recipient of international remittances in the Latin American and Caribbean (LAC) region relative to its gross domestic product (GDP). The downside of this observation may be that the country is also the world's leading exporter of skilled workers relative to its population size. The present research uses a zero-altered negative binomial model (with logit inflation) to model households' international migration decision process, and Amemiya's Generalized Least Squares method for endogenous regressors (instrumental variable Tobit, IV-Tobit) to account for selectivity and endogeneity issues in assessing the impact of remittances on labor market outcomes. Results are in line with what has been found so far in this literature in terms of a decline in labor supply in the presence of remittances. However, the impact of international remittances does not seem to be important in determining recipient households' labor participation behavior, particularly for women.
Abstract:
Several methods have been suggested to estimate non-linear models with interaction terms in the presence of measurement error. Structural equation models eliminate measurement error bias, but require large samples. Ordinary least squares regression on summated scales, regression on factor scores, and partial least squares are appropriate for small samples but do not correct measurement error bias. Two-stage least squares regression does correct measurement error bias, but the results strongly depend on the choice of instrumental variable. This article discusses the old disattenuated regression method as an alternative for correcting measurement error in small samples. The method is extended to the case of interaction terms and is illustrated on a model that examines the interaction effect of innovation and style of use of budgets on business performance. Alternative reliability estimates that can be used to disattenuate the estimates are discussed, and a comparison is made with the alternative methods. Methods that do not correct for measurement error bias perform very similarly and considerably worse than disattenuated regression.
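For context, the classical disattenuation idea this line of work builds on divides an observed correlation by the square root of the product of the two measures' reliabilities; the article's extension to interaction terms is not reproduced here. A minimal sketch (function name is mine):

```python
import numpy as np

def disattenuate(r_xy, rel_x, rel_y):
    """Classical correction for attenuation (Spearman, 1904):
    the observed correlation r_xy is divided by the square root of
    the product of the reliabilities of x and y."""
    return r_xy / np.sqrt(rel_x * rel_y)

# Example: observed r = .30 with reliabilities .70 and .80
print(round(disattenuate(0.30, 0.70, 0.80), 3))  # → 0.401
```

With perfectly reliable measures (reliabilities of 1.0) the correction leaves the correlation unchanged; the lower the reliabilities, the more the observed correlation understates the true one.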
Abstract:
In the accounting literature, interaction or moderating effects are usually assessed by means of OLS regression, and summated rating scales are constructed to reduce measurement error bias. Structural equation models and two-stage least squares regression could be used to eliminate this bias completely, but large samples are needed. Partial least squares is appropriate for small samples but does not correct measurement error bias. In this article, disattenuated regression is discussed as a small-sample alternative and is illustrated on data of Bisbe and Otley (in press) that examine the interaction effect of innovation and style of use of budgets on performance. Sizeable differences emerge between OLS and disattenuated regression.
Abstract:
The main contribution of this paper is to show that the absorptive capacity of economies changes depending on whether the country is the leader or a follower. We also consider other variables such as internal R&D, external R&D, the development of the financial system, and institutions. To that end, we first test for the presence of a unit root and then establish a cointegration relationship among the variables in the model so that long-run conclusions can be drawn. Finally, to estimate the model we use an econometric technique that combines the traditional treatment of panel data with cointegration techniques: Dynamic Ordinary Least Squares (DOLS). This technique overcomes the limitations of OLS, whose distribution is usually non-standard owing to a finite-sample bias (caused either by the endogeneity of the explanatory variables or by serial correlation of the disturbance). Using a panel covering 8 OECD countries over 1973-2004 for the business sector, we find several results, among which we highlight that internal R&D, external R&D, the technological frontier, absorptive capacity, and the development of institutions have a positive impact on the level of TFP, whereas the development of the financial system has a negative impact. Keywords: R&D sources, technological frontier, absorptive capacity, unit roots, cointegration, DOLS.
Abstract:
The Maximum Capture problem (MAXCAP) is a decision model that addresses the issue of location in a competitive environment. This paper presents a new approach to determine which store attributes (other than distance) should be included in the new Market Capture Models and how they ought to be reflected using the Multiplicative Competitive Interaction model. The methodology involves the design and development of a survey, and the application of factor analysis and ordinary least squares. The methodology has been applied to the supermarket sector in two different scenarios: Milton Keynes (Great Britain) and Barcelona (Spain).
Abstract:
We construct a weighted Euclidean distance that approximates any distance or dissimilarity measure between individuals that is based on a rectangular cases-by-variables data matrix. In contrast to regular multidimensional scaling methods for dissimilarity data, the method leads to biplots of individuals and variables while preserving all the good properties of dimension-reduction methods that are based on the singular-value decomposition. The main benefits are the decomposition of variance into components along principal axes, which provide the numerical diagnostics known as contributions, and the estimation of nonnegative weights for each variable. The idea is inspired by the distance functions used in correspondence analysis and in principal component analysis of standardized data, where the normalizations inherent in the distances can be considered as differential weighting of the variables. In weighted Euclidean biplots we allow these weights to be unknown parameters, which are estimated from the data to maximize the fit to the chosen distances or dissimilarities. These weights are estimated using a majorization algorithm. Once this extra weight-estimation step is accomplished, the procedure follows the classical path in decomposing the matrix and displaying its rows and columns in biplots.
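The distance at the core of the method is d(i, j) = sqrt(sum_k w_k (x_ik - x_jk)^2); the majorization step that estimates the weights from the target dissimilarities is not sketched here. A minimal illustration of the distance itself for given weights (function name is mine):

```python
import numpy as np

def weighted_euclidean(X, w):
    """Pairwise weighted Euclidean distances between the rows of X:
    d(i, j) = sqrt(sum_k w_k * (x_ik - x_jk)**2), for nonnegative
    weights w. Unit weights recover the ordinary Euclidean distance."""
    Xw = X * np.sqrt(w)                       # scale column k by sqrt(w_k)
    sq = ((Xw[:, None, :] - Xw[None, :, :]) ** 2).sum(-1)
    return np.sqrt(sq)

X = np.array([[0.0, 0.0],
              [3.0, 4.0]])
print(weighted_euclidean(X, np.array([1.0, 1.0]))[0, 1])  # → 5.0
```

Choosing w_k as the reciprocal of column k's variance reproduces the distance implicit in principal component analysis of standardized data, which is the special case the paper generalizes by treating the weights as free parameters.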
Abstract:
The analysis of multiexponential decays is challenging because of their complex nature. When analyzing these signals, not only the parameters, but also the orders of the models, have to be estimated. We present an improved spectroscopic technique specially suited for this purpose. The proposed algorithm combines an iterative linear filter with an iterative deconvolution method. A thorough analysis of the noise effect is presented. The performance is tested with synthetic and experimental data.