7 results for common method variance

in the Repositório digital da Fundação Getúlio Vargas - FGV


Relevance:

100.00%

Publisher:

Abstract:

Operational capabilities are characterized as an internal resource of the firm and a source of competitive advantage. However, the operations strategy literature offers an inadequate constitutive definition of operational capabilities: it disregards differences across contexts, rests on a limited empirical base, and does not adequately explore the extensive literature on operational practices. When operational practices are deployed in the firm's internal environment, they can become embedded in organizational routines and, through tacit production knowledge, turn into operational capabilities, thereby creating barriers to imitation. Even so, few researchers have explored operational practices as antecedents of operational capabilities. Building on the literature review, we investigate the nature of operational capabilities; the relationship between operational practices and operational capabilities; the types of operational capabilities found in the firm's internal environment; and the impact of operational capabilities on operational performance. We conducted a mixed-methods study. In the qualitative stage, we carried out multiple case studies in four firms: two U.S. multinationals operating in Brazil and two Brazilian firms. Data were collected through semi-structured interviews with semi-open questions, grounded in the literature on operational practices and operational capabilities. The interviews were conducted in person; in total, 73 interviews were held (21 in the first case, 18 in the second, 18 in the third, and 16 in the fourth). All interviews were recorded and transcribed verbatim, and the data were coded in the NVivo software. In the quantitative stage, the sample comprised 206 firms. The questionnaire was built from an extensive literature review and from the results of the qualitative phase. A Q-sort procedure was applied, and a pre-test was conducted with production managers. Measures were taken to reduce common method variance. Ten scales were used: 1) Continuous Improvement; 2) Information Management; 3) Learning; 4) Customer Support; 5) Innovation; 6) Operational Efficiency; 7) Flexibility; 8) Customization; 9) Supplier Management; and 10) Operational Performance. We used confirmatory factor analysis to establish reliability as well as content, convergent, and discriminant validity. The data were analyzed with multiple regressions. Our main results are as follows. First, operational practices act as antecedents of operational capabilities. Second, we propose a typology divided into two constructs. The first construct, Standalone Capabilities, consists of zero-order capabilities such as Customer Support, Innovation, Operational Efficiency, Flexibility, and Supplier Management; these operational capabilities aim to improve the firm's processes and have a direct relationship with operational performance. The second construct, Across-the-Board Capabilities, comprises first-order capabilities such as Continuous Learning and Information Management; these operational capabilities are considered dynamic and play the role of reconfiguring the Standalone Capabilities.
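
As a rough illustration of the quantitative step described above, the sketch below runs the kind of multiple regression the abstract mentions, relating operational capability scales to operational performance. The file name, the column names (customer_support, innovation, etc.), and the use of robust standard errors are hypothetical assumptions, not the study's actual survey items or estimation choices.

```python
# Illustrative sketch only: multiple regression of operational performance on
# operational capability scales. File and column names are hypothetical.
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("survey_responses.csv")   # assumed: one row per firm (n = 206)

capabilities = ["customer_support", "innovation", "operational_efficiency",
                "flexibility", "supplier_management"]
X = sm.add_constant(df[capabilities])      # add an intercept term
y = df["operational_performance"]

model = sm.OLS(y, X).fit(cov_type="HC3")   # heteroskedasticity-robust SEs (assumption)
print(model.summary())
```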

Relevance:

30.00%

Publisher:

Abstract:

This paper investigates the degree of short-run and long-run co-movement in U.S. sectoral output data by estimating sectoral trends and cycles. A theoretical model based on Long and Plosser (1983) is used to derive a reduced form for sectoral output from first principles. Cointegration and common features (cycles) tests are performed; sectoral output data seem to share a relatively high number of common trends and a relatively low number of common cycles. A special trend-cycle decomposition of the data set is performed, and the results indicate very similar cyclical behavior across sectors and very different behavior for trends. Indeed, the sectors' cyclical components appear as one. In a variance decomposition analysis, prominent sectors such as Manufacturing and Wholesale/Retail Trade exhibit relatively important transitory shocks.
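
A minimal sketch of the kind of cointegration (common-trends) test the abstract refers to is given below, using the Johansen procedure from statsmodels. The data file, column layout, deterministic term, and lag choice are illustrative assumptions, not the paper's specification.

```python
# Illustrative sketch: Johansen cointegration test on a panel of (log) sectoral
# output series. File name, deterministic term, and lag order are assumptions.
import pandas as pd
from statsmodels.tsa.vector_ar.vecm import coint_johansen

sectors = pd.read_csv("sectoral_output.csv", index_col=0)

# det_order=0: constant term; k_ar_diff=2: two lagged differences (illustrative)
result = coint_johansen(sectors, det_order=0, k_ar_diff=2)

# r cointegrating vectors among n series imply n - r common stochastic trends.
for r, (trace, cv) in enumerate(zip(result.lr1, result.cvt[:, 1])):
    print(f"H0: rank <= {r}  trace = {trace:.2f}  5% critical value = {cv:.2f}")
```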

Relevance:

30.00%

Publisher:

Abstract:

Despite the commonly held belief that aggregate data display short-run comovement, there has been little discussion about the econometric consequences of this feature of the data. We use exhaustive Monte-Carlo simulations to investigate the importance of restrictions implied by common-cyclical features for estimates and forecasts based on vector autoregressive models. First, we show that the “best” empirical model developed without common cycle restrictions need not nest the “best” model developed with those restrictions. This is due to possible differences in the lag-lengths chosen by model selection criteria for the two alternative models. Second, we show that the costs of ignoring common cyclical features in vector autoregressive modelling can be high, both in terms of forecast accuracy and efficient estimation of variance decomposition coefficients. Third, we find that the Hannan-Quinn criterion performs best among model selection criteria in simultaneously selecting the lag-length and rank of vector autoregressions.
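
For concreteness, the sketch below selects a VAR lag length with the Hannan-Quinn criterion, the criterion the abstract finds performs best; the joint selection of the common-cycle rank is not shown, and the data file is an assumption.

```python
# Illustrative sketch: VAR lag-length selection via the Hannan-Quinn criterion.
# The input file is an assumption; rank (common-cycle) selection is omitted.
import pandas as pd
from statsmodels.tsa.api import VAR

data = pd.read_csv("macro_series.csv", index_col=0)  # assumed stationary series

model = VAR(data)
order = model.select_order(maxlags=8)     # tabulates AIC, BIC, FPE and HQIC
print(order.summary())

p_hq = order.selected_orders["hqic"]      # lag length minimizing Hannan-Quinn
fitted = model.fit(p_hq)
print(fitted.summary())
```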

Relevance:

30.00%

Publisher:

Abstract:

It is well known that cointegration between the levels of two variables (labeled Yt and yt in this paper) is a necessary condition to assess the empirical validity of a present-value model (PV and PVM, respectively, hereafter) linking them. The work on cointegration has been so prevalent that it is often overlooked that another necessary condition for the PVM to hold is that the forecast error entailed by the model is orthogonal to the past. The basis of this result is the use of rational expectations in forecasting future values of variables in the PVM. If this condition fails, the present-value equation will not be valid, since it will contain an additional term capturing the (non-zero) conditional expected value of future error terms. Our article has a few novel contributions, but two stand out. First, in testing for PVMs, we advise splitting the restrictions implied by PV relationships into orthogonality conditions (or reduced rank restrictions) before additional tests on the values of parameters. We show that PV relationships entail a weak-form common feature relationship as in Hecq, Palm, and Urbain (2006) and in Athanasopoulos, Guillén, Issler and Vahid (2011), and also a polynomial serial-correlation common feature relationship as in Cubadda and Hecq (2001), which represent restrictions on dynamic models that allow several tests for the existence of PV relationships to be used. Because these relationships occur mostly with financial data, we propose tests based on generalized method of moments (GMM) estimates, where it is straightforward to propose robust tests in the presence of heteroskedasticity. We also propose a robust Wald test developed to investigate the presence of reduced rank models. Their performance is evaluated in a Monte-Carlo exercise. Second, in the context of asset pricing, we propose applying a permanent-transitory (PT) decomposition based on Beveridge and Nelson (1981), which focuses on extracting the long-run component of asset prices, a key concept in modern financial theory as discussed in Alvarez and Jermann (2005), Hansen and Scheinkman (2009), and Nieuwerburgh, Lustig, and Verdelhan (2010). Here again we can exploit the results developed in the common cycle literature to easily extract permanent and transitory components under both long- and short-run restrictions. The techniques discussed herein are applied to long-span annual data on long- and short-term interest rates and on prices and dividends for the U.S. economy. In both applications we do not reject the existence of a common cyclical feature vector linking these two series. Extracting the long-run component shows the usefulness of our approach and highlights the presence of asset-pricing bubbles.
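
The sketch below illustrates a Beveridge-Nelson (1981) permanent-transitory decomposition in its simplest univariate form, an AR(1) in first differences, rather than the multivariate, common-feature-based extraction the article develops; the data file and series name are assumptions.

```python
# Illustrative sketch: univariate Beveridge-Nelson decomposition with an AR(1)
# model for the first differences. File and column names are assumptions.
import pandas as pd
from statsmodels.tsa.ar_model import AutoReg

prices = pd.read_csv("asset_prices.csv", index_col=0)["log_price"]
dy = prices.diff().dropna()

ar1 = AutoReg(dy, lags=1).fit()           # Delta y_t = c + phi * Delta y_{t-1} + e_t
c, phi = ar1.params
mu = c / (1 - phi)                        # unconditional mean of the differences

# BN permanent component: current level plus all expected future changes beyond drift
trend = prices.loc[dy.index] + (phi / (1 - phi)) * (dy - mu)
cycle = prices.loc[dy.index] - trend      # transitory (cyclical) component
print(cycle.describe())
```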

Relevance:

30.00%

Publisher:

Abstract:

This paper constructs new business cycle indices for Argentina, Brazil, Chile, and Mexico based on common dynamic factors extracted from a comprehensive set of sectoral output, external trade, fiscal and financial variables. The analysis spans the 135 years since the insertion of these economies into the global economy in the 1870s. The constructed indices are used to derive a business cycle chronology for these countries and characterize a set of new stylized facts. In particular, we show that all four countries have historically displayed a striking combination of high business cycle volatility and persistence relative to advanced-country benchmarks. Volatility changed considerably over time, however, being very high during the early formative decades through the Great Depression, and again during the 1970s and early 1980s, before declining sharply in three of the four countries. We also identify a sizeable common factor across the four economies, which variance decompositions ascribe mostly to foreign interest rates and shocks to commodity terms of trade.
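
A minimal sketch of the index-construction idea, extracting one common dynamic factor from a standardized panel with statsmodels' DynamicFactor model, is shown below. The input file, variable set, and lag orders are illustrative assumptions rather than the paper's specification.

```python
# Illustrative sketch: one common dynamic factor as a business cycle index.
# The data file and model orders are assumptions, not the paper's setup.
import pandas as pd
from statsmodels.tsa.statespace.dynamic_factor import DynamicFactor

panel = pd.read_csv("annual_series.csv", index_col=0)   # output, trade, fiscal, financial series
panel = (panel - panel.mean()) / panel.std()            # standardize each series

model = DynamicFactor(panel, k_factors=1, factor_order=2)
res = model.fit(disp=False)

business_cycle_index = res.factors.filtered[0]          # filtered common factor
print(business_cycle_index[-5:])
```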

Relevance:

30.00%

Publisher:

Abstract:

Paper presented at the Congresso Nacional de Matemática Aplicada à Indústria, November 18–21, 2014, Caldas Novas, Goiás.

Relevance:

30.00%

Publisher:

Abstract:

Differences-in-Differences (DID) is one of the most widely used identification strategies in applied economics. However, how to draw inferences in DID models when there are few treated groups remains an open question. We show that the usual inference methods used in DID models might not perform well when there are few treated groups and errors are heteroskedastic. In particular, we show that when there is variation in the number of observations per group, inference methods designed to work with few treated groups tend to over-reject the null hypothesis when the treated groups are small relative to the control groups, and to under-reject it when they are large. This happens because larger groups tend to have lower variance, generating heteroskedasticity in the group x time aggregate DID model. We provide evidence from Monte Carlo simulations and from placebo DID regressions with the American Community Survey (ACS) and the Current Population Survey (CPS) datasets to show that this problem is relevant even in datasets with large numbers of observations per group. We then derive an alternative inference method that provides accurate hypothesis testing in situations where there are few treated groups (or even just one) and many control groups in the presence of heteroskedasticity. Our method assumes that we can model the heteroskedasticity of a linear combination of the errors. We show that this assumption can be satisfied without imposing strong assumptions on the errors in common DID applications. With many pre-treatment periods, we show that this assumption can be relaxed; instead, we provide an alternative inference method that relies on strict stationarity and ergodicity of the time series. Finally, we consider two recent alternatives to DID when there are many pre-treatment periods. We extend our inference methods to linear factor models when there are few treated groups. We also derive conditions under which a permutation test for the synthetic control estimator proposed by Abadie et al. (2010) is robust to heteroskedasticity and propose a modification of the test statistic that provided a better heteroskedasticity correction in our simulations.
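
To fix ideas, the sketch below estimates the conventional group x time aggregate DID regression with group-clustered standard errors, the baseline whose inference the abstract shows can be unreliable with few treated groups; the authors' corrected inference method is not reproduced here, and all variable and file names are assumptions.

```python
# Illustrative sketch: conventional group x time aggregate DID with clustered SEs.
# This is the baseline the paper critiques, not its proposed inference method.
import pandas as pd
import statsmodels.formula.api as smf

micro = pd.read_csv("individual_data.csv")   # assumed columns: group, time, treatment, outcome

# Aggregate to group x time cells; cell size drives the heteroskedasticity discussed above
cells = (micro.groupby(["group", "time"], as_index=False)
              .agg(y=("outcome", "mean"),
                   d=("treatment", "max"),
                   n=("outcome", "size")))

# Two-way fixed effects DID, standard errors clustered at the group level
did = smf.ols("y ~ d + C(group) + C(time)", data=cells).fit(
    cov_type="cluster", cov_kwds={"groups": cells["group"]})
print(did.summary().tables[1])
```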