870 results for panel data modeling


Relevance: 100.00%

Abstract:

Using panel data on large Polish (non-financial) firms, this paper examines the determinants of employment change over the period 1996-2002. Paying particular attention to the asymmetry hypothesis, we investigate the impact of own wages, outside wages, output growth, regional characteristics and sectoral affiliation on the evolution of employment. In keeping with the 'right to manage' model, we find that employment dynamics are not affected negatively by alternative wages. Furthermore, in contrast to the early transition period, we find evidence that employment levels respond to positive sales growth (in all but state firms). The early literature (e.g. Kollo, 1998) found that labour hoarding lowered employment elasticities in the presence of positive demand shocks. Our findings suggest that inherited labour hoarding may no longer be a factor. We argue that the present pattern of employment adjustment is better explained by the role of insiders. This tentative conclusion hinges on the contrasting behaviour of state and privatised companies and the similar behaviour of privatised and new private companies. We conclude that the lower responsiveness of employment to both positive and negative changes in revenue in state firms is consistent with the proposition that rent sharing by insiders is stronger in the state sector.
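The asymmetry hypothesis amounts to letting employment respond differently to positive and negative output changes within a firm-level fixed-effects regression. A minimal sketch of such a specification in Python with the linearmodels package (the file and column names are hypothetical, and this is not the authors' exact model):

    import pandas as pd
    from linearmodels.panel import PanelOLS

    # Firm-year panel of large Polish firms, 1996-2002 (hypothetical file/columns).
    df = pd.read_csv("polish_firms.csv").set_index(["firm_id", "year"])

    # Asymmetry: split sales growth into its positive and negative components.
    df["sales_growth_pos"] = df["sales_growth"].clip(lower=0)
    df["sales_growth_neg"] = df["sales_growth"].clip(upper=0)

    mod = PanelOLS.from_formula(
        "emp_growth ~ 1 + own_wage + outside_wage"
        " + sales_growth_pos + sales_growth_neg + EntityEffects",
        data=df,
    )
    res = mod.fit(cov_type="clustered", cluster_entity=True)
    print(res.summary)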

Relevance: 100.00%

Abstract:

Conceptual database design is an unusually difficult and error-prone task for novice designers. This study examined how two training approaches, rule-based and pattern-based, might improve performance on database design tasks. A rule-based approach prescribes a sequence of rules for modeling conceptual constructs and the action to be taken at various stages while developing a conceptual model. A pattern-based approach presents data modeling structures that occur frequently in practice and prescribes guidelines on how to recognize and use these structures. This study describes the conceptual framework, experimental design, and results of a laboratory experiment that employed novice designers to compare the effectiveness of the two training approaches (between-subjects) at three levels of task complexity (within-subjects). Results indicate an interaction effect between treatment and task complexity. The rule-based approach was significantly better in the low-complexity and high-complexity cases; there was no statistical difference in the medium-complexity case. Designer performance fell significantly as complexity increased. Overall, though the rule-based approach was not significantly superior to the pattern-based approach in all instances, it outperformed the pattern-based approach at two of the three complexity levels. The primary contributions of the study are (1) the operationalization of the complexity construct to a degree not addressed in previous studies; (2) the development of a pattern-based instructional approach to database design; and (3) the finding that the effectiveness of a particular training approach may depend on the complexity of the task.
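The reported interaction between training approach (between-subjects) and task complexity (within-subjects) is the kind of effect a mixed-design model can test. A minimal sketch with statsmodels, assuming hypothetical columns score, approach, complexity and designer_id; the study's actual analysis may differ:

    import pandas as pd
    import statsmodels.formula.api as smf

    scores = pd.read_csv("design_scores.csv")      # one row per designer x task
    model = smf.mixedlm(
        "score ~ C(approach) * C(complexity)",     # treatment-by-complexity interaction
        data=scores,
        groups=scores["designer_id"],              # repeated measures on each designer
    )
    print(model.fit().summary())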

Relevance: 100.00%

Abstract:

The main objective of this research is to evaluate the long-term relationship between energy consumption and GDP for a set of Latin American countries over the period 1980-2009. The estimation is carried out within a non-stationary panel framework, using the production function to control for other sources of GDP variation, such as capital and labor. In addition, panel unit root tests are used to identify the non-stationarity of these variables, followed by the panel cointegration test proposed by Pedroni (2004) to avoid a spurious regression (Entorf, 1997; Kao, 1999).
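Pedroni's panel cointegration statistics are not available in statsmodels, but the building blocks of the procedure can be illustrated with country-by-country unit root and Engle-Granger cointegration tests as a rough stand-in (file and column names are hypothetical):

    import pandas as pd
    from statsmodels.tsa.stattools import adfuller, coint

    panel = pd.read_csv("latam_energy_gdp.csv")   # country, year, ln_gdp, ln_energy, ln_capital, ln_labor

    for country, g in panel.groupby("country"):
        adf_p = adfuller(g["ln_gdp"])[1]                                          # H0: unit root
        eg_p = coint(g["ln_gdp"], g[["ln_energy", "ln_capital", "ln_labor"]])[1]  # H0: no cointegration
        print(f"{country}: ADF p={adf_p:.3f}, Engle-Granger p={eg_p:.3f}")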

Relevance: 100.00%

Abstract:

The uncertainty about the future of a firm has to be modelled and incorporated into company valuation beyond the explicit forecast period, i.e., in the continuing or terminal value considered within valuation models. However, a multiplicity of factors influence the continuing value of businesses and are not currently considered within valuation models. Ignoring these factors may cause significant errors of judgment, leading models to goodwill or badwill values far from the substantial value of the underlying assets; consequently, the results they provide will differ markedly from market values. So why not consider alternative models that incorporate the life expectancy of companies, as well as the influence of other attributes of the company, in order to achieve a smoother adjustment between market prices and valuation methods? This study aims to contribute to this area, having as its main objective the analysis of potential determinants of firm value in the long term. Using a sample of 714 listed companies from 15 European countries and panel data for the period between 1992 and 2011, our results show that continuing value cannot be regarded as the present value of a constant or growing perpetuity of a single attribute of the company; it should instead reflect a set of attributes such as free cash flow, net income, the average life expectancy of the company, investment in R&D, capabilities and quality of management, liquidity and financing structure.
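For reference, the benchmark the paper argues is too narrow treats continuing value as the present value of a perpetuity of a single attribute growing at a constant rate. A minimal sketch of that textbook formula:

    def continuing_value(attribute, discount_rate, growth=0.0):
        """Continuing (terminal) value as a growing perpetuity of one attribute,
        e.g. free cash flow: attribute * (1 + g) / (r - g)."""
        if discount_rate <= growth:
            raise ValueError("discount rate must exceed the growth rate")
        return attribute * (1 + growth) / (discount_rate - growth)

    # Example: free cash flow of 100, 8% discount rate, 2% perpetual growth.
    print(continuing_value(100.0, 0.08, 0.02))   # 1700.0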

Relevance: 100.00%

Abstract:

Due to the rapid changes shaping the Swedish financial sector, such as financial deregulation and technological innovation, it is imperative to examine the extent to which Swedish financial institutions have performed amid these changes. To accomplish this, the work investigates the determinants of performance for Swedish monetary financial institutions. Assumptions were derived from the theoretical and empirical literature to investigate this research question using seven explanatory variables. Two models were specified using Return on Assets (ROA) and Return on Equity (ROE) as the main performance indicators, and for the sake of reliability and validity, three different estimators were employed: Ordinary Least Squares (OLS), Generalized Least Squares (GLS) and Feasible Generalized Least Squares (FGLS). The Akaike Information Criterion (AIC) was used to verify which specification explains performance better, while a robustness check of the parameter estimates was performed by correcting the standard errors. Based on the findings, the ROA specification has the lowest AIC and standard errors compared with the ROE specification. Under ROA, two variables, the profit margin and the interest coverage ratio (ICR), prove statistically significant, while under ROE only the ICR is significant across all estimators. The results also show that FGLS is the most efficient estimator, followed by GLS and then OLS. When robust standard errors are used, the gearing ratio, which measures the capital structure, becomes significant under ROA and its estimate becomes positive under ROE. We conclude that, within the period of study, three variables (ICR, profit margin and gearing) are significant and four variables are insignificant. The overall findings show that the institutions strive to maximize returns, but these returns were just sufficient to cover their costs of operation. Much should be done, as per the ASC theory, to avoid liquidity and credit risk problems. Moreover, the estimated values of the ICR and profit margin show that considerable effort, with sound financial policies, is required to increase performance by one percentage point. Areas of further research could include how individual stochastic factors such as the DuPont model, repo rates, inflation, GDP, etc. influence performance.
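A minimal sketch of how the OLS versus feasible GLS comparison on the ROA specification can be set up in Python, using a simple two-step FGLS that models the error variance; the column names are hypothetical and the thesis's exact estimators and variable set may differ:

    import numpy as np
    import pandas as pd
    import statsmodels.api as sm

    data = pd.read_csv("swedish_mfi.csv")
    X = sm.add_constant(data[["profit_margin", "icr", "gearing"]])
    y = data["roa"]

    ols = sm.OLS(y, X).fit()
    print("OLS AIC:", ols.aic)

    # Two-step FGLS: model the error variance, then re-estimate by weighted least squares.
    var_fit = sm.OLS(np.log(ols.resid ** 2), X).fit()
    weights = 1.0 / np.exp(var_fit.fittedvalues)
    fgls = sm.WLS(y, X, weights=weights).fit(cov_type="HC1")   # robust standard errors
    print(fgls.summary())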

Relevance: 100.00%

Abstract:

Based on four samples of Portuguese family-owned firms: i) 185 young, low-sized family-owned firms; ii) 167 young, high-sized family-owned firms; iii) 301 old, low-sized family-owned firms; and iv) 353 old, high-sized family-owned firms, we show that age and size are fundamental characteristics in family-owned firms' financing decisions. The multiple empirical evidence obtained allows us to conclude that the financing decisions of young, low-sized family-owned firms are quite close to the assumptions of Pecking Order Theory, whereas those of old, high-sized family-owned firms are quite close to what is forecast by Trade-Off Theory. The lower information asymmetry associated with greater age, the lower likelihood of bankruptcy associated with greater size, as well as the lower concentration of ownership and management that comes with greater age and size, may be especially important in the financing decisions of family-owned firms. In addition, we find that GDP, the interest rate and periods of crisis have a greater effect on the debt of young, low-sized family-owned firms than on that of the family-owned firms in the remaining research samples.
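The four-subsample design can be reproduced mechanically by splitting the data on median age and size and estimating the same debt equation in each cell. A rough sketch with hypothetical column names (not the paper's exact specification):

    import pandas as pd
    import statsmodels.formula.api as smf

    firms = pd.read_csv("portuguese_family_firms.csv")
    firms["group"] = (
        firms["age"].ge(firms["age"].median()).map({True: "old", False: "young"})
        + "_"
        + firms["size"].ge(firms["size"].median()).map({True: "high", False: "low"})
    )

    for name, g in firms.groupby("group"):
        res = smf.ols("debt_ratio ~ profitability + tangibility + gdp_growth + interest_rate", data=g).fit()
        print(name, res.params.round(3).to_dict())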

Relevance: 100.00%

Abstract:

This study uses panel data techniques to analyze the determinants of the liquid asset levels of listed companies in Brazil, Argentina, Chile, Mexico and Peru over the period from 1995 to 2009. The ratio used in the models, referred to as liquid assets (or simply cash), comprises cash holdings and short-term financial investments divided by the firm's total assets. A growing trend of liquid asset accumulation as a proportion of total assets over the years can be identified in practically all countries. We find evidence that firms with greater growth opportunities, larger size (measured by total assets), higher dividend payout and higher profitability accumulate more cash in most of the countries analyzed. Likewise, firms with higher investment in fixed assets, greater cash generation, greater cash flow volatility, higher leverage and higher working capital hold lower levels of liquid assets. We identify similarities in the determinants of liquidity relative to empirical studies of firms in developed countries, as well as differences due to phenomena specific to emerging markets, such as high domestic interest rates, differing degrees of access to the international credit market and to credit lines from development agencies, equity kicking, among others. In a test using the data set of the largest Brazilian firms, the presence of target cash levels is identified through a first-order autoregressive (AR(1)) model. Variables found in more recent studies of firms in developed countries, such as acquisitions, recent IPOs and level of corporate governance, are also tested on the Brazilian data set.
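The target-cash test mentioned for the Brazilian subsample is a first-order autoregressive (partial-adjustment) specification. A minimal sketch with hypothetical column names; note that OLS with a lagged dependent variable in a short panel suffers from Nickell bias, so a GMM estimator would normally be preferred:

    import pandas as pd
    import statsmodels.formula.api as smf

    d = pd.read_csv("latam_liquidity.csv").sort_values(["firm_id", "year"])
    d["cash_lag"] = d.groupby("firm_id")["cash_to_assets"].shift(1)
    dd = d.dropna(subset=["cash_lag"])                     # drop the first year of each firm

    res = smf.ols(
        "cash_to_assets ~ cash_lag + size + leverage + dividends + cash_flow",
        data=dd,
    ).fit(cov_type="cluster", cov_kwds={"groups": dd["firm_id"]})
    print("speed of adjustment toward the cash target:", 1 - res.params["cash_lag"])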

Relevance: 100.00%

Abstract:

This article presents a statistical model of agricultural yield data based on a set of hierarchical Bayesian models that allows joint modeling of temporal and spatial autocorrelation. This method captures a comprehensive range of the various uncertainties involved in predicting crop insurance premium rates as opposed to the more traditional ad hoc, two-stage methods that are typically based on independent estimation and prediction. A panel data set of county-average yield data was analyzed for 290 counties in the State of Parana (Brazil) for the period of 1990 through 2002. Posterior predictive criteria are used to evaluate different model specifications. This article provides substantial improvements in the statistical and actuarial methods often applied to the calculation of insurance premium rates. These improvements are especially relevant to situations where data are limited.
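A heavily simplified hierarchical sketch in PyMC, with county random intercepts and a common time trend; the paper's model additionally handles spatial and temporal autocorrelation jointly, which is not attempted here (file and column names are hypothetical):

    import pandas as pd
    import pymc as pm

    yld = pd.read_csv("parana_yields.csv")                 # county, year, yield_t_ha
    county_idx, counties = pd.factorize(yld["county"])
    t = (yld["year"] - yld["year"].min()).to_numpy()

    with pm.Model():
        mu_a = pm.Normal("mu_a", 0.0, 10.0)                # mean county intercept
        sigma_a = pm.HalfNormal("sigma_a", 5.0)
        a = pm.Normal("a", mu_a, sigma_a, shape=len(counties))
        b = pm.Normal("b", 0.0, 1.0)                       # common yield trend
        sigma = pm.HalfNormal("sigma", 5.0)
        pm.Normal("y", a[county_idx] + b * t, sigma, observed=yld["yield_t_ha"])
        idata = pm.sample(1000, tune=1000, target_accept=0.9)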

Relevance: 100.00%

Abstract:

This paper presents a review of methodology for semi-supervised modeling with kernel methods when the manifold assumption is guaranteed to be satisfied. It concerns environmental data modeling on natural manifolds, such as the complex topographies of mountainous regions, where environmental processes are highly influenced by the relief. These relations, possibly regionalized and nonlinear, can be modeled from data with machine learning, using digital elevation models in semi-supervised kernel methods. The range of tools and methodological issues discussed in the study includes feature selection and semi-supervised Support Vector algorithms. A real case study devoted to data-driven modeling of meteorological fields illustrates the discussed approach.
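One way to mimic a semi-supervised kernel workflow in scikit-learn is self-training around an RBF-kernel SVM; this is illustrative and not the specific algorithms used in the paper, and the synthetic "terrain" features stand in for attributes derived from a digital elevation model:

    import numpy as np
    from sklearn.svm import SVC
    from sklearn.semi_supervised import SelfTrainingClassifier
    from sklearn.metrics import accuracy_score

    rng = np.random.default_rng(0)
    X = rng.normal(size=(500, 3))                    # e.g. elevation, slope, aspect
    y_true = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)
    y = y_true.copy()
    y[rng.random(500) < 0.9] = -1                    # unlabeled samples are marked -1

    clf = SelfTrainingClassifier(SVC(kernel="rbf", probability=True)).fit(X, y)
    print("labeled fraction:", (y != -1).mean())
    print("accuracy on true labels:", accuracy_score(y_true, clf.predict(X)))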

Relevance: 100.00%

Abstract:

There has been much research on analyzing various forms of competing risks data. Nevertheless, there are several occasions in survival studies where the existing models and methodologies are inadequate for the analysis of competing risks data. Identifiability problems and various types of censoring induce more complications in the analysis of competing risks data than in classical survival analysis. Parametric models are not adequate for the analysis of competing risks data, since the assumptions about the underlying lifetime distributions may not hold well. Motivated by this, in the present study we develop some new inference procedures that are completely distribution-free for the analysis of competing risks data.
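A distribution-free analysis of competing risks typically starts from the nonparametric cumulative incidence function, which weights each cause-specific event by the overall survival just before it occurs. A minimal sketch (ties are processed sequentially; cause 0 denotes censoring):

    import numpy as np

    def cumulative_incidence(time, cause, cause_of_interest):
        """Nonparametric cumulative incidence for one cause in the presence of
        competing risks; cause == 0 means censored."""
        time, cause = np.asarray(time, float), np.asarray(cause)
        order = np.argsort(time)
        time, cause = time[order], cause[order]

        n, surv, cif = len(time), 1.0, 0.0
        cif_path = []
        for i, c in enumerate(cause):
            at_risk = n - i
            if c == cause_of_interest:
                cif += surv / at_risk            # S(t-) * (events of this cause / at risk)
            if c != 0:                           # any failure reduces overall survival
                surv *= 1.0 - 1.0 / at_risk
            cif_path.append(cif)
        return time, np.array(cif_path)

    t, F1 = cumulative_incidence([2, 3, 3, 5, 8, 9], [1, 2, 0, 1, 0, 2], cause_of_interest=1)
    print(F1[-1])   # estimated probability of eventually failing from cause 1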

Relevance: 100.00%

Abstract:

This research project starts from the dynamics of the outsourced distribution model of a mass-consumption company in Colombia specialized in dairy products, referred to in this study as "Lactosa". Using panel data within a case study, two demand models are built by product category and distributor, and through stochastic simulation the relevant variables affecting their cost structures are identified. The problem is modeled from the income statement of each of the four distributors analyzed in the central region of the country. The cost structure and sales behavior are analyzed, given a logistics distribution margin (%), as a function of the relevant independent variables relating to the business, the market and the macroeconomic environment described in the object of study. Among other findings, notable gaps stand out in distribution costs and sales-force costs, despite the homogeneity of the segments. The study identifies value drivers and the costs with the greatest individual dispersion, and suggests strategic alliances among some groups of distributors. The panel data modeling identifies the relevant management variables that affect sales volume by category and distributor, which focuses management's efforts. It is recommended to narrow these gaps and for the producer to promote strategies focused on standardizing distributors' internal processes, and to promote and replicate the analysis models without attempting to replace expert knowledge. Scenario building jointly strengthens the competitive position of the company and its distributors.
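The stochastic-simulation step can be illustrated by drawing sales volumes and cost rates and examining the resulting distribution of distributor profit; all distributions and parameters below are hypothetical, not Lactosa's actual figures:

    import numpy as np

    rng = np.random.default_rng(42)
    n = 10_000

    volume = rng.normal(loc=1_000_000, scale=120_000, size=n)    # units sold
    price = 2.5                                                  # price per unit
    dist_cost_rate = rng.uniform(0.06, 0.11, size=n)             # distribution cost / sales
    sales_force_rate = rng.uniform(0.03, 0.06, size=n)           # sales-force cost / sales

    revenue = volume * price
    profit = revenue * (1.0 - dist_cost_rate - sales_force_rate)
    print("mean profit: %.0f" % profit.mean())
    print("5th percentile of profit: %.0f" % np.percentile(profit, 5))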

Relevance: 100.00%

Abstract:

This paper proposes a simple Ordered Probit model to analyse the monetary policy reaction function of the Colombian Central Bank. There is evidence that the reaction function is asymmetric, in the sense that the Bank increases the Bank rate when the gap between observed inflation and the inflation target (lagged once) is positive, but it does not reduce the Bank rate when the gap is negative. This behaviour suggests that the Bank is more interested in fulfilling the announced inflation target than in reducing inflation excessively. The forecasting performance of the model, both within and beyond the estimation period, appears to be particularly good.
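The asymmetry can be built into an ordered-probit reaction function by letting the positive and negative parts of the lagged inflation gap enter as separate regressors. A minimal sketch with statsmodels (hypothetical file and column names; the decision variable codes rate cuts, holds and hikes as -1, 0, +1):

    import pandas as pd
    from statsmodels.miscmodels.ordinal_model import OrderedModel

    m = pd.read_csv("colombia_policy.csv")
    m["gap_pos"] = m["inflation_gap_lag1"].clip(lower=0)
    m["gap_neg"] = m["inflation_gap_lag1"].clip(upper=0)
    m["decision"] = pd.Categorical(m["decision"], categories=[-1, 0, 1], ordered=True)

    mod = OrderedModel(m["decision"], m[["gap_pos", "gap_neg", "output_gap"]], distr="probit")
    print(mod.fit(method="bfgs").summary())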

Relevance: 100.00%

Abstract:

The advances that have characterized spatial econometrics in recent years are mostly theoretical and have not yet found extensive empirical application. In this work we aim to review the main tools of spatial econometrics and to show an empirical application of one of the most recently introduced estimators. Despite the numerous alternatives that econometric theory provides for the treatment of spatial (and spatiotemporal) data, empirical analyses are still limited by the lack of availability of the corresponding routines in statistical and econometric software. Spatiotemporal modeling represents one of the most recent developments in spatial econometric theory, and the finite-sample properties of the estimators that have been proposed are currently being tested in the literature. We provide a comparison between some estimators (a quasi-maximum likelihood, QML, estimator and some GMM-type estimators) for a fixed-effects dynamic panel data model under certain conditions, by means of a Monte Carlo simulation analysis. We focus on different settings, characterized either by fully stable or by quasi-unit-root series. We also investigate the extent of the bias caused by a non-spatial estimation of a model when the data are characterized by different degrees of spatial dependence. Finally, we provide an empirical application of a QML estimator for a time-space dynamic model which includes a temporal, a spatial and a spatiotemporal lag of the dependent variable. This is done by choosing a relevant and prolific field of analysis, in which spatial econometrics has so far found only limited space, in order to explore the value added of considering the spatial dimension of the data. In particular, we study the determinants of cropland value in the Midwestern U.S.A. over the years 1971-2009, taking the present value model (PVM) as the theoretical framework of analysis.
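The bias from estimating non-spatially when the data-generating process contains a spatial lag can be demonstrated with a small Monte Carlo experiment; the ring-shaped weight matrix, parameter values and OLS comparison below are purely illustrative, not the paper's design:

    import numpy as np

    rng = np.random.default_rng(1)
    n, rho, beta, reps = 100, 0.6, 1.0, 500

    # Row-standardised weight matrix on a ring: each unit has two neighbours.
    W = np.zeros((n, n))
    for i in range(n):
        W[i, (i - 1) % n] = W[i, (i + 1) % n] = 0.5
    A_inv = np.linalg.inv(np.eye(n) - rho * W)

    estimates = []
    for _ in range(reps):
        x = rng.normal(size=n)
        e = rng.normal(size=n)
        y = A_inv @ (beta * x + e)            # reduced form of y = rho*W*y + x*beta + e
        estimates.append((x @ y) / (x @ x))   # non-spatial OLS slope (no intercept)

    print(f"true beta: {beta}, mean non-spatial OLS estimate: {np.mean(estimates):.3f}")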

Relevance: 100.00%

Abstract:

Surveys can collect important data that inform policy decisions and drive social science research. Large government surveys collect information from the U.S. population on a wide range of topics, including demographics, education, employment, and lifestyle. Analysis of survey data presents unique challenges. In particular, one needs to account for missing data, for complex sampling designs, and for measurement error. Conceptually, a survey organization could spend lots of resources getting high-quality responses from a simple random sample, resulting in survey data that are easy to analyze. However, this scenario often is not realistic. To address these practical issues, survey organizations can leverage the information available from other sources of data. For example, in longitudinal studies that suffer from attrition, they can use the information from refreshment samples to correct for potential attrition bias. They can use information from known marginal distributions or survey design to improve inferences. They can use information from gold standard sources to correct for measurement error.

This thesis presents novel approaches to combining information from multiple sources that address the three problems described above.

The first method addresses nonignorable unit nonresponse and attrition in a panel survey with a refreshment sample. Panel surveys typically suffer from attrition, which can lead to biased inference when basing analysis only on cases that complete all waves of the panel. Unfortunately, the panel data alone cannot inform the extent of the bias due to attrition, so analysts must make strong and untestable assumptions about the missing data mechanism. Many panel studies also include refreshment samples, which are data collected from a random sample of new individuals during some later wave of the panel. Refreshment samples offer information that can be utilized to correct for biases induced by nonignorable attrition while reducing reliance on strong assumptions about the attrition process. To date, these bias correction methods have not dealt with two key practical issues in panel studies: unit nonresponse in the initial wave of the panel and in the refreshment sample itself. As we illustrate, nonignorable unit nonresponse can significantly compromise the analyst's ability to use the refreshment samples for attrition bias correction. Thus, it is crucial for analysts to assess how sensitive their inferences (corrected for panel attrition) are to different assumptions about the nature of the unit nonresponse. We present an approach that facilitates such sensitivity analyses, both for suspected nonignorable unit nonresponse in the initial wave and in the refreshment sample. We illustrate the approach using simulation studies and an analysis of data from the 2007-2008 Associated Press/Yahoo News election panel study.

The second method incorporates informative prior beliefs about marginal probabilities into Bayesian latent class models for categorical data. The basic idea is to append synthetic observations to the original data such that (i) the empirical distributions of the desired margins match those of the prior beliefs, and (ii) the values of the remaining variables are left missing. The degree of prior uncertainty is controlled by the number of augmented records. Posterior inferences can be obtained via typical MCMC algorithms for latent class models, tailored to deal efficiently with the missing values in the concatenated data. We illustrate the approach using a variety of simulations based on data from the American Community Survey, including an example of how augmented records can be used to fit latent class models to data from stratified samples.
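The data-augmentation idea behind the second method can be sketched directly in pandas: synthetic records carry the believed margin on the constrained variable and missing values everywhere else, and their number sets the prior's weight. The variables and the prior margin below are hypothetical:

    import numpy as np
    import pandas as pd

    survey = pd.DataFrame({
        "employment": ["employed", "unemployed", "employed", "employed"],
        "education": ["BA", "HS", "HS", "PhD"],
    })

    prior_margin = {"employed": 0.94, "unemployed": 0.06}    # believed marginal distribution
    n_aug = 200                                              # more records = stronger prior

    augmented = pd.DataFrame({
        "employment": np.random.default_rng(0).choice(
            list(prior_margin), size=n_aug, p=list(prior_margin.values())
        ),
        "education": np.nan,                                 # remaining variables left missing
    })
    combined = pd.concat([survey, augmented], ignore_index=True)
    print(combined["employment"].value_counts(normalize=True))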

The third method leverages the information from a gold standard survey to model reporting error. Survey data are subject to reporting error when respondents misunderstand the question or accidentally select the wrong response. Sometimes survey respondents knowingly select the wrong response, for example, by reporting a higher level of education than they actually have attained. We present an approach that allows an analyst to model reporting error by incorporating information from a gold standard survey. The analyst can specify various reporting error models and assess how sensitive their conclusions are to different assumptions about the reporting error process. We illustrate the approach using simulations based on data from the 1993 National Survey of College Graduates. We use the method to impute error-corrected educational attainments in the 2010 American Community Survey using the 2010 National Survey of College Graduates as the gold standard survey.

Relevance: 90.00%

Abstract:

In this study, regression models for grouped survival data are evaluated in which the effect of censoring time is considered in the model and the regression structure is modeled through four link functions. The methodology for grouped survival data is based on life tables, with the times grouped into k intervals so that ties are eliminated. Thus, the data modeling is performed by considering discrete lifetime regression models. The model parameters are estimated by using the maximum likelihood and jackknife methods. To detect influential observations in the proposed models, diagnostic measures based on case deletion, denominated global influence, and influence measures based on small perturbations of the data or the model, referred to as local influence, are used. In addition to these measures, the local influence and the total influential estimate are also employed. Various simulation studies are performed to compare the performance of the four link functions of the regression models for grouped survival data across different parameter settings, sample sizes and numbers of intervals. Finally, a data set is analyzed using the proposed regression models. (C) 2010 Elsevier B.V. All rights reserved.
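Grouped (discrete-time) survival regression can be illustrated as a binary regression on a person-period data set, with the interval entering as a factor; swapping the link function of the binomial family gives the alternative models whose fits (e.g. by AIC) can then be compared. Column and file names are hypothetical:

    import pandas as pd
    import statsmodels.api as sm
    import statsmodels.formula.api as smf

    pp = pd.read_csv("person_period.csv")      # id, interval, event (0/1), covariates

    fit = smf.glm(
        "event ~ C(interval) + treatment + age",
        data=pp,
        family=sm.families.Binomial(),         # logit link; other links give alternative models
    ).fit()
    print(fit.summary())
    print("AIC:", fit.aic)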