951 results for Micro data


Relevance: 100.00%

Abstract:

We estimate the effect of employment density on wages in Sweden using a large geocoded data set on individuals and workplaces. Employment density is measured in four circular zones around each individual’s place of residence. The data contain a rich set of control variables that we use in an instrumental-variables framework. Results show a relatively strong but rather local positive effect of employment density on wages; beyond 5 kilometers the effect becomes negative. This may indicate that agglomeration economies fall off faster with distance than the effects of congestion do.
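The zone-based density measure can be sketched in a few lines. This is a minimal illustration with made-up coordinates and ring boundaries; the actual zone radii and distance computation used in the study are assumptions here:

```python
import math

def zone_densities(home, workplaces, rings=((0, 1), (1, 2), (2, 5), (5, 10))):
    """Sum workplace employment in concentric rings (km) around a residence.

    `home` is an (x, y) coordinate in km on a projected grid; `workplaces`
    is a list of ((x, y), employment) tuples. Ring boundaries are illustrative.
    """
    totals = [0] * len(rings)
    for (wx, wy), jobs in workplaces:
        d = math.hypot(wx - home[0], wy - home[1])  # Euclidean distance in km
        for i, (lo, hi) in enumerate(rings):
            if lo <= d < hi:
                totals[i] += jobs
                break
    return totals

home = (0.0, 0.0)
workplaces = [((0.5, 0.0), 120), ((1.5, 0.5), 80), ((3.0, 4.0), 200), ((6.0, 0.0), 50)]
print(zone_densities(home, workplaces))
```

Each element of the result would then enter the wage regression as a separate density regressor, allowing the effect to change sign with distance as the abstract describes.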

Relevance: 100.00%

Abstract:

Includes bibliography

Relevance: 100.00%

Abstract:

A large-scale Chinese agricultural survey was conducted under the direction of John Lossing Buck from 1929 through 1933. At the end of the 1990s, parts of the original micro data from Buck’s survey were discovered at Nanjing Agricultural University. An international joint study was begun to restore the micro data and to construct parts of a micro database covering both the crop yield survey and the special expenditure survey. This paper summarizes the characteristics of farmlands and cropping patterns in the crop yield micro data, which cover 2,102 farmers in 20 counties of 9 provinces. To test the classical hypothesis that an inverse relationship between land productivity and cultivated area is observed in developing countries, a Box-Cox transformation test of functional form was conducted for five main crops in Buck’s crop yield survey. The test shows that the relationship between land productivity and cultivated area is linear and somewhat negative for wheat and barley, while for rice, rapeseed, and seed cotton it appears slightly positive. It can be tentatively concluded that the relationship between cultivated area and land productivity is not the same across crops, and that differences in labor intensity and in the level of commercialization of each crop may be strongly related to the existence or non-existence of an inverse relationship.
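At its core, the inverse-relationship hypothesis is a question about the sign of the slope of land productivity on cultivated area. A minimal sketch with hypothetical farm records (the paper itself compares functional forms via a Box-Cox transformation rather than assuming a plain linear fit):

```python
def ols_slope(x, y):
    """Slope of a simple least-squares fit of y on x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    return sxy / sxx

# Hypothetical records: cultivated area and yield per unit area.
area = [2.0, 4.0, 6.0, 8.0, 10.0]
yield_per_area = [105.0, 100.0, 98.0, 95.0, 92.0]
slope = ols_slope(area, yield_per_area)
print(slope < 0)  # a negative slope is consistent with an inverse relationship
```

The Box-Cox approach generalizes this by estimating a transformation parameter for each variable, so that linear and logarithmic forms are nested special cases and the data choose between them.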

Relevance: 100.00%

Abstract:

The classical problem of agricultural productivity measurement has regained interest owing to recent price hikes in world food markets. At the same time, there is a new methodological debate on appropriate identification strategies for addressing endogeneity and collinearity problems in production function estimation. We examine the plausibility of four established and innovative identification strategies for the case of agriculture and test a set of related estimators using farm-level panel datasets from seven EU countries. The newly suggested control function and dynamic panel approaches offer attractive conceptual improvements over the received ‘within’ and duality models. Even so, the conceptual sophistication built into these estimators does not always live up to expectations in empirical implementation. This is particularly true for the dynamic panel estimator, which mostly failed to identify reasonable elasticities for the (quasi-)fixed factors. Less demanding proxy approaches represent an interesting alternative for agricultural applications. In our EU sample, we find very low shadow prices for labour, land and fixed capital across countries. The production elasticity of materials is high, so improving the availability of working capital is the most promising way to increase agricultural productivity.
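The received 'within' estimator that serves as the benchmark here can be sketched directly: demean output and inputs within each farm to sweep out fixed effects, then run OLS. This is a simplified two-regressor version on noise-free synthetic Cobb-Douglas data (so the elasticities are recovered exactly); it is not the authors' implementation:

```python
from collections import defaultdict

def within_estimator(groups, y, X):
    """'Within' (fixed-effects) estimator for two regressors: demean y and
    each column of X inside each group, then solve the OLS normal equations."""
    idx = defaultdict(list)
    for i, g in enumerate(groups):
        idx[g].append(i)
    yd, Xd = list(y), [list(col) for col in X]
    for rows in idx.values():
        ybar = sum(y[i] for i in rows) / len(rows)
        for i in rows:
            yd[i] = y[i] - ybar
        for col, cold in zip(X, Xd):
            xbar = sum(col[i] for i in rows) / len(rows)
            for i in rows:
                cold[i] = col[i] - xbar
    # normal equations (X'X) b = X'y for exactly two regressors
    a11 = sum(v * v for v in Xd[0])
    a22 = sum(v * v for v in Xd[1])
    a12 = sum(u * v for u, v in zip(Xd[0], Xd[1]))
    b1 = sum(u * v for u, v in zip(Xd[0], yd))
    b2 = sum(u * v for u, v in zip(Xd[1], yd))
    det = a11 * a22 - a12 * a12
    return [(a22 * b1 - a12 * b2) / det, (a11 * b2 - a12 * b1) / det]

# Noise-free Cobb-Douglas data with farm fixed effects:
# ln y = fe + 0.6 ln L + 0.3 ln K, so the estimator recovers 0.6 and 0.3.
farms = [0, 0, 0, 1, 1, 1]
lnL = [1.0, 2.0, 3.0, 2.0, 3.0, 4.0]
lnK = [0.5, 1.0, 2.0, 1.0, 2.0, 2.5]
fe = {0: 0.2, 1: -0.1}
lnY = [fe[f] + 0.6 * l + 0.3 * k for f, l, k in zip(farms, lnL, lnK)]
print(within_estimator(farms, lnY, [lnL, lnK]))
```

The endogeneity critique in the abstract is precisely that real input choices are correlated with unobserved productivity shocks, which this simple demeaning does not address; the control function and dynamic panel approaches exist to handle that.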

Relevance: 100.00%

Abstract:

This paper presents an empirical methodology for studying the reallocation of agricultural labour across sectors using micro data. While different approaches have been employed in the literature to better understand labour mobility, examining the determinants of exiting farm employment and entering off-farm activities, the initial decision of individuals to work in agriculture, as opposed to other sectors, has often been neglected. The proposed methodology controls for the selectivity bias that can arise with a non-random sample of the population (here, those in agricultural employment), which would otherwise lead to biased and inconsistent estimates. The empirical approach is a three-step multivariate probit with two selection equations and one outcome equation, used to explore the determinants of farm labour exiting agriculture and switching occupational sector. The model can also account for the effects of the different market and production structures across European member states on the allocation of agricultural labour and its adjustment.

Relevance: 100.00%

Abstract:

The original contribution of this work is threefold. First, this thesis develops a critical perspective on current evaluation practice for business support, with a focus on the timing of evaluation. The time frame generally applied in business support policy evaluation is limited to one to two, seldom three, years post-intervention. This is despite calls for long-term impact studies by various authors concerned about time lags before effects are fully realised. The desire for long-term evaluation runs against the requirements of policy-makers and funders, who seek quick results. Moreover, current ‘best practice’ frameworks do not address timing or its implications, and data availability limits the ability to undertake long-term evaluation. Second, this thesis provides methodological value for follow-up and similar studies by linking scheme-beneficiary data with official performance datasets, so that data availability problems are avoided through the use of secondary data. Third, this thesis builds the evidence base through a longitudinal impact study of small business support in England covering seven years of post-intervention data. This illustrates the variability of results across evaluation periods and the value of using multiple years of data for a robust understanding of support impact. For survival, the impact of assistance is found to be immediate but limited. For growth, significant impact centres on a two-to-three-year period post-intervention in the linear selection and quantile regression models: positive for employment and turnover, negative for productivity. Attribution of impact may be problematic for subsequent periods. The results clearly support the use of longitudinal data and analysis, and a greater appreciation by evaluators of the time factor. The analysis recommends a time frame of four to five years post-intervention for evaluating soft business support.

Relevance: 80.00%

Abstract:

This paper proposes arithmetic and geometric Paasche quality-adjusted price indexes that combine micro data from the base period with macro data on the averages of asset prices and characteristics at the index period.
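As a rough illustration of combining base-period micro data with index-period averages, the sketch below fits a one-characteristic hedonic regression in the base period and deflates the index-period average price by the predicted base-period price of the average index-period quality mix. The functional form and all numbers are assumptions for illustration, not the paper's estimator:

```python
def hedonic_fit(prices, chars):
    """Base-period hedonic fit p = a + b*z by simple least squares."""
    n = len(prices)
    mz, mp = sum(chars) / n, sum(prices) / n
    b = sum((z - mz) * (p - mp) for z, p in zip(chars, prices)) / \
        sum((z - mz) ** 2 for z in chars)
    return mp - b * mz, b

# Base-period micro data: asset price and one quality characteristic (e.g. size).
p0 = [100.0, 120.0, 140.0]
z0 = [50.0, 60.0, 70.0]
a, b = hedonic_fit(p0, z0)

# Index period: only macro averages of prices and characteristics are observed.
p1_mean, z1_mean = 150.0, 65.0

# Quality-adjusted Paasche-type index: index-period average price relative to
# the base-period hedonic valuation of the index-period quality mix.
index = p1_mean / (a + b * z1_mean)
print(round(index, 4))
```

The appeal of this structure is that the index period requires no micro data at all, only published averages, which matches the data constraint the paper addresses.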

Relevance: 70.00%

Abstract:

This paper discusses how global financial institutions use big data analytics within their compliance operations. Much previous research has focused on the strategic implications of big data, but little has considered how such tools are entwined with regulatory breaches and investigations in financial services. Our work covers two in-depth qualitative case studies, each addressing a distinct type of analytics. The first case focuses on analytics that manage everyday compliance breaches, which managers expect; the second on analytics that facilitate investigation and litigation where serious unexpected breaches may have occurred. In doing so, the study focuses on the micro-level data to understand how these tools influence operational risks and practices. The paper draws on two bodies of literature, the social studies of information systems and of finance, to guide our analysis and practitioner recommendations. The cases illustrate how technologies are implicated in multijurisdictional challenges and regulatory conflicts at each end of the operational risk spectrum. We find that compliance analytics both shape and report regulatory matters, yet firms often have difficulty recruiting individuals with the relevant but diverse skill sets. The cases also underscore the increasing need for financial organizations to adopt robust information governance policies and processes to ease future remediation efforts.

Relevance: 70.00%

Abstract:

The purpose of this thesis is to investigate price-setting behavior in Brazil and, in particular, its effects on inflation and good-level real exchange rate persistence. The thesis comprises three chapters. In Chapter 1, we present the main stylized facts about the behavior of retail prices in Brazil using micro data from the CPI index computed by the Fundação Getulio Vargas; we also construct time series of price-setting statistics and relate them to macroeconomic variables using regression analyses. In Chapter 2, we investigate the relevance of heterogeneity in countries’ price stickiness for good-level real exchange rate persistence, using a newly constructed panel data set of relative prices of 115 common products between the U.S. and Brazil. Chapter 3 is devoted to the relation between sectoral price stickiness and inflation persistence.
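A standard price-setting statistic computed from CPI micro data is the frequency of price change, from which an implied average price duration follows under a constant-hazard assumption. A toy sketch with invented monthly quotes, not necessarily the exact statistics constructed in the thesis:

```python
import math

def freq_of_change(price_paths):
    """Share of item-month transitions with a price change, pooled over items."""
    changes = spells = 0
    for path in price_paths:
        for prev, cur in zip(path, path[1:]):
            spells += 1
            changes += prev != cur  # bool counts as 0/1
    return changes / spells

# Hypothetical monthly price quotes for three items.
paths = [[10, 10, 12, 12], [5, 5, 5, 6], [8, 9, 9, 9]]
f = freq_of_change(paths)
duration = -1 / math.log(1 - f)  # implied average price duration in months
print(f)
```

Computing `f` month by month rather than pooled over the whole sample yields the time series of price-setting statistics that can then be regressed on macroeconomic variables.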

Relevance: 70.00%

Abstract:

Continuous variables are among the major data types collected by survey organizations. They can be incomplete, so that data collectors need to fill in the missing values, or they can contain sensitive information that needs protection from re-identification. One approach to protecting continuous microdata is to aggregate values into cells defined by combinations of features. In this thesis, I present novel methods of multiple imputation (MI) that can be applied to impute missing values and to synthesize confidential values for continuous and magnitude data.

The first method limits the disclosure risk of continuous microdata whose marginal sums are fixed. The motivation for developing such a method comes from the magnitude tables of non-negative integer values in economic surveys. I present approaches based on a mixture of Poisson distributions to describe the multivariate distribution, so that the marginals of the synthetic data are guaranteed to sum to the original totals. I also present methods for assessing the disclosure risks of releasing such synthetic magnitude microdata. An illustration on a survey of manufacturing establishments shows that the disclosure risks are low while the information loss is acceptable.
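The guarantee that synthetic cells sum to the original total rests on a basic conditioning property: independent Poisson counts, conditioned on their sum, are multinomial with probabilities proportional to the means. The sketch below uses a single Poisson component with hypothetical fitted means; the thesis's mixture-of-Poissons model is richer than this:

```python
import random

def synthesize_with_fixed_total(fitted_means, total, rng=random.Random(0)):
    """Draw synthetic non-negative integer cells whose sum equals `total`.

    Poisson counts conditioned on their sum are multinomial, so sampling the
    multinomial preserves the published marginal total exactly.
    """
    s = sum(fitted_means)
    probs = [m / s for m in fitted_means]
    cells = [0] * len(probs)
    for _ in range(total):  # sequential multinomial draw, one unit at a time
        u = rng.random()
        acc = 0.0
        for i, p in enumerate(probs):
            acc += p
            if u <= acc:
                cells[i] += 1
                break
        else:
            cells[-1] += 1  # guard against floating-point shortfall
    return cells

synthetic = synthesize_with_fixed_total([3.0, 5.0, 2.0], total=100)
print(sum(synthetic))  # always equals the fixed total, 100
```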

The second method releases synthetic continuous microdata by a nonstandard MI method. Traditionally, MI fits a model on the confidential values and then generates multiple synthetic datasets from this model; its disclosure risk tends to be high, especially when the original data contain extreme values. I present a nonstandard MI approach conditioned on protective intervals, whose basic idea is to estimate the model parameters from these intervals rather than from the confidential values. Encouraging results from simple simulation studies suggest the potential of this new approach for limiting the posterior disclosure risk.
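The idea of estimating model parameters from protective intervals rather than from the confidential values themselves can be illustrated with a toy maximum-likelihood fit: recovering a normal mean (with known sigma) from interval observations alone, via a coarse grid search over the interval-censored likelihood. All numbers are invented and the thesis's actual model is more elaborate:

```python
import math

def normal_cdf(x):
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

def interval_mle_mean(intervals, sigma, grid):
    """Maximum-likelihood mean of a normal (known sigma) observed only
    through intervals [a, b], found by a coarse grid search."""
    def loglik(mu):
        ll = 0.0
        for a, b in intervals:
            p = normal_cdf((b - mu) / sigma) - normal_cdf((a - mu) / sigma)
            ll += math.log(max(p, 1e-300))  # guard against log(0)
        return ll
    return max(grid, key=loglik)

# Protective intervals around confidential values (made up): the modeler
# only ever sees [v - 2, v + 2], never v itself.
intervals = [(8, 12), (9, 13), (7, 11), (10, 14)]
grid = [mu / 10 for mu in range(50, 151)]
print(interval_mle_mean(intervals, sigma=2.0, grid=grid))
```

Because the likelihood only involves interval endpoints, the fitted model never touches the confidential values, which is what limits the posterior disclosure risk.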

The third method imputes missing values in continuous and categorical variables. It extends a hierarchically coupled mixture model with local dependence, but separates the variables into non-focused (e.g., almost fully observed) and focused (e.g., frequently missing) ones. The sub-model structure of the focused variables is more complex than that of the non-focused ones; their cluster indicators are linked by tensor factorization, and the focused continuous variables depend locally on non-focused values. The model properties suggest that moving strongly associated non-focused variables to the focused side can improve estimation accuracy, which is examined in several simulation studies. The method is applied to data from the American Community Survey.

Relevance: 70.00%

Abstract:

Effective decision making draws on various databases, including both micro- and macro-level datasets, and in many cases ensuring the consistency of the two levels is a major challenge. Different types of problems can occur, and several methods can be used to solve them. This paper concentrates on the input alignment of household income for microsimulation, which refers to improving the elements of a micro data survey (EU-SILC) using macro data from administrative sources. We use a combined micro-macro model called ECONS-TAX for this improvement. We also produce model projections until 2015, which is important because the official EU-SILC micro database will only be available in Hungary in the summer of 2017. The paper presents our estimates of the dynamics of income elements and the changes in income inequalities. Results show that the aligned data imply a different level of income inequality but do not affect the direction of change from year to year. However, when we analyzed a policy change, the use of aligned data caused larger differences in both income levels and their dynamics.
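The simplest form of input alignment is proportional scaling of survey incomes so that the weighted survey total matches an administrative aggregate. This is only one elementary alignment rule, not the ECONS-TAX procedure, and the data are invented:

```python
def align_to_macro(incomes, weights, macro_total):
    """Proportionally align survey incomes so that the weighted total matches
    an administrative macro aggregate. Returns aligned incomes and the factor."""
    survey_total = sum(w * y for w, y in zip(weights, incomes))
    factor = macro_total / survey_total
    return [y * factor for y in incomes], factor

# Hypothetical EU-SILC-style records: income per observation and survey weight.
incomes = [1000.0, 2500.0, 4000.0]
weights = [100.0, 80.0, 20.0]
aligned, factor = align_to_macro(incomes, weights, macro_total=456_000.0)
print(factor)
```

Note that a uniform factor shifts income levels without changing relative inequality between observations; real alignment procedures adjust income elements differentially, which is why aligned data can show a different level of inequality, as the paper reports.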