2 results for "spend hen"

at Duke University


Relevance: 10.00%

Abstract:

At least since the seminal works of Jacob Mincer, labor economists have sought to understand how students make higher education investment decisions. Mincer’s original work asks how students decide how much education to acquire; subsequent work by various authors asks how students choose where to attend college, what field to major in, and whether to drop out of college.

Broadly speaking, this rich subfield of the literature contributes to society in two ways: First, it provides a better understanding of important social behaviors. Second, it helps policymakers anticipate the responses of students when evaluating various policy reforms.

While research on the higher education investment decisions of students has had an enormous impact on our understanding of society and has shaped countless education policies, students are only one interested party in the higher education landscape. In the jargon of economists, students represent only the 'demand side' of higher education: customers choosing from a set of available alternatives. Opposite students are instructors and administrators, who represent the 'supply side' of higher education: those who decide which options are available to students.

For similar reasons, it is also important to understand how individuals on the supply side of education make decisions: First, this provides a deeper understanding of the behavior of important social institutions. Second, it helps policymakers anticipate the responses of instructors and administrators when evaluating various reforms. However, while there is a substantial literature on decisions made on the demand side of education, far less attention has been paid to decisions on the supply side.

This dissertation uses empirical evidence to better understand how instructors and administrators make decisions and the implications of these decisions for students.

In the first chapter, I use data from Duke University and a Bayesian model of correlated learning to measure the signal quality of grades across academic fields. The correlated feature of the model allows grades in one academic field to signal ability in all other fields, allowing me to measure both 'own category' signal quality and 'spillover' signal quality. Estimates reveal a clear division between information-rich Science, Engineering, and Economics grades and less informative Humanities and Social Science grades. In many specifications, information spillovers are so powerful that precise Science, Engineering, and Economics grades are more informative about Humanities and Social Science abilities than Humanities and Social Science grades are. This suggests that students who take engineering courses during their freshman year make more informed specialization decisions later in college.
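To make the mechanics concrete, the following is a minimal sketch of correlated learning, not the chapter's estimated model: abilities in two fields are jointly normal, a grade is a noisy signal of ability in its own field, and the correlation lets a precise grade in one field sharpen beliefs about the other. The correlation and noise variances below are invented for illustration.

```python
import numpy as np

# Illustrative correlated-learning update (invented numbers, not the
# dissertation's estimates): ability in two fields is jointly normal.
rho = 0.8
prior_mean = np.zeros(2)                    # [engineering, humanities]
prior_cov = np.array([[1.0, rho],
                      [rho, 1.0]])

def posterior_after_grade(field, grade, noise_var, mean, cov):
    """Update beliefs about both abilities after one noisy grade.

    The grade is modeled as ability[field] + N(0, noise_var); because
    abilities are correlated, the update also moves the other field.
    """
    h = np.zeros(2)
    h[field] = 1.0                          # grade loads on one field only
    s = h @ cov @ h + noise_var             # predictive variance of the grade
    k = cov @ h / s                         # Kalman-style gain
    new_mean = mean + k * (grade - h @ mean)
    new_cov = cov - np.outer(k, h @ cov)
    return new_mean, new_cov

# A precise engineering grade versus a noisy humanities grade.
_, cov_eng = posterior_after_grade(0, 1.2, noise_var=0.1,
                                   mean=prior_mean, cov=prior_cov)
_, cov_hum = posterior_after_grade(1, 1.2, noise_var=1.5,
                                   mean=prior_mean, cov=prior_cov)

# Remaining uncertainty about HUMANITIES ability under each signal: the
# precise cross-field grade leaves less (0.42 vs. 0.60), the spillover effect.
print(cov_eng[1, 1], cov_hum[1, 1])
```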

In the second chapter, I use data from the University of Central Arkansas to understand how universities decide which courses to offer and how much to spend on instructors for these courses. Course offerings and instructor characteristics directly affect the courses students choose and the value they receive from these choices. This chapter recovers the university preferences over these student outcomes that best explain observed course offerings and instructors. This allows me to assess whether university incentives are aligned with those of students, to determine which alternative university choices students would prefer, and to illustrate how a revenue-neutral tax/subsidy policy can induce a university to make these student-best decisions.
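The alignment logic lends itself to a toy example (all payoffs invented, not the chapter's estimates): when the university's payoff ranking disagrees with the students' welfare ranking, taxing the university-preferred option while subsidizing the student-preferred one by the same amount can flip the university's choice.

```python
# Toy tax/subsidy illustration (invented payoffs, not the chapter's
# estimates). The university ranks two course slates by its own payoff,
# while students prefer the other slate; a tax on slate A paired with an
# equal-sized subsidy on slate B flips the university's choice. Treating
# the tax receipts as funding the subsidy is schematic shorthand for the
# revenue-neutral policy described in the chapter.

slates = {
    "A": {"university_payoff": 100.0, "student_welfare": 60.0},
    "B": {"university_payoff": 90.0,  "student_welfare": 95.0},
}

def university_choice(transfer):
    """Slate maximizing university payoff plus the policy transfer."""
    return max(slates, key=lambda s: slates[s]["university_payoff"] + transfer[s])

baseline = university_choice({"A": 0.0, "B": 0.0})      # -> "A"
reformed = university_choice({"A": -8.0, "B": +8.0})    # -> "B"

print(baseline, reformed)
print(slates[reformed]["student_welfare"] > slates[baseline]["student_welfare"])
```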

In the third chapter, co-authored with Thomas Ahn, Peter Arcidiacono, and Amy Hopson, we use data from the University of Kentucky to understand how instructors choose grading policies. In this chapter, we estimate an equilibrium model in which instructors choose grading policies and students choose courses and study effort given those policies. In this model, instructors set both a grading intercept and a return on ability and effort. This builds a rich link between the grading policy decisions of instructors and the course choices of students. We use estimates of this model to infer the instructor preference parameters that best explain the estimated grading policies. To illustrate the importance of these supply side decisions, we show that changing grading policies can substantially reduce the gender gap in STEM enrollment.
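A grading policy in this setup can therefore be summarized by two numbers per instructor. As a notation sketch, with symbols chosen here for illustration rather than taken from the chapter:

```latex
% Illustrative notation only: instructor j sets an intercept \alpha_j and
% a return \beta_j, and the grade of student i in course j depends on
% ability a_i and study effort e_{ij}.
g_{ij} = \alpha_j + \beta_j \,\bigl(a_i + e_{ij}\bigr) + \varepsilon_{ij}
% A high \alpha_j with a low \beta_j is lenient but flat grading; a low
% \alpha_j with a high \beta_j rewards ability and effort steeply, which
% shifts both who enrolls and how hard enrolled students study.
```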

Relevance: 10.00%

Abstract:

Surveys can collect important data that inform policy decisions and drive social science research. Large government surveys collect information from the U.S. population on a wide range of topics, including demographics, education, employment, and lifestyle. Analysis of survey data presents unique challenges. In particular, one needs to account for missing data, for complex sampling designs, and for measurement error. Conceptually, a survey organization could devote extensive resources to obtaining high-quality responses from a simple random sample, resulting in survey data that are easy to analyze. However, this scenario is often unrealistic. To address these practical issues, survey organizations can leverage information available from other data sources. For example, in longitudinal studies that suffer from attrition, they can use information from refreshment samples to correct for potential attrition bias. They can use information from known marginal distributions or from the survey design to improve inferences. They can use information from gold standard sources to correct for measurement error.

This thesis presents novel approaches to combining information from multiple sources that address the three problems described above.

The first method addresses nonignorable unit nonresponse and attrition in a panel survey with a refreshment sample. Panel surveys typically suffer from attrition, which can lead to biased inference when analyses are based only on cases that complete all waves of the panel. Unfortunately, the panel data alone cannot inform the extent of the bias due to attrition, so analysts must make strong and untestable assumptions about the missing data mechanism. Many panel studies also include refreshment samples, which are data collected from a random sample of new individuals during some later wave of the panel. Refreshment samples offer information that can be used to correct for biases induced by nonignorable attrition while reducing reliance on strong assumptions about the attrition process. To date, these bias correction methods have not dealt with two key practical issues in panel studies: unit nonresponse in the initial wave of the panel and in the refreshment sample itself. As we illustrate, nonignorable unit nonresponse can significantly compromise the analyst's ability to use the refreshment samples for attrition bias correction. Thus, it is crucial for analysts to assess how sensitive their inferences, corrected for panel attrition, are to different assumptions about the nature of the unit nonresponse. We present an approach that facilitates such sensitivity analyses, both for suspected nonignorable unit nonresponse in the initial wave and in the refreshment sample. We illustrate the approach using simulation studies and an analysis of data from the 2007-2008 Associated Press/Yahoo News election panel study.
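A stripped-down simulation conveys why the refreshment sample helps; the data-generating process below is invented for illustration and is unrelated to the AP/Yahoo study:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Two correlated waves of an outcome (illustrative, not the panel's data).
y1 = rng.normal(size=n)
y2 = 0.7 * y1 + rng.normal(scale=0.7, size=n)

# Nonignorable attrition: the dropout probability depends on the wave-2
# value itself, so wave-2 completers are a selected subset.
stay = rng.random(n) < 1.0 / (1.0 + np.exp(-(0.5 + 1.0 * y2)))

complete_case_mean = y2[stay].mean()      # pulled upward by selection

# Refreshment sample: fresh random draws observed only at wave 2.
m = 5_000
refresh = 0.7 * rng.normal(size=m) + rng.normal(scale=0.7, size=m)
refresh_mean = refresh.mean()             # close to the true E[y2] = 0

print(complete_case_mean, refresh_mean)
```

The gap between the two estimates reflects information the panel alone cannot provide; it is this information that the bias corrections use to discipline assumptions about the attrition process.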

The second method incorporates informative prior beliefs about marginal probabilities into Bayesian latent class models for categorical data. The basic idea is to append synthetic observations to the original data such that (i) the empirical distributions of the desired margins match those of the prior beliefs, and (ii) the values of the remaining variables are left missing. The degree of prior uncertainty is controlled by the number of augmented records. Posterior inferences can be obtained via typical MCMC algorithms for latent class models, tailored to deal efficiently with the missing values in the concatenated data.
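A small sketch of the augmentation step, with hypothetical variable names and a made-up prior margin:

```python
import numpy as np
import pandas as pd

# Hypothetical example: the analyst believes P(employed = 1) = 0.6 and
# wants that margin reflected when fitting a latent class model to
# `survey` (three categorical variables; values are toy data).
survey = pd.DataFrame({
    "employed": [1, 0, 1, 1, 0],
    "educ":     [2, 1, 3, 2, 1],
    "marital":  [0, 1, 1, 0, 0],
})

n_aug = 100                  # more augmented records = tighter prior
n_ones = int(0.6 * n_aug)    # synthetic margin matches the prior belief
aug = pd.DataFrame({
    "employed": np.repeat([1, 0], [n_ones, n_aug - n_ones]),
    "educ":     [pd.NA] * n_aug,      # all other variables left missing
    "marital":  [pd.NA] * n_aug,
})

augmented = pd.concat([survey, aug], ignore_index=True)
# `augmented` is then passed to the usual MCMC sampler for latent class
# models, which treats the NA entries as ordinary missing data.
print(augmented["employed"].tail(n_aug).mean())   # 0.6, as specified
```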

We illustrate the approach using a variety of simulations based on data from the American Community Survey, including an example of how augmented records can be used to fit latent class models to data from stratified samples.

The third method leverages the information from a gold standard survey to model reporting error. Survey data are subject to reporting error when respondents misunderstand the question or accidentally select the wrong response. Sometimes survey respondents knowingly select the wrong response, for example, by reporting a higher level of education than they actually have attained. We present an approach that allows an analyst to model reporting error by incorporating information from a gold standard survey. The analyst can specify various reporting error models and assess how sensitive their conclusions are to different assumptions about the reporting error process. We illustrate the approach using simulations based on data from the 1993 National Survey of College Graduates. We use the method to impute error-corrected educational attainments in the 2010 American Community Survey using the 2010 National Survey of College Graduates as the gold standard survey.
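As a back-of-envelope illustration of how a gold standard source identifies reporting error, consider a toy misclassification correction; the matrix and margin below are invented, and the thesis's actual approach is model-based imputation rather than this direct inversion:

```python
import numpy as np

# Toy reporting-error correction (invented rates, not NSCG/ACS estimates).
# Categories: 0 = no degree, 1 = bachelor's, 2 = advanced degree. A gold
# standard survey lets one estimate P(reported = r | true = t); here some
# respondents over-report their education, as described in the text.
M = np.array([  # rows: true category t, columns: reported category r
    [0.90, 0.08, 0.02],
    [0.00, 0.93, 0.07],
    [0.00, 0.02, 0.98],
])

reported = np.array([0.28, 0.47, 0.25])   # margin in the error-prone survey

# reported = true @ M, so solve the transposed system for the true margin.
true_margin = np.linalg.solve(M.T, reported)
print(true_margin.round(3))   # [0.311, 0.474, 0.215]: mass shifts back down
```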