956 results for Models for count data
Abstract:
Linear programming (LP) is the most widely used optimization technique for solving real-life problems because of its simplicity and efficiency. Although conventional LP models require precise data, managers and decision makers dealing with real-world optimization problems often do not have access to exact values. Fuzzy sets have been used in fuzzy LP (FLP) problems to deal with imprecise data in the decision variables, the objective function and/or the constraints. The imprecision in an FLP problem may be related to (1) the decision variables; (2) the coefficients of the decision variables in the objective function; (3) the coefficients of the decision variables in the constraints; (4) the right-hand side of the constraints; or (5) all of these parameters. In this paper, we develop a new stepwise FLP model where fuzzy numbers are considered for the coefficients of the decision variables in the objective function, the coefficients of the decision variables in the constraints, and the right-hand side of the constraints. In the first step, we use the possibility and necessity relations for fuzzy constraints without considering the fuzzy objective function. In the subsequent step, we extend our method to the fuzzy objective function. We use two numerical examples from the FLP literature for comparison purposes and to demonstrate the applicability of the proposed method and the computational efficiency of its procedures and algorithms. © 2013 IOS Press and the authors. All rights reserved.
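To make the possibility and necessity relations concrete, the following is a minimal sketch (not the authors' stepwise procedure) of how a fuzzy constraint with a triangular fuzzy right-hand side (l, m, u) reduces to a crisp inequality at a chosen satisfaction level α; the objective, coefficients, and fuzzy numbers below are hypothetical.

```python
# A minimal sketch, assuming triangular fuzzy right-hand sides only (the paper
# also treats fuzzy objective and constraint coefficients). Requires scipy.
import numpy as np
from scipy.optimize import linprog

def crisp_rhs(l, m, u, alpha, relation="possibility"):
    """Crisp surrogate for a fuzzy constraint a@x <= (l, m, u).

    Pos(a@x <= b~) >= alpha  iff  a@x <= u - alpha * (u - m)
    Nec(a@x <= b~) >= alpha  iff  a@x <= l + (1 - alpha) * (m - l)
    """
    if relation == "possibility":
        return u - alpha * (u - m)
    return l + (1.0 - alpha) * (m - l)

# maximize 3*x1 + 2*x2 (linprog minimizes, so the objective is negated)
c = np.array([-3.0, -2.0])
A = np.array([[1.0, 1.0], [2.0, 1.0]])
fuzzy_b = [(3.0, 4.0, 5.0), (5.0, 6.0, 8.0)]   # triangular RHS (l, m, u)

alpha = 0.7
b = np.array([crisp_rhs(l, m, u, alpha) for (l, m, u) in fuzzy_b])
res = linprog(c, A_ub=A, b_ub=b, bounds=[(0, None)] * 2)
print(res.x, -res.fun)   # optimum of the alpha-level crisp surrogate
```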
Abstract:
We argue that, for certain constrained domains, elaborate model transformation technologies, implemented from scratch in general-purpose programming languages, are unnecessary for model-driven engineering; instead, lightweight configuration of commercial off-the-shelf productivity tools suffices. In particular, in the CancerGrid project, we have been developing model-driven techniques for the generation of software tools to support clinical trials. A domain metamodel captures the community's best practice in trial design. A scientist authors a trial protocol, modelling their trial by instantiating the metamodel; customized software artifacts to support trial execution are generated automatically from the scientist's model. The metamodel is expressed as an XML Schema, in such a way that it can be instantiated by completing a form to generate a conformant XML document. The same process works at a second level for trial execution: among the artifacts generated from the protocol are models of the data to be collected, and the clinician conducting the trial instantiates such models in reporting observations, again by completing a form to create a conformant XML document representing the data gathered during that observation. Simple standard form management tools are all that is needed. Our approach is applicable to a wide variety of information-modelling domains: not just clinical trials, but also electronic public sector computing, customer relationship management, document workflow, and so on. © 2012 Springer-Verlag.
Abstract:
In this paper, we present syllable-based duration modelling in the context of a prosody model for Standard Yorùbá (SY) text-to-speech (TTS) synthesis applications. Our prosody model is conceptualised around a modular holistic framework. This framework is implemented using the Relational Tree (R-Tree) technique. An important feature of our R-Tree framework is its flexibility: it facilitates the independent implementation of the different dimensions of prosody, i.e. duration, intonation, and intensity, using different techniques, and their subsequent integration. We applied the Fuzzy Decision Tree (FDT) technique to model the duration dimension. In order to evaluate the effectiveness of FDT in duration modelling, we also developed a Classification And Regression Tree (CART) based duration model using the same speech data. Each of these models was integrated into our R-Tree based prosody model. We performed both quantitative (i.e. Root Mean Square Error (RMSE) and Correlation (Corr)) and qualitative (i.e. intelligibility and naturalness) evaluations on the two duration models. The results show that CART models the training data more accurately than FDT. The FDT model, however, shows a better ability to extrapolate from the training data, since it achieved better accuracy on the test data set. Our qualitative evaluation results show that our FDT model produces synthesised speech that is perceived to be more natural than that of our CART model. In addition, we observed that the expressiveness of FDT is much better than that of CART, because the representation in FDT is not restricted to a set of piecewise constant or discrete approximations. We therefore conclude that FDT is a practical approach for duration modelling in SY TTS applications. © 2006 Elsevier Ltd. All rights reserved.
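As a hedged illustration of the CART baseline described above (the features and durations below are synthetic placeholders, not the SY speech data), a regression tree can be fit and scored with the same RMSE and correlation metrics:

```python
# A sketch of a CART duration model: fit a regression tree to syllable feature
# vectors and evaluate RMSE and Pearson correlation on held-out data.
# Synthetic data; feature semantics are hypothetical.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 6))        # stand-ins for phonetic/prosodic features
y = 80 + 30 * X[:, 0] - 10 * X[:, 1] + rng.normal(scale=5, size=500)  # ms

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
cart = DecisionTreeRegressor(min_samples_leaf=10).fit(X_tr, y_tr)

pred = cart.predict(X_te)
rmse = float(np.sqrt(np.mean((pred - y_te) ** 2)))
corr = float(np.corrcoef(pred, y_te)[0, 1])
print(f"RMSE = {rmse:.2f} ms, Corr = {corr:.3f}")
```

A tree predicts piecewise constant durations, which is exactly the representational limit the abstract contrasts with FDT.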
Abstract:
Heterogeneous datasets arise naturally in most applications due to the use of a variety of sensors and measuring platforms. Such datasets can be heterogeneous in terms of the error characteristics and sensor models. Treating such data is most naturally accomplished using a Bayesian or model-based geostatistical approach; however, such methods generally scale rather badly with the size of the dataset, and require computationally expensive Monte Carlo based inference. Recently, within the machine learning and spatial statistics communities, many papers have explored the potential of reduced rank representations of the covariance matrix, often referred to as projected or fixed rank approaches. In such methods the covariance function of the posterior process is represented by a reduced rank approximation which is chosen such that there is minimal information loss. In this paper a sequential Bayesian framework for inference in such projected processes is presented. The observations are considered one at a time, which avoids the need for the high dimensional integrals typically required in a Bayesian approach. A C++ library, gptk, which is part of the INTAMAP web service, is introduced; it implements projected sequential estimation and adds several novel features. In particular the library includes the ability to use a generic observation operator, or sensor model, to permit data fusion. It is also possible to cope with a range of observation error characteristics, including non-Gaussian observation errors. Inference for the covariance parameters is explored, including the impact of the projected process approximation on likelihood profiles. We illustrate the projected sequential method in application to synthetic and real datasets. Limitations and extensions are discussed. © 2010 Elsevier Ltd.
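A minimal numpy sketch of the reduced-rank ("projected") posterior is given below, in subset-of-regressors form with m inducing points; the kernel, data, and noise level are illustrative, and this is not the gptk API (gptk itself is C++).

```python
# A reduced-rank GP sketch: the posterior is represented through m inducing
# points, so the dominant cost is O(n m^2) rather than O(n^3). Illustrative
# data; a sequential variant would absorb observations one at a time.
import numpy as np

def rbf(a, b, ls=0.5):
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ls**2)

rng = np.random.default_rng(1)
x = rng.uniform(0, 5, 200)
y = np.sin(2 * x) + rng.normal(scale=0.2, size=200)
u = np.linspace(0, 5, 15)           # inducing inputs (rank-15 approximation)
xs = np.linspace(0, 5, 100)         # prediction inputs
noise = 0.2 ** 2

Kuf, Kuu = rbf(u, x), rbf(u, u)
A = Kuu + Kuf @ Kuf.T / noise       # m x m system
L = np.linalg.cholesky(A + 1e-8 * np.eye(len(u)))
w = np.linalg.solve(L.T, np.linalg.solve(L, Kuf @ y / noise))

mean = rbf(xs, u) @ w               # projected-process posterior mean
print(mean[:5])
```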
Abstract:
UncertWeb is a European research project running from 2010 to 2013 that will realize the uncertainty-enabled model web. The assumption is that data services, in order to be useful, need to provide information about the accuracy or uncertainty of the data in a machine-readable form. Models taking these data as input should understand this information and propagate errors through model computations, and quantify and communicate the errors or uncertainties generated by the model approximations. The project will develop technology to realize this and provide demonstration case studies.
Abstract:
Despite a long history of prevention efforts and federal laws prohibiting the consumption of alcohol by those below the age of 21 years, underage drinking continues at both a high prevalence rate and a high incidence rate. The purpose of this research study is to explain underage drinking of alcohol conditioned by perception of peer drinking. An acquisition model is conjectured, and a relationship within the model is then explained with a national sample of students. From a developmental perspective, drinking alcohol is acquired in a reasonably ordered fashion that reflects the influences over time of the culture, family, and peers. The study measures perceptions of alcohol drinking during early adolescence, when alcohol use begins the maintenance phase of the behavior. The correlation between drinking alcohol and perception of classmate drinking can be described via social learning theory. Simultaneously, the moderating effects of grade level, gender, and race/ethnicity are used to explain differences between groups. Multilevel logistic regression was used to analyze the relations. The researcher found support for an association between adolescent drinking and perceptions of classmate drinking. Gender and grade level moderated the relation. African-Americans consistently demonstrated less drinking and less perception of classmate drinking than either whites or students who were neither white nor African-American. The importance of a better understanding of the process of acquiring drinking behaviors is discussed in relation to future research models with longitudinal data.
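The moderation effects described above can be sketched as interaction terms in a logistic model; variable names and data below are hypothetical, and a full multilevel analysis would add random intercepts for classrooms or schools.

```python
# A hedged sketch of moderated logistic regression (single-level here; the
# study used multilevel logistic regression). Synthetic data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n = 2000
df = pd.DataFrame({
    "peer_perception": rng.integers(0, 5, n),   # perceived classmate drinking
    "gender": rng.choice(["F", "M"], n),
    "grade": rng.choice([6, 7, 8], n),
})
lin = -2 + 0.6 * df.peer_perception + 0.3 * (df.gender == "M")
df["drank_alcohol"] = rng.binomial(1, 1 / (1 + np.exp(-lin)))

m = smf.logit(
    "drank_alcohol ~ peer_perception * C(gender) + peer_perception * C(grade)",
    data=df,
).fit()
print(np.exp(m.params))   # odds ratios; the interactions capture moderation
```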
Abstract:
Traffic from major hurricane evacuations is known to cause severe gridlock on evacuation routes. Better prediction of the expected amount of evacuation traffic is needed to improve the decision-making process for the required evacuation routes and the possible deployment of special traffic operations, such as contraflow. The objective of this dissertation is to develop models to predict the number of daily trips and the evacuation distance during a hurricane evacuation.

Two data sets from surveys of evacuees from Hurricanes Katrina and Ivan were used in the models' development. The data sets included detailed information on the evacuees, including their evacuation days, evacuation distance, distance to the hurricane location, and their associated socioeconomic characteristics, including gender, age, race, household size, rental status, income, and education level.

Three prediction models were developed. The evacuation trip and rate models were developed using logistic regression. Together, they were used to predict the number of daily trips generated before hurricane landfall. These daily predictions allowed for more detailed planning than the traditional models, which predicted only the total number of trips generated by an entire evacuation. A third model attempted to predict the evacuation distance using Geographically Weighted Regression (GWR), which was able to account for the spatial variations found among the different evacuation areas in terms of the impacts of the model predictors. All three models were developed using the survey data set from Hurricane Katrina and then evaluated using the survey data set from Hurricane Ivan.

All of the models developed provided logical results. The logistic models showed that larger households with people under age six were more likely to evacuate than smaller households. The GWR-based evacuation distance model showed that the presence of children under age six in the household, income, and proximity of the household to the hurricane path all had an impact on the evacuation distances. While the models provided logical results, it is recognized that they were calibrated and evaluated with relatively limited survey data. The models can be refined with additional data from future hurricane surveys, including additional variables, such as the time of day of the evacuation.
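A sketch of the GWR distance model is below, assuming the mgwr package's GWR and Sel_BW interfaces; the coordinates, predictors, and response are synthetic stand-ins for the survey variables (income, children under age six, proximity to the hurricane path).

```python
# A hedged GWR sketch (assumes the mgwr package; not the dissertation's code).
# Local coefficients let each predictor's effect vary across evacuation areas.
import numpy as np
from mgwr.gwr import GWR
from mgwr.sel_bw import Sel_BW

rng = np.random.default_rng(3)
n = 300
coords = rng.uniform(0, 100, size=(n, 2))        # household locations
X = np.column_stack([
    rng.normal(50, 15, n),                       # income (thousands)
    rng.binomial(1, 0.3, n),                     # child under age six
    rng.uniform(5, 200, n),                      # distance to storm path (km)
])
y = (100 + 0.5 * X[:, 0] + 30 * X[:, 1] - 0.2 * X[:, 2]
     + rng.normal(scale=20, size=n)).reshape(-1, 1)   # evacuation distance

bw = Sel_BW(coords, y, X).search()               # bandwidth selection
results = GWR(coords, y, X, bw).fit()
print(results.params.shape)   # n x 4: local intercept + three local slopes
```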
Abstract:
This dissertation focused on the longitudinal analysis of business start-ups using three waves of data from the Kauffman Firm Survey.

The first essay used the data from years 2004-2008 and examined the simultaneous relationship between a firm's capital structure, human resource policies, and its impact on the level of innovation. Firm leverage was calculated as debt divided by total financial resources. An index of employee well-being was determined by a set of nine dichotomous questions asked in the survey. A negative binomial fixed effects model was used to analyze the effect of employee well-being and leverage on the count data of patents and copyrights, which were used as a proxy for innovation. The paper demonstrated that employee well-being positively affects the firm's innovation, while a higher leverage ratio had a negative impact on innovation. No significant relation was found between leverage and employee well-being.

The second essay used the data from years 2004-2009 and inquired whether a higher entrepreneurial speed of learning is desirable, and whether there is a linkage between the speed of learning and the growth rate of the firm. The change in the speed of learning was measured using a pooled OLS estimator in repeated cross-sections. There was evidence of a declining speed of learning over time, and it was concluded that a higher speed of learning is not necessarily a good thing, because speed of learning is contingent on the entrepreneur's initial knowledge and the precision of the signals he receives from the market. Also, there was no reason to expect speed of learning to be related to the growth of the firm in one direction over another.

The third essay used the data from years 2004-2010 and determined the timing of diversification activities by the business start-ups. It captured when a start-up diversified for the first time, and explored the association between an early diversification strategy adopted by a firm and its survival rate. A semi-parametric Cox proportional hazard model was used to examine the survival pattern. The results demonstrated that firms diversifying at an early stage in their lives show a higher survival rate; however, this effect fades over time.
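The count-data model in the first essay can be sketched as a negative binomial regression of patent/copyright counts on well-being and leverage; the data below are synthetic, and this cross-sectional sketch omits the essay's fixed effects.

```python
# A hedged sketch of a negative binomial model for innovation counts.
# Requires statsmodels; variable definitions mirror the abstract
# (well-being index from nine items, leverage = debt / total resources).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(4)
n = 1000
df = pd.DataFrame({
    "wellbeing": rng.uniform(0, 9, n),
    "leverage": rng.uniform(0, 1, n),
})
mu = np.exp(-0.5 + 0.15 * df.wellbeing - 0.8 * df.leverage)
df["patents"] = rng.negative_binomial(2, 2 / (2 + mu))   # overdispersed counts

m = smf.negativebinomial("patents ~ wellbeing + leverage", data=df).fit()
print(m.params)   # expect positive wellbeing and negative leverage effects
```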
Abstract:
Purpose: Early onset of sexual activity has been linked to later substance abuse. Our study aimed to further describe the associations between Latina mothers' and daughters' early sexual activity and adult substance abuse. Methods: A survey was conducted with 92 Latina mother–daughter dyads whose members had never experienced sexual abuse. Childhood sexual experience was defined as the occurrence of a consensual sexual encounter at the age of 15 years or younger. Substance abusers were identified by the extent of substance use during the 12 months prior to the interview. Path analysis was used to fit our conceptual models to the data. Main findings: Daughters' current, adult substance abuse was associated independently with their own childhood sexual experience (odds ratio [OR] = 6.0) and mothers' current, adult substance abuse (OR = 2.0). Compared with daughters who first experienced sex after the age of 19, the odds of using substances were 17.7 times higher among daughters who had childhood sexual experience and 3.8 times higher among daughters who first experienced sex between the ages of 16 and 19 years; that is, sexual experiences between the ages of 16 and 19 years were also risk factors for later adult substance abuse. Mothers' childhood sexual experience (OR = 7.3) was a strong predictor of daughters' childhood sexual experience. Conclusions: Our study supported a link between mother and daughter childhood sexual experience among Latinas, and indicated that it is a correlate of adult substance abuse. Family-based substance abuse prevention efforts and future longitudinal studies should consider maternal childhood sexual experience as a potential indication of risk for Latina daughters.
Abstract:
A high resolution study of the H(e,e'K+)Λ,Σ0 reaction was performed at Hall A, TJNAF, as part of the hypernuclear experiment E94-107. One important ingredient in the measurement of the hypernuclear cross section is the elementary cross section for production of the hyperons Λ and Σ0. This reaction was studied using a hydrogen (i.e. a proton) target. Data were taken at very low Q2 (∼0.07 (GeV/c)2) and W∼2.2 GeV. Kaons were detected along the direction of q, the momentum transferred by the incident electron (θCM∼6°). In addition, there are few data available regarding electroproduction of hyperons at low Q2 and θCM, and the available theoretical models differ significantly in this kinematical region of W. The measurement of the elementary cross section was performed by scaling the Monte Carlo cross section (MCEEP) with the experimental-to-simulated yield ratio. The Monte Carlo cross section includes an experimental fit and extrapolation from the existing data for electroproduction of hyperons. Moreover, the estimated transverse component of the electroproduction cross section of H(e,e'K+)Λ was compared to the different predictions of the theoretical models and existing data curves for photoproduction of hyperons. None of the models fully describe the cross-section results over the entire angular range. Furthermore, measurements of the Σ0/Λ production ratio were performed at θCM∼6°, where data are not available. Finally, data for the measurements of the differential cross sections and the Σ0/Λ production ratio were binned in Q2, W and θCM to understand the dependence on these variables. These results are not only a fundamental contribution to hypernuclear spectroscopy studies but also an important experimental measurement to constrain existing theoretical models of the elementary reaction.
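The yield-ratio scaling described above can be written compactly as follows (a restatement of the stated procedure, with bin indices in Q2, W, and θCM suppressed):

```latex
% Measured cross section = model cross section used in the MCEEP simulation,
% rescaled by the ratio of experimental to simulated yields in each bin.
\[
  \left(\frac{d\sigma}{d\Omega}\right)_{\mathrm{exp}}
  =
  \left(\frac{d\sigma}{d\Omega}\right)_{\mathrm{MC}}
  \times \frac{Y_{\mathrm{exp}}}{Y_{\mathrm{sim}}}
\]
```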
Abstract:
A compilation of chemical analyses of Pacific Ocean nodules using an x-ray fluorescence technique. The equipment used was a General Electric XRD-5 with a tungsten tube. Lithium fluoride was used as the diffraction element in assaying for all elements above calcium in the periodic table, and EDDT was used in conjunction with a helium path for all elements with an atomic number less than that of calcium. Flow counters were used in conjunction with a pulse height analyzer to eliminate x-ray lines of different but integral orders in gathering count data. The author found the stability of the equipment to be excellent. The equipment was calibrated using standard ores made from pure oxide forms of the elements in the nodules, carefully mixed in proportion to the amounts of these elements generally found in manganese nodules. Chemically analyzed standards of the nodules themselves were also used. As a final check, a known amount of the element in question was added to selected samples of the nodules, and careful counts were taken on these samples before and after the addition. The method involved the determination and subsequent use of absorption and activation factors for the lines of the various elements. All the absorption and activation factors were carefully determined using the standard ores. By these methods, the chemical analyses of the nodule samples were accurate to at least three significant figures.
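The spiking check described at the end corresponds to the standard-addition relation; the symbols below are illustrative and assume a linear count-rate response.

```latex
% c_x: unknown concentration; c_a: known added concentration;
% I and I': net count rates before and after the addition.
\[
  \frac{I}{I'} = \frac{c_x}{c_x + c_a}
  \quad\Longrightarrow\quad
  c_x = c_a \, \frac{I}{I' - I}
\]
```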
Abstract:
This research explores Bayesian updating as a tool for estimating parameters probabilistically by dynamic analysis of data sequences. Two distinct Bayesian updating methodologies are assessed. The first approach focuses on Bayesian updating of failure rates for primary events in fault trees. A Poisson Exponentially Weighted Moving Average (PEWMA) model is implemented to carry out Bayesian updating of failure rates for individual primary events in the fault tree. To provide a basis for testing of the PEWMA model, a fault tree is developed based on the Texas City Refinery incident which occurred in 2005. A qualitative fault tree analysis is then carried out to obtain a logical expression for the top event. A dynamic fault tree analysis is carried out by evaluating the top event probability at each Bayesian updating step by Monte Carlo sampling from posterior failure rate distributions. It is demonstrated that PEWMA modeling is advantageous over conventional conjugate Poisson-Gamma updating techniques when failure data are collected over long time spans. The second approach focuses on Bayesian updating of parameters in non-linear forward models. Specifically, the technique is applied to the hydrocarbon material balance equation. In order to test the accuracy of the implemented Bayesian updating models, a synthetic data set is developed using the Eclipse reservoir simulator. Both structured-grid and MCMC-sampling-based solution techniques are implemented and are shown to model the synthetic data set with good accuracy. Furthermore, a graphical analysis shows that the implemented MCMC model displays good convergence properties. A case study demonstrates that likelihood variance affects the rate at which the posterior assimilates information from the measured data sequence. Error in the measured data significantly affects the accuracy of the posterior parameter distributions. Increasing the likelihood variance mitigates random measurement errors, but causes the overall variance of the posterior to increase. Bayesian updating is shown to be advantageous over deterministic regression techniques as it allows for incorporation of prior belief and full modeling of uncertainty over the parameter ranges. As such, the Bayesian approach to estimation of parameters in the material balance equation shows utility for incorporation into reservoir engineering workflows.
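The contrast between PEWMA-style updating and conventional conjugate updating can be sketched with a discounted Gamma-Poisson filter: a discount factor ω < 1 down-weights old evidence so the failure rate can drift, while ω = 1 recovers ordinary Poisson-Gamma updating. This is a simplified stand-in for the full PEWMA state-space model.

```python
# A hedged sketch of discounted conjugate Gamma-Poisson updating of a failure
# rate (a simplification of PEWMA). Synthetic failure counts with a rate jump.
import numpy as np

def discounted_gamma_poisson(counts, exposures, a0=1.0, b0=1.0, omega=0.95):
    """Sequential Gamma(a, b) updates for a Poisson failure rate.

    omega = 1 gives standard conjugate updating; omega < 1 forgets old data.
    """
    a, b = a0, b0
    means = []
    for y, t in zip(counts, exposures):
        a, b = omega * a, omega * b     # discount past evidence
        a, b = a + y, b + t             # conjugate update with new data
        means.append(a / b)             # posterior mean failure rate
    return np.array(means)

rng = np.random.default_rng(5)
true_rate = np.concatenate([np.full(50, 0.2), np.full(50, 0.8)])  # rate jump
counts = rng.poisson(true_rate)         # one unit of exposure per period
est = discounted_gamma_poisson(counts, np.ones(100))
print(est[[40, 60, 99]])   # tracks the jump faster than omega = 1 would
```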
Abstract:
Many modern applications fall into the category of "large-scale" statistical problems, in which both the number of observations n and the number of features or parameters p may be large. Many existing methods focus on point estimation, despite the continued relevance of uncertainty quantification in the sciences, where the number of parameters to estimate often exceeds the sample size even given the huge increases in the value of n typically seen in many fields. Thus, the tendency in some areas of industry to dispense with traditional statistical analysis, on the basis that "n=all", is of little relevance outside of certain narrow applications. The main result of the Big Data revolution in most fields has instead been to make computation much harder without reducing the importance of uncertainty quantification. Bayesian methods excel at uncertainty quantification, but often scale poorly relative to alternatives. This conflict between the statistical advantages of Bayesian procedures and their substantial computational disadvantages is perhaps the greatest challenge facing modern Bayesian statistics, and is the primary motivation for the work presented here.
Two general strategies for scaling Bayesian inference are considered. The first is the development of methods that lend themselves to faster computation, and the second is the design and characterization of computational algorithms that scale better in n or p. In the first instance, the focus is on joint inference outside of the standard setting of multivariate continuous data that has been the major focus of previous theoretical work in this area. In the second, we pursue strategies for improving the speed of Markov chain Monte Carlo algorithms and for characterizing their performance in large-scale settings. Throughout, the focus is on rigorous theoretical evaluation combined with empirical demonstrations of performance and concordance with the theory.
One topic we consider is modeling the joint distribution of multivariate categorical data, often summarized in a contingency table. Contingency table analysis routinely relies on log-linear models, with latent structure analysis providing a common alternative. Latent structure models lead to a reduced rank tensor factorization of the probability mass function for multivariate categorical data, while log-linear models achieve dimensionality reduction through sparsity. Little is known about the relationship between these notions of dimensionality reduction in the two paradigms. In Chapter 2, we derive several results relating the support of a log-linear model to nonnegative ranks of the associated probability tensor. Motivated by these findings, we propose a new collapsed Tucker class of tensor decompositions, which bridge existing PARAFAC and Tucker decompositions, providing a more flexible framework for parsimoniously characterizing multivariate categorical data. Taking a Bayesian approach to inference, we illustrate empirical advantages of the new decompositions.
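For reference, the two factorizations being bridged can be written as follows (notation illustrative):

```latex
% PARAFAC: one latent index h shared by all variables.
% Tucker:  one latent index per variable, weighted by a core tensor.
\begin{align*}
  \text{PARAFAC:} \quad
    P(y_1,\dots,y_p) &= \sum_{h=1}^{k} \lambda_h
      \prod_{j=1}^{p} \psi^{(j)}_{h y_j}, \\
  \text{Tucker:} \quad
    P(y_1,\dots,y_p) &= \sum_{h_1=1}^{k_1} \cdots \sum_{h_p=1}^{k_p}
      \lambda_{h_1 \cdots h_p} \prod_{j=1}^{p} \psi^{(j)}_{h_j y_j}.
\end{align*}
```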
Latent class models for the joint distribution of multivariate categorical data, such as the PARAFAC decomposition, play an important role in the analysis of population structure. In this context, the number of latent classes is interpreted as the number of genetically distinct subpopulations of an organism, an important factor in the analysis of evolutionary processes and conservation status. Existing methods focus on point estimates of the number of subpopulations, and lack robust uncertainty quantification. Moreover, whether the number of latent classes in these models is even an identified parameter is an open question. In Chapter 3, we show that when the model is properly specified, the correct number of subpopulations can be recovered almost surely. We then propose an alternative method for estimating the number of latent subpopulations that provides good quantification of uncertainty, and provide a simple procedure for verifying that the proposed method is consistent for the number of subpopulations. The performance of the model in estimating the number of subpopulations and other common population structure inference problems is assessed in simulations and a real data application.
In contingency table analysis, sparse data are frequently encountered for even modest numbers of variables, resulting in non-existence of maximum likelihood estimates. A common solution is to obtain regularized estimates of the parameters of a log-linear model. Bayesian methods provide a coherent approach to regularization, but are often computationally intensive. Conjugate priors ease computational demands, but the conjugate Diaconis-Ylvisaker priors for the parameters of log-linear models do not give rise to closed-form credible regions, complicating posterior inference. In Chapter 4 we derive the optimal Gaussian approximation to the posterior for log-linear models with Diaconis-Ylvisaker priors, and provide convergence rate and finite-sample bounds for the Kullback-Leibler divergence between the exact posterior and the optimal Gaussian approximation. We demonstrate empirically in simulations and a real data application that the approximation is highly accurate, even in relatively small samples. The proposed approximation provides a computationally scalable and principled approach to regularized estimation and approximate Bayesian inference for log-linear models.
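One standard fact worth recording here (whether Chapter 4 uses exactly this construction is not stated above): for the inclusive divergence KL(p || q), the optimal Gaussian approximation to a posterior p is the moment-matching one.

```latex
\[
  q^{*} \;=\; \operatorname*{arg\,min}_{q = \mathcal{N}(\mu,\,\Sigma)}
              \mathrm{KL}\left(p \,\middle\|\, q\right)
  \quad\Longrightarrow\quad
  \mu = \mathbb{E}_{p}[\theta], \qquad
  \Sigma = \mathrm{Cov}_{p}(\theta).
\]
```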
Another challenging and somewhat non-standard joint modeling problem is inference on tail dependence in stochastic processes. In applications where extreme dependence is of interest, data are almost always time-indexed. Existing methods for inference and modeling in this setting often cluster extreme events or choose window sizes with the goal of preserving temporal information. In Chapter 5, we propose an alternative paradigm for inference on tail dependence in stochastic processes with arbitrary temporal dependence structure in the extremes, based on the idea that the information on strength of tail dependence and the temporal structure in this dependence are both encoded in waiting times between exceedances of high thresholds. We construct a class of time-indexed stochastic processes with tail dependence obtained by endowing the support points in de Haan's spectral representation of max-stable processes with velocities and lifetimes. We extend Smith's model to these max-stable velocity processes and obtain the distribution of waiting times between extreme events at multiple locations. Motivated by this result, a new definition of tail dependence is proposed that is a function of the distribution of waiting times between threshold exceedances, and an inferential framework is constructed for estimating the strength of extremal dependence and quantifying uncertainty in this paradigm. The method is applied to climatological, financial, and electrophysiology data.
The remainder of this thesis focuses on posterior computation by Markov chain Monte Carlo. The Markov chain Monte Carlo (MCMC) method is the dominant paradigm for posterior computation in Bayesian analysis. It has long been common to control computation time by making approximations to the Markov transition kernel. Comparatively little attention has been paid to convergence and estimation error in these approximating Markov chains. In Chapter 6, we propose a framework for assessing when to use approximations in MCMC algorithms, and how much error in the transition kernel should be tolerated to obtain optimal estimation performance with respect to a specified loss function and computational budget. The results require only ergodicity of the exact kernel and control of the kernel approximation accuracy. The theoretical framework is applied to approximations based on random subsets of data, low-rank approximations of Gaussian processes, and a novel approximating Markov chain for discrete mixture models.
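One of the kernel approximations mentioned, random data subsets, can be sketched as a Metropolis step that estimates the log-likelihood ratio on a minibatch; the noise perturbs the invariant distribution, which is precisely the kind of kernel error such a framework must control. The data and model below are synthetic.

```python
# A deliberately naive sketch of subsampled Metropolis-Hastings for the mean
# of a Gaussian with known variance and a flat prior. The minibatch estimate
# of the log-likelihood ratio makes each step cheap but approximate.
import numpy as np

rng = np.random.default_rng(6)
data = rng.normal(loc=1.5, scale=1.0, size=100_000)
N, m = len(data), 1_000                 # full data size vs minibatch size

def est_llr(theta_new, theta_old):
    """Minibatch estimate of the full-data log-likelihood ratio."""
    sub = rng.choice(data, size=m, replace=False)
    diff = -0.5 * (sub - theta_new) ** 2 + 0.5 * (sub - theta_old) ** 2
    return (N / m) * diff.sum()

theta, chain = 0.0, []
for _ in range(5_000):
    prop = theta + rng.normal(scale=0.05)
    if np.log(rng.uniform()) < est_llr(prop, theta):   # noisy MH accept test
        theta = prop
    chain.append(theta)
print(np.mean(chain[2_000:]))           # close to the true mean, 1.5
```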
Data augmentation Gibbs samplers are arguably the most popular class of algorithm for approximately sampling from the posterior distribution for the parameters of generalized linear models. The truncated Normal and Polya-Gamma data augmentation samplers are standard examples for probit and logit links, respectively. Motivated by an important problem in quantitative advertising, in Chapter 7 we consider the application of these algorithms to modeling rare events. We show that when the sample size is large but the observed number of successes is small, these data augmentation samplers mix very slowly, with a spectral gap that converges to zero at a rate at least proportional to the reciprocal of the square root of the sample size up to a log factor. In simulation studies, moderate sample sizes result in high autocorrelations and small effective sample sizes. Similar empirical results are observed for related data augmentation samplers for multinomial logit and probit models. When applied to a real quantitative advertising dataset, the data augmentation samplers mix very poorly. Conversely, Hamiltonian Monte Carlo and a type of independence chain Metropolis algorithm show good mixing on the same dataset.
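The mixing problems described can be quantified with the integrated autocorrelation time and effective sample size; the sketch below computes both for a deliberately slow AR(1) trace standing in for a poorly mixing data augmentation chain.

```python
# Estimate integrated autocorrelation time (tau) and effective sample size
# (ESS) from an MCMC trace. The AR(1) chain with coefficient 0.99 mimics the
# slow mixing described for rare-event data augmentation samplers.
import numpy as np

def effective_sample_size(x, max_lag=None):
    x = np.asarray(x, dtype=float) - np.mean(x)
    n = len(x)
    max_lag = max_lag or n // 3
    rhos = []
    for k in range(1, max_lag):
        rho = np.dot(x[:-k], x[k:]) / np.dot(x, x)
        if rho < 0.05:                  # truncate the small, noisy tail
            break
        rhos.append(rho)
    tau = 1.0 + 2.0 * sum(rhos)         # integrated autocorrelation time
    return n / tau

rng = np.random.default_rng(7)
chain = np.zeros(50_000)
for t in range(1, len(chain)):          # slowly mixing AR(1) trace
    chain[t] = 0.99 * chain[t - 1] + rng.normal()
print(f"ESS ~ {effective_sample_size(chain):.0f} of {len(chain)} draws")
```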
Abstract:
Surveys can collect important data that inform policy decisions and drive social science research. Large government surveys collect information from the U.S. population on a wide range of topics, including demographics, education, employment, and lifestyle. Analysis of survey data presents unique challenges. In particular, one needs to account for missing data, for complex sampling designs, and for measurement error. Conceptually, a survey organization could spend substantial resources getting high-quality responses from a simple random sample, resulting in survey data that are easy to analyze. However, this scenario often is not realistic. To address these practical issues, survey organizations can leverage the information available from other sources of data. For example, in longitudinal studies that suffer from attrition, they can use the information from refreshment samples to correct for potential attrition bias. They can use information from known marginal distributions or survey design to improve inferences. They can use information from gold standard sources to correct for measurement error.
This thesis presents novel approaches to combining information from multiple sources that address the three problems described above.
The first method addresses nonignorable unit nonresponse and attrition in a panel survey with a refreshment sample. Panel surveys typically suffer from attrition, which can lead to biased inference when basing analysis only on cases that complete all waves of the panel. Unfortunately, the panel data alone cannot inform the extent of the bias due to attrition, so analysts must make strong and untestable assumptions about the missing data mechanism. Many panel studies also include refreshment samples, which are data collected from a random sample of new individuals during some later wave of the panel. Refreshment samples offer information that can be utilized to correct for biases induced by nonignorable attrition while reducing reliance on strong assumptions about the attrition process. To date, these bias correction methods have not dealt with two key practical issues in panel studies: unit nonresponse in the initial wave of the panel and in the refreshment sample itself. As we illustrate, nonignorable unit nonresponse can significantly compromise the analyst's ability to use the refreshment samples for attrition bias correction. Thus, it is crucial for analysts to assess how sensitive their inferences, corrected for panel attrition, are to different assumptions about the nature of the unit nonresponse. We present an approach that facilitates such sensitivity analyses, both for suspected nonignorable unit nonresponse in the initial wave and in the refreshment sample. We illustrate the approach using simulation studies and an analysis of data from the 2007-2008 Associated Press/Yahoo News election panel study.
The second method incorporates informative prior beliefs about marginal probabilities into Bayesian latent class models for categorical data. The basic idea is to append synthetic observations to the original data such that (i) the empirical distributions of the desired margins match those of the prior beliefs, and (ii) the values of the remaining variables are left missing. The degree of prior uncertainty is controlled by the number of augmented records. Posterior inferences can be obtained via typical MCMC algorithms for latent class models, tailored to deal efficiently with the missing values in the concatenated data. We illustrate the approach using a variety of simulations based on data from the American Community Survey, including an example of how augmented records can be used to fit latent class models to data from stratified samples.
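A minimal sketch of the augmented-record construction is given below; the variables, margins, and counts are hypothetical, and in practice the concatenated data would be passed to a latent class MCMC sampler.

```python
# Append synthetic rows whose margin variable follows the prior marginal
# distribution, leaving all other variables missing; the number of appended
# rows controls the strength of the prior. Hypothetical data.
import numpy as np
import pandas as pd

rng = np.random.default_rng(8)
survey = pd.DataFrame({
    "education": rng.choice(["HS", "BA", "Grad"], 500, p=[0.50, 0.35, 0.15]),
    "employment": rng.choice(["employed", "not_employed"], 500),
})

prior_margin = {"HS": 0.40, "BA": 0.40, "Grad": 0.20}  # believed margin
n_aug = 200                            # more rows = stronger prior

augmented = pd.DataFrame({
    "education": rng.choice(list(prior_margin), n_aug,
                            p=list(prior_margin.values())),
    "employment": np.nan,              # remaining variables left missing
})
combined = pd.concat([survey, augmented], ignore_index=True)
print(combined["education"].value_counts(normalize=True))
```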
The third method leverages the information from a gold standard survey to model reporting error. Survey data are subject to reporting error when respondents misunderstand the question or accidentally select the wrong response. Sometimes survey respondents knowingly select the wrong response, for example, by reporting a higher level of education than they actually have attained. We present an approach that allows an analyst to model reporting error by incorporating information from a gold standard survey. The analyst can specify various reporting error models and assess how sensitive their conclusions are to different assumptions about the reporting error process. We illustrate the approach using simulations based on data from the 1993 National Survey of College Graduates. We use the method to impute error-corrected educational attainments in the 2010 American Community Survey using the 2010 National Survey of College Graduates as the gold standard survey.