3 results for Multiple decisions
at Duke University
Abstract:
Some luxury goods manufacturers offer limited editions of their products, whereas others market multiple product lines. Researchers have found that reference groups shape consumer evaluations of these product categories. Yet little empirical research has examined how reference groups affect the product line decisions of firms. Indeed, in a field setting it is quite a challenge to isolate reference group effects from contextual effects and correlated effects. In this paper, we propose a parsimonious model that allows us to study how reference groups influence firm behavior and that lends itself to experimental analysis. With the aid of the model we investigate the behavior of consumers in a laboratory setting where we can focus on the reference group effects after controlling for the contextual and correlated effects. The experimental results show that in the presence of strong reference group effects, limited editions and multiple products can help improve firms' profits. Furthermore, the trends in the purchase decisions of our participants suggest that they are capable of close to two steps of strategic thinking at the outset of the game and then learn through reinforcement mechanisms. © 2010 INFORMS.
Abstract:
BACKGROUND: Few educational resources have been developed to inform patients' renal replacement therapy (RRT) selection decisions. Patients progressing toward end-stage renal disease (ESRD) must decide among multiple treatment options with varying characteristics. Complex information about treatments must be adequately conveyed to patients with different educational backgrounds and informational needs. Decisions about treatment options also require family input, as families often participate in patients' treatment and support patients' decisions. We describe the development, design, and preliminary evaluation of an informational, evidence-based, and patient- and family-centered decision aid for patients with ESRD and varying levels of health literacy, health numeracy, and cognitive function. METHODS: We designed a decision aid comprising a complementary video and informational handbook. We based our development process on data previously obtained from qualitative focus groups and systematic literature reviews. We developed the video and handbook simultaneously in "stages." For the video, stages included (1) directed interviews with culturally appropriate patients and families and preliminary script development, (2) video production, and (3) screening the video with patients and their families. For the handbook, stages comprised (1) preliminary content design, (2) a mixed-methods pilot study among diverse patients to assess comprehension of handbook material, and (3) screening the handbook with patients and their families. RESULTS: The video and handbook both addressed potential benefits and trade-offs of treatment selections. The 50-minute video consisted of demographically diverse patients and their families describing their positive and negative experiences with selecting a treatment option. The video also incorporated health professionals' testimonials regarding various considerations that might influence patients' and families' treatment selections.
The handbook comprised written text, pictures of patients and health care providers, and diagrams describing the findings and quality of scientific studies comparing treatments. The handbook text was written at a 4th- to 6th-grade reading level. Pilot study results demonstrated that a majority of patients could understand information presented in the handbook. Patients and families who screened the nearly completed video and handbook reviewed the materials favorably. CONCLUSIONS: This rigorously designed decision aid may help patients and families make informed decisions about their treatment options for RRT that are well aligned with their values.
Abstract:
Surveys can collect important data that inform policy decisions and drive social science research. Large government surveys collect information from the U.S. population on a wide range of topics, including demographics, education, employment, and lifestyle. Analysis of survey data presents unique challenges. In particular, one needs to account for missing data, for complex sampling designs, and for measurement error. In principle, a survey organization could devote substantial resources to obtaining high-quality responses from a simple random sample, resulting in survey data that are easy to analyze. In practice, however, this scenario is rarely realistic. To address these practical issues, survey organizations can leverage the information available from other sources of data. For example, in longitudinal studies that suffer from attrition, they can use the information from refreshment samples to correct for potential attrition bias. They can use information from known marginal distributions or survey design to improve inferences. They can use information from gold standard sources to correct for measurement error.
This thesis presents novel approaches to combining information from multiple sources that address the three problems described above.
The first method addresses nonignorable unit nonresponse and attrition in a panel survey with a refreshment sample. Panel surveys typically suffer from attrition, which can lead to biased inference when basing analysis only on cases that complete all waves of the panel. Unfortunately, the panel data alone cannot inform the extent of the bias due to attrition, so analysts must make strong and untestable assumptions about the missing data mechanism. Many panel studies also include refreshment samples, which are data collected from a random sample of new
individuals during some later wave of the panel. Refreshment samples offer information that can be utilized to correct for biases induced by nonignorable attrition while reducing reliance on strong assumptions about the attrition process. To date, these bias correction methods have not dealt with two key practical issues in panel studies: unit nonresponse in the initial wave of the panel and in the
refreshment sample itself. As we illustrate, nonignorable unit nonresponse
can significantly compromise the analyst's ability to use the refreshment samples for attrition bias correction. It is therefore crucial for analysts to assess how sensitive their attrition-corrected inferences are to different assumptions about the nature of the unit nonresponse. We present an approach that facilitates such sensitivity analyses, both for suspected nonignorable unit nonresponse
in the initial wave and in the refreshment sample. We illustrate the approach using simulation studies and an analysis of data from the 2007-2008 Associated Press/Yahoo News election panel study.
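The information a refreshment sample carries can be seen in a small simulation. This is a hypothetical sketch, not the thesis's model: all variable names, sample sizes, and the logistic attrition rule are invented for illustration. When dropout depends on the unobserved wave-2 outcome, the complete-case wave-2 mean is biased, while a fresh random sample drawn at wave 2 recovers the true mean and thereby reveals the size of the bias.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Two-wave panel: y2 is the wave-2 outcome of interest (true mean 0).
y1 = rng.normal(0, 1, n)
y2 = 0.5 * y1 + rng.normal(0, 1, n)

# Nonignorable attrition: units with larger y2 are more likely to drop out,
# so staying depends on a value the analyst never observes for dropouts.
stay = rng.random(n) < 1 / (1 + np.exp(y2))

# The complete-case estimate of the wave-2 mean is biased downward.
cc_mean = y2[stay].mean()

# A refreshment sample (a fresh random draw from the same population at
# wave 2) has no attrition, so its mean exposes the attrition bias.
y1_ref = rng.normal(0, 1, 20_000)
y2_ref = 0.5 * y1_ref + rng.normal(0, 1, 20_000)
ref_mean = y2_ref.mean()

print(f"complete-case mean: {cc_mean:.3f}, refreshment mean: {ref_mean:.3f}")
```

The refreshment sample alone identifies the wave-2 margin; correcting individual-level inferences, as in the thesis, additionally requires a model for the attrition process.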
The second method incorporates informative prior beliefs about
marginal probabilities into Bayesian latent class models for categorical data.
The basic idea is to append synthetic observations to the original data such that
(i) the empirical distributions of the desired margins match those of the prior beliefs, and (ii) the values of the remaining variables are left missing. The degree of prior uncertainty is controlled by the number of augmented records. Posterior inferences can be obtained via typical MCMC algorithms for latent class models, tailored to deal efficiently with the missing values in the concatenated data.
We illustrate the approach using a variety of simulations based on data from the American Community Survey, including an example of how augmented records can be used to fit latent class models to data from stratified samples.
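The augmentation scheme above can be sketched concretely. The snippet below is a hypothetical illustration: the variables, categories, and prior margin are invented, not taken from the thesis or the American Community Survey. Synthetic records are appended whose margin for one variable matches the prior belief exactly, with every other variable left missing; the number of augmented records controls how informative the prior is.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)

# Original categorical survey data (hypothetical variables and categories).
data = pd.DataFrame({
    "educ": rng.choice(["HS", "BA", "Grad"], size=500, p=[0.5, 0.35, 0.15]),
    "employ": rng.choice(["yes", "no"], size=500, p=[0.7, 0.3]),
})

# Prior belief about the margin of `educ`, e.g. from published totals.
prior_margin = {"HS": 0.45, "BA": 0.40, "Grad": 0.15}
n_aug = 200  # more augmented records = a tighter (more informative) prior

# Synthetic records whose `educ` margin matches the prior exactly;
# every other variable is left missing, to be handled by the MCMC sampler.
counts = {k: round(v * n_aug) for k, v in prior_margin.items()}
synthetic = pd.DataFrame({
    "educ": np.repeat(list(counts), list(counts.values())),
    "employ": [pd.NA] * n_aug,
})

augmented = pd.concat([data, synthetic], ignore_index=True)
```

The latent class sampler then treats the appended records like any other records with item nonresponse, which is why no special-purpose algorithm is needed.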
The third method leverages the information from a gold standard survey to model reporting error. Survey data are subject to reporting error when respondents misunderstand the question or accidentally select the wrong response. Sometimes survey respondents knowingly select the wrong response, for example, by reporting a higher level of education than they have actually attained. We present an approach that allows an analyst to model reporting error by incorporating information from a gold standard survey. The analyst can specify various reporting error models and assess how sensitive the resulting conclusions are to different assumptions about the reporting error process. We illustrate the approach using simulations based on data from the 1993 National Survey of College Graduates. We use the method to impute error-corrected educational attainments in the 2010 American Community Survey, using the 2010 National Survey of College Graduates as the gold standard survey.
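A minimal sketch of the gold-standard idea, under a deliberately simplified, hypothetical reporting-error model (here, a fraction of "HS" respondents over-report "BA"; all category names and error rates are invented): the gold-standard survey, which observes both true and reported values, is used to estimate P(true | reported), and error-corrected values are then imputed in the target survey, which observes only reported values.

```python
import numpy as np

rng = np.random.default_rng(2)
levels = ["HS", "BA", "Grad"]
p_pop = [0.5, 0.35, 0.15]

def simulate(n):
    """True attainment plus a simple error: 10% of HS respondents report BA."""
    true = rng.choice(levels, size=n, p=p_pop)
    reported = np.where((true == "HS") & (rng.random(n) < 0.10), "BA", true)
    return true, reported

# Gold-standard survey: both values observed, so P(true | reported) is
# directly estimable from the cross-tabulation.
true_gs, rep_gs = simulate(50_000)
p_true_given_rep = {
    r: np.array([(true_gs[rep_gs == r] == t).mean() for t in levels])
    for r in levels
}

# Target survey: only reported values are available; impute corrected ones
# by drawing from the estimated conditional distribution.
_, rep_tgt = simulate(5_000)
imputed = np.array([rng.choice(levels, p=p_true_given_rep[r]) for r in rep_tgt])
```

In the thesis's setting the two surveys are distinct data sources rather than simulations from one mechanism, and the reporting-error model is a user-specified assumption whose variations drive the sensitivity analysis.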