31 results for discrete choice models
in CentAUR: Central Archive University of Reading - UK
Abstract:
We present a new Bayesian econometric specification for a hypothetical Discrete Choice Experiment (DCE) incorporating respondent ranking information about attribute importance. Our results indicate that a DCE debriefing question that asks respondents to rank the importance of attributes helps to explain the resulting choices. We also examine how the mode of survey delivery (online or mail) impacts model performance, finding that results are not substantively affected by the mode of survey delivery. We conclude that the ranking data are a complementary source of information about respondent utility functions within hypothetical DCEs.
Abstract:
The joint and alternative uses of attribute non-attendance and importance ranking data within discrete choice experiments are investigated using data from Lebanon examining consumers' preferences for safety certification in food. We find that both types of information, attribute non-attendance and importance rankings, improve estimates of respondent utility. We introduce a method of integrating both types of information simultaneously and find that it outperforms models in which either importance ranking or non-attendance data are used alone. As in previous studies, stated non-attendance of attributes was not found to be consistent with respondents having zero marginal utility for those attributes.
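The modelling idea behind these two abstracts can be illustrated with a small simulation: a conditional (multinomial) logit fitted to simulated choice data, in which stated attribute non-attendance could be handled by zeroing the corresponding utility weight via an `attend` mask. All data, attribute names, and parameter values below are hypothetical, not taken from either study.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

# Hypothetical DCE: 200 respondents x 8 tasks, 3 alternatives per task,
# two attributes (e.g. price and safety certification).
beta_true = np.array([-1.0, 1.5])
n_sets, n_alts = 200 * 8, 3
X = rng.normal(size=(n_sets, n_alts, 2))                # attribute levels
U = X @ beta_true + rng.gumbel(size=(n_sets, n_alts))   # random utility
y = U.argmax(axis=1)                                    # chosen alternative

def neg_loglik(beta, attend=np.array([1.0, 1.0])):
    """Conditional logit; `attend` zeroes the weights of non-attended attributes."""
    v = X @ (beta * attend)
    v -= v.max(axis=1, keepdims=True)                   # numerical stability
    logp = v - np.log(np.exp(v).sum(axis=1, keepdims=True))
    return -logp[np.arange(n_sets), y].sum()

fit = minimize(neg_loglik, x0=np.zeros(2), method="BFGS")
print(fit.x)   # estimates should be near beta_true
```

Setting an entry of `attend` to zero for a subset of respondents is one simple way to encode stated non-attendance; the papers' actual specifications are richer (Bayesian, with ranking information) than this sketch.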
Abstract:
Income growth in highly industrialised countries has resulted in consumer choice of foodstuffs no longer being primarily influenced by basic factors such as price and organoleptic features. From this perspective, the present study sets out to evaluate how and to what extent consumer choice is influenced by the possible negative effects on health and environment caused by the consumption of fruit containing residues of pesticides and chemical products. The study describes the results of a survey which explores and estimates consumer willingness to pay in two forms: a yearly contribution for the abolition of the use of pesticides on fruit, and a premium price for organically grown apples guaranteed by a certified label. The same questionnaire was administered to two samples. The first was a conventional face-to-face survey of customers of large retail outlets located around Bologna (Italy); the second was an Internet sample. The discrete choice data were analysed by means of probit and tobit models to estimate the utility consumers attribute to organically grown fruit and to a pesticide ban. The research also addresses questions of validity and representativeness as a fundamental problem in web-based surveys.
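The probit side of a willingness-to-pay analysis like this one can be sketched in miniature. The bid design, sample size, and WTP distribution below are invented for illustration; the estimator is a standard binary probit fitted by maximum likelihood, from which mean WTP is recovered as -a/b.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(1)

# Hypothetical survey: each respondent accepts or rejects a premium `bid`
# (EUR) for certified organic apples; latent WTP is normally distributed.
n = 1000
bid = rng.uniform(0.1, 3.0, n)
wtp = rng.normal(1.5, 0.8, n)
yes = (wtp >= bid).astype(float)

def neg_loglik(theta):
    """Binary probit: P(yes) = Phi(a + b * bid)."""
    a, b = theta
    z = a + b * bid
    return -(yes * norm.logcdf(z) + (1 - yes) * norm.logcdf(-z)).sum()

fit = minimize(neg_loglik, x0=np.array([0.0, 0.0]), method="BFGS")
a_hat, b_hat = fit.x
mean_wtp = -a_hat / b_hat        # mean of the latent WTP distribution
print(round(mean_wtp, 2))        # should be near the simulated mean of 1.5
```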
Abstract:
In this paper we employ a hypothetical discrete choice experiment (DCE) to examine how much consumers are willing to pay to use technology to customize their food shopping. We conjecture that customized information provision can aid in the composition of a healthier shop. Our results reveal that consumers are prepared to pay relatively more for individual-specific information as opposed to the generic nutritional information that is typically provided on food labels. In arriving at these results we have examined various model specifications, including those that make use of ex-post de-briefing questions on attribute non-attendance and attribute ranking information and those that consider the time taken to complete the survey. Our main results are robust to the various model specifications we examine.
Abstract:
We present a procedure for estimating two quantities defining the spatial externality in discrete-choice commonly referred to as 'the neighbourhood effect'. One quantity, the propensity for neighbours to make the same decision, reflects traditional preoccupations; the other quantity, the magnitude of the neighbourhood itself, is novel. Because both quantities have fundamental bearing on the magnitude of the spatial externality, it is desirable to have a robust algorithm for their estimation. Using recent advances in Bayesian estimation and model comparison, we devise such an algorithm and illustrate its application to a sample of northern-Filipino smallholders. We determine that a significant, positive, neighbourhood effect exists; that, among the 12 geographical units comprising the sample, the neighbourhood spans a three-unit radius; and that policy prescriptions are significantly altered when calculations account for the spatial externality.
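A greatly simplified, maximum-likelihood analogue of the radius-estimation idea can be sketched as follows (the paper's actual algorithm is Bayesian and more sophisticated). Candidate neighbourhood radii are scanned; for each, a probit of current choices on the share of earlier adopters within that radius is fitted, and the radius with the highest log-likelihood is kept. All data, the two-period structure, and all parameter values are invented for illustration.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(6)

# Hypothetical setting: 12 geographical units on a line; a household's
# current adoption choice responds to the share of earlier adopters
# within `d` units of distance. Radius used in the simulation: 3.
units = np.arange(12)
y0 = (np.sin(units / 2.0) + rng.normal(0, 0.3, 12) > 0).astype(float)

def share(d):
    """Earlier-adopter share among units within distance d (self excluded)."""
    W = np.abs(units[:, None] - units[None, :]) <= d
    np.fill_diagonal(W, False)
    return (W @ y0) / W.sum(axis=1)

unit_of = np.repeat(units, 60)                      # 60 households per unit
latent = -1.0 + 2.0 * share(3)[unit_of] + rng.normal(0, 1.0, unit_of.size)
y = (latent > 0).astype(float)

def fit_probit(s):
    nll = lambda th: -(y * norm.logcdf(th[0] + th[1] * s)
                       + (1 - y) * norm.logcdf(-(th[0] + th[1] * s))).sum()
    res = minimize(nll, np.zeros(2), method="BFGS")
    return res.x, -res.fun

# Scan candidate radii and keep the one with the highest log-likelihood
results = {d: fit_probit(share(d)[unit_of]) for d in range(1, 6)}
best_d = max(results, key=lambda d: results[d][1])
print(best_d, results[best_d][0][1] > 0)   # chosen radius; positive effect?
```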
Abstract:
We investigate the factors precipitating market entry where smallholders make decisions about participation (a discrete choice about whether to sell quantities of products) and supply (a continuous-valued choice about how much quantity to sell) in a cross-section of smallholders in Northern Luzon, Philippines. The model combines basic probit and Tobit ideas, is implemented using Bayesian methods, and generates precise estimates of the inputs required to effect entry among non-participants. We estimate the total amounts of livestock input (cattle, buffalo, pig and chicken) required to effect entry and compare and contrast the alternative input requirements. To the extent that our smallholder sample may be representative of a broader set of circumstances, our findings shed light on the offsetting impacts of conflicting factors that complicate the roles for policy in the context of expanding the density of participation.
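A stripped-down frequentist version of the participation/supply combination is a type-I Tobit, where zero sales identify non-participants and the entry threshold in livestock input is read off the latent index. The data, coefficients, and single-covariate structure below are invented; the paper's Bayesian model is richer.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(2)

# Hypothetical smallholder data: livestock holdings drive quantity sold;
# sales are censored at zero, which identifies non-participants.
n = 1500
livestock = rng.gamma(2.0, 1.0, n)
latent = -1.0 + 0.8 * livestock + rng.normal(0, 1.0, n)
sold = np.maximum(latent, 0.0)

def neg_loglik(theta):
    """Type-I Tobit: probit term for censored zeros, normal density otherwise."""
    b0, b1, log_s = theta
    s = np.exp(log_s)
    mu = b0 + b1 * livestock
    ll = np.where(sold == 0.0,
                  norm.logcdf(-mu / s),       # P(latent <= 0)
                  norm.logpdf(sold, mu, s))   # density of observed sales
    return -ll.sum()

fit = minimize(neg_loglik, x0=np.array([0.0, 0.0, 0.0]), method="BFGS")
b0_hat, b1_hat = fit.x[:2]
entry = -b0_hat / b1_hat    # livestock input at which the latent index reaches zero
print(round(entry, 2))      # should be near the simulated threshold 1.0/0.8 = 1.25
```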
Abstract:
We conduct the first empirical economic investigation of the decision to cheat by university students. We investigate student demand for essays, using hypothetical discrete choice experiments in conjunction with consequential Holt-Laury gambles to derive subjects' risk preferences. Students' stated willingness to participate in the essay market, and their valuation of purchased essays, vary with the characteristics of the student and the institutional environment. Risk-preferring students, those working in a non-native language, and those believing they will attain a lower grade are willing to pay more. Purchase likelihoods and essay valuations decline as the probability of detection and the associated penalty increase.
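The Holt-Laury instrument mentioned here infers risk preferences from where a subject switches from the safe to the risky lottery across a menu of ten probability rows. The payoffs below are the standard Holt-Laury values; the CRRA utility is one common parameterisation, used here only to show how a switch row maps to a risk coefficient.

```python
import numpy as np

# Standard Holt-Laury menu: in row k, both lotteries pay their high
# outcome with probability k/10. Option A (safe): 2.00 or 1.60;
# Option B (risky): 3.85 or 0.10 (payoffs in dollars).
def switch_row(r):
    """First row at which a CRRA agent, u(x) = x**(1-r)/(1-r), prefers B."""
    u = (lambda x: np.log(x)) if r == 1 else (lambda x: x ** (1 - r) / (1 - r))
    for k in range(1, 11):
        p = k / 10
        eu_a = p * u(2.00) + (1 - p) * u(1.60)
        eu_b = p * u(3.85) + (1 - p) * u(0.10)
        if eu_b > eu_a:
            return k
    return None   # never switches: extremely risk averse

print(switch_row(0.0))   # risk neutral: switches at row 5
print(switch_row(0.7))   # risk averse: switches later, at row 8
```

Inverting this mapping, an observed switch row places the subject's risk coefficient in an interval, which is how the gambles "derive subjects' risk preferences".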
Abstract:
The Routh-stability method is employed to reduce the order of discrete-time system transfer functions. It is shown that the Routh approximant is well suited to reduce both the denominator and the numerator polynomials, although alternative methods, such as the Padé-Markov approximation, are also used to fit the model numerator coefficients.
Abstract:
With the increasing pressure on crop production from the evolution of herbicide resistance, farmers are increasingly adopting Integrated Weed Management (IWM) strategies to augment their weed control. These include measures to increase the competitiveness of the crop canopy such as increased sowing rate and the use of more competitive cultivars. While there are data on the relative impact of these non-chemical weed control methods assessed in isolation, there is uncertainty about their combined contribution, which may be hindering their adoption. In this article, the INTERCOM simulation model of crop / weed competition was used to examine the combined impact of crop density, sowing date and cultivar choice on the outcomes of competition between wheat (Triticum aestivum) and Alopecurus myosuroides. Alopecurus myosuroides is a problematic weed of cereal crops in North-Western Europe and the primary target for IWM in the UK because it has evolved resistance to a range of herbicides. The model was parameterised for two cultivars with contrasting competitive ability, and simulations run across 10 years at different crop densities and two sowing dates. The results suggest that sowing date, sowing density and cultivar choice largely work in a complementary fashion, allowing enhanced competitive ability against weeds when used in combination. However, the relative benefit of choosing a more competitive cultivar decreases at later sowing dates and higher crop densities. Modelling approaches could be further employed to examine the effectiveness of IWM, reducing the need for more expensive and cumbersome long-term in situ experimentation.
Abstract:
The uptake and storage of anthropogenic carbon in the North Atlantic is investigated using different configurations of ocean general circulation/carbon cycle models. We investigate how different representations of the ocean physics in the models, which represent the range of models currently in use, affect the evolution of CO2 uptake in the North Atlantic. The buffer effect of the ocean carbon system would be expected to reduce ocean CO2 uptake as the ocean absorbs increasing amounts of CO2. We find that the strength of the buffer effect is very dependent on the model ocean state, as it affects both the magnitude and timing of the changes in uptake. The timescale over which uptake of CO2 in the North Atlantic drops to below preindustrial levels is particularly sensitive to the ocean state which sets the degree of buffering; it is less sensitive to the choice of atmospheric CO2 forcing scenario. Neglecting physical climate change effects, North Atlantic CO2 uptake drops below preindustrial levels between 50 and 300 years after stabilisation of atmospheric CO2 in different model configurations. Storage of anthropogenic carbon in the North Atlantic varies much less among the different model configurations, as differences in ocean transport of dissolved inorganic carbon and uptake of CO2 compensate each other. This supports the idea that measured inventories of anthropogenic carbon in the real ocean cannot be used to constrain the surface uptake. Including physical climate change effects reduces anthropogenic CO2 uptake and storage in the North Atlantic further, due to the combined effects of surface warming, increased freshwater input, and a slowdown of the meridional overturning circulation. The timescale over which North Atlantic CO2 uptake drops to below preindustrial levels is reduced by about one-third, leading to an estimate of this timescale for the real world of about 50 years after the stabilisation of atmospheric CO2. 
In the climate change experiment, a shallowing of the mixed layer depths in the North Atlantic results in a significant reduction in primary production, reducing the potential role for biology in drawing down anthropogenic CO2.
Abstract:
Improvements in the resolution of satellite imagery have enabled extraction of water surface elevations at the margins of a flood. Comparison between modelled and observed water surface elevations provides a new means of calibrating and validating flood inundation models; however, the uncertainty in these observed data has yet to be addressed. Here a flood inundation model is calibrated using a probabilistic treatment of the observed data. A LiDAR-guided snake algorithm is used to determine an outline of a flood event in 2006 on the River Dee, North Wales, UK, using a 12.5m ERS-1 image. Points at approximately 100m intervals along this outline are selected, and the water surface elevation is recorded as the LiDAR DEM elevation at each point. Using a planar water surface from the gauged upstream to downstream water elevations as an approximation, the water surface elevations at points along the flooded extent are compared to their 'expected' values. The pattern of errors between the two shows a roughly normal distribution; however, when plotted against coordinates there is obvious spatial autocorrelation. The source of this spatial dependency is investigated by comparing errors to the slope gradient and aspect of the LiDAR DEM. A LISFLOOD-FP model of the flood event is set up to investigate the effect of observed-data uncertainty on the calibration of flood inundation models. Multiple simulations are run using different combinations of friction parameters, from which the optimum parameter set is selected. For each simulation a t-test is used to quantify the fit between modelled and observed water surface elevations. The points used in this t-test are selected based on their error, and the selection criteria enable evaluation of the sensitivity of the choice of optimum parameter set to uncertainty in the observed data. This work explores the observed data in detail and highlights possible causes of error.
The identification of significant error (RMSE = 0.8m) between approximate expected and actual observed elevations from the remotely sensed data emphasises the limitations of using this data in a deterministic manner within the calibration process. These limitations are addressed by developing a new probabilistic approach to using the observed data.
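The error comparison at the heart of this calibration step can be sketched with invented numbers: compute the RMSE between modelled and observed elevations and apply a one-sample t-test to the errors. The elevations, bias, and noise levels below are hypothetical, not the River Dee data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)

# Hypothetical water surface elevations (m) at points along the observed
# flood outline, with an invented model bias of 0.1 m for illustration.
observed = 10.0 + 0.5 * rng.standard_normal(40)
modelled = observed + rng.normal(0.1, 0.3, 40)

errors = modelled - observed
rmse = np.sqrt(np.mean(errors ** 2))

# One-sample t-test: is the mean model error significantly non-zero?
t_stat, p_value = stats.ttest_1samp(errors, 0.0)
print(round(rmse, 2), round(p_value, 3))
```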
Abstract:
The MarQUEST (Marine Biogeochemistry and Ecosystem Modelling Initiative in QUEST) project was established to develop improved descriptions of marine biogeochemistry, suited to the next generation of Earth system models. We review progress in these areas, providing insight into the advances that have been made and identifying the key outstanding gaps in the development of the marine component of next-generation Earth system models. The following issues are discussed and, where appropriate, results are presented: the choice of model structure; scaling processes from physiology to functional types; the sensitivity of ecosystem models to changes in the physical environment; the role of the coastal ocean; and new methods for the evaluation and comparison of ecosystem and biogeochemistry models. We make recommendations as to where future investment in marine ecosystem modelling should be focused, highlighting a generic software framework for model development, improved hydrodynamic models, better parameterisation of new and existing models, reanalysis tools, and ensemble simulations. The final challenge is to ensure that experimental/observational scientists are stakeholders in the models and vice versa.
Abstract:
Reform of agricultural policies, notably the continuing elimination of production-enhancing subsidies, makes it possible for policies to respond to social issues such as the rural environment and health in future. In this paper, we draw on a Rural Economy and Land Use (RELU) research project which is examining the potential for the development of healthy food chains and the implications for human health and the environment. One of the key issues to be addressed is consumers' willingness to pay for the nutritionally enhanced food products from these new chains, but it is evident that only a partial understanding can be gained from a traditional economics approach. In the paper, we discuss how economists are beginning to incorporate views from other disciplines into their models of consumer choice.
Abstract:
Multiscale modeling is emerging as one of the key challenges in mathematical biology. However, the recent rapid increase in the number of modeling methodologies being used to describe cell populations has raised a number of interesting questions. For example, at the cellular scale, how can the appropriate discrete cell-level model be identified in a given context? Additionally, how can the many phenomenological assumptions used in the derivation of models at the continuum scale be related to individual cell behavior? In order to begin to address such questions, we consider a discrete one-dimensional cell-based model in which cells are assumed to interact via linear springs. From the discrete equations of motion, the continuous Rouse [P. E. Rouse, J. Chem. Phys. 21, 1272 (1953)] model is obtained. This formalism readily allows the definition of a cell number density for which a nonlinear "fast" diffusion equation is derived. Excellent agreement is demonstrated between the continuum and discrete models. Subsequently, via the incorporation of cell division, we demonstrate that the derived nonlinear diffusion model is robust to the inclusion of more realistic biological detail. In the limit of stiff springs, where cells can be considered to be incompressible, we show that cell velocity can be directly related to cell production. This assumption is frequently made in the literature but our derivation places limits on its validity. Finally, the model is compared with a model of a similar form recently derived for a different discrete cell-based model and it is shown how the different diffusion coefficients can be understood in terms of the underlying assumptions about cell behavior in the respective discrete models.
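The discrete model underlying this derivation can be illustrated with a minimal simulation, not the paper's code: cells on a line joined by linear springs, evolved with overdamped dynamics until the chain relaxes. The cell count, spring constant, and time step are arbitrary choices for the sketch.

```python
import numpy as np

# Sketch of the underlying discrete model: N cells on a line joined by
# linear springs, with overdamped dynamics
# eta * dx_i/dt = k * (x_{i+1} - 2 x_i + x_{i-1}) and pinned end cells.
N, k, eta, dt, steps = 50, 1.0, 1.0, 0.1, 20000
x = np.sort(np.random.default_rng(4).uniform(0.0, 1.0, N))
x[0], x[-1] = 0.0, 1.0                       # pinned boundary cells

for _ in range(steps):
    force = np.zeros(N)
    force[1:-1] = k * (x[2:] - 2.0 * x[1:-1] + x[:-2])
    x += (dt / eta) * force                  # explicit Euler step

# The chain relaxes to uniform spacing, i.e. constant cell number
# density, the steady state of the derived nonlinear diffusion model.
spacing = np.diff(x)
print(spacing.std())                         # close to zero when relaxed
```

The discrete equations of motion are exactly the spring forces stepped here; coarse-graining them over the cell index is what yields the continuum "fast" diffusion equation discussed in the abstract.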
Abstract:
With the current concern over climate change, descriptions of how rainfall patterns are changing over time can be useful. Observations of daily rainfall data over the last few decades provide information on these trends. Generalized linear models are typically used to model patterns in the occurrence and intensity of rainfall. These models describe rainfall patterns for an average year but are more limited when describing long-term trends, particularly when these are potentially non-linear. Generalized additive models (GAMs) provide a framework for modelling non-linear relationships by fitting smooth functions to the data. This paper describes how GAMs can extend the flexibility of models to describe seasonal patterns and long-term trends in the occurrence and intensity of daily rainfall, using data from Mauritius from 1962 to 2001. Smoothed estimates from the models provide useful graphical descriptions of changing rainfall patterns over the last 40 years at this location. GAMs are particularly helpful when exploring non-linear relationships in the data. Care is needed to ensure the choice of smooth functions is appropriate for the data and the modelling objectives.
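The idea of separating a seasonal cycle from a long-term trend in daily rainfall occurrence can be sketched with simulated data. A low-order Fourier basis plus a linear trend term stands in here, GAM-style, for the smooth functions the paper fits; the rainfall probabilities, trend size, and 40-year span are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(5)

# Hypothetical daily rainfall occurrence over 40 years: a smooth seasonal
# cycle plus a slow linear trend in the probability of a wet day.
t = np.arange(40 * 365) / 365.0                       # time in years
p_rain = np.clip(0.3 + 0.2 * np.sin(2 * np.pi * t) + 0.002 * t, 0.0, 1.0)
rain = (rng.random(t.size) < p_rain).astype(float)    # wet-day indicator

B = np.column_stack([
    np.ones_like(t), t,                               # intercept and trend
    np.sin(2 * np.pi * t), np.cos(2 * np.pi * t),     # annual cycle
    np.sin(4 * np.pi * t), np.cos(4 * np.pi * t),     # semi-annual cycle
])
coef, *_ = np.linalg.lstsq(B, rain, rcond=None)
fitted = B @ coef
print(round(coef[1], 4))   # recovered trend in occurrence probability per year
```

A full GAM would use penalised splines and, for a binary response, a logistic link rather than this linear-probability shortcut; the sketch only shows how a smooth seasonal pattern and a slow trend can be estimated jointly.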