83 results for "Stated preference methods"
Abstract:
In experiments with two-person sequential games we analyze whether responses to favorable and unfavorable actions depend on the elicitation procedure. In our hot treatment the second player responds to the first player's observed action, while in our cold treatment we follow the strategy method and have the second player decide on a contingent action for each and every possible first-player move, without first observing this move. Our analysis centers on the degree to which subjects deviate from the maximization of their pecuniary rewards as a response to others' actions. Our results show no difference in behavior between the two treatments. We also find evidence of the stability of subjects' preferences with respect to their behavior over time and to the consistency of their choices as first and second mover.
Abstract:
Opinion polls are widely used to capture public sentiment on a variety of issues. If citizens are unwilling to reveal certain policy preferences to others, opinion polls may fail to characterize population preferences accurately. The innovation of this paper is to use unique data to measure biases in opinion polls for a broad range of policies. I combine data on 184 referenda held in Switzerland between 1987 and 2007 with post-ballot surveys that ask, for each proposal, how the citizens voted. The difference between stated preferences in the survey and revealed preferences at the ballot box provides a direct measure of bias in opinion polls. I find that these biases vary by policy area, with the largest ones occurring in policies on immigration, international integration, and votes involving liberal/conservative attitudes. Citizens also show a tendency to respond in accordance with the majority.
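The bias measure described here reduces to simple arithmetic: for each policy, subtract the revealed approval share at the ballot box from the stated approval share in the survey. A minimal sketch with invented numbers (the policy areas and shares below are hypothetical, not the paper's data):

```python
# Hypothetical stated (post-ballot survey) and revealed (referendum result)
# approval shares by policy area; all figures are invented for illustration.
stated = {"immigration": 0.48, "int_integration": 0.55, "taxation": 0.51}
revealed = {"immigration": 0.41, "int_integration": 0.49, "taxation": 0.50}

# Bias = stated share minus revealed share, per policy area.
bias = {area: stated[area] - revealed[area] for area in stated}

# List policy areas from largest to smallest absolute bias.
for area, b in sorted(bias.items(), key=lambda kv: -abs(kv[1])):
    print(f"{area}: bias = {b:+.2f}")
```

A positive bias would mean respondents overstate support in the survey relative to how they actually voted.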
Abstract:
In many areas of economics there is a growing interest in how expertise and preferences drive individual and group decision making under uncertainty. Increasingly, we wish to estimate such models to quantify which of these drive decision making. In this paper we propose a new channel through which we can empirically identify expertise and preference parameters by using variation in decisions over heterogeneous priors. Relative to existing estimation approaches, our "Prior-Based Identification" extends the possible environments which can be estimated, and also substantially improves the accuracy and precision of estimates in those environments which can be estimated using existing methods.
Abstract:
The paper contrasts empirically the results of alternative methods for estimating the value and the depreciation of mineral resources. The historical data of Mexico and Venezuela, covering the 1920s to the 1980s, are used to contrast the results of several methods: the present value, the net price method, the user cost method, and the imputed income method. The paper establishes that the net price and the user cost are not competing methods as such, but alternative adjustments to different scenarios of closed and open economies. The results show that the biases of the methods, as commonly described in the theoretical literature, only hold under the most restricted scenario of constant rents over time. It is argued that the difference between what is expected to happen and what actually did happen is for the most part due to a missing variable, namely technological change. This is an important caveat to the recommendations made based on these models.
Abstract:
Consider the problem of testing k hypotheses simultaneously. In this paper, we discuss finite- and large-sample theory of stepdown methods that provide control of the familywise error rate (FWE). In order to improve upon the Bonferroni method or Holm's (1979) stepdown method, Westfall and Young (1993) make effective use of resampling to construct stepdown methods that implicitly estimate the dependence structure of the test statistics. However, their methods depend on an assumption called subset pivotality. The goal of this paper is to construct general stepdown methods that do not require such an assumption. In order to accomplish this, we take a close look at what makes stepdown procedures work; a key component is a monotonicity requirement on critical values. By imposing such monotonicity on estimated critical values (which is not an assumption on the model but an assumption on the method), it is demonstrated that the problem of constructing a valid multiple test procedure which controls the FWE can be reduced to the problem of constructing a single test which controls the usual probability of a Type 1 error. This reduction allows us to draw upon an enormous resampling literature as a general means of test construction.
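For context, the classical stepdown method the abstract builds on can be sketched in a few lines. This is Holm's (1979) procedure, not the resampling-based method the paper itself proposes: sort the p-values, and at step i (0-indexed) compare the i-th smallest p-value with alpha/(k - i), stopping at the first failure.

```python
def holm_stepdown(pvals, alpha=0.05):
    """Holm's (1979) stepdown procedure for FWE control.

    Returns a list of booleans: rejected[i] is True if hypothesis i is rejected.
    """
    k = len(pvals)
    # Indices of hypotheses ordered by increasing p-value.
    order = sorted(range(k), key=lambda i: pvals[i])
    rejected = [False] * k
    for step, idx in enumerate(order):
        if pvals[idx] <= alpha / (k - step):
            rejected[idx] = True
        else:
            break  # monotonicity: once one hypothesis survives, all larger p-values survive
    return rejected

print(holm_stepdown([0.001, 0.04, 0.03, 0.2]))  # → [True, False, False, False]
```

With k = 4 and alpha = 0.05, the smallest p-value 0.001 is compared with 0.05/4 = 0.0125 and rejected, but the next one, 0.03, exceeds 0.05/3, so the procedure stops there.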
Abstract:
Many multivariate methods that are apparently distinct can be linked by introducing one or more parameters in their definition. Methods that can be linked in this way are correspondence analysis, unweighted or weighted logratio analysis (the latter also known as "spectral mapping"), nonsymmetric correspondence analysis, principal component analysis (with and without logarithmic transformation of the data), and multidimensional scaling. In this presentation I will show how several of these methods, which are frequently used in compositional data analysis, may be linked through parametrizations such as power transformations, linear transformations, and convex linear combinations. Since the methods of interest here all lead to visual maps of data, a "movie" can be made where the linking parameter is allowed to vary in small steps: the results are recalculated "frame by frame" and one can see the smooth change from one method to another. Several of these "movies" will be shown, giving a deeper insight into the similarities and differences between these methods.
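One concrete instance of such a linking power transformation (chosen here for illustration, following the well-known Box-Cox family rather than the talk's exact definitions) moves smoothly from the identity shift at a = 1 toward the logarithm as a → 0, which is the kind of path that connects "linear" methods with logratio analysis:

```python
import numpy as np

def box_cox(x, a):
    """Box-Cox power transform: (x**a - 1)/a for a != 0, log(x) in the limit a -> 0."""
    return np.log(x) if a == 0 else (x**a - 1.0) / a

x = np.array([0.5, 1.0, 2.0, 4.0])  # illustrative positive data values
# "Frames" of the movie: step the linking parameter a toward 0.
for a in (1.0, 0.5, 0.1, 0.0):
    print(a, np.round(box_cox(x, a), 4))
```

Recomputing a map at each value of a and displaying the frames in sequence is the "movie" idea described in the abstract.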
Abstract:
We develop a general error analysis framework for the Monte Carlo simulation of densities for functionals in Wiener space. We also study variance reduction methods with the help of Malliavin derivatives. For this, we give some general heuristic principles which are applied to diffusion processes. A comparison with kernel density estimates is made.
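As a point of reference, the kernel density benchmark mentioned in the last sentence can be sketched as follows; the diffusion (geometric Brownian motion simulated by an Euler scheme) and all parameters are illustrative choices, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
n_paths, n_steps, T = 20_000, 100, 1.0
mu, sigma, x0 = 0.05, 0.2, 1.0
dt = T / n_steps

# Euler scheme for dX = mu*X dt + sigma*X dW, simulated path by path.
x = np.full(n_paths, x0)
for _ in range(n_steps):
    dw = rng.normal(0.0, np.sqrt(dt), n_paths)
    x += mu * x * dt + sigma * x * dw

def kde(points, grid, h):
    """Gaussian kernel density estimate of the sampled X_T at each grid point."""
    u = (grid[:, None] - points[None, :]) / h
    return np.exp(-0.5 * u**2).mean(axis=1) / (h * np.sqrt(2 * np.pi))

grid = np.linspace(0.5, 2.0, 5)
print(np.round(kde(x, grid, h=0.05), 3))
```

The paper's point is that Malliavin-calculus representations can estimate such densities without the kernel bandwidth h, which is what this naive estimator is compared against.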
Abstract:
It is shown that preferences can be constructed from observed choice behavior in a way that is robust to indifferent selection (i.e., the agent is indifferent between two alternatives but, nevertheless, is only observed selecting one of them). More precisely, a suggestion by Savage (1954) to reveal indifferent selection by considering small monetary perturbations of alternatives is formalized and generalized to a purely topological framework: preferences over an arbitrary topological space can be uniquely derived from observed behavior under the assumptions that they are continuous and nonsatiated and that a strictly preferred alternative is always chosen; indifferent selection is then characterized by discontinuity in choice behavior. Two particular cases are then analyzed: monotonic preferences over a partially ordered set, and preferences representable by a continuous pseudo-utility function.
Abstract:
Two concentration methods for fast and routine determination of caffeine (using HPLC-UV detection) in surface water and wastewater are evaluated. Both methods are based on solid-phase extraction (SPE) concentration with octadecyl silica sorbents. A common "offline" SPE procedure shows that quantitative recovery of caffeine is obtained with 2 mL of a methanol-water elution mixture containing at least 60% methanol. The method detection limit is 0.1 μg L−1 when percolating 1 L samples through the cartridge. The development of an "online" SPE method based on a mini-SPE column, containing 100 mg of the same sorbent, directly connected to the HPLC system allows the method detection limit to be decreased to 10 ng L−1 with a sample volume of 100 mL. The "offline" SPE method is applied to the analysis of caffeine in wastewater samples, whereas the "online" method is used for analysis in natural waters from streams receiving significant water intakes from local wastewater treatment plants.
Abstract:
Within the scope of the European project Hydroptimet (INTERREG IIIB-MEDOCC programme), a limited area model (LAM) intercomparison of intense events that caused severe damage to people and territory is performed. As the comparison is limited to single case studies, the work is not meant to provide a measure of the different models' skill, but to identify the key model factors useful for giving a good forecast of this kind of meteorological phenomenon. This work focuses on the Spanish flash-flood event also known as the "Montserrat-2000" event. The study is performed using forecast data from seven operational LAMs, placed at partners' disposal via the Hydroptimet ftp site, and observed data from the Catalonia rain gauge network. To improve the event analysis, satellite rainfall estimates have also been considered. For statistical evaluation of quantitative precipitation forecasts (QPFs), several non-parametric skill scores based on contingency tables have been used. Furthermore, for each model run it has been possible to identify the Catalonia regions affected by misses and false alarms using contingency table elements. Moreover, the standard "eyeball" analysis of forecast and observed precipitation fields has been supported by the use of a state-of-the-art diagnostic method, the contiguous rain area (CRA) analysis. This method makes it possible to quantify the spatial shift in forecast error and to identify the error sources that affected each model forecast. High-resolution modelling and domain size seem to play a key role in providing a skillful forecast. Further work, including verification using a wider observational data set, is needed to support this statement.
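The contingency-table skill scores referred to here are standard verification measures computed from a 2x2 table of hits (a), false alarms (b), misses (c), and correct negatives (d). A minimal sketch with invented counts:

```python
def skill_scores(a, b, c, d):
    """Standard skill scores from a 2x2 contingency table:
    a = hits, b = false alarms, c = misses, d = correct negatives."""
    n = a + b + c + d
    pod = a / (a + c)                  # probability of detection
    far = b / (a + b)                  # false alarm ratio
    a_random = (a + b) * (a + c) / n   # hits expected by chance
    ets = (a - a_random) / (a + b + c - a_random)  # equitable threat score
    return pod, far, ets

# Invented counts for one model run at one rainfall threshold.
pod, far, ets = skill_scores(a=42, b=18, c=8, d=132)
print(f"POD={pod:.2f}  FAR={far:.2f}  ETS={ets:.2f}")
```

A perfect forecast would give POD = 1, FAR = 0, and ETS = 1, while ETS near 0 indicates no skill beyond chance.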
Abstract:
Usual image fusion methods inject features from a high spatial resolution panchromatic sensor into every low spatial resolution multispectral band, trying to preserve spectral signatures and improve spatial resolution to that of the panchromatic sensor. The objective is to obtain the image that would be observed by a sensor with the same spectral response (i.e., spectral sensitivity and quantum efficiency) as the multispectral sensors and the spatial resolution of the panchromatic sensor. But in these methods, features from electromagnetic spectrum regions not covered by the multispectral sensors are injected into them, and the physical spectral responses of the sensors are not considered during this process. This produces some undesirable effects, such as over-injection of resolution and slightly modified spectral signatures in some features. The authors present a technique which takes into account the physical electromagnetic spectrum responses of the sensors during the fusion process, producing images closer to the image obtained by the ideal sensor than those obtained by usual wavelet-based image fusion methods. This technique is used to define a new wavelet-based image fusion method.
Abstract:
This paper explores the possibility of using data from social bookmarking services to measure the use of information by academic researchers. Social bookmarking data can be used to augment participative methods (e.g. interviews and surveys) and other, non-participative methods (e.g. citation analysis and transaction logs) to measure the use of scholarly information. We use BibSonomy, a free resource-sharing system, as a case study. Results show that published journal articles are by far the most popular type of source bookmarked, followed by conference proceedings and books. Commercial journal publisher platforms are the most popular type of information resource bookmarked, followed by websites, records in databases, and digital repositories. Usage of open access information resources is low in comparison with toll access journals. In the case of open access repositories, there is a marked preference for subject-based repositories over institutional repositories. The results are consistent with those observed in related studies based on surveys and citation analysis, confirming the possible use of bookmarking data in studies of information behaviour in academic settings. The main advantages of using social bookmarking data are that it is an unobtrusive approach, it captures the reading habits of researchers who are not necessarily authors, and the data are readily available. The main limitation is that a significant amount of human effort is required to clean and standardize the data.
Abstract:
We argue that preferences for secession are the expression of a common unobserved mechanism determining national identity. This paper examines the hypothesis of independence of both preferences for secession (an independent Euskadi) and Basque national identity in the light of Akerlof and Kranton (2000). We deal with the psychological determinants of individuals' national identity formation as well as those that influence the propensity of individuals to support the secession of their perceived "imagined community" or nation. We undertake econometric survey analysis for the Basque Country using a bivariate probit model and publicly available data from the Spanish Centre for Sociological Research. Our results provide robust evidence of a common determination of national identity and political preferences for the secession of the Basque Country, consistent with the Akerlof and Kranton model.
Abstract:
The influence of hole-hole (h-h) propagation, in addition to the conventional particle-particle (p-p) propagation, on the energy per particle and the momentum distribution is investigated for the v2 central interaction, which is derived from Reid's soft-core potential. The results are compared to Brueckner-Hartree-Fock calculations with a continuous choice for the single-particle (SP) spectrum. Calculation of the energy from a self-consistently determined SP spectrum leads to a lower saturation density. This result is not corroborated by calculating the energy from the hole spectral function, which is, however, not self-consistent. A generalization of previous calculations of the momentum distribution, based on a Goldstone diagram expansion, is introduced that allows the inclusion of h-h contributions to all orders. From this result an alternative calculation of the kinetic energy is obtained. In addition, a direct calculation of the potential energy is presented, obtained from a solution of the ladder equation containing p-p and h-h propagation to all orders. These results can be considered as the contributions of selected Goldstone diagrams (including p-p and h-h terms on the same footing) to the kinetic and potential energy, in which the SP energy is given by the quasiparticle energy. The summation of Goldstone diagrams leads to a different momentum distribution than the one obtained from integrating the hole spectral function, which in general gives less depletion of the Fermi sea. Various arguments, based partly on the results obtained, are put forward that a self-consistent determination of the spectral functions including the p-p and h-h ladder contributions (using a realistic interaction) will shed light on the question of nuclear saturation at a nonrelativistic level that is consistent with the observed depletion of SP orbitals in finite nuclei.