926 results for Econometric methods of discrete choice
Abstract:
Many multivariate methods that are apparently distinct can be linked by introducing one or more parameters in their definition. Methods that can be linked in this way are correspondence analysis, unweighted or weighted logratio analysis (the latter also known as "spectral mapping"), nonsymmetric correspondence analysis, principal component analysis (with and without logarithmic transformation of the data) and multidimensional scaling. In this presentation I will show how several of these methods, which are frequently used in compositional data analysis, may be linked through parametrizations such as power transformations, linear transformations and convex linear combinations. Since the methods of interest here all lead to visual maps of data, a "movie" can be made where the linking parameter is allowed to vary in small steps: the results are recalculated "frame by frame" and one can see the smooth change from one method to another. Several of these "movies" will be shown, giving a deeper insight into the similarities and differences between these methods.
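A minimal sketch of such a parametrized linking, assuming a Box-Cox-style power transform (p^α − 1)/α that tends to log p as α → 0, so that the same double-centered SVD map moves smoothly from a PCA-like view of the raw profiles toward unweighted logratio analysis; the data matrix, the α grid and the centering choices below are illustrative, not the presenter's exact formulation.

```python
import numpy as np

def power_map(X, alpha, ndim=2):
    """2-D map of a positive data matrix after a power transform.
    alpha -> 0 recovers the log transform of (unweighted) logratio analysis;
    alpha = 1 stays close to an analysis of the raw row profiles."""
    P = X / X.sum(axis=1, keepdims=True)              # close rows to proportions
    Z = np.log(P) if alpha == 0 else (P ** alpha - 1.0) / alpha
    Z = Z - Z.mean(axis=1, keepdims=True)             # double-centering: rows ...
    Z = Z - Z.mean(axis=0, keepdims=True)             # ... and columns
    U, s, _ = np.linalg.svd(Z, full_matrices=False)
    return U[:, :ndim] * s[:ndim]                     # row coordinates of the map

# "movie": recompute the map frame by frame as the linking parameter varies
rng = np.random.default_rng(0)
X = rng.gamma(shape=2.0, size=(30, 5))                # toy positive data standing in for compositions
frames = [power_map(X, a) for a in np.linspace(1.0, 0.0, 21)]
```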
Abstract:
Delayed-choice experiments in quantum mechanics are often taken to undermine a realistic interpretation of the quantum state. More specifically, Healey has recently argued that the phenomenon of delayed-choice entanglement swapping is incompatible with the view that entanglement is a physical relation between quantum systems. This paper argues against these claims. It first reviews two paradigmatic delayed-choice experiments and analyzes their metaphysical implications. It then applies the results of this analysis to the case of entanglement swapping, showing that such experiments pose no threat to realism about entanglement.
Abstract:
The network choice revenue management problem models customers as choosing from an offer-set, and the firm decides the best subset to offer at any given moment to maximize expected revenue. The resulting dynamic program for the firm is intractable and is approximated by a deterministic linear program called the CDLP, which has an exponential number of columns. However, under the choice-set paradigm, when the segment consideration sets overlap, the CDLP is difficult to solve. Column generation has been proposed, but finding an entering column has been shown to be NP-hard. In this paper, starting with a concave program formulation based on segment-level consideration sets called SDCP, we add a class of constraints called product constraints that project onto subsets of intersections. In addition we propose a natural direct tightening of the SDCP called ?SDCP, and compare the performance of both methods on the benchmark data sets in the literature. Both the product constraints and the ?SDCP method are very simple and easy to implement and are applicable to the case of overlapping segment consideration sets. In our computational testing on the benchmark data sets in the literature, SDCP with product constraints achieves the CDLP value at a fraction of the CPU time taken by column generation, and we believe it is a very promising approach for quickly approximating CDLP when segment consideration sets overlap and the consideration sets themselves are relatively small.
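A minimal sketch of the CDLP itself, with a handful of explicitly enumerated offer sets; the revenue rates R, consumption rates Q, capacities c and horizon T are hypothetical, and the point of the abstract is precisely that in realistic networks the columns cannot be enumerated like this.

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical inputs: 3 candidate offer sets, 2 resources, horizon of length T.
T = 100.0                                    # total selling horizon
R = np.array([5.0, 4.2, 3.1])                # expected revenue per unit time of each offer set
Q = np.array([[0.40, 0.30, 0.15],            # expected use of resource 1 per unit time
              [0.25, 0.35, 0.10]])           # expected use of resource 2 per unit time
c = np.array([30.0, 25.0])                   # resource capacities

# CDLP: choose how long to offer each set to maximize revenue, subject to
# resource capacities and the time budget (columns enumerated explicitly here).
A_ub = np.vstack([Q, np.ones((1, R.size))])
b_ub = np.concatenate([c, [T]])
res = linprog(c=-R, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * R.size)
print("offer-set durations:", res.x, " CDLP value:", -res.fun)
```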
Abstract:
Despite the importance of supplier inducement and brand loyalty in the drug purchasing process, little empirical evidence is to be found with regard to the influence that these factors exert on patients' decisions. Under the new scenario of easier access to information, patients are becoming more demanding and even go as far as questioning their physician's prescription. Furthermore, new regulation also encourages patients to adopt an active role in the decision between brand-name and generic drugs. Using a stated preference model based on a choice survey, I have found evidence of how significant the physician's prescription and the pharmacist's recommendation become throughout the drug purchase process and to what extent brand loyalty influences the final decision. As far as we are aware, this paper is the first to explicitly take consumers' preferences into account rather than focusing on the behavior of health professionals.
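As an illustration of the kind of choice model involved, the sketch below fits a plain binary logit for the brand-versus-generic decision on simulated stated-preference data; the covariates (physician's prescription, pharmacist's recommendation, relative price) and coefficients are hypothetical and this is not the author's exact specification.

```python
import numpy as np
import statsmodels.api as sm

# Hypothetical stated-preference responses: 1 = brand-name drug chosen, 0 = generic.
rng = np.random.default_rng(1)
n = 500
X = np.column_stack([
    rng.integers(0, 2, n).astype(float),   # physician prescribed the brand (0/1)
    rng.integers(0, 2, n).astype(float),   # pharmacist recommended the brand (0/1)
    rng.normal(size=n),                    # relative price of the brand (standardized)
])
eta = -0.2 + 0.8 * X[:, 0] + 0.5 * X[:, 1] - 0.6 * X[:, 2]
y = (rng.random(n) < 1.0 / (1.0 + np.exp(-eta))).astype(float)

fit = sm.Logit(y, sm.add_constant(X)).fit(disp=0)
print(fit.params)   # constant, prescription, recommendation and price effects
```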
Abstract:
Student screening by schools is a regulatory mechanism of school inclusion and exclusion that normally overrides parents' expectations of school choice. Based on the "Parents survey 2006" data (n=188,073) generated by the Chilean Ministry of Education, this paper describes parents' reasons for choosing their children's school and the criteria schools use for screening students. It concludes that Catholic schools are the most selective institutions and usually exceed the capacity of parental choice. One of the reasons for selecting students would be the direct relationship between this practice and an increase in the average score on the test of the Chilean Educational Quality Measurement System (SIMCE).
Abstract:
In this paper we study the commuting and moving decisions of workers in Catalonia (Spain) and their evolution over the 1986-1996 period. Using a microdata sample from the 1991 Spanish Population Census, we estimate a simultaneous discrete choice model of commuting and moving, thus indirectly addressing home and job location decisions. The econometric framework is a simultaneous binary probit model with a commute equation and a move equation.
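A minimal sketch of a simultaneous (bivariate) binary probit of the kind described, with one commute equation and one move equation and correlated errors; the covariates and data below are simulated, not the 1991 Census microdata.

```python
import numpy as np
from scipy.stats import multivariate_normal
from scipy.optimize import minimize

def neg_loglik(params, X1, X2, y1, y2):
    """Negative log-likelihood of a bivariate (simultaneous binary) probit:
    one latent equation for commuting, one for moving, with correlated errors."""
    k1, k2 = X1.shape[1], X2.shape[1]
    b1, b2 = params[:k1], params[k1:k1 + k2]
    rho = np.tanh(params[-1])                     # keep the correlation inside (-1, 1)
    q1, q2 = 2 * y1 - 1, 2 * y2 - 1               # sign flips encode the four outcomes
    ll = 0.0
    for i in range(len(y1)):
        r = q1[i] * q2[i] * rho
        p = multivariate_normal.cdf([q1[i] * (X1[i] @ b1), q2[i] * (X2[i] @ b2)],
                                    mean=[0.0, 0.0], cov=[[1.0, r], [r, 1.0]])
        ll += np.log(max(p, 1e-300))
    return -ll

# Tiny synthetic check with hypothetical regressors.
rng = np.random.default_rng(0)
n = 200
X1 = np.column_stack([np.ones(n), rng.normal(size=n)])   # commute equation regressors
X2 = np.column_stack([np.ones(n), rng.normal(size=n)])   # move equation regressors
e = rng.multivariate_normal([0.0, 0.0], [[1.0, 0.5], [0.5, 1.0]], size=n)
y1 = (X1 @ np.array([0.2, 0.8]) + e[:, 0] > 0).astype(int)
y2 = (X2 @ np.array([-0.1, 0.6]) + e[:, 1] > 0).astype(int)
fit = minimize(neg_loglik, np.zeros(5), args=(X1, X2, y1, y2), method="BFGS")
print(fit.x)   # [commute coefficients | move coefficients | atanh(rho)]
```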
Abstract:
This paper tests for real interest parity (RIRP) among the nineteen major OECD countries over the period 1978:Q2-1998:Q4. The econometric methods applied combine several unit root and stationarity tests designed for panels that remain valid under cross-section dependence and the presence of multiple structural breaks. Our results strongly support the fulfilment of the weak version of the RIRP for the studied period once dependence and structural breaks are accounted for.
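The paper's tests are panel unit root and stationarity tests that remain valid under cross-section dependence and structural breaks; as a much simpler first pass, the sketch below only runs country-by-country ADF tests on simulated real interest differentials (under the weak RIRP such differentials should be stationary).

```python
import numpy as np
from statsmodels.tsa.stattools import adfuller

# Hypothetical real-interest-rate differentials against a base country
# (1978:Q2-1998:Q4 covers 83 quarters); the weak RIRP implies stationarity.
rng = np.random.default_rng(2)
differentials = {
    "country_A": rng.normal(0.0, 0.5, 83),                                  # roughly stationary
    "country_B": 0.05 * rng.normal(0.0, 1.0, 83).cumsum() + rng.normal(0.0, 0.5, 83),
}

for name, diff in differentials.items():
    stat, pvalue, *_ = adfuller(diff, regression="c", autolag="AIC")
    print(f"{name}: ADF statistic = {stat:.2f}, p-value = {pvalue:.3f}")
```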
Abstract:
Executive Summary
The unifying theme of this thesis is the pursuit of a satisfactory way to quantify the risk-reward trade-off in financial economics: first in the context of a general asset pricing model, then across models, and finally across country borders. The guiding principle in that pursuit was to seek innovative solutions by combining ideas from different fields in economics and broader scientific research. For example, in the first part of this thesis we sought a fruitful application of strong existence results in utility theory to topics in asset pricing. In the second part we implement an idea from the field of fuzzy set theory in the optimal portfolio selection problem, while the third part of this thesis is, to the best of our knowledge, the first empirical application of some general results on asset pricing in incomplete markets to the important topic of measuring financial integration. While the first two parts of this thesis effectively combine well-known ways to quantify risk-reward trade-offs, the third one can be viewed as an empirical verification of the usefulness of the so-called "good deal bounds" theory in designing risk-sensitive pricing bounds.
Chapter 1 develops a discrete-time asset pricing model based on a novel ordinally equivalent representation of recursive utility. To the best of our knowledge, we are the first to use a member of a novel class of recursive utility generators to construct a representative agent model that addresses some long-standing issues in asset pricing. Applying strong representation results allows us to show that the model features countercyclical risk premia, for both consumption and financial risk, together with a low and procyclical risk-free rate. As the recursive utility used nests the well-known time-state separable utility as a special case, all results nest the corresponding ones from the standard model and thus shed light on its well-known shortcomings. The empirical investigation conducted to support these theoretical results, however, showed that as long as one resorts to econometric methods based on approximating conditional moments with unconditional ones, it is not possible to distinguish the model we propose from the standard one.
Chapter 2 is joint work with Sergei Sontchik. There we provide theoretical and empirical motivation for the aggregation of performance measures. The main idea is that, just as it makes sense to apply several performance measures ex post, it also makes sense to base optimal portfolio selection on ex-ante maximization of as many performance measures as desired. We thus offer a concrete algorithm for optimal portfolio selection via ex-ante optimization, over different horizons, of several risk-return trade-offs simultaneously. An empirical application of that algorithm, using seven popular performance measures, suggests that realized returns feature better distributional characteristics than realized returns from portfolio strategies that are optimal with respect to single performance measures. When comparing the distributions of realized returns we used two partial risk-reward orderings: first- and second-order stochastic dominance.
We first used the Kolmogorov-Smirnov test to determine whether the two distributions are indeed different, which, combined with a visual inspection, allowed us to demonstrate that the way we propose to aggregate performance measures leads to portfolio realized returns that first-order stochastically dominate the ones that result from optimization only with respect to, for example, the Treynor ratio or Jensen's alpha. We checked for second-order stochastic dominance via pointwise comparison of the so-called absolute Lorenz curve, i.e. the sequence of expected shortfalls over a range of quantiles. Since the plot of the absolute Lorenz curve for the aggregated performance measures was above the one corresponding to each individual measure, we were tempted to conclude that the algorithm we propose leads to a portfolio returns distribution that second-order stochastically dominates those obtained from virtually all performance measures considered.
Chapter 3 proposes a measure of financial integration based on recent advances in asset pricing in incomplete markets. Given a base market (a set of traded assets) and an index of another market, we propose to measure financial integration through time by the size of the spread between the pricing bounds of the market index relative to the base market. The bigger the spread around country index A, viewed from market B, the less integrated markets A and B are. We investigate the presence of structural breaks in the size of the spread for EMU member country indices before and after the introduction of the Euro. We find evidence that both the level and the volatility of our financial integration measure increased after the introduction of the Euro. That counterintuitive result suggests the presence of an inherent weakness in the attempt to measure financial integration independently of economic fundamentals. Nevertheless, the results about the bounds on the risk-free rate appear plausible from the viewpoint of existing economic theory on the impact of integration on interest rates.
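A minimal sketch of the two dominance checks described for Chapter 2, applied to hypothetical samples of realized returns: a two-sample Kolmogorov-Smirnov test, and a pointwise comparison of approximate absolute Lorenz curves (expected shortfalls over a grid of quantiles) as the second-order stochastic dominance criterion.

```python
import numpy as np
from scipy.stats import ks_2samp

def absolute_lorenz(returns, quantiles):
    """Approximate absolute (generalized) Lorenz curve: lower-tail mean times
    the tail probability, i.e. the sequence of expected shortfalls."""
    x = np.sort(np.asarray(returns))
    n = len(x)
    return np.array([x[:max(int(np.ceil(p * n)), 1)].mean() * p for p in quantiles])

# Hypothetical realized returns: aggregated measure vs. a single measure.
rng = np.random.default_rng(3)
r_aggregated = rng.normal(0.008, 0.04, 1000)
r_single = rng.normal(0.005, 0.05, 1000)

print(ks_2samp(r_aggregated, r_single))        # are the two distributions different?
q = np.linspace(0.01, 1.0, 100)
sosd = np.all(absolute_lorenz(r_aggregated, q) >= absolute_lorenz(r_single, q))
print("second-order stochastic dominance (pointwise Lorenz comparison):", sosd)
```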
Abstract:
Detailed knowledge of water percolation into the soil in irrigated areas is fundamental for solving problems of drainage, pollution and the recharge of underground aquifers. The aim of this study was to evaluate the percolation estimated by time-domain reflectometry (TDR) in a drainage lysimeter. We used Darcy's law with K(θ) functions determined by field and laboratory methods, and also estimated percolation from the change in water storage in the soil profile measured at 16 moisture-measurement points over different time intervals. A sandy clay soil was saturated and covered with a plastic sheet to prevent evaporation, and an internal drainage trial was installed in a drainage lysimeter. The relationship between the observed and estimated percolation values was evaluated by linear regression analysis. The results suggest that percolation in the field or laboratory can be estimated based on continuous monitoring with TDR, at short time intervals, of the variations in soil water storage. The precision and accuracy of this approach are similar to those of the lysimeter, and it has advantages over the other evaluated methods, the most relevant being the possibility of estimating percolation over short time intervals and the fact that soil hydraulic properties such as water retention and hydraulic conductivity need not be determined beforehand. The percolation estimates obtained by the Darcy-Buckingham equation using the K(θ) function predicted by the method of Hillel et al. (1972) were compatible with those obtained in the lysimeter at time intervals greater than 1 h. The methods of Libardi et al. (1980), Sisson et al. (1980) and van Genuchten (1980) underestimated water percolation.
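A minimal sketch of the storage-change estimate of percolation, assuming hypothetical TDR readings: with the surface covered, the decrease in stored water over an interval equals the water draining below the monitored profile, which is what makes a predetermined K(θ) function (as required by the Darcy-Buckingham alternative q = −K(θ)·∂H/∂z) unnecessary. The depths, water contents and time step below are illustrative, not the study's measurements.

```python
import numpy as np

def storage_mm(theta, depths_m):
    """Soil water storage (mm) from volumetric water contents at several depths,
    using trapezoidal integration of theta(z) over the monitored profile."""
    theta, depths_m = np.asarray(theta), np.asarray(depths_m)
    return 1000.0 * np.sum((theta[1:] + theta[:-1]) / 2.0 * np.diff(depths_m))

# Hypothetical TDR readings during internal drainage (surface covered, no evaporation).
depths = [0.1, 0.2, 0.3, 0.4]                     # measurement depths (m)
theta_t0 = [0.380, 0.370, 0.360, 0.350]           # water content at time t0 (m3 m-3)
theta_t1 = [0.370, 0.365, 0.355, 0.348]           # water content one hour later
dt_h = 1.0

# With the surface sealed, the decrease in stored water over the interval
# equals the water percolating below the monitored profile.
percolation = (storage_mm(theta_t0, depths) - storage_mm(theta_t1, depths)) / dt_h
print(f"estimated percolation ≈ {percolation:.2f} mm/h")
```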