918 results for Random Regret Minimization
Abstract:
This paper introduces the discrete choice modelling paradigm of Random Regret Minimization (RRM) to the field of environmental and resource economics. The RRM approach was recently developed in the context of travel demand modelling and presents a tractable, regret-based alternative to the dominant choice-modelling paradigm based on Random Utility Maximization (RUM) theory. We highlight how RRM-based models provide closed-form, logit-type formulations for choice probabilities that capture semi-compensatory behaviour and choice set composition effects while being as parsimonious as their utilitarian counterparts. Using data from a stated choice experiment aimed at identifying valuations of characteristics of nature parks, we compare RRM-based and RUM-based models in terms of parameter estimates, goodness of fit, elasticities and resulting policy implications.
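For context, the canonical RRM specification from the travel demand literature (often attributed to Chorus, 2010) can be sketched as follows; this is the general model family, not necessarily the exact specification estimated in the paper:

```latex
R_i = \sum_{j \neq i} \sum_{m} \ln\!\left(1 + \exp\left[\beta_m \left(x_{jm} - x_{im}\right)\right]\right),
\qquad
P(i) = \frac{\exp(-R_i)}{\sum_{j} \exp(-R_j)}
```

where $x_{im}$ is the level of attribute $m$ for alternative $i$ and $\beta_m$ its taste parameter. Minimizing random regret yields the closed-form, logit-type choice probabilities mentioned above; the attribute-by-attribute comparison structure is what produces the semi-compensatory behaviour and choice set composition effects.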
Abstract:
This paper compares the Random Regret Minimization and Random Utility Maximization models for determining recreational choice. The Random Regret approach is based on the idea that, when choosing, individuals aim to minimize their regret, regret being defined as what one experiences when a non-chosen alternative in a choice set performs better than the chosen one on one or more attributes. The Random Regret paradigm, recently developed in transport economics, presents a tractable, regret-based alternative to the dominant choice paradigm based on Random Utility. Using data from a travel cost study exploring factors that influence kayakers' site-choice decisions in the Republic of Ireland, we estimate both the traditional Random Utility multinomial logit model (RU-MNL) and the Random Regret multinomial logit model (RR-MNL) to gain more insight into site choice decisions. We further explore whether choices are driven by a utility maximization or a regret minimization paradigm by running a binary logit model that examines the likelihood of the two decision paradigms using site visits and respondents' characteristics as explanatory variables. In addition to being one of the first studies to apply the RR-MNL to an environmental good, this paper also represents the first application of the RR-MNL to compute the logsum in order to test and strengthen conclusions on the welfare impacts of potential alternative policy scenarios.
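For reference, under the RU-MNL the logsum welfare measure for a change from scenario $0$ to scenario $1$ takes the standard form below, with $\beta_c$ the cost coefficient (notation assumed here for illustration); part of the paper's contribution is carrying an analogous computation over to the RR-MNL:

```latex
\Delta CS = -\frac{1}{\beta_c}\left[\ln \sum_{j} \exp\left(V_j^{1}\right) - \ln \sum_{j} \exp\left(V_j^{0}\right)\right]
```

where $V_j^{s}$ is the systematic utility of site $j$ under scenario $s$.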
Abstract:
This paper proposes a discrete mixture model which assigns individuals, up to a probability, to either a class of random utility (RU) maximizers or a class of random regret (RR) minimizers, on the basis of their sequence of observed choices. Our proposed model advances the state of the art of RU-RR mixture models by (i) adding and simultaneously estimating a membership model which predicts the probability of belonging to the RU or RR class; (ii) adding a layer of random taste heterogeneity within each behavioural class; and (iii) deriving a welfare measure associated with the RU-RR mixture model and consistent with referendum voting, which is the appropriate mechanism of provision for such local public goods. The context of our empirical application is a stated choice experiment concerning traffic calming schemes. We find that the random parameter RU-RR mixture model outperforms its fixed coefficient counterpart not only in terms of fit, as expected, but also in terms of the plausibility of the membership determinants of behavioural class. In line with psychological theories of regret, we find that, compared to respondents who are familiar with the choice context (i.e. the traffic calming scheme), unfamiliar respondents are more likely to be regret minimizers than utility maximizers.
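Schematically, and with notation assumed here for illustration, the likelihood of respondent $n$'s choice sequence $y_n$ in such a mixture is

```latex
P(y_n \mid z_n) = \pi_n \, L_n^{RU} + (1 - \pi_n) \, L_n^{RR},
\qquad
\pi_n = \frac{\exp(\gamma' z_n)}{1 + \exp(\gamma' z_n)}
```

where $L_n^{RU}$ and $L_n^{RR}$ are the class-conditional products of logit choice probabilities over the respondent's tasks, and the membership probability $\pi_n$ is driven by covariates $z_n$ such as familiarity with the choice context; random taste heterogeneity enters by integrating each $L_n$ over a taste distribution.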
Abstract:
This paper introduces the discrete choice modelling paradigm of Random Regret Minimisation (RRM) to the field of health economics. RRM is a regret-based model that explores a driver of choice different from the traditional utility-based Random Utility Maximisation (RUM). The RRM approach is based on the idea that, when choosing, individuals aim to minimise their regret, regret being defined as what one experiences when a non-chosen alternative in a choice set performs better than the chosen one on one or more attributes. Analysing data from a discrete choice experiment on diet, physical activity and risk of a fatal heart attack in the next ten years, administered to a sample of the Northern Ireland population, we find that the combined use of RUM and RRM models offers additional information, providing useful behavioural insights for better informed policy appraisal.
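A minimal numerical sketch of why the two models can inform policy differently (all attribute levels and parameters here are hypothetical): for the same data and taste parameters, RUM and RRM can assign noticeably different choice probabilities, because RRM penalizes alternatives that lose badly on any single attribute.

```python
import numpy as np

# Hypothetical choice set: three alternatives (rows), two attributes (columns),
# e.g. weekly cost and a health-benefit score in a diet/activity choice task.
X = np.array([[1.0, 3.0],
              [2.0, 2.0],
              [3.0, 1.0]])
beta = np.array([-0.5, 0.8])   # hypothetical taste parameters

# RUM: linear-in-parameters utilities and standard logit probabilities.
V = X @ beta
p_rum = np.exp(V) / np.exp(V).sum()

# RRM: regret of alternative i sums attribute-by-attribute comparisons with
# every competing alternative j (the canonical ln(1 + exp(.)) form).
def regret(i):
    return sum(np.log1p(np.exp(beta[m] * (X[j, m] - X[i, m])))
               for j in range(len(X)) if j != i
               for m in range(X.shape[1]))

R = np.array([regret(i) for i in range(len(X))])
p_rrm = np.exp(-R) / np.exp(-R).sum()

print("RUM:", p_rum.round(3), "RRM:", p_rrm.round(3))
```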
Abstract:
This study is the first to compare random regret minimisation (RRM) and random utility maximisation (RUM) in a freight transport application, specifically in a scenario involving a negative shock to the reference alternative. Based on data from two stated choice experiments conducted among Swiss logistics managers, the study contributes to the related literature by exploring for the first time the use of mixed logit models in the most recent version of the RRM approach. We further investigate the two choice paradigms by computing elasticities and forecasting choice probabilities. We find that regret is important in describing the managers' choices. Regret increases in the shock scenario, supporting the idea that a shift in reference point can cause a shift towards regret minimisation. Differences in elasticities and forecast probabilities are identified and discussed.
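For reference, under a RUM multinomial logit the direct point elasticity used in such comparisons has the familiar closed form (a standard result, not specific to this paper):

```latex
E^{P_i}_{x_{im}} = \frac{\partial P_i}{\partial x_{im}} \cdot \frac{x_{im}}{P_i} = \beta_m \, x_{im} \, (1 - P_i)
```

whereas under RRM the derivative of regret with respect to $x_{im}$ involves every pairwise comparison in the choice set, so the two paradigms generally produce different elasticities and forecasts even on identical data.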
Abstract:
People go through their lives making all kinds of decisions, and some of these decisions affect their demand for transportation, for example their choices of where to live and where to work, how and when to travel, and which route to take. Transport-related choices are typically time dependent and characterized by a large number of alternatives that can be spatially correlated. This thesis deals with models that can be used to analyze and predict discrete choices in large-scale networks. The proposed models and methods are highly relevant for, but not limited to, transport applications. We model decisions as sequences of choices within the dynamic discrete choice framework, also known as parametric Markov decision processes. Such models are known to be difficult to estimate and to apply for prediction because dynamic programming problems need to be solved in order to compute choice probabilities. In this thesis we show that it is possible to exploit the network structure and the flexibility of dynamic programming so that the dynamic discrete choice modeling approach is not only useful for modeling time dependent choices, but also makes it easier to model large-scale static choices. The thesis consists of seven articles containing a number of models and methods for estimating, applying and testing large-scale discrete choice models. In the following we group the contributions under three themes: route choice modeling, large-scale multivariate extreme value (MEV) model estimation, and nonlinear optimization algorithms. Five articles are related to route choice modeling. We propose different dynamic discrete choice models that allow paths to be correlated, based on the MEV and mixed logit models. The resulting route choice models become expensive to estimate, and we deal with this challenge by proposing innovative methods that reduce the estimation cost. For example, we propose a decomposition method that not only opens up the possibility of mixing, but also speeds up the estimation of simple logit models, which also has implications for traffic simulation. Moreover, we compare the utility maximization and regret minimization decision rules, and we propose a misspecification test for logit-based route choice models. The second theme is the estimation of static discrete choice models with large choice sets. We establish that a class of MEV models can be reformulated as dynamic discrete choice models on the networks of correlation structures. These dynamic models can then be estimated quickly using dynamic programming techniques and an efficient nonlinear optimization algorithm. Finally, the third theme focuses on structured quasi-Newton techniques for estimating discrete choice models by maximum likelihood. We examine and adapt switching methods that can be easily integrated into standard optimization algorithms (line search and trust region) to accelerate the estimation process. The proposed dynamic discrete choice models and estimation methods can be used in various discrete choice applications. In the area of big data analytics, models that can deal with large choice sets and sequential choices are important. Our research can therefore be of interest in various demand analysis applications (predictive analytics) or can be integrated with optimization models (prescriptive analytics). Furthermore, our studies indicate the potential of dynamic programming techniques in this context, even for static models, which opens up a variety of future research directions.
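A sketch of the dynamic programming structure behind such route choice models (in the spirit of the recursive logit model; notation assumed here): path choice is modeled as a sequence of link choices, and the expected downstream utility $V(k)$ at state $k$ satisfies a logsum Bellman equation

```latex
V(k) = \mu \ln \sum_{a \in A(k)} \exp\!\left(\frac{v(a \mid k) + V(a)}{\mu}\right),
\qquad
P(a \mid k) = \frac{\exp\!\left[\left(v(a \mid k) + V(a)\right)/\mu\right]}{\sum_{a' \in A(k)} \exp\!\left[\left(v(a' \mid k) + V(a')\right)/\mu\right]}
```

where $A(k)$ is the set of outgoing links, $v(a \mid k)$ the instantaneous utility, and $\mu$ the scale. Each likelihood evaluation requires solving this fixed-point system over the whole network, which is why estimation is costly and why the decomposition and reformulation methods summarized above matter.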
Abstract:
We study the regret of optimal strategies for online convex optimization games. Using von Neumann's minimax theorem, we show that the optimal regret in this adversarial setting is closely related to the behavior of the empirical minimization algorithm in a stochastic process setting: it is equal to the maximum, over joint distributions of the adversary's action sequence, of the difference between a sum of minimal expected losses and the minimal empirical loss. We show that the optimal regret has a natural geometric interpretation, since it can be viewed as the gap in Jensen's inequality for a concave functional (the minimizer over the player's actions of expected loss) defined on a set of probability distributions. We use this expression to obtain upper and lower bounds on the regret of an optimal strategy for a variety of online learning problems. Our method provides upper bounds without the need to construct a learning algorithm; the lower bounds provide explicit optimal strategies for the adversary.
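To fix notation (a sketch; the symbols here are assumptions rather than the paper's exact notation), the regret of a strategy after $n$ rounds is

```latex
R_n = \sum_{t=1}^{n} \ell(a_t, z_t) - \inf_{a \in \mathcal{A}} \sum_{t=1}^{n} \ell(a, z_t)
```

and the duality result described above expresses the minimax value of $R_n$, schematically, as

```latex
\sup_{P} \; \mathbb{E}\!\left[\, \sum_{t=1}^{n} \inf_{a \in \mathcal{A}} \mathbb{E}\!\left[\ell(a, Z_t) \mid Z_{1:t-1}\right] \;-\; \inf_{a \in \mathcal{A}} \sum_{t=1}^{n} \ell(a, Z_t) \right]
```

where the supremum ranges over joint distributions $P$ of the adversary's action sequence $Z_1, \dots, Z_n$: the first term is the sum of minimal conditional expected losses and the second is the minimal empirical loss.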
Abstract:
We investigate the behavior of the empirical minimization algorithm using various methods. We first analyze it by comparing the empirical (random) structure with the original structure of the class, either in an additive sense, via the uniform law of large numbers, or in a multiplicative sense, using isomorphic coordinate projections. We then show that a direct analysis of the empirical minimization algorithm yields a significantly better bound, and that the estimates we obtain are essentially sharp. The method of proof we use is based on Talagrand's concentration inequality for empirical processes.
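For concreteness (a sketch, with notation assumed here): given a class $F$ of functions and a sample $X_1, \dots, X_n$, empirical minimization returns

```latex
\hat{f} = \operatorname*{arg\,min}_{f \in F} P_n f,
\qquad
P_n f = \frac{1}{n} \sum_{i=1}^{n} f(X_i)
```

and the two comparison methods mentioned above relate the empirical structure $P_n$ to the true one $Pf = \mathbb{E} f$ over the class, either through additive bounds on $\sup_{f \in F} |P_n f - Pf|$ or through multiplicative (isomorphic) bounds of the form $c\,Pf \le P_n f \le C\,Pf$ on a suitable subset of the class.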
Abstract:
We study the influence of the choice of template in tensor-based morphometry. Using 3D brain MR images from 10 monozygotic twin pairs, we defined a tensor-based distance in the log-Euclidean framework [1] between each image pair in the study. Relative to this metric, twin pairs were found to be closer to each other on average than random pairings, consistent with evidence that brain structure is under strong genetic control. We also computed the intraclass correlation and associated permutation p-value at each voxel for the determinant of the Jacobian matrix of the transformation. The cumulative distribution function (CDF) of the voxel-wise p-values was computed for each of the templates and compared to the null distribution. Surprisingly, there was very little difference between the CDFs of statistics computed from analyses using different templates. As the brain with the least log-Euclidean deformation cost, the mean template defined here avoids the blurring caused by creating a synthetic image from a population and, when selected from a large population, avoids bias by being geometrically centered, in a metric that is sensitive enough to anatomical similarity that it can even detect genetic affinity among anatomies.
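The log-Euclidean distance referenced here is, for symmetric positive-definite tensors $S_1$ and $S_2$, the standard metric of [1] (sketched below; the paper aggregates it over voxels to compare image pairs):

```latex
d(S_1, S_2) = \left\| \log S_1 - \log S_2 \right\|_F
```

where $\log$ denotes the matrix logarithm and $\|\cdot\|_F$ the Frobenius norm; working in the log domain makes the metric Euclidean, so means and distances of deformation tensors can be computed directly.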
Abstract:
Conventional random access scan (RAS) for testing has lower test application time, power dissipation, and test data volume compared to standard serial scan chain based designs. In this paper, we present two cluster-based techniques, namely Serial Input Random Access Scan and Variable Word Length Random Access Scan, which reduce test application time even further by exploiting the parallelism among the clusters and performing write operations on multiple bits. Experimental results on benchmark circuits show, on average, a 2-3 times speedup in test write time and a 60% reduction in write test data volume.
Abstract:
Random Access Scan, which addresses individual flip-flops in a design using a memory-array-like row and column decoder architecture, has recently attracted widespread attention due to its potential for lower test application time, test data volume and test power dissipation when compared to traditional serial scan. This is because typically only a very limited number of random "care" bits in a test response need be modified to create the next test vector; unlike traditional scan, most flip-flops need not be updated. Test application efficiency can be further improved by organizing the access by word instead of by bit. In this paper we present a new decoder structure that takes advantage of basis vectors and linear algebra to significantly optimize test application in RAS by performing write operations on multiple bits consecutively. Simulations performed on benchmark circuits show an average 2-3 times speedup in test write time compared to conventional RAS.
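A minimal sketch of the linear-algebra idea (illustrative only; not the paper's actual decoder, and all names are hypothetical): over GF(2), a required update word can be synthesized as an XOR of a small set of basis vectors, so one multi-bit write per selected basis vector replaces many single-bit writes.

```python
import numpy as np

def gf2_combination(basis, target):
    """Find a 0/1 selection x with (x @ basis) % 2 == target, or None.

    basis: list of equal-length 0/1 row vectors; arithmetic is over GF(2).
    Gaussian elimination with identity tags tracks which original rows
    were XOR-ed together, mirroring how a decoder could pick basis words.
    """
    B = np.array(basis, dtype=np.uint8) % 2
    t = np.array(target, dtype=np.uint8) % 2
    k, n = B.shape
    # Augment rows with identity tags so row operations record combinations.
    aug = np.concatenate([B, np.eye(k, dtype=np.uint8)], axis=1)
    res = np.concatenate([t, np.zeros(k, dtype=np.uint8)])
    row = 0
    for col in range(n):
        pivot = next((r for r in range(row, k) if aug[r, col]), None)
        if pivot is None:
            continue
        aug[[row, pivot]] = aug[[pivot, row]]   # move pivot into place
        if res[col]:                            # cancel the target's bit here
            res ^= aug[row]
        for r in range(k):                      # clear this column elsewhere
            if r != row and aug[r, col]:
                aug[r] ^= aug[row]
        row += 1
    if res[:n].any():
        return None        # target word is outside the span of the basis
    return res[n:]         # which basis rows to XOR (the write schedule)

basis = [[1, 0, 1, 0],
         [0, 1, 1, 0],
         [0, 0, 1, 1]]
print(gf2_combination(basis, [1, 1, 0, 0]))  # -> [1 1 0]: XOR rows 0 and 1
```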
Abstract:
A new model to explain animal spacing, based on a trade-off between foraging efficiency and predation risk, is derived from biological principles. The model is able to explain not only the general tendency for animal groups to form, but some of the attributes of real groups. These include the independence of mean animal spacing from group population, the observed variation of animal spacing with resource availability and also with the probability of predation, and the decline in group stability with group size. The appearance of "neutral zones" within which animals are not motivated to adjust their relative positions is also explained. The model assumes that animals try to minimize a cost potential combining the loss of intake rate due to foraging interference and the risk from exposure to predators. The cost potential describes a hypothetical field giving rise to apparent attractive and repulsive forces between animals. Biologically based functions are given for the decline in interference cost and increase in the cost of predation risk with increasing animal separation. Predation risk is calculated from the probabilities of predator attack and predator detection as they vary with distance. Using example functions for these probabilities and foraging interference, we calculate the minimum cost potential for regular lattice arrangements of animals before generalizing to finite-sized groups and random arrangements of animals, showing optimal geometries in each case and describing how potentials vary with animal spacing.
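Schematically, the model can be summarized as follows (notation assumed here for illustration): each animal at separation $r$ from a neighbour experiences a cost potential

```latex
C(r) = C_f(r) + C_p(r)
```

where the foraging-interference cost $C_f$ declines with separation and the predation-risk cost $C_p$ (built from the probabilities of predator attack and detection) increases with it; individuals adjust position to minimize $C$, and a flat region around the minimum, where small moves change the cost negligibly, produces the "neutral zones" described above.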
Abstract:
Sparse representation based visual tracking approaches have attracted increasing interest in the community in recent years. The main idea is to linearly represent each target candidate using a set of target and trivial templates while imposing a sparsity constraint on the representation coefficients. After the coefficients are obtained using L1-norm minimization methods, the candidate with the lowest error, when reconstructed using only the target templates and the associated coefficients, is taken as the tracking result. In spite of the promising performance widely reported, it is unclear whether the performance of these trackers can be maximised. In addition, the computational complexity caused by the dimensionality of the feature space limits these algorithms in real-time applications. In this paper, we propose a real-time visual tracking method based on structurally random projection and weighted least squares techniques. In particular, to enhance the discriminative capability of the tracker, we introduce background templates to the linear representation framework. To handle appearance variations over time, we relax the sparsity constraint using a weighted least squares (WLS) method to obtain the representation coefficients. To further reduce the computational complexity, structurally random projection is used to reduce the dimensionality of the feature space while preserving the pairwise distances between the data points. Experimental results show that the proposed approach outperforms several state-of-the-art tracking methods.
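A minimal sketch of the two ingredients (all dimensions, weights and variable names here are hypothetical, and a dense Gaussian matrix stands in for the structured projection): project features to a low dimension with a distance-preserving random matrix, then solve a weighted least squares problem for the coefficients in place of L1 minimization.

```python
import numpy as np

rng = np.random.default_rng(0)
d, k, n = 1024, 64, 20            # feature dim, projected dim, template count

# Random projection; the 1/sqrt(k) scaling approximately preserves pairwise
# distances (Johnson-Lindenstrauss style), which is what the tracker relies on.
P = rng.standard_normal((k, d)) / np.sqrt(k)

T = rng.standard_normal((d, n))   # columns: target + background templates
y = rng.standard_normal(d)        # feature vector of one target candidate

Tp, yp = P @ T, P @ y             # all computations run in the small space

# Weighted least squares in place of L1 minimization: W weights the projected
# observations, and a small ridge term keeps the normal equations well posed.
W = np.diag(np.ones(k))
coef = np.linalg.solve(Tp.T @ W @ Tp + 1e-6 * np.eye(n), Tp.T @ W @ yp)

# Reconstruction error from the template subspace scores this candidate; the
# candidate with the smallest error is selected as the tracking result.
err = float(np.linalg.norm(yp - Tp @ coef) ** 2)
print(err)
```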