893 results for Consideration sets
Abstract:
We model a boundedly rational agent who suffers from limited attention. The agent considers each feasible alternative with a given (unobservable) probability, the attention parameter, and then chooses the alternative that maximises a preference relation within the set of considered alternatives. We show that this random choice rule is the only one for which the impact of removing an alternative on the choice probability of any other alternative is asymmetric and menu independent. Both the preference relation and the attention parameters are identified uniquely by stochastic choice data.
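A minimal Python sketch may help make this choice rule concrete. It assumes the standard independent-consideration formulation: each alternative in the menu enters the consideration set independently with its attention probability, and the preference-best considered alternative is chosen. All names and numbers below are illustrative, not the authors'.

```python
def choice_probability(target, menu, gamma, preference):
    """P(target chosen from menu): target must be considered while every
    strictly preferred alternative in the menu is overlooked."""
    rank = {a: i for i, a in enumerate(preference)}
    p = gamma[target]
    for b in menu:
        if rank[b] < rank[target]:
            p *= 1.0 - gamma[b]
    return p

menu = ["x", "y", "z"]
gamma = {"x": 0.9, "y": 0.5, "z": 0.3}   # attention parameters
preference = ["z", "y", "x"]             # z best, x worst
for a in menu:
    print(a, choice_probability(a, menu, gamma, preference))
# the probabilities do not sum to one: the residual mass is the event
# that nothing is considered (a default option in the original model)
```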
Abstract:
It has long been supposed that preference judgments between sets of to-be-considered possibilities are made by initially winnowing the options down to the most promising-looking alternatives, which form smaller “consideration sets” (Howard, 1963; Wright & Barbour, 1977). In preference choices with >2 options, it is standard to assume that a “consideration set”, based upon some simple criterion, is established to reduce the options available. Inferential judgments, in contrast, have more frequently been investigated in situations in which only two possibilities need to be considered (e.g., which of these two cities is the larger?). Proponents of the “fast and frugal” approach to decision-making suggest that such judgments are also made on the basis of limited, simple criteria. For example, if only one of two cities is recognized and the task is to judge which city has the larger population, the recognition heuristic states that the recognized city should be selected. A multinomial processing tree model is outlined which provides the basis for estimating the extent to which recognition is used as a criterion in establishing a consideration set for inferential judgments between three possible options.
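To illustrate the estimation logic, here is a deliberately reduced two-branch processing-tree sketch in Python; the tree in the paper has more branches and parameters, so the parameter names (r for recognition-based consideration, g for guessing) and the closed-form estimator are ours, not the paper's.

```python
def p_choose_recognized(r, g=1.0 / 3.0):
    """Two-branch tree: with probability r the consideration set is formed
    by recognition alone (only the recognized option survives, so it is
    chosen); otherwise all three options stay in play and the recognized
    one is picked by guessing with probability g."""
    return r + (1.0 - r) * g

def estimate_r(n_recognized_chosen, n_trials, g=1.0 / 3.0):
    """Invert the tree equation to estimate r from observed choice counts."""
    p_hat = n_recognized_chosen / n_trials
    return (p_hat - g) / (1.0 - g)

print(estimate_r(n_recognized_chosen=70, n_trials=100))  # ~0.55
```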
Abstract:
Models incorporating more realistic models of customer behavior, such as customers choosing from an offer set, have recently become popular in assortment optimization and revenue management. The dynamic program for these models is intractable and is approximated by a deterministic linear program called the CDLP, which has an exponential number of columns. When there are products that are being considered for purchase by more than one customer segment, CDLP is difficult to solve since column generation is known to be NP-hard. However, recent research indicates that a formulation based on segments with cuts imposing consistency (SDCP+) is tractable and approximates the CDLP value very closely. In this paper we investigate the structure of the consideration sets that make the two formulations exactly equal. We show that if the segment consideration sets follow a tree structure, CDLP = SDCP+. We give a counterexample to show that cycles can induce a gap between the CDLP and the SDCP+ relaxation. We derive two classes of valid inequalities, called flow and synchronization inequalities, to further improve SDCP+, based on cycles in the consideration set structure. We give a numerical study showing the performance of these cycle-based cuts.
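For reference, the CDLP referred to here is usually stated as follows (a standard form from the network revenue management literature; the notation is ours). With t(S) the time allocated to offer set S over a horizon of length T, R(S) the expected revenue rate, and Q_i(S) the expected consumption rate of resource i under offer set S:

\[
\max_{t \ge 0} \; \sum_{S \subseteq N} R(S)\, t(S)
\quad \text{s.t.} \quad
\sum_{S \subseteq N} Q_i(S)\, t(S) \le c_i \;\; \forall i,
\qquad
\sum_{S \subseteq N} t(S) \le T .
\]

One variable per offer set S is what produces the exponential number of columns mentioned above.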
Abstract:
We study a psychologically based foundation for choice errors. The decision maker applies a preference ranking after forming a 'consideration set' prior to choosing an alternative. Membership of the consideration set is determined both by the alternative-specific salience and by the rationality of the agent (his general propensity to consider all alternatives). The model turns out to include a logit formulation as a special case. In general, it has a rich set of implications both for exogenous parameters and for a situation in which alternatives can affect their own salience (salience games). Such implications are relevant to assess the link between 'revealed' preferences and 'true' preferences: for example, less rational agents may paradoxically express their preference through choice more truthfully than more rational agents.
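The following Python sketch computes exact choice probabilities in a model of this type by enumerating consideration sets. The particular way salience and rationality are combined into a consideration probability (q_i = rho + (1 - rho) * s_i) is a hypothetical parameterization of ours, chosen only to show the mechanics; the paper's functional form may differ.

```python
from itertools import chain, combinations

def choice_probs(menu, consider_prob, preference):
    """Each alternative is considered independently with its probability;
    the agent picks the preference-best considered alternative (no choice
    if nothing is considered). Enumerates all subsets: small menus only."""
    rank = {a: i for i, a in enumerate(preference)}
    probs = {a: 0.0 for a in menu}
    subsets = chain.from_iterable(
        combinations(menu, k) for k in range(1, len(menu) + 1))
    for C in subsets:
        w = 1.0
        for a in menu:
            w *= consider_prob[a] if a in C else 1.0 - consider_prob[a]
        best = min(C, key=lambda a: rank[a])
        probs[best] += w
    return probs

# hypothetical blend of rationality rho and alternative-specific salience s_i
rho, salience = 0.4, {"x": 0.9, "y": 0.2, "z": 0.6}
q = {a: rho + (1 - rho) * s for a, s in salience.items()}
print(choice_probs(["x", "y", "z"], q, preference=["x", "y", "z"]))
```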
Abstract:
We introduce attention games. Alternatives ranked by quality (producers, politicians, sexual partners...) desire to be chosen and compete for the imperfect attention of a chooser by investing in their own salience. We prove that if alternatives can control the attention they get, then “the showiest is the best”: the equilibrium ordering of salience (weakly) reproduces the quality ranking and the best alternative is the one that gets picked most often. This result also holds under more general conditions. However, if those conditions fail, then even the worst alternative can be picked most often.
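A tiny numerical best-response sketch can display the “showiest is the best” logic with two alternatives, where the chooser prefers alternative 1 whenever it is considered. The attention function and payoff form below are assumptions of ours for illustration, not the paper's specification.

```python
import numpy as np

def phi(s):
    """Assumed concave attention probability as a function of salience."""
    return s / (1.0 + s)

def best_response(weight, cost, grid):
    """Maximize weight * phi(s) - cost * s over a salience grid."""
    return grid[np.argmax(weight * phi(grid) - cost * grid)]

grid = np.linspace(0.0, 10.0, 10001)
cost = 0.05
# alternative 1 is picked whenever considered; alternative 2 only when
# it is considered while 1 is overlooked
s1 = best_response(1.0, cost, grid)
s2 = best_response(1.0 - phi(s1), cost, grid)
print(s1, s2)  # s1 >= s2: the higher-quality alternative invests more salience
```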
Abstract:
Models incorporating more realistic models of customer behavior, such as customers choosing from an offer set, have recently become popular in assortment optimization and revenue management. The dynamic program for these models is intractable and is approximated by a deterministic linear program called the CDLP, which has an exponential number of columns. However, when the segment consideration sets overlap, the CDLP is difficult to solve. Column generation has been proposed, but finding an entering column has been shown to be NP-hard. In this paper we propose a new approach to solving CDLP, called SDCP, based on segments and their consideration sets. SDCP is a relaxation of CDLP and hence forms a looser upper bound on the dynamic program, but coincides with CDLP for the case of non-overlapping segments. If the number of elements in a consideration set for a segment is not very large, SDCP can be applied to any discrete-choice model of consumer behavior. We tighten the SDCP bound by (i) simulations, called the randomized concave programming (RCP) method, and (ii) adding cuts to a recent compact formulation of the problem for a latent multinomial-choice model of demand (SBLP+). This latter approach turns out to be very effective, essentially obtaining the CDLP value and excellent revenue performance in simulations, even for overlapping segments. By formulating the problem as a separation problem, we give insight into why CDLP is easy for the MNL with non-overlapping consideration sets and why generalizations of MNL pose difficulties. We perform numerical simulations to determine the revenue performance of all the methods on reference data sets in the literature.
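As a small concrete piece of the setup, a segment's purchase probabilities under MNL with a consideration set can be written directly. The sketch below (product names and attraction weights are ours) shows why a segment only "sees" the intersection of the offer set with its consideration set:

```python
def mnl_probs(offer_set, consideration, v, v0=1.0):
    """Single-segment MNL purchase probabilities: the effective choice set
    is the offered products the segment actually considers; v[j] is the
    attraction weight of product j and v0 the no-purchase weight."""
    effective = offer_set & consideration
    denom = v0 + sum(v[j] for j in effective)
    return {j: v[j] / denom for j in effective}

v = {"a": 2.0, "b": 1.0, "c": 0.5}
print(mnl_probs({"a", "b", "c"}, {"a", "b"}, v))  # "c" is offered but ignored
```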
Abstract:
The choice network revenue management model incorporates customer purchase behavior as a function of the offered products, and is the appropriate model for airline and hotel network revenue management, dynamic sales of bundles, and dynamic assortment optimization. The optimization problem is a stochastic dynamic program and is intractable. A certainty-equivalence relaxation of the dynamic program, called the choice deterministic linear program (CDLP), is usually used to generate dynamic controls. Recently, a compact linear programming formulation of this linear program was given for the multi-segment multinomial-logit (MNL) model of customer choice with non-overlapping consideration sets. Our objective is to obtain a tighter bound than this formulation while retaining the appealing properties of a compact linear programming representation. To this end, it is natural to consider the affine relaxation of the dynamic program. We first show that the affine relaxation is NP-complete even for a single-segment MNL model. Nevertheless, by analyzing the affine relaxation we derive a new compact linear program that approximates the dynamic programming value function better than CDLP, provably between the CDLP value and the affine relaxation, and often coming close to the latter in our numerical experiments. When the segment consideration sets overlap, we show that some strong equalities called product cuts developed for the CDLP remain valid for our new formulation. Finally, we perform extensive numerical comparisons on the various bounds to evaluate their performance.
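For orientation, the underlying dynamic program and its affine relaxation are usually written as follows (standard forms from this literature; the notation is ours). With remaining capacity vector x, arrival rate \lambda, purchase probabilities P_j(S), revenues r_j, and resource-usage columns A_j:

\[
V_t(x) \;=\; \max_{S} \Big\{ \sum_{j \in S} \lambda P_j(S)\,\big(r_j + V_{t+1}(x - A_j)\big) \;+\; \big(\lambda P_0(S) + 1 - \lambda\big)\, V_{t+1}(x) \Big\},
\]

and the affine relaxation restricts the value function to the form

\[
V_t(x) \;\approx\; \theta_t + \sum_i v_{t,i}\, x_i ,
\]

whose coefficients are chosen to give the tightest upper bound consistent with the Bellman inequalities.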
Abstract:
The network choice revenue management problem models customers as choosing from an offer set, and the firm decides the best subset to offer at any given moment to maximize expected revenue. The resulting dynamic program for the firm is intractable and is approximated by a deterministic linear program called the CDLP, which has an exponential number of columns. However, under the choice-set paradigm, when the segment consideration sets overlap, the CDLP is difficult to solve. Column generation has been proposed, but finding an entering column has been shown to be NP-hard. In this paper, starting with a concave program formulation based on segment-level consideration sets called SDCP, we add a class of constraints called product constraints that project onto subsets of intersections. In addition, we propose a natural direct tightening of the SDCP called ?SDCP, and compare the performance of both methods on the benchmark data sets in the literature. Both the product constraints and the ?SDCP method are very simple and easy to implement, and are applicable to the case of overlapping segment consideration sets. In our computational testing on the benchmark data sets in the literature, SDCP with product constraints achieves the CDLP value at a fraction of the CPU time taken by column generation, and we believe it is a very promising approach for quickly approximating CDLP when segment consideration sets overlap and the consideration sets themselves are relatively small.
Abstract:
The choice network revenue management (RM) model incorporates customer purchase behavior as customers purchasing products with certain probabilities that are a function of the offered assortment of products, and is the appropriate model for airline and hotel network revenue management, dynamic sales of bundles, and dynamic assortment optimization. The underlying stochastic dynamic program is intractable, and even its certainty-equivalence approximation, in the form of a linear program called the Choice Deterministic Linear Program (CDLP), is difficult to solve in most cases. The separation problem for CDLP is NP-complete for MNL with just two segments when their consideration sets overlap; the affine approximation of the dynamic program is NP-complete for even a single-segment MNL. This is in contrast to the independent-class (perfect-segmentation) case, where even the piecewise-linear approximation has been shown to be tractable. In this paper we investigate the piecewise-linear approximation for network RM under a general discrete-choice model of demand. We show that the gap between the CDLP and the piecewise-linear bounds is within a factor of at most 2. We then show that the piecewise-linear approximation is polynomial-time solvable for a fixed consideration set size, bringing it into the realm of tractability for small consideration sets; small consideration sets are a reasonable modeling tradeoff in many practical applications. Our solution relies on showing that for any discrete-choice model the separation problem for the linear program of the piecewise-linear approximation can be solved exactly by a Lagrangian relaxation. We give modeling extensions and show by numerical experiments the improvements from using piecewise-linear approximation functions.
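The piecewise-linear approximation discussed here generalizes the affine form by allowing a separate piecewise-linear value for each resource (a standard representation in this literature; the notation is ours):

\[
V_t(x) \;\approx\; \theta_t + \sum_i v_{t,i}(x_i),
\qquad
v_{t,i}(x_i) \;=\; \sum_{k=1}^{x_i} w_{t,i,k},
\]

where the w_{t,i,k} are per-unit marginal capacity values. The affine relaxation is the special case in which all marginals for a resource are equal, which is why the piecewise-linear bound is at least as tight.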
Abstract:
Asset correlations are of critical importance in quantifying portfolio credit risk and economic capital in financial institutions. Estimation of asset correlation with rating transition data has focused on the point estimation of the correlation without giving any consideration to the uncertainty around these point estimates. In this article we use Bayesian methods to estimate a dynamic factor model for default risk using rating data (McNeil et al., 2005; McNeil and Wendin, 2007). Bayesian methods allow us to formally incorporate human judgement in the estimation of asset correlation, through the prior distribution, and to fully characterize a confidence set for the correlations. Results indicate that: (i) a two-factor model, rather than the one-factor model proposed by the Basel II framework, better represents the historical default data; (ii) the importance of unobserved factors in this type of model is reinforced, and the levels of the implied asset correlations critically depend on the latent state variable used to capture the dynamics of default, as well as on other assumptions of the statistical model; (iii) the posterior distributions of the asset correlations show that the Basel recommended bounds for this parameter understate the level of systemic risk.
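As a baseline for what "asset correlation" means here, the one-factor Basel II-style model can be simulated in a few lines of Python. This is a sketch of the benchmark model only (parameter values are illustrative); the article argues that a two-factor dynamic model fits the historical data better.

```python
import numpy as np
from scipy.stats import norm

def simulate_default_rates(rho, pd, n_obligors=1000, n_years=5000, seed=0):
    """One-factor Gaussian model: obligor i defaults in a year when
    sqrt(rho)*Z + sqrt(1-rho)*eps_i falls below the threshold implied by
    the unconditional default probability pd; rho is the asset correlation."""
    rng = np.random.default_rng(seed)
    threshold = norm.ppf(pd)
    z = rng.standard_normal((n_years, 1))             # common systematic factor
    eps = rng.standard_normal((n_years, n_obligors))  # idiosyncratic shocks
    assets = np.sqrt(rho) * z + np.sqrt(1.0 - rho) * eps
    return (assets < threshold).mean(axis=1)          # realized yearly default rates

rates = simulate_default_rates(rho=0.15, pd=0.02)
print(rates.mean(), rates.std())  # larger rho fattens the default-rate tails
```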
Abstract:
This paper presents a novel approach to the automatic classification of very large data sets composed of terahertz pulse transient signals, highlighting their potential use in biochemical, biomedical, pharmaceutical and security applications. Two different types of THz spectra are considered in the classification process. Firstly, a binary classification study of poly-A and poly-C ribonucleic acid samples is performed. This is then contrasted with a difficult multi-class classification problem of spectra from six different powder samples which, although they have fairly indistinguishable features in the optical spectrum, possess a few discernible spectral features in the terahertz part of the spectrum. Classification is performed using a complex-valued extreme learning machine algorithm that takes into account features in both the amplitude as well as the phase of the recorded spectra. Classification speed and accuracy are contrasted with those achieved using a support vector machine classifier. The study systematically compares the classifier performance achieved after adopting different Gaussian kernels when separating amplitude and phase signatures. The two signatures are presented as feature vectors for both training and testing purposes. The study confirms the utility of complex-valued extreme learning machine algorithms for classification of the very large data sets generated with current terahertz imaging spectrometers. The classifier can take into consideration heterogeneous layers within an object, as would be required within a tomographic setting, and is sufficiently robust to detect patterns hidden inside noisy terahertz data sets. The proposed study opens up the opportunity for the establishment of complex-valued extreme learning machine algorithms as new chemometric tools that will assist the wider proliferation of terahertz sensing technology for chemical sensing, quality control, security screening and clinical diagnosis. Furthermore, the proposed algorithm should also be very useful in other applications requiring the classification of very large datasets.
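For readers unfamiliar with extreme learning machines, the core algorithm fits in a few lines: the hidden layer is random and fixed, and only the output weights are solved for by least squares. The sketch below is the basic real-valued version with toy data of our own making; the complex-valued variant used in the paper works on complex inputs and weights, where np.linalg.pinv still applies since it uses the conjugate transpose.

```python
import numpy as np

def train_elm(X, T, n_hidden=200, seed=0):
    """Random hidden layer; output weights beta fitted by pseudo-inverse."""
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((X.shape[1], n_hidden))
    b = rng.standard_normal(n_hidden)
    H = np.tanh(X @ W + b)           # hidden activations, never trained
    beta = np.linalg.pinv(H) @ T     # least-squares fit of the output layer
    return W, b, beta

def predict_elm(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

# toy two-class problem with one-hot targets
rng = np.random.default_rng(1)
X = np.vstack([rng.standard_normal((50, 4)) + 1, rng.standard_normal((50, 4)) - 1])
T = np.vstack([np.tile([1.0, 0.0], (50, 1)), np.tile([0.0, 1.0], (50, 1))])
W, b, beta = train_elm(X, T)
print((predict_elm(X, W, b, beta).argmax(1) == T.argmax(1)).mean())  # training accuracy
```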
Abstract:
EWT solar cells start from drilled wafers with approximately 100 holes/cm². These holes act as stress concentrators, leading to a reduction in the mechanical strength of this type of wafer. The viability of cells with a higher density of holes has been studied. To this end, sets of wafers with different densities of holes have been characterized. The ring-on-ring test has been employed, and FE models have been developed to simulate the test. The statistical evaluation permits conclusions to be drawn about the reduction of the strength depending on the density of holes. Moreover, the stress concentration around the holes has been studied by means of the FE method employing the sub-modeling technique. The maximum principal stress of EWT wafers with twice the density of holes of commercial ones is almost the same. However, the mutual interaction between the stress concentration effects around neighboring holes is only observed for wafers with a density of 200 holes/cm².
Abstract:
Issues relating to the definition of fuzzy sets are considered, including an analogue of the separation axiom, a statistical interpretation, and the representation of the membership function by conditional probabilities.
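One common way to make the probability link precise (the abstract does not spell out its exact construction, so this is only the standard reading):

\[
\mu_A(x) \;=\; P\big(A \mid X = x\big),
\]

that is, the degree of membership of x in the fuzzy set A is interpreted as the conditional probability that x is classified as belonging to A.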
Abstract:
The classification of minimal sets is a central theme in abstract topological dynamics. Recently this work has been strengthened and extended by consideration of homomorphisms. Background material is presented in Chapter I. Given a flow on a compact Hausdorff space, the action extends naturally to the space of closed subsets, taken with the Hausdorff topology. These hyperspaces are discussed and used to give a new characterization of almost periodic homomorphisms. Regular minimal sets may be described as minimal subsets of enveloping semigroups. Regular homomorphisms are defined in Chapter II by extending this notion to homomorphisms with minimal range. Several characterizations are obtained. In Chapter III, some additional results on homomorphisms are obtained by relativizing enveloping semigroup notions. In Veech's paper on point distal flows, hyperspaces are used to associate an almost one-to-one homomorphism with a given homomorphism of metric minimal sets. In Chapter IV, a non-metric generalization of this construction is studied in detail using the new notion of a highly proximal homomorphism. An abstract characterization is obtained, involving only the abstract properties of homomorphisms. A strengthened version of the Veech Structure Theorem for point distal flows is proved. In Chapter V, the work in the earlier chapters is applied to the study of homomorphisms for which the almost periodic elements of the associated hyperspace are all finite. In the metric case, this is equivalent to having at least one fiber finite. Strong results are obtained by first assuming regularity, and then assuming that the relative proximal relation is closed as well.
Abstract:
The diagnosis of intraductal carcinoma (IDC) of the prostate remains subjective because 3 sets of diagnostic criteria are in use. An internet survey was compiled from 38 photomicrographs showing duct proliferations: 14 signed out as high-grade prostatic intraepithelial neoplasia (HGPIN), 17 IDC, and 7 invasive cribriform/ductal carcinoma. Each image was assessed for the presence of 9 histologic criteria ascribed to IDC. Thirty-nine respondents were asked to rate images as (1) benign/reactive, (2) HGPIN, (3) borderline between HGPIN and IDC, (4) IDC, or (5) invasive cribriform/ductal carcinoma. The intraclass correlation coefficient was 0.68. There was 70% overall agreement with HGPIN, 43% with IDC, and 73% with invasive carcinoma (P < .001, χ²). Respondents considered 19 (50%) of 38 cases as IDC candidates, of which 5 (26%) had a two-thirds consensus for IDC; two-thirds consensus for either borderline or IDC was reached in 9 (47%). Two-thirds consensus other than IDC was reached in the remaining 19 of 38 cases, with 15 supporting HGPIN and 4 supporting invasive carcinoma. Findings that differed across diagnostic categories were lumen-spanning neoplastic cells (P < .001), 2× benign duct diameters (P < .001), duct space contours (round, irregular, and branched) (P < .001), papillary growth (P = .048), dense cribriform or solid growth (both P = .023), and comedonecrosis (P = .015). When the 19 of 38 images that attained consensus for HGPIN or invasive carcinoma were removed from consideration, lack of IDC consensus was most often attributable to only loose cribriform growth (5/19), central nuclear maturation (5/19), or comedonecrosis (3/19). Of the 9 histologic criteria, only 1 retained significant correlation with a consensus diagnosis of IDC: the presence of solid areas (P = .038). One case that attained IDC consensus had less than 2× duct enlargement yet still had severe nuclear atypia and nucleomegaly. Sixfold nuclear enlargement was not significant (P = .083), although no image had both 6× nuclei and papillary or loose cribriform growth: a combination postulated as sufficient criteria for IDC. Finally, 20.5% of respondents agreed that an isolated diagnosis of IDC on needle biopsy warrants definitive therapy, 20.5% disagreed, and 59.0% considered the decision to depend upon clinicopathologic variables. Although IDC diagnosis remains challenging, we propose these criteria: a lumen-spanning proliferation of neoplastic cells in preexisting ducts with a dense cribriform or partial solid growth pattern. Solid growth, in any part of the duct space, emerges as the most reproducible finding to rule in a diagnosis of IDC. Comedonecrosis is a rarer finding, but in most cases, it should rule in IDC. Duct space enlargement to greater than 2× the diameter of the largest, adjacent benign spaces is usually present in IDC, although there may be rare exceptions.