895 results for "Anchoring heuristic"
Abstract:
Our objective was to study whether “compensatory” models provide better descriptions of clinical judgment than fast and frugal models, according to expertise and experience. Fifty practitioners appraised 60 vignettes describing a child with an exacerbation of asthma and rated their propensities to admit the child. Linear logistic (LL) models of their judgments were compared with a matching heuristic (MH) model that searched available cues in order of importance for a critical value indicating an admission decision. There was a small difference between the two models in the proportion of patients allocated correctly (admit or not-admit decisions): 91.2% and 87.8%, respectively. The proportion allocated correctly by the LL model was lower for consultants than for juniors, whereas the MH model performed equally well for both. In this vignette study, neither model provided a better description of judgments made by consultants or by pediatricians compared to other grades and specialties.
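The matching heuristic described above can be sketched in a few lines: cues are inspected in order of importance, and the first cue found at its critical value triggers an admit decision. The cue names below are hypothetical illustrations, not the study's actual cue set.

```python
def matching_heuristic(case, cue_order, critical_values):
    """Fast-and-frugal matching heuristic: inspect cues in order of
    importance; the first cue found at its critical value triggers admission."""
    for cue in cue_order:
        if case.get(cue) == critical_values[cue]:
            return "admit"
    return "not-admit"

# Hypothetical cues for an asthma-exacerbation vignette (not the study's set).
order = ["low_oxygen_saturation", "severe_wheeze", "poor_bronchodilator_response"]
critical = {cue: True for cue in order}
print(matching_heuristic({"low_oxygen_saturation": True}, order, critical))  # admit
```

Because the heuristic stops at the first matching cue, it is non-compensatory: no combination of reassuring later cues can override an alarming earlier one, in contrast to the linear logistic model.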
Abstract:
Feature selection and feature weighting are useful techniques for improving the classification accuracy of the K-nearest-neighbor (K-NN) rule. The term feature selection refers to algorithms that select the best subset of the input feature set. In feature weighting, each feature is multiplied by a weight value proportional to the ability of the feature to distinguish pattern classes. In this paper, a novel hybrid approach is proposed for simultaneous feature selection and feature weighting of the K-NN rule based on the Tabu Search (TS) heuristic. The proposed TS heuristic in combination with the K-NN classifier is compared with several classifiers on various available data sets. The results indicate a significant improvement in classification accuracy. The proposed TS heuristic is also compared with various feature selection algorithms. Experiments revealed that the proposed hybrid TS heuristic is superior to both simple TS and sequential search algorithms. We also present results for the classification of prostate cancer using multispectral images, an important problem in biomedicine.
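A minimal sketch of the feature-weighted K-NN rule being tuned: each feature's squared difference is scaled by its weight, so a zero weight amounts to deselecting that feature. The Tabu Search, which would optimise the weight vector, is omitted, and the toy data are invented for illustration.

```python
import math
from collections import Counter

def weighted_knn(train, labels, weights, query, k=3):
    """Classify `query` by majority vote among the k nearest training points
    under a feature-weighted Euclidean distance."""
    def dist(x):
        return math.sqrt(sum(w * (a - b) ** 2 for w, a, b in zip(weights, x, query)))
    nearest = sorted(range(len(train)), key=lambda i: dist(train[i]))[:k]
    return Counter(labels[i] for i in nearest).most_common(1)[0][0]

# Invented toy data: two well-separated clusters.
X = [(0, 0), (1, 0), (0, 1), (5, 5), (6, 5), (5, 6)]
y = ["benign", "benign", "benign", "malignant", "malignant", "malignant"]
print(weighted_knn(X, y, weights=(1.0, 1.0), query=(4, 4)))  # malignant
```

A wrapper search (Tabu Search in the paper) would evaluate candidate `weights` vectors by cross-validated accuracy of this classifier and keep the best.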
Abstract:
This paper introduces a recursive rule base adjustment to enhance the performance of fuzzy logic controllers. Here the fuzzy controller is constructed on the basis of a decision table (DT), relying on membership functions and fuzzy rules that incorporate heuristic knowledge and operator experience. If the controller performance is not satisfactory, it has previously been suggested that the rule base be altered by combined tuning of membership functions and controller scaling factors. The alternative approach proposed here entails alteration of the fuzzy rule base. The recursive rule base adjustment algorithm proposed in this paper has the benefit that it is computationally more efficient for generating a DT, an advantage for online realization. Simulation results are presented in support of this approach. (c) 2005 Elsevier B.V. All rights reserved.
Abstract:
In this paper, we present a random iterative graph based hyper-heuristic to produce a collection of heuristic sequences to construct solutions of different quality. These heuristic sequences can be seen as dynamic hybridisations of different graph colouring heuristics that construct solutions step by step. Based on these sequences, we statistically analyse the way in which graph colouring heuristics are automatically hybridised. This, to our knowledge, represents a new direction in hyper-heuristic research. It is observed that spending the search effort on hybridising Largest Weighted Degree with Saturation Degree at the early stage of solution construction tends to generate high quality solutions. Based on these observations, an iterative hybrid approach is developed to adaptively hybridise these two graph colouring heuristics at different stages of solution construction. The overall aim here is to automate the heuristic design process, which draws upon an emerging research theme on developing computer methods to design and adapt heuristics automatically. Experimental results on benchmark exam timetabling and graph colouring problems demonstrate the effectiveness and generality of this adaptive hybrid approach compared with previous methods for automatically generating and adapting heuristics. Indeed, we also show that the approach is competitive with state-of-the-art, human-produced methods.
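The two-stage hybridisation described — Largest Weighted Degree early in construction, Saturation Degree afterwards — can be illustrated with a toy greedy colouring. The graph, weights, and switch point below are illustrative assumptions, not the paper's benchmark setup.

```python
def hybrid_colouring(adj, weights, switch):
    """Greedy colouring that orders the first `switch` vertices by largest
    weighted degree and the rest by saturation degree (ties by degree),
    mimicking the two-stage hybridisation of the two heuristics."""
    n = len(adj)
    colours = {}
    def weighted_degree(v):
        return sum(weights[u] for u in adj[v])
    def saturation(v):
        # Number of distinct colours already used by v's neighbours.
        return len({colours[u] for u in adj[v] if u in colours})
    for step in range(n):
        uncoloured = [v for v in range(n) if v not in colours]
        if step < switch:
            v = max(uncoloured, key=weighted_degree)
        else:
            v = max(uncoloured, key=lambda u: (saturation(u), len(adj[u])))
        used = {colours[u] for u in adj[v] if u in colours}
        colours[v] = min(c for c in range(n) if c not in used)
    return colours

adj = [[1, 2], [0, 2], [0, 1]]  # a triangle: needs 3 colours
colours = hybrid_colouring(adj, weights=[1, 1, 1], switch=1)
```

In exam timetabling, vertices are exams, edges are student conflicts, and colours are timeslots; the hyper-heuristic's job is to choose the switch point and heuristic sequence automatically rather than fixing them as here.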
Abstract:
Use of the Dempster-Shafer (D-S) theory of evidence to deal with uncertainty in knowledge-based systems has been widely addressed. Several AI implementations have been undertaken based on the D-S theory of evidence or the extended theory. But the representation of uncertain relationships between evidence and hypothesis groups (heuristic knowledge) is still a major problem. This paper presents an approach to representing such knowledge, in which Yen’s probabilistic multi-set mappings have been extended to evidential mappings, and Shafer’s partition technique is used to get the mass function in a complex evidence space. Then, a new graphic method for describing the knowledge is introduced which is an extension of the graphic model by Lowrance et al. Finally, an extended framework for evidential reasoning systems is specified.
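For context, Dempster's rule of combination, which underlies the mass-function machinery the abstract builds on, can be sketched as follows. The two mass functions are invented examples over a frame {x, y}, and the sketch assumes the combined evidence is not in total conflict.

```python
def dempster_combine(m1, m2):
    """Dempster's rule: combine two mass functions whose focal elements are
    frozensets, renormalising away the conflicting mass (assumes the
    combined evidence is not in total conflict)."""
    combined, conflict = {}, 0.0
    for a, wa in m1.items():
        for b, wb in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + wa * wb
            else:
                conflict += wa * wb  # mass assigned to the empty set
    return {s: w / (1.0 - conflict) for s, w in combined.items()}

# Invented example over the frame {x, y}.
m1 = {frozenset({"x"}): 0.6, frozenset({"x", "y"}): 0.4}
m2 = {frozenset({"x"}): 0.5, frozenset({"x", "y"}): 0.5}
combined = dempster_combine(m1, m2)
```

The evidential mappings discussed in the paper extend this picture by letting a mass function on an evidence space induce, through uncertain rules, a mass function on a hypothesis space.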
Abstract:
Nurse rostering is a difficult search problem with many constraints. In the literature, a number of approaches have been investigated including penalty function methods to tackle these constraints within genetic algorithm frameworks. In this paper, we investigate an extension of a previously proposed stochastic ranking method, which has demonstrated superior performance to other constraint handling techniques when tested against a set of constrained optimisation benchmark problems. An initial experiment on nurse rostering problems demonstrates that the stochastic ranking method is better in finding feasible solutions but fails to obtain good results with regard to the objective function. To improve the performance of the algorithm, we hybridise it with a recently proposed simulated annealing hyper-heuristic within a local search and genetic algorithm framework. The hybrid algorithm shows significant improvement over both the genetic algorithm with stochastic ranking and the simulated annealing hyper-heuristic alone. The hybrid algorithm also considerably outperforms the methods in the literature which have the previously best known results.
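The stochastic ranking procedure being extended (due to Runarsson and Yao) can be sketched as a bubble-sort-like pass: adjacent individuals are compared by objective value alone when both are feasible, or otherwise with probability pf, and by constraint violation the rest of the time. This is a generic sketch of the base method, not the paper's hybrid algorithm.

```python
import random

def stochastic_rank(objective, violation, pf=0.45, seed=None):
    """Stochastic ranking (Runarsson & Yao): rank indices best-first for
    minimisation, balancing objective value against constraint violation
    via the comparison probability pf."""
    rng = random.Random(seed)
    n = len(objective)
    idx = list(range(n))
    for _ in range(n):
        swapped = False
        for j in range(n - 1):
            a, b = idx[j], idx[j + 1]
            if (violation[a] == 0 and violation[b] == 0) or rng.random() < pf:
                if objective[a] > objective[b]:  # compare by objective
                    idx[j], idx[j + 1] = b, a
                    swapped = True
            elif violation[a] > violation[b]:    # compare by violation
                idx[j], idx[j + 1] = b, a
                swapped = True
        if not swapped:
            break
    return idx
```

With pf = 0 infeasible individuals are ranked purely by violation, which drives the search toward feasibility; raising pf lets objective quality increasingly influence the ranking even among infeasible solutions.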
Abstract:
In two experiments we tested the prediction derived from Tversky and Kahneman's (1983) work on the causal conjunction fallacy that the strength of the causal connection between constituent events directly affects the magnitude of the causal conjunction fallacy. We also explored whether any effects of perceived causal strength were due to graded output from heuristic Type 1 reasoning processes or the result of analytic Type 2 reasoning processes. As predicted, Experiment 1 demonstrated that fallacy rates were higher for strongly than for weakly related conjunctions. Weakly related conjunctions in turn attracted higher rates of fallacious responding than did unrelated conjunctions. Experiment 2 showed that a concurrent memory load increased rates of fallacious responding for strongly related but not for weakly related conjunctions. We interpret these results as showing that manipulations of the strength of the perceived causal relationship between the conjuncts result in graded output from heuristic reasoning processes, and that additional mental resources are required to suppress strong heuristic output.
Abstract:
This research is set in the context of today’s societies, in which the corporate visual symbology of a business, corporation or institution constitutes an essential way to transmit its corporate image. Traditional discursive procedures can be discovered in the development of these signs. The rhetorical strategies developed by the great classical authors appear in the logo-symbols expressing the corporate values of today’s companies. Thus, rhetoric is emerging once again in the sense it had many centuries ago: A repertory of rules that, paradoxically, standardizes the deviations of language and whose control is synonymous with power. The main objective of this study is to substantiate the rhetorical construction of logos using as a model of analysis the classical process of creating discourse. This involves understanding logos as persuasive discourses addressed to a modern audience. Our findings show that the rhetorical paradigm can be considered as a creative model for the construction of an original logo consistent with a company’s image.
Abstract:
We characterize the structural transitions in an initially homeotropic bent-rod nematic liquid crystal excited by ac fields of frequency f well above the dielectric inversion point f_i. From the measured principal dielectric constants and electrical conductivities of the compound, the Carr-Helfrich conduction regime is anticipated to extend into the sub-megahertz region. Periodic patterned states occur through secondary bifurcations from the Freedericksz distorted state. An anchoring transition between the bend Freedericksz (BF) and degenerate planar (DP) states is detected. The BF state is metastable well above the Freedericksz threshold and gives way to the DP state, which persists in the field-off condition for several hours. Numerous +1 and -1 umbilics form at the onset of BF distortion, the former being largely of the chiral type. They survive in the DP configuration as linear defects, nonsingular in the core. In the BF regime, not far from f_i, periodic Williams-like domains form around the umbilics; they drift along the director easy axis right from their onset. With increasing f, the wave vector of the periodic domains switches from parallel to normal disposition with respect to the c vector. Well above f_i, a broadband instability is found.
Abstract:
We report on the electric-field-generated effects in the nematic phase of a twin mesogen formed of bent-core and calamitic units, aligned homeotropically in the initial ground state and examined beyond the dielectric inversion point. The bend-Freedericksz (BF) state occurring at the primary bifurcation and containing a network of umbilics is metastable; we focus here on the degenerate planar (DP) configuration that establishes itself at the expense of the BF state in the course of an anchoring transition. In the DP regime, normal rolls, broad domains, and chevrons (both defect-mediated and defect-free types) form at various linear defect-sites, in different regions of the frequency-voltage plane. A significant novel aspect common to all these patterned states is the sustained propagative instability, which does not seem explicable on the basis of known driving mechanisms.
Abstract:
We extend the contingent valuation (CV) method to test three differing conceptions of individuals' preferences as either (i) a-priori well-formed or readily divined and revealed through a single dichotomous choice question (as per the NOAA CV guidelines [K. Arrow, R. Solow, P.R. Portney, E.E. Leamer, R. Radner, H. Schuman, Report of the NOAA panel on contingent valuation, Fed. Reg. 58 (1993) 4601-4614]); (ii) learned or 'discovered' through a process of repetition and experience [J.A. List, Does market experience eliminate market anomalies? Q. J. Econ. (2003) 41-72; C.R. Plott, Rational individual behaviour in markets and social choice processes: the discovered preference hypothesis, in: K. Arrow, E. Colombatto, M. Perlman, C. Schmidt (Eds.), Rational Foundations of Economic Behaviour, Macmillan, London, St. Martin's, New York, 1996, pp. 225-250]; (iii) internally coherent but strongly influenced by some initial arbitrary anchor [D. Ariely, G. Loewenstein, D. Prelec, 'Coherent arbitrariness': stable demand curves without stable preferences, Q. J. Econ. 118(1) (2003) 73-105]. Findings reject both the first and last of these conceptions in favour of a model in which preferences converge towards standard expectations through a process of repetition and learning. In doing so, we show that such a 'learning design' CV method overturns the 'stylised facts' of bias and anchoring within the double bound dichotomous choice elicitation format. (C) 2007 Elsevier Inc. All rights reserved.
Abstract:
To improve the performance of classification using Support Vector Machines (SVMs) while reducing the model selection time, this paper introduces Differential Evolution, a heuristic method for model selection in two-class SVMs with an RBF kernel. The model selection method and related tuning algorithm are both presented. Experimental results from application to a selection of benchmark datasets for SVMs show that this method can produce an optimized classification in less time and with higher accuracy than a classical grid search. Comparison with a Particle Swarm Optimization (PSO) based alternative is also included.
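A minimal DE/rand/1/bin sketch of the kind of search involved: the `score` callback stands in for cross-validated SVM accuracy over, say, (log2 C, log2 gamma). Here a toy quadratic surrogate replaces actual SVM training, so the bounds and surrogate are illustrative assumptions rather than the paper's setup.

```python
import random

def differential_evolution(score, bounds, pop_size=12, f=0.8, cr=0.9, gens=80, seed=1):
    """DE/rand/1/bin sketch, maximising `score` over box-constrained parameters."""
    rng = random.Random(seed)
    dim = len(bounds)
    clip = lambda v, d: min(max(v, bounds[d][0]), bounds[d][1])
    pop = [[rng.uniform(*bounds[d]) for d in range(dim)] for _ in range(pop_size)]
    fit = [score(x) for x in pop]
    for _ in range(gens):
        for i in range(pop_size):
            a, b, c = rng.sample([j for j in range(pop_size) if j != i], 3)
            j_rand = rng.randrange(dim)  # guarantee at least one mutated gene
            trial = [clip(pop[a][d] + f * (pop[b][d] - pop[c][d]), d)
                     if (rng.random() < cr or d == j_rand) else pop[i][d]
                     for d in range(dim)]
            trial_fit = score(trial)
            if trial_fit >= fit[i]:  # greedy one-to-one replacement
                pop[i], fit[i] = trial, trial_fit
    best = max(range(pop_size), key=lambda i: fit[i])
    return pop[best], fit[best]

# Toy surrogate for cross-validated accuracy, peaked at (log2 C, log2 gamma) = (1, -3).
surrogate = lambda x: -(x[0] - 1.0) ** 2 - (x[1] + 3.0) ** 2
best, best_fit = differential_evolution(surrogate, [(-5.0, 15.0), (-15.0, 3.0)])
```

Unlike a grid search, each DE generation concentrates trial points around the better-scoring region, which is why it tends to need far fewer SVM trainings to reach a comparable (C, gamma) setting.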
Abstract:
Motivation: We study a stochastic method for approximating the set of local minima in partial RNA folding landscapes associated with a bounded-distance neighbourhood of folding conformations. The conformations are limited to RNA secondary structures without pseudoknots. The method aims at exploring partial energy landscapes pL induced by folding simulations and their underlying neighbourhood relations. It combines an approximation of the number of local optima devised by Garnier and Kallel (2002) with a run-time estimation for identifying sets of local optima established by Reeves and Eremeev (2004).
Results: The method is tested on nine sequences of length between 50 nt and 400 nt, which allows us to compare the results with data generated by RNAsubopt and subsequent barrier tree calculations. On the nine sequences, the method captures on average 92% of local minima with settings designed for a target of 95%. The run-time of the heuristic can be estimated by O(n² D ν ln ν), where n is the sequence length, ν is the number of local minima in the partial landscape pL under consideration, and D is the maximum number of steepest descent steps in attraction basins associated with pL.
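The steepest-descent sampling that underlies such local-minima estimates can be illustrated on a toy integer landscape; the actual method works on RNA secondary structures and adds the Garnier-Kallel coverage estimate, which is omitted here.

```python
def sample_local_minima(neighbours, energy, starts):
    """Run a steepest-descent walk from each start and collect the distinct
    local minima reached."""
    minima = set()
    for s in starts:
        x = s
        while True:
            nbrs = neighbours(x)
            best = min(nbrs, key=energy) if nbrs else x
            if energy(best) < energy(x):
                x = best       # strictly downhill: take the steepest step
            else:
                break          # no improving neighbour: x is a local minimum
        minima.add(x)
    return minima

# Toy 1-D integer landscape with local minima at multiples of 5.
energy = lambda x: x % 5
neighbours = lambda x: [y for y in (x - 1, x + 1) if 0 <= y <= 20]
minima = sample_local_minima(neighbours, energy, range(21))
```

In the RNA setting, states are secondary structures, neighbours are single base-pair edits within the bounded-distance neighbourhood, and D bounds the length of each descent, which is where the D factor in the run-time estimate comes from.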
Abstract:
Apparent reversals in rotating trapezia have been regarded as evidence that human vision favours methods which are heuristic or form dependent. However, the argument is based on the assumption that general algorithmic methods would avoid the illusion, and that has never been clear. A general algorithm for interpreting moving parallels has been developed to address the issue. It handles a considerable range of stimuli successfully, but finds multiple interpretations in situations which correspond closely to those where apparent reversals occur. This strengthens the hypothesis that apparent reversals may occur when general algorithmic methods fail and heuristics are invoked as a stopgap.