530 results for Judgmental heuristics
Abstract:
Previous covering models for emergency service consider all calls to be of the same importance and impose the same waiting time constraints independently of the service's priority. This type of constraint is clearly inappropriate in many contexts. For example, in urban medical emergency services, calls that involve danger to human life deserve higher priority than calls about more routine incidents. A realistic model in such a context should allow the calls for service to be prioritized. In this paper a covering model which considers different priority levels is formulated and solved. The model inherits its formulation from previous research on Maximum Coverage Models and incorporates results from Queuing Theory, in particular Priority Queuing. The additional complexity incorporated into the model justifies the use of a heuristic procedure.
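The abstract does not reproduce the specific queuing results used, but a standard result of the kind it refers to is Cobham's formula for the mean waiting time of each priority class in a non-preemptive M/G/1 priority queue; a sketch follows purely for illustration (the notation is ours, not the paper's).

```latex
% Mean waiting time of priority class k (1 = highest) in a non-preemptive
% M/G/1 priority queue (Cobham's formula); an illustration only, since the
% paper's exact queuing results are not given in the abstract.
\[
  W_0 = \sum_{i=1}^{K} \frac{\lambda_i \, \mathbb{E}[S_i^2]}{2}, \qquad
  \sigma_k = \sum_{i=1}^{k} \rho_i = \sum_{i=1}^{k} \lambda_i \, \mathbb{E}[S_i], \qquad \sigma_0 = 0,
\]
\[
  W_k = \frac{W_0}{(1 - \sigma_{k-1})(1 - \sigma_k)} .
\]
```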
Abstract:
Several studies have reported high performance of simple decision heuristics in multi-attribute decision making. In this paper, we focus on situations where attributes are binary and analyze the performance of Deterministic-Elimination-By-Aspects (DEBA) and similar decision heuristics. We consider non-increasing weights and two probabilistic models for the attribute values: one where attribute values are independent Bernoulli random variables; the other where they are binary random variables with inter-attribute positive correlations. Using these models, we show that the good performance of DEBA is explained by the presence of cumulative as opposed to simple dominance. We therefore introduce the concepts of cumulative dominance compliance and fully cumulative dominance compliance and show that DEBA satisfies both properties. We derive a lower bound on the probability with which cumulative dominance compliant heuristics will choose a best alternative and show that, even with many attributes, this bound is not small. We also derive an upper bound for the expected loss of fully cumulative dominance compliant heuristics and show that this is moderate even when the number of attributes is large. Both bounds are independent of the values of the weights.
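DEBA itself is not spelled out in the abstract; the sketch below shows the usual elimination-by-aspects step on binary attributes, assuming the attributes are already ordered from most to least important (the function name and example data are illustrative):

```python
# A minimal sketch of Deterministic Elimination-By-Aspects (DEBA) on binary
# attributes. Alternatives are rows of 0/1 values; columns are assumed to be
# ordered from most to least important (non-increasing weights).
def deba(alternatives):
    remaining = list(range(len(alternatives)))
    n_attrs = len(alternatives[0])
    for j in range(n_attrs):
        # Keep only alternatives that possess attribute j, unless none does.
        with_aspect = [i for i in remaining if alternatives[i][j] == 1]
        if with_aspect:
            remaining = with_aspect
        if len(remaining) == 1:
            break
    return remaining  # one survivor, or several tied alternatives

if __name__ == "__main__":
    # Three alternatives described on four binary attributes.
    alts = [[1, 0, 1, 0],
            [1, 1, 0, 0],
            [0, 1, 1, 1]]
    print(deba(alts))  # -> [1]: alternative 1 survives attributes 0 and 1
```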
Abstract:
INTRODUCTION: A clinical decision rule to improve the accuracy of a diagnosis of influenza could help clinicians avoid unnecessary use of diagnostic tests and treatments. Our objective was to develop and validate a simple clinical decision rule for diagnosis of influenza. METHODS: We combined data from 2 studies of influenza diagnosis in adult outpatients with suspected influenza: one set in California and one in Switzerland. Patients in both studies underwent a structured history and physical examination and had a reference standard test for influenza (polymerase chain reaction or culture). We randomly divided the dataset into derivation and validation groups and then evaluated simple heuristics and decision rules from previous studies and 3 rules based on our own multivariate analysis. Cutpoints for stratification of risk groups in each model were determined using the derivation group before evaluating them in the validation group. For each decision rule, the positive predictive value and likelihood ratio for influenza in low-, moderate-, and high-risk groups, and the percentage of patients allocated to each risk group, were reported. RESULTS: The simple heuristics (fever and cough; fever, cough, and acute onset) were helpful when positive but not when negative. The most useful and accurate clinical rule assigned 2 points for fever plus cough, 2 points for myalgias, and 1 point each for duration <48 hours and chills or sweats. The risk of influenza was 8% for 0 to 2 points, 30% for 3 points, and 59% for 4 to 6 points; the rule performed similarly in derivation and validation groups. Approximately two-thirds of patients fell into the low- or high-risk group and would not require further diagnostic testing. CONCLUSION: A simple, valid clinical rule can be used to guide point-of-care testing and empiric therapy for patients with suspected influenza.
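As a worked illustration of the scoring rule summarized above, here is a minimal sketch; the function names and the phrasing of the risk bands are ours, and the numbers simply restate the abstract:

```python
# A minimal sketch of the point score described above: 2 points for fever plus
# cough, 2 for myalgias, 1 each for onset < 48 hours and for chills or sweats.
# Risk bands follow the abstract (8% for 0-2, 30% for 3, 59% for 4-6 points).
def flu_score(fever_and_cough, myalgias, onset_under_48h, chills_or_sweats):
    score = 0
    score += 2 if fever_and_cough else 0
    score += 2 if myalgias else 0
    score += 1 if onset_under_48h else 0
    score += 1 if chills_or_sweats else 0
    return score

def risk_group(score):
    if score <= 2:
        return "low (about 8% risk of influenza)"
    if score == 3:
        return "moderate (about 30% risk)"
    return "high (about 59% risk)"

if __name__ == "__main__":
    s = flu_score(fever_and_cough=True, myalgias=True,
                  onset_under_48h=False, chills_or_sweats=True)
    print(s, risk_group(s))  # 5 -> high-risk group
```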
Abstract:
It is well accepted that people resist evidence that contradicts their beliefs. Moreover, despite their training, many scientists reject results that are inconsistent with their theories. This phenomenon is discussed in relation to the field of judgment and decision making by describing four case studies. These concern findings that clinical judgment is less predictive than actuarial models; that simple methods have proven superior to more theoretically correct methods in time series forecasting; that equal weighting of variables is often more accurate than using differential weights; and that decisions can sometimes be improved by discarding relevant information. All findings relate to the apparently difficult-to-accept idea that simple models can predict complex phenomena better than complex ones. It is true that there is a scientific marketplace for ideas. However, like its economic counterpart, it is subject to inefficiencies (e.g., thinness, asymmetric information, and speculative bubbles). Unfortunately, the market is only correct in the long run. The road to enlightenment is bumpy.
Abstract:
The Network Revenue Management problem can be formulated as a stochastic dynamic programming problem (DP, with "optimal" solution value V*) whose exact solution is computationally intractable. Consequently, a number of heuristics have been proposed in the literature, the most popular of which are the deterministic linear programming (DLP) model and a simulation-based method, the randomized linear programming (RLP) model. Both methods give upper bounds on the optimal solution value (the DLP and PHLP bounds, respectively). These bounds are used to provide control values that can be used in practice to make accept/deny decisions for booking requests. Recently, Adelman [1] and Topaloglu [18] have proposed alternate upper bounds, the affine relaxation (AR) bound and the Lagrangian relaxation (LR) bound respectively, and showed that their bounds are tighter than the DLP bound. Tight bounds are of great interest, as it appears from empirical studies and practical experience that models that give tighter bounds also lead to better controls (better in the sense that they lead to more revenue). In this paper we give tightened versions of three bounds, calling them sAR (strong Affine Relaxation), sLR (strong Lagrangian Relaxation) and sPHLP (strong Perfect Hindsight LP), and show relations between them. Specifically, we show that the sPHLP bound is tighter than the sLR bound, and the sAR bound is tighter than the LR bound. The techniques for deriving the sLR and sPHLP bounds can potentially be applied to other instances of weakly coupled dynamic programming.
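The abstract takes the DLP bound as known; for reference, a standard statement of that model is sketched below, with notation assumed rather than taken from the paper (fares f_j, expected demands E[D_j], resource-incidence matrix A, remaining capacities c):

```latex
% Deterministic LP (DLP) bound for network revenue management, in its standard
% form; notation is assumed, not taken from the paper. Its optimal value is an
% upper bound on the optimal DP value V*.
\[
  V^{\mathrm{DLP}} \;=\; \max_{y}\ \sum_{j} f_j\, y_j
  \quad \text{s.t.} \quad A y \le c, \qquad 0 \le y_j \le \mathbb{E}[D_j],
\]
\[
  V^{*} \;\le\; V^{\mathrm{DLP}} .
\]
```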
Abstract:
The paper explores the consequences that relying on different behavioral assumptions in training managers may have on their future performance. We argue that training with an emphasis on the standard assumptions used in economics (rationality and self-interest) is good for technical posts but may also lead future managers to rely excessively on rational and explicit safeguarding, crowding out instinctive relational heuristics and signaling a bad human type to potential partners. In contrast, the assumptions about human behavior used in management theories, because of their diverse, implicit and even contradictory nature, do not conflict with the innate set of cooperative tools and may provide a good training ground for such tools. We present tentative confirmatory evidence by examining how the weight given to behavioral assumptions in the core courses of the top 100 business schools influences the average salaries of their MBA graduates. Controlling for the self-selected average quality of their students and some other school characteristics, average salaries are seen to be significantly greater for schools whose core MBA courses contain a higher proportion of management courses as opposed to courses based on economics or technical disciplines.
Abstract:
This paper presents a simple Optimised Search Heuristic for the Job Shop Scheduling problem that combines a GRASP heuristic with a branch-and-bound algorithm. The proposed method is compared with similar approaches and leads to better results in terms of solution quality and computing times.
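The abstract does not detail the hybrid; the sketch below gives a generic GRASP outline of the kind such an Optimised Search Heuristic builds on, with the branch-and-bound improvement phase reduced to a placeholder and a single-machine flow-time objective used purely for illustration (all names are ours, not the authors'):

```python
import random

# Generic GRASP skeleton: greedy randomized construction followed by an
# improvement step. The improvement step is only a stub here; in the paper it
# is a branch-and-bound procedure. The objective is total flow time of a
# single-machine job sequence, chosen purely for illustration.
def total_cost(sequence):
    flow, elapsed = 0, 0
    for processing_time in sequence:
        elapsed += processing_time
        flow += elapsed
    return flow

def local_improve(sequence):
    return sequence  # placeholder for the problem-specific improvement step

def grasp(jobs, alpha=0.3, iterations=50, seed=0):
    rng = random.Random(seed)
    best = None
    for _ in range(iterations):
        remaining = sorted(jobs)
        sequence = []
        # Construction: pick each next job from a restricted candidate list
        # (RCL) of the shortest remaining processing times.
        while remaining:
            limit = remaining[0] + alpha * (remaining[-1] - remaining[0])
            rcl = [job for job in remaining if job <= limit]
            choice = rng.choice(rcl)
            sequence.append(choice)
            remaining.remove(choice)
        sequence = local_improve(sequence)
        if best is None or total_cost(sequence) < total_cost(best):
            best = sequence
    return best

if __name__ == "__main__":
    print(grasp([7, 2, 9, 4, 1, 6]))  # a low-flow-time ordering of the jobs
```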
Abstract:
We present new metaheuristics for solving real crew scheduling problems in a public transportation bus company. Since the crews of these companies are drivers, we designate the problem as the bus-driver scheduling problem. Crew scheduling problems are well known, and several mathematical programming based techniques have been proposed to solve them, in particular using the set-covering formulation. However, in practice, there is a need for improvement in terms of computational efficiency and the capacity to solve large-scale instances. Moreover, the real bus-driver scheduling problems that we consider can present variant aspects of set covering, such as a different objective function, implying that alternative solution methods have to be developed. We propose metaheuristics based on the following approaches: GRASP (greedy randomized adaptive search procedure), tabu search and genetic algorithms. These metaheuristics also present some innovative features based on the structure of the crew scheduling problem, which guide the search efficiently and enable them to find good solutions. Some of these new features can also be applied in the development of heuristics for other combinatorial optimization problems. A summary of computational results with real-data problems is presented.
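As an illustration of the greedy randomized construction that GRASP applies to a set-covering-like crew scheduling problem, here is a minimal sketch; the duty and trip data, the cost-per-newly-covered-trip scoring rule, and all names are assumptions, not the authors' implementation:

```python
import random

# Minimal greedy randomized construction for a set-covering-like crew
# scheduling problem: each candidate duty covers some trips and has a cost;
# duties are picked from a restricted candidate list ranked by cost per newly
# covered trip. Data and names are illustrative.
def randomized_cover(duties, trips, alpha=0.2, seed=0):
    rng = random.Random(seed)
    uncovered = set(trips)
    schedule = []
    while uncovered:
        # Score = cost per newly covered trip (lower is better).
        scored = []
        for name, (cost, covers) in duties.items():
            gain = len(covers & uncovered)
            if gain > 0 and name not in schedule:
                scored.append((cost / gain, name))
        if not scored:
            break  # remaining trips cannot be covered
        scored.sort()
        best, worst = scored[0][0], scored[-1][0]
        threshold = best + alpha * (worst - best)
        rcl = [name for score, name in scored if score <= threshold]
        pick = rng.choice(rcl)
        schedule.append(pick)
        uncovered -= duties[pick][1]
    return schedule

if __name__ == "__main__":
    duties = {"d1": (8, {1, 2, 3}), "d2": (5, {3, 4}),
              "d3": (6, {4, 5, 6}), "d4": (4, {1, 6})}
    print(randomized_cover(duties, trips=range(1, 7)))
```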
Abstract:
In this paper, we propose two active learning algorithms for semiautomatic definition of training samples in remote sensing image classification. Based on predefined heuristics, the classifier ranks the unlabeled pixels and automatically chooses those that are considered the most valuable for its improvement. Once the pixels have been selected, the analyst labels them manually and the process is iterated. Starting with a small and nonoptimal training set, the model itself builds the optimal set of samples which minimizes the classification error. We have applied the proposed algorithms to a variety of remote sensing data, including very high resolution and hyperspectral images, using support vector machines. Experimental results confirm the consistency of the methods. The required number of training samples can be reduced to 10% using the methods proposed, reaching the same level of accuracy as larger data sets. A comparison with a state-of-the-art active learning method, margin sampling, is provided, highlighting advantages of the methods proposed. The effect of spatial resolution and separability of the classes on the quality of the selection of pixels is also discussed.
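The paper's own selection heuristics are not spelled out in the abstract; the sketch below shows the margin-sampling baseline it is compared against, in which pool pixels closest to the SVM decision boundary are sent to the analyst for labeling. The data is synthetic and the oracle function stands in for the human analyst:

```python
import numpy as np
from sklearn.svm import SVC

# Minimal margin-sampling active learning loop with an SVM (binary case).
# Synthetic data only; names and settings are illustrative.
def margin_sampling_loop(X_labeled, y_labeled, X_pool, n_iterations=5, batch=10):
    for _ in range(n_iterations):
        clf = SVC(kernel="rbf", gamma="scale").fit(X_labeled, y_labeled)
        # Distance to the decision boundary; smallest = most uncertain pixels.
        margins = np.abs(clf.decision_function(X_pool))
        picked = np.argsort(margins)[:batch]
        # In the real workflow the analyst labels the selected pixels; here a
        # placeholder oracle stands in for the analyst.
        new_labels = oracle(X_pool[picked])
        X_labeled = np.vstack([X_labeled, X_pool[picked]])
        y_labeled = np.concatenate([y_labeled, new_labels])
        X_pool = np.delete(X_pool, picked, axis=0)
    return X_labeled, y_labeled

def oracle(X):
    # Placeholder labeling rule standing in for the human analyst.
    return (X[:, 0] + X[:, 1] > 0).astype(int)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X_pool = rng.normal(size=(500, 2))
    X0 = rng.normal(size=(20, 2))
    X_lab, y_lab = margin_sampling_loop(X0, oracle(X0), X_pool)
    print(len(y_lab), "labeled samples after active learning")
```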
Abstract:
Background: General practitioners play a central role in taking deprivation into consideration when caring for patients in primary care. Validated questions to identify deprivation in primary-care practices are still lacking. For both clinical and research purposes, this study therefore aims to develop and validate a standardized instrument measuring both material and social deprivation at an individual level. Methods: The Deprivation in Primary Care Questionnaire (DiPCare-Q) was developed using qualitative and quantitative approaches between 2008 and 2011. A systematic review identified 199 questions related to deprivation. Using judgmental item quality, these were reduced to 38 questions. Two focus groups (primary-care physicians and primary-care researchers), structured interviews (10 laymen), and think-aloud interviews (eight cleaning staff) assured face validity. Item response theory analysis was then used to derive the DiPCare-Q index using data obtained from a random sample of 200 patients, who were to complete the questionnaire a second time over the phone. For construct and criterion validity, the final 16 questions were administered to a random sample of 1,898 patients attending one of 47 different private primary-care practices in western Switzerland (validation set), along with questions on subjective social status (subjective SES ladder), education, source of income, welfare status, and subjective poverty. Results: Deprivation was defined in three distinct dimensions: material deprivation (eight items), social deprivation (five items) and health deprivation (three items). Item consistency was high in both the derivation set (KR20 = 0.827) and the validation set (KR20 = 0.778). The DiPCare-Q index was reliable (ICC = 0.847). For construct validity, we showed the DiPCare-Q index to be correlated with patients' estimation of their position on the subjective SES ladder (rs = 0.539). This position was correlated with both material and social deprivation independently, suggesting two separate mechanisms enhancing the feeling of deprivation. Conclusion: The DiPCare-Q is a rapid, reliable and validated instrument useful for measuring both material and social deprivation in primary care. Questions from the DiPCare-Q are easy to use when investigating patients' social history and could improve clinicians' ability to detect underlying social distress related to deprivation.
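The KR20 values above refer to the Kuder-Richardson Formula 20 for the internal consistency of binary items; its standard definition, given here only for reference, is:

```latex
% Kuder-Richardson Formula 20 (KR-20) for k binary items, where p_i is the
% proportion answering item i positively, q_i = 1 - p_i, and \sigma^2_X is the
% variance of the total score. Standard definition, not taken from the paper.
\[
  \mathrm{KR\text{-}20} \;=\; \frac{k}{k-1}
  \left( 1 - \frac{\sum_{i=1}^{k} p_i q_i}{\sigma^2_X} \right)
\]
```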
Abstract:
It has been repeatedly debated which strategies people rely on in inference. These debates have been difficult to resolve, partially because hypotheses about the decision processes assumed by these strategies have typically been formulated qualitatively, making it hard to test precise quantitative predictions about response times and other behavioral data. One way to increase the precision of strategies is to implement them in cognitive architectures such as ACT-R. Often, however, a given strategy can be implemented in several ways, with each implementation yielding different behavioral predictions. We present and report a study with an experimental paradigm that can help to identify the correct implementations of classic compensatory and non-compensatory strategies such as the take-the-best and tallying heuristics, and the weighted-linear model.
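As a minimal illustration of two of the classic strategies named above, the sketch below contrasts take-the-best (decide on the first discriminating cue, with cues ordered by validity) with tallying (count positive cues, ignoring validities); the cue values and function names are invented for the example:

```python
# Two objects compared on binary cues. Cues are assumed to be ordered by
# validity (most valid first) and coded so that 1 favors the object.
def take_the_best(cues_a, cues_b):
    # Non-compensatory: decide on the first cue that discriminates.
    for a, b in zip(cues_a, cues_b):
        if a != b:
            return "A" if a > b else "B"
    return "guess"

def tallying(cues_a, cues_b):
    # Compensatory with equal weights: compare the counts of positive cues.
    score_a, score_b = sum(cues_a), sum(cues_b)
    if score_a == score_b:
        return "guess"
    return "A" if score_a > score_b else "B"

if __name__ == "__main__":
    a = [1, 0, 0, 0]   # strong on the most valid cue only
    b = [0, 1, 1, 1]   # weak on the top cue, strong everywhere else
    print(take_the_best(a, b))  # -> "A": the first cue settles it
    print(tallying(a, b))       # -> "B": three positive cues beat one
```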
Abstract:
We introduce a width parameter that bounds the complexity of classical planning problems and domains, along with a simple but effective blind-search procedure that runs in time that is exponential in the problem width. We show that many benchmark domains have a bounded and small width provided that goals are restricted to single atoms, and hence that such problems are provably solvable in low polynomial time. We then focus on the practical value of these ideas over the existing benchmarks, which feature conjunctive goals. We show that the blind-search procedure can be used for both serializing the goal into subgoals and for solving the resulting problems, resulting in a ‘blind’ planner that competes well with a best-first search planner guided by state-of-the-art heuristics. In addition, ideas like helpful actions and landmarks can be integrated as well, producing a planner with state-of-the-art performance.
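The blind-search procedure is not spelled out in the abstract; the sketch below shows the width-1 idea it describes, a breadth-first search that prunes every generated state which does not make some atom true for the first time in the search. The state encoding and the toy domain are ours:

```python
from collections import deque

# Width-1 blind search (novelty pruning): states are sets of ground atoms, and
# a newly generated state is kept only if it makes some atom true for the
# first time. Illustrative encoding, not the authors' implementation.
def iw1(initial_state, goal_atoms, successors):
    seen_atoms = set(initial_state)
    frontier = deque([(frozenset(initial_state), [])])
    while frontier:
        state, plan = frontier.popleft()
        if goal_atoms <= state:
            return plan
        for action, next_state in successors(state):
            new_atoms = set(next_state) - seen_atoms
            if not new_atoms:
                continue  # prune: the state has novelty greater than 1
            seen_atoms |= new_atoms
            frontier.append((frozenset(next_state), plan + [action]))
    return None  # no plan found within width 1 for this goal

if __name__ == "__main__":
    # Toy domain: move right along positions 0..4; goal is to reach position 4.
    def successors(state):
        pos = next(int(a.split("-")[1]) for a in state if a.startswith("at-"))
        if pos < 4:
            yield (f"move-{pos}-{pos + 1}", {f"at-{pos + 1}"})

    print(iw1({"at-0"}, {"at-4"}, successors))
```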
Abstract:
Due to the advances in sensor networks and remote sensing technologies, the acquisition and storage rates of meteorological and climatological data increase every day and call for novel and efficient processing algorithms. A fundamental problem of data analysis and modeling is the spatial prediction of meteorological variables in complex orography, which serves, among other purposes, for extended climatological analyses, for the assimilation of data into numerical weather prediction models, for preparing inputs to hydrological models, and for real-time monitoring and short-term forecasting of weather. In this thesis, a new framework for spatial estimation is proposed by taking advantage of a class of algorithms emerging from statistical learning theory. Nonparametric kernel-based methods for nonlinear data classification, regression and target detection, known as support vector machines (SVM), are adapted for mapping of meteorological variables in complex orography. With the advent of high resolution digital elevation models, the field of spatial prediction met new horizons. In fact, by exploiting image processing tools along with physical heuristics, a large number of terrain features which account for the topographic conditions at multiple spatial scales can be extracted. Such features are highly relevant for the mapping of meteorological variables because they control a considerable part of the spatial variability of meteorological fields in the complex Alpine orography. For instance, patterns of orographic rainfall, wind speed and cold air pools are known to be correlated with particular terrain forms, e.g. convex/concave surfaces and upwind sides of mountain slopes. Kernel-based methods are employed to learn the nonlinear statistical dependence which links the multidimensional space of geographical and topographic explanatory variables to the variable of interest, that is, the wind speed as measured at the weather stations or the occurrence of orographic rainfall patterns as extracted from sequences of radar images. Compared to low-dimensional models integrating only the geographical coordinates, the proposed framework opens a way to regionalize meteorological variables which are multidimensional in nature and rarely show spatial auto-correlation in the original space, making the use of classical geostatistics cumbersome. The challenges explored in this thesis are manifold. First, the complexity of models is optimized to impose appropriate smoothness properties and reduce the impact of noisy measurements. Secondly, a multiple kernel extension of SVM is considered to select the multiscale features which explain most of the spatial variability of wind speed. Then, SVM target detection methods are implemented to describe the orographic conditions which cause persistent and stationary rainfall patterns. Finally, the optimal splitting of the data is studied to estimate realistic performances and confidence intervals characterizing the uncertainty of predictions. The resulting maps of average wind speeds find applications in renewable resource assessment and open a route to decreasing the temporal scale of analysis to meet hydrological requirements. Furthermore, the maps depicting the susceptibility to orographic rainfall enhancement can be used to improve current radar-based quantitative precipitation estimation and forecasting systems and to generate stochastic ensembles of precipitation fields conditioned upon the orography.
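As a minimal sketch of the kind of kernel regression described above, the code below fits support vector regression from geographic coordinates and terrain features to a measured wind speed; the synthetic data, the feature list and the model settings are illustrative assumptions, not the thesis configuration:

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Synthetic stations described by coordinates and terrain features; the target
# is a wind speed that, for illustration, grows with elevation and convexity.
rng = np.random.default_rng(0)
n_stations = 300
X = np.column_stack([
    rng.uniform(0, 100, n_stations),     # easting (km)
    rng.uniform(0, 100, n_stations),     # northing (km)
    rng.uniform(300, 3000, n_stations),  # elevation (m)
    rng.uniform(0, 40, n_stations),      # slope (degrees)
    rng.normal(0, 1, n_stations),        # curvature (convex > 0, concave < 0)
])
y = 2 + 0.002 * X[:, 2] + 0.8 * X[:, 4] + rng.normal(0, 0.5, n_stations)

# Standardize the features and fit an RBF-kernel support vector regression.
model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.2))
model.fit(X[:250], y[:250])
print("held-out R^2:", round(model.score(X[250:], y[250:]), 3))
```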