855 results for decision analysis
Abstract:
Since its introduction in 1978, data envelopment analysis (DEA) has become one of the preeminent nonparametric methods for measuring the efficiency and productivity of decision making units (DMUs). Charnes et al. (1978) provided the original DEA constant returns to scale (CRS) model, later extended to variable returns to scale (VRS) by Banker et al. (1984). These 'standard' models are known by the acronyms CCR and BCC, respectively, and are now employed routinely in areas that range from the assessment of public sectors, such as hospitals and health care systems, schools, and universities, to private sectors, such as banks and financial institutions (Emrouznejad et al. 2008; Emrouznejad and De Witte 2010). The main objective of this volume is to publish original studies that go beyond the two standard CCR and BCC models, presenting both theoretical developments and practical applications of advanced DEA models.
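For reference, a minimal sketch of the two standard models named above, in the conventional input-oriented envelopment form (the notation is the textbook one, not reproduced from the volume):

\[
\min_{\theta,\,\lambda} \theta \quad \text{s.t.} \quad \sum_{j=1}^{n} \lambda_j x_{ij} \le \theta x_{io} \;\; (i = 1,\dots,m), \quad \sum_{j=1}^{n} \lambda_j y_{rj} \ge y_{ro} \;\; (r = 1,\dots,s), \quad \lambda_j \ge 0,
\]

where DMU \(o\) is the unit under evaluation with inputs \(x_{io}\) and outputs \(y_{ro}\). This is the CCR (CRS) model; the BCC (VRS) model adds the convexity constraint \(\sum_{j=1}^{n} \lambda_j = 1\).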
Abstract:
Data envelopment analysis (DEA) is a methodology for measuring the relative efficiencies of a set of decision making units (DMUs) that use multiple inputs to produce multiple outputs. Crisp input and output data are fundamentally indispensable in conventional DEA. However, the observed values of the input and output data in real-world problems are sometimes imprecise or vague. Many researchers have proposed various fuzzy methods for dealing with the imprecise and ambiguous data in DEA. This chapter provides a taxonomy and review of fuzzy DEA (FDEA) methods. We present a classification scheme with six categories, namely the tolerance approach, the α-level based approach, the fuzzy ranking approach, the possibility approach, the fuzzy arithmetic approach, and the fuzzy random/type-2 fuzzy set approach. We discuss each category of the classification scheme and group the FDEA papers published in the literature over the past 30 years. © 2014 Springer-Verlag Berlin Heidelberg.
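As an illustration of one of these categories, the α-level based approach (in the style of Kao and Liu) replaces each fuzzy observation by its α-cut interval and computes efficiency bounds at each level α; the notation below is the conventional one, not taken from the chapter:

\[
(E_o)_\alpha^U = \max \Big\{ E_o \;\Big|\; x_{ij} \in \big[(x_{ij})_\alpha^L, (x_{ij})_\alpha^U\big],\; y_{rj} \in \big[(y_{rj})_\alpha^L, (y_{rj})_\alpha^U\big] \Big\},
\]

with the lower bound \((E_o)_\alpha^L\) defined analogously by minimization; sweeping α from 0 to 1 traces out the membership function of the fuzzy efficiency score.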
Abstract:
Projects that are exposed to uncertain environments can be effectively controlled with the application of risk analysis during the planning stage. The Analytic Hierarchy Process, a multiattribute decision-making technique, can be used to analyse and assess project risks which are objective or subjective in nature. Among other advantages, the process logically integrates the various elements in the planning process. The results from risk analysis and activity analysis are then used to develop a logical contingency allowance for the project through the application of probability theory. The contingency allowance is created in two parts: (a) a technical contingency, and (b) a management contingency. This provides a basis for decision making in a changing project environment. Effective control of the project is made possible by the limitation of the changes within the monetary contingency allowance for the work package concerned, and the utilization of the contingency through proper appropriation. The whole methodology is applied to a pipeline-laying project in India, and its effectiveness in project control is demonstrated.
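As a sketch of the AHP step described above: the priority weights of the risk factors come from the principal eigenvector of a pairwise comparison matrix, with a consistency check. The matrix values below are illustrative, not data from the pipeline project:

    import numpy as np

    # Pairwise comparison matrix for three hypothetical risk factors on
    # Saaty's 1-9 scale; values are invented for illustration.
    A = np.array([
        [1.0, 3.0, 5.0],
        [1/3, 1.0, 2.0],
        [1/5, 1/2, 1.0],
    ])

    # Priority weights: normalized principal eigenvector of A.
    eigvals, eigvecs = np.linalg.eig(A)
    k = np.argmax(eigvals.real)
    w = eigvecs[:, k].real
    w = w / w.sum()

    # Consistency check: CI = (lambda_max - n) / (n - 1), CR = CI / RI.
    n = A.shape[0]
    CI = (eigvals.real[k] - n) / (n - 1)
    CR = CI / 0.58  # 0.58 is Saaty's random index for n = 3
    print("weights:", w, "CR:", CR)  # CR < 0.10 is conventionally acceptable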
Abstract:
Oxidation of proteins has received much attention in recent decades because oxidized proteins have been shown to accumulate and to be implicated in the progression and pathophysiology of several diseases, such as Alzheimer's disease and coronary heart disease. As a result, research scientists have become more eager to measure accurately the level of oxidized protein in biological materials, and to determine the precise site of the oxidative attack on the protein, in order to gain insight into the molecular mechanisms involved in the progression of diseases. Several methods for measuring protein carbonylation have been implemented in different laboratories around the world. However, to date no method prevails as the most accurate, reliable and robust. The present paper aims to give an overview of the common methods used to determine protein carbonylation in biological material and to highlight their limitations and potential. The ultimate goal is to give quick tips for rapid decision making when a method has to be selected, taking into consideration the advantages and drawbacks of each method.
Abstract:
This paper explores differences in how primary care doctors process the clinical presentation of depression by African American and African-Caribbean patients compared with white patients in the US and the UK. The aim is to gain a better understanding of possible pathways by which racial disparities arise in depression care. One hundred and eight doctors described their thought processes after viewing video-recorded simulated patients presenting with identical symptoms strongly suggestive of depression. These descriptions were analysed using the CliniClass system, which captures information about micro-components of clinical decision making and permits a systematic, structured and detailed analysis of how doctors arrive at diagnostic, intervention and management decisions. Video recordings of actors portraying black (both African American and African-Caribbean) and white (both White American and White British) male and female patients (aged 55 years and 75 years) were presented to doctors randomly selected from the Massachusetts Medical Society list and from Surrey/South West London and West Midlands National Health Service lists, stratified by country (US v. UK), gender, and years of clinical experience (less v. very experienced). Findings demonstrated little evidence of bias affecting doctors' decision making processes, with the exception of less attention being paid to the potential outcomes associated with different treatment options for African American compared with White American patients in the US. Instead, findings suggest greater clinical uncertainty in diagnosing depression amongst black compared with white patients, particularly in the UK. This was evident in the larger number of potential diagnoses considered. There was also a tendency for doctors in both countries to focus more on black patients' physical rather than psychological symptoms and to identify endocrine problems, most often diabetes, as a presenting complaint for them. This suggests that doctors in both countries have a less well developed mental model of depression for black compared with white patients. © 2014 The Authors.
Abstract:
Data envelopment analysis (DEA) has gained a wide range of applications in measuring the comparative efficiency of decision making units (DMUs) with multiple incommensurate inputs and outputs. The standard DEA method requires that the status of all input and output variables be known exactly. However, in many real applications, the status of some measures is not clearly known as inputs or outputs. These measures are referred to as flexible measures. This paper proposes a flexible slacks-based measure (FSBM) of efficiency in which each flexible measure can play an input role for some DMUs and an output role for others, so as to maximize the relative efficiency of the DMU under evaluation. Further, we show that when an operational unit is efficient in a specific flexible measure, this measure can play both input and output roles for this unit. In this case, the optimal input/output designation for the flexible measure is the one that optimizes the efficiency of the artificial average unit. An application in assessing UK higher education institutions is used to show the applicability of the proposed approach. © 2013 Elsevier Ltd. All rights reserved.
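For context, the FSBM builds on the standard slacks-based measure; a sketch of the underlying SBM model (in Tone's conventional form, not reproduced from the paper) is:

\[
\rho^* = \min_{\lambda,\, s^-,\, s^+} \frac{1 - \frac{1}{m} \sum_{i=1}^{m} s_i^- / x_{io}}{1 + \frac{1}{s} \sum_{r=1}^{s} s_r^+ / y_{ro}} \quad \text{s.t.} \quad x_{io} = \sum_{j=1}^{n} \lambda_j x_{ij} + s_i^-, \quad y_{ro} = \sum_{j=1}^{n} \lambda_j y_{rj} - s_r^+, \quad \lambda,\, s^-,\, s^+ \ge 0.
\]

The flexible-measure extension additionally decides, for each flexible measure, whether it enters the input constraints or the output constraints (e.g. via a binary designation variable), so as to maximize the efficiency of the DMU under evaluation.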
Abstract:
To be competitive in contemporary turbulent environments, firms must be capable of processing huge amounts of information and of effectively converting it into actionable knowledge. This is particularly the case in the marketing context, where problems are also usually highly complex, unstructured and ill-defined. In recent years, the development of marketing management support systems has paralleled this evolution in the informational problems faced by managers, leading to a growth in the study (and use) of artificial intelligence and soft computing methodologies. Here, we present and implement a novel intelligent system that incorporates fuzzy logic and genetic algorithms to operate in an unsupervised manner. This approach allows the discovery of interesting association rules, which can be linguistically interpreted, in large-scale databases (KDD, or Knowledge Discovery in Databases). We then demonstrate its application to a distribution channel problem. It is shown how the proposed system is able to return a number of novel and potentially interesting associations among variables. Thus, it is argued that our method has significant potential to improve the analysis of marketing and business databases in practice, especially in non-programmed decisional scenarios, as well as to assist scholarly researchers in their exploratory analysis. © 2013 Elsevier Inc.
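A minimal sketch of the rule-evaluation idea behind such a system, assuming triangular membership functions and a support-times-confidence fitness; the variable names, parameters and data are invented, not the authors' system:

    import numpy as np

    def tri(x, a, b, c):
        """Triangular membership function with peak at b."""
        return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

    # Synthetic records standing in for a marketing database.
    rng = np.random.default_rng(0)
    spend = rng.uniform(0, 100, size=500)
    loyalty = 0.6 * spend + rng.normal(0, 15, size=500)

    # Candidate rule: IF spend IS high THEN loyalty IS high.
    mu_ant = tri(spend, 50, 100, 150)    # membership of "spend IS high"
    mu_con = tri(loyalty, 40, 80, 120)   # membership of "loyalty IS high"

    # Fuzzy support: mean joint membership; fuzzy confidence: their ratio.
    support = np.mean(np.minimum(mu_ant, mu_con))
    confidence = support / (np.mean(mu_ant) + 1e-9)

    # A genetic algorithm would search over variable pairs and
    # membership-function parameters, using e.g. support * confidence
    # as the fitness of each candidate rule.
    print(support, confidence, support * confidence)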
Abstract:
Four bar mechanisms are basic components of many important mechanical devices. The kinematic synthesis of four bar mechanisms is a difficult design problem. A novel method that combines the genetic programming and decision tree learning methods is presented. We give a structural description for the class of mechanisms that produce desired coupler curves. Constructive induction is used to find and characterize feasible regions of the design space. Decision trees constitute the learning engine, and the new features are created by genetic programming.
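A rough sketch of the decision-tree side of such an approach, on synthetic data: a hand-coded Grashof indicator and link-length ratios stand in for the features that genetic programming would construct, so this shows only the learning engine, not the paper's method:

    import numpy as np
    from sklearn.tree import DecisionTreeClassifier

    rng = np.random.default_rng(1)
    L = rng.uniform(1.0, 10.0, size=(400, 4))   # four link lengths per design

    s, l = L.min(axis=1), L.max(axis=1)         # shortest and longest links
    pq = L.sum(axis=1) - s - l                  # sum of the two middle links
    y = ((s + l) <= pq).astype(int)             # Grashof condition as a stand-in "feasible" label

    # Constructed features: link ratios plus the Grashof margin.
    X = np.column_stack([L[:, :3] / L[:, 3:], (s + l) - pq])

    tree = DecisionTreeClassifier(max_depth=3).fit(X, y)
    print(tree.score(X, y))  # the fitted tree characterizes the feasible region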
Abstract:
The objective of this study was to investigate the effects of circularity, comorbidity, prevalence and presentation variation on the accuracy of differential diagnoses made in optometric primary care using a modified form of naïve Bayesian sequential analysis. No such investigation has ever been reported before. Data were collected for 1422 cases seen over one year. Positive test outcomes were recorded for case history (ethnicity, age, symptoms and ocular and medical history) and clinical signs in relation to each diagnosis. For this reason, only positive likelihood ratios were used for this modified form of Bayesian analysis, which was carried out with Laplacian correction and Chi-square filtration. Accuracy was expressed as the percentage of cases for which the diagnoses made by the clinician appeared at the top of a list generated by Bayesian analysis. Preliminary analyses were carried out on 10 diagnoses and 15 test outcomes. Accuracy of 100% was achieved in the absence of presentation variation but dropped by 6% when variation existed. Circularity artificially elevated accuracy by 0.5%. Surprisingly, removal of Chi-square filtering increased accuracy by 0.4%. Decision tree analysis showed that accuracy was influenced primarily by prevalence, followed by presentation variation and comorbidity. Analysis of 35 diagnoses and 105 test outcomes followed. This explored the use of positive likelihood ratios, derived from the case history, to recommend signs to look for. Accuracy of 72% was achieved when all clinical signs were entered. The drop in accuracy, compared to the preliminary analysis, was attributed to the fact that some diagnoses lacked strong diagnostic signs; the accuracy increased by 1% when only recommended signs were entered. Chi-square filtering improved recommended test selection. Decision tree analysis showed that accuracy was again influenced primarily by prevalence, followed by comorbidity and presentation variation. Future work will explore the use of likelihood ratios based on positive and negative test findings prior to considering naïve Bayesian analysis as a form of artificial intelligence in optometric practice.
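A minimal sketch of the core calculation, assuming invented prevalences and LR+ values rather than the study's data: each diagnosis's prior odds (from prevalence) are multiplied by the LR+ of every positive finding, and diagnoses are ranked by the resulting posterior odds:

    # Prior probabilities (prevalences) and positive likelihood ratios
    # are invented for illustration; the study's values differ.
    priors = {"dry_eye": 0.20, "cataract": 0.10, "glaucoma": 0.02}
    lr_pos = {
        "dry_eye":  {"gritty_sensation": 6.0, "age_over_60": 1.2},
        "cataract": {"gritty_sensation": 0.8, "age_over_60": 4.0},
        "glaucoma": {"gritty_sensation": 0.7, "age_over_60": 2.5},
    }

    def posterior_odds(diagnosis, positive_findings):
        p = priors[diagnosis]
        odds = p / (1 - p)                         # prior odds from prevalence
        for f in positive_findings:
            odds *= lr_pos[diagnosis].get(f, 1.0)  # multiply LR+ of each positive outcome
        return odds

    findings = ["gritty_sensation", "age_over_60"]
    ranked = sorted(priors, key=lambda d: posterior_odds(d, findings), reverse=True)
    print(ranked)  # accuracy = clinician's diagnosis appearing at the top of this list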
Abstract:
Aim: To explore current risk assessment processes in general practice and Improving Access to Psychological Therapies (IAPT) services, and to consider whether the Galatean Risk and Safety Tool (GRiST) can help support improved patient care. Background: Much has been written about risk assessment practice in secondary mental health care, but little is known about how it is undertaken at the beginning of patients' care pathways, within general practice and IAPT services. Methods: Interviews with eight general practice and eight IAPT clinicians from two primary care trusts in the West Midlands, UK, and eight service users from the same region. Interviews explored current practice and participants' views and experiences of mental health risk assessment. Two focus groups were also carried out, one with general practice and one with IAPT clinicians, to review interview findings and to elicit views about GRiST from a demonstration of its functionality. Data were analysed using thematic analysis. Findings: Variable approaches to mental health risk assessment were observed. Clinicians were anxious that important risk information was being missed, and risk communication was undermined. Patients felt uninvolved in the process, and both clinicians and patients expressed anxiety about risk assessment skills. Clinicians were positive about the potential for GRiST to provide solutions to these problems. Conclusions: A more structured and systematic approach to risk assessment in general practice and IAPT services is needed, to ensure important risk information is captured and communicated across the care pathway. GRiST has the functionality to support this aspect of practice.
Abstract:
This paper introduces a new technique for optimizing the trading strategy of brokers that autonomously trade in retail and wholesale markets. Simultaneous optimization of retail and wholesale strategies has been considered by existing studies as intractable. Therefore, each of these strategies is optimized separately and their interdependence is generally ignored, with resulting broker agents not aiming for a globally optimal retail and wholesale strategy. In this paper, we propose a novel formalization, based on a semi-Markov decision process (SMDP), which globally and simultaneously optimizes retail and wholesale strategies. The SMDP is solved using hierarchical reinforcement learning (HRL) in multi-agent environments. To address the curse of dimensionality, which arises when applying SMDP and HRL to complex decision problems, we propose an efficient knowledge transfer approach. This enables the reuse of learned trading skills in order to speed up the learning in new markets, at the same time as making the broker transportable across market environments. The proposed SMDP-broker has been thoroughly evaluated in two well-established multi-agent simulation environments within the Trading Agent Competition (TAC) community. Analysis of controlled experiments shows that this broker can outperform the top TAC-brokers. Moreover, our broker is able to perform well in a wide range of environments by re-using knowledge acquired in previously experienced settings.
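The value update at the heart of an SMDP formulation discounts by the duration of each temporally extended action; below is a minimal Q-learning sketch of that update, with toy state/action spaces that are not the TAC broker or its HRL decomposition:

    import numpy as np

    n_states, n_actions = 5, 3
    Q = np.zeros((n_states, n_actions))
    alpha, gamma = 0.1, 0.95

    def smdp_update(s, a, cumulative_reward, tau, s_next):
        """One SMDP Q-learning step: the reward accumulated over an
        action lasting tau time steps is discounted by gamma ** tau."""
        target = cumulative_reward + (gamma ** tau) * Q[s_next].max()
        Q[s, a] += alpha * (target - Q[s, a])

    # e.g. a wholesale-procurement action that took 3 time steps:
    smdp_update(s=0, a=1, cumulative_reward=4.2, tau=3, s_next=2)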
Abstract:
The paper presents a multicriteria decision support system, called MultiDecision-2, which consists of two independent parts: the MKA-2 subsystem and the MKO-2 subsystem. The MultiDecision-2 software system supports decision makers (DMs) in solving different problems of multicriteria analysis and linear (continuous and integer) problems of multicriteria optimization. The two subsystems MKA-2 and MKO-2 of MultiDecision-2 are briefly described in the paper in terms of the class of problems being solved, the system structure, the operation of the interface modules for input data entry and for information about the DM's local preferences, as well as the operation of the interface modules for visualization of the current and final solutions.
Abstract:
An expert system (ES) is a class of computer program developed by researchers in artificial intelligence. In essence, expert systems are programs made up of a set of rules that analyze information about a specific class of problems and, depending upon their design, recommend a course of user action in order to implement corrections. ES are computerized tools designed to enhance the quality and availability of knowledge required by decision makers in a wide range of industries. Decision-making is important for the financial institutions involved due to the high level of risk associated with wrong decisions. The process of making decisions is complex and unstructured. The existing models for decision-making do not capture the learned knowledge well enough. In this study, we analyze the beneficial aspects of using ES for the decision-making process.
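A minimal sketch of the rule-based mechanism such systems rely on, with invented facts and rules for a credit decision (not drawn from the study):

    # Working memory of facts about one loan application (invented values).
    facts = {"income": 42_000, "late_payments": 3, "collateral": True}

    # Each rule: a condition over the facts, and the fact it asserts.
    rules = [
        (lambda f: f["late_payments"] > 2, ("risk", "high")),
        (lambda f: f["income"] < 20_000, ("risk", "high")),
        (lambda f: f.get("risk") == "high" and f["collateral"], ("action", "request guarantee")),
        (lambda f: f.get("risk") != "high", ("action", "approve")),
    ]

    # Forward chaining: fire rules until no new fact is derived.
    changed = True
    while changed:
        changed = False
        for condition, (key, value) in rules:
            if condition(facts) and facts.get(key) != value:
                facts[key] = value
                changed = True

    print(facts["action"])  # recommended course of user action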
Abstract:
The reasonable choice is a critical success factor for decision-making in the field of software engineering (SE). A case-driven comparative analysis has been introduced and a procedure for its systematic application has been suggested. The paper describes how the proposed method can be built into a general framework for SE activities. Some examples of experimental versions of the framework are briefly presented.
Abstract:
The paper describes a decision support system for forecasting the inflation level on the basis of a multifactor dependence represented by a decision-making "tree". The interrelation of the factors affecting the inflation level (economic, financial, political and socio-demographic) is considered. Perspectives for developing the decision-making "tree" method are defined, in particular the identification of so-called "narrow" spaces and the further analysis of possible scenarios for forecasting the inflation level.
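A small illustrative sketch of the decision-tree idea, assuming synthetic data and invented factor names drawn from the four groups the paper lists; it is not the described system:

    import numpy as np
    from sklearn.tree import DecisionTreeRegressor

    rng = np.random.default_rng(2)
    n = 300
    X = np.column_stack([
        rng.normal(3.0, 1.0, n),   # economic: money supply growth, %
        rng.normal(5.0, 2.0, n),   # financial: key interest rate, %
        rng.uniform(0.0, 1.0, n),  # political: stability index
        rng.normal(0.5, 0.2, n),   # socio-demographic: pressure index
    ])
    inflation = 1.2 * X[:, 0] - 0.4 * X[:, 1] + rng.normal(0, 0.5, n)

    # Each root-to-leaf path of the fitted tree is one scenario
    # for the inflation level.
    tree = DecisionTreeRegressor(max_depth=4).fit(X, inflation)
    print(tree.predict(X[:1]))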