781 results for Multiple Criteria Decision Making (MCDM)
Abstract:
Background. Molecular tests for breast cancer (BC) risk assessment have been reimbursed by health insurers in Switzerland since the beginning of 2015. The main current role of these tests is to help oncologists decide on the usefulness of adjuvant chemotherapy in patients with early-stage, endocrine-sensitive, human epidermal growth factor receptor 2 (HER2)-negative BC. These gene expression signatures aim at predicting the risk of recurrence in this subgroup. One of them (OncotypeDx/OT) also predicts the distant metastasis rate with or without the addition of cytotoxic chemotherapy to endocrine therapy. The clinical utility of these tests, in addition to existing so-called "clinico-pathological" prognostic and predictive criteria (e.g. stage, grade, biomarker status), is still debated. We report a single-center, one-year experience of the use of one molecular test (OT) in clinical decision making. Methods. We extracted from the CHUV Breast Cancer Center database the total number of BC cases with estrogen-receptor-positive (ER+), HER2-negative early breast cancer (node-negative (pN0) disease or micrometastases in up to 3 lymph nodes) operated between September 2014 and August 2015. For the cases from this group in which a molecular test had been requested by the tumor board, we collected the clinicopathologic parameters, the initial tumor board decision, and the final adjuvant systemic therapy decision. Results. A molecular test (OT) was done in 12.2% of patients with ER+, HER2-negative early BC. The median age was 57.4 years and the median invasive tumor size was 1.7 cm. These patients were classified by OT testing (Recurrence Score) into low-, intermediate-, and high-risk groups in 27.2%, 63.6% and 9% of cases, respectively. Treatment recommendations changed in 18.2% of cases, predominantly from chemotherapy plus endocrine therapy to endocrine treatment alone. Of 8 patients originally recommended chemotherapy, 25% were recommended endocrine treatment alone after receiving the Recurrence Score result. Conclusions. Though reimbursed by health insurers since January 2015, molecular tests are used moderately in our institution, as per the decision of the multidisciplinary tumor board. They are mainly used to obtain complementary confirmation supporting a decision against chemotherapy. The OncotypeDx Recurrence Score results were in the intermediate group in 66% of the 9 tested cases but contributed to avoiding chemotherapy in 2 patients during the last 12 months.
Abstract:
BACKGROUND AND PURPOSE: Conducted on behalf of the STroke Imaging Research (STIR) and VISTA-Imaging investigators, this study aimed to collect precise information on the typical imaging decisions made in specific clinical acute stroke scenarios. Stroke centers worldwide were surveyed regarding the typical imaging used to work up representative acute stroke patients, the treatment decisions made, and their willingness to enroll patients in clinical trials. METHODS: STroke Imaging Research and Virtual International Stroke Trials Archive-Imaging circulated an online survey of clinical case vignettes through its website, the websites of national professional societies from multiple countries, and email distribution lists from STroke Imaging Research and participating societies. Survey respondents were asked to select the typical imaging work-up for each clinical vignette presented. Actual images were not shown to the respondents; instead, the survey displayed several types of imaging findings that the chosen imaging strategy could yield, and respondents selected the appropriate therapy and whether to enroll the patient in a clinical trial, considering time from onset, clinical presentation, and imaging findings. A follow-up survey focusing on 6 h from onset was conducted after the release of the positive endovascular trials. RESULTS: We received 548 responses from 35 countries, including 282 individual centers; 78% of the centers were located in Australia, Brazil, France, Germany, Spain, the United Kingdom, and the United States. The specific onset windows presented influenced the type of imaging work-up selected more than the clinical scenario did. Magnetic resonance imaging usage was substantial (27-28%), in particular for wake-up stroke. Following the release of the positive trials, the selection of perfusion imaging as part of the imaging strategy increased significantly. CONCLUSIONS: Vascular or perfusion imaging by computed tomography or magnetic resonance imaging, beyond parenchymal imaging alone, was the primary work-up (62-87%) across all clinical vignettes and time windows. Perfusion imaging with computed tomography or magnetic resonance imaging was associated with an increased probability of enrollment in clinical trials for the 0-3 h window. Following the release of the positive endovascular trials, the selection of endovascular-only treatment at 6 h increased across all clinical vignettes.
Abstract:
In order to broaden our knowledge and understanding of the decision steps in the criminal investigation process, we started by evaluating the decision to analyse a trace and the factors involved in this decision step. This step is embedded in the complete criminal investigation process, which involves multiple decision and triaging steps. Considering robbery cases occurring in a geographic region during a 2-year period, we studied the factors influencing the decision to submit biological traces, sampled directly at the scene of the robbery or on collected objects, for analysis. The factors were categorised into five knowledge dimensions: strategic, immediate, physical, criminal, and utility; a decision tree analysis was then carried out. Factors in each category played a role in the decision to analyse a biological trace. Interestingly, factors involving information available prior to the analysis are of importance, such as the fact that a positive result (a profile suitable for comparison) is already available in the case, or that a suspect has been identified through traditional police work before analysis. One factor that was taken into account, but was not significant, is the matrix of the trace; hence, the decision to analyse a trace is not influenced by this variable. Overall, the decision to analyse a trace is very complex, and many of the tested variables were taken into account. The decisions are often made on a case-by-case basis.
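The kind of decision tree analysis described could, under loose assumptions, look like the following minimal sketch; the feature names, the toy data, and the use of scikit-learn are my illustrative choices, not details taken from the study.

```python
# Minimal sketch of a decision tree analysis over trace-level factors.
# All column names and data are hypothetical, not from the study.
import pandas as pd
from sklearn.tree import DecisionTreeClassifier, export_text

# Hypothetical data: each row is one biological trace in a robbery case.
df = pd.DataFrame({
    "prior_positive_profile": [1, 0, 0, 1, 0, 1, 0, 1],  # usable profile already in case
    "suspect_identified":     [1, 1, 0, 0, 0, 1, 0, 1],  # via traditional police work
    "sampled_on_scene":       [1, 0, 1, 1, 0, 0, 1, 1],  # vs. on a collected object
    "analysed":               [0, 0, 1, 1, 1, 0, 1, 0],  # the decision under study
})

X, y = df.drop(columns="analysed"), df["analysed"]
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
print(export_text(tree, feature_names=list(X.columns)))
```

The printed tree exposes which factors split the analyse/do-not-analyse decision first, which is the kind of insight the abstract reports (e.g. the importance of information available prior to analysis).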
Abstract:
This master's thesis examines budgeting decision-making in Finnish municipalities, an issue that has received little attention in the academic literature. Furthermore, the thesis investigates whether current budgeting decision-making practices could be improved by a new kind of budget decision-making tool based on presenting multiple investment or divestment alternatives to the decision makers simultaneously, as a frontier, rather than one by one. In the empirical part of the thesis, the results of three case interviews are presented in order to answer the research questions of the study. The empirical evidence suggests that there is a need for the proposed budgeting decision-making tool in Finnish municipalities. The current routine is seen as good, even though the interviewees would warmly welcome the alternative method, which would function as a linkage between strategy and the budget. The results also indicate that even though municipalities are left considerable room in their budgeting decision-making routine, the routine closely, though not always purposely, follows the given guidelines and legislation. The major problem in current practice seems to be a lack of understanding, as the decision-makers find it hard to fully understand the multiplicative effects of budget-related decisions.
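As an illustration of the frontier idea (showing decision makers all undominated alternatives at once), here is a minimal Pareto-frontier sketch; the alternatives, their costs and benefit scores, and the two-criteria framing are hypothetical, not taken from the thesis.

```python
# Minimal sketch: present budget alternatives as a Pareto frontier
# instead of one by one. Items and scores are invented illustrations.
from dataclasses import dataclass

@dataclass
class Alternative:
    name: str
    cost: float      # e.g. EUR million (negative = divestment proceeds)
    benefit: float   # e.g. scored strategic value

def pareto_frontier(items):
    """Keep alternatives not dominated by any other: another item dominates
    if it costs no more, benefits no less, and is strictly better in one."""
    def dominated(a):
        return any(b.cost <= a.cost and b.benefit >= a.benefit
                   and (b.cost < a.cost or b.benefit > a.benefit)
                   for b in items)
    return [a for a in items if not dominated(a)]

items = [Alternative("school renovation", 4.0, 7.0),
         Alternative("road extension",    6.0, 6.5),
         Alternative("library closure",  -1.5, -2.0),
         Alternative("ice rink",          3.0, 3.5)]
for a in sorted(pareto_frontier(items), key=lambda a: a.cost):
    print(f"{a.name}: cost={a.cost}, benefit={a.benefit}")
```

Here "road extension" drops out because "school renovation" costs less and scores higher; the remaining items form the frontier the decision makers would see side by side.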
Abstract:
The shift towards a knowledge-based economy has inevitably prompted the evolution of patent exploitation. Nowadays, a patent is more than just a prevention tool with which a company blocks its competitors from developing rival technologies; it lies at the very heart of the company's strategy for value creation and is therefore strategically exploited for economic profit and competitive advantage. Along with the evolution of patent exploitation, the demand for reliable and systematic patent valuation has also reached an unprecedented level. However, most of the quantitative approaches in use to assess patents, which arguably fall into four categories, are based solely on conventional discounted cash flow analysis, whose usability and reliability in the context of patent valuation are greatly limited by five practical issues: market illiquidity, poor data availability, discriminatory cash-flow estimations, and the incapability to account for changing risk and for managerial flexibility. This dissertation attempts to overcome these impeding barriers by rationalizing the use of two techniques, namely fuzzy set theory (aimed at the first three issues) and real option analysis (aimed at the last two). It commences with an investigation into the nature of the uncertainties inherent in patent cash flow estimation and claims that two levels of uncertainty must be properly accounted for. Further investigation reveals that both levels of uncertainty fall under the categorization of subjective uncertainty, which differs from objective uncertainty originating from inherent randomness in that uncertainties labelled as subjective are highly related to the behavioural aspects of decision making and are usually witnessed whenever human judgement, evaluation or reasoning is crucial to the system under consideration and there is a lack of complete knowledge of its variables. Having clarified their nature, the application of fuzzy set theory to modelling patent-related uncertain quantities is readily justified. The application of real option analysis to patent valuation is prompted by the fact that both the patent application process and the subsequent patent exploitation (or commercialization) are subject to a wide range of decisions at multiple successive stages. In other words, both patent applicants and patentees are faced with a large variety of courses of action as to how their patent applications and granted patents can be managed. Since they have the right to run their projects actively, this flexibility has value and thus must be properly accounted for. Accordingly, the dissertation provides an explicit identification of the types of managerial flexibility inherent in patent-related decision-making problems and in patent valuation, and a discussion of how they can be interpreted in terms of real options. Additionally, the use of the proposed techniques in practical applications is demonstrated by three models based on fuzzy real option analysis. In particular, the pay-off method and the extended fuzzy Black-Scholes model are employed to investigate the profitability of a patent application project for a new process for the preparation of a gypsum-fibre composite and to justify the subsequent patent commercialization decision, respectively; a fuzzy binomial model is designed to reveal the economic potential of a patent licensing opportunity.
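Of the three models, the pay-off method is the simplest to sketch. Below is a minimal numeric illustration of its usual form (option value = share of the fuzzy NPV's area lying above zero, times a mean of that positive side); the triangular NPV figures are invented, and the weighted mean used here is a simple stand-in for the possibilistic mean found in the literature, not the dissertation's exact formulation.

```python
# Numeric sketch of the pay-off method for a triangular fuzzy NPV.
# Project numbers are invented for illustration.
import numpy as np

def triangular(x, worst, likely, best):
    """Membership function of a triangular fuzzy NPV."""
    return np.where(x < likely,
                    np.clip((x - worst) / (likely - worst), 0, 1),
                    np.clip((best - x) / (best - likely), 0, 1))

def payoff_method_rov(worst, likely, best, n=100_000):
    x = np.linspace(worst, best, n)
    a = triangular(x, worst, likely, best)
    pos = x > 0
    total_area = np.trapz(a, x)
    pos_area = np.trapz(a[pos], x[pos])
    # Membership-weighted mean of the positive side: a simple stand-in
    # for the possibilistic mean used in the pay-off-method literature.
    mean_pos = np.trapz(a[pos] * x[pos], x[pos]) / pos_area
    return (pos_area / total_area) * mean_pos

# Hypothetical patent project: NPV between -2 and 10 MEUR, most likely 3.
print(f"ROV = {payoff_method_rov(-2.0, 3.0, 10.0):.2f} MEUR")
```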
Abstract:
This doctoral study conducts an empirical analysis of the impact of Word-of-Mouth (WOM) on marketing-relevant outcomes, such as attitudes and consumer choice, during a high-involvement and complex service decision. Due to its importance to decision-making, WOM has attracted interest from academia and practitioners for decades. Consumers are known to discuss products and services with one another. These discussions help consumers form an evaluative opinion, as WOM reduces perceived risk, simplifies complexity, and increases consumers' confidence in decision-making. These discussions are also highly impactful, as WOM is a trustworthy source of information that is independent from the company or brand. In response to calls for more research on what happens after WOM information is received, and how it affects marketing-relevant outcomes, this dissertation extends the prior WOM literature by investigating how consumers process information in a high-involvement service domain, in particular higher education. Further, the dissertation studies how the form of WOM influences consumer choice. The research contributes to the WOM and services marketing literature by developing and empirically testing a framework for information processing and by studying the long-term effects of WOM. The results of the dissertation are presented in five research publications, which are based on longitudinal data. The research leads to a proposed theoretical framework for the processing of WOM, grounded in theories from social psychology. The framework is specifically focused on service decisions, as it takes into account evaluation difficulty through the complex nature of the choice criteria associated with service purchase decisions. Further, other gaps in the current WOM literature are addressed by, for example, examining how the source of WOM and service values affect the processing mechanism. The research also provides implications for managers aiming to trigger favorable WOM through marketing efforts such as advertising and testimonials. The results suggest how to design these marketing efforts by taking into account the mechanism through which information is processed, or the form of social influence.
Abstract:
In the context of decision making under uncertainty, we formalize the concept of analogy: an analogy between two decision problems is a mapping that transforms one problem into the other while preserving the problem's structure. We identify the basic structure of a decision problem, and provide a representation of the mappings that preserve this structure. We then consider decision makers who use multiple analogies. Our main results are a representation theorem for "aggregators" of analogies satisfying certain minimal requirements, and the identification of preferences emerging from analogical reasoning. We show that a large variety of multiple-prior preferences can be thought of as emerging from analogical reasoning.
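For reference, a minimal statement of the canonical multiple-prior (maxmin expected utility) form that the final sentence alludes to, in my notation rather than the paper's:

```latex
% Maxmin expected utility over a closed, convex set C of priors on the
% state space S; f is an act and u a utility function (notation mine).
V(f) \;=\; \min_{p \in C} \int_{S} u\bigl(f(s)\bigr)\, \mathrm{d}p(s)
```

On this reading, the set C of priors would be generated by the analogies the decision maker entertains; the paper's representation theorem should be consulted for the precise correspondence.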
Abstract:
The traditional task of a central bank is to preserve price stability and, in doing so, not to impair the real economy more than necessary. To meet this challenge, it is of great relevance whether inflation is driven only by inflation expectations and the current output gap or whether it is, in addition, influenced by past inflation. In the former case, as described by the New Keynesian Phillips curve, the central bank can immediately and simultaneously achieve price stability and equilibrium output, the so-called 'divine coincidence' (Blanchard and Galí 2007). In the latter case, the achievement of price stability is costly in terms of output and will be pursued over several periods. Similarly, it is important to distinguish this latter case, which describes 'intrinsic' inflation persistence, from that of 'extrinsic' inflation persistence, where the sluggishness of inflation is not a 'structural' feature of the economy but merely 'inherited' from the sluggishness of the other driving forces, inflation expectations and output. 'Extrinsic' inflation persistence is usually considered the less challenging case, as policy-makers are expected to fight the persistence in the driving forces, especially to reduce the stickiness of inflation expectations through a credible monetary policy, in order to re-establish the 'divine coincidence'. The scope of this dissertation is to contribute to the vast literature and ongoing discussion on inflation persistence. Chapter 1 describes the policy consequences of inflation persistence and summarizes the empirical and theoretical literature. Chapter 2 compares two models of staggered price setting, one with a fixed two-period duration and the other with a stochastic duration of prices. I show that in an economy with a timeless optimizing central bank, the model with two-period alternating price setting leads (for most parameter values) to more persistent inflation than the model with stochastic price duration. This result amends earlier work by Kiley (2002), who found that the model with stochastic price duration generates more persistent inflation in response to an exogenous monetary shock. Chapter 3 extends the two-period alternating price-setting model to the case of 3- and 4-period price durations. This results in a more complex Phillips curve with a negative impact of past inflation on current inflation. As simulations show, this multi-period Phillips curve generates too low a degree of autocorrelation and too-early turning points of inflation, and it is outperformed by a simple Hybrid Phillips curve. Chapter 4 starts from the critique by Driscoll and Holden (2003) of the relative real-wage model of Fuhrer and Moore (1995). Taking seriously the critique that Fuhrer and Moore's model collapses to a much simpler one without intrinsic inflation persistence if one takes their arguments literally, I extend the model by a term for inequality aversion. This model extension is not only in line with experimental evidence but also results in a Hybrid Phillips curve with inflation persistence that is observationally equivalent to that presented by Fuhrer and Moore (1995). In chapter 5, I present a model that allows one to study, in particular, the relationship between fairness attitudes and time preference (impatience). In the model, two individuals take decisions in two subsequent periods. In period 1, both individuals are endowed with resources and are able to donate a share of their resources to the other individual. In period 2, the two individuals may join in common production after having bargained over the split of its output. The size of the production output depends on the relative share of resources at the end of period 1, as the human capital of the individuals, which is built by means of their resources, cannot be fully substituted one for the other. Therefore, it may be rational for a well-endowed individual in period 1 to act in a seemingly 'fair' manner and to donate its own resources to its poorer counterpart. This decision also depends on the individuals' impatience, which is induced by the small but positive probability that production will not be possible in period 2. As a general result, the individuals in the model economy are more likely to behave in a 'fair' manner, i.e., to donate resources to the other individual, the lower their own impatience and the higher the productivity of the other individual. As the (seemingly) 'fair' behavior is modelled as an endogenous outcome and is related to the aspect of time preference, the presented framework may help to further integrate behavioral economics and macroeconomics.
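For reference, the two benchmark specifications contrasted at the start of the abstract can be written in their standard textbook forms; the notation below is mine, not the dissertation's.

```latex
% Purely forward-looking New Keynesian Phillips curve
% (under which the 'divine coincidence' obtains):
\pi_t \;=\; \beta\,\mathbb{E}_t[\pi_{t+1}] \;+\; \kappa\, x_t

% Hybrid Phillips curve, whose backward-looking term (\gamma_b > 0)
% captures intrinsic inflation persistence:
\pi_t \;=\; \gamma_f\,\mathbb{E}_t[\pi_{t+1}] \;+\; \gamma_b\,\pi_{t-1} \;+\; \kappa\, x_t
```

Here pi_t is inflation and x_t the output gap; the dissertation's chapters derive structural variants of the second form.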
Abstract:
Many different individuals, each with their own expertise and criteria for decision making, are involved in making decisions on construction projects. Decision-making processes are thus significantly affected by communication, in which the dynamic interplay of human intentions leads to unpredictable outcomes. In order to theorise decision-making processes that include communication, it is argued here that they resemble evolutionary dynamics in terms of both selection and mutation, which can be expressed by the replicator-mutator equation. To support this argument, a mathematical model of decision making has been constructed by analogy with evolutionary dynamics, with three variables: initial support rate, business hierarchy, and power of persuasion. In parallel, a survey of decision-making patterns in construction projects was performed through a self-administered mail questionnaire sent to construction practitioners. Comparison between the numerical analysis of the mathematical model and the statistical analysis of the empirical data has shown the significant potential of the replicator-mutator equation as a tool for studying the dynamic properties of intentions in communication.
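For reference, the replicator-mutator equation invoked above has the following standard form (notation mine). A plausible, but assumed, mapping to the model's variables: x_i is the support rate for intention i (its initial value being the "initial support rate"), the fitness f_i reflects power of persuasion weighted by business hierarchy, and q_{ji} is the probability that a supporter of intention j is persuaded to switch to i.

```latex
% Replicator-mutator dynamics: selection via fitness f, mutation via Q = (q_{ji}).
\dot{x}_i \;=\; \sum_{j=1}^{n} x_j\, f_j(x)\, q_{ji} \;-\; \phi(x)\, x_i,
\qquad \phi(x) \;=\; \sum_{j=1}^{n} f_j(x)\, x_j
```

The average-fitness term phi keeps the shares summing to one, so the equation tracks how support migrates between competing intentions during communication.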
Abstract:
The games-against-nature approach to the analysis of uncertainty in decision-making relies on the assumption that the behaviour of a decision-maker can be explained by concepts such as maximin, minimax regret, or a similarly defined criterion. In reality, however, these criteria represent a spectrum, and the actual behaviour of a decision-maker is most likely to embody a mixture of such idealisations. This paper proposes that, in a game-theoretic approach to decision-making under uncertainty, a more realistic representation of a decision-maker's behaviour can be achieved by synthesising games against nature with goal programming in a single framework. The proposed formulation is illustrated using a well-known example from the literature on mathematical programming models for agricultural decision-making.
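For reference, the criteria named above have standard definitions; with payoff u(a, s) for action a under state of nature s (notation mine):

```latex
\text{maximin:}\qquad \max_{a}\, \min_{s}\, u(a, s)

\text{regret:}\qquad r(a, s) \;=\; \max_{a'} u(a', s) \;-\; u(a, s)

\text{minimax regret:}\qquad \min_{a}\, \max_{s}\, r(a, s)
```

A goal-programming synthesis along the proposed lines would, roughly, treat the payoff levels implied by each such criterion as goals and minimise weighted deviations from them, so behaviour can sit between the idealisations; the paper should be consulted for the exact construction.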
Abstract:
INTRODUCTION: Guidelines for the treatment of patients with severe hypothermia, and particularly with hypothermic cardiac arrest, recommend rewarming using extracorporeal circulation (ECC). However, guidelines for the further in-hospital diagnostic and therapeutic approach to these patients, who often suffer from additional injuries (especially avalanche casualties), are lacking. The lack of such algorithms may relevantly delay treatment and put patients at further risk. Together with a multidisciplinary team, the Emergency Department at the University Hospital in Bern, a level I trauma centre, created an algorithm for the in-hospital treatment of patients with hypothermic cardiac arrest. This algorithm primarily focuses on the decision-making process for the administration of ECC. THE BERNESE HYPOTHERMIA ALGORITHM: The major difference between the traditional approach, in which all hypothermic patients are primarily admitted to the emergency centre, and our new algorithm is that hypothermic cardiac arrest patients without obvious signs of severe trauma are taken to the operating theatre without delay. There, the interdisciplinary team decides whether to rewarm the patient using ECC on the basis of a standard clinical trauma assessment, serum potassium levels, core body temperature, sonographic examinations of the abdomen, pleural space and pericardium, and, if needed, a pelvic X-ray. During ECC, sonography is repeated, and haemodynamic function as well as haemoglobin levels are monitored regularly. Standard radiological investigations according to the local multiple-trauma protocol are performed only after ECC. Transfer to the intensive care unit, where mild therapeutic hypothermia is maintained for another 12 h, should not be delayed by additional X-rays for minor injuries. DISCUSSION: The presented algorithm is intended to facilitate in-hospital decision-making and to shorten the door-to-reperfusion time for patients with hypothermic cardiac arrest. It is the result of intensive collaboration between different specialties and highlights the importance of high-quality teamwork for rare cases of severe accidental hypothermia. Information derived from the new International Hypothermia Registry will help to answer open questions and further optimize the algorithm.
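The decision structure described (trauma assessment, serum potassium, and core temperature gating the ECC decision) can be caricatured in a few lines of code. This is an illustrative sketch only: the threshold values below are placeholders I invented, not the Bernese algorithm's actual cutoffs, and nothing here is clinical guidance.

```python
# Sketch of the shape of the ECC decision flow described in the abstract.
# All thresholds are invented placeholders, NOT the published algorithm's
# cutoffs; consult the actual protocol for real values.
def consider_ecc_rewarming(serum_potassium_mmol_l: float,
                           core_temp_c: float,
                           severe_injury_on_assessment: bool) -> str:
    K_MAX = 10.0     # placeholder futility cutoff (assumption)
    TEMP_MAX = 32.0  # placeholder: arrest plausibly hypothermia-driven (assumption)
    if serum_potassium_mmol_l > K_MAX:
        return "withhold ECC: potassium suggests irreversible hypoxia"
    if core_temp_c > TEMP_MAX:
        return "reassess: arrest may not be attributable to hypothermia"
    if severe_injury_on_assessment:
        return "interdisciplinary decision: address severe injuries first"
    return "proceed to ECC rewarming in the operating theatre"

print(consider_ecc_rewarming(4.2, 24.0, False))
```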
Abstract:
OBJECTIVES: Fever is one of the most commonly seen symptoms in the pediatric emergency department. The objective of this study was to observe how rapid testing for influenza virus affects the management of children with fever. METHODS: We performed a review of our pediatric emergency department records during the 2008/2009 annual influenza season. The BinaxNow Influenza A+B test was performed on patients meeting the following criteria: age 1.0 to 16.0 years, fever greater than 38.5 °C, fever of less than 96 hours' duration after the onset of clinical illness, clinical signs compatible with acute influenza, and nontoxic appearance. Additional laboratory tests were performed at the treating physician's discretion. RESULTS: The influenza rapid antigen test was performed in 192 children. One hundred nine (57%) were influenza positive, with the largest fraction (101 patients) positive for influenza A. The age distribution did not differ between children with negative and positive test results (mean, 5.3 vs 5.1 years; not statistically significant). A larger number of diagnostic tests were performed in the group of influenza-negative patients: twice as many complete blood counts, C-reactive protein determinations, lumbar punctures, and urinalyses were ordered in this group. CONCLUSIONS: Rapid diagnosis of influenza in the pediatric emergency department affects the management of febrile children, as confirmation of influenza virus infection decreases the number of additional diagnostic tests ordered.
Abstract:
BACKGROUND: The systematic need for angiography in the diagnosis of carotid artery stenosis and the indication for surgical therapy are still debated. Noninvasive imaging techniques such as MR angiography (MRA) or CT angiography (CTA) offer an alternative to digital subtraction angiography (DSA) and are increasingly used in clinical practice. In this study, we present the radiological characteristics and clinical results of a series of patients operated on the basis of combined ultrasonography (US)/MRA. METHODS: This observational study included all patients consecutively operated for carotid stenosis in our department from October 1998 to December 2004. The applied MRA protocol had previously been established in a large correlation study with DSA. DSA was used only in case of discordance between US and MRA. The preoperative radiological information furnished by MRA was compared with intraoperative findings. The outcome of the operation was assessed according to ECST criteria. RESULTS: Among 327 patients, preoperative MRA was performed in 278 (85%), DSA in 44 (13.5%), and CT angiography in 5 (1.5%). Most DSA studies were performed as emergencies in preparation for endovascular therapy or for reasons other than carotid stenosis. Eleven additional DSA studies (3.3%) complemented US/MRA, mostly because of a diverging diagnosis of subocclusion of the internal carotid artery (ICA). No direct morbidity or intraoperative difficulty was related to preoperative MRA. The combined mortality/major morbidity rate was 0.9% (3 patients) and the minor morbidity rate 5.5% (18 patients). CONCLUSIONS: This observational study describes a well-established practice of carotid surgery and supports the exclusive use of noninvasive diagnostic imaging for indicating and planning the operation.
Abstract:
Recently, the discussion about the optimal treatment for different subsets of patients suffering from coronary artery disease has re-emerged, mainly because of the uncertainty among doctors and patients regarding the phenomenon of unpredictable early and late stent thrombosis. Surgical revascularization using multiple arterial bypass grafts has repeatedly proven its superiority over percutaneous intervention techniques, especially in patients suffering from left main stem disease and coronary 3-vessel disease. Several prospective randomized multicenter studies comparing early and mid-term results following PCI and CABG have been very restrictive with respect to patient enrollment, with less than 5% of all patients treated during the same time period having been enrolled. Coronary artery bypass grafting allows the most complete revascularization in one session, because all target coronary vessels larger than 1 mm can be bypassed in their distal segments. Once the patient has been referred for surgery, surgeons have to consider the most complete arterial revascularization in order to decrease the long-term necessity for re-revascularization; for instance, the patency rate of the left internal thoracic artery grafted to the distal part of the left anterior descending artery may be as high as 90-95% after 10 to 15 years. Early mortality following isolated CABG operations has been as low as 0.6 to 1% in the most recent period (reports from the University Hospital Berne and the University Hospital of Zurich). Besides these excellent results, the CABG option seems to be less expensive than PCI over time, since the necessity for additional PCI is rather high following initial PCI, and the price of stent devices is still very high, particularly in Switzerland. Patients, insurers and experts in health care should be informed better and more honestly about the risks and costs of PCI and CABG procedures, as well as about the much higher rate of subsequent interventions following PCI. A team approach for all patients in whom both options could be offered seems mandatory to avoid unbalanced information of the patients. In view of the recent developments in transcatheter valve treatments, the revival of cardiological-cardiosurgical conferences seems to be a good option to optimize the cooperation between the two medical specialties: cardiology and cardiac surgery.
Abstract:
Low-flow, low-gradient severe aortic stenosis (AS) is characterised by a small aortic valve area (AVA) and a low mean gradient (MG) secondary to a low cardiac output, and may occur in patients with either preserved or reduced left ventricular ejection fraction (LVEF). Symptomatic patients presenting with low-flow, low-gradient severe AS have a dismal prognosis, independent of baseline LVEF, if managed conservatively, and should therefore undergo aortic valve replacement if feasible. Transthoracic echocardiography (TTE) is the first-line investigation for the assessment of the haemodynamic severity of AS. However, when confronted with guideline-discordant AVA (small) and MG (low) values, there are several possible causes other than severe AS combined with a low cardiac output, including erroneous measurements, small body size, inherent inconsistencies in the guidelines' criteria, prolonged ejection time, and aortic pseudostenosis. The distinction between these various entities poses a diagnostic challenge. However, it is important to make the distinction, because each entity has very different implications for risk stratification and therapeutic management. In such instances, cardiac catheterisation forms an integral part of the work-up of these patients in order to confirm or refute the echocardiographic findings and guide management decisions appropriately.
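The discordant-grading pattern at the heart of this review can be sketched programmatically. The thresholds below (AVA < 1.0 cm², MG < 40 mmHg, stroke volume index < 35 mL/m², LVEF 50%) are commonly cited guideline values, quoted here as assumptions since the abstract itself gives no numbers.

```python
# Sketch of discordant AVA/MG grading in aortic stenosis. Thresholds are
# commonly cited guideline values, used here as assumptions; the abstract
# quotes no numbers. Not clinical guidance.
def classify_as(ava_cm2: float, mg_mmhg: float,
                svi_ml_m2: float, lvef_pct: float) -> str:
    if ava_cm2 >= 1.0:
        return "not severe by AVA"
    if mg_mmhg >= 40:
        return "concordant high-gradient severe AS"
    flow = "low-flow" if svi_ml_m2 < 35 else "normal-flow"
    ef = "preserved LVEF" if lvef_pct >= 50 else "reduced LVEF"
    return (f"discordant low-gradient AS ({flow}, {ef}): "
            "further work-up (e.g. catheterisation) to confirm severity")

print(classify_as(0.8, 25, 30, 38))
```

The last branch is exactly the situation the review addresses: a small AVA with a low gradient, where catheterisation helps separate true severe AS from pseudostenosis and measurement artefacts.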