14 results for decision under risk
in BORIS: Bern Open Repository and Information System - Bern - Switzerland
Abstract:
Decisions require careful weighing of the risks and benefits associated with a choice. Some people need to be offered large rewards to balance even minimal risks, whereas others take great risks in the hope of only a minimal benefit. We show here that risk-taking is a modifiable behavior that depends on right hemisphere prefrontal activity. We used low-frequency, repetitive transcranial magnetic stimulation to transiently disrupt left or right dorsolateral prefrontal cortex (DLPFC) function before applying a well-known gambling paradigm that provides a measure of decision-making under risk. Individuals displayed significantly riskier decision-making after disruption of the right, but not the left, DLPFC. Our findings suggest that the right DLPFC plays a crucial role in the suppression of superficially seductive options. This confirms the asymmetric role of the prefrontal cortex in decision-making and reveals that this fundamental human capacity can be manipulated in normal subjects through cortical stimulation. The ability to modify risk-taking behavior may be translated into therapeutic interventions for disorders such as drug abuse or pathological gambling.
Abstract:
Individual risk preferences have a large influence on decisions, such as financial investments, career and health choices, or gambling. Decision making under risk has been studied both behaviorally and on a neural level. It remains unclear, however, how risk attitudes are encoded and integrated with choice. Here, we investigate how risk preferences are reflected in neural regions known to process risk. We collected functional magnetic resonance images of 56 human subjects during a gambling task (Preuschoff et al., 2006). Subjects were grouped into risk averters and risk seekers according to the risk preferences they revealed in a separate lottery task. We found that during the anticipation of high-risk gambles, risk averters show stronger responses in the ventral striatum and anterior insula than risk seekers. In addition, risk prediction error signals in the anterior insula, inferior frontal gyrus, and anterior cingulate indicate that risk averters do not properly dissociate between gambles that are more or less risky than expected. We suggest this may result in a general overestimation of prospective risk and lead to risk-avoidance behavior. This is the first study to show that behavioral risk preferences are reflected in the passive evaluation of risky situations. The results have implications for public policy in the financial and health domains.
Abstract:
Background We manipulated predation risk in a field experiment with the cooperatively breeding cichlid Neolamprologus pulcher by releasing no predator, a medium- or a large-sized fish predator inside underwater cages enclosing two to three natural groups. We assessed whether helpers changed their helping behaviour, and whether within-group conflict changed, depending on these treatments, testing three hypotheses: ‘pay-to-stay’ (PS), ‘risk avoidance’ (RA), or (future) reproductive benefits (RB). We also assessed whether helper food intake was reduced under risk, because this might reduce investment in other behaviours to save energy. Methodology/Principal Findings Medium and large helpers fed less under predation risk. Despite this effect, helpers invested more in territory defence, but not territory maintenance, under the risk of predation (supporting PS). Experimentally covering only the breeding shelter with sand induced more helper digging under predation risk compared to the control treatment (supporting PS). Aggression towards the introduced predator did not differ between the two predator treatments and increased with group member size and group size (supporting PS and RA). Large helpers increased their help ratio (helping effort/breeder aggression received, ‘punishment’ by the dominant pair in the group) in the predation treatments compared to the control treatment, suggesting they were more willing to pay to stay. Medium helpers did not show such effects. Large helpers also showed a higher submission ratio (submission/breeder aggression received) than medium helpers in all treatments (supporting PS). Conclusions/Significance We conclude that predation risk reduces helper food intake, but that despite this effect helpers were more willing to support the breeders, supporting PS. Effects of breeder punishment suggest that PS might be more important for large than for medium helpers. Evidence for RA was also detected. Finally, the results were inconsistent with RB.
Abstract:
The risk of a financial position is usually summarized by a risk measure. As this risk measure has to be estimated from historical data, it is important to be able to verify and compare competing estimation procedures. In statistical decision theory, risk measures for which such verification and comparison are possible are called elicitable. It is known that quantile-based risk measures such as value at risk are elicitable. In this paper, the existing result on the non-elicitability of expected shortfall is extended to all law-invariant spectral risk measures, unless they reduce to minus the expected value. Hence, it is unclear how to perform forecast verification or comparison for these measures. However, the class of elicitable law-invariant coherent risk measures does not reduce to minus the expected value: we show that it consists of certain expectiles.
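For orientation (our notation, not the paper's): a statistical functional T is called elicitable if it can be written as the minimizer of an expected scoring function S, which is exactly what makes forecast verification and comparison possible:

T(F) = \arg\min_{x \in \mathbb{R}} \; \mathbb{E}_{Y \sim F}\left[ S(x, Y) \right].

For example, the \alpha-quantile (value at risk) is elicited by the pinball loss S(x, y) = (\mathbf{1}\{x \ge y\} - \alpha)(x - y), and the \tau-expectile is elicited by the asymmetric squared loss S(x, y) = \lvert \mathbf{1}\{x \ge y\} - \tau \rvert \, (x - y)^2; expectiles with \tau \ge 1/2 are known to be coherent, which is the family the abstract refers to.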
Abstract:
OBJECTIVES Valve-sparing root replacement (VSRR) is thought to reduce the rate of thromboembolic and bleeding events compared with mechanical aortic root replacement (MRR) with a composite graft by avoiding oral anticoagulation. However, as VSRR carries a certain risk of subsequent reintervention, decision-making in the individual patient can be challenging. METHODS Of 100 Marfan syndrome (MFS) patients who underwent 169 aortic surgeries and have been followed at our institution since 1995, 59 consecutive patients without a history of dissection or prior aortic surgery underwent elective VSRR or MRR and were retrospectively analysed. RESULTS VSRR was performed in 29 patients (David n = 24, Yacoub n = 5) and MRR in 30 patients. The mean age was 33 ± 15 years. The mean follow-up was 6.5 ± 4 years (180 patient-years) after VSRR and 8.8 ± 9 years (274 patient-years) after MRR. Reoperation rates after root remodelling (Yacoub) were significantly higher than after the reimplantation (David) procedure (60 vs 4.2%, P = 0.01). The need for reintervention after the reimplantation procedure (0.8% per patient-year) was not significantly higher than after MRR (P = 0.44), but follow-up after VSRR was significantly shorter (P = 0.03). There was neither significant morbidity nor mortality associated with root reoperations. There were no neurological events after VSRR, compared with four stroke/intracranial bleeding events in the MRR group (log-rank, P = 0.11), translating into an event rate of 1.46% per patient-year following MRR. CONCLUSION The calculated annual failure rate after VSRR using the reimplantation technique was lower than the annual risk of thromboembolic or bleeding events. Since the perioperative risk of reintervention following VSRR is low, patients might benefit from VSRR even if redo surgery may become necessary during follow-up.
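As a reading aid (our arithmetic, not the authors'), the quoted event rate for the MRR group follows directly from the counts reported in the abstract:

\text{event rate}_{\mathrm{MRR}} = \frac{4\ \text{events}}{274\ \text{patient-years}} \approx 0.0146 \approx 1.46\%\ \text{per patient-year}.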
Abstract:
BACKGROUND High-risk prostate cancer (PCa) is an extremely heterogeneous disease. A clear definition of prognostic subgroups is mandatory. OBJECTIVE To develop a pretreatment prognostic model for PCa-specific survival (PCSS) in high-risk PCa based on combinations of unfavorable risk factors. DESIGN, SETTING, AND PARTICIPANTS We conducted a retrospective multicenter cohort study including 1360 consecutive patients with high-risk PCa treated at eight European high-volume centers. INTERVENTION Retropubic radical prostatectomy with pelvic lymphadenectomy. OUTCOME MEASUREMENTS AND STATISTICAL ANALYSIS Two Cox multivariable regression models were constructed to predict PCSS as a function of dichotomized clinical stage (<cT3 vs cT3-4), Gleason score (GS) (2-7 vs 8-10), and prostate-specific antigen (PSA; ≤20 ng/ml vs >20 ng/ml). The first, "extended" model includes all seven possible combinations; the second, "simplified" model includes three subgroups: a good prognosis subgroup (one single high-risk factor); an intermediate prognosis subgroup (PSA >20 ng/ml and stage cT3-4); and a poor prognosis subgroup (GS 8-10 in combination with at least one other high-risk factor). The predictive accuracy of the models was summarized and compared. Survival estimates and clinical and pathologic outcomes were compared between the three subgroups. RESULTS AND LIMITATIONS The simplified model yielded an R² of 33% with a 5-yr area under the curve (AUC) of 0.70, with no significant loss of predictive accuracy compared with the extended model (R²: 34%; AUC: 0.71). The 5- and 10-yr PCSS rates were 98.7% and 95.4%, 96.5% and 88.3%, and 88.8% and 79.7% for the good, intermediate, and poor prognosis subgroups, respectively (p = 0.0003). Overall survival, clinical progression-free survival, and histopathologic outcomes worsened significantly in a stepwise fashion from the good to the poor prognosis subgroups. Limitations of the study are the retrospective design and the long study period. CONCLUSIONS This study presents an intuitive and easy-to-use stratification of high-risk PCa into three prognostic subgroups. The model is useful for counseling and decision making in the pretreatment setting.
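To make the simplified grouping concrete, the following minimal Python sketch reproduces the three-subgroup stratification exactly as described in the abstract; the function name, boolean inputs and their encoding are our own illustrative choices, not code from the study.

def prognosis_subgroup(ct_stage_34: bool, gleason_8_10: bool, psa_gt_20: bool) -> str:
    """Classify a high-risk PCa patient into the simplified prognostic subgroups
    (illustrative sketch based on the abstract, not the authors' code)."""
    n_factors = ct_stage_34 + gleason_8_10 + psa_gt_20
    if gleason_8_10 and n_factors >= 2:
        return "poor"          # GS 8-10 combined with at least one other high-risk factor
    if psa_gt_20 and ct_stage_34:
        return "intermediate"  # PSA >20 ng/ml together with stage cT3-4
    return "good"              # one single high-risk factor

For example, under this encoding prognosis_subgroup(True, False, True) returns "intermediate".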
Abstract:
BACKGROUND HIV-1 RNA viral load (VL) testing is recommended to monitor antiretroviral therapy (ART) but is not available in many resource-limited settings. We developed and validated CD4-based risk charts to guide targeted VL testing. METHODS We modeled the probability of virologic failure up to 5 years of ART based on current and baseline CD4 counts, developed decision rules for targeted VL testing of 10%, 20% or 40% of patients in seven cohorts of patients starting ART in South Africa, and plotted cut-offs for VL testing on colour-coded risk charts. We assessed the accuracy of risk chart-guided VL testing to detect virologic failure in validation cohorts from South Africa, Zambia and the Asia-Pacific. FINDINGS 31,450 adult patients were included in the derivation cohorts and 25,294 patients in the validation cohorts. Positive predictive values increased with the percentage of patients tested: from 79% (10% tested) to 98% (40% tested) in the South African, from 64% to 93% in the Zambian, and from 73% to 96% in the Asia-Pacific cohorts. Corresponding increases in sensitivity were from 35% to 68% in South Africa, from 55% to 82% in Zambia, and from 37% to 71% in the Asia-Pacific. The area under the receiver operating characteristic curve increased from 0.75 to 0.91 in South Africa, from 0.76 to 0.91 in Zambia, and from 0.77 to 0.92 in the Asia-Pacific. INTERPRETATION CD4-based risk charts with optimal cut-offs for targeted VL testing may be useful to monitor ART in settings where VL capacity is limited.
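The accuracy measures reported above follow the usual definitions for a binary testing rule; the short Python sketch below (variable names and data layout are our own assumptions, not the study's code) shows how the positive predictive value and sensitivity of a risk-chart-guided rule would be computed from per-patient flags.

def ppv_and_sensitivity(selected, failed):
    """PPV and sensitivity of a targeted VL-testing rule.
    selected[i]: True if patient i is flagged for VL testing by the risk chart.
    failed[i]:   True if patient i actually experienced virologic failure.
    Illustrative sketch of the standard definitions only."""
    true_pos = sum(s and f for s, f in zip(selected, failed))
    ppv = true_pos / sum(selected)        # share of tested patients who have failure
    sensitivity = true_pos / sum(failed)  # share of all failures the rule detects
    return ppv, sensitivity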
Abstract:
Field: Surgery. Abstract: Introduction: Carotid endarterectomy (CEA) and coronary artery bypass grafting (CABG) can be approached in a combined or a staged fashion. Several key studies have shown no significant difference in perioperative stroke and death rates between combined and staged CEA/CABG. At present, conventional extracorporeal circulation (CECC) is regarded as the gold standard for on-pump coronary artery bypass grafting. By contrast, the use of minimized extracorporeal circulation (MECC) for CABG diminishes hemodilution, blood-air contact, foreign-surface contact and the inflammatory response. At the same time, general anaesthesia (GA) is a potential risk factor for a higher perioperative stroke rate after isolated CEA, not only for the ipsilateral but also for the contralateral side, especially in the case of contralateral high-grade stenosis or occlusion. The aim of the study was to analyze whether synchronous CEA/CABG using MECC (CEA/CABG group) allows the perioperative stroke risk to be reduced to the level of isolated CEA performed under GA (CEA-GA group). – Methods: A retrospective analysis of all patients who underwent CEA at our institution between January 2005 and December 2012 was performed. We compared outcomes of all patients undergoing CEA/CABG with those of all patients undergoing isolated CEA under GA during the same period. The CEA/CABG group was additionally compared to a reference group consisting of patients undergoing isolated CEA under local anaesthesia. The primary outcome was in-hospital stroke. – Results: A total of 367 CEAs were performed, from which 46 patients were excluded because they had either off-pump CABG or cardiac surgery procedures other than CABG combined with CEA. Of the remaining 321 patients, 74 were in the CEA/CABG and 64 in the CEA-GA group. There was a significantly higher rate of symptomatic stenoses among patients in the CEA-GA group (p<0.002). Three strokes (4.1%) were registered in the CEA/CABG group, two ipsilateral (2.7%) and one contralateral (1.4%) to the operated side. In the CEA-GA group, two ipsilateral strokes (3.1%) occurred. No difference was found between the groups (p=1.000). One patient with stroke in each group had a symptomatic stenosis preoperatively. – Conclusions: Outcomes with regard to mortality and neurologic injury are very good both in patients undergoing CEA alone and in patients undergoing synchronous CEA and CABG using the MECC system. Although the CEA/CABG group showed a slightly increased risk of stroke, combined treatment can be considered in particular clinical situations.
Abstract:
Decision strategies aim at enabling reasonable decisions in cases of uncertain policy decision problems which do not meet the conditions for applying standard decision theory. This paper focuses on decision strategies that account for uncertainties by deciding whether a proposed list of policy options should be accepted or revised (scope strategies) and whether to decide now or later (timing strategies). They can be used in participatory approaches to structure the decision process. As a basis, we propose to classify the broad range of uncertainties affecting policy decision problems along two dimensions, source of uncertainty (incomplete information, inherent indeterminacy and unreliable information) and location of uncertainty (information about policy options, outcomes and values). Decision strategies encompass multiple and vague criteria to be deliberated in application. As an example, we discuss which decision strategies may account for the uncertainties related to nutritive technologies that aim at reducing methane (CH4) emissions from ruminants as a means of mitigating climate change, limiting our discussion to published scientific information. These considerations not only speak in favour of revising rather than accepting the discussed list of options, but also in favour of active postponement or semi-closure of decision-making rather than closure or passive postponement.
Abstract:
BACKGROUND HIV-1 RNA viral load (VL) testing is recommended to monitor antiretroviral therapy (ART) but is not available in many resource-limited settings. We developed and validated CD4-based risk charts to guide targeted VL testing. METHODS We modeled the probability of virologic failure up to 5 years of ART based on current and baseline CD4 counts, developed decision rules for targeted VL testing of 10%, 20%, or 40% of patients in 7 cohorts of patients starting ART in South Africa, and plotted cutoffs for VL testing on colour-coded risk charts. We assessed the accuracy of risk chart-guided VL testing to detect virologic failure in validation cohorts from South Africa, Zambia, and the Asia-Pacific. RESULTS In total, 31,450 adult patients were included in the derivation cohorts and 25,294 patients in the validation cohorts. Positive predictive values increased with the percentage of patients tested: from 79% (10% tested) to 98% (40% tested) in the South African cohort, from 64% to 93% in the Zambian cohort, and from 73% to 96% in the Asia-Pacific cohort. Corresponding increases in sensitivity were from 35% to 68% in South Africa, from 55% to 82% in Zambia, and from 37% to 71% in the Asia-Pacific. The area under the receiver operating characteristic curve increased from 0.75 to 0.91 in South Africa, from 0.76 to 0.91 in Zambia, and from 0.77 to 0.92 in the Asia-Pacific. CONCLUSIONS CD4-based risk charts with optimal cutoffs for targeted VL testing may be useful to monitor ART in settings where VL capacity is limited.
Abstract:
Ninety-one Swiss veal farms producing under a label with improved welfare standards were visited between August and December 2014 to investigate risk factors related to antimicrobial drug use and mortality. All herds consisted of own and purchased calves, with a median of 77.4% purchased calves. The calves' mean age was 29±15 days at purchase, and the fattening period lasted on average 120±28 days. The mean carcass weight was 125±12 kg. A mean of 58±33 calves were fattened per farm and year, and purchased calves were bought from a mean of 20±17 farms of origin. Antimicrobial drug treatment incidence was calculated with the defined daily dose methodology. The mean treatment incidence (TIADD) was 21±15 daily doses per calf and year. The mean mortality risk was 4.1%, calves died at a mean age of 94±50 days, and the main causes of death were bovine respiratory disease (BRD, 50%) and gastro-intestinal disease (33%). Two multivariable models were constructed, one for antimicrobial drug treatment incidence (53 farms) and one for mortality (91 farms). No quarantine, a shared air space for several groups of calves, and no clinical examination upon arrival at the farm were associated with increased antimicrobial treatment incidence. Maximum group size and weight differences >100 kg within a group were associated with increased mortality risk, while vaccination and beef breed were associated with decreased mortality risk. The majority of antimicrobial treatments (84.6%) were given as group treatments with oral powder fed through an automatic milk feeding system. Combination products containing chlortetracycline with tylosin and sulfadimidine or with spiramycin were used for 54.9%, and amoxicillin for 43.7%, of the oral group treatments. The main indication for individual treatment was BRD (73%). The mean age at the time of treatment was 51 days, corresponding to an estimated weight of 80-100 kg. Individual treatments were mainly applied by injection (88.5%), and included administration of fluoroquinolones in 38.3%, penicillins (amoxicillin or benzylpenicillin) in 25.6%, macrolides in 13.1%, tetracyclines in 12.0%, 3rd- and 4th-generation cephalosporins in 4.7%, and florfenicol in 3.9% of the cases. The present study identified risk factors for increased antimicrobial drug treatment and mortality. This is an important basis for future studies aimed at reducing treatment incidence and mortality on veal farms. Our results indicate that improvement is needed in the selection of drugs for the treatment of veal calves according to the principles of prudent use of antibiotics.
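For context on the TIADD figure (the abstract does not spell out the calculation), the defined daily dose methodology counts how many standard daily doses the amount of antimicrobial used corresponds to. The Python sketch below is a minimal, hedged illustration of that general approach, assuming a per-kilogram defined daily dose (DDD, in mg/kg/day) and a standard weight for the treated calves, and normalising per calf and year as in the abstract; the exact DDD values, weights and normalisation used in the study may differ.

def treatment_incidence_add(total_mg, ddd_mg_per_kg_day, standard_weight_kg,
                            n_calves, days_at_risk):
    """TI_ADD: defined daily doses administered per calf and year.
    Hedged sketch of the general DDD approach; not the study's own formula."""
    n_daily_doses = total_mg / (ddd_mg_per_kg_day * standard_weight_kg)
    return n_daily_doses / n_calves * 365 / days_at_risk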
Abstract:
This in-depth study of the decision-making processes of the early 2000s shows that the Swiss consensus democracy has changed considerably. Power relations have transformed, conflict has increased, coalitions have become more unstable and outputs less predictable. Yet these challenges to consensus politics provide opportunities for innovation.
Abstract:
Consensus democracies like Switzerland are generally known to have a low innovation capacity (Lijphart 1999). This is due to the high number of veto points, such as perfect bicameralism or the popular referendum. These institutions provide actors opposing a policy with several opportunities to block potential policy change (Immergut 1990; Tsebelis 2002). In order to avoid the failure of a process because opposing actors activate veto points, decision-making processes in Switzerland tend to integrate a large number of actors with different - and often diverging - preferences (Kriesi and Trechsel 2008). Including a variety of actors in a decision-making process and taking their preferences into account implies important trade-offs. Integrating a large number of actors and accommodating their preferences takes time and carries the risk of resulting in lowest-common-denominator solutions. By contrast, major innovative reforms usually fail or come about only as a result of strong external pressure from the international environment, economic turmoil or the public (Kriesi 1980: 635f.; Kriesi and Trechsel 2008; Sciarini 1994). Standard decision-making processes are therefore characterized as reactive, slow and capable of only marginal adjustments (Kriesi 1980; Kriesi and Trechsel 2008; Linder 2009; Sciarini 2006). This, in turn, may be at odds with the rapid developments of international politics, the flexibility of the private sector, or the speed of technological development.