11 results for Linear decision rules
in BORIS: Bern Open Repository and Information System - Bern - Switzerland
Abstract:
Background Although CD4 cell count monitoring is used to decide when to start antiretroviral therapy in patients with HIV-1 infection, there are no evidence-based recommendations regarding its optimal frequency. It is common practice to monitor every 3 to 6 months, often coupled with viral load monitoring. We developed rules to guide the frequency of CD4 cell count monitoring in HIV infection before starting antiretroviral therapy, which we validated retrospectively in patients from the Swiss HIV Cohort Study. Methodology/Principal Findings We developed two prediction rules (a "Snap-shot rule" for a single sample and a "Track-shot rule" for multiple determinations) based on a systematic review of published longitudinal analyses of CD4 cell count trajectories. We applied the rules in 2,608 untreated patients to classify their 18,061 CD4 counts as either justifiable or superfluous, according to their prior ≥5% or <5% chance of meeting predetermined thresholds for starting treatment. The percentage of measurements that either rule falsely deemed superfluous never exceeded 5%. Superfluous CD4 determinations represented 4%, 11%, and 39% of all actual determinations for treatment thresholds of 500, 350, and 200 ×10⁶/L, respectively. The Track-shot rule was only marginally superior to the Snap-shot rule. Both rules lose usefulness as CD4 counts approach the treatment threshold. Conclusions/Significance Frequent CD4 count monitoring of patients with CD4 counts well above the threshold for initiating therapy is unlikely to identify patients who require therapy. It appears sufficient to measure the CD4 cell count 1 year after a count >650 for a treatment threshold of 200, >900 for 350, or >1150 for 500 ×10⁶/L. When CD4 counts fall below these limits, more frequent monitoring becomes advisable. These rules offer guidance for efficient CD4 monitoring, particularly in resource-limited settings.
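The Snap-shot rule as summarized here reduces to a simple threshold lookup. Below is a minimal Python sketch based solely on the cut-offs quoted in this abstract (>650 for a treatment threshold of 200, >900 for 350, >1150 for 500, all ×10⁶/L); the function name and the 3-month fallback interval are illustrative assumptions, not the authors' published algorithm.

```python
# Illustrative sketch of the "Snap-shot rule" summarized above.
# Cut-offs come from the abstract; the function name and the 3-month
# fallback interval are assumptions for illustration only.

# Treatment threshold -> CD4 count above which a 12-month
# re-measurement interval appears sufficient (all values x10^6 cells/L).
SNAPSHOT_CUTOFFS = {200: 650, 350: 900, 500: 1150}

def months_until_next_cd4(current_cd4: int, treatment_threshold: int) -> int:
    """Suggest a CD4 re-measurement interval in months: 12 months when
    the current count is above the Snap-shot cut-off for the chosen
    treatment threshold, otherwise a shorter (assumed 3-month) interval."""
    cutoff = SNAPSHOT_CUTOFFS[treatment_threshold]
    return 12 if current_cd4 > cutoff else 3

# A patient at 980 with a treatment threshold of 350 x10^6/L could wait
# a year before re-testing; at 620, more frequent monitoring is advisable.
assert months_until_next_cd4(980, 350) == 12
assert months_until_next_cd4(620, 350) == 3
```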
Abstract:
Venous thromboembolism (VTE) is a potentially lethal clinical condition that is suspected in patients with common clinical complaints, across many and varied clinical care settings. Once VTE is diagnosed, optimal therapeutic management (thrombolysis, IVC filters, type and duration of anticoagulants) and the ideal setting for that management (outpatient, critical care) are also controversial. Clinical prediction tools, including clinical decision rules and D-dimer testing, have been developed, and some validated, to assist clinical decision making along the diagnostic and therapeutic management paths for VTE. Despite these developments, practice variation is high and many controversies remain in the use of these clinical prediction tools. In this narrative review, we highlight challenges and controversies in the diagnostic and therapeutic management of VTE, with a focus on clinical decision rules and D-dimer.
Abstract:
Minor brain injury is a frequent condition. Validated clinical decision rules can help in deciding whether a computed tomography (CT) scan of the head is required. We hypothesized that institutional guidelines are not frequently used and that psychological factors are a common reason for ordering an unnecessary CT.
Abstract:
BACKGROUND HIV-1 RNA viral load (VL) testing is recommended to monitor antiretroviral therapy (ART) but is not available in many resource-limited settings. We developed and validated CD4-based risk charts to guide targeted VL testing. METHODS We modeled the probability of virologic failure up to 5 years of ART based on current and baseline CD4 counts, developed decision rules for targeted VL testing of 10%, 20%, or 40% of patients in seven cohorts of patients starting ART in South Africa, and plotted cut-offs for VL testing on colour-coded risk charts. We assessed the accuracy of risk chart-guided VL testing to detect virologic failure in validation cohorts from South Africa, Zambia, and the Asia-Pacific. FINDINGS In total, 31,450 adult patients were included in the derivation cohorts and 25,294 patients in the validation cohorts. Positive predictive values increased with the percentage of patients tested: from 79% (10% tested) to 98% (40% tested) in the South African, from 64% to 93% in the Zambian, and from 73% to 96% in the Asia-Pacific cohorts. Corresponding increases in sensitivity were from 35% to 68% in South Africa, from 55% to 82% in Zambia, and from 37% to 71% in the Asia-Pacific. The area under the receiver operating characteristic curve increased from 0.75 to 0.91 in South Africa, from 0.76 to 0.91 in Zambia, and from 0.77 to 0.92 in the Asia-Pacific. INTERPRETATION CD4-based risk charts with optimal cut-offs for targeted VL testing may be useful for monitoring ART in settings where VL capacity is limited.
Abstract:
BACKGROUND HIV-1 RNA viral load (VL) testing is recommended to monitor antiretroviral therapy (ART) but is not available in many resource-limited settings. We developed and validated CD4-based risk charts to guide targeted VL testing. METHODS We modeled the probability of virologic failure up to 5 years of ART based on current and baseline CD4 counts, developed decision rules for targeted VL testing of 10%, 20%, or 40% of patients in 7 cohorts of patients starting ART in South Africa, and plotted cutoffs for VL testing on colour-coded risk charts. We assessed the accuracy of risk chart-guided VL testing to detect virologic failure in validation cohorts from South Africa, Zambia, and the Asia-Pacific. RESULTS In total, 31,450 adult patients were included in the derivation cohorts and 25,294 patients in the validation cohorts. Positive predictive values increased with the percentage of patients tested: from 79% (10% tested) to 98% (40% tested) in the South African cohort, from 64% to 93% in the Zambian cohort, and from 73% to 96% in the Asia-Pacific cohort. Corresponding increases in sensitivity were from 35% to 68% in South Africa, from 55% to 82% in Zambia, and from 37% to 71% in the Asia-Pacific. The area under the receiver operating characteristic curve increased from 0.75 to 0.91 in South Africa, from 0.76 to 0.91 in Zambia, and from 0.77 to 0.92 in the Asia-Pacific. CONCLUSIONS CD4-based risk charts with optimal cutoffs for targeted VL testing may be useful for monitoring ART in settings where VL capacity is limited.
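The targeted-testing idea described in the two entries above can be sketched generically: rank patients by a modeled probability of virologic failure and reserve VL tests for the highest-risk fraction. The Python sketch below illustrates only that selection logic; the scoring function is a placeholder (the paper's actual model is fitted to current and baseline CD4 counts in the derivation cohorts), and all names and parameters are illustrative assumptions.

```python
# Generic sketch of risk-guided targeted viral load (VL) testing.
# The paper derives failure probabilities from current and baseline
# CD4 counts; the scoring function here is a stand-in, and all
# names/parameters are illustrative assumptions.
from typing import Callable, List, Tuple

def select_for_vl_testing(
    patients: List[Tuple[str, int, int]],     # (patient_id, baseline_cd4, current_cd4)
    risk_model: Callable[[int, int], float],  # estimated failure probability
    fraction_tested: float,                   # e.g. 0.10, 0.20, or 0.40
) -> List[str]:
    """Return the IDs of the highest-risk fraction of patients."""
    ranked = sorted(patients,
                    key=lambda p: risk_model(p[1], p[2]),
                    reverse=True)
    n_tested = max(1, round(fraction_tested * len(patients)))
    return [pid for pid, _, _ in ranked[:n_tested]]

# Placeholder risk model: lower current CD4 and smaller gain over
# baseline -> higher assumed failure risk. Not the paper's model.
def toy_risk(baseline_cd4: int, current_cd4: int) -> float:
    gain = current_cd4 - baseline_cd4
    return 1.0 / (1.0 + max(gain, 0) + current_cd4 / 100.0)

cohort = [("A", 150, 420), ("B", 200, 210), ("C", 100, 600)]
print(select_for_vl_testing(cohort, toy_risk, fraction_tested=0.4))  # -> ['B']
```

Testing a fixed fraction of the highest-risk patients is what produces the trade-off reported above: as the tested fraction grows from 10% to 40%, both sensitivity and positive predictive value rise at the cost of more tests.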
Abstract:
PURPOSE OF REVIEW Fever and neutropenia is the most common complication in the treatment of childhood cancer. This review summarizes recent publications that focus on improving the management of this condition, as well as those that seek to optimize translational research efforts. RECENT FINDINGS A number of clinical decision rules are available to assist in the identification of low-risk fever and neutropenia; however, few have undergone external validation and formal impact analysis. Emerging evidence suggests that acute fever and neutropenia management strategies should include time-to-antibiotic recommendations, and quality improvement initiatives have focused on eliminating barriers to early antibiotic administration. Despite reported increases in antimicrobial resistance, few studies have focused on the prediction, prevention, and optimal treatment of these infections, and their effect on risk stratification remains unknown. A consensus guideline for paediatric fever and neutropenia research is now available and may help reduce some of the heterogeneity between studies that has previously limited the translation of evidence into clinical practice. SUMMARY Risk stratification is recommended for children with cancer and fever and neutropenia. Further research is required to quantify the overall impact of this approach and to refine exactly which children will benefit from early antibiotic administration, as well as from modifications to empiric regimens to cover antibiotic-resistant organisms.
Abstract:
We consider collective decision problems given by a profile of single-peaked preferences defined over the real line and a set of pure public facilities to be located on the line. In this context, Bochet and Gordon (2012) provide a large class of priority rules based on efficiency, object-population monotonicity, and sovereignty. Each such rule is described by a fixed priority ordering among interest groups. We show that any priority rule which treats agents symmetrically (anonymity), respects some form of coherence across collective decision problems (reinforcement), and depends only on peak information (peak-only) is a weighted majoritarian rule. Each such rule defines priorities based on the relative size of the interest groups and specific weights attached to locations. We give an explicit account of the richness of this class of rules.
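For readers unfamiliar with the setting, the toy Python sketch below illustrates single-peaked preferences on the real line and a weighted pairwise-majority comparison between two candidate locations. It is a simplified stand-in for the general idea only: the weighted majoritarian class characterized in the paper is richer than this two-location comparison, and the function names and weights here are assumptions for illustration.

```python
# Toy illustration of single-peaked preferences on the real line and a
# weighted pairwise-majority comparison between two candidate locations.
# A simplified stand-in for the weighted majoritarian rules discussed
# above; names and weights are illustrative assumptions.
from typing import List

def prefers(peak: float, x: float, y: float) -> bool:
    """With single-peaked (symmetric) preferences, an agent prefers
    whichever location lies closer to their peak."""
    return abs(x - peak) < abs(y - peak)

def weighted_majority_winner(peaks: List[float],
                             x: float, y: float,
                             weight_x: float = 1.0,
                             weight_y: float = 1.0) -> float:
    """Return whichever of x, y wins the weighted majority comparison.
    Location-specific weights scale each location's support."""
    support_x = weight_x * sum(prefers(p, x, y) for p in peaks)
    support_y = weight_y * sum(prefers(p, y, x) for p in peaks)
    return x if support_x >= support_y else y

peaks = [0.1, 0.4, 0.45, 0.8]                               # agents' ideal points
print(weighted_majority_winner(peaks, 0.3, 0.7))            # -> 0.3 (3 vs 1 supporters)
print(weighted_majority_winner(peaks, 0.3, 0.7, 1.0, 4.0))  # -> 0.7 (weights flip the outcome)
```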
Abstract:
Population coding is widely regarded as a key mechanism for achieving reliable behavioral decisions. We previously introduced reinforcement learning for population-based decision making by spiking neurons. Here we generalize population reinforcement learning to spike-based plasticity rules that take account of the postsynaptic neural code. We consider spike/no-spike, spike-count, and spike-latency codes. The multi-valued and continuous-valued features in the postsynaptic code allow for a generalization of binary decision making to multi-valued decision making and continuous-valued action selection. We show that code-specific learning rules speed up learning both for discrete classification and for continuous regression tasks. The suggested learning rules also speed up with increasing population size, in contrast to standard reinforcement learning rules. Continuous action selection is further shown to explain realistic learning speeds in the Morris water maze. Finally, we introduce the concept of action perturbation, as opposed to the classical weight or node perturbation, as an exploration mechanism underlying reinforcement learning. Exploration in the action space greatly increases the speed of learning compared to exploration in the neuron or weight space.
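The population idea can be made concrete with a minimal reward-modulated learning sketch. The NumPy code below implements a generic REINFORCE-style rule for a population of binary (spike/no-spike) units voting on a two-alternative decision. It follows the general logic described in the abstract but is not the authors' specific code-aware plasticity rules; all parameter values and the majority-vote readout are assumptions for illustration.

```python
# Minimal sketch of population-based binary decision making with a
# REINFORCE-style, reward-modulated update for spike/no-spike units.
# Generic illustration of the idea in the abstract, not the authors'
# specific plasticity rules; parameters are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
n_inputs, n_neurons, lr = 10, 50, 0.1
W = rng.normal(0.0, 0.1, size=(n_neurons, n_inputs))

def sigmoid(u):
    return 1.0 / (1.0 + np.exp(-u))

def trial(x, target):
    """One learning trial: the population votes, reward modulates plasticity."""
    p = sigmoid(W @ x)                    # per-neuron spike probabilities
    spikes = rng.random(n_neurons) < p    # stochastic spike/no-spike code
    decision = int(spikes.mean() > 0.5)   # majority vote of the population
    reward = 1.0 if decision == target else -1.0
    # REINFORCE: move each neuron's spike probability toward what it did,
    # scaled by reward (eligibility = spikes - p).
    dW = lr * reward * np.outer(spikes - p, x)
    return decision, dW

# Two input patterns with opposite target decisions.
patterns = [(rng.normal(size=n_inputs), 0), (rng.normal(size=n_inputs), 1)]
for _ in range(500):
    x, target = patterns[rng.integers(2)]
    _, dW = trial(x, target)
    W += dW

accuracy = np.mean([trial(x, t)[0] == t for x, t in patterns for _ in range(50)])
print(f"post-training accuracy: {accuracy:.2f}")
```

Because every unit in the population receives the same scalar reward, enlarging the population sharpens the majority-vote readout without changing the learning rule, which is the intuition behind the population-size speed-up mentioned above.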
Abstract:
Once more, agriculture threatened to prevent all progress in multilateral trade rule-making at the Ninth WTO Ministerial Conference in December 2013. But this time, the "magic of Bali" worked. After the clock had been stopped, mainly because of the food security file, the ministers adopted a comprehensive package of decisions and declarations, mainly in respect of development issues. Five of these concern agriculture. Decision 38 on Public Stockholding for Food Security Purposes contains a "peace clause" which will now shield certain stockpile programmes from subsidy complaints in formal litigation. This article provides contextual background and analyses this decision from a legal perspective. It finds that, at best, Decision 38 provides a starting point for a WTO Work Programme for food security, for review at the Eleventh Ministerial Conference, which will probably take place in 2017. At worst, it may unduly widen the limited window for government-financed competition that exists under present rules in the WTO Agreement on Agriculture, yet without increasing global food security or even guaranteeing that no subsidy claims will be launched, or entertained, under the WTO dispute settlement mechanism. Hence, the Work Programme should seek more coherence between farm support and socio-economic and trade objectives when it comes to stockpiles. This also encompasses a review of the present WTO rules applying to other forms of food reserves and to regional or "virtual" stockpiles. Another piece of low-hanging fruit would be a decision to exempt food aid purchases from export restrictions.
Abstract:
Based on common aspects of recent models of career decision-making (CDM), a six-phase model of CDM for secondary students is presented and empirically evaluated. The study tested the hypothesis that students who are in later phases possess more career choice readiness and consider different numbers of career alternatives. A total of 266 Swiss secondary students completed measures tapping phase of CDM, career choice readiness, and number of considered career options. Career choice readiness increased with phase of CDM, and later phases were generally associated with a larger increase in career choice readiness. The number of considered career options showed a curvilinear development, with fewer options considered at the beginning and at the end of the process. Male students showed larger variability in their distribution across the process, with more male than female students in the first and last phases. Implications for theory and practice are presented.
Abstract:
The previous chapter presented the overall decision-making structure in Swiss politics at the beginning of the 21st century. This provides a general picture and allows for a comparison over time with the decision-making structure of the 1970s. However, an analysis of the overall decision-making structure potentially neglects important differences between policy domains (Atkinson and Coleman 1989; Knoke et al. 1996; Kriesi et al. 2006a; Sabatier 1987). Policy issues vary across policy domains, as do the political actors involved. In addition, actors may hold different policy preferences from one policy domain to the next, and they may also collaborate with different partners depending on the policy domain at stake. Examining differences between policy domains is particularly appropriate in Switzerland: because no fixed coalitions of government and opposition exist, actors create different coalitions in each policy domain (Linder and Schwarz 2008). Whereas important parts of the institutional setting are similar across policy domains, decision-making structures might still vary. As with the cross-time analysis conducted in the two previous chapters, stability of 'rules-in-form' might hide important variations in 'rules-in-use' across different policy domains.