47 results for Price of risk
Abstract:
AIM To investigate risk factors for the loss of multi-rooted teeth (MRT) in subjects treated for periodontitis and enrolled in supportive periodontal therapy (SPT). MATERIAL AND METHODS A total of 172 subjects were examined before (T0) and after active periodontal therapy (APT) (T1) and following a mean of 11.5 ± 5.2 (SD) years of SPT (T2). The association of risk factors with loss of MRT was analysed with multilevel logistic regression; the tooth was the unit of analysis. RESULTS Furcation involvement (FI) = 1 before APT was not a risk factor for tooth loss compared with FI = 0 (p = 0.37). Between T0 and T2, MRT with FI = 2 (OR: 2.92, 95% CI: 1.68-5.06, p = 0.0001) and FI = 3 (OR: 6.85, 95% CI: 3.40-13.83, p < 0.0001) were at a significantly higher risk of being lost than those with FI = 0. During SPT, smokers lost significantly more MRT than non-smokers (OR: 2.37, 95% CI: 1.05-5.35, p = 0.04). Non-smoking, compliant subjects with FI = 0/1 at T1 lost significantly fewer MRT during SPT than non-compliant smokers with FI = 2 (OR: 10.11, 95% CI: 2.91-35.11, p < 0.0001) and FI = 3 (OR: 17.18, 95% CI: 4.98-59.28, p < 0.0001), respectively. CONCLUSIONS FI = 1 was not a risk factor for tooth loss compared with FI = 0. FI = 2/3, smoking and lack of compliance with regular SPT were risk factors for the loss of MRT in subjects treated for periodontitis.
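The reported odds ratios come from a multilevel logistic regression with the tooth as the unit of analysis. Below is a minimal sketch of a comparable tooth-level model with subject-clustered standard errors; the file and column names (teeth.csv, lost, fi_class, smoker, subject_id) are hypothetical, and this approximates rather than reproduces the authors' multilevel analysis.

```python
# Tooth-level logistic regression with subject-clustered standard errors,
# approximating a multilevel analysis of tooth loss. All names hypothetical.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

teeth = pd.read_csv("teeth.csv")  # one row per multi-rooted tooth

model = smf.logit("lost ~ C(fi_class) + smoker", data=teeth).fit(
    cov_type="cluster", cov_kwds={"groups": teeth["subject_id"]}
)

# Odds ratios with 95% confidence intervals
or_table = np.exp(model.conf_int())
or_table.columns = ["2.5%", "97.5%"]
or_table["OR"] = np.exp(model.params)
print(or_table)
```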
Abstract:
In personal and societal contexts, people often evaluate the risk of environmental and technological hazards. Previous neuroscience research on risk evaluation has particularly assessed the direct personal risk of presented stimuli, which may, for instance, have comprised aspects of fear. Furthermore, risk evaluation has primarily been compared with tasks from other cognitive domains serving as control conditions, thus revealing brain activity related to risk in general, but not activity specifically associated with estimating a higher level of risk. Here we investigated the neural basis on which laypersons individually evaluated the risk of different potential hazards for society. Twenty healthy subjects underwent functional magnetic resonance imaging while evaluating the risk of fifty more or less risky conditions presented as written terms. Brain activations during the individual estimation of 'high' versus 'low' risk, and of negative versus neutral and positive emotional valences, were analyzed. Estimating hazards to be of high risk was associated with activation in the medial thalamus, anterior insula, caudate nucleus, cingulate cortex and further prefrontal and temporo-occipital areas. These areas were not involved according to an analysis of the emotion ratings. In conclusion, we emphasize a contribution of these brain areas to signaling high risk that is not primarily associated with the emotional valence of the risk items. These areas have previously been reported to be associated with viscerosensitive and implicit, besides emotional, processing. This suggests an intuitive contribution, or "gut feeling", not necessarily dependent on subjective emotional valence, when a high risk of environmental hazards is estimated.
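At the group level, the 'high' versus 'low' risk contrast reduces to a paired comparison of subject-wise activation maps. Below is a minimal sketch using synthetic beta maps and a voxelwise paired t-test; a real analysis would use dedicated fMRI software and corrected thresholds.

```python
# Voxelwise paired t-test across subjects for a 'high > low' risk contrast.
# The beta maps are synthetic stand-ins for real subject-level estimates.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_subjects, n_voxels = 20, 100_000
beta_high = rng.random((n_subjects, n_voxels))
beta_low = rng.random((n_subjects, n_voxels))

t_map, p_map = stats.ttest_rel(beta_high, beta_low, axis=0)

# Voxels more active when a hazard is judged to be of high risk (uncorrected)
active = (t_map > 0) & (p_map < 0.001)
print(f"{active.sum()} voxels exceed the uncorrected threshold")
```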
Abstract:
BACKGROUND The Cochrane risk of bias (RoB) tool has been widely embraced by the systematic review community, but several studies have reported that its reliability is low. We aim to investigate whether training of raters, including objective and standardized instructions on how to assess risk of bias, can improve the reliability of this tool. We describe the methods that will be used in this investigation and present an intensive standardized training package for risk of bias assessment that could be used by contributors to the Cochrane Collaboration and other reviewers. METHODS/DESIGN This is a pilot study. We will first perform a systematic literature review to identify randomized clinical trials (RCTs) that will be used for risk of bias assessment. Using the identified RCTs, we will then conduct a randomized experiment in which raters are allocated to one of two training schemes: minimal training or intensive standardized training. We will calculate the chance-corrected weighted kappa with 95% confidence intervals to quantify within- and between-group agreement for each domain of the risk of bias tool. To calculate between-group agreement, we will use risk of bias assessments from pairs of raters after resolution of disagreements. Between-group agreement will quantify the agreement between the risk of bias assessments of raters in the training groups and those of experienced raters. To compare the agreement of raters under the different training conditions, we will calculate differences between kappa values with 95% confidence intervals. DISCUSSION This study will investigate whether the reliability of the risk of bias tool can be improved by training raters using standardized instructions for risk of bias assessment. One group of inexperienced raters will receive intensive training on risk of bias assessment and the other will receive minimal training. By including a control group with minimal training, we will attempt to mimic what many review authors commonly have to do, that is, conduct risk of bias assessments of RCTs without much formal training or standardized instructions. If our results indicate that intensive standardized training improves the reliability of the RoB tool, our study is likely to help improve the quality of risk of bias assessments, which are a central component of evidence synthesis.
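The protocol's central statistic, a chance-corrected weighted kappa with a 95% confidence interval, can be sketched as follows. The ratings are synthetic, and the bootstrap interval is one common construction, not necessarily the one the protocol specifies.

```python
# Linear weighted kappa between two raters with a bootstrap 95% CI.
# Ratings are synthetic: 0 = low, 1 = unclear, 2 = high risk of bias.
import numpy as np
from sklearn.metrics import cohen_kappa_score

rng = np.random.default_rng(0)
rater_a = rng.integers(0, 3, size=60)
rater_b = np.clip(rater_a + rng.integers(-1, 2, size=60), 0, 2)

kappa = cohen_kappa_score(rater_a, rater_b, weights="linear")

boot = []
for _ in range(2000):
    idx = rng.integers(0, len(rater_a), len(rater_a))
    boot.append(cohen_kappa_score(rater_a[idx], rater_b[idx], weights="linear"))
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"weighted kappa = {kappa:.2f} (95% CI {lo:.2f} to {hi:.2f})")
```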
Abstract:
The comparison of radiotherapy techniques regarding secondary cancer risk has yielded contradictory results, possibly stemming from the many different approaches used to estimate risk. The purpose of this study was to make a comprehensive evaluation of the different available risk models applied to detailed whole-body dose distributions computed by Monte Carlo for various breast radiotherapy techniques, including conventional open tangents, 3D conformal wedged tangents and hybrid intensity-modulated radiation therapy (IMRT). First, organ-specific linear risk models developed by the International Commission on Radiological Protection (ICRP) and the Biological Effects of Ionizing Radiation (BEIR) VII committee were applied to mean doses, for remote organs only and for all solid organs. Then, different general non-linear risk models were applied to the whole-body dose distribution. Finally, organ-specific non-linear risk models for the lung and breast were used to assess the secondary cancer risk for these two organs. A total of 32 different calculated absolute risks resulted in a broad range of values (between 0.1% and 48.5%), underlining the large uncertainties in absolute risk calculation. The ratio of risk between two techniques has often been proposed as a more robust assessment than the absolute risk. We found that this ratio could also vary substantially across the different approaches to risk estimation; in some cases it ranged from below one to above one, translating into inconsistent conclusions about which technique carries the higher risk. We found, however, that the hybrid IMRT technique resulted in a systematic reduction of risk compared with the other techniques investigated, even though the magnitude of this reduction varied substantially with the approach used. Based on the available epidemiological data, a reasonable approach to risk estimation would be to use organ-specific non-linear risk models applied to the dose distributions of organs within or near the treatment fields (lungs and contralateral breast in the case of breast radiotherapy), as the majority of radiation-induced secondary cancers are found in the beam-bordering regions.
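The contrast between the linear and non-linear model families can be illustrated with two toy risk functions: a linear dose-risk relation of the ICRP/BEIR type and a linear-exponential form in which cell kill flattens induction at high doses. The coefficients below are illustrative, not the committees' published values.

```python
# Toy organ-risk models: linear vs. linear-exponential dose response.
import numpy as np

def linear_risk(mean_dose_gy, coeff_per_gy):
    """Risk proportional to organ mean dose (plausible for remote organs)."""
    return coeff_per_gy * mean_dose_gy

def linear_exponential_risk(dose_gy, alpha, beta):
    """Non-linear model: induction rises with dose, cell kill flattens it."""
    return alpha * dose_gy * np.exp(-beta * dose_gy)

doses = np.array([0.5, 2.0, 10.0, 25.0])  # Gy, hypothetical organ doses
print(linear_risk(doses, coeff_per_gy=0.005))
print(linear_exponential_risk(doses, alpha=0.005, beta=0.1))
```

The divergence between the two curves at in-field dose levels is one reason the 32 absolute risk estimates spanned such a wide range.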
Abstract:
OBJECTIVE The natural course of chronic hepatitis C varies widely. To improve the profiling of patients at risk of developing advanced liver disease, we assessed the relative contribution of factors for liver fibrosis progression in hepatitis C. DESIGN We analysed 1461 patients with chronic hepatitis C with an estimated date of infection and at least one liver biopsy. Risk factors for an accelerated fibrosis progression rate (FPR), defined as ≥0.13 Metavir fibrosis units per year, were identified by logistic regression. Examined factors included age at infection, sex, route of infection, HCV genotype, body mass index (BMI), significant alcohol drinking (≥20 g/day for ≥5 years), HIV coinfection and diabetes. In a subgroup of 575 patients, we assessed the impact of single nucleotide polymorphisms previously associated with fibrosis progression in genome-wide association studies. Results were expressed as the attributable fraction (AF) of risk for accelerated FPR. RESULTS Age at infection (AF 28.7%), sex (AF 8.2%), route of infection (AF 16.5%) and HCV genotype (AF 7.9%) contributed to accelerated FPR in the Swiss Hepatitis C Cohort Study, whereas significant alcohol drinking, HIV coinfection, diabetes and BMI did not. In genotyped patients, variants at rs9380516 (TULP1), rs738409 (PNPLA3), rs4374383 (MERTK) (AF 19.2%) and rs910049 (major histocompatibility complex region) significantly added to the risk of accelerated FPR. Results were replicated in three additional independent cohorts, and a meta-analysis confirmed the role of age at infection, sex, route of infection, HCV genotype, rs738409, rs4374383 and rs910049 in accelerating FPR. CONCLUSIONS Most factors accelerating liver fibrosis progression in chronic hepatitis C are unmodifiable.
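The attributable fraction itself follows a standard construction (Levin's population attributable fraction); a minimal sketch with made-up numbers, not the study's estimates:

```python
# Population attributable fraction for a risk factor of accelerated FPR.
def attributable_fraction(prevalence, relative_risk):
    """Share of accelerated progression attributable to the exposure."""
    excess = prevalence * (relative_risk - 1.0)
    return excess / (1.0 + excess)

# Hypothetical numbers, e.g. for infection at an older age
print(f"AF = {attributable_fraction(prevalence=0.40, relative_risk=2.5):.1%}")
```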
Abstract:
Assessing and managing risks relating to the consumption of foodstuffs and to the environment has been one of the most complex legal issues in WTO law ever since the Agreement on Sanitary and Phytosanitary Measures was adopted at the end of the Uruguay Round and entered into force in 1995. The problem has been expounded in a number of cases, with panels and the Appellate Body adopting different philosophies in interpreting the Agreement and the basic concept of risk assessment as defined in Annex A para. 4. Risk assessment entails fundamental questions of law and science, and different interpretations reflect different underlying perceptions of science and its relationship to the law. The present thesis, supported by the Swiss National Research Foundation, undertakes an in-depth analysis of these underlying perceptions. The author expounds the essence of, and differences between, positivism and relativism in philosophy and the natural sciences, and clarifies the relationship between fundamental concepts such as risk, hazard and probability. This investigation is a remarkable effort on the part of a lawyer keen to learn more about the fundamentals upon which the law – often unconsciously – is operated by the legal profession and the trade community. Based upon these insights, he turns to a critical assessment of the jurisprudence of both panels and the Appellate Body. Extensively referring to and discussing the literature, he deconstructs findings and decisions in light of their implied and assumed underlying philosophies and perceptions of the relationship of law and science, in particular in the field of food standards. Finding that neither positivism nor relativism provides adequate answers, the author turns to critical rationalism and applies the methodology of falsification developed by Karl R. Popper. Critical rationalism allows discourse in science and law to be combined and helps prepare the ground for a new approach to risk assessment and risk management. Linking the problem to the doctrine of multilevel governance, the author develops a theory allocating risk assessment to international fora while leaving risk management to national and democratically accountable governments. While the author throughout the thesis questions the possibility of separating risk assessment and risk management, the thesis offers new avenues which may assist in structuring a complex and difficult problem.
Abstract:
BACKGROUND/AIMS Several countries are working to adapt clinical trial regulations to align the approval process with the level of risk for trial participants. The optimal framework for categorizing clinical trials according to risk remains unclear, however. In January 2014, Switzerland became the first European country to adopt a risk-based categorization procedure. We assessed how accurately and consistently clinical trials are categorized using two different approaches: an approach using criteria set forth in the new law (concept) or an intuitive approach (ad hoc). METHODS This was a randomized controlled trial with a method-comparison study nested in each arm. We used clinical trial protocols approved by eight Swiss ethics committees between 2010 and 2011. Protocols were randomly assigned to be categorized into one of three risk categories using the concept or the ad hoc approach. Each protocol was independently categorized by the trial's sponsor, a group of experts and the approving ethics committee. The primary outcome was the difference in categorization agreement between the expert group and sponsors across arms. Linear weighted kappa was used to quantify agreement, with the difference between kappas being the primary effect measure. RESULTS We included 142 of 231 protocols in the final analysis (concept = 78; ad hoc = 64). Raw agreement between the expert group and sponsors was 0.74 in the concept arm and 0.78 in the ad hoc arm. Chance-corrected agreement was higher in the ad hoc arm (kappa: 0.34, 95% confidence interval: 0.10-0.58) than in the concept arm (0.27, 0.06-0.50), but the difference was not significant (p = 0.67). LIMITATIONS The main limitation was the large number of protocols excluded from the analysis, mostly because they did not fit the new law's definition of a clinical trial. CONCLUSION A structured risk categorization approach was not better than an ad hoc approach. Laws introducing risk-based approaches should provide guidelines, examples and templates to ensure correct application.
Abstract:
BACKGROUND HIV-1 RNA viral load (VL) testing is recommended to monitor antiretroviral therapy (ART) but is not available in many resource-limited settings. We developed and validated CD4-based risk charts to guide targeted VL testing. METHODS We modeled the probability of virologic failure up to 5 years of ART based on current and baseline CD4 counts, developed decision rules for targeted VL testing of 10%, 20% or 40% of patients in seven cohorts of patients starting ART in South Africa, and plotted cut-offs for VL testing on colour-coded risk charts. We assessed the accuracy of risk chart-guided VL testing in detecting virologic failure in validation cohorts from South Africa, Zambia and the Asia-Pacific. FINDINGS 31,450 adult patients were included in the derivation cohorts and 25,294 patients in the validation cohorts. Positive predictive values increased with the percentage of patients tested: from 79% (10% tested) to 98% (40% tested) in the South African, from 64% to 93% in the Zambian and from 73% to 96% in the Asia-Pacific cohorts. Corresponding increases in sensitivity were from 35% to 68% in South Africa, from 55% to 82% in Zambia and from 37% to 71% in the Asia-Pacific. The area under the receiver operating characteristic curve increased from 0.75 to 0.91 in South Africa, from 0.76 to 0.91 in Zambia and from 0.77 to 0.92 in the Asia-Pacific. INTERPRETATION CD4-based risk charts with optimal cut-offs for targeted VL testing may be useful for monitoring ART in settings where VL capacity is limited.
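Operationally, a risk-chart decision rule amounts to "test the X% of patients with the highest predicted failure risk". The sketch below shows how the reported accuracy measures (PPV, sensitivity, area under the curve) fall out of such a rule; the risks and outcomes are synthetic, not the cohort data, so the numbers will not match the abstract.

```python
# Evaluate 'test the riskiest X%' rules against true virologic failure.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(2)
failed = rng.random(25_294) < 0.07  # synthetic true failure status
predicted_risk = np.where(failed,
                          rng.beta(4, 2, failed.shape),
                          rng.beta(2, 4, failed.shape))

for fraction in (0.10, 0.20, 0.40):
    cutoff = np.quantile(predicted_risk, 1 - fraction)
    tested = predicted_risk >= cutoff
    ppv = failed[tested].mean()          # flagged patients who truly failed
    sensitivity = tested[failed].mean()  # failures that were flagged
    print(f"{fraction:.0%} tested: PPV={ppv:.2f}, sensitivity={sensitivity:.2f}")

print("AUC =", round(roc_auc_score(failed, predicted_risk), 2))
```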
Abstract:
Recent studies of avalanche risk in alpine settlements have suggested a strong dependency of the development of risk on variations in damage potential. Based on these findings, analyses of probable maximum losses in avalanche-prone areas of the municipality of Davos (CH) were used as an indicator for the long-term development of values at risk. Even though the results were subject to significant uncertainties, they underlined the dependency of today's risk on the historical development of land use: small changes in the lateral extent of endangered areas had a considerable impact on the exposure of values. In a second step, temporal variations in damage potential between 1950 and 2000 were compared in two study areas representing typical alpine socio-economic development patterns: Davos (CH) and Galtür (A). The resulting trends were found to be similar; the damage potential increased significantly in both number and value. Thus, the development of natural risk in settlements can largely be attributed to long-term shifts in damage potential.
Abstract:
The fatality risk caused by avalanches on road networks can be analysed using a long-term approach, resulting in a mean value of risk, or with emphasis on short-term fluctuations due to the temporal variability of both the hazard potential and the damage potential. In this study, the approach for analysing the long-term fatality risk was adapted to model the highly variable short-term risk, with emphasis on the temporal variability of the damage potential and the related risk peaks. For defined hazard scenarios resulting from classified amounts of snow accumulation, the fatality risk was calculated by modelling the hazard potential and observing the traffic volume. The avalanche occurrence probability was calculated using a statistical relationship between new snow height and observed avalanche releases, and the number of persons at risk was determined from the recorded traffic density. The method yielded a value for the fatality risk within the observed time frame for the studied road segment. The long-term and short-term fatality risks due to snow avalanches were compared with the average fatality risk due to traffic accidents. The application of the method showed that the long-term avalanche risk is lower than the fatality risk due to traffic accidents, whereas the analysis of short-term avalanche-induced fatality risk revealed risk peaks 50 times higher than the statistical accident risk. Apart from situations with a high hazard level and high traffic density, risk peaks result both from a high hazard level combined with a low traffic density and from a high traffic density combined with a low hazard level. This provides evidence for the importance of the temporal variability of the damage potential in risk simulations on road networks. The assumed dependence of the risk calculation on the three-day precipitation sum is a simplification, so further research is needed to improve the determination of the diurnal avalanche probability. Nevertheless, the presented approach may serve as a conceptual step towards risk-based decision-making in risk management.
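The short-term risk described above is, in essence, an occurrence probability multiplied by the number of exposed road users and a vulnerability term. A sketch with illustrative parameter values (not the study's calibrated ones) shows how risk peaks can arise from either factor:

```python
# Short-term avalanche fatality risk on a road segment (illustrative values).
def fatality_risk(p_avalanche, traffic_per_hour, crossing_time_h, vulnerability):
    """Expected fatalities per hour on the endangered segment."""
    persons_at_risk = traffic_per_hour * crossing_time_h
    return p_avalanche * persons_at_risk * vulnerability

# High hazard level (large 3-day snow accumulation) with sparse traffic ...
print(fatality_risk(0.02, traffic_per_hour=30, crossing_time_h=0.01,
                    vulnerability=0.3))
# ... versus low hazard level with dense traffic
print(fatality_risk(0.002, traffic_per_hour=600, crossing_time_h=0.01,
                    vulnerability=0.3))
```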
Abstract:
We developed a model to calculate a quantitative risk score for individual aquaculture sites. The score indicates the risk of a site being infected with a specific fish pathogen (viral haemorrhagic septicaemia virus (VHSV), infectious haematopoietic necrosis virus or koi herpesvirus) and is intended to be used for risk-ranking sites to support surveillance for the demonstration of zone or member state freedom from these pathogens. The inputs to the model comprise a range of quantitative and qualitative estimates of risk factors organised into five risk themes: (1) live fish and egg movements; (2) exposure via water; (3) on-site processing; (4) short-distance mechanical transmission; and (5) distance-independent mechanical transmission. The calculated risk score for an individual aquaculture site is a value between zero and one and indicates the risk of a site relative to that of other sites (thereby allowing ranking). The model was applied to 76 rainbow trout farms in three countries (42 in England, 32 in Italy and 2 in Switzerland) to establish their risk of being infected with VHSV. Risk scores for farms in England and Italy showed great variation, clearly enabling ranking. Scores ranged from 0.002 to 0.254 (mean 0.080) in England and from 0.011 to 0.778 (mean 0.130) in Italy, reflecting the diversity of infection status of farms in these countries. Requirements for broader application of the model are discussed; cost-efficient farm data collection is important to realise the benefits of a risk-based approach.
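A score bounded by zero and one that aggregates five risk themes suggests a weighted combination of per-theme scores. The sketch below assumes hypothetical weights and theme values; it is not the published scoring model.

```python
# Weighted [0, 1] site risk score from five theme scores (all hypothetical).
THEMES = {
    "live_fish_and_egg_movements": 0.40,
    "exposure_via_water": 0.25,
    "on_site_processing": 0.15,
    "short_distance_mechanical": 0.10,
    "distance_independent_mechanical": 0.10,
}

def site_score(theme_scores):
    """Combine per-theme scores (each in [0, 1]) into one ranking score."""
    return sum(w * theme_scores[t] for t, w in THEMES.items())

farm = {"live_fish_and_egg_movements": 0.6, "exposure_via_water": 0.3,
        "on_site_processing": 0.0, "short_distance_mechanical": 0.2,
        "distance_independent_mechanical": 0.1}
print(f"risk score: {site_score(farm):.3f}")  # used to rank farms
```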
Abstract:
Trabecular bone score (TBS) is a grey-level textural index of bone microarchitecture derived from lumbar spine dual-energy X-ray absorptiometry (DXA) images, and a BMD-independent predictor of fracture risk. The objective of this meta-analysis was to determine whether TBS predicts fracture risk independently of FRAX probability and to examine their combined performance by adjusting the FRAX probability for TBS. We used individual-level data from 17,809 men and women in 14 prospective population-based cohorts. Baseline evaluation included TBS and the FRAX risk variables; outcomes during follow-up (mean 6.7 years) comprised major osteoporotic fractures. The association between TBS, FRAX probabilities and the risk of fracture was examined using an extension of the Poisson regression model in each cohort and for each sex, and expressed as the gradient of risk (GR; hazard ratio per 1 SD change in the risk variable in the direction of increased risk). FRAX probabilities were adjusted for TBS using an adjustment factor derived from an independent cohort (the Manitoba Bone Density Cohort). Overall, the GR of TBS for major osteoporotic fracture was 1.44 (95% CI: 1.35-1.53) when adjusted for age and time since baseline, and was similar in men and women (p > 0.10). When additionally adjusted for the FRAX 10-year probability of major osteoporotic fracture, TBS remained a significant, independent predictor of fracture (GR: 1.32, 95% CI: 1.24-1.41). The adjustment of FRAX probability for TBS resulted in a small increase in the GR (1.76, 95% CI: 1.65-1.87 vs. 1.70, 95% CI: 1.60-1.81). A smaller change in GR was observed for hip fracture (FRAX hip fracture probability GR: 2.25 vs. 2.22). TBS is a significant predictor of fracture risk independently of FRAX. The findings support the use of TBS as a potential adjustment for FRAX probability, though the impact of the adjustment remains to be determined in the context of clinical assessment guidelines.
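A gradient of risk is simply the rate ratio per 1 SD change in the predictor. A minimal Poisson-regression sketch is given below; the data file and columns are hypothetical, and the study used an extension of the Poisson model that is not reproduced here.

```python
# Gradient of risk (rate ratio per 1 SD of TBS) from a Poisson model
# with log person-time as offset. File and column names are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

cohort = pd.read_csv("cohort.csv")  # fracture (0/1), years, tbs, age

# Sign flipped so +1 SD points in the direction of increased risk (lower TBS)
cohort["tbs_z"] = -(cohort["tbs"] - cohort["tbs"].mean()) / cohort["tbs"].std()

model = smf.poisson("fracture ~ tbs_z + age", data=cohort,
                    offset=np.log(cohort["years"])).fit()
print(f"gradient of risk per SD: {np.exp(model.params['tbs_z']):.2f}")
```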
Abstract:
CONTEXT Hyperthyroidism is an established risk factor for atrial fibrillation (AF), but information concerning the association with variation within the normal range of thyroid function, and concerning subgroups at risk, is lacking. OBJECTIVE This study aimed to investigate the association between normal thyroid function and AF prospectively and to explore potential differential risk patterns. DESIGN, SETTING, AND PARTICIPANTS From the Rotterdam Study, we included 9166 participants aged ≥ 45 years with TSH and/or free T4 (FT4) measurements and AF assessment (1997-2012; median follow-up 6.8 years), with 399 prevalent and 403 incident AF cases. MAIN OUTCOME MEASURES Outcome measures were threefold: 1) hazard ratios (HRs) for the risk of incident AF from Cox proportional-hazards models; 2) 10-year absolute risks taking the competing risk of death into account; and 3) the discrimination ability of adding FT4 to the CHARGE-AF simple model, an established prediction model for AF. RESULTS Higher FT4 levels were associated with a higher risk of AF (HR: 1.63, 95% confidence interval: 1.19-2.22) when comparing the highest with the lowest quartile. Absolute 10-year risks increased with higher FT4 from 1% to 9% in participants ≤ 65 years and from 6% to 12% in those ≥ 65 years. Discrimination of the prediction model improved when FT4 was added to the simple model (c-statistic: 0.722 vs. 0.729; P = .039). TSH levels were not associated with AF. CONCLUSIONS There is an increased risk of AF with higher FT4 levels within the normal range, especially in younger subjects. Adding FT4 to the simple model slightly improved discrimination of risk prediction.
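The two headline analyses, a Cox proportional-hazards model for FT4 and a c-statistic comparison with and without FT4, can be sketched with the lifelines package. The data extract and column names are hypothetical; this is not the Rotterdam Study code.

```python
# Cox model for AF risk and c-statistic comparison with/without FT4.
import pandas as pd
from lifelines import CoxPHFitter

df = pd.read_csv("cohort_extract.csv")  # time, af_event, age, sex, ft4

base = CoxPHFitter().fit(df[["time", "af_event", "age", "sex"]],
                         duration_col="time", event_col="af_event")
full = CoxPHFitter().fit(df[["time", "af_event", "age", "sex", "ft4"]],
                         duration_col="time", event_col="af_event")

print("HR per unit FT4:", full.hazard_ratios_["ft4"])
print("c-statistic:", base.concordance_index_, "->", full.concordance_index_)
```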