793 results for sensemaking of risk


Relevance: 100.00%

Abstract:

The blaESBL and blaAmpC genes in Enterobacteriaceae are spread by plasmid-mediated integrons, insertion sequences, and transposons, some of which are homologous in bacteria from food animals, foods, and humans. These genes have been frequently identified in Escherichia coli and Salmonella from food animals, the most common being blaCTX-M-1, blaCTX-M-14, and blaCMY-2. Identifying risk factors for their occurrence in food animals is complex. In addition to generic antimicrobial use, cephalosporin usage is an important risk factor for the selection and spread of these genes. Extensive international trade in animals is a further risk factor. There are no data on the effectiveness of individual control options in reducing public health risks. A highly effective option would be to stop or restrict cephalosporin usage in food animals; decreasing total antimicrobial use is also a high priority. Implementation of measures to limit strain dissemination (increasing farm biosecurity, controls on animal trade, and other general post-harvest controls) is also important.

Relevance: 100.00%

Abstract:

To assess the prevalence of tooth wear on buccal/facial and lingual/palatal tooth surfaces and identify related risk factors in a sample of young European adults aged 18-35 years. Calibrated and trained examiners measured tooth wear using the basic erosive wear examination (BEWE) in 3187 patients in seven European countries and assessed the impact of risk factors with a previously validated questionnaire. Each individual was characterized by the highest BEWE score recorded for any scoreable surface. Bivariate analyses examined the proportion of participants who scored 2 or 3 in relation to a range of demographic, dietary and oral care variables. The highest tooth wear BEWE score was 0 for 1368 patients (42.9%), 1 for 883 (27.7%), 2 for 831 (26.1%) and 3 for 105 (3.3%). There were large differences between countries, with the highest levels of tooth wear observed in the UK. Important risk factors for tooth wear included heartburn or acid reflux, repeated vomiting, residence in rural areas, electric tooth brushing and snoring. We found no evidence that waiting after breakfast before tooth brushing has any effect on the degree of tooth wear (p=0.088). Fresh fruit and juice intake was positively associated with tooth wear. In this adult sample, 29% had signs of tooth wear (BEWE scores of 2 or 3), making it a common presenting feature in European adults.
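A minimal sketch of the scoring rule described above (each participant is assigned their highest BEWE score, then the proportion scoring 2 or 3 is computed); the data frame and column names are invented for illustration:

```python
# Hypothetical sketch: deriving each participant's highest BEWE score and the
# proportion with scores 2-3, as described in the abstract. Data are invented.
import pandas as pd

# One row per scoreable surface: participant id, surface id, BEWE score (0-3)
surfaces = pd.DataFrame({
    "patient": [1, 1, 1, 2, 2, 3, 3, 3],
    "surface": ["11b", "21b", "11p", "11b", "21b", "11b", "21b", "11p"],
    "bewe":    [0, 1, 0, 2, 1, 3, 2, 1],
})

# Each individual is characterized by the highest score on any surface
highest = surfaces.groupby("patient")["bewe"].max()

# Proportion of participants with the outcome used in the bivariate analyses
prop_2_or_3 = (highest >= 2).mean()
print(highest.to_dict())                           # {1: 1, 2: 2, 3: 3}
print(f"proportion scoring 2 or 3: {prop_2_or_3:.2f}")
```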

Relevance: 100.00%

Abstract:

BACKGROUND Patent foramen ovale (PFO) is associated with cryptogenic stroke (CS), although the pathogenicity of a discovered PFO in the setting of CS is typically unclear. Transesophageal echocardiography features such as PFO size, an associated hypermobile septum, and the presence of a right-to-left shunt at rest have all been proposed as markers of risk. The association of these transesophageal echocardiography features with other markers of pathogenicity has not been examined. METHODS AND RESULTS We used a recently derived score based on clinical and neuroimaging features to stratify patients with PFO and CS by the probability that their stroke is PFO-attributable. We examined whether high-risk transesophageal echocardiography features were seen more frequently in patients more likely to have had a PFO-attributable stroke (n=637) compared with those less likely to have had one (n=657). A large physiologic shunt size was not seen more frequently among those with probable PFO-attributable strokes (odds ratio [OR], 0.92; P=0.53). Neither a hypermobile septum nor a right-to-left shunt at rest was detected more often in those with a probable PFO-attributable stroke (OR, 0.80; P=0.45 and OR, 1.15; P=0.11, respectively). CONCLUSIONS We found no evidence that the proposed transesophageal echocardiography risk markers of large PFO size, hypermobile septum, and presence of a right-to-left shunt at rest are associated with clinical features suggesting that a CS is PFO-attributable. Additional tools to describe PFOs may be useful in helping to determine whether an observed PFO is incidental or pathogenically related to CS.
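As a sketch of the kind of comparison reported above, the odds ratio for one echocardiographic feature across the two patient groups can be computed from a 2x2 table; the counts below are invented and do not reproduce the study's data:

```python
# Hypothetical sketch: odds ratio for a TEE feature (e.g. shunt at rest) in
# probable PFO-attributable (n=637) vs other (n=657) strokes. Counts invented.
import numpy as np
from scipy.stats import fisher_exact

#                 feature present, feature absent
table = np.array([[150, 487],    # probable PFO-attributable stroke
                  [135, 522]])   # stroke less likely PFO-attributable

odds_ratio, p_value = fisher_exact(table)
print(f"OR = {odds_ratio:.2f}, p = {p_value:.3f}")
```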

Relevance: 100.00%

Abstract:

INFLUENCE OF ANCHORING ON MISCARRIAGE RISK PERCEPTION ASSOCIATED WITH AMNIOCENTESIS Publication No. ___________ Regina Nuccio, BS Supervisory Professor: Claire N. Singletary, MS, CGC Amniocentesis is the most common invasive procedure performed during pregnancy (Eddleman et al., 2006). One important factor that women consider when making a decision about amniocentesis is the risk of miscarriage associated with the procedure. People use heuristics such as anchoring, the use of a prior belief about the magnitude of a risk as a frame of reference against which new information is synthesized, to better understand risks that they encounter in their lives. This study aimed to determine women's perception of the miscarriage risk associated with amniocentesis before and after a genetic counseling session, and to determine which factors are most likely to anchor that perception. Most women perceived the risk as low or average pre-counseling and were likely to indicate the numeric risk of amniocentesis as <1%. A higher percentage of patients correctly identified the numeric risk as <1% post-counseling than pre-counseling. However, most patients' feeling about the risk did not change after the genetic counseling session (60%), regardless of how they perceived the risk before discussing amniocentesis with a genetic counselor. Those whose risk perception did change after the discussion showed a decreased risk perception (p<0.0001). Of the multitude of factors studied, only two showed significance: having a friend or relative with a personal or family history of a genetic disorder was associated with a lower risk perception (p=0.001), and already having a child was associated with a lower risk perception (p=0.038). The lack of significant factors may reflect the uniqueness of each patient's heuristic framework and reinforces the importance of genetic counseling in eliciting individual concerns.
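For a paired pre/post comparison like the one above (the same patients identifying the correct numeric risk before and after counseling), McNemar's test is a standard choice; a sketch with invented counts:

```python
# Hypothetical sketch: paired pre- vs post-counseling comparison of correctly
# identifying the <1% miscarriage risk, using McNemar's test. Counts invented.
from statsmodels.stats.contingency_tables import mcnemar

#                post correct, post incorrect
table = [[52, 3],      # pre correct
         [28, 17]]     # pre incorrect

result = mcnemar(table, exact=True)
print(f"p = {result.pvalue:.4f}")  # tests the change in the paired proportions
```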

Relevance: 100.00%

Abstract:

The risk of second malignant neoplasms (SMNs) following prostate radiotherapy is a concern due to the large population of survivors and the decreasing age at diagnosis. Parallel-opposed beam proton therapy is known to carry a lower risk than photon IMRT. However, a comparison of SMN risk following proton and photon arc therapies has not previously been reported. The purpose of this study was to predict the ratio of excess relative risk (RRR) of SMN incidence following proton arc therapy to that after volumetric modulated arc therapy (VMAT). Additionally, we investigated the impact of margin size and the effect of risk-minimized proton beam weighting on the predicted RRR. Physician-approved treatment plans were created for both modalities for three patients. Therapeutic dose was obtained with differential dose-volume histograms from the treatment planning system, and stray dose was estimated from the literature or calculated with Monte Carlo simulations. Various risk models were then applied to the total dose. Additional treatment plans with varying margin size and risk-minimized proton beam weighting were also investigated. The mean RRR ranged from 0.74 to 0.99, depending on the risk model. The additional treatment plans revealed that the RRR remained approximately constant with varying margin size, and that the predicted RRR was reduced by 12% using a risk-minimized proton arc therapy planning technique. In conclusion, proton arc therapy was found to provide an advantage over VMAT with regard to the predicted risk of SMN following prostate radiotherapy. This advantage was independent of margin size and was amplified by risk-minimized proton beam weighting.
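A minimal sketch of how such a ratio of excess relative risk could be formed, using a simple linear risk model applied to mean organ doses; the organ list, doses and coefficients are invented placeholders (the study itself used full dose-volume histograms and several published models):

```python
# Hypothetical sketch: the ratio of excess relative risk (RRR) formed by
# applying a linear risk model to mean organ doses for each modality.
# Organ doses (Gy) and risk coefficients (per Gy) are invented placeholders.
linear_err_per_gy = {"bladder": 0.50, "rectum": 0.30, "colon": 0.68}

mean_dose_proton_arc = {"bladder": 10.0, "rectum": 8.0, "colon": 1.2}
mean_dose_vmat       = {"bladder": 14.0, "rectum": 11.0, "colon": 1.8}

def total_err(doses, coeffs):
    """Total excess relative risk under a linear, no-threshold model."""
    return sum(coeffs[organ] * dose for organ, dose in doses.items())

rrr = (total_err(mean_dose_proton_arc, linear_err_per_gy)
       / total_err(mean_dose_vmat, linear_err_per_gy))
print(f"predicted RRR (proton arc vs VMAT) = {rrr:.2f}")
```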

Relevance: 100.00%

Abstract:

Blood cholesterol and blood pressure development in childhood and adolescence have an important impact on future adult levels of cholesterol and blood pressure, and on the increased risk of cardiovascular disease. The U.S. has higher mortality rates from coronary heart disease than Japan. A longitudinal comparison of risk factor development in children in the two countries provides further understanding of the causes of cardiovascular disease and its prevention. Such comparisons have not been reported in the past. In Project HeartBeat!, 506 non-Hispanic white, 136 black and 369 Japanese children participated in the study in the U.S. and Japan from 1991 to 1995. A synthetic cohort of ages 8 to 18 years was composed of three cohorts with starting ages of 8, 11, and 14. A multilevel regression model was used for data analysis. The study revealed that the Japanese children had significantly higher slopes of mean total cholesterol (TC) and high-density lipoprotein (HDL) cholesterol levels than the U.S. children after adjusting for age and sex. The mean TC level of Japanese children was not significantly different from that of white and black children. The mean HDL level of Japanese children was significantly higher than that of white and black children after adjusting for age and sex. The ratio of HDL/TC in Japanese children was significantly higher than in U.S. whites, but not significantly different from the black children. The Japanese group had significantly lower mean diastolic blood pressure phase IV (DBP4) and phase V (DBP5) than the two U.S. groups. The Japanese group also showed significantly higher slopes in systolic blood pressure, DBP5 and DBP4 during the study period than both U.S. groups. The differences were independent of height and body mass index. The study provided the first longitudinal comparison of blood cholesterol and blood pressure between U.S. and Japanese children and adolescents, revealing the dynamic development of these factors in the three ethnic groups.
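A sketch of the kind of multilevel (mixed-effects) model described above, with repeated cholesterol measurements nested within children; the simulated data, group labels and effect sizes are invented for illustration:

```python
# Hypothetical sketch: a multilevel model of total cholesterol trajectories by
# age and group, analogous to the synthetic-cohort analysis. Data are invented.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
rows = []
for child in range(200):
    group = rng.choice(["US_white", "US_black", "Japan"])
    slope = {"US_white": -1.0, "US_black": -0.8, "Japan": -0.3}[group]
    intercept = 165 + rng.normal(0, 10)          # child-level random intercept
    for age in range(8, 13):                     # repeated measures, ages 8-12
        tc = intercept + slope * (age - 8) + rng.normal(0, 5)
        rows.append({"child": child, "group": group, "age": age, "tc": tc})
data = pd.DataFrame(rows)

# Random intercept per child; fixed effects for age, group and their interaction
model = smf.mixedlm("tc ~ age * group", data, groups=data["child"]).fit()
print(model.summary())
```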

Relevance: 100.00%

Abstract:

AIM To investigate risk factors for the loss of multi-rooted teeth (MRT) in subjects treated for periodontitis and enrolled in supportive periodontal therapy (SPT). MATERIAL AND METHODS A total of 172 subjects were examined before (T0) and after active periodontal therapy (APT) (T1) and following a mean of 11.5 ± 5.2 (SD) years of SPT (T2). The association of risk factors with loss of MRT was analysed with multilevel logistic regression, with the tooth as the unit of analysis. RESULTS Furcation involvement (FI) = 1 before APT was not a risk factor for tooth loss compared with FI = 0 (p = 0.37). Between T0 and T2, MRT with FI = 2 (OR: 2.92, 95% CI: 1.68, 5.06, p = 0.0001) and FI = 3 (OR: 6.85, 95% CI: 3.40, 13.83, p < 0.0001) were at significantly higher risk of being lost compared with those with FI = 0. During SPT, smokers lost significantly more MRT than non-smokers (OR: 2.37, 95% CI: 1.05, 5.35, p = 0.04). Non-smoking, compliant subjects with FI = 0/1 at T1 lost significantly fewer MRT during SPT than non-compliant smokers with FI = 2 (OR: 10.11, 95% CI: 2.91, 35.11, p < 0.0001) and FI = 3 (OR: 17.18, 95% CI: 4.98, 59.28, p < 0.0001), respectively. CONCLUSIONS FI = 1 was not a risk factor for tooth loss compared with FI = 0. FI = 2/3, smoking and lack of compliance with regular SPT represented risk factors for the loss of MRT in subjects treated for periodontitis.
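A sketch of a clustered tooth-level analysis like the one above. The study fitted a multilevel logistic model; here a GEE with exchangeable within-subject correlation stands in as a simpler clustered alternative, and all data are simulated placeholders:

```python
# Hypothetical sketch: tooth-level odds of loss by furcation involvement (FI)
# and smoking, with teeth clustered within subjects. Simulated data only.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
rows = []
for s in range(172):                              # subjects
    smoker = rng.random() < 0.3
    for t in range(8):                            # multi-rooted teeth
        fi = rng.choice([0, 1, 2, 3], p=[0.4, 0.3, 0.2, 0.1])
        logit = (-3.0 + 0.1 * (fi == 1) + 1.1 * (fi == 2)
                 + 1.9 * (fi == 3) + 0.9 * smoker)
        lost = rng.random() < 1 / (1 + np.exp(-logit))
        rows.append({"subject": s, "fi": fi,
                     "smoker": int(smoker), "lost": int(lost)})
data = pd.DataFrame(rows)

model = smf.gee("lost ~ C(fi) + smoker", groups="subject", data=data,
                family=sm.families.Binomial(),
                cov_struct=sm.cov_struct.Exchangeable()).fit()
print(np.exp(model.params))   # odds ratios for FI = 1/2/3 vs 0 and for smoking
```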

Relevance: 100.00%

Abstract:

In personal and in society-related contexts, people often evaluate the risk of environmental and technological hazards. Previous neuroscience research on risk evaluation has assessed primarily the direct personal risk of presented stimuli, which may have involved, for instance, aspects of fear. Furthermore, risk evaluation has primarily been compared with tasks from other cognitive domains serving as control conditions, thus revealing brain activity generally related to risk, but not activity specifically associated with estimating a higher level of risk. We investigated the neural basis on which lay-persons individually evaluated the risk of different potential hazards to society. Twenty healthy subjects underwent functional magnetic resonance imaging while evaluating the risk of fifty more or less risky conditions presented as written terms. Brain activations during the individual estimations of 'high' versus 'low' risk, and of negative versus neutral and positive emotional valences, were analyzed. Estimating hazards to be of high risk was associated with activation in the medial thalamus, anterior insula, caudate nucleus, cingulate cortex and further prefrontal and temporo-occipital areas. These areas were not involved according to an analysis of the emotion ratings. In conclusion, we emphasize the contribution of these brain areas to signalling high risk, here not primarily associated with the emotional valence of the risk items. These areas have previously been reported to be associated with, besides emotional, viscerosensitive and implicit processing. This suggests an intuitive contribution, or "gut feeling", not necessarily dependent on the subjective emotional valence, when estimating a high risk of environmental hazards.

Relevance: 100.00%

Abstract:

BACKGROUND The Cochrane risk of bias (RoB) tool has been widely embraced by the systematic review community, but several studies have reported that its reliability is low. We aim to investigate whether training of raters, including objective and standardized instructions on how to assess risk of bias, can improve the reliability of this tool. We describe the methods that will be used in this investigation and present an intensive standardized training package for risk of bias assessment that could be used by contributors to the Cochrane Collaboration and other reviewers. METHODS/DESIGN This is a pilot study. We will first perform a systematic literature review to identify randomized clinical trials (RCTs) that will be used for risk of bias assessment. Using the identified RCTs, we will then conduct a randomized experiment in which raters are allocated to two different training schemes: minimal training and intensive standardized training. We will calculate the chance-corrected weighted Kappa with 95% confidence intervals to quantify within- and between-group Kappa agreement for each of the domains of the risk of bias tool. To calculate between-group Kappa agreement, we will use risk of bias assessments from pairs of raters after resolution of disagreements. Between-group Kappa agreement will quantify the agreement between the risk of bias assessments of raters in the training groups and those of experienced raters. To compare agreement of raters under different training conditions, we will calculate differences between Kappa values with 95% confidence intervals. DISCUSSION This study will investigate whether the reliability of the risk of bias tool can be improved by training raters using standardized instructions for risk of bias assessment. One group of inexperienced raters will receive intensive training on risk of bias assessment and the other will receive minimal training. By including a control group with minimal training, we will attempt to mimic what many review authors commonly have to do, that is, conduct risk of bias assessments in RCTs without much formal training or standardized instructions. If our results indicate that intensive standardized training does improve the reliability of the RoB tool, our study is likely to help improve the quality of risk of bias assessments, which is a central component of evidence synthesis.
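A minimal sketch of the agreement statistic named above, a linearly weighted, chance-corrected kappa with a bootstrap 95% confidence interval; the ratings are invented (0 = low, 1 = unclear, 2 = high risk of bias):

```python
# Hypothetical sketch: linear weighted kappa between two raters' RoB
# judgements for one domain, with a bootstrap 95% CI. Ratings invented.
import numpy as np
from sklearn.metrics import cohen_kappa_score

rater_a = np.array([0, 0, 1, 2, 2, 1, 0, 2, 1, 0, 2, 1, 0, 1, 2, 0])
rater_b = np.array([0, 1, 1, 2, 1, 1, 0, 2, 2, 0, 2, 0, 0, 1, 2, 1])

kappa = cohen_kappa_score(rater_a, rater_b, weights="linear")

# Bootstrap the trials to attach a 95% confidence interval
rng = np.random.default_rng(0)
boot = []
for _ in range(2000):
    idx = rng.integers(0, len(rater_a), len(rater_a))
    boot.append(cohen_kappa_score(rater_a[idx], rater_b[idx], weights="linear"))
lo, hi = np.nanpercentile(boot, [2.5, 97.5])
print(f"linear weighted kappa = {kappa:.2f} (95% CI {lo:.2f} to {hi:.2f})")
```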

Relevance: 100.00%

Abstract:

The comparison of radiotherapy techniques regarding secondary cancer risk has yielded contradictory results, possibly stemming from the many different approaches used to estimate risk. The purpose of this study was to make a comprehensive evaluation of the different available risk models applied to detailed whole-body dose distributions computed by Monte Carlo for various breast radiotherapy techniques, including conventional open tangents, 3D conformal wedged tangents and hybrid intensity-modulated radiation therapy (IMRT). First, organ-specific linear risk models developed by the International Commission on Radiological Protection (ICRP) and the Biological Effects of Ionizing Radiation (BEIR) VII committee were applied to mean doses for remote organs only and for all solid organs. Then, different general non-linear risk models were applied to the whole-body dose distribution. Finally, organ-specific non-linear risk models for the lung and breast were used to assess the secondary cancer risk for these two organs. A total of 32 different calculated absolute risks resulted in a broad range of values (between 0.1% and 48.5%), underlining the large uncertainties in absolute risk calculation. The ratio of risk between two techniques has often been proposed as a more robust assessment than the absolute risk. We found that the ratio of risk between two techniques could also vary substantially with the approach to risk estimation. Sometimes the ratio of risk between two techniques ranged between values smaller and larger than one, which translates into inconsistent conclusions about whether one technique carries a higher risk than another. We found, however, that the hybrid IMRT technique resulted in a systematic reduction of risk compared with the other techniques investigated, even though the magnitude of this reduction varied substantially across approaches. Based on the available epidemiological data, a reasonable approach to risk estimation would be to use organ-specific non-linear risk models applied to the dose distributions of organs within or near the treatment fields (lungs and contralateral breast in the case of breast radiotherapy), as the majority of radiation-induced secondary cancers are found in the beam-bordering regions.
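A sketch of why the between-technique risk ratio depends on the model choice discussed above: the same two invented lung dose-volume histograms are evaluated under a linear model and under a linear-exponential model ERR(D) = a*D*exp(-b*D); all numbers are placeholders:

```python
# Hypothetical sketch: the same pair of invented differential DVHs gives
# different risk ratios under a linear vs a linear-exponential risk model.
import numpy as np

dose_bins = np.linspace(0.25, 49.75, 100)   # Gy, differential-DVH bin centres

def err(volume_fraction, a=0.5, b=0.0):
    """Dose-volume-weighted excess relative risk; b=0 recovers the linear model."""
    return np.sum(volume_fraction * a * dose_bins * np.exp(-b * dose_bins))

# Invented differential DVHs (fraction of lung volume per dose bin)
dvh_open_tangents = np.exp(-dose_bins / 5.0)
dvh_open_tangents /= dvh_open_tangents.sum()
dvh_hybrid_imrt = np.exp(-dose_bins / 3.0)
dvh_hybrid_imrt /= dvh_hybrid_imrt.sum()

for b, label in [(0.0, "linear"), (0.1, "linear-exponential")]:
    ratio = err(dvh_hybrid_imrt, b=b) / err(dvh_open_tangents, b=b)
    print(f"{label:18s} risk ratio (hybrid IMRT / open tangents) = {ratio:.2f}")
```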

Relevance: 100.00%

Abstract:

OBJECTIVE The natural course of chronic hepatitis C varies widely. To improve the profiling of patients at risk of developing advanced liver disease, we assessed the relative contribution of factors for liver fibrosis progression in hepatitis C. DESIGN We analysed 1461 patients with chronic hepatitis C with an estimated date of infection and at least one liver biopsy. Risk factors for an accelerated fibrosis progression rate (FPR), defined as ≥0.13 Metavir fibrosis units per year, were identified by logistic regression. Examined factors included age at infection, sex, route of infection, HCV genotype, body mass index (BMI), significant alcohol drinking (≥20 g/day for ≥5 years), HIV coinfection and diabetes. In a subgroup of 575 patients, we assessed the impact of single nucleotide polymorphisms previously associated with fibrosis progression in genome-wide association studies. Results were expressed as the attributable fraction (AF) of risk for accelerated FPR. RESULTS Age at infection (AF 28.7%), sex (AF 8.2%), route of infection (AF 16.5%) and HCV genotype (AF 7.9%) contributed to accelerated FPR in the Swiss Hepatitis C Cohort Study, whereas significant alcohol drinking, HIV coinfection, diabetes and BMI did not. In genotyped patients, variants at rs9380516 (TULP1), rs738409 (PNPLA3), rs4374383 (MERTK) (AF 19.2%) and rs910049 (major histocompatibility complex region) significantly added to the risk of accelerated FPR. Results were replicated in three additional independent cohorts, and a meta-analysis confirmed the role of age at infection, sex, route of infection, HCV genotype, rs738409, rs4374383 and rs910049 in accelerating FPR. CONCLUSIONS Most factors accelerating liver fibrosis progression in chronic hepatitis C are unmodifiable.
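For intuition about the attributable fractions reported above, a minimal sketch of Levin's population attributable fraction formula with invented prevalence and relative risk (the study derived AFs from its fitted logistic models, not from this formula):

```python
# Hypothetical sketch: population attributable fraction via Levin's formula,
# AF = p*(RR-1) / (1 + p*(RR-1)), with invented prevalence and relative risk.
def attributable_fraction(prevalence, relative_risk):
    excess = prevalence * (relative_risk - 1.0)
    return excess / (1.0 + excess)

# e.g. infection after age 30 (prevalence 40%, RR 2.5 for accelerated FPR)
af = attributable_fraction(0.40, 2.5)
print(f"AF = {af:.1%}")   # 0.4 * 1.5 / 1.6 = 37.5%
```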

Relevance: 100.00%

Abstract:

Assessing and managing risks relating to the consumption of foodstuffs by humans and to the environment has been one of the most complex legal issues in WTO law ever since the Agreement on Sanitary and Phytosanitary Measures was adopted at the end of the Uruguay Round and entered into force in 1995. The problem was expounded in a number of cases. Panels and the Appellate Body adopted different philosophies in interpreting the Agreement and the basic concept of risk assessment as defined in Annex A para. 4 of the Agreement. Risk assessment entails fundamental questions about law and science, and different interpretations reflect different underlying perceptions of science and its relationship to the law. The present thesis, supported by the Swiss National Research Foundation, undertakes an in-depth analysis of these underlying perceptions. The author expounds the essence of, and differences between, positivism and relativism in philosophy and the natural sciences. He clarifies the relationship of fundamental concepts such as risk, hazard and probability. This investigation is a remarkable effort on the part of a lawyer keen to learn more about the fundamentals upon which the law, often unconsciously, is operated by the legal profession and the trade community. Based upon these insights, he turns to a critical assessment of the jurisprudence of both panels and the Appellate Body. Extensively referring to and discussing the literature, he deconstructs findings and decisions in light of implied and assumed underlying philosophies and perceptions as to the relationship of law and science, in particular in the field of food standards. Finding that neither positivism nor relativism provides adequate answers, the author turns to critical rationalism and applies the methodology of falsification developed by Karl R. Popper. Critical rationalism allows combining discourse in science and law and helps prepare the ground for a new approach to risk assessment and risk management. Linking the problem to the doctrine of multilevel governance, the author develops a theory allocating risk assessment to international fora while leaving risk management to national and democratically accountable governments. While the author throughout the thesis questions the possibility of separating risk assessment and risk management, the thesis offers new avenues which may assist in structuring a complex and difficult problem.

Relevance: 100.00%

Abstract:

BACKGROUND/AIMS Several countries are working to adapt clinical trial regulations so that the approval process is aligned with the level of risk for trial participants. The optimal framework for categorizing clinical trials according to risk remains unclear, however. In January 2014, Switzerland became the first European country to adopt a risk-based categorization procedure. We assessed how accurately and consistently clinical trials are categorized using two different approaches: an approach using the criteria set forth in the new law (concept) or an intuitive approach (ad hoc). METHODS This was a randomized controlled trial with a method-comparison study nested in each arm. We used clinical trial protocols approved by eight Swiss ethics committees between 2010 and 2011. Protocols were randomly assigned to be categorized into one of three risk categories using the concept or the ad hoc approach. Each protocol was independently categorized by the trial's sponsor, a group of experts and the approving ethics committee. The primary outcome was the difference in categorization agreement between the expert group and sponsors across arms. Linear weighted kappa was used to quantify agreement, with the difference between kappas as the primary effect measure. RESULTS We included 142 of 231 protocols in the final analysis (concept = 78; ad hoc = 64). Raw agreement between the expert group and sponsors was 0.74 in the concept and 0.78 in the ad hoc arm. Chance-corrected agreement was higher in the ad hoc arm (kappa: 0.34; 95% confidence interval: 0.10-0.58) than in the concept arm (0.27; 0.06-0.50), but the difference was not significant (p = 0.67). LIMITATIONS The main limitation was the large number of protocols excluded from the analysis, mostly because they did not fit the new law's definition of a clinical trial. CONCLUSION A structured risk categorization approach was not better than an ad hoc approach. Laws introducing risk-based approaches should provide guidelines, examples and templates to ensure correct application.
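A sketch of the primary effect measure described above, the difference between the two arms' linear weighted kappas with a bootstrap 95% confidence interval; all ratings are invented:

```python
# Hypothetical sketch: difference in chance-corrected agreement between the
# two randomized arms (expert group vs sponsor per protocol). Data invented.
import numpy as np
from sklearn.metrics import cohen_kappa_score

def arm_kappa(expert, sponsor):
    return cohen_kappa_score(expert, sponsor, weights="linear")

# Risk category per protocol (A/B/C coded as 0/1/2)
concept_expert  = np.array([0, 1, 1, 2, 0, 1, 2, 0, 1, 2, 1, 0])
concept_sponsor = np.array([0, 1, 2, 2, 1, 1, 2, 0, 0, 1, 1, 0])
adhoc_expert    = np.array([0, 0, 1, 2, 1, 1, 2, 0, 1, 2, 2, 0])
adhoc_sponsor   = np.array([0, 0, 1, 2, 1, 2, 2, 0, 1, 2, 1, 1])

diff = (arm_kappa(adhoc_expert, adhoc_sponsor)
        - arm_kappa(concept_expert, concept_sponsor))

# Bootstrap protocols within each arm to attach a 95% CI to the difference
rng = np.random.default_rng(0)
boot = []
for _ in range(2000):
    i = rng.integers(0, 12, 12)
    j = rng.integers(0, 12, 12)
    boot.append(arm_kappa(adhoc_expert[i], adhoc_sponsor[i])
                - arm_kappa(concept_expert[j], concept_sponsor[j]))
lo, hi = np.nanpercentile(boot, [2.5, 97.5])
print(f"kappa difference (ad hoc - concept) = {diff:.2f} "
      f"(95% CI {lo:.2f} to {hi:.2f})")
```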

Relevance: 100.00%

Abstract:

BACKGROUND HIV-1 RNA viral load (VL) testing is recommended for monitoring antiretroviral therapy (ART) but is not available in many resource-limited settings. We developed and validated CD4-based risk charts to guide targeted VL testing. METHODS We modelled the probability of virologic failure up to 5 years of ART based on current and baseline CD4 counts, developed decision rules for targeted VL testing of 10%, 20% or 40% of patients in seven cohorts of patients starting ART in South Africa, and plotted cut-offs for VL testing on colour-coded risk charts. We assessed the accuracy of risk-chart-guided VL testing for detecting virologic failure in validation cohorts from South Africa, Zambia and the Asia-Pacific. FINDINGS 31,450 adult patients were included in the derivation cohorts and 25,294 patients in the validation cohorts. Positive predictive values increased with the percentage of patients tested: from 79% (10% tested) to 98% (40% tested) in the South African, from 64% to 93% in the Zambian and from 73% to 96% in the Asia-Pacific cohorts. Corresponding increases in sensitivity were from 35% to 68% in South Africa, from 55% to 82% in Zambia and from 37% to 71% in the Asia-Pacific. The area under the receiver operating characteristic curve increased from 0.75 to 0.91 in South Africa, from 0.76 to 0.91 in Zambia and from 0.77 to 0.92 in the Asia-Pacific. INTERPRETATION CD4-based risk charts with optimal cut-offs for targeted VL testing may be useful for monitoring ART in settings where VL capacity is limited.
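A sketch of how the positive predictive value and sensitivity of such a targeted-testing rule behave as the tested fraction grows; predicted failure probabilities and true failures are simulated rather than taken from the study:

```python
# Hypothetical sketch: PPV and sensitivity when VL testing targets the 10/20/40%
# of patients with the highest predicted failure probability. Data simulated.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
predicted = rng.beta(2, 8, n)                  # model-based failure probability
failure = rng.random(n) < predicted            # true virologic failure

for frac in (0.10, 0.20, 0.40):
    cutoff = np.quantile(predicted, 1 - frac)
    tested = predicted >= cutoff               # patients selected for VL testing
    ppv = failure[tested].mean()               # failures among those tested
    sensitivity = failure[tested].sum() / failure.sum()   # failures detected
    print(f"test top {frac:.0%}: PPV = {ppv:.2f}, sensitivity = {sensitivity:.2f}")
```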

Relevance: 100.00%

Abstract:

Recent studies on avalanche risk in alpine settlements suggested a strong dependency of the development of risk on variations in damage potential. Based on these findings, analyses of probable maximum losses in avalanche-prone areas of the municipality of Davos (CH) were used as an indicator for the long-term development of values at risk. Even though the results were subject to significant uncertainties, they underlined the dependency of today's risk on the historical development of land use: small changes in the lateral extent of endangered areas had a considerable impact on the exposure of values. In a second step, temporal variations in damage potential between 1950 and 2000 were compared in two study areas representing typical alpine socio-economic development patterns: Davos (CH) and Galtür (A). The resulting trends were similar: the damage potential increased significantly in both number and value. Thus, the development of natural risk in settlements can largely be attributed to long-term shifts in damage potential.
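A minimal sketch of the standard risk relation underlying such damage-potential analyses, risk = hazard probability x value at risk x vulnerability, summed over exposed objects; all numbers are invented to illustrate how a slightly wider endangered area raises exposure:

```python
# Hypothetical sketch: collective risk as the sum over exposed objects of
# event probability * object value * vulnerability. All figures invented.
def collective_risk(objects, p_event):
    """objects: list of (value_CHF, vulnerability in [0, 1]) inside the zone."""
    return sum(p_event * value * vulnerability for value, vulnerability in objects)

zone_1950 = [(500_000, 0.3)] * 10                         # 10 exposed buildings
zone_2000 = [(900_000, 0.3)] * 10 + [(900_000, 0.3)] * 6  # 6 more in a wider zone

print(f"1950: CHF {collective_risk(zone_1950, 1/100):,.0f} per year")
print(f"2000: CHF {collective_risk(zone_2000, 1/100):,.0f} per year")
```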