97 results for Probability of choice
Abstract:
Whereas several clinical endpoints for monitoring the response to treatment in patients with Huntington's disease (HD) have been explored, there has been a paucity of research on quality of life in such patients. The aim of this study was to validate the use of two generic health-related quality of life instruments (the Short Form 36 health survey questionnaire [SF-36] and the Sickness Impact Profile [SIP]) in this population and to evaluate their psychometric properties. We found that both instruments demonstrated acceptable convergent validity and reliability for patients and carers. However, the SF-36 had the advantage of more robust construct validity and test-retest reliability; furthermore, motor symptoms appeared to influence some strictly nonmotor dimensions of the SIP. On a pragmatic level, the SF-36 is shorter and quicker to administer and, therefore, easier for patients at various stages of the disease to complete. Thus, the SF-36 would appear to be the instrument of choice for patients with HD and their carers, although further work is needed to investigate the sensitivity of this instrument longitudinally. (C) 2004 Movement Disorder Society.
Communicating risk of medication side effects: an empirical evaluation of EU recommended terminology
Abstract:
Two experiments compared people's interpretation of verbal and numerical descriptions of the risk of medication side effects occurring. The verbal descriptors were selected from those recommended for use by the European Union (very common, common, uncommon, rare, very rare). Both experiments used a controlled empirical methodology, in which nearly 500 members of the general population were presented with a fictitious (but realistic) scenario about visiting the doctor and being prescribed medication, together with information about the medicine's side effects and their probability of occurrence. Experiment 1 found that, in all three age groups tested (18–40, 41–60 and over 60), participants given a verbal descriptor (very common) estimated side effect risk to be considerably higher than those given a comparable numerical description. Furthermore, the differences in interpretation were reflected in their judgements of side effect severity, risk to health, and intention to comply. Experiment 2 confirmed these findings using two different verbal descriptors (common and rare) and in scenarios which described either relatively severe or relatively mild side effects. Strikingly, only 7 out of 180 participants in this study gave a probability estimate which fell within the EU-assigned numerical range. Thus, large-scale use of the descriptors could have serious negative consequences for individual and public health. We therefore recommend that the EU and national authorities suspend their recommendations regarding these descriptors until a more substantial evidence base is available to support their appropriate use.
Abstract:
Objectives: To examine doctors' (Experiment 1) and doctors' and lay people's (Experiment 2) interpretations of two sets of recommended verbal labels for conveying information about side effect incidence rates. Method: Both studies used a controlled empirical methodology in which participants were presented with a hypothetical, but realistic, scenario involving a prescribed medication that was said to be associated with either mild or severe side effects. The probability of each side effect was described using one of the five descriptors advocated by the European Union (Experiment 1) or one of the six descriptors advocated in Calman's risk scale (Experiment 2), and study participants were required to estimate (numerically) the probability of each side effect occurring. Key findings: Experiment 1 showed that the doctors significantly overestimated the risk of side effects occurring when interpreting the five EU descriptors, compared with the assigned probability ranges. Experiment 2 showed that both groups significantly overestimated risk when given the six Calman descriptors, although the degree of overestimation was not as great for the doctors as for the lay people. Conclusion: On the basis of our findings, we argue that we are still a long way from achieving a standardised language of risk for use by both professionals and the general public, although there might be more potential for use of standardised terms among professionals. In the meantime, the EU and other regulatory bodies and health professionals should be very cautious about advocating the use of particular verbal labels for describing medication side effects.
Abstract:
A study examined people's interpretation of European Commission (EC) recommended verbal descriptors for the risk of medicine side effects, and the actions to take if they do occur. Members of the general public were presented with a fictitious (but realistic) scenario about suffering from a stiff neck, visiting the local pharmacy and purchasing an over-the-counter (OTC) medicine (Ibuprofen). The medicine came with an information leaflet which included information about the medicine's side effects, their risk of occurrence, and recommended actions to take if adverse effects are experienced. Probability of occurrence was presented numerically (6%) or verbally, using the recommended EC descriptor (common). Results showed that, in line with the findings of our earlier work with prescribed medicines, participants significantly overestimated side effect risk. Furthermore, the differences in interpretation were reflected in their judgements of satisfaction, side effect severity, risk to health, and intention to take the medicine. Finally, we observed no significant difference between people's interpretation of the recommended action descriptors ('immediately' and 'as soon as possible'). (C) 2003 Elsevier Science Ireland Ltd. All rights reserved.
Abstract:
Background/Objectives: Prebiotics have attracted interest for their ability to positively affect the colonic microbiota composition, thus increasing resistance to infection and diarrhoeal disease. This study assessed the effectiveness of a prebiotic galacto-oligosaccharide mixture (B-GOS) on the severity and/or incidence of travellers' diarrhoea (TD) in healthy subjects. Subjects/Methods: The study was a placebo-controlled, randomized, double-blind trial of parallel design in 159 healthy volunteers who travelled for a minimum of 2 weeks to a country of low or high risk for TD. The investigational product was B-GOS and the placebo was maltodextrin. Volunteers were randomized into groups with an equal probability of receiving either the prebiotic or the placebo. The protocol comprised a 1-week pre-holiday period, during which bowel habit was recorded while receiving the intervention, and the holiday period itself. Bowel habit included the number of bowel movements and the average consistency of the stools, as well as the occurrence of abdominal discomfort, flatulence, bloating or vomiting. A clinical report was completed in the case of a diarrhoeal incident. A post-study questionnaire was also completed by all subjects on their return. Results: There were significant differences between the B-GOS and placebo groups in the incidence (P<0.05) and duration (P<0.05) of TD. Similar findings were observed for abdominal pain (P<0.05) and the overall quality of life assessment (P<0.05). Conclusions: Consumption of the tested galacto-oligosaccharide mixture showed significant potential in preventing the incidence and symptoms of TD.
Abstract:
The differential phase (ΦDP) measured by polarimetric radars is recognized to be a very good indicator of the path-integrated attenuation caused by rain. Moreover, if a linear relationship is assumed between the specific differential phase (KDP) and the specific attenuation (AH) and specific differential attenuation (ADP), then attenuation can easily be corrected. The coefficients of proportionality, γH and γDP, are, however, known to depend in rain upon drop temperature, drop shapes, drop size distribution, and the presence of large drops causing Mie scattering. In this paper, the authors extensively apply a physically based method, often referred to as the “Smyth and Illingworth constraint,” which uses the requirement that the differential reflectivity ZDR on the far side of the storm should be low in order to retrieve the γDP coefficient. More than 30 convective episodes observed by the French operational C-band polarimetric Trappes radar during two summers (2005 and 2006) are used to document the variability of γDP with respect to the intrinsic three-dimensional characteristics of the attenuating cells. The Smyth and Illingworth constraint could be applied to only 20% of all attenuated rays of the 2-yr dataset, so it cannot be considered the unique solution for attenuation correction in an operational setting, but it is useful for characterizing the properties of the strongly attenuating cells. The range of variation of γDP is shown to be extremely large, with minimum, maximum, and mean values of 0.01, 0.11, and 0.025 dB °−1, respectively. The coefficient γDP appears to be almost linearly correlated with the horizontal reflectivity (ZH), differential reflectivity (ZDR), specific differential phase (KDP), and correlation coefficient (ρHV) of the attenuating cells. The temperature effect is negligible with respect to that of the microphysical properties of the attenuating cells. Unusually large values of γDP, above 0.06 dB °−1, often referred to as “hot spots,” are reported for a nonnegligible 15% of the rays presenting a significant total differential phase shift (ΔΦDP > 30°). The corresponding strongly attenuating cells are shown to have extremely high ZDR (above 4 dB) and ZH (above 55 dBZ), very low ρHV (below 0.94), and high KDP (above 4° km−1). Analysis of 4 yr of observed raindrop spectra does not reproduce such low values of ρHV, suggesting that (wet) ice is likely to be present in the precipitation medium and responsible for the attenuation and high phase shifts. Furthermore, if melting ice is responsible for the high phase shifts, this suggests that KDP may not be uniquely related to rainfall rate but can result from the presence of wet ice. This hypothesis is supported by the analysis of vertical profiles of horizontal reflectivity and by the values of conventional probability-of-hail indexes.
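To make the assumed linear relationship and the role of γDP concrete, here is a minimal sketch in standard polarimetric notation; it shows the conventional phase-based correction and is given purely as an illustration, not as the paper's exact implementation:

A_H = \gamma_H \, K_{DP}, \qquad A_{DP} = \gamma_{DP} \, K_{DP}

and, since \Phi_{DP}(r) = 2 \int_0^r K_{DP}(s)\, ds, the two-way corrections reduce to

Z_H^{\mathrm{corr}}(r) = Z_H^{\mathrm{meas}}(r) + \gamma_H \, \Delta\Phi_{DP}(r), \qquad
Z_{DR}^{\mathrm{corr}}(r) = Z_{DR}^{\mathrm{meas}}(r) + \gamma_{DP} \, \Delta\Phi_{DP}(r)

where ΔΦDP(r) is the differential phase accumulated between the radar and range r. In this formulation, the Smyth and Illingworth constraint amounts to choosing γDP so that the corrected ZDR on the far side of the storm matches the low value expected in light rain.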
Abstract:
This paper analyzes the delay performance of the Enhanced relay-enabled Distributed Coordination Function (ErDCF) for wireless ad hoc networks under ideal conditions and in the presence of transmission errors. Relays are nodes capable of supporting high data rates on behalf of other, low-data-rate nodes. In an ideal channel, ErDCF achieves higher throughput and reduced energy consumption compared with the IEEE 802.11 Distributed Coordination Function (DCF), and this gain is maintained in the presence of errors. Relays are also expected to reduce delay; however, the impact of transmission errors on the delay behavior of ErDCF has not been known. In this work, we present the impact of transmission errors on delay. It turns out that transmission errors of sufficient magnitude to increase the number of dropped packets actually reduce packet delay, owing to the increase in the probability of failure. As a result, the packet drop time increases, reflecting the throughput degradation.
Abstract:
The intensity and distribution of daily precipitation is predicted to change under scenarios of increased greenhouse gases (GHGs). In this paper, we analyse the ability of HadCM2, a general circulation model (GCM), and a high-resolution regional climate model (RCM), both developed at the Met Office's Hadley Centre, to simulate extreme daily precipitation by reference to observations. A detailed analysis of daily precipitation is made at two UK grid boxes, where probabilities of reaching daily thresholds in the GCM and RCM are compared with observations. We find that the RCM generally overpredicts probabilities of extreme daily precipitation but that, when the GCM and RCM simulated values are scaled to have the same mean as the observations, the RCM captures the upper-tail distribution more realistically. To compare regional changes in daily precipitation in the GHG-forced period 2080-2100 in the GCM and the RCM, we develop two methods. The first considers the fractional changes in probability of local daily precipitation reaching or exceeding a fixed 15 mm threshold in the anomaly climate compared with the control. The second method uses the upper one-percentile of the control at each point as the threshold. Agreement between the models is better in both seasons with the latter method, which we suggest may be more useful when considering larger scale spatial changes. On average, the probability of precipitation exceeding the 1% threshold increases by a factor of 2.5 (GCM and RCM) in winter and by 1.7 (GCM) or 1.3 (RCM) in summer.
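To see how the two threshold methods differ in practice, here is a minimal sketch in Python; the function names, data layout and default values are illustrative assumptions, not taken from the study:

import numpy as np

def exceedance_probability(daily_precip_mm, threshold_mm):
    # Fraction of days on which precipitation reaches or exceeds the threshold.
    daily_precip_mm = np.asarray(daily_precip_mm, dtype=float)
    return float(np.mean(daily_precip_mm >= threshold_mm))

def fractional_change_fixed(control_mm, scenario_mm, threshold_mm=15.0):
    # Method 1: fractional change in the probability of local daily precipitation
    # reaching a fixed threshold (15 mm here) in the forced run versus the control.
    return (exceedance_probability(scenario_mm, threshold_mm)
            / exceedance_probability(control_mm, threshold_mm))

def fractional_change_percentile(control_mm, scenario_mm, upper_percent=1.0):
    # Method 2: the threshold is the upper one-percentile of the control run at
    # each point, so the comparison adapts to the local precipitation climate.
    threshold_mm = float(np.percentile(control_mm, 100.0 - upper_percent))
    return (exceedance_probability(scenario_mm, threshold_mm)
            / exceedance_probability(control_mm, threshold_mm))

The second method's percentile-based threshold is what makes spatial comparisons across grid boxes with very different precipitation climates more meaningful.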
Abstract:
Empirical studies using satellite data and radiosondes have shown that precipitation increases with column water vapor (CWV) in the tropics, and that this increase is much steeper above some critical CWV value. Here, eight years of 1-min-resolution microwave radiometer and optical gauge data at Nauru Island are analyzed to better understand the relationships among CWV, column liquid water (CLW), and precipitation at small time scales. CWV is found to have large autocorrelation times compared with CLW and precipitation. Before precipitation events, CWV increases on both a synoptic-scale time period and a subsequent shorter time period consistent with mesoscale convective activity; the latter period is associated with the highest CWV levels. Probabilities of precipitation increase greatly with CWV. Given initial high CWV, this increased probability of precipitation persists at least 10–12 h. Even in periods of high CWV, however, probabilities of initial precipitation in a 5-min period remain low enough that there tends to be a lag before the start of the next precipitation event. This is consistent with precipitation occurring stochastically within environments containing high CWV, with the latter being established by a combination of synoptic-scale and mesoscale forcing.
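The conditional probabilities described here can be estimated with a simple binning calculation over the co-located time series; the sketch below is illustrative only (variable names, bin edges and the 5-min precipitation flag are assumptions, not the authors' code):

import numpy as np

def precip_probability_by_cwv(cwv_mm, precip_flag, bin_edges):
    # Estimate P(precipitation in a 5-min interval | CWV bin).
    # cwv_mm      : column water vapor per interval (mm)
    # precip_flag : 1 if precipitation occurred in that interval, else 0
    # bin_edges   : CWV bin boundaries (mm)
    cwv_mm = np.asarray(cwv_mm, dtype=float)
    precip_flag = np.asarray(precip_flag, dtype=float)
    probs = []
    for lo, hi in zip(bin_edges[:-1], bin_edges[1:]):
        in_bin = (cwv_mm >= lo) & (cwv_mm < hi)
        # Conditional probability for this CWV bin; NaN if the bin is empty.
        probs.append(precip_flag[in_bin].mean() if in_bin.any() else np.nan)
    return np.array(probs)

# Example (hypothetical): probabilities for 5 mm wide CWV bins from 30 to 75 mm.
# probs = precip_probability_by_cwv(cwv_series, rain_flags, np.arange(30, 80, 5))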
Abstract:
Background: Reviews and practice guidelines for paediatric obsessive-compulsive disorder (OCD) recommend cognitive-behaviour therapy (CBT) as the psychological treatment of choice, but note that it has not been sufficiently evaluated for children and adolescents and that more randomized controlled trials are needed. The aim of this trial was to evaluate the effectiveness and optimal delivery of CBT, emphasizing cognitive interventions. Methods: A total of 96 children and adolescents with OCD were randomly allocated to one of three conditions, each of approximately 12 weeks' duration: full CBT (average therapist contact: 12 sessions), brief CBT (average contact: 5 sessions, with use of therapist-guided workbooks), and wait-list/delayed treatment. The primary outcome measure was the child version of the semi-structured interviewer-based Yale-Brown Obsessive Compulsive Scale. Clinical Trial registration: http://www.controlled-trials.com/ISRCTN/; unique identifier: ISRCTN29092580. Results: There was statistically significant symptomatic improvement in both treatment groups compared with the wait-list group, with no significant differences in outcomes between the two treatment groups. Controlled treatment effect sizes in intention-to-treat analyses were 2.2 for full CBT and 1.6 for brief CBT. Improvements were maintained at follow-up an average of 14 weeks later. Conclusions: The findings demonstrate the benefits of CBT emphasizing cognitive interventions for children and adolescents with OCD and suggest that lower-intensity delivery, with use of therapist-guided workbooks, is an efficient mode of delivery.
Abstract:
Summary: 1. In recent decades there have been population declines of many UK bird species, which have become the focus of intense research and debate. Recently, as the populations of potential predators have increased, there is concern that increased rates of predation may be contributing to the declines. In this review, we assess the methodologies behind the current published science on the impacts of predators on avian prey in the UK. 2. We identified suitable studies, classified these according to study design (experimental/observational) and assessed the quantity and quality of the data upon which any variation in predation rates was inferred. We then explored whether the underlying study methodology had implications for study outcome. 3. We reviewed 32 published studies and found that, typically, observational studies comprehensively monitored significantly fewer predator species than experimental studies. Data for a difference in predator abundance from targeted (i.e. bespoke) census techniques were available for less than half of the 32 predator species studied. 4. The probability of a study detecting an impact on prey abundance was strongly and positively related to the quality and quantity of data upon which the gradient in predation rates was inferred. 5. The findings suggest that if a study is based on good-quality abundance data for a range of predator species, then it is more likely to detect an effect than if it relies on opportunistic data for a smaller number of predators. 6. We recommend that the findings from studies which use opportunistic data for a limited number of predator species should be treated with caution, and that future studies employ bespoke census techniques to monitor predator abundance for an appropriate suite of predators.
Abstract:
Background: Medication errors in general practice are an important source of potentially preventable morbidity and mortality. Building on previous descriptive, qualitative and pilot work, we sought to investigate the effectiveness, cost-effectiveness and likely generalisability of a complex pharmacist-led IT-based intervention aiming to improve prescribing safety in general practice. Objectives: We sought to:
• Test the hypothesis that a pharmacist-led IT-based complex intervention using educational outreach and practical support is more effective than simple feedback in reducing the proportion of patients at risk from errors in prescribing and medicines management in general practice.
• Conduct an economic evaluation of the cost per error avoided, from the perspective of the National Health Service (NHS).
• Analyse data recorded by pharmacists, summarising the proportions of patients judged to be at clinical risk, the actions recommended by pharmacists, and the actions completed in the practices.
• Explore the views and experiences of healthcare professionals and NHS managers concerning the intervention; investigate potential explanations for the observed effects, and inform decisions on the future roll-out of the pharmacist-led intervention.
• Examine secular trends in the outcome measures of interest, allowing for informal comparison between trial practices and practices that did not participate in the trial but contributed to the QRESEARCH database.
Methods: Two-arm cluster randomised controlled trial of 72 English general practices with embedded economic analysis and longitudinal descriptive and qualitative analysis. Informal comparison of the trial findings with a national descriptive study investigating secular trends, undertaken using data from practices contributing to the QRESEARCH database. The main outcomes of interest were prescribing errors and medication monitoring errors at six and 12 months following the intervention. Results: Participants in the pharmacist intervention arm practices were significantly less likely to have been prescribed a non-selective NSAID without a proton pump inhibitor (PPI) if they had a history of peptic ulcer (OR 0.58, 95% CI 0.38, 0.89), to have been prescribed a beta-blocker if they had asthma (OR 0.73, 95% CI 0.58, 0.91) or (in those aged 75 years and older) to have been prescribed an ACE inhibitor or diuretic without a measurement of urea and electrolytes in the last 15 months (OR 0.51, 95% CI 0.34, 0.78). The economic analysis suggests that the PINCER pharmacist intervention has a 95% probability of being cost-effective if the decision-maker's ceiling willingness to pay reaches £75 (6 months) or £85 (12 months) per error avoided. The intervention addressed an issue that was important to professionals and their teams and was delivered in a way that was acceptable to practices, with minimum disruption of normal work processes. Comparison of the trial findings with changes seen in QRESEARCH practices indicated that any reductions achieved in the simple feedback arm were likely, in the main, to have been related to secular trends rather than the intervention. Conclusions: Compared with simple feedback, the pharmacist-led intervention resulted in reductions in the proportions of patients at risk of prescribing and monitoring errors for the primary outcome measures and the composite secondary outcome measures at six months and (with the exception of the NSAID/peptic ulcer outcome measure) 12 months post-intervention. The intervention is acceptable to pharmacists and practices, and is likely to be seen as cost-effective by decision makers.
Abstract:
Several methods for assessing the sustainability of agricultural systems have been developed. These methods do not fully: (i) take into account the multi-functionality of agriculture; (ii) include multidimensionality; (iii) utilize and implement the assessment knowledge; and (iv) identify conflicting goals and trade-offs. This paper reviews seven recently developed multidisciplinary indicator-based assessment methods with respect to their contribution to addressing these shortcomings. All approaches include (1) normative aspects such as goal setting, (2) systemic aspects such as a specification of the scale of analysis, and (3) a reproducible structure of the approach. The approaches can be categorized into three typologies. The top-down farm assessments focus on field or farm assessment. They have a clear procedure for measuring the indicators and assessing the sustainability of the system, which allows for benchmarking across farms. The degree of participation is low, potentially affecting the implementation of the results negatively. The top-down regional assessments assess both on-farm and regional effects. They include some participation to increase acceptance of the results; however, they miss the analysis of potential trade-offs. The bottom-up, integrated participatory or transdisciplinary approaches focus on a regional scale. Stakeholders are included throughout the whole process, assuring acceptance of the results and increasing the probability that the developed measures will be implemented. Because they include the interactions between the indicators in their system representation, they allow for performing a trade-off analysis. The bottom-up, integrated participatory or transdisciplinary approaches therefore seem best placed to overcome the four shortcomings mentioned above.
Abstract:
Worries about the possibility of consent recall a more familiar problem about promising raised by Hume. To see the parallel here we must distinguish the power of consent from the normative significance of choice. I'll argue that we have normative interests, interests in being able to control the rights and obligations of ourselves and those around us, interests distinct from our interest in controlling the non-normative situation. Choice gets its normative significance from our non-normative control interests. By contrast, the possibility of consent depends on a species of normative interest that I'll call a permissive interest, an interest in its being the case that certain acts wrong us unless we declare otherwise. In the final section, I'll show how our permissive interests underwrite the possibility of consent.
Abstract:
Consumer studies of meat have tended to use quantitative methodologies providing a wealth of statistically malleable information, but little in-depth insight into consumer perceptions of meat. The aim of the present study was, therefore, to understand the factors perceived as important in the selection of chicken meat, using a qualitative methodology. Focus group discussions were tape-recorded, transcribed verbatim and content analysed for major themes. The themes that arose implied that “appearance” and “convenience” were the most important determinants of choice of chicken meat, and these factors appeared to be associated with perceptions of freshness, healthiness, product versatility and concepts of value. A descriptive model has been developed to illustrate the interrelationship between factors affecting chicken meat choice. This study indicates that those involved in the production and retailing of chicken products should concentrate upon product appearance and convenience as market drivers for their products.