25 results for Risk interval measure
in CentAUR: Central Archive University of Reading - UK
Abstract:
Objective To model the overall and income specific effect of a 20% tax on sugar sweetened drinks on the prevalence of overweight and obesity in the UK. Design Econometric and comparative risk assessment modelling study. Setting United Kingdom. Population Adults aged 16 and over. Intervention A 20% tax on sugar sweetened drinks. Main outcome measures The primary outcomes were the overall and income specific changes in the number and percentage of overweight (body mass index ≥25) and obese (≥30) adults in the UK following the implementation of the tax. Secondary outcomes were the effect by age group (16-29, 30-49, and ≥50 years) and by UK constituent country. The revenue generated from the tax and the income specific changes in weekly expenditure on drinks were also estimated. Results A 20% tax on sugar sweetened drinks was estimated to reduce the number of obese adults in the UK by 1.3% (95% credible interval 0.8% to 1.7%) or 180 000 (110 000 to 247 000) people and the number who are overweight by 0.9% (0.6% to 1.1%) or 285 000 (201 000 to 364 000) people. The predicted reductions in prevalence of obesity for income thirds 1 (lowest income), 2, and 3 (highest income) were 1.3% (0.3% to 2.0%), 0.9% (0.1% to 1.6%), and 2.1% (1.3% to 2.9%). The effect on obesity declined with age. Predicted annual revenue was £276m (£272m to £279m), with estimated increases in total expenditure on drinks for income thirds 1, 2, and 3 of 2.1% (1.4% to 3.0%), 1.7% (1.2% to 2.2%), and 0.8% (0.4% to 1.2%). Conclusions A 20% tax on sugar sweetened drinks would lead to a reduction in the prevalence of obesity in the UK of 1.3% (around 180 000 people). The greatest effects may occur in young people, with no significant differences between income groups. Both effects warrant further exploration. Taxation of sugar sweetened drinks is a promising population measure to target population obesity, particularly among younger adults.
Abstract:
In a cross-sectional study of 400 randomly selected smallholder dairy farms in the Tanga and Iringa regions of Tanzania, 14.2% (95% confidence interval (CI) = 11.6-17.3) of cows had developed clinical mastitis during the previous year. The point prevalence of subclinical mastitis, defined as a quarter positive by the California Mastitis Test (CMT) or by bacteriological culture, was 46.2% (95% CI = 43.6-48.8) and 24.3% (95% CI = 22.2-26.6), respectively. In a longitudinal disease study in Iringa, the incidence of clinical mastitis was 31.7 cases per 100 cow-years. A randomised intervention trial indicated that intramammary antibiotics significantly reduced the proportion of bacteriologically positive quarters in the short term (14 days post-infusion), but teat dipping had no detectable effect on bacteriological infection or CMT-positive quarters. Other risk and protective factors were identified from both the cross-sectional and longitudinal studies: these included Boran breeding (odds ratio (OR) = 3.40, 95% CI = 1.00-11.57, P < 0.05 for clinical mastitis, and OR = 3.51, 95% CI = 1.29-9.55, P < 0.01 for a CMT-positive quarter), while the practice of residual calf suckling was protective for a bacteriologically positive quarter (OR = 0.63, 95% CI = 0.48-0.81, P ≤ 0.001) and for a CMT-positive quarter (OR = 0.69, 95% CI = 0.63-0.75, P < 0.001). A mastitis training course for farmers and extension officers was held, and the knowledge gained and the use of different methods of dissemination were assessed over time. In a subsequent randomised controlled trial, there were strong associations between knowledge gained and both the individual question asked and the combination of dissemination methods (village meeting, video and handout) used. This study demonstrated that both clinical and subclinical mastitis are common in smallholder dairying in Tanzania, and that some of the risk and protective factors for mastitis can be addressed by practical management of dairy cows following effective knowledge transfer. (c) 2006 Elsevier B.V. All rights reserved.
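The odds ratios and 95% confidence intervals quoted above are standard two-by-two-table statistics. The Python sketch below shows, in general terms, how such an estimate is obtained; the function is generic and the counts passed to it are purely hypothetical, not taken from the Tanzanian study.

import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and Wald 95% confidence interval from a 2x2 table:
    a = exposed cases, b = exposed non-cases,
    c = unexposed cases, d = unexposed non-cases."""
    or_ = (a * d) / (b * c)
    se_log_or = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lower = math.exp(math.log(or_) - z * se_log_or)
    upper = math.exp(math.log(or_) + z * se_log_or)
    return or_, lower, upper

# Hypothetical counts for illustration only (not the study data)
print(odds_ratio_ci(12, 88, 30, 770))  # -> OR about 3.5 (1.7 to 7.1)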
Abstract:
Consumers' attitudes to trust and risk are key issues in food safety research and attention needs to be focused on clearly defining a framework for analysing consumer behaviour in these terms. In order to achieve this, a detailed review of the recent literature surrounding risk, trust and the relationship between the two must be conducted. This paper aims to collate the current social sciences literature in the fields of food safety, trust and risk. It provides an insight into the economic and other modelling procedures available to measure consumers' attitudes to risk and trust in food safety and specifically notes the need for future research to concentrate on examining risk and trust as inter-related variables rather than two distinct, mutually exclusive concepts. A framework is proposed which it is hoped will assist in devising more effective research to support risk communication to consumers.
Abstract:
Background noise should in theory hinder detection of auditory cues associated with approaching danger. We tested whether foraging chaffinches Fringilla coelebs responded to background noise by increasing vigilance, and examined whether this was explained by predation risk compensation or by a novel stimulus hypothesis. The former predicts that only inter-scan interval should be modified in the presence of background noise, not vigilance levels generally. This is because noise hampers auditory cue detection and increases perceived predation risk primarily when in the head-down position, and also because previous tests have shown that only inter-scan interval is correlated with predator detection ability in this system. Chaffinches only modified inter-scan interval, supporting this hypothesis. At the same time they made significantly fewer pecks when feeding during the background noise treatment, and so the increased vigilance led to a reduction in intake rate, suggesting that compensating for the increased predation risk could indirectly lead to a fitness cost. Finally, the novel stimulus hypothesis predicts that chaffinches should habituate to the noise, which did not occur within a trial or over 5 subsequent trials. We conclude that auditory cues may be an important component of the trade-off between vigilance and feeding, and discuss possible implications for anti-predation theory and ecological processes.
Abstract:
The release of genetically modified plants is governed by regulations that aim to provide an assessment of potential impact on the environment. One of the most important components of this risk assessment is an evaluation of the probability of gene flow. In this review, we provide an overview of the current literature on gene flow from transgenic plants, providing a framework of issues for those considering the release of a transgenic plant into the environment. For some plants gene flow from transgenic crops is well documented, and this information is discussed in detail in this review. Mechanisms of gene flow vary from plant species to plant species and range from the possibility of asexual propagation, short- or long-distance pollen dispersal mediated by insects or wind, and seed dispersal. Volunteer populations of transgenic plants may occur where seed is inadvertently spread during harvest or commercial distribution. If there are wild populations related to the transgenic crop then hybridization and eventually introgression in the wild may occur, as it has for herbicide resistant transgenic oilseed rape (Brassica napus). Tools to measure the amount of gene flow, experimental data measuring the distance of pollen dispersal, and experiments measuring hybridization and seed survivability are discussed in this review. The various methods that have been proposed to prevent gene flow from genetically modified plants are also described. The current "transgenic traits" in the major crops confer resistance to herbicides and certain insects. Such traits could confer a selective advantage (an increase in fitness) in wild plant populations in some circumstances, were gene flow to occur. However, there is ample evidence that gene flow from crops to related wild species occurred before the development of transgenic crops and this should be taken into account in the risk assessment process.
Abstract:
Objective: To determine the risk of lung cancer associated with exposure at home to the radioactive disintegration products of naturally occurring radon gas. Design: Collaborative analysis of individual data from 13 case-control studies of residential radon and lung cancer. Setting: Nine European countries. Subjects: 7148 cases of lung cancer and 14 208 controls. Main outcome measures: Relative risks of lung cancer and radon gas concentrations in homes inhabited during the previous 5-34 years, measured in becquerels (radon disintegrations per second) per cubic metre (Bq/m³) of household air. Results: The mean measured radon concentration in homes of people in the control group was 97 Bq/m³, with 11% measuring > 200 and 4% measuring > 400 Bq/m³. For cases of lung cancer the mean concentration was 104 Bq/m³. The risk of lung cancer increased by 8.4% (95% confidence interval 3.0% to 15.8%) per 100 Bq/m³ increase in measured radon (P = 0.0007). This corresponds to an increase of 16% (5% to 31%) per 100 Bq/m³ increase in usual radon, that is, after correction for the dilution caused by random uncertainties in measuring radon concentrations. The dose-response relation seemed to be linear with no threshold and remained significant (P = 0.04) in analyses limited to individuals from homes with measured radon < 200 Bq/m³. The proportionate excess risk did not differ significantly with study, age, sex, or smoking. In the absence of other causes of death, the absolute risks of lung cancer by age 75 years at usual radon concentrations of 0, 100, and 400 Bq/m³ would be about 0.4%, 0.5%, and 0.7%, respectively, for lifelong non-smokers, and about 25 times greater (10%, 12%, and 16%) for cigarette smokers. Conclusions: Collectively, though not separately, these studies show appreciable hazards from residential radon, particularly for smokers and recent ex-smokers, and indicate that it is responsible for about 2% of all deaths from cancer in Europe.
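The absolute risks quoted in the conclusion can be roughly reproduced from the linear, no-threshold dose-response reported above (a 16% increase in risk per 100 Bq/m³ of usual radon). The short Python sketch below is illustrative only; the baseline risks at 0 Bq/m³ (about 0.4% for lifelong non-smokers and about 10% for cigarette smokers) are taken from the abstract, and small differences are due to rounding.

# Linear, no-threshold excess relative risk model from the pooled analysis:
# risk(x) = baseline_risk * (1 + ERR_per_100 * x / 100)
ERR_PER_100_BQ = 0.16  # 16% increase in risk per 100 Bq/m^3 of usual radon

def absolute_risk(baseline_risk, radon_bq_m3):
    """Approximate absolute risk of lung cancer by age 75 at a given
    long-term average (usual) radon concentration."""
    return baseline_risk * (1 + ERR_PER_100_BQ * radon_bq_m3 / 100)

for label, baseline in [("lifelong non-smokers", 0.004), ("cigarette smokers", 0.10)]:
    risks = [absolute_risk(baseline, x) for x in (0, 100, 400)]
    print(label, [f"{r:.1%}" for r in risks])
# non-smokers: 0.4%, 0.5%, 0.7% after rounding; smokers: 10.0%, 11.6%, 16.4%,
# which the abstract rounds to about 10%, 12%, and 16%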
Abstract:
Context: Evidence is limited on the effects of different patterns of use of postmenopausal hormone therapy on fracture incidence and particularly on the effects of ceasing use. Objective: To investigate the effect of different patterns of hormone therapy use on fracture incidence. Design, Setting, and Participants: Prospective study of 138 737 postmenopausal women aged 50 to 69 years recruited from the UK general population in 1996-1998 (the Million Women Study) and followed up for 1.9 to 3.9 years (average, 2.8 years) for fracture incidence. Main Outcome Measure: Adjusted relative risk (RR) for incident fracture (except fracture of the fingers, toes, and ribs) in hormone therapy users compared with never users at baseline. Results: A total of 5197 women (3.7%) reported 1 or more fractures, 79% resulting from falls. Current users of hormone therapy at baseline had a significantly reduced incidence of fracture (RR, 0.62; 95% confidence interval [CI], 0.58-0.66; P<.001). This protection was evident soon after hormone therapy began, and the RR decreased with increasing duration of use (P=.001). Among current users at baseline the RR of fracture did not vary significantly according to whether estrogen-only, estrogen-progestin, or other types of hormones were used (RR [95% CI], 0.64 [0.58-0.71], 0.58 [0.53-0.64], and 0.67 [0.56-0.80], respectively; P=.19), nor did it vary significantly according to estrogen dose or estrogen or progestin constituents. The RR associated with current use of hormone therapy did not vary significantly according to 11 personal characteristics of study participants, including their age at menopause, body mass index, and physical activity. Past users of hormone therapy at baseline experienced no significant protection against fractures (RR, 1.07; 95% CI, 0.99-1.15); incidence rates returned to those of never users within about a year of ceasing use. Conclusions: All types of hormone therapy studied confer substantial protection against fracture while they are used. This protection appears rapidly after use commences and wears off rapidly after use ceases. The older women are, the greater is their absolute reduction in fracture incidence while using hormone therapy.
Abstract:
OBJECTIVES: To determine the cost-effectiveness of influenza vaccination in people aged 65-74 years in the absence of co-morbidity. DESIGN: Primary research: randomised controlled trial. SETTING: Primary care. PARTICIPANTS: People without risk factors for influenza or contraindications to vaccination were identified from 20 general practitioner (GP) practices in Liverpool in September 1999 and invited to participate in the study. There were 5875/9727 (60.4%) people aged 65-74 years identified as potentially eligible and, of these, 729 (12%) were randomised. INTERVENTION: Participants were randomised to receive either influenza vaccine or placebo (ratio 3:1), with all individuals receiving pneumococcal vaccine unless administered in the previous 10 years. Of the 729 people randomised, 552 received vaccine and 177 received placebo; 726 individuals were administered pneumococcal vaccine. MAIN OUTCOME MEASURES AND METHODOLOGY OF ECONOMIC EVALUATION: GP attendance with influenza-like illness (ILI) or pneumonia (primary outcome measure); or any respiratory symptoms; hospitalisation with a respiratory illness; death; participant self-reported ILI; quality of life (QoL) measures at 2, 4 and 6 months post-study vaccination; adverse reactions 3 days after vaccination. A cost-effectiveness analysis was undertaken to identify the incremental cost associated with the avoidance of episodes of influenza in the vaccination population and an impact model was used to extrapolate the cost-effectiveness results obtained from the trial to assess their generalisability throughout the NHS. RESULTS: In England and Wales, weekly consultations for influenza and ILI remained at baseline levels (less than 50 per 100,000 population) until week 50/1999 and then increased rapidly, peaking during week 2/2000 with a rate of 231/100,000. This rate fell within the range of 'higher than expected seasonal activity' of 200-400/100,000. Rates then quickly declined, returning to baseline levels by week 5/2000. The predominant circulating strain during this period was influenza A (H3N2). Five (0.9%) people in the vaccine group were diagnosed by their GP with an ILI compared to two (1.1%) in the placebo group [relative risk (RR), 0.8; 95% confidence interval (CI) = 0.16 to 4.1]. No participants were diagnosed with pneumonia by their GP and there were no hospitalisations for respiratory illness in either group. Significantly fewer vaccinated individuals self-reported a single ILI (4.6% vs 8.9%, RR, 0.51; 95% CI for RR, 0.28 to 0.96). There was no significant difference in any of the QoL measurements over time between the two groups. Reported systemic side-effects showed no significant differences between groups. Local side-effects occurred with a significantly increased incidence in the vaccine group (11.3% vs 5.1%, p = 0.02). Each GP consultation avoided by vaccination was estimated from trial data to generate a net NHS cost of 174 pounds. CONCLUSIONS: No difference was seen between groups for the primary outcome measure, although the trial was underpowered to demonstrate a true difference. Vaccination had no significant effect on any of the QoL measures used, although vaccinated individuals were less likely to self-report ILI. The analysis did not suggest that influenza vaccination in healthy people aged 65-74 years would lead to lower NHS costs. 
Future research should look at ways to maximise vaccine uptake in people at greatest risk from influenza, and also at the level of vaccine protection afforded to people from different age and socio-economic groups.
Abstract:
Objectives: This study reports the cost-effectiveness of a preventive intervention, consisting of counseling and specific support for the mother-infant relationship, targeted at women at high risk of developing postnatal depression. Methods: A prospective economic evaluation was conducted alongside a pragmatic randomized controlled trial in which women considered at high risk of developing postnatal depression were allocated randomly to the preventive intervention (n = 74) or to routine primary care (n = 77). The primary outcome measure was the duration of postnatal depression experienced during the first 18 months postpartum. Data on health and social care use by women and their infants up to 18 months postpartum were collected, using a combination of prospective diaries and face-to-face interviews, and then were combined with unit costs (£, year 2000 prices) to obtain a net cost per mother-infant dyad. The nonparametric bootstrap method was used to present cost-effectiveness acceptability curves and net benefit statistics at alternative willingness to pay thresholds held by decision makers for preventing 1 month of postnatal depression. Results: Women in the preventive intervention group were depressed for an average of 2.21 months (9.57 weeks) during the study period, whereas women in the routine primary care group were depressed for an average of 2.70 months (11.71 weeks). The mean health and social care costs were estimated at £2,396.9 per mother-infant dyad in the preventive intervention group and £2,277.5 per mother-infant dyad in the routine primary care group, providing a mean cost difference of £119.5 (bootstrap 95 percent confidence interval [CI], -£535.4 to £784.9). At a willingness to pay threshold of £1,000 per month of postnatal depression avoided, the probability that the preventive intervention is cost-effective is .71 and the mean net benefit is £383.4 (bootstrap 95 percent CI, -£863.3 to £1,581.5). Conclusions: The preventive intervention is likely to be cost-effective even at relatively low willingness to pay thresholds for preventing 1 month of postnatal depression during the first 18 months postpartum. Given the negative impact of postnatal depression on later child development, further research is required that investigates the longer-term cost-effectiveness of the preventive intervention in high risk women.
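The cost-effectiveness acceptability analysis described above rests on the incremental net benefit, NB = (willingness to pay per month avoided) x (months of depression avoided) - (incremental cost), with the probability of cost-effectiveness estimated as the proportion of bootstrap replicates in which NB is positive. The Python sketch below illustrates the idea only: the per-dyad data are simulated values loosely scaled to the summary figures above, not the trial data.

import numpy as np

rng = np.random.default_rng(0)

def bootstrap_net_benefit(cost_int, months_int, cost_ctrl, months_ctrl,
                          willingness_to_pay, n_boot=5000):
    """Nonparametric bootstrap of the incremental net benefit
    NB = willingness_to_pay * (months avoided) - (extra cost),
    resampling each trial arm with replacement."""
    nb = np.empty(n_boot)
    for b in range(n_boot):
        ci = rng.choice(cost_int, len(cost_int), replace=True).mean()
        mi = rng.choice(months_int, len(months_int), replace=True).mean()
        cc = rng.choice(cost_ctrl, len(cost_ctrl), replace=True).mean()
        mc = rng.choice(months_ctrl, len(months_ctrl), replace=True).mean()
        nb[b] = willingness_to_pay * (mc - mi) - (ci - cc)
    return nb

# Hypothetical per-dyad data, loosely scaled to the summary figures above
cost_int = rng.normal(2396.9, 900, 74)                    # intervention costs (GBP)
cost_ctrl = rng.normal(2277.5, 900, 77)                   # routine care costs (GBP)
months_int = np.clip(rng.normal(2.21, 1.5, 74), 0, None)  # months depressed, intervention
months_ctrl = np.clip(rng.normal(2.70, 1.5, 77), 0, None) # months depressed, routine care

nb = bootstrap_net_benefit(cost_int, months_int, cost_ctrl, months_ctrl,
                           willingness_to_pay=1000)
print(f"mean net benefit: {nb.mean():.0f}; P(cost-effective): {(nb > 0).mean():.2f}")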
Abstract:
Objective: To determine whether the use of verbal descriptors suggested by the European Union (EU) such as "common" (1-10% frequency) and "rare" (0.01-0.1%) effectively conveys the level of risk of side effects to people taking a medicine. Design: Randomised controlled study with unconcealed allocation. Participants: 120 adults taking simvastatin or atorvastatin after cardiac surgery or myocardial infarction. Setting: Cardiac rehabilitation clinics at two hospitals in Leeds, UK. Intervention: A written statement about one of the side effects of the medicine (either constipation or pancreatitis). Within each side effect condition half the patients were given the information in verbal form and half in numerical form (for constipation, "common" or 2.5%; for pancreatitis, "rare" or 0.04%). Main outcome measure: The estimated likelihood of the side effect occurring. Other outcome measures related to the perceived severity of the side effect, its risk to health, and its effect on decisions about whether to take the medicine. Results: The mean likelihood estimate given for the constipation side effect was 34.2% in the verbal group and 8.1% in the numerical group; for pancreatitis it was 18% in the verbal group and 2.1% in the numerical group. The verbal descriptors were associated with more negative perceptions of the medicine than their equivalent numerical descriptors. Conclusions: Patients want and need understandable information about medicines and their risks and benefits. This is essential if they are to become partners in medicine taking. The use of verbal descriptors to improve the level of information about side effect risk leads to overestimation of the level of harm and may lead patients to make inappropriate decisions about whether or not they take the medicine.
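The two EU verbal descriptors tested correspond to fixed frequency bands ("common" = 1-10%; "rare" = 0.01-0.1%). A minimal Python sketch of that mapping follows; note that the "very common", "uncommon" and "very rare" bands are not mentioned in the abstract and are filled in from the standard EU scheme as an assumption.

def eu_verbal_descriptor(freq_percent):
    """Map a side-effect frequency (%) to an EU verbal descriptor.
    Only the 'common' (1-10%) and 'rare' (0.01-0.1%) bands are quoted in the
    abstract; the remaining bands are assumed from the standard EU scheme."""
    if freq_percent > 10:
        return "very common"
    if freq_percent >= 1:
        return "common"
    if freq_percent >= 0.1:
        return "uncommon"
    if freq_percent >= 0.01:
        return "rare"
    return "very rare"

print(eu_verbal_descriptor(2.5))   # constipation frequency in the study -> "common"
print(eu_verbal_descriptor(0.04))  # pancreatitis frequency in the study -> "rare"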
Abstract:
Background: Excessive energy intake and obesity lead to the metabolic syndrome (MetS). Dietary saturated fatty acids (SFAs) may be particularly detrimental to insulin sensitivity (SI) and to other components of the MetS. Objective: This study determined the relative efficacy of reducing dietary SFA, by isoenergetic alteration of the quality and quantity of dietary fat, on risk factors associated with MetS. Design: A free-living, single-blinded dietary intervention study. Subjects and Methods: MetS subjects (n=417) from eight European countries completed the randomized dietary intervention study with four isoenergetic diets distinct in fat quantity and quality: high-SFA; high-monounsaturated fatty acids; and two low-fat, high-complex carbohydrate (LFHCC) diets, supplemented with long chain n-3 polyunsaturated fatty acids (LC n-3 PUFAs) (1.2 g per day) or placebo for 12 weeks. SI estimated from an intravenous glucose tolerance test (IVGTT) was the primary outcome measure. Lipid and inflammatory markers associated with MetS were also determined. Results: In weight-stable subjects, reducing dietary SFA intake had no effect on SI, total and low-density lipoprotein cholesterol concentration, inflammation or blood pressure in the entire cohort. The LFHCC n-3 PUFA diet reduced plasma triacylglycerol (TAG) and non-esterified fatty acid concentrations (P<0.01), particularly in men. Conclusion: There was no effect of reducing SFA on SI in weight-stable obese MetS subjects. LC n-3 PUFA supplementation, in association with a low-fat diet, improved TAG-related MetS risk profiles.
Abstract:
In this study, we report on the development and psychometric evaluation of the Risk-Taking (RT) and Self-Harm (SH) Inventory for Adolescents (RTSHIA), a self-report measure designed to assess adolescent RT and SH in community and clinical settings. A total of 651 young people from secondary schools in England, ranging in age from 11.6 to 18.7 years, and 71 young people referred to mental health services for SH behavior in London, between the ages of 11.9 and 17.5 years, completed the RTSHIA along with standardized measures of adolescent psychopathology. Two factors emerged from the principal axis factoring, and RT and SH were further validated by a confirmatory factor analysis as related, but different, constructs, rather than elements of a single continuum. Inter-item and test-retest reliabilities were high for both components (Cronbach's α = .85, rtt = .90; Cronbach's α = .93, rtt = .87), and considerable evidence emerged in support of the measure's convergent, concurrent, and divergent validity. The findings are discussed with regard to the potential usefulness of the RTSHIA for research and clinical purposes with adolescents.
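The internal-consistency coefficients reported above (Cronbach's α) can be computed directly from item-level responses. The Python sketch below is generic and uses simulated data rather than RTSHIA items.

import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, n_items) array of item scores:
    alpha = k/(k-1) * (1 - sum of item variances / variance of total score)."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

# Simulated correlated item responses for illustration only (not RTSHIA data)
rng = np.random.default_rng(1)
latent = rng.normal(size=(300, 1))
items = latent + rng.normal(scale=0.8, size=(300, 10))
print(round(cronbach_alpha(items), 2))  # high alpha, since items share one factor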
Abstract:
The increased frequency in reporting UK property performance figures, coupled with the acceptance of the IPD database as the market standard, has enabled property to be analysed on a comparable level with other more frequently traded assets. The most widely utilised theory for pricing financial assets, the Capital Asset Pricing Model (CAPM), gives market (systematic) risk, beta, centre stage. This paper seeks to measure the level of systematic risk (beta) across various property types, market conditions and investment holding periods. This paper extends the authors' previous work on investment holding periods and how excess returns (alpha) relate to those holding periods. We draw on the uniquely constructed IPD/Gerald Eve transactions database, containing over 20,000 properties over the period 1983-2005. This research allows us to confirm our initial findings that properties held over longer periods perform in line with overall market performance. One implication of this is that over the long term, performance may be no different from an index-tracking approach.
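Under the CAPM, the systematic risk (beta) measured in the paper is the slope of a regression of asset excess returns on market excess returns. The Python sketch below illustrates the estimator on simulated return series; the IPD/Gerald Eve data themselves are not reproduced here.

import numpy as np

def capm_beta(asset_returns, market_returns, risk_free=0.0):
    """CAPM beta: slope of asset excess returns on market excess returns,
    i.e. cov(r_a - r_f, r_m - r_f) / var(r_m - r_f)."""
    ra = np.asarray(asset_returns) - risk_free
    rm = np.asarray(market_returns) - risk_free
    return np.cov(ra, rm)[0, 1] / np.var(rm, ddof=1)

# Simulated monthly total returns for illustration only
rng = np.random.default_rng(2)
market = rng.normal(0.006, 0.02, 120)
prop = 0.002 + 0.7 * market + rng.normal(0, 0.01, 120)  # true beta of 0.7
print(round(capm_beta(prop, market), 2))  # recovers a beta close to 0.7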
Abstract:
Research into the topic of liquidity has greatly benefited from the availability of data. Although bid-ask spreads were inaccessible to researchers, Roll (1984) provided a conceptual model that estimated the effective bid-ask spread from regular time series data, recorded at a daily or longer interval. Later, data availability improved and researchers were able to address questions regarding the factors that influenced the spreads and the relationship between spreads and risk, return and liquidity. More recently, transaction data have been used to measure the effective spread, and researchers have been able to refine the concepts of liquidity to include the impact of transactions on price movements (Clayton and McKinnon, 2000) on a trade-by-trade analysis. This paper aims to use techniques that combine elements from all three approaches and, by studying US data over a relatively long time period, to throw light on earlier research as well as to reveal the changes in liquidity over the period, controlling for extraneous factors such as market, age and size of REIT. It also reveals some comparable results for the UK market over the same period.
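Roll's (1984) estimator, cited above, infers the effective spread from the negative first-order serial covariance of successive price changes: spread = 2 * sqrt(-cov(Δp_t, Δp_{t-1})). The Python sketch below applies it to a simulated price series in which a random walk is observed alternately at bid or ask quotes; it is illustrative only.

import numpy as np

def roll_spread(prices):
    """Roll (1984) effective spread estimate: s = 2 * sqrt(-cov(dp_t, dp_{t-1})).
    Returns NaN when the serial covariance is positive (estimator undefined)."""
    dp = np.diff(np.asarray(prices, dtype=float))
    cov = np.cov(dp[1:], dp[:-1])[0, 1]
    return 2 * np.sqrt(-cov) if cov < 0 else float("nan")

# Simulated daily prices: a random walk observed at bid or ask (half-spread 0.25)
rng = np.random.default_rng(3)
true_value = 100 + np.cumsum(rng.normal(0, 0.5, 2000))
observed = true_value + 0.25 * rng.choice([-1, 1], size=2000)
print(round(roll_spread(observed), 2))  # close to the true spread of 0.5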