34 results for PSYCHOLOGICAL-FACTORS INCREASE
in CentAUR: Central Archive, University of Reading - UK
Abstract:
Aim: To determine the prevalence and nature of prescribing errors in general practice; to explore the causes; and to identify defences against error. Methods: 1) Systematic reviews; 2) retrospective review of unique medication items prescribed over a 12-month period to a 2% sample of patients from 15 general practices in England; 3) interviews with 34 prescribers regarding 70 potential errors; 15 root cause analyses; and six focus groups involving 46 primary health care team members. Results: The study involved examination of 6,048 unique prescription items for 1,777 patients. Prescribing or monitoring errors were detected for one in eight patients, involving around one in 20 of all prescription items. The vast majority of the errors were of mild to moderate severity, with one in 550 items being associated with a severe error. The following factors were associated with increased risk of prescribing or monitoring errors: male gender, age less than 15 years or greater than 64 years, number of unique medication items prescribed, and being prescribed preparations in the following therapeutic areas: cardiovascular, infections, malignant disease and immunosuppression, musculoskeletal, eye, ENT and skin. Prescribing or monitoring errors were not associated with the grade of GP or with whether prescriptions were issued as acute or repeat items. A wide range of underlying causes of error was identified relating to the prescriber, the patient, the team, the working environment, the task, the computer system and the primary/secondary care interface. Many defences against error were also identified, including strategies employed by individual prescribers and primary care teams, and making best use of health information technology. Conclusion: Prescribing errors in general practices are common, although severe errors are unusual. Many factors increase the risk of error.
Strategies for reducing the prevalence of error should focus on GP training, continuing professional development for GPs, clinical governance, effective use of clinical computer systems, and improving safety systems within general practices and at the interface with secondary care.
Abstract:
Objective: To determine the prevalence and nature of prescribing and monitoring errors in general practices in England. Design: Retrospective case note review of unique medication items prescribed over a 12-month period to a 2% random sample of patients. Mixed effects logistic regression was used to analyse the data. Setting: Fifteen general practices across three primary care trusts in England. Data sources: Examination of 6048 unique prescription items prescribed over the previous 12 months for 1777 patients. Main outcome measures: Prevalence of prescribing and monitoring errors, and severity of errors, using validated definitions. Results: Prescribing and/or monitoring errors were detected in 4.9% (296/6048) of all prescription items (95% confidence interval 4.4% to 5.5%). The vast majority of errors were of mild to moderate severity, with 0.2% (11/6048) of items having a severe error. After adjusting for covariates, patient-related factors associated with an increased risk of prescribing and/or monitoring errors were: age less than 15 years (odds ratio (OR) 1.87, 1.19 to 2.94, p=0.006) or greater than 64 years (OR 1.68, 1.04 to 2.73, p=0.035), and higher numbers of unique medication items prescribed (OR 1.16, 1.12 to 1.19, p<0.001). Conclusion: Prescribing and monitoring errors are common in English general practice, although severe errors are unusual. Many factors increase the risk of error. Having identified the most common and important errors, and the factors associated with these, strategies to prevent future errors should be developed based on the study findings.
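The adjusted odds ratios reported above come from a (mixed effects) logistic regression, where each coefficient is a log-odds that is exponentiated into an OR with a 95% confidence interval. As a minimal sketch, with a hypothetical coefficient and standard error chosen to be consistent with the reported OR of 1.87 (1.19 to 2.94) for age under 15:

```python
import math

def odds_ratio_ci(beta, se, z=1.96):
    """Convert a logistic-regression coefficient (log-odds scale) and its
    standard error into an odds ratio with a 95% confidence interval."""
    return (math.exp(beta),
            math.exp(beta - z * se),
            math.exp(beta + z * se))

# Hypothetical values (not from the study's model output), chosen so the
# result matches the reported OR 1.87 (1.19 to 2.94) for age < 15 years.
or_, lo, hi = odds_ratio_ci(beta=0.626, se=0.231)
print(f"OR {or_:.2f} ({lo:.2f} to {hi:.2f})")  # OR 1.87 (1.19 to 2.94)
```

The interval is symmetric on the log-odds scale, which is why the printed CI is asymmetric around the OR itself.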
Abstract:
Background: Epidemiological studies indicate that the prevalence of psychological problems in patients attending primary care services may be as high as 25%. Aim: To identify factors that influence the detection of psychological difficulties in adolescent patients receiving primary care in the UK. Design of study: A prospective study of 13-16 year olds consecutively attending general practices. Setting: General practices, Norfolk, UK. Method: Information was obtained from adolescents and parents using the validated Strengths and Difficulties Questionnaire (SDQ) and from GPs using the consultation assessment form. Results: Ninety-eight adolescents were recruited by 13 GPs in Norfolk (mean age = 14.4 years, SD = 1.08; 38 males, 60 females). The study identified psychological difficulties in almost one-third of adolescents (31/98, 31.6%). Three factors significant in the detection of psychological disorders in adolescents were identified: adolescents' perceptions of difficulties according to the self-report SDQ, the severity of their problems as indicated by the self-report SDQ, and whether psychological issues were discussed in the consultation. GPs did not always explore psychological problems with adolescents, even when GPs perceived these to be present. Nineteen of 31 adolescents with psychological difficulties were identified by GPs (sensitivity = 61.2%, specificity = 85.1%). A management plan or follow-up was made for only seven of the 19 adolescents identified, suggesting that ongoing psychological difficulties in many patients are not being addressed. Conclusions: GPs are in a good position to identify psychological issues in adolescents, but GPs and adolescents seem reluctant to explore these openly. Open discussion of psychological issues in GP consultations was found to be the most important factor in determining whether psychological difficulties in adolescents are detected by GPs.
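The sensitivity and specificity figures above follow from a 2x2 detection table. A minimal sketch of that calculation: the true-positive and false-negative counts (19 and 12) are stated in the abstract, while the true-negative count of 57 out of the 67 adolescents without difficulties is back-calculated from the reported specificity and is therefore an assumption.

```python
# 2x2 detection table: TP/FN are from the abstract (19 of 31 identified);
# TN = 57 and FP = 10 are back-calculated from the reported specificity
# of 85.1% across 98 - 31 = 67 non-cases, so they are an assumption.
tp, fn = 19, 12   # difficulties present: detected / missed
tn, fp = 57, 10   # difficulties absent: correctly ruled out / over-called

sensitivity = tp / (tp + fn)  # proportion of true cases the GPs detected
specificity = tn / (tn + fp)  # proportion of non-cases correctly ruled out

print(f"sensitivity = {sensitivity:.1%}, specificity = {specificity:.1%}")
```

Note that 19/31 rounds to 61.3%; the abstract reports 61.2%, a rounding difference of no practical consequence.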
Abstract:
A set of lysimeter-based experiments was carried out during 2000/01 to evaluate the impact of soil type and grassland management on potassium (K) leaching. The effects of (1) four soil textures (sand, loam, loam over chalk and clay), (2) grazing and cutting (with farmyard manure application), and (3) K applied as inorganic fertilizer, dairy slurry or a mixture of both sources were tested. Total K losses in the clay soil were more than twice those in the sand soil (13 and 6 kg K ha⁻¹, respectively) because of the development of preferential flow in the clay soil. They were also greater in the cut treatment than in the grazed treatment (82 and 51 kg K ha⁻¹, respectively; P ≤ 0.01), associated with a 63% increase in K concentration in the leachates from the former (6.7 ± 0.28 and 4.1 ± 0.22 mg K L⁻¹ for cut and grazed, respectively; P ≤ 0.01) because of the K input from the farmyard manure. The source of fertilizer did not affect total K losses or the average K concentration in the leachates (P > 0.05), but it changed the pattern of these over time.
Abstract:
This paper presents the results of (a) on-farm trials (eight) over a two-year period designed to test the effectiveness of leguminous cover crops in increasing maize yields in Igalaland, Nigeria, and (b) a survey designed to monitor the extent of, and reasons behind, adoption of the leguminous cover crop technology in subsequent years by farmers involved, to varying degrees, in the trial programme. Particular emphasis was placed on comparing adoption of leguminous cover crops with that of new crop varieties released by a non-governmental organization in the same area since the mid-1980s. While the leguminous cover crop technology boosted maize grain yields by 127 to 136% above an untreated control yield of between 141 and 171 kg ha⁻¹, the adoption rate (number of farmers adopting) was only 18%. By way of contrast, new crop varieties had a highly variable benefit in terms of yield advantage over local varieties, with the best average increase of around 20%. Adoption rates for new crop varieties, assessed as both the number of farmers growing the varieties and the number of plots planted to the varieties, were 40% on average. The paper discusses some key factors influencing adoption of the leguminous cover crop technology, including seed availability. Implications of these results for a local non-governmental organization, the Diocesan Development Services, concerned with promoting the leguminous cover crop technology are also discussed.
Abstract:
A laboratory incubation experiment was conducted to evaluate the soil factors that influence the dissolution of two phosphate rocks (PRs) of different reactivity (Gafsa, GPR, reactive PR; and Togo-Hahotoe, HPR, low-reactivity PR) in seven agricultural soils from Cameroon having variable phosphorus (P)-sorption capacities, organic carbon (C) contents, and exchangeable acidities. Ground PR was mixed with the soils at a rate of 500 mg P kg⁻¹ soil and incubated at 30 °C for 85 days. Dissolution of the PRs was determined at various intervals using the ΔNaOH-P method (the difference in the amount of P extracted by 0.5 M NaOH between the PR-treated soils and the control). Between 4 and 27% of HPR and 33 and 50% of GPR were dissolved in the soils. Calcium (Ca) saturation of cation exchange sites and proton supply strongly affected PR dissolution in these soils. Acid soils with pH(H2O) < 5 (NKL, ODJ, NSM, MTF) dissolved more phosphate rock than those with pH(H2O) > 5 (DSC, FGT, BAF). However, the lack of a sufficient Ca sink in the former constrained the dissolution of both PRs. The dissolution of GPR in the slightly acidic soils was limited by the increase in Ca saturation, and that of HPR was constrained by the limited supply of protons. Generally, the dissolution of GPR was higher than that of HPR for each soil. The kinetics of PR dissolution in the soils was best described by the power function equation P = At^B. More efficient use of PR in these soils can be achieved by raising the soil cation exchange capacity, thereby increasing the Ca sink size. This could be done by amending such soils with organic materials.
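A power-function model of the form P = A·t^B is conventionally fitted by ordinary least squares after log-transforming both sides (log P = log A + B log t). A minimal sketch of that fit, using synthetic dissolution data (the values below are illustrative, not from the study):

```python
import math

def fit_power(t, p):
    """Fit P = A * t**B by linear least squares on log P = log A + B log t."""
    x = [math.log(ti) for ti in t]
    y = [math.log(pi) for pi in p]
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
         / sum((xi - mx) ** 2 for xi in x))
    a = math.exp(my - b * mx)
    return a, b

# Synthetic sampling times (days) and dissolution values that follow
# P = 12 * t**0.35 exactly, so the fit should recover A and B.
t = [4, 12, 27, 50, 85]
p = [12 * ti ** 0.35 for ti in t]
a, b = fit_power(t, p)
print(f"A = {a:.2f}, B = {b:.3f}")  # A = 12.00, B = 0.350
```

With real data the log-transform weights relative (not absolute) errors, which is usually appropriate for dissolution percentages spanning an order of magnitude.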
Abstract:
Ozone and its precursors were measured on board the Facility for Airborne Atmospheric Measurements (FAAM) BAe 146 Atmospheric Research Aircraft during the monsoon season 2006 as part of the African Monsoon Multidisciplinary Analysis (AMMA) campaign. One of the main features observed in the west African boundary layer is the increase in ozone mixing ratios from 25 ppbv over the forested area (south of 12° N) up to 40 ppbv over the Sahelian area. We employ a two-dimensional (latitudinal versus vertical) meteorological model coupled with an O3-NOx-VOC chemistry scheme to simulate the distribution of trace gases over West Africa during the monsoon season and to analyse the processes involved in the establishment of such a gradient. Including an additional source of NO over the Sahelian region to account for NO emitted by soils, we simulate a mean NOx concentration of 0.7 ppbv at 16° N versus 0.3 ppbv over the vegetated region further south, in reasonable agreement with the observations. As a consequence, ozone is photochemically produced at a rate of 0.25 ppbv h−1 over the vegetated region whilst it reaches up to 0.75 ppbv h−1 at 16° N. We find that the modelled gradient is due to a combination of enhanced deposition to vegetation, which decreases the ozone levels by up to 11 ppbv, and the aforementioned enhanced photochemical production north of 12° N. The peroxy radicals required for this enhanced production in the north come from the oxidation of background CO and CH4 as well as from VOCs. Sensitivity studies reveal that both the background CH4 and partially oxidised VOCs, produced from the oxidation of isoprene emitted from the vegetation in the south, contribute around 5–6 ppbv to the ozone gradient. These results suggest that the northward transport of trace gases by the monsoon flux, especially during nighttime, can have a significant, though secondary, role in determining the ozone gradient in the boundary layer.
Convection, anthropogenic emissions and NO produced from lightning do not contribute to the establishment of the discussed ozone gradient.
Abstract:
Despite the acknowledged benefits of reducing SFA intake, few countries within the EU meet recognised targets. Milk and dairy products represent the single largest source of dietary SFA in most countries, yet epidemiological evidence indicates that milk has cardioprotective properties, such that simply reducing consumption of dairy foods to meet SFA targets may not be a sound public health approach. The present paper explores the options for replacing some of the SFA in milk fat with cis-MUFA through alteration of the diet of the dairy cow, and the evidence that such changes can improve the indicators for CHD and CVD in general for the consumer. In addition, the outcome of such changes on risk factors for CHD and CVD at the population level is examined in the light of a modelling exercise involving data for eleven EU member states. Given the current and projected costs of health care, the results indicate that urgent consideration should be given to such a strategy.
Abstract:
The effect of variety, agronomic and environmental factors on the chemical composition and energy value for ruminants and non-ruminants of husked and naked oat grain was studied. Winter oats were grown as experimental plots in each of 2 years on three sites in England. At each site two conventional husked oat cultivars (Gerald and Image) and two naked cultivars (Kynon and Pendragon) were grown. At each site, crops were sown on two dates and all crops were grown with the application of either zero or optimum fertiliser nitrogen. Variety and factors contained within the site + year effect had the greatest influence on the chemical composition and nutritive value of oats, followed by nitrogen fertiliser treatment. For example, compared with zero nitrogen, the optimum nitrogen fertiliser treatment resulted in a consistent and significant (P < 0.001) increase in crude protein for all varieties at all sites from an average of 95 to 118 g kg⁻¹ DM, increased the potassium concentration in all varieties from an average of 4.9 to 5.1 g kg⁻¹ DM (P < 0.01), and reduced total lipid by a small but significant (P < 0.001) amount. Optimum nitrogen increased (P < 0.001) the NDF concentration in the two husked varieties and in the naked variety Pendragon. Naked cultivars were lower in fibre and considerably higher in energy, total lipid, linoleic acid, protein, starch and essential amino acids than the husked cultivars. Thus nutritionists need to be selective in their choice of naked or husked oats depending on the intended dietary use. (C) 2004 Elsevier B.V. All rights reserved.
Abstract:
Logistic regression, supported by other statistical analyses, was used to explore the possible association of risk factors with the fluoroquinolone (FQ)-resistance status of 108 pig finisher farms in Great Britain. The farms were classified as 'affected' or 'not affected' by FQ-resistant E. coli or Campylobacter spp. on the basis of isolation of organisms from faecal samples on media containing 1 mg/l FQ. The use of FQ was the most important factor associated with finding resistant E. coli and/or Campylobacter, which were found on 79% (FQ-resistant E. coli) and 86% (FQ-resistant Campylobacter) of farms with a history of FQ use. However, resistant bacteria were also found on 19% (FQ-resistant E. coli) and 54% (FQ-resistant Campylobacter) of farms with no history of FQ use. For FQ-resistant E. coli, biosecurity measures may be protective and there was strong seasonal variation, with more farms found affected when sampled in the summer. For FQ-resistant Campylobacter, the buying-in of grower stock may increase risk and good on-farm hygiene may be protective. The findings suggest that resistant organisms, particularly Campylobacter, may spread between pig farms.
Abstract:
A phylogenetic approach was taken to investigate the evolutionary history of seed appendages in the plant family Polygalaceae (Fabales) and to determine which factors might be associated with the evolution of elaiosomes, through comparisons to abiotic (climate) and biotic (ant species number and abundance) timelines. Molecular datasets from three plastid regions representing 160 species were used to reconstruct a phylogenetic tree of the order Fabales, focusing on Polygalaceae. Bayesian dating methods were used to estimate the age of the appearance of ant-dispersed elaiosomes in Polygalaceae, shown by likelihood optimizations to have a single origin in the family. Topology-based tests indicated a diversification rate shift associated with the appearance of caruncular elaiosomes. We show that evolution of the caruncular elaiosome type currently associated with ant dispersal occurred 54.0-50.5 million years ago. This is long after an estimated increase in ant lineages in the Late Cretaceous based on molecular studies, but broadly concomitant with increasing global temperatures culminating in the Late Paleocene-Early Eocene thermal maxima. These results suggest that although most major ant clades were present when elaiosomes appeared, the environmental significance of elaiosomes may have been an important factor in the success of elaiosome-bearing lineages. Ecological abundance of ants is perhaps more important than lineage numbers in determining the significance of ant dispersal. Thus, our observation that elaiosomes predate the increased ecological abundance of ants inferred from amber deposits could be indicative of an initial abiotic environmental function.
Abstract:
Background: The upper outer quadrant (UOQ) of the breast is the most frequent site for incidence of breast cancer, but the reported disproportionate incidence in this quadrant appears to rise with year of publication. Materials and Methods: In order to determine whether this increasing incidence in the UOQ is an artifact of different study populations or is chronological, data have been analysed for annual quadrant incidence of female breast cancer recorded nationally in England and Wales between 1979 and 2000 and in Scotland between 1980 and 2001. Results: In England and Wales, the recorded incidence of female breast cancer in the UOQ rose from 47.9% in 1979 to 53.3% in 2000, and has done so linearly over time with a correlation coefficient R of +0.71 ± SD 0.01 (p < 0.001). Analysis of independent data from Scotland showed a similar trend, in that recorded female breast cancer had also increased in the UOQ from 38.3% in 1980 to 54.7% in 2001, with a correlation coefficient R for the linear annual increase of +0.80 ± SD 0.03 (p < 0.001). Conclusion: These results are inconsistent with current views that the high level of UOQ breast cancer is due solely to a greater amount of target epithelial tissue in that region. Identification of the reasons for such a disproportionate site-specific increase could provide clues as to causative factors in breast cancer.
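The linear annual trend above is summarised by a Pearson correlation coefficient R between calendar year and the UOQ percentage. A minimal sketch of that statistic, using made-up illustrative values (not the national registry data):

```python
import math

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Illustrative values only: a hypothetical steady rise in the percentage
# of breast cancers recorded in the upper outer quadrant.
years = list(range(1979, 1984))
uoq_pct = [47.9, 48.6, 48.9, 49.8, 50.1]
print(f"R = {pearson_r(years, uoq_pct):+.2f}")  # R = +0.99
```

A value near +1 with a small p-value is what distinguishes a genuine chronological trend from year-to-year noise in the recorded quadrant percentages.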
Abstract:
This study was aimed at determining whether an increase of 5 portions of fruits and vegetables in the form of soups and beverages has a beneficial effect on markers of oxidative stress and cardiovascular disease risk factors. The study was a single blind, randomized, controlled, crossover dietary intervention study. After a 2-wk run-in period with fish oil supplementation, which continued throughout the dietary intervention to increase oxidative stress, the volunteers consumed carotenoid-rich or control vegetable soups and beverages for 4 wk. After a 10-wk wash-out period, the volunteers repeated the above protocol, consuming the other intervention foods. Both test and control interventions significantly increased the % energy from carbohydrates and decreased dietary protein and vitamin B-12 intakes. Compared with the control treatment, consumption of the carotenoid-rich soups and beverages increased dietary carotenoids, vitamin C, alpha-tocopherol, potassium, and folate, and the plasma concentrations of alpha-carotene (362%), beta-carotene (250%) and lycopene (31%) (P < 0.01) and decreased the plasma homocysteine concentration by 8.8% (P < 0.01). The reduction in plasma homocysteine correlated weakly with the increase in dietary folate during the test intervention (r = -0.35, P = 0.04). The plasma antioxidant status and markers of oxidative stress were not affected by treatment. Consumption of fruit and vegetable soups and beverages makes a useful contribution to meeting dietary recommendations for fruit and vegetable consumption.