Abstract:
Background: There is a need to review factors related to health service utilisation by the increasing number of cancer survivors in order to inform care planning and the organisation and delivery of services.
Methods: Studies were identified via systematic searches of Medline, PsycINFO, CINAHL, Social Science Citation Index and the SEER-MEDICARE library. Methodological quality was assessed using STROBE, and the Andersen Behavioural Model was used as a framework to structure, organise and analyse the results of the review.
Results: Younger, white cancer survivors were most likely to receive follow-up screening and preventive care, to visit their physician and to use professional mental health services, and were least likely to be hospitalised. Utilisation rates of other health professionals, such as physiotherapists, were low. Only studies of health service use conducted in the USA investigated the role of type of health insurance and ethnicity. There appeared to be disparate service use among US samples in terms of ethnicity and socio-demographic status, regardless of type of health insurance provision; this may be explained by underlying differences in health-seeking behaviours. Overall, use of follow-up care appeared to be lower than expected and barriers existed for particular groups of cancer survivors.
Conclusions: Studies focussed on the use of a specific type of service rather than adopting a whole-system approach, and future health services research should address this shortcoming. Overall, there is a need to improve access to care for all cancer survivors. Studies were predominantly US-based, focussing mainly on breast or colorectal cancer; thus, the generalisability of findings to other health-care systems and cancer sites is unclear. The Andersen Behavioural Model provided an appropriate framework for studying and understanding health service use among cancer survivors. The active involvement of physicians and the use of personalised care plans are required in order to ensure that post-treatment needs and recommendations for care are met.
Abstract:
Total arsenic (T-As) and arsenic (As) species were determined by HPLC-HG-AAS in ten different confectionery products: nine throat pearls and an industrial licorice extract. Spanish legislation sets the maximum total-As content in confectionery products at 0.1 μg/g. T-As concentrations were above this permitted limit (mean of 0.219 ± 0.008 μg/g), and all As was present in the form of toxic inorganic species. Daily consumption of licorice confections in Spain is 1.1 g, which leads to a daily intake of inorganic As of 0.23 μg (0.2% of the tolerable daily intake of inorganic As for a teenager). These results show that, although high total-As concentrations were found in licorice throat pearls and all of the As was present as inorganic species, no significant health risk is expected from this As source alone.
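The intake arithmetic above can be sketched as follows. Only the mean concentration and daily consumption figures come from the abstract; the tolerable daily intake (TDI) and the teenager's body weight are assumed values for illustration.

```python
# Sketch of the daily inorganic-As intake calculation reported above.
MEAN_T_AS = 0.219          # mean total-As concentration, ug/g (from the abstract)
DAILY_CONSUMPTION = 1.1    # g of licorice confections per day in Spain (from the abstract)

# Assumed values (not stated in the abstract):
TDI_PER_KG = 2.1           # ug inorganic As per kg body weight per day (assumed)
BODY_WEIGHT = 55.0         # kg, assumed body weight of a teenager

daily_intake = MEAN_T_AS * DAILY_CONSUMPTION            # ug/day, ~0.24
fraction_of_tdi = daily_intake / (TDI_PER_KG * BODY_WEIGHT)

print(f"daily inorganic-As intake: {daily_intake:.2f} ug")
print(f"share of tolerable daily intake: {fraction_of_tdi:.1%}")  # ~0.2%
```

With these assumptions the computed share matches the 0.2% of TDI reported in the abstract.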
Abstract:
Background Persistent and marked differences in adult morbidity and mortality between regions in the United Kingdom (UK) are often referred to as the north-south gradient (or divide) and the Scottish effect, and are only partly explained by adult levels of socioeconomic status (SES) or risk factors, which suggests variation arising earlier in life. The aim of the current study was to examine regional variations in five health indicators in children in England and Scotland at birth and at three years of age.
Methods Respondents were 10,500 biological Caucasian mothers of singleton children recruited to the Millennium Cohort Study (MCS). Outcome variables were: gestational age and weight at birth, and height, body mass index (BMI), and externalising behaviour at age three. Region/Country was categorised as: South (reference), Midlands, North, and Scotland. Respondents provided information on child, maternal, household, and socioeconomic characteristics when the cohort infant/child was aged nine months and again when aged three years.
Results There were no significant regional variations for gestational age or birthweight. However, at age three there was a north-south gradient for externalising behaviour and a north-south divide in BMI which attenuated on adjustment. However, a north-south divide in height was not fully explained by the adjusted model. There was also evidence of a ‘Midlands effect’, with increased likelihoods of shorter stature and behaviour problems. Results showed a Scottish effect for height and BMI in the unadjusted models, and height in the adjusted model. However, Scottish children were less likely to show behaviour problems in crude and adjusted models.
Conclusions Findings indicated no marked regional differences in children at birth, but by age three some regional health differences were evident. Although these were not distinct north-south gradients or Scottish effects, they are evidence of health inequalities appearing at an early age and depending on geographic location.
Abstract:
The focus of this paper is to outline a method for consolidating and implementing work on performance-based specification and testing. The first part of the paper reviews the mathematical significance of the variables used in common service life models, with the aim of identifying a set of significant variables that influence the ingress of chloride ions into concrete. These variables are termed Key Performance Indicators (KPIs). This will also help to reduce the complexity of some of the service life models and make them more appealing to practising engineers. The second part of the paper presents a plan for developing a database based on these KPIs so that relationships can be drawn between common concrete mix parameters and KPIs. This will assist designers in specifying a concrete with adequate performance for a particular environment. Collectively, this is referred to as the KPI-based approach, and the concluding remarks outline how the authors envisage the KPI theory relating to performance assessment and monitoring.
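The abstract does not name a specific service life model, but a widely used example for chloride ingress is the error-function solution of Fick's second law, whose inputs (surface chloride content, apparent diffusion coefficient, cover depth, critical threshold) are candidate KPIs of the kind discussed above. The sketch below uses that standard form; all parameter values are illustrative assumptions.

```python
# Minimal sketch of a common chloride-ingress service life model:
# the erf solution of Fick's second law, C(x,t) = Cs * (1 - erf(x / (2*sqrt(D*t)))).
from math import erf, sqrt

def chloride_at_depth(x_mm, t_years, c_surface, d_mm2_per_year):
    """Chloride content at depth x after time t (erf solution of Fick's 2nd law)."""
    return c_surface * (1.0 - erf(x_mm / (2.0 * sqrt(d_mm2_per_year * t_years))))

# Illustrative KPI-style inputs (assumed values, not from the abstract):
C_S = 0.6       # surface chloride content, % by mass of binder
D_A = 100.0     # apparent diffusion coefficient, mm^2/year
COVER = 50.0    # concrete cover depth, mm
C_CRIT = 0.4    # critical chloride threshold, % by mass of binder

# Service life estimate: first year the cover-depth chloride reaches the threshold.
service_life = next(t for t in range(1, 500)
                    if chloride_at_depth(COVER, t, C_S, D_A) >= C_CRIT)
print(f"estimated service life: ~{service_life} years")
```

Tabulating such inputs and outputs against mix parameters is one way the proposed KPI database could relate mix design to expected performance.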
Abstract:
Background: Clinical Commissioning Groups (CCGs) are mandated to use research evidence effectively to ensure optimum use of resources by the National Health Service (NHS), both in accelerating innovation and in stopping the use of less effective practices and models of service delivery. We intend to evaluate whether access to a demand-led evidence service improves uptake and use of research evidence by NHS commissioners compared with less intensive and less targeted alternatives.
Methods/design: This is a controlled before-and-after study involving CCGs in the North of England. Participating CCGs will receive one of three interventions to support the use of research evidence in their decision-making: 1) consulting plus responsive push of tailored evidence; 2) consulting plus an unsolicited push of non-tailored evidence; or 3) a standard service of unsolicited push of non-tailored evidence. Our primary outcome will be change at 12 months from baseline in a CCG's ability to acquire, assess, adapt and apply research evidence to support decision-making. Secondary outcomes will measure individual clinical leads' and managers' intentions to use research evidence in decision-making. Documentary evidence of the use of the outputs of the service will be sought. A process evaluation will evaluate the nature and success of the interactions both within the sites and between commissioners and researchers delivering the service.
Discussion: The proposed research will generate new knowledge of direct relevance and value to the NHS. The findings will help to clarify which elements of the service are of value in promoting the use of research evidence. Those involved in NHS commissioning will be able to use the results to inform how best to build the infrastructure they need to acquire, assess, adapt and apply research evidence to support decision-making and to fulfil their statutory duties under the Health and Social Care Act.
Abstract:
Background: There is growing interest in the potential utility of real-time polymerase chain reaction (PCR) in diagnosing bloodstream infection by detecting pathogen deoxyribonucleic acid (DNA) in blood samples within a few hours. SeptiFast (Roche Diagnostics GmBH, Mannheim, Germany) is a multipathogen probe-based system targeting ribosomal DNA sequences of bacteria and fungi. It detects and identifies the commonest pathogens causing bloodstream infection. As background to this study, we report a systematic review of Phase III diagnostic accuracy studies of SeptiFast, which reveals uncertainty about its likely clinical utility based on widespread evidence of deficiencies in study design and reporting with a high risk of bias.
Objective: Determine the accuracy of SeptiFast real-time PCR for the detection of health-care-associated bloodstream infection, against standard microbiological culture.
Design: Prospective multicentre Phase III clinical diagnostic accuracy study using the standards for the reporting of diagnostic accuracy studies criteria.
Setting: Critical care departments within NHS hospitals in the north-west of England.
Participants: Adult patients requiring blood culture (BC) when developing new signs of systemic inflammation.
Main outcome measures: SeptiFast real-time PCR results at species/genus level compared with microbiological culture in association with independent adjudication of infection. Metrics of diagnostic accuracy were derived including sensitivity, specificity, likelihood ratios and predictive values, with their 95% confidence intervals (CIs). Latent class analysis was used to explore the diagnostic performance of culture as a reference standard.
Results: Of 1006 new patient episodes of systemic inflammation in 853 patients, 922 (92%) met the inclusion criteria and provided sufficient information for analysis. Index test assay failure occurred on 69 (7%) occasions. Adult patients had been exposed to a median of 8 days (interquartile range 4–16 days) of hospital care, had high levels of organ support activities and recent antibiotic exposure. SeptiFast real-time PCR, when compared with culture-proven bloodstream infection at species/genus level, had better specificity (85.8%, 95% CI 83.3% to 88.1%) than sensitivity (50%, 95% CI 39.1% to 60.8%). When compared with pooled diagnostic metrics derived from our systematic review, our clinical study revealed lower test accuracy of SeptiFast real-time PCR, mainly as a result of low diagnostic sensitivity. There was a low prevalence of BC-proven pathogens in these patients (9.2%, 95% CI 7.4% to 11.2%) such that the post-test probabilities of both a positive (26.3%, 95% CI 19.8% to 33.7%) and a negative SeptiFast test (5.6%, 95% CI 4.1% to 7.4%) indicate the potential limitations of this technology in the diagnosis of bloodstream infection. However, latent class analysis indicates that BC has a low sensitivity, questioning its relevance as a reference test in this setting. Using this analysis approach, the sensitivity of the SeptiFast test was low but also appeared significantly better than BC. Blood samples identified as positive by either culture or SeptiFast real-time PCR were associated with a high probability (> 95%) of infection, indicating higher diagnostic rule-in utility than was apparent using conventional analyses of diagnostic accuracy.
Conclusion: SeptiFast real-time PCR on blood samples may have rapid rule-in utility for the diagnosis of health-care-associated bloodstream infection, but the lack of sensitivity is a significant limiting factor. Innovations aimed at improving the diagnostic sensitivity of real-time PCR in this setting are urgently required. Future work recommendations include technology developments to improve the efficiency of pathogen DNA extraction and the capacity to detect a much broader range of pathogens and drug resistance genes, and the application of new statistical approaches able to more reliably assess test performance in situations where the reference standard (e.g. blood culture in the setting of high antimicrobial use) is prone to error.
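The post-test probabilities quoted in the Results follow from the study's prevalence, sensitivity and specificity via the standard predictive-value identities. A short sketch reproducing that arithmetic, using only the figures reported in the abstract:

```python
# Post-test probability arithmetic from the reported study figures.
prevalence = 0.092     # prevalence of BC-proven pathogens (9.2%)
sensitivity = 0.50     # SeptiFast vs culture-proven infection at species/genus level (50%)
specificity = 0.858    # (85.8%)

# P(infection | positive test): true positives over all test positives
p_pos = (sensitivity * prevalence) / (
    sensitivity * prevalence + (1 - specificity) * (1 - prevalence))

# P(infection | negative test): false negatives over all test negatives
p_neg = ((1 - sensitivity) * prevalence) / (
    (1 - sensitivity) * prevalence + specificity * (1 - prevalence))

print(f"post-test probability, positive SeptiFast: {p_pos:.1%}")  # ~26.3%
print(f"post-test probability, negative SeptiFast: {p_neg:.1%}")  # ~5.6%
```

At this low prevalence even a moderately specific test yields a modest positive post-test probability, which is the limitation the Results describe.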