Abstract:
Biochars produced by slow pyrolysis of greenwaste (GW), poultry litter (PL), papermill waste (PS), and biosolids (BS) were shown to reduce N2O emissions from an acidic Ferrosol. Similar reductions were observed for the untreated GW feedstock. Soil was amended with biochar or feedstock at application rates of 1% and 5%. Following an initial incubation, nitrogen (N) was added at 165 kg/ha as urea. Microcosms were again incubated before being brought to 100% water-filled porosity and held at this water content for a further 47 days. The flooding phase accounted for the majority (>80%) of total N2O emissions. The control soil released 3165 mg N2O-N/m2, or 15.1% of the available N, as N2O. Amendment with 1% and 5% GW feedstock significantly reduced emissions to 1470 and 636 mg N2O-N/m2, respectively, equivalent to 8.6% and 3.8% of applied N. The GW biochar produced at 350°C was least effective in reducing emissions, resulting in 1625 and 1705 mg N2O-N/m2 for the 1% and 5% amendments. Amendment with BS biochar at 5% had the greatest impact, reducing emissions to 518 mg N2O-N/m2, or 2.2% of the applied N over the incubation period. Metabolic activity, as measured by CO2 production, could not explain the differences in N2O emissions between controls and amendments, nor could NH4+ or NO3− concentrations in the biochar-amended soils. A decrease in NH4+ and NO3− following GW feedstock application is likely to have been responsible for reducing N2O emissions from this amendment. The reduction in N2O emissions from the biochar-amended soils was attributed to increased adsorption of NO3−. Smaller reductions may also result from improved aeration and porosity, leading to lower levels of denitrification and hence lower N2O emissions. Alternatively, the observed increase in pH can drive denitrification through to dinitrogen during soil flooding.
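As a check on the reported percentages, the short sketch below (Python, illustrative only) converts cumulative emissions from mg N2O-N/m2 to kg N/ha (1 mg/m2 = 0.01 kg/ha) and expresses them against the 165 kg N/ha applied as urea. The amendment figures land close to the reported values; the control's 15.1% is stated against "available N" (applied plus pre-existing mineral N), which the abstract does not quantify, so it is not reproduced exactly.

```python
# Illustrative unit-conversion check; emission values are taken from the abstract.
N_APPLIED_KG_HA = 165.0  # urea-N application rate (kg N/ha)

def pct_of_applied_n(emission_mg_per_m2: float) -> float:
    """Cumulative N2O-N emission as a percentage of applied urea-N."""
    emission_kg_ha = emission_mg_per_m2 * 0.01  # 1 mg/m2 = 0.01 kg/ha
    return 100.0 * emission_kg_ha / N_APPLIED_KG_HA

for label, mg_m2 in [("control", 3165), ("1% GW feedstock", 1470),
                     ("5% GW feedstock", 636), ("5% BS biochar", 518)]:
    # e.g. 1470 mg/m2 -> 14.7 kg N/ha -> ~8.9% of applied N (abstract: 8.6%)
    print(f"{label}: {mg_m2 * 0.01:.2f} kg N/ha = {pct_of_applied_n(mg_m2):.1f}% of applied N")
```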
Abstract:
In the Australian context, the term hooning refers to risky driving behaviours such as illegal street racing and speed trials, as well as behaviours that involve unnecessary noise and smoke, including burnouts, donuts, fishtails, drifting and other skids. Hooning receives considerable negative media attention in Australia, and since the 1990s all Australian jurisdictions have implemented vehicle impoundment programs to deal with the problem. However, there is limited objective evidence of the road safety risk associated with hooning behaviours. Attempts to estimate the risk associated with hooning are limited by official data collection and storage practices, and by the willingness of drivers to admit to their illegal behaviour in the event of a crash. International evidence suggests that illegal street racing is associated with only a small proportion of fatal crashes; however, hooning in the Australian context encompasses a broader group of driving behaviours than illegal street racing alone, and it is possible that the road safety risks differ across these behaviours. There is evidence from North American jurisdictions that vehicle impoundment programs are effective for managing drink driving offenders and drivers who continue to drive while disqualified or suspended, both during and after the impoundment period. However, these programs used impoundment periods of 30–180 days (depending on the number of previous offences). In Queensland the penalty for a first hooning offence is a 48-hour impoundment, while the vehicle can be impounded for up to 3 months for a second offence, or permanently for a third or subsequent offence within three years. Thus, it remains unclear whether similar effects will be seen for hooning offenders in Australia, as no evaluations of vehicle impoundment programs for hooning have been published. To address these research needs, this program of research consisted of three complementary studies designed to: (1) investigate the road safety implications of hooning behaviours in terms of the risks associated with the specific behaviours, and the drivers who engage in them; and (2) assess the effectiveness of current approaches to dealing with the problem; in order to (3) inform policy and practice in the area of hooning behaviour. Study 1 involved qualitative (N = 22) and quantitative (N = 290) research with drivers who admitted engaging in hooning behaviours on Queensland roads. Study 2 involved a systematic profile of a large sample of drivers (N = 834) detected and punished for a hooning offence in Queensland, and a comparison of their driving and crash histories with those of a randomly sampled group of Queensland drivers with the same gender and age distribution. Study 3 examined the post-impoundment driving behaviour of hooning offenders (N = 610) to examine the effects of vehicle impoundment on driving behaviour. The theoretical framework used to guide the research incorporated expanded deterrence theory, social learning theory, and driver thrill-seeking perspectives. This framework was used to explore factors contributing to hooning behaviours, and to interpret the results of the aspects of the research designed to explore the effectiveness of vehicle impoundment as a countermeasure for hooning. Variables from each of the perspectives were related to hooning measures, highlighting the complexity of the behaviour. This research found that the road safety risk of hooning behaviours appears low, as only a small proportion of the hooning offences in Study 2 resulted in a crash.
However, Study 1 found that hooning-related crashes are less likely to be reported than general crashes, particularly when they do not involve an injury, and that higher frequencies of hooning behaviours are associated with hooning-related crash involvement. Further, approximately one fifth of drivers in Study 1 reported being involved in a hooning-related crash in the previous three years, which is comparable to general crash involvement among the general population of drivers in Queensland. Given that hooning-related crashes represented only a subset of crash involvement for this sample, this suggests that there are risks associated with hooning behaviour that are not apparent in official data sources. Further, the main evidence of risk associated with the behaviour appears to relate to the hooning driver, as Study 2 found that these drivers are likely to engage in other risky driving behaviours (particularly speeding and driving vehicles with defects or illegal modifications), and have significantly more traffic infringements, licence sanctions and crashes than drivers of a similar (i.e., young) age. Self-report data from the Study 1 samples indicated that Queensland's vehicle impoundment and forfeiture laws are perceived as severe, and that many drivers have reduced their hooning behaviour to avoid detection. However, it appears to be more common for drivers to have simply changed the location of their hooning behaviour to avoid detection. When the post-impoundment driving behaviour of the sample of hooning offenders was compared with their pre-impoundment behaviour in Study 3, a small but significant reduction was found in hooning offences, and in other traffic infringements generally. As Study 3 was observational, it was not possible to control for extraneous variables, and it is therefore possible that some of this reduction was due to other factors, such as a reduction in driving exposure, the effects of changes to Queensland's Graduated Driver Licensing scheme that were implemented during the study period and affected many drivers in the offender sample due to their age, or the extension of vehicle impoundment to other types of offences in Queensland during the post-impoundment period. However, a protective effect was observed, in that hooning offenders did not show the increase in traffic infringements in the post period that occurred within the comparison sample. This suggests that there may be some effect of vehicle impoundment on the driving behaviour of hooning offenders, and that this effect is not limited to their hooning behaviour. To be more confident in these results, it is necessary to measure driving exposure during the post periods to control for issues such as offenders being denied access to vehicles. While it was not the primary aim of this program of research to compare the utility of different theoretical perspectives, the findings have a number of theoretical implications. For example, only some of the deterrence variables were related to hooning behaviours, and sometimes in the opposite direction to predictions. Further, social learning theory variables had stronger associations with hooning. These results suggest that a purely legal approach to understanding hooning behaviours, and to designing and implementing countermeasures to reduce them, is unlikely to be successful.
This research also had implications for policy and practice, and a number of recommendations were made throughout the thesis to improve the quality of relevant data collection practices. Some of these changes have already occurred since vehicle impoundment was extended to other offences in Queensland. It was also recommended that the operational and resource costs of these laws be compared with their road safety benefits in ongoing evaluations of effectiveness, to ensure that finite traffic policing resources are allocated in a way that produces maximum road safety benefit. However, as the evidence of risk associated with the hooning driver is more compelling than that associated with hooning behaviour itself, it was argued that the hooning driver may represent the better target for intervention. Suggestions for future research include ongoing evaluations of the effectiveness of vehicle impoundment programs for hooning and other high-risk driving behaviours, and the exploration of additional potential targets for intervention to reduce hooning behaviour. As the body of knowledge regarding the factors contributing to hooning grows, along with the identification of potential barriers to the effectiveness of current countermeasures, recommendations for changes in policy and practice for hooning behaviours can be made.
Abstract:
Background: In the last decade, there has been increasing interest in the health effects of sedentary behavior, which is often assessed using self-report sitting-time questions. The aim of this qualitative study was to document older adults’ understanding of sitting-time questions from the International Physical Activity (PA) Questionnaire (IPAQ) and the PA Scale for the Elderly (PASE). Methods: Australian community-dwelling adults aged 65+ years answered the IPAQ and PASE sitting questions in face-to-face semi-structured interviews. IPAQ uses one open-ended question to assess sitting on a weekday in the last 7 days 'at work, at home, while doing coursework and during leisure time'; PASE uses a three-part closed question about daily leisure-time sitting in the last 7 days. Participants expressed their thoughts out loud while answering each question. They were then probed about their responses. Interviews were recorded, transcribed and coded into themes. Results: Mean age of the 28 male and 27 female participants was 73 years (range 65-89). The most frequently reported activity was watching TV. For both questionnaires, many participants had difficulties understanding what activities to report. Some had difficulty understanding what activities should be classified as ‘leisure-time sitting’. Some assumed they were being asked to only report activities provided as examples. Most reported activities they normally do, rather than those performed on a day in the previous week. Participants used a variety of strategies to select ‘a day’ for which they reported their sitting activities and to calculate sitting time on that day. Therefore, many different ways of estimating sitting time were used. Participants had particular difficulty reporting their daily sitting-time when their schedules were not consistent across days. Some participants declared the IPAQ sitting question too difficult to answer. Conclusion: The accuracy of older adults’ self-reported sitting time is questionable given the challenges they have in answering sitting-time questions. Their responses to sitting-time questions may be more accurate if our recommendations for clarifying the sitting domains, providing examples relevant to older adults and suggesting strategies for formulating responses are incorporated. Future quantitative studies should include objective criterion measures to assess validity and reliability of these questions.
Abstract:
In November 2010, the world watched as 33 Chilean miners were rescued from the depths of the earth where they had been stranded since July. We were able to watch because the world’s news media were there, drawn by the human drama, the suspense, the spectacle. It was a great news story, ideally suited for the 24-hour news culture we live in today, and the globalised audience that consumes it. Nothing much happened for 68 of those 69 days, until that final 24 hours when the miners emerged. But we were transfixed, engrossed, immersed in the story.
Abstract:
‘Everybody is science conscious these days’ – so started the inaugural week of Frontiers of Science, a self-described ‘intelligently presented and attractively drawn’ science-based comic strip published in the Sydney Morning Herald from 1961 to 1982 and ultimately syndicated to daily newspapers around the world. An archive of the first 200 Frontiers of Science comic strips (1961–65) has been made freely available online through an initiative of the University of Sydney Library. While the 1960s public interest in evolution, space exploration, and the Cold War has given way to twenty-first-century concerns about global warming, genetic engineering, and alternative energy sources, it is fair to say that everybody is still science conscious. Frontiers of Science provides an interesting and nostalgic insight into 1960s popular science through an unusual mode of dissemination.
Abstract:
Understanding the relationship between diet, physical activity and health in humans requires accurate measurement of body composition and daily energy expenditure. Stable isotopes provide a means of measuring total body water (TBW) and daily energy expenditure under free-living conditions. While the use of isotope ratio mass spectrometry (IRMS) for the analysis of 2H (deuterium) and 18O (oxygen-18) is well established in the field of human energy metabolism research, numerous questions remain regarding the factors that influence analytical and measurement error using this methodology. This thesis comprised four studies with the following emphases. The aim of Study 1 was to determine the analytical and measurement error of the IRMS with regard to sample handling under certain conditions. Study 2 involved the comparison of TEE (total daily energy expenditure) values derived using two commonly employed equations. Further, saliva and urine samples, collected at different times, were used to determine whether clinically significant differences would occur. Study 3 was undertaken to determine the appropriate collection times for TBW estimates and derived body composition values. Finally, Study 4 was a single case study to investigate whether TEE measures are affected when the human condition changes due to altered exercise and water intake. The aim of Study 1 was to validate laboratory approaches to measuring isotopic enrichment to ensure accurate (to international standards), precise (reproducibility of three replicate samples) and linear (isotope ratio constant over the expected concentration range) results. This established the machine variability for the IRMS equipment in use at Queensland University for both TBW and TEE. Either 0.4 mL or 0.5 mL sample volumes for both oxygen-18 and deuterium were statistically acceptable (p>0.05), with a within-analysis variance of 5.8 delta VSMOW units for deuterium and 0.41 delta VSMOW units for oxygen-18. This variance was used as “within-analytical noise” to determine sample deviations. It was also found that there was no influence of equilibration time on oxygen-18 or deuterium values when comparing the minimum (oxygen-18: 24 hr; deuterium: 3 days) and maximum (both isotopes: 14 days) equilibration times. With regard to preparation using the vacuum line, any order of preparation is suitable, as the TEE values fall within 8% of each other regardless of preparation order. An 8% variation is acceptable for TEE values due to biological and technical errors (Schoeller, 1988). However, for the automated line, deuterium must be assessed first, followed by oxygen-18, as the automated line does not evacuate tubes but merely refills them with an injection of gas for a predetermined time. Any fractionation (which may occur for both isotopes) would cause a slight elevation in the values and hence a lower TEE. The purpose of the second and third studies was to investigate the use of IRMS to measure TEE and TBW and to validate the current IRMS practices with regard to sample collection times for urine and saliva, the use of two TEE equations from different research centers, and the body composition values derived from these TEE and TBW values. Following the collection of a fasting baseline urine and saliva sample, 10 people (8 women, 2 men) were dosed with a doubly labeled water dose comprising 1.25 g 10% oxygen-18 and 0.1 g 100% deuterium per kg body weight.
The samples were collected hourly for 12 hrs on the first day, and then morning, midday, and evening samples were collected for the next 14 days. The samples were analyzed using an isotope ratio mass spectrometer. For TBW, time to equilibration was determined using three commonly employed data analysis approaches. Isotopic equilibration was reached in 90% of the sample by hour 6, and in 100% of the sample by hour 7. With regard to the TBW estimations, the optimal time for urine collection was found to be between hours 4 and 10, where there was no significant difference between values. In contrast, statistically significant differences in TBW estimations were found between hours 1-3 and hours 11-12 when compared with hours 4-10. Most of the individuals in this study were in equilibrium after 7 hours. The TEE equations of Prof Dale Schoeller (Chicago, USA, IAEA) and Prof K. Westerterp were compared with that of Prof Andrew Coward (Dunn Nutrition Centre). When comparing values derived from samples collected in the morning and evening, there was no effect of time or equation on resulting TEE values. The fourth study was a pilot study (n=1) to test the variability in TEE resulting from manipulations in fluid consumption and level of physical activity, of the magnitude of change that may be expected in a sedentary adult. Physical activity levels were manipulated by increasing the number of steps per day to mimic the increases that may result when a sedentary individual commences an activity program. The study comprised three sub-studies completed on the same individual over a period of 8 months. There were no significant changes in TBW across all studies, even though the elimination rates changed with the supplemented water intake and additional physical activity. The extra activity may not have been sufficiently strenuous, nor the water intake high enough, to cause a significant change in TBW and hence in the CO2 production and TEE values. The TEE values measured showed good agreement with estimated values calculated from an RMR of 1455 kcal/day, a DIT of 10% of TEE, and activity based on measured steps. The covariance values tracked when plotting the residuals were found to be representative of “well-behaved” data and are indicative of the analytical accuracy. The ratio and product plots were found to reflect water turnover and CO2 production and thus could, with further investigation, be employed to identify changes in physical activity.
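As a minimal sketch of the expected-TEE estimate mentioned above, assume TEE = RMR + activity energy expenditure (AEE) + DIT, with DIT fixed at 10% of TEE, so that TEE = (RMR + AEE)/0.9. The RMR of 1455 kcal/day comes from the thesis; the kcal-per-step factor below is a hypothetical placeholder, as the thesis does not report one.

```python
# Hedged sketch of an expected-TEE calculation under the stated assumptions.
RMR_KCAL = 1455.0       # resting metabolic rate, from the abstract (kcal/day)
KCAL_PER_STEP = 0.045   # assumed energy cost per step, for illustration only

def expected_tee(steps_per_day: float, rmr: float = RMR_KCAL) -> float:
    """Solve TEE = RMR + AEE + 0.1*TEE for TEE, i.e. TEE = (RMR + AEE) / 0.9."""
    aee = steps_per_day * KCAL_PER_STEP  # activity energy expenditure
    return (rmr + aee) / 0.9

print(f"Sedentary (5,000 steps/day):  {expected_tee(5_000):.0f} kcal/day")
print(f"Active   (12,000 steps/day):  {expected_tee(12_000):.0f} kcal/day")
```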
Abstract:
Older adults, especially those acutely ill, are vulnerable to developing malnutrition due to a range of risk factors. The high prevalence and extensive consequences of malnutrition in hospitalised older adults have been reported extensively. However, there are few well-designed longitudinal studies that report the independent relationship between malnutrition and clinical outcomes after adjustment for a wide range of covariates. Acutely ill older adults are exceptionally prone to nutritional decline during hospitalisation, but few reports have studied this change and its impact on clinical outcomes. In the rapidly ageing Singapore population, such evidence is lacking, and the characteristics associated with the risk of malnutrition are also not well documented. Despite the evidence on malnutrition prevalence, it is often under-recognised and under-treated. It is therefore crucial that validated nutrition screening and assessment tools are used for early identification of malnutrition. Although many nutrition screening and assessment tools are available, there is no universally accepted method for defining malnutrition risk and nutritional status. Most existing tools have been validated amongst Caucasians using various approaches, but they are rarely reported in Asian elderly populations and none has been validated in Singapore. Due to the ethnic, cultural, and language differences among Singapore's older adults, results from non-Asian validation studies may not be applicable. It is therefore important to identify validated, population- and setting-specific nutrition screening and assessment methods to accurately detect and diagnose malnutrition in Singapore. The aims of this study are therefore to: i) characterise hospitalised elderly in a Singapore acute hospital; ii) describe the extent and impact of admission malnutrition; iii) identify and evaluate suitable methods for nutrition screening and assessment; and iv) examine changes in nutritional status during admission and their impact on clinical outcomes. A total of 281 participants, with a mean (±SD) age of 81.3 (±7.6) years, were recruited from three geriatric wards in Tan Tock Seng Hospital over a period of eight months. They were predominantly Chinese (83%) and community-dwellers (97%). They were screened within 72 hours of admission by a single dietetic technician using four nutrition screening tools [Tan Tock Seng Hospital Nutrition Screening Tool (TTSH NST), Nutritional Risk Screening 2002 (NRS 2002), Mini Nutritional Assessment-Short Form (MNA-SF), and Short Nutritional Assessment Questionnaire (SNAQ©)] administered in no particular order. The total scores were not computed during the screening process, so the dietetic technician was blinded to the results of all the tools. Nutritional status was assessed by a single dietitian, blinded to the screening results, using four malnutrition assessment methods [Subjective Global Assessment (SGA), Mini Nutritional Assessment (MNA), body mass index (BMI), and corrected arm muscle area (CAMA)]. The SGA rating was completed prior to computation of the total MNA score to minimise bias. Participants were reassessed for weight, arm anthropometry (mid-arm circumference, triceps skinfold thickness), and SGA rating at discharge from the ward.
The nutritional assessment tools and indices were validated against clinical outcomes (length of stay (LOS) >11 days, discharge to higher-level care, 3-month readmission, 6-month mortality, and 6-month Modified Barthel Index) using multivariate logistic regression. The covariates included age, gender, race, dementia (defined using DSM-IV criteria), depression (defined using a single question, “Do you often feel sad or depressed?”), severity of illness (defined using a modified version of the Severity of Illness Index), comorbidities (defined using the Charlson Comorbidity Index), number of prescribed drugs, and admission functional status (measured using the Modified Barthel Index; MBI). The nutrition screening tools were validated against the SGA, which was found to be the most appropriate nutritional assessment tool in this study (see Section 5.6). Prevalence of malnutrition on admission was 35% (defined by SGA), and it was significantly associated with characteristics such as swallowing impairment (malnourished vs well-nourished: 20% vs 5%), poor appetite (77% vs 24%), dementia (44% vs 28%), depression (34% vs 22%), and poor functional status (MBI 48.3±29.8 vs 65.1±25.4). The SGA had the highest completion rate (100%) and was predictive of the highest number of clinical outcomes: LOS >11 days (OR 2.11, 95% CI [1.17-3.83]), 3-month readmission (OR 1.90, 95% CI [1.05-3.42]) and 6-month mortality (OR 3.04, 95% CI [1.28-7.18]), independent of a comprehensive range of covariates including functional status, disease severity and cognitive function. SGA is therefore the most appropriate nutritional assessment tool for defining malnutrition. The TTSH NST was identified as the most suitable nutrition screening tool, with the best diagnostic performance against the SGA (AUC 0.865, sensitivity 84%, specificity 79%). Overall, 44% of participants experienced weight loss during hospitalisation, and 27% had weight loss >1% per week over a median LOS of 9 days (range 2-50). Well-nourished (45%) and malnourished (43%) participants were equally prone to decline in nutritional status (defined by weight loss >1% per week). Those with reduced nutritional status were more likely to be discharged to higher-level care (adjusted OR 2.46, 95% CI [1.27-4.70]). This study is the first to characterise malnourished hospitalised older adults in Singapore. It is also one of the very few studies to (a) evaluate the association of admission malnutrition with clinical outcomes in a multivariate model; (b) determine the change in nutritional status during admission; and (c) evaluate the validity of nutrition screening and assessment tools amongst hospitalised older adults in an Asian population. The results clearly highlight that admission malnutrition and deterioration in nutritional status are prevalent and are associated with adverse clinical outcomes in hospitalised older adults. With older adults being vulnerable to the risks and consequences of malnutrition, it is important that they are systematically screened so that timely and appropriate intervention can be provided. The findings highlighted in this thesis provide an evidence base for, and confirm the validity of, the current nutrition screening and assessment tools used among hospitalised older adults in Singapore.
As the older adults may have developed malnutrition prior to hospital admission, or experienced clinically significant weight loss of >1% per week of hospitalisation, screening of the elderly should be initiated in the community and continuous nutritional monitoring should extend beyond hospitalisation.
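A hedged illustration of the diagnostic-performance calculation used above to validate the screening tools against the SGA reference standard: sensitivity and specificity from a 2x2 cross-classification. The data below are synthetic; the study's reported figures (e.g., AUC 0.865, sensitivity 84%, specificity 79% for the TTSH NST) come from the thesis itself, not from this code.

```python
# Minimal sketch: sensitivity/specificity of a screening result vs. SGA reference.
def diagnostic_performance(screen_positive, sga_malnourished):
    """Return (sensitivity, specificity) from paired boolean lists."""
    tp = sum(s and m for s, m in zip(screen_positive, sga_malnourished))
    fn = sum((not s) and m for s, m in zip(screen_positive, sga_malnourished))
    tn = sum((not s) and (not m) for s, m in zip(screen_positive, sga_malnourished))
    fp = sum(s and (not m) for s, m in zip(screen_positive, sga_malnourished))
    return tp / (tp + fn), tn / (tn + fp)

# Synthetic example: 10 patients screened, SGA as the reference standard.
screen = [True, True, False, True, False, False, True, False, True, False]
sga    = [True, True, True,  True, False, False, False, False, True, False]
sens, spec = diagnostic_performance(screen, sga)
print(f"sensitivity {sens:.0%}, specificity {spec:.0%}")
```

An AUC such as the reported 0.865 would additionally require the tool's continuous scores (e.g., via `sklearn.metrics.roc_auc_score`), which the abstract does not provide.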
Abstract:
Background: There has been increasing interest in assessing the impacts of temperature on mortality. However, few studies have used a case–crossover design to examine non-linear and distributed lag effects of temperature on mortality. Additionally, little evidence is available on the temperature–mortality relationship in China, or on which temperature measure is the best predictor of mortality. Objectives: To use a distributed lag non-linear model (DLNM) as part of a case–crossover design; to examine the non-linear and distributed lag effects of temperature on mortality in Tianjin, China; and to explore which temperature measure is the best predictor of mortality. Methods: The DLNM was applied within a case–crossover design to assess the non-linear and delayed effects of temperatures (maximum, mean and minimum) on deaths (non-accidental, cardiopulmonary, cardiovascular and respiratory). Results: A U-shaped relationship was consistently found between temperature and mortality. Cold effects (significantly increased mortality associated with low temperatures) were delayed by 3 days and persisted for 10 days. Hot effects (significantly increased mortality associated with high temperatures) were acute, lasted for three days, and were followed by mortality displacement for non-accidental, cardiopulmonary, and cardiovascular deaths. Mean temperature was a better predictor of mortality (based on model fit) than maximum or minimum temperature. Conclusions: In Tianjin, extreme cold and hot temperatures increased the risk of mortality. Results suggest that the effects of cold last longer than the effects of heat. It is possible to combine the case–crossover design with DLNMs, allowing the case–crossover design to flexibly estimate the non-linear and delayed effects of temperature (or air pollution) whilst controlling for season.
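A minimal sketch of the modelling idea, not the authors' implementation (a full DLNM is typically fitted with the R dlnm package). The time-stratified case-crossover design is approximated here by its conditional Poisson analogue: daily death counts regressed on simple linear lag terms for mean temperature, with year-month strata absorbing seasonal and long-term trends. All data below are synthetic; a real DLNM would replace the lag columns with a spline cross-basis to capture the non-linear (U-shaped) exposure-response.

```python
# Hedged sketch: time-stratified "case-crossover" via conditional Poisson,
# with distributed (linear) lag terms for daily mean temperature.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
days = pd.date_range("2005-01-01", "2007-12-31", freq="D")
temp = 13 + 12 * np.sin(2 * np.pi * (days.dayofyear - 100) / 365) + rng.normal(0, 2, len(days))
deaths = rng.poisson(20 + 0.15 * np.maximum(temp - 28, 0) + 0.08 * np.maximum(5 - temp, 0))
df = pd.DataFrame({"date": days, "mean_temp": temp, "deaths": deaths})

MAX_LAG = 10  # cold effects persisted ~10 days in the study
for lag in range(MAX_LAG + 1):
    df[f"lag{lag}"] = df["mean_temp"].shift(lag)
df = df.dropna().assign(stratum=lambda d: d["date"].dt.strftime("%Y-%m"))

formula = "deaths ~ " + " + ".join(f"lag{l}" for l in range(MAX_LAG + 1)) + " + C(stratum)"
fit = smf.glm(formula, data=df, family=sm.families.Poisson()).fit()
print(fit.params.filter(like="lag"))  # per-degree log-rate change at each lag
```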
Abstract:
Background: The aim of this study was to determine the documentation of pharmacotherapy optimization goals in the discharge letters of patients with a principal diagnosis of chronic heart failure. Methods: A retrospective practice audit of 212 patients discharged to the care of their local general practitioner from the general medical units of a large tertiary hospital. Details of recommendations regarding ongoing pharmacological and non-pharmacological management were reviewed. The doses of medications on discharge were noted, along with whether they met current guidelines recommending titration of angiotensin-converting enzyme inhibitors and beta-blockers. Ongoing arrangements for specialist follow-up were also reviewed. Results: The mean age of patients whose letters were reviewed was 78.4 years (standard deviation ± 8.6); 50% were men. Patients had a median of six comorbidities and eight regular medications on discharge. Mean length of stay for each admission was 6 days. Discharge letters were posted a median of 4 days after discharge, with 25% not posted at 10 days. No discharge letter was sent in 9.4% (20) of cases. Only six letters (2.8%) contained any recommendations regarding future titration of angiotensin-converting enzyme inhibitors, and 14 (6.6%) regarding beta-blockers. Recommendations for future non-pharmacological management, for example diuretic action plans, regular weight monitoring and exercise plans, were not found in the letters in this audit. Conclusion: Hospital discharge is an opportunity to communicate management plans for treatment optimization effectively, and while this opportunity is spurned, implementation gaps in the management of cardiac failure will probably remain.
Abstract:
Background: Iron deficiency, anemia and hookworm disease are important public health problems for women of reproductive age (WRA) living in developing countries, and they affect the health of newborns and infants. Iron supplementation and deworming treatment are effective in addressing these problems in both pregnant and non-pregnant women. Daily iron supplementation and deworming after the first trimester are recommended for pregnant women, although these programs usually do not operate efficiently or effectively. Weekly iron-folic acid supplementation and regular deworming for non-pregnant women may be a viable approach for improving iron status and preventing anemia during the reproductive years. Addressing these diseases at a population level before women become pregnant could significantly improve women's health before and during pregnancy, as well as their infants' growth and development. Methods and Results: This paper describes the major processes undertaken in a demonstration intervention of preventive weekly iron-folic acid supplementation with regular deworming for all 52,000 women aged 15–45 years in two districts of Yen Bai province, in northern Viet Nam. The intervention strategy included extensive consultation with community leaders and village, commune, district and provincial health staff, and training for village health workers. Distribution of the drugs was integrated with the existing health service infrastructure, and the village health workers were the direct point of contact with women. Iron-folic acid tablets and deworming treatment were provided free of charge from May 2006. An independent Vietnamese NGO was commissioned to evaluate compliance and identify potential problems. The program resulted in effective distribution of iron-folic acid tablets and deworming treatment to all villages in the target districts, with full or partial compliance of 85%. Conclusion: Training for health staff, the strong commitment of all partners and the use of appropriate educational materials led to broad support for weekly iron-folic acid supplementation and high participation in the regular deworming days. In March 2008 the program was expanded to all districts in the province, a target population of approximately 250,000 WRA, and management was handed over to provincial authorities.
Abstract:
Background: Medication-related problems often occur in the immediate post-discharge period. To reduce medication misadventure, the Commonwealth Government funds home medicines reviews (HMRs). HMRs are initiated when general practitioners refer consenting patients to their community pharmacists, who then engage accredited pharmacists to review patients' medicines in their homes. Aim: To determine whether hospital-initiated medication reviews (HIMRs) can be implemented in a more timely manner than HMRs, and to assess the impact of a bespoke referral form with comorbidity-specific questions on the quality of reports. Method: Eligible medical inpatients at risk of medication misadventure were referred by the hospital liaison pharmacist to participating accredited pharmacists post-discharge from hospital. Social, demographic and laboratory data were collected from medical records and during interviews with consenting patients. Issues raised in the HIMR reports were categorised as intervention/action, information given, or recommendation, and assigned a rank of clinical significance. Results: HIMRs were conducted within 11.6 ± 6.6 days post-discharge. Thirty-six HIMR reports were evaluated and 1442 issues identified: information given (n = 1204), recommendations made (n = 88) and actions taken (n = 150). The majority of issues raised (89%) had a minor clinical impact. The bespoke referral form prompted approximately half of the issues raised. Conclusion: HIMRs can be facilitated in a more timely manner than post-discharge HMRs. There was an associated positive clinical impact of the issues raised in the HIMR reports.
Abstract:
Chlamydia pneumoniae causes a range of respiratory infections including bronchitis, pharyngitis and pneumonia. Infection has also been implicated in the exacerbation/initiation of asthma and chronic obstructive pulmonary disease (COPD) and may play a role in atherosclerosis and Alzheimer's disease. We have used a mouse model of Chlamydia respiratory infection to determine the effectiveness of intranasal (IN) and transcutaneous immunization (TCI) in preventing Chlamydia lung infection. Female BALB/c mice were immunized with chlamydial major outer membrane protein (MOMP) mixed with cholera toxin and CpG oligodeoxynucleotide adjuvants by either the IN or TCI routes. Serum and bronchoalveolar lavage (BAL) were collected for antibody analysis. Mononuclear cells from lung-draining lymph nodes were stimulated in vitro with MOMP, and cytokine mRNA production was determined by real-time PCR. Animals were challenged with live Chlamydia and weighed daily following challenge. At day 10 (the peak of infection) animals were sacrificed and the numbers of recoverable Chlamydia in the lungs determined by real-time PCR. MOMP-specific antibody-secreting cells in lung tissues were also determined at day 10 post-infection. Both IN and TCI protected animals against weight loss compared with non-immunized controls: both immunized groups were gaining weight by day 10 post-challenge, while controls had lost 6% of body weight. Both immunization protocols induced MOMP-specific IgG in serum and BAL, while only IN immunization induced MOMP-specific IgA in BAL. Both immunization routes resulted in high numbers of MOMP-specific antibody-secreting cells in lung tissues (IN > TCI). Following in vitro re-stimulation of lung-draining lymph node cells with MOMP, IFNγ mRNA increased 20-fold in cells from IN-immunized animals (compared with non-immunized controls), while IFNγ levels increased 6- to 7-fold in TCI animals. Ten days post-challenge, non-immunized animals had >7000 IFU in their lungs, IN-immunized animals <50 IFU and TCI-immunized animals <1500 IFU. Thus, both intranasal and transcutaneous immunization protected mice against respiratory challenge with Chlamydia. The best protection was obtained following IN immunization and correlated with IFNγ production by mononuclear cells in lung-draining lymph nodes and MOMP-specific IgA in BAL.
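The cytokine mRNA fold increases reported above were determined by real-time PCR; a common way of computing such values is the 2^-ΔΔCt method (Livak and Schmittgen), sketched below. The Ct values are invented for illustration, since the abstract does not report raw Ct data, and this is not necessarily the authors' exact analysis.

```python
# Hedged sketch of the standard 2^-ddCt relative-expression calculation.
def fold_change(ct_target_treated, ct_ref_treated, ct_target_control, ct_ref_control):
    """Relative expression by the 2^-ddCt method, normalised to a reference gene."""
    d_ct_treated = ct_target_treated - ct_ref_treated   # e.g. IFN-gamma vs housekeeping
    d_ct_control = ct_target_control - ct_ref_control
    dd_ct = d_ct_treated - d_ct_control
    return 2.0 ** (-dd_ct)

# Invented Ct values: immunized lymph node cells vs non-immunized controls.
# Yields ~20-fold, the order of magnitude reported for IN-immunized animals.
print(f"fold increase: {fold_change(22.0, 18.0, 26.3, 18.0):.1f}x")
```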
Abstract:
OBJECTIVE: This paper reviews the epidemiological evidence on the relationship between ambient temperature and morbidity, assesses the methodological issues in previous studies, and proposes future research directions. DATA SOURCES AND DATA EXTRACTION: We searched the PubMed database for epidemiological studies on ambient temperature and morbidity of non-communicable diseases published in refereed English-language journals prior to June 2010. Forty relevant studies were identified. Of these, 24 examined the relationship between ambient temperature and morbidity, 15 investigated the short-term effects of heatwaves on morbidity, and 1 assessed both temperature and heatwave effects. DATA SYNTHESIS: Descriptive and time-series studies were the two main research designs used to investigate the temperature–morbidity relationship. Measurements of temperature exposure and health outcomes used in these studies differed widely. The majority of studies reported a significant relationship between ambient temperature and total or cause-specific morbidities. However, there were some inconsistencies in the direction and magnitude of non-linear lag effects. The lag effect of hot temperature on morbidity was shorter (several days) than that of cold temperature (up to a few weeks). The temperature–morbidity relationship may be confounded and/or modified by socio-demographic factors and air pollution. CONCLUSIONS: There is a significant short-term effect of ambient temperature on total and cause-specific morbidities. However, further research is needed to determine an appropriate temperature measure, to consider a diverse range of morbidities, and to use consistent methodology so that different studies are more comparable.
Abstract:
Maternal and infant mortality is a global health issue with significant social and economic impact. Each year, over half a million women worldwide die due to complications related to pregnancy or childbirth, four million infants die in the first 28 days of life, and eight million infants die in the first year. Ninety-nine percent of maternal and infant deaths occur in developing countries. Reducing maternal and infant mortality is among the key international development goals. In China, the national maternal mortality ratio and infant mortality rate were reduced greatly in the past two decades, yet a large discrepancy remains between urban and rural areas. To address this problem, a large-scale Safe Motherhood Programme was initiated in 2000 and implemented in Guangxi in 2003. The programme included both demand-side and supply-side interventions focused on increasing health service use and improving birth outcomes. Although the Safe Motherhood Programme in Guangxi has been implemented for seven years, little is known about its effects and economic outcomes. The aim of this research is to estimate the effectiveness and cost-effectiveness of the interventions in the Safe Motherhood Programme in Guangxi, China. The objectives of this research are: 1. To evaluate whether the changes in health service use and birth outcomes are associated with the interventions in the Safe Motherhood Programme. 2. To estimate the cost-effectiveness of the interventions in the Safe Motherhood Programme and quantify the uncertainty surrounding the decision. 3. To assess the expected value of perfect information associated with both the whole decision and individual parameters, and to interpret the findings to inform priority setting in further research and policy making in this area. A quasi-experimental study design was used to assess the effectiveness of the programme in increasing health service use and improving birth outcomes. The study subjects were 51 intervention counties and 30 control counties. Data on health service use, birth outcomes and socio-economic factors from 2001 to 2007 were collected from the programme database and statistical yearbooks. Based on profile plots of the data, general linear mixed models were used to evaluate the effectiveness of the programme while controlling for the effects of baseline levels of the response variables, changes in socio-economic factors over time, and correlations among repeated measurements from the same county. Redundant multicollinear variables were removed from the mixed models using the results of multicollinearity diagnoses. For each response variable, the best covariance structure was selected from 15 alternatives according to fit statistics including the Akaike information criterion, the finite-population corrected Akaike information criterion, and Schwarz's Bayesian information criterion. Residual diagnostics were used to validate the model assumptions. Statistical inferences were made to show the effect of the programme on health service use and birth outcomes. A decision analytic model was developed to evaluate the cost-effectiveness of the programme, quantify the decision uncertainty, and estimate the expected value of perfect information associated with the decision. The model was used to describe the transitions between health states for women and infants and to reflect the changes in both costs and health benefits associated with implementing the programme.
Results from the mixed models and other relevant evidence were synthesised appropriately to inform the input parameters of the model. Incremental cost-effectiveness ratios of the programme were calculated for the two groups of intervention counties over time. Uncertainty surrounding the parameters was addressed using probabilistic sensitivity analysis, and uncertainty relating to model assumptions was handled using scenario analysis. Finally, the expected value of perfect information for both the whole model and individual parameters in the model was estimated to inform priority setting for further research in this area. The annual change rates of the antenatal care rate and the institutionalised delivery rate improved significantly in the intervention counties after the programme was implemented. Significant improvements were also found in the annual change rates of the maternal mortality ratio, the infant mortality rate, the incidence rate of neonatal tetanus and the mortality rate of neonatal tetanus in the intervention counties after the implementation of the programme. The annual change rate of the neonatal mortality rate also improved, although the improvement was only close to statistical significance. The influences of socio-economic factors on the health service use indicators and birth outcomes were identified. Rural income per capita had a significant positive impact on the health service use indicators, and a significant negative impact on the birth outcomes. The number of beds in healthcare institutions per 1,000 population and the number of rural telephone subscribers per 1,000 population were significantly positively related to the institutionalised delivery rate. The length of highway per square kilometre negatively influenced the maternal mortality ratio. The percentage of employed persons in the primary industry had a significant negative impact on the institutionalised delivery rate, and a significant positive impact on the infant mortality rate and neonatal mortality rate. The incremental costs of implementing the programme over existing practice were US $11.1 million from the societal perspective, and US $13.8 million from the perspective of the Ministry of Health. Overall, 28,711 life years were generated by the programme, producing an overall incremental cost-effectiveness ratio of US $386 per life year from the societal perspective, and US $480 per life year from the perspective of the Ministry of Health, both of which were below the threshold willingness-to-pay ratio of US $675. The expected net monetary benefit generated by the programme was US $8.3 million from the societal perspective, and US $5.5 million from the perspective of the Ministry of Health. The overall probability that the programme was cost-effective was 0.93 and 0.89 from the two perspectives, respectively. The incremental cost-effectiveness ratio of the programme was insensitive to the different estimates of the three parameters relating to the model assumptions. Further research could be conducted to reduce the uncertainty surrounding the decision, for which the upper limit of investment was US $0.6 million from the societal perspective, and US $1.3 million from the perspective of the Ministry of Health. It would also be worthwhile to obtain a more precise estimate of the improvement in the infant mortality rate.
The population expected value of perfect information associated with this parameter was US $0.99 million from the societal perspective, and US $1.14 million from the perspective of the Ministry of Health. The findings from this study show that the interventions in the Safe Motherhood Programme were both effective and cost-effective in increasing health service use and improving birth outcomes in rural areas of Guangxi, China. The programme therefore represents a good public health investment and should be adopted and expanded to an even broader area if possible. This research provides economic evidence to inform efficient decision making for improving maternal and infant health in developing countries.
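The headline ratios can be checked directly from the figures in the abstract: ICER = incremental cost / incremental life years, and net monetary benefit (NMB) = (willingness-to-pay threshold x incremental life years) - incremental cost. The sketch below reproduces the reported values to within rounding (the abstract's inputs are themselves rounded).

```python
# Worked check of the reported ICER and NMB; all inputs come from the abstract.
LIFE_YEARS = 28_711  # incremental life years generated by the programme
WTP = 675.0          # willingness-to-pay threshold (US$ per life year)

for perspective, incr_cost in [("societal", 11.1e6), ("Ministry of Health", 13.8e6)]:
    icer = incr_cost / LIFE_YEARS            # US$ per life year gained
    nmb = WTP * LIFE_YEARS - incr_cost       # US$, positive => cost-effective
    print(f"{perspective}: ICER ~ US ${icer:.0f}/life year, "
          f"NMB ~ US ${nmb / 1e6:.1f} million")
```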
Abstract:
We compared changes in markers of muscle damage and systemic inflammation after submaximal and maximal lengthening muscle contractions of the elbow flexors. Using a cross-over design, 10 healthy young men not involved in resistance training completed a submaximal trial (10 sets of 60 lengthening contractions at 10% maximum isometric strength, 1 min rest between sets), followed by a maximal trial (10 sets of three lengthening contractions at 100% maximum isometric strength, 3 min rest between sets). Lengthening contractions were performed on an isokinetic dynamometer. Opposite arms were used for the submaximal and maximal trials, and the trials were separated by a minimum of two weeks. Blood was sampled before, immediately after, 1 h, 3 h, and 1–4 days after each trial. Total leukocyte and neutrophil numbers, and the serum concentration of soluble tumor necrosis factor-alpha receptor 1, were elevated after both trials (P < 0.01), but there were no differences between the trials. Serum IL-6 concentration was elevated 3 h after the submaximal contractions (P < 0.01). The concentrations of serum tumor necrosis factor-alpha, IL-1 receptor antagonist, IL-10, granulocyte colony-stimulating factor and plasma C-reactive protein remained unchanged following both trials. Maximum isometric strength and range of motion decreased significantly (P < 0.001) after both trials, and were lower from 1–4 days after the maximal contractions compared with the submaximal contractions. Plasma myoglobin concentration and creatine kinase activity, muscle soreness and upper arm circumference all increased after both trials (P < 0.01), but were not significantly different between the trials. Therefore, there were no differences in markers of systemic inflammation, despite evidence of greater muscle damage, following maximal versus submaximal lengthening contractions of the elbow flexors.