945 results for Lucan, 39-65.


Relevance: 20.00%

Abstract:

Banana is a staple crop in many regions where vitamin A deficiency is prevalent, making it a target for provitamin A biofortification. However, matrix effects may limit provitamin A bioavailability from bananas. The retinol bioefficacies of unripe and ripe bananas (study 1A), unripe high-provitamin A bananas (study 1B), and raw and cooked bananas (study 2) were determined in retinol-depleted Mongolian gerbils (n = 97/study) using positive and negative controls. After feeding a retinol-deficient diet for 6 and 4 wk in studies 1 and 2, respectively, customized diets containing 60, 30, or 15% banana were fed for 17 and 13 d, respectively. In study 1A, the hepatic retinol of the 60% ripe Cavendish group (0.52 ± 0.13 μmol retinol/liver) differed from baseline (0.65 ± 0.15 μmol retinol/liver) and was higher than the negative control group (0.39 ± 0.16 μmol retinol/liver; P < 0.0065). In study 1B, no groups differed from baseline (0.65 ± 0.15 μmol retinol/liver; P = 0.20). In study 2, the 60% raw Butobe group (0.68 ± 0.17 μmol retinol/liver) differed from the 60% cooked Butobe group (0.87 ± 0.24 μmol retinol/liver); neither group differed from baseline (0.80 ± 0.27 μmol retinol/liver; P < 0.0001). Total liver retinol was higher in the groups fed cooked bananas than in those fed raw (P = 0.0027). Body weights did not differ even though gerbils ate more green, ripe, and raw bananas than cooked, suggesting a greater indigestible component. In conclusion, thermal processing, but not ripening, improves the retinol bioefficacy of bananas. Food matrix modification affects carotenoid bioavailability from provitamin A biofortification targets.

Relevance: 20.00%

Abstract:

The health impacts of exposure to ambient temperature have been drawing increasing attention from the environmental health research community, government, society, industries, and the public. Case-crossover and time series models are most commonly used to examine the effects of ambient temperature on mortality. However, some key methodological issues remain to be addressed. For example, few studies have used spatiotemporal models to assess the effects of spatial temperatures on mortality. Few studies have used a case-crossover design to examine the delayed (distributed lag) and non-linear relationship between temperature and mortality. Also, little evidence is available on the effects of temperature changes on mortality, and on differences in heat-related mortality over time. This thesis aimed to address the following research questions: 1. How can the case-crossover design be combined with distributed lag non-linear models? 2. Is there any significant difference in effect estimates between time series and spatiotemporal models? 3. How can the effects on mortality of temperature changes between neighbouring days be assessed? 4. Is there any change in temperature effects on mortality over time? To combine the case-crossover design and the distributed lag non-linear model, datasets of deaths, weather conditions (minimum, mean and maximum temperature, and relative humidity) and air pollution were acquired for Tianjin, China, for the years 2005 to 2007. I demonstrated how to combine the case-crossover design with a distributed lag non-linear model. This allows the case-crossover design to estimate the non-linear and delayed effects of temperature whilst controlling for seasonality. There was a consistent U-shaped relationship between temperature and mortality. Cold effects were delayed by 3 days and persisted for 10 days. Hot effects were acute and lasted for three days, and were followed by mortality displacement for non-accidental, cardiopulmonary, and cardiovascular deaths. Mean temperature was a better predictor of mortality (based on model fit) than maximum or minimum temperature. It is still unclear whether spatiotemporal models using spatial temperature exposure produce better estimates of mortality risk compared with time series models that use a single site's temperature or averaged temperature from a network of sites. Daily mortality data were obtained from 163 locations across Brisbane city, Australia from 2000 to 2004. Ordinary kriging was used to interpolate spatial temperatures across the city based on 19 monitoring sites. A spatiotemporal model was used to examine the impact of spatial temperature on mortality. A time series model was used to assess the effects on mortality of a single site's temperature and of temperature averaged across 3 monitoring sites. Squared Pearson scaled residuals were used to check the model fit. The results of this study show that even though spatiotemporal models gave a better model fit than time series models, spatiotemporal and time series models gave similar effect estimates. Time series analyses using temperature recorded at a single monitoring site, or the average temperature of multiple sites, were as good at estimating the association between temperature and mortality as a spatiotemporal model. A time series Poisson regression model was used to estimate the association between temperature change and mortality in summer in Brisbane, Australia during 1996–2004 and Los Angeles, United States during 1987–2000.
Temperature change was calculated as the current day's mean temperature minus the previous day's mean. In Brisbane, a drop of more than 3 °C in temperature between days was associated with relative risks (RRs) of 1.16 (95% confidence interval (CI): 1.02, 1.31) for non-external mortality (NEM), 1.19 (95% CI: 1.00, 1.41) for NEM in females, and 1.44 (95% CI: 1.10, 1.89) for NEM in those aged 65–74 years. An increase of more than 3 °C was associated with RRs of 1.35 (95% CI: 1.03, 1.77) for cardiovascular mortality and 1.67 (95% CI: 1.15, 2.43) for people aged < 65 years. In Los Angeles, only a drop of more than 3 °C was significantly associated with RRs of 1.13 (95% CI: 1.05, 1.22) for total NEM, 1.25 (95% CI: 1.13, 1.39) for cardiovascular mortality, and 1.25 (95% CI: 1.14, 1.39) for people aged ≥ 75 years. In both cities, there were joint effects of temperature change and mean temperature on NEM. A change in temperature of more than 3 °C, whether positive or negative, has an adverse impact on mortality even after controlling for mean temperature. I examined the variation in the effects of high temperatures on elderly mortality (age ≥ 75 years) by year, city and region for 83 large US cities between 1987 and 2000. High temperature days were defined as two or more consecutive days with temperatures above the 90th percentile for each city during each warm season (May 1 to September 30). The mortality risk for high temperatures was decomposed into: a "main effect" due to high temperatures using a distributed lag non-linear function, and an "added effect" due to consecutive high temperature days. I pooled yearly effects across regions and overall effects at both regional and national levels. The effects of high temperature (both main and added effects) on elderly mortality varied greatly by year, city and region. The years with higher heat-related mortality were often followed by those with relatively lower mortality. Understanding this variability in the effects of high temperatures is important for the development of heat-warning systems. In conclusion, this thesis makes contributions in several respects. The case-crossover design was combined with the distributed lag non-linear model to assess the effects of temperature on mortality in Tianjin. This allows the case-crossover design to flexibly estimate the non-linear and delayed effects of temperature. Both extreme cold and high temperatures increased the risk of mortality in Tianjin. Time series models using a single site's temperature, or temperature averaged across several sites, can be used to examine the effects of temperature on mortality. Temperature change between neighbouring days, whether a large drop or a large rise, increases the risk of mortality. The high temperature effect on mortality is highly variable from year to year.
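The temperature-change exposure described above lends itself to a compact illustration. The sketch below is written against a hypothetical daily dataset; the file name, column names and the much-reduced adjustment set are placeholders, not the thesis models, which additionally use distributed lag non-linear terms, seasonality, humidity and air pollution. It computes the day-to-day temperature change and fits a simple Poisson time series regression of daily deaths, reporting exponentiated coefficients as relative risks.

```python
# Minimal sketch: day-to-day temperature change and a Poisson time series
# model of daily deaths. Data file, column names and adjustment terms are
# hypothetical placeholders for the approach described in the abstract.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

df = pd.read_csv("daily_deaths_weather.csv", parse_dates=["date"])

# Temperature change = current day's mean temperature minus the previous day's.
df["temp_change"] = df["mean_temp"] - df["mean_temp"].shift(1)

# Indicators for a drop or a rise of more than 3 degrees C between days.
df["drop_gt3"] = (df["temp_change"] < -3).astype(int)
df["rise_gt3"] = (df["temp_change"] > 3).astype(int)
df["time"] = np.arange(len(df))  # crude long-term trend

model = smf.glm(
    "deaths ~ drop_gt3 + rise_gt3 + mean_temp + time",
    data=df.dropna(subset=["temp_change"]),
    family=sm.families.Poisson(),
).fit()

# Exponentiated coefficients are rate ratios, interpretable as relative risks.
print(np.exp(model.params))
print(np.exp(model.conf_int()))
```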

Relevance: 20.00%

Abstract:

As the world's population grows, so does the demand for agricultural products. However, natural nitrogen (N) fixation and phosphorus (P) availability cannot sustain the rising agricultural production; thus, the application of N and P fertilisers as additional nutrient sources is common. These anthropogenic activities can contribute high amounts of organic and inorganic nutrients to both surface waters and groundwater, resulting in degraded water quality and a possible reduction in aquatic life. In addition, runoff and sewage from urban and residential areas can contain high amounts of inorganic and organic nutrients which may also affect water quality. For example, blooms of the cyanobacterium Lyngbya majuscula along the coastline of southeast Queensland are an indicator of at least short-term decreases in water quality. Although Australian catchments, including those with intensive forms of land use, generally show a low export of nutrients compared to North American and European catchments, certain land use practices may still have a detrimental effect on the coastal environment. Numerous studies have reported on nutrient cycling and associated processes at a catchment scale in the Northern Hemisphere. Comparable studies in Australia, in particular in subtropical regions, are however limited and there is a paucity of data, in particular for inorganic and organic forms of nitrogen and phosphorus; these nutrients are the key limiting factors whose enrichment promotes algal blooms in surface waters. Therefore, monitoring N and P and understanding the sources and pathways of these nutrients within a catchment is important in coastal zone management. Although Australia is the driest continent, in subtropical regions such as southeast Queensland rainfall patterns have a significant effect on runoff and thus on the nutrient cycle at a catchment scale. These rainfall patterns are becoming increasingly variable. Monitoring these climatic conditions and the hydrological response of agricultural catchments is therefore also important to reduce the anthropogenic effects on surface and groundwater quality. This study consists of an integrated hydrological-hydrochemical approach that assesses N and P in an environment with multiple land uses. The main aim is to determine the nutrient cycle within a representative coastal catchment in southeast Queensland, the Elimbah Creek catchment. In particular, the investigation confirms the influence of forestry and agriculture on N and P forms, sources, distribution and fate in the surface and groundwaters of this subtropical setting. In addition, the study determines whether N and P are transported into the adjacent estuary and thus into the marine environment; also considered is the effect of local topography, soils and geology on N and P sources and distribution. The thesis is structured around four components, each reported individually. The first paper determines the controls of catchment settings and processes on N and P concentrations in stream water, riverbank sediment and shallow groundwater, in particular during the extended dry conditions encountered during the study. Temporal and spatial factors such as seasonal changes, soil character, land use and catchment morphology are considered, as well as their effect on the distribution of N and P in surface waters and associated groundwater.
A total of 30 surface and 13 shallow groundwater sampling sites were established throughout the catchment to represent dominant soil types and the land use upstream of each sampling location. Sampling comprised five rounds and was conducted over one year, between October 2008 and November 2009. Surface water and groundwater samples were analysed for all major dissolved inorganic forms of N and for total N. Phosphorus was determined in the form of dissolved reactive P (predominantly orthophosphate) and total P. In addition, extracts of stream bank sediments and soil grab samples were analysed for these N and P species. Findings show that major storm events, in particular after long periods of drought conditions, are the driving force of N cycling. This is expressed by higher inorganic N concentrations in the agricultural subcatchment compared to the forested subcatchment. Nitrate N is the dominant inorganic form of N in both the surface and groundwaters, and values are significantly higher in the groundwaters. Concentrations in the surface water range from 0.03 to 0.34 mg N L-1; organic N concentrations are considerably higher (average range: 0.33 to 0.85 mg N L-1), in particular in the forested subcatchment. Average NO3-N in the groundwater has a range of 0.39 to 2.08 mg N L-1, and organic N averages between 0.07 and 0.3 mg N L-1. The stream bank sediments are dominated by organic N (range: 0.53 to 0.65 mg N L-1), and the dominant inorganic form of N is NH4-N, with values ranging between 0.38 and 0.41 mg N L-1. Topography and soils, however, were not found to have a significant effect on N and P concentrations in the waters. Detectable phosphorus in the surface and groundwaters of the catchment is limited to several locations, typically in the proximity of areas with intensive animal use; in soil and sediments, P is negligible. In the second paper, the stable isotopes of N (14N/15N) and H2O (16O/18O and 2H/1H) in surface and groundwaters are used to identify sources of dissolved inorganic and organic N in these waters, and to determine their pathways within the catchment; specific emphasis is placed on the respective influences of forestry and agriculture. Forestry is predominantly concentrated in the northern subcatchment (Beerburrum Creek) while agriculture is mainly found in the southern subcatchment (Six Mile Creek). Results show that agriculture (horticulture, crops, grazing) is the main source of inorganic N in the surface waters of the agricultural subcatchment, and its isotopic signature shows a close link to evaporation processes that may occur during water storage in the farm dams used for irrigation. Groundwaters are subject to denitrification processes that may result in reduced dissolved inorganic N concentrations. Soil organic matter delivers most of the inorganic N to the surface water in the forested subcatchment. Here, precipitation, and subsequently runoff, is the main source of the surface waters. Groundwater in this area is affected by agricultural processes. The findings also show that the catchment can attenuate the effects of anthropogenic land use on surface water quality. Riparian strips of natural remnant vegetation, commonly 50 to 100 m in width, act as buffer zones along the drainage lines in the catchment and remove inorganic N from the soil water before it enters the creek.
These riparian buffer zones are common in most agricultural catchments of southeast Queensland and appear to reduce the impact of agriculture on stream water quality and subsequently on the estuary and marine environments. This reduction is expressed by a significant decrease in DIN concentrations from 1.6 mg N L-1 to 0.09 mg N L-1, and a decrease in the δ15N signatures, from upstream surface water locations downstream to the outlet of the agricultural subcatchment. Further testing is, however, necessary to confirm these processes. Most importantly, the amount of N that is transported to the adjacent estuary is shown to be negligible. The third and fourth components of the thesis use a hydrological catchment model approach to determine the water balance of the Elimbah Creek catchment. The model is then used to simulate the effects of land use on the water balance and nutrient loads of the study area. The tool used is the internationally widely applied Soil and Water Assessment Tool (SWAT). Knowledge about the water cycle of a catchment is imperative in nutrient studies, as processes such as rainfall, surface runoff, soil infiltration and routing of water through the drainage system are the driving forces of the catchment nutrient cycle. Long-term information about discharge volumes of the creeks and rivers does not, however, exist for a number of agricultural catchments in southeast Queensland, and such information is necessary to calibrate and validate numerical models. Therefore, a two-step modelling approach was used in which parameter values calibrated and validated for a nearby gauged reference catchment served as starting values for the ungauged Elimbah Creek catchment. Transposing the monthly calibrated and validated parameter values from the reference catchment to the ungauged catchment significantly improved model performance, showing that the hydrological model of the catchment of interest is a strong predictor of the water balance. The model efficiency coefficient EF shows that 94% of the simulated discharge matches the observed flow, whereas only 54% of the observed streamflow was simulated by the SWAT model prior to using the validated values from the reference catchment. In addition, the hydrological model confirmed that total surface runoff contributes the majority of flow to the surface water in the catchment (65%). Only a small proportion of the water in the creek is contributed by total baseflow (35%). This finding supports the results of the stable isotopes 16O/18O and 2H/1H, which show that the main source of water in the creeks is either local precipitation or irrigation water delivered by surface runoff; a contribution from the groundwater (baseflow) to the creeks could not be identified using 16O/18O and 2H/1H. In addition, the SWAT model calculated that around 68% of the rainfall occurring in the catchment is lost through evapotranspiration, reflecting the prevailing long-term drought conditions observed prior to and during the study. Stream discharge from the forested subcatchment was an order of magnitude lower than discharge from the agricultural Six Mile Creek subcatchment. A change in land use from forestry to agriculture did not significantly change the catchment water balance; however, nutrient loads increased considerably. Conversely, a simulated change from agriculture to forestry resulted in a significant decrease in nitrogen loads.
The findings of the thesis and the approach used are shown to be of value to catchment water quality monitoring on a wider scale, in particular regarding the implications of mixed land use for nutrient forms, distributions and concentrations. The study confirms that in the tropics and subtropics the water balance is affected by extended dry periods and seasonal rainfall with intense storm events. In particular, the comprehensive dataset of inorganic and organic N and P forms in the surface and groundwaters of this subtropical setting, acquired during the one-year sampling program, may be used in similar catchment hydrological studies where such detailed information is missing. Also, the study concludes that riparian buffer zones along the catchment drainage system attenuate the transport of nitrogen from agricultural sources in the surface water. Concentrations of N decreased from upstream to downstream locations and were negligible at the outlet of the catchment.
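The two-step SWAT calibration above is summarised by the model efficiency coefficient EF (94% after transposing the reference-catchment parameters, versus 54% before). Assuming EF here is the Nash-Sutcliffe efficiency conventionally reported for SWAT, a minimal sketch of that calculation follows; the discharge values are invented purely for illustration.

```python
import numpy as np

def nash_sutcliffe(observed, simulated):
    """Nash-Sutcliffe model efficiency: 1 minus the ratio of residual variance
    to the variance of the observations around their mean. A value of 1 is a
    perfect match; values <= 0 mean the model is no better than predicting
    the observed mean."""
    observed = np.asarray(observed, dtype=float)
    simulated = np.asarray(simulated, dtype=float)
    return 1.0 - np.sum((observed - simulated) ** 2) / np.sum(
        (observed - observed.mean()) ** 2
    )

# Hypothetical monthly discharge series (ML/month), for illustration only.
obs = np.array([12.0, 8.5, 3.2, 1.1, 0.9, 2.4, 15.6, 22.0, 9.8, 4.3, 2.1, 1.5])
sim = np.array([11.2, 9.0, 3.5, 1.4, 1.0, 2.1, 14.8, 20.5, 10.4, 4.0, 2.3, 1.8])
print(f"EF = {nash_sutcliffe(obs, sim):.2f}")
```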

Relevance: 20.00%

Abstract:

Objective: Several new types of contraception became available in Australia over the last twelve years (the implant in 2001, progestogen intra-uterine device (IUD) in 2003, and vaginal contraceptive ring in 2007). Most methods of contraception require access to health services. Permanent sterilisation and the insertion of an implant or IUD involve a surgical procedure. Access to health professionals providing these specialised services may be more difficult in rural areas. This paper examines uptake of permanent or long-acting reversible contraception (LARCs) among Australian women in rural areas compared to women in urban areas. Method: Participants in the Australian Longitudinal Study on Women's Health born in 1973-78 reported on their contraceptive use at three surveys: 2003, 2006 and 2009. Contraceptive methods included permanent sterilisation (tubal ligation, vasectomy), non-daily or LARC methods (implant, IUD, injection, vaginal ring), and other methods including daily, barrier or "natural" methods (oral contraceptive pills, condoms, withdrawal, safe period). Sociodemographic, reproductive history and health service use factors associated with using permanent, LARC or other methods were examined using a multivariable logistic regression analysis. Results: Of 9,081 women aged 25-30 in 2003, 3% used permanent methods and 4% used LARCs. Six years later in 2009, of 8,200 women (aged 31-36), 11% used permanent methods and 9% used LARCs. The fully adjusted parsimonious regression model showed that the likelihood of a woman using LARCs and permanent methods increased with number of children. Women whose youngest child was school-age were more likely to use LARCs (OR=1.83, 95%CI 1.43-2.33) or permanent methods (OR=4.39, 95%CI 3.54-5.46) compared to women with pre-school children. Compared to women living in major cities, women in inner regional areas were more likely to use LARCs (OR=1.26, 95%CI 1.03-1.55) or permanent methods (OR=1.43, 95%CI 1.17-1.76). Women living in outer regional and remote areas were more likely than women living in cities to use LARCs (OR=1.65, 95%CI 1.31-2.08) or permanent methods (OR=1.69, 95%CI 1.43-2.14). Women with poorer access to GPs were more likely to use permanent methods (OR=1.27, 95%CI 1.07-1.52). Conclusions: Location of residence and access to health services are important factors in women's choices about long-acting contraception in addition to the number and age of their children. There is a low level of uptake of non-daily, long-acting methods of contraception among Australian women in their mid-thirties.
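As a rough illustration of how adjusted odds ratios of the kind reported above are obtained, the sketch below fits a multivariable logistic regression on a hypothetical dataset; the file name, column names and the much-reduced covariate set are placeholders rather than the survey's actual variables, which include many more sociodemographic, reproductive and health service use factors.

```python
# Sketch: multivariable logistic regression yielding adjusted odds ratios.
# Data file, column names and covariates are hypothetical placeholders.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("alswh_contraception_2009.csv")

# Outcome: 1 if using a LARC method, 0 otherwise. 'area' is categorical
# (major city / inner regional / outer regional or remote), with major city
# as the reference level, mirroring the comparisons in the abstract.
model = smf.logit(
    "uses_larc ~ C(area, Treatment('major_city')) + n_children + youngest_child_school_age",
    data=df,
).fit()

ors = np.exp(model.params).rename("OR")
ci = np.exp(model.conf_int()).rename(columns={0: "2.5%", 1: "97.5%"})
print(pd.concat([ors, ci], axis=1))
```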

Relevance: 20.00%

Abstract:

Background The largest proportion of cancer patients are aged 65 years and over. Increasing age is also associated with nutritional risk and multi-morbidities, factors which complicate the cancer treatment decision-making process in older patients. Objectives To determine whether malnutrition risk and Body Mass Index (BMI) are associated with key oncogeriatric variables as potential predictors of chemotherapy outcomes in geriatric oncology patients with solid tumours. Methods In this longitudinal study, geriatric oncology patients (aged ≥65 years) received a Comprehensive Geriatric Assessment (CGA) for baseline data collection prior to the commencement of chemotherapy treatment. Malnutrition risk was assessed using the Malnutrition Screening Tool (MST) and BMI was calculated using anthropometric data. Nutritional risk was compared with other variables collected as part of standard CGA. Associations were determined by chi-square tests and correlations. Results Over half of the 175 geriatric oncology patients were at risk of malnutrition (53.1%) according to the MST. BMI ranged from 15.5 to 50.9 kg/m2, with 35.4% of the cohort classified as overweight according to geriatric cut-offs. Malnutrition risk was more prevalent in those who were underweight (70%), although many overweight participants were also at risk (34%). Malnutrition risk was associated with a diagnosis of colorectal or lung cancer (p=0.001), dependence in activities of daily living (p=0.015) and impaired cognition (p=0.049). Malnutrition risk was positively associated with vulnerability to intensive cancer therapy (rho=0.16, p=0.038). Larger BMI was associated with a greater number of multi-morbidities (rho=0.27, p=0.001). Conclusions Malnutrition risk is prevalent among geriatric patients undergoing chemotherapy, is more common in colorectal and lung cancer diagnoses, is associated with impaired functionality and cognition, and negatively influences the ability to complete planned intensive chemotherapy.
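The associations above are reported as chi-square tests and Spearman correlations; a minimal sketch of both, using hypothetical column names rather than the study dataset, is given below.

```python
# Sketch of the association tests described above. Data file and variable
# names are hypothetical placeholders.
import pandas as pd
from scipy.stats import chi2_contingency, spearmanr

df = pd.read_csv("geriatric_oncology_cga.csv")

# Chi-square test: malnutrition risk (e.g. MST >= 2) by tumour stream.
table = pd.crosstab(df["malnutrition_risk"], df["tumour_stream"])
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, p = {p:.3f}")

# Spearman correlation: BMI against number of multi-morbidities.
rho, p = spearmanr(df["bmi"], df["n_comorbidities"], nan_policy="omit")
print(f"rho = {rho:.2f}, p = {p:.3f}")
```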

Relevance: 20.00%

Abstract:

Information literacy is presented here from a relational perspective, as people’s experience of using information to learn in a particular context. A detailed practical example of such a context is provided, in the health information literacy experience of 65–79 year old Australians. A phenomenographic investigation found five qualitatively distinct ways of experiencing health information literacy: Absorbing (intuitive reception), Targeting (a planned process), Journeying (a personal quest), Liberating (equipping for independence) and Collaborating (interacting in community). These five ways of experiencing indicated expanding awareness of context (degree of orientation towards their environment), source (breadth of esteemed information), beneficiary (the scope of people who gain) and agency (amount of activity), across HIL core aspects of information, learning and health. These results illustrate the potential contribution of relational information literacy to information science.

Relevance: 20.00%

Abstract:

This study aimed to identify new peptide antigens from Chlamydia (C.) trachomatis in a proof-of-concept approach which could be used to develop an epitope-based serological diagnostic for C. trachomatis related infertility in women. A bioinformatics analysis was conducted examining several immunodominant proteins from C. trachomatis to identify predicted immunoglobulin epitopes unique to C. trachomatis. A peptide array of these epitopes was screened against participant sera. The participants (all female) were categorized into the following cohorts based on their infection and gynecological history: acute (single treated infection with C. trachomatis), multiple (more than one C. trachomatis infection, all treated), sequelae (pelvic inflammatory disease (PID) or tubal infertility with a history of C. trachomatis infection), and infertile (no history of C. trachomatis infection and no detected tubal damage). The bioinformatics strategy identified several promising epitopes. Participants who reacted positively in the peptide 11 ELISA were found to have an increased likelihood of being in the sequelae cohort compared to the infertile cohort, with an odds ratio of 16.3 (95% CI 1.65-160), 95% specificity and 46% sensitivity (0.19-0.74). The peptide 11 ELISA has the potential to be further developed as a screening tool for use during the early IVF work-up and provides proof of concept that further peptide antigens may be identified using bioinformatics and screening approaches.
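The diagnostic performance figures quoted above (odds ratio with its confidence interval, sensitivity and specificity) all derive from a 2x2 table of ELISA result against cohort. The sketch below shows that arithmetic with invented counts chosen only to illustrate the calculation; they are not the study data.

```python
import numpy as np

# Rows: ELISA positive / ELISA negative; columns: sequelae / infertile controls.
a, b = 11, 2    # true positives, false positives (hypothetical counts)
c, d = 13, 40   # false negatives, true negatives (hypothetical counts)

sensitivity = a / (a + c)
specificity = d / (b + d)
odds_ratio = (a * d) / (b * c)

# Woolf (log) method for an approximate 95% confidence interval for the OR.
se_log_or = np.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
ci_low, ci_high = np.exp(np.log(odds_ratio) + np.array([-1.96, 1.96]) * se_log_or)

print(f"sensitivity = {sensitivity:.2f}, specificity = {specificity:.2f}")
print(f"OR = {odds_ratio:.1f} (95% CI {ci_low:.2f}-{ci_high:.1f})")
```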

Relevance: 20.00%

Abstract:

BACKGROUND: The prevalence of protein-energy malnutrition in older adults is reported to be as high as 60% and is associated with poor health outcomes. Inadequate feeding assistance and mealtime interruptions may contribute to malnutrition and poor nutritional intake during hospitalisation. Despite being widely implemented in practice in the United Kingdom and increasingly in Australia, there have been few studies examining the impact of strategies such as Protected Mealtimes and dedicated feeding assistant roles on nutritional outcomes of elderly inpatients. AIMS: The aim of this research was to implement and compare three system-level interventions designed to specifically address mealtime barriers and improve energy intakes of medical inpatients aged ≥65 years. This research also aimed to evaluate the sustainability of any changes to mealtime routines six months post-intervention and to gain an understanding of staff perceptions of the post-intervention mealtime experience. METHODS: Three mealtime assistance interventions were implemented in three medical wards at Royal Brisbane and Women's Hospital: (1) AIN-only: an additional assistant-in-nursing (AIN) with a dedicated nutrition role; (2) PM-only: a multidisciplinary approach to meals, including Protected Mealtimes; and (3) PM+AIN: the combined intervention, an AIN plus the multidisciplinary approach to meals. An action research approach was used to carefully design and implement the three interventions in partnership with ward staff and managers. Significant time was spent in consultation with staff throughout the implementation period to facilitate ownership of the interventions and increase the likelihood of successful implementation. A pre-post design was used to compare the implementation and nutritional outcomes of each intervention with a pre-intervention group. Using the same wards, eligible participants (medical inpatients aged ≥65 years) were recruited to the pre-intervention group between November 2007 and March 2008 and to the intervention groups between January and June 2009. The primary nutritional outcome was daily energy and protein intake, which was determined by visually estimating plate waste at each meal and mid-meal on Day 4 of admission. Energy and protein intakes were compared between the pre- and post-intervention groups. Data were collected on a range of covariates (demographics, nutritional status and known risk factors for poor food intake), which allowed for multivariate analysis of the impact of the interventions on nutritional intake. The provision of mealtime assistance to participants and the activities of ward staff (including mealtime interruptions) were observed in the pre-intervention and intervention groups, with staff observations repeated six months post-intervention. Focus groups were conducted with nursing and allied health staff in June 2009 to explore their attitudes and behaviours in response to the three mealtime interventions. These focus group discussions were analysed using thematic analysis. RESULTS: A total of 254 participants were recruited to the study (pre-intervention: n=115, AIN-only: n=58, PM-only: n=39, PM+AIN: n=42). Participants had a mean age of 80 years (SD 8); 40% (n=101) were malnourished on hospital admission, 50% (n=108) had anorexia and 38% (n=97) required some assistance at mealtimes. Occasions of mealtime assistance significantly increased in all interventions (p<0.01). However, no change was seen in mealtime interruptions.
No significant difference was seen in mean total energy and protein intake between the pre-intervention and intervention groups. However, when total kilojoule intake was compared with estimated requirements at the individual level, participants in the intervention groups were more likely to achieve adequate energy intake (OR=3.4, p=0.01), with no difference noted between interventions (p=0.29). Despite small improvements in nutritional adequacy, the majority of participants in the intervention groups (76%, n=103) had inadequate energy intakes to meet their estimated energy requirements. Patients with cognitive impairment or feeding dependency appeared to gain substantial benefit from mealtime assistance interventions. The increase in occasions of mealtime assistance by nursing staff during the intervention period was maintained six months post-intervention. Staff focus groups highlighted the importance of clearly designating and defining mealtime responsibilities in order to provide adequate mealtime care. While the purpose of the dedicated feeding assistant was to increase levels of mealtime assistance, staff indicated that responsibility for mealtime duties may have merely shifted from nursing staff to the assistant. Implementing the multidisciplinary interventions empowered nursing staff to "protect" the mealtime from external interruptions, but further work is required to empower nurses to prioritise mealtime activities within their own work schedules. Staff reported an increase in the profile of nutritional care on all wards, with additional non-nutritional benefits noted, including improved mobility and functional independence and better identification of swallowing difficulties. IMPLICATIONS: This PhD research provides clinicians with practical strategies to immediately introduce change to deliver better mealtime care in the hospital setting, and, as such, has initiated local and state-wide roll-out of mealtime assistance programs. Improved nutritional intake of elderly inpatients was observed; however, given the modest effect size and shortening lengths of hospital stay, better nutritional outcomes may be achieved by targeting the hospital-to-home transition period. Findings from this study suggest that mealtime assistance interventions for elderly inpatients with cognitive impairment and/or functional dependency show promise.
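As a rough sketch of the intake-estimation step described above (energy intake derived from visually estimated plate waste and then compared with an individual estimated requirement), the following uses entirely hypothetical menu energy values, waste fractions and a placeholder per-kilogram requirement; it is not the study's audit tool.

```python
# Hypothetical served energy per meal (kJ); placeholders for illustration only.
SERVED_KJ = {"breakfast": 1800, "lunch": 2600, "dinner": 2800, "mid_meals": 1200}

def energy_intake(waste_fractions):
    """Energy intake (kJ) = sum over meals of served energy x proportion eaten."""
    return sum(SERVED_KJ[meal] * (1 - waste) for meal, waste in waste_fractions.items())

def meets_requirement(intake_kj, weight_kg, kj_per_kg=100):
    """Crude adequacy check against a per-kilogram energy requirement (assumed value)."""
    return intake_kj >= weight_kg * kj_per_kg

# Visually estimated plate waste as a fraction of each meal left uneaten.
waste = {"breakfast": 0.25, "lunch": 0.5, "dinner": 0.5, "mid_meals": 0.0}
intake = energy_intake(waste)
print(intake, meets_requirement(intake, weight_kg=60))
```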

Relevance: 20.00%

Abstract:

Traditionally, infectious diseases and under-nutrition have been considered the major health problems in Sri Lanka, with little attention paid to obesity and associated non-communicable diseases (NCDs). However, the recent Sri Lanka Diabetes and Cardiovascular Study (SLDCS) reported epidemic levels of obesity, diabetes and metabolic syndrome. Moreover, obesity-associated NCDs are the leading cause of death in Sri Lanka, and hospitalization due to NCDs is increasing exponentially, adversely affecting the development of the country. Despite Sri Lanka having a very high prevalence of NCDs and associated mortality, little is known about the causative factors for this burden. It is widely believed that the global NCD epidemic is associated with recent lifestyle changes, especially dietary factors. In the absence of sufficient data on dietary habits in Sri Lanka, successful interventions to manage these serious health issues would not be possible. In view of the current situation, this dietary survey was undertaken to assess intakes of energy, macro-nutrients and selected other nutrients with respect to socio-demographic characteristics and the nutritional status of Sri Lankan adults, with a particular focus on obesity. Another aim of this study was to develop and validate a culturally specific food frequency questionnaire (FFQ) to assess dietary risk factors of NCDs in Sri Lankan adults. Data were collected from a subset of the national SLDCS using a multi-stage, stratified, random sampling procedure (n=500). However, data collection in the SLDCS was affected by the prevailing civil war, which resulted in no data being collected from the Northern and Eastern provinces. To obtain a nationally representative sample, additional subjects (n=100) were later recruited from the two provinces using similar selection criteria. Ethical approval for this study was obtained from the Ethical Review Committee, Faculty of Medicine, University of Colombo, Sri Lanka, and informed consent was obtained from the subjects before data were collected. Dietary data were obtained using the 24-h Dietary Recall (24HDR) method. Subjects were asked to recall all foods and beverages consumed over the previous 24-hour period. Respondents were probed for the types of foods and food preparation methods. For the FFQ validation study, a 7-day weighed diet record (7-d WDR) was used as the reference method. All foods recorded in the 24HDR were converted into grams, and intakes of energy and nutrients were then analysed using NutriSurvey 2007 (EBISpro, Germany), which was modified for Sri Lankan food recipes. Socio-demographic details and body weight perception were collected using an interviewer-administered questionnaire. BMI was calculated, and overweight (BMI ≥23 kg/m2), obesity (BMI ≥25 kg/m2) and abdominal obesity (men: WC ≥90 cm; women: WC ≥80 cm) were categorized according to Asia-Pacific anthropometric cut-offs. SPSS v16 for Windows and Minitab v10 were used for statistical analysis. From a total of 600 eligible subjects, 491 (81.8%) participated, of whom 34.5% (n=169) were males. Subjects were well distributed among different socio-economic parameters. A total of 312 different food items were recorded, and nutritionists grouped similar food items, resulting in a total of 178 items. After step-wise multiple regression, 93 foods explained 90% of the variance for total energy intake, carbohydrates, protein, total fat and dietary fibre. Finally, 90 food items and 12 photographs were selected.
Seventy-seven subjects (response rate = 65%) completed the FFQ and the 7-day WDR. Estimated mean (SD) energy intake from the FFQ (1794±398 kcal) and the 7-d WDR (1698±333 kcal) differed significantly (P<0.001), owing to a significant overestimation of carbohydrate (~10 g/d, P<0.001) and, to some extent, fat (~5 g/d, NS). Significant positive correlations were found between the FFQ and the 7-d WDR for energy (r = 0.39), carbohydrate (r = 0.47), protein (r = 0.26), fat (r = 0.17) and dietary fibre (r = 0.32). Bland-Altman graphs indicated fairly good agreement between the methods, with no relationship between bias and average intake for each nutrient examined. The findings from the nutrition survey showed that, on average, Sri Lankan adults consumed over 14 portions of starch/d; moreover, males consumed 5 more portions of cereal than females. Sri Lankan adults consumed on average 3.56 portions of added sugars/d. Mean daily intake of fruit (0.43) and vegetable (1.73) portions was well below minimum dietary recommendations (fruits 2 portions/d; vegetables 3 portions/d); the total fruit and vegetable intake was 2.16 portions/d. Daily consumption of meat or alternatives was 1.75 portions, and the sum of meat and pulses was 2.78 portions/d. Starchy foods were consumed by all participants and over 88% met the minimum daily recommendations. Importantly, nearly 70% of adults exceeded the maximum daily recommendation for starch (11 portions/d) and a considerable proportion consumed large numbers of starch servings daily, particularly men: more than 12% of men consumed over 25 starch servings/d. In contrast to their starch consumption, participants reported very low intakes of other food groups. Only 11.6%, 2.1% and 3.5% of adults consumed the minimum daily recommended servings of vegetables, fruits, and fruits and vegetables combined, respectively. Six out of ten adult Sri Lankans sampled did not consume any fruit. Milk and dairy consumption was extremely low; over a third of the population did not consume any dairy products and less than 1% of adults consumed 2 portions of dairy/d. A quarter of Sri Lankans did not report consumption of meat and pulses. Regarding protein consumption, 36.2% attained the minimum Sri Lankan recommendation for protein, and significantly more men than women achieved the recommendation of ≥3 servings of meat or alternatives daily (men 42.6%, women 32.8%; P<0.05). Over 70% of energy was derived from carbohydrates (males: 72.8±6.4%, females: 73.9±6.7%), followed by fat (males: 19.9±6.1%, females: 18.5±5.7%) and protein (males: 10.6±2.1%, females: 10.9±5.6%). The average intake of dietary fibre was 21.3 g/day and 16.3 g/day for males and females, respectively. There were significant differences in nutritional intake related to ethnicity, area of residence, education level and BMI category. Similarly, dietary diversity was significantly associated with several socio-economic parameters among Sri Lankan adults. Adults with BMI ≥25 kg/m2 and abdominally obese adults had the highest dietary diversity values. Age-adjusted prevalences (95% confidence interval) of overweight, obesity and abdominal obesity among Sri Lankan adults were 17.1% (13.8-20.7), 28.8% (24.8-33.1), and 30.8% (26.8-35.2), respectively. Men, compared with women, were less often overweight, 14.2% (9.4-20.5) versus 18.5% (14.4-23.3), P = 0.03; less often obese, 21.0% (14.9-27.7) versus 32.7% (27.6-38.2), P < 0.05; and less often abdominally obese, 11.9% (7.4-17.8) versus 40.6% (35.1-46.2), P < 0.05.
Although the prevalence of obesity has reached epidemic levels, body weight misperception was common among Sri Lankan adults. Two-thirds of overweight males and 44.7% of overweight females considered themselves to be "about the right weight". Over one-third of obese subjects of both sexes perceived themselves as "about the right weight" or "underweight". Nearly 32% of centrally obese men and women perceived that their waist circumference was about right. Of those who perceived themselves as overweight or very overweight (n = 154), only 63.6% tried to lose weight (n = 98), and a quarter sought advice from professionals (n = 39). A number of important conclusions can be drawn from this research project. Firstly, the newly developed FFQ is an acceptable tool for assessing the nutrient intake of Sri Lankans and will assist proper categorization of individuals by dietary exposure. Secondly, a substantial proportion of the Sri Lankan population does not consume a varied and balanced diet, which is suggestive of a close association between the nutrition-related NCDs in the country and unhealthy eating habits. Moreover, dietary diversity is positively associated with several socio-demographic characteristics and obesity among Sri Lankan adults. Lastly, although obesity is a major health issue among Sri Lankan adults, body weight misperception was common among underweight, healthy weight, overweight, and obese adults in Sri Lanka. Over two-thirds of overweight and one-third of obese Sri Lankan adults believed that they were in the "right weight" or "underweight" categories.
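The FFQ validation above relies on Bland-Altman agreement analysis; the sketch below computes the mean bias and 95% limits of agreement for energy intake using invented paired intakes, purely to show the calculation.

```python
import numpy as np

# Hypothetical paired daily energy intakes (kcal/d) for a few subjects.
ffq = np.array([1650.0, 2100.0, 1820.0, 1500.0, 1990.0, 1760.0])   # FFQ
wdr = np.array([1580.0, 1950.0, 1800.0, 1460.0, 1850.0, 1700.0])   # 7-d WDR

diff = ffq - wdr                      # per-subject difference between methods
mean_pair = (ffq + wdr) / 2           # per-subject average of the two methods

bias = diff.mean()                    # mean bias (FFQ minus weighed record)
loa = bias + np.array([-1.96, 1.96]) * diff.std(ddof=1)  # 95% limits of agreement

print(f"bias = {bias:.0f} kcal/d, limits of agreement = {loa[0]:.0f} to {loa[1]:.0f}")
# A Bland-Altman plot would show `diff` against `mean_pair`, with horizontal
# lines at the bias and at each limit of agreement.
```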

Relevance: 20.00%

Abstract:

Purpose: To develop, using dacarbazine as a model, reliable techniques for measuring DNA damage and repair as pharmacodynamic endpoints for patients receiving chemotherapy. Methods: A group of 39 patients with malignant melanoma were treated with dacarbazine 1 g/m2 i.v. every 21 days. Tamoxifen 20 mg daily was commenced 24 h after the first infusion and continued until 3 weeks after the last cycle of chemotherapy. DNA strand breaks formed during dacarbazine-induced DNA damage and repair were measured in individual cells by the alkaline comet assay. DNA methyl adducts were quantified by measuring urinary 3-methyladenine (3-MeA) excretion using an immunoaffinity ELISA. Venous blood was taken on cycles 1 and 2 for separation of peripheral blood lymphocytes (PBLs) for measurement of DNA strand breaks. Results: Wide interpatient variation in PBL DNA strand breaks occurred following chemotherapy, with a peak at 4 h (median 26.6 h, interquartile range 14.75-40.5 h) and incomplete repair by 24 h. Similarly, there was a range of 3-MeA excretion, with peak levels 4-10 h after chemotherapy (median 33 nmol/h, interquartile range 20.4-48.65 nmol/h). Peak 3-MeA excretion was positively correlated with DNA strand breaks at 4 h (Spearman's correlation coefficient, r = 0.39, P = 0.036) and 24 h (r = 0.46, P = 0.01). Drug-induced emesis correlated with PBL DNA strand breaks (Mann-Whitney U-test, P = 0.03) but not with peak 3-MeA excretion. Conclusions: DNA damage and repair following cytotoxic chemotherapy can be measured in vivo by the alkaline comet assay and by urinary 3-MeA excretion in patients receiving chemotherapy.

Relevance: 20.00%

Abstract:

This study investigated the effect of any health professional contact, and the types of contact new mothers received, in the first 10 days post-discharge on breastfeeding rates at 3 months. This cross-sectional retrospective self-report survey was distributed at 4–5 months postpartum to women who birthed in Queensland, Australia between 1st February and 31st May 2010. Data were collected on pregnancy, birth, postpartum care and infant feeding. Logistic regression was used to assess the relationship between health professional contact and breastfeeding at 3 months. Data were analysed by birthing facility sector because of significant differences between sectors in health professional contact. The study cohort consisted of 6,852 women. Women in the public sector were more likely to be visited at home than women birthing in the private sector. Any health professional contact (AOR 1.65, 99% CI 0.98–2.76, public sector; AOR 0.78, 99% CI 0.59–1.03, private sector) and home visits (AOR 1.50, 99% CI 0.89–2.54, public sector; AOR 0.80, 99% CI 0.46–1.39, private sector) were not associated with breastfeeding at 3 months in either sector. A telephone call (AOR 2.07, 99% CI 1.06–4.03) or a visit to a general practitioner (GP) (AOR 1.83, 99% CI 1.04–3.21) increased the odds of breastfeeding in public sector women. Health professional contact or home visiting in the first 10 days post-discharge did not have a significant impact on breastfeeding rates at 3 months. Post-discharge telephone contact for all women, and opportunities for self-initiated clinic visits for women assessed to be at higher risk of ceasing breastfeeding, may be the most effective care.

Relevance: 20.00%

Abstract:

The purpose of this investigation is to present an overview of roadside drug driving enforcement and detections in Queensland, Australia since the introduction of oral fluid screening. Drug driving is a problematic issue for road safety, and investigations of the prevalence and impact of drug driving suggest that, in particular, the use of illicit drugs may increase a driver's risk of involvement in a road crash compared with a drug-free driver. In response to the potential increased crash involvement of drug-impaired drivers, Australian police agencies have adopted the use of oral fluid analysis to detect the presence of illicit drugs in drivers. This paper describes the results of roadside drug testing for over 80,000 drivers in Queensland, Australia, from December 2007 to June 2012. It provides unique data on the prevalence of methamphetamine, cannabis and ecstasy in the screened population for the period. When prevalence rates are examined over time, drug driving detection rates have almost doubled, from around 2.0% at the introduction of roadside testing operations to just under 4.0% in the latter years. The most common drug type detected was methamphetamine (40.8%), followed by cannabis (29.8%) and the methamphetamine/cannabis combination (22.5%). By comparison, the rate of ecstasy detection was very low (1.7%). The data revealed a number of regional, age and gender patterns and variations in drug driving across the state. Younger drivers were more likely to test positive for cannabis whilst older drivers were more likely to test positive for methamphetamine. Overall, drivers who tested positive for at least one of the target illicit drugs were most likely to be male, aged 30-39 years, driving a car on a Friday, Saturday or Sunday between 6:00 PM and 6:00 AM, and to test positive for methamphetamine.

Relevance: 20.00%

Abstract:

Objectives The UK Department for Transport recommends taking a break from driving every 2 h. This study investigated: (i) whether a 2 h drive on a monotonous road is appropriate for OSA patients treated with CPAP, compared with healthy age-matched controls, (ii) the impact of a night's sleep restriction (with CPAP) and (iii) what happens if these patients miss one night's CPAP treatment. Methods Nineteen healthy men aged 52–74 y (m = 66.2 y) and 19 OSA participants aged 50–75 y (m = 64.4 y) drove an interactive car simulator under monotonous motorway conditions for 2 h on two afternoons, in a counterbalanced design: (1) following a normal night's sleep (8 h); (2) following a restricted night's sleep (5 h) with normal CPAP use; and (3) following a night without CPAP treatment (n = 11). Lane drifting incidents, indicative of falling asleep, were recorded for up to 2 h depending on competence to continue driving. Results Normal sleep: controls drove for an average of 95.9 min (s.d. 37 min) and treated OSA drivers for 89.6 min (s.d. 29 min) without incident. 63.2% of controls and 42.1% of OSA drivers successfully completed the drive without an incident. Sleep restriction: 47.4% of controls and 26.3% of OSA drivers finished without incident. Overall, controls drove for an average of 89.5 min (s.d. 39 min) and treated OSA drivers 65 min (s.d. 42 min) without incident. The effect of condition was significant [F(1,36) = 9.237, P < 0.05, eta2 = 0.204]. Stopping CPAP: 18.2% of drivers successfully completed the drive. Overall, participants drove for an average of 50.1 min (s.d. 38 min) without incident. The effect of condition was significant [F(2) = 8.8, P < 0.05, eta2 = 0.468]. Conclusion 52.6% of all drivers were able to complete a 2 hour drive under monotonous conditions after a full night's sleep. Sleep restriction significantly affected both control and OSA drivers. We find evidence that treated OSA drivers are more impaired by sleep restriction than healthy controls, as they were less able to safely sustain the 2 h drive without incidents. OSA drivers should be aware that non-compliance with CPAP can significantly impair driving performance. It may be appropriate to recommend that older drivers take a break from driving every 90 min, especially when undertaking a monotonous drive, as was the case here.
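The effect sizes reported above can be recovered directly from the F statistics, assuming eta2 denotes partial eta-squared: eta2 = (F x df1) / (F x df1 + df2). A one-line check against the reported condition effect:

```python
def partial_eta_squared(f_value, df1, df2):
    """Partial eta-squared from an F statistic and its degrees of freedom."""
    return (f_value * df1) / (f_value * df1 + df2)

print(round(partial_eta_squared(9.237, 1, 36), 3))  # ~0.204, matching the reported value
```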

Relevance: 20.00%

Abstract:

Rationale: Anabolic steroids are drugs of abuse. However, the potential for addiction remains unclear. Testosterone induces conditioned place preference in rats and oral self-administration in hamsters. Objectives: To determine if male rats and hamsters consume testosterone by intravenous (IV) or intracerebroventricular (ICV) self-administration. Methods: With each nose-poke in the active hole during daily 4-h tests in an operant conditioning chamber, gonad-intact adult rats and hamsters received 50 µg testosterone in an aqueous solution of β-cyclodextrin via a jugular cannula. The inactive nose-poke hole served as a control. Additional hamsters received vehicle infusions. Results: Rats (n=7) expressed a significant preference for the active nose-poke hole (10.0±2.8 responses/4 h) over the inactive hole (4.7±1.2 responses/4 h). Similarly, during 16 days of IV testosterone self-administration, hamsters (n=9) averaged 11.7±2.9 responses/4 h and 6.3±1.1 responses/4 h in the active and inactive nose-poke holes, respectively. By contrast, vehicle controls (n=8) failed to develop a preference for the active nose-poke hole (6.5±0.5 and 6.4±0.3 responses/4 h). Hamsters (n=8) also self-administered 1 µg testosterone ICV (active hole: 39.8±6.0 nose-pokes/4 h; inactive hole: 22.6±7.1 nose-pokes/4 h). When testosterone was replaced with vehicle, nose-poking in the active hole declined from 31.1±7.6 to 11.9±3.2 responses/4 h within 6 days. Likewise, reversing the active and inactive holes increased nose-poking in the previously inactive hole from 9.1±1.9 to 25.6±5.4 responses/4 h. However, reducing the testosterone dose from 1 µg to 0.2 µg per 1 µl injection did not change nose-poking. Conclusions: Compared with other drugs of abuse, testosterone reinforcement is modest. Nonetheless, these data support the hypothesis that testosterone is reinforcing.

Relevance: 20.00%

Abstract:

The aim of this study was to assess the accuracy of placement of pelvic binders and to determine whether circumferential compression at the level of the greater trochanters is the best method of reducing a symphyseal diastasis. Patients were identified by a retrospective review of all pelvic radiographs performed at a military hospital over a period of 30 months. We analysed any pelvic radiograph on which the buckle of the pelvic binder was clearly visible. The patients were divided into groups according to the position of the buckle in relation to the greater trochanters: high, trochanteric or low. Reduction of the symphyseal diastasis was measured in a subgroup of patients with an open-book fracture, which consisted of an injury to the symphysis and disruption of the posterior pelvic arch (AO/OTA 61-B/C). We identified 172 radiographs with a visible pelvic binder. Five cases were excluded due to inadequate radiographs. In 83 (50%) the binder was positioned at the level of the greater trochanters. A high position was the most common site of inaccurate placement, occurring in 65 (39%). Seventeen patients were identified as a subgroup to assess the effect of the position of the binder on reduction of the diastasis. The mean gap was 2.8 times greater (mean difference 22 mm) in the high group compared with the trochanteric group (p < 0.01). Application of a pelvic binder above the level of the greater trochanters is common; it reduces pelvic fractures inadequately and is likely to delay cardiovascular recovery in these seriously injured patients.