Abstract:
Introduction and aims: Individual smokers from disadvantaged backgrounds are less likely to quit, which contributes to widening inequalities in smoking. Residents of disadvantaged neighbourhoods are more likely to smoke, and neighbourhood inequalities in smoking may also be widening because of neighbourhood differences in rates of cessation. This study examined the association between neighbourhood disadvantage and smoking cessation and its relationship with neighbourhood inequalities in smoking. Design and methods: A multilevel longitudinal study of mid-aged (40-67 years) residents (n=6915) of Brisbane, Australia, who lived in the same neighbourhoods (n=200) in 2007 and 2009. Neighbourhood inequalities in cessation and smoking were analysed using multilevel logistic regression and Markov chain Monte Carlo simulation. Results: After adjustment for individual-level socioeconomic factors, the probability of quitting smoking between 2007 and 2009 was lower for residents of disadvantaged neighbourhoods (9.0%-12.8%) than for their counterparts in more advantaged neighbourhoods (20.7%-22.5%). These inequalities in cessation manifested in widening inequalities in smoking: in 2007 the between-neighbourhood variance in rates of smoking was 0.242 (p≤0.001) and in 2009 it was 0.260 (p≤0.001). In 2007, residents of the most disadvantaged neighbourhoods were 88% (OR 1.88, 95% CrI 1.41-2.49) more likely to smoke than residents of the least disadvantaged neighbourhoods; the corresponding difference in 2009 was 98% (OR 1.98, 95% CrI 1.48-2.66). Conclusion: Fundamentally, social and economic inequalities at the neighbourhood and individual levels cause smoking and cessation inequalities. Reducing these inequalities will require comprehensive, well-funded and targeted tobacco control efforts and equity-based policies that address the social and economic determinants of smoking.
Abstract:
PURPOSE: This study examined the effects of overnight sleep deprivation on recovery following competitive rugby league matches. METHODS: Eleven male, amateur rugby league players performed two competitive matches, followed by either a normal night's sleep (~8h; CONT) or a sleep deprived night (~0h; SDEP) in a randomised fashion. Testing was conducted the morning of the match, and immediately post-match, 2h post and the next morning (16h post-match). Measures included counter-movement jump (CMJ) distance, knee extensor maximal voluntary contraction (MVC), voluntary activation (VA), venous blood creatine kinase (CK) and C-reactive protein (CRP), perceived muscle soreness and a word-colour recognition cognitive function test. Percent change between post- and 16h post-match was reported to determine the effect of the intervention the next morning. RESULTS: Large effects indicated a greater post- to 16h post-match percentage decline in CMJ distance following SDEP compared to CONT (P=0.10-0.16; d=0.95-1.05). Similarly, the percentage decline in incongruent word-colour reaction times were increased in SDEP trials (P=0.007; d=1.75). Measures of MVC did not differ between conditions (P=0.40-0.75; d=0.13-0.33), though trends for larger percentage decline in VA were detected in SDEP (P=0.19; d=0.84). Further, large effects indicated higher CK and CRP responses 16h post-match during SDEP compared to CONT (P=0.11-0.87; d=0.80-0.88). CONCLUSIONS: Sleep deprivation negatively affected recovery following a rugby league match, specifically impairing CMJ distance and cognitive function. Practitioners should promote adequate post-match sleep patterns or adjust training demands the next day to accommodate the altered physical and cognitive state following sleep deprivation.
Abstract:
Background: Extracorporeal circulation (ECC), the diversion of blood flow through a circuit located outside of the body, has been one of the major advances in modern medicine. Cardio-pulmonary bypass (CPB), renal dialysis, apheresis and extracorporeal membrane oxygenation (ECMO) are all different forms of ECC. Despite its major benefits, when blood comes into contact with foreign material, both the coagulation and inflammation cascades are activated simultaneously. Short periods of exposure to ECC, e.g. CPB (~2 h duration), are known to be associated with haemolysis, coagulopathies, bleeding and inflammation which demand blood product support. Therefore, it is not unexpected that these complications would be exaggerated with prolonged periods of ECC such as in ECMO (days to weeks duration). The variability and complexities of the underlying pathologies of patients requiring ECC make it difficult to study the cause and effect of these complications. To overcome this problem we developed an ovine (sheep) model of ECC. Method: Healthy female sheep (1–3 y.o.) weighing 40–50 kg were fasted overnight, anaesthetised, intubated and ventilated [1]. Half the group received smoke-induced acute lung injury (S-ALI group) (n = 8) and the other half did not (healthy group) (n = 8). Sheep were subsequently cannulated (Medtronic Inc, Minneapolis, MN, USA) and veno-venous ECMO commenced using a PLS ECMO circuit and Quadrox D oxygenator (Maquet Cardiopulmonary AG, Hechinger Straße, Germany). There was continuous physiological monitoring and blood was collected at specified time intervals for full blood counts, platelet function analysis (by Multiplate®), routine coagulation and assessment of clot formation and lysis (by ROTEM®). Preliminary results: Full blood counts and routine coagulation results from normal healthy sheep were comparable to those of normal human adults.
Within 15 min of initiating ECMO, PT, PTT and EXTEM clot formation time increased, whilst EXTEM maximum clot firmness decreased in both cohorts. Discussion & Conclusions: Preliminary results from both 2-h ECMO cohorts showed that the anatomy, haematology and coagulation parameters of an adult sheep are comparable to those of a human adult. Experiments are currently underway with healthy (n = 8) and S-ALI (n = 8) sheep on ECMO for 24 h. In addition to characterising how ECMO alters haematology and coagulation parameters, we hope this work will also define which blood components will be most effective to correct bleeding or clotting complications during ECMO support.
Abstract:
PURPOSE: To test the reliability of Timed Up and Go Tests (TUGTs) in cardiac rehabilitation (CR) and compare TUGTs to the 6-Minute Walk Test (6MWT) for outcome measurement. METHODS: Sixty-one of 154 consecutive community-based CR patients were prospectively recruited. Subjects undertook repeated TUGTs and 6MWTs at the start of CR (start-CR), postdischarge from CR (post-CR), and 6 months postdischarge from CR (6 months post-CR). The main outcome measurements were TUGT time (TUGTT) and 6MWT distance (6MWD). RESULTS: Mean (SD) TUGTT1 and TUGTT2 at the 3 assessments were 6.29 (1.30) and 5.94 (1.20); 5.81 (1.22) and 5.53 (1.09); and 5.39 (1.60) and 5.01 (1.28) seconds, respectively. A reduction in TUGTT occurred between each outcome point (P ≤ .002). Repeated TUGTTs were strongly correlated at each assessment, intraclass correlation (95% CI) = 0.85 (0.76–0.91), 0.84 (0.73–0.91), and 0.90 (0.83–0.94), despite a reduction between TUGTT1 and TUGTT2 of 5%, 5%, and 7%, respectively (P ≤ .006). Relative decreases in TUGTT1 (TUGTT2) occurred from start-CR to post-CR and from start-CR to 6 months post-CR of −7.5% (−6.9%) and −14.2% (−15.5%), respectively, while relative increases in 6MWD1 (6MWD2) occurred, 5.1% (7.2%) and 8.4% (10.2%), respectively (P < .001 in all cases). Pearson correlation coefficients for 6MWD1 to TUGTT1 and TUGTT2 across all times were −0.60 and −0.68 (P < .001) and the intraclass correlations (95% CI) for the speeds derived from averaged 6MWDs and TUGTTs were 0.65 (0.54, 0.73) (P < .001). CONCLUSIONS: Similar relative changes occurred for the TUGT and the 6MWT in CR. A significant correlation between the TUGTT and 6MWD was demonstrated, and we suggest that the TUGT may provide a related or a supplementary measurement of functional capacity in CR.
Abstract:
This study evaluated effects of defensive pressure on running velocity in footballers during the approach to kick a stationary football. Approach velocity and ball speed/accuracy data were recorded from eight football youth academy participants (15.25, SD=0.46 yrs). Participants were required to run to a football to cross it to a receiver to score against a goal-keeper. Defensive pressure was manipulated across three counterbalanced conditions: defender-absent (DA); defender-far (DF) and defender-near (DN). Pass accuracy (percentages of a total of 32 trials with 95% confidence limits in parentheses) did not significantly reduce under changing defensive pressure: DA, 78% (55–100%); DF, 78% (61–96%); DN, 59% (40–79%). Ball speed (m·s−1) significantly reduced as defensive pressure was included and increased: DA, 23.10 (22.38–23.83); DF, 20.40 (19.69–21.11); DN, 19.22 (18.51–19.93). When defensive pressure was introduced, average running velocity of attackers did not change significantly: DA versus DF (m·s−1), 5.40 (5.30–5.51) versus 5.41 (5.34–5.48). Scaling defender starting positions closer to the start position of the attacker (DN) significantly increased average running velocity relative to the DA and DF conditions, 5.60 (5.50–5.71). In the final approach footfalls, all conditions significantly differed: DA, 5.69 (5.35–6.03); DF, 6.22 (5.93–6.50); DN, 6.52 (6.23–6.80). Data suggested that approach velocity is constrained by both presence and initial distance of the defender during task performance. Implications are that the expression of kicking behaviour is specific to a performance context and some movement regulation features will not emerge unless a defender is present as a task constraint in practice.
Abstract:
The optimum parameters for synthesis of zeolite NaA based on metakaolin were investigated according to results of cation exchange capacity and static water adsorption of all synthesis products and selected X-ray diffraction (XRD). Magnetic zeolite NaA was synthesized by adding Fe3O4 to the zeolite precursor. Zeolite NaA and magnetic zeolite NaA were characterized with scanning electron microscopy (SEM) and XRD. Magnetic zeolite NaA with different Fe3O4 loadings was prepared and used for removal of heavy metals (Cu2+, Pb2+). The results show the optimum parameters for synthesis of zeolite NaA are SiO2/Al2O3 = 2.3, Na2O/SiO2 = 1.4, H2O/Na2O = 50, crystallization time 8 h, crystallization temperature 95 °C. The addition of Fe3O4 gives the NaA zeolite good magnetic susceptibility and good magnetic stability regardless of the Fe3O4 loading, confirming considerable separation efficiency. Additionally, Fe3O4 loading had little effect on the removal of heavy metals by the magnetic zeolite; the adsorption capacity still reached 2.3 mmol g−1 for Cu2+ and Pb2+, with a removal efficiency of over 95%, even at 4.7% Fe3O4 loading. This indicates that magnetic zeolite NaA can be used to remove heavy metals, at least Cu2+ and Pb2+, from water with metallic contaminants and can be separated easily by a magnetic process.
Abstract:
BACKGROUND: A long length of stay (LOS) in the emergency department (ED) associated with overcrowding has been found to adversely affect the quality of ED care. The objective of this study is to determine whether patients who speak a language other than English at home have a longer LOS in EDs compared to those who speak only English at home. METHODS: A secondary data analysis of a Queensland state-wide hospital EDs dataset (Emergency Department Information System) was conducted for the period 1 January 2008 to 31 December 2010. RESULTS: The interpreter requirement was the highest among Vietnamese speakers (23.1%) followed by Chinese (19.8%) and Arabic speakers (18.7%). There were significant differences in the distributions of the departure statuses among the language groups (Chi-squared=3236.88, P<0.001). Compared with English speakers, the Beta coefficient for the LOS in the EDs measured in minutes was among Vietnamese, 26.3 (95%CI: 22.1–30.5); Arabic, 10.3 (95%CI: 7.3–13.2); Spanish, 9.4 (95%CI: 7.1–11.7); Chinese, 8.6 (95%CI: 2.6–14.6); Hindi, 4.0 (95%CI: 2.2–5.7); Italian, 3.5 (95%CI: 1.6–5.4); and German, 2.7 (95%CI: 1.0–4.4). The final regression model explained 17% of the variability in LOS. CONCLUSION: There is a close relationship between the language spoken at home and the LOS at EDs, indicating that language could be an important predictor of prolonged LOS in EDs and improving language services might reduce LOS and ease overcrowding in EDs in Queensland's public hospitals.
Abstract:
In Energex Limited v Sablatura [2009] QSC 356 the difficulty facing the applicant related not to its substantive rights, but to its ability to vindicate those rights without an effective respondent to the application. The case highlights issues that may confront an applicant or plaintiff in vindicating rights it may have against a person who is or becomes under a legal incapacity, if there is no-one other than the Public Trustee able to act as litigation guardian.
Abstract:
Background: Decreased ability to perform Activities of Daily Living (ADLs) during hospitalisation has negative consequences for patients and health service delivery. Objective: To develop an Index to stratify patients at lower and higher risk of a significant decline in ability to perform ADLs at discharge. Design: Prospective two-cohort study comprising a derivation (n=389; mean age 82.3 years; SD = 7.1) and a validation cohort (n=153; mean age 81.5 years; SD = 6.1). Patients and setting: General medical patients aged ≥ 70 years admitted to three university-affiliated acute care hospitals in Brisbane, Australia. Measurement and main results: The short ADL Scale was used to identify a significant decline in ability to perform ADLs from premorbid to discharge. In the derivation cohort, 77 patients (19.8%) experienced a significant decline. Four significant factors were identified for patients independent at baseline: 'requiring moderate assistance to being totally dependent on others with bathing'; 'difficulty understanding others (frequently or all the time)'; 'requiring moderate assistance to being totally dependent on others with performing housework'; and a 'history of experiencing at least one fall in the 90 days prior to hospital admission', in addition to 'independent at baseline', which was protective against decline at discharge. 'Difficulty understanding others (frequently or all the time)' and 'requiring moderate assistance to being totally dependent on others with performing housework' were also predictors for patients dependent in ADLs at baseline. Sensitivity, specificity, Positive Predictive Value (PPV), and Negative Predictive Value (NPV) of the DADLD dichotomised risk scores were: 83.1% (95% CI 72.8; 90.7); 60.5% (95% CI 54.8; 65.9); 34.2% (95% CI 27.5; 41.5); and 93.5% (95% CI 89.2; 96.5), respectively. In the validation cohort, 47 patients (30.7%) experienced a significant decline.
Sensitivity, specificity, PPV and NPV of the DADLD were: 78.7% (95% CI 64.3; 89.3); 69.8% (95% CI 60.1, 78.3); 53.6% (95% CI 41.2; 65.7); 88.1% (95% CI 79.2; 94.1). Conclusions: The DADLD Index is a useful tool for identifying patients at higher risk of decline in ability to perform ADLs at discharge.
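The four validation-cohort metrics follow directly from a 2×2 table. The counts below are back-calculated from the reported figures (n = 153, of whom 47 declined; sensitivity 78.7%, specificity 69.8%) and reproduce the published PPV and NPV:

```python
# Reconstructed 2x2 counts for the validation cohort (n = 153, 47 decliners),
# back-calculated from the reported sensitivity (78.7%) and specificity (69.8%).
TP, FN = 37, 10   # decliners flagged / missed by the DADLD risk score
TN, FP = 74, 32   # non-decliners correctly / incorrectly classified

sensitivity = TP / (TP + FN)   # 37/47  ~= 0.787
specificity = TN / (TN + FP)   # 74/106 ~= 0.698
ppv = TP / (TP + FP)           # 37/69  ~= 0.536
npv = TN / (TN + FN)           # 74/84  ~= 0.881

print(f"Sens {sensitivity:.1%}, Spec {specificity:.1%}, "
      f"PPV {ppv:.1%}, NPV {npv:.1%}")
```

Note how the high NPV alongside a modest PPV reflects the screening intent of the Index: it is better at ruling out decline than at confirming it.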
Abstract:
Background: Associations between sitting-time and physical activity (PA) with depression are unclear. Purpose: To examine concurrent and prospective associations between both sitting-time and PA with prevalent depressive symptoms in mid-aged Australian women. Methods: Data were from 8,950 women, aged 50-55 years in 2001, who completed mail surveys in 2001, 2004, 2007 and 2010. Depressive symptoms were assessed using the Center for Epidemiological Studies Depression questionnaire. Associations between sitting-time (≤4, >4-7, >7 hrs/day) and PA (none, some, meeting guidelines) with depressive symptoms (symptoms/no symptoms) were examined in 2011 in concurrent and lagged mixed effect logistic modeling. Both main effects and interaction models were developed. Results: In main effects modeling, women who sat >7 hrs/day (OR 1.47, 95%CI 1.29-1.67) and women who did no PA (OR 1.99, 95%CI 1.75-2.27) were more likely to have depressive symptoms than women who sat ≤4 hrs/day and who met PA guidelines, respectively. In interaction modeling, the likelihood of depressive symptoms in women who sat >7 hrs/day and did no PA was triple that of women who sat ≤4 hrs/day and met PA guidelines (OR 2.96, 95%CI 2.37-3.69). In prospective main effects and interaction modeling, sitting-time was not associated with depressive symptoms, but women who did no PA were more likely than those who met PA guidelines to have future depressive symptoms (OR 1.26, 95%CI 1.08-1.47). Conclusions: Increasing PA to a level commensurate with PA guidelines can alleviate current depression symptoms and prevent future symptoms in mid-aged women. Reducing sitting-time may ameliorate current symptoms.
Abstract:
Monitoring foodservice satisfaction is a risk management strategy for malnutrition in the acute care sector, as low satisfaction may be associated with poor intake. This study aimed to investigate the relationship between age and foodservice satisfaction in the private acute care setting. Patient satisfaction was assessed using a validated tool, the Acute Care Hospital Foodservice Patient Satisfaction Questionnaire, for data collected 2008–2010 (n = 779) at a private hospital, Brisbane. Age was grouped into three categories: ≤50 years, 51–70 years and >70 years. Fisher’s exact test assessed independence of categorical responses and age group; ANOVA or the Kruskal–Wallis test was used for continuous variables. Dichotomised responses were analysed using logistic regression and odds ratios (95% confidence interval, p < 0.05). Overall foodservice satisfaction (5-point scale) was high (≥4 out of 5) and was independent of age group (p = 0.377). There was an increasing trend with age in mean satisfaction scores for individual dimensions of foodservice: food quality (p < 0.001), meal service quality (p < 0.001), staff service issues (p < 0.001) and physical environment (p < 0.001). A preference for being able to choose different sized meals (59.8% >70 years vs 40.6% ≤50 years; p < 0.001) and response to ‘the foods are just the right temperature’ (55.3% >70 years vs 35.9% ≤50 years; p < 0.001) were dependent on age. For the food quality dimension, based on dichotomised responses (satisfied or not), the odds of satisfaction were higher for >70 years (OR = 5.0, 95% CI: 1.8–13.8; ≤50 years referent). These results suggest that dimensions of foodservice satisfaction are associated with age and can assist foodservices to meet varying generational expectations of clients.
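The reported OR = 5.0 for dichotomised satisfaction is the cross-product ratio of a 2×2 table. A minimal sketch with invented cell counts chosen only to reproduce that figure (the abstract does not report the actual counts):

```python
# Hypothetical 2x2 table (invented counts; the abstract does not report them):
# satisfied / not satisfied with food quality, by age group.
satisfied_old, unsatisfied_old = 80, 16        # >70 years
satisfied_young, unsatisfied_young = 50, 50    # <=50 years (referent group)

# Odds of satisfaction in each group, then their ratio
odds_old = satisfied_old / unsatisfied_old        # 80/16 = 5.0
odds_young = satisfied_young / unsatisfied_young  # 50/50 = 1.0
odds_ratio = odds_old / odds_young
print(f"OR = {odds_ratio:.1f}")  # OR = 5.0
```

In the study itself the OR and its 95% CI come from a logistic regression model rather than a raw table, so this is only the arithmetic behind the headline figure.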
Abstract:
Objective: Several new types of contraception became available in Australia over the last twelve years (the implant in 2001, progestogen intra-uterine device (IUD) in 2003, and vaginal contraceptive ring in 2007). Most methods of contraception require access to health services. Permanent sterilisation and the insertion of an implant or IUD involve a surgical procedure. Access to health professionals providing these specialised services may be more difficult in rural areas. This paper examines uptake of permanent or long-acting reversible contraception (LARCs) among Australian women in rural areas compared to women in urban areas. Method: Participants in the Australian Longitudinal Study on Women's Health born in 1973-78 reported on their contraceptive use at three surveys: 2003, 2006 and 2009. Contraceptive methods included permanent sterilisation (tubal ligation, vasectomy), non-daily or LARC methods (implant, IUD, injection, vaginal ring), and other methods including daily, barrier or "natural" methods (oral contraceptive pills, condoms, withdrawal, safe period). Sociodemographic, reproductive history and health service use factors associated with using permanent, LARC or other methods were examined using a multivariable logistic regression analysis. Results: Of 9,081 women aged 25-30 in 2003, 3% used permanent methods and 4% used LARCs. Six years later in 2009, of 8,200 women (aged 31-36), 11% used permanent methods and 9% used LARCs. The fully adjusted parsimonious regression model showed that the likelihood of a woman using LARCs and permanent methods increased with number of children. Women whose youngest child was school-age were more likely to use LARCs (OR=1.83, 95%CI 1.43-2.33) or permanent methods (OR=4.39, 95%CI 3.54-5.46) compared to women with pre-school children. Compared to women living in major cities, women in inner regional areas were more likely to use LARCs (OR=1.26, 95%CI 1.03-1.55) or permanent methods (OR=1.43, 95%CI 1.17-1.76). 
Women living in outer regional and remote areas were more likely than women living in cities to use LARCs (OR=1.65, 95%CI 1.31-2.08) or permanent methods (OR=1.69, 95%CI 1.43-2.14). Women with poorer access to GPs were more likely to use permanent methods (OR=1.27, 95%CI 1.07-1.52). Conclusions: Location of residence and access to health services are important factors in women's choices about long-acting contraception in addition to the number and age of their children. There is a low level of uptake of non-daily, long-acting methods of contraception among Australian women in their mid-thirties.
Abstract:
Diet Induced Thermogenesis (DIT) is the energy expended consequent to meal consumption, and reflects the energy required for the processing and digestion of food consumed throughout each day. Although DIT is the total energy expended across a day in digestive processes to a number of meals, most studies measure thermogenesis in response to a single meal (Meal Induced Thermogenesis: MIT) as a representation of an individual’s thermogenic response to acute food ingestion. As a component of energy expenditure, DIT may have a contributing role in weight gain and weight loss. While the evidence is inconsistent, research has tended to reveal a suppressed MIT response in obese compared to lean individuals, which identifies individuals with an efficient storage of food energy, hence a greater tendency for weight gain. Appetite is another factor regulating body weight through its influence on energy intake. Preliminary research has shown a potential link between MIT and postprandial appetite as both are responses to food ingestion and have a similar response dependent upon the macronutrient content of food. There is a growing interest in understanding how both MIT and appetite are modified with changes in diet, activity levels and body size. However, the findings from MIT research have been highly inconsistent, potentially due to the vastly divergent protocols used for its measurement. Therefore, the main theme of this thesis was firstly, to address some of the methodological issues associated with measuring MIT. Additionally this thesis aimed to measure postprandial appetite simultaneously to MIT to test for any relationships between these meal-induced variables and to assess changes that occur in MIT and postprandial appetite during periods of energy restriction (ER) and following weight loss. Two separate studies were conducted to achieve these aims. 
Based on the increasing prevalence of obesity, it is important to develop accurate methodologies for measuring the components potentially contributing to its development and to understand the variability within these variables. Therefore, the aim of Study One was to establish a protocol for measuring the thermogenic response to a single test meal (MIT), as a representation of DIT across a day. This was done by determining the reproducibility of MIT with a continuous measurement protocol and determining the effect of measurement duration. The benefit of a fixed resting metabolic rate (RMR), which is a single measure of RMR used to calculate each subsequent measure of MIT, compared to separate baseline RMRs, which are separate measures of RMR measured immediately prior to each MIT test meal to calculate each measure of MIT, was also assessed to determine the method with greater reproducibility. Subsidiary aims were to measure postprandial appetite simultaneously to MIT, to determine its reproducibility between days and to assess potential relationships between these two variables. Ten healthy individuals (5 males, 5 females, age = 30.2 ± 7.6 years, BMI = 22.3 ± 1.9 kg/m2, %Fat Mass = 27.6 ± 5.9%) undertook three testing sessions within a 1-4 week time period. During the first visit, participants had their body composition measured using DXA for descriptive purposes, then had an initial 30-minute measure of RMR to familiarise them with the testing and to be used as a fixed baseline for calculating MIT. During the second and third testing sessions, MIT was measured. Measures of RMR and MIT were undertaken using a metabolic cart with a ventilated hood to measure energy expenditure via indirect calorimetry with participants in a semi-reclined position. 
The procedure on each MIT test day was: 1) a baseline RMR measured for 30 minutes, 2) a 15-minute break in the measure to consume a standard 576 kcal breakfast (54.3% CHO, 14.3% PRO, 31.4% FAT), comprising muesli, milk toast, butter, jam and juice, and 3) six hours of measuring MIT with two, ten-minute breaks at 3 and 4.5 hours for participants to visit the bathroom. On the MIT test days, pre and post breakfast then at 45-minute intervals, participants rated their subjective appetite, alertness and comfort on visual analogue scales (VAS). Prior to each test, participants were required to be fasted for 12 hours, and have undertaken no high intensity physical activity for the previous 48 hours. Despite no significant group changes in the MIT response between days, individual variability was high with an average between-day CV of 33%, which was not significantly improved by the use of a fixed RMR to 31%. The 95% limits of agreements which ranged from 9.9% of energy intake (%EI) to -10.7%EI with the baseline RMRs and between 9.6%EI to -12.4%EI with the fixed RMR, indicated very large changes relative to the size of the average MIT response (MIT 1: 8.4%EI, 13.3%EI; MIT 2: 8.8%EI, 14.7%EI; baseline and fixed RMRs respectively). After just three hours, the between-day CV with the baseline RMR was 26%, which may indicate an enhanced MIT reproducibility with shorter measurement durations. On average, 76, 89, and 96% of the six-hour MIT response was completed within three, four and five hours, respectively. Strong correlations were found between MIT at each of these time points and the total six-hour MIT (range for correlations r = 0.990 to 0.998; P < 0.01). The reproducibility of the proportion of the six-hour MIT completed at 3, 4 and 5 hours was reproducible (between-day CVs ≤ 8.5%). This indicated the suitability to use shorter durations on repeated occasions and a similar percent of the total response to be completed. 
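The between-day CV figures quoted above are commonly computed per participant as the SD of the repeated measures expressed as a percentage of their mean, then averaged across participants; the exact formula used in the thesis is not restated here, so this is a sketch of that standard approach with invented data:

```python
# Minimal sketch (invented MIT values, %EI) of a between-day CV for a
# two-day test-retest design: per-participant CV = SD/mean * 100, averaged.
from statistics import mean, stdev

day1 = [8.1, 9.5, 7.2, 10.4, 6.8]   # MIT as %EI, test day 1 (hypothetical)
day2 = [10.9, 6.3, 9.8, 7.1, 9.2]   # MIT as %EI, test day 2 (hypothetical)

cvs = [stdev([a, b]) / mean([a, b]) * 100 for a, b in zip(day1, day2)]
print(f"mean between-day CV = {mean(cvs):.1f}%")
```

With paired data this pattern makes clear why a high CV (e.g. the 33% reported) signals poor individual reproducibility even when group means are stable.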
There was a lack of strong evidence of any relationship between the magnitude of the MIT response and subjective postprandial appetite. Given a six-hour protocol places a considerable burden on participants, these results suggest that a post-meal measurement period of only three hours is sufficient to produce valid information on the metabolic response to a meal. However, while there was no mean change in MIT between test days, individual variability was large. Further research is required to better understand which factors best explain the between-day variability in this physiological measure. With such a high prevalence of obesity, dieting has become a necessity to reduce body weight. However, during periods of ER, metabolic and appetite adaptations can occur which may impede weight loss. Understanding how metabolic and appetite factors change during ER and weight loss is important for designing optimal weight loss protocols. The purpose of Study Two was to measure the changes in the MIT response and subjective postprandial appetite during either continuous (CONT) or intermittent (INT) ER and following post-diet energy balance (post-diet EB). Thirty-six obese male participants were randomly assigned to either the CONT (Age = 38.6 ± 7.0 years, weight = 109.8 ± 9.2 kg, % fat mass = 38.2 ± 5.2%) or INT diet groups (Age = 39.1 ± 9.1 years, weight = 107.1 ± 12.5 kg, % fat mass = 39.6 ± 6.8%). The study was divided into three phases: a four-week baseline (BL) phase, where participants were provided with a diet to maintain body weight; an ER phase lasting either 16 (CONT) or 30 (INT) weeks, where participants were provided with a diet which supplied 67% of their energy balance requirements to induce weight loss; and an eight-week post-diet EB phase, providing a diet to maintain body weight post weight loss. The INT ER phase was delivered as eight, two-week blocks of ER interspersed with two-week blocks designed to achieve weight maintenance.
Energy requirements for each phase were predicted based on measured RMR, and adjusted throughout the study to account for changes in RMR. All participants completed MIT and appetite tests during BL and the ER phase. Nine CONT and 15 INT participants completed the post-diet EB MIT tests, and 14 INT and 15 CONT participants completed the post-diet EB appetite tests. The MIT test day protocol was as follows: 1) a baseline RMR measured for 30 minutes, 2) a 15-minute break in the measure to consume a standard breakfast meal (874 kcal, 53.3% CHO, 14.5% PRO, 32.2% FAT), and 3) three hours of measuring MIT. MIT was calculated as the energy expenditure above the pre-meal RMR. Appetite test days were undertaken on a separate day using the same 576 kcal breakfast used in Study One. VAS were used to assess appetite pre and post breakfast, at one hour post breakfast, then a further three times at 45-minute intervals. Appetite ratings were calculated for hunger and fullness as both the intra-meal change in appetite and the AUC. The three-hour MIT responses at BL, ER and post-diet EB respectively were 5.4 ± 1.4%EI, 5.1 ± 1.3%EI and 5.0 ± 0.8%EI for the CONT group and 4.4 ± 1.0%EI, 4.7 ± 1.0%EI and 4.8 ± 0.8%EI for the INT group. Compared to BL, neither group had significant changes in their MIT response during ER or post-diet EB. There were no significant time-by-group interactions (p = 0.17), indicating a similar response to ER and post-diet EB in both groups. Contrary to what was hypothesised, there was a significant increase in postprandial AUC fullness in response to ER in both groups (p < 0.05). However, there were no significant changes in any of the other postprandial hunger or fullness variables. Despite no changes in MIT in either the CONT or INT group in response to ER or post-diet EB, and only a minor increase in postprandial AUC fullness, the individual changes in MIT and postprandial appetite in response to ER were large.
However, those with the greatest MIT changes did not have the greatest changes in postprandial appetite. This study shows that postprandial appetite and MIT are unlikely to be altered during ER and are unlikely to hinder weight loss. Additionally, there were no changes in MIT in response to weight loss, indicating that body weight did not influence the magnitude of the MIT response. There were large individual changes in both variables; however, further research is required to determine whether these changes were real compensatory changes to ER or simply between-day variation. Overall, the results of this thesis add to the current literature by showing the large variability of continuous MIT measurements, which makes it difficult to compare MIT between groups and in response to diet interventions. This thesis was able to provide evidence to suggest that shorter measures may provide equally valid information about the total MIT response and can therefore be utilised in future research in order to reduce the burden of long measurement durations. This thesis indicates that MIT and postprandial subjective appetite are most likely independent of each other. This thesis also shows that, on average, energy restriction was not associated with compensatory changes in MIT and postprandial appetite that would have impeded weight loss. However, the large inter-individual variability supports the need to examine individual responses in more detail.
Resumo:
Background Surveillance programs and research for acute respiratory infections in remote Australian communities are complicated by difficulties in the storage and transport of frozen samples to urban laboratories for testing. This study assessed the sensitivity of a simple method for transporting nasal swabs from a remote setting for bacterial polymerase chain reaction (PCR) testing. Methods We sampled every individual who presented to a remote community clinic over a three-week period in August, at a time of low influenza and no respiratory syncytial virus activity. Two anterior nasal swabs were collected from each participant. The left-nare specimen was mailed to the laboratory via routine postal services; the right-nare specimen was transported frozen. Testing for six bacterial species was undertaken using real-time PCR. Results One hundred and forty participants were enrolled, contributing 150 study visits and paired specimens for testing. Respiratory illnesses accounted for 10% of the reasons for presentation. Bacteria were identified in 117 (78%) presentations for 110 (79.4%) individuals; Streptococcus pneumoniae and Haemophilus influenzae were the most common (each identified in 58% of episodes). The overall sensitivity for any bacterium detected in mailed specimens was 82.2% (95% CI 73.6, 88.1) compared to 94.8% (95% CI 89.4, 98.1) for frozen specimens. The sensitivity of the two methods varied by species identified. Conclusion The mailing of unfrozen nasal specimens from remote communities appears to reduce the utility of the specimens for bacterial studies, with a loss in sensitivity for the detection of any species overall. Further studies are needed to confirm our finding and to investigate the possible mechanisms of effect. Clinical trial registration Australia and New Zealand Clinical Trials Registry Number: ACTRN12609001006235. Keywords: Respiratory bacteria; RT-PCR; Specimen transport; Laboratory methods
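Sensitivity here is the proportion of reference-positive paired specimens in which a given transport method detected the bacterium. A generic sketch, with hypothetical counts and a Wald normal-approximation interval rather than whatever CI method the study actually used:

```python
import math

def sensitivity_with_ci(n_detected, n_reference_positive, z=1.96):
    """Sensitivity (detected / reference-positive) with a Wald
    normal-approximation 95% CI, clipped to [0, 1].
    The counts passed in below are illustrative, not study data."""
    p = n_detected / n_reference_positive
    se = math.sqrt(p * (1 - p) / n_reference_positive)
    return p, max(0.0, p - z * se), min(1.0, p + z * se)

# Hypothetical: 37 of 45 reference-positive pairs detected in mailed swabs.
p, lo, hi = sensitivity_with_ci(37, 45)
print(f"{100 * p:.1f}% (95% CI {100 * lo:.1f}, {100 * hi:.1f})")
```

For small denominators, a Wilson or exact binomial interval would be preferable to the Wald approximation sketched here; the choice of reference standard (frozen specimen vs. detection by either method) also changes the denominator.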
Resumo:
The giant freshwater prawn (Macrobrachium rosenbergii), or GFP, is one of the most important freshwater crustacean species in the inland aquaculture sector of many tropical and subtropical countries. Since the 1990s there has been rapid global expansion of freshwater prawn farming, especially in Asian countries, with an average annual rate of increase of 48% between 1999 and 2001 (New, 2005). In Vietnam, GFP is cultured in a variety of systems, typically in integrated or rotational rice-prawn culture (Phuong et al., 2006), and has become one of the most common farmed aquatic species in the country due to its rapid growth, high market price and high demand. Despite the potential for expanded production, the sustainability of freshwater prawn farming in the region is currently threatened by low production efficiency and the vulnerability of farmed stocks to disease. Commercial large-scale and small-scale GFP farms in Vietnam have experienced relatively low stock productivity, large size and weight variation, a low proportion of edible meat (large head-to-body ratio), and a scarcity of good-quality seed stock. This situation highlights the need for a systematic stock improvement program for GFP in Vietnam aimed at improving economically important traits in this species. This study reports on a breeding program for fast growth employing combined (between- and within-) family selection in giant freshwater prawn in Vietnam. The base population was synthesized using a complete diallel cross comprising nine crosses from two local stocks (DN and MK strains) and a third, exotic stock (Malaysian strain, MY). In the next three selection generations, matings were conducted between genetically unrelated broodstock to produce full-sib and (paternal) half-sib families. All families were produced and reared separately until the juveniles in each family were batch-tagged with visible implant elastomer (VIE) at a body size of approximately 2 g.
After tags were verified, 60 to 120 juveniles chosen randomly from each family were released into two common earthen ponds of 3,500 m2 each for a grow-out period of 16 to 18 weeks. Selection applied at harvest on body weight used a combined (between- and within-) family selection approach. In total, 81, 89, 96 and 114 families were produced for the Selection line in the F0, F1, F2 and F3 generations, respectively. In addition to the Selection line, 17 to 42 families were produced for the Control group in each generation. Results reported here are based on a data set consisting of 18,387 body and 1,730 carcass records, as well as full pedigree information collected over four generations. Variance and covariance components were estimated by restricted maximum likelihood, fitting a multi-trait animal model. Experiments assessing the performance of VIE tags in juvenile GFP of different size classes, and in individuals tagged with different numbers of tags, showed that juvenile GFP at 2 g were of a suitable size for VIE tagging, with no negative effects evident on growth or survival. Tag retention rates were above 97.8% and tag readability was 100%, with a correct assignment rate of 95% through to a mature animal size of up to 170 g. Across generations, estimates of heritability for body traits (body weight, body length, cephalothorax length, abdominal length, cephalothorax width and abdominal width) and carcass weight traits (abdominal weight, skeleton-off weight and telson-off weight) were moderate, ranging from 0.14 to 0.19 and 0.17 to 0.21, respectively. Body trait heritabilities estimated for females were significantly higher than for males, whereas carcass weight trait heritabilities for females and males were not significantly different (P > 0.05). Maternal and common environmental effects for body traits accounted for 4 to 5% of the total variance and were greater in females (7 to 10%) than in males (4 to 5%).
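Given REML estimates of the variance components, narrow-sense heritability reduces to a ratio of additive genetic variance to total phenotypic variance, with the maternal/common-environment term appearing in the denominator. A minimal sketch with illustrative component values, not the thesis estimates:

```python
def heritability(var_additive, var_common_env, var_residual):
    """Narrow-sense heritability h2 = V_A / (V_A + V_c + V_e):
    additive genetic variance over total phenotypic variance,
    where V_c is the maternal/common-environment component.
    The component values used below are illustrative only."""
    v_phenotypic = var_additive + var_common_env + var_residual
    return var_additive / v_phenotypic

# Components chosen so the total is 1.0, giving h2 = 0.17 directly.
print(round(heritability(0.17, 0.05, 0.78), 2))  # 0.17
```

Ignoring the common-environment component when families are reared separately before tagging (as here, up to about 2 g) would inflate the heritability estimate, which is why the model partitions it out explicitly.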
Genetic correlations among body traits were generally high in both sexes. Genetic correlations between body and carcass weight traits were also high in the mixed sexes. The average selection response (% per generation) for body weight (square-root transformed), estimated as the difference between the Selection line and the Control group, was 7.4% calculated from least-squares means (LSMs), 7.0% from estimated breeding values (EBVs), and 4.4% calculated from EBVs between two consecutive generations. Favourable correlated selection responses (estimated from LSMs) were detected for the other body traits (12.1%, 14.5%, 10.4%, 15.5% and 13.3% for body length, cephalothorax length, abdominal length, cephalothorax width and abdominal width, respectively) over three selection generations. Data in the second selection generation showed positive correlated responses for carcass weight traits (8.8%, 8.6% and 8.8% for abdominal weight, skeleton-off weight and telson-off weight, respectively). Data in the third selection generation showed that heritabilities for body traits were moderate, ranging from 0.06 to 0.11 and 0.11 to 0.22 at weeks 10 and 18, respectively. Body trait heritabilities estimated at week 10 were not significantly lower than at week 18. Genetic correlations between body traits within age, and for body traits between ages, were generally high. Overall, our results suggest that growth rate responds well to family selection and that carcass weight traits can be improved in parallel using this approach. Moreover, selection for high growth rate in GFP can be undertaken successfully before full market size has been reached. The outcome of this study was the production of an improved culture strain of GFP for the Vietnamese culture industry, which will be trialed in real farm production environments to confirm the genetic gains identified in the experimental stock improvement program.
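The per-generation selection response reported from LSMs is the percentage difference between the Selection-line and Control-group means on the analysed (square-root-transformed) scale. A sketch with illustrative means, not the thesis data:

```python
def selection_response_pct(lsm_selection, lsm_control):
    """Selection response per generation as the percentage difference
    between Selection-line and Control-group least-squares means,
    computed on whatever scale the trait was analysed on.
    The LSM values used below are illustrative only."""
    return 100.0 * (lsm_selection - lsm_control) / lsm_control

# Hypothetical LSMs of sqrt(body weight): Selection 5.37 vs Control 5.00.
print(round(selection_response_pct(5.37, 5.00), 1))  # 7.4
```

Expressing the response relative to a contemporaneous Control group, rather than to the previous generation, separates genetic gain from environmental trend across grow-out seasons, which is why the EBV-based estimate between consecutive generations (4.4%) differs from the Control-referenced estimates.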