Abstract:
It is frequently reported that the actual weight loss achieved through exercise interventions is less than theoretically expected. Amongst other compensatory adjustments that accompany exercise training (e.g., increases in resting metabolic rate and energy intake), a possible cause of the less than expected weight loss is a failure to produce a marked increase in total daily energy expenditure due to a compensatory reduction in non-exercise activity thermogenesis (NEAT). Therefore, there is a need to understand how behaviour is modified in response to exercise interventions. The proposed benefits of exercise training are numerous, including changes to fat oxidation. Given that a diminished capacity to oxidise fat could be a factor in the aetiology of obesity, an exercise training intensity that optimises fat oxidation in overweight/obese individuals would improve impaired fat oxidation, and potentially reduce health risks that are associated with obesity. To improve our understanding of the effectiveness of exercise for weight management, it is important to ensure exercise intensity is appropriately prescribed, and to identify and monitor potential compensatory behavioural changes consequent to exercise training. In line with the gaps in the literature, three studies were performed. The aim of Study 1 was to determine the effect of acute bouts of moderate- and high-intensity walking exercise on NEAT in overweight and obese men. Sixteen participants performed a single bout of either moderate-intensity walking exercise (MIE) or high-intensity walking exercise (HIE) on two separate occasions. The MIE consisted of walking for 60-min on a motorised treadmill at 6 km.h-1. The 60-min HIE session consisted of walking in 5-min intervals at 6 km.h-1 and 10% grade followed by 5-min at 0% grade. NEAT was assessed by accelerometer three days before, on the day of, and three days after the exercise sessions. 
There was no significant difference in NEAT vector magnitude (counts.min-1) between the pre-exercise period (days 1-3) and the exercise day (day 4) for either protocol. In addition, there was no change in NEAT during the three days following the MIE session, although NEAT was 16% higher on day 7 (post-exercise) than on the exercise day (P = 0.32). During the post-exercise period following the HIE session, NEAT was increased by 25% on day 7 compared with the exercise day (P = 0.08), and by 30-33% compared with the pre-exercise period (days 1, 2 and 3; P = 0.03, 0.03 and 0.02, respectively). To conclude, a single bout of either MIE or HIE did not alter NEAT on the exercise day or on the first two days following the exercise session. However, extending the monitoring of NEAT allowed the detection of a 48-hour delay in increased NEAT after performing HIE. A longer-term intervention is needed to determine the effect of accumulated exercise sessions over a week on NEAT. Study 2 had two primary aims. The first was to test the reliability of a discontinuous incremental exercise protocol (DISCON-FATmax) for identifying the workload at which fat oxidation is maximised (FATmax). Ten overweight and obese sedentary men (mean BMI 29.5 ± 4.5 kg/m2; mean age 28.0 ± 5.3 y) participated in this study and performed two identical DISCON-FATmax tests one week apart. Each test consisted of alternating 4-min exercise and 2-min rest intervals on a cycle ergometer. The starting workload of 28 W was increased by 14 W every 4 min, with each increment followed by a 2-min rest interval. When the respiratory exchange ratio was consistently >1.0, the workload was increased by 14 W every 2 min until volitional exhaustion. Fat oxidation was measured by indirect calorimetry. 
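The abstract does not state which stoichiometric equations were used to convert the indirect calorimetry gas-exchange data into fat oxidation rates; a common choice in this literature is Frayn's (1983) equations, sketched below (the function names and example values are illustrative, and protein oxidation is assumed negligible):

```python
def fat_oxidation(vo2_l_min: float, vco2_l_min: float) -> float:
    """Fat oxidation (g/min) via Frayn (1983), neglecting protein oxidation."""
    return 1.67 * vo2_l_min - 1.67 * vco2_l_min

def cho_oxidation(vo2_l_min: float, vco2_l_min: float) -> float:
    """Carbohydrate oxidation (g/min) via Frayn (1983)."""
    return 4.55 * vco2_l_min - 3.21 * vo2_l_min

# Illustrative values only: VO2 = 1.20 L/min, VCO2 = 1.05 L/min (RER = 0.875)
fat = fat_oxidation(1.20, 1.05)  # ≈ 0.25 g/min for these values
```

At a respiratory exchange ratio of 1.0 the fat term goes to zero, which is why the protocol switches to fixed 2-min increments once RER is consistently above 1.0.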
The mean FATmax, V̇O2peak, %V̇O2peak and %Wmax at which FATmax occurred during the two tests were 0.23 ± 0.09 and 0.18 ± 0.08 g.min-1; 29.7 ± 7.8 and 28.3 ± 7.5 ml.kg-1.min-1; 42.3 ± 7.2 and 42.6 ± 10.2 %V̇O2peak; and 36.4 ± 8.5 and 35.4 ± 10.9%, respectively. A paired-samples t-test revealed a significant difference in FATmax (g.min-1) between the tests (t = 2.65, P = 0.03). The mean difference in FATmax was 0.05 g.min-1, with the 95% confidence interval ranging from 0.01 to 0.18. A paired-samples t-test, however, revealed no significant difference in the workloads (W) between the tests, t(9) = 0.70, P = 0.4. The intra-class correlation coefficient for FATmax (g.min-1) between the tests was 0.84 (95% confidence interval: 0.36-0.96, P < 0.01). However, Bland-Altman analysis revealed a large disagreement between the two tests in the workload corresponding with FATmax: 11 ± 14 W (4.1 ± 5.3 %V̇O2peak). These data demonstrate two important phenomena associated with exercise-induced substrate oxidation: firstly, maximal fat oxidation derived from a discontinuous FATmax protocol differed statistically between repeated tests; and secondly, there was large variability in the workload corresponding with FATmax. The second aim of Study 2 was to test the validity of the DISCON-FATmax protocol by comparing maximal fat oxidation (g.min-1) determined by DISCON-FATmax with fat oxidation (g.min-1) during a continuous constant-load exercise protocol (CONEX). Ten overweight and obese sedentary males (BMI = 29.5 ± 4.5 kg/m2; age = 28.0 ± 4.5 y) with a V̇O2max of 29.1 ± 7.5 ml.kg-1.min-1 performed a DISCON-FATmax test consisting of alternating 4-min exercise and 2-min rest intervals on a cycle ergometer. The 1-h CONEX protocol was performed at the FATmax workload identified by the DISCON-FATmax test. 
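The Bland-Altman limits of agreement reported above are derived from the between-test differences; a minimal sketch of the calculation, using hypothetical workloads rather than the study's data:

```python
import statistics

def bland_altman(test1, test2):
    """Mean bias and 95% limits of agreement between two repeated measures."""
    diffs = [a - b for a, b in zip(test1, test2)]
    bias = statistics.mean(diffs)
    sd = statistics.stdev(diffs)  # sample SD of the differences
    return bias, bias - 1.96 * sd, bias + 1.96 * sd

# Hypothetical FATmax workloads (W) from two repeated tests
bias, lower, upper = bland_altman([50, 64, 70, 84], [45, 62, 65, 85])
```

A high intra-class correlation can coexist with wide limits of agreement, which is exactly the pattern the abstract reports: the ICC of 0.84 summarises relative consistency, while the Bland-Altman spread exposes the absolute test-to-test disagreement.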
The mean FATmax, V̇O2max, %V̇O2max and workload at which FATmax occurred during the DISCON-FATmax were 0.23 ± 0.09 g.min-1; 29.1 ± 7.5 ml.kg-1.min-1; 43.8 ± 7.3 %V̇O2max; and 58.8 ± 19.6 W, respectively. The mean fat oxidation during the 1-h CONEX protocol was 0.19 ± 0.07 g.min-1. A paired-samples t-test revealed no significant difference in fat oxidation (g.min-1) between DISCON-FATmax and CONEX, t(9) = 1.85, P = 0.097 (two-tailed). There was also no significant correlation in fat oxidation between the DISCON-FATmax and CONEX (r = 0.51, P = 0.14). Bland-Altman analysis revealed a large disagreement in fat oxidation between the DISCON-FATmax and CONEX; the upper limit of agreement was 0.13 g.min-1 and the lower limit was −0.03 g.min-1. These data suggest that the CONEX and DISCON-FATmax protocols did not elicit different rates of fat oxidation (g.min-1). However, the individual variability in fat oxidation was large, particularly in the DISCON-FATmax test. Further research is needed to ascertain the validity of graded exercise tests for predicting fat oxidation during constant-load exercise sessions. The aim of Study 3 was to compare the impact of four weeks of exercise training at two different intensities on fat oxidation, NEAT, and appetite in overweight and obese men. Using a cross-over design, 11 participants (BMI = 29 ± 4 kg/m2; age = 27 ± 4 y) were initially randomly assigned to either [1] low-intensity exercise (LIT; 45% V̇O2max) or [2] high-intensity interval exercise (HIIT; alternating 30 s at 90% V̇O2max with 30 s rest), each session of 40-min duration, performed three times a week. Participants completed four weeks of supervised training in each arm, separated by a two-week washout period at cross-over. At baseline and at the end of each exercise intervention, V̇O2max, fat oxidation, and NEAT were measured. Fat oxidation was determined during a standard 30-min continuous exercise bout at 45% V̇O2max. 
During the steady-state exercise, expired gases were measured intermittently for 5-min periods and heart rate was monitored continuously. In each training period, NEAT was measured for seven consecutive days using an accelerometer (RT3) in the week before training, at week 3, and in the week after training. Subjective appetite sensations and food preferences were measured immediately before and after the first exercise session of each week for four weeks during both LIT and HIIT. The mean fat oxidation rate during the standard continuous exercise bout at baseline for both LIT and HIIT was 0.14 ± 0.08 g.min-1. After four weeks of exercise training, the mean fat oxidation was 0.178 ± 0.04 and 0.183 ± 0.04 g.min-1 for LIT and HIIT, respectively. The mean NEAT (counts.min-1) was 45 ± 18 at baseline, 55 ± 22 and 44 ± 16 during training, and 51 ± 14 and 50 ± 21 after training for LIT and HIIT, respectively. There was no significant difference in fat oxidation between LIT and HIIT. Moreover, although not statistically significant, there was some evidence to suggest that both LIT and HIIT tended to increase fat oxidation during exercise at 45% V̇O2max (P = 0.14 and 0.08, respectively). The order of training treatment did not significantly influence changes in fat oxidation, NEAT, or appetite. NEAT (counts.min-1) was not significantly different in the week following training for either LIT or HIIT. Although not statistically significant (P = 0.08), NEAT was 20% lower during week 3 of exercise training in HIIT compared with LIT. Examination of appetite sensations revealed differences in the intensity of hunger, with higher ratings after LIT compared with HIIT. No differences were found in preferences for high-fat sweet foods between LIT and HIIT. 
In conclusion, the results of this thesis suggest that while fat oxidation during steady-state exercise was not affected by the level of exercise intensity, intense exercise may have a suppressive effect on NEAT.
Abstract:
Nineteen studies met the inclusion criteria. A skin temperature reduction of 5–15 °C, in accordance with the recent PRICE (Protection, Rest, Ice, Compression and Elevation) guidelines, was achieved using cold air, ice massage, crushed ice, cryotherapy cuffs, ice packs, and cold water immersion. There is evidence supporting the use and effectiveness of thermal imaging to assess skin temperature following the application of cryotherapy. Thermal imaging is a safe and non-invasive method of collecting skin temperature data. Although further research is required, in terms of structuring specific guidelines and protocols, thermal imaging appears to be an accurate and reliable method of collecting skin temperature data following cryotherapy. Currently there is ambiguity regarding the optimal skin temperature reductions in medical or sporting settings. However, this review highlights the ability of several different cryotherapy modalities to reduce skin temperature.
Abstract:
We report three developments toward resolving the challenge of the apparent basal polytomy of neoavian birds. First, we describe improved conditional down-weighting techniques to reduce noise relative to signal for deeper divergences, and find increased agreement between data sets. Second, we present formulae for calculating the probabilities of finding predefined groupings in the optimal tree. Finally, we report a significant increase in data: nine new mitochondrial (mt) genomes (the dollarbird, New Zealand kingfisher, great potoo, Australian owlet-nightjar, white-tailed trogon, barn owl, a roadrunner [a ground cuckoo], New Zealand long-tailed cuckoo, and the peach-faced lovebird), which together provide data for each of the six main groups of Neoaves proposed by Cracraft (2001). We use his six main groups of modern birds as priors for the evaluation of results. These include passerines, cuckoos, parrots, and three other groups termed “WoodKing” (woodpeckers/rollers/kingfishers), “SCA” (owls/potoos/owlet-nightjars/hummingbirds/swifts), and “Conglomerati.” In general, the support is highly significant, with just two exceptions: the owls move from the “SCA” group to the raptors, particularly the accipitrids (buzzards/eagles) and the osprey, and the shorebirds may be an independent group from the rest of the “Conglomerati”. Molecular dating of mt genomes supports a major diversification of at least 12 neoavian lineages in the Late Cretaceous. Our results form a basis for further testing with both nuclear-coding sequences and rare genomic changes.
Abstract:
Internal autopsies are invasive and result in the mutilation of the deceased person’s body. They are expensive and pose occupational health and safety risks. Accordingly, they should only be done for good cause. However, until recently, “full” internal autopsies have usually been undertaken in most coroners’ cases. There is a growing trend against this practice but it is meeting resistance from some pathologists who argue that any decision as to the extent of the autopsy should rest with them. This paper examines the origins of the coronial system to place in context the current approach to a death investigation and to review the debate about the role of an internal autopsy in the coronial system.
Abstract:
The regulatory pathways involved in maintaining the pluripotency of embryonic stem cells are partially known, whereas the regulatory pathways governing adult stem cells and their "stem-ness" are characterized to an even lesser extent. We therefore screened the transcriptome profiles of 20 osteogenically induced adult human adipose-derived stem cell (ADSC) populations to identify putative transcription factors that could regulate the osteogenic differentiation of these ADSC. We studied a subgroup of donor samples whose osteogenic response transcriptome was disparate from that of induced human fetal osteoblasts and from the rest of the induced human ADSC samples. From our statistical analysis, we found activating transcription factor 5 (ATF5) to be significantly and consistently down-regulated in a randomized time-course study of osteogenically differentiated adipose-derived stem cells from human donor samples. Knockdown of ATF5 with siRNA showed an increased sensitivity to osteogenic induction. This evidence suggests a role for ATF5 in the regulation of osteogenic differentiation in adipose-derived stem cells. To our knowledge, this is the first report indicating a novel role of transcription factors in regulating osteogenic differentiation in adult or tissue-specific stem cells. © 2012 Wiley Periodicals, Inc.
Abstract:
Introduction: The ability to regulate joint stiffness and coordinate movement during landing when impaired by muscle fatigue has important implications for knee function. Unfortunately, the literature examining fatigue effects on landing mechanics suffers from a lack of consensus. Inconsistent results can be attributed to variable fatigue models, as well as to the grouping of variable responses between individuals when statistically detecting differences between conditions. There remains a need to examine fatigue effects on knee function during landing with attention to these methodological limitations. Aim: The purpose of this study, therefore, was to examine the effects of isokinetic fatigue on pre-impact muscle activity and post-impact knee mechanics during landing using single-subject analysis. Methodology: Sixteen male university students (22.6 ± 3.2 y; 1.78 ± 0.07 m; 75.7 ± 6.3 kg) performed maximal concentric and eccentric knee extensions in a reciprocal manner on an isokinetic dynamometer, and step-landing trials, on two occasions. On the first occasion each participant performed 20 step-landing trials from a knee-high platform followed by 75 maximal contractions on the isokinetic dynamometer. The isokinetic data were used to calculate the operational definition of fatigue. On the second occasion, after a minimum rest of 14 days, participants performed 2 sets of 20 step-landing trials, followed by isokinetic exercise until the operational definition of fatigue was met, and a final post-fatigue set of 20 step-landing trials. Results: Single-subject analyses revealed that isokinetic fatigue of the quadriceps induced variable responses in pre-impact activation of the knee extensors and flexors (frequency, onset timing and amplitude) and in post-impact knee mechanics (stiffness and coordination). In general, however, isokinetic fatigue induced significant (p < 0.05) reductions in quadriceps activation frequency, delayed onset and increased amplitude. 
In addition, knee stiffness was significantly (p < 0.05) increased in some individuals, and sagittal coordination was impaired. Conclusions: Pre-impact activation and post-impact mechanics were adjusted in patterns that were unique to the individual, which could not be identified using traditional group-based statistical analysis. The results suggested that individuals optimised knee function differently to satisfy competing demands, such as minimising energy expenditure while maximising joint stability and sensory information.
Abstract:
Introduction: Evidence concerning the alteration of knee function during landing suffers from a lack of consensus. This uncertainty can be attributed to methodological flaws, particularly in relation to the statistical analysis of variable human movement data. Aim: The aim of this study was to compare single-subject and group analysis in quantifying alterations in the magnitude and within-participant variability of knee mechanics during a step-landing task. Methods: A group of healthy men (N = 12) stepped down from a knee-high platform for 60 consecutive trials, with each trial separated by a 1-minute rest. The magnitude and within-participant variability of sagittal knee stiffness and coordination of the landing leg during the immediate post-impact period were evaluated. Coordination of the knee was quantified in the sagittal plane by calculating the mean absolute relative phase between sagittal shank and thigh motion (MARP1) and between knee rotation and knee flexion (MARP2). Changes across trials were compared between group and single-subject statistical analyses. Results: The group analysis detected significant reductions in MARP1 magnitude. However, the single-subject analyses detected changes in all dependent variables, including increases in variability with task repetition. Between-individual variation was also present in the timing, size and direction of alterations with task repetition. Conclusion: The results have important implications for the interpretation of existing information regarding the adaptation of knee mechanics to interventions such as fatigue, footwear or landing height. It is proposed that a familiarisation session be incorporated on a single-subject basis prior to an intervention in future experiments.
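Mean absolute relative phase is conventionally computed from the phase-portrait angle of each segment (angle plotted against angular velocity); a minimal sketch under that assumption — the study's exact normalisation of angle and angular velocity is not given in the abstract and is omitted here:

```python
import math

def phase_angle(angle: float, velocity: float) -> float:
    """Phase-portrait angle (degrees) from a segment's angle and angular velocity."""
    return math.degrees(math.atan2(velocity, angle))

def marp(angles_a, vels_a, angles_b, vels_b) -> float:
    """Mean absolute relative phase between two segments over a trial."""
    rel = [phase_angle(aa, va) - phase_angle(ab, vb)
           for aa, va, ab, vb in zip(angles_a, vels_a, angles_b, vels_b)]
    return sum(abs(r) for r in rel) / len(rel)

# Two segments moving perfectly in phase give MARP = 0 (in-phase coordination);
# larger values indicate increasingly out-of-phase motion.
in_phase = marp([1, 0, -1], [0, 1, 0], [1, 0, -1], [0, 1, 0])
```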
Abstract:
Purpose To compare self-reported driving ability with objective measures of on-road driving performance in a large cohort of older drivers. Methods 270 community-living adults aged 70–88 years, recruited via the electoral roll, completed a standardized assessment of on-road driving performance and questionnaires determining perceptions of their own driving ability, confidence and driving difficulties. Retrospective self-reported crash data over the previous five years were recorded. Results Participants reported difficulty with only selected driving situations, including driving into the sun, in unfamiliar areas, in wet conditions, and at night or dusk. The majority of participants rated their own driving as good to excellent. Of the 47 drivers (17%) who were rated as potentially unsafe to drive, 66% rated their own driving as good to excellent. Drivers who made critical errors, where the driving instructor had to take control of the vehicle, had no lower self-rating of driving ability than the rest of the group. The discrepancy between self-perceptions of driving and participants’ safety rating on the on-road assessment was significantly associated with self-reported retrospective crash rates: drivers who displayed greater overconfidence in their own driving were significantly more likely to report a crash. Conclusions This study demonstrates that older drivers with the greatest mismatch between actual and self-rated driving ability pose the greatest risk to road safety. Licensing authorities should therefore not assume that when older individuals’ driving abilities begin to decline they will necessarily be aware of these changes and adopt appropriate compensatory driving behaviours; rather, it is essential that evidence-based assessments are adopted.
Abstract:
Objectives This prospective study investigated the effects of caffeine ingestion on the extent of adenosine-induced perfusion abnormalities during myocardial perfusion imaging (MPI). Methods Thirty patients with inducible perfusion abnormalities on standard (caffeine-abstinent) adenosine MPI underwent repeat testing with supplementary coffee intake. Baseline and test MPIs were assessed for stress percent defect, rest percent defect, and percent defect reversibility. Plasma levels of caffeine and metabolites were assessed on both occasions and correlated with MPI findings. Results Despite significant increases in caffeine [mean difference 3,106 μg/L (95% CI 2,460 to 3,752 μg/L; P < .001)] and metabolite concentrations over a wide range, there was no statistically significant change in stress percent defect and percent defect reversibility between the baseline and test scans. The increase in caffeine concentration between the baseline and the test phases did not affect percent defect reversibility (average change −0.003 for every 100 μg/L increase; 95% CI −0.17 to 0.16; P = .97). Conclusion There was no significant relationship between the extent of adenosine-induced coronary flow heterogeneity and the serum concentration of caffeine or its principal metabolites. Hence, the stringent requirements for prolonged abstinence from caffeine before adenosine MPI—based on limited studies—appear ill-founded.
Abstract:
Introduction and Methods: This study compared changes in myokine and myogenic genes following resistance exercise (3 sets of 12 repetitions of maximal unilateral knee extension) in 20 elderly men (67.8 ± 1.0 years) and 15 elderly women (67.2 ± 1.5 years). Results: Monocyte chemotactic protein (MCP)-1, macrophage inhibitory protein (MIP)-1β, interleukin (IL)-6 and MyoD mRNA increased significantly (P < 0.05), whereas myogenin and myostatin mRNA decreased significantly after exercise in both groups. Macrophage-1 (Mac-1) and MCP-3 mRNA did not change significantly after exercise in either group. MIP-1β, Mac-1 and myostatin mRNA were significantly higher before and after exercise in men compared with women. In contrast, MCP-3 and myogenin mRNA were significantly higher before and after exercise in the women compared with the men. Conclusions: In elderly individuals, gender influences the mRNA expression of certain myokines and growth factors, both at rest and after resistance exercise. These differences may influence muscle regeneration following muscle injury.
Abstract:
We investigated the effect of hydrotherapy on time-trial performance and cardiac parasympathetic reactivation during recovery from intense training. On three occasions, 18 well-trained cyclists completed 60 min high-intensity cycling, followed 20 min later by one of three 10-min recovery interventions: passive rest (PAS), cold water immersion (CWI), or contrast water immersion (CWT). The cyclists then rested quietly for 160 min with R-R intervals and perceptions of recovery recorded every 30 min. Cardiac parasympathetic activity was evaluated using the natural logarithm of the square root of mean squared differences of successive R-R intervals (ln rMSSD). Finally, the cyclists completed a work-based cycling time trial. Effects were examined using magnitude-based inferences. Differences in time-trial performance between the three trials were trivial. Compared with PAS, general fatigue was very likely lower for CWI (difference [90% confidence limits; -12% (-18; -5)]) and CWT [-11% (-19; -2)]. Leg soreness was almost certainly lower following CWI [-22% (-30; -14)] and CWT [-27% (-37; -15)]. The change in mean ln rMSSD following the recovery interventions (ln rMSSD(Post-interv)) was almost certainly higher following CWI [16.0% (10.4; 23.2)] and very likely higher following CWT [12.5% (5.5; 20.0)] compared with PAS, and possibly higher following CWI [3.7% (-0.9; 8.4)] compared with CWT. The correlations between performance, ln rMSSD(Post-interv) and perceptions of recovery were unclear. A moderate correlation was observed between ln rMSSD(Post-interv) and leg soreness [r = -0.50 (-0.66; -0.29)]. Although the effects of CWI and CWT on performance were trivial, the beneficial effects on perceptions of recovery support the use of these recovery strategies.
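The vagal index used above (ln rMSSD) can be reproduced directly from a series of R-R intervals; a minimal sketch, using illustrative intervals rather than the study's data:

```python
import math

def ln_rmssd(rr_intervals_ms):
    """Natural log of the root-mean-square of successive R-R interval differences (ms)."""
    diffs = [b - a for a, b in zip(rr_intervals_ms, rr_intervals_ms[1:])]
    rmssd = math.sqrt(sum(d * d for d in diffs) / len(diffs))
    return math.log(rmssd)

# Hypothetical 4-beat series (ms); greater beat-to-beat variability (higher
# parasympathetic activity) yields a higher ln rMSSD.
value = ln_rmssd([800, 810, 790, 805])
```

The log transform is applied because rMSSD is strongly right-skewed across individuals, which is why studies of this kind report the logged value.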
Abstract:
Virtual worlds (VWs) continue to be used extensively in Australia and New Zealand higher education institutions although the tendency towards making unrealistic claims of efficacy and popularity appears to be over. Some educators at higher education institutions continue to use VWs in the same way as they have done in the past; others are exploring a range of different VWs or using them in new ways; whilst some are opting out altogether. This paper presents an overview of how 46 educators from some 26 institutions see VWs as an opportunity to sustain higher education. The positives and negatives of using VWs are discussed.
Abstract:
Efficient management of domestic wastewater is a primary requirement for human well-being. Failure to adequately address issues of wastewater collection, treatment and disposal can lead to adverse public health and environmental impacts. The increasing spread of urbanisation has led to the conversion of previously rural land into urban developments and the more intensive development of semi-urban areas. However, the provision of reticulated sewerage facilities has not kept pace with this expansion in urbanisation. This has resulted in a growing dependency on onsite sewage treatment. Though considered only a temporary measure in the past, these systems are now regarded as the most cost-effective option and have become a permanent feature in some urban areas. This report is the first of a series to be produced and is the outcome of a research project initiated by the Brisbane City Council. The primary objective of the research was to relate the treatment performance of onsite sewage treatment systems to soil conditions at the site, with the emphasis on septic tanks. This report consists of a state-of-the-art review of research undertaken in the arena of onsite sewage treatment. The evaluation brings together significant work undertaken locally and overseas. It focuses mainly on septic tanks, in keeping with the primary objectives of the project, and has acted as the springboard for the later field investigations and analysis undertaken as part of the project. Septic tanks continue to be used widely due to their simplicity and low cost. Generally, the treatment performance of septic tanks can be highly variable due to numerous factors, but a properly designed, operated and maintained septic tank can produce effluent of satisfactory quality. 
The reduction of hydraulic surges from washing machines and dishwashers, the regular removal of accumulated septage and the elimination of harmful chemicals are some of the practices that can improve system performance considerably. The relative advantage of multi-chamber over single-chamber septic tanks is an issue that needs to be resolved in view of the conflicting research outcomes. In recent years, aerobic wastewater treatment systems (AWTS) have been gaining in popularity. This can be attributed mainly to the desire to avoid subsurface effluent disposal, which is the main cause of septic tank failure. The use of aerobic processes for the treatment of wastewater, together with the disinfection of effluent prior to disposal, is capable of producing effluent of a quality suitable for surface disposal. However, the field performance of these systems has been disappointing. A significant number do not perform to stipulated standards, and effluent quality can be highly variable. This is primarily due to householder neglect or ignorance of correct operational and maintenance procedures. Other problems include greater susceptibility to shock loadings and sludge bulking. As identified in the literature, a number of design features can also contribute to this wide variation in quality. The other treatment processes in common use are the various types of filter systems, including intermittent and recirculating sand filters. These systems too have their inherent advantages and disadvantages, and, as in the case of aerobic systems, their performance is very much dependent on individual householder operation and maintenance practices. In recent years the use of biofilters, particularly peat, has attracted research interest. High removal rates of various wastewater pollutants have been reported in the research literature. Despite these satisfactory results, leachate from peat has been reported in various studies. 
This is an issue that needs further investigation, and as such biofilters can still be considered to be in the experimental stage. The use of other filter media, such as absorbent plastic and bark, has also been reported in the literature. The safe and hygienic disposal of treated effluent is a matter of concern in the case of onsite sewage treatment. Subsurface disposal is the most common option, and the only one in the case of septic tank treatment. Soil is an excellent treatment medium if suitable conditions are present. The processes of sorption, filtration and oxidation can remove the various wastewater pollutants. The subsurface characteristics of the disposal area are among the most important parameters governing process performance. It is therefore important that soil and topographic conditions are taken into consideration in the design of the soil absorption system. Seepage trenches and beds are the common systems in use. Seepage pits or chambers can be used where subsurface conditions warrant, whilst above-grade mounds have been recommended for a variety of difficult site conditions. All these systems have their inherent advantages and disadvantages, and the preferred soil absorption system should be selected based on site characteristics. The use of gravel as in-fill for beds and trenches is open to question. It does not contribute to effluent treatment and has been shown to reduce the effective infiltrative surface area, due to physical obstruction and the migration of fines entrained in the gravel into the soil matrix. The surface application of effluent is coming into increasing use with the advent of aerobic treatment systems. This has the advantage that treatment is undertaken in the upper soil horizons, which are chemically and biologically the most effective in effluent renovation. Numerous research studies have demonstrated the feasibility of this practice. However, the overriding criterion is the quality of the effluent. 
It has to be of exceptionally good quality in order to ensure that there are no resulting public health impacts due to aerosol drift. This is essentially the main issue of concern, due to the unreliability of the effluent quality from aerobic systems. Secondly, it has also been found that most householders do not take adequate care in the operation of spray irrigation systems or in the maintenance of the irrigation area. Under these circumstances, surface disposal of effluent should be approached with caution and would require appropriate householder education and stringent compliance requirements. Despite all this, however, the efficiency with which the process is undertaken will ultimately rest with the individual householder, and this is where most concern lies. Greywater requires similar consideration. Surface irrigation of greywater is currently permitted in a number of local authority jurisdictions in Queensland. Considering that greywater constitutes the largest fraction of the total wastewater generated in a household, it could be considered a potential resource. Unfortunately, in most circumstances the only pretreatment required prior to reuse is the removal of oil and grease. This is a concern, as greywater can be considered a weak-to-medium-strength sewage: it contains primary pollutants such as BOD material and nutrients, and may also include microbial contamination. Its use for surface irrigation can therefore pose a potential health risk. This is further compounded by the fact that most householders are unaware of the potential adverse impacts of indiscriminate greywater reuse. As in the case of blackwater effluent reuse, there have been suggestions that greywater should also be subject to stringent guidelines. Under these circumstances, the surface application of any wastewater requires careful consideration. 
The other option available for the disposal of effluent is the use of evaporation systems. The use of evapotranspiration systems has been covered in this report. Research has shown that these systems are susceptible to a number of factors, and in particular to climatic conditions; as such, their applicability is location specific. The design of systems based solely on evapotranspiration is also questionable. To ensure greater reliability, such systems should be designed to include soil absorption. The successful use of these systems for intermittent usage has been noted in the literature. Taking into consideration the issues discussed above, subsurface disposal of effluent is the safest under most conditions, provided the facility has been designed to accommodate site conditions. The main problem associated with subsurface disposal is the formation of a clogging mat on the infiltrative surfaces. Once the clogging mat forms, the capacity of the soil to handle effluent is no longer governed by the soil’s hydraulic conductivity as measured by the percolation test, but rather by the infiltration rate through the clogged zone. The characteristics of the clogging mat have been shown to be influenced by various soil and effluent characteristics, and the mechanisms of its formation by various physical, chemical and biological processes. Biological clogging is the most common process; it occurs when bacterial growth or its by-products reduce the soil pore diameters, and it is generally associated with anaerobic conditions. The formation of the clogging mat also provides significant benefits. It acts as an efficient filter for the removal of microorganisms. Furthermore, as the clogging mat increases the hydraulic impedance to flow, unsaturated flow conditions will occur below the mat. This permits greater contact between effluent and soil particles, thereby enhancing the purification process.
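The shift of control from the soil to the clogged zone can be illustrated with a standard Darcy-type expression for flow through the mat. The symbols below are generic illustrations, not notation taken from the report:

```latex
q = K_c \,\frac{h_p + Z_c}{Z_c}
```

where \(q\) is the infiltration rate through the clogging mat, \(K_c\) and \(Z_c\) are the mat's hydraulic conductivity and thickness, and \(h_p\) is the ponded head above it. Because \(K_c\) is far smaller than the conductivity of the underlying soil, this expression, rather than the soil's measured percolation rate, sets the long-term effluent acceptance rate, which is also why the soil beneath the mat remains unsaturated.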
This is particularly important in the case of highly permeable soils. However, the adverse impacts of clogging mat formation cannot be ignored, as they can lead to a significant reduction in the infiltration rate. This is in fact the most common cause of failure of soil absorption systems. As the formation of the clogging mat is inevitable, it is important to ensure that it does not impede effluent infiltration beyond tolerable limits. Various strategies have been investigated either to control clogging mat formation or to remediate its severity. Intermittent dosing of effluent is one such strategy that has attracted considerable attention, although research conclusions regarding short rest intervals are contradictory. It has been claimed that intermittent rest periods result in aerobic decomposition of the clogging mat, leading to a subsequent increase in the infiltration rate. Contrary to this, it has also been claimed that short rest periods are insufficient to completely decompose the clogging mat, and that the intermediate by-products formed by aerobic processes in fact lead to even more severe clogging. It has further been recommended that the rest periods should be much longer, in the range of about six months, which entails the provision of a second, alternating seepage bed. Other concepts that have been investigated include designing the bed to meet the equilibrium infiltration rate that eventuates after clogging mat formation; improved geometry, such as the use of seepage trenches instead of beds; serial rather than parallel effluent distribution; and low-pressure dosing of effluent. Physical measures such as oxidation with hydrogen peroxide and replacement of the infiltration surface have been shown to be of only short-term benefit.
Another issue of importance is the degree of pretreatment that should be provided to the effluent prior to subsurface application, and the influence exerted by pollutant loadings on clogging mat formation. Laboratory studies have shown that the total mass loadings of BOD and suspended solids are important factors in the formation of the clogging mat, as is the nature of the suspended solids. The finer particles from extended aeration systems, compared with those from septic tanks, penetrate deeper into the soil and hence ultimately form a denser clogging mat. However, the importance of improved pretreatment in clogging mat formation may need to be qualified in view of other research studies, which have shown that effluent quality may be a factor in highly permeable soils but not necessarily in fine-structured soils. The ultimate test of onsite sewage treatment system efficiency rests with the final disposal of effluent. The implications of system failure, as evidenced by the surface ponding of effluent or the seepage of contaminants into the groundwater, can be very serious, leading to environmental and public health impacts. Significant microbial contamination of surface water and groundwater has been attributed to septic tank effluent, and there are a number of documented instances of septic-tank-related waterborne disease outbreaks affecting large numbers of people. In one recent incident, the local authority, rather than the individual septic tank owners, was found liable for an outbreak of viral hepatitis A because no action had been taken to remedy septic tank failure. This illustrates the responsibility placed on local authorities to ensure the proper operation of onsite sewage treatment systems. Even a properly functioning soil absorption system is only capable of removing phosphorus and microorganisms.
The nitrogen remaining after plant uptake will not be retained in the soil column, but will instead gradually seep into the groundwater as nitrate, since conditions for nitrogen removal by denitrification are not generally present in a soil absorption bed. Dilution by groundwater is the only treatment available for reducing the nitrogen concentration to specified levels. Based on subsurface conditions, this essentially entails a maximum allowable density of septic tanks in a given area. Unfortunately, nitrogen is not the only wastewater pollutant of concern. Relatively long survival times and travel distances have been noted for microorganisms originating from soil absorption systems, particularly where saturated conditions persist under the soil absorption bed or where system failure results in surface runoff of effluent. Soils also have a finite capacity for the removal of phosphorus; once this capacity is exceeded, phosphorus too will seep into the groundwater, and the relatively high mobility of phosphorus in sandy soils has been noted in the literature. These issues have serious implications for the design and siting of soil absorption systems. Not only must system design be based on subsurface conditions, but the density of these systems in a given area is also a critical issue. This essentially involves the adoption of a land capability approach to determine the limitations of an individual site for onsite sewage disposal. The most limiting factor at a particular site determines the overall capability classification for that site, which in turn dictates the type of effluent disposal method to be adopted.
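The dilution constraint described above can be sketched as a simple steady-state mass balance. This is a minimal illustration, not a method from the report; the function names and all numeric values are hypothetical:

```python
def mixed_nitrate(q_eff, c_eff, q_gw, c_gw):
    """Steady-state mixing: nitrate concentration after effluent (flow q_eff,
    concentration c_eff) mixes fully with groundwater (q_gw, c_gw)."""
    return (q_eff * c_eff + q_gw * c_gw) / (q_eff + q_gw)

def max_effluent_flow(c_eff, q_gw, c_gw, c_limit):
    """Largest effluent inflow that keeps the mixed concentration at or
    below c_limit, solved from the mass balance above (requires c_eff > c_limit)."""
    return q_gw * (c_limit - c_gw) / (c_eff - c_limit)
```

For example, with a groundwater flow of 9 volume units carrying 0 mg/L nitrate, effluent at 40 mg/L and a 10 mg/L target, the allowable effluent inflow works out to 3 volume units. Scaled by per-tank effluent output, this kind of calculation is what translates subsurface conditions into a maximum allowable septic tank density.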
Abstract:
The increasing popularity of video consumption on mobile devices requires an effective video coding strategy. To cope with diverse communication networks, video services often need to maintain sustainable quality when the available bandwidth is limited. One strategy for visually optimised video adaptation is to implement region-of-interest (ROI) based scalability, whereby important regions are encoded at a higher quality while sufficient quality is maintained for the rest of the frame. The result is an improved perceived quality at the same bit rate as normal encoding, which is particularly noticeable at lower bit rates. However, because of the difficulty of predicting the ROI accurately, there has been limited research and development of ROI-based video coding for general videos. In this paper, the phase spectrum of quaternion Fourier transform (PQFT) method is adopted to determine the ROI. To improve the results of ROI detection, the saliency map from the PQFT is augmented with maps created from high-level knowledge of factors that are known to attract human attention. Hence, maps that locate faces and emphasise the centre of the screen are used in combination with the saliency map to determine the ROI. The contributions of this paper lie in the automatic ROI detection technique for coding low bit rate videos, which includes an ROI prioritisation technique to assign different encoding qualities to multiple ROIs, and in the evaluation of the proposed automatic ROI detection, which is shown to perform closely to human-identified ROI based on eye fixation data.
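The map-combination step described above can be sketched as follows. This is a minimal illustration rather than the paper's implementation: the fusion weights, the Gaussian centre-bias model and the thresholding rule are all assumptions, and the PQFT saliency map and face map are taken as given inputs.

```python
import numpy as np

def center_bias(h, w, sigma=0.3):
    """Gaussian map emphasising the frame centre (sigma as a fraction of frame size)."""
    ys, xs = np.mgrid[0:h, 0:w]
    cy, cx = (h - 1) / 2, (w - 1) / 2
    d2 = ((ys - cy) / (sigma * h)) ** 2 + ((xs - cx) / (sigma * w)) ** 2
    return np.exp(-d2 / 2)

def combine_maps(saliency, face_map, center, w_sal=0.5, w_face=0.3, w_center=0.2):
    """Weighted fusion of min-max-normalised maps; the weights are illustrative only."""
    def norm(m):
        rng = m.max() - m.min()
        return (m - m.min()) / rng if rng > 0 else np.zeros_like(m)
    return w_sal * norm(saliency) + w_face * norm(face_map) + w_center * norm(center)

def roi_mask(combined, threshold=0.5):
    """Binary ROI: pixels at or above a fraction of the combined map's maximum."""
    return combined >= threshold * combined.max()
```

In practice the ROI mask would then drive the encoder's quantisation, with lower quantisation (higher quality) inside the mask, and the prioritisation step would assign each detected ROI its own quality level.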
Abstract:
Aims: This study investigated the association between the basal (resting) levels of the insulin-signaling protein Akt and its substrate AS160, metabolic risk factors, inflammatory markers and aerobic fitness in middle-aged women with varying numbers of metabolic risk factors for type 2 diabetes. Methods: Sixteen women aged 51.3+/-5.1 years (mean+/-SD) provided muscle biopsies and blood samples at rest. In addition, anthropometric characteristics and aerobic power were assessed, and the number of metabolic risk factors for each participant was determined (IDF criteria). Results: The mean number of metabolic risk factors was 1.6+/-1.2. Total Akt was negatively correlated with IL-1 beta (r = -0.45, p = 0.046), IL-6 (r = -0.44, p = 0.052) and TNF-alpha (r = -0.51, p = 0.025). Phosphorylated AS160 was positively correlated with HDL (r = 0.58, p = 0.024) and aerobic fitness (r = 0.51, p = 0.047). Furthermore, a multiple regression analysis revealed that both HDL (t = 2.5, p = 0.032) and VO2peak (t = 2.4, p = 0.037) were better predictors of phosphorylated AS160 than TNF-alpha or IL-6 (p > 0.05). Conclusions: Elevated inflammatory markers and increased metabolic risk factors may inhibit insulin-signaling protein phosphorylation in middle-aged women, thereby increasing insulin resistance under basal conditions. Furthermore, higher HDL and fitness levels are associated with increased AS160 phosphorylation, which may in turn reduce insulin resistance.
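The analysis pattern described (bivariate Pearson correlations followed by multiple linear regression) can be sketched as follows on synthetic data. The variable names, distributions and generated values are hypothetical, not the study's data, and a full analysis would use a statistics package to obtain the t and p values reported above:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 16  # same sample size as the study; the data themselves are synthetic

hdl = rng.normal(1.4, 0.3, n)    # hypothetical HDL values (mmol/L)
vo2 = rng.normal(28.0, 4.0, n)   # hypothetical VO2peak values (mL/kg/min)
# Synthetic outcome loosely dependent on both predictors, plus noise
p_as160 = 0.6 * hdl + 0.04 * vo2 + rng.normal(0, 0.1, n)

# Bivariate association: Pearson correlation coefficient
r = np.corrcoef(hdl, p_as160)[0, 1]

# Multiple linear regression by ordinary least squares,
# with an intercept column plus HDL and VO2peak as predictors
X = np.column_stack([np.ones(n), hdl, vo2])
beta, *_ = np.linalg.lstsq(X, p_as160, rcond=None)  # [intercept, b_hdl, b_vo2]
```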