Abstract:
The launch of the Centre of Research Excellence in Reducing Healthcare Associated Infection (CRE-RHAI) took place in Sydney on Friday 12 October 2012. The mission of the CRE-RHAI is to generate new knowledge about strategies to reduce healthcare associated infections and to provide data on the cost-effectiveness of infection control programs. In addition to the launch itself, an important part of this event was a stakeholder Consultation Workshop, which brought together several experts from the Australian infection control community. The aims of this workshop were to establish the research and clinical priorities in Australian infection control, assess the importance of various multi-resistant organisms, and gather information about decision making in infection control. We present here a summary and discussion of the responses we received.
Abstract:
The preparedness theory of classical conditioning proposed by Seligman (1970, 1971) has been applied extensively over the past 40 years to explain the nature and "source" of human fear and phobias. In this review we examine the formative studies that tested the four defining characteristics of prepared learning with animal fear-relevant stimuli (typically snakes and spiders) and consider claims that fear of social stimuli, such as angry faces or faces of racial out-group members, may be acquired through the same preferential learning mechanism. Exposition of critical differences between fear learning to animal and social stimuli suggests that a single account cannot adequately explain fear learning with both classes of stimuli. We demonstrate that fear conditioned to social stimuli is less robust than fear conditioned to animal stimuli, as it is susceptible to cognitive influence, and propose that it may instead reflect negative stereotypes and social norms. Thus, a theoretical model that can accommodate the influence of both biological and cultural factors is likely to have broader utility in the explanation of fear and avoidance responses than accounts based on a single mechanism.
Abstract:
Located within the Creative Industries Faculty, the Animation team at the Queensland University of Technology (QUT) recently acquired a full-body inertial motion capture system. Our research to date has been predominantly concerned with interdisciplinary practice and the benefits this could bring to undergraduate teaching. Early experimental tests identified the need to develop a framework for best practice and an efficient production workflow to ensure the system was being used to its full potential. Through our ongoing investigation we have identified at least three areas that stand to gain long-term benefits from universities engaging in motion capture related research activity: interdisciplinary collaborative research, undergraduate teaching, and improved production processes. The following paper reports the early stages of our research, which explores the use of a full-body inertial motion capture (MoCap) solution in collaboration with performing artists.
Abstract:
The current gold standard for the design of orthopaedic implants is 3D models of long bones obtained using computed tomography (CT). However, high-resolution CT imaging involves high radiation exposure, which limits its use in healthy human volunteers. Magnetic resonance imaging (MRI) is an attractive alternative for the scanning of healthy human volunteers for research purposes. Current limitations of MRI include difficulties of tissue segmentation within joints and long scanning times. In this work, we explore the possibility of overcoming these limitations through the use of MRI scanners operating at a higher field strength. We quantitatively compare the quality of anatomical MR images of long bones obtained at 1.5 T and 3 T and optimise the scanning protocol of 3 T MRI. FLASH images of the right leg of five human volunteers acquired at 1.5 T and 3 T were compared in terms of signal-to-noise ratio (SNR) and contrast-to-noise ratio (CNR). The comparison showed a relatively high CNR and SNR at 3 T for most regions of the femur and tibia, with the exception of the distal diaphyseal region of the femur and the mid diaphyseal region of the tibia. This was accompanied by an ~65% increase in the longitudinal spin relaxation time (T1) of the muscle at 3 T compared to 1.5 T. The results suggest that MRI at 3 T may be able to enhance the segmentability and potentially improve the accuracy of 3D anatomical models of long bones, compared to 1.5 T. We discuss how the total imaging times at 3 T can be kept short while maximising the CNR and SNR of the images obtained.
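The SNR and CNR comparison described above rests on standard definitions: mean region intensity over the noise standard deviation, and the absolute difference of two region means over the noise standard deviation. A minimal sketch of those two measures, using fabricated intensity values rather than the study's MR data:

```python
import numpy as np

def snr(region, noise):
    """Signal-to-noise ratio: mean signal over noise standard deviation."""
    return np.mean(region) / np.std(noise)

def cnr(region_a, region_b, noise):
    """Contrast-to-noise ratio between two tissue regions."""
    return abs(np.mean(region_a) - np.mean(region_b)) / np.std(noise)

# Toy intensity samples for illustration only (not real MR data):
rng = np.random.default_rng(0)
bone = rng.normal(400.0, 20.0, 1000)      # cortical bone region
muscle = rng.normal(700.0, 20.0, 1000)    # adjacent muscle region
background = rng.normal(0.0, 10.0, 1000)  # air/background noise

print(f"SNR(bone) = {snr(bone, background):.1f}")
print(f"CNR(bone vs muscle) = {cnr(bone, muscle, background):.1f}")
```

In practice the noise sample is taken from a signal-free background region of the same image, so the two field strengths can be compared on the same footing.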
Abstract:
Background: There are strong logical reasons why energy expended in metabolism should influence the energy acquired in food-intake behavior. However, the relation has never been established, and it is not known why certain people experience hunger in the presence of large amounts of body energy. Objective: We investigated the effect of the resting metabolic rate (RMR) on objective measures of whole-day food intake and hunger. Design: We carried out a 12-wk intervention that involved 41 overweight and obese men and women [mean ± SD age: 43.1 ± 7.5 y; BMI (in kg/m2): 30.7 ± 3.9] who were tested under conditions of physical activity (sedentary or active) and dietary energy density (17 or 10 kJ/g). RMR, daily energy intake, meal size, and hunger were assessed within the same day and across each condition. Results: We obtained evidence that RMR is correlated with meal size and daily energy intake in overweight and obese individuals. Participants with high RMRs showed increased levels of hunger across the day (P < 0.0001) and greater food intake (P < 0.00001) than did individuals with lower RMRs. These effects were independent of sex and food energy density. The change in RMR was also related to energy intake (P < 0.0001). Conclusions: We propose that RMR (largely determined by fat-free mass) may be a marker of energy intake and could represent a physiologic signal for hunger. These results may have implications for additional research possibilities in appetite, energy homeostasis, and obesity. This trial was registered under international standard identification for controlled trials as ISRCTN47291569.
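The reported relation between RMR and daily energy intake is a correlation analysis; as a hedged illustration of the computation (with entirely hypothetical RMR and intake values, not the trial's data):

```python
import numpy as np

def pearson_r(x, y):
    """Pearson correlation coefficient between two paired samples."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    xd, yd = x - x.mean(), y - y.mean()
    return float(np.sum(xd * yd) / np.sqrt(np.sum(xd**2) * np.sum(yd**2)))

# Hypothetical resting metabolic rates and daily intakes (kJ/day):
rmr = np.array([6200, 6800, 7100, 7600, 8200, 8900])
intake = np.array([8100, 8600, 9400, 9200, 10400, 11000])

print(f"r = {pearson_r(rmr, intake):.2f}")
```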
Abstract:
BACKGROUND: Studies have shown that nurse staffing levels, among many other factors in the hospital setting, contribute to adverse patient outcomes. Concerns about patient safety and quality of care have resulted in numerous studies being conducted to examine the relationship between nurse staffing levels and the incidence of adverse patient events in both general wards and intensive care units. AIM: The aim of this paper is to review literature published in the previous 10 years which examines the relationship between nurse staffing levels and the incidence of mortality and morbidity in adult intensive care unit patients. METHODS: A literature search from 2002 to 2011 using the MEDLINE, Cumulative Index to Nursing and Allied Health Literature (CINAHL), PsycINFO, and Australian digital thesis databases was undertaken. The keywords used were: intensive care; critical care; staffing; nurse staffing; understaffing; nurse-patient ratios; adverse outcomes; mortality; ventilator-associated pneumonia; ventilator-acquired pneumonia; infection; length of stay; pressure ulcer/injury; unplanned extubation; medication error; readmission; myocardial infarction; and renal failure. A total of 19 articles were included in the review. Outcomes of interest are patient mortality and morbidity, particularly infection and pressure ulcers. RESULTS: Most of the studies were observational in nature with variables obtained retrospectively from large hospital databases. Nurse staffing measures and patient outcomes varied widely across the studies. While an overall statistical association between increased nurse staffing levels and decreased adverse patient outcomes was not found in this review, most studies concluded that a trend exists between increased nurse staffing levels and decreased adverse events. 
CONCLUSION: Although this review did not find an overall statistical association between increased nurse staffing levels and decreased adverse patient outcomes, most studies demonstrated a trend towards fewer adverse events with higher staffing levels in the intensive care unit, which is consistent with previous literature. More robust research methodologies need to be tested in order to demonstrate this association with greater confidence and to reduce the influence of the many other confounders of patient outcomes, although this would be difficult to achieve in this field of research.
Abstract:
Determining the properties and integrity of subchondral bone in the developmental stages of osteoarthritis, especially in a form that can facilitate real-time characterization for diagnostic and decision-making purposes, is still a matter for research and development. This paper presents relationships between near infrared absorption spectra and properties of subchondral bone obtained from three models of osteoarthritic degeneration induced in laboratory rats via: (i) meniscectomy (MSX); (ii) anterior cruciate ligament transection (ACL); and (iii) intra-articular injection of monoiodoacetate (1 mg) (MIA), in the right knee joint, with 12 rats per model group (N = 36). After 8 weeks, the animals were sacrificed and knee joints were collected. A custom-made diffuse reflectance NIR probe of diameter 5 mm was placed on the tibial surface and spectral data were acquired from each specimen in the wavenumber range 4000–12 500 cm−1. After spectral acquisition, micro computed tomography (micro-CT) was performed on the samples and subchondral bone parameters, namely bone volume (BV) and bone mineral density (BMD), were extracted from the micro-CT data. Statistical correlation was then conducted between these parameters and regions of the near infrared spectra using multivariate techniques including principal component analysis (PCA), discriminant analysis (DA), and partial least squares (PLS) regression. Statistically significant linear correlations were found between the near infrared absorption spectra and subchondral bone BMD (R2 = 98.84%) and BV (R2 = 97.87%). In conclusion, near infrared spectroscopic probing can be used to detect, qualify and quantify changes in the composition of subchondral bone, and could potentially assist in distinguishing healthy from OA bone, as demonstrated with our laboratory rat models.
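The spectra-to-bone-parameter correlations above were obtained with multivariate regression (PCA, DA, PLS). As a simplified, hedged sketch of the idea, the following uses principal-component regression, a close relative of PLS, on synthetic "spectra"; all values are fabricated for illustration and none come from the study:

```python
import numpy as np

def pcr_r2(X, y, n_components=3):
    """Principal-component regression: project the spectra onto the top
    principal components, fit ordinary least squares, and return R^2."""
    Xc = X - X.mean(axis=0)
    yc = y - y.mean()
    # SVD gives the principal directions of the centred spectral matrix
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    scores = Xc @ Vt[:n_components].T            # sample scores
    beta, *_ = np.linalg.lstsq(scores, yc, rcond=None)
    resid = yc - scores @ beta
    return 1.0 - resid @ resid / (yc @ yc)

# Synthetic data: 30 specimens x 200 wavenumber channels whose
# intensities depend linearly on a hidden BMD-like variable plus noise.
rng = np.random.default_rng(1)
bmd = rng.uniform(0.8, 1.4, 30)                  # hypothetical g/cm^2
basis = rng.normal(size=200)                     # spectral shape of the signal
spectra = np.outer(bmd, basis) + 0.05 * rng.normal(size=(30, 200))

print(f"R^2 = {pcr_r2(spectra, bmd):.3f}")
```

PLS differs from this sketch in choosing components that maximise covariance with the response rather than variance of the spectra alone, but the workflow (decompose, regress, report R²) is the same.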
Abstract:
This paper is based on a PhD thesis that investigated how Hollywood’s dominance of the movie industry arose and how it has been maintained over time. Major studio dominance and the global popularity of Hollywood movies have been the subject of numerous studies. An interdisciplinary literature review of the economics, management, marketing, film, media and culture literatures identified twenty different single- or multiple-factor explanations that try to account for Major studio dominance at different time periods but cannot comprehensively explain how Hollywood acquired and maintained global dominance for nine decades. Existing strategic management and marketing theories were integrated into a ‘theoretical lens’ that enabled a historical analysis of Hollywood’s longstanding dominance of the movie business to be undertaken from a strategic business perspective. This paper concludes that the major studios’ rise to market leadership and enduring dominance can primarily be explained by their development and maintenance of a set of strategic marketing management capabilities superior to those of rival firms and rival film industries. It is argued that a marketing orientation and effective strategic marketing management capabilities also provide a unifying theory for Hollywood’s enduring dominance, because they can account for each of the twenty previously identified explanations for that dominance.
Abstract:
With the disintermediation of the financial markets, credit rating agencies filled investors' informational need for assessments of the creditworthiness of borrowers. They acquired their privileged position in the financial market through their intellectual technology and reputational capital. To a large extent, state regulators and supervisory authorities have gradually ceded authority to rating agencies through the increasing reliance on credit ratings for regulatory purposes. But the recent credit crisis revives the questions of whether states should retake that authority and how far rating agencies should be subjected to the competition, transparency and accountability constraints that the public and the market impose on state regulators and supervisory authorities. Against this backdrop, this article critically explores the key concerns with credit rating agencies' function in regulating the financial market, with a view to further assessment.
Abstract:
Emerging sciences, such as conceptual cost estimating, seem to have to go through two phases. The first phase involves reducing the field of study down to its basic ingredients - from systems development to technological development (techniques) to theoretical development. The second phase operates in the opposite direction, building up techniques from theories, and systems from techniques. Cost estimating is clearly and distinctly still in the first phase. A great deal of effort has been put into the development of both manual and computer based cost estimating systems during this first phase and, to a lesser extent, the development of a range of techniques that can be used (see, for instance, Ashworth & Skitmore, 1986). Theoretical developments have not, as yet, been forthcoming. All theories need the support of some observational data and cost estimating is not likely to be an exception. These data do not need to be complete in order to build theories. Just as it is possible to construct an image of a prehistoric animal such as the brontosaurus from only a few key bones and relics, so a theory of cost estimating may possibly be founded on a few factual details. The eternal argument of empiricists and deductionists is that, as theories need factual support, so we need theories in order to know what facts to collect. In cost estimating, the basic facts of interest concern accuracy, the cost of achieving this accuracy, and the trade-off between the two. When cost estimating theories do begin to emerge, it is highly likely that these relationships will be central features. This paper presents some of the facts we have been able to acquire regarding one part of this relationship - accuracy and its influencing factors. Although some of these factors, such as the amount of information used in preparing the estimate, will have cost consequences, we have not yet reached the stage of quantifying these costs.
Indeed, as will be seen, many of the factors do not involve any substantial cost considerations. The absence of any theory is reflected in the arbitrary manner in which the factors are presented. Rather, the emphasis here is on the consideration of purely empirical data concerning estimating accuracy. The essence of good empirical research is to minimize the role of the researcher in interpreting the results of the study. Whilst space does not allow a full treatment of the material in this manner, the principle has been adopted as closely as possible to present results in an uncleaned and unbiased way. In most cases the evidence speaks for itself. The first part of the paper reviews most of the empirical evidence that we have located to date. Knowledge of any work done but omitted here would be most welcome. The second part of the paper presents an analysis of some recently acquired data pertaining to this growing subject.
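Estimating accuracy of the kind discussed above is commonly summarised by the bias and consistency of the estimate-to-actual-cost ratio. A hedged sketch of those two summaries, with hypothetical figures rather than data from the paper:

```python
import numpy as np

def accuracy_stats(estimates, actuals):
    """Bias and coefficient of variation of the estimate/actual ratio,
    two common summaries of estimating accuracy."""
    ratio = np.asarray(estimates, float) / np.asarray(actuals, float)
    bias = ratio.mean() - 1.0               # systematic over/under-estimation
    cv = ratio.std(ddof=1) / ratio.mean()   # consistency of the estimates
    return bias, cv

# Hypothetical tender-stage estimates vs final costs (thousands):
est = [410, 950, 1210, 300, 780]
act = [400, 1000, 1100, 330, 760]
bias, cv = accuracy_stats(est, act)
print(f"bias = {bias:+.1%}, cv = {cv:.1%}")
```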
Abstract:
The health impacts of exposure to ambient temperature have been drawing increasing attention from the environmental health research community, government, society, industries, and the public. Case-crossover and time series models are most commonly used to examine the effects of ambient temperature on mortality. However, some key methodological issues remain to be addressed. For example, few studies have used spatiotemporal models to assess the effects of spatial temperatures on mortality. Few studies have used a case-crossover design to examine the delayed (distributed lag) and non-linear relationship between temperature and mortality. Also, little evidence is available on the effects of temperature changes on mortality, or on differences in heat-related mortality over time. This thesis aimed to address the following research questions: 1. How can the case-crossover design and distributed lag non-linear models be combined? 2. Is there any significant difference in effect estimates between time series and spatiotemporal models? 3. How can the effects on mortality of temperature changes between neighbouring days be assessed? 4. Is there any change in temperature effects on mortality over time? To combine the case-crossover design and the distributed lag non-linear model, datasets of deaths, weather conditions (minimum temperature, mean temperature, maximum temperature, and relative humidity), and air pollution were acquired from Tianjin, China, for the years 2005 to 2007. I demonstrated how to combine the case-crossover design with a distributed lag non-linear model, which allows the case-crossover design to estimate the non-linear and delayed effects of temperature whilst controlling for seasonality. There was a consistent U-shaped relationship between temperature and mortality. Cold effects were delayed by 3 days and persisted for 10 days.
Hot effects were acute, lasted for three days, and were followed by mortality displacement for non-accidental, cardiopulmonary, and cardiovascular deaths. Mean temperature was a better predictor of mortality (based on model fit) than maximum or minimum temperature. It is still unclear whether spatiotemporal models using spatial temperature exposure produce better estimates of mortality risk than time series models that use a single site's temperature or temperature averaged across a network of sites. Daily mortality data were obtained from 163 locations across Brisbane city, Australia from 2000 to 2004. Ordinary kriging was used to interpolate spatial temperatures across the city based on 19 monitoring sites. A spatiotemporal model was used to examine the impact of spatial temperature on mortality. A time series model was used to assess the effects on mortality of a single site's temperature, and of temperature averaged from 3 monitoring sites. Squared Pearson scaled residuals were used to check model fit. The results of this study show that even though the spatiotemporal models gave a better model fit than the time series models, the two approaches gave similar effect estimates. Time series analyses using temperature recorded at a single monitoring site, or the average temperature of multiple sites, were as good at estimating the association between temperature and mortality as a spatiotemporal model. A time series Poisson regression model was used to estimate the association between temperature change and mortality in summer in Brisbane, Australia during 1996–2004 and Los Angeles, United States during 1987–2000. Temperature change was calculated as the current day's mean temperature minus the previous day's mean.
In Brisbane, a drop of more than 3 °C in temperature between days was associated with relative risks (RRs) of 1.16 (95% confidence interval (CI): 1.02, 1.31) for non-external mortality (NEM), 1.19 (95% CI: 1.00, 1.41) for NEM in females, and 1.44 (95% CI: 1.10, 1.89) for NEM aged 65–74 years. An increase of more than 3 °C was associated with RRs of 1.35 (95% CI: 1.03, 1.77) for cardiovascular mortality and 1.67 (95% CI: 1.15, 2.43) for people aged < 65 years. In Los Angeles, only a drop of more than 3 °C was significantly associated with RRs of 1.13 (95% CI: 1.05, 1.22) for total NEM, 1.25 (95% CI: 1.13, 1.39) for cardiovascular mortality, and 1.25 (95% CI: 1.14, 1.39) for people aged ≥ 75 years. In both cities, there were joint effects of temperature change and mean temperature on NEM. A change in temperature of more than 3 °C, whether positive or negative, has an adverse impact on mortality even after controlling for mean temperature. I examined the variation in the effects of high temperatures on elderly mortality (age ≥ 75 years) by year, city and region for 83 large US cities between 1987 and 2000. High temperature days were defined as two or more consecutive days with temperatures above the 90th percentile for each city during each warm season (May 1 to September 30). The mortality risk for high temperatures was decomposed into: a "main effect" due to high temperatures using a distributed lag non-linear function, and an "added effect" due to consecutive high temperature days. I pooled yearly effects across regions and overall effects at both regional and national levels. The effects of high temperature (both main and added effects) on elderly mortality varied greatly by year, city and region. The years with higher heat-related mortality were often followed by those with relatively lower mortality. Understanding this variability in the effects of high temperatures is important for the development of heat-warning systems.
In conclusion, this thesis makes contributions in several respects. The case-crossover design was combined with a distributed lag non-linear model to assess the effects of temperature on mortality in Tianjin, allowing the case-crossover design to flexibly estimate the non-linear and delayed effects of temperature. Both extreme cold and high temperatures increased the risk of mortality in Tianjin. A time series model using a single site's temperature, or temperature averaged across several sites, can be used to examine the effects of temperature on mortality. A change in temperature between neighbouring days, whether a large drop or a large increase, raises the risk of mortality. The effect of high temperature on mortality is highly variable from year to year.
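The temperature-change measure used in the thesis is defined as the current day's mean temperature minus the previous day's. A minimal sketch of computing that series and flagging days beyond the 3 °C threshold; the temperatures are invented for illustration:

```python
import numpy as np

def temperature_change(mean_temps):
    """Day-to-day change: today's mean temperature minus yesterday's,
    as defined in the study (the first day has no predecessor -> NaN)."""
    t = np.asarray(mean_temps, float)
    return np.concatenate([[np.nan], np.diff(t)])

# Hypothetical week of daily mean temperatures (deg C):
temps = [24.0, 27.5, 23.8, 24.2, 28.0, 24.5, 24.9]
change = temperature_change(temps)
flagged = np.abs(change) > 3.0   # days exceeding the 3 deg C threshold
print(np.flatnonzero(flagged))
```

In the analyses themselves, this change series would enter a time series Poisson regression alongside mean temperature, so that the effect of the change is estimated while controlling for the level.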
Abstract:
Using Monte Carlo simulation for radiotherapy dose calculation can provide more accurate results when compared to the analytical methods usually found in modern treatment planning systems, especially in regions with a high degree of inhomogeneity. These more accurate results acquired using Monte Carlo simulation, however, often require orders of magnitude more calculation time to attain high precision, thereby reducing its utility within the clinical environment. This work aims to improve the utility of Monte Carlo simulation within the clinical environment by developing techniques which enable faster Monte Carlo simulation of radiotherapy geometries. This is achieved principally through the use of new high performance computing environments and simpler, yet equivalent, representations of complex geometries. Firstly, the use of cloud computing technology and its application to radiotherapy dose calculation is demonstrated. As with other supercomputer-like environments, the time to complete a simulation decreases as 1/n with n cloud-based computers performing the calculation in parallel. Unlike traditional supercomputer infrastructure, however, there is no initial outlay of cost, only modest ongoing usage fees; the simulations described in the following are performed using this cloud computing technology. The definition of geometry within the chosen Monte Carlo simulation environment - Geometry & Tracking 4 (GEANT4) in this case - is also addressed in this work. At the simulation implementation level, a new computer aided design interface is presented for use with GEANT4, enabling direct coupling between manufactured parts and their equivalent in the simulation environment, which is of particular importance when defining linear accelerator treatment head geometry.
Further, a new technique for navigating tessellated or meshed geometries is described, allowing for up to 3 orders of magnitude performance improvement with the use of tetrahedral meshes in place of complex triangular surface meshes. The technique has application in the definition of both mechanical parts in a geometry and patient anatomy. Static patient CT datasets like those found in typical radiotherapy treatment plans are often very large and impose a significant performance penalty on a Monte Carlo simulation. By extracting the regions of interest in a radiotherapy treatment plan and representing them in a mesh-based form similar to those used in computer aided design, the above-mentioned optimisation techniques can be applied to reduce the time required to navigate the patient geometry in the simulation environment. Results presented in this work show that these equivalent yet much simplified patient geometry representations enable significant performance improvements over simulations that consider raw CT datasets alone. Furthermore, this mesh-based representation allows for direct manipulation of the geometry, enabling motion augmentation for time-dependent dose calculation, for example. Finally, an experimental dosimetry technique is described which allows the validation of time-dependent Monte Carlo simulations, like the ones made possible by the aforementioned patient geometry definition. A bespoke organic plastic scintillator dose rate meter is embedded in a gel dosimeter, thereby enabling simultaneous 3D dose distribution and dose rate measurement. This work demonstrates the effectiveness of applying alternative and equivalent geometry definitions to complex geometries for the purposes of Monte Carlo simulation performance improvement. Additionally, these alternative geometry definitions allow manipulations to be performed on otherwise static and rigid geometry.
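Navigating a tetrahedral mesh, as described above, ultimately depends on testing which tetrahedron contains a given particle position. A minimal, generic point-in-tetrahedron test via barycentric coordinates (a sketch of the geometric idea, not the thesis's GEANT4 implementation):

```python
import numpy as np

def barycentric(p, verts):
    """Barycentric coordinates of point p in the tetrahedron 'verts'
    (four 3D vertices). All four coordinates non-negative <=> p is inside."""
    a, b, c, d = (np.asarray(v, float) for v in verts)
    T = np.column_stack([b - a, c - a, d - a])
    l123 = np.linalg.solve(T, np.asarray(p, float) - a)
    return np.concatenate([[1.0 - l123.sum()], l123])

def inside(p, verts, eps=1e-12):
    """True if p lies inside (or on the boundary of) the tetrahedron."""
    return bool(np.all(barycentric(p, verts) >= -eps))

# Unit tetrahedron as a minimal example:
tet = [(0, 0, 0), (1, 0, 0), (0, 1, 0), (0, 0, 1)]
print(inside((0.2, 0.2, 0.2), tet))  # interior point
print(inside((0.9, 0.9, 0.9), tet))  # coordinates sum > 1, so outside
```

A practical navigator would also exploit tetrahedron adjacency, walking from one cell to its neighbour through the shared face rather than testing every cell, which is where the reported speed-up over surface meshes comes from.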
Abstract:
As the world’s population is growing, so is the demand for agricultural products. However, natural nitrogen (N) fixation and phosphorus (P) availability cannot sustain the rising agricultural production, thus, the application of N and P fertilisers as additional nutrient sources is common. It is those anthropogenic activities that can contribute high amounts of organic and inorganic nutrients to both surface and groundwaters resulting in degradation of water quality and a possible reduction of aquatic life. In addition, runoff and sewage from urban and residential areas can contain high amounts of inorganic and organic nutrients which may also affect water quality. For example, blooms of the cyanobacterium Lyngbya majuscula along the coastline of southeast Queensland are an indicator of at least short term decreases of water quality. Although Australian catchments, including those with intensive forms of land use, show in general a low export of nutrients compared to North American and European catchments, certain land use practices may still have a detrimental effect on the coastal environment. Numerous studies are reported on nutrient cycling and associated processes on a catchment scale in the Northern Hemisphere. Comparable studies in Australia, in particular in subtropical regions are, however, limited and there is a paucity in the data, in particular for inorganic and organic forms of nitrogen and phosphorus; these nutrients are important limiting factors in surface waters to promote algal blooms. Therefore, the monitoring of N and P and understanding the sources and pathways of these nutrients within a catchment is important in coastal zone management. Although Australia is the driest continent, in subtropical regions such as southeast Queensland, rainfall patterns have a significant effect on runoff and thus the nutrient cycle at a catchment scale. Increasingly, these rainfall patterns are becoming variable. 
The monitoring of these climatic conditions and the hydrological response of agricultural catchments is therefore also important to reduce the anthropogenic effects on surface and groundwater quality. This study consists of an integrated hydrological–hydrochemical approach that assesses N and P in an environment with multiple land uses. The main aim is to determine the nutrient cycle within a representative coastal catchment in southeast Queensland, the Elimbah Creek catchment. In particular, the investigation confirms the influence associated with forestry and agriculture on N and P forms, sources, distribution and fate in the surface and groundwaters of this subtropical setting. In addition, the study determines whether N and P are subject to transport into the adjacent estuary and thus into the marine environment; also considered is the effect of local topography, soils and geology on N and P sources and distribution. The thesis is structured around four components, each reported individually. The first paper determines the controls of catchment settings and processes on stream water, riverbank sediment, and shallow groundwater N and P concentrations, in particular during the extended dry conditions encountered during the study. Temporal and spatial factors such as seasonal changes, soil character, land use and catchment morphology are considered, as well as their effect on the distributions of N and P in surface waters and associated groundwater. A total of 30 surface water and 13 shallow groundwater sampling sites were established throughout the catchment to represent the dominant soil types and the land use upstream of each sampling location. Sampling comprised five rounds and was conducted over one year, between October 2008 and November 2009. Surface water and groundwater samples were analysed for all major dissolved inorganic forms of N and for total N.
Phosphorus was determined in the form of dissolved reactive P (predominantly orthophosphate) and total P. In addition, extracts of stream bank sediments and soil grab samples were analysed for these N and P species. Findings show that major storm events, in particular after long periods of drought conditions, are the driving force of N cycling. This is expressed by higher inorganic N concentrations in the agricultural subcatchment compared to the forested subcatchment. Nitrate N is the dominant inorganic form of N in both the surface and groundwaters and values are significantly higher in the groundwaters. Concentrations in the surface water range from 0.03 to 0.34 mg N L−1; organic N concentrations are considerably higher (average range: 0.33 to 0.85 mg N L−1), in particular in the forested subcatchment. Average NO3-N in the groundwater ranges from 0.39 to 2.08 mg N L−1, and organic N averages between 0.07 and 0.3 mg N L−1. The stream bank sediment extracts are dominated by organic N (range: 0.53 to 0.65 mg N L−1), and the dominant inorganic form of N is NH4-N, with values ranging between 0.38 and 0.41 mg N L−1. Topography and soils, however, were not found to have a significant effect on N and P concentrations in waters. Detectable phosphorus in the surface and groundwaters of the catchment is limited to several locations, typically in the proximity of areas with intensive animal use; in soil and sediments, P is negligible. In the second paper, the stable isotopes of N (14N/15N) and H2O (16O/18O and 2H/H) in surface and groundwaters are used to identify sources of dissolved inorganic and organic N in these waters, and to determine their pathways within the catchment; specific emphasis is placed on the relation of forestry and agriculture. Forestry is predominantly concentrated in the northern subcatchment (Beerburrum Creek) while agriculture is mainly found in the southern subcatchment (Six Mile Creek).
Results show that agriculture (horticulture, crops, grazing) is the main source of inorganic N in the surface waters of the agricultural subcatchment, and the isotopic signature shows a close link to evaporation processes that may occur during water storage in farm dams used for irrigation. Groundwaters are subject to denitrification processes that may result in reduced dissolved inorganic N concentrations. Soil organic matter delivers most of the inorganic N to the surface water in the forested subcatchment. Here, precipitation and subsequent runoff are the main sources of the surface waters. Groundwater in this area is affected by agricultural processes. The findings also show that the catchment can attenuate the effects of anthropogenic land use on surface water quality. Riparian strips of natural remnant vegetation, commonly 50 to 100 m in width, act as buffer zones along the drainage lines in the catchment and remove inorganic N from the soil water before it enters the creek. These riparian buffer zones are common in most agricultural catchments of southeast Queensland and appear to reduce the impact of agriculture on stream water quality and subsequently on the estuary and marine environments. This reduction is expressed by a significant decrease in DIN concentrations from 1.6 mg N L−1 to 0.09 mg N L−1, and a decrease in the δ15N signatures from upstream surface water locations downstream to the outlet of the agricultural subcatchment. Further testing is, however, necessary to confirm these processes. Most importantly, the amount of N that is transported to the adjacent estuary is shown to be negligible. The third and fourth components of the thesis use a hydrological catchment model approach to determine the water balance of the Elimbah Creek catchment. The model is then used to simulate the effects of land use on the water balance and nutrient loads of the study area.
The tool used is the internationally widely applied Soil and Water Assessment Tool (SWAT). Knowledge of the water cycle of a catchment is imperative in nutrient studies, as processes such as rainfall, surface runoff, soil infiltration and routing of water through the drainage system are the driving forces of the catchment nutrient cycle. Long-term information about discharge volumes of the creeks and rivers does, however, not exist for a number of agricultural catchments in southeast Queensland, and such information is necessary to calibrate and validate numerical models. Therefore, a two-step modelling approach was used in which calibrated and validated parameter values from a nearby gauged reference catchment served as starting values for the ungauged Elimbah Creek catchment. Transposing monthly calibrated and validated parameter values from the reference catchment to the ungauged catchment significantly improved model performance, showing that the hydrological model of the catchment of interest is a strong predictor of the water balance. The model efficiency coefficient EF shows that 94% of the simulated discharge matches the observed flow, whereas only 54% of the observed streamflow was simulated by the SWAT model prior to using the validated values from the reference catchment. In addition, the hydrological model confirmed that surface runoff contributes the majority of flow to the surface water in the catchment (65%), while only a small proportion of the water in the creek is contributed by baseflow (35%). This finding supports the results of the stable isotopes 18O/16O and 2H/1H, which show that the main source of water in the creeks is either local precipitation or irrigation water delivered by surface runoff; a contribution from the groundwater (baseflow) to the creeks could not be identified using 18O/16O and 2H/1H. 
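The model efficiency coefficient reported here is, in SWAT studies, typically the Nash–Sutcliffe efficiency: one minus the ratio of the residual variance to the variance of the observed flows, so that a value of 1 is a perfect fit. A minimal sketch of the computation (the discharge values below are illustrative, not from the study):

```python
def nash_sutcliffe(observed, simulated):
    """Nash-Sutcliffe model efficiency (EF): 1 minus the ratio of
    residual variance to the variance of the observed series."""
    mean_obs = sum(observed) / len(observed)
    ss_res = sum((o - s) ** 2 for o, s in zip(observed, simulated))
    ss_obs = sum((o - mean_obs) ** 2 for o in observed)
    return 1.0 - ss_res / ss_obs

# Illustrative monthly discharge series (m3 s-1):
obs = [1.2, 0.8, 2.5, 3.1, 0.6, 0.9]
sim = [1.0, 0.9, 2.4, 2.8, 0.7, 1.1]
ef = nash_sutcliffe(obs, sim)  # values near 1 indicate a good fit
```

An EF of 0.94, as obtained after transposing the reference-catchment parameters, therefore means that the simulated series explains 94% of the variance in the observed flows.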
In addition, the SWAT model calculated that around 68% of the rainfall occurring in the catchment is lost through evapotranspiration, reflecting the prevailing long-term drought conditions observed prior to and during the study. Stream discharge from the forested subcatchment was an order of magnitude lower than discharge from the agricultural Six Mile Creek subcatchment. A simulated change in land use from forestry to agriculture did not significantly change the catchment water balance; nutrient loads, however, increased considerably. Conversely, a simulated change from agriculture to forestry resulted in a significant decrease in nitrogen loads. The findings of the thesis and the approach used are shown to be of value to catchment water quality monitoring on a wider scale, in particular regarding the implications of mixed land use for nutrient forms, distributions and concentrations. The study confirms that in the tropics and subtropics the water balance is affected by extended dry periods and seasonal rainfall with intensive storm events. In particular, the comprehensive data set of inorganic and organic N and P forms in the surface and groundwaters of this subtropical setting, acquired during the one-year sampling program, may be used in similar catchment hydrological studies where such detailed information is missing. The study also concludes that riparian buffer zones along the catchment drainage system attenuate the transport of nitrogen from agricultural sources in the surface water: concentrations of N decreased from upstream to downstream locations and were negligible at the outlet of the catchment.
Abstract:
BACKGROUND: Infection by dengue virus (DENV) is a major public health concern in hundreds of tropical and subtropical countries. French Polynesia (FP) regularly experiences epidemics that initiate, or follow, DENV circulation in other South Pacific Island Countries (SPICs). In January 2009, after a decade of serotype 1 (DENV-1) circulation, the first cases of DENV-4 infection were reported in FP. Two months later a new epidemic emerged, occurring about 20 years after the previous circulation of DENV-4 in FP. In this study, we investigated the epidemiological and molecular characteristics of the introduction, spread and genetic microevolution of DENV-4 in FP. METHODOLOGY/PRINCIPAL FINDINGS: Epidemiological data suggested that recent transmission of DENV-4 in FP started in the Leeward Islands and that this serotype quickly displaced DENV-1 throughout FP. Phylogenetic analyses of the nucleotide sequences of the envelope (E) gene of 64 DENV-4 strains collected in FP in the 1980s and in 2009-2010, together with some additional strains from other SPICs, showed that DENV-4 strains from the SPICs were distributed into genotypes IIa and IIb. Recent FP strains were distributed into two clusters, each comprising viruses from other but distinct SPICs, suggesting that the emergence of DENV-4 in FP in 2009 resulted from multiple introductions. In addition, we observed that almost all strains collected in the SPICs in the 1980s exhibit an amino acid (aa) substitution V287I within domain I of the E protein, and that all recent South Pacific strains exhibit a T365I substitution within domain III. CONCLUSIONS/SIGNIFICANCE: This study confirmed the cyclic re-emergence and displacement of DENV serotypes in FP. Moreover, our results showed that specific aa substitutions on the E protein were present on all DENV-4 strains circulating in SPICs. 
These substitutions, probably acquired and subsequently conserved, could reflect a founder effect associated with the epidemiological, geographical, eco-biological and social specificities of the SPICs.
Abstract:
This research evaluated the effect of obesity on the acute cumulative transverse strain of the Achilles tendon in response to exercise. Twenty healthy adult males were categorized into ‘low normal-weight’ (BMI <23 kg m−2) and ‘overweight’ (BMI >27.5 kg m−2) groups based on intermediate cut-off points recommended by the World Health Organization. Longitudinal sonograms of the right Achilles tendon were acquired immediately prior to and following weight-bearing ankle exercises. Achilles tendon thickness was measured 20 mm proximal to the calcaneal insertion, and transverse tendon strain was calculated as the natural log of the ratio of post- to pre-exercise tendon thickness. The Achilles tendon was thicker in the overweight group both prior to (t18 = −2.91, P = 0.009) and following (t18 = −4.87, P < 0.001) exercise. The acute transverse strain response of the Achilles tendon in the overweight group (−10.7 ± 2.5%), however, was almost half that of the ‘low normal-weight’ group (−19.5 ± 7.4%) (t18 = −3.56, P = 0.004). These findings suggest that obesity is associated with structural changes in tendon that impair intra-tendinous fluid movement in response to load, and provide new insights into the link between tendon pathology and overweight and obesity.
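The strain definition used above (natural log of the post- to pre-exercise thickness ratio) can be sketched as follows; the thickness values are illustrative, not measurements from the study:

```python
import math

def transverse_strain(pre_mm, post_mm):
    """Acute transverse strain as the natural log of the post- to
    pre-exercise tendon thickness ratio; negative values indicate
    the tendon has thinned after loading."""
    return math.log(post_mm / pre_mm)

# Illustrative example: a tendon thinning from 5.0 mm to 4.1 mm
strain = transverse_strain(5.0, 4.1)  # about -0.198, i.e. roughly -20%
```

Using the log ratio rather than a simple percentage change makes strains symmetric and additive across repeated measurements, which suits a cumulative response measure.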