Abstract:
A variety of foods have been implicated in the symptoms of patients with Irritable Bowel Syndrome (IBS), but wheat products are most frequently cited by patients as a trigger. Our aim was to investigate the effects of breads fermented for different lengths of time on the colonic microbiota, using in vitro batch culture experiments. A set of in vitro anaerobic culture systems were run over a period of 24 h using faeces from 3 different IBS donors (Rome Criteria, mainly constipated) and 3 healthy donors. Changes in the gut microbiota over a time course were identified by fluorescence in situ hybridisation (FISH), whilst the small-molecular-weight metabolomic profile was determined by NMR analysis. Gas production was separately investigated in non-pH-controlled, 36 h batch culture experiments. Numbers of bifidobacteria were higher in healthy subjects compared to IBS donors. In addition, the healthy donors showed a significant increase in bifidobacteria (P<0.005) after 8 h of fermentation of a bread produced using a sourdough process (type C) compared to breads produced with commercial yeasted dough (type B) or with no fermentation time (Chorleywood Breadmaking Process) (type A). A significant decrease in δ-Proteobacteria and most Gemmatimonadetes species was observed after 24 h fermentation of type C bread in both IBS and healthy donors. In general, IBS donors showed higher rates of gas production compared to healthy donors. Rates of gas production for type A and conventional long fermentation (type B) breads were almost identical in IBS and healthy donors. Sourdough bread produced significantly lower cumulative gas after 15 h fermentation compared to type A and B breads in IBS donors, but not in the healthy controls. In conclusion, breads fermented by the traditional long fermentation and sourdough processes are less likely to lead to IBS symptoms than bread made using the Chorleywood Breadmaking Process.
Abstract:
Mammalian aging is accompanied by a progressive loss of skeletal muscle, a process called sarcopenia. Myostatin, a secreted member of the transforming growth factor-β family of signaling molecules, has been shown to be a potent inhibitor of muscle growth. Here, we examined whether muscle growth could be promoted in aged animals by antagonizing the activity of myostatin through the neutralizing activity of the myostatin propeptide. We show that a single injection of an AAV8 virus expressing the myostatin propeptide induced an increase in whole-body weight and in the mass of all muscles examined within 7 weeks of treatment. Our cellular studies demonstrate that muscle enlargement was due to selective fiber type hypertrophy, which was accompanied by a shift toward a glycolytic phenotype. Our molecular investigations elucidate the mechanism underpinning muscle hypertrophy by showing a decrease in the expression of key genes that control ubiquitin-mediated protein breakdown. Most importantly, we show that the hypertrophic muscle that develops as a consequence of myostatin propeptide treatment in aged mice has normal contractile properties. We suggest that attenuating myostatin signaling could be a very attractive strategy to halt and possibly reverse age-related muscle loss.
Abstract:
There is a strong drive towards hyperresolution Earth system models in order to resolve finer scales of motion in the atmosphere. The problem of obtaining a more realistic representation of terrestrial fluxes of heat and water, however, is not just a problem of moving to hyperresolution grid scales. It is much more a question of a lack of knowledge about the parameterisation of processes at whatever grid scale is being used for a wider modelling problem. Hyperresolution grid scales cannot alone solve the problem of this hyperresolution ignorance. This paper discusses these issues in more detail with specific reference to land surface parameterisations and flood inundation models. The importance of making local hyperresolution model predictions available for evaluation by local stakeholders is stressed. It is expected that this will be a major driving force for improving model performance in the future.
Keith Beven, Hannah Cloke, Florian Pappenberger, Rob Lamb, Neil Hunter
Abstract:
Animal models are invaluable tools which allow us to investigate the microbiome-host dialogue. However, experimental design introduces biases into the data that we collect, potentially leading to biased conclusions. With obesity at pandemic levels, animal models of this disease have been developed; we investigated the role of experimental design in one such rodent model. We used 454 pyrosequencing to profile the faecal bacteria of obese (n = 6) and lean (homozygous n = 6; heterozygous n = 6) Zucker rats over a 10 week period, maintained in mixed-genotype cages, to further understand the relationships between the composition of the intestinal bacteria and age, obesity progression, genetic background and cage environment. Phylogenetic and taxon-based univariate and multivariate analyses (non-metric multidimensional scaling, principal component analysis) showed that age was the most significant source of variation in the composition of the faecal microbiota. Second to this, cage environment was found to clearly impact the composition of the faecal microbiota, with samples from animals within the same cage showing high community structure concordance, but large differences seen between cages. Importantly, the genetically induced obese phenotype was not found to impact the faecal bacterial profiles. These findings demonstrate that age and the local cage environment were driving the composition of the faecal bacteria and were more deterministically important than the host genotype. These findings have major implications for interpreting functional metagenomic data in experimental studies, and beg the question: what is being measured in animal experiments in which different strains are housed separately, nature or nurture?
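Ordinations such as the NMDS used here are built from a pairwise community dissimilarity matrix. As a minimal sketch, assuming the commonly used Bray-Curtis dissimilarity (the abstract does not name the metric, so this choice is an assumption), the cage effect corresponds to within-cage pairs being less dissimilar than between-cage pairs:

```python
import numpy as np

def bray_curtis(u, v):
    """Bray-Curtis dissimilarity between two taxon-abundance vectors:
    0 means identical communities, 1 means no shared taxa."""
    u = np.asarray(u, dtype=float)
    v = np.asarray(v, dtype=float)
    return float(np.abs(u - v).sum() / (u + v).sum())

# Hypothetical abundance profiles (illustrative numbers, not study data):
# two cage-mates vs. an animal from a different cage
cage_mate_a = [30, 50, 10, 10]
cage_mate_b = [28, 52, 12, 8]
other_cage  = [5, 10, 60, 25]

within = bray_curtis(cage_mate_a, cage_mate_b)   # small: similar communities
between = bray_curtis(cage_mate_a, other_cage)   # large: divergent communities
```

An NMDS ordination of the full pairwise matrix would then place cage-mates close together, which is the concordance pattern the study reports.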
Abstract:
At the most recent session of the Conference of the Parties (COP19) in Warsaw (November 2013), the Warsaw international mechanism for loss and damage associated with climate change impacts was established under the United Nations Framework Convention on Climate Change (UNFCCC). The mechanism aims at promoting the implementation of approaches to address loss and damage associated with the adverse effects of climate change. Specifically, it aims to enhance understanding of risk management approaches to address loss and damage. Understanding risks associated with impacts due to highly predictable (slow onset) events like sea-level rise is relatively straightforward, whereas assessing the effects of climate change on extreme weather events and their impacts is much more difficult. However, extreme weather events are a significant cause of loss of life and livelihoods, particularly in vulnerable countries and communities in Africa. The emerging science of probabilistic event attribution is relevant as it provides scientific evidence on the contribution of anthropogenic climate change to changes in the risk of extreme events. It thus provides the opportunity to explore scientifically-backed assessments of the human influence on such events. However, different ways of framing attribution questions can lead to very different assessments of change in risk. Here we explain the methods of, and implications of, different approaches to attributing extreme weather events, with a focus on Africa. Crucially, we demonstrate that defining the most appropriate attribution question to ask is not a science decision but needs to be made in dialogue with those stakeholders who will use the answers.
Abstract:
Background: Advances in nutritional assessment are continuing to embrace developments in computer technology. The online Food4Me food frequency questionnaire (FFQ) was created as an electronic system for the collection of nutrient intake data. To ensure its accuracy in assessing both nutrient and food group intake, further validation against data obtained using a reliable, but independent, instrument and assessment of its reproducibility are required. Objective: The aim was to assess the reproducibility and validity of the Food4Me FFQ against a 4-day weighed food record (WFR). Methods: Reproducibility of the Food4Me FFQ was assessed using test-retest methodology by asking participants to complete the FFQ on 2 occasions 4 weeks apart. To assess the validity of the Food4Me FFQ against the 4-day WFR, half the participants were also asked to complete a 4-day WFR 1 week after the first administration of the Food4Me FFQ. Levels of agreement between nutrient and food group intakes estimated by the repeated Food4Me FFQ, and between the Food4Me FFQ and the 4-day WFR, were evaluated using Bland-Altman methodology and classification into quartiles of daily intake. Crude unadjusted correlation coefficients were also calculated for nutrient and food group intakes. Results: In total, 100 people participated in the assessment of reproducibility (mean age 32, SD 12 years), and 49 of these (mean age 27, SD 8 years) also took part in the assessment of validity. Crude unadjusted correlations for the repeated Food4Me FFQ ranged from .65 (vitamin D) to .90 (alcohol). The mean cross-classification into “exact agreement plus adjacent” was 92% for both nutrient and food group intakes, and Bland-Altman plots showed good agreement for energy-adjusted macronutrient intakes.
Agreement between the Food4Me FFQ and 4-day WFR varied, with crude unadjusted correlations ranging from .23 (vitamin D) to .65 (protein, % total energy) for nutrient intakes and .11 (soups, sauces and miscellaneous foods) to .73 (yogurts) for food group intake. The mean cross-classification into “exact agreement plus adjacent” was 80% and 78% for nutrient and food group intake, respectively. There were no significant differences between energy intakes estimated using the Food4Me FFQ and 4-day WFR, and Bland-Altman plots showed good agreement for both energy and energy-controlled nutrient intakes. Conclusions: The results demonstrate that the online Food4Me FFQ is reproducible for assessing nutrient and food group intake and has moderate agreement with the 4-day WFR for assessing energy and energy-adjusted nutrient intakes. The Food4Me FFQ is a suitable online tool for assessing dietary intake in healthy adults.
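The Bland-Altman methodology used above reduces, per nutrient, to the mean between-instrument difference (bias) and its 95% limits of agreement. A minimal sketch with hypothetical intakes (the numbers are illustrative only, not from the study):

```python
import numpy as np

def bland_altman_limits(method_a, method_b):
    """Bland-Altman agreement between two dietary instruments:
    returns the mean bias and the 95% limits of agreement."""
    diff = np.asarray(method_a, dtype=float) - np.asarray(method_b, dtype=float)
    bias = diff.mean()            # systematic offset between the two methods
    sd = diff.std(ddof=1)         # sample SD of the per-subject differences
    return bias, bias - 1.96 * sd, bias + 1.96 * sd

# Hypothetical energy intakes (kcal/day) for 5 participants
ffq = [2100, 1850, 2400, 1990, 2250]   # food frequency questionnaire
wfr = [2000, 1900, 2300, 2050, 2200]   # 4-day weighed food record
bias, loa_lower, loa_upper = bland_altman_limits(ffq, wfr)
```

A Bland-Altman plot then scatters each subject's difference against the mean of the two measurements, with horizontal lines at the bias and the two limits; "good agreement" means most points fall within the limits and the bias is near zero.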
Abstract:
Highly heterogeneous mountain snow distributions strongly affect soil moisture patterns; local ecology; and, ultimately, the timing, magnitude, and chemistry of stream runoff. Capturing these vital heterogeneities in a physically based distributed snow model requires appropriately scaled model structures. This work looks at how model scale—particularly the resolutions at which the forcing processes are represented—affects simulated snow distributions and melt. The research area is in the Reynolds Creek Experimental Watershed in southwestern Idaho. In this region, where there is a negative correlation between snow accumulation and melt rates, overall scale degradation pushed simulated melt to earlier in the season. The processes mainly responsible for snow distribution heterogeneity in this region—wind speed, wind-affected snow accumulations, thermal radiation, and solar radiation—were also independently rescaled to test process-specific spatiotemporal sensitivities. It was found that in order to accurately simulate snowmelt in this catchment, the snow cover needed to be resolved to 100 m. Wind and wind-affected precipitation—the primary influence on snow distribution—required similar resolution. Thermal radiation scaled with the vegetation structure (~100 m), while solar radiation was adequately modeled with 100–250-m resolution. Spatiotemporal sensitivities to model scale were found that allowed for further reductions in computational costs through the winter months with limited losses in accuracy. It was also shown that these modeling-based scale breaks could be associated with physiographic and vegetation structures to aid a priori modeling decisions.
Abstract:
Climate change is expected to modify rainfall, temperature and catchment hydrological responses across the world, and adapting to these water-related changes is a pressing challenge. This paper reviews the impact of anthropogenic climate change on water in the UK and looks at projections of future change. The natural variability of the UK climate makes change hard to detect; only historical increases in air temperature can be attributed to anthropogenic climate forcing, but over the last 50 years more winter rainfall has been falling in intense events. Future changes in rainfall and evapotranspiration could lead to changed flow regimes and impacts on water quality, aquatic ecosystems and water availability. Summer flows may decrease on average, but floods may become larger and more frequent. River and lake water quality may decline as a result of higher water temperatures, lower river flows and increased algal blooms in summer, and because of higher flows in the winter. In communicating this important work, researchers should pay particular attention to explaining confidence and uncertainty clearly. Much of the relevant research is either global or highly localized: decision-makers would benefit from more studies that address water and climate change at a spatial and temporal scale appropriate for the decisions they make.
Abstract:
Satellite-based (e.g., Synthetic Aperture Radar [SAR]) water level observations (WLOs) of the floodplain can be sequentially assimilated into a hydrodynamic model to decrease forecast uncertainty. This has the potential to keep the forecast on track, thus providing an Earth Observation (EO) based flood forecast system. However, the operational applicability of such a system for floods that develop over river networks requires further testing. One promising family of techniques for assimilation in this field is the ensemble Kalman filters (EnKF). These filters use a limited-size ensemble representation of the forecast error covariance matrix. This representation tends to develop spurious correlations as the forecast-assimilation cycle proceeds, which is a further complication for dealing with floods in either urban areas or river junctions in rural environments. Here we evaluate the assimilation of WLOs obtained from a sequence of real SAR overpasses (the X-band COSMO-SkyMed constellation) in a case study. We show that a direct application of a global Ensemble Transform Kalman Filter (ETKF) suffers from filter divergence caused by spurious correlations. However, a spatially-based filter localization substantially moderates the development of spurious correlations in the forecast error covariance matrix, directly improving the forecast and also making it possible to further benefit from simultaneous online inflow error estimation and correction. Additionally, we propose and evaluate a novel along-network metric for filter localization, which is physically meaningful for the flood-over-a-network problem. Using this metric, we further evaluate the simultaneous estimation of channel friction and spatially-variable channel bathymetry, for which the filter seems able to converge simultaneously to sensible values. Results also indicate that friction is a second-order effect in flood inundation models applied to gradually varied flow in large rivers. The study is not conclusive regarding whether, in an operational situation, the simultaneous estimation of friction and bathymetry helps the current forecast. Overall, the results indicate the feasibility of stand-alone EO-based operational flood forecasting.
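Distance-based filter localization of the kind described is commonly implemented by tapering each ensemble covariance with a compactly supported weight that decays to zero with distance. A sketch using the widely used Gaspari-Cohn fifth-order function (the choice of taper is an assumption; the paper's along-network metric would simply replace Euclidean distance as the `dist` argument):

```python
import numpy as np

def gaspari_cohn(dist, c):
    """Gaspari-Cohn fifth-order taper: weights in [0, 1] that damp
    spurious long-range ensemble covariances; exactly zero beyond 2*c,
    where c is the localization half-width (in the chosen distance metric)."""
    r = np.abs(np.asarray(dist, dtype=float)) / c
    w = np.zeros_like(r)
    near = r <= 1.0
    far = (r > 1.0) & (r < 2.0)
    w[near] = (-0.25 * r[near]**5 + 0.5 * r[near]**4 + 0.625 * r[near]**3
               - (5.0 / 3.0) * r[near]**2 + 1.0)
    w[far] = ((1.0 / 12.0) * r[far]**5 - 0.5 * r[far]**4 + 0.625 * r[far]**3
              + (5.0 / 3.0) * r[far]**2 - 5.0 * r[far] + 4.0
              - 2.0 / (3.0 * r[far]))
    return w

# Distances (e.g. km along the channel network) from one observation
distances = np.array([0.0, 5.0, 15.0, 25.0])
weights = gaspari_cohn(distances, 10.0)   # multiplied element-wise into the covariances
```

In a localized EnKF/ETKF the forecast error covariances are multiplied element-wise by these weights, so remote state variables are effectively decoupled from each observation, which is what suppresses the filter divergence reported for the global ETKF.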
Abstract:
A Canopy Height Profile (CHP) procedure presented in Harding et al. (2001) for large-footprint LiDAR data was tested in a closed canopy environment as a way of extracting vertical foliage profiles from raw-waveform LiDAR. In this study, an adaptation of this method to small-footprint data is shown, tested and validated in an Australian sparse canopy forest at plot- and site-level. Further, the methodology itself has been enhanced by implementing a dataset-adjusted reflectance ratio calculation according to Armston et al. (2013) in the processing chain, tested against a fixed ratio of 0.5 estimated for the laser wavelength of 1550 nm. As a by-product of the methodology, effective leaf area index (LAIe) estimates were derived and compared to hemispherical photography-derived values. To assess the influence of LiDAR aggregation area size on the estimates in a sparse canopy environment, LiDAR CHPs and LAIes were generated by aggregating waveforms to plot- and site-level footprints (plot/site-aggregated) as well as in 5 m grids (grid-processed). LiDAR profiles were then compared to leaf biomass field profiles generated from field tree measurements. The correlation between field and LiDAR profiles was very high, with a mean R2 of 0.75 at plot-level and 0.86 at site-level for 55 plots and the corresponding 11 sites. Gridding had almost no impact on the correlation between LiDAR and field profiles (only a marginal improvement), nor did the dataset-adjusted reflectance ratio. However, gridding and the dataset-adjusted reflectance ratio were found to improve the correlation between raw-waveform LiDAR and hemispherical photography LAIe estimates, yielding the highest correlations of 0.61 at plot-level and 0.83 at site-level. This proved the validity of the approach and the superiority of the dataset-adjusted reflectance ratio of Armston et al. (2013) over a fixed ratio of 0.5 for LAIe estimation, and showed the adequacy of small-footprint LiDAR data for LAIe estimation in discontinuous canopy forests.
Abstract:
This study compared preliminary estimates of effective leaf area index (LAI) derived from fish-eye lens photographs to those estimated from airborne full-waveform small-footprint LiDAR data for a forest dataset in Australia. The full-waveform data were decomposed and optimized using a trust-region-reflective algorithm to extract denser point clouds. LiDAR LAI estimates were derived in two ways: (1) from the probability of discrete pulses reaching the ground without being intercepted (point method); and (2) from raw-waveform canopy height profile processing adapted to small-footprint laser altimetry (waveform method), accounting for the reflectance ratio between vegetation and ground. The best results, which matched the hemispherical photography estimates, were achieved for the waveform method with a study area-adjusted reflectance ratio of 0.4 (RMSE of 0.15 and 0.03 at plot and site level, respectively). The point method generally overestimated, whereas the waveform method with an arbitrary reflectance ratio of 0.5 underestimated, the fish-eye lens LAI estimates.
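The point method described above is usually a Beer-Lambert inversion: the fraction of pulses reaching the ground unintercepted estimates the gap fraction, and LAIe = -ln(Pgap)/k. A minimal sketch under that assumption (the extinction coefficient k = 0.5, corresponding to a spherical leaf-angle distribution, is also an assumption, not a value from the study):

```python
import math

def lai_point_method(n_ground, n_total, k=0.5):
    """Effective LAI from the gap fraction of discrete LiDAR returns
    via Beer-Lambert inversion. k is the extinction coefficient;
    k = 0.5 assumes a spherical leaf-angle distribution."""
    p_gap = n_ground / n_total   # fraction of pulses reaching the ground
    return -math.log(p_gap) / k

# Hypothetical plot: 600 of 1000 pulses reach the ground unintercepted
lai = lai_point_method(600, 1000)
```

Because every echo below the canopy top counts toward interception regardless of how much foliage it actually hit, this discrete-return estimator tends to bias high in sparse canopies, consistent with the overestimation the study reports for the point method.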
Abstract:
Animals are imbued with adaptive mechanisms, spanning from the tissue/organ to the cellular scale, which ensure that processes of homeostasis are preserved in the landscape of size change. However, we and others have postulated that the degree of adaptation is limited and that, once outside the normal levels of size fluctuation, cells and tissues function in an aberrant manner. In this study we examine the function of muscle in the myostatin null mouse, which is an excellent model of hypertrophy beyond normal levels of growth, and the consequences of acute starvation used to restore mass. We show that muscle growth is sustained through protein synthesis driven by Serum/Glucocorticoid Kinase 1 (SGK1) rather than Akt1. Furthermore, our metabonomic profiling of hypertrophic muscle shows that carbon from nutrient sources is being channelled into the production of biomass rather than ATP. However, the muscle displays elevated levels of autophagy and decreased levels of muscle tension. We demonstrate that myostatin null muscle is acutely sensitive to changes in diet, activating both the proteolytic and autophagy programmes and shutting down protein synthesis more extensively than is the case for wild-types. Strikingly, we show that acute starvation, which is detrimental to wild-type animals, is beneficial in terms of metabolism and muscle function in myostatin null mice, normalising tension production.