39 results for Peter W. Williams
Abstract:
OBJECTIVE: To determine the effect of altering meal frequency on postprandial lipaemia and associated parameters. DESIGN: A randomized open cross-over study to examine the programming effects of altering meal frequency. A standard test meal was given on three occasions: (i) following the normal diet; (ii) after a period of two weeks on a nibbling diet; and (iii) after a period of two weeks on a gorging diet. SETTING: Free-living subjects associated with the University of Surrey. SUBJECTS: Eleven female volunteers (age 22 +/- 0.89 y) were recruited. INTERVENTIONS: The subjects were requested to consume the same foods on either a nibbling diet (12 meals per day) or a gorging diet (three meals per day) for a period of two weeks. The standard test meal, containing 80 g fat, 63 g carbohydrate and 20 g protein, was administered on the day prior to the dietary intervention and on the day following each period of intervention. MAJOR OUTCOME MEASURES: Fasting and postprandial blood samples were taken for the analysis of plasma triacylglycerol, non-esterified fatty acids, glucose, immunoreactive insulin, glucose-dependent insulinotropic polypeptide (GIP) and glucagon-like peptide-1 (GLP-1) levels, fasting total, low density lipoprotein (LDL)- and high density lipoprotein (HDL)-cholesterol concentrations, and postheparin lipoprotein lipase (LPL) activity. Plasma paracetamol was measured following administration of a 1.5 g paracetamol load with the meal as an index of gastric emptying. RESULTS: Compliance with the two dietary regimes was high and there were no significant differences between the nutrient intakes on the two intervention diets. There were no significant differences in fasting or postprandial plasma concentrations of triacylglycerol, non-esterified fatty acids, glucose, immunoreactive insulin, GIP or GLP-1 in response to the standard test meal following the nibbling or gorging dietary regimes. There were no significant differences in fasting total or LDL-cholesterol concentrations, or in the 15 min postheparin lipoprotein lipase activity measurements. There was a significant increase in HDL-cholesterol in the subjects following the gorging diet compared with the nibbling diet. DISCUSSION: The results suggest that prior meal frequency over a period of two weeks in young healthy women does not alter the fasting or postprandial lipid or hormonal response to a standard high-fat meal. CONCLUSIONS: The findings of this study did not confirm previous studies suggesting that nibbling is beneficial in reducing lipid and hormone concentrations. The rigorous control of diet content and composition in the present study, compared with others, suggests that reported effects of meal frequency may be due to unintentional alterations in nutrient and energy intake in previous studies.
Abstract:
The solubility of penciclovir (C10H15N5O3) in a novel film formulation designed for the treatment of cold sores was determined using X-ray, thermal, microscopic and release-rate techniques. Solubilities of 0.15–0.23, 0.44, 0.53 and 0.42% (w/w) were obtained for the respective procedures. Linear calibration lines were achieved for experimentally and theoretically determined differential scanning calorimetry (DSC) and X-ray powder diffractometry (XRPD) data. Intra- and inter-batch precision values were determined; the intra-batch values were more precise. Microscopy was additionally useful for examining crystal shape, size distribution and homogeneity of drug distribution within the film. DSC also determined the melting point, XRPD identified polymorphs, and the release data provided the relevant kinetics.
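Illustrative only: the abstract reports linear calibration lines for the DSC and XRPD data. A minimal sketch of how such a calibration could be built and used to estimate the drug's solubility in the film is given below, on the assumption that the crystalline-drug peak response is proportional to undissolved drug, so the x-intercept of response versus loading approximates the solubility limit. The loadings and peak responses are hypothetical placeholders, not values from the study.

```python
# Minimal linear-calibration sketch (hypothetical data, not from the study).
# A DSC melting endotherm (or XRPD peak area) is assumed to respond only to
# crystalline (undissolved) drug, so the x-intercept of response vs. loading
# approximates the drug's solubility in the film matrix.
import numpy as np

loading = np.array([0.5, 1.0, 2.0, 4.0, 8.0])    # % w/w penciclovir in film
response = np.array([0.1, 0.6, 1.7, 3.8, 8.0])   # arbitrary peak-area units

slope, intercept = np.polyfit(loading, response, 1)  # least-squares line
solubility_est = -intercept / slope                  # x-intercept, % w/w

r = np.corrcoef(loading, response)[0, 1]
print(f"slope={slope:.3f}, intercept={intercept:.3f}, r^2={r**2:.4f}")
print(f"estimated solubility ~ {solubility_est:.2f}% w/w")
```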
Abstract:
In situ high-resolution aircraft measurements of cloud microphysical properties were made in coordination with ground-based remote sensing observations, using radar and lidar, of a line of small cumulus clouds, as part of the Aerosol Properties, PRocesses And InfluenceS on the Earth's climate (APPRAISE) project. A narrow but extensive line (~100 km long) of shallow convective clouds over the southern UK was studied. Cloud top temperatures were observed to be higher than −8 °C, but the clouds were seen to consist of supercooled droplets and varying concentrations of ice particles. No ice particles were observed to be falling into the cloud tops from above. Current parameterisations of ice nuclei (IN) numbers predict that too few particles will be active as ice nuclei to account for the ice particle concentrations at the observed, near cloud top, temperatures (−7.5 °C). The role of mineral dust particles, at concentrations consistent with those observed near the surface, acting as high-temperature IN is considered important in this case. It was found that very high concentrations of ice particles (up to 100 L−1) could be produced by secondary ice particle production, provided the observed small amount of primary ice (about 0.01 L−1) was present to initiate it. This emphasises the need to understand primary ice formation in slightly supercooled clouds. It is shown using simple calculations that the Hallett-Mossop (HM) process is the likely source of the secondary ice. Model simulations of the case study were performed with the Aerosol Cloud and Precipitation Interactions Model (ACPIM). These parcel-model investigations confirmed the HM process to be a very important mechanism for producing the observed high ice concentrations. A key step in generating the high concentrations was the collision and coalescence of rain drops, which, once formed, fell rapidly through the cloud, collecting ice particles which caused them to freeze and form large rimed particles almost instantly. The broadening of the droplet size distribution by collision-coalescence was therefore a vital step in this process, as it was required to generate the large number of ice crystals observed in the time available. Simulations were also performed with the WRF (Weather Research and Forecasting) model. The results showed that while HM does act to increase the mass and number concentration of ice particles in these model simulations, it was not found to be critical for the formation of precipitation. However, the WRF simulations produced a cloud top that was too cold, and this, combined with the assumption of continual replenishing of ice nuclei removed by ice crystal formation, resulted in too many ice crystals forming by primary nucleation compared to the observations and parcel modelling.
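As a rough illustration of the stated IN shortfall, the classic Fletcher (1962) temperature-dependent parameterisation (one of several published schemes, and not necessarily the one evaluated in the study) can be compared with the concentrations quoted above; the sketch below simply evaluates that formula at the observed cloud-top temperature.

```python
# Compare a classic ice-nuclei parameterisation with the ice concentrations
# quoted in the abstract. Fletcher (1962):
#   N_IN(T) = N0 * exp(a * (T0 - T)),  with N0 ~ 1e-5 L^-1 and a ~ 0.6 K^-1.
# This is only one of several schemes and is used here purely for illustration.
import math

N0 = 1e-5   # L^-1
a = 0.6     # K^-1
T0 = 0.0    # deg C

def fletcher_in(temp_c: float) -> float:
    """Predicted active ice-nuclei concentration (per litre) at temp_c."""
    return N0 * math.exp(a * (T0 - temp_c))

cloud_top_temp = -7.5                    # deg C, from the observations
predicted = fletcher_in(cloud_top_temp)  # ~1e-3 L^-1
observed_primary = 0.01                  # L^-1, inferred primary ice
observed_total = 100.0                   # L^-1, after secondary production

print(f"Fletcher IN at {cloud_top_temp} degC: {predicted:.1e} L^-1")
print(f"Observed primary ice: {observed_primary:.0e} L^-1")
print(f"Observed total ice:   {observed_total:.0e} L^-1 "
      f"({observed_total / predicted:.0f}x the parameterised IN)")
```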
Abstract:
We perform a multimodel detection and attribution study with climate model simulation output and satellite-based measurements of tropospheric and stratospheric temperature change. We use simulation output from 20 climate models participating in phase 5 of the Coupled Model Intercomparison Project. This multimodel archive provides estimates of the signal pattern in response to combined anthropogenic and natural external forcing (the fingerprint) and the noise of internally generated variability. Using these estimates, we calculate signal-to-noise (S/N) ratios to quantify the strength of the fingerprint in the observations relative to fingerprint strength in natural climate noise. For changes in lower stratospheric temperature between 1979 and 2011, S/N ratios vary from 26 to 36, depending on the choice of observational dataset. In the lower troposphere, the fingerprint strength in observations is smaller, but S/N ratios are still significant at the 1% level or better, and range from three to eight. We find no evidence that these ratios are spuriously inflated by model variability errors. After removing all global mean signals, model fingerprints remain identifiable in 70% of the tests involving tropospheric temperature changes. Despite such agreement in the large-scale features of model and observed geographical patterns of atmospheric temperature change, most models do not replicate the size of the observed changes. On average, the models analyzed underestimate the observed cooling of the lower stratosphere and overestimate the warming of the troposphere. Although the precise causes of such differences are unclear, model biases in lower stratospheric temperature trends are likely to be reduced by more realistic treatment of stratospheric ozone depletion and volcanic aerosol forcing.
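A heavily simplified sketch of the kind of signal-to-noise calculation described here: project both the observations and segments of control-run "noise" onto a model-derived fingerprint pattern, then compare the observed trend in that projection with the spread of trends from the noise segments. All arrays below are synthetic placeholders rather than CMIP5 or satellite data, and the single-fingerprint projection is a simplification of the full method.

```python
# Schematic signal-to-noise (S/N) fingerprint test with synthetic data.
import numpy as np

rng = np.random.default_rng(0)
n_years, n_grid = 33, 500                 # e.g. 1979-2011, flattened grid

fingerprint = rng.normal(size=n_grid)     # leading signal pattern (placeholder)
fingerprint /= np.linalg.norm(fingerprint)

# Synthetic "observations": noise plus a weak trend aligned with the fingerprint.
obs = rng.normal(size=(n_years, n_grid)) \
    + 0.02 * np.arange(n_years)[:, None] * fingerprint

def trend(series):
    """Least-squares linear trend per year."""
    years = np.arange(len(series))
    return np.polyfit(years, series, 1)[0]

# Signal: trend of the observations projected onto the fingerprint.
signal = trend(obs @ fingerprint)

# Noise: trends of non-overlapping control-run segments projected likewise.
control = rng.normal(size=(50 * n_years, n_grid))   # placeholder control run
segments = control.reshape(50, n_years, n_grid)
noise_trends = np.array([trend(seg @ fingerprint) for seg in segments])

sn_ratio = signal / noise_trends.std(ddof=1)
print(f"S/N ratio = {sn_ratio:.1f}")
```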
Abstract:
Cyclodextrins are water-soluble cyclic oligosaccharides consisting of six, seven or eight α-(1,4)-linked glucopyranose subunits. This study reports the use of different cyclodextrins in eye drop formulations to improve the aqueous solubility and corneal permeability of riboflavin. Riboflavin is a poorly soluble drug, with a solubility of up to 0.08 mg mL–1 in deionized water. It is administered topically to the eye to mediate UV-induced corneal cross-linking in the treatment of keratoconus. Aqueous solutions of β-cyclodextrin (10–30 mg mL–1) can enhance the solubility of riboflavin up to 0.12–0.19 mg mL–1, whereas a higher concentration of α-cyclodextrin (100 mg mL–1) achieved a lower level of enhancement, 0.11 mg mL–1. The other oligosaccharides were found to be ineffective for this purpose. In vitro diffusion experiments performed with fresh and cryopreserved bovine cornea demonstrated that β-cyclodextrin enhances riboflavin permeability. The mechanism of this enhancement was examined through microscopic histological analysis of the cornea and is discussed in this paper.
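For illustration, the solubilisation quoted here can be expressed as an enhancement factor, and a phase-solubility slope can be converted into an apparent 1:1 stability constant via the standard Higuchi–Connors relation K = slope / [S0(1 − slope)]. The molar masses and the assumption of a linear (A_L-type) phase-solubility profile are mine, not the paper's, so the resulting constant is only indicative.

```python
# Rough solubilisation arithmetic for riboflavin / beta-cyclodextrin, using
# the concentrations quoted in the abstract. The linear A_L-type profile and
# the Higuchi-Connors 1:1 treatment are illustrative assumptions.
MW_RIBOFLAVIN = 376.4   # g/mol
MW_BETA_CD = 1135.0     # g/mol (approximate)

s0_mg = 0.08            # mg/mL, intrinsic solubility in water (abstract)
s_cd_mg = 0.19          # mg/mL, solubility with 30 mg/mL beta-CD (abstract)
cd_mg = 30.0            # mg/mL beta-cyclodextrin

print(f"Enhancement factor: {s_cd_mg / s0_mg:.1f}x")

# mg/mL is numerically equal to g/L, so dividing by molar mass gives mol/L.
s0 = s0_mg / MW_RIBOFLAVIN
s_cd = s_cd_mg / MW_RIBOFLAVIN
cd = cd_mg / MW_BETA_CD

slope = (s_cd - s0) / cd               # assumed linear A_L profile
k_11 = slope / (s0 * (1 - slope))      # Higuchi-Connors 1:1 constant
print(f"Phase-solubility slope: {slope:.3f}")
print(f"Apparent K(1:1): {k_11:.0f} M^-1")
```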
Abstract:
Projections of climate change impacts on crop yields are inherently uncertain [1]. Uncertainty is often quantified when projecting future greenhouse gas emissions and their influence on climate [2]. However, multi-model uncertainty analysis of crop responses to climate change is rare because systematic and objective comparisons among process-based crop simulation models [1,3] are difficult [4]. Here we present the largest standardized model intercomparison for climate change impacts so far. We found that individual crop models are able to simulate measured wheat grain yields accurately under a range of environments, particularly if the input information is sufficient. However, simulated climate change impacts vary across models owing to differences in model structures and parameter values. A greater proportion of the uncertainty in climate change impact projections was due to variations among crop models than to variations among downscaled general circulation models. Uncertainties in simulated impacts increased with CO2 concentrations and associated warming. These impact uncertainties can be reduced by improving temperature and CO2 relationships in models and better quantified through use of multi-model ensembles. Less uncertainty in describing how climate change may affect agricultural productivity will aid adaptation strategy development and policymaking.
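A toy sketch of the kind of variance partitioning implied here: for each crop-model x GCM combination, the simulated yield impact can be decomposed, ANOVA-style, into variance attributable to crop models versus variance attributable to the downscaled general circulation models. The numbers are synthetic and the simple two-way decomposition is an assumption for illustration, not the study's exact protocol.

```python
# Toy two-way variance partition of simulated yield impacts (synthetic data).
# Rows = crop models, columns = downscaled GCMs; entries = % yield change.
import numpy as np

rng = np.random.default_rng(1)
n_crop, n_gcm = 10, 5
crop_effect = rng.normal(0, 6, size=(n_crop, 1))  # spread across crop models
gcm_effect = rng.normal(0, 2, size=(1, n_gcm))     # spread across GCMs
impacts = -5 + crop_effect + gcm_effect + rng.normal(0, 1, size=(n_crop, n_gcm))

grand = impacts.mean()
ss_crop = n_gcm * ((impacts.mean(axis=1) - grand) ** 2).sum()
ss_gcm = n_crop * ((impacts.mean(axis=0) - grand) ** 2).sum()
ss_total = ((impacts - grand) ** 2).sum()

print(f"Share of variance from crop models: {ss_crop / ss_total:.0%}")
print(f"Share of variance from GCMs:        {ss_gcm / ss_total:.0%}")
```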
Abstract:
A cardinal property of neural stem cells (NSCs) is their ability to adopt multiple fates upon differentiation. The epigenome is widely seen as a read-out of cellular potential and a manifestation of this can be seen in embryonic stem cells (ESCs), where promoters of many lineage-specific regulators are marked by a bivalent epigenetic signature comprising trimethylation of both lysine 4 and lysine 27 of histone H3 (H3K4me3 and H3K27me3, respectively). Bivalency has subsequently emerged as a powerful epigenetic indicator of stem cell potential. Here, we have interrogated the epigenome during differentiation of ESC-derived NSCs to immature GABAergic interneurons. We show that developmental transitions are accompanied by loss of bivalency at many promoters in line with their increasing developmental restriction from pluripotent ESC through multipotent NSC to committed GABAergic interneuron. At the NSC stage, the promoters of genes encoding many transcriptional regulators required for differentiation of multiple neuronal subtypes and neural crest appear to be bivalent, consistent with the broad developmental potential of NSCs. Upon differentiation to GABAergic neurons, all non-GABAergic promoters resolve to H3K27me3 monovalency, whereas GABAergic promoters resolve to H3K4me3 monovalency or retain bivalency. Importantly, many of these epigenetic changes occur before any corresponding changes in gene expression. Intriguingly, another group of gene promoters gain bivalency as NSCs differentiate toward neurons, the majority of which are associated with functions connected with maturation and establishment and maintenance of connectivity. These data show that bivalency provides a dynamic epigenetic signature of developmental potential in both NSCs and in early neurons. Stem Cells 2013;31:1868-1880.
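For illustration, the bivalency calls described above reduce to a simple classification over per-promoter H3K4me3 and H3K27me3 peak calls; the sketch below encodes that classification with placeholder gene names and peak calls rather than the study's data.

```python
# Classify promoters by H3K4me3 / H3K27me3 status (placeholder data).
def chromatin_state(h3k4me3: bool, h3k27me3: bool) -> str:
    if h3k4me3 and h3k27me3:
        return "bivalent"
    if h3k4me3:
        return "H3K4me3 monovalent"   # typically active or poised-active
    if h3k27me3:
        return "H3K27me3 monovalent"  # typically repressed
    return "unmarked"

# Hypothetical promoter peak calls at the NSC stage (not real data).
promoters = {
    "GabaergicTF_A": (True, True),
    "NonGabaergicTF_B": (True, True),
    "HousekeepingGene_C": (True, False),
}

for gene, (k4, k27) in promoters.items():
    print(f"{gene}: {chromatin_state(k4, k27)}")
```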
Abstract:
The objective biomization method developed by Prentice et al. (1996) for Europe was extended using modern pollen samples from Beringia and then applied to fossil pollen data to reconstruct palaeovegetation patterns at 6000 and 18,000 14C yr BP. The predicted modern distribution of tundra, taiga and cool conifer forests in Alaska and north-western Canada generally corresponds well to actual vegetation patterns, although sites in regions characterized today by a mosaic of forest and tundra vegetation tend to be preferentially assigned to tundra. Siberian larch forests are delimited less well, probably due to the extreme under-representation of Larix in pollen spectra. The biome distribution across Beringia at 6000 14C yr BP was broadly similar to today, with little change in the northern forest limit, except for a possible northward advance in the Mackenzie delta region. The western forest limit in Alaska was probably east of its modern position. At 18,000 14C yr BP the whole of Beringia was covered by tundra. However, the importance of the various plant functional types varied from site to site, supporting the idea that the vegetation cover was a mosaic of different tundra types.
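A minimal sketch of the affinity-score step at the heart of biomization, in the spirit of Prentice et al. (1996): each pollen taxon is mapped to candidate biomes via plant functional types, a small percentage threshold is subtracted, and the sample is assigned to the biome with the highest square-root-weighted affinity. The taxon lists, threshold and pollen percentages below are placeholders, not the calibration actually used.

```python
# Toy biomization: assign a pollen sample to the biome with the highest
# affinity, A(biome) = sum over member taxa of sqrt(max(0, p - theta)).
# Taxon-to-biome memberships and pollen percentages are placeholders.
from math import sqrt

BIOME_TAXA = {
    "tundra": {"Cyperaceae", "Salix", "Betula nana-type"},
    "taiga": {"Picea", "Larix", "Betula", "Alnus"},
    "cool conifer forest": {"Picea", "Pinus", "Abies"},
}
THETA = 0.5  # % threshold subtracted from each taxon before scoring

def biome_affinities(pollen_pct: dict) -> dict:
    return {
        biome: sum(sqrt(max(0.0, pollen_pct.get(t, 0.0) - THETA)) for t in taxa)
        for biome, taxa in BIOME_TAXA.items()
    }

sample = {"Cyperaceae": 35.0, "Salix": 10.0, "Betula": 20.0, "Picea": 3.0}
scores = biome_affinities(sample)
assigned = max(scores, key=scores.get)
print(scores)
print(f"Assigned biome: {assigned}")
```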
Abstract:
Ethylenediaminetetraacetic acid, ethylenediamine-N,N′-disuccinic acid and ethylene glycol-bis(2-aminoethylether)-N,N,N′,N′-tetraacetic acid are polyaminocarboxylic acids that are able to sequester metal ions. Calcium is implicated in the maintenance of the intercellular matrix, zonula occludens (tight junctions) and zonula adherens of epithelial and endothelial cells. The corneal epithelium is impervious to many aqueous formulations because it is lipophilic, which resists transcellular drug transit, whilst tight junctions restrict access via the paracellular route. Research has shown that the integrity of tight junctions in endothelial and epithelial cells breaks down with the loss of Ca2+. This study investigates different Ca2+-sequestering compounds and their effect on the corneal permeability of riboflavin at physiological pH. Riboflavin is a topically administered ocular drug applied during UV-induced corneal cross-linking for the treatment of keratoconus.
Abstract:
Various strategies for ocular drug delivery are considered, from basic formulation techniques for improving the availability of drugs: viscosity enhancers and mucoadhesives aid drug retention, while penetration enhancers promote drug transport into the eye. The use of drug-loaded contact lenses and ocular inserts allows drugs to be better placed where they are needed, for more direct delivery. Developments in ocular implants provide a means to overcome the physical barriers that traditionally prevented effective treatment. Implant technologies under development allow long-term drug delivery from a single procedure; these devices enable posterior chamber diseases to be treated effectively. Future developments could bring artificial corneas that eliminate the need for donor tissue, and one-off implantable drug depots lasting the patient's lifetime.
Abstract:
Medication safety and errors are a major concern in care homes. In addition to the identification of incidents, there is a need for a comprehensive system description to avoid the danger of introducing interventions that have unintended consequences and are therefore unsustainable. The aim of the study was to explore the impact and uniqueness of Work Domain Analysis (WDA) in facilitating an in-depth understanding of medication safety problems within the care home system, and to identify the potential benefits of WDA in designing safety interventions to improve medication safety. A comprehensive, systematic and contextual overview of the care home medication system was developed for the first time. The novel use of the Abstraction Hierarchy (AH) to analyse medication errors revealed the value of the AH in guiding a comprehensive analysis of errors and generating system improvement recommendations that took into account the contextual information of the wider system.
Abstract:
Although the adult brain contains neural stem cells (NSCs) that generate new neurons throughout life, these astrocyte-like populations are restricted to two discrete niches. Despite their terminally differentiated phenotype, adult parenchymal astrocytes can re-acquire NSC-like characteristics following injury, and as such, these 'reactive' astrocytes offer an alternative source of cells for central nervous system (CNS) repair following injury or disease. At present, the mechanisms that regulate the potential of different types of astrocytes are poorly understood. We used in vitro and ex vivo astrocytes to identify candidate pathways important for regulation of astrocyte potential. Using in vitro neural progenitor cell (NPC)-derived astrocytes, we found that exposure of more lineage-restricted astrocytes to either tumor necrosis factor alpha (TNF-α) (via nuclear factor-κB (NFκB)) or the bone morphogenetic protein (BMP) inhibitor, noggin, led to re-acquisition of NPC properties accompanied by transcriptomic and epigenetic changes consistent with a more neurogenic, NPC-like state. Comparative analyses of microarray data from in vitro-derived and ex vivo postnatal parenchymal astrocytes identified several common pathways and upstream regulators associated with inflammation (including transforming growth factor (TGF)-β1 and peroxisome proliferator-activated receptor gamma (PPARγ)) and cell cycle control (including TP53) as candidate regulators of astrocyte phenotype and potential. We propose that inflammatory signalling may control the normal, progressive restriction in the potential of differentiating astrocytes, as well as their potential under reactive conditions, and may represent a future target for therapies to harness the latent neurogenic capacity of parenchymal astrocytes.
Abstract:
In the event of a volcanic eruption the decision to close airspace is based on forecast ash maps, produced using volcanic ash transport and dispersion models. In this paper we quantitatively evaluate the spatial skill of volcanic ash simulations using satellite retrievals of ash from the Eyjafjallajökull eruption during the period from 7 to 16 May 2010. We find that at the start of this period, 7–10 May, the model (FLEXible PARTicle) has excellent skill and can predict the spatial distribution of the satellite-retrieved ash to within 0.5° × 0.5° latitude/longitude. However, on 10 May there is a decrease in the spatial accuracy of the model to 2.5° × 2.5° latitude/longitude, and between 11 and 12 May the simulated ash location errors grow rapidly. On 11 May ash is located close to a bifurcation point in the atmosphere, resulting in a rapid divergence in the modeled and satellite ash locations. In general, the model skill reduces as the residence time of ash increases. However, the error growth is not always steady. Rapid increases in error growth are linked to key points in the ash trajectories. Ensemble modeling using perturbed meteorological data would help to represent this uncertainty, and assimilation of satellite ash data would help to reduce uncertainty in volcanic ash forecasts.
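One common way to express spatial skill of the kind quoted above (0.5° versus 2.5°) is a neighbourhood comparison in the style of the fractions skill score: coarsen both the modelled and satellite-retrieved ash masks onto successively larger latitude/longitude boxes and report how well the fields agree at each scale. The sketch below uses synthetic binary ash masks and an assumed 0.25° grid; it is illustrative and not necessarily the paper's exact metric.

```python
# Neighbourhood comparison of modelled vs satellite ash masks (synthetic data).
# Both fields are coarsened onto box x box blocks; agreement at each scale is
# reported as a fractions-skill-score-style statistic.
import numpy as np

rng = np.random.default_rng(2)
ny, nx = 40, 80                       # assumed 0.25-degree grid, small domain

satellite = np.zeros((ny, nx))
satellite[10:20, 20:50] = 1           # "true" ash region
model = np.zeros((ny, nx))
model[12:22, 24:54] = 1               # modelled ash, slightly displaced

def coarsen(field, box):
    """Mean ash fraction in non-overlapping box x box blocks."""
    ny2, nx2 = (ny // box) * box, (nx // box) * box
    f = field[:ny2, :nx2]
    return f.reshape(ny2 // box, box, nx2 // box, box).mean(axis=(1, 3))

def fss(obs, mod, box):
    """Fractions skill score at the given neighbourhood size (grid cells)."""
    o, m = coarsen(obs, box), coarsen(mod, box)
    mse = ((o - m) ** 2).mean()
    mse_ref = (o ** 2).mean() + (m ** 2).mean()
    return 1.0 - mse / mse_ref if mse_ref > 0 else np.nan

for box, label in [(2, "0.5 deg"), (10, "2.5 deg")]:
    print(f"FSS at {label}: {fss(satellite, model, box):.2f}")
```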
Abstract:
We report on the results of a laboratory investigation using a rotating two-layer annulus experiment, which exhibits both large-scale vortical modes and short-scale divergent modes. A sophisticated visualization method allows us to observe the flow at very high spatial and temporal resolution. The balanced long-wavelength modes appear only when the Froude number is supercritical (i.e. $F > F_\mathrm{critical} \equiv \pi^2/2$), and are therefore consistent with generation by a baroclinic instability. The unbalanced short-wavelength modes appear locally in every single baroclinically unstable flow, providing perhaps the first direct experimental evidence that all evolving vortical flows will tend to emit freely propagating inertia–gravity waves. The short-wavelength modes also appear in certain baroclinically stable flows. We infer the generation mechanisms of the short-scale waves, both for the baroclinically unstable case in which they co-exist with a large-scale wave, and for the baroclinically stable case in which they exist alone. The two possible mechanisms considered are spontaneous adjustment of the large-scale flow, and Kelvin–Helmholtz shear instability. Short modes in the baroclinically stable regime are generated only when the Richardson number is subcritical (i.e. $\mathit{Ri} < \mathit{Ri}_\mathrm{critical} \equiv 1$), and are therefore consistent with generation by a Kelvin–Helmholtz instability. We calculate five indicators of short-wave generation in the baroclinically unstable regime, using data from a quasi-geostrophic numerical model of the annulus. There is excellent agreement between the spatial locations of short-wave emission observed in the laboratory, and regions in which the model Lighthill/Ford inertia–gravity wave source term is large. We infer that the short waves in the baroclinically unstable fluid are freely propagating inertia–gravity waves generated by spontaneous adjustment of the large-scale flow.
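The two criteria quoted above can be encoded directly; the tiny sketch below simply classifies the expected wave content of a flow from given Froude and Richardson numbers, exactly as stated in the abstract (the diagnostics of the actual experiment are, of course, far richer).

```python
# Classify expected wave activity from the two criteria in the abstract:
# large-scale balanced modes when F > pi^2/2 (baroclinic instability), and
# short-scale modes in the stable regime when Ri < 1 (Kelvin-Helmholtz).
import math

F_CRITICAL = math.pi ** 2 / 2   # ~4.93
RI_CRITICAL = 1.0

def classify(froude: float, richardson: float) -> list:
    modes = []
    if froude > F_CRITICAL:
        modes.append("large-scale baroclinic wave (+ spontaneously emitted "
                     "inertia-gravity waves)")
    elif richardson < RI_CRITICAL:
        modes.append("short-scale Kelvin-Helmholtz waves")
    return modes or ["no wave modes expected from these two criteria"]

for F, Ri in [(6.0, 5.0), (3.0, 0.5), (3.0, 5.0)]:
    print(f"F={F}, Ri={Ri}: {classify(F, Ri)}")
```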