920 results for Sensitivity analysis
Abstract:
OBJECTIVE: To compare insulin sensitivity (Si) from a frequently sampled intravenous glucose tolerance test (FSIGT) and subsequent minimal model analyses with surrogate measures of insulin sensitivity and resistance, and to compare features of the metabolic syndrome between Caucasians and Indian Asians living in the UK. SUBJECTS: In all, 27 healthy male volunteers (14 UK Caucasians and 13 UK Indian Asians), with a mean age of 51.2 +/- 1.5 y, BMI of 25.8 +/- 0.6 kg/m(2) and Si of 2.85 +/- 0.37. MEASUREMENTS: Si was determined from an FSIGT with subsequent minimal model analysis. The concentrations of insulin, glucose and nonesterified fatty acids (NEFA) were analysed in fasting plasma and used to calculate surrogate measures of insulin sensitivity (quantitative insulin sensitivity check index (QUICKI), revised QUICKI) and resistance (homeostasis model assessment of insulin resistance (HOMA IR), fasting insulin resistance index (FIRI), Bennett's index, fasting insulin, insulin-to-glucose ratio). Plasma concentrations of triacylglycerol (TAG), total cholesterol, high-density lipoprotein cholesterol (HDL-C) and low-density lipoprotein cholesterol (LDL-C) were also measured in the fasted state. Anthropometric measurements were conducted to determine body-fat distribution. RESULTS: Correlation analysis identified the strongest relationship between Si and the revised QUICKI (r = 0.67; P < 0.001). Significant associations were also observed between Si and QUICKI (r = 0.51; P = 0.007), HOMA IR (r = -0.50; P = 0.009), FIRI and fasting insulin. The Indian Asian group had lower HDL-C (P = 0.001), a higher waist-hip ratio (P = 0.01) and was significantly less insulin sensitive (Si) than the Caucasian group (P = 0.02). CONCLUSION: The revised QUICKI demonstrated a statistically strong relationship with the minimal model. However, it was unable to differentiate between insulin-sensitive and -resistant groups in this study. Future larger studies in population groups with varying degrees of insulin sensitivity are recommended to investigate the general applicability of the revised QUICKI surrogate technique.
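For reference, the fasting-sample indices compared in this study have simple closed-form definitions. The Python sketch below illustrates the standard formulations of HOMA IR, QUICKI and the revised QUICKI with hypothetical fasting values and the conventional units; it is an assumption that the study used these standard forms rather than laboratory-specific variants, and the function names are illustrative only.

```python
import math

def homa_ir(glucose_mmol_l, insulin_uU_ml):
    """HOMA IR: fasting glucose (mmol/L) x fasting insulin (uU/mL) / 22.5."""
    return glucose_mmol_l * insulin_uU_ml / 22.5

def quicki(glucose_mg_dl, insulin_uU_ml):
    """QUICKI: 1 / (log10 fasting insulin + log10 fasting glucose)."""
    return 1.0 / (math.log10(insulin_uU_ml) + math.log10(glucose_mg_dl))

def revised_quicki(glucose_mg_dl, insulin_uU_ml, nefa_mmol_l):
    """Revised QUICKI: adds log10 of fasting NEFA (mmol/L) to the denominator."""
    return 1.0 / (math.log10(insulin_uU_ml) + math.log10(glucose_mg_dl)
                  + math.log10(nefa_mmol_l))

# Hypothetical fasting values for one subject
print(homa_ir(5.2, 8.0))              # ~1.85
print(quicki(94.0, 8.0))              # ~0.35
print(revised_quicki(94.0, 8.0, 0.45))  # ~0.39
```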
Abstract:
There is evidence to suggest that insulin sensitivity may vary in response to changes in sex hormone levels. However, the results of human studies designed to investigate changes in insulin sensitivity through the menstrual cycle have proved inconclusive. The aims of this study were to 1) evaluate the impact of menstrual cycle phase on insulin sensitivity measures and 2) determine the variability of insulin sensitivity measures within the same menstrual cycle phase. A controlled observational study of 13 healthy premenopausal women, not taking any hormone preparation and having regular menstrual cycles, was conducted. Insulin sensitivity (Si) and glucose effectiveness (Sg) were measured using an intravenous glucose tolerance test (IVGTT) with minimal model analysis. Additional surrogate measures of insulin sensitivity were calculated (homoeostasis model assessment of insulin resistance [HOMA IR], quantitative insulin sensitivity check index [QUICKI] and revised QUICKI [rQUICKI]), as well as plasma lipids. Each woman was tested in the luteal and follicular phases of her menstrual cycle, and duplicate measures were taken in one phase of the cycle. No significant differences in insulin sensitivity (measured by the IVGTT or surrogate markers) or plasma lipids were reported between the two phases of the menstrual cycle or between duplicate measures within the same phase. It was concluded that variability in measures of insulin sensitivity was similar within and between menstrual phases.
Abstract:
We previously reported sequence determination of neutral oligosaccharides by negative ion electrospray tandem mass spectrometry on a quadrupole-orthogonal time-of-flight instrument with high sensitivity and without the need for derivatization. In the present report, we extend our strategies to sialylated oligosaccharides for analysis of chain and blood group types together with branching patterns. A main feature of the negative ion mass spectrometry approach is the unique double glycosidic cleavage induced by 3-glycosidic substitution, producing characteristic D-type fragments which can be used to distinguish the type 1 and type 2 chains, the blood group related Lewis determinants, and 3,6-disubstituted core branching patterns, and to assign the structural details of each of the branches. Twenty mono- and disialylated linear and branched oligosaccharides were used for the investigation, and the sensitivity achieved is in the femtomole range. To demonstrate the efficacy of the strategy, we have determined a novel complex disialylated and monofucosylated tridecasaccharide that is based on the lacto-N-decaose core. The structure and sequence assignment was corroborated by methylation analysis and H-1 NMR spectroscopy.
Abstract:
Substituted amphetamines such as p-chloroamphetamine and the abused drug methylenedioxymethamphetamine cause selective destruction of serotonin axons in rats, by unknown mechanisms. Since some serotonin neurones also express neuronal nitric oxide synthase, which has been implicated in neurotoxicity, the present study was undertaken to determine whether nitric oxide synthase-expressing serotonin neurones are selectively vulnerable to methylenedioxymethamphetamine or p-chloroamphetamine. Using double-labeling immunocytochemistry and double in situ hybridization for nitric oxide synthase and the serotonin transporter, it was confirmed that about two-thirds of serotonergic cell bodies in the dorsal raphe nucleus expressed nitric oxide synthase; however, few if any serotonin transporter-immunoreactive axons in striatum expressed nitric oxide synthase at detectable levels. Methylenedioxymethamphetamine (30 mg/kg) or p-chloroamphetamine (2 x 10 mg/kg) was administered to Sprague-Dawley rats, and 7 days after drug administration there were modest decreases in the levels of serotonin transporter protein in frontal cortex and striatum, as measured by Western blotting, even though axonal loss could be clearly seen by immunostaining. p-Chloroamphetamine or methylenedioxymethamphetamine administration did not alter the level of nitric oxide synthase in striatum or frontal cortex, determined by Western blotting. Analysis of serotonin neuronal cell bodies 7 days after p-chloroamphetamine treatment revealed a net down-regulation of serotonin transporter mRNA levels and a profound change in expression of nitric oxide synthase, with 33% of serotonin transporter mRNA-positive cells containing nitric oxide synthase mRNA, compared with 65% in control animals. Altogether, these results support the hypothesis that serotonin neurones which express nitric oxide synthase are most vulnerable to substituted amphetamine toxicity, supporting the concept that the selective vulnerability of serotonin neurones has a molecular basis.
Abstract:
Non-word repetition (NWR) was investigated in adolescents with typical development, Specific Language Impairment (SLI) and Autism plus Language Impairment (ALI) (n = 17, 13, 16, and mean age 14;4, 15;4, 14;8 respectively). The study evaluated the hypothesis that poor NWR performance in both groups indicates an overlapping language phenotype (Kjelgaard & Tager-Flusberg, 2001). Performance was investigated both quantitatively, e.g. overall error rates, and qualitatively, e.g. the effect of length on repetition, the proportion of errors affecting phonological structure, and the proportion of consonant substitutions involving manner changes. Findings were consistent with previous research (Whitehouse, Barry, & Bishop, 2008) demonstrating a greater effect of length in the SLI group than the ALI group, which may be due to greater short-term memory limitations. In addition, an automated count of phoneme errors identified poorer performance in the SLI group than the ALI group. These findings indicate differences in the language profiles of individuals with SLI and ALI, but do not rule out a partial overlap. Errors affecting phonological structure were relatively frequent, accounting for around 40% of phonemic errors, but less frequent than straight consonant-for-consonant or vowel-for-vowel substitutions. It is proposed that these two different types of errors may reflect separate contributory mechanisms. Around 50% of consonant substitutions in the clinical groups involved manner changes, suggesting poor auditory-perceptual encoding. From a clinical perspective, algorithms which automatically count phoneme errors may enhance the sensitivity of NWR as a diagnostic marker of language impairment. Learning outcomes: Readers will be able to (1) describe and evaluate the hypothesis that there is a phenotypic overlap between SLI and Autism Spectrum Disorders, (2) describe differences in the NWR performance of adolescents with SLI and ALI, and discuss whether these differences support or refute the phenotypic overlap hypothesis, and (3) understand how computational algorithms such as the Levenshtein distance may be used to analyse NWR data.
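The automated phoneme-error count referred to in the abstract and the Levenshtein distance named in the learning outcomes can be illustrated with a minimal sketch; the phoneme transcriptions below are hypothetical and are not taken from the study's stimuli.

```python
def levenshtein(a, b):
    """Edit distance between two phoneme sequences: insertions,
    deletions and substitutions each cost 1."""
    prev = list(range(len(b) + 1))
    for i, pa in enumerate(a, start=1):
        curr = [i]
        for j, pb in enumerate(b, start=1):
            cost = 0 if pa == pb else 1
            curr.append(min(prev[j] + 1,          # deletion
                            curr[j - 1] + 1,      # insertion
                            prev[j - 1] + cost))  # substitution
        prev = curr
    return prev[-1]

# Hypothetical target non-word vs. a child's repetition, as phoneme lists
target  = ["p", "e", "r", "p", "l", "I", "s", "t", "e", "r", "O", "n", "k"]
attempt = ["p", "e", "r", "p", "I", "s", "t", "e", "r", "O", "n", "k"]
print(levenshtein(target, attempt))  # 1: one phoneme omitted
```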
Abstract:
This paper describes the impact of changing the currently imposed ozone climatology upon the tropical Quasi-Biennial Oscillation (QBO) in a high-top climate configuration of the Met Office U.K. general circulation model. The aim is to help distinguish between QBO changes in chemistry-climate models that result from temperature-ozone feedbacks and those that might be forced by differences in climatology between previously fixed and newly interactive ozone distributions. Different representations of zonal mean ozone climatology under present-day conditions are taken to represent the level of change expected between acceptable model realizations of the global ozone distribution and thus indicate whether more detailed investigation of such climatology issues might be required when assessing ozone feedbacks. Tropical stratospheric ozone concentrations are enhanced relative to the control climatology between 20 and 30 km, reduced between 30 and 40 km, and enhanced above, impacting the model profile of clear-sky radiative heating, in particular warming the tropical stratosphere between 15 and 35 km. The outcome is consistent with a localized equilibrium response in the tropical stratosphere that generates increased upwelling between 100 and 4 hPa, sufficient to account for a 12-month increase in the modeled mean QBO period. This response has implications for analysis of the tropical circulation in models with interactive ozone chemistry because it highlights the possibility that plausible changes in the ozone climatology could have a sizable impact upon the tropical upwelling and QBO period that ought to be distinguished from other dynamical responses such as ozone-temperature feedbacks.
Abstract:
For a targeted observations case, the dependence of the size of the forecast impact on the targeted dropsonde observation error in the data assimilation is assessed. The targeted observations were made in the lee of Greenland; the dependence of the impact on the proximity of the observations to the Greenland coast is also investigated. Experiments were conducted using the Met Office Unified Model (MetUM), over a limited-area domain at 24-km grid spacing, with a four-dimensional variational data assimilation (4D-Var) scheme. Reducing the operational dropsonde observation errors by one-half increases the maximum forecast improvement from 5% to 7%–10%, measured in terms of total energy. However, the largest impact is seen by replacing two dropsondes on the Greenland coast with two farther from the steep orography; this increases the maximum forecast improvement from 5% to 18% for an 18-h forecast (using operational observation errors). Forecast degradation caused by two dropsonde observations on the Greenland coast is shown to arise from spreading of data by the background errors up the steep slope of Greenland. Removing boundary layer data from these dropsondes reduces the forecast degradation, but it is only a partial solution to this problem. Although only from one case study, these results suggest that observations positioned within a correlation length scale of steep orography may degrade the forecast through the anomalous upslope spreading of analysis increments along terrain-following model levels.
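How the assigned observation error controls the pull of a dropsonde on the analysis can be seen in a scalar analogue of variational assimilation; the sketch below is not the MetUM 4D-Var itself, and the innovation and error values are hypothetical, but it shows the generic weighting: halving the assumed observation error quadruples its precision (inverse error variance) and draws the analysis much closer to the observation.

```python
def analysis_increment(x_b, y, sigma_b, sigma_o):
    """Scalar analogue of variational assimilation: the analysis blends
    background and observation inversely to their error variances."""
    w = sigma_b**2 / (sigma_b**2 + sigma_o**2)   # weight given to the observation
    return w * (y - x_b), w

# Hypothetical innovation of 2 K with a 1 K background error
for sigma_o in (1.0, 0.5):   # "operational" vs. halved observation error
    inc, w = analysis_increment(x_b=0.0, y=2.0, sigma_b=1.0, sigma_o=sigma_o)
    print(f"sigma_o={sigma_o}: weight={w:.2f}, increment={inc:.2f} K")
# sigma_o=1.0: weight=0.50, increment=1.00 K
# sigma_o=0.5: weight=0.80, increment=1.60 K
```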
Abstract:
Single-cell analysis is essential for understanding the processes of cell differentiation and metabolic specialisation in rare cell types. The amount of a single protein in a single cell can be as low as one copy per cell and is, for most proteins, in the attomole range or below, which is usually considered insufficient for proteomic analysis. The development of modern mass spectrometers possessing increased sensitivity and mass accuracy, in combination with nano-LC-MS/MS, now enables the analysis of single-cell contents. In Arabidopsis thaliana, we have successfully identified nine unique proteins in a single-cell sample and 56 proteins from a pool of 15 single-cell samples from glucosinolate-rich S-cells by nanoLC-MS/MS proteomic analysis, thus establishing the proof-of-concept for true single-cell proteomic analysis. Dehydrin (ERD14_ARATH), two myrosinases (BGL37_ARATH and BGL38_ARATH), annexin (ANXD1_ARATH), vegetative storage proteins (VSP1_ARATH and VSP2_ARATH) and four proteins belonging to the S-adenosyl-L-methionine cycle (METE_ARATH, SAHH1_ARATH, METK4_ARATH and METK1/3_ARATH), with the associated adenosine kinase (ADK1_ARATH), were amongst the proteins identified in these single-S-cell samples. Comparison of the functional groups of proteins identified in S-cells with those in epidermal/cortical cells and whole tissue provided a unique insight into the metabolism of S-cells. We conclude that S-cells are metabolically active and contain the machinery for de novo biosynthesis of methionine, a precursor of glucoraphanin, the most abundant glucosinolate in these cells. Moreover, since abundant TGG2 and TGG1 peptides were consistently found in single-S-cell samples, which have previously been shown to contain high amounts of glucosinolates, we suggest that both myrosinases and glucosinolates can be localised in the same cells, but in separate subcellular compartments. The complex membrane structure of S-cells was reflected by the presence of a number of proteins involved in membrane maintenance and cellular organisation.
Abstract:
Constrained principal component analysis (CPCA) with a finite impulse response (FIR) basis set was used to reveal functionally connected networks and their temporal progression over a multistage verbal working memory trial in which memory load was varied. Four components were extracted, and all showed statistically significant sensitivity to the memory load manipulation. Additionally, two of the four components sustained this peak activity, both for approximately 3 s (Components 1 and 4). The functional networks that showed sustained activity were characterized by increased activations in the dorsal anterior cingulate cortex, right dorsolateral prefrontal cortex, and left supramarginal gyrus, and decreased activations in the primary auditory cortex and "default network" regions. The functional networks that did not show sustained activity were instead dominated by increased activation in occipital cortex, dorsal anterior cingulate cortex, sensorimotor cortical regions, and superior parietal cortex. The response shapes suggest that although all four components appear to be invoked at encoding, the two sustained-peak components are likely to be additionally involved in the delay period. Our investigation provides a unique view of the contributions made by a network of brain regions over the course of a multiple-stage working memory trial.
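As a rough illustration of the analysis style named here: constrained PCA is commonly described as a regression step (predict the data from the design matrix, here an FIR basis) followed by a PCA/SVD of the predicted part. The Python sketch below follows that generic two-step recipe on synthetic data with hypothetical dimensions and trial onsets; it is not the authors' actual pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical scans x voxels data matrix Z and an FIR design matrix G
# with one column per post-stimulus time bin.
n_scans, n_voxels, n_bins = 200, 50, 8
Z = rng.standard_normal((n_scans, n_voxels))
G = np.zeros((n_scans, n_bins))
for t in np.arange(0, n_scans - n_bins, 20):   # hypothetical trial onsets
    G[t:t + n_bins, :] += np.eye(n_bins)       # FIR "stick" regressors

# External analysis: the part of Z predictable from the FIR model
B = np.linalg.lstsq(G, Z, rcond=None)[0]       # regression weights
GC = G @ B                                     # predicted (constrained) data

# Internal analysis: SVD of the predicted data gives component images (Vt)
# and their load-dependent time courses (U * s)
U, s, Vt = np.linalg.svd(GC, full_matrices=False)
print("variance explained by first 4 components:",
      np.round((s[:4] ** 2) / (s ** 2).sum(), 3))
```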
Abstract:
The glutamate decarboxylase (GAD) system is important for the acid resistance of Listeria monocytogenes. We previously showed that under acidic conditions, glutamate (Glt)/γ-aminobutyrate (GABA) antiport is impaired in minimal media but not in rich ones, like brain heart infusion. Here we demonstrate that this behavior is more complex and it is subject to strain and medium variation. Despite the impaired Glt/GABA antiport, cells accumulate intracellular GABA (GABA(i)) as a standard response against acid in any medium, and this occurs in all strains tested. Since these systems can occur independently of one another, we refer to them as the extracellular (GAD(e)) and intracellular (GAD(i)) systems. We show here that GAD(i) contributes to acid resistance since in a ΔgadD1D2 mutant, reduced GABA(i) accumulation coincided with a 3.2-log-unit reduction in survival at pH 3.0 compared to that of wild-type strain LO28. Among 20 different strains, the GAD(i) system was found to remove 23.11% ± 18.87% of the protons removed by the overall GAD system. Furthermore, the GAD(i) system is activated at milder pH values (4.5 to 5.0) than the GAD(e) system (pH 4.0 to 4.5), suggesting that GAD(i) is the more responsive of the two and the first line of defense against acid. Through functional genomics, we found a major role for GadD2 in the function of GAD(i), while that of GadD1 was minor. Furthermore, the transcription of the gad genes in three common reference strains (10403S, LO28, and EGD-e) during an acid challenge correlated well with their relative acid sensitivity. No transcriptional upregulation of the gadT2D2 operon, which is the most important component of the GAD system, was observed, while gadD3 transcription was the highest among all gad genes in all strains. In this study, we present a revised model for the function of the GAD system and highlight the important role of GAD(i) in the acid resistance of L. monocytogenes.
Abstract:
Interest in the impacts of climate change is ever increasing. This is particularly true of the water sector, where understanding potential changes in the occurrence of both floods and droughts is important for strategic planning. Climate variability has been shown to have a significant impact on UK climate, and accounting for this in future climate change projections is essential to fully anticipate potential future impacts. In this paper a new resampling methodology is developed which includes the variability of both baseline and future precipitation. The resampling methodology is applied to 13 CMIP3 climate models for the 2080s, resulting in an ensemble of monthly precipitation change factors. The change factors are applied to the Eden catchment in eastern Scotland, with analysis undertaken of the sensitivity of future river flows to the changes in precipitation. Climate variability is shown to influence the magnitude and direction of change of both precipitation and, in turn, river flow, effects which are not apparent without the use of the resampling methodology. The transformation of precipitation changes into river flow changes displays a degree of non-linearity due to the catchment's role in buffering the response. The resampling methodology developed in this paper provides a new technique for creating climate change scenarios which incorporate the important issue of climate variability.
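The change-factor step itself is simple to sketch: each calendar month of the baseline precipitation series is scaled by the corresponding percentage change factor before being routed through the hydrological model. The Python sketch below uses hypothetical values and deliberately omits the paper's resampling of baseline and future variability.

```python
import numpy as np

def apply_change_factors(baseline_monthly_precip, change_factors_pct):
    """Perturb a baseline monthly precipitation series with percentage
    change factors, one per calendar month."""
    baseline = np.asarray(baseline_monthly_precip, dtype=float)
    months = np.arange(len(baseline)) % 12
    factors = 1.0 + np.asarray(change_factors_pct, dtype=float)[months] / 100.0
    return baseline * factors

# Hypothetical baseline (mm/month) and one set of monthly change factors (%)
baseline = [120, 100, 90, 70, 60, 55, 60, 65, 80, 100, 115, 125]
factors  = [ 12,  10,  5, -2, -8, -15, -18, -10, -3,  4,   8,  11]
print(apply_change_factors(baseline, factors))
```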
Abstract:
We investigate the scaling between precipitation and temperature changes in warm and cold climates using six models that have simulated the response to both increased CO2 and Last Glacial Maximum (LGM) boundary conditions. Globally, precipitation increases in warm climates and decreases in cold climates by between 1.5%/°C and 3%/°C. Precipitation sensitivity to temperature changes is lower over the land than over the ocean and lower over the tropical land than over the extratropical land, reflecting the constraint of water availability. The wet tropics get wetter in warm climates and drier in cold climates, but the changes in dry areas differ among models. Seasonal changes of tropical precipitation in a warmer world also reflect this “rich get richer” syndrome. Precipitation seasonality is decreased in the cold-climate state. The simulated changes in precipitation per degree temperature change are comparable to the observed changes in both the historical period and the LGM.
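The quoted scaling translates directly into a back-of-envelope calculation; the warming value used below is hypothetical and only illustrates how a %/°C sensitivity maps onto a precipitation change.

```python
def precip_change_pct(sensitivity_pct_per_degC, delta_T_degC):
    """Precipitation change (%) implied by a hydrological sensitivity
    in %/degC and a temperature change in degC."""
    return sensitivity_pct_per_degC * delta_T_degC

# Hypothetical global-mean warming of 4 degC across the quoted 1.5-3 %/degC range
for s in (1.5, 3.0):
    print(f"{s} %/degC -> {precip_change_pct(s, 4.0):.0f} % more precipitation")
# 1.5 %/degC -> 6 % ; 3.0 %/degC -> 12 %
```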
Abstract:
Modeling the vertical penetration of photosynthetically active radiation (PAR) through the ocean, and its utilization by phytoplankton, is fundamental to simulating marine primary production. The variation of attenuation and absorption of light with wavelength suggests that photosynthesis should be modeled at high spectral resolution, but this is computationally expensive. To model primary production in global 3d models, a balance between computer time and accuracy is necessary. We investigate the effects of varying the spectral resolution of the underwater light field and the photosynthetic efficiency of phytoplankton (α∗), on primary production using a 1d coupled ecosystem ocean turbulence model. The model is applied at three sites in the Atlantic Ocean (CIS (∼60°N), PAP (∼50°N) and ESTOC (∼30°N)) to include the effect of different meteorological forcing and parameter sets. We also investigate three different methods for modeling α∗ – as a fixed constant, varying with both wavelength and chlorophyll concentration [Bricaud, A., Morel, A., Babin, M., Allali, K., Claustre, H., 1998. Variations of light absorption by suspended particles with chlorophyll a concentration in oceanic (case 1) waters. Analysis and implications for bio-optical models. J. Geophys. Res. 103, 31033–31044], and using a non-spectral parameterization [Anderson, T.R., 1993. A spectrally averaged model of light penetration and photosynthesis. Limnol. Oceanogr. 38, 1403–1419]. After selecting the appropriate ecosystem parameters for each of the three sites we vary the spectral resolution of light and α∗ from 1 to 61 wavebands and study the results in conjunction with the three different α∗ estimation methods. The results show modeled estimates of ocean primary productivity are highly sensitive to the degree of spectral resolution and α∗. For accurate simulations of primary production and chlorophyll distribution we recommend a spectral resolution of at least six wavebands if α∗ is a function of wavelength and chlorophyll, and three wavebands if α∗ is a fixed value.
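Why waveband resolution matters can be seen with a minimal Beer-Lambert sketch: blue light penetrates far deeper than red, so attenuating each waveband separately and summing yields more PAR at depth than a single broadband attenuation coefficient. The irradiances and attenuation coefficients below are hypothetical, and the sketch ignores the chlorophyll dependence handled by the Bricaud-type parameterization.

```python
import numpy as np

def par_at_depth(surface_irradiance, attenuation, depth):
    """Beer-Lambert attenuation applied per waveband, then summed to
    give total PAR (W m-2) at the given depth (m)."""
    E0 = np.asarray(surface_irradiance, dtype=float)
    k = np.asarray(attenuation, dtype=float)
    return float(np.sum(E0 * np.exp(-k * depth)))

# Hypothetical 3-waveband split of 100 W m-2 surface PAR (blue/green/red)
E0_3band = [40.0, 35.0, 25.0]
k_3band  = [0.04, 0.06, 0.30]   # per metre; red attenuates fastest

# Equivalent single broadband treatment with one mean attenuation coefficient
E0_1band = [100.0]
k_1band  = [np.average(k_3band, weights=E0_3band)]

for z in (10.0, 50.0):
    print(z, round(par_at_depth(E0_3band, k_3band, z), 1),
             round(par_at_depth(E0_1band, k_1band, z), 1))
# At 50 m the broadband treatment leaves almost no PAR, the 3-band one does.
```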
Abstract:
Transient and equilibrium sensitivity of Earth's climate has been calculated using global temperature, forcing and heating rate data for the period 1970–2010. We have assumed increased long-wave radiative forcing in the period due to the increase of the long-lived greenhouse gases. By assuming the change in aerosol forcing in the period to be zero, we calculate what we consider to be lower bounds to these sensitivities, as the magnitude of the negative aerosol forcing is unlikely to have diminished in this period. The radiation imbalance necessary to calculate equilibrium sensitivity is estimated from the rate of ocean heat accumulation as 0.37 ± 0.03 W m⁻² (all uncertainty estimates are 1σ). With these data, we obtain best estimates for transient climate sensitivity 0.39 ± 0.07 K (W m⁻²)⁻¹ and equilibrium climate sensitivity 0.54 ± 0.14 K (W m⁻²)⁻¹, equivalent to 1.5 ± 0.3 and 2.0 ± 0.5 K (3.7 W m⁻²)⁻¹, respectively. The latter quantity is equal to the lower bound of the ‘likely’ range for this quantity given by the 2007 IPCC Assessment Report. The uncertainty attached to the lower-bound equilibrium sensitivity permits us to state, within the assumptions of this analysis, that the equilibrium sensitivity is greater than 0.31 K (W m⁻²)⁻¹, equivalent to 1.16 K (3.7 W m⁻²)⁻¹, at the 95% confidence level.
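The abstract does not spell out the estimators, but energy-budget analyses of this kind typically use the relations below, where ΔT and ΔF are the 1970–2010 changes in global temperature and radiative forcing and N is the radiation imbalance taken up by the ocean. Treating them this way, the quoted 0.39 and 0.54 K (W m⁻²)⁻¹ together with N = 0.37 W m⁻² are mutually consistent with roughly ΔF ≈ 1.3 W m⁻² and ΔT ≈ 0.5 K over the period.

```latex
% Standard energy-budget estimators (assumed form, not quoted from the paper):
% transient sensitivity uses the full forcing change; equilibrium sensitivity
% discounts the unrealised ocean heat uptake N.
\[
  S_{\mathrm{tr}} = \frac{\Delta T}{\Delta F},
  \qquad
  S_{\mathrm{eq}} = \frac{\Delta T}{\Delta F - N}
\]
```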
Abstract:
Background: Expression microarrays are increasingly used to obtain large-scale transcriptomic information on a wide range of biological samples. Nevertheless, there is still much debate on the best ways to process data, to design experiments and to analyse the output. Furthermore, many of the more sophisticated mathematical approaches to data analysis in the literature remain inaccessible to much of the biological research community. In this study we examine ways of extracting and analysing a large data set obtained using the Agilent long oligonucleotide transcriptomics platform, applied to a set of human macrophage and dendritic cell samples. Results: We describe and validate a series of data extraction, transformation and normalisation steps which are implemented via a new R function. Analysis of replicate normalised reference data demonstrates that intra-array variability is small (only around 2% of the mean log signal), while inter-array variability from replicate array measurements has a standard deviation (SD) of around 0.5 log2 units (6% of the mean). The common practice of working with ratios of Cy5/Cy3 signal offers little further improvement in terms of reducing error. Comparison to expression data obtained using Arabidopsis samples demonstrates that the large number of genes in each sample showing a low level of transcription reflects the real complexity of the cellular transcriptome. Multidimensional scaling is used to show that the processed data identify an underlying structure which reflects some of the key biological variables that define the data set. This structure is robust, allowing reliable comparison of samples collected over a number of years and by a variety of operators. Conclusions: This study outlines a robust and easily implemented pipeline for extracting, transforming, normalising and visualising transcriptomic array data from the Agilent expression platform. The analysis is used to obtain quantitative estimates of the SD arising from experimental (non-biological) intra- and inter-array variability, and a lower threshold for determining whether an individual gene is expressed. The study provides a reliable basis for further, more extensive studies of the systems biology of eukaryotic cells.
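The paper's pipeline is implemented in R on Agilent data; purely to illustrate the kind of quantities discussed (log-scale normalisation and the inter-array SD of replicate arrays), here is a generic Python sketch on synthetic single-channel data. The function names, the median-centring step and the simulated noise level are assumptions for the illustration, not the authors' code.

```python
import numpy as np

def normalise_arrays(raw_signal):
    """Minimal sketch: log2-transform raw intensities, then median-centre
    each array so that replicate arrays are comparable."""
    log2 = np.log2(np.asarray(raw_signal, dtype=float))
    return log2 - np.median(log2, axis=0, keepdims=True)

def interarray_sd(normalised):
    """Pooled SD across replicate arrays (per-gene variances averaged)."""
    return float(np.sqrt(np.mean(np.var(normalised, axis=1, ddof=1))))

# Synthetic genes x arrays matrix of raw intensities (4 replicate arrays)
rng = np.random.default_rng(1)
true = rng.uniform(6, 14, size=(1000, 1))                # log2 expression levels
raw = 2 ** (true + rng.normal(0, 0.5, size=(1000, 4)))   # 0.5 log2-unit noise
norm = normalise_arrays(raw)
print(round(interarray_sd(norm), 2))   # recovers ~0.5 log2 units of inter-array noise
```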