Abstract:
Variations in the Atlantic Meridional Overturning Circulation (MOC) exert an important influence on climate, particularly on decadal time scales. Simulation of the MOC in coupled climate models is compromised, to a degree that is unknown, by their lack of fidelity in resolving some of the key processes involved. There is an overarching need to increase the resolution and fidelity of climate models, but also to assess how increases in resolution influence the simulation of key phenomena such as the MOC. In this study we investigate the impact of significantly increasing the (ocean and atmosphere) resolution of a coupled climate model on the simulation of MOC variability by comparing high and low resolution versions of the same model. In both versions, decadal variability of the MOC is closely linked to density anomalies that propagate from the Labrador Sea southward along the deep western boundary. We demonstrate that the MOC adjustment proceeds more rapidly in the higher resolution model due to the increased speed of western boundary waves. However, the response of Atlantic Sea Surface Temperatures (SSTs) to MOC variations is relatively robust (in pattern if not in magnitude) across the two resolutions. The MOC also excites a coupled ocean-atmosphere response in the tropical Atlantic in both model versions. In the higher resolution model, but not the lower resolution model, there is evidence of a significant response in the extratropical atmosphere over the North Atlantic 6 years after a maximum in the MOC. In both models there is evidence of a weak negative feedback on deep density anomalies in the Labrador Sea, and hence on the MOC (with a time scale of approximately ten years). Our results highlight the need for further work to understand the decadal variability of the MOC and its simulation in climate models.
Abstract:
We describe a model-data fusion (MDF) inter-comparison project (REFLEX), which compared various algorithms for estimating carbon (C) model parameters consistent with both measured carbon fluxes and states and a simple C model. Participants were provided with the model and with both synthetic net ecosystem exchange (NEE) of CO2 and leaf area index (LAI) data, generated from the model with added noise, and observed NEE and LAI data from two eddy covariance sites. Participants endeavoured to estimate model parameters and states consistent with the model for all cases over the two years for which data were provided, and generate predictions for one additional year without observations. Nine participants contributed results using Metropolis algorithms, Kalman filters and a genetic algorithm. For the synthetic data case, parameter estimates compared well with the true values. The results of the analyses indicated that parameters linked directly to gross primary production (GPP) and ecosystem respiration, such as those related to foliage allocation and turnover, or temperature sensitivity of heterotrophic respiration, were best constrained and characterised. Poorly estimated parameters were those related to the allocation to and turnover of fine root/wood pools. Estimates of confidence intervals varied among algorithms, but several algorithms successfully located the true values of annual fluxes from synthetic experiments within relatively narrow 90% confidence intervals, achieving >80% success rate and mean NEE confidence intervals <110 gC m−2 year−1 for the synthetic case. Annual C flux estimates generated by participants generally agreed with gap-filling approaches using half-hourly data. The estimation of ecosystem respiration and GPP through MDF agreed well with outputs from partitioning studies using half-hourly data. Confidence limits on annual NEE increased by an average of 88% in the prediction year compared to the previous year, when data were available. 
Confidence intervals on annual NEE increased by 30% when observed data were used instead of synthetic data, reflecting and quantifying the addition of model error. Finally, our analyses indicated that incorporating additional constraints, using data on C pools (wood, soil and fine roots), would help to reduce uncertainties for model parameters poorly served by eddy covariance data.
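The Metropolis algorithms used by several participants can be illustrated with a minimal sketch. Everything below (the toy linear "model", parameter names, noise level) is a hypothetical stand-in for the REFLEX C model and data, showing only the estimation step itself:

```python
import math
import random

random.seed(0)

# Toy "model": NEE as a linear function of temperature, with two parameters.
# Both the model and the parameter names are hypothetical stand-ins for the
# REFLEX C model, used only to illustrate Metropolis parameter estimation.
def model_nee(params, temp):
    base, sens = params
    return base + sens * temp

# Synthetic observations generated from the model with added noise,
# mirroring the synthetic-data case of the inter-comparison.
true_params = (2.0, -0.5)
temps = [0.5 * i for i in range(40)]
obs = [model_nee(true_params, t) + random.gauss(0.0, 0.3) for t in temps]

def log_likelihood(params, sigma=0.3):
    return -sum((model_nee(params, t) - o) ** 2
                for t, o in zip(temps, obs)) / (2.0 * sigma ** 2)

# Metropolis random walk over the two parameters.
current = (0.0, 0.0)
current_ll = log_likelihood(current)
samples = []
for _ in range(5000):
    proposal = (current[0] + random.gauss(0.0, 0.1),
                current[1] + random.gauss(0.0, 0.02))
    proposal_ll = log_likelihood(proposal)
    # Accept uphill moves always; downhill moves with probability exp(dll).
    if math.log(random.random()) < proposal_ll - current_ll:
        current, current_ll = proposal, proposal_ll
    samples.append(current)

# Posterior means after burn-in should sit close to the true values,
# as found for the synthetic case in the study.
burn_in = samples[2000:]
estimate = tuple(sum(s[i] for s in burn_in) / len(burn_in) for i in (0, 1))
```

The post-burn-in samples also give the confidence intervals that the study compares across algorithms, e.g. by taking the 5th and 95th percentiles of each parameter's samples.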
Abstract:
Many studies warn that climate change may undermine global food security. Much work on this topic focuses on modelling crop-weather interactions, but these models do not generally account for the ways in which socio-economic factors influence how harvests are affected by weather. To address this gap, this paper uses a quantitative harvest vulnerability index based on annual soil moisture and grain production data as the dependent variable in a Linear Mixed Effects model, with national scale socio-economic data as independent variables, for the period 1990-2005. Results show that rice, wheat and maize production in middle income countries were especially vulnerable to droughts. By contrast, harvests in countries with higher investments in agriculture (e.g. higher amounts of fertilizer use) were less vulnerable to drought. In terms of differences between the world's major grain crops, the factors that made rice and wheat crops vulnerable to drought were quite consistent, whilst those for maize crops varied considerably depending on the type of region. This is likely because maize is produced under very different conditions worldwide. One recommendation for reducing drought vulnerability risks is coordinated development and adaptation policies, including institutional support that enables farmers to take proactive action.
Abstract:
Poly(vinyl ether) gels, SLURPS (Superior Liquid Uptake Resin for Polymer-supported synthesis), with low cross-linking levels have been synthesized for the first time in beaded form using a non-aqueous inverse suspension polymerisation approach. The synthetic protocol was optimized with regard to several parameters, including reaction conditions, type and concentration of suspension stabilizer, and controlled low-temperature addition of the co-initiator. Particle size measurements confirm the production of beads with average diameters of 700–950 μm. Optimization of the monomer composition of the poly(vinyl ether) gels resulted in a novel beaded polymer support with considerably improved as well as unique swelling characteristics in solvents ranging from hexane to water. The synthetic utility of the new gel was confirmed by carrying out a set of transformations with complete conversion, leading to a useful amino- and hydroxy-terminated solid-phase precursor resin. Reaction progress could be monitored easily by 1H and 13C gel-phase NMR.
Abstract:
Convective equilibrium is a long-standing and useful concept for understanding many aspects of the behaviour of deep moist convection. For example, it is often invoked in developing parameterizations for large-scale models. However, the equilibrium assumption may begin to break down as models are increasingly used with shorter timesteps and finer resolutions. Here we perform idealized cloud-system resolving model simulations of deep convection with imposed time variations in the surface forcing. A range of rapid forcing timescales from 1–36 h are used, in order to induce systematic departures from equilibrium. For the longer forcing timescales, the equilibrium assumption remains valid, in at least the limited sense that cycle-integrated measures of convective activity are very similar from cycle to cycle. For shorter forcing timescales, cycle-integrated convection becomes more variable, with enhanced activity on one cycle being correlated with reduced activity on the next, suggesting a role for convective memory. Further investigation shows that the memory does not appear to be carried by the domain-mean thermodynamic fields but rather by structures on horizontal scales of 5–20 km. Such structures are produced by the convective clouds and can persist beyond the lifetime of the cloud, even through to the next forcing cycle.
Abstract:
Although the tube theory is successful in describing entangled polymers qualitatively, a more quantitative description requires precise and consistent definitions of its parameters. Here we investigate the simplest model of entangled polymers, namely a single Rouse chain in a cubic lattice of line obstacles, and illustrate the typical problems and uncertainties of the tube theory. In particular, we show that in general one needs three entanglement-related parameters, but only two combinations of them are relevant for the long-time dynamics. Conversely, the plateau modulus cannot be determined from these two parameters and requires a more detailed model of entanglements with explicit entanglement forces, such as the slip-spring model. It is shown that for the grid model the Rouse time within the tube is larger than the Rouse time of the free chain, in contrast to what the standard tube theory assumes.
Abstract:
Correlations between various chemical species simulated by the Canadian Middle Atmosphere Model, a general circulation model with fully interactive chemistry, are considered in order to investigate the general conditions under which compact correlations can be expected to form. At the same time, the analysis serves to validate the model. The results are compared to previous work on this subject, both from theoretical studies and from atmospheric measurements made from space and from aircraft. The results highlight the importance of having a data set with good spatial coverage when working with correlations and provide a background against which the compactness of correlations obtained from atmospheric measurements can be confirmed. It is shown that for long-lived species, distinct correlations are found in the model in the tropics, the extratropics, and the Antarctic winter vortex. Under these conditions, sparse sampling such as arises from occultation instruments is nevertheless suitable to define a chemical correlation within each region even from a single day of measurements, provided a sufficient range of mixing ratio values is sampled. In practice, this means a large vertical extent, though the requirements are less stringent at more poleward latitudes.
Abstract:
Oculopharyngeal muscular dystrophy (OPMD) is an adult-onset disorder characterized by ptosis, dysphagia and proximal limb weakness. Autosomal-dominant OPMD is caused by short (GCG)8–13 expansions within the first exon of the poly(A)-binding protein nuclear 1 gene (PABPN1), leading to an expanded polyalanine tract in the mutated protein. Expanded PABPN1 forms insoluble aggregates in the nuclei of skeletal muscle fibres. In order to gain insight into the different physiological processes affected in OPMD muscles, we have used a transgenic mouse model of OPMD (A17.1) and performed transcriptomic studies combined with a detailed phenotypic characterization of this model at three time points. The transcriptomic analysis revealed a massive gene deregulation in the A17.1 mice, among which we identified a significant deregulation of pathways associated with muscle atrophy. Using a mathematical model for progression, we have identified that one-third of the progressive genes were also associated with muscle atrophy. Functional and histological analysis of the skeletal muscle of this mouse model confirmed a severe and progressive muscular atrophy associated with a reduction in muscle strength. Moreover, muscle atrophy in the A17.1 mice was restricted to fast glycolytic fibres, containing a large number of intranuclear inclusions (INIs). The soleus muscle and, in particular, oxidative fibres were spared, even though they contained INIs, albeit to a lesser degree. These results demonstrate a fibre-type specificity of muscle atrophy in this OPMD model. This study improves our understanding of the biological pathways modified in OPMD to identify potential biomarkers and new therapeutic targets.
Abstract:
During long-range transport, many distinct processes – including photochemistry, deposition, emissions and mixing – contribute to the transformation of air mass composition. Partitioning the effects of different processes can be useful when considering the sensitivity of chemical transformation to, for example, a changing environment or anthropogenic influence. However, transformation is not observed directly, since mixing ratios are measured, and models must be used to relate changes to processes. Here, four cases from the ITCT-Lagrangian 2004 experiment are studied. In each case, aircraft intercepted a distinct air mass several times during transport over the North Atlantic, providing a unique dataset and quantifying the net changes in composition from all processes. A new framework is presented to deconstruct the change in O3 mixing ratio (ΔO3) into its component processes, which were not measured directly, taking into account the uncertainty in measurements, initial air mass variability and its time evolution. The results show that the net chemical processing (ΔO3chem) over the whole simulation is greater than net physical processing (ΔO3phys) in all cases. This is in part explained by cancellation effects associated with mixing. In contrast, each case is in a regime of either net photochemical destruction (lower tropospheric transport) or production (an upper tropospheric biomass burning case). However, physical processes influence O3 indirectly through addition or removal of precursor gases, so that changes to physical parameters in a model can have a larger effect on ΔO3chem than on ΔO3phys. Despite its smaller magnitude, the physical processing distinguishes the lower tropospheric export cases, since the net photochemical O3 change is −5 ppbv per day in all three cases.
Processing is quantified using a Lagrangian photochemical model with a novel method for simulating mixing through an ensemble of trajectories and a background profile that evolves with them. The model is able to simulate the magnitude and variability of the observations (of O3, CO, NOy and some hydrocarbons) and is consistent with the time-averaged OH following air masses inferred from hydrocarbon measurements alone (by Arnold et al., 2007). Therefore, it is a useful new method to simulate air mass evolution and variability, and its sensitivity to process parameters.
Abstract:
A recent nonlinear system by Friston et al. (2000, NeuroImage 12: 466–477) links the changes in BOLD response to changes in neural activity. The system consists of five subsystems, linking: (1) neural activity to flow changes; (2) flow changes to oxygen delivery to tissue; (3) flow changes to changes in blood volume and venous outflow; (4) changes in flow, volume, and oxygen extraction fraction to deoxyhemoglobin changes; and finally (5) volume and deoxyhemoglobin changes to the BOLD response. Friston et al. exploit, in subsystem 2, a model by Buxton and Frank coupling flow changes to changes in oxygen metabolism, which assumes tissue oxygen concentration to be close to zero. We describe a model of the coupling between flow and oxygen delivery which takes into account the modulatory effect of changes in tissue oxygen concentration. The major development has been to extend the original Buxton and Frank model for oxygen transport to a full dynamic capillary model, making the model applicable to both transient and steady state conditions. Furthermore, our modification enables us to determine the time series of CMRO2 changes under different conditions, including CO2 challenges. We compare the differences in the performance of the “Friston system” using the original model of Buxton and Frank and that of our model. We also compare the data predicted by our model (with appropriate parameters) to data from a series of OIS studies. The qualitative differences in the behaviour of the models are exposed by different experimental simulations and by comparison with the results of OIS data from brief and extended stimulation protocols and from experiments using hypercapnia.
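The five subsystems can be written as a small ODE system and integrated directly. The sketch below uses the widely quoted form of the hemodynamic equations with the Buxton-Frank extraction function; the parameter values (neural efficacy, time constants, resting extraction fraction E0, venous volume fraction V0) are illustrative choices, not those used in the paper:

```python
# Illustrative Euler integration of the five-subsystem hemodynamic model.
# Parameter values are common illustrative choices, not fitted values.
eps, tau_s, tau_f = 0.5, 0.8, 0.4     # neural efficacy, signal decay, autoregulation
tau_0, alpha, E0, V0 = 1.0, 0.32, 0.4, 0.02

def oxygen_extraction(f):
    # Buxton-Frank oxygen extraction as a function of normalized flow f;
    # it assumes near-zero tissue oxygen concentration, the assumption the
    # abstract's extended capillary model relaxes.
    return 1.0 - (1.0 - E0) ** (1.0 / f)

def simulate_bold(u, dt=0.01, t_end=20.0):
    s, f, v, q = 0.0, 1.0, 1.0, 1.0   # signal, flow, volume, deoxyhemoglobin
    bold = []
    for step in range(int(t_end / dt)):
        t = step * dt
        ds = eps * u(t) - s / tau_s - (f - 1.0) / tau_f           # (1) activity -> flow signal
        df = s
        dv = (f - v ** (1.0 / alpha)) / tau_0                      # (3) volume and venous outflow
        dq = (f * oxygen_extraction(f) / E0
              - v ** (1.0 / alpha) * q / v) / tau_0                # (4) deoxyhemoglobin balance
        s, f, v, q = s + ds * dt, f + df * dt, v + dv * dt, q + dq * dt
        k1, k2, k3 = 7.0 * E0, 2.0, 2.0 * E0 - 0.2
        bold.append(V0 * (k1 * (1 - q) + k2 * (1 - q / v) + k3 * (1 - v)))  # (5) BOLD
    return bold

# Brief stimulus: 2 s of neural activity starting at t = 1 s.
bold = simulate_bold(lambda t: 1.0 if 1.0 <= t < 3.0 else 0.0)
```

Swapping `oxygen_extraction` for a dynamic capillary model with non-zero tissue oxygen is, in essence, where the abstract's modification enters the system.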
Abstract:
The use of Bayesian inference in the estimation of time-frequency representations has, thus far, been limited to offline analysis of signals, using a smoothing spline based model of the time-frequency plane. In this paper we introduce a new framework that allows the routine use of Bayesian inference for online estimation of the time-varying spectral density of a locally stationary Gaussian process. The core of our approach is the use of a likelihood inspired by a local Whittle approximation. This choice, along with the use of a recursive algorithm for non-parametric estimation of the local spectral density, permits the use of a particle filter for estimating the time-varying spectral density online. We provide demonstrations of the algorithm through tracking chirps and the analysis of musical data.
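As a toy illustration of the recursive non-parametric ingredient alone (not the particle filter or the local Whittle likelihood), a running spectral estimate can be formed by exponentially weighting successive short-window periodograms; applied to a chirp, the estimated spectral peak moves upward in frequency:

```python
import cmath
import math

# Periodogram of one short window: squared magnitude of the DFT over n.
def periodogram(window):
    n = len(window)
    pgram = []
    for k in range(n // 2 + 1):
        coef = sum(x * cmath.exp(-2j * math.pi * k * t / n)
                   for t, x in enumerate(window))
        pgram.append(abs(coef) ** 2 / n)
    return pgram

# Recursive (exponentially weighted) running estimate of the local spectrum.
# The window length and forgetting factor are arbitrary illustrative choices.
def track_spectrum(signal, win=32, forget=0.7):
    estimate = None
    estimates = []
    for start in range(0, len(signal) - win + 1, win):
        pg = periodogram(signal[start:start + win])
        estimate = pg if estimate is None else [
            forget * e + (1 - forget) * p for e, p in zip(estimate, pg)]
        estimates.append(estimate)
    return estimates

# A chirp whose instantaneous frequency increases with time: the peak of the
# running spectral estimate should migrate to higher frequency bins.
n = 512
chirp = [math.sin(2 * math.pi * (0.05 + 0.10 * t / n) * t) for t in range(n)]
ests = track_spectrum(chirp)
early_peak = max(range(len(ests[0])), key=ests[0].__getitem__)
late_peak = max(range(len(ests[-1])), key=ests[-1].__getitem__)
```

The forgetting factor plays the role that the state-transition prior plays in the full Bayesian scheme: it controls how quickly old spectral information is discounted as the process evolves.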
Abstract:
The Eyjafjallajökull volcano in Iceland emitted a cloud of ash into the atmosphere during April and May 2010. Over the UK the ash cloud was observed by the FAAM BAe-146 Atmospheric Research Aircraft which was equipped with in-situ probes measuring the concentration of volcanic ash carried by particles of varying sizes. The UK Met Office Numerical Atmospheric-dispersion Modelling Environment (NAME) has been used to simulate the evolution of the ash cloud emitted by the Eyjafjallajökull volcano during the period 4–18 May 2010. In the NAME simulations the processes controlling the evolution of the concentration and particle size distribution include sedimentation and deposition of particles, horizontal dispersion and vertical wind shear. For travel times between 24 and 72 h, a 1/t relationship describes the evolution of the concentration at the centre of the ash cloud and the particle size distribution remains fairly constant. Although NAME does not represent the effects of microphysical processes, it can capture the observed decrease in concentration with travel time in this period. This suggests that, for this eruption, microphysical processes play a small role in determining the evolution of the distal ash cloud. Quantitative comparison with observations shows that NAME can simulate the observed column-integrated mass if around 4% of the total emitted mass is assumed to be transported as far as the UK by small particles (< 30 μm diameter). NAME can also simulate the observed particle size distribution if a distal particle size distribution that contains a large fraction of < 10 μm diameter particles is used, consistent with the idea that phreatomagmatic volcanoes, such as Eyjafjallajökull, emit very fine particles.
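The 1/t relationship can be checked with a simple log-log regression: a fitted power-law exponent near −1 indicates 1/t dilution. The concentration values below are made-up illustrative numbers following C(t) ≈ A/t, not NAME output or aircraft data:

```python
import math

# Hypothetical centre-of-cloud concentrations at travel times of 24-72 h,
# constructed to follow roughly C(t) = 12000 / t (illustrative only).
times_h = [24, 36, 48, 60, 72]
conc = [500.0, 333.0, 250.0, 200.0, 167.0]

# Fit log C = log A + b log t by ordinary least squares;
# b close to -1 is the signature of the 1/t relationship.
xs = [math.log(t) for t in times_h]
ys = [math.log(c) for c in conc]
n = len(xs)
xbar, ybar = sum(xs) / n, sum(ys) / n
b = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
     / sum((x - xbar) ** 2 for x in xs))
log_a = ybar - b * xbar
a_fit = math.exp(log_a)
```

Fitting in log space turns the power law into a straight line, so the exponent can be read off directly as the slope.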
Abstract:
In this paper ensembles of forecasts (of up to six hours) are studied from a convection-permitting model with a representation of model error due to unresolved processes. The ensemble prediction system (EPS) used is an experimental convection-permitting version of the UK Met Office’s 24-member Global and Regional Ensemble Prediction System (MOGREPS). The method of representing model error variability, which perturbs parameters within the model’s parameterisation schemes, has been modified and we investigate the impact of applying this scheme in different ways. These are: a control ensemble where all ensemble members have the same parameter values; an ensemble where the parameters are different between members, but fixed in time; and ensembles where the parameters are updated randomly every 30 or 60 min. The choice of parameters and their ranges of variability have been determined from expert opinion and parameter sensitivity tests. A case of frontal rain over the southern UK has been chosen, which has a multi-banded rainfall structure. The consequences of including model error variability in the case studied are mixed and are summarised as follows. The multiple banding, evident in the radar, is not captured for any single member. However, the single band is positioned in some members where a secondary band is present in the radar. This is found for all ensembles studied. Adding model error variability with fixed parameters in time does increase the ensemble spread for near-surface variables like wind and temperature, but can actually decrease the spread of the rainfall. Perturbing the parameters periodically throughout the forecast does not further increase the spread and exhibits “jumpiness” in the spread at times when the parameters are perturbed. Adding model error variability gives an improvement in forecast skill after the first 2–3 h of the forecast for near-surface temperature and relative humidity.
For precipitation skill scores, adding model error variability has the effect of improving the skill in the first 1–2 h of the forecast, but then of reducing the skill after that. Complementary experiments were performed where the only difference between members was the set of parameter values (i.e. no initial condition variability). The resulting spread was found to be significantly less than the spread from initial condition variability alone.
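The different ways of applying the parameter-variation scheme can be sketched schematically. The parameter names and ranges below are hypothetical placeholders; the real choices came from expert opinion and sensitivity tests:

```python
import random

random.seed(1)

# Hypothetical parameter ranges standing in for parameterisation-scheme
# parameters; the actual parameters and ranges are not given here.
PARAM_RANGES = {"entrainment": (0.5, 1.5), "roughness": (0.1, 0.4)}

def draw_params():
    return {k: random.uniform(lo, hi) for k, (lo, hi) in PARAM_RANGES.items()}

def run_member(fixed_in_time, update_every_min, forecast_min=360, step_min=30):
    """Return the parameter sets one ensemble member uses through a 6 h forecast.

    fixed_in_time=True reproduces the "different between members but fixed"
    ensemble; update_every_min=30 or 60 reproduces the periodic updates.
    (A control ensemble would simply share one parameter set across members.)
    """
    params = draw_params()
    history = []
    for minute in range(0, forecast_min, step_min):
        if (not fixed_in_time and update_every_min
                and minute > 0 and minute % update_every_min == 0):
            params = draw_params()  # random update, e.g. every 30 or 60 min
        history.append(dict(params))
    return history

# One draw held for the whole forecast vs. a fresh draw every 60 min.
fixed = run_member(fixed_in_time=True, update_every_min=None)
varying = run_member(fixed_in_time=False, update_every_min=60)
```

The "jumpiness" in spread noted above corresponds to the discrete times at which `varying` members redraw their parameters.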
Abstract:
Diabatic processes can alter Rossby wave structure; consequently, errors arising from model processes propagate downstream. However, the chaotic spread of forecasts from initial condition uncertainty renders it difficult to trace back from root-mean-square forecast errors to model errors. Here diagnostics unaffected by phase errors are used, enabling investigation of systematic errors in Rossby waves in winter-season forecasts from three operational centers. Tropopause sharpness adjacent to ridges decreases with forecast lead time. It depends strongly on model resolution, even though models are examined on a common grid. Rossby wave amplitude reduces with lead time up to about five days, consistent with under-representation of diabatic modification and transport of air from the lower troposphere into upper-tropospheric ridges, and with too weak humidity gradients across the tropopause. However, amplitude also decreases when resolution is decreased. Further work is necessary to isolate the contribution from errors in the representation of diabatic processes.
Abstract:
Activating transcription factor 3 (Atf3) is rapidly and transiently upregulated in numerous systems, and is associated with various disease states. Atf3 is required for negative feedback regulation of other genes, but is itself subject to negative feedback regulation possibly by autorepression. In cardiomyocytes, Atf3 and Egr1 mRNAs are upregulated via ERK1/2 signalling and Atf3 suppresses Egr1 expression. We previously developed a mathematical model for the Atf3-Egr1 system. Here, we adjusted and extended the model to explore mechanisms of Atf3 feedback regulation. Introduction of an autorepressive loop for Atf3 tuned down its expression and inhibition of Egr1 was lost, demonstrating that negative feedback regulation of Atf3 by Atf3 itself is implausible in this context. Experimentally, signals downstream from ERK1/2 suppress Atf3 expression. Mathematical modelling indicated that this cannot occur by phosphorylation of pre-existing inhibitory transcriptional regulators because the time delay is too short. De novo synthesis of an inhibitory transcription factor (ITF) with a high affinity for the Atf3 promoter could suppress Atf3 expression, but (as with the Atf3 autorepression loop) inhibition of Egr1 was lost. Developing the model to include newly-synthesised miRNAs very efficiently terminated Atf3 protein expression and, with a 4-fold increase in the rate of degradation of mRNA from the mRNA/miRNA complex, profiles for Atf3 mRNA, Atf3 protein and Egr1 mRNA approximated to the experimental data. Combining the ITF model with that of the miRNA did not improve the profiles suggesting that miRNAs are likely to play a dominant role in switching off Atf3 expression post-induction.
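The miRNA switch-off mechanism can be sketched as a small ODE system. All rate constants, the stimulus window, and the simple mass-action forms below are hypothetical, chosen only to reproduce the qualitative on/off transient (including the 4-fold faster degradation of the mRNA/miRNA complex), not the fitted values of the published model:

```python
# Illustrative ODE sketch of miRNA-mediated termination of Atf3 expression:
# a transient stimulus induces both Atf3 mRNA and a miRNA; the miRNA binds
# the mRNA and the complex is degraded 4-fold faster than free mRNA.
# All rate constants are hypothetical.
def simulate(dt=0.01, t_end=10.0, fold_degradation=4.0):
    m, p, mir, cplx = 0.0, 0.0, 0.0, 0.0  # Atf3 mRNA, Atf3 protein, miRNA, complex
    d_m = 0.5                              # basal mRNA degradation rate
    traj = []
    for step in range(int(t_end / dt)):
        t = step * dt
        stimulus = 1.0 if t < 2.0 else 0.0        # transient ERK1/2-driven induction
        dm = 5.0 * stimulus - d_m * m - 2.0 * m * mir     # transcription, decay, binding
        dp = 1.0 * m - 0.8 * p                             # translation and protein decay
        dmir = 0.5 * stimulus - 0.1 * mir                  # newly synthesised miRNA
        dcplx = 2.0 * m * mir - fold_degradation * d_m * cplx  # 4-fold faster decay
        m += dm * dt
        p += dp * dt
        mir += dmir * dt
        cplx += dcplx * dt
        traj.append((t, m, p))
    return traj

traj = simulate()
peak_protein = max(p for _, _, p in traj)
final_protein = traj[-1][2]
```

With these forms, protein expression rises during the stimulus and is then efficiently terminated as the accumulated miRNA sequesters the remaining mRNA into the fast-degrading complex, mirroring the qualitative behaviour the modelling study reports.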