Abstract:
Modern neuroimaging techniques rely on neurovascular coupling to show regions of increased brain activation. However, little is known of the neurovascular coupling relationships that exist for inhibitory signals. To address this issue directly we developed a preparation to investigate the signal sources of one of these proposed inhibitory neurovascular signals, the negative blood oxygen level-dependent (BOLD) response (NBR), in rat somatosensory cortex. We found a reliable NBR measured in rat somatosensory cortex in response to unilateral electrical whisker stimulation, which was located in deeper cortical layers relative to the positive BOLD response. Separate optical measurements (two-dimensional optical imaging spectroscopy and laser Doppler flowmetry) revealed that the NBR was a result of decreased blood volume and flow and increased levels of deoxyhemoglobin. Neural activity in the NBR region, measured by multichannel electrodes, varied considerably as a function of cortical depth. There was a decrease in neuronal activity in deep cortical laminae. After cessation of whisker stimulation there was a large increase in neural activity above baseline. Both the decrease in neuronal activity and increase above baseline after stimulation cessation correlated well with the simultaneous measurement of blood flow suggesting that the NBR is related to decreases in neural activity in deep cortical layers. Interestingly, the magnitude of the neural decrease was largest in regions showing stimulus-evoked positive BOLD responses. Since a similar type of neural suppression in surround regions was associated with a negative BOLD signal, the increased levels of suppression in positive BOLD regions could importantly moderate the size of the observed BOLD response.
Abstract:
This study examines, in a unified fashion, the budgets of ocean gravitational potential energy (GPE) and available gravitational potential energy (AGPE) in the control simulation of the coupled atmosphere–ocean general circulation model HadCM3. Only AGPE can be converted into kinetic energy by adiabatic processes. Diapycnal mixing supplies GPE, but not AGPE, whereas the reverse is true of the combined effect of surface buoyancy forcing and convection. Mixing and buoyancy forcing, thus, play complementary roles in sustaining the large scale circulation. However, the largest globally integrated source of GPE is resolved advection (+0.57 TW) and the largest sink is through parameterized eddy transports (-0.82 TW). The effect of these adiabatic processes on AGPE is identical to their effect on GPE, except for perturbations to both budgets due to numerical leakage exacerbated by non-linearities in the equation of state.
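The GPE/AGPE distinction drawn above corresponds to a standard pair of definitions. As a sketch (the adiabatic re-sorting construction of the reference state is the conventional one, not a detail taken from the HadCM3 diagnostics themselves):

```latex
E_{\mathrm{GPE}} = \int_V \rho\, g\, z \,\mathrm{d}V ,
\qquad
E_{\mathrm{AGPE}} = \int_V g\, z \left( \rho - \rho_{\mathrm{ref}}(z) \right) \mathrm{d}V ,
```

where \(\rho_{\mathrm{ref}}(z)\) is the density profile of the minimum-energy reference state reached by adiabatically re-sorting the water parcels. Only the AGPE part can be released by adiabatic rearrangement, which is why diapycnal mixing can supply GPE without supplying AGPE.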
Abstract:
The role of air–sea coupling in the simulation of the Madden–Julian oscillation (MJO) is explored using two configurations of the Hadley Centre atmospheric model (AGCM), GA3.0, which differ only in F, a parameter controlling convective entrainment and detrainment. Increasing F considerably improves deficient MJO-like variability in the Indian and Pacific Oceans, but variability in and propagation through the Maritime Continent remains weak. By coupling GA3.0 in the tropical Indo-Pacific to a boundary-layer ocean model, KPP, and employing climatological temperature corrections, well-resolved air–sea interactions are simulated with limited alterations to the mean state. At default F, when GA3.0 has a poor MJO, coupling produces a stronger MJO with some eastward propagation, although both aspects remain deficient. These results agree with previous sensitivity studies using AGCMs with poor variability. At higher F, coupling does not affect MJO amplitude but enhances propagation through the Maritime Continent, resulting in an MJO that resembles observations. A sensitivity experiment with coupling in only the Indian Ocean reverses these improvements, suggesting that coupling in the Maritime Continent and West Pacific is critical for propagation. We hypothesise that for AGCMs with a poor MJO, coupling provides a “crutch” to artificially augment MJO-like activity through high-frequency SST anomalies. In related experiments, we employ the KPP framework to analyse the impact of air–sea interactions in the fully coupled GA3.0, which at default F shows a similar MJO to uncoupled GA3.0. This is due to compensating effects: an improvement from coupling and a degradation from mean-state errors. Future studies on the role of coupling should carefully separate these effects.
Abstract:
In its default configuration, the Hadley Centre climate model (GA2.0) simulates roughly one-half the observed level of Madden–Julian oscillation activity, with MJO events often lasting fewer than seven days. We use initialised, climate-resolution hindcasts to examine the sensitivity of the GA2.0 MJO to a range of changes in sub-grid parameterisations and model configurations. All 22 changes are tested for two cases during the Years of Tropical Convection. Improved skill comes only from (a) disabling vertical momentum transport by convection and (b) increasing mixing entrainment and detrainment for deep and mid-level convection. These changes are subsequently tested in a further 14 hindcast cases; only (b) consistently improves MJO skill, from 12 to 22 days. In a 20-year integration, (b) produces near-observed levels of MJO activity, but propagation through the Maritime Continent remains weak. With default settings, GA2.0 produces precipitation too readily, even in anomalously dry columns. Implementing (b) decreases the efficiency of convection, permitting instability to build during the suppressed MJO phase and producing a more favourable environment for the active phase. The distribution of daily rain rates is more consistent with satellite data; default entrainment produces 6–12 mm/day too frequently. These results are consistent with recent studies showing that greater sensitivity of convection to moisture improves the representation of the MJO.
Abstract:
We investigate the initialization of Northern Hemisphere sea ice in the global climate model ECHAM5/MPI-OM by assimilating sea-ice concentration data. The analysis updates for concentration are given by Newtonian relaxation, and we discuss different ways of specifying the analysis updates for mean thickness. Because the conservation of mean ice thickness or actual ice thickness in the analysis updates leads to poor assimilation performance, we introduce a proportional dependence between concentration and mean thickness analysis updates. Assimilation with these proportional mean-thickness analysis updates significantly reduces assimilation error both in identical-twin experiments and when assimilating sea-ice observations, reducing the concentration error by a factor of four to six, and the thickness error by a factor of two. To understand the physical aspects of assimilation errors, we construct a simple prognostic model of the sea-ice thermodynamics, and analyse its response to the assimilation. We find that the strong dependence of thermodynamic ice growth on ice concentration necessitates an adjustment of mean ice thickness in the analysis update. To understand the statistical aspects of assimilation errors, we study the model background error covariance between ice concentration and ice thickness. We find that the spatial structure of covariances is best represented by the proportional mean-thickness analysis updates. Both physical and statistical evidence supports the experimental finding that proportional mean-thickness updates are superior to the other two methods considered and enable us to assimilate sea ice in a global climate model using simple Newtonian relaxation.
Abstract:
We investigate the initialisation of Northern Hemisphere sea ice in the global climate model ECHAM5/MPI-OM by assimilating sea-ice concentration data. The analysis updates for concentration are given by Newtonian relaxation, and we discuss different ways of specifying the analysis updates for mean thickness. Because the conservation of mean ice thickness or actual ice thickness in the analysis updates leads to poor assimilation performance, we introduce a proportional dependence between concentration and mean thickness analysis updates. Assimilation with these proportional mean-thickness analysis updates leads to good assimilation performance for sea-ice concentration and thickness, both in identical-twin experiments and when assimilating sea-ice observations. The simulation of other Arctic surface fields in the coupled model is, however, not significantly improved by the assimilation. To understand the physical aspects of assimilation errors, we construct a simple prognostic model of the sea-ice thermodynamics, and analyse its response to the assimilation. We find that an adjustment of mean ice thickness in the analysis update is essential to arrive at plausible state estimates. To understand the statistical aspects of assimilation errors, we study the model background error covariance between ice concentration and ice thickness. We find that the spatial structure of covariances is best represented by the proportional mean-thickness analysis updates. Both physical and statistical evidence supports the experimental finding that assimilation with proportional mean-thickness updates outperforms the other two methods considered. The method described here is very simple to implement, and gives results that are sufficiently good to be used for initialising sea ice in a global climate model for seasonal to decadal predictions.
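The Newtonian relaxation used for the concentration updates above can be sketched in a few lines. This is a generic illustration, not the ECHAM5/MPI-OM implementation: the relaxation timescale and the use of the background thickness-to-concentration ratio as the proportionality factor for the mean-thickness increment are assumptions of this sketch.

```python
import numpy as np

def nudge_step(conc, thick, conc_obs, dt=1.0, tau=5.0):
    """One Newtonian-relaxation (nudging) analysis step for sea ice.

    conc, thick -- background ice concentration and mean thickness
    conc_obs    -- observed ice concentration
    tau         -- relaxation timescale, in the same units as dt
    """
    # concentration increment: relax towards the observation
    d_conc = (dt / tau) * (conc_obs - conc)
    # proportional mean-thickness increment, with the background ratio
    # thick/conc as the (illustrative) proportionality factor
    d_thick = np.divide(thick * d_conc, conc,
                        out=np.zeros_like(thick), where=conc > 0.0)
    conc_a = np.clip(conc + d_conc, 0.0, 1.0)      # keep concentration in [0, 1]
    thick_a = np.maximum(thick + d_thick, 0.0)     # keep thickness non-negative
    return conc_a, thick_a

conc = np.array([0.8, 0.5, 0.0])
thick = np.array([2.0, 1.0, 0.0])
obs = np.array([0.9, 0.4, 0.0])
conc_a, thick_a = nudge_step(conc, thick, obs)
print(conc_a, thick_a)
```

With dt/tau = 0.2, the first cell is nudged from 0.8 towards 0.9 (increment 0.02) and its thickness receives the proportional increment 2.0 × 0.02 / 0.8 = 0.05; open-water cells are left untouched.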
Abstract:
There has been recent interest in sensory systems that are able to display a response which is proportional to a fold change in stimulus concentration, a feature referred to as fold-change detection (FCD). Here, we demonstrate FCD in a recent whole-pathway mathematical model of Escherichia coli chemotaxis. FCD is shown to hold for each protein in the signalling cascade and to be robust to kinetic rate and protein concentration variation. Using a sensitivity analysis, we find that only variations in the number of receptors within a signalling team lead to the model not exhibiting FCD. We also discuss the ability of a cell with multiple receptor types to display FCD and explain how a particular receptor configuration may be used to elucidate the two experimentally determined regimes of FCD behaviour. All findings are discussed in respect of the experimental literature.
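Fold-change detection itself is straightforward to demonstrate on a toy circuit. The sketch below uses a standard incoherent feed-forward loop, a generic FCD example rather than the whole-pathway chemotaxis model of the study, with arbitrary rate constants: stepping the input by the same fold from two different baselines produces identical responses.

```python
import numpy as np

def simulate_ifl(u0, fold, t_end=50.0, dt=0.01,
                 b1=1.0, a1=0.5, b2=1.0, a2=0.5):
    """Incoherent feed-forward loop: input u activates x and y, x represses y.

        dx/dt = b1*u - a1*x
        dy/dt = b2*u/x - a2*y

    Starts at the steady state for input u0, steps the input to fold*u0 at
    t = 0, and returns the y trajectory (forward Euler integration).
    """
    x = b1 * u0 / a1            # steady state of x for input u0
    y = b2 * u0 / (a2 * x)      # steady state of y (independent of u0)
    u = fold * u0               # step input applied at t = 0
    n = int(t_end / dt)
    traj = np.empty(n)
    for i in range(n):
        x += dt * (b1 * u - a1 * x)
        y += dt * (b2 * u / x - a2 * y)
        traj[i] = y
    return traj

# The same fold change applied at different baselines gives (numerically)
# identical responses -- the fold-change-detection property.
r1 = simulate_ifl(u0=1.0, fold=2.0)
r2 = simulate_ifl(u0=5.0, fold=2.0)
print(np.max(np.abs(r1 - r2)))   # close to zero
```

The invariance follows because the input step scales u and x by the same factor, leaving the ratio u/x, and hence the dynamics of y, unchanged.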
Abstract:
Many studies evaluating model boundary-layer schemes focus either on near-surface parameters or on short-term observational campaigns. This reflects the observational datasets that are widely available for use in model evaluation. In this paper we show how surface and long-term Doppler lidar observations, combined in a way that matches the model representation of the boundary layer as closely as possible, can be used to evaluate the skill of boundary-layer forecasts. We use a 2-year observational dataset from a rural site in the UK to evaluate a climatology of boundary-layer type forecast by the UK Met Office Unified Model. In addition, we demonstrate the use of a binary skill score, the Symmetric Extremal Dependence Index (SEDI), to investigate the dependence of forecast skill on season, horizontal resolution and forecast lead time. A clear diurnal and seasonal cycle can be seen in the climatology of both the model and the observations, with the main discrepancies being that the model overpredicts cumulus-capped and decoupled stratocumulus-capped boundary layers and underpredicts well-mixed boundary layers. Using the SEDI skill score, the model is most skilful at predicting surface stability. The skill of the model in predicting cumulus-capped and stratocumulus-capped stable boundary-layer forecasts is low but greater than that of a 24-hour persistence forecast. In contrast, the prediction of decoupled boundary layers and boundary layers with multiple cloud layers is worse than persistence. This process-based evaluation approach has the potential to be applied to other boundary-layer parameterisation schemes with similar decision structures.
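The Symmetric Extremal Dependence Index used above has a closed form in terms of the hit rate H and false-alarm rate F (following Ferro and Stephenson's definition; the contingency-table argument names below are ours):

```python
import math

def sedi(hits, misses, false_alarms, correct_negatives):
    """Symmetric Extremal Dependence Index from a 2x2 contingency table.

        SEDI = (ln F - ln H - ln(1-F) + ln(1-H)) /
               (ln F + ln H + ln(1-F) + ln(1-H))

    where H is the hit rate and F the false-alarm rate. Undefined when
    H or F equals 0 or 1.
    """
    H = hits / (hits + misses)
    F = false_alarms / (false_alarms + correct_negatives)
    num = math.log(F) - math.log(H) - math.log(1 - F) + math.log(1 - H)
    den = math.log(F) + math.log(H) + math.log(1 - F) + math.log(1 - H)
    return num / den

# A forecast no better than random (H == F) scores 0; skilful forecasts
# approach 1.
print(sedi(90, 10, 10, 90))   # H = 0.9, F = 0.1
```

Comparing each forecast against a 24-hour persistence forecast, as in the study, amounts to computing SEDI for both and comparing the two scores.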
Abstract:
Wave solutions to a mechanochemical model for cytoskeletal activity are studied and the results applied to the waves of chemical and mechanical activity that sweep over an egg shortly after fertilization. The model takes into account the calcium-controlled presence of actively contractile units in the cytoplasm, and consists of a viscoelastic force equilibrium equation and a conservation equation for calcium. Using piecewise linear caricatures, we obtain analytic solutions for travelling waves on a strip and demonstrate that the full nonlinear system behaves as predicted by the analytic solutions. The equations are solved on a sphere and the numerical results are similar to the analytic solutions. We indicate how the speed of the waves can be used as a diagnostic tool with which the chemical reactivity of the egg surface can be measured.
Abstract:
Burst suppression in the electroencephalogram (EEG) is a well-described phenomenon that occurs during deep anesthesia, as well as in a variety of congenital and acquired brain insults. Classically it is thought of as spatially synchronous, quasi-periodic bursts of high amplitude EEG separated by low amplitude activity. However, its characterization as a “global brain state” has been challenged by recent results obtained with intracranial electrocorticography. Not only does it appear that burst suppression activity is highly asynchronous across cortex, but also that it may occur in isolated regions of circumscribed spatial extent. Here we outline a realistic neural field model for burst suppression by adding a slow process of synaptic resource depletion and recovery, which is able to reproduce qualitatively the empirically observed features during general anesthesia at the whole cortex level. Simulations reveal heterogeneous bursting over the model cortex and complex spatiotemporal dynamics during simulated anesthetic action, and provide forward predictions of neuroimaging signals for subsequent empirical comparisons and more detailed characterization. Because burst suppression corresponds to a dynamical end-point of brain activity, theoretically accounting for its spatiotemporal emergence will vitally contribute to efforts aimed at clarifying whether a common physiological trajectory is induced by the actions of general anesthetic agents. We have taken a first step in this direction by showing that a neural field model can qualitatively match recent experimental data that indicate spatial differentiation of burst suppression activity across cortex.
Abstract:
The LMD AGCM was iteratively coupled to the global BIOME1 model in order to explore the role of vegetation-climate interactions in response to mid-Holocene (6000 y BP) orbital forcing. The sea-surface temperature and sea-ice distribution used were present-day and CO2 concentration was pre-industrial. The land surface was initially prescribed with present-day vegetation. Initial climate “anomalies” (differences between AGCM results for 6000 y BP and control) were used to drive BIOME1; the simulated vegetation was provided to a further AGCM run, and so on. Results after five iterations were compared to the initial results in order to identify vegetation feedbacks. These were centred on regions showing strong initial responses. The orbitally induced high-latitude summer warming, and the intensification and extension of Northern Hemisphere tropical monsoons, were both amplified by vegetation feedbacks. Vegetation feedbacks were smaller than the initial orbital effects for most regions and seasons, but in West Africa the summer precipitation increase more than doubled in response to changes in vegetation. In the last iteration, global tundra area was reduced by 25% and the southern limit of the Sahara desert was shifted 2.5° north (to 18 °N) relative to today. These results were compared with 6000 y BP observational data recording forest-tundra boundary changes in northern Eurasia and savanna-desert boundary changes in northern Africa. Although the inclusion of vegetation feedbacks improved the qualitative agreement between the model results and the data, the simulated changes were still insufficient, perhaps due to the lack of ocean-surface feedbacks.
Abstract:
The disadvantage of the majority of data assimilation schemes is the assumption that the conditional probability density function of the state of the system given the observations [posterior probability density function (PDF)] is distributed either locally or globally as a Gaussian. The advantage, however, is that through various different mechanisms they ensure initial conditions that are predominantly in linear balance and therefore spurious gravity wave generation is suppressed. The equivalent-weights particle filter is a data assimilation scheme that allows for a representation of a potentially multimodal posterior PDF. It does this via proposal densities that lead to extra terms being added to the model equations and means the advantage of the traditional data assimilation schemes, in generating predominantly balanced initial conditions, is no longer guaranteed. This paper looks in detail at the impact the equivalent-weights particle filter has on dynamical balance and gravity wave generation in a primitive equation model. The primary conclusions are that (i) provided the model error covariance matrix imposes geostrophic balance, then each additional term required by the equivalent-weights particle filter is also geostrophically balanced; (ii) the relaxation term required to ensure the particles are in the locality of the observations has little effect on gravity waves and actually induces a reduction in gravity wave energy if sufficiently large; and (iii) the equivalent-weights term, which leads to the particles having equivalent significance in the posterior PDF, produces a change in gravity wave energy comparable to the stochastic model error. Thus, the scheme does not produce significant spurious gravity wave energy and so has potential for application in real high-dimensional geophysical applications.
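The relaxation-proposal mechanism discussed above can be illustrated on a toy scalar state-space model. The sketch below is a generic proposal-density particle filter with a nudging term, not the full equivalent-weights scheme and certainly not a primitive-equation model; the AR(1) dynamics, noise levels, and relaxation strength are all illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

def norm_logpdf(x, mean, var):
    """Log-density of a scalar Gaussian, vectorised over x and mean."""
    return -0.5 * (np.log(2.0 * np.pi * var) + (x - mean) ** 2 / var)

# Scalar AR(1) "model" observed with noise at every step.
a, q, r = 0.95, 0.1, 0.05     # model coefficient, model-error and obs-error variances
T, N, tau = 50, 500, 0.5      # time steps, particles, relaxation strength

truth, truths, est = 0.0, [], []
x = rng.normal(0.0, 1.0, N)   # initial particle ensemble
for _ in range(T):
    truth = a * truth + rng.normal(0.0, np.sqrt(q))
    y = truth + rng.normal(0.0, np.sqrt(r))
    truths.append(truth)
    # Proposal: model step plus a relaxation (nudging) term towards the observation.
    mean_p = a * x + tau * (y - a * x)
    x_new = mean_p + rng.normal(0.0, np.sqrt(q), N)
    # Importance weights: likelihood * transition density / proposal density.
    logw = (norm_logpdf(y, x_new, r)
            + norm_logpdf(x_new, a * x, q)
            - norm_logpdf(x_new, mean_p, q))
    w = np.exp(logw - logw.max())
    w /= w.sum()
    est.append(np.sum(w * x_new))
    x = x_new[rng.choice(N, N, p=w)]   # multinomial resampling

rmse = float(np.sqrt(np.mean((np.array(est) - np.array(truths)) ** 2)))
print(rmse)
```

The key point mirrored from the abstract is that the nudging term changes the proposal, so the importance weights must divide out the proposal density; the extra term keeps particles near the observations without (in this balanced scalar setting) biasing the posterior estimate.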
Abstract:
Atmospheric CO2 concentration is expected to continue rising in the coming decades, but natural or artificial processes may eventually reduce it. We show that, in the FAMOUS atmosphere-ocean general circulation model, the reduction of ocean heat content as radiative forcing decreases is greater than would be expected from a linear model simulation of the response to the applied forcings. We relate this effect to the behavior of the Atlantic meridional overturning circulation (AMOC): the ocean cools more efficiently with a strong AMOC. The AMOC weakens as CO2 rises, then strengthens as CO2 declines, but temporarily overshoots its original strength. This nonlinearity comes mainly from the accumulated advection of salt into the North Atlantic, which gives the system a longer memory. This implies that changes observed in response to different CO2 scenarios or from different initial states, such as from past changes, may not be a reliable basis for making projections.
Abstract:
We study cartel stability in a differentiated price-setting duopoly with returns to scale. We show that a cartel may be equally stable under weaker differentiation, provided that the decreasing-returns parameter is sufficiently high. In addition, we demonstrate that, for a given discount factor, there exist technologies with decreasing returns to scale under which the cartel is always stable, regardless of the degree of differentiation.
Abstract:
A significant challenge in the prediction of climate change impacts on ecosystems and biodiversity is quantifying the sources of uncertainty that emerge within and between different models. Statistical species niche models have grown in popularity, yet no single best technique has been identified, reflecting differing performance in different situations. Our aim was to quantify uncertainties associated with the application of two complementary modelling techniques. Generalised linear mixed models (GLMM) and generalised additive mixed models (GAMM) were used to model the realised niche of ombrotrophic Sphagnum species in British peatlands. These models were then used to predict changes in Sphagnum cover between 2020 and 2050 based on projections of climate change and atmospheric deposition of nitrogen and sulphur. Over 90% of the variation in the GLMM predictions was due to niche model parameter uncertainty, dropping to 14% for the GAMM. After other factors were covaried out, the average variation in predicted values of Sphagnum cover across UK peatlands was the next-largest source of variation (8% for the GLMM and 86% for the GAMM). The better performance of the GAMM needs to be weighed against its tendency to overfit the training data. While our niche models are only a first approximation, we used them to undertake a preliminary evaluation of the relative importance of climate change and nitrogen and sulphur deposition, and of the geographic locations of the largest expected changes in Sphagnum cover. Predicted changes in cover were all small (generally <1% in an average 4 m² unit area) but also highly uncertain. The peatlands expected to be most affected by climate change in combination with atmospheric pollution were Dartmoor, the Brecon Beacons and the western Lake District.