Abstract:
FAMOUS fills an important role in the hierarchy of climate models: it explicitly resolves atmospheric and oceanic dynamics while remaining computationally efficient enough to make very long simulations or large ensembles possible. An improved set of carbon cycle parameters for this model has been found using a perturbed physics ensemble technique. This is an important step towards building the "Earth System" modelling capability of FAMOUS, which is a reduced-resolution, and hence faster-running, version of the Hadley Centre climate model HadCM3. Two separate 100-member perturbed parameter ensembles were performed, one for the land surface and one for the ocean. The land surface scheme was tested against present-day and past representations of vegetation, and the ocean ensemble was tested against observations of nitrate. An advantage of using a relatively fast climate model is that a large number of simulations can be run, so the model parameter space (a large source of climate model uncertainty) can be more thoroughly sampled. This has the associated benefit of allowing the sensitivity of model results to changes in each parameter to be assessed. The climatologies of surface and tropospheric air temperature and precipitation are improved relative to previous versions of FAMOUS. The improved representation of upper-atmosphere temperatures is driven by improved ozone concentrations near the tropopause and better upper-level winds.
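The parameter-sampling step behind a perturbed physics ensemble can be sketched as follows. This is a generic Latin hypercube routine, and the two land-surface parameter names and ranges are illustrative assumptions, not the FAMOUS parameter set:

```python
import random

def latin_hypercube(n_members, bounds, seed=0):
    """Latin hypercube sample: each parameter's range is cut into
    n_members equal strata and every stratum is used exactly once."""
    rng = random.Random(seed)
    strata = {name: rng.sample(range(n_members), n_members) for name in bounds}
    ensemble = []
    for i in range(n_members):
        member = {}
        for name, (lo, hi) in bounds.items():
            s = strata[name][i]                  # this member's stratum index
            u = (s + rng.random()) / n_members   # uniform draw inside the stratum
            member[name] = lo + u * (hi - lo)
        ensemble.append(member)
    return ensemble

# Hypothetical land-surface parameter ranges (names are illustrative only).
bounds = {"q10_soil_resp": (1.5, 3.5), "leaf_turnover": (0.2, 1.0)}
members = latin_hypercube(100, bounds)
```

Each of the 100 members would then be run through the model and scored against observations; the stratified design guarantees that every parameter's full range is covered even with a modest ensemble size.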
Abstract:
We present a dynamic causal model that can explain context-dependent changes in neural responses, in the rat barrel cortex, to electrical whisker stimulation at different frequencies. Neural responses were measured in terms of local field potentials. These were converted into current source density (CSD) data, and the time series of the CSD sink was extracted to provide the response train. The model structure consists of three layers (approximating the responses from the brain stem to the thalamus and then the barrel cortex), and the latter two layers contain nonlinearly coupled modules of linear second-order dynamic systems. The interaction of these modules forms a nonlinear regulatory system that determines the temporal structure of the neural response amplitude for the thalamic and cortical layers. The model is based on the measured population dynamics of neurons rather than the dynamics of a single neuron and was evaluated against CSD data from experiments with varying stimulation frequency (1–40 Hz), random pulse trains, and awake and anesthetized animals. The model parameters obtained by optimization for the different physiological conditions (anesthetized or awake) were significantly different. Following Friston, Mechelli, Turner, and Price (2000), this work is part of a formal mathematical system currently being developed (Zheng et al., 2005) that links stimulation to the blood oxygen level dependent (BOLD) functional magnetic resonance imaging (fMRI) signal through neural activity and hemodynamic variables. The importance of the model described here is that it can be used to invert the hemodynamic measurements of changes in blood flow to estimate the underlying neural activity.
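The idea of nonlinearly coupled second-order modules can be illustrated with a minimal sketch: one linear second-order unit whose input gain is depressed by each stimulus and recovers between stimuli, so the response amplitude depends on stimulation frequency. The equations and constants below are illustrative assumptions, not the fitted model:

```python
import math

def second_order_module(stim_times, wn=2 * math.pi * 8.0, zeta=0.3,
                        gain_drop=0.6, tau_a=0.3, dt=1e-3, t_end=0.5):
    """One linear second-order unit, x'' + 2*zeta*wn*x' + wn**2 * x = a(t)*u(t),
    driven by impulse-like stimuli u(t). The input gain a(t) is depressed at
    each stimulus and recovers with time constant tau_a; this simple
    nonlinearity makes the response amplitude stimulus-history dependent."""
    pulses = {int(round(t / dt)) for t in stim_times}
    x = v = 0.0
    a = 1.0                                   # adaptation (gain) state
    trace = []
    for i in range(int(round(t_end / dt))):
        u = 1.0 / dt if i in pulses else 0.0  # impulse-like whisker stimulus
        v += dt * (a * u - 2 * zeta * wn * v - wn ** 2 * x)
        x += dt * v
        a += dt * (1.0 - a) / tau_a           # gain recovers toward 1
        if u:
            a *= gain_drop                    # and is depressed by each stimulus
        trace.append(x)
    return trace
```

Driving such a unit with pulse trains at different rates reproduces, in caricature, the frequency-dependent suppression of later responses.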
Abstract:
The difference between the rate of change of cerebral blood volume (CBV) and cerebral blood flow (CBF) following stimulation is thought to be due to circumferential stress relaxation in veins (Mandeville, J.B., Marota, J.J.A., Ayata, C., Zaharchuk, G., Moskowitz, M.A., Rosen, B.R., Weisskoff, R.M., 1999. Evidence of a cerebrovascular postarteriole windkessel with delayed compliance. J. Cereb. Blood Flow Metab. 19, 679–689). In this paper we explore the visco-elastic properties of blood vessels and present a dynamic model relating changes in CBF to changes in CBV, which we refer to as the visco-elastic windkessel (VW) model. A novel feature of this model is that the parameter characterising the pressure–volume relationship of blood vessels is treated as a state variable dependent on the rate of change of CBV, producing hysteresis in the pressure–volume space during vessel dilation and contraction. The VW model is nonlinear and time-invariant, and is able to predict the observed differences between the time series of CBV and CBF measurements following changes in neural activity. Like the windkessel model of Mandeville et al. (1999), the VW model is primarily a model of haemodynamic changes in the venous compartment. The VW model is demonstrated to have the following characteristics typical of visco-elastic materials: (1) hysteresis, (2) creep, and (3) stress relaxation; hence it provides a unified model of the visco-elastic properties of the vasculature.
The model will not only contribute to the interpretation of the Blood Oxygen Level Dependent (BOLD) signals from functional Magnetic Resonance Imaging (fMRI) experiments, but also find applications in the study and modelling of the brain vasculature and the haemodynamics of circulatory and cardiovascular systems.
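The core mechanism, a compliance parameter that is itself a state variable driven by the rate of volume change, can be sketched as a two-state system. The equations and constants below are a toy in the spirit of the VW model, not its exact formulation:

```python
def visco_elastic_windkessel(flow, dt=0.01, alpha0=0.4, k=0.5, tau_a=5.0):
    """Toy visco-elastic windkessel: normalised venous volume V obeys
    dV/dt = f_in - f_out with outflow f_out = V**(1/alpha), and the
    compliance exponent alpha is a state variable dragged away from its
    resting value alpha0 by dV/dt, which produces hysteresis between
    inflation and deflation. Illustrative equations only."""
    V, alpha = 1.0, alpha0
    vols = []
    for f_in in flow:
        dV = f_in - V ** (1.0 / alpha)
        V += dt * dV
        # compliance parameter: relaxes to alpha0, driven by the rate dV/dt
        alpha += dt * ((alpha0 - alpha) / tau_a - k * dV)
        vols.append(V)
    return vols
```

A step increase in inflow makes the volume rise toward a higher steady state and then relax back when the flow returns to baseline, with the rate-dependent exponent shaping the transient.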
Abstract:
Radiometric data in the visible domain acquired by satellite remote sensing have proven to be powerful for monitoring the states of the ocean, both physical and biological. With the help of these data it is possible to understand certain variations in biological responses of marine phytoplankton on ecological time scales. Here, we implement a sequential data-assimilation technique to estimate from a conventional nutrient–phytoplankton–zooplankton (NPZ) model the time variations of observed and unobserved variables. In addition, we estimate the time evolution of two biological parameters, namely, the specific growth rate and specific mortality of phytoplankton. Our study demonstrates that: (i) the series of time-varying estimates of specific growth rate obtained by sequential data assimilation improves the fitting of the NPZ model to the satellite-derived time series: the model trajectories are closer to the observations than those obtained by implementing static values of the parameter; (ii) the estimates of unobserved variables, i.e., nutrient and zooplankton, obtained from an NPZ model by implementation of a pre-defined parameter evolution can be different from those obtained on applying the sequences of parameters estimated by assimilation; and (iii) the maximum estimated specific growth rate of phytoplankton in the study area is more sensitive to the sea-surface temperature than would be predicted by temperature-dependent functions reported previously. The overall results of the study are potentially useful for enhancing our understanding of the biological response of phytoplankton in a changing environment.
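The sequential estimation of a time-varying growth rate can be sketched with a particle filter on an augmented state, here using a one-variable logistic model as a stand-in for the full NPZ model. The prior, noise levels, and random-walk parameter evolution are illustrative assumptions:

```python
import math
import random

def assimilate_growth_rate(obs, n_part=500, q=0.02, r=0.05, seed=1):
    """Sequential (particle filter) estimation of a time-varying specific
    growth rate mu from phytoplankton observations. The state is augmented
    with mu, which evolves as a random walk between assimilation steps."""
    rng = random.Random(seed)
    # particles: (phytoplankton P, growth rate mu)
    parts = [(obs[0] * (1.0 + 0.1 * rng.gauss(0, 1)),
              0.5 + 0.1 * rng.gauss(0, 1)) for _ in range(n_part)]
    mu_est = []
    for y in obs[1:]:
        # forecast: model step plus random-walk evolution of the parameter
        prop = []
        for P, mu in parts:
            mu = max(1e-3, mu + q * rng.gauss(0, 1))
            P = max(1e-6, P + mu * P * (1.0 - P))   # logistic growth step
            prop.append((P, mu))
        # update: weight by observation likelihood, then resample
        w = [math.exp(-0.5 * ((P - y) / r) ** 2) for P, _ in prop]
        if sum(w) == 0.0:                           # guard against underflow
            w = [1.0] * n_part
        parts = rng.choices(prop, weights=w, k=n_part)
        mu_est.append(sum(mu for _, mu in parts) / n_part)
    return mu_est
```

Feeding the filter observations generated with a growth rate above the prior mean pulls the parameter estimate upward step by step, which is the sense in which the time-varying estimates improve the fit over a static parameter.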
Abstract:
In this paper we propose and analyze a simple mathematical model consisting of four variables, viz., nutrient concentration, toxin-producing phytoplankton (TPP), non-toxic phytoplankton (NTP), and toxin concentration. Limitation in the concentration of the extracellular nutrient has been incorporated as an environmental stress condition for the plankton population, and the liberation of toxic chemicals has been described by a monotonic function of extracellular nutrient. The model is analyzed and simulated to reproduce the experimental findings of Graneli and Johansson [Graneli, E., Johansson, N., 2003. Increase in the production of allelopathic Prymnesium parvum cells grown under N- or P-deficient conditions. Harmful Algae 2, 135–145]. The robustness of the numerical experiments is tested by a formal parameter sensitivity analysis. Ours is the first theoretical model consistent with the experiments of Graneli and Johansson (2003), and our results demonstrate that, when nutrient-deficient conditions are favorable for the TPP population to release toxic chemicals, the TPP species control the bloom of other, non-toxic phytoplankton species. Consistent with the observations of Graneli and Johansson (2003), our model overcomes a limitation of several other plankton-dynamics models, which do not incorporate the effect of nutrient-limited toxin production.
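The qualitative structure of such a four-variable system can be sketched as below; the functional forms follow the description (Michaelis–Menten uptake, toxin liberation decreasing monotonically in nutrient, toxin-induced NTP mortality), but every rate constant is an illustrative assumption, not a value from the paper:

```python
def simulate_plankton(n_in, t0=0.1, t_end=200.0, dt=0.01):
    """Euler sketch of a nutrient (N) / toxic phytoplankton (T) /
    non-toxic phytoplankton (P) / toxin (C) system: toxin liberation rises
    as extracellular nutrient becomes scarce, and the toxin adds mortality
    to the non-toxic species. Returns the final (N, T, P, C)."""
    N, T, P, C = 1.0, t0, 0.1, 0.0
    uptake = lambda n: n / (0.5 + n)               # Michaelis-Menten uptake
    liberation = lambda n: 0.5 * 0.2 / (0.2 + n)   # monotone decreasing in N
    for _ in range(int(t_end / dt)):
        dN = n_in - uptake(N) * (T + P) - 0.1 * N
        dT = 0.8 * uptake(N) * T - 0.1 * T
        dP = uptake(N) * P - 0.1 * P - 0.4 * C * P   # toxin kills NTP
        dC = liberation(N) * T - 0.2 * C
        N = max(0.0, N + dt * dN)
        T = max(0.0, T + dt * dT)
        P = max(0.0, P + dt * dP)
        C = max(0.0, C + dt * dC)
    return N, T, P, C
```

Running the system with and without the toxic species shows the mechanism the paper describes: the presence of TPP, via competition and toxin release, suppresses the non-toxic population.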
Abstract:
We propose a new sparse model construction method aimed at maximizing a model's generalisation capability for a large class of linear-in-the-parameters models. The coordinate descent optimization algorithm is employed with a modified l1-penalized least squares cost function in order to estimate a single parameter and its regularization parameter simultaneously, based on the leave-one-out mean square error (LOOMSE). Our original contribution is to derive a closed form of the optimal LOOMSE regularization parameter for a single-term model, for which we show that the LOOMSE can be computed analytically without actually splitting the data set, leading to a very simple parameter estimation method. We then integrate the new results within the coordinate descent optimization algorithm to update the model parameters one at a time for linear-in-the-parameters models. Consequently, a fully automated procedure is achieved without resort to any other validation data set for iterative model evaluation. Illustrative examples are included to demonstrate the effectiveness of the new approaches.
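The claim that the LOOMSE can be computed without actually splitting the data can be illustrated for a single-term regularised model with the standard PRESS identity; this is a generic sketch of the mechanism, not the paper's exact derivation:

```python
def loomse_single_term(phi, y, lam):
    """Closed-form leave-one-out mean square error for a single-term
    regularised model y ~ w*phi with w = (phi.y)/(phi.phi + lam), using
    the PRESS identity e_i^(-i) = (y_i - w*phi_i) / (1 - phi_i**2/(S + lam)),
    where S = phi.phi: no data split is actually performed."""
    S = sum(p * p for p in phi)
    w = sum(p * t for p, t in zip(phi, y)) / (S + lam)
    press = [((t - w * p) / (1.0 - p * p / (S + lam))) ** 2
             for p, t in zip(phi, y)]
    return sum(press) / len(press)

def loomse_brute_force(phi, y, lam):
    """Reference implementation that really refits with each point held out."""
    errs = []
    for i in range(len(phi)):
        S = sum(p * p for j, p in enumerate(phi) if j != i)
        c = sum(p * t for j, (p, t) in enumerate(zip(phi, y)) if j != i)
        w = c / (S + lam)
        errs.append((y[i] - w * phi[i]) ** 2)
    return sum(errs) / len(errs)
```

The two routines agree to machine precision, which is exactly what makes tuning the regularization parameter against the LOOMSE cheap enough to embed inside a coordinate descent loop.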
Abstract:
Global wetlands are believed to be climate sensitive, and are the largest natural emitters of methane (CH4). Increased wetland CH4 emissions could act as a positive feedback to future warming. The Wetland and Wetland CH4 Inter-comparison of Models Project (WETCHIMP) investigated our present ability to simulate large-scale wetland characteristics and corresponding CH4 emissions. To ensure inter-comparability, we used a common experimental protocol driving all models with the same climate and carbon dioxide (CO2) forcing datasets. The WETCHIMP experiments were conducted for model equilibrium states as well as transient simulations covering the last century. Sensitivity experiments investigated model response to changes in selected forcing inputs (precipitation, temperature, and atmospheric CO2 concentration). Ten models participated, covering the spectrum from simple to relatively complex, including models tailored either for regional or global simulations. The models also varied in their methods for calculating wetland size and location, with some simulating wetland area prognostically while others relied on remotely sensed inundation datasets, or took an approach intermediate between the two. Four major conclusions emerged from the project. First, the suite of models demonstrates extensive disagreement in simulated wetland areal extent and CH4 emissions, in both space and time. Simple metrics of wetland area, such as the latitudinal gradient, show large variability, principally between models that use inundation dataset information and those that independently determine wetland area. Agreement between the models improves for zonally summed CH4 emissions, but large variation between the models remains. For annual global CH4 emissions, the models vary by ±40% of the all-model mean (190 Tg CH4 yr−1). Second, all models show a strong positive response to increased atmospheric CO2 concentrations (857 ppm) in both CH4 emissions and wetland area.
In response to increasing global temperatures (+3.4 °C, globally spatially uniform), on average, the models decreased wetland area and CH4 fluxes, primarily in the tropics, but the magnitude and sign of the response varied greatly. Models were least sensitive to increased global precipitation (+3.9 %, globally spatially uniform), with a consistent small positive response in CH4 fluxes and wetland area. Results from the 20th century transient simulation show that interactions between climate forcings could have strong non-linear effects. Third, we presently lack wetland methane observation datasets adequate for evaluating model fluxes at a spatial scale comparable to model grid cells (commonly 0.5°). This limitation severely restricts our ability to model global wetland CH4 emissions with confidence. Our simulated wetland extents are also difficult to evaluate due to extensive disagreements between wetland mapping and remotely sensed inundation datasets. Fourth, the large range in predicted CH4 emission rates leads to the conclusion that there is both substantial parameter and structural uncertainty in large-scale CH4 emission models, even after uncertainties in wetland areas are accounted for.
Abstract:
Decadal hindcast simulations of Arctic Ocean sea ice thickness made by a modern dynamic-thermodynamic sea ice model and forced independently by both the ERA-40 and NCEP/NCAR reanalysis data sets are compared for the first time. Using comprehensive data sets of observations made between 1979 and 2001 of sea ice thickness, draft, extent, and speeds, we find that it is possible to tune model parameters to give satisfactory agreement with observed data, thereby highlighting the skill of modern sea ice models, though the parameter values chosen differ according to the model forcing used. We find a consistent decreasing trend in Arctic Ocean sea ice thickness since 1979, and a steady decline in the Eastern Arctic Ocean over the full 40-year period of comparison that accelerated after 1980, but the predictions of Western Arctic Ocean sea ice thickness between 1962 and 1980 differ substantially. The origins of differing thickness trends and variability were traced not to parameter differences but to differences in the forcing fields and in how they are applied. It is argued that uncertainty, differences and errors in sea ice model forcing sets complicate the use of models to determine the exact causes of the recently reported decline in Arctic sea ice thickness, but help in the determination of robust features if the models are tuned appropriately against observations.
Abstract:
A stand-alone sea ice model is tuned and validated using satellite-derived, basinwide observations of sea ice thickness, extent, and velocity from the years 1993 to 2001. This is the first time that basin-scale measurements of sea ice thickness have been used for this purpose. The model is based on the CICE sea ice model code developed at the Los Alamos National Laboratory, with some minor modifications, and forcing consists of 40-yr ECMWF Re-Analysis (ERA-40) and Polar Exchange at the Sea Surface (POLES) data. Three parameters are varied in the tuning process: Ca, the air–ice drag coefficient; P*, the ice strength parameter; and α, the broadband albedo of cold bare ice, with the aim being to determine the subset of this three-dimensional parameter space that gives the best simultaneous agreement with observations with this forcing set. It is found that observations of sea ice extent and velocity alone are not sufficient to unambiguously tune the model, and that sea ice thickness measurements are necessary to locate a unique subset of parameter space in which simultaneous agreement is achieved with all three observational datasets.
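The tuning logic, searching a three-dimensional parameter space for the subset that fits all observation types at once, can be sketched with an exhaustive grid search. The forward model below is a deliberately degenerate stand-in (an illustrative assumption, not sea-ice physics) built so that extent and velocity alone leave the parameters ambiguous while thickness resolves them, mirroring the paper's conclusion:

```python
import itertools

def tune(grids, forward, obs, fields):
    """Exhaustive search of the (Ca, P*, albedo) parameter grid for the
    combination minimising the summed squared misfit over the chosen
    observation fields."""
    best, best_cost = None, float("inf")
    for combo in itertools.product(*grids):
        sim = forward(*combo)
        cost = sum((sim[f] - obs[f]) ** 2 for f in fields)
        if cost < best_cost:
            best, best_cost = combo, cost
    return best, best_cost

def toy_forward(ca, pstar, albedo):
    """Stand-in forward model: extent and velocity respond to Ca and P* in a
    compensating way, while thickness breaks the degeneracy."""
    return {"extent": 10.0 - 2.0 * ca + pstar,
            "velocity": 1.0 + ca - 0.5 * pstar,
            "thickness": 2.0 + 3.0 * ca - pstar + albedo}
```

With all three fields the search recovers a unique optimum; with extent and velocity only, several grid points fit equally well, which is the ambiguity that thickness observations remove.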
Abstract:
A novel analytical model for mixed-phase, unblocked and unseeded orographic precipitation with embedded convection is developed and evaluated. The model takes an idealised background flow and terrain geometry, and calculates the area-averaged precipitation rate and other microphysical quantities. The results provide insight into key physical processes, including cloud condensation, vapour deposition, evaporation, sublimation, as well as precipitation formation and sedimentation (fallout). To account for embedded convection in nominally stratiform clouds, diagnostics for purely convective and purely stratiform clouds are calculated independently and combined using weighting functions based on relevant dynamical and microphysical time scales. An in-depth description of the model is presented, as well as a quantitative assessment of its performance against idealised, convection-permitting numerical simulations with a sophisticated microphysics parameterisation. The model is found to accurately reproduce the simulation diagnostics over most of the parameter space considered.
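The combination step, blending independently computed convective and stratiform diagnostics with a time-scale-based weight, can be sketched as follows; the specific weighting function is a generic stand-in for the paper's, chosen only to illustrate the limiting behaviour:

```python
def blended_precip(p_conv, p_strat, tau_conv, tau_strat):
    """Combine purely convective and purely stratiform precipitation
    diagnostics with a weight set by the ratio of the two relevant time
    scales: fast convection (small tau_conv) pushes the weight toward the
    convective diagnostic, and vice versa."""
    w = tau_strat / (tau_strat + tau_conv)
    return w * p_conv + (1.0 - w) * p_strat
```

In the limits the blend reduces to the purely convective or purely stratiform diagnostic, with a smooth transition in between.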
Abstract:
The semi-distributed, dynamic INCA-N model was used to simulate the behaviour of dissolved inorganic nitrogen (DIN) in two Finnish research catchments. Parameter sensitivity and model structural uncertainty were analysed using generalized sensitivity analysis. The Mustajoki catchment is a forested upstream catchment, while the Savijoki catchment represents intensively cultivated lowlands. In general, there were more influential parameters in Savijoki than in Mustajoki. Model results were sensitive to N-transformation rates, vegetation dynamics, and soil and river hydrology. Values of the sensitive parameters were based on long-term measurements covering both warm and cold years. The highest measured DIN concentrations fell between the minimum and maximum values estimated during the uncertainty analysis. The lowest measured concentrations fell outside these bounds, suggesting that some retention processes may be missing from the current model structure. The lowest concentrations occurred mainly during low-flow periods, so effects on total loads were small.
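Generalized (regional) sensitivity analysis of the kind used here can be sketched as a Monte Carlo split into behavioural and non-behavioural runs, comparing each parameter's marginal distributions across the two groups. The toy model, in which the output depends strongly on one parameter and barely on the other, and the behavioural criterion are illustrative assumptions:

```python
import random

def regional_sensitivity(n=2000, seed=0):
    """Regional sensitivity analysis sketch: sample parameters, classify
    each run as behavioural or not, and score each parameter by the
    Kolmogorov-Smirnov distance between its behavioural and
    non-behavioural marginal distributions."""
    rng = random.Random(seed)
    samples, behavioural = [], []
    for _ in range(n):
        p = {"rate": rng.random(), "inert": rng.random()}
        out = 2.0 * p["rate"] + 0.01 * p["inert"]   # output ignores 'inert'
        samples.append(p)
        behavioural.append(abs(out - 1.0) < 0.2)    # behavioural criterion

    def cdf(xs, x):
        return sum(1 for v in xs if v <= x) / len(xs)

    ks = {}
    for name in ("rate", "inert"):
        b = [s[name] for s, ok in zip(samples, behavioural) if ok]
        nb = [s[name] for s, ok in zip(samples, behavioural) if not ok]
        grid = [i / 100.0 for i in range(101)]
        ks[name] = max(abs(cdf(b, g) - cdf(nb, g)) for g in grid)
    return ks
```

A large KS distance flags a parameter as influential (the behavioural runs occupy a distinct part of its range); a small distance flags it as insensitive, which is how the influential INCA-N parameters were identified.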
Abstract:
The role of air–sea coupling in the simulation of the Madden–Julian oscillation (MJO) is explored using two configurations of the Hadley Centre atmospheric model (AGCM), GA3.0, which differ only in F, a parameter controlling convective entrainment and detrainment. Increasing F considerably improves deficient MJO-like variability in the Indian and Pacific Oceans, but variability in and propagation through the Maritime Continent remains weak. By coupling GA3.0 in the tropical Indo-Pacific to a boundary-layer ocean model, KPP, and employing climatological temperature corrections, well resolved air–sea interactions are simulated with limited alterations to the mean state. At default F, when GA3.0 has a poor MJO, coupling produces a stronger MJO with some eastward propagation, although both aspects remain deficient. These results agree with previous sensitivity studies using AGCMs with poor variability. At higher F, coupling does not affect MJO amplitude but enhances propagation through the Maritime Continent, resulting in an MJO that resembles observations. A sensitivity experiment with coupling in only the Indian Ocean reverses these improvements, suggesting coupling in the Maritime Continent and West Pacific is critical for propagation. We hypothesise that for AGCMs with a poor MJO, coupling provides a “crutch” to artificially augment MJO-like activity through high-frequency SST anomalies. In related experiments, we employ the KPP framework to analyse the impact of air–sea interactions in the fully coupled GA3.0, which at default F shows a similar MJO to uncoupled GA3.0. This is due to compensating effects: an improvement from coupling and a degradation from mean-state errors. Future studies on the role of coupling should carefully separate these effects.
Abstract:
In this paper ensembles of forecasts (of up to six hours) are studied from a convection-permitting model with a representation of model error due to unresolved processes. The ensemble prediction system (EPS) used is an experimental convection-permitting version of the UK Met Office's 24-member Global and Regional Ensemble Prediction System (MOGREPS). The method of representing model error variability, which perturbs parameters within the model's parameterisation schemes, has been modified, and we investigate the impact of applying this scheme in different ways. These are: a control ensemble where all ensemble members have the same parameter values; an ensemble where the parameters differ between members but are fixed in time; and ensembles where the parameters are updated randomly every 30 or 60 min. The choice of parameters and their ranges of variability have been determined from expert opinion and parameter sensitivity tests. A case of frontal rain over the southern UK has been chosen, which has a multi-banded rainfall structure. The consequences of including model error variability in the case studied are mixed and are summarised as follows. The multiple banding, evident in the radar, is not captured by any single member. However, the single band is positioned in some members where a secondary band is present in the radar. This is found for all ensembles studied. Adding model error variability with parameters fixed in time does increase the ensemble spread for near-surface variables like wind and temperature, but can actually decrease the spread of the rainfall. Perturbing the parameters periodically throughout the forecast does not further increase the spread and exhibits "jumpiness" in the spread at times when the parameters are perturbed. Adding model error variability gives an improvement in forecast skill after the first 2–3 h of the forecast for near-surface temperature and relative humidity.
For precipitation skill scores, adding model error variability has the effect of improving the skill in the first 1–2 h of the forecast, but then of reducing the skill after that. Complementary experiments were performed where the only difference between members was the set of parameter values (i.e. no initial condition variability). The resulting spread was found to be significantly less than the spread from initial condition variability alone.
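The three configurations compared, identical parameters, per-member fixed parameters, and periodically redrawn parameters, can be sketched as a parameter-schedule generator; the parameter name and range are illustrative, not the MOGREPS perturbation set:

```python
import random

def perturbed_parameters(ranges, n_members, update_every=None, n_steps=360,
                         seed=0):
    """Per-member parameter values drawn within expert-specified ranges,
    either fixed for the whole forecast (update_every=None) or redrawn
    every update_every steps, mirroring the fixed and periodically
    perturbed ensemble configurations. Returns one parameter time series
    per member."""
    rng = random.Random(seed)

    def draw():
        return {k: rng.uniform(lo, hi) for k, (lo, hi) in ranges.items()}

    series = []
    for _ in range(n_members):
        current = draw()
        traj = []
        for t in range(n_steps):
            if update_every and t > 0 and t % update_every == 0:
                current = draw()          # periodic stochastic update
            traj.append(dict(current))
        series.append(traj)
    return series
```

The control ensemble corresponds to every member sharing one draw; the fixed configuration gives each member its own constant draw; the periodic configuration redraws within the ranges during the forecast, which is what introduces the "jumpiness" in spread at update times.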
Abstract:
Cholesterol is one of the key constituents for maintaining the cellular membrane and thus the integrity of the cell itself. In contrast, high levels of cholesterol in the blood are known to be a major risk factor in the development of cardiovascular disease. We formulate a deterministic nonlinear ordinary differential equation model of the sterol regulatory element binding protein 2 (SREBP-2) cholesterol genetic regulatory pathway in a hepatocyte. The mathematical model includes a description of gene transcription by SREBP-2, with the resulting mRNA translated to form 3-hydroxy-3-methylglutaryl coenzyme A reductase (HMGCR), a main precursor of cholesterol synthesis. Cholesterol synthesis subsequently leads to the regulation of SREBP-2 via a negative feedback formulation. Parameterised with data from the literature, the model is used to understand how SREBP-2 transcription and regulation affect cellular cholesterol concentration. Model stability analysis shows that the only positive steady state of the system exhibits purely oscillatory, damped oscillatory or monotonic behaviour under certain parameter conditions. In light of our findings we postulate how cholesterol homeostasis is maintained within the cell, and the advantages of our model formulation are discussed with respect to other models of genetic regulation in the literature.
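The negative-feedback loop (SREBP-2 drives HMGCR expression, HMGCR drives cholesterol synthesis, cholesterol represses SREBP-2) can be sketched as a three-variable ODE system; the rate constants are illustrative assumptions, not the paper's fitted values:

```python
def srebp_cholesterol(t_end=200.0, dt=0.01):
    """Minimal negative-feedback sketch of the pathway: active SREBP-2 (s)
    drives HMGCR mRNA/protein (m), which drives cholesterol synthesis (c),
    and cholesterol represses SREBP-2 activation via a Hill-type term.
    Returns the cholesterol trajectory under Euler integration."""
    s, m, c = 1.0, 0.0, 0.0
    traj = []
    for _ in range(int(t_end / dt)):
        ds = 1.0 / (1.0 + (c / 0.5) ** 2) - 0.2 * s   # repression by cholesterol
        dm = 0.5 * s - 0.3 * m                        # expression and decay
        dc = 0.4 * m - 0.1 * c                        # synthesis and turnover
        s += dt * ds
        m += dt * dm
        c += dt * dc
        traj.append(c)
    return traj
```

With these constants the feedback is too weak for sustained oscillation, so the trajectory settles onto the single positive steady state, the damped/monotonic regime identified in the stability analysis; sharper feedback or different rates would shift the behaviour.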
Abstract:
In order to examine metacognitive accuracy (i.e., the relationship between metacognitive judgments and memory performance), researchers often rely on by-participant analysis, in which metacognitive accuracy (e.g., resolution, as measured by the gamma coefficient or signal detection measures) is computed for each participant and the computed values are entered into group-level statistical tests such as the t-test. In the current work, we argue that by-participant analysis, regardless of the accuracy measure used, produces a substantial inflation of Type-1 error rates when a random item effect is present. A mixed-effects model is proposed as a way to effectively address the issue, and our simulation studies examining Type-1 error rates indeed showed superior performance of mixed-effects model analysis compared to the conventional by-participant analysis. We also present applications to real data to illustrate further strengths of mixed-effects model analysis. Our findings imply that caution is needed when using by-participant analysis, and we recommend mixed-effects model analysis instead.
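The inflation mechanism can be sketched with a small simulation: when judgments and performance share a random item effect, every participant's judgment–performance correlation is pushed positive, so the by-participant one-sample t-test rejects far above its nominal level even though there is no participant-level metacognitive accuracy. The effect sizes are illustrative assumptions:

```python
import random

def pearson(x, y):
    """Sample Pearson correlation of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return sum((a - mx) * (b - my) for a, b in zip(x, y)) / (sx * sy)

def type1_rate_by_participant(n_sim=300, n_subj=20, n_items=30,
                              item_sd=1.0, noise_sd=1.0, seed=0):
    """Fraction of simulated experiments in which the by-participant
    one-sample t-test on per-participant correlations rejects at the
    two-sided 5% level, when both judgments and performance load on a
    shared random item effect."""
    rng = random.Random(seed)
    rejections = 0
    for _ in range(n_sim):
        items = [rng.gauss(0, item_sd) for _ in range(n_items)]  # item effects
        rs = []
        for _ in range(n_subj):
            judgment = [e + rng.gauss(0, noise_sd) for e in items]
            performance = [e + rng.gauss(0, noise_sd) for e in items]
            rs.append(pearson(judgment, performance))
        mean = sum(rs) / n_subj
        sd = (sum((r - mean) ** 2 for r in rs) / (n_subj - 1)) ** 0.5
        t = mean / (sd / n_subj ** 0.5)
        if abs(t) > 2.093:        # two-sided 5% critical value, df = 19
            rejections += 1
    return rejections / n_sim
```

Removing the item effect (item_sd=0) brings the rejection rate back toward the nominal 5%, which is the contrast a mixed-effects model with a random item term is designed to capture correctly.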