821 results for Reward based model
Abstract:
In this paper, a new model-based proportional–integral–derivative (PID) tuning and controller approach is introduced for Hammerstein systems that are identified on the basis of the observational input/output data. The nonlinear static function in the Hammerstein system is modelled using a B-spline neural network. The control signal is composed of a PID controller, together with a correction term. Both the parameters in the PID controller and the correction term are optimized on the basis of minimizing the multistep ahead prediction errors. In order to update the control signal, the multistep ahead predictions of the Hammerstein system based on B-spline neural networks and the associated Jacobian matrix are calculated using the de Boor algorithms, including both the functional and derivative recursions. Numerical examples are utilized to demonstrate the efficacy of the proposed approaches.
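As an illustration of the functional recursion mentioned above (the Cox–de Boor recursion for B-spline basis functions), a minimal Python sketch follows; the knot vector and coefficients are assumed for the example and are not taken from the paper.

```python
import numpy as np

def bspline_basis(i, k, t, knots):
    """Cox-de Boor recursion: value of the i-th B-spline basis of order k at t."""
    if k == 1:
        return 1.0 if knots[i] <= t < knots[i + 1] else 0.0
    left_den = knots[i + k - 1] - knots[i]
    right_den = knots[i + k] - knots[i + 1]
    left = 0.0 if left_den == 0 else (t - knots[i]) / left_den * bspline_basis(i, k - 1, t, knots)
    right = 0.0 if right_den == 0 else (knots[i + k] - t) / right_den * bspline_basis(i + 1, k - 1, t, knots)
    return left + right

# Example: model a static nonlinearity as a weighted sum of cubic (order-4) basis functions.
knots = np.array([0, 0, 0, 0, 0.5, 1, 1, 1, 1], dtype=float)
weights = np.array([0.0, 0.2, 0.8, 0.9, 1.0])   # hypothetical B-spline coefficients
x = 0.3
y = sum(w * bspline_basis(i, 4, x, knots) for i, w in enumerate(weights))
print(y)
```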
Abstract:
The extent and thickness of the Arctic sea ice cover has decreased dramatically in the past few decades, with minima in sea ice extent in September 2005 and 2007. These minima were not predicted in the IPCC AR4 report, suggesting that the sea ice component of climate models should more realistically represent the processes controlling the sea ice mass balance. One of the processes poorly represented in sea ice models is the formation and evolution of melt ponds. Melt ponds accumulate on the surface of sea ice from snow and sea ice melt, and their presence reduces the albedo of the ice cover, leading to further melt. Toward the end of the melt season, melt ponds cover up to 50% of the sea ice surface. We have developed a melt pond evolution theory. Here, we have incorporated this melt pond theory into the Los Alamos CICE sea ice model, which has required us to include the refreezing of melt ponds. We present results showing that the presence, or otherwise, of a representation of melt ponds has a significant effect on the predicted sea ice thickness and extent. We also present a sensitivity study to uncertainty in the sea ice permeability, the number of thickness categories in the model representation, the meltwater redistribution scheme, and the pond albedo. We conclude with a recommendation that our melt pond scheme be included in sea ice models, and that the number of thickness categories be increased and concentrated at lower thicknesses.
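As a purely illustrative sketch of the albedo feedback described above (not the CICE melt pond scheme itself), the surface albedo of partially ponded ice can be approximated as an area-weighted blend of bare-ice and pond albedos; the values below are assumptions:

```python
def surface_albedo(pond_fraction, ice_albedo=0.65, pond_albedo=0.25):
    """Area-weighted albedo of a partially ponded sea-ice surface (illustrative values)."""
    return (1.0 - pond_fraction) * ice_albedo + pond_fraction * pond_albedo

# Toward the end of the melt season ponds may cover ~50% of the ice surface,
# which in this toy blend cuts the albedo from 0.65 to 0.45 and so enhances melt.
print(surface_albedo(0.0), surface_albedo(0.5))
```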
Abstract:
In Part I of this study it was shown that moving from a moisture-convergence-dependent to a relative-humidity-dependent organized entrainment rate in the formulation for deep convection was responsible for significant advances in the simulation of the Madden–Julian Oscillation (MJO) in the ECMWF model. However, the application of traditional MJO diagnostics was not adequate to understand why changing the control on convection had such a pronounced impact on the representation of the MJO. In this study a set of process-based diagnostics is applied to the hindcast experiments described in Part I to identify the physical mechanisms responsible for the advances in MJO simulation. Increasing the sensitivity of the deep convection scheme to environmental moisture is shown to modify the relationship between precipitation and moisture in the model. Through dry-air entrainment, convective plumes ascending in low-humidity environments terminate lower in the atmosphere. As a result, there is an increase in the occurrence of cumulus congestus, which acts to moisten the mid troposphere. Due to the modified precipitation–moisture relationship, more moisture is able to build up, which effectively preconditions the tropical atmosphere for the transition to deep convection. Results from this study suggest that a tropospheric moisture control on convection is key to simulating the interaction between the convective heating and the large-scale wave forcing associated with the MJO.
Abstract:
Brain activity can be measured with several non-invasive neuroimaging modalities, but each modality has inherent limitations with respect to resolution, contrast and interpretability. It is hoped that multimodal integration will address these limitations by using the complementary features of already available data. However, purely statistical integration can prove problematic owing to the disparate signal sources. As an alternative, we propose here an advanced neural population model implemented on an anatomically sound cortical mesh with freely adjustable connectivity, which features proper signal expression through a realistic head model for the electroencephalogram (EEG), as well as a haemodynamic model for functional magnetic resonance imaging based on blood oxygen level dependent contrast (fMRI BOLD). It hence allows simultaneous and realistic predictions of EEG and fMRI BOLD from the same underlying model of neural activity. As proof of principle, we investigate here the influence on simulated brain activity of strengthening visual connectivity. In the future we plan to fit multimodal data with this neural population model. This promises novel, model-based insights into the brain's activity in sleep, rest and task conditions.
Abstract:
We have developed a model of the local field potential (LFP) based on the conservation of charge, the independence principle of ionic flows and the classical Hodgkin–Huxley (HH) type intracellular model of synaptic activity. Insights were gained through the simulation of the HH intracellular model on the nonlinear relationship between the balance of synaptic conductances and that of post-synaptic currents. The latter is dependent not only on the former, but also on the temporal lag between the excitatory and inhibitory conductances, as well as the strength of the afferent signal. The proposed LFP model provides a method for decomposing the LFP recordings near the soma of layer IV pyramidal neurons in the barrel cortex of anaesthetised rats into two highly correlated components with opposite polarity. The temporal dynamics and the proportional balance of the two components are comparable to the excitatory and inhibitory post-synaptic currents computed from the HH model. This suggests that the two components of the LFP reflect the underlying excitatory and inhibitory post-synaptic currents of the local neural population. We further used the model to decompose a sequence of evoked LFP responses under repetitive electrical stimulation (5 Hz) of the whisker pad. We found that as neural responses adapted, the excitatory and inhibitory components also adapted proportionately, while the temporal lag between the onsets of the two components increased during frequency adaptation. Our results demonstrated that the balance between neural excitation and inhibition can be investigated using extracellular recordings. Extension of the model to incorporate multiple compartments should allow more quantitative interpretations of surface Electroencephalography (EEG) recordings into components reflecting the excitatory, inhibitory and passive ionic current flows generated by local neural populations.
Abstract:
It is well known that there is a dynamic relationship between cerebral blood flow (CBF) and cerebral blood volume (CBV). With increasing applications of functional MRI, where the blood oxygen-level-dependent signals are recorded, the understanding and accurate modeling of the hemodynamic relationship between CBF and CBV becomes increasingly important. This study presents an empirical and data-based modeling framework for model identification from CBF and CBV experimental data. It is shown that the relationship between the changes in CBF and CBV can be described using a parsimonious autoregressive with exogenous input model structure. It is observed that neither the ordinary least-squares (LS) method nor the classical total least-squares (TLS) method can produce accurate estimates from the original noisy CBF and CBV data. A regularized total least-squares (RTLS) method is thus introduced and extended to solve such an error-in-the-variables problem. Quantitative results show that the RTLS method works very well on the noisy CBF and CBV data. Finally, a combination of RTLS with a filtering method can lead to a parsimonious but very effective model that can characterize the relationship between the changes in CBF and CBV.
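The abstract contrasts ordinary least squares with (regularized) total least squares for this errors-in-variables problem. A minimal sketch of plain OLS versus classical TLS via the SVD on synthetic data follows; the slope, noise levels, and data are invented for illustration, and the regularization step is omitted:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
x_true = rng.normal(size=n)              # "true" regressor (e.g., change in CBF)
y_true = 0.38 * x_true                   # "true" response (e.g., change in CBV), slope assumed
x = x_true + 0.2 * rng.normal(size=n)    # noisy measurements of both variables
y = y_true + 0.2 * rng.normal(size=n)

# Ordinary least squares attributes all noise to y, so the slope is biased toward zero.
slope_ols = (x @ y) / (x @ x)

# Classical total least squares via the SVD of the stacked data matrix [x y]:
# the solution is read off the right singular vector of the smallest singular value.
_, _, vt = np.linalg.svd(np.column_stack([x, y]), full_matrices=False)
v = vt[-1]                               # singular vector for the smallest singular value
slope_tls = -v[0] / v[1]

print(f"OLS slope: {slope_ols:.3f}")
print(f"TLS slope: {slope_tls:.3f}")
```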
Abstract:
Radiometric data in the visible domain acquired by satellite remote sensing have proven to be powerful for monitoring the states of the ocean, both physical and biological. With the help of these data it is possible to understand certain variations in biological responses of marine phytoplankton on ecological time scales. Here, we implement a sequential data-assimilation technique to estimate from a conventional nutrient–phytoplankton–zooplankton (NPZ) model the time variations of observed and unobserved variables. In addition, we estimate the time evolution of two biological parameters, namely, the specific growth rate and specific mortality of phytoplankton. Our study demonstrates that: (i) the series of time-varying estimates of specific growth rate obtained by sequential data assimilation improves the fitting of the NPZ model to the satellite-derived time series: the model trajectories are closer to the observations than those obtained by implementing static values of the parameter; (ii) the estimates of unobserved variables, i.e., nutrient and zooplankton, obtained from an NPZ model by implementation of a pre-defined parameter evolution can be different from those obtained on applying the sequences of parameters estimated by assimilation; and (iii) the maximum estimated specific growth rate of phytoplankton in the study area is more sensitive to the sea-surface temperature than would be predicted by temperature-dependent functions reported previously. The overall results of the study are potentially useful for enhancing our understanding of the biological response of phytoplankton in a changing environment.
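The paper's specific sequential data-assimilation technique is not spelled out in the abstract; as one hedged illustration of the general idea of jointly updating unobserved variables (nutrient, zooplankton) and a time-varying parameter (specific growth rate) from an observed variable (phytoplankton), a single stochastic ensemble Kalman filter analysis step might look like the following, with all numbers assumed:

```python
import numpy as np

rng = np.random.default_rng(1)

def enkf_analysis(ensemble, obs, obs_var, H):
    """One stochastic EnKF analysis step on an augmented [state, parameter] ensemble.

    ensemble : (n_vars, n_members) forecast ensemble
    obs      : scalar observation (here: satellite-derived phytoplankton)
    obs_var  : observation-error variance
    H        : (1, n_vars) observation operator selecting the observed variable
    """
    n_vars, n_members = ensemble.shape
    mean = ensemble.mean(axis=1, keepdims=True)
    anomalies = ensemble - mean
    P = anomalies @ anomalies.T / (n_members - 1)        # ensemble covariance
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + obs_var)   # Kalman gain
    perturbed_obs = obs + np.sqrt(obs_var) * rng.normal(size=n_members)
    innovations = perturbed_obs[None, :] - H @ ensemble
    return ensemble + K @ innovations

# Augmented vector: [N, P, Z, growth_rate]; only P is observed (H picks it out).
ensemble = np.vstack([
    rng.normal(2.0, 0.3, 200),    # nutrient
    rng.normal(0.5, 0.1, 200),    # phytoplankton
    rng.normal(0.3, 0.05, 200),   # zooplankton
    rng.normal(0.8, 0.2, 200),    # specific growth rate (parameter being estimated)
])
H = np.array([[0.0, 1.0, 0.0, 0.0]])
updated = enkf_analysis(ensemble, obs=0.65, obs_var=0.01**2, H=H)
print(updated.mean(axis=1))
```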
Abstract:
In their contribution to PNAS, Penner et al. (1) used a climate model to estimate the radiative forcing by the aerosol first indirect effect (cloud albedo effect) in two different ways: first, by deriving a statistical relationship between the logarithm of cloud droplet number concentration, ln Nc, and the logarithm of aerosol optical depth, ln AOD (or the logarithm of the aerosol index, ln AI) for present-day and preindustrial aerosol fields, a method that was applied earlier to satellite data (2), and, second, by computing the radiative flux perturbation between two simulations with and without anthropogenic aerosol sources. They find a radiative forcing that is a factor of 3 lower in the former approach than in the latter [as Penner et al. (1) correctly noted, only their “inline” results are useful for the comparison]. This study is a very interesting contribution, but we believe it deserves several clarifications.
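The first approach described above amounts to estimating the slope of ln Nc against ln AOD and applying it to the present-day minus preindustrial aerosol change. A minimal sketch of that slope estimation on synthetic data (the exponent and noise are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(2)
aod = rng.lognormal(mean=-2.0, sigma=0.5, size=1000)         # synthetic aerosol optical depth
nc = 80.0 * aod**0.45 * rng.lognormal(sigma=0.2, size=1000)  # synthetic droplet number, slope 0.45 assumed

# Slope of ln Nc versus ln AOD; applying this slope to present-day minus preindustrial
# aerosol fields gives one estimate of the first indirect (cloud albedo) effect.
slope, intercept = np.polyfit(np.log(aod), np.log(nc), 1)
print(slope)
```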
Abstract:
The incorporation of numerical weather predictions (NWP) into a flood forecasting system can increase forecast lead times from a few hours to a few days. A single NWP forecast from a single forecast centre, however, is insufficient, as it involves considerable non-predictable uncertainties and leads to a high number of false alarms. The availability of global ensemble numerical weather prediction systems through the THORPEX Interactive Grand Global Ensemble (TIGGE) offers a new opportunity for flood forecasting. The Grid-Xinanjiang distributed hydrological model, which is based on the Xinanjiang model theory and the topographical information of each grid cell extracted from the Digital Elevation Model (DEM), is coupled with ensemble weather predictions based on the TIGGE database (CMC, CMA, ECMWF, UKMO, NCEP) for flood forecasting. This paper presents a case study using the coupled flood forecasting model on the Xixian catchment (a drainage area of 8826 km2) located in Henan province, China. A probabilistic discharge is provided as the end product of the flood forecast. Results show that the combination of the Grid-Xinanjiang model and the TIGGE database provides a promising tool for early warning of flood events several days ahead.
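One common way to present a probabilistic discharge product is the fraction of ensemble members whose simulated discharge exceeds a warning threshold; a minimal sketch with invented values:

```python
import numpy as np

# Forecast peak discharge (m^3/s) from hydrological runs driven by different
# TIGGE ensemble members (values invented for illustration).
ensemble_discharge = np.array([620., 710., 540., 880., 950., 470., 1020., 760., 690., 830.])
warning_threshold = 800.0

exceedance_prob = np.mean(ensemble_discharge > warning_threshold)
print(f"Probability of exceeding {warning_threshold:.0f} m^3/s: {exceedance_prob:.0%}")
```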
Abstract:
14C-dated pollen and lake-level data from Europe are used to assess the spatial patterns of climate change between 6000 yr BP and present, as simulated by the NCAR CCM1 (National Center for Atmospheric Research, Community Climate Model, version 1) in response to the change in the Earth’s orbital parameters during this period. First, reconstructed 6000 yr BP values of bioclimate variables obtained from pollen and lake-level data with the constrained-analogue technique are compared with simulated values. Then a 6000 yr BP biome map obtained from pollen data with an objective biome reconstruction (biomization) technique is compared with BIOME model results derived from the same simulation. Data and simulations agree in some features: warmer-than-present growing seasons in N and C Europe allowed forests to extend further north and to higher elevations than today, and warmer winters in C and E Europe prevented boreal conifers from spreading west. More generally, however, the agreement is poor. Predominantly deciduous forest types in Fennoscandia imply warmer winters than the model allows. The model fails to simulate winters cold enough, or summers wet enough, to allow temperate deciduous forests their former extended distribution in S Europe, and it incorrectly simulates a much expanded area of steppe vegetation in SE Europe. Similar errors have also been noted in numerous 6000 yr BP simulations with prescribed modern sea surface temperatures. These errors are evidently not resolved by the inclusion of interactive sea-surface conditions in the CCM1. Accurate representation of mid-Holocene climates in Europe may require the inclusion of dynamical ocean–atmosphere and/or vegetation–atmosphere interactions that most palaeoclimate model simulations have so far disregarded.
Abstract:
The MATLAB model is contained within the compressed folders (versions are available as .zip and .tgz). This model uses MERRA reanalysis data (>34 years available) to estimate the hourly aggregated wind power generation for a predefined (fixed) distribution of wind farms. A ready-made example is included for the wind farm distribution of Great Britain, April 2014 ("CF.dat"). This consists of an hourly time series of GB-total capacity factor spanning the period 1980-2013 inclusive. Given the global nature of reanalysis data, the model can be applied to any specified distribution of wind farms in any region of the world. Users are, however, strongly advised to bear in mind the limitations of reanalysis data when using this model/data. This is discussed in our paper: Cannon, Brayshaw, Methven, Coker, Lenaghan. "Using reanalysis data to quantify extreme wind power generation statistics: a 33 year case study in Great Britain". Submitted to Renewable Energy in March 2014. Additional information about the model is contained in the model code itself, in the accompanying ReadMe file, and on our website: http://www.met.reading.ac.uk/~energymet/data/Cannon2014/
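The repository model itself is written in MATLAB and is not reproduced here; the Python sketch below only illustrates the general idea of converting reanalysis hub-height wind speeds into an hourly capacity-factor series via an idealized power curve (the cut-in, rated, and cut-out speeds are assumptions, not the model's values):

```python
import numpy as np

def capacity_factor(wind_speed, cut_in=3.0, rated=12.0, cut_out=25.0):
    """Idealized turbine power curve: fraction of rated power at a given hub-height wind speed."""
    cf = np.clip((wind_speed**3 - cut_in**3) / (rated**3 - cut_in**3), 0.0, 1.0)
    cf[(wind_speed < cut_in) | (wind_speed > cut_out)] = 0.0
    return cf

# Hourly hub-height wind speeds (m/s) interpolated from reanalysis to the farm locations
# would go here; random values stand in for them purely for illustration.
wind = np.random.default_rng(3).weibull(2.0, size=24) * 8.0
hourly_cf = capacity_factor(wind)
print(hourly_cf.mean())   # aggregate capacity factor over the 24 hours
```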
Abstract:
In many lower-income countries, the establishment of marine protected areas (MPAs) involves significant opportunity costs for artisanal fishers, reflected in changes in how they allocate their labor in response to the MPA. The resource economics literature rarely addresses such labor allocation decisions of artisanal fishers and how, in turn, these contribute to the impact of MPAs on fish stocks, yield, and income. This paper develops a spatial bio-economic model of a fishery adjacent to a village of people who allocate their labor between fishing and on-shore wage opportunities, establishing a spatial Nash equilibrium at a steady-state fish stock in response to various locations for no-take-zone MPAs and managed-access MPAs. Villagers’ fishing location decisions are based on distance costs, fishing returns, and wages. Here, the MPA location determines its impact on fish stocks, fish yield, and villager income through distance costs, congestion, and fish dispersal. Incorporating wage labor opportunities into the framework allows examination of the MPA’s impact on rural incomes, with results showing that win-wins between yield and stocks occur in very different MPA locations than do win-wins between income and stocks. Similarly, villagers in a high-wage setting face a lower burden from MPAs than do those in low-wage settings. Motivated by issues of central importance in Tanzania and Costa Rica, we impose various policies on this fishery – location-specific no-take zones, increasing on-shore wages, and restricting MPA access to a subset of villagers – to analyze the impact of an MPA on fish stocks and rural incomes in such settings.
Abstract:
Heavy precipitation affected Central Europe in May/June 2013, triggering damaging floods both on the Danube and the Elbe rivers. Based on a modelling approach with COSMO-CLM, moisture fluxes, backward trajectories, cyclone tracks and precipitation fields are evaluated for the relevant time period 30 May–2 June 2013. We identify potential moisture sources and quantify their contribution to the flood event focusing on the Danube basin through sensitivity experiments: Control simulations are performed with undisturbed ERA-Interim boundary conditions, while multiple sensitivity experiments are driven with modified evaporation characteristics over selected marine and land areas. Two relevant cyclones are identified both in reanalysis and in our simulations, which moved counter-clockwise in a retrograde path from Southeastern Europe over Eastern Europe towards the northern slopes of the Alps. The control simulations represent the synoptic evolution of the event reasonably well. The evolution of the precipitation event in the control simulations shows some differences in terms of its spatial and temporal characteristics compared to observations. The main precipitation event can be separated into two phases concerning the moisture sources. Our modelling results provide evidence that the two main sources contributing to the event were the continental evapotranspiration (moisture recycling; both phases) and the North Atlantic Ocean (first phase only). The Mediterranean Sea played only a minor role as a moisture source. This study confirms the importance of continental moisture recycling for heavy precipitation events over Central Europe during the summer half year.
Abstract:
Trust and reputation are important factors that influence the success of both traditional transactions in physical social networks and modern e-commerce in virtual Internet environments. It is difficult to define the concept of trust and quantify it because trust has both subjective and objective characteristics at the same time. A well-reported issue with reputation management systems in business-to-consumer (BtoC) e-commerce is the “all good reputation” problem. To address this problem, a new computational model of reputation is proposed in this paper. The ratings of each customer are treated as basic trust score events. In addition, the time series of massive ratings are aggregated to formulate the sellers’ local temporal trust scores using the Beta distribution. A logical model of trust and reputation is established based on the analysis of the dynamical relationship between trust and reputation. For single goods with repeat transactions, an iterative mathematical model of trust and reputation is established with a closed-loop feedback mechanism. Numerical experiments on repeated transactions recorded over a period of 24 months are performed. The experimental results show that the proposed method provides guidance both for theoretical research into trust and reputation and for the practical design of reputation systems in BtoC e-commerce.
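A minimal sketch of the kind of Beta-distribution aggregation of rating events described above (the binary rating encoding and the exponential forgetting factor are assumptions for illustration, not the paper's exact formulation):

```python
def temporal_trust_score(ratings, decay=0.95):
    """Aggregate a time series of binary ratings (1 = positive, 0 = negative)
    into a trust score using a Beta(alpha, beta) posterior with exponential
    forgetting, so that recent transactions weigh more than old ones."""
    alpha, beta = 1.0, 1.0           # uniform Beta prior
    for r in ratings:                # ratings ordered from oldest to newest
        alpha = decay * alpha + r
        beta = decay * beta + (1 - r)
    return alpha / (alpha + beta)    # posterior mean = expected probability of a good transaction

# A seller with mostly positive but recently declining ratings:
print(temporal_trust_score([1, 1, 1, 1, 1, 0, 0, 1, 0, 0]))
```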