834 results for Reward based model


Relevance:

90.00%

Publisher:

Abstract:

Photoelectron spectroscopy and scanning tunneling microscopy have been used to investigate how the oxidation state of Ce in CeO2-x(111) ultrathin films is influenced by the presence of Pd nanoparticles. Pd induces an increase in the concentration of Ce3+ cations, which is interpreted as charge transfer from Pd to CeO2-x(111) on the basis of DFT+U calculations. Charge transfer from Pd to Ce4+ is found to be energetically favorable even for individual Pd adatoms. These results have implications for our understanding of the redox behavior of ceria-based model catalyst systems.

Relevance:

90.00%

Publisher:

Abstract:

This paper introduces a new agent-based model, which incorporates the actions of individual homeowners in a long-term domestic stock model, and details how it was applied in energy policy analysis. The results indicate that current policies are likely to fall significantly short of the 80% target and suggest that current subsidy levels need re-examining. In the model, current subsidy levels appear to offer too much support to some technologies, which in turn leads to the suppression of other technologies that have a greater energy saving potential. The model can be used by policy makers to develop further scenarios to find alternative, more effective, sets of policy measures. The model is currently limited to the owner-occupied stock in England, although it can be expanded, subject to the availability of data.

Relevance:

90.00%

Publisher:

Abstract:

Recent research into flood modelling has primarily concentrated on the simulation of inundation flow without considering the influence of channel morphology. River channels are often represented by a simplified geometry that is implicitly assumed to remain unchanged during flood simulations. However, field evidence demonstrates that floods able to mobilise the boundary sediments can cause significant morphological change. Despite this, the effect of channel morphology on model results has been largely unexplored. To address this issue, the impact of channel cross-section geometry and channel long-profile variability on flood dynamics is examined using an ensemble of a 1D-2D hydraulic model (LISFLOOD-FP) of the 1:2102 year recurrence interval floods in Cockermouth, UK, within an uncertainty framework. A series of hypothetical scenarios of channel morphology was constructed based on a simple velocity-based model of critical entrainment. A Monte-Carlo simulation framework was used to quantify the effects of channel morphology, together with variations in the channel and floodplain roughness coefficients, grain size characteristics, and critical shear stress, on measures of flood inundation. The results showed that the bed elevation modifications generated by the simplistic equations were a good approximation of the observed patterns of spatial erosion, despite overestimating erosion depths. Uncertainty in channel long-profile variability affected only the local flood dynamics and did not significantly affect friction sensitivity or flood inundation mapping. The results imply that hydraulic models generally do not need to account for within-event morphodynamic changes of the type and magnitude modelled, as these have a negligible impact that is smaller than other uncertainties, e.g. boundary conditions. Instead, morphodynamic change needs to accumulate over a series of events before it becomes large enough to change the hydrodynamics of floods in supply-limited gravel-bed rivers like the one used in this research.
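
As an illustration of the kind of Monte-Carlo treatment of uncertain channel parameters described above, the following sketch samples roughness, grain size and the critical Shields parameter and applies a simple excess-shear entrainment check. All ranges, values and function names are illustrative assumptions; this is not the LISFLOOD-FP setup or the study's actual entrainment model.

```python
import numpy as np

rng = np.random.default_rng(42)

RHO_W, RHO_S, G = 1000.0, 2650.0, 9.81  # water/sediment density (kg/m^3), gravity (m/s^2)

def bed_shear_stress(depth, slope):
    """Depth-slope product estimate of boundary shear stress (Pa)."""
    return RHO_W * G * depth * slope

def critical_shear_stress(d50, shields):
    """Shields-type threshold of motion for median grain size d50 (m)."""
    return shields * (RHO_S - RHO_W) * G * d50

n_runs = 1000
entrained_runs = 0
for _ in range(n_runs):
    # Sample uncertain inputs from plausible (purely illustrative) ranges.
    manning_n = rng.uniform(0.025, 0.05)   # channel roughness (would feed the hydraulic model)
    d50 = rng.uniform(0.02, 0.10)          # median grain size (m)
    theta_c = rng.uniform(0.03, 0.06)      # critical Shields parameter
    depth = rng.uniform(1.0, 4.0)          # flow depth (m)
    slope = 0.004                          # reach-averaged slope

    tau = bed_shear_stress(depth, slope)
    tau_c = critical_shear_stress(d50, theta_c)
    if tau > tau_c:                        # simple excess-shear entrainment test
        entrained_runs += 1

print(f"Fraction of Monte-Carlo runs predicting entrainment: {entrained_runs / n_runs:.2f}")
```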

Relevance:

90.00%

Publisher:

Abstract:

Solar plus heat pump systems are often very complex in design, sometimes with special heat pump arrangements and control. Detailed heat pump models can therefore make system simulations very slow while still not matching real heat pump performance in a system very accurately. The idea here is to start from a standard measured performance map of test points for a heat pump according to EN 14825 and then determine characteristic parameters for a simplified correlation-based model of the heat pump. By plotting heat pump test data in different ways, including as power input and output and not only as COP, a simplified relation could be seen. Using the same methodology as in the QDT part of the EN 12975 collector test standard, it could be shown that a very simple model describes the heat pump test data very accurately by identifying four parameters in the correlation equation found. © 2012 The Authors.
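
To illustrate the general idea of fitting a small correlation-based model to a measured performance map, the sketch below performs a least-squares fit of heating capacity and electrical input power against source and sink temperatures. The correlation form, the test points and all numbers are made-up assumptions for illustration only; they are not the paper's actual four-parameter equation or EN 14825 data.

```python
import numpy as np

# Illustrative EN 14825-style test points: (source temp, sink temp) in deg C,
# with heating capacity and electrical input power in kW (invented numbers).
t_source = np.array([-7.0, 2.0, 7.0, 12.0, 7.0, 2.0])
t_sink   = np.array([35.0, 35.0, 35.0, 35.0, 45.0, 55.0])
q_heat   = np.array([5.1, 6.4, 7.6, 8.7, 7.0, 5.9])
p_el     = np.array([2.0, 2.1, 2.2, 2.3, 2.6, 3.0])

# Assumed correlation: y = c0 + c1*T_source + c2*T_sink, fitted separately
# for capacity and input power, giving a small set of identified parameters.
A = np.column_stack([np.ones_like(t_source), t_source, t_sink])
coef_q, *_ = np.linalg.lstsq(A, q_heat, rcond=None)
coef_p, *_ = np.linalg.lstsq(A, p_el, rcond=None)

def predict(t_src, t_snk):
    x = np.array([1.0, t_src, t_snk])
    q, p = x @ coef_q, x @ coef_p
    return q, p, q / p   # capacity, input power, COP

print(predict(0.0, 40.0))
```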

Relevance:

90.00%

Publisher:

Abstract:

This study investigates the one-month-ahead, out-of-sample predictive power of a Taylor-rule-based model for exchange rate forecasting. We review relevant studies concluding that macroeconomic models can explain short-run exchange rates, and we also present studies that are skeptical about the ability of macroeconomic variables to predict exchange rate movements. To contribute to the topic, this work presents its own evidence by implementing the model with the best reported predictive performance in Molodtsova and Papell (2009), the "symmetric Taylor rule model with heterogeneous coefficients, smoothing, and a constant". For this, we use a sample of 14 currencies against the US dollar, which allowed the generation of monthly out-of-sample forecasts from January 2000 to March 2014. Following the criterion adopted by Galimberti and Moura (2012), we focus on countries that adopted a floating exchange rate regime and inflation targeting, but we choose currencies of both developed and developing countries. Our results corroborate the study by Rogoff and Stavrakeva (2008) in that conclusions about exchange rate predictability depend on the statistical test adopted, so robust and rigorous tests are required for a proper evaluation of the model. After finding that it is not possible to claim that the implemented model provides more accurate forecasts than a random walk, we assess whether the model is at least able to generate "rational", or "consistent", forecasts. For this purpose, we use the theoretical and instrumental framework defined and implemented by Cheung and Chinn (1998) and conclude that the forecasts produced by the Taylor rule model are "inconsistent". Finally, we run Granger causality tests to check whether the lagged values of the returns predicted by the structural model explain the observed contemporaneous values. We find that the fundamentals-based model is unable to anticipate realized returns.
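
The following sketch shows the generic mechanics of such an out-of-sample exercise: an expanding-window regression of next-month exchange rate changes on Taylor-rule fundamentals, with the RMSE compared against a driftless random walk. The data are synthetic and the regressors are a simplified stand-in, not the Molodtsova and Papell (2009) specification or the study's sample.

```python
import numpy as np

rng = np.random.default_rng(0)
T = 200  # months of synthetic, illustrative data

# Simplified Taylor-rule fundamentals: inflation and output-gap differentials
# (home minus US) and the lagged interest rate differential (smoothing term).
infl_diff = rng.normal(0, 1, T)
gap_diff = rng.normal(0, 1, T)
rate_diff_lag = rng.normal(0, 1, T)
ds = rng.normal(0, 0.02, T)          # monthly log exchange rate change to forecast

X = np.column_stack([np.ones(T), infl_diff, gap_diff, rate_diff_lag])

window = 120                          # initial estimation window, then expanding
err_model, err_rw = [], []
for t in range(window, T - 1):
    # regress next-month changes on current fundamentals over the history up to t
    beta, *_ = np.linalg.lstsq(X[:t], ds[1:t + 1], rcond=None)
    forecast = X[t] @ beta            # one-month-ahead model forecast
    err_model.append(ds[t + 1] - forecast)
    err_rw.append(ds[t + 1] - 0.0)    # driftless random walk predicts no change

rmse_model = np.sqrt(np.mean(np.square(err_model)))
rmse_rw = np.sqrt(np.mean(np.square(err_rw)))
print(f"RMSE model: {rmse_model:.4f}  RMSE random walk: {rmse_rw:.4f}")
```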

Relevance:

90.00%

Publisher:

Abstract:

RePART (Reward/Punishment ART) is a neural model that is a variation of the Fuzzy ARTMAP model. The network was proposed in order to minimize problems inherent in ARTMAP-based models, such as category proliferation and misclassification. RePART makes use of additional mechanisms, such as an instance counting parameter, a reward/punishment process and a variable vigilance parameter. The instance counting parameter aims to minimize the misclassification problem, which is a consequence of the sensitivity to noise frequently present in ARTMAP-based models. The variable vigilance parameter, in turn, tries to smooth out the category proliferation problem inherent in ARTMAP-based models, decreasing the complexity of the network. RePART was originally proposed in order to minimize the aforementioned problems and was shown to perform better (higher accuracy and lower complexity) than ARTMAP-based models. This work investigates the performance of the RePART model in classifier ensembles. Different ensemble sizes, learning strategies and structures are used in this investigation. The aim is to identify the main advantages and drawbacks of this model when used as a component in classifier ensembles, providing a broader foundation for the use of RePART in other pattern recognition applications.
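
Purely as an illustration of the reward/punishment and variable-vigilance ideas mentioned above, the sketch below keeps per-category instance counts and confidence scores that are rewarded on correct predictions and punished (with vigilance tightened) on errors. This is an invented toy, not the actual RePART equations or the Fuzzy ARTMAP learning rule.

```python
import numpy as np

class RewardPunishCategories:
    """Toy reward/punishment bookkeeping for prototype categories (illustrative only)."""

    def __init__(self, n_categories, base_vigilance=0.7):
        self.counts = np.zeros(n_categories)                     # instance counting parameter
        self.scores = np.ones(n_categories)                      # reward/punishment weight
        self.vigilance = np.full(n_categories, base_vigilance)   # variable vigilance

    def update(self, category, correct, reward=0.1, punishment=0.2):
        self.counts[category] += 1
        if correct:
            self.scores[category] += reward
        else:
            self.scores[category] = max(0.0, self.scores[category] - punishment)
            # tighten vigilance for categories that keep misclassifying
            self.vigilance[category] = min(0.95, self.vigilance[category] + 0.01)

    def confidence(self, category):
        # categories backed by many instances and few punishments get more weight
        return self.scores[category] * self.counts[category] / (1.0 + self.counts[category])

cats = RewardPunishCategories(n_categories=3)
cats.update(0, correct=True)
cats.update(1, correct=False)
print(cats.confidence(0), cats.confidence(1), cats.vigilance)
```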

Relevance:

90.00%

Publisher:

Abstract:

Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)

Relevance:

90.00%

Publisher:

Abstract:

There are strong uncertainties regarding LAI dynamics in forest ecosystems in response to climate change. While empirical growth & yield models (G&YMs) provide good estimations of tree growth at the stand level on a yearly to decennial scale, process-based models (PBMs) use LAI dynamics as a key variable for enabling the accurate prediction of tree growth over short time scales. Bridging the gap between PBMs and G&YMs could improve the prediction of forest growth and, therefore, carbon, water and nutrient fluxes by combining modeling approaches at the stand level. Our study aimed to estimate monthly changes of leaf area in response to climate variations from sparse measurements of foliage area and biomass. A leaf population probabilistic model (SLCD) was designed to simulate foliage renewal. The leaf population was distributed in monthly cohorts, and the total population size was limited depending on forest age and productivity. Foliage dynamics were driven by a foliation function and the probabilities ruling leaf aging or fall, whose formulation depends on the forest environment. The model was applied to three tree species growing under contrasting climates and soil types. In tropical Brazilian evergreen broadleaf eucalypt plantations, the phenology was described using 8 parameters. A multi-objective evolutionary algorithm (MOEA) was used to fit the model parameters on litterfall and LAI data over an entire stand rotation. Field measurements from a second eucalypt stand were used to validate the model. Seasonal LAI changes were accurately rendered for both sites (R² = 0.898 fit, R² = 0.698 validation). Litterfall production was correctly simulated (R² = 0.562 fit, R² = 0.4018 validation) and may be improved by using additional validation data in future work. In two French temperate deciduous forests (beech and oak), we adapted phenological sub-modules of the CASTANEA model to simulate canopy dynamics, and SLCD was validated using LAI measurements. The phenological patterns were simulated with good accuracy in the two cases studied. However, LAImax was not accurately simulated in the beech forest, and further improvement is required. Our probabilistic approach is expected to contribute to improving predictions of LAI dynamics. The model formalism is general and suitable for broadleaf forests over a large range of ecological conditions. (C) 2014 Elsevier B.V. All rights reserved.
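
To make the cohort idea concrete, the sketch below runs a minimal monthly leaf-cohort simulation: a foliation term adds a new cohort each month (capped by a total leaf-area limit), and each existing cohort sheds leaves with an age-dependent fall probability. The functional forms and numbers are illustrative assumptions, not the actual SLCD formulation or its fitted parameters.

```python
import numpy as np

def simulate_leaf_cohorts(months, foliation, p_fall, max_leaf_area):
    """Toy monthly leaf-cohort bookkeeping (not the actual SLCD equations).

    foliation[m]  -- new leaf area produced in month m (climate/phenology driven)
    p_fall(age)   -- probability that a leaf of a given age (months) falls this month
    max_leaf_area -- cap on the total population, standing in for stand productivity
    """
    cohorts = []            # leaf area remaining in each monthly cohort
    lai, litter = [], []
    for m in range(months):
        # age existing cohorts: a fraction of each cohort falls and becomes litter
        fallen = 0.0
        for i, area in enumerate(cohorts):
            age = len(cohorts) - i
            loss = area * p_fall(age)
            cohorts[i] -= loss
            fallen += loss
        # new cohort, limited so the total canopy does not exceed the cap
        room = max(0.0, max_leaf_area - sum(cohorts))
        cohorts.append(min(foliation[m], room))
        lai.append(sum(cohorts))
        litter.append(fallen)
    return np.array(lai), np.array(litter)

# Example: a seasonal foliation pulse and an age-increasing fall probability.
months = 36
foliation = 0.6 + 0.5 * np.sin(2 * np.pi * np.arange(months) / 12)
lai, litter = simulate_leaf_cohorts(months, foliation,
                                    p_fall=lambda age: min(0.9, 0.02 * age),
                                    max_leaf_area=4.0)
print(lai[-12:].round(2))
```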

Relevance:

90.00%

Publisher:

Abstract:

Access control is a key component of security in any computer system. In the last two decades, research on Role Based Access Control models has been intense. One of the most important components of a role-based model is the role-permission relationship. In this paper, the technique of systematic mapping is used to identify, extract and analyze the many approaches applied to establish the role-permission relationship. The main goal of this mapping is to point out directions for significant research in the area of Role Based Access Control models.
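
For readers unfamiliar with the role-permission relationship at the core of RBAC, a minimal sketch follows: users are assigned roles, roles are granted permissions, and an access check asks whether any of the user's roles carries the requested permission. Names and permissions are illustrative.

```python
from dataclasses import dataclass, field
from typing import Dict, Set

@dataclass
class RBAC:
    role_permissions: Dict[str, Set[str]] = field(default_factory=dict)  # role -> permissions
    user_roles: Dict[str, Set[str]] = field(default_factory=dict)        # user -> roles

    def grant(self, role: str, permission: str) -> None:
        self.role_permissions.setdefault(role, set()).add(permission)

    def assign(self, user: str, role: str) -> None:
        self.user_roles.setdefault(user, set()).add(role)

    def check(self, user: str, permission: str) -> bool:
        # access is allowed if any role assigned to the user holds the permission
        return any(permission in self.role_permissions.get(role, set())
                   for role in self.user_roles.get(user, set()))

rbac = RBAC()
rbac.grant("editor", "document:write")
rbac.assign("alice", "editor")
print(rbac.check("alice", "document:write"))   # True
print(rbac.check("alice", "document:delete"))  # False
```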

Relevance:

90.00%

Publisher:

Abstract:

The aim of the thesis is to formulate a suitable Item Response Theory (IRT) based model to measure HRQoL (as a latent variable) using a mixed-responses questionnaire and relaxing the hypothesis of a normally distributed latent variable. The new model is a combination of two models already presented in the literature: a latent trait model for mixed responses and an IRT model for a skew-normal latent variable. It is developed in a Bayesian framework; a Markov chain Monte Carlo procedure is used to generate samples from the posterior distribution of the parameters of interest. The proposed model is tested on a questionnaire composed of five discrete items and one continuous item for measuring HRQoL in children, the EQ-5D-Y questionnaire. A large sample of children collected in schools was used. In comparison with a model for only discrete responses and a model for mixed responses with a normal latent variable, the new model performs better in terms of deviance information criterion (DIC), chain convergence times and precision of the estimates.
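
As a small illustration of the data structure such a model targets, the sketch below simulates mixed responses (five ordered items plus one continuous score) from a skew-normal latent HRQoL trait using a graded-response-style link. The item parameters and distributions are invented for illustration; this is a data-generating sketch, not the thesis' Bayesian estimation procedure.

```python
import numpy as np
from scipy.stats import skewnorm, norm

rng = np.random.default_rng(1)
n = 500

# Latent HRQoL trait drawn from a skew-normal rather than a normal distribution.
theta = skewnorm.rvs(a=-3.0, loc=0.0, scale=1.0, size=n, random_state=rng)

# Five discrete items with 3 ordered categories each (illustrative parameters).
discrim = np.array([1.2, 0.9, 1.5, 1.1, 0.8])
thresholds = np.array([[-0.8, 0.6]] * 5)

def graded_probs(theta_i, a, cuts):
    """P(category k) for one ordered item under a 2PL-style graded model."""
    cum = norm.cdf(a * theta_i - cuts)                 # P(Y >= k+1) at each cut
    p = np.empty(len(cuts) + 1)
    p[0], p[1:-1], p[-1] = 1 - cum[0], cum[:-1] - cum[1:], cum[-1]
    return p

discrete = np.array([[rng.choice(3, p=graded_probs(t, discrim[j], thresholds[j]))
                      for j in range(5)] for t in theta])

# One continuous item (e.g. a VAS-like score) as a noisy linear function of the trait.
continuous = 60 + 15 * theta + rng.normal(0, 5, n)

print(discrete[:3], continuous[:3].round(1))
```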

Relevance:

90.00%

Publisher:

Abstract:

In this thesis, we propose a novel approach to model the diffusion of residential PV systems. For this purpose, we use an agent-based model where the agents are the families living in the area of interest. The case study is the Emilia-Romagna Regional Energy Plan, which aims to increase the production of electricity from renewable sources. We therefore study the microdata from the Survey on Household Income and Wealth (SHIW) provided by the Bank of Italy in order to obtain the characteristics of families living in Emilia-Romagna. These data allowed us to artificially generate families and reproduce the socio-economic aspects of the region. The generated families are placed in the virtual world by associating them with buildings, which are obtained by analysing the vector data of regional buildings made available by the region. Each year, the model determines the level of diffusion by simulating the installed capacity. The adoption behaviour is influenced by social interactions, the household's economic situation, the environmental benefits arising from the adoption, and the payback period of the investment.
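
A minimal sketch of an adoption decision combining the four drivers listed above (payback period, social interaction, economic situation, environmental attitude) is shown below. The weights, functional form and input numbers are assumptions made for illustration, not the thesis' calibrated model or the SHIW data.

```python
import numpy as np

rng = np.random.default_rng(7)

def payback_years(capex, annual_saving, subsidy=0.0):
    """Simple undiscounted payback period of a residential PV investment."""
    return capex * (1.0 - subsidy) / annual_saving

def adoption_probability(payback, neighbour_share, income, env_attitude,
                         w_pb=0.35, w_soc=0.3, w_inc=0.2, w_env=0.15):
    """Illustrative weighted score of the four adoption drivers (assumed weights)."""
    pb_score = max(0.0, 1.0 - payback / 25.0)     # shorter payback -> higher score
    score = (w_pb * pb_score + w_soc * neighbour_share
             + w_inc * income + w_env * env_attitude)
    return min(1.0, score)

# One simulated year for a batch of synthetic families.
n_families = 10_000
income = rng.uniform(0, 1, n_families)            # normalised economic situation
env = rng.uniform(0, 1, n_families)               # environmental attitude
neighbour_share = 0.05                            # share of adopters in the neighbourhood
pb = payback_years(capex=9000, annual_saving=700, subsidy=0.3)
p = np.array([adoption_probability(pb, neighbour_share, income[i], env[i])
              for i in range(n_families)])
adopters = rng.uniform(0, 1, n_families) < p
print(f"New adopters this year: {adopters.sum()}")
```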

Relevance:

90.00%

Publisher:

Abstract:

Systems Biology is an innovative way of doing biology that has recently arisen in bioinformatics contexts, characterised by the study of biological systems as complex systems, with a strong focus on the system level and on the interaction dimension. In other words, the objective is to understand biological systems as a whole, putting in the foreground not only the study of the individual parts as standalone parts, but also their interactions and the global properties that emerge at the system level by means of the interaction among the parts. This thesis focuses on the adoption of multi-agent systems (MAS) as a suitable paradigm for Systems Biology, for developing models and simulations of complex biological systems. Multi-agent systems have recently been introduced in informatics contexts as a suitable paradigm for modelling and engineering complex systems. Roughly speaking, a MAS can be conceived as a set of autonomous and interacting entities, called agents, situated in some kind of environment, where they fruitfully interact and coordinate so as to obtain a coherent global system behaviour. The claim of this work is that the general properties of MAS make them an effective approach for modelling and building simulations of complex biological systems, following the methodological principles identified by Systems Biology. In particular, the thesis focuses on cell populations as biological systems. In order to support the claim, the thesis introduces and describes (i) a MAS-based model conceived for modelling the dynamics of systems of cells interacting inside cell environments called niches, and (ii) a computational tool developed for implementing the models and executing the simulations. The tool is meant to work as a kind of virtual laboratory, on top of which various kinds of virtual experiments can be performed, characterised by the definition and execution of specific models implemented as MASs, so as to support the validation, falsification and improvement of the models through the observation and analysis of the simulations. A hematopoietic stem cell system is taken as the reference case study for formulating a specific model and executing virtual experiments.
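
To illustrate the MAS framing in this setting, the sketch below models each cell as an autonomous agent situated in a niche (the shared environment) that, at each step, may self-renew, differentiate or die depending on the niche occupancy it perceives. The rules and rates are invented toy values, not the thesis' hematopoietic stem cell model.

```python
import random

class Niche:
    """Shared environment: holds the cell population and its carrying capacity."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.cells = []
    def occupancy(self):
        return len(self.cells)
    def add(self, cell):
        self.cells.append(cell)
    def remove(self, cell):
        self.cells.remove(cell)

class Cell:
    """Autonomous agent: acts on the niche it is situated in."""
    def __init__(self, kind="stem"):
        self.kind = kind

    def act(self, niche):
        if self.kind == "stem":
            if niche.occupancy() < niche.capacity and random.random() < 0.3:
                niche.add(Cell("stem"))       # self-renewal while there is room
            elif random.random() < 0.2:
                self.kind = "progenitor"      # otherwise it may differentiate
        elif random.random() < 0.05:
            niche.remove(self)                # progenitors occasionally die

niche = Niche(capacity=100)
niche.cells = [Cell("stem") for _ in range(10)]
for step in range(50):
    for cell in list(niche.cells):            # iterate over a copy: agents add/remove cells
        cell.act(niche)
n_stem = sum(c.kind == "stem" for c in niche.cells)
print(f"after {step + 1} steps: {niche.occupancy()} cells, {n_stem} of them stem")
```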

Relevance:

90.00%

Publisher:

Abstract:

Liquids and gases form a vital part of nature. Many of these are complex fluids with non-Newtonian behaviour. We introduce a mathematical model describing the unsteady motion of an incompressible polymeric fluid. Each polymer molecule is treated as two beads connected by a spring. For a nonlinear spring force it is not possible to obtain a closed system of equations unless we approximate the force law. The Peterlin approximation replaces the length of the spring by the length of the average spring. Consequently, the macroscopic dumbbell-based model for dilute polymer solutions is obtained. The model consists of the conservation of mass and momentum and the time evolution of the symmetric positive definite conformation tensor, where diffusive effects are taken into account. In two space dimensions we prove global-in-time existence of weak solutions. Assuming more regular data, we show higher regularity and consequently uniqueness of the weak solution. For the Oseen-type Peterlin model we propose a linear pressure-stabilized characteristics finite element scheme. We derive the corresponding error estimates and prove, for linear finite elements, optimal first-order accuracy. The theoretical error estimates for the pressure-stabilized characteristics finite element scheme are confirmed by a series of numerical experiments.
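
For orientation, a schematic form of such a dumbbell/Peterlin-type system with a diffusive conformation tensor is sketched below; the precise scaling, constants and choice of the Peterlin function differ between formulations, so this is only an illustrative summary, not the thesis' exact equations.

```latex
\begin{aligned}
  \nabla \cdot \mathbf{u} &= 0, \\
  \partial_t \mathbf{u} + (\mathbf{u}\cdot\nabla)\mathbf{u} + \nabla p
    &= \nu \,\Delta \mathbf{u} + \nabla \cdot \boldsymbol{\tau}, \qquad
    \boldsymbol{\tau} = \mu\bigl(\phi(\operatorname{tr}\mathbf{C})\,\mathbf{C} - \mathbf{I}\bigr), \\
  \partial_t \mathbf{C} + (\mathbf{u}\cdot\nabla)\mathbf{C}
    - (\nabla\mathbf{u})\,\mathbf{C} - \mathbf{C}\,(\nabla\mathbf{u})^{\mathsf T}
    &= -\,\phi(\operatorname{tr}\mathbf{C})\,\mathbf{C} + \mathbf{I} + \varepsilon\,\Delta\mathbf{C}.
\end{aligned}
```

Here C is the symmetric positive definite conformation tensor, the function φ(tr C) encodes the Peterlin closure (the spring length replaced by the average length), and the ε ΔC term represents the diffusive effects mentioned in the abstract.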

Relevance:

90.00%

Publisher:

Abstract:

Nowadays communication is switching from a centralized scenario, where communication media like newspapers, radio and TV programs produce information and people are just consumers, to a completely different decentralized scenario, where everyone is potentially an information producer through the use of social networks, blogs and forums that allow a real-time worldwide information exchange. These new instruments, as a result of their widespread diffusion, have started playing an important socio-economic role. They are the most used communication media and, as a consequence, they constitute the main source of information that enterprises, political parties and other organizations can rely on. Analyzing data stored in servers all over the world is feasible by means of Text Mining techniques like Sentiment Analysis, which aims to extract opinions from huge amounts of unstructured text. This can be used to determine, for instance, the degree of user satisfaction with products, services, politicians and so on. In this context, this dissertation presents new Document Sentiment Classification methods based on the mathematical theory of Markov Chains. All these approaches rely on a Markov Chain based model, which is language independent and whose key features are simplicity and generality, making it interesting with respect to previous, more sophisticated techniques. Every discussed technique has been tested in both Single-Domain and Cross-Domain Sentiment Classification settings, comparing performance with that of two previous works. The performed analysis shows that some of the examined algorithms produce results comparable with the best methods in the literature, with reference to both single-domain and cross-domain tasks, in 2-class (i.e. positive and negative) Document Sentiment Classification. However, there is still room for improvement, and this work also shows a way to enhance performance: a good novel feature selection process would be enough to outperform the state of the art. Furthermore, since some of the proposed approaches show promising results in 2-class Single-Domain Sentiment Classification, future work will also validate these results in tasks with more than 2 classes.
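
As an illustration of the general idea of a Markov-chain-based document sentiment classifier, the sketch below estimates one word-transition table per class and assigns a document to the class under which its word sequence has the higher smoothed log-likelihood. This is an assumed variant for illustration, not the dissertation's exact formulation; the training sentences and the smoothing constants are invented.

```python
from collections import defaultdict
import math

def train_chain(documents):
    """Count word-to-word transitions over the documents of one class."""
    counts = defaultdict(lambda: defaultdict(int))
    for doc in documents:
        words = doc.lower().split()
        for prev, curr in zip(words, words[1:]):
            counts[prev][curr] += 1
    return counts

def log_likelihood(doc, counts, alpha=1.0, vocab_size=10_000):
    """Smoothed log-likelihood of the document's word sequence under one chain."""
    words = doc.lower().split()
    ll = 0.0
    for prev, curr in zip(words, words[1:]):
        row = counts.get(prev, {})
        ll += math.log((row.get(curr, 0) + alpha) / (sum(row.values()) + alpha * vocab_size))
    return ll

pos_chain = train_chain(["great phone really love it", "love the battery great screen"])
neg_chain = train_chain(["terrible battery really hate it", "hate the screen terrible build"])

def classify(doc):
    return "positive" if log_likelihood(doc, pos_chain) > log_likelihood(doc, neg_chain) else "negative"

print(classify("really love the screen"))
```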

Relevance:

90.00%

Publisher:

Abstract:

Learning by reinforcement is important in shaping animal behavior, and in particular in behavioral decision making. Such decision making is likely to involve the integration of many synaptic events in space and time. However, when a single reinforcement signal is used to modulate synaptic plasticity, as suggested in classical reinforcement learning algorithms, a twofold problem arises: different synapses will have contributed differently to the behavioral decision, and even for one and the same synapse, releases at different times may have had different effects. Here we present a plasticity rule which solves this spatio-temporal credit assignment problem in a population of spiking neurons. The learning rule is spike-time dependent and maximizes the expected reward by following its stochastic gradient. Synaptic plasticity is modulated not only by the reward, but also by a population feedback signal. While this additional signal solves the spatial component of the problem, the temporal one is solved by means of synaptic eligibility traces. In contrast to temporal difference (TD) based approaches to reinforcement learning, our rule is explicit with regard to the assumed biophysical mechanisms. Neurotransmitter concentrations determine plasticity, and learning occurs fully online. Further, it works even if the task to be learned is non-Markovian, i.e. when reinforcement is not determined by the current state of the system but may also depend on past events. The performance of the model is assessed by studying three non-Markovian tasks. In the first task, the reward is delayed beyond the last action, with unrelated stimuli and actions appearing in between. The second task involves an action sequence which is itself extended in time, and reward is only delivered at the last action, as is the case in board games. The third task is the inspection game studied in neuroeconomics, in which an inspector tries to prevent a worker from shirking. Applying our algorithm to this game yields learning behavior which is consistent with behavioral data from humans and monkeys and reveals properties of a mixed Nash equilibrium. These examples show that our neuronal implementation of reward-based learning copes with delayed and stochastic reward delivery, and also with the learning of mixed strategies in two-opponent games.
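
The sketch below illustrates the general shape of such reward-modulated learning with eligibility traces and a population feedback factor: stochastic binary units accumulate a decaying trace of "who could have caused what", and weights are updated only when a delayed reward arrives, gated by the population activity. It is a generic REINFORCE-style toy under invented parameters and a toy task, not the paper's spiking-neuron plasticity rule.

```python
import numpy as np

rng = np.random.default_rng(3)

n_pre, n_post = 20, 5
w = rng.normal(0, 0.1, (n_post, n_pre))   # synaptic weights
trace = np.zeros_like(w)                  # synaptic eligibility traces
tau_e, lr = 0.9, 0.05                     # trace decay per step, learning rate

def step(x):
    """Stochastic binary 'spiking' of the postsynaptic population."""
    p = 1.0 / (1.0 + np.exp(-(w @ x)))
    y = (rng.uniform(size=n_post) < p).astype(float)
    return y, p

for trial in range(200):
    reward = 0.0
    for t in range(10):                              # an episode extended in time
        x = (rng.uniform(size=n_pre) < 0.2).astype(float)
        y, p = step(x)
        # eligibility: decaying memory of recent (possibly causal) pre-post pairings
        trace = tau_e * trace + np.outer(y - p, x)
        if t == 9:                                   # reward delivered only at the end
            reward = 1.0 if y.sum() >= 3 else 0.0    # toy criterion: enough units active
    pop_feedback = y.mean()                          # population activity as extra factor
    # weight change gated by the delayed reward and the population feedback signal
    w += lr * reward * pop_feedback * trace
    trace[:] = 0.0

print("mean weight after learning:", round(float(w.mean()), 3))
```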