5 results for Markov Chains


Relevance: 60.00%

Abstract:

In a multi-target complex network, the links (L_ij) represent the interactions between the drug (d_i) and the target (t_j), characterized by different experimental measures (K_i, K_m, IC50, etc.) obtained in pharmacological assays under diverse boundary conditions (c_j). In this work, we use Shannon entropy measures to develop a model encompassing a multi-target network of neuroprotective/neurotoxic compounds reported in the CHEMBL database. The model correctly predicts >8300 experimental outcomes, with Accuracy, Specificity, and Sensitivity in the 80%-90% range on the training and external validation series. Indeed, the model can calculate different outcomes for >30 experimental measures in >400 different experimental protocols in relation to >150 molecular and cellular targets in 11 different organisms (including human). In addition, we report for the first time the synthesis, characterization, and experimental assays of a new series of chiral 1,2-rasagiline carbamate derivatives not reported in previous works. The experimental tests included: (1) assays in the absence of neurotoxic agents; (2) in the presence of glutamate; and (3) in the presence of H2O2. Lastly, we used the new Assessing Links with Moving Averages (ALMA)-entropy model to predict possible outcomes for the new compounds in a large number of pharmacological tests not carried out experimentally.
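The entropy-based descriptors mentioned above can be illustrated with a short sketch. This is only a toy example: the function names, the bit-based entropy, and the condition-grouped moving average are assumptions for illustration, not the paper's exact ALMA formulation.

```python
import math
from collections import Counter

def shannon_entropy(values):
    """Shannon entropy (in bits) of the empirical distribution of `values`."""
    counts = Counter(values)
    n = len(values)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def moving_average_deviation(x, group_values):
    """Deviation of one measured value from the mean of all values obtained
    under the same experimental condition; a moving-average-style descriptor
    in the spirit of ALMA-type models (hypothetical simplification)."""
    mean = sum(group_values) / len(group_values)
    return x - mean

# Example: a balanced set of active/inactive outcomes has entropy 1 bit.
h = shannon_entropy(["active", "inactive", "active", "inactive"])
```

Descriptors of this kind let one outcome be compared against the expected behavior of all assays run under the same boundary condition, which is what allows a single model to span many measures and protocols.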

Relevance: 20.00%

Abstract:

This paper analyzes the stationarity of the price-dividend ratio in the context of a Markov-switching model à la Hamilton (1989) in which an asymmetric speed of adjustment is introduced. This particular specification robustly supports a nonlinear reversion process and identifies two relevant episodes: the post-war period from the mid-50s to the mid-70s and the so-called "90s boom" period. A three-regime Markov-switching model yields the best regime identification and reveals that only the first part of the 90s boom (1985-1995) and the post-war period are near-nonstationary states. Interestingly, the last part of the 90s boom (1996-2000), characterized by a growing price-dividend ratio, is entirely attributed to a regime featuring a highly reverting process.
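A Hamilton-style Markov-switching process can be sketched as a simple simulation. The restriction to two regimes and every parameter value below are invented for illustration, not taken from the paper's estimates.

```python
import random

def simulate_markov_switching(T, P, mu, phi, sigma, seed=0):
    """Simulate y_t = mu[s_t] + phi[s_t] * y_{t-1} + sigma[s_t] * eps_t,
    where the regime s_t follows a two-state Markov chain with transition
    matrix P. A toy illustration of a Hamilton (1989)-style model."""
    rng = random.Random(seed)
    s, y = 0, 0.0
    states, path = [], []
    for _ in range(T):
        # draw the next regime from row s of the transition matrix
        s = 0 if rng.random() < P[s][0] else 1
        y = mu[s] + phi[s] * y + sigma[s] * rng.gauss(0, 1)
        states.append(s)
        path.append(y)
    return states, path
```

Setting the autoregressive coefficient phi[s] close to 1 in one regime mimics the near-nonstationary episodes identified above, while a small phi[s] gives the highly reverting regime.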

Relevance: 20.00%

Abstract:

This paper considers the basic present value model of interest rates under rational expectations with two additional features. First, following McCallum (1994), the model assumes a policy reaction function where changes in the short-term interest rate are determined by the long-short spread. Second, the short-term interest rate and the risk premium processes are characterized by a Markov regime-switching model. Using US post-war interest rate data, this paper finds evidence that a two-regime switching model fits the data better than the basic model. The estimation results also show the presence of two alternative states displaying quite different features.
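The combination of a spread-driven policy reaction function with regime-switching shocks can be sketched as follows. The random-walk long rate and all parameter values are simplifying assumptions for illustration only, not the paper's estimated specification.

```python
import random

def simulate_short_rate(T, P, kappa, sigma_short, sigma_long, seed=1):
    """Toy McCallum (1994)-style dynamics: the change in the short rate
    responds to the long-short spread, and the short-rate shock variance
    switches between two Markov regimes."""
    rng = random.Random(seed)
    s, short, long_ = 0, 0.05, 0.06
    rates, regimes = [], []
    for _ in range(T):
        s = 0 if rng.random() < P[s][0] else 1        # next regime
        spread = long_ - short
        short += kappa * spread + sigma_short[s] * rng.gauss(0, 1)
        long_ += sigma_long * rng.gauss(0, 1)         # long rate as a random walk
        rates.append(short)
        regimes.append(s)
    return rates, regimes
```

Here kappa plays the role of the policy reaction coefficient, and the two values in sigma_short stand in for the two alternative states with quite different features.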

Relevance: 20.00%

Abstract:

Methods for generating a new population are a fundamental component of estimation of distribution algorithms (EDAs). They serve to transfer the information contained in the probabilistic model to the newly generated population. In EDAs based on Markov networks, methods for generating new populations usually discard information contained in the model to gain efficiency. Other methods, like Gibbs sampling, use information about all interactions in the model but are computationally very costly. In this paper we propose new methods for generating new solutions in EDAs based on Markov networks. We introduce approaches based on inference methods for computing the most probable configurations and on model-based template recombination. We show that applying different variants of these inference methods can increase the EDAs' convergence rate and reduce the number of function evaluations needed to find the optimum of binary and non-binary discrete functions.
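Computing a most probable configuration by inference, as proposed above, can be sketched for the special case of a chain-structured binary Markov network using max-product dynamic programming. The chain restriction and the potential tables are simplifying assumptions; the paper's models may use general Markov networks.

```python
def most_probable_configuration(unary, pairwise):
    """Max-product (Viterbi-style) dynamic programming on a binary chain
    Markov network: returns the assignment maximizing the product of unary
    potentials unary[i][x_i] and pairwise potentials
    pairwise[i][x_i][x_{i+1}]."""
    n = len(unary)
    # forward pass: best score of any configuration ending in each state
    score = list(unary[0])
    back = []
    for i in range(1, n):
        new_score, bp = [], []
        for x in range(2):
            best_prev = max(range(2), key=lambda p: score[p] * pairwise[i - 1][p][x])
            new_score.append(score[best_prev] * pairwise[i - 1][best_prev][x] * unary[i][x])
            bp.append(best_prev)
        score, back = new_score, back + [bp]
    # backward pass: trace the argmax through the backpointers
    x = max(range(2), key=lambda s: score[s])
    config = [x]
    for bp in reversed(back):
        x = bp[x]
        config.append(x)
    return list(reversed(config))
```

In a Markov-network EDA, such a configuration could seed the newly generated population directly or serve as a template for model-based recombination.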

Relevance: 20.00%

Abstract:

In this work, the state of the art in automatic dialogue strategy management using Markov decision processes (MDPs) with reinforcement learning (RL) is described. Partially observable Markov decision processes (POMDPs) are also described. To test the validity of these methods, two spoken dialogue systems have been developed. The first is a spoken dialogue system for providing weather forecasts, and the second is a more complex system for train information. With the first system, comparisons between a rule-based system and an automatically trained system have been made, using a real corpus to train the automatic strategy. With the second system, the scalability of these methods when used in larger systems has been tested.
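The MDP-with-RL approach to dialogue strategy learning can be sketched with tabular Q-learning on a tiny hand-crafted dialogue MDP. The states, actions, rewards, and hyperparameters below are invented for illustration and bear no relation to the weather-forecast or train-information systems described.

```python
import random

# Toy dialogue MDP (invented): state 0 = no information collected,
# state 1 = slot filled, state 2 = dialogue finished (terminal).
# Actions: 0 = ask the user, 1 = confirm and close.
TRANSITIONS = {(0, 0): (1, 0.0), (0, 1): (0, -1.0),
               (1, 0): (1, -1.0), (1, 1): (2, 10.0)}

def q_learning(episodes=500, alpha=0.5, gamma=0.9, eps=0.2, seed=0):
    """Tabular Q-learning with an epsilon-greedy exploration policy."""
    rng = random.Random(seed)
    Q = [[0.0, 0.0] for _ in range(3)]
    for _ in range(episodes):
        s = 0
        while s != 2:
            # epsilon-greedy action selection
            a = rng.randrange(2) if rng.random() < eps else max((0, 1), key=lambda a: Q[s][a])
            s2, r = TRANSITIONS[(s, a)]
            # standard Q-learning update toward the bootstrapped target
            Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
            s = s2
    return Q
```

After training, the greedy policy asks first and then confirms, which is the kind of strategy a rule designer would hand-code; the appeal of the RL approach above is that it is learned from interaction (or from a corpus) instead.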