244 results for Markov Switching model
in University of Queensland eSpace - Australia
Abstract:
Latent inhibition (LI) is an important model for understanding cognitive deficits in schizophrenia. Disruption of LI is thought to result from an inability to ignore irrelevant stimuli. The study investigated LI in schizophrenic patients by using Pavlovian conditioning of electrodermal responses in a complete within-subject design. Thirty-two schizophrenic patients (16 acute, unmedicated and 16 medicated patients) and 16 healthy control subjects (matched with respect to age and gender) participated in the study. The experiment consisted of two stages: preexposure and conditioning. During preexposure, two visual stimuli were presented, one of which served as the to-be-conditioned stimulus (CSp+) and the other as the not-to-be-conditioned stimulus (CSp-) during the following conditioning (= acquisition). During acquisition, two novel visual stimuli (CSn+ and CSn-) were introduced. A reaction time task was used as the unconditioned stimulus (US). LI was defined as the difference in response differentiation observed between preexposed and non-preexposed sets of CS+ and CS-. During preexposure, the schizophrenic patients did not differ from the control subjects in electrodermal responding, either in the extent of orienting or in the course of habituation. The exposure to novel stimuli at the beginning of acquisition elicited reduced orienting responses in unmedicated patients compared to medicated patients and control subjects. LI was observed in medicated schizophrenic patients and healthy controls, but not in acute unmedicated patients. Furthermore, LI was found to be correlated with the duration of illness: it was attenuated in patients who had suffered their first psychotic episode. (C) 2002 Elsevier Science B.V. All rights reserved.
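One way to read the operational definition above is as a difference of differences in conditioned response magnitude R during acquisition (the notation is ours, not the authors'):

```latex
LI = \bigl[ R(CS_{n+}) - R(CS_{n-}) \bigr] - \bigl[ R(CS_{p+}) - R(CS_{p-}) \bigr]
```

so that a positive LI indicates weaker differentiation for the preexposed pair than for the novel pair.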
Abstract:
A significant problem in the collection of responses to potentially sensitive questions, such as those relating to illegal, immoral or embarrassing activities, is non-sampling error due to refusal to respond or false responses. Eichhorn & Hayre (1983) suggested the use of scrambled responses to reduce this form of bias. This paper considers a linear regression model in which the dependent variable is unobserved, but for which its sum or product with a scrambling random variable of known distribution is known. The performance of two likelihood-based estimators is investigated, namely a Bayesian estimator obtained through a Markov chain Monte Carlo (MCMC) sampling scheme and a classical maximum-likelihood estimator. These two estimators and an estimator suggested by Singh, Joarder & King (1996) are compared. Monte Carlo results show that the Bayesian estimator outperforms the classical estimators in almost all cases, and the relative performance of the Bayesian estimator improves as the responses become more scrambled.
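As a rough illustration of the Bayesian route described above, the sketch below fits a linear regression whose response has been additively scrambled by noise of known variance, using a plain random-walk Metropolis sampler. The data, priors and proposal scales are illustrative assumptions, not the paper's; under additive N(0, tau^2) scrambling the observed response is simply normal with inflated variance, which the log-posterior exploits.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- simulate scrambled-response data (illustrative, not the paper's data) ---
n = 200
X = np.column_stack([np.ones(n), rng.normal(size=n)])
beta_true, sigma_true, tau = np.array([1.0, 2.0]), 1.0, 1.5   # tau: known scrambling s.d.
y = X @ beta_true + rng.normal(scale=sigma_true, size=n)       # unobserved true responses
z = y + rng.normal(scale=tau, size=n)                          # observed, additively scrambled

# Under additive N(0, tau^2) scrambling, z_i ~ N(x_i'beta, sigma^2 + tau^2).
def log_post(beta, log_sigma):
    var = np.exp(log_sigma) ** 2 + tau ** 2
    resid = z - X @ beta
    loglik = -0.5 * np.sum(resid ** 2 / var + np.log(2 * np.pi * var))
    logprior = -0.5 * np.sum(beta ** 2) / 100.0 - 0.5 * log_sigma ** 2 / 100.0  # vague priors
    return loglik + logprior

# --- random-walk Metropolis sampler ---
draws = []
beta, log_sigma = np.zeros(2), 0.0
current = log_post(beta, log_sigma)
for it in range(5000):
    prop_beta = beta + 0.1 * rng.normal(size=2)
    prop_log_sigma = log_sigma + 0.1 * rng.normal()
    proposed = log_post(prop_beta, prop_log_sigma)
    if np.log(rng.uniform()) < proposed - current:      # Metropolis accept/reject
        beta, log_sigma, current = prop_beta, prop_log_sigma, proposed
    draws.append(np.append(beta, np.exp(log_sigma)))

draws = np.array(draws[1000:])                           # drop burn-in
print("posterior means (beta0, beta1, sigma):", draws.mean(axis=0))
```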
Abstract:
The periodic structure of business cycles suggests that significant asymmetries are present over different phases of the cycle. This paper uses Markov regime-switching models with fixed and duration-dependent transition probabilities to directly model expansions, contractions and durations in Australian GDP growth and unemployment growth. Evidence is found of significant asymmetry in growth rates across expansions and contractions for both series. GDP contractions exhibit duration dependence, implying that as output recessions age the likelihood of switching into an expansion phase increases. Unemployment growth does not exhibit duration dependence in either phase. Evidence is also presented that non-linearities in unemployment growth are well explained by the asymmetries in the GDP growth cycle. The analysis suggests that recessions are periods of rapid and intense job destruction, that Australian unemployment tends to ratchet up in recessionary periods and, in contrast to US and UK studies, that shocks to Australian unemployment growth are more persistent in recessions than expansions. [E37 C5 C41]
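A minimal simulation of the kind of duration-dependent regime switching described above (the regime means, noise level and logistic duration effect are illustrative assumptions, not the paper's estimates):

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative two-regime model: expansions (0) and contractions (1).
mu = {0: 0.9, 1: -0.5}     # mean growth in each regime (assumed values)
sigma, T = 0.6, 400

def stay_prob(state, duration):
    """Probability of remaining in the current regime next period.

    Expansions have a fixed continuation probability; contractions are
    duration dependent: the longer a contraction has lasted, the lower its
    continuation probability (logistic in duration), so the exit hazard rises."""
    if state == 0:
        return 0.95
    return 1.0 / (1.0 + np.exp(-(2.5 - 0.3 * duration)))

state, duration = 0, 1
growth, states = [], []
for t in range(T):
    growth.append(mu[state] + sigma * rng.normal())
    states.append(state)
    if rng.uniform() < stay_prob(state, duration):
        duration += 1
    else:
        state, duration = 1 - state, 1

growth, states = np.array(growth), np.array(states)
runs = np.split(states, np.flatnonzero(np.diff(states) != 0) + 1)
contraction_lengths = [len(r) for r in runs if r[0] == 1]
print("mean growth, expansions:    %+.2f" % growth[states == 0].mean())
print("mean growth, contractions:  %+.2f" % growth[states == 1].mean())
print("average contraction length: %.1f periods" % np.mean(contraction_lengths))
```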
Abstract:
We introduce a conceptual model for the in-plane physics of an earthquake fault. The model employs cellular automaton techniques to simulate tectonic loading, earthquake rupture, and strain redistribution. The impact of a hypothetical crustal elastodynamic Green's function is approximated by a long-range strain redistribution law with an r^(-p) dependence. We investigate the influence of the effective elastodynamic interaction range upon the dynamical behaviour of the model by conducting experiments with different values of the exponent (p). The results indicate that this model has two distinct, stable modes of behaviour. The first mode produces a characteristic earthquake distribution with moderate to large events preceded by an interval of time in which the rate of energy release accelerates. A correlation function analysis reveals that accelerating sequences are associated with a systematic, global evolution of strain energy correlations within the system. The second stable mode produces Gutenberg-Richter statistics, with near-linear energy release and no significant global correlation evolution. A model with effectively short-range interactions preferentially displays Gutenberg-Richter behaviour. However, models with long-range interactions appear to switch between the characteristic and GR modes. As the range of elastodynamic interactions is increased, characteristic behaviour begins to dominate GR behaviour. These models demonstrate that evolution of strain energy correlations may occur within systems with a fixed elastodynamic interaction range. Supposing that similar mode-switching dynamical behaviour occurs within earthquake faults, then intermediate-term forecasting of large earthquakes may be feasible for some earthquakes but not for others, in alignment with certain empirical seismological observations. Further numerical investigation of dynamical models of this type may lead to advances in earthquake forecasting research and theoretical seismology.
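A toy one-dimensional version of such a cellular automaton, with uniform loading and an r^(-p) strain-redistribution kernel, might look like the following sketch (grid size, loading rate, dissipation and p are illustrative assumptions, not the paper's setup):

```python
import numpy as np

rng = np.random.default_rng(2)

N, p = 64, 2.0                      # number of cells, interaction-range exponent (illustrative)
threshold = 1.0
strain = rng.uniform(0, threshold, size=N)

# Pre-compute the normalised r^(-p) redistribution kernel between all pairs of cells.
idx = np.arange(N)
dist = np.abs(idx[:, None] - idx[None, :]).astype(float)
np.fill_diagonal(dist, np.inf)      # a rupturing cell does not reload itself
kernel = dist ** (-p)
kernel /= kernel.sum(axis=1, keepdims=True)

event_sizes = []
for step in range(20000):
    strain += 1e-3                  # uniform tectonic loading
    size = 0
    while strain.max() >= threshold:            # cascade of ruptures = one "event"
        i = int(strain.argmax())
        released = strain[i]
        strain[i] = 0.0
        strain += 0.9 * released * kernel[i]    # redistribute 90% of the released strain
        size += 1
    if size:
        event_sizes.append(size)

event_sizes = np.array(event_sizes)
print("events: %d, largest cascade: %d cells" % (len(event_sizes), event_sizes.max()))
```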
Abstract:
A recent development of the Markov chain Monte Carlo (MCMC) technique is the emergence of MCMC samplers that allow transitions between different models. Such samplers make possible a range of computational tasks involving models, including model selection, model evaluation, model averaging and hypothesis testing. An example of this type of sampler is the reversible jump MCMC sampler, which is a generalization of the Metropolis-Hastings algorithm. Here, we present a new MCMC sampler of this type. The new sampler is a generalization of the Gibbs sampler, but somewhat surprisingly, it also turns out to encompass as particular cases all of the well-known MCMC samplers, including those of Metropolis, Barker, and Hastings. Moreover, the new sampler generalizes the reversible jump MCMC. It therefore appears to be a very general framework for MCMC sampling. This paper describes the new sampler and illustrates its use in three applications in Computational Biology, specifically determination of consensus sequences, phylogenetic inference and delineation of isochores via multiple change-point analysis.
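The sketch below does not implement the paper's generalized Gibbs sampler, but it illustrates how two of the special cases mentioned, the Metropolis and Barker acceptance rules, differ within one random-walk sampler on a toy target:

```python
import numpy as np

rng = np.random.default_rng(3)

def log_target(x):
    """Unnormalised log density of a two-component normal mixture (illustrative target)."""
    return np.logaddexp(-0.5 * (x + 2.0) ** 2, -0.5 * (x - 2.0) ** 2)

def sample(accept_rule, n=20000, step=1.0):
    x, out = 0.0, []
    for _ in range(n):
        y = x + step * rng.normal()                    # symmetric random-walk proposal
        ratio = np.exp(log_target(y) - log_target(x))  # pi(y) / pi(x)
        if accept_rule == "metropolis":
            a = min(1.0, ratio)                        # Metropolis(-Hastings) acceptance
        else:
            a = ratio / (1.0 + ratio)                  # Barker acceptance
        if rng.uniform() < a:
            x = y
        out.append(x)
    return np.array(out)

for rule in ("metropolis", "barker"):
    s = sample(rule)
    print("%-10s  mean=%+.2f  sd=%.2f" % (rule, s.mean(), s.std()))
```

Both rules satisfy detailed balance for a symmetric proposal, so both chains target the same distribution; they differ only in how often moves are accepted.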
Abstract:
Mixture models implemented via the expectation-maximization (EM) algorithm are being increasingly used in a wide range of problems in pattern recognition such as image segmentation. However, the EM algorithm requires considerable computational time in its application to huge data sets such as a three-dimensional magnetic resonance (MR) image of over 10 million voxels. Recently, it was shown that a sparse, incremental version of the EM algorithm could improve its rate of convergence. In this paper, we show how this modified EM algorithm can be speeded up further by adopting a multiresolution kd-tree structure in performing the E-step. The proposed algorithm outperforms some other variants of the EM algorithm for segmenting MR images of the human brain. (C) 2004 Pattern Recognition Society. Published by Elsevier Ltd. All rights reserved.
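For orientation, a plain EM loop for a two-component Gaussian mixture on synthetic 1-D data is sketched below; the paper's speed-up replaces the per-voxel E-step with computations over kd-tree nodes, which this baseline does not attempt:

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic 1-D intensities from a two-component Gaussian mixture (illustrative).
x = np.concatenate([rng.normal(-2, 1, 3000), rng.normal(3, 0.7, 2000)])

K = 2
w = np.full(K, 1.0 / K)                    # mixing proportions
mu = np.array([x.min(), x.max()])          # crude but well-separated initial means
var = np.full(K, x.var())                  # component variances

def log_gauss(x, mu, var):
    return -0.5 * (np.log(2 * np.pi * var) + (x[:, None] - mu) ** 2 / var)

for it in range(100):
    # E-step: posterior responsibility of each component for each data point.
    # (The kd-tree variant summarises many voxels per tree node instead of
    #  looping over every voxel individually.)
    log_r = np.log(w) + log_gauss(x, mu, var)
    log_r -= np.logaddexp.reduce(log_r, axis=1, keepdims=True)
    r = np.exp(log_r)

    # M-step: update weights, means and variances from the responsibilities.
    nk = r.sum(axis=0)
    w = nk / len(x)
    mu = (r * x[:, None]).sum(axis=0) / nk
    var = (r * (x[:, None] - mu) ** 2).sum(axis=0) / nk

print("weights:", np.round(w, 3), "means:", np.round(mu, 2), "sds:", np.round(np.sqrt(var), 2))
```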
Abstract:
Bistability and switching are two important aspects of the genetic regulatory network of λ phage. Positive and negative feedbacks are key regulatory mechanisms in this network. By the introduction of threshold values, the developmental pathway of λ phage is divided into different stages. If the protein level reaches a threshold value, positive or negative feedback will become effective and regulate the process of development. Using this regulatory mechanism, we present a quantitative model to realize bistability and switching of λ phage based on experimental data. This model gives descriptions of the decisive mechanisms for different pathways in induction. A stochastic model is also introduced for describing statistical properties of switching in induction. A stochastic degradation rate is used to represent intrinsic noise in induction for switching the system from the lysogenic pathway to the lytic pathway. The approach in this paper represents an attempt to describe the regulatory mechanism of a genetic regulatory network under the influence of intrinsic noise in the framework of continuous models. (C) 2003 Elsevier Ltd. All rights reserved.
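A hedged, continuous-model sketch of noise-driven switching in a bistable system: one protein with Hill-type positive feedback, integrated by Euler-Maruyama with additive noise (a simplification of the paper's stochastic degradation rate; all parameter values are illustrative, not fitted to λ phage data):

```python
import numpy as np

rng = np.random.default_rng(5)

# Illustrative bistable positive-feedback model (not the paper's parameter values):
#   dx/dt = b + a*x^2/(K^2 + x^2) - gamma*x  + noise
a, b, K, gamma = 2.0, 0.05, 1.0, 1.0
sigma = 0.2                       # additive noise amplitude (simplified intrinsic noise)
dt, steps = 0.01, 200_000

def drift(x):
    return b + a * x**2 / (K**2 + x**2) - gamma * x

x = 0.07                          # start near the low stable steady state
noise = rng.normal(size=steps) * np.sqrt(dt) * sigma
trace = np.empty(steps)
for t in range(steps):
    x = max(x + drift(x) * dt + noise[t], 0.0)   # Euler-Maruyama step, protein level >= 0
    trace[t] = x

high = trace > 0.9                # ~0.9 is the unstable threshold separating the two states
switches = np.count_nonzero(np.diff(high.astype(int)) != 0)
print("fraction of time in the high state: %.2f" % high.mean())
print("number of threshold crossings:      %d" % switches)
```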
Abstract:
This paper examines the economic significance of return predictability in Australian equities. In light of considerable model uncertainty, formal model-selection criteria are used to choose a specification for the predictive model. A portfolio-switching strategy is implemented according to model predictions. Relative to a buy-and-hold market investment, the returns to the portfolio-switching strategy are impressive under several model-selection criteria, even after accounting for transaction costs. However, as these findings are not robust across other model-selection criteria examined, it is difficult to conclude that the degree of return predictability is economically significant.
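The backtest logic can be sketched as below on simulated monthly data; the predictive model, cost level and return process are illustrative stand-ins rather than the paper's Australian data or its model-selection criteria:

```python
import numpy as np

rng = np.random.default_rng(6)

# Simulated monthly data (purely illustrative, not Australian equity returns).
T = 360
signal = rng.normal(size=T)                                     # lagged predictor, known at decision time
r_market = 0.005 + 0.02 * signal + 0.04 * rng.normal(size=T)    # market return with weak predictability
r_cash, cost = 0.003, 0.002                                     # per-period cash return, one-way switching cost

# Portfolio-switching rule: hold the market whenever the forecast beats cash.
forecast = 0.005 + 0.02 * signal        # stand-in for a model chosen by a selection criterion
in_market = forecast > r_cash

strategy = np.where(in_market, r_market, r_cash)
strategy[1:] -= cost * np.abs(np.diff(in_market.astype(int)))   # charge costs on each switch

def annualised(r):
    return (1 + r).prod() ** (12 / len(r)) - 1

print(f"buy-and-hold:        {annualised(r_market):.2%}")
print(f"switching strategy:  {annualised(strategy):.2%}")
```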
Abstract:
Let (Φ_t)_{t ∈ R+} be a Harris ergodic continuous-time Markov process on a general state space, with invariant probability measure π. We investigate the rates of convergence of the transition function P^t(x, ·) to π; specifically, we find conditions under which r(t) ||P^t(x, ·) − π|| → 0 as t → ∞, for suitable subgeometric rate functions r(t), where || · || denotes the usual total variation norm for a signed measure. We derive sufficient conditions for the convergence to hold, in terms of the existence of suitable points on which the first hitting time moments are bounded. In particular, for stochastically ordered Markov processes, explicit bounds on subgeometric rates of convergence are obtained. These results are illustrated in several examples.
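Written out, the convergence criterion is the following, with a polynomial rate shown as one standard example of a subgeometric rate function (the specific example is ours, not necessarily one treated in the paper):

```latex
r(t)\,\bigl\lVert P^{t}(x,\cdot) - \pi \bigr\rVert_{\mathrm{TV}} \;\longrightarrow\; 0
\quad \text{as } t \to \infty,
\qquad \text{e.g. } r(t) = (1+t)^{\alpha}, \ \alpha > 0 .
```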
Abstract:
We investigate the dynamics of a cobweb model with heterogeneous beliefs, generalizing the example of Brock and Hommes (1997). We examine situations where the agents form expectations using either rational expectations or a type of adaptive expectations with limited memory, defined from the last two prices. We specify conditions that generate cycles. These conditions depend on a set of factors that includes the intensity of switching between beliefs and the adaptation parameter. We show that both the flip bifurcation and the Neimark-Sacker bifurcation can occur as the primary bifurcation when the steady state is unstable.
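A Brock-Hommes-style sketch of such a cobweb, with discrete-choice switching between a costly rational predictor and a limited-memory adaptive predictor based on the last two prices; whether cycles emerge depends on the parameter values, which here are purely illustrative:

```python
import numpy as np

# Cobweb in deviations from the fundamental price, with two predictor types.
B, b = 1.0, 2.5        # demand slope and supply slope (b/B > 2 destabilises the adaptive-only map)
C = 1.0                # information cost paid by rational agents
beta = 4.0             # intensity of choice governing switching between beliefs

x = [0.5, 0.4]         # initial price deviations
n_R = 0.5              # initial fraction of rational agents

for t in range(200):
    xe_N = 0.5 * (x[-1] + x[-2])                    # limited-memory adaptive forecast
    # Market clearing: -B*x_t = b*(n_R*x_t + (1 - n_R)*xe_N), solved for x_t.
    x_t = -b * (1 - n_R) * xe_N / (B + b * n_R)
    # Predictor fitness: rational agents forecast exactly but pay C;
    # adaptive agents incur their squared forecast error.
    U_R, U_N = -C, -(x_t - xe_N) ** 2
    n_R = np.exp(beta * U_R) / (np.exp(beta * U_R) + np.exp(beta * U_N))
    x.append(x_t)

print("last ten price deviations:", np.round(x[-10:], 3))
```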
Abstract:
A stochastic metapopulation model accounting for habitat dynamics is presented. This is the stochastic SIS logistic model with the novel aspect that it incorporates varying carrying capacity. We present results of Kurtz and Barbour, that provide deterministic and diffusion approximations for a wide class of stochastic models, in a form that most easily allows their direct application to population models. These results are used to show that a suitably scaled version of the metapopulation model converges, uniformly in probability over finite time intervals, to a deterministic model previously studied in the ecological literature. Additionally, they allow us to establish a bivariate normal approximation to the quasi-stationary distribution of the process. This allows us to consider the effects of habitat dynamics on metapopulation modelling through a comparison with the stochastic SIS logistic model and provides an effective means for modelling metapopulations inhabiting dynamic landscapes.
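A Gillespie-style simulation of an SIS-logistic metapopulation on a dynamic landscape conveys the setting; the patch counts and rates below are illustrative assumptions, and the paper's deterministic and diffusion approximations are not implemented here:

```python
import numpy as np

rng = np.random.default_rng(7)

# Gillespie simulation: occupied patches n, suitable patches N, habitat turnover.
N_max = 100          # total number of patch sites
c, e = 0.3, 0.1      # colonisation and local-extinction rates
s, d = 0.05, 0.02    # rates at which sites become suitable / unsuitable

n, N = 20, 60        # occupied patches, currently suitable patches
t, t_end = 0.0, 500.0
occupancy = []

while t < t_end:
    rates = np.array([
        c * n * (N - n) / N if N > 0 else 0.0,   # colonisation of an empty suitable patch
        e * n,                                    # local extinction
        s * (N_max - N),                          # a site becomes suitable
        d * N,                                    # a suitable site is destroyed
    ])
    total = rates.sum()
    if total == 0:
        break
    t += rng.exponential(1.0 / total)
    event = rng.choice(4, p=rates / total)
    if event == 0:
        n += 1
    elif event == 1:
        n -= 1
    elif event == 2:
        N += 1
    else:                                         # destroyed site was occupied w.p. n/N
        if rng.uniform() < n / N:
            n -= 1
        N -= 1
    occupancy.append(n)

print("mean occupied patches over the run: %.1f" % np.mean(occupancy))
print("final state: %d occupied of %d suitable patches" % (n, N))
```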
Abstract:
Markov chain Monte Carlo (MCMC) is a methodology that is gaining widespread use in the phylogenetics community and is central to phylogenetic software packages such as MrBayes. An important issue for users of MCMC methods is how to select appropriate values for adjustable parameters such as the length of the Markov chain or chains, the sampling density, the proposal mechanism, and, if Metropolis-coupled MCMC is being used, the number of heated chains and their temperatures. Although some parameter settings have been examined in detail in the literature, others are frequently chosen with more regard to computational time or personal experience with other data sets. Such choices may lead to inadequate sampling of tree space or an inefficient use of computational resources. We performed a detailed study of convergence and mixing for 70 randomly selected, putatively orthologous protein sets with different sizes and taxonomic compositions. Replicated runs from multiple random starting points permit a more rigorous assessment of convergence, and we developed two novel statistics, delta and epsilon, for this purpose. Although likelihood values invariably stabilized quickly, adequate sampling of the posterior distribution of tree topologies took considerably longer. Our results suggest that multimodality is common for data sets with 30 or more taxa and that this results in slow convergence and mixing. However, we also found that the pragmatic approach of combining data from several short, replicated runs into a metachain to estimate bipartition posterior probabilities provided good approximations, and that such estimates were no worse in approximating a reference posterior distribution than those obtained using a single long run of the same length as the metachain. Precision appears to be best when heated Markov chains have low temperatures, whereas chains with high temperatures appear to sample trees with high posterior probabilities only rarely. [Bayesian phylogenetic inference; heating parameter; Markov chain Monte Carlo; replicated chains.]
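A toy proxy for comparing replicated runs: tally how often each bipartition appears in the trees sampled by each run and report the largest between-run spread. This is our own simple check on hypothetical data, not the delta and epsilon statistics defined in the paper:

```python
from collections import Counter

# Each run is a list of sampled trees; each tree is represented by its set of bipartitions.
# The bipartition labels below are made-up placeholders for a four-taxon example.
runs = [
    [{"AB|CD", "ABC|D"}, {"AB|CD", "ABD|C"}, {"AB|CD", "ABC|D"}, {"AC|BD", "ABC|D"}],
    [{"AB|CD", "ABC|D"}, {"AB|CD", "ABC|D"}, {"AC|BD", "ABD|C"}, {"AB|CD", "ABC|D"}],
]

def bipartition_freqs(run):
    """Estimated posterior probability of each bipartition within one run."""
    counts = Counter(bp for tree in run for bp in tree)
    return {bp: counts[bp] / len(run) for bp in counts}

freqs = [bipartition_freqs(run) for run in runs]
all_bps = set().union(*freqs)
spread = {bp: max(f.get(bp, 0.0) for f in freqs) - min(f.get(bp, 0.0) for f in freqs)
          for bp in all_bps}

worst = max(spread, key=spread.get)
print("largest between-run spread: %.2f (bipartition %s)" % (spread[worst], worst))
```

Small spreads across independent runs suggest the replicated chains are sampling tree space consistently; large spreads flag poor mixing or multimodality.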
Abstract:
The University of Queensland, Australia, has developed Fez, a world-leading user interface and management system for Fedora-based institutional repositories, which bridges the gap between a repository and its users. Christiaan Kortekaas, Andrew Bennett and Keith Webster will review this open-source software, which gives institutions the power to create a comprehensive repository solution without the hassle.
Abstract:
We investigate here a modification of the discrete random pore model [Bhatia SK, Vartak BJ, Carbon 1996;34:1383] that includes an additional rate constant to account for the different reactivity of the initial pore surface, which carries attached functional groups and hydrogens, relative to the subsequently exposed surface. It is observed that the relative initial reactivity has a significant effect on the conversion and structural evolution, underscoring the importance of initial surface chemistry. The model is tested against experimental data on chemically controlled char oxidation and steam gasification at various temperatures. It is seen that the variations of the reaction rate and surface area with conversion are better represented by the present approach than by earlier random pore models. The results clearly indicate the improvement of model predictions in the low-conversion region, where the effect of the initially attached functional groups and hydrogens is more significant, particularly for char oxidation. It is also seen that, for the data examined, the initial surface chemistry is less important for steam gasification than for the oxidation reaction. Further development of the approach must also incorporate the dynamics of surface complexation, which is not considered here.
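For orientation, the classic random pore model rate expression is shown below in its lumped form; our reading of the abstract is that the discrete variant adds a second rate constant for the initially exposed, functional-group-covered surface, which this schematic baseline does not include:

```latex
% Classic random pore model conversion rate (lumped rate constant k,
% structural parameter \psi); shown for background only.
\frac{dX}{dt} = k\,(1 - X)\sqrt{1 - \psi \ln(1 - X)}
```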