993 results for Subordinated Markov process
Abstract:
The frequency of persistent atmospheric blocking events in the 40-yr ECMWF Re-Analysis (ERA-40) is compared with the blocking frequency produced by a simple first-order Markov model designed to predict the time evolution of a blocking index [defined by the meridional contrast of potential temperature on the 2-PVU surface (1 PVU ≡ 1 × 10−6 K m2 kg−1 s−1)]. With the observed spatial coherence built into the model, it is able to reproduce the main regions of blocking occurrence and the frequencies of sector blocking very well. This underlines the importance of the climatological background flow in determining the locations of high blocking occurrence as being the regions where the mean midlatitude meridional potential vorticity (PV) gradient is weak. However, when only persistent blocking episodes are considered, the model is unable to simulate the observed frequencies. It is proposed that this persistence beyond that given by a red noise model is due to the self-sustaining nature of the blocking phenomenon.
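The red-noise comparison the abstract makes can be illustrated with a short simulation: an AR(1) ("first-order Markov") index, a threshold defining blocked days, and a run-length count separating persistent episodes. All parameters below are illustrative, not the ERA-40 values.

```python
import numpy as np

# AR(1) sketch of a blocking index: count how often the index exceeds a
# threshold, and how often it stays above it for >= 5 consecutive days.
rng = np.random.default_rng(0)
phi, n_days, threshold, min_persist = 0.8, 50_000, 1.5, 5

x = np.zeros(n_days)
for t in range(1, n_days):
    # Innovation scaled so the stationary variance is 1.
    x[t] = phi * x[t - 1] + rng.standard_normal() * np.sqrt(1 - phi**2)

blocked = x > threshold
frac_blocked = blocked.mean()

# Length of each consecutive run of blocked days: +1 marks a run start,
# -1 the position just past a run end.
edges = np.diff(np.concatenate(([0], blocked.astype(np.int8), [0])))
run_lengths = np.flatnonzero(edges == -1) - np.flatnonzero(edges == 1)
frac_persistent = run_lengths[run_lengths >= min_persist].sum() / n_days
```

The paper's point is that a self-sustaining mechanism makes observed `frac_persistent` larger than this red-noise baseline predicts.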
Abstract:
Tests for business cycle asymmetries are developed for Markov-switching autoregressive models. The tests of deepness, steepness, and sharpness are Wald statistics, which have standard asymptotics. For the standard two-regime model of expansions and contractions, deepness is shown to imply sharpness (and vice versa), whereas the process is always nonsteep. Two- and three-state models of U.S. GNP growth are used to illustrate the approach, along with models of U.S. investment and consumption growth. The robustness of the tests to model misspecification, and the effects of regime-dependent heteroscedasticity, are investigated.
Abstract:
The detection of physiological signals from the motor system (electromyographic signals) is used in clinical practice to guide the therapist toward a more precise and accurate diagnosis of motor disorders. In this context, the decomposition of EMG (electromyographic) signals, which includes the identification and classification of the MUAPs (Motor Unit Action Potentials) in an EMG signal, is very important in helping the therapist evaluate motor disorders. EMG decomposition is a complex task because EMG features depend on the electrode type (needle or surface), its placement relative to the muscle, the contraction level, and the health of the neuromuscular system. To date, most research on EMG decomposition has used EMG signals acquired by needle electrodes, owing to their advantages for processing this type of signal; relatively little research has been conducted on surface EMG signals. This article therefore aims to contribute to clinical practice by presenting a technique that permits the decomposition of surface EMG signals through the use of Hidden Markov Models, supported by differential evolution and spectral clustering techniques. The developed system produced coherent results in: (1) identifying the number of active Motor Units in the EMG signal; (2) presenting the morphological patterns of the MUAPs in the EMG signal; and (3) identifying the firing sequence of the Motor Units. The model proposed in this work is an advance in the research area of surface EMG signal decomposition.
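One standard way to recover a most likely firing sequence from a trained HMM is Viterbi decoding. A minimal sketch with a made-up two-state model (quiescent vs. motor unit firing); the paper's actual system, including the differential evolution and spectral clustering stages, is not reproduced here.

```python
import numpy as np

def viterbi(obs, A, B, pi):
    """Most likely hidden-state sequence of a discrete HMM (log domain)."""
    logA, logB = np.log(A), np.log(B)
    delta = np.log(pi) + logB[:, obs[0]]
    back = []
    for o in obs[1:]:
        scores = delta[:, None] + logA        # scores[i, j]: from state i to j
        back.append(scores.argmax(axis=0))    # best predecessor of each j
        delta = scores.max(axis=0) + logB[:, o]
    states = [int(delta.argmax())]            # backtrack from the best end state
    for bp in reversed(back):
        states.append(int(bp[states[-1]]))
    return states[::-1]

# Illustrative model: state 0 = quiescent, state 1 = motor unit firing;
# symbol 0 = low signal amplitude, symbol 1 = high amplitude.
A = np.array([[0.7, 0.3], [0.3, 0.7]])
B = np.array([[0.95, 0.05], [0.05, 0.95]])
pi = np.array([0.5, 0.5])
firing = viterbi([0, 0, 1, 1, 0], A, B, pi)
```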
Abstract:
We establish a general framework for a class of multidimensional stochastic processes over [0,1] under which with probability one, the signature (the collection of iterated path integrals in the sense of rough paths) is well-defined and determines the sample paths of the process up to reparametrization. In particular, by using the Malliavin calculus we show that our method applies to a class of Gaussian processes including fractional Brownian motion with Hurst parameter H>1/4, the Ornstein–Uhlenbeck process and the Brownian bridge.
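For a piecewise-linear path the first two signature levels can be computed in closed form, building segment by segment with Chen's identity. A small numerical illustration (the paper's results concern rougher paths such as fractional Brownian motion):

```python
import numpy as np

def signature_level_2(path):
    """Levels 1 and 2 of the signature of a piecewise-linear path.

    path: (n_points, d) array. Level 1 is the total increment; level 2
    collects the iterated integrals int dx_i dx_j. For one linear segment
    with increment D, level 2 is outer(D, D)/2; segments are concatenated
    with Chen's identity S2(a*b) = S2(a) + S2(b) + S1(a) (x) S1(b).
    """
    d = path.shape[1]
    s1, s2 = np.zeros(d), np.zeros((d, d))
    for delta in np.diff(path, axis=0):
        s2 += np.outer(s1, delta) + 0.5 * np.outer(delta, delta)
        s1 += delta
    return s1, s2

# The antisymmetric part of level 2 is the Levy area; for a closed planar
# path it equals the signed enclosed area (1 for the unit square, CCW).
square = np.array([[0, 0], [1, 0], [1, 1], [0, 1], [0, 0]], float)
s1, s2 = signature_level_2(square)
area = 0.5 * (s2[0, 1] - s2[1, 0])
```

The closed square also illustrates why level 1 alone cannot determine the path: its total increment vanishes, while level 2 does not.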
Abstract:
In the surroundings of the Caldas and El Retiro cities (Colombia), metamorphic rocks derived from basic and pelitic protoliths comprise the Caldas amphibole schist and the Ancon schist, respectively. Subordinate metamorphosed granite bodies (La Miel gneiss) are associated with these units, and the El Retiro amphibolites, migmatites, and granulites crop out eastward of these units, separated by shear zones. The Caldas amphibole schist and Ancon schist protoliths could have formed in a distal, reduced marine environment and been amalgamated to the South American continent in an apparent Triassic subduction event. The El Retiro rocks are akin to a continental basement and possibly include impure metasediments of a continental margin, whose metamorphism originated granulite-facies rocks and migmatites as a result of the anatexis of quartz-feldspathic rocks. The metamorphism was accompanied by intense deformation, which juxtaposed the migmatite and granulite blocks. Afterward, heat and fluid circulation associated with the emplacement of minor igneous intrusions resulted in intense fluid-rock interaction, variations in mineral grain size and, especially, intense retrograde metamorphic re-equilibration. Thermobarometric estimates for the Caldas amphibole schist indicate metamorphism in the Barrovian amphibolite facies. The metamorphic path is counter-clockwise, but the retrograde evolution could not be precisely defined. The metamorphic pressures in these rocks range from 6.3 to 13.5 kbar, with temperatures in the narrow range of 550 to 630 degrees C. For the Ancon schist metapelites the P-T path is also counter-clockwise, with a temperature increase evidenced by the occurrence of sillimanite and cooling by later kyanite. The progressive metamorphic event occurred at pressures of 7.6-7.2 kbar and temperatures of 645-635 degrees C for one sample, and at temperatures between 500 and 600 degrees C under a constant pressure of 6 kbar.
The temperature estimated for these rocks varies between 400 and 555 degrees C at pressures of 5-6 kbar along the retrograde metamorphic path. The El Retiro rocks show strong decompression with little variation in temperature, with pressure values between 8.7 and 2.7 kbar at temperatures of 740-633 degrees C. These metamorphic fragments of the basement in the Central Cordillera of the Colombian Andes could represent a close relationship with an ancient subduction zone. (C) 2011 Elsevier Ltd. All rights reserved.
Abstract:
Before signing electronic contracts, a rational agent should estimate the expected utilities of these contracts and calculate the violation risks related to them. In order to perform such pre-signing procedures, this agent has to be capable of computing a policy taking into account the norms and sanctions in the contracts. In relation to this, the contribution of this work is threefold. First, we present the Normative Markov Decision Process, an extension of the Markov Decision Process for explicitly representing norms. In order to illustrate the usage of our framework, we model an example in a simulated aerospace aftermarket. Second, we specify an algorithm for identifying the states of the process which characterize the violation of norms. Finally, we show how to compute policies with our framework and how to calculate the risk of violating the norms in the contracts by adopting a particular policy.
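The policy computation and violation-risk calculation can be sketched with plain value iteration on a toy three-state model where one state is flagged as a norm violation. All numbers are illustrative; the paper's aerospace-aftermarket example is richer.

```python
import numpy as np

# Toy "normative MDP": state 0 = contract in progress, state 1 = contract
# fulfilled (absorbing), state 2 = norm violated (absorbing, sanctioned).
gamma = 0.9
P = np.array([
    [[0.2, 0.7, 0.1],   # action 0 (cautious), rows indexed by current state
     [0.0, 1.0, 0.0],
     [0.0, 0.0, 1.0]],
    [[0.1, 0.5, 0.4],   # action 1 (risky): violates the norm more often
     [0.0, 1.0, 0.0],
     [0.0, 0.0, 1.0]],
])
R = np.array([0.0, 5.0, -10.0])   # the sanction enters as a negative reward

# Value iteration yields an optimal policy for the norm-aware reward.
V = np.zeros(3)
for _ in range(1000):
    Q = R + gamma * (P @ V)       # Q[a, s] = R[s] + gamma * sum_s' P[a,s,s'] V[s']
    V = Q.max(axis=0)
policy = Q.argmax(axis=0)

# Violation risk of the chosen policy: probability of absorption into the
# violating state 2, starting from state 0.
Pp = P[policy, np.arange(3)]      # transition matrix under the policy
risk = Pp[0, 2] / (1 - Pp[0, 0])  # solve h = Pp00*h + Pp02 for the hit prob.
```

With these numbers the sanction makes the cautious action optimal in state 0, and the residual violation risk under that policy is 0.1/0.8 = 0.125.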
Abstract:
The general assumption under which the X̄ chart is designed is that the process mean has a constant in-control value. However, there are situations in which the process mean wanders. When it wanders according to a first-order autoregressive (AR(1)) model, a complex approach involving Markov chains and integral equation methods is used to evaluate the properties of the X̄ chart. In this paper, we propose the use of a pure Markov chain approach to study the performance of the X̄ chart. The performance of the X̄ chart with variable parameters and the X̄ chart with double sampling are compared. (C) 2011 Elsevier B.V. All rights reserved.
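The Markov chain evaluation of run-length properties can be sketched generically: the chart's zones define the transient states of an absorbing chain, and the average run length (ARL) follows from the fundamental matrix. A minimal illustration with a single zone (not the AR(1) wandering-mean model of the paper):

```python
import numpy as np

def arl_from_markov_chain(Q, start):
    """Average run length via an absorbing Markov chain.

    Q: (m, m) transition matrix among the transient (no-signal) states;
       1 - Q.sum(axis=1) is the per-sample probability of a signal.
    start: (m,) distribution over the state at the first sample.
    """
    m = Q.shape[0]
    # Expected steps to absorption from each transient state: (I - Q)^{-1} 1.
    expected_steps = np.linalg.solve(np.eye(m) - Q, np.ones(m))
    return start @ expected_steps

# Single-zone chart: a signal occurs with probability p at each sample, so
# the chain has one transient state and the ARL reduces to 1/p.
p = 0.0027  # roughly P(|Z| > 3) for an in-control standard normal mean
arl = arl_from_markov_chain(np.array([[1 - p]]), np.array([1.0]))
```

Variable-parameter and double-sampling charts enlarge `Q` with one state per zone/parameter combination, but the ARL formula is unchanged.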
Abstract:
In this work we study Hidden Markov Models with finite as well as general state space. In the finite case, the forward and backward algorithms are considered and the probability of a given observed sequence is computed. Next, we use the EM algorithm to estimate the model parameters. In the general case, kernel estimators are used to build a sequence of estimators that converges in L1-norm to the density function of the observable process.
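A minimal sketch of the forward recursion for the finite-state case, computing the probability of an observed sequence (the matrices below are illustrative, not estimates from the work):

```python
import numpy as np

def forward(obs, A, B, pi):
    """P(observed sequence) for a discrete HMM via the forward algorithm.

    A[i, j] = P(state j | state i), B[i, k] = P(symbol k | state i),
    pi[i] = P(initial state i), obs = list of observed symbol indices.
    """
    alpha = pi * B[:, obs[0]]           # alpha_1(i) = pi_i b_i(o_1)
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]   # alpha_t(j) = (sum_i alpha_{t-1}(i) a_ij) b_j(o_t)
    return float(alpha.sum())           # P(O | model)

# Two-state, two-symbol example.
A = np.array([[0.7, 0.3], [0.4, 0.6]])
B = np.array([[0.9, 0.1], [0.2, 0.8]])
pi = np.array([0.5, 0.5])
p = forward([0, 1, 0], A, B, pi)
```

The backward algorithm is the mirror-image recursion from the end of the sequence; together they give the posteriors needed by the EM (Baum-Welch) step.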
Abstract:
When the X̄ chart is in use, samples are regularly taken from the process, and their means are plotted on the chart. In some cases, it is too expensive to obtain the X values, but not the values of a correlated variable Y. This paper presents a model for the economic design of a two-stage control chart, that is, a control chart based on both performance (X) and surrogate (Y) variables. The process is monitored by the surrogate variable until it signals an out-of-control behavior, and then a switch is made to the X̄ chart. The X̄ chart is built with central, warning, and action regions. If an X sample mean falls in the central region, the process surveillance returns to the Ȳ chart. Otherwise, the process remains under the X̄ chart's surveillance until an X̄ sample mean falls outside the control limits. The search for an assignable cause is undertaken when the performance variable signals an out-of-control behavior. In this way, the two variables are used in an alternating fashion. The assumption of an exponential distribution to describe the length of time the process remains in control allows the application of the Markov chain approach for developing the cost function. A study is performed to examine the economic advantages of using performance and surrogate variables. (C) 2003 Elsevier B.V. All rights reserved.
Abstract:
This paper presents an economic design of X̄ control charts with variable sample sizes, variable sampling intervals, and variable control limits. The sample size n, the sampling interval h, and the control limit coefficient k vary between minimum and maximum values, tightening or relaxing the control. The control is relaxed when an X̄ value falls close to the target and is tightened when an X̄ value falls far from the target. A cost model is constructed that involves the cost of false alarms, the cost of finding and eliminating the assignable cause, the cost associated with production in an out-of-control state, and the cost of sampling and testing. The assumption of an exponential distribution to describe the length of time the process remains in control allows the application of the Markov chain approach for developing the cost function. A comprehensive study is performed to examine the economic advantages of varying the X̄ chart parameters.
Abstract:
Purpose - The aim of this paper is to present a synthetic chart based on the non-central chi-square statistic that is operationally simpler and more effective than the joint X̄ and R chart in detecting assignable cause(s). This chart also assists in identifying which parameter (mean or variance) changed due to the occurrence of the assignable causes. Design/methodology/approach - The approach used is based on the non-central chi-square statistic, and the steady-state average run length (ARL) of the developed chart is evaluated using a Markov chain model. Findings - The proposed chart always detects process disturbances faster than the joint X̄ and R charts. Originality/value - The most important advantage of using the proposed chart is that practitioners can monitor the process by looking at only one chart instead of looking at two charts separately. © Emerald Group Publishing Limited.