968 results for stochastic process
Abstract:
Department of Statistics, Cochin University of Science and Technology
Abstract:
We consider a class of sampling-based decomposition methods to solve risk-averse multistage stochastic convex programs. We prove a formula for the computation of the cuts necessary to build the outer linearizations of the recourse functions. This formula can be used to obtain an efficient implementation of Stochastic Dual Dynamic Programming applied to convex nonlinear problems. We prove the almost sure convergence of these decomposition methods when the relatively complete recourse assumption holds. We also prove the almost sure convergence of these algorithms when applied to risk-averse multistage stochastic linear programs that do not satisfy the relatively complete recourse assumption. The analysis is first done assuming the underlying stochastic process is interstage independent and discrete, with a finite set of possible realizations at each stage. We then indicate two ways of extending the methods and convergence analysis to the case when the process is interstage dependent.
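As a rough illustration of the cut computation the abstract refers to, the sketch below builds an expected-value Benders-style cut for a toy two-stage stochastic LP. The data, the `cut_at` helper, and the equiprobable scenarios are invented for illustration; the paper's risk-averse, multistage, nonlinear setting is considerably more general.

```python
# A minimal sketch of an outer-linearization (Benders-style) cut for a toy
# two-stage stochastic LP; all problem data here are assumptions.
import numpy as np
from scipy.optimize import linprog

# Second stage: Q(x, xi) = min_y q.y  s.t.  W y = h(xi) - T x,  y >= 0
q = np.array([1.0, 1.5])
W = np.array([[1.0, 1.0]])
T = np.array([[0.5]])
scenarios = [np.array([2.0]), np.array([3.0])]   # equiprobable h(xi)

def cut_at(x):
    """Return intercept and slope of an expected-value cut at x."""
    vals, grads = [], []
    for h in scenarios:
        res = linprog(q, A_eq=W, b_eq=h - T @ x,
                      bounds=[(0, None)] * len(q), method="highs")
        pi = res.eqlin.marginals          # duals: sensitivity to the rhs
        vals.append(res.fun)
        grads.append(-T.T @ pi)           # chain rule through rhs = h - T x
    g = np.mean(grads, axis=0)
    a = np.mean(vals) - g @ x             # Q(x') >= a + g.x' for all x'
    return a, g

a, g = cut_at(np.array([1.0]))
print(f"cut: theta >= {a:.3f} + {g[0]:.3f} * x")
```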
Abstract:
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)
Abstract:
This dissertation consists of three self-contained papers related to two main topics: the first and third studies focus on labor market modeling, whereas the second essay presents a dynamic international trade setup.

In Chapter "Expenses on Labor Market Reforms during Transitional Dynamics", we investigate the costs a potential labor market reform imposes from a government point of view. To analyze the various effects of changes to the unemployment benefits system, this chapter develops a dynamic model with heterogeneous employed and unemployed workers.

In Chapter "Endogenous Markup Distributions", we study how markup distributions adjust when a closed economy opens up. To perform this analysis, we first present a closed-economy general-equilibrium industry dynamics model, in which firms enter and exit markets, and then extend the analysis to the open-economy case.

In Chapter "Unemployment in the OECD - Pure Chance or Institutions?", we examine the effects of aggregate shocks on the distribution of unemployment rates across OECD member countries.

All three chapters model systems that behave randomly and operate on stochastic processes; we therefore rely on stochastic calculus, which establishes clear methodological links between the chapters.
Abstract:
A general model for the illness-death stochastic process with covariates has been developed for the analysis of survival data. This model incorporates important baseline and time-dependent covariates to adjust the transition and survival probabilities appropriately. The follow-up period is subdivided into small intervals and a constant hazard is assumed for each interval. An approximation formula is derived to estimate the transition parameters when the exact transition time is unknown. The method is illustrated with data from the Beta-Blocker Heart Attack Trial (BHAT), a study on the prevention of the recurrence of myocardial infarction and subsequent mortality. This method provides an analytical approach which simultaneously includes provision for both fatal and nonfatal events in the model. According to this analysis, the effectiveness of treatment can be compared between the Placebo and Propranolol groups with respect to fatal and nonfatal events.
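To make the piecewise-constant-hazard construction concrete, here is a minimal sketch of one interval's transition probabilities in a three-state illness-death model; the hazard values are purely illustrative, not estimates from the BHAT data.

```python
# Transition probabilities over one interval of an illness-death model with
# constant hazards on that interval; hazard values are assumptions.
import numpy as np
from scipy.linalg import expm

# States: 0 = healthy, 1 = ill (e.g. post-MI), 2 = dead (absorbing).
lam01, lam02, lam12 = 0.10, 0.02, 0.08      # constant hazards on the interval

Q = np.array([[-(lam01 + lam02), lam01,  lam02],
              [0.0,             -lam12,  lam12],
              [0.0,              0.0,    0.0 ]])

t = 1.0                                      # interval length (e.g. one year)
P = expm(Q * t)                              # P[i, j] = P(in j at t | in i at 0)
print(np.round(P, 4))
# Rows sum to 1; survival from 'healthy' over the interval is P[0,0] + P[0,1].
```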
Abstract:
In this work, a unifying mathematical framework for designing new fault detection schemes in nonlinear stochastic continuous-time dynamical systems is developed. These schemes are based on a stochastic process, called the residual, which reflects the system behavior and whose changes are to be detected. A quickest detection scheme for the residual is proposed, based on the computed likelihood ratios for time-varying statistical changes in the Ornstein–Uhlenbeck process. Several expressions are provided, depending on a priori knowledge of the fault, which can be employed in a proposed CUSUM-type approximated scheme. This general setting gathers different existing fault detection schemes within a unifying framework, and allows for the definition of new ones. A comparative simulation example illustrates the behavior of the proposed schemes.
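As a simplified illustration of the CUSUM-type idea described above, the sketch below runs a CUSUM detector on a simulated Ornstein–Uhlenbeck residual with a mean-shift fault. The shift size `delta`, the threshold `h`, and the independence approximation in the likelihood ratio are all assumptions; the paper's ratios for time-varying changes are more general.

```python
# CUSUM detection of a mean shift in a simulated Ornstein-Uhlenbeck residual.
import numpy as np

rng = np.random.default_rng(0)
theta, sigma, dt, n = 1.0, 0.5, 0.01, 4000
change = 2500                       # fault onset (sample index), illustrative
delta = 0.6                         # assumed post-fault mean of the residual

x = np.zeros(n)
for k in range(1, n):               # Euler-Maruyama OU simulation
    drift = delta if k >= change else 0.0
    x[k] = x[k-1] + theta * (drift - x[k-1]) * dt \
           + sigma * np.sqrt(dt) * rng.standard_normal()

# Per-sample log-likelihood ratio for mean 0 -> delta, using the stationary
# variance (treats samples as independent: a deliberate simplification).
var = sigma**2 / (2 * theta)
llr = (delta / var) * (x - delta / 2)
S, alarm, h = 0.0, None, 50.0       # h: detection threshold, tuned offline
for k, inc in enumerate(llr):
    S = max(0.0, S + inc)
    if S > h:
        alarm = k
        break
print(f"fault at sample {change}, alarm at sample {alarm}")
```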
Abstract:
This thesis is concerned with approximate inference in dynamical systems, from a variational Bayesian perspective. When modelling real-world dynamical systems, stochastic differential equations appear as a natural choice, mainly because of their ability to model the noise of the system by adding a variant of some stochastic process to the deterministic dynamics. Hence, inference in such processes has drawn much attention. Here two new extended frameworks are derived and presented that are based on basis function expansions and local polynomial approximations of a recently proposed variational Bayesian algorithm. It is shown that the new extensions converge to the original variational algorithm and can be used for state estimation (smoothing). However, the main focus is on estimating the (hyper-)parameters of these systems (i.e. drift parameters and diffusion coefficients). The new methods are numerically validated on a range of systems which vary in dimensionality and non-linearity: the Ornstein-Uhlenbeck process, for which the exact likelihood can be computed analytically; the univariate, highly non-linear stochastic double well; and the multivariate chaotic stochastic Lorenz '63 (3-dimensional) model. The algorithms are also applied to the 40-dimensional stochastic Lorenz '96 system. These new approaches are compared with a variety of other well-known methods, such as the ensemble Kalman filter/smoother, a hybrid Monte Carlo sampler, the dual unscented Kalman filter (for jointly estimating the system states and model parameters) and full weak-constraint 4D-Var. An empirical analysis of their asymptotic behaviour, as the observation density or the length of the time window increases, is also provided.
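The Ornstein-Uhlenbeck benchmark is attractive precisely because its likelihood is available in closed form. The sketch below shows that exact discrete-time likelihood and a maximum-likelihood fit on simulated data; all parameter values are illustrative, and this is the analytic baseline the abstract mentions, not the variational algorithm itself.

```python
# Exact discrete-time likelihood of an Ornstein-Uhlenbeck process and a
# maximum-likelihood fit; true parameters and data are simulated.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
theta_true, sigma_true, dt, n = 2.0, 1.0, 0.05, 2000

# Exact OU transitions: x_{k+1} | x_k ~ N(a x_k, v), a = exp(-theta dt),
# v = sigma^2 (1 - a^2) / (2 theta)
a = np.exp(-theta_true * dt)
v = sigma_true**2 * (1 - a**2) / (2 * theta_true)
x = np.zeros(n)
for k in range(1, n):
    x[k] = a * x[k-1] + np.sqrt(v) * rng.standard_normal()

def neg_log_lik(params):
    theta, sigma = np.exp(params)            # optimize on the log scale
    a = np.exp(-theta * dt)
    v = sigma**2 * (1 - a**2) / (2 * theta)
    r = x[1:] - a * x[:-1]                   # one-step prediction residuals
    return 0.5 * np.sum(r**2 / v + np.log(2 * np.pi * v))

fit = minimize(neg_log_lik, x0=np.log([1.0, 0.5]), method="Nelder-Mead")
print("theta, sigma estimates:", np.round(np.exp(fit.x), 3))
```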
Abstract:
This work is concerned with approximate inference in dynamical systems, from a variational Bayesian perspective. When modelling real-world dynamical systems, stochastic differential equations appear as a natural choice, mainly because of their ability to model the noise of the system by adding a variant of some stochastic process to the deterministic dynamics. Hence, inference in such processes has drawn much attention. Here a new extended framework is derived that is based on a local polynomial approximation of a recently proposed variational Bayesian algorithm. The paper begins by showing that the new extension of this variational algorithm can be used for state estimation (smoothing) and converges to the original algorithm. However, the main focus is on estimating the (hyper-)parameters of these systems (i.e. drift parameters and diffusion coefficients). The new approach is validated on a range of systems which vary in dimensionality and non-linearity: the Ornstein–Uhlenbeck process, the exact likelihood of which can be computed analytically; the univariate, highly non-linear stochastic double well; and the multivariate chaotic stochastic Lorenz ’63 (3D) model. As a special case, the algorithm is also applied to the 40-dimensional stochastic Lorenz ’96 system. We compare this new approach with a variety of other well-known methods, such as hybrid Monte Carlo, the dual unscented Kalman filter and the full weak-constraint 4D-Var algorithm, and analyse empirically their asymptotic behaviour as the observation density or the length of the time window increases. In particular, we show that the new methods can estimate parameters in both the drift (deterministic) and the diffusion (stochastic) parts of the model evolution equations.
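The stochastic double well mentioned in both of these abstracts is a standard hard test case. A minimal Euler-Maruyama simulation, with the commonly used drift 4x(1 - x^2) and an assumed noise level rather than parameters from the paper, shows the well-hopping behaviour that makes inference difficult.

```python
# Euler-Maruyama simulation of a stochastic double-well system; drift and
# noise level are a common textbook choice, assumed here for illustration.
import numpy as np

rng = np.random.default_rng(3)
dt, n, sigma = 0.01, 20000, 0.8
x = np.zeros(n)
for k in range(1, n):
    drift = 4.0 * x[k-1] * (1.0 - x[k-1]**2)        # two wells at x = +/-1
    x[k] = x[k-1] + drift * dt + sigma * np.sqrt(dt) * rng.standard_normal()

# The path hops between the wells; these rare transitions are what make
# parameter inference for such systems challenging.
print("fraction of time in the right-hand well:", (x > 0).mean())
```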
Abstract:
2000 Mathematics Subject Classification: Primary: 62M10, 62J02, 62F12, 62M05, 62P05, 62P10; secondary: 60G46, 60F15.
Abstract:
2000 Mathematics Subject Classification: 49L60, 60J60, 93E20.
Abstract:
2010 Mathematics Subject Classification: 62F12, 62M05, 62M09, 62M10, 60G42.
Abstract:
With increasingly complex engineering assets and tight economic requirements, asset reliability becomes more crucial in Engineering Asset Management (EAM). Improving the reliability of systems has always been a major aim of EAM. Reliability assessment using degradation data has become a significant approach to evaluating the reliability and safety of critical systems. Degradation data often provide more information than failure-time data for assessing reliability and predicting the remnant life of systems. In general, degradation is the reduction in performance, reliability, and life span of assets. Many failure mechanisms can be traced to an underlying degradation process. Degradation is a stochastic phenomenon and can therefore be modelled in several ways. Degradation modelling techniques have generated a great amount of research in the reliability field, yet there are few review papers on the subject. This paper presents a review of the existing literature on commonly used degradation models in reliability analysis. The current research and developments in degradation models are reviewed and summarised, and the models are synthesised and classified into groups. Additionally, the paper attempts to identify the merits, limitations, and applications of each model, and it points out potential applications of these degradation models in asset health and reliability prediction.
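As one concrete instance of the stochastic-process view of degradation, the sketch below uses a Wiener-process degradation model, whose first passage to a failure threshold is inverse-Gaussian distributed. The drift, diffusion, and threshold values are illustrative, and this is only one of the model families such reviews cover.

```python
# Wiener-process degradation model: X(t) = mu*t + sigma*B(t); the first
# hitting time of threshold D is inverse Gaussian with mean D/mu and shape
# (D/sigma)^2. Parameter values are assumptions for illustration.
import numpy as np
from scipy.stats import invgauss

mu, sigma, D = 0.5, 0.2, 10.0           # drift, diffusion, failure threshold
m, lam = D / mu, (D / sigma)**2         # mean and shape of the hitting time
T = invgauss(mu=m / lam, scale=lam)     # scipy's parameterization of IG(m, lam)

t = np.array([10.0, 20.0, 30.0])
print("reliability R(t) = P(no failure by t):", np.round(T.sf(t), 4))
print("mean time to failure:", T.mean())   # = D/mu = 20.0
```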
Abstract:
This paper reports results from a study in which we automatically classified the query reformulation patterns for 964,780 Web searching sessions (composed of 1,523,072 queries) in order to predict what the next query reformulation would be. We employed an n-gram modeling approach to describe the probability of searchers transitioning from one query reformulation state to another and to predict their next state. We developed first-, second-, third-, and fourth-order models and evaluated each model for accuracy of prediction. Findings show that Reformulation and Assistance account for approximately 45 percent of all query reformulations. Searchers seem to seek system search assistance early in the session or after a content change. Our evaluations show that the first- and second-order models provided the best predictability, between 28 and 40 percent overall, and higher than 70 percent for some patterns. The implication is that the n-gram approach can be used to improve searching systems and searching assistance in real time.
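A first-order (bigram) version of such a transition model is easy to sketch: estimate P(next state | current state) from session sequences and predict the modal next state. The state labels and toy sessions below are invented for illustration and do not reproduce the study's taxonomy.

```python
# First-order (bigram) Markov model of query reformulation states; the
# sessions and state names below are toy data, not the study's corpus.
from collections import Counter, defaultdict

sessions = [
    ["New", "Reformulation", "Assistance", "Reformulation"],
    ["New", "Specialization", "Reformulation", "Assistance"],
    ["New", "Reformulation", "Reformulation", "Content Change"],
]

counts = defaultdict(Counter)
for s in sessions:
    for prev, nxt in zip(s, s[1:]):      # count observed state transitions
        counts[prev][nxt] += 1

def predict_next(state):
    """Most probable next reformulation state, with its probability."""
    c = counts[state]
    total = sum(c.values())
    nxt, n = c.most_common(1)[0]
    return nxt, n / total

print(predict_next("Reformulation"))     # e.g. ('Assistance', 0.5)
```

Higher-order models condition on the last two, three, or four states instead of one, at the cost of sparser counts; the study's finding that first- and second-order models predict best reflects that trade-off.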
Abstract:
We estimate the parameters of a stochastic process model for a macroparasite population within a host using approximate Bayesian computation (ABC). The immunity of the host is an unobserved model variable, and only mature macroparasites at sacrifice of the host are counted. Even with very limited data, process rates are inferred reasonably precisely. Modeling involves a three-variable Markov process for which the observed-data likelihood is computationally intractable; ABC methods are particularly useful when the likelihood is analytically or computationally intractable. The ABC algorithm we present is based on sequential Monte Carlo, is adaptive in nature, and overcomes some drawbacks of previous approaches to ABC. The algorithm is validated on a test example involving simulated data from an autologistic model before being used to infer parameters of the Markov process model for experimental data. The fitted model explains the observed extra-binomial variation in terms of a zero-one immunity variable, which has a short-lived presence in the host.
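A much-simplified rejection-ABC sketch conveys the basic idea of simulating from the model and keeping parameter draws whose summaries fall close to the data. The paper's adaptive sequential Monte Carlo ABC and three-variable Markov process are substantially more involved; everything below, including the toy Poisson forward model, is illustrative.

```python
# Rejection ABC on a toy count model: draw parameters from the prior,
# simulate data, and keep draws whose summary statistic is near the data's.
import numpy as np

rng = np.random.default_rng(2)

def simulate(rate, n_hosts=20):
    """Toy forward model: mature parasite counts per host."""
    return rng.poisson(rate, size=n_hosts)

observed = simulate(rate=5.0)               # pretend these are the data
s_obs = observed.mean()

accepted = []
for _ in range(20000):
    rate = rng.uniform(0.0, 20.0)           # prior on the process rate
    if abs(simulate(rate).mean() - s_obs) < 0.25:   # tolerance epsilon
        accepted.append(rate)

post = np.array(accepted)
print(f"posterior mean {post.mean():.2f}, 95% interval "
      f"({np.quantile(post, 0.025):.2f}, {np.quantile(post, 0.975):.2f})")
```

Sequential Monte Carlo ABC, as in the paper, replaces this single tolerance with a decreasing schedule and reweights a particle population at each step, which is far more efficient when the acceptance rate at the final tolerance is tiny.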