969 results for Ornstein-Uhlenbeck equation


Relevance:

80.00%

Publisher:

Abstract:

This paper develops analytical distributions of the temperature indices on which temperature derivatives are written. If the deviations of daily temperatures from their expected values are modelled as an Ornstein-Uhlenbeck process with time-varying variance, then the distribution of the temperature index on which the derivative is written is a sum of truncated, correlated Gaussian deviates. The key result of this paper is an analytical approximation to the distribution of this sum, which allows payoffs to be computed accurately without any simulation. A data set of average daily temperatures spanning more than a hundred years for four Australian cities is used to demonstrate the efficacy of this approach for estimating the payoffs to temperature derivatives. It is demonstrated that expected payoffs computed directly from historical records are a particularly poor approach to the problem when there are trends in the underlying average daily temperature, and that the proposed analytical approach is superior to historical pricing.
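To make the modelling setup concrete, here is a minimal Monte Carlo sketch, not the paper's analytical approximation (whose point is precisely to avoid simulation): daily deviations follow an Ornstein-Uhlenbeck process with a seasonal, time-varying variance, and the payoff of a call on a heating-degree-day index is estimated by simulation. All parameter values (the mean-reversion speed, the volatility profile, the 18 °C base and the strike) are illustrative assumptions.

```python
# Minimal sketch: OU temperature deviations with time-varying variance,
# Monte Carlo estimate of a degree-day option payoff. Illustrative parameters.
import numpy as np

rng = np.random.default_rng(0)
n_days, n_paths = 90, 20_000
kappa = 0.3                                            # mean-reversion speed of the deviations
t = np.arange(n_days)
sigma_t = 2.0 + 0.5 * np.cos(2 * np.pi * t / 365.0)    # seasonal (time-varying) volatility
expected_temp = 18.0 + 6.0 * np.sin(2 * np.pi * t / 365.0)

dev = np.zeros((n_paths, n_days))
for i in range(1, n_days):
    # Euler step for dX = -kappa * X dt + sigma(t) dW with dt = 1 day
    dev[:, i] = dev[:, i - 1] - kappa * dev[:, i - 1] + sigma_t[i] * rng.standard_normal(n_paths)

temp = expected_temp + dev
hdd = np.maximum(18.0 - temp, 0.0).sum(axis=1)         # heating-degree-day index per path
strike, tick = hdd.mean(), 1.0
payoff = tick * np.maximum(hdd - strike, 0.0)          # call written on the index
print(f"Monte Carlo expected payoff: {payoff.mean():.2f}")
```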

Relevance:

80.00%

Publisher:

Abstract:

We compare two popular methods for estimating the power spectrum from short data windows, namely the adaptive multivariate autoregressive (AMVAR) method and the multitaper method. By analyzing a simulated signal (embedded in a background Ornstein-Uhlenbeck noise process) we demonstrate that the AMVAR method performs better at detecting short bursts of oscillations compared to the multitaper method. However, both methods are immune to jitter in the temporal location of the signal. We also show that coherence can still be detected in noisy bivariate time series data by the AMVAR method even if the individual power spectra fail to show any peaks. Finally, using data from two monkeys performing a visuomotor pattern discrimination task, we demonstrate that the AMVAR method is better able to determine the termination of the beta oscillations when compared to the multitaper method.
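As a rough illustration of the simulation setup, the sketch below generates a short oscillatory burst embedded in Ornstein-Uhlenbeck background noise and estimates its spectrum with a basic DPSS multitaper average; it does not implement the AMVAR method, and all parameter values are illustrative assumptions.

```python
# Sketch of the test signal: a 200 ms oscillatory burst in OU background noise,
# analysed with a simple multitaper (DPSS) spectral estimate.
import numpy as np
from scipy.signal.windows import dpss

rng = np.random.default_rng(1)
fs, n = 200.0, 400                       # sampling rate (Hz), number of samples (2 s)
dt = 1.0 / fs
theta, sigma = 20.0, 1.0                 # OU relaxation rate and noise scale

x = np.zeros(n)
for i in range(1, n):                    # Euler step for the OU background noise
    x[i] = x[i - 1] - theta * x[i - 1] * dt + sigma * np.sqrt(dt) * rng.standard_normal()

t = np.arange(n) * dt
burst = (t > 0.9) & (t < 1.1)            # 200 ms burst of a 40 Hz oscillation
x[burst] += 1.5 * np.sin(2 * np.pi * 40.0 * t[burst])

tapers = dpss(n, NW=3, Kmax=5)           # Slepian tapers, shape (5, n)
psd = np.mean(np.abs(np.fft.rfft(tapers * x, axis=1)) ** 2, axis=0)
freqs = np.fft.rfftfreq(n, d=dt)
mask = freqs > 10.0
print("estimated peak frequency above 10 Hz:", freqs[mask][np.argmax(psd[mask])], "Hz")
```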

Relevance:

80.00%

Publisher:

Abstract:

The low-thrust guidance problem is defined as the minimum terminal variance (MTV) control of a space vehicle subjected to random perturbations of its trajectory. To accomplish this control task, only bounded thrust level and thrust angle deviations are allowed, and these must be calculated based solely on the information gained from noisy, partial observations of the state. In order to establish the validity of various approximations, the problem is first investigated under the idealized conditions of perfect state information and negligible dynamic errors. To check each approximate model, an algorithm is developed to facilitate the computation of the open loop trajectories for the nonlinear bang-bang system. Using the results of this phase in conjunction with the Ornstein-Uhlenbeck process as a model for the random inputs to the system, the MTV guidance problem is reformulated as a stochastic, bang-bang, optimal control problem. Since a complete analytic solution seems to be unattainable, asymptotic solutions are developed by numerical methods. However, it is shown analytically that a Kalman filter in cascade with an appropriate nonlinear MTV controller is an optimal configuration. The resulting system is simulated using the Monte Carlo technique and is compared to other guidance schemes of current interest.
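The cascade structure can be illustrated with a toy one-dimensional example, not the MTV guidance law itself: an Ornstein-Uhlenbeck disturbance perturbs a scalar trajectory error, a Kalman filter estimates the error from noisy observations, and a bang-bang law with bounded control acts on the estimate. All dynamics and noise parameters are assumptions made for illustration.

```python
# Toy sketch: Kalman filter in cascade with a bang-bang (bounded) controller,
# with an OU process as the random input to the system. Illustrative parameters.
import numpy as np

rng = np.random.default_rng(2)
dt, n_steps, u_max = 0.1, 300, 0.5
theta, sigma_w = 0.5, 0.4                 # OU disturbance parameters
r = 0.2 ** 2                              # measurement noise variance

x, w = 0.0, 0.0                           # true trajectory error and true disturbance
x_hat, p = 0.0, 1.0                       # Kalman state estimate and its variance
q = (sigma_w * dt) ** 2                   # crude process-noise variance used by the filter

for _ in range(n_steps):
    u = -u_max * np.sign(x_hat)                        # bang-bang control from the estimate
    w += -theta * w * dt + sigma_w * np.sqrt(dt) * rng.standard_normal()
    x += (u + w) * dt                                  # true dynamics with OU disturbance
    y = x + np.sqrt(r) * rng.standard_normal()         # noisy observation
    x_hat, p = x_hat + u * dt, p + q                   # predict with the known control
    gain = p / (p + r)
    x_hat, p = x_hat + gain * (y - x_hat), (1 - gain) * p   # measurement update

print(f"terminal error {x:+.3f}, terminal estimate {x_hat:+.3f}")
```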

Relevance:

80.00%

Publisher:

Abstract:

Traditional approaches to upper body pose estimation using monocular vision rely on complex body models and a large variety of geometric constraints. We argue that this is not ideal and somewhat inelegant as it results in large processing burdens, and instead attempt to incorporate these constraints through priors obtained directly from training data. A prior distribution covering the probability of a human pose occurring is used to incorporate likely human poses. This distribution is obtained offline, by fitting a Gaussian mixture model to a large dataset of recorded human body poses, tracked using a Kinect sensor. We combine this prior information with a random walk transition model to obtain an upper body model, suitable for use within a recursive Bayesian filtering framework. Our model can be viewed as a mixture of discrete Ornstein-Uhlenbeck processes, in that states behave as random walks, but drift towards a set of typically observed poses. This model is combined with measurements of the human head and hand positions, using recursive Bayesian estimation to incorporate temporal information. Measurements are obtained using face detection and a simple skin colour hand detector, trained using the detected face. The suggested model is designed with analytical tractability in mind and we show that the pose tracking can be Rao-Blackwellised using the mixture Kalman filter, allowing for computational efficiency while still incorporating bio-mechanical properties of the upper body. In addition, the use of the proposed upper body model allows reliable three-dimensional pose estimates to be obtained indirectly for a number of joints that are often difficult to detect using traditional object recognition strategies. Comparisons with Kinect sensor results and the state of the art in 2D pose estimation highlight the efficacy of the proposed approach.
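A minimal sketch of the transition model described above (not the authors' full tracker): each step is a discrete-time Ornstein-Uhlenbeck move that random-walks while drifting towards one of a small set of typical poses, here the means of a hypothetical two-component Gaussian mixture in a four-dimensional pose space. All values are illustrative assumptions.

```python
# Sketch of a mixture-of-discrete-OU transition: states random-walk but drift
# towards typically observed poses (hypothetical mixture means and weights).
import numpy as np

rng = np.random.default_rng(3)
pose_means = np.array([[0.0, 0.3, -0.3, 0.0],     # e.g. "arms down" (illustrative)
                       [0.8, 0.9,  0.8, 0.9]])    # e.g. "arms raised" (illustrative)
weights = np.array([0.6, 0.4])                    # assumed mixture weights
a, sigma = 0.9, 0.05                              # pull towards the mean, step noise

def transition(x):
    """One mixture-of-discrete-OU step: pick a component, drift towards its mean."""
    k = rng.choice(len(weights), p=weights)
    return pose_means[k] + a * (x - pose_means[k]) + sigma * rng.standard_normal(x.shape)

x = np.zeros(4)
for _ in range(50):
    x = transition(x)
print("pose after 50 steps:", np.round(x, 3))
```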

Relevance:

80.00%

Publisher:

Abstract:

We propose a new approach for modeling nonlinear multivariate interest rate processes based on time-varying copulas and reducible stochastic differential equations (SDEs). In the modeling of the marginal processes, we consider a class of nonlinear SDEs that are reducible to the Ornstein-Uhlenbeck (OU) process or the Cox, Ingersoll, and Ross (1985) (CIR) process, with the reducibility achieved via a nonlinear transformation function. The main advantage of this approach is that these SDEs can account for nonlinear features observed in short-term interest rate series while leading to exact discretization and closed-form likelihood functions. Although a rich set of specifications may be entertained, our exposition focuses on two nonlinear constant elasticity volatility (CEV) processes, denoted OU-CEV and CIR-CEV, respectively. These two processes encompass a number of existing models that have closed-form likelihood functions. The transition density, the conditional distribution function, and the steady-state density function are derived in closed form, as are the conditional and unconditional moments for both processes. To obtain a more flexible functional form over time, we allow the transformation function to be time varying. Results from our study of U.S. and UK short-term interest rates suggest that the new models outperform existing parametric models with closed-form likelihood functions; we also find the time-varying effects in the transformation functions statistically significant. To examine the joint behavior of interest rate series, we propose flexible nonlinear multivariate models by joining univariate nonlinear processes via appropriate copulas, and we study the conditional dependence structure of the two rates using the Patton (2006a) time-varying symmetrized Joe-Clayton copula. We find evidence of asymmetric dependence between the two rates, and that the level of dependence is positively related to the level of the two rates. (JEL: C13, C32, G12)
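The exact-discretization property of the OU building block is easy to state in code. The sketch below (not the paper's OU-CEV or CIR-CEV models) simulates an OU short-rate path with its exact Gaussian transition and evaluates the corresponding closed-form log-likelihood; the parameter values are illustrative assumptions.

```python
# Sketch: exact discretization of the OU process and its closed-form likelihood.
import numpy as np
from scipy.stats import norm

def ou_loglik(x, dt, kappa, mu, sigma):
    """Exact log-likelihood of an OU path observed at regular spacing dt."""
    mean = mu + (x[:-1] - mu) * np.exp(-kappa * dt)
    var = sigma**2 * (1.0 - np.exp(-2.0 * kappa * dt)) / (2.0 * kappa)
    return norm.logpdf(x[1:], loc=mean, scale=np.sqrt(var)).sum()

rng = np.random.default_rng(4)
kappa, mu, sigma, dt, n = 0.8, 0.05, 0.02, 1.0 / 12.0, 240   # monthly data, 20 years
x = np.empty(n); x[0] = mu
for i in range(1, n):                                        # exact simulation step
    m = mu + (x[i - 1] - mu) * np.exp(-kappa * dt)
    s = sigma * np.sqrt((1 - np.exp(-2 * kappa * dt)) / (2 * kappa))
    x[i] = m + s * rng.standard_normal()

print("exact log-likelihood at the true parameters:",
      round(ou_loglik(x, dt, kappa, mu, sigma), 2))
```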

Relevance:

80.00%

Publisher:

Abstract:

We present a novel method for the light-curve characterization of Pan-STARRS1 Medium Deep Survey (PS1 MDS) extragalactic sources into stochastic variables (SVs) and burst-like (BL) transients, using multi-band image-differencing time-series data. We select detections in difference images associated with galaxy hosts using a star/galaxy catalog extracted from the deep PS1 MDS stacked images, and adopt a maximum a posteriori formulation to model their difference-flux time series in the four Pan-STARRS1 photometric bands gP1, rP1, iP1, and zP1. We use three deterministic light-curve models to fit BL transients: a Gaussian, a Gamma distribution, and an analytic supernova (SN) model; and one stochastic light-curve model, the Ornstein-Uhlenbeck process, to fit variability that is characteristic of active galactic nuclei (AGNs). We assess the quality of fit of the models band-wise and source-wise, using their estimated leave-one-out cross-validation likelihoods and corrected Akaike information criteria. We then apply a K-means clustering algorithm to these statistics to determine the source classification in each band. The final source classification is derived as a combination of the individual filter classifications, resulting in two measures of classification quality, obtained from the averages across the photometric filters of (1) the classifications determined from the closest K-means cluster centers, and (2) the squared distances from the cluster centers in the K-means clustering spaces. For a verification set of AGNs and SNe, we show that SVs and BL transients occupy distinct regions in the plane constituted by these measures. We use our clustering method to characterize 4361 extragalactic image-difference detected sources, in the first 2.5 yr of the PS1 MDS, into 1529 BL and 2262 SV, with a purity of 95.00% for AGNs and 90.97% for SNe based on our verification sets. We combine our light-curve classifications with their nuclear or off-nuclear host galaxy offsets to define a robust photometric sample of 1233 AGNs and 812 SNe. With these two samples, we characterize their variability and host galaxy properties, and identify simple photometric priors that would enable their real-time identification in future wide-field synoptic surveys.
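A minimal sketch of the stochastic model-fit step, not the full PS1 pipeline: the Ornstein-Uhlenbeck (damped random walk) log-likelihood of an irregularly sampled difference-flux light curve is evaluated together with its corrected Akaike criterion. The simulated light curve and parameter values are illustrative assumptions.

```python
# Sketch: OU (damped-random-walk) likelihood for an irregularly sampled light
# curve, plus the corrected Akaike information criterion (AICc).
import numpy as np
from scipy.stats import norm

def ou_loglik(t, f, tau, mu, sigma):
    """OU log-likelihood for irregular sampling times t and fluxes f."""
    dt = np.diff(t)
    a = np.exp(-dt / tau)
    mean = mu + (f[:-1] - mu) * a
    var = 0.5 * sigma**2 * tau * (1.0 - a**2)      # exact OU transition variance
    return norm.logpdf(f[1:], loc=mean, scale=np.sqrt(var)).sum()

rng = np.random.default_rng(5)
t = np.sort(rng.uniform(0.0, 300.0, size=120))     # irregular epochs (days)
tau, mu, sigma = 40.0, 0.0, 0.3                    # assumed AGN-like parameters
f = np.empty(t.size); f[0] = mu
for i in range(1, t.size):                         # exact OU simulation
    a = np.exp(-(t[i] - t[i - 1]) / tau)
    f[i] = mu + (f[i - 1] - mu) * a + np.sqrt(0.5 * sigma**2 * tau * (1 - a**2)) * rng.standard_normal()

ll, k, n = ou_loglik(t, f, tau, mu, sigma), 3, t.size
aicc = -2 * ll + 2 * k + 2 * k * (k + 1) / (n - k - 1)
print(f"OU log-likelihood = {ll:.1f}, AICc = {aicc:.1f}")
```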

Relevance:

80.00%

Publisher:

Abstract:

We provide a theoretical framework to explain the empirical finding that the estimated betas are sensitive to the sampling interval even when using continuously compounded returns. We suppose that stock prices have both permanent and transitory components. The permanent component is a standard geometric Brownian motion while the transitory component is a stationary Ornstein-Uhlenbeck process. The discrete time representation of the beta depends on the sampling interval and two components labelled "permanent and transitory betas". We show that if no transitory component is present in stock prices, then no sampling interval effect occurs. However, the presence of a transitory component implies that the beta is an increasing (decreasing) function of the sampling interval for more (less) risky assets. In our framework, assets are labelled risky if their "permanent beta" is greater than their "transitory beta" and vice versa for less risky assets. Simulations show that our theoretical results provide good approximations for the means and standard deviations of estimated betas in small samples. Our results can be perceived as indirect evidence for the presence of a transitory component in stock prices, as proposed by Fama and French (1988) and Poterba and Summers (1988).
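The sampling-interval effect can be reproduced in a small simulation, sketched below under assumed parameter values rather than the paper's analytical expressions: asset and market log prices share a permanent Brownian factor and a transitory Ornstein-Uhlenbeck factor, with a "transitory beta" larger than the "permanent beta", and OLS betas are estimated from returns sampled at increasing intervals.

```python
# Sketch: permanent (Brownian) + transitory (OU) price components and OLS betas
# estimated at different sampling intervals. Illustrative parameters throughout.
import numpy as np

rng = np.random.default_rng(6)
n_days = 250 * 40                          # 40 years of daily steps
dt = 1.0 / 250.0
sigma_w, kappa, sigma_u = 0.15, 25.0, 0.15
beta_perm, beta_trans = 1.0, 2.0           # asset loads more on the transitory factor

w = np.cumsum(sigma_w * np.sqrt(dt) * rng.standard_normal(n_days))   # permanent factor
u = np.zeros(n_days)
for i in range(1, n_days):                                            # transitory OU factor
    u[i] = u[i - 1] - kappa * u[i - 1] * dt + sigma_u * np.sqrt(dt) * rng.standard_normal()

market = w + u
asset = beta_perm * w + beta_trans * u + np.cumsum(0.05 * np.sqrt(dt) * rng.standard_normal(n_days))

for interval in (1, 5, 21, 63):            # daily, weekly, monthly, quarterly returns
    rm = np.diff(market[::interval])
    ra = np.diff(asset[::interval])
    beta_hat = np.polyfit(rm, ra, 1)[0]    # OLS slope of asset returns on market returns
    print(f"sampling every {interval:3d} days: estimated beta = {beta_hat:.2f}")
```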

Relevance:

80.00%

Publisher:

Abstract:

This paper derives the ARMA representation of integrated and realized variances when the spot variance depends linearly on two autoregressive factors, i.e., in SR-SARV(2) models. This class of processes includes affine, GARCH diffusion, and CEV models, as well as the eigenfunction stochastic volatility and positive Ornstein-Uhlenbeck models. We also study the leverage-effect case and the relationship between the weak GARCH representation of returns and the ARMA representation of realized variances. Finally, various empirical implications of these ARMA representations are considered. We find that some parameters of the ARMA representation can be negative, so the positivity of the expected values of integrated or realized variances is not guaranteed. We also find that, for some observation frequencies, the continuous-time model parameters may be weakly identified or not identified at all through the ARMA representation of realized variances.
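A minimal numerical companion to this result, under assumed parameter values and using statsmodels for the fit rather than anything from the paper: spot variance is simulated as the sum of two autoregressive (square-root) factors, aggregated into daily integrated variances as a proxy for realized variance, and an ARMA(2,2) is fitted.

```python
# Sketch: two-factor spot variance, daily integrated variance, ARMA(2,2) fit.
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(7)
n_days, m = 2000, 78                        # days and intraday steps per day
dt = 1.0 / (252 * m)
k1, k2, th1, th2, s1, s2 = 2.0, 30.0, 0.02, 0.02, 0.1, 0.4   # assumed factor parameters

f1, f2 = th1, th2
rv = np.empty(n_days)
for d in range(n_days):
    iv = 0.0
    for _ in range(m):                      # Euler steps for two square-root factors
        f1 += k1 * (th1 - f1) * dt + s1 * np.sqrt(max(f1, 0.0) * dt) * rng.standard_normal()
        f2 += k2 * (th2 - f2) * dt + s2 * np.sqrt(max(f2, 0.0) * dt) * rng.standard_normal()
        iv += (max(f1, 0.0) + max(f2, 0.0)) * dt   # spot variance is linear in the factors
    rv[d] = iv * 252                        # annualized daily integrated variance (RV proxy)

fit = ARIMA(rv, order=(2, 0, 2)).fit()
print(fit.params)                           # AR and MA coefficients of the ARMA(2,2)
```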

Relevance:

80.00%

Publisher:

Abstract:

Pairs trading investment strategies are based on price deviations between pairs of correlated stocks and have been widely implemented by investment funds, which take long and short positions in the selected stocks when divergences arise and realize a profit by closing the position upon convergence. We describe a mean-reversion model to analyse the dynamics of the price spread between common and preferred shares of the same company in the same market. The long-run convergence mean is obtained with a moving-average filter; the parameters of the mean-reversion model are then estimated with a Kalman filter under a state-space formulation applied to the historical series. A backtest of the algorithmic pairs trading strategy based on the proposed model indicates potential profits in financial markets observed out of equilibrium. Applications of these results could reveal opportunities to improve portfolio performance, correct mispricing, and better withstand periods of low returns.
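A minimal sketch of the mean-reversion machinery described above (not the authors' calibrated strategy): the spread is demeaned with a moving-average filter, treated as a noisily observed discrete-time Ornstein-Uhlenbeck (AR(1)) state, and filtered with a scalar Kalman filter to generate entry and exit signals. The simulated spread and all parameters are illustrative assumptions.

```python
# Sketch: moving-average demeaning + scalar Kalman filter on a mean-reverting
# spread, producing simple z-score trading signals.
import numpy as np

rng = np.random.default_rng(8)
n, a, q, r = 1000, 0.97, 0.01, 0.04        # AR(1) pull, state noise, observation noise

# Simulated spread: slowly drifting long-run level plus a mean-reverting part.
level = np.cumsum(0.002 * rng.standard_normal(n)) + 1.0
x = np.zeros(n)
for t in range(1, n):
    x[t] = a * x[t - 1] + np.sqrt(q) * rng.standard_normal()
spread = level + x + np.sqrt(r) * rng.standard_normal(n)

window = 60
ma = np.convolve(spread, np.ones(window) / window, mode="same")   # moving-average mean

x_hat, p, z = 0.0, 1.0, np.zeros(n)
stat_std = np.sqrt(q / (1 - a * a))                 # stationary std of the OU state
for t in range(n):
    x_hat, p = a * x_hat, a * a * p + q             # predict the mean-reverting state
    gain = p / (p + r)
    x_hat = x_hat + gain * (spread[t] - ma[t] - x_hat)   # update with the demeaned spread
    p = (1.0 - gain) * p
    z[t] = x_hat / stat_std                         # z-score of the filtered state

signals = np.where(z > 2.0, -1, np.where(z < -2.0, 1, 0))   # short/long the spread
print("number of position changes:", int(np.count_nonzero(np.diff(signals))))
```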

Relevance:

80.00%

Publisher:

Abstract:

Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)

Relevance:

80.00%

Publisher:

Abstract:

The present thesis is concerned with certain aspects of differential and pseudodifferential operators on infinite dimensional spaces. We aim to generalize classical operator theoretical concepts of pseudodifferential operators on finite dimensional spaces to the infinite dimensional case. At first we summarize some facts about the canonical Gaussian measures on infinite dimensional Hilbert space riggings. Considering the natural unitary group actions in $L^2(H_-,\gamma)$ given by weighted shifts and multiplication with $e^{i\langle t,\cdot\rangle_0}$, we obtain a unitary equivalence $F$ between them. In this sense $F$ can be considered as an abstract Fourier transform. We show that $F$ coincides with the Fourier-Wiener transform. Using the Fourier-Wiener transform we define pseudodifferential operators in Weyl and Kohn-Nirenberg form on our Hilbert space rigging. In the case of this Gaussian measure $\gamma$ we discuss several possible Laplacians, at first the Ornstein-Uhlenbeck operator and then pseudodifferential operators with negative definite symbol. In the second case, these operators are generators of $L^2_\gamma$-sub-Markovian semigroups and $L^2_\gamma$-Dirichlet forms. In 1992 Gramsch, Ueberberg and Wagner described a construction of generalized Hörmander classes by commutator methods. Following this concept and the classical finite dimensional description of $\Psi_{\rho,\delta}^0$ ($0\leq\delta\leq\rho\leq 1$, $\delta<1$) in the $C^*$-algebra $L(L^2)$ by Beals and Cordes, we construct in both cases generalized Hörmander classes, which are $\Psi^*$-algebras. These classes act on a scale of Sobolev spaces generated by our Laplacian. In the case of the Ornstein-Uhlenbeck operator, we prove that a large class of continuous pseudodifferential operators considered by Albeverio and Dalecky in 1998 is contained in our generalized Hörmander class. Furthermore, in the case of a Laplacian with negative definite symbol, we develop a symbolic calculus for our operators. We show some Fredholm criteria for them and prove that these Fredholm operators are hypoelliptic. Moreover, in the finite dimensional case, using the Gaussian measure instead of the Lebesgue measure, the index of these Fredholm operators is still given by Fedosov's formula. Considering an infinite dimensional Heisenberg group rigging, we discuss the connection of some representations of the Heisenberg group to pseudodifferential operators on infinite dimensional spaces. We use these connections to calculate the spectrum of pseudodifferential operators and to construct generalized Hörmander classes given by smooth elements which are spectrally invariant in $L^2(H_-,\gamma)$. Finally, given a topological space $X$ with Borel measure $\mu$, a locally compact group $G$ and a representation $B$ of $G$ in the group of all homeomorphisms of $X$, we construct a Borel measure $\mu_s$ on $X$ which is invariant under $B(G)$.
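For orientation only (a standard textbook formula, not a result of the thesis), the finite-dimensional prototype of the Ornstein-Uhlenbeck operator on the Gaussian space $L^2(\mathbb{R}^n,\gamma)$ and its semigroup in Mehler form read:

```latex
% Ornstein-Uhlenbeck operator with respect to the standard Gaussian measure
% \gamma on \mathbb{R}^n, and its semigroup written via Mehler's formula.
\begin{align*}
  (Lf)(x)      &= \Delta f(x) - \langle x, \nabla f(x)\rangle, \\
  (e^{tL}f)(x) &= \int_{\mathbb{R}^n} f\!\left(e^{-t}x + \sqrt{1-e^{-2t}}\,y\right)\,\gamma(dy),
  \qquad t \ge 0 .
\end{align*}
```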

Relevance:

80.00%

Publisher:

Abstract:

This thesis is motivated by biological questions concerning the behaviour of membrane potentials in neurons. A widely studied model for spiking neurons is the following. Between spikes, the membrane potential behaves like a diffusion process $X$ given by the SDE $dX_t = \beta(X_t)\,dt + \sigma(X_t)\,dB_t$, where $(B_t)$ denotes a standard Brownian motion. Spikes are explained as follows: as soon as the potential $X$ exceeds a certain excitation threshold $S$, a spike occurs, after which the potential is reset to a fixed value $x_0$. In applications it is sometimes possible to observe the diffusion process $X$ between spikes and to estimate the coefficients $\beta(\cdot)$ and $\sigma(\cdot)$ of the SDE. Nevertheless, the thresholds $x_0$ and $S$ must be determined to fully specify the model. One way to approach this problem is to regard $x_0$ and $S$ as parameters of a statistical model and to estimate them. This thesis discusses four different cases, in which the membrane potential $X$ between spikes is assumed to be a Brownian motion with drift, a geometric Brownian motion, an Ornstein-Uhlenbeck process, or a Cox-Ingersoll-Ross process. In addition, we observe the times between consecutive spikes, which we interpret as i.i.d. hitting times of the threshold $S$ by $X$ started at $x_0$. The first two cases are very similar, and in each the maximum likelihood estimator can be given explicitly; moreover, using LAN theory, the optimality of these estimators is shown. In the OU and CIR cases we choose a minimum-distance method based on comparing the empirical and true Laplace transforms with respect to a Hilbert-space norm. We prove that all estimators are strongly consistent and asymptotically normal. In the last chapter we examine the efficiency of the minimum-distance estimators on simulated data. Furthermore, applications to real data sets and their results are discussed in detail.
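A minimal simulation sketch of the hitting-time quantity described above (not the thesis' estimators): interspike intervals are modelled as the first time an Ornstein-Uhlenbeck membrane potential started at x_0 reaches the threshold S. All parameter values are illustrative assumptions.

```python
# Sketch: first-passage times of an OU membrane potential from the reset value
# x0 to the excitation threshold S, simulated over many trials.
import numpy as np

rng = np.random.default_rng(9)
theta, mu, sigma = 5.0, -60.0, 4.0        # dX = theta*(mu - X) dt + sigma dW  (mV, s)
x0, S = -70.0, -58.0                      # reset value and excitation threshold (mV)
dt, n_steps, n_trials = 1e-3, 3000, 2000  # 3 s horizon per trial

x = np.full(n_trials, x0)
hit_time = np.full(n_trials, np.nan)
for k in range(1, n_steps + 1):
    x += theta * (mu - x) * dt + sigma * np.sqrt(dt) * rng.standard_normal(n_trials)
    crossed = (x >= S) & np.isnan(hit_time)
    hit_time[crossed] = k * dt            # record the first threshold crossing (a "spike")

isi = hit_time[~np.isnan(hit_time)]
print(f"{isi.size} of {n_trials} trials spiked; mean interspike interval = {isi.mean():.3f} s")
```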

Relevance:

80.00%

Publisher:

Abstract:

The field of computational neuroscience develops mathematical models to describe neuronal systems, with the aim of better understanding the nervous system. Historically, the integrate-and-fire model, developed by Lapicque in 1907, was the first model describing a neuron. In 1952 Hodgkin and Huxley [8] described the so-called Hodgkin-Huxley model in the article "A Quantitative Description of Membrane Current and Its Application to Conduction and Excitation in Nerve". The Hodgkin-Huxley model is one of the most successful and widely used biological neuron models. Based on experimental data from the squid giant axon, Hodgkin and Huxley developed their mathematical model as a four-dimensional system of first-order ordinary differential equations. One of these equations characterizes the membrane potential as a process in time, whereas the other three equations describe the opening and closing states of the sodium and potassium ion channels. The rate of change of the membrane potential is proportional to the sum of the ionic currents flowing across the membrane and an externally applied current. For various types of external input the membrane potential behaves differently. This thesis considers the following three types of input: (i) Rinzel and Miller [15] calculated an interval of amplitudes of a constant applied current for which the membrane potential spikes repetitively; (ii) Aihara, Matsumoto and Ikegaya [1] showed that, depending on the amplitude and frequency of a periodic applied current, the membrane potential responds periodically; (iii) Izhikevich [12] stated that brief pulses of positive and negative current with different amplitudes and frequencies can lead to a periodic response of the membrane potential. In chapter 1 the Hodgkin-Huxley model is introduced following Izhikevich [12]. Besides the definition of the model, several biological and physiological notes are made, and further concepts are described by examples. Moreover, the numerical methods used to solve the equations of the Hodgkin-Huxley model for the computer simulations in chapters 2 and 3 are presented. In chapter 2 the statements for the three different inputs (i), (ii) and (iii) are verified, and the periodic behaviour for inputs (ii) and (iii) is investigated. In chapter 3 the inputs are embedded in an Ornstein-Uhlenbeck process to examine the influence of noise on the results of chapter 2.
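As a small illustration of the chapter 3 idea, the sketch below embeds a constant applied current in an Ornstein-Uhlenbeck process using the exact OU transition; the Hodgkin-Huxley integration itself is omitted, and the parameter values are illustrative assumptions.

```python
# Sketch: an applied current fluctuating around the constant amplitude I0 as an
# Ornstein-Uhlenbeck process, updated with the exact OU transition.
import numpy as np

rng = np.random.default_rng(10)
I0, tau, sigma = 10.0, 5.0, 2.0            # mean current (uA/cm^2), correlation time (ms), noise scale
dt, n_steps = 0.01, 50_000                 # 500 ms at a 0.01 ms step

a = np.exp(-dt / tau)
noise_std = sigma * np.sqrt(tau / 2.0 * (1.0 - a**2))   # exact transition standard deviation

I = np.empty(n_steps)
I[0] = I0
for k in range(1, n_steps):
    I[k] = I0 + (I[k - 1] - I0) * a + noise_std * rng.standard_normal()

# I[k] would be passed as the applied current at each step of a Hodgkin-Huxley solver.
print(f"input current: mean = {I.mean():.2f}, std = {I.std():.2f} "
      f"(stationary std = {sigma * np.sqrt(tau / 2):.2f})")
```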

Relevance:

80.00%

Publisher:

Abstract:

This thesis is concerned with approximate inference in dynamical systems from a variational Bayesian perspective. When modelling real-world dynamical systems, stochastic differential equations are a natural choice, mainly because of their ability to model the noise of the system by adding a variant of some stochastic process to the deterministic dynamics; hence, inference in such processes has drawn much attention. Here two new extended frameworks are derived and presented that are based on basis function expansions and local polynomial approximations of a recently proposed variational Bayesian algorithm. It is shown that the new extensions converge to the original variational algorithm and can be used for state estimation (smoothing). However, the main focus is on estimating the (hyper-)parameters of these systems, i.e. the drift parameters and diffusion coefficients. The new methods are numerically validated on a range of systems which vary in dimensionality and non-linearity: the Ornstein-Uhlenbeck process, for which the exact likelihood can be computed analytically; the univariate, highly non-linear stochastic double well; and the multivariate chaotic stochastic Lorenz '63 (3-dimensional) model. The algorithms are also applied to the 40-dimensional stochastic Lorenz '96 system. In this investigation the new approaches are compared with a variety of other well-known methods, such as the ensemble Kalman filter/smoother, a hybrid Monte Carlo sampler, the dual unscented Kalman filter (for jointly estimating the system states and model parameters) and full weak-constraint 4D-Var. An empirical analysis of their asymptotic behaviour as the observation density or the length of the time window increases is also provided.
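The analytically tractable benchmark mentioned above can be made concrete with a short sketch (not the thesis' variational algorithms): because the discretely observed OU process is a Gaussian AR(1), its drift and diffusion parameters can be recovered from the exact transition density. The simulated path and true parameter values are illustrative assumptions.

```python
# Sketch: recovering OU drift and diffusion parameters from a discretely
# observed path via the exact AR(1) form of the transition density.
import numpy as np

rng = np.random.default_rng(11)
theta, sigma, dt, n = 2.0, 1.0, 0.01, 20_000     # dX = -theta X dt + sigma dW

a = np.exp(-theta * dt)
s = sigma * np.sqrt((1 - np.exp(-2 * theta * dt)) / (2 * theta))
x = np.empty(n); x[0] = 0.0
for i in range(1, n):                            # exact simulation of the OU path
    x[i] = a * x[i - 1] + s * rng.standard_normal()

# Exact-discretization estimates: the OU process sampled at spacing dt is an AR(1).
a_hat = np.dot(x[1:], x[:-1]) / np.dot(x[:-1], x[:-1])
resid_var = np.mean((x[1:] - a_hat * x[:-1]) ** 2)
theta_hat = -np.log(a_hat) / dt
sigma_hat = np.sqrt(2 * theta_hat * resid_var / (1 - a_hat**2))
print(f"theta_hat = {theta_hat:.2f} (true {theta}), sigma_hat = {sigma_hat:.2f} (true {sigma})")
```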

Relevance:

80.00%

Publisher:

Abstract:

In this paper, we present a framework for Bayesian inference in continuous-time diffusion processes. The new method is directly related to the recently proposed variational Gaussian process approximation (VGPA) approach to Bayesian smoothing of partially observed diffusions. By adopting a basis function expansion (BF-VGPA), both the time-dependent control parameters of the approximate GP process and its moment equations are projected onto a lower-dimensional subspace. This allows us both to reduce the computational complexity and to eliminate the time discretisation used in the previous algorithm. The new algorithm is tested on an Ornstein-Uhlenbeck process. Our preliminary results show that the BF-VGPA algorithm provides reasonably accurate state estimation using a small number of basis functions.
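A minimal sketch of the basis-function idea (not the authors' BF-VGPA implementation): a time-varying control function A(t) is represented as a weighted sum of Gaussian radial basis functions, so it can be evaluated at arbitrary times without a fixed discretisation grid. The basis centres, width, and weights below are placeholder assumptions.

```python
# Sketch: a time-varying function expanded in Gaussian radial basis functions,
# evaluable at arbitrary (off-grid) times.
import numpy as np

centres = np.linspace(0.0, 10.0, 15)             # basis-function centres on [0, 10]
width = 0.7                                      # common Gaussian width
weights = np.sin(centres) * 0.5                  # placeholder expansion coefficients

def A(t):
    """Evaluate the expansion A(t) = sum_k w_k * phi_k(t)."""
    t = np.atleast_1d(t)[:, None]
    phi = np.exp(-0.5 * ((t - centres) / width) ** 2)
    return phi @ weights

print(A(np.array([0.3, 4.7, 9.9])))              # values at arbitrary evaluation times
```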