961 results for continuous-time asymptotics
Abstract:
Laplacian-based descriptors, such as the Heat Kernel Signature (HKS) and the Wave Kernel Signature (WKS), allow one to embed the vertices of a graph into a vector space, and have been successfully used to find the optimal matching between a pair of input graphs. While the HKS uses a heat diffusion process to probe the local structure of a graph, the WKS attempts to do the same through wave propagation. In this paper, we propose an alternative structural descriptor that is based on continuous-time quantum walks. More specifically, we characterise the structure of a graph using its average mixing matrix. The average mixing matrix is a doubly-stochastic matrix that encodes the time-averaged behaviour of a continuous-time quantum walk on the graph. We propose to use the rows of the average mixing matrix for increasing stopping times to develop a novel signature, the Average Mixing Matrix Signature (AMMS). We perform an extensive range of experiments and show that the proposed signature is robust under structural perturbations of the original graphs and outperforms both the HKS and the WKS when used as a node descriptor in a graph matching task.
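To make the descriptor concrete, here is a minimal numerical sketch of the average mixing matrix, assuming the walk Hamiltonian is the graph adjacency matrix and approximating the time average by a Riemann sum; the function name and test graph are illustrative, not the paper's implementation.

```python
import numpy as np
from scipy.linalg import expm

def average_mixing_matrix(A, T, steps=200):
    """Approximate M_T = (1/T) * integral_0^T |U(t)|^2 dt, U(t) = exp(-iAt)."""
    ts = np.linspace(0.0, T, steps, endpoint=False) + T / (2 * steps)  # midpoints
    M = np.zeros_like(A, dtype=float)
    for t in ts:
        U = expm(-1j * A * t)        # unitary evolution of the quantum walk
        M += np.abs(U) ** 2          # instantaneous mixing matrix M(t)
    return M / steps

# Example: path graph on 4 vertices
A = np.diag(np.ones(3), 1) + np.diag(np.ones(3), -1)
M = average_mixing_matrix(A, T=10.0)
print(M.round(3))
print(M.sum(axis=1))                 # rows of a mixing matrix sum to 1
```

The AMMS described above would then collect rows of M for a range of stopping times T; that assembly step is not reproduced here.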
Abstract:
This research develops an econometric framework for analyzing time series processes with bounds. The framework is general enough to incorporate several different kinds of bounding information that constrain continuous-time stochastic processes between discretely sampled observations. It applies to situations in which the process is known to remain within an interval between observations, whether through a known constraint or through the observation of extreme realizations of the process. The main statistical technique is maximum likelihood estimation, which leads to the development of the asymptotic distribution theory for the estimation of the parameters in bounded diffusion models. The results of this analysis have several implications for empirical research. The advantages take the form of efficiency gains, bias reduction, and flexibility of model specification. A bias arises when bounding information is present but ignored, and it is mitigated within this framework. An efficiency gain arises in the sense that the statistical methods make use of the conditioning information revealed by the bounds. Further, the specification of an econometric model can be uncoupled from the restriction to the bounds, leaving the researcher free to model the process near the bound in a way that avoids bias from misspecification. One byproduct of the improvements in model specification is that the more precise model estimation exposes other sources of misspecification. Some processes reveal themselves to be unlikely candidates for a given diffusion model once the observations are analyzed in combination with the bounding information. A closer inspection of the theoretical foundation behind diffusion models leads to a more general specification of the model, which is used to produce a set of algorithms that make the model computationally feasible and more widely applicable. Finally, the modeling framework is applied to a series of interest rates which, for several years, have been constrained by the lower bound of zero. The estimates from a series of diffusion models suggest a substantial difference in estimation results between models that ignore bounds and the framework that takes bounding information into consideration.
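As a rough illustration of how bounding information enters a likelihood, the following sketch fits an Euler pseudo-likelihood in which the Gaussian transition density is truncated at a known lower bound; the mean-reverting model, parameter names, and truncation device are assumptions for illustration, not the thesis's specification.

```python
import numpy as np
from scipy.stats import truncnorm
from scipy.optimize import minimize

def neg_loglik(params, x, dt, b=0.0):
    kappa, theta, sigma = params
    if sigma <= 0 or kappa <= 0:
        return np.inf
    mu = x[:-1] + kappa * (theta - x[:-1]) * dt   # Euler drift, OU-type model
    sd = sigma * np.sqrt(dt)
    a = (b - mu) / sd                              # standardized lower bound
    ll = truncnorm.logpdf(x[1:], a=a, b=np.inf, loc=mu, scale=sd)
    return -ll.sum()

# Simulated example: a mean-reverting series observed above the zero bound
rng = np.random.default_rng(0)
dt, path = 1 / 252, [0.5]
for _ in range(1000):
    step = 2.0 * (0.4 - path[-1]) * dt + 0.3 * np.sqrt(dt) * rng.standard_normal()
    path.append(max(1e-6, path[-1] + step))        # keep the sample above zero
x = np.array(path)
fit = minimize(neg_loglik, x0=[1.0, 0.3, 0.2], args=(x, dt), method="Nelder-Mead")
print(fit.x)                                       # (kappa, theta, sigma) estimates
```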
Abstract:
Understanding why market manipulation is conducted, under which conditions it is most profitable, and how large these practices are, are crucial questions for financial regulators. Closing-price manipulation induced by derivatives' expiration is the primary subject of this thesis. The first chapter provides a mathematical framework in continuous time to study the incentive to manipulate a set of securities induced by a derivative position. We consider an agent holding a European-type contingent claim whose payoff depends on the price of a basket of underlying securities. The agent can affect the price of the underlying securities by trading in each of them before expiration. The elements of novelty are at least twofold: (1) a multi-asset market is considered; (2) the problem is solved by means of both classic optimisation and stochastic control techniques. Both linear and option payoffs are considered. In the second chapter, an empirical investigation is conducted into the existence of expiration-day effects on the UK equity market, using intraday data on FTSE 350 stocks over the six-year period 2015-2020. The results show that the expiration of index derivatives is associated with a rise in both trading activity and volatility, together with significant price distortions. The expiration of single-stock options appears to have little to no impact on the underlying securities. The last chapter examines the existence of patterns consistent with closing-price manipulation of UK stocks on option expiration days. The main contributions are threefold: (1) this is one of the few empirical studies of manipulation induced by the options market; (2) proprietary equity order-book and transaction data sets are used to define manipulation proxies, providing a more detailed analysis; (3) the behaviour of proprietary trading firms is studied. Despite the industry concerns, no evidence of this type of manipulative behaviour is found.
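A toy, single-period version of the incentive studied in the first chapter can make the trade-off explicit; all functional forms and parameter values below are illustrative assumptions, not the thesis's model.

```python
import numpy as np

def manipulation_profit(x, N=100.0, theta=0.01, eta=0.02):
    # N * theta * x: extra claim payoff from lifting the close by theta * x
    # 0.5 * theta * x**2: average permanent-impact cost of building x shares
    # eta * x**2: temporary (execution) cost, round trip
    return N * theta * x - 0.5 * theta * x**2 - eta * x**2

xs = np.linspace(0.0, 100.0, 2001)
x_grid = xs[np.argmax(manipulation_profit(xs))]
x_closed = 100.0 * 0.01 / (0.01 + 2 * 0.02)   # first-order condition: N*theta/(theta + 2*eta)
print(x_grid, x_closed)                        # both ~20 shares
```

The same trade-off, between the gain on the derivative position and the cost of moving the underlying, drives the continuous-time multi-asset problem solved in the thesis.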
Abstract:
We start in Chapter 2 by investigating linear matrix-valued SDEs and the Itô-stochastic Magnus expansion. The Itô-stochastic Magnus expansion provides an efficient numerical scheme for solving matrix-valued SDEs. We show convergence of the expansion up to a stopping time τ and provide an asymptotic estimate of the cumulative distribution function of τ. Moreover, we show how to apply it to solve SPDEs with one and two spatial dimensions by combining it with the method of lines, with high accuracy. We will see that the Magnus expansion allows us to use GPU techniques, leading to major performance improvements over a standard Euler-Maruyama scheme. In Chapter 3, we study a short-rate model in a Cox-Ingersoll-Ross (CIR) framework for negative interest rates. We define the short rate as the difference of two independent CIR processes and add a deterministic shift to guarantee a perfect fit to the market term structure. We show how to use the Gram-Charlier expansion to efficiently calibrate the model to the market swaption surface and to price Bermudan swaptions with good accuracy. We then take two different perspectives on rating-transition modelling. In Section 4.4, we study inhomogeneous continuous-time Markov chains (ICTMC) as a candidate for a rating model with deterministic rating transitions. We extend this model by taking a Lie-group perspective in Section 4.5 to allow for stochastic rating transitions. In both cases, we compare the most popular choices of change-of-measure technique and show how to efficiently calibrate both models to the available historical rating data and market default probabilities. Finally, we apply the techniques developed in this thesis to minimize the collateral-inclusive Credit/Debit Valuation Adjustments under the constraint of small collateral postings, by using a collateral account dependent on rating triggers.
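For intuition, the following sketch compares a first-order Itô-stochastic Magnus approximation of a linear matrix SDE, X_t ≈ expm((A - 0.5*B@B)*t + B*W_t) for dX_t = A X_t dt + B X_t dW_t with X_0 = I, against an Euler-Maruyama reference on a single path; the truncation order and test matrices are assumptions for illustration only.

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(1)
A = np.array([[0.2, 0.1], [0.0, 0.3]])
B = np.array([[0.3, 0.0], [0.1, 0.2]])
T, n = 1.0, 2000
dt = T / n

X, W = np.eye(2), 0.0                 # Euler-Maruyama reference, X_0 = I
for _ in range(n):
    dW = np.sqrt(dt) * rng.standard_normal()
    X = X + A @ X * dt + B @ X * dW
    W += dW

X_magnus = expm((A - 0.5 * B @ B) * T + B * W)   # first-order Magnus logarithm
print(np.abs(X - X_magnus).max())     # small but nonzero: [A, B] != 0 here
```

Note that one matrix exponential per evaluation time replaces the whole step-by-step loop, which is what makes the GPU batching mentioned above attractive.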
Abstract:
In recent years, autonomous aerial vehicles have gained large popularity in a variety of automation applications. To accomplish varied and challenging tasks, the capability of generating trajectories has assumed a key role. As higher performance is sought, traditional flatness-based trajectory generation schemes show their limitations, since they neglect the highly nonlinear dynamics of the quadrotor. Strategies based on optimal control principles therefore turn out to be beneficial: in the trajectory generation process they allow the control unit to best exploit the actual dynamics, and they enable the drone to perform quite aggressive maneuvers. This dissertation is concerned with the development of an optimal control technique to generate trajectories for autonomous drones. The algorithm adopted to this end is a second-order iterative method working directly in continuous time which, under proper initialization, guarantees quadratic convergence to a locally optimal trajectory. At each iteration, a quadratic approximation of the cost functional is minimized and a descent direction is obtained as a linear-affine control law, after solving a differential Riccati equation. The algorithm has been implemented and its effectiveness tested on the vectored-thrust dynamical model of a quadrotor in a realistic simulation setup.
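The inner step of such a scheme, integrating a differential Riccati equation backward in time for a linear-quadratic subproblem, can be sketched as follows; the double-integrator matrices stand in for the linearized quadrotor dynamics and are an assumption for illustration.

```python
import numpy as np
from scipy.integrate import solve_ivp

# LQ subproblem: min int (x'Qx + u'Ru) dt + x(T)'QT x(T),  x_dot = A x + B u
A = np.array([[0.0, 1.0], [0.0, 0.0]])   # double integrator (stand-in dynamics)
B = np.array([[0.0], [1.0]])
Q, R, QT = np.eye(2), np.array([[1.0]]), 10 * np.eye(2)
T = 5.0

def riccati_rhs(t, p):
    P = p.reshape(2, 2)
    dP = -(A.T @ P + P @ A - P @ B @ np.linalg.inv(R) @ B.T @ P + Q)
    return dP.ravel()

# integrate backward from the terminal condition P(T) = QT to t = 0
sol = solve_ivp(riccati_rhs, [T, 0.0], QT.ravel())
P0 = sol.y[:, -1].reshape(2, 2)
K0 = np.linalg.inv(R) @ B.T @ P0          # feedback gain, u = -K x, at t = 0
print(K0)
```

In the full method this Riccati solve yields the linear-affine update law applied at each iteration of the trajectory optimization.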
Abstract:
The receiver-operating characteristic (ROC) curve is the most widely used measure for evaluating the performance of a diagnostic biomarker when predicting a binary disease outcome. The ROC curve displays the true positive rate (or sensitivity) against the false positive rate (or 1-specificity) for the different cut-off values used to classify an individual as healthy or diseased. In time-to-event studies, however, the disease status of an individual (e.g. alive or dead) is not a fixed characteristic and can change over the course of the study. In such cases, several issues should be taken into account when evaluating the performance of the biomarker: first, the time-dependent nature of the disease status; and second, the presence of incomplete data (e.g. the censored data typically present in survival studies). Accordingly, time-dependent extensions of the true positive rate, the false positive rate, and the ROC curve have recently been proposed to assess the discrimination power of continuous biomarkers for time-dependent disease outcomes. In this work, we present new nonparametric estimators of the cumulative/dynamic time-dependent ROC curve that account for the possible modifying effect of current or past covariate measures on the discriminatory power of the biomarker. The proposed estimators can accommodate right-censored data, as well as covariate-dependent censoring. The behavior of the proposed estimators is explored through simulations and illustrated using data from a cohort of patients who suffered from acute coronary syndrome.
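For orientation, here is a minimal sketch of a cumulative/dynamic ROC curve at a fixed horizon: cases are subjects with event time at or before t, controls are those still event-free at t, and the curve sweeps cut-offs of the biomarker. Censoring and covariate effects, the actual focus of the proposed estimators, are deliberately omitted; all names and the simulated data are illustrative.

```python
import numpy as np
from scipy.integrate import trapezoid

def cd_roc(marker, event_time, t):
    case = event_time <= t                    # cumulative cases at horizon t
    ctrl = ~case                              # dynamic controls at horizon t
    cuts = np.unique(marker)[::-1]            # sweep cut-offs, high to low
    tpr = np.array([(marker[case] >= c).mean() for c in cuts])
    fpr = np.array([(marker[ctrl] >= c).mean() for c in cuts])
    return fpr, tpr

rng = np.random.default_rng(2)
m = rng.normal(size=500)                      # biomarker
T = rng.exponential(scale=np.exp(-0.8 * m))   # higher marker -> earlier event
fpr, tpr = cd_roc(m, T, t=1.0)
print(round(trapezoid(tpr, fpr), 3))          # time-dependent AUC at t = 1
```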
Abstract:
"Vegeu el resum a l'inici del document del fitxer adjunt."
Abstract:
Analyzing the relationship between the baseline value and subsequent change of a continuous variable is a frequent matter of inquiry in cohort studies. These analyses are surprisingly complex, particularly if only two waves of data are available. It is unclear to non-biostatisticians where the complexity of this analysis lies and which statistical method is adequate. With the help of simulated longitudinal data on body mass index in children, we review statistical methods for analyzing the association between the baseline value and subsequent change, assuming linear growth with time. Key issues in such analyses are mathematical coupling, measurement error, variability of change between individuals, and regression to the mean. Ideally, one relies on multiple repeated measurements at different times, and a linear random-effects model is the standard approach if more than two waves of data are available. If only two waves of data are available, our simulations show that Blomqvist's method, which consists in adjusting the estimated regression coefficient of observed change on baseline value for the measurement-error variance, provides accurate estimates. The adequacy of methods for assessing the relationship between the baseline value and subsequent change depends on the number of data waves, the availability of information on measurement error, and the variability of change between individuals.
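A minimal simulation sketch of Blomqvist's correction in the two-wave setting, assuming the measurement-error variance is known, illustrates the point; the true model and parameter values are assumptions chosen for the example.

```python
import numpy as np

rng = np.random.default_rng(3)
n, beta, var_e = 20000, -0.3, 0.5
t1 = rng.normal(0.0, 2.0, n)                       # true baseline
t2 = t1 + (-0.1 + beta * t1) + rng.normal(0.0, 0.4, n)  # true change model
x1 = t1 + rng.normal(0.0, np.sqrt(var_e), n)       # observed wave 1
x2 = t2 + rng.normal(0.0, np.sqrt(var_e), n)       # observed wave 2

d = x2 - x1                                        # observed change
b_naive = np.cov(d, x1)[0, 1] / np.var(x1)         # biased by coupled error
b_blom = (b_naive * np.var(x1) + var_e) / (np.var(x1) - var_e)
print(round(b_naive, 3), round(b_blom, 3))         # corrected estimate ~ -0.3
```

The naive slope is pulled downward because the wave-1 error appears with opposite signs in the change and the baseline; rescaling by the error variance undoes exactly that coupling.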
Abstract:
An equation for mean first-passage times of non-Markovian processes driven by colored noise is derived through an appropriate backward integro-differential equation. The equation is solved in a Bourret-like approximation. In a weak-noise bistable situation, non-Markovian effects are taken into account by an effective diffusion coefficient. In this situation, our results compare satisfactorily with other approaches and experimental data.
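As an illustrative cross-check of the quantity being approximated, mean first-passage times under colored noise can be estimated by brute-force Monte Carlo; the bistable potential, noise parameters, and threshold below are assumptions, and the simulation is not the paper's analytical approach.

```python
import numpy as np

def mfpt(tau=0.5, D=0.1, dt=1e-3, n_paths=100, t_max=100.0, seed=4):
    """Mean first-passage time from the left well (x = -1) to the barrier (x = 0)
    for dx = (x - x**3 + eta) dt, with eta an Ornstein-Uhlenbeck (colored) noise."""
    rng = np.random.default_rng(seed)
    times = []
    for _ in range(n_paths):
        x, eta, t = -1.0, 0.0, 0.0
        while t < t_max:
            x += (x - x**3 + eta) * dt     # force -V'(x) for V = -x^2/2 + x^4/4
            eta += -eta / tau * dt + np.sqrt(2.0 * D * dt) / tau * rng.standard_normal()
            t += dt
            if x >= 0.0:
                times.append(t)
                break
        # paths that never escape by t_max are discarded (a crude truncation)
    return np.mean(times)

print(mfpt())
```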
Abstract:
We investigate the phase transition in a strongly disordered short-range three-spin interaction model characterized by the absence of time-reversal symmetry in the Hamiltonian. In the mean-field limit the model is well described by the Adam-Gibbs-DiMarzio scenario for the glass transition; however, in the short-range case this picture turns out to be modified. The model presents a finite temperature continuous phase transition characterized by a divergent spin-glass susceptibility and a negative specific-heat exponent. We expect the nature of the transition in this three-spin model to be the same as the transition in the Edwards-Anderson model in a magnetic field, with the advantage that the strong crossover effects present in the latter case are absent.
Abstract:
We generalize to arbitrary waiting-time distributions some results previously derived for discrete distributions. We show that, of any two waiting-time distributions with the same mean delay time, the one with higher dispersion leads to a faster front. Experimental data on the speed of virus infections in a plaque are correctly explained by the theoretical predictions using a Gaussian delay-time distribution, which is more realistic for this system than the Dirac delta distribution considered previously [J. Fort and V. Méndez, Phys. Rev. Lett. 89, 178101 (2002)].
Abstract:
In this paper, we present an on-line estimation algorithm for an uncertain time delay in a continuous system, based on observed input-output data subject to observation noise. A first-order Padé approximation is used to approximate the time delay. At each time step, the algorithm combines the well-known Kalman filter algorithm and the recursive instrumental variable least squares (RIVLS) algorithm in cascade form. The instrumental variable least squares algorithm is used to achieve consistency of the delay parameter estimate, since an errors-in-variables model is involved. An illustrative example demonstrates the efficacy of the proposed approach.
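For reference, the first-order Padé approximation of a pure delay replaces exp(-s*tau) with (1 - s*tau/2)/(1 + s*tau/2); the short sketch below compares its step response with the exact delayed step for an assumed delay, and does not reproduce the Kalman/RIVLS estimation loop.

```python
import numpy as np
from scipy import signal

tau = 0.8                                        # assumed true delay
# first-order Pade approximant: (1 - s*tau/2) / (1 + s*tau/2)
pade = signal.TransferFunction([-tau / 2, 1], [tau / 2, 1])
t = np.linspace(0.0, 5.0, 501)
_, y_pade = signal.step(pade, T=t)
y_exact = (t >= tau).astype(float)               # exact delayed unit step
print(np.abs(y_pade - y_exact).max())            # mismatch concentrated near t = tau
```

The approximant is exact at low frequencies, which is why it is usable inside a recursive estimator, while its non-minimum-phase zero produces the initial undershoot visible near t = 0.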
Abstract:
Dynamic multi-user interactions in a single networked virtual environment suffer from abrupt state transition problems due to communication delays arising from network latency--an action by one user only becoming apparent to another user after the communication delay. This results in a temporal suspension of the environment for the duration of the delay--the virtual world 'hangs'--followed by an abrupt jump to make up for the time lost due to the delay so that the current state of the virtual world is displayed. These discontinuities appear unnatural and disconcerting to the users. This paper proposes a novel method of warping times associated with users to ensure that each user views a continuous version of the virtual world, such that no hangs or jumps occur despite other user interactions. Objects passed between users within the environment are parameterized, not by real time, but by a virtual local time, generated by continuously warping real time. This virtual time periodically realigns itself with real time as the virtual environment evolves. The concept of a local user dynamically warping the local time is also introduced. As a result, the users are shielded from viewing discontinuities within their virtual worlds, consequently enhancing the realism of the virtual environment.
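A minimal sketch of the warping idea, with an assumed linear realignment profile (the paper's warp function and parameters are not reproduced here): after a delayed remote update, the local clock runs behind real time and converges back over a realignment window, so the local view never hangs or jumps.

```python
class WarpedClock:
    def __init__(self, realign_window=2.0):
        self.offset = 0.0             # virtual time minus real time
        self.window = realign_window  # seconds over which to realign
        self.last_real = 0.0

    def on_delayed_update(self, delay):
        # the remote event is `delay` seconds old: rewind virtual time
        self.offset -= delay

    def virtual_time(self, real_time):
        dt = real_time - self.last_real
        self.last_real = real_time
        # shrink the offset smoothly instead of jumping
        recovered = min(abs(self.offset), dt * abs(self.offset) / self.window)
        self.offset += recovered if self.offset < 0 else -recovered
        return real_time + self.offset

clock = WarpedClock()
clock.on_delayed_update(delay=0.5)    # a remote action arrives 0.5 s late
for k in range(1, 31):
    rt = 0.1 * k
    vt = clock.virtual_time(rt)       # monotone: no hang, no jump
print(round(rt - vt, 3))              # residual offset shrinking toward 0
```

Because virtual time always advances, objects parameterized by it move continuously, while the shrinking offset implements the periodic realignment with real time described above.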