2 results for dynamic decomposition

in Aston University Research Archive


Relevance:

30.00%

Abstract:

The appealing feature of the arbitrage-free Nelson-Siegel model of the yield curve is its ability to capture movements in the yield curve through readily interpretable shifts in its level, slope or curvature, all within a dynamic arbitrage-free framework. To ensure that the level, slope and curvature factors evolve so as not to admit arbitrage, the model introduces a yield-adjustment term. This paper shows how the yield-adjustment term can itself be decomposed into the familiar level, slope and curvature elements plus some additional readily interpretable shape adjustments. This means that, even in an arbitrage-free setting, it remains possible to interpret movements in the yield curve in terms of level, slope and curvature influences. © 2014 Taylor & Francis.
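The level, slope and curvature factors mentioned in the abstract can be illustrated with the standard Nelson-Siegel factor loadings (Diebold-Li style). The parameter values below are hypothetical, and this sketch deliberately omits the arbitrage-free yield-adjustment term that the paper decomposes:

```python
import numpy as np

def ns_yield(tau, L, S, C, lam=0.6):
    """Nelson-Siegel yield at maturity tau (in years).

    L, S, C are the level, slope and curvature factors; lam is the
    decay parameter (all values here are illustrative, not calibrated).
    """
    x = lam * tau
    slope_load = (1 - np.exp(-x)) / x       # ~1 at short maturities, decays to 0
    curv_load = slope_load - np.exp(-x)     # humped: peaks at medium maturities
    return L + S * slope_load + C * curv_load

# Yields across a hypothetical maturity grid
taus = np.array([0.25, 1.0, 5.0, 10.0, 30.0])
print(ns_yield(taus, L=0.04, S=-0.02, C=0.01))
```

As maturity grows the slope and curvature loadings vanish, so the long-end yield converges to the level factor L, which is what makes the three factors "readily interpretable".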

Relevance:

30.00%

Abstract:

Random Walk with Restart (RWR) is an appealing measure of proximity between nodes based on graph structure. Since real graphs are often large and subject to frequent minor changes, it is prohibitively expensive to recompute proximities from scratch. Previous methods use LU decomposition and degree-reordering heuristics, entailing O(|V|^3) time and O(|V|^2) memory to compute all |V|^2 pairs of node proximities in a static graph. In this paper, a dynamic scheme to assess RWR proximities is proposed: (1) For a unit update, we characterize the changes to all-pairs proximities as the outer product of two vectors. We notice that the multiplication of an RWR matrix and its transition matrix, unlike matrix multiplication in general, is commutative. This reduces the computation of all-pairs proximities from O(|V|^3) to O(|delta|) time per update without loss of accuracy, where |delta| (<< |V|^2) is the number of affected proximities. (2) To avoid O(|V|^2) memory for all pairs of outputs, we also devise efficient partitioning techniques for our dynamic model, which can compute all pairs of proximities segment by segment within O(l|V|) memory and O(|V|/l) I/O costs, where 1 <= l <= |V| is a user-controlled trade-off between memory and I/O costs. (3) For bulk updates, we devise aggregation and hashing methods, which can further discard many unnecessary updates and handle chunks of unit updates simultaneously. Our experimental results on various datasets demonstrate that our methods can be 1-2 orders of magnitude faster than other competitors while securing scalability and exactness.
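The rank-1 update idea behind (1) can be sketched under the common formulation Q = (1 - c)(I - cW)^(-1), where W is a column-stochastic transition matrix and c a damping factor. When one column of W changes (a unit update), the Sherman-Morrison identity yields the new all-pairs proximities from an outer product of two vectors, without a fresh matrix inversion. The toy graph, c, and the updated column below are all made up, and this dense sketch is not the paper's O(|delta|) algorithm:

```python
import numpy as np

np.random.seed(0)
n = 6
# Toy random directed graph; W is its column-stochastic transition matrix.
A = (np.random.rand(n, n) < 0.5).astype(float)
np.fill_diagonal(A, 0)
A[:, A.sum(axis=0) == 0] = 1.0 / n   # guard against dangling columns
W = A / A.sum(axis=0)

c = 0.85                              # damping factor (hypothetical)
I = np.eye(n)
# All-pairs RWR proximity matrix: Q = (1 - c) * (I - cW)^(-1)
Q = (1 - c) * np.linalg.inv(I - c * W)

# Unit update: replace column j of W, i.e. W' = W + u e_j^T.
j = 2
new_col = np.random.rand(n)
new_col /= new_col.sum()
u = new_col - W[:, j]                 # column difference
v = I[:, j]                           # basis vector e_j

# Sherman-Morrison on M = I - cW:  (M - c u v^T)^(-1)
Minv = Q / (1 - c)
denom = 1 - c * (v @ Minv @ u)
Q_new = (1 - c) * (Minv + c * np.outer(Minv @ u, v @ Minv) / denom)

# Cross-check against recomputation from scratch
W2 = W.copy()
W2[:, j] = new_col
Q_ref = (1 - c) * np.linalg.inv(I - c * W2)
print(np.allclose(Q_new, Q_ref))
```

The correction term np.outer(Minv @ u, v @ Minv) is exactly the "outer product of two vectors" the abstract refers to: only those entries touched by the update need to be written back.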