969 results for "Consistent term structure models"


Relevance: 100.00%

Abstract:

In the first chapter, I develop a panel no-cointegration test which extends the bounds test of Pesaran, Shin and Smith (2001) to the panel framework by considering the individual regressions in a Seemingly Unrelated Regression (SUR) system. This makes it possible to take into account unobserved common factors that contemporaneously affect all units of the panel while providing, at the same time, unit-specific test statistics. Moreover, the approach is particularly suited to panels in which the number of individuals is small relative to the number of time series observations. I develop the algorithm to implement the test and use Monte Carlo simulation to analyze its properties. The small-sample properties of the test are remarkable compared to its single-equation counterpart. I illustrate the use of the test with a test of Purchasing Power Parity in a panel of EU15 countries. In the second chapter of my PhD thesis, I verify the Expectation Hypothesis of the Term Structure (EHTS) in the repurchase agreement (repo) market with a new testing approach. I consider an "inexact" formulation of the EHTS, which allows for a time-varying component in the risk premia, and I treat the interest rates as a non-stationary cointegrated system. The effect of heteroskedasticity is controlled for by means of testing procedures (bootstrap and heteroskedasticity correction) that are robust to variance and covariance shifts over time. I find that the long-run implications of the EHTS are verified. A rolling-window analysis clarifies that the EHTS is only rejected in periods of financial market turbulence. The third chapter introduces the Stata command "bootrank", which implements the bootstrap likelihood ratio rank test algorithm developed by Cavaliere et al. (2012). The command is illustrated through an empirical application on the term structure of interest rates in the US.
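
As an illustration of the general bootstrap-testing logic underlying a command such as bootrank, the sketch below computes a residual-bootstrap p-value for a toy likelihood ratio statistic; the model, statistic and sample size are placeholders, and this is not the Cavaliere et al. (2012) rank-test algorithm itself.

```python
# Minimal sketch of a residual-bootstrap p-value for a likelihood ratio
# statistic. The model and statistic are toy placeholders; this is NOT the
# bootrank command or the Cavaliere et al. (2012) rank-test algorithm.
import numpy as np

rng = np.random.default_rng(0)

def lr_stat(y):
    """Toy LR statistic for H0: mean = 0 in an i.i.d. Gaussian sample."""
    n = y.size
    s2_alt = np.var(y)          # variance MLE under the alternative
    s2_null = np.mean(y ** 2)   # variance MLE with the mean restricted to 0
    return n * (np.log(s2_null) - np.log(s2_alt))

def bootstrap_pvalue(y, n_boot=999):
    stat = lr_stat(y)
    resid = y - y.mean()        # residuals from the unrestricted fit
    boot = np.empty(n_boot)
    for b in range(n_boot):
        # resample residuals, imposing the null (zero mean) on the bootstrap data
        y_star = rng.choice(resid, size=y.size, replace=True)
        boot[b] = lr_stat(y_star)
    return (1 + np.sum(boot >= stat)) / (n_boot + 1)

y = rng.normal(0.1, 1.0, size=200)
print("bootstrap p-value:", bootstrap_pvalue(y))
```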

Relevance: 100.00%

Abstract:

Animal models provide a basis for clarifying the complex pathogenesis of delayed cerebral vasospasm (DCVS) and for screening of potential therapeutic approaches. Arbitrary use of experimental parameters in current models can lead to results of uncertain relevance. The aim of this work was to identify and analyze the most consistent and feasible models and their parameters for each animal.

Relevance: 100.00%

Abstract:

As lightweight and slender structural elements are used more frequently in design, large-scale structures become more flexible and susceptible to excessive vibrations. To ensure the functionality of the structure, the dynamic properties of the occupied structure need to be estimated during the design phase. Traditional analysis methods model occupants simply as additional mass; however, research has shown that human occupants are better modeled as an additional degree of freedom. In the United Kingdom, active and passive crowd models have been proposed by the Joint Working Group (JWG) as a result of a series of analytical and experimental studies. The crowd models are expected to yield a more accurate estimate of the dynamic response of the occupied structure. However, experimental testing recently conducted through a graduate student project at Bucknell University indicated that the proposed passive crowd model might not accurately represent the impact of the occupants on the structure. The objective of this study is to assess the validity of the crowd models proposed by the JWG by comparing the dynamic properties obtained from experimental testing data with analytical modeling results. The experimental data used in this study were collected by Firman in 2010. The analytical results were obtained by performing a time-history analysis on a finite element model of the occupied structure. The crowd models were created based on the recommendations of the JWG combined with the physical properties of the occupants during the experimental study. SAP2000 was used to create the finite element models and to run the analyses; Matlab and ME'scope were used to obtain the dynamic properties of the structure by processing the time-history analysis results from SAP2000. The results of this study indicate that the active crowd model can quite accurately represent the impact on the structure of occupants standing with bent knees, while the passive crowd model could not properly simulate the dynamic response of the structure when occupants were standing straight or sitting on the structure. Future work related to this study involves improving the passive crowd model and evaluating the crowd models with full-scale structure models and operating data.
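
To make the "additional mass" versus "additional degree of freedom" distinction concrete, the sketch below compares the natural frequencies of a single-degree-of-freedom structure with an occupant treated either as added mass or as an attached spring-mass degree of freedom. All numerical values are hypothetical; this is not the JWG crowd model.

```python
# Illustrative comparison (hypothetical numbers, not the JWG crowd model):
# an occupant treated as added mass versus as an extra spring-mass degree of
# freedom attached to a single-degree-of-freedom structure.
import numpy as np

m_s, k_s = 5000.0, 2.0e6        # structure modal mass (kg) and stiffness (N/m), assumed
m_h, k_h = 500.0, 1.0e5         # crowd mass (kg) and stiffness (N/m), assumed

# (a) occupants as additional mass only
f_added_mass = np.sqrt(k_s / (m_s + m_h)) / (2.0 * np.pi)

# (b) occupants as an additional degree of freedom (undamped 2-DOF system)
M = np.diag([m_s, m_h])
K = np.array([[k_s + k_h, -k_h],
              [-k_h,       k_h]])
omega2 = np.linalg.eigvals(np.linalg.solve(M, K)).real
f_two_dof = np.sort(np.sqrt(omega2)) / (2.0 * np.pi)

print(f"added-mass model:  {f_added_mass:.2f} Hz")
print(f"2-DOF crowd model: {f_two_dof[0]:.2f} Hz and {f_two_dof[1]:.2f} Hz")
```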

Relevance: 100.00%

Abstract:

A high-resolution α, x-ray, and γ-ray coincidence spectroscopy experiment was conducted at the GSI Helmholtzzentrum für Schwerionenforschung. Thirty correlated α-decay chains were detected following the fusion-evaporation reaction 48Ca + 243Am. The observations are consistent with previous assignments of similar decay chains as originating from element Z = 115. For the first time, precise spectroscopy allows the derivation of excitation schemes of isotopes along the decay chains starting with elements Z > 112. Comprehensive Monte Carlo simulations accompany the data analysis. Nuclear structure models provide a first-level interpretation.

Relevance: 100.00%

Abstract:

Using miniature thermistors with integrated data loggers, the decrease in summer lake surface water temperature (LSWT) with increasing altitude a.s.l. was investigated in 10 Swiss Alpine lakes located between 613 m a.s.l. and 2339 m a.s.l. The LSWTs exhibit essentially the same short-term structure as regional air temperature, but are about 3 to 5°C higher than the air temperature at the altitude of the lake. LSWTs decrease approximately linearly with increasing altitude at a rate slightly greater than the surface air temperature lapse rate. Diel variations in LSWT are large, implying that single water temperature measurements are unlikely to be representative of the mean. Local factors will affect LSWT more than they affect air temperature, possibly resulting in severe distortion of the empirical relationship between the two. Several implications for paleoclimate reconstruction studies result. (1) Paleolimnologically reconstructed LSWTs are likely to be higher than the air temperatures prevailing at the altitude of the lake. (2) Lakes used for paleoclimate reconstruction should be selected to minimize local effects on LSWT. (3) The calibration of organism-specific quantitative paleotemperature inference models should not be based on single water temperature measurements. (4) Consideration should be given to calibrating such models directly against air temperature rather than water temperature. (5) The primary climate effect on the aquatic biota of high-altitude lakes may be mediated by the timing of the ice cover.
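
As a minimal worked example of the altitude-temperature relationship described here, the sketch below fits a straight line to synthetic LSWT-altitude pairs; the numbers are invented and the fitted rate is illustrative only, not the study's result.

```python
# Toy estimate of an altitudinal LSWT lapse rate by least squares.
# The altitude/temperature pairs below are synthetic, not the measured data.
import numpy as np

rng = np.random.default_rng(1)
altitude = np.array([613, 850, 1100, 1350, 1600, 1850, 2000, 2150, 2250, 2339])  # m a.s.l.
lswt = 22.0 - 0.007 * altitude + rng.normal(0.0, 0.5, altitude.size)             # deg C

slope, intercept = np.polyfit(altitude, lswt, 1)
print(f"estimated LSWT lapse rate: {slope * 1000:.2f} deg C per 1000 m")
```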

Relevance: 100.00%

Abstract:

The design of shell and spatial structures represents an important challenge even with the use of modern computer technology. If we concentrate on concrete shell structures, many problems must be faced, such as the conceptual and structural disposition, optimal shape design, analysis, construction methods, details, etc., and all of these problems are interconnected. As an example, shape optimization requires the use of several disciplines, such as structural analysis, sensitivity analysis, optimization strategies and geometrical design concepts. Similar comments apply to other space structures such as steel trusses with single or double shape and tension structures. Regarding the analysis, the Finite Element Method appears to be the most widespread and versatile technique used in practice. In the application of this method several issues arise. First, either a pertinent shell theory or, alternatively, the degenerated 3-D solid approach should be chosen. According to this choice, a suitable FE model has to be adopted, i.e. a displacement, stress or mixed formulated element. The good behavior of shell structures under dead loads, which are carried towards the supports mainly by compressive stresses, is impaired by the high imperfection sensitivity these structures usually exhibit. This last effect is particularly important if large deformations and material nonlinearities of the shell interact unfavorably, as can be the case for thin reinforced shells. In this respect, the study of the stability of the shell represents a compulsory step in the analysis. There are therefore currently very active fields of research, such as the different descriptions of consistent nonlinear shell models given by Simo, Fox and Rifai, Mantzenmiller and Buchter and Ramm, among others; the consistent formulation of efficient tangent stiffness, as presented by Ortiz and by Schweizerhof and Wriggers, with application to concrete shells exhibiting creep behavior given by Scordelis and coworkers; and, finally, the development of numerical techniques needed to trace the nonlinear response of the structure. The objective of this paper is the last research aspect, i.e. the presentation of a state of the art of existing solution techniques for the nonlinear analysis of structures. In this presentation the following excellent reviews on the subject will be mainly used.
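
As a toy illustration of one family of the solution techniques reviewed (incremental-load Newton-Raphson iteration with a consistent tangent), the sketch below traces the equilibrium path of a hypothetical single-degree-of-freedom structure with a cubic internal force law; it does not implement any of the shell formulations cited above.

```python
# Toy example of one family of solution techniques reviewed here: incremental-
# load Newton-Raphson iteration with a consistent tangent, applied to a
# hypothetical one-degree-of-freedom structure with internal force k*u - c*u^3.
import numpy as np

k, c = 100.0, 40.0                        # assumed stiffness parameters
internal = lambda u: k * u - c * u ** 3   # nonlinear internal force
tangent = lambda u: k - 3.0 * c * u ** 2  # consistent tangent stiffness

u = 0.0
for lam in np.linspace(0.1, 0.9, 9):      # load-factor increments
    P = lam * 30.0                        # external load at this increment
    for _ in range(25):                   # Newton-Raphson iterations
        r = P - internal(u)               # out-of-balance (residual) force
        if abs(r) < 1e-10:
            break
        u += r / tangent(u)
    print(f"load factor {lam:.1f}: displacement u = {u:.4f}")
```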

Relevance: 100.00%

Abstract:

Recent developments in multidimensional heteronuclear NMR spectroscopy and large-scale synthesis of uniformly 13C- and 15N-labeled oligonucleotides have greatly improved the prospects for determination of the solution structure of RNA. However, there are circumstances in which it may be advantageous to label only a segment of the entire RNA chain. For example, in a larger RNA molecule the structural question of interest may reside in a localized domain. Labeling only the corresponding nucleotides simplifies the spectrum and resonance assignments because one can filter proton spectra for coupling to 13C and 15N. Another example is in resolving alternative secondary structure models that are indistinguishable in imino proton connectivities. Here we report a general method for enzymatic synthesis of quantities of segmentally labeled RNA molecules required for NMR spectroscopy. We use the method to distinguish definitively two competing secondary structure models for the 5' half of Caenorhabditis elegans spliced leader RNA by comparison of the two-dimensional [15N] 1H heteronuclear multiple quantum correlation spectrum of the uniformly labeled sample with that of a segmentally labeled sample. The method requires relatively small samples; solutions in the 200-300 microM concentration range, with a total of 30 nmol or approximately 40 micrograms of RNA in approximately 150 microliters, give strong NMR signals in a short accumulation time. The method can be adapted to label an internal segment of a larger RNA chain for study of localized structural problems. This definitive approach provides an alternative to the more common enzymatic and chemical footprinting methods for determination of RNA secondary structure.

Relevance: 100.00%

Abstract:

We briefly review the observed structure and evolution of the M87 jet on scales ≲ 1 parsec (pc; 1 pc = 3.09 × 10^16 m). Filamentary features, limb brightening, and side-to-side oscillation are common characteristics of the pc-scale and kpc-scale jets. The most prominent emission features on both the pc and sub-pc scales appear stationary (v/c < 0.1). Nonetheless, based on the jet's flux evolution, the presence of kpc-scale superluminal motion, and the absence of a visible counter-jet, we argue for the presence of an underlying relativistic flow, consistent with unified models. The initial jet collimation appears to occur on scales <0.1 pc, thus favoring electromagnetic processes associated with a black hole and accretion disk.

Relevance: 100.00%

Abstract:

Many modern applications fall into the category of "large-scale" statistical problems, in which both the number of observations n and the number of features or parameters p may be large. Many existing methods focus on point estimation, despite the continued relevance of uncertainty quantification in the sciences, where the number of parameters to estimate often exceeds the sample size even after the huge increases in n typically seen in many fields. Thus, the tendency in some areas of industry to dispense with traditional statistical analysis on the grounds that "n = all" is of little relevance outside of certain narrow applications. The main result of the Big Data revolution in most fields has instead been to make computation much harder without reducing the importance of uncertainty quantification. Bayesian methods excel at uncertainty quantification, but often scale poorly relative to alternatives. This conflict between the statistical advantages of Bayesian procedures and their substantial computational disadvantages is perhaps the greatest challenge facing modern Bayesian statistics, and is the primary motivation for the work presented here.

Two general strategies for scaling Bayesian inference are considered. The first is the development of methods that lend themselves to faster computation, and the second is design and characterization of computational algorithms that scale better in n or p. In the first instance, the focus is on joint inference outside of the standard problem of multivariate continuous data that has been a major focus of previous theoretical work in this area. In the second area, we pursue strategies for improving the speed of Markov chain Monte Carlo algorithms, and characterizing their performance in large-scale settings. Throughout, the focus is on rigorous theoretical evaluation combined with empirical demonstrations of performance and concordance with the theory.

One topic we consider is modeling the joint distribution of multivariate categorical data, often summarized in a contingency table. Contingency table analysis routinely relies on log-linear models, with latent structure analysis providing a common alternative. Latent structure models lead to a reduced rank tensor factorization of the probability mass function for multivariate categorical data, while log-linear models achieve dimensionality reduction through sparsity. Little is known about the relationship between these notions of dimensionality reduction in the two paradigms. In Chapter 2, we derive several results relating the support of a log-linear model to nonnegative ranks of the associated probability tensor. Motivated by these findings, we propose a new collapsed Tucker class of tensor decompositions, which bridge existing PARAFAC and Tucker decompositions, providing a more flexible framework for parsimoniously characterizing multivariate categorical data. Taking a Bayesian approach to inference, we illustrate empirical advantages of the new decompositions.
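
To make the latent-structure factorization concrete, the sketch below assembles the joint probability tensor of three categorical variables from a latent-class (PARAFAC-type) representation; the dimensions and probabilities are arbitrary, and this is not the collapsed Tucker decomposition proposed in Chapter 2.

```python
# Sketch of the latent-class (PARAFAC-type) representation of the joint PMF of
# multivariate categorical data: p(x1,...,xp) = sum_h nu_h * prod_j psi_j(h, x_j).
# Dimensions and probabilities are arbitrary; this is not the collapsed Tucker
# decomposition proposed in Chapter 2.
import numpy as np

rng = np.random.default_rng(0)
p, d, k = 3, 4, 2                          # 3 variables, 4 levels each, 2 latent classes

nu = rng.dirichlet(np.ones(k))             # latent class weights
psi = [rng.dirichlet(np.ones(d), size=k)   # per-variable conditional PMFs, shape (k, d)
       for _ in range(p)]

# Assemble the full d x d x d probability tensor (nonnegative rank <= k)
tensor = np.zeros((d,) * p)
for h in range(k):
    tensor += nu[h] * np.einsum('i,j,l->ijl', psi[0][h], psi[1][h], psi[2][h])

print("tensor entries sum to one:", np.isclose(tensor.sum(), 1.0))
```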

Latent class models for the joint distribution of multivariate categorical data, such as the PARAFAC decomposition, play an important role in the analysis of population structure. In this context, the number of latent classes is interpreted as the number of genetically distinct subpopulations of an organism, an important factor in the analysis of evolutionary processes and conservation status. Existing methods focus on point estimates of the number of subpopulations and lack robust uncertainty quantification. Moreover, whether the number of latent classes in these models is even an identified parameter is an open question. In Chapter 3, we show that when the model is properly specified, the correct number of subpopulations can be recovered almost surely. We then propose an alternative method for estimating the number of latent subpopulations that provides good quantification of uncertainty, and provide a simple procedure for verifying that the proposed method is consistent for the number of subpopulations. The performance of the model in estimating the number of subpopulations and in other common population structure inference problems is assessed in simulations and a real data application.

In contingency table analysis, sparse data are frequently encountered even for modest numbers of variables, resulting in non-existence of maximum likelihood estimates. A common solution is to obtain regularized estimates of the parameters of a log-linear model. Bayesian methods provide a coherent approach to regularization, but are often computationally intensive. Conjugate priors ease computational demands, but the conjugate Diaconis-Ylvisaker priors for the parameters of log-linear models do not give rise to closed-form credible regions, complicating posterior inference. In Chapter 4 we derive the optimal Gaussian approximation to the posterior for log-linear models with Diaconis-Ylvisaker priors, and provide convergence rates and finite-sample bounds for the Kullback-Leibler divergence between the exact posterior and the optimal Gaussian approximation. We demonstrate empirically in simulations and in a real data application that the approximation is highly accurate, even in relatively small samples. The proposed approximation provides a computationally scalable and principled approach to regularized estimation and approximate Bayesian inference for log-linear models.
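
As a generic illustration of the idea of replacing a posterior with a Gaussian, the sketch below computes a Laplace approximation for a one-parameter Poisson log-linear model with a normal prior; it is not the optimal Gaussian approximation, and it does not use the Diaconis-Ylvisaker prior derived in Chapter 4.

```python
# Generic illustration of approximating a posterior by a Gaussian: a Laplace
# approximation for a one-parameter Poisson log-linear model with a normal
# prior. This is NOT the optimal Gaussian approximation, nor the
# Diaconis-Ylvisaker prior, derived in Chapter 4.
import numpy as np
from scipy.optimize import minimize_scalar

y = np.array([3, 5, 2, 4, 6])                  # hypothetical cell counts

def neg_log_post(beta):
    mu = np.exp(beta)                          # common cell mean, log link
    log_lik = np.sum(y * beta - mu)            # Poisson log-likelihood (up to constants)
    log_prior = -0.5 * beta ** 2 / 10.0        # N(0, 10) prior
    return -(log_lik + log_prior)

beta_hat = minimize_scalar(neg_log_post).x     # posterior mode
h = 1e-4                                       # finite-difference curvature at the mode
hess = (neg_log_post(beta_hat + h) - 2.0 * neg_log_post(beta_hat)
        + neg_log_post(beta_hat - h)) / h ** 2
print(f"Gaussian approximation: N({beta_hat:.3f}, {1.0 / hess:.4f})")
```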

Another challenging and somewhat non-standard joint modeling problem is inference on tail dependence in stochastic processes. In applications where extreme dependence is of interest, data are almost always time-indexed. Existing methods for inference and modeling in this setting often cluster extreme events or choose window sizes with the goal of preserving temporal information. In Chapter 5, we propose an alternative paradigm for inference on tail dependence in stochastic processes with arbitrary temporal dependence structure in the extremes, based on the idea that the information on strength of tail dependence and the temporal structure in this dependence are both encoded in waiting times between exceedances of high thresholds. We construct a class of time-indexed stochastic processes with tail dependence obtained by endowing the support points in de Haan's spectral representation of max-stable processes with velocities and lifetimes. We extend Smith's model to these max-stable velocity processes and obtain the distribution of waiting times between extreme events at multiple locations. Motivated by this result, a new definition of tail dependence is proposed that is a function of the distribution of waiting times between threshold exceedances, and an inferential framework is constructed for estimating the strength of extremal dependence and quantifying uncertainty in this paradigm. The method is applied to climatological, financial, and electrophysiology data.
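
The basic observable that this chapter builds on, waiting times between exceedances of a high threshold, is easy to compute; the sketch below does so on a synthetic autocorrelated series, without any of the max-stable velocity process machinery developed in the chapter.

```python
# Sketch of the basic quantity this chapter builds on: waiting times between
# exceedances of a high threshold in a time-indexed series. The AR(1) series
# is a synthetic stand-in for real climatological or financial data.
import numpy as np

rng = np.random.default_rng(0)
n, phi = 5000, 0.7
x = np.zeros(n)
for t in range(1, n):                    # simple autocorrelated (AR(1)) series
    x[t] = phi * x[t - 1] + rng.normal()

u = np.quantile(x, 0.98)                 # high threshold
exceed_times = np.flatnonzero(x > u)     # time indices of threshold exceedances
waiting = np.diff(exceed_times)          # inter-exceedance waiting times

print("number of exceedances:", exceed_times.size)
print("mean / median waiting time:", waiting.mean(), np.median(waiting))
```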

The remainder of this thesis focuses on posterior computation by Markov chain Monte Carlo (MCMC), the dominant paradigm for posterior computation in Bayesian analysis. It has long been common to control computation time by making approximations to the Markov transition kernel. Comparatively little attention has been paid to convergence and estimation error in these approximating Markov chains. In Chapter 6, we propose a framework for assessing when to use approximations in MCMC algorithms, and how much error in the transition kernel should be tolerated to obtain optimal estimation performance with respect to a specified loss function and computational budget. The results require only ergodicity of the exact kernel and control of the kernel approximation accuracy. The theoretical framework is applied to approximations based on random subsets of data, low-rank approximations of Gaussian processes, and a novel approximating Markov chain for discrete mixture models.

Data augmentation Gibbs samplers are arguably the most popular class of algorithm for approximately sampling from the posterior distribution for the parameters of generalized linear models. The truncated Normal and Polya-Gamma data augmentation samplers are standard examples for probit and logit links, respectively. Motivated by an important problem in quantitative advertising, in Chapter 7 we consider the application of these algorithms to modeling rare events. We show that when the sample size is large but the observed number of successes is small, these data augmentation samplers mix very slowly, with a spectral gap that converges to zero at a rate at least proportional to the reciprocal of the square root of the sample size up to a log factor. In simulation studies, moderate sample sizes result in high autocorrelations and small effective sample sizes. Similar empirical results are observed for related data augmentation samplers for multinomial logit and probit models. When applied to a real quantitative advertising dataset, the data augmentation samplers mix very poorly. Conversely, Hamiltonian Monte Carlo and a type of independence chain Metropolis algorithm show good mixing on the same dataset.
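
For concreteness, the sketch below implements the standard truncated-Normal (Albert-Chib) data augmentation Gibbs sampler for probit regression referred to in this paragraph, with a flat prior and simulated rare-event data; the prior, sample size and parameter values are illustrative choices, not those of the chapter's advertising application.

```python
# Minimal sketch of the truncated-Normal (Albert-Chib) data augmentation Gibbs
# sampler for probit regression, with a flat prior on the coefficients and
# simulated rare-event data. Sample sizes and parameters are illustrative.
import numpy as np
from scipy.stats import truncnorm

rng = np.random.default_rng(0)
n, p = 1000, 2
X = np.column_stack([np.ones(n), rng.normal(size=n)])
beta_true = np.array([-2.5, 0.5])             # intercept chosen so successes are rare
y = (X @ beta_true + rng.normal(size=n) > 0).astype(int)

XtX_inv = np.linalg.inv(X.T @ X)
L = np.linalg.cholesky(XtX_inv)
beta, draws = np.zeros(p), []
for it in range(1000):
    mu = X @ beta
    # z_i | beta, y_i is N(mu_i, 1) truncated to (0, inf) if y_i = 1, else (-inf, 0);
    # a and b are the truncation bounds standardized by loc and scale.
    a = np.where(y == 1, -mu, -np.inf)
    b = np.where(y == 1, np.inf, -mu)
    z = truncnorm.rvs(a, b, loc=mu, scale=1.0, random_state=rng)
    # beta | z is N((X'X)^{-1} X'z, (X'X)^{-1}) under the flat prior
    beta = XtX_inv @ (X.T @ z) + L @ rng.normal(size=p)
    draws.append(beta)

draws = np.array(draws)[200:]                 # discard burn-in
print("posterior means:", draws.mean(axis=0))
```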

Relevance: 100.00%

Abstract:

An important assumption in the statistical analysis of the financial market effects of the central bank's large-scale asset purchase program is that the "long-term debt stock variables were exogenous to term premia". We test this assumption for a small open economy in a currency union over the period 2000M3 to 2015M10 via the determinants of short-term financing relative to long-term financing. Empirical estimates indicate that the maturity composition of debt responds neither to the level of interest rates nor to the term structure. These findings suggest weak adherence to the cost-minimization mandate of debt management. However, we find that volatility decreases, and relative market size increases, short-term financing relative to long-term financing, while short-term financing decreases as government indebtedness rises.
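
A schematic version of the kind of reduced-form regression described here is sketched below; the variable names mirror the determinants listed in the abstract, but the series are simulated placeholders rather than the actual data or specification.

```python
# Schematic version of the reduced-form regression studied here: short-term
# relative to long-term financing regressed on the interest-rate level, the
# term-structure slope, volatility, relative market size and indebtedness.
# All series are simulated placeholders, not the actual data.
import numpy as np

rng = np.random.default_rng(0)
T = 188                                        # roughly 2000M3-2015M10, monthly
level, slope, vol, rel_size, debt = rng.normal(size=(5, T))
short_share = 0.3 - 0.1 * vol + 0.1 * rel_size - 0.05 * debt + rng.normal(0, 0.1, T)

X = np.column_stack([np.ones(T), level, slope, vol, rel_size, debt])
coef, *_ = np.linalg.lstsq(X, short_share, rcond=None)
for name, c in zip(["const", "level", "slope", "volatility", "rel. size", "debt"], coef):
    print(f"{name:>10}: {c: .3f}")
```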

Relevance: 100.00%

Abstract:

This paper evaluates the performance of a survivorship bias-free data set of Portuguese funds investing in Euro-denominated bonds by using conditional models that consider the public information available to investors when the returns are generated. We find that bond funds underperform the market significantly and by an economically relevant magnitude. This underperformance cannot be explained by the expenses they charge. Our findings support the use of conditional performance evaluation models, since we find strong evidence of both time-varying risk and performance, dependent on the slope of the term structure and the inverse relative wealth variables. We also show that survivorship bias has a significant impact on performance estimates. Furthermore, during the European debt crisis, bond fund managers performed significantly better than in non-crisis periods and were able to achieve neutral performance. This improved performance throughout the crisis seems to be related to changes in funds’ investment styles.
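
As a rough sketch of what a conditional performance regression with time-varying alpha and beta looks like, the code below estimates a Ferson-Schadt-style specification with lagged information variables named after those in the abstract (term-structure slope and inverse relative wealth); the data are simulated and the specification is generic, not the paper's exact model.

```python
# Rough sketch of a conditional performance regression with time-varying alpha
# and beta driven by lagged public information (term-structure slope and
# inverse relative wealth), in the spirit of Ferson-Schadt-type models.
# All series are simulated; this is not the paper's data set or exact model.
import numpy as np

rng = np.random.default_rng(0)
T = 240
z_slope, z_irw = rng.normal(size=(2, T))           # lagged information variables
mkt_excess = 0.004 + rng.normal(0.0, 0.02, T)      # bond index excess return
fund_excess = -0.001 + (0.9 + 0.1 * z_slope) * mkt_excess + rng.normal(0.0, 0.005, T)

X = np.column_stack([
    np.ones(T),                 # average (conditional) alpha
    z_slope, z_irw,             # time variation in alpha
    mkt_excess,                 # average beta
    z_slope * mkt_excess,       # beta varying with the term-structure slope
    z_irw * mkt_excess,         # beta varying with inverse relative wealth
])
coef, *_ = np.linalg.lstsq(X, fund_excess, rcond=None)
print("average conditional alpha:", coef[0])
print("average beta:", coef[3])
```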

Relevance: 100.00%

Abstract:

We start in Chapter 2 by investigating linear matrix-valued SDEs and the Itô-stochastic Magnus expansion. The Itô-stochastic Magnus expansion provides an efficient numerical scheme to solve matrix-valued SDEs. We show convergence of the expansion up to a stopping time τ and provide an asymptotic estimate of the cumulative distribution function of τ. Moreover, we show how to apply it to solve SPDEs with one and two spatial dimensions, with high accuracy, by combining it with the method of lines. We will see that the Magnus expansion allows us to use GPU techniques, leading to major performance improvements compared with a standard Euler-Maruyama scheme. In Chapter 3, we study a short-rate model in a Cox-Ingersoll-Ross (CIR) framework for negative interest rates. We define the short rate as the difference of two independent CIR processes and add a deterministic shift to guarantee a perfect fit to the market term structure. We show how to use the Gram-Charlier expansion to efficiently calibrate the model to the market swaption surface and price Bermudan swaptions with good accuracy. We take two different perspectives on rating transition modelling. In Section 4.4, we study inhomogeneous continuous-time Markov chains (ICTMC) as a candidate for a rating model with deterministic rating transitions. We extend this model by taking a Lie group perspective in Section 4.5 to allow for stochastic rating transitions. In both cases, we compare the most popular choices of change-of-measure technique and show how to efficiently calibrate both models to the available historical rating data and market default probabilities. Finally, we apply the techniques developed in this thesis to minimize the collateral-inclusive Credit/Debit Valuation Adjustments under the constraint of small collateral postings, using a collateral account dependent on rating triggers.
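
A hedged sketch of the short-rate construction described above is given below: the rate is taken as the difference of two independent CIR processes plus a deterministic shift (a constant here, rather than a curve fitted to the market term structure), simulated with a plain full-truncation Euler scheme instead of the Magnus or Gram-Charlier machinery of the thesis; all parameters are arbitrary.

```python
# Hedged sketch of the short-rate construction described above: the rate is the
# difference of two independent CIR processes plus a deterministic shift (a
# constant here), simulated with a full-truncation Euler scheme rather than the
# Magnus or Gram-Charlier machinery of the thesis. Parameters are arbitrary.
import numpy as np

rng = np.random.default_rng(0)

def simulate_cir(x0, kappa, theta, sigma, T=5.0, n=1250):
    dt = T / n
    x = np.empty(n + 1)
    x[0] = x0
    for i in range(n):
        xp = max(x[i], 0.0)                            # full truncation
        x[i + 1] = (x[i] + kappa * (theta - xp) * dt
                    + sigma * np.sqrt(xp * dt) * rng.normal())
    return x

x = simulate_cir(0.02, 0.8, 0.03, 0.10)
y = simulate_cir(0.03, 0.5, 0.04, 0.12)
shift = 0.005                                          # deterministic shift
r = x - y + shift                                      # short rate; can be negative
print("min / max simulated short rate:", r.min(), r.max())
```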

Relevance: 100.00%

Abstract:

During the last ten years, graphene oxide has been explored in many applications due to its remarkable electroconductivity, thermal properties and mobility of charge carriers, among other properties. As discussed in this review, the literature suggests that a total characterization of graphene oxide must be conducted because oxidation debris (synthesis impurities) present in the graphene oxides could act as a graphene oxide surfactant, stabilizing aqueous dispersions. It is also important to note that the structure models of graphene oxide need to be revisited because of significant implications for its chemical composition and its direct covalent functionalization. Another aspect that is discussed is the need to consider graphene oxide surface chemistry. The hemolysis assay is recommended as a reliable test for the preliminary assessment of graphene oxide toxicity, biocompatibility and cell membrane interaction. More recently, graphene oxide has been extensively explored for drug delivery applications. An important increase in research efforts in this emerging field is clearly represented by the hundreds of related publications per year, including some reviews. Many studies have been performed to explore the graphene oxide properties that enable it to deliver more than one activity simultaneously and to combine multidrug systems with photothermal therapy, indicating that graphene oxide is an attractive tool to overcome hurdles in cancer therapies. Some strategic aspects of the application of these materials in cancer treatment are also discussed. In vitro studies have indicated that graphene oxide can also promote stem cell adhesion, growth and differentiation, and this review discusses the recent and pertinent findings regarding graphene oxide as a valuable nanomaterial for stem cell research in medicine. The protein corona is a key concept in nanomedicine and nanotoxicology because it provides a biomolecular identity for nanomaterials in a biological environment. Understanding protein corona-nanomaterial interactions and their influence on cellular responses is a challenging task at the nanobiointerface. New aspects and developments in this area are discussed.

Relevance: 100.00%

Abstract:

Sugarcane yield and quality are affected by a number of biotic and abiotic stresses. In response to such stresses, plants may increase the activities of some enzymes, such as glutathione transferases (GSTs), which are involved in the detoxification of xenobiotics. Thus, a sugarcane GST was modelled and molecular docking was performed with the program LIGIN to investigate the contributions of the active-site residues to the binding of reduced glutathione (GSH) and 1-chloro-2,4-dinitrobenzene (CDNB). As a result, W13 and I119 were identified as key residues for the specificity of sugarcane GSTF1 (SoGSTF1) towards CDNB. To obtain a better understanding of the catalytic specificity of SoGSTF1, two mutants were designed, W13L and I119F. Tertiary structure models were built and the same docking procedure was performed to explain the interactions between the sugarcane GSTs and GSH and CDNB. An electron-sharing network for GSH interaction was also proposed. The SoGSTF1 and mutant gene constructs were cloned and expressed in Escherichia coli, and the expressed proteins were purified. Kinetic analyses revealed different Km values not only for CDNB but also for GSH. The Km values were 0.2, 1.3 and 0.3 mM for GSH, and 0.9, 1.2 and 0.5 mM for CDNB, for the wild type, W13L mutant and I119F mutant, respectively. The Vmax values were 297.6, 224.5 and 171.8 μmol min⁻¹ mg⁻¹ protein for GSH, and 372.3, 170.6 and 160.4 μmol min⁻¹ mg⁻¹ protein for CDNB.
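
As a small worked illustration of what the reported constants imply, the sketch below evaluates the Michaelis-Menten rate v = Vmax·[S]/(Km + [S]) for GSH using the Km and Vmax values quoted above; the substrate concentration is a hypothetical choice, not one used in the study.

```python
# Worked illustration of the reported GSH kinetics: Michaelis-Menten rates
# v = Vmax * [S] / (Km + [S]) using the Km (mM) and Vmax (umol min^-1 mg^-1)
# values quoted above; the GSH concentration is a hypothetical choice.
params = {
    "wild type": (0.2, 297.6),
    "W13L":      (1.3, 224.5),
    "I119F":     (0.3, 171.8),
}

S = 1.0  # hypothetical GSH concentration, mM
for enzyme, (km, vmax) in params.items():
    v = vmax * S / (km + S)
    print(f"{enzyme:>9}: v = {v:.1f} umol min^-1 mg^-1 at [GSH] = {S} mM")
```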

Relevance: 100.00%

Abstract:

We focus on mixtures of factor analyzers as a method for model-based density estimation from high-dimensional data, and hence for the clustering of such data. This approach enables a normal mixture model to be fitted to a sample of n data points of dimension p, where p is large relative to n. The number of free parameters is controlled through the dimension of the latent factor space. Working in this reduced space allows each component-covariance matrix to be modelled with a complexity lying between that of the isotropic and full covariance structure models. We illustrate the use of mixtures of factor analyzers in a practical example concerning the clustering of cell lines on the basis of gene expressions from microarray experiments.
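
The parsimony argument can be made concrete with a small sketch of the factor-analytic covariance structure and its parameter count; the dimensions below are illustrative, and this is not the authors' fitting algorithm.

```python
# Sketch of the parsimony argument: in a mixture of factor analyzers each
# component covariance has the form Sigma = Lambda Lambda^T + Psi, so the
# number of covariance parameters grows with the latent dimension q instead of
# p(p+1)/2. The dimensions below are illustrative.
import numpy as np

rng = np.random.default_rng(0)
p, q = 1000, 5                                  # observed and latent dimensions
Lambda = rng.normal(size=(p, q))                # factor loadings
Psi = np.diag(rng.uniform(0.5, 1.5, size=p))    # diagonal noise covariance

Sigma = Lambda @ Lambda.T + Psi                 # component-covariance matrix

print("full covariance parameters:", p * (p + 1) // 2)
print("factor-analytic parameters:", p * q + p)
print("Sigma shape:", Sigma.shape)
```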