960 results for Conditional correlations
Abstract:
In this paper we propose a parsimonious regime-switching approach to model the correlations between assets, the threshold conditional correlation (TCC) model. This method allows the dynamics of the correlations to change from one state (or regime) to another as a function of observable transition variables. Our model is similar in spirit to Silvennoinen and Teräsvirta (2009) and Pelletier (2006) but with the appealing feature that it does not suffer from the curse of dimensionality. In particular, estimation of the parameters of the TCC involves a simple grid search procedure. In addition, it is easy to guarantee a positive definite correlation matrix because the TCC estimator is given by the sample correlation matrix, which is positive definite by construction. The methodology is illustrated by evaluating the behaviour of international equities, government bonds and major exchange rates, first separately and then jointly. We also test for, and allow, different parts of the correlation matrix to be governed by different transition variables. For this, we estimate a multi-threshold TCC specification. Further, we evaluate the economic performance of the TCC model against a constant conditional correlation (CCC) estimator using a Diebold-Mariano type test. We conclude that threshold correlation modelling gives rise to a significant reduction in the portfolio's variance.
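As a rough illustration of the grid-search idea described in this abstract (a minimal sketch, not the authors' code; the two-regime setup, the likelihood criterion and all variable names are assumptions):

```python
import numpy as np

def tcc_grid_search(returns, transition_var, grid):
    """Two-regime threshold conditional correlation (TCC) sketch.

    returns        : (T, N) array of returns, assumed already standardized by
                     univariate volatility models.
    transition_var : (T,) observable transition variable.
    grid           : iterable of candidate threshold values.
    """
    best = None
    for c in grid:
        low, high = transition_var <= c, transition_var > c
        if low.sum() < 10 or high.sum() < 10:      # require enough data in each regime
            continue
        ll = 0.0
        regimes = []
        for mask in (low, high):
            z = returns[mask]
            R = np.corrcoef(z, rowvar=False)       # sample correlation matrix of the regime
            Rinv = np.linalg.inv(R)
            # Gaussian correlation log-likelihood for standardized residuals
            ll -= 0.5 * (mask.sum() * np.log(np.linalg.det(R))
                         + np.einsum('ti,ij,tj->', z, Rinv - np.eye(R.shape[0]), z))
            regimes.append(R)
        if best is None or ll > best[0]:
            best = (ll, c, regimes[0], regimes[1])
    return best   # (log-likelihood, threshold, R_low, R_high)
```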
Abstract:
In this paper, we extend the debate concerning Credit Default Swap valuation to include time-varying correlations and covariances. Traditional multivariate techniques treat the correlations between covariates as constant over time; however, this view is not supported by the data. Moreover, since financial data do not follow a normal distribution because of their heavy tails, modeling the data using a Generalized Linear Model (GLM) incorporating copulas emerges as a more robust technique than traditional approaches. This paper also includes an empirical analysis of the regime-switching dynamics of credit risk in the presence of liquidity, following the general practice of assuming that credit and market risk follow a Markov process. The study was based on Credit Default Swap data obtained from Bloomberg spanning the period January 1, 2004 to August 8, 2006. The empirical examination of the regime-switching tendencies provided quantitative support to the anecdotal view that liquidity decreases as credit quality deteriorates. The analysis also examined the joint probability distribution of the credit risk determinants across credit quality through the use of a copula function, which disaggregates the behavior embedded in the marginal gamma distributions so as to isolate the level of dependence captured by the copula. The results suggest that the time-varying joint correlation matrix performed far better than the constant correlation matrix, the centerpiece of linear regression models.
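A minimal sketch of the gamma-marginals-plus-copula idea mentioned above (an illustration only, not the paper's estimation procedure; the Gaussian copula choice and all names are assumptions):

```python
import numpy as np
from scipy import stats

def fit_gamma_gaussian_copula(data):
    """Fit gamma marginals column by column, then estimate a Gaussian-copula
    correlation matrix on the probability-integral-transformed data.

    data : (T, N) array of positive observations (e.g. spreads, liquidity proxies).
    """
    T, N = data.shape
    marginals = []
    U = np.empty((T, N))
    for j in range(N):
        a, loc, scale = stats.gamma.fit(data[:, j], floc=0)   # gamma marginal, location fixed at 0
        marginals.append((a, loc, scale))
        U[:, j] = stats.gamma.cdf(data[:, j], a, loc=loc, scale=scale)
    # Normal scores of the uniforms; their correlation matrix parameterizes the Gaussian copula
    Z = stats.norm.ppf(np.clip(U, 1e-6, 1 - 1e-6))
    copula_corr = np.corrcoef(Z, rowvar=False)
    return marginals, copula_corr
```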
Abstract:
This dissertation contains four essays that all share a common purpose: developing new methodologies to exploit the potential of high-frequency data for the measurement, modeling and forecasting of financial asset volatility and correlations. The first two chapters provide useful tools for univariate applications, while the last two chapters develop multivariate methodologies. In chapter 1, we introduce a new class of univariate volatility models named FloGARCH models. FloGARCH models provide a parsimonious joint model for low-frequency returns and realized measures, and are sufficiently flexible to capture long memory as well as asymmetries related to leverage effects. We analyze the performance of the models in a realistic numerical study and on the basis of a data set composed of 65 equities. Using more than 10 years of high-frequency transactions, we document significant statistical gains related to the FloGARCH models in terms of in-sample fit, out-of-sample fit and forecasting accuracy compared to classical and Realized GARCH models. In chapter 2, using 12 years of high-frequency transactions for 55 U.S. stocks, we argue that combining low-frequency exogenous economic indicators with high-frequency financial data improves the ability of conditionally heteroskedastic models to forecast the volatility of returns, their full multi-step-ahead conditional distribution and the multi-period Value-at-Risk. Using a refined version of the Realized LGARCH model allowing for a time-varying intercept and implemented with realized kernels, we document that nominal corporate profits and term spreads have strong long-run predictive ability and generate accurate risk-measure forecasts over long horizons. The results are based on several loss functions and tests, including the Model Confidence Set. Chapter 3 is joint work with David Veredas. We study the class of disentangled realized estimators for the integrated covariance matrix of Brownian semimartingales with finite-activity jumps. These estimators separate correlations and volatilities. We analyze different combinations of quantile- and median-based realized volatilities, and four estimators of realized correlations with three synchronization schemes. Their finite-sample properties are studied under four data-generating processes, with and without microstructure noise, and under synchronous and asynchronous trading. The main finding is that the pre-averaged version of the disentangled estimators based on Gaussian ranks (for the correlations) and median deviations (for the volatilities) provides a precise, computationally efficient, and easy alternative for measuring integrated covariances on the basis of noisy and asynchronous prices. Along these lines, a minimum-variance portfolio application shows the superiority of this disentangled realized estimator in terms of numerous performance metrics. Chapter 4 is co-authored with Niels S. Hansen, Asger Lunde and Kasper V. Olesen, all affiliated with CREATES at Aarhus University. We propose to use the Realized Beta GARCH model to exploit the potential of high-frequency data in commodity markets. The model produces high-quality forecasts of pairwise correlations between commodities, which can be used to construct a composite covariance matrix. We evaluate the quality of this matrix in a portfolio context and compare it to models used in the industry. We demonstrate significant economic gains in a realistic setting including short-selling constraints and transaction costs.
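For readers unfamiliar with realized measures of the kind used throughout this dissertation, the sketch below computes a plain realized covariance and correlation matrix from synchronized intraday returns (a textbook estimator assuming noise-free, synchronous sampling, not the pre-averaged or rank-based estimators studied in chapter 3):

```python
import numpy as np

def realized_cov_corr(intraday_prices):
    """Realized covariance and correlation from synchronized intraday prices.

    intraday_prices : (M+1, N) array of prices for N assets sampled on a common
                      intraday grid (e.g. 5-minute) over one trading day.
    """
    r = np.diff(np.log(intraday_prices), axis=0)   # (M, N) intraday log-returns
    rcov = r.T @ r                                  # realized covariance: sum of outer products
    vol = np.sqrt(np.diag(rcov))                    # realized volatilities
    rcorr = rcov / np.outer(vol, vol)               # realized correlation matrix
    return rcov, rcorr
```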
Abstract:
This study compares the impact of an obesogenic environment (OE) in six different periods of development on sperm parameters and the testicular structure of adult rats, and their correlations with sex steroids and the metabolic scenario. Wistar rats were exposed to OE during gestation (O1), during gestation/lactation (O2), from weaning to adulthood (O3), from lactation to adulthood (O4), from gestation to sexual maturity (O5), and after sexual maturation (O6). OE was induced by a 20% fat diet, and control groups were fed a balanced diet (4% fat). Serum leptin levels and the adiposity index indicated that all groups, except O1, were obese. Three progressive levels of impaired metabolic status were observed: O1 presented insulin resistance; O2 was insulin resistant and obese; and groups O3, O4, and O5 were insulin resistant, obese, and diabetic. These three levels of metabolic damage were proportional to the increase in leptin and the decrease in circulating testosterone. The impairment in daily sperm production (DSP) paralleled these three levels of metabolic and hormonal damage, being marginal in O1, greater in O2, and highest in groups O3, O4, O5, and O6. None of the OE periods affected the sperm transit time in the epididymis, and the lower sperm reserves were caused mainly by impaired DSP. In conclusion, OE during sexual maturation markedly reduces DSP in adulthood in the rat. A severe reduction in DSP also occurs with OE exposure during gestation/lactation but not during gestation alone, indicating that breast-feeding is a critical period for spermatogenic impairment under obesogenic conditions.
Abstract:
We perform variational studies of the interaction-localization problem to describe the interaction-induced renormalizations of the effective (screened) random potential seen by quasiparticles. Here we present results of careful finite-size scaling studies for the conductance of disordered Hubbard chains at half-filling and zero temperature. While our results indicate that quasiparticle wave functions remain exponentially localized even in the presence of moderate to strong repulsive interactions, we show that interactions produce a strong decrease of the characteristic conductance scale g^{*} signaling the crossover to strong localization. This effect, which cannot be captured by a simple renormalization of the disorder strength, instead reflects a peculiar non-Gaussian form of the spatial correlations of the screened disordered potential, a hitherto neglected mechanism to dramatically reduce the impact of Anderson localization (interference) effects.
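For orientation, the crossover scale mentioned above is conventionally read off from the scaling of the typical conductance with chain length (a generic strong-localization form with convention-dependent factors, stated here as an assumption for illustration, not a formula from the paper):
\[
g_{\mathrm{typ}}(L) = \exp\langle \ln g(L) \rangle \sim e^{-2L/\xi} \quad (L \gg \xi),
\]
so that the crossover to strong localization occurs near the length \(L^{*}\) at which \(g_{\mathrm{typ}}(L^{*}) \approx g^{*}\).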
Abstract:
Nutrient composition, phenolic compounds, antioxidant activity and estimated glycemic index (EGI) were evaluated in sorghum bran (SB) and decorticated sorghum flour (DSF), obtained with a rice polisher, as well as in whole sorghum flour (WSF). Correlations between the EGI and the studied parameters were determined. SB presented the highest protein, lipid, ash, β-glucan, and total and insoluble dietary fiber contents, and the lowest non-resistant and total starch contents. The highest carbohydrate and resistant starch (RS) contents were found in DSF and WSF, respectively. Phenolic compounds and antioxidant activities were concentrated in SB. The EGI values were: DSF 84.5±0.41; WSF 77.2±0.33; and SB 60.3±0.78. Phenolic compounds, specific flavonoids and antioxidant activities, as well as the total, insoluble and soluble dietary fiber and β-glucans of the sorghum flour samples, were all negatively correlated with the EGI. RS content was not correlated with the EGI.
Abstract:
The family Malpighiaceae presents species with different habits, fruit types and cytological characters. Climbing is considered the most derived habit, followed, respectively, by the shrubby and arboreal habits. The present study examines the relationship between basic chromosome numbers and the derivation of the climbing habit and fruit types in Malpighiaceae. A comparison of all the chromosome number reports for Malpighiaceae showed a predominance of chromosome numbers based on x=5 or 10 in the genera of the sub-family Malpighioideae, mainly represented by climbers with winged fruits, whereas non-climbing species with non-winged fruits, which predominate in the sub-family Byrsonimoideae, had counts based on x=6, considered the least derived basic number for the family. Based on these data, supported by statistical tests, and on the monophyletic origin of the family, we propose the hypothesis that the morphological derivation of habit and fruit is correlated with variation in the basic chromosome number in the family Malpighiaceae.
Abstract:
The goals of this research were to estimate the phenotypic correlations among various meat quality traits in a male broiler line and to describe the relationships among these variables. Phenotypic correlations were determined among quality traits, isolating the effects of slaughter date, the age of the mother and sex. The evaluated traits were pH measured at 0, 6 and 24 hours after slaughter, color parameters, water loss due to exudation, thawing and cooking of the meat, and shear force. The associations found to be significant (P<0.01) were, in most cases, weak or moderate, varying from -0.35 to 0.28. The initial pH of the meat was not associated (P>0.05) with the other traits, whereas the pH at 24 hours after slaughter can directly interfere with the attributes of the meat, since this trait was inversely related to lightness and water losses, which indicates an effect of the pH decline over the 24 h after slaughter on protein denaturation. This study demonstrates that poultry meat quality variables are related and that there is a phenotypic association between lightness, cooking losses and the other attributes of the meat. The pH at 24 hours after slaughter, lightness and cooking losses could be efficient meat quality indicators in this broiler line.
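A minimal sketch of estimating correlations while isolating fixed effects such as slaughter date, age of the mother and sex, as described above (an illustration using residual correlations from a linear adjustment, not necessarily the procedure used in the study; all names are assumptions):

```python
import numpy as np

def adjusted_correlations(traits, design):
    """Pearson correlations among traits after removing linear fixed effects.

    traits : (n, p) array of trait measurements (e.g. pH24, lightness, drip loss).
    design : (n, k) fixed-effect design matrix (dummy variables for slaughter date,
             age of the mother and sex, plus an intercept column of ones).
    """
    beta, *_ = np.linalg.lstsq(design, traits, rcond=None)   # least-squares adjustment
    residuals = traits - design @ beta                        # traits with fixed effects removed
    return np.corrcoef(residuals, rowvar=False)               # adjusted phenotypic correlations
```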
Abstract:
Context. Two main scenarios for the formation of the Galactic bulge are invoked: the first through gravitational collapse or hierarchical merging of subclumps, the second through secular evolution of the Galactic disc. Aims. We aim to constrain the formation of the Galactic bulge through studies of the correlation between kinematics and metallicities in Baade's Window (l = 1 degrees, b = -4 degrees) and two other fields along the bulge minor axis (l = 0 degrees, b = -6 degrees and b = -12 degrees). Methods. We combine the radial velocity and [Fe/H] measurements obtained with FLAMES/GIRAFFE at the VLT with a spectral resolution of R = 20 000, plus, for the Baade's Window field, the OGLE-II proper motions, and compare these with published N-body simulations of the Galactic bulge. Results. We confirm the presence of two distinct populations in Baade's Window found in Hill et al. (2010, A&A, submitted): the metal-rich population presents bar-like kinematics, while the metal-poor population shows kinematics corresponding to an old spheroid or a thick disc. In this context, the metallicity gradient along the bulge minor axis observed by Zoccali et al. (2008, A&A, 486, 177), also visible in the kinematics, can be related to a varying mix of these two populations as one moves away from the Galactic plane, alleviating the apparent contradiction between the kinematic evidence of a bar and the existence of a metallicity gradient. Conclusions. We show evidence that the two main scenarios for bulge formation co-exist within the Milky Way bulge.
Abstract:
Barium stars are optimal sites for studying the correlations between the neutron-capture elements and other species that may be depleted or enhanced because they act as neutron seeds or poisons during the operation of the s-process. These data are necessary to help constrain the modeling of the neutron-capture paths and explain the s-process abundance curve of the solar system. Chemical abundances for a large number of barium stars with different degrees of s-process excesses, masses, metallicities, and evolutionary states are a crucial step towards this goal. We present abundances of Mn, Cu, Zn, and various light and heavy elements for a sample of barium and normal giant stars, and present correlations between abundances contributed to different degrees by the weak-s, main-s, and r-processes of neutron capture, as well as between Fe-peak elements and heavy elements. Data from the literature are also considered in order to better study the abundance pattern of peculiar stars. The stellar spectra were observed with FEROS/ESO. The stellar atmospheric parameters of the eight barium giant stars and six normal giants that we analyzed lie in the range 4300 < T(eff)/K < 5300, -0.7 < [Fe/H] <= 0.12 and 1.5 <= log g < 2.9. Carbon and nitrogen abundances were derived by spectral synthesis of the molecular bands of C(2), CH, and CN. For all other elements we used the atomic lines to perform the spectral synthesis. A very large scatter was found, mainly for the Mn abundances, when data from the literature were considered. We found that [Zn/Fe] correlates well with the heavy-element excesses, its abundance clearly increasing as the heavy-element excesses increase, a trend not shown by the [Cu/Fe] and [Mn/Fe] ratios. Also, the ratios involving Mn, Cu, and Zn and heavy elements usually show an increasing trend toward higher metallicities. Our results suggest that a larger fraction of the Zn synthesis than of Cu is due to massive stars, and that the contribution of the main-s process to the synthesis of both elements is small. We also conclude that Mn is mostly synthesized by SN Ia, and that a non-negligible fraction of the synthesis of Mn, Cu, and Zn is due to the weak s-process.
Abstract:
We show that the one-loop effective action at finite temperature for a scalar field with quartic interaction has the same renormalized expression as at zero temperature if written in terms of a certain classical field phi(c), and if we trade free propagators at zero temperature for their finite-temperature counterparts. The result follows if we write the partition function as an integral over field eigenstates (boundary fields) of the density matrix element in the functional Schrödinger field representation, and perform a semiclassical expansion in two steps: first, we integrate around the saddle point for fixed boundary fields, which is the classical field phi(c), a functional of the boundary fields; then, we perform a saddle-point integration over the boundary fields, whose correlations characterize the thermal properties of the system. This procedure provides a dimensionally reduced effective theory for the thermal system. We calculate the two-point correlation as an example.
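Schematically, and in standard notation (a generic textbook form, not copied from the paper), the construction described above starts from the partition function written as an integral over boundary-field eigenstates of the density matrix:
\[
Z(\beta) = \int \mathcal{D}\phi_b\, \langle \phi_b |\, e^{-\beta H} | \phi_b \rangle
= \int \mathcal{D}\phi_b \int_{\phi(0,\mathbf{x}) = \phi_b(\mathbf{x})}^{\phi(\beta,\mathbf{x}) = \phi_b(\mathbf{x})} \mathcal{D}\phi\; e^{-S_E[\phi]},
\]
the first saddle point being taken over \(\phi\) at fixed \(\phi_b\) (defining the classical field \(\phi_c[\phi_b]\)), and the second over the boundary fields \(\phi_b\), whose correlations carry the thermal information.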
Abstract:
Measurements of electrons from the decay of open-heavy-flavor mesons have shown that the yields are suppressed in Au+Au collisions compared to expectations from binary-scaled p+p collisions. These measurements indicate that charm and bottom quarks interact with the hot, dense matter produced in heavy-ion collisions much more strongly than expected. Here we extend these studies to two-particle correlations where one particle is an electron from the decay of a heavy-flavor meson and the other is a charged hadron from either the decay of the heavy meson or from jet fragmentation. These measurements provide more detailed information about the interactions between heavy quarks and the matter, such as whether the modification of the away-side jet shape seen in hadron-hadron correlations is present when the trigger particle comes from a heavy-meson decay, and whether the overall level of away-side jet suppression is consistent. We statistically subtract the correlations of electrons arising from background sources from the inclusive electron-hadron correlations and obtain two-particle azimuthal correlations at root s(NN) = 200 GeV between electrons from heavy-flavor decay and charged hadrons in p+p collisions, as well as first results in Au+Au collisions. We find the away-side jet shape and yield to be modified in Au+Au collisions compared to p+p collisions.
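The statistical subtraction mentioned above can be written schematically as follows (a generic form with an assumed background fraction f_bg; the paper's exact convention may differ):
\[
Y_{\mathrm{HF}\text{-}h}(\Delta\phi) = \frac{Y_{\mathrm{inc}\text{-}h}(\Delta\phi) - f_{\mathrm{bg}}\, Y_{\mathrm{bg}\text{-}h}(\Delta\phi)}{1 - f_{\mathrm{bg}}},
\]
where \(Y_{\mathrm{inc}\text{-}h}\) is the per-trigger yield for inclusive electrons, \(Y_{\mathrm{bg}\text{-}h}\) that for electrons from background (e.g. photonic) sources, and \(f_{\mathrm{bg}}\) the background fraction of the inclusive electron sample.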
Abstract:
Correlations of charged hadrons with 1 < p(T) < 10 GeV/c with high-p(T) direct photons and pi(0) mesons in the range 5 < p(T) < 15 GeV/c are used to study jet fragmentation in the gamma + jet and dijet channels, respectively. The magnitude of the partonic transverse momentum, k(T), is obtained by comparing to a model incorporating a Gaussian k(T) smearing. The sensitivity of the associated charged-hadron spectra to the underlying fragmentation function is tested, and the data are compared to calculations using recent global fit results. The shape of the direct-photon-associated hadron spectrum as well as its charge asymmetry are found to be consistent with a sample dominated by quark-gluon Compton scattering. No significant evidence of correlated production from fragmentation photons is observed within experimental uncertainties.
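In such two-particle correlation analyses, the partonic transverse momentum is typically accessed through the out-of-plane momentum component of the associated hadron (a standard, schematic relation; the paper's detailed Gaussian-smearing fit may include additional terms for the fragmentation transverse momentum j(T)):
\[
p_{\mathrm{out}} = p_T^{\mathrm{assoc}} \sin(\Delta\phi),
\]
and Gaussian smearing of the parton pair with variance \(\langle k_T^2 \rangle\) broadens the away-side \(p_{\mathrm{out}}\) distribution, so that the measured width constrains \(\sqrt{\langle k_T^2 \rangle}\).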
Abstract:
Hard-scattered parton probes produced in collisions of large nuclei indicate large partonic energy loss, possibly with a collective response of the produced medium to the lost energy. We present measurements of pi(0) trigger particles at transverse momenta p(T)(t) = 4-12 GeV/c and associated charged hadrons (p(T)(a) = 0.5-7 GeV/c) versus relative azimuthal angle Delta phi in Au+Au and p+p collisions at root s(NN) = 200 GeV. The Au+Au distribution at low p(T)(a), whose shape has been interpreted as a medium effect, is modified for p(T)(t) < 7 GeV/c. At higher p(T)(t), the data are consistent with unmodified or very weakly modified shapes, even for the lowest measured p(T)(a), which quantitatively challenges some medium-response models. The associated yield of hadrons opposing the trigger particle in Au+Au relative to p+p (I(AA)) is suppressed at high p(T) (I(AA) approximately 0.35-0.5), but less than for inclusive suppression (R(AA) approximately 0.2).
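For reference, the suppression measures quoted above are conventionally defined as (standard definitions, not specific to this paper):
\[
I_{AA} = \frac{Y^{\mathrm{Au+Au}}_{\mathrm{assoc}}}{Y^{\,pp}_{\mathrm{assoc}}},
\qquad
R_{AA} = \frac{dN^{\mathrm{Au+Au}}/dp_T}{\langle N_{\mathrm{coll}} \rangle\, dN^{\,pp}/dp_T},
\]
i.e. the ratio of per-trigger associated yields in Au+Au to p+p, and the ratio of inclusive yields scaled by the average number of binary nucleon-nucleon collisions.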
Abstract:
The momentum distribution of electrons from semileptonic decays of charm and bottom quarks for midrapidity |y| < 0.35 in p+p collisions at root s = 200 GeV is measured by the PHENIX experiment at the Relativistic Heavy Ion Collider over the transverse momentum range 2 < p(T) < 7 GeV/c. The ratio of the yield of electrons from bottom to that from charm is presented. The ratio is determined using partial D/D -> e(+/-)K(-/+)X (K unidentified) reconstruction. It is found that the yield of electrons from bottom becomes significant above 4 GeV/c in p(T). A fixed-order-plus-next-to-leading-log perturbative quantum chromodynamics calculation agrees with the data within the theoretical and experimental uncertainties. The extracted total bottom production cross section at this energy is sigma(bb) = 3.2 +1.2/-1.1 (stat) +1.4/-1.3 (syst) μb.