18 results for Standard model
at Consorci de Serveis Universitaris de Catalunya (CSUC), Spain
Abstract:
In this work we discuss the use of the standard model for the calculation of the solvency capital requirement (SCR) when the company aims to use the specific parameters of the model on the basis of its own portfolio experience. In particular, this analysis focuses on the formula presented in the latest quantitative impact study (CEIOPS, 2010) for non-life underwriting premium and reserve risk. One of the keys of the standard model for premium and reserve risk is the correlation matrix between lines of business. In this work we show how the correlation matrix between lines of business could be estimated from a quantitative perspective, and we also consider the possibility of using a credibility model to estimate this correlation matrix in a way that merges the qualitative and quantitative perspectives.
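As a hedged illustration of how an experience-based correlation matrix might be combined with a prior one, the short Python sketch below estimates an empirical correlation matrix from hypothetical line-of-business loss ratios and blends it with a prior (standard-formula-style) matrix using a simple credibility weight Z. All numbers and the weighting rule are invented for illustration; this is a sketch of the general idea, not the credibility model proposed in the paper.

```python
import numpy as np

# Hypothetical yearly loss ratios per line of business (rows = years, columns = LoBs).
# Purely illustrative numbers.
loss_ratios = np.array([
    [0.72, 0.65, 0.80],
    [0.68, 0.70, 0.75],
    [0.81, 0.66, 0.90],
    [0.75, 0.73, 0.78],
    [0.70, 0.69, 0.85],
])

# Quantitative perspective: empirical correlation matrix from portfolio experience.
corr_empirical = np.corrcoef(loss_ratios, rowvar=False)

# Qualitative perspective: a prior correlation matrix (e.g. a standard-formula-style matrix).
corr_prior = np.array([
    [1.00, 0.50, 0.25],
    [0.50, 1.00, 0.25],
    [0.25, 0.25, 1.00],
])

# Credibility weight Z in [0, 1]; here a simple "years of data" rule, chosen for illustration.
n_years = loss_ratios.shape[0]
Z = n_years / (n_years + 10.0)

# Credibility-weighted blend of the two perspectives.
corr_blended = Z * corr_empirical + (1.0 - Z) * corr_prior
print(np.round(corr_blended, 3))
```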
Abstract:
We construct a cofibrantly generated Thomason model structure on the category of small n-fold categories and prove that it is Quillen equivalent to the standard model structure on the category of simplicial sets. An n-fold functor is a weak equivalence if and only if the diagonal of its n-fold nerve is a weak equivalence of simplicial sets. We introduce an n-fold Grothendieck construction for multisimplicial sets, and prove that it is a homotopy inverse to the n-fold nerve. As a consequence, the unit and counit of the adjunction between simplicial sets and n-fold categories are natural weak equivalences.
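For orientation, the weak-equivalence criterion and the adjunction described above can be displayed as follows (the symbols δ for the diagonal, N^n for the n-fold nerve, and L, R for the adjoint pair are labels chosen here for illustration, not necessarily the paper's notation):

```latex
% Weak-equivalence criterion and adjunction (illustrative notation)
\[
  F \ \text{in } \mathbf{nFoldCat} \ \text{is a weak equivalence}
  \iff
  \delta N^{n}(F) \ \text{is a weak equivalence in } \mathbf{sSet},
\]
\[
  L : \mathbf{sSet} \rightleftarrows \mathbf{nFoldCat} : R,
  \qquad
  \eta_X : X \xrightarrow{\ \sim\ } R L X,
  \qquad
  \varepsilon_{\mathbb{D}} : L R\, \mathbb{D} \xrightarrow{\ \sim\ } \mathbb{D},
\]
% with unit and counit natural weak equivalences, as stated in the abstract.
```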
Abstract:
We consider the two Higgs doublet model extension of the standard model in the limit where all physical scalar particles are very heavy, too heavy, in fact, to be experimentally produced in forthcoming experiments. The symmetry-breaking sector can thus be described by an effective chiral Lagrangian. We obtain the values of the coefficients of the O(p^4) operators relevant to the oblique corrections and investigate to what extent some nondecoupling effects may remain at low energies. A comparison with recent CERN LEP data shows that this model is indistinguishable from the standard model with one doublet and with a heavy Higgs boson, unless the scalar mass splittings are large.
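For context, the operators of the electroweak chiral Lagrangian usually associated with the oblique parameters are, schematically (normalizations and signs vary across the literature, and the custodial-breaking term is formally of lower chiral order, so treat the display as a reminder of the structure rather than the paper's conventions):

```latex
% Operators commonly associated with the oblique parameters S, T, U
% (schematic; normalizations and signs differ between conventions)
\[
  \mathcal{L}_{\mathrm{eff}} \;\supset\;
  \beta_1\,\frac{v^2}{4}\,\bigl[\mathrm{Tr}(T V_\mu)\bigr]^2
  \;+\; a_1\, g g'\, B_{\mu\nu}\,\mathrm{Tr}\!\bigl(T W^{\mu\nu}\bigr)
  \;+\; a_8\, g^2\, \bigl[\mathrm{Tr}(T W_{\mu\nu})\bigr]^2,
\]
\[
  V_\mu \equiv (D_\mu U)\,U^\dagger, \qquad T \equiv U\,\tau^3\,U^\dagger, \qquad
  \beta_1 \leftrightarrow \Delta T, \quad a_1 \leftrightarrow \Delta S, \quad a_8 \leftrightarrow \Delta U .
\]
```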
Abstract:
We show a standard model where the optimal tax reform is to cut labor taxes and leave capital taxes very high in the short and medium run. Only in the very long run would capital taxes be zero. Our model is a version of Chamley's, with heterogeneous agents, without lump sum transfers, an upper bound on capital taxes, and a focus on Pareto improving plans. For our calibration, labor taxes should be low for the first ten to twenty years, while capital taxes should be at their maximum. This policy ensures that all agents benefit from the tax reform and that capital grows quickly after the reform begins. Therefore, the long run optimal tax mix is the opposite of the short and medium run tax mix. The initial labor tax cut is financed by deficits that lead to a positive long run level of government debt, reversing the standard prediction that the government accumulates savings in models with optimal capital taxes. If labor supply is somewhat elastic, the benefits from tax reform are high and they can be shifted entirely to capitalists or workers by varying the length of the transition. With inelastic labor supply there is an increasing part of the equilibrium frontier, which means that the scope for benefiting the workers is limited and the total benefits from reforming taxes are much lower.
Abstract:
This paper analyses the impact of using different correlation assumptions between lines of business when estimating the risk-based capital reserve, the Solvency Capital Requirement (SCR), under Solvency II regulations. A case study is presented and the SCR is calculated according to the Standard Model approach. Alternatively, the requirement is then calculated using an Internal Model based on a Monte Carlo simulation of the net underwriting result at a one-year horizon, with copulas being used to model the dependence between lines of business. To address the impact of these model assumptions on the SCR we conduct a sensitivity analysis. We examine changes in the correlation matrix between lines of business and address the choice of copulas. Drawing on aggregate historical data from the Spanish non-life insurance market between 2000 and 2009, we conclude that modifications of the correlation and dependence assumptions have a significant impact on SCR estimation.
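As a rough illustration of the internal-model approach described above, the Python sketch below simulates one-year underwriting results across lines of business using a Gaussian copula with lognormal marginals and reads off the SCR as the 99.5% value-at-risk of the aggregate loss. The volumes, coefficients of variation and correlation matrix are invented for the example, and the copula family is fixed to Gaussian for brevity; this is a sketch of the general technique, not the calibrated model of the paper.

```python
import numpy as np
from scipy.stats import norm, lognorm

rng = np.random.default_rng(42)
n_sims = 100_000

# Hypothetical premium volumes and coefficients of variation per line of business.
volumes = np.array([100.0, 60.0, 40.0])   # illustrative only
cvs = np.array([0.10, 0.15, 0.20])        # illustrative only

# Assumed dependence between lines of business (Gaussian copula correlation matrix).
corr = np.array([
    [1.00, 0.50, 0.25],
    [0.50, 1.00, 0.25],
    [0.25, 0.25, 1.00],
])

# 1) Correlated standard normals -> uniforms: the Gaussian copula step.
chol = np.linalg.cholesky(corr)
z = rng.standard_normal((n_sims, len(volumes))) @ chol.T
u = norm.cdf(z)

# 2) Lognormal loss ratios with mean 1 and the given coefficient of variation.
sigmas = np.sqrt(np.log(1.0 + cvs**2))
mus = -0.5 * sigmas**2
loss_ratios = lognorm.ppf(u, s=sigmas, scale=np.exp(mus))

# 3) One-year aggregate underwriting loss relative to expectation and its 99.5% quantile.
losses = (loss_ratios - 1.0) @ volumes
scr = np.quantile(losses, 0.995)
print(f"SCR estimate (99.5% VaR of aggregate loss): {scr:.2f}")
```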
Abstract:
This paper points out an empirical puzzle that arises when an RBC economy with a job matching function is used to model unemployment. The standard model can generate sufficiently large cyclical fluctuations in unemployment, or a sufficiently small response of unemployment to labor market policies, but it cannot do both. Variable search and separation, finite UI benefit duration, efficiency wages, and capital all fail to resolve this puzzle. However, both sticky wages and match-specific productivity shocks help the model reproduce the stylized facts: both make the firm's flow of surplus more procyclical, thus making hiring more procyclical too.
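For readers less familiar with the setup, the matching block that such RBC models typically use takes the standard Cobb-Douglas form below (generic notation, not taken verbatim from the paper); the procyclicality of the firm's flow of surplus then determines how strongly vacancy posting, and hence the job-finding rate f(θ), moves over the cycle.

```latex
% Standard Cobb-Douglas matching technology and unemployment dynamics (generic notation)
\[
  m(u_t, v_t) = \mu\, u_t^{\alpha} v_t^{1-\alpha}, \qquad
  \theta_t \equiv \frac{v_t}{u_t}, \qquad
  f(\theta_t) = \frac{m(u_t,v_t)}{u_t} = \mu\,\theta_t^{1-\alpha}, \qquad
  q(\theta_t) = \frac{m(u_t,v_t)}{v_t} = \mu\,\theta_t^{-\alpha},
\]
\[
  u_{t+1} = u_t + s\,(1-u_t) - f(\theta_t)\,u_t ,
\]
% f and q are the job-finding and vacancy-filling rates, s the separation rate.
```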
Abstract:
We present a standard model of financial innovation, in which intermediaries engineer securities with cash flows that investors seek, but modify two assumptions. First, investors (and possibly intermediaries) neglect certain unlikely risks. Second, investors demand securities with safe cash flows. Financial intermediaries cater to these preferences and beliefs by engineering securities perceived to be safe but exposed to neglected risks. Because the risks are neglected, security issuance is excessive. As investors eventually recognize these risks, they fly back to safety of traditional securities and markets become fragile, even without leverage, precisely because the volume of new claims is excessive.
Abstract:
We explore a view of the crisis as a shock to investor sentiment that led to the collapse of a bubble or pyramid scheme in financial markets. We embed this view in a standard model of the financial accelerator and explore its empirical and policy implications. In particular, we show how the model can account for: (i) a gradual and protracted expansionary phase followed by a sudden and sharp recession; (ii) the connection (or lack of connection!) between financial and real economic activity; and (iii) a fast and strong transmission of shocks across countries. We also use the model to explore the role of fiscal policy.
Abstract:
This paper theoretically and empirically documents a puzzle that arises when an RBC economy with a job matching function is used to model unemployment. The standard model can generate sufficiently large cyclical fluctuations in unemployment, or a sufficiently small response of unemployment to labor market policies, but it cannot do both. Variable search and separation, finite UI benefit duration, efficiency wages, and capital all fail to resolve this puzzle. However, either sticky wages or match-specific productivity shocks can improve the model's performance by making the firm's flow of surplus more procyclical, which makes hiring more procyclical too.
Abstract:
Accurately calibrated effective field theories are used to compute atomic parity nonconserving (APNC) observables. Although accurately calibrated, these effective field theories predict a large spread in the neutron skin of heavy nuclei. Whereas the neutron skin is strongly correlated with numerous physical observables, in this contribution we focus on its impact on new physics through APNC observables. The addition of an isoscalar-isovector coupling constant to the effective Lagrangian generates a wide range of values for the neutron skin of heavy nuclei without compromising the success of the model in reproducing well-constrained nuclear observables. Earlier studies have suggested that the use of isotopic ratios of APNC observables may eliminate their sensitivity to atomic structure. This leaves nuclear structure uncertainties as the main impediment to identifying physics beyond the standard model. We establish that uncertainties in the neutron skin of heavy nuclei are at present too large to measure isotopic ratios to better than the 0.1% accuracy required to test the standard model. However, we argue that such uncertainties will be significantly reduced by the upcoming measurement of the neutron radius of 208Pb at the Jefferson Laboratory.
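To make the isotopic-ratio argument concrete, recall the tree-level standard-model nuclear weak charge and a schematic ratio of APNC amplitudes for two isotopes of the same element (schematic only; the neutron-distribution correction, written here as F_n, is exactly where the neutron-skin uncertainty enters):

```latex
% Tree-level SM weak charge and a schematic isotopic ratio of APNC amplitudes
\[
  Q_W(Z,N) \;\simeq\; Z\,\bigl(1 - 4\sin^2\theta_W\bigr) - N ,
\]
\[
  \frac{A_{\mathrm{PNC}}(Z,N)}{A_{\mathrm{PNC}}(Z,N')} \;\approx\;
  \frac{Q_W(Z,N)\,\bigl[1 + F_n(Z,N)\bigr]}{Q_W(Z,N')\,\bigl[1 + F_n(Z,N')\bigr]} ,
\]
% The common atomic-structure factor cancels in the ratio; the residual
% neutron-distribution corrections F_n depend on the neutron skin.
```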
Abstract:
We estimate the attainable limits on the coupling of a nonstandard Higgs boson to two photons taking into account the data collected by the Fermilab collaborations on diphoton events. We base our analysis on a general set of dimension-6 effective operators that give rise to anomalous couplings in the bosonic sector of the standard model. If the coefficients of all blind operators have the same magnitude, indirect bounds on the anomalous triple vector-boson couplings can also be inferred, provided there is no large cancellation in the Higgs-gamma-gamma coupling.
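Schematically, the framework referred to above is a linearly realized effective Lagrangian of the generic form below; which dimension-6 (blind) operators are kept and how they are normalized varies between analyses, so the display is illustrative rather than the paper's operator basis:

```latex
% Generic dimension-6 effective Lagrangian and the induced anomalous H-gamma-gamma vertex
\[
  \mathcal{L}_{\mathrm{eff}} \;=\; \mathcal{L}_{\mathrm{SM}}
  \;+\; \sum_i \frac{f_i}{\Lambda^2}\, \mathcal{O}_i ,
  \qquad
  \mathcal{L}_{H\gamma\gamma} \;=\; g_{H\gamma\gamma}\, \frac{H}{v}\, F_{\mu\nu} F^{\mu\nu} ,
\]
% with g_{H gamma gamma} a linear combination of the coefficients f_i of the operators
% built from the Higgs doublet and the electroweak field strengths.
```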
Abstract:
If there are large extra dimensions and the fundamental Planck scale is at the TeV scale, then the question arises of whether ultrahigh energy cosmic rays might probe them. We study the neutrino-nucleon cross section in these models. The elastic forward scattering is analyzed in some detail, hoping to clarify earlier discussions. We also estimate the black hole production rate. We study energy loss from graviton mediated interactions and conclude that they cannot explain the cosmic ray events above the GZK energy limit. However, these interactions could start horizontal air showers with a characteristic profile and at a rate higher than in the standard model.
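For reference, the black-hole production rate in such scenarios is usually estimated from the geometric parton-level cross section below, folded with parton distribution functions; this is the standard form of the estimate, with order-one factors and the choice of minimum black-hole mass differing between analyses:

```latex
% Geometric estimate of black-hole production in neutrino-nucleon scattering
\[
  \hat{\sigma}\bigl(\nu q \to \mathrm{BH}\bigr) \;\approx\; \pi\, r_s^2\bigl(\sqrt{\hat{s}}\bigr),
  \qquad
  \sigma_{\nu N \to \mathrm{BH}}(s) \;=\; \sum_i \int_{M_{\min}^2/s}^{1} dx\;
  f_i(x,\mu)\; \pi\, r_s^2\bigl(\sqrt{x s}\bigr),
\]
% r_s is the higher-dimensional Schwarzschild radius of a black hole of mass sqrt(x s),
% f_i are parton distribution functions, M_min the minimum black-hole mass considered.
```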
Abstract:
We present an update of neutral Higgs boson decays into bottom quark pairs in the minimal supersymmetric extension of the standard model. In particular, the resummation of potentially large higher-order corrections due to the soft supersymmetry (SUSY) breaking parameters Ab and μ is extended. The remaining theoretical uncertainties due to unknown higher-order SUSY-QCD corrections are analyzed quantitatively.
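The resummation in question is usually organized through an effective bottom mass or Yukawa coupling of the schematic form below (leading SUSY-QCD term shown; the precise expression, the auxiliary function I and the treatment of the remaining terms depend on conventions, so read this as a reminder of the structure rather than the paper's formula):

```latex
% Schematic resummation of tan(beta)-enhanced corrections to the bottom Yukawa coupling
\[
  \tilde{m}_b \;=\; \frac{m_b}{1 + \Delta_b}, \qquad
  \Delta_b \;\simeq\; \frac{2\alpha_s}{3\pi}\; m_{\tilde g}\,\mu\,\tan\beta\;
  I\!\bigl(m_{\tilde b_1}^2, m_{\tilde b_2}^2, m_{\tilde g}^2\bigr) \;+\; \dots ,
\]
% I(a,b,c) is the standard one-loop three-propagator function; the ellipsis stands for
% further contributions (e.g. A_b-dependent and electroweak terms) that are also resummed.
```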
Abstract:
Standard practice in wave-height hazard analysis often pays little attention to the uncertainty of assessed return periods and occurrence probabilities. This favors the opinion that, when large events happen, the hazard assessment should change accordingly. However, the uncertainty of the hazard estimates is normally able to hide the effect of those large events. This is illustrated using data from the Mediterranean coast of Spain, where the last years have been extremely disastrous. It is thus possible to compare the hazard assessment based on data prior to those years with the analysis that includes them. With our approach, no significant change is detected when the statistical uncertainty is taken into account. The hazard analysis is carried out with a standard model. The occurrence of events in time is assumed to be Poisson distributed. The wave height of each event is modelled as a random variable whose upper tail follows a Generalized Pareto Distribution (GPD). Moreover, wave heights are assumed independent from event to event and also independent of their occurrence in time. A threshold for excesses is assessed empirically. The other three parameters (the Poisson rate and the shape and scale parameters of the GPD) are jointly estimated using Bayes' theorem. The prior distribution accounts for physical features of ocean waves in the Mediterranean Sea and experience with these phenomena. The posterior distribution of the parameters allows us to obtain posterior distributions of derived quantities such as occurrence probabilities and return periods. Predictive distributions are also available. Computations are carried out using the program BGPE v2.0.
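In this Poisson-GPD (peaks-over-threshold) setting, the derived quantities mentioned above follow directly from the three estimated parameters; with u the empirical threshold, λ the Poisson rate of exceedances, and σ, ξ the GPD scale and shape, the standard formulas read:

```latex
% Poisson-GPD exceedance probability and return period (standard peaks-over-threshold relations)
\[
  P(X > x \mid X > u) \;=\; \Bigl[\,1 + \xi\,\frac{x - u}{\sigma}\,\Bigr]_{+}^{-1/\xi},
  \qquad x \ge u ,
\]
\[
  T(x) \;=\; \frac{1}{\lambda\,P(X > x \mid X > u)}
        \;=\; \frac{1}{\lambda}\,\Bigl[\,1 + \xi\,\frac{x - u}{\sigma}\,\Bigr]_{+}^{1/\xi},
\]
% so posterior draws of (lambda, sigma, xi) translate directly into posterior distributions
% of occurrence probabilities and return periods.
```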