Abstract:
Executive Summary

The unifying theme of this thesis is the pursuit of a satisfactory way to quantify the risk-reward trade-off in financial economics: first in the context of a general asset pricing model, then across models, and finally across country borders. The guiding principle in that pursuit was to seek innovative solutions by combining ideas from different fields in economics and broader scientific research. For example, in the first part of this thesis we sought a fruitful application of strong existence results in utility theory to topics in asset pricing. In the second part we implement an idea from the field of fuzzy set theory in the optimal portfolio selection problem, while the third part of this thesis is, to the best of our knowledge, the first empirical application of some general results on asset pricing in incomplete markets to the important topic of measuring financial integration. While the first two parts of this thesis effectively combine well-known ways to quantify the risk-reward trade-off, the third can be viewed as an empirical verification of the usefulness of the so-called "good deal bounds" theory in designing risk-sensitive pricing bounds.

Chapter 1 develops a discrete-time asset pricing model based on a novel ordinally equivalent representation of recursive utility. To the best of our knowledge, we are the first to use a member of a novel class of recursive utility generators to construct a representative-agent model addressing some long-standing issues in asset pricing. Applying strong representation results allows us to show that the model features countercyclical risk premia, for both consumption and financial risk, together with a low and procyclical risk-free rate. As the recursive utility used nests the well-known time-state separable utility as a special case, all results nest the corresponding ones from the standard model and thus shed light on its well-known shortcomings.
The empirical investigation to support these theoretical results, however, showed that as long as one resorts to econometric methods based on approximating conditional moments with unconditional ones, it is not possible to distinguish the model we propose from the standard one.

Chapter 2 is joint work with Sergei Sontchik. There we provide theoretical and empirical motivation for the aggregation of performance measures. The main idea is that, just as it makes sense to apply several performance measures ex post, it also makes sense to base optimal portfolio selection on ex-ante maximization of as many performance measures as desired. We thus offer a concrete algorithm for optimal portfolio selection via ex-ante optimization, over different horizons, of several risk-return trade-offs simultaneously. An empirical application of that algorithm, using seven popular performance measures, suggests that realized returns feature better distributional characteristics than realized returns from portfolio strategies optimal with respect to single performance measures. When comparing the distributions of realized returns we used two partial risk-reward orderings: first- and second-order stochastic dominance. We first used the Kolmogorov-Smirnov test to determine whether the two distributions are indeed different, which, combined with visual inspection, allowed us to demonstrate that the way we propose to aggregate performance measures leads to portfolio realized returns that first-order stochastically dominate those resulting from optimization with respect to a single measure only, such as the Treynor ratio or Jensen's alpha. We checked for second-order stochastic dominance via pointwise comparison of the so-called absolute Lorenz curve, i.e., the sequence of expected shortfalls over a range of quantiles.
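A minimal sketch of the two dominance checks described here (empirical CDFs for first-order dominance, absolute Lorenz curves for second-order dominance), using hypothetical simulated returns rather than the chapter's data:

```python
import numpy as np

def ecdf(sample, grid):
    """Empirical CDF of `sample` evaluated at the points of `grid`."""
    s = np.sort(np.asarray(sample, dtype=float))
    return np.searchsorted(s, grid, side="right") / s.size

def first_order_dominates(a, b, tol=1e-12):
    """A first-order stochastically dominates B when F_A(x) <= F_B(x)
    at every point of the pooled support."""
    grid = np.union1d(a, b)
    return bool(np.all(ecdf(a, grid) <= ecdf(b, grid) + tol))

def absolute_lorenz(returns):
    """Empirical absolute Lorenz curve: cumulative sums of the sorted
    returns divided by the sample size (the expected-shortfall sequence
    over the quantile grid k/n, up to sign convention)."""
    r = np.sort(np.asarray(returns, dtype=float))
    return np.cumsum(r) / r.size

def second_order_dominates(a, b, tol=1e-12):
    """A second-order stochastically dominates B when A's absolute
    Lorenz curve lies weakly above B's at every quantile
    (assumes equal sample sizes so the quantile grids coincide)."""
    return bool(np.all(absolute_lorenz(a) >= absolute_lorenz(b) - tol))

# Toy illustration with simulated returns (hypothetical parameters):
rng = np.random.default_rng(0)
base = rng.normal(0.01, 0.05, 500)                  # baseline returns
shifted = base + 0.02                               # uniformly better
spread = base.mean() + 2.0 * (base - base.mean())   # mean-preserving spread

assert first_order_dominates(shifted, base)
assert not first_order_dominates(base, shifted)
assert second_order_dominates(base, spread)         # less risky, same mean
```

The pointwise comparisons mirror the text: a uniformly shifted return distribution first-order dominates the original, and the original second-order dominates its mean-preserving spread, since its absolute Lorenz curve lies weakly above at every quantile.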
Since the plot of the absolute Lorenz curve for the aggregated performance measure lay above the one corresponding to each individual measure, we were tempted to conclude that the algorithm we propose leads to a portfolio return distribution that second-order stochastically dominates those obtained from virtually all individual performance measures considered.

Chapter 3 proposes a measure of financial integration based on recent advances in asset pricing in incomplete markets. Given a base market (a set of traded assets) and an index of another market, we propose to measure financial integration through time by the size of the spread between the pricing bounds of the market index relative to the base market. The bigger the spread around country index A, viewed from market B, the less integrated markets A and B are. We investigate the presence of structural breaks in the size of the spread for EMU member-country indices before and after the introduction of the Euro. We find evidence that both the level and the volatility of our financial integration measure increased after the introduction of the Euro. That counterintuitive result suggests the presence of an inherent weakness in the attempt to measure financial integration independently of economic fundamentals. Nevertheless, the results about the bounds on the risk-free rate appear plausible from the viewpoint of existing economic theory about the impact of integration on interest rates.
Abstract:
We extend the Hamilton-Jacobi formulation to constrained dynamical systems. The discussion covers both the case of first-class constraints alone and that of first- and second-class constraints combined. The Hamilton-Dirac equations are recovered as characteristic of the system of partial differential equations satisfied by the Hamilton-Jacobi function.
Abstract:
We develop a theory of canonical transformations for presymplectic systems, reducing this concept to that of canonical transformations for regular coisotropic canonical systems. In this way we can also link these with the usual canonical transformations for the symplectic reduced phase space. Furthermore, the concept of a generating function arises in a natural way as well as that of gauge group.
Abstract:
The Batalin-Vilkovisky formalism is studied in the framework of perturbation theory by analyzing the antibracket Becchi-Rouet-Stora-Tyutin (BRST) cohomology of the proper solution S0. It is concluded that the recursive equations for the complete proper solution S can be solved at any order of perturbation theory. If certain conditions on the classical action and on the gauge generators are imposed, the solution can be taken local.
Abstract:
We analyze the influence of the density dependence of the symmetry energy on the average excitation energy of the isoscalar giant monopole resonance (GMR) in stable and exotic neutron-rich nuclei by applying the relativistic extended Thomas-Fermi method in scaling and constrained calculations. For the effective nuclear interaction, we employ the relativistic mean field model supplemented by an isoscalar-isovector meson coupling that allows one to modify the density dependence of the symmetry energy without compromising the success of the model for binding energies and charge radii. The semiclassical estimates of the average energy of the GMR are known to be in good agreement with the results obtained in full RPA calculations. The present analysis is performed along the Pb and Zr isotopic chains. In the scaling calculations, the excitation energy is larger when the symmetry energy is softer. The same happens in the constrained calculations for nuclei with small and moderate neutron excess. However, for nuclei of large isospin the constrained excitation energy becomes smaller in models having a soft symmetry energy. This effect is mainly due to the presence of loosely-bound outer neutrons in these isotopes. A sharp increase of the estimated width of the resonance is found in largely neutron-rich isotopes, even for heavy nuclei, which is enhanced when the symmetry energy of the model is soft. The results indicate that at large neutron numbers the structure of the low-energy region of the GMR strength distribution changes considerably with the density dependence of the nuclear symmetry energy, which may be worthy of further characterization in RPA calculations of the response function.
Abstract:
We derive analytical expressions for the excitation energy of the isoscalar giant monopole and quadrupole resonances in finite nuclei, by using the scaling method and the extended Thomas-Fermi approach to relativistic mean-field theory. We study the ability of several nonlinear σ-ω parameter sets of common use to reproduce the experimental data. For monopole oscillations the calculations agree better with experiment when the nuclear matter incompressibility of the relativistic interaction lies in the range 220-260 MeV. The breathing-mode energies of the scaling method compare satisfactorily with those obtained in relativistic RPA and time-dependent mean-field calculations. For quadrupole oscillations, all the analyzed nonlinear parameter sets reproduce the empirical trends reasonably well.
Abstract:
We study steady-state correlation functions of nonlinear stochastic processes driven by external colored noise. We present a methodology that provides explicit expressions of correlation functions approximating simultaneously short- and long-time regimes. The non-Markov nature is reduced to an effective Markovian formulation, and the nonlinearities are treated systematically by means of double expansions in high and low frequencies. We also derive some exact expressions for the coefficients of these expansions for arbitrary noise by means of a generalization of projection-operator techniques.
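As a rough numerical companion to this abstract (not the paper's projection-operator method), a steady-state correlation function of a nonlinear process driven by Ornstein-Uhlenbeck colored noise can be estimated by direct simulation and time averaging; all parameter values below are illustrative:

```python
import numpy as np

# Hypothetical parameters: damping, cubic nonlinearity, noise
# correlation time, and noise intensity.
gamma, b, tau, D = 1.0, 1.0, 0.5, 1.0
dt, n_steps, burn_in = 1e-3, 400_000, 50_000

rng = np.random.default_rng(42)
x, eta = 0.0, 0.0
xs = np.empty(n_steps)
for i in range(n_steps):
    # OU ("colored") noise: d(eta) = -(eta/tau) dt + (sqrt(2 D)/tau) dW
    eta += (-eta / tau) * dt + np.sqrt(2.0 * D * dt) / tau * rng.standard_normal()
    # Nonlinear process driven by the colored noise.
    x += (-gamma * x - b * x**3 + eta) * dt
    xs[i] = x
xs = xs[burn_in:]
xs -= xs.mean()

def autocorr(series, lag):
    """Steady-state correlation <x(t) x(t + lag*dt)> by time averaging."""
    if lag == 0:
        return float(np.mean(series**2))
    return float(np.mean(series[:-lag] * series[lag:]))

c0 = autocorr(xs, 0)          # stationary variance
c_short = autocorr(xs, 100)   # short-time regime (lag 0.1)
c_long = autocorr(xs, 2000)   # long-time regime (lag 2.0)
```

The estimated correlation decays with the lag, illustrating the short- and long-time regimes that the paper's double expansions approximate analytically.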
Abstract:
A lot of research in cognition and decision making suffers from a lack of formalism. The quantum probability program could help to improve this situation, but we wonder whether it would provide even more added value if its presumed focus on outcome models were complemented by process models that are, ideally, informed by ecological analyses and integrated into cognitive architectures.
Abstract:
A mathematical model that describes the behavior of low-resolution Fresnel lenses encoded in any low-resolution device (e.g., a spatial light modulator) is developed. The effects of low-resolution codification, such as the appearance of new secondary lenses, are studied for a general case. General expressions for the phase of these lenses are developed, showing that each lens behaves as if it were encoded through all pixels of the low-resolution device. Simple expressions for the light distribution in the focal plane and its dependence on the encoded focal length are developed and commented on in detail. For a given codification device an optimum focal length is found for best lens performance. An optimization method for codification of a single lens with a short focal length is proposed.
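As an illustrative sketch (not the paper's model, and with hypothetical parameter values), the sampled phase of a pixelated Fresnel lens and the Nyquist-limited radius beyond which undersampling produces secondary lenses can be computed as:

```python
import numpy as np

# Hypothetical configuration of a low-resolution phase device.
wavelength = 633e-9   # illumination wavelength (m)
focal = 0.5           # encoded focal length (m)
pitch = 20e-6         # pixel pitch of the device (m)
n_pix = 256           # pixels per side

# Pixel-center coordinates and the standard quadratic lens phase
# phi(x, y) = -pi (x^2 + y^2) / (lambda f), wrapped to [0, 2*pi):
coords = (np.arange(n_pix) - n_pix / 2) * pitch
x, y = np.meshgrid(coords, coords)
phase = -np.pi * (x**2 + y**2) / (wavelength * focal)
encoded = np.mod(phase, 2.0 * np.pi)   # one phase value per pixel

# The local fringe period of the lens is lambda*f / r; sampling is
# adequate (Nyquist) only while that period exceeds 2*pitch, i.e. for
# r below r_max. Beyond r_max the encoded lens is undersampled.
r_max = wavelength * focal / (2.0 * pitch)
```

For these illustrative numbers r_max is about 7.9 mm, larger than the device half-width (about 2.6 mm), so the encoded lens is properly sampled everywhere; shortening the focal length shrinks r_max until aliasing sets in.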