47 results for DYNAMIC-ANALYSIS
at Consorci de Serveis Universitaris de Catalunya (CSUC), Spain
Abstract:
This paper extends existing results on the types of insurance contracts needed for insurance market efficiency to a dynamic setting. It introduces continuously open markets that allow for more efficient asset allocation. It also eliminates the role of preferences and endowments in the classification of risks, which is done primarily in terms of the actuarial properties of the underlying risk process. The paper further extends insurability to include correlated and catastrophic events. Under these very general conditions, the paper defines a condition that determines whether a small number of standard insurance contracts (together with aggregate assets) suffices to complete markets, or whether one needs to introduce assets such as mutual insurance.
Abstract:
Climate science indicates that climate stabilization requires low GHG emissions. Is this consistent with nondecreasing human welfare? Our welfare or utility index emphasizes education, knowledge, and the environment. We construct and calibrate a multigenerational model with intertemporal links provided by education, physical capital, knowledge and the environment. We reject discounted utilitarianism and adopt, first, the Pure Sustainability Optimization (or Intergenerational Maximin) criterion, and, second, the Sustainable Growth Optimization criterion, that maximizes the utility of the first generation subject to a given future rate of growth. We apply these criteria to our calibrated model via a novel algorithm inspired by the turnpike property. The computed paths yield levels of utility higher than the level at reference year 2000 for all generations. They require the doubling of the fraction of labor resources devoted to the creation of knowledge relative to the reference level, whereas the fractions of labor allocated to consumption and leisure are similar to the reference ones. On the other hand, higher growth rates require substantial increases in the fraction of labor devoted to education, together with moderate increases in the fractions of labor devoted to knowledge and the investment in physical capital.
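As a toy illustration of the maximin criterion (not the paper's calibrated model), the sketch below chooses each generation's labor share devoted to knowledge so as to maximize the utility of the worst-off generation; the utility function, horizon, and accumulation law are invented for the example.

```python
import numpy as np
from scipy.optimize import minimize

# Illustrative stand-ins -- NOT the paper's calibrated functional forms.
T = 5              # number of generations
k0 = 1.0           # initial knowledge stock

def utility(c, k):
    # toy utility over consumption and knowledge
    return np.log(c) + 0.5 * np.log(k)

def path_utilities(x):
    # x[t] = fraction of labor devoted to knowledge creation in generation t
    k, utils = k0, []
    for t in range(T):
        c = 1.0 - x[t]          # remaining labor goes to consumption
        utils.append(utility(max(c, 1e-9), k))
        k = k * (1.0 + x[t])    # knowledge grows with labor devoted to it
    return np.array(utils)

# Maximin: maximize the utility of the worst-off generation,
# reformulated as: maximize m subject to utility_t >= m for all t.
def objective(z):
    return -z[-1]               # z = (x_1..x_T, m); maximize m

cons = [{"type": "ineq",
         "fun": lambda z, t=t: path_utilities(z[:T])[t] - z[-1]}
        for t in range(T)]
bounds = [(0.01, 0.99)] * T + [(None, None)]
res = minimize(objective, x0=[0.2] * T + [0.0], bounds=bounds,
               constraints=cons)
print("labor to knowledge by generation:", res.x[:T].round(3))
print("maximin utility:", res.x[-1].round(3))
```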
Abstract:
Available empirical evidence regarding the degree of symmetry between European economies in the context of Monetary Unification is not conclusive. This paper offers new empirical evidence on this issue for the manufacturing sector. Instead of using a static approach, as most empirical studies do, we analyse the dynamic evolution of shock symmetry using a state-space model. The results show a clear reduction of asymmetries in terms of demand shocks between 1975 and 1996, with an increase in terms of supply shocks at the end of the period.
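A minimal sketch of the state-space idea: a time-varying regression coefficient, filtered with a scalar Kalman filter, can track how closely one series of shocks co-moves with another over time. The specification, noise variances, and simulated data below are illustrative assumptions, not the paper's model.

```python
import numpy as np

# Time-varying regression y_t = beta_t * x_t + eps_t, with beta_t a
# random walk. x_t: area-wide shock, y_t: country shock; the filtered
# path of beta_t tracks the evolution of shock symmetry.
def kalman_tv_beta(y, x, q=0.01, r=1.0, beta0=0.0, p0=10.0):
    beta, p, path = beta0, p0, []
    for yt, xt in zip(y, x):
        p = p + q                           # predict state variance
        k = p * xt / (xt * xt * p + r)      # Kalman gain
        beta = beta + k * (yt - xt * beta)  # update with new observation
        p = (1 - k * xt) * p
        path.append(beta)
    return np.array(path)

rng = np.random.default_rng(0)
x = rng.normal(size=200)
true_beta = np.linspace(0.2, 0.9, 200)      # symmetry rising over time
y = true_beta * x + rng.normal(scale=0.5, size=200)
print(kalman_tv_beta(y, x)[[0, 99, 199]].round(2))
```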
Abstract:
This paper measures the connectedness in EMU sovereign market volatility between April 1999 and January 2014, in order to monitor stress transmission and to identify episodes of intensive spillovers from one country to the others. To this end, we first perform a static and dynamic analysis to measure the total volatility connectedness in the entire period (the system-wide approach) using a framework recently proposed by Diebold and Yılmaz (2014). Second, we make use of a dynamic analysis to evaluate the net directional connectedness for each country and apply panel model techniques to investigate its determinants. Finally, to gain further insights, we examine the time-varying behaviour of net pair-wise directional connectedness at different stages of the recent sovereign debt crisis.
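A rough sketch of the system-wide computation in the Diebold-Yılmaz framework: fit a VAR, build the generalized forecast-error variance decomposition, row-normalize it, and sum the off-diagonal shares. The data below are random placeholders, and the lag order and horizon are assumptions.

```python
import numpy as np
from statsmodels.tsa.api import VAR

# Generalized-FEVD connectedness table (Diebold & Yilmaz style).
rng = np.random.default_rng(1)
data = rng.normal(size=(500, 4)).cumsum(axis=0)  # stand-in for volatility series

res = VAR(np.diff(data, axis=0)).fit(maxlags=2)
H = 10
A = res.ma_rep(H - 1)           # MA coefficients A_0..A_{H-1}
S = res.sigma_u                 # residual covariance
N = S.shape[0]

num = np.zeros((N, N))
den = np.zeros(N)
for h in range(H):
    Ah = A[h]
    num += (Ah @ S) ** 2 / np.diag(S)[None, :]   # (e_i' A_h S e_j)^2 / s_jj
    den += np.diag(Ah @ S @ Ah.T)                # e_i' A_h S A_h' e_i
theta = num / den[:, None]
theta /= theta.sum(axis=1, keepdims=True)        # row-normalize the shares

total_connectedness = 100 * (theta.sum() - np.trace(theta)) / N
print(round(total_connectedness, 1))
```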
Abstract:
In terms of execution time and data use, parallel/distributed applications can exhibit variable executions, even when the same input data set is used. Certain environment-related performance aspects can dynamically affect the application's behaviour, such as memory capacity, network latency, the number of nodes, and node heterogeneity, among others. It is important to consider that the application may run on different hardware configurations, and the application developer cannot guarantee that performance tuning for one particular system remains valid for other configurations. Dynamic analysis of applications has proved to be the best approach to performance analysis, for two main reasons. First, it offers a very convenient solution from the developers' point of view while they design and evaluate their parallel applications. Second, it adapts better to the application during execution. This approach requires no intervention from developers, nor even access to the application's source code. The application is analysed at run time, and the search for possible bottlenecks and optimizations is considered and carried out. To optimize the execution of the bioinformatics application mpiBLAST, we analysed its behaviour to identify the parameters involved in its performance, such as: memory usage, network usage, I/O patterns, the file system used, the processor architecture, the size of the biological database, the size of the query sequence, the distribution of sequences within the databases, the number of database fragments, and/or the granularity of the jobs assigned to each process. Our goal is to determine which of these parameters have the greatest impact on application performance and how to tune them dynamically to improve it. Analysing the performance of mpiBLAST, we found a data set that reveals a certain level of serialization within the execution. Recognizing the impact of the characterization of the sequences within the different databases, and a relationship between worker capacity and the granularity of the current workload, these could be tuned dynamically. Other improvements include optimizations related to the parallel file system and the possibility of execution on multiple multicore nodes. The work grain size is influenced by factors such as the database type, the database size, and the relationship between workload size and worker capacity.
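As a rough illustration of the dynamic work-granularity idea described above, the sketch below adapts the number of database fragments handed to each worker to its measured throughput. The class, thresholds, and grain-doubling policy are hypothetical, not mpiBLAST's actual scheduler.

```python
# Toy master-side scheduler: adapt the number of database fragments
# (the grain size) handed to each worker based on measured throughput,
# so slow workers stop serializing the end of the run.
class AdaptiveScheduler:
    def __init__(self, n_fragments, initial_grain=4):
        self.pending = list(range(n_fragments))
        self.grain = {}                      # per-worker grain size
        self.initial = initial_grain

    def next_chunk(self, worker):
        # Hand the worker its current grain's worth of fragments.
        g = self.grain.get(worker, self.initial)
        chunk, self.pending = self.pending[:g], self.pending[g:]
        return chunk

    def report(self, worker, chunk, elapsed):
        # Grow the grain for fast workers, shrink it for slow ones.
        # The rate threshold (1 fragment/s) is an illustrative choice.
        rate = len(chunk) / max(elapsed, 1e-9)
        g = self.grain.get(worker, self.initial)
        self.grain[worker] = g * 2 if rate > 1.0 else max(1, g // 2)

sched = AdaptiveScheduler(n_fragments=100)
work = sched.next_chunk("worker-0")
sched.report("worker-0", work, elapsed=1.5)   # slow -> smaller next chunk
print(sched.next_chunk("worker-0"))
```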
Abstract:
This paper looks at the dynamic management of risk in an economy with discrete-time consumption and endowments and continuous trading. I study how agents in such an economy deal with all the risk in the economy and attain their Pareto optimal allocations by trading in a few natural securities: private insurance contracts and a common set of derivatives on the aggregate endowment. The parsimonious nature of the implied securities needed for Pareto optimality suggests that, in such contexts, the assumption of complete markets is a very reasonable one.
Abstract:
Long-run economic growth arouses great interest since it can shed light on the income path of an economy and help explain the large differences in income we observe across countries and over time. The neoclassical model has been followed by several endogenous growth models which, contrary to the former, seem to predict that economies with similar preferences and technological levels do not necessarily tend to converge to similar per capita income levels. This paper attempts to show a possible mechanism through which macroeconomic disequilibria and inefficiencies, represented by budget deficits, may hinder human capital accumulation and therefore economic growth. Using a mixed education system, the deficit is characterized as a "bug" agent which may end up sharply reducing the resources devoted to education and training. The paper goes a step further than the literature on deficits by introducing a rich dynamic analysis of the effects of a deficit reduction on different economic aspects. Following a simple growth model and allowing for slight changes in the law of human capital accumulation, we reach a point where the deficit might sharply reduce human capital accumulation. On the other hand, a deficit reduction carried out for a long time, taking that reduction as a more efficient management of the economy, may prove useful in inducing endogenous growth. Empirical evidence for a sample of countries seems to support the theoretical assumptions of the model: (1) evidence of an inverse relationship between deficit and human capital accumulation; (2) presence of a strongly negative association between the size of the deficit in the economy and the rate of growth. Together, these findings suggest a certain role for the budget deficit in economic growth.
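A toy simulation of the mechanism sketched above, with the deficit absorbing resources that would otherwise fund education; the accumulation law and all parameter values are invented for illustration.

```python
import numpy as np

# Toy version of the mechanism: the deficit crowds out education
# spending, slowing human-capital accumulation h_{t+1} = h_t(1 + phi*e_t).
# The functional form and parameters are illustrative assumptions.
def simulate(deficit_share, T=50, h0=1.0, phi=0.08, budget=0.2):
    h, path = h0, []
    for _ in range(T):
        education = max(0.0, budget - deficit_share)  # resources left for education
        h *= 1.0 + phi * education
        path.append(h)
    return np.array(path)

for d in (0.00, 0.10, 0.18):
    print(f"deficit {d:.2f}: human capital after 50 periods = {simulate(d)[-1]:.2f}")
```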
Abstract:
The choice network revenue management (RM) model incorporates customer purchase behavior as customers purchasing products with certain probabilities that are a function of the offered assortment of products, and is the appropriate model for airline and hotel network revenue management, dynamic sales of bundles, and dynamic assortment optimization. The underlying stochastic dynamic program is intractable and even its certainty-equivalence approximation, in the form of a linear program called the Choice Deterministic Linear Program (CDLP), is difficult to solve in most cases. The separation problem for CDLP is NP-complete for MNL with just two segments when their consideration sets overlap; the affine approximation of the dynamic program is NP-complete for even a single-segment MNL. This is in contrast to the independent-class (perfect-segmentation) case, where even the piecewise-linear approximation has been shown to be tractable. In this paper we investigate the piecewise-linear approximation for network RM under a general discrete-choice model of demand. We show that the gap between the CDLP and the piecewise-linear bounds is within a factor of at most 2. We then show that the piecewise-linear approximation is polynomial-time solvable for a fixed consideration set size, bringing it into the realm of tractability for small consideration sets; small consideration sets are a reasonable modeling tradeoff in many practical applications. Our solution relies on showing that for any discrete-choice model the separation problem for the linear program of the piecewise-linear approximation can be solved exactly by a Lagrangian relaxation. We give modeling extensions and show by numerical experiments the improvements from using piecewise-linear approximation functions.
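For concreteness, the sketch below shows the MNL purchase probabilities that underlie choice-based formulations of this kind and brute-forces the best single assortment, which is only feasible for the small consideration sets the paper targets; the attraction values and revenues are made up.

```python
from itertools import chain, combinations

# MNL purchase probabilities for an offered assortment S:
#   P(j | S) = v_j / (v0 + sum_{k in S} v_k),  v0 = no-purchase weight.
# Attractions v and revenues r below are illustrative placeholders.
v = {"A": 2.0, "B": 1.5, "C": 0.8}   # product attraction values
r = {"A": 100, "B": 80, "C": 60}     # product revenues
v0 = 1.0                             # no-purchase attraction

def mnl_probs(offered):
    denom = v0 + sum(v[j] for j in offered)
    return {j: v[j] / denom for j in offered}

def expected_revenue(offered):
    return sum(p * r[j] for j, p in mnl_probs(offered).items())

# Enumerate all nonempty assortments -- tractable only because the
# consideration set here is tiny, mirroring the paper's regime.
products = list(v)
best = max(chain.from_iterable(combinations(products, n)
                               for n in range(1, len(products) + 1)),
           key=expected_revenue)
print(best, round(expected_revenue(best), 1))
```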
Abstract:
In the present research we set forth a new, simple trade-off model that allows us to calculate how much debt and, by extension, how much equity a company should have, using easily available information and calculating the cost of debt dynamically on the basis of the effect that the company's capital structure has on its risk of bankruptcy. The proposed model is applied to the companies that made up the Dow Jones Industrial Average (DJIA) in 2007, using consolidated financial data from 1996 to 2006 published by Bloomberg. We use the simplex optimization method to find the debt level that maximizes firm value, and then compare the estimated debt with the companies' actual debt using the nonparametric Mann-Whitney test. The results indicate that 63% of the companies do not show a statistically significant difference between actual and estimated debt.
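A minimal sketch of the trade-off logic: firm value is the unlevered value plus the tax shield minus expected bankruptcy costs, optimized over debt, followed by a Mann-Whitney comparison of estimated versus observed debt. The quadratic bankruptcy-cost term and all numbers are assumptions for illustration, not the paper's specification.

```python
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.stats import mannwhitneyu

# Toy trade-off valuation: V(D) = V_unlevered + tax shield - bankruptcy cost.
# The bankruptcy-cost shape is an illustrative assumption.
def firm_value(debt, v_u=1000.0, tax=0.35, alpha=0.002):
    tax_shield = tax * debt
    bankruptcy_cost = alpha * debt ** 2     # rises with leverage
    return v_u + tax_shield - bankruptcy_cost

res = minimize_scalar(lambda d: -firm_value(d), bounds=(0, 500),
                      method="bounded")
print("value-maximizing debt:", round(res.x, 1))

# Compare estimated vs. observed debt levels across firms (toy data).
estimated = np.array([87.5, 92.0, 110.3, 75.8])
observed = np.array([90.1, 88.4, 120.7, 70.2])
print(mannwhitneyu(estimated, observed))
```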
Abstract:
We analyse volatility spillovers in EMU sovereign bond markets. First, we examine the unconditional patterns during the full sample (April 1999-January 2014) using a measure recently proposed by Diebold and Yılmaz (2012). Second, we make use of a dynamic analysis to evaluate net directional volatility spillovers for each of the eleven countries under study, and to determine whether core and peripheral markets present differences. Finally, we apply a panel analysis to empirically investigate the determinants of net directional spillovers of this kind.
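Given a row-normalized spillover table of the Diebold-Yılmaz type (here a made-up 4x4 example), the directional and net measures used in this kind of analysis reduce to row and column sums of the off-diagonal entries, as sketched below.

```python
import numpy as np

# theta[i, j] = share of i's H-step forecast-error variance due to
# shocks in j (rows sum to 1). The 4x4 table below is invented data.
theta = np.array([[0.70, 0.10, 0.12, 0.08],
                  [0.15, 0.60, 0.15, 0.10],
                  [0.10, 0.20, 0.55, 0.15],
                  [0.05, 0.10, 0.20, 0.65]])
off = theta - np.diag(np.diag(theta))        # drop own-variance shares

from_others = 100 * off.sum(axis=1)          # spillovers received
to_others = 100 * off.sum(axis=0)            # spillovers transmitted
net = to_others - from_others                # net transmitter if > 0
pairwise = 100 * (off.T - off)               # net pairwise directional

print("net directional spillovers:", net.round(1))
print("net pairwise, market 0 vs 1:", pairwise[0, 1].round(1))
```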
Abstract:
Many multivariate methods that are apparently distinct can be linked by introducing one or more parameters in their definition. Methods that can be linked in this way are correspondence analysis, unweighted or weighted logratio analysis (the latter also known as "spectral mapping"), nonsymmetric correspondence analysis, principal component analysis (with and without logarithmic transformation of the data) and multidimensional scaling. In this presentation I will show how several of these methods, which are frequently used in compositional data analysis, may be linked through parametrizations such as power transformations, linear transformations and convex linear combinations. Since the methods of interest here all lead to visual maps of data, a "movie" can be made where the linking parameter is allowed to vary in small steps: the results are recalculated "frame by frame" and one can see the smooth change from one method to another. Several of these "movies" will be shown, giving a deeper insight into the similarities and differences between these methods.
Abstract:
We study the minimum mean square error (MMSE) and the multiuser efficiency η of large dynamic multiple-access communication systems in which optimal multiuser detection is performed at the receiver while the number and the identities of active users are allowed to change at each transmission time. The system dynamics are ruled by a Markov model describing the evolution of channel occupancy, and a large-system analysis is performed as the number of observations grows large. Starting from the equivalent scalar channel and the fixed-point equation tying multiuser efficiency and MMSE, we extend it to the case of a dynamic channel, and derive lower and upper bounds for the MMSE (and, thus, for η as well) that hold in the limit of large signal-to-noise ratios and increasingly large observation time T.
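A sketch of the static, equal-power version of the fixed-point relation tying multiuser efficiency to the scalar MMSE (the Tse-Hanly/Guo-Verdú form), shown for Gaussian inputs where mmse(g) = 1/(1+g); the paper's dynamic-channel bounds build on relations of this kind, so this is a simplification, not the paper's setting.

```python
# Fixed-point iteration for the multiuser efficiency eta of a large
# system with load beta and per-user SNR, equal-power users:
#   eta = 1 / (1 + beta * snr * mmse(eta * snr))
# Gaussian-input scalar MMSE: mmse(g) = 1 / (1 + g).
def multiuser_efficiency(beta, snr, tol=1e-12, max_iter=10_000):
    eta = 1.0
    for _ in range(max_iter):
        mmse = 1.0 / (1.0 + eta * snr)           # scalar-channel MMSE
        new = 1.0 / (1.0 + beta * snr * mmse)
        if abs(new - eta) < tol:
            return new
        eta = new
    return eta

for snr_db in (0, 10, 20):
    snr = 10 ** (snr_db / 10)
    print(f"beta=0.5, SNR={snr_db} dB: eta = {multiuser_efficiency(0.5, snr):.4f}")
```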
Abstract:
The aim of this paper is to analyse the impact of university knowledge and technology transfer activities on academic research output. Specifically, we study whether researchers with collaborative links with the private sector publish less than their peers without such links, once we control for other sources of heterogeneity. We report findings from a longitudinal dataset on researchers from two engineering departments in the UK between 1985 and 2006. Our results indicate that researchers with industrial links publish significantly more than their peers. Academic productivity, though, is higher for low levels of industry involvement than for high levels.
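As an illustration of the kind of estimation such a study involves, the sketch below fits a fixed-effects Poisson regression of publication counts on industry involvement and its square on simulated data; the variable names, functional form, and data are all invented, not the paper's specification.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Toy fixed-effects count regression in the spirit of the study:
# publications ~ involvement + involvement^2 + researcher dummies.
rng = np.random.default_rng(3)
n, t = 40, 10
df = pd.DataFrame({
    "researcher": np.repeat(np.arange(n), t),
    "involvement": rng.uniform(0, 1, n * t),
})
df["involvement2"] = df["involvement"] ** 2
# Simulate an inverted-U: productivity peaks at moderate involvement.
lam = np.exp(0.5 + 1.2 * df["involvement"] - 1.0 * df["involvement2"])
df["pubs"] = rng.poisson(lam)

fe_model = smf.poisson("pubs ~ involvement + involvement2 + C(researcher)",
                       data=df).fit(disp=0)
print(fe_model.params[["involvement", "involvement2"]].round(2))
```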