984 results for Convergence model
Abstract:
Following a general macroeconomic approach, this paper sets out a closed, micro-founded structural model to determine the long-run real exchange rate of a developed economy. In particular, the analysis follows the structure of a Natrex model. The main contribution of this research is the development of a solid theoretical framework that analyses in depth the determinants of the real exchange rate and the details of the equilibrium dynamics after any shock that moves the steady state. In our case, the intertemporal factors derived from the stock-flow relationship are particularly decisive. The main results of the paper can be summarised as follows. First, a complete, well-integrated structural model of long-run real exchange rate determination is developed from first principles. Moreover, within the dynamics of the model, certain convergence restrictions turn out to be necessary. On the one hand, for medium-run convergence, the sensitivity of the trade balance to changes in the real exchange rate must be greater than the corresponding sensitivity of investment decisions. On the other hand, for long-run convergence, it is also necessary both that there is a negative relationship between investment and capital stock accumulation and that the economy's overall saving depends positively on net foreign debt accumulation. In addition, there are also interesting conclusions about the effects that shocks to the model's exogenous variables have on the real exchange rate.
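The medium-run condition stated in this abstract (trade-balance sensitivity to the real exchange rate exceeding that of investment) can be illustrated with a deliberately simplified linear toy model. This is a sketch under assumed parameter values, not the paper's structural model:

```python
# Toy linearised adjustment of the real exchange rate q around a steady
# state q_star: the trade balance pulls q back with sensitivity b, while
# investment pushes it away with sensitivity a. Convergence requires b > a,
# mirroring the medium-run condition in the abstract. All values are
# illustrative assumptions.

def simulate(a, b, q0=1.0, q_star=0.0, steps=200, dt=0.05):
    """Iterate dq/dt = -(b - a) * (q - q_star); return final deviation."""
    q = q0
    for _ in range(steps):
        q += dt * -(b - a) * (q - q_star)
    return abs(q - q_star)

# Converges when the trade-balance sensitivity dominates:
assert simulate(a=0.2, b=0.8) < 1e-2   # b > a: deviation shrinks
assert simulate(a=0.8, b=0.2) > 1.0    # b < a: deviation grows
```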
Abstract:
This paper surveys the recent literature on convergence across countries and regions. I discuss the main convergence and divergence mechanisms identified in the literature and develop a simple model that illustrates their implications for income dynamics. I then review the existing empirical evidence and discuss its theoretical implications. Early optimism concerning the ability of a human capital-augmented neoclassical model to explain productivity differences across economies has been questioned on the basis of more recent contributions that make use of panel data techniques and obtain theoretically implausible results. Some recent research in this area tries to reconcile these findings with sensible theoretical models by exploring the role of alternative convergence mechanisms and the possible shortcomings of panel data techniques for convergence analysis.
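As a concrete companion to the survey's discussion, here is a minimal sketch of an unconditional beta-convergence regression, a standard tool in this literature; the economies and numbers are synthetic assumptions, not data from the paper:

```python
# Unconditional beta-convergence check: regress average growth on initial
# (log) income; a negative slope indicates that poorer economies grow
# faster, i.e. convergence. The data below are synthetic by construction.

def beta_convergence_slope(log_y0, growth):
    """OLS slope of average growth on initial log income (one regressor)."""
    n = len(log_y0)
    mx = sum(log_y0) / n
    my = sum(growth) / n
    cov = sum((x - mx) * (g - my) for x, g in zip(log_y0, growth))
    var = sum((x - mx) ** 2 for x in log_y0)
    return cov / var

log_y0 = [7.0, 8.0, 9.0, 10.0, 11.0]     # initial log income per capita
growth = [0.05, 0.04, 0.03, 0.02, 0.01]  # average annual growth
assert beta_convergence_slope(log_y0, growth) < 0  # negative: convergence
```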
Abstract:
The value of the elasticity of substitution of capital for resources is a crucial element in the debate over whether continual growth is possible. It is generally held that the elasticity has to be at least one to permit continual growth and that there is no way of estimating this outside the range of the data. This paper presents a model in which the elasticity is determined endogenously and may converge to one. It is concluded that the general opinion is wrong: that the possibility of continual growth does not depend on the exogenously given value of the elasticity and that the value of the elasticity outside the range of the data can be studied by econometric methods.
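For readers unfamiliar with the object of the debate, a brief sketch of a CES production function in capital and resources may help. It illustrates only the textbook role of the elasticity of substitution sigma (the paper's point is that sigma is determined endogenously); all values are illustrative:

```python
# CES production in capital K and resources R with substitution elasticity
# sigma. As sigma -> 1 the CES form converges to Cobb-Douglas, the
# knife-edge case the continual-growth debate centres on. The share
# parameter alpha and the inputs are assumptions for illustration.

def ces(K, R, sigma, alpha=0.5):
    """CES output; handles the Cobb-Douglas limit sigma == 1 explicitly."""
    if sigma == 1.0:
        return K ** alpha * R ** (1 - alpha)
    rho = (sigma - 1.0) / sigma
    return (alpha * K ** rho + (1 - alpha) * R ** rho) ** (1.0 / rho)

# Output approaches the Cobb-Douglas value as sigma approaches 1:
cd = ces(4.0, 9.0, 1.0)            # Cobb-Douglas: sqrt(4 * 9) = 6
near = ces(4.0, 9.0, 1.000001)
assert abs(near - cd) < 1e-3
```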
Abstract:
Weak solutions of the spatially inhomogeneous (diffusive) Aizenman-Bak model of coagulation-breakup within a bounded domain with homogeneous Neumann boundary conditions are shown to converge, in the fast-reaction limit, towards local equilibria determined by their mass. Moreover, this mass is the solution of a nonlinear diffusion equation whose nonlinearity depends on the (size-dependent) diffusion coefficient. Initial data are assumed to have integrable zero order moment and square integrable first order moment in size, and finite entropy. In contrast to our previous result [CDF2], we are able to show convergence without assuming uniform bounds from above and below on the number density of clusters.
Abstract:
We consider a nonlinear, cyclin-content-structured model of a cell population divided into proliferative and quiescent cells. We show, for particular values of the parameters, the existence of solutions that do not depend on the cyclin content. We carry out numerical simulations for the general case, obtaining convergence to the steady state for some values of the parameters but oscillations of the population for others.
Abstract:
Minimal models for the explanation of decision-making in computational neuroscience are based on the analysis of the evolution of the average firing rates of two interacting neuron populations. While these models typically lead to multi-stable scenarios for the derived dynamical systems, noise is an important feature of the model, accounting for finite-size effects and for the robustness of decisions. These stochastic dynamical systems can be analyzed by carefully studying their associated Fokker-Planck partial differential equation. In particular, we discuss existence, positivity and uniqueness for the solution of the stationary equation, as well as for the time-evolving problem. Moreover, we prove convergence of the solution to the stationary state, which represents the probability distribution of finding the neuron families in each of the decision states characterized by their average firing rates. Finally, we propose a numerical scheme for simulations of the Fokker-Planck equation, which are in agreement with those obtained recently by a moment method applied to the stochastic differential system. Our approach leads to a more detailed analytical and numerical study of this decision-making model in computational neuroscience.
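A minimal numerical illustration of such relaxation to a stationary state, using a simple explicit finite-difference scheme for a one-dimensional Fokker-Planck equation with linear drift (this is not the paper's scheme or equation; the grid, drift, and parameters are assumptions):

```python
import math

# Explicit finite differences for p_t = (x p)_x + D p_xx on [-L, L] with a
# crude no-flux treatment at the ends. The density relaxes towards the
# stationary Gaussian exp(-x^2 / (2 D)), illustrating long-time convergence
# to a stationary state. All parameters are illustrative assumptions.

def relax(D=0.5, L=4.0, n=81, dt=1e-3, steps=4000):
    dx = 2 * L / (n - 1)
    x = [-L + i * dx for i in range(n)]
    p = [1.0 if abs(xi) < 1.0 else 0.0 for xi in x]   # flat initial bump
    s = sum(p) * dx
    p = [pi / s for pi in p]                          # normalise to unit mass
    for _ in range(steps):
        new = p[:]
        for i in range(1, n - 1):
            drift = (x[i + 1] * p[i + 1] - x[i - 1] * p[i - 1]) / (2 * dx)
            diff = D * (p[i + 1] - 2 * p[i] + p[i - 1]) / dx ** 2
            new[i] = p[i] + dt * (drift + diff)
        new[0], new[-1] = new[1], new[-2]             # crude no-flux ends
        p = new
    return x, p, dx

def stationary(x, dx, D=0.5):
    """Normalised stationary density exp(-x^2 / (2 D)) on the grid."""
    g = [math.exp(-xi * xi / (2.0 * D)) for xi in x]
    z = sum(g) * dx
    return [gi / z for gi in g]
```

After 4000 steps the numerical solution is close to the stationary Gaussian, and mass is approximately conserved.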
Abstract:
The evolution of a quantitative phenotype is often envisioned as a trait substitution sequence where mutant alleles repeatedly replace resident ones. In infinite populations, the invasion fitness of a mutant in this two-allele representation of the evolutionary process is used to characterize features about long-term phenotypic evolution, such as singular points, convergence stability (established from first-order effects of selection), branching points, and evolutionary stability (established from second-order effects of selection). Here, we try to characterize long-term phenotypic evolution in finite populations from this two-allele representation of the evolutionary process. We construct a stochastic model describing evolutionary dynamics at non-rare mutant allele frequency. We then derive stability conditions based on stationary average mutant frequencies in the presence of vanishing mutation rates. We find that the second-order stability condition obtained from second-order effects of selection is identical to convergence stability. Thus, in two-allele systems in finite populations, convergence stability is enough to characterize long-term evolution under the trait substitution sequence assumption. We perform individual-based simulations to confirm our analytic results.
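The paper's individual-based simulations are far richer than this, but a bare-bones Moran process conveys the kind of two-allele dynamics in finite populations involved; the population size, selection coefficient, and run count below are illustrative assumptions:

```python
import random

# Moran process for a mutant allele with selective advantage s in a
# population of fixed size N: each step, a fitness-weighted individual
# reproduces and a uniformly chosen individual dies.

def moran_fixates(N, s, rng):
    """Run one Moran process from a single mutant; True if the mutant fixes."""
    k = 1                                     # current mutant count
    while 0 < k < N:
        w_mut = k * (1 + s)
        birth_mut = rng.random() < w_mut / (w_mut + (N - k))
        death_mut = rng.random() < k / N      # uniform death
        k += int(birth_mut) - int(death_mut)
    return k == N

# Estimated fixation probability vs the classical Moran formula
# (1 - 1/(1+s)) / (1 - (1+s)**-N); fixed seed for reproducibility.
rng = random.Random(42)
est = sum(moran_fixates(20, 0.1, rng) for _ in range(2000)) / 2000
theory = (1 - 1 / 1.1) / (1 - 1.1 ** -20)     # ~0.107
assert abs(est - theory) < 0.03
```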
Abstract:
This paper investigates the role of learning by private agents and the central bank (two-sided learning) in a New Keynesian framework in which both sides of the economy have asymmetric and imperfect knowledge about the true data generating process. We assume that all agents employ the data that they observe (which may be distinct for different sets of agents) to form beliefs about unknown aspects of the true model of the economy, use their beliefs to decide on actions, and revise these beliefs through a statistical learning algorithm as new information becomes available. We study the short-run dynamics of our model and derive its policy recommendations, particularly with respect to central bank communications. We demonstrate that two-sided learning can generate substantial increases in volatility and persistence, and alter the behavior of the variables in the model in a significant way. Our simulations do not converge to a symmetric rational expectations equilibrium, and we highlight one source that invalidates the convergence results of Marcet and Sargent (1989). Finally, we identify a novel aspect of central bank communication in models of learning: communication can be harmful if the central bank's model is substantially mis-specified.
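As a rough illustration of the kind of statistical learning algorithm such agents employ, here is a recursive least squares belief-updating sketch on a one-parameter model. It is not the paper's specification, and the data-generating process is an assumption:

```python
# Recursive least squares with decreasing gain 1/t: the agent's belief beta
# about the coefficient in y = beta * x is revised each period as new data
# arrive; R tracks the second moment of the regressor.

def rls_update(beta, R, x, y, t):
    """One RLS step; returns the updated (beta, R)."""
    R = R + (1.0 / t) * (x * x - R)
    beta = beta + (1.0 / t) * (x / R) * (y - beta * x)
    return beta, R

# Deterministic toy data generated with true coefficient 1.5; in this
# well-specified single-agent case the belief converges to the truth,
# unlike the asymmetric two-sided setting the paper studies.
beta, R = 0.0, 1.0
xs = [((i * 37) % 11) / 10.0 + 0.5 for i in range(1, 501)]
for t, x in enumerate(xs, start=1):
    beta, R = rls_update(beta, R, x, 1.5 * x, t)
assert abs(beta - 1.5) < 1e-6
```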
Abstract:
The NLR family, pyrin domain-containing 3 (NLRP3) inflammasome is a multiprotein complex that activates caspase 1, leading to the processing and secretion of the pro-inflammatory cytokines interleukin-1beta (IL-1beta) and IL-18. The NLRP3 inflammasome is activated by a wide range of danger signals that derive not only from microorganisms but also from metabolic dysregulation. It is unclear how these highly varied stress signals can be detected by a single inflammasome. In this Opinion article, we review the different signalling pathways that have been proposed to engage the NLRP3 inflammasome and suggest a model in which one of the crucial elements for NLRP3 activation is the generation of reactive oxygen species (ROS).
Abstract:
In the context of Systems Biology, computer simulations of gene regulatory networks provide a powerful tool to validate hypotheses and to explore possible system behaviors. Nevertheless, modeling a system poses challenges of its own: in particular, model calibration is often difficult due to insufficient data. When considering developmental systems, for example, mostly qualitative data describing the developmental trajectory are available, while common calibration techniques rely on high-resolution quantitative data. Focusing on the calibration of differential equation models for developmental systems, this study investigates different approaches to using the available data to overcome these difficulties. More specifically, the fact that developmental processes are hierarchically organized is exploited to increase the convergence rate of the calibration process as well as to save computation time. Using a gene regulatory network model for stem cell homeostasis in Arabidopsis thaliana, the performance of the different approaches is evaluated, documenting considerable gains provided by the proposed hierarchical approach.
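The hierarchical idea can be caricatured as staged calibration: fit the parameters governing an earlier developmental stage first, then freeze them while fitting the later stage. The toy two-stage model, grid search, and data below are all assumptions, not the study's setup:

```python
# Staged (hierarchical) calibration of a toy two-stage model:
#   stage 1: y1 = a * t      stage 2: y2 = y1 + b * t
# Fitting a on stage-1 data alone, then b with a frozen, shrinks each
# search to one dimension, the kind of saving the hierarchical approach
# exploits. Data were generated with a = 2 and b = 3.

def grid_fit(loss, grid):
    """Return the grid value minimising a one-parameter loss."""
    return min(grid, key=loss)

t_obs = [1.0, 2.0, 3.0]
y1_obs = [2.0, 4.0, 6.0]        # stage-1 observations (a = 2)
y2_obs = [5.0, 10.0, 15.0]      # stage-2 observations (b = 3 on top)

grid = [i * 0.5 for i in range(13)]          # candidate values 0.0 .. 6.0

# Stage 1 first: calibrate a against stage-1 data alone.
a_hat = grid_fit(lambda a: sum((a * t - y) ** 2
                               for t, y in zip(t_obs, y1_obs)), grid)
# Then stage 2, with a frozen: calibrate b against stage-2 data.
b_hat = grid_fit(lambda b: sum((a_hat * t + b * t - y) ** 2
                               for t, y in zip(t_obs, y2_obs)), grid)
assert (a_hat, b_hat) == (2.0, 3.0)
```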
Abstract:
The paper proposes a technique to jointly test for groupings of unknown size in the cross-sectional dimension of a panel and to estimate the parameters of each group, and applies it to identifying convergence clubs in income per capita. The approach uses the predictive density of the data, conditional on the parameters of the model. The steady state distribution of European regional data clusters around four poles of attraction with different economic features. The distribution of income per capita of OECD countries has two poles of attraction, and each group has clearly identifiable economic characteristics.
Abstract:
The ability of tumor cells to leave a primary tumor, to disseminate through the body, and to ultimately seed new secondary tumors is universally agreed to be the basis for metastasis formation. An accurate description of the cellular and molecular mechanisms that underlie this multistep process would greatly facilitate the rational development of therapies that effectively allow metastatic disease to be controlled and treated. A number of disparate and sometimes conflicting hypotheses and models have been suggested to explain various aspects of the process, and no single concept explains the mechanism of metastasis in its entirety or encompasses all observations and experimental findings. The exciting progress made in metastasis research in recent years has refined existing ideas, as well as giving rise to new ones. In this review we survey some of the main theories that currently exist in the field and show that significant convergence is emerging, allowing a synthesis of several models into a more comprehensive overview of the process of metastasis. As a result we postulate a stromal progression model of metastasis. In this model, progressive modification of the tumor microenvironment is just as important as genetic and epigenetic changes in tumor cells during primary tumor progression. Mutual regulatory interactions between stroma and tumor cells modify the stemness of the cells that drive tumor growth, in a manner that involves epithelial-mesenchymal and mesenchymal-epithelial-like transitions. Similar interactions need to be recapitulated at secondary sites for metastases to grow. Early disseminating tumor cells can progress at the secondary site in parallel to the primary tumor, both in terms of genetic changes and in the progressive development of a metastatic stroma.
Although this model brings together many ideas in the field, a number of major open questions nevertheless remain, underscoring the need for further research to fully understand metastasis and thereby identify new and effective ways of treating metastatic disease.
Abstract:
This paper examines the properties of G-7 cycles using a multicountry Bayesian panel VAR model with time variations, unit-specific dynamics and cross-country interdependences. We demonstrate the presence of a significant world cycle and show that country-specific indicators play a much smaller role. We detect differences across business cycle phases but, apart from an increase in synchronicity in the late 1990s, find little evidence of major structural changes. We also find no evidence of the existence of a Euro-area-specific cycle or of its emergence in the 1990s.
Abstract:
In this paper we address a problem arising in risk management, namely the study of price variations of different contingent claims in the Black-Scholes model due to anticipating future events. The method we propose is an extension of the classical Vega index, i.e. the price derivative with respect to the constant volatility, in the sense that we perturb the volatility in different directions. This directional derivative, which we call the local Vega index, serves as the main object of the paper, and one of our purposes is to relate it to the classical Vega index. We show that for all contingent claims studied in this paper the local Vega index can be expressed as a weighted average of the perturbation in volatility. In the particular case where the interest rate and the volatility are constant and the perturbation is deterministic, the local Vega index is an average of this perturbation multiplied by the classical Vega index. We also study the well-known goal problem of maximizing the probability of a perfect hedge and show that the speed of convergence in fact depends on the local Vega index.
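The constant-perturbation case mentioned in the abstract can be checked directly in the Black-Scholes model: a small constant bump eps in the volatility moves the price by approximately eps times the classical Vega. The pricing formulas below are standard; the numerical inputs are illustrative:

```python
import math

def bs_call(S, K, T, r, sigma):
    """Standard Black-Scholes price of a European call."""
    d1 = (math.log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    N = lambda z: 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))
    return S * N(d1) - K * math.exp(-r * T) * N(d2)

def vega(S, K, T, r, sigma):
    """Classical Vega index: derivative of the price in the constant volatility."""
    d1 = (math.log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * math.sqrt(T))
    return S * math.sqrt(T) * math.exp(-0.5 * d1 ** 2) / math.sqrt(2.0 * math.pi)

# A constant volatility perturbation eps moves the price by ~ eps * Vega,
# the constant-perturbation special case discussed in the abstract.
S, K, T, r, sigma, eps = 100.0, 100.0, 1.0, 0.02, 0.2, 1e-4
bump = bs_call(S, K, T, r, sigma + eps) - bs_call(S, K, T, r, sigma)
assert abs(bump / eps - vega(S, K, T, r, sigma)) < 1e-2
```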