991 results for Policy diffusion
Stabilized Petrov-Galerkin methods for the convection-diffusion-reaction and the Helmholtz equations
Abstract:
We present two new stabilized high-resolution numerical methods for the convection–diffusion–reaction (CDR) and the Helmholtz equations, respectively. The work embarks upon a priori analysis of some consistency recovery procedures for stabilization methods belonging to the Petrov–Galerkin framework. We found that the use of some standard practices (e.g. M-matrix theory) for the design of essentially non-oscillatory numerical methods is not feasible when consistency recovery methods are employed. Hence, with respect to convective stabilization, such recovery methods are not preferred. Next, we present the design of a high-resolution Petrov–Galerkin (HRPG) method for the 1D CDR problem. The problem is studied from a fresh point of view, including practical implications of the maximum principle, M-matrix theory, monotonicity and total variation diminishing (TVD) finite volume schemes. The current method follows earlier methods that may be viewed as an upwinding operator plus a discontinuity-capturing operator. Finally, some remarks are made on the extension of the HRPG method to multiple dimensions. Next, we present a new numerical scheme for the Helmholtz equation resulting in quasi-exact solutions. The focus is on the approximation of the solution to the Helmholtz equation in the interior of the domain using compact stencils. Piecewise linear/bilinear polynomial interpolation is considered on a structured mesh/grid. The only a priori requirement is to provide a mesh/grid resolution of at least eight elements per wavelength. No stabilization parameters are involved in the definition of the scheme. The scheme consists of taking the average of the equation stencils obtained by the standard Galerkin finite element method and the classical finite difference method. Dispersion analysis in 1D and 2D illustrates the quasi-exact properties of this scheme.
Finally, some remarks are made on the extension of the scheme to unstructured meshes by designing a method within the Petrov–Galerkin framework.
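The stencil-averaging idea for the Helmholtz equation can be checked in 1D with a short dispersion calculation. The sketch below is an illustrative reconstruction under stated assumptions (a uniform grid, linear elements, the model problem u'' + k²u = 0, and a plane-wave ansatz u_j = exp(i·k_h·j·h)); it is not the authors' code. It compares the relative dispersion error of the finite difference stencil, the Galerkin FEM stencil, and their average at the stated resolution of eight elements per wavelength.

```python
import numpy as np

def numerical_wavenumber(kh, scheme):
    """Discrete wavenumber k_h*h implied by a 3-point stencil for
    u'' + k^2 u = 0 on a uniform grid, via the ansatz u_j = exp(i*k_h*j*h).

    'fd'  : classical finite differences (mass term k^2 * u_j),
    'fem' : Galerkin linear FEM (mass term k^2*(u_{j-1}+4u_j+u_{j+1})/6),
    'avg' : average of the two stencils (the scheme described above),
            whose mass term is k^2*(u_{j-1}+10u_j+u_{j+1})/12.
    """
    if scheme == "fd":
        c = 1.0 - kh**2 / 2.0
    elif scheme == "fem":
        c = (2.0 - 2.0 * kh**2 / 3.0) / (2.0 + kh**2 / 3.0)
    elif scheme == "avg":
        c = (2.0 - 5.0 * kh**2 / 6.0) / (2.0 + kh**2 / 6.0)
    else:
        raise ValueError(scheme)
    return np.arccos(c)  # = k_h * h

kh = 2 * np.pi / 8  # eight elements per wavelength
for scheme in ("fd", "fem", "avg"):
    err = abs(numerical_wavenumber(kh, scheme) - kh) / kh
    print(f"{scheme}: relative dispersion error = {err:.2e}")
```

At this resolution the FD and FEM stencils each carry a dispersion error of a few percent, with opposite signs, so their average cancels the leading error term and leaves an error roughly an order of magnitude smaller, consistent with the "quasi-exact" behavior claimed in 1D.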
Abstract:
The assessment of medical technologies has to answer several questions ranging from safety and effectiveness to complex economic, social, and health policy issues. The type of data needed to carry out such an evaluation depends on the specific questions to be answered, as well as on the stage of development of a technology. Basically two types of data may be distinguished: (a) general demographic, administrative, or financial data that were not collected specifically for technology assessment; (b) data collected with respect either to a specific technology or to a disease or medical problem. On the basis of a pilot inquiry in Europe and bibliographic research, the following categories of type (b) data bases have been identified: registries, clinical data bases, banks of factual and bibliographic knowledge, and expert systems. Examples of each category are discussed briefly. The following aims for further research and practical goals are proposed: criteria for the minimal data set required, improvements to registries and clinical data banks, and development of an international clearinghouse to enhance the diffusion of information on both existing data bases and available reports on medical technology assessments.
Abstract:
In this paper we consider a representative a priori unstable Hamiltonian system with 2+1/2 degrees of freedom, to which we apply the geometric mechanism for diffusion introduced in Delshams et al., Mem. Amer. Math. Soc. 2006, and generalized in Delshams and Huguet, Nonlinearity 2009, and provide explicit, concrete and easily verifiable conditions for the existence of diffusing orbits. The simplification of the hypotheses allows us to perform the computations along the proof explicitly, which helps present the geometric mechanism of diffusion in an easily understandable way. In particular, we fully describe the construction of the scattering map and the combination of two types of dynamics on a normally hyperbolic invariant manifold.
Abstract:
We examine the evolution of monetary policy rules in a group of inflation targeting countries (Australia, Canada, New Zealand, Sweden and the United Kingdom), applying a moment-based estimator to a time-varying parameter model with endogenous regressors. Using this novel flexible framework, our main findings are threefold. First, monetary policy rules change gradually, pointing to the importance of a time-varying estimation framework. Second, the interest rate smoothing parameter is much lower than what previous time-invariant estimates of policy rules typically report. External factors matter for all countries, although the importance of the exchange rate diminishes after the adoption of inflation targeting. Third, the response of interest rates to inflation is particularly strong during periods when central bankers want to break a record of high inflation, such as in the U.K. or in Australia at the beginning of the 1980s. Contrary to common wisdom, the response becomes less aggressive after the adoption of inflation targeting, suggesting a positive effect of this regime on anchoring inflation expectations. This result is supported by our finding that inflation persistence, as well as the policy neutral rate, typically decreased after the adoption of inflation targeting.
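The interest rate smoothing discussed above refers to the partial-adjustment form of the Taylor rule, in which the current rate is a weighted average of the lagged rate and a target rate. The sketch below illustrates that mechanism; every coefficient value (rho, r_star, beta, gamma, and the starting rate) is hypothetical and chosen only for illustration, not taken from the paper's estimates.

```python
# Illustrative partial-adjustment ("interest rate smoothing") Taylor rule:
#   i_t = rho * i_{t-1} + (1 - rho) * [r* + pi* + beta*(pi_t - pi*) + gamma*x_t]
# All numbers are hypothetical, chosen only to show how the smoothing
# parameter rho shapes the policy path.

def taylor_rate(i_prev, inflation, output_gap,
                rho=0.8, r_star=2.0, pi_star=2.0,
                beta=1.5, gamma=0.5):
    """Nominal rate implied by the smoothed rule for one period."""
    target = r_star + pi_star + beta * (inflation - pi_star) + gamma * output_gap
    return rho * i_prev + (1.0 - rho) * target

# With inflation at target and a closed output gap, the rule converges
# geometrically (at speed rho) to the neutral nominal rate r* + pi* = 4.0.
i = 6.0
for _ in range(60):
    i = taylor_rate(i, inflation=2.0, output_gap=0.0)
print(round(i, 2))  # approaches 4.0
```

A high rho means the rate adjusts slowly toward its target, which is why a lower estimated smoothing parameter, as reported above, implies more responsive policy than time-invariant estimates suggest.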
Abstract:
Estimated Taylor rules have become popular as a description of monetary policy conduct. There are numerous reasons why real monetary policy can be asymmetric and the estimated Taylor rule nonlinear. This paper tests whether monetary policy can be described as asymmetric in three new European Union (EU) members (the Czech Republic, Hungary and Poland), which apply an inflation targeting regime. Two different empirical frameworks are
Abstract:
We examine whether and how major central banks responded to episodes of financial stress over the last three decades. We employ a new methodology for the estimation of monetary policy rules, which allows for time-varying response coefficients and corrects for endogeneity. This flexible framework, applied to the U.S., U.K., Australia, Canada and Sweden together with a new financial stress dataset developed by the International Monetary Fund, allows us not only to test whether the central banks responded to financial stress but also to detect the periods and types of stress that were the most worrying for monetary authorities and to quantify the intensity of the policy response. Our findings suggest that central banks often change policy
Abstract:
This paper investigates the effects of fiscal policy on the trade balance using a structural factor model. A fiscal policy shock worsens the trade balance and produces an appreciation of the domestic currency but the effects are quantitatively small. The findings match the theoretical predictions of the standard Mundell-Fleming model, although fiscal policy should not be considered one of the main causes of the large US external deficit. My conclusions differ from those reached using VAR models since the fiscal shock, possibly due to fiscal foresight, is nonfundamental for the variables typically used in open economy VARs.
Abstract:
This paper addresses the issue of policy evaluation in a context in which policymakers are uncertain about the effects of oil prices on economic performance. I consider models of the economy inspired by Solow (1980), Blanchard and Gali (2007), Kim and Loungani (1992) and Hamilton (1983, 2005), which incorporate different assumptions on the channels through which oil prices have an impact on economic activity. I first study the characteristics of the model space and I analyze the likelihood of the different specifications. I show that the existence of plausible alternative representations of the economy forces the policymaker to face the problem of model uncertainty. Then, I use the Bayesian approach proposed by Brock, Durlauf and West (2003, 2007) and the minimax approach developed by Hansen and Sargent (2008) to integrate this form of uncertainty into policy evaluation. I find that, in the environment under analysis, the standard Taylor rule is outperformed under a number of criteria by alternative simple rules in which policymakers introduce persistence in the policy instrument and respond to changes in the real price of oil.
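The two decision criteria named above can be made concrete with a small schematic: Bayesian model averaging weights each candidate model's loss by its posterior probability, while the minimax criterion guards against the worst-case model. The loss numbers, model probabilities, and rule names below are invented purely for illustration; they are not values from the paper.

```python
# Schematic comparison of the two policy-evaluation criteria discussed above.
# losses[rule][model]: hypothetical stabilization loss of each policy rule
# under each candidate model of the oil-price transmission channel.
losses = {
    "standard_taylor":   {"solow": 2.0, "blanchard_gali": 1.0, "hamilton": 5.0},
    "inertial_oil_rule": {"solow": 2.2, "blanchard_gali": 1.5, "hamilton": 2.5},
}
posterior = {"solow": 0.3, "blanchard_gali": 0.5, "hamilton": 0.2}

def bayesian_loss(rule):
    # Brock, Durlauf and West: posterior-weighted expected loss.
    return sum(posterior[m] * losses[rule][m] for m in posterior)

def minimax_loss(rule):
    # Hansen and Sargent: loss under the least favorable model.
    return max(losses[rule].values())

for criterion in (bayesian_loss, minimax_loss):
    best = min(losses, key=criterion)
    print(criterion.__name__, "->", best)
```

With these made-up numbers both criteria favor the inertial rule that responds to oil prices, mirroring the abstract's qualitative finding that the standard Taylor rule is outperformed under several criteria when model uncertainty is taken into account.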
Abstract:
We present a dynamic model where the accumulation of patents generates an increasing number of claims on sequential innovation. We compare innovation activity under three regimes (patents, no patents, and patent pools) and find that none of them can reach the first best. We find that the first best can be reached through a decentralized tax-subsidy mechanism, by which innovators receive a subsidy when they innovate and are taxed with subsequent innovations. This finding implies that optimal transfers work in exactly the opposite way from traditional patents. Finally, we consider patents of finite duration and determine the optimal patent length.