938 results for Zero-lower bound


Relevance: 100.00%

Abstract:

In this work, we propose a DSGE model that seeks to answer some questions about the quantitative easing (QE) policies recently implemented in response to the 2008 crisis. We develop a DSGE model with heterogeneous agents and preferred-habitat demand for government bond purchases. Our model allows us to study the optimality of the purchase portfolio (in terms of bond duration) for central banks implementing the policy. Moreover, the heterogeneous structure lets us examine the income distribution effects caused by the bond purchases. Our preliminary results show the distributive effect of QE. However, our extended model presented some stability problems.

Relevance: 100.00%

Abstract:

I show that when a central bank is financially independent from the treasury and has balance sheet concerns, an increase in the size or a change in the composition of the central bank's balance sheet (quantitative easing) can serve as a commitment device in a liquidity trap scenario. In particular, when the short-term interest rate is up against the zero lower bound, an open market operation by the central bank that involves purchases of long-term bonds can help mitigate deflation and a large negative output gap under a discretionary equilibrium. This is because such an open market operation gives the central bank an incentive to keep interest rates low in the future in order to avoid losses on its balance sheet.

Relevance: 100.00%

Abstract:

The situation known as the "Zero Lower Bound" occurs when the short-term interest rate is very low and central banks lose their main monetary policy instrument for stimulating economic activity. Under these conditions, unconventional policies are used, such as monetary expansion (QE) and communication to the market about the central bank's intentions over a longer time horizon. Japan has faced this situation since the 1990s and has made extensive use of both. After a review of the relevant literature, this work investigates the effectiveness of the QE programs conducted by the BOJ, using the available data and vector autoregressions, and concludes that there is no statistical evidence of the desired results. Given the inability to improve economic growth while keeping inflation within a target, it suggests that even studies with statistically robust conclusions should remain subject to the Lucas critique.
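
As a rough illustration of the econometric approach described above, and not the author's actual specification, a vector autoregression on Japanese macroeconomic series could be set up as follows; the series names, data file, and lag order are all placeholder assumptions:

    import pandas as pd
    from statsmodels.tsa.api import VAR

    # Hypothetical monthly Japanese series; the file and column names are
    # placeholders, not the data actually used in the study.
    data = pd.read_csv("japan_macro.csv", index_col="date", parse_dates=True)
    data = data[["monetary_base", "cpi_inflation", "industrial_production"]].dropna()

    model = VAR(data)
    results = model.fit(maxlags=12, ic="aic")  # lag order selected by AIC (an assumption)

    # Impulse responses trace how a monetary-base (QE) shock propagates to
    # the other series over 24 months.
    irf = results.irf(24)
    irf.plot(impulse="monetary_base")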

Relevance: 100.00%

Abstract:

This research develops an econometric framework to analyze time series processes with bounds. The framework is general enough to incorporate several different kinds of bounding information that constrain continuous-time stochastic processes between discretely sampled observations. It applies to situations in which the process is known to remain within an interval between observations, either by way of a known constraint or through the observation of extreme realizations of the process. The main statistical technique employs the theory of maximum likelihood estimation. This approach leads to the development of the asymptotic distribution theory for the estimation of the parameters in bounded diffusion models. The results of this analysis have several implications for empirical research. The advantages are realized in the form of efficiency gains, bias reduction, and flexibility of model specification. A bias arises when bounding information is present but ignored, and it is mitigated within this framework. An efficiency gain arises in the sense that the statistical methods make use of the conditioning information revealed by the bounds. Further, the specification of an econometric model can be uncoupled from the restriction to the bounds, leaving the researcher free to model the process near the bound in a way that avoids bias from misspecification. One byproduct of the improvements in model specification is that the more precise model estimation exposes other sources of misspecification. Some processes reveal themselves to be unlikely candidates for a given diffusion model once the observations are analyzed in combination with the bounding information. A closer inspection of the theoretical foundation behind diffusion models leads to a more general specification of the model. This approach is used to produce a set of algorithms that make the model computationally feasible and more widely applicable. Finally, the modeling framework is applied to a series of interest rates which, for several years, have been constrained by the lower bound of zero. The estimates from a series of diffusion models suggest a substantial difference in estimation results between models that ignore bounds and the framework that takes bounding information into consideration.
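
As a sketch of the general estimation strategy only (the paper's contribution is the bound-aware transition density, which is replaced here by a plain Euler approximation), maximum likelihood estimation of a discretely sampled diffusion can be set up like this; the Ornstein-Uhlenbeck parameterization and the synthetic data are assumptions:

    import numpy as np
    from scipy.optimize import minimize

    # Synthetic positive series standing in for a discretely sampled rate process.
    dt = 1.0 / 12.0
    x = np.abs(np.cumsum(np.random.default_rng(0).normal(0, 0.1, 240))) + 0.01

    def neg_log_lik(theta):
        # Assumed drift kappa*(mu - x) and constant volatility sigma, with the
        # Euler (Gaussian) transition density.  The framework in the paper
        # replaces this density with one conditioned on the process staying
        # above the bound between observations.
        kappa, mu, sigma = theta
        if sigma <= 0:
            return np.inf
        mean = x[:-1] + kappa * (mu - x[:-1]) * dt
        var = sigma ** 2 * dt
        ll = -0.5 * np.sum(np.log(2 * np.pi * var) + (x[1:] - mean) ** 2 / var)
        return -ll

    fit = minimize(neg_log_lik, x0=[0.5, 0.05, 0.1], method="Nelder-Mead")
    print(fit.x)  # estimated (kappa, mu, sigma), ignoring the bound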

Relevance: 100.00%

Abstract:

A low-complexity, essentially-ML decoding technique for the Golden code and the three-antenna Perfect code was introduced by Sirianunpiboon, Howard and Calderbank. Though no theoretical analysis of the decoder was given, simulations showed that this decoding technique has almost maximum-likelihood (ML) performance. Inspired by this technique, in this paper we introduce two new low-complexity decoders for Space-Time Block Codes (STBCs): the Adaptive Conditional Zero-Forcing (ACZF) decoder and the ACZF decoder with successive interference cancellation (ACZF-SIC), which include as a special case the decoding technique of Sirianunpiboon et al. We show that both ACZF and ACZF-SIC decoders are capable of achieving full diversity, and we give a set of sufficient conditions for an STBC to give full diversity with these decoders. We then show that the Golden code, the three- and four-antenna Perfect codes, the three-antenna Threaded Algebraic Space-Time code and the four-antenna rate-2 code of Srinath and Rajan are all full-diversity ACZF/ACZF-SIC decodable with complexity strictly less than that of their ML decoders. Simulations show that the proposed decoding method performs identically to ML decoding for all these five codes. These STBCs along with the proposed decoding algorithm have the least decoding complexity and best error performance among all known codes for two, three, and four transmit antennas. We further provide a lower bound on the complexity of full-diversity ACZF/ACZF-SIC decoding. All five codes listed above achieve this lower bound and hence are optimal in terms of minimizing the ACZF/ACZF-SIC decoding complexity. Both ACZF and ACZF-SIC decoders are amenable to sphere-decoding implementations.
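
The conditional zero-forcing at the heart of the ACZF decoder builds on the basic zero-forcing step; as background only (not the ACZF algorithm itself), zero-forcing detection for a generic linear MIMO model y = Hx + n looks like the following, with the QPSK slicing and channel draw as illustrative assumptions:

    import numpy as np

    rng = np.random.default_rng(0)
    nt, nr = 2, 2  # transmit / receive antennas (illustrative)
    H = (rng.normal(size=(nr, nt)) + 1j * rng.normal(size=(nr, nt))) / np.sqrt(2)
    x = np.array([1 + 1j, -1 + 1j]) / np.sqrt(2)  # QPSK symbols
    n = 0.05 * (rng.normal(size=nr) + 1j * rng.normal(size=nr))
    y = H @ x + n

    x_zf = np.linalg.pinv(H) @ y  # zero-forcing: invert the channel
    x_hat = (np.sign(x_zf.real) + 1j * np.sign(x_zf.imag)) / np.sqrt(2)  # slice to QPSK
    print(np.allclose(x_hat, x))  # True at this noise level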

Relevance: 100.00%

Abstract:

An engineer assessing the load-carrying capacity of an existing reinforced concrete slab is likely to use elastic analysis to check the load at which the structure might be expected to fail in flexure or in shear. In practice, many reinforced concrete slabs are highly ductile in flexure, so an elastic analysis greatly underestimates the loads at which they fail in this mode. The use of conservative elastic analysis has led engineers to incorrectly condemn many slabs and therefore to specify unnecessary and wasteful flexural strengthening or replacement. The lower bound theorem is based on the same principles as the upper bound theorem used in yield line analysis, but any solution that rigorously satisfies the lower bound theorem is guaranteed to be a safe underestimate of the collapse load. Jackson presented a rigorous lower bound method that obtains very accurate results for complex real slabs.
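
A minimal worked example of the theorem (not Jackson's method itself): consider a fixed-ended one-way slab strip of span L with hogging and sagging moment capacities both assumed equal to m per unit width, under a uniformly distributed load w. An elastic check limits the load to w = 12m/L^2, at which the elastic support moment wL^2/12 first reaches m. But the redistributed moment field with support moments m and midspan moment wL^2/8 - m is in equilibrium and nowhere exceeds the capacity provided

    w \le \frac{16m}{L^2},

so the lower bound theorem guarantees that loads up to 16m/L^2, a third more than the elastic estimate, are safe.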

Relevance: 100.00%

Abstract:

We study the statistics of optical data transmission in a noisy nonlinear fiber channel with weak dispersion management and zero average dispersion. Applying analytical expressions for the output probability density functions, both for a nonlinear channel and for a linear channel with additive and multiplicative noise, we calculate in closed form a lower bound estimate on the Shannon capacity for an arbitrary signal-to-noise ratio.
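
The generic route to such estimates, stated here only as background to the closed-form result in the paper: for any admissible input distribution, the mutual information lower-bounds the capacity,

    C \ge I(X;Y) = h(Y) - h(Y \mid X),

so analytical expressions for the output densities make the right-hand side computable for an arbitrary signal-to-noise ratio.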

Relevance: 100.00%

Abstract:

Let V = (v_1, ..., v_n) be an array. The range query problem concerns the design of data structures for implementing the following operations. The operation update(j, x) has the effect v_j <- v_j + x, and the query operation retrieve(i, j) returns the partial sum v_i + ... + v_j. These tasks are to be performed on-line. We define an algebraic model, based on the use of matrices, for the study of the problem. We also establish a lower bound for the sum of the average complexity of both kinds of operations, and demonstrate that this lower bound is near-optimal in terms of asymptotic complexity.
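
For context, the classical data structure achieving O(log n) for both operations is the Fenwick (binary indexed) tree; the sketch below is this standard structure, not the paper's matrix-based model:

    class FenwickTree:
        # Prefix sums with point updates, both in O(log n) time.
        def __init__(self, n):
            self.tree = [0] * (n + 1)  # 1-indexed internal array

        def update(self, j, x):
            # v_j <- v_j + x
            while j < len(self.tree):
                self.tree[j] += x
                j += j & (-j)  # jump to the next node covering index j

        def _prefix(self, j):
            # v_1 + ... + v_j
            s = 0
            while j > 0:
                s += self.tree[j]
                j -= j & (-j)  # strip the lowest set bit
            return s

        def retrieve(self, i, j):
            # v_i + ... + v_j
            return self._prefix(j) - self._prefix(i - 1)

    # Usage: t = FenwickTree(8); t.update(3, 5); t.retrieve(1, 4)  -> 5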

Relevance: 90.00%

Abstract:

A common trick for designing faster quantum adiabatic algorithms is to apply the adiabaticity condition locally at every instant. However, it is often difficult to determine the instantaneous gap between the lowest two eigenvalues, which is an essential ingredient in the adiabaticity condition. In this paper we present a simple linear-algebraic technique for obtaining a lower bound on the instantaneous gap even in such a situation. As an illustration, we investigate the adiabatic unordered search of van Dam et al. [17] and Roland and Cerf [15] when the non-zero entries of the diagonal final Hamiltonian are perturbed by an amount polynomial in log N, where N is the length of the unordered list. We use our technique to derive a bound on the running time of a local adiabatic schedule in terms of the minimum gap between the lowest two eigenvalues.
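
As a small numerical illustration of the quantity being bounded (the generic adiabatic search Hamiltonian, not the paper's perturbed final Hamiltonian or its algebraic technique), the instantaneous gap can be computed exactly for small N:

    import numpy as np

    N = 16  # length of the unordered list (small enough for exact diagonalization)
    psi = np.full(N, 1 / np.sqrt(N))  # uniform superposition over the list
    marked = 3  # index of the marked item (arbitrary)

    H0 = np.eye(N) - np.outer(psi, psi)  # initial Hamiltonian
    H1 = np.eye(N)
    H1[marked, marked] = 0.0  # final Hamiltonian, ground state = marked item

    for s in np.linspace(0.0, 1.0, 11):
        H = (1 - s) * H0 + s * H1
        evals = np.linalg.eigvalsh(H)  # eigenvalues in ascending order
        print(f"s = {s:.1f}  gap = {evals[1] - evals[0]:.4f}")  # minimum ~1/sqrt(N) at s = 1/2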

Relevance: 90.00%

Abstract:

We study the equilibrium properties of the nearest-neighbor Ising antiferromagnet on a triangular lattice in the presence of a staggered field conjugate to one of the degenerate ground states. Using a mapping of the ground states of the model without the staggered field to dimer coverings on the dual lattice, we classify the ground states into sectors specified by the number of "strings." We show that the effect of the staggered field is to generate long-range interactions between strings. In the limiting case of the antiferromagnetic coupling constant J becoming infinitely large, we prove the existence of a phase transition in this system and obtain a finite lower bound for the transition temperature. For finite J, we study the equilibrium properties of the system using Monte Carlo simulations with three different dynamics. We find that in all three cases, equilibration times for low-field values increase rapidly with system size at low temperatures. Due to this difficulty in equilibrating sufficiently large systems at low temperatures, our finite-size scaling analysis of the numerical results does not permit a definite conclusion about the existence of a phase transition for finite values of J. A surprising feature of the system is that, unlike usual glassy systems, a zero-temperature quench almost always leads to the ground state, while a slow cooling does not.
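
A minimal sketch of one possible dynamics, single-spin-flip Metropolis (whether this matches any of the paper's three dynamics is an assumption, as are the lattice representation, the reference ground state defining the staggered field, and all parameter values):

    import numpy as np

    rng = np.random.default_rng(1)
    L, J, h, T = 12, 1.0, 0.1, 0.5  # lattice size, AF coupling, staggered field, temperature

    # Triangular lattice stored as a square grid with one extra diagonal bond.
    NBRS = [(1, 0), (-1, 0), (0, 1), (0, -1), (1, 1), (-1, -1)]
    spins = rng.choice([-1, 1], size=(L, L))

    # A three-sublattice ground state of the antiferromagnet; the staggered
    # field h is taken conjugate to this reference configuration.
    sigma0 = np.fromfunction(lambda i, j: np.where((i - j) % 3 == 0, -1, 1), (L, L), dtype=int)

    def sweep():
        # One Metropolis sweep for E = J * sum_<ij> s_i s_j - h * sum_i sigma0_i s_i.
        for _ in range(L * L):
            i, j = rng.integers(L), rng.integers(L)
            nn = sum(spins[(i + di) % L, (j + dj) % L] for di, dj in NBRS)
            dE = 2 * spins[i, j] * (h * sigma0[i, j] - J * nn)
            if dE <= 0 or rng.random() < np.exp(-dE / T):
                spins[i, j] *= -1

    for _ in range(500):
        sweep()
    print(np.mean(spins * sigma0))  # overlap with the reference ground state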

Relevance: 90.00%

Abstract:

Motivated by the viscosity bound in gauge/gravity duality, we consider the ratio of shear viscosity (η) to entropy density (s) in black hole accretion flows. We use both an ideal gas equation of state and the QCD equation of state obtained from lattice calculations for the fluid accreting onto a Kerr black hole. The QCD equation of state is considered since the temperature of accreting matter is expected to approach 10^12 K in certain hot flows. We find that in both cases η/s is small only for primordial black holes, and is several orders of magnitude larger than that of any known fluid for stellar and supermassive black holes. We show that a lower bound on the mass of primordial black holes leads to a lower bound on η/s and vice versa. Finally, we speculate that the Shakura-Sunyaev viscosity parameter should decrease with increasing density and/or temperature.
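
For reference, the conjectured gauge/gravity (KSS) viscosity bound against which these ratios are measured is

    \frac{\eta}{s} \ge \frac{\hbar}{4\pi k_B} .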

Relevance: 90.00%

Abstract:

The maximal rate of a nonsquare complex orthogonal design for n transmit antennas is 1/2 + 1/n if n is even and 1/2 + 1/(n+1) if n is odd, and codes achieving this rate have been constructed for all n by Liang (2003) and Lu et al. (2005). A lower bound on the decoding delay of maximal-rate complex orthogonal designs has been obtained by Adams et al. (2007), and it is observed that Liang's construction achieves the bound on delay for n equal to 1 and 3 modulo 4, while Lu et al.'s construction achieves the bound for n = 0, 1, 3 mod 4. For n = 2 mod 4, Adams et al. (2010) have shown that the minimal decoding delay is twice the lower bound, in which case both Liang's and Lu et al.'s constructions achieve the minimum decoding delay. For large values of n, the rate is close to half and the decoding delay is very large. A class of rate-1/2 codes with low decoding delay for all n has been constructed by Tarokh et al. (1999). In this paper, another class of rate-1/2 codes is constructed for all n, whose decoding delay is half that of the rate-1/2 codes given by Tarokh et al. This is achieved by first giving a general construction of square real orthogonal designs that includes as special cases the well-known constructions of Adams, Lax, and Phillips and the construction of Geramita and Pullman, and then making use of it to obtain the desired rate-1/2 codes. For the case of nine transmit antennas, the proposed rate-1/2 code is shown to be of minimal delay. The proposed construction results in designs with zero entries, which may have a high peak-to-average power ratio; it is shown that by appropriate post-multiplication, a design with no zero entry can be obtained with no change in the code parameters.
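
For concreteness, the simplest complex orthogonal design is the square 2x2 Alamouti code, shown here only to illustrate the code class, not the paper's nonsquare constructions:

    X = \begin{pmatrix} x_1 & x_2 \\ -x_2^* & x_1^* \end{pmatrix},
    \qquad X^H X = \left( |x_1|^2 + |x_2|^2 \right) I_2 .

The column orthogonality is what permits single-symbol ML decoding, and it is the property preserved by the nonsquare designs discussed above.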

Relevance: 90.00%

Abstract:

String theory and gauge/gravity duality suggest the lower bound of shear viscosity (η) to entropy density (s) for any matter to be μħ/(4π k_B), where ħ and k_B are the reduced Planck and Boltzmann constants respectively and μ <= 1. Motivated by this, we explore η/s in black hole accretion flows, in order to understand whether such exotic flows could be a natural site for the lowest η/s. Accretion flow plays an important role in black hole physics in identifying the existence of the underlying black hole. It is a rotating shear flow with insignificant molecular viscosity, which could nevertheless have a significant turbulent viscosity, generating transport, heat, and hence entropy in the flow. However, in the presence of a strong magnetic field, magnetic stresses can help in transporting matter independently of viscosity, via the celebrated Blandford-Payne mechanism. In such cases, energy, and then entropy, is produced via Ohmic dissipation. In addition, certain optically thin, hot accretion flows, with temperatures greater than or of order 10^9 K, may be favourable for nuclear burning, which could generate or absorb huge amounts of energy, much more than in a star. We find that η/s in accretion flows appears to be close to the lower bound suggested by theory if they are embedded in a strong magnetic field or produce nuclear energy, in which case the source of energy is not viscous effects. A lower bound on η/s also leads to an upper bound on the Reynolds number of the flow.

Relevance: 90.00%

Abstract:

In the present paper, based on the principles of gauge/gravity duality, we analytically compute the shear viscosity to entropy density (η/s) ratio corresponding to the superfluid phase in Einstein Gauss-Bonnet gravity. From our analysis we note that the ratio indeed receives a finite-temperature correction below a certain critical temperature (T < T_c). This proves the non-universality of the η/s ratio in higher-derivative theories of gravity. We also compute the upper bound for the Gauss-Bonnet coupling (λ) corresponding to the symmetry-broken phase and note that the upper bound on the coupling does not seem to change as long as we are close to the critical point of the phase diagram. However, the corresponding lower bound of the η/s ratio seems to get modified due to the finite-temperature effects.
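
For orientation, the standard result for η/s in Einstein Gauss-Bonnet gravity in the unbroken phase (quoted from the literature, not the paper's finite-temperature-corrected expression) is

    \frac{\eta}{s} = \frac{1}{4\pi}\,(1 - 4\lambda),

which already departs from the universal 1/4π value; the paper's point is that below T_c this lower bound receives further finite-temperature corrections.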

Relevance: 90.00%

Abstract:

Given a Boolean function f : F_2^n -> {0,1}, we say a triple (x, y, x + y) is a triangle in f if f(x) = f(y) = f(x + y) = 1. A triangle-free function contains no triangle. If f differs from every triangle-free function on at least ε·2^n points, then f is said to be ε-far from triangle-free. In this work, we analyze the query complexity of testers that, with constant probability, distinguish triangle-free functions from those ε-far from triangle-free. The canonical tester for triangle-freeness is the algorithm that repeatedly picks x and y uniformly and independently at random from F_2^n, queries f(x), f(y) and f(x + y), and checks whether f(x) = f(y) = f(x + y) = 1. Green showed that the canonical tester rejects functions ε-far from triangle-free with constant probability if its query complexity is a tower of 2's whose height is polynomial in 1/ε. Fox later improved the height of the tower in Green's upper bound. A trivial lower bound of Ω(1/ε) on the query complexity is immediate. In this paper, we give the first non-trivial lower bound on the number of queries needed. We show that, for every small enough ε, there exists an integer n_0 such that for all n >= n_0 there exists a function f depending on all n variables which is ε-far from being triangle-free and on which the canonical tester requires a number of queries super-linear in 1/ε. We also show that the query complexity of any general (possibly adaptive) one-sided tester for triangle-freeness is at least the square root of the query complexity of the corresponding canonical tester. Consequently, any one-sided tester for triangle-freeness must make at least a correspondingly large number of queries.
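
A minimal sketch of the canonical tester described above; representing points of F_2^n as n-bit integers and fixing the query budget in advance are illustrative choices:

    import random

    def canonical_test(f, n, trials):
        # One-sided canonical tester: never rejects a triangle-free f, and
        # rejects only upon finding a triangle (x, y, x + y).
        # Each trial makes three queries to f.
        for _ in range(trials):
            x = random.getrandbits(n)  # uniform element of F_2^n as an n-bit integer
            y = random.getrandbits(n)
            if f(x) == f(y) == f(x ^ y) == 1:  # addition over F_2^n is bitwise XOR
                return "reject"
        return "accept"

    # Example: the all-ones function is full of triangles and is rejected quickly.
    print(canonical_test(lambda v: 1, n=10, trials=100))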