957 results for Monetary Dynamic Models
Abstract:
Flash floods pose a significant danger to life and property. Unfortunately, in arid and semiarid environments runoff generation shows complex non-linear behavior with strong spatial and temporal non-uniformity. As a result, the predictions made by physically-based simulations in semiarid areas are subject to great uncertainty, and a failure in the predictive behavior of existing models is common. Thus, better descriptions of physical processes at the watershed scale need to be incorporated into hydrological model structures. For example, terrain relief has been systematically considered static in flood modelling at the watershed scale. Here, we show that the integrated effect of small distributed relief variations, originating from concurrent hydrological processes within a storm event, was significant for the watershed-scale hydrograph. We model these observations by introducing dynamic formulations of two relief-related parameters at diverse scales: maximum depression storage, and the roughness coefficient in channels. In the final (a posteriori) model structure these parameters are allowed to be either time-constant or time-varying. The case under study is a convective storm in a semiarid Mediterranean watershed with ephemeral channels and high agricultural pressures (the Rambla del Albujón watershed; 556 km²), which showed a complex multi-peak response. First, to obtain quasi-sensible simulations in the (a priori) model with time-constant relief-related parameters, a spatially distributed parameterization was strictly required. Second, a generalized likelihood uncertainty estimation (GLUE) inference applied to the improved model structure, and conditioned on observed nested hydrographs, showed that accounting for dynamic relief-related parameters led to improved simulations. The discussion is finally broadened by considering the use of the calibrated model both to analyze the sensitivity of the watershed to storm motion and to attempt flood forecasting of a stratiform event with highly different behavior.
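The GLUE inference mentioned above follows a standard Monte Carlo recipe. A minimal sketch, assuming a hypothetical `run_model(params)` that returns a simulated hydrograph and an observed series `q_obs`; these names, the uniform priors, and the Nash-Sutcliffe behavioral threshold are illustrative assumptions, not the paper's setup:

```python
import numpy as np

def glue(run_model, q_obs, bounds, n_samples=5000, threshold=0.3, seed=0):
    """Generalized Likelihood Uncertainty Estimation (GLUE): sample parameter
    sets uniformly within bounds, score each simulated hydrograph with the
    Nash-Sutcliffe efficiency (NSE), retain 'behavioral' sets above a threshold,
    and return likelihood-weighted 5-95% prediction bounds per time step."""
    rng = np.random.default_rng(seed)
    lows, highs = np.asarray(bounds, dtype=float).T   # bounds: list of (low, high)
    sims, likes = [], []
    for _ in range(n_samples):
        params = rng.uniform(lows, highs)
        q_sim = run_model(params)
        nse = 1.0 - np.sum((q_obs - q_sim) ** 2) / np.sum((q_obs - q_obs.mean()) ** 2)
        if nse > threshold:                            # behavioral parameter set
            sims.append(q_sim)
            likes.append(nse)
    sims = np.asarray(sims)                            # (n_behavioral, n_timesteps)
    w = np.asarray(likes) / np.sum(likes)              # normalized likelihood weights
    lower, upper = np.empty(sims.shape[1]), np.empty(sims.shape[1])
    for t in range(sims.shape[1]):                     # weighted quantiles per time step
        order = np.argsort(sims[:, t])
        cdf = np.cumsum(w[order])
        lower[t] = sims[order, t][np.searchsorted(cdf, 0.05)]
        upper[t] = sims[order, t][np.searchsorted(cdf, 0.95)]
    return lower, upper
```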
Abstract:
Future changes in runoff can have important implications for water resources and flooding. In this study, runoff projections from ISI-MIP (Inter-Sectoral Impact Model Intercomparison Project) simulations forced with HadGEM2-ES bias-corrected climate data under Representative Concentration Pathway 8.5 have been analysed for differences between impact models. Projections of change from a baseline period (1981-2010) to the future (2070-2099) from 12 impact models which contributed to the hydrological and biomes sectors of ISI-MIP were studied. The biome models differed from the hydrological models by the inclusion of CO2 impacts, and most also included a dynamic vegetation distribution. The biome and hydrological models agreed on the sign of runoff change for most regions of the world. However, in West Africa the hydrological models projected drying, and the biome models a moistening. The biome models tended to produce larger increases and smaller decreases in regionally averaged runoff than the hydrological models, although there was large inter-model spread. The timing of runoff change was similar, but there were differences in magnitude, particularly at peak runoff. The impact of vegetation distribution change was much smaller than the projected change over time, while elevated CO2 had an effect as large as the magnitude of change over time projected by some models in some regions. The effect of CO2 on runoff was not consistent across the models, with two models showing increases and two decreases. There was also more spread in projections from the runs with elevated CO2 than with constant CO2. The biome models which gave increased runoff from elevated CO2 were also those which differed most from the hydrological models. Spatially, the regions with the largest differences between model types tended to be those projected to experience the largest effect from elevated CO2, and seasonal differences were similar, so elevated CO2 can partly explain the differences between hydrological and biome model runoff change projections. This shows that a range of impact models should be considered in order to capture the full range of uncertainty in impact studies.
Abstract:
While state-of-the-art models of Earth's climate system have improved tremendously over the last 20 years, nontrivial structural flaws still hinder their ability to forecast the decadal dynamics of the Earth system realistically. Contrasting the skill of these models not only with each other but also with empirical models can reveal the space and time scales on which simulation models exploit their physical basis effectively and quantify their ability to add information to operational forecasts. The skill of decadal probabilistic hindcasts for annual global-mean and regional-mean temperatures from the EU Ensemble-Based Predictions of Climate Changes and Their Impacts (ENSEMBLES) project is contrasted with several empirical models. Both the ENSEMBLES models and a “dynamic climatology” empirical model show probabilistic skill above that of a static climatology for global-mean temperature. The dynamic climatology model, however, often outperforms the ENSEMBLES models. The fact that empirical models display skill similar to that of today's state-of-the-art simulation models suggests that empirical forecasts can improve decadal forecasts for climate services, just as in weather, medium-range, and seasonal forecasting. It is suggested that the direct comparison of simulation models with empirical models becomes a regular component of large model forecast evaluations. Doing so would clarify the extent to which state-of-the-art simulation models provide information beyond that available from simpler empirical models and clarify current limitations in using simulation forecasting for decision support. Ultimately, the skill of simulation models based on physical principles is expected to surpass that of empirical models in a changing climate; their direct comparison provides information on progress toward that goal, which is not available in model–model intercomparisons.
Abstract:
Purpose – The purpose of this paper is to explore the role of the housing market in the transmission of monetary policy to consumption among euro area member states. It has been argued that the housing market in a country matters for this transmission only when its mortgage market is well developed. The euro area countries follow a single monetary policy; however, their housing and mortgage markets show some heterogeneity, which may lead to different policy effects on aggregate consumption through the housing market. Design/methodology/approach – The housing market can act as a channel of monetary policy shocks to household consumption through changes in house prices and residential investment – the housing market channel. We estimate vector autoregressive models for each country and conduct a counterfactual analysis in order to disentangle the housing market channel and assess its importance across the euro area member states. Findings – We find little evidence for heterogeneity of the monetary policy transmission through house prices across the euro area countries. Housing market variations in the euro area seem to be better captured by changes in residential investment than by changes in house prices. As a result, we do not find significantly large house price channels. For some countries, however, we observe a monetary policy channel through residential investment. The existence of a housing channel may depend on institutional features of the labour market as well as on factors capturing the degree of household debt, such as the loan-to-value (LTV) ratio. Originality/value – The study contributes to the existing literature by assessing whether a single monetary policy has a different impact on consumption across the euro area countries through their housing and mortgage markets. We disentangle monetary-policy-induced effects on consumption associated with variations in the housing market due to either house price variations or residential investment changes. We show that the housing market can play a role in the monetary transmission mechanism even in countries with less developed mortgage markets, through variations in residential investment.
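A minimal sketch of the kind of country-level VAR and impulse-response exercise described above, using statsmodels; the data file and column names are hypothetical, and the counterfactual decomposition that shuts down the housing block is beyond this sketch:

```python
import pandas as pd
from statsmodels.tsa.api import VAR

# Hypothetical quarterly data for one country: policy rate, house prices,
# residential investment, and household consumption (suitably transformed).
data = pd.read_csv('country_quarterly.csv', index_col=0, parse_dates=True)[
    ['policy_rate', 'house_prices', 'res_investment', 'consumption']]

model = VAR(data)
res = model.fit(maxlags=4, ic='aic')     # lag order chosen by AIC

# Orthogonalized impulse responses over 20 quarters: the response of
# consumption to a policy-rate shock summarizes the transmission of interest.
irf = res.irf(20)
irf.plot(impulse='policy_rate', response='consumption', orth=True)
```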
Abstract:
A novel technique for selecting the poles of orthonormal basis functions (OBF) in Volterra models of any order is presented. It is well-known that the usual large number of parameters required to describe the Volterra kernels can be significantly reduced by representing each kernel using an appropriate basis of orthonormal functions. Such a representation results in the so-called OBF Volterra model, which has a Wiener structure consisting of a linear dynamic generated by the orthonormal basis followed by a nonlinear static mapping given by the Volterra polynomial series. Aiming at optimizing the poles that fully parameterize the orthonormal bases, the exact gradients of the outputs of the orthonormal filters with respect to their poles are computed analytically by using a back-propagation-through-time technique. The expressions relative to the Kautz basis and to generalized orthonormal bases of functions (GOBF) are addressed; the ones related to the Laguerre basis follow straightforwardly as a particular case. The main innovation here is that the dynamic nature of the OBF filters is fully considered in the gradient computations. These gradients provide exact search directions for optimizing the poles of a given orthonormal basis. Such search directions can, in turn, be used as part of an optimization procedure to locate the minimum of a cost-function that takes into account the error of estimation of the system output. The Levenberg-Marquardt algorithm is adopted here as the optimization procedure. Unlike previous related work, the proposed approach relies solely on input-output data measured from the system to be modeled, i.e., no information about the Volterra kernels is required. Examples are presented to illustrate the application of this approach to the modeling of dynamic systems, including a real magnetic levitation system with nonlinear oscillatory behavior.
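As an illustration of the Wiener structure described above (linear orthonormal filters followed by a static polynomial map), here is a minimal sketch of a Laguerre-basis, second-order OBF Volterra model in Python; the function names are illustrative, and the sketch only evaluates the model rather than implementing the paper's exact back-propagation-through-time gradients:

```python
import numpy as np
from scipy.signal import lfilter

def laguerre_bank(u, pole, n_filters):
    """Outputs of a discrete-time Laguerre orthonormal filter bank driven by u.
    First section: sqrt(1 - pole^2) / (1 - pole z^-1); each further function
    cascades the all-pass section (z^-1 - pole) / (1 - pole z^-1)."""
    x = lfilter([np.sqrt(1.0 - pole ** 2)], [1.0, -pole], u)
    outputs = [x]
    for _ in range(n_filters - 1):
        x = lfilter([-pole, 1.0], [1.0, -pole], x)   # all-pass cascade
        outputs.append(x)
    return np.column_stack(outputs)                  # shape (N, n_filters)

def obf_volterra(u, pole, c1, c2):
    """Second-order OBF Volterra model (Wiener structure): Laguerre dynamics
    followed by a static polynomial map with linear coefficients c1 and a
    symmetric quadratic coefficient matrix c2."""
    X = laguerre_bank(u, pole, len(c1))
    return X @ c1 + np.einsum('ni,ij,nj->n', X, c2, X)
```

In the paper the basis poles are themselves optimized: exact gradients of the filter outputs with respect to the poles are obtained by back-propagation through time and used in a Levenberg-Marquardt search. In a quick experiment one could instead hand `pole` (and the Volterra coefficients) to a generic nonlinear least-squares solver such as `scipy.optimize.least_squares`, at the cost of losing the exact analytic search directions.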
Abstract:
Two stochastic epidemic lattice models, the susceptible-infected-recovered and the susceptible-exposed-infected models, are studied on a Cayley tree of coordination number k. The spreading of the disease in the former is found to occur when the infection probability b is larger than b_c = k/[2(k - 1)]. In the latter, which is equivalent to a dynamic site percolation model, the spreading occurs when the infection probability p is greater than p_c = 1/(k - 1). We set up and solve the time evolution equations for both models and determine the final and time-dependent properties, including the epidemic curve. We show that the two models are closely related by revealing that their relevant properties are exactly mapped into each other when p = b/[k - (k - 1)b]. These include the cluster size distribution and the density of individuals of each type, quantities that have been determined in closed form.
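A one-line check that the stated mapping is consistent with the two thresholds quoted above: substituting b = b_c into p = b/[k - (k - 1)b] recovers p_c:

```latex
p(b_c) \;=\; \frac{b_c}{\,k-(k-1)\,b_c\,}
       \;=\; \frac{\tfrac{k}{2(k-1)}}{\,k-\tfrac{k}{2}\,}
       \;=\; \frac{k}{2(k-1)}\cdot\frac{2}{k}
       \;=\; \frac{1}{k-1} \;=\; p_c .
```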
Abstract:
Zwitterionic peptides with trypanocidal activity are promising lead compounds for the treatment of African sleeping sickness, and have motivated research into the design of compounds capable of disrupting the protozoan membrane. In this study, we use the Langmuir monolayer technique to investigate the surface properties of an antiparasitic peptide, namely S-(2,4-dinitrophenyl)glutathione di-2-propyl ester, and its interaction with a model membrane comprising a phospholipid monolayer. The drug formed stable Langmuir monolayers, whose main feature was a phase transition accompanied by a negative surface elasticity. This was attributed to aggregation upon compression due to intermolecular bond associations of the molecules, inferred from surface pressure and surface potential isotherms, Brewster angle microscopy (BAM) images, infrared spectroscopy, and dynamic elasticity measurements. When co-spread with dipalmitoyl phosphatidyl choline (DPPC), the drug affected both the surface pressure and the monolayer morphology, even at high surface pressures and with low amounts of the drug. The results were interpreted by assuming a repulsive, cooperative interaction between the drug and DPPC molecules. Such repulsive interaction and the large changes in fluidity arising from drug aggregation may be related to the disruption of the membrane, which is key to the parasite-killing property.
Abstract:
Frutalin is a homotetrameric alpha-D-galactose (D-Gal)-binding lectin that activates natural killer cells in vitro and promotes leukocyte migration in vivo. Because lectins are potent lymphocyte stimulators, understanding their interactions with cell surfaces can help elucidate the action mechanisms involved in this process. In this paper, we present a detailed investigation of the interactions of frutalin with phospho- and glycolipids using Langmuir monolayers as biomembrane models. The results confirm the specificity of frutalin for D-Gal attached to a biomembrane. Adsorption of frutalin was more efficient for lipids with a galactose polar head, in contrast to sulfated galactose, for which a lag time is observed, indicating a rearrangement of the monolayer to incorporate the protein. For ganglioside GM1 monolayers, lower quantities of the protein were adsorbed, probably because the D-galactose lies farther from the interface. Binary mixtures containing galactocerebroside revealed small domains formed at high lipid packing in the presence of frutalin, suggesting that the lectin induces clustering and domain formation in vitro, which may be a form of receptor internalization. This is the first experimental evidence of such a lectin effect, and it may be useful for understanding the mechanism of action of lectins at the molecular level.
Abstract:
Dynamic Time Warping (DTW), a pattern matching technique traditionally used for restricted-vocabulary speech recognition, is based on a temporal alignment of the input signal with the template models. The principal drawback of DTW is its high computational cost as the lengths of the signals increase. This paper extends the results of our previously published conference paper, which introduced an optimized version of DTW based on the Discrete Wavelet Transform (DWT).
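For reference, the baseline alignment that motivates the optimization is the classic quadratic-time DTW recursion; a minimal sketch follows (the paper's DWT-based speed-up itself is not shown here):

```python
import numpy as np

def dtw_distance(x, y):
    """Classic dynamic time warping between 1-D sequences x and y, with the
    squared difference as local cost. Time and memory are O(len(x) * len(y)),
    which is the cost the DWT-based optimization aims to reduce."""
    n, m = len(x), len(y)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = (x[i - 1] - y[j - 1]) ** 2
            D[i, j] = cost + min(D[i - 1, j],      # insertion
                                 D[i, j - 1],      # deletion
                                 D[i - 1, j - 1])  # match
    return D[n, m]
```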
Abstract:
We investigate several two-dimensional guillotine cutting stock problems and their variants in which orthogonal rotations are allowed. We first present two dynamic programming based algorithms for the Rectangular Knapsack (RK) problem and its variants in which the patterns must be staged. The first algorithm solves the recurrence formula proposed by Beasley; the second algorithm, for staged patterns, also uses a recurrence formula. We show that if the items are not too small compared to the dimensions of the bin, then these algorithms require polynomial time. Using these algorithms we solved all instances of the RK problem found in the OR-LIBRARY, including one for which no optimal solution was previously known. We also consider the Two-dimensional Cutting Stock problem. We present a column generation based algorithm for this problem that uses the first algorithm mentioned above to generate the columns. We propose two strategies to tackle the residual instances. We also investigate a variant of this problem in which the bins have different sizes. Finally, we study the Two-dimensional Strip Packing problem. We again present a column generation based algorithm, this time using the second algorithm mentioned above, in which staged patterns are imposed. In this case we solve instances for two-, three- and four-staged patterns. We report on computational experiments with the various algorithms proposed in this paper. The results indicate that these algorithms are suitable for solving real-world instances. We give a detailed description (pseudo-code) of all the algorithms presented here, so that the reader may easily implement them.
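As an illustration of the kind of recurrence the first algorithm solves, here is a minimal sketch of the textbook dynamic program for the unbounded guillotine rectangular knapsack; it is not the paper's exact algorithm, and staged patterns and rotations are not handled:

```python
from functools import lru_cache

def guillotine_knapsack(W, H, items):
    """Unbounded guillotine rectangular knapsack: the value of a w x h
    rectangle is the best of (i) the most valuable single item that fits,
    or (ii) the best vertical or horizontal guillotine cut into two
    sub-rectangles solved recursively. items: list of (width, height, value)."""
    @lru_cache(maxsize=None)
    def best(w, h):
        value = max((v for wi, hi, v in items if wi <= w and hi <= h), default=0)
        for w1 in range(1, w // 2 + 1):          # vertical cuts (half-range by symmetry)
            value = max(value, best(w1, h) + best(w - w1, h))
        for h1 in range(1, h // 2 + 1):          # horizontal cuts
            value = max(value, best(w, h1) + best(w, h - h1))
        return value
    return best(W, H)

# Example: a 10 x 10 bin with two item types
print(guillotine_knapsack(10, 10, [(4, 3, 7), (5, 5, 12)]))
```

In practice the candidate cut positions are restricted to the discretization (raster) points induced by the item sizes, which is what keeps such recursions tractable on real instances.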
Abstract:
Ghana has faced the macroeconomic problem of inflation for a long period of time, which has slowed economic growth in the country. Inflation is one of the major economic challenges facing most countries in the world, especially those in Africa, including Ghana. Forecasting inflation rates in Ghana is therefore very important for its government in designing economic strategies and effective monetary policies to combat any unexpectedly high inflation. This paper studies a seasonal autoregressive integrated moving average (SARIMA) model to forecast inflation rates in Ghana. Using monthly inflation data from July 1991 to December 2009, we find that an ARIMA(1,1,1)(0,0,1)_12 model represents the behavior of the inflation rate in Ghana well. Based on the selected model, we forecast seven months of inflation rates for Ghana outside the sample period (January 2010 to July 2010). The observed inflation rates from January to April, published by the Ghana Statistical Service, fall within the 95% confidence interval obtained from the fitted model. The forecasts show a decreasing pattern and a turning point in Ghana's inflation in the month of July.
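A minimal sketch of fitting the selected specification and producing a seven-month forecast with statsmodels; the file name and column are hypothetical stand-ins for the monthly inflation series:

```python
import pandas as pd
from statsmodels.tsa.statespace.sarimax import SARIMAX

# Hypothetical monthly inflation series, e.g. July 1991 - December 2009
y = pd.read_csv('ghana_inflation.csv', index_col=0, parse_dates=True)['inflation']

# ARIMA(1,1,1)(0,0,1)_12, the specification selected in the abstract
model = SARIMAX(y, order=(1, 1, 1), seasonal_order=(0, 0, 1, 12))
fit = model.fit(disp=False)

# Seven-month out-of-sample forecast with 95% confidence intervals
fc = fit.get_forecast(steps=7)
print(fc.predicted_mean)
print(fc.conf_int(alpha=0.05))
```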
Abstract:
The predominant knowledge-based approach to automated model construction, compositional modelling, employs a set of models of particular functional components. Its inference mechanism takes a scenario describing the constituent interacting components of a system and translates it into a useful mathematical model. This paper presents a novel compositional modelling approach aimed at building model repositories. It furthers the field in two respects. Firstly, it expands the application domain of compositional modelling to systems that can not be easily described in terms of interacting functional components, such as ecological systems. Secondly, it enables the incorporation of user preferences into the model selection process. These features are achieved by casting the compositional modelling problem as an activity-based dynamic preference constraint satisfaction problem, where the dynamic constraints describe the restrictions imposed over the composition of partial models and the preferences correspond to those of the user of the automated modeller. In addition, the preference levels are represented through the use of symbolic values that differ in orders of magnitude.
Abstract:
Parametric term structure models have been successfully applied to numerous problems in fixed income markets, including pricing, hedging, managing risk, and studying monetary policy implications. In turn, dynamic term structure models, equipped with stronger economic structure, have been mainly adopted to price derivatives and explain empirical stylized facts. In this paper, we combine features of those two classes of models to test whether no-arbitrage restrictions affect forecasting. We construct cross-sectional (arbitrage-allowing) and arbitrage-free versions of a parametric polynomial model to analyze how well they predict out-of-sample interest rates. Based on U.S. Treasury yield data, we find that no-arbitrage restrictions significantly improve forecasts. Arbitrage-free versions achieve overall smaller biases and root mean square errors for most maturities and forecasting horizons. Furthermore, a decomposition of forecasts into forward rates and holding-return premia indicates that the superior performance of the no-arbitrage versions is due to a better identification of the bond risk premium.
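The comparison criterion used above (bias and root mean square error by maturity, at a fixed forecasting horizon) is straightforward to compute; a minimal sketch with hypothetical arrays of realized and predicted yields:

```python
import numpy as np

def forecast_scores(realized, predicted):
    """Per-maturity bias and RMSE of out-of-sample yield forecasts.
    realized, predicted: arrays of shape (n_forecast_dates, n_maturities)
    for a fixed forecasting horizon."""
    err = np.asarray(predicted) - np.asarray(realized)
    bias = err.mean(axis=0)                   # mean forecast error per maturity
    rmse = np.sqrt((err ** 2).mean(axis=0))   # root mean square error per maturity
    return bias, rmse
```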
Abstract:
Several works in the shopping-time and in the human-capital literature, due to the nonconcavity of the underlying Hamiltonian, use first-order conditions in dynamic optimization to characterize necessity, but not sufficiency, in intertemporal problems. In this work I choose one paper in each of these two areas and show that optimality can be characterized by means of a simple application of Arrow's (1968) sufficiency theorem.
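For reference, a paraphrase of the standard statement of Arrow's sufficiency condition that the note applies; the notation here is generic, not taken from the papers discussed:

```latex
% Control problem and Hamiltonian:
\max_{u}\ \int_{0}^{T} f(x,u,t)\,dt
\quad\text{s.t.}\quad \dot{x}=g(x,u,t),\ x(0)=x_{0},
\qquad H(x,u,\lambda,t)=f(x,u,t)+\lambda\,g(x,u,t).

% Arrow's condition: if a candidate path satisfies the first-order (Pontryagin)
% conditions and the maximized Hamiltonian
H^{0}(x,\lambda(t),t)\;=\;\max_{u}\,H(x,u,\lambda(t),t)
% is concave in x for every t, then the candidate path is optimal,
% even when H itself is not concave in (x,u).
```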
Abstract:
This paper investigates which properties money-demand functions have to satisfy to be consistent with multidimensional extensions of Lucas's (2000) versions of the Sidrauski (1967) and the shopping-time models. We also investigate how such classes of models relate to each other regarding the rationalization of money demands. We conclude that money demand functions rationalizable by the shopping-time model are always rationalizable by the Sidrauski model, but that the converse is not true. The log-log money demand with an interest-rate elasticity greater than or equal to one and the semi-log money demand are counterexamples.
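For concreteness, the two functional forms cited as counterexamples are usually written as follows, with m denoting real balances (relative to income) and i the nominal interest rate; the constants A and B and the semi-elasticity are generic placeholders:

```latex
\text{log-log:}\quad m(i)=A\,i^{-\eta},\ \ \eta\ge 1;
\qquad
\text{semi-log:}\quad m(i)=B\,e^{-\xi i}.
```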