92 results for new keynesian models


Relevance: 30.00%

Abstract:

Estimated Taylor rules have become popular as a description of monetary policy conduct. There are numerous reasons why actual monetary policy may be asymmetric and the estimated Taylor rule nonlinear. This paper tests whether monetary policy can be described as asymmetric in three new European Union (EU) members (the Czech Republic, Hungary and Poland) that apply an inflation targeting regime. Two different empirical frameworks are

Relevance: 30.00%

Abstract:

This paper is about the role played by the stock of human capital in the location decisions of new manufacturing plants. We analyse the effect of several skill levels (from basic schooling to PhD) on decisions about the location of plants in various industries and, therefore, of different technological levels. We also test whether the level of spatial aggregation biases the results, and determine the most appropriate areas to consider in analyses of these phenomena. Our main statistical source is the Register of Manufacturing Establishments of Catalonia (REIC), which contains plant-level microdata on the locations of new manufacturing plants. Keywords: agglomeration economies, industrial location, human capital, count-data models, spatial econometrics.

Relevance: 30.00%

Abstract:

This paper tries to resolve some of the main shortcomings in the empirical literature on location decisions for new plants, namely spatial effects and overdispersion. Spatial effects are omnipresent, being both a source of overdispersion in the data and a factor shaping the functional relationship between the variables that explain a firm’s location decisions. Using count-data models, empirical researchers have dealt with overdispersion and excess zeros through extensions of the Poisson regression model. This study takes this a step further by adopting Bayesian methods and models in order to tackle the excess of zeros, spatial and non-spatial overdispersion, and spatial dependence simultaneously. Data for Catalonia are used and location determinants are analysed to that end. The results show that spatial effects are decisive. Additionally, overdispersion is decomposed into an unstructured iid effect and a spatially structured effect. Keywords: Bayesian Analysis, Spatial Models, Firm Location. JEL Classification: C11, C21, R30.
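The overdispersion this abstract refers to is easy to make concrete with a small simulation (this is an illustrative sketch, not the paper’s Bayesian model): a Poisson–gamma mixture, i.e. a negative binomial, produces counts whose variance exceeds the mean, which a plain Poisson model cannot capture. All parameter values below are invented:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Plain Poisson: variance equals the mean.
mu = 3.0
poisson_counts = rng.poisson(mu, size=n)

# Poisson-gamma mixture (negative binomial): each location draws its own
# rate from a gamma distribution, modeling unobserved heterogeneity.
shape = 2.0                                   # lower shape => more overdispersion
rates = rng.gamma(shape, mu / shape, size=n)  # E[rate] = mu
nb_counts = rng.poisson(rates)

print(poisson_counts.mean(), poisson_counts.var())  # both close to 3
print(nb_counts.mean(), nb_counts.var())            # mean ~3, variance ~ mu + mu**2/shape = 7.5
```

The gap between the mean and the variance of `nb_counts` is exactly the kind of extra-Poisson variation that the paper decomposes into unstructured and spatially structured components.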

Relevance: 30.00%

Abstract:

The objective of this paper is to analyze why firms in some industries locate in specialized economic environments (localization economies) while those in other industries prefer large city locations (urbanization economies). To this end, we examine the location decisions of new manufacturing firms in Spain at the city level and for narrowly defined industries (three-digit level). First, we estimate firm location models to obtain estimates that reflect the importance of localization and urbanization economies in each industry. In a second step, we regress these estimates on industry characteristics that are related to the potential importance of three agglomeration theories, namely, labor market pooling, input sharing and knowledge spillovers. Localization effects are low and urbanization effects are high in knowledge-intensive industries, suggesting that firms (partly) locate in large cities to reap the benefits of inter-industry knowledge spillovers. We also find that localization effects are high in industries that employ workers whose skills are more industry-specific, suggesting that industries (partly) locate in specialized economic environments to share a common pool of specialized workers.

Relevance: 30.00%

Abstract:

The general uptake of M-technologies and M-services at Spanish universities is still not very high as the first decade of the 21st century draws to a close. Some universities and some of their libraries are beginning to experiment with M-technologies, but they remain far from a model of massive exploitation, unlike in some other countries. A deeper study would be needed to identify the main reasons, which is beyond the scope of this paper. This general picture does not mean there are no significant initiatives in which universities and their libraries are starting to put their trust in M-technologies. Models based on M-technologies make more sense than ever in open universities and open libraries. That is why the UOC Library began its first experiences in M-technology and M-library development in the late 1990s. In 1999 the available technology offered the opportunity to carry out a first pilot test with SMS, followed by the application of WAP technology. At that time we managed to link mobile phones to the OPAC through a WAP system that allowed users to search the catalogue by category and find the final location of a document, also giving the address of the library from which the user could borrow it. Since then, UOC (and its library) has directed its efforts towards adapting its services to all kinds of M-devices used by end users. Having left WAP technology behind, the library is nowadays experimenting with new devices such as e-books, and with new services to obtain more feedback through the OPAC and metalibrary search products. We present the case of the Open University of Catalonia at two levels: M-services applied in the library, and M-technologies applied in other university services and resources.

Relevance: 30.00%

Abstract:

This working paper seeks to establish a new field of research at the crossroads between migration flows and information and communication flows. Several factors make this perspective worth adopting. The central point is that contemporary international migration is embedded in the dynamics of the information society, following common patterns and interconnected dynamics. Consequently, information flows are beginning to be identified as key issues in migration policies. Moreover, there is a lack of empirical knowledge about the design of information networks and the use of information and communication technologies in migratory contexts. This working paper also aims to be a source of hypotheses for further research.

Relevance: 30.00%

Abstract:

Quantitative or algorithmic trading is the automation of investment decisions that obey a fixed or dynamic set of rules to determine trading orders. It has grown to account for up to 70% of the trading volume on some of the largest financial markets, such as the New York Stock Exchange (NYSE). However, there is not a significant amount of academic literature devoted to it, owing to the private nature of investment banks and hedge funds. This project aims to review the literature and discuss the available models in a subject where publications are scarce and infrequent. We review the basic mathematical concepts needed for modeling financial markets, such as stochastic processes, stochastic integration, and basic models for price and spread dynamics, all necessary for building quantitative strategies. We also contrast these models with real market data sampled at one-minute frequency from the Dow Jones Industrial Average (DJIA). Quantitative strategies try to exploit two types of behavior: trend following or mean reversion. The former is grouped under the so-called technical models and the latter under so-called pairs trading. Technical models have been dismissed by financial theoreticians, but we show that they can be properly cast as well-defined scientific predictors if the signals they generate pass the test of being a Markov time: we can tell whether the signal has occurred or not by examining the information up to the current time, or, more technically, the event is F_t-measurable. On the other hand, the concept of pairs trading, or market-neutral strategy, is fairly simple. However, it can be cast in a variety of mathematical models, ranging from a method based on a simple Euclidean distance, to a co-integration framework, to stochastic differential equations such as the well-known Ornstein-Uhlenbeck mean-reverting SDE and its variations.
A model for forecasting any economic or financial magnitude can be properly defined with scientific rigor but still lack any economic value and be considered useless from a practical point of view. This is why this project would not be complete without a backtest of the strategies mentioned. Conducting a useful and realistic backtest is by no means a trivial exercise, since the "laws" that govern financial markets are constantly evolving. This is the reason we emphasize the calibration of the strategies' parameters to the prevailing market conditions. We find that the parameters of technical models are more volatile than their counterparts from market-neutral strategies, and that calibration must be done at high sampling frequency to constantly track the current market situation. As a whole, the goal of this project is to provide an overview of a quantitative approach to investment, reviewing basic strategies and illustrating them by means of a backtest with real financial market data. The sources of the data used in this project are Bloomberg for intraday time series and Yahoo! for daily prices. All numerical computations and graphics used and shown in this project were implemented in MATLAB from scratch as part of this thesis. No other mathematical or statistical software was used.
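The mean-reverting spread at the heart of pairs trading can be sketched with an Euler-Maruyama simulation of an Ornstein-Uhlenbeck process plus a naive two-standard-deviation entry rule. This is a toy illustration in Python, not the thesis code (which was written in MATLAB); all parameter values are invented:

```python
import numpy as np

rng = np.random.default_rng(42)

# Ornstein-Uhlenbeck spread: dX = theta * (mu - X) dt + sigma dW
theta, mu, sigma = 5.0, 0.0, 0.3
dt, n_steps = 1 / 252, 5000

x = np.empty(n_steps)
x[0] = mu
for t in range(1, n_steps):
    x[t] = x[t-1] + theta * (mu - x[t-1]) * dt + sigma * np.sqrt(dt) * rng.standard_normal()

# Stationary standard deviation of the OU process.
stat_sd = sigma / np.sqrt(2 * theta)

# Naive market-neutral rule: short the spread above +2 sd, long below -2 sd,
# flat inside the band.
position = np.where(x > mu + 2 * stat_sd, -1.0,
           np.where(x < mu - 2 * stat_sd, 1.0, 0.0))

# P&L of trading the spread itself: hold the position over the next step.
pnl = position[:-1] * np.diff(x)
print(f"stationary sd: {stat_sd:.4f}, cumulative P&L: {pnl.sum():.4f}")
```

In a real pairs trade the "spread" would be a linear combination of two co-integrated prices, and theta, mu, sigma would themselves have to be calibrated, which is exactly the high-frequency recalibration issue the thesis emphasizes.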

Relevance: 30.00%

Abstract:

Customer satisfaction and retention are key issues for organizations in today’s competitive marketplace. As such, much research and revenue has been invested in developing accurate ways of assessing consumer satisfaction at both the macro (national) and micro (organizational) level, facilitating comparisons in performance both within and between industries. Since the introduction of the national customer satisfaction indices (CSI), partial least squares (PLS) has been used to estimate CSI models in preference to structural equation models (SEM), because PLS does not rely on strict assumptions about the data. However, this choice was based on some misconceptions about the use of SEMs and does not take into consideration more recent advances in SEM, including estimation methods that are robust to non-normality and missing data. In this paper, the SEM and PLS approaches are compared by evaluating perceptions of the Isle of Man Post Office products and customer service using a CSI format. The new robust SEM procedures were found to be advantageous over PLS. Product quality was found to be the only driver of customer satisfaction, while image and satisfaction were the only predictors of loyalty, thus arguing for the specificity of postal services.

Relevance: 30.00%

Abstract:

Descriptors based on Molecular Interaction Fields (MIF) are highly suitable for drug discovery, but their size (thousands of variables) often limits their application in practice. Here we describe a simple and fast computational method that extracts from a MIF a handful of highly informative points (hot spots) that summarize the most relevant information. The method was developed specifically for drug discovery and does not require human supervision, making it suitable for application to very large series of compounds. The quality of the results has been tested by running the method on the ligand structures of a large number of ligand-receptor complexes and then comparing the positions of the selected hot spots with actual atoms of the receptor. As an additional test, the hot spots obtained with the novel method were used to compute GRIND-like molecular descriptors, which were compared with the original GRIND. In both cases the results show that the novel method is highly suitable for describing ligand-receptor interactions and compares favorably with other state-of-the-art methods.

Relevance: 30.00%

Abstract:

A series of new benzolactam derivatives was synthesized and the derivatives were evaluated for their affinities at the dopamine D1, D2, and D3 receptors. Some of these compounds showed high D2 and/or D3 affinity and selectivity over the D1 receptor. The SAR study of these compounds revealed structural characteristics that decisively influenced their D2 and D3 affinities. Structural models of the complexes between some of the most representative compounds of this series and the D2 and D3 receptors were obtained with the aim of rationalizing the observed experimental results. Moreover, selected compounds showed moderate binding affinity at 5-HT2A, which could contribute to reducing the occurrence of extrapyramidal side effects as potential antipsychotics.

Relevance: 30.00%

Abstract:

The purpose of this paper is to examine (1) some of the models commonly used to represent fading, and (2) the information-theoretic metrics most commonly used to evaluate performance over those models. We raise the question of whether these models and metrics remain adequate in light of the advances that wireless systems have undergone over the last two decades. Weaknesses are pointed out, and ideas on possible fixes are put forth.
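One of the classical models this paper revisits, Rayleigh fading, pairs naturally with the classical metric of ergodic capacity: the channel power gain |h|^2 is exponentially distributed with unit mean, and capacity is the average of log2(1 + SNR·|h|^2). A Monte Carlo sketch (the 10 dB operating point is an arbitrary choice, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(1)

snr_db = 10.0
snr = 10 ** (snr_db / 10)
n = 1_000_000

# Rayleigh fading: h is zero-mean complex Gaussian, so |h|^2 is
# exponentially distributed with mean 1.
gain = rng.exponential(1.0, size=n)

ergodic_capacity = np.log2(1 + snr * gain).mean()  # bits/s/Hz, averaged over fading
awgn_capacity = np.log2(1 + snr)                   # no fading, same average SNR

# By Jensen's inequality, fading reduces capacity at a fixed average SNR.
print(f"Rayleigh: {ergodic_capacity:.3f}, AWGN: {awgn_capacity:.3f} bits/s/Hz")
```

The gap between the two numbers is one concrete instance of what such metrics quantify; the paper’s question is whether averages of this kind remain the right figure of merit for modern systems.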

Relevance: 30.00%

Abstract:

This paper breaks new ground toward contractual and institutional innovation in models of homeownership, equity building, and mortgage enforcement. Inspired by recent developments in the affordable housing sector and in other types of public financing schemes, this paper suggests extending institutional and financial strategies such as time- and place-based division of property rights, conditional subsidies, and credit mediation to alleviate the systemic risks of mortgage foreclosure. Alongside a for-profit shared equity scheme that would be led by local governments, we also outline a private market shared equity model, one of bootstrapping home buying with purchase options.

Relevance: 30.00%

Abstract:

The past four decades have witnessed explosive growth in the field of network-based facility location modeling. This is not at all surprising, since location policy is one of the most profitable areas of applied systems analysis in regional science, offering ample theoretical and applied challenges. Location-allocation models seek the location of facilities and/or services (e.g., schools, hospitals, and warehouses) so as to optimize one or several objectives, generally related to the efficiency of the system or to the allocation of resources. This paper concerns the location of public-sector facilities or services in discrete space or on networks, such as emergency services (ambulances, fire stations, and police units), school systems, and postal facilities. The paper is structured as follows: first, we focus on public facility location models that use some type of coverage criterion, with special emphasis on emergency services. The second section examines models based on the P-Median problem and some of the issues faced by planners when implementing this formulation in real-world locational decisions. Finally, the last section examines new trends in public-sector facility location modeling.
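The P-Median problem mentioned above has a compact statement: choose p facility sites minimizing total demand-weighted distance from each demand point to its nearest open facility. For tiny instances it can be solved by enumeration, which makes the objective concrete; real instances require integer programming or the heuristics the paper surveys. The instance below is entirely made up:

```python
from itertools import combinations

# Made-up instance: 6 candidate sites that double as demand points,
# with demand weights, on a plane with Euclidean distances.
points = [(0, 0), (1, 0), (4, 0), (5, 1), (0, 3), (5, 4)]
demand = [10, 20, 15, 25, 10, 20]
p = 2  # number of facilities to open

def dist(a, b):
    return ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5

def cost(open_sites):
    # Each demand point is served by its nearest open facility.
    return sum(w * min(dist(pt, points[s]) for s in open_sites)
               for pt, w in zip(points, demand))

# Brute force over all C(6, 2) = 15 site pairs.
best = min(combinations(range(len(points)), p), key=cost)
print(best, round(cost(best), 2))
```

Coverage models differ only in the objective: instead of summing distances, they count the demand reachable within a fixed service standard, which is why both families fit the same location-allocation template.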

Relevance: 30.00%

Abstract:

In this paper we propose a metaheuristic to solve a new version of the Maximum Capture Problem (MCP). In the original MCP, market capture is determined by shorter traveling distances or shorter traveling times; in this new version, not only the traveling time but also the waiting time affects the market share. This problem is hard to solve using standard optimization techniques. Metaheuristics are shown to offer accurate results within acceptable computing times.

Relevance: 30.00%

Abstract:

A new algorithm called the parameterized expectations approach (PEA) for solving dynamic stochastic models under rational expectations is developed, and its advantages and disadvantages are discussed. This algorithm can, in principle, approximate the true equilibrium arbitrarily well. Also, the algorithm works from the Euler equations, so that the equilibrium does not have to be cast in the form of a planner's problem. Monte Carlo integration and the absence of grids on the state variables cause the computation costs not to go up exponentially when the number of state variables or exogenous shocks in the economy increases. As an application we analyze an asset pricing model with endogenous production. We analyze its implications for the time dependence of the volatility of stock returns and for the term structure of interest rates. We argue that this model can generate hump-shaped term structures.
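The PEA iteration can be sketched on the textbook stochastic growth model with log utility and full depreciation (a deliberately simple stand-in, not the paper's asset pricing application), because there the exact policy c = (1 - alpha*beta)*z*k^alpha is known and can be used to check convergence. The conditional expectation in the Euler equation is parameterized as a log-linear function of the state, the economy is simulated, and the parameters are updated by regressing realized values on the state. A minimal, illustrative Python version, with the initial guess chosen near the fixed point for stability:

```python
import numpy as np

rng = np.random.default_rng(0)

# Growth model: max E sum beta^t log(c_t), k_{t+1} = z_t k_t^alpha - c_t
# (full depreciation), log z_t an AR(1). Exact policy: c = (1-alpha*beta) z k^alpha.
alpha, beta, rho, sigma = 0.33, 0.95, 0.90, 0.02
T = 2000

# Exogenous productivity path, held fixed across iterations.
logz = np.zeros(T)
eps = sigma * rng.standard_normal(T)
for t in range(1, T):
    logz[t] = rho * logz[t-1] + eps[t]
z = np.exp(logz)

# Parameterize the expectation in the Euler equation
#   1/c_t = beta * E_t[ alpha * z_{t+1} * k_{t+1}^(alpha-1) / c_{t+1} ]
# as psi(k, z) = exp(b0 + b1*log k + b2*log z).
b = np.array([0.4, -alpha, -1.0])

for _ in range(200):
    # Simulate the economy under the current parameterization.
    k = np.empty(T)
    c = np.empty(T)
    k[0] = (alpha * beta) ** (1 / (1 - alpha))   # deterministic steady state
    for t in range(T):
        psi = np.exp(b[0] + b[1] * np.log(k[t]) + b[2] * logz[t])
        # Cap consumption below output so capital stays positive.
        c[t] = min(1.0 / (beta * psi), 0.95 * z[t] * k[t] ** alpha)
        if t < T - 1:
            k[t+1] = z[t] * k[t] ** alpha - c[t]
    # Realized value inside the expectation, dated t+1.
    y = alpha * z[1:] * k[1:] ** (alpha - 1) / c[1:]
    # Regress its log on the time-t state; damp the parameter update.
    X = np.column_stack([np.ones(T - 1), np.log(k[:-1]), logz[:-1]])
    b_new, *_ = np.linalg.lstsq(X, np.log(y), rcond=None)
    b = 0.5 * b + 0.5 * b_new

print(np.mean(c / (z * k ** alpha)))   # should be near 1 - alpha*beta = 0.6865
```

The two advertised advantages are visible in the sketch: there is no grid over (k, z), only a simulated path, and the Euler equation is used directly rather than a planner's value function.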