815 results for Zero interest rate policy
Abstract:
The general objective of this research is to analyse the impact of the sale of shares on financial health and risk in Grupo Aval. The study arises from an interest in understanding the costs and benefits that companies face when issuing shares, a practice that has become common in recent decades. Relevant motivations for issuing shares include financing new projects, the status it may confer on the company, and a means of servicing debt. It is important to understand the implications that the sale of shares has for the company in terms of its results, its impact on shareholders, and its impact on society itself. This research seeks to answer the question: what is the impact of the sale of shares on financial health and risk in financial groups? We review the literature on financial health, drawing on authors who approach it from the standpoint of the company's position, focusing throughout on three indicators that are relevant to this study and widely used in the literature to measure financial health: liquidity, profitability, and indebtedness. The literature review reveals a relationship between financial health and risk, so we seek to identify the risks that affect companies when shares are issued, concentrating on three types of financial risk: market risk, interest-rate risk, and operational risk. Grupo Aval was chosen for this study because it is one of the most important financial groups in Colombia, has many years of operating history, and currently issues shares.
Abstract:
This paper decomposes the term structure of interest rates on U.S. and Colombian sovereign bonds. A four-factor affine model is used, in which the first factor is a return-forecasting factor and the remaining three are the first three principal components of the variance-covariance matrix of interest rates. In the decomposition of Colombian rates, the U.S. forecasting factor is included to capture spillover effects. We conclude that U.S. rates do not affect the level of rates in Colombia, but they do influence expected bond excess returns, and there are also effects on the local factors, although the determining factor in the dynamics of local rates is the "level". The decomposition yields short-rate expectations and the term premium. The value of the term premium and its volatility are found to increase with maturity, and this value has been declining over time.
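The principal-component factors described in this abstract (commonly interpreted as level, slope, and curvature of the yield curve) can be extracted from a panel of yields along the following lines. This is a minimal numpy sketch; the function name and array shapes are my own assumptions, not the paper's implementation:

```python
import numpy as np

def pca_yield_factors(yields, n_factors=3):
    """Extract the first principal components of a panel of yields.

    yields: (T, M) array, T dates by M maturities.
    Returns (factors, loadings), where factors is (T, n_factors).
    """
    demeaned = yields - yields.mean(axis=0)
    cov = np.cov(demeaned, rowvar=False)           # (M, M) covariance across maturities
    eigvals, eigvecs = np.linalg.eigh(cov)         # eigenvalues in ascending order
    order = np.argsort(eigvals)[::-1][:n_factors]  # indices of the largest eigenvalues
    loadings = eigvecs[:, order]                   # (M, n_factors) loading vectors
    factors = demeaned @ loadings                  # (T, n_factors) factor time series
    return factors, loadings
```

In yield-curve applications the first component typically loads roughly equally on all maturities, which is why it is labelled the "level" factor.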
Abstract:
In this paper we introduce a financial market model based on continuous-time random motions with alternating constant velocities and with jumps occurring when the velocity switches. If the jump directions are in a certain correspondence with the velocity directions of the underlying random motion with respect to the interest rate, the model is free of arbitrage. The replicating strategies for options are constructed in detail. Closed-form formulas for the option prices are obtained.
Abstract:
This paper develops a financial market model based on continuous-time random motions with alternating constant velocities and with jumps occurring when the velocity changes. If the jump directions are in correspondence with the velocity directions of the underlying random motion with respect to the interest rate, the model is arbitrage-free and complete. Replicating strategies for options are constructed in detail, and a closed-form expression for option prices is obtained. Quantile hedging strategies for options are also constructed. The methodology is applied to risk control and to the pricing of insurance instruments.
Abstract:
This paper proposes a model for the term structure of interbank risk based on the spread between Interest Rate Swaps (IRS) and Overnight Indexed Swaps (OIS) in U.S. dollars during the 2007-08 financial crisis and the 2010 euro crisis. It also decomposes interbank risk into default and non-default (liquidity) components. The results suggest that the financial crisis had significant repercussions on the term structure of interbank risk and its components: in the years before the crisis, non-default risk explained most of interbank risk, whereas during and after the crisis, default risk drove the behaviour of interbank risk. In addition, the term structure of each component of interbank risk shows that the financial crisis was more a short-term than a long-term problem, in contrast with the 2010 euro crisis. These results follow Filipovic & Trolle (2012) and carry important implications for interbank risk during periods of financial stress.
Abstract:
This paper derives exact discrete time representations for data generated by a continuous time autoregressive moving average (ARMA) system with mixed stock and flow data. The representations for systems comprised entirely of stocks or of flows are also given. In each case the discrete time representations are shown to be of ARMA form, the orders depending on those of the continuous time system. Three examples and applications are also provided, two of which concern the stationary ARMA(2, 1) model with stock variables (with applications to sunspot data and a short-term interest rate) and one concerning the nonstationary ARMA(2, 1) model with a flow variable (with an application to U.S. nondurable consumers’ expenditure). In all three examples the presence of an MA(1) component in the continuous time system has a dramatic impact on eradicating unaccounted-for serial correlation that is present in the discrete time version of the ARMA(2, 0) specification, even though the form of the discrete time model is ARMA(2, 1) for both models.
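The exact-discretization idea can be illustrated in its simplest case, a continuous-time AR(1) rather than the paper's mixed ARMA system: an Ornstein-Uhlenbeck process dx = -kappa * x dt + sigma dW sampled at interval h is exactly a discrete AR(1) with coefficient exp(-kappa * h). A hedged numpy sketch, with parameter names assumed for illustration:

```python
import numpy as np

def simulate_ou(kappa, sigma, h, n, x0=0.0, seed=0):
    """Exact discrete-time representation of an Ornstein-Uhlenbeck process.

    dx = -kappa * x dt + sigma dW, sampled at interval h, is the AR(1)
      x_{t+h} = phi * x_t + eps,  with  phi = exp(-kappa * h)
      and Var(eps) = sigma**2 * (1 - exp(-2 * kappa * h)) / (2 * kappa).
    """
    rng = np.random.default_rng(seed)
    phi = np.exp(-kappa * h)
    eps_sd = sigma * np.sqrt((1 - np.exp(-2 * kappa * h)) / (2 * kappa))
    x = np.empty(n)
    x[0] = x0
    for t in range(1, n):
        x[t] = phi * x[t - 1] + eps_sd * rng.standard_normal()
    return x, phi
```

Because the discretization is exact, an AR(1) regression on the sampled series recovers exp(-kappa * h) directly, with no discretization bias; the paper's contribution is the analogous (much less obvious) mapping for mixed stock-flow ARMA systems.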
Abstract:
Climate models provide compelling evidence that if greenhouse gas emissions continue at present rates, then key global temperature thresholds (such as the European Union limit of two degrees of warming since pre-industrial times) are very likely to be crossed in the next few decades. However, there is relatively little attention paid to whether, should a dangerous temperature level be exceeded, it is feasible for the global temperature to then return to safer levels in a usefully short time. We focus on the timescales needed to reduce atmospheric greenhouse gases and associated temperatures back below potentially dangerous thresholds, using a state-of-the-art general circulation model. This analysis is extended with a simple climate model to provide uncertainty bounds. We find that even for very large reductions in emissions, temperature reduction is likely to occur at a low rate. Policy-makers need to consider such very long recovery timescales implicit in the Earth system when formulating future emission pathways that have the potential to 'overshoot' particular atmospheric concentrations of greenhouse gases and, more importantly, related temperature levels that might be considered dangerous.
Abstract:
Multi-gas approaches to climate change policies require a metric establishing ‘equivalences’ among emissions of various species. Climate scientists and economists have proposed four kinds of such metrics and debated their relative merits. We present a unifying framework that clarifies the relationships among them. We show, as have previous authors, that the global warming potential (GWP), used in international law to compare emissions of greenhouse gases, is a special case of the global damage potential (GDP), assuming (1) a finite time horizon, (2) a zero discount rate, (3) constant atmospheric concentrations, and (4) impacts that are proportional to radiative forcing. Both the GWP and GDP follow naturally from a cost–benefit framing of the climate change issue. We show that the global temperature change potential (GTP) is a special case of the global cost potential (GCP), assuming a (slight) fall in the global temperature after the target is reached. We show how the four metrics should be generalized if there are intertemporal spillovers in abatement costs, distinguishing between private (e.g., capital stock turnover) and public (e.g., induced technological change) spillovers. Both the GTP and GCP follow naturally from a cost-effectiveness framing of the climate change issue. We also argue that if (1) damages are zero below a threshold and (2) infinitely large above a threshold, then cost-effectiveness analysis and cost–benefit analysis lead to identical results. Therefore, the GCP is a special case of the GDP. The UN Framework Convention on Climate Change uses the GWP, a simplified cost–benefit concept. The UNFCCC is framed around the ultimate goal of stabilizing greenhouse gas concentrations. Once a stabilization target has been agreed under the convention, implementation is clearly a cost-effectiveness problem. It would therefore be more consistent to use the GCP or its simplification, the GTP.
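For reference, the GWP discussed above has a standard definition as a ratio of time-integrated radiative forcings from unit pulse emissions (this is the conventional textbook form, not a formula taken from this abstract):

```latex
% Global warming potential of gas i over time horizon H,
% where a_x is the radiative efficiency of gas x and
% C_x(t) is the decay of a unit emission pulse of gas x:
\mathrm{GWP}_i(H) \;=\;
\frac{\displaystyle\int_0^H a_i \, C_i(t)\, dt}
     {\displaystyle\int_0^H a_{\mathrm{CO_2}} \, C_{\mathrm{CO_2}}(t)\, dt}
```

The finite horizon H, the absence of discounting inside the integrals, and the proportionality of "impact" to radiative forcing correspond directly to assumptions (1), (2), and (4) under which the abstract says the GWP reduces from the global damage potential.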
Abstract:
This article applies a three-regime Markov switching model to investigate the impact of the macroeconomy on the dynamics of the residential real estate market in the US. Focusing on the period between 1960 and 2011, the methodology implemented allows for a clearer understanding of the drivers of the real estate market in “boom”, “steady-state” and “crash” regimes. Our results show that the sensitivity of the real estate market to economic changes is regime-dependent. The paper then proceeds to examine whether policymakers are able to influence a regime switch away from the crash regime. We find that a decrease in interest rate spreads could be an effective catalyst to precipitate such a change of state.
Abstract:
This paper examines the cyclical regularities of macroeconomic, financial and property market aggregates in relation to the property stock price cycle in the UK. The Hodrick-Prescott filter is employed to fit a long-term trend to the raw data, and to derive the short-term cycles of each series. It is found that the cycles of consumer expenditure, total consumption per capita, the dividend yield and the long-term bond yield are moderately correlated, and mainly coincident, with the property price cycle. There is also evidence that the nominal and real Treasury Bill rates and the interest rate spread lead this cycle by one or two quarters, and therefore that these series can be considered leading indicators of property stock prices. This study recommends that macroeconomic and financial variables can provide useful information to explain and potentially to forecast movements of property-backed stock returns in the UK.
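The trend/cycle decomposition described above can be sketched directly from the Hodrick-Prescott filter's penalized least-squares definition. This is a generic dense implementation suitable for short series, not the authors' code:

```python
import numpy as np

def hp_filter(y, lam=1600.0):
    """Hodrick-Prescott filter: split y into trend and cycle components.

    Solves (I + lam * D'D) * trend = y, where D is the second-difference
    operator; lam=1600 is the conventional value for quarterly data.
    """
    y = np.asarray(y, dtype=float)
    n = len(y)
    D = np.zeros((n - 2, n))
    for i in range(n - 2):
        D[i, i], D[i, i + 1], D[i, i + 2] = 1.0, -2.0, 1.0
    trend = np.linalg.solve(np.eye(n) + lam * (D.T @ D), y)
    cycle = y - trend
    return trend, cycle
```

A useful sanity check on any HP implementation: for an exactly linear series the second differences vanish, so the trend equals the series and the cycle is identically zero.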
Abstract:
This paper employs a vector autoregressive model to investigate the impact of macroeconomic and financial variables on a UK real estate return series. The results indicate that unexpected inflation and the interest rate term spread have explanatory power for the property market. However, the most significant influences on the real estate series are its own lagged values. We conclude that identifying the factors that have determined UK property returns over the past twelve years remains a difficult task.
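A vector autoregression of the kind used here can be estimated equation by equation with ordinary least squares. A minimal VAR(1) sketch in numpy, with variable names assumed for illustration (the paper's lag order and variable set may differ):

```python
import numpy as np

def fit_var1(Y):
    """Least-squares estimate of a VAR(1): y_t = c + A @ y_{t-1} + e_t.

    Y: (T, k) array of observations. Returns the intercept vector c
    and the (k, k) coefficient matrix A.
    """
    X = np.hstack([np.ones((len(Y) - 1, 1)), Y[:-1]])  # regressors: constant + one lag
    B, *_ = np.linalg.lstsq(X, Y[1:], rcond=None)      # (k+1, k) stacked coefficients
    c, A = B[0], B[1:].T                               # A[i, j]: effect of y_{t-1, j} on y_{t, i}
    return c, A
```

The dominance of the series' own lags reported in the abstract would show up here as large diagonal entries of A relative to the coefficients on the macroeconomic regressors.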
Abstract:
This paper presents and implements a number of tests for non-linear dependence and a test for chaos using transactions prices on three LIFFE futures contracts: the Short Sterling interest rate contract, the Long Gilt government bond contract, and the FTSE 100 stock index futures contract. While previous studies of high frequency futures market data use only those transactions which involve a price change, we use all of the transaction prices on these contracts whether they involve a price change or not. Our results indicate irrefutable evidence of non-linearity in two of the three contracts, although we find no evidence of a chaotic process in any of the series. We are also able to provide some indications of the effect of the duration of the trading day on the degree of non-linearity of the underlying contract. The trading day for the Long Gilt contract was extended in August 1994, and prior to this date there is no evidence of any structure in the return series. However, after the extension of the trading day we do find evidence of a non-linear return structure.
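Non-linearity tests of the kind applied above (e.g. the BDS family) rest on the correlation integral: the fraction of pairs of m-histories of a series that lie within sup-norm distance eps of each other, which for iid data should satisfy C_m(eps) ≈ C_1(eps)^m. A sketch of the correlation integral only; the full test statistic and its asymptotic variance are omitted:

```python
import numpy as np

def correlation_integral(x, m, eps):
    """Fraction of pairs of m-histories of x within sup-norm distance eps."""
    x = np.asarray(x, dtype=float)
    n = len(x) - m + 1
    # Row j of H is the m-history (x[j], x[j+1], ..., x[j+m-1]).
    H = np.stack([x[i:i + n] for i in range(m)], axis=1)
    count, total = 0, 0
    for i in range(n - 1):
        d = np.max(np.abs(H[i + 1:] - H[i]), axis=1)  # sup-norm distances to later rows
        count += int(np.sum(d <= eps))
        total += n - 1 - i
    return count / total
```

Large, systematic departures of C_m(eps) from C_1(eps)**m across m and eps are the kind of evidence of non-linear dependence that the paper reports for two of the three contracts.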
Abstract:
Previous research has suggested that collateral plays the role of sorting entrepreneurs either by observed risk or by private information. In order to test these roles, this paper develops a model which incorporates a signalling process (sorting by observed risk) into the design of an incentive-compatible menu of loan contracts which works as a self-selection mechanism (sorting by private information). It then tests this Sorting by Signalling and Self-Selection Model using the 1998 US Survey of Small Business Finances. It reports for the first time that high-type entrepreneurs are more likely to pledge collateral and pay a lower interest rate, and that entrepreneurs who transmit good signals enjoy better contracts than those transmitting bad signals. These findings suggest that the Sorting by Signalling and Self-Selection Model sheds more light on entrepreneurial debt finance than either the sorting-by-observed-risk or the sorting-by-private-information paradigm on its own.
Abstract:
The paper explores the lived experience of leadership learning and development in a single case study of an entrepreneur participating in a major leadership development programme for owner-managers of Small and Medium Sized Enterprises (SMEs). Based on autobiographical research, it provides a rich contextual account of the nature and underlying influences of leadership learning throughout the life-course, and as a consequence of participation in the programme. Whilst the paper should interest scholars, policy makers, and those concerned with programme development, it may also resonate with entrepreneurs and help them make sense of their experience of leadership development.
Abstract:
The aim of this thesis is to investigate computerized voice assessment methods for distinguishing normal from dysarthric speech signals. In the proposed system, computerized assessment methods equipped with signal processing and artificial intelligence techniques are introduced. Sentences used for the measurement of inter-stress intervals (ISI) were read by each subject, and measurements from these sentences were compared between normal and impaired voices. A band-pass filter is used for pre-processing of the speech samples. Speech segmentation is performed using signal energy and the spectral centroid to separate voiced and unvoiced regions of the speech signal. Acoustic features are extracted from the LPC model and from the speech segments of each audio signal to find anomalies. The features assessed for classification are energy entropy, zero crossing rate (ZCR), spectral centroid, mean fundamental frequency (Meanf0), jitter (RAP), jitter (PPQ), and shimmer (APQ). A Naïve Bayes (NB) classifier is used for speech classification. For speech tests 1 and 2, classification accuracies of 72% and 80% respectively are achieved between healthy and impaired speech samples using NB; for speech test 3, 64% correct classification is achieved. The results point to the feasibility of classifying speech impairment in PD patients based on the clinical rating scale.
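Two of the features listed above, the zero crossing rate and the spectral centroid, are straightforward to compute on a per-frame basis. A minimal numpy sketch; the frame handling and parameter choices are assumptions for illustration, not the thesis' pipeline:

```python
import numpy as np

def zero_crossing_rate(frame):
    """Fraction of consecutive sample pairs whose sign differs."""
    signs = np.sign(frame)
    return float(np.mean(signs[:-1] != signs[1:]))

def spectral_centroid(frame, sample_rate):
    """Magnitude-weighted mean frequency of the frame's spectrum, in Hz."""
    mags = np.abs(np.fft.rfft(frame))
    freqs = np.fft.rfftfreq(len(frame), d=1.0 / sample_rate)
    return float(np.sum(freqs * mags) / np.sum(mags))
```

Voiced speech tends to show a low ZCR and low spectral centroid (energy concentrated at low frequencies), while unvoiced fricatives show the opposite, which is what makes this pair useful for the voiced/unvoiced segmentation described above.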