878 results for Algorithmic Trading
Abstract:
I demonstrate a powerful tension between acquiring information and incorporating it into asset prices, the two core elements of price discovery. As a salient case, I focus on the transformative rise of algorithmic trading (AT) typically associated with improved price efficiency. Using a measure of the relative information content of prices and a comprehensive panel of 37,325 stock-quarters of SEC market data, I establish instead that algorithmic trading strongly decreases the net amount of information in prices. The increase in price distortions associated with the AT “information gap” is roughly $42.6 billion/year for U.S. common stocks around earnings announcement events alone. Information losses are concentrated among stocks with high shares of algorithmic liquidity takers relative to algorithmic liquidity makers, suggesting that aggressive AT powerfully deters fundamental information acquisition despite its importance for translating available information into prices.
Abstract:
Quantitative or algorithmic trading is the automation of investment decisions following a fixed or dynamic set of rules to determine trading orders. It has grown to account for as much as 70% of the trading volume on some of the largest financial markets, such as the New York Stock Exchange (NYSE). However, there is not a significant amount of academic literature devoted to it, owing to the private nature of investment banks and hedge funds. This project aims to review the literature and discuss the available models in a subject where publications are scarce and infrequent. We review the basic mathematical concepts needed for modeling financial markets, such as stochastic processes, stochastic integration, and basic models for price and spread dynamics necessary for building quantitative strategies. We also contrast these models with real market data sampled at one-minute frequency from the Dow Jones Industrial Average (DJIA). Quantitative strategies try to exploit two types of behavior: trend following or mean reversion. The former is grouped into the so-called technical models and the latter into so-called pairs trading. Technical models have been discarded by financial theoreticians, but we show that they can be properly cast as well-defined scientific predictors if the signal they generate passes the test of being a Markov time: that is, we can tell whether the signal has occurred or not by examining the information up to the current time; or, more technically, whether the event is F_t-measurable. The concept of pairs trading, or market-neutral strategy, is by contrast fairly simple. However, it can be cast in a variety of mathematical models, ranging from a method based on a simple Euclidean distance, to a cointegration framework, to stochastic differential equations such as the well-known mean-reverting Ornstein-Uhlenbeck process and its variations.
A model for forecasting any economic or financial magnitude could be properly defined with scientific rigor and yet lack any economic value, making it useless from a practical point of view. This is why this project could not be complete without a backtest of the mentioned strategies. Conducting a useful and realistic backtest is by no means a trivial exercise, since the "laws" that govern financial markets are constantly evolving in time. For this reason we emphasize the calibration of the strategies' parameters to adapt to the prevailing market conditions. We find that the parameters of technical models are more volatile than their counterparts from market-neutral strategies, and that calibration must be done at high sampling frequency to constantly track the current market situation. As a whole, the goal of this project is to provide an overview of a quantitative approach to investment, reviewing basic strategies and illustrating them by means of a backtest with real financial market data. The sources of the data used in this project are Bloomberg for intraday time series and Yahoo! for daily prices. All numerical computations and graphics used and shown in this project were implemented in MATLAB from scratch as part of this thesis; no other mathematical or statistical software was used.
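The Markov-time criterion that the abstract uses to rehabilitate technical models can be illustrated with a minimal sketch (hypothetical price series and window lengths, not from the thesis): a moving-average crossover is a valid signal in this sense because deciding whether it has fired at time t requires only the prices observed up to t, i.e. the event is F_t-measurable.

```python
# Sketch: a moving-average crossover as a Markov (stopping) time.
# The signal at time t depends only on prices[0..t]; no future
# data is ever consulted, so the event is F_t-measurable.

def crossover_signal_time(prices, short_win=3, long_win=5):
    """Return the first index t at which the short moving average
    crosses above the long one, or None if it never does."""
    def mavg(t, win):
        window = prices[max(0, t - win + 1): t + 1]
        return sum(window) / len(window)

    prev_diff = None
    for t in range(len(prices)):
        diff = mavg(t, short_win) - mavg(t, long_win)
        # A cross occurs when the short-minus-long gap flips upward.
        if prev_diff is not None and prev_diff <= 0 < diff:
            return t  # decided using information up to t only
        prev_diff = diff
    return None

prices = [10, 9, 8, 8, 9, 11, 13, 14]  # toy data
print(crossover_signal_time(prices))   # 5
```

A signal that peeked at `prices[t+1]` to confirm the cross would fail the test: one could not tell at time t whether it had occurred.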
Abstract:
Pairs trading is an algorithmic trading strategy based on the historical co-movement of two separate assets, with trades executed on the basis of the degree of relative mispricing. The purpose of this study is to explore a new, alternative copula-based method for pairs trading. The objective is to find out whether the copula method generates more trading opportunities and higher profits than the more traditional distance and cointegration methods applied extensively in previous empirical studies. The methods are compared by selecting the top five pairs from stocks of large and medium-sized companies in the Finnish stock market. The research period covers the years 2006-2015. All the methods prove profitable, and the Finnish stock market is shown to be suitable for pairs trading. However, the copula method does not generate more trading opportunities or higher profits than the other methods. It seems that the limitations of the more traditional methods are not too restrictive for this particular sample data.
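The "distance method" baseline that the study compares against can be sketched briefly (hypothetical tickers and prices, not the Finnish sample): normalize each price series to start at 1, then rank candidate pairs by the sum of squared deviations (SSD) between the normalized series; the smallest SSD indicates the closest historical co-movement.

```python
# Sketch of the classic distance method for pair selection:
# normalize prices, then rank pairs by sum of squared deviations.
from itertools import combinations

def normalize(prices):
    base = prices[0]
    return [p / base for p in prices]

def ssd(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

def rank_pairs(series):
    """series: dict ticker -> price list. Returns (ssd, a, b) sorted
    ascending, so the first entry is the most similar pair."""
    norm = {k: normalize(v) for k, v in series.items()}
    scored = [(ssd(norm[a], norm[b]), a, b)
              for a, b in combinations(sorted(norm), 2)]
    return sorted(scored)

# Hypothetical tickers and prices for illustration only.
series = {
    "AAA": [100, 102, 101, 103],
    "BBB": [50, 51, 50.4, 51.6],   # tracks AAA closely
    "CCC": [200, 190, 210, 185],   # moves independently
}
best = rank_pairs(series)[0]
print(best[1], best[2])  # AAA BBB
```

The copula and cointegration methods studied in the thesis replace this similarity ranking with richer dependence models, but the pair-selection scaffolding is the same.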
Abstract:
The thesis examined the effects of algorithmic trading on market efficiency. It was carried out as a literature review of the most recent research. First, the structure and mode of operation of algorithmic trading systems are presented, after which previous studies are reviewed. The results show that algorithmic trading has improved market efficiency with respect to price formation and liquidity. No significant negative effects were observed.
Abstract:
This document explains the role of Colombian insurance companies within the pension system and seeks, through an understanding of the evolution of the macroeconomic environment and the regulatory framework, to identify the challenges they face. Three challenges are discussed: the profitability challenge, the challenge posed by relatively frequent regulatory changes, and the asset-liability matching ("calce") challenge. The document focuses mainly on the profitability challenge and develops an efficient-frontier exercise using expected returns computed with the methodology of Damodaran (2012). The results of the exercise support the idea that expected returns will indeed be lower for any level of risk, and suggest that, given this outlook, relaxing the restrictions imposed by the investment regime could ease insurers' concerns in this regard. Alternatives are also suggested for the other two challenges: algorithmic trading for the challenge imposed by regulatory changes, and public-private partnerships to address the matching challenge.
Abstract:
This work presents a study of the impact of algorithmic trading on the price discovery process in the foreign exchange market. High-frequency trading data were used for Brazilian real/US dollar futures contracts (DOL), traded on the São Paulo Stock Exchange from January to June 2013. To verify whether algorithmic trading strategies are more interdependent than non-algorithmic trading, the frequency with which algorithms trade with one another was examined and compared to a benchmark model that produces theoretical probabilities for different types of traders. The results obtained for minute-by-minute trading provide evidence that the actions and strategies of algorithmic traders appear to be less diverse and more dependent than those of non-algorithmic traders. To model the interaction between the serial autocorrelation of returns and algorithmic trading, a high-frequency vector autoregression (VAR) was estimated in reduced form. The estimates show that algorithmic trading activity causes an increase in the autocorrelation of returns, indicating that it may contribute to increased volatility.
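The quantity that the study's reduced-form VAR links to algorithmic activity, the serial autocorrelation of returns, can be estimated directly. A minimal sketch with synthetic toy series (not the thesis's DOL dataset, and only the lag-1 sample statistic rather than a full VAR):

```python
# Minimal sketch: first-order (lag-1) sample autocorrelation of a
# return series, the quantity the abstract's VAR relates to
# algorithmic trading activity. Synthetic data for illustration.
from statistics import mean

def autocorr_lag1(returns):
    """Sample lag-1 autocorrelation of a return series."""
    m = mean(returns)
    num = sum((returns[t] - m) * (returns[t - 1] - m)
              for t in range(1, len(returns)))
    den = sum((r - m) ** 2 for r in returns)
    return num / den

# A persistent (positively autocorrelated) toy series...
persistent = [0.1, 0.12, 0.11, 0.13, 0.12, 0.14, 0.13, 0.15]
# ...versus an alternating (negatively autocorrelated) one.
alternating = [0.1, -0.1, 0.1, -0.1, 0.1, -0.1, 0.1, -0.1]

print(autocorr_lag1(persistent) > 0)   # True
print(autocorr_lag1(alternating) < 0)  # True
```

A rise in this statistic during periods of heavy algorithmic activity is the kind of pattern the study's VAR estimates formalize.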
Abstract:
This thesis studies how commercial practice is developing with artificial intelligence (AI) technologies and discusses some normative concepts in EU consumer law. The author analyses the phenomenon of 'algorithmic business', which denotes the increasing use of data-driven AI in marketing organisations to optimise a range of consumer-related tasks. The phenomenon is orienting business-consumer relations towards general trends that influence the power and behaviour of consumers. These developments are not taking place in a legal vacuum, but against the background of a normative system aimed at maintaining fairness and balance in market transactions. The author assesses current developments in commercial practices in the context of EU consumer law, which is specifically aimed at regulating commercial practices. The analysis is critical by design and, without neglecting concrete practices, tries to look at the big picture. The thesis consists of nine chapters divided into three thematic parts. The first part discusses the deployment of AI in marketing organisations: a brief history, the technical foundations, and their modes of integration into business organisations. In the second part, a selected number of socio-technical developments in commercial practice are analysed: the monitoring and analysis of consumers' behaviour based on data; the personalisation of commercial offers and customer experience; the use of information on consumers' psychology and emotions; and mediation through conversational marketing applications. The third part assesses these developments in the context of EU consumer law and of the broader policy debate concerning consumer protection in the algorithmic society.
In particular, two normative concepts underlying the EU fairness standard are analysed: manipulation, as a substantive regulatory standard that limits commercial behaviours in order to protect consumers' informed and free choices; and vulnerability, as a concept of social policy that portrays people who are more exposed to marketing practices.
Abstract:
Pairs trading investment strategies are based on price deviations between pairs of correlated stocks and have been widely implemented by investment funds, which take long and short positions in the selected stocks when divergences arise and realize a profit by closing the position upon convergence. A mean-reversion model is described to analyze the dynamics of the price spread between common and preferred shares of the same company in the same market. The long-run convergence mean is obtained with a moving-average filter; the parameters of the mean-reversion model are then estimated with a Kalman filter under a state-space formulation applied to the historical series. A backtest of the algorithmic pairs-trading strategy on the proposed model indicates potential profits in financial markets observed out of equilibrium. Applications of the results could reveal opportunities to improve portfolio performance, correct valuation errors, and better weather periods of low returns.
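The mean-reversion model in this abstract can be made concrete with a simplified sketch. In discrete time, an Ornstein-Uhlenbeck spread reduces to an AR(1) recursion, whose parameters can be recovered by ordinary least squares; the thesis itself uses a Kalman filter in state-space form, so what follows is only the simplest analogue, with toy numbers (kappa, mu, and the noise-free series are illustrative assumptions):

```python
# Sketch: the discretized Ornstein-Uhlenbeck spread
#     s[t+1] = s[t] + kappa * (mu - s[t]) * dt + noise
# is an AR(1): s[t+1] = a + b * s[t], with b = 1 - kappa*dt and
# a = kappa*mu*dt. OLS on (s[t], s[t+1]) pairs recovers a and b.
# The thesis estimates the state-space form with a Kalman filter;
# this is only the simplest batch analogue.
from statistics import mean

def fit_ar1(s):
    x, y = s[:-1], s[1:]
    mx, my = mean(x), mean(y)
    b = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
         / sum((xi - mx) ** 2 for xi in x))
    a = my - b * mx
    return a, b

def ou_params(s, dt=1.0):
    a, b = fit_ar1(s)
    kappa = (1 - b) / dt   # speed of mean reversion
    mu = a / (kappa * dt)  # long-run mean of the spread
    return kappa, mu

# Noise-free toy spread generated with kappa=0.5, mu=2.0.
s = [0.0]
for _ in range(20):
    s.append(s[-1] + 0.5 * (2.0 - s[-1]))

kappa, mu = ou_params(s)
print(round(kappa, 6), round(mu, 6))  # recovers 0.5 and 2.0
```

With noisy data the OLS estimates become biased in small samples, which is one reason a filtering approach such as the Kalman filter is preferred for live calibration.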
Abstract:
An (n, d)-expander is a graph G = (V, E) such that for every X ⊆ V with |X| ≤ 2n − 2 we have |Γ_G(X)| ≥ (d + 1)|X|. A tree T is small if it has at most n vertices and maximum degree at most d. Friedman and Pippenger (1987) proved that any (n, d)-expander contains every small tree. However, their elegant proof does not seem to yield an efficient algorithm for obtaining the tree. In this paper, we give an alternative result that does admit a polynomial-time algorithm for finding the embedding of any small tree in subgraphs G of (N, D, λ)-graphs Λ, as long as G contains a positive fraction of the edges of Λ and λ/D is small enough. In several applications of the Friedman-Pippenger theorem, including the ones in the original paper of those authors, the (n, d)-expander G is a subgraph of an (N, D, λ)-graph as above. Therefore, our result suffices to provide efficient algorithms for such previously non-constructive applications. As an example, we discuss a recent result of Alon, Krivelevich, and Sudakov (2007) concerning the embedding of nearly spanning bounded-degree trees, the proof of which makes use of the Friedman-Pippenger theorem. We also show a construction inspired by Wigderson-Zuckerman expander graphs for which any sufficiently dense subgraph contains all trees, with sizes and maximum degrees achieving essentially optimal parameters. Our algorithmic approach is based on a reduction of the tree-embedding problem to a certain on-line matching problem for bipartite graphs, solved by Aggarwal et al. (1996).
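The expansion condition in the opening definition can be checked by brute force on tiny graphs. A minimal sketch (the adjacency structure and parameters are illustrative only, and the exhaustive check is exponential, usable only for toy instances):

```python
# Toy check of the (n, d)-expander condition from the abstract:
# every vertex set X with 1 <= |X| <= 2n - 2 must satisfy
# |Gamma_G(X)| >= (d + 1) * |X|, where Gamma_G(X) is the set of
# neighbours of vertices in X.
from itertools import combinations

def is_expander(adj, n, d):
    V = list(adj)
    for size in range(1, min(2 * n - 2, len(V)) + 1):
        for X in combinations(V, size):
            neigh = set().union(*(adj[v] for v in X))
            if len(neigh) < (d + 1) * len(X):
                return False
    return True

# Hypothetical example: complete bipartite K_{6,6}.
left, right = range(6), range(6, 12)
adj = {v: set(right) for v in left}
adj.update({v: set(left) for v in right})

print(is_expander(adj, n=2, d=2))  # True: every pair sees >= 6 vertices
```

A sparse graph such as a path fails immediately, since a degree-1 vertex alone already violates |Γ_G(X)| ≥ d + 1.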
Abstract:
Stock splits are known to have a negative effect on market quality—while stock prices adjust consistently with the split's scale, the bid/ask spread and market depth do not. Two possible explanations for the relative increase in spread are that (i) splits cause an increase in market maker costs that are passed along to investors or (ii) splits provide a mechanism for market makers to increase excess profits. Using a robust econometric methodology, we find evidence of the latter, which raises questions about the motivation of the splitting practice. We also document that while NASDAQ spreads appear to adjust more fully than those of NYSE/AMEX stocks, NASDAQ spreads are higher in general.
Abstract:
Many organisations make extensive use of electronic linkages to facilitate their trading exchanges with partners such as suppliers, distributors and customers. This research explores how the use of inter-organisational systems (IOS) both affects, and is affected by, the relationships between trading partners. In doing this, it brings together two existing but distinct perspectives and literatures; the rational view informed by IOS research, and the behavioural or relationship perspective embodied in inter-organisational relationships (IOR) literature. The research was undertaken in the European paper industry by means of six dyadic case studies. The dyads studied covered both traditional electronic data interchange systems and newer e-marketplace environments. A framework was derived from existing literature that integrates the two perspectives of interest. The framework was used to analyse the case studies undertaken and enabled the inter-relationship between IOS use and IOR to be explained.