901 results for market microstructure noise, optimal sampling frequency, exchange traded funds, DCC-GARCH, factor modeling, PANIC
Abstract:
This article examines the ability of several models to generate optimal hedge ratios. Statistical models employed include univariate and multivariate generalized autoregressive conditionally heteroscedastic (GARCH) models, and exponentially weighted and simple moving averages. The variances of the hedged portfolios derived using these hedge ratios are compared with those based on market expectations implied by the prices of traded options. One-month and three-month hedging horizons are considered for four currency pairs. Overall, it has been found that an exponentially weighted moving-average model leads to lower portfolio variances than any of the GARCH-based, implied or time-invariant approaches.
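The best-performing model above, an exponentially weighted moving average of variances and covariances, reduces to a simple recursive estimate of the minimum-variance hedge ratio. A minimal sketch in Python, using the common RiskMetrics-style decay λ = 0.94 as an assumed default (the paper's own decay parameter may differ):

```python
import numpy as np

def ewma_hedge_ratio(spot_ret, fut_ret, lam=0.94):
    """Minimum-variance hedge ratio from EWMA moments.

    h_t = Cov_t(spot, futures) / Var_t(futures), with each moment
    updated recursively as m_t = lam * m_{t-1} + (1 - lam) * x_t * y_t.
    lam=0.94 is the RiskMetrics convention, assumed here for illustration.
    """
    cov = spot_ret[0] * fut_ret[0]
    var = fut_ret[0] ** 2
    for s, f in zip(spot_ret[1:], fut_ret[1:]):
        cov = lam * cov + (1 - lam) * s * f
        var = lam * var + (1 - lam) * f * f
    return cov / var
```

For a one-month or three-month horizon the ratio would be re-estimated at each hedging date from returns observed up to that date.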
Abstract:
This paper proposes and implements a new methodology for forecasting time series, based on bicorrelations and cross-bicorrelations. It is shown that the forecasting technique arises as a natural extension of, and as a complement to, existing univariate and multivariate non-linearity tests. The formulations are essentially modified autoregressive or vector autoregressive models respectively, which can be estimated using ordinary least squares. The techniques are applied to a set of high-frequency exchange rate returns, and their out-of-sample forecasting performance is compared to that of other time series models.
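The "modified autoregressive model" described above, an AR augmented with a product-of-lags (bicorrelation) regressor and fit by OLS, can be sketched as follows. The single product term and the lag pair (1, 2) are illustrative assumptions; the paper's general formulation may include several such terms:

```python
import numpy as np

def fit_bicorrelation_ar(y, i=1, j=2):
    """OLS fit of a bicorrelation-augmented AR model (a sketch):

        y_t = a0 + a1*y_{t-i} + a2*y_{t-j} + a3*y_{t-i}*y_{t-j} + e_t

    The cross-product regressor is what ties the model to the
    bicorrelation statistic; the lag choice (i, j) is an assumption.
    """
    p = max(i, j)
    yt = y[p:]
    X = np.column_stack([
        np.ones_like(yt),
        y[p - i:len(y) - i],                       # lag-i term
        y[p - j:len(y) - j],                       # lag-j term
        y[p - i:len(y) - i] * y[p - j:len(y) - j], # bicorrelation term
    ])
    coef, *_ = np.linalg.lstsq(X, yt, rcond=None)
    return coef  # [a0, a1, a2, a3]
```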
Abstract:
We discuss the development and performance of a low-power sensor node (hardware, software and algorithms) that autonomously controls the sampling interval of a suite of sensors based on local state estimates and future predictions of water flow. The problem is motivated by the need to accurately reconstruct abrupt state changes in urban watersheds and stormwater systems. Presently, the detection of these events is limited by the temporal resolution of sensor data. It is often infeasible, however, to increase measurement frequency due to energy and sampling constraints. This is particularly true for real-time water quality measurements, where sampling frequency is limited by reagent availability, sensor power consumption, and, in the case of automated samplers, the number of available sample containers. These constraints pose a significant barrier to the ubiquitous and cost-effective instrumentation of large hydraulic and hydrologic systems. Each of our sensor nodes is equipped with a low-power microcontroller and a wireless module to take advantage of urban cellular coverage. The node persistently updates a local, embedded model of flow conditions while IP-connectivity permits each node to continually query public weather servers for hourly precipitation forecasts. The sampling frequency is then adjusted to increase the likelihood of capturing abrupt changes in a sensor signal, such as the rise in the hydrograph, an event that is often difficult to capture through traditional sampling techniques. Our architecture forms an embedded processing chain, leveraging local computational resources to assess uncertainty by analyzing data as it is collected. A network is presently being deployed in an urban watershed in Michigan and initial results indicate that the system accurately reconstructs signals of interest while significantly reducing energy consumption and the use of sampling resources.
We also expand our analysis by discussing the role of this approach for the efficient real-time measurement of stormwater systems.
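The control idea above, shortening the sampling interval when a precipitation forecast or the local flow estimate suggests an abrupt change is likely, can be sketched as below. The thresholds and the linear scaling are hypothetical illustrations, not the authors' actual control law:

```python
def choose_sampling_interval(rain_prob, flow_trend,
                             base_s=3600, min_s=60):
    """Pick the next sampling interval in seconds (a minimal sketch).

    rain_prob:  forecast probability of precipitation in the next hour,
                as obtained from a public weather server.
    flow_trend: recent rate of change of the embedded flow estimate.
    The 0.1 trend threshold and the linear mapping are assumptions.
    """
    # Sample faster as rain becomes more likely ...
    interval = base_s - rain_prob * (base_s - min_s)
    # ... and drop to the fastest rate once the hydrograph is already rising.
    if abs(flow_trend) > 0.1:
        interval = min_s
    return max(min_s, int(interval))
```

On dry, quiet days the node would sleep for the full hour between samples; a certain forecast of rain, or an observed rise in flow, pushes it to the one-minute floor.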
Abstract:
This article assesses the robustness of the predictive content of technical analysis rules, using intraday data from the stock index futures market of the São Paulo Stock Exchange (Ibovespa Futuro). The suggested methodology is group evaluation, following the results of Baptista (2002): rules are selected according to their performance in some of the studied subperiods and then tested in subsequent periods. The rules' performance above the buy-and-hold benchmark proved robust both over time and to the data sampling rate; however, realistic assumptions about the timing of purchases, as well as brokerage fees (except for large investors), can substantially reduce the gains.
Abstract:
This work is divided into two essays. The first essay examines aspects of the liquidity of the secondary market for government bonds in Brazil from 2003 to 2006 and the determinants of the bid-ask spread in the secondary market for LTN (Letra do Tesouro Nacional) from 2005 to 2006. Spreads were computed from high-frequency data, over 30-minute and one-day intervals. Broadly, liquidity is an important determinant of the spread. Specifically, spreads narrow as offered volumes increase. As for maturities, spreads widen as maturities lengthen. LTNs maturing within 30 days showed spreads of 1 centavo (1.89 bp), while LTNs with maturities above two years showed average spreads of about 54 centavos (3.84 bp) over 30-minute intervals and 81 centavos (5.72 bp) over one-day intervals. The econometric tests were based on a model by Chakravarty and Sarkar (1999), applied to the US bond market over 1995-1997, and were estimated by the Generalized Method of Moments (GMM). The results confirm the bid-ask spread as an important measure for monitoring liquidity. The second essay compares aspects of the liquidity and microstructure of government bond markets in Brazil, Chile, Mexico, Korea, Singapore, Poland, and the United States. The analysis uses several dimensions of microstructure: secondary-market liquidity (bid-ask spread, turnover of the bond stock, and most-traded maturities), efficiency costs, the structure and transparency of the primary and secondary markets, and, finally, market safety.

The aim is to compare the characteristics and functioning of these countries' secondary markets and to set the Brazilian market against them in terms of microstructure development. Despite the short maturities of Brazilian government bonds, the secondary market in Brazil shows microstructure features similar to those of the countries considered, which suggests that factors other than microstructure limit the lengthening of maturities. The results of the first essay support the cross-country comparisons. We find that although the liquidity of the secondary market for government bonds in Brazil is concentrated in shorter-maturity securities, this fact is probably not due to market microstructure.
Abstract:
We extend the standard price discovery analysis to estimate the information share of dual-class shares across domestic and foreign markets. By examining both common and preferred shares, we aim to extract information not only about the fundamental value of the firm, but also about the dual-class premium. In particular, our interest lies in the price discovery mechanism regulating the prices of common and preferred shares in the BM&FBovespa as well as the prices of their ADR counterparts in the NYSE and in the Arca platform. However, in the presence of contemporaneous correlation between the innovations, the standard information share measure depends heavily on the ordering we attribute to prices in the system. To remain agnostic about which are the leading share class and market, one could for instance compute some weighted average information share across all possible orderings. This is extremely inconvenient given that we are dealing with 2 share prices in Brazil, 4 share prices in the US, plus the exchange rate (and hence over 5,000 permutations!). We thus develop a novel methodology to carry out price discovery analyses that does not impose any ex-ante assumption about which share class or trading platform conveys more information about shocks in the fundamental price. As such, our procedure yields a single measure of information share, which is invariant to the ordering of the variables in the system. Simulations of a simple market microstructure model show that our information share estimator works pretty well in practice. We then employ transactions data to study price discovery in two dual-class Brazilian stocks and their ADRs. We uncover two interesting findings. First, the foreign market is at least as informative as the home market. Second, shocks in the dual-class premium entail a permanent effect in normal times, but a transitory one in periods of financial distress. We argue that the latter is consistent with the expropriation of preferred shareholders as a class.
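The ordering problem described above can be made concrete. Under the standard Cholesky-based information share, each ordering of the price vector yields different shares, and the brute-force workaround the abstract calls inconvenient is to average over all orderings. A sketch of that brute-force average (not the paper's own ordering-invariant estimator), assuming the common row ψ of the long-run impact matrix and the innovation covariance have already been estimated:

```python
import itertools
import numpy as np

def info_shares_all_orderings(psi, cov):
    """Cholesky-based information shares averaged over all orderings.

    psi: 1-D array, the common row of the long-run impact matrix.
    cov: innovation covariance matrix.
    For each ordering, price k's share is ([psi @ F]_k)^2 / total, with
    F the Cholesky factor of the permuted covariance.  With 7 prices
    this loops over 7! = 5040 permutations, which is exactly the
    inconvenience the paper's estimator avoids.
    """
    n = len(psi)
    total = np.zeros(n)
    perms = list(itertools.permutations(range(n)))
    for p in perms:
        idx = list(p)
        F = np.linalg.cholesky(cov[np.ix_(idx, idx)])
        contrib = (psi[idx] @ F) ** 2
        shares = contrib / contrib.sum()
        for pos, var in enumerate(idx):   # map back to original order
            total[var] += shares[pos]
    return total / len(perms)
```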
Abstract:
This work studies the impact of algorithmic trading on the price discovery process in the foreign exchange market. We use high-frequency trading data for BRL/USD futures contracts (DOL) traded on the São Paulo Stock Exchange from January to June 2013. To check whether algorithmic trading strategies are more mutually dependent than non-algorithmic trading, we examine how often algorithms trade with one another and compare this frequency with a benchmark model that yields theoretical probabilities for different trader types. The minute-by-minute results provide evidence that the actions and strategies of algorithmic traders appear less diverse and more dependent than those of non-algorithmic traders. To model the interaction between the serial autocorrelation of returns and algorithmic trading, we estimate a reduced-form high-frequency vector autoregression (VAR). The estimates show that algorithmic trading activity increases the autocorrelation of returns, indicating that it may contribute to higher volatility.
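The reduced-form VAR estimation step amounts to equation-by-equation OLS. A toy first-order sketch follows; the paper's VAR is presumably of higher order, with minute-by-minute returns and algorithmic-activity measures stacked in y_t:

```python
import numpy as np

def fit_var1(data):
    """OLS estimate of a reduced-form VAR(1): y_t = c + A y_{t-1} + e_t.

    data: (T, k) array of observations.  Returns the intercept c and
    coefficient matrix A.  A deliberately minimal illustration of the
    estimation step, not the paper's full specification.
    """
    Y = data[1:]
    X = np.column_stack([np.ones(len(Y)), data[:-1]])
    B, *_ = np.linalg.lstsq(X, Y, rcond=None)
    return B[0], B[1:].T  # intercept c, coefficient matrix A
```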
Abstract:
This paper analyses three aspects of the share market operated by the Lima Stock Exchange: (i) the short-term relationship between pricing, order direction, and order-flow volume; (ii) the components of the spread and the equilibrium point of the limit order book per share; and (iii) the dynamics of pricing, order direction, and trading volume resulting from shocks to the lagged values of the same variables. The econometric results for intraday data from 2012 show that the short-run dynamics of the most and least liquid shares in the General Index of the Lima Stock Exchange are explained by the direction of order flow, whose price impact is temporary in both cases.
Abstract:
The sampling scheme is essential in the investigation of the spatial variability of soil properties in Soil Science studies. The high cost of sampling schemes optimized with additional sampling points for each physical and chemical soil property prevents their use in precision agriculture. The purpose of this study was to obtain an optimal sampling scheme for sets of physical and chemical properties and to investigate its effect on the quality of soil sampling. Soil was sampled on a 42-ha area, with 206 geo-referenced points arranged in a regular grid spaced 50 m apart, in a depth range of 0.00-0.20 m. To obtain an optimal sampling scheme for every physical and chemical property, a sample grid, a medium-scale variogram, and the extended Spatial Simulated Annealing (SSA) method were used to minimize kriging variance. The optimization procedure was validated by constructing maps of relative improvement comparing the sample configuration before and after the process. A greater concentration of recommended points in specific areas (NW-SE direction) was observed, which also reflects a greater estimation variance at these locations. The addition of optimal samples for specific regions increased the accuracy by up to 2% for chemical and 1% for physical properties. The use of a sample grid and a medium-scale variogram as prior information for the design of additional sampling schemes is very promising for locating these additional points for all physical and chemical soil properties, enhancing the accuracy of kriging estimates of the physical-chemical properties.
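The optimization step, spatial simulated annealing that places additional points to reduce prediction uncertainty, can be sketched as below. For brevity the kriging variance under a fitted variogram is replaced by a simpler proxy, the mean distance from prediction locations to their nearest sample; the function names, cooling schedule, and proxy objective are all illustrative assumptions:

```python
import math
import random

def ssa_place_points(candidates, targets, k, iters=2000, seed=0):
    """Choose k extra sampling sites from `candidates` by simulated
    annealing, minimising the mean distance from each target
    (prediction) location to its nearest selected sample.  Real SSA
    would minimise kriging variance under the fitted variogram; the
    nearest-distance objective here is a simplified stand-in.
    """
    rng = random.Random(seed)

    def cost(sel):
        return sum(min(math.dist(t, s) for s in sel)
                   for t in targets) / len(targets)

    current = rng.sample(candidates, k)
    cur_c = cost(current)
    best, best_c = list(current), cur_c
    for i in range(iters):
        temp = 1.0 * (1 - i / iters) + 1e-9     # linear cooling
        prop = list(current)
        prop[rng.randrange(k)] = rng.choice(candidates)  # perturb one site
        c = cost(prop)
        # accept improvements always, worse moves with Metropolis probability
        if c < cur_c or rng.random() < math.exp((cur_c - c) / temp):
            current, cur_c = prop, c
            if c < best_c:
                best, best_c = list(prop), c
    return best, best_c
```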
Abstract:
With Hg-199 atoms confined in an optical lattice trap in the Lamb-Dicke regime, we obtain a spectral line at 265.6 nm with a FWHM of ~15 Hz. Here we lock an ultrastable laser to this ultranarrow ¹S₀ – ³P₀ clock transition and achieve a fractional frequency instability of 5.4 × 10⁻¹⁵/√τ for τ ≤ 400 s. The highly stable laser light used for probing the atoms is derived from a 1062.6 nm fiber laser locked to an ultrastable optical cavity that exhibits a mean drift rate of −6.0 × 10⁻¹⁷ s⁻¹ (−16.9 mHz s⁻¹ at 282 THz) over a six-month period. A comparison between two such lasers locked to independent optical cavities shows a flicker-noise-limited fractional frequency instability of 4 × 10⁻¹⁶ per cavity. © 2012 Optical Society of America
Abstract:
Monte Carlo integration is firmly established as the basis for most practical realistic image synthesis algorithms because of its flexibility and generality. However, the visual quality of rendered images often suffers from estimator variance, which appears as visually distracting noise. Adaptive sampling and reconstruction algorithms reduce variance by controlling the sampling density and aggregating samples in a reconstruction step, possibly over large image regions. In this paper we survey recent advances in this area. We distinguish between “a priori” methods that analyze the light transport equations and derive sampling rates and reconstruction filters from this analysis, and “a posteriori” methods that apply statistical techniques to sets of samples to drive the adaptive sampling and reconstruction process. They typically estimate the errors of several reconstruction filters, and select the best filter locally to minimize error. We discuss advantages and disadvantages of recent state-of-the-art techniques, and provide visual and quantitative comparisons. Some of these techniques are proving useful in real-world applications, and we aim to provide an overview for practitioners and researchers to assess these approaches. In addition, we discuss directions for potential further improvements.
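The "a posteriori" family described above drives sampling from statistics of the samples themselves. A minimal sketch of one such loop step: estimate the variance of each pixel's Monte Carlo mean from running sums and distribute the next batch of samples proportionally, so the noisiest pixels are refined first. Real renderers couple this with reconstruction-filter error estimates, which are omitted here; the proportional-allocation rule is an illustrative assumption:

```python
import numpy as np

def allocate_samples(sums, sq_sums, counts, budget):
    """A posteriori adaptive sample allocation (illustrative sketch).

    sums, sq_sums, counts: per-pixel running sums of samples, of squared
    samples, and sample counts.  Returns how many of `budget` new
    samples each pixel should receive.
    """
    mean = sums / counts
    per_sample_var = np.maximum(sq_sums / counts - mean ** 2, 0.0)
    var_of_mean = per_sample_var / counts   # variance of the pixel estimate
    total = var_of_mean.sum()
    if total == 0.0:                        # no variance anywhere: go uniform
        w = np.full_like(var_of_mean, 1.0 / var_of_mean.size)
    else:
        w = var_of_mean / total
    alloc = np.floor(w * budget).astype(int)
    # hand any floor() remainder to the highest-variance pixels
    for i in np.argsort(-w)[: budget - alloc.sum()]:
        alloc[i] += 1
    return alloc
```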
Abstract:
Seasonal patterns in hydrography, the partial pressure of CO₂ (fCO₂), pH_T, total alkalinity (A_T), total dissolved inorganic carbon (C_T), nutrients, and chlorophyll a were measured in surface waters on monthly cruises at the European Station for Time Series in the Ocean at the Canary Islands (ESTOC), located in the northeast Atlantic subtropical gyre. With over 5 years of oceanographic data starting in 1996, seasonal and interannual trends of CO₂ species and air-sea exchange of CO₂ were determined. Net CO₂ fluxes show this area acts as a minor source of CO₂, with an average outgassing value of 179 mmol CO₂ m⁻² yr⁻¹, controlled by the dominant trade winds blowing from May to August. The effect of short-term wind variability on the CO₂ flux was addressed: with 6-hourly sampling, computed air-sea fluxes increase by 63%. The processes governing the monthly variations of C_T were determined. From March to October, when C_T decreases, mixing at the base of the mixed layer (11.5 ± 1.5 mmol m⁻³) is compensated by air-sea exchange, and a net organic production of 25.5 ± 5.7 mmol m⁻³ is estimated. On an annual scale, biological drawdown accounts for the decrease in inorganic carbon from March to October, while mixing processes control the C_T increase from October to the end of autumn. After removing seasonal variability, surface-water fCO₂ increases at a rate of 0.71 ± 5.1 µatm yr⁻¹ and, in response to the atmospheric trend, inorganic carbon increases at a rate of 0.39 ± 1.6 µmol kg⁻¹ yr⁻¹.
Abstract:
We describe a compact lightweight impulse radar for radio-echo sounding of subsurface structures designed specifically for glaciological applications. The radar operates at frequencies between 10 and 75 MHz. Its main advantages are that it has a high signal-to-noise ratio and a corresponding wide dynamic range of 132 dB, due mainly to its ability to perform real-time stacking (up to 4096 traces) as well as to the high transmitted power (peak voltage 2800 V). The maximum recording time window, 40 µs at 100 MHz sampling frequency, results in possible radar returns from as deep as 3300 m. It is a versatile radar, suitable for different geophysical measurements (common-offset profiling, common midpoint, transillumination, etc.) and for different profiling set-ups, such as a snowmobile and sledge convoy or carried in a backpack and operated by a single person. Its low power consumption (6.6 W for the transmitter and 7.5 W for the receiver) allows the system to operate under battery power for >7 hours with a total weight of <9 kg for all equipment, antennas and batteries.
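The link between real-time stacking and the quoted dynamic range can be sketched numerically: averaging n aligned traces leaves the coherent echo unchanged while incoherent noise amplitude falls as 1/√n, an SNR gain of 10·log₁₀(n) dB, so the 4096-trace stack contributes roughly 36 dB:

```python
import math

def stacking_gain_db(n_traces):
    """SNR gain in dB from stacking n traces, assuming a coherent echo
    and incoherent (uncorrelated) noise: amplitude SNR grows as
    sqrt(n), i.e. 20*log10(sqrt(n)) = 10*log10(n) dB.
    """
    return 10 * math.log10(n_traces)
```

The remaining part of the 132 dB dynamic range would come from the receiver chain and the high transmitted power, per the abstract.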
Abstract:
An EMI filter for a three-phase buck-type medium-power pulse-width modulation rectifier is designed. The filter addresses differential-mode noise and complies with MIL-STD-461E over the frequency range 10 kHz to 10 MHz. In industrial applications, the standard's frequency range starts at 150 kHz, and the designer typically uses a switching frequency of 28 kHz because the fifth harmonic then falls outside the range. This approach is not valid for aircraft applications. To choose the switching frequency in aircraft applications, the power losses in the semiconductors and the weight of the reactive components should be considered. The proposed design is based on a harmonic analysis of the rectifier input current and an analytical study of the input filter. The classical industrial design neglects the inductive effect in the filter design because the grid frequency is 50/60 Hz. In aircraft applications, however, the grid frequency is 400 Hz and the inductance cannot be neglected. The proposed design considers both the inductance and the capacitance of the filter in order to obtain unity power factor at full power. In the optimization process, several filters are designed for different switching frequencies of the converter. In addition, designs with one to five stages are considered. The power losses of the converter plus the EMI filter are estimated at these switching frequencies. Considering overall losses and minimal filter volume, the optimal switching frequency is selected.