937 results for Algorithmic pairs trading, statistical arbitrage, Kalman filter, mean reversion.
Abstract:
We investigate the cointegration between VIX and CDS indices and the possibility of exploiting it in an existing credit-market timing investment model. We find cointegration over most of the sample period, with VIX leading the CDS in the price discovery process. We present two methods for incorporating cointegration into the model. Both strategies improve the model's in-sample and out-of-sample performance, although the out-of-sample results are weaker. We find that the better in-sample performance is explained by stronger cointegration, and conclude that, in the presence of cointegration, our strategies can be profitable in an investment model that accounts for transaction costs.
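The spread-trading logic behind such cointegration strategies can be sketched in a few lines. Everything below is illustrative, not the authors' model: the two synthetic series, the OLS hedge ratio, and the ±2 z-score rule are all assumptions.

```python
# Sketch of a cointegration-style spread signal (not the paper's model):
# regress one series on the other for a hedge ratio, then trade the
# z-score of the residual spread. Data below are synthetic.
import random, statistics

random.seed(0)
# Build two cointegrated series: y tracks x plus mean-reverting noise.
x = [100.0]
for _ in range(499):
    x.append(x[-1] + random.gauss(0, 1))
noise = 0.0
y = []
for xt in x:
    noise = 0.9 * noise + random.gauss(0, 0.5)  # AR(1), mean-reverting
    y.append(0.8 * xt + noise)

# OLS hedge ratio: beta = cov(x, y) / var(x)
mx, my = statistics.fmean(x), statistics.fmean(y)
beta = sum((a - mx) * (b - my) for a, b in zip(x, y)) / sum((a - mx) ** 2 for a in x)

spread = [b - beta * a for a, b in zip(x, y)]
mu, sd = statistics.fmean(spread), statistics.pstdev(spread)
z = [(s - mu) / sd for s in spread]

# Simple rule: short the spread when z > 2, long when z < -2, else flat.
signals = ["short" if zi > 2 else "long" if zi < -2 else "flat" for zi in z]
print(f"beta={beta:.3f}, last z={z[-1]:.2f}")
```

In practice the hedge ratio and spread statistics would be estimated on a training window and the cointegration itself tested formally (e.g., Engle-Granger), which this sketch omits.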
Abstract:
This paper proposes an on-board Electric Vehicle (EV) battery charger with an enhanced Vehicle-to-Home (V2H) operation mode. For this purpose, an on-board bidirectional battery charger prototype was adapted to allow the Grid-to-Vehicle (G2V), Vehicle-to-Grid (V2G), and V2H operation modes. The paper presents the hardware topology and the control algorithms of this battery charger. The idea underlying the paper is to operate the on-board bidirectional battery charger as an energy backup system when a power outage occurs. Two power outage detection strategies were compared: one based on the half-cycle rms calculation of the power grid voltage, and another based on an rms value determined with a Kalman filter. The experimental results were obtained with the on-board EV battery charger under the G2V, V2G, and V2H operation modes. The results show that power outage detection is up to 90% faster using the Kalman filter than the other strategy, which also enables a faster transition between operation modes when a power outage occurs.
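The half-cycle rms detection strategy mentioned above can be sketched as follows. The sampling rate, nominal voltage, and 50% threshold are assumptions, and the paper's faster Kalman-filter variant is not reproduced here.

```python
# Minimal sketch of half-cycle rms outage detection on a synthetic
# 230 V / 50 Hz grid voltage sampled at 10 kHz (assumed values).
import math

FS, F_GRID, V_RMS = 10_000, 50, 230.0
HALF_CYCLE = FS // (2 * F_GRID)          # samples per half cycle (100)

def half_cycle_rms(samples):
    """RMS over the most recent half cycle of samples."""
    window = samples[-HALF_CYCLE:]
    return math.sqrt(sum(v * v for v in window) / len(window))

# Synthetic waveform: normal grid for 0.1 s, then an outage (zero volts).
outage_start = FS // 10
signal = [V_RMS * math.sqrt(2) * math.sin(2 * math.pi * F_GRID * n / FS)
          if n < outage_start else 0.0
          for n in range(FS // 5)]

THRESHOLD = 0.5 * V_RMS                   # flag an outage below 50% of nominal rms
detected_at = None
for n in range(HALF_CYCLE, len(signal)):
    if detected_at is None and half_cycle_rms(signal[:n]) < THRESHOLD:
        detected_at = n

delay_ms = (detected_at - outage_start) / FS * 1000
print(f"outage detected {delay_ms:.1f} ms after onset")
```

The detection delay is bounded by the half-cycle window (10 ms at 50 Hz), which is the latency a recursive estimator such as a Kalman filter can undercut.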
Abstract:
PURPOSE: To analyze the influence of biventricular pacing (BP) on clinical behavior, ventricular arrhythmia (VA) prevalence, and left ventricular ejection fraction (LVEF) by gated ventriculography. METHODS: Twenty-four patients with left bundle branch block (LBBB) and NYHA class III or IV underwent pacemaker implantation and were randomized either to the conventional or the BP group, all receiving BP after 6 months. RESULTS: Sixteen patients were in NYHA class IV (66.6%) and 8 were in class III (33.4%). After 1 year of follow-up, 14 patients were in class II (70%) and 5 were in class III (25%). Two sudden cardiac deaths occurred. A significant reduction in QRS length was found with BP (p=0.006). A statistically significant increase in LVEF was observed, from a mean of 19.13 ± 5.19% at baseline to 25.33 ± 5.90% with BP. Premature ventricular contraction (PVC) prevalence decreased from a mean of 10,670.00 ± 12,595.39 (SD) to a mean of 3,007.00 ± 3,216.63 (SD) PVC/24 h with BP (p<0.05). Regarding the hospital admission rate over 1 year, we observed a significant reduction from 60 to 16 admissions with BP (p<0.05). CONCLUSION: Patients with LBBB and severe heart failure experienced a significant improvement in NYHA class and LVEF with BP. A reduction in the hospital admission rate and VA prevalence also occurred.
Abstract:
Background: Systemic hypertension is highly prevalent and an important risk factor for cardiovascular events. Blood pressure control in hypertensive patients enrolled in the Hiperdia Program, a program of the Single Health System for the follow-up and monitoring of hypertensive patients, is still far below the desired level. Objective: To describe the epidemiological profile and to assess blood pressure control of patients enrolled in Hiperdia in the city of Novo Hamburgo (State of Rio Grande do Sul, Brazil). Methods: Cross-sectional study with a stratified cluster random sample, including 383 adults enrolled in the Hiperdia Program of the 15 Basic Health Units of the city of Porto Alegre, conducted between 2010 and 2011. Controlled blood pressure was defined as ≤ 140/90 mmHg. The hypertensive patients were interviewed and their blood pressure was measured using a calibrated aneroid device. Prevalence ratios (PR) with 95% confidence intervals, Wald's χ2 test, and simple and multiple Poisson regression were used in the statistical analysis. Results: The mean age was 63 ± 10 years, and most of the patients were females belonging to social class C, with a low level of education, a sedentary lifestyle, and a positive family history of systemic hypertension. Diabetes mellitus (DM) was observed in 31%; adherence to the antihypertensive treatment in 54.3%; and 33.7% had their blood pressure controlled. DM was strongly associated with inadequate BP control, with only 15.7% of the diabetics showing BP considered controlled. Conclusion: Even for hypertensive patients enrolled in the Hiperdia Program, BP control is not satisfactorily reached or sustained. Diabetic hypertensive patients show the poorest BP control.
Abstract:
In this paper we develop methods for estimation and forecasting in large time-varying parameter vector autoregressive models (TVP-VARs). To overcome computational constraints with likelihood-based estimation of large systems, we rely on Kalman filter estimation with forgetting factors. We also draw on ideas from the dynamic model averaging literature and extend the TVP-VAR so that its dimension can change over time. A final extension lies in the development of a new method for estimating, in a time-varying manner, the parameter(s) of the shrinkage priors commonly used with large VARs. These extensions are operationalized through the use of forgetting factor methods and are thus computationally simple. An empirical application involving forecasting inflation, real output, and interest rates demonstrates the feasibility and usefulness of our approach.
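A scalar sketch of the forgetting-factor device can illustrate why it is computationally cheap: inflating the state covariance by 1/λ each step stands in for an explicit state-noise specification. The single-coefficient model, λ = 0.99, and the noise variances below are illustrative choices, not the paper's settings.

```python
# Kalman filtering with a forgetting factor for one time-varying
# regression coefficient: y_t = beta_t * x_t + e_t, beta_t drifting.
import random

random.seed(1)
LAMBDA, R = 0.99, 0.25      # forgetting factor, measurement noise variance

# Simulate data with a slowly drifting true coefficient.
T = 400
true_beta = [1.0]
for _ in range(T - 1):
    true_beta.append(true_beta[-1] + random.gauss(0, 0.02))
x = [random.gauss(0, 1) for _ in range(T)]
y = [b * xi + random.gauss(0, 0.5) for b, xi in zip(true_beta, x)]

beta_hat, P = 0.0, 1.0      # initial state estimate and variance
estimates = []
for xt, yt in zip(x, y):
    P = P / LAMBDA                       # forgetting step replaces state noise Q
    S = xt * xt * P + R                  # innovation variance
    K = P * xt / S                       # Kalman gain
    beta_hat += K * (yt - xt * beta_hat) # measurement update
    P = (1 - K * xt) * P
    estimates.append(beta_hat)

err = abs(estimates[-1] - true_beta[-1])
print(f"final |beta_hat - beta| = {err:.3f}")
```

In the VAR case the same recursions run with matrices, but no extra parameters beyond λ need to be estimated, which is what keeps large systems tractable.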
Abstract:
Automated systems that require stability or motion control can be found in ever more fields. UAV and global positioning applications are the most common for this type of system, since they require very precise motion control. To carry out this process, inertial measurement units (IMUs) are used: by means of properly positioned accelerometers and gyroscopes, together with a correction of the error the latter may introduce, they provide an acceleration and an angular velocity from which the path travelled by the unit can be derived. The IMU, combined with a GPS by means of a Kalman filter, provides greater accuracy, as well as a starting point (given by the GPS), a route that can be plotted on a map and, if the GPS signal is lost, the ability to keep acquiring data from the IMU. These data can be collected and processed by an FPGA, which in turn can be synchronized with a PDA so that the user can see the motion of the system displayed. This work focuses on the operation of the IMU and on data acquisition with the FPGA. It also introduces the Kalman filter for correcting the sensor error.
Abstract:
This paper deals with the problem of navigation for an unmanned underwater vehicle (UUV) through image mosaicking. It represents a first step towards a real-time vision-based navigation system for a small-class, low-cost UUV. We propose a navigation system composed of: (i) an image mosaicking module which provides velocity estimates; and (ii) an extended Kalman filter based on the hydrodynamic equation of motion, previously identified for this particular UUV. The resulting system is able to estimate the position and velocity of the robot. Moreover, it is able to deal with the visual occlusions that usually appear when the sea bottom does not have enough visual features to solve the correspondence problem in a certain area of the trajectory.
Abstract:
This paper proposes MSISpIC, a probabilistic sonar scan matching algorithm for the localization of an autonomous underwater vehicle (AUV). The technique uses range scans gathered with a Mechanical Scanning Imaging Sonar (MSIS) and the robot displacement estimated through dead reckoning using a Doppler velocity log (DVL) and a motion reference unit (MRU). The proposed method is an extension of the pIC algorithm. An extended Kalman filter (EKF) is used to estimate the robot path during the scan in order to reference all the range and bearing measurements, as well as their uncertainty, to a scan-fixed frame before registering. The major contribution consists of experimentally proving that probabilistic sonar scan matching techniques have the potential to improve DVL-based navigation. The algorithm has been tested on an AUV guided along a 600 m path within an abandoned marina underwater environment with satisfactory results.
Abstract:
In dam inspection tasks, an underwater robot has to grab images while surveying the wall, maintaining a certain distance and relative orientation. This paper proposes the use of an MSIS (mechanically scanned imaging sonar) for relative positioning of a robot with respect to the wall. An imaging sonar gathers polar image scans from which depth images (range & bearing) are generated. Depth scans are first processed to extract a line corresponding to the wall (with the Hough transform), which is then tracked by means of an EKF (extended Kalman filter) using a static motion model and an implicit measurement equation associating the sensed points to the candidate line. The line estimate is referenced to the robot-fixed frame and represented in polar coordinates (ρ, θ), which directly correspond to the actual distance and relative orientation of the robot with respect to the wall. The proposed system has been tested in simulation as well as in water tank conditions.
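The first processing stage, extracting the wall line in (ρ, θ) with a Hough vote, can be sketched as below. The synthetic wall, the grid resolutions, and the noise levels are assumptions, and the EKF tracking stage is omitted.

```python
# Hough transform over (rho, theta): each point votes for every line
# through it; the wall appears as the accumulator maximum.
import math, random

random.seed(2)
# Synthetic wall: the line x*cos(th) + y*sin(th) = rho, rho=5 m, th=30 deg.
TRUE_RHO, TRUE_TH = 5.0, math.radians(30)
points = []
for _ in range(200):
    t = random.uniform(-4, 4)            # position along the wall
    x = TRUE_RHO * math.cos(TRUE_TH) - t * math.sin(TRUE_TH)
    y = TRUE_RHO * math.sin(TRUE_TH) + t * math.cos(TRUE_TH)
    points.append((x + random.gauss(0, 0.05), y + random.gauss(0, 0.05)))

# Accumulator: theta in 1-degree bins, rho in 0.1 m bins (assumed resolution).
RHO_STEP = 0.1
votes = {}
for x, y in points:
    for deg in range(180):
        th = math.radians(deg)
        rho = x * math.cos(th) + y * math.sin(th)
        cell = (deg, round(rho / RHO_STEP))
        votes[cell] = votes.get(cell, 0) + 1

(best_deg, best_rho_bin), _ = max(votes.items(), key=lambda kv: kv[1])
rho_hat = best_rho_bin * RHO_STEP
print(f"wall at rho={rho_hat:.1f} m, theta={best_deg} deg")
```

The recovered (ρ, θ) pair is exactly the quantity the abstract says is handed to the EKF, since it directly encodes the robot's distance and relative orientation to the wall.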
Abstract:
This paper describes a navigation system for autonomous underwater vehicles (AUVs) in partially structured environments, such as dams, harbors, marinas or marine platforms. A mechanical scanning imaging sonar is used to obtain information about the location of planar structures present in such environments. A modified version of the Hough transform has been developed to extract line features, together with their uncertainty, from the continuous sonar dataflow. The information obtained is incorporated into a feature-based SLAM algorithm running an Extended Kalman Filter (EKF). Simultaneously, the AUV's position estimate is provided to the feature extraction algorithm to correct the distortions that the vehicle motion produces in the acoustic images. Experiments carried out in a marina located in the Costa Brava (Spain) with the Ictineu AUV show the viability of the proposed approach.
Abstract:
The project consists of developing an algorithm that improves the final positioning of a system that acquires data from a standard GPS antenna. At certain moments this system suffers a total loss of GPS signal or receives a disturbed signal, resulting in poor positioning. We have proposed a solution that uses the GPS coordinates, a Kalman filter to deal with signal disturbances, and digital geographic databases to guarantee that the vehicle stays on the road, and that finally combines the temporal information of previous positions with the databases to position the vehicle when the signal is completely lost. The experiments carried out show that an improvement in positioning is obtained.
Abstract:
This paper extends multivariate Granger causality to take into account the subspaces along which Granger causality occurs, as well as long run Granger causality. The properties of these new notions of Granger causality, along with the requisite restrictions, are derived and extensively studied for a wide variety of time series processes, including linear invertible processes and VARMA. Using the proposed extensions, the paper demonstrates that: (i) mean reversion in L2 is an instance of long run Granger non-causality; (ii) cointegration is a special case of long run Granger non-causality along a subspace; (iii) controllability is a special case of Granger causality; and finally (iv) linear rational expectations entail (possibly testable) Granger causality restrictions along subspaces.
Abstract:
We review methods to estimate the average crystal (grain) size and the crystal (grain) size distribution in solid rocks. Average grain sizes often provide the basis for stress estimates or rheological calculations requiring the quantification of grain sizes in a rock's microstructure. The primary grain size data are either 1D (i.e., line intercept methods), 2D (area analysis) or 3D (e.g., computed tomography, serial sectioning). These data have been subjected to different data treatments over the years, and several studies assume a certain probability function (e.g., logarithmic, square root) to calculate statistical parameters such as the mean, median, mode or skewness of a crystal size distribution. The finally calculated average grain sizes have to be compatible between the different grain size estimation approaches in order to be properly applied, for example, in paleo-piezometers or grain size sensitive flow laws. Such compatibility is tested for different data treatments using one- and two-dimensional measurements. We propose an empirical conversion matrix for different datasets. These conversion factors provide the option to make different datasets compatible with each other, although the primary calculations were obtained in different ways. In order to present an average grain size, we propose to use the area-weighted mean for 2D measurements and the volume-weighted mean for 3D measurements in the case of unimodal grain size distributions. The shape of the crystal size distribution is important for studies of nucleation and growth of minerals. The shape of the crystal size distribution of garnet populations is compared between different 2D and 3D measurements, namely serial sectioning and computed tomography.
The comparison of directly measured 3D data, stereological data and directly measured 2D data shows the problems of the quality of the smallest grain sizes and the overestimation of small grain sizes in stereological tools, depending on the type of CSD. (C) 2011 Published by Elsevier Ltd.
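The area-weighted mean recommended above for 2D data can be illustrated with a toy computation; the grain areas below are made-up values, and the equivalent-circle diameter is one common (assumed) choice of size measure.

```python
# Area-weighted vs arithmetic mean grain size from 2D sectional areas.
import math

areas_um2 = [12.0, 45.0, 80.0, 150.0, 310.0, 520.0, 900.0]  # toy values, um^2

# Equivalent circular diameter of each sectioned grain: d = 2*sqrt(A/pi).
diameters = [2 * math.sqrt(a / math.pi) for a in areas_um2]

arith_mean = sum(diameters) / len(diameters)
# Weighting each diameter by its grain's area emphasizes the large grains
# that dominate the microstructure.
area_weighted = sum(a * d for a, d in zip(areas_um2, diameters)) / sum(areas_um2)

print(f"arithmetic mean: {arith_mean:.1f} um, area-weighted: {area_weighted:.1f} um")
```

For any non-uniform population the area-weighted mean exceeds the arithmetic mean, which is why the two must not be mixed across studies without a conversion factor.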
Abstract:
The state-space approach is used to evaluate the relation between soil physical and chemical properties in an area cultivated with sugarcane. The experiment was carried out on a Rhodic Kandiudalf in Piracicaba, State of São Paulo, Brazil. Sugarcane was planted on an area of 0.21 ha, i.e., in 15 rows 100 m long, spaced 1.4 m apart. Soil water content, soil organic matter, clay content and aggregate stability were sampled along a transect of 84 points, meter by meter. The state-space approach is used to evaluate how the soil water content is affected by itself and by the soil organic matter, clay content, and aggregate stability of neighboring locations, in different combinations, aiming to contribute to a better understanding of the relation among these variables in the soil. Results show that soil water contents were successfully estimated by this approach. The best performances were found when the estimate of soil water content at location i was related to soil water content, clay content and aggregate stability at location i-1. Results also indicate that this state-space model using all series describes the soil water content better than any equivalent multiple regression equation.
Abstract:
Introduction: This dissertation consists of three essays in equilibrium asset pricing. The first chapter studies the asset pricing implications of a general equilibrium model in which real investment is reversible at a cost. Firms face higher costs in contracting than in expanding their capital stock and decide to invest when their productive capital is scarce relative to the overall capital of the economy. Positive shocks to the capital of the firm increase the size of the firm and reduce the value of growth options. As a result, the firm is burdened with more unproductive capital and its value falls relative to the accumulated capital. The optimal consumption policy alters the optimal allocation of resources and affects the firm's value, generating mean-reverting dynamics for M/B ratios. The model (1) captures the convergence of price-to-book ratios, negative for growth stocks and positive for value stocks (firm migration), (2) generates deviations from the classic CAPM in line with the cross-sectional variation in expected stock returns, and (3) generates a non-monotone relationship between Tobin's q and conditional volatility consistent with the empirical evidence. The second chapter studies a standard portfolio-choice problem with transaction costs and mean reversion in expected returns. In the presence of transaction costs, no matter how small, arbitrage activity does not necessarily render all riskless rates of return equal. When two such rates follow stochastic processes, it is not optimal to arbitrage out immediately any discrepancy that arises between them. The reason is that immediate arbitrage would incur a definite expenditure of transaction costs whereas, without arbitrage intervention, there exists some, perhaps sufficient, probability that these two interest rates will come back together without any costs having been incurred.
Hence, one can surmise that at equilibrium the financial market will permit the coexistence of two riskless rates that are not equal to each other. For analogous reasons, randomly fluctuating expected rates of return on risky assets will be allowed to differ even after correction for risk, leading to important violations of the Capital Asset Pricing Model. The combination of randomness in expected rates of return and proportional transaction costs is a serious blow to existing frictionless pricing models. Finally, in the last chapter I propose a two-country, two-good general equilibrium economy with uncertainty about the fundamentals' growth rates to study the joint behavior of equity volatilities and correlations at the business cycle frequency. I assume that dividend growth rates jump from one state to another, while countries' switches are possibly correlated. The model is solved in closed form and the analytical expressions for stock prices are reported. When calibrated to the empirical data of the United States and the United Kingdom, the results show that, given the existing degree of synchronization across these business cycles, the model captures quite well the historical patterns of stock return volatilities. Moreover, I can explain the time behavior of the correlation, but only under the assumption of a global business cycle.