987 results for N Euclidean algebra
Abstract:
Quantitative or algorithmic trading is the automation of investment decisions according to a fixed or dynamic set of rules that determine trading orders. It has grown to account for up to 70% of the trading volume in some of the largest financial markets, such as the New York Stock Exchange (NYSE). However, there is not a significant amount of academic literature devoted to it, owing to the private nature of investment banks and hedge funds. This project aims to review the literature and discuss the available models in a subject where publications are scarce and infrequent. We review the basic and fundamental mathematical concepts needed for modeling financial markets, such as stochastic processes, stochastic integration, and basic models for price and spread dynamics, all necessary for building quantitative strategies. We also contrast these models with real market data sampled at one-minute frequency from the Dow Jones Industrial Average (DJIA). Quantitative strategies try to exploit two types of behavior: trend following or mean reversion. The former is grouped in the so-called technical models and the latter in the so-called pairs trading. Technical models have been discarded by financial theoreticians, but we show that they can be properly cast into a well-defined scientific predictor if the signal they generate passes the test of being a Markov time. That is, we can tell whether the signal has occurred or not by examining the information up to the current time; or, more technically, if the event is F_t-measurable. On the other hand, the concept of pairs trading, or market-neutral strategy, is fairly simple. However, it can be cast in a variety of mathematical models, ranging from a method based on a simple Euclidean distance, through a co-integration framework, to stochastic differential equations such as the well-known mean-reverting Ornstein-Uhlenbeck SDE and its variations. A model for forecasting any economic or financial magnitude could be properly defined with scientific rigor, yet lack any economic value and be considered useless from a practical point of view. This is why this project could not be complete without a backtesting of the mentioned strategies. Conducting a useful and realistic backtesting is by no means a trivial exercise, since the "laws" that govern financial markets are constantly evolving in time. This is the reason we place emphasis on the calibration process of the strategies' parameters to adapt to the given market conditions. We find that the parameters of technical models are more volatile than their counterparts from market-neutral strategies, and calibration must be done at high sampling frequency to constantly track the current market situation. As a whole, the goal of this project is to provide an overview of a quantitative approach to investment, reviewing basic strategies and illustrating them by means of a backtesting with real financial market data. The sources of the data used in this project are Bloomberg for intraday time series and Yahoo! for daily prices. All numerical computations and graphics used and shown in this project were implemented in MATLAB from scratch as part of this thesis. No other mathematical or statistical software was used.
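A minimal sketch of the mean-reversion mechanics behind pairs trading, assuming the spread between two co-moving assets follows an Ornstein-Uhlenbeck process; the parameter values and thresholds below are illustrative, not those of the thesis (which uses MATLAB).

```python
import numpy as np

# Simulate an Ornstein-Uhlenbeck spread dX = theta*(mu - X) dt + sigma dW
# and derive a simple mean-reversion trading signal from its z-score.
# All parameter values are hypothetical, chosen only for demonstration.

rng = np.random.default_rng(0)
theta, mu, sigma = 10.0, 0.0, 0.5   # reversion speed, long-run mean, volatility (per day)
dt, n = 1.0 / 390, 10_000           # one-minute steps, roughly 25 trading days

x = np.empty(n)
x[0] = mu
for t in range(1, n):
    # Euler-Maruyama discretization of the OU stochastic differential equation
    x[t] = x[t-1] + theta * (mu - x[t-1]) * dt + sigma * np.sqrt(dt) * rng.standard_normal()

# z-score against the OU stationary standard deviation sigma / sqrt(2*theta)
z = (x - mu) / (sigma / np.sqrt(2.0 * theta))

entry, exit_ = 2.0, 0.5             # illustrative z-score thresholds
position = np.zeros(n)              # +1 long the spread, -1 short the spread
for t in range(1, n):
    if position[t-1] == 0:
        # The decision uses only information up to time t, so the entry
        # signal is a Markov time in the sense discussed in the abstract.
        position[t] = -np.sign(z[t]) if abs(z[t]) > entry else 0.0
    else:
        position[t] = 0.0 if abs(z[t]) < exit_ else position[t-1]

print(f"time in market: {np.mean(position != 0):.1%}")
```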
Abstract:
Our essay aims at studying suitable statistical methods for the clustering of compositional data in situations where observations are constituted by trajectories of compositional data, that is, by sequences of composition measurements along a domain. Observed trajectories are known as “functional data” and several methods have been proposed for their analysis. In particular, methods for clustering functional data, known as Functional Cluster Analysis (FCA), have been applied by practitioners and scientists in many fields. To our knowledge, FCA techniques have not been extended to cope with the problem of clustering compositional data trajectories. In order to extend FCA techniques to the analysis of compositional data, FCA clustering techniques have to be adapted by using a suitable compositional algebra. The present work centres on the following question: given a sample of compositional data trajectories, how can we formulate a segmentation procedure giving homogeneous classes? To address this problem we follow the steps described below. First of all, we adapt the well-known spline smoothing techniques in order to cope with the smoothing of compositional data trajectories. In fact, an observed curve can be thought of as the sum of a smooth part plus some noise due to measurement errors. Spline smoothing techniques are used to isolate the smooth part of the trajectory: clustering algorithms are then applied to these smooth curves. The second step consists in building suitable metrics for measuring the dissimilarity between trajectories: we propose a metric that accounts for differences in both shape and level, and a metric accounting for differences in shape only. A simulation study is performed in order to evaluate the proposed methodologies, using both hierarchical and partitional clustering algorithms. The quality of the obtained results is assessed by means of several indices.
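A hedged sketch of the pipeline described above, assuming the centred log-ratio (clr) transform as the compositional algebra (the essay does not say which one it adopts) and a level-versus-shape split in the spirit of the two proposed metrics; all names and data are illustrative.

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

def clr(comp):
    """Centred log-ratio transform: log(x_i / geometric mean of x)."""
    logx = np.log(comp)
    return logx - logx.mean(axis=-1, keepdims=True)

def smooth_trajectory(t, traj, s=0.5):
    """Spline-smooth each clr coordinate of a compositional trajectory."""
    y = clr(traj)                          # shape (n_points, n_parts)
    return np.column_stack(
        [UnivariateSpline(t, y[:, j], s=s)(t) for j in range(y.shape[1])]
    )

def level_and_shape_distance(a, b):
    """Distance on smoothed curves; removing each curve's mean leaves shape only."""
    d_level = np.linalg.norm(a - b)
    d_shape = np.linalg.norm((a - a.mean(0)) - (b - b.mean(0)))
    return d_level, d_shape

# Hypothetical data: two 3-part compositional trajectories over 50 points.
rng = np.random.default_rng(1)
t = np.linspace(0, 1, 50)
raw = rng.dirichlet([4, 3, 2], size=(2, 50))
A, B = (smooth_trajectory(t, raw[i]) for i in range(2))
print(level_and_shape_distance(A, B))
```

A distance matrix built from either metric can then feed a hierarchical or partitional clustering algorithm, as in the simulation study.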
Abstract:
Morales–Ramis theory is Galois theory in the context of dynamical systems and relates two different notions of integrability: integrability in the sense of Liouville of a Hamiltonian system, and integrability in the sense of differential Galois theory of a differential equation. This article presents some applications of Morales–Ramis theory to non-integrability problems of Hamiltonian systems whose normal variational equation along a particular integral curve is a second-order linear differential equation with rational function coefficients. The integrability of the normal variational equation is analyzed by means of the Kovacic algorithm.
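For context, the Kovacic algorithm takes equations in reduced form as input; the following standard reduction (general knowledge, not specific to this article) shows how a second-order equation with rational coefficients is brought to that form:

```latex
% A normal variational equation of the type considered,
%   y'' + a(x)\,y' + b(x)\,y = 0, \qquad a, b \in \mathbb{C}(x),
% is brought to reduced form via the substitution
% y = \xi \exp\big(-\tfrac{1}{2}\int a\big), giving
\[
  \xi'' = r(x)\,\xi, \qquad
  r = \frac{a^{2}}{4} + \frac{a'}{2} - b \;\in\; \mathbb{C}(x).
\]
```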
Abstract:
We consider the joint visualization of two matrices which have common rows and columns, for example multivariate data observed at two time points or split according to a dichotomous variable. Methods of interest include principal components analysis for interval-scaled data, or correspondence analysis for frequency data or ratio-scaled variables on commensurate scales. A simple result in matrix algebra shows that by setting up the matrices in a particular block format, matrix sum and difference components can be visualized. The case when we have more than two matrices is also discussed and the methodology is applied to data from the International Social Survey Program.
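The block construction can be made explicit. The following identity is standard linear algebra, consistent with the abstract's description though not quoted from the paper; I denotes an identity matrix of the appropriate size:

```latex
\[
\begin{pmatrix} A & B \\ B & A \end{pmatrix}
=
\tfrac{1}{\sqrt{2}}\begin{pmatrix} I & I \\ I & -I \end{pmatrix}
\begin{pmatrix} A+B & 0 \\ 0 & A-B \end{pmatrix}
\tfrac{1}{\sqrt{2}}\begin{pmatrix} I & I \\ I & -I \end{pmatrix}
\]
```

Since the outer factors are orthogonal, a singular value decomposition of the block matrix reduces to the separate decompositions of A+B and A-B, which is why sum and difference components appear as separate displays.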
Abstract:
This paper examines competition in the standard one-dimensional Downsian model of two-candidate elections, but where one candidate (A) enjoys an advantage over the other candidate (D). Voters' preferences are Euclidean, but any voter will vote for candidate A over candidate D unless D is closer to her ideal point by some fixed distance δ. The location of the median voter's ideal point is uncertain, and its distribution is commonly known by both candidates. The candidates simultaneously choose locations to maximize the probability of victory. Pure-strategy equilibria often fail to exist in this model, except under special conditions on δ and the distribution of the median ideal point. We solve for the essentially unique symmetric mixed equilibrium, show that candidate A adopts more moderate policies than candidate D, and obtain some comparative statics results about the probability of victory and the expected distance between the two candidates' policies.
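In symbols, the voting rule stated above reads as follows, where x_A and x_D are the candidates' positions and v is the voter's ideal point (notation ours, not the paper's):

```latex
\[
  v \text{ votes for } A
  \quad\Longleftrightarrow\quad
  |x_A - v| \le |x_D - v| + \delta .
\]
```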
Abstract:
This paper provides an explicit cofibrant resolution of the operad encoding Batalin-Vilkovisky algebras. Thus it defines the notion of homotopy Batalin-Vilkovisky algebras with the required homotopy properties. To define this resolution we extend the theory of Koszul duality to operads and properads that are defined by quadratic and linear relations. The operad encoding Batalin-Vilkovisky algebras is shown to be Koszul in this sense. This allows us to prove a Poincaré-Birkhoff-Witt theorem for such an operad and to give an explicit small quasi-free resolution for it. This particular resolution enables us to describe the deformation theory and homotopy theory of BV-algebras and of homotopy BV-algebras. We show that any topological conformal field theory carries a homotopy BV-algebra structure which lifts the BV-algebra structure on homology. The same result is proved for the singular chain complex of the double loop space of a topological space endowed with an action of the circle. We also prove the cyclic Deligne conjecture with this cofibrant resolution of the operad BV. We develop the general obstruction theory for algebras over the Koszul resolution of a properad and apply it to extend a conjecture of Lian-Zuckerman, showing that certain vertex algebras have an explicit homotopy BV-algebra structure.
Abstract:
We construct spectral sequences in the framework of Baues-Wirsching cohomology and homology for functors between small categories and analyze particular cases including Grothendieck fibrations. We also give applications to more classical cohomology and homology theories, including Hochschild-Mitchell cohomology and those studied before by Watts, Roos, Quillen and others.
Abstract:
We present formulas for computing the resultant of sparse polynomials as a quotient of two determinants, the denominator being a minor of the numerator. These formulas extend the original formulation given by Macaulay for homogeneous polynomials.
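Schematically, the claim is the following (notation hypothetical; the precise matrices depend on the construction in the paper):

```latex
\[
  \operatorname{Res}(f_1,\dots,f_n) \;=\; \frac{\det D}{\det D'},
\]
```

where D is a numerator matrix built from the coefficients of the input polynomials and D' is a designated minor of D, in the spirit of Macaulay's classical formula for the homogeneous case.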
Abstract:
Let I be an ideal in a local Cohen-Macaulay ring (A, m). Assume I to be generically a complete intersection of positive height. We compute the depth of the Rees algebra and the form ring of I when the analytic deviation of I equals one and its reduction number is also at most one. The formulas we obtain coincide with the already known formulas for almost complete intersection ideals.
Abstract:
Farm planning requires an assessment of the soil class. Research suggests that the Diagnosis and Recommendation Integrated System (DRIS) has the capacity to evaluate the nutritional status of coffee plantations, regardless of environmental conditions. Additionally, the use of DRIS could reduce the costs of farm planning. This study evaluated the relationship between the soil class and nutritional status of coffee plants (Coffea canephora Pierre) using the Critical Level (CL) and DRIS methods, based on two multivariate statistical methods (discriminant and multidimensional scaling analyses). During three consecutive years, yield and foliar concentrations of nutrients (N, P, K, Ca, Mg, S, B, Zn, Mn, Fe and Cu) were obtained from coffee plantations cultivated in Espírito Santo state. Discriminant analysis showed that the soil class was an important factor determining the nutritional status of the coffee plants. The grouping separation by the CL method was not as effective as that by DRIS. The bidimensional analysis of Euclidean distances did not show the same relationship between plant nutritional status and soil class. Multidimensional scaling analysis by the CL method indicated that 93.3% of the crops grouped into one cluster, whereas the DRIS method split the fields more evenly into three clusters. The DRIS method thus proved to be more consistent than the CL method for grouping coffee plantations by soil class.
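For illustration, a minimal sketch of the multidimensional scaling step named above, applied to hypothetical per-field nutrient vectors; it does not reproduce the study's DRIS or CL index calculations.

```python
import numpy as np
from sklearn.manifold import MDS
from sklearn.cluster import KMeans

# Embed fields in two dimensions by multidimensional scaling of their
# Euclidean distances, then group them. Data are simulated stand-ins for
# per-field nutritional indices (30 fields x 11 nutrients).
rng = np.random.default_rng(2)
indices = rng.normal(size=(30, 11))

coords = MDS(n_components=2, dissimilarity="euclidean",
             random_state=0).fit_transform(indices)

clusters = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(coords)
for k in range(3):
    print(f"cluster {k}: {np.sum(clusters == k)} fields")
```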
Abstract:
A statistical methodology for the objective comparison of LDI-MS mass spectra of blue gel pen inks was evaluated. Thirty-three blue gel pen inks previously studied by Raman spectroscopy were analyzed directly on the paper in both positive and negative mode. The obtained mass spectra were first compared using relative areas of selected peaks, using the Pearson correlation coefficient and the Euclidean distance. Intra-variability among results from one ink and inter-variability between results from different inks were compared in order to choose a differentiation threshold minimizing the rate of false negatives (i.e. avoiding false differentiation of the inks). This yielded a discriminating power (DP) of up to 77% for analyses made in the negative mode. The whole mass spectra were then compared using the same methodology, allowing for a better DP of 92% in the negative mode using the Pearson correlation on standardized data. The positive mode generally yielded a lower DP than the negative mode due to a higher intra-variability compared to the inter-variability in the mass spectra of the ink samples.
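A minimal sketch of the two comparison metrics named in the abstract, applied to hypothetical mass spectra binned onto a common m/z axis; the standardization mirrors the "standardized data" used for the Pearson-based comparison, and all data here are simulated.

```python
import numpy as np

rng = np.random.default_rng(3)
spectrum_a = rng.random(500)                                # binned intensities (hypothetical)
spectrum_b = spectrum_a + rng.normal(scale=0.05, size=500)  # replicate of the same ink

def standardize(x):
    """Center and scale a spectrum so comparisons are on a common scale."""
    return (x - x.mean()) / x.std()

a, b = standardize(spectrum_a), standardize(spectrum_b)
pearson_r = np.corrcoef(a, b)[0, 1]   # similarity: near 1 for intra-ink pairs
euclid_d = np.linalg.norm(a - b)      # dissimilarity: small for intra-ink pairs

# A differentiation threshold on these scores would be tuned so that spectra
# of the same ink are (almost) never declared different (few false negatives).
print(f"Pearson r = {pearson_r:.3f}, Euclidean distance = {euclid_d:.3f}")
```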
Abstract:
The complex relationship between structural and functional connectivity, as measured by noninvasive imaging of the human brain, poses many unresolved challenges and open questions. Here, we apply analytic measures of network communication to the structural connectivity of the human brain and explore the capacity of these measures to predict resting-state functional connectivity across three independently acquired datasets. We focus on the layout of shortest paths across the network and on two communication measures, search information and path transitivity, which account for how these paths are embedded in the rest of the network. Search information is an existing measure of the information needed to access or trace shortest paths; we introduce path transitivity to measure the density of local detours along the shortest path. We find that both search information and path transitivity predict the strength of functional connectivity among both connected and unconnected node pairs. They do so at levels that match or significantly exceed path length measures and Euclidean distance, as well as computational models of neural dynamics. This capacity suggests that dynamic couplings due to interactions among neural elements in brain networks are substantially influenced by the broader network context adjacent to the shortest communication pathways.
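A hedged sketch of search information for a binary graph, following the usual definition (the information, in bits, against a random walker following the shortest path); this is the simplest unweighted form, not necessarily the exact variant applied to the three datasets.

```python
import networkx as nx
import numpy as np

def search_information(G, source, target):
    """S(source -> target) = -log2 of the probability that an unbiased
    random walker traverses the shortest path from source to target."""
    path = nx.shortest_path(G, source, target)
    prob = 1.0
    for k, node in enumerate(path[:-1]):
        # At each step the walker picks uniformly among neighbors, excluding
        # the edge it just arrived by (one common convention).
        options = G.degree(node) - (1 if k > 0 else 0)
        prob /= options
    return -np.log2(prob)

# Hypothetical toy network standing in for a structural connectome.
G = nx.karate_club_graph()
print(f"S(0 -> 33) = {search_information(G, 0, 33):.2f} bits")
```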
Abstract:
Through an imaginary change of coordinates in the Galilei algebra in 4 space dimensions, and making use of an original idea of Dirac and Lévy-Leblond, we are able to obtain the relativistic equations of Dirac and of Bargmann and Wigner starting from the (Galilean-invariant) Schrödinger equation.