25 results for Fibonacci series and golden ratio
at Consorci de Serveis Universitaris de Catalunya (CSUC), Spain
Abstract:
Intuitively, music has both predictable and unpredictable components. In this work we assess this qualitative statement in a quantitative way, using common time series models fitted to state-of-the-art music descriptors. These descriptors cover different musical facets and are extracted from a large collection of real audio recordings comprising a variety of musical genres. Our findings show that music descriptor time series exhibit a certain predictability not only over short time intervals, but also over mid-term and relatively long intervals. This fact is observed independently of the descriptor, musical facet and time series model we consider. Moreover, we show that our findings are not only of theoretical relevance but can also have practical impact. To this end we demonstrate that music predictability at relatively long time intervals can be exploited in a real-world application, namely the automatic identification of cover songs (i.e. different renditions or versions of the same musical piece). Importantly, this prediction strategy yields a parameter-free approach for cover song identification that is substantially faster, requires less storage, and still maintains highly competitive accuracy compared to state-of-the-art systems.
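As a rough illustration of the kind of experiment described above (a sketch, not the authors' actual pipeline; the descriptor values below are synthetic stand-ins for real audio features), one can fit a simple autoregressive predictor to a descriptor time series and watch how prediction error grows with the forecast horizon:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for a music descriptor time series (e.g. one timbre
# coefficient sampled frame by frame); real descriptors would come from audio.
n = 2000
x = np.zeros(n)
for t in range(1, n):
    x[t] = 0.9 * x[t - 1] + rng.normal(scale=0.1)

# Fit an AR(1) model x[t] ~ a * x[t-1] by least squares.
a = np.dot(x[1:], x[:-1]) / np.dot(x[:-1], x[:-1])

# Prediction error at horizon h: predict x[t+h] as a**h * x[t].
# Normalized MSE below 1 indicates predictability at that horizon.
for h in (1, 10, 100):
    pred = a ** h * x[:-h]
    err = np.mean((x[h:] - pred) ** 2) / np.var(x)
    print(f"horizon {h:3d}: normalized MSE = {err:.3f}")
```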
Abstract:
Minkowski's ?(x) function can be seen as the confrontation of two number systems: regular continued fractions and the alternated dyadic system. This way of looking at it permits us to prove that its derivative, as also happens for many other non-decreasing singular functions from [0,1] to [0,1], can only attain two values when it exists: zero and infinity. It is also proved that if the average of the partial quotients in the continued fraction expansion of x is greater than k* = 5.31972 and ?'(x) exists, then ?'(x) = 0. In the same way, if the same average is less than k** = 2 log2(F), where F is the golden ratio, then ?'(x) = infinity. Finally, some results are presented concerning the metric properties of continued fraction and alternated dyadic expansions.
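For reference, ?(x) can be computed directly from the continued fraction expansion via the classical formula ?([0; a1, a2, ...]) = 2 · Σ_{k≥1} (−1)^{k+1} 2^{−(a1+⋯+ak)}. A minimal Python sketch (floating-point arithmetic limits the attainable precision):

```python
import math

def question_mark(x, max_terms=30):
    """Minkowski's ?(x) for x in (0, 1), from the continued fraction of x:
    for x = [0; a1, a2, ...],  ?(x) = 2 * sum_k (-1)**(k+1) * 2**-(a1+...+ak)."""
    result, sign, exponent = 0.0, 1.0, 0
    for _ in range(max_terms):
        if x == 0:
            break
        a = int(1 / x)        # next partial quotient (float rounding may bite)
        x = 1 / x - a         # Gauss map step
        exponent += a
        result += sign * 2.0 ** (1 - exponent)
        sign = -sign
    return result

print(question_mark(0.5))                 # 0.5: x = 1/2 is a fixed point
print(2 * math.log2((1 + 5**0.5) / 2))    # k** = 2*log2(golden ratio) = 1.3885...
```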
Abstract:
We construct estimates of educational attainment for a sample of OECD countries using previously unexploited sources. We follow a heuristic approach to obtain plausible time profiles for attainment levels by removing sharp breaks in the data that seem to reflect changes in classification criteria. We then construct indicators of the information content of our series and of a number of previously available data sets, and examine their performance in several growth specifications. We find a clear positive correlation between data quality and the size and significance of human capital coefficients in growth regressions. Using an extension of the classical errors-in-variables model, we construct a set of meta-estimates of the coefficient of years of schooling in an aggregate Cobb-Douglas production function. Our results suggest that, after correcting for measurement error bias, the value of this parameter is well above 0.50.
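The measurement-error correction rests on the classical attenuation result: noisy schooling data bias OLS toward zero by the reliability ratio. In the simplest one-regressor version (the paper uses an extension of this model), with observed schooling x = x* + u, true value x*, and error u:

```latex
\operatorname{plim}\,\hat{\beta}_{\mathrm{OLS}} = \lambda\,\beta,
\qquad
\lambda = \frac{\operatorname{Var}(x^{*})}{\operatorname{Var}(x^{*}) + \operatorname{Var}(u)} \in (0,1),
```

so dividing an OLS estimate by an estimate of the reliability ratio λ recovers the structural coefficient, which is the logic behind the meta-estimates.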
Abstract:
I construct "homogeneous" series of GVA at current and constant prices, employment and population for Spain and its regions covering the period 1955-2007. The series are obtained by linking the Regional Accounts of the National Statistical Institute with the series constructed by Julio Alcaide and his team for the BBVA Foundation. The "switching point" at which this last source stops being used as a reference to construct the linked series is determined using a procedure that allows me to estimate which of the two competing series would produce an estimator with the lowest MSE when used as the dependent variable in a regression on an arbitrary independent variable. To the extent possible, the difference between the two series found at the point of linkage is distributed between the initial levels of the older series and its subsequent growth, using external estimates of the relevant variables at the beginning of the sample period.
Abstract:
This thesis explores the use of inductive links for automotive applications where wiring between the electronic control unit (ECU) and the sensors or detectors is difficult or impossible. Two methods are proposed: 1) monitoring switched (two-state) sensors via inductive coupling, and 2) using the same physical principle to transmit the power needed to supply remote autonomous sensors. Occupancy and seat-belt detection for removable seats can be implemented with passive wireless systems based on LC resonant circuits, where the state of the sensors determines the capacitance and hence the resonant frequency. Changes in frequency are detected by a coil placed in the vehicle floor. The system was successfully tested over a range of 0.5 cm to 3 cm. The experiments were carried out using an impedance analyzer connected to a primary coil and commercial sensors connected to a remote circuit. The second proposal consists of transmitting power remotely from a coil in the vehicle floor to an autonomous device located in the seat. This device monitors the state of the detectors (occupancy and seat belt) and transmits the data through a commercial radio-frequency transceiver or through the inductive link itself. The coils required for an operating frequency below 150 kHz were evaluated, and the most suitable voltage regulator for maximizing overall efficiency was studied. Four types of voltage regulators were analyzed and compared in terms of power efficiency. Linear shunt voltage regulators provide better power efficiency than the alternatives: linear series regulators and buck or boost switching regulators. The efficiencies achieved were around 40%, 25% and 10% for coil distances of 1 cm, 1.5 cm and 2 cm, respectively. Experimental tests showed that the autonomous sensors were correctly powered up to distances of 2.5 cm.
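The sensing principle reduces to the resonance formula f0 = 1/(2π√(LC)): switching the sensor state changes C and shifts f0, which the floor coil picks up as an impedance change. A small sketch with illustrative component values (not the thesis' actual ones):

```python
import math

def resonant_frequency(L, C):
    """Resonant frequency of an LC tank: f0 = 1 / (2*pi*sqrt(L*C))."""
    return 1.0 / (2.0 * math.pi * math.sqrt(L * C))

L = 1e-3          # 1 mH sensing coil (illustrative value)
C_open = 10e-9    # capacitance with the switch sensor open (illustrative)
C_closed = 22e-9  # capacitance with the switch sensor closed (illustrative)

# Each sensor state maps to a distinct resonant frequency that the
# primary coil in the vehicle floor can detect.
print(f"open:   {resonant_frequency(L, C_open) / 1e3:.1f} kHz")    # ~50.3 kHz
print(f"closed: {resonant_frequency(L, C_closed) / 1e3:.1f} kHz")  # ~33.9 kHz
```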
Abstract:
Quantitative or algorithmic trading is the automation of investment decisions obeying a fixed or dynamic set of rules to determine trading orders. It has increasingly made its way up to 70% of the trading volume of one of the biggest financial markets, the New York Stock Exchange (NYSE). However, there is not a significant amount of academic literature devoted to it, due to the private nature of investment banks and hedge funds. This project aims to review the literature and discuss the available models in a subject where publications are scarce and infrequent. We review the basic mathematical concepts needed for modeling financial markets, such as stochastic processes, stochastic integration, and basic models for price and spread dynamics necessary for building quantitative strategies. We also contrast these models with real market data sampled at one-minute frequency from the Dow Jones Industrial Average (DJIA). Quantitative strategies try to exploit two types of behavior: trend following or mean reversion. The former is grouped in the so-called technical models and the latter in so-called pairs trading. Technical models have been discarded by financial theoreticians, but we show that they can be properly cast as well-defined scientific predictors if the signal they generate passes the test of being a Markov time; that is, we can tell whether the signal has occurred or not by examining the information up to the current time, or, more technically, whether the event is F_t-measurable. The concept of pairs trading, or market-neutral strategy, is fairly simple by comparison, but it can be cast in a variety of mathematical models, ranging from a method based on a simple Euclidean distance, to a co-integration framework, to stochastic differential equations such as the well-known Ornstein-Uhlenbeck mean-reverting process and its variations. A model for forecasting any economic or financial magnitude can be defined with scientific rigor yet lack any economic value and be considered useless from a practical point of view. This is why this project could not be complete without a backtest of the mentioned strategies. Conducting a useful and realistic backtest is by no means a trivial exercise, since the "laws" that govern financial markets are constantly evolving. This is why we emphasize the calibration of the strategies' parameters to the given market conditions. We find that the parameters of technical models are more volatile than their counterparts from market-neutral strategies, and calibration must be done at high sampling frequency to track the current market situation. As a whole, the goal of this project is to provide an overview of a quantitative approach to investment, reviewing basic strategies and illustrating them by means of a backtest with real financial market data. The sources of the data used in this project are Bloomberg for intraday time series and Yahoo! for daily prices. All numerical computations and graphics in this project were implemented from scratch in MATLAB as part of this thesis; no other mathematical or statistical software was used.
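As a toy illustration of the mean-reversion side (a Python sketch, not the thesis' MATLAB code; all parameters are arbitrary), the following simulates an Ornstein-Uhlenbeck spread and applies a standard z-score entry/exit rule:

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulate an Ornstein-Uhlenbeck spread: dS = theta*(mu - S)*dt + sigma*dW.
theta, mu, sigma, dt, n = 5.0, 0.0, 0.3, 1.0 / 252, 252 * 4
s = np.zeros(n)
for t in range(1, n):
    s[t] = s[t - 1] + theta * (mu - s[t - 1]) * dt \
           + sigma * np.sqrt(dt) * rng.normal()

# Market-neutral rule: short the spread when its z-score is high,
# long when it is low, flat near the mean (thresholds are illustrative).
z = (s - s.mean()) / s.std()
position = np.where(z > 1.0, -1.0, np.where(z < -1.0, 1.0, 0.0))

# P&L from holding yesterday's position over today's spread change.
pnl = np.sum(position[:-1] * np.diff(s))
print(f"cumulative spread P&L: {pnl:.3f}")
```

Note that using the full-sample mean and standard deviation introduces look-ahead bias; a realistic backtest would estimate them from a trailing window, which is precisely the calibration problem emphasized above.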
Abstract:
I describe some of the features that characterize the activity and migration of Cory's shearwater during approximately one year. I also explore the influence of the Moon, photoperiod, geographic position and life-history stage on the resulting patterns and on the periodicity of the latter. I have principally used time series and regression analysis; their use here is one of the first applications to the analysis of logger data in seabirds. An intriguing finding of this work is the lunar periodicity that pervades the annual cycle of this species.
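One simple way to test for such lunar periodicity in daily logger data (a generic sketch, not necessarily the analysis used here) is to inspect the periodogram for power near the 29.53-day synodic month:

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic daily activity series with a lunar-period component (29.53 d).
days = np.arange(365)
activity = np.sin(2 * np.pi * days / 29.53) + rng.normal(scale=0.8, size=days.size)

# Periodogram via the FFT; look for a peak near the synodic month.
power = np.abs(np.fft.rfft(activity - activity.mean())) ** 2
freqs = np.fft.rfftfreq(days.size, d=1.0)   # cycles per day
periods = 1.0 / freqs[1:]                   # skip the zero frequency
peak = periods[np.argmax(power[1:])]
print(f"dominant period: {peak:.1f} days")  # near 29.5, within bin resolution
```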
Abstract:
The contributions of the correlated and uncorrelated components of the electron-pair density to atomic and molecular intracule I(r) and extracule E(R) densities and their Laplacian functions ∇²I(r) and ∇²E(R) are analyzed at the Hartree-Fock (HF) and configuration interaction (CI) levels of theory. The topologies of the uncorrelated components of these functions can be rationalized in terms of the corresponding one-electron densities. In contrast, by analyzing the correlated components of I(r) and E(R), namely, IC(r) and EC(R), the effect of electron Fermi and Coulomb correlation can be assessed at the HF and CI levels of theory. Moreover, the contribution of Coulomb correlation can be isolated by means of difference maps between IC(r) and EC(R) distributions calculated at the two levels of theory. As application examples, the He, Ne, and Ar atomic series, the C₂²⁻, N₂, O₂²⁺ molecular series, and the C₂H₄ molecule have been investigated. For these atoms and molecules, it is found that Fermi correlation accounts for the main characteristics of IC(r) and EC(R), with Coulomb correlation slightly increasing the locality of these functions at the CI level of theory. Furthermore, IC(r), EC(R), and the associated Laplacian functions reveal the short-ranged nature and high isotropy of Fermi and Coulomb correlation in atoms and molecules.
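For context, the intracule and extracule densities are the standard distributions of the relative and center-of-mass coordinates of electron pairs, defined from the two-electron density Γ(r₁, r₂):

```latex
I(\mathbf{r}) = \int \Gamma(\mathbf{r}_1, \mathbf{r}_2)\,
                \delta(\mathbf{r}_1 - \mathbf{r}_2 - \mathbf{r})\,
                d\mathbf{r}_1\, d\mathbf{r}_2,
\qquad
E(\mathbf{R}) = \int \Gamma(\mathbf{r}_1, \mathbf{r}_2)\,
                \delta\!\Big(\tfrac{\mathbf{r}_1 + \mathbf{r}_2}{2} - \mathbf{R}\Big)\,
                d\mathbf{r}_1\, d\mathbf{r}_2,
```

and the correlated components IC(r) and EC(R) are what remains after subtracting the uncorrelated (independent-electron) part.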
Abstract:
A series of new benzolactam derivatives was synthesized and the derivatives were evaluated for their affinities at the dopamine D1, D2, and D3 receptors. Some of these compounds showed high D2 and/or D3 affinity and selectivity over the D1 receptor. The SAR study of these compounds revealed structural characteristics that decisively influenced their D2 and D3 affinities. Structural models of the complexes between some of the most representative compounds of this series and the D2 and D3 receptors were obtained with the aim of rationalizing the observed experimental results. Moreover, selected compounds showed moderate binding affinity at 5-HT2A, which could contribute to reducing the occurrence of extrapyramidal side effects as potential antipsychotics.
Abstract:
When did overseas trade start to matter for living standards? Traditional real-wage indices suggest that living standards in Europe stagnated before 1800. In this paper, we argue that welfare rose substantially, but surreptitiously, because of an influx of new goods as a result of overseas trade. Colonial luxuries such as tea, coffee, and sugar transformed European diets after the discovery of America and the rounding of the Cape of Good Hope. These goods became household items in many countries by the end of the 18th century. We use three different methods to calculate welfare gains based on price data and the rate of adoption of these new colonial goods. Our results suggest that by 1800, the average Englishman would have been willing to forego 10% or more of his income in order to maintain access to sugar and tea alone. These findings are robust to a wide range of alternative assumptions, data series, and valuation methods.
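The "willingness to forego" figure can be read as an equivalent-variation measure: the share δ of income whose loss, with the new goods available, leaves the consumer exactly as well off as having full income without them. Schematically (a generic formulation, not necessarily the authors' exact specification):

```latex
u\big((1-\delta)\,y;\ \text{new goods available}\big)
= u\big(y;\ \text{new goods unavailable}\big),
```

so the claim above is that δ ≥ 0.10 by 1800 for sugar and tea alone.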
Abstract:
Sparus aurata larvae reared under controlled water-temperature conditions during the first 24 days after hatching displayed a linear relationship between age (t) and standard length (SL): SL = 2.68 + 0.19 t (r² = 0.911). Increments were laid down in the sagittae with daily periodicity starting on the day of hatching. Standard length (SL) and sagitta radius (OR) were correlated: SL(mm) = 2.65 + 0.012 OR(mm). The series of measurements of daily growth increment widths (DWI), food density and water temperature were analyzed by means of time series analysis. The DWI series were strongly autocorrelated: growth on any one day depended on growth on the previous day. Time series of water temperatures showed, as expected, a random pattern of variation, while food consumed daily was a function of food consumed on the two previous days. The DWI and food density series were positively correlated at lags 1 and 2. The results provided evidence of the importance of food intake for sagitta growth when temperature is optimal (20ºC). Sagitta growth was correlated with growth on the previous day, so this should be taken into account when fish growth is derived from sagitta growth rates.
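The lag-1 and lag-2 relationships reported above come from lagged cross-correlation analysis; a minimal sketch of that computation, with synthetic data standing in for the real series:

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic stand-ins: food density drives increment width 1-2 days later.
n = 120
food = rng.normal(size=n)
dwi = 0.6 * np.roll(food, 1) + 0.3 * np.roll(food, 2) \
      + rng.normal(scale=0.5, size=n)
dwi[:2] = 0.0  # discard values wrapped around by np.roll

def lagged_corr(x, y, lag):
    """Correlation between x[t - lag] and y[t]."""
    return np.corrcoef(x[:-lag], y[lag:])[0, 1]

for lag in (1, 2):
    print(f"lag {lag}: r = {lagged_corr(food, dwi, lag):.2f}")
```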
Abstract:
The enhanced flow in carbon nanotubes is explained using a mathematical model that includes a depletion layer with reduced viscosity near the wall. In the limit of large tubes the model predicts no noticeable enhancement. For smaller tubes the model predicts enhancement that increases as the radius decreases. An analogy between the reduced-viscosity and slip-length models shows that the term 'slip-length' is misleading and that, on surfaces which are smooth at the nanoscale, it may be thought of as a length scale associated with the size of the depletion region and the viscosity ratio. The model therefore provides a physical interpretation of the classical Navier slip condition and explains why 'slip-lengths' may be greater than the tube radius.
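The two-viscosity model described above has a closed form. For a tube of radius R with a depletion layer of thickness δ and viscosity μ_d < μ near the wall, solving Poiseuille flow in the two concentric regions (with velocity and shear stress continuous at the interface) gives a flow enhancement over the uniform-viscosity value of (a standard result for concentric two-fluid pipe flow, consistent with the abstract):

```latex
\varepsilon = \frac{Q}{Q_{\text{Poiseuille}}}
            = 1 + \left(\frac{\mu}{\mu_d} - 1\right)
              \left[1 - \left(1 - \frac{\delta}{R}\right)^{4}\right],
```

so ε → 1 when R ≫ δ (no noticeable enhancement in large tubes) and grows rapidly as R shrinks toward δ, matching the limits stated above.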
Abstract:
This work extends the Java implementation of data structures begun by Esteve Mariné, using his basic design. Specifically, the following structures were programmed: a) disjoint sets, using linked-list and tree-based algorithms; b) heaps, with binary, binomial and Fibonacci algorithms; and c) search trees based on the red-black binary tree algorithm, which complements the two existing ones based on chaining and AVL algorithms. To examine the evolution of the structures, an interactive graphical visualizer was built that allows the user to perform the basic operations of each structure. With this environment it is possible to save the structures, replay them, and undo and redo the operations performed on them. Finally, the work contributes a methodology, with graphical visualization, for the comparative evaluation of the implemented algorithms, which allows evaluation parameters to be modified, such as the number of elements to be processed, the algorithms to be compared and the number of repetitions. The resulting data can be exported for later analysis.
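As a flavor of the tree-based disjoint-set structure mentioned in a) (shown here as a compact Python sketch rather than the project's Java code), union by rank with path compression:

```python
class DisjointSets:
    """Union-find with path compression and union by rank."""

    def __init__(self, n):
        self.parent = list(range(n))
        self.rank = [0] * n

    def find(self, x):
        # Path compression: point every visited node directly at the root.
        if self.parent[x] != x:
            self.parent[x] = self.find(self.parent[x])
        return self.parent[x]

    def union(self, x, y):
        rx, ry = self.find(x), self.find(y)
        if rx == ry:
            return
        if self.rank[rx] < self.rank[ry]:   # attach shorter tree under taller
            rx, ry = ry, rx
        self.parent[ry] = rx
        if self.rank[rx] == self.rank[ry]:
            self.rank[rx] += 1

ds = DisjointSets(5)
ds.union(0, 1)
ds.union(1, 2)
print(ds.find(0) == ds.find(2))  # True: 0, 1, 2 are now in the same set
```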
Abstract:
In most psychological tests and questionnaires, a test score is obtained by taking the sum of the item scores. In virtually all cases where the test or questionnaire contains multidimensional forced-choice items, this traditional scoring method is also applied. We argue that the summation of scores obtained with multidimensional forced-choice items produces uninterpretable test scores. Therefore, we propose three alternative scoring methods: a weak and a strict rank preserving scoring method, which both allow an ordinal interpretation of test scores; and a ratio preserving scoring method, which allows a proportional interpretation of test scores. Each proposed scoring method yields an index for each respondent indicating the degree to which the response pattern is inconsistent. Analysis of real data showed that with respect to rank preservation, the weak and strict rank preserving methods resulted in lower inconsistency indices than the traditional scoring method; with respect to ratio preservation, the ratio preserving scoring method resulted in lower inconsistency indices than the traditional scoring method.
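The core problem with summing forced-choice scores can be seen in two lines: when each block is fully ranked, every respondent's grand total is identical, so totals carry no information about the respondent (a schematic demo of this ipsativity property, not the paper's proposed scoring methods):

```python
# Three fully ranked forced-choice blocks; ranks (1..3) serve as item scores.
# Whatever ranking a respondent picks, each block contributes 1 + 2 + 3 = 6.
responses = [(1, 2, 3), (3, 1, 2), (2, 3, 1)]   # one respondent, three blocks
total = sum(sum(block) for block in responses)
print(total)  # 18 for every possible response pattern: totals are uninformative
```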
Abstract:
Alfréd Rényi, in a 1962 paper, "A new approach to the theory of Engel's series", proposed a problem related to the growth of the elements of an Engel's series. In this paper, we reformulate and solve Rényi's problem for both Engel's series and Pierce expansions.
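For concreteness, both expansions admit a short greedy algorithm: for x in (0, 1], the Engel series takes a_k = ⌈1/u_k⌉ with u_{k+1} = a_k u_k − 1, and the Pierce expansion takes a_k = ⌊1/u_k⌋ with u_{k+1} = 1 − a_k u_k. A minimal sketch using exact rational arithmetic:

```python
from fractions import Fraction
from math import ceil, floor

def engel(x, max_terms=10):
    """Engel series: x = 1/a1 + 1/(a1*a2) + ..., with a1 <= a2 <= ..."""
    u, digits = Fraction(x), []
    while u > 0 and len(digits) < max_terms:
        a = ceil(1 / u)
        digits.append(a)
        u = a * u - 1
    return digits

def pierce(x, max_terms=10):
    """Pierce expansion: x = 1/a1 - 1/(a1*a2) + ..., with a1 < a2 < ..."""
    u, digits = Fraction(x), []
    while u > 0 and len(digits) < max_terms:
        a = floor(1 / u)
        digits.append(a)
        u = 1 - a * u
    return digits

print(engel(Fraction(3, 4)))   # [2, 2]  since 3/4 = 1/2 + 1/(2*2)
print(pierce(Fraction(3, 4)))  # [1, 4]  since 3/4 = 1/1 - 1/(1*4)
```

Rényi's problem concerns how fast the digits a_k grow; for rationals both loops terminate, while irrationals yield infinite digit sequences (hence the max_terms guard).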