396 results for Sòls -- Models matemàtics
Resumo:
Quantitative or algorithmic trading is the automation of investment decisions, obeying fixed or dynamic sets of rules to determine trading orders. It has grown to account for up to 70% of the trading volume in some of the biggest financial markets, such as the New York Stock Exchange (NYSE). However, there is not a significant amount of academic literature devoted to it, due to the private nature of investment banks and hedge funds. This project aims to review the literature and discuss the available models in a subject where publications are scarce and infrequent. We review the basic and fundamental mathematical concepts needed for modeling financial markets, such as stochastic processes, stochastic integration, and basic models for price and spread dynamics necessary for building quantitative strategies. We also contrast these models with real market data, sampled at one-minute frequency, from the Dow Jones Industrial Average (DJIA). Quantitative strategies try to exploit two types of behavior: trend following or mean reversion. The former is grouped in the so-called technical models and the latter in so-called pairs trading. Technical models have been discarded by financial theoreticians, but we show that they can be properly cast into a well-defined scientific predictor if the signal they generate passes the test of being a Markov time. That is, we can tell whether the signal has occurred or not by examining the information up to the current time; or, more technically, if the event is F_t-measurable. On the other hand, the concept of pairs trading, or market-neutral strategy, is fairly simple. However, it can be cast in a variety of mathematical models, ranging from a method based on a simple Euclidean distance, to a co-integration framework, to stochastic differential equations such as the well-known Ornstein-Uhlenbeck mean-reverting SDE and its variations.
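As a brief illustration of the mean-reversion dynamics the abstract refers to (a sketch of ours, not code from the thesis, which was written in MATLAB), the Ornstein-Uhlenbeck spread dX_t = κ(μ − X_t)dt + σdW_t can be simulated with a simple Euler-Maruyama scheme; all parameter values below are illustrative assumptions:

```python
import numpy as np

def simulate_ou(x0, kappa, mu, sigma, dt, n_steps, rng):
    """Euler-Maruyama discretization of dX = kappa*(mu - X)dt + sigma*dW."""
    x = np.empty(n_steps + 1)
    x[0] = x0
    for t in range(n_steps):
        dw = rng.normal(0.0, np.sqrt(dt))  # Brownian increment
        x[t + 1] = x[t] + kappa * (mu - x[t]) * dt + sigma * dw
    return x

rng = np.random.default_rng(0)
# Illustrative parameters: daily steps over ~10 years of trading days.
path = simulate_ou(x0=1.0, kappa=5.0, mu=0.0, sigma=0.1,
                   dt=1 / 252, n_steps=2520, rng=rng)
# With strong mean reversion, the spread forgets x0 and hovers near mu;
# a pairs-trading rule would trade deviations of `path` from mu.
print(path[252:].mean())
```

In a pairs-trading context, X_t would be the (possibly co-integrated) spread between two assets, and trading signals fire when the simulated deviation from μ exceeds a calibrated threshold.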
A model for forecasting any economic or financial magnitude could be properly defined with scientific rigor and yet lack any economic value, making it useless from a practical point of view. This is why this project could not be complete without a backtesting of the mentioned strategies. Conducting a useful and realistic backtest is by no means a trivial exercise, since the "laws" that govern financial markets are constantly evolving in time. This is why we emphasize the calibration of the strategies' parameters to adapt to the prevailing market conditions. We find that the parameters of technical models are more volatile than their counterparts from market-neutral strategies, and that calibration must be done at high sampling frequency to constantly track the current market situation. As a whole, the goal of this project is to provide an overview of a quantitative approach to investment, reviewing basic strategies and illustrating them by means of a backtest with real financial market data. The sources of the data used in this project are Bloomberg for intraday time series and Yahoo! for daily prices. All numeric computations and graphics used and shown in this project were implemented in MATLAB® from scratch as part of this thesis. No other mathematical or statistical software was used.
Resumo:
This paper investigates the role of learning by private agents and the central bank (two-sided learning) in a New Keynesian framework in which both sides of the economy have asymmetric and imperfect knowledge about the true data-generating process. We assume that all agents employ the data that they observe (which may differ across sets of agents) to form beliefs about unknown aspects of the true model of the economy, use their beliefs to decide on actions, and revise these beliefs through a statistical learning algorithm as new information becomes available. We study the short-run dynamics of our model and derive its policy recommendations, particularly with respect to central bank communications. We demonstrate that two-sided learning can generate substantial increases in volatility and persistence, and can alter the behavior of the model's variables in a significant way. Our simulations do not converge to a symmetric rational expectations equilibrium, and we highlight one source that invalidates the convergence results of Marcet and Sargent (1989). Finally, we identify a novel aspect of central bank communication in models of learning: communication can be harmful if the central bank's model is substantially mis-specified.
Resumo:
The paper discusses the maintenance challenges of organisations with a large number of devices and proposes the use of probabilistic models to assist monitoring and maintenance planning. The proposal assumes that instruments are connected and report the features relevant for monitoring. It also requires enough historical records with diagnosed breakdowns to make the probabilistic models reliable and useful for the predictive maintenance strategies based on them. Regular Markov models based on estimated failure and repair rates are proposed to calculate the availability of the instruments, and Dynamic Bayesian Networks are proposed to model the cause-effect relationships that trigger predictive maintenance services, based on the influence between observed features and previously documented diagnostics.
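The availability calculation the abstract alludes to can be sketched with the textbook two-state (up/down) Markov result: with constant failure rate λ and repair rate μ, the steady-state availability is A = μ/(λ + μ). The function and the example rates below are illustrative assumptions, not the paper's code:

```python
def availability(lam: float, mu: float) -> float:
    """Steady-state availability of a two-state Markov model.

    lam: failure rate (transitions up -> down, per unit time)
    mu:  repair rate (transitions down -> up, per unit time)
    """
    if lam < 0 or mu <= 0:
        raise ValueError("rates must be non-negative, with mu > 0")
    return mu / (lam + mu)

# Illustrative rates: one failure per 1000 h (lam = 0.001/h),
# mean repair time of 10 h (mu = 0.1/h).
print(availability(0.001, 0.1))  # about 0.9901
```

The same two rates also parameterize the transition matrix of the Markov chain; the Dynamic Bayesian Network layer mentioned in the abstract would sit on top, linking observed features to diagnosed failure modes.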
Resumo:
Our work is concerned with user modelling in open environments. Our proposal is in line with contributions to the advances in user modelling in open environments made possible by Agent Technology, in what has been called the Smart User Model (SUM). Our research contains a holistic study of user modelling across several research areas related to users. We have developed a conceptualization of user modelling by means of examples from a broad range of research areas, with the aim of improving our understanding of user modelling and its role in the next generation of open and distributed service environments. This report is organized as follows. In chapter 1 we introduce our motivation and objectives. Chapters 2, 3, 4 and 5 provide the state of the art on user modelling: in chapter 2 we give the main definitions of the elements described in the report; in chapter 3 we present a historical perspective on user models; in chapter 4 we review user models from the perspective of different research areas, with special emphasis on the give-and-take relationship between Agent Technology and user modelling; and in chapter 5 we describe the main challenges that, from our point of view, need to be tackled by researchers wanting to contribute to advances in user modelling. The study of the state of the art is followed by exploratory work in chapter 6, where we define a SUM and a methodology for working with it, and present some case studies to illustrate the methodology. Finally, we present the thesis proposal to continue the work, together with its corresponding work schedule and timeline.
Resumo:
Customer satisfaction and retention are key issues for organizations in today's competitive marketplace. As such, much research and revenue has been invested in developing accurate ways of assessing consumer satisfaction at both the macro (national) and micro (organizational) level, facilitating comparisons in performance both within and between industries. Since the instigation of the national customer satisfaction indices (CSI), partial least squares (PLS) has been used to estimate the CSI models in preference to structural equation models (SEM), because it does not rely on strict assumptions about the data. However, this choice was based upon some misconceptions about the use of SEMs and does not take into consideration more recent advances in SEM, including estimation methods that are robust to non-normality and missing data. In this paper, the SEM and PLS approaches are compared by evaluating perceptions of the Isle of Man Post Office's products and customer service using a CSI format. The new robust SEM procedures were found to be advantageous over PLS. Product quality was found to be the only driver of customer satisfaction, while image and satisfaction were the only predictors of loyalty, thus arguing for the specificity of postal services.
Resumo:
Calculating explicit closed-form solutions of Cournot models where firms have private information about their costs is, in general, very cumbersome. Most authors therefore consider linear demands and constant marginal costs. However, within this framework, the nonnegativity constraint on prices (and quantities) has been ignored or not properly dealt with, and the correct calculation of all Bayesian Nash equilibria is more complicated than expected. Moreover, multiple symmetric and interior Bayesian equilibria may exist for an open set of parameters. The reason for this is that linear demand is not really linear, since there is a kink at zero price: the general ''linear'' inverse demand function is P(Q) = max{a - bQ, 0} rather than P(Q) = a - bQ.
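The kink the abstract points to is easy to state in code. This tiny sketch (ours, not the paper's) contrasts the true inverse demand P(Q) = max{a − bQ, 0} with the naive linear form, which turns negative past Q = a/b:

```python
def inverse_demand(q: float, a: float, b: float) -> float:
    """Kinked 'linear' inverse demand: price is capped below at zero."""
    return max(a - b * q, 0.0)

def naive_inverse_demand(q: float, a: float, b: float) -> float:
    """Naive linear form, ignoring the nonnegativity constraint."""
    return a - b * q

# With a = 10, b = 1, the kink sits at Q = a/b = 10.
print(inverse_demand(5.0, 10.0, 1.0))        # 5.0 (below the kink, forms agree)
print(inverse_demand(15.0, 10.0, 1.0))       # 0.0
print(naive_inverse_demand(15.0, 10.0, 1.0)) # -5.0 (economically meaningless)
```

Equilibrium computations that treat demand as globally linear implicitly use the naive form on the region Q > a/b, which is exactly where spurious or missed Bayesian Nash equilibria can arise.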
Resumo:
We present a detailed analytical and numerical study of the avalanche distributions of the continuous damage fiber bundle model (CDFBM). Linearly elastic fibers undergo a series of partial failure events which give rise to a gradual degradation of their stiffness. We show that the model reproduces a wide range of mechanical behaviors. We find that macroscopic hardening and plastic responses are characterized by avalanche distributions which exhibit an algebraic decay with exponents between 5/2 and 2, different from those observed in mean-field fiber bundle models. We also derive analytically the phase diagram of a family of CDFBMs which covers a large variety of potential avalanche size distributions. Our results provide a unified view of the statistics of breaking avalanches in fiber bundle models.
Resumo:
This project proposes an application for the automatic calibration of P-system models. To this end, we first study P-system models and the procedure researchers follow to develop this type of model. An initial serial solution to the problem is developed and its weak points are analysed. A parallel version is then proposed that significantly improves execution time while maintaining high efficiency and scalability.
Resumo:
In this project, we aim to create an application that allows fast and efficient processing of the results produced by PlinguaCore, a simulation tool for natural ecosystems. The goal of this processing is twofold. First, to design an API that efficiently processes the large amount of data generated by the PlinguaCore ecosystem simulator. Second, to make this API easy to integrate into other applications, both for data processing and for model calibration.
Resumo:
The radiation distribution function used by Domínguez and Jou [Phys. Rev. E 51, 158 (1995)] has recently been modified by Domínguez-Cascante and Faraudo [Phys. Rev. E 54, 6933 (1996)]. However, in these studies neither distribution was written in terms of directly measurable quantities. Here a solution to this problem is presented, and we also propose an experiment that may make it possible to determine the distribution function of nonequilibrium radiation experimentally. The results derived do not depend on a specific distribution function for the matter content of the system.
Resumo:
This project is based on gathering information about the soils of a representative area of the comarca of La Selva, where soils with different uses and different geological origins are found. The aim is to characterise the physico-chemical properties of these soils and the contribution of the different soil uses and geological origins to carbon dioxide emissions. The agrological capability classes and the carbon mineralisation coefficient are also evaluated.
Resumo:
The project consists of evaluating the soils of the municipality of Sant Gregori and their subsequent classification by agrological capability. This project was carried out to alleviate the lack of soil information available at the local level.
Resumo:
Study carried out during a stay at the Stanford University School of Medicine, Division of Radiation Oncology, United States, between 2010 and 2012. During the two years of the postdoctoral fellowship I worked on two different projects. First, continuing previous studies by the group, we wanted to investigate the cause of the differences in hypoxia levels that we had observed in lung cancer models. Our hypothesis was that these differences were due to the functionality of the vasculature. We used two preclinical models: one in which tumours formed spontaneously in the lungs, and another in which we injected the cells subcutaneously. We used techniques such as dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) and the Hoechst 33342 perfusion assay, both of which showed that the vasculature of spontaneous tumours was far more functional than that of subcutaneous tumours. From this study we can conclude that the differences in hypoxia levels across lung cancer tumour models may be due to variation in the formation and functionality of the vasculature. The selection of preclinical models is therefore essential, both for studies of hypoxia and angiogenesis and for therapies targeting these phenomena. The other project I worked on concerns radiotherapy and its possible role in promoting tumour self-seeding by circulating tumour cells (CTCs), an effect that has been described in some preclinical tumour models. To carry out our studies, we used a mouse breast cancer tumour line, either permanently labelled with the Photinus pyralis gene or unlabelled, and performed in vitro and in vivo studies. Both showed that tumour irradiation promotes cell invasion and tumour self-seeding by CTCs.
This finding should be considered within the context of clinical radiotherapy in order to achieve the best treatment for patients with elevated CTC levels.
Resumo:
The use of Cannabis sativa preparations as recreational drugs can be traced back to the earliest civilizations. However, animal models of cannabinoid addiction allowing the exploration of the neural correlates of cannabinoid abuse have been developed only recently. We review these models and the role of the CB1 cannabinoid receptor, the main target of natural cannabinoids, and its interaction with opioid and dopamine transmission in reward circuits. Extensive reviews on the molecular basis of cannabinoid action are available elsewhere (Piomelli et al., 2000; Schlicker and Kathmann, 2001).