939 results for Return-based pricing kernel
Abstract:
This paper analyses the robustness of Least-Squares Monte Carlo, a technique recently proposed by Longstaff and Schwartz (2001) for pricing American options. This method is based on least-squares regressions in which the explanatory variables are certain polynomial functions. We analyze the impact of different basis functions on option prices. Numerical results for American put options provide evidence that a) this approach is very robust to the choice of different alternative polynomials and b) few basis functions are required. However, these conclusions are not reached when analyzing more complex derivatives.
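For orientation, here is a minimal Python sketch of the least-squares Monte Carlo idea for an American put, showing where the choice of polynomial basis enters (the regression of discounted continuation values on functions of the stock price). The parameter values, the monomial basis, and the path counts are illustrative assumptions, not the paper's experimental design.

```python
# Minimal sketch of Longstaff-Schwartz least-squares Monte Carlo for an
# American put, illustrating how the polynomial basis enters the regression.
# All parameters (S0, K, r, sigma, T) are illustrative, not taken from the paper.
import numpy as np

def lsm_american_put(S0=36.0, K=40.0, r=0.06, sigma=0.2, T=1.0,
                     steps=50, paths=20_000, degree=3, seed=0):
    rng = np.random.default_rng(seed)
    dt = T / steps
    disc = np.exp(-r * dt)

    # Simulate geometric Brownian motion paths.
    z = rng.standard_normal((paths, steps))
    increments = (r - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * z
    S = S0 * np.exp(np.cumsum(increments, axis=1))
    S = np.hstack([np.full((paths, 1), S0), S])

    # Backward induction: cash flows initialised at maturity.
    cash = np.maximum(K - S[:, -1], 0.0)
    for t in range(steps - 1, 0, -1):
        cash *= disc
        itm = K - S[:, t] > 0            # regress only on in-the-money paths
        if itm.sum() < degree + 1:
            continue
        # Plain monomial basis; swap in Laguerre/Hermite etc. to test robustness.
        coeffs = np.polyfit(S[itm, t], cash[itm], degree)
        continuation = np.polyval(coeffs, S[itm, t])
        exercise = K - S[itm, t]
        stop = exercise > continuation   # exercise where immediate payoff wins
        idx = np.where(itm)[0][stop]
        cash[idx] = exercise[stop]
    return disc * cash.mean()

print(round(lsm_american_put(), 3))
```

Changing the `degree` argument or the basis family in the regression step is the experiment the abstract refers to.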
Abstract:
This paper surveys asset allocation methods that extend the traditional approach. An important feature of the traditional approach is that it measures the risk and return tradeoff in terms of the mean and variance of final wealth. However, there are also other important features that are not always made explicit in terms of the investor's wealth, information, and horizon: the investor makes a single portfolio choice based only on the mean and variance of her final financial wealth, and she knows the relevant parameters in that computation. First, the paper describes traditional portfolio choice based on four basic assumptions, while the remaining sections extend those assumptions. Each section describes the corresponding equilibrium implications in terms of portfolio advice and asset pricing.
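As a reference point, the traditional mean-variance problem mentioned above can be written in its standard textbook form (a generic statement, not a formula reproduced from the paper):

\[
\max_{w}\; w^{\top}\mu \;-\; \frac{\gamma}{2}\, w^{\top}\Sigma\, w
\qquad\Longrightarrow\qquad
w^{*} \;=\; \frac{1}{\gamma}\,\Sigma^{-1}\mu,
\]

where $\mu$ and $\Sigma$ are the mean vector and covariance matrix of asset returns, and $\gamma$ is the investor's risk aversion; the extensions surveyed in the paper relax the assumptions behind this single-period, parameters-known choice.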
Abstract:
We study a retail benchmarking approach to determine access prices for interconnected networks. Instead of considering fixed access charges as in the existing literature, we study access pricing rules that determine the access price that network i pays to network j as a linear function of the marginal costs and the retail prices set by both networks. In the case of competition in linear prices, we show that there is a unique linear rule that implements the Ramsey outcome as the unique equilibrium, independently of the underlying demand conditions. In the case of competition in two-part tariffs, we consider a class of access pricing rules, similar to the optimal one under linear prices but based on average retail prices. We show that firms choose the variable price equal to the marginal cost under this class of rules. Therefore, the regulator (or the competition authority) can choose one among the rules to pursue additional objectives such as consumer surplus, network coverage or investment: for instance, we show that both static and dynamic efficiency can be achieved at the same time.
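To fix ideas, a retail benchmarking rule of the kind described above can be pictured as a generic linear form (an illustrative sketch only; the paper derives a specific rule and coefficients that are not reproduced here):

\[
a_{ij} \;=\; \lambda_0 \;+\; \lambda_1 c_i \;+\; \lambda_2 c_j \;+\; \lambda_3 p_i \;+\; \lambda_4 p_j,
\]

where $a_{ij}$ is the access price that network $i$ pays to network $j$, $c_i$ and $c_j$ are the marginal costs, $p_i$ and $p_j$ are the retail prices set by the two networks, and the $\lambda$'s are the coefficients fixed by the rule.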
Abstract:
We continue the development of a method for the selection of a bandwidth or a number of design parameters in density estimation. We provide explicit non-asymptotic density-free inequalities that relate the $L_1$ error of the selected estimate with that of the best possible estimate, and study in particular the connection between the richness of the class of density estimates and the performance bound. For example, our method allows one to pick the bandwidth and kernel order in the kernel estimate simultaneously and still assure that for {\it all densities}, the $L_1$ error of the corresponding kernel estimate is not larger than about three times the error of the estimate with the optimal smoothing factor and kernel plus a constant times $\sqrt{\log n/n}$, where $n$ is the sample size, and the constant only depends on the complexity of the family of kernels used in the estimate. Further applications include multivariate kernel estimates, transformed kernel estimates, and variable kernel estimates.
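The following Python sketch illustrates the kind of selection problem the paper addresses: a Gaussian kernel estimate is chosen from a bandwidth grid by minimising a crude empirical $L_1$ criterion based on data splitting. This is only a hedged illustration of the task; it is not the paper's combinatorial, density-free procedure and carries none of its guarantees.

```python
# Hedged sketch of data-splitting bandwidth selection for a Gaussian kernel
# density estimate, minimising an empirical L1-type criterion on a grid.
import numpy as np

def gaussian_kde(x_eval, data, h):
    # Kernel density estimate evaluated on x_eval with bandwidth h.
    u = (x_eval[:, None] - data[None, :]) / h
    return np.exp(-0.5 * u**2).sum(axis=1) / (len(data) * h * np.sqrt(2 * np.pi))

rng = np.random.default_rng(1)
sample = rng.normal(0.0, 1.0, size=500)
half = len(sample) // 2
train, hold = sample[:half], sample[half:]

grid = np.linspace(sample.min() - 1, sample.max() + 1, 400)
dx = grid[1] - grid[0]

best_h, best_err = None, np.inf
for h in np.geomspace(0.05, 1.0, 20):
    # L1 distance between estimates built from the two halves, used as a crude
    # stand-in for the unknown L1 error against the true density.
    err = np.abs(gaussian_kde(grid, train, h) - gaussian_kde(grid, hold, h)).sum() * dx
    if err < best_err:
        best_h, best_err = h, err

print(f"selected bandwidth: {best_h:.3f}")
```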
Abstract:
We introduce a new dynamic trading strategy based on the systematic mispricing of U.S. companies sponsoring Defined Benefit pension plans. The portfolio produces an average return of 1.51% monthly between 1989 and 2004, with a Sharpe ratio of 0.26. The returns of the strategy are not explained by those of primary assets, nor are they related to those of benchmarks in the alternative investments industry. Hence, we are in the presence of a "pure alpha" strategy that can be ported into a large variety of portfolios to significantly enhance their performance.
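As a reminder of how the two figures quoted above are typically computed, here is a small Python sketch that evaluates a monthly Sharpe ratio and a regression ("pure") alpha against a benchmark. The return series, risk-free rate, and benchmark are synthetic placeholders, not the strategy's data.

```python
# Sketch of the two summary checks quoted in the abstract: the monthly Sharpe
# ratio of a return series and a regression alpha against benchmark returns.
import numpy as np

rng = np.random.default_rng(0)
strategy = 0.0151 + 0.058 * rng.standard_normal(192)   # placeholder: ~1.51% mean monthly return
benchmark = 0.008 + 0.045 * rng.standard_normal(192)
rf = 0.003                                              # assumed monthly risk-free rate

sharpe = (strategy.mean() - rf) / strategy.std(ddof=1)

# "Pure alpha": intercept of an OLS regression of strategy excess returns
# on benchmark excess returns.
X = np.column_stack([np.ones_like(benchmark), benchmark - rf])
alpha, beta = np.linalg.lstsq(X, strategy - rf, rcond=None)[0]

print(f"monthly Sharpe: {sharpe:.2f}, alpha: {alpha:.4f}, beta: {beta:.2f}")
```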
Abstract:
Introduction.- Knowledge of predictors of an unfavourable outcome, e.g. non-return to work after an injury, makes it possible to identify patients at risk and to target interventions at modifiable predictors. It has recently been shown that INTERMED, a tool measuring biopsychosocial complexity in four domains (biological, psychological, social and care, with a total score between 0 and 60 points), can be useful in this context. The aim of this study was to set up a predictive model for non-return to work using INTERMED in patients undergoing vocational rehabilitation after orthopaedic injury. Patients and methods.- In this longitudinal prospective study, the cohort consisted of 2156 consecutively included inpatients with orthopaedic trauma attending a rehabilitation hospital after a work-, traffic- or sport-related injury. Two years after discharge, a questionnaire regarding return to work was sent (1502 questionnaires were returned). In addition to INTERMED, 18 predictors known at the start of rehabilitation were selected based on previous research. A multivariable logistic regression was performed. Results.- In the multivariable model, non-return to work at 2 years was significantly predicted by the INTERMED: odds ratio (OR) 1.08 (95% confidence interval, CI [1.06; 1.11]) for a one-point increase in the scale; by qualified work status before the injury, OR = 0.74, CI (0.54; 0.99); by using French as the preferred language, OR = 0.60, CI (0.45; 0.80); by upper-extremity injury, OR = 1.37, CI (1.03; 1.81); by higher education (> 9 years), OR = 0.74, CI (0.55; 1.00); and by a 10-year increase in age, OR = 1.15, CI (1.02; 1.29). The area under the receiver operating characteristic (ROC) curve was 0.733 for the full model (INTERMED plus 18 variables). Discussion.- These results confirm that the total INTERMED score is a significant predictor of return to work. The full model with 18 predictors combined with the total INTERMED score has good predictive value. However, the number of variables to measure (19) is high for use as a screening tool in the clinic.
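For readers unfamiliar with the modelling step, the sketch below shows the generic workflow of such a multivariable logistic regression in Python (statsmodels), reporting odds ratios with 95% confidence intervals and the ROC AUC. The data, effect sizes, and variable names are synthetic placeholders, not the study's cohort.

```python
# Hedged sketch of a multivariable logistic model: fit, then report odds
# ratios with 95% CIs and the ROC AUC. Synthetic data only.
import numpy as np
import pandas as pd
import statsmodels.api as sm
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(42)
n = 1500
df = pd.DataFrame({
    "intermed": rng.integers(5, 45, n),          # INTERMED total score (0-60)
    "age_decades": rng.uniform(2.0, 6.0, n),
    "upper_extremity": rng.integers(0, 2, n),
})
logit_true = -3.0 + 0.08 * df["intermed"] + 0.15 * df["age_decades"] + 0.3 * df["upper_extremity"]
df["no_return"] = rng.binomial(1, 1 / (1 + np.exp(-logit_true)))

X = sm.add_constant(df[["intermed", "age_decades", "upper_extremity"]])
res = sm.Logit(df["no_return"], X).fit(disp=0)

odds_ratios = np.exp(res.params)                 # OR per one-unit increase
ci = np.exp(res.conf_int())                      # 95% confidence intervals
auc = roc_auc_score(df["no_return"], res.predict(X))
print(pd.concat([odds_ratios, ci], axis=1).round(3))
print(f"AUC: {auc:.3f}")
```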
Abstract:
ABSTRACT: Research in empirical asset pricing has pointed out several anomalies both in the cross section and time series of asset prices, as well as in investors' portfolio choice. This dissertation aims to uncover the forces driving some of these "puzzling" asset pricing dynamics and portfolio decisions observed in financial markets. Throughout the dissertation I construct and study dynamic general equilibrium models of heterogeneous investors in the presence of frictions and evaluate quantitatively their implications for financial-market asset prices and portfolio choice. I also explore the potential roots of puzzles in international finance. Chapter 1 shows that, by jointly introducing endogenous no-default borrowing constraints and heterogeneous beliefs in a dynamic general-equilibrium economy, many empirical features of stock return volatility can be reproduced. While most of the research on stock return volatility is empirical, this paper provides a theoretical framework that is able to reproduce simultaneously the cross-sectional and time-series stylized facts concerning stock returns and their volatility. In contrast to the existing theoretical literature on stock return volatility, I do not impose persistence or regimes in any of the exogenous state variables or in preferences. Volatility clustering, asymmetry in the stock return-volatility relationship, and the pricing of multi-factor volatility components in the cross section all arise endogenously as a consequence of the feedback between the binding of no-default constraints and heterogeneous beliefs. Chapters 2 and 3 explore the implications of differences of opinion across investors in different countries for international asset pricing anomalies. Chapter 2 demonstrates that several international finance "puzzles" can be reproduced by a single risk factor which captures heterogeneous beliefs across international investors. These puzzles include: (i) home equity preference; (ii) the dependence of firm returns on local and foreign factors; (iii) the co-movement of returns and international capital flows; and (iv) abnormal returns around foreign firm cross-listing events in the local market. These are reproduced in a setup with symmetric information and in a perfectly integrated world with multiple countries and independent processes producing the same good. Chapter 3 shows that, by extending this framework to multiple goods and correlated production processes, the "forward premium puzzle" arises naturally as compensation for the heterogeneous expectations about the depreciation of the exchange rate held by international investors. Chapters 2 and 3 propose differences of opinion across international investors as a potential resolution of several international finance puzzles. In a globalized world where both capital and information flow freely across countries, this explanation seems more appealing than existing asymmetric-information or segmented-markets theories aiming to explain international finance puzzles.
Abstract:
Knowledge of the factors influencing water erosion is fundamental for choosing the best land use practices. Rainfall, expressed as rainfall erosivity, is one of the most important factors of water erosion. The objective of this study was to determine the rainfall erosivity and the return period of rainfall in the Coastal Plains region near Aracruz, a town in the state of Espírito Santo, Brazil, based on available data. Rainfall erosivity was calculated from historic rainfall data, collected from January 1998 to July 2004 at 5 min intervals by automatic weather stations of the Aracruz Cellulose S.A company. A linear regression with individual rainfall and erosivity data was fit to obtain an equation that allowed data extrapolation to calculate individual erosivity for a 30-year period. Based on these data, the annual average rainfall erosivity in Aracruz was 8,536 MJ mm ha-1 h-1 yr-1. Of the total annual rainfall erosivity, 85 % was observed in the most critical period, October to March. Erosive rains accounted for 38 % of the annual events causing erosion, although their runoff volume represented 88 % of the total. The return period of the annual average rainfall erosivity was estimated at 3.4 years.
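As an illustration of the return-period calculation referred to above, the Python sketch below ranks a series of annual erosivity values and applies the Weibull plotting position T = (N + 1)/m. The numbers are invented; the study's 30-year series is not reproduced.

```python
# Illustrative calculation of return periods for annual rainfall-erosivity
# values using the Weibull plotting position T = (N + 1) / m.
import numpy as np

annual_erosivity = np.array([6200., 9100., 7800., 11500., 8300., 10200.,
                             7200., 9800., 8900., 12100.])   # MJ mm ha-1 h-1 yr-1 (made up)
n = len(annual_erosivity)

order = np.argsort(annual_erosivity)[::-1]          # rank from largest to smallest
ranks = np.empty(n, dtype=int)
ranks[order] = np.arange(1, n + 1)

return_period = (n + 1) / ranks                     # years
for value, t in sorted(zip(annual_erosivity, return_period), reverse=True):
    print(f"erosivity {value:8.0f}  return period {t:4.1f} yr")
```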
Abstract:
Abstract: Following recent technological advances, digital image archives have experienced unprecedented qualitative and quantitative growth. Despite the enormous possibilities they offer, these advances raise new questions about how to process the masses of acquired data. This question is at the core of this Thesis: problems of processing digital information at very high spatial and/or spectral resolution are addressed using statistical learning approaches, namely kernel methods. The Thesis studies image classification problems, i.e. the categorization of pixels into a reduced number of classes reflecting the spectral and contextual properties of the objects they represent. The emphasis is placed on the efficiency of the algorithms as well as on their simplicity, so as to increase their potential for adoption by users. Moreover, the challenge of this Thesis is to remain close to the concrete problems of satellite image users without losing sight of the interest of the proposed methods for the machine learning community from which they originate. In this sense, the work is deliberately transdisciplinary, maintaining a strong link between the two fields in all the developments proposed. Four models are proposed: the first addresses the problem of high dimensionality and data redundancy with a model that optimizes classification performance by adapting to the particularities of the image. This is made possible by a ranking of the variables (the bands) that is optimized jointly with the base model: in this way, only the variables important for solving the problem are used by the classifier. The scarcity of labeled information, and the uncertainty about its relevance to the problem, motivate the next two models, based respectively on active learning and semi-supervised methods: the former improves the quality of a training set through direct interaction between the user and the machine, while the latter uses unlabeled pixels to improve the description of the available data and the robustness of the model. Finally, the last model considers the more theoretical question of structure among the outputs: integrating this source of information, never before considered in remote sensing, opens new research challenges. Advanced kernel methods for remote sensing image classification. Devis Tuia, Institut de Géomatique et d'Analyse du Risque, September 2009. Abstract: The technical developments in recent years have brought the quantity and quality of digital information to an unprecedented level, as enormous archives of satellite images are available to users. However, even if these advances open more and more possibilities in the use of digital imagery, they also raise several problems of storage and processing. The latter is considered in this Thesis: the processing of very high spatial and spectral resolution images is treated with approaches based on data-driven algorithms relying on kernel methods. In particular, the problem of image classification, i.e. the categorization of the image's pixels into a reduced number of classes reflecting spectral and contextual properties, is studied through the different models presented.
The emphasis is placed on algorithmic efficiency and the simplicity of the proposed approaches, to avoid overly complex models that users would not adopt. The major challenge of the Thesis is to remain close to concrete remote sensing problems without losing the methodological interest from the machine learning viewpoint: in this sense, this work aims at building a bridge between the machine learning and remote sensing communities, and all the proposed models have been developed keeping in mind the need for such a synergy. Four models are proposed: first, an adaptive model learning the relevant image features is proposed to solve the problem of high dimensionality and collinearity of the image features. This model automatically provides an accurate classifier and a ranking of the relevance of the individual features. The scarcity and unreliability of labeled information are the common root of the second and third models: when confronted with such problems, the user can either construct the labeled set iteratively by direct interaction with the machine or use the unlabeled data to increase the robustness and quality of the data description. Both solutions have been explored, resulting in two methodological contributions based respectively on active learning and semi-supervised learning. Finally, the more theoretical issue of structured outputs is considered in the last model, which, by integrating output similarity into the model, opens new challenges and opportunities for remote sensing image processing.
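To make the active-learning contribution more concrete, the Python sketch below runs a generic uncertainty-sampling loop with an SVM classifier: at each iteration the samples the model is least certain about are "labeled" and added to the training set. This is a textbook illustration of the idea, not the specific heuristics developed in the Thesis.

```python
# Hedged sketch of uncertainty-sampling active learning with an SVM:
# iteratively add the unlabeled samples the classifier is least sure about.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.svm import SVC

X, y = make_classification(n_samples=600, n_features=10, n_informative=6,
                           n_classes=3, random_state=0)
rng = np.random.default_rng(0)
labeled = list(rng.choice(len(X), size=20, replace=False))   # small initial training set
pool = [i for i in range(len(X)) if i not in labeled]

for iteration in range(10):
    clf = SVC(kernel="rbf", probability=True, random_state=0)
    clf.fit(X[labeled], y[labeled])

    # Uncertainty = 1 - maximum posterior probability over classes.
    proba = clf.predict_proba(X[pool])
    uncertainty = 1.0 - proba.max(axis=1)
    query = [pool[i] for i in np.argsort(uncertainty)[-5:]]   # 5 most uncertain samples

    labeled.extend(query)                                     # the "user" labels them
    pool = [i for i in pool if i not in query]

print(f"final training set size: {len(labeled)}")
```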
Abstract:
Significant progress has been made with regard to the quantitative integration of geophysical and hydrological data at the local scale. However, extending the corresponding approaches to the regional scale represents a major, and as yet largely unresolved, challenge. To address this problem, we have developed a downscaling procedure based on a non-linear Bayesian sequential simulation approach. The basic objective of this algorithm is to estimate the value of the sparsely sampled hydraulic conductivity at non-sampled locations based on its relation to the electrical conductivity, which is available throughout the model space. The in situ relationship between the hydraulic and electrical conductivities is described through a non-parametric multivariate kernel density function. The method is then applied to the stochastic integration of low-resolution, regional-scale electrical resistivity tomography (ERT) data in combination with high-resolution, local-scale downhole measurements of the hydraulic and electrical conductivities. Finally, the overall viability of this downscaling approach is tested and verified by performing and comparing flow and transport simulations through the original and the downscaled hydraulic conductivity fields. Our results indicate that the proposed procedure does indeed allow for remarkably faithful estimates of the regional-scale hydraulic conductivity structure and correspondingly reliable predictions of the transport characteristics over relatively long distances.
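The statistical ingredient at the core of such a procedure, i.e. a non-parametric kernel description of the hydraulic-electrical conductivity relationship used to draw hydraulic conductivity where only electrical conductivity is known, can be sketched in Python as follows. The collocated data are synthetic and the sequential-simulation and ERT-integration steps are omitted, so this is only a hedged illustration of the ingredient, not the full algorithm.

```python
# Hedged illustration: kernel estimate of the joint density of log hydraulic
# conductivity (log K) and electrical conductivity (sigma), then conditional
# draws of log K at a location where only sigma is known. Synthetic data.
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(3)
log_k = rng.normal(-4.0, 0.8, 300)                    # collocated "borehole" samples
sigma = 0.015 - 0.002 * log_k + rng.normal(0, 0.001, 300)

joint = gaussian_kde(np.vstack([log_k, sigma]))       # kernel estimate of p(log K, sigma)

def sample_log_k_given_sigma(sigma_obs, n_draws=1, grid=np.linspace(-7, -1, 400)):
    # Evaluate the joint density along the log K grid at the observed sigma and
    # sample log K proportionally to the (unnormalised) conditional density.
    weights = joint(np.vstack([grid, np.full_like(grid, sigma_obs)]))
    weights /= weights.sum()
    return rng.choice(grid, size=n_draws, p=weights)

print(sample_log_k_given_sigma(0.022, n_draws=5))
```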
Abstract:
Newborn neurons are generated in the adult hippocampus from a pool of self-renewing stem cells located in the subgranular zone (SGZ) of the dentate gyrus. Their activation, proliferation, and maturation depend on a host of environmental and cellular factors but, until recently, the contribution of local neuronal circuitry to this process was relatively unknown. In their recent publication, Song and colleagues have uncovered a novel circuit-based mechanism by which release of the neurotransmitter, γ-aminobutyric acid (GABA), from parvalbumin-expressing (PV) interneurons, can hold radial glia-like (RGL) stem cells of the adult SGZ in a quiescent state. This tonic GABAergic signal, dependent upon the activation of γ(2) subunit-containing GABA(A) receptors of RGL stem cells, can thus prevent their proliferation and subsequent maturation or return them to quiescence if previously activated. PV interneurons are thus capable of suppressing neurogenesis during periods of high network activity and facilitating neurogenesis when network activity is low.
Abstract:
Introduction This dissertation consists of three essays in equilibrium asset pricing. The first chapter studies the asset pricing implications of a general equilibrium model in which real investment is reversible at a cost. Firms face higher costs in contracting than in expanding their capital stock and decide to invest when their productive capital is scarce relative to the overall capital of the economy. Positive shocks to the capital of the firm increase the size of the firm and reduce the value of growth options. As a result, the firm is burdened with more unproductive capital and its value declines relative to the accumulated capital. The optimal consumption policy alters the optimal allocation of resources and affects the firm's value, generating mean-reverting dynamics for M/B ratios. The model (1) captures the convergence of price-to-book ratios (negative for growth stocks and positive for value stocks), i.e. firm migration, (2) generates deviations from the classic CAPM in line with the cross-sectional variation in expected stock returns, and (3) generates a non-monotone relationship between Tobin's q and conditional volatility consistent with the empirical evidence. The second chapter considers a standard portfolio-choice problem with transaction costs and mean reversion in expected returns. In the presence of transaction costs, no matter how small, arbitrage activity does not necessarily render all riskless rates of return equal. When two such rates follow stochastic processes, it is not optimal to immediately arbitrage out any discrepancy that arises between them. The reason is that immediate arbitrage would induce a definite expenditure of transaction costs whereas, without arbitrage intervention, there exists some, perhaps sufficient, probability that these two interest rates will come back together without any costs having been incurred. Hence, one can surmise that in equilibrium the financial market will permit the coexistence of two riskless rates that are not equal to each other. For analogous reasons, randomly fluctuating expected rates of return on risky assets will be allowed to differ even after correction for risk, leading to important violations of the Capital Asset Pricing Model. The combination of randomness in expected rates of return and proportional transaction costs is a serious blow to existing frictionless pricing models. Finally, in the last chapter I propose a two-country, two-good general equilibrium economy with uncertainty about the fundamentals' growth rates to study the joint behavior of equity volatilities and correlation at the business cycle frequency. I assume that dividend growth rates jump from one state to another, while the countries' switches are possibly correlated. The model is solved in closed form and analytical expressions for stock prices are reported. When calibrated to empirical data for the United States and the United Kingdom, the results show that, given the existing degree of synchronization across these business cycles, the model captures quite well the historical patterns of stock return volatilities. Moreover, I can explain the time behavior of the correlation, but only under the assumption of a global business cycle.
Abstract:
The state of the art for describing image quality in medical imaging is to assess the performance of an observer conducting a task of clinical interest. This can be done by using a model observer, leading to a figure of merit such as the signal-to-noise ratio (SNR). Using the non-prewhitening (NPW) model observer, we objectively characterised the evolution of its figure of merit under various acquisition conditions. The NPW model observer usually requires the modulation transfer function (MTF) as well as noise power spectra. However, although the computation of the MTF poses no problem when dealing with the traditional filtered back-projection (FBP) algorithm, this is not the case when using iterative reconstruction (IR) algorithms, such as adaptive statistical iterative reconstruction (ASIR) or model-based iterative reconstruction (MBIR). Given that the target transfer function (TTF) had already been shown to accurately express the system resolution even with non-linear algorithms, we decided to adapt the NPW model observer, replacing the standard MTF by the TTF. The TTF was estimated using a custom-made phantom containing cylindrical inserts surrounded by water. The contrast differences between the inserts and water were plotted for each acquisition condition, and mathematical transformations were then performed, leading to the TTF. As expected, the first results showed a dependency of the TTF on image contrast and noise levels for both ASIR and MBIR. Moreover, FBP also proved to be dependent on contrast and noise when using the lung kernel. These results were then introduced into the NPW model observer. We observed an increase in SNR each time we switched from FBP to ASIR to MBIR. IR algorithms greatly improve image quality, especially in low-dose conditions. Based on our results, the use of MBIR could lead to further dose reduction in several clinical applications.
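For reference, a common frequency-domain form of the NPW figure of merit, with the TTF substituted for the MTF as described above, is (a standard task-based expression given for orientation, not an equation quoted from the paper):

\[
\mathrm{SNR}^{2}_{\mathrm{NPW}} \;=\;
\frac{\left[\displaystyle\iint \lvert \Delta S(u,v)\rvert^{2}\,\mathrm{TTF}^{2}(u,v)\,du\,dv\right]^{2}}
     {\displaystyle\iint \lvert \Delta S(u,v)\rvert^{2}\,\mathrm{TTF}^{2}(u,v)\,\mathrm{NPS}(u,v)\,du\,dv},
\]

where $\Delta S(u,v)$ is the Fourier transform of the signal to be detected, $\mathrm{TTF}$ the target transfer function, and $\mathrm{NPS}$ the noise power spectrum.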
Abstract:
This Master's thesis examines the implementation of real-time activity-based costing in the information system of a Finnish SME that manufactures laser chips. In addition, the effects of activity-based costing on operational work and on activity-based management are examined. The literature part of the thesis covers, on the basis of published sources, the theory of activity-based costing, costing methods, and the technologies used in the technical implementation. In the implementation part, a web-based activity-based costing system was designed and built to support the case company's cost accounting and financial administration. The tool was integrated into the company's enterprise resource planning and manufacturing execution systems. Compared with traditional data-collection systems for activity-based costing models, the inputs to the case company's costing system arrive in real time as part of a larger information-system integration. The thesis aims to establish the relationship between the requirements of activity-based costing and database systems. The company can use the activity-based costing system, for example, in product pricing and cost accounting by viewing product-related costs from different perspectives. Conclusions can be drawn from accurate cost information, and the data produced by the system can be used to determine whether developing a particular project, customer relationship, or product is economically worthwhile.
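The costing logic such a system automates can be pictured with a minimal Python sketch: activity cost pools are turned into driver rates and products are charged by the drivers they consume. The activities, drivers, and figures are invented for illustration and are not taken from the case company.

```python
# Minimal sketch of an activity-based costing calculation: cost pools divided
# by driver volumes give rates, and products are charged per driver consumed.
activity_costs = {"wafer_setup": 12000.0, "laser_testing": 8000.0, "packaging": 5000.0}
driver_volumes = {"wafer_setup": 40, "laser_testing": 200, "packaging": 500}   # setups, test hours, units

rates = {a: activity_costs[a] / driver_volumes[a] for a in activity_costs}

# Driver consumption per product batch (hypothetical products).
products = {
    "chip_A": {"wafer_setup": 10, "laser_testing": 60, "packaging": 150},
    "chip_B": {"wafer_setup": 30, "laser_testing": 140, "packaging": 350},
}

for name, usage in products.items():
    cost = sum(rates[a] * q for a, q in usage.items())
    print(f"{name}: {cost:.2f} EUR of activity cost")
```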
Abstract:
This Master's thesis deals with tools designed for cost estimation and price setting. First, the basics of traditional and activity-based costing are reviewed; the differences between the two methods are examined, and the better suitability of activity-based costing for today's companies is argued. Pricing is discussed next: the significance of price, pricing methods, and the decision on the final price are covered. After pricing, cost systems and cost estimation are presented. These topics show that accurate cost estimates are vital to a company. Estimating a product's cost, setting its price, and bidding are highly significant matters when the whole project life cycle and future profits are taken into account. Nowadays it is common to use software tools for cost estimation and sometimes also for pricing. The reliability of the tools must be known before they are taken into use, and their users must be trained well; otherwise the company is likely to face unexpected and unpleasant surprises.