Resumo:
Preface My thesis consists of three essays in which I consider equilibrium asset prices and investment strategies when the market is likely to experience crashes and possibly sharp windfalls. Although each part is written as an independent and self-contained article, the papers share a common behavioral approach to representing investors' preferences regarding extremal returns. Investors' utility is defined over their relative performance rather than over their final wealth position, a method first proposed by Markowitz (1952b) and by Kahneman and Tversky (1979), which I extend to incorporate preferences over extremal outcomes. Given the failure of traditional expected utility models to reproduce the observed stylized features of financial markets, the Prospect theory of Kahneman and Tversky (1979) offered the first significant alternative to the expected utility paradigm by positing that people focus on gains and losses rather than on final positions. Under this setting, Barberis, Huang, and Santos (2000) and McQueen and Vorkink (2004) were able to build a representative-agent optimization model whose solution reproduced some of the observed risk premium and excess volatility. Research in behavioral finance is relatively new and much of its potential remains to be explored. The three essays composing my thesis propose to use and extend this setting to study investors' behavior and investment strategies in a market where crashes and sharp windfalls are likely to occur. In the first paper, the preferences of a representative agent relative to time-varying positive and negative extremal thresholds are modelled and estimated. A new utility function that reconciles expected utility maximization with tail-related performance measures is proposed. The model estimation shows that the representative agent's preferences reveal a significant level of crash aversion and lottery pursuit.
Assuming a single-risky-asset economy, the proposed specification is able to reproduce some of the distributional features exhibited by financial return series. The second part proposes and illustrates a preference-based asset allocation model taking into account investors' crash aversion. Using the skewed t distribution, optimal allocations are characterized as a tradeoff between the distribution's first four moments. The specification highlights the preference for odd moments and the aversion to even moments. Optimal portfolios are analyzed qualitatively in terms of firm characteristics, and in a setting that reflects real-time asset allocation, a systematic over-performance is obtained relative to the aggregate stock market. Finally, in my third article, dynamic option-based investment strategies are derived and illustrated for investors exhibiting downside loss aversion. The problem is solved in closed form when the stock market exhibits stochastic volatility and jumps. The specification of downside loss-averse utility functions allows the corresponding terminal wealth profiles to be expressed as options on the stochastic discount factor, contingent on the loss aversion level. Dynamic strategies therefore reduce to the portfolio replicating these payoffs using exchange-traded, well-selected options and the risky stock.
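The four-moment tradeoff described in this abstract can be sketched numerically. The following is a minimal illustration, not the paper's estimated utility function: the weights a2, a3, a4 are arbitrary illustrative risk-preference coefficients, and the objective simply rewards odd moments (mean, skewness) and penalizes even ones (variance, kurtosis), in the spirit of a Taylor expansion of expected utility.

```python
import numpy as np

def four_moment_objective(returns, a2=2.0, a3=1.0, a4=0.5):
    """Taylor-style four-moment objective: higher mean and third moment
    raise the score (odd moments preferred), higher variance and fourth
    moment lower it (even moments penalized). Coefficients a2..a4 are
    illustrative, not estimated."""
    r = np.asarray(returns, dtype=float)
    mu = r.mean()
    sig2 = r.var()
    m3 = ((r - mu) ** 3).mean()   # third central moment (skewness direction)
    m4 = ((r - mu) ** 4).mean()   # fourth central moment (tail heaviness)
    return mu - (a2 / 2) * sig2 + (a3 / 6) * m3 - (a4 / 24) * m4

rng = np.random.default_rng(0)
sym = rng.normal(0.0, 0.05, 10_000)               # symmetric returns
left = sym - 0.02 * rng.exponential(1.0, 10_000)  # add left-tail losses
# The crash-prone series scores lower: lower mean, higher variance,
# negative skewness all push the objective down.
print(four_moment_objective(sym) > four_moment_objective(left))
```

A crash-averse agent under this sketch ranks the symmetric return stream above the left-skewed one even when the weight on skewness is set to zero, since the mean and variance terms already penalize the added losses.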
Resumo:
The algorithmic approach to data modelling has developed rapidly in recent years; in particular, methods based on data mining and machine learning have been used in a growing number of applications. These methods follow a data-driven methodology, aiming at providing the best possible generalization and predictive ability rather than concentrating on the properties of the data model. One of the most successful families of such methods is known as Support Vector algorithms. Following the fruitful developments in applying Support Vector algorithms to spatial data, this paper introduces a new extension of the traditional support vector regression (SVR) algorithm. This extension allows for the simultaneous modelling of environmental data at several spatial scales. The joint influence of environmental processes presenting different patterns at different scales is learned automatically from data, providing the optimal mixture of short- and large-scale models. The method is adaptive to the spatial scale of the data. With this advantage, it can provide efficient means to model local anomalies that may typically arise at an early phase of an environmental emergency. However, the proposed approach still requires some prior knowledge of the possible existence of such short-scale patterns, a possible limitation of the method for implementation in early warning systems. The purpose of this paper is to present the multi-scale SVR model and to illustrate its use with an application to the mapping of Cs137 activity, given the measurements taken in the region of Briansk following the Chernobyl accident.
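The idea of mixing kernels at two spatial scales can be sketched with standard SVR tooling. This is a minimal illustration, not the paper's algorithm: the mixing weight w and the two bandwidths are fixed by hand here, whereas the paper learns the optimal mixture from the data.

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.metrics.pairwise import rbf_kernel

def multiscale_kernel(A, B, w=0.5, short=10.0, large=0.1):
    """Convex combination of a short-scale (large gamma) and a
    large-scale (small gamma) RBF kernel; w is the mixing weight.
    All three values are illustrative, not learned."""
    return w * rbf_kernel(A, B, gamma=short) + (1 - w) * rbf_kernel(A, B, gamma=large)

# 1-D toy field: a smooth large-scale trend plus a local anomaly
rng = np.random.default_rng(1)
X = rng.uniform(0, 10, 200)[:, None]
y = np.sin(0.5 * X[:, 0]) + 2.0 * np.exp(-8.0 * (X[:, 0] - 5.0) ** 2)

model = SVR(kernel="precomputed", C=10.0, epsilon=0.01)
model.fit(multiscale_kernel(X, X), y)
pred = model.predict(multiscale_kernel(X, X))
print(np.abs(pred - y).mean())  # training error stays small on both scales
```

A single-bandwidth RBF kernel must compromise between the broad sine trend and the narrow anomaly; the mixed kernel can represent both simultaneously, which is the point of the multi-scale construction.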
Resumo:
Based on ethnographic fieldwork conducted within a palliative care mobile team in an academic hospital, this doctoral thesis focuses on medicines used in end-of-life contexts. At the intersection of a socio-anthropology of serious illness, dying and pharmaceuticals, it questions the relations to morphine, as well as to certain psychotropic and sedative drugs used in palliative care. Between "lived" experiences of temporality and institutional temporality, the ways in which actors invest time when it is counted are central. In a microsocial dimension, the results show that the introduction of drugs such as morphine, as well as the arrival of a palliative care mobile team, are landmarks and can sound like an announcement, a sort of sanction, in the uncertain trajectory of the ill person. In addition, medicines make it possible to act on "the remaining time", beyond relieving symptoms, when serious illness shifts into incurable illness. They are diverted from the initial aim of symptom relief in order to defer, alter or hasten death in a perspective of control over one's end of life. In a mesosocial dimension, this work considers pharmaceuticals as core to exchanges between professional groups against a background of the institutionalisation of palliative care relative to other medical segments active in managing the end of life. In a medical context characterised by uncertainty and decision-making - with a particular shade in Switzerland, where assisted suicide is tolerated - palliative medicines can be seen as instruments of death, whether feared or sought. Questioning the risks of reproducing treatment inequalities at the approach of death, which are accentuated in a context increasingly favourable to euthanasia practices, this study ultimately aims at discussing death's constrained time in academic hospitals, between therapeutic intervention and abstention.
Resumo:
This paper presents a semisupervised support vector machine (SVM) that integrates the information of both labeled and unlabeled pixels efficiently. The method's performance is illustrated on the relevant problem of very high resolution (VHR) image classification of urban areas. The SVM is trained with a linear combination of two kernels: a base kernel working only with labeled examples is deformed by a likelihood kernel encoding similarities between labeled and unlabeled examples. Results obtained on VHR multispectral and hyperspectral images show the relevance of the method in the context of urban image classification. Moreover, its simplicity and the few parameters involved make the method versatile and usable by inexperienced users.
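The composite-kernel construction can be sketched as follows. This is an illustrative stand-in, not the paper's exact formulation: a Gaussian-mixture posterior similarity, fitted on labeled plus unlabeled points, plays the role of the likelihood kernel, and the mixing weight beta is fixed by hand.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.mixture import GaussianMixture
from sklearn.metrics.pairwise import rbf_kernel

rng = np.random.default_rng(2)
# Toy "pixels": two spectral clusters, few labeled, many unlabeled
X_lab = np.vstack([rng.normal(0, 0.3, (10, 2)), rng.normal(2, 0.3, (10, 2))])
y_lab = np.array([0] * 10 + [1] * 10)
X_unl = np.vstack([rng.normal(0, 0.3, (200, 2)), rng.normal(2, 0.3, (200, 2))])

# Likelihood kernel built from mixture-model posteriors estimated on
# labeled + unlabeled data (an illustrative stand-in).
gmm = GaussianMixture(n_components=2, random_state=0)
gmm.fit(np.vstack([X_lab, X_unl]))

def composite_kernel(A, B, beta=0.5):
    k_base = rbf_kernel(A, B, gamma=1.0)                   # base kernel (labeled geometry)
    k_lik = gmm.predict_proba(A) @ gmm.predict_proba(B).T  # likelihood kernel
    return beta * k_base + (1 - beta) * k_lik              # linear combination

clf = SVC(kernel="precomputed", C=10.0)
clf.fit(composite_kernel(X_lab, X_lab), y_lab)
acc = clf.score(composite_kernel(X_lab, X_lab), y_lab)
print(acc)
```

The unlabeled pixels never enter the SVM loss directly; they only reshape the kernel, which is what keeps the approach simple and light on parameters.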
Resumo:
Difficult tracheal intubation assessment is an important research topic in anesthesia, as failed intubations are an important cause of mortality in anesthetic practice. The modified Mallampati score is widely used, alone or in conjunction with other criteria, to predict the difficulty of intubation. This work presents an automatic method to assess the modified Mallampati score from an image of a patient with the mouth wide open. For this purpose we propose an active appearance models (AAM) based method and use linear support vector machines (SVM) to select a subset of relevant features obtained using the AAM. This feature selection step proves to be essential, as it drastically improves the performance of classification, which is obtained using an SVM with RBF kernel and majority voting. We test our method on images of 100 patients undergoing elective surgery, achieve 97.9% accuracy in the leave-one-out cross-validation test, and thereby provide a key element of an automatic difficult intubation assessment system.
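The two-stage scheme (a linear SVM for feature selection, then an RBF-kernel SVM evaluated with leave-one-out cross-validation) can be sketched on synthetic features. The AAM fitting and the majority-voting step are omitted, and all data, names, and parameters here are illustrative, not the paper's.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.svm import LinearSVC, SVC
from sklearn.model_selection import LeaveOneOut, cross_val_score

# Synthetic stand-in for AAM shape/appearance features (the paper's
# features come from an active appearance model fit to mouth images).
X, y = make_classification(n_samples=100, n_features=50, n_informative=5,
                           random_state=0)

# Step 1: a linear SVM ranks features by |coefficient|; keep the top 10.
lin = LinearSVC(C=1.0, dual=False).fit(X, y)
top = np.argsort(np.abs(lin.coef_[0]))[-10:]

# Step 2: RBF-kernel SVM on the selected subset, leave-one-out CV.
scores = cross_val_score(SVC(kernel="rbf", gamma="scale"), X[:, top], y,
                         cv=LeaveOneOut())
print(scores.mean())
```

Discarding the uninformative features before the nonlinear classifier is what the abstract calls essential: with only 100 patients, an RBF SVM over all raw features overfits much more easily than one over a small selected subset.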
Resumo:
BACKGROUND: Catecholaminergic polymorphic ventricular tachycardia (CPVT) is an arrhythmogenic disease for which electrophysiological studies (EPS) have been shown to be of limited value. OBJECTIVE: This study presents a CPVT family in which marked postpacing repolarization abnormalities during EPS were the only consistent phenotypic manifestation in ryanodine receptor (RyR2) mutation carriers. METHODS: The study was prompted by the observation of transient marked QT prolongation preceding initiation of ventricular fibrillation during atrial fibrillation in a boy with a family history of sudden cardiac death (SCD). Family members underwent exercise and pharmacologic electrocardiographic testing with epinephrine, adenosine, and flecainide. Noninvasive clinical test results were normal in the 10 patients evaluated, except for both epinephrine- and exercise-induced ventricular arrhythmias in 1. EPS included bursts of ventricular pacing and programmed ventricular extrastimulation reproducing short-long sequences. Genetic screening involved direct sequencing of genes involved in long QT syndrome as well as RyR2. RESULTS: Six patients demonstrated a marked increase in QT interval only in the first beat after cessation of ventricular pacing and/or extrastimulation. All 6 patients were found to have a heterozygous missense mutation (M4109R) in RyR2. Two of them, presenting with aborted SCD, also had a second missense mutation (I406T-RyR2). Four family members without RyR2 mutations did not display prominent postpacing QT changes. CONCLUSION: M4109R-RyR2 is associated with a high incidence of SCD. The contribution of I406T to the clinical phenotype is unclear. In contrast to exercise testing, marked postpacing repolarization changes in a single beat accurately predicted carriers of M4109R-RyR2 in this family.
Resumo:
We study the phonon dispersion, cohesive and thermal properties of the rare gas solids Ne, Ar, Kr, and Xe, using a variety of potentials obtained from different approaches, such as fitting to crystal properties, purely ab initio calculations for molecules and dimers, ab initio calculations for the solid crystalline phase, or a combination of ab initio calculations and fitting to either gas phase data or solid state properties. We explore whether potentials derived with a certain approach have any obvious benefit over the others in reproducing the solid state properties. In particular, we study phonon dispersion, isothermal and adiabatic bulk moduli, thermal expansion, and elastic (shear) constants as a function of temperature. Anharmonic effects on thermal expansion, specific heat, and bulk moduli have been studied using λ² perturbation theory in the high temperature limit, using the nearest-neighbor central force (nncf) model as developed by Shukla and MacDonald [4]. In our study, we find that potentials based on fitting to the crystal properties have some advantage, particularly for Kr and Xe, in terms of reproducing the thermodynamic properties over an extended range of temperatures, but agreement of the phonon frequencies with the measured values is not guaranteed. For the lighter element Ne, the LJ potential, which is based on fitting to the gas phase data, produces the best results for the thermodynamic properties; however, the Eggenberger potential for Ne, which is based on combining ab initio quantum chemical calculations and molecular dynamics simulations, produces results in better agreement with the measured dispersion and elastic (shear) values. For Ar, the Morse-type potential, which is based on Møller-Plesset perturbation theory to fourth order (MP4) ab initio calculations, yields the best results for the thermodynamic properties, elastic (shear) constants, and phonon dispersion curves.
Resumo:
Four problems of physical interest have been solved in this thesis using the path integral formalism. Using the trigonometric expansion method of Burton and de Borde (1955), we found the kernel for two interacting one-dimensional oscillators. The result is the same as one would obtain using a normal coordinate transformation. We next introduced the method of Papadopoulos (1969), which is a systematic perturbation-type method specifically geared to finding the partition function Z, or equivalently the Helmholtz free energy F, of a system of interacting oscillators. We applied this method to the next three problems considered. First, by summing the perturbation expansion, we found F for a system of N interacting Einstein oscillators. The result obtained is the same as the usual result obtained by Shukla and Muller (1972). Next, we found F to O(λ⁴), where λ is the usual Van Hove ordering parameter. The results obtained are the same as those of Shukla and Cowley (1971), who used a diagrammatic procedure and did the necessary sums in Fourier space; we performed the work in temperature space. Finally, slightly modifying the method of Papadopoulos, we found the finite temperature expressions for the Debye-Waller factor in Bravais lattices, to O(λ²) and O(|K|⁴), where K is the scattering vector. The high temperature limits of the expressions obtained here are in complete agreement with the classical results of Maradudin and Flinn (1963).
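As a minimal worked example of the closed-form results such summed expansions reduce to (a textbook harmonic-oscillator result, not a formula taken from the thesis), the imaginary-time path-integral trace for a single oscillator of frequency ω gives the partition function and free energy

```latex
Z = \sum_{n=0}^{\infty} e^{-\beta\hbar\omega\left(n+\frac{1}{2}\right)}
  = \frac{1}{2\sinh\!\left(\beta\hbar\omega/2\right)},
\qquad
F = -\frac{1}{\beta}\ln Z
  = \frac{\hbar\omega}{2} + \frac{1}{\beta}\ln\!\left(1 - e^{-\beta\hbar\omega}\right).
```

For N identical non-interacting Einstein oscillators, F is N times this expression; the perturbation expansions summed in the thesis add anharmonic corrections to this harmonic baseline.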
Resumo:
This qualitative research study explores how teachers who write social justice-focused curriculum support resources conceptualize curriculum and social justice. Curriculum used in schools reflects underlying assumptions and choices about what knowledge is valuable. Class-based, cultural, racial, and religious stereotypes are reinforced in schooling contexts. Are the resources teachers create, select, and use to promote social justice reproducing and reinforcing forms of oppression? Why do teachers pursue social justice through curriculum writing? What are their hopes for this work? Exploring how teachers' beliefs and values influence curriculum writing engages the teachers writing and using curriculum support resources in critical reflective thought about their experiences and efforts to promote social justice. Individual and focus group interviews were conducted with four teacher-curriculum writers from Ontario schools. In theorizing my experiences as a teacher-curriculum writer, I reversed roles and participated in individual interviews. I employed a critical feminist lens to analyze the qualitative data. The participants' identities influenced how they understand social justice and write curriculum. Their understandings of injustices, either personal or gathered through students, family members, or other teachers, influenced their curriculum writing. The teacher-curriculum writers in the study believed all teachers need critical understandings of curriculum and social justice. The participants made a case for representation from historically disadvantaged and underrepresented groups on curriculum writing teams. In an optimistic conclusion, the possibility of a considerate curriculum is proposed as a way to engage the public in working with teachers for social justice.
Resumo:
Complex networks can arise naturally and spontaneously from all things that act as part of a larger system. From the patterns of socialization between people to the way biological systems organize themselves, complex networks are ubiquitous, but are currently poorly understood. A number of algorithms, designed by humans, have been proposed to describe the organizational behaviour of real-world networks, and breakthroughs in genetics, medicine, epidemiology, neuroscience, telecommunications and the social sciences have recently resulted. These algorithms, called graph models, represent significant human effort. Deriving accurate graph models is non-trivial, time-intensive and challenging, and may only yield useful results for very specific phenomena. An automated approach can greatly reduce the human effort required and, if effective, provide a valuable tool for understanding the large decentralized systems of interrelated things around us. To the best of the author's knowledge, this thesis proposes the first method for the automatic inference of graph models for complex networks with varied properties, with and without community structure; it is also, to the best of the author's knowledge, the first application of genetic programming to the automatic inference of graph models. The system and methodology were tested against benchmark data and shown to be capable of reproducing close approximations to well-known algorithms designed by humans. Furthermore, when used to infer a model for real biological data, the resulting model was more representative than models currently used in the literature.
Resumo:
This research assesses the various aspects of Child and Youth Care (CYC) work and how relationships between a child or youth and a care provider are limited and constricted within greater political, social and historical contexts. Specifically, this research takes place internationally in Rio de Janeiro, Brazil, within a favela (slum), and unveils the entangled and complex relationship that I, not only as an ethnographer but also as a CYC worker, had with the many young people I encountered. It addresses a variety of theories that demonstrate the potential for reproducing oppressive relationships, and argues that it is imperative for CYC workers to critically reflect on the greater contexts in which their work is situated in order to join forces with those young people whom they are attempting to serve.
Resumo:
Latent variable models in finance originate both from asset pricing theory and from time series analysis. These two strands of literature appeal to two different concepts of latent structures, both useful to reduce the dimension of a statistical model specified for a multivariate time series of asset prices. In the CAPM or APT beta pricing models, the dimension reduction is cross-sectional in nature, while in time-series state-space models, dimension is reduced longitudinally by assuming conditional independence between consecutive returns, given a small number of state variables. In this paper, we use the concept of the Stochastic Discount Factor (SDF), or pricing kernel, as a unifying principle to integrate these two concepts of latent variables. Beta pricing relations amount to characterizing the factors as a basis of a vector space for the SDF. The coefficients of the SDF with respect to the factors are specified as deterministic functions of some state variables which summarize their dynamics. In beta pricing models, it is often said that only the factorial risk is compensated, since the remaining idiosyncratic risk is diversifiable. Implicitly, this argument can be interpreted as a conditional cross-sectional factor structure, that is, a conditional independence between contemporaneous returns of a large number of assets, given a small number of factors, as in standard Factor Analysis. We provide this unifying analysis in the context of conditional equilibrium beta pricing as well as asset pricing with stochastic volatility, stochastic interest rates and other state variables. We address the general issue of econometric specifications of dynamic asset pricing models, which cover the modern literature on conditionally heteroskedastic factor models as well as equilibrium-based asset pricing models with an intertemporal specification of preferences and market fundamentals.
We interpret various instantaneous causality relationships between state variables and market fundamentals as leverage effects and discuss their central role relative to the validity of standard CAPM-like stock pricing and preference-free option pricing.
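In the standard notation, the SDF relations this abstract builds on can be written compactly; the following is a hedged sketch of the usual formulation (the symbols m, R, f, Z and the indices are generic, not necessarily the paper's):

```latex
E\!\left[\, m_{t+1} R_{i,t+1} \,\middle|\, \mathcal{F}_t \right] = 1,
\qquad
m_{t+1} = \theta_0(Z_t) + \sum_{j=1}^{K} \theta_j(Z_t)\, f_{j,t+1},
```

so that, under a conditional factor structure for returns, a conditional beta pricing relation of the form E_t[R_{i,t+1}] - r_t = \sum_j \beta_{ij,t}\,\lambda_{j,t} follows, with factor loadings β and risk premia λ driven by the state variables Z_t.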
Resumo:
In this paper, we study several tests for the equality of two unknown distributions. Two are based on empirical distribution functions, three others on nonparametric probability density estimates, and the last ones on differences between sample moments. We suggest controlling the size of such tests (under nonparametric assumptions) by using permutational versions of the tests jointly with the method of Monte Carlo tests properly adjusted to deal with discrete distributions. We also propose a combined test procedure, whose level is again perfectly controlled through the Monte Carlo test technique and has better power properties than the individual tests that are combined. Finally, in a simulation experiment, we show that the technique suggested provides perfect control of test size and that the new tests proposed can yield sizeable power improvements.
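The permutational Monte Carlo approach to controlling test size can be sketched as follows; this illustrates the general technique with a Kolmogorov-Smirnov-type statistic built from empirical distribution functions, not the specific combined procedure of the paper.

```python
import numpy as np

def mc_permutation_test(x, y, stat, n_perm=999, seed=0):
    """Permutation version of a two-sample test: under H0 (equal
    distributions) the pooled sample is exchangeable, so the permutation
    p-value controls size up to Monte Carlo error. stat maps (x, y) to a
    scalar; large values reject."""
    rng = np.random.default_rng(seed)
    pooled = np.concatenate([x, y])
    t_obs = stat(x, y)
    count = 0
    for _ in range(n_perm):
        perm = rng.permutation(pooled)
        if stat(perm[:len(x)], perm[len(x):]) >= t_obs:
            count += 1
    return (count + 1) / (n_perm + 1)  # MC p-value with the +1 correction

def ks_stat(x, y):
    """Max gap between the two empirical distribution functions."""
    grid = np.concatenate([x, y])
    Fx = (x[None, :] <= grid[:, None]).mean(axis=1)
    Fy = (y[None, :] <= grid[:, None]).mean(axis=1)
    return np.abs(Fx - Fy).max()

rng = np.random.default_rng(42)
p_same = mc_permutation_test(rng.normal(0, 1, 80), rng.normal(0, 1, 80), ks_stat)
p_diff = mc_permutation_test(rng.normal(0, 1, 80), rng.normal(1, 1, 80), ks_stat)
print(p_same, p_diff)
```

Because the p-value is computed from the permutation distribution of the statistic itself, the same wrapper controls size for any of the statistics discussed in the paper, including statistics with discrete null distributions, which is what the +1 correction is for.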
Resumo:
The focus of the paper is the nonparametric estimation of an instrumental regression function P defined by conditional moment restrictions stemming from a structural econometric model: E[Y - P(Z) | W] = 0, involving endogenous variables Y and Z and instruments W. The function P is the solution of an ill-posed inverse problem, and we propose an estimation procedure based on Tikhonov regularization. The paper analyses identification and overidentification of this model and presents asymptotic properties of the estimated nonparametric instrumental regression function.
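Tikhonov regularization of an ill-posed problem can be sketched in a finite-dimensional analogue. This toy example is illustrative only (the operator, noise level, and alpha are all made up, and the paper's operator is a conditional expectation, not a matrix); it shows why regularization is needed when inverting a smoothing operator.

```python
import numpy as np

def tikhonov_solve(T, r, alpha):
    """Regularized solution of the ill-posed equation T p = r:
    p_alpha = (alpha I + T' T)^{-1} T' r. As alpha -> 0 the estimate
    approaches the unregularized solution but noise amplification
    explodes; alpha trades bias against stability."""
    n = T.shape[1]
    return np.linalg.solve(alpha * np.eye(n) + T.T @ T, T.T @ r)

# Ill-conditioned toy operator: a discretized Gaussian smoothing kernel
n = 50
s = np.linspace(0, 1, n)
T = np.exp(-30.0 * (s[:, None] - s[None, :]) ** 2) / n
p_true = np.sin(2 * np.pi * s)
r = T @ p_true + 1e-4 * np.random.default_rng(3).normal(size=n)

p_reg = tikhonov_solve(T, r, alpha=1e-6)
p_naive = np.linalg.solve(T + 1e-12 * np.eye(n), r)  # nearly unregularized
err_reg = np.abs(p_reg - p_true).mean()
err_naive = np.abs(p_naive - p_true).mean()
print(err_reg < err_naive)
```

The smoothing operator kills high-frequency components of p, so the tiny measurement noise dominates those components when the inversion is unregularized; the alpha term caps the amplification, at the price of a small bias on the smooth part of p.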
Resumo:
We examine the relationship between the risk premium on the S&P 500 index return and its conditional variance. We use the SMEGARCH (Semiparametric-Mean EGARCH) model, in which the conditional variance process is EGARCH while the conditional mean is an arbitrary function of the conditional variance. For monthly S&P 500 excess returns, the relationship between the two moments that we uncover is nonlinear and nonmonotonic. Moreover, we find considerable persistence in the conditional variance as well as a leverage effect, as documented by others. Finally, the shape of these relationships appears to be relatively stable over time.