142 results for Fonctions Analytiques
Abstract:
Master's thesis digitized by the Direction des bibliothèques de l'Université de Montréal.
Abstract:
Thesis digitized by the Direction des bibliothèques de l'Université de Montréal.
Abstract:
Master's thesis digitized by the Direction des bibliothèques de l'Université de Montréal.
Abstract:
Thesis digitized by the Direction des bibliothèques de l'Université de Montréal.
Abstract:
Thesis digitized by the Direction des bibliothèques de l'Université de Montréal.
Abstract:
Sialylation of the N-glycans on the Fc fragment of immunoglobulin G (IgG) is an infrequent modification of human IgG. It has nevertheless attracted considerable attention since the publication of two seminal articles, one showing that IgG sialylation reduces the antibody's ability to trigger antibody-dependent cellular cytotoxicity (ADCC), and the other suggesting that α2,6-sialylated IgG constitute the active fraction of anti-inflammatory intravenous IgG (IVIG). Therapeutic monoclonal antibodies, most often recombinant IgG produced in mammalian cell culture, have enjoyed phenomenal success and growth on the pharmaceutical market since the late 1990s, and control of Fc N-glycosylation is a key determinant of their efficacy. While sialylated IgG are infrequent in vivo, they are very rare in cell culture. In this study, we developed a method for producing IgG with human-type sialylation in CHO cells. We worked mainly on a strategy for producing sialylated IgG by transient co-expression of an IgG1 with β1,4-galactosyltransferase I (β4GTI) and β-galactoside α2,6-sialyltransferase I (ST6GalI). We showed that this method enriches the IgG1 in the fucosylated, di-galactosylated, mono-α2,6-sialylated glycan G2FS(6)1, which is the sialylated glycan found on human IgG. We then adapted this method to produce IgG with glycosylation profiles rich in sialic acids, rich in terminal galactose, and/or depleted in fucosylation. Analysis of the glycosylation profiles obtained by co-expressing various enzyme combinations with the native IgG1 or a mutant IgG1 (F243A) allowed us to discuss the respective contributions of IgG1 under-galactosylation in CHO cells and of the structural constraints of the Fc to the limitation of IgG sialylation in CHO cells. We then used the IgG1 produced with different glycosylation profiles to evaluate the impact of α2,6 sialylation on the interaction of IgG with the FcγRIIIa receptor, the main receptor involved in the ADCC response. We showed that α2,6 sialylation increased the stability of the complex formed by the IgG with FcγRIIIa, but that this benefit did not translate directly into greater ADCC efficacy of the antibody. Finally, we began developing a stable expression platform for sialylated IgG compatible with production at industrial scale, and obtained a cell line capable of producing IgG enriched in G2FS(6)1 at a titer of 400 mg/L. This study has contributed to a better understanding of the impact of sialylation on IgG effector functions and has improved command of the techniques for modulating the IgG glycosylation profile in cell culture.
Abstract:
According to the Katz-Sarnak philosophy, the distribution of the zeros of $L$-functions is predicted by the behaviour of the eigenvalues of random matrices. In particular, the behaviour of the zeros near the central point reveals the symmetry type of the family of $L$-functions. Once the symmetry is identified, the Katz-Sarnak philosophy conjectures that several statistics associated with the zeros are modelled by the eigenvalues of random matrices from the corresponding group. This master's thesis studies the distribution of the zeros near the central point for the family of elliptic curves over $\mathbb{Q}[i]$. Brumer carried out these computations in 1992 for the family of elliptic curves over $\mathbb{Q}$. The new issues raised by generalizing his work to a number field are highlighted.
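For context, the statistic usually used to study zeros near the central point is the one-level density; the formulation below is a standard one and is assumed here rather than taken from the thesis (in particular the normalization by the analytic conductor $c_f$). For a family $\mathcal{F}$ of $L$-functions and an even Schwartz test function $\phi$,
$$ D(\mathcal{F};\phi) = \frac{1}{|\mathcal{F}|} \sum_{f \in \mathcal{F}} \sum_{\gamma_f} \phi\!\left( \frac{\gamma_f \log c_f}{2\pi} \right), $$
where $\gamma_f$ runs over the imaginary parts of the non-trivial zeros of $L(s,f)$. The Katz-Sarnak philosophy predicts that, as the conductors grow, $D(\mathcal{F};\phi)$ converges to $\int_{\mathbb{R}} \phi(x) W_G(x)\,dx$ for a density $W_G$ determined only by the symmetry group $G$ (unitary, symplectic, or orthogonal) attached to the family.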
Abstract:
Latent variable models in finance originate both from asset pricing theory and time series analysis. These two strands of literature appeal to two different concepts of latent structures, which are both useful for reducing the dimension of a statistical model specified for a multivariate time series of asset prices. In the CAPM or APT beta pricing models, the dimension reduction is cross-sectional in nature, while in time-series state-space models, dimension is reduced longitudinally by assuming conditional independence between consecutive returns, given a small number of state variables. In this paper, we use the concept of Stochastic Discount Factor (SDF) or pricing kernel as a unifying principle to integrate these two concepts of latent variables. Beta pricing relations amount to characterizing the factors as a basis of a vector space for the SDF. The coefficients of the SDF with respect to the factors are specified as deterministic functions of some state variables which summarize their dynamics. In beta pricing models, it is often said that only the factor risk is compensated, since the remaining idiosyncratic risk is diversifiable. Implicitly, this argument can be interpreted as a conditional cross-sectional factor structure, that is, a conditional independence between contemporaneous returns of a large number of assets, given a small number of factors, as in standard factor analysis. We provide this unifying analysis in the context of conditional equilibrium beta pricing as well as asset pricing with stochastic volatility, stochastic interest rates and other state variables. We address the general issue of econometric specifications of dynamic asset pricing models, which cover the modern literature on conditionally heteroskedastic factor models as well as equilibrium-based asset pricing models with an intertemporal specification of preferences and market fundamentals. We interpret various instantaneous causality relationships between state variables and market fundamentals as leverage effects and discuss their central role relative to the validity of standard CAPM-like stock pricing and preference-free option pricing.
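For concreteness, the unifying principle can be sketched as follows (the notation is assumed for illustration, not taken from the paper): the pricing kernel $m_{t+1}$ satisfies the conditional pricing equation for every gross return $R_{i,t+1}$, and beta pricing corresponds to the SDF being spanned by the factors with coefficients driven by the state variables $Y_t$:
$$ \mathrm{E}\!\left[m_{t+1} R_{i,t+1} \mid J_t\right] = 1, \qquad m_{t+1} = \sum_{k=1}^{K} \lambda_k(Y_t)\, F_{k,t+1}. $$
Substituting the second expression into the first yields conditional beta pricing relations in which conditional expected excess returns are linear in the conditional betas with respect to the factors.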
Abstract:
In the context of multivariate linear regression (MLR) models, it is well known that commonly employed asymptotic test criteria are seriously biased towards overrejection. In this paper, we propose a general method for constructing exact tests of possibly nonlinear hypotheses on the coefficients of MLR systems. For the case of uniform linear hypotheses, we present exact distributional invariance results concerning several standard test criteria. These include Wilks' likelihood ratio (LR) criterion as well as trace and maximum root criteria. The normality assumption is not necessary for most of the results to hold. Implications for inference are two-fold. First, invariance to nuisance parameters entails that the technique of Monte Carlo tests can be applied to all these statistics to obtain exact tests of uniform linear hypotheses. Second, the invariance property of the LR statistic is exploited to derive general nuisance-parameter-free bounds on its distribution for arbitrary hypotheses. Even though it may be difficult to compute these bounds analytically, they can easily be simulated, hence yielding exact bounds Monte Carlo tests. Illustrative simulation experiments show that the bounds are sufficiently tight to provide conclusive results with a high probability. Our findings illustrate the value of the bounds as a tool to be used in conjunction with more traditional simulation-based test methods (e.g., the parametric bootstrap) which may be applied when the bounds are not conclusive.
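As a reference point, the Monte Carlo test technique mentioned above can be sketched as follows; this is a generic illustration under the assumption of a pivotal statistic, and the function names and right-tailed convention are assumptions, not the paper's notation.

import numpy as np

def monte_carlo_pvalue(observed_stat, simulate_stat_under_null, n_replications=99, seed=None):
    # Exact Monte Carlo p-value for a pivotal test statistic: simulate the
    # statistic under the null hypothesis, then rank the observed value among
    # the simulated ones (right-tailed convention).
    rng = np.random.default_rng(seed)
    sims = np.array([simulate_stat_under_null(rng) for _ in range(n_replications)])
    return (1 + np.sum(sims >= observed_stat)) / (n_replications + 1)

With a pivotal statistic and a level $\alpha$ such that $\alpha(N+1)$ is an integer (e.g., $N = 99$ with $\alpha = 0.05$), rejecting whenever this p-value is at most $\alpha$ gives a test whose size is exactly $\alpha$.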
Abstract:
We analyze an alternative to the standard rationalizability requirement for observed choices by considering non-deteriorating selections. A selection function is a generalization of a choice function where selected alternatives may depend on a reference (or status quo) alternative in addition to the set of feasible options. A selection function is non-deteriorating if there exists an ordering over the universal set of alternatives such that the selected alternatives are at least as good as the reference option. We characterize non-deteriorating selection functions in an abstract framework and in an economic environment.
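In symbols (notation assumed for illustration, not taken from the paper): a selection function assigns to each feasible set $S$ and reference alternative $r \in S$ a non-empty set $C(S, r) \subseteq S$, and it is non-deteriorating if there exists an ordering $R$ on the universal set of alternatives such that
$$ x \in C(S, r) \;\Longrightarrow\; x \, R \, r $$
for every admissible pair $(S, r)$.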
Abstract:
Presently, conditions ensuring the validity of bootstrap methods for the sample mean of (possibly heterogeneous) near epoch dependent (NED) functions of mixing processes are unknown. Here we establish the validity of the bootstrap in this context, extending the applicability of bootstrap methods to a class of processes broadly relevant for applications in economics and finance. Our results apply to two block bootstrap methods: the moving blocks bootstrap of Künsch (1989) and Liu and Singh (1992), and the stationary bootstrap of Politis and Romano (1994). In particular, the consistency of the bootstrap variance estimator for the sample mean is shown to be robust against heteroskedasticity and dependence of unknown form. The first order asymptotic validity of the bootstrap approximation to the actual distribution of the sample mean is also established in this heterogeneous NED context.
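For reference, a minimal sketch of the moving blocks bootstrap for the sample mean follows; the function name, the fixed block length, and the NumPy implementation are illustrative assumptions, not the paper's.

import numpy as np

def moving_blocks_bootstrap_mean(x, block_length, n_boot=999, seed=None):
    # Moving blocks bootstrap: draw overlapping blocks of fixed length with
    # replacement, concatenate them, truncate to the original sample size,
    # and record the mean of each resampled series.
    rng = np.random.default_rng(seed)
    x = np.asarray(x, dtype=float)
    n = len(x)
    if block_length > n:
        raise ValueError("block_length must not exceed the sample size")
    n_blocks = -(-n // block_length)   # ceil(n / block_length)
    n_starts = n - block_length + 1    # admissible block starting points
    means = np.empty(n_boot)
    for b in range(n_boot):
        starts = rng.integers(0, n_starts, size=n_blocks)
        series = np.concatenate([x[s:s + block_length] for s in starts])[:n]
        means[b] = series.mean()
    return means

The empirical variance of the returned bootstrap means is the block bootstrap variance estimator for the sample mean whose consistency is discussed above.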
Abstract:
In this paper, we study several tests for the equality of two unknown distributions. Two are based on empirical distribution functions, three others on nonparametric probability density estimates, and the remaining ones on differences between sample moments. We suggest controlling the size of such tests (under nonparametric assumptions) by using permutational versions of the tests jointly with the method of Monte Carlo tests properly adjusted to deal with discrete distributions. We also propose a combined test procedure, whose level is again perfectly controlled through the Monte Carlo test technique and which has better power properties than the individual tests being combined. Finally, in a simulation experiment, we show that the technique suggested provides perfect control of test size and that the new tests proposed can yield sizeable power improvements.
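A minimal sketch of the permutational Monte Carlo approach described above is given below; the helper names and the two-sample Kolmogorov-Smirnov statistic in the usage note are illustrative choices, and the tie-breaking adjustment for discrete distributions discussed in the abstract is omitted.

import numpy as np

def permutation_pvalue(x, y, statistic, n_permutations=999, seed=None):
    # Permutational Monte Carlo p-value for a two-sample statistic: pool the
    # samples, randomly reassign group labels, and rank the observed statistic
    # among the permuted ones (large values count against equality).
    rng = np.random.default_rng(seed)
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    pooled = np.concatenate([x, y])
    n_x = len(x)
    observed = statistic(x, y)
    exceed = 0
    for _ in range(n_permutations):
        rng.shuffle(pooled)
        if statistic(pooled[:n_x], pooled[n_x:]) >= observed:
            exceed += 1
    return (1 + exceed) / (n_permutations + 1)

# Example with the two-sample Kolmogorov-Smirnov statistic:
# from scipy.stats import ks_2samp
# p = permutation_pvalue(x, y, lambda a, b: ks_2samp(a, b).statistic)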
Abstract:
In this paper, we provide both qualitative and quantitative measures of the cost of using realized volatility as a measure of integrated volatility when the frequency of observation is fixed. We start by characterizing, for a general diffusion, the difference between the realized and the integrated volatilities for a given frequency of observations. Then, we compute the mean and variance of this noise and the correlation between the noise and the integrated volatility in the Eigenfunction Stochastic Volatility model of Meddahi (2001a). This model has, as special examples, log-normal, affine, and GARCH diffusion models. Drawing on previous empirical work, we show that the standard deviation of the noise is not negligible with respect to the mean and the standard deviation of the integrated volatility, even when returns are sampled every five minutes. We also propose a simple approach to capture the information about the integrated volatility contained in the returns through the leverage effect.
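For concreteness, in standard notation (assumed here rather than taken from the paper), with $1/h$ equally spaced intraday returns per unit of time, the realized volatility, the integrated volatility, and the measurement noise studied above are
$$ RV_t(h) = \sum_{i=1}^{1/h} r_{t-1+ih}^{2}, \qquad IV_t = \int_{t-1}^{t} \sigma_u^{2}\, du, \qquad N_t(h) = RV_t(h) - IV_t, $$
and the point made in the abstract is that the standard deviation of $N_t(h)$ remains non-negligible relative to the mean and standard deviation of $IV_t$ at typical sampling frequencies such as five minutes.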
Abstract:
In this paper, we introduce a new approach for volatility modeling in discrete and continuous time. We follow the stochastic volatility literature by assuming that the variance is a function of a state variable. However, instead of assuming that the loading function is ad hoc (e.g., exponential or affine), we assume that it is a linear combination of the eigenfunctions of the conditional expectation (resp. infinitesimal generator) operator associated with the state variable in discrete (resp. continuous) time. Special examples are the popular log-normal and square-root models, where the eigenfunctions are the Hermite and Laguerre polynomials, respectively. The eigenfunction approach has at least six advantages: (i) it is general, since any square-integrable function may be written as a linear combination of the eigenfunctions; (ii) the orthogonality of the eigenfunctions leads to the traditional interpretations of linear principal component analysis; (iii) the implied dynamics of the variance and squared return processes are ARMA and, hence, simple for forecasting and inference purposes; (iv) more importantly, this generates fat tails for the variance and return processes; (v) in contrast to popular models, the variance of the variance is a flexible function of the variance; (vi) these models are closed under temporal aggregation.
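In symbols (notation assumed for illustration, consistent with the description above), the variance loading function is expanded on the eigenfunctions $P_i$ of the conditional expectation operator of the state variable $f_t$:
$$ \sigma_t^2 = a_0 + \sum_{i=1}^{p} a_i P_i(f_t), \qquad \mathrm{E}\!\left[P_i(f_{t+1}) \mid f_t\right] = \lambda_i P_i(f_t), $$
so each $P_i(f_t)$ follows an AR(1) process with coefficient $\lambda_i$, which is what delivers the ARMA dynamics of the variance and squared returns mentioned above; in the log-normal and square-root cases the $P_i$ are the Hermite and Laguerre polynomials, respectively.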
Abstract:
In spatial environments, we consider social welfare functions satisfying Arrow's requirements, i.e., weak Pareto and independence of irrelevant alternatives. When the policy space is a one-dimensional continuum, such a welfare function is determined by a collection of 2n strictly quasi-concave preferences and a tie-breaking rule. As a corollary, we obtain that when the number of voters is odd, simple majority voting is transitive if and only if each voter's preference is strictly quasi-concave. When the policy space is multi-dimensional, we establish Arrow's impossibility theorem. Among other results, we show that weak Pareto, independence of irrelevant alternatives, and non-dictatorship are inconsistent if the set of alternatives has a non-empty interior and is compact and convex.
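Stated formally (a restatement of the corollary above, with one common definition of the majority relation assumed rather than taken from the paper): writing $x \succ_{maj} y$ whenever strictly more voters strictly prefer $x$ to $y$, the corollary says that, for an odd number of voters on a one-dimensional policy continuum, the majority relation is transitive if and only if every voter's preference is strictly quasi-concave.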