889 results for Affine functions
Abstract:
This thesis presents a study, across several areas of theoretical computer science, of models of computation that combine finite automata with arithmetic constraints. We focus on questions of decidability, expressiveness, and closure, while opening the study to complexity, logic, algebra, and applications. This study is presented through four research articles. The first article, Affine Parikh Automata, continues Klaedtke and Ruess's study of Parikh automata and defines generalizations and restrictions of them. The Parikh automaton is a starting point of this thesis; we show that this model of computation is equivalent to the constrained automaton, which we define as an automaton that accepts a word only if the number of times each transition is taken satisfies an arithmetic constraint. This model extends naturally to the affine Parikh automaton, which applies an affine operation to a set of registers each time a transition is taken. We also study the Parikh automaton on letters: an automaton that accepts a word only if the number of times each letter appears in the word satisfies an arithmetic constraint. The second article, Bounded Parikh Automata, studies the bounded languages of Parikh automata. A language is bounded if there exist words w_1, w_2, ..., w_k such that every word of the language can be written w_1...w_1 w_2...w_2 ... w_k...w_k, that is, as an element of w_1* w_2* ... w_k*. These languages are important in application domains and usually enjoy good theoretical properties. We show that, in the context of bounded languages, determinism does not affect the expressiveness of Parikh automata. The third article, Unambiguous Constrained Automata, introduces unambiguous constrained automata, that is, constrained automata with only one accepting path per word recognized by the automaton. We show that this model combines better expressiveness and better closure properties than the deterministic constrained automaton. The problem of deciding whether the language of an unambiguous constrained automaton is regular is shown to be decidable. The fourth article, Algebra and Complexity Meet Constrained Automata, presents a study of the algebraic representations admitted by constrained automata and affine Parikh automata. From these characterizations we derive expressiveness and complexity results. We also show that certain classical hypotheses in computational complexity are related to separation and non-closure results for affine Parikh automata. The thesis concludes with an opening towards possible further developments, through a number of open problems.
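To make the constrained-automaton idea described above concrete, here is a minimal sketch (not code from the thesis): a deterministic finite automaton whose run records how often each transition is used and which accepts only if those counts satisfy an arithmetic constraint. The alphabet, transition table, and constraint below are illustrative assumptions.

```python
# Minimal sketch of a constrained automaton: the automaton alone recognizes a
# regular language, and an arithmetic constraint on transition-use counts is
# checked on top of acceptance by final state.
from collections import Counter

class ConstrainedAutomaton:
    def __init__(self, transitions, start, finals, constraint):
        self.transitions = transitions  # dict: (state, symbol) -> next state
        self.start = start
        self.finals = finals
        self.constraint = constraint    # predicate on the transition-use counts

    def accepts(self, word):
        state, used = self.start, Counter()
        for symbol in word:
            key = (state, symbol)
            if key not in self.transitions:
                return False            # no run at all, hence no accepting run
            used[key] += 1
            state = self.transitions[key]
        return state in self.finals and self.constraint(used)

# Example: over {a, b}, the automaton alone recognizes a*b*; the arithmetic
# constraint "as many a's as b's" cuts this down to the non-regular language
# {a^n b^n : n >= 0}.
trans = {("p", "a"): "p", ("p", "b"): "q", ("q", "b"): "q"}
ca = ConstrainedAutomaton(
    trans, start="p", finals={"p", "q"},
    constraint=lambda c: c[("p", "a")] == c[("p", "b")] + c[("q", "b")],
)
print(ca.accepts("aaabbb"))  # True
print(ca.accepts("aaabb"))   # False
```

A Parikh automaton on letters corresponds to the special case where the constraint depends only, for each letter, on the total number of transitions reading that letter.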
Abstract:
In this work we present a proposal to contribute to the teaching and learning of the affine function in the first year of high school, taking as a prerequisite the mathematical knowledge of basic education. The proposal focuses on some properties, special cases, and applications of affine functions, in order to show the importance of proofs while awakening students' interest by showing how this function helps solve everyday problems.
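The abstract gives no concrete example; as a minimal illustration of the kind of everyday application it refers to (our assumption, not content of the cited proposal), the general form of an affine function and one common reading of its coefficients are:

```latex
% A minimal illustration, not taken from the cited proposal.
\[
  f(x) = ax + b, \qquad a \neq 0,
\]
where $a$ is the constant rate of change and $b = f(0)$ is the initial value.
For example, a taxi fare with a fixed charge of $5$ and a rate of $2$ per
kilometre is $f(x) = 2x + 5$, so a $10$\,km ride costs
$f(10) = 2 \cdot 10 + 5 = 25$.
```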
Abstract:
We consider a recently proposed finite-element space that consists of piecewise affine functions with discontinuities across a smooth given interface Γ (a curve in two dimensions, a surface in three dimensions). Contrary to existing extended finite element methodologies, the space is a variant of the standard conforming Formula space that can be implemented element by element. Further, it neither introduces new unknowns nor deteriorates the sparsity structure. It is proved that, for u arbitrary in Formula, the interpolant Formula defined by this new space satisfies Graphic where h is the mesh size, Formula is the domain, Formula, Formula, Formula and standard notation has been adopted for the function spaces. This result proves the good approximation properties of the finite-element space as compared to any space consisting of functions that are continuous across Γ, which would yield an error in the Formula-norm of order Graphic. These properties make this space especially attractive for approximating the pressure in problems with surface tension or other immersed interfaces that lead to discontinuities in the pressure field. Furthermore, the result still holds for interfaces that end within the domain, as happens for example in cracked domains.
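As a much simplified, one-dimensional illustration of the point made above (the specific spaces and estimates of the paper are not reproduced here), the following sketch compares continuous piecewise affine interpolation of a function with a jump against an interpolant that is allowed to be discontinuous at the known interface. The target function, interface location, and meshes are assumptions for this demo.

```python
# 1D illustration only; it does not implement the 2D/3D finite-element space of
# the paper. Target function, interface location, and meshes are assumptions.
import numpy as np

GAMMA = 0.3                                   # known interface location

def u(x):
    # piecewise affine target with a unit jump at GAMMA
    return np.where(x < GAMMA, x, 1.0 + x)

x = np.linspace(0.0, 1.0, 200001)             # quadrature points for the L2 norm
for h in (0.1, 0.05, 0.025):
    nodes = np.arange(0.0, 1.0 + 1e-12, h)
    # continuous piecewise affine interpolant: smears the jump over one element
    cont = np.interp(x, nodes, u(nodes))
    # interpolant allowed to jump at GAMMA: interpolate each smooth piece separately
    disc = np.where(x < GAMMA,
                    np.interp(x, nodes, nodes),
                    np.interp(x, nodes, 1.0 + nodes))
    e_cont = np.sqrt(np.mean((u(x) - cont) ** 2))
    e_disc = np.sqrt(np.mean((u(x) - disc) ** 2))
    print(f"h={h:5.3f}  continuous: {e_cont:.4f}  with interface jump: {e_disc:.1e}")
# The continuous error decays only like sqrt(h), while the space admitting the
# jump reproduces this particular target exactly.
```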
Abstract:
We consider the Hamiltonian reduction of the two-loop Wess-Zumino-Novikov-Witten model (WZNW) based on an untwisted affine Kac-Moody algebra Ĝ. The resulting reduced models, called Generalized Non-Abelian Conformal Affine Toda (G-CAT), are conformally invariant and a wide class of them possesses soliton solutions; these models constitute non-Abelian generalizations of the conformal affine Toda models. Their general solution is constructed by the Leznov-Saveliev method. Moreover, the dressing transformations leading to the solutions in the orbit of the vacuum are considered in detail, as well as the τ-functions, which are defined for any integrable highest weight representation of Ĝ, irrespective of its particular realization. When the conformal symmetry is spontaneously broken, the G-CAT model becomes a generalized affine Toda model, whose soliton solutions are constructed. Their masses are obtained by exploring the spontaneous breakdown of the conformal symmetry, and their relation to the fundamental particle masses is discussed. We also introduce what we call the two-loop Virasoro algebra, describing extended symmetries of the two-loop WZNW models.
Abstract:
The solutions of a large class of hierarchies of zero-curvature equations that includes Toda- and KdV-type hierarchies are investigated. All these hierarchies are constructed from affine (twisted or untwisted) Kac-Moody algebras g. Their common feature is that they have some special vacuum solutions corresponding to Lax operators lying in some Abelian (up to the central term) subalgebra of g; in some interesting cases such subalgebras are of the Heisenberg type. Using the dressing transformation method, the solutions in the orbit of those vacuum solutions are constructed in a uniform way. Then, the generalized tau-functions for those hierarchies are defined as an alternative set of variables corresponding to certain matrix elements evaluated in the integrable highest-weight representations of g. Such definition of tau-functions applies for any level of the representation, and it is independent of its realization (vertex operator or not). The particular important cases of generalized mKdV and KdV hierarchies as well as the Abelian and non-Abelian affine Toda theories are discussed in detail. © 1997 American Institute of Physics.
Abstract:
Bosonized q-vertex operators related to the four-dimensional evaluation modules of the quantum affine superalgebra U_q[ŝl(2|1)] are constructed for arbitrary level k = α, where α ≠ 0, -1 is a complex parameter appearing in the four-dimensional evaluation representations. They are intertwiners among the level-α highest weight Fock-Wakimoto modules. Screen currents which commute with the action of U_q[ŝl(2|1)] up to total differences are presented. Integral formulas for N-point functions of type I and type II q-vertex operators are proposed. © 2000 American Institute of Physics. [S0022-2488(00)00608-3]
Abstract:
Vectorial Boolean function, almost bent, almost perfect nonlinear, affine equivalence, CCZ-equivalence
Abstract:
Preface. The starting point for this work, and eventually the subject of the whole thesis, was the question of how to estimate the parameters of affine stochastic volatility jump-diffusion models. These models are very important for contingent claim pricing. Their major advantage, the availability of analytical solutions for characteristic functions, has made them the models of choice for many theoretical constructions and practical applications. At the same time, estimation of the parameters of stochastic volatility jump-diffusion models is not a straightforward task. The problem comes from the variance process, which is not observable. There are several estimation methodologies that deal with the estimation of latent variables. One appeared to be particularly interesting: it proposes an estimator that, in contrast to the other methods, requires neither discretization nor simulation of the process, namely the Continuous Empirical Characteristic Function estimator (ECF) based on the unconditional characteristic function. However, the procedure was derived only for stochastic volatility models without jumps. Thus, it became the subject of my research. This thesis consists of three parts, each written as an independent and self-contained article. At the same time, the questions answered by the second and third parts of this work arise naturally from the issues investigated and the results obtained in the first one. The first chapter is the theoretical foundation of the thesis. It proposes an estimation procedure for stochastic volatility models with jumps in both the asset price and variance processes. The estimation procedure is based on the joint unconditional characteristic function of the stochastic process. The major analytical result of this part, as well as of the whole thesis, is the closed-form expression for the joint unconditional characteristic function of stochastic volatility jump-diffusion models. The empirical part of the chapter suggests that, besides stochastic volatility, jumps in both the mean and the volatility equation are relevant for modelling returns of the S&P500 index, which was chosen as a general representative of the stock asset class. Hence, the next question is which jump process to use to model returns of the S&P500. The decision about the jump process in the framework of affine jump-diffusion models boils down to defining the intensity of the compound Poisson process, either a constant or some function of the state variables, and to choosing the distribution of the jump size. While the jump in the variance process is usually assumed to be exponential, there are at least three distributions of the jump size currently used for the asset log-prices: normal, exponential, and double exponential. The second part of this thesis shows that normal jumps in the asset log-returns should be used if we are to model the S&P500 index by a stochastic volatility jump-diffusion model. This is a surprising result: the exponential distribution has fatter tails, and for this reason either the exponential or the double exponential jump size was expected to provide the best fit of the stochastic volatility jump-diffusion models to the data. The idea of testing the efficiency of the Continuous ECF estimator on simulated data had already appeared when the first estimation results of the first chapter were obtained: in the absence of a benchmark or any ground for comparison, we cannot reasonably be sure that our parameter estimates and the true parameters of the models coincide.
The conclusion of the second chapter provides one more reason to perform that kind of test. Thus, the third part of this thesis concentrates on the estimation of the parameters of stochastic volatility jump-diffusion models on the basis of asset price time series simulated from various "true" parameter sets. The goal is to show that the Continuous ECF estimator based on the joint unconditional characteristic function is capable of finding the true parameters, and the third chapter shows that our estimator indeed has the ability to do so. Once it is clear that the Continuous ECF estimator based on the unconditional characteristic function works, the next question naturally arises: can the computational effort be reduced without affecting the efficiency of the estimator, or can the efficiency of the estimator be improved without dramatically increasing the computational burden? The efficiency of the Continuous ECF estimator depends on the number of dimensions of the joint unconditional characteristic function used for its construction. Theoretically, the more dimensions there are, the more efficient the estimation procedure is. In practice, however, this relationship is not so straightforward because of the increasing computational difficulties. The second chapter, for example, in addition to the choice of the jump process, discusses the possibility of using the marginal, i.e. one-dimensional, unconditional characteristic function in the estimation instead of the joint, bi-dimensional, unconditional characteristic function. As a result, the preference for one or the other depends on the model to be estimated; thus, the computational effort can be reduced in some cases without affecting the efficiency of the estimator. Improving the estimator's efficiency by increasing its dimensionality faces more difficulties. The third chapter of this thesis, in addition to what was discussed above, compares the performance of the estimators with bi- and three-dimensional unconditional characteristic functions on simulated data. It shows that the theoretical efficiency of the Continuous ECF estimator based on the three-dimensional unconditional characteristic function is not attainable in practice, at least for the moment, due to the limitations of the computing power and optimization toolboxes available to the general public. Thus, the Continuous ECF estimator based on the joint, bi-dimensional, unconditional characteristic function has every reason to exist and to be used for the estimation of the parameters of stochastic volatility jump-diffusion models.
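To illustrate the characteristic-function-matching principle behind the Continuous ECF estimator discussed above, here is a minimal sketch on a toy model whose unconditional characteristic function is available in closed form: i.i.d. Merton-type jump-diffusion log-returns. The thesis works with the joint unconditional characteristic function of stochastic volatility jump-diffusion models; the model, weight function, and optimizer choices below are illustrative assumptions, not the estimator of the thesis.

```python
# Sketch of ECF-style estimation: match the model characteristic function to the
# empirical characteristic function of the data over a weighted grid of u's.
# Toy model (an assumption, simpler than the thesis's SV jump-diffusion):
# x = mu*dt + sigma*sqrt(dt)*Z + sum of N ~ Poisson(lam*dt) centred normal jumps.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

def model_cf(u, mu, sigma, lam, jump_std, dt=1.0):
    diffusion = 1j * u * mu * dt - 0.5 * (sigma ** 2) * dt * u ** 2
    jumps = lam * dt * (np.exp(-0.5 * (jump_std ** 2) * u ** 2) - 1.0)
    return np.exp(diffusion + jumps)

def simulate_returns(n, mu, sigma, lam, jump_std, dt=1.0):
    counts = rng.poisson(lam * dt, n)
    jump_part = rng.normal(0.0, jump_std, n) * np.sqrt(counts)  # N(0, k*jump_std^2)
    return rng.normal(mu * dt, sigma * np.sqrt(dt), n) + jump_part

x = simulate_returns(20000, mu=0.05, sigma=0.2, lam=0.5, jump_std=0.3)
u_grid = np.linspace(-10.0, 10.0, 201)
weights = np.exp(-0.5 * u_grid ** 2)                    # arbitrary Gaussian weight
emp_cf = np.exp(1j * np.outer(u_grid, x)).mean(axis=1)  # empirical CF on the grid

def objective(theta):
    mu, sigma, lam, jump_std = theta
    if sigma <= 0.0 or lam < 0.0 or jump_std <= 0.0:
        return np.inf
    diff = emp_cf - model_cf(u_grid, mu, sigma, lam, jump_std)
    return float(np.sum(weights * np.abs(diff) ** 2))

res = minimize(objective, x0=np.array([0.0, 0.3, 0.3, 0.2]), method="Nelder-Mead")
print(res.x)  # estimates of (mu, sigma, lam, jump_std); true values (0.05, 0.2, 0.5, 0.3)
# Note: the diffusion and jump variances are only weakly separated by the
# unconditional CF of a single return, so these toy estimates can be noisy.
```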
Abstract:
We investigate the differences --- conceptually and algorithmically --- between affine and projective frameworks for the tasks of visual recognition and reconstruction from perspective views. It is shown that an affine invariant exists between any view and a fixed view chosen as a reference view. This implies that for tasks for which a reference view can be chosen, such as in alignment schemes for visual recognition, projective invariants are not really necessary. We then use the affine invariant to derive new algebraic connections between perspective views. It is shown that three perspective views of an object are connected by certain algebraic functions of image coordinates alone (no structure or camera geometry needs to be involved).
Abstract:
The translation of an ensemble of model runs into a probability distribution is a common task in model-based prediction. Common methods for such ensemble interpretations proceed as if verification and ensemble were draws from the same underlying distribution, an assumption not viable for most, if any, real-world ensembles. An alternative is to consider an ensemble merely as a source of information rather than as the possible scenarios of reality. This approach, which looks for maps between ensembles and probability distributions, is investigated and extended. Common methods are revisited, and an improvement to standard kernel dressing, called ‘affine kernel dressing’ (AKD), is introduced. AKD assumes an affine mapping between ensemble and verification, typically acting not on individual ensemble members but on the entire ensemble as a whole; the parameters of this mapping are determined in parallel with the other dressing parameters, including a weight assigned to the unconditioned (climatological) distribution. These amendments to standard kernel dressing, albeit simple, can improve performance significantly and are shown to be appropriate for both overdispersive and underdispersive ensembles, unlike standard kernel dressing, which exacerbates overdispersion. Studies are presented using operational numerical weather predictions for two locations and data from the Lorenz63 system, demonstrating both effectiveness given operational constraints and statistical significance given a large sample.
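As a simplified sketch of the idea described above (the exact parameterization of the published AKD scheme is not reproduced; the form z_i = a + b·x_i with a fixed Gaussian climatology is an assumption), the following fits the affine map, kernel width, and climatology weight jointly by minimizing the negative log likelihood of past verifications:

```python
# Simplified sketch of affine-kernel-dressing-style ensemble interpretation:
# the whole ensemble is mapped affinely, each mapped member is dressed with a
# Gaussian kernel, and the result is blended with a climatological Gaussian.
# The parameterization and the synthetic data below are assumptions.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(1)

# Synthetic training set: 500 verification values and, for each, a biased,
# underdispersive 10-member ensemble (purely illustrative).
truth = rng.normal(0.0, 2.0, 500)
ens = truth[:, None] + 1.0 + rng.normal(0.0, 0.5, (500, 10))
clim_mean, clim_std = truth.mean(), truth.std()

def neg_log_score(params):
    """Ignorance (negative log likelihood) of the dressed ensemble forecasts."""
    a, b, log_sigma, logit_w = params
    sigma = np.exp(log_sigma)                 # kernel width > 0
    w = 1.0 / (1.0 + np.exp(-logit_w))        # climatology weight in (0, 1)
    z = a + b * ens                           # affine map of the entire ensemble
    kernel = norm.pdf(truth[:, None], loc=z, scale=sigma).mean(axis=1)
    clim = norm.pdf(truth, loc=clim_mean, scale=clim_std)
    return -np.sum(np.log((1.0 - w) * kernel + w * clim + 1e-300))

res = minimize(neg_log_score, x0=[0.0, 1.0, 0.0, -2.0], method="Nelder-Mead")
a, b, log_sigma, logit_w = res.x
print("a=%.2f  b=%.2f  sigma=%.2f  w=%.3f"
      % (a, b, np.exp(log_sigma), 1.0 / (1.0 + np.exp(-logit_w))))
# For this synthetic setup the fit should roughly undo the +1 bias (a near -1
# for b near 1) and keep the climatology weight small.
```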
Abstract:
One may construct, for any function on the integers, an irreducible module of level zero for affine sl(2) using the values of the function as structure constants. The modules constructed using exponential-polynomial functions realize the irreducible modules with finite-dimensional weight spaces in the category Õ of Chari. In this work, an expression for the formal character of such a module is derived using the highest weight theory of truncations of the loop algebra.
Abstract:
Multivariate affine term structure models have been increasingly used for pricing derivatives in fixed income markets. In these models, uncertainty about the term structure is driven by a state vector, while the short rate is an affine function of this vector. The model is characterized by a specific form for the stochastic differential equation (SDE) governing the evolution of the state vector. This SDE imposes restrictions on its drift term which rule out arbitrages in the market. In this paper we solve the following inverse problem: suppose the term structure of interest rates is modeled by a linear combination of Legendre polynomials with random coefficients. Is there any SDE for these coefficients which rules out arbitrages? This problem is of particular empirical interest because the Legendre model is an example of a factor model with a clear interpretation of each factor with respect to movements of the term structure. Moreover, the affine structure of the Legendre model implies knowledge of its conditional characteristic function. From the econometric perspective, we propose arbitrage-free Legendre models to describe the evolution of the term structure. From the pricing perspective, we follow Duffie et al. (2000) in exploring Legendre conditional characteristic functions to obtain a computationally tractable method to price fixed income derivatives. Closing the article, the empirical section presents precise evidence on the reward of implementing arbitrage-free parametric term structure models: the ability to obtain a good approximation of the state vector by simply using cross-sectional data.
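As a minimal illustration of the Legendre factor representation described above, the sketch below writes the yield curve as a linear combination of Legendre polynomials in the rescaled time to maturity and recovers the state vector from a single cross section of yields by least squares. The rescaling of maturities to [-1, 1], the number of factors, and the synthetic data are assumptions for the demo, not the calibration of the cited paper.

```python
# Yield curve as a linear combination of Legendre polynomials; the state vector
# is recovered from cross-sectional yields by ordinary least squares.
import numpy as np
from numpy.polynomial import legendre

def legendre_design(maturities, max_maturity, n_factors):
    s = 2.0 * np.asarray(maturities) / max_maturity - 1.0   # map (0, max] -> (-1, 1]
    # column j holds P_j(s): evaluate each basis polynomial separately
    return np.column_stack([
        legendre.legval(s, np.eye(n_factors)[j]) for j in range(n_factors)
    ])

maturities = np.array([0.5, 1, 2, 3, 5, 7, 10])              # years
A = legendre_design(maturities, max_maturity=10.0, n_factors=4)

# Synthetic cross section: level, slope, and curvature effects plus small noise.
true_x = np.array([0.05, 0.01, -0.004, 0.001])
yields = A @ true_x + np.random.default_rng(2).normal(0, 2e-4, len(maturities))

# State vector from cross-sectional data alone.
x_hat, *_ = np.linalg.lstsq(A, yields, rcond=None)
print(x_hat)   # close to true_x; x_hat[0] acts as the "level" factor, x_hat[1] the "slope"
```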
Abstract:
It is shown that the affine Toda models (AT) constitute a gauge fixed version of the conformal affine Toda model (CAT). This result enables one to map every solution of the AT models into an infinite number of solutions of the corresponding CAT models, each one associated to a point of the orbit of the conformal group. The Hirota τ-functions are introduced and soliton solutions for the AT and CAT models associated to SL̂(r+1) and SP̂(r) are constructed.
Abstract:
Purpose. To determine the mechanisms predisposing to penile fracture, the rate of long-term penile deformity, and long-term erectile and voiding functions. Methods. All fractures were repaired on an emergency basis via a subcoronal incision and absorbable sutures, with simultaneous repair of any urethral lesion. Patients' status before the fracture and their voiding and erectile functions in the long term were assessed by periodic follow-up and phone calls. The detailed history included cause, symptoms, and a single-question self-report of erectile and voiding functions. Results. Among the 44 suspected cases, 42 (95.4%) were confirmed; mean age was 34.5 years (range: 18-60) and mean follow-up 59.3 months (range: 9-155). Half presented the classical triad of audible crack, detumescence, and pain. Heterosexual intercourse was the most common cause (28 patients, 66.7%), followed by penile manipulation (6 patients, 14.3%) and homosexual intercourse (4 patients, 9.5%). Woman on top was the most common heterosexual position (n = 14, 50%), followed by doggy style (n = 8, 28.6%). In four patients (9.5%) the cause remained unclear. Six (14.3%) patients had urethral injury and two (4.8%) had erectile dysfunction, treated by penile prosthesis and PDE-5i. No patient showed urethral fistula, voiding deterioration, penile nodule/curvature, or pain. Conclusions. Woman on top was the potentially riskiest sexual position (50%). Immediate surgical treatment ensures very low long-term morbidity.
Abstract:
Streptococcus sanguinis is a commensal pioneer colonizer of teeth and an opportunistic pathogen of infectious endocarditis. The establishment of S. sanguinis in host sites likely requires dynamic fitting of the cell wall in response to local stimuli. In this study, we investigated the two-component system (TCS) VicRK in S. sanguinis (VicRKSs), which regulates genes of cell wall biogenesis, biofilm formation, and virulence in opportunistic pathogens. A vicK knockout mutant obtained from strain SK36 (SKvic) showed slight reductions in aerobic growth and resistance to oxidative stress but an impaired ability to form biofilms, a phenotype restored in the complemented mutant. The biofilm-defective phenotype was associated with reduced amounts of extracellular DNA during aerobic growth, with reduced production of H2O2, a metabolic product associated with DNA release, and with inhibitory capacity of S. sanguinis competitor species. No changes in autolysis or cell surface hydrophobicity were detected in SKvic. Reverse transcription-quantitative PCR (RT-qPCR), electrophoretic mobility shift assays (EMSA), and promoter sequence analyses revealed that VicR directly regulates genes encoding murein hydrolases (SSA_0094, cwdP, and gbpB) and spxB, which encodes pyruvate oxidase for H2O2 production. Genes previously associated with spxB expression (spxR, ccpA, ackA, and tpK) were not transcriptionally affected in SKvic. RT-qPCR analyses of S. sanguinis biofilm cells further showed upregulation of VicRK targets (spxB, gbpB, and SSA_0094) and other genes for biofilm formation (gtfP and comE) compared to expression in planktonic cells. This study provides evidence that VicRKSs regulates functions crucial for S. sanguinis establishment in biofilms and identifies novel VicRK targets potentially involved in hydrolytic activities of the cell wall required for these functions.