941 results for Bivariate Hermite polynomials
Abstract:
In this paper, the residual Kullback–Leibler discrimination information measure is extended to conditionally specified models. The extension is used to characterize some bivariate distributions. These distributions are also characterized in terms of proportional hazard rate models and weighted distributions. Moreover, we also obtain some bounds for this dynamic discrimination function by using the likelihood ratio order and some preceding results.
Abstract:
In this paper, a family of bivariate distributions whose marginals are weighted distributions in the original variables is studied. The relationships between the failure rates of the derived and original models are obtained. These relationships are used to provide some characterizations of specific bivariate models.
Characterizations of Bivariate Models Using Some Dynamic Conditional Information Divergence Measures
Abstract:
In this article, we study some relevant information divergence measures, namely the Rényi divergence and Kerridge's inaccuracy measures. These measures are extended to conditionally specified models and are used to characterize some bivariate distributions using the concepts of weighted and proportional hazard rate models. Moreover, some bounds are obtained for these measures using the likelihood ratio order.
Abstract:
In this work, we present a generic formula for the polynomial solution families of the well-known differential equation of hypergeometric type σ(x)y''_n(x) + τ(x)y'_n(x) - λ_n y_n(x) = 0 and show that all three classical orthogonal polynomial families, as well as the three finite orthogonal polynomial families extracted from this equation, can be identified as special cases of this derived polynomial sequence. Some general properties of this sequence are also given.
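As a small numerical illustration of the equation above (an example chosen here, not taken from the paper): the Hermite polynomial H_3(x) = 8x^3 - 12x solves the hypergeometric-type equation with σ(x) = 1, τ(x) = -2x, and, in the sign convention σy'' + τy' - λ_n y = 0, λ_n = -2n.

```python
# Check that H_3(x) = 8x^3 - 12x satisfies y'' - 2x y' - lambda_n y = 0
# with lambda_3 = -6 (sign convention of the equation as written above).

def poly_eval(coeffs, x):
    """Evaluate a polynomial with coefficients [c0, c1, ...] by Horner's rule."""
    result = 0.0
    for c in reversed(coeffs):
        result = result * x + c
    return result

def poly_deriv(coeffs):
    """Coefficients of the derivative polynomial."""
    return [i * c for i, c in enumerate(coeffs)][1:]

n = 3
h3 = [0.0, -12.0, 0.0, 8.0]    # H_3(x) = 8x^3 - 12x
h3p = poly_deriv(h3)           # H_3'(x) = 24x^2 - 12
h3pp = poly_deriv(h3p)         # H_3''(x) = 48x
lam = -2.0 * n                 # lambda_3 = -6 in this convention

for x in (-2.0, -0.5, 0.0, 1.3, 2.7):
    residual = poly_eval(h3pp, x) - 2 * x * poly_eval(h3p, x) - lam * poly_eval(h3, x)
    assert abs(residual) < 1e-9, (x, residual)
```

The same check works for the Laguerre and Jacobi families with their respective σ, τ, and λ_n.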
Abstract:
This article surveys the classical orthogonal polynomial systems of the Hahn class, which are solutions of second-order differential, difference or q-difference equations. Orthogonal families satisfy three-term recurrence equations. Example applications of an algorithm to determine whether a three-term recurrence equation has solutions in the Hahn class - implemented in the computer algebra system Maple - are given. Modifications of these families, in particular associated orthogonal systems, satisfy fourth-order operator equations. A factorization of these equations leads to a solution basis.
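The three-term recurrence mentioned above is easy to exercise concretely; as a minimal sketch (the Legendre case, chosen for illustration and not part of the Maple implementation described in the survey), the recurrence (n+1)P_{n+1}(x) = (2n+1)xP_n(x) - nP_{n-1}(x) evaluates P_n stably:

```python
# Evaluate Legendre polynomials via their three-term recurrence:
# (k+1) P_{k+1}(x) = (2k+1) x P_k(x) - k P_{k-1}(x),  P_0 = 1, P_1 = x.

def legendre(n, x):
    """Evaluate P_n(x) using the three-term recurrence."""
    p_prev, p_curr = 1.0, x
    if n == 0:
        return p_prev
    for k in range(1, n):
        p_prev, p_curr = p_curr, ((2 * k + 1) * x * p_curr - k * p_prev) / (k + 1)
    return p_curr

# Standard normalization check: P_n(1) = 1 for every n.
assert all(abs(legendre(n, 1.0) - 1.0) < 1e-12 for n in range(10))
# Closed form check: P_2(x) = (3x^2 - 1)/2.
assert abs(legendre(2, 0.5) - (3 * 0.25 - 1) / 2) < 1e-12
```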
Abstract:
Various results on the parity of the number of irreducible factors of given polynomials over finite fields have been obtained in the recent literature. These are mainly based on Swan's theorem, in which discriminants of polynomials over a finite field or over the ring of integers Z play an important role. In this paper we consider discriminants of the composition of some polynomials over finite fields. The relation between the discriminant of a composed polynomial and those of the original ones will be established. We apply this to obtain some results concerning the parity of the number of irreducible factors for several special polynomials over finite fields.
Abstract:
Irreducible trinomials of a given degree n over F_2 do not always exist, and in cases where there is no irreducible trinomial of degree n it may be effective to use trinomials with an irreducible factor of degree n. In this paper we consider some conditions under which irreducible polynomials divide trinomials over F_2. A condition for divisibility of self-reciprocal trinomials by irreducible polynomials over F_2 is established, and we extend Welch's criterion for testing whether an irreducible polynomial divides trinomials x^m + x^s + 1 to trinomials of the form x^(am) + x^(bs) + 1.
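The underlying divisibility test can be sketched directly (an illustrative implementation, not the paper's criterion): an irreducible p(x) over F_2 divides x^m + x^s + 1 exactly when x^m + x^s + 1 reduces to zero in F_2[x]/(p(x)). Encoding polynomials as Python ints (bit i representing x^i):

```python
# F_2 polynomial arithmetic on bitmask-encoded polynomials.

def poly_mod(a, p):
    """Reduce polynomial a modulo p over F_2."""
    dp = p.bit_length() - 1
    while a.bit_length() - 1 >= dp:
        a ^= p << (a.bit_length() - 1 - dp)
    return a

def poly_mul_mod(a, b, p):
    """Multiply two F_2 polynomials modulo p (shift-and-xor)."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        a = poly_mod(a << 1, p)
        b >>= 1
    return poly_mod(r, p)

def x_pow_mod(m, p):
    """Compute x^m mod p over F_2 by square-and-multiply."""
    result, base = 1, poly_mod(2, p)   # 2 encodes x
    while m:
        if m & 1:
            result = poly_mul_mod(result, base, p)
        base = poly_mul_mod(base, base, p)
        m >>= 1
    return result

def divides_trinomial(p, m, s):
    """True iff p(x) divides x^m + x^s + 1 over F_2."""
    return x_pow_mod(m, p) ^ x_pow_mod(s, p) ^ 1 == 0

P = 0b10011                          # x^4 + x + 1, irreducible over F_2
assert divides_trinomial(P, 4, 1)    # p divides itself
assert divides_trinomial(P, 19, 16)  # x^15 = 1 mod p, so x^19 + x^16 + 1 = 0
assert not divides_trinomial(P, 5, 2)
```

The square-and-multiply step keeps the test cheap even for the very large exponents that arise in the trinomial searches the abstract alludes to.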
Abstract:
Using the functional approach, we state and prove a characterization theorem for classical orthogonal polynomials on non-uniform lattices (quadratic lattices of a discrete or a q-discrete variable) including the Askey-Wilson polynomials. This theorem proves the equivalence between seven characterization properties, namely the Pearson equation for the linear functional, the second-order divided-difference equation, the orthogonality of the derivatives, the Rodrigues formula, two types of structure relations, and the Riccati equation for the formal Stieltjes function.
Abstract:
The aim of this work is to find simple formulas for the moments μ_n for all families of classical orthogonal polynomials listed in the book by Koekoek, Lesky and Swarttouw. The generating functions or exponential generating functions for those moments are given.
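For one family such a closed-form moment is easy to verify numerically; as a sketch (the Hermite case, an example chosen here rather than a formula quoted from the paper), the moments of the weight w(x) = exp(-x^2) on the real line are μ_n = 0 for odd n and μ_n = Γ((n+1)/2) for even n:

```python
import math

def moment_numeric(n, a=-9.0, b=9.0, steps=100_000):
    """Midpoint-rule approximation of the integral of x^n * exp(-x^2)."""
    h = (b - a) / steps
    return sum((a + (i + 0.5) * h) ** n * math.exp(-(a + (i + 0.5) * h) ** 2)
               for i in range(steps)) * h

for n in range(7):
    exact = 0.0 if n % 2 else math.gamma((n + 1) / 2)
    assert abs(moment_numeric(n) - exact) < 1e-4, n
```

For instance μ_0 = Γ(1/2) = √π and μ_2 = Γ(3/2) = √π/2, both reproduced by the quadrature above.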
Abstract:
In this work, we have mainly achieved the following: 1. we provide a review of the main methods used for the computation of the connection and linearization coefficients between orthogonal polynomials of a continuous variable, and using a new approach, the duplication problem of these polynomial families is solved; 2. we review the main methods used for the computation of the connection and linearization coefficients of orthogonal polynomials of a discrete variable, and we solve the duplication and linearization problems for all orthogonal polynomials of a discrete variable; 3. we propose a method to generate the connection, linearization and duplication coefficients for q-orthogonal polynomials; 4. we propose a unified method to obtain these coefficients in a generic way for orthogonal polynomials on quadratic and q-quadratic lattices. Our algorithmic approach to computing linearization, connection and duplication coefficients is based on the one used by Koepf and Schmersau and on the NaViMa algorithm. Our main technique is to use explicit formulas for structural identities of classical orthogonal polynomial systems. We find our results by an application of computer algebra. The major algorithmic tools for our development are Zeilberger's algorithm, the q-Zeilberger algorithm, the Petkovšek-van-Hoeij algorithm, the q-Petkovšek-van-Hoeij algorithm, and Algorithm 2.2, p. 20 of Koepf's book "Hypergeometric Summation" and its q-analogue.
Abstract:
A joint distribution of two discrete random variables with finite support can be displayed as a two-way table of probabilities adding to one. Assume that this table has n rows and m columns and all probabilities are non-null. This kind of table can be seen as an element in the simplex of n · m parts. In this context, the marginals are identified as compositional amalgams, and conditionals (rows or columns) as subcompositions. Also, simplicial perturbation appears as Bayes theorem. However, the Euclidean elements of the Aitchison geometry of the simplex can also be translated into the table of probabilities: subspaces, orthogonal projections, distances. Two important questions are addressed: a) given a table of probabilities, which is the nearest independent table to the initial one? b) which is the largest orthogonal projection of a row onto a column? or, equivalently, which is the information in a row explained by a column, thus explaining the interaction? To answer these questions three orthogonal decompositions are presented: (1) by columns and a row-wise geometric marginal, (2) by rows and a column-wise geometric marginal, (3) by independent two-way tables and fully dependent tables representing row-column interaction. An important result is that the nearest independent table is the product of the two (row and column)-wise geometric marginal tables. A corollary is that, in an independent table, the geometric marginals conform with the traditional (arithmetic) marginals. These decompositions can be compared with standard log-linear models. Key words: balance, compositional data, simplex, Aitchison geometry, composition, orthonormal basis, arithmetic and geometric marginals, amalgam, dependence measure, contingency table
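The stated result, that the nearest independent table is the closed product of the row-wise and column-wise geometric marginals, can be sketched directly (the table below and the function names are illustrative, not from the paper):

```python
import math

def geometric_mean(xs):
    """Geometric mean of strictly positive values."""
    return math.exp(sum(math.log(x) for x in xs) / len(xs))

def nearest_independent(table):
    """Closure of the outer product of row- and column-wise geometric means,
    which the abstract identifies as the nearest independent table."""
    rows = [geometric_mean(row) for row in table]
    cols = [geometric_mean(col) for col in zip(*table)]
    q = [[r * c for c in cols] for r in rows]
    total = sum(map(sum, q))
    return [[v / total for v in row] for row in q]

P = [[0.10, 0.20, 0.05],
     [0.15, 0.30, 0.20]]
Q = nearest_independent(P)

# Q sums to one and is rank 1 (every 2x2 minor vanishes), i.e. independent.
assert abs(sum(map(sum, Q)) - 1.0) < 1e-12
assert abs(Q[0][0] * Q[1][1] - Q[0][1] * Q[1][0]) < 1e-15
```

The rank-1 check is exactly the independence condition Q_ij = (row marginal)_i · (column marginal)_j.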
Abstract:
Recently, various approaches have been suggested for dose escalation studies based on observations of both undesirable events and evidence of therapeutic benefit. This article concerns a Bayesian approach to dose escalation that requires the user to make numerous design decisions relating to the number of doses to make available, the choice of the prior distribution, the imposition of safety constraints and stopping rules, and the criteria by which the design is to be optimized. Results are presented of a substantial simulation study conducted to investigate the influence of some of these factors on the safety and the accuracy of the procedure with a view toward providing general guidance for investigators conducting such studies. The Bayesian procedures evaluated use logistic regression to model the two responses, which are both assumed to be binary. The simulation study is based on features of a recently completed study of a compound with potential benefit to patients suffering from inflammatory diseases of the lung.
Abstract:
In clinical trials, situations often arise where more than one response from each patient is of interest, and it is required that any decision to stop the study be based upon some or all of these measures simultaneously. Theory for the design of sequential experiments with simultaneous bivariate responses is described by Jennison and Turnbull (Jennison, C., Turnbull, B. W. (1993). Group sequential tests for bivariate response: interim analyses of clinical trials with both efficacy and safety endpoints. Biometrics 49:741-752) and Cook and Farewell (Cook, R. J., Farewell, V. T. (1994). Guidelines for monitoring efficacy and toxicity responses in clinical trials. Biometrics 50:1146-1152) in the context of one efficacy and one safety response. These expositions are in terms of normally distributed data with known covariance. The methods proposed require specification of the correlation, ρ, between test statistics monitored as part of the sequential test. It can be difficult to quantify ρ, and previous authors have suggested simply taking the lowest plausible value, as this will guarantee power. This paper begins with an illustration of the effect that inappropriate specification of ρ can have on the preservation of trial error rates. It is shown that both the type I error and the power can be adversely affected. As a possible solution to this problem, formulas are provided for the calculation of correlation from data collected as part of the trial. An adaptive approach is proposed and evaluated that makes use of these formulas, and an example is provided to illustrate the method. Attention is restricted to the bivariate case for ease of computation, although the formulas derived are applicable in the general multivariate case.
Abstract:
In this paper, Bayesian decision procedures are developed for dose-escalation studies based on binary measures of undesirable events and continuous measures of therapeutic benefit. The methods generalize earlier approaches where undesirable events and therapeutic benefit are both binary. A logistic regression model is used to model the binary responses, while a linear regression model is used to model the continuous responses. Prior distributions for the unknown model parameters are suggested. A gain function is discussed and an optional safety constraint is included. Copyright (C) 2006 John Wiley & Sons, Ltd.
Abstract:
Generalizing the notion of an eigenvector, invariant subspaces are frequently used in the context of linear eigenvalue problems, leading to conceptually elegant and numerically stable formulations in applications that require the computation of several eigenvalues and/or eigenvectors. Similar benefits can be expected for polynomial eigenvalue problems, for which the concept of an invariant subspace needs to be replaced by the concept of an invariant pair. Little has been known so far about numerical aspects of such invariant pairs. The aim of this paper is to fill this gap. The behavior of invariant pairs under perturbations of the matrix polynomial is studied and a first-order perturbation expansion is given. From a computational point of view, we investigate how to best extract invariant pairs from a linearization of the matrix polynomial. Moreover, we describe efficient refinement procedures directly based on the polynomial formulation. Numerical experiments with matrix polynomials from a number of applications demonstrate the effectiveness of our extraction and refinement procedures.
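The correspondence between eigenpairs of a matrix polynomial and of its linearization can be illustrated on a toy example (the matrices and the first companion form below are chosen for this sketch, not taken from the paper): for P(λ) = λ²M + λC + K, the pencil A - λB with A = [[0, I], [-K, -C]] and B = [[I, 0], [0, M]] has eigenvectors z = [x; λx] whenever (λ, x) is an eigenpair of P.

```python
# Verify the companion-linearization correspondence on a 2x2 quadratic
# eigenvalue problem P(lambda) = lambda^2 I + K with K = diag(-4, -9),
# whose eigenvalues are +-2 and +-3.

def matvec(m, v):
    """Plain matrix-vector product on nested lists."""
    return [sum(m[i][j] * v[j] for j in range(len(v))) for i in range(len(m))]

K = [[-4.0, 0.0], [0.0, -9.0]]

# First companion pencil A z = lambda B z (M = I, C = 0 here):
A = [[0.0, 0.0, 1.0, 0.0],
     [0.0, 0.0, 0.0, 1.0],
     [4.0, 0.0, 0.0, 0.0],    # -K block
     [0.0, 9.0, 0.0, 0.0]]
B = [[1.0, 0.0, 0.0, 0.0],
     [0.0, 1.0, 0.0, 0.0],
     [0.0, 0.0, 1.0, 0.0],
     [0.0, 0.0, 0.0, 1.0]]

lam, x = 2.0, [1.0, 0.0]      # eigenpair of P: lambda^2 - 4 = 0

# Residual of the polynomial problem: P(lambda) x = lambda^2 x + K x = 0.
px = [lam**2 * xi + ki for xi, ki in zip(x, matvec(K, x))]
assert max(abs(v) for v in px) < 1e-12

# The stacked vector z = [x; lambda x] is an eigenvector of the pencil.
z = x + [lam * xi for xi in x]
Az, Bz = matvec(A, z), matvec(B, z)
assert max(abs(a - lam * b) for a, b in zip(Az, Bz)) < 1e-12
```

Extracting the leading block of z recovers the eigenvector x of P, which is the extraction step whose numerical behavior the paper studies.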