57 results for Orthogonal polynomials of a discrete variable
at Consorci de Serveis Universitaris de Catalunya (CSUC), Spain
Abstract:
The main aim of this short paper is to advertise the Koosis theorem in the mathematical community, especially among those who study orthogonal polynomials. We (try to) do this by proving a new theorem about asymptotics of orthogonal polynomials for which the Koosis theorem seems to be the most natural tool. Namely, we consider the case when a Szegő measure on the unit circle is perturbed by an arbitrary measure inside the unit disk and an arbitrary Blaschke sequence of point masses outside the unit disk.
Abstract:
A joint distribution of two discrete random variables with finite support can be displayed as a two-way table of probabilities adding to one. Assume that this table has n rows and m columns and that all probabilities are non-null. Such a table can be seen as an element of the simplex of n · m parts. In this context, the marginals are identified as compositional amalgams and the conditionals (rows or columns) as subcompositions; simplicial perturbation appears as Bayes' theorem. The Euclidean elements of the Aitchison geometry of the simplex can also be translated into the table of probabilities: subspaces, orthogonal projections, distances. Two important questions are addressed: (a) given a table of probabilities, which is the nearest independent table to the initial one? (b) which is the largest orthogonal projection of a row onto a column or, equivalently, which is the information in a row explained by a column, thus explaining the interaction? To answer these questions, three orthogonal decompositions are presented: (1) by columns and a row-wise geometric marginal; (2) by rows and a column-wise geometric marginal; (3) by independent two-way tables and fully dependent tables representing row-column interaction. An important result is that the nearest independent table is the product of the two (row- and column-wise) geometric marginal tables. A corollary is that, in an independent table, the geometric marginals conform with the traditional (arithmetic) marginals. These decompositions can be compared with standard log-linear models.
Key words: balance, compositional data, simplex, Aitchison geometry, composition, orthonormal basis, arithmetic and geometric marginals, amalgam, dependence measure, contingency table
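The nearest-independent-table result described above can be sketched numerically. The 2×3 probability table below is hypothetical, and `closure` is a helper name assumed here for normalization, not notation from the paper:

```python
import numpy as np

def closure(x):
    """Normalize a non-negative array so its entries sum to one."""
    return x / x.sum()

# Hypothetical 2x3 table of joint probabilities, all entries non-null.
P = np.array([[0.10, 0.20, 0.15],
              [0.25, 0.10, 0.20]])

# Row-wise and column-wise geometric marginals: geometric means, then closure.
row_gm = closure(np.exp(np.log(P).mean(axis=1)))
col_gm = closure(np.exp(np.log(P).mean(axis=0)))

# The nearest independent table is the (closed) product of the two
# geometric marginal tables.
P_indep = closure(np.outer(row_gm, col_gm))

# Corollary check: in an independent table, the geometric marginals
# conform with the arithmetic ones, so the row sums of P_indep equal row_gm.
print(np.allclose(P_indep.sum(axis=1), row_gm))  # True
```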
Abstract:
Given a model that can be simulated, conditional moments at a trial parameter value can be calculated with high accuracy by applying kernel smoothing methods to a long simulation. With such conditional moments in hand, standard method of moments techniques can be used to estimate the parameter. Since conditional moments are calculated using kernel smoothing rather than simple averaging, it is not necessary that the model be simulable subject to the conditioning information that is used to define the moment conditions. For this reason, the proposed estimator is applicable to general dynamic latent variable models. Monte Carlo results show that the estimator performs well in comparison to other estimators that have been proposed for estimation of general DLV models.
Abstract:
Given a model that can be simulated, conditional moments at a trial parameter value can be calculated with high accuracy by applying kernel smoothing methods to a long simulation. With such conditional moments in hand, standard method of moments techniques can be used to estimate the parameter. Because conditional moments are calculated using kernel smoothing rather than simple averaging, it is not necessary that the model be simulable subject to the conditioning information that is used to define the moment conditions. For this reason, the proposed estimator is applicable to general dynamic latent variable models. It is shown that as the number of simulations diverges, the estimator is consistent and a higher-order expansion reveals the stochastic difference between the infeasible GMM estimator based on the same moment conditions and the simulated version. In particular, we show how to adjust standard errors to account for the simulations. Monte Carlo results show how the estimator may be applied to a range of dynamic latent variable (DLV) models, and that it performs well in comparison to several other estimators that have been proposed for DLV models.
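As a concrete illustration of the kernel-smoothing step, the sketch below estimates a conditional mean E[y | x = x0] from a long simulated path with a Nadaraya-Watson estimator; the model, bandwidth, and sample size are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def conditional_moment(x_sim, y_sim, x0, h):
    """Kernel-smoothed (Nadaraya-Watson) estimate of E[y | x = x0]
    from simulated pairs, using a Gaussian kernel with bandwidth h."""
    w = np.exp(-0.5 * ((x_sim - x0) / h) ** 2)
    return (w * y_sim).sum() / w.sum()

rng = np.random.default_rng(0)

# Hypothetical simulable model at a trial parameter value: y = x**2 + noise.
x = rng.normal(size=200_000)
y = x ** 2 + 0.1 * rng.normal(size=x.size)

# The smoothed conditional moment at x0 = 1 should be close to E[y | x=1] = 1.
print(conditional_moment(x, y, x0=1.0, h=0.05))
```

Note that nothing here requires simulating y conditional on x = x0: the long unconditional simulation plus smoothing supplies the conditional moment.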
Abstract:
Research project carried out during a stay at the Institut National de la Recherche Agronomique, France, between 2007 and 2009. Saccharomyces cerevisiae is the yeast that has been used for millennia in winemaking. Even so, little is known about the selection pressures that have shaped the genome of wine yeasts. The genome of a commercial wine strain, EC1118, was sequenced, yielding 31 supercontigs that cover 97% of the genome of the reference strain, S288c. The genome of the wine strain was found to differ essentially in possessing three unique regions containing 34 genes involved in functions that are key to the fermentation process. In addition, phylogeny and synteny (gene order) studies show that one of these three regions is close to a species related to the genus Saccharomyces, while the other two regions have a non-Saccharomyces origin. By PCR and sequencing, Zygosaccharomyces bailii, a contaminant species of wine fermentations, was identified as the donor species of one of the two regions. Natural hybridizations between strains of different species within the Saccharomyces sensu stricto group have already been described. This work is the first to report hybridizations between Saccharomyces and non-Saccharomyces species (Z. bailii, in this case). It is also noted that the new regions are frequently and differentially present across the clades of S. cerevisiae, being found almost exclusively in the group of wine strains, which suggests a recent acquisition by gene transfer. Overall, the data show that the genome of wine strains undergoes constant remodelling through the acquisition of exogenous genes.
The results suggest that these processes are favoured by ecological proximity and are involved in the molecular adaptation of wine strains to conditions of high sugar concentration, low nitrogen and high ethanol concentrations.
Abstract:
See the abstract at the beginning of the document in the attached file.
Abstract:
We present I-band deep CCD exposures of the fields of galactic plane radio variables. An optical counterpart, based on positional coincidence, has been found for 15 of the 27 observed program objects. The Johnson I magnitude of the sources identified is in the range 18-21.
Abstract:
We study the scattering of a moving discrete breather (DB) on a junction in a Fermi-Pasta-Ulam chain consisting of two segments with different masses of the particles. We consider four distinct cases: (i) a light-heavy (abrupt) junction in which the DB impinges on the junction from the segment with lighter mass, (ii) a heavy-light junction, (iii) an up mass ramp in which the mass in the heavier segment increases continuously as one moves away from the junction point, and (iv) a down mass ramp. Depending on the mass difference and DB characteristics (frequency and velocity), the DB can either reflect from, or transmit through, or get trapped at the junction or on the ramp. For the heavy-light junction, the DB can even split at the junction into a reflected and a transmitted DB. The latter is found to subsequently split into two or more DBs. For the down mass ramp the DB gets accelerated in several stages, with accompanying radiation (phonons). These results are rationalized by calculating the Peierls-Nabarro barrier for the various cases. We also point out implications of our results in realistic situations such as electron-phonon coupled chains.
Abstract:
Cyclic peptides and peptoids were prepared using the thiol-ene Michael-type reaction. The linear precursors were provided with additional functional groups allowing for subsequent conjugation: an orthogonally protected thiol, a protected maleimide, or an alkyne. The functional group for conjugation was placed either within the cycle or in an external position. The click reactions employed for conjugation with suitably derivatized nucleosides or oligonucleotides were either cycloadditions (Diels-Alder, Cu(I)-catalyzed azide-alkyne) or the same Michael-type reaction as for cyclization.
Abstract:
A thermodynamically consistent damage model for the simulation of progressive delamination under variable mode ratio is presented. The model is formulated in the context of Damage Mechanics. The constitutive equation that results from the definition of the free energy as a function of a damage variable is used to model the initiation and propagation of delamination. A new delamination initiation criterion is developed to assure that the formulation can account for changes in the loading mode in a thermodynamically consistent way. The proposed formulation accounts for crack closure effects, avoiding interfacial penetration of two adjacent layers after complete decohesion. The model is implemented in a finite element formulation. The numerical predictions given by the model are compared with experimental results.
Abstract:
A continuous random variable is expanded as a sum of a sequence of uncorrelated random variables. These variables are principal dimensions in continuous scaling on a distance function, as an extension of classic scaling on a distance matrix. For a particular distance, these dimensions are principal components. Some properties are then studied and an inequality is obtained. Diagonal expansions are considered from the same continuous scaling point of view, by means of the chi-square distance. The geometric dimension of a bivariate distribution is defined and illustrated with copulas. It is shown that the dimension can have the power of the continuum.
Abstract:
When continuous data are coded to categorical variables, two types of coding are possible: crisp coding in the form of indicator, or dummy, variables with values either 0 or 1; or fuzzy coding where each observation is transformed to a set of "degrees of membership" between 0 and 1, using so-called membership functions. It is well known that the correspondence analysis of crisp coded data, namely multiple correspondence analysis, yields principal inertias (eigenvalues) that considerably underestimate the quality of the solution in a low-dimensional space. Since the crisp data only code the categories to which each individual case belongs, an alternative measure of fit is simply to count how well these categories are predicted by the solution. Another approach is to consider multiple correspondence analysis equivalently as the analysis of the Burt matrix (i.e., the matrix of all two-way cross-tabulations of the categorical variables), and then perform a joint correspondence analysis to fit just the off-diagonal tables of the Burt matrix - the measure of fit is then computed as the quality of explaining these tables only. The correspondence analysis of fuzzy coded data, called "fuzzy multiple correspondence analysis", suffers from the same problem, albeit attenuated. Again, one can count how many correct predictions are made of the categories which have the highest degree of membership. But here one can also defuzzify the results of the analysis to obtain estimated values of the original data, and then calculate a measure of fit in the familiar percentage form, thanks to the resultant orthogonal decomposition of variance. Furthermore, if one thinks of fuzzy multiple correspondence analysis as explaining the two-way associations between variables, a fuzzy Burt matrix can be computed and the same strategy as in the crisp case can be applied to analyse the off-diagonal part of this matrix.
In this paper these alternative measures of fit are defined and applied to a data set of continuous meteorological variables, which are coded crisply and fuzzily into three categories. Measuring the fit is further discussed when the data set consists of a mixture of discrete and continuous variables.
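A minimal sketch of the three-category fuzzy coding described above, using triangular membership functions; the hinge points below are hypothetical values chosen by the analyst, not the paper's meteorological categories. Defuzzification is the reverse mapping back to the original scale:

```python
import numpy as np

def fuzzy_code(x, low, mid, high):
    """Fuzzy-code a value into three categories using triangular
    membership functions with hinges at low, mid and high; the three
    degrees of membership lie in [0, 1] and sum to one."""
    x = np.clip(x, low, high)
    if x <= mid:
        m = (x - low) / (mid - low)
        return np.array([1.0 - m, m, 0.0])
    m = (x - mid) / (high - mid)
    return np.array([0.0, 1.0 - m, m])

# Hypothetical hinges for a temperature variable, in degrees Celsius.
hinges = np.array([0.0, 15.0, 30.0])
m = fuzzy_code(10.0, *hinges)
print(m)           # degrees of membership, summing to 1

# Defuzzification: the hinge-weighted sum recovers the original value.
print(hinges @ m)
```

Crisp coding of the same value would instead produce a pure indicator vector for the single category containing it, losing the within-category position that the fuzzy degrees retain.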
Abstract:
For the standard kernel density estimate, it is known that one can tune the bandwidth such that the expected L1 error is within a constant factor of the optimal L1 error (obtained when one is allowed to choose the bandwidth with knowledge of the density). In this paper, we pose the same problem for variable bandwidth kernel estimates where the bandwidths are allowed to depend upon the location. We show in particular that for positive kernels on the real line, for any data-based bandwidth, there exists a density for which the ratio of expected L1 error over optimal L1 error tends to infinity. Thus, the problem of tuning the variable bandwidth in an optimal manner is "too hard". Moreover, from the class of counterexamples exhibited in the paper, it appears that placing conditions on the densities (monotonicity, convexity, smoothness) does not help.
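For contrast with the variable-bandwidth setting studied in the paper, the sketch below shows the fixed-bandwidth baseline: the L1 error of a Gaussian kernel density estimate of a standard normal density for a few bandwidths. The sample size, grid, and bandwidth values are illustrative assumptions:

```python
import numpy as np

def kde(data, grid, h):
    """Fixed-bandwidth Gaussian kernel density estimate evaluated on a grid."""
    u = (grid[:, None] - data[None, :]) / h
    return np.exp(-0.5 * u ** 2).mean(axis=1) / (h * np.sqrt(2 * np.pi))

rng = np.random.default_rng(1)
data = rng.normal(size=2000)          # sample from the standard normal
grid = np.linspace(-5.0, 5.0, 1001)
true = np.exp(-0.5 * grid ** 2) / np.sqrt(2 * np.pi)

# Approximate the L1 error on the grid for a few bandwidths; a moderate
# bandwidth beats both under-smoothing and over-smoothing.
dx = grid[1] - grid[0]
errors = {h: np.abs(kde(data, grid, h) - true).sum() * dx
          for h in (0.05, 0.2, 0.8)}
print(errors)
```

The paper's negative result concerns the harder problem of letting h vary with location and choosing it from the data alone, uniformly over all densities.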
Abstract:
This paper analyzes the nature of health care provider choice in the case of patient-initiated contacts, with special reference to a National Health Service setting, where monetary prices are zero and general practitioners act as gatekeepers to publicly financed specialized care. We focus our attention on the factors that may explain the continuously increasing use of hospital emergency visits as opposed to other provider alternatives. An extended version of a discrete choice model of demand for patient-initiated contacts is presented, allowing for individual and town residence size differences in perceived quality (preferences) between alternative providers and including travel and waiting time as non-monetary costs. Results of a nested multinomial logit model of provider choice are presented. Individual choice between alternatives considers, in a repeated nested structure, self-care, primary care, hospital and clinic emergency services. Welfare implications and income effects are analyzed by computing compensating variations, and by simulating the effects of user fees by levels of income. Results indicate that the compensating variation per visit is higher than the direct marginal cost of emergency visits, and consequently, emergency visits do not appear as an inefficient alternative even for non-urgent conditions.
Abstract:
A biplot, which is the multivariate generalization of the two-variable scatterplot, can be used to visualize the results of many multivariate techniques, especially those that are based on the singular value decomposition. We consider data sets consisting of continuous-scale measurements, their fuzzy coding and the biplots that visualize them, using a fuzzy version of multiple correspondence analysis. Of special interest is the way quality of fit of the biplot is measured, since it is well-known that regular (i.e., crisp) multiple correspondence analysis seriously under-estimates this measure. We show how the results of fuzzy multiple correspondence analysis can be defuzzified to obtain estimated values of the original data, and prove that this implies an orthogonal decomposition of variance. This permits a measure of fit to be calculated in the familiar form of a percentage of explained variance, which is directly comparable to the corresponding fit measure used in principal component analysis of the original data. The approach is motivated initially by its application to a simulated data set, showing how the fuzzy approach can lead to diagnosing nonlinear relationships, and finally it is applied to a real set of meteorological data.