980 results for zeros of polynomials
Abstract:
One of the great challenges for the scientific community working on theories of genetic information, genetic communication and genetic coding is to determine a mathematical structure related to DNA sequences. In this paper we propose a model of an intra-cellular transmission system of genetic information, similar to a model of a power- and bandwidth-efficient digital communication system, in order to identify a mathematical structure in biologically relevant DNA sequences. The model of a transmission system of genetic information is concerned with the identification, reproduction and mathematical classification of the nucleotide sequence of single-stranded DNA by the genetic encoder. Hence, a genetic encoder is devised in which labelings and cyclic codes are established. Establishing the algebraic structure of the corresponding code alphabets, mappings, labelings, primitive polynomials p(x) and code generator polynomials g(x) is quite important in characterizing subclasses of error-correcting G-linear codes. These latter codes are useful for the identification, reproduction and mathematical classification of DNA sequences. The characterization of this model may contribute to the development of a methodology applicable to mutational analysis and polymorphisms, the production of new drugs and genetic improvement, among other things, resulting in reduced time and laboratory costs.
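The labeling and cyclic-code machinery can be illustrated with a small sketch. The nucleotide-to-Z4 labeling, the code length N and the generator polynomial g below are illustrative choices, not the ones established in the paper.

```python
from itertools import product

# Hypothetical sketch of a cyclic code over Z4 for DNA words. The
# labeling, code length N and generator polynomial g are illustrative
# choices, not the paper's.
N = 7
LABEL = {'A': 0, 'C': 1, 'G': 2, 'T': 3}

def polymul_mod(a, b, n, q=4):
    """Multiply coefficient sequences a*b modulo (x^n - 1) over Z_q."""
    out = [0] * n
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[(i + j) % n] = (out[(i + j) % n] + ai * bj) % q
    return tuple(out)

g = (1, 1, 0, 1)        # illustrative generator polynomial 1 + x + x^3

# The code is the ideal generated by g(x) in Z4[x]/(x^N - 1).
code = {polymul_mod(m, g, N) for m in product(range(4), repeat=N)}

def cyclic_shift(c):
    return (c[-1],) + c[:-1]

# Cyclic codes are exactly those closed under cyclic shifts of codewords.
assert all(cyclic_shift(c) in code for c in code)

def dna_to_word(seq):
    """Map a DNA string to a vector over Z4 via the chosen labeling."""
    return tuple(LABEL[s] for s in seq)

print(dna_to_word('ACGTACG'))   # (0, 1, 2, 3, 0, 1, 2)
```

Checking membership of a labeled DNA word in such a code (and its syndrome when it fails) is the kind of classification step the encoder model performs.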
Abstract:
PURPOSE: The main goal of this study was to develop and compare two different techniques for the classification of specific types of corneal shapes when Zernike coefficients are used as inputs: a feed-forward artificial neural network (NN) and discriminant analysis (DA). METHODS: The inputs for both the NN and DA were the first 15 standard Zernike coefficients for 80 previously classified corneal elevation data files from an Eyesys System 2000 Videokeratograph (VK) installed at the Departamento de Oftalmologia of the Escola Paulista de Medicina, São Paulo. The NN had 5 output neurons associated with 5 typical corneal shapes: keratoconus, with-the-rule astigmatism, against-the-rule astigmatism, "regular" or "normal" shape, and post-PRK. RESULTS: The NN and DA responses were statistically analyzed in terms of precision ([true positive + true negative]/total number of cases). Mean overall results over all cases were 94% for the NN and 84.8% for the DA. CONCLUSION: Although we used a relatively small database, the results obtained in the present study indicate that Zernike polynomials as descriptors of corneal shape may be reliable input parameters for the diagnostic automation of VK maps, using either NN or DA.
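The precision score quoted above is an accuracy-style measure and can be stated in a few lines; the five labelled cases below are invented for illustration.

```python
# Minimal sketch of the precision score used above,
# (true positives + true negatives) / total number of cases.
# The five labelled cases are invented for illustration.
def precision(y_true, y_pred):
    correct = sum(t == p for t, p in zip(y_true, y_pred))
    return correct / len(y_true)

y_true = ['keratoconus', 'normal', 'post-PRK', 'normal', 'keratoconus']
y_pred = ['keratoconus', 'normal', 'normal',   'normal', 'keratoconus']
print(precision(y_true, y_pred))   # 4 of 5 cases correct: 0.8
```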
Abstract:
We present STAR results on the elliptic flow v_2 of charged hadrons and of strange and multistrange particles from sqrt(s_NN) = 200 GeV Au+Au collisions at the BNL Relativistic Heavy Ion Collider (RHIC). A detailed study of the centrality dependence of v_2 over a broad transverse momentum range is presented. Comparisons of different analysis methods are made in order to estimate systematic uncertainties. To assess the nonflow effect, we have performed the first analysis of v_2 with the Lee-Yang zero method for K_S^0 and Lambda. In the relatively low p_T region, p_T <= 2 GeV/c, a scaling with m_T - m is observed for identified hadrons in each centrality bin studied. However, we do not observe v_2(p_T) scaled by the participant eccentricity to be independent of centrality. At higher p_T, 2 <= p_T <= 6 GeV/c, v_2 scales with quark number for all hadrons studied. For the multistrange hadron Omega, which does not suffer appreciable hadronic interactions, the values of v_2 are consistent with both m_T - m scaling at low p_T and number-of-quark scaling at intermediate p_T. As a function of collision centrality, an increase of the p_T-integrated v_2 scaled by the participant eccentricity has been observed, indicating stronger collective flow in more central Au+Au collisions.
Abstract:
The problem of semialgebraic Lipschitz classification of quasihomogeneous polynomials on a Hölder triangle is studied. For this problem, the "moduli" are described completely in certain combinatorial terms.
Abstract:
In this paper, the method of Galerkin and the Askey-Wiener scheme are used to obtain approximate solutions to the stochastic displacement response of Kirchhoff plates with uncertain parameters. Theoretical and numerical results are presented. The Lax-Milgram lemma is used to express the conditions for existence and uniqueness of the solution. Uncertainties in plate and foundation stiffness are modeled by respecting these conditions, hence using Legendre polynomials indexed in uniform random variables. The space of approximate solutions is built using results of density between the space of continuous functions and Sobolev spaces. Approximate Galerkin solutions are compared with results of Monte Carlo simulation, in terms of first- and second-order moments and in terms of histograms of the displacement response. Numerical results for two example problems show very fast convergence to the exact solution, with excellent accuracy. The Askey-Wiener Galerkin scheme developed herein is able to reproduce the histogram of the displacement response. The scheme is shown to be a theoretically sound and efficient method for the solution of stochastic problems in engineering. (C) 2009 Elsevier Ltd. All rights reserved.
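The Askey-Wiener idea of indexing Legendre polynomials in uniform random variables can be sketched in a scalar setting (not the plate problem itself): project a function of a uniform variable onto Legendre polynomials and read its mean and variance off the chaos coefficients. The response function exp(xi) is an arbitrary stand-in.

```python
import numpy as np
from numpy.polynomial.legendre import leggauss, legval

# Sketch: expand u(xi) = exp(xi), xi ~ Uniform(-1, 1), in Legendre
# polynomials and recover mean and variance from the coefficients.
nodes, weights = leggauss(20)          # Gauss-Legendre quadrature rule

def legendre_coeff(f, k):
    # c_k = (2k+1)/2 * integral_{-1}^{1} f(x) P_k(x) dx
    Pk = legval(nodes, [0] * k + [1])
    return (2 * k + 1) / 2 * np.sum(weights * f(nodes) * Pk)

order = 8
c = np.array([legendre_coeff(np.exp, k) for k in range(order + 1)])

# Under the uniform density 1/2 on [-1, 1], E[P_k P_j] = delta_kj / (2k+1).
mean_pc = c[0]
var_pc = np.sum(c[1:]**2 / (2 * np.arange(1, order + 1) + 1))

# Exact moments of exp(xi) for comparison.
mean_exact = (np.e - 1 / np.e) / 2
var_exact = (np.e**2 - np.e**-2) / 4 - mean_exact**2
print(abs(mean_pc - mean_exact) < 1e-10, abs(var_pc - var_exact) < 1e-6)
```

In the plate problem the same expansion is carried by the Galerkin basis in space, with the Legendre polynomials acting on the uniform random stiffness parameters.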
Abstract:
This paper presents an accurate and efficient solution for the random transverse and angular displacement fields of uncertain Timoshenko beams. Approximate, numerical solutions are obtained using the Galerkin method and chaos polynomials. The Chaos-Galerkin scheme is constructed by respecting the theoretical conditions for existence and uniqueness of the solution. Numerical results show fast convergence to the exact solution, with excellent accuracy. The developed Chaos-Galerkin scheme accurately approximates the complete cumulative distribution function of the displacement responses. The Chaos-Galerkin scheme developed herein is a theoretically sound and efficient method for the solution of stochastic problems in engineering. (C) 2011 Elsevier Ltd. All rights reserved.
Abstract:
The objective of the present study was to estimate milk yield genetic parameters applying random regression models and parametric correlation functions combined with a variance function to model animal permanent environmental effects. A total of 152,145 test-day milk yields from 7,317 first lactations of Holstein cows belonging to herds located in the southeastern region of Brazil were analyzed. Test-day milk yields were divided into 44 weekly classes of days in milk. Contemporary groups were defined by herd-test-day, comprising a total of 2,539 classes. The model included direct additive genetic, permanent environmental, and residual random effects. The following fixed effects were considered: contemporary group, age of cow at calving (linear and quadratic regressions), and the population average lactation curve modeled by a fourth-order orthogonal Legendre polynomial. Additive genetic effects were modeled by random regression on orthogonal Legendre polynomials of days in milk, whereas permanent environmental effects were estimated using a stationary or nonstationary parametric correlation function combined with a variance function of different orders. The structure of residual variances was modeled using a step function containing 6 variance classes. The genetic parameter estimates obtained with the model using a stationary correlation function associated with a variance function to model permanent environmental effects were similar to those obtained with models employing orthogonal Legendre polynomials for the same effect. A model using a sixth-order polynomial for additive effects and a stationary parametric correlation function associated with a seventh-order variance function to model permanent environmental effects would be sufficient for data fitting.
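The random-regression covariates used here are orthogonal Legendre polynomials evaluated on days in milk (DIM) rescaled to [-1, 1]. A minimal sketch, with an assumed DIM range of 7 to 308 days (the abstract's 44 weekly classes; the exact endpoints are an assumption):

```python
import numpy as np
from numpy.polynomial.legendre import legval

# Sketch: Legendre covariates for a random regression model. The DIM
# range 7-308 is an assumed stand-in for the 44 weekly classes.
def legendre_covariates(dim, dim_min=7, dim_max=308, order=4):
    x = 2 * (dim - dim_min) / (dim_max - dim_min) - 1   # rescale to [-1, 1]
    return np.array([legval(x, [0] * k + [1]) for k in range(order + 1)])

# Rows are polynomial orders 0..4; columns are the evaluated DIM values.
z = legendre_covariates(np.array([7.0, 157.5, 308.0]))
print(z[:, 1])   # mid-lactation maps to x = 0: P_k(0) = 1, 0, -0.5, 0, 0.375
```

Each cow's additive genetic and permanent environmental deviations are then regressions on these covariates, which is what makes the polynomial order choices in the abstract meaningful.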
Abstract:
The integral of the Wigner function of a quantum-mechanical system over a region or its boundary in the classical phase plane is called a quasiprobability integral. Unlike a true probability integral, its value may lie outside the interval [0, 1]. It is characterized by a corresponding self-adjoint operator, called a region or contour operator as appropriate, which is determined by the characteristic function of that region or contour. The spectral problem is studied for commuting families of region and contour operators associated with concentric discs and circles of given radius a. Their respective eigenvalues are determined as functions of a, in terms of the Gauss-Laguerre polynomials. These polynomials provide a basis of vectors in a Hilbert space carrying the positive discrete series representation of the algebra su(1,1), isomorphic to so(2,1). The explicit relation between the spectra of operators associated with discs and circles with proportional radii is given in terms of the discrete variable Meixner polynomials.
Abstract:
A fully explicit formula for the eigenvalues of Casimir invariants of U_q(gl(m/n)) is given which applies to all unitary irreps. This is achieved by making some interesting observations on atypicality indices for irreps occurring in the tensor product of unitary irreps of the same type. These results have applications in the determination of link polynomials arising from unitary irreps of U_q(gl(m/n)).
Abstract:
Resonance phenomena associated with the unimolecular dissociation of HO2 have been investigated quantum-mechanically by the Lanczos homogeneous filter diagonalization (LHFD) method. The calculated resonance energies, rates (widths), and product state distributions are compared to results from an autocorrelation function-based filter diagonalization (ACFFD) method. For calculating resonance wave functions via ACFFD, an analytical expression for the expansion coefficients of the modified Chebyshev polynomials is introduced. Both the dissociation rates and the product state distributions of O2 show strong fluctuations, indicating that the dissociation of HO2 is essentially irregular. (C) 2001 American Institute of Physics.
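The idea behind a Chebyshev expansion of the propagator can be sketched on a toy Hermitian matrix with spectrum scaled into [-1, 1]. The coefficients here are obtained by Chebyshev-Gauss quadrature rather than the analytical expression of the paper, and the diagonal matrix stands in for a real molecular Hamiltonian.

```python
import numpy as np

# Sketch: expand exp(-iHt) (hbar = 1) in Chebyshev polynomials T_k(H)
# for a toy Hermitian H with spectrum in [-1, 1]. Coefficients via
# Chebyshev-Gauss quadrature, not the paper's analytical expression.
t, order, K = 2.0, 30, 200
theta = (np.arange(K) + 0.5) * np.pi / K
x = np.cos(theta)                      # quadrature nodes in (-1, 1)

def cheb_coeff(k):
    # c_k = (2 - delta_{k0})/pi * int f(x) T_k(x) / sqrt(1 - x^2) dx
    ck = np.mean(np.exp(-1j * x * t) * np.cos(k * theta))
    return ck * (1 if k == 0 else 2)

c = np.array([cheb_coeff(k) for k in range(order + 1)])

# Apply sum_k c_k T_k(H) v via the three-term Chebyshev recurrence.
H = np.diag([-0.9, -0.3, 0.1, 0.8])    # toy Hermitian H
v = np.ones(4) / 2.0
T_prev, T_curr = v.astype(complex), (H @ v).astype(complex)
result = c[0] * T_prev + c[1] * T_curr
for k in range(2, order + 1):
    T_prev, T_curr = T_curr, 2 * H @ T_curr - T_prev
    result += c[k] * T_curr

exact = np.exp(-1j * np.diag(H) * t) * v   # H diagonal, so exp(-iHt) is too
print(np.allclose(result, exact))
```

The coefficients decay superexponentially once k exceeds t, which is why such expansions are efficient wavepacket propagators.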
Abstract:
Time-dependent wavepacket evolution techniques demand the action of the propagator, exp(-iHt/ħ), on a suitable initial wavepacket. When a complex absorbing potential is added to the Hamiltonian to combat unwanted reflection effects, polynomial expansions of the propagator are selected for their ability to cope with non-Hermiticity. An efficient subspace implementation of the Newton polynomial expansion scheme that requires fewer dense matrix-vector multiplications than its grid-based counterpart has been devised. Performance improvements are illustrated with some benchmark one- and two-dimensional examples. (C) 2001 Elsevier Science B.V. All rights reserved.
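A Newton polynomial expansion of exp(-iHt) can be sketched on a toy non-Hermitian matrix. The interpolation points (here, the known eigenvalues) and the dense evaluation are illustrative simplifications; the paper's contribution is a more economical subspace implementation.

```python
import numpy as np

# Sketch: Newton expansion of exp(-iHt) v (hbar = 1) for a toy
# non-Hermitian H, standing in for "H - i * absorbing potential".
lam = np.array([0.2, 0.9 - 0.05j, 1.7 - 0.10j, 2.6 - 0.02j])
C = np.roll(np.eye(4), 1, axis=1)            # cyclic shift matrix
V = np.eye(4) + 0.3 * C                      # well-conditioned eigenvectors
H = V @ np.diag(lam) @ np.linalg.inv(V)      # toy non-Hermitian Hamiltonian
v = np.ones(4, dtype=complex)
t = 0.3

# Divided differences of f(z) = exp(-i z t) at the interpolation points.
z = lam
dd = np.exp(-1j * z * t)
for j in range(1, len(z)):
    dd[j:] = (dd[j:] - dd[j-1:-1]) / (z[j:] - z[:len(z) - j])

# Newton form: sum_j dd[j] * prod_{k<j} (H - z_k I), applied to v.
w, result = v.copy(), dd[0] * v
for j in range(1, len(z)):
    w = H @ w - z[j-1] * w
    result += dd[j] * w

# Interpolating f at every eigenvalue of a diagonalizable H reproduces
# exp(-iHt) v exactly, up to round-off.
exact = V @ (np.exp(-1j * lam * t) * np.linalg.solve(V, v))
print(np.allclose(result, exact))
```

In practice the interpolation points are chosen to cover the (complex) spectral range rather than being the eigenvalues themselves, but the divided-difference and product structure is the same.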
Abstract:
A semi-analytical analysis of free vibration of plates with cross-sectional discontinuities due to abrupt changes in thickness is presented. A basic square element divided into suitable subdomains dependent upon the positions of these abrupt changes is used as the basic building element. Admissible functions that satisfy the essential or geometric boundary conditions are used to define the transverse deflection of each subdomain. Continuities in the displacement, slope, moment and higher derivatives between adjacent subdomains are enforced at the interconnecting edges. The resulting global energy functional from the proper assembly of the coupled strain and kinetic energy contributions of each subdomain is then minimized via the Ritz procedure to extract the frequencies and mode shapes. Contour plots of a range of new mode shapes are presented for the enhancement of understanding the dynamic behavior of this class of plates. (C) 2001 Elsevier Science Ltd. All rights reserved.
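The Ritz procedure described above can be illustrated on the simplest related problem: a simply supported uniform Euler-Bernoulli beam on [0, 1] (not the stepped plates studied here), where the fundamental eigenvalue is known to be pi^4. Admissible polynomials need only satisfy the geometric conditions w(0) = w(1) = 0; minimizing the Rayleigh quotient yields a generalized eigenproblem.

```python
import numpy as np
from numpy.polynomial import Polynomial as P

# Sketch of the Ritz procedure for a simply supported uniform beam
# (EI = rho*A = 1): admissible functions x^(k+1) * (1 - x) satisfy the
# geometric BCs; K a = lambda M a follows from the energy functional.
basis = [P([0, 1]) * P([1, -1]) * P([0, 1])**k for k in range(6)]

n = len(basis)
K = np.empty((n, n))    # strain energy: integral of w_i'' w_j''
M = np.empty((n, n))    # kinetic energy: integral of w_i w_j
for i in range(n):
    for j in range(n):
        kij = (basis[i].deriv(2) * basis[j].deriv(2)).integ()
        mij = (basis[i] * basis[j]).integ()
        K[i, j] = kij(1) - kij(0)
        M[i, j] = mij(1) - mij(0)

lam = np.sort(np.linalg.eigvals(np.linalg.solve(M, K)).real)
# Ritz eigenvalues bound the exact ones from above; the fundamental
# eigenvalue of w'''' = lambda w with these BCs is pi^4.
print(abs(lam[0] - np.pi**4) / np.pi**4 < 1e-3)
```

The plate analysis follows the same pattern, with subdomain energies assembled into the global K and M before minimization.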
Abstract:
In many occupational safety interventions, the objective is to reduce the injury incidence as well as the mean claims cost once injury has occurred. The claims cost data within a period typically contain a large proportion of zero observations (no claim). The distribution thus comprises a point mass at 0 mixed with a non-degenerate parametric component. Essentially, the likelihood function can be factorized into two orthogonal components. These two components relate respectively to the effect of covariates on the incidence of claims and the magnitude of claims, given that claims are made. Furthermore, the longitudinal nature of the intervention inherently imposes some correlation among the observations. This paper introduces a zero-augmented gamma random effects model for analysing longitudinal data with many zeros. Adopting the generalized linear mixed model (GLMM) approach reduces the original problem to the fitting of two independent GLMMs. The method is applied to evaluate the effectiveness of a workplace risk assessment teams program, trialled within the cleaning services of a Western Australian public hospital.
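The orthogonal factorization of the likelihood can be seen in a stripped-down sketch without covariates or random effects: the Bernoulli (incidence) term and the gamma (claim size) term share no parameters, so each can be maximized on its own. All numbers below are synthetic.

```python
import math
import random

# Sketch: zero-augmented gamma likelihood with no covariates or random
# effects. Synthetic data: 60% zeros, gamma(shape=2, scale=50) otherwise.
random.seed(1)
data = [0.0 if random.random() < 0.6 else random.gammavariate(2.0, 50.0)
        for _ in range(1000)]

zeros = sum(1 for y in data if y == 0)
positives = [y for y in data if y > 0]
p_hat = len(positives) / len(data)        # MLE of the claim probability

def gamma_loglik(shape, scale, ys):
    return sum((shape - 1) * math.log(y) - y / scale
               - math.lgamma(shape) - shape * math.log(scale) for y in ys)

def zag_loglik(p, shape, scale):
    # Bernoulli incidence term + gamma severity term: no shared parameters.
    return (zeros * math.log(1 - p) + len(positives) * math.log(p)
            + gamma_loglik(shape, scale, positives))

# Orthogonality: p_hat maximizes the likelihood in p no matter which
# gamma parameters are plugged into the severity component.
for shape, scale in [(2.0, 50.0), (0.5, 10.0)]:
    base = zag_loglik(p_hat, shape, scale)
    assert base >= zag_loglik(p_hat - 0.05, shape, scale)
    assert base >= zag_loglik(p_hat + 0.05, shape, scale)
print(round(p_hat, 2))
```

With random effects added, the same split reduces the problem to fitting two independent GLMMs, as the abstract describes.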
Abstract:
Computer simulation of dynamical systems involves a phase space which is the finite set of machine arithmetic. Rounding state values of the continuous system to this grid yields a spatially discrete dynamical system, often with different dynamical behaviour. Discretization of an invertible smooth system gives a system with set-valued negative semitrajectories. As the grid is refined, asymptotic behaviour of the semitrajectories follows probabilistic laws which correspond to a set-valued Markov chain, whose transition probabilities can be explicitly calculated. The results are illustrated for two-dimensional dynamical systems obtained by discretization of fractional linear transformations of the unit disc in the complex plane.
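The basic mechanism (rounding to a finite grid yields a finite-state system, so every trajectory is eventually periodic) can be demonstrated with a stand-in map; the logistic map below replaces the paper's fractional linear transformations of the unit disc.

```python
# Sketch: round each iterate of a continuous map to a grid of spacing
# 1/n, mimicking finite machine arithmetic. The logistic map is an
# illustrative stand-in for the paper's maps of the unit disc.
def step(x, n):
    return round(3.9 * x * (1 - x) * n) / n   # continuous map, then round

def eventual_period(x0, n):
    seen, x, k = {}, x0, 0
    while x not in seen:          # at most n+2 steps: the state space is finite
        seen[x] = k
        x, k = step(x, n), k + 1
    return k - seen[x]            # length of the final cycle

# Refining the grid changes which cycles the discretized system settles into.
for n in (10, 1000, 100000):
    print(n, eventual_period(0.5, n))
```

The probabilistic laws in the abstract describe how these cycle statistics behave as the grid spacing 1/n tends to zero.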
Abstract:
The two-node tandem Jackson network serves as a convenient reference model for the analysis and testing of different methodologies and techniques in rare event simulation. In this paper we consider a new approach to efficiently estimate the probability that the content of the second buffer exceeds some high level L before it becomes empty, starting from a given state. The approach is based on a Markov additive process representation of the buffer processes, leading to an exponential change of measure to be used in an importance sampling procedure. Unlike changes of measures proposed and studied in recent literature, the one derived here is a function of the content of the first buffer. We prove that when the first buffer is finite, this method yields asymptotically efficient simulation for any set of arrival and service rates. In fact, the relative error is bounded independent of the level L; a new result which is not established for any other known method. When the first buffer is infinite, we propose a natural extension of the exponential change of measure for the finite buffer case. In this case, the relative error is shown to be bounded (independent of L) only when the second server is the bottleneck; a result which is known to hold for some other methods derived through large deviations analysis. When the first server is the bottleneck, experimental results using our method seem to suggest that the relative error is bounded linearly in L.
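The exponential change of measure can be sketched on the simplest relative of this problem: a single birth-death buffer (gambler's ruin) rather than the two-node tandem network. Swapping the arrival and service probabilities makes the overflow likely under simulation, and the likelihood ratio of every path that reaches L collapses to (p/q)^(L-1).

```python
import random

# Sketch: importance sampling of a rare buffer overflow for a single
# birth-death chain (up with prob p < 1/2), not the tandem network.
p, L = 0.3, 20            # up-step probability, overflow level
q = 1 - p

def is_estimate(runs, seed=0):
    rng = random.Random(seed)
    total = 0.0
    for _ in range(runs):
        x, lr = 1, 1.0
        while 0 < x < L:
            if rng.random() < q:      # tilted measure: up with prob q
                x += 1
                lr *= p / q           # likelihood ratio of an up step
            else:
                x -= 1
                lr *= q / p
        if x == L:
            total += lr               # every hit contributes (p/q)**(L-1)
    return total / runs

# Exact gambler's-ruin probability of reaching L before 0, starting at 1.
r = q / p
exact = (r - 1) / (r**L - 1)
est = is_estimate(20000)
print(abs(est - exact) / exact < 0.05)
```

Because the likelihood ratio is constant over all successful paths, the only variance left is in the indicator, which is the sense in which the relative error stays bounded as L grows; the paper establishes an analogous property for the tandem network with a state-dependent change of measure.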