731 results for Szego polynomials
Abstract:
We present a new strategy for constructing tensor product spline spaces over quadtree and octree T-meshes. The proposed technique includes some simple rules for inferring local knot vectors to define spline blending functions. For a given T-mesh, these rules yield a set of cubic spline functions that span a space with nice properties: it reproduces cubic polynomials, and the functions are C2-continuous and linearly independent; moreover, spaces spanned by nested T-meshes are also nested. For the proposed rules to produce spaces with these properties, the T-mesh need only fulfill the requirement of being a 0-balanced quadtree or octree. …
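For reference, blending functions built from a local knot vector Xi_i = (xi_i, …, xi_{i+4}) are presumably evaluated through the standard Cox-de Boor recursion (our assumption; the thesis's specific inference rules for the knot vectors are not reproduced here):

    N_{i,0}(\xi) = \begin{cases} 1 & \xi_i \le \xi < \xi_{i+1} \\ 0 & \text{otherwise} \end{cases},
    \qquad
    N_{i,p}(\xi) = \frac{\xi - \xi_i}{\xi_{i+p} - \xi_i}\, N_{i,p-1}(\xi)
                 + \frac{\xi_{i+p+1} - \xi}{\xi_{i+p+1} - \xi_{i+1}}\, N_{i+1,p-1}(\xi),

with p = 3 for the cubic, C2-continuous functions above.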
Abstract:
Non-equilibrium statistical mechanics is a broad subject. Roughly speaking, it deals with systems which have not yet relaxed to an equilibrium state, with systems in a steady non-equilibrium state, or with more general situations. Such systems are characterized by external forcing and internal fluxes, resulting in a net production of entropy which quantifies dissipation and the extent to which, by the Second Law of Thermodynamics, time-reversal invariance is broken. In this thesis we discuss some of the mathematical structures involved in generic discrete-state-space non-equilibrium systems, which we depict with networks entirely analogous to electrical networks. We define suitable observables and derive their linear-regime relationships; we discuss a duality between external and internal observables that reverses the roles of the system and of the environment; and we show that network observables serve as constraints for a derivation of the minimum entropy production principle. We dwell on deep combinatorial aspects regarding linear response determinants, which are related to spanning tree polynomials in graph theory, and we give a geometrical interpretation of observables in terms of Wilson loops of a connection and gauge degrees of freedom. We specialize the formalism to continuous-time Markov chains, give a physical interpretation of observables in terms of locally detailed balanced rates, prove many variants of the fluctuation theorem, and show that a well-known expression for the entropy production due to Schnakenberg descends from considerations of gauge invariance, where the gauge symmetry is related to the freedom in the choice of a prior probability distribution. As an additional topic of geometrical flavor related to continuous-time Markov chains, we discuss the Fisher-Rao geometry of non-equilibrium decay modes, showing that the Fisher matrix contains information about many aspects of non-equilibrium behavior, including non-equilibrium phase transitions and superposition of modes. We establish a sort of statistical equivalence principle and discuss the behavior of the Fisher matrix under time reversal. To conclude, we propose that geometry and combinatorics might greatly increase our understanding of non-equilibrium phenomena.
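For orientation, Schnakenberg's entropy production rate for a master-equation system with transition rates w_{ij} and state probabilities p_i reads, in a standard form (conventions may differ from the thesis),

    \sigma = \frac{1}{2} \sum_{i,j} \bigl( w_{ij} p_j - w_{ji} p_i \bigr) \ln \frac{w_{ij} p_j}{w_{ji} p_i} \;\ge\; 0,

which vanishes exactly at detailed balance.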
Abstract:
By using a symbolic method, known in the literature as the classical umbral calculus, a symbolic representation of Lévy processes is given, and a new family of time-space harmonic polynomials with respect to such processes, which includes and generalizes the exponential complete Bell polynomials, is introduced. The usefulness of time-space harmonic polynomials with respect to Lévy processes lies in the fact that the stochastic process obtained by replacing the indeterminate x of the polynomials with a Lévy process is a martingale, whereas the Lévy process itself does not necessarily have this property. Finding such polynomials can therefore be particularly meaningful for applications. This new family includes Hermite polynomials, time-space harmonic with respect to Brownian motion; Poisson-Charlier polynomials, with respect to Poisson processes; Laguerre and actuarial polynomials, with respect to Gamma processes; Meixner polynomials of the first kind, with respect to Pascal processes; and Euler, Bernoulli, Krawtchuk, and pseudo-Narumi polynomials, with respect to suitable random walks. The role played by cumulants is stressed and brought to light, both in the symbolic representation of Lévy processes and their infinite divisibility property, and in the generalization, via the umbral Kailath-Segall formula, of the well-known formulae giving elementary symmetric polynomials in terms of power sum symmetric polynomials. The expression of the family of time-space harmonic polynomials introduced here has some connections with the so-called moment representation of various families of multivariate polynomials. Such a moment representation is studied here for the first time in connection with the time-space harmonic property with respect to suitable symbolic multivariate Lévy processes. In particular, multivariate Hermite polynomials and their properties are studied in connection with a symbolic version of the multivariate Brownian motion, while multivariate Bernoulli and Euler polynomials are represented as powers of multivariate polynomials which are time-space harmonic with respect to suitable multivariate Lévy processes.
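A concrete and standard instance of the time-space harmonic property (not specific to the symbolic derivation above): the Hermite polynomials H_n(x,t) generated by

    e^{\theta x - \theta^2 t / 2} = \sum_{n \ge 0} H_n(x,t)\, \frac{\theta^n}{n!}

become martingales when x is replaced by a Brownian motion B_t; for example, H_2(B_t, t) = B_t^2 - t satisfies \mathbb{E}[B_t^2 - t \mid \mathcal{F}_s] = B_s^2 - s for s \le t.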
Abstract:
The main goal of this thesis is to understand and link together some of the early works by Michel Rumin and Pierre Julg. The work is centered around the so-called Rumin complex, which is a construction in sub-Riemannian geometry. A Carnot manifold is a manifold endowed with a horizontal distribution. If a metric is also given, one gets a sub-Riemannian manifold. Such data arise in different contexts, such as: the formulation of the second principle of thermodynamics; optimal control; propagation of singularities for sums of squares of vector fields; real hypersurfaces in complex manifolds; ideal boundaries of rank one symmetric spaces; asymptotic geometry of nilpotent groups; and the modelling of human vision. Differential forms on a Carnot manifold have weights, which produces a filtered complex. In view of applications to nilpotent groups, Rumin has defined a substitute for the de Rham complex, adapted to this filtration. The presence of a filtered complex also suggests the use of the formal machinery of spectral sequences in the study of cohomology. The goal was indeed to understand the link between Rumin's operator and the differentials which appear in the various spectral sequences we have worked with: the weight spectral sequence; a special spectral sequence introduced by Julg and called by him Forman's spectral sequence; and Forman's spectral sequence itself (which turns out to be unrelated to the previous one). We will see that in general Rumin's operator depends on choices. However, in some special cases it does not, because it has an alternative interpretation as a differential in a natural spectral sequence. After defining Carnot groups and analysing their main properties, we introduce the concept of weights of forms, which produces a splitting of the exterior differential operator d. We shall see how the Rumin complex arises from this splitting and proceed to carry out the complete computations in some key examples. From the third chapter onwards we focus on Julg's paper, describing his new filtration and its relationship with the weight spectral sequence. We study the connection between the spectral sequences and Rumin's complex in the n-dimensional Heisenberg group and the 7-dimensional quaternionic Heisenberg group, and then generalize the result to Carnot groups using the weight filtration. Finally, we explain why Julg required the independence of choices in some special Rumin operators, introducing the Szego map and describing its main properties.
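Schematically, in our notation (conventions vary): on a Carnot group the weight grading of forms splits the exterior differential as

    d = d_0 + d_1 + d_2 + \cdots,

where d_i raises the weight by i and d_0 is purely algebraic (it involves no derivatives); the Rumin complex arises by homotopically eliminating the d_0-exact and d_0-coexact parts, leaving a smaller complex computing the same cohomology.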
Abstract:
This thesis provides efficient and robust algorithms for the computation of the intersection curve between a torus and a simple surface (e.g. a plane, a natural quadric, or another torus), based on algebraic and numeric methods. The algebraic part includes the classification of the topological type of the intersection curve and the detection of degenerate situations such as embedded conic sections and singularities. Moreover, reference points for each connected intersection curve component are determined. The required computations are realised efficiently, by solving polynomials of degree at most four, and exactly, by using exact arithmetic. The numeric part includes algorithms for tracing each intersection curve component, starting from the previously computed reference points. Using interval arithmetic, accidental errors such as jumping between branches or skipping parts of a curve are prevented. Furthermore, the neighbourhoods of singularities are treated correctly. Our algorithms are complete in the sense that any kind of input can be handled, including degenerate and singular configurations. They are verified, since the results are topologically correct and approximate the real intersection curve up to any given error bound. The algorithms are robust, since no human intervention is required, and they are efficient in that the treatment of algebraic equations of high degree is avoided.
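For reference, a standard implicit form (not quoted from the thesis): a torus with axis z, major radius R, and minor radius r satisfies

    \bigl( x^2 + y^2 + z^2 + R^2 - r^2 \bigr)^2 = 4 R^2 \bigl( x^2 + y^2 \bigr),

a quartic surface; its sections with planes, for instance, are quartic curves, which is consistent with the degree-four bound above.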
Abstract:
In the present dissertation we consider Feynman integrals in the framework of dimensional regularization. As all such integrals can be expressed in terms of scalar integrals, we focus on this latter kind of integral in its Feynman parametric representation and study its mathematical properties, applying in part graph theory, algebraic geometry, and number theory. The three main topics are the graph-theoretic properties of the Symanzik polynomials, the termination of the sector decomposition algorithm of Binoth and Heinrich, and the arithmetic nature of the Laurent coefficients of Feynman integrals.

The integrand of an arbitrary dimensionally regularised, scalar Feynman integral can be expressed in terms of the two well-known Symanzik polynomials. We give a detailed review of the graph-theoretic properties of these polynomials. By the matrix-tree theorem, the first of these polynomials can be constructed from the determinant of a minor of the generic Laplacian matrix of a graph. By use of a generalization of this theorem, the all-minors matrix-tree theorem, we derive a new relation which furthermore relates the second Symanzik polynomial to the Laplacian matrix of a graph.

Starting from the Feynman parametric representation, the sector decomposition algorithm of Binoth and Heinrich serves for the numerical evaluation of the Laurent coefficients of an arbitrary Feynman integral in the Euclidean momentum region. This widely used algorithm contains an iterated step, consisting of an appropriate decomposition of the domain of integration and the deformation of the resulting pieces. This procedure leads to a disentanglement of the overlapping singularities of the integral. By giving a counter-example we exhibit the problem that this iterative step of the algorithm does not terminate in every possible case. We solve this problem by presenting an appropriate extension of the algorithm which is guaranteed to terminate. This is achieved by mapping the iterative step to an abstract combinatorial problem known as Hironaka's polyhedra game. We present a publicly available implementation of the improved algorithm. Furthermore we explain the relationship of the sector decomposition method with the resolution of singularities of a variety, given by a sequence of blow-ups, in algebraic geometry.

Motivated by the connection between Feynman integrals and topics of algebraic geometry, we consider the set of periods as defined by Kontsevich and Zagier. This special set of numbers contains the set of multiple zeta values and certain values of polylogarithms, which in turn are known to appear in results for Laurent coefficients of certain dimensionally regularized Feynman integrals. By use of the extended sector decomposition algorithm we prove a theorem which implies that the Laurent coefficients of an arbitrary Feynman integral are periods if the masses and kinematical invariants take values in the Euclidean momentum region. The statement is formulated for an even more general class of integrals, allowing for an arbitrary number of polynomials in the integrand.
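For reference, the standard spanning-tree representations of the two Symanzik polynomials (signs and kinematic conventions vary across the literature):

    U(x) = \sum_{T} \prod_{e \notin T} x_e,
    \qquad
    F(x) = \sum_{(T_1, T_2)} s_{(T_1, T_2)} \prod_{e \notin T_1 \cup T_2} x_e + U(x) \sum_{e} x_e m_e^2,

where T runs over the spanning trees of the graph, (T_1, T_2) over its spanning two-forests, and s_{(T_1,T_2)} is the squared momentum flowing between the two trees.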
Abstract:
The diameters of traditional dish concentrators can reach several tens of meters, and the construction of monolithic mirrors is difficult at these scales: cheap flat reflecting facets mounted on a common frame generally reproduce a paraboloidal surface. When a standard imaging mirror is coupled with a dense PV array, problems arise, since the focused solar image is intrinsically circular. Moreover, the corresponding irradiance distribution is bell-shaped, in contrast with the requirement of having all the cells under the same illumination. Mismatch losses occur when interconnected cells experience different conditions, in particular in series connections. In this PhD thesis, we aim at solving these issues by a multidisciplinary approach, exploiting optical concepts and applications developed specifically for astronomical use, where the improvement of image quality is a very important issue. The strategy we propose is to boost the spot uniformity by acting uniquely on the primary reflector, avoiding the segmentation of big mirrors into numerous smaller elements that need to be accurately mounted and aligned. In the proposed method, the shape of the mirrors is described analytically by Zernike polynomials, and its optimization is obtained numerically, yielding non-imaging optics able to produce a quasi-square spot, spatially uniform and with a prescribed concentration level. The freeform primary optics leads to a substantial gain in efficiency without secondary optics. Simple electrical schemes for the receiver are also required. The concept has been investigated theoretically by modeling an example of a CPV dense array application, including the development of non-optical aspects such as the design of the detector and of the supporting mechanics. For the proposed method and the specific CPV system described, a patent application has been filed in Italy with the number TO2014A000016. The patent has been developed thanks to the collaboration between the University of Bologna and INAF (National Institute for Astrophysics).
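Schematically, in our notation, the freeform mirror sag over the unit pupil is expanded as

    z(\rho, \theta) = \sum_j c_j Z_j(\rho, \theta), \qquad 0 \le \rho \le 1,

and the coefficients c_j are the free parameters of the numerical optimization that shapes the quasi-square, uniform spot.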
Abstract:
The aim of this dissertation is to improve the knowledge of knots and links in lens spaces. If the lens space L(p,q) is defined as a 3-ball with suitable boundary identifications, then a link in L(p,q) can be represented by a disk diagram, i.e. a regular projection of the link on a disk. In this context, we obtain a complete finite set of Reidemeister-type moves establishing equivalence up to ambient isotopy. Moreover, the connections of this new diagram with both grid and band diagrams for links in lens spaces are shown. A Wirtinger-type presentation for the group of the link and a diagrammatic method giving the first homology group are described. A class of twisted Alexander polynomials for links in lens spaces is computed, showing its correlation with Reidemeister torsion. One of the most important geometric invariants of links in lens spaces is the lift in the 3-sphere of a link L in L(p,q), that is, the preimage of L under the universal covering of L(p,q). Starting from the disk diagram of the link, we obtain a diagram of the lift in the 3-sphere. Using this construction it is possible to find different knots and links in L(p,q) having equivalent lifts; hence we cannot distinguish different links in lens spaces from their lift alone. The two final chapters investigate whether several existing invariants for links in lens spaces are essential, i.e. whether they may assume different values on links with equivalent lifts. Namely, we consider the fundamental quandle, the group of the link, the twisted Alexander polynomials, the Kauffman Bracket Skein Module, and an HOMFLY-PT-type invariant.
Abstract:
After briefly discussing, in chapter one, the natural homogeneous Lie group structure induced by Kolmogorov equations, we define an intrinsic version of Taylor polynomials and Hölder spaces in chapter two. We also compare our definition with others already known in the literature. In chapter three we prove an analogue of the Taylor formula, that is, an estimate of the remainder in terms of the homogeneous metric.
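Schematically, in our notation (the precise exponents depend on the intrinsic definitions given in the thesis), such a Taylor formula takes the form

    \bigl| u(x) - T_n u(x_0)(x) \bigr| \le C\, d(x_0, x)^{n + \alpha},

where T_n u(x_0) is the intrinsic Taylor polynomial of degree n at x_0 and d is the homogeneous metric.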
Abstract:
In my work I derive closed-form pricing formulas for volatility-based options by suitably approximating the risk-neutral density function of the volatility process. I exploit and adapt the idea behind popular techniques already employed in the context of equity options, such as Edgeworth and Gram-Charlier expansions: approximating the distribution of the underlying process as a sum of particular polynomials weighted by a kernel, which is typically a Gaussian distribution. I propose instead a Gamma kernel, to adapt the methodology to the context of volatility options. Closed-form pricing formulas for VIX vanilla options are derived, and their accuracy is tested for the Heston (1993) model as well as for the jump-diffusion SVJJ model proposed by Duffie et al. (2000).
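Schematically, in our notation: since generalized Laguerre polynomials are orthogonal with respect to the Gamma weight, the natural Gamma-kernel counterpart of a Gram-Charlier expansion approximates the risk-neutral density as

    f(x) \approx g_{\alpha,\beta}(x) \sum_{n=0}^{N} c_n\, L_n^{(\alpha - 1)}(\beta x),
    \qquad
    g_{\alpha,\beta}(x) = \frac{\beta^\alpha x^{\alpha - 1} e^{-\beta x}}{\Gamma(\alpha)},

with the coefficients c_n fixed by matching moments of the volatility process.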
Abstract:
The excitation spectrum is one of the fundamental properties of every spatially extended system. The excitations of the building blocks of normal matter, i.e., protons and neutrons (nucleons), play an important role in our understanding of the low-energy regime of the strong interaction. Due to the large coupling, perturbative solutions of quantum chromodynamics (QCD) are not appropriate for calculating long-range phenomena of hadrons. For many years, constituent quark models were used to understand the excitation spectra. Recently, calculations in lattice QCD have made first connections between excited nucleons and the fundamental field quanta (quarks and gluons). Due to their short lifetime and large decay width, excited nucleons appear as resonances in scattering processes like pion-nucleon scattering or meson photoproduction. In order to disentangle individual resonances with definite spin and parity in experimental data, partial wave analyses are necessary. Unique solutions in these analyses can only be expected if sufficient empirical information about spin degrees of freedom is available. The measurement of spin observables in pion photoproduction is the focus of this thesis. The polarized electron beam of the Mainz Microtron (MAMI) was used to produce high-intensity, polarized photon beams with tagged energies up to 1.47 GeV. A "frozen-spin" butanol target in combination with an almost 4π detector setup consisting of the Crystal Ball and the TAPS calorimeters allowed the precise determination of the helicity dependence of the γp → π0p reaction. In this thesis, as an improvement of the target setup, an internal polarizing solenoid has been constructed and tested. A magnetic field of 2.32 T and a homogeneity of 1.22×10⁻³ in the target volume have been achieved. The helicity asymmetry E, i.e., the difference of events with total helicity 1/2 and 3/2 divided by their sum, was determined from data taken in the years 2013-14. The subtraction of background events arising from nucleons bound in carbon and oxygen was an important part of the analysis. The results for the asymmetry E are compared to existing data and predictions from various models. The results show reasonable agreement with the models in the energy region of the ∆(1232) resonance, but large discrepancies are observed for energies above 600 MeV. The expansion of the present data in terms of Legendre polynomials shows the sensitivity of the data to partial wave amplitudes up to F-waves. Additionally, a first, preliminary multipole analysis of the present data together with other results from the Crystal Ball experiment has been performed.
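Schematically, in our notation, such an expansion fits the angular dependence of the asymmetry at each energy W as

    E(W, \theta) \approx \sum_{\ell = 0}^{\ell_{\max}} a_\ell(W)\, P_\ell(\cos\theta),

where the truncation order needed to describe the data reflects the highest contributing partial wave.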
Abstract:
Neurally adjusted ventilatory assist (NAVA) delivers airway pressure (P(aw)) in proportion to the electrical activity of the diaphragm (EAdi) using an adjustable proportionality constant (NAVA level, cm·H(2)O/μV). During systematic increases in the NAVA level, feedback-controlled down-regulation of the EAdi results in a characteristic two-phased response in P(aw) and tidal volume (Vt). The transition from the 1st to the 2nd response phase allows identification of adequate unloading of the respiratory muscles with NAVA (NAVA(AL)). We aimed to develop and validate a mathematical algorithm to identify NAVA(AL). P(aw), Vt, and EAdi were recorded while systematically increasing the NAVA level in 19 adult patients. In a multistep approach, inspiratory P(aw) peaks were first identified by dividing the EAdi into inspiratory portions using Gaussian mixture modeling. Two polynomials were then fitted onto the curves of both the P(aw) peaks and Vt. The beginning of the P(aw) and Vt plateaus, and thus NAVA(AL), was identified at the minimum of the squared polynomial derivative and the polynomial fitting errors. A graphical user interface was developed in the Matlab computing environment. The median NAVA(AL) visually estimated by 18 independent physicians was 2.7 (range 0.4 to 5.8) cm·H(2)O/μV, while that identified by our model was 2.6 (range 0.6 to 5.0) cm·H(2)O/μV. NAVA(AL) identified by our model was below the range of visually estimated values in two instances and above it in one instance. We conclude that our model identifies NAVA(AL) in most instances with acceptable accuracy for application in clinical routine and research.
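A minimal Python sketch of the plateau criterion as we read it (the function name, polynomial degree, normalisation, and equal weighting of the two error terms are our assumptions; the published algorithm and its Matlab implementation may differ):

    import numpy as np

    def find_nava_al(levels, paw_peaks, vt, deg=4):
        # Estimate NAVA(AL) as the onset of the Paw/Vt plateaus: minimise the
        # squared derivative of a fitted polynomial plus its fitting error.
        levels, paw_peaks, vt = map(np.asarray, (levels, paw_peaks, vt))
        grid = np.linspace(levels.min(), levels.max(), 500)
        score = np.zeros_like(grid)
        for y in (paw_peaks, vt):
            y = (y - y.mean()) / y.std()          # put both signals on one scale
            coeffs = np.polyfit(levels, y, deg)   # polynomial fit over NAVA levels
            resid = y - np.polyval(coeffs, levels)
            score += np.polyval(np.polyder(coeffs), grid) ** 2 + np.mean(resid ** 2)
        return grid[np.argmin(score)]             # candidate NAVA(AL) in cm·H2O/uV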
Abstract:
A general approach is presented for implementing discrete transforms as a set of first-order or second-order recursive digital filters. Clenshaw's recurrence formulae are used to formulate the second-order filters. The resulting structure is suitable for efficient implementation of discrete transforms in VLSI or FPGA circuits. The general approach is applied to the discrete Legendre transform as an illustration.
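As an illustration of the underlying recurrence, here is a generic Clenshaw evaluation of a Legendre series in Python (the paper's contribution is the recursive filter structure derived from this recurrence, which is not reproduced here):

    import numpy as np

    def legendre_clenshaw(a, x):
        # Evaluate S(x) = sum_k a[k] * P_k(x) via Clenshaw's recurrence.
        # Legendre three-term recurrence: P_{k+1} = ((2k+1) x P_k - k P_{k-1}) / (k+1),
        # hence alpha_k(x) = (2k+1) x / (k+1) and beta_k = -k / (k+1).
        n = len(a) - 1
        b1 = b2 = 0.0
        for k in range(n, 0, -1):
            b1, b2 = a[k] + (2*k + 1) * x * b1 / (k + 1) - (k + 1) * b2 / (k + 2), b1
        return a[0] + x * b1 - 0.5 * b2   # S = a_0 P_0 + b_1 P_1 + beta_1 b_2 P_0

    # Sanity check against NumPy's reference evaluator:
    assert np.isclose(legendre_clenshaw([1.0, 0.5, -0.25], 0.3),
                      np.polynomial.legendre.legval(0.3, [1.0, 0.5, -0.25]))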
Abstract:
We introduce a new filter approximation method, the Pascal filter, which is constructed from the Pascal polynomials. The roll-off characteristics of the Pascal, Butterworth, and Chebyshev filters are compared.
Abstract:
Custom modes at a wavelength of 1064 nm were generated with a deformable mirror. The required surface deformations of the adaptive mirror were calculated with the Collins integral, written in a matrix formalism. The appropriate size and shape of the actuators, as well as the needed stroke, were determined to ensure that the surface of the controllable mirror matches the phase front of the custom modes. A semipassive bimorph adaptive mirror with five concentric ring-shaped actuators and one defocus actuator was manufactured and characterised. The surface deformation was modelled with the response functions of the adaptive mirror in terms of an expansion in Zernike polynomials. In the experiments the Nd:YAG laser crystal was quasi-CW pumped to avoid thermally induced distortions of the phase front. The adaptive mirror allows switching between a super-Gaussian mode, a doughnut mode, a Hermite-Gaussian fundamental beam, multi-mode operation, or no oscillation in real time during laser operation.
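Schematically, in our notation, the mirror deformation is a superposition of the actuator response (influence) functions, projected onto Zernike modes:

    w(r, \varphi) = \sum_k a_k F_k(r, \varphi) \approx \sum_j c_j Z_j(r, \varphi),

where a_k are the actuator strokes, F_k the measured response functions, and the quality of the approximation determines how closely the mirror surface can match the phase front of a desired custom mode.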