38 results for logarithmic sprayer
at Consorci de Serveis Universitaris de Catalunya (CSUC), Spain
Abstract:
This paper provides empirical evidence that continuous-time models with one volatility factor are, under certain conditions, able to fit the main characteristics of financial data. It also reports the importance of the feedback factor in capturing the strong volatility clustering of the data, caused by a possible change in the pattern of volatility in the last part of the sample. We use the Efficient Method of Moments (EMM) of Gallant and Tauchen (1996) to estimate logarithmic models with one and two stochastic volatility factors (with and without feedback) and to select among them.
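As a rough illustration only (the paper's EMM estimation is not reproduced; the parameter names and values below are invented for the sketch), a one-factor logarithmic stochastic volatility model of the kind discussed can be simulated with a simple Euler discretization:

```python
import math
import random

random.seed(0)

def simulate_log_sv(n=1000, dt=1 / 252, kappa=4.0, theta=-1.0, sigma_v=0.8):
    """Euler discretization of a one-factor logarithmic SV model:
       d(log v_t) = kappa * (theta - log v_t) dt + sigma_v dW_t,
       dy_t       = exp(log v_t / 2) dB_t   (asset returns)."""
    h = theta          # start log-volatility at its long-run mean
    returns = []
    for _ in range(n):
        vol = math.exp(h / 2.0)
        returns.append(vol * math.sqrt(dt) * random.gauss(0, 1))
        # mean-reverting Ornstein-Uhlenbeck step for the log-volatility
        h += kappa * (theta - h) * dt + sigma_v * math.sqrt(dt) * random.gauss(0, 1)
    return returns

r = simulate_log_sv()
print(len(r))
```

Adding a second volatility factor (or a feedback term in the drift) follows the same pattern with one extra state variable.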
Abstract:
This paper is concerned with the modeling and analysis of quantum dissipation phenomena in the Schrödinger picture. More precisely, we investigate in detail a dissipative, nonlinear Schrödinger equation that accounts for quantum Fokker–Planck effects, and how it reduces drastically to a simpler logarithmic equation via a nonlinear gauge transformation, in such a way that the physics underlying both problems remains unaltered. From a mathematical viewpoint, this allows for a more tractable analysis of the local well-posedness of the initial–boundary value problem. This simplification requires performing the polar (modulus–argument) decomposition of the wavefunction, which is rigorously attained (for the first time, to the best of our knowledge) under quite reasonable assumptions.
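For orientation only (the abstract does not state the paper's exact normalization, so the constants below are an assumption), the logarithmic Schrödinger equation obtained after such a gauge transformation is usually written as

```latex
i\hbar\,\partial_t \psi \;=\; -\frac{\hbar^2}{2m}\,\Delta\psi \;+\; \lambda\,\ln\!\big(|\psi|^2\big)\,\psi ,
```

where the coefficient $\lambda$ measures the strength of the logarithmic nonlinearity.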
Abstract:
We investigate on-line prediction of individual sequences. Given a class of predictors, the goal is to predict as well as the best predictor in the class, where the loss is measured by the self information (logarithmic) loss function. The excess loss (regret) is closely related to the redundancy of the associated lossless universal code. Using Shtarkov's theorem and tools from empirical process theory, we prove a general upper bound on the best possible (minimax) regret. The bound depends on certain metric properties of the class of predictors. We apply the bound to both parametric and nonparametric classes of predictors. Finally, we point out a suboptimal behavior of the popular Bayesian weighted average algorithm.
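As a small illustration of the setting (the experts and outcomes below are invented), the Bayesian weighted average predictor over a finite class can be implemented directly; under the logarithmic loss its regret against the best expert in hindsight never exceeds the log of the class size:

```python
import math

def nll(p, y):
    """Self-information (logarithmic) loss of forecast p on binary outcome y."""
    return -math.log(p if y == 1 else 1.0 - p)

def bayes_mixture(experts, outcomes):
    """Uniform-prior Bayesian weighted average over constant-probability
    experts; returns (mixture cumulative loss, best expert cumulative loss)."""
    n = len(experts)
    weights = [1.0 / n] * n
    mix_loss = 0.0
    exp_loss = [0.0] * n
    for y in outcomes:
        p_mix = sum(w * p for w, p in zip(weights, experts))
        mix_loss += nll(p_mix, y)
        for i, p in enumerate(experts):
            exp_loss[i] += nll(p, y)
        # posterior update: reweight each expert by its likelihood of y
        like = [p if y == 1 else 1.0 - p for p in experts]
        z = sum(w * l for w, l in zip(weights, like))
        weights = [w * l / z for w, l in zip(weights, like)]
    return mix_loss, min(exp_loss)

mix, best = bayes_mixture([0.2, 0.5, 0.8], [1, 1, 0, 1, 1, 1, 0, 1])
regret = mix - best
print(regret)  # nonnegative and at most log(3) for three experts
```

By the chain rule, the cumulative mixture loss equals minus the log of the mixture probability of the whole sequence, which is how the log(class size) bound arises.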
Abstract:
We consider multidimensional backward stochastic differential equations (BSDEs). We prove the existence and uniqueness of solutions when the coefficient grows super-linearly and, moreover, can be locally Lipschitz neither in the variable y nor in the variable z. This is done with a super-linear growth coefficient and a p-integrable terminal condition (p > 1). As an application, we establish the existence and uniqueness of solutions to degenerate semilinear PDEs with a super-linear growth generator and Lp terminal data, p > 1. Our results cover, for instance, the case of PDEs with logarithmic nonlinearities.
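For readers unfamiliar with the notation, a BSDE of the kind studied is typically written as

```latex
Y_t \;=\; \xi \;+\; \int_t^T f(s, Y_s, Z_s)\,ds \;-\; \int_t^T Z_s\,dW_s, \qquad 0 \le t \le T,
```

where $\xi$ is the ($p$-integrable) terminal condition and $f$ is the generator, here allowed super-linear growth in $(y, z)$, such as the logarithmic growth $|y|\,\ln|y|$.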
Abstract:
Variational steepest descent approximation schemes for the modified Patlak-Keller-Segel equation with a logarithmic interaction kernel in any dimension are considered. We prove the convergence of the suitably time-interpolated implicit Euler scheme, defined in terms of the Euclidean Wasserstein distance, associated to this equation for sub-critical masses. As a consequence, we recover the recent result on the global-in-time existence of weak solutions to the modified Patlak-Keller-Segel equation with the logarithmic interaction kernel in any dimension in the sub-critical case. Moreover, we show how this method performs numerically in one dimension. In this particular case, the numerical scheme corresponds to a standard implicit Euler method for the pseudo-inverse of the cumulative distribution function. We demonstrate its ability to reproduce easily, without the need for mesh refinement, the blow-up of solutions for super-critical masses.
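The paper's Wasserstein scheme itself is not reproduced here; as a minimal sketch of its generic building block, a single implicit Euler step can be solved by fixed-point iteration (the test equation below is an invented stand-in, not the pseudo-inverse CDF equation):

```python
def implicit_euler_step(f, x, dt, iters=50):
    """One implicit Euler step x_{n+1} = x_n + dt * f(x_{n+1}),
    solved by fixed-point iteration (converges for dt small enough)."""
    y = x  # initial guess: the previous value
    for _ in range(iters):
        y = x + dt * f(y)
    return y

# usage on x' = -x, whose exact solution at t = 1 is exp(-1) ~ 0.3679
x, dt = 1.0, 0.01
for _ in range(100):
    x = implicit_euler_step(lambda z: -z, x, dt)
print(x)  # close to 0.3679
```

The scheme in the paper replaces this scalar update by a minimization step in the Wasserstein metric, but the implicit time-stepping structure is the same.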
Abstract:
Hybrid satellite-terrestrial networks offer connectivity to remote and isolated areas and make it possible to solve numerous communication problems. However, they present several challenges, since communication takes place over a terrestrial mobile channel and a contiguous satellite channel. One of these challenges is to find mechanisms to carry out routing and flow control efficiently and jointly. The goal of this project is to simulate and study existing algorithms that solve these problems, as well as to propose new ones, by means of various convex optimization techniques. Based on the simulations carried out in this study, the various routing and flow-control problems have been analyzed extensively, and the results obtained and the performance of the algorithms used have been evaluated. In particular, algorithms based on the dual decomposition method, the subgradient method, Newton's method and the logarithmic barrier method, among others, have been successfully implemented in order to solve the routing and flow-control problems posed.
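As a toy illustration of one of the techniques mentioned (the one-dimensional problem below is an invented stand-in, not the routing/flow-control problem itself), the logarithmic barrier method can be sketched as:

```python
def barrier_minimize(mu0=1.0, shrink=0.5, rounds=25):
    """Logarithmic barrier method for: minimize x^2 subject to x >= 1.
    Each round minimizes x^2 - mu * log(x - 1) by damped Newton steps,
    then shrinks mu; the iterates approach the constrained optimum x = 1."""
    x = 2.0          # strictly feasible starting point
    mu = mu0
    for _ in range(rounds):
        for _ in range(50):                     # Newton steps on the barrier objective
            g = 2 * x - mu / (x - 1)            # gradient
            h = 2 + mu / (x - 1) ** 2           # Hessian (positive)
            step = g / h
            while x - step <= 1:                # damp to stay strictly feasible
                step *= 0.5
            x -= step
        mu *= shrink                            # tighten the barrier
    return x

print(barrier_minimize())  # approaches 1, the constrained minimizer
```

The same pattern (inner Newton solve, outer barrier-parameter reduction) scales to the multidimensional network utility problems the project addresses.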
Abstract:
Research project carried out by a secondary-school student and awarded a CIRIT Prize in 2009 for fostering the scientific spirit among young people. This research project is based on performing the experiment that creates Liesegang rings, and subsequently obtaining and analyzing its results. This experiment, which consists of the precipitation of a compound in a gelled base, forming rings spaced logarithmically from one another, has for more than a century been an object of investigation for a great many scientists, who have never managed to extract a logical and reasonable explanation of this strange behavior. The author set out to recreate the curious rings, attempting to form them with inhibitors and compounds different from those found in the literature. After carrying out more than thirty experiments, an exhaustive analysis of the results was performed. This part was among the most enriching, since it led to surprising comparisons and very curious findings, such as the similarity between Liesegang rings and Turing structures, which attempt to explain the patterns present in the eyespots of living beings, and the appearance of Liesegang rings through visual optics, an effect absent from the extensive literature consulted. In addition, a series of studies was carried out: one confirming the logarithmic distances between the rings, with a comparison between the empirical data and the mathematical pattern; and another studying the behavior of the rings when varying the factors that govern the reaction rate.
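The logarithmic spacing referred to here is usually stated as Jablczynski's spacing law: the ratio of successive ring positions tends to a constant, i.e. the positions form a geometric progression. A minimal check on illustrative (not measured) data:

```python
def spacing_ratios(positions):
    """Jablczynski's spacing law for Liesegang rings: the ratios of
    successive ring positions x_{n+1} / x_n tend to a constant, so the
    positions are logarithmically (geometrically) spaced."""
    return [b / a for a, b in zip(positions, positions[1:])]

# illustrative ring positions in mm following a geometric law with ratio 1.3
rings = [10.0, 13.0, 16.9, 21.97, 28.56]
print(spacing_ratios(rings))  # each ratio is approximately 1.3
```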
Abstract:
When dealing with sustainability we are concerned with the biophysical as well as the monetary aspects of economic and ecological interactions. This multidimensional approach requires that special attention be given to dimensional issues in relation to curve fitting practice in economics. Unfortunately, many empirical and theoretical studies in economics, as well as in ecological economics, apply dimensional numbers in exponential or logarithmic functions. We show that it is an analytical error to put a dimensional unit x into exponential functions (a^x) and logarithmic functions (log_a x). Secondly, we investigate the conditions on data sets under which a particular logarithmic specification is superior to the usual regression specification. This analysis shows that the superiority of a logarithmic specification in terms of the least-squares norm is heavily dependent on the available data set. The last section deals with economists' "curve fitting fetishism". We propose that a distinction be made between curve fitting over past observations and the development of a theoretical or empirical law capable of maintaining its fitting power for any future observations. Finally, we conclude this paper with several epistemological issues in relation to dimensions and curve fitting practice in economics.
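The dimensional argument can be made explicit by expanding the logarithm as a power series:

```latex
\ln x \;=\; (x-1) \;-\; \tfrac{1}{2}(x-1)^2 \;+\; \tfrac{1}{3}(x-1)^3 \;-\; \cdots
```

If $x$ carried a unit, the terms of this sum would have different dimensions and could not be added; the same argument applies to $a^x = e^{x \ln a}$ via the exponential series. Empirical work should therefore pass only dimensionless ratios $x / x_0$ into these functions.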
Abstract:
In the present paper, we study the geometric discrepancy with respect to families of rotated rectangles. The well-known extremal cases are the axis-parallel rectangles (logarithmic discrepancy) and rectangles rotated in all possible directions (polynomial discrepancy). We study several intermediate situations: lacunary sequences of directions, lacunary sets of finite order, and sets with small Minkowski dimension. In each of these cases, extensions of a lemma due to Davenport allow us to construct appropriate rotations of the integer lattice which yield small discrepancy.
Abstract:
We analyze the rate of convergence towards self-similarity for the subcritical Keller-Segel system in the radially symmetric two-dimensional case and in the corresponding one-dimensional case for logarithmic interaction. We measure convergence in the Wasserstein distance. The rate of convergence towards self-similarity does not degenerate as we approach the critical case. As a byproduct, we obtain a proof of the logarithmic Hardy-Littlewood-Sobolev inequality in the one-dimensional and radially symmetric two-dimensional cases based on optimal transport arguments. In addition, we prove that the one-dimensional equation is a contraction with respect to the Fourier distance in the subcritical case.
Abstract:
This paper examines the proper use of dimensions and curve fitting practices, elaborating on Georgescu-Roegen's economic methodology in relation to the three main concerns of his epistemological orientation. Section 2 introduces two critical issues in relation to dimensions and curve fitting practices in economics in view of Georgescu-Roegen's economic methodology. Section 3 deals with the logarithmic function (ln z) and shows that z must be a dimensionless pure number, otherwise it is nonsensical. Several unfortunate examples of this analytical error are presented, including macroeconomic data analysis conducted by a representative figure in this field. Section 4 deals with the standard Cobb-Douglas function. It is shown that an operational meaning cannot be obtained for capital or labor within the Cobb-Douglas function. Section 4 also deals with economists' "curve fitting fetishism". Section 5 concludes this paper with several epistemological issues in relation to dimensions and curve fitting practices in economics.
Abstract:
In this paper the two main drawbacks of the heat balance integral methods are examined. Firstly, we investigate the choice of approximating function. For a standard polynomial form it is shown that combining the Heat Balance and Refined Integral methods to determine the power of the highest-order term leads to either the same or, more often, greatly improved accuracy over standard methods. Secondly, we examine thermal problems with a time-dependent boundary condition. In doing so we develop a logarithmic approximating function. This new function allows us to model moving peaks in the temperature profile, a feature that previous heat balance methods cannot capture. If the boundary temperature varies so that at some time t > 0 it equals the far-field temperature, then standard methods predict that the temperature is everywhere at this constant value. The new method predicts the correct behaviour. It is also shown that this function provides even more accurate results, when coupled with the new CIM, than the polynomial profile. The analysis primarily focuses on a specified constant boundary temperature and is then extended to constant flux, Newton cooling and time-dependent boundary conditions.
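As a hedged sketch of the basic heat balance integral idea (using the textbook quadratic profile with constant unit boundary temperature, not the paper's logarithmic profile or its CIM coupling), the approximate and exact temperature profiles can be compared directly:

```python
import math

def hbim_temp(x, t, alpha=1.0):
    """Heat balance integral approximation with the quadratic profile
    T = (1 - x/delta)^2, delta = sqrt(12 * alpha * t), for a semi-infinite
    solid held at unit surface temperature (classical special case)."""
    delta = math.sqrt(12.0 * alpha * t)   # penetration depth from the integral balance
    return (1.0 - x / delta) ** 2 if x < delta else 0.0

def exact_temp(x, t, alpha=1.0):
    """Exact similarity solution of the same problem."""
    return math.erfc(x / (2.0 * math.sqrt(alpha * t)))

x, t = 0.5, 1.0
print(hbim_temp(x, t), exact_temp(x, t))  # the two agree to within a few percent
```

The penetration depth follows from substituting the profile into the integrated heat equation, which gives delta * d(delta)/dt = 6 * alpha; the paper's refinement replaces the quadratic with profiles (including logarithmic ones) whose exponent is chosen to minimize the error.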
Abstract:
Many multivariate methods that are apparently distinct can be linked by introducing one or more parameters in their definition. Methods that can be linked in this way are correspondence analysis, unweighted or weighted logratio analysis (the latter also known as "spectral mapping"), nonsymmetric correspondence analysis, principal component analysis (with and without logarithmic transformation of the data) and multidimensional scaling. In this presentation I will show how several of these methods, which are frequently used in compositional data analysis, may be linked through parametrizations such as power transformations, linear transformations and convex linear combinations. Since the methods of interest here all lead to visual maps of data, a "movie" can be made where the linking parameter is allowed to vary in small steps: the results are recalculated "frame by frame" and one can see the smooth change from one method to another. Several of these "movies" will be shown, giving a deeper insight into the similarities and differences between these methods.
Abstract:
In this paper we examine the problem of compositional data from a different starting point. Chemical compositional data, as used in provenance studies on archaeological materials, will be approached from measurement theory. The results will show, in a very intuitive way, that chemical data can only be treated by using the approach developed for compositional data. It will be shown that compositional data analysis is a particular case in projective geometry, when the projective coordinates are in the positive orthant, and they have the properties of logarithmic interval metrics. Moreover, it will be shown that this approach can be extended to a very large number of applications, including shape analysis. This will be exemplified with a case study in the architecture of Early Christian churches dated back to the 5th-7th centuries AD.