945 results for Stochastic integrals
Abstract:
The aim of this article is to characterize the sources of technical and allocative inefficiency in a set of 308 beneficiaries of a market-based land reform program, the Cédula da Terra Program (PCT), distributed across five states of the Brazilian Northeast. Studies conducted by Buainain et al. (2002) showed that there are few differences between the characteristics of beneficiaries of this program and those of traditional expropriation-based land reform programs, so the results obtained here offer a view of the difficulties faced by land reform settlements in Brazil. To measure efficiency, a frontier (potential) production function was estimated following the methodology of Battese and Coelli (1995), and from this the sources of the (relative) inefficiency found were investigated. The results point to the existence of technical and allocative inefficiency, identified mainly in situations where production for own consumption is high. This result reveals the limited maturity of most of the PCT settlers' plots and the difficulty of overcoming the limitations imposed by the initial conditions under which land reform settlements are formed, particularly in the Northeast region of Brazil.
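For reference, the Battese and Coelli (1995) specification behind the estimation can be summarized as below; this is a generic statement of the model, with symbols not taken from the article:

```latex
% Battese-Coelli (1995) stochastic production frontier with
% inefficiency effects (generic form)
\begin{align}
  Y_i &= \exp\!\left(x_i\beta + V_i - U_i\right),
      \qquad V_i \sim N(0,\sigma_V^2),\\
  U_i &= z_i\delta + W_i \;\ge\; 0,
      \qquad \mathrm{TE}_i = \exp(-U_i),
\end{align}
```

where Y_i is output, x_i the input vector, U_i the non-negative inefficiency term explained by beneficiary characteristics z_i, and TE_i the resulting technical-efficiency score.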
Abstract:
Making sure that causality is preserved by "covariantizing" the gauge-dependent singularity in the propagator of the vector potential A_μ(x), we show that the evaluation of some basic one-loop light-cone integrals reproduces the results obtained through the Mandelstam-Leibbrandt prescription. Moreover, such a covariantization has the advantage of leading to simpler integrals in the cone variables (the bonus), although, of course, it introduces an additional alpha-parameter integral to be performed (the price to pay).
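For context, a sketch of the objects involved: in the light-cone gauge (n² = 0) the boson propagator carries the gauge-dependent (k·n)^(-1) singularity, which the Mandelstam-Leibbrandt prescription regulates with the dual vector n*:

```latex
% Light-cone gauge boson propagator and the ML prescription
D_{\mu\nu}(k) = \frac{-i}{k^2 + i\varepsilon}
  \left[ g_{\mu\nu} - \frac{n_\mu k_\nu + k_\mu n_\nu}{k\cdot n} \right],
\qquad
\frac{1}{k\cdot n} \;\to\; \lim_{\varepsilon \to 0^+}
  \frac{k\cdot n^{*}}{(k\cdot n)(k\cdot n^{*}) + i\varepsilon},
\qquad n^2 = n^{*2} = 0 .
```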
Abstract:
It is a well-known result that the Feynman path integral (FPI) approach to quantum mechanics is equivalent to Schrödinger's equation when the Wiener-Lebesgue measure is used as the integration measure. This has little practical applicability due to the great algebraic complexity involved, and in fact almost all applications of the FPI ("practical calculations") are done using a Riemann measure. In this paper we present an expansion of the FPI to all orders in time, in a quest for a representation of the latter solely in terms of differentiable trajectories and the Riemann measure. We show that this expansion agrees with a similar expansion obtained from Schrödinger's equation only up to first order in a Riemann-integral context, although by chance both expansions agree for the free-particle and harmonic-oscillator cases. Our results make it possible, from the mathematical point of view, to estimate the many errors made in "practical" calculations of the FPI appearing in the literature and, from the physical point of view, they support the stochastic approach to the problem.
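For reference, the "Riemann measure" computations referred to use the time-sliced definition of the propagator:

```latex
% Time-sliced (Riemann-measure) definition of the Feynman path integral
K(x_b,t_b;x_a,t_a) = \lim_{N\to\infty}
  \left(\frac{m}{2\pi i\hbar\epsilon}\right)^{\!N/2}
  \int \prod_{j=1}^{N-1} dx_j\;
  \exp\!\left\{\frac{i}{\hbar}\sum_{j=0}^{N-1}
    \left[\frac{m(x_{j+1}-x_j)^2}{2\epsilon} - \epsilon\,V(x_j)\right]\right\},
\qquad \epsilon = \frac{t_b - t_a}{N},
```

with x_0 = x_a and x_N = x_b.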
Abstract:
Using the Langevin approach to stochastic processes, we study the renormalizability of the massive Thirring model. At finite fictitious time, we prove the absence of induced quadrilinear counterterms by verifying the cancellation of the divergences of graphs with four external lines. This implies that the vanishing of the renormalization-group beta function already occurs at finite times.
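Schematically, the Langevin approach (Parisi-Wu stochastic quantization) evolves the field in a fictitious time τ:

```latex
% Parisi-Wu Langevin equation with Gaussian white noise
\frac{\partial \phi(x,\tau)}{\partial \tau}
  = -\left.\frac{\delta S[\phi]}{\delta\phi(x)}\right|_{\phi=\phi(x,\tau)}
  + \eta(x,\tau),
\qquad
\langle \eta(x,\tau)\,\eta(x',\tau')\rangle
  = 2\,\delta^{(d)}(x-x')\,\delta(\tau-\tau'),
```

with the quantum correlation functions recovered in the equilibrium limit τ → ∞; the abstract's point is that the relevant cancellations occur already at finite τ.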
Abstract:
We study the 1/N expansion of field theories in the stochastic quantization method of Parisi and Wu using the supersymmetric functional approach. This formulation provides a systematic procedure for implementing the 1/N expansion which resembles the ones used in equilibrium. The 1/N perturbation theory for the nonlinear sigma model in two dimensions is worked out as an example.
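In one common normalization (symbols generic, not taken from the paper), the two-dimensional example reads:

```latex
% O(N) nonlinear sigma model in d = 2; the constraint is enforced
% by a Lagrange-multiplier field lambda (one common normalization)
S = \frac{1}{2}\int d^2x \left[ \partial_\mu \phi^a\,\partial^\mu \phi^a
    + \lambda\left(\phi^a\phi^a - \tfrac{N}{g}\right) \right],
\qquad a = 1,\dots,N .
```

Integrating out the φ^a fields yields an effective action proportional to N, which is what makes a systematic 1/N expansion possible.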
Abstract:
Minimizing the makespan of a no-wait flow-shop (FSNW) schedule where the processing times are randomly distributed is an important NP-complete combinatorial optimization problem. In spite of this, it is found in only very few papers in the literature. By considering the start-interval concept, this problem can be formulated, in a practical way, as a function of the probability of successfully preserving the FSNW constraints throughout the execution of all tasks. With this formulation, for the particular case of 3 machines, this paper presents different heuristic solutions: integrating local optimization steps with insertion procedures, and using genetic algorithms to search the solution space. Computational results and performance evaluations are discussed. Copyright (C) 1998 IFAC.
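To illustrate the genetic-algorithm route, here is a minimal Python sketch for a deterministic 3-machine no-wait flow shop; the paper treats randomly distributed processing times and a probabilistic success criterion, which this toy version omits, and all names and instance data are invented:

```python
import random

def no_wait_delay(p_i, p_j):
    """Minimum offset between the start times of consecutive jobs i -> j
    so that job j never waits between machines (no-wait constraint)."""
    m = len(p_i)
    delay = 0
    for k in range(1, m + 1):
        delay = max(delay, sum(p_i[:k]) - sum(p_j[:k - 1]))
    return delay

def makespan(seq, p):
    """Makespan of a no-wait flow-shop sequence; p[j][k] = time of job j on machine k."""
    total = sum(no_wait_delay(p[a], p[b]) for a, b in zip(seq, seq[1:]))
    return total + sum(p[seq[-1]])

def order_crossover(a, b, rng):
    """OX crossover: keep a slice of parent a, fill remaining genes in b's order."""
    n = len(a)
    i, j = sorted(rng.sample(range(n), 2))
    child = [None] * n
    child[i:j] = a[i:j]
    kept = set(a[i:j])
    fill = iter(g for g in b if g not in kept)
    for idx in range(n):
        if child[idx] is None:
            child[idx] = next(fill)
    return child

def ga(p, pop_size=40, generations=200, p_mut=0.2, seed=0):
    """Tiny GA over job permutations: tournament selection, OX, swap mutation."""
    rng = random.Random(seed)
    n = len(p)
    pop = [rng.sample(range(n), n) for _ in range(pop_size)]
    fit = lambda s: makespan(s, p)
    best = min(pop, key=fit)
    for _ in range(generations):
        new_pop = [best[:]]                         # elitism: keep the incumbent
        while len(new_pop) < pop_size:
            pa = min(rng.sample(pop, 3), key=fit)   # 3-way tournament
            pb = min(rng.sample(pop, 3), key=fit)
            child = order_crossover(pa, pb, rng)
            if rng.random() < p_mut:                # swap mutation
                x, y = rng.sample(range(n), 2)
                child[x], child[y] = child[y], child[x]
            new_pop.append(child)
        pop = new_pop
        cand = min(pop, key=fit)
        if fit(cand) < fit(best):
            best = cand
    return best, fit(best)

if __name__ == "__main__":
    # 5 jobs x 3 machines, invented processing times
    p = [[3, 5, 2], [4, 1, 3], [2, 4, 4], [5, 2, 1], [1, 3, 5]]
    seq, cmax = ga(p)
    print("best sequence:", seq, "makespan:", cmax)
```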
Abstract:
An economic model including the labor resource and the process-stage configuration is proposed to design g charts in which all the design parameters can be varied adaptively. A random shift size is considered during the economic design selection. The results obtained for a benchmark of 64 process-stage scenarios show that the activity configuration and some process operating parameters influence the selection of the best control chart strategy. To model the random shift size, its exact distribution can be approximated by a discrete distribution obtained from a relatively small sample of historical data. However, an accurate estimation of the inspection costs associated with the SPC activities is far from being achieved. An illustrative example shows the implementation of the proposed economic model in a real industrial case. (C) 2011 Elsevier B.V. All rights reserved.
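As a minimal illustration of the shift-size model mentioned above, an empirical discrete distribution can be read directly off a small historical sample; the data here are invented:

```python
import numpy as np

# Invented historical shift magnitudes (in units of the process sigma)
shifts = np.array([0.5, 1.0, 1.0, 1.5, 0.5, 2.0, 1.0, 0.5, 1.5, 1.0])

# Empirical discrete distribution: support points and their probabilities
values, counts = np.unique(shifts, return_counts=True)
probs = counts / counts.sum()
for v, q in zip(values, probs):
    print(f"shift {v:.1f} sigma: P = {q:.2f}")
```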
Abstract:
Wind-excited vibrations in the frequency range of 10 to 50 Hz due to vortex shedding often cause fatigue failures in the cables of overhead transmission lines. Damping devices, such as Stockbridge dampers, have long been in use for suppressing these vibrations. The dampers are conveniently modelled by means of their driving-point impedance, measured in the lab over the frequency range under consideration. The cables can be modelled as strings with an additional small bending stiffness. The main difficulty in modelling the vibrations lies, however, in the aerodynamic forces, which are usually approximated by the forces acting on a rigid cylinder in planar flow. In the present paper, the wind forces are represented by stochastic processes with arbitrary cross-correlation in space; the case of a Kármán vortex street on a rigid cylinder in planar flow is contained as a limiting case in this approach. The authors believe that this new view of the problem may yield useful results, particularly concerning the reliability of the lines and the probability of fatigue damage. © 1987.
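For context, the shedding frequency that drives these vibrations follows the Strouhal relation

```latex
% Vortex-shedding frequency for a circular cylinder
f_s = \frac{St\,U}{D}, \qquad St \approx 0.2,
```

where U is the wind speed, D the conductor diameter, and St the Strouhal number, roughly 0.2 for circular cylinders over a wide range of Reynolds numbers.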
Abstract:
I analyze two inequalities on entropy and information, one due to von Neumann and a more recent one due to Schiffer, and show that the relevant quantities in these inequalities are related by special doubly stochastic matrices (DSM). I then use a generalization of the first inequality to prove algebraically a generalization of Schiffer's inequality to arbitrary DSM. I also give a second interpretation of the latter inequality, determine its domain of applicability, and illustrate it using Zeeman splitting. This example shows that symmetric (degenerate) systems have less entropy than the corresponding split systems when compared at the same average energy. This seemingly counter-intuitive result is explained thermodynamically. © 1991.
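For reference, the classical fact underlying such inequalities is that doubly stochastic maps can only increase entropy:

```latex
% A doubly stochastic matrix D and the entropy inequality it induces:
% q = Dp is majorized by p, and Shannon entropy is Schur-concave.
D_{ij} \ge 0, \qquad \sum_i D_{ij} = \sum_j D_{ij} = 1
\;\Longrightarrow\;
S(Dp) \ge S(p), \qquad S(p) = -\sum_i p_i \ln p_i .
```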
Abstract:
The negative-dimensional integration method (NDIM) seems to be a very promising technique for evaluating massless and/or massive Feynman diagrams. It is unique in the sense that the method gives solutions in different regions of external momenta simultaneously. Moreover, it is a technique whereby the difficulties associated with performing parametric integrals in the standard approach are transferred to the simpler task of solving a system of linear algebraic equations, thanks to the polynomial character of the relevant integrands. We employ this method to evaluate a scalar integral for a massless two-loop three-point vertex with all the external legs off-shell, and consider several special cases of it, yielding results even for distinct simpler diagrams. We also consider the possibility of using NDIM in non-covariant gauges such as the light-cone gauge and perform some illustrative calculations, showing that for a one-degree violation of covariance (i.e. one external, gauge-breaking, light-like vector n_μ) the ensuing results are concordant with the ones obtained via either the usual dimensional regularization technique or the use of the principal-value prescription for the gauge-dependent pole, while for a two-degree violation of covariance (i.e. two external light-like vectors: the gauge-breaking n_μ and its dual n*_μ) the ensuing results are concordant with the ones obtained via causal constraints or the use of the so-called generalized Mandelstam-Leibbrandt prescription. © 1999 Elsevier Science B.V.
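The "polynomial character of the relevant integrands" stems from the basic NDIM identity, obtained by expanding a Gaussian integral and matching powers of the parameter:

```latex
% Expand both sides of \int d^D q \, e^{-\alpha q^2} = (\pi/\alpha)^{D/2}
% in powers of alpha and match term by term:
\int d^{D}q\,(q^2)^{n} = n!\,(-\pi)^{D/2}\,\delta_{n+D/2,\,0},
```

so propagator integrals are traded for integrals of polynomials in negative dimensions, and parametric integration is replaced by solving linear systems for the expansion coefficients.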
Abstract:
The negative-dimensional integration method (NDIM) is revealing itself as a very useful technique for computing massless and/or massive Feynman integrals, covariant and noncovariant alike. Up until now, however, the illustrative calculations done using this method have mostly been covariant scalar integrals without numerator factors. We show here how integrals with tensorial structures can also be handled straightforwardly and easily. Moreover, contrary to the absence of significant features in the usual approach, NDIM also brings surprising, unsuspected bonuses. To this end, we present two alternative ways of working out the integrals and illustrate them with the easiest Feynman integrals in this category, which emerge in the computation of a standard one-loop self-energy diagram. One of the novel and heretofore unsuspected bonuses is that there are degeneracies in the way one can express the final result for the Feynman integral in question.
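As a concrete instance of the simplest tensorial case mentioned, the massless one-loop self-energy integral with one numerator momentum decomposes, in dimensional regularization, as

```latex
% Simplest one-loop tensor (vector) integral of the self-energy type,
% massless case, dimensional regularization
\int d^{D}k\,\frac{k^{\mu}}{k^{2}(k-p)^{2}}
  = \frac{p^{\mu}}{2}\int d^{D}k\,\frac{1}{k^{2}(k-p)^{2}},
```

which follows from Lorentz covariance together with the vanishing of scaleless integrals.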
Abstract:
Power-law distributions, i.e. Lévy flights, have been observed in various economic, biological, and physical systems in the high-frequency regime. These distributions can be successfully explained via the gradually truncated Lévy flight (GTLF). In general, these systems converge to a Gaussian distribution in the low-frequency regime. In the present work, we develop a model for the physical basis of the cut-off length in the GTLF and its variation with the time interval between successive observations. We observe that the GTLF automatically approaches a Gaussian distribution in the low-frequency regime. We applied the present method to analyze time series from some physical and financial systems. The agreement between the experimental results and the theoretical curves is excellent. The present method can be applied to analyze time series in a variety of fields, which in turn provides a basis for the development of further microscopic models for the system. © 2000 Elsevier Science B.V. All rights reserved.
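A minimal simulation illustrates the crossover described above: truncated Lévy steps aggregated over longer intervals lose their excess kurtosis and approach a Gaussian. This sketch uses a sharp cutoff rather than the paper's gradual truncation, and all parameter values are illustrative:

```python
import numpy as np
from scipy.stats import levy_stable, kurtosis

# Draw heavy-tailed steps and truncate the tails. The GTLF cuts the tails
# off *gradually*; a sharp cutoff is the simplest stand-in and already
# shows the same crossover toward Gaussian behavior.
alpha, cutoff, n = 1.5, 10.0, 100_000
steps = levy_stable.rvs(alpha, 0.0, size=n, random_state=42)
steps = steps[np.abs(steps) < cutoff]

# Aggregating over longer intervals (lower frequency) should drive the
# excess kurtosis toward 0, the Gaussian value.
for lag in (1, 4, 16, 64):
    m = (len(steps) // lag) * lag
    agg = steps[:m].reshape(-1, lag).sum(axis=1)
    print(f"lag {lag:3d}: excess kurtosis = {kurtosis(agg):7.2f}")
```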
Abstract:
In a paper presented a few years ago, de Lorenci et al. introduced, in the context of canonical quantum cosmology, a model that allows changes of spatial topology. The purpose of the present work is to go a step further in that model by performing calculations that were only estimated there, for several compact manifolds of constant negative curvature, such as the Weeks and Thurston spaces and the icosahedral hyperbolic space (Best space). © 2000 The American Physical Society.
Abstract:
Perturbative quantum gauge field theory, as seen from the perspective of physical gauge choices such as the light-cone gauge, entails the emergence of troublesome poles of the type (k · n)^(-α) in the Feynman integrals. These come from the boson field propagator, where α = 1, 2, ⋯ and n_μ is the arbitrary external four-vector that defines the gauge proper. This becomes an additional hurdle in the computation of Feynman diagrams, since any graph containing internal boson lines will inevitably produce integrands with denominators bearing the characteristic gauge-fixing factor. How to deal with them has been the subject of research for decades, and several prescriptions have been suggested and tried in the course of time, with failures and successes. However, a more recent development at this frontier, which applies the negative-dimensional technique to compute light-cone Feynman integrals, shows that we can dispense with prescriptions altogether. An additional bonus comes attached to this new technique: not only does it render the light-cone calculation prescriptionless but, by its very nature, it can also dispense with the decomposition formulas or partial-fractioning tricks used in the standard approach to separate pole products of the type (k · n)^(-α) [(k − p) · n]^(-β) (β = 1, 2, ⋯). In this work we demonstrate how all this can be done.
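The partial-fractioning trick that NDIM renders unnecessary is, in its simplest (α = β = 1) instance, the identity

```latex
% Standard decomposition of a light-cone pole product
\frac{1}{(k\cdot n)\,[(k-p)\cdot n]}
  = \frac{1}{p\cdot n}
    \left[ \frac{1}{(k-p)\cdot n} - \frac{1}{k\cdot n} \right].
```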