958 results for Sequent Calculus
Abstract:
For one-dimensional flexible objects such as ropes, chains, and hair, the assumption of constant length is realistic for large-scale 3D motion. Moreover, when the motion or disturbance at one end gradually dies down along the curve defining the one-dimensional flexible object, the motion appears "natural". This paper presents a purely geometric and kinematic approach for deriving more natural, length-preserving transformations of planar and spatial curves. Techniques from variational calculus are used to determine analytical conditions, and it is shown that, to preserve length and yield the feature of diminishing motion, the velocity at any point on the curve must be along the tangent at that point. It is shown that for the special case of a straight line, the analytical conditions lead to the classical tractrix curve solution. Since analytical solutions exist for a tractrix curve, the motion of a piecewise linear curve can be solved in closed form and thus applied to the resolution of redundancy in hyper-redundant robots. Simulation results for several planar and spatial curves and various input motions of one end illustrate the features of motion damping and eventual alignment with the perturbation vector.
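The tractrix property invoked above is easy to check numerically. Below is a minimal sketch, assuming the standard parametrization x = a(t - tanh t), y = a sech t (the abstract does not fix a parametrization); it verifies that the tangent segment from any curve point down to the axis has constant length a, which is exactly the length-preservation feature the paper builds on.

```python
import math

def tractrix_point(t, a=1.0):
    """Point on the tractrix x = a*(t - tanh t), y = a*sech t."""
    return a * (t - math.tanh(t)), a / math.cosh(t)

def tangent_segment_length(t, a=1.0):
    """Length of the tangent segment from the curve down to the x-axis.

    For a tractrix this is the constant a: the towed end stays a fixed
    distance behind the pulled end, so arc length is preserved.
    """
    x, y = tractrix_point(t, a)
    sech = 1.0 / math.cosh(t)
    dx = a * math.tanh(t) ** 2          # x'(t)
    dy = -a * sech * math.tanh(t)       # y'(t)
    s = -y / dy                         # tangent parameter where y reaches 0
    return abs(s) * math.hypot(dx, dy)

for t in (0.5, 1.0, 2.0, 5.0):
    assert abs(tangent_segment_length(t) - 1.0) < 1e-9
```

Since tanh^2 t + sech^2 t = 1, the segment length reduces to a identically, independent of t.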
Abstract:
In this paper, a fractional-order proportional-integral (PI) controller is developed for a miniature air vehicle for rectilinear path following and trajectory tracking. The controller is implemented by constructing a vector field surrounding the path to be followed, which is then used to generate course commands for the miniature air vehicle. The fractional-order PI controller is simulated using the fundamentals of fractional calculus, and its results are compared with those obtained for a proportional controller and a proportional-integral controller. To analyze the performance of the controllers, four performance metrics are selected: maximum overshoot, control effort, settling time, and the integral of time-weighted absolute error (ITAE). A comparison of the nominal as well as the robust performances of these controllers indicates that the fractional-order PI controller exhibits the best performance in terms of ITAE while performing comparably in all other respects.
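The fractional integral at the heart of such a controller is commonly discretized with Grünwald-Letnikov weights; the abstract does not say which discretization the authors use, so the sketch below is one plausible implementation, with illustrative function names and signal layout.

```python
def gl_fractional_integral(samples, lam, h):
    """Grunwald-Letnikov approximation of the order-lam fractional
    integral of a uniformly sampled signal, evaluated at the last sample.

    lam = 1 recovers rectangular integration; lam = 0 is the identity.
    """
    # binomial weights of order -lam via the standard recurrence
    w = [1.0]
    for j in range(1, len(samples)):
        w.append(w[-1] * (1.0 - (1.0 - lam) / j))
    n = len(samples) - 1
    return h ** lam * sum(w[j] * samples[n - j] for j in range(n + 1))

def fopi(errors, kp, ki, lam, h):
    """Fractional-order PI law u = Kp*e + Ki * I^lam e (illustrative)."""
    return kp * errors[-1] + ki * gl_fractional_integral(errors, lam, h)
```

Setting lam strictly between 0 and 1 interpolates between pure proportional and integer-order PI action, which is what the extra ITAE tuning freedom comes from.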
Abstract:
Infinite arrays of coupled two-state stochastic oscillators exhibit well-defined steady states. We study the fluctuations that occur when the number N of oscillators in the array is finite. We choose a particular form of global coupling that in the infinite array leads to a pitchfork bifurcation from a monostable to a bistable steady state, the latter with two equally probable stationary states. The control parameter for this bifurcation is the coupling strength. In finite arrays these states become metastable: the fluctuations lead to distributions around the most probable states, with one maximum in the monostable regime and two maxima in the bistable regime. In the latter regime, the fluctuations lead to transitions between the two peak regions of the distribution. We also find that the fluctuations break the symmetry in the bimodal regime, that is, one metastable state becomes more probable than the other, increasingly so with increasing array size. To arrive at these results, we start from microscopic dynamical evolution equations from which we derive a Langevin equation that exhibits an interesting multiplicative noise structure. We also present a master equation description of the dynamics. Both of these equations lead to the same Fokker-Planck equation, the master equation via a 1/N expansion and the Langevin equation via standard methods of Itô calculus for multiplicative noise. From the Fokker-Planck equation we obtain an effective potential that reflects the transition from the unimodal to the bimodal distribution as a function of a control parameter. We present a variety of numerical and analytic results that illustrate the strong effects of the fluctuations. We also show that the limits N -> infinity and t -> infinity (where t is the time) do not commute; the two orders of implementation lead to drastically different results.
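A minimal illustration of the scenario is Euler-Maruyama integration of a pitchfork normal form with multiplicative noise, dx = (a*x - x^3) dt + sigma*x dW. This is a generic stand-in, not the Langevin equation derived in the paper; with sigma = 0 and a > 0 the trajectory settles onto one of the two stable branches at ±sqrt(a).

```python
import math
import random

def euler_maruyama(x0, a, sigma, dt, steps, seed=0):
    """Integrate dx = (a*x - x**3) dt + sigma*x dW by Euler-Maruyama.

    Pitchfork normal form with multiplicative noise -- an illustrative
    stand-in for the paper's derived Langevin equation.
    """
    rng = random.Random(seed)
    x = x0
    for _ in range(steps):
        dw = rng.gauss(0.0, math.sqrt(dt))          # Wiener increment
        x += (a * x - x ** 3) * dt + sigma * x * dw
    return x
```

With sigma > 0 the noise occasionally carries the state between the two branches, which is the finite-N transition behavior the abstract describes.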
Abstract:
In gross motion of flexible one-dimensional (1D) objects such as cables, ropes, chains, ribbons, and hair, the assumption of constant length is realistic and reasonable. The motion of the object also appears more natural if the motion or disturbance given at one end attenuates along the length of the object. In an earlier work, variational calculus was used to derive natural and length-preserving transformations of planar and spatial curves, implemented for flexible 1D objects discretized with a large number of straight segments. This paper proposes a novel idea to reduce computational effort and enable real-time, realistic simulation of the motion of flexible 1D objects. The key idea is to represent the flexible 1D object as a spline and move the underlying control polygon, which has a much smaller number of segments. To preserve the length of the curve to within a prescribed tolerance as the control polygon is moved, the control polygon is adaptively modified by subdivision and merging. New theoretical results relating the length of the curve to the angle between adjacent segments of the control polygon are derived for quadratic and cubic splines. Depending on the prescribed tolerance on length error, the theoretical results are used to obtain threshold angles for subdivision and merging. Simulation results for arbitrarily chosen planar and spatial curves whose one end is subjected to generic input motions illustrate the approach.
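The interplay between a control polygon and the length of the underlying curve can be illustrated with de Casteljau subdivision of a quadratic Bézier segment (a generic sketch, not the paper's threshold-angle criterion): repeatedly splitting the segment drives the control polygon onto the curve, which also yields the curve length to any prescribed tolerance.

```python
import math

def subdivide_quadratic(p0, p1, p2):
    """Split a quadratic Bezier segment at t = 1/2 (de Casteljau)."""
    def mid(a, b):
        return ((a[0] + b[0]) / 2, (a[1] + b[1]) / 2)
    q0, q1 = mid(p0, p1), mid(p1, p2)
    m = mid(q0, q1)
    return (p0, q0, m), (m, q1, p2)

def polygon_length(*pts):
    """Length of the control polygon."""
    return sum(math.dist(a, b) for a, b in zip(pts, pts[1:]))

def curve_length(p0, p1, p2, tol=1e-9):
    """Arc length by adaptive subdivision: the chord underestimates and
    the control polygon overestimates, and both converge under
    subdivision, so we recurse until they agree."""
    chord = math.dist(p0, p2)
    poly = polygon_length(p0, p1, p2)
    if poly - chord < tol:
        return (poly + chord) / 2
    left, right = subdivide_quadratic(p0, p1, p2)
    return curve_length(*left, tol=tol) + curve_length(*right, tol=tol)
```

The gap between polygon and chord shrinks roughly fourfold per subdivision level, which is why an angle-based threshold on the control polygon can bound the length error.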
Abstract:
How can networking affect turnout in an election? We present a simple model that explains turnout as the result of a dynamic process of formation of the intention to vote within Erdős-Rényi random networks. Citizens have fixed preferences for one of two parties and are embedded in a given social network. They decide whether or not to vote on the basis of the attitude of their immediate contacts. They may simply follow the behavior of the majority (followers) or make an adaptive local calculus of voting (Downsian behavior): they either intend to vote when the majority of their neighbors are willing to vote too, or they vote when they perceive in their social neighborhood that the election is "close". We study the long-run average turnout, interpreted as the actual turnout observed in an election. Depending on the combination of values of the two key parameters, the average connectivity and the probability of behaving as a follower or in a Downsian fashion, the system exhibits monostability (zero turnout), bistability (zero turnout and either moderate or high turnout), or tristability (zero, moderate, and high turnout). In particular, for a wide range of values of both parameters we obtain realistic turnout rates, i.e., between 50% and 90%.
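The follower rule described above can be sketched directly (the Downsian rule and the paper's exact update schedule are omitted; function names and parameters are illustrative):

```python
import random

def erdos_renyi(n, p, rng):
    """Adjacency lists of a G(n, p) random graph."""
    adj = [[] for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            if rng.random() < p:
                adj[i].append(j)
                adj[j].append(i)
    return adj

def follower_step(adj, intend):
    """One synchronous update of the pure-follower rule: a citizen
    intends to vote iff a strict majority of neighbors currently does.
    Isolated citizens keep their current intention."""
    new = []
    for i, nbrs in enumerate(adj):
        if not nbrs:
            new.append(intend[i])
            continue
        votes = sum(intend[j] for j in nbrs)
        new.append(1 if 2 * votes > len(nbrs) else 0)
    return new
```

Note that the all-abstain state is absorbing under this rule, which is the zero-turnout equilibrium present in every regime of the model.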
Abstract:
Various families of exact solutions to the Einstein and Einstein-Maxwell field equations of General Relativity are treated for situations of sufficient symmetry that only two independent variables arise. The mathematical problem then reduces to consideration of sets of two coupled nonlinear differential equations.
The physical situations in which such equations arise include: a) the external gravitational field of an axisymmetric, uncharged steadily rotating body, b) cylindrical gravitational waves with two degrees of freedom, c) colliding plane gravitational waves, d) the external gravitational and electromagnetic fields of a static, charged axisymmetric body, and e) colliding plane electromagnetic and gravitational waves. Through the introduction of suitable potentials and coordinate transformations, a formalism is presented which treats all these problems simultaneously. These transformations and potentials may be used to generate new solutions to the Einstein-Maxwell equations from solutions to the vacuum Einstein equations, and vice versa.
The calculus of differential forms is used as a tool for generation of similarity solutions and generalized similarity solutions. It is further used to find the invariance group of the equations; this in turn leads to various finite transformations that give new, physically distinct solutions from old. Some of the above results are then generalized to the case of three independent variables.
Abstract:
The superspace approach provides a manifestly supersymmetric formulation of supersymmetric theories. For N = 1 supersymmetry one can use either constrained or unconstrained superfields for such a formulation. Only the unconstrained formulation is suitable for quantum calculations. Until now, all interacting N > 1 theories have been written using constrained superfields. No solutions of the nonlinear constraint equations were known.
In this work, we first review the superspace approach and its relation to conventional component methods. The difference between constrained and unconstrained formulations is explained, and the origin of the nonlinear constraints in supersymmetric gauge theories is discussed. It is then shown that these nonlinear constraint equations can be solved by transforming them into linear equations. The method is shown to work for N=1 Yang-Mills theory in four dimensions.
N=2 Yang-Mills theory is formulated in constrained form in six-dimensional superspace, which can be dimensionally reduced to four-dimensional N=2 extended superspace. We construct a superfield calculus for six-dimensional superspace, and show that known matter multiplets can be described very simply. Our method for solving constraints is then applied to the constrained N=2 Yang-Mills theory, and we obtain an explicit solution in terms of an unconstrained superfield. The solution of the constraints can easily be expanded in powers of the unconstrained superfield, and a similar expansion of the action is also given. A background-field expansion is provided for any gauge theory in which the constraints can be solved by our methods. Some implications of this for superspace gauge theories are briefly discussed.
Abstract:
This dissertation presents an analytical model of the evolutionary process formulated by the Theory of Evolution by Endosymbiosis, represented as a succession of stages involving different ecological and metabolic interactions between bacterial populations, considering both the population dynamics and the productive processes of these populations. For this approach, the system of differential equations known as the Volterra-Hamilton system is used, together with certain geometric concepts from KCC Theory and Projective Geometry. The main calculations were carried out with the algebraic programming package FINSLER, running on MAPLE.
Abstract:
Subgingival scaling and root planing constitute the "gold standard" and the treatment of choice for periodontitis; however, it is a difficult procedure to perform, requiring intensive training, and it may expose dentin, causing dentinal hypersensitivity through excessive removal of cementum, or produce defects such as grooves and scratches, besides leaving residual calculus and failing to reach the entire root surface. Recently, a papain- and chloramine-based gel (Papacárie), used to remove carious dentin, was introduced on the market. This gel could aid in removing subgingival calculus with less wear of the cementum. The aim of this study was to compare the efficacy of a papain- and chloramine-based gel combined with root planing in the subgingival region and to analyze the resulting root surface. After receiving oral hygiene instructions, supragingival scaling, and coronal polishing, 18 patients with chronic periodontitis, 6 women and 12 men, with a mean age of 51 (8) years, were treated in a split-mouth design. The test treatment consisted of applying the gel to the subgingival area for 1 min, followed by root planing; the control treatment consisted of subgingival scaling and root planing. The therapy was performed by 3 operators, and the baseline, 28-day, and 3-month examinations were carried out by a single examiner. Four never-treated teeth from two other patients (2 lower central incisors and 2 premolars), indicated for extraction, were submitted to the test and control treatments and, after extraction, analyzed by scanning electron microscopy (SEM).
Over the 3 months, the results showed significant improvement in the clinical parameters (bleeding on probing, pocket depth, and attachment gain) on both the test and the control sides, especially at 28 days; however, no statistical significance was observed when the two forms of therapy were compared. The mean plaque index remained high throughout the study. SEM analysis showed that the test treatment left a larger amount of residual calculus on the root surface, although calculus-free areas were also observed. In the control treatment, deeper regions not reached by the curettes, calculus-free areas, and a groove produced by the curette were observed. It was concluded that both the test and the control treatments were effective in the treatment of chronic periodontitis over the 3 months observed.
Abstract:
This thesis outlines the construction of several types of structured integrators for incompressible fluids. We first present a vorticity integrator, which is the Hamiltonian counterpart of the existing Lagrangian-based fluid integrator. We next present a model-reduced variational Eulerian integrator for incompressible fluids, which combines the efficiency gains of dimension reduction, the qualitative robustness to coarse spatial and temporal resolutions of geometric integrators, and the simplicity of homogenized boundary conditions on regular grids to deal with arbitrarily-shaped domains with sub-grid accuracy.
Both these numerical methods involve approximating the Lie group of volume-preserving diffeomorphisms by a finite-dimensional Lie group and then restricting the resulting variational principle by means of a non-holonomic constraint. Advantages and limitations of this discretization method will be outlined. It will be seen that these derivation techniques are unable to yield symplectic integrators, but that energy conservation is easily obtained, as is a discretized version of Kelvin's circulation theorem.
Finally, we outline the basis of a spectral discrete exterior calculus, which may be a useful element in producing structured numerical methods for fluids in the future.
Abstract:
Superresolution techniques are well known for their ability to surpass the classical diffraction limit, and they are widely used in optical storage and confocal scanning imaging systems. A radial birefringent filter, consisting of two polarizers and a circularly symmetric birefringent element, is introduced into superresolution, and an expression for its pupil function is derived by means of the Jones calculus. The analysis shows that transverse or axial superresolution of the optical system can be achieved simply by changing the angle between the polarization directions of the polarizers and the principal axis of the birefringent element. The parameters evaluating the superresolution performance of the device, namely the first-zero ratio, the Strehl ratio, and the sidelobe intensity suppression ratio, are discussed in detail. The advantage of this filter for superresolution is that its fabrication involves no phase variation and is therefore relatively simple and inexpensive. Its disadvantage is
Abstract:
The problem considered is that of minimizing the drag of a symmetric plate in infinite cavity flow under the constraints of fixed arclength and fixed chord. The flow is assumed to be steady, irrotational, and incompressible. The effects of gravity and viscosity are ignored.
Using complex variables, expressions for the drag, arclength, and chord, are derived in terms of two hodograph variables, Γ (the logarithm of the speed) and β (the flow angle), and two real parameters, a magnification factor and a parameter which determines how much of the plate is a free-streamline.
Two methods are employed for optimization:
(1) The parameter method. Γ and β are expanded in finite orthogonal series of N terms. Optimization is performed with respect to the N coefficients in these series and the magnification and free-streamline parameters. This method is carried out for the case N = 1 and minimum drag profiles and drag coefficients are found for all values of the ratio of arclength to chord.
(2) The variational method. A variational calculus method for minimizing integral functionals of a function and its finite Hilbert transform is introduced. This method is applied to functionals of quadratic form, and a necessary condition for the existence of a minimum solution is derived. The variational method is applied to the minimum drag problem and a nonlinear integral equation is derived but not solved.
Abstract:
This thesis is an investigation into the nature of data analysis and computer software systems which support this activity.
The first chapter develops the notion of data analysis as an experimental science which has two major components: data-gathering and theory-building. The basic role of language in determining the meaningfulness of theory is stressed, and the informativeness of a language and data base pair is studied. The static and dynamic aspects of data analysis are then considered from this conceptual vantage point. The second chapter surveys the available types of computer systems which may be useful for data analysis. Particular attention is paid to the questions raised in the first chapter about the language restrictions imposed by the computer system and its dynamic properties.
The third chapter discusses the REL data analysis system, which was designed to satisfy the needs of the data analyzer in an operational relational data system. The major limitation on the use of such systems is the amount of access to data stored on a relatively slow secondary memory. This problem of the paging of data is investigated and two classes of data structure representations are found, each of which has desirable paging characteristics for certain types of queries. One representation is used by most of the generalized data base management systems in existence today, but the other is clearly preferred in the data analysis environment, as conceptualized in Chapter I.
This data representation has strong implications for a fundamental process of data analysis -- the quantification of variables. Since quantification is one of the few means of summarizing and abstracting, data analysis systems are under strong pressure to facilitate the process. Two implementations of quantification are studied: one analogous to the form of the lower predicate calculus and another more closely attuned to the data representation. A comparison of these indicates that the use of the "label class" method results in orders-of-magnitude improvement over the lower predicate calculus technique.
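The abstract does not spell out the label-class representation, but one plausible reading is an index from each value of a variable to the records carrying it, so a quantifying predicate is evaluated once per distinct value rather than once per record. A hypothetical sketch, with all names illustrative:

```python
from collections import defaultdict

def quantify_scan(records, field, predicate):
    """Lower-predicate-calculus style: test the predicate on every record."""
    return sum(1 for r in records if predicate(r[field]))

def build_label_classes(records, field):
    """Summarize a variable as value -> list of record ids (built once)."""
    classes = defaultdict(list)
    for i, r in enumerate(records):
        classes[r[field]].append(i)
    return classes

def quantify_classes(classes, predicate):
    """Answer the same count from the summary: the predicate runs once
    per distinct value, not once per record."""
    return sum(len(ids) for value, ids in classes.items() if predicate(value))
```

When the number of distinct values is small relative to the number of records, the class-based route touches far less data, which is consistent with the improvement the chapter reports.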
Abstract:
Since neutron-induced fission events do not occur in the non-multiplying regions of nuclear reactors, e.g., moderator, reflector, and structural media, these regions generate no power, and the computational efficiency of global nuclear reactor calculations can therefore be increased by eliminating explicit numerical calculations inside the non-multiplying regions surrounding the active core. This dissertation discusses the computational efficiency of approximate albedo-type boundary conditions in the discrete ordinates (SN) formulation for two-energy-group eigenvalue problems in two-dimensional Cartesian geometry. Albedo, the Latin word for whiteness, was originally defined as the fraction of incident light diffusely reflected by a surface. This Latin word has remained the usual scientific term in astronomy, and in this dissertation the concept is extended to neutron reflection. This non-conventional SN albedo approximately replaces the reflector region surrounding the reactor's active core, since the transverse leakage terms are neglected inside the reflector. If a particular problem has no transverse leakage terms, i.e., it is a one-dimensional problem, then the albedo boundary conditions proposed in this dissertation are exact. Computational efficiency here means analyzing the accuracy of the numerical results against the execution time of each simulation of a given model problem. Numerical results for two model problems with symmetry are considered to illustrate this efficiency analysis.
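The albedo concept itself, as the reflected fraction of incident neutrons, can be illustrated with a toy one-group Monte Carlo for a reflector slab (this is not the two-group SN albedo operator constructed in the dissertation; cross sections and geometry are arbitrary):

```python
import math
import random

def slab_albedo(sigma_s, sigma_t, thickness, n=20000, seed=0):
    """Monte Carlo estimate of the albedo of a 1-D slab: the fraction of
    normally incident neutrons that re-emerge through the entry face.
    One energy group, isotropic scattering -- a toy model only."""
    rng = random.Random(seed)
    reflected = 0
    for _ in range(n):
        x, mu = 0.0, 1.0                 # enter at the left face, moving right
        while True:
            # sample the optical distance to the next collision
            x += mu * (-math.log(1.0 - rng.random()) / sigma_t)
            if x < 0.0:
                reflected += 1           # escaped back through the entry face
                break
            if x > thickness:
                break                    # leaked through the far face
            if rng.random() > sigma_s / sigma_t:
                break                    # absorbed
            mu = rng.uniform(-1.0, 1.0)  # isotropic scattering in mu
    return reflected / n
```

A pure absorber reflects nothing, while a thick, weakly absorbing slab returns most incident neutrons; an albedo boundary condition replaces the whole slab by exactly this one reflected fraction.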
Abstract:
In this final-year project, steel column bases are designed in their different configurations, depending on the applied axial force, shear force, and bending moment, in accordance with current standards. Software is then developed with which one can evaluate, in a simple and intuitive way, whether the preliminary sizing of the bases is correct.