971 results for "Supersymmetric formalism"
Abstract:
This paper tackles the problem of computing smooth, optimal trajectories on the Euclidean group of motions SE(3). The problem is formulated as an optimal control problem in which the cost function to be minimized is the integral of the square of the classical curvature. This problem is analogous to the elastic problem of differential geometry, and the resulting rigid body motions therefore trace elastic curves. An application of the Maximum Principle to this optimal control problem shifts the emphasis to the language of symplectic geometry and to the associated Hamiltonian formalism. This results in a system of first-order differential equations that yield coordinate-free necessary conditions for optimality for these curves. From these necessary conditions we identify an integrable case, and this particular set of curves is solved analytically. These analytic solutions provide interpolating curves between a given initial position and orientation and a desired position and orientation, which would be useful in motion planning for systems such as robotic manipulators and autonomous oriented vehicles.
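As a hedged sketch of the variational problem described above (the symbols g, X, and the interval [0, T] are assumed notation, not taken from the paper), the formulation reads

$$ \min_{g(\cdot)}\; \frac{1}{2}\int_0^T \kappa(t)^2\,dt \quad\text{subject to}\quad \dot g(t) = g(t)\,X(t), \qquad g(t)\in SE(3),\; X(t)\in\mathfrak{se}(3), $$

with boundary conditions g(0) = g_0 and g(T) = g_T encoding the given and desired position and orientation; the Maximum Principle then supplies the Hamiltonian necessary conditions mentioned in the abstract.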
Abstract:
This paper considers the motion planning problem for oriented vehicles travelling at unit speed in a 3-D space. A Lie group formulation arises naturally, and the vehicles are modeled as kinematic control systems with drift defined on the orthonormal frame bundles of particular Riemannian manifolds, specifically the 3-D space forms: Euclidean space E^3, the sphere S^3, and the hyperboloid H^3. The corresponding frame bundles are the Euclidean group of motions SE(3), the rotation group SO(4), and the Lorentz group SO(1,3). The maximum principle of optimal control shifts the emphasis for these systems to the associated Hamiltonian formalism. For an integrable case, the extremal curves are explicitly expressed in terms of elliptic functions. In this paper, a study of the singularities of the extremal curves is given; these correspond to critical points of the elliptic functions. The extremal curves are characterized as the intersections of invariant surfaces and are illustrated graphically at the singular points. It is then shown that the projections of the extremals onto the base space, called elastica, are at these singular points curves of constant curvature and torsion, which in turn implies that the oriented vehicles trace helices.
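For reference, the closing claim can be checked directly against the Serret-Frenet equations: a curve with constant curvature κ0 > 0 and constant torsion τ0 is a circular helix with radius and pitch parameter

$$ r = \frac{\kappa_0}{\kappa_0^2 + \tau_0^2}, \qquad h = \frac{\tau_0}{\kappa_0^2 + \tau_0^2}, $$

so an oriented vehicle following such an elastica indeed traces a helix (degenerating to a circle when τ0 = 0).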
Abstract:
We propose a bridge between two important parallel programming paradigms: data parallelism and communicating sequential processes (CSP). Data-parallel pipelined architectures obtained with the Alpha language can be embedded in a control-intensive application expressed in the CSP-based Handel formalism. The interface is formally defined from the semantics of the languages Alpha and Handel. This work will ease the design of compute-intensive applications on FPGAs.
Abstract:
A new Bayesian algorithm for retrieving surface rain rate from the Tropical Rainfall Measuring Mission (TRMM) Microwave Imager (TMI) over the ocean is presented, along with validations against estimates from the TRMM Precipitation Radar (PR). The Bayesian approach offers a rigorous basis for optimally combining multichannel observations with prior knowledge. While other rain-rate algorithms have been published that are based at least partly on Bayesian reasoning, this is believed to be the first self-contained algorithm that fully exploits Bayes’s theorem to yield not just a single rain rate, but rather a continuous posterior probability distribution of rain rate. To advance the understanding of the theoretical benefits of the Bayesian approach, sensitivity analyses have been conducted based on two synthetic datasets for which the “true” conditional and prior distributions are known. Results demonstrate that even when the prior and conditional likelihoods are specified perfectly, biased retrievals may occur at high rain rates. This bias is not the result of a defect of the Bayesian formalism, but rather represents the expected outcome when the physical constraint imposed by the radiometric observations is weak owing to saturation effects. It is also suggested that both the choice of estimator and the prior information are crucial to the retrieval. In addition, the performance of the Bayesian algorithm herein is found to be comparable to that of other benchmark algorithms in real-world applications, while having the additional advantage of providing a complete continuous posterior probability distribution of surface rain rate.
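As an illustration of the retrieval structure described above (a minimal sketch only: the forward model, noise levels, and prior below are hypothetical stand-ins, not the TMI algorithm), the full posterior over rain rate can be computed on a grid as follows:

```python
# Sketch of a fully Bayesian rain-rate retrieval: posterior ∝ likelihood × prior,
# evaluated on a discrete rain-rate grid. All parameters are illustrative.
import numpy as np

def posterior_rain_rate(tb_obs, rain_grid, forward_model, obs_sigma, prior_pdf):
    """Return the normalised posterior pdf of rain rate on rain_grid.

    tb_obs        : observed brightness temperatures, shape (n_channels,)
    rain_grid     : candidate rain rates, shape (n,)
    forward_model : maps a rain rate to simulated brightness temperatures
    obs_sigma     : per-channel observation noise std dev, shape (n_channels,)
    prior_pdf     : prior density evaluated on rain_grid, shape (n,)
    """
    log_post = np.log(prior_pdf + 1e-300)
    for i, r in enumerate(rain_grid):
        resid = (tb_obs - forward_model(r)) / obs_sigma
        log_post[i] += -0.5 * np.sum(resid**2)   # Gaussian conditional likelihood
    log_post -= log_post.max()                   # numerical stability
    post = np.exp(log_post)
    return post / np.trapz(post, rain_grid)      # normalise to a pdf

# Toy forward model that saturates at high rain rates, mimicking the weak
# radiometric constraint the abstract identifies as the source of bias.
def toy_forward(r):
    return np.array([280.0 - 60.0 * np.exp(-0.1 * r)])

grid = np.linspace(0.0, 50.0, 501)
prior = np.exp(-grid / 5.0)                      # assumed exponential prior
post = posterior_rain_rate(np.array([275.0]), grid, toy_forward,
                           np.array([1.5]), prior)
print("posterior mean:", np.trapz(grid * post, grid))
print("MAP estimate :", grid[np.argmax(post)])
```

Comparing the posterior mean with the MAP estimate on such a saturating forward model illustrates the abstract's point that the choice of estimator matters at high rain rates.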
Abstract:
We present an intercomparison and verification analysis of 20 GCMs (Global Circulation Models) included in the 4th IPCC assessment report regarding their representation of the hydrological cycle on the Danube river basin for 1961–2000 and for the 2161–2200 SRESA1B scenario runs. The basin-scale properties of the hydrological cycle are computed by spatially integrating the precipitation, evaporation, and runoff fields using the Voronoi-Thiessen tessellation formalism. The span of the model-simulated mean annual water balances is of the same order of magnitude as the observed Danube discharge at the delta; the true value is within the range simulated by the models. Some land components seem to have deficiencies, since there are cases of violation of water conservation when annual means are considered. The overall performance and the degree of agreement of the GCMs are comparable to those of the RCMs (Regional Climate Models) analyzed in a previous work, in spite of the much higher resolution and common nesting of the RCMs. The reanalyses are shown to feature several inconsistencies and cannot be used as a verification benchmark for the hydrological cycle in the Danubian region. In the scenario runs, for basically all models the water balance decreases, whereas its interannual variability increases. Changes in the strength of the hydrological cycle are not consistent among models: it is confirmed that capturing the impact of climate change on the hydrological cycle is not an easy task over land areas. Moreover, in several cases we find that qualitatively different behaviors emerge among the models: the ensemble mean does not represent any sort of average model, and often it falls between the models’ clusters.
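A minimal sketch of the Thiessen (Voronoi) weighting step, under the assumption that the basin is represented by a dense set of sample points; the function and data below are illustrative, not the paper's implementation:

```python
# Thiessen weighting: each basin point is assigned to its nearest station, so a
# station's weight is the fraction of basin area closest to it.
import numpy as np

def thiessen_average(station_xy, station_values, basin_points):
    """Area-weighted basin mean via nearest-station (Thiessen) assignment.

    station_xy     : (n_stations, 2) station coordinates
    station_values : (n_stations,) e.g. annual precipitation per station
    basin_points   : (n_points, 2) dense sample of points covering the basin
    """
    d2 = ((basin_points[:, None, :] - station_xy[None, :, :]) ** 2).sum(-1)
    nearest = d2.argmin(axis=1)                   # Voronoi cell membership
    weights = np.bincount(nearest, minlength=len(station_xy)) / len(basin_points)
    return float(weights @ station_values)

# Toy usage with three hypothetical stations (values in mm/yr, illustrative).
stations = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
precip = np.array([600.0, 800.0, 700.0])
pts = np.random.default_rng(0).random((10000, 2))
print(thiessen_average(stations, precip, pts))
```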
Abstract:
Using the formalism of the Ruelle response theory, we study how the invariant measure of an Axiom A dynamical system changes as a result of adding noise, and describe how the stochastic perturbation can be used to explore the properties of the underlying deterministic dynamics. We first find the expression for the change in the expectation value of a general observable when a white noise forcing is introduced in the system, both in the additive and in the multiplicative case. We also show that the difference between the expectation value of the power spectrum of an observable in the stochastically perturbed case and of the same observable in the unperturbed case is equal to the variance of the noise times the square of the modulus of the linear susceptibility describing the frequency-dependent response of the system to perturbations with the same spatial patterns as the considered stochastic forcing. This provides a conceptual bridge between the change in the fluctuation properties of the system due to the presence of noise and the response of the unperturbed system to deterministic forcings. Using Kramers-Kronig theory, it is then possible to derive the real and imaginary parts of the susceptibility and thus deduce the Green function of the system for any desired observable. We then extend our results to rather general patterns of random forcing, from the case of several white noise forcings, to noise terms with memory, up to the case of a space-time random field. Explicit formulas are provided for each relevant case analysed. As a general result, we find, using an argument of positive-definiteness, that the power spectrum of the stochastically perturbed system is larger at all frequencies than the power spectrum of the unperturbed system. We provide an example of application of our results by considering the spatially extended chaotic Lorenz 96 model. These results clarify the property of stochastic stability of SRB measures in Axiom A flows, provide tools for analysing stochastic parameterisations and related closure ansätze to be implemented in modelling studies, and introduce new ways to study the response of a system to external perturbations. Taking into account the chaotic hypothesis, we expect that our results have practical relevance for a more general class of systems than those belonging to Axiom A.
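In symbols, the central spectral identity stated above reads (writing S for power spectra, Φ for the observable, ε² for the variance of the noise, and χ for the linear susceptibility to forcings with the chosen spatial pattern):

$$ S^{\Phi}_{\epsilon}(\omega) - S^{\Phi}_{0}(\omega) = \epsilon^{2}\,\bigl|\chi^{\Phi}(\omega)\bigr|^{2} \;\ge\; 0, $$

which makes the positive-definiteness result explicit: the stochastically perturbed spectrum exceeds the unperturbed one at every frequency.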
Abstract:
Variational data assimilation in continuous time is revisited. The central techniques applied in this paper are in part adopted from the theory of optimal nonlinear control. Alternatively, the investigated approach can be considered as a continuous time generalization of what is known as weakly constrained four-dimensional variational assimilation (4D-Var) in the geosciences. The technique makes it possible to assimilate trajectories in the case of partial observations and in the presence of model error. Several mathematical aspects of the approach are studied. Computationally, it amounts to solving a two-point boundary value problem. For imperfect models, the trade-off between small dynamical error (i.e. the trajectory obeys the model dynamics) and small observational error (i.e. the trajectory closely follows the observations) is investigated. This trade-off turns out to be trivial if the model is perfect. However, even in this situation, allowing for minute deviations from the perfect model is shown to have positive effects, namely regularizing the problem. The presented formalism is dynamical in character. No statistical assumptions on dynamical or observational noise are imposed.
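One common way to write the functional being minimised (a sketch with assumed notation: f the model vector field, h the observation operator, and Q, R weighting operators rather than statistically interpreted covariances, consistent with the dynamical character stressed above) is

$$ J[x] = \frac{1}{2}\int_0^T \Bigl( \|\dot{x}(t) - f(x(t))\|_{Q^{-1}}^{2} + \|y(t) - h(x(t))\|_{R^{-1}}^{2} \Bigr)\, dt, $$

whose stationarity conditions form the two-point boundary value problem mentioned in the abstract; letting the dynamical-error term dominate recovers the perfect-model (strongly constrained) limit.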
Abstract:
The dielectric constant, ε', and the dielectric loss, ε'', of gelatin films were measured in the glassy and rubbery states over a frequency range from 20 Hz to 10 MHz; ε' and ε'' were transformed into the M* formalism (M* = 1/(ε' - iε'') = M' + iM''; i, the imaginary unit). The peak of ε'' was masked, probably due to dc conduction, but the peak of M'', i.e. the conductivity relaxation, was observed for the gelatin used. By fitting the M'' data to a Havriliak-Negami type equation, the relaxation time, τ_HN, was evaluated. The activation energy E_τ, evaluated from an Arrhenius plot of 1/τ_HN, agreed well with E_σ, evaluated from the dc conductivity σ0, in both the glassy and rubbery states, indicating that the conductivity relaxation observed for the gelatin films is ascribed to ionic conduction. The activation energy in the glassy state was larger than that in the rubbery state.
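Separating the modulus into real and imaginary parts makes the transformation explicit:

$$ M^{*} = \frac{1}{\varepsilon' - i\varepsilon''} = \frac{\varepsilon'}{\varepsilon'^{2} + \varepsilon''^{2}} + i\,\frac{\varepsilon''}{\varepsilon'^{2} + \varepsilon''^{2}} = M' + iM'', $$

and the Arrhenius analysis referred to above fits $1/\tau_{HN} \propto \exp(-E_{\tau}/RT)$, so that E_τ can be compared directly with E_σ extracted from the dc conductivity.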
Abstract:
We discuss several methods of calculating the DIS structure function F2(x, Q²) based on BFKL-type small-x resummations. Taking into account new HERA data ranging down to small x and low Q², the pure leading-order BFKL-based approach is excluded. Other methods based on high-energy factorization are closer to conventional renormalization group equations. Apart from several difficulties and ambiguities in combining the renormalization group equations with small-x resummed terms, we find that a fit to the current data is hardly feasible, since the data in the low-Q² region are not as steep as the BFKL formalism predicts. Thus we conclude that deviations from the (successful) renormalization group approach towards summing up logarithms in 1/x are disfavoured by experiment.
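For orientation (a standard leading-order result, not a number taken from this paper): the BFKL resummation predicts a power-law small-x growth

$$ F_2(x, Q^2) \sim x^{-\lambda}, \qquad \lambda = \frac{12\ln 2}{\pi}\,\alpha_s \approx 0.5 \ \text{for}\ \alpha_s \approx 0.2, $$

which is the steep small-x rise that the low-Q² HERA data fail to exhibit.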
Abstract:
Understanding how and why the capability of one set of business resources, with its structural arrangements and mechanisms, works compared to another can provide competitive advantage in terms of new business processes and product and service development. However, most business models of capability are descriptive and lack a formal modelling language with which to compare capabilities qualitatively and quantitatively. Gibson’s theory of affordance, the potential for action, provides a formal basis for a more robust and quantitative model, but most formal affordance models are complex and abstract and lack support for real-world applications. We aim to understand the ‘how’ and ‘why’ of business capability by developing a quantitative and qualitative model that underpins earlier work on Capability-Affordance Modelling (CAM). This paper integrates an affordance-based capability model with the formalism of Coloured Petri Nets to develop a simulation model. Using the model, we show how capability depends on the space-time path of interacting resources, the mechanism of transition, and specific critical affordance factors relating to the values of the variables for resources, people, and physical objects. We show how the model can identify the capabilities of resources that enable the capability to inject a drug and anaesthetise a patient.
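As a minimal sketch of the idea (not the paper's Coloured Petri Net model; the places, tokens, and transition below are illustrative), a capability can be checked as the enabledness and firing of a transition whose input places must all hold suitable resource tokens:

```python
# A tiny place/transition net: a transition fires only when every input place
# holds enough tokens, modelling a capability as reachability of an outcome.
from dataclasses import dataclass, field

@dataclass
class Net:
    marking: dict = field(default_factory=dict)      # place -> token count
    transitions: dict = field(default_factory=dict)  # name -> (inputs, outputs)

    def enabled(self, t):
        inputs, _ = self.transitions[t]
        return all(self.marking.get(p, 0) >= n for p, n in inputs.items())

    def fire(self, t):
        if not self.enabled(t):
            raise RuntimeError(f"transition {t!r} not enabled")
        inputs, outputs = self.transitions[t]
        for p, n in inputs.items():
            self.marking[p] -= n
        for p, n in outputs.items():
            self.marking[p] = self.marking.get(p, 0) + n

net = Net(
    marking={"clinician": 1, "syringe": 1, "drug_dose": 1, "patient_awake": 1},
    transitions={
        "inject_drug": (
            {"clinician": 1, "syringe": 1, "drug_dose": 1, "patient_awake": 1},
            {"clinician": 1, "syringe": 1, "patient_anaesthetised": 1},
        ),
    },
)
net.fire("inject_drug")
print(net.marking)  # capability realised: patient_anaesthetised token present
```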
Abstract:
A parameterization of mesoscale eddies in coarse-resolution ocean general circulation models (GCMs) is formulated and implemented using a residual-mean formalism. In this framework, mean buoyancy is advected by the residual velocity (the sum of the Eulerian and eddy-induced velocities) and modified by a residual flux which accounts for the diabatic effects of mesoscale eddies. The residual velocity is obtained by stepping forward a residual-mean momentum equation in which eddy stresses appear as forcing terms. A study of the spatial distribution of eddy stresses, derived by using them as control parameters to “fit” the residual-mean model to observations, supports the idea that eddy stresses can be likened to a vertical down-gradient flux of momentum with a coefficient that is constant in the vertical. The residual eddy flux is set to zero in the ocean interior, where mesoscale eddies are assumed to be quasi-adiabatic, but is parameterized by a horizontal down-gradient diffusivity near the surface, where eddies develop a diabatic component as they stir properties horizontally across steep isopycnals. The residual-mean model is implemented and tested in the MIT general circulation model. It is shown that the resulting model (1) has a climatology that is superior to that obtained using the Gent and McWilliams parameterization scheme with a spatially uniform diffusivity and (2) allows one to significantly reduce the (spurious) horizontal viscosity used in coarse-resolution GCMs.
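In outline (a sketch with assumed notation: b̄ the mean buoyancy, K a near-surface diffusivity), the residual-mean buoyancy budget described above reads

$$ \frac{\partial \bar b}{\partial t} + \mathbf{v}^{\dagger}\cdot\nabla \bar b = -\nabla\cdot\mathbf{F}_{\mathrm{res}}, \qquad \mathbf{v}^{\dagger} = \bar{\mathbf{v}} + \mathbf{v}^{*}, $$

with the residual flux set to zero in the quasi-adiabatic interior and parameterized as a horizontal down-gradient flux, $\mathbf{F}_{\mathrm{res}} = -K\,\nabla_h \bar b$, in the diabatic near-surface layer.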
Abstract:
A new formal approach to the representation of polarization states of coherent and partially coherent electromagnetic plane waves is presented. Its basis is a purely geometric construction in which the normalised complex-analytic coherent wave appears as a generating line in the sphere of wave directions, with its Stokes vector determined by the intersection with the conjugate generating line. The Poincaré sphere is now located in physical space, simply a coordination of the wave sphere, with its axis aligned with the wave vector. Algebraically, the generators representing coherent states are represented by spinors, and this is made consistent with the spinor-tensor representation of electromagnetic theory by means of an explicit reference spinor that we call the phase flag. As a faithful unified geometric representation, the new model provides improved formal tools for resolving many of the geometric difficulties and ambiguities that arise in the traditional formalism.
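For orientation, the standard algebraic correspondence that this geometric construction reinterprets is the map from a polarization spinor ψ (a two-component Jones-type vector) to its Stokes vector via the Pauli matrices σ_μ:

$$ S_{\mu} = \psi^{\dagger}\sigma_{\mu}\psi, \qquad \mu = 0,1,2,3, $$

with σ0 the identity; the abstract's contribution is to realise this correspondence geometrically, through generating lines on the wave sphere.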
Abstract:
The congruential rule advanced by Graves for polarization basis transformation of the radar backscatter matrix is now often misinterpreted as an example of a consimilarity transformation. However, consimilarity transformations imply a physically unrealistic antilinear time-reversal operation. This is just one of the approaches found in the literature to the description of transformations in which the role of conjugation has been misunderstood. In this paper, the different approaches are examined, in particular with respect to the role of conjugation. In order to justify and correctly derive the congruential rule for polarization basis transformation, and to properly place the role of conjugation, the origin of the problem is traced back to the derivation of the antenna height from the transmitted field. In fact, careful consideration of the role played by the Green’s dyadic operator relating the antenna height to the transmitted field shows that, under a general unitary basis transformation, it is not justified to assume a scalar relationship between them. Invariance of the voltage equation shows that antenna states and wave states must in fact lie in dual spaces, a distinction not captured in the conventional Jones vector formalism. By introducing spinor formalism, and with the use of an alternate spin frame for the transmitted field, a mathematically consistent implementation of the directional wave formalism is obtained. Examples are given comparing the wider generality of the congruential rule in both active and passive transformations with the consimilarity rule.
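Schematically (general textbook definitions, not this paper's derivation), the two transformation rules being contrasted are

$$ \text{congruential:}\quad S' = U^{T} S\, U, \qquad \text{consimilarity:}\quad S' = U S\, \bar{U}^{-1}, $$

for a unitary change of polarization basis U; the point argued above is that only the former is physically justified for the backscatter matrix, since consimilarity implies an antilinear time-reversal operation.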
Abstract:
Clusters of galaxies are the most impressive gravitationally bound systems in the universe, and their abundance (the cluster mass function) is an important statistic for probing the matter density parameter Ωm and the amplitude of density fluctuations σ8. The cluster mass function is usually described in terms of the Press-Schechter (PS) formalism, where the primordial density fluctuations are assumed to be a Gaussian random field. In previous works we proposed a non-Gaussian analytical extension of the PS approach based on the q-power-law distribution (PL) of nonextensive kinetic theory. In this paper, by applying the PL distribution to fit the observational mass function data from the X-ray highest flux-limited sample (HIFLUGCS), we find a strong degeneracy among the cosmic parameters σ8 and Ωm and the q parameter of the PL distribution. A joint analysis involving recent observations of the baryon acoustic oscillation (BAO) peak and the Cosmic Microwave Background (CMB) shift parameter is carried out in order to break this degeneracy and better constrain the physically relevant parameters. The present results suggest that the next generation of cluster surveys will be able to probe the quantities of cosmological interest (σ8, Ωm) and the underlying cluster physics quantified by the q parameter.
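For reference (the standard Gaussian Press-Schechter form, which the q-power-law extension generalises), the comoving mass function is

$$ \frac{dn}{dM} = \sqrt{\frac{2}{\pi}}\; \frac{\bar\rho}{M^{2}}\; \frac{\delta_c}{\sigma(M)} \left| \frac{d\ln\sigma}{d\ln M} \right| \exp\!\left( -\frac{\delta_c^{2}}{2\sigma^{2}(M)} \right), $$

where δ_c is the critical collapse overdensity and σ(M) the variance of the filtered density field, whose amplitude is set by σ8; in the nonextensive extension the Gaussian kernel is replaced by the q-distribution, introducing the q parameter constrained above.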
Abstract:
The complete understanding of the basic constituents of hadrons and of hadronic dynamics at high energies are two of the main challenges for the theory of strong interactions. In particular, the existence of intrinsic heavy quark components in the hadron wave function must be confirmed (or disproved). In this paper we propose a new mechanism for the production of D-mesons at forward rapidities based on the Color Glass Condensate (CGC) formalism and demonstrate that the resulting transverse momentum spectra depend strongly on the behavior of the charm distribution at large Bjorken x. Our results show clearly that the hypothesis of intrinsic charm can be tested in pp and p(d)A collisions at RHIC and the LHC.
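In outline (a schematic form of the dilute-dense, or hybrid, CGC expression for forward production, with the overall normalisation left unspecified and assumed notation: c(x1) the charm distribution, Ñ_F the fundamental-representation dipole amplitude, D_{c→D} the fragmentation function):

$$ \frac{d\sigma}{dy\, d^{2}p_T} \;\propto\; \int \frac{dz}{z^{2}}\; x_1\, c(x_1,\mu^2)\; \tilde N_F\!\left(x_2, \frac{p_T}{z}\right) D_{c\to D}(z), $$

which makes visible why the forward spectra probe the charm distribution at large Bjorken x1, while the small-x2 target enters through the CGC dipole amplitude.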