985 results for Classical Invariant Theory


Relevance:

30.00%

Publisher:

Abstract:

The Lorentz-Dirac equation is not an unavoidable consequence of linear and angular momentum conservation alone for a point charge; it also requires an additional assumption concerning the elementary character of the charge. Here we use a less restrictive elementarity assumption for a spinless charge and derive a system of conservation equations that do not properly constitute an equation of motion: because they contain an extra scalar variable, the future evolution of the charge is not determined. We show that a supplementary constitutive relation can be added so that the motion is determined and free from the troubles that are customary in the Lorentz-Dirac equation, i.e., preacceleration and runaways.

Outgoing radiation is introduced in the framework of classical predictive electrodynamics, using the Lorentz-Dirac equation as a subsidiary condition. In a perturbative scheme in the charges, the first radiative self-terms of the accelerations, momentum, and angular momentum of a two-charge system without external field are calculated.

The Rusk-Skinner formalism was developed to give a unified geometrical formalism for describing mechanical systems. It incorporates all the characteristics of the Lagrangian and Hamiltonian descriptions of these systems (including dynamical equations and solutions, constraints, the Legendre map, evolution operators, equivalence, etc.). In this work we extend this unified framework to first-order classical field theories and show how this description comprises the main features of the Lagrangian and Hamiltonian formalisms, in both the regular and singular cases. This formulation is a first step toward further applications in optimal control theory for partial differential equations. © 2004 American Institute of Physics.
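For reference, the core construction of the unified formalism for mechanics, as it is usually presented in the literature (a standard summary, not quoted from this paper), can be sketched as:

```latex
% Unified (Rusk-Skinner) space: the Whitney sum of the tangent and
% cotangent bundles, with natural coordinates (q^i, v^i, p_i).
W = TQ \oplus T^{*}Q
% Hamiltonian function coupling the two descriptions:
H(q, v, p) = p_i v^i - L(q, v)
% Dynamics given by the presymplectic equation
\iota_X \Omega = \mathrm{d}H
% where \Omega is the pullback to W of the canonical 2-form on T^*Q.
% Consistency of this equation enforces the Legendre constraint
p_i = \frac{\partial L}{\partial v^i}
% from which both the Lagrangian and Hamiltonian equations are recovered.
```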

This paper studies a risk measure inherited from ruin theory and investigates some of its properties. Specifically, we consider a value-at-risk (VaR)-type risk measure defined as the smallest initial capital needed to ensure that the ultimate ruin probability is less than a given level. This VaR-type risk measure turns out to be equivalent to the VaR of the maximal deficit of the ruin process in infinite time. A related Tail-VaR-type risk measure is also discussed.
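As a concrete illustration, this VaR-type risk measure can be computed in the classical Cramér-Lundberg model with exponential claims, where the ultimate ruin probability has the closed form ψ(u) = (1/(1+θ)) exp(−θu/((1+θ)μ)). The sketch below (illustrative parameters, not from the paper) finds the smallest initial capital u with ψ(u) ≤ α by bisection:

```python
import math

def ruin_prob(u, theta=0.25, mu=1.0):
    """Ultimate ruin probability psi(u) in the Cramer-Lundberg model
    with exponential claims (mean mu) and safety loading theta."""
    return (1.0 / (1.0 + theta)) * math.exp(-theta * u / ((1.0 + theta) * mu))

def var_type_capital(alpha, theta=0.25, mu=1.0, hi=1e6, tol=1e-10):
    """Smallest initial capital u such that psi(u) <= alpha (bisection)."""
    lo = 0.0
    if ruin_prob(lo, theta, mu) <= alpha:
        return lo
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if ruin_prob(mid, theta, mu) <= alpha:
            hi = mid
        else:
            lo = mid
    return hi

u_star = var_type_capital(0.01)   # capital keeping ruin probability below 1%
```

For these parameters the bisection agrees with the analytic solution u* = ((1+θ)μ/θ) ln(1/(α(1+θ))).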

This paper develops an approach to rank testing that nests all existing rank tests and simplifies their asymptotics. The approach is based on the fact that implicit in every rank test there are estimators of the null spaces of the matrix in question. The approach yields many new insights about the behavior of rank testing statistics under the null as well as local and global alternatives, in both the standard and the cointegration settings. The approach also suggests many new rank tests based on alternative estimates of the null spaces, as well as the new fixed-b theory. A brief Monte Carlo study illustrates the results.
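To illustrate the idea that a rank test implicitly estimates null spaces, the following sketch (a generic SVD-based construction, assumed here rather than taken from the paper) estimates the right null space of a noisy matrix estimate and forms a statistic from its smallest singular values:

```python
import numpy as np

rng = np.random.default_rng(0)

# True 4x4 matrix of rank 2: its right null space has dimension 2.
A = rng.standard_normal((4, 2)) @ rng.standard_normal((2, 4))

# Noisy estimate, as if obtained from n observations (noise ~ 1/sqrt(n)).
n = 100_000
B = A + rng.standard_normal((4, 4)) / np.sqrt(n)

# Implicit null-space estimator: the right singular vectors attached
# to the two smallest singular values of the estimate.
U, s, Vt = np.linalg.svd(B)
null_est = Vt[2:].T          # 4x2 orthonormal basis estimate

# A statistic in the spirit of rank tests: n times the sum of squared
# smallest singular values; it stays bounded under the null rank(A) = 2
# and diverges under higher-rank alternatives.
stat = n * np.sum(s[2:] ** 2)
```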

According to molecular epidemiology theory, two isolates belong to the same chain of transmission if they are similar according to a highly discriminatory molecular typing method. This has been demonstrated in outbreaks, but is rarely studied in endemic situations. Person-to-person transmission cannot be established when isolates of meticillin-resistant Staphylococcus aureus (MRSA) belong to endemically predominant genotypes; by contrast, isolates of infrequent genotypes might be more suitable for epidemiological tracking. The objective of the present study was to determine, in newly identified patients harbouring non-predominant MRSA genotypes, whether putative epidemiological links inferred from molecular typing could replace classical epidemiology in the context of a regional surveillance programme. MRSA genotypes were defined using double-locus sequence typing (DLST) combining the clfB and spa genes. A total of 1,268 non-repetitive MRSA isolates recovered between 2005 and 2006 in Western Switzerland were typed: 897 isolates (71%) belonged to four predominant genotypes, 231 (18%) to 55 non-predominant genotypes, and 140 (11%) were unique. Obvious epidemiological links were found for only 106/231 (46%) patients carrying isolates with non-predominant genotypes, suggesting that molecular surveillance identified twice as many clusters as would have been suspected from classical epidemiological links alone. However, not all of these molecular clusters represented person-to-person transmission. Thus, molecular typing cannot replace classical epidemiology, but is complementary to it. A prospective surveillance of MRSA genotypes could help to target epidemiological tracking in order to recognise new risk factors in hospital and community settings, or the emergence of new epidemic clones.

We show that the sameness of the inertial mass and the gravitational mass is an assumption and not a consequence of the equivalence principle. In the context of Sciama's inertia theory, this sameness is discussed and a condition that must be experimentally satisfied is given. The inertial force proposed by Sciama is, in a simple case, derived from Assis' inertia theory, which is based on the introduction of a Weber-type force. The origin of the inertial force is fully justified by taking into account that the Weber force is, in fact, an approximation of a simple retarded potential; see [18, 19]. We also present how inertial forces are derived from some solutions of the general relativistic equations, and we ask whether Assis' theory of inertia is included in the framework of general relativity. In the context of the inertia developed in the present paper, we establish the relation between the constant acceleration a0 that appears in the classical Modified Newtonian Dynamics (MOND) theory and the Hubble constant H0, i.e., a0 ≈ cH0.
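The closing relation is easy to check numerically. Using standard values (assumed here: H0 ≈ 70 km/s/Mpc), the product cH0 lands within an order of magnitude of the empirical MOND acceleration scale a0 ≈ 1.2 × 10⁻¹⁰ m/s²:

```python
# Order-of-magnitude check of a0 ~ c*H0, with standard assumed values.
c = 2.998e8                      # speed of light, m/s
H0 = 70e3 / 3.086e22             # Hubble constant, 70 km/s/Mpc in 1/s
a0_estimate = c * H0             # ~ 6.8e-10 m/s^2

a0_mond = 1.2e-10                # empirical MOND acceleration scale, m/s^2
ratio = a0_estimate / a0_mond    # same order of magnitude
```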

The classical theory of collision-induced emission (CIE) from pairs of dissimilar rare-gas atoms was developed in Paper I [D. Reguera and G. Birnbaum, J. Chem. Phys. 125, 184304 (2006)] from a knowledge of the straight-line collision trajectory and the assumption that the magnitude of the dipole could be represented by an exponential function of the internuclear distance. This theory is extended here to deal with other functional forms of the induced dipole, as revealed by ab initio calculations. Accurate analytical expressions for the CIE can be obtained by least-squares fitting of the ab initio values of the dipole as a function of interatomic separation using a sum of exponentials and then proceeding as in Paper I. However, we also show how the multi-exponential fit can be replaced by a simpler fit using only two analytic functions. Our analysis is applied to the polar molecules HF and HBr. Unlike the rare-gas atoms considered previously, these atomic pairs form stable bound diatomic molecules. We show that, interestingly, the spectra of these reactive molecules are characterized by the presence of multiple peaks. We also discuss the CIE arising from half collisions in excited electronic states, which in principle could be probed in photo-dissociation experiments.
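When the decay rates of the exponentials are held fixed, a sum-of-exponentials fit of this kind is linear in the amplitudes and reduces to ordinary least squares. A minimal sketch with synthetic data standing in for the ab initio dipole values (all numbers illustrative, not from the paper):

```python
import numpy as np

# Synthetic stand-in for ab initio dipole values mu(r): a sum of two
# exponentials, i.e. exactly the functional form being fitted.
r = np.linspace(2.0, 10.0, 50)
mu_true = 0.8 * np.exp(-0.9 * r) - 0.3 * np.exp(-0.4 * r)

# Least-squares fit of mu(r) = A1*exp(-a1*r) + A2*exp(-a2*r) with the
# decay rates a1, a2 held fixed, so the problem is linear in A1, A2.
a1, a2 = 0.9, 0.4
design = np.column_stack([np.exp(-a1 * r), np.exp(-a2 * r)])
amps, *_ = np.linalg.lstsq(design, mu_true, rcond=None)

mu_fit = design @ amps
max_err = np.max(np.abs(mu_fit - mu_true))
```

Since the synthetic data lie exactly in the span of the design matrix, the fit recovers the amplitudes (0.8, −0.3) to machine precision; fitting real ab initio values would leave a small residual instead.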

We present an algorithm for the computation of reducible invariant tori of discrete dynamical systems that is suitable for tori of dimensions larger than 1. It is based on a quadratically convergent scheme that approximates, at the same time, the Fourier series of the torus, its Floquet transformation, and its Floquet matrix. The Floquet matrix describes the linearization of the dynamics around the torus and, hence, its linear stability. The algorithm presents a high degree of parallelism, and the computational effort grows linearly with the number of Fourier modes needed to represent the solution. For these reasons it is a very good option to compute quasi-periodic solutions with several basic frequencies. The paper includes some examples (flows) to show the efficiency of the method in a parallel computer. In these flows we compute invariant tori of dimensions up to 5, by taking suitable sections.

The development of correct programs is a core problem in computer science. Although formal verification methods for establishing correctness with mathematical rigor are available, programmers often find these difficult to put into practice. One hurdle is deriving the loop invariants and proving that the code maintains them. So-called correct-by-construction methods aim to alleviate this issue by integrating verification into the programming workflow. Invariant-based programming is a practical correct-by-construction method in which the programmer first establishes the invariant structure and then incrementally extends the program in steps of adding code and proving, after each addition, that the code is consistent with the invariants. In this way, the program is kept internally consistent throughout its development, and the construction of the correctness arguments (proofs) becomes an integral part of the programming workflow. A characteristic of the approach is that programs are described as invariant diagrams, a graphical notation similar to the state charts familiar to programmers.

Invariant-based programming is a new method that has not yet been evaluated in large-scale studies. The most important prerequisite for feasibility on a larger scale is a high degree of automation. The goal of the Socos project has been to build tools to assist the construction and verification of programs using the method. This thesis describes the implementation and evaluation of a prototype tool in the context of the Socos project. The tool supports the drawing of the diagrams, automatic derivation and discharging of verification conditions, and interactive proofs. It is used to develop programs that are correct by construction. The tool consists of a diagrammatic environment connected to a verification condition generator and an existing state-of-the-art theorem prover.

Its core is a semantics for translating diagrams into verification conditions, which are sent to the underlying theorem prover. We describe a concrete method for 1) deriving sufficient conditions for total correctness of an invariant diagram; 2) sending the conditions to the theorem prover for simplification; and 3) reporting the results of the simplification to the programmer in a way that is consistent with the invariant-based programming workflow and that allows errors in the program specification to be efficiently detected. The tool uses an efficient automatic proof strategy to prove as many conditions as possible automatically and lets the remaining conditions be proved interactively. The tool is based on the verification system PVS and uses the SMT (Satisfiability Modulo Theories) solver Yices as a catch-all decision procedure. Conditions that are not discharged automatically may be proved interactively using the PVS proof assistant.

The programming workflow is very similar to the process by which a mathematical theory is developed inside a computer-supported theorem prover environment such as PVS. The programmer reduces a large verification problem, with the aid of the tool, into a set of smaller problems (lemmas), and can substantially improve the degree of proof automation by developing specialized background theories and proof strategies to support the specification and verification of a specific class of programs. We demonstrate this workflow by describing in detail the construction of a verified sorting algorithm.

Tool-supported verification often has little to no presence in computer science (CS) curricula. Furthermore, program verification is frequently introduced as an advanced and purely theoretical topic that is not connected to the workflow taught in the early, practically oriented programming courses. Our hypothesis is that verification could be introduced early in the CS education and that verification tools could be used in the classroom to support the teaching of formal methods. A prototype of Socos has been used in a course at Åbo Akademi University targeted at first- and second-year undergraduate students. We evaluate the use of Socos in the course as part of a case study carried out in 2007.
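The invariant-first workflow can be mimicked in ordinary code by stating the loop invariant explicitly and checking it at every step, so the program stays internally consistent as it grows. A toy sketch using a sorting example (plain Python assertions, not Socos or PVS notation):

```python
def sorted_prefix(a, k):
    """Invariant: the first k elements of a are in nondecreasing order."""
    return all(a[j] <= a[j + 1] for j in range(k - 1))

def insertion_sort(a):
    a = list(a)
    for i in range(1, len(a)):
        # Invariant on entry: a[0..i) is sorted.
        assert sorted_prefix(a, i)
        x, j = a[i], i
        while j > 0 and a[j - 1] > x:
            a[j] = a[j - 1]       # shift larger elements right
            j -= 1
        a[j] = x                  # insert a[i] into its place
        # Invariant restored: a[0..i] is sorted.
        assert sorted_prefix(a, i + 1)
    return a
```

In invariant-based programming the analogous conditions are discharged by a theorem prover once and for all, rather than checked at run time as here.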

Classical relational databases lack proper ways to manage certain real-world situations involving imprecise or uncertain data. Fuzzy databases overcome this limitation by allowing each entry in a table to be a fuzzy set, in which each element of the corresponding domain is assigned a membership degree from the real interval [0, 1]. But this fuzzy mechanism becomes inappropriate for modelling scenarios where data might be incomparable. We are therefore interested in a further generalization of the fuzzy database into the L-fuzzy database, in which the characteristic function of a fuzzy set maps into an arbitrary complete Brouwerian lattice L. From the query-language perspective, the fuzzy database language FSQL extends the regular Structured Query Language (SQL) by adding fuzzy-specific constructions. In addition, the L-fuzzy query language LFSQL introduces appropriate linguistic operations to define and manipulate inexact data in an L-fuzzy database. This research mainly focuses on defining the semantics of LFSQL. Doing so requires an abstract algebraic theory in which all the properties of, and operations on, L-fuzzy relations can be proved. In our study, we show that the theory of arrow categories forms a suitable framework, and we therefore define the semantics of LFSQL in the abstract notion of an arrow category. In addition, we implement the operations of L-fuzzy relations in Haskell and develop a parser that translates algebraic expressions into our implementation.
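The central operation on L-fuzzy relations, composition (R;S)(a,c) = ⋁_b (R(a,b) ⋀ S(b,c)), can be sketched directly. The thesis implements this in Haskell over an arbitrary lattice; the illustration below is an independent Python sketch (not the thesis code) with L = [0, 1], meet = min and join = max, which is one complete Brouwerian lattice:

```python
# L-fuzzy relations as dicts from pairs to membership degrees in [0, 1].

def compose(R, S, A, B, C):
    """(R ; S)(a, c) = join over b in B of meet(R(a, b), S(b, c))."""
    return {(a, c): max(min(R[(a, b)], S[(b, c)]) for b in B)
            for a in A for c in C}

A, B, C = ["x"], ["y1", "y2"], ["z"]
R = {("x", "y1"): 0.7, ("x", "y2"): 0.4}
S = {("y1", "z"): 0.5, ("y2", "z"): 0.9}
T = compose(R, S, A, B, C)
# T[("x", "z")] = max(min(0.7, 0.5), min(0.4, 0.9)) = 0.5
```

Swapping in a different complete lattice only requires replacing min and max by that lattice's meet and join.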

Since the introduction of quantum mechanics, many of nature's mysteries have found their explanations, and the concepts of quantum mechanics have become increasingly intertwined with those of computational complexity theory. New ideas and solutions have been discovered and developed with the aim of solving these computational problems. In particular, quantum mechanics has shaken several security proofs of classical protocols. In this thesis, we survey recent results on the impact of quantum mechanics on computational complexity, more precisely in the case of interactive classes. We present this research using the nomenclature of cooperative games with imperfect information. We lay out the differences between the classical, quantum, and non-signaling theories and demonstrate them through the example of the odd-cycle game. We focus on two major themes: the effect on a game of adding players, and the effect of parallel repetition. We observe that these modifications have very different consequences depending on the physical theory considered.
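The odd-cycle game has a small enough strategy space to compute its classical value by brute force. A sketch under the standard formulation (assumed here, not quoted from the thesis): Alice and Bob receive vertices a and b of an odd n-cycle with b ∈ {a, a+1 mod n}, each answers one bit, and they win iff their answers agree exactly when a = b; the best classical strategy achieves 1 − 1/(2n).

```python
from itertools import product

def classical_value(n):
    """Best deterministic winning probability of the odd-cycle game:
    the 2n question pairs (a, a) and (a, a+1 mod n) are uniform, and
    the players win iff their answers agree exactly when a == b."""
    questions = ([(a, a) for a in range(n)]
                 + [(a, (a + 1) % n) for a in range(n)])
    best = 0
    for f in product([0, 1], repeat=n):       # Alice's colouring of vertices
        for g in product([0, 1], repeat=n):   # Bob's colouring of vertices
            wins = sum((f[a] == g[b]) == (a == b) for a, b in questions)
            best = max(best, wins)
    return best / (2 * n)
```

All 2n constraints cannot hold at once because that would 2-colour an odd cycle, so exactly one must fail: classical_value(3) = 5/6 and classical_value(5) = 9/10.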

Quantum information theory studies the fundamental limits that the laws of physics impose on data-processing tasks such as compression and transmission of data over a noisy channel. This thesis presents general techniques for solving several fundamental problems of quantum information theory within a single framework. The central theorem of this thesis establishes the existence of a protocol for transmitting quantum data that the receiver already partially knows, using a single use of a noisy quantum channel. This theorem moreover has as immediate corollaries several central theorems of quantum information theory. The following chapters use this theorem to prove the existence of new protocols for two other types of quantum channels: quantum broadcast channels and quantum channels with side information at the transmitter. These protocols also deal with the transmission of quantum data partially known to the receiver using a single use of the channel, and have as corollaries asymptotic versions with and without auxiliary entanglement. The asymptotic versions with auxiliary entanglement can, in both cases, be regarded as quantum versions of the best known coding theorems for the classical versions of these problems. The last chapter deals with a purely quantum phenomenon called locking: it is possible to encode a classical message in a quantum state such that, by removing a subsystem of logarithmic size relative to its total size, one can ensure that no measurement has significant correlation with the message. The message is thus "locked" by a key of logarithmic size. This thesis presents the first locking protocol whose success criterion is that the trace distance between the joint distribution of the message and the measurement result and the product of their marginals be sufficiently small.