120 results for FORMALISMS
Abstract:
We study the Becchi-Rouet-Stora-Tyutin (BRST) structure of a self-interacting antisymmetric tensor gauge field, which has an on-shell null-vector gauge transformation. The Batalin-Vilkovisky covariant general formalism is briefly reviewed, and the issue of on-shell nilpotency of the BRST transformation is elucidated. We establish the connection between the covariant and the canonical BRST formalisms for our particular theory. Finally, we point out the similarities and differences with Witten's string field theory.
Abstract:
We obtain the next-to-next-to-leading-logarithmic renormalization-group improvement of the spectrum of hydrogenlike atoms with massless fermions by using potential NRQED. These results can also be applied to the computation of the muonic hydrogen spectrum, where we are able to reproduce some known double logarithms at O(mα⁶). We compare with other formalisms dealing with logarithmic resummation available in the literature.
Abstract:
We discuss the relation between spacetime diffeomorphisms and gauge transformations in theories of the Yang-Mills type coupled with Einstein's general relativity. We show that local symmetries of the Hamiltonian and Lagrangian formalisms of these generally covariant gauge systems are equivalent when gauge transformations are required to induce transformations which are projectable under the Legendre map. Although pure Yang-Mills gauge transformations are projectable by themselves, diffeomorphisms are not. Instead, the projectable symmetry group arises from infinitesimal diffeomorphism-inducing transformations which must depend on the lapse function and shift vector of the spacetime metric plus associated gauge transformations. Our results are generalizations of earlier results by ourselves and by Salisbury and Sundermeyer. © 2000 American Institute of Physics.
Abstract:
For a dynamical system defined by a singular Lagrangian, canonical Noether symmetries are characterized in terms of their commutation relations with the evolution operators of the Lagrangian and Hamiltonian formalisms. Separate characterizations are given in phase space, in velocity space, and through an evolution operator that links both spaces. © 2000 American Institute of Physics.
Abstract:
In order to study the connections between the Lagrangian and Hamiltonian formalisms constructed from a (perhaps singular) higher-order Lagrangian, some geometric structures are constructed. Intermediate spaces between those of the Lagrangian and Hamiltonian formalisms, partial Ostrogradskii transformations, and unambiguous evolution operators connecting these spaces are intrinsically defined, and some of their properties are studied. Equations of motion, constraints, and arbitrary functions of the Lagrangian and Hamiltonian formalisms are thoroughly studied. In particular, all the Lagrangian constraints are obtained from the Hamiltonian ones. Once the gauge transformations are taken into account, the true number of degrees of freedom is obtained, both in the Lagrangian and Hamiltonian formalisms, and also in all the intermediate formalisms herein defined.
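For readers unfamiliar with the construction, the Ostrogradskii transformation mentioned above reduces, for a second-order Lagrangian, to the following standard definitions (a textbook sketch in our own notation, not an equation quoted from the paper):

```latex
% Ostrogradskii momenta for a second-order Lagrangian L(q, \dot q, \ddot q)
p_1 = \frac{\partial L}{\partial \dot q} - \frac{d}{dt}\frac{\partial L}{\partial \ddot q},
\qquad
p_2 = \frac{\partial L}{\partial \ddot q},
\qquad
H = p_1 \dot q + p_2 \ddot q - L .
```

When L depends on q̈ only degenerately (the singular case the abstract addresses), p₂ cannot be inverted for q̈ and constraints appear, which is precisely where the intermediate spaces of the paper enter.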
Abstract:
The Rusk-Skinner formalism was developed in order to give a geometrical unified formalism for describing mechanical systems. It incorporates all the characteristics of Lagrangian and Hamiltonian descriptions of these systems (including dynamical equations and solutions, constraints, Legendre map, evolution operators, equivalence, etc.). In this work we extend this unified framework to first-order classical field theories, and show how this description comprises the main features of the Lagrangian and Hamiltonian formalisms, both for the regular and singular cases. This formulation is a first step toward further applications in optimal control theory for partial differential equations. © 2004 American Institute of Physics.
Abstract:
Remote sensing image processing is nowadays a mature research area. The techniques developed in the field allow many real-life applications with great societal value. For instance, urban monitoring, fire detection or flood prediction can have a great impact on economic and environmental issues. To attain such objectives, the remote sensing community has turned to a multidisciplinary approach that embraces physics, signal theory, computer science, electronics, and communications. From a machine learning and signal/image processing point of view, all the applications are tackled under specific formalisms, such as classification and clustering, regression and function approximation, image coding, restoration and enhancement, source unmixing, data fusion or feature selection and extraction. This paper serves as a survey of methods and applications, and reviews the latest methodological advances in remote sensing image processing.
Abstract:
In recent years there has been an increasing demand for a variety of logical systems, prompted mostly by applications of logic in AI, logic programming and other related areas. Labeled Deductive Systems (LDS) were developed as a flexible methodology for formalizing such complex logical systems. In the last decade, defeasible argumentation has proven to be a confluence point for many approaches to formalizing commonsense reasoning. Different formalisms have been developed, many of them sharing common features. This paper presents a formalization of an LDS for defeasible argumentation, in which the main issues concerning defeasible argumentation are captured within a unified logical framework. The proposed framework is defined in two stages. First, defeasible inference is formalized by characterizing an argumentative LDS. That system is then extended in order to capture conflict among arguments using a dialectical approach. We also present some logical properties emerging from the proposed framework, and discuss its semantic characterization.
Abstract:
Biomedical research is currently facing a new type of challenge: an excess of information, both in terms of raw data from experiments and in the number of scientific publications describing their results. Mirroring the focus on data mining techniques to address the issues of structured data, there has recently been great interest in the development and application of text mining techniques to make more effective use of the knowledge contained in biomedical scientific publications, accessible only in the form of natural human language. This thesis describes research done in the broader scope of projects aiming to develop methods, tools and techniques for text mining tasks in general and for the biomedical domain in particular. The work described here involves more specifically the goal of extracting information from statements concerning relations of biomedical entities, such as protein-protein interactions. The approach taken is one using full parsing—syntactic analysis of the entire structure of sentences—and machine learning, aiming to develop reliable methods that can further be generalized to apply also to other domains. The five papers at the core of this thesis describe research on a number of distinct but related topics in text mining. In the first of these studies, we assessed the applicability of two popular general English parsers to biomedical text mining and, finding their performance limited, identified several specific challenges to accurate parsing of domain text. In a follow-up study focusing on parsing issues related to specialized domain terminology, we evaluated three lexical adaptation methods. We found that the accurate resolution of unknown words can considerably improve parsing performance and introduced a domain-adapted parser that reduced the error rate of the original by 10% while also roughly halving parsing time.
To establish the relative merits of parsers that differ in the applied formalisms and the representation given to their syntactic analyses, we have also developed evaluation methodology, considering different approaches to establishing comparable dependency-based evaluation results. We introduced a methodology for creating highly accurate conversions between different parse representations, demonstrating the feasibility of unification of diverse syntactic schemes under a shared, application-oriented representation. In addition to allowing formalism-neutral evaluation, we argue that such unification can also increase the value of parsers for domain text mining. As a further step in this direction, we analysed the characteristics of publicly available biomedical corpora annotated for protein-protein interactions and created tools for converting them into a shared form, thus contributing also to the unification of text mining resources. The introduced unified corpora allowed us to perform a task-oriented comparative evaluation of biomedical text mining corpora. This evaluation established clear limits on the comparability of results for text mining methods evaluated on different resources, prompting further efforts toward standardization. To support this and other research, we have also designed and annotated BioInfer, the first domain corpus of its size combining annotation of syntax and biomedical entities with a detailed annotation of their relationships. The corpus represents a major design and development effort of the research group, with manual annotation that identifies over 6000 entities, 2500 relationships and 28,000 syntactic dependencies in 1100 sentences. In addition to combining these key annotations for a single set of sentences, BioInfer was also the first domain resource to introduce a representation of entity relations that is supported by ontologies and able to capture complex, structured relationships.
Part I of this thesis presents a summary of this research in the broader context of a text mining system, and Part II contains reprints of the five included publications.
Abstract:
In this article we review some of the basic aspects of rare earth spectroscopy applied to vitreous materials. The characteristics of the intra-atomic free ion and ligand field interactions, as well as the formalisms of the forced electric dipole and dynamic coupling mechanisms of 4f-4f intensities, are outlined. The contribution of the latter mechanism to the 4f-4f intensities is critically discussed, a point that has been commonly overlooked in the literature of rare earth doped glasses. The observed correlation between the empirical intensity parameter Ω2 and the covalence of the ion's first coordination shell is discussed according to the theoretical predictions.
Abstract:
Programming and mathematics are core areas of computer science (CS) and consequently also important parts of CS education. Introductory instruction in these two topics is, however, not without problems. Studies show that CS students find programming difficult to learn and that teaching mathematical topics to CS novices is challenging. One reason for the latter is the disconnection between mathematics and programming found in many CS curricula, which results in students not seeing the relevance of the subject for their studies. In addition, reports indicate that students' mathematical capability and maturity levels are dropping. The challenges faced when teaching mathematics and programming at CS departments can also be traced back to gaps in students' prior education. In Finland the high school curriculum does not include CS as a subject; instead, focus is on learning to use the computer and its applications as tools. Similarly, many of the mathematics courses emphasize application of formulas, while logic, formalisms and proofs, which are important in CS, are avoided. Consequently, high school graduates are not well prepared for studies in CS. Motivated by these challenges, the goal of the present work is to describe new approaches to teaching mathematics and programming aimed at addressing these issues: Structured derivations is a logic-based approach to teaching mathematics, where formalisms and justifications are made explicit. The aim is to help students become better at communicating their reasoning using mathematical language and logical notation at the same time as they become more confident with formalisms. The Python programming language was originally designed with education in mind, and has a simple syntax compared to many other popular languages. 
The aim of using it in instruction is to address algorithms and their implementation in a way that allows focus to be put on learning algorithmic thinking and programming instead of on learning a complex syntax. Invariant based programming is a diagrammatic approach to developing programs that are correct by construction. The approach is based on elementary propositional and predicate logic, and makes explicit the underlying mathematical foundations of programming. The aim is also to show how mathematics in general, and logic in particular, can be used to create better programs.
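The flavour of invariant based programming can be conveyed by a short Python fragment (our own toy example, not one taken from the thesis), in which the loop invariant is stated explicitly and checked at every iteration:

```python
def divide(n, d):
    """Quotient and remainder by repeated subtraction, with the loop
    invariant n == q * d + r and r >= 0 asserted at every iteration."""
    assert n >= 0 and d > 0                 # precondition
    q, r = 0, n
    while r >= d:
        assert n == q * d + r and r >= 0    # invariant holds on loop entry
        q, r = q + 1, r - d
    assert n == q * d + r and 0 <= r < d    # postcondition
    return q, r

print(divide(17, 5))  # (3, 2)
```

In invariant based programming proper, the invariant is written down before the loop body and the correctness proof is carried out diagrammatically; the runtime assertions above merely mimic that discipline.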
Abstract:
Humans have used arguments for defending or refuting statements long before the creation of logic as a specialized discipline. This can be interpreted as the fact that an intuitive notion of "logical consequence" or a psychic disposition to articulate reasoning according to this pattern is present in common sense, and logic simply aims at describing and codifying the features of this spontaneous capacity of human reason. It is well known, however, that several arguments easily accepted by common sense are actually "logical fallacies", and this indicates that logic is not just a descriptive, but also a prescriptive or normative enterprise, in which the notion of logical consequence is defined in a precise way and then certain rules are established in order to maintain the discourse in keeping with this notion. Yet in the justification of the correctness and adequacy of these rules commonsense reasoning must necessarily be used, and in such a way its foundational role is recognized. Moreover, it remains also true that several branches and forms of logic have been elaborated precisely in order to reflect the structural features of correct argument used in different fields of human reasoning and yet insufficiently mirrored by the most familiar logical formalisms.
Abstract:
Formal methods provide a means of reasoning about computer programs in order to prove correctness criteria. One subtype of formal methods is based on the weakest precondition predicate transformer semantics and uses guarded commands as the basic modelling construct. Examples of such formalisms are Action Systems and Event-B. Guarded commands can intuitively be understood as actions that may be triggered when an associated guard condition holds. Guarded commands whose guards hold are nondeterministically chosen for execution, but no further control flow is present by default. Such a modelling approach is convenient for proving correctness, and the Refinement Calculus allows for a stepwise development method. It also has a parallel interpretation facilitating development of concurrent software, and it is suitable for describing event-driven scenarios. However, for many application areas, the execution paradigm traditionally used comprises more explicit control flow, which constitutes an obstacle for using the above mentioned formal methods. In this thesis, we study how guarded command based modelling approaches can be conveniently and efficiently scheduled in different scenarios. We first focus on the modelling of trust for transactions in a social networking setting. Due to the event-based nature of the scenario, the use of guarded commands turns out to be relatively straightforward. We continue by studying modelling of concurrent software, with particular focus on compute-intensive scenarios. We go from theoretical considerations to the feasibility of implementation by evaluating the performance and scalability of executing a case study model in parallel using automatic scheduling performed by a dedicated scheduler. Finally, we propose a more explicit and non-centralised approach in which the flow of each task is controlled by a schedule of its own. 
The schedules are expressed in a dedicated scheduling language, and patterns assist the developer in proving correctness of the scheduled model with respect to the original one.
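The guarded-command execution model described above can be sketched in a few lines of Python (an illustrative toy under our own naming, not the Action Systems or Event-B notation itself): actions whose guards hold are chosen nondeterministically for execution until no guard is enabled.

```python
import random

def run(state, actions, rng=random.Random(0)):
    """Repeatedly fire a nondeterministically chosen enabled action;
    terminate when no guard holds."""
    while True:
        enabled = [act for guard, act in actions if guard(state)]
        if not enabled:          # no guard holds: execution terminates
            return state
        rng.choice(enabled)(state)

# Toy model: move tokens from 'a' to 'b', one at a time.
actions = [
    (lambda s: s["a"] > 0,                       # guard
     lambda s: (s.__setitem__("a", s["a"] - 1),  # action body
                s.__setitem__("b", s["b"] + 1))),
]

final = run({"a": 3, "b": 0}, actions)
print(final)  # {'a': 0, 'b': 3}
```

The thesis's point is precisely that this default loop has no further control flow: imposing explicit schedules on which enabled action fires next is what has to be added, and proved correct, on top.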
Abstract:
In the field of molecular biology, scientists adopted for decades a reductionist perspective in their inquiries, being predominantly concerned with the intricate mechanistic details of subcellular regulatory systems. However, integrative thinking was still applied at a smaller scale in molecular biology to understand the underlying processes of cellular behaviour for at least half a century. It was not until the genomic revolution at the end of the previous century that we required model building to account for systemic properties of cellular activity. Our system-level understanding of cellular function is to this day hindered by drastic limitations in our capability of predicting cellular behaviour to reflect system dynamics and system structures. To this end, systems biology aims for a system-level understanding of functional intra- and inter-cellular activity. Modern biology brings about a high volume of data, whose comprehension we cannot even aim for in the absence of computational support. Computational modelling, hence, bridges modern biology to computer science, providing a number of assets that prove invaluable in the analysis of complex biological systems, such as a rigorous characterization of the system structure, simulation techniques, perturbation analysis, etc. Computational biomodels have grown considerably in size in the past years, with major contributions made towards the simulation and analysis of large-scale models, starting with signalling pathways and culminating with whole-cell models, tissue-level models, organ models and full-scale patient models. The simulation and analysis of models of such complexity very often requires, in fact, the integration of various sub-models, entwined at different levels of resolution and whose organization spans several levels of hierarchy. This thesis revolves around the concept of quantitative model refinement in relation to the process of model building in computational systems biology.
The thesis proposes a sound computational framework for the stepwise augmentation of a biomodel. One starts with an abstract, high-level representation of a biological phenomenon, which is materialised into an initial model that is validated against a set of existing data. Subsequently, the model is refined to include more details regarding its species and/or reactions. The framework is employed in the development of two models, one for the heat shock response in eukaryotes and the second for the ErbB signalling pathway. The thesis spans several inherently quantitative formalisms used in computational systems biology: reaction-network models, rule-based models and Petri net models, as well as a recent, intrinsically qualitative formalism: reaction systems. The choice of modelling formalism is, however, determined by the nature of the question the modeller aims to answer. Quantitative model refinement turns out to be not only essential in the model development cycle, but also beneficial for the compilation of large-scale models, whose development requires the integration of several sub-models across various levels of resolution and underlying formal representations.
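One core step of this kind of refinement, replacing a species by several refined variants and duplicating the reactions it participates in, can be sketched as follows (a toy illustration under our own naming conventions, not the framework of the thesis; stoichiometry and rates are omitted):

```python
def refine(reactions, species, variants):
    """Data-refinement sketch: replace `species` by each of `variants`,
    duplicating every reaction in which it occurs.  Reactions are
    (reactants, products) pairs of species-name tuples."""
    refined = []
    for reactants, products in reactions:
        if species in reactants or species in products:
            for v in variants:
                refined.append((
                    tuple(v if s == species else s for s in reactants),
                    tuple(v if s == species else s for s in products),
                ))
        else:
            refined.append((reactants, products))
    return refined

# Toy heat-shock-style reaction: a transcription factor binds its element.
model = [(("hsf", "hse"), ("hsf:hse",))]
print(refine(model, "hsf", ["hsf", "hsf-p"]))
```

A full refinement would also split complexes such as "hsf:hse" and constrain the refined kinetic rates so that the aggregate behaviour of the variants matches the original, validated model.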
Abstract:
In this work we look at two different 1-dimensional quantum systems. The potentials for these systems are a linear potential in an infinite well and an inverted harmonic oscillator in an infinite well. We will solve the Schrödinger equation for both of these systems and obtain the energy eigenvalues and eigenfunctions. The solutions are obtained by using the boundary conditions and numerical methods. The motivation for our study comes from an experimental background. For the linear potential we have two different boundary conditions. The first one is the so-called normal boundary condition, in which the wave function goes to zero on the edge of the well. The second is called the derivative boundary condition, in which the derivative of the wave function goes to zero on the edge of the well. The actual solutions are Airy functions. In the case of the inverted oscillator the solutions are parabolic cylinder functions, and they are solved only using the normal boundary condition. Both of the potentials are compared with the particle-in-a-box solutions. We will also present figures and tables from which we can see what the solutions look like. The similarities and differences with the particle-in-a-box solution are also shown visually. The figures and calculations are done using mathematical software. We will also compare the linear potential to a case where the infinite wall is only on the left side. For this case we will also show graphical information on the different properties. With the inverted harmonic oscillator we will take a closer look at quantum mechanical tunneling. We present some of the history of the quantum tunneling theory and its developers, and finally we show the Feynman path integral theory. This theory enables us to obtain the instanton solutions. The instanton solutions are a way to look at the tunneling properties of the quantum system.
The results are compared with the solutions of the double-well potential, which as a quantum system is very similar to our case. The solutions are obtained using the same methods, which makes the comparison relatively easy. All in all, we consider and go through some of the stages of quantum theory, and we also look at the different ways to interpret the theory. We also present the special functions that are needed in our solutions, and look at their properties and relations to other special functions. It is essential to notice that it is possible to use different mathematical formalisms to get the desired result. Quantum theory has been built up for over one hundred years and it has different approaches; different aspects make it possible to look at different things.
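The numerical procedure the abstract alludes to for the linear potential with the normal boundary condition can be sketched with a small shooting-method script (a minimal illustration in our own units ħ = 2m = F = 1, not code from the thesis). With the wall at x = 0 and a far wall at x = X, the eigenvalues approach the negated zeros of the Airy function Ai as X grows:

```python
def linear_well_energy(lo, hi, X=12.0, n=4000):
    """Lowest eigenvalue of -psi'' + x psi = E psi on [0, X] with
    psi(0) = psi(X) = 0, found by shooting plus bisection.
    For large X this approximates E_1 = -a_1, where a_1 is the
    first zero of the Airy function Ai (about -2.33811)."""
    def shoot(E):
        # RK4 integration of y' = dy, dy' = (x - E) y from x = 0
        h, x, y, dy = X / n, 0.0, 0.0, 1.0
        for _ in range(n):
            k1y, k1d = dy, (x - E) * y
            k2y, k2d = dy + 0.5*h*k1d, (x + 0.5*h - E) * (y + 0.5*h*k1y)
            k3y, k3d = dy + 0.5*h*k2d, (x + 0.5*h - E) * (y + 0.5*h*k2y)
            k4y, k4d = dy + h*k3d,     (x + h - E)     * (y + h*k3y)
            y  += h/6 * (k1y + 2*k2y + 2*k3y + k4y)
            dy += h/6 * (k1d + 2*k2d + 2*k3d + k4d)
            x  += h
        return y                     # sign flips at each eigenvalue
    while hi - lo > 1e-9:            # bisect on the sign of psi(X)
        mid = 0.5 * (lo + hi)
        if shoot(lo) * shoot(mid) < 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

E1 = linear_well_energy(2.0, 3.0)
print(round(E1, 5))
```

The derivative boundary condition discussed in the abstract would instead bisect on the sign of ψ'(X), and the inverted-oscillator case replaces the potential term x by the inverted-oscillator profile.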