28 results for algebra extensions
Abstract:
The metabolism of an organism consists of a network of biochemical reactions that transform small molecules, or metabolites, into others in order to produce energy and building blocks for essential macromolecules. The goal of metabolic flux analysis is to uncover the rates, or fluxes, of those biochemical reactions. In a steady state, the sum of the fluxes that produce an internal metabolite is equal to the sum of the fluxes that consume the same molecule. Thus the steady state imposes linear balance constraints on the fluxes. In general, the balance constraints imposed by the steady state are not sufficient to uncover all the fluxes of a metabolic network: the fluxes through cycles and through alternative pathways between the same source and target metabolites remain unknown. More information about the fluxes can be obtained from isotopic labelling experiments, where a cell population is fed with labelled nutrients, such as glucose containing 13C atoms. Labels are then transferred by biochemical reactions to other metabolites. The relative abundances of different labelling patterns in internal metabolites depend on the fluxes of the pathways producing them; thus these abundances contain information about the fluxes that cannot be uncovered from the steady-state balance constraints. The field of research that estimates the fluxes utilizing measured constraints on the relative abundances of labelling patterns induced by 13C-labelled nutrients is called 13C metabolic flux analysis. There are two approaches to 13C metabolic flux analysis. In the optimization approach, a non-linear optimization task is constructed in which candidate fluxes are iteratively generated until they fit the measured abundances of different labelling patterns.
In the direct approach, the linear balance constraints given by the steady state are augmented with linear constraints derived from the abundances of different labelling patterns of metabolites. Thus, mathematically involved non-linear optimization methods that can get stuck in local optima are avoided. On the other hand, the direct approach may require more measurement data than the optimization approach to obtain the same flux information. Furthermore, the optimization framework can easily be applied regardless of the labelling measurement technology and with all network topologies. In this thesis we present a formal computational framework for direct 13C metabolic flux analysis. The aim of our study is to construct as many linear constraints on the fluxes as possible from the 13C labelling measurements, using only computational methods that avoid non-linear techniques and are independent of the type of measurement data, the labelling of external nutrients, and the topology of the metabolic network. The presented framework is the first representative of the direct approach to 13C metabolic flux analysis that is free from restricting assumptions about these parameters. In our framework, measurement data is first propagated from the measured metabolites to other metabolites. The propagation is facilitated by a flow analysis of metabolite fragments in the network. Then new linear constraints on the fluxes are derived from the propagated data by applying techniques of linear algebra. Based on the results of the fragment flow analysis, we also present an experiment planning method that selects sets of metabolites whose relative abundances of different labelling patterns are most useful for 13C metabolic flux analysis. Furthermore, we give computational tools to process raw 13C labelling data produced by tandem mass spectrometry into a form suitable for 13C metabolic flux analysis.
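The steady-state balance constraints described above can be written as a homogeneous linear system S v = 0, where S is the stoichiometric matrix of the internal metabolites and v the flux vector. A minimal sketch (using a made-up toy network, not one from the thesis) of why the balance constraints alone leave fluxes underdetermined:

```python
import numpy as np
from scipy.linalg import null_space

# Hypothetical toy network: A -> B (v1), B -> C (v2), B -> D (v3), D -> C (v4).
# A and C are external; internal metabolites B and D must be balanced: S v = 0.
S = np.array([
    [1.0, -1.0, -1.0,  0.0],   # B: produced by v1, consumed by v2 and v3
    [0.0,  0.0,  1.0, -1.0],   # D: produced by v3, consumed by v4
])

N = null_space(S)              # orthonormal basis of the steady-state flux space
print(N.shape)                 # (4, 2): a 2-dimensional family of feasible fluxes
```

Because the null space is two-dimensional, the balance constraints leave two flux degrees of freedom unresolved (the alternative pathways through v2 and v3); this is exactly the gap that the additional 13C labelling constraints are meant to close.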
Abstract:
The usual task in music information retrieval (MIR) is to find occurrences of a monophonic query pattern within a music database, which can contain both monophonic and polyphonic content. The so-called query-by-humming systems are a famous instance of content-based MIR. In such a system, the user's hummed query is converted into symbolic form to perform search operations in a similarly encoded database. The symbolic representation (e.g., textual, MIDI or vector data) is typically a quantized and simplified version of the sampled audio data, yielding faster search algorithms and space requirements that can be met in real-life situations. In this thesis, we investigate geometric approaches to MIR. We first study some musicological properties often needed in MIR algorithms, and then give a literature review of traditional (e.g., string-matching-based) MIR algorithms and of novel techniques based on geometry. We also introduce some concepts from digital image processing, namely mathematical morphology, which we use to develop and implement four algorithms for geometric music retrieval. The symbolic representation in the case of our algorithms is a binary 2-D image. We use various morphological pre- and post-processing operations on the query and database images to perform template matching / pattern recognition on the images. The algorithms are essentially extensions of the classic image correlation and hit-or-miss transformation techniques widely used in template matching applications. They are intended as a future extension to the retrieval engine of C-BRAHMS, a research project of the Department of Computer Science at the University of Helsinki.
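As an illustration of the hit-or-miss idea, the following sketch (the toy piano-roll encoding and the pattern are invented here, not taken from the thesis) uses SciPy's morphological hit-or-miss transform to locate an exact occurrence of a binary query pattern inside a binary database image:

```python
import numpy as np
from scipy import ndimage

# Toy database image: rows = pitch, columns = onset time (hypothetical encoding).
db = np.zeros((8, 12), dtype=bool)
for pitch, time in [(2, 1), (3, 3), (5, 5), (2, 8)]:
    db[pitch, time] = True

# Query pattern: three notes with the same relative pitch/time offsets
# as the first three database notes.
query = np.zeros((4, 5), dtype=bool)
for pitch, time in [(0, 0), (1, 2), (3, 4)]:
    query[pitch, time] = True

# Hit-or-miss: the query cells must "hit" the foreground and, by default,
# the complement of the query must "miss" it (an exact window match).
hits = ndimage.binary_hit_or_miss(db, structure1=query)
print(np.argwhere(hits))       # centre(s) of the matching window(s)
```

A real retrieval algorithm would relax the exactness (the thesis's morphological pre- and post-processing serves that purpose), but the transform above is the primitive the abstract refers to.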
Abstract:
This study focuses on the theory of individual rights that the German theologian Conrad Summenhart (1455-1502) explicated in his massive work Opus septipartitum de contractibus pro foro conscientiae et theologico. The central question to be studied is: how does Summenhart understand the concept of an individual right and its immediate implications? The basic premise of this study is that in Opus septipartitum Summenhart composed a comprehensive theory of individual rights as a contribution to the ongoing medieval discourse on rights. With this rationale, the first part of the study concentrates on earlier discussions of rights as the background to Summenhart's theory. Special attention is paid to the language in which "right" was defined in terms of "power". In the fourteenth century, writers like Hervaeus Natalis and William Ockham maintained that right signifies a power by which the right-holder can use material things licitly. It will also be shown how the attempts to describe what is meant by the term "right" became more specified and cultivated. Gerson followed the implications that the term "power" had in natural philosophy and attributed rights to animals and other creatures. To secure right as a normative concept, Gerson utilized the ancient ius suum cuique principle of justice and introduced a definition in which right was seen as derived from justice. The latter part of this study reconstructs Summenhart's theory of individual rights in three sections. The first section clarifies Summenhart's discussion of the right of the individual, or the concept of an individual right. Summenhart specified Gerson's description of right as power, making further use of the language of natural philosophy. In this respect, Summenhart's theory brought to an end a particular continuity of thought centered upon the view that right signifies a power to licit action.
Perhaps the most significant feature of Summenhart's discussion was the way he explicated the implication of liberty present in Gerson's language of rights. Summenhart assimilated libertas with the self-mastery or dominion that, in the economic context of the discussion, took the form of (a moderate) self-ownership. Summenhart's discussion also introduced two apparent extensions to Gerson's terminology. First, Summenhart classified right as a relation, and second, he equated right with dominion. It is distinctive of Summenhart's view that he took action as the primary determinant of right: everyone has as much right or dominion in regard to a thing as there are actions it is licit for him to exercise in regard to the thing. The second section elaborates Summenhart's discussion of the species of dominion, which answered the question of what kinds of rights exist and thereby clarified the implications of the concept of an individual right. The central feature of Summenhart's discussion was his conscious effort to systematize Gerson's language by combining classifications of dominion into a coherent whole. In this respect, his treatment of natural dominion is emblematic. Summenhart constructed the concept of natural dominion by making use of the concepts of foundation (founded on a natural gift) and law (according to the natural law). In defining natural dominion as dominion founded on a natural gift, Summenhart attributed natural dominion to animals and even to heavenly bodies. In discussing man's natural dominion, Summenhart pointed out that natural dominion is not sufficiently identified by its foundation but requires further specification, which Summenhart found in the idea that natural dominion is appropriate to the subject according to the natural law. This characterization led him to treat God's dominion as natural dominion.
Partly, this was due to Summenhart's specific understanding of the natural law, which made reasonableness the primary criterion for natural dominion at the expense of any metaphysical considerations. The third section clarifies Summenhart's discussion of the property rights defined by positive human law. By delivering an account of juridical property rights, Summenhart connected his philosophical and theological theory of rights to the juridical language of his times and demonstrated that his own language of rights was compatible with current juridical terminology. Summenhart prepared his discussion of property rights with an account of the justification for private property, which gave private property a direct and strong justification based on natural law. Summenhart's discussion of the four property rights usus, usufructus, proprietas, and possession aimed at delivering a detailed report of the usage of these concepts in juridical discourse. His discussion was characterized by extensive use of the juridical source texts, and it became the more direct and literal the more it became entangled with the details of juridical doctrine. At the same time he promoted his own language of rights, especially by applying the idea of right as a relation. He also made a recognizable effort towards systematizing the juridical language related to property rights.
Abstract:
This licentiate's thesis analyzes the macroeconomic effects of fiscal policy in a small open economy under a flexible exchange rate regime, assuming that the government spends exclusively on domestically produced goods. The motivation for this research comes from the observation that the literature on the new open economy macroeconomics (NOEM) has focused almost exclusively on two-country global models, while analyses of the effects of fiscal policy on small economies have been almost completely ignored. This thesis aims at filling this gap in the NOEM literature and illustrates how the macroeconomic effects of fiscal policy in a small open economy depend on the specification of preferences. The research method is to present two theoretical models that are extensions of the model contained in the Appendix to Obstfeld and Rogoff (1995). The first model analyzes the macroeconomic effects of fiscal policy, making use of a model that exploits the idea of modelling private and government consumption as substitutes in private utility. The model offers intuitive predictions on how the effects of fiscal policy depend on the marginal rate of substitution between private and government consumption. The findings illustrate that the higher the substitutability between private and government consumption, (i) the bigger the crowding-out effect on private consumption and (ii) the smaller the positive effect on output. The welfare analysis shows that the higher the marginal rate of substitution between private and government consumption, the less fiscal policy decreases welfare. The second model of this thesis studies how the macroeconomic effects of fiscal policy depend on the elasticity of substitution between traded and nontraded goods. This model reveals that this elasticity is a key variable in explaining the exchange rate, current account and output responses to a permanent rise in government spending.
Finally, the model demonstrates that temporary changes in government spending are an effective stabilization tool when used wisely and in a timely manner in response to undesired fluctuations in output. Undesired fluctuations in output can be perfectly offset by an opposite change in government spending without causing any side-effects.
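A common way to formalize the substitutability discussed above (this is a standard specification from the NOEM literature, given here as an assumption rather than as the thesis's exact model) is to let government spending enter private utility through an effective-consumption index:

```latex
\tilde{c}_t = c_t + \gamma\, g_t, \qquad 0 \le \gamma \le 1,
```

where \(\gamma\) indexes the marginal rate of substitution between private consumption \(c_t\) and government consumption \(g_t\). With \(\gamma\) close to one, a rise in \(g_t\) is a near-perfect substitute for \(c_t\), so households cut private consumption almost one for one, which matches the abstract's finding that higher substitutability implies stronger crowding out and a smaller output effect.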
Abstract:
Throughout the history of Linnean taxonomy, species have been described with varying degrees of justification. Many descriptions have been based on only a few ambiguous morphological characters. Moreover, species have been considered natural, well-defined units whereas higher taxa have been treated as disparate, non-existent creations. In the present thesis a few such cases were studied in detail. Often the species-level descriptions were based on only a few specimens and the variation previously thought to be interspecific was found to be intraspecific. In some cases morphological characters were sufficient to resolve the evolutionary relationships between the taxa, but generally more resolution was gained by the addition of molecular evidence. However, both morphological and molecular data were found to be deceptive in some cases. The DNA sequences of morphologically similar specimens were found to differ distinctly in some cases, whereas in other closely related species the morphology of specimens with identical DNA sequences differed substantially. This study counsels caution when evolutionary relationships are being studied utilizing only one source of evidence or a very limited number of characters (e.g. barcoding). Moreover, it emphasizes the importance of high quality data as well as the utilization of proper methods when making scientific inferences. Properly conducted analyses produce robust results that can be utilized in numerous interesting ways. The present thesis considered two such extensions of systematics. A novel hypothesis on the origin of bioluminescence in Elateriformia beetles is presented, tying it to the development of the clicking mechanism in the ancestors of these animals. An entirely different type of extension of systematics is the proposed high value of the white sand forests in maintaining the diversity of beetles in the Peruvian Amazon. White sand forests are under growing pressure from human activities that lead to deforestation. 
They were found to harbor an extremely diverse beetle fauna, and many taxa were specialists living only in this unique habitat. In comparison with the predominant clay-soil forests, considerably more elateroid beetles at all studied taxonomic levels (species, genus, tribe, and subfamily) were collected in white sand forests. This evolutionary diversity is hypothesized to be due to a combination of factors: (1) the forest structure, which favors the fungus-plant interactions important for the elateroid beetles, (2) the old age of the forest type, favoring the survival of many evolutionary lineages, and (3) the widespread distribution and fragmentation of the forests in the Miocene, favoring speciation.
Abstract:
Cosmological inflation is the dominant paradigm in explaining the origin of structure in the universe. According to the inflationary scenario, there was a period of nearly exponential expansion in the very early universe, long before nucleosynthesis. Inflation is commonly considered a consequence of some scalar field or fields whose energy density starts to dominate the universe. The inflationary expansion converts the quantum fluctuations of the fields into classical perturbations on superhorizon scales, and these primordial perturbations are the seeds of the structure in the universe. Moreover, inflation also naturally explains the high degree of homogeneity and spatial flatness of the early universe. The real challenge of inflationary cosmology lies in trying to establish a connection between the fields driving inflation and theories of particle physics. In this thesis we concentrate on inflationary models at scales well below the Planck scale. The low scale allows us to seek candidates for the inflationary matter within extensions of the Standard Model, but typically also implies fine-tuning problems. We discuss a low-scale model where inflation is driven by a flat direction of the Minimal Supersymmetric Standard Model. The relation between the potential along the flat direction and the underlying supergravity model is studied. The low inflationary scale requires an extremely flat potential, but we find that in this particular model the associated fine-tuning problems can be solved in a rather natural fashion in a class of supergravity models. For this class of models, the flatness is a consequence of the structure of the supergravity model and is insensitive to the vacuum expectation values of the fields that break supersymmetry.
Another low-scale model considered in the thesis is the curvaton scenario, where the primordial perturbations originate from quantum fluctuations of a curvaton field, which is different from the fields driving inflation. The curvaton gives a negligible contribution to the total energy density during inflation, but its perturbations become significant in the post-inflationary epoch. The separation between the fields driving inflation and the fields giving rise to primordial perturbations opens up new possibilities to lower the inflationary scale without introducing fine-tuning problems. The curvaton model typically gives rise to a relatively large level of non-Gaussian features in the statistics of primordial perturbations. We find that the level of non-Gaussian effects is heavily dependent on the form of the curvaton potential. Future observations that provide more accurate information about the non-Gaussian statistics can therefore place constraining bounds on the curvaton interactions.
Abstract:
This thesis describes methods for the reliable identification of hadronically decaying tau leptons in the search for heavy Higgs bosons of the minimal supersymmetric standard model of particle physics (MSSM). The identification of hadronic tau lepton decays, i.e. tau-jets, is applied to the gg->bbH, H->tautau and gg->tbH+, H+->taunu processes to be searched for in the CMS experiment at the CERN Large Hadron Collider. Of all the event selections applied in these final states, tau-jet identification is the single most important criterion for separating the tiny Higgs boson signal from a large number of background events. Tau-jet identification is studied with methods based on a signature of low charged-track multiplicity, the containment of the decay products within a narrow cone, an isolated electromagnetic energy deposition, a non-zero tau lepton flight path, the absence of electrons, muons, and neutral hadrons in the decay signature, and a relatively small tau lepton mass compared to the mass of most hadrons. Furthermore, in the H+->taunu channel, helicity correlations are exploited to separate the signal tau jets from those originating from W->taunu decays. Since many of these identification methods rely on the reconstruction of charged particle tracks, the systematic uncertainties resulting from the mechanical tolerances of the tracking sensor positions are estimated with care. The tau-jet identification and other standard selection methods are applied to the search for the heavy neutral and charged Higgs bosons in the H->tautau and H+->taunu decay channels. For the H+->taunu channel, the tau-jet identification is redone and optimized with a more recent and more detailed event simulation than previously used in the CMS experiment. Both decay channels are found to be very promising for the discovery of the heavy MSSM Higgs bosons.
The Higgs boson(s), whose existence has not yet been experimentally verified, are a part of the standard model and its most popular extensions. They are a manifestation of a mechanism which breaks the electroweak symmetry and generates masses for particles. Since the H->tautau and H+->taunu decay channels are important for the discovery of the Higgs bosons in a large region of the permitted parameter space, the analysis described in this thesis serves as a probe for finding out properties of the microcosm of particles and their interactions in the energy scales beyond the standard model of particle physics.
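The identification criteria listed in the abstract can be caricatured as a cut-based selection. The following sketch is purely illustrative: the variable names and threshold values are invented here and are not the CMS working points used in the thesis.

```python
def is_tau_jet_candidate(n_charged_tracks: int,
                         isolation_et: float,
                         flight_path_mm: float,
                         visible_mass_gev: float) -> bool:
    """Toy cut-based tau-jet selection (hypothetical thresholds)."""
    return (
        n_charged_tracks in (1, 3)     # low charged-track multiplicity (1- or 3-prong)
        and isolation_et < 1.0         # little energy outside the narrow signal cone
        and flight_path_mm > 0.0       # non-zero reconstructed flight path
        and visible_mass_gev < 1.8     # below the tau lepton mass
    )

print(is_tau_jet_candidate(1, 0.2, 0.5, 1.1))   # passes all cuts
print(is_tau_jet_candidate(5, 0.2, 0.5, 1.1))   # fails: QCD-jet-like multiplicity
```

In practice each cut is optimized against simulated signal and background samples, which is the optimization work the abstract describes for the H+->taunu channel.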
Abstract:
"Body and Iron: Essays on the Socialness of Objects" focuses on the bodily-material interaction of human subjects and technical objects. It poses the question of how it is possible that objects have an impact on their human users, and examines the preconditions for the active efficacy of objects. In this theoretical task the work relies on various discussions drawing on realist ontology, the phenomenology of the body, the neurophysiology of Antonio Damasio, and psychoanalysis to establish both objects and bodies as material entities related in causal interaction with each other. Out of material interaction emerge a symbolic field, psyche and culture, which produce representations of interactions with the material world on which they remain dependent and by which they are conditioned. Interaction with objects informs the human body via its somatosensory systems: the interoceptive and proprioceptive (or kinesthetic) systems provide the central nervous system with information about the internal state of the body, muscle tensions, and the motor activity of the limbs. The capability to control the movements of one's body by the internal "feel" of being a body turns out to be a precondition for the ability to control artificial extensions of the body. The motor activity of the body is involved in every perception of the environment, as the feel of one's own body is constitutive of any perception of external objects. Perception of an object causes changes in the internal milieu of the body, and these changes in the organism form a bodily representation of the external object. Via these "muscle images" the subject can develop a feel for an instrument. The bodily feel for an object is pre-conceptual, practical knowledge that resists articulation but allows sensing the world through the object. This is what I would call sensual knowledge. Technical objects intervene between body and environment, transforming the relation of perception and motor activity.
Once connected to a vehicle, the human subject has to calibrate visual information about his or her position and movement in space to the bodily actions controlling the machine. It is the machine that mediates the relation of human actions to the relation of the body to its environment. Learning to use the machine necessarily means adjusting one's bodily actions to the responses of the machine in relation to the environmental changes it causes. The responsiveness of the machine to human touch "teaches" its subject by providing feedback on the "correctness" of his or her bodily actions. Correct actions form a body technique of handling the object. This is how the socialness of objects operates: while responding to human actions, objects generate their subjects. Learning to handle a machine means accepting the position of the user in the program of action materialized in the construction of the object. Objects mediate, channel and transform the relation of the body to its environment, and via the environment to the body itself, according to their material and technical construction. Objects are sensory media: they channel signals and information from the environment, thus constituting a representation of the environment, a virtual or artificial reality. They also feed the body directly with their powers, equipping their user with means of regulating the somatic and psychic states of the self. For these reasons humans seek the company of objects. Keywords: material objects, material culture, sociology of technology, sociology of body, mobility, driving
Abstract:
This thesis consists of an introduction, four research articles and an appendix. The thesis studies relations between two different approaches to the continuum limit of models of two-dimensional statistical mechanics at criticality. The approach of conformal field theory (CFT) can be thought of as the algebraic classification of some basic objects in these models; it has been successfully used by physicists since the 1980s. The other approach, Schramm-Loewner evolutions (SLEs), is a recently introduced set of mathematical methods for studying random curves or interfaces occurring in the continuum limit of the models. The first and second included articles argue, on the basis of statistical mechanics, what a plausible relation between SLEs and conformal field theory would be. The first article studies multiple SLEs: several random curves simultaneously in a domain. The proposed definition is compatible with a natural commutation requirement suggested by Dubédat. The curves of a multiple SLE may form different topological configurations, ``pure geometries''. We conjecture a relation between the topological configurations and the CFT concepts of conformal blocks and operator product expansions. Example applications of multiple SLEs include crossing probabilities for percolation and the Ising model. The second article studies SLE variants that represent models with boundary conditions implemented by primary fields. The most well known of these, SLE(kappa, rho), is shown to be simple in terms of the Coulomb gas formalism of CFT. In the third article, the space of local martingales for variants of SLE is shown to carry a representation of the Virasoro algebra. Finding this structure is guided by the relation of SLEs and CFTs in general, but the result is established in a straightforward fashion. This article, too, emphasizes multiple SLEs and proposes a possible way of treating pure geometries in terms of the Coulomb gas.
The fourth article states the results of applying the Virasoro structure to the open questions of SLE reversibility and duality. Proofs of the stated results are provided in the appendix. The objective is an indirect computation of certain polynomial expected values. Provided that these expected values exist, in generic cases they are shown to possess the desired properties, thus giving support to both reversibility and duality.
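For orientation, the basic object behind all of the SLE variants mentioned above is the chordal Loewner equation (standard in the SLE literature, not specific to this thesis):

```latex
\partial_t g_t(z) = \frac{2}{g_t(z) - W_t}, \qquad g_0(z) = z, \qquad W_t = \sqrt{\kappa}\, B_t,
```

where \(B_t\) is a standard Brownian motion and \(\kappa > 0\) controls the roughness of the random curve. Variants such as SLE(\(\kappa\), \(\rho\)) and multiple SLEs modify the driving process \(W_t\) while keeping the Loewner dynamics.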
Abstract:
This thesis studies binary time series models and their applications in empirical macroeconomics and finance. In addition to previously suggested models, new dynamic extensions are proposed to the static probit model commonly used in the previous literature. In particular, we are interested in probit models with an autoregressive model structure. In Chapter 2, the main objective is to compare the predictive performance of the static and dynamic probit models in forecasting the U.S. and German business cycle recession periods. Financial variables, such as interest rates and stock market returns, are used as predictors. The empirical results suggest that the recession periods are predictable and that dynamic probit models, especially models with the autoregressive structure, outperform the static model. Chapter 3 proposes a Lagrange Multiplier (LM) test for the usefulness of the autoregressive structure of the probit model. The finite-sample properties of the LM test are considered with simulation experiments. The results indicate that the two alternative LM test statistics have reasonable size and power in large samples. In small samples, a parametric bootstrap method is suggested to obtain approximately correct size. In Chapter 4, the predictive power of dynamic probit models in predicting the direction of stock market returns is examined. The novel idea is to use the recession forecast (see Chapter 2) as a predictor of the stock return sign. The evidence suggests that the signs of the U.S. excess stock returns over the risk-free return are predictable both in and out of sample. The new "error correction" probit model yields the best forecasts, and it also outperforms other predictive models, such as ARMAX models, in terms of statistical and economic goodness-of-fit measures. Chapter 5 generalizes the analysis of the univariate models considered in Chapters 2-4 to the case of a bivariate model.
A new bivariate autoregressive probit model is applied to predict the current state of the U.S. business cycle and growth rate cycle periods. Evidence of predictability of both cycle indicators is obtained and the bivariate model is found to outperform the univariate models in terms of predictive power.
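A minimal sketch of the autoregressive probit idea (with made-up coefficients, not estimates from the thesis): the state probability at time t comes from a latent index that feeds back on itself, so past predictor values keep influencing current forecasts.

```python
import numpy as np
from scipy.stats import norm

omega, alpha, beta = -0.5, 0.8, -1.2    # hypothetical parameters

def recession_probs(x, pi0=0.0):
    """Autoregressive probit sketch: pi_t = omega + alpha*pi_{t-1} + beta*x_t,
    and P(y_t = 1) = Phi(pi_t), with Phi the standard normal CDF."""
    pi, probs = pi0, []
    for xt in x:
        pi = omega + alpha * pi + beta * xt
        probs.append(norm.cdf(pi))
    return np.array(probs)

# A falling predictor (e.g. a term spread) raises the forecast probability.
p = recession_probs([0.5, -0.2, -1.0])
```

The static probit corresponds to alpha = 0; the LM test of Chapter 3 can be read as testing exactly that restriction.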
Abstract:
The magnetically induced currents in organic monoring and multiring molecules, in Möbius-shaped molecules, and in inorganic all-metal molecules have been investigated by means of the gauge-including magnetically induced currents (GIMIC) method. With the GIMIC method, the ring-current strengths and the ring-current density distributions can be calculated. For open-shell molecules, the spin current can also be obtained. The ring-current pathways and ring-current strengths can be used to understand the magnetic resonance properties of molecules, to indirectly identify the effect of non-bonded interactions on NMR chemical shifts, to design new molecules with tailored properties, and to discuss molecular aromaticity. In the thesis, the magnetic criterion for aromaticity has been adopted. According to this criterion, a molecule which has a net diatropic ring current might be aromatic; similarly, a molecule which has a net paratropic current might be antiaromatic, and if the net current is zero, the molecule is nonaromatic. The electronic structures of the investigated molecules have been resolved by quantum chemical methods. The magnetically induced currents have been calculated with the GIMIC method at the density-functional theory (DFT) level, as well as at the self-consistent-field Hartree-Fock (SCF-HF), second-order Møller-Plesset perturbation theory (MP2), and coupled-cluster singles and doubles (CCSD) levels of theory. For closed-shell molecules, accurate ring-current strengths can be obtained with a reasonable computational cost at the DFT level and with rather small basis sets. For open-shell molecules, it is shown that correlated methods such as MP2 and CCSD might be needed to obtain reliable charge and spin currents. The basis-set convergence has to be checked for open-shell molecules by performing calculations with large enough basis sets. The results discussed in the thesis have been published in eight papers.
In addition, some previously unpublished results on the ring currents in the endohedral fullerene Sc3C2@C80 and in coronene are presented. It is shown that dynamical effects should be taken into account when modelling the magnetic resonance parameters of endohedral metallofullerenes such as Sc3C2@C80. The ring-current strengths in a series of nano-sized hydrocarbon rings are related to static polarizabilities and to H-1 nuclear magnetic resonance (NMR) shieldings. In a case study on the possible aromaticity of a Möbius-shaped [16]annulene we found that, according to the magnetic criterion, the molecule is nonaromatic. The applicability of the GIMIC method for assigning the aromatic character of molecules was confirmed in a study of the ring currents in simple monocyclic aromatic, homoaromatic, antiaromatic, and nonaromatic hydrocarbons. Case studies on nanorings, hexaphyrins and [n]cycloparaphenylenes show that explicit calculations are needed to unravel the ring-current delocalization pathways in complex multiring molecules. The open-shell implementation of GIMIC was applied in studies of the charge currents and spin currents in single-ring and bi-ring molecules with open shells. The aromaticity predictions made on the basis of the GIMIC results are compared to other aromaticity criteria such as H-1 NMR shieldings and shifts, electric polarizabilities, and bond-length alternation, as well as to the predictions provided by the traditional Hückel (4n+2) rule and its more recent extensions that account for Möbius-twisted molecules and for molecules with open shells.
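The traditional electron-counting rule mentioned at the end can be stated in a few lines; this sketch covers only the classic Hückel case for planar monocycles, not the Möbius or open-shell extensions discussed in the thesis.

```python
def huckel_aromatic(pi_electrons: int) -> bool:
    """Hückel rule: a planar monocycle with 4n+2 pi electrons (n >= 0) is aromatic."""
    return pi_electrons >= 2 and (pi_electrons - 2) % 4 == 0

print(huckel_aromatic(6))   # benzene (n = 1): aromatic
print(huckel_aromatic(4))   # cyclobutadiene: antiaromatic by the 4n count
```

The GIMIC approach replaces this simple counting with explicit computation of the induced current, which is why it can handle the multiring and open-shell cases where the counting rule breaks down.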
Abstract:
The question of what a business-to-business (B2B) collaboration setup and enactment application system should look like remains open. An important element of such collaboration is the controlled inter-organizational disclosure of business-process details, so that the respective parties may protect their business secrets. For that purpose, eSourcing [37] has been developed as a general business-process collaboration concept in the framework of the EU research project CrossWork. The eSourcing characteristics guide the design and evaluation of an eSourcing Reference Architecture (eSRA) that serves as a starting point for software developers of B2B-collaboration systems. In this paper we present the results of a scenario-based evaluation method conducted with the earlier specified eSourcing Architecture (eSA), which yields risk, sensitivity, and tradeoff points that must be paid attention to if eSA is implemented. Additionally, the evaluation method detects shortcomings of eSA in terms of the integrated components that are required for electronic B2B collaboration. The evaluation results are used for the specification of eSRA, which comprises, on three refinement levels, all extensions for incorporating the results of the scenario-based evaluation.
Abstract:
The paper conceptualizes parsing with Constraint Grammar in a new way, as a process involving two important representations. One representation contains the local ambiguities, and the other summarizes the properties of the local ambiguity classes. Both representations are manipulated with pure finite-state methods, but their interconnection is an ad hoc application of rational power series. The new interpretation of the parsing system has several practical advantages, including an inward deterministic way to compute, represent, and recompute all potential applications of the rules in a sentence.