413 results for Proofs
Efficient extension of standard Schnorr/RSA signatures into Universal Designated-Verifier Signatures
Abstract:
Universal Designated-Verifier Signature (UDVS) schemes are digital signature schemes with additional functionality which allows any holder of a signature to designate the signature to any desired designated-verifier such that the designated-verifier can verify that the message was signed by the signer, but is unable to convince anyone else of this fact. Since UDVS schemes reduce to standard signatures when no verifier designation is performed, it is natural to ask how to extend the classical Schnorr or RSA signature schemes into UDVS schemes, so that the existing key generation and signing implementation infrastructure for these schemes can be used without modification. We show how this can be efficiently achieved, and provide proofs of security for our schemes in the random oracle model.
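For orientation, here is a toy sketch of the standard Schnorr scheme that such an extension starts from; the tiny group parameters, hash choice, and function names are illustrative assumptions and do not describe the paper's UDVS construction.

import hashlib
import secrets

# Toy Schnorr group for illustration only (insecure sizes): q divides p - 1
# and g generates the subgroup of order q.
p, q, g = 607, 101, 64

def _challenge(R, msg):
    data = str(R).encode() + msg
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % q

def keygen():
    x = secrets.randbelow(q - 1) + 1      # secret key
    return x, pow(g, x, p)                # (secret key, public key)

def sign(x, msg):
    k = secrets.randbelow(q - 1) + 1
    R = pow(g, k, p)
    e = _challenge(R, msg)
    return e, (k + e * x) % q

def verify(y, msg, sig):
    e, s = sig
    R = (pow(g, s, p) * pow(y, (-e) % q, p)) % p   # g^s * y^(-e) = g^k
    return _challenge(R, msg) == e

x, y = keygen()
assert verify(y, b"message", sign(x, b"message"))

A UDVS extension, as described in the abstract, would add a designation algorithm run by the signature holder and a designated verification algorithm using the verifier's key pair, while leaving the key generation and signing above untouched.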
Abstract:
While formal definitions and security proofs are well established in some fields like cryptography and steganography, they are not as evident in digital watermarking research. A systematic development of watermarking schemes is desirable, but at present their development is usually informal, ad hoc, and omits the complete realization of application scenarios. This practice not only hinders the choice and use of a suitable scheme for a watermarking application, but also leads to debate about the state of the art for different watermarking applications. With a view to the systematic development of watermarking schemes, we present a formal generic model for digital image watermarking. Considering possible inputs, outputs, and component functions, the initial construction of a basic watermarking model is developed further to incorporate the use of keys. On the basis of our proposed model, fundamental watermarking properties are defined and their importance exemplified for different image applications. We also define a set of possible attacks using our model, showing different winning scenarios depending on the adversary's capabilities. It is envisaged that, with a proper consideration of watermarking properties and adversary actions in different image applications, use of the proposed model would allow a unified treatment of all practically meaningful variants of watermarking schemes.
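As a rough illustration of the kind of interface such a generic keyed model suggests (all names and types below are assumptions made for the example, not the authors' formal definitions):

from typing import Callable, NamedTuple

class WatermarkingScheme(NamedTuple):
    generate_key: Callable   # security parameter -> key
    embed: Callable          # (cover_image, watermark, key) -> marked image
    detect: Callable         # (test_image, watermark, key) -> bool
    extract: Callable        # (test_image, key) -> recovered watermark

def robustness_game(scheme: WatermarkingScheme, cover, watermark, attack) -> bool:
    """One possible winning scenario: the adversary wins if the watermark is
    no longer detected after its attack (perceptual-quality constraints on
    the attacked image are omitted in this sketch)."""
    key = scheme.generate_key(128)
    marked = scheme.embed(cover, watermark, key)
    attacked = attack(marked)
    return not scheme.detect(attacked, watermark, key)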
Abstract:
This chapter identifies ways in which laws are capable of responding to child maltreatment, both as an immediate regulator of conduct, and as an influence on a society’s cultural development and approach to children’s welfare. Informed by practices and experiences in selected common law systems, the chapter provides examples of legal mechanisms that can inform discussion of optimal strategies to identify and manage child maltreatment in many different societies. Both positive and negative aspects of these mechanisms are noted. While controversies arise as to what kinds of laws are best in preventing and responding to child maltreatment, and even, more fundamentally, whether there is a role for law in protecting children, this chapter offers evidence that a variety of legal tools can be employed to address child abuse and neglect, for any cultural setting in which there is willingness to act to prevent and treat its various forms.
Abstract:
In this paper we study the genericity of simultaneous stabilizability, simultaneous strong stabilizability, and simultaneous pole assignability in linear multivariable systems. The main results of the paper were previously established by Ghosh and Byrnes using state-space methods. In contrast, the proofs in the present paper are based on input-output arguments and are much simpler to follow, especially in the cases of simultaneous stabilizability and simultaneous strong stabilizability. Moreover, the input-output methods used here suggest computationally reliable algorithms for solving these two types of problems. In addition to the main results, we also prove some lemmas on generic greatest common divisors which are of independent interest.
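For readers outside control theory, the underlying problems can be stated schematically as follows (the notation is chosen here for illustration and is not taken from the paper): given plants P_1, \dots, P_k, simultaneous stabilizability asks for a single controller C such that every closed loop

H(P_i, C) = \begin{bmatrix} (I + P_i C)^{-1} & -P_i (I + C P_i)^{-1} \\ C (I + P_i C)^{-1} & (I + C P_i)^{-1} \end{bmatrix}, \qquad i = 1, \dots, k,

is stable; simultaneous strong stabilizability additionally requires C itself to be stable, and simultaneous pole assignability asks for a single C assigning prescribed closed-loop poles for every plant.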
Abstract:
The Dirac generator formalism for relativistic Hamiltonian dynamics is reviewed along with its extension to constraint formalism. In these theories evolution is with respect to a dynamically defined parameter, and thus time evolution involves an eleventh generator. These formulations evade the No-Interaction Theorem. But the incorporation of separability reopens the question, and together with the World Line Condition leads to a second no-interaction theorem for systems of three or more particles. Proofs are omitted, but the results of recent research in this area are highlighted.
Abstract:
It is well known that the notions of normal forms and acyclicity capture many practical desirable properties for database schemes. The basic schema design problem is to develop design methodologies that strive toward these ideals. The usual approach is to first normalize the database scheme as far as possible. If the resulting scheme is cyclic, then one tries to transform it into an acyclic scheme. In this paper, we argue in favor of carrying out these two phases of design concurrently. In order to do this efficiently, we need to be able to incrementally analyze the acyclicity status of a database scheme as it is being designed. To this end, we propose the formalism of "binary decompositions". Using this, we characterize design sequences that exactly generate theta-acyclic schemes, for theta = alpha, beta. We then show how our results can be put to use in database design. Finally, we also show that our formalism above can be effectively used as a proof tool in dependency theory. We demonstrate its power by showing that it leads to a significant simplification of the proofs of some previous results connecting sets of multivalued dependencies and acyclic join dependencies.
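As background for the acyclicity notions above, alpha-acyclicity of a database scheme (viewed as a hypergraph) is commonly tested with the standard GYO reduction, sketched below in Python; this is the classical test, not the binary-decomposition formalism proposed in the paper.

def gyo_acyclic(scheme):
    """scheme: list of relation schemes, each a set of attribute names."""
    edges = [set(e) for e in scheme]
    changed = True
    while changed:
        changed = False
        # Rule 1: delete any attribute that occurs in exactly one scheme.
        for a in {a for e in edges for a in e}:
            if sum(a in e for e in edges) == 1:
                for e in edges:
                    e.discard(a)
                changed = True
        # Rule 2: delete a scheme that is contained in another scheme.
        for i, e in enumerate(edges):
            if any(i != j and e <= f for j, f in enumerate(edges)):
                edges.pop(i)
                changed = True
                break
    return all(not e for e in edges)   # alpha-acyclic iff everything reduces away

# The triangle {AB, BC, CA} is cyclic, while {AB, BC} is acyclic.
assert not gyo_acyclic([{"A", "B"}, {"B", "C"}, {"C", "A"}])
assert gyo_acyclic([{"A", "B"}, {"B", "C"}])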
Abstract:
This thesis is a study of a rather new logic called dependence logic and its closure under classical negation, team logic. In this thesis, dependence logic is investigated from several aspects. Some rules are presented for quantifier swapping in dependence logic and team logic. Such rules are among the basic tools one must be familiar with in order to gain the required intuition for using the logic for practical purposes. The thesis compares Ehrenfeucht-Fraïssé (EF) games of first order logic and dependence logic and defines a third EF game that characterises a mixed case where first order formulas are measured in the formula rank of dependence logic. The thesis contains detailed proofs of several translations between dependence logic, team logic, second order logic and its existential fragment. Translations are useful for showing relationships between the expressive powers of logics. Also, by inspecting the form of the translated formulas, one can see how an aspect of one logic can be expressed in the other logic. The thesis makes preliminary investigations into the proof theory of dependence logic. Attempts focus on finding a complete proof system for a modest yet nontrivial fragment of dependence logic. A key problem is identified and addressed in adapting a known proof system of classical propositional logic into a proof system for the fragment, namely that the rule of contraction is needed but is unsound in its unrestricted form. A proof system is suggested for the fragment and its completeness conjectured. Finally, the thesis investigates the very foundation of dependence logic. An alternative semantics called 1-semantics is suggested for the syntax of dependence logic. There are several key differences between 1-semantics and other semantics of dependence logic. 1-semantics is derived from first order semantics by a natural type shift. Therefore 1-semantics reflects an established semantics in a coherent manner. Negation in 1-semantics is a semantic operation and satisfies the law of excluded middle. A translation is provided from unrestricted formulas of existential second order logic into 1-semantics. Also, game-theoretic semantics are considered in the light of 1-semantics.
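For orientation, the characteristic ingredient of dependence logic is the dependence atom, which in the usual team semantics (the semantics from which the proposed 1-semantics departs) is evaluated on a team X, i.e. a set of assignments:

X \models \,=\!(\vec{x}, y) \quad\Longleftrightarrow\quad \forall s, s' \in X:\ s(\vec{x}) = s'(\vec{x}) \;\Rightarrow\; s(y) = s'(y).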
Abstract:
This thesis consists of an introduction, four research articles and an appendix. The thesis studies relations between two different approaches to the continuum limit of models of two-dimensional statistical mechanics at criticality. The approach of conformal field theory (CFT) could be thought of as the algebraic classification of some basic objects in these models. It has been successfully used by physicists since the 1980s. The other approach, Schramm-Loewner evolutions (SLEs), is a recently introduced set of mathematical methods to study random curves or interfaces occurring in the continuum limit of the models. The first and second included articles argue on the basis of statistical mechanics what would be a plausible relation between SLEs and conformal field theory. The first article studies multiple SLEs, that is, several random curves simultaneously in a domain. The proposed definition is compatible with a natural commutation requirement suggested by Dubédat. The curves of a multiple SLE may form different topological configurations, “pure geometries”. We conjecture a relation between the topological configurations and the CFT concepts of conformal blocks and operator product expansions. Example applications of multiple SLEs include crossing probabilities for percolation and the Ising model. The second article studies SLE variants that represent models with boundary conditions implemented by primary fields. The most well known of these, SLE(kappa, rho), is shown to be simple in terms of the Coulomb gas formalism of CFT. In the third article the space of local martingales for variants of SLE is shown to carry a representation of the Virasoro algebra. Finding this structure is guided by the relation of SLEs and CFTs in general, but the result is established in a straightforward fashion. This article, too, emphasizes multiple SLEs and proposes a possible way of treating pure geometries in terms of the Coulomb gas. The fourth article states results of applying the Virasoro structure to the open questions of SLE reversibility and duality. Proofs of the stated results are provided in the appendix. The objective is an indirect computation of certain polynomial expected values. Provided that these expected values exist, in generic cases they are shown to possess the desired properties, thus giving support for both reversibility and duality.
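As background, chordal SLE(kappa) in the upper half-plane is defined through the Loewner equation

\partial_t g_t(z) = \frac{2}{g_t(z) - W_t}, \qquad g_0(z) = z, \qquad W_t = \sqrt{\kappa}\, B_t,

where B_t is a standard Brownian motion; the variants mentioned above, such as SLE(kappa, rho) and multiple SLEs, modify the driving process W_t, for instance by drift terms determined by marked boundary points.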
Abstract:
We present four new reinforcement learning algorithms based on actor-critic, natural-gradient, and function-approximation ideas, and we provide their convergence proofs. Actor-critic reinforcement learning methods are online approximations to policy iteration in which the value-function parameters are estimated using temporal difference learning and the policy parameters are updated by stochastic gradient descent. Methods based on policy gradients in this way are of special interest because of their compatibility with function-approximation methods, which are needed to handle large or infinite state spaces. The use of temporal difference learning in this way is of special interest because in many applications it dramatically reduces the variance of the gradient estimates. The use of the natural gradient is of interest because it can produce better conditioned parameterizations and has been shown to further reduce variance in some cases. Our results extend prior two-timescale convergence results for actor-critic methods by Konda and Tsitsiklis by using temporal difference learning in the actor and by incorporating natural gradients. Our results extend prior empirical studies of natural actor-critic methods by Peters, Vijayakumar and Schaal by providing the first convergence proofs and the first fully incremental algorithms.
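As generic background (not the paper's incremental natural actor-critic algorithms), a one-step actor-critic update with a linear critic and a softmax policy can be sketched as follows; the feature dimension, step sizes, and function names are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(0)
n_features, n_actions = 8, 3
w = np.zeros(n_features)                    # critic parameters (value function)
theta = np.zeros((n_actions, n_features))   # actor parameters (policy)
alpha_w, alpha_theta, gamma = 0.1, 0.01, 0.99

def policy(phi):
    prefs = theta @ phi
    prefs -= prefs.max()                    # numerical stability
    probs = np.exp(prefs)
    return probs / probs.sum()

def select_action(phi):
    return rng.choice(n_actions, p=policy(phi))

def update(phi, a, reward, phi_next, done):
    """One TD(0) critic update and one policy-gradient actor update."""
    global w, theta
    v, v_next = w @ phi, 0.0 if done else w @ phi_next
    delta = reward + gamma * v_next - v     # TD error
    w += alpha_w * delta * phi              # critic: semi-gradient TD(0)
    probs = policy(phi)
    grad_log = -np.outer(probs, phi)        # gradient of log softmax policy
    grad_log[a] += phi
    # Actor step: vanilla policy gradient scaled by the TD error; a natural-
    # gradient variant would precondition this step with an estimate of the
    # inverse Fisher information of the policy.
    theta += alpha_theta * delta * grad_log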
Abstract:
We extend some of the classical connections between automata and logic due to Büchi (1960) [5] and McNaughton and Papert (1971) [12] to languages of finitely varying functions or “signals”. In particular, we introduce a natural class of automata for generating finitely varying functions, and show that it coincides in terms of language definability with a natural monadic second-order logic interpreted over finitely varying functions (Rabinovich (2002) [15]). We also identify a “counter-free” subclass of these automata which characterises the first-order definable languages of finitely varying functions. Our proofs mainly factor through the classical results for word languages. These results have applications in automata characterisations for continuously interpreted real-time logics like Metric Temporal Logic (MTL) (Chevalier et al. (2006, 2007) [6] and [7]).
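For reference, one common formulation of the Metric Temporal Logic mentioned above has the syntax

\varphi \;::=\; p \;\mid\; \neg\varphi \;\mid\; \varphi_1 \wedge \varphi_2 \;\mid\; \varphi_1\, \mathsf{U}_I\, \varphi_2,

where p ranges over atomic propositions and I over intervals of the time domain; under the continuous interpretation, formulas are evaluated over signals, i.e. finitely varying functions of real time. The exact variant studied in the cited works may differ in details.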
Abstract:
We consider the slotted ALOHA protocol on a channel with a capture effect. There are M
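Purely as background on this setting, a toy simulation of slotted ALOHA with M backlogged users and a simplistic capture rule might look as follows; the capture model, transmission probability, and parameter values are assumptions chosen for illustration, not taken from the paper.

import random

def throughput(M=10, tx_prob=0.1, capture_prob=0.3, slots=100_000, seed=0):
    """Fraction of slots with a successful reception, under a crude capture
    rule: a collision still delivers one packet with probability capture_prob."""
    rng = random.Random(seed)
    successes = 0
    for _ in range(slots):
        transmitters = sum(rng.random() < tx_prob for _ in range(M))
        if transmitters == 1 or (transmitters > 1 and rng.random() < capture_prob):
            successes += 1
    return successes / slots

print(throughput())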
Abstract:
After Gödel's incompleteness theorems and the collapse of Hilbert's programme Gerhard Gentzen continued the quest for consistency proofs of Peano arithmetic. He considered a finitistic or constructive proof still possible and necessary for the foundations of mathematics. For a proof to be meaningful, the principles relied on should be considered more reliable than the doubtful elements of the theory concerned. He worked out a total of four proofs between 1934 and 1939. This thesis examines the consistency proofs for arithmetic by Gentzen from different angles. The consistency of Heyting arithmetic is shown both in a sequent calculus notation and in natural deduction. The former proof includes a cut elimination theorem for the calculus and a syntactical study of the purely arithmetical part of the system. The latter consistency proof in standard natural deduction has been an open problem since the publication of Gentzen's proofs. The solution to this problem for an intuitionistic calculus is based on a normalization proof by Howard. The proof is performed in the manner of Gentzen, by giving a reduction procedure for derivations of falsity. In contrast to Gentzen's proof, the procedure contains a vector assignment. The reduction reduces the first component of the vector and this component can be interpreted as an ordinal less than epsilon_0, thus ordering the derivations by complexity and proving termination of the process.
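For reference, the ordinal bound mentioned here is epsilon_0, the least ordinal closed under omega-exponentiation:

\varepsilon_0 = \sup\{\omega,\ \omega^{\omega},\ \omega^{\omega^{\omega}},\ \dots\}, \qquad \omega^{\varepsilon_0} = \varepsilon_0.

Since the ordinals below \varepsilon_0 are well-ordered, a reduction procedure that strictly decreases such an ordinal assigned to each derivation cannot run forever, which is how the vector assignment yields termination.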
Abstract:
Tax havens have attracted increasing attention from the authorities of non-haven countries, and the financial crisis has exacerbated the negative attitude towards them. Offshore zones are now under strong pressure from international institutions, both financial and political. This thesis therefore focuses on a current problem of the modern economy, namely tax havens and their impact on non-haven countries. The thesis is based on several articles, in particular “Tax Competition With Parasitic Tax Havens” by Joel Slemrod and John D. Wilson (University of Michigan, 2009) and “Do Havens Divert Economic Activity” by James R. Hines Jr., C. Fritz Foley and Mihir A. Desai (Ross School of Business, 2005). These papers provide two completely different and contradictory viewpoints on the problem of coexisting tax havens and non-haven countries. Two models, one from each line of research, are examined in this work: the first concentrates on the positive effects of tax havens, whereas the second focuses on the negative effects of offshore jurisdictions. The first model explains and supports the claim that tax havens can positively influence nearby high-tax countries; it argues that the existence of offshore jurisdictions can stimulate the growth of operations and facilitate economic activity in non-haven countries. The second model and its analysis, in contrast, confirm the undesirability of the existence of offshore areas: taking into consideration that jurisdictions choose their optimal policies, the elimination of offshore havens would have a positive impact on the remaining countries. This model proves that full or partial elimination of tax havens raises the equilibrium level of the public good and increases country welfare. It can be concluded that both models provide telling arguments for their assertions, so both points of view have merit. Nevertheless, the ongoing debate concerning this issue continues to raise many questions.
Abstract:
The object of this work is Hegel's Logic, which comprises the first third of his philosophical System that also includes the Philosophy of Nature and the Philosophy of Spirit. The work is divided into two parts, where the first part investigates Hegel's Logic in itself or without an explicit reference to the rest of Hegel's System. It is argued in the first part that Hegel's Logic contains a methodology for constructing examples of basic ontological categories. The starting point on which this construction is based is a structure Hegel calls Nothing, which I argue to be identical with an empty situation, that is, a situation with no objects in it. Examples of further categories are constructed, firstly, by making previous structures objects of new situations. This rule makes it possible for Hegel to introduce examples of ontological structures that contain objects as constituents. Secondly, Hegel also takes the very constructions he uses as constituents of further structures: thus, he is able to exemplify ontological categories involving causal relations. The final result of Hegel's Logic should then be a model of Hegel's Logic itself, or at least of its basic methods. The second part of the work focuses on the relation of Hegel's Logic to the other parts of Hegel's System. My interpretation tries to avoid, firstly, the extreme of taking Hegel's System as a grand metaphysical attempt to deduce what exists through abstract thinking, and secondly, the extreme of seeing Hegel's System as mere diluted Kantianism or a second-order investigation of theories concerning objects instead of actual objects. I suggest a third manner of reading Hegel's System, based on extending the constructivism of Hegel's Logic to the whole of his philosophical System. According to this interpretation, transitions between parts of Hegel's System should not be understood as proofs of any sort, but as constructions of one structure or its model from another structure. Hence, these transitions involve at least, and especially within the Philosophy of Nature, modelling of one type of object or phenomenon through characteristics of an object or phenomenon of another type, and in the best case, and especially within the Philosophy of Spirit, transformations of an object or phenomenon of one type into an object or phenomenon of another type. Thus, the transitions and descriptions within Hegel's System concern actual objects and not mere theories, but they still involve no fallacious deductions.