965 results for Algebra, Boolean
Abstract:
We define an applicative theory of truth TPT which proves totality exactly for the polynomial-time computable functions. TPT has natural and simple axioms, since nearly all of its truth axioms are standard for truth theories over an applicative framework. The only exception is the axiom dealing with the word predicate: the truth predicate can only reflect elementhood in the words for terms whose length is smaller than a given word. This restriction is what makes the very low proof-theoretic strength achievable, while truth induction can be allowed without any constraints. For these reasons the system TPT has the high expressive power one expects from truth theories, and it allows embeddings of feasible systems of explicit mathematics and bounded arithmetic. The proof that the theory TPT is feasible is not easy: a standard realisation approach cannot be applied. For this reason we develop a new realisation approach whose realisation functions work on directed acyclic graphs, which lets us express and manipulate realisation information more efficiently.
Abstract:
Protecting different kinds of information has become an important area of research. One aspect is to provide effective means to prevent secrets from being deduced from the answers to legitimate queries. In the context of atomic propositional databases, several methods have been developed to achieve this goal. However, in those databases it is not possible to formalize structural information, and they are quite restrictive with respect to the specification of secrets. In this paper we extend those methods to match the much greater expressive power of Boolean description logics. In addition to the formal framework, we provide a discussion of various kinds of censors and establish the different levels of security they can provide.
Abstract:
An elementary algebra identifies conceptual and corresponding applicational limitations in John Kemeny and Paul Oppenheim’s (K-O) 1956 model of theoretical reduction in the sciences. The K-O model was once widely accepted, at least in spirit, but seems afterward to have been discredited, or in any event superseded. Today, the K-O reduction model is seldom mentioned, except to clarify when a reduction in the Kemeny-Oppenheim sense is not intended. The present essay takes a fresh look at the basic mathematics of K-O comparative-vocabulary theoretical term reductions, from historical and philosophical standpoints, as a contribution to the history of the philosophy of science. The K-O theoretical reduction model qualifies a theory replacement as a successful reduction when preconditions of explanatory adequacy and comparable systematization are met, and the most economical statement of a theory’s putative propositional truths contains fewer theoretical terms, identified as replicable syntax types, than the theoretical term count for the theory it replaces. The challenge to the historical model developed here, intended to help explain its scope and limitations, involves the potential for equivocal theoretical meanings of multiple theoretical term tokens of the same syntactical type.
Abstract:
By Fr. Adrian Köcher
Abstract:
By J. Ineichen, Professor of Physics at the Lyceum in Lucerne
Abstract:
This paper analyzes the role of Computer Algebra Systems (CAS) in a model of learning based on competences. The proposal is an e-learning Linear Algebra course for Engineering, which includes the use of a CAS (Maxima) and focuses on problem solving. A reference model has been taken from the Spanish Open University. The proper use of CAS is defined as an indicator of the generic competence: Use of Technology. Additionally, we show that using CAS could help to enhance the following generic competences: Self Learning, Planning and Organization, Communication and Writing, Mathematical and Technical Writing, Information Management and Critical Thinking.
Abstract:
This project investigates the utility of differential algebra (DA) techniques applied to the problem of orbital dynamics with initial uncertainties in the orbit determination of the bodies involved. The use of DA theory allows a common Monte Carlo simulation to be split into two parts: the generation of a Taylor map of the final states with respect to perturbations of the initial coordinates, and the evaluation of that map at many points. A propagator exploiting DA techniques is implemented and tested in the field of asteroid impact risk monitoring, with the potentially hazardous asteroids 2011 AG5 and 2007 VK184 as test cases. Results show that the new method is able to simulate 2.5 million trajectories with a precision good enough for the impact probability to be accurately reproduced, while running much faster than a traditional Monte Carlo approach (1 day versus 2 days, respectively).
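The splitting described in this abstract can be sketched in Python under toy assumptions: the one-dimensional dynamics, the second-order finite-difference Taylor map, the uncertainty level and the "impact" threshold below are all invented for illustration and stand in for the paper's DA-based propagator and the real asteroid dynamics.

```python
# Toy illustration (not the paper's DA propagator): replace an expensive
# per-sample propagation by a Taylor map of the final state with respect to
# a perturbation of the initial condition, then evaluate that map for many
# Monte Carlo samples.
import numpy as np

def propagate(x0, t_end=2.0, steps=500):
    """Expensive reference propagation of dx/dt = -x + 0.3*sin(x) (RK4)."""
    dt = t_end / steps
    f = lambda x: -x + 0.3 * np.sin(x)
    x = x0
    for _ in range(steps):
        k1 = f(x); k2 = f(x + 0.5*dt*k1); k3 = f(x + 0.5*dt*k2); k4 = f(x + dt*k3)
        x += dt * (k1 + 2*k2 + 2*k3 + k4) / 6.0
    return x

# 1) Build a second-order Taylor map of the final state around the nominal
#    initial condition, here by finite differences (a DA package would give
#    the derivatives directly and to higher order).
x_nom, h = 1.0, 1e-3
f0 = propagate(x_nom)
d1 = (propagate(x_nom + h) - propagate(x_nom - h)) / (2 * h)
d2 = (propagate(x_nom + h) - 2 * f0 + propagate(x_nom - h)) / h**2
taylor_map = lambda dx: f0 + d1 * dx + 0.5 * d2 * dx**2

# 2) Evaluate the map for many samples instead of re-propagating each one.
rng = np.random.default_rng(0)
dx_samples = rng.normal(0.0, 0.05, size=100_000)   # initial-condition uncertainty
finals_map = taylor_map(dx_samples)                 # cheap polynomial evaluation

# "Impact probability" analogue: fraction of samples below a threshold.
threshold = 0.26
p_map = np.mean(finals_map < threshold)

# Spot-check the surrogate against a few direct propagations.
check = np.array([propagate(x_nom + d) for d in dx_samples[:100]])
err = np.max(np.abs(taylor_map(dx_samples[:100]) - check))
print(f"P(final < {threshold}) via Taylor map: {p_map:.4f}")
print(f"max |map - direct| on 100 spot checks: {err:.2e}")
```

In this sketch the propagator is only called a handful of times to build the map; the 100,000 samples cost one vectorised polynomial evaluation, which is the source of the speed-up claimed in the abstract.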
Abstract:
In this work, the algebraic properties of the local transition functions of elementary cellular automata (ECA) are analysed. Specifically, such cellular automata are classified according to their algebraic degree, balancedness, resiliency, nonlinearity, propagation criterion and the existence of non-zero linear structures. It is shown that no ECA satisfies all of these properties at the same time.
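The local rule of an ECA is a 3-variable Boolean function, so three of the criteria mentioned here can be computed directly from the rule number. The following minimal sketch (not the paper's code; the rule selection is arbitrary) uses the binary Moebius transform for the algebraic degree and the Walsh-Hadamard transform for the nonlinearity.

```python
# Criteria of an ECA local rule, read off its Wolfram rule number:
# balancedness, algebraic degree (ANF) and nonlinearity (Walsh spectrum).
N = 3  # ECA local rules are Boolean functions of 3 variables

def truth_table(rule):
    """Output f(x) for neighbourhoods x = 0..7; bit x of the rule number is f(x)."""
    return [(rule >> x) & 1 for x in range(2**N)]

def is_balanced(tt):
    return sum(tt) == len(tt) // 2

def algebraic_degree(tt):
    """Degree of the ANF, computed with the fast binary Moebius transform."""
    anf, step = list(tt), 1
    while step < len(anf):
        for i in range(0, len(anf), 2 * step):
            for j in range(i, i + step):
                anf[j + step] ^= anf[j]
        step *= 2
    return max((bin(i).count("1") for i, c in enumerate(anf) if c), default=0)

def nonlinearity(tt):
    """2^(n-1) - max|W_f|/2, with W_f the Walsh-Hadamard spectrum of (-1)^f."""
    w, step = [1 - 2 * b for b in tt], 1
    while step < len(w):
        for i in range(0, len(w), 2 * step):
            for j in range(i, i + step):
                w[j], w[j + step] = w[j] + w[j + step], w[j] - w[j + step]
        step *= 2
    return 2**(N - 1) - max(abs(v) for v in w) // 2

for rule in (30, 90, 110, 150):          # a few well-known ECA rules
    tt = truth_table(rule)
    print(f"rule {rule:3d}: balanced={is_balanced(tt)}, "
          f"degree={algebraic_degree(tt)}, nonlinearity={nonlinearity(tt)}")
```

Running this over all 256 rule numbers is how a classification of the kind described in the abstract can be tabulated.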
Abstract:
We present simulation results on how the instability of the power output-input characteristic of a Distributed Feedback (DFB) semiconductor laser diode (SLA) can be employed to implement Boolean logic devices. Two configurations of a DFB laser diode under external optical injection, operating either in transmission or in reflection, are used to implement different Optical Logic Cells (OLCs), called the Q-device and the P-device OLCs. The external optical injection corresponds to two data inputs plus a CW control signal that selects the Boolean logic function to be implemented. The DFB laser diode parameters are chosen to obtain an output-input characteristic with the desired values, mainly the on-off contrast and the switching power, which shape the hysteresis cycle. Two DFB lasers in cascade, one working in transmission and the other in reflection, allow the design of an input-output characteristic with the same response as a self-electrooptic effect device. The input power for a bit "1" is 35 uW (70 uW) and for a bit "0" it is zero, for all the Boolean functions to be executed; the control signal range used to choose the logic function is 0-140 uW (280 uW). Figures are given for the Q-device, with the P-device values in parentheses.
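The principle of selecting the logic function with a CW control power can be abstracted into a very small Python model. This is a hedged toy sketch only, not the paper's laser simulation: the device is reduced to a static characteristic with a single high-output power window, and the thresholds and control levels below are invented (only the 35 uW "1"-bit power is taken from the abstract, Q-device figure).

```python
# Toy optical-logic-cell model: data bits and a CW control power add optically,
# and the cell emits a "1" only when the total injected power falls inside the
# high-output window [T_LOW, T_HIGH) of an idealised transfer characteristic.
P_BIT = 35e-6                   # power of a logical "1" input (35 uW)
T_LOW, T_HIGH = 50e-6, 95e-6    # assumed switching thresholds (illustrative)

def olc_output(data_a: int, data_b: int, control: float) -> int:
    """Output bit for data bits a, b and a CW control power (in watts)."""
    total = data_a * P_BIT + data_b * P_BIT + control
    return int(T_LOW <= total < T_HIGH)

# Sweeping the control power selects the Boolean function of the two data bits.
for name, control in [("AND", 0.0), ("OR", 20e-6), ("NOR", 60e-6)]:
    table = [olc_output(a, b, control) for a in (0, 1) for b in (0, 1)]
    print(f"control = {control*1e6:5.1f} uW -> (a,b)=00,01,10,11 gives {table}  ({name})")
```

With these assumed thresholds, zero control power realises AND, 20 uW realises OR and 60 uW realises NOR, mirroring the abstract's idea that one cell plus a control level covers several Boolean functions.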
Abstract:
This work describes an experience with a competence-based learning methodology in Linear Algebra for engineering students. The experience is based on autonomous team work. DERIVE tutorials for Linear Algebra topics are provided to the students, who work through them as homework. Afterwards, worksheets with exercises are solved by the students, organized in teams, using the DERIVE functions previously defined in the tutorials. The students send the solutions of the proposed exercises to the instructor and fill in a survey with their impressions on the following items: ease of use of the files, usefulness of the tutorials for understanding the mathematical topics, and the time spent on the experience. As a final piece of work, we have designed an activity aimed at interested students: they prepare a project related to a real problem in Science and Engineering. The students are free to choose the topic and to develop it, but they have to use DERIVE in the solution, guided by the instructor. Some examples of activities related to Orthogonal Transformations are presented.
Abstract:
A toolbox is a set of procedures taking advantage of the computing power and graphical capabilities of a CAS. With these procedures the students can solve math problems, apply mathematics to engineering or simply reinforce the learning of certain mathematical concepts. From the point of view of their construction, we can consider two types of toolboxes: (i) the closed box, built by the teacher, in which the utility files are provided to the students together with the respective tutorials and several worksheets with proposed exercises and problems,
Abstract:
Printer's mark on the verso of the title page and on the verso of the last leaf.
Abstract:
This thesis develops the theoretical foundations and designs an open collection of C++ classes, called VBF (Vector Boolean Functions), for analyzing vector Boolean functions (functions that map a Boolean vector to another Boolean vector) from a cryptographic perspective. This new implementation uses the NTL library by Victor Shoup, adding new modules which complement the existing ones and make VBF better suited for cryptography. The fundamental class representing a vector Boolean function can be initialized in a flexible way via several alternative data structures such as the Truth Table, the Trace Representation and the Algebraic Normal Form (ANF), among others. In this way, VBF allows the evaluation of the most relevant cryptographic criteria for block and stream ciphers as well as for hash functions: for instance, it provides the nonlinearity, the linearity distance, the algebraic degree, the linear structures, and the frequency distribution of the absolute values of the Walsh Spectrum or the Autocorrelation Spectrum, among other criteria. In addition, VBF can perform operations between vector Boolean functions such as equality testing, composition, inversion, sum, direct sum, bricklayering (parallel application of vector Boolean functions, as employed in the Rijndael cipher), and adding the coordinate functions of two vector Boolean functions. The thesis also illustrates the use of the VBF library in two practical applications. On the one hand, the most relevant properties of existing block ciphers have been analysed. On the other hand, by combining VBF with optimization algorithms, Boolean functions have been designed whose cryptographic properties are the best known to date.
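Two of the criteria that VBF reports can be illustrated with a short Python sketch. This is not VBF's C++/NTL API; the 3-bit S-box below is an arbitrary example, and the code only shows how the nonlinearity and the algebraic degree of a vector Boolean function are defined from its truth table.

```python
# Nonlinearity and algebraic degree of a toy 3x3 S-box, defined through its
# component functions <mask, S(x)> (mask ranging over nonzero output masks).
N = 3
SBOX = [0, 1, 3, 6, 7, 4, 5, 2]   # arbitrary 3-bit permutation, not from any cipher

def component(mask):
    """Boolean component function x -> <mask, S(x)> (parity of masked output bits)."""
    return [bin(SBOX[x] & mask).count("1") & 1 for x in range(2**N)]

def walsh(tt):
    """Fast Walsh-Hadamard spectrum of (-1)^f for a truth table tt."""
    w, step = [1 - 2 * b for b in tt], 1
    while step < len(w):
        for i in range(0, len(w), 2 * step):
            for j in range(i, i + step):
                w[j], w[j + step] = w[j] + w[j + step], w[j] - w[j + step]
        step *= 2
    return w

def degree(tt):
    """Algebraic degree via the fast binary Moebius (ANF) transform."""
    anf, step = list(tt), 1
    while step < len(anf):
        for i in range(0, len(anf), 2 * step):
            for j in range(i, i + step):
                anf[j + step] ^= anf[j]
        step *= 2
    return max((bin(i).count("1") for i, c in enumerate(anf) if c), default=0)

# Nonlinearity of the vector function: the worst (smallest) value over all
# nonzero component functions; degree: the largest over the components.
nl = min(2**(N - 1) - max(abs(v) for v in walsh(component(m))) // 2
         for m in range(1, 2**N))
deg = max(degree(component(m)) for m in range(1, 2**N))
print(f"toy S-box nonlinearity = {nl}, algebraic degree = {deg}")
```

VBF computes these and the other criteria listed above (linearity distance, linear structures, Walsh and autocorrelation spectra distributions) for much larger functions and directly from any of its supported input representations.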