967 results for Theoretical Computer Science
Abstract:
At the core of the analysis task in the development process is information systems requirements modelling. Modelling of requirements has been carried out for many years, and the techniques used have progressed from flowcharting through data flow diagrams and entity-relationship diagrams to today's object-oriented schemas. Unfortunately, researchers have been able to offer practitioners little theoretical guidance on which techniques to use and when. In an attempt to address this situation, Wand and Weber have developed a series of models based on the ontological theory of Mario Bunge: the Bunge-Wand-Weber (BWW) models. Two particular criticisms of the models have persisted, however: the understandability of the constructs in the BWW models and the difficulty of applying the models to a modelling technique. This paper addresses these issues by presenting a meta model of the BWW constructs using a meta language that is familiar to many IS professionals, more specific than plain English text, yet easier to understand than the set-theoretic language of the original BWW models. Such a meta model also facilitates the application of the BWW theory to other modelling techniques that have similar meta models defined. Moreover, this approach supports the identification of patterns of constructs that might be common across meta models for modelling techniques. Such findings are useful in extending and refining the BWW theory. (C) 2002 Elsevier Science Ltd. All rights reserved.
Abstract:
We detail the automatic construction of R matrices corresponding to (the tensor products of) the (0_m|α) families of highest-weight representations of the quantum superalgebras U_q[gl(m|n)]. These representations are irreducible, contain a free complex parameter α, and are 2^{mn}-dimensional. Our R matrices are actually (sparse) rank 4 tensors, containing a total of 2^{4mn} components, each of which is in general an algebraic expression in the two complex variables q and α. Although the constructions are straightforward, we describe them in full here, to fill a perceived gap in the literature. As the algorithms are generally impracticable for manual calculation, we have implemented the entire process in Mathematica; we illustrate our results with U_q[gl(3|1)]. (C) 2002 Published by Elsevier Science B.V.
Abstract:
We introduce a model of computation based on read-only memory (ROM), which allows us to compare the space-efficiency of reversible, error-free classical computation with reversible, error-free quantum computation. We show that a ROM-based quantum computer with one writable qubit is universal, whilst two writable bits are required for a universal classical ROM-based computer. We also comment on the time-efficiency advantages of quantum computation within this model.
Abstract:
Recent work by Siegelmann has shown that the computational power of recurrent neural networks matches that of Turing Machines. One important implication is that complex language classes (infinite languages with embedded clauses) can be represented in neural networks. Proofs are based on a fractal encoding of states to simulate the memory and operations of stacks. In the present work, it is shown that similar stack-like dynamics can be learned in recurrent neural networks from simple sequence prediction tasks. Two main types of network solutions are found and described qualitatively as dynamical systems: damped oscillation and entangled spiraling around fixed points. The potential and limitations of each solution type are established in terms of generalization on two different context-free languages. Both solution types constitute novel stack implementations - generally in line with Siegelmann's theoretical work - which supply insights into how embedded structures of languages can be handled in analog hardware.
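The fractal encoding of stacks mentioned above can be made concrete with a short sketch. The base-4 (Cantor-style) scheme below is a common textbook variant used in Siegelmann-style constructions; it is an illustrative assumption, not the exact encoding from the paper, and the networks in the paper learn approximations of such dynamics rather than applying these maps symbolically.

```python
def push(x, s):
    """Push bit s (0 or 1) onto the stack encoded as x in [0, 1)."""
    return (x + 2 * s + 1) / 4

def top(x):
    """Read the top bit without popping (stack must be non-empty)."""
    return (int(4 * x) - 1) // 2

def pop(x):
    """Remove the top bit, returning the encoding of the rest."""
    return 4 * x - (2 * top(x) + 1)

x = 0.0                  # the empty stack is encoded as 0
for bit in [1, 0, 1]:    # push 1, then 0, then 1
    x = push(x, bit)
print(top(x))            # -> 1 (most recently pushed bit)
x = pop(x)
print(top(x))            # -> 0
```

Because each push contracts the interval by a factor of 4, the entire stack contents live in a single analog value, which is the kind of state a sigmoidal unit can hold.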
Abstract:
The way humans interact with technology is undergoing a tremendous change. It is hard to imagine the lives we live today without the benefits of technology that we take for granted. Applying research in computer science, engineering, and information systems to non-technical descriptions of technology, such as human interaction, has shaped and continues to shape our lives. Human Interaction with Technology for Working, Communicating, and Learning: Advancements provides a framework for conceptual, theoretical, and applied research with regard to the relationship between technology and humans. This book is unique in the sense that it covers not only technology, but also science, research, and the relationship between these fields and individuals' experience. This book is a must-have for anyone interested in this research area, as it provides a voice for all users and a look into our future.
Abstract:
The formulation of a bending vibration problem of an elastically restrained Bernoulli-Euler beam carrying a finite number of concentrated elements along its length is presented. In this study, the authors exploit the application of the differential evolution optimization technique to identify the torsional stiffness properties of the elastic supports of a Bernoulli-Euler beam. This hybrid strategy allows the determination of the natural frequencies and mode shapes of continuous beams, taking into account the effect of attached concentrated masses and rotational inertias, followed by a reconciliation step between the theoretical model results and the experimental ones. The proposed optimal identification of the elastic support parameters is computationally demanding if the exact eigenproblem solving is considered; hence, the use of a Gaussian process regression as a meta-model is addressed. An experimental application is used to assess the accuracy of the estimated parameters through the comparison of the natural frequencies obtained experimentally from impact tests with the corresponding computed eigenfrequencies.
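The identification idea can be hinted at with a minimal differential evolution loop. Everything below is a stand-in sketch, not the authors' code: the one-parameter model f(k) = sqrt(k)/(2π), the target frequency, the bounds, and the DE settings are all assumptions chosen to keep the example self-contained (the paper tunes torsional support stiffnesses against a Bernoulli-Euler eigenproblem, or a Gaussian process surrogate of it).

```python
import math
import random

random.seed(1)

def model_freq(k):
    # Hypothetical one-parameter frequency model (stand-in for the
    # exact beam eigenproblem or its Gaussian-process meta-model).
    return math.sqrt(k) / (2 * math.pi)

target = 12.0                      # "measured" natural frequency (Hz)
lo, hi = 1e3, 1e4                  # search bounds for the stiffness k
NP, F, CR = 20, 0.7, 0.9           # population size, scale factor, crossover rate

pop = [random.uniform(lo, hi) for _ in range(NP)]
cost = [(model_freq(k) - target) ** 2 for k in pop]

for _ in range(200):               # DE/rand/1 generations
    for i in range(NP):
        a, b, c = random.sample([j for j in range(NP) if j != i], 3)
        trial = pop[a] + F * (pop[b] - pop[c])    # mutation
        if random.random() > CR:                  # crossover (1-D case)
            trial = pop[i]
        trial = min(max(trial, lo), hi)           # keep within bounds
        tc = (model_freq(trial) - target) ** 2
        if tc < cost[i]:                          # greedy selection
            pop[i], cost[i] = trial, tc

best = min(range(NP), key=cost.__getitem__)
print(pop[best], model_freq(pop[best]))
```

The expensive part in practice is each `model_freq` evaluation, which is exactly why the paper substitutes a Gaussian process regression meta-model for the exact eigenproblem inside this loop.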
Abstract:
Master's internship report in Informatics Teaching (Ensino de Informática)
Abstract:
This article reflects on the training requirements of the professionals demanded by the knowledge society. One of the most important objectives of the university in the knowledge society is to train competent professionals with sufficient intellectual tools to face the uncertainty of information, the awareness that it has a short-term expiry date, and the anxiety this provokes. In addition, they must also be able to define and create the working tools with which they will give meaning and effectiveness to this changing and mutating knowledge. For this reason, the European Higher Education Area prioritizes the transversal competence of collaborative work, with the aim of promoting autonomous, committed learning adapted to the new needs of the twenty-first-century company. In this context, we present the theoretical framework underlying the work developed on the ACME computing platform, which combines collaborative work with blended learning. We also describe in detail some examples of wikis, a paradigm of collaborative work, created in courses taught at the Universitat de Girona in the ACME virtual space.
Abstract:
The concept of digital literacy has evolved along several paths over time with regard to the theoretical approach used to investigate its implications for the study of the gender digital divide in various real-life contexts. The main objective of this paper is to use an interdisciplinary approach to analyse some of the theoretical and empirical gaps present in the study of the gender digital divide. Some of the existing empirical studies on this question are analysed, and future lines of research are proposed, with the aim of covering some of the gaps in research related to the implications of digital literacy for the analysis of the gender digital divide.
Abstract:
A Fundamentals of Computing Theory course involves different topics that are core to the Computer Science curricula and whose level of abstraction makes them difficult both to teach and to learn. Such difficulty stems from the complexity of the abstract notions involved and the required mathematical background. Surveys conducted among our students showed that many of them were applying some theoretical concepts mechanically rather than developing meaningful learning. This paper describes a number of didactic strategies that we introduced in the Fundamentals of Computing Theory curriculum to address this problem. The proposed strategies were based on a stronger use of technology and a constructivist approach. The final goal was to promote more meaningful learning of the course topics.
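A constructivist, technology-supported take on such a course typically has students implement the abstract notions themselves rather than apply definitions mechanically. As one hedged illustration (our own toy example, not one taken from the paper), a deterministic finite automaton becomes concrete the moment students encode its transition table and run it:

```python
# Toy DFA over {0, 1} accepting strings with an even number of 1s.
# States: 'even' (start, accepting) and 'odd'.
delta = {
    ('even', '0'): 'even', ('even', '1'): 'odd',
    ('odd', '0'): 'odd',   ('odd', '1'): 'even',
}

def accepts(word):
    state = 'even'
    for symbol in word:
        state = delta[(state, symbol)]
    return state == 'even'

print(accepts('1011'))   # -> False (three 1s)
print(accepts('1001'))   # -> True  (two 1s)
```

Running and modifying such an artifact (e.g., changing the accepting state and predicting the new language) is the kind of active engagement a constructivist strategy relies on.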
Abstract:
The development of correct programs is a core problem in computer science. Although formal verification methods for establishing correctness with mathematical rigor are available, programmers often find these difficult to put into practice. One hurdle is deriving the loop invariants and proving that the code maintains them. So-called correct-by-construction methods aim to alleviate this issue by integrating verification into the programming workflow. Invariant-based programming is a practical correct-by-construction method in which the programmer first establishes the invariant structure and then incrementally extends the program in steps of adding code and proving, after each addition, that the code is consistent with the invariants. In this way, the program is kept internally consistent throughout its development, and the construction of the correctness arguments (proofs) becomes an integral part of the programming workflow. A characteristic of the approach is that programs are described as invariant diagrams, a graphical notation similar to the state charts familiar to programmers. Invariant-based programming is a new method that has not yet been evaluated in large-scale studies. The most important prerequisite for feasibility on a larger scale is a high degree of automation. The goal of the Socos project has been to build tools to assist the construction and verification of programs using the method. This thesis describes the implementation and evaluation of a prototype tool in the context of the Socos project. The tool supports the drawing of the diagrams, automatic derivation and discharging of verification conditions, and interactive proofs. It is used to develop programs that are correct by construction. The tool consists of a diagrammatic environment connected to a verification condition generator and an existing state-of-the-art theorem prover.
Its core is a semantics for translating diagrams into verification conditions, which are sent to the underlying theorem prover. We describe a concrete method for 1) deriving sufficient conditions for total correctness of an invariant diagram; 2) sending the conditions to the theorem prover for simplification; and 3) reporting the results of the simplification to the programmer in a way that is consistent with the invariant-based programming workflow and that allows errors in the program specification to be efficiently detected. The tool uses an efficient automatic proof strategy to prove as many conditions as possible automatically and lets the remaining conditions be proved interactively. The tool is based on the verification system PVS and uses the SMT (Satisfiability Modulo Theories) solver Yices as a catch-all decision procedure. Conditions that were not discharged automatically may be proved interactively using the PVS proof assistant. The programming workflow is very similar to the process by which a mathematical theory is developed inside a computer-supported theorem prover environment such as PVS. The programmer reduces a large verification problem with the aid of the tool into a set of smaller problems (lemmas), and can substantially improve the degree of proof automation by developing specialized background theories and proof strategies to support the specification and verification of a specific class of programs. We demonstrate this workflow by describing in detail the construction of a verified sorting algorithm. Tool-supported verification often has little to no presence in computer science (CS) curricula. Furthermore, program verification is frequently introduced as an advanced and purely theoretical topic that is not connected to the workflow taught in the early and practically oriented programming courses.
Our hypothesis is that verification could be introduced early in CS education, and that verification tools could be used in the classroom to support the teaching of formal methods. A prototype of Socos has been used in a course at Åbo Akademi University targeted at first- and second-year undergraduate students. We evaluate the use of Socos in the course as part of a case study carried out in 2007.
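The flavour of keeping code consistent with its invariants throughout development can be hinted at with runtime-checked assertions. The summation loop below is a generic illustration of a loop invariant and is our own sketch: invariant-based programming proper works with invariant diagrams discharged by the Socos/PVS toolchain, not with Python asserts.

```python
def sum_upto(n):
    """Return 0 + 1 + ... + n, checking the loop invariant at runtime."""
    assert n >= 0                      # precondition
    i, total = 0, 0
    # Invariant: total == 0 + 1 + ... + i  and  0 <= i <= n
    while i < n:
        assert total == i * (i + 1) // 2 and 0 <= i <= n
        i += 1
        total += i
    # Invariant plus exit condition (i == n) gives the postcondition:
    assert total == n * (n + 1) // 2
    return total

print(sum_upto(10))   # -> 55
```

In the invariant-based workflow the analogue of each `assert` is a verification condition proved once and for all (automatically where possible), rather than a check executed on every run.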
Abstract:
At the present time, protein folding is an extremely active field of research including aspects of biology, chemistry, biochemistry, computer science and physics. The fundamental principles have practical applications in the exploitation of the advances in genome research, in the understanding of different pathologies and in the design of novel proteins with special functions. Although the detailed mechanisms of folding are not completely known, significant advances have been made in the understanding of this complex process through both experimental and theoretical approaches. In this review, the evolution of concepts from Anfinsen's postulate to the "new view" emphasizing the concept of the energy landscape of folding is presented. The main rules of protein folding have been established from in vitro experiments. It has long been accepted that the in vitro refolding process is a good model for understanding the mechanisms by which a nascent polypeptide chain reaches its native conformation in the cellular environment. Indeed, many denatured proteins, even those whose disulfide bridges have been disrupted, are able to refold spontaneously. Although this assumption was challenged by the discovery of molecular chaperones, from the amount of both structural and functional information now available, it has been clearly established that the main rules of protein folding deduced from in vitro experiments are also valid in the cellular environment. This modern view of protein folding permits a better understanding of the aggregation processes that play a role in several pathologies, including those induced by prions and Alzheimer's disease. Drug design and de novo protein design, with the aim of creating proteins with novel functions by application of protein folding rules, are making significant progress and offer perspectives for practical applications in the development of pharmaceuticals and medical diagnostics.