876 results for COMPUTER SCIENCE, THEORY


Relevance:

90.00%

Publisher:

Abstract:

Graphical user interfaces (GUIs) are critical components of today's open source software. Given their increased relevance, the correctness and usability of GUIs are becoming essential. This paper describes the latest results in the development of our tool to reverse engineer the GUI layer of interactive open source computing systems. We use static analysis techniques to generate models of user interface behaviour from source code. Models help in graphical user interface inspection by allowing designers to concentrate on the most important aspects of the interface. One particular type of model that the tool is able to generate is the state machine. The paper shows how graph theory can be useful when applied to these models. A number of metrics and algorithms are used in the analysis of aspects of the user interface's quality. The ultimate goal of the tool is to enable the analysis of interactive systems through inspection of GUI source code.
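
To illustrate the kind of graph-theoretic analysis described above, here is a minimal sketch (not the paper's tool; the toy state machine and window names are invented) that represents a GUI behaviour model as a directed graph and computes a few simple quality-related metrics with networkx.

```python
# A minimal sketch: a reverse-engineered GUI behaviour model as a directed graph,
# with a few graph-theoretic metrics of the kind the abstract alludes to.
import networkx as nx

# Hypothetical GUI state machine: nodes are windows/dialogs, edges are user actions.
gui = nx.DiGraph()
gui.add_edges_from([
    ("Login", "Main", {"action": "ok"}),
    ("Main", "Settings", {"action": "open_settings"}),
    ("Settings", "Main", {"action": "close"}),
    ("Main", "About", {"action": "help_about"}),
    # "About" has no outgoing edge -- a potential usability problem.
])

print("states:", gui.number_of_nodes(), "transitions:", gui.number_of_edges())

# States unreachable from the entry point may indicate dead GUI code.
reachable = nx.descendants(gui, "Login") | {"Login"}
print("unreachable states:", set(gui.nodes) - reachable)

# States from which the user cannot navigate anywhere else (dead ends).
print("dead-end states:", [s for s in gui.nodes if gui.out_degree(s) == 0])

# Longest shortest path from the entry point: how deep the dialog structure is.
lengths = dict(nx.all_pairs_shortest_path_length(gui))
print("max navigation depth from Login:", max(lengths["Login"].values()))
```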

Relevance:

90.00%

Publisher:

Abstract:

Power law distributions, also known as heavy tail distributions, model distinct real-life phenomena in the areas of biology, demography, computer science, economics, information theory, language, and astronomy, amongst others. In this paper we present a review of the literature, with an eye to applications and possible explanations for the appearance of power laws in real phenomena. We also unravel some of the controversies surrounding power laws.
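
As a concrete illustration of the distributions discussed here (not an example from the paper), the following sketch samples from a continuous power law and recovers its exponent with the standard maximum-likelihood estimator alpha_hat = 1 + n / sum(ln(x_i / x_min)).

```python
# A minimal sketch: synthetic power-law data and the standard MLE (Hill-type)
# estimator of the exponent for a continuous power law with lower cutoff x_min.
import numpy as np

rng = np.random.default_rng(0)
alpha, x_min, n = 2.5, 1.0, 100_000

# Inverse-transform sampling: if u ~ Uniform(0,1), x = x_min * (1 - u)**(-1/(alpha - 1)).
u = rng.random(n)
x = x_min * (1.0 - u) ** (-1.0 / (alpha - 1.0))

alpha_hat = 1.0 + n / np.sum(np.log(x / x_min))
print(f"true alpha = {alpha}, estimated alpha = {alpha_hat:.3f}")
```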

Relevance:

90.00%

Publisher:

Abstract:

Also published in Lecture Notes in Engineering and Computer Science.

Relevance:

90.00%

Publisher:

Abstract:

Dynamic and distributed environments are hard to model, since they suffer from unexpected changes, incomplete knowledge, and conflicting perspectives, and thus call for appropriate knowledge representation and reasoning (KRR) systems. Such KRR systems must handle sets of dynamic beliefs, be sensitive to communicated and perceived changes in the environment and, consequently, may have to drop current beliefs in the face of new findings or disregard new data that conflicts with stronger convictions held by the system. Not only do they need to represent and reason with beliefs, but they must also perform belief revision to maintain the overall consistency of the knowledge base. One way of developing such systems is to use reason maintenance systems (RMS). In this paper we provide an overview of the most representative types of RMS, also known as truth maintenance systems (TMS), which are computational instances of the foundations-based theory of belief revision. An RMS module works together with a problem solver. The latter feeds the RMS with assumptions (core beliefs) and conclusions (derived beliefs), which are accompanied by their respective foundations. The role of the RMS module is to store the beliefs, associate with each belief (core or derived) the corresponding set of supporting foundations, and maintain the consistency of the overall reasoning by keeping, for each represented belief, the current supporting justifications. Two major approaches to reason maintenance are used: single- and multiple-context reasoning systems. Whereas in single-context systems each belief is associated with the beliefs that directly generated it (the justification-based TMS (JTMS) or the logic-based TMS (LTMS)), in their multiple-context counterparts each belief is associated with the minimal set of assumptions from which it can be inferred (the assumption-based TMS (ATMS) or the multiple belief reasoner (MBR)).
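
The following is a deliberately simplified, illustrative sketch of the justification-based idea: a belief is labeled IN when it is an assumption or when some justification has all of its antecedents IN, and labels are recomputed from the foundations when an assumption is retracted. It omits most of what a real JTMS does (no OUT-lists, no contradiction handling), and the beliefs and names are invented.

```python
# A highly simplified, foundations-based labeling sketch in the spirit of a JTMS.
# Not an actual RMS implementation.

class JTMS:
    def __init__(self):
        self.assumptions = set()           # core beliefs fed by the problem solver
        self.justifications = {}           # derived belief -> list of antecedent sets

    def add_assumption(self, belief):
        self.assumptions.add(belief)

    def retract_assumption(self, belief):
        self.assumptions.discard(belief)   # labels are recomputed on demand

    def justify(self, belief, antecedents):
        self.justifications.setdefault(belief, []).append(frozenset(antecedents))

    def labels(self):
        """Recompute the set of IN beliefs from scratch (fixpoint over justifications)."""
        in_set = set(self.assumptions)
        changed = True
        while changed:
            changed = False
            for belief, justs in self.justifications.items():
                if belief not in in_set and any(j <= in_set for j in justs):
                    in_set.add(belief)
                    changed = True
        return in_set

tms = JTMS()
tms.add_assumption("battery_ok")
tms.add_assumption("key_turned")
tms.justify("engine_starts", ["battery_ok", "key_turned"])
print(tms.labels())                  # engine_starts is IN while both assumptions hold
tms.retract_assumption("battery_ok")
print(tms.labels())                  # engine_starts loses its support and is no longer IN
```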

Relevance:

90.00%

Publisher:

Abstract:

In a scientific research project it is important to define the underlying philosophical orientation of the project, because this will influence the choices made with respect to the scientific methods used, as well as the way they will be applied. It is crucial, therefore, that the philosophy and the research design strategy are consistent with each other. These questions become even more relevant in qualitative research. Historically, the interpretive research philosophy is more closely associated with the scientific areas of the social sciences and humanities, where the subjectivity inherent to human intervention is more explicitly acknowledged. The information systems field is primarily rooted in computer science, though it also integrates issues related to the fields of management and organizations. This shift from a purely technological orientation towards consideration of the problems of management and organizations has fostered the rise of research projects following the interpretive philosophy and using qualitative methods. This paper explores the importance of alignment between the epistemological orientation and the research design strategy in qualitative research projects. As a result, two PhD projects with different research design strategies, being developed in the technology and information systems field in the light of the interpretive paradigm, are presented.

Relevance:

90.00%

Publisher:

Abstract:

Game theory is a branch of applied mathematics used to analyze situations where two or more agents are interacting. Originally it was developed as a model for conflicts and collaborations between rational and intelligent individuals. Now it finds applications in social sciences, economics, biology (particularly evolutionary biology and ecology), engineering, political science, international relations, computer science, and philosophy. Networks are an abstract representation of interactions, dependencies or relationships. Networks are extensively used in all the fields mentioned above and in many more. Much useful information about a system can be discovered by analyzing the current state of a network representation of that system. In this work we apply some of the methods of game theory to populations of interconnected agents. A population is represented by a network of players, where one player can only interact with another if there is a connection between them. In the first part of this work we show that the structure of the underlying network has a strong influence on the strategies that the players decide to adopt to maximize their utility. We then introduce a supplementary degree of freedom by allowing the structure of the population to be modified during the simulations. This modification allows the players to alter the structure of their environment in order to optimize the utility they can obtain.
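
As a small illustration of games on networks (with assumed textbook payoffs and a standard imitation rule, not the simulations of this work), the sketch below plays the prisoner's dilemma on a small-world network and lets each player copy the strategy of its best-scoring neighbour.

```python
# A minimal sketch: prisoner's dilemma on a fixed network with imitation dynamics.
import random
import networkx as nx

random.seed(1)
T, R, P, S = 5, 3, 1, 0          # temptation, reward, punishment, sucker's payoff

def payoff(a, b):                # a, b in {"C", "D"}
    return {("C", "C"): R, ("C", "D"): S, ("D", "C"): T, ("D", "D"): P}[(a, b)]

G = nx.watts_strogatz_graph(n=50, k=4, p=0.1)
strategy = {v: random.choice(["C", "D"]) for v in G}

for _ in range(30):
    # Each player accumulates payoff from playing against all of its neighbours.
    score = {v: sum(payoff(strategy[v], strategy[u]) for u in G[v]) for v in G}
    # Imitation: copy the strategy of the highest-scoring player among self and neighbours.
    strategy = {v: strategy[max(list(G[v]) + [v], key=score.get)] for v in G}

print("fraction of cooperators:", sum(s == "C" for s in strategy.values()) / len(strategy))
```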

Relevance:

90.00%

Publisher:

Abstract:

Remote sensing image processing is nowadays a mature research area. The techniques developed in the field enable many real-life applications with great societal value. For instance, urban monitoring, fire detection or flood prediction can have a great impact on economic and environmental issues. To attain such objectives, remote sensing has turned into a multidisciplinary field of science that embraces physics, signal theory, computer science, electronics, and communications. From a machine learning and signal/image processing point of view, all the applications are tackled under specific formalisms, such as classification and clustering, regression and function approximation, image coding, restoration and enhancement, source unmixing, data fusion, or feature selection and extraction. This paper serves as a survey of methods and applications, and reviews the latest methodological advances in remote sensing image processing.
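
A minimal, hedged illustration of one of the formalisms listed above, supervised pixel-wise classification, on synthetic multispectral data (the bands, classes and classifier are invented for the example, not taken from the survey):

```python
# A minimal sketch: pixel-wise land-cover classification on a synthetic image.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic "image": 100x100 pixels, 6 spectral bands, 3 land-cover classes whose
# band means differ (a stand-in for real labelled training pixels).
h, w, bands, n_classes = 100, 100, 6, 3
labels = rng.integers(0, n_classes, size=(h, w))
class_means = rng.normal(0, 1, size=(n_classes, bands))
image = class_means[labels] + rng.normal(0, 0.5, size=(h, w, bands))

X = image.reshape(-1, bands)          # one row per pixel
y = labels.reshape(-1)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.5, random_state=0)

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)
print("pixel-wise accuracy:", clf.score(X_test, y_test))
```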

Relevance:

90.00%

Publisher:

Abstract:

Some applications of matrix theory to various topics belonging to the field of discrete mathematics are described.
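
One classical example of such an application (chosen here for illustration, not necessarily one treated in the article) is that powers of a graph's adjacency matrix count walks:

```python
# The (i, j) entry of A**k, with A the adjacency matrix of a graph, counts the
# walks of length k from vertex i to vertex j.
import numpy as np

# Adjacency matrix of the 4-cycle 0-1-2-3-0.
A = np.array([
    [0, 1, 0, 1],
    [1, 0, 1, 0],
    [0, 1, 0, 1],
    [1, 0, 1, 0],
])

A3 = np.linalg.matrix_power(A, 3)
print("walks of length 3 from vertex 0 to vertex 1:", A3[0, 1])   # 4
print("closed walks of length 3 (trace):", np.trace(A3))          # 0: the 4-cycle is bipartite
```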

Relevance:

90.00%

Publisher:

Abstract:

The proposed transdisciplinary field of complexics would bring together all contemporary efforts in any specific disciplines or by any researchers specifically devoted to constructing tools, procedures, models and concepts intended for transversal application that are aimed at understanding and explaining the most interwoven and dynamic phenomena of reality. Our aim needs to be, as Morin says, not to reduce complexity to simplicity, [but] to translate complexity into theory. New tools for the conception, apprehension and treatment of the data of experience will need to be devised to complement existing ones and to enable us to make headway toward practices that better fit complexic theories. New mathematical and computational contributions have already continued to grow in number, thanks primarily to scholars in statistical physics and computer science, who are now taking an interest in social and economic phenomena. Certainly, these methodological innovations put into question and again make us take note of the excessive separation between the training received by researchers in the sciences and in the arts. Closer collaboration between these two subsets would, in all likelihood, be much more energising and creative than their current mutual distance. Human complexics must be seen as multi-methodological, insofar as necessary combining quantitative-computation methodologies and more qualitative methodologies aimed at understanding the mental and emotional world of people. In the final analysis, however, models always have a narrative running behind them that reflects the attempts of a human being to understand the world, and models are always interpreted on that basis.

Relevance:

90.00%

Publisher:

Abstract:

The development of correct programs is a core problem in computer science. Although formal verification methods for establishing correctness with mathematical rigor are available, programmers often find these difficult to put into practice. One hurdle is deriving the loop invariants and proving that the code maintains them. So-called correct-by-construction methods aim to alleviate this issue by integrating verification into the programming workflow. Invariant-based programming is a practical correct-by-construction method in which the programmer first establishes the invariant structure and then incrementally extends the program in steps of adding code and proving, after each addition, that the code is consistent with the invariants. In this way, the program is kept internally consistent throughout its development, and the construction of the correctness arguments (proofs) becomes an integral part of the programming workflow. A characteristic of the approach is that programs are described as invariant diagrams, a graphical notation similar to the state charts familiar to programmers. Invariant-based programming is a new method that has not yet been evaluated in large-scale studies. The most important prerequisite for feasibility on a larger scale is a high degree of automation. The goal of the Socos project has been to build tools to assist the construction and verification of programs using the method. This thesis describes the implementation and evaluation of a prototype tool in the context of the Socos project. The tool supports the drawing of the diagrams, automatic derivation and discharging of verification conditions, and interactive proofs. It is used to develop programs that are correct by construction. The tool consists of a diagrammatic environment connected to a verification condition generator and an existing state-of-the-art theorem prover. Its core is a semantics for translating diagrams into verification conditions, which are sent to the underlying theorem prover. We describe a concrete method for 1) deriving sufficient conditions for total correctness of an invariant diagram; 2) sending the conditions to the theorem prover for simplification; and 3) reporting the results of the simplification to the programmer in a way that is consistent with the invariant-based programming workflow and that allows errors in the program specification to be efficiently detected. The tool uses an efficient automatic proof strategy to prove as many conditions as possible automatically and lets the remaining conditions be proved interactively. The tool is based on the verification system PVS and uses the SMT (Satisfiability Modulo Theories) solver Yices as a catch-all decision procedure. Conditions that were not discharged automatically may be proved interactively using the PVS proof assistant. The programming workflow is very similar to the process by which a mathematical theory is developed inside a computer-supported theorem prover environment such as PVS. The programmer reduces a large verification problem with the aid of the tool into a set of smaller problems (lemmas), and can substantially improve the degree of proof automation by developing specialized background theories and proof strategies to support the specification and verification of a specific class of programs. We demonstrate this workflow by describing in detail the construction of a verified sorting algorithm. Tool-supported verification often has little to no presence in computer science (CS) curricula. Furthermore, program verification is frequently introduced as an advanced and purely theoretical topic that is not connected to the workflow taught in the early and practically oriented programming courses. Our hypothesis is that verification could be introduced early in the CS education, and that verification tools could be used in the classroom to support the teaching of formal methods. A prototype of Socos has been used in a course at Åbo Akademi University targeted at first- and second-year undergraduate students. We evaluate the use of Socos in the course as part of a case study carried out in 2007.
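
To make the role of the invariants concrete, the sketch below annotates a simple selection sort with its loop invariant as runtime assertions. This is only an illustration of what the verification conditions assert; tools such as Socos discharge these conditions statically with a theorem prover rather than checking them at run time, and the example is not the verified sorting algorithm of the thesis.

```python
# A minimal sketch: a selection sort whose loop invariant is stated explicitly
# and (only for illustration) checked dynamically with assertions.

def is_sorted(xs):
    return all(xs[i] <= xs[i + 1] for i in range(len(xs) - 1))

def selection_sort(a):
    a = list(a)
    n = len(a)
    for i in range(n):
        # Invariant: a[0:i] is sorted, and every element of a[0:i] is <= every element of a[i:n].
        assert is_sorted(a[:i])
        assert all(x <= y for x in a[:i] for y in a[i:])
        m = min(range(i, n), key=a.__getitem__)   # index of the minimum of the unsorted suffix
        a[i], a[m] = a[m], a[i]
    # Postcondition: follows from the invariant instantiated at i == n.
    assert is_sorted(a)
    return a

print(selection_sort([5, 3, 8, 1, 2]))   # [1, 2, 3, 5, 8]
```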

Relevance:

90.00%

Publisher:

Abstract:

Systems biology is a new, emerging and rapidly developing, multidisciplinary research field that aims to study biochemical and biological systems from a holistic perspective, with the goal of providing a comprehensive, system-level understanding of cellular behaviour. In this way, it addresses one of the greatest challenges faced by contemporary biology, which is to comprehend the function of complex biological systems. Systems biology combines various methods that originate from scientific disciplines such as molecular biology, chemistry, engineering sciences, mathematics, computer science and systems theory. Systems biology, unlike traditional biology, focuses on high-level concepts such as: network, component, robustness, efficiency, control, regulation, hierarchical design, synchronization, concurrency, and many others. The very terminology of systems biology is foreign to traditional biology, marks its drastic shift in the research paradigm and indicates the close linkage of systems biology to computer science. One of the basic tools utilized in systems biology is the mathematical modelling of life processes tightly linked to experimental practice. The studies contained in this thesis revolve around a number of challenges commonly encountered in computational modelling in systems biology. The research comprises the development and application of a broad range of methods, originating in the fields of computer science and mathematics, for the construction and analysis of computational models in systems biology. In particular, the performed research is set up in the context of two biological phenomena chosen as modelling case studies: 1) the eukaryotic heat shock response and 2) the in vitro self-assembly of intermediate filaments, one of the main constituents of the cytoskeleton. The range of presented approaches spans from heuristic, through numerical and statistical, to analytical methods applied in the effort to formally describe and analyse the two biological processes. We note, however, that although applied to certain case studies, the presented methods are not limited to them and can be utilized in the analysis of other biological mechanisms as well as complex systems in general. The full range of developed and applied modelling techniques, as well as model analysis methodologies, constitutes a rich modelling framework. Moreover, the presentation of the developed methods, their application to the two case studies and the discussion of their potentials and limitations point to the difficulties and challenges one encounters in the computational modelling of biological systems. The problems of model identifiability, model comparison, model refinement, model integration and extension, choice of the proper modelling framework and level of abstraction, and the choice of the proper scope of the model run through this thesis.
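
As an illustration of the kind of mathematical modelling referred to above (an invented mass-action toy model, not the heat shock or filament models of the thesis), the following sketch simulates a reversible binding reaction A + B <-> C as a system of ODEs:

```python
# A minimal sketch of an ODE-based mass-action model, of the kind used throughout
# computational systems biology; reactions and rate constants are invented.
import numpy as np
from scipy.integrate import odeint

k_on, k_off = 1.0, 0.2               # assumed rate constants for A + B <-> C

def rhs(y, t):
    A, B, C = y
    v = k_on * A * B - k_off * C     # net forward rate (mass-action kinetics)
    return [-v, -v, v]

t = np.linspace(0, 10, 200)
y0 = [1.0, 0.8, 0.0]                 # initial concentrations of A, B, C
sol = odeint(rhs, y0, t)
print("steady-state concentrations (A, B, C):", np.round(sol[-1], 3))
```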

Relevance:

90.00%

Publisher:

Abstract:

Technological innovations, the development of the internet, and globalization have increased the number and complexity of web applications. As a result, keeping web user interfaces understandable and usable (in terms of ease-of-use, effectiveness, and satisfaction) is a challenge. As part of this, designing user-intuitive interface signs (i.e., the small elements of a web user interface, e.g., navigational links, command buttons, icons, small images, thumbnails, etc.) is an issue for designers. Interface signs are key elements of web user interfaces because they act as communication artefacts to convey web content and system functionality, and because users interact with systems by means of interface signs. In light of the above, applying concepts from semiotics (i.e., the study of signs) to web interface signs will contribute to discovering new and important perspectives on web user interface design and evaluation. The thesis mainly focuses on web interface signs and uses semiotics as a background theory. The underlying aim of this thesis is to provide valuable insights for designing and evaluating web user interfaces from a semiotic perspective in order to improve overall web usability. The fundamental research question is formulated as: What do practitioners and researchers need to be aware of, from a semiotic perspective, when designing or evaluating web user interfaces to improve web usability? From a methodological perspective, the thesis follows a design science research (DSR) approach. A systematic literature review and six empirical studies are carried out in this thesis. The empirical studies are carried out with a total of 74 participants in Finland. The steps of a design science research process were followed while the studies were designed and conducted; these include (a) problem identification and motivation, (b) definition of the objectives of a solution, (c) design and development, (d) demonstration, (e) evaluation, and (f) communication. The data is collected using observations in a usability testing lab, by analytical (expert) inspection, with questionnaires, and in structured and semi-structured interviews. User behaviour analysis, qualitative analysis and statistics are used to analyze the study data. The results are summarized as follows and have led to the following contributions. Firstly, the results present the current status of semiotic research in UI design and evaluation and highlight the importance of considering semiotic concepts in UI design and evaluation. Secondly, the thesis explores interface sign ontologies (i.e., the sets of concepts and skills that a user should know to interpret the meaning of interface signs) by providing a set of ontologies used to interpret the meaning of interface signs, and by providing a set of features related to ontology mapping in interpreting the meaning of interface signs. Thirdly, the thesis explores the value of integrating semiotic concepts in usability testing. Fourthly, the thesis proposes a semiotic framework (Semiotic Interface sign Design and Evaluation, SIDE) for interface sign design and evaluation in order to make signs intuitive for end users and to improve web usability. The SIDE framework includes a set of determinants and attributes of user-intuitive interface signs, and a set of semiotic heuristics to design and evaluate interface signs. Finally, the thesis assesses (a) the quality of the SIDE framework in terms of performance metrics (e.g., thoroughness, validity, effectiveness, reliability, etc.) and (b) the contributions of the SIDE framework from the evaluators' perspective.

Relevance:

90.00%

Publisher:

Abstract:

The design of a large and reliable DNA codeword library is a key problem in DNA-based computing. DNA codes, namely sets of fixed-length edit-metric codewords over the alphabet {A, C, G, T}, satisfy certain combinatorial constraints arising from biological and chemical restrictions on DNA strands. The primary constraints that we consider are the reverse-complement constraint and the fixed GC-content constraint, as well as the basic edit distance constraint between codewords. We focus on exploring the theory underlying DNA codes and discuss several approaches to searching for optimal DNA codes. We use Conway's lexicode algorithm and an exhaustive search algorithm to produce provably optimal DNA codes for small parameter values. A genetic algorithm is proposed to search for sub-optimal DNA codes with relatively large parameter values, whose sizes can be considered reasonable lower bounds on the sizes of optimal DNA codes. Furthermore, we provide tables of bounds on the sizes of DNA codes with lengths from 1 to 9 and minimum distances from 1 to 9.
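
The sketch below illustrates the constraints involved using a plain greedy lexicode construction for small parameters; it is a hedged simplification of the general idea, not the paper's exact algorithms or bounds.

```python
# A minimal sketch: greedily building a small DNA code over {A, C, G, T} subject to
# an edit distance constraint, a fixed GC-content, and a reverse-complement distance check.
from itertools import product

COMP = {"A": "T", "T": "A", "C": "G", "G": "C"}

def edit_distance(s, t):
    """Standard Levenshtein distance with a single rolling row."""
    d = list(range(len(t) + 1))
    for i, cs in enumerate(s, 1):
        prev, d[0] = d[0], i
        for j, ct in enumerate(t, 1):
            prev, d[j] = d[j], min(d[j] + 1, d[j - 1] + 1, prev + (cs != ct))
    return d[-1]

def reverse_complement(s):
    return "".join(COMP[c] for c in reversed(s))

def lexicode(n=4, d=2, gc=2):
    """Scan words in lexicographic order; keep those meeting the GC-content and
    the distance constraints against everything kept so far."""
    code = []
    for word in ("".join(w) for w in product("ACGT", repeat=n)):
        if word.count("G") + word.count("C") != gc:
            continue
        if all(edit_distance(word, c) >= d and
               edit_distance(reverse_complement(word), c) >= d for c in code):
            code.append(word)
    return code

code = lexicode(n=4, d=2, gc=2)
print(len(code), "codewords, e.g.:", code[:5])
```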