871 results for Interactive interfaces
Abstract:
As hardware and software technologies advance, the development models of computational systems are changing as well. New methodologies for user interface specification are being created around user interface description languages (UIDLs). A UIDL provides a precise description at a higher level of abstraction, independent of how the interface will be implemented. A major problem is that, even with these methodologies, a large gap remains between a UIDL specification and its design, that is, between the abstract and the concrete. The tool BRIDGE (Interface Design Generator Environment) was created to serve as a bridge between a specification language, the Interactive Message Modeling Language (IMML), and its implementation in Java, linking the abstract (specification) to the concrete (implementation). IMML is a model-based language that allows the designer to work at distinct abstraction levels, each model constituting one such level. IMML is an XML language built on the concepts of Semiotic Engineering, which treats the computational system, with its user interface and interface elements, as a metacommunicative artifact: these elements must convey a message to the user about which task is to be performed and how to reach that goal. With BRIDGE, we intend to provide extensive support for the design task, user interface prototyping being the most important of them. BRIDGE makes design easier and more intuitive, starting from an interface specification language.
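Purely as an illustration of the kind of specification-to-implementation step BRIDGE automates, the Python sketch below turns a small, invented IMML-like XML fragment into a Java Swing stub; the element and attribute names are hypothetical, not the actual IMML vocabulary.

```python
# Sketch of a UIDL-to-Java generation step, in the spirit of BRIDGE.
# The <interface>/<command> vocabulary is invented for illustration.
import xml.etree.ElementTree as ET

SPEC = """<interface name="LoginScreen">
  <command label="Sign in" action="doLogin"/>
  <command label="Cancel" action="doCancel"/>
</interface>"""

def generate_java(spec_xml):
    root = ET.fromstring(spec_xml)
    lines = ["public class %s extends javax.swing.JFrame {" % root.get("name"),
             "    public %s() {" % root.get("name")]
    for cmd in root.findall("command"):
        # one button per abstract command; 'action' would become a listener
        lines.append('        add(new javax.swing.JButton("%s"));  // -> %s'
                     % (cmd.get("label"), cmd.get("action")))
    lines += ["    }", "}"]
    return "\n".join(lines)

print(generate_java(SPEC))
```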
Abstract:
This work presents a User Interface (UI) prototype generation process for software that has a Web browser as its platform. The process uses UI components that are more complex than HTML elements. To describe these more complex components, this work proposes the XICL (eXtensible User Interface Components Language). XICL is a language, based on XML syntax, for describing UI components and UIs. XICL promotes extensibility and reusability in the user interface development process. We have developed two compilers: the first compiles IMML (Interactive Message Modeling Language) code and generates XICL code; the second compiles XICL code and generates DHTML code.
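As a rough sketch of what the second compilation stage might look like, the following Python fragment expands an invented XICL-like component description into plain DHTML; the component fields are illustrative only, not the real XICL syntax.

```python
# Sketch of the XICL-to-DHTML stage: a reusable component, described as
# data, is expanded into plain HTML/JavaScript. Fields are hypothetical.
def compile_component(comp):
    """Expand one XICL-like component description into DHTML."""
    items = "".join('<li onclick="select(%d)">%s</li>' % (i, label)
                    for i, label in enumerate(comp["items"]))
    return '<div id="%s" class="xicl-menu"><ul>%s</ul></div>' % (comp["id"], items)

menu = {"id": "mainMenu", "items": ["Home", "Products", "Help"]}
print(compile_component(menu))
```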
Abstract:
This paper describes an interactive environment built entirely upon public domain or free software, intended to be used as the preprocessor of a finite element package for the simulation of three-dimensional electromagnetic problems.
Abstract:
Graduate Program in Arts - IA
Abstract:
Ubiquitous computing promises seamless access to a wide range of applications and Internet-based services from anywhere, at any time, using any device. In this scenario, new challenges for the practice of software development arise: applications and services must keep a coherent behavior and a proper appearance, and must adapt to a wide range of contextual usage requirements and hardware aspects. In particular, due to its interactive nature, the interface content of Web applications must adapt to a large diversity of devices and contexts. To overcome these obstacles, this work introduces an innovative methodology for content adaptation of Web 2.0 interfaces. The basis of our work is to combine static adaptation, the implementation of static Web interfaces, with dynamic adaptation, the alteration of static interfaces at execution time so as to adapt them to different contexts of use. In this hybrid fashion, our methodology benefits from the advantages of both adaptation strategies. Along this line, we designed and implemented UbiCon, a framework on which we tested our concepts through a case study and a development experiment. Our results show that the hybrid methodology over UbiCon leads to broader and more accessible interfaces and to faster and less costly software development. We believe that the UbiCon hybrid methodology can foster more efficient and accurate interface engineering in industry and academia.
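A minimal sketch of the hybrid idea, assuming invented variant and context fields rather than the actual UbiCon API: a statically authored variant is chosen first, then patched at run time with information only available during execution.

```python
# Sketch of hybrid adaptation: static variants authored ahead of time,
# plus a dynamic pass over the chosen variant. Names are illustrative.

STATIC_VARIANTS = {            # static adaptation: authored in advance
    "desktop": {"columns": 3, "font_px": 14, "images": "full"},
    "mobile":  {"columns": 1, "font_px": 16, "images": "thumbnails"},
}

def adapt(context):
    """Dynamic adaptation: start from a static variant, adjust at run time."""
    ui = dict(STATIC_VARIANTS["mobile" if context["width"] < 600 else "desktop"])
    if context.get("low_bandwidth"):       # known only during execution
        ui["images"] = "none"
    if context.get("font_scale"):
        ui["font_px"] = round(ui["font_px"] * context["font_scale"])
    return ui

print(adapt({"width": 480, "low_bandwidth": True, "font_scale": 1.25}))
```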
Abstract:
Matita (Italian for "pencil") is a new interactive theorem prover under development at the University of Bologna. Compared with state-of-the-art proof assistants, Matita presents both traditional and innovative aspects. The underlying calculus of the system, namely the Calculus of (Co)Inductive Constructions (CIC for short), is well known and is used as the basis of another mainstream proof assistant, Coq, with which Matita is to some extent compatible. In the same spirit as several other systems, proof authoring is conducted by the user as a goal-directed proof search, using a script to store textual commands for the system. In the tradition of LCF, the proof language of Matita is procedural and relies on tactics and tacticals to proceed toward proof completion. The interaction paradigm offered to the user is based on the script management technique at the basis of the popularity of the Proof General generic interface for interactive theorem provers: while editing a script, the user can move the execution point forward to deliver commands to the system, or backward to retract (or "undo") past commands. Matita has been developed from scratch over the past 8 years by several members of the Helm research group, of which this thesis' author is one. Matita is now a full-fledged proof assistant with a library of about 1,000 concepts. Several innovative solutions have spun off from this development effort. This thesis is about the design and implementation of some of those solutions, in particular those relevant to user interaction with theorem provers and to which this thesis' author was a major contributor. Joint work with other members of the research group is pointed out where needed. The main topics discussed in this thesis are briefly summarized below.

Disambiguation. Most activities connected with interactive proving require the user to input mathematical formulae. Since mathematical notation is ambiguous, parsing formulae typeset the way mathematicians like to write them on paper is a challenging task, one neglected by several theorem provers, which usually prefer to fix an unambiguous input syntax. Exploiting features of the underlying calculus, Matita offers an efficient disambiguation engine which permits typing formulae in the familiar mathematical notation.

Step-by-step tacticals. Tacticals are higher-order constructs used in proof scripts to combine tactics. With tacticals, scripts can be made shorter, more readable, and more resilient to changes. Unfortunately, they are de facto incompatible with state-of-the-art user interfaces based on script management: such interfaces do not permit positioning the execution point inside complex tacticals, thus introducing a trade-off between the usefulness of structured scripts and a tedious big-step execution behavior during script replay. In Matita we break this trade-off with tinycals: an alternative to a subset of LCF tacticals which can be evaluated in a more fine-grained manner.

Extensible yet meaningful notation. Proof assistant users often need to create new mathematical notation to ease the use of new concepts. The framework used in Matita for dealing with extensible notation both accounts for high-quality two-dimensional rendering of formulae (with the expressivity of MathML Presentation) and provides meaningful notation, where presentational fragments are kept synchronized with the semantic representation of terms. Using our approach, interoperability with other systems can be achieved at the content level, and direct manipulation of formulae acting on their rendered forms is possible too.

Publish/subscribe hints. Automation plays an important role in interactive proving, as users like to delegate tedious proving sub-tasks to decision procedures or external reasoners. Exploiting the Web-friendliness of Matita, we experimented with a broker and a network of web services (called tutors) which independently try to complete open sub-goals of the proof currently being authored in Matita. The user receives hints from the tutors on how to complete sub-goals and can apply them, interactively or automatically, to the current proof.

Another innovative aspect of Matita, only marginally touched by this thesis, is the embedded content-based search engine Whelp, which is exploited to various ends, from automatic theorem proving to sparing the user duplicate work. We also discuss the (potential) reusability in other systems of the widgets presented in this thesis and how we envisage the evolution of user interfaces for interactive theorem provers in the Web 2.0 era.
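As a rough illustration of the publish/subscribe hint architecture described above, the sketch below models the broker and two toy tutors as plain Python callables; the real tutors are web services, and the goal strings and hint texts here are invented.

```python
# Sketch of publish/subscribe hints: a broker publishes an open sub-goal
# to subscribed "tutors", each of which may independently return a hint.

def ring_tutor(goal):          # toy tutor: recognizes one goal shape
    if goal == "x + 0 = x":
        return "apply plus_n_O"

def auto_tutor(goal):          # toy tutor: generic fallback attempt
    return "try auto on '%s'" % goal

class HintBroker:
    def __init__(self):
        self.tutors = []
    def subscribe(self, tutor):
        self.tutors.append(tutor)
    def publish(self, goal):
        """Offer the open sub-goal to every tutor; collect their hints."""
        return [h for t in self.tutors if (h := t(goal)) is not None]

broker = HintBroker()
broker.subscribe(ring_tutor)
broker.subscribe(auto_tutor)
print(broker.publish("x + 0 = x"))   # hints the user may apply or ignore
```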
Abstract:
Television and movie images have been altered ever since it became technically possible. Nowadays, embedding advertisements or incorporating text and graphics in TV scenes is common practice, but these additions cannot be considered an integrated part of the scene. This paper discusses the introduction of new services for interactive augmented television. We analyse the main aspects related to the whole chain of augmented-reality production. Interactivity is one of the most important added values of digital television, and this paper aims to break the model in which all TV viewers receive the same final image. Thus, we introduce and discuss the new concept of interactive augmented television, i.e. real-time composition of video and computer graphics (e.g. a real scene and freely selectable images or spatially rendered objects) edited and customized by the end user within the context of the user's set-top box and TV receiver.
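A minimal sketch of the per-viewer composition step, assuming toy RGB pixel data: each set-top box can alpha-blend a viewer-selected graphic over the shared broadcast frame, so different viewers see different final images.

```python
# Sketch of real-time composition at the set-top box: blending a selected
# overlay graphic onto a broadcast frame. Frames are tiny RGB grids here.

def blend(frame_px, overlay_px, alpha):
    """Alpha-blend one overlay pixel over one frame pixel."""
    return tuple(round(alpha * o + (1 - alpha) * f)
                 for f, o in zip(frame_px, overlay_px))

frame   = [[(10, 10, 10), (20, 20, 20)]]   # broadcast video (same for all)
overlay = [[(255, 0, 0), (255, 0, 0)]]     # viewer-selected graphic
composited = [[blend(f, o, 0.6) for f, o in zip(fr, ov)]
              for fr, ov in zip(frame, overlay)]
print(composited)   # each viewer's box can produce a different result
```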
Abstract:
The central question of this paper is how to improve the production process by closing the gap between industrial designers and software engineers of television (TV)-based User Interfaces (UIs) in an industrial environment. Software engineers are highly interested in whether one UI design can be converted into several fully functional UIs for TV products with different screen properties. The aim of the software engineers is to apply automatic layout and scaling in order to speed up and improve the production process. However, the question is whether a UI design lends itself to such automatic layout and scaling. This is investigated by analysing a prototype UI design done by industrial designers. In a first requirements study, industrial designers created meta-annotations on top of their UI design in order to disclose their design rationale for discussions with software engineers. In a second study, five (out of ten) industrial designers assessed the potential of four different meta-annotation approaches. The question was which annotation method industrial designers would prefer and whether it could satisfy the technical requirements of the software engineering process. One main result is that the industrial designers preferred the method they were already familiar with, which therefore seems to be the most effective one, although the main objective of automatic layout and scaling could still not be achieved.
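As a simple illustration of the automatic scaling the software engineers have in mind, the sketch below maps element rectangles authored for a reference resolution onto a different target screen; the rectangle format and element names are invented.

```python
# Sketch of automatic UI scaling: rectangles authored for a reference
# screen are mapped to a target resolution. Uniform scaling like this is
# exactly what a design may or may not lend itself to, per the paper.

REF = (1920, 1080)                      # resolution the design was made for

def scale_rect(rect, target):
    """Scale an (x, y, w, h) rectangle from REF to the target resolution."""
    sx, sy = target[0] / REF[0], target[1] / REF[1]
    x, y, w, h = rect
    return (round(x * sx), round(y * sy), round(w * sx), round(h * sy))

design = {"menu": (100, 900, 1720, 120), "logo": (40, 40, 200, 100)}
for name, rect in design.items():
    print(name, "->", scale_rect(rect, (1280, 720)))
```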
Abstract:
Our research project develops an intranet search engine with concept-browsing functionality, where the user is able to navigate the conceptual level in an interactive, automatically generated knowledge map. This knowledge map visualizes tacit, implicit knowledge, extracted from the intranet, as a network of semantic concepts. Inductive and deductive methods are combined; a text analytics engine extracts knowledge structures from data inductively, and the enterprise ontology provides a backbone structure to the process deductively. In addition to performing conventional keyword search, the user can browse the semantic network of concepts and associations to find documents and data records. Also, the user can expand and edit the knowledge network directly. As a vision, we propose a knowledge-management system that provides concept browsing, based on a knowledge warehouse layer on top of a heterogeneous knowledge base with various systems interfaces. Such a concept browser will empower knowledge workers to interact with knowledge structures.
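A minimal sketch of the knowledge-map structure behind concept browsing, with invented sample data: concepts are nodes, associations are edges, and documents hang off the concepts, so keyword search and associative navigation complement each other.

```python
# Sketch of a knowledge map for concept browsing: a graph of concepts
# with associations between them and documents attached to each node.

concepts = {
    "invoicing": {"docs": ["billing-policy.pdf"], "links": ["ERP", "VAT"]},
    "ERP":       {"docs": ["erp-manual.pdf"],     "links": ["invoicing"]},
    "VAT":       {"docs": ["tax-rules.docx"],     "links": ["invoicing"]},
}

def browse(concept):
    """One concept-browsing step: show documents and neighboring concepts."""
    node = concepts[concept]
    print("%s: documents %s, related %s" % (concept, node["docs"], node["links"]))

browse("invoicing")   # keyword search could land here...
browse("VAT")         # ...and the user navigates onward by association
```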
Abstract:
The reporting of outputs from health surveillance systems should be done in a near real-time and interactive manner in order to provide decision makers with powerful means to identify, assess, and manage health hazards as early and efficiently as possible. While this is currently rarely the case in veterinary public health surveillance, reporting tools do exist for the visual exploration and interactive interrogation of health data. In this work, we used tools freely available from the Google Maps and Charts libraries to develop a web application reporting health-related data derived from slaughterhouse surveillance and from a newly established web-based equine surveillance system in Switzerland. Both sets of tools allowed entry-level usage with no or minimal programming skills, while being flexible enough to cater for more complex scenarios for users with greater programming skills. In particular, interfaces linking statistical software and Google tools provide additional analytical functionality (such as algorithms for the detection of unusually high case occurrences) for inclusion in the reporting process. We show that such powerful approaches could improve the timely dissemination and communication of technical information to decision makers and other stakeholders and could foster the early-warning capacity of animal health surveillance systems.
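As a deliberately simplified example of an algorithm for detecting unusually high case occurrences (not the specific method used in the paper), the sketch below flags a count that exceeds the historical mean by more than two standard deviations.

```python
# Sketch of a simple aberration check of the kind fed into such reports:
# flag a period whose case count exceeds mean + k * sd of the baseline.
# A deliberate simplification, not the paper's actual algorithm.
from statistics import mean, stdev

def is_aberration(history, current, k=2.0):
    """Return True if `current` exceeds mean(history) + k * sd(history)."""
    return current > mean(history) + k * stdev(history)

weekly_cases = [3, 5, 4, 6, 2, 5, 4, 3]      # invented historical baseline
print(is_aberration(weekly_cases, 14))       # True  -> raise an alert
print(is_aberration(weekly_cases, 6))        # False -> within expectation
```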
Abstract:
Facilitating general access to data from sensor networks (including traffic, hydrology and other domains) increases their utility. In this paper we argue that the journalistic metaphor can be effectively used to automatically generate multimedia presentations that help non-expert users analyze and understand sensor data. The journalistic layout and style are familiar to most users. Furthermore, the journalistic approach of ordering information from most general to most specific helps users obtain a high-level understanding while providing them the freedom to choose the depth of analysis to which they want to go. We describe the general characteristics and architectural requirements for an interactive intelligent user interface for exploring sensor data that uses the journalistic metaphor. We also describe our experience in developing this interface in real-world domains (e.g., hydrology).
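A minimal sketch of the inverted-pyramid ordering the journalistic metaphor implies, with invented hydrology items: presentation blocks carry a specificity level and are rendered from most general to most specific, letting the user choose how deep to go.

```python
# Sketch of journalistic ordering: items are sorted from most general
# (level 0) to most specific, like the lead and body of a news story.

items = [
    {"level": 2, "text": "Gauge 7 rose 40 cm between 02:00 and 04:00."},
    {"level": 0, "text": "River levels are above seasonal norms basin-wide."},
    {"level": 1, "text": "Three of twelve gauges exceed their alert mark."},
]

for item in sorted(items, key=lambda i: i["level"]):
    print("[%d] %s" % (item["level"], item["text"]))  # general first
```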
Abstract:
Ubiquitous computing (one person, many computers) is the third era in the history of computing. It follows the mainframe era (many people, one computer) and the PC era (one person, one computer). Ubiquitous computing empowers people to communicate with services by interacting with their surroundings. Most of these so-called smart environments contain sensors that sense users' actions and try to predict the users' intentions and needs from the sensor data. The main drawback of this approach is that the system might perform unexpected or unwanted actions, making the user feel out of control. In this master's thesis we propose a different procedure based on Interactive Spaces: instead of predicting users' intentions from sensor data, the system reacts to users' explicit predefined actions. To that end, we present REACHeS, a server platform which enables communication among services, resources, and users located in the same environment. With REACHeS, a user controls services and resources by interacting with everyday objects, using a mobile phone as a mediator between himself/herself, the system, and the environment. REACHeS' user interfaces are built upon NFC (Near Field Communication) technology: NFC tags are attached to objects in the environment, and a tag stores commands that are sent to services when a user touches the tag with his/her NFC-enabled device. The prototypes and usability tests presented in this thesis show the great potential of NFC for building such user interfaces.
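As a rough sketch of the touch-a-tag interaction loop, with invented service names and payload format rather than the actual REACHeS API: the tag stores a command, and touching it makes the phone forward that command to the platform, which routes it to the addressed service.

```python
# Sketch of NFC-triggered control: a tag's stored command is dispatched
# to the right service. Services and payload format are illustrative.

SERVICES = {
    "display": lambda arg: "display: now showing '%s'" % arg,
    "lights":  lambda arg: "lights: switched %s" % arg,
}

def on_tag_touched(tag_payload):
    """Payload format (invented): '<service>:<argument>'."""
    service, arg = tag_payload.split(":", 1)
    return SERVICES[service](arg)

print(on_tag_touched("display:bus timetable"))  # tag on a bus stop poster
print(on_tag_touched("lights:off"))             # tag next to the door
```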
Abstract:
This work is an exploratory study of the processing of entertainment messages. Its objective was to propose and test a message-processing model dedicated to the understanding of digital games. To accomplish this task, an extensive survey was conducted of techniques for observing users in front of software and media, in order to learn the strengths and limitations of each technique as well as its approach to the problem. A survey of message-processing models in traditional and new media was also carried out. This made it possible to propose a new model for analyzing the processing of entertainment messages. Once the theoretical model had been created, it was necessary to test whether the elements proposed as participants in this process were correct and whether they could adequately capture the similarities and differences in the interaction between players and the different media. For this reason, a data collection instrument was built and validated with digital game designers, since these professionals know the process of creating a game, its elements, and its goals. Subsequently, a first test was carried out with digital game players of various ages on personal computers and interactive digital TV, in order to verify how the elements of the model related to one another. The following test collected data from digital game players on mobile phones, aiming to capture how an experience is formed through the processing of the entertainment message in a medium with numerous limitations: screen and key size, to name a few. As a result, statistical tests showed that games played on media such as personal computers attract players more through their aesthetic aspects, while the enjoyment of a game on mobile phones depends much more on its ability to sustain interaction than a game played on a PC does. It is concluded that the processing of entertainment messages depends on the ability of their creators to understand the limits of each medium and to use the elements that make up a game's environment appropriately, so as to lead to its enjoyment. (AU)
Abstract:
The goal of the project is to analyze, experiment with, and develop intelligent, interactive, and multilingual Text Mining technologies as a key element of the next generation of search engines: systems with the capacity to find "the need behind the query". This new generation will provide specialized services and interfaces according to the search domain and the type of information needed. Moreover, it will integrate textual search (websites) and multimedia search (images, audio, video), and it will be able to find and organize information rather than generate ranked lists of websites.