850 results for User interface style
Abstract:
Graduate Program in Information Science - FFC
Abstract:
In this work, we study a new reflection tomography method for determining a smooth, isotropic velocity model by applying, to synthetic and real data, the Niptomo program, an implementation of the tomographic inversion of the kinematic attributes of the hypothetical normal-incidence-point (NIP) wave. The input data for the tomographic inversion, namely the traveltimes and the NIP-wave attributes (radius of curvature of the emerging wavefront and emergence angle), are taken from a set of points picked on the simulated zero-offset (ZO) section obtained by the common-reflection-surface (CRS) stacking method. These points are usually picked on the ZO section with automatic picking programs, which identify locally coherent events in the seismic section based on user-supplied parameters. Picking is one of the most critical steps in tomographic inversion methods, since data from events that are not primary reflections may be included in the process, degrading the velocity model obtained by the inversion. The objective of this work is to build an interactive picking program that gives the user control over the selection of the primary-reflection points whose data will be used in the tomographic inversion. The picking and tomographic inversion processes are applied to the synthetic Marmousi data and to the data of seismic line 50-RL-90 from the Tacutu Basin. The results show that interactive picking of points on primary-reflection events helps to obtain a more accurate velocity model.
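The locally coherent events that automatic pickers look for are commonly detected with a semblance measure. As a minimal illustration of that idea (not the Niptomo or CRS implementation; the window sizes and the threshold are assumptions), a Python sketch:

```python
import numpy as np

def semblance(window: np.ndarray) -> float:
    """Semblance of a (traces x samples) window: energy of the stacked
    trace divided by the summed energy of the individual traces."""
    stacked = window.sum(axis=0)
    num = np.sum(stacked ** 2)
    den = window.shape[0] * np.sum(window ** 2)
    return num / den if den > 0 else 0.0

def auto_pick(section: np.ndarray, half_traces=2, half_samples=5, threshold=0.8):
    """Return (trace, sample) picks where the small window around the
    point is locally coherent (semblance above a user-supplied threshold)."""
    picks = []
    n_traces, n_samples = section.shape
    for i in range(half_traces, n_traces - half_traces):
        for j in range(half_samples, n_samples - half_samples):
            w = section[i - half_traces:i + half_traces + 1,
                        j - half_samples:j + half_samples + 1]
            if semblance(w) >= threshold:
                picks.append((i, j))
    return picks

# Toy example: a flat coherent event embedded in noise is picked,
# while incoherent noise falls below the threshold.
rng = np.random.default_rng(0)
section = 0.1 * rng.standard_normal((20, 200))
section[:, 100] += 1.0  # coherent "reflection" at sample 100
print(auto_pick(section)[:5])
```

An interactive picker such as the one described above would put a human in the loop at exactly this acceptance step, instead of trusting the threshold alone.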
Abstract:
AEDROMO (Experimental and Didactic Environment with Mobile Robots) is a versatile, user-friendly, and scalable environment that supports a wide range of experiments. It contains a desk-like area where objects, including robots, can interact with one another and thus perform numerous activities. In its current state, AEDROMO has client computers that interact with the system through an interface, which realizes the communication between the user and AEDROMO. This project provides support for a new form of interface for AEDROMO that can be used on devices running Android; the app developed in this project will serve as a basis for future work on this new interface.
Abstract:
This work presents the development of a graphical interface for a lock-in amplifier used in signal processing and in physiological studies of the motility of the gastrointestinal tract in rats. With simple, low-cost instrumentation, the resources offered by the virtual interface of the LabVIEW software allow the creation of commands similar to those of the actual instrument which, communicating via a standard serial port, transmit data between a PC and the peripheral device, addressing the specific needs of the amplifier. Created for the Stanford Research Systems model SR830 lock-in amplifier, the remote manipulation gives the user greater accessibility in the configuration and calibration process. And, once the software is installed, there is the added advantage of eliminating the need to purchase new devices to upgrade the system. The commands created perform six basic modifications used in the routine of the Biomagnetism Laboratory. The instrumentation developed has the following controls: amplitude, frequency, time constant, low-pass filter slope, sensitivity, and offset.
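For a sense of what such remote control amounts to on the wire (sketched here in Python with pyserial rather than LabVIEW; the port name and serial settings are assumptions, and the SR830 mnemonics and index tables should be checked against the instrument manual):

```python
import serial  # pyserial

# Port name and serial settings are assumptions for illustration;
# the SR830's RS-232 parameters are configurable on the instrument.
sr830 = serial.Serial("/dev/ttyUSB0", baudrate=9600, timeout=1)

def send(cmd: str) -> None:
    """Send one command; <CR> termination per the manual's RS-232 notes."""
    sr830.write((cmd + "\r").encode("ascii"))

# The six routine settings listed above, using the SR830's documented
# mnemonics (numeric settings are indices into the manual's tables):
send("FREQ 1000")    # reference frequency: 1 kHz
send("SLVL 0.5")     # sine output amplitude: 0.5 V
send("OFLT 8")       # time constant (table index)
send("OFSL 2")       # low-pass filter slope (index: 18 dB/oct)
send("SENS 20")      # sensitivity (table index)
send("OEXP 1,10,0")  # channel X: offset 10 %, no expand
```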
Abstract:
In this project the pattern recognition problem is approached as a machine learning task with the Support Vector Machines (SVM) technique, a binary classification method that separates the data as well as possible with a hyperplane, after an extension of the dimension of the input space. The system aims to classify two classes of pixels chosen by the user in the interface, during an interest-selection phase and a background-selection phase, generating all the data to be used by the LibSVM library, which implements SVM, and illustrating the library's operation in an informal way. The data provided by the interface are organized in three forms: RGB (the red, green, and blue color system), texture (computed from the image), or RGB + texture. In the end the project showed successful results, with each image pixel classified as belonging to one of the two classes: the interest-selection area or the background-selection area. The RGB arrangement gives the user the simplest view of the classification results, because it is the most concrete form of data acquisition.
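A minimal sketch of this pixel-classification scheme, using scikit-learn's SVC, which wraps LibSVM (the sample pixel values and parameters are assumptions, not the project's actual data):

```python
import numpy as np
from sklearn.svm import SVC  # scikit-learn's SVC wraps LibSVM

# Hypothetical user selections: RGB pixels from the "interest" region
# and from the "background" region (values in 0-255).
interest = np.array([[200, 40, 40], [210, 50, 35], [190, 45, 55]])
background = np.array([[30, 30, 200], [25, 40, 210], [40, 35, 190]])

X = np.vstack([interest, background]).astype(float) / 255.0
y = np.array([1, 1, 1, 0, 0, 0])  # 1 = interest, 0 = background

clf = SVC(kernel="rbf").fit(X, y)

# Classify every pixel of a (H x W x 3) image in one call.
image = np.random.default_rng(1).integers(0, 256, size=(4, 4, 3))
pixels = image.reshape(-1, 3).astype(float) / 255.0
labels = clf.predict(pixels).reshape(4, 4)
print(labels)
```

The texture variant would simply append computed texture features to each three-component RGB row before training.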
Abstract:
The paper presents an ergonomic analysis of the reading usability of electronic journals and a comparison with printed newspapers. As a method, an evaluation of user perception was adopted, based on a printed questionnaire applied to a group of 41 people. Overall, the results indicate that the analyzed newspapers need greater care with aspects of visual representation, involving more attention to design, usability, ergonomics, technology, and communication.
Abstract:
Not long ago, most software was written by professional programmers, who could be presumed to have an interest in software engineering methodologies and in tools and techniques for improving software dependability. Today, however, a great deal of software is written not by professionals but by end-users, who create applications such as multimedia simulations, dynamic web pages, and spreadsheets. Applications such as these are often used to guide important decisions or aid in important tasks, and it is important that they be sufficiently dependable, but evidence shows that they frequently are not. For example, studies have shown that a large percentage of the spreadsheets created by end-users contain faults. Despite such evidence, until recently, relatively little research had been done to help end-users create more dependable software. We have been working to address this problem by finding ways to provide at least some of the benefits of formal software engineering techniques to end-user programmers. In this talk, focusing on the spreadsheet application paradigm, I present several of our approaches, focusing on methodologies that utilize source-code-analysis techniques to help end-users build more dependable spreadsheets. Behind the scenes, our methodologies use static analyses such as dataflow analysis and slicing, together with dynamic analyses such as execution monitoring, to support user tasks such as validation and fault localization. I show how, to accommodate the user base of spreadsheet languages, an interface to these methodologies can be provided in a manner that does not require an understanding of the theory behind the analyses, yet supports the interactive, incremental process by which spreadsheets are created. Finally, I present empirical results gathered in the use of our methodologies that highlight several costs and benefits trade-offs, and many opportunities for future work.
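As a toy illustration of the kind of source-code analysis involved, a backward slice over a spreadsheet's cell-dependency graph collects every cell that can influence a given cell; the sketch below (cell names and formula syntax are invented, and this is not the authors' tool) shows the idea:

```python
import re

# A toy spreadsheet: each cell maps to a formula string.
sheet = {
    "A1": "5",
    "A2": "7",
    "B1": "=A1+A2",
    "B2": "=B1*2",
    "C1": "=A2-1",
}

def refs(formula: str) -> set[str]:
    """Cell references mentioned in a formula."""
    return set(re.findall(r"[A-Z]+[0-9]+", formula))

def backward_slice(cell: str) -> set[str]:
    """All cells that can affect `cell`: the basis of fault
    localization, since a wrong value must originate in its slice."""
    seen, stack = set(), [cell]
    while stack:
        c = stack.pop()
        if c in seen:
            continue
        seen.add(c)
        stack.extend(refs(sheet.get(c, "")))
    return seen

print(backward_slice("B2"))  # {'B2', 'B1', 'A1', 'A2'}; C1 is excluded
```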
Abstract:
Ubiquitous computing promises seamless access to a wide range of applications and Internet-based services from anywhere, at any time, using any device. In this scenario, new challenges for the practice of software development arise: applications and services must keep a coherent behavior and a proper appearance, and must adapt to a plethora of contextual usage requirements and hardware aspects. In particular, due to its interactive nature, the interface content of Web applications must adapt to a large diversity of devices and contexts. To overcome such obstacles, this work introduces an innovative methodology for content adaptation of Web 2.0 interfaces. The basis of our work is to combine static adaptation, the implementation of static Web interfaces, with dynamic adaptation, the alteration of static interfaces at execution time so as to adapt them to different contexts of use. Being hybrid, our methodology benefits from the advantages of both adaptation strategies. Along these lines, we designed and implemented UbiCon, a framework on which we tested our concepts through a case study and a development experiment. Our results show that the hybrid methodology over UbiCon leads to broader and more accessible interfaces and to faster and less costly software development. We believe that the UbiCon hybrid methodology can foster more efficient and accurate interface engineering in industry and in academia.
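The abstract does not show UbiCon's API, so the following Python sketch only illustrates the hybrid principle with invented names: a static interface is selected per device class ahead of time, then rewritten at execution time from the current context of use.

```python
# A generic sketch of hybrid interface adaptation (all names are
# invented; this is not the UbiCon API). A static template is chosen
# per device class ahead of time; a dynamic pass then rewrites it
# at execution time from the current context.

STATIC_TEMPLATES = {
    "desktop": "<nav>full-menu</nav><main>{body}</main>",
    "mobile":  "<nav>hamburger</nav><main>{body}</main>",
}

def adapt_dynamic(html: str, context: dict) -> str:
    """Run-time adaptation: adjust the static interface to the
    momentary context (here, just a dark-mode rewrite)."""
    if context.get("low_light"):
        html = html.replace("<main>", '<main class="dark">')
    return html

def render(body: str, context: dict) -> str:
    static = STATIC_TEMPLATES[context.get("device", "desktop")]
    return adapt_dynamic(static.format(body=body), context)

print(render("hello", {"device": "mobile", "low_light": True}))
```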
Abstract:
The Functional Mockup Interface (FMI) standard is an open standard, independent of any application or tool, that allows models of dynamic systems to be shared between applications. It provides an interface written in the C language that must be implemented by the various exporting tools, and it establishes a common set of functions for manipulating the models.
JavaFMI is a tool that allows simulations complying with the FMI standard to be used in Java applications in a very simple, clean, and efficient way. It is an open-source project under the LGPL V2.1 license, and its source code is available to be cloned from the project page. The project is hosted at www.bitbucket.org/siani/javafmi and has a welcome page explaining how the library is used, a page for reporting issues or requesting new stories to be implemented, and a page listing all the versions available for download. JavaFMI is distributed as a zip file containing the .jar with the compiled code of the library, a lib folder with its two dependencies on external libraries, and a copy of the license. Compared with JFMI, with fewer lines of code, a clean, expressive, and self-documented API, and performance that is 66% better, JavaFMI is objectively the best Java tool available for manipulating FMUs of versions 1.0 and 2.0 of the FMI standard.
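JavaFMI itself is a Java library and its API is not quoted here; as a comparable illustration of driving an FMI-compliant model from a host language, the sketch below uses FMPy, a different, Python-based FMI library (the FMU file name is an assumption):

```python
# Illustration with FMPy, a Python FMI library (not JavaFMI).
from fmpy import simulate_fmu

result = simulate_fmu("BouncingBall.fmu",  # any FMI 1.0 / 2.0 FMU
                      start_time=0.0,
                      stop_time=3.0)

# `result` is a structured NumPy array, one column per recorded variable.
print(result.dtype.names)
print(result["time"][:5])
```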
Abstract:
Matita (Italian for "pencil") is a new interactive theorem prover under development at the University of Bologna. Compared with state-of-the-art proof assistants, Matita presents both traditional and innovative aspects. The underlying calculus of the system, namely the Calculus of (Co)Inductive Constructions (CIC for short), is well known and is used as the basis of another mainstream proof assistant, Coq, with which Matita is to some extent compatible. In the same spirit as several other systems, proof authoring is conducted by the user as a goal-directed proof search, using a script to store textual commands for the system. In the tradition of LCF, the proof language of Matita is procedural and relies on tactics and tacticals to proceed toward proof completion. The interaction paradigm offered to the user is based on the script-management technique underlying the popularity of the Proof General generic interface for interactive theorem provers: while editing a script, the user can move the execution point forward to deliver commands to the system, or backward to retract (or "undo") past commands. Matita has been developed from scratch over the past 8 years by several members of the Helm research group, of whom this thesis's author is one. Matita is now a full-fledged proof assistant with a library of about 1,000 concepts. Several innovative solutions spun off from this development effort. This thesis is about the design and implementation of some of those solutions, in particular those relevant to user interaction with theorem provers and to which this thesis's author was a major contributor. Joint work with other members of the research group is pointed out where needed. The main topics discussed in this thesis are briefly summarized below.
Disambiguation. Most activities connected with interactive proving require the user to input mathematical formulae. Mathematical notation being ambiguous, parsing formulae typeset the way mathematicians like to write them on paper is a challenging task, and one neglected by several theorem provers, which usually prefer to fix an unambiguous input syntax. Exploiting features of the underlying calculus, Matita offers an efficient disambiguation engine that permits typing formulae in familiar mathematical notation.
Step-by-step tacticals. Tacticals are higher-order constructs used in proof scripts to combine tactics. With tacticals, scripts can be made shorter, more readable, and more resilient to change. Unfortunately, they are de facto incompatible with state-of-the-art user interfaces based on script management: such interfaces do not permit positioning the execution point inside complex tacticals, introducing a trade-off between the usefulness of structured scripts and a tedious big-step execution behavior during script replay. In Matita we break this trade-off with tinycals, an alternative to a subset of LCF tacticals that can be evaluated in a more fine-grained manner.
Extensible yet meaningful notation. Proof-assistant users often face the need to create new mathematical notation to ease the use of new concepts. The framework used in Matita for extensible notation both accounts for high-quality bidimensional rendering of formulae (with the expressivity of MathML Presentation) and provides meaningful notation, where presentational fragments are kept synchronized with the semantic representation of terms. Using our approach, interoperability with other systems can be achieved at the content level, and direct manipulation of formulae acting on their rendered forms is possible too.
Publish/subscribe hints. Automation plays an important role in interactive proving, as users like to delegate tedious proving sub-tasks to decision procedures or external reasoners. Exploiting the Web-friendliness of Matita, we experimented with a broker and a network of Web services (called tutors) which independently try to complete open sub-goals of the proof currently being authored in Matita. The user receives hints from the tutors on how to complete sub-goals and can apply them to the current proof interactively or automatically.
Another innovative aspect of Matita, only marginally touched on by this thesis, is the embedded content-based search engine Whelp, which is exploited to various ends, from automatic theorem proving to avoiding duplicate work for the user. We also discuss the (potential) reusability in other systems of the widgets presented in this thesis, and how we envisage the evolution of user interfaces for interactive theorem provers in the Web 2.0 era.
Abstract:
This thesis addresses the issue of generating texts in the style of an existing author that also satisfy structural constraints imposed by the genre of the text. Although Markov processes are known to be suitable for representing style, they are difficult to control so as to satisfy non-local properties, such as structural constraints, that require long-distance modeling. The framework of constrained Markov processes makes it possible to generate texts that are consistent with a corpus while being controllable in terms of rhyme and meter. Constrained Markov processes consist in reformulating Markov processes in the context of constraint satisfaction. The thesis describes how to represent stylistic and structural properties as constraints in this framework and how this approach can be used to generate lyrics in the style of 60 different authors. An evaluation of the described method is provided by comparing it to both pure Markov and pure constraint-based approaches. Finally, the thesis describes the implementation of an augmented text editor, called Perec. Perec is intended to enhance creativity by helping the user write lyrics and poetry, exploiting the techniques presented so far.
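The core idea of constrained Markov generation, pruning the transition table so that only prefixes that can still satisfy the constraint survive, can be sketched in a few lines; the toy corpus and "rhyme" test below are assumptions, not the thesis's implementation:

```python
import random
from collections import defaultdict

# Toy corpus and a toy "rhyme" constraint: the last word must end in "ight".
corpus = "the light shines in the night and the light fades at night".split()

# Bigram transition table estimated from the corpus.
trans = defaultdict(set)
for a, b in zip(corpus, corpus[1:]):
    trans[a].add(b)

def generate(length: int, rhyme: str) -> list[str]:
    """Constrained Markov generation: keep, per position, only the words
    from which an accepting final word is still reachable (backward
    filtering), then sample forward through the pruned table."""
    words = set(corpus)
    alive = [set() for _ in range(length)]
    alive[-1] = {w for w in words if w.endswith(rhyme)}
    for i in range(length - 2, -1, -1):
        alive[i] = {w for w in words if trans[w] & alive[i + 1]}
    seq = [random.choice(sorted(alive[0]))]
    for i in range(1, length):
        seq.append(random.choice(sorted(trans[seq[-1]] & alive[i])))
    return seq

print(" ".join(generate(5, "ight")))  # e.g. "the light fades at night"
```

Unlike rejection sampling over a plain Markov chain, the backward filtering pass guarantees that every generated line ends on a rhyming word.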
Abstract:
The monitoring of cognitive functions aims at gaining information about the current cognitive state of the user by decoding brain signals. In recent years, this approach has made it possible to acquire valuable information about the cognitive aspects of humans' interaction with the external world. From this consideration, researchers have started to consider passive applications of brain-computer interfaces (BCIs) in order to provide a novel input modality for technical systems based solely on brain activity. The objective of this thesis is to demonstrate how passive BCI applications can be used to assess the mental states of users in order to improve human-machine interaction. Two main studies are proposed. The first investigates whether morphological variations of event-related potentials (ERPs) can be used to predict users' mental states (e.g., attentional resources, mental workload) during different reactive BCI tasks (e.g., P300-based BCIs), and whether this information can predict the subjects' performance on the tasks. In the second study, a passive BCI system able to estimate the user's mental workload online, relying on the combination of EEG and ECG biosignals, is proposed. The latter study was performed by simulating an operational scenario in which errors or lapses in performance could have significant consequences. The results showed that the proposed system can estimate the subjects' mental workload online, discriminating three different difficulty levels of the tasks with high reliability.
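A generic sketch of such an EEG+ECG workload classifier (not the thesis's pipeline; the sampling rate, frequency bands, feature choice, and classifier are all assumptions):

```python
import numpy as np
from scipy.signal import welch
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

FS = 256  # EEG sampling rate in Hz (an assumption)

def features(eeg: np.ndarray, rr_intervals: np.ndarray) -> np.ndarray:
    """One feature vector per trial: EEG theta and alpha band power
    (workload-sensitive bands) plus mean heart rate from the ECG."""
    f, pxx = welch(eeg, fs=FS)
    theta = pxx[(f >= 4) & (f < 8)].mean()
    alpha = pxx[(f >= 8) & (f < 13)].mean()
    heart_rate = 60.0 / rr_intervals.mean()  # beats per minute
    return np.array([theta, alpha, heart_rate])

# Hypothetical training set: one row per trial, labels 0/1/2 for the
# three difficulty levels (random stand-ins for real recordings).
rng = np.random.default_rng(0)
X = np.vstack([features(rng.standard_normal(FS * 10),
                        rng.uniform(0.6, 1.0, size=50)) for _ in range(30)])
y = rng.integers(0, 3, size=30)

clf = LinearDiscriminantAnalysis().fit(X, y)
print(clf.predict(X[:3]))  # online use: classify each new trial as it arrives
```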
Abstract:
Among proficient daily computer users, some are flexible at accomplishing unfamiliar tasks on their own and others have difficulty. Software designers and evaluators involved with human-computer interaction (HCI) should account for any group of proficient daily users that is shown to stumble over unfamiliar tasks. We define "Just Enough" (JE) users as proficient daily computer users with a predominantly extrinsic motivation style who know just enough to get what they want or need from the computer. We hypothesize that JE users have difficulty with unfamiliar computer tasks and skill transfer, whereas intrinsically motivated daily users accomplish unfamiliar tasks readily. Intrinsic motivation can be characterized by interest, enjoyment, and choice, while extrinsic motivation is externally regulated. In our study we identified users by motivation style and then conducted ethnographic observations. Our results confirm that JE users do have difficulty accomplishing unfamiliar tasks on their own but have fewer problems with near skill transfer. In contrast, intrinsically motivated users had no trouble with unfamiliar tasks or with near skill transfer. This supports our assertion that JE users know enough to get routine tasks done and can transfer that knowledge, but become unproductive when faced with unfamiliar tasks. The study combines quantitative and qualitative methods. We identified 66 daily users by motivation style, using an inventory adapted from Deci and Ryan (Ryan and Deci 2000) and from Guay, Vallerand, and Blanchard (Guay et al. 2000). We then used qualitative ethnographic methods with a think-aloud protocol to observe nine extrinsic users and seven intrinsic users. Observation sessions had three customized phases in which the researcher directed the participant to: 1) confirm the participant's proficiency; 2) accomplish unfamiliar tasks; and 3) transfer existing skills to unfamiliar software.
Abstract:
This paper proposes an extension to the television-watching paradigm that permits an end-user to enrich broadcast content. Examples of this enriched content are: virtual edits that allow the order of presentation within the content to be changed or the content to be subsetted; conditional text, graphic, or video objects that can be placed to appear within content and be triggered by viewer interaction; and additional navigation links that can be added to structure how other users view the base content object. The enriched content can be viewed directly within the context of the TV viewing experience. It may also be shared with other users within a distributed peer group. Our architecture is based on a model that allows the original content to remain unaltered and that respects DRM restrictions on content reuse. The fundamental approach is to define an intermediate content-enhancement layer based on the W3C's SMIL language. Using a pen-based enhancement interface, end-users can manipulate content saved in a home PDR setting. This paper describes our architecture and provides several examples of how our system handles content enhancement. We also describe a reference implementation for creating and viewing enhancements.
Abstract:
Having to carry input devices can be inconvenient when interacting with wall-sized, high-resolution tiled displays. Such displays are typically driven by a cluster of computers. Running existing games on a cluster is non-trivial, and the performance attained using software solutions like Chromium is not good enough. This paper presents a touch-free, multi-user, human-computer interface for wall-sized displays that enables completely device-free interaction. The interface is built using 16 cameras and a cluster of computers, and it is integrated with the games Quake 3 Arena (Q3A) and Homeworld. The two games were parallelized using two different approaches in order to run with good performance on a 7x4-tile, 21-megapixel display wall. The touch-free interface enables interaction with a latency of 116 ms, of which 81 ms are due to the camera hardware. The rendering performance of the games is compared to that of their sequential counterparts running on the display wall using Chromium. Parallel Q3A's frame rate is an order of magnitude higher than with Chromium. The parallel version of Homeworld performed on par with the sequential version, which did not run at all using Chromium. Informal use of the touch-free interface indicates that it works better for controlling Q3A than Homeworld.