925 results for web user interface
Abstract:
Customer-specific functionalities are a challenging part of procurement and invoice automation environments. In the Basware Enterprise Purchase to Payment (EPP) product family, customer-specific reports are supported only at a basic level, without any seamless interface between all EPP products. Other customer-specific functionalities are not supported either, as there is no customizable interface between the applications and only the most common features are implemented in the products themselves. In this thesis, foundations are created for a new web-based value-added module in which it is possible to create seamless customer-specific functionalities throughout the whole EPP product family. The work is implemented as a Proof of Concept pilot. The system is created in a user-centered way in which the users are able to explain their requests and determine their needs. The result is an excellent foundation for a module that can be developed further.
Abstract:
A web service is a software system that provides a machine-processable interface to other machines over the network, using different Internet protocols. Web services are increasingly used in industry to automate tasks and to offer services to a wider audience. The REST architectural style aims at producing scalable and extensible web services using technologies that play well with the existing tools and infrastructure of the web. It provides a uniform set of operations that can be used to invoke a CRUD interface (create, retrieve, update and delete) of a web service. The stateless behavior of the service interface requires that every request to a resource be independent of the previous ones, which facilitates scalability. Automated systems, e.g. hotel reservation systems, provide advanced scenarios for stateful services that require a certain sequence of requests to be followed in order to fulfill the service goals. Designing and developing such services for advanced scenarios under REST constraints requires rigorous approaches capable of creating web services that can be trusted for their behavior. Systems that can be trusted for their behavior can be termed dependable systems. This thesis presents an integrated design, analysis and validation approach that helps the service developer create dependable and stateful REST web services. The main contribution of this thesis is a novel model-driven methodology to design behavioral REST web service interfaces and their compositions. The behavioral interfaces provide information on which methods can be invoked on a service and on the pre- and post-conditions of these methods. The methodology uses the Unified Modeling Language (UML) as the modeling language, which has a wide user base and mature, continuously evolving tools. We use the UML class diagram and the UML state machine diagram, with additional design constraints, to provide resource and behavioral models, respectively, for designing REST web service interfaces. These service design models serve as a specification document, and the information presented in them has manifold applications. The service design models also contain information about the time and domain requirements of the service, which helps in requirement traceability, an important part of our approach. Requirement traceability helps in capturing faults in the design models and in other elements of the software development environment by tracing the unfulfilled requirements of the service back and forth. Information about service actors is also included in the design models; it is required for authenticating service requests from authorized actors, since not all types of users have access to all resources. In addition, by following our design approach, the service developer can ensure that the designed web service interfaces are REST compliant. The second contribution of this thesis is the consistency analysis of behavioral REST interfaces. To overcome inconsistency problems and design errors in our service models, we use semantic technologies. The REST interfaces are represented in the Web Ontology Language OWL 2, so that they can be part of the semantic web. These interfaces are checked with OWL 2 reasoners for unsatisfiable concepts, which would result in implementations that fail. This work is fully automated thanks to the implemented translation tool and the existing OWL 2 reasoners.
The third contribution of this thesis is the verification and validation of REST web services. We use model checking techniques with the UPPAAL model checker for this purpose. Timed automata are generated from the UML-based service design models with our transformation tool and are verified for basic properties such as deadlock freedom, liveness, reachability and safety. The implementation of a web service is tested using a black-box testing approach. Test cases are generated from the UPPAAL timed automata and, using the online testing tool UPPAAL TRON, the service implementation is validated at runtime against its specifications. Requirement traceability is also addressed in our validation approach, with which we can see which service goals are met and trace unfulfilled service goals back to faults in the design models. A final contribution of the thesis is an implementation of behavioral REST interfaces and service monitors from the service design models. The partial code generation tool creates code skeletons of REST web services with method pre- and post-conditions. The preconditions of the methods constrain the user to invoke the stateful REST service under the right conditions, and the post-conditions constrain the service developer to implement the right functionality. The details of the methods can be inserted manually by the developer as required. We do not target complete automation because we focus only on the interface aspects of the web service. The applicability of the approach is demonstrated with a pedagogical example of a hotel room booking service and a relatively complex worked example of a holiday booking service taken from an industrial context. The former example presents a simple explanation of the approach, and the latter worked example shows how stateful and timed web services offering complex scenarios and involving other web services can be constructed using our approach.
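To make the idea of behavioral interfaces and generated skeletons concrete, the following is a minimal hand-written sketch (not the thesis's generated code) of a stateful REST method for the hotel-booking example. It assumes a JAX-RS stack; the Booking type, the in-memory store and the state names are hypothetical. The precondition guards the legal state transition, and the postcondition checks that the advertised effect actually took place.

// Illustrative sketch only: a stateful REST method with explicit pre- and
// post-condition checks, in the spirit of the generated skeletons.
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

import jakarta.ws.rs.PUT;
import jakarta.ws.rs.Path;
import jakarta.ws.rs.PathParam;
import jakarta.ws.rs.core.Response;

@Path("/bookings/{id}/confirmation")
public class BookingResource {

    enum State { PENDING, CONFIRMED, CANCELLED }

    static class Booking {
        final String id;
        State state = State.PENDING;
        Booking(String id) { this.id = id; }
    }

    // In-memory store standing in for whatever persistence the service uses.
    static final Map<String, Booking> STORE = new ConcurrentHashMap<>();

    @PUT
    public Response confirm(@PathParam("id") String id) {
        Booking booking = STORE.get(id);

        // Precondition: the resource exists and "confirm" is a legal
        // transition from its current state; otherwise reject the request.
        if (booking == null || booking.state != State.PENDING) {
            return Response.status(Response.Status.CONFLICT).build();
        }

        // Developer-provided body of the generated skeleton.
        booking.state = State.CONFIRMED;

        // Postcondition: the advertised state change really took place.
        if (booking.state != State.CONFIRMED) {
            return Response.serverError().build();
        }
        return Response.ok("confirmed").build();
    }
}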
Abstract:
With the growth of new technologies, using online tools has become part of everyday life. This has a greater impact on researchers, as the data obtained from various experiments needs to be analyzed and knowledge of programming has become mandatory even for pure biologists. Hence, VTT came up with a new tool, R Executables (REX), a web application designed to provide a graphical interface for biological data functions such as image analysis, gene expression data analysis, plotting, and disease and control studies, which employs R functions to produce the results. REX provides an interactive application allowing biologists to directly enter values and run the required analysis with a single click. The program processes the given data in the background and prints the results rapidly. With the growth of data and the load on the server, the interface developed problems concerning processing time, a poor GUI, data storage, security, a minimally interactive user experience, and crashes with large amounts of data. This thesis describes the methods by which these problems were resolved to make REX a better application for the future. The old REX was developed using Python Django; the new version is implemented with Vaadin, a Java framework for developing web applications whose code is written in Java and which offers rich new components. Vaadin provides better security, better speed, and a good, interactive interface. In this thesis, a subset of REX functionality, including IST bulk plotting and image segmentation, was selected and implemented using Vaadin. I wrote 662 lines of code, with Vaadin as the front-end handler while the R language was used for back-end data retrieval, computation and plotting. The application is optimized to allow further functionalities to be migrated with ease from the old REX. Future development focuses on including high-throughput screening functions along with gene expression database handling.
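As a rough illustration of the pattern described above, and not REX's actual code, the following sketch shows a Vaadin Flow view that collects one input, delegates the computation to an R script via Rscript, and displays the result. The route name, class name and script path are assumptions.

// Illustrative sketch only: Vaadin front end delegating back-end work to R.
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.util.stream.Collectors;

import com.vaadin.flow.component.button.Button;
import com.vaadin.flow.component.notification.Notification;
import com.vaadin.flow.component.orderedlayout.VerticalLayout;
import com.vaadin.flow.component.textfield.TextField;
import com.vaadin.flow.router.Route;

@Route("ist-plot")
public class IstPlotView extends VerticalLayout {

    public IstPlotView() {
        TextField geneId = new TextField("Gene ID");
        Button run = new Button("Run analysis", click -> {
            try {
                // Back-end computation is done in R, here invoked via Rscript
                // with a hypothetical script name.
                Process r = new ProcessBuilder("Rscript", "ist_bulk_plot.R", geneId.getValue())
                        .redirectErrorStream(true)
                        .start();
                String output;
                try (BufferedReader reader =
                         new BufferedReader(new InputStreamReader(r.getInputStream()))) {
                    output = reader.lines().collect(Collectors.joining("\n"));
                }
                r.waitFor();
                Notification.show(output);
            } catch (Exception e) {
                Notification.show("R call failed: " + e.getMessage());
            }
        });
        add(geneId, run);
    }
}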
TactoColor: design and evaluation of a spatial web exploration interface for visually impaired users.
Abstract:
In this research, we are interested in internet access for visually impaired (low-vision) people. Several types of tools for this audience are available on the market, such as screen readers and screen magnifiers, depending on the person's visual acuity. Although these tools are useful and regularly used, low-vision users (as well as blind users) often describe them as frustrating. Several reasons are cited, such as the lack of spatial organization of content read with screen readers, or the fact that only a single sense is engaged. The present research consists of adapting for low-vision users a system under development, TactoWeb (Petit, 2013), which allows audio-tactile exploration of the Web. TactoWeb was designed for people with complete blindness and therefore offers no visual properties. We propose here an adaptation of the system for people with only a partial visual impairment. We hope to provide this population with effective tools that allow them to browse the internet efficiently and pleasantly. Indeed, thanks to non-linear exploration (which should improve spatial orientation) and a multimodal interface (which engages sight, hearing and touch), we expect to greatly reduce the feeling of frustration reported by low-vision users. We hypothesized that non-linear, trimodal exploration of a website with TactoColor is more satisfying and effective than non-linear, bimodal exploration with TactoWeb (without visual feedback). TactoColor was adapted for low-vision users by adding visual cues reflecting the components of the page (links, menus, buttons), which should make exploration easier. To test our hypothesis, the two versions of the software were evaluated by low-vision users. Participants started either with TactoWeb or with TactoColor, so as not to favor one of the versions. The quality of navigation, its effectiveness and its efficiency were analyzed based on the time needed to complete a task, as well as on the ease or difficulty reported by the participant. In addition, at the end of each session, we asked participants for their opinion through an evaluation questionnaire, which gave us their feedback on our software after their brief experience. All these measurements allowed us to determine that adding colors leads to faster exploration of web pages and to better spatial orientation. However, the very different performance levels across participants do not allow us to say whether the presence of colors facilitates task completion.
Abstract:
Hypermedia systems based on the Web for open distance education are becoming increasingly popular as tools for user-driven access to learning information. Adaptive hypermedia is a new research direction within the area of user-adaptive systems, aimed at increasing their functionality by making them personalized [Eklu 96]. This paper sketches a general agent architecture to include navigational adaptability and user-friendly processes that would guide and accompany the student during his/her learning on the PLAN-G hypermedia system (New Generation Telematics Platform to Support Open and Distance Learning), with the aid of computer networks and specifically WWW technology [Marz 98-1] [Marz 98-2]. The actual PLAN-G prototype is successfully used with some informatics courses (the current version has no agents yet). The proposed multi-agent system contains two different types of adaptive autonomous software agents: Personal Digital Agents (Interface), to interact directly with the student when necessary; and Information Agents (Intermediaries), to filter and discover learning information and to adapt the navigation space to a specific student.
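As a purely hypothetical illustration of the two agent roles described above (the paper specifies the architecture, not this code), the roles might be typed as follows.

// Illustrative sketch only: interfaces for the two proposed agent types.
import java.util.List;

interface PersonalDigitalAgent {
    // Interface agent: interacts directly with the student when necessary.
    void presentSuggestion(String studentId, String suggestion);
}

interface InformationAgent {
    // Intermediary agent: filters and discovers learning material and adapts
    // the navigation space to a specific student.
    List<String> adaptNavigationSpace(String studentId, List<String> candidatePages);
}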
Abstract:
In this lecture, we will focus on analyzing user goals in search query logs. Readings: M. Strohmaier, P. Prettenhofer, M. Lux, Different Degrees of Explicitness in Intentional Artifacts - Studying User Goals in a Large Search Query Log, CSKGOI'08 International Workshop on Commonsense Knowledge and Goal Oriented Interfaces, in conjunction with IUI'08, Canary Islands, Spain, 2008.
Abstract:
Search engines - such as Google - have been characterized as "Databases of intentions". This class will focus on different aspects of intentionality on the web, including goal mining, goal modeling and goal-oriented search. Readings: M. Strohmaier, M. Lux, M. Granitzer, P. Scheir, S. Liaskos, E. Yu, How Do Users Express Goals on the Web? - An Exploration of Intentional Structures in Web Search, We Know'07 International Workshop on Collaborative Knowledge Management for Web Information Systems in conjunction with WISE'07, Nancy, France, 2007. [Web link] Readings: Automatic identification of user goals in web search, U. Lee and Z. Liu and J. Cho WWW '05: Proceedings of the 14th International World Wide Web Conference 391--400 (2005) [Web link]
Abstract:
When publishing information on the web, one expects it to reach all the people that could be interested in it. This is mainly achieved with general-purpose indexing and search engines like Google, which is the most used today. In the particular case of the geographic information (GI) domain, exposing content to mainstream search engines is a complex task that needs specific actions. On many occasions it is convenient to provide a web site with a specially tailored search engine. Such is the case for online dictionaries (Wikipedia, WordReference), stores (Amazon, eBay), and generally all sites holding thematic databases. Due to the proliferation of these engines, A9.com proposed a standard interface called OpenSearch, used by modern web browsers to manage custom search engines. Geographic information can also benefit from the use of specific search engines. We can distinguish two main approaches in GI retrieval efforts: on one hand, classical OGC standardization (CSW, WFS filters), which is very complex for the mainstream user; and on the other hand, the neogeographer's approach, usually in the form of specific APIs lacking a common query interface and standard geographic formats. A draft 'geo' extension for OpenSearch has been proposed. It adds geographic filtering for queries and recommends a set of simple standard geographic response formats, such as KML, Atom and GeoRSS. This proposal enables standardization while keeping simplicity, thus covering a wide range of use cases in both the OGC and neogeography paradigms. In this article we analyze the OpenSearch geo extension and its use cases in detail, demonstrating its applicability to both the SDI and the geoweb. Open source implementations are presented as well.
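As a minimal illustration of the kind of request the geo extension enables, a client could compose a bounding-box query as sketched below. The endpoint and the concrete query-string parameter names are assumptions; a real client takes them from the URL template in the service's OpenSearch description document.

// Illustrative sketch only: building a query URL for a hypothetical
// OpenSearch template ?q={searchTerms}&bbox={geo:box?}&format=kml
import java.net.URLEncoder;
import java.nio.charset.StandardCharsets;

public class GeoOpenSearchQuery {

    static String buildQuery(String endpoint, String terms,
                             double west, double south, double east, double north) {
        String q = URLEncoder.encode(terms, StandardCharsets.UTF_8);
        // geo:box is serialized as "west,south,east,north" in the draft geo extension.
        String bbox = west + "," + south + "," + east + "," + north;
        return endpoint + "?q=" + q + "&bbox=" + bbox + "&format=kml";
    }

    public static void main(String[] args) {
        System.out.println(buildQuery("https://example.org/search", "rivers",
                -1.5, 41.5, -0.5, 42.0));
    }
}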
Abstract:
The challenge of moving past the classic Windows, Icons, Menus, Pointer (WIMP) interface, i.e. by turning it '3D', has resulted in much research and development. To evaluate the impact of 3D on the 'finding a target picture in a folder' task, we built a 3D WIMP interface that allowed the systematic manipulation of visual depth, visual aids, and the semantic category distribution of targets versus non-targets, as well as the detailed measurement of lower-level stimulus features. Across two separate experiments, a large-sample web-based experiment to understand associations and a controlled lab experiment using eye tracking to understand user focus, we investigated how visual depth, use of visual aids, use of semantic categories, and lower-level stimulus features (i.e. contrast, colour and luminance) impact how successfully participants are able to search for, and detect, the target image. Moreover, in the lab-based experiment we captured pupillometry measurements to allow consideration of the influence of increasing cognitive load, resulting either from an increasing number of items on the screen or from the inclusion of visual depth. Our findings showed that increasing the visible layers of depth and including converging lines did not impact target detection times, errors, or failure rates. Low-level features, including colour, luminance, and number of edges, did correlate with differences in target detection times, errors, and failure rates. Our results also revealed that semantic sorting algorithms significantly decreased target detection times. Increased semantic contrast between a target and its neighbours correlated with an increase in detection errors. Finally, the pupillometric data did not provide evidence of any correlation between the number of visible layers of depth and pupil size; however, using structural equation modelling, we demonstrated that cognitive load does influence detection failure rates when there is luminance contrast between the target and its surrounding neighbours. The results suggest that WIMP interaction designers should consider stimulus-driven factors, which were shown to influence the efficiency with which a target icon can be found in a 3D WIMP interface.
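As an illustration of how such low-level features can be quantified (the abstract does not give the study's exact definitions, so Rec. 709 luma and a Michelson-style contrast are used here as stand-ins), a simple measurement might look like this.

// Illustrative sketch only: luminance and luminance-contrast measures for icons.
import java.awt.image.BufferedImage;

public class StimulusFeatures {

    // Rec. 709 luma of one sRGB pixel (a common approximation of luminance).
    static double luminance(int rgb) {
        double r = ((rgb >> 16) & 0xFF) / 255.0;
        double g = ((rgb >> 8) & 0xFF) / 255.0;
        double b = (rgb & 0xFF) / 255.0;
        return 0.2126 * r + 0.7152 * g + 0.0722 * b;
    }

    static double meanLuminance(BufferedImage icon) {
        double sum = 0;
        for (int y = 0; y < icon.getHeight(); y++)
            for (int x = 0; x < icon.getWidth(); x++)
                sum += luminance(icon.getRGB(x, y));
        return sum / (icon.getWidth() * icon.getHeight());
    }

    // Michelson-style contrast between a target icon and its neighbours,
    // computed from their mean luminances.
    static double luminanceContrast(BufferedImage target, BufferedImage neighbours) {
        double lt = meanLuminance(target), ln = meanLuminance(neighbours);
        double denom = lt + ln;
        return denom == 0 ? 0 : Math.abs(lt - ln) / denom;
    }
}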
Abstract:
More and more often we hear everyday statements like "I'm stressed!" or "Don't stress me out more than I already am." But how can we use technology to combat the pressures we deal with daily? One way would be to use technology to create objects, systems or applications that can pamper us, preferably while remaining imperceptible to the user; for this we have ubiquitous computing and nurturant technologies. Ubiquitous computing is increasingly discussed, as are ways to make the computer less obtrusive to the user, which is an active subject of research and development. The use of technology as a source of relaxation and comfort is a strand being explored in the context of nurturant technologies. Accordingly, this thesis focuses on the development of an object and several applications with which we can interact. The object and applications are meant to pamper us and help us relax after a long day at work or in more stressful situations. The object developed employs technologies such as accelerometers, and the applications developed employ communication between computers and web cameras. This thesis begins with a brief introduction to the relevant research areas, such as ubiquitous computing and nurturant technologies, and provides general information on stress and ways to mitigate it. It then describes related work that influenced this thesis, as well as the prototypes developed and the experiments performed, ending with a general conclusion and future work.
Abstract:
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)
Abstract:
This paper presents a user experience evaluation of two online shopping websites from the perspective of older users (those aged 50 and older). The websites were evaluated using methodological procedures established in prior research [1]. The methodology consists of four steps: (1) heuristic interface evaluation using an ergonomic criteria checklist, (2) an online identification and experience questionnaire, (3) evaluation of user experience and interface interaction, and (4) a satisfaction questionnaire. The results of the study revealed that the analyzed websites are not suitable for older users, who find it difficult to interact with these interfaces.
Abstract:
The paper presents an ergonomic analysis of the reading usability of electronic journals and a comparison with newspapers. The method adopted was an evaluation of user perception, based on a printed questionnaire applied to a group of 41 people. Overall, the results indicate that the analyzed newspapers need greater care regarding visual presentation, involving more attention to design, usability, ergonomics, technology and communication.