878 results for Unified User Experience Model
                                
Abstract:
Temporal data are a core element of a reservation. In this paper we formulate 10 requirements and 14 sub-requirements for handling temporal data in online hotel reservation systems (OHRS) from a usability viewpoint. We test the fulfillment of these requirements for city and resort hotels in Austria and Switzerland. Some of the requirements are widely met; however, many requirements are fulfilled only by a surprisingly small number of hotels. In particular, numerous systems offer options for selecting dates that lead to error messages in the next step. A few screenshots illustrate flaws in the systems. We also draw conclusions on the state of applying software engineering principles in the development of Web pages.
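As an illustration of the kind of up-front checking these requirements imply, the sketch below validates a requested stay before the form is ever submitted, so impossible date pairs never reach the error-message step. It is a minimal sketch under assumed rules (no past check-in, check-out strictly after check-in), not the paper's requirement catalogue.

```python
from datetime import date

def validate_stay(check_in, check_out, today=None):
    """Return a list of problems with the requested stay (empty if valid)."""
    today = today or date.today()
    errors = []
    if check_in < today:
        errors.append("check-in date is in the past")
    if check_out <= check_in:
        errors.append("check-out must be after check-in")
    return errors

# A date pair that a system without up-front validation would accept here
# and reject with an error message on the next step:
print(validate_stay(date(2024, 5, 10), date(2024, 5, 9), today=date(2024, 5, 1)))
# -> ['check-out must be after check-in']
```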
                                
Abstract:
This paper proposes the Optimized Power save Algorithm for continuous Media Applications (OPAMA) to improve end-user device energy efficiency. OPAMA enhances the standard legacy Power Save Mode (PSM) of IEEE 802.11 by taking into consideration application specific requirements combined with data aggregation techniques. By establishing a balanced cost/benefit tradeoff between performance and energy consumption, OPAMA is able to improve energy efficiency, while keeping the end-user experience at a desired level. OPAMA was assessed in the OMNeT++ simulator using real traces of variable bitrate video streaming applications. The results showed the capability to enhance energy efficiency, achieving savings up to 44% when compared with the IEEE 802.11 legacy PSM.
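As an illustration of the aggregation idea, the sketch below buffers frame arrivals into bursts so the radio could sleep between bursts, subject to an application delay budget. The scheduling rule, the trace, and all numbers are illustrative assumptions, not OPAMA's actual algorithm.

```python
def schedule_bursts(arrivals, delay_budget):
    """Group frame arrival times (seconds) into bursts; a burst is released
    when holding its oldest frame any longer would break the delay budget."""
    bursts, current = [], []
    for t in arrivals:
        if current and t - current[0] > delay_budget:
            bursts.append(current)
            current = []
        current.append(t)
    if current:
        bursts.append(current)
    return bursts

arrivals = [i * 0.02 for i in range(500)]   # a 10 s stream at 50 frames/s
for budget in (0.02, 0.1, 0.5):
    wakeups = len(schedule_bursts(arrivals, budget))
    print(f"delay budget {budget:4.2f} s -> {wakeups:3d} radio wake-ups")
```

Relaxing the delay budget reduces radio wake-ups at the cost of added latency, which is the cost/benefit tradeoff between performance and energy consumption described above.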
                                
Abstract:
Models of Immigrant Political Incorporation brings together a multidisciplinary group of scholars to consider pathways by which immigrants may be incorporated into the political processes of western democracies. It builds on a rich tradition of studying immigrant incorporation, but each chapter innovates by moving beyond singular accounts of particular groups and locations toward a general causal model with the scope and breadth to apply across groups, places, and time. Models of Immigrant Political Incorporation addresses three key analytic questions: what, if anything, are the distinctive features of immigrants or immigrant groups? How broadly should one define and study politics? What are the initial premises for analyzing pathways toward incorporation; does one learn more by starting from an assumption of racialization and exclusion or from an assumption of engagement and inclusion? While all models engage with all three key analytic questions, chapters vary in their relative focus on one or another, and in the answers they provide. Most include graphical illustrations of the model, as well as extended examples applying the model to one or more immigrant populations. At a time when research on immigrant political incorporation is rapidly accumulating, and when immigrants are increasingly significant political actors in many democratic polities, this volume makes a timely and valuable intervention by pushing researchers to articulate causal dynamics, provide clear definitions and measurable concepts, and develop testable hypotheses. Furthermore, the wide array of frameworks examining how immigrants become part of a polity or are shunted aside ensures that activists and analysts alike will find useful insights. By including historians, sociologists, and political scientists, by ranging across North America and Western Europe, and by addressing successful and failed incorporative efforts, this handbook offers guidance to anyone seeking to develop a dynamic, unified, and supple model of immigrant political incorporation.
                                
Abstract:
A reliable and robust routing service for Flying Ad-Hoc Networks (FANETs) must be able to adapt to topology changes. The user experience of watching live video sequences must also be satisfactory even in scenarios with buffer overflow and a high packet loss ratio. In this paper, we introduce a Cross-layer Link quality and Geographical-aware beaconless opportunistic routing protocol (XLinGO). It enhances the transmission of multiple simultaneous video flows over FANETs by creating and maintaining reliable persistent multi-hop routes. XLinGO considers a set of cross-layer and human-related information for routing decisions, such as performance metrics and Quality of Experience (QoE). Performance evaluation shows that XLinGO achieves multimedia dissemination with QoE support and robustness in multi-hop, multi-flow, and mobile network environments.
                                
Abstract:
A reliable and robust routing service for Flying Ad-Hoc Networks (FANETs) must be able to adapt to topology changes and to recover the quality level of the delivered video flows under dynamic network topologies. The user experience of watching live videos must also be satisfactory even in scenarios with network congestion, buffer overflow, and high packet loss, as experienced in many FANET multimedia applications. In this paper, we perform a comparative simulation study to assess the robustness, reliability, and quality level of videos transmitted via well-known beaconless opportunistic routing protocols. Simulation results show that our protocol, XLinGO, achieves multimedia dissemination with Quality of Experience (QoE) support and robustness in multi-hop, multi-flow, and mobile networks, as required in many multimedia FANET scenarios.
                                
Abstract:
The user experience of watching live video sequences transmitted over Flying Ad-Hoc Networks (FANETs) must be taken into account when dropping packets from overloaded queues, in scenarios with high buffer overflow and packet loss rates. In this paper, we introduce a context-aware adaptation mechanism to manage overloaded buffers. More specifically, we propose a utility function that computes the dropping probability of each packet in an overloaded queue based on video context information, such as frame importance, packet deadline, and sensing relevance. In this way, the proposed mechanism drops the packet that introduces the least video distortion. Simulation evaluation shows that the proposed adaptation mechanism provides real-time multimedia dissemination with QoE support in multi-hop, multi-flow, and mobile network environments.
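As an illustration, the following minimal sketch scores each queued packet with a linear utility over assumed context fields (frame type, deadline slack, sensing relevance) and evicts the lowest-scoring one; the weights and the linear form are hypothetical stand-ins for the paper's utility function.

```python
from dataclasses import dataclass

FRAME_IMPORTANCE = {"I": 1.0, "P": 0.6, "B": 0.3}   # assumed ranking

@dataclass
class Packet:
    frame_type: str       # "I", "P", or "B"
    deadline: float       # seconds until the packet becomes useless
    relevance: float      # sensing relevance in [0, 1]

def utility(p, now=0.0, horizon=1.0):
    """Higher utility = more valuable = dropped last."""
    slack = max(0.0, min(1.0, (p.deadline - now) / horizon))
    return (0.5 * FRAME_IMPORTANCE[p.frame_type]   # assumed weights
            + 0.3 * slack
            + 0.2 * p.relevance)

def drop_one(queue):
    """Remove and return the packet whose loss distorts the video least."""
    victim = min(queue, key=utility)
    queue.remove(victim)
    return victim

queue = [Packet("I", 0.9, 0.8), Packet("B", 0.1, 0.2), Packet("P", 0.5, 0.5)]
print(drop_one(queue).frame_type)   # -> 'B': least important, nearly expired
```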
                                
Abstract:
User experience on watching live videos must be satisfactory even under the influence of varying network conditions and topology changes, such as those occurring in Flying Ad-Hoc Networks (FANETs). Routing services for video dissemination over FANETs must be able to adapt routing decisions at runtime to meet Quality of Experience (QoE) requirements. In this paper, we introduce an adaptive beaconless opportunistic routing protocol for video dissemination over FANETs with QoE support, which takes into account multiple types of context information, such as link quality, residual energy, and buffer state, as well as geographic information and node mobility in 3D space. The proposed protocol uses Bayesian networks to define weight vectors and the Analytic Hierarchy Process (AHP) to adjust the degree of importance of each type of context information based on instantaneous values. It also includes a position prediction mechanism that monitors the distance between two nodes in order to detect possible route failures.
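As an illustration of multi-metric relay selection, the sketch below ranks candidate next hops by a weighted sum of normalized context metrics, with geographic progress computed in 3D. The fixed weight vector is a hypothetical stand-in for the Bayesian/AHP-derived weights described above.

```python
import math

def progress_3d(node, neighbor, dest):
    """Fraction of the remaining distance to dest gained via neighbor."""
    gain = math.dist(node, dest) - math.dist(neighbor, dest)
    return max(0.0, gain) / math.dist(node, dest)

def score(metrics, weights):
    # metrics and weights share keys; every metric is normalized to [0, 1]
    return sum(weights[k] * metrics[k] for k in weights)

WEIGHTS = {"link": 0.4, "energy": 0.2, "buffer": 0.1, "progress": 0.3}  # assumed

candidates = {
    "n1": {"link": 0.9, "energy": 0.5, "buffer": 0.8,
           "progress": progress_3d((0, 0, 0), (40, 10, 5), (100, 0, 0))},
    "n2": {"link": 0.5, "energy": 0.9, "buffer": 0.9,
           "progress": progress_3d((0, 0, 0), (70, 40, 0), (100, 0, 0))},
}
best = max(candidates, key=lambda n: score(candidates[n], WEIGHTS))
print(best)   # -> 'n1': its strong link outweighs n2's better progress
```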
                                
Abstract:
Background: Diabetes mellitus is spreading throughout the world and diabetic individuals have been shown to often assess their food intake inaccurately; therefore, it is a matter of urgency to develop automated diet assessment tools. The recent availability of mobile phones with enhanced capabilities, together with the advances in computer vision, have permitted the development of image analysis apps for the automated assessment of meals. GoCARB is a mobile phone-based system designed to support individuals with type 1 diabetes during daily carbohydrate estimation. In a typical scenario, the user places a reference card next to the dish and acquires two images using a mobile phone. A series of computer vision modules detect the plate and automatically segment and recognize the different food items, while their 3D shape is reconstructed. Finally, the carbohydrate content is calculated by combining the volume of each food item with the nutritional information provided by the USDA Nutrient Database for Standard Reference. Objective: The main objective of this study is to assess the accuracy of the GoCARB prototype when used by individuals with type 1 diabetes and to compare it to their own performance in carbohydrate counting. In addition, the user experience and usability of the system are evaluated by questionnaires. Methods: The study was conducted at the Bern University Hospital, “Inselspital” (Bern, Switzerland) and involved 19 adult volunteers with type 1 diabetes, each participating once. On each study day, a total of six meals of broad diversity were taken from the hospital’s restaurant and presented to the participants. The food items were weighed on a standard balance and the true amount of carbohydrate was calculated from the USDA nutrient database. Participants were asked to count the carbohydrate content of each meal independently and then by using GoCARB. At the end of each session, a questionnaire was completed to assess the user’s experience with GoCARB. Results: The mean absolute error was 27.89 (SD 38.20) grams of carbohydrate for the participants’ estimations, whereas the corresponding value for the GoCARB system was 12.28 (SD 9.56) grams of carbohydrate, a significantly better performance (P=.001). In 75.4% (86/114) of the meals, the GoCARB automatic segmentation was successful and 85.1% (291/342) of individual food items were successfully recognized. Most participants found GoCARB easy to use. Conclusions: This study indicates that the system is able to estimate, on average, the carbohydrate content of meals with higher accuracy than individuals with type 1 diabetes can. The participants thought the app was useful and easy to use. GoCARB seems to be a well-accepted supportive mHealth tool for the assessment of served-on-a-plate meals.
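As an illustration of the final computation step described above (combining each item's reconstructed volume with nutritional information), here is a minimal sketch; the density and carbohydrate figures are placeholder values, not USDA entries.

```python
NUTRIENTS = {  # grams of carbohydrate per 100 g of food (assumed values)
    "rice": 28.0, "chicken": 0.0, "peas": 14.0,
}
DENSITY = {    # grams per cm^3 (assumed values)
    "rice": 0.75, "chicken": 1.05, "peas": 0.65,
}

def carbs_grams(item, volume_cm3):
    """Carbohydrate in grams: reconstructed volume -> mass -> carbs."""
    mass = volume_cm3 * DENSITY[item]
    return mass * NUTRIENTS[item] / 100.0

meal = {"rice": 180.0, "chicken": 120.0, "peas": 60.0}   # segmented volumes, cm^3
total = sum(carbs_grams(item, v) for item, v in meal.items())
print(f"estimated carbohydrate: {total:.1f} g")   # -> 43.3 g
```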
                                
Abstract:
The evolution of wireless access technologies and mobile devices, together with the constant demand for video services, has created new Human-Centric Multimedia Networking (HCMN) scenarios. However, HCMN poses several challenges for content creators and network providers in delivering multimedia data with an acceptable quality level based on the user experience. Moreover, human experience and context, as well as network information, play an important role in adapting and optimizing video dissemination. In this paper, we discuss trends for providing video dissemination with Quality of Experience (QoE) support by integrating HCMN with cloud computing approaches. We identify five trends arising from this integration, namely Participatory Sensor Networks, Mobile Cloud Computing formation, QoE assessment, QoE management, and video or network adaptation.
                                
Abstract:
Enriching knowledge bases with multimedia information makes it possible to complement textual descriptions with visual and audio information. Such complementary information can help users understand the meaning of assertions and, in general, improve the user experience with the knowledge base. In this paper we address the problem of how to enrich ontology instances with candidate images retrieved from existing Web search engines. DBpedia has evolved into a major hub in the Linked Data cloud, interconnecting millions of entities organized under a consistent ontology. Our approach taps into the Wikipedia corpus to gather context information for DBpedia instances and takes advantage of image tagging information, when available, to calculate semantic relatedness between instances and candidate images. We performed experiments focusing on the particularly challenging problem of highly ambiguous names. Both methods presented in this work outperformed the baseline. Our best method leveraged context words from Wikipedia, tags from Flickr, and type information from DBpedia to achieve an average precision of 80%.
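As an illustration of tag-based relatedness, the sketch below ranks candidate images for an instance by the overlap between its Wikipedia context words and each image's Flickr tags; Jaccard overlap is an assumed stand-in for the relatedness measure actually used.

```python
def jaccard(a, b):
    """Set overlap in [0, 1]; 0 when both sets are empty."""
    return len(a & b) / len(a | b) if a | b else 0.0

# Context words gathered for the DBpedia instance of the animal "jaguar":
context = {"jaguar", "cat", "predator", "amazon", "wildlife"}

candidates = {                                       # Flickr tags per image
    "img1": {"jaguar", "wildlife", "zoo"},
    "img2": {"jaguar", "car", "engine", "classic"},  # the ambiguous sense
}
ranked = sorted(candidates,
                key=lambda i: jaccard(context, candidates[i]), reverse=True)
print(ranked)   # -> ['img1', 'img2']: tags match the animal sense of "jaguar"
```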
                                
Abstract:
Future high-quality consumer electronics will contain a number of applications running in a highly dynamic environment, and their execution will need to be efficiently arbitrated by the underlying platform software. The multimedia applications that currently execute in similar contexts face frequent run-time variations in their resource demands, caused by the greedy nature of multimedia processing itself. Changes in resource demands are triggered by numerous events (e.g. a switch in the input media compression format). Such situations require real-time adaptation mechanisms that adjust the system operation to the new requirements, and this must be done seamlessly to preserve the user experience. One solution for efficiently managing application execution is to apply quality-of-service resource management techniques based on assigning and enforcing resource contracts for applications. Most resource management solutions provide temporal isolation by enforcing resource assignments and avoiding any resource overruns. However, this clearly limits cost-effective resource usage. This paper presents a simple priority assignment scheme based on uniform priority bands that allows greedy multimedia tasks to incur safe overruns, increasing resource usage without threatening the timely execution of non-overrunning tasks. Experimental results show that the proposed priority assignment scheme, in combination with a resource accounting mechanism, preserves timely multimedia execution and delivery, achieves more cost-effective processor usage, and guarantees the execution isolation of non-overrunning tasks.
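As an illustration of the band idea, the sketch below demotes a task to a lower uniform priority band once its accounted usage exceeds its contracted budget, so overruns can only consume slack. Band values and the accounting are assumptions for illustration.

```python
from dataclasses import dataclass

HIGH_BAND, LOW_BAND = 0, 10          # smaller value = higher priority (assumed)

@dataclass
class Task:
    name: str
    base_prio: int                   # priority within its band
    budget_ms: float                 # contracted CPU time per period
    used_ms: float = 0.0             # measured by the resource accountant

def effective_priority(t):
    band = HIGH_BAND if t.used_ms <= t.budget_ms else LOW_BAND
    return band + t.base_prio        # uniform offset preserves relative order

def pick_next(ready):
    """Dispatch the highest-priority ready task (lowest effective value)."""
    return min(ready, key=effective_priority)

decoder = Task("decoder", base_prio=1, budget_ms=8.0, used_ms=11.0)  # overran
ui      = Task("ui",      base_prio=2, budget_ms=2.0, used_ms=1.5)
print(pick_next([decoder, ui]).name)   # -> 'ui': the overrunner sank below it
```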
                                
Abstract:
The use of portable multimedia devices is now an everyday reality. These devices offer high computing power, graphics capability, and memory: a mobile phone, for example, can play very high-quality video or handle 3D environments. The price of using these resources is higher battery consumption, which is sometimes excessive and greatly shortens the useful life of a battery charge. The Electronic and Microelectronic Design Group of the Universidad Politécnica de Madrid has opened a line of work that seeks to optimize the energy consumption of this type of device, specifically during video playback. The approach trades a reduced multimedia user experience for better battery usage: when the battery charge falls below a given threshold while the device is playing a high-quality video, the device dynamically reconfigures itself to consume less power for this task, reducing the frame rate or the resolution of the decoded video, or tolerating more noise in the decoded frames. It is also proposed to split video decoding and rendering across two processors, one general-purpose and one for digital signal processing, obtaining the same computing capacity as a single processor but at a lower frequency. The proposal is implemented on the BeagleBoard, based on a Texas Instruments OMAP3530 multicore processor with two cores: an ARM(1) Cortex-A8 and a DSP(2) from the C6000 family (TMS320C64+). The OMAP3530 also allows the clock frequency and supply voltage to be changed dynamically, further reducing the device's consumption. As the video player, a version of MPlayer is used that integrates a scalable video decoder, which allows the resolution or frame rate to be chosen dynamically at decoding time before display. The player runs on the ARM core, but since video decoding is computationally demanding and the ARM is not optimized for this kind of data processing, the decoding task is delegated to the DSP. The goal of this final-year project is that, while the video decoder runs on the DSP core and MPlayer on the ARM core of the OMAP3530, it be possible to choose dynamically which part of the video is decoded, that is, to select in real time the quality or layer of the video to be displayed. In this way, computational load can be taken off the ARM core and assigned to the DSP, which can process it at a lower frequency to save battery. (1) ARM: a general-purpose RISC (Reduced Instruction Set Computer) processor architecture developed by the British company ARM Holdings. (2) DSP: Digital Signal Processor; a processor-based system oriented toward high-speed mathematical computation, generally featuring several arithmetic logic units (ALUs) in order to perform several operations simultaneously.
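As an illustration of the battery-driven adaptation described above, the sketch below picks which layer of a scalable bitstream to decode from the current battery level; the thresholds and layer parameters are illustrative assumptions, not the project's values.

```python
LAYERS = [  # (name, resolution, fps), cheapest to decode first
    ("base", (320, 180), 12),
    ("mid",  (640, 360), 24),
    ("full", (1280, 720), 30),
]

def pick_layer(battery_pct, low=20.0, mid=50.0):
    """Choose which part of the scalable stream to decode."""
    if battery_pct < low:
        return LAYERS[0]
    if battery_pct < mid:
        return LAYERS[1]
    return LAYERS[2]

for pct in (80, 35, 10):
    name, (w, h), fps = pick_layer(pct)
    print(f"battery {pct:3d}% -> decode '{name}' layer, {w}x{h} at {fps} fps")
```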
Case study on mobile applications UX: effect of the usage of a cross-platform development framework
                                
Abstract:
Cross-platform development frameworks for mobile applications promise important advantages in cost cutting and ease of maintenance, making them a very attractive option for organizations interested in designing mobile applications for several platforms. Given that platform conventions are especially important for the User eXperience (UX) of mobile applications, using a framework in which the same code defines the behavior of the app on different platforms could have a negative impact on the UX. The objective of this study is to compare the cross-platform and native approaches in order to determine whether the selected development approach has any impact on users in terms of UX. To establish a baseline, a study of cross-platform frameworks was first performed to select the most appropriate one from a UX point of view. To achieve the objectives of this work, two development teams developed two versions of the same application: one using a framework that generates the Android and iOS versions automatically, and the other developing native versions of the same application. The alternative versions for each platform were evaluated with 37 users through a combination of a laboratory usability test and a longitudinal study. The results show that the differences are minimal in the Android version; in iOS, even though a reasonably good UX can be obtained with the framework by a UX-conscious design team, a higher level of UX can be achieved by developing directly in native code.
                                
Abstract:
The purpose of this project is to develop the courses module of CloudRoom, a Massive Online Open Courses (MOOCs) platform. This module is part of a service-oriented architecture (SOA) and a Cloud Computing infrastructure built on Amazon Web Services (AWS). Our goal is to design a robust Software as a Service (SaaS) product with the qualities expected of this kind of product: high availability, high performance, a great user experience, and great extensibility of the system. To achieve this, we integrate the latest technology trends in distributed systems development, namely Neo4j, Node.JS, RESTful services, and CoffeeScript, all following a PLAN-DO-CHECK development strategy using Scrum and agile methodology practices.
                                
Abstract:
This work discusses how usability analysis techniques can be applied to the development of web platforms. Nowadays it is common for services to be offered through web platforms to a very heterogeneous group of people. Usability analyses, in turn, are a very useful tool for understanding how people interact with computers and for improving application design. A good design improves the user experience, a fundamental factor in the success of any product that requires user interaction. The following pages describe the different phases of usability testing and detail how they were applied during the development of the project. Finally, the results obtained during the evaluation of the platform are presented and analyzed, indicating how they affected the design of the platform.
 
                    