870 results for Tangible User Interfaces
Abstract:
Submitted for the degree of Doctor from the Universidade de Vigo with international mention, Departamento de Informática
Abstract:
The emergence of smartphones with Wireless LAN (WiFi) network interfaces brought new challenges to application developers. The expected increase in user connectivity will shape users' expectations, for example regarding the performance of background applications. Unfortunately, the number and breadth of studies on the new patterns of user mobility and connectivity that result from the emergence of smartphones is still insufficient to support this claim. This paper contributes preliminary results from a large-scale study of the usage patterns of about 49,000 devices and 31,000 users who accessed at least one access point of the eduroam WiFi network on the campuses of the Lisbon Polytechnic Institute. Results confirm that the increasing number of smartphones has produced significant changes in the pattern of use, with impact on the amount of traffic and on users' connection time.
Abstract:
MSCC Dissertation in Computer Engineering
Abstract:
Advances in Brain-Machine Interfaces, resulting from progress in signal processing and artificial intelligence, are allowing us to access brain activity, decode it, and use it to command devices, be they artificial arms or computers. This matters all the more when the users are people who have lost the ability to communicate while keeping their cognitive capacities intact. The most extreme case of this situation is that of people affected by Locked-In Syndrome. This work aims to contribute to improving the quality of life of people affected by this syndrome by providing them with a means of communication adapted to their limitations. It is essentially a usability study applied to a type of user whose capacity for interaction is extremely diminished. In this research we begin by understanding Locked-In Syndrome and the limitations and capabilities of the people affected by it. We address neuroplasticity: what it is, and to what extent it matters for the use of Brain-Machine Interfaces. We analyse how these interfaces work and the scientific foundations that support them. Finally, with all this knowledge in hand, we investigate and develop methods that allow us to make the most of the user's limited capabilities when interacting with the system, minimising effort and maximising performance. To this end, a prototype was designed and implemented to validate the solutions found.
Abstract:
A brain-computer interface (BCI) is simply a device that reads and analyses brain waves and converts them into actions on a computer. With the evolution of BCIs and their growing availability to the public, it became possible to use BCIs for entertainment. In that vein, this thesis presents a study of brain-computer interfaces: what they are, what types of BCI exist, their use for entertainment, their limitations, and the future of this kind of interface. In addition, entertainment software controlled by a BCI (the Emotiv EPOC) was created, consisting of a Pong-style game and a music player. Through the BCI, the music player classifies and recommends songs to the user. This thesis concludes that it is possible to use BCIs for entertainment (games and content recommendation), although for games traditional input devices (mouse and keyboard) were found to still offer far greater precision.
Abstract:
Dissertation for the degree of Master in Industrial Engineering and Management
Abstract:
Master's Dissertation in Computer Engineering, 2nd semester, 2011/2012
Abstract:
Dissertation submitted in partial fulfillment of the requirements for the Degree of Master of Science in Geospatial Technologies.
Abstract:
Interest in engineering degrees in the field of Information and Communication Technologies (ICT) is falling year after year, which translates into fewer enrolments with each passing year. One of the main causes is a lack of motivation and interest in these degrees. This problem is attributed to the fact that traditional learning methodologies do not suit the needs and requirements of today's students, who have grown up surrounded by technology and are known as digital natives. Game-based learning emerges as a possible solution to this lack of interest in the areas of knowledge related to ICT engineering. This learning method consists of students learning ICT concepts while playing a game. Therefore, in the context of this final-year project, we focus on the design and implementation of a game for learning basic concepts of ICT engineering. Specifically, the design of this game follows the characteristics defined by a conceptual model that describes the elements needed to create puzzle-based games: the main elements that make up this type of game, and the hints that can be added to guide the student through the learning process with the game. As for the implementation of the game, technology based on tangible interfaces is used. To analyse the puzzle-based game designed and implemented for learning basic ICT engineering concepts, an evaluation was carried out with students from a school. Aspects such as the design of hints, the students' experience with the game, and learning with the game are analysed in order to draw the appropriate conclusions.
Abstract:
Current-day web search engines (e.g., Google) do not crawl and index a significant portion of the Web and, hence, web users relying solely on search engines are unable to discover and access a large amount of information in the non-indexable part of the Web. Specifically, dynamic pages generated from parameters provided by a user via web search forms (or search interfaces) are not indexed by search engines and cannot be found in search results. Such search interfaces give web users online access to myriads of databases on the Web. To obtain information from a web database of interest, a user issues a query by specifying query terms in a search form and receives the query results: a set of dynamic pages that embed the required information from the database. At the same time, issuing a query via an arbitrary search interface is an extremely complex task for any kind of automatic agent, including web crawlers, which, at least to date, do not even attempt to pass through web forms on a large scale. In this thesis, our primary object of study is the huge portion of the Web (hereafter referred to as the deep Web) hidden behind web search interfaces. We concentrate on three classes of problems around the deep Web: characterizing the deep Web, finding and classifying deep web resources, and querying web databases. Characterizing the deep Web: Though the term deep Web was coined in 2000, long ago by the standards of any web-related concept or technology, we still do not know many important characteristics of the deep Web. Another matter of concern is that existing surveys of the deep Web are predominantly based on studies of deep web sites in English. One can then expect that findings from these surveys may be biased, especially owing to the steady increase in non-English web content.
In this way, surveying national segments of the deep Web is of interest not only to national communities but to the whole web community as well. In this thesis, we propose two new methods for estimating the main parameters of the deep Web. We use the suggested methods to estimate the scale of one specific national segment of the Web and report our findings. We also build and make publicly available a dataset describing more than 200 web databases from that national segment. Finding deep web resources: The deep Web has been growing at a very fast pace; it has been estimated that there are hundreds of thousands of deep web sites. Due to the huge volume of information in the deep Web, there has been significant interest in approaches that allow users and computer applications to leverage this information. Most approaches have assumed that search interfaces to the web databases of interest are already discovered and known to query systems. However, such assumptions rarely hold, mostly because of the large scale of the deep Web: for any given domain of interest there are too many web databases with relevant content. Thus, the ability to locate search interfaces to web databases becomes a key requirement for any application accessing the deep Web. In this thesis, we describe the architecture of the I-Crawler, a system for finding and classifying search interfaces. Specifically, the I-Crawler is intentionally designed to be used in deep Web characterization studies and for constructing directories of deep web resources. Unlike almost all other approaches to the deep Web so far, the I-Crawler is able to recognize and analyze JavaScript-rich and non-HTML searchable forms. Querying web databases: Retrieving information by filling out web search forms is a typical task for a web user. This is all the more so as the interfaces of conventional search engines are also web forms.
At present, a user needs to manually provide input values to search interfaces and then extract the required data from the result pages. Manually filling out forms is cumbersome, and infeasible for complex queries, yet such queries are essential for many web searches, especially in the area of e-commerce. Automating the querying and retrieval of data behind search interfaces is therefore desirable and essential for tasks such as building domain-independent deep web crawlers and automated web agents, searching for domain-specific information (vertical search engines), and extracting and integrating information from various deep web resources. We present a data model for representing search interfaces and discuss techniques for extracting field labels, client-side scripts, and structured data from HTML pages. We also describe a representation of result pages and discuss how to extract and store the results of form queries. In addition, we present a user-friendly and expressive form query language that allows one to retrieve information behind search interfaces and extract useful data from the result pages based on specified conditions. We implement a prototype system for querying web databases and describe its architecture and component design.
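The extraction step this abstract describes, pulling structured records out of HTML result pages, can be illustrated with a minimal sketch. This is not the thesis's actual data model or query language; it is a hand-rolled example using only the Python standard library, and the markup class names (`result`, `title`, `price`) are hypothetical:

```python
from html.parser import HTMLParser

class ResultExtractor(HTMLParser):
    """Collect (title, price) pairs from a result page where each record
    looks like: <div class="result"><span class="title">..</span>
    <span class="price">..</span></div> (hypothetical markup)."""

    def __init__(self):
        super().__init__()
        self.records = []       # extracted (title, price) tuples
        self._field = None      # field name we are currently inside
        self._current = {}      # partially assembled record

    def handle_starttag(self, tag, attrs):
        cls = dict(attrs).get("class", "")
        if tag == "span" and cls in ("title", "price"):
            self._field = cls

    def handle_data(self, data):
        if self._field:
            self._current[self._field] = data.strip()
            self._field = None
            # once both fields are present, emit the record
            if {"title", "price"} <= self._current.keys():
                self.records.append(
                    (self._current["title"], self._current["price"]))
                self._current = {}

page = """
<div class="result"><span class="title">DSLR camera</span><span class="price">$299</span></div>
<div class="result"><span class="title">Tripod</span><span class="price">$45</span></div>
"""
extractor = ResultExtractor()
extractor.feed(page)
print(extractor.records)  # → [('DSLR camera', '$299'), ('Tripod', '$45')]
```

A real deep-web crawler would of course first have to discover the form, fill its fields, and fetch the dynamic page; the sketch covers only the final parsing stage, on a fixed in-memory page.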
Abstract:
The value and benefits of user experience (UX) are widely recognized in the modern world, and UX is seen as an integral part of many fields. This dissertation integrates UX and the understanding of end users into the early phases of software development. The concept of UX is still unclear, as witnessed by more than twenty-five definitions and an ongoing argument about its different aspects and attributes. This missing consensus creates a problem in linking UX to software development: how can software developers take the UX of end users into account when it is unclear to them what UX means for those users? Furthermore, currently known methods to estimate, evaluate, and analyse UX during software development are biased in favor of the phases where something concrete and tangible already exists. It would be beneficial to further elaborate on UX in the early phases of software development. Theoretical knowledge from the fields of UX and software development is presented and linked with surveyed and analysed UX attribute information from end users and UX professionals. The composition of the surveys around the 21 identified UX attributes is described, and the results are analysed in conjunction with end-user demographics. Finally, the utilization of the results is explained with a proof-of-concept utility, the Wizard of UX, which demonstrates how UX can be integrated into the early phases of software development. The process of designing, prototyping, and testing this utility is an integral part of this dissertation. The analyses show statistically significant dependencies between appreciation of UX attributes and surveyed end-user demographics. In addition, tests conducted by software developers and an industrial UX designer both indicate the benefits and necessity of the prototyped Wizard of UX utility.
According to the conducted tests, the utility meets the requirements set for it: it provides a way for software developers to raise their knowledge of UX and a possibility to consider the UX of end users through statistical user profiles during the early phases of software development. This dissertation produces new and relevant information for the UX and software development communities by demonstrating that it is possible to integrate UX into the early phases of software development.
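A "statistically significant dependency" between appreciation of a UX attribute and a demographic variable is typically checked with a test of independence. A minimal sketch with only the Python standard library; the dissertation's actual data and choice of test are not given here, and the counts below are entirely hypothetical:

```python
import math

def chi_square_2x2(table):
    """Pearson chi-square test of independence for a 2x2 contingency
    table of observed counts, table = [[a, b], [c, d]]."""
    (a, b), (c, d) = table
    n = a + b + c + d
    row = [a + b, c + d]          # row marginals
    col = [a + c, b + d]          # column marginals
    chi2 = 0.0
    for i, obs_row in enumerate(table):
        for j, obs in enumerate(obs_row):
            expected = row[i] * col[j] / n
            chi2 += (obs - expected) ** 2 / expected
    # survival function of chi-square with 1 degree of freedom
    p = math.erfc(math.sqrt(chi2 / 2))
    return chi2, p

# hypothetical counts: younger vs. older respondents who rated a given
# UX attribute as important vs. not important
chi2, p = chi_square_2x2([[40, 10], [25, 25]])
print(round(chi2, 2), p < 0.05)  # → 9.89 True
```

With these made-up counts the dependency would be significant at the 5% level; with real survey data one would also apply a continuity correction or Fisher's exact test for small cells.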
Abstract:
With the growth of new technologies, using online tools has become part of everyday life. This has a particular impact on researchers, as the data obtained from experiments needs to be analysed, and knowledge of programming has become mandatory even for pure biologists. Hence, VTT developed a new tool, R Executables (REX), a web application that provides a graphical interface for biological data functions such as image analysis, gene expression data analysis, plotting, and disease and control studies, employing R functions to produce the results. REX offers biologists an interactive application in which they can directly enter values and run the required analysis with a single click; the program processes the data in the background and returns results rapidly. With the growth of data and the load on the server, the interface developed problems concerning time consumption, a poor GUI, data storage issues, security, a minimally interactive user experience, and crashes with large amounts of data. This thesis describes the methods by which these problems were resolved to make REX a better application for the future. The old REX was developed using Python Django; the new version is implemented with Vaadin, a Java framework for developing web applications whose programming model is essentially Java with rich new components. Vaadin provides better security, better speed, and a good, interactive interface. In this thesis, a subset of REX functionality was selected, including IST bulk plotting and image segmentation, and implemented using Vaadin. I wrote 662 lines of code, with Vaadin as the front-end handler and the R language for back-end data retrieval, computation, and plotting. The application is optimized to allow further functionality to be migrated with ease from the old REX.
Future development will focus on adding high-throughput screening functions along with gene expression database handling.
Abstract:
This document presents the results of an empirical study on the use of mobile videoconferencing according to the user's context, in order to propose guidelines for the design of mobile video communication interfaces. Thanks to a rich exchange of information, this type of communication can create a strong sense of presence, but current interfaces lack the flexibility that would allow users to be creative and to have richer exchanges during a videoconference. We conducted a study with sixteen participants in three activities, observing their conversations, reactions, and behaviours. Two focus groups also served to identify the habits participants had developed through their regular use of videoconferencing. The results suggest an important difference between the use of the front and rear cameras of the mobile device, and the need to provide tools that offer more control over the conversational exchange. The study proposes several design guidelines for mobile video communication interfaces, concerning the construction of the user's mobile context.
Abstract:
In this lecture, we will focus on analyzing user goals in search query logs. Readings: M. Strohmaier, P. Prettenhofer, M. Lux, Different Degrees of Explicitness in Intentional Artifacts - Studying User Goals in a Large Search Query Log, CSKGOI'08 International Workshop on Commonsense Knowledge and Goal Oriented Interfaces, in conjunction with IUI'08, Canary Islands, Spain, 2008.
Abstract:
Abstract based on that of the publication.