971 results for Natural User Interfaces
Abstract:
This introduction provides an overview of the state of the art in Applications of Natural Language to Information Systems. Specifically, we analyze the need for such technologies to successfully address the new challenges of modern information systems, in which the exploitation of the Web as a main data source for business systems becomes a key requirement. It also discusses why Human Language Technologies themselves have shifted their focus to new areas of interest closely linked to the development of technology for the treatment and understanding of Web 2.0. These new technologies are expected to become the interfaces of the information systems to come. Moreover, we review current topics of interest to this research community and present the selection of manuscripts chosen by the program committee of the NLDB 2011 conference as representative cornerstone research works, highlighting in particular their contribution to the advancement of such technologies.
Abstract:
With the emergence and growing supply of mobile apps for museums, it becomes relevant to study the importance of every design aspect of those apps in order to provide users/visitors with a better museum experience. One of these aspects is the User Interface (UI), which can condition the quality of both the application experience and the museum experience, serving as an intermediary. Since interface design must combine usability with appearance (Schlatter and Levinson, 2013), the design must always appeal to the user, which also makes it a potential source of distraction. Hence, the concern of this dissertation is to understand how the user's attention can be distributed in a balanced way between the application and the exhibition through User Interface design. To better understand the issue of sharing attention between the physical experience and the application, questions such as what constitutes a distraction during a museum visit and what the attention process comprises are addressed. It was thus possible to identify good and bad practices for designing a mobile UI that meets visual criteria without demanding too much of the visitor's attention, serving as a complement to the visit.
Abstract:
This work is an exploratory study of the processing of entertainment messages. Its objective was to propose and test a message-processing model dedicated to the comprehension of digital games. To accomplish this, an extensive survey of techniques for observing users interacting with software and media was carried out, in order to understand the strengths and limitations of each technique and its approach to the problem. A survey of message-processing models in traditional and new media was also conducted. This made it possible to propose a new model for analysing the processing of entertainment messages. Once the theoretical model was created, it was necessary to test whether the elements proposed as participants in this process were correct and whether they could adequately capture the similarities and differences in the interaction between players and the different media. For this reason, a data-collection instrument was built and validated with digital game designers, since these professionals know the process of creating a game, its elements and its goals. A first test was then conducted with digital game players of various ages on personal computers and interactive digital TV, in order to verify how the elements of the model related to one another. The following test collected data from digital game players on mobile phones, aiming to capture how an experience is formed through the processing of the entertainment message in a medium with numerous limitations, screen and key size among them.
As a result, statistical tests showed that games played on media such as personal computers appeal more through their aesthetic aspects, whereas the enjoyment of a game on a mobile phone depends far more on its ability to sustain interaction than a game played on a PC. It is concluded that the processing of entertainment messages depends on the ability of their creators to understand the limits of each medium and to use the elements that make up a game's environment appropriately, in order to bring about its enjoyment. (AU)
Abstract:
Automatic ontology building is a vital issue in many fields where ontologies are currently built manually. This paper presents a user-centred methodology for ontology construction based on the use of Machine Learning and Natural Language Processing. In our approach, the user selects a corpus of texts and sketches a preliminary ontology (or selects an existing one) for a domain, with a preliminary vocabulary associated with the elements of the ontology (lexicalisations). Examples of sentences involving such lexicalisations (e.g. the ISA relation) in the corpus are automatically retrieved by the system. Retrieved examples are validated by the user and used by an adaptive Information Extraction system to generate patterns that discover other lexicalisations of the same objects in the ontology, possibly identifying new concepts or relations. New instances are added to the existing ontology or used to tune it. This process is repeated until a satisfactory ontology is obtained. The methodology largely automates the ontology construction process, and the output is an ontology with an associated trained learner that can be used for further ontology modifications.
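One bootstrapping step of such a loop can be illustrated with a minimal sketch (the seed pattern, helper name and sample corpus are invented for illustration, not the paper's actual system): a pattern retrieves candidate ISA lexicalisations from the corpus so that the user can validate them before they are fed back to the learner.

```python
import re

# Seed pattern for one ISA lexicalisation, e.g. "X is a Y".
ISA_PATTERN = re.compile(r"\b(\w+) is a (\w+)\b", re.IGNORECASE)

def extract_isa_candidates(corpus):
    """Return (instance, concept) pairs matched by the seed pattern,
    as candidates for user validation."""
    pairs = []
    for sentence in corpus:
        for m in ISA_PATTERN.finditer(sentence):
            pairs.append((m.group(1).lower(), m.group(2).lower()))
    return pairs

corpus = [
    "A sparrow is a bird that nests in hedges.",
    "Python is a language used for scripting.",
]
print(extract_isa_candidates(corpus))
# Validated pairs would then seed the next round of pattern induction.
```

In the full methodology, the validated examples train an adaptive Information Extraction system that induces further, less obvious patterns; this sketch only shows the seed-retrieval step.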
Abstract:
Today, the data available to tackle many scientific challenges is vast in quantity and diverse in nature. The exploration of heterogeneous information spaces requires suitable mining algorithms as well as effective visual interfaces. miniDVMS v1.8 provides a flexible visual data mining framework which combines advanced projection algorithms developed in the machine learning domain with visual techniques developed in the information visualisation domain. The advantage of this interface is that the user is directly involved in the data mining process. Principled projection methods, such as generative topographic mapping (GTM) and hierarchical GTM (HGTM), are integrated with powerful visual techniques, such as magnification factors, directional curvatures, parallel coordinates, and user interaction facilities, to provide this integrated visual data mining framework. The software also supports conventional visualisation techniques such as principal component analysis (PCA), Neuroscale, and PhiVis. This user manual gives an overview of the purpose of the software tool, highlights some of the issues to be considered when creating a new model, and provides information about how to install and use the tool. The manual does not require readers to be familiar with the algorithms the tool implements; basic computing skills are enough to operate the software.
Abstract:
This research investigates the general user interface problems in using networked services. Some of the problems are: users have to recall machine names and procedures to invoke networked services; interactions with some of the services are by means of menu-based interfaces which are quite cumbersome to use; and inconsistencies exist between the interfaces of different services because they were developed independently. These problems have to be removed so that users can use the services effectively. A prototype system has been developed to help users interact with networked services. It consists of software which gives the user an easy and consistent interface to the various services. The prototype is based on a graphical user interface and includes the following applications: Bath Information & Data Services; electronic mail; and a file editor. The prototype incorporates an online help facility to assist users of the system. The prototype can be divided into two parts: the user interface part, which manages interaction with the user, and the communication part, which enables communication with networked services to take place. The implementation follows an object-oriented approach in which both the user interface part and the communication part are objects. The essential characteristics of object-orientation (abstraction, encapsulation, inheritance and polymorphism) all contribute to the better design and implementation of the prototype. The Smalltalk Model-View-Controller (MVC) methodology provided the framework for the construction of the prototype user interface. The purpose of the development was to study the effectiveness of users' interaction with networked services. Once the prototype was complete, test users were asked to use the system so that its effectiveness could be evaluated. The evaluation of the prototype is based on observation, i.e. observing the way users use the system, and on opinion ratings given by the users.
Recommendations to further improve the prototype are given based on the results of the evaluation.
Abstract:
This paper aims to identify the communication goal(s) of a user's information-seeking query, out of a finite set of within-domain goals, in natural language queries. It proposes using Tree-Augmented Naive Bayes networks (TANs) for goal detection. The problem is formulated as N binary decisions, each performed by a TAN. A comparative study was carried out against Naive Bayes, fully-connected TANs, and multi-layer neural networks. Experimental results show that TANs consistently give better results when tested on the ATIS and DARPA Communicator corpora.
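The N-binary-decisions formulation can be sketched in a few lines (goal names and training utterances are invented, and a plain Naive Bayes log-likelihood-ratio scorer stands in for the TANs, which would add tree-structured feature dependencies on top of it): one binary detector per goal decides "goal present" vs "not present", and a query may trigger several detectors.

```python
from collections import Counter
import math

def train_nb(positives, negatives):
    """Return per-word log-likelihood ratios (add-one smoothing)
    for one binary goal detector."""
    pos, neg = Counter(), Counter()
    for s in positives:
        pos.update(s.split())
    for s in negatives:
        neg.update(s.split())
    vocab = set(pos) | set(neg)
    n_pos, n_neg = sum(pos.values()), sum(neg.values())
    return {w: math.log((pos[w] + 1) / (n_pos + len(vocab)))
              - math.log((neg[w] + 1) / (n_neg + len(vocab)))
            for w in vocab}

def detect(query, models):
    """Run every binary detector; return the goals that fire."""
    return [goal for goal, llr in models.items()
            if sum(llr.get(w, 0.0) for w in query.split()) > 0]

models = {
    "flight_booking": train_nb(["book a flight to boston"],
                               ["what is the fare", "show ground transport"]),
    "fare_info": train_nb(["what is the fare"],
                          ["book a flight to boston", "show ground transport"]),
}
print(detect("book a flight", models))   # -> ['flight_booking']
```

Each detector is independent, so overlapping goals in a single query are handled naturally, which is the point of the N-binary-decisions decomposition.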
Abstract:
Linked Data semantic sources, in particular DBpedia, can be used to answer many user queries. PowerAqua is an open multi-ontology Question Answering (QA) system for the Semantic Web (SW). However, the emergence of Linked Data, characterized by its openness, heterogeneity and scale, introduces a new dimension to the Semantic Web scenario, in which exploiting the relevant information to extract answers for Natural Language (NL) user queries is a major challenge. In this paper we discuss the issues and lessons learned from our experience of integrating PowerAqua as a front-end for DBpedia and a subset of Linked Data sources. As such, we go one step beyond the state of the art on end-user interfaces for Linked Data by introducing the mapping and fusion techniques needed to translate a user query across multiple sources. Our first informal experiments probe whether it is in fact feasible to obtain answers to user queries by composing information across semantic sources and Linked Data, even in its current form, where the strength of Linked Data is more a by-product of its size than of its quality. We believe our experiences can be extrapolated to a variety of end-user applications that wish to scale, open up, exploit and re-use what is possibly the greatest wealth of data about everything in the history of Artificial Intelligence. © 2010 Springer-Verlag.
Abstract:
Formulating complex queries is hard, especially when users cannot understand all the data structures of multiple complex knowledge bases. We see a gap between simplistic but user-friendly tools and formal query languages. Building on an example comparison search, we propose an approach in which reusable search components take an intermediary role between the user interface and formal query languages.
Abstract:
Term dependence is a natural consequence of language use. Its successful representation has been a long-standing goal of Information Retrieval research. We present a methodology for the construction of a concept hierarchy that takes into account the three basic dimensions of term dependence. We also introduce a document evaluation function that allows the concept hierarchy to be used as a user profile for Information Filtering. Initial experimental results indicate that this is a promising approach for incorporating term dependence into the way documents are filtered.
Abstract:
Over the past two years there have been several large-scale disasters (the Haitian earthquake, the Australian floods, the UK riots, and the Japanese earthquake) that have seen wide use of social media for disaster response, often in innovative ways. This paper provides an analysis of the ways in which social media has been used in public-to-public communication and public-to-government-organisation communication. It discusses four ways in which disaster response has been changed by social media: 1. Social media appears to be displacing traditional media as a means of communication with the public during a crisis; in particular, social media influences the way traditional media communication is received and distributed. 2. We propose that user-generated content may provide a new source of information for emergency management agencies during a disaster, although there is uncertainty with regard to the reliability and usefulness of this information. 3. There are also indications that social media provides a means for the public to self-organise in ways that were not previously possible; however, the type and usefulness of self-organisation sometimes works against efforts to mitigate the outcome of the disaster. 4. Social media seems to influence information flow during a disaster. In the past most information flowed in a single direction, from government organisation to public, but social media upends this model: the public can diffuse information with ease, and also expect interaction with government organisations rather than a simple one-way information flow. These changes have implications for the way government organisations communicate with the public during a disaster. The predominant model for explaining this form of communication, Crisis and Emergency Risk Communication (CERC), was developed in 2005, before social media achieved widespread popularity.
We present a modified form of the CERC model that integrates social media into the disaster communication cycle and addresses the ways in which social media has changed communication between the public and government organisations during disasters.
Abstract:
Many countries have an increasingly ageing population. In recent years, mobile technologies have had a massive impact on social and working lives. As the size of the older user population rises, many people will want to continue professional, social and lifestyle usage of mobiles into their 70s and beyond. Mobile technologies can lead to increased community involvement and personal independence. While mobile technologies can provide many opportunities, the ageing process can interfere with their use. This workshop brings together researchers who are re-imagining common mobile interfaces so that they are better suited to use by older adults.
Abstract:
The design of interfaces to facilitate user search has become critical for search engines, e-commerce sites, and intranets. This study investigated the use of targeted instructional hints to improve search by measuring their quantitative effects on users' performance and satisfaction. The effects of syntactic, semantic and exemplar search hints on user behavior were evaluated in an empirical investigation using naturalistic scenarios. Combining the three search hint components, each with two levels of intensity, in a factorial design generated eight search engine interfaces. Eighty participants took part in the study and each completed six realistic search tasks. Results revealed that the inclusion of search hints improved user effectiveness, efficiency and confidence when using the search interfaces, but with complex interactions that call for specific guidelines for search interface designers. These design guidelines will allow search designers to create more effective interfaces for a variety of search applications.
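The 2x2x2 factorial design described above can be enumerated directly (the component and level names below are taken from the abstract; the dictionary layout is just one way to represent a condition): three hint components, each at two intensity levels, yield the eight interface conditions.

```python
from itertools import product

# Three hint components crossed with two intensity levels each:
# a full-factorial design of 2**3 = 8 interface conditions.
components = ["syntactic", "semantic", "exemplar"]
levels = ["low", "high"]

interfaces = [dict(zip(components, combo))
              for combo in product(levels, repeat=len(components))]

print(len(interfaces))        # 8 conditions
print(interfaces[0])          # e.g. all three components at 'low'
```

Enumerating the conditions this way also makes it easy to counterbalance the assignment of the eighty participants across the eight cells.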
Abstract:
The purpose of this research is to analyze different daylighting systems in schools in the city of Natal/RN. Although daylight is abundantly available locally, architectural recommendations relating sky conditions, dimensions of daylighting systems, shading, fraction of sky visibility, required illuminance, glare, period of occupation and depth of the lit area are scarce and diffuse. This research examines a selection of aperture systems to explore the potential of natural light offered by each. The method is divided into three phases. The first phase is modeling, which involves the construction of a three-dimensional model of a classroom in SketchUp 2014, following recommendations presented in the literature for obtaining good environmental comfort in school settings. The second phase is the dynamic computer simulation of daylight performance using the Daysim software. The input data are the 2009 climate file for the city of Natal/RN, the classroom volume in 3ds format with the optical properties assigned to each surface, the sensor mapping file and the user load file. The results produced in the simulation are organized in a spreadsheet prepared by Carvalho (2014) to determine the occurrence of useful daylight illuminance (UDI) in the range of 300 to 3000 lux and to build illuminance curves and UDI contour plots that identify the uniformity of light distribution, compliance with the minimum illuminance level, and the occurrence of glare.
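The UDI metric used above reduces to a simple fraction, sketched here with invented sample readings (the function name and the hourly values are illustrative, not from the study): the share of occupied hours in which simulated illuminance at a sensor falls within the useful 300-3000 lux band.

```python
# Useful daylight illuminance (UDI): fraction of occupied hours whose
# illuminance lies within the "useful" band, here 300-3000 lux.
def useful_daylight_illuminance(lux_series, low=300.0, high=3000.0):
    """Return the share of hours with illuminance in [low, high]."""
    useful = sum(1 for lux in lux_series if low <= lux <= high)
    return useful / len(lux_series)

# Illustrative hourly sensor readings for one occupied day slice.
hourly_lux = [120, 450, 900, 2500, 3400, 310]
print(useful_daylight_illuminance(hourly_lux))  # 4 of 6 hours are useful
```

Readings below the band would count toward "insufficient daylight" and those above it toward potential glare, which is why the study pairs UDI with an explicit glare check.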