874 results for User Interfaces


Relevance: 20.00%

Publisher:

Abstract:

The proposal presented in this thesis is to provide designers of knowledge-based supervisory systems for dynamic systems with a framework that facilitates their tasks and avoids interface problems among tools, data flow and management. The approach is intended to assist both control engineers and process engineers in their tasks. The use of AI technologies to diagnose and supervise control loops and, of course, to assist process supervisory tasks such as fault detection and diagnosis, is within the scope of this work. Special effort has been put into the integration of tools for assisting the design of expert supervisory systems. To this end, the experience of Computer Aided Control Systems Design (CACSD) frameworks has been analysed and used to design a Computer Aided Supervisory Systems Design (CASSD) framework; in this sense, several basic facilities are required to be available in the proposed framework.

In recent years, Artificial Intelligence has contributed to solving problems encountered in the tasks of computing units, whether the computers are distributed to interact with one another or operate in any other environment (Distributed Artificial Intelligence). Information Technologies enable the creation of novel solutions to specific problems by applying findings from diverse research areas. Our work is directed at creating user models through a multidisciplinary approach, drawing on principles from psychology, distributed artificial intelligence and machine learning to create user models in open environments; one of these is Ambient Intelligence based on User Models with incremental and distributed learning functions (known as Smart User Models). Building on these user models, we direct this research at acquiring the important user characteristics that determine the user's dominant scale of values in the topics of greatest interest to him or her, developing a methodology to obtain the user's Human Values Scale with respect to objective, subjective and emotional characteristics (particularly in Recommender Systems). One area that has received little research attention is the inclusion of the human values scale in information systems. Recommender systems, user models and information systems take into account only the user's preferences and emotions [Velásquez, 1996, 1997; Goldspink, 2000; Conte and Paolucci, 2001; Urban and Schmidt, 2001; Dal Forno and Merlone, 2001, 2002; Berkovsky et al., 2007c]. The main focus of our research is therefore a methodology that generates a human values scale for the user from the user model.
We present results obtained from a case study using objective, subjective and emotional characteristics in the areas of banking and restaurant services, in which the methodology proposed in this research was put to the test. The main contributions of this thesis are: the development of a methodology that, given a user model with objective, subjective and emotional attributes, obtains the user's Human Values Scale. The proposed methodology builds on existing applications, in which all connections between users, agents and domains are characterised by these attributes; no extra effort is therefore required from the user.

One of the main challenges for developers of new human-computer interfaces is to provide a more natural way of interacting with computer systems, avoiding excessive use of hand and finger movements; this also offers a valuable alternative communication pathway to people with motor disabilities. This paper describes the construction of a low-cost eye tracker using a fixed-head setup: a webcam, a laptop and an infrared lighting source were used together with a simple frame to fix the user's head. Furthermore, the various image processing techniques used to locate the centre of the pupil, and different methods to calculate the point of gaze, are discussed in detail. An overall accuracy of 1.5 degrees was obtained while keeping the hardware cost of the device below 100 euros.
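The pupil-locating step can be sketched in a few lines: under infrared lighting the pupil appears as the darkest blob in the frame, so thresholding the darkest pixels and taking their centroid already gives a rough centre estimate. The snippet below is a minimal NumPy sketch run on a synthetic frame, not the paper's actual filtering pipeline, and the `dark_fraction` parameter is an assumption:

```python
import numpy as np

def pupil_centre(gray, dark_fraction=0.02):
    """Estimate the pupil centre as the centroid of the darkest pixels.

    gray: 2-D array of grey values (the pupil is dark under IR lighting).
    dark_fraction: fraction of pixels assumed to belong to the pupil blob.
    """
    thresh = np.quantile(gray, dark_fraction)   # grey level cutting off the darkest pixels
    ys, xs = np.nonzero(gray <= thresh)         # coordinates of the dark blob
    return xs.mean(), ys.mean()                 # centroid (x, y)

# Synthetic test frame: bright background with a dark "pupil" disc at (90, 40).
frame = np.full((120, 160), 200.0)
yy, xx = np.mgrid[0:120, 0:160]
frame[(xx - 90) ** 2 + (yy - 40) ** 2 <= 15 ** 2] = 10.0

cx, cy = pupil_centre(frame)
print(round(cx), round(cy))  # → 90 40
```

A real pipeline would first smooth the image and reject corneal reflections before taking the centroid; the gaze point is then obtained by mapping the pupil centre through a per-user calibration.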

The ISO 9241 series of standards sets out criteria for the ergonomics of human-system interaction. In markets with a huge variety of offers and little room for differentiation, providers can gain a decisive competitive advantage through user-oriented interfaces; a precondition is that relevant information can be obtained to support entrepreneurial decisions in this regard. To test how users of universal search result pages use those pages and attend to their different elements, an eye tracking experiment with a mixed design was developed. Twenty subjects were confronted with search engine result pages (SERPs) and instructed to make a decision, under the conditions national vs. international city and with vs. without a miniaturized Google map. Parameters such as fixation count, fixation duration and time to first fixation were computed from the eye tracking raw data and supplemented by click rate data as well as data from questionnaires. Results of this pilot study revealed some remarkable facts, such as a vampire effect of the miniaturized Google maps. Furthermore, Google maps did not shorten the decision-making process, Google ads were not fixated, and visual attention on SERPs was influenced both by the position of elements on the SERP and by the user's familiarity with the search target. These results support the theory of Amount of Invested Mental Effort (AIME) and give providers empirical evidence for taking users' expectations into account. The results also indicated that the task-oriented goal mode of participants moderated the attention spent on ads. Most importantly, SERPs with images held the viewers' attention much longer than those without images; this unique selling proposition may lead to a distortion of competition in such markets.
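The eye tracking parameters named above are straightforward to derive once fixations have been detected from the raw gaze samples. A minimal sketch, assuming fixations are given as `(x, y, start_ms, duration_ms)` tuples and an area of interest (AOI) such as the miniaturized map is a rectangle (the study's actual AOI definitions and fixation filter are not specified here):

```python
def aoi_metrics(fixations, aoi):
    """Fixation metrics for one area of interest (AOI).

    fixations: list of (x, y, start_ms, duration_ms) tuples, assumed to be
               already detected from the raw gaze data.
    aoi:       (x0, y0, x1, y1) rectangle in screen coordinates.
    """
    x0, y0, x1, y1 = aoi
    hits = [f for f in fixations if x0 <= f[0] <= x1 and y0 <= f[1] <= y1]
    return {
        "fixation_count": len(hits),
        "total_duration_ms": sum(f[3] for f in hits),
        "time_to_first_fixation_ms": min((f[2] for f in hits), default=None),
    }

# Three fixations; the last two land inside the map AOI.
fixes = [(100, 50, 0, 180), (420, 300, 200, 250), (430, 310, 470, 300)]
print(aoi_metrics(fixes, (400, 280, 500, 340)))
# → {'fixation_count': 2, 'total_duration_ms': 550, 'time_to_first_fixation_ms': 200}
```

Per-AOI metrics like these, compared across the map/no-map conditions, are exactly what allows effects such as the vampire effect to be quantified.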

It is no novelty that the current paradigm is based on the Internet, in which more and more applications are changing their business model with respect to licensing and maintenance, so as to offer the end user an application that is more affordable in terms of licensing and maintenance costs, since the applications are distributed, eliminating the capital and operational costs inherent in a centralised architecture. With the spread of Internet-based Application Programming Interfaces (APIs), developers can now build applications that use functionality provided by third parties without having to program it from scratch. In this context, the Google Apps APIs allow applications to be distributed to a very wide market and integrated with productivity tools, providing an opportunity for the diffusion of ideas and concepts. This work describes the design and implementation of a platform, using HTML5, Javascript, PHP and MySQL with Google Apps integration, whose goal is to let the user prepare budgets: from the calculation of composite cost prices, through the preparation of sale prices, to the drafting of the bill of quantities and the corresponding schedule.
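As an illustration of the budgeting core described above: a composite cost price is the sum of quantity × unit cost over the components, and the sale price adds a margin on top. The sketch below is hypothetical; the platform's actual pricing rules, field names and margin handling are not given in the abstract:

```python
def sale_price(components, margin=0.20):
    """Composite cost price = sum of (quantity x unit cost) over components;
    the sale price then applies a profit margin on top of that cost.

    components: list of (quantity, unit_cost) pairs.
    margin:     fractional markup (assumed; the platform's rule may differ).
    """
    cost = sum(q * c for q, c in components)
    return cost, round(cost * (1 + margin), 2)

# e.g. 2 units at 30.00 plus 5 units at 4.50, with a 20% margin
print(sale_price([(2, 30.0), (5, 4.5)]))  # → (82.5, 99.0)
```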

The primary objective of this study was to document the benefits and possible detriments of combining ipsilateral acoustic hearing in the cochlear implant ear of a patient with preserved low-frequency residual hearing after cochlear implantation. The secondary aim was to examine the efficacy of various cochlear implant mapping and hearing aid fitting strategies in relation to electro-acoustic benefits.

This paper discusses a study to determine the effectiveness of the Hearing Aid Performance Inventory (HAPI) on hearing aid outcomes.

Compute grids are used widely in many areas of environmental science, but there has been limited uptake of grid computing by the climate modelling community, partly because the characteristics of many climate models make them difficult to use with popular grid middleware systems. In particular, climate models usually produce large volumes of output data, and running them usually involves complicated workflows implemented as shell scripts. For example, NEMO (Smith et al., 2008) is a state-of-the-art ocean model currently used for operational ocean forecasting in France, and soon to be used in the UK for both ocean forecasting and climate modelling. On a typical modern cluster, a one-year global ocean simulation at 1-degree resolution takes about three hours on 40 processors and produces roughly 20 GB of output as 50,000 separate files. 50-year simulations are common, during which the model is resubmitted as a new job after each year. Running NEMO relies on a set of complicated shell scripts and command utilities for data pre-processing and post-processing prior to job resubmission. Grid Remote Execution (G-Rex) is a pure Java grid middleware system that allows scientific applications to be deployed as Web services on remote computer systems, and then launched and controlled as if they were running on the user's own computer. Although G-Rex is general-purpose middleware, it has two key features that make it particularly suitable for remote execution of climate models: (1) output from the model is transferred back to the user while the run is in progress, preventing it from accumulating on the remote system and allowing the user to monitor the model; (2) the client component is a command-line program that can easily be incorporated into existing model workflow scripts.
G-Rex has a REST (Fielding, 2000) architectural style, which allows client programs to be very simple and lightweight, and allows users to interact with model runs using only a basic HTTP client (such as a Web browser or the curl utility) if they wish. This design also allows new client interfaces to be developed in other programming languages with relatively little effort. The G-Rex server is a standard Web application that runs inside a servlet container such as Apache Tomcat, and is therefore easy for system administrators to install and maintain. G-Rex is employed as the middleware for the NERC Cluster Grid, a small grid of HPC clusters belonging to collaborating NERC research institutes. Currently the NEMO (Smith et al., 2008) and POLCOMS (Holt et al., 2008) ocean models are installed, and there are plans to install the Hadley Centre's HadCM3 model for use in the decadal climate prediction project GCEP (Haines et al., 2008). The science projects involving NEMO on the Grid have a particular focus on data assimilation (Smith et al., 2008), a technique that constrains model simulations with observations. The POLCOMS model will play an important part in the GCOMS project (Holt et al., 2008), which aims to simulate the world's coastal oceans. A typical use of G-Rex by a scientist to run a climate model on the NERC Cluster Grid proceeds as follows: (1) the scientist prepares input files on his or her local machine; (2) using information provided by the Grid's Ganglia monitoring system, the scientist selects an appropriate compute resource; (3) the scientist runs the relevant workflow script on his or her local machine, unmodified except that calls to run the model (e.g. with mpirun) are simply replaced with calls to "GRexRun"; (4) the G-Rex middleware automatically handles the uploading of input files to the remote resource, and the downloading of output files back to the user, including their deletion from the remote system, during the run; (5) the scientist monitors the output files, using familiar analysis and visualization tools on his or her own local machine. G-Rex is well suited to climate modelling because it addresses many of the middleware usability issues that have led to limited uptake of grid computing by climate scientists. It is a lightweight, low-impact and easy-to-install solution, currently designed for use in relatively small grids such as the NERC Cluster Grid. A current topic of research is the use of G-Rex as an easy-to-use front end to larger-scale grid resources such as the UK National Grid Service.
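Step (3) of the workflow above amounts to a trivial script rewrite: direct MPI launches are routed through the G-Rex client instead, while pre-processing, post-processing and resubmission logic stay untouched. The sketch below illustrates that substitution; the exact GRexRun argument syntax is an assumption, and only the substitution idea comes from the text:

```python
import re

def grexify(script):
    """Rewrite a model workflow script so that lines launching the model
    with mpirun call the G-Rex client instead. Everything else in the
    script is left exactly as it was."""
    # Replace a leading 'mpirun' on any line, preserving indentation.
    return re.sub(r"^(\s*)mpirun\b", r"\1GRexRun", script, flags=re.M)

workflow = "pre_process.sh\nmpirun -np 40 ./nemo.exe\npost_process.sh\n"
print(grexify(workflow))
# → pre_process.sh
#   GRexRun -np 40 ./nemo.exe
#   post_process.sh
```

This is the sense in which G-Rex is "low-impact": the scientist's existing workflow survives with a one-token change per model launch.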

In the summer of 1982, the ICLCUA CAFS Special Interest Group defined three subject areas for working party activity: 1) interfaces with compilers and databases, 2) end-user language facilities and display methods, and 3) text-handling and office automation. The CAFS SIG convened one working party to address the first subject, with the following terms of reference: 1) review facilities and map requirements onto them; 2) "Database or CAFS" or "Database on CAFS"; 3) training needs for users to bridge to new techniques; and 4) repair specifications to cover gaps in software. The working party interpreted the topic broadly as the data processing professional's, rather than the end-user's, view of and relationship with CAFS. This report is the result of the working party's activities. For good reasons, the report's content exceeds the terms of reference in their strictest sense. For example, we examine QUERYMASTER, which ICL deems an end-user tool, from both the DP and end-user perspectives. First, it is the only interface to CAFS in the current SV201. Second, the DP department needs to understand the end-user's interface to CAFS. Third, the other subjects have not yet been addressed by other active working parties.

For those few readers who do not know, CAFS is a system developed by ICL to search through data at speeds of several million characters per second. Its full name is Content Addressable File Store Information Search Processor, CAFS-ISP or CAFS for short. It is an intelligent hardware-based searching engine, currently available with both ICL's 2966 family of computers and the recently announced Series 39, operating within the VME environment. It uses content addressing techniques to perform fast searches of data or text stored on discs: almost all fields are equally accessible as search keys. Software in the mainframe generates a search task; the CAFS hardware performs the search, and returns the hit records to the mainframe. Because special hardware is used, the searching process is very much more efficient than searching performed by any software method. Various software interfaces are available which allow CAFS to be used in many different situations. CAFS can be used with existing systems without significant change. It can be used to make online enquiries of mainframe files or databases or directly from user written high level language programs. These interfaces are outlined in the body of the report.
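The division of labour described above can be mimicked in software as a simple streaming filter: the mainframe states a search condition, the engine tests every record and returns only the hits. The sketch below is a toy model of that behaviour only, not of the hardware, and the example records are invented:

```python
def cafs_search(records, predicate):
    """Toy model of CAFS-style searching: every record streamed off disc
    is tested against the search condition, and only the hit records are
    returned to the mainframe. (In real CAFS this test runs in hardware
    at disc transfer speed, which is what makes it fast.)"""
    return [r for r in records if predicate(r)]

# Content addressing: any field can serve as the search key.
staff = [{"name": "Ada", "dept": "R&D"}, {"name": "Bob", "dept": "Sales"}]
print(cafs_search(staff, lambda r: r["dept"] == "R&D"))
# → [{'name': 'Ada', 'dept': 'R&D'}]
```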

The ultimate criterion of success for interactive expert systems is that they will be used, and used to effect, by individuals other than the system developers. A key ingredient of success in most systems is involving users in the specification and development of systems as they are being built. However, until recently, system designers have paid little attention to ascertaining user needs and to developing systems with corresponding functionality and appropriate interfaces to match those requirements. Although the situation is beginning to change, many developers do not know how to go about involving users, or else tackle the problem in an inadequate way. This paper discusses the need for user involvement and considers why many developers are still not involving users in an optimal way. It looks at the different ways in which users can be involved in the development process and describes how to select appropriate techniques and methods for studying users. Finally, it discusses some of the problems inherent in involving users in expert system development, and recommends an approach which incorporates both ethnographic analysis and formal user testing.

This paper describes the user modeling component of EPIAIM, a consultation system for data analysis in epidemiology. The component is aimed at representing knowledge of concepts in the domain, so that their explanations can be adapted to user needs. The first part of the paper describes two studies aimed at analysing user requirements. The first one is a questionnaire study which examines the respondents' familiarity with concepts. The second one is an analysis of concept descriptions in textbooks and from expert epidemiologists, which examines how discourse strategies are tailored to the level of experience of the expected audience. The second part of the paper describes how the results of these studies have been used to design the user modeling component of EPIAIM. This module works in a two-step approach. In the first step, a few trigger questions allow the activation of a stereotype that includes a "body" and an "inference component". The body is the representation of the body of knowledge that a class of users is expected to know, along with the probability that the knowledge is known. In the inference component, the learning process of concepts is represented as a belief network. Hence, in the second step the belief network is used to refine the initial default information in the stereotype's body. This is done by asking a few questions on those concepts where it is uncertain whether or not they are known to the user, and propagating this new evidence to revise the whole situation. The system has been implemented on a workstation under UNIX. An example of functioning is presented, and advantages and limitations of the approach are discussed.
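The two-step approach can be sketched concretely: the stereotype supplies a prior probability that each concept is known, the system asks only about the concepts whose probability is too uncertain, and the answers are propagated to related concepts. The sketch below stands in for EPIAIM's belief network with a simple weighted nudge along assumed influence edges; the concept names and numbers are purely illustrative:

```python
def uncertain(stereotype, lo=0.3, hi=0.7):
    """Concepts whose 'known' probability is too uncertain to trust the
    stereotype default -- these are the ones worth asking the user about."""
    return [c for c, p in stereotype.items() if lo < p < hi]

def refine(stereotype, answers, influence):
    """Fold the user's yes/no answers into the beliefs and propagate them
    to related concepts. A stand-in for belief-network propagation: each
    edge (a, b) with weight w nudges b toward a's observed value."""
    beliefs = dict(stereotype)
    for concept, known in answers.items():
        beliefs[concept] = 1.0 if known else 0.0   # observed evidence
    for (a, b), w in influence.items():
        if a in answers:
            beliefs[b] = (1 - w) * beliefs[b] + w * beliefs[a]
    return beliefs

prior = {"odds_ratio": 0.5, "relative_risk": 0.6, "mean": 0.95}
print(uncertain(prior))  # → ['odds_ratio', 'relative_risk']

# Knowing the odds ratio makes knowing relative risk more likely.
new = refine(prior, {"odds_ratio": True}, {("odds_ratio", "relative_risk"): 0.5})
print(round(new["relative_risk"], 2))  # → 0.8
```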

Context: Learning can be regarded as knowledge construction, in which prior knowledge and experience serve as the basis on which learners expand their knowledge base. Such a process of knowledge construction has to take place continuously in order to enhance the learner's competence in a competitive working environment. As information consumers, individual users demand personalised information provision that meets their own specific purposes, goals, and expectations. Objectives: Current methods in requirements engineering are capable of modelling common users' behaviour in the domain of knowledge construction. Users' requirements can be represented as a case in the defined structure, which can be reasoned over to enable requirements analysis. Such analysis needs to be enhanced so that personalised information provision can be tackled and modelled; however, suitable modelling methods to achieve this end are lacking. This paper presents a new ontological method for capturing individual users' requirements and transforming them into personalised information provision specifications, so that the right information can be provided to the right user for the right purpose. Method: An experiment was conducted based on a qualitative method. A medium-sized group of users participated to validate the method and its techniques, i.e. articulates, maps, configures, and learning content. The results were used as feedback for improvement. Result: The research has produced an ontology model with a set of techniques supporting the functions of profiling users' requirements, reasoning over requirements patterns, generating workflow from norms, and formulating information provision specifications. Conclusion: Current requirements engineering approaches provide the methodical capability for developing solutions. Our research outcome, i.e. the ontology model with its techniques, can further enhance RE approaches for modelling individual users' needs and discovering users' requirements.