17 results for Graphical User Interfaces
in Aston University Research Archive
Abstract:
The development of increasingly powerful computers, which has enabled the use of windowing software, has also opened the way for the computer study, via simulation, of very complex physical systems. In this study, the main issues related to the implementation of interactive simulations of complex systems are identified and discussed. Most existing simulators are closed in the sense that there is no access to the source code and, even if it were available, adapting them to interact with other systems would require extensive code rewriting. This work aims to increase the flexibility of such software by developing a set of object-oriented simulation classes which can be extended, by subclassing, at any level, i.e. at the problem-domain, presentation or interaction level. A strategy is proposed which involves the use of an object-oriented framework, concurrent execution of several simulation modules, use of a networked windowing system and the re-use of existing software written in procedural languages. A prototype tool which combines these techniques has been implemented and is presented. It allows the on-line definition of the configuration of the physical system and generates the appropriate graphical user interface. Simulation routines have been developed for the chemical recovery cycle of a paper pulp mill. The application of the prototype, through the creation of new classes, to the interactive simulation of this physical system is described. Besides providing visual feedback, the resulting graphical user interface greatly simplifies the interaction with this set of simulation modules. This study shows that considerable benefits can be obtained by applying computer science concepts to the engineering domain, helping domain experts to tailor interactive tools to suit their needs.
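As an illustration of the layered, subclass-based design this abstract describes, the following Python sketch shows a domain-level module, a derived unit and a presentation class that can each be extended independently. All class and attribute names are hypothetical and are not taken from the thesis.

```python
# Minimal sketch of an extensible simulation class hierarchy of the kind
# described above; names are illustrative, not the thesis's actual classes.

class SimulationModule:
    """Problem-domain level: advances the state of one physical unit."""
    def __init__(self, name):
        self.name = name
        self.state = {}

    def step(self, dt):
        raise NotImplementedError  # subclasses supply the physics


class Evaporator(SimulationModule):
    """Hypothetical recovery-cycle unit obtained by subclassing."""
    def __init__(self, name, inflow):
        super().__init__(name)
        self.state = {"liquor_level": 0.0}
        self.inflow = inflow

    def step(self, dt):
        self.state["liquor_level"] += self.inflow * dt


class ModuleView:
    """Presentation level: renders a module; subclass to change the GUI."""
    def __init__(self, module):
        self.module = module

    def render(self):
        print(f"{self.module.name}: {self.module.state}")


if __name__ == "__main__":
    unit = Evaporator("evaporator-1", inflow=0.5)
    view = ModuleView(unit)
    for _ in range(3):        # the interaction level would drive this loop
        unit.step(dt=1.0)
        view.render()
```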
Abstract:
Audio feedback remains little used in most graphical user interfaces despite its potential to greatly enhance interaction. Not only does sonic enhancement of interfaces permit more natural human-computer communication, but it also allows users to employ an appropriate sense to solve a problem rather than having to rely solely on vision. Research shows that designers do not typically know how to use sound effectively; consequently, their ad hoc use of sound often leads to audio feedback being considered an annoying distraction. Unlike the design of purely graphical user interfaces, for which guidelines are common, the audio enhancement of graphical user interfaces has (until now) been plagued by a lack of suitable guidance. This paper presents a series of empirically substantiated guidelines for the design and use of audio-enhanced graphical user interface widgets.
Abstract:
This research investigates the general user interface problems in using networked services. Some of the problems are: users have to recall machine names and procedures to invoke networked services; interactions with some of the services are by means of menu-based interfaces which are quite cumbersome to use; inconsistencies exist between the interfaces for different services because they were developed independently. These problems have to be removed so that users can use the services effectively. A prototype system has been developed to help users interact with networked services. This consists of software which gives the user an easy and consistent interface with the various services. The prototype is based on a graphical user interface and it includes the following applications: Bath Information & Data Services; electronic mail; file editor. The prototype incorporates an online help facility to assist users in using the system. The prototype can be divided into two parts: the user interface part that manages interaction with the user, and the communication part that enables the communication with networked services to take place. The implementation is carried out using an object-oriented approach where both the user interface part and the communication part are objects. The essential characteristics of object-orientation - abstraction, encapsulation, inheritance and polymorphism - can all contribute to the better design and implementation of the prototype. The Smalltalk Model-View-Controller (MVC) methodology has been the framework for the construction of the prototype user interface. The purpose of the development was to study the effectiveness of users' interaction with networked services. Having completed the prototype, test users were requested to use the system to evaluate its effectiveness. The evaluation of the prototype is based on observation, i.e. observing the way users use the system, and on the opinion ratings given by the users. Recommendations to improve the prototype further are given based on the results of the evaluation.
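The separation of a user interface part from a communication part under the Model-View-Controller pattern, as this abstract describes, can be sketched as follows. This is an illustrative Python sketch, not the thesis's Smalltalk code, and the service and method names are hypothetical.

```python
# Illustrative MVC-style split between the communication part and the
# user interface part; all names are hypothetical.

class MailService:
    """Communication part: hides machine names and invocation procedures."""
    def fetch_headers(self):
        # a real implementation would open a network connection here
        return ["1: Welcome", "2: Library notice"]


class MailModel:
    def __init__(self, service):
        self.service = service
        self.headers = []

    def refresh(self):
        self.headers = self.service.fetch_headers()


class MailView:
    """User interface part: presents a consistent interface to the user."""
    def show(self, headers):
        for line in headers:
            print(line)


class MailController:
    def __init__(self, model, view):
        self.model, self.view = model, view

    def on_refresh_clicked(self):
        self.model.refresh()
        self.view.show(self.model.headers)


MailController(MailModel(MailService()), MailView()).on_refresh_clicked()
```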
Abstract:
In recent years, mobile technology has been one of the major growth areas in computing. Designing the user interface for mobile applications, however, is a very complex undertaking which is made even more challenging by the rapid technological developments in mobile hardware. Mobile human-computer interaction, unlike desktop-based interaction, must be cognizant of a variety of complex contextual factors affecting both users and technology. The Handbook of Research on User Interface Design and Evaluation provides students, researchers, educators, and practitioners with a compendium of research on the key issues surrounding the design and evaluation of mobile user interfaces, such as the physical environment and social context in which a mobile device is being used and the impact of multitasking behavior typically exhibited by mobile-device users. Compiling the expertise of over 150 leading experts from 26 countries, this exemplary reference tool will make an indispensable addition to every library collection.
Abstract:
The Teallach project has adapted model-based user-interface development techniques to the systematic creation of user-interfaces for object-oriented database applications. Model-based approaches aim to provide designers with a more principled approach to user-interface development using a variety of underlying models, and tools which manipulate these models. Here we present the results of the Teallach project, describing the tools developed and the flexible design method supported. Distinctive features of the Teallach system include provision of database-specific constructs, comprehensive facilities for relating the different models, and support for a flexible design method in which models can be constructed and related by designers in different orders and in different ways, to suit their particular design rationales. The system then creates the desired user-interface as an independent, fully functional Java application, with automatically generated help facilities.
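To make the model-based idea concrete, the toy sketch below relates a domain model to a presentation model and derives widget choices from that mapping. This is not Teallach itself; the model contents and widget names are hypothetical.

```python
# Toy illustration of model-based user-interface generation: the interface
# is derived from an explicit relation between models. Names are hypothetical.

domain_model = {"Book": {"title": "string", "year": "integer"}}

# presentation model: which widget renders which domain type
presentation_model = {"string": "TextField", "integer": "Spinner"}

def generate_ui(entity):
    attributes = domain_model[entity]
    return [(attr, presentation_model[attr_type])
            for attr, attr_type in attributes.items()]

print(generate_ui("Book"))   # [('title', 'TextField'), ('year', 'Spinner')]
```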
Abstract:
Traditionally, geostatistical algorithms are contained within specialist GIS and spatial statistics software. Such packages are often expensive, with relatively complex user interfaces and steep learning curves, and cannot be easily integrated into more complex process chains. In contrast, Service Oriented Architectures (SOAs) promote interoperability and loose coupling within distributed systems, typically using XML (eXtensible Markup Language) and Web services. Web services provide a mechanism for a user to discover and consume a particular process, often as part of a larger process chain, with minimal knowledge of how it works. Wrapping current geostatistical algorithms with a Web service layer would thus increase their accessibility, but raises several complex issues. This paper discusses a solution to providing interoperable, automatic geostatistical processing through the use of Web services, developed in the INTAMAP project (INTeroperability and Automated MAPping). The project builds upon Open Geospatial Consortium standards for describing observations, typically used within sensor webs, and employs Geography Markup Language (GML) to describe the spatial aspect of the problem domain. Thus the interpolation service is extremely flexible, being able to support a range of observation types, and can cope with issues such as change of support and differing error characteristics of sensors (by utilising descriptions of the observation process provided by SensorML). XML is accepted as the de facto standard for describing Web services, due to its expressive capabilities which allow automatic discovery and consumption by ‘naive’ users. Any XML schema employed must therefore be capable of describing every aspect of a service and its processes. However, no schema currently exists that can define the complex uncertainties and modelling choices that are often present within geostatistical analysis. We show a solution to this problem, developing a family of XML schemata to enable the description of a full range of uncertainty types. These types will range from simple statistics, such as the kriging mean and variances, through to a range of probability distributions and non-parametric models, such as realisations from a conditional simulation. By employing these schemata within a Web Processing Service (WPS) we show a prototype moving towards a truly interoperable geostatistical software architecture.
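As a rough illustration of the kind of uncertainty description such schemata are intended to carry, the Python sketch below serialises a kriging mean and variance for one prediction location as XML. The element names are invented for illustration; they are not the actual INTAMAP or OGC schemata.

```python
# Hypothetical XML encoding of a kriging prediction with its uncertainty;
# element and attribute names are illustrative only.
import xml.etree.ElementTree as ET

prediction = ET.Element("Prediction", location="POINT(52.1 -1.9)")
gaussian = ET.SubElement(prediction, "GaussianDistribution")
ET.SubElement(gaussian, "mean").text = "14.2"
ET.SubElement(gaussian, "variance").text = "0.8"

print(ET.tostring(prediction, encoding="unicode"))
```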
Abstract:
This thesis is about the study of relationships between experimental dynamical systems. The basic approach is to fit radial basis function maps between time delay embeddings of manifolds. We have shown that under certain conditions these maps are generically diffeomorphisms, and can be analysed to determine whether or not the manifolds in question are diffeomorphically related to each other. If not, a study of the distribution of errors may provide information about the lack of equivalence between the two. The method has applications wherever two or more sensors are used to measure a single system, or where a single sensor can respond on more than one time scale: their respective time series can be tested to determine whether or not they are coupled, and to what degree. One application which we have explored is the determination of a minimum embedding dimension for dynamical system reconstruction. In this special case the diffeomorphism in question is closely related to the predictor for the time series itself. Linear transformations of delay embedded manifolds can also be shown to have nonlinear inverses under the right conditions, and we have used radial basis functions to approximate these inverse maps in a variety of contexts. This method is particularly useful when the linear transformation corresponds to the delay embedding of a finite impulse response filtered time series. One application of fitting an inverse to this linear map is the detection of periodic orbits in chaotic attractors, using suitably tuned filters. This method has also been used to separate signals with known bandwidths from deterministic noise, by tuning a filter to stop the signal and then recovering the chaos with the nonlinear inverse. The method may have applications to the cancellation of noise generated by mechanical or electrical systems. In the course of this research a sophisticated piece of software has been developed. The program allows the construction of a hierarchy of delay embeddings from scalar and multi-valued time series. The embedded objects can be analysed graphically, and radial basis function maps can be fitted between them asynchronously, in parallel, on a multi-processor machine. In addition to a graphical user interface, the program can be driven by a batch mode command language, incorporating the concept of parallel and sequential instruction groups and enabling complex sequences of experiments to be performed in parallel in a resource-efficient manner.
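The central construction in this abstract, fitting a radial basis function map between two time-delay embeddings, can be sketched briefly in Python with NumPy. The test signals, delay, dimension, centre selection and basis width below are illustrative choices, not those used in the thesis.

```python
# Sketch: delay-embed two time series and fit a Gaussian RBF map between
# the embeddings; all parameter choices are illustrative.
import numpy as np

def delay_embed(x, dim, tau):
    """Return the time-delay embedding of a scalar series x."""
    n = len(x) - (dim - 1) * tau
    return np.column_stack([x[i * tau : i * tau + n] for i in range(dim)])

def fit_rbf(X, Y, centres, width):
    """Least-squares fit of a Gaussian RBF map X -> Y."""
    d2 = ((X[:, None, :] - centres[None, :, :]) ** 2).sum(-1)
    Phi = np.exp(-d2 / (2 * width ** 2))
    W, *_ = np.linalg.lstsq(Phi, Y, rcond=None)
    return lambda Z: np.exp(
        -((Z[:, None, :] - centres[None, :, :]) ** 2).sum(-1) / (2 * width ** 2)
    ) @ W

# two "sensors" observing the same underlying signal
t = np.linspace(0, 40, 2000)
x, y = np.sin(t), np.sin(t + 0.5) + 0.3 * np.sin(2 * t + 0.5)

X, Y = delay_embed(x, dim=3, tau=10), delay_embed(y, dim=3, tau=10)
centres = X[:: len(X) // 50]          # subsample the embedding as RBF centres
f = fit_rbf(X, Y, centres, width=0.5)
print("mean prediction error:", np.abs(f(X) - Y).mean())
```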
Abstract:
The proliferation of data throughout the strategic, tactical and operational areas within many organisations has created a need for the decision maker to be presented with structured information that is appropriate for achieving allocated tasks. However, despite this abundance of data, managers at all levels in the organisation commonly encounter a condition of ‘information overload’ that results in a paucity of the correct information. Specifically, this thesis focuses upon the tactical domain within the organisation and the information needs of the management who reside at this level. In doing so, it argues that the link between decision making at the tactical level in the organisation and low-level transaction processing data should be through a common object model that uses a framework based upon knowledge leveraged from co-ordination theory. In order to achieve this, the Co-ordinated Business Object Model (CBOM) was created. The CBOM details a two-tier framework: the first tier models data using four interactive object models, namely processes, activities, resources and actors; the second tier analyses the data captured by the four object models and returns information that can be used to support tactical decision making. In addition, the Co-ordinated Business Object Support System (CBOSS) is a prototype tool that has been developed both to support the CBOM implementation and to demonstrate the functionality of the CBOM as a modelling approach for supporting tactical management decision making. Through its graphical user interface, the system allows the user to create and explore alternative implementations of an identified tactical-level process. In order to validate the CBOM, three verification tests have been completed. The results provide evidence that the CBOM framework helps bridge the gap between low-level transaction data and the information that is used to support tactical-level decision making.
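The four object models named in the abstract (processes, activities, resources and actors) and a second-tier style query over them can be sketched as plain data classes. The field names and the example query below are hypothetical illustrations, not the CBOM's actual structure.

```python
# Hypothetical sketch of the four interacting object models; field names
# are illustrative, not taken from the CBOM itself.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Actor:
    name: str

@dataclass
class Resource:
    name: str
    capacity: int

@dataclass
class Activity:
    name: str
    performed_by: Actor
    uses: List[Resource] = field(default_factory=list)

@dataclass
class Process:
    name: str
    activities: List[Activity] = field(default_factory=list)

    def resources_required(self):
        """Second-tier style query: derive information for a decision maker."""
        return {r.name for a in self.activities for r in a.uses}

clerk = Actor("order clerk")
printer = Resource("invoice printer", capacity=1)
invoicing = Process("invoicing", [Activity("print invoice", clerk, [printer])])
print(invoicing.resources_required())   # {'invoice printer'}
```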
Abstract:
Handheld and mobile technologies have witnessed significant advances in functionality, leading to their widespread use as both business and social networking tools. Human-Computer Interaction and Innovation in Handheld, Mobile and Wearable Technologies reviews concepts relating to the design, development, evaluation, and application of mobile technologies. Studies on mobile user interfaces, mobile learning, and mobile commerce contribute to the growing body of knowledge on this expanding discipline.
Abstract:
We present an imaging system based on light emitting diode (LED) illumination that produces multispectral optical images of the human ocular fundus. It uses a conventional fundus camera equipped with a high power LED light source and a highly sensitive electron-multiplying charge coupled device camera. It is able to take pictures at a series of wavelengths in rapid succession at short exposure times, thereby eliminating the image shift introduced by natural eye movements (saccades). In contrast with snapshot systems, the images retain full spatial resolution. The system is not suitable for applications where the full spectral resolution is required as it uses discrete wavebands for illumination. This is not a problem in retinal imaging where the use of selected wavelengths is common. The modular nature of the light source allows new wavelengths to be introduced easily and at low cost. The use of wavelength-specific LEDs as a source is preferable to white light illumination and subsequent filtering of the remitted light as it minimizes the total light exposure of the subject. The system is controlled via a graphical user interface that enables flexible control of intensity, duration, and sequencing of sources in synchrony with the camera. Our initial experiments indicate that the system can acquire multispectral image sequences of the human retina at exposure times of 0.05 s in the range of 500-620 nm with a mean signal-to-noise ratio of 17 dB (minimum 11, standard deviation 4.5), making it suitable for quantitative analysis with application to the diagnosis and screening of eye diseases such as diabetic retinopathy and age-related macular degeneration.
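A sequencing loop of the kind the GUI controls might look like the sketch below. The wavelengths are illustrative picks within the 500-620 nm range quoted above, the exposure matches the 0.05 s figure, and the trigger and capture functions are hypothetical placeholders rather than the real instrument driver.

```python
# Illustrative LED/camera sequencing loop; wavelength choices and the
# fire_led/capture_frame functions are hypothetical placeholders.
import time

SEQUENCE = [            # (wavelength in nm, exposure in s)
    (500, 0.05), (560, 0.05), (620, 0.05),
]

def fire_led(wavelength_nm, duration_s):
    print(f"LED {wavelength_nm} nm on for {duration_s} s")   # placeholder

def capture_frame(exposure_s):
    print(f"camera exposure {exposure_s} s")                 # placeholder

for wavelength, exposure in SEQUENCE:
    fire_led(wavelength, exposure)     # LED and camera triggered together
    capture_frame(exposure)
    time.sleep(0.01)                   # brief settling gap between bands
```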
Abstract:
As mobile devices become increasingly diverse and continue to shrink in size and weight, their portability is enhanced but, unfortunately, their usability tends to suffer. Ultimately, the usability of mobile technologies determines their future success in terms of end-user acceptance and, thereafter, adoption and social impact. Widespread acceptance will not, however, be achieved if users’ interaction with mobile technology amounts to a negative experience. Mobile user interfaces need to be designed to meet the functional and sensory needs of users. Social and Organizational Impacts of Emerging Mobile Devices: Evaluating Use focuses on human-computer interaction related to the innovation and research in the design, evaluation, and use of innovative handheld, mobile, and wearable technologies in order to broaden the overall body of knowledge regarding such issues. It aims to provide an international forum for researchers, educators, and practitioners to advance knowledge and practice in all facets of design and evaluation of human interaction with mobile technologies.
Abstract:
This article characterizes key weaknesses in the ability of current digital libraries to support scholarly inquiry, and as a way to address these, proposes computational services grounded in semiformal models of the naturalistic argumentation commonly found in research literatures. It is argued that a design priority is to balance formal expressiveness with usability, making it critical to coevolve the modeling scheme with appropriate user interfaces for argument construction and analysis. We specify the requirements for an argument modeling scheme for use by untrained researchers and describe the resulting ontology, contrasting it with other domain modeling and semantic web approaches, before discussing passive and intelligent user interfaces designed to support analysts in the construction, navigation, and analysis of scholarly argument structures in a Web-based environment. © 2007 Wiley Periodicals, Inc. Int J Int Syst 22: 17–47, 2007.
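The idea of a semiformal argument model, claims as nodes joined by typed links that an analysis interface can query, can be sketched as a toy data structure. This is not the article's actual ontology; the claims, link type and query are invented for illustration.

```python
# Toy argument structure: claims as nodes, typed links between them, and a
# simple query an analysis interface might run. All content is hypothetical.
from collections import defaultdict

claims = {
    1: "Method A outperforms method B on task T",
    2: "The evaluation used only a single small dataset",
}
links = [(2, "challenges", 1)]      # (source claim, link type, target claim)

incoming = defaultdict(list)
for src, kind, dst in links:
    incoming[dst].append((kind, src))

for kind, src in incoming[1]:
    print(f"Claim 1 is {kind} by claim {src}: {claims[src]!r}")
```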
Abstract:
The application of pharmacokinetic modelling within the drug development field essentially allows one to develop a quantitative description of the temporal behaviour of a compound of interest at a tissue/organ level, by identifying and defining relationships between a dose of a drug and dependent variables. In order to understand and characterise the pharmacokinetics of a drug, it is often helpful to employ pharmacokinetic modelling using empirical or mechanistic approaches. Pharmacokinetic models can be developed within mathematical and statistical commercial software such as MATLAB using traditional mathematical and computational coding, or by using the Simbiology Toolbox available within MATLAB for a graphical user interface approach to developing physiologically based pharmacokinetic (PBPK) models. For formulations dosed orally, a prerequisite for clinical activity is the entry of the drug into the systemic circulation.
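A minimal, generic example of the dose-to-concentration relationships such models describe is a one-compartment model with first-order oral absorption and elimination. The parameter values and step size below are illustrative only and are not taken from the text above.

```python
# Generic one-compartment oral-absorption PK model, integrated with a simple
# Euler scheme; all parameter values are illustrative.
import numpy as np

ka, ke, V = 1.0, 0.2, 30.0      # absorption rate (1/h), elimination rate (1/h), volume (L)
dose = 100.0                    # oral dose (mg)
dt, hours = 0.01, 24

a_gut, conc = dose, 0.0
times, concs = [], []
for step in range(int(hours / dt)):
    a_gut += -ka * a_gut * dt                     # drug leaving the gut
    conc += (ka * a_gut / V - ke * conc) * dt     # entering and leaving plasma
    times.append(step * dt)
    concs.append(conc)

peak = int(np.argmax(concs))
print(f"Cmax {max(concs):.2f} mg/L at t = {times[peak]:.1f} h")
```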
Abstract:
This paper examines the application of commercial, non-invasive electroencephalography (EEG)-based brain-computer interfaces (BCIs) with serious games. Two different EEG-based BCI devices were used to fully control the same serious game. The first device (NeuroSky MindSet) uses only a single dry electrode and requires no calibration. The second device (Emotiv EPOC) uses 14 wet sensors and requires additional training of a classifier. User testing was performed on both devices with sixty-two participants, measuring the player experience as well as key aspects of serious games, primarily learnability, satisfaction, performance and effort. Recorded feedback indicates that the current state of BCIs can be used in the future as alternative game interfaces after familiarisation and, in some cases, calibration. Comparative analysis showed significant differences between the two devices. The first device provides more satisfaction to the players, whereas the second device is more effective in terms of adaptation and interaction with the serious game.
Abstract:
Today, the data available to tackle many scientific challenges is vast in quantity and diverse in nature. The exploration of heterogeneous information spaces requires suitable mining algorithms as well as effective visual interfaces. miniDVMS v1.8 provides a flexible visual data mining framework which combines advanced projection algorithms developed in the machine learning domain and visual techniques developed in the information visualisation domain. The advantage of this interface is that the user is directly involved in the data mining process. Principled projection methods, such as generative topographic mapping (GTM) and hierarchical GTM (HGTM), are integrated with powerful visual techniques, such as magnification factors, directional curvatures, parallel coordinates, and user interaction facilities, to provide this integrated visual data mining framework. The software also supports conventional visualisation techniques such as principal component analysis (PCA), Neuroscale, and PhiVis. This user manual gives an overview of the purpose of the software tool, highlights some of the issues to take care of while creating a new model, and provides information about how to install and use the tool. The user manual does not require the reader to be familiar with the algorithms it implements; basic computing skills are enough to operate the software.
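The simplest of the projection methods listed above, PCA, illustrates the general idea of mapping high-dimensional data to two dimensions for visual exploration; the GTM and HGTM models in miniDVMS are more involved. The toy dataset below is invented for illustration.

```python
# Sketch of a PCA projection to 2-D visualisation coordinates on toy data;
# this only illustrates the general idea, not miniDVMS itself.
import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(size=(200, 5)) @ rng.normal(size=(5, 5))   # correlated 5-D data

centred = data - data.mean(axis=0)
_, _, vt = np.linalg.svd(centred, full_matrices=False)
projection = centred @ vt[:2].T            # coordinates on the first two PCs

print("2-D visualisation coordinates:", projection.shape)    # (200, 2)
```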