49 results for text user interface
Abstract:
This paper introduces a novel approach to free-text keystroke dynamics authentication that incorporates the keyboard’s key layout. The method extracts timing features from specific key-pairs, and the Euclidean distance is then used to measure the similarity between a user’s profile data and his/her test data. The results obtained with this method are reasonable for free-text authentication while maintaining the maximum level of user relaxation. Moreover, this study shows that flight time yields better authentication results than dwell time. In particular, the results were obtained with only one training sample, for practicality and ease of real-life application.
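The abstract names Euclidean distance as the similarity measure but gives no implementation details. A minimal illustrative sketch follows; the feature values and the acceptance threshold are hypothetical, not taken from the paper:

```python
import math

def euclidean_distance(profile, test):
    """Euclidean distance between two equal-length timing-feature vectors
    (e.g. mean flight times, in milliseconds, for specific key-pairs)."""
    if len(profile) != len(test):
        raise ValueError("feature vectors must have the same length")
    return math.sqrt(sum((p - t) ** 2 for p, t in zip(profile, test)))

# Hypothetical flight-time features (ms) for three key-pairs:
profile = [120.0, 95.5, 140.2]      # from the single training sample
test_sample = [118.3, 99.0, 151.7]  # from the session being verified

score = euclidean_distance(profile, test_sample)
# Accept the user if the distance falls below a tuned threshold:
THRESHOLD = 15.0  # assumed value for illustration only
authenticated = score < THRESHOLD
```

A smaller distance means the test sample's timing behaviour is closer to the stored profile; the threshold trades off false accepts against false rejects.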
Abstract:
Context-aware multimodal interactive systems aim to adapt to the needs and behavioural patterns of users and offer a way forward for enhancing the efficacy and quality of experience (QoE) in human-computer interaction. The various modalities that contribute to such systems each provide a specific uni-modal response that is integratively presented as a multi-modal interface, capable of interpreting multi-modal user input and responding to it appropriately through dynamically adapted multi-modal interactive flow management. This paper presents an initial background study in the context of the first phase of a PhD research programme in the area of optimisation of data fusion techniques to serve multimodal interactive systems, their applications and requirements.
Abstract:
In the summer of 1982, the ICLCUA CAFS Special Interest Group defined three subject areas for working party activity. These were: 1) interfaces with compilers and databases, 2) end-user language facilities and display methods, and 3) text-handling and office automation. The CAFS SIG convened one working party to address the first subject with the following terms of reference: 1) review facilities and map requirements onto them, 2) "Database or CAFS" or "Database on CAFS", 3) training needs for users to bridge to new techniques, and 4) repair specifications to cover gaps in software. The working party interpreted the topic broadly as the data processing professional's, rather than the end-user's, view of and relationship with CAFS. This report is the result of the working party's activities. For good reasons, the report's content exceeds the terms of reference in their strictest sense. For example, we examine QUERYMASTER, which ICL deems an end-user tool, from both the DP and end-user perspectives. First, it is the only interface to CAFS in the current SV201. Second, the DP department needs to understand the end-user's interface to CAFS. Third, the other subjects have not yet been addressed by other active working parties.
Abstract:
Explanations are an important by-product of medical decision-support activities, as they have been shown to favour compliance and correct treatment performance. To achieve this purpose, these texts should have strong argumentation content and should adapt to the emotional, as well as the rational, attitudes of the addressee. This paper describes how Rhetorical Sentence Planning can contribute to this aim: rule-based discourse plan revision is introduced between Text Planning and Linguistic Realization, and exploits knowledge about the user's personality and emotions and about the potential impact of domain items on user compliance and memory recall. The proposed approach originates from analytical and empirical evaluation studies of computer-generated explanation texts in the domain of drug prescription. This work was partially supported by a British-Italian Collaboration in Research and Higher Education Project, which involved the Universities of Reading and of Bari, in 1996.
Abstract:
Context: Learning can be regarded as knowledge construction, in which prior knowledge and experience serve as the basis for learners to expand their knowledge base. Such a process of knowledge construction has to take place continuously in order to enhance the learners’ competence in a competitive working environment. As information consumers, individual users demand personalised information provision which meets their own specific purposes, goals, and expectations. Objectives: The current methods in requirements engineering are capable of modelling the common user’s behaviour in the domain of knowledge construction. The users’ requirements can be represented as a case in a defined structure which can be reasoned over to enable requirements analysis. Such analysis needs to be enhanced so that personalised information provision can be tackled and modelled. However, there is a lack of suitable modelling methods to achieve this end. This paper presents a new ontological method for capturing individual users’ requirements and transforming them into personalised information provision specifications, so that the right information can be provided to the right user for the right purpose. Method: An experiment was conducted based on a qualitative method. A medium-sized group of users participated to validate the method and its techniques, i.e. articulates, maps, configures, and learning content. The results were used as feedback for improvement. Result: The research has produced an ontology model with a set of techniques which support profiling users’ requirements, reasoning over requirements patterns, generating workflows from norms, and formulating information provision specifications. Conclusion: Current requirements engineering approaches provide the methodical capability for developing solutions. Our research outcome, i.e. the ontology model with its techniques, can further enhance RE approaches for modelling individual users’ needs and discovering users’ requirements.
Abstract:
The general packet radio service (GPRS) was developed to allow packet data to be transported efficiently over an existing circuit-switched radio network, such as GSM. The main applications of GPRS are in transporting Internet protocol (IP) datagrams from web servers (for telemetry or for mobile Internet browsers). Four GPRS baseband coding schemes are defined to offer a trade-off between requested data rates and propagation channel conditions. However, data rates in the order of > 100 kbit/s are only achievable if the simplest coding scheme (CS-4) is used, which offers little error detection and correction (EDC) (requiring excellent SNR), and if the receiver hardware is capable of full duplex, which is not currently available in the consumer market. A simple EDC scheme to improve the GPRS block error rate (BLER) performance is presented, particularly for CS-4, although gains are also seen in the other coding schemes. Every GPRS radio block corrected by the EDC scheme does not need to be retransmitted, releasing bandwidth in the channel and improving the user's application data rate. As GPRS requires intensive processing in the baseband, a viable field programmable gate array (FPGA) solution is presented in this paper.
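The link between corrected blocks and application data rate can be made concrete with a back-of-the-envelope sketch. Under a simplified model where every erroneous block must be retransmitted, useful throughput scales with the fraction of blocks received correctly; the nominal rate and the before/after BLER figures below are illustrative assumptions, not values from the paper:

```python
def effective_rate(nominal_kbits, bler):
    """Simplified retransmission model: each block in error is resent,
    so useful throughput is the nominal rate scaled by the fraction of
    blocks that arrive (or are corrected to be) error-free."""
    if not 0.0 <= bler <= 1.0:
        raise ValueError("BLER must lie in [0, 1]")
    return nominal_kbits * (1.0 - bler)

NOMINAL_CS4 = 100.0          # illustrative CS-4 rate in kbit/s (assumed)
before = effective_rate(NOMINAL_CS4, 0.10)  # 10% BLER without extra EDC
after = effective_rate(NOMINAL_CS4, 0.04)   # 4% BLER with the EDC scheme
gain = after - before                        # bandwidth released, kbit/s
```

Each block the EDC scheme corrects moves from the "retransmit" to the "delivered" fraction, which is why the gain appears directly in the user's application data rate.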
Abstract:
The causes of pathological conditions such as Alzheimer’s and Parkinson’s diseases are becoming better understood. Proteins that misfold from their native structure to form aggregates of β-sheet fibrils — termed amyloid — are known [1,2] to be implicated in these ‘amyloid diseases’. Understanding the early steps of fibril formation is critical, and the conditions, mechanism and kinetics of protein and peptide aggregation are being widely investigated through a variety of in vitro studies. Kinetic aspects of the dispersion of the protein or peptide in solution are thought to influence the fibrillization process by mass-transfer effects. In addition, mixing also leads to shear forces, which can influence fibril growth by perturbing the equilibrium between the isolated and aggregated proteins, causing existing fibrils to fragment and create new nuclei [3]. Writing in the Journal of the American Chemical Society, David Talaga and co-workers have now highlighted [4] an additional factor that can influence the fibrillization of amyloid-forming proteins — the presence of hydrophobic interfaces.
Abstract:
Research in the last four decades has brought a considerable advance in our understanding of how the brain synthesizes information arising from different sensory modalities. Indeed, many cortical and subcortical areas, beyond those traditionally considered to be ‘associative,’ have been shown to be involved in multisensory interaction and integration (Ghazanfar and Schroeder 2006). Visuo-tactile interaction is of particular interest, because of the prominent role played by vision in guiding our actions and anticipating their tactile consequences in everyday life. In this chapter, we focus on the functional role that visuo-tactile processing may play in driving two types of body-object interactions: avoidance and approach. We will first review some basic features of visuo-tactile interactions, as revealed by electrophysiological studies in monkeys. These will prove to be relevant for interpreting the subsequent evidence arising from human studies. A crucial point that will be stressed is that these visuo-tactile mechanisms have not only sensory, but also motor-related activity that qualifies them as multisensory-motor interfaces. Evidence will then be presented for the existence of functionally homologous processing in the human brain, both from neuropsychological research in brain-damaged patients and in healthy participants. The final part of the chapter will focus on some recent studies in humans showing that the human motor system is provided with a multisensory interface that allows for continuous monitoring of the space near the body (i.e., peripersonal space). We further demonstrate that multisensory processing can be modulated on-line as a consequence of interacting with objects. This indicates that, far from being passive, the monitoring of peripersonal space is an active process subserving actions between our body and objects located in the space around us.
Abstract:
Our research investigates the impact that hearing has on the perception of digital video clips, with and without captions, by discussing how hearing loss, captions and deafness type affect user QoP (Quality of Perception). QoP encompasses not only a user's satisfaction with the quality of a multimedia presentation, but also their ability to analyse, synthesise and assimilate the informational content of multimedia. Results show that hearing has a significant effect on participants’ ability to assimilate information, independent of video type and use of captions. It is shown that captions do not necessarily provide deaf users with a ‘greater level of information’ from video, but cause a change in user QoP, depending on deafness type, which provides a ‘greater level of context of the video’. It is also shown that post-lingually mild and moderately deaf participants predict their level of information assimilation less accurately than post-lingually profoundly deaf participants, despite residual hearing. A positive correlation was identified between level of enjoyment (LOE) and self-predicted level of information assimilation (PIA), independent of hearing level or hearing type. When this is considered in a QoP quality framework, it calls into question how the user perceives certain factors, such as ‘informative’ and ‘quality’.
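A positive correlation such as the one reported between LOE and PIA is conventionally quantified with Pearson's r. As a minimal sketch (the per-participant ratings below are hypothetical, not the study's data):

```python
import math

def pearson_r(xs, ys):
    """Pearson product-moment correlation coefficient between two
    equal-length rating sequences; r > 0 means they rise together."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical per-participant values:
loe = [3, 4, 5, 2, 4]          # level of enjoyment, 1-5 scale
pia = [55, 70, 85, 40, 65]     # self-predicted information assimilation, %
r = pearson_r(loe, pia)        # positive r: higher enjoyment accompanies
                               # higher predicted assimilation
```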
Abstract:
Typeface design: a series of collaborative projects commissioned by Adobe, Inc. and Brill to develop extensive polytonic Greek typefaces. The two Adobe typefaces can be seen as extensions of previous research for the Garamond Premier Pro family (2005), and conclude a research theme started in 1998 with work for Adobe’s Minion Pro Greek. Together, these typefaces define the state of the art for text-intensive Greek typesetting with wide character sets (from classical texts, to poetry, to essays, to prose). They serve both as exemplars for other developers, and as vehicles for developing the potential of Greek text typography, for example with the parallel inclusion of monotonic and polytonic characters, detailed localised punctuation options, fluid handling of case-conversion issues, and innovative options such as accented small caps (originally requested by bibliographers, and subsequently rolled out to the general user base). The Brill typeface (for the established academic publisher) has an exceptionally wide character set to cover several academic disciplines, and is intended to differentiate itself sufficiently from its partner Latin typeface, while maintaining a clear texture in both offset and low-resolution print-on-demand reproduction. This work involved substantial testing and modification of the design, especially of the diacritics, to maintain the clarity and readability of unfamiliar words. Altogether, these typefaces form a study in how Greek typesetting can meet contemporary typographic requirements while resonating with historically accurate styles, where these are present. Significant research in printing archives helped to identify appropriate styles, as well as to originate variants that are stylistically coherent even when historical equivalents were absent.
Abstract:
Health care provision is significantly impacted by the ability of health providers to engineer a viable healthcare space to support care stakeholders' needs. In this paper we discuss and propose the use of organisational semiotics as a set of methods to link stakeholders to systems, which allows us to capture clinician activity, information transfer, and building use; this in turn allows us to define the value of specific systems in the care environment to specific stakeholders, and the dependence between systems in a care space. We suggest the use of a semantically enhanced building information model (BIM) to support the linking of clinician activity to physical resource objects and space, and to facilitate the capture of quantifiable data, over time, concerning resource use by key stakeholders. Finally, we argue for the inclusion of appropriate stakeholder feedback and persuasive mechanisms, to incentivise building-user behaviour that supports organisational-level sustainability policy.
Abstract:
We are sympathetic with Bentley et al.'s attempt to encompass the wisdom of crowds in a generative model, but posit that success at using Big Data will require more sensitive measurements and more, and more varied, sources of information, as well as building on the indirect information available through technology, from ancillary technical features to data from brain-computer interfaces.