32 results for User-based evaluation

in Cambridge University Engineering Department Publications Database


Relevance:

100.00%

Publisher:

Abstract:

Computational Design has traditionally required a great deal of geometrical and parametric data. This data can only be supplied at stages later than conceptual design, typically the detail stage, and design quality is given by some absolute fitness function. On the other hand, design evaluation offers a relative measure of design quality that requires only a sparse representation. Quality, in this case, is a measure of how well a design will complete its task.

The research intends to address the question: "Is it possible to evaluate a mechanical design at the conceptual design phase and make some prediction of its quality?" Quality can be interpreted as success in the marketplace, success in performing the required task, or some other user requirement. This work aims to determine a minimum level of representation such that conceptual designs can be usefully evaluated without the need to capture detailed geometry. This representation will form the model for the conceptual designs being considered for evaluation. The method to be developed will be a case-based evaluation system that uses a database of previous designs to support design exploration. The method will not support novel design, since case-based design implies that the model topology must be fixed.
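The case-based idea described above can be sketched minimally: a conceptual design is a sparse feature dictionary, and the predicted quality of a candidate is the similarity-weighted average quality of its nearest previous designs. The feature names, the similarity measure and the case base below are illustrative assumptions, not taken from the publication.

```python
# Minimal sketch of case-based design evaluation over sparse representations.
# All feature names and quality values are hypothetical examples.

def similarity(a, b):
    """Fraction of features (union of both designs) with identical values."""
    keys = set(a) | set(b)
    if not keys:
        return 0.0
    shared = sum(1 for k in keys if a.get(k) == b.get(k))
    return shared / len(keys)

def predict_quality(candidate, case_base, k=3):
    """Similarity-weighted mean quality of the k most similar past designs."""
    nearest = sorted(case_base,
                     key=lambda c: similarity(candidate, c["features"]),
                     reverse=True)[:k]
    weights = [similarity(candidate, c["features"]) for c in nearest]
    if sum(weights) == 0:
        return None  # no comparable case in the database
    return sum(w * c["quality"] for w, c in zip(weights, nearest)) / sum(weights)

case_base = [
    {"features": {"actuator": "motor", "stages": 2}, "quality": 0.8},
    {"features": {"actuator": "motor", "stages": 3}, "quality": 0.6},
    {"features": {"actuator": "spring", "stages": 2}, "quality": 0.4},
]
print(predict_quality({"actuator": "motor", "stages": 2}, case_base))
```

Because the prediction is only a weighted vote over stored cases, the sketch also illustrates the fixed-topology limitation: a candidate that shares no features with any stored design yields no prediction at all.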

Relevance:

90.00%

Publisher:

Abstract:

BACKGROUND: The utilisation of good design practices in the development of complex health services is essential to improving quality. Healthcare organisations, however, are often seriously out of step with modern design thinking and practice. As a starting point for encouraging the uptake of good design practices, it is important to understand the context of their intended use. This study aims to do that by articulating current health service development practices.

METHODS: Eleven service development projects carried out in a large mental health service were investigated through in-depth interviews with six operation managers. The critical decision method, in conjunction with diagrammatic elicitation, was used to capture descriptions of these projects. Stage-gate design models were then formed to visually articulate, classify and characterise the different service development practices.

RESULTS: Projects were grouped into three categories according to design process patterns: new service introduction and service integration; service improvement; and service closure. Three common design stages (problem exploration, idea generation and solution evaluation) were then compared across the design process patterns. Consistent across projects were a top-down, policy-driven approach to exploration, underexploited idea generation and implementation-based evaluation.

CONCLUSIONS: This study provides insight into where and how good design practices can contribute to the improvement of current service development practices. Specifically, the following suggestions for future service development practices are made: genuine user needs analysis for exploration; divergent thinking and an innovative culture for idea generation; and fail-safe evaluation prior to implementation. Better training for managers, through partnership working with design experts and researchers, could be beneficial.

Relevance:

50.00%

Publisher:

Abstract:

Over the past decade, a variety of user models have been proposed for user-simulation-based reinforcement learning of dialogue strategies. However, the strategies learned with these models are rarely evaluated in actual user trials, and it remains unclear how the choice of user model affects the quality of the learned strategy. In particular, the degree to which strategies learned with a user model generalise to real user populations has not been investigated. This paper presents a series of experiments that qualitatively and quantitatively examine the effect of the user model on the learned strategy. Our results show that the performance and characteristics of the strategy are in fact highly dependent on the user model. Furthermore, a policy trained with a poor user model may appear to perform well when tested with the same model, but fail when tested with a more sophisticated user model. This raises significant doubts about the current practice of learning and evaluating strategies with the same user model. The paper further investigates a new technique for testing and comparing strategies directly on real human-machine dialogues, thereby avoiding any evaluation bias introduced by the user model. © 2005 IEEE.
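The evaluation pitfall described above can be illustrated with a toy experiment: a greedy policy is "trained" against one simulated user and then scored against a different one. The user models, action names and success probabilities below are hypothetical stand-ins, not the models or strategies from the paper.

```python
import random

# Toy illustration: a policy that looks good on its training user model
# can degrade when evaluated against a different user model.

def make_user(patient):
    """Hypothetical user model: success probability of each system action."""
    return {"open_question": 0.8 if patient else 0.3,
            "yes_no_question": 0.6}

def learn_policy(user, trials=2000, seed=0):
    """Greedy 'training': pick the action with the highest simulated success rate."""
    rng = random.Random(seed)
    est = {act: sum(rng.random() < p for _ in range(trials)) / trials
           for act, p in user.items()}
    return max(est, key=est.get)

def success_rate(action, user, trials=2000, seed=1):
    """Evaluate a fixed action against a given user model."""
    rng = random.Random(seed)
    p = user[action]
    return sum(rng.random() < p for _ in range(trials)) / trials

patient_user = make_user(True)
impatient_user = make_user(False)
policy = learn_policy(patient_user)           # trained on the "patient" model
print(policy,
      success_rate(policy, patient_user),     # strong on the training model
      success_rate(policy, impatient_user))   # much weaker on a different model
```

Scoring the learned action on the same model it was trained with gives an optimistic estimate; only the cross-model evaluation exposes the drop, which is the bias the paper's real-dialogue testing is designed to avoid.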

Relevance:

50.00%

Publisher:

Abstract:

This paper presents an agenda-based user simulator which has been extended to be trainable on real data, with the aim of more closely modelling the complex rational behaviour exhibited by real users. The trainable part is formed by a set of random decision points that may be encountered during the process of receiving a system act and responding with a user act. A sample-based method is presented for using real user data to estimate the parameters that control these decisions. Evaluation results are given both in terms of statistics of generated user behaviour and in terms of the quality of policies trained with different simulators. Compared to a handcrafted simulator, the trained system provides a much better fit to corpus data, and evaluations suggest that this better fit should result in improved dialogue performance. © 2010 Association for Computational Linguistics.
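The sample-based idea can be sketched as follows: each random decision point is treated as a categorical choice, and its parameters are set to the empirical frequencies of the choices observed in real dialogue data (the maximum-likelihood estimate). The decision-point names and the miniature corpus below are illustrative assumptions, not data from the paper.

```python
from collections import Counter

# Sketch: estimate the parameters of a simulator's random decision points
# from logged user behaviour, as relative frequencies of observed choices.

def estimate_decision_params(corpus):
    """corpus: iterable of (decision_point, observed_choice) pairs.
    Returns {decision_point: {choice: probability}} by relative frequency."""
    counts = {}
    for decision_point, choice in corpus:
        counts.setdefault(decision_point, Counter())[choice] += 1
    return {dp: {c: n / sum(ctr.values()) for c, n in ctr.items()}
            for dp, ctr in counts.items()}

# Hypothetical logged decisions extracted from annotated dialogues.
corpus = [
    ("answer_style", "full"), ("answer_style", "full"),
    ("answer_style", "partial"), ("answer_style", "full"),
    ("add_extra_slot", "yes"), ("add_extra_slot", "no"),
]
print(estimate_decision_params(corpus))
```

At simulation time, each decision point would then be sampled from its estimated distribution instead of a handcrafted one, which is what lets the trained simulator fit the corpus more closely.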

Relevance:

50.00%

Publisher:

Abstract:

Users’ initial perceptions of their competence are key motivational factors for further use. However, initial tasks on a mobile operating system (OS) require setup procedures, which are currently largely inconsistent, do not provide users with clear, visible and immediate feedback on their actions, and require significant adjustment time for first-time users. This paper reports on a study with ten users, carried out to better understand how both prior experience and initial interaction with two touchscreen mobile interfaces (Apple iOS and Google Android) affected setup task performance and motivation. The results show that the reactions to setup on mobile interfaces appear to be partially dependent on which device was experienced first. Initial experience with lower-complexity devices improves performance on higher-complexity devices, but not vice versa. Based on these results, the paper proposes six guidelines for designers to design more intuitive and motivating user interfaces (UI) for setup procedures. The preliminary results indicate that these guidelines can contribute to the design of more inclusive mobile platforms and further work to validate these findings is proposed.