57 results for user embracement
in CentAUR: Central Archive, University of Reading - UK
Abstract:
This paper describes the user modeling component of EPIAIM, a consultation system for data analysis in epidemiology. The component is aimed at representing knowledge of concepts in the domain, so that their explanations can be adapted to user needs. The first part of the paper describes two studies aimed at analysing user requirements. The first is a questionnaire study which examines the respondents' familiarity with concepts. The second is an analysis of concept descriptions in textbooks and from expert epidemiologists, which examines how discourse strategies are tailored to the level of experience of the expected audience. The second part of the paper describes how the results of these studies have been used to design the user modeling component of EPIAIM. This module works in two steps. In the first step, a few trigger questions allow the activation of a stereotype that includes a "body" and an "inference component". The body represents the body of knowledge that a class of users is expected to know, along with the probability that each item is known. In the inference component, the learning process of concepts is represented as a belief network. In the second step, the belief network is used to refine the initial default information in the stereotype's body. This is done by asking a few questions about the concepts for which it is uncertain whether they are known to the user, and propagating this new evidence to revise the whole situation. The system has been implemented on a workstation under UNIX. An example of its functioning is presented, and the advantages and limitations of the approach are discussed.
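The two-step approach described in this abstract can be illustrated with a small sketch. Everything below (the concept names, prior probabilities, prerequisite links, and the simplified propagation rule) is an illustrative assumption, not EPIAIM's actual knowledge base or inference machinery.

```python
# Step 1: a few trigger questions activate a stereotype whose "body"
# holds the prior probability that each concept is known to this user
# class. All names and numbers here are hypothetical.
stereotype_body = {
    "mean": 0.95,
    "variance": 0.90,
    "odds_ratio": 0.50,   # uncertain: a candidate for a follow-up question
    "confounding": 0.45,  # uncertain: a candidate for a follow-up question
}

# Prerequisite links of the belief network: evidence that a concept is
# known also raises the belief that its prerequisites are known.
prerequisites = {
    "odds_ratio": ["mean"],
    "confounding": ["odds_ratio"],
}

def uncertain(body, low=0.3, high=0.7):
    """Concepts whose 'known' probability is too uncertain to rely on."""
    return [c for c, p in body.items() if low <= p <= high]

def propagate(body, concept, known, boost=0.2):
    """Step 2: fix the queried concept to the observed answer and nudge
    its prerequisites in the same direction (a crude stand-in for exact
    belief-network propagation)."""
    body[concept] = 1.0 if known else 0.0
    for prereq in prerequisites.get(concept, []):
        delta = boost if known else -boost
        body[prereq] = min(1.0, max(0.0, body[prereq] + delta))
    return body

# Ask about each uncertain concept and revise the whole situation.
for concept in uncertain(stereotype_body):
    answer = True  # placeholder for the user's yes/no reply
    propagate(stereotype_body, concept, answer)
```

The point of the second step is that a handful of targeted questions, combined with propagation over the dependency links, refines the whole stereotype rather than just the queried entries.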
Abstract:
In recent years there has been a growing debate over whether or not standards should be produced for user system interfaces. Those in favor of standardization argue that standards in this area will result in more usable systems, while those against argue that standardization is neither practical nor desirable. The present paper reviews both sides of this debate in relation to expert systems. It argues that in many areas guidelines are more appropriate than standards for user interface design.
Abstract:
Context: Learning can be regarded as knowledge construction, in which prior knowledge and experience serve as the basis for learners to expand their knowledge base. Such a process of knowledge construction has to take place continuously in order to enhance the learners' competence in a competitive working environment. As information consumers, individual users demand personalised information provision which meets their own specific purposes, goals, and expectations. Objectives: The current methods in requirements engineering are capable of modelling the common user's behaviour in the domain of knowledge construction. The users' requirements can be represented as a case in the defined structure, which can be reasoned over to enable requirements analysis. Such analysis needs to be enhanced so that personalised information provision can be tackled and modelled. However, there is a lack of suitable modelling methods to achieve this end. This paper presents a new ontological method for capturing individual users' requirements and transforming them into personalised information provision specifications. Hence the right information can be provided to the right user for the right purpose. Method: An experiment was conducted based on a qualitative method. A medium-sized group of users participated to validate the method and its techniques, i.e. articulates, maps, configures, and learning content. The results were used as feedback for improvement. Result: The research work has produced an ontology model with a set of techniques which support the functions of profiling users' requirements, reasoning over requirements patterns, generating workflows from norms, and formulating information provision specifications. Conclusion: The current requirements engineering (RE) approaches provide the methodical capability for developing solutions. Our research outcome, i.e. the ontology model with its techniques, can further enhance RE approaches for modelling individual users' needs and discovering their requirements.
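The idea of representing a user's requirements as a structured case that can be reasoned over might be sketched as follows; the field names, patterns, and matching rule are hypothetical illustrations, not the paper's actual ontology or techniques.

```python
# Hedged sketch: a user's requirements as a structured case, matched
# against stored requirement patterns to select a personalised
# information provision specification. All attribute names and
# pattern contents are illustrative assumptions.

user_case = {"goal": "learn-statistics", "level": "beginner", "format": "video"}

patterns = [
    {"goal": "learn-statistics", "level": "beginner", "provision": "intro-course"},
    {"goal": "learn-statistics", "level": "expert", "provision": "research-papers"},
]

def match(case, pattern):
    """Count how many requirement attributes of the pattern the case satisfies."""
    return sum(1 for k, v in pattern.items()
               if k != "provision" and case.get(k) == v)

# The best-matching pattern yields the provision specification, i.e. the
# "right information for the right user for the right purpose".
best = max(patterns, key=lambda p: match(user_case, p))
```

This is only the case-matching kernel; the paper's method additionally reasons over an ontology and generates workflows from norms, which a sketch this small cannot capture.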
Abstract:
Genetic data obtained on population samples convey information about their evolutionary history. Inference methods can extract part of this information but they require sophisticated statistical techniques that have been made available to the biologist community (through computer programs) only for simple and standard situations typically involving a small number of samples. We propose here a computer program (DIY ABC) for inference based on approximate Bayesian computation (ABC), in which scenarios can be customized by the user to fit many complex situations involving any number of populations and samples. Such scenarios involve any combination of population divergences, admixtures and population size changes. DIY ABC can be used to compare competing scenarios, estimate parameters for one or more scenarios and compute bias and precision measures for a given scenario and known values of parameters (the current version applies to unlinked microsatellite data). This article describes key methods used in the program and provides its main features. The analysis of one simulated and one real dataset, both with complex evolutionary scenarios, illustrates the main possibilities of DIY ABC.
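The approximate Bayesian computation (ABC) idea behind DIY ABC can be illustrated with a minimal rejection sampler. The toy model below (estimating a success probability from count data) and all numbers are illustrative assumptions; DIY ABC's real scenarios involve simulations of population divergence, admixture, and size change on microsatellite data.

```python
import random

# Minimal ABC rejection sketch: draw a parameter from the prior,
# simulate data under it, and keep the draw only if the simulated
# summary statistic is close to the observed one.

random.seed(0)

def simulate(p, n=100):
    """Simulate n binary observations given parameter p; return the count."""
    return sum(random.random() < p for _ in range(n))

observed = 63   # observed summary statistic from the "real" data
tolerance = 3   # accept simulations whose summary is at most this far off

accepted = []
for _ in range(5000):
    p = random.random()                       # draw p from a uniform prior
    if abs(simulate(p) - observed) <= tolerance:
        accepted.append(p)                    # draw reproduces the data

# The accepted draws approximate the posterior distribution of p.
posterior_mean = sum(accepted) / len(accepted)
```

The same accept/reject logic, with coalescent simulators in place of `simulate` and several summary statistics in place of a single count, is what lets ABC handle scenarios too complex for likelihood-based inference.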
Abstract:
The Euro-Mediterranean region is an important centre for the diversity of crop wild relatives. Crops such as oats (Avena sativa), sugar beet (Beta vulgaris), apple (Malus domestica), meadow fescue (Festuca pratensis), white clover (Trifolium repens), arnica (Arnica montana), asparagus (Asparagus officinalis), lettuce (Lactuca sativa), and sage (Salvia officinalis) all have wild relatives in the region. The European Community-funded project PGR Forum (www.pgrforum.org) is building an online information system to provide access to crop wild relative data for a broad user community, including plant breeders, protected area managers, policy-makers, conservationists, taxonomists and the wider public. The system will include data on uses, geographical distribution, biology, population and habitat information, threats (including IUCN Red List assessments) and conservation actions. This information is vital for the continued sustainable utilisation and conservation of crop wild relatives. Two major databases have been utilised as the backbone of a Euro-Mediterranean crop wild relative catalogue, which forms the core of the information system: Euro+Med PlantBase (www.euromed.org.uk) and Mansfeld's World Database of Agricultural and Horticultural Crops (http://mansfeld.ipk-gatersleben.de). By matching the genera found within the two databases, a preliminary list of crop wild relatives has been produced. Around 20,000 of the 30,000+ species listed in Euro+Med PlantBase can be considered crop wild relatives, i.e. those species found within the same genus as a crop. The list is currently being refined by implementing a priority ranking system based on the degree of relatedness of taxa to the associated crop.
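The genus-matching step that produced the preliminary list can be sketched as follows; the species lists below are tiny illustrative samples, not the actual contents of Euro+Med PlantBase or Mansfeld's database.

```python
# A species from the floristic database counts as a crop wild relative
# (in the broad, preliminary sense used above) when its genus also
# contains a crop from the crop database.

euro_med_species = [
    "Avena fatua", "Beta macrocarpa", "Lactuca serriola", "Quercus robur",
]
mansfeld_crops = ["Avena sativa", "Beta vulgaris", "Lactuca sativa"]

def genus(name):
    """The genus is the first word of a binomial species name."""
    return name.split()[0]

crop_genera = {genus(c) for c in mansfeld_crops}

# Preliminary crop wild relative list: species sharing a genus with a crop.
wild_relatives = [s for s in euro_med_species if genus(s) in crop_genera]
# → Avena fatua, Beta macrocarpa, Lactuca serriola (Quercus robur excluded)
```

The refinement mentioned at the end of the abstract would then rank these candidates by how closely related each taxon is to its associated crop, which genus membership alone cannot express.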
Abstract:
The paper is an investigation of the exchange of ideas and information between an architect and building users in the early stages of the building design process, before the design brief or any drawings have been produced. The purpose of the research is to gain insight into the type of information users exchange with architects in early design conversations and to better understand the influence that the format of design interactions and interactional behaviours have on the exchange of information. We report an empirical study of pre-briefing conversations in which the overwhelming majority of the exchanges were about the functional or structural attributes of space; discussion that touched on the phenomenological, perceptual and symbolic meanings of space was rare. We explore the contextual features of meetings and the conversational strategies used by the architect to prompt the users for information, and the influence these had on the information provided. Recommendations are made on the format and structure of pre-briefing conversations and on designers' strategies for raising the level of information provided by the user beyond the functional or structural attributes of space.
Abstract:
This paper presents the User-Intimate Requirements Hierarchy Resolution Framework (UI-REF), based on earlier work (Badii 1997-2008), to optimise the requirements engineering process, particularly to support user-intimate interactive systems co-design. The stages of the UI-REF framework for requirements resolution and prioritisation are described. UI-REF has been established to ensure that the most deeply valued needs of the majority of stakeholders are elicited and ranked, and that the root rationale for requirements evolution is traceable and contextualised so as to help resolve stakeholder conflicts. UI-REF supports the dynamically evolving requirements of users in the context of the digital economy as underpinned by online service provisioning. Requirements prioritisation in UI-REF is fully resolved, while a promotion path for lower-priority requirements is delineated so as to ensure that as the requirements evolve, so will their resolution and prioritisation.
Abstract:
An extensive set of machine learning and pattern classification techniques trained and tested on the KDD dataset failed to detect most of the user-to-root attacks. This paper aims to provide an approach for mitigating the negative aspects of this dataset, which led to low detection rates. A genetic algorithm is employed to derive rules for detecting various types of attacks. Rules are formed from the features of the dataset identified as the most important for each attack type. In this way we introduce a high level of generality and thus achieve high detection rates, while also greatly reducing the system's training time. We then re-check the decisions of the user-to-root rules against the rules that detect other types of attacks, thereby decreasing the false-positive rate. The model was verified on KDD 99, demonstrating higher detection rates than those reported by state-of-the-art approaches while maintaining a low false-positive rate.
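The rule-evolution idea might be sketched as below. The feature names, thresholds, toy records, and the mutate-and-select loop are all illustrative assumptions, not the paper's actual KDD'99 features or evolved rules.

```python
import random

# Hedged sketch: evolve a detection rule (a pair of feature thresholds)
# with a tiny genetic algorithm. Fitness rewards detected user-to-root
# (u2r) records and penalises false positives on normal traffic.

random.seed(1)

# Toy labelled records: (num_root_accesses, num_file_creations, label).
dataset = [
    (5, 4, "u2r"), (6, 3, "u2r"), (0, 0, "normal"),
    (1, 1, "normal"), (4, 5, "u2r"), (0, 1, "normal"),
]

def matches(rule, record):
    """A rule fires when both features reach their thresholds."""
    root_thr, file_thr = rule
    return record[0] >= root_thr and record[1] >= file_thr

def fitness(rule):
    """True positives minus doubly-weighted false positives."""
    tp = sum(1 for r in dataset if r[2] == "u2r" and matches(rule, r))
    fp = sum(1 for r in dataset if r[2] == "normal" and matches(rule, r))
    return tp - 2 * fp

def evolve(generations=30, pop_size=10):
    """Tiny mutate-and-select genetic algorithm over threshold pairs."""
    pop = [(random.randint(0, 8), random.randint(0, 8)) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]           # keep the fittest half
        children = [(max(0, p[0] + random.choice([-1, 0, 1])),
                     max(0, p[1] + random.choice([-1, 0, 1])))
                    for p in parents]            # mutated offspring
        pop = parents + children
    return max(pop, key=fitness)

best_rule = evolve()
```

The re-checking step described in the abstract would then pass each record flagged by `best_rule` through the rules for the other attack classes before committing to a user-to-root verdict, trading a little recall for a lower false-positive rate.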