943 results for Web design
Abstract:
This paper reports the results of a web-based perception study of the ranking of peer-reviewed accounting journals by UK academics. The design of the survey instrument allows an interactive selection of the journals to be scored. The web-based format is unique in that it also includes a step in which respondents classify the journals according to methodological perspective (paradigm). This is depicted graphically in the paper in a bubble diagram that shows the "positioning" of journals according to perceptions of both paradigm and quality.
Abstract:
When constructing and using environmental models, it is typical that many of the inputs to the models will not be known perfectly. In some cases, it will be possible to make observations, or occasionally to use physics-based uncertainty propagation, to ascertain the uncertainty on these inputs. However, such observations are often either not available or not even possible, and another approach to characterising the uncertainty on the inputs must be sought. Even when observations are available, if the analysis is being carried out within a Bayesian framework then prior distributions will have to be specified. One option for gathering, or at least estimating, this information is to employ expert elicitation. Expert elicitation is well studied within statistics and psychology and involves the assessment of the beliefs of a group of experts about an uncertain quantity (for example, an input or parameter within a model), typically in terms of obtaining a probability distribution. One of the challenges in expert elicitation is to minimise the biases that might enter into the judgements made by the individual experts, and then to come to a consensus decision within the group of experts. Effort is made in the elicitation exercise to prevent biases from clouding the judgements through well-devised questioning schemes. It is also important that, when reaching a consensus, the experts are exposed to the knowledge of the others in the group. Within the FP7 UncertWeb project (http://www.uncertweb.org/), there is a requirement to build a Web-based tool for expert elicitation. In this paper, we discuss some of the issues of building a Web-based elicitation system - both the technological aspects and the statistical and scientific issues. In particular, we demonstrate two tools: a Web-based system for the elicitation of continuous random variables and a system designed to elicit uncertainty about categorical random variables in the setting of landcover classification uncertainty. The first of these examples is a generic tool developed to elicit uncertainty about univariate continuous random variables. It is designed to be used within an application context and extends the existing SHELF method, adding a web interface and access to metadata. The tool is developed so that it can be readily integrated with environmental models exposed as web services. The second example was developed for the TREES-3 initiative, which monitors tropical landcover change through ground-truthing at confluence points. It allows experts to validate the accuracy of automated landcover classifications using site-specific imagery and local knowledge. Experts may provide uncertainty information at various levels: from a general rating of their confidence in a site validation to a numerical ranking of the possible landcover types within a segment. A key challenge in the web-based setting is the design of the user interface and the method of interaction between the problem owner and the problem experts. We show the workflow of the elicitation tool and how the final elicited distributions and confusion matrices can be represented using UncertML, ready for integration into uncertainty-enabled workflows. We also show how the metadata associated with the elicitation exercise is captured and can be referenced from the elicited result, providing crucial lineage information and thus traceability in the decision-making process.
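To make the statistical core of such an elicitation concrete, the following minimal sketch fits a normal distribution to quantile judgements from a single expert, in the spirit of the SHELF approach the abstract extends. The probability/quantile values are hypothetical, and the real tools support many more distribution families:

```python
# Minimal elicitation sketch (illustrative, not the UncertWeb implementation):
# fit a normal distribution to an expert's quantile judgements by least squares.
import numpy as np
from scipy import stats
from scipy.optimize import minimize

probs = np.array([0.05, 0.50, 0.95])    # P(X <= q) as judged by the expert
quantiles = np.array([2.0, 5.0, 9.0])   # hypothetical quantile judgements

def loss(params):
    mu, log_sigma = params
    fitted = stats.norm.ppf(probs, loc=mu, scale=np.exp(log_sigma))
    return np.sum((fitted - quantiles) ** 2)

result = minimize(loss, x0=[quantiles[1], 0.0])
mu, sigma = result.x[0], np.exp(result.x[1])
print(f"Elicited prior: Normal(mu={mu:.2f}, sigma={sigma:.2f})")
```

A web front end would collect the probability/quantile pairs from each expert and show the fitted density back to the group as the basis for agreeing a consensus distribution.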
Abstract:
This work is concerned with the behaviour of thin-webbed rolled steel joists or universal beams when they are subjected to concentrated loads applied to the flanges. The prime concern is the effect of high direct stresses causing web failure in a small region of the beam. The review shows that although many tests have been carried out on rolled steel beams and built-up girders, no series of tests has restricted the number of variables involved sufficiently to enable firm conclusions to be drawn. The results of 100 tests on several different rolled steel universal beam sections under various types of loading conditions are presented. The majority of the beams are tested by loading with two opposite loads, thus eliminating the effects of bending and shear, except for a small number of beams which are tested simply supported on varying spans. The test results are first compared with the present design standard (BS 449), and it is shown that the British Standard is very conservative for most of the loading conditions included in the tests but is unsafe for others. Three possible failure modes are then considered: overall elastic buckling of the web, flexural yielding of the web due to large out-of-plane deflections, and local crushing of the material at the junction of the web and the root fillets. Each mode is considered theoretically and developed to establish the main variables, thus enabling a comparison to be made with the test results. It is shown that all three failure modes have a particular relevance for individual loading conditions, but that determining the failure load given the beam size and the loading conditions is very difficult in certain instances. Finally, it is shown that there are some empirical relationships between the failure loads and the type of loading for various beam serial sizes.
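For context, the overall elastic buckling mode mentioned above is conventionally analysed with the classical critical-stress expression for a flat plate; the form below is the standard textbook result, not necessarily the formulation derived in this work:

```latex
% Classical elastic critical stress for a flat web plate of thickness t
% and depth d; k is a buckling coefficient set by the support and loading
% conditions, E is Young's modulus and \nu is Poisson's ratio.
\sigma_{cr} = \frac{k \pi^2 E}{12\left(1 - \nu^2\right)} \left(\frac{t}{d}\right)^2
```

The strong dependence on the (t/d)^2 slenderness term is what makes thin-webbed sections particularly sensitive to concentrated flange loads.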
Abstract:
OBJECTIVES: The objective of this research was to design a clinical decision support system (CDSS) that supports heterogeneous clinical decision problems and runs on multiple computing platforms. Meeting this objective required a novel design to create an extendable and easy-to-maintain CDSS for point-of-care support. The proposed solution was evaluated in a proof-of-concept implementation. METHODS: Based on our earlier research with the design of a mobile CDSS for emergency triage, we used ontology-driven design to represent the essential components of a CDSS. Models of clinical decision problems were derived from the ontology and processed into executable applications at runtime. This allowed the applications' functionality to be scaled to the capabilities of the computing platforms. A prototype of the system was implemented using an extended client-server architecture and Web services to distribute the functions of the system and to make it operational in limited-connectivity conditions. RESULTS: The proposed design provided a common framework that facilitated the development of diversified clinical applications running seamlessly on a variety of computing platforms. It was prototyped for two clinical decision problems and settings (triage of acute pain in the emergency department and postoperative management of radical prostatectomy on the hospital ward) and implemented on two computing platforms: desktop and handheld computers. CONCLUSIONS: The requirement of CDSS heterogeneity was satisfied with ontology-driven design. Processing of application models described with the help of ontological models allowed a complex system to run on multiple computing platforms with different capabilities. Finally, the separation of models and runtime components contributed to the improved extensibility and maintainability of the system.
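The separation of declarative models from runtime components can be illustrated with a short sketch. All names here are hypothetical, invented to mirror the design described above rather than taken from the paper:

```python
# Hypothetical sketch: a declarative decision-problem model is interpreted
# at runtime and its functionality scaled to the target platform's capabilities.
from dataclasses import dataclass, field

@dataclass
class DecisionModel:
    name: str
    core_steps: list                                     # steps every platform runs
    optional_steps: list = field(default_factory=list)   # richer platforms only

def build_app(model: DecisionModel, platform: str) -> dict:
    """Derive an executable application description from the model at runtime."""
    steps = list(model.core_steps)
    if platform == "desktop":          # more capable platform: enable extras
        steps += model.optional_steps
    return {"application": model.name, "platform": platform, "steps": steps}

triage = DecisionModel(
    name="acute-pain-triage",
    core_steps=["collect vitals", "score pain", "recommend triage level"],
    optional_steps=["trend charting", "full audit view"],
)
print(build_app(triage, "handheld"))
print(build_app(triage, "desktop"))
```

Because the model is data rather than code, adding a new clinical decision problem means authoring a new model instance rather than rebuilding the runtime, which is the extensibility benefit the conclusions point to.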
Abstract:
Using the resistance literature as an underpinning theoretical framework, this chapter analyzes how Web designers, through their daily practices, (i) adopt recursive, adaptive, and resisting behavior regarding the inclusion of social cues online and (ii) shape the socio-technical power relationship between designers and other stakeholders. Five vignettes in the form of case studies with expert individual Web designers are used. Findings point to three types of emerging resistance, namely: market-driven resistance, ideological resistance, and functional resistance. In addition, a series of propositions is provided linking the various themes. Furthermore, the authors suggest that stratification in Web designer types is occurring and that resistance offers a novel lens through which to analyze the debate.
Abstract:
The Semantic Web relies on carefully structured, well-defined data to allow machines to communicate and understand one another. In many domains (e.g. geospatial) the data being described contains some uncertainty, often due to incomplete knowledge; meaningful processing of this data requires these uncertainties to be carefully analysed and integrated into the process chain. Currently, within the Semantic Web there is no standard mechanism for the interoperable description and exchange of uncertain information, which renders the automated processing of such information impractical, particularly where error must be considered and captured as it propagates through a processing sequence. In particular, we adopt a Bayesian perspective and focus on the case where the inputs/outputs are naturally treated as random variables. This paper discusses a solution to the problem in the form of the Uncertainty Markup Language (UncertML). UncertML is a conceptual model, realised as an XML schema, that allows uncertainty to be quantified in a variety of ways, i.e. as realisations, statistics and probability distributions. UncertML is based upon a soft-typed XML schema design that provides a generic framework from which any statistic or distribution may be created. Making extensive use of Geography Markup Language (GML) dictionaries, UncertML provides a collection of definitions for common uncertainty types. Containing both written descriptions and mathematical functions, encoded as MathML, the definitions within these dictionaries provide a robust mechanism for defining any statistic or distribution and can be easily extended. Uniform Resource Identifiers (URIs) are used to introduce semantics to the soft-typed elements by linking to these dictionary definitions. The INTAMAP (INTeroperability and Automated MAPping) project provides a use case for UncertML. This paper demonstrates how observation errors can be quantified using UncertML and wrapped within an Observations & Measurements (O&M) Observation. The interpolation service uses the information within these observations to influence the prediction outcome. The output uncertainties may be encoded in a variety of UncertML types, e.g. a series of marginal Gaussian distributions, a set of statistics, such as the first three marginal moments, or a set of realisations from a Monte Carlo treatment. Quantifying and propagating uncertainty in this way allows such interpolation results to be consumed by other services. This could form part of a risk management chain or a decision support system, and ultimately paves the way for complex data processing chains in the Semantic Web.
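The soft-typed design described above can be illustrated in a few lines. The element names and dictionary URL below are illustrative, reconstructed from the description rather than copied from the UncertML schema:

```python
# Sketch of the soft-typed pattern: a generic element whose semantics come
# from a dictionary definition referenced by URI (names and URL are illustrative).
import xml.etree.ElementTree as ET

stat = ET.Element(
    "Statistic",
    definition="http://dictionary.example.org/uncertml/statistics/mean",
)
ET.SubElement(stat, "value").text = "23.7"

print(ET.tostring(stat, encoding="unicode"))
# <Statistic definition="..."><value>23.7</value></Statistic>
```

The point of the pattern is that the schema itself stays small and generic; new statistics or distributions are introduced by adding dictionary entries, not by changing the XML schema.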
Abstract:
This article characterizes key weaknesses in the ability of current digital libraries to support scholarly inquiry and, as a way to address these, proposes computational services grounded in semiformal models of the naturalistic argumentation commonly found in research literatures. It is argued that a design priority is to balance formal expressiveness with usability, making it critical to coevolve the modeling scheme with appropriate user interfaces for argument construction and analysis. We specify the requirements for an argument modeling scheme for use by untrained researchers and describe the resulting ontology, contrasting it with other domain modeling and Semantic Web approaches, before discussing passive and intelligent user interfaces designed to support analysts in the construction, navigation, and analysis of scholarly argument structures in a Web-based environment. © 2007 Wiley Periodicals, Inc. Int J Int Syst 22: 17–47, 2007.
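A semiformal argument model of the kind described reduces, at its core, to typed nodes and typed links. The sketch below is a hypothetical rendering of that idea; the relation vocabulary is illustrative, not the article's ontology:

```python
# Hypothetical sketch: scholarly claims connected by typed argumentation links.
from dataclasses import dataclass

@dataclass
class Claim:
    id: str
    text: str

@dataclass
class Link:
    source: str      # id of the claim making the move
    target: str      # id of the claim being addressed
    relation: str    # e.g. "supports", "challenges", "uses-evidence-from"

claims = [
    Claim("c1", "Method A outperforms method B on task T"),
    Claim("c2", "Benchmark results reported in study S"),
]
links = [Link("c2", "c1", "supports")]

# Navigation primitive: find everything that supports a given claim.
supporters = [l.source for l in links if l.target == "c1" and l.relation == "supports"]
print(supporters)   # ['c2']
```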
Abstract:
The Semantic Web (SW) offers an opportunity to develop novel, sophisticated forms of question answering (QA). Specifically, the availability of distributed semantic markup on a large scale opens the way to QA systems which can make use of such semantic information to provide precise, formally derived answers to questions. At the same time, the distributed, heterogeneous, large-scale nature of the semantic information introduces significant challenges. In this paper we describe PowerAqua, a QA system designed to exploit semantic markup on the web to provide answers to questions posed in natural language. PowerAqua does not assume that the user has any prior information about the semantic resources. The system takes as input a natural language query and translates it into a set of logical queries, which are then answered by consulting and aggregating information derived from multiple heterogeneous semantic sources.
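The overall pipeline (translate the question into logical queries, consult each source, aggregate the answers) can be sketched in a few lines. Everything below is a toy stand-in for illustration; PowerAqua's actual linguistic and mapping components are far richer:

```python
# Toy sketch of the question-answering pipeline described above.
def translate(question: str):
    """Stand-in for linguistic analysis: emit triple-like logical queries."""
    if "capital of France" in question:
        return [("?x", "isCapitalOf", "France")]
    return []

def query_source(triples, patterns):
    """Answer the logical queries against one semantic source."""
    answers = set()
    for s, p, o in triples:
        for _, rel, obj in patterns:
            if p == rel and (obj.startswith("?") or o == obj):
                answers.add(s)
    return answers

sources = [  # two heterogeneous sources with overlapping knowledge
    [("Paris", "isCapitalOf", "France")],
    [("Paris", "isCapitalOf", "France"), ("Rome", "isCapitalOf", "Italy")],
]

patterns = translate("What is the capital of France?")
merged = set().union(*(query_source(src, patterns) for src in sources))
print(merged)   # {'Paris'}: answers aggregated across sources
```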
Abstract:
Objective - To evaluate behavioural components and strategies associated with increased uptake and effectiveness of screening for coronary heart disease and diabetes, with an implementation science focus. Design - Realist review. Data sources - PubMed, Web of Knowledge, Cochrane Database of Systematic Reviews, Cochrane Controlled Trials Register and reference chaining. Searches were limited to English-language studies published since 1990. Eligibility criteria - Eligible studies evaluated interventions designed to increase the uptake of cardiovascular disease (CVD) and diabetes screening and examined behavioural and/or strategic designs. Studies were excluded if they evaluated changes in risk factors or cost-effectiveness only. Results - In 12 eligible studies, several different intervention designs and evidence-based strategies were evaluated. Salient themes were the effects of feedback on behaviour change and the benefits of health dialogues over simple feedback. The studies provide mixed evidence about the benefits of these intervention constituents, which appear to be situation- and design-specific, broadly supporting their use but highlighting concerns about the fidelity of intervention delivery and raising implementation science issues. Three studies examined the effects of informed choice or loss- versus gain-framed invitations, finding no effect on screening uptake but highlighting opportunistic screening as more successful than an invitation letter for recruiting patients at higher CVD and diabetes risk, with no differences in outcomes once recruited. Two studies examined differences between attenders and non-attenders, finding higher risk factors among non-attenders and higher rates of diagnosed CVD and diabetes among those who later dropped out of longitudinal studies. Conclusions - If the risk and prevalence of these diseases are to be reduced, interventions must take into account what we know about effective health behaviour change mechanisms, monitor delivery by trained professionals and examine the possibility of tailoring programmes according to contexts such as risk level, to reach those most in need. Further research is needed to determine the best strategies for lifelong approaches to screening.
Abstract:
The growing use of a variety of information systems in crisis management, both by non-governmental organizations (NGOs) and by emergency management agencies, makes the challenges of information sharing and interoperability increasingly important. Semantic web technologies are a growing area and form a technology stack specifically suited to these challenges. This paper presents a review of ontologies, vocabularies and taxonomies that are useful in crisis management systems. We identify the different subject areas relevant to crisis management based on a review of the literature. The different ontologies and vocabularies available are analysed in terms of their coverage, design and usability. We also consider the use cases for which they were designed and the degree to which they follow a variety of standards. While providing comprehensive ontologies for the crisis domain is neither feasible nor desirable, there is considerable scope to develop ontologies for the subject areas not currently covered and for the purposes of interoperability.
Abstract:
As a new medium for questionnaire delivery, the internet has the potential to revolutionize the survey process. Online (web-based) questionnaires provide several advantages over traditional survey methods in terms of cost, speed, appearance, flexibility, functionality, and usability. Designers of online-questionnaires are faced with a plethora of design tools to assist in the development of their electronic questionnaires. Little, if any, support is incorporated within these tools, however, to guide online-questionnaire designers according to best practice. In essence, an online-questionnaire combines questionnaire-based survey functionality with that of a webpage/site. As such, the design of an online-questionnaire should incorporate principles from both contributing fields. Drawing on existing guidelines for paper-based questionnaire design, website design (paying particular attention to issues of accessibility and usability), and the existing but scarce guidelines for electronic surveys, we have derived a comprehensive set of guidelines for the design of online-questionnaires. This article introduces this set of guidelines as a practical reference guide for the design of online-questionnaires.
Abstract:
As a new medium for questionnaire delivery, the internet has the potential to revolutionise the survey process. Online (web-based) questionnaires provide several advantages over traditional survey methods in terms of cost, speed, appearance, flexibility, functionality, and usability [1, 2]. For instance, delivery is faster, responses are received more quickly, and data collection can be automated or accelerated [1-3]. Online-questionnaires can also provide many capabilities not found in traditional paper-based questionnaires: they can include pop-up instructions and error messages; they can incorporate links; and it is possible to encode difficult skip patterns, making such patterns virtually invisible to respondents. Like many new technologies, however, online-questionnaires face criticism despite their advantages. Typically, such criticisms focus on the vulnerability of online-questionnaires to the four standard survey error types: namely, coverage, non-response, sampling, and measurement errors. Although, like all survey errors, coverage error (“the result of not allowing all members of the survey population to have an equal or nonzero chance of being sampled for participation in a survey” [2, pg. 9]) also affects traditional survey methods, it is currently exacerbated in online-questionnaires as a result of the digital divide. That said, many developed countries have reported substantial increases in computer and internet access and/or are targeting this as part of their immediate infrastructural development [4, 5]. These trends indicate that familiarity with information technologies is increasing and suggest that coverage error will rapidly diminish to an acceptable level (for the developed world at least) in the near future, and in so doing, positively reinforce the advantages of online-questionnaire delivery. The second error type – the non-response error – occurs when individuals fail to respond to the invitation to participate in a survey or abandon a questionnaire before it is completed. Given today’s societal trend towards self-administration [2], the former is inevitable, irrespective of delivery mechanism. Conversely, non-response as a consequence of questionnaire abandonment can be relatively easily addressed. Unlike traditional questionnaires, the delivery mechanism for online-questionnaires makes estimation of questionnaire length and the time required for completion difficult, thus increasing the likelihood of abandonment. By incorporating a range of features into the design of an online-questionnaire, it is possible to facilitate such estimation – and indeed, to provide respondents with context-sensitive assistance during the response process – and thereby reduce abandonment while eliciting feelings of accomplishment [6]. For online-questionnaires, sampling error (“the result of attempting to survey only some, and not all, of the units in the survey population” [2, pg. 9]) can arise when all but a small portion of the anticipated respondent set is alienated (and so fails to respond) as a result of, for example, disregard for varying connection speeds, bandwidth limitations, browser configurations, monitors, hardware, and user requirements during the questionnaire design process. Similarly, measurement errors (“the result of poor question wording or questions being presented in such a way that inaccurate or uninterpretable answers are obtained” [2, pg. 11]) will lead to respondents becoming confused and frustrated.
Sampling, measurement, and non-response errors are likely to occur when an online-questionnaire is poorly designed. Individuals will answer questions incorrectly, abandon questionnaires, and may ultimately refuse to participate in future surveys; thus, the benefits of online-questionnaire delivery will not be fully realized. To prevent errors of this kind, and their consequences, it is extremely important that practical, comprehensive guidelines exist for the design of online-questionnaires. Many design guidelines exist for paper-based questionnaires (e.g. [7-14]); the same is not true for the design of online-questionnaires [2, 15, 16]. The research presented in this paper is a first attempt to address this discrepancy. Section 2 describes the derivation of a comprehensive set of guidelines for the design of online-questionnaires and briefly (given space restrictions) outlines the essence of the guidelines themselves. Although online-questionnaires reduce traditional delivery costs (e.g. paper, mail-out, and data entry), set-up costs can be high given the need either to adopt and acquire training in questionnaire development software or to secure the services of a web developer. Neither approach, however, guarantees a good questionnaire (often because the person designing the questionnaire lacks relevant knowledge of questionnaire design). Drawing on existing software evaluation techniques [17, 18], we assessed the extent to which current questionnaire development applications support our guidelines; Section 3 describes the framework used for the evaluation, and Section 4 discusses our findings. Finally, Section 5 concludes with a discussion of further work.
Abstract:
As a new medium for questionnaire delivery, the internet has the potential to revolutionise the survey process. Online (web-based) questionnaires provide several advantages over traditional survey methods in terms of cost, speed, appearance, flexibility, functionality, and usability (Bandilla et al., 2003; Dillman, 2000; Kwak and Radler, 2002). Online-questionnaires can also provide many capabilities not found in traditional paper-based questionnaires: they can include pop-up instructions and error messages; they can incorporate links; and it is possible to encode difficult skip patterns, making such patterns virtually invisible to respondents. Despite this, and the introduction of numerous tools to support online-questionnaire creation, current electronic survey design typically replicates that of paper-based questionnaires, failing to harness the full power of the electronic delivery medium. Worse, a recent environmental scan of online-questionnaire design tools found that little, if any, support is incorporated within these tools to guide questionnaire designers according to best practice (Lumsden and Morgan, 2005). This article introduces a comprehensive set of guidelines - a practical reference guide - for the design of online-questionnaires.
Abstract:
* The research work reviewed in this paper has been carried out in the context of the project “Adaptable Intelligent Interfaces Research and Development for Distance Learning Systems”, funded by the Russian Foundation for Basic Research (grant No. 02-01-81019). The authors wish to acknowledge the co-operation with the Byelorussian partners of this project.
Abstract:
The paper discusses some current trends in the development and use of semantic portals for accessing heterogeneous museum collections on the Semantic Web. The presentation focuses on issues concerning metadata standards for museums, museum collection ontologies, and semantic search engines. A number of design considerations and recommendations are formulated.