Abstract:
This thesis explores how the world-wide web can be used to support English language teachers doing further studies at a distance. The future of education worldwide is moving towards a requirement that we, as teacher educators, use the latest web technology not as a gimmick, but as a viable tool to improve learning. By examining the literature on knowledge, teacher education and web training, a model of teacher knowledge development is developed, along with statements of advice for web developers based upon the model. Next, the applicability and viability of both the model and the statements of advice are examined by developing a teacher support site (http://www.philseflsupport.com) according to these principles. The data collected from one focus group of users from sixteen different countries, all studying on the same distance Masters programme, is then analysed in depth. The outcomes from the research are threefold: a functioning website that averages around 15,000 hits a month provides a professional contribution; an expanded model of teacher knowledge development, based upon five theoretical principles that reflect the ever-expanding cyclical nature of teacher learning, provides an academic contribution; and a series of six statements of advice for developers of teacher support sites. These statements are grounded in the theoretical principles behind the model of teacher knowledge development and incorporate nine keys to effective web facilitation. Taken together, they provide a forward-looking contribution to the praxis of web-supported teacher education, and thus to the potential dissemination of the research presented here. The research has succeeded in reducing the proliferation of terminology in teacher knowledge into a succinct model of teacher knowledge development. The model may now be used to further our understanding of how teachers learn and develop as other research builds upon the individual study here. NB: Appendix 4 is available only for consultation at Aston University Library, with prior arrangement.
Abstract:
Purine and pyrimidine triplex-forming oligonucleotides (TFOs), as potential antibacterial agents, were designed to bind by Hoogsteen and reverse Hoogsteen hydrogen bonds in a sequence-specific manner in the major groove of genomic DNA at specific polypurine sites within the gyrA gene of E. coli and S. pneumoniae. Sequences were prepared by automated synthesis, with purification and characterisation by high performance liquid chromatography, capillary electrophoresis and mass spectrometry. Triplex stability was assessed using melting curves, in which the binding of the third strand to the duplex target was assessed over a temperature range of 0-80°C and at pH 6.4 and 7.2. The most successful of the unmodified TFOs (6) showed a Tm value of 26°C at both pH values, with binding via reverse Hoogsteen bonds. Binding to genomic DNA was also demonstrated by spectrofluorimetry, using fluorescein-labelled TFOs, from which dissociation constants were determined. Modifications in the form of 5mC, 5' acridine attachment, phosphorothioation, 2'-O-methylation and phosphoramidation were made in order to increase Tm values. Phosphoramidate modification was the most successful, with increased Tm values of 42°C. However, the final purity of these sequences was poor due to their difficult syntheses. FACS (fluorescence-activated cell sorting) analysis was used to determine the potential uptake of a fluorescently labelled analogue of 6 via passive, cold shock-mediated, and anionic liposome-aided uptake. This was established at 20°C and 37°C. At both temperatures anionic lipid-mediated uptake produced unrivalled fluorescence, equivalent to 20 and 43% at 20 and 37°C respectively. Antibacterial activity of each oligonucleotide was assessed by viable count analysis relying on passive uptake, cold shocking techniques, chlorpromazine-mediated uptake, and cationic and anionic lipid-aided uptake. All oligonucleotides were assessed for their ability to enhance uptake, which is a major barrier to the effectiveness of these agents. Compound 6 under cold shocking conditions produced the greatest consistent decline in colony-forming units per ml. Results for this compound were sometimes variable, indicating inconsistent uptake by this particular assay method.
Abstract:
OBJECTIVES: The objective of this research was to design a clinical decision support system (CDSS) that supports heterogeneous clinical decision problems and runs on multiple computing platforms. Meeting this objective required a novel design to create an extendable and easy-to-maintain clinical CDSS for point-of-care support. The proposed solution was evaluated in a proof-of-concept implementation. METHODS: Based on our earlier research on the design of a mobile CDSS for emergency triage, we used ontology-driven design to represent the essential components of a CDSS. Models of clinical decision problems were derived from the ontology and processed into executable applications at runtime. This allowed applications' functionality to be scaled to the capabilities of computing platforms. A prototype of the system was implemented using an extended client-server architecture and Web services to distribute the functions of the system and to make it operational in limited connectivity conditions. RESULTS: The proposed design provided a common framework that facilitated the development of diversified clinical applications running seamlessly on a variety of computing platforms. It was prototyped for two clinical decision problems and settings (triage of acute pain in the emergency department and postoperative management of radical prostatectomy on the hospital ward) and implemented on two computing platforms: desktop and handheld computers. CONCLUSIONS: The requirement of CDSS heterogeneity was satisfied by ontology-driven design. Processing application models described with the help of ontological models allowed a complex system to run on multiple computing platforms with different capabilities. Finally, the separation of models and runtime components contributed to the improved extensibility and maintainability of the system.
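To make the model-to-application idea concrete, the following minimal Python sketch shows one way a declarative decision model could be turned into executable behaviour at runtime, with functionality scaled to the platform. The model structure, rule, and threshold are illustrative assumptions, not the authors' implementation.

```python
# A minimal sketch, assuming a declarative model evaluated at runtime;
# names, rule, and threshold are hypothetical, not the paper's system.
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class DecisionModel:
    """A clinical decision problem expressed as a declarative model."""
    name: str
    required_inputs: List[str]
    rules: List[Callable[[Dict[str, float]], str]]

def triage_rule(inputs: Dict[str, float]) -> str:
    # Hypothetical rule: flag severe acute pain for immediate assessment.
    return "urgent" if inputs["pain_score"] >= 8 else "standard"

acute_pain = DecisionModel(
    name="ED acute pain triage",
    required_inputs=["pain_score"],
    rules=[triage_rule],
)

def run_model(model: DecisionModel, inputs: Dict[str, float],
              platform: str = "desktop") -> List[str]:
    """Process the model into runnable behaviour at runtime; a handheld
    client might evaluate only a subset of rules and defer the rest."""
    missing = [k for k in model.required_inputs if k not in inputs]
    if missing:
        raise ValueError(f"missing inputs: {missing}")
    active = model.rules if platform == "desktop" else model.rules[:1]
    return [rule(inputs) for rule in active]

print(run_model(acute_pain, {"pain_score": 9}))  # -> ['urgent']
```

Separating the model (data) from the runtime (interpreter) in this way is what makes it possible to extend the system with new decision problems without redeploying platform-specific code.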
Abstract:
Using the resistance literature as an underpinning theoretical framework, this chapter analyzes how Web designers, through their daily practices, (i) adopt recursive, adaptive, and resisting behavior regarding the inclusion of social cues online and (ii) shape the socio-technical power relationship between designers and other stakeholders. Five vignettes in the form of case studies with expert individual Web designers are used. Findings point to three types of emerging resistance, namely market-driven resistance, ideological resistance, and functional resistance. In addition, a series of propositions is provided linking the various themes. Furthermore, the authors suggest that stratification in Web designers' types is occurring and that resistance offers a novel lens through which to analyze the debate.
Abstract:
The calcitonin receptor-like receptor (CLR) acts as a receptor for the calcitonin gene-related peptide (CGRP) but in order to recognize CGRP, it must form a complex with an accessory protein, receptor activity modifying protein 1 (RAMP1). Identifying the protein/protein and protein/ligand interfaces in this unusual complex would aid drug design. The role of the extreme N-terminus of CLR (Glu23-Ala60) was examined by an alanine scan and the results were interpreted with the help of a molecular model. The potency of CGRP at stimulating cAMP production was reduced at Leu41Ala, Gln45Ala, Cys48Ala and Tyr49Ala; furthermore, CGRP-induced receptor internalization at all of these receptors was also impaired. Ile32Ala, Gly35Ala and Thr37Ala all increased CGRP potency. CGRP specific binding was abolished at Leu41Ala, Ala44Leu, Cys48Ala and Tyr49Ala. There was significant impairment of cell surface expression of Gln45Ala, Cys48Ala and Tyr49Ala. Cys48 takes part in a highly conserved disulfide bond and is probably needed for correct folding of CLR. The model suggests that Gln45 and Tyr49 mediate their effects by interacting with RAMP1 whereas Leu41 and Ala44 are likely to be involved in binding CGRP. Ile32, Gly35 and Thr37 form a separate cluster of residues which modulate CGRP binding. The results from this study may be applicable to other family B GPCRs which can associate with RAMPs.
Abstract:
Monitoring land-cover changes on sites of conservation importance allows environmental problems to be detected, solutions to be developed and the effectiveness of actions to be assessed. However, the remoteness of many sites or a lack of resources means these data are frequently not available. Remote sensing may provide a solution, but large-scale mapping and change detection may not be appropriate, necessitating site-level assessments. These need to be easy to undertake, rapid and cheap. We present an example of a Web-based solution based on free and open-source software and standards (including PostGIS, OpenLayers, Web Map Services, Web Feature Services and GeoServer) to support assessments of land-cover change (and validation of global land-cover maps). Authorised users are provided with means to assess land-cover visually and may optionally provide uncertainty information at various levels: from a general rating of their confidence in an assessment to a quantification of the proportions of land-cover types within a reference area. Versions of this tool have been developed for the TREES-3 initiative (Simonetti, Beuchle and Eva, 2011), which monitors tropical land-cover change through ground-truthing at latitude/longitude degree confluence points, and for monitoring change within and around Important Bird Areas (IBAs) by BirdLife International and the Royal Society for the Protection of Birds (RSPB). In this paper we present results from the second of these applications. We also present further details on the potential use of the land-cover change assessment tool on sites of recognised conservation importance, in combination with NDVI and other time-series data from the eStation (a system for receiving, processing and disseminating environmental data). We show how the tool can be used to increase the usability of earth observation data by local stakeholders and experts, and to assist in evaluating the impact of protection regimes on land-cover change.
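Because the tool is built on open OGC standards, any WMS client can retrieve imagery from the same stack. The Python sketch below uses OWSLib to fetch a map tile from a GeoServer WMS endpoint; the endpoint URL, workspace, and layer name are placeholders, not the project's actual configuration.

```python
# Illustrative WMS GetMap request; the endpoint and layer are assumptions.
from owslib.wms import WebMapService

wms = WebMapService("https://example.org/geoserver/wms", version="1.1.1")

img = wms.getmap(
    layers=["landcover:iba_assessment"],   # hypothetical layer name
    srs="EPSG:4326",
    bbox=(-1.0, 52.0, 0.0, 53.0),          # lon/lat bounding box
    size=(512, 512),
    format="image/png",
    transparent=True,
)

with open("assessment_tile.png", "wb") as f:
    f.write(img.read())
```

The same request pattern is what a browser-based OpenLayers viewer issues behind the scenes, which is why assessments made in the Web interface and scripted analyses can share one server-side configuration.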
Abstract:
The Semantic Web relies on carefully structured, well-defined data to allow machines to communicate and understand one another. In many domains (e.g. geospatial) the data being described contains some uncertainty, often due to incomplete knowledge; meaningful processing of this data requires these uncertainties to be carefully analysed and integrated into the process chain. Currently, within the Semantic Web there is no standard mechanism for the interoperable description and exchange of uncertain information, which renders the automated processing of such information implausible, particularly where error must be considered and captured as it propagates through a processing sequence. In particular, we adopt a Bayesian perspective and focus on the case where the inputs and outputs are naturally treated as random variables. This paper discusses a solution to the problem in the form of the Uncertainty Markup Language (UncertML). UncertML is a conceptual model, realised as an XML schema, that allows uncertainty to be quantified in a variety of ways, i.e. as realisations, statistics and probability distributions. UncertML is based upon a soft-typed XML schema design that provides a generic framework from which any statistic or distribution may be created. Making extensive use of Geography Markup Language (GML) dictionaries, UncertML provides a collection of definitions for common uncertainty types. Containing both written descriptions and mathematical functions, encoded as MathML, the definitions within these dictionaries provide a robust mechanism for defining any statistic or distribution and can be easily extended. Uniform Resource Identifiers (URIs) are used to introduce semantics to the soft-typed elements by linking to these dictionary definitions. The INTAMAP (INTeroperability and Automated MAPping) project provides a use case for UncertML. This paper demonstrates how observation errors can be quantified using UncertML and wrapped within an Observations & Measurements (O&M) Observation. The interpolation service uses the information within these observations to influence the prediction outcome. The output uncertainties may be encoded in a variety of UncertML types, e.g. a series of marginal Gaussian distributions, a set of statistics such as the first three marginal moments, or a set of realisations from a Monte Carlo treatment. Quantifying and propagating uncertainty in this way allows such interpolation results to be consumed by other services. This could form part of a risk management chain or a decision support system, and ultimately paves the way for complex data processing chains in the Semantic Web.
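The key design idea, soft-typed elements whose meaning comes from a dictionary URI rather than a hard-coded schema type, can be illustrated with a few lines of Python. The element and attribute names and the namespace below are stand-ins for the pattern described, not the official UncertML schema.

```python
# Sketch of a soft-typed uncertainty encoding; names are illustrative only.
import xml.etree.ElementTree as ET

NS = "http://example.org/uncertml"  # placeholder namespace

# The 'definition' URI links to a dictionary entry (cf. the GML
# dictionaries above), which is what gives this generic element
# its semantics as a Gaussian distribution.
dist = ET.Element(
    f"{{{NS}}}Distribution",
    attrib={"definition": "http://example.org/dict/gaussian"},
)
for name, value in (("mean", "12.4"), ("variance", "2.25")):
    param = ET.SubElement(dist, f"{{{NS}}}parameter", attrib={"name": name})
    param.text = value

print(ET.tostring(dist, encoding="unicode"))
```

Because the element itself stays generic, a new statistic or distribution only requires a new dictionary entry, not a schema change, which is what makes the framework easy to extend.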
Abstract:
This article characterizes key weaknesses in the ability of current digital libraries to support scholarly inquiry and, as a way to address these, proposes computational services grounded in semiformal models of the naturalistic argumentation commonly found in research literatures. It is argued that a design priority is to balance formal expressiveness with usability, making it critical to coevolve the modeling scheme with appropriate user interfaces for argument construction and analysis. We specify the requirements for an argument modeling scheme for use by untrained researchers and describe the resulting ontology, contrasting it with other domain modeling and semantic web approaches, before discussing passive and intelligent user interfaces designed to support analysts in the construction, navigation, and analysis of scholarly argument structures in a Web-based environment. © 2007 Wiley Periodicals, Inc. Int J Int Syst 22: 17–47, 2007.
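A semiformal argument model of this kind is often readable as a network of claims connected by typed links. The Python sketch below shows one such node-and-link reading; the type names and example claims are illustrative assumptions, not the published scheme.

```python
# Minimal sketch of a claims-and-typed-links argument structure;
# relation names are hypothetical, not the article's ontology.
from dataclasses import dataclass
from typing import List

@dataclass(frozen=True)
class Claim:
    claim_id: str
    text: str

@dataclass(frozen=True)
class Link:
    source: str    # id of the claim making the argumentative move
    target: str    # id of the claim it addresses
    relation: str  # e.g. "supports" or "challenges"

c1 = Claim("c1", "Formal expressiveness must be balanced with usability.")
c2 = Claim("c2", "Untrained researchers abandon overly formal schemes.")
network: List[Link] = [Link("c2", "c1", "supports")]

for link in network:
    print(f"{link.source} --{link.relation}--> {link.target}")
```

Keeping the relation vocabulary small and semiformal, rather than a full logic, is precisely the expressiveness/usability trade-off the article argues for.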
Abstract:
The Semantic Web (SW) offers an opportunity to develop novel, sophisticated forms of question answering (QA). Specifically, the availability of distributed semantic markup on a large scale opens the way to QA systems that can make use of such semantic information to provide precise, formally derived answers to questions. At the same time, the distributed, heterogeneous, large-scale nature of the semantic information introduces significant challenges. In this paper we describe the design of PowerAqua, a QA system designed to exploit semantic markup on the web to provide answers to questions posed in natural language. PowerAqua does not assume that the user has any prior information about the semantic resources. The system takes as input a natural language query and translates it into a set of logical queries, which are then answered by consulting and aggregating information derived from multiple heterogeneous semantic sources.
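The pipeline shape described, natural-language query to logical queries to answers aggregated across sources, can be sketched in a few lines of Python. The translation stub, toy triple stores, and example question below are illustrative assumptions, not PowerAqua's internals.

```python
# Hedged sketch of an NL -> logical-query -> multi-source QA pipeline.
from typing import List, Set, Tuple

Triple = Tuple[str, str, str]

# Two hypothetical, heterogeneous semantic sources.
SOURCE_A: List[Triple] = [("Danube", "flowsThrough", "Vienna")]
SOURCE_B: List[Triple] = [("Danube", "flowsThrough", "Budapest")]

def translate(question: str) -> List[Triple]:
    """Map a natural-language question to logical query patterns.
    A real system would use linguistic and ontological analysis;
    this is a hard-coded stub for illustration."""
    if "which cities" in question.lower():
        return [("Danube", "flowsThrough", "?city")]
    return []

def answer(patterns: List[Triple], sources: List[List[Triple]]) -> Set[str]:
    results: Set[str] = set()
    for s, p, o in patterns:
        for source in sources:              # consult each source
            for ts, tp, to in source:
                if ts == s and tp == p and o.startswith("?"):
                    results.add(to)         # aggregate variable bindings
    return results

print(answer(translate("Which cities does the Danube flow through?"),
             [SOURCE_A, SOURCE_B]))         # {'Vienna', 'Budapest'}
```

Note how the answer set only becomes complete once bindings from both sources are merged: that aggregation step is what distinguishes this setting from QA over a single knowledge base.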
Abstract:
Objective - To evaluate behavioural components and strategies associated with increased uptake and effectiveness of screening for coronary heart disease and diabetes, with an implementation science focus. Design - Realist review. Data sources - PubMed, Web of Knowledge, Cochrane Database of Systematic Reviews, Cochrane Controlled Trials Register and reference chaining. Searches were limited to English-language studies published since 1990. Eligibility criteria - Eligible studies evaluated interventions designed to increase the uptake of cardiovascular disease (CVD) and diabetes screening and examined behavioural and/or strategic designs. Studies were excluded if they evaluated only changes in risk factors or cost-effectiveness. Results - In 12 eligible studies, several different intervention designs and evidence-based strategies were evaluated. Salient themes were the effects of feedback on behaviour change and the benefits of health dialogues over simple feedback. Studies provide mixed evidence about the benefits of these intervention constituents, which are suggested to be situation- and design-specific, broadly supporting their use but highlighting concerns about the fidelity of intervention delivery, raising implementation science issues. Three studies examined the effects of informed choice or loss- versus gain-framed invitations, finding no effect on screening uptake but highlighting opportunistic screening as more successful than an invitation letter for recruiting patients at higher CVD and diabetes risk, with no differences in outcomes once recruited. Two studies examined differences between attenders and non-attenders, finding higher risk factors among non-attenders and higher rates of diagnosed CVD and diabetes among those who later dropped out of longitudinal studies. Conclusions - If the risk and prevalence of these diseases are to be reduced, interventions must take into account what we know about effective health behaviour change mechanisms, monitor delivery by trained professionals and examine the possibility of tailoring programmes according to contexts such as risk level, to reach those most in need. Further research is needed to determine the best strategies for lifelong approaches to screening.
Abstract:
The growing use of a variety of information systems in crisis management, both by non-governmental organizations (NGOs) and by emergency management agencies, makes the challenges of information sharing and interoperability increasingly important. The use of semantic web technologies in this area is growing, and they provide a technology stack specifically suited to these challenges. This paper presents a review of ontologies, vocabularies and taxonomies that are useful in crisis management systems. We identify the different subject areas relevant to crisis management based on a review of the literature. The different ontologies and vocabularies available are analysed in terms of their coverage, design and usability. We also consider the use cases for which they were designed and the degree to which they follow a variety of standards. While providing comprehensive ontologies for the crisis domain is not feasible or desirable, there is considerable scope to develop ontologies for the subject areas not currently covered and for the purposes of interoperability.
Abstract:
Disasters cause widespread harm and disrupt the normal functioning of society, and effective management requires the participation and cooperation of many actors. While advances in information and networking technology have made the transmission of data easier than ever before, communication and coordination of activities between actors remain exceptionally difficult. This paper employs semantic web technology and Linked Data principles to create a network of intercommunicating and interdependent online sites for managing resources. Each site publishes its available resources openly, and a lightweight open-data protocol is used to request and respond to requests for resources between sites in the network.
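The publish/request pattern described can be sketched as follows in Python: one site exposes its resources as Linked Data (here JSON-LD) and answers requests from peer sites against that catalogue. The vocabulary terms, context URL, and request handler are assumptions for illustration, not the paper's actual protocol.

```python
# Illustrative sketch of an open publish/request resource exchange;
# the vocabulary and endpoint layout are hypothetical.
import json

# A site openly publishes its available resources as JSON-LD.
catalogue = {
    "@context": {"res": "http://example.org/resource#"},
    "@id": "http://site-a.example.org/resources",
    "res:available": [
        {"res:type": "blankets", "res:quantity": 120},
        {"res:type": "water", "res:quantity": 40},
    ],
}

def handle_request(resource_type: str, quantity: int) -> dict:
    """Respond to a peer site's request against the published catalogue."""
    for item in catalogue["res:available"]:
        if item["res:type"] == resource_type and item["res:quantity"] >= quantity:
            item["res:quantity"] -= quantity   # commit the allocation
            return {"granted": True, "type": resource_type,
                    "quantity": quantity}
    return {"granted": False, "type": resource_type, "quantity": 0}

print(json.dumps(handle_request("blankets", 50)))
# {"granted": true, "type": "blankets", "quantity": 50}
```

Because each catalogue is published openly at a stable URI, peer sites can discover one another's holdings without central coordination, which is the Linked Data principle the paper builds on.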
Abstract:
Templated, macroporous Mg-Al hydrotalcites synthesised via alkali-free co-precipitation exhibit superior performance in the transesterification of C4-C18 triglycerides for biodiesel production, with rate enhancement increasing with alkyl chain length. Promotion reflects improved diffusion of bulky triglycerides and accessibility of active sites within the hierarchical macropore-micropore architecture. © 2012 The Royal Society of Chemistry.
Abstract:
As a new medium for questionnaire delivery, the internet has the potential to revolutionize the survey process. Online (web-based) questionnaires provide several advantages over traditional survey methods in terms of cost, speed, appearance, flexibility, functionality, and usability. Designers of online-questionnaires are faced with a plethora of design tools to assist in the development of their electronic questionnaires. Little, if any, support is incorporated within these tools, however, to guide online-questionnaire designers according to best practice. In essence, an online-questionnaire combines questionnaire-based survey functionality with that of a webpage/site. As such, the design of an online-questionnaire should incorporate principles from both contributing fields. Drawing on existing guidelines for paper-based questionnaire design, website design (paying particular attention to issues of accessibility and usability), and existing but scarce guidelines for electronic surveys, we have derived a comprehensive set of guidelines for the design of online-questionnaires. This article introduces that comprehensive set of guidelines – as a practical reference guide – for the design of online-questionnaires.
Abstract:
As a new medium for questionnaire delivery, the internet has the potential to revolutionise the survey process. Online (web-based) questionnaires provide several advantages over traditional survey methods in terms of cost, speed, appearance, flexibility, functionality, and usability [1, 2]. For instance, delivery is faster, responses are received more quickly, and data collection can be automated or accelerated [1-3]. Online-questionnaires can also provide many capabilities not found in traditional paper-based questionnaires: they can include pop-up instructions and error messages; they can incorporate links; and it is possible to encode difficult skip patterns, making such patterns virtually invisible to respondents. Like many new technologies, however, online-questionnaires face criticism despite their advantages. Typically, such criticisms focus on the vulnerability of online-questionnaires to the four standard survey error types: namely, coverage, non-response, sampling, and measurement errors. Although, like all survey errors, coverage error (“the result of not allowing all members of the survey population to have an equal or nonzero chance of being sampled for participation in a survey” [2, pg. 9]) also affects traditional survey methods, it is currently exacerbated in online-questionnaires as a result of the digital divide. That said, many developed countries have reported substantial increases in computer and internet access and/or are targeting this as part of their immediate infrastructural development [4, 5]. Indicating that familiarity with information technologies is increasing, these trends suggest that coverage error will rapidly diminish to an acceptable level (for the developed world at least) in the near future, and in so doing, positively reinforce the advantages of online-questionnaire delivery. The second error type – the non-response error – occurs when individuals fail to respond to the invitation to participate in a survey or abandon a questionnaire before it is completed. Given today’s societal trend towards self-administration [2], the former is inevitable, irrespective of delivery mechanism. Conversely, non-response as a consequence of questionnaire abandonment can be relatively easily addressed. Unlike traditional questionnaires, the delivery mechanism for online-questionnaires makes estimation of questionnaire length and time required for completion difficult, thus increasing the likelihood of abandonment. By incorporating a range of features into the design of an online-questionnaire, it is possible to facilitate such estimation – and indeed, to provide respondents with context-sensitive assistance during the response process – and thereby reduce abandonment while eliciting feelings of accomplishment [6]. For online-questionnaires, sampling error (“the result of attempting to survey only some, and not all, of the units in the survey population” [2, pg. 9]) can arise when all but a small portion of the anticipated respondent set is alienated (and so fails to respond) as a result of, for example, disregard for varying connection speeds, bandwidth limitations, browser configurations, monitors, hardware, and user requirements during the questionnaire design process. Similarly, measurement errors (“the result of poor question wording or questions being presented in such a way that inaccurate or uninterpretable answers are obtained” [2, pg. 11]) will lead to respondents becoming confused and frustrated.
Sampling, measurement, and non-response errors are likely to occur when an online-questionnaire is poorly designed. Individuals will answer questions incorrectly, abandon questionnaires, and may ultimately refuse to participate in future surveys; thus, the benefit of online-questionnaire delivery will not be fully realized. To prevent errors of this kind, and their consequences, it is extremely important that practical, comprehensive guidelines exist for the design of online-questionnaires. Many design guidelines exist for paper-based questionnaire design (e.g. [7-14]); the same is not true for the design of online-questionnaires [2, 15, 16]. The research presented in this paper is a first attempt to address this discrepancy. Section 2 describes the derivation of a comprehensive set of guidelines for the design of online-questionnaires and briefly (given space restrictions) outlines the essence of the guidelines themselves. Although online-questionnaires reduce traditional delivery costs (e.g. paper, mail-out, and data entry), set-up costs can be high given the need either to adopt and acquire training in questionnaire development software or to secure the services of a web developer. Neither approach, however, guarantees a good questionnaire (often because the person designing the questionnaire lacks relevant knowledge of questionnaire design). Drawing on existing software evaluation techniques [17, 18], we assessed the extent to which current questionnaire development applications support our guidelines; Section 3 describes the framework used for the evaluation, and Section 4 discusses our findings. Finally, Section 5 concludes with a discussion of further work.