33 results for Web-Based Application


Relevance:

90.00%

Publisher:

Abstract:

To support student learning in a large Metabolism and Nutrition class, we have introduced a web-based package, using a commercially available program, WebCT. The package was developed at a minimal cost and with limited resources. In addition to downloadable (PDF) versions of lecture PowerPoint presentations, tutorial outlines and a practical class exercise, web-based self-directed learning exercises were included to reinforce and extend lecture material in an active learning environment. The website also contained a variety of formative and summative assessment tasks that examined both factual recall and higher order thinking. Detailed course information, timetables and a bulletin board were also readily accessible. Student usage of the site was generally high, but varied widely between individual students. Students who achieved a high overall score for the course completed on average three times as many formative assessment items and achieved a higher score for all tests than students who did poorly. Student feedback about the site was very positive, with the majority of students reporting that the course material and assessment items that were available were useful to their learning. Administration of the course was also facilitated. (C) 2001 IUBMB. Published by Elsevier Science Ltd. All rights reserved.

Relevance:

90.00%

Publisher:

Abstract:

This paper presents results from field studies carried out during the 1993-1998 Australian cotton (Gossypium hirsutum L.) seasons to monitor off-target droplet movement of endosulfan (6,7,8,9,10,10-hexachloro-1,5,5a,6,9,9a-hexahydro-6,9-methano-2,4,3-benzodioxathiepin 3-oxide) insecticide applied to a commercial cotton crop. Averaged over a wide range of conditions, off-target deposition 500 m downwind of the field boundary was approximately 2% of the field-applied rate with oil-based applications and 1% with water-based applications. Mean airborne drift values recorded 100 m downwind of a single flight line were about one-third as high with water-based application as with oil-based application. Calculations using a Gaussian diffusion model and the U.S. Spray Drift Task Force AgDRIFT model produced downwind drift profiles that compared favorably with experimental data. Both the models and the data indicate that by adopting large droplet placement (LDP) application methods and incorporating crop buffer distances, spray drift can be effectively managed.
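The Gaussian diffusion calculation mentioned above follows the standard Gaussian plume form. The sketch below is a minimal, generic version of that formula, not the model configuration or AgDRIFT parameters used in the study; the emission rate, wind speed, release height and dispersion coefficients are invented for illustration.

```python
import math

def plume_concentration(q, u, y, z, release_height, sigma_y, sigma_z):
    """Ground-reflected Gaussian plume concentration (g/m^3) at a receptor
    y metres crosswind and z metres above ground; sigma_y and sigma_z are the
    dispersion coefficients at the receptor's downwind distance."""
    lateral = math.exp(-y ** 2 / (2.0 * sigma_y ** 2))
    vertical = (math.exp(-(z - release_height) ** 2 / (2.0 * sigma_z ** 2))
                + math.exp(-(z + release_height) ** 2 / (2.0 * sigma_z ** 2)))
    return q / (2.0 * math.pi * u * sigma_y * sigma_z) * lateral * vertical

# Illustrative values only: a 5 g/s release in a 3 m/s wind, evaluated on the
# plume centreline at ground level with assumed dispersion coefficients.
print(plume_concentration(q=5.0, u=3.0, y=0.0, z=0.0,
                          release_height=3.0, sigma_y=36.0, sigma_z=18.0))
```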

Relevance:

90.00%

Publisher:

Abstract:

Despite the increased offering of online communication channels to support web-based retail systems, there is limited marketing research investigating how these channels act singly, or in combination with offline channels, to influence an individual's intention to purchase online. If the marketer's strategy is to encourage online transactions, this requires a focus on consumer acceptance of the web-based transaction technology, rather than the purchase of the products per se. The exploratory study reported in this paper examines normative influences from referent groups in an individual's online and offline social communication networks that might affect their intention to use online transaction facilities. The findings suggest that for non-adopters, there is no normative influence from referents in either network. For adopters, one online and one offline referent norm positively influenced this group's intentions to use online transaction facilities. The implications of these findings are discussed together with future research directions.

Relevance:

90.00%

Publisher:

Abstract:

This paper describes the use of a website for the dissemination of the community-based '10,000 steps' program, which was originally developed and evaluated in Rockhampton, Queensland in 2001-2003. The website provides information and interactive activities for individuals, and promotes resources and programs for health promotion professionals. The dissemination activity was assessed in terms of program adoption and implementation. In a 2-year period (May 2004-March 2006) more than 18,000 people registered as users of the website (logging more than 8.5 billion steps), and almost 100 workplaces and 13 communities implemented aspects of the 10,000 steps program. These data support the use of the internet as an effective means of disseminating ideas and resources beyond the geographical borders of the original project. Following this preliminary dissemination, there remains a need for the systematic study of different dissemination strategies, so that evidence-based physical activity programs can be translated into more widespread public health practice. (C) 2006 Sports Medicine Australia. Published by Elsevier Ltd. All rights reserved.

Relevance:

90.00%

Publisher:

Abstract:

The international FANTOM consortium aims to produce a comprehensive picture of the mammalian transcriptome, based upon an extensive cDNA collection and functional annotation of full-length enriched cDNAs. The previous dataset, FANTOM2, comprised 60,770 full-length enriched cDNAs. Functional annotation revealed that this cDNA dataset contained only about half of the estimated number of mouse protein-coding genes, indicating that a number of cDNAs still remained to be collected and identified. To pursue the complete gene catalog that covers all predicted mouse genes, cloning and sequencing of full-length enriched cDNAs has been continued since FANTOM2. In FANTOM3, 42,031 newly isolated cDNAs were subjected to functional annotation, and the annotation of 4,347 FANTOM2 cDNAs was updated. To accomplish accurate functional annotation, we improved our automated annotation pipeline by introducing new coding sequence prediction programs and developed a Web-based annotation interface for simplifying the annotation procedures to reduce manual annotation errors. Automated coding sequence and function prediction was followed with manual curation and review by expert curators. A total of 102,801 full-length enriched mouse cDNAs were annotated. Out of 102,801 transcripts, 56,722 were functionally annotated as protein coding (including partial or truncated transcripts), providing to our knowledge the greatest current coverage of the mouse proteome by full-length cDNAs. The total number of distinct non-protein-coding transcripts increased to 34,030. The FANTOM3 annotation system, consisting of automated computational prediction, manual curation, and final expert curation, facilitated the comprehensive characterization of the mouse transcriptome, and could be applied to the transcriptomes of other species.
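The automated coding sequence prediction step can be illustrated with a toy heuristic: scan each reading frame for the longest ATG-initiated open reading frame and call the transcript protein-coding if that ORF exceeds a length threshold. This is only a sketch of the idea; it is not one of the FANTOM3 prediction programs, and the 300-nt cut-off is an assumption.

```python
# Toy coding-sequence classifier: not the FANTOM3 pipeline, just a
# longest-ORF length heuristic to illustrate automated CDS prediction.
STOP_CODONS = {"TAA", "TAG", "TGA"}

def longest_orf_length(seq):
    """Return the length (in nucleotides) of the longest ATG-initiated ORF
    in the forward reading frames of `seq`."""
    seq = seq.upper()
    best = 0
    for frame in range(3):
        start = None
        for i in range(frame, len(seq) - 2, 3):
            codon = seq[i:i + 3]
            if codon == "ATG" and start is None:
                start = i
            elif codon in STOP_CODONS and start is not None:
                best = max(best, i + 3 - start)
                start = None
    return best

def classify_transcript(seq, min_orf_nt=300):  # 300 nt (~100 aa) is an assumed cut-off
    return "protein_coding" if longest_orf_length(seq) >= min_orf_nt else "non_coding"

print(classify_transcript("ATG" + "GCT" * 120 + "TAA"))  # -> protein_coding
```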

Relevance:

90.00%

Publisher:

Abstract:

The current trend among many universities is to increase the number of courses available online. However, there are fundamental problems in transferring traditional education courses to virtual formats. Delivering current curricula in an online format does not assist in overcoming the negative effects on student motivation which are inherent in providing information passively. Using problem-based learning (PBL) online is a method by which computers can become a tool to encourage active learning among students. The delivery of curricula via goal-based scenarios allows students to learn at different rates and can successfully shift online learning from memorization to discovery. This paper reports on a Web-based e-health course that has been delivered via PBL for the past 12 months. Thirty distance-learning students undertook postgraduate courses in e-health delivered via the Internet (asynchronous communication). Data collected via online student surveys indicated that the PBL format was both flexible and interesting. PBL has the potential to increase the quality of the educational experience of students in online environments.

Relevance:

90.00%

Publisher:

Abstract:

SQL (Structured Query Language) is one of the essential topics in foundation database courses in higher education. Due to its apparently simple syntax, learning to use the full power of SQL can be a very difficult activity. In this paper, we introduce SQLator, a web-based interactive tool for learning SQL. SQLator's key function is the evaluate function, which allows users to check the correctness of their query formulations. The evaluate engine is based on complex heuristic algorithms. The tool also provides instructors with the facility to create and populate database schemas with an associated pool of SQL queries. Currently it hosts two databases with a combined pool of more than 300 queries, divided into three categories according to query complexity. SQLator users can perform unlimited executions and evaluations of their query formulations and/or view the solutions. The evaluate function has a high success rate in judging whether a user's statement correctly answers the question. We present the basic architecture and functions of SQLator, discuss its value as an educational technology, and report on educational outcomes based on studies conducted at the School of Information Technology and Electrical Engineering, The University of Queensland.
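SQLator's evaluate engine is heuristic and is not reproduced here, but the underlying task, judging whether a student's query formulation answers the question, can be sketched in its crudest form by comparing the result set of the student's query with that of a reference solution on a sample database. The schema, data and queries below are invented for illustration and are not SQLator's actual algorithm.

```python
import sqlite3

def results_match(student_sql, reference_sql, conn):
    """Crude correctness check: run both queries and compare their result
    sets order-insensitively. SQLator's real evaluate engine uses heuristic
    analysis of the query itself; this is only a sketch."""
    student = sorted(map(tuple, conn.execute(student_sql).fetchall()))
    reference = sorted(map(tuple, conn.execute(reference_sql).fetchall()))
    return student == reference

# Invented sample schema and queries, purely for illustration.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE employee (id INTEGER, name TEXT, salary REAL)")
conn.executemany("INSERT INTO employee VALUES (?, ?, ?)",
                 [(1, "Ann", 90000), (2, "Bob", 60000), (3, "Cho", 85000)])

print(results_match("SELECT name FROM employee WHERE salary > 80000",
                    "SELECT name FROM employee WHERE salary >= 80001",
                    conn))  # True: both queries return the same rows here
```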

Relevance:

90.00%

Publisher:

Abstract:

Multiresolution (or multi-scale) techniques make it possible for Web-based GIS applications to access large datasets. The performance of such systems relies on data transmission over the network and on multiresolution query processing. The latter has so far received little research attention in the literature, and existing methods are not capable of processing large datasets. In this paper, we aim to improve multiresolution query processing in an online environment. A cost model for such queries is proposed first, followed by three strategies for their optimization. Significant theoretical improvement can be observed when comparing against available methods. Application of these strategies is also discussed, and similar performance enhancement can be expected if they are implemented in online GIS applications.
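The paper's cost model itself is not reproduced here; the sketch below only illustrates the general shape such a model takes, with total response time split into server-side processing and network transfer, both driven by the number of vertices returned at the requested resolution. All coefficients are assumptions for illustration.

```python
def multiresolution_query_cost(n_vertices, io_cost_per_vertex=2e-6,
                               cpu_cost_per_vertex=5e-7, bytes_per_vertex=16,
                               bandwidth_bytes_per_s=1e6, latency_s=0.05):
    """Illustrative cost (seconds) of answering one multiresolution query:
    server I/O + CPU cost plus network latency and transfer time.
    Not the cost model proposed in the paper; coefficients are invented."""
    server = n_vertices * (io_cost_per_vertex + cpu_cost_per_vertex)
    network = latency_s + n_vertices * bytes_per_vertex / bandwidth_bytes_per_s
    return server + network

# A coarser resolution returns fewer vertices, so both cost terms shrink:
for n in (1_000_000, 100_000, 10_000):
    print(n, round(multiresolution_query_cost(n), 3))
```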

Relevance:

80.00%

Publisher:

Abstract:

This document records the process of migrating eprints.org data to a Fez repository. Fez is a Web-based digital repository and workflow management system based on Fedora (http://www.fedora.info/). At the time of migration, the University of Queensland Library was using EPrints 2.2.1 [pepper] for its ePrintsUQ repository. Once we began to develop Fez, we did not upgrade to later versions of the eprints.org software, since we knew we would be migrating data from ePrintsUQ to the Fez-based UQ eSpace. Since this document records our experiences of migration from an earlier version of eprints.org, anyone seeking to migrate eprints.org data into a Fez repository might encounter some small differences. Moving UQ publication data from an eprints.org repository into a Fez repository (hereafter called UQ eSpace, http://espace.uq.edu.au/) was part of a plan to integrate metadata (and, in some cases, full texts) about all UQ research outputs, including theses, images, multimedia and datasets, in a single repository. This tied in with the plan to identify and capture the research output of a single institution, the main task of the eScholarshipUQ testbed for the Australian Partnership for Sustainable Repositories project (http://www.apsr.edu.au/). The migration could not occur at UQ until the functionality in Fez was at least equal to that of the existing ePrintsUQ repository. Accordingly, as Fez development proceeded throughout 2006, a list of eprints.org functionality not yet supported in Fez was maintained so that the required development could be planned and implemented.
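One recurring step in a migration like this is mapping each source record's metadata fields onto the target repository's schema before ingest. The sketch below shows that step in the abstract, with invented field names on both sides; it is not the actual ePrintsUQ-to-Fez mapping, nor Fez's ingest tooling.

```python
# Hypothetical field mapping for illustration only; the real ePrintsUQ -> UQ eSpace
# migration used Fez's own ingest processes and a fuller schema.
FIELD_MAP = {
    "title": "dc:title",
    "creators": "dc:creator",
    "date": "dc:date",
    "abstract": "dc:description",
}

def map_eprint_record(eprint):
    """Translate a dict of EPrints metadata into a flat Dublin-Core-style dict."""
    return {dst: eprint[src] for src, dst in FIELD_MAP.items() if src in eprint}

record = {"title": "Example thesis", "creators": ["Smith, J."], "date": "2005"}
print(map_eprint_record(record))
```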

Relevance:

80.00%

Publisher:

Abstract:

In mapping the evolutionary process of online news and the socio-cultural factors determining this development, this paper has a dual purpose. First, in reworking the definition of "online communication", it argues that despite its seemingly sudden emergence in the 1990s, the history of online news began in the early days of the telegraph and continued through the development of the telephone and the fax machine before becoming computer-based in the 1980s and Web-based in the 1990s. Second, merging macro-perspectives on the dynamics of media evolution by DeFleur and Ball-Rokeach (1989) and Winston (1998), the paper consolidates a critical point for thinking about new media development: something being technically feasible does not always mean it will be socially accepted and/or demanded. From a producer-centric perspective, the birth and development of pre-Web online news forms were more or less driven by the traditional media's sometimes excessive hype about the power of new technologies. However, placing such an emphasis on technological potential at the expense of its social conditions can be not only misleading but also detrimental to the development of new media, including the potential of today's online news.

Relevance:

80.00%

Publisher:

Abstract:

Trust is a vital feature of the Semantic Web: if users (humans and agents) are to use and integrate system answers, they must trust them. Systems should therefore be able to explain their actions, sources, and beliefs; this issue is the topic of the proof layer in the design of the Semantic Web. This paper presents the design and implementation of a system for proof explanation on the Semantic Web, based on defeasible reasoning. The basis of this work is the DR-DEVICE system, which is extended to handle proofs. A critical aspect is the representation of proofs in an XML language, which is achieved through an extension of the RuleML language.
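The flavour of a defeasible-reasoning proof can be conveyed with a toy example: a defeasible rule contributes its conclusion unless a superior conflicting rule also fires, and the record of which rules were applied or defeated serves as the proof trace. The sketch below is not DR-DEVICE and does not use its RuleML proof format; the facts, rules and superiority ranks are invented.

```python
# Toy defeasible reasoning with a proof trace; invented example, not DR-DEVICE.
facts = {"bird", "penguin"}
rules = [
    # (name, antecedents, conclusion, superiority rank: higher wins)
    ("r_flies", {"bird"}, "flies", 1),
    ("r_no_fly", {"penguin"}, "not_flies", 2),
]

def conflicting(a, b):
    return a == "not_" + b or b == "not_" + a

def derive_with_proof(facts, rules):
    proof, conclusions = [], set()
    applicable = [r for r in rules if r[1] <= facts]  # antecedents all hold
    for name, body, head, rank in applicable:
        defeaters = [r for r in applicable
                     if conflicting(r[2], head) and r[3] > rank]
        if defeaters:
            proof.append(f"{name}: body {sorted(body)} holds, but defeated by "
                         f"{', '.join(d[0] for d in defeaters)}")
        else:
            conclusions.add(head)
            proof.append(f"{name}: body {sorted(body)} holds, conclude {head}")
    return conclusions, proof

conclusions, proof = derive_with_proof(facts, rules)
print(conclusions)      # {'not_flies'}
for step in proof:
    print(step)
```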

Relevance:

80.00%

Publisher:

Abstract:

Computational models complement laboratory experimentation for efficient identification of MHC-binding peptides and T-cell epitopes. Methods for prediction of MHC-binding peptides include binding motifs, quantitative matrices, artificial neural networks, hidden Markov models, and molecular modelling. Models derived by these methods have been successfully used for prediction of T-cell epitopes in cancer, autoimmunity, infectious disease, and allergy. For maximum benefit, the use of computer models must be treated as experiments analogous to standard laboratory procedures and performed according to strict standards. This requires careful selection of data for model building, and adequate testing and validation. A range of web-based databases and MHC-binding prediction programs are available. Although some available prediction programs for particular MHC alleles have reasonable accuracy, there is no guarantee that all models produce good quality predictions. In this article, we present and discuss a framework for modelling, testing, and applications of computational methods used in predictions of T-cell epitopes. (C) 2004 Elsevier Inc. All rights reserved.
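Of the prediction methods listed, the quantitative (position-specific scoring) matrix is the easiest to illustrate: each residue at each peptide position contributes an independent score, and the sum is compared against a binding threshold. The matrix values, peptide length and threshold below are invented and do not correspond to any real MHC allele model.

```python
# Toy position-specific scoring matrix for a 4-residue "peptide"; values are
# invented and do not correspond to any real MHC allele.
PSSM = {
    0: {"A": 1.2, "L": 0.4, "G": -0.5},
    1: {"A": 0.1, "L": 1.5, "G": -1.0},
    2: {"A": 0.3, "L": 0.2, "G": 0.8},
    3: {"A": -0.2, "L": 1.1, "G": 0.0},
}

def pssm_score(peptide, matrix=PSSM, default=-1.0):
    """Sum per-position residue scores; unseen residues get a default penalty."""
    return sum(matrix[i].get(residue, default) for i, residue in enumerate(peptide))

def predict_binder(peptide, threshold=2.0):  # threshold is an arbitrary cut-off
    return pssm_score(peptide) >= threshold

for pep in ("ALGL", "GGGG"):
    print(pep, round(pssm_score(pep), 2), predict_binder(pep))
```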

Relevance:

80.00%

Publisher:

Abstract:

Background: There has been a proliferation of quality use of medicines activities in Australia since the 1990s. However, knowledge of the nature and extent of these activities was lacking, and a mechanism was required to map the activities to enable their coordination. Aims: To develop a geographical mapping facility as an evaluative tool to assist the planning and implementation of Australia's policy on the quality use of medicines. Methods: A web-based database incorporating geographical mapping software was developed. Quality use of medicines projects implemented across the country were identified from project listings funded by the Quality Use of Medicines Evaluation Program, the National Health and Medical Research Council, the Mental Health Strategy, the Rural Health Support, Education and Training Program, the Healthy Seniors Initiative, the General Practice Evaluation Program and the Drug Utilisation Evaluation Network. In addition, projects were identified through direct mail to persons working in the field. Results: The Quality Use of Medicines Mapping Project (QUMMP) was developed, providing a web-based database that can be continuously updated. This database showed the distribution of quality use of medicines activities by: (i) geographical region, (ii) project type, (iii) target group, (iv) stakeholder involvement, (v) funding body and (vi) evaluation method. As of September 2001, the database included 901 projects. Sixty-two per cent of projects had been conducted in Australian capital cities, where approximately 63% of the population reside. Distribution of projects varied between states. In Western Australia and Queensland, 36 and 73 projects had been conducted, respectively, representing approximately two projects per 100 000 people. By comparison, in South Australia and Tasmania approximately seven projects per 100 000 people were recorded, with six per 100 000 people in Victoria and three per 100 000 people in New South Wales. Rural and remote areas of the country had more limited project activity. Conclusions: The mapping of projects by geographical location enabled easy identification of high and low activity areas. Analysis of the types of projects undertaken in each region enabled identification of target groups that had not been involved or services that had not yet been developed. This served as a powerful tool for policy planning and implementation and will be used to support the continued implementation of Australia's policy on the quality use of medicines.
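The per-capita figures quoted above are simple rates of projects per 100,000 residents. The snippet below shows that calculation for Western Australia and Queensland, using assumed approximate 2001 population figures rather than the exact denominators used in the project.

```python
# Projects per 100,000 people by state, as reported in the abstract.
# Population figures are rough 2001 approximations, included only to show
# how the quoted per-capita rates are derived.
projects = {"WA": 36, "QLD": 73}
population = {"WA": 1_900_000, "QLD": 3_650_000}  # assumed approximations

for state in projects:
    rate = projects[state] / population[state] * 100_000
    print(f"{state}: {rate:.1f} projects per 100,000 people")  # roughly 2 per 100,000
```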