852 results for Web-based tool
Abstract:
This paper argues for the utility of advanced knowledge-based techniques in developing web-based applications that help consumers find products in e-commerce marketplaces. In particular, we describe a model-based approach to developing a shopping agent that dynamically configures a product according to the needs and preferences of customers. Finally, the paper summarizes the advantages of this approach.
Abstract:
Carbon (C) and nitrogen (N) process-based models are important tools for estimating and reporting greenhouse gas emissions and changes in soil C stocks. There is a need for continuous evaluation, development and adaptation of these models to improve scientific understanding, national inventories and assessment of mitigation options across the world. To date, much of the information needed to describe different processes in ecosystem models, such as transpiration, photosynthesis, plant growth and maintenance, above- and below-ground carbon dynamics, decomposition and nitrogen mineralization, remains inaccessible to the wider community, being stored within model computer source code or held internally by modelling teams. Here we describe the Global Research Alliance Modelling Platform (GRAMP), a web-based modelling platform to link researchers with appropriate datasets, models and training material. It will provide access to model source code and an interactive platform for researchers to form a consensus on existing methods and to synthesize new ideas, which will help to advance progress in this area. The platform will eventually support a variety of models, but to trial the platform and test the architecture and functionality, it was piloted with variants of the DNDC model. The intention is to form a worldwide collaborative network (a virtual laboratory) via an interactive website with access to models and best practice guidelines; appropriate datasets for testing, calibrating and evaluating models; and on-line tutorials and links to modelling and data provider research groups and their associated publications. A graphical user interface has been designed to view the model development tree and access all of the above functions.
Abstract:
In this paper, a computer-based tool is developed to analyze student performance along a given curriculum. The proposed software uses historical data to compute passing/failing probabilities and simulates future student academic performance with stochastic (Monte Carlo) methods, according to the specific university regulations. This allows computing the academic performance rates for the specific subjects of the curriculum for each semester, as well as the overall rates for the set of subjects in each semester, namely the efficiency rate and the success rate. Additionally, we compute the rates for the Bachelor's degree: the graduation rate, measured as the percentage of students who finish as scheduled or with one extra year, and the efficiency rate, measured as the percentage of credits of the curriculum with respect to the credits actually taken. In Spain, these metrics have been defined by the National Quality Evaluation and Accreditation Agency (ANECA). Moreover, the sensitivity of the performance metrics to some of the parameters of the simulator is analyzed using statistical tools (Design of Experiments). The simulator has been adapted to the curriculum characteristics of the Bachelor in Engineering Technologies at the Technical University of Madrid (UPM).
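The Monte Carlo idea behind such a simulator can be sketched in a few lines. This is an illustrative toy, not the paper's actual tool: the subject names, passing probabilities and two-semester horizon are all assumptions made for the example.

```python
import random

# Hypothetical per-subject passing probabilities (illustrative values only).
PASS_PROB = {"Calculus": 0.65, "Physics": 0.70, "Programming": 0.80}

def simulate_student(rng):
    """Return the number of semesters one simulated student needs to
    pass every subject, retaking failed subjects each semester."""
    pending = set(PASS_PROB)
    semesters = 0
    while pending:
        semesters += 1
        # A subject stays pending when the random draw lands in the
        # failure region (probability 1 - PASS_PROB[subject]).
        pending = {s for s in pending if rng.random() >= PASS_PROB[s]}
    return semesters

def graduation_rate(n_students=10_000, max_semesters=2, seed=42):
    """Estimate the fraction of students who clear all subjects within
    max_semesters, averaged over n_students Monte Carlo runs."""
    rng = random.Random(seed)
    on_time = sum(simulate_student(rng) <= max_semesters
                  for _ in range(n_students))
    return on_time / n_students

print(round(graduation_rate(), 3))
```

With these toy probabilities the analytic on-time rate is the product of each subject's two-attempt pass probability, about 0.77, which the simulation approaches as the number of simulated students grows.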
Abstract:
Evaluating and measuring the pedagogical quality of Learning Objects is essential for successful web-based education. On the one hand, teachers need some assurance of the quality of teaching resources before making them part of the curriculum. On the other hand, Learning Object Repositories need to include quality information in the ranking metrics used by their search engines in order to save users time when searching. For these reasons, several models such as LORI (Learning Object Review Instrument) have been proposed to evaluate Learning Object quality from a pedagogical perspective. However, little effort has been put into defining and evaluating quality metrics based on those models. This paper proposes and evaluates a set of pedagogical quality metrics based on LORI. The work presented here shows that these metrics can be used effectively and reliably to provide quality-based sorting of search results. Moreover, it provides strong evidence that evaluating Learning Objects from a pedagogical perspective can notably enhance Learning Object search if suitable evaluation models and quality metrics are used. An evaluation of the LORI model is also described. Finally, all the presented metrics are compared and their weaknesses and strengths are discussed.
Abstract:
Language resources, such as multilingual lexica and multilingual electronic dictionaries, contain collections of lexical entries in several languages. Having access to the corresponding explicit or implicit translation relations between such entries might be of great interest for many NLP-based applications. By using Semantic Web-based techniques, translations can be made available on the Web to be consumed by other (semantically enabled) resources in a direct manner, without relying on application-specific formats. To that end, in this paper we propose a model for representing translations as linked data, as an extension of the lemon model. Our translation module represents some core information associated with term translations and does not commit to specific views or translation theories. As a proof of concept, we have extracted the translations of the terms contained in Terminesp, a multilingual terminological database, and represented them as linked data. We have made them accessible on the Web both for humans (via a Web interface) and software agents (with a SPARQL endpoint).
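Representing a translation pair as linked data can be pictured with a small Turtle-generating sketch. The prefixes, property names and URI scheme below are assumptions made for illustration; they are not the actual vocabulary of the paper's translation module.

```python
# Illustrative only: emit Turtle triples for one translation pair, in the
# spirit of a lemon-style translation module. All prefixes (ex:, trans:)
# and property names are hypothetical.
def translation_triples(source_term, target_term, src_lang, tgt_lang):
    src = f"ex:{source_term}-{src_lang}"
    tgt = f"ex:{target_term}-{tgt_lang}"
    rel = f"ex:trans_{source_term}_{target_term}"
    return "\n".join([
        f"{src} a ontolex:LexicalEntry ;",
        f'    rdfs:label "{source_term}"@{src_lang} .',
        f"{tgt} a ontolex:LexicalEntry ;",
        f'    rdfs:label "{target_term}"@{tgt_lang} .',
        f"{rel} a trans:Translation ;",
        f"    trans:translationSource {src} ;",
        f"    trans:translationTarget {tgt} .",
    ])

print(translation_triples("ordenador", "computer", "es", "en"))
```

Publishing such triples behind a SPARQL endpoint is what lets other semantic applications query translations directly, without parsing an application-specific dictionary format.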
Abstract:
The objective of the AsMamDB database is to facilitate the systematic study of alternatively spliced mammalian genes. Version 1.0 of AsMamDB contains 1563 alternatively spliced human, mouse and rat genes, each associated with a cluster of nucleotide sequences. The main information provided by AsMamDB includes gene alternative splicing patterns, gene structures, chromosomal locations, gene products and the tissues in which they are expressed. Alternative splicing patterns are represented by multiple alignments of the various gene transcripts and by graphs of their topological structures. Gene structures are illustrated by the distribution of exons, introns and various regulatory elements. There are 4204 DNAs, 3977 mRNAs, 8989 CDSs and 126,931 ESTs in the current database. More than 130,000 GenBank entries are covered and 4443 MEDLINE records are linked. DNA, mRNA, exon, intron and relevant regulatory element sequences are provided in FASTA format. More information can be obtained by using the web-based multiple alignment tool Asalign and various category lists. AsMamDB can be accessed at http://166.111.30.65/ASMAMDB.html.
MEDLINEplus: building and maintaining the National Library of Medicine's consumer health Web service
Abstract:
MEDLINEplus is a Web-based consumer health information resource, made available by the National Library of Medicine (NLM). MEDLINEplus has been designed to provide consumers with a well-organized, selective Web site facilitating access to reliable full-text health information. In addition to full-text resources, MEDLINEplus directs consumers to dictionaries, organizations, directories, libraries, and clearinghouses for answers to health questions. For each health topic, MEDLINEplus includes a preformulated MEDLINE search created by librarians. The site has been designed to match consumer language to medical terminology. NLM has used advances in database and Web technologies to build and maintain MEDLINEplus, allowing health sciences librarians to contribute remotely to the resource. This article describes the development and implementation of MEDLINEplus, its supporting technology, and plans for future development.
Abstract:
Proper management of supply chains is fundamental to the overall performance of forest-based activities. Efficient management techniques usually rely on decision support software, which needs to generate fast and effective outputs from the set of possibilities. This in turn requires accurate models of the dynamic interactions of the systems involved. Given the nature of forest-based supply chains, event-based models are well suited to describe their behaviour. This work proposes the modelling and simulation of a forest-based supply chain, in particular the biomass supply chain, through the SimPy framework. This Python-based tool allows the modelling of discrete-event systems using constructs such as events, processes and resources. The developed model was used to assess the impact of changes in the daily working plan in three situations. First, as a control case, the deterministic behaviour was simulated. Second, a machine delay was introduced and its implications for plan accomplishment were analysed. Finally, to better address real operating conditions, stochastic processing and driving times were simulated. The obtained results validate the SimPy simulation environment as a framework for modelling supply chains in general and for the biomass problem in particular.
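SimPy models processes as generators that request shared resources, but the underlying discrete-event mechanics can be shown with a stdlib-only sketch. The scenario below (trucks queueing for a single chipper, with fixed travel and chipping times) is an assumption for illustration, not the paper's model or data.

```python
import heapq

# Minimal discrete-event sketch: truck arrivals are events in a time-ordered
# queue; a single chipper is the contested resource. All times (minutes)
# and the scenario itself are illustrative assumptions.
def simulate(loads, chip_time=30, travel_time=45):
    """Return the makespan for chipping `loads` truckloads with one
    chipper, trucks arriving one round-trip apart."""
    chipper_free_at = 0
    events = [(i * travel_time, i) for i in range(loads)]  # (arrival, id)
    heapq.heapify(events)
    while events:
        arrival, _ = heapq.heappop(events)  # next event in time order
        start = max(arrival, chipper_free_at)  # wait if chipper is busy
        chipper_free_at = start + chip_time
    return chipper_free_at

print(simulate(5))
```

Introducing a machine delay or drawing `chip_time` and `travel_time` from probability distributions (the paper's second and third scenarios) amounts to perturbing the event times fed into the same queue.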
Abstract:
Manual curation has long been held to be the gold standard for functional annotation of DNA sequence. Our experience with the annotation of more than 20,000 full-length cDNA sequences revealed problems with this approach, including inaccurate and inconsistent assignment of gene names, as well as many good assignments that were difficult to reproduce using only computational methods. For the FANTOM2 annotation of more than 60,000 cDNA clones, we developed a number of methods and tools to circumvent some of these problems, including an automated annotation pipeline that provides high-quality preliminary annotation for each sequence by introducing an uninformative filter that eliminates uninformative annotations, controlled vocabularies to accurately reflect both the functional assignments and the evidence supporting them, and a highly refined, Web-based manual annotation tool that allows users to view a wide array of sequence analyses and to assign gene names and putative functions using a consistent nomenclature. The ultimate utility of our approach is reflected in the low rate of reassignment of automated assignments by manual curation. Based on these results, we propose a new standard for large-scale annotation, in which the initial automated annotations are manually investigated and then computational methods are iteratively modified and improved based on the results of manual curation.
Abstract:
Despite the increased offering of online communication channels to support web-based retail systems, there is limited marketing research that investigates how these channels act singly, or in combination with offline channels, to influence an individual's intention to purchase online. If the marketer's strategy is to encourage online transactions, this requires a focus on consumer acceptance of the web-based transaction technology, rather than the purchase of the products per se. The exploratory study reported in this paper examines normative influences from referent groups in an individual's on and offline social communication networks that might affect their intention to use online transaction facilities. The findings suggest that for non-adopters, there is no normative influence from referents in either network. For adopters, one online and one offline referent norm positively influenced this group's intentions to use online transaction facilities. The implications of these findings are discussed together with future research directions.
Abstract:
Human perception is finely tuned to extract structure about the 4D world of time and space as well as properties such as color and texture. Developing intuitions about spatial structure beyond 4D requires exploiting other perceptual and cognitive abilities. One of the most natural ways to explore complex spaces is for a user to actively navigate through them, using local explorations and global summaries to develop intuitions about structure, and then testing the developing ideas by further exploration. This article provides a brief overview of a technique for visualizing surfaces defined over moderate-dimensional binary spaces, by recursively unfolding them onto a 2D hypergraph. We briefly summarize the uses of a freely available Web-based visualization tool, Hyperspace Graph Paper (HSGP), for exploring fitness landscapes and search algorithms in evolutionary computation. HSGP provides a way for a user to actively explore a landscape, from simple tasks such as mapping the neighborhood structure of different points, to seeing global properties such as the size and distribution of basins of attraction or how different search algorithms interact with landscape structure. It has been most useful for exploring recursive and repetitive landscapes, and its strength is that it allows intuitions to be developed through active navigation by the user, and exploits the visual system's ability to detect pattern and texture. The technique is most effective when applied to continuous functions over Boolean variables using 4 to 16 dimensions.
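The unfolding of a Boolean space onto a 2D grid can be illustrated with a Karnaugh-map-style layout: split the bits across the two axes and order each axis by reflected Gray code, so that adjacent grid cells differ in exactly one bit. HSGP's actual recursive unfolding may differ in detail; everything below is an assumed sketch of the general technique.

```python
from itertools import product

def gray_rank(bits):
    """Rank of a bit tuple in reflected-Gray-code order, so that
    consecutive ranks differ in exactly one bit."""
    rank, acc = 0, 0
    for b in bits:
        acc ^= b                     # binary digit = prefix XOR of Gray digits
        rank = (rank << 1) | acc
    return rank

def unfold(bits):
    """Map an n-bit point to a (row, col) grid cell by Gray-ordering the
    even-index bits on one axis and the odd-index bits on the other."""
    return gray_rank(bits[0::2]), gray_rank(bits[1::2])

# Lay out the full 4-bit Boolean space on a 4x4 grid.
grid = {unfold(p): p for p in product((0, 1), repeat=4)}
```

Because each axis is Gray-ordered, moving one cell horizontally or vertically flips exactly one bit, which is what lets a user "walk" the space and read local fitness structure off neighbouring cells.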
Abstract:
This paper describes the use of a website for the dissemination of the community-based '10,000 steps' program, which was originally developed and evaluated in Rockhampton, Queensland in 2001-2003. The website provides information and interactive activities for individuals, and promotes resources and programs for health promotion professionals. The dissemination activity was assessed in terms of program adoption and implementation. In a 2-year period (May 2004-March 2006) more than 18,000 people registered as users of the website (logging more than 8.5 billion steps) and almost 100 workplaces and 13 communities implemented aspects of the 10,000 steps program. These data support the use of the internet as an effective means of disseminating ideas and resources beyond the geographical borders of the original project. Following this preliminary dissemination, there remains a need for the systematic study of different dissemination strategies, so that evidence-based physical activity programs can be translated into more widespread public health practice. (c) 2006 Sports Medicine Australia. Published by Elsevier Ltd. All rights reserved.
Abstract:
The international FANTOM consortium aims to produce a comprehensive picture of the mammalian transcriptome, based upon an extensive cDNA collection and functional annotation of full-length enriched cDNAs. The previous dataset, FANTOM2, comprised 60,770 full-length enriched cDNAs. Functional annotation revealed that this cDNA dataset contained only about half of the estimated number of mouse protein-coding genes, indicating that a number of cDNAs still remained to be collected and identified. To pursue the complete gene catalog that covers all predicted mouse genes, cloning and sequencing of full-length enriched cDNAs has continued since FANTOM2. In FANTOM3, 42,031 newly isolated cDNAs were subjected to functional annotation, and the annotation of 4,347 FANTOM2 cDNAs was updated. To accomplish accurate functional annotation, we improved our automated annotation pipeline by introducing new coding sequence prediction programs and developed a Web-based annotation interface that simplifies the annotation procedures to reduce manual annotation errors. Automated coding sequence and function prediction was followed by manual curation and review by expert curators. A total of 102,801 full-length enriched mouse cDNAs were annotated. Of these, 56,722 were functionally annotated as protein coding (including partial or truncated transcripts), providing to our knowledge the greatest current coverage of the mouse proteome by full-length cDNAs. The total number of distinct non-protein-coding transcripts increased to 34,030. The FANTOM3 annotation system, consisting of automated computational prediction, manual curation, and final expert curation, facilitated the comprehensive characterization of the mouse transcriptome, and could be applied to the transcriptomes of other species.
Abstract:
SQL (Structured Query Language) is one of the essential topics in foundation database courses in higher education. Despite its apparently simple syntax, learning to use the full power of SQL can be a very difficult activity. In this paper, we introduce SQLator, a web-based interactive tool for learning SQL. SQLator's key feature is its evaluate function, which allows users to check the correctness of their query formulations; the evaluation engine is based on complex heuristic algorithms. The tool also provides instructors with the facility to create and populate database schemas with an associated pool of SQL queries. Currently it hosts two databases with a combined pool of more than 300 queries, divided into three categories according to query complexity. SQLator users can perform unlimited executions and evaluations of query formulations and/or view the solutions. The evaluate function has a high rate of success in judging a user's statement as correct (or incorrect) with respect to the question. We present the basic architecture and functions of SQLator, discuss its value as an educational technology, and report on educational outcomes based on studies conducted at the School of Information Technology and Electrical Engineering, The University of Queensland.
Abstract:
This paper describes and analyses an innovative engineering management course that applies a project management framework in the context of a feasibility study for a prospective research project. The aim is to have students learn aspects of management that will be relevant from the outset of their professional career while simultaneously having immediate value in helping them to manage a research project and capstone design project in their senior year. An integral part of this innovation was the development of a web-based project management tool. While the main objectives of the new course design were achieved, a number of important lessons were learned that would guide the further development and continuous improvement of this course. The most critical of these is the need to achieve the optimum balance in the mind of the students between doing the project and critically analyzing the processes used to accomplish the work.