34 results for Web-Centric Expert System
Abstract:
In this paper, we describe the Vannotea system - an application designed to enable collaborating groups to discuss and annotate collections of high-quality images, video, audio or 3D objects. The system has been designed specifically to capture and share scholarly discourse and annotations about multimedia research data by teams of trusted colleagues within a research or academic environment. As such, it provides: authenticated access to a web browser search interface for discovering and retrieving media objects; a media replay window that can incorporate a variety of embedded plug-ins to render different scientific media formats; an annotation authoring, editing, searching and browsing tool; and session logging and replay capabilities. Annotations are personal remarks, interpretations, questions or references that can be attached to whole files, segments or regions. Vannotea enables annotations to be attached either synchronously (using Jabber message passing and audio/video conferencing) or asynchronously and stand-alone. The annotations are stored on an Annotea server extended for multimedia content. Their access, retrieval and re-use are controlled via Shibboleth identity management and XACML access policies.
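The abstract gives no code; purely as an illustration of the data model it describes, the Python sketch below (all class and field names are hypothetical) shows an annotation attached to a region of a media object, with a tiny policy check standing in for the Shibboleth/XACML access control.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class MediaRegion:
    """A temporal segment or spatial region of a media object (illustrative)."""
    media_uri: str
    start_sec: Optional[float] = None   # temporal segment, if any
    end_sec: Optional[float] = None
    bbox: Optional[tuple] = None        # (x, y, w, h) image region, if any

@dataclass
class Annotation:
    """A personal remark, interpretation, question or reference."""
    author: str
    body: str
    target: MediaRegion
    annotation_type: str = "comment"

@dataclass
class AccessPolicy:
    """Very small stand-in for an XACML policy: who may read or annotate."""
    readers: set = field(default_factory=set)
    annotators: set = field(default_factory=set)

    def permits(self, user: str, action: str) -> bool:
        if action == "read":
            return user in self.readers
        if action == "annotate":
            return user in self.annotators
        return False

# Usage: attach an annotation to a video segment and check access.
region = MediaRegion("http://example.org/media/surgery.mpg", 12.0, 34.5)
note = Annotation("alice", "Note the suture technique here.", region)
policy = AccessPolicy(readers={"alice", "bob"}, annotators={"alice"})
assert policy.permits("alice", "annotate")
assert not policy.permits("bob", "annotate")
```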
Abstract:
Trust is a vital feature of the Semantic Web: if users (humans and agents) are to use and integrate system answers, they must trust them. Thus, systems should be able to explain their actions, sources, and beliefs; this issue is the topic of the proof layer in the design of the Semantic Web. This paper presents the design and implementation of a system for proof explanation on the Semantic Web, based on defeasible reasoning. The basis of this work is the DR-DEVICE system, which is extended to handle proofs. A critical aspect is the representation of proofs in an XML language, which is achieved by a RuleML language extension.
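The paper describes proof explanation for defeasible reasoning without giving code; the toy Python sketch below (rule names, facts and the superiority relation are invented) derives a conclusion defeasibly and records a proof trace of the kind that a system such as DR-DEVICE could then serialize in an extended RuleML/XML syntax.

```python
# Toy defeasible reasoner that records a proof trace (illustrative only).
facts = {"bird(tweety)", "penguin(tweety)"}

# (name, premises, conclusion); r2 is declared superior to r1.
defeasible_rules = [
    ("r1", {"bird(tweety)"},    "flies(tweety)"),
    ("r2", {"penguin(tweety)"}, "-flies(tweety)"),
]
superiority = {("r2", "r1")}   # r2 > r1

def negate(lit):
    return lit[1:] if lit.startswith("-") else "-" + lit

def prove(goal):
    """Return (supported, trace) for a single defeasible inference step."""
    trace = []
    for name, prem, concl in defeasible_rules:
        if concl == goal and prem <= facts:
            # Applicable attacking rules with the opposite conclusion.
            attackers = [a for a, p, c in defeasible_rules
                         if c == negate(goal) and p <= facts]
            defeated = [a for a in attackers if (name, a) in superiority]
            if set(attackers) == set(defeated):
                trace.append(f"{name}: {sorted(prem)} => {goal} "
                             f"(defeats {defeated or 'nothing'})")
                return True, trace
            trace.append(f"{name} blocked by {attackers}")
    return False, trace

ok, proof = prove("-flies(tweety)")
print(ok)                 # True
print("\n".join(proof))   # the explanation that would be exported as XML
```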
Abstract:
A Geographic Information System (GIS) was used to model datasets of Leyte Island, the Philippines, to identify land suitable for a forest extension program on the island. The datasets were modelled to provide maps of the distance of land from cities and towns, land with an elevation and slope suitable for smallholder forestry, and land of various soil types. An expert group was used to assign numeric site suitabilities to the soil types, and maps of site suitability were used to assist the selection of municipalities for the provision of extension assistance to smallholders. Modelling of the datasets was facilitated by recent developments of the ArcGIS® suite of computer programs, and derivation of elevation and slope was assisted by the availability of digital elevation models (DEM) produced by the Shuttle Radar Topography Mission (SRTM). The usefulness of GIS software as a decision support tool for small-scale forestry extension programs is discussed.
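The modelling itself was carried out in ArcGIS®; purely as an illustration of the kind of overlay involved, the Python/NumPy sketch below derives slope from a toy DEM and combines elevation, slope and expert-assigned soil suitability into a single grid. All thresholds, soil codes and scores are hypothetical.

```python
import numpy as np

# Toy DEM (metres); real inputs would be SRTM rasters.
dem = np.array([[120., 125., 132.],
                [118., 128., 140.],
                [115., 130., 150.]])
cell_size = 90.0  # metres per cell (SRTM-like resolution)

# Slope in degrees derived from the DEM gradient.
dz_dy, dz_dx = np.gradient(dem, cell_size)
slope_deg = np.degrees(np.arctan(np.hypot(dz_dx, dz_dy)))

# Hypothetical elevation and slope limits for smallholder forestry.
elev_ok = (dem >= 0) & (dem <= 500)
slope_ok = slope_deg <= 18

# Expert-assigned suitability scores per soil type (codes and scores invented).
soil = np.array([[1, 1, 2],
                 [1, 3, 2],
                 [3, 3, 2]])
soil_score = {1: 0.9, 2: 0.6, 3: 0.2}
soil_suit = np.vectorize(soil_score.get)(soil)

# Combined suitability: zero wherever elevation or slope rules the land out.
suitability = soil_suit * elev_ok * slope_ok
print(np.round(suitability, 2))
```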
Abstract:
Background: A major goal in the post-genomic era is to identify and characterise disease susceptibility genes and to apply this knowledge to disease prevention and treatment. Rodents and humans have remarkably similar genomes and share closely related biochemical, physiological and pathological pathways. In this work we utilised the latest information on the mouse transcriptome as revealed by the RIKEN FANTOM2 project to identify novel human disease-related candidate genes. We define a new term, patholog, to mean a homolog of a human disease-related gene encoding a product (transcript, anti-sense or protein) potentially relevant to disease. Rather than just focus on Mendelian inheritance, we applied the analysis to all potential pathologs regardless of their inheritance pattern. Results: Bioinformatic analysis and human curation of 60,770 RIKEN full-length mouse cDNA clones produced 2,578 sequences that showed similarity (70-85% identity) to known human disease genes. Using a newly developed biological information extraction and annotation tool (FACTS) in parallel with human expert analysis of 17,051 MEDLINE scientific abstracts, we identified 182 novel potential pathologs. Of these, 36 were identified by computational tools only, 49 by human expert analysis only and 97 by both methods. These pathologs were related to neoplastic (53%), hereditary (24%), immunological (5%), cardiovascular (4%) or other (14%) disorders. Conclusions: Large-scale genome projects continue to produce a vast amount of data with potential application to the study of human disease. For this potential to be realised we need intelligent strategies for data categorisation and the ability to link sequence data with relevant literature. This paper demonstrates the power of combining human expert annotation with FACTS, a newly developed bioinformatics tool, to identify novel pathologs from within large-scale mouse transcript datasets.
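No code accompanies the abstract; the Python sketch below (all data are invented) mimics the screening step it describes: keep mouse cDNA hits whose identity to a known human disease gene falls inside the 70-85% window, then flag those that also have literature support, roughly the role played by FACTS and the human curators.

```python
# Hypothetical similarity hits: (mouse clone, human disease gene, % identity).
hits = [
    ("clone_A1", "BRCA2", 82.5),
    ("clone_B7", "FBN1",  68.0),   # below the 70% floor: excluded
    ("clone_C3", "MLH1",  96.0),   # above 85%: likely the known ortholog, excluded
    ("clone_D9", "PSEN1", 74.1),
]

# Candidate pathologs: similarity to a disease gene within the 70-85% window.
candidates = [(clone, gene, pid) for clone, gene, pid in hits if 70.0 <= pid <= 85.0]

# Hypothetical literature evidence extracted from MEDLINE abstracts
# (a stand-in for the FACTS tool plus expert reading).
literature_support = {"BRCA2": ["neoplastic"], "PSEN1": ["hereditary"]}

for clone, gene, pid in candidates:
    categories = literature_support.get(gene, [])
    status = "supported by literature" if categories else "computational only"
    print(f"{clone}: patholog candidate via {gene} ({pid:.1f}% id) - {status} {categories}")
```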
Abstract:
The subjective interpretation of dobutamine echocardiography (DBE) makes the accuracy of this technique dependent on the experience of the observer, and also poses problems of concordance between observers. Myocardial tissue Doppler velocity (MDV) may offer a quantitative technique for identification of coronary artery disease, but it is unclear whether this parameter could improve the results of less expert readers and in segments with low interobserver concordance. The aim of this study was to find whether MDV improved the accuracy of wall motion scoring in novice readers, experienced echocardiographers, and experts in stress echocardiography, and to identify the optimal means of integrating these tissue Doppler data in 77 patients who underwent DBE and angiography. New or worsening abnormalities were identified as ischemia and abnormalities seen at rest as scarring. Segmental MDV was measured independently, and previously derived cutoffs were applied to categorize segments as normal or abnormal. Five strategies were used to combine MDV and wall motion score, and the results of each reader using each strategy were compared with quantitative coronary angiography. The accuracy of wall motion scoring by novice (68 +/- 3%) and experienced echocardiographers (71 +/- 3%) was less than that of experts in stress echocardiography (88 +/- 3%, p < 0.001). Various strategies for integration with MDV significantly improved the accuracy of wall motion scoring by novices from 75 +/- 2% to 77 +/- 5% (p < 0.01). Among the experienced group, accuracy improved from 74 +/- 2% to 77 +/- 5% (p < 0.05), but in the experts, no improvement was seen from their baseline accuracy. Integration with MDV also reduced discordance related to the basal segments. Thus, use of MDV in all segments, or MDV in all segments with wall motion scoring in the apex, offers an improvement in sensitivity and accuracy with minimal compromise in specificity. (C) 2001 by Excerpta Medica, Inc.
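The abstract reports clinical results only; the Python sketch below (cutoff value and segment data are invented) illustrates one of the integration strategies described, applying MDV in all segments except the apex, where the reader's wall motion score is kept.

```python
# One hypothetical integration strategy: trust tissue Doppler velocity (MDV)
# in basal/mid segments, keep the reader's wall motion score (WMS) at the apex.
MDV_CUTOFF_CM_S = 5.5   # invented; the study used previously derived cutoffs

segments = [
    # (segment name, is_apical, reader wall-motion call, peak MDV in cm/s)
    ("basal anterior", False, "normal",   4.2),
    ("mid septal",     False, "abnormal", 6.1),
    ("apical lateral", True,  "abnormal", 3.0),
]

def integrated_call(is_apical, wms_call, mdv):
    if is_apical:
        return wms_call                      # apex: keep the reader's scoring
    return "normal" if mdv >= MDV_CUTOFF_CM_S else "abnormal"

for name, apical, wms, mdv in segments:
    print(f"{name:15s} WMS={wms:8s} MDV={mdv:.1f} -> {integrated_call(apical, wms, mdv)}")
```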
Abstract:
The fate of N-15-nitrogen-enriched formulated feed fed to shrimp was traced through the food web in shallow, outdoor tank systems (1000 l) stocked with shrimp. Triplicate tanks containing shrimp, with and without sediment, were used to identify the role of the natural biota in the water column and sediment in processing dietary nitrogen (N). A preliminary experiment demonstrated that N-15-nitrogen-enriched feed products could be detected in the food web. Based on this, a 15-day experiment was conducted. The ammonium (NH4+) pool in the water column became rapidly enriched (within one day) with N-15-nitrogen after shrimp were fed N-15-enriched feed. By day 15, 6% of the added N-15-nitrogen was in this fraction in the 'sediment' tanks compared with 0.4% in the 'no sediment' tanks. The particulate fraction in the water column, principally autotrophic nanoflagellates, accounted for 4-5% of the N-15-nitrogen fed to shrimp after one day. This increased to 16% in the 'no sediment' treatment, and decreased to 2% in the 'sediment' treatment by day 15. It appears that dietary N was more accessible to the phytoplankton community in the absence of sediment. The difference is possibly because a proportion of the dietary N was buried in the sediment in the 'sediment' treatment, making it unavailable to the phytoplankton. Alternatively, the dietary N was retained in the NH4+ pool in the water column since phytoplankton growth, and hence, N utilization was lower in the 'sediment' treatment. The lower growth of phytoplankton in the 'sediment' treatment appeared to be related to higher turbidity, and hence, lower light availability for growth. The percentage of N-15-nitrogen detected in the sediment was only 6% despite the high capacity for sedimentation of the large biomass of plankton detritus and shrimp waste. This suggests rapid remineralization of organic waste by the microbial community in the sediment, resulting in diffusion of inorganic N sources into the water column. It is likely that most of the dietary N will ultimately be removed from the tank system by water discharges. Our study showed that N-15-nitrogen derived from aquaculture feed can be processed by the microbial community in outdoor aquaculture systems and provides a method for determining the effect of dietary N on ecosystems. However, a significant amount of the dietary N was not retained by the natural biota and is likely to be present in the soluble organic fraction. (C) 2002 Elsevier Science B.V. All rights reserved.
Abstract:
Manual curation has long been held to be the gold standard for functional annotation of DNA sequence. Our experience with the annotation of more than 20,000 full-length cDNA sequences revealed problems with this approach, including inaccurate and inconsistent assignment of gene names, as well as many good assignments that were difficult to reproduce using only computational methods. For the FANTOM2 annotation of more than 60,000 cDNA clones, we developed a number of methods and tools to circumvent some of these problems, including an automated annotation pipeline that provides high-quality preliminary annotation for each sequence by introducing an uninformative filter that eliminates uninformative annotations, controlled vocabularies to accurately reflect both the functional assignments and the evidence supporting them, and a highly refined, Web-based manual annotation tool that allows users to view a wide array of sequence analyses and to assign gene names and putative functions using a consistent nomenclature. The ultimate utility of our approach is reflected in the low rate of reassignment of automated assignments by manual curation. Based on these results, we propose a new standard for large-scale annotation, in which the initial automated annotations are manually investigated and then computational methods are iteratively modified and improved based on the results of manual curation.
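As a rough illustration of the "uninformative filter" idea (the actual FANTOM2 pipeline and its term list are not reproduced here), the Python sketch below drops annotation strings that carry no functional information before they reach manual curation; the pattern list is invented.

```python
import re

# Invented patterns for annotation strings that carry no functional information.
UNINFORMATIVE = [
    r"^hypothetical protein",
    r"^unknown\b",
    r"^unnamed protein product",
    r"\bexpressed sequence tag\b",
    r"^riken cdna \w+ gene",
]
uninformative_re = re.compile("|".join(UNINFORMATIVE), re.IGNORECASE)

def filter_annotations(candidates):
    """Keep only informative preliminary annotations for a cDNA sequence."""
    return [a for a in candidates if not uninformative_re.search(a)]

candidates = [
    "hypothetical protein LOC000001",
    "ATP-binding cassette transporter, sub-family C",
    "unknown EST",
    "RIKEN cDNA 2310047M10 gene",
]
print(filter_annotations(candidates))
# ['ATP-binding cassette transporter, sub-family C']
```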
Abstract:
Assessments for assigning the conservation status of threatened species that are based purely on subjective judgements become problematic because assessments can be influenced by hidden assumptions, personal biases and perceptions of risks, making the assessment process difficult to repeat. This can result in inconsistent assessments and misclassifications, which can lead to a lack of confidence in species assessments. It is almost impossible to understand an expert's logic or visualise the underlying reasoning behind the many hidden assumptions used throughout the assessment process. In this paper, we formalise the decision-making process of experts, by capturing their logical ordering of information, their assumptions and reasoning, and transferring them into a set of decision rules. We illustrate this through the process used to evaluate the conservation status of species under the NatureServe system (Master, 1991). NatureServe status assessments have been used for over two decades to set conservation priorities for threatened species throughout North America. We develop a conditional point-scoring method to reflect the current subjective process. In two test comparisons, 77% of species' assessments using the explicit NatureServe method matched the qualitative assessments done subjectively by NatureServe staff. Of those that differed, no rank varied by more than one rank level under the two methods. In general, the explicit NatureServe method tended to be more precautionary than the subjective assessments. The rank differences that emerged from the comparisons may be due, at least in part, to the flexibility of the qualitative system, which allows different factors to be weighted on a species-by-species basis according to expert judgement. The method outlined in this study is the first documented attempt to explicitly define a transparent process for weighting and combining factors under the NatureServe system. The process of eliciting expert knowledge identifies how information is combined and highlights any inconsistent logic that may not be obvious in subjective decisions. The method provides a repeatable, transparent, and explicit benchmark for feedback, further development, and improvement. (C) 2004 Elsevier SAS. All rights reserved.
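The paper's actual conditional point-scoring rules are not reproduced here; the Python sketch below is only a schematic of that style of method, with invented factor weights, conditions and rank thresholds.

```python
# Schematic conditional point-scoring for a conservation status rank.
# All weights, conditions and thresholds are invented for illustration.

def score_species(factors):
    """factors: dict with keys such as 'range_km2', 'n_occurrences', 'threat'."""
    points = 0.0

    # Smaller range and fewer occurrences contribute more risk points.
    if factors["range_km2"] < 100:
        points += 3
    elif factors["range_km2"] < 5000:
        points += 2
    else:
        points += 0.5

    if factors["n_occurrences"] <= 5:
        points += 3
    elif factors["n_occurrences"] <= 80:
        points += 1.5

    # Conditional weighting: threats count for more when the range is small.
    threat_weight = 2.0 if factors["range_km2"] < 5000 else 1.0
    points += threat_weight * {"low": 0, "moderate": 1, "severe": 2}[factors["threat"]]
    return points

def rank_from_points(points):
    # Invented cut-offs mapping points to NatureServe-style ranks, G1 (most
    # imperilled) to G5 (secure); a precautionary scheme rounds risk upward.
    for cutoff, rank in [(8, "G1"), (6, "G2"), (4, "G3"), (2, "G4")]:
        if points >= cutoff:
            return rank
    return "G5"

species = {"range_km2": 80, "n_occurrences": 4, "threat": "severe"}
pts = score_species(species)
print(pts, rank_from_points(pts))   # 10.0 G1
```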
Abstract:
Refinement in software engineering allows a specification to be developed in stages, with design decisions taken at earlier stages constraining the design at later stages. Refinement in complex data models is difficult due to the lack of a way of defining constraints that can be progressively maintained over increasingly detailed refinements. Category theory provides a way of stating wide-scale constraints. These constraints lead to a set of design guidelines that maintain the wide-scale constraints under increasing detail. Previous methods of refinement are essentially local, and the proposed method does not interfere very much with these local methods. The result is particularly applicable to Semantic Web applications, where ontologies provide systems of more or less abstract constraints on systems, which must be implemented and therefore refined by participating systems. With the approach of this paper, the concept of committing to an ontology carries much more force. (c) 2005 Elsevier B.V. All rights reserved.
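The paper works at the level of category theory rather than code; as a loose illustration of the kind of constraint preservation it discusses, the Python sketch below checks that a concrete (refined) operation commutes with its abstract specification through an abstraction map. The example types and maps are invented, not the paper's formalism.

```python
# A commuting-square check: the refined operation must agree with the
# abstract specification when viewed through the abstraction map.

# Abstract model: an account balance is a single integer.
def abstract_deposit(balance, amount):
    return balance + amount

# Concrete refinement: the balance is a list of individual transactions.
def concrete_deposit(transactions, amount):
    return transactions + [amount]

# Abstraction map from the concrete state to the abstract state.
def abstraction(transactions):
    return sum(transactions)

def refinement_holds(transactions, amount):
    """abstraction(concrete op) == abstract op(abstraction): the square commutes."""
    left = abstraction(concrete_deposit(transactions, amount))
    right = abstract_deposit(abstraction(transactions), amount)
    return left == right

print(refinement_holds([10, 25], 5))   # True: this refinement maintains the constraint
```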
Abstract:
Power systems are large-scale nonlinear systems with high complexity. Various optimization techniques and expert systems have been used in power system planning. However, there are always some factors that cannot be quantified, modeled, or even expressed by expert systems. Moreover, such planning problems are often large-scale optimization problems. Although computational algorithms capable of handling high-dimensional problems can be used, the computational costs are still very high. To solve these problems, this paper investigates the efficiency and effectiveness of combining mathematical algorithms with human intelligence. It has been found that humans can join the decision-making process through cognitive feedback. Based on cognitive feedback and the genetic algorithm, a new algorithm called the cognitive genetic algorithm is presented. This algorithm can clarify and extract human cognition. As an important application of this cognitive genetic algorithm, a practical decision method for power distribution system planning is proposed. By using this decision method, optimal results that satisfy human expertise can be obtained, while the limitations of human experts are minimized.
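The abstract gives no algorithmic detail beyond combining a genetic algorithm with human cognitive feedback; the Python sketch below is a minimal, invented illustration of that idea, in which a feedback term supplied by a human expert adjusts the fitness used for selection.

```python
import random

random.seed(1)

def base_cost(plan):
    """Stand-in for the numeric planning objective (e.g. line length, losses)."""
    return sum(plan)

def expert_feedback(plan):
    """Stand-in for cognitive feedback: a penalty for configurations the
    expert judges impractical (here, 'too many feeders switched on')."""
    return 5.0 if sum(plan) > 6 else 0.0

def fitness(plan):
    return -(base_cost(plan) + expert_feedback(plan))   # higher is better

def mutate(plan, rate=0.1):
    return [1 - g if random.random() < rate else g for g in plan]

# Tiny GA loop over binary "switch" decisions for a distribution network.
population = [[random.randint(0, 1) for _ in range(10)] for _ in range(20)]
for generation in range(30):
    population.sort(key=fitness, reverse=True)
    parents = population[:10]
    children = [mutate(random.choice(parents)) for _ in range(10)]
    population = parents + children

best = max(population, key=fitness)
print(best, fitness(best))
```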
Abstract:
Aim: To present an evidence-based framework to improve the quality of occupational therapy expert opinions on work capacity for litigation, compensation and insurance purposes. Methods: Grounded theory methodology was used to collect and analyse data from a sample of 31 participants, comprising 19 occupational therapists, 6 medical specialists and 6 lawyers. A focused semistructured interview was completed with each participant. In addition, 20 participants verified the key findings. Results: The framework is contextualised within a medicolegal system requiring increasing expertise. The framework consists of (i) broad professional development strategies and principles, and (ii) specific strategies and principles for improving opinions through reporting and assessment practices. Conclusions: The synthesis of the participants' recommendations provides systematic guidelines for improving occupational therapy expert opinion on work capacity.
Abstract:
The international FANTOM consortium aims to produce a comprehensive picture of the mammalian transcriptome, based upon an extensive cDNA collection and functional annotation of full-length enriched cDNAs. The previous dataset, FANTOM2, comprised 60,770 full-length enriched cDNAs. Functional annotation revealed that this cDNA dataset contained only about half of the estimated number of mouse protein-coding genes, indicating that a number of cDNAs still remained to be collected and identified. To pursue the complete gene catalog that covers all predicted mouse genes, cloning and sequencing of full-length enriched cDNAs has been continued since FANTOM2. In FANTOM3, 42,031 newly isolated cDNAs were subjected to functional annotation, and the annotation of 4,347 FANTOM2 cDNAs was updated. To accomplish accurate functional annotation, we improved our automated annotation pipeline by introducing new coding sequence prediction programs and developed a Web-based annotation interface for simplifying the annotation procedures to reduce manual annotation errors. Automated coding sequence and function prediction was followed by manual curation and review by expert curators. A total of 102,801 full-length enriched mouse cDNAs were annotated. Out of 102,801 transcripts, 56,722 were functionally annotated as protein coding (including partial or truncated transcripts), providing to our knowledge the greatest current coverage of the mouse proteome by full-length cDNAs. The total number of distinct non-protein-coding transcripts increased to 34,030. The FANTOM3 annotation system, consisting of automated computational prediction, manual curation, and final expert curation, facilitated the comprehensive characterization of the mouse transcriptome, and could be applied to the transcriptomes of other species.
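The FANTOM3 pipeline itself is not reproduced here; the short Python sketch below merely illustrates the three-stage flow described above, automated prediction, manual curation, then final expert review, with invented confidence values deciding which transcripts are flagged for human attention.

```python
# Invented three-stage annotation flow: automated prediction -> manual
# curation -> final expert review, mirroring the workflow described above.

def automated_prediction(seq_id):
    """Stand-in for CDS/function prediction; returns (label, confidence)."""
    fake_results = {
        "cdna_001": ("protein_coding", 0.95),
        "cdna_002": ("protein_coding", 0.55),
        "cdna_003": ("non_coding",     0.80),
    }
    return fake_results[seq_id]

def annotate(seq_ids, review_threshold=0.7):
    records = []
    for sid in seq_ids:
        label, conf = automated_prediction(sid)
        needs_curation = conf < review_threshold
        records.append({"id": sid, "label": label, "confidence": conf,
                        "status": "manual_curation" if needs_curation
                                  else "expert_review"})
    return records

for record in annotate(["cdna_001", "cdna_002", "cdna_003"]):
    print(record)
```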
Abstract:
Successful graduates in today's competitive business environments must possess sound interpersonal skills and the ability to work effectively in team situations within, and across, disciplines. However, developing these skills within the higher education curriculum is fraught with organisational and pedagogical difficulties, with many teachers not having the skills, time or resources to facilitate productive group processes. Furthermore, many students find their teamwork experiences frustrating, demanding, conflict-ridden and unproductive. This paper brings together the perspectives and experiences of an engineer and a social scientist in a cross-disciplinary examination of the characteristics of effective teamwork skills and processes. A focus is the development and operation of 'TeamWorker', an innovative online system that helps students and staff manage their team activities and assessment. TeamWorker was created to enhance team teaching and learning processes and outcomes including team creation, administration, development and evaluation. Importantly, TeamWorker can facilitate the early identification of problematic group dynamics, thereby enabling early intervention.
Abstract:
A Web interface agent is used with Web browsers to assist users in searching and interacting with the WWW. It is used for a variety of purposes, such as Web-enabled remote control, Web interactive visualization, and e-commerce activities. The user may be aware or unaware of its existence. The intelligence of an interface agent lies in its capability of learning and decision-making in performing interactive functions on behalf of a user. However, since the Web is an open system environment, the reasoning mechanism in an agent should be able to adapt to changes and make decisions in exceptional situations, and therefore use meta knowledge. This paper proposes a framework for a Reflective Web Interface Agent (RWIA) that provides causal connections between the application interfaces and the knowledge model of the interface agent. A prototype is also implemented for the purpose of demonstration.
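The RWIA framework is described only at the architectural level; the Python sketch below (all rules invented) illustrates the reflection idea, an agent whose meta-level knowledge inspects its own base-level rules and adapts them when an exceptional request is not covered.

```python
class ReflectiveAgent:
    """Toy reflective interface agent: base rules handle routine requests,
    meta-level knowledge handles exceptions by inspecting/adapting the rules."""

    def __init__(self):
        # Base-level knowledge: request type -> action.
        self.base_rules = {
            "search":   lambda q: f"submit query '{q}' to the search form",
            "navigate": lambda q: f"follow link to {q}",
        }

    def meta_handle(self, request, payload):
        # Meta-level reasoning about the agent's own rule set (reflection):
        # no base rule applies, so learn a cautious default and apply it.
        def new_rule(q):
            return f"ask user how to handle '{q}'"
        self.base_rules[request] = new_rule
        return new_rule(payload)

    def act(self, request, payload):
        rule = self.base_rules.get(request)
        if rule is None:
            return self.meta_handle(request, payload)
        return rule(payload)

agent = ReflectiveAgent()
print(agent.act("search", "expert systems"))
print(agent.act("fill_form", "checkout page"))   # exceptional: meta level adapts
print(agent.act("fill_form", "login page"))      # now handled by the learned rule
```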