119 results for Web-Centric Expert System
Abstract:
In many applications, e.g., bioinformatics, web access traces and system utilisation logs, the data is naturally in the form of sequences. There has been great interest in analysing sequential data to find its inherent characteristics and the relationships within it. Sequential association rule mining is one possible method for analysing such data. Because conventional sequential association rule mining very often generates a huge number of association rules, many of which are redundant, it is desirable to find a way to eliminate these unnecessary rules. Owing to the complexity and temporally ordered nature of sequential data, current research on sequential association rule mining is limited. Although several sequential association rule prediction models using either sequence constraints or temporal constraints have been proposed, none of them considers the redundancy problem in rule mining. The main contribution of this research is a non-redundant association rule mining method based on closed frequent sequences and minimal sequential generators. We also define non-redundant sequential rules as sequential rules with minimal antecedents but maximal consequents. A new algorithm called CSGM (closed sequential and generator mining) for generating closed sequences and minimal sequential generators is introduced. Further experiments compare the performance of generating non-redundant sequential rules with that of generating full sequential rules, and evaluate CSGM against other closed sequential pattern mining and generator mining algorithms. We also use the generated non-redundant sequential rules for query expansion to improve recommendations for infrequently purchased products.
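As a rough illustration of the concepts behind this approach, the Python sketch below brute-forces frequent sequences over a toy database, identifies closed sequences and generators from their supports, and pairs them into rules with minimal antecedents and maximal consequents. The function names, toy data and confidence threshold are assumptions for illustration only; the real CSGM algorithm mines these sets far more efficiently.

```python
# Minimal brute-force sketch of closed sequences (no super-sequence with the same
# support), generators (no sub-sequence with the same support), and
# generator -> closed-sequence rules. Illustrative only, not the CSGM algorithm.

def is_subseq(s, t):
    """True if s is an order-preserving (possibly gapped) subsequence of t."""
    it = iter(t)
    return all(item in it for item in s)

def support(pattern, db):
    return sum(is_subseq(pattern, seq) for seq in db)

def frequent_patterns(db, min_sup):
    items = sorted({i for seq in db for i in seq})
    max_len = max(len(seq) for seq in db)
    patterns = {}

    def grow(prefix):
        sup = support(prefix, db)
        if sup < min_sup:
            return
        if prefix:
            patterns[prefix] = sup
        if len(prefix) < max_len:
            for i in items:
                grow(prefix + (i,))          # exponential enumeration; demo only

    grow(())
    return patterns

def closed_and_generators(patterns):
    closed, gens = set(), set()
    for p, sup in patterns.items():
        if not any(is_subseq(p, q) and p != q and s == sup for q, s in patterns.items()):
            closed.add(p)                    # no proper super-sequence with equal support
        if not any(is_subseq(q, p) and p != q and s == sup for q, s in patterns.items()):
            gens.add(p)                      # no proper sub-sequence with equal support
    return closed, gens

db = [("a", "b", "c"), ("a", "c"), ("a", "b", "c", "d")]
pats = frequent_patterns(db, min_sup=2)
closed, gens = closed_and_generators(pats)
# Non-redundant rules: minimal antecedent (generator) -> maximal consequent (closed sequence)
rules = [(g, c, pats[c] / pats[g])
         for g in gens for c in closed
         if is_subseq(g, c) and g != c and pats[c] / pats[g] >= 0.5]
print(rules)
```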
Abstract:
The existing Collaborative Filtering (CF) technique that has been widely applied by e-commerce sites requires a large amount of ratings data to make meaningful recommendations. It is not directly applicable for recommending products that are not frequently purchased by users, such as cars and houses, as it is difficult to collect rating data for such products from the users. Many of the e-commerce sites for infrequently purchased products still use basic search-based techniques, whereby the products that match the attributes given in the target user's query are retrieved and recommended to the user. However, search-based recommenders cannot provide personalized recommendations: for different users, the recommendations will be the same if they provide the same query, regardless of any difference in their online navigation behaviour. This paper proposes to integrate collaborative filtering and search-based techniques to provide personalized recommendations for infrequently purchased products. Two different techniques are proposed, namely CFRRobin and CFAg Query. Instead of using the target user's query to search for products as normal search-based systems do, the CFRRobin technique uses the products in which the target user's neighbours have shown interest as queries to retrieve relevant products, and then recommends to the target user a list of products obtained by merging and ranking the returned products using the Round Robin method. The CFAg Query technique uses the products that the user's neighbours have shown interest in to derive an aggregated query, which is then used to retrieve products to recommend to the target user. Experiments conducted on a real e-commerce dataset show that both the proposed techniques, CFRRobin and CFAg Query, perform better than the standard Collaborative Filtering (CF) and Basic Search (BS) approaches, which are widely applied by current e-commerce applications. The CFRRobin and CFAg Query approaches also outperform the existing query expansion (QE) technique that was proposed for recommending infrequently purchased products.
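To make the Round Robin merging step concrete, here is a minimal Python sketch that interleaves the ranked result lists retrieved for a target user's neighbours. The function name, the assumption that each neighbour's interest products have already been used as queries to produce a ranked list, and the toy data are illustrative, not taken from the paper.

```python
# A hedged sketch of Round Robin merging of neighbours' retrieval results.
def round_robin_merge(neighbour_results, top_n=10):
    """Interleave ranked result lists, taking one product from each list in turn
    and skipping duplicates, until top_n products are collected."""
    merged, seen = [], set()
    max_len = max((len(r) for r in neighbour_results), default=0)
    for rank in range(max_len):
        for results in neighbour_results:          # one pass per rank position
            if rank < len(results):
                product = results[rank]
                if product not in seen:
                    seen.add(product)
                    merged.append(product)
                if len(merged) == top_n:
                    return merged
    return merged

# Example: three neighbours, each with a ranked list of retrieved products.
lists = [["car_12", "car_7", "car_3"], ["car_7", "car_9"], ["car_3", "car_12", "car_5"]]
print(round_robin_merge(lists, top_n=5))   # ['car_12', 'car_7', 'car_3', 'car_9', 'car_5']
```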
Abstract:
In the legal domain, it is rare to find solutions to problems by simply applying algorithms or invoking deductive rules in some knowledge‐based program. Instead, expert practitioners often supplement domain‐specific knowledge with field experience. This type of expertise is often applied in the form of an analogy. This research proposes to combine both reasoning with precedents and reasoning with statutes and regulations in a way that will enhance the statutory interpretation task. This is being attempted through the integration of database and expert system technologies. Case‐based reasoning is being used to model legal precedents while rule‐based reasoning modules are being used to model the legislation and other types of causal knowledge. It is hoped to generalise these findings and to develop a formal methodology for integrating case‐based databases with rule‐based expert systems in the legal domain.
Abstract:
A commitment in 2010 by the Australian Federal Government to spend $466.7 million on the implementation of personally controlled electronic health records (PCEHR) heralded a shift to a more effective and safer patient-centric eHealth system. However, deployment of the PCEHR has met with much criticism, emphasised by poor adoption rates over the first 12 months of operation. An indifferent response by the public and by healthcare providers largely sceptical of its utility and safety speaks to the complex sociotechnical drivers and obstacles inherent in embedding large (national) scale eHealth projects. With government efforts to inflate consumer and practitioner engagement numbers giving rise to further consumer disillusionment, the broader utilitarian opportunities available with the PCEHR are at risk. This paper discusses the implications of establishing the PCEHR as the cornerstone of a holistic eHealth strategy for the aggregation of longitudinal patient information. A viewpoint is offered that the real value in patient data lies not just in the collection of data but in the integration of this information into clinical processes within the framework of a commoditised data-driven approach. Consideration is given to the eHealth-as-a-Service (eHaaS) construct as a disruptive next step for co-ordinated, individualised healthcare in the Australian context.
Abstract:
Recommender systems provide personalized advice to customers online based on their own preferences, while reputation systems generate community advice on the quality of items on the Web. Both use users' ratings to generate their output. In this paper, we propose to combine reputation models with recommender systems to enhance the accuracy of recommendations. The main contributions include two methods for merging two ranked item lists, generated from recommendation scores and reputation scores respectively, and a personalized reputation method to generate item reputations based on users' interests. The proposed merging methods are applicable to any recommendation and reputation methods, i.e., they are independent of how the recommendation scores and reputation scores are generated. The experiments we conducted showed that the proposed methods can enhance the accuracy of existing recommender systems.
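As a hedged illustration of merging a recommendation-ranked list with a reputation-ranked list, the sketch below uses a simple weighted, Borda-style rank aggregation; the paper's actual merging methods may differ, and the item names and weight are invented.

```python
# Weighted Borda-style aggregation of two ranked item lists (illustrative only).
def merge_ranked_lists(rec_list, rep_list, alpha=0.5, top_n=10):
    def rank_scores(items):
        n = len(items)
        return {item: (n - i) / n for i, item in enumerate(items)}  # 1.0 for the top item

    rec, rep = rank_scores(rec_list), rank_scores(rep_list)
    all_items = set(rec) | set(rep)
    combined = {i: alpha * rec.get(i, 0.0) + (1 - alpha) * rep.get(i, 0.0) for i in all_items}
    return sorted(combined, key=combined.get, reverse=True)[:top_n]

# Example: items ranked by recommendation score vs. by reputation score.
print(merge_ranked_lists(["m1", "m2", "m3"], ["m3", "m1", "m4"], alpha=0.6))
```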
Abstract:
In the mining optimisation literature, most researchers have focused on two strategic-level and tactical-level open-pit mine optimisation problems, termed the ultimate pit limit (UPIT) problem and the constrained pit limit (CPIT) problem, respectively. However, many researchers note that the substantial numbers of variables and constraints in real-world instances (e.g., with 50-1000 thousand blocks) make the CPIT's mixed integer programming (MIP) model intractable. It is therefore a considerable challenge to solve large-scale CPIT instances without relying on an exact MIP optimiser or complicated MIP relaxation/decomposition methods. To address this challenge, two new graph-based algorithms based on network flow graphs and conjunctive graph theory are developed by exploiting the problem's properties. The performance of the proposed algorithms is validated on the recent large-scale benchmark UPIT and CPIT instances from the MineLib (2013) datasets. Compared with the best known results from MineLib, the proposed algorithms outperform the other CPIT solution approaches in the literature. The proposed graph-based algorithms lead to a more competent mine scheduling optimisation expert system, because a third-party MIP optimiser is no longer indispensable and random neighbourhood search is not necessary.
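For context on the kind of network flow formulation involved, the sketch below applies the classical reduction of UPIT (a maximum-closure problem: choose blocks of maximum total value such that every overlying predecessor of a chosen block is also chosen) to a minimum s-t cut using networkx. This is the textbook Picard-style construction, not the new algorithms proposed above, and the block values and precedence arcs are invented.

```python
# Classical UPIT / maximum-closure reduction to minimum s-t cut (illustrative only).
import networkx as nx

blocks = {"b1": 10.0, "b2": -3.0, "b3": 4.0, "b4": -2.0}     # economic block values
precedence = [("b3", "b1"), ("b3", "b2"), ("b4", "b2")]      # (block, block that must be mined first)

G = nx.DiGraph()
for b, v in blocks.items():
    if v > 0:
        G.add_edge("s", b, capacity=v)          # source -> positive-value blocks
    elif v < 0:
        G.add_edge(b, "t", capacity=-v)         # negative-value blocks -> sink
for b, pred in precedence:
    G.add_edge(b, pred, capacity=float("inf"))  # choosing b forces choosing pred

cut_value, (source_side, _) = nx.minimum_cut(G, "s", "t")
pit = source_side - {"s"}                       # blocks in the optimal ultimate pit
print(pit, "value:", sum(blocks[b] for b in pit))
```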
Abstract:
In this paper, the issue of finding uncertainty intervals for queries in a Bayesian Network is reconsidered. The investigation focuses on Bayesian Nets with discrete nodes and finite populations. An earlier asymptotic approach is compared with a simulation-based approach, together with two further alternatives: one based on a single sample of the Bayesian Net at a particular finite population size, and another which uses expected population sizes together with exact probabilities. We conclude that a query of a Bayesian Net should be expressed as a probability embedded in an uncertainty interval. Based on an investigation of two Bayesian Net structures, the preferred method is the simulation method. However, both the single-sample method and the expected-sample-size method may be useful and are simpler to compute. Any method at all is more useful than none when assessing a Bayesian Net under development, or when drawing conclusions from an 'expert' system.
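A minimal sketch of the simulation-based idea on an assumed two-node network: repeatedly generate finite populations, evaluate the query in each, and summarise the spread across populations as an uncertainty interval. The network structure, its parameters, the population size and the query are illustrative only.

```python
# Simulation-based uncertainty interval for a Bayesian Net query (toy example).
import numpy as np

rng = np.random.default_rng(0)
p_a = 0.3                                  # P(A = 1)
p_b_given_a = {1: 0.8, 0: 0.1}             # P(B = 1 | A)

def simulate_query(pop_size):
    """Return P(A=1 | B=1) estimated from one simulated finite population."""
    a = rng.random(pop_size) < p_a
    b = rng.random(pop_size) < np.where(a, p_b_given_a[1], p_b_given_a[0])
    if b.sum() == 0:
        return np.nan                      # query undefined in this population
    return (a & b).sum() / b.sum()

queries = np.array([simulate_query(pop_size=200) for _ in range(2000)])
queries = queries[~np.isnan(queries)]
point = queries.mean()
lo, hi = np.percentile(queries, [2.5, 97.5])
print(f"P(A=1 | B=1) ~ {point:.3f}, 95% uncertainty interval [{lo:.3f}, {hi:.3f}]")
```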
Abstract:
Video surveillance infrastructure has been widely installed in public places for security purposes. However, live video feeds are typically monitored by human staff, making the detection of important events as they occur difficult. As such, an expert system that can automatically detect events of interest in surveillance footage is highly desirable. Although a number of approaches have been proposed, they have significant limitations: supervised approaches, which can detect a specific event, ideally require a large number of samples with the event spatially and temporally localised; while unsupervised approaches, which do not require this demanding annotation, can only detect whether an event is abnormal and not specific event types. To overcome these problems, we formulate a weakly-supervised approach using Kullback-Leibler (KL) divergence to detect rare events. The proposed approach leverages the sparse nature of the target events to its advantage, and we show that this data imbalance guarantees the existence of a decision boundary to separate samples that contain the target event from those that do not. This trait, combined with the coarse annotation used by weakly supervised learning (that only indicates approximately when an event occurs), greatly reduces the annotation burden while retaining the ability to detect specific events. Furthermore, the proposed classifier requires only a decision threshold, simplifying its use compared to other weakly supervised approaches. We show that the proposed approach outperforms state-of-the-art methods on a popular real-world traffic surveillance dataset, while preserving real time performance.
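To sketch the core scoring idea in a hedged way, the snippet below compares a segment's feature histogram against a reference histogram built from event-free footage using Kullback-Leibler divergence and applies a single decision threshold. The feature extraction, the synthetic data and the threshold are assumptions; the paper's actual weakly supervised training procedure is not reproduced here.

```python
# KL-divergence scoring of video segments against a reference model (illustrative only).
import numpy as np
from scipy.stats import entropy

def kl_score(segment_features, reference_hist, bins):
    """KL divergence between a segment's feature histogram and the reference."""
    hist, _ = np.histogram(segment_features, bins=bins)
    hist = hist.astype(float) + 1e-6                      # smoothing to avoid zero bins
    ref = reference_hist.astype(float) + 1e-6
    return entropy(hist / hist.sum(), ref / ref.sum())    # KL(segment || reference)

rng = np.random.default_rng(1)
bins = np.linspace(0, 1, 21)
normal = rng.beta(2, 5, size=5000)                        # fake "normal traffic" features
reference_hist, _ = np.histogram(normal, bins=bins)

normal_segment = rng.beta(2, 5, size=200)
event_segment = rng.beta(5, 1, size=200)                  # fake segment containing the rare event
threshold = 0.5                                           # single decision threshold
for name, seg in [("normal", normal_segment), ("event", event_segment)]:
    score = kl_score(seg, reference_hist, bins)
    print(name, round(score, 3), "-> event" if score > threshold else "-> no event")
```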
Abstract:
Background The irreversible ErbB family blocker afatinib and the reversible EGFR tyrosine kinase inhibitor gefitinib are approved for first-line treatment of EGFR mutation-positive non-small-cell lung cancer (NSCLC). We aimed to compare the efficacy and safety of afatinib and gefitinib in this setting. Methods This multicentre, international, open-label, exploratory, randomised controlled phase 2B trial (LUX-Lung 7) was done at 64 centres in 13 countries. Treatment-naive patients with stage IIIB or IV NSCLC and a common EGFR mutation (exon 19 deletion or Leu858Arg) were randomly assigned (1:1) to receive afatinib (40 mg per day) or gefitinib (250 mg per day) until disease progression, or beyond if deemed beneficial by the investigator. Randomisation, stratified by EGFR mutation type and status of brain metastases, was done centrally using a validated number generating system implemented via an interactive voice or web-based response system with a block size of four. Clinicians and patients were not masked to treatment allocation; independent review of tumour response was done in a blinded manner. Coprimary endpoints were progression-free survival by independent central review, time-to-treatment failure, and overall survival. Efficacy analyses were done in the intention-to-treat population and safety analyses were done in patients who received at least one dose of study drug. This ongoing study is registered with ClinicalTrials.gov, number NCT01466660. Findings Between Dec 13, 2011, and Aug 8, 2013, 319 patients were randomly assigned (160 to afatinib and 159 to gefitinib). Median follow-up was 27·3 months (IQR 15·3–33·9). Progression-free survival (median 11·0 months [95% CI 10·6–12·9] with afatinib vs 10·9 months [9·1–11·5] with gefitinib; hazard ratio [HR] 0·73 [95% CI 0·57–0·95], p=0·017) and time-to-treatment failure (median 13·7 months [95% CI 11·9–15·0] with afatinib vs 11·5 months [10·1–13·1] with gefitinib; HR 0·73 [95% CI 0·58–0·92], p=0·0073) were significantly longer with afatinib than with gefitinib. Overall survival data are not mature. The most common treatment-related grade 3 or 4 adverse events were diarrhoea (20 [13%] of 160 patients given afatinib vs two [1%] of 159 given gefitinib) and rash or acne (15 [9%] patients given afatinib vs five [3%] of those given gefitinib) and liver enzyme elevations (no patients given afatinib vs 14 [9%] of those given gefitinib). Serious treatment-related adverse events occurred in 17 (11%) patients in the afatinib group and seven (4%) in the gefitinib group. Ten (6%) patients in each group discontinued treatment due to drug-related adverse events. 15 (9%) fatal adverse events occurred in the afatinib group and ten (6%) in the gefitinib group. All but one of these deaths were considered unrelated to treatment; one patient in the gefitinib group died from drug-related hepatic and renal failure. Interpretation Afatinib significantly improved outcomes in treatment-naive patients with EGFR-mutated NSCLC compared with gefitinib, with a manageable tolerability profile. These data are potentially important for clinical decision making in this patient population.
Abstract:
The World Wide Web has become a medium for people to share information. People use Web-based collaborative tools such as question answering (QA) portals, blogs/forums, email and instant messaging to acquire information and to form online communities. In an online QA portal, a user asks a question and other users can provide answers based on their knowledge, with the question usually being answered by many users. It can become overwhelming and/or time- and resource-consuming for a user to read all of the answers provided for a given question. Thus, there is a need for a mechanism to rank the provided answers so users can focus on reading only good quality answers. The majority of online QA systems use user feedback to rank users' answers, and the user who asked the question can decide on the best answer. Other users who did not participate in answering the question can also vote to determine the best answer. However, ranking the best answer via this collaborative method is time consuming and requires ongoing involvement of users to provide the needed feedback. The objective of this research is to discover a way to recommend the best answer, as part of a ranked list of answers for a posted question, automatically and without the need for user feedback. The proposed approach combines a non-content-based reputation method and a content-based method to solve the problem of recommending the best answer to the user who posted the question. The non-content method assigns a score to each user which reflects the user's reputation level in using the QA portal system. Each user is assigned two types of non-content-based reputation scores: a local reputation score and a global reputation score. The local reputation score plays an important role in deciding the reputation level of a user for the category in which the question is asked. The global reputation score indicates the prestige of a user across all of the categories in the QA system. Due to the possibility of user cheating, such as awarding the best answer to a friend regardless of the answer quality, a content-based method for determining the quality of a given answer is proposed, alongside the non-content-based reputation method. Answers for a question from different users are compared with an ideal (or expert) answer using traditional Information Retrieval and Natural Language Processing techniques. Each answer provided for a question is assigned a content score according to how well it matches the ideal answer. To evaluate the performance of the proposed methods, each recommended best answer is compared with the best answer determined by one of the most popular link analysis methods, Hyperlink-Induced Topic Search (HITS). The proposed methods yield high accuracy, as shown by the Kendall and Spearman correlation scores. The reputation method outperforms the HITS method in terms of recommending the best answer. The inclusion of the reputation score with the content score improves the overall performance, which is measured through the use of Top-n match scores.
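To make the content-scoring step concrete, the sketch below compares candidate answers with an ideal (expert) answer via TF-IDF cosine similarity and blends the result with an assumed reputation score. The weighting scheme, the reputation values and all sample text are illustrative, not the thesis' exact formulation.

```python
# Content score via TF-IDF similarity to an ideal answer, blended with reputation (sketch).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

ideal_answer = "restart the router and check the dns settings before reinstalling drivers"
answers = {
    "user_a": "try restarting your router, then check dns settings",
    "user_b": "buy a new computer",
}
reputation = {"user_a": 0.6, "user_b": 0.9}    # assumed local/global reputation scores in [0, 1]

vectorizer = TfidfVectorizer()
matrix = vectorizer.fit_transform([ideal_answer] + list(answers.values()))
content_scores = cosine_similarity(matrix[0], matrix[1:]).ravel()

alpha = 0.7                                    # weight on content score vs. reputation
ranked = sorted(
    ((alpha * c + (1 - alpha) * reputation[u], u) for u, c in zip(answers, content_scores)),
    reverse=True,
)
for score, user in ranked:
    print(f"{user}: combined score {score:.3f}")
```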
Abstract:
To meet the new challenges of Enterprise Systems, which extend well beyond the initial implementation, contemporary organizations seek business process experts with software skills. Despite a healthy demand from industry for such expertise, recent studies reveal that most Information Systems (IS) graduates are ill-equipped to meet the challenges of modern organizations. This paper shares insights and experiences from a course designed to provide a business-process-centric view of a market-leading Enterprise System. The course, designed for both undergraduate and graduate students, uses two common business processes in a case study that employs both sequential and explorative exercises. Student feedback gained through two longitudinal surveys across two phases of the course demonstrates promising signs for the teaching approach.
Abstract:
We propose to design a Custom Learning System that responds to the unique needs and potentials of individual students, regardless of their location, abilities, attitudes, and circumstances. This project is intentionally provocative and future-looking but it is not unrealistic or unfeasible. We propose that by combining complex learning databases with a learner’s personal data, we could provide all students with a personal, customizable, and flexible education. This paper presents the initial research undertaken for this project of which the main challenges were to broadly map the complex web of data available, to identify what logic models are required to make the data meaningful for learning, and to translate this knowledge into simple and easy-to-use interfaces. The ultimate outcome of this research will be a series of candidate user interfaces and a broad system logic model for a new smart system for personalized learning. This project is student-centered, not techno-centric, aiming to deliver innovative solutions for learners and schools. It is deliberately future-looking, allowing us to ask questions that take us beyond the limitations of today to motivate new demands on technology.