856 results for Web-Centric Expert System


Relevance: 100.00%

Abstract:

In many applications, e.g., bioinformatics, web access traces, and system utilisation logs, the data is naturally in the form of sequences. There is great interest in analysing sequential data to find its inherent characteristics and the relationships within it, and sequential association rule mining is one method for doing so. Since conventional sequential association rule mining often generates a huge number of association rules, many of them redundant, it is desirable to eliminate the unnecessary rules. Because of the complexity and temporally ordered nature of sequential data, current research on sequential association rule mining is limited. Although several sequential association rule prediction models using either sequence constraints or temporal constraints have been proposed, none of them considers the redundancy problem in rule mining. The main contribution of this research is a non-redundant association rule mining method based on closed frequent sequences and minimal sequential generators. We also define non-redundant sequential rules: sequential rules with minimal antecedents but maximal consequents. A new algorithm called CSGM (closed sequential and generator mining) for generating closed sequences and minimal sequential generators is introduced. Further experiments compare the performance of generating non-redundant sequential rules with that of generating full sequential rules, and evaluate CSGM against other closed sequential pattern mining and generator mining algorithms. We also use the generated non-redundant sequential rules for query expansion, in order to improve recommendations for infrequently purchased products.
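The abstract does not detail CSGM's internals, but the rule-forming step it describes (minimal antecedent, maximal consequent) can be sketched. In the illustrative Python below, all data structures are assumed, and generators are taken to be prefixes of their closed sequences for simplicity:

```python
# Illustrative sketch only: assumes closed frequent sequences and their
# minimal generators have already been mined, and (for simplicity) that
# each generator is a prefix of its closed sequence.

def non_redundant_rules(closed_seqs, generators, min_conf=0.5):
    """closed_seqs: {closed sequence (tuple): support}
       generators:  {closed sequence: [(generator, support), ...]}"""
    rules = []
    for closed, c_sup in closed_seqs.items():
        for gen, g_sup in generators[closed]:
            conf = c_sup / g_sup                  # confidence of gen => rest
            if conf >= min_conf and gen != closed:
                consequent = closed[len(gen):]    # maximal consequent
                rules.append((gen, consequent, conf))
    return rules

# Toy example: sequences of visited pages
closed = {("a", "b", "c"): 4, ("a", "d"): 3}
gens = {("a", "b", "c"): [(("a", "b"), 5)], ("a", "d"): [(("a",), 8)]}
print(non_redundant_rules(closed, gens))
# [(('a', 'b'), ('c',), 0.8)]  -- the ('a',) => ('d',) rule fails min_conf
```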

Relevance: 100.00%

Abstract:

The existing Collaborative Filtering (CF) technique that has been widely applied by e-commerce sites requires a large amount of ratings data to make meaningful recommendations. It is not directly applicable for recommending products that are purchased infrequently, such as cars and houses, as it is difficult to collect rating data for such products from users. Many e-commerce sites for infrequently purchased products still use basic search-based techniques, whereby products that match the attributes given in the target user's query are retrieved and recommended to the user. However, search-based recommenders cannot provide personalized recommendations: for different users, the recommendations will be the same if they issue the same query, regardless of any difference in their online navigation behaviour. This paper proposes to integrate collaborative filtering and search-based techniques to provide personalized recommendations for infrequently purchased products. Two techniques are proposed, namely CFRRobin and CFAg Query. Instead of using the target user's query to search for products as normal search-based systems do, the CFRRobin technique uses the products in which the target user's neighbours have shown interest as queries to retrieve relevant products, and then recommends to the target user a list of products obtained by merging and ranking the returned products using the Round Robin method. The CFAg Query technique uses the products in which the user's neighbours have shown interest to derive an aggregated query, which is then used to retrieve products to recommend to the target user. Experiments conducted on a real e-commerce dataset show that both proposed techniques, CFRRobin and CFAg Query, perform better than the standard Collaborative Filtering (CF) and Basic Search (BS) approaches that are widely applied by current e-commerce applications. The CFRRobin and CFAg Query approaches also outperform the existing query expansion (QE) technique that was proposed for recommending infrequently purchased products.
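Of the two techniques, the Round Robin merge used by CFRRobin is concrete enough in the abstract to sketch. A minimal Python illustration (retrieval and neighbour selection are assumed):

```python
# Minimal sketch of a Round Robin merge as described for CFRRobin: interleave
# the ranked result lists retrieved with each neighbour's products as queries,
# skipping duplicates. How neighbours and results are obtained is assumed.

def round_robin_merge(ranked_lists, k=10):
    merged, seen = [], set()
    for rank in range(max(map(len, ranked_lists), default=0)):
        for lst in ranked_lists:
            if rank < len(lst) and lst[rank] not in seen:
                seen.add(lst[rank])
                merged.append(lst[rank])
    return merged[:k]

# Each inner list: products retrieved using one neighbour's product as a query.
neighbour_results = [["car_3", "car_7", "car_1"],
                     ["car_7", "car_2"],
                     ["car_5", "car_3", "car_9"]]
print(round_robin_merge(neighbour_results, k=5))
# ['car_3', 'car_7', 'car_5', 'car_2', 'car_1']
```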

Relevance: 100.00%

Abstract:

In the legal domain, it is rare to find solutions to problems by simply applying algorithms or invoking deductive rules in some knowledge-based program. Instead, expert practitioners often supplement domain-specific knowledge with field experience. This type of expertise is often applied in the form of an analogy. This research proposes to combine both reasoning with precedents and reasoning with statutes and regulations in a way that will enhance the statutory interpretation task. This is being attempted through the integration of database and expert system technologies. Case-based reasoning is being used to model legal precedents while rule-based reasoning modules are being used to model the legislation and other types of causal knowledge. It is hoped to generalise these findings and to develop a formal methodology for integrating case-based databases with rule-based expert systems in the legal domain.
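The abstract describes the hybrid architecture only at a high level. As a toy sketch of the intended division of labour, assuming simple fact sets and a naive similarity measure, a rule-based module over legislation can defer to precedent retrieval when no rule fires:

```python
# Toy illustration (all structures assumed): rule-based reasoning over
# statutes, falling back to the nearest precedent when no rule applies.

def rule_based(facts, rules):
    for condition, conclusion in rules:
        if condition(facts):
            return conclusion
    return None  # the legislation does not settle the question

def case_based(facts, precedents):
    # retrieve the precedent sharing the most facts (naive similarity)
    best = max(precedents, key=lambda c: len(facts & c["facts"]))
    return best["outcome"]

def hybrid_advise(facts, rules, precedents):
    return rule_based(facts, rules) or case_based(facts, precedents)

rules = [(lambda f: "written_contract" in f, "enforceable")]
precedents = [{"facts": {"oral_agreement", "part_performance"},
               "outcome": "enforceable by estoppel"}]
print(hybrid_advise({"oral_agreement", "part_performance"}, rules, precedents))
# -> 'enforceable by estoppel' (no statutory rule fired)
```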

Relevance: 100.00%

Abstract:

A commitment in 2010 by the Australian Federal Government to spend A$466.7 million on the implementation of personally controlled electronic health records (PCEHR) heralded a shift to a more effective and safer patient-centric eHealth system. However, deployment of the PCEHR has met with much criticism, emphasised by poor adoption rates over the first 12 months of operation. An indifferent response by the public, and healthcare providers largely sceptical of its utility and safety, speaks to the complex sociotechnical drivers and obstacles inherent in embedding large, national-scale eHealth projects. With government efforts to inflate consumer and practitioner engagement numbers giving rise to further consumer disillusionment, the broader utilitarian opportunities available with the PCEHR are at risk. This paper discusses the implications of establishing the PCEHR as the cornerstone of a holistic eHealth strategy for the aggregation of longitudinal patient information. A viewpoint is offered that the real value in patient data lies not just in the collection of data but in the integration of this information into clinical processes, within the framework of a commoditised data-driven approach. Consideration is given to the eHealth-as-a-Service (eHaaS) construct as a disruptive next step for co-ordinated, individualised healthcare in the Australian context.

Relevance: 100.00%

Abstract:

Recommender systems provide personalized advice to customers online based on their own preferences, while reputation systems generate community advice on the quality of items on the Web. Both use users' ratings to generate their output. In this paper, we propose to combine reputation models with recommender systems to enhance the accuracy of recommendations. The main contributions include two methods for merging two ranked item lists, generated from recommendation scores and reputation scores respectively, and a personalized reputation method that generates item reputations based on users' interests. The proposed merging methods are applicable to any recommendation and reputation methods, i.e., they are independent of how the recommendation scores and reputation scores are generated. The experiments we conducted show that the proposed methods can enhance the accuracy of existing recommender systems.
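The abstract does not specify the merging methods themselves, so the sketch below uses a weighted Borda-count aggregation of the two ranked lists as one plausible illustration; the weights and item names are assumptions:

```python
# Hypothetical merge of a recommendation ranking and a reputation ranking via
# a weighted Borda count: each list contributes points by rank position.

def borda_merge(rec_list, rep_list, w_rec=0.6, w_rep=0.4, k=5):
    def scores(lst):
        n = len(lst)
        return {item: n - i for i, item in enumerate(lst)}
    rec, rep = scores(rec_list), scores(rep_list)
    combined = {it: w_rec * rec.get(it, 0) + w_rep * rep.get(it, 0)
                for it in set(rec) | set(rep)}
    return sorted(combined, key=combined.get, reverse=True)[:k]

recommendation_rank = ["m1", "m4", "m2", "m7"]   # from recommendation scores
reputation_rank = ["m4", "m2", "m9", "m1"]       # from reputation scores
print(borda_merge(recommendation_rank, reputation_rank))
# ['m4', 'm1', 'm2', 'm9', 'm7']
```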

Relevance: 100.00%

Abstract:

In the mining optimisation literature, most researchers have focused on two open-pit mine optimisation problems: the strategic-level ultimate pit limit (UPIT) problem and the tactical-level constrained pit limit (CPIT) problem. However, many researchers note that the substantial numbers of variables and constraints in real-world instances (e.g., with 50,000-1,000,000 blocks) make the CPIT mixed integer programming (MIP) model intractable in practice. It is therefore a considerable challenge to solve large-scale CPIT instances without relying on an exact MIP optimiser or complicated MIP relaxation/decomposition methods. To address this challenge, two new graph-based algorithms, based on network flow graphs and conjunctive graph theory, are developed by exploiting the problem's properties. The performance of the proposed algorithms is validated on the large-scale benchmark UPIT and CPIT instances of MineLib (2013). Compared with the best known results from MineLib, the proposed algorithms outperform the other CPIT solution approaches in the literature. The proposed graph-based algorithms lead to a more capable mine scheduling optimisation expert system because a third-party MIP optimiser is no longer indispensable and random neighbourhood search is not necessary.
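The abstract does not spell out the proposed algorithms, but the UPIT problem is the classical maximum-closure problem, which a network-flow formulation solves exactly with a single minimum cut (Picard, 1976). A minimal sketch, assuming a networkx environment and a toy block model:

```python
# Classical UPIT-as-max-closure sketch (not the paper's specific algorithm):
# profitable blocks hang off the source, waste blocks feed the sink, and
# precedence arcs get infinite capacity; the min cut's source side is the pit.
import networkx as nx

def ultimate_pit(block_values, precedences):
    """block_values: {block: economic value}
       precedences:  [(b, p)] meaning mining b requires mining p."""
    g = nx.DiGraph()
    for blk, val in block_values.items():
        if val > 0:
            g.add_edge("s", blk, capacity=val)       # profitable block
        else:
            g.add_edge(blk, "t", capacity=-val)      # waste block
    for blk, pre in precedences:
        g.add_edge(blk, pre, capacity=float("inf"))  # precedence arc
    _, (source_side, _) = nx.minimum_cut(g, "s", "t")
    return source_side - {"s"}                       # blocks in the pit

values = {"ore": 10.0, "waste1": -3.0, "waste2": -4.0, "deep_ore": 2.0}
prec = [("ore", "waste1"), ("ore", "waste2"), ("deep_ore", "ore")]
print(ultimate_pit(values, prec))  # all four blocks: pit value 10-3-4+2 = 5
```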

Relevance: 100.00%

Abstract:

In this paper the issue of finding uncertainty intervals for queries in a Bayesian Network is reconsidered. The investigation focuses on Bayesian Nets with discrete nodes and finite populations. An earlier asymptotic approach is compared with a simulation-based approach, together with further alternatives: one based on a single sample of the Bayesian Net at a particular finite population size, and another which uses expected population sizes together with exact probabilities. We conclude that a query of a Bayesian Net should be expressed as a probability embedded in an uncertainty interval. Based on an investigation of two Bayesian Net structures, the preferred method is the simulation method. However, both the single-sample method and the expected-sample-size method may be useful and are simpler to compute. Any method at all is more useful than none when assessing a Bayesian Net under development, or when drawing conclusions from an ‘expert’ system.
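A minimal sketch of the simulation approach the abstract prefers: repeatedly simulate a finite population from a small Bayesian Net and report a percentile interval for the query probability. The two-node net, population size, and number of replicates below are illustrative assumptions:

```python
# Simulation-based uncertainty interval for a Bayesian Net query, sketched on
# an assumed two-node net: P(A)=0.3; P(B|A)=0.8, P(B|not A)=0.1. The query is
# P(A|B), estimated within each simulated finite population.
import random

def population_estimate(n):
    n_b = n_ab = 0
    for _ in range(n):
        a = random.random() < 0.3
        b = random.random() < (0.8 if a else 0.1)
        n_b += b
        n_ab += a and b
    return n_ab / n_b if n_b else None   # undefined if B never occurs

# One estimate per simulated finite population of 200 individuals
estimates = sorted(e for e in (population_estimate(200) for _ in range(1000))
                   if e is not None)
lo = estimates[int(0.025 * len(estimates))]
hi = estimates[int(0.975 * len(estimates))]
print(f"P(A|B) lies in [{lo:.2f}, {hi:.2f}] for populations of size 200")
```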

Relevance: 100.00%

Abstract:

The aim of this thesis is to develop a fully automatic lameness detection system that operates in a milking robot. The instrumentation, measurement software, algorithms for data analysis and a neural network model for lameness detection were developed.

Automatic milking has become common practice in dairy husbandry; in 2006, about 4,000 farms worldwide used over 6,000 milking robots. There is a worldwide movement towards fully automating every process from feeding to milking. The increase in automation is a consequence of increasing farm sizes, the demand for more efficient production and the growth of labour costs. As the level of automation increases, the time the cattle keeper spends monitoring animals often decreases, which has created a need for systems that automatically monitor the health of farm animals. The popularity of milking robots also offers a new and unique possibility to monitor animals in a single confined space up to four times daily.

Lameness is a crucial welfare issue in the modern dairy industry. Limb disorders cause serious welfare, health and economic problems, especially in loose housing of cattle. Lameness causes losses in milk production and leads to early culling of animals; these costs could be reduced with early identification and treatment. At present, only a few methods for automatically detecting lameness have been developed, and the most common methods for lameness detection and assessment are various visual locomotion scoring systems. The problem with locomotion scoring is that it requires experience to be conducted properly, it is labour-intensive as an on-farm method, and its results are subjective.

A four-balance system for measuring the leg load distribution of dairy cows during milking, in order to detect lameness, was developed and set up at the University of Helsinki research farm, Suitia. The leg weights of 73 cows were successfully recorded during almost 10,000 robotic milkings over a period of 5 months. The cows were locomotion scored weekly, and the lame cows were inspected clinically for hoof lesions. Unsuccessful measurements, caused by cows standing outside the balances, were removed from the data with a special algorithm, and the mean leg loads and the number of kicks during milking were calculated.

To develop an expert system that automatically detects lameness cases, a model was needed; a probabilistic neural network (PNN) classifier was chosen for the task. The data was divided into two parts: 5,074 measurements from 37 cows were used to train the model, and its ability to detect lameness was evaluated on a validation dataset of 4,868 measurements from 36 cows. The model classified 96% of the measurements correctly as sound or lame, and identified 100% of the lameness cases in the validation data. The proportion of measurements causing false alarms was 1.1%. The developed model has the potential to be used for on-farm decision support and can be used in a real-time lameness monitoring system.
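A probabilistic neural network is essentially a Parzen-window (Gaussian-kernel) classifier, so the model type named in the abstract can be sketched compactly; the features and smoothing parameter below are illustrative assumptions, not the thesis's actual inputs:

```python
# Minimal probabilistic neural network (Parzen-window) classifier sketch.
import numpy as np

def pnn_classify(x, train_X, train_y, sigma=0.5):
    """Assign x to the class with the highest kernel density estimate."""
    scores = {}
    for cls in np.unique(train_y):
        d = train_X[train_y == cls] - x
        k = np.exp(-np.sum(d * d, axis=1) / (2 * sigma ** 2))
        scores[cls] = k.mean()   # class-conditional density at x
    return max(scores, key=scores.get)

# Toy features: [left-right leg load asymmetry, kicks per milking]
X = np.array([[0.05, 0.0], [0.08, 1.0], [0.40, 3.0], [0.35, 4.0]])
y = np.array(["sound", "sound", "lame", "lame"])
print(pnn_classify(np.array([0.30, 2.0]), X, y))  # -> 'lame'
```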

Relevance: 100.00%

Abstract:

Video surveillance infrastructure has been widely installed in public places for security purposes. However, live video feeds are typically monitored by human staff, making it difficult to detect important events as they occur. As such, an expert system that can automatically detect events of interest in surveillance footage is highly desirable. Although a number of approaches have been proposed, they have significant limitations: supervised approaches, which can detect a specific event, ideally require a large number of samples with the event spatially and temporally localised, while unsupervised approaches, which do not require this demanding annotation, can only detect whether an event is abnormal, not its specific type. To overcome these problems, we formulate a weakly supervised approach using Kullback-Leibler (KL) divergence to detect rare events. The proposed approach leverages the sparse nature of the target events to its advantage, and we show that this data imbalance guarantees the existence of a decision boundary separating samples that contain the target event from those that do not. This trait, combined with the coarse annotation used by weakly supervised learning (which only indicates approximately when an event occurs), greatly reduces the annotation burden while retaining the ability to detect specific events. Furthermore, the proposed classifier requires only a decision threshold, simplifying its use compared to other weakly supervised approaches. We show that the proposed approach outperforms state-of-the-art methods on a popular real-world traffic surveillance dataset, while preserving real-time performance.
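A rough sketch of KL-divergence scoring in this spirit: compare a clip's feature histogram against a model of normal clips and flag clips whose divergence exceeds a threshold. The histogram features and the threshold are assumptions, not the paper's exact formulation:

```python
# Hypothetical KL-divergence event detector: clips whose motion-feature
# histogram diverges strongly from the normal model are flagged.
import numpy as np

def kl_divergence(p, q, eps=1e-9):
    p, q = p + eps, q + eps            # smooth to avoid log(0)
    p, q = p / p.sum(), q / q.sum()
    return float(np.sum(p * np.log(p / q)))

# Histogram of motion features pooled over clips known to be normal (assumed)
normal_model = np.array([40, 30, 20, 10], dtype=float)

def contains_event(clip_hist, threshold=0.5):
    return kl_divergence(np.asarray(clip_hist, float), normal_model) > threshold

print(contains_event([38, 31, 21, 10]))  # close to normal -> False
print(contains_event([5, 5, 10, 80]))    # very different  -> True
```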

Relevance: 100.00%

Abstract:

In this paper, the pattern classification problem in tool wear monitoring is solved using nature-inspired techniques, namely Genetic Programming (GP) and Ant-Miner (AM). The main advantage of GP and AM is their ability to learn the underlying data relationships and express them in the form of a mathematical equation or simple rules. The knowledge extracted from the training data set using GP and AM takes the form of a Genetic Programming Classifier Expression (GPCE) and rules, respectively. The GPCE and the AM-extracted rules are then applied to the data in the testing/validation set to obtain the classification accuracy. A major attraction of GP-evolved GPCEs and AM-based classification is the possibility of obtaining expert-system-like rules that the user can subsequently apply directly in his/her application. The performance of data classification using GP and AM is as good as the classification accuracy obtained in the earlier study.
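To illustrate how a GPCE is applied at test time, the sketch below evaluates an invented expression (not one evolved in the paper) whose sign gives the class label:

```python
# Hypothetical GP classifier expression: GP evolves a formula over sensor
# features, and the sign of its output assigns the class.
import math

def gpce(force, vibration, acoustic):
    # invented expression for illustration only
    return (force * 0.8 + math.log(1.0 + vibration)) - 2.5 * acoustic

def classify(sample):
    return "worn" if gpce(*sample) > 0 else "sharp"

# Toy feature vectors: (cutting force, vibration level, acoustic emission)
for s in [(2.0, 1.5, 0.4), (0.5, 0.2, 0.9)]:
    print(s, "->", classify(s))   # -> 'worn', then 'sharp'
```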

Relevance: 100.00%

Abstract:

Dissolved Gas Analysis (DGA), a non-destructive test procedure, has long been in vogue for assessing the status of power and related transformers in service. DGA has been seen to reveal, to a reasonable degree of accuracy, early indications of likely internal faults in transformers. The data acquisition and subsequent analysis need an expert in the area to assess the condition of the equipment accurately. Since the presence of an expert is not always guaranteed, it is incumbent on power utilities to commission a well-planned and reliable artificial expert system to replace, at least in part, the expert. This paper presents the application of an Ordered Ant Miner (OAM) classifier for the prediction of the fault involved. The paper also attempts to estimate the remaining life of the power transformer, as an extension of the elapsed-life estimation method suggested in the literature.
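As a flavour of the rule-style knowledge such an expert system encodes, below is a simplified rule-based DGA interpreter loosely modelled on published gas-ratio heuristics; the thresholds are illustrative, and this is not the paper's OAM classifier or a substitute for the actual IEC/Rogers ratio tables:

```python
# Simplified, illustrative DGA interpretation via gas ratios (assumed rules).

def dga_diagnose(h2, ch4, c2h6, c2h4, c2h2):
    r1 = c2h2 / c2h4 if c2h4 else 0.0   # acetylene / ethylene
    r2 = ch4 / h2 if h2 else 0.0        # methane / hydrogen
    r3 = c2h4 / c2h6 if c2h6 else 0.0   # ethylene / ethane
    if r1 > 1.0:
        return "arcing (high-energy discharge)"
    if r2 < 0.1:
        return "partial discharge"
    if r3 > 3.0:
        return "severe thermal fault"
    return "normal ageing / no dominant fault"

# Gas concentrations in ppm (illustrative values)
print(dga_diagnose(h2=120, ch4=5, c2h6=20, c2h4=30, c2h2=2))
# -> 'partial discharge'
```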

Relevance: 100.00%

Abstract:

Background: The irreversible ErbB family blocker afatinib and the reversible EGFR tyrosine kinase inhibitor gefitinib are approved for first-line treatment of EGFR mutation-positive non-small-cell lung cancer (NSCLC). We aimed to compare the efficacy and safety of afatinib and gefitinib in this setting.

Methods: This multicentre, international, open-label, exploratory, randomised controlled phase 2B trial (LUX-Lung 7) was done at 64 centres in 13 countries. Treatment-naive patients with stage IIIB or IV NSCLC and a common EGFR mutation (exon 19 deletion or Leu858Arg) were randomly assigned (1:1) to receive afatinib (40 mg per day) or gefitinib (250 mg per day) until disease progression, or beyond if deemed beneficial by the investigator. Randomisation, stratified by EGFR mutation type and status of brain metastases, was done centrally using a validated number-generating system implemented via an interactive voice or web-based response system with a block size of four. Clinicians and patients were not masked to treatment allocation; independent review of tumour response was done in a blinded manner. Coprimary endpoints were progression-free survival by independent central review, time-to-treatment failure, and overall survival. Efficacy analyses were done in the intention-to-treat population and safety analyses were done in patients who received at least one dose of study drug. This ongoing study is registered with ClinicalTrials.gov, number NCT01466660.

Findings: Between Dec 13, 2011, and Aug 8, 2013, 319 patients were randomly assigned (160 to afatinib and 159 to gefitinib). Median follow-up was 27·3 months (IQR 15·3–33·9). Progression-free survival (median 11·0 months [95% CI 10·6–12·9] with afatinib vs 10·9 months [9·1–11·5] with gefitinib; hazard ratio [HR] 0·73 [95% CI 0·57–0·95], p=0·017) and time-to-treatment failure (median 13·7 months [95% CI 11·9–15·0] with afatinib vs 11·5 months [10·1–13·1] with gefitinib; HR 0·73 [95% CI 0·58–0·92], p=0·0073) were significantly longer with afatinib than with gefitinib. Overall survival data are not mature. The most common treatment-related grade 3 or 4 adverse events were diarrhoea (20 [13%] of 160 patients given afatinib vs two [1%] of 159 given gefitinib), rash or acne (15 [9%] of those given afatinib vs five [3%] of those given gefitinib), and liver enzyme elevations (no patients given afatinib vs 14 [9%] of those given gefitinib). Serious treatment-related adverse events occurred in 17 (11%) patients in the afatinib group and seven (4%) in the gefitinib group. Ten (6%) patients in each group discontinued treatment due to drug-related adverse events. 15 (9%) fatal adverse events occurred in the afatinib group and ten (6%) in the gefitinib group. All but one of these deaths were considered unrelated to treatment; one patient in the gefitinib group died from drug-related hepatic and renal failure.

Interpretation: Afatinib significantly improved outcomes in treatment-naive patients with EGFR-mutated NSCLC compared with gefitinib, with a manageable tolerability profile. These data are potentially important for clinical decision making in this patient population.

Relevance: 100.00%

Abstract:

In this paper we show the applicability of Ant Colony Optimisation (ACO) techniques to the pattern classification problem that arises in tool wear monitoring. In an earlier study, artificial neural networks and genetic programming were successfully applied to the tool wear monitoring problem. ACO is a recent addition to evolutionary computation techniques that has gained attention for its ability to extract the underlying data relationships and express them in the form of simple rules. Rules are extracted for data classification using a training set of data points; these rules are then applied to the data in the testing/validation set to obtain the classification accuracy. A major attraction of ACO-based classification is the possibility of obtaining expert-system-like rules that the user can subsequently apply directly in his/her application. The classification accuracy obtained with the ACO-based approach is as good as that obtained with other biologically inspired techniques.
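Ant-Miner-style classifiers produce an ordered rule list that is applied top-down, with a default rule at the end. A toy illustration in Python (the rules themselves are invented for the example):

```python
# Applying an ordered rule list of the kind ACO rule-discovery algorithms
# extract; the first rule whose conditions all hold decides the class.

rules = [
    ({"vibration": "high", "force": "high"}, "severely worn"),
    ({"vibration": "high"}, "worn"),
    ({}, "sharp"),                      # default rule: always matches
]

def classify(sample):
    for conditions, label in rules:
        if all(sample.get(attr) == val for attr, val in conditions.items()):
            return label

print(classify({"vibration": "high", "force": "low"}))   # -> 'worn'
print(classify({"vibration": "low", "force": "low"}))    # -> 'sharp'
```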

Relevance: 100.00%

Abstract:

The Intelligent Decision Support System (IDSS), also called an expert system, is explained and then applied to choose the right composition and firing temperature for a ZnO-based varistor.