870 results for Automated Reasoning


Relevance:

20.00%

Publisher:

Abstract:

Purpose: Paper-based nutrition screening tools can be challenging to implement in the ambulatory oncology setting. The aim of this study was to determine the validity of the Malnutrition Screening Tool (MST) and a novel, automated nutrition screening system compared to a ‘gold standard’ full nutrition assessment using the Patient-Generated Subjective Global Assessment (PG-SGA).

Methods: An observational, cross-sectional study was conducted in an outpatient oncology day treatment unit (ODTU) within an Australian tertiary health service. Eligibility criteria were as follows: age ≥18 years, receiving outpatient anticancer treatment, and English literate. Patients self-administered the MST. A dietitian assessed nutritional status using the PG-SGA, blinded to the MST score. Automated screening system data were extracted from an electronic oncology prescribing system. This system used weight loss over the 3 to 6 weeks prior to the most recent weight record, or age-categorised body mass index (BMI), to identify nutritional risk. Sensitivity and specificity against the PG-SGA (malnutrition) were calculated using contingency tables and receiver operating characteristic (ROC) curves.

Results: There were a total of 300 oncology outpatients (51.7 % male, 58.6±13.3 years). The area under the curve (AUC) for weight loss alone was 0.69, with a cut-off value of ≥1 % weight loss yielding 63 % sensitivity and 76.7 % specificity. The MST (score ≥2) resulted in 70.6 % sensitivity and 69.5 % specificity, AUC 0.77.

Conclusions: Both the MST and the automated method fell short of the accepted professional standard for sensitivity (~≥80 %) derived from the PG-SGA. Further investigation into other automated nutrition screening options, and the most appropriate parameters available electronically, is warranted to support targeted service provision.
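The sensitivity and specificity figures above come from 2×2 contingency tables against the PG-SGA reference standard. A minimal sketch of that arithmetic (the counts below are made up for illustration, not the study's data):

```python
def sensitivity_specificity(tp, fn, tn, fp):
    """Sensitivity and specificity from a 2x2 contingency table against a
    reference standard (here, PG-SGA-defined malnutrition)."""
    sensitivity = tp / (tp + fn)  # correctly flagged among truly malnourished
    specificity = tn / (tn + fp)  # correctly cleared among well nourished
    return sensitivity, specificity

# Hypothetical counts for illustration only (not the study's data):
sens, spec = sensitivity_specificity(tp=8, fn=2, tn=9, fp=1)
print(f"sensitivity={sens:.1%}, specificity={spec:.1%}")  # sensitivity=80.0%, specificity=90.0%
```

A screening tool meeting the ~80 % sensitivity standard mentioned above would need tp/(tp+fn) ≥ 0.8.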


Objective: To evaluate the effects of Optical Character Recognition (OCR) on the automatic cancer classification of pathology reports.

Method: Scanned images of pathology reports were converted to electronic free text using a commercial OCR system. A state-of-the-art cancer classification system, the Medical Text Extraction (MEDTEX) system, was used to automatically classify the OCR reports. Classifications produced by MEDTEX on the OCR versions of the reports were compared with those from a human-amended version of the OCR reports.

Results: The employed OCR system was found to recognise scanned pathology reports with up to 99.12% character accuracy and up to 98.95% word accuracy. Errors in the OCR processing were found to have minimal impact on the automatic classification of scanned pathology reports into notifiable groups. However, the impact of OCR errors is not negligible when considering the extraction of cancer notification items, such as primary site and histological type.

Conclusions: The automatic cancer classification system used in this work, MEDTEX, has proven robust to errors produced by the acquisition of free-text pathology reports from scanned images through OCR software. However, issues emerge when considering the extraction of cancer notification items.
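Character and word accuracy for OCR output are conventionally derived from edit distance against a reference transcript. A small sketch, with invented example strings:

```python
def levenshtein(a, b):
    """Edit distance between two sequences (character strings or word lists)."""
    prev = list(range(len(b) + 1))
    for i, x in enumerate(a, 1):
        cur = [i]
        for j, y in enumerate(b, 1):
            cur.append(min(prev[j] + 1,               # deletion
                           cur[j - 1] + 1,            # insertion
                           prev[j - 1] + (x != y)))   # substitution / match
        prev = cur
    return prev[-1]

def accuracy(reference, hypothesis):
    """1 - (edit distance / reference length): a common OCR accuracy measure."""
    return 1 - levenshtein(reference, hypothesis) / len(reference)

ref = "carcinoma of the left breast"
hyp = "carc1noma of the 1eft breast"  # typical OCR confusions: i/l read as 1
print(round(accuracy(ref, hyp), 4))                   # character accuracy: 0.9286
print(round(accuracy(ref.split(), hyp.split()), 2))   # word accuracy: 0.6
```

The same two character errors cost far more at the word level, which is consistent with the gap between the character and word accuracy figures reported above.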


The aim of this research is to report initial experimental results and an evaluation of a clinician-driven automated method that can address the issue of misdiagnosis from unstructured radiology reports. Timely diagnosis and reporting of patient symptoms in hospital emergency departments (ED) is a critical component of health services delivery. However, due to dispersed information resources and the vast amount of manual processing of unstructured information, an accurate point-of-care diagnosis is often difficult. A rule-based method that considers the occurrence of clinician-specified keywords related to radiological findings was developed to identify limb abnormalities, such as fractures. A dataset containing 99 narrative reports of radiological findings was sourced from a tertiary hospital. The rule-based method achieved an F-measure of 0.80 and an accuracy of 0.80. While our method achieves promising performance, a number of avenues for improvement were identified using advanced natural language processing (NLP) techniques.
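A rule of this kind can be sketched as a keyword match with simple negation handling; the keyword list and look-back window below are illustrative assumptions, not the study's actual clinician-specified rules:

```python
# Hypothetical keyword rule in the spirit of the clinician-driven method
# described above; keywords and negation handling are illustrative only.
ABNORMALITY_KEYWORDS = {"fracture", "dislocation", "avulsion", "displacement"}
NEGATIONS = {"no", "without", "negative"}

def flag_abnormality(report: str) -> bool:
    """Flag a report if an abnormality keyword occurs without a nearby negation."""
    tokens = report.lower().replace(",", " ").replace(".", " ").split()
    for i, tok in enumerate(tokens):
        if tok in ABNORMALITY_KEYWORDS:
            window = tokens[max(0, i - 3):i]  # look back 3 tokens for a negation
            if not any(w in NEGATIONS for w in window):
                return True
    return False

def f_measure(tp, fp, fn):
    """Harmonic mean of precision and recall, as reported in the study."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

print(flag_abnormality("Transverse fracture of the distal radius."))  # True
print(flag_abnormality("No fracture or dislocation seen."))           # False
```

The NLP techniques mentioned as avenues for improvement (e.g. proper negation and context detection) would replace the crude fixed-window negation check used here.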


Mycobacterium kansasii is a pulmonary pathogen that has been grown readily from municipal water but rarely isolated from natural waters. A definitive link between water exposure and disease has not been demonstrated, and the environmental niche for this organism is poorly understood. Strain typing of clinical isolates has revealed seven subtypes, with Type 1 being highly clonal and responsible for most infections worldwide. The prevalence of the other subtypes varies geographically. In this study, 49 water isolates are compared with 72 patient isolates from the same geographical area (Brisbane, Australia), using automated repetitive unit PCR (Diversilab) and ITS RFLP. The clonality of the dominant clinical strain type is again demonstrated, but with rep-PCR, strain variation within this group is evident, comparable with that seen using other reported methods. There is significant heterogeneity among water isolates, and very few are similar or related to the clinical isolates. This suggests that if water or aerosol transmission is the mode of infection, then point-source contamination likely occurs from an alternative environmental source.


Automated process discovery techniques aim at extracting process models from information system logs. Existing techniques in this space are effective when applied to relatively small or regular logs, but generate spaghetti-like and sometimes inaccurate models when confronted with logs of high variability. In previous work, trace clustering has been applied in an attempt to reduce the size and complexity of automatically discovered process models. The idea is to split the log into clusters and to discover one model per cluster. This leads to a collection of process models, each one representing a variant of the business process, as opposed to an all-encompassing model. Still, models produced in this way may exhibit unacceptably high complexity and low fitness. In this setting, this paper presents a two-way divide-and-conquer process discovery technique, wherein the discovered process models are split on the one hand by variants and on the other hand hierarchically by means of subprocess extraction. Splitting is performed in a controlled manner in order to achieve user-defined complexity or fitness thresholds. Experiments on real-life logs show that the technique produces collections of models substantially smaller than those extracted by applying existing trace clustering techniques, while allowing the user to control the fitness of the resulting models.
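The "split the log into clusters, then discover one model per cluster" idea can be sketched with a toy greedy clustering over activity sets; the similarity measure and threshold here are illustrative assumptions, not the paper's technique:

```python
def cluster_by_variant(log, max_clusters=3):
    """Greedy toy clustering: put traces with similar activity sets
    (Jaccard similarity >= 0.5) into the same cluster. Real trace-clustering
    techniques are far more refined; this only illustrates the idea of
    splitting a log so one model can be discovered per cluster."""
    clusters = []  # each cluster: (activity_set, [traces])
    for trace in log:
        acts = set(trace)
        best, best_sim = None, 0.0
        for cluster in clusters:
            sim = len(acts & cluster[0]) / len(acts | cluster[0])  # Jaccard
            if sim > best_sim:
                best, best_sim = cluster, sim
        if best is not None and (best_sim >= 0.5 or len(clusters) >= max_clusters):
            best[0].update(acts)
            best[1].append(trace)
        else:
            clusters.append((acts, [trace]))
    return clusters

log = [
    ["register", "check", "approve"],
    ["register", "check", "reject"],
    ["order", "ship", "invoice"],
    ["order", "ship", "pay"],
]
for acts, traces in cluster_by_variant(log):
    print(sorted(acts), len(traces))
```

In the paper's setting, each resulting cluster would then be handed to a discovery algorithm, with further hierarchical splitting via subprocess extraction until the user-defined complexity or fitness thresholds are met.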


In attempting to build intelligent litigation support tools, we have moved beyond first-generation, production-rule legal expert systems. Our work supplements rule-based reasoning with case-based reasoning and intelligent information retrieval. This research specifies an approach to the case-based retrieval problem which relies heavily on an extended object-oriented/rule-based system architecture that is supplemented with causal background information. Machine learning techniques and a distributed agent architecture are used to help simulate the reasoning process of lawyers. In this paper, we outline our implementation of the hybrid IKBALS II rule-based reasoning/case-based reasoning system. It makes extensive use of an automated case representation editor and background information.


In attempting to build intelligent litigation support tools, we have moved beyond first-generation, production-rule legal expert systems. Our work integrates rule-based and case-based reasoning with intelligent information retrieval. When using the case-based reasoning methodology, or in our case the specialisation of case-based retrieval, we need to be aware of how to retrieve relevant experience. Our research, in the legal domain, specifies an approach to the retrieval problem which relies heavily on an extended object-oriented/rule-based system architecture that is supplemented with causal background information. We use a distributed agent architecture to help support the reasoning process of lawyers. Our approach to integrating rule-based reasoning, case-based reasoning and case-based retrieval is contrasted with the CABARET and PROLEXS architectures, which rely on a centralised blackboard architecture. We discuss in detail how our various cooperating agents interact, and provide examples of the system at work. The IKBALS system uses a specialised induction algorithm to induce rules from cases. These rules are then used as indices during the case-based retrieval process. Because we aim to build legal support tools which can be modified to suit various domains, rather than single-purpose legal expert systems, we focus on the principles behind developing legal knowledge-based systems. The original domain chosen was the Accident Compensation Act 1989 (Victoria, Australia), which relates to the provision of benefits for employees injured at work. For various reasons, which are indicated in the paper, we changed our domain to that of the Credit Act 1984 (Victoria, Australia). This Act regulates the provision of loans by financial institutions. The rule-based part of our system, which provides advice on the Credit Act, has been commercially developed in conjunction with a legal firm. We indicate how this work has led to the development of a methodology for constructing rule-based legal knowledge-based systems. We explain the process of integrating this existing commercial rule-based system with the case-based reasoning and retrieval architecture.
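The idea of inducing rules from cases and then reusing the induced conditions as retrieval indices can be sketched as follows. The toy induction and the Credit Act-style facts below are illustrative assumptions, since the abstract does not spell out the IKBALS algorithm:

```python
# Illustrative only: a toy "attribute-value -> outcome" induction whose
# induced conditions then serve as indices into the case base.
def induce_rules(cases):
    """Keep only unambiguous attribute-value -> outcome associations."""
    candidates = {}
    for case in cases:
        for attr, value in case["facts"].items():
            candidates.setdefault((attr, value), set()).add(case["outcome"])
    return {cond: outs.pop() for cond, outs in candidates.items() if len(outs) == 1}

def retrieve(cases, rules, query):
    """Use induced rule conditions as indices during case-based retrieval."""
    matched = {cond for cond in rules if query.get(cond[0]) == cond[1]}
    return [case for case in cases
            if any((a, v) in matched for a, v in case["facts"].items())]

# Hypothetical case base loosely in the Credit Act domain described above:
cases = [
    {"facts": {"loan_type": "personal", "secured": False}, "outcome": "act_applies"},
    {"facts": {"loan_type": "business", "secured": True}, "outcome": "act_exempt"},
]
rules = induce_rules(cases)
hits = retrieve(cases, rules, {"loan_type": "personal"})
print([c["outcome"] for c in hits])  # ['act_applies']
```

Indexing retrieval through induced conditions, rather than raw attribute matching, is what lets the rule-based and case-based components share one vocabulary.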


In this paper we discuss the strengths and weaknesses of a range of artificial intelligence approaches used in legal domains. Symbolic reasoning systems which rely on deductive, inductive and analogical reasoning are described and reviewed. The role of statistical reasoning in law is examined, and the use of neural networks analysed. There is discussion of architectures for, and examples of, systems which combine a number of these reasoning strategies. We conclude that building intelligent legal decision support systems requires a range of reasoning strategies.


Commercial legal expert systems are invariably rule-based. Such systems are poor at dealing with open texture and the argumentation inherent in law. To overcome these problems we suggest supplementing rule-based legal expert systems with case-based reasoning or neural networks. Both case-based reasoners and neural networks use cases, but in very different ways. We discuss these differences at length. In particular, we examine the role of explanation in existing expert systems methodologies. Because neural networks provide poor explanation facilities, we consider the use of Toulmin argument structures to support explanation (S. Toulmin, 1958). We illustrate our ideas with regard to a number of systems built by the authors.
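Toulmin's scheme can be carried by a small data structure that a system wraps around an otherwise opaque prediction. The legal content below is a hypothetical example, not one of the authors' systems:

```python
from dataclasses import dataclass, field

@dataclass
class ToulminArgument:
    """Toulmin's (1958) argument structure: data support a claim via a
    warrant, which is grounded in backing and subject to rebuttals."""
    claim: str
    data: list
    warrant: str
    backing: str = ""
    rebuttals: list = field(default_factory=list)

    def explain(self) -> str:
        lines = [f"Claim: {self.claim}",
                 "Data: " + "; ".join(self.data),
                 f"Warrant: {self.warrant}"]
        if self.backing:
            lines.append(f"Backing: {self.backing}")
        for r in self.rebuttals:
            lines.append(f"Unless: {r}")
        return "\n".join(lines)

# Hypothetical example: wrapping a neural network's prediction so the
# system can still offer a structured explanation.
arg = ToulminArgument(
    claim="The applicant is entitled to compensation",
    data=["injury occurred at the workplace", "employment was current"],
    warrant="workplace injuries to current employees attract compensation",
    backing="Accident Compensation Act 1989 (Vic)",
    rebuttals=["the injury was self-inflicted"],
)
print(arg.explain())
```

The point of the structure is that the explanation is assembled from explicit components rather than from the network's internals, which is what makes it usable alongside a poor-explanation classifier.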


In this paper we provide an overview of a number of fundamental reasoning formalisms in artificial intelligence which can be, and have been, used in modelling legal reasoning. We describe deduction, induction and analogical reasoning formalisms, and show how each can be used separately to model legal reasoning. We argue that these formalisms can be used together to model legal reasoning more accurately, and describe a number of attempts to integrate the approaches.


Traditional approaches to nonmonotonic reasoning fail to satisfy a number of plausible axioms for belief revision and suffer from conceptual difficulties as well. Recent work on ranked preferential models (RPMs) promises to overcome some of these difficulties. Here we show that RPMs are not adequate to handle iterated belief change. Specifically, we show that RPMs do not always allow for the reversibility of belief change. This result indicates the need for numerical strengths of belief.
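The reversibility failure can be illustrated with a Boutilier-style "natural" revision operator over a ranked model; the three-world example below is ours, not taken from the paper:

```python
def natural_revision(ranks, prop):
    """Natural revision on a ranked model: promote the most plausible
    prop-worlds to rank 0 and keep every other world's relative order."""
    best = min(r for w, r in ranks.items() if w in prop)
    promoted = {w for w, r in ranks.items() if w in prop and r == best}
    # Re-rank: promoted worlds first, then the rest in their old order.
    rest = sorted((w for w in ranks if w not in promoted), key=ranks.get)
    new = {w: 0 for w in promoted}
    rank, prev = 0, None
    for w in rest:
        if prev is None or ranks[w] > ranks[prev]:
            rank += 1
        new[w] = rank
        prev = w
    return new

ranks = {"w1": 0, "w2": 1, "w3": 2}   # w1 most plausible
A = {"w3"}
not_A = {"w1", "w2"}
after = natural_revision(natural_revision(ranks, A), not_A)
print(after)  # {'w1': 0, 'w3': 1, 'w2': 2} -- not the original ranking
```

Revising by A and then by not-A leaves w2 and w3 swapped relative to the start: the purely qualitative ordering retains no record of where the demoted worlds used to sit, which illustrates why the paper argues for numerical strengths of belief.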


Faunal vocalisations are vital indicators of environmental change, and faunal vocalisation analysis can provide information for answering ecological questions. Therefore, automated species recognition in environmental recordings has become a critical research area. This thesis presents an automated species recognition approach named Timed and Probabilistic Automata. A small lexicon for describing animal calls is defined, six algorithms for acoustic component detection are developed, and a series of species recognisers are built and evaluated. The presented automated species recognition approach yields a significant improvement in analysis performance over a real-world dataset, and may be transferred to commercial software in the future.


The construction industry accounts for a tenth of global GDP. Still, challenges such as slow adoption of new work processes, islands of information, and legal disputes remain frequent, industry-wide occurrences despite various attempts to address them. In response, IT-based approaches have been adopted to explore collaborative ways of executing construction projects. Building Information Modelling (BIM) is an exemplar of integrative technologies whose 3D visualisation capabilities have fostered collaboration, especially between clients and design teams. Yet the ways in which specification documents are created and used to capture clients' expectations based on industry standards have remained largely unchanged since the 18th century. As a result, specification-related errors are still commonplace in an industry where vast amounts of information are consumed as well as produced in the course of project implementation in the built environment. By implication, processes such as cost planning, which depend on specification-related information, remain largely inaccurate even with the use of BIM-based technologies. This paper briefly distinguishes between non-BIM-based and BIM-based specifications and reports on-going efforts geared towards the latter. We review exemplars aimed at extending Building Information Models to specification information embedded within the objects in a product library, and explore a viable way of reasoning about a semi-automated process of specification using our product library.