152 results for Rule-Based Classification


Relevance: 100.00%

Publisher:

Abstract:

The use of appropriate features to characterise an output class or object is critical for all classification problems. In order to find optimal feature descriptors for vegetation species classification in a power line corridor monitoring application, this article evaluates the capability of several spectral and texture features. A new spectral–texture feature descriptor is proposed that incorporates spectral vegetation indices into statistical moment features. The proposed method is evaluated against several classic texture feature descriptors. An object-based classification method is used, and a support vector machine is employed as the benchmark classifier. Individual tree crowns are first detected and segmented from aerial images, and different feature vectors are extracted to represent each tree crown. The experimental results show that the proposed spectral moment features outperform, or are at least comparable with, state-of-the-art texture descriptors in terms of classification accuracy. A comprehensive quantitative evaluation using receiver operating characteristic space analysis further demonstrates the strength of the proposed feature descriptors.
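
A minimal sketch of the spectral-moment idea described above: a spectral vegetation index is computed per pixel inside a segmented tree crown and summarised by statistical moments to form a feature vector, which an SVM benchmark classifier could then use. The choice of NDVI and of the first three moments is an illustrative assumption, not necessarily the exact descriptor used in the article.

import numpy as np
from scipy.stats import skew

def spectral_moment_features(nir, red, crown_mask):
    """Moment features of a spectral vegetation index over one tree-crown mask."""
    ndvi = (nir - red) / (nir + red + 1e-8)   # spectral vegetation index per pixel
    values = ndvi[crown_mask]                 # pixels belonging to the crown
    return np.array([values.mean(), values.std(), skew(values)])

# Tiny synthetic demonstration on a 10x10 crown patch; in the article's setting,
# one such vector per detected crown would feed an SVM benchmark classifier.
rng = np.random.default_rng(0)
nir = rng.uniform(0.4, 0.8, (10, 10))
red = rng.uniform(0.05, 0.2, (10, 10))
mask = np.ones((10, 10), dtype=bool)
print(spectral_moment_features(nir, red, mask))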

Relevance: 100.00%

Publisher:

Abstract:

The aim of this research is to report initial experimental results and an evaluation of a clinician-driven automated method that can address the issue of misdiagnosis from unstructured radiology reports. Timely diagnosis and reporting of patient symptoms in hospital emergency departments (ED) is a critical component of health services delivery. However, due to dispersed information resources and vast amounts of manual processing of unstructured information, an accurate point-of-care diagnosis is often difficult. A rule-based method that considers the occurrence of clinician-specified keywords related to radiological findings was developed to identify limb abnormalities, such as fractures. A dataset containing 99 narrative reports of radiological findings was sourced from a tertiary hospital. The rule-based method achieved an F-measure of 0.80 and an accuracy of 0.80. While the method achieves promising performance, a number of avenues for improvement using advanced natural language processing (NLP) techniques were identified.
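
A minimal sketch of the kind of clinician-driven keyword rule described above: a report is flagged as abnormal if it mentions any keyword related to a radiological finding, with a crude negation check. The keyword list and negation handling are illustrative assumptions, not the study's actual gazetteer.

import re

GAZETTEER = {"fracture", "fractured", "dislocation", "avulsion"}  # assumed finding terms
NEGATIONS = {"no", "not", "without"}                              # assumed negation cues

def classify_report(text: str) -> str:
    tokens = re.findall(r"[a-z]+", text.lower())
    for i, tok in enumerate(tokens):
        if tok in GAZETTEER:
            # crude negation check over a short window of preceding tokens
            if NEGATIONS & set(tokens[max(0, i - 3):i]):
                continue
            return "abnormal"
    return "normal"

print(classify_report("Undisplaced fracture of the distal radius."))  # abnormal
print(classify_report("No fracture or dislocation seen."))            # normal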

Relevance: 100.00%

Publisher:

Abstract:

Background: Timely diagnosis and reporting of patient symptoms in hospital emergency departments (ED) is a critical component of health services delivery. However, due to dispersed information resources and a vast amount of manual processing of unstructured information, accurate point-of-care diagnosis is often difficult. Aims: The aim of this research is to report an initial experimental evaluation of a clinician-informed automated method that addresses the initial misdiagnoses associated with delayed receipt of unstructured radiology reports. Method: A method was developed that resembles clinical reasoning for identifying limb abnormalities. The method consists of a gazetteer of keywords related to radiological findings; it classifies an X-ray report as abnormal if the report contains evidence found in the gazetteer. A set of 99 narrative reports of radiological findings was sourced from a tertiary hospital. Reports were manually assessed by two clinicians, and discrepancies were validated by a third expert ED clinician; the final manual classification generated by the expert ED clinician was used as ground truth to empirically evaluate the approach. Results: The automated method, which attempts to identify limb abnormalities by searching for keywords specified by clinicians, achieved an F-measure of 0.80 and an accuracy of 0.80. Conclusion: While the automated clinician-driven method achieved promising performance, a number of avenues for improvement were identified using advanced natural language processing (NLP) and machine learning techniques.
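
A small sketch of how the reported figures could be reproduced: the automated labels are compared against the expert ED clinician's ground-truth classification, and accuracy and F-measure are derived. The label values below are illustrative only.

from sklearn.metrics import accuracy_score, f1_score

ground_truth = ["abnormal", "normal", "abnormal", "normal"]   # expert ED clinician labels
predicted    = ["abnormal", "normal", "normal",   "normal"]   # automated method output

print("accuracy :", accuracy_score(ground_truth, predicted))
print("F-measure:", f1_score(ground_truth, predicted, pos_label="abnormal"))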

Relevance: 90.00%

Publisher:

Abstract:

Precise, up-to-date and increasingly detailed road maps are crucial for various advanced road applications, such as lane-level vehicle navigation and advanced driver assistance systems. With very high resolution (VHR) imagery from digital airborne sources, data acquisition, collection and updating would be greatly facilitated if road details could be extracted automatically from the aerial images. In this paper, we propose an effective approach to detect road lane information from aerial images using an object-oriented image analysis method. The proposed algorithm starts by constructing the digital surface model (DSM) and true orthophotos from the stereo images. The road lane details are then detected using an object-oriented, rule-based image classification approach. Because other objects with similar spectral and geometrical attributes interfere with the extraction, the extracted road lanes are filtered against the road surface obtained by a progressive two-class decision classifier. The generated road network is evaluated using datasets provided by the Queensland Department of Main Roads. The evaluation shows completeness values between 76% and 98% and correctness values between 82% and 97%.
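
A minimal sketch of an object-based rule filter in the spirit of the approach above: candidate objects from segmentation are kept as lane markings only if they satisfy simple spectral and geometric rules and overlap the detected road surface. All attribute names and thresholds are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class SegmentObject:
    mean_brightness: float   # spectral attribute from the orthophoto
    elongation: float        # length / width of the object
    area_m2: float           # object area in square metres
    road_overlap: float      # fraction of the object lying on the road surface

def is_lane_marking(obj: SegmentObject) -> bool:
    return (obj.mean_brightness > 180 and      # bright painted marking
            obj.elongation > 4.0 and           # long, thin shape
            obj.area_m2 < 15.0 and             # excludes roofs, car parks, etc.
            obj.road_overlap > 0.8)            # lies on the detected road surface

candidates = [SegmentObject(210, 9.5, 2.1, 0.95), SegmentObject(190, 1.2, 40.0, 0.4)]
lanes = [o for o in candidates if is_lane_marking(o)]
print(len(lanes))  # only the first candidate passes the rules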

Relevance: 90.00%

Publisher:

Abstract:

Modern computer graphics systems are able to construct renderings of such high quality that viewers are deceived into regarding the images as coming from a photographic source. Large amounts of computing resources are expended in this rendering process, using complex mathematical models of lighting and shading. However, psychophysical experiments have revealed that viewers only regard certain informative regions within a presented image. Furthermore, it has been shown that these visually important regions contain low-level visual feature differences that attract the attention of the viewer. This thesis will present a new approach to image synthesis that exploits these experimental findings by modulating the spatial quality of image regions by their visual importance. Efficiency gains are therefore reaped, without sacrificing much of the perceived quality of the image. Two tasks must be undertaken to achieve this goal. Firstly, the design of an appropriate region-based model of visual importance, and secondly, the modification of progressive rendering techniques to effect an importance-based rendering approach. A rule-based fuzzy logic model is presented that computes, using spatial feature differences, the relative visual importance of regions in an image. This model improves upon previous work by incorporating threshold effects induced by global feature difference distributions and by using texture concentration measures. A modified approach to progressive ray-tracing is also presented. This new approach uses the visual importance model to guide the progressive refinement of an image. In addition, this concept of visual importance has been incorporated into supersampling, texture mapping and computer animation techniques. Experimental results are presented, illustrating the efficiency gains reaped from using this method of progressive rendering. This visual importance-based rendering approach is expected to have applications in the entertainment industry, where image fidelity may be sacrificed for efficiency purposes, as long as the overall visual impression of the scene is maintained. Different aspects of the approach should find many other applications in image compression, image retrieval, progressive data transmission and active robotic vision.
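
A minimal sketch of a rule-based fuzzy importance model of the kind described above: per-region feature differences (here contrast difference and texture concentration) are fuzzified, combined with simple IF-THEN rules, and defuzzified into an importance score that could drive how many rays a progressive renderer spends on a region. The membership shapes, rules and ray budget are illustrative assumptions, not those of the thesis.

def tri(x, a, b, c):
    """Triangular membership function on [a, c] peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def visual_importance(contrast_diff, texture_conc):
    high_contrast = tri(contrast_diff, 0.3, 1.0, 1.7)
    low_contrast  = tri(contrast_diff, -0.7, 0.0, 0.7)
    busy_texture  = tri(texture_conc, 0.3, 1.0, 1.7)
    # Rule 1: high contrast difference            -> important
    # Rule 2: busy texture with low contrast diff -> moderately important
    r1 = high_contrast
    r2 = min(low_contrast, busy_texture)
    return max(r1, 0.5 * r2)   # crude defuzzification to [0, 1]

# Importance modulates the refinement budget assigned to the region.
rays_per_region = int(16 + 240 * visual_importance(0.9, 0.2))
print(rays_per_region)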

Relevance: 90.00%

Publisher:

Abstract:

This paper presents an extended study on the implementation of support vector machine (SVM) based speaker verification in systems that employ continuous progressive model adaptation using the weight-based factor analysis model. The weight-based factor analysis model compensates for session variations in unsupervised scenarios by incorporating trial confidence measures into the general statistics used in the inter-session variability modelling process. Employing weight-based factor analysis in Gaussian mixture models (GMM) was recently found to provide significant performance gains for unsupervised classification. Further improvements in performance were found through the integration of SVM-based classification into the system by means of GMM supervectors. This study focuses particularly on the way in which a client is represented in the SVM kernel space using single and multiple target supervectors. Experimental results indicate that training client SVMs using a single target supervector maximises performance while exhibiting a certain robustness to the inclusion of impostor training data in the model. Furthermore, the inclusion of low-scoring target trials in the adaptation process is investigated and found to significantly aid performance.
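
A minimal sketch of the GMM-supervector representation referred to above: the component means of a Gaussian mixture fitted to an utterance's acoustic features are stacked into a single high-dimensional supervector, which then represents the speaker in the SVM kernel space. Using a freshly fitted sklearn GaussianMixture (rather than MAP adaptation of a universal background model) and synthetic features is a simplifying assumption.

import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.svm import LinearSVC

def gmm_supervector(features, n_components=8):
    """features: (n_frames, n_dims) acoustic features for one utterance."""
    gmm = GaussianMixture(n_components=n_components, covariance_type="diag",
                          random_state=0).fit(features)
    return gmm.means_.ravel()          # stacked component means

rng = np.random.default_rng(0)
target   = gmm_supervector(rng.normal(0.0, 1.0, (500, 12)))   # client utterance
impostor = gmm_supervector(rng.normal(0.5, 1.2, (500, 12)))   # background utterance

# Single target supervector versus impostor data, as discussed above.
svm = LinearSVC().fit(np.vstack([target, impostor]), [1, 0])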

Relevance: 90.00%

Publisher:

Abstract:

The flying capacitor multicell inverter (FCMI) possesses a natural balancing property. With the phase-shifted (PS) carrier-based scheme, natural balancing can be achieved in a straightforward manner. However, to achieve natural balancing with the harmonically optimal phase-disposition (PD) carrier-based scheme, the conventional approaches require (n-1) x (n-1) trapezoidal carrier signals for an n-level inverter, which is (n-1) x (n-2) more carriers than in the standard PD scheme. This paper proposes two improved natural balancing strategies for the FCMI under the PD scheme, which use the same (n-1) carrier signals as the standard PD scheme. In the first strategy, the band in which the modulation signal is located, the corresponding carrier period number, and the rising or falling half-cycle of the carrier waveform are detected on-line, and the switching signals are generated based on certain rules. In the second strategy, the output voltage level is first selected and the switching signals are then generated according to a rule-based preferential cell selection algorithm. These methods are easy to use and simpler to implement than the other available methods. Simulation and experimental results are presented for a five-level inverter to verify the proposed schemes.
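
A minimal sketch of the phase-disposition (PD) carrier comparison underlying the schemes above: the standard (n-1) level-shifted triangular carriers are compared against the modulation signal, and the number of carriers below the reference gives the instantaneous output voltage level. The paper's rule-based cell selection, which maps this level to specific flying-capacitor cells for natural balancing, is not reproduced here.

import numpy as np

def triangular(t, f_carrier):
    """Unit triangular carrier in [0, 1] at frequency f_carrier."""
    return 2 * np.abs((t * f_carrier) % 1.0 - 0.5)

def pd_output_level(t, m_index=0.9, n_levels=5, f_mod=50.0, f_carrier=2000.0):
    n_carriers = n_levels - 1
    ref = 0.5 * (1 + m_index * np.sin(2 * np.pi * f_mod * t))  # normalised reference
    level = 0
    for k in range(n_carriers):
        carrier = (k + triangular(t, f_carrier)) / n_carriers  # carrier in band k
        if ref > carrier:
            level += 1
    return level   # instantaneous level in 0 .. n_levels-1

t = np.linspace(0, 0.02, 5)
print([pd_output_level(ti) for ti in t])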

Relevance: 90.00%

Publisher:

Abstract:

In recent years, sparse representation based classification (SRC) has received much attention in face recognition with multiple training samples of each subject. However, it cannot be easily applied to a recognition task with insufficient training samples under uncontrolled environments. On the other hand, cohort normalization, as a way of measuring the degradation effect under challenging environments in relation to a pool of cohort samples, has been widely used in the area of biometric authentication. In this paper, for the first time, we introduce cohort normalization to SRC-based face recognition with insufficient training samples. Specifically, a user-specific cohort set is selected to normalize the raw residual, which is obtained from comparing the test sample with its sparse representations corresponding to the gallery subject, using polynomial regression. Experimental results on the AR and FERET databases show that cohort normalization can bring SRC much robustness against various forms of degradation factors for undersampled face recognition.
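
A minimal sketch of the cohort-normalisation step described above: the raw SRC residual of a test sample against a gallery subject is normalised using a polynomial fitted to the residuals obtained from that subject's cohort set. The degree-2 fit and the simple subtraction used here are illustrative assumptions rather than the paper's exact scheme.

import numpy as np

def cohort_normalise(raw_residual, cohort_residuals, degree=2):
    sorted_res = np.sort(cohort_residuals)            # sorted cohort behaviour
    ranks = np.arange(len(sorted_res))
    coeffs = np.polyfit(ranks, sorted_res, degree)    # polynomial regression over ranks
    expected = np.polyval(coeffs, 0)                  # expected best cohort residual
    return raw_residual - expected                    # normalised score

cohort = np.array([0.82, 0.75, 0.91, 0.78, 0.88])     # residuals against the cohort set
print(cohort_normalise(0.35, cohort))                 # genuine: well below cohort level
print(cohort_normalise(0.80, cohort))                 # impostor: close to cohort level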

Relevance: 90.00%

Publisher:

Abstract:

In attempting to build intelligent litigation support tools, we have moved beyond first-generation, production-rule legal expert systems. Our work supplements rule-based reasoning with case-based reasoning and intelligent information retrieval. This research specifies an approach to the case-based retrieval problem which relies heavily on an extended object-oriented / rule-based system architecture that is supplemented with causal background information. Machine learning techniques and a distributed agent architecture are used to help simulate the reasoning process of lawyers. In this paper, we outline our implementation of the hybrid IKBALS II Rule Based Reasoning / Case Based Reasoning system. It makes extensive use of an automated case representation editor and background information.

Relevance: 90.00%

Publisher:

Abstract:

In attempting to build intelligent litigation support tools, we have moved beyond first-generation, production-rule legal expert systems. Our work integrates rule-based and case-based reasoning with intelligent information retrieval. When using the case-based reasoning methodology, or in our case the specialisation of case-based retrieval, we need to be aware of how to retrieve relevant experience. Our research, in the legal domain, specifies an approach to the retrieval problem which relies heavily on an extended object-oriented/rule-based system architecture that is supplemented with causal background information. We use a distributed agent architecture to help support the reasoning process of lawyers. Our approach to integrating rule-based reasoning, case-based reasoning and case-based retrieval is contrasted with the CABARET and PROLEXS architectures, which rely on a centralised blackboard architecture. We discuss in detail how our various cooperating agents interact, and provide examples of the system at work. The IKBALS system uses a specialised induction algorithm to induce rules from cases. These rules are then used as indices during the case-based retrieval process. Because we aim to build legal support tools which can be modified to suit various domains, rather than single-purpose legal expert systems, we focus on principles behind developing legal knowledge based systems. The original domain chosen was the Accident Compensation Act 1989 (Victoria, Australia), which relates to the provision of benefits for employees injured at work. For various reasons, which are indicated in the paper, we changed our domain to that of the Credit Act 1984 (Victoria, Australia). This Act regulates the provision of loans by financial institutions. The rule-based part of our system, which provides advice on the Credit Act, has been commercially developed in conjunction with a legal firm. We indicate how this work has led to the development of a methodology for constructing rule-based legal knowledge based systems. We explain the process of integrating this existing commercial rule-based system with the case-based reasoning and retrieval architecture.
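
A minimal sketch of the idea that rules induced from past cases can serve as retrieval indices: here a decision tree stands in for the specialised induction algorithm (which is not reproduced), and the induced leaf that a new case falls into is used to retrieve the stored cases sharing that leaf. The features, values and outcomes are invented for illustration.

import numpy as np
from sklearn.tree import DecisionTreeClassifier

# Hypothetical past cases: [loan_amount_k, is_consumer_loan, has_written_contract]
cases = np.array([[5, 1, 1], [80, 0, 1], [12, 1, 0], [7, 1, 1], [95, 0, 0]])
outcomes = np.array([1, 0, 1, 1, 0])      # 1 = Act applies, 0 = Act does not apply

tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(cases, outcomes)
leaf_index = tree.apply(cases)            # induced "rule" (leaf) covering each past case

new_case = np.array([[10, 1, 1]])
same_rule = np.where(leaf_index == tree.apply(new_case)[0])[0]
print("retrieved precedent cases:", same_rule)   # past cases indexed by the same rule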

Relevance: 80.00%

Publisher:

Abstract:

Reinforced concrete structures are susceptible to a variety of deterioration mechanisms due to creep and shrinkage, alkali-silica reaction (ASR), carbonation, and corrosion of the reinforcement. These deterioration problems can affect the integrity and load-carrying capacity of the structure. Substantial research has been dedicated to these mechanisms, aiming to identify their causes, reactions, accelerants, retardants and consequences. This has improved our understanding of the long-term behaviour of reinforced concrete structures. However, the strengthening of reinforced concrete structures for durability has to date been mainly undertaken after expert assessment of field data, followed by the development of a scheme both to terminate continuing degradation, by separating the structure from the environment, and to strengthen the structure. The process does not include any significant consideration of the residual load-bearing capacity of the structure or of the highly variable nature of estimates of such remaining capacity. Development of performance curves for deteriorating bridge structures has not been attempted because of the difficulty of developing a model when the input parameters have extremely large variability. This paper presents a framework developed for an asset management system which assesses residual capacity and identifies the most appropriate rehabilitation method for a given reinforced concrete structure exposed to aggressive environments. In developing the framework, several industry consultation sessions were conducted to identify the required input data, the research methodology and the output knowledge base. Capturing expert opinion in a usable knowledge base requires a rule-based formulation, which can subsequently be used to model the reliability of the performance curve of a reinforced concrete structure exposed to a given environment.
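
A minimal sketch of the kind of rule-based formulation mentioned above: expert opinion is captured as IF-THEN rules that map an observed deterioration mechanism and its severity to a candidate rehabilitation method. The rules, attributes and thresholds below are invented for illustration and are not drawn from the framework itself.

RULES = [
    (lambda c: c["mechanism"] == "corrosion" and c["cover_loss_mm"] > 10,
     "patch repair with cathodic protection"),
    (lambda c: c["mechanism"] == "ASR" and c["crack_width_mm"] > 0.3,
     "crack injection and moisture sealing"),
    (lambda c: True, "monitor and reassess at next inspection"),  # default rule
]

def recommend(condition: dict) -> str:
    # First matching rule wins, mimicking a simple expert rule base.
    for predicate, action in RULES:
        if predicate(condition):
            return action
    return "no recommendation"

print(recommend({"mechanism": "corrosion", "cover_loss_mm": 14, "crack_width_mm": 0.1}))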

Relevance: 80.00%

Publisher:

Abstract:

A new method for noninvasive assessment of tear film surface quality (TFSQ) is proposed. The method is based on high-speed videokeratoscopy, in which the corneal area used for the analysis is dynamically estimated in a manner that removes videokeratoscopy interference caused by the shadows of eyelashes but not the interference related to the poor quality of the precorneal tear film, which is of interest. The separation between these two types of seemingly similar videokeratoscopy interference is achieved by region-based classification, in which the overall noise is first separated from the useful signal (the unaltered videokeratoscopy pattern), followed by a dedicated interference classification algorithm that distinguishes between the two considered interferences. The proposed technique provides a much wider corneal area for the analysis of TFSQ than previously reported techniques. A preliminary study with the proposed technique, carried out for a range of anterior eye conditions, showed effective behaviour in terms of noise-to-signal separation and interference classification, as well as consistent TFSQ results. Subsequently, the method proved able not only to discriminate between the bare-eye and lens-on-eye conditions but also to have the potential to discriminate between the two types of contact lenses.
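
A minimal sketch of the two-stage, region-based classification described above: image blocks are first split into useful videokeratoscopy pattern versus interference by a local regularity measure, and interference blocks are then assigned to one of the two interference types by a second feature. Both features and all thresholds are invented placeholders, not those of the proposed method.

import numpy as np

def classify_block(block: np.ndarray) -> str:
    # Strength of low-frequency structure as a crude stand-in for ring-pattern regularity.
    regularity = np.abs(np.fft.rfft(block.mean(axis=0)))[1:6].sum()
    if regularity > 200.0:
        return "pattern"                        # unaltered videokeratoscopy signal
    darkness = block.mean()                     # eyelash shadows tend to be dark
    return "eyelash shadow" if darkness < 40.0 else "tear film interference"

rng = np.random.default_rng(0)
noisy_block = rng.uniform(60, 120, (32, 32))    # hypothetical interference block
print(classify_block(noisy_block))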

Relevance: 80.00%

Publisher:

Abstract:

Automatic recognition of people is an active field of research with important forensic and security applications. In these applications, it is not always possible for the subject to be in close proximity to the system. Voice represents a human behavioural trait which can be used to recognise people in such situations. Automatic Speaker Verification (ASV) is the process of verifying a person's identity through the analysis of their speech and enables recognition of a subject at a distance over a telephone channel, wired or wireless. A significant amount of research has focussed on the application of Gaussian mixture model (GMM) techniques to speaker verification systems, providing state-of-the-art performance. GMMs are a type of generative classifier trained to model the probability distribution of the features used to represent a speaker. Recently introduced to the field of ASV research is the support vector machine (SVM). An SVM is a discriminative classifier requiring examples from both positive and negative classes to train a speaker model. The SVM is based on margin maximisation, whereby a hyperplane attempts to separate classes in a high-dimensional space. SVMs applied to the task of speaker verification have shown high potential, particularly when used to complement current GMM-based techniques in hybrid systems. This work aims to improve the performance of ASV systems using novel and innovative SVM-based techniques. Research was divided into three main themes: session variability compensation for SVMs; unsupervised model adaptation; and impostor dataset selection. The first theme investigated the differences between the GMM and SVM domains for the modelling of session variability, an aspect crucial for robust speaker verification. Techniques developed to improve the robustness of GMM-based classification were shown to bring about similar benefits to discriminative SVM classification through their integration in the hybrid GMM mean supervector SVM classifier. Further, the domains for the modelling of session variation were contrasted to find a number of common factors; however, the SVM domain consistently provided marginally better session variation compensation. Minimal complementary information was found between the techniques due to the similarities in how they achieved their objectives. The second theme saw the proposal of a novel model for the purpose of session variation compensation in ASV systems. Continuous progressive model adaptation attempts to improve speaker models by retraining them after exploiting all encountered test utterances during normal use of the system. The introduction of the weight-based factor analysis model provided significant performance improvements of over 60% in an unsupervised scenario. SVM-based classification was then integrated into the progressive system, providing further benefits in performance over the GMM counterpart. Analysis demonstrated that SVMs also hold several beneficial characteristics for the task of unsupervised model adaptation, prompting further research in the area. In pursuing the final theme, an innovative background dataset selection technique was developed. This technique selects the most appropriate subset of examples from a large and diverse set of candidate impostor observations for use as the SVM background by exploiting the SVM training process. This selection was performed on a per-observation basis so as to overcome the shortcoming of the traditional heuristic-based approach to dataset selection. Results demonstrate that the approach provides performance improvements over both the complete candidate dataset and the best heuristically selected dataset, whilst being only a fraction of the size. The refined dataset was also shown to generalise well to unseen corpora and to be highly applicable to the selection of impostor cohorts required in alternate techniques for speaker verification.
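
A minimal sketch of exploiting the SVM training process for background dataset selection, as outlined above: candidate impostor examples that become support vectors, and therefore actually shape the decision boundary, are retained as the refined background and the rest discarded. This per-client illustration with synthetic supervectors simplifies the per-observation selection described in the thesis.

import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(1)
target_examples = rng.normal(1.0, 0.3, (5, 20))         # client supervectors
candidate_impostors = rng.normal(0.0, 1.0, (200, 20))    # large, diverse candidate pool

X = np.vstack([target_examples, candidate_impostors])
y = np.array([1] * len(target_examples) + [0] * len(candidate_impostors))

svm = SVC(kernel="linear", C=1.0).fit(X, y)
# Keep only the impostor candidates that ended up as support vectors.
sv_idx = svm.support_[svm.support_ >= len(target_examples)] - len(target_examples)
refined_background = candidate_impostors[sv_idx]
print(f"kept {len(refined_background)} of {len(candidate_impostors)} candidates")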

Relevance: 80.00%

Publisher:

Abstract:

The band The Escalators, together with the music composed uniquely for it and a subsequent CD and DVDs, was the work that emerged from my period of research. The areas of interest that were investigated were sampling, minimalism, stasis, the work of David Lynch, as well as a desire to produce new and innovative music. These concepts defined the band's composition and makeup. While each may be regarded as a discrete concept with its own boundaries, in my work they seamlessly intermingle, resulting in that which is unique to The Escalators' sound. The research methodology used for this work was practice-led research.

Relevance: 80.00%

Publisher:

Abstract:

Undergraduates working in teams can be a problematic endeavour, sometimes exacerbated for the student by poor prior experiences, a predisposition towards individually oriented assessment, and simply the busyness that now typifies the life of a student. But effort in pedagogical design is worthwhile, because team work is often a prerequisite in terms of graduate capabilities, robust learning, increased motivation, and indeed in terms of equipping individuals for emergent knowledge-age work practice, often epitomised by collaborative effort in both blended and virtual contexts. Through an iterative approach, based extensively on the established literature, we have developed a successful scaffold that works with a large cohort (n > 800) and affords students the lived experience of being part of a learning network. Individuals within teams work together to develop individual components that are subsequently aggregated and reified into an overall team knowledge artefact. We describe our case and propose a pedagogical model of scaffolding based on three perspectives: conceptual, rule-based and community-driven. This model provides designers with guidelines for producing and refining assessment tasks for team-based learning.