954 results for Automatic rule extraction


Relevance:

30.00%

Publisher:

Abstract:

Doctoral thesis, Informatics (Computer Science), Universidade de Lisboa, Faculdade de Ciências, 2015

Relevance:

30.00%

Publisher:

Abstract:

This paper describes a rule-based automatic syllabifier for Danish built on the Maximal Onset Principle. The success rates previously achieved by rule-based syllabification modules for Portuguese and Catalan formed the basis of this work. The system was implemented and tested using a very small set of rules, and achieved word accuracy rates of 96.9% and 98.7%, contrary to our initial expectations, since Danish has a complex syllabic structure and was therefore expected to be difficult to handle with rules. A comparison with a data-driven syllabification system based on artificial neural networks showed that the rule-based system achieved the higher accuracy rate.
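The Maximal Onset Principle on which such syllabifiers rest can be sketched in a few lines: each intervocalic consonant cluster is assigned to the onset of the following syllable, taking the longest suffix of the cluster that is still a legal onset. The vowel inventory and onset list below are illustrative placeholders, not the paper's actual Danish rule set:

```python
# Toy Maximal-Onset-Principle syllabifier. VOWELS and LEGAL_ONSETS are
# illustrative assumptions, not the Danish rules described in the paper.
VOWELS = set("aeiouy")
LEGAL_ONSETS = {"", "b", "d", "f", "g", "k", "l", "m", "n", "p", "r",
                "s", "t", "v", "st", "sk", "sp", "tr", "br", "kl"}

def syllabify(word):
    # Each vowel nucleates one syllable.
    nuclei = [i for i, ch in enumerate(word) if ch in VOWELS]
    if len(nuclei) < 2:
        return [word]
    syllables, start = [], 0
    for prev, nxt in zip(nuclei, nuclei[1:]):
        cluster = word[prev + 1:nxt]   # consonants between two nuclei
        # Maximise the onset: give the next syllable the longest
        # legal onset by scanning split points left to right.
        split = 0
        for k in range(len(cluster) + 1):
            if cluster[k:] in LEGAL_ONSETS:
                split = k
                break
        boundary = prev + 1 + split
        syllables.append(word[start:boundary])
        start = boundary
    syllables.append(word[start:])
    return syllables
```

For example, `syllabify("kontrast")` keeps the legal onset "tr" with the second syllable, splitting after the "n".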

Relevance:

30.00%

Publisher:

Abstract:

We present a new method for the lysis of single cells in continuous flow, in which cells are sequentially trapped, lysed and released in an automatic process. Using optimized frequencies, dielectrophoretic trapping exposes cells reproducibly to high electric fields for long durations, thereby giving good control over the lysis parameters. In situ evaluation of cytosol extraction from single cells was studied for Chinese hamster ovary (CHO) cells through the out-diffusion of fluorescent molecules at different voltage amplitudes. A diffusion model is proposed that correlates this out-diffusion with the total area of the created pores, which depends on the potential drop across the cell membrane; the model enables evaluation of the total pore area in the membrane. After lysis, the dielectrophoretic trapping is no longer effective because of the reduced conductivity inside the cells, leading to cell release. The trapping time is linked to the time required for cytosol extraction and can thus provide additional validation of effective cytosol extraction for non-fluorescent cells. Furthermore, the application of a single voltage for both trapping and lysis yields a fully automatic process comprising cell trapping, lysis and release, allowing the device to be operated in continuous flow without human intervention.

Relevance:

30.00%

Publisher:

Abstract:

Program documentation helps developers better understand source code during maintenance tasks. However, documentation is not always available, or it may be of poor quality, so redocumentation becomes necessary. In this context, we propose to perform redocumentation by generating comments through the application of extractive summarization techniques. To carry out this task, we began with an empirical study of the quantitative and qualitative aspects of comments. In particular, we studied the distribution of comments across the different types of statements and the frequency with which each type is documented. We also proposed a taxonomy of comments that classifies them according to their content and quality. Following the results of the empirical study, we decided to summarize Java classes by extracting the comments of their methods and constructors. We defined several heuristics to determine the comments most relevant for extraction, then applied these heuristics to the Java classes of three projects to generate their summaries. Finally, we compared the produced summaries (the generated comments) with reference summaries (the original comments) using the ROUGE metric.
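The ROUGE comparison used to evaluate the generated summaries reduces, for the ROUGE-1 variant, to clipped unigram overlap against the reference; a minimal recall-only sketch following Lin's standard definition:

```python
from collections import Counter

def rouge_1(candidate, reference):
    """ROUGE-1 recall: fraction of reference unigrams covered by the
    candidate summary, with per-word counts clipped to the reference."""
    cand = Counter(candidate.lower().split())
    ref = Counter(reference.lower().split())
    overlap = sum(min(c, ref[w]) for w, c in cand.items() if w in ref)
    return overlap / max(sum(ref.values()), 1)
```

Here a generated comment would play the candidate role and the original class comment the reference role; real evaluations would also report ROUGE-2 and longer n-gram variants.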

Relevance:

30.00%

Publisher:

Abstract:

Thesis digitized by the Division de la gestion de documents et des archives of the Université de Montréal

Relevance:

30.00%

Publisher:

Abstract:

In recent years there has been an apparent shift in research from content-based image retrieval (CBIR) to automatic image annotation, in order to bridge the gap between the low-level features and the high-level semantics of images. Automatic Image Annotation (AIA) techniques facilitate the extraction of high-level semantic concepts from images by machine learning. Many AIA techniques use feature analysis as the first step to identify the objects in the image; however, high-dimensional image features degrade the performance of the system. This paper describes and evaluates an automatic image annotation framework that uses SURF descriptors to select the right number of features, and the right features, for annotation. The proposed framework uses a hybrid approach in which k-means clustering is used in the training phase and fuzzy k-NN classification in the annotation phase. The performance of the system is evaluated using standard metrics.
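The fuzzy k-NN step of the annotation phase can be sketched as follows. This is a generic fuzzy k-NN in the style of Keller et al., not the paper's exact configuration; in the framework above, the feature vectors would come from the SURF/k-means codebook step:

```python
import math

def fuzzy_knn(train, query, k=3, m=2.0):
    """Fuzzy k-NN: label memberships are distance-weighted votes of the
    k nearest training samples. `train` is a list of (feature_vector,
    label) pairs; returns a dict {label: membership} summing to 1."""
    nearest = sorted(
        (math.dist(x, query), label) for x, label in train
    )[:k]
    weights, total = {}, 0.0
    for d, label in nearest:
        # Standard fuzzy k-NN weight 1/d^(2/(m-1)); epsilon guards d=0.
        w = 1.0 / (d ** (2.0 / (m - 1.0)) + 1e-12)
        weights[label] = weights.get(label, 0.0) + w
        total += w
    return {label: w / total for label, w in weights.items()}
```

An image would then be annotated with the labels whose membership exceeds some threshold, rather than with a single hard class.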

Relevance:

30.00%

Publisher:

Abstract:

The span of writer identification extends to broad domains such as digital rights administration, forensic expert decision-making systems, and document analysis systems. As the success rate of a writer identification scheme is highly dependent on the features extracted from the documents, the feature extraction and selection phase is highly significant for such schemes. In this paper, writer identification for the Malayalam language is addressed using the Scale Invariant Feature Transform (SIFT) as the feature extraction technique. The scheme is tested on a test bed of 280 writers and its performance evaluated.

Relevance:

30.00%

Publisher:

Abstract:

The characterization and grading of glioma tumors via image-derived features, for diagnosis, prognosis and treatment response, has been an active research area in medical image computing. This paper presents a novel method for the automatic detection and classification of glioma from conventional T2-weighted MR images. Automatic detection of the tumor was established using a newly developed method called the Adaptive Gray level Algebraic set Segmentation Algorithm (AGASA). Statistical features were extracted from the detected tumor texture using first-order statistics and gray level co-occurrence matrix (GLCM) based second-order statistical methods. The statistical significance of the features was determined by a t-test and its corresponding p-value. A decision system was developed for the grade detection of glioma using these selected features and their p-values. The detection performance of the decision system was validated using the receiver operating characteristic (ROC) curve. The diagnosis and grading of glioma using this non-invasive method can yield promising results in medical image computing.
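The GLCM-based second-order statistics mentioned above can be illustrated with a minimal pure-Python version for a single horizontal offset; production implementations (e.g. scikit-image's `graycomatrix`) handle multiple distances, angles and symmetric normalization:

```python
from collections import Counter

def glcm_features(img):
    """Gray level co-occurrence matrix for a horizontal offset of 1,
    plus two standard second-order texture statistics.
    `img` is a 2-D list of integer gray levels."""
    pairs = Counter(
        (row[j], row[j + 1])
        for row in img for j in range(len(row) - 1)
    )
    total = sum(pairs.values())
    p = {ij: c / total for ij, c in pairs.items()}   # normalized GLCM
    contrast = sum(((i - j) ** 2) * pij for (i, j), pij in p.items())
    energy = sum(pij ** 2 for pij in p.values())
    return {"contrast": contrast, "energy": energy}
```

A perfectly uniform patch gives contrast 0 and energy 1; texture with many gray level transitions pushes contrast up and energy down, which is what makes such features usable for tumor grading.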

Relevance:

30.00%

Publisher:

Abstract:

This thesis proposes a framework for identifying the root cause of a voltage disturbance, as well as its source location (upstream or downstream) relative to the monitoring point. The framework works with three-phase voltage and current waveforms collected in radial distribution networks without distributed generation; real-world and synthetic waveforms are used to test it. The framework involves features conceived from electrical principles, under some hypotheses about the analyzed phenomena. The features considered are based on the waveforms and on timestamp information. Multivariate analysis of variance and rule induction algorithms are applied to assess the amount of meaningful information explained by each feature, according to the root cause of the disturbance and its source location. The classification rates obtained show that the proposed framework could be used for the automatic diagnosis of voltage disturbances collected in radial distribution networks. Furthermore, the diagnostic results can subsequently be used to support power network operation, maintenance and planning.

Relevance:

30.00%

Publisher:

Abstract:

Tidal channel networks play an important role in the intertidal zone, exerting substantial control over the hydrodynamics and sediment transport of the region and hence over the evolution of the salt marshes and tidal flats. The study of the morphodynamics of tidal channels is currently an active area of research, and a number of theories have been proposed which require, for their validation, the measurement of channels over extensive areas. Remotely sensed data provide a suitable means for such channel mapping. The paper describes a technique that may be adapted to extract tidal channels from either aerial photographs or LiDAR data separately, or from both types of data used together in a fusion approach. Application of the technique to channel extraction from LiDAR data has been described previously. However, aerial photographs of intertidal zones are much more commonly available than LiDAR data, and most LiDAR flights now involve the acquisition of multispectral images to complement the LiDAR data. In view of this, the paper investigates the use of multispectral data for the semi-automatic identification of tidal channels, firstly from aerial photographs or linescanner data alone, and secondly from fused linescanner and LiDAR data sets. A multi-level, knowledge-based approach is employed. The algorithm based on aerial photography can achieve a useful channel extraction, though it may fail to detect some of the smaller channels, partly because the spectral response of parts of the non-channel areas may be similar to that of the channels. The algorithm for channel extraction from fused LiDAR and spectral data gives an increased accuracy, though only slightly higher than that obtained using LiDAR data alone. The results illustrate the difficulty of developing a fully automated method, and justify the semi-automatic approach adopted.

Relevance:

30.00%

Publisher:

Abstract:

The study of the morphodynamics of tidal channel networks is important because of their role in tidal propagation and in the evolution of salt marshes and tidal flats. Channel dimensions range from tens of metres wide and metres deep near the low water mark to only 20-30 cm wide and 20 cm deep for the smallest channels on the marshes. The conventional method of measuring the networks is cumbersome, involving the manual digitising of aerial photographs. This paper describes a semi-automatic knowledge-based network extraction method that is being implemented to work on airborne scanning laser altimetry (and later aerial photography). The channels exhibit a width variation of several orders of magnitude, making an approach based on multi-scale line detection difficult. The processing therefore uses multi-scale edge detection to detect channel edges, then associates adjacent anti-parallel edges together to form channels using a distance-with-destination transform. Breaks in the networks are repaired by extending channel ends along their existing direction to join with nearby channels, using the domain knowledge that flow paths should proceed downhill and that any network fragment should join a nearby fragment so as to connect eventually to the open sea.

Relevance:

30.00%

Publisher:

Abstract:

This paper introduces a new neurofuzzy model construction and parameter estimation algorithm for observed finite data sets, based on a Takagi and Sugeno (T-S) inference mechanism and a new extended Gram-Schmidt orthogonal decomposition algorithm, for the modeling of a priori unknown dynamical systems in the form of a set of fuzzy rules. The first contribution of the paper is the introduction of a one-to-one mapping between a fuzzy rule-base and a model matrix feature subspace using the T-S inference mechanism. This link enables the numerical properties associated with a rule-based matrix subspace, the relationships amongst these matrix subspaces, and the correlation between the output vector and a rule-base matrix subspace to be investigated and extracted as rule-based knowledge to enhance model transparency. The matrix subspace spanned by a fuzzy rule is initially derived as the input regression matrix multiplied by a weighting matrix consisting of the corresponding fuzzy membership functions over the training data set. Model transparency is explored by deriving an equivalence between an A-optimality experimental design criterion of the weighting matrix and the average model output sensitivity to the fuzzy rule, so that rule-bases can be effectively measured by their identifiability via the A-optimality criterion. The A-optimality criterion of the weighting matrices of the fuzzy rules is used to construct an initial model rule-base. An extended Gram-Schmidt algorithm is then developed to estimate the parameter vector for each rule. This new algorithm decomposes the model rule-bases via an orthogonal subspace decomposition approach, so as to enhance model transparency with the capability of interpreting the derived rule-base energy level. This new approach is computationally simpler than the conventional Gram-Schmidt algorithm for resolving high-dimensional regression problems, whereby it is computationally desirable to decompose complex models into a few submodels rather than a single model with a large number of input variables and the associated curse-of-dimensionality problem. Numerical examples are included to demonstrate the effectiveness of the proposed new algorithm.
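The A-optimality criterion used to rank rule identifiability is trace((WᵀW)⁻¹) of the rule's weighting matrix W; a minimal sketch for the two-column case, using the closed-form 2x2 inverse (illustrative only; the paper works with full membership-weighted regression matrices):

```python
def a_optimality_2x2(W):
    """A-optimality score trace((W^T W)^-1) for a weighting matrix W
    given as a list of 2-element rows. Smaller values indicate a
    better-conditioned, more identifiable rule subspace."""
    # Form M = W^T W, a symmetric 2x2 matrix [[a, b], [b, d]].
    a = sum(row[0] * row[0] for row in W)
    b = sum(row[0] * row[1] for row in W)
    d = sum(row[1] * row[1] for row in W)
    det = a * d - b * b
    if det <= 0:
        raise ValueError("W^T W is singular: rule not identifiable")
    # M^-1 = (1/det) [[d, -b], [-b, a]], so trace(M^-1) = (a + d)/det.
    return (a + d) / det
```

Nearly collinear columns drive the determinant toward zero and the score toward infinity, which is exactly how poorly identifiable rules are flagged before the Gram-Schmidt parameter estimation step.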

Relevance:

30.00%

Publisher:

Abstract:

Automatic keyword or keyphrase extraction is concerned with assigning keyphrases to documents based on words from within the document. Previous studies have shown that in a significant number of cases author-supplied keywords are not appropriate for the document to which they are attached. This can be either because they represent what the author believes the paper is about, not what it actually is, or because they include keyphrases that are more classificatory than explanatory, e.g. "University of Poppleton" instead of "Knowledge Discovery in Databases". Thus, there is a need for a system that can generate an appropriate and diverse range of keyphrases that reflect the document. This paper proposes a solution that examines the synonyms of words and phrases in the document to find the underlying themes, and presents these as appropriate keyphrases. The primary method explores taking n-grams of the source document phrases and examining their synonyms, while the secondary method considers grouping outputs by their synonyms. The experiments undertaken show that the primary method produces good results and that the secondary method produces both good results and potential for future work.
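The first step of the primary method, taking n-grams of the source phrases as keyphrase candidates, can be sketched as below. The synonym lookup itself (e.g. via WordNet) is omitted, and the stopword list is a placeholder:

```python
from collections import Counter

STOPWORDS = {"the", "a", "of", "in", "and", "to", "is"}

def candidate_keyphrases(text, n_max=3, top=5):
    """Extract frequent n-grams that neither start nor end with a
    stopword as keyphrase candidates. Synonym grouping (the paper's
    secondary method) would then merge candidates sharing a synset."""
    words = [w.strip(".,;:").lower() for w in text.split()]
    counts = Counter()
    for n in range(1, n_max + 1):
        for i in range(len(words) - n + 1):
            gram = words[i:i + n]
            if gram[0] in STOPWORDS or gram[-1] in STOPWORDS:
                continue
            counts[" ".join(gram)] += 1
    return [g for g, _ in counts.most_common(top)]
```

Frequency is only one possible scoring signal; in the paper's setting the candidates would be re-ranked after expanding them with synonyms to surface underlying themes.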

Relevance:

30.00%

Publisher:

Abstract:

Automatic generation of classification rules has become an increasingly popular technique in commercial applications such as Big Data analytics, rule-based expert systems and decision making systems. However, a principal problem that arises with most methods for the generation of classification rules is the overfitting of training data. When Big Data is dealt with, this may result in the generation of a large number of complex rules, which may not only increase computational cost but also lower the accuracy in predicting further unseen instances. This has led to the necessity of developing pruning methods for the simplification of rules. In addition, once generated, classification rules are used to make predictions. Where efficiency is concerned, it is desirable to find the first rule that fires as quickly as possible when searching through a rule set; thus a suitable structure is required to represent the rule set effectively. In this chapter, the authors introduce a unified framework for the construction of rule-based classification systems consisting of three operations on Big Data: rule generation, rule simplification and rule representation. The authors also review some existing methods and techniques used for each of the three operations and highlight their limitations. They introduce some novel methods and techniques developed by them recently, and discuss these in comparison to existing ones with respect to the efficient processing of Big Data.
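The "first rule that fires" search described above, over an ordered rule set, has a naive baseline in a linear decision-list scan; the representations the chapter discusses aim to make this lookup more efficient than the following sketch:

```python
def first_firing_rule(rules, instance):
    """Linear scan over an ordered rule list. Each rule is a
    (conditions, label) pair, where conditions maps attribute -> value;
    a rule fires when all its conditions hold for the instance.
    Returns the label of the first firing rule, or None."""
    for conditions, label in rules:
        if all(instance.get(a) == v for a, v in conditions.items()):
            return label
    return None
```

A rule with empty conditions placed last acts as the default, so the scan always terminates with a prediction.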

Relevance:

30.00%

Publisher:

Abstract:

Due to idiosyncrasies in their syntax, semantics or frequency, Multiword Expressions (MWEs) have received special attention from the NLP community, as the methods and techniques developed for the treatment of simplex words are not necessarily suitable for them. This is certainly the case for the automatic acquisition of MWEs from corpora. A lot of effort has been directed to the task of automatically identifying them, with considerable success. In this paper, we propose an approach to the identification of MWEs in a multilingual context, as a by-product of a word alignment process, that not only deals with the identification of possible MWE candidates but also associates some multiword expressions with semantics. The results obtained indicate the feasibility, and the low cost in terms of tools and resources, of this approach, which could, for example, facilitate and speed up lexicographic work.
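One simple way to realize the "MWEs as a by-product of word alignment" idea: treat a contiguous span of source words all aligned to a single target word as an MWE candidate, with the target word supplying a rough semantic tag. This is an illustrative reduction of the approach, not the paper's exact procedure:

```python
from collections import defaultdict

def mwe_candidates(alignments, src_tokens, tgt_tokens):
    """`alignments` is a set of (source index, target index) links
    from a word aligner. A contiguous multi-word source span linked to
    one target word becomes an MWE candidate paired with that word."""
    by_tgt = defaultdict(list)
    for s, t in alignments:
        by_tgt[t].append(s)
    candidates = []
    for t, srcs in sorted(by_tgt.items()):
        srcs.sort()
        contiguous = srcs == list(range(srcs[0], srcs[-1] + 1))
        if len(srcs) > 1 and contiguous:
            candidates.append((" ".join(src_tokens[i] for i in srcs),
                               tgt_tokens[t]))
    return candidates
```

For instance, English "kicked the bucket" aligned entirely to German "starb" ("died") surfaces both the candidate expression and a hint of its non-compositional meaning.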