954 results for Automatic rule extraction


Relevance: 80.00%

Abstract:

Multiple flame-flame interactions in premixed combustion are investigated using direct numerical simulations of twin turbulent V-flames for a range of turbulence intensities and length scales. Interactions are identified using a novel automatic feature extraction (AFE) technique, based on data registration using the dual-tree complex wavelet transform. Information on the time, position, and type of interactions, and their influence on the flame area, is extracted using AFE. Characteristic length and time scales for the interactions are identified. The effect of interactions on the flame brush is quantified through a global stretch rate, defined as the sum of flamelet stretch and interaction stretch contributions. The effects of each interaction type are discussed. It is found that the magnitudes of the fluctuations in flamelet and interaction stretch are comparable, and a qualitative sensitivity to turbulence length scale is found for one interaction type. Implications for modeling are discussed. © 2013 Copyright Taylor and Francis Group, LLC.
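For reference, the global stretch decomposition mentioned above can be written out explicitly, with stretch taken as the fractional rate of change of flame area A (generic notation, not necessarily the authors' own):

    K_{\mathrm{global}} \;=\; \frac{1}{A}\,\frac{\mathrm{d}A}{\mathrm{d}t} \;=\; K_{\mathrm{flamelet}} \;+\; K_{\mathrm{interaction}}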

Relevance: 80.00%

Abstract:

The influence of Lewis number on turbulent premixed flame interactions is investigated using automatic feature extraction (AFE) applied to high-resolution flame simulation data. Premixed turbulent twin V-flames under identical turbulence conditions are simulated at global Lewis numbers of 0.4, 0.8, 1.0, and 1.2. Information on the position, frequency, and magnitude of the interactions is compared, and the sensitivity of the results to the sample interval is discussed. It is found that both the frequency and the magnitude of normal type interactions increase with decreasing Lewis number. Counternormal type interactions become more likely as the Lewis number increases. The variation in both the frequency and the magnitude of the interactions is found to be caused by large-scale changes in flame wrinkling resulting from differences in the thermo-diffusive stability of the flames. During flame interactions, thermo-diffusive effects are found to be insignificant due to the separation of time scales. © 2013 Copyright Taylor and Francis Group, LLC.
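For reference, the global Lewis number varied across these simulations is the standard ratio of thermal to mass diffusivity of the deficient reactant (a textbook definition, not specific to this paper):

    \mathrm{Le} \;=\; \frac{\alpha}{D} \;=\; \frac{\lambda}{\rho\, c_p\, D}

with \alpha the thermal diffusivity, D the mass diffusivity, \lambda the thermal conductivity, \rho the density, and c_p the specific heat. Flames with Le < 1 are thermo-diffusively unstable and wrinkle more strongly, consistent with the trends reported above.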

Relevance: 80.00%

Abstract:

In the field of control systems it is common to use techniques based on model adaptation to control plants whose mathematical analysis may be intricate. Interest in biologically inspired learning algorithms for control techniques such as Artificial Neural Networks and Fuzzy Systems is growing. Along these lines, this paper gives a perspective on the quality of results produced by two biologically inspired learning algorithms for the design of B-spline neural networks (BNN) and fuzzy systems (FS). One approach is Genetic Programming (GP) for BNN design; the other is the Bacterial Evolutionary Algorithm (BEA) applied to fuzzy rule extraction. The possibility of incorporating a multi-objective approach into the GP algorithm is also outlined, enabling the designer to obtain models better suited to their intended use.
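As an illustration of the bacterial approach mentioned above, the sketch below implements the two classic operators of a Bacterial Evolutionary Algorithm, bacterial mutation and gene transfer, over a population of fuzzy rule bases. The encoding, the placeholder fitness function, and the parameter values are illustrative assumptions, not the paper's implementation.

import random

# Assumed encoding: a rule base is a list of rules; each rule is a list of
# floats (e.g. membership-function centre, width, and a consequent value).
RULE_LEN = 3
N_RULES = 4
POP_SIZE = 10
N_CLONES = 5
N_INFECTIONS = 4

def random_rule():
    return [random.uniform(0.0, 1.0) for _ in range(RULE_LEN)]

def random_rule_base():
    return [random_rule() for _ in range(N_RULES)]

def fitness(rule_base):
    # Placeholder: in practice this would be the (negated) modelling error of
    # the fuzzy system built from `rule_base` on the training data.
    target = 0.5
    return -sum((v - target) ** 2 for rule in rule_base for v in rule)

def bacterial_mutation(rule_base):
    # Clone the individual, re-draw one rule per clone, keep the best variant.
    for i in range(N_RULES):
        candidates = [rule_base]
        for _ in range(N_CLONES):
            clone = [list(rule) for rule in rule_base]
            clone[i] = random_rule()
            candidates.append(clone)
        rule_base = max(candidates, key=fitness)
    return rule_base

def gene_transfer(population):
    # Copy a rule from a better individual into a worse one.
    population.sort(key=fitness, reverse=True)
    half = len(population) // 2
    for _ in range(N_INFECTIONS):
        donor = random.choice(population[:half])
        receiver = random.choice(population[half:])
        receiver[random.randrange(N_RULES)] = list(random.choice(donor))
    return population

population = [random_rule_base() for _ in range(POP_SIZE)]
for _ in range(20):
    population = [bacterial_mutation(rb) for rb in population]
    population = gene_transfer(population)
best_rule_base = max(population, key=fitness)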

Relevance: 80.00%

Abstract:

The work undertaken in this thesis concerns the analysis of terminological equivalence in parallel corpora and in comparable corpora. More specifically, we focus on corpora of specialized texts belonging to the domain of climate change. One original aspect of this study is the analysis of the equivalents of single-word terms. Our theoretical foundations are textual terminology (Bourigault and Slodzian 1999) and the lexico-semantic approach (L'Homme 2005). The study pursues two objectives. The first is to carry out a comparative analysis of equivalence in the two types of corpora in order to verify whether the terminological equivalence observable in parallel corpora differs from that found in comparable corpora. The second is to compare in detail the equivalents associated with the same English term, in order to describe and catalogue them and derive a typology from them. The detailed analysis of the French equivalents of 343 English terms is carried out with the help of computational tools (term extractor, text aligner, etc.) and a rigorous methodology divided into three parts. The first part, common to both objectives of the research, covers corpus compilation, validation of the English terms, and identification of the French equivalents in the two corpora. The second part describes the criteria used to compare the equivalents across the two types of corpora. The third part establishes the typology of equivalents associated with the same English term. The results for the first objective show that, of the 343 English terms analysed, relatively few (12) have questionable equivalents in both corpora, whereas the number of terms showing similar equivalence across the corpora is very high (272 identical equivalents and 55 unobjectionable equivalents). This comparative analysis confirms our hypothesis that the terminology used in parallel corpora does not differ from that of comparable corpora. The results for the second objective show that many English terms are rendered by several equivalents (70% of the terms analysed). It is also observed that the largest group of equivalents is formed not by synonyms but by near-synonyms. In addition, equivalents belonging to a different part of speech account for a substantial share of the equivalents. The typology developed in this thesis thus presents mechanisms of terminological equivalence that have rarely been described so systematically in previous work.
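As a minimal illustration of how candidate French equivalents of a validated English term can be pulled out of a sentence-aligned parallel corpus, the co-occurrence-based sketch below is a generic baseline; it is not the thesis's actual tool chain, which relied on a dedicated term extractor and text aligner.

from collections import Counter

def candidate_equivalents(english_term, aligned_pairs, top_n=5):
    # Rank French words by how often they co-occur with `english_term` in
    # sentence-aligned (English, French) pairs.  Purely illustrative: no
    # lemmatisation, no multiword-term handling, no statistical weighting.
    counts = Counter()
    for en_sentence, fr_sentence in aligned_pairs:
        if english_term.lower() in en_sentence.lower().split():
            counts.update(w.lower().strip(".,;:") for w in fr_sentence.split())
    return [word for word, _ in counts.most_common(top_n)]

# Toy usage with two aligned sentence pairs.
pairs = [
    ("Greenhouse gas emissions keep rising.",
     "Les émissions de gaz à effet de serre continuent d'augmenter."),
    ("The emissions target was missed.",
     "L'objectif d'émissions n'a pas été atteint."),
]
print(candidate_equivalents("emissions", pairs))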

Relevance: 80.00%

Abstract:

We analyze the average performance of a general class of learning algorithms for the NP-complete problem of rule extraction by a binary perceptron. The examples are generated by a rule implemented by a teacher network of similar architecture. A variational approach is used to identify the potential energy that leads to the largest generalization in the thermodynamic limit. We restrict our search to algorithms that always satisfy the binary constraints. A replica symmetric ansatz leads to a learning algorithm that exhibits a phase transition in violation of an information-theoretic bound. Stability analysis shows that this is due to a failure of the replica symmetric ansatz, and the first step of replica symmetry breaking (RSB) is studied. The variational method does not determine a unique potential, but it allows construction of a class with a unique minimum within each first-order valley. Members of this class improve on the performance of the Gibbs algorithm but fail to reach the Bayesian limit in the low generalization phase. They even fail to reach the performance of the best binary vector, an optimal clipping of the barycenter of version space. We find a trade-off between good performance in the low generalization phase and an early onset of perfect generalization. Although the RSB solution may be locally stable, we discuss the possibility that it fails to be the correct saddle point globally. ©2000 The American Physical Society.
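For context on the generalization measures being compared, in this teacher-student setting the generalization error of a student vector J learning a teacher B is fixed by their overlap R through the standard geometric relation for the perceptron (a textbook result, not a derivation specific to this paper):

    \epsilon_g \;=\; \frac{1}{\pi}\arccos(R), \qquad R \;=\; \frac{\mathbf{J}\cdot\mathbf{B}}{|\mathbf{J}|\,|\mathbf{B}|}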

Relevance: 80.00%

Abstract:

Several lines of research on road extraction have been pursued over the last six years by the Photogrammetry and Computer Vision Research Group (GP-F&VC - Grupo de Pesquisa em Fotogrametria e Visão Computacional). Several semi-automatic road extraction methodologies have been developed, including sequential and optimization techniques. The GP-F&VC has also been developing fully automatic methodologies for road extraction. This paper presents an overview of the GP-F&VC research on road extraction from digital images, along with examples of results obtained with the developed methodologies.

Relevance: 80.00%

Abstract:

The purpose of this paper is to introduce a methodology for semi-automatic road extraction from aerial digital image pairs using dynamic programming and epipolar geometry. The method uses both images, from which each road feature pair is extracted. The operator identifies the corresponding road features and selects sparse seed points along them. After all road pairs have been extracted, epipolar geometry is applied to establish the automatic point-to-point correspondence between corresponding features. Finally, each corresponding road pair is georeferenced by photogrammetric intersection. Experiments were carried out with rural aerial images. The results led to the conclusion that the methodology is robust and efficient, even in the presence of shadows from trees and buildings or other irregularities.
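A minimal sketch of the epipolar step described above: given the fundamental matrix F of the image pair, a point x on the road extracted in the left image maps to the epipolar line l' = F x in the right image, and its correspondent is taken where that line meets the road extracted on the right. The function names, and the nearest-vertex shortcut used instead of an exact line/segment intersection, are illustrative assumptions.

import numpy as np

def epipolar_line(F, x_left):
    # Epipolar line l' = F x in the right image for a left-image pixel (x, y),
    # returned as line coefficients (a, b, c) with a*x + b*y + c = 0.
    x_h = np.array([x_left[0], x_left[1], 1.0])
    return F @ x_h

def correspondent_on_polyline(F, x_left, right_polyline):
    # Pick the right-image road vertex closest to the epipolar line of x_left.
    a, b, c = epipolar_line(F, x_left)
    norm = np.hypot(a, b)
    distances = [abs(a * px + b * py + c) / norm for px, py in right_polyline]
    return right_polyline[int(np.argmin(distances))]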

Relevance: 80.00%

Abstract:

This paper proposes a methodology for edge detection in digital images using the Canny detector, combined with a priori focusing of the edge structure by nonlinear anisotropic diffusion via a partial differential equation (PDE). This strategy aims at minimizing the effect of the well-known duality of the Canny detector, under which it is not possible to simultaneously improve insensitivity to image noise and the localization precision of detected edges. The anisotropic diffusion process is used to focus the edge structure a priori because of its notable ability to smooth the image selectively, strongly smoothing homogeneous regions while largely preserving the physical edges, i.e., those that actually correspond to objects present in the image. The solution to the mentioned duality consists in applying the Canny detector at a fine Gaussian scale, but only along the edge regions focused by the anisotropic diffusion process. The results show that the method is appropriate for applications involving automatic feature extraction, since it allows high-precision localization of thinned edges, which are usually related to objects present in the image. © Nauka/Interperiodica 2006.
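The abstract does not name a particular diffusion model, so the sketch below uses the classic Perona-Malik scheme as a stand-in for the nonlinear anisotropic diffusion step, followed by a fine-scale Canny pass restricted to the diffusion-focused edge regions; the parameter values are illustrative.

import numpy as np
from skimage import feature

def perona_malik(image, n_iter=20, kappa=20.0, gamma=0.2):
    # Perona-Malik nonlinear diffusion: smooths homogeneous regions strongly
    # while largely preserving strong (physical) edges.
    u = image.astype(float)
    for _ in range(n_iter):
        dn = np.roll(u, -1, axis=0) - u
        ds = np.roll(u, 1, axis=0) - u
        de = np.roll(u, -1, axis=1) - u
        dw = np.roll(u, 1, axis=1) - u
        # Edge-stopping function g(s) = exp(-(s / kappa)^2).
        cn, cs = np.exp(-(dn / kappa) ** 2), np.exp(-(ds / kappa) ** 2)
        ce, cw = np.exp(-(de / kappa) ** 2), np.exp(-(dw / kappa) ** 2)
        u = u + gamma * (cn * dn + cs * ds + ce * de + cw * dw)
    return u

def focused_canny(image, sigma=1.0, grad_threshold=5.0):
    # Fine-scale Canny kept only where the diffused image still shows a
    # significant gradient, i.e. along the a priori focused edge regions.
    diffused = perona_malik(image)
    gy, gx = np.gradient(diffused)
    edge_region = np.hypot(gx, gy) > grad_threshold
    edges = feature.canny(image.astype(float), sigma=sigma)
    return edges & edge_region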

Relevance: 80.00%

Abstract:

Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)

Relevance: 80.00%

Abstract:

Pós-graduação em Ciência da Computação - IBILCE

Relevance: 80.00%

Abstract:

The use of physical characteristics for human identification is known as biometrics. Among the many biometric traits available, the fingerprint is the most widely used. Fingerprint identification is based on impression patterns, such as the patterns of ridges and minutiae, which are first- and second-level characteristics, respectively. Current identification systems use these two levels of fingerprint features because of the low cost of the sensors. However, recent advances in sensor technology have made it possible to use third-level features present within the ridges, such as perspiration pores. Recent studies show that the use of third-level features can increase security and fraud protection in biometric systems, since they are difficult to reproduce. In addition, recent research has also focused on multibiometric recognition because of its many advantages. The goal of this research project was to apply fusion techniques to fingerprint recognition in order to combine minutia-, ridge-, and pore-based methods and thus provide more robust biometric recognition systems, and also to develop an automated fingerprint identification system using these three recognition methods. We evaluated isotropic-based and adaptive-based automatic pore extraction methods, as well as the fusion of the pore-based method with the identification methods based on minutiae and ridges. The experiments were performed on the public PolyUHRF database and showed a reduction of approximately 16% in the EER compared with the best results obtained by the methods individually.
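The abstract does not specify which fusion rule was used; a common score-level baseline for combining minutia-, ridge-, and pore-based matchers is min-max normalisation followed by a weighted sum, sketched below with illustrative weights.

def min_max_normalise(scores):
    # Map raw matcher scores to the [0, 1] range.
    lo, hi = min(scores), max(scores)
    if hi == lo:
        return [0.0 for _ in scores]
    return [(s - lo) / (hi - lo) for s in scores]

def fuse_scores(minutia_scores, ridge_scores, pore_scores,
                weights=(0.4, 0.3, 0.3)):
    # Weighted-sum score-level fusion of three matchers; each argument is a
    # list of raw match scores for the same set of comparisons.
    wm, wr, wp = weights
    return [wm * m + wr * r + wp * p
            for m, r, p in zip(min_max_normalise(minutia_scores),
                               min_max_normalise(ridge_scores),
                               min_max_normalise(pore_scores))]

# Toy usage: three comparisons scored by each matcher.
print(fuse_scores([120, 40, 95], [0.8, 0.2, 0.6], [33, 5, 21]))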

Relevance: 80.00%

Abstract:

In any terminological study, candidate term extraction is a very time-consuming task. Corpus analysis tools have automated some processes, allowing the detection of relevant data within the texts and thereby facilitating term candidate selection as well. Nevertheless, these tools are normally not specific to terminology research; therefore, the units that are automatically extracted need manual evaluation. Over the last few years some software products have been developed specifically for automatic term extraction. They are based on corpus analysis, but use linguistic and statistical information to filter data more precisely. As a result, the time needed for manual evaluation is reduced. In this framework, we tried to understand whether and how these new tools offer a real advantage. To develop our project, we simulated a terminology study: we chose a domain (the legal framework for medicinal products for human use) and compiled a corpus from which we extracted terms and phraseologisms using AntConc, a corpus analysis tool. Afterwards, we compared our list with the lists extracted automatically by three different tools (TermoStat Web, TaaS and Sketch Engine) in order to evaluate their performance. In the first chapter we describe some principles of terminology and phraseology in language for special purposes and show the advantages offered by corpus linguistics. In the second chapter we illustrate some of the main concepts of the selected domain, as well as some of the main features of legal texts. In the third chapter we describe automatic term extraction and the main criteria for evaluating it; moreover, we introduce the term-extraction tools used for this project. In the fourth chapter we describe our research method and, in the fifth chapter, we present our results and draw some preliminary conclusions on the performance and usefulness of term-extraction tools.
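One simple way to compare each tool's output against the manually validated list is to compute precision and recall of the extracted candidates, as sketched below; the example lists and the exact-string matching are simplifying assumptions.

def precision_recall(candidates, gold_terms):
    # Precision and recall of an automatically extracted candidate list
    # against a manually validated reference list (case-insensitive match).
    cand = {c.lower() for c in candidates}
    gold = {g.lower() for g in gold_terms}
    true_positives = len(cand & gold)
    precision = true_positives / len(cand) if cand else 0.0
    recall = true_positives / len(gold) if gold else 0.0
    return precision, recall

gold = ["marketing authorisation", "active substance", "pharmacovigilance"]
candidates = ["active substance", "marketing authorisation", "member state"]
print(precision_recall(candidates, gold))  # roughly (0.67, 0.67)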

Relevance: 80.00%

Abstract:

In recent years, Deep Learning techniques have been shown to perform well on a large variety of problems in both Computer Vision and Natural Language Processing, reaching and often surpassing the state of the art on many tasks. The rise of deep learning is also revolutionizing the entire field of Machine Learning and Pattern Recognition, pushing forward the concepts of automatic feature extraction and unsupervised learning in general. However, despite its strong success in both science and business, deep learning has its own limitations. It is often questioned whether such techniques are merely brute-force statistical approaches and whether they can only work in the context of High Performance Computing with huge amounts of data. Another important question is whether they are really biologically inspired, as claimed in certain cases, and whether they can scale well in terms of "intelligence". The dissertation focuses on trying to answer these key questions in the context of Computer Vision and, in particular, Object Recognition, a task that has been heavily revolutionized by recent advances in the field. Practically speaking, these answers are based on an exhaustive comparison between two very different deep learning techniques on the aforementioned task: the Convolutional Neural Network (CNN) and Hierarchical Temporal Memory (HTM). They represent two different approaches and points of view within the broad field of deep learning and are good choices for understanding and pointing out the strengths and weaknesses of each. The CNN is considered one of the most classic and powerful supervised methods used today in machine learning and pattern recognition, especially for object recognition. CNNs are well received and accepted by the scientific community and are already deployed by large corporations such as Google and Facebook to solve face recognition and image auto-tagging problems. HTM, on the other hand, is an emerging paradigm and a mainly unsupervised method that is more biologically inspired. It tries to gain more insight from the computational neuroscience community in order to incorporate concepts such as time, context, and attention during the learning process, which are typical of the human brain. In the end, the thesis aims to show that in certain cases, with a smaller quantity of data, HTM can outperform the CNN.
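As a point of reference for one side of the comparison, a minimal convolutional network for small-image object recognition might look like the PyTorch sketch below; the architecture, input size, and class count are illustrative and not those evaluated in the dissertation.

import torch
import torch.nn as nn

class TinyCNN(nn.Module):
    # Minimal CNN for 32x32 RGB inputs (CIFAR-10-sized images).
    def __init__(self, n_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),   # 32x32 -> 16x16
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),   # 16x16 -> 8x8
        )
        self.classifier = nn.Linear(32 * 8 * 8, n_classes)

    def forward(self, x):
        x = self.features(x)
        return self.classifier(x.flatten(start_dim=1))

model = TinyCNN()
logits = model(torch.randn(4, 3, 32, 32))  # batch of 4 random images
print(logits.shape)                        # torch.Size([4, 10])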

Relevance: 80.00%

Abstract:

Cognitive rehabilitation aims to remediate or alleviate the cognitive deficits appearing after an episode of acquired brain injury (ABI). The purpose of this work is to describe the telerehabilitation platform called Guttmann Neuropersonal Trainer (GNPT), which provides new strategies for cognitive rehabilitation, improves efficiency and access to treatment, and increases the knowledge generated from the process. A cognitive rehabilitation process has been modeled to design and develop the system, which allows neuropsychologists to configure and schedule rehabilitation sessions consisting of sets of personalized computerized cognitive exercises grounded in neuroscience and plasticity principles. It provides continuous remote monitoring of patients' performance through an asynchronous communication strategy. An automatic knowledge extraction method has been used to implement a decision support system, improving treatment customization. GNPT has been deployed in 27 rehabilitation centers and in 83 patients' homes, facilitating access to treatment; in total, 1660 patients have been treated. Usability and cost analysis methodologies have been applied to measure efficiency in real clinical environments. The usability evaluation reveals a system usability score higher than 70 for all target users. The cost-efficiency study shows a ratio of 1:20 compared with face-to-face rehabilitation. GNPT enables brain-damaged patients to continue and further extend rehabilitation beyond the hospital, improving the efficiency of the rehabilitation process. It allows customized therapeutic plans and provides information for the further development of clinical practice guidelines.