849 results for Applied artificial intelligence
Abstract:
Industry has constantly sought to reduce operating costs in order to increase profit and competitiveness. Achieving this goal requires, among other factors, the design and deployment of new tools that give accurate, efficient, and inexpensive access to relevant process information. Virtual sensors have been increasingly applied in industry. Because a virtual sensor is flexible, it can be adapted to any type of measurement, reducing operating costs without compromising, and in some cases even improving, the quality of the generated information. Since virtual sensors are implemented entirely in software, they are not subject to physical damage like real sensors, and they adapt better to hostile and hard-to-reach environments. The success of this kind of sensor comes from the use of computational intelligence techniques, which have been used to model many highly complex nonlinear processes. The goal of this work is to estimate, via a virtual sensor, the quality of the fluoridated alumina produced by a Gas Treatment Plant (PTG), which results from the adsorption of pollutant gases onto virgin alumina. The model that emulates the behavior of an alumina quality sensor was built with the computational intelligence technique known as Artificial Neural Networks. The motivations for this work are: to run virtual simulations without disturbing the operation of the PTG; to make more precise decisions that are not based solely on operator experience; to diagnose potential problems before they affect the quality of the fluoridated alumina; and to keep the aluminum reduction furnace operating normally, since the production of low-quality alumina affects the reaction that breaks down the molecule containing this metal. The expected benefits of this project are an increase in the efficiency of the PTG, producing high-quality fluoridated alumina while emitting fewer pollutant gases into the atmosphere, and a longer service life for the reduction furnace.
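For illustration, a minimal sketch of a neural-network soft sensor of the kind described above: an MLP regressor mapping a few process variables to a quality index. The input names, data, and network size are hypothetical assumptions, not the thesis's actual configuration.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(5)

# Stand-in process data: gas flow, reactor temperature, HF concentration
# (hypothetical inputs; a real PTG soft sensor would use plant historian data).
n = 500
X = np.column_stack([
    rng.uniform(40, 60, n),    # gas flow
    rng.uniform(80, 120, n),   # temperature
    rng.uniform(0.5, 2.0, n),  # HF concentration
])
# Synthetic nonlinear "quality" target standing in for lab measurements.
y = (0.5 * X[:, 0] - 0.02 * (X[:, 1] - 100) ** 2 + 8 * np.log(X[:, 2])
     + rng.normal(0, 1, n))

# Soft sensor: scale inputs, then fit an MLP to emulate the quality analyzer.
soft_sensor = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(16, 8), max_iter=3000, random_state=0),
)
soft_sensor.fit(X[:400], y[:400])
print("held-out R^2:", soft_sensor.score(X[400:], y[400:]))
```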
Abstract:
The adoption of digital sound broadcasting systems, currently under test in Brazil, enables new studies aimed at better planning for the deployment of these new stations. This means re-evaluating the main existing radio propagation models or proposing new alternatives that meet the demands inherent to digital systems. Current models, as given in Recommendations ITU-R P.1546 and ITU-R P.1812, do not faithfully reflect the reality of some regions of Brazil, especially regions with a tropical climate such as the Amazon, whether because of the high rainfall or the vast flora. With models suited to the propagation channel, it becomes feasible to develop more accurate and efficient coverage planning tools. Such tools are useful both to ANATEL, for drawing up the basic channel distribution plans, and to broadcasters. This work presents a methodology based on computational intelligence, specifically Bayesian inference, for predicting electric field strength; it can be applied to the planning or expansion of coverage areas of broadcasting systems in the medium-wave band (300 kHz to 3 MHz). The methodology estimates electric field values from terrain altitude values (through the analysis of conditional probability tables) and compares them with measured field values. The data used in this work were collected in central Brazil, near the city of Brasília; the transmitted signal was an AM radio signal at 980 kHz. With the data collected during the measurement campaigns, simulations were carried out using conditional probability tables generated by Bayesian inference. A method is thus proposed to predict electric field values from the correlation between measured electric field and altitude using computational intelligence. Compared with the many works in the literature that share the same goal, the results obtained here validate the use of Bayesian inference to determine the electric field of medium-wave sound broadcasting.
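For illustration, a minimal sketch of the core mechanism described above: a conditional probability table relating binned terrain altitude to binned field strength, used to predict an expected field value. The bin edges, Laplace smoothing, and synthetic data are assumptions, not the thesis's measurement campaign.

```python
import numpy as np

rng = np.random.default_rng(6)

# Stand-in measurement campaign: altitude (m) and field strength (dBuV/m).
n = 2000
alt = rng.uniform(900, 1300, n)
field = 95 - 0.03 * (alt - 900) + rng.normal(0, 3, n)   # synthetic relation

# Discretize both variables and build P(field_bin | altitude_bin).
alt_bins = np.linspace(900, 1300, 9)
fld_bins = np.linspace(field.min(), field.max(), 13)
ai = np.clip(np.digitize(alt, alt_bins) - 1, 0, len(alt_bins) - 2)
fi = np.clip(np.digitize(field, fld_bins) - 1, 0, len(fld_bins) - 2)

counts = np.zeros((len(alt_bins) - 1, len(fld_bins) - 1))
np.add.at(counts, (ai, fi), 1)
cpt = (counts + 1) / (counts + 1).sum(axis=1, keepdims=True)  # Laplace smoothing

# Prediction: expected field value given an altitude, read off the CPT row.
fld_centers = 0.5 * (fld_bins[:-1] + fld_bins[1:])

def predict(altitude):
    b = np.clip(np.digitize(altitude, alt_bins) - 1, 0, len(alt_bins) - 2)
    return cpt[b] @ fld_centers

print(f"predicted field at 1000 m: {predict(1000.0):.1f} dBuV/m")
```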
Abstract:
With the development of Digital TV, equipment is becoming more and more modern in order to popularize information that may soon reach all Brazilian families. This opens a space for discussion about the many directions that usability applied to ISDB-Tb (Brazilian System of Digital Television) interactivity can take. This paper approaches questions connected to the concept of usability, as well as subjects related to the life cycle of some technologies (lifetime, obsolescence). It also deals with the definition of interactivity on Digital Television, since it is responsible for the emergence of a new contingent of interacting people, ranging from users of computers and portable equipment to passive TV viewers. It is possible to conclude that Human-Digital TV Interaction (HDTVI) comprises the synergy among three actants on Digital TV: the (possibly collective) TV viewer, the interface, and the issuer, which can be represented by an Artificial Intelligence (AI) service.
Abstract:
Even though the digital processing of documents is increasingly widespread in industry, printed documents are still largely in use. In order to electronically process the contents of printed documents, information must be extracted from digital images of those documents. When dealing with complex documents, in which the contents of different regions and fields can be highly heterogeneous with respect to layout, printing quality, and the use of fonts and typing standards, reconstructing the contents of documents from digital images can be a difficult problem. In this article we describe an efficient solution to this problem, in which the semantic contents of fields in a complex document are extracted from a digital image.
Abstract:
Semi-supervised learning techniques have gained increasing attention in the machine learning community as a result of two main factors: (1) the amount of available data is increasing exponentially; (2) the task of data labeling is cumbersome and expensive, involving human experts in the process. In this paper, we propose a network-based semi-supervised learning method inspired by the modularity greedy algorithm, which was originally applied to unsupervised learning. Changes have been made to the modularity maximization process so as to adapt the model to propagate labels throughout the network. Furthermore, a network reduction technique is introduced, along with an extensive analysis of its impact on the network. Computer simulations are performed on artificial and real-world databases, providing a numerical, quantitative basis for the performance of the proposed method.
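As a loose illustration of the underlying idea (modularity maximization reused for label propagation), the sketch below runs off-the-shelf greedy modularity communities and then spreads sparse seed labels by majority vote within each community. This is a simplification, not the authors' algorithm; the toy graph and seed labels are assumptions.

```python
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

# Toy graph: two dense groups, with a few labeled seed nodes.
G = nx.planted_partition_graph(2, 10, p_in=0.8, p_out=0.05, seed=42)
seeds = {0: "A", 1: "A", 10: "B", 11: "B"}   # known labels (sparse)

# 1) Unsupervised step: greedy modularity maximization finds communities.
communities = greedy_modularity_communities(G)

# 2) Semi-supervised step: propagate seed labels by majority vote inside
#    each community (communities with no seeds stay unlabeled: None).
labels = {}
for comm in communities:
    votes = [seeds[n] for n in comm if n in seeds]
    winner = max(set(votes), key=votes.count) if votes else None
    for n in comm:
        labels[n] = seeds.get(n, winner)

print(labels)
```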
Abstract:
A deep theoretical analysis of the graph cut image segmentation framework presented in this paper simultaneously translates into important contributions in several directions. The most important practical contribution of this work is a full theoretical description, and implementation, of a novel powerful segmentation algorithm, GC(max). The output of GC(max) coincides with a version of a segmentation algorithm known as Iterative Relative Fuzzy Connectedness, IRFC. However, GC(max) is considerably faster than the classic IRFC algorithm, which we prove theoretically and show experimentally. Specifically, we prove that, in the worst case scenario, the GC(max) algorithm runs in linear time with respect to the variable M = |C| + |Z|, where |C| is the image scene size and |Z| is the size of the allowable range, Z, of the associated weight/affinity function. For most implementations, Z is identical to the set of allowable image intensity values, and its size can be treated as small with respect to |C|, meaning that O(M) = O(|C|). In such a situation, GC(max) runs in linear time with respect to the image size |C|. We show that the output of GC(max) constitutes a solution of a graph cut energy minimization problem, in which the energy is defined as the ℓ∞ norm ‖F_P‖_∞ of the map F_P that associates, with every element e from the boundary of an object P, its weight w(e). This formulation brings IRFC algorithms to the realm of the graph cut energy minimizers, with energy functions ‖F_P‖_q for q ∈ [1, ∞]. Of these, the best known minimization problem is for the energy ‖F_P‖_1, which is solved by the classic min-cut/max-flow algorithm, referred to often as the Graph Cut algorithm. We notice that a minimization problem for ‖F_P‖_q, q ∈ [1, ∞), is identical to that for ‖F_P‖_1 when the original weight function w is replaced by w^q. Thus, any algorithm GC(sum) solving the ‖F_P‖_1 minimization problem also solves the one for ‖F_P‖_q with q ∈ [1, ∞), so just two algorithms, GC(sum) and GC(max), are enough to solve all ‖F_P‖_q-minimization problems. We also show that, for any fixed weight assignment, the solutions of the ‖F_P‖_q-minimization problems converge to a solution of the ‖F_P‖_∞-minimization problem (the identity ‖F_P‖_∞ = lim_{q→∞} ‖F_P‖_q alone is not enough to deduce that). An experimental comparison of the performance of GC(max) and GC(sum) algorithms is included. This concentrates on comparing the actual (as opposed to provable worst scenario) running times of the algorithms, as well as the influence of the choice of the seeds on the output.
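For reference, the energies involved can be restated compactly (a sketch in the abstract's notation; the exact formalism is in the paper):

```latex
% Boundary-weight map of a segmentation P and its q-norm energies:
%   F_P : \mathrm{bd}(P) \to Z, \quad F_P(e) = w(e)
\[
  \varepsilon_q(P) = \lVert F_P \rVert_q
    = \Bigl( \sum_{e \in \mathrm{bd}(P)} w(e)^q \Bigr)^{1/q},
  \qquad q \in [1, \infty),
\]
\[
  \varepsilon_\infty(P) = \lVert F_P \rVert_\infty
    = \max_{e \in \mathrm{bd}(P)} w(e).
\]
% q = 1: classic Graph Cut (min-cut/max-flow); q = \infty: GC(max)/IRFC.
% Replacing w by w^q reduces the q-norm problem to the 1-norm problem.
```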
Abstract:
This paper addresses the problem of water-demand forecasting for real-time operation of water supply systems. The present study was conducted to identify the best-fit model using hourly consumption data from the water supply system of Araraquara, São Paulo, Brazil. Artificial neural networks (ANNs) were used in view of their enhanced capability to match or even improve on the regression model forecasts. The ANNs used were the multilayer perceptron with the back-propagation algorithm (MLP-BP), the dynamic neural network (DAN2), and two hybrid ANNs. The hybrid models used the error produced by the Fourier series forecasting as input to the MLP-BP and DAN2, called ANN-H and DAN2-H, respectively. The inputs tested for the neural networks were selected based on the literature and on correlation analysis. The results from the hybrid models were promising, with DAN2 performing better than the tested MLP-BP models. DAN2-H, identified as the best model, produced a mean absolute error (MAE) of 3.3 L/s and 2.8 L/s for the training and test sets, respectively, for the prediction of the next hour, which represented about 12% of the average consumption. The best forecasting model for the next 24 hours was again DAN2-H, which outperformed the other compared models and produced an MAE of 3.1 L/s and 3.0 L/s for the training and test sets, respectively, again about 12% of average consumption. DOI: 10.1061/(ASCE)WR.1943-5452.0000177.
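For illustration, a minimal sketch of the hybrid scheme described above: a Fourier-series fit of the daily cycle, with a neural network predicting the residual error. The harmonic count, network shape, lag window, and toy data are assumptions, not the paper's configuration.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Toy hourly demand series: daily cycle plus noise (stand-in for real data).
t = np.arange(24 * 60)                      # 60 days of hourly samples
y = 25 + 5 * np.sin(2 * np.pi * t / 24) + rng.normal(0, 1.5, t.size)

# 1) Fourier-series forecast: least-squares fit of daily harmonics.
K = 3                                        # assumed number of harmonics
X_f = np.column_stack(
    [np.ones_like(t, dtype=float)]
    + [f(2 * np.pi * k * t / 24) for k in range(1, K + 1) for f in (np.sin, np.cos)]
)
coef, *_ = np.linalg.lstsq(X_f, y, rcond=None)
y_fourier = X_f @ coef
residual = y - y_fourier                     # the "error" fed to the ANN

# 2) Hybrid step: an MLP predicts the next residual from recent residuals.
lag = 24
X = np.column_stack([residual[i:i - lag] for i in range(lag)])
target = residual[lag:]
split = int(0.8 * len(target))
mlp = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
mlp.fit(X[:split], target[:split])

# Hybrid forecast = Fourier component + ANN-corrected residual.
y_hat = y_fourier[lag:][split:] + mlp.predict(X[split:])
mae = np.mean(np.abs(y[lag:][split:] - y_hat))
print(f"test MAE: {mae:.2f} L/s (toy data)")
```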
Abstract:
A semi-autonomous unmanned underwater vehicle (UUV), named LAURS, is being developed at the Laboratory of Sensors and Actuators at the University of Sao Paulo. The vehicle has been designed to provide inspection and intervention capabilities in specific missions of deep water oil fields. In this work, a method for modeling and identifying the yaw motion dynamics of an open-frame underwater vehicle is presented. Using an on-board low-cost magnetic compass sensor, the method is based on an uncoupled 1-DOF (degree of freedom) dynamic system equation and on the integral method, the classical least squares algorithm applied to the integral form of the dynamic system equations. Experimental trials with the actual vehicle have been performed in a test tank and a diving pool. During these experiments, the thrusters responsible for yaw motion were driven by sinusoidal voltage signal profiles. An assessment of the feasibility of the method reveals that the estimated dynamic system models are more reliable for slow and small sinusoidal voltage signal profiles, i.e. with larger periods and relatively small amplitude and offset.
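For illustration, a minimal sketch of the integral (least squares) identification step for an assumed first-order yaw-rate model r' = a·r + b·u. The model structure, parameter values, and simulated signals are assumptions, not LAURS data.

```python
import numpy as np

rng = np.random.default_rng(1)

# Assumed 1-DOF yaw-rate model:  r_dot = a*r + b*u   (a = -d/I, b = k/I).
a_true, b_true = -0.8, 0.5                  # illustrative "unknown" parameters
dt = 0.05
t = np.arange(0, 60, dt)
u = 2.0 * np.sin(2 * np.pi * t / 20)        # slow, small sinusoidal input

# Simulate the plant (forward Euler) and add compass-like measurement noise.
r = np.zeros_like(t)
for i in range(1, t.size):
    r[i] = r[i - 1] + dt * (a_true * r[i - 1] + b_true * u[i - 1])
r_meas = r + rng.normal(0, 0.01, r.size)

# Integral method: r(t) - r(0) = a * int(r) + b * int(u); solve by least
# squares. Integrating avoids differentiating the noisy yaw-rate signal.
int_r = np.cumsum(r_meas) * dt
int_u = np.cumsum(u) * dt
Phi = np.column_stack([int_r, int_u])
y = r_meas - r_meas[0]
(a_hat, b_hat), *_ = np.linalg.lstsq(Phi, y, rcond=None)
print(f"a: true {a_true}, estimated {a_hat:.3f}")
print(f"b: true {b_true}, estimated {b_hat:.3f}")
```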
Abstract:
Support Vector Machines (SVMs) have achieved very good performance on different learning problems. However, the success of SVMs depends on an adequate choice of the values of a number of parameters (e.g., the kernel and regularization parameters). In the current work, we propose combining meta-learning and search algorithms to deal with the problem of SVM parameter selection. In this combination, given a new problem to be solved, meta-learning is employed to recommend SVM parameter values based on parameter configurations that have been successfully adopted in previous similar problems. The parameter values returned by meta-learning are then used as initial search points by a search technique, which further explores the parameter space. In this proposal, we envision that the initial solutions provided by meta-learning are located in good regions of the search space (i.e., closer to optimum solutions), so the search algorithm needs to evaluate a smaller number of candidate solutions to find an adequate one. In this work, we investigate the combination of meta-learning with two search algorithms: Particle Swarm Optimization and Tabu Search. The implemented hybrid algorithms were used to select the values of two SVM parameters in the regression domain. These combinations were compared with the use of the search algorithms without meta-learning. The experimental results on a set of 40 regression problems showed that, on average, the proposed hybrid methods obtained lower error rates than their components applied in isolation.
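For illustration, a minimal sketch of the hybrid idea: configurations playing the role of meta-learned recommendations seed a plain PSO that tunes the SVR parameters C and gamma in log2 space. The seed values, swarm settings, and dataset are assumptions, not the paper's experimental setup.

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVR

rng = np.random.default_rng(2)
X, y = make_regression(n_samples=200, n_features=8, noise=10.0, random_state=2)

def fitness(params):
    """Negative MSE of an SVR with the given (log2 C, log2 gamma)."""
    C, gamma = 2.0 ** params[0], 2.0 ** params[1]
    return cross_val_score(SVR(C=C, gamma=gamma), X, y,
                           scoring="neg_mean_squared_error", cv=3).mean()

# Seeds playing the role of meta-learned recommendations (assumed values;
# in the paper's setting these would come from similar past problems).
meta_seeds = np.array([[3.0, -3.0], [5.0, -5.0], [1.0, -1.0]])

# Plain PSO over (log2 C, log2 gamma), initialized around the seeds.
n_particles, n_iters = 9, 15
pos = np.vstack([meta_seeds + rng.normal(0, 0.5, meta_seeds.shape)
                 for _ in range(n_particles // len(meta_seeds))])
vel = np.zeros_like(pos)
pbest, pbest_val = pos.copy(), np.array([fitness(p) for p in pos])
gbest = pbest[pbest_val.argmax()].copy()

for _ in range(n_iters):
    r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, -10, 10)
    vals = np.array([fitness(p) for p in pos])
    improved = vals > pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
    gbest = pbest[pbest_val.argmax()].copy()

print(f"best log2(C)={gbest[0]:.2f}, log2(gamma)={gbest[1]:.2f}")
```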
Abstract:
This paper discusses the power allocation problem with a fixed rate constraint in multi-carrier code division multiple access (MC-CDMA) networks, which has been solved from a game-theoretic perspective through an iterative water-filling algorithm (IWFA). The problem is analyzed under various interference density configurations, and its reliability is studied in terms of solution existence and uniqueness. Moreover, numerical results reveal a shortcoming of the approach, so a new method combining swarm intelligence and IWFA is proposed to make game-theoretic approaches practicable in realistic MC-CDMA system scenarios. The contribution of this paper is twofold: (i) a complete analysis of the existence and uniqueness of the game solution, from simple to more realistic and complex interference scenarios; (ii) a hybrid power allocation optimization method combining swarm intelligence, game theory and IWFA. To corroborate the effectiveness of the proposed method, an outage probability analysis in realistic interference scenarios and a complexity comparison with the classical IWFA are presented.
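For illustration, a minimal sketch of classical iterative water-filling on a toy multi-user, multi-carrier Gaussian interference channel. The gains, power budgets, and convergence test are assumptions; the paper's MC-CDMA formulation and swarm-intelligence hybrid are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(3)
n_users, n_carriers = 3, 16
G = rng.uniform(0.1, 1.0, (n_users, n_users, n_carriers))  # G[k, j, n]: gain j -> k
noise = 0.05
p_budget = np.ones(n_users)                                 # per-user power budget
P = np.zeros((n_users, n_carriers))

def waterfill(inv_snr, budget):
    """Single-user water-filling: maximize sum log(1 + p/inv_snr), sum p = budget."""
    # Bisect on the water level mu, with p_n = max(mu - inv_snr_n, 0).
    lo, hi = 0.0, budget + inv_snr.max()
    for _ in range(60):
        mu = 0.5 * (lo + hi)
        if np.maximum(mu - inv_snr, 0.0).sum() > budget:
            hi = mu
        else:
            lo = mu
    return np.maximum(0.5 * (lo + hi) - inv_snr, 0.0)

# IWFA: each user water-fills against the other users' current interference.
for it in range(50):
    P_old = P.copy()
    for k in range(n_users):
        interf = noise + sum(G[k, j] * P[j] for j in range(n_users) if j != k)
        P[k] = waterfill(interf / G[k, k], p_budget[k])
    if np.abs(P - P_old).max() < 1e-6:                      # fixed point reached
        break
print(f"converged after {it + 1} sweeps")
```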
Abstract:
Competitive learning is an important machine learning approach that is widely employed in artificial neural networks. In this paper, we present a rigorous definition of a new type of competitive learning scheme realized on large-scale networks. The model consists of several particles walking within the network and competing with each other to occupy as many nodes as possible, while attempting to reject intruder particles. The particles' walking rule is a stochastic combination of random and preferential movements. The model has been applied to community detection and data clustering problems. Computer simulations reveal that the proposed technique achieves high precision in community and cluster detection, with low computational complexity. Moreover, we have developed an efficient method for estimating the most likely number of clusters, using an evaluator index that monitors the information generated by the competition process itself. We hope this paper will provide an alternative way to study competitive learning.
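For illustration, a simplified sketch of the particle-competition mechanism: particles walk over a graph mixing random and preferential (domination-driven) moves and reinforce their domination of visited nodes; each node is finally assigned to the particle that dominates it. The toy graph, mixing probability, and update rule are assumptions, a reduction of the model described above.

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy graph: two 8-node cliques joined by one bridge edge (adjacency matrix).
A = np.zeros((16, 16), dtype=int)
A[:8, :8] = 1 - np.eye(8, dtype=int)
A[8:, 8:] = 1 - np.eye(8, dtype=int)
A[7, 8] = A[8, 7] = 1

n_particles, eps = 2, 0.4                 # eps: probability of a random move
dom = np.ones((16, n_particles))          # domination level of each particle per node
node = rng.integers(0, 16, n_particles)   # current position of each particle

for _ in range(5000):
    for p in range(n_particles):
        nbrs = np.flatnonzero(A[node[p]])
        if rng.random() < eps:
            nxt = rng.choice(nbrs)                     # random move: exploration
        else:
            w = dom[nbrs, p] / dom[nbrs].sum(axis=1)   # preferential move:
            nxt = rng.choice(nbrs, p=w / w.sum())      # favor owned territory
        dom[nxt, p] += 1                               # reinforce domination
        node[p] = nxt

labels = dom.argmax(axis=1)        # each node goes to its dominating particle
print(labels)                      # expected: the two cliques get distinct labels
```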