876 results for Artificial Intelligence, Constraint Programming, set variables, representation
Abstract:
183 p.
Abstract:
150 p.
Abstract:
Computational tools increasingly support teaching and learning in many areas, expanding the instructor's possibilities for presenting content and interacting with students. Among these tools are simulations based on multiagent systems. In this context, this work presents a simulation environment of the population growth of a beehive for teaching Biology. The system's variables can be changed in order to analyze the different results obtained: aspects such as the duration and timing of the blooming of the plantations, known as flower fields, can be manipulated by the student. The multiagent approach from Distributed Artificial Intelligence was chosen so that control of the application's activities could be automated. Virtual Reality was used to add important aspects of the process that cannot be visualized through the mathematical simulation alone. A survey of the use of technology in education, in particular of computing, is discussed in the work. Aspects of the application in Biology teaching are presented, along with initial results of its use.
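To make the kind of simulation this abstract describes concrete, here is a minimal sketch of a discrete-time bee colony model in which a student-adjustable bloom window for the flower fields modulates food income. It is an illustration under assumed parameters, not the authors' multiagent system, and all constants are invented.

```python
# Minimal sketch (not the authors' system): a discrete-time toy model of
# beehive population growth, where the student-adjustable "flower field"
# bloom window modulates food income. All parameters are illustrative.

def simulate_hive(days=180, bloom_start=30, bloom_length=60,
                  eggs_per_day=1200, worker_lifespan=35):
    workers, food = 10_000, 500.0
    history = []
    for day in range(days):
        blooming = bloom_start <= day < bloom_start + bloom_length
        # foragers collect much more food while the fields are in bloom
        food += workers * (0.02 if blooming else 0.002)
        food -= workers * 0.01                      # daily consumption
        food = max(food, 0.0)
        # brood survives only if there is food to rear it
        hatched = eggs_per_day if food > 0 else 0
        died = workers / worker_lifespan            # mean-lifetime mortality
        workers = max(workers + hatched - died, 0)
        history.append((day, int(workers), round(food, 1)))
    return history

for day, workers, food in simulate_hive()[::30]:
    print(f"day {day:3d}: {workers:6d} workers, food {food}")
```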
Abstract:
In spite of over two decades of intense research, illumination and pose invariance remain prohibitively challenging aspects of face recognition for most practical applications. The objective of this work is to recognize faces using video sequences both for training and recognition input, in a realistic, unconstrained setup in which lighting, pose and user motion pattern have a wide variability and face images are of low resolution. In particular there are three areas of novelty: (i) we show how a photometric model of image formation can be combined with a statistical model of generic face appearance variation, learnt offline, to generalize in the presence of extreme illumination changes; (ii) we use the smoothness of geodesically local appearance manifold structure and a robust same-identity likelihood to achieve invariance to unseen head poses; and (iii) we introduce an accurate video sequence "reillumination" algorithm to achieve robustness to face motion patterns in video. We describe a fully automatic recognition system based on the proposed method and an extensive evaluation on 171 individuals and over 1300 video sequences with extreme illumination, pose and head motion variation. On this challenging data set our system consistently demonstrated a nearly perfect recognition rate (over 99.7%), significantly outperforming state-of-the-art commercial software and methods from the literature. © Springer-Verlag Berlin Heidelberg 2006.
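As a rough illustration of set-based video face recognition of the kind evaluated here, the following sketch scores two videos, each reduced to a set of per-frame feature vectors, by a trimmed mean of nearest-neighbour distances. This is a generic stand-in, not the paper's reillumination algorithm or same-identity likelihood, and the feature extraction step is assumed to have been done elsewhere.

```python
# Minimal sketch, not the paper's algorithm: a robust set-to-set score for
# video-based face recognition, where each video is a set of per-frame
# feature vectors and similarity is the trimmed mean of nearest-neighbour
# distances between the two sets.
import numpy as np

def set_distance(A, B, trim=0.2):
    """A, B: (n_frames, dim) arrays of per-frame face descriptors."""
    # pairwise Euclidean distances between all frames of the two videos
    d = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=2)
    nn = np.concatenate([d.min(axis=1), d.min(axis=0)])  # both directions
    nn.sort()
    keep = nn[: max(1, int(len(nn) * (1 - trim)))]  # drop worst matches
    return keep.mean()                              # robust to outlier frames

rng = np.random.default_rng(0)
score = set_distance(rng.normal(0, 1, (40, 64)), rng.normal(0, 1, (40, 64)))
print(f"example score: {score:.3f}  (lower = more likely same identity)")
```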
Abstract:
Density modeling is notoriously difficult for high dimensional data. One approach to the problem is to search for a lower dimensional manifold which captures the main characteristics of the data. Recently, the Gaussian Process Latent Variable Model (GPLVM) has been used successfully to find low dimensional manifolds in a variety of complex data. The GPLVM consists of a set of points in a low dimensional latent space, and a stochastic map to the observed space. We show how it can be interpreted as a density model in the observed space. However, the GPLVM is not trained as a density model and therefore yields poor density estimates. We propose a new training strategy and obtain improved generalisation performance and better density estimates in comparative evaluations on several benchmark data sets. © 2010 Springer-Verlag.
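The density interpretation described here can be illustrated directly: with trained latent points x_n, a learned map f, and output noise σ², the observed-space density is approximated by an equal-weight Gaussian mixture centred at the mapped latent points. The sketch below shows only that interpretation, not the paper's improved training strategy, and uses random stand-ins for f(x_n).

```python
# Minimal sketch of the interpretation described above (not the paper's
# training procedure): with latent points x_n and a learned map f plus
# output noise sigma^2, the observed-space density can be approximated by
# a mixture of Gaussians centred at the mapped latent points.
import numpy as np

def gplvm_density(y, mapped, sigma2):
    """y: (dim,) query point; mapped: (N, dim) images f(x_n) of the
    latent points; returns p(y) under an equal-weight Gaussian mixture."""
    dim = mapped.shape[1]
    sq = ((mapped - y) ** 2).sum(axis=1)
    comp = np.exp(-0.5 * sq / sigma2) / (2 * np.pi * sigma2) ** (dim / 2)
    return comp.mean()   # 1/N sum_n N(y; f(x_n), sigma2 * I)

mapped = np.random.default_rng(1).normal(size=(100, 2))  # stand-in for f(x_n)
print(gplvm_density(np.zeros(2), mapped, sigma2=0.5))
```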
Abstract:
POMDP algorithms have made significant progress in recent years by allowing practitioners to find good solutions to increasingly large problems. Most approaches (including point-based and policy iteration techniques) operate by refining a lower bound of the optimal value function. Several approaches (e.g., HSVI2, SARSOP, grid-based approaches and online forward search) also refine an upper bound. However, approximating the optimal value function by an upper bound is computationally expensive and therefore tightness is often sacrificed to improve efficiency (e.g., sawtooth approximation). In this paper, we describe a new approach to efficiently compute tighter bounds by i) conducting a prioritized breadth-first search over the reachable beliefs, ii) propagating upper bound improvements with an augmented POMDP and iii) using exact linear programming (instead of the sawtooth approximation) for upper bound interpolation. As a result, we can represent the bounds more compactly and significantly reduce the gap between upper and lower bounds on several benchmark problems. Copyright © 2011, Association for the Advancement of Artificial Intelligence. All rights reserved.
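For context, the sawtooth approximation that this paper replaces with exact linear programming interpolates an upper bound at a belief b from the corner values and a set of (belief, value) points. A minimal sketch of that standard interpolation, with invented numbers, follows.

```python
# Minimal sketch of the sawtooth upper-bound interpolation the paper improves
# on (the paper replaces it with exact linear programming). `corner[s]` holds
# the upper-bound value at the s-th corner belief; `points` is a list of
# (belief, value) pairs; beliefs are numpy probability vectors.
import numpy as np

def sawtooth(b, corner, points):
    best = float(np.dot(b, corner))          # interpolation over corners only
    for b_i, v_i in points:
        support = b_i > 0
        ratio = np.min(b[support] / b_i[support])
        cand = np.dot(b, corner) + ratio * (v_i - np.dot(b_i, corner))
        best = min(best, cand)
    return best

corner = np.array([10.0, 8.0])
points = [(np.array([0.5, 0.5]), 7.0)]
print(sawtooth(np.array([0.3, 0.7]), corner, points))  # 7.4, tighter than 8.6
```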
Abstract:
We propose a new learning method to infer a mid-level feature representation that combines the advantage of semantic attribute representations with the higher expressive power of non-semantic features. The idea lies in augmenting an existing attribute-based representation with additional dimensions for which an autoencoder model is coupled with a large-margin principle. This construction allows a smooth transition between the zero-shot regime with no training examples, the unsupervised regime with training examples but without class labels, and the supervised regime with training examples and with class labels. The resulting optimization problem can be solved efficiently, because several of the necessary steps have closed-form solutions. Through extensive experiments we show that the augmented representation achieves better results in terms of object categorization accuracy than the semantic representation alone. © 2012 Springer-Verlag.
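A stripped-down version of the augmentation idea can be sketched as follows: semantic attribute scores are concatenated with extra non-semantic dimensions from a linear autoencoder, whose optimal encoder has a closed form via the SVD. This is only an illustration of the general construction, not the paper's coupled autoencoder/large-margin objective.

```python
# Minimal sketch of the general idea (not the paper's coupled large-margin
# objective): augment semantic attribute scores with non-semantic dimensions
# from a linear autoencoder, whose optimal encoder has a closed form via SVD.
import numpy as np

def augment(X, attribute_scores, n_extra=10):
    """X: (n, d) features; attribute_scores: (n, a) semantic attribute
    predictions. Returns the (n, a + n_extra) augmented representation."""
    Xc = X - X.mean(axis=0)
    # top principal directions = optimal linear autoencoder (closed form)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    codes = Xc @ Vt[:n_extra].T            # non-semantic extra dimensions
    return np.hstack([attribute_scores, codes])

rng = np.random.default_rng(0)
Z = augment(rng.normal(size=(200, 50)), rng.uniform(size=(200, 8)))
print(Z.shape)   # (200, 18): 8 semantic + 10 learned dimensions
```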
Abstract:
Many visual datasets are traditionally used to analyze the performance of different learning techniques. The evaluation is usually done within each dataset, so it is questionable whether such results are a reliable indicator of true generalization ability. We propose here an algorithm to exploit the existing data resources when learning on a new multiclass problem. Our main idea is to identify an image representation that decomposes orthogonally into two subspaces: a part specific to each dataset, and a part generic to, and therefore shared between, all the considered source sets. This allows us to use the generic representation as unbiased reference knowledge for a novel classification task. By casting the method in the multi-view setting, we also make it possible to use different features for different databases. We call the algorithm MUST, Multitask Unaligned Shared knowledge Transfer. Through extensive experiments on five public datasets, we show that MUST consistently improves the cross-dataset generalization performance. © 2013 Springer-Verlag.
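The orthogonal decomposition at the heart of the method can be illustrated with a simple stand-in: estimate a shared subspace from the pooled source datasets and split each sample into its projection onto that subspace (the generic part) and the orthogonal residual (the dataset-specific part). The sketch below is not the MUST algorithm itself.

```python
# Minimal sketch of the decomposition idea (not the MUST algorithm itself):
# estimate a subspace shared across source datasets from their pooled data,
# then represent each sample by its shared component, with the orthogonal
# residual treated as the dataset-specific part.
import numpy as np

def shared_specific(datasets, k=5):
    pooled = np.vstack(datasets)
    pooled -= pooled.mean(axis=0)
    _, _, Vt = np.linalg.svd(pooled, full_matrices=False)
    S = Vt[:k]                                   # basis of shared subspace
    out = []
    for X in datasets:
        shared = X @ S.T @ S                     # projection onto shared part
        out.append((shared, X - shared))         # (generic, dataset-specific)
    return S, out

rng = np.random.default_rng(0)
S, parts = shared_specific([rng.normal(size=(100, 20)) for _ in range(3)])
print(S.shape, parts[0][0].shape)                # (5, 20) (100, 20)
```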
Abstract:
The need for more flexible, adaptable and customer-oriented warehouse operations has been increasingly identified as an important issue by today's warehouse companies, due to the rapidly changing preferences of the customers that use their services. Motivated by manufacturing and other logistics operations, in this paper we argue for the potential application of product intelligence in warehouse operations as an approach that can help warehouse companies address these issues. We discuss the opportunities of such an approach using a real example of a third-party-logistics warehouse company, and we present the benefits it can bring to their warehouse management systems. © 2013 Springer-Verlag.
Abstract:
This paper presents the beginnings of an automatic statistician, focusing on regression problems. Our system explores an open-ended space of statistical models to discover a good explanation of a data set, and then produces a detailed report with figures and natural-language text. Our approach treats unknown regression functions nonparametrically using Gaussian processes, which has two important consequences. First, Gaussian processes can model functions in terms of high-level properties (e.g. smoothness, trends, periodicity, changepoints). Taken together with the compositional structure of our language of models, this allows us to automatically describe functions in simple terms. Second, the use of flexible nonparametric models and a rich language for composing them in an open-ended manner also results in state-of-the-art extrapolation performance evaluated over 13 real time series data sets from various domains. Copyright © 2014, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.
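The compositional modelling idea can be made concrete with a small sketch: base kernels (e.g. periodic and squared-exponential) are combined by products and sums, and the composite kernel drives Gaussian process extrapolation. The hyperparameters below are fixed by hand rather than searched over, so this illustrates the language of models, not the authors' automatic search.

```python
# Minimal sketch of the compositional-kernel idea (not the authors' search
# procedure): base kernels are combined by sum and product, and the composite
# kernel is used for GP regression. Hyperparameters here are fixed by hand.
import numpy as np

def rbf(a, b, ell=1.0):
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ell**2)

def periodic(a, b, period=1.0, ell=1.0):
    d = np.pi * np.abs(a[:, None] - b[None, :]) / period
    return np.exp(-2 * np.sin(d) ** 2 / ell**2)

def composite(a, b):
    # "periodic structure with smoothly varying amplitude": product of kernels
    return periodic(a, b) * rbf(a, b, ell=5.0)

x = np.linspace(0, 4, 60); y = np.sin(2 * np.pi * x) * np.exp(-0.1 * x)
xs = np.linspace(4, 6, 20)                       # extrapolation region
K = composite(x, x) + 1e-6 * np.eye(len(x))      # jitter for stability
mean = composite(xs, x) @ np.linalg.solve(K, y)  # GP posterior mean
print(mean[:5].round(3))
```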
Abstract:
The discipline of Artificial Intelligence (AI) was born in the summer of 1956 at Dartmouth College in Hanover, New Hampshire. Half of a century has passed, and AI has turned into an important field whose influence on our daily lives can hardly be overestimated. The original view of intelligence as a computer program - a set of algorithms to process symbols - has led to many useful applications now found in internet search engines, voice recognition software, cars, home appliances, and consumer electronics, but it has not yet contributed significantly to our understanding of natural forms of intelligence. Since the 1980s, AI has expanded into a broader study of the interaction between the body, brain, and environment, and how intelligence emerges from such interaction. This advent of embodiment has provided an entirely new way of thinking that goes well beyond artificial intelligence proper, to include the study of intelligent action in agents other than organisms or robots. For example, it supplies powerful metaphors for viewing corporations, groups of agents, and networked embedded devices as intelligent and adaptive systems acting in highly uncertain and unpredictable environments. In addition to giving us a novel outlook on information technology in general, this broader view of AI also offers unexpected perspectives into how to think about ourselves and the world around us. In this chapter, we briefly review the turbulent history of AI research, point to some of its current trends, and to challenges that the AI of the 21st century will have to face. © Springer-Verlag Berlin Heidelberg 2007.
Abstract:
University of Paderborn; Fraunhofer Inst. Exp. Softw. Eng. (IESE); Chinese Academy of Science (ISCAS)
Abstract:
Because of the complexity and particularity of reservoir correlation, and because the results depend heavily on expert experience, calculation methods based on simple mathematical models are of little practical use in the oilfield. This paper proposes a method that combines artificial intelligence and signal processing for reservoir correlation using well-log curves. Following the principle of "control by classification, correlation by depositional cycle", a correlation system was built that first identifies the "standard layer" with an improved grey relational analysis, then interprets faults on the basis of the identified standard layer, and finally identifies the layers within the reservoir. An effective "consistent reservoir character" method was adopted to resolve the difficulty of fault interpretation. On the basis of sedimentary theory and quantitative analysis of the log-curve shapes of different microfacies types, a series of microfacies models was built using eight optimized parameters, five of which describe the microfacies from the log curve; the distribution range of every parameter for each microfacies is given. Because classical mathematics applies only where the rules are precisely known, and is ill-suited to describing geological character, fuzzy comprehensive evaluation was adopted for determining microfacies from log curves; the agreement rate is 85 percent. A software package was developed for Windows that integrates data processing, automatic reservoir-layer correlation, microfacies determination from log curves, characterization of sandstone connectivity, and plotting of geological maps. In application, the system has shown high precision and has become a useful tool in geological studies.
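The fuzzy comprehensive evaluation step described here can be sketched as follows: each log-curve parameter receives a membership degree per microfacies type, and a weighted aggregate selects the best-matching microfacies. The parameter names, ranges, and weights below are invented placeholders, not the paper's calibrated models.

```python
# Minimal sketch of fuzzy comprehensive evaluation as described above (the
# parameter names, ranges, and weights are illustrative, not the paper's):
# each log-curve parameter gets a membership degree per microfacies type,
# and the weighted aggregate picks the best-matching microfacies.
import numpy as np

def triangular(x, lo, peak, hi):
    """Triangular membership function on [lo, hi] peaking at `peak`."""
    if x <= lo or x >= hi:
        return 0.0
    return (x - lo) / (peak - lo) if x <= peak else (hi - x) / (hi - peak)

# hypothetical distribution ranges of two log-curve parameters per facies
MODELS = {
    "channel":    [(0.6, 0.8, 1.0), (20.0, 35.0, 50.0)],
    "crevasse":   [(0.3, 0.5, 0.7), (5.0, 15.0, 25.0)],
    "floodplain": [(0.0, 0.2, 0.4), (0.0, 5.0, 12.0)],
}
WEIGHTS = np.array([0.6, 0.4])            # relative importance of parameters

def classify(params):
    scores = {name: float(WEIGHTS @ [triangular(p, *r)
                                     for p, r in zip(params, ranges)])
              for name, ranges in MODELS.items()}
    return max(scores, key=scores.get), scores

print(classify([0.75, 30.0]))             # -> ('channel', {...})
```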
Abstract:
Computers and Thought are the two categories that together define Artificial Intelligence as a discipline. It is generally accepted that work in Artificial Intelligence over the last thirty years has had a strong influence on aspects of computer architectures. In this paper we also make the converse claim; that the state of computer architecture has been a strong influence on our models of thought. The Von Neumann model of computation has led Artificial Intelligence in particular directions. Intelligence in biological systems is completely different. Recent work in behavior-based Artificial Intelligence has produced new models of intelligence that are much closer in spirit to biological systems. The non-Von Neumann computational models they use share many characteristics with biological computation.
Abstract:
There has been much interest in the area of model-based reasoning within the Artificial Intelligence community, particularly in its application to diagnosis and troubleshooting. The core issue in this thesis, simply put, is: model-based reasoning is fine, but whence the model? Where do the models come from? How do we know we have the right models? What does the right model mean anyway? Our work has three major components. The first component deals with how we determine whether a piece of information is relevant to solving a problem. We have three ways of determining relevance: derivational, situational, and an order-of-magnitude reasoning process. The second component deals with the defining and building of models for solving problems. We identify these models, determine what we need to know about them, and, importantly, determine when they are appropriate. Currently, the system has a collection of four basic models and two hybrid models. This collection of models has been successfully tested on a set of fifteen simple kinematics problems. The third major component of our work deals with how the models are selected.