849 results for Applied artificial intelligence
Abstract:
Navigation is a broad topic that has received considerable attention from the mobile robotics community over the years. In order to execute autonomous driving in outdoor urban environments, it is necessary to identify the parts of the terrain that can be traversed and the parts that should be avoided. This paper describes an analysis of terrain identification based on different visual information, using an MLP artificial neural network and combining the responses of many classifiers. Experimental tests using a vehicle and a video camera were conducted in real scenarios to evaluate the proposed approach.
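A minimal sketch of the pipeline this abstract describes: patch-wise terrain classification with one MLP per visual cue, with the classifier responses combined by averaging class probabilities. The feature extractors, network size and binary traversable/obstacle labelling are illustrative assumptions, not the paper's exact design.

    # Patch-wise terrain classification: one MLP per visual cue, combined
    # by averaging predicted class probabilities (soft voting).
    import numpy as np
    from sklearn.neural_network import MLPClassifier

    def rgb_features(patch):
        # Mean and standard deviation of each colour channel (6-dim).
        return np.concatenate([patch.mean(axis=(0, 1)), patch.std(axis=(0, 1))])

    def texture_features(patch):
        # Crude texture cue: histogram of horizontal gradient magnitudes.
        gray = patch.mean(axis=2)
        grad = np.abs(np.diff(gray, axis=1))
        hist, _ = np.histogram(grad, bins=8, range=(0, 255), density=True)
        return hist

    def train_committee(patches, labels):
        # labels: 1 = traversable terrain, 0 = terrain to avoid.
        committee = []
        for extract in (rgb_features, texture_features):
            X = np.array([extract(p) for p in patches])
            clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=500)
            clf.fit(X, labels)
            committee.append((extract, clf))
        return committee

    def classify(committee, patch):
        # Combine the responses of the classifiers by averaging probabilities.
        probs = [clf.predict_proba(extract(patch).reshape(1, -1))[0]
                 for extract, clf in committee]
        return int(np.mean(probs, axis=0).argmax())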
Abstract:
Chaotic synchronization has been discovered to be an important property of neural activities, which in turn has encouraged many researchers to develop chaotic neural networks for scene and data analysis. In this paper, we study the synchronization role of coupled chaotic oscillators in networks of general topology. Specifically, a rigorous proof is presented to show that a large number of oscillators with arbitrary geometrical connections can be synchronized by providing a sufficiently strong coupling strength. Moreover, the results presented in this paper are not only valid for a wide class of chaotic oscillators but also cover the parameter mismatch case. Finally, we show how the obtained result can be applied to construct an oscillatory network for scene segmentation.
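A generic form of such a coupled network, written as a hedged reconstruction (the paper's exact model and synchronization bounds may differ):

    \[
      \dot{x}_i = f(x_i, \mu_i) + c \sum_{j=1}^{N} a_{ij}\,\Gamma\,(x_j - x_i),
      \qquad i = 1, \dots, N,
    \]

where $f$ generates the uncoupled chaotic dynamics, $\mu_i$ admits parameter mismatch between oscillators, $a_{ij} \in \{0, 1\}$ encodes the arbitrary connection topology, $\Gamma$ selects which state components are coupled, and $c > 0$ is the coupling strength. The result then states that for all sufficiently large $c$ the oscillators synchronize (exactly, or up to a bounded neighbourhood under mismatch).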
Abstract:
Synchronization and chaos play important roles in neural activities and have been applied in oscillatory correlation modeling for scene and data analysis. Although this is an extensively studied topic, there are still few results regarding synchrony in locally coupled systems. In this paper, we give a rigorous proof to show that a large number of coupled chaotic oscillators with parameter mismatch in a 2D lattice can be synchronized by providing a sufficiently large coupling strength. We demonstrate how the obtained result can be applied to construct an oscillatory network for scene segmentation.
Abstract:
The issue of how children learn the meaning of words is fundamental to developmental psychology. Recent attempts to develop or evolve efficient communication protocols among interacting robots or virtual agents have brought that issue to a central place in more applied research fields as well, such as computational linguistics and neural networks. An attractive approach to learning an object-word mapping is so-called cross-situational learning. This learning scenario is based on the intuitive notion that a learner can determine the meaning of a word by finding something in common across all observed uses of that word. Here we show how the deterministic Neural Modeling Fields (NMF) categorization mechanism can be used by the learner as an efficient algorithm to infer the correct object-word mapping. To achieve that, we first reduce the original online learning problem to a batch learning problem in which the inputs to the NMF mechanism are all possible object-word associations that could be inferred from the cross-situational learning scenario. Since many of those associations are incorrect, they are treated as clutter or noise and discarded automatically by a clutter detector model included in our NMF implementation. With these two key ingredients - batch learning and clutter detection - the NMF mechanism was able to infer the correct object-word mapping perfectly.
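To make the batch reduction concrete, the toy sketch below performs cross-situational learning by intersection: every object-word pair that co-occurs in some episode is a candidate association, and a word's meaning is the object common to all episodes in which that word is heard. The intersection step plays the role of the NMF clutter detector here; the NMF dynamics themselves are not reproduced, and the example data is invented.

    # Cross-situational learning reduced to a batch problem: candidate
    # object-word associations are pruned to those consistent with every
    # observed use of the word.
    from collections import defaultdict

    def cross_situational(episodes):
        # episodes: list of (objects_in_scene, words_heard) pairs.
        contexts = defaultdict(list)
        for objects, words in episodes:
            for w in words:
                contexts[w].append(set(objects))
        mapping = {}
        for w, scenes in contexts.items():
            common = set.intersection(*scenes)  # objects in every use of w
            if len(common) == 1:                # unambiguous: mapping learned
                mapping[w] = common.pop()
        return mapping

    episodes = [({"ball", "dog"}, {"ball"}),
                ({"ball", "cat"}, {"ball", "cat"}),
                ({"cat", "dog"}, {"cat"})]
    print(cross_situational(episodes))  # {'ball': 'ball', 'cat': 'cat'}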
Abstract:
Objective: To develop a method for objective quantification of PD motor symptoms related to Off episodes and peak-dose dyskinesias, using spiral data gathered with a touch-screen telemetry device. The aim was to objectively characterize the predominant motor phenotypes (bradykinesia and dyskinesia), to help automate the visual interpretation of movement anomalies in spirals as rated by movement disorder specialists.
Background: A retrospective analysis was conducted on recordings from 65 patients with advanced idiopathic PD from nine different clinics in Sweden, recruited from January 2006 until August 2010. In addition to the patient group, 10 healthy elderly subjects were recruited. Upper-limb movement data were collected with a touch-screen telemetry device in the subjects' home environments. Measurements with the device were performed four times per day during week-long test periods. On each test occasion, the subjects were asked to trace pre-drawn Archimedean spirals, shown on the screen of the device, using the dominant hand. The spiral test was repeated three times per test occasion, and the subjects were instructed to complete each spiral within 10 seconds. The device had a sampling rate of 10 Hz and measured both the position and the time stamps (in milliseconds) of the pen tip.
Methods: Four independent raters (FB, DH, AJ and DN) used a web interface that animated the spiral drawings, allowing them to observe different kinematic features during the drawing process and to rate task performance. Initially, a number of kinematic features were assessed, including ‘impairment’, ‘speed’, ‘irregularity’ and ‘hesitation’, followed by marking the predominant motor phenotype on a 3-category scale: tremor, bradykinesia and/or choreatic dyskinesia. There were only 2 test occasions for which all four raters either classified the phenotype as tremor or could not identify it; the two main motor phenotype categories were therefore bradykinesia and dyskinesia. ‘Impairment’ was rated on a scale from 0 (no impairment) to 10 (extremely severe), whereas ‘speed’, ‘irregularity’ and ‘hesitation’ were rated on a scale from 0 (normal) to 4 (extremely severe). The proposed data-driven method consisted of the following steps. Initially, 28 spatiotemporal features were extracted from the time-series signals before being presented to a multilayer perceptron (MLP) classifier. The features were based on different kinematic quantities of the spirals, including radius, angle, speed and velocity, with the aim of measuring the severity of involuntary symptoms and discriminating between PD-specific symptoms (bradykinesia) and treatment-induced symptoms (dyskinesia). A principal component analysis was applied to the features to reduce their dimensionality, and the 4 relevant principal components (PCs) retained were used as inputs to the MLP classifier. Finally, the MLP classifier mapped these components to the corresponding visually assessed motor phenotype scores, automating the scoring of bradykinesia and dyskinesia in PD patients while they draw spirals on the touch-screen device. For motor phenotype classification (bradykinesia vs. dyskinesia), stratified 10-fold cross validation was employed.
Results: There was good agreement between the four raters when rating the individual kinematic features, with intra-class correlation coefficients (ICCs) of 0.88 for ‘impairment’, 0.74 for ‘speed’ and 0.70 for ‘irregularity’, and moderate agreement when rating ‘hesitation’, with an ICC of 0.49. When assessing the two main motor phenotype categories (bradykinesia or dyskinesia) in animated spirals, the agreement between the four raters ranged from fair to moderate. There were good correlations between the mean ratings of the four raters on the individual kinematic features and the computed scores. The MLP classifier classified the motor phenotype (bradykinesia or dyskinesia) with an accuracy of 85% in relation to the visual classifications of the four movement disorder specialists. The test-retest reliability of the four PCs across the three spiral test trials was good, with Cronbach’s alpha coefficients of 0.80, 0.82, 0.54 and 0.49, respectively, indicating that the computed scores are stable and consistent over time. Significant differences were found between the two groups (patients and healthy elderly subjects) in all the PCs except PC3.
Conclusions: The proposed method automatically assessed the severity of unwanted symptoms and discriminated reasonably well between PD-specific and treatment-induced motor symptoms, in relation to visual assessments by movement disorder specialists. The objective assessments could provide a time-effect summary score useful for improving decision-making during symptom evaluation of individualized treatment, when the goal is to maximize functional On time for patients while minimizing their Off episodes and troublesome dyskinesias.
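A minimal sketch of the classification pipeline as described: 28 spatiotemporal features, standardization, PCA down to 4 components, and an MLP evaluated with stratified 10-fold cross validation. The hidden-layer size and the synthetic placeholder data are assumptions; extraction of the 28 features from the raw 10 Hz pen traces is omitted.

    # Feature matrix -> PCA(4) -> MLP, scored by stratified 10-fold CV.
    import numpy as np
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.decomposition import PCA
    from sklearn.neural_network import MLPClassifier
    from sklearn.model_selection import StratifiedKFold, cross_val_score

    X = np.random.rand(200, 28)        # placeholder for 28 spiral features
    y = np.random.randint(0, 2, 200)   # 0 = bradykinesia, 1 = dyskinesia

    pipeline = make_pipeline(
        StandardScaler(),
        PCA(n_components=4),
        MLPClassifier(hidden_layer_sizes=(10,), max_iter=1000),
    )
    scores = cross_val_score(pipeline, X, y, cv=StratifiedKFold(n_splits=10))
    print("mean accuracy: %.2f" % scores.mean())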
Abstract:
This paper reports the findings of using a multi-agent-based simulation model to evaluate sawmill yard operations within a large privately owned sawmill in Sweden, Bergkvist-Insjön AB in the current case. Conventional working routines within the sawmill yard threaten its overall efficiency and thereby limit the sawmill's profit margin. Because deploying dynamic work routines within the sawmill yard is not readily feasible in real time, a discrete-event simulation model has been investigated for its ability to report the optimal work order depending on the situation. Preliminary investigations indicate that the results achieved by the simulation model are promising. It is expected that these results will support Bergkvist-Insjön AB in making optimal decisions by deploying an efficient work order in the sawmill yard.
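In the spirit of the discrete-event model described above, here is a toy sketch of a sawmill-yard queue (log trucks competing for a single unloading crane) using the SimPy library. All resources, timings and distributions are invented for illustration; the actual Bergkvist-Insjön model is far more detailed.

    # Toy discrete-event simulation: trucks arrive and queue for one crane.
    import random
    import simpy

    def truck(env, name, crane):
        arrival = env.now
        with crane.request() as req:
            yield req                                 # wait in the queue
            yield env.timeout(random.uniform(5, 15))  # unloading time (min)
        print(f"{name} served after {env.now - arrival:.1f} min")

    def arrivals(env, crane):
        for i in range(10):
            env.process(truck(env, f"truck-{i}", crane))
            yield env.timeout(random.expovariate(1 / 8))  # ~8 min headway

    env = simpy.Environment()
    crane = simpy.Resource(env, capacity=1)
    env.process(arrivals(env, crane))
    env.run()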
Abstract:
The requirement for Grid middleware to be largely transparent to individual users and at the same time act in accordance with their personal needs is a difficult challenge. In e-science scenarios, users cannot be repeatedly interrogated for each operational decision made when enacting experiments on the Grid. It is thus important to specify and enforce policies that enable the environment to be configured to take user preferences into account automatically. In particular, we need to consider the context in which these policies are applied, because decisions are based not only on the rules of the policy but also on the current state of the system. Consideration of context is explicitly addressed, in the agent perspective, when deciding how to balance the achievement of goals and reaction to the environment. One commonly applied abstraction that balances reaction to multiple events with context-based reasoning in the way suggested by our requirements is the belief-desire-intention (BDI) architecture, which has proven successful in many applications. In this paper, we argue that BDI is an appropriate model for policy enforcement, and describe the application of BDI to policy enforcement in personalising Grid service discovery. We show how this has been implemented in the myGrid registry to provide bioinformaticians with control over the services returned to them by the service discovery process.
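As a hedged illustration of the BDI framing (not the myGrid registry's actual interfaces), the sketch below treats beliefs as the current context, policies as desires, and the policies adopted in that context as intentions that filter discovery results. All names are hypothetical.

    # Context-sensitive policy enforcement in a BDI style.
    from dataclasses import dataclass
    from typing import Callable

    @dataclass
    class Policy:
        applies: Callable[[dict], bool]        # guard over current beliefs
        enforce: Callable[[list, dict], list]  # result filter / re-ranker

    def discover(services, beliefs, policies):
        # Deliberation: adopt only the policies whose guard holds now.
        intentions = [p for p in policies if p.applies(beliefs)]
        for p in intentions:                   # act on each adopted policy
            services = p.enforce(services, beliefs)
        return services

    prefer_local = Policy(
        applies=lambda b: b.get("low_bandwidth", False),
        enforce=lambda svc, b: [s for s in svc if s["site"] == b["home_site"]],
    )
    services = [{"name": "blast", "site": "manchester"},
                {"name": "blast", "site": "ebi"}]
    print(discover(services,
                   {"low_bandwidth": True, "home_site": "manchester"},
                   [prefer_local]))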
Abstract:
A crucial aspect of evidential reasoning in crime investigation involves comparing the support that evidence provides for alternative hypotheses. Recent work in forensic statistics has shown how Bayesian Networks (BNs) can be employed for this purpose. However, the specification of BNs requires conditional probability tables describing the uncertain processes under evaluation. When these processes are poorly understood, it is necessary to rely on subjective probabilities provided by experts, and accurate probabilities of this type are normally hard to acquire. Recent work in qualitative reasoning has developed methods to perform probabilistic reasoning using coarser representations; however, such approaches are too imprecise to compare the likelihoods of alternative hypotheses. This paper examines this shortcoming of the qualitative approaches when applied to the aforementioned problem, and identifies and integrates techniques to refine them.
Abstract:
First-order temporal logic is a concise and powerful notation, with many potential applications in both Computer Science and Artificial Intelligence. While the full logic is highly complex, recent work on monodic first-order temporal logics has identified important enumerable and even decidable fragments. Although a complete and correct resolution-style calculus has already been suggested for this specific fragment, this calculus involves constructions too complex to be of practical value. In this paper, we develop a machine-oriented clausal resolution method which features radically simplified proof search. We first define a normal form for monodic formulae and then introduce a novel resolution calculus that can be applied to formulae in this normal form. By careful encoding, parts of the calculus can be implemented using classical first-order resolution and can, thus, be efficiently implemented. We prove correctness and completeness results for the calculus and illustrate it on a comprehensive example. An implementation of the method is briefly discussed.
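For orientation, the normal form referred to here has roughly the following shape (a hedged reconstruction of the divided separated normal form used in monodic temporal resolution; consult the paper for the precise definition). Monodicity means each temporal subformula carries at most one free variable, which is why the step and eventuality clauses below mention only $x$:

    \[
    \begin{array}{ll}
      \text{initial:}     & I \quad \text{(first-order, constrains the first moment)} \\
      \text{universal:}   & \Box\,\forall x\, U(x) \quad \text{(holds at every moment)} \\
      \text{step:}        & \Box\,\forall x\,\bigl(P(x) \Rightarrow \bigcirc Q(x)\bigr) \\
      \text{eventuality:} & \Box\,\forall x\,\bigl(P(x) \Rightarrow \Diamond L(x)\bigr)
    \end{array}
    \]

Under this reading, much of the work is done by classical first-order resolution on the non-temporal parts, while a dedicated temporal resolution rule discharges eventualities against loops built from step clauses.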
Abstract:
First-order temporal logic is a concise and powerful notation, with many potential applications in both Computer Science and Artificial Intelligence. While the full logic is highly complex, recent work on monodic first-order temporal logics has identified important enumerable and even decidable fragments including the guarded fragment with equality. In this paper, we specialise the monodic resolution method to the guarded monodic fragment with equality and first-order temporal logic over expanding domains. We introduce novel resolution calculi that can be applied to formulae in the normal form associated with the clausal resolution method, and state correctness and completeness results.
Abstract:
First-order temporal logic is a concise and powerful notation, with many potential applications in both Computer Science and Artificial Intelligence. While the full logic is highly complex, recent work on monodic first-order temporal logics has identified important enumerable and even decidable fragments. In this paper, we develop a clausal resolution method for the monodic fragment of first-order temporal logic over expanding domains. We first define a normal form for monodic formulae and then introduce novel resolution calculi that can be applied to formulae in this normal form. We state correctness and completeness results for the method. We illustrate the method on a comprehensive example. The method is based on classical first-order resolution and can, thus, be efficiently implemented.
Abstract:
In this paper, we show how the clausal temporal resolution technique developed for temporal logic provides an effective method for searching for invariants, and so is suitable for mechanising a wide class of temporal problems. We demonstrate that this scheme of searching for invariants can also be applied to a class of multi-predicate induction problems represented by mutually recursive definitions. Completeness of the approach, examples of the application of the scheme, and an overview of the implementation are described.
Abstract:
The rational decision model has been a constant object of study in academia in several countries, contributing to the evolution of the rational agent as an important decision maker. The evolution of these studies has raised questions about how much rationality we actually have as decision makers, giving rise to several new theories that investigate these limits on deciding. Especially as applied to economic theory, fields of study such as artificial intelligence, mental accounting, prospect theory and game theory, among others, stand out in this landscape of behavioral finance research. Accounting, as a tool supporting financial decisions, occupies a prominent position. Its scope of work includes norms (what ought to be done) that regulate its practice; in some cases this regulation is not precise in its specifications, leaving gaps that lead professionals into errors of interpretation. Accounting imprecision can bias its classifications. Professionals faced with this legacy may resort to heuristics to interpret, as well as possible, the events recorded in the accounts. This work sets out to analyze some points we consider important where accounting imprecision is present, answering the following questions: does the imprecision of accounting norms bias decisions? Do professionals faced with accounting imprecision use heuristics to decide? What are the most common errors of interpretation under accounting uncertainty? So that the subject could be approached impartially, accurately capturing the experience of professionals working in accounting, a questionnaire was designed around a plausible situation that places the respondent in a decision-making environment involving accounting practice. The questionnaire was divided into two main parts, aimed at identifying, through the answers, whether accounting imprecision exists (in light of the principle of prudence) and which heuristics the respondents use most frequently; it was administered to professionals working in accounting with professional experience in preparing, auditing or analyzing financial statements. The answers showed that, according to the professionals, the same data admit different interpretations, characterizing a gray zone in the sense of Penno (2008), that is, interpretations that may be more aggressive or more conservative depending on the professional's reading. As for the simplifying strategies, or heuristics, that introduce some kind of bias into the decision process, several were identified: presumed associations, misinterpretation of chance, regression to the mean, and disjunctive and conjunctive events, reinforcing the research with evidence that the respondents may be making biased decisions. However, the study did not identify decision making biased by retrievability or by insensitivity to sample size. We conclude that the respondents interpret the same matter in different ways, even in light of the accounting principle of prudence, and that they rely on simplifying strategies to resolve everyday matters.
Abstract:
This work mines the information collected in the admission exam (vestibular) process between 2009 and 2012 for the undergraduate business administration program at FGV-EAESP, in order to estimate classifiers capable of computing the probability that a new student will perform well. The KDD (Knowledge Discovery in Databases) process developed by Fayyad et al. (1996a) is the basis of the adopted methodology, and the classifiers are estimated using two mathematical tools. The first is logistic regression, widely used by financial institutions to assess whether a client will be able to honor their payments; the second is the Bayesian network, which comes from the field of artificial intelligence. This study shows that the two models have the same discriminatory power, producing similar results. Moreover, the pieces of information that influence the probability of a student performing well are the student's age in the year of admission, the number of times the student took the FGV/EAESP admission exam before being approved, the region of Brazil the student comes from, and the scores on the mathematics (phase 01 and phase 02), English, humanities and essay exams. Apparently, the parents' level of education and the student's degree of commitment to studying at FGV/EAESP do not influence this probability.
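As a rough illustration of the two tools compared in this study, the sketch below fits a logistic regression and a naive Bayes classifier (the simplest special case of a Bayesian network; the thesis's actual network structure is not reproduced) on synthetic stand-ins for the admission-exam features named in the abstract.

    # Logistic regression vs. a naive Bayesian classifier on synthetic data.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.naive_bayes import GaussianNB
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)
    n = 500
    X = np.column_stack([
        rng.integers(17, 25, n),   # age in the year of admission
        rng.integers(1, 4, n),     # attempts before approval
        rng.integers(0, 5, n),     # region of Brazil (coded)
        rng.normal(6, 2, n),       # mathematics score (phase 01)
        rng.normal(6, 2, n),       # essay score
    ])
    y = rng.integers(0, 2, n)      # 1 = good performance

    for model in (LogisticRegression(max_iter=1000), GaussianNB()):
        acc = cross_val_score(model, X, y, cv=10).mean()
        print(type(model).__name__, "accuracy: %.2f" % acc)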
Abstract:
Most face recognition approaches require prior training, in which a given distribution of faces is assumed in order to predict the identity of test faces. Such approaches may have difficulty identifying faces belonging to distributions different from the one provided during training. A face recognition technique that performs well regardless of training is therefore interesting to consider as a basis for more sophisticated methods. In this work, the Census Transform is applied to describe the faces. Based on a scanning window that extracts local histograms of Census features, we present a method that directly matches face samples. With this simple technique, 97.2% of the faces in the FERET fa/fb test were correctly recognized. Although this is an easy test set, we have found no other approaches in the literature that achieve such performance with straight comparisons of faces. We also show room for further improvement: among other techniques, we demonstrate how using SVMs over the Census histogram representation can increase recognition performance.
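A minimal sketch of the training-free matching scheme described: an 8-bit Census code per pixel (each bit comparing a 3x3 neighbour with the centre pixel), local histograms of those codes concatenated into a descriptor, and nearest-neighbour matching. The grid size and L1 distance are illustrative assumptions, not the paper's exact settings.

    # Census Transform descriptor and direct nearest-neighbour matching.
    import numpy as np

    def census_transform(img):
        # img: 2-D grayscale array; returns 8-bit codes (borders cropped).
        c = img[1:-1, 1:-1]
        codes = np.zeros_like(c, dtype=np.uint8)
        shifts = [(-1, -1), (-1, 0), (-1, 1), (0, -1),
                  (0, 1), (1, -1), (1, 0), (1, 1)]
        for bit, (dy, dx) in enumerate(shifts):
            neigh = img[1 + dy:img.shape[0] - 1 + dy,
                        1 + dx:img.shape[1] - 1 + dx]
            codes |= (neigh >= c).astype(np.uint8) << bit
        return codes

    def descriptor(img, grid=6):
        # Concatenate census-code histograms over a grid of local windows.
        codes = census_transform(img)
        h, w = codes.shape
        hists = []
        for i in range(grid):
            for j in range(grid):
                block = codes[i * h // grid:(i + 1) * h // grid,
                              j * w // grid:(j + 1) * w // grid]
                hists.append(np.bincount(block.ravel(), minlength=256))
        return np.concatenate(hists).astype(float)

    def match(probe, gallery):
        # Index of the gallery face with the closest descriptor (L1).
        d = descriptor(probe)
        dists = [np.abs(d - descriptor(g)).sum() for g in gallery]
        return int(np.argmin(dists))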