886 results for Engineering, Industrial|Artificial Intelligence
Abstract:
Password Authentication Protocol (PAP) is widely used in the Wireless Fidelity Point-to-Point Protocol to authenticate an identity and password for a peer. This paper uses a new knowledge-based framework to verify the PAP protocol and a fixed version of it. Flaws are found in both the original and the fixed versions. A new, enhanced protocol is provided and its security is proved. The whole process is implemented in the mechanical reasoning platform Isabelle. It takes only a few seconds to find the flaws in the original and fixed protocols and to verify that the enhanced version of the PAP protocol is secure.
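The abstract does not reproduce the protocol messages, but the weakness that motivates such verification is easy to illustrate. Below is a minimal, hypothetical sketch (all names are illustrative, not the paper's Isabelle model) of why a plain PAP exchange is vulnerable: the peer transmits its identity and password in the clear, so a passive eavesdropper on the link captures the credentials directly.

```python
def pap_authenticate(peer_id: str, password: str, send) -> None:
    """Peer side of a plain PAP exchange: credentials travel unprotected."""
    send({"type": "Authenticate-Request", "peer_id": peer_id, "password": password})

def eavesdropper(message: dict) -> None:
    # A passive attacker on the channel reads the password directly.
    print(f"captured: {message['peer_id']} / {message['password']}")

pap_authenticate("alice", "s3cret", eavesdropper)
```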
Abstract:
In this article, we provide an initial insight into the study of machine intelligence (MI) and what it means for a machine to be intelligent. We discuss how MI has progressed to date and consider future scenarios as realistically and logically as possible. To do this, we unravel one of the major stumbling blocks to the study of MI: the field that has become widely known as "artificial intelligence".
Abstract:
This article looks at the use of cultured neural networks as the decision-making mechanism of a control system. In this case, biological neurons are grown and trained to act as an artificial intelligence engine. Such research has immediate medical implications as well as enormous potential in computing and robotics. An experimental system involving closed-loop control of a mobile robot by a culture of neurons has been successfully created and is described here. This article gives a brief overview of the problem area and of ongoing research, and asks where this work will lead in the future.
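To make the closed-loop architecture concrete, here is a minimal stand-in sketch (entirely hypothetical; the real system stimulates a living neuron culture through electrodes and reads its evoked activity): sensor readings go to a decision element whose output drives the robot.

```python
import random

def culture_stub(obstacle_distance: float) -> float:
    """Stand-in for the cultured network: maps sensory stimulation to a motor response."""
    return 1.0 if obstacle_distance < 0.5 else 0.0  # turn away when an obstacle is near

def closed_loop(steps: int = 5) -> None:
    for _ in range(steps):
        distance = random.uniform(0.0, 1.0)   # simulated sonar reading
        turn = culture_stub(distance)         # the decision element closes the loop
        print(f"distance={distance:.2f} -> turn command={turn}")

closed_loop()
```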
Abstract:
In this paper, the practical generation of identification keys for biological taxa using a multilayer perceptron neural network is described. Unlike conventional expert systems, this method does not require an expert for key generation but is based merely on recordings of observed character states. Like a human taxonomist, its judgement is based on experience, and it is therefore capable of generalized identification of taxa. An initial study involving the identification of three species of Iris with greater than 90% confidence is presented here. In addition, the horticulturally significant genus Lithops (Aizoaceae/Mesembryanthemaceae), popular with enthusiasts of succulent plants, is used as a more practical example, because of the difficulty of generating a conventional key to its species and the existence of a relatively recent monograph. It is demonstrated that such an Artificial Neural Network Key (ANNKEY) can identify more than half (52.9%) of the species in this genus after training with representative data, even though data for one character are completely missing.
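The Iris example from the abstract can be reproduced in miniature with an off-the-shelf multilayer perceptron. The sketch below is illustrative only; the paper's network architecture and training settings are not stated in the abstract, so those chosen here are assumptions.

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Training on observed character states replaces hand-crafting a key.
key = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
key.fit(X_train, y_train)

# Like the ANNKEY, the network attaches a confidence to each identification.
confidence = key.predict_proba(X_test[:1]).max()
print(f"predicted taxon: {key.predict(X_test[:1])[0]}, confidence: {confidence:.2f}")
```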
Abstract:
Deception-detection is the crux of Turing's experiment to examine machine thinking, conveyed through a capacity to respond with sustained and satisfactory answers to unrestricted questions put by a human interrogator. However, in the 60 years to the month since the publication of Computing Machinery and Intelligence, little agreement exists on a canonical format for Turing's textual game of imitation, deception and machine intelligence. This research raises, from the trapped mine of philosophical claims, counter-claims and rebuttals, Turing's own distinct five-minute question-answer imitation game, which he envisioned practicalised in two different ways: (a) a two-participant, interrogator-witness viva voce; (b) a three-participant comparison of a machine with a human, both questioned simultaneously by a human interrogator. Using Loebner's 18th Prize for Artificial Intelligence contest and Colby et al.'s 1972 transcript-analysis paradigm, this research practicalised Turing's imitation game with over 400 human participants and 13 machines across three original experiments. Results show that, at the current state of technology, a deception rate of 8.33% was achieved by machines in 60 human-machine simultaneous comparison tests. Results also show that more than 1 in 3 reviewers succumbed to hidden-interlocutor misidentification after reading transcripts from experiment 2. Deception-detection is essential to uncover the increasing number of malfeasant programmes, such as CyberLover, developed to steal identities and financially defraud users in chatrooms across the Internet. Practicalising Turing's two tests can assist in understanding natural dialogue and in mitigating the risk from cybercrime.
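The reported rate implies a small whole number of successful deceptions; a quick consistency check (an inference from the abstract's figures, not a number stated in it):

```python
tests = 60
deception_rate = 8.33 / 100
print(round(tests * deception_rate))  # 5 of the 60 simultaneous-comparison tests
```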
Abstract:
Planning is one of the key problems for autonomous vehicles operating in road scenarios. Present planning algorithms operate with the assumption that traffic is organised in predefined speed lanes, which makes it impossible to allow autonomous vehicles in countries with unorganised traffic. Unorganised traffic is, though, capable of higher traffic bandwidths when the constituting vehicles vary in their speed capabilities and sizes. Diverse vehicles in unorganised traffic exhibit unique driving behaviours, which are analysed in this paper through a simulation study. The aim of the work reported here is to create a planning algorithm for mixed traffic consisting of both autonomous and non-autonomous vehicles without any inter-vehicle communication. The awareness (e.g. vision) of every vehicle is restricted to nearby vehicles only, and a straight, infinite road is assumed for decision making regarding navigation in the presence of multiple vehicles. Exhibited behaviours include obstacle avoidance, overtaking, giving way for vehicles to overtake from behind, vehicle following, adjusting the lateral lane position and so on. A conflict of plans is a major issue which will almost certainly arise in the absence of inter-vehicle communication. Hence, each vehicle needs to continuously track other vehicles and rectify its plans whenever a collision seems likely. Further, it is observed here that driver aggression plays a vital role in overall traffic dynamics, so this has also been factored in accordingly. This work is hence a step towards achieving autonomous vehicles in unorganised traffic, while similar effort would be required for planning problems such as intersections, merges and diversions, and for other modules such as localisation.
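The planning loop the abstract describes, track nearby vehicles and rectify the plan when a collision seems likely, can be sketched roughly as follows. This is a deliberately simplified, hypothetical reading (constant speeds on a straight road); the paper's planner additionally models aggression, lateral position and richer behaviours.

```python
from dataclasses import dataclass

@dataclass
class Vehicle:
    position: float  # metres along a straight, infinite road
    speed: float     # metres per second

def time_to_collision(follower: Vehicle, leader: Vehicle) -> float:
    """Time until the follower catches the leader, assuming constant speeds."""
    closing_speed = follower.speed - leader.speed
    if closing_speed <= 0:
        return float("inf")  # not closing in: no conflict
    return (leader.position - follower.position) / closing_speed

def rectify_plan(ego: Vehicle, others: list[Vehicle], horizon: float = 3.0) -> str:
    """Re-plan whenever a conflict is predicted within the time horizon."""
    for other in others:
        if other.position > ego.position and time_to_collision(ego, other) < horizon:
            return "slow down or overtake"
    return "keep current plan"

print(rectify_plan(Vehicle(0.0, 15.0), [Vehicle(10.0, 10.0)]))  # conflict in 2 s
```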
Abstract:
One of the most challenging tasks in financial management for large governmental and industrial organizations is Planning and Budgeting (P&B). The processes involved in P&B are cost- and time-intensive, especially when dealing with uncertainties and budget adjustments during the planning horizon. This work builds on our previous research, in which we proposed and evaluated a fuzzy approach that allows the budget to be optimized interactively beyond the initial planning stage. In this research we propose an extension that handles financial stress (i.e. drastic budget cuts) occurring during the budget period. This is done by introducing fuzzy stress parameters, which are used to redistribute the budget in order to minimize the negative impact of the financial stress. The benefits and possible issues of this approach are analyzed critically using a real-world case study from the Nuremberg Institute of Technology (NIT). Additionally, ongoing and future research directions are presented.
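The abstract gives no formulas, so the following is only one plausible sketch of stress parameters at work: each budget position carries a weight expressing how much stress it can absorb, and a drastic cut is redistributed in proportion to those weights. The rule and all names are assumptions, not the paper's actual fuzzy model.

```python
def redistribute(budgets: dict[str, float], cut: float,
                 stress_tolerance: dict[str, float]) -> dict[str, float]:
    """Spread a budget cut across positions in proportion to their stress tolerance."""
    total = sum(stress_tolerance.values())
    return {name: amount - cut * stress_tolerance[name] / total
            for name, amount in budgets.items()}

budgets = {"teaching": 100.0, "research": 80.0, "facilities": 60.0}
tolerance = {"teaching": 0.1, "research": 0.3, "facilities": 0.6}  # fuzzy-like weights
print(redistribute(budgets, cut=30.0, stress_tolerance=tolerance))
```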
Abstract:
Predictive performance evaluation is a fundamental issue in the design, development, and deployment of classification systems. As predictive performance evaluation is a multidimensional problem, single scalar summaries such as error rate, although quite convenient due to their simplicity, can seldom evaluate all the aspects that a complete and reliable evaluation must consider. Because of this, various graphical performance evaluation methods are increasingly drawing the attention of the machine learning, data mining, and pattern recognition communities. The main advantage of these methods resides in their ability to depict the trade-offs between evaluation aspects in a multidimensional space rather than reducing those aspects to an arbitrarily chosen (and often biased) single scalar measure. Furthermore, to select a suitable graphical method for a given task, it is crucial to identify the strengths and weaknesses of each method. This paper surveys various graphical methods often used for predictive performance evaluation. By presenting these methods in the same framework, we hope this paper may shed some light on which methods are more suitable to use in different situations.
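ROC analysis is one of the best-known graphical methods of the kind such a survey covers; the example below (my illustration, not drawn from the paper) shows how a curve summarises the trade-off between true- and false-positive rates across all decision thresholds, rather than reducing performance to one scalar.

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import auc, roc_curve
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=500, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Scores, not hard labels, are what a graphical method operates on.
scores = LogisticRegression(max_iter=1000).fit(X_tr, y_tr).predict_proba(X_te)[:, 1]
fpr, tpr, _ = roc_curve(y_te, scores)
print(f"area under the ROC curve: {auc(fpr, tpr):.3f}")
```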
Abstract:
Techniques devoted to generating triangular meshes from intensity images either take as input a segmented image or generate a mesh without distinguishing the individual structures contained in the image. These facts may cause difficulties in using such techniques in some applications, such as numerical simulation. In this work we reformulate a previously developed technique for mesh generation from intensity images called Imesh. This reformulation makes Imesh more versatile thanks to a unified framework that allows an easy change of refinement metric, rendering it effective for constructing meshes for applications with varied requirements, such as numerical simulation and image modeling. Furthermore, a deeper study of the point insertion problem and the development of a geometrical criterion for segmentation are also reported in this paper. Meshes with a theoretical guarantee of quality can also be obtained for each individual image structure as a post-processing step, a characteristic not usually found in other methods. The tests demonstrate the flexibility and effectiveness of the approach.
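The design point the reformulation hinges on, a refinement metric that can be swapped per application, can be sketched as a simple pluggable predicate. Everything below is a hypothetical illustration; Imesh's real metrics and data structures are not described in the abstract.

```python
from typing import Callable

Triangle = tuple[tuple[float, float], tuple[float, float], tuple[float, float]]
RefinementMetric = Callable[[Triangle], bool]  # should this triangle be refined?

def area(t: Triangle) -> float:
    (x1, y1), (x2, y2), (x3, y3) = t
    return abs((x2 - x1) * (y3 - y1) - (x3 - x1) * (y2 - y1)) / 2

def flag_for_refinement(mesh: list[Triangle],
                        metric: RefinementMetric) -> list[Triangle]:
    return [t for t in mesh if metric(t)]

# Swapping the metric retargets the same pipeline at a different application:
simulation_metric: RefinementMetric = lambda t: area(t) > 0.5   # fine, size-driven
modelling_metric: RefinementMetric = lambda t: area(t) > 2.0    # coarser tolerance

mesh = [((0, 0), (1, 0), (0, 1)), ((0, 0), (3, 0), (0, 3))]
print(len(flag_for_refinement(mesh, simulation_metric)),
      len(flag_for_refinement(mesh, modelling_metric)))
```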
Abstract:
Case-Based Reasoning is a methodology for problem solving based on past experiences. This methodology tries to solve a new problem by retrieving and adapting previously known solutions of similar problems. However, retrieved solutions generally require adaptation in order to be applied to new contexts. One of the major challenges in Case-Based Reasoning is the development of an efficient methodology for case adaptation. The most widely used form of adaptation employs hand-coded adaptation rules, which demand a significant knowledge acquisition and engineering effort. An alternative for overcoming the difficulties associated with the acquisition of adaptation knowledge has been the use of hybrid approaches and automatic learning algorithms. We investigate the use of hybrid approaches for case adaptation employing Machine Learning algorithms. The investigated approaches automatically learn adaptation knowledge from a case base and apply it to adapt retrieved solutions. In order to verify the potential of the proposed approaches, they are experimentally compared with individual Machine Learning techniques. The results obtained indicate that these approaches are efficient for acquiring case adaptation knowledge. They show that the combination of the Instance-Based Learning and Inductive Learning paradigms, together with a data set of adaptation patterns, yields adaptations of the retrieved solutions with high predictive accuracy.
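One way to read the combination described, instance-based retrieval followed by an inductively learned correction, is sketched below. The pairing (1-NN retrieval, a decision tree trained on adaptation patterns) and all data are illustrative assumptions, not the paper's exact setup.

```python
import numpy as np
from sklearn.neighbors import KNeighborsRegressor
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)
X = rng.uniform(0, 10, (300, 2))
y = 2 * X[:, 0] + X[:, 1]  # toy solution for each problem description

case_X, case_y = X[:150], y[:150]      # the case base
adapt_X, adapt_y = X[150:], y[150:]    # problems used to learn adaptation knowledge

retrieve = KNeighborsRegressor(n_neighbors=1).fit(case_X, case_y)  # instance-based step

# Adaptation patterns: the correction each retrieved solution needs.
corrections = adapt_y - retrieve.predict(adapt_X)
adapter = DecisionTreeRegressor(max_depth=4).fit(adapt_X, corrections)  # inductive step

query = np.array([[7.5, 3.0]])
solution = retrieve.predict(query) + adapter.predict(query)  # retrieve, then adapt
print(f"adapted solution: {solution[0]:.2f} (true value: {2 * 7.5 + 3.0})")
```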
Abstract:
Species' potential distribution modelling consists of building a representation of the fundamental ecological requirements of a species from the biotic and abiotic conditions where the species is known to occur. Such models can be valuable tools for understanding the biogeography of species and for supporting the prediction of their presence/absence under a particular environmental scenario. This paper investigates the use of different supervised machine learning techniques to model the potential distribution of 35 plant species from Latin America. Each technique was able to extract a different representation of the relations between the environmental conditions and the distribution profile of the species. The experimental results highlight the good performance of random trees classifiers, indicating this particular technique as a promising candidate for modelling species' potential distribution.
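The modelling task can be illustrated compactly with a random-forest classifier over synthetic environmental layers. This is only a sketch: the paper's data, species and exact "random trees" implementation are not reproduced here.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
# Environmental conditions at 400 locations: temperature, rainfall, altitude.
env = rng.uniform([5, 200, 0], [35, 3000, 4000], size=(400, 3))
# Presence/absence label: a species occupying a warm, wet, low-altitude niche.
present = ((env[:, 0] > 20) & (env[:, 1] > 1500) & (env[:, 2] < 1000)).astype(int)

model = RandomForestClassifier(n_estimators=100, random_state=0)
print(f"cross-validated accuracy: {cross_val_score(model, env, present).mean():.2f}")
```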
Abstract:
Credit scoring modelling comprises one of the leading formal tools for supporting the granting of credit. Its core objective is the generation of a score by means of which potential clients can be ranked in order of their probability of default. A critical factor is whether a credit scoring model is accurate enough to classify clients correctly as good or bad payers. In this context the concept of bootstrap aggregating (bagging) arises. The basic idea is to generate multiple classifiers by obtaining predicted values from models fitted to several replicated datasets, and then to combine them into a single predictive classification in order to improve classification accuracy. In this paper we propose a new bagging-type variant, which we call poly-bagging, consisting of combining predictors over a succession of resamplings. The study is motivated by credit scoring modelling. The proposed poly-bagging procedure was applied to several artificial datasets and to a real credit-granting dataset, with up to three successions of resamplings. We observed better classification accuracy for the two-bagged and the three-bagged models in all considered setups. These results strongly indicate that the poly-bagging approach may improve modelling performance measures while keeping a flexible and straightforward bagging-type structure that is easy to implement.
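The abstract defines poly-bagging only loosely (combining predictors over a succession of resamplings), so the sketch below is one plausible reading rather than the authors' exact procedure: an ordinary bagging ensemble is itself resampled and bagged a second time.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=600, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# One-bagged: standard bootstrap aggregating of decision trees.
one_bagged = BaggingClassifier(DecisionTreeClassifier(), n_estimators=25,
                               random_state=0).fit(X_tr, y_tr)

# "Two-bagged" (assumed reading): resample again and bag the bagged model.
two_bagged = BaggingClassifier(one_bagged, n_estimators=5,
                               random_state=0).fit(X_tr, y_tr)

print(f"one-bagged accuracy: {one_bagged.score(X_te, y_te):.3f}")
print(f"two-bagged accuracy: {two_bagged.score(X_te, y_te):.3f}")
```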
Abstract:
A novel mathematical framework inspired by Morse Theory for topological triangle characterization in 2D meshes is introduced. It is useful for applications involving the creation of mesh models of objects whose geometry is not known a priori. The framework guarantees precise control of the topological changes introduced by triangle insertion/removal operations and enables the definition of intuitive high-level operators for managing the mesh while keeping its topological integrity. An application is described: an innovative approach to the detection of 2D objects in images that integrates the topological control enabled by geometric modeling with traditional image processing techniques.
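The kind of topological control the abstract refers to can be illustrated with a simple invariant (my example, not the paper's Morse-theoretic machinery): the Euler characteristic V - E + F of a triangle mesh stays fixed under operations that preserve its topology, so checking it flags insertions or removals that tear or puncture the mesh.

```python
def euler_characteristic(triangles: list[tuple[int, int, int]]) -> int:
    """V - E + F for a triangle mesh given as vertex-index triples."""
    vertices = {v for t in triangles for v in t}
    edges = {frozenset(e) for t in triangles
             for e in ((t[0], t[1]), (t[1], t[2]), (t[0], t[2]))}
    return len(vertices) - len(edges) + len(triangles)

# One triangle, then two triangles glued along an edge: both are disks (chi = 1),
# so this insertion preserved the mesh's topological integrity.
print(euler_characteristic([(0, 1, 2)]))             # 3 - 3 + 1 = 1
print(euler_characteristic([(0, 1, 2), (1, 2, 3)]))  # 4 - 5 + 2 = 1
```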
Abstract:
Navigation is a broad topic that has been receiving considerable attention from the mobile robotics community over the years. In order to execute autonomous driving in outdoor urban environments, it is necessary to identify the parts of the terrain that can be traversed and the parts that should be avoided. This paper describes an analysis of terrain identification based on different kinds of visual information, using an MLP artificial neural network and combining the responses of several classifiers. Experimental tests using a vehicle and a video camera have been conducted in real scenarios to evaluate the proposed approach.
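The combination step can be sketched as a vote over several networks trained on (stand-ins for) different visual cues. The features and fusion rule here are assumptions for illustration, not the paper's actual pipeline.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import VotingClassifier
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Stand-in for visual features (e.g. colour and texture statistics per image patch),
# labelled traversable (1) or not traversable (0).
X, y = make_classification(n_samples=400, n_features=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Several MLPs (here differing only in hidden-layer size) combined by majority vote.
ensemble = VotingClassifier([
    (f"mlp{h}", MLPClassifier(hidden_layer_sizes=(h,), max_iter=2000, random_state=0))
    for h in (5, 10, 20)
])
ensemble.fit(X_tr, y_tr)
print(f"combined terrain-classification accuracy: {ensemble.score(X_te, y_te):.3f}")
```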
Abstract:
Component-based software engineering has recently emerged as a promising solution to the development of system-level software. Unfortunately, current approaches are limited to specific platforms and domains. This lack of generality is particularly problematic, as it prevents knowledge sharing and generally drives development costs up. In the past, we have developed a generic approach to component-based software engineering for system-level software called OpenCom. In this paper, we present OpenComL, an instantiation of OpenCom for Linux environments, and show how it can be profiled to meet the needs of a range of system-level software in Linux environments. To this end, we demonstrate its application to constructing a programmable router platform and a middleware for parallel environments.
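OpenCom's actual interfaces are not given in the abstract, so the following is only a generic sketch of the component-based style it describes: components expose explicit bindings and a lifecycle, which is what lets one model be profiled across domains such as routers and middleware.

```python
from abc import ABC, abstractmethod

class Component(ABC):
    """Generic component (hypothetical interface, not OpenCom's real API):
    composition happens through explicit, inspectable bindings."""
    def __init__(self) -> None:
        self.bindings: dict[str, "Component"] = {}

    def bind(self, name: str, provider: "Component") -> None:
        self.bindings[name] = provider

    @abstractmethod
    def start(self) -> None: ...

class Scheduler(Component):
    def start(self) -> None:
        print("scheduler running")

class Forwarder(Component):
    def start(self) -> None:
        print("forwarding via", type(self.bindings["scheduler"]).__name__)

# Composing a toy "programmable router" from the same generic component model:
router, scheduler = Forwarder(), Scheduler()
router.bind("scheduler", scheduler)
scheduler.start()
router.start()
```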