16 results for 280213 Other Artificial Intelligence
in CentAUR: Central Archive at the University of Reading - UK
Abstract:
Artificial Intelligence: The Basics is a concise and cutting-edge introduction to the fast-moving world of AI. The author, Kevin Warwick, a pioneer in the field, examines what it means to be man or machine and looks at advances in robotics that have blurred the boundaries. Topics covered include: how intelligence can be defined, whether machines can 'think', sensory input in machine systems, the nature of consciousness, and the controversial culturing of human neurons. Exploring issues at the heart of the subject, this book is suitable for anyone interested in AI, and provides an illuminating and accessible introduction to this fascinating subject.
Abstract:
Security is one of the essential requirements for a successful e-Government web application. Web application firewalls (WAFs) are the most important tool for securing web applications against the increasing number of web application attacks. WAFs work in different modes depending on the web traffic filtering approach used, such as positive security mode, negative security mode, session-based mode, or mixed modes. The proposed WAF, called HiWAF, is a web application firewall that works in three modes: positive, negative, and session-based security modes. The new approach that distinguishes this WAF from other WAFs is that it utilizes the concepts of Artificial Intelligence (AI), rather than regular expressions or other traditional pattern matching techniques, as its filtering engine. Both artificial neural network and fuzzy logic concepts will be used to implement a hybrid intelligent web application firewall that works in these three security modes.
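The abstract does not give implementation details, so the following is a minimal sketch, in Python, of the hybrid idea it describes: a small neural scorer over request features combined with a fuzzy membership rule, with the final decision taken from whichever engine is more suspicious. All feature names, weights and thresholds here are illustrative assumptions, not HiWAF's actual design.

```python
# Illustrative sketch only: the paper's actual feature set, network
# architecture and fuzzy rules are not given in the abstract.
import math

def neural_score(features, weights, bias):
    """Tiny single-neuron scorer: logistic over weighted request features."""
    z = sum(w * x for w, x in zip(weights, features)) + bias
    return 1.0 / (1.0 + math.exp(-z))

def fuzzy_suspicion(special_char_ratio):
    """Triangular membership: how 'suspicious' the ratio of special
    characters in a request looks (thresholds are assumptions)."""
    if special_char_ratio <= 0.1:
        return 0.0
    if special_char_ratio >= 0.5:
        return 1.0
    return (special_char_ratio - 0.1) / 0.4

def classify_request(features, special_char_ratio,
                     weights=(1.5, 2.0, 0.8), bias=-2.0, threshold=0.6):
    """Hybrid decision: take the more suspicious of the two engines'
    scores, one plausible way of combining them across modes."""
    score = max(neural_score(features, weights, bias),
                fuzzy_suspicion(special_char_ratio))
    return "block" if score >= threshold else "allow"

# Example request features: (url_length_norm, sql_keywords, encoded_chars)
print(classify_request((0.3, 2.0, 1.0), special_char_ratio=0.35))
```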
Synapsing variable length crossover: An algorithm for crossing and comparing variable length genomes
Abstract:
The Synapsing Variable Length Crossover (SVLC) algorithm provides a biologically inspired method for performing meaningful crossover between variable length genomes. In addition to providing a rationale for variable length crossover, it also provides a genotypic similarity metric for variable length genomes, enabling standard niche formation techniques to be used with them. Unlike other variable length crossover techniques, which consider genomes to be rigid, inflexible arrays and where some or all of the crossover points are randomly selected, the SVLC algorithm considers genomes to be flexible and chooses non-random crossover points based on the common parental sequence similarity. The SVLC algorithm recurrently "glues" or synapses homogeneous genetic sub-sequences together. This is done in such a way that common parental sequences are automatically preserved in the offspring, with only the genetic differences being exchanged or removed, independent of the length of such differences. In a variable length test problem the SVLC algorithm is shown to outperform current variable length crossover techniques. The SVLC algorithm is also shown to work in a more realistic application: evolving a neural network controller for a robot.
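As a rough illustration of the synapsing idea, the sketch below aligns two variable length parent genomes on their common sub-sequences and exchanges only the differing regions. Python's difflib stands in for the paper's own synapsing procedure, so this is a sketch of the concept rather than the published algorithm.

```python
# Hedged sketch of similarity-based variable length crossover: align
# the parents on common sub-sequences ("synapses") and exchange only
# the differing regions, whatever their lengths.
import random
from difflib import SequenceMatcher

def svlc_like_crossover(parent_a, parent_b, swap_prob=0.5, seed=None):
    rng = random.Random(seed)
    ops = SequenceMatcher(a=parent_a, b=parent_b).get_opcodes()
    child = []
    for tag, i1, i2, j1, j2 in ops:
        if tag == "equal":
            # Common parental sequence is preserved in the offspring.
            child.extend(parent_a[i1:i2])
        else:
            # Differing region: inherit from either parent, regardless
            # of how long each side's difference is (may be empty).
            donor = parent_a[i1:i2] if rng.random() < swap_prob else parent_b[j1:j2]
            child.extend(donor)
    return child

a = list("ACGTACGTAA")
b = list("ACGGGGTACGT")
print("".join(svlc_like_crossover(a, b, seed=1)))
```

The matcher's ratio over the same alignment could similarly stand in for the genotypic similarity metric the abstract mentions.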
Abstract:
In this article, we provide an initial insight into the study of machine intelligence (MI) and what it means for a machine to be intelligent. We discuss how MI has progressed to date and consider future scenarios as realistically and logically as possible. To do this, we unravel one of the major stumbling blocks to the study of MI: the field that has become widely known as "artificial intelligence".
Abstract:
In this paper, the practical generation of identification keys for biological taxa using a multilayer perceptron neural network is described. Unlike conventional expert systems, this method does not require an expert for key generation, but is based purely on recordings of observed character states. Like a human taxonomist, its judgement is based on experience, and it is therefore capable of generalized identification of taxa. An initial study involving identification of three species of Iris with greater than 90% confidence is presented here. In addition, the horticulturally significant genus Lithops (Aizoaceae/Mesembryanthemaceae), popular with enthusiasts of succulent plants, is used as a more practical example, because of the difficulty of generating a conventional key to species and the existence of a relatively recent monograph. It is demonstrated that such an Artificial Neural Network Key (ANNKEY) can identify more than half (52.9%) of the species in this genus, after training with representative data, even though data for one character are completely missing.
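A minimal sketch of this kind of neural identification key, using the classic Iris measurements and scikit-learn's off-the-shelf multilayer perceptron; the paper's actual architecture, training regime and Lithops data set are not reproduced here.

```python
# Sketch of an MLP-based identification key on the classic Iris data,
# in the spirit of the ANNKEY approach described above.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0, stratify=y)

key = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
key.fit(X_train, y_train)

# Confidence of each identification comes from the network's class
# probabilities, analogous to reporting identifications above a cut-off.
proba = key.predict_proba(X_test[:1])[0]
print("accuracy:", key.score(X_test, y_test))
print("class probabilities for first test specimen:", proba.round(3))
```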
Abstract:
This paper develops and tests formulas for representing playing strength at chess by the quality of moves played, rather than by the results of games. Intrinsic quality is estimated via evaluations given by computer chess programs run to high depth, ideally so that their playing strength is sufficiently far ahead of the best human players as to be a 'relatively omniscient' guide. Several formulas, each having intrinsic skill parameters s for 'sensitivity' and c for 'consistency', are argued theoretically and tested by regression on large sets of tournament games played by humans of varying strength, as measured by the internationally standard Elo rating system. This establishes a correspondence between Elo rating and the parameters. A smooth correspondence is shown between statistical results and the century points on the Elo scale, and ratings are shown to have stayed quite constant over time. That is, there has been little or no 'rating inflation'. The theory and empirical results are transferable to other rational-choice settings in which the alternatives have well-defined utilities, but in which complexity and bounded information constrain the perception of the utility values.
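The abstract names the skill parameters s and c but not the exact functional form. Purely as an illustration, the sketch below uses one plausible shape in which a move's probability falls off with its evaluation deficit delta from the engine's best move, scaled by s and shaped by c; this is an assumed form, not necessarily any of the paper's fitted formulas.

```python
# Illustrative sketch, not the paper's exact formula: move probabilities
# fall off with the evaluation deficit delta (in pawns) from the engine's
# best move, scaled by sensitivity s and shaped by consistency c.
import math

def move_probabilities(deltas, s, c):
    """deltas[i] >= 0 is how much worse move i is than the best move."""
    weights = [math.exp(-((d / s) ** c)) for d in deltas]
    total = sum(weights)
    return [w / total for w in weights]

# Deficits for four candidate moves; the best move has delta = 0.
deltas = [0.0, 0.1, 0.5, 2.0]
for label, s, c in [("stronger player", 0.07, 0.5),
                    ("weaker player", 0.2, 0.4)]:
    probs = move_probabilities(deltas, s, c)
    print(label, [round(p, 3) for p in probs])
```

Under this assumed shape, smaller s concentrates probability on the best move, so fitting s and c to a player's games yields the kind of Elo correspondence the abstract describes.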
Abstract:
Deception-detection is the crux of Turing's experiment to examine machine thinking, conveyed through a capacity to respond with sustained and satisfactory answers to unrestricted questions put by a human interrogator. However, in the 60 years to the month since the publication of Computing Machinery and Intelligence, little agreement exists on a canonical format for Turing's textual game of imitation, deception and machine intelligence. This research retrieves, from the trapped mine of philosophical claims, counter-claims and rebuttals, Turing's own distinct five-minute question-answer imitation game, which he envisioned practicalised in two different ways: a) a two-participant, interrogator-witness viva voce; b) a three-participant comparison of a machine with a human, both questioned simultaneously by a human interrogator. Using the 18th Loebner Prize for Artificial Intelligence contest, and Colby et al.'s 1972 transcript analysis paradigm, this research practicalised Turing's imitation game with over 400 human participants and 13 machines across three original experiments. Results show that, at the current state of technology, a deception rate of 8.33% was achieved by machines in 60 human-machine simultaneous comparison tests. Results also show that more than 1 in 3 reviewers succumbed to hidden interlocutor misidentification after reading transcripts from experiment 2. Deception-detection is essential to uncover the increasing number of malfeasant programmes, such as CyberLover, developed to steal identities and financially defraud users in chatrooms across the Internet. Practicalising Turing's two tests can assist in understanding natural dialogue and mitigate the risk from cybercrime.
Abstract:
Pocket Data Mining (PDM) is our new term describing collaborative mining of streaming data in mobile and distributed computing environments. With vast numbers of data streams now available for subscription on our smart mobile phones, the potential of using this data for decision making with data stream mining techniques has become achievable owing to the increasing power of these handheld devices. Wireless communication among these devices using Bluetooth and WiFi technologies has opened the door wide for collaborative mining among mobile devices within the same range that are running data mining techniques targeting the same application. This paper proposes a new architecture that we have prototyped for realizing the significant applications in this area. We propose using mobile software agents in this application for several reasons. Most importantly, the autonomic, intelligent behaviour of agent technology has been the driving force for using it in this application. Other efficiency reasons are discussed in detail in this paper. Experimental results showing the feasibility of the proposed architecture are presented and discussed.
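A toy sketch of the collaborative element: each device's local stream classifier is wrapped as an agent, and a simple vote is taken over whatever agents are currently in range. The agent class, models and voting scheme here are illustrative assumptions, not the prototyped PDM architecture.

```python
# Hedged sketch of collaborative mining: each device runs its own
# stream classifier and a visiting agent combines their votes.
from collections import Counter

class MinerAgent:
    """A device-local classifier wrapped as an agent (illustrative)."""
    def __init__(self, name, model):
        self.name = name
        self.model = model  # any callable: record -> label

    def vote(self, record):
        return self.model(record)

def collaborative_classify(agents, record):
    """Majority vote over whatever agents are currently in range."""
    votes = Counter(agent.vote(record) for agent in agents)
    return votes.most_common(1)[0][0]

# Toy models standing in for per-device stream classifiers.
agents = [
    MinerAgent("phone-a", lambda r: "spam" if r["len"] > 100 else "ham"),
    MinerAgent("phone-b", lambda r: "spam" if r["links"] > 2 else "ham"),
    MinerAgent("tablet-c", lambda r: "ham"),
]
print(collaborative_classify(agents, {"len": 140, "links": 3}))
```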
Abstract:
Planning is one of the key problems for autonomous vehicles operating in road scenarios. Present planning algorithms operate with the assumption that traffic is organised into predefined speed lanes, which makes it impossible to allow autonomous vehicles in countries with unorganised traffic. Unorganised traffic is, however, capable of higher traffic bandwidths when its constituent vehicles vary in their speed capabilities and sizes. Diverse vehicles in unorganised traffic exhibit unique driving behaviours, which are analysed in this paper by a simulation study. The aim of the work reported here is to create a planning algorithm for mixed traffic consisting of both autonomous and non-autonomous vehicles, without any inter-vehicle communication. The awareness (e.g. vision) of every vehicle is restricted to nearby vehicles only, and a straight infinite road is assumed for decision making regarding navigation in the presence of multiple vehicles. Exhibited behaviours include obstacle avoidance, overtaking, giving way for vehicles to overtake from behind, vehicle following, adjusting the lateral lane position, and so on. A conflict of plans is a major issue which will almost certainly arise in the absence of inter-vehicle communication; hence each vehicle needs to continuously track other vehicles and rectify its plans whenever a collision seems likely, as sketched below. Further, it is observed here that driver aggression plays a vital role in overall traffic dynamics, and this has therefore been factored in accordingly. This work is hence a step forward towards achieving autonomous vehicles in unorganised traffic, while similar effort would be required for planning problems such as intersections, merges, diversions, and other modules like localisation.
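A minimal sketch of that track-and-rectify loop: each vehicle extrapolates nearby vehicles' motion under a constant-velocity assumption and makes an aggression-scaled lateral adjustment when a collision looks likely. The geometry, horizons and aggression term are all illustrative assumptions, not the paper's algorithm.

```python
# Illustrative sketch of continuous tracking and plan rectification
# for unorganised traffic without inter-vehicle communication.
from dataclasses import dataclass

@dataclass
class Vehicle:
    x: float        # longitudinal position (m)
    lane_y: float   # lateral position (m)
    speed: float    # m/s
    aggression: float = 0.5  # 0 = timid, 1 = aggressive (assumed scale)

def collision_likely(me, other, horizon=3.0, gap=2.0, dt=0.5):
    """Constant-velocity prediction of both vehicles over a short horizon."""
    t = 0.0
    while t <= horizon:
        dx = (other.x + other.speed * t) - (me.x + me.speed * t)
        if abs(dx) < gap and abs(other.lane_y - me.lane_y) < 1.0:
            return True
        t += dt
    return False

def rectify_plan(me, nearby):
    """Shift laterally to overtake or give way; aggression scales the shift."""
    for other in nearby:
        if collision_likely(me, other):
            me.lane_y += 1.0 + me.aggression  # illustrative lateral adjustment
            return "replanned"
    return "keep plan"

me = Vehicle(x=0.0, lane_y=0.0, speed=15.0, aggression=0.8)
slow = Vehicle(x=20.0, lane_y=0.0, speed=8.0)
print(rectify_plan(me, [slow]))
```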