889 results for Artificial intelligence -- Computer programs


Relevance:

100.00%

Publisher:

Abstract:

Traditionally, the reproduction of the real world has been presented to us through flat images. These images were usually materialised as paintings on canvas or as drawings. Today, fortunately, we can still see handmade paintings, although most images are acquired with cameras and either shown directly to an audience, as in cinema, television or photography exhibitions, or processed by a computer system to obtain a particular result. Such processing is applied in fields ranging from industrial quality control to cutting-edge research in artificial intelligence. By applying mid-level processing algorithms, 3D images can be obtained from 2D images using the well-known family of techniques called Shape From X, where X is the method used to obtain the third dimension and varies according to the technique employed for that purpose. Although the evolution towards the 3D camera began in the 1990s, the techniques for obtaining three-dimensional shapes must become more and more accurate. The applications of 3D scanners have grown considerably in recent years, especially in fields such as leisure, assisted diagnosis/surgery, robotics, etc. One of the most widely used techniques for obtaining 3D information from a scene is triangulation, and more specifically the use of three-dimensional laser scanners. Since their formal appearance in scientific publications in 1971 [SS71], there have been contributions addressing inherent problems such as the reduction of occlusions and the improvement of accuracy, acquisition speed and shape description. Each and every method for obtaining 3D points of a scene has an associated calibration process, and this process plays a decisive role in the performance of a three-dimensional acquisition device. The aim of this thesis is to address the problem of 3D shape acquisition comprehensively: reporting a state of the art on triangulation-based laser scanners, testing the operation and performance of different systems, making contributions to improve the accuracy of laser beam detection, especially under adverse conditions, and solving the calibration problem with projective geometric methods.
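
To make the triangulation principle concrete, here is a minimal Python sketch (all values are illustrative assumptions, not the thesis's calibration): the camera ray through a detected laser pixel is intersected with the calibrated laser plane to recover a 3D point.

    import numpy as np

    # Minimal laser-triangulation sketch: intersect the camera ray through
    # a detected laser pixel with the calibrated laser plane. Focal length,
    # principal point, plane coefficients and pixel are assumed values.

    f = 800.0                                # focal length in pixels (assumed)
    cx, cy = 320.0, 240.0                    # principal point (assumed)
    plane = np.array([1.0, 0.0, -0.5, 0.2])  # laser plane a*x+b*y+c*z+d=0 (assumed)

    def triangulate(u, v):
        """Back-project pixel (u, v) and intersect the ray with the laser plane."""
        ray = np.array([(u - cx) / f, (v - cy) / f, 1.0])  # ray direction, camera at origin
        a, b, c, d = plane
        t = -d / (a * ray[0] + b * ray[1] + c * ray[2])    # depth along the ray
        return t * ray                                     # 3D point in camera coordinates

    print(triangulate(400.0, 250.0))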

Relevance:

100.00%

Publisher:

Abstract:

For fifty years, computer chess has pursued an original goal of Artificial Intelligence: to produce a chess engine that competes at the highest level. The goal has arguably been achieved, but that success has made it harder to answer questions about the relative playing strengths of man and machine. The proposal here is to approach such questions in a counter-intuitive way, handicapping or stopping-down chess engines so that they play less well. The intrinsic lack of man-machine games may be side-stepped by analysing existing games to place computer engines as accurately as possible on the FIDE Elo scale of human play. Move-sequences may also be assessed for likelihood if computer-assisted cheating is suspected.
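
For context, the Elo scale referred to above rests on a standard expectation model; the small sketch below (generic Elo arithmetic, not the paper's analysis pipeline) shows how a rating difference maps to an expected score, and how an observed score fraction can be inverted to a rating gap.

    import math

    # Standard Elo model: expected score of player A against player B.
    def expected_score(rating_a, rating_b):
        return 1.0 / (1.0 + 10 ** ((rating_b - rating_a) / 400.0))

    # Inverting the model: estimate a rating gap from an observed score fraction.
    def rating_gap(score):
        return -400.0 * math.log10(1.0 / score - 1.0)

    print(expected_score(2800, 2600))   # ~0.76
    print(rating_gap(0.76))             # ~200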

Relevance:

100.00%

Publisher:

Abstract:

In this paper we describe how we generated written explanations for ‘indirect users’ of a knowledge-based system in the domain of drug prescription. We call ‘indirect users’ the intended recipients of explanations, to distinguish them from the prescriber (the ‘direct’ user) who interacts with the system. The Explanation Generator was designed after several studies of indirect users' information needs and physicians' explanatory attitudes in this domain. It integrates text planning techniques with ATN-based surface generation. A double modelling component enables the information content, order and style to be adapted to the indirect user to whom the explanation is addressed. Several examples of computer-generated texts are provided, and they are contrasted with the physicians' explanations to discuss the advantages and limits of the approach adopted.
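
As a toy illustration of the double-modelling idea (hypothetical templates and user types; the actual system combined text planning with ATN-based surface generation), the same text plan can be realised differently for each indirect user:

    # Toy sketch: one text plan, different realisations per user model.
    # Plan steps, user types and templates are invented for illustration.

    PLAN = [("warn", "drug interaction"), ("advise", "take it with food")]

    STYLE = {
        "patient":   {"warn": "Be careful: there may be a {topic}.",
                      "advise": "It is best to {topic}."},
        "caregiver": {"warn": "Monitor the patient for a possible {topic}.",
                      "advise": "Make sure the patient will {topic}."},
    }

    def generate(plan, user):
        """Realise each plan step with the phrasing chosen for this user type."""
        return " ".join(STYLE[user][act].format(topic=topic) for act, topic in plan)

    print(generate(PLAN, "patient"))
    print(generate(PLAN, "caregiver"))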

Relevance:

100.00%

Publisher:

Abstract:

This paper focuses on improving computer network management through the adoption of artificial intelligence techniques. A logical inference system has been devised to enable automated isolation, diagnosis, and even repair of network problems, thus enhancing the reliability, performance, and security of networks. We propose a distributed multi-agent architecture for network management, in which a logical reasoner acts as an external managing entity capable of directing, coordinating, and stimulating actions in an active management architecture. Active networks technology provides the lower-level layer, making possible the deployment of code that implements teleo-reactive agents distributed across the whole network. We adopt the Situation Calculus to define a network model and the Reactive Golog language to implement the logical reasoner. An active network management architecture is used by the reasoner to inject and execute operational tasks in the network. The integrated system combines the advantages of logical reasoning and network programmability, providing a powerful system capable of performing high-level management tasks to deal with network faults.
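
A plain-Python analogue of a teleo-reactive agent may help fix ideas (the conditions and actions here are invented; the paper implements its agents in Reactive Golog over an active-network platform): an ordered list of condition-action rules is scanned on every cycle, and the first applicable rule fires.

    # Teleo-reactive rule loop for fault management: rules are ordered by
    # priority; on each cycle the first rule whose condition holds fires.
    # State fields and actions are illustrative assumptions.

    RULES = [
        (lambda s: s["link_up"] and not s["congested"], "monitor"),
        (lambda s: s["link_up"] and s["congested"],     "reroute_traffic"),
        (lambda s: not s["link_up"],                    "restart_interface"),
    ]

    def step(state):
        for condition, action in RULES:
            if condition(state):
                return action   # highest-priority applicable action

    print(step({"link_up": False, "congested": False}))  # restart_interface
    print(step({"link_up": True,  "congested": True}))   # reroute_traffic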

Relevance:

100.00%

Publisher:

Abstract:

Constructing biodiversity richness maps from Environmental Niche Models (ENMs) of thousands of species is time-consuming. A separate species occurrence data pre-processing phase enables the experimenter to control test AUC score variance due to species dataset size. Besides removing duplicate occurrences and points with missing environmental data, we discuss the need for coordinate precision, wide dispersion, temporal and synonymity filters. After species data filtering, the final task of a pre-processing phase should be the automatic generation of species occurrence datasets which can then be directly 'plugged in' to the ENM. A software application capable of carrying out all these tasks would be a valuable time-saver, particularly for large-scale biodiversity studies.
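
A minimal sketch of such a filtering pipeline, assuming a simple record layout and illustrative thresholds (neither is specified by the abstract):

    # Occurrence pre-processing sketch: duplicate, missing-data,
    # coordinate-precision and wide-dispersion filters from the text above.

    def decimals(x):
        """Number of decimal places recorded for a coordinate."""
        s = repr(float(x))
        return len(s.split(".")[1]) if "." in s else 0

    def preprocess(records, min_decimals=2, min_spread=0.5):
        seen, kept = set(), []
        for r in records:                  # r: dict with lat, lon, env, year, name
            key = (r["lat"], r["lon"])
            if key in seen:                # duplicate-occurrence filter
                continue
            if r.get("env") is None:       # missing environmental data filter
                continue
            if decimals(r["lat"]) < min_decimals or decimals(r["lon"]) < min_decimals:
                continue                   # coordinate-precision filter
            seen.add(key)
            kept.append(r)
        lats = [r["lat"] for r in kept]
        lons = [r["lon"] for r in kept]
        if kept and (max(lats) - min(lats) < min_spread and
                     max(lons) - min(lons) < min_spread):
            return []                      # wide-dispersion filter: too clustered
        return kept

    sample = [{"lat": -33.92, "lon": 18.42, "env": 1.0, "year": 2001, "name": "sp1"},
              {"lat": -33.92, "lon": 18.42, "env": 1.0, "year": 2001, "name": "sp1"},
              {"lat": -29.85, "lon": 31.02, "env": None, "year": 2003, "name": "sp1"}]
    print(len(preprocess(sample)))  # 0: the lone survivor fails the dispersion filter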

Relevance:

100.00%

Publisher:

Abstract:

This paper represents the first step in ongoing work on designing an unsupervised method based on a genetic algorithm for intrusion detection. Its main role in a broader system is to notify of unusual traffic and in that way provide the possibility of detecting unknown attacks. Most of the machine-learning techniques deployed for intrusion detection are supervised, as these techniques are generally more accurate, but this implies the need to label the data for training and testing, which is time-consuming and error-prone. Hence, our goal is to devise an anomaly detector that is unsupervised but at the same time robust and accurate. Genetic algorithms are robust and able to avoid getting stuck in local optima, unlike many other clustering techniques. The model is verified on the KDD99 benchmark dataset, generating a solution competitive with those of the state of the art, which demonstrates the promise of the proposed method.
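
The following sketch shows a genetic algorithm evolving cluster centroids and flagging outliers, in the spirit of the approach described; the representation, fitness function and thresholds are assumptions, not the paper's design.

    import random

    # GA clustering sketch: individuals are k candidate centroids; fitness
    # rewards tight clusters; points far from every evolved centroid are
    # flagged as anomalous. Data and parameters are illustrative.

    random.seed(0)
    DATA = [[random.gauss(m, 0.3), random.gauss(m, 0.3)] for m in (0, 5) for _ in range(50)]
    K, POP, GENS = 2, 30, 60

    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

    def fitness(ind):                 # negative total distance to nearest centroid
        return -sum(min(dist(p, c) for c in ind) for p in DATA)

    def crossover(a, b):              # uniform crossover over centroids
        return [random.choice(pair)[:] for pair in zip(a, b)]

    def mutate(ind):                  # gaussian jitter on every coordinate
        return [[x + random.gauss(0, 0.2) for x in c] for c in ind]

    pop = [[random.choice(DATA)[:] for _ in range(K)] for _ in range(POP)]
    for _ in range(GENS):
        pop.sort(key=fitness, reverse=True)
        elite = pop[:POP // 2]
        pop = elite + [mutate(crossover(random.choice(elite), random.choice(elite)))
                       for _ in range(POP - len(elite))]

    best = max(pop, key=fitness)
    anomalies = [p for p in DATA if min(dist(p, c) for c in best) > 1.0]
    print(len(anomalies), "points flagged as anomalous")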

Relevance:

100.00%

Publisher:

Abstract:

In this article, we provide an initial insight into the study of machine intelligence (MI) and what it means for a machine to be intelligent. We discuss how MI has progressed to date and consider future scenarios as realistically and logically as possible. To do this, we unravel one of the major stumbling blocks to the study of MI: the field that has become widely known as "artificial intelligence".

Relevance:

100.00%

Publisher:

Abstract:

In this paper, practical generation of identification keys for biological taxa using a multilayer perceptron neural network is described. Unlike conventional expert systems, this method does not require an expert for key generation, but is merely based on recordings of observed character states. Like a human taxonomist, its judgement is based on experience, and it is therefore capable of generalized identification of taxa. An initial study involving identification of three species of Iris with greater than 90% confidence is presented here. In addition, the horticulturally significant genus Lithops (Aizoaceae/Mesembryanthemaceae), popular with enthusiasts of succulent plants, is used as a more practical example, because of the difficulty of generating a conventional key to its species, and the existence of a relatively recent monograph. It is demonstrated that such an Artificial Neural Network Key (ANNKEY) can identify more than half (52.9%) of the species in this genus, after training with representative data, even though data for one character is completely missing.
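
A minimal modern analogue of the Iris experiment, using scikit-learn's multilayer perceptron as a stand-in for the custom network described in the paper:

    # Train a small MLP to identify the three Iris species, echoing the
    # initial study above (scikit-learn is used here only for illustration).
    from sklearn.datasets import load_iris
    from sklearn.model_selection import train_test_split
    from sklearn.neural_network import MLPClassifier

    X, y = load_iris(return_X_y=True)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    key = MLPClassifier(hidden_layer_sizes=(10,), max_iter=2000, random_state=0)
    key.fit(X_train, y_train)
    print("identification accuracy:", key.score(X_test, y_test))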

Relevance:

100.00%

Publisher:

Abstract:

With the advance of information technology and the growing importance of human-computer interfaces within society, there has been a significant increase in research activity within the field of human-computer interaction (HCI). This paper summarizes some of the work undertaken to date, paying particular attention to methods applicable to on-line control and monitoring systems such as those employed by The National Grid Company plc.

Relevance:

100.00%

Publisher:

Abstract:

Deception-detection is the crux of Turing's experiment to examine machine thinking, conveyed through a capacity to respond with sustained and satisfactory answers to unrestricted questions put by a human interrogator. However, in the 60 years to the month since the publication of Computing Machinery and Intelligence, little agreement exists on a canonical format for Turing's textual game of imitation, deception and machine intelligence. From the trapped mine of philosophical claims, counter-claims and rebuttals, this research raises Turing's own distinct five-minute question-answer imitation game, which he envisioned practicalised in two different ways: a) a two-participant, interrogator-witness viva voce; b) a three-participant comparison of a machine with a human, both questioned simultaneously by a human interrogator. Using Loebner's 18th Prize for Artificial Intelligence contest, and Colby et al.'s 1972 transcript analysis paradigm, this research practicalised Turing's imitation game with over 400 human participants and 13 machines across three original experiments. Results show that, at the current state of technology, a deception rate of 8.33% was achieved by machines in 60 human-machine simultaneous comparison tests. Results also show that more than 1 in 3 reviewers succumbed to hidden interlocutor misidentification after reading transcripts from experiment 2. Deception-detection is essential to uncover the increasing number of malfeasant programmes, such as CyberLover, developed to steal identities and financially defraud users in chatrooms across the Internet. Practicalising Turing's two tests can assist in understanding natural dialogue and mitigate the risk from cybercrime.

Relevance:

100.00%

Publisher:

Abstract:

The Twitter network has been called the most widely used microblogging application today. With about 500 million estimated registered users as of June 2012, Twitter has become a credible medium of sentiment and opinion expression. It is also a notable medium for information dissemination, including breaking news on diverse issues, since its launch in 2006. Many organisations, individuals and even government bodies follow activities on the network in order to learn how their audience reacts to tweets that affect them. Postings on Twitter (known as tweets) can be used to analyse patterns associated with events by detecting the dynamics of the tweets. A common way of labelling a tweet is to include a number of hashtags that describe its contents, and Association Rule Mining can find the likelihood of co-occurrence of hashtags. In this paper, we propose the use of temporal Association Rule Mining to detect rule dynamics, and consequently the dynamics of tweets. We have named our methodology Transaction-based Rule Change Mining (TRCM). A number of patterns are identifiable in these rule dynamics, including new rules, emerging rules, unexpected rules and 'dead' rules. The linkage between the different types of rule dynamics is also investigated experimentally in this paper.
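
A compact sketch of the rule-dynamics idea (toy data and thresholds; TRCM itself defines richer categories such as emerging and unexpected rules): mine pairwise hashtag rules in two time slots, then compare the rule sets.

    from itertools import combinations

    # Mine pairwise hashtag association rules per time slot, then diff the
    # rule sets across slots to surface 'new' and 'dead' rules.

    def mine_rules(tweets, min_support=0.3, min_conf=0.6):
        n, count, pair = len(tweets), {}, {}
        for tags in tweets:                       # each tweet is a set of hashtags
            for t in tags:
                count[t] = count.get(t, 0) + 1
            for a, b in combinations(sorted(tags), 2):
                pair[(a, b)] = pair.get((a, b), 0) + 1
        rules = set()
        for (a, b), c in pair.items():
            if c / n >= min_support:
                if c / count[a] >= min_conf:
                    rules.add((a, b))             # rule a -> b
                if c / count[b] >= min_conf:
                    rules.add((b, a))             # rule b -> a
        return rules

    slot1 = [{"#ai", "#ml"}, {"#ai", "#ml"}, {"#news"}]
    slot2 = [{"#ai", "#robots"}, {"#ai", "#robots"}, {"#news"}]
    old, new = mine_rules(slot1), mine_rules(slot2)
    print("new rules:", new - old)     # appear only in the later slot
    print("dead rules:", old - new)    # no longer present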

Relevance:

100.00%

Publisher:

Abstract:

Theorem-proving is a one-player game. The history of computer programs as players goes back to 1956 and the ‘LT’ Logic Theory Machine of Newell, Shaw and Simon. In game-playing terms, the ‘initial position’ is the core set of axioms chosen for the particular logic, and the ‘moves’ are the rules of inference. Now, the Univalent Foundations Program at IAS Princeton and the resulting ‘HoTT’ book on Homotopy Type Theory have demonstrated the success of a new kind of experimental mathematics using computer theorem proving.
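
A toy 'game' in Lean 4 makes the analogy concrete (an illustrative example, not drawn from the HoTT book): the theorem is the goal position, and each inference step is a move.

    -- A tiny theorem-proving "game": the goal is the position to reach
    -- and each inference step is a move. (Toy example in Lean 4.)
    theorem and_swap (p q : Prop) : p ∧ q → q ∧ p :=
      fun h => ⟨h.right, h.left⟩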

Relevance:

100.00%

Publisher:

Abstract:

Paraconsistent logics are non-classical logics which allow non-trivial and consistent reasoning about inconsistent axioms. They have been proposed as a formal basis for handling inconsistent data, as commonly arise in human enterprises, and as methods for fuzzy reasoning, with applications in Artificial Intelligence and the control of complex systems. Formalisations of paraconsistent logics usually require heroic mathematical efforts to provide a consistent axiomatisation of an inconsistent system. Here we use transreal arithmetic, which is known to be consistent, to arithmetise a paraconsistent logic. This is theoretically simple and should lead to efficient computer implementations. We introduce the metalogical principle of monotonicity, which is a very simple way of making logics paraconsistent. Our logic has dialetheaic truth values which are both False and True. It allows contradictory propositions and variable contradictions, but blocks literal contradictions. Thus literal reasoning, in this logic, forms an on-the-fly, syntactic partition of the propositions into internally consistent sets. We show how the set of all paraconsistent possible worlds can be represented in a transreal space. During the development of our logic we discuss how other paraconsistent logics could be arithmetised in transreal arithmetic.
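
One plausible toy encoding of such a truth space in Python (an assumption for illustration, not the paper's construction): False and True sit at the transreal infinities, fuzzy values lie between them, and nullity serves as the dialetheaic value that absorbs through every connective.

    # Toy transreal truth space (illustrative assumption, not the paper's
    # exact construction): False = -inf, True = +inf, fuzzy values between,
    # and a nullity sentinel as the dialetheaic both-False-and-True value.

    NULLITY = object()                 # stands in for transreal nullity
    F, T = float("-inf"), float("inf")

    def t_not(a):
        return NULLITY if a is NULLITY else -a

    def t_and(a, b):
        return NULLITY if NULLITY in (a, b) else min(a, b)

    def t_or(a, b):
        return NULLITY if NULLITY in (a, b) else max(a, b)

    print(t_not(NULLITY) is NULLITY)    # a contradiction stays a contradiction
    print(t_and(NULLITY, F) is NULLITY) # nullity absorbs, without trivialising
    print(t_or(0.5, F))                 # fuzzy disjunction -> 0.5

Note the paraconsistent behaviour sketched here: a proposition valued NULLITY does not force unrelated propositions to become True, so a contradiction stays contained instead of exploding classically.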