61 results for Artificial Intelligence, Constraint Programming, set variables, representation


Relevance: 100.00%

Abstract:

This paper presents some brief considerations on the role of Computational Logic in the construction of Artificial Intelligence systems and in programming in general. It does not address how the many problems in AI can be solved but, rather more modestly, tries to point out some advantages of Computational Logic as a tool for the AI scientist in his quest. It addresses the interaction between declarative and procedural views of programs (deduction and action), the impact of the intrinsic limitations of logic, the relationship with other apparently competing computational paradigms, and finally discusses implementation-related issues, such as the efficiency of current implementations and their capability for efficiently exploiting existing and future sequential and parallel hardware. The purpose of the discussion is in no way to present Computational Logic as the unique overall vehicle for the development of intelligent systems (in the firm belief that such a panacea is yet to be found) but rather to stress its strengths in providing reasonable solutions to several aspects of the task.

Relevance: 100.00%

Abstract:

The purpose of this document is to serve as the printed material for the seminar "An Introductory Course on Constraint Logic Programming". The intended audience of this seminar is industrial programmers with a degree in Computer Science but little previous experience with constraint programming. The seminar itself was field-tested, prior to the writing of this document, with a group of the application programmers of Esprit project P23182, "VOCAL", aimed at developing an application for scheduling field maintenance tasks in the context of an electric utility company. The contents of this paper follow essentially the flow of the seminar slides, with some differences. These differences stem from our perception, based on the experience of teaching the seminar, that the technical aspects are the ones which need more attention and clearer explanations in the written version. Thus, this document includes more examples than the slides, more exercises (and their solutions), as well as four additional programming projects, with which we hope the reader will obtain a clearer view of the process of developing and tuning programs using CLP. On the other hand, several parts of the seminar have been left out: those related to the account of fields and applications in which C(L)P is useful, and the enumeration of available C(L)P tools. We feel that the slides are clear enough and that, for more information on available tools, the interested reader will find more up-to-date information by browsing the Web or asking the vendors directly. More detail in this direction would in any case boil down to summarizing a user manual, which is not the aim of this document.
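
Since the seminar targets programmers new to constraint programming, a minimal sketch may help convey the flavour of constrain-and-generate programs. The example below is not CLP (there is no Prolog engine or propagation); it is a plain Python backtracking search over finite domains, and the task names and constraints are invented for illustration, loosely echoing the VOCAL scheduling theme:

```python
# Toy constrain-and-generate search: assign maintenance tasks to time slots.
# Pure-Python stand-in for a CLP(FD) program; all names are illustrative.

def solve(variables, domains, constraints, assignment=None):
    """Depth-first search with constraint checking (no propagation)."""
    assignment = assignment or {}
    if len(assignment) == len(variables):
        return dict(assignment)
    var = next(v for v in variables if v not in assignment)
    for value in domains[var]:
        assignment[var] = value
        if all(c(assignment) for c in constraints):
            result = solve(variables, domains, constraints, assignment)
            if result:
                return result
        del assignment[var]
    return None

tasks = ["inspect_line", "replace_fuse", "test_relay"]
slots = {t: [1, 2, 3] for t in tasks}          # three available time slots
constraints = [
    # Partial assignments are checked, so guard for missing keys.
    lambda a: len(set(a.values())) == len(a),  # one task per slot
    lambda a: ("inspect_line" not in a or "test_relay" not in a
               or a["inspect_line"] < a["test_relay"]),  # precedence
]

print(solve(tasks, slots, constraints))
# e.g. {'inspect_line': 1, 'replace_fuse': 2, 'test_relay': 3}
```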

Relevance: 100.00%

Abstract:

The term "Logic Programming" refers to a variety of computer languages and execution models which are based on the traditional concept of Symbolic Logic. The expressive power of these languages offers promise to be of great assistance in facing the programming challenges of present and future symbolic processing applications in Artificial Intelligence, Knowledge-based systems, and many other areas of computing. The sequential execution speed of logic programs has been greatly improved since the advent of the first interpreters. However, higher inference speeds are still required in order to meet the demands of applications such as those contemplated for next generation computer systems. The execution of logic programs in parallel is currently considered a promising strategy for attaining such inference speeds. Logic Programming in turn appears as a suitable programming paradigm for parallel architectures because of the many opportunities for parallel execution present in the implementation of logic programs. This dissertation presents an efficient parallel execution model for logic programs. The model is described from the source language level down to an "Abstract Machine" level suitable for direct implementation on existing parallel systems or for the design of special purpose parallel architectures. Few assumptions are made at the source language level and therefore the techniques developed and the general Abstract Machine design are applicable to a variety of logic (and also functional) languages. These techniques offer efficient solutions to several areas of parallel Logic Programming implementation previously considered problematic or a source of considerable overhead, such as the detection and handling of variable binding conflicts in AND-Parallelism, the specification of control and management of the execution tree, the treatment of distributed backtracking, and goal scheduling and memory management issues, etc. A parallel Abstract Machine design is offered, specifying data areas, operation, and a suitable instruction set. This design is based on extending to a parallel environment the techniques introduced by the Warren Abstract Machine, which have already made very fast and space efficient sequential systems a reality. Therefore, the model herein presented is capable of retaining sequential execution speed similar to that of high performance sequential systems, while extracting additional gains in speed by efficiently implementing parallel execution. These claims are supported by simulations of the Abstract Machine on sample programs.

Relevance: 100.00%

Abstract:

Probabilistic graphical models are a major research field in artificial intelligence nowadays. The scope of this work is the study of directed graphical models for the representation of discrete distributions. Two of the main research topics in this area are performing inference over graphical models and learning graphical models from data. Traditionally, the inference process and the learning process have been treated separately, but given that the learned model's structure determines the inference complexity, this kind of strategy can produce very inefficient models. With the purpose of learning thinner models, in this master's thesis we propose a new model for the representation of network polynomials, which we call polynomial trees. Polynomial trees are a complementary representation for Bayesian networks that allows an efficient evaluation of the inference complexity and provides a framework for exact inference. We also propose a set of methods for the incremental compilation of polynomial trees and an algorithm for learning polynomial trees from data using a greedy score+search method that includes the inference complexity as a penalization in the scoring function.
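
For context, a network polynomial (in the sense of Darwiche's differential approach) is a multilinear function over evidence indicators and network parameters whose evaluations yield marginal probabilities. A minimal Python sketch for a two-node network A → B, with all numbers invented for illustration:

```python
# Network polynomial of a tiny Bayesian network A -> B (binary variables).
# f = sum over (a, b) of lambda_a * lambda_b * theta_a * theta_{b|a}.
# Setting all indicators to 1 marginalizes; clamping encodes evidence.

theta_a = {0: 0.6, 1: 0.4}                     # P(A)
theta_b_given_a = {(0, 0): 0.9, (1, 0): 0.1,   # P(B=b | A=a), keyed (b, a)
                   (0, 1): 0.2, (1, 1): 0.8}

def network_poly(lam_a, lam_b):
    """Evaluate the network polynomial for given evidence indicators."""
    return sum(lam_a[a] * lam_b[b] * theta_a[a] * theta_b_given_a[(b, a)]
               for a in (0, 1) for b in (0, 1))

# P(B = 1): clamp B's indicator, leave A's at 1 (summed out).
print(network_poly({0: 1, 1: 1}, {0: 0, 1: 1}))   # 0.6*0.1 + 0.4*0.8 = 0.38
```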

Relevance: 100.00%

Abstract:

Objectives: A recently introduced pragmatic scheme promises to be a useful catalog of interneuron names. We sought to automatically classify digitally reconstructed interneuronal morphologies according to this scheme. Simultaneously, we sought to discover possible subtypes of these types that might emerge during automatic classification (clustering). We also investigated which morphometric properties were most relevant for this classification.

Materials and methods: A set of 118 digitally reconstructed interneuronal morphologies was classified into the common basket (CB), horse-tail (HT), large basket (LB), and Martinotti (MA) interneuron types by 42 of the world's leading neuroscientists, and quantified by five simple morphometric properties of the axon and four of the dendrites. We labeled each neuron with the type most commonly assigned to it by the experts. We then removed this class information for each type separately, and applied semi-supervised clustering to those cells (keeping the others' cluster membership fixed), to assess separation from other types and look for the formation of new groups (subtypes). We performed this same experiment unlabeling the cells of two types at a time, and of half the cells of a single type at a time. The clustering model is a finite mixture of Gaussians which we adapted for the estimation of local (per-cluster) feature relevance. We performed the described experiments on three different subsets of the data, formed according to how many experts agreed on type membership: at least 18 experts (the full data set), at least 21 (73 neurons), and at least 26 (47 neurons).

Results: Interneurons with more reliable type labels were classified more accurately. We classified HT cells with 100% accuracy, MA cells with 73% accuracy, and CB and LB cells with 56% and 58% accuracy, respectively. We identified three subtypes of the MA type, one subtype each of the CB and LB types, and no subtypes of HT (it was a single, homogeneous type). We obtained maximum (adapted) Silhouette width and ARI values of 1, 0.83, 0.79, and 0.42 when unlabeling the HT, CB, LB, and MA types, respectively, confirming the quality of the formed cluster solutions. The subtypes identified when unlabeling a single type also emerged when unlabeling two types at a time, confirming their validity. Axonal morphometric properties were more relevant than dendritic ones, with the axonal polar histogram length in the [π, 2π) angle interval being particularly useful.

Conclusions: The applied semi-supervised clustering method can accurately discriminate among the CB, HT, LB, and MA interneuron types while discovering potential subtypes, and is therefore useful for neuronal classification. The discovery of potential subtypes suggests that some of these types are more heterogeneous than previously thought. Finally, axonal variables seem to be more relevant than dendritic ones for distinguishing among the CB, HT, LB, and MA interneuron types.
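
The clustering model described above (a finite Gaussian mixture fit with some cells' cluster memberships held fixed) can be sketched as EM in which the responsibilities of labeled points are clamped to their known component. Below is a minimal, illustrative numpy version with diagonal covariances and synthetic data; the paper's adaptation for per-cluster feature relevance is not reproduced:

```python
import numpy as np

def semi_supervised_gmm(X, labels, k, iters=50, eps=1e-6):
    """EM for a diagonal-covariance Gaussian mixture where points with
    labels[i] >= 0 have their responsibility clamped to that component."""
    n, d = X.shape
    rng = np.random.default_rng(0)
    mu = X[rng.choice(n, k, replace=False)]        # init means from data
    var = np.ones((k, d)) * X.var(axis=0)
    pi = np.full(k, 1.0 / k)
    for _ in range(iters):
        # E-step: per-component log densities -> responsibilities.
        log_r = (-0.5 * (((X[:, None, :] - mu) ** 2) / var
                         + np.log(2 * np.pi * var)).sum(axis=2)
                 + np.log(pi))
        r = np.exp(log_r - log_r.max(axis=1, keepdims=True))
        r /= r.sum(axis=1, keepdims=True)
        # Clamp responsibilities of labeled points (the semi-supervision).
        fixed = labels >= 0
        r[fixed] = np.eye(k)[labels[fixed]]
        # M-step: weighted means, variances, and mixing proportions.
        nk = r.sum(axis=0) + eps
        mu = (r.T @ X) / nk[:, None]
        var = (r.T @ (X ** 2)) / nk[:, None] - mu ** 2 + eps
        pi = nk / n
    return mu, var, pi, r

# Synthetic demo: two clusters, each half-labeled (-1 = unlabeled).
X = np.vstack([np.random.randn(30, 2), np.random.randn(30, 2) + 4])
labels = np.array([0] * 15 + [-1] * 30 + [1] * 15)
mu, var, pi, r = semi_supervised_gmm(X, labels, k=2)
print(np.round(mu, 2))
```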

Relevance: 100.00%

Abstract:

This thesis studies the representation, modeling, and comparison of collections in the Semantic Web using ontologies. Collections, understood as groups of objects or elements with their own identity, are constructions that appear frequently in almost all areas of the real world. Therefore, it is essential to have conceptualizations of these abstract structures, and representations of those conceptualizations in computer systems, that properly define their semantics. While in many areas of Computer Science and Artificial Intelligence, such as programming, databases, or information retrieval, collections have been extensively studied and representations have been developed that match many conceptualizations, in the Semantic Web their study has been quite limited. In fact, there are few proposals so far for representing collections using ontologies, and those that exist cover only some types of collections and have important limitations. This hinders the proper representation of collections and complicates other common tasks such as comparing collections, which is critical in usual operations such as semantic search or linking data on the Semantic Web. To solve this problem, this thesis proposes a model of collections based on a new classification of collections according to their structural characteristics (homogeneity, uniqueness, order, and cardinality). This classification makes it possible to define a taxonomy with up to 16 different types of collections. Among other advantages, the new classification makes it possible to exploit the semantics of the structural properties of each type of collection in order to perform comparisons using the most appropriate similarity and dissimilarity functions. Accordingly, the thesis also develops a new catalog of similarity functions for the different types of collections, containing the best-known (dis)similarity functions as well as some new ones. The proposal is implemented through two parallel ontologies: the E-Collections ontology, which represents the different types of collections in the taxonomy and their axiomatization, and the SIMEON ontology (Similarity Measures Ontology), which represents the types of (dis)similarity functions for each type of collection. Thanks to these ontologies, once two collections are represented as instances of the most appropriate class of the E-Collections ontology, it is automatically known which (dis)similarity functions of the SIMEON ontology can be used to compare them. Finally, the feasibility and usefulness of this proposal for modeling and comparing collections is demonstrated in the field of oenology, by applying both the E-Collections and SIMEON ontologies to the representation and comparison of wines with the E-Baco ontology.
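
To illustrate how structural properties can drive the choice of comparison function, here is a hedged Python sketch: the dispatch on uniqueness and order mirrors the taxonomy's characteristics, but the particular functions (Jaccard, a multiset Dice, an LCS ratio) are illustrative stand-ins, not the SIMEON catalog:

```python
# Pick a similarity function from a collection's structural properties.
# The mapping below is illustrative; the thesis defines this via ontologies.
from collections import Counter

def jaccard(a, b):
    """Set similarity: |A ∩ B| / |A ∪ B|."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 1.0

def multiset_dice(a, b):
    """Bag similarity using element multiplicities."""
    ca, cb = Counter(a), Counter(b)
    overlap = sum((ca & cb).values())
    return 2 * overlap / (sum(ca.values()) + sum(cb.values()))

def lcs_ratio(a, b):
    """Sequence similarity via longest common subsequence length."""
    m, n = len(a), len(b)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m):
        for j in range(n):
            dp[i + 1][j + 1] = (dp[i][j] + 1 if a[i] == b[j]
                                else max(dp[i][j + 1], dp[i + 1][j]))
    return 2 * dp[m][n] / (m + n) if m + n else 1.0

def similarity(a, b, *, unique, ordered):
    """Dispatch on two structural properties (uniqueness, order)."""
    if ordered:
        return lcs_ratio(a, b)                               # sequences
    return jaccard(a, b) if unique else multiset_dice(a, b)  # sets vs. bags

print(similarity("abc", "abd", unique=True, ordered=True))            # LCS
print(similarity([1, 2, 2], [2, 2, 3], unique=False, ordered=False))  # bag
```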

Relevance: 100.00%

Abstract:

The use of externally bonded (EB) fibre-reinforced polymer (FRP) composites has gained acceptance in the construction engineering community during the last two decades, particularly for the rehabilitation of reinforced concrete (RC) structures. Currently, to increase the shear resistance of RC beams, FRP sheets are externally bonded (EB-FRP) to the side surfaces of the beams in different configurations. Of more recent application, the near-surface mounted FRP bar (NSM-FRP) method is another technique successfully used to increase shear resistance: FRP rods are embedded into grooves prepared in the concrete cover of the side faces of RC beams, and the technique has positioned itself as one of the best methods for strengthening and rehabilitating RC structures, both for its ease of installation and maintenance and for its effectiveness in increasing load capacity. While flexural strengthening has been widely developed and studied, the same does not hold for shear strengthening, mainly because of its great complexity; yet more research on it is needed if the design criteria for RC structures, which are based on avoiding shear failure because of its catastrophic consequences, are to be preserved. Indeed, accurately calculating the shear capacity of FRP shear-strengthened RC beams remains a complex challenge that has not yet been fully resolved, owing to the numerous variables involved. This lack of information and of design guidance justifies this doctoral thesis, whose objective is to develop methodologies for estimating the capacity of NSM-FRP shear-strengthened RC beams by approaching the problem from a different angle than numerical modeling, using artificial intelligence techniques. Two alternative approaches are developed. The first consists of an artificial neural network (ANN) that adequately predicts the shear resistance of beams strengthened with this method, trained on previous experiments. The network is also used to study the real influence of beam and strengthening parameters on shear resistance, with the aim of achieving safer designs; an optimal network configuration requires careful selection among the many geometric and material parameters that may influence the beam's resistant behavior, for which several studies and tests were carried out. The second approach derives a simple yet accurate design equation for estimating the capacity of NSM-FRP shear-strengthened beams, suitable for proposal to the main design guides. To this end, a multi-objective optimization problem is formulated from experimental results on RC beams with and without NSM-FRP, with the ANN results also used as a filter to choose the parameters included in the design equations. The optimization problem is solved with genetic algorithms, specifically the NSGA-II algorithm, since they are better suited to problems with several objective functions than classical optimization methods. The predictions of both methods are compared with each other and with experimental results to establish the advantages and drawbacks of each methodology, and a parametric sensitivity study is carried out with both approaches. Finally, a statistical reliability analysis of the design equations derived from the multi-objective optimization makes it possible to estimate the capacity of an NSM-FRP shear-strengthened beam within a safety margin specified a priori.
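
A minimal sketch of the first methodology (an ANN regressor mapping beam and strengthening parameters to shear capacity), assuming scikit-learn is available; the feature names, the fabricated data-generating relationship, and the network size are invented for illustration and do not reflect the thesis's actual experimental database or architecture:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

# Synthetic stand-in data: [beam width, effective depth, concrete strength,
# FRP bar area, bar spacing] -> shear capacity. Real work would use the
# experimental database of NSM-FRP shear tests.
rng = np.random.default_rng(0)
X = rng.uniform([150, 250, 20, 50, 100], [400, 600, 60, 200, 300],
                size=(200, 5))
y = (0.05 * X[:, 0] + 0.1 * X[:, 1] + 2.0 * X[:, 2] + 0.3 * X[:, 3]
     - 0.05 * X[:, 4] + rng.normal(0, 5, 200))    # fabricated relationship

model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(16, 8),
                                   max_iter=5000, random_state=0))
model.fit(X[:150], y[:150])
print("R^2 on held-out beams:", round(model.score(X[150:], y[150:]), 3))
```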

Relevance: 100.00%

Abstract:

The data acquired by Remote Sensing systems allow obtaining thematic maps of the earth's surface by classifying the registered images, which implies identifying and categorizing all pixels into land cover classes. Traditionally, methods based on statistical parameters have been widely used, although they show some disadvantages. Several authors indicate that methods based on artificial intelligence may be a good alternative. Thus, fuzzy classifiers, which are based on fuzzy logic, include additional information in the classification process through rule-based systems. In this work, we propose the use of a genetic algorithm (GA) to select the optimal and minimal set of fuzzy rules to classify remotely sensed images. The input information for the GA has been obtained through the training space determined by two uncorrelated spectral bands (2D scatter diagrams), which has been irregularly divided by five linguistic terms defined in each band. The proposed methodology has been applied to Landsat-TM images and has shown that this set of rules provides a higher level of accuracy in the classification process.
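
As a hedged sketch of the approach, the toy GA below evolves bit strings that switch individual fuzzy-style rules on or off, scoring each subset by classification accuracy minus a size penalty; the rule pool, data, and GA settings are invented and far simpler than the linguistic-term partition described above:

```python
import random

random.seed(0)

# Toy rule pool: each rule is (predicate over a sample, class label).
# Samples are (band1, band2) reflectance pairs; labels are land covers.
RULES = [(lambda s, t=t: s[0] > t, "urban") for t in (0.3, 0.5, 0.7)] + \
        [(lambda s, t=t: s[1] > t, "vegetation") for t in (0.2, 0.4, 0.6)]

DATA = [((0.8, 0.1), "urban"), ((0.6, 0.2), "urban"),
        ((0.1, 0.7), "vegetation"), ((0.2, 0.5), "vegetation")]

def classify(sample, mask):
    """First active rule that fires decides the class (toy inference)."""
    for bit, (pred, label) in zip(mask, RULES):
        if bit and pred(sample):
            return label
    return None

def fitness(mask):
    acc = sum(classify(s, mask) == c for s, c in DATA) / len(DATA)
    return acc - 0.02 * sum(mask)          # prefer small rule sets

def ga(pop_size=20, gens=40, p_mut=0.1):
    pop = [[random.randint(0, 1) for _ in RULES] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[:pop_size // 2]
        children = []
        while len(children) < pop_size - len(survivors):
            a, b = random.sample(survivors, 2)
            cut = random.randrange(1, len(RULES))   # one-point crossover
            child = a[:cut] + b[cut:]
            children.append([1 - g if random.random() < p_mut else g
                             for g in child])       # bit-flip mutation
        pop = survivors + children
    return max(pop, key=fitness)

best = ga()
print("selected rules:", best, "fitness:", round(fitness(best), 3))
```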

Relevance: 100.00%

Abstract:

The progressive depletion of fossil fuels, together with their high share of the energy supply of modern society, means that they must soon be replaced by renewable sources. However, the dispersion and intermittency of renewable energy production also demand cost reductions in energy storage, with hydrogen as an energy carrier. It is therefore necessary to develop technologies for hydrogen production from all renewable sources, hydrogen storage technologies, and technologies for producing energy from hydrogen through fuel cells and cogeneration and trigeneration systems. To propel this technological development, in which hydrogen plays a key role as a store and carrier of renewable energy, the National Centre of Hydrogen and Fuel Cell Technology Experimentation in Spain is equipped with installations that make it possible to design, develop, verify, certify, approve, test, and measure scientific and technological developments; more importantly, the facility ensures continuous operation 24 hours a day, 365 days a year. At the same time, the system is scalable, allowing the continuous incorporation of new technologies as they are developed, so that their integration can be verified at the same time as the validity of their development is checked. The transformation sector can be said to be the heart of the system: without neglecting the other sectors, it must prove the validity of hydrogen as a carrier and storage medium, and important efforts are required to demonstrate that fuel cells or internal combustion systems can recover the energy stored in hydrogen at prices competitive with conventional systems. The multiple roles that fuel cells must fulfil under different conditions of operation require many different sizes and applications to cover their operating range. The fourth area, integration, is an essential complement within the installation: not only must the electricity produced be integrated, but also the hydrogen used and the heat generated in the process of using hydrogen energy. Managing energy in its three forms, chemical (hydrogen), electrical, and thermal, is a complicated integration task that requires advanced logic and artificial intelligence to ensure maximum energy efficiency while achieving optimum utilization. Finally, the verification and approval of developments across the entire production system has been assessed, with the whole installation ultimately serving as a demonstrator to facilitate the simultaneous evolution of hydrogen production, storage, and distribution technologies and of fuel cells.

Relevance: 100.00%

Abstract:

Received signal strength-based localization systems usually rely on a calibration process that aims at characterizing the propagation channel. However, due to changing environmental dynamics, the behavior of the channel may change after some time; thus, recalibration is necessary to maintain positioning accuracy. This paper proposes a dynamic calibration method to initially calibrate and subsequently update the parameters of the propagation channel model using a Least Mean Squares approach. The method assumes that each anchor node in the localization infrastructure is characterized by its own propagation channel model. In practice, a set of sniffers is used to collect RSS samples, which are used to automatically calibrate each channel model by iteratively minimizing the positioning error. The proposed method is validated through numerical simulation, showing that the positioning error of the mobile nodes is effectively reduced. Furthermore, the method has a very low computational cost; therefore, it can be used in real-time operation by wireless resource-constrained nodes.
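
For illustration, a numpy sketch of LMS-style calibration of a per-anchor log-distance path-loss model, RSS = P0 − 10·n·log10(d). The model form is a standard assumption (the paper's exact channel model may differ), and the normalized step size and synthetic data are invented; note also that the paper iteratively minimizes the positioning error, whereas this sketch fits the channel parameters to RSS samples directly, which is the simpler core idea:

```python
import numpy as np

def lms_calibrate(distances, rss_samples, p0=-40.0, n=2.0, mu=0.5, epochs=100):
    """Normalized-LMS fit of (p0, n) in rss = p0 - 10 * n * log10(d)."""
    for _ in range(epochs):
        for d, rss in zip(distances, rss_samples):
            x = -10.0 * np.log10(d)       # regressor for the exponent n
            err = rss - (p0 + n * x)      # instantaneous prediction error
            norm = 1.0 + x * x            # NLMS normalization for stability
            p0 += mu * err / norm         # gradient step on the offset
            n += mu * err * x / norm      # gradient step on the exponent
    return p0, n

# Synthetic channel: true p0 = -38 dBm, true n = 2.7, plus shadowing noise.
rng = np.random.default_rng(1)
d = rng.uniform(1, 50, 200)
rss = -38 - 10 * 2.7 * np.log10(d) + rng.normal(0, 2, 200)

p0_hat, n_hat = lms_calibrate(d, rss)
print(f"estimated p0 = {p0_hat:.1f} dBm, n = {n_hat:.2f}")
```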

Relevance: 100.00%

Abstract:

At the beginning of the 1990s, ontology development was similar to an art: ontology developers had no clear guidelines on how to build ontologies, only some design criteria to be followed. Work on principles, methods and methodologies, together with supporting technologies and languages, made ontology development become an engineering discipline, the so-called Ontology Engineering. Ontology Engineering refers to the set of activities that concern the ontology development process and the ontology life cycle, the methods and methodologies for building ontologies, and the tool suites and languages that support them. Thanks to the work done in the Ontology Engineering field, the development of ontologies within and between teams has increased and improved, as has the possibility of reusing ontologies in other developments and in final applications. Currently, ontologies are widely used in (a) Knowledge Engineering, Artificial Intelligence and Computer Science, (b) applications related to knowledge management, natural language processing, e-commerce, intelligent information integration, information retrieval, database design and integration, bio-informatics, and education, and (c) the Semantic Web, the Semantic Grid, and the Linked Data initiative. In this paper, we provide an overview of Ontology Engineering, mentioning the most outstanding and widely used methodologies, languages, and tools for building ontologies. In addition, we include some words on how all these elements can be used in the Linked Data initiative.

Relevance: 100.00%

Abstract:

DynaLearn (http://www.DynaLearn.eu) develops a cognitive artefact that engages learners in an active learning-by-modelling process to develop conceptual system knowledge. Learners create external representations using diagrams. The diagrams capture conceptual knowledge using the Garp3 Qualitative Reasoning (QR) formalism [2]. The expressions can be simulated, confronting learners with their logical consequences. To further aid learners, DynaLearn employs a sequence of knowledge representations (Learning Spaces, LS) with increasing complexity in terms of the modelling ingredients a learner can use [1]. An online repository contains QR models created by experts/teachers and learners. The server runs semantic services [4] to generate feedback at the request of learners via the workbench. The feedback is communicated to the learner via a set of virtual characters, each having its own competence [3]. A specific feedback thus incorporates three aspects: content, character appearance, and a didactic setting (e.g. quiz mode). In the interactive event we will demonstrate the latest achievements of the DynaLearn project: first, the six learning spaces for learners to work with; second, the generation of feedback relevant to the individual needs of a learner using Semantic Web technology; and third, the verbalization of the feedback via different animated virtual characters, notably Basic help, Critic, Recommender, Quizmaster, and Teachable agent.

Relevance: 100.00%

Abstract:

We present an evaluation of a spoken language dialogue system with a module for the management of user-related information, stored as user preferences and privileges. The flexibility of our dialogue management approach, based on Bayesian Networks (BN), together with a contextual information module, which performs different strategies for handling such information, allows us to include user information as a new level in the Context Manager hierarchy. We propose a set of objective and subjective metrics to measure the relevance of the different contextual information sources. The analysis of our evaluation scenarios shows that the relevance of the short-term information (i.e. the system status) remains fairly stable throughout the dialogue, whereas the dialogue history and the user profile (i.e. the middle-term and the long-term information, respectively) play a complementary role, with their usefulness evolving as the dialogue progresses.

Relevance: 100.00%

Abstract:

Computed Tomography imaging is a non-invasive alternative for observing soil structures, mainly the pore space. In soil data, the pore space corresponds to empty or free space, in the sense that no solid material is present there, only fluids; since fluid transport depends on the pore space, it is important to identify the regions of an image that correspond to pore zones. In this paper we present a methodology for detecting pore space and solid soil based on the synergy of image processing, pattern recognition, and artificial intelligence. Mathematical morphology is the image processing technique used for image enhancement. Then, in order to find groups of pixels with similar gray-level intensity (more or less homogeneous groups), a novel image sub-segmentation based on a Possibilistic Fuzzy c-Means (PFCM) clustering algorithm is applied. Finally, since Artificial Neural Networks (ANNs) are very efficient in demanding, large-scale, and generic pattern recognition applications, a classifier based on an artificial neural network is applied to classify soil images into two classes, pore space and solid soil.
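
To make the final classification stage concrete, here is a minimal scikit-learn sketch of a binary pixel classifier on gray-level values; the synthetic data and the small MLP are invented stand-ins, and the mathematical-morphology and PFCM sub-segmentation steps are not reproduced:

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

# Synthetic stand-in for CT slices: pore pixels are darker than solid soil.
rng = np.random.default_rng(0)
pore = rng.normal(60, 15, (500, 1))     # gray levels of pore-space pixels
solid = rng.normal(160, 20, (500, 1))   # gray levels of solid-soil pixels
X = np.vstack([pore, solid])
y = np.array([0] * 500 + [1] * 500)     # 0 = pore space, 1 = solid soil

clf = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
clf.fit(X, y)

# Classify a few new gray levels (e.g., from a fresh CT slice).
print(clf.predict([[55], [150], [180]]))   # expected: pore, solid, solid
```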

Relevance: 100.00%

Abstract:

Diabetes is nowadays one of the most common diseases, across all populations and age groups. Different artificial intelligence techniques have been applied to the diabetes problem. This research proposes artificial metaplasticity on a multilayer perceptron (AMMLP) as a prediction model for diabetes. The Pima Indians diabetes dataset was used to test the proposed AMMLP model. The results obtained by the AMMLP were compared with other algorithms, recently proposed by other researchers, that were applied to the same database. The best result obtained so far with the AMMLP algorithm is 89.93%.
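
A heavily simplified rendering of the metaplasticity idea (not the AMMLP itself): scale each gradient step by the inverse of an assumed Gaussian input density, so that infrequent patterns update the weights more strongly. The logistic model, the weighting constants, and the synthetic data are all invented for illustration:

```python
import numpy as np

# Metaplasticity-style training of a logistic model: rare inputs (low
# estimated density) get larger, capped learning-rate boosts.
rng = np.random.default_rng(0)
n, d = 400, 8                              # stand-in for Pima-like data
X = rng.normal(size=(n, d))
true_w = rng.normal(size=d)
y = (X @ true_w + rng.normal(0, 0.5, n) > 0).astype(float)

w, b, lr = np.zeros(d), 0.0, 0.05
for epoch in range(30):
    for i in rng.permutation(n):
        x = X[i]
        # Inverse-Gaussian-density weighting, normalized so a typical
        # point (||x||^2 ~ d) gets scale 1, and capped for stability.
        scale = min(np.exp(0.5 * (x @ x - d)), 10.0)
        p = 1.0 / (1.0 + np.exp(-(x @ w + b)))
        grad = (p - y[i]) * scale          # metaplasticity-weighted gradient
        w -= lr * grad * x
        b -= lr * grad

acc = (((X @ w + b) > 0).astype(float) == y).mean()
print(f"training accuracy: {acc:.3f}")
```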