915 results for Distributed artificial intelligence - multiagent systems


Relevance: 100.00%

Abstract:

Objective: The main purpose of this research is the novel use of artificial metaplasticity on a multilayer perceptron (AMMLP) as a data mining tool for predicting the outcome of patients with acquired brain injury (ABI) after cognitive rehabilitation. The final goal is to increase knowledge in the field of rehabilitation theory based on cognitive affectation.

Methods and materials: The data set used in this study contains records belonging to 123 ABI patients with moderate to severe cognitive affectation (according to the Glasgow Coma Scale) who underwent rehabilitation at Institut Guttmann Neurorehabilitation Hospital (IG) using the tele-rehabilitation platform PREVIRNEC©. The variables included in the analysis comprise the initial neuropsychological evaluation of the patient (cognitive affectation profile), the results of the rehabilitation tasks performed by the patient in PREVIRNEC© and the outcome of the patient after a 3–5 month treatment. To predict the treatment outcome, we apply and compare three data mining techniques: the AMMLP model, a backpropagation neural network (BPNN) and a C4.5 decision tree.

Results: The prediction performance of the models was measured by ten-fold cross-validation and several architectures were tested. The results obtained by the AMMLP model are clearly superior, with an average predictive performance of 91.56%. The BPNN and C4.5 models have an average prediction accuracy of 80.18% and 89.91%, respectively. The best single AMMLP model provided a specificity of 92.38%, a sensitivity of 91.76% and a prediction accuracy of 92.07%.

Conclusions: The prediction model presented in this study increases knowledge about the factors that contribute to an ABI patient's recovery and makes it possible to estimate treatment efficacy in individual patients. The ability to predict treatment outcomes may provide new insights toward improving effectiveness and creating personalized therapeutic interventions based on clinical evidence.
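The evaluation protocol described above, ten-fold cross-validation over several classifiers, can be sketched as follows. This is a minimal illustration with scikit-learn on placeholder data; it does not reproduce the authors' AMMLP model or the clinical dataset.

```python
# Minimal sketch of the evaluation protocol: ten-fold cross-validation
# comparing a multilayer perceptron and a decision tree.
# This is NOT the authors' AMMLP model; the data below is a stand-in.
import numpy as np
from sklearn.model_selection import cross_val_score, StratifiedKFold
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(123, 20))        # stand-in for the 123 patient records
y = rng.integers(0, 2, size=123)      # stand-in for the rehabilitation outcome

cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
models = {
    "MLP (backprop)": make_pipeline(
        StandardScaler(),
        MLPClassifier(hidden_layer_sizes=(10,), max_iter=2000, random_state=0),
    ),
    "C4.5-like tree": DecisionTreeClassifier(criterion="entropy", random_state=0),
}
for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=cv, scoring="accuracy")
    print(f"{name}: mean accuracy = {scores.mean():.3f}")
```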

Relevance: 100.00%

Abstract:

The aim of this chapter is to discuss the applicability of recently proposed knowledge modelling tools to the development of agent-based systems. The discussion is derived from real-world experience with a particular software tool called KSM (Knowledge Structure Manager). The chapter provides details about this tool and then shows the ways in which the software may be used to support the development of agent-based systems. Two multiagent systems, one in the field of telecommunications management and the other in the field of flood control, are described. Conclusions about these studies are presented, summarizing the main contributions that knowledge modelling tools can bring to the development of agent-based systems.

Relevance: 100.00%

Abstract:

Shopping agents are web-based applications that help consumers find appropriate products in the context of e-commerce. In this paper we argue for the utility of advanced model-based techniques, recently proposed in the fields of Artificial Intelligence and Knowledge Engineering, as a means of increasing the level of support provided by this type of application. We illustrate this approach with a virtual sales assistant that dynamically configures a product according to the needs and preferences of customers.

Relevance: 100.00%

Abstract:

Modern sensor technologies and simulators applied to large and complex dynamic systems (such as road traffic networks, sets of river channels, etc.) produce large amounts of behavior data that are difficult for users to interpret and analyze. Software tools that generate presentations combining text and graphics can help users understand this data. In this paper we describe the results of our research on automatic multimedia presentation generation (including text, graphics, maps, images, etc.) for interactive exploration of behavior datasets. We designed a novel user interface that combines automatically generated text and graphical resources. We describe the general knowledge-based design of our presentation generation tool. We also present applications that we developed to validate the method, and a comparison with related work.

Relevance: 100.00%

Abstract:

This paper describes a new technique, referred to as watched subgraphs, which improves the performance of BBMC, a leading state-of-the-art exact solver for the maximum clique problem (MCP). It is based on the watched literals employed by modern SAT solvers for Boolean constraint propagation: in efficient SAT algorithms, a list of clauses is kept for each literal (the clauses are said to watch the literal), so that only the clauses in the list are checked for constraint propagation when a (watched) literal is assigned during search. BBMC encodes vertex sets as bit strings, with each bit block representing a subset of vertices (and the corresponding induced subgraph) whose size matches the CPU register word. The paper proposes watching two subgraphs of critical sets during MCP search to compute a number of basic operations efficiently. Reported results validate the approach as the size and density of problem instances rise, while achieving comparable performance in the general case.
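As an illustration of the bit-string encoding mentioned above, the sketch below uses Python integers as vertex bitmasks and computes candidate sets with bitwise AND inside a toy branch-and-bound. It is only a didactic example; it does not reproduce BBMC, its colouring bounds or the watched-subgraph bookkeeping.

```python
# Toy branch-and-bound for maximum clique using the bit-string encoding:
# each vertex set is an integer bitmask, and the candidate set of a growing
# clique is computed with bitwise AND against adjacency masks.

def max_clique(adj_masks, cand=None, clique_size=0, best=0):
    """adj_masks[v] is a bitmask of the neighbours of vertex v."""
    n = len(adj_masks)
    if cand is None:
        cand = (1 << n) - 1                    # all vertices are candidates
    if cand == 0:
        return max(best, clique_size)
    if clique_size + bin(cand).count("1") <= best:
        return best                            # simple counting bound
    v = cand.bit_length() - 1                  # pick the highest candidate vertex
    # Branch 1: include v, keep only candidates adjacent to v.
    best = max_clique(adj_masks, cand & adj_masks[v], clique_size + 1, best)
    # Branch 2: exclude v.
    best = max_clique(adj_masks, cand & ~(1 << v), clique_size, best)
    return best

# Example: a triangle plus a pendant vertex -> maximum clique size 3.
edges = [(0, 1), (1, 2), (0, 2), (2, 3)]
masks = [0] * 4
for a, b in edges:
    masks[a] |= 1 << b
    masks[b] |= 1 << a
print(max_clique(masks))   # 3
```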

Relevance: 100.00%

Abstract:

The purpose of this thesis is to study the possibilities of performing problem-solving tasks in Spanish with knowledge-based systems. The first two chapters analyse the development of natural language processing techniques, paying particular attention to the logical formalisms used for language understanding, and then assess the current state of the art in natural language processing systems. Finally, we present the core of this work: a system called Sirena, which supports the acquisition, understanding, retrieval and explanation of knowledge in Spanish with knowledge-based systems. Sirena handles a large yet simple subset of Spanish, formalised by means of a logic grammar; the meaning of knowledge is based on logic, and the system has been implemented in the logic programming language Prolog II v2. Keywords: Logic Programming, Natural Language Understanding, Problem Solving, Logic Grammars, Computational Linguistics, Artificial Intelligence.

Relevance: 100.00%

Abstract:

This paper describes the application of language translation technologies for generating bus information in Spanish Sign Language (LSE: Lengua de Signos Española). In this work, two main systems have been developed: the first for translating text messages from information panels and the second for translating spoken Spanish in natural conversations at the information point of the bus company. Both systems are made up of a natural language translator (for converting a word sequence into a sequence of LSE signs) and a 3D avatar animation module (for playing back the signs). For the natural language translator, two technological approaches have been analyzed and integrated: an example-based strategy and a statistical translator. When translating spoken utterances, a speech recognizer is also needed to decode the spoken utterance into a word sequence before the language translation module. The paper includes a detailed description of the field evaluation carried out in this domain, which took place at the customer information office in Madrid and involved both real bus company employees and deaf people. The evaluation includes objective measurements from the system and information from questionnaires. In the field evaluation, the whole translation pipeline achieved a Sign Error Rate (SER) of less than 10% and a BLEU score greater than 90%.
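The SER metric reported above is computed like a word error rate over sign glosses. A minimal sketch of such a computation is given below, using a plain Levenshtein edit distance; the gloss sequences are invented for illustration and are not taken from the LSE corpus or the project's evaluation code.

```python
# Minimal sketch of a Sign Error Rate (SER) computation: a Levenshtein edit
# distance over sign glosses, analogous to word error rate.
# The gloss sequences below are illustrative only.

def sign_error_rate(reference, hypothesis):
    r, h = reference, hypothesis
    d = [[0] * (len(h) + 1) for _ in range(len(r) + 1)]
    for i in range(len(r) + 1):
        d[i][0] = i                       # deletions
    for j in range(len(h) + 1):
        d[0][j] = j                       # insertions
    for i in range(1, len(r) + 1):
        for j in range(1, len(h) + 1):
            sub = d[i - 1][j - 1] + (r[i - 1] != h[j - 1])
            d[i][j] = min(sub, d[i - 1][j] + 1, d[i][j - 1] + 1)
    return d[len(r)][len(h)] / max(len(r), 1)

ref = ["BUS", "LINE", "27", "ARRIVE", "10-MINUTES"]
hyp = ["BUS", "LINE", "27", "ARRIVE", "NOW"]
print(f"SER = {sign_error_rate(ref, hyp):.2%}")   # 20.00%
```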

Relevance: 100.00%

Abstract:

Predicting failures in a distributed system from previous events through logistic regression is a standard approach in the literature. This technique is not reliable, though, in two situations: the prediction of rare events, which do not appear in a large enough proportion for the algorithm to capture them, and environments with too many variables, where logistic regression tends to overfit while manually selecting a subset of variables to build the model is error-prone. In this paper, we address an industrial research case that presented both situations with a combination of elastic net logistic regression, a method that selects useful variables automatically, a cross-validation process on top of it, and the application of a rare-events prediction technique to reduce computation time. This process provides two layers of cross-validation that automatically obtain the optimal model complexity and the optimal model parameter values, while ensuring that even rare events are correctly predicted with a small number of training instances. We tested this method against real industrial data, obtaining a total of 60 out of 80 possible models with a 90% average model accuracy.
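The combination described above, elastic net regularisation tuned by an inner cross-validation and evaluated by an outer one, can be sketched roughly as follows with scikit-learn. The data, the class-imbalance handling and the parameter grids are placeholders; this is not the authors' industrial pipeline or their rare-events technique.

```python
# Minimal sketch of elastic-net logistic regression with two layers of
# cross-validation, in the spirit of the approach described above.
import numpy as np
from sklearn.linear_model import LogisticRegressionCV
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 40))                    # many candidate variables
y = (rng.random(500) < 0.05).astype(int)          # rare failure events (~5%)

# Inner CV: LogisticRegressionCV picks the regularisation strength (C) and the
# L1/L2 mix (l1_ratio), which performs automatic variable selection.
model = LogisticRegressionCV(
    penalty="elasticnet", solver="saga", l1_ratios=[0.1, 0.5, 0.9],
    Cs=10, cv=5, class_weight="balanced", max_iter=5000, scoring="roc_auc",
)

# Outer CV: estimates how well the tuned model generalises.
outer_scores = cross_val_score(model, X, y, cv=5, scoring="roc_auc")
print("outer-CV ROC AUC:", outer_scores.mean().round(3))
```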

Relevance: 100.00%

Abstract:

Assessing video quality is a complex task. Most pixel-based metrics do not show enough correlation between objective and subjective results, yet algorithms need to match human perception when analyzing the quality of a video sequence. To analyze the perceived quality derived from concrete video artifacts in a given region of interest, we present a novel methodology for generating test sequences that allows the impact of each individual distortion to be analyzed. From the results obtained after subjective assessment, it is possible to create psychovisual models based on weighting pixels belonging to different regions of interest, distributed by color, position, motion or content. The subjective assessment yields interesting results that demonstrate the need for new metrics adapted to the human visual system.
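As a rough illustration of the pixel-weighting idea, the sketch below compares a plain MSE with an MSE in which pixels inside a region of interest receive a larger perceptual weight. The weights, the ROI and the synthetic frame are invented for the example; this is not the psychovisual model proposed in the paper.

```python
# Minimal sketch of a region-of-interest-weighted quality metric: plain MSE
# versus an MSE where pixels inside an ROI mask get a larger perceptual weight.
import numpy as np

def weighted_mse(reference, distorted, roi_mask, roi_weight=4.0):
    weights = np.where(roi_mask, roi_weight, 1.0)
    err = (reference.astype(float) - distorted.astype(float)) ** 2
    return float((weights * err).sum() / weights.sum())

rng = np.random.default_rng(0)
ref = rng.integers(0, 256, size=(144, 176))                # one luma frame (QCIF)
dist = ref + rng.normal(0, 2, size=ref.shape)              # mild global noise
dist[40:80, 60:120] += rng.normal(0, 10, size=(40, 60))    # strong ROI artifact

roi = np.zeros(ref.shape, dtype=bool)
roi[40:80, 60:120] = True

print("plain MSE   :", round(float(((ref - dist) ** 2).mean()), 2))
print("ROI-weighted:", round(weighted_mse(ref, dist, roi), 2))
```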

Relevance: 100.00%

Abstract:

Dynamic and partial reconfiguration allows systems to change parts of their hardware at run time. This feature favours the inclusion of evolutionary strategies that produce optimised solutions to the same problem, so that candidate solutions can be mixed and compared and only the best ones prevail. At the same time, distributed intelligence allows systems to work collaboratively to jointly improve their global capabilities. This work presents a combination of both approaches in which hardware evolution is performed both at local and at network level in order to improve an image filter application in terms of performance and robustness, while providing the capacity to escape local minima, the main drawback of some evolutionary approaches.
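The evolutionary loop at the heart of this kind of system can be sketched in software as follows: a population of 3x3 filter kernels is evolved so that the filtered image approaches a reference output. This toy sketch runs entirely in software and does not model the dynamic partial reconfiguration or the networked (inter-node) evolution used in the actual work.

```python
# Toy evolutionary loop: evolve 3x3 convolution kernels towards a reference
# filter output. Purely illustrative; no hardware or network evolution here.
import numpy as np

rng = np.random.default_rng(0)

def convolve(img, k):
    out = np.zeros_like(img, dtype=float)
    p = np.pad(img, 1, mode="edge")
    for dy in range(3):
        for dx in range(3):
            out += k[dy, dx] * p[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out

img = rng.random((32, 32))
target_kernel = np.array([[0, -1, 0], [-1, 4, -1], [0, -1, 0]], float)  # Laplacian
reference = convolve(img, target_kernel)

def fitness(k):
    return -np.mean((convolve(img, k) - reference) ** 2)   # higher is better

population = [rng.normal(0, 1, (3, 3)) for _ in range(20)]
for gen in range(100):
    population.sort(key=fitness, reverse=True)
    parents = population[:5]                                 # elitist selection
    children = [p + rng.normal(0, 0.1, (3, 3)) for p in parents for _ in range(3)]
    population = parents + children

best = max(population, key=fitness)
print("best fitness (negative MSE):", round(fitness(best), 6))
```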

Relevance: 100.00%

Abstract:

Part of current biomedical research focuses on the analysis of heterogeneous data. These data may differ in origin, structure and semantics. A large amount of data of interest to researchers is held in public databases, which gather information from different sources and make it freely available to the community. To homogenize these public data sources with others of private origin, various tools and techniques exist that automate the integration of heterogeneous data. The Biomedical Informatics Group (GIB) [1] of the Universidad Politécnica de Madrid collaborates in the European project P-medicine [2], whose aim is to develop an infrastructure that facilitates the evolution from current medical practice towards personalized medicine. One of the tasks assigned to the group within P-medicine is to build tools that help users integrate data contained in heterogeneous information sources; some of these sources are public biomedical databases hosted on the NCBI [3] (National Center for Biotechnology Information) platform. One of the tools the group is developing to integrate data sources is Ontology Annotator. In one of its phases, the user has to retrieve information from a public database and select the relevant results manually. To automate this search-and-selection process there is, on the one hand, great interest in generating queries that lead to results that are as precise and exact as possible and, on the other hand, great interest in extracting relevant information from large numbers of documents, which requires systems that analyze and weight the data characterizing them. Within artificial intelligence, in the branch of information retrieval, there are several studies on query expansion from relevance feedback that could help solve this problem. These studies focus on techniques for reformulating or expanding the initial query using as feedback the results that were relevant to the user in a first pass, so that the new result set is closer to what the user really wants. The goal of this final degree project is the study, implementation and experimentation of methods that automate the extraction of relevant information from documents and use it to expand or reformulate queries, thereby improving the precision and the ranking of the associated results. These methods will be integrated into the Ontology Annotator tool and targeted at the PubMed [4] data source.
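Query expansion from relevance feedback, the technique this project builds on, is often formulated in the Rocchio style: the query vector is moved towards the documents the user marked as relevant, and the highest-weighted new terms are added to the query. The sketch below illustrates this with TF-IDF vectors over invented documents; it is not the Ontology Annotator implementation or its PubMed integration.

```python
# Minimal sketch of Rocchio-style query expansion from relevance feedback.
# Documents, query and parameters are illustrative only.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer

docs = [
    "gene expression analysis in breast cancer tissue",
    "breast cancer gene mutation and targeted therapy",
    "weather forecasting with numerical models",
]
relevant_ids = [0, 1]          # results the user marked as relevant

vec = TfidfVectorizer()
D = vec.fit_transform(docs).toarray()
q = vec.transform(["breast cancer"]).toarray()[0]

alpha, beta = 1.0, 0.75
q_new = alpha * q + beta * D[relevant_ids].mean(axis=0)   # Rocchio update

terms = np.array(vec.get_feature_names_out())
top = terms[np.argsort(q_new)[::-1][:5]]
print("expanded query terms:", list(top))
```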

Relevance: 100.00%

Abstract:

In this paper, a fuzzy feedback linearization is used to control nonlinear systems described by Takagi-Sugeno (T-S) fuzzy models. An optimal controller is designed using the linear quadratic regulator (LQR). The well-known weighting-parameters approach is applied to optimize the local and global approximation and modelling capability of the T-S fuzzy model, improving the choice of the performance index and minimizing it. The approach used here can be considered a generalized version of the T-S method. Simulation results indicate the potential, simplicity and generality of the estimation method and the robustness of the proposed optimal LQR algorithm.
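For reference, the LQR design step mentioned above amounts to solving an algebraic Riccati equation for each local linear model of the T-S system and forming a state-feedback gain. The sketch below shows this for one illustrative local model with SciPy; the matrices are placeholders, not the models used in the paper.

```python
# Minimal sketch of an LQR design for one local linear model of a T-S fuzzy
# system: solve the continuous-time algebraic Riccati equation and form the
# state-feedback gain K = R^{-1} B^T P. A, B, Q, R are illustrative placeholders.
import numpy as np
from scipy.linalg import solve_continuous_are

A = np.array([[0.0, 1.0],
              [-2.0, -0.5]])          # one local linear model (rule consequent)
B = np.array([[0.0],
              [1.0]])
Q = np.diag([10.0, 1.0])              # state weighting (performance index)
R = np.array([[0.1]])                 # input weighting

P = solve_continuous_are(A, B, Q, R)
K = np.linalg.solve(R, B.T @ P)       # LQR gain for this rule
print("LQR gain K:", K)

# In a T-S controller, the rule gains K_i would be blended by the same
# membership functions that weight the local models.
```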

Relevance: 100.00%

Abstract:

This thesis studies the representation, modeling and comparison of collections in the Semantic Web using ontologies. Collections, understood as groupings of objects or elements with their own identity, are constructions that appear frequently in practically every domain of the real world, so it is essential to have conceptualizations of these abstract structures, and representations of those conceptualizations in computer systems, that properly define their semantics. While in many areas of Computer Science and Artificial Intelligence, such as programming, databases or information retrieval, collections have been studied extensively and representations have been developed for a multitude of conceptualizations, in the Semantic Web their study has been quite limited. In fact, to date there are few proposals for representing collections with ontologies, and the existing ones only cover some types of collections and have important limitations. This prevents an adequate representation of collections and hinders other common tasks such as comparing collections, something critical in usual operations such as semantic search or data linking on the Semantic Web. To solve this problem, this thesis proposes a model of collections based on a new classification of collections according to their structural characteristics (homogeneity, uniqueness, order and cardinality). This classification defines a taxonomy with up to 16 different types of collections. Among other advantages, the new classification makes it possible to exploit the semantics of the structural properties of each collection type to perform comparisons using the most appropriate similarity and dissimilarity functions. Accordingly, the thesis also develops a new catalogue of (dis)similarity functions for the different types of collections, gathering the best-known functions as well as some new ones. The proposal is implemented through two parallel ontologies: the E-Collections ontology, which represents the different collection types of the taxonomy and their axioms, and the SIMEON ontology (Similarity Measures Ontology), which represents the types of (dis)similarity functions for each type of collection. Thanks to these ontologies, once two collections are represented as instances of the most appropriate class of the E-Collections ontology, it is automatically known which (dis)similarity functions of the SIMEON ontology can be used for their comparison. Finally, the feasibility and usefulness of this proposal for modeling and comparing collections is demonstrated in the field of oenology, applying the E-Collections and SIMEON ontologies to the representation and comparison of wines with the E-Baco ontology.
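The practical payoff of the taxonomy is that the structural properties of a collection determine which (dis)similarity functions make sense for it. The sketch below caricatures this dispatch for two of the four properties (uniqueness and order); the particular functions chosen are illustrative and are not the catalogue formalised in the E-Collections and SIMEON ontologies.

```python
# Minimal sketch: pick a similarity function from two structural properties
# of a collection (uniqueness and order). Function choices are illustrative.
from collections import Counter

def jaccard(a, b):
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 1.0

def dice_multiset(a, b):
    ca, cb = Counter(a), Counter(b)
    overlap = sum((ca & cb).values())
    return 2 * overlap / (sum(ca.values()) + sum(cb.values()))

def positional_match(a, b):
    n = max(len(a), len(b))
    return sum(x == y for x, y in zip(a, b)) / n if n else 1.0

def similarity(a, b, unique, ordered):
    if ordered:
        return positional_match(a, b)                 # sequence-like collections
    return jaccard(a, b) if unique else dice_multiset(a, b)   # set vs. bag

print(similarity(["red", "white"], ["white", "rosé"], unique=True, ordered=False))
print(similarity(["red", "white"], ["white", "rosé"], unique=True, ordered=True))
```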

Relevance: 100.00%

Abstract:

The initial premise of this thesis examines how the aftermath of the Second World War prompted a general revision of science and a new relationship between mankind and its environment. Mathematics, physics and biology gave rise to computer science as a discipline of convergence. At a moment when the scientific object was being redefined, a number of architects saw an opportunity to transform certain disciplinary conventions. By incorporating ontologies and procedures from cybernetics, artificial intelligence and computation, they traced a new architectural space. Legitimized by an unquestionable technological take-off, they challenged the limits of the profession and the culture of conservatism, exploring fields open to new programs and actions, and extended the natural domain of architecture beyond the (finished) object towards the (open) process. The thesis begins by describing the scientific and architectural background that led to this scenario of change, noting aspects of systems theory, computing, biology and certain architectural precedents relevant to the new approach; in that background lie the arguments for orienting the discipline towards working with processes. The central line of argument addresses the work of Christopher Alexander, Nicholas Negroponte and Cedric Price through a theoretical and practical production transformed by computation, and examines each author's conceptual contribution. The comparative analysis of their models is arranged by dissecting three converging concepts: system, code and process. The critical discussion is articulated through a triangulation between the authors, comparing them in pairs to identify coincidences and controversies, in order to build a conceptual bridge to the current architectural scene and assess the impact of their proposals. Their contribution is evaluated in the shift from the closed program to speculation, from the formal to the informal, from the single to the multiple, and from the architecture studio to the research laboratory. To guide this account of each author's significance in the digital development of the discipline, two essential figures are brought into the scene: computing experts who acted as links between the authors and nuanced the meaning of their models. The work of Gordon Pask and John Frazer is the vehicle that transmits the findings of those years and extends the paths opened then into the architecture of today and the architecture already being designed for tomorrow.

Relevance: 100.00%

Abstract:

Companies that aim to secure and improve their position in an increasingly competitive market need to stay up to date and in constant evolution. In the continuous pursuit of this evolution, they invest in Research & Development (R&D) projects and in their human capital to promote creativity and organizational innovation. People play a fundamental role in the development of innovation, but for it to flourish consistently, commitment and creativity are needed for the generation of ideas. Creativity is thinking the new; innovation is making it happen. However, finding people with these qualities is not always an easy task, and it is often necessary to stimulate these skills and characteristics so that people become effectively creative. Undergraduate programs can be an important tool for working on these aspects, characteristics and skills, using teaching methods and practices that help develop creativity, since the teaching-learning environment weighs significantly on people's formation. The objective of this study is to identify which factors have the greatest influence on the development of creativity in an undergraduate business administration program, analyzing the influence of faculty pedagogical practices and of students' internal barriers. The theoretical framework is based mainly on the works of Alencar, Fleith, Torrance and Wechsler. The cross-sectional study, with a quantitative approach, targeted Business Administration students of a confessional university in Greater São Paulo, who answered 465 questionnaires composed of three scales. For teaching practices, the Teaching Practices in Relation to Creativity scale was adapted. For internal barriers, the Barriers to Personal Creativity scale was adapted. To analyze the perception of creativity development, a scale based on the characteristics of a creative person was built and validated. Descriptive statistics and exploratory factor analyses were performed in the Statistical Package for the Social Sciences (SPSS), while confirmatory factor analyses and the measurement of the influence of pedagogical practices and internal barriers on the perception of creativity development were carried out by structural equation modeling using the Partial Least Squares (PLS) algorithm in Smart PLS 2.0. The results showed that pedagogical practices and students' internal barriers explain 40% of the perceived development of creativity, with pedagogical practices exerting the greater influence. The study also found that the course theme and the semester the student is attending have no influence on any of the three constructs; only the professor influences pedagogical practices.