45 results for common agent architecture design
at Universidad Politécnica de Madrid
Abstract:
The Internet is evolving towards what is known as the Live Web. In this new stage of its evolution, a multitude of social data streams is put at the service of users. Thanks to these data sources, users have gone from browsing static web pages to interacting with applications that offer personalized content based on their preferences. Each user interacts daily with multiple applications that issue notifications and alerts; in this sense every user is a source of events, and users often feel overwhelmed and unable to process all that information on demand. To cope with this overload, many tools have appeared that automate the most common tasks, from inbox managers and social media alert managers to complex CRMs or smart-home hubs. The downside is that, although they offer a solution to common problems, they cannot adapt to the needs of each user by offering a personalized solution. Task Automation Services (TAS) came onto the scene from 2012 onwards to address this limitation. Given their similarity, these services can also be regarded as a new, user-centric approach to mash-up technology. Users of these platforms are able to interconnect services, sensors and other Internet-connected devices, designing the automations that fit their needs. The proposal has been widely accepted by users, and this has led a multitude of platforms offering TAS to enter the scene. As this is a new research field, this thesis presents the main characteristics of TAS, describes their components, and identifies the fundamental dimensions that define them and allow them to be classified. This work coins the term Task Automation Service (TAS), giving a formal description of these services and their components (called channels), and provides a reference architecture. Likewise, there is a lack of tools for describing automation services and automation rules. In this regard, this thesis proposes a common model that is realized in the EWE ontology (Evented WEb Ontology). This model makes it possible to compare and match channels and automations from different TASs, which is a considerable contribution to the portability of user automations across platforms. Moreover, given the semantic nature of the model, it allows automations to include elements from external sources over which to reason, such as Linked Open Data. Using this model, a dataset of channels and automations has been generated with data obtained from some of the TASs available on the market. As a final step towards a common model for describing TAS, an algorithm has been developed to learn ontologies automatically from the data in the dataset. This favours the discovery of new channels and reduces the cost of maintaining the model, which is updated semi-automatically.
In conclusion, the main contributions of this thesis are: i) describing the state of the art in task automation and coining the term Task Automation Service; ii) developing an ontology for modelling TAS components and automations; iii) populating a dataset of channels and automations, used to develop an ontology learning algorithm; and iv) designing an agent architecture to assist users in creating automations. ABSTRACT The new stage in the evolution of the Web (the Live Web or Evented Web) puts lots of social data streams at the service of users, who no longer browse static web pages but interact with applications that present them with contextual and relevant experiences. Given that each user is a potential source of events, a typical user often gets overwhelmed. To deal with that huge amount of data, multiple automation tools have emerged, ranging from simple social media managers or notification aggregators to complex CRMs or smart-home hubs/apps. As a downside, they cannot be tailored to the needs of every single user. As a natural response to this downside, Task Automation Services broke onto the Internet. They may be seen as a new model of mash-up technology for combining social streams, services and connected devices from an end-user perspective: end-users are empowered to connect those streams however they want, designing the automations they need. The user base of the platforms that appeared early on shot up, and as a consequence the number of platforms following this approach is growing fast. Being a novel field, this thesis aims to shed light on it, presenting and exemplifying the main characteristics of Task Automation Services, describing their components, and identifying several dimensions to classify them. This thesis coins the term Task Automation Service (TAS) by providing a formal definition of these services and their components (called channels), as well as a TAS reference architecture. There is also a lack of tools for describing automation services and automation rules. In this regard, this thesis proposes a theoretical common model of TAS and formalizes it as the EWE ontology. This model makes it possible to compare channels and automations from different TASs, which has a high impact on interoperability, and enhances automations by providing a mechanism to reason over external sources such as Linked Open Data. Based on this model, a dataset of TAS components was built, harvesting data from the web sites of actual TASs. Going a step further towards this common model, an algorithm for categorizing these components was designed, enabling their discovery across different TASs. Thus, the main contributions of the thesis are: i) surveying the state of the art on task automation and coining the term Task Automation Service; ii) providing a semantic common model for describing TAS components and automations; iii) populating a categorized dataset of TAS components, used to learn ontologies of particular domains from the TAS perspective; and iv) designing an agent architecture for assisting users in setting up automations that is aware of their context and acts in consequence.
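As a minimal illustration of the kind of channel and rule descriptions this abstract refers to, the sketch below builds a hypothetical description with rdflib. The namespace URI, the class names (Channel, Event, Action, Rule) and the property names are assumptions inferred from the abstract, not the published EWE vocabulary.

# Hypothetical sketch: describing a TAS channel and an automation rule with an
# EWE-style vocabulary. Namespace and class/property names are placeholders.
from rdflib import Graph, Namespace, Literal, RDF, RDFS

EWE = Namespace("http://example.org/ewe#")   # placeholder namespace
EX = Namespace("http://example.org/tas#")

g = Graph()
g.bind("ewe", EWE)
g.bind("ex", EX)

# A weather channel that generates events and a mail channel that offers actions.
g.add((EX.Weather, RDF.type, EWE.Channel))
g.add((EX.RainForecasted, RDF.type, EWE.Event))
g.add((EX.RainForecasted, EWE.generatedBy, EX.Weather))

g.add((EX.Mail, RDF.type, EWE.Channel))
g.add((EX.SendEmail, RDF.type, EWE.Action))
g.add((EX.SendEmail, EWE.providedBy, EX.Mail))

# An automation rule: "if rain is forecasted, send me an email".
g.add((EX.RainAlert, RDF.type, EWE.Rule))
g.add((EX.RainAlert, EWE.triggeredBy, EX.RainForecasted))
g.add((EX.RainAlert, EWE.performs, EX.SendEmail))
g.add((EX.RainAlert, RDFS.comment, Literal("If rain is forecasted, send me an email.")))

print(g.serialize(format="turtle"))

A model of this shape is what makes rules portable: the same triples can be compared or migrated between TASs regardless of the platform that originally hosted them.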
Abstract:
This article proposes a MAS architecture for network diagnosis under uncertainty. Network diagnosis is divided into two inference processes: hypothesis generation and hypothesis confirmation. The first process is distributed among several agents based on an MSBN, while the second one is carried out by agents using semantic reasoning. A diagnosis ontology has been defined in order to combine both inference processes. To drive the deliberation process, dynamic data about the influence of observations are gathered during the diagnosis process. In order to achieve quick and reliable diagnoses, this influence is used to choose the best action to perform. This approach has been evaluated in a P2P video streaming scenario. Computational and time improvements are highlighted in the conclusions.
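As a rough sketch of how the measured influence of observations could drive the choice of the next diagnostic action, the snippet below scores candidate observations by how much they are expected to shift the belief over open hypotheses, per unit of cost. All names, numbers and the scoring function are illustrative assumptions, not the paper's implementation.

# Hypothetical sketch: pick the next observation to perform during diagnosis
# by estimating how much each candidate would change the current belief over
# hypotheses. Names and the influence measure are illustrative assumptions.
def influence(belief, predicted_belief):
    """Total variation distance between current and predicted beliefs."""
    return 0.5 * sum(abs(predicted_belief[h] - belief[h]) for h in belief)

def choose_observation(belief, candidates, cost):
    """candidates maps an observation name to the belief predicted after it;
    cost maps an observation name to its execution cost (e.g., probe time)."""
    return max(candidates, key=lambda obs: influence(belief, candidates[obs]) / cost[obs])

# Example: two hypotheses about a degraded P2P video stream.
belief = {"overlay_partition": 0.5, "peer_overload": 0.5}
candidates = {
    "ping_neighbours": {"overlay_partition": 0.8, "peer_overload": 0.2},
    "check_cpu_load":  {"overlay_partition": 0.45, "peer_overload": 0.55},
}
cost = {"ping_neighbours": 2.0, "check_cpu_load": 1.0}
print(choose_observation(belief, candidates, cost))  # -> ping_neighbours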
Abstract:
The term "Logic Programming" refers to a variety of computer languages and execution models which are based on the traditional concept of Symbolic Logic. The expressive power of these languages offers promise to be of great assistance in facing the programming challenges of present and future symbolic processing applications in Artificial Intelligence, Knowledge-based systems, and many other areas of computing. The sequential execution speed of logic programs has been greatly improved since the advent of the first interpreters. However, higher inference speeds are still required in order to meet the demands of applications such as those contemplated for next generation computer systems. The execution of logic programs in parallel is currently considered a promising strategy for attaining such inference speeds. Logic Programming in turn appears as a suitable programming paradigm for parallel architectures because of the many opportunities for parallel execution present in the implementation of logic programs. This dissertation presents an efficient parallel execution model for logic programs. The model is described from the source language level down to an "Abstract Machine" level suitable for direct implementation on existing parallel systems or for the design of special purpose parallel architectures. Few assumptions are made at the source language level and therefore the techniques developed and the general Abstract Machine design are applicable to a variety of logic (and also functional) languages. These techniques offer efficient solutions to several areas of parallel Logic Programming implementation previously considered problematic or a source of considerable overhead, such as the detection and handling of variable binding conflicts in AND-Parallelism, the specification of control and management of the execution tree, the treatment of distributed backtracking, and goal scheduling and memory management issues, etc. A parallel Abstract Machine design is offered, specifying data areas, operation, and a suitable instruction set. This design is based on extending to a parallel environment the techniques introduced by the Warren Abstract Machine, which have already made very fast and space efficient sequential systems a reality. Therefore, the model herein presented is capable of retaining sequential execution speed similar to that of high performance sequential systems, while extracting additional gains in speed by efficiently implementing parallel execution. These claims are supported by simulations of the Abstract Machine on sample programs.
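The abstract mentions detecting and handling variable binding conflicts in AND-parallelism. As a toy illustration only (a Python sketch, not the dissertation's abstract machine), the snippet below applies the usual strict-independence condition: two goals may run in AND-parallel when, under the current bindings, they share no unbound variables.

# Toy sketch of strict independence for AND-parallelism. Terms are tuples like
# ("p", "X", "Y"); variables are strings starting with an uppercase letter.
def variables(term, bindings):
    if isinstance(term, str) and term[:1].isupper():
        bound = bindings.get(term)
        return variables(bound, bindings) if bound is not None else {term}
    if isinstance(term, tuple) and len(term) > 1:
        return set().union(*(variables(arg, bindings) for arg in term[1:]))
    return set()  # constants and 0-ary functors contribute no variables

def independent(goal_a, goal_b, bindings):
    """Goals with no shared unbound variables cannot produce binding conflicts."""
    return not (variables(goal_a, bindings) & variables(goal_b, bindings))

bindings = {"X": ("a",)}  # X is bound to the constant a
print(independent(("p", "X", "Y"), ("q", "X", "Z"), bindings))  # True: only X shared, and it is bound
print(independent(("p", "Y"), ("q", "Y"), bindings))            # False: Y is shared and unbound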
Abstract:
Defining an agent architecture at the knowledge level emphasizes the knowledge role played by the data interchanged between the agent components and makes this data interchange explicit, which eases the reuse of these knowledge structures independently of the implementation. This article defines a generic task model of an agent architecture and refines some of these tasks using inference diagrams. Finally, an operationalisation of this conceptual model using the rule-oriented language Jess is shown.
Abstract:
In this paper we present AMSIA, an agent architecture that combines the possibility of using different reasoning methods with a mechanism to control the resources needed by the agent to fulfill its high-level objectives. The architecture is based on the blackboard paradigm, which offers the possibility of combining different reasoning techniques and opportunistic behavior. The AMSIA architecture adds a representation of plans of objectives, allowing different reasoning activities to create plans to guide the future behavior of the agent. The opportunism is in the acquisition of high-level objectives and in the modification of the predicted activity when something does not happen as expected. A control mechanism is responsible for the translation of plans of objectives into concrete activities, considering resource-boundedness. To do so, all the activity in the agent (including control) is explicitly scheduled, while allowing the necessary flexibility to make changes in the face of the contingencies that are expected in dynamic environments. Experimental work is also presented.
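As a rough illustration of the blackboard style of control described above (a sketch under assumed names, not the AMSIA implementation), the snippet below lets knowledge sources rate their applicability to the current blackboard state and lets a scheduler run the best-rated one while a time budget remains.

# Rough sketch of a blackboard control loop: knowledge sources rate their
# applicability to the blackboard and the scheduler runs the best-rated one
# while the resource budget lasts. All names are illustrative.
import time

class KnowledgeSource:
    def __init__(self, name, rate, act):
        self.name, self.rate, self.act = name, rate, act

def run_blackboard(blackboard, sources, budget_seconds):
    deadline = time.monotonic() + budget_seconds
    while time.monotonic() < deadline:
        # Ask every knowledge source how useful it would be right now.
        rated = [(ks.rate(blackboard), ks) for ks in sources]
        score, best = max(rated, key=lambda pair: pair[0])
        if score <= 0:          # nothing applicable: stop for now
            break
        best.act(blackboard)    # opportunistic step updates the blackboard

# Minimal usage: one source consumes a pending objective.
bb = {"objectives": ["notify_user"], "done": []}
sources = [
    KnowledgeSource("planner",
                    rate=lambda bb: 1 if bb["objectives"] else 0,
                    act=lambda bb: bb["done"].append(bb["objectives"].pop(0))),
]
run_blackboard(bb, sources, budget_seconds=0.1)
print(bb)   # {'objectives': [], 'done': ['notify_user']}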
Abstract:
Emergency management is one of the key aspects of day-to-day operation procedures on a highway. Efficiency of the overall response in case of an incident is paramount to reducing its consequences. However, the approach of highway operators to incident management is still usually far from systematic and standardized. This paper attempts to address the issue, providing several hints on why this happens and a proposal on how the situation could be overcome. A performance-based approach to general system specification is introduced and then applied to a particular road emergency management task. A real testbed has been implemented to show the validity of the proposed approach. Ad-hoc sensors (one camera and one laser scanner) were efficiently deployed to acquire data, and advanced fusion techniques were applied at the processing stage to meet the specific user requirements in terms of functionality, flexibility and accuracy.
Abstract:
Cooperative systems are suitable for many types of applications, and nowadays these systems are widely used to improve a previously defined system or to coordinate multiple devices working together. This paper provides an alternative to improve the reliability of a previous intelligent identification system. The proposed approach implements a cooperative model based on a multi-agent architecture. This new system is composed of several radar-based systems which identify a detected object and transmit their partial results, implementing several agents and using a wireless network to transfer data. The proposed topology is a centralized architecture where the coordinator device is in charge of providing the final identification result depending on the group behavior. In order to reach the final outcome, three different mechanisms are introduced. The simplest one is based on majority voting, whereas the others use two different weighted voting procedures, both providing the system with learning capabilities. Using an appropriate network configuration, the success rate can be improved from the initial 80% up to more than 90%.
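Since the abstract contrasts majority voting with two weighted schemes, the sketch below shows the general shape of both fusion rules at the coordinator. The weight-update step is a simple illustrative choice, not the procedure evaluated in the paper, and all identifiers are made up.

# Sketch of the two fusion styles mentioned above: plain majority voting and
# weighted voting with a simple learning step at the coordinator.
from collections import defaultdict

def majority_vote(votes):
    """votes: list of (sensor_id, class_label). Returns the most voted label."""
    tally = defaultdict(int)
    for _, label in votes:
        tally[label] += 1
    return max(tally, key=tally.get)

def weighted_vote(votes, weights):
    """Each sensor's vote counts proportionally to its current weight."""
    tally = defaultdict(float)
    for sensor, label in votes:
        tally[label] += weights.get(sensor, 1.0)
    return max(tally, key=tally.get)

def update_weights(votes, weights, true_label, step=0.1):
    """Reward sensors that voted for the confirmed label, penalize the rest."""
    for sensor, label in votes:
        delta = step if label == true_label else -step
        weights[sensor] = max(0.0, weights.get(sensor, 1.0) + delta)

votes = [("radar1", "car"), ("radar2", "truck"), ("radar3", "car")]
weights = {"radar1": 1.0, "radar2": 1.5, "radar3": 0.8}
print(majority_vote(votes), weighted_vote(votes, weights))  # car car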
Abstract:
This article proposes an agent-oriented methodology called MAS-CommonKADS and develops a case study. This methodology extends the knowledge engineering methodology CommonKADS with techniques from object-oriented and protocol engineering methodologies. The methodology consists of the development of seven models: Agent Model, that describes the characteristics of each agent; Task Model, that describes the tasks that the agents carry out; Expertise Model, that describes the knowledge needed by the agents to achieve their goals; Organisation Model, that describes the structural relationships between agents (software agents and/or human agents); Coordination Model, that describes the dynamic relationships between software agents; Communication Model, that describes the dynamic relationships between human agents and their respective personal assistant software agents; and Design Model, that refines the previous models and determines the most suitable agent architecture for each agent, and the requirements of the agent network.
Abstract:
In this paper, an innovative approach to performing distributed Bayesian inference using a multi-agent architecture is presented. The final goal is dealing with uncertainty in network diagnosis, but the solution can be applied in other fields. The validation testbed has been a P2P video streaming service. An assessment of the work is presented in order to show its advantages when compared with traditional manual processes and other previous systems.
Abstract:
The presented work aims to contribute towards the standardization and interoperability of the Future Internet through an open and scalable architecture design. We present S³OiA, a syntactic/semantic Service-Oriented Architecture that allows the integration of any type of object or device into the Internet of Things, regardless of its nature. Moreover, the architecture makes it possible to use underlying heterogeneous resources as a substrate for the automatic composition of complex applications through a semantic Triple Space paradigm. Created applications are dynamic and adaptive, since they are able to evolve depending on the context where they are executed. The validation scenario of this architecture encompasses areas which are prone to involve human beings in order to promote personal autonomy, such as home-care automation environments and Ambient Assisted Living.
Abstract:
In this paper, we introduce the B2DI model, which extends the BDI model to perform Bayesian inference under uncertainty. For scalability and flexibility purposes, Multiply Sectioned Bayesian Network (MSBN) technology has been selected and adapted to BDI agent reasoning. A belief update mechanism has been defined for agents whose belief models are connected by public shared beliefs, and the certainty of these beliefs is updated based on the MSBN. The classical BDI agent architecture has been extended in order to manage uncertainty using Bayesian reasoning. The resulting extended model, called B2DI, proposes a new control loop. The proposed B2DI model has been evaluated in a network fault diagnosis scenario. The evaluation has compared this model with two previously developed agent models, and has been carried out on a real diagnosis testbed using JADEX. As a result, the proposed model exhibits significant improvements in the cost and time required to carry out a reliable diagnosis.
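To make the idea of an extended control loop concrete, here is a schematic sketch of a BDI-style cycle with an added belief-update step over shared public beliefs. The class and method names and the placeholder update are assumptions for illustration; they are not the B2DI specification, and the MSBN inference is stubbed out.

# Schematic sketch of a BDI-style control loop extended with a belief-update
# step, in the spirit of the abstract above. Names and the update are placeholders.
class B2DIAgent:
    def __init__(self, beliefs, desires):
        self.beliefs = beliefs      # e.g., {"link_down": 0.1} as certainties
        self.desires = desires      # goal -> (supporting belief, threshold)
        self.intentions = []

    def update_beliefs(self, evidence):
        # Placeholder for MSBN-based inference: here we simply overwrite the
        # certainty of the observed public beliefs with the new evidence.
        self.beliefs.update(evidence)

    def deliberate(self):
        # Adopt desires whose supporting belief is certain enough.
        self.intentions = [goal for goal, (belief, threshold) in self.desires.items()
                           if self.beliefs.get(belief, 0.0) >= threshold]

    def step(self, evidence, execute):
        self.update_beliefs(evidence)   # extra step w.r.t. the classical loop
        self.deliberate()
        for intention in self.intentions:
            execute(intention)

agent = B2DIAgent(beliefs={"link_down": 0.2},
                  desires={"run_diagnosis": ("link_down", 0.7)})
agent.step({"link_down": 0.9}, execute=print)   # prints: run_diagnosis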
Abstract:
In this paper we propose a flexible multi-agent architecture together with a methodology for indoor location which allows us to locate any mobile station (MS), such as a laptop, smartphone, tablet or robotic system, in an indoor environment using wireless technology. Our technology is complementary to GPS location, as it allows us to locate a mobile system in a specific room on a specific floor using Wi-Fi networks. The idea is that any MS will have at its disposal an agent, known as a Fuzzy Location Software Agent (FLSA), with minimal processing capacity, which collects the power received at different access points distributed around the floor and establishes its location on a plan of the floor of the building. In order to do so, it communicates with the Fuzzy Location Manager Software Agent (FLMSA). The FLMSAs are local agents that form part of the management infrastructure of the organization's Wi-Fi network. The FLMSA implements a location estimation methodology divided into three phases (measurement, calibration and estimation) for locating mobile stations. Our solution is a fingerprint-based positioning system that overcomes the problem of the relative effect of doors and walls on signal strength and is independent of the network device manufacturer. In the measurement phase, our system collects received signal strength indicator (RSSI) measurements from multiple access points. In the calibration phase, our system uses these measurements in a normalization process to create a radio map, a database of RSSI patterns. Unlike traditional radio-map-based methods, our methodology normalizes RSSI measurements collected at different locations on a floor. In the third phase, we use fuzzy controllers to locate an MS on the floor plan of a building. Experimental results demonstrate the accuracy of the proposed method. From these results it is clear that the system is highly likely to be able to locate an MS in a room or adjacent room.
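As an illustration of the measurement/calibration/estimation pipeline described above, the sketch below normalizes RSSI vectors and matches a new measurement against a radio map. A plain nearest-neighbour distance stands in for the paper's fuzzy controllers, and all access-point names and signal values are made up.

# Illustrative sketch of the three-phase fingerprinting idea: collect RSSI
# vectors, normalize them into a radio map, and estimate the room of a new
# measurement. Not the paper's fuzzy-controller method.
import math

def normalize(rssi):
    """Scale an RSSI vector so fingerprints from different devices are comparable."""
    lo, hi = min(rssi.values()), max(rssi.values())
    span = (hi - lo) or 1.0
    return {ap: (v - lo) / span for ap, v in rssi.items()}

def distance(a, b):
    aps = set(a) | set(b)
    return math.sqrt(sum((a.get(ap, 0.0) - b.get(ap, 0.0)) ** 2 for ap in aps))

def estimate_room(measurement, radio_map):
    """radio_map: room name -> normalized RSSI fingerprint."""
    sample = normalize(measurement)
    return min(radio_map, key=lambda room: distance(sample, radio_map[room]))

radio_map = {
    "room_101": normalize({"AP1": -40, "AP2": -70, "AP3": -80}),
    "room_102": normalize({"AP1": -75, "AP2": -45, "AP3": -65}),
}
print(estimate_room({"AP1": -42, "AP2": -72, "AP3": -78}, radio_map))  # room_101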
Abstract:
Knowledge modeling tools are software tools that follow a modeling approach to help developers build a knowledge-based system. The purpose of this article is to show the advantages of using this type of tool in the development of complex knowledge-based decision support systems. In order to do so, the article describes the development of a system called SAIDA in the domain of hydrology with the help of the KSM modeling tool. SAIDA operates in real time on data recorded by sensors (rainfall, water levels, flows, etc.). It follows a multi-agent architecture to interpret the data, predict future behavior and recommend control actions. The system includes an advanced knowledge-based architecture with multiple symbolic representations. KSM was especially useful for designing and implementing the complex knowledge-based architecture in an efficient way.
Abstract:
People in industrial societies carry more and more portable electronic devices (e.g., smartphones or consoles) with some kind of wireless connectivity support. Interaction with auto-discovered target devices present in the environment (e.g., the air conditioning of a hotel) is not so easy, since devices may provide inaccessible user interfaces (e.g., in a foreign language that the user cannot understand). Scalability for multiple concurrent users and response times are still problems in this domain. In this paper, we assess an interoperable architecture which enables interaction between people with some kind of special need and their environment. The assessment, based on performance patterns and antipatterns, tries to detect performance issues and also to enhance the architecture design in order to improve system performance. As a result of the assessment, the initial design changed substantially. We refactored the design according to the Fast Path pattern and The Ramp antipattern. Moreover, resources were correctly allocated. Finally, the required response time was met in all system scenarios. For a specific scenario, response time was reduced from 60 seconds to less than 6 seconds.
Abstract:
Usability is a quality attribute of a software system that becomes critical in highly interactive systems. The field of Human-Computer Interaction proposes recommendations that allow an adequate level of usability to be achieved in a system. The discipline of Software Engineering has established that some of these recommendations affect the core functionality of systems and not only the user interface. This type of usability recommendation must be taken into account from the earliest activities and throughout the whole development process, as is done with attributes such as security, maintainability or performance. Software Engineering has produced studies and proposals to address usability in the early development activities, in particular in requirements elicitation and architecture design. These proposals are at a high level of abstraction. This research addresses usability in later activities of the development process: detailed design and programming. The goal of this work is to obtain, formalize and validate reusable solutions for usability in these activities. Three usability functionalities identified as having a high impact on design are selected for this study: Abort Operation, Progress Feedback and Preferences. An inductive method is used to obtain reusable elements: particular web applications are built and a general solution is induced from them. During the construction of the applications, the traceability of the elements related to each usability functionality is maintained. At the end, an analysis of common elements is carried out, and the findings are formalized as implementation-oriented design patterns and as programming patterns in each of the languages used: PHP, VB .NET and Java. The solutions formalized as patterns are validated using the case study methodology. Independent developers use the patterns to include the three usability functionalities in two new web applications. As a result, the developers can successfully use the proposed solutions for two of the functionalities: Abort Operation and Preferences. The Progress Feedback functionality cannot be fully implemented. It is concluded that it is possible to obtain reusable elements for the implementation of each usability functionality. These elements include: application scenarios, which are the combinations of cases that give rise to the usability functionalities; common responsibilities needed to cover the scenarios; common components to fulfil the responsibilities; design elements associated with the components; and the code that implements the design. Formalizing the solutions as patterns is useful for communicating the findings to other developers, and the patterns improve through their use in new developments. The implementation of usability functionalities presents characteristics that condition their reuse, in particular the level of coupling of the usability functionality with the application functionalities, and the internal complexity of the solution. ABSTRACT Usability is a critical quality attribute of highly interactive software systems.
The human-computer interaction field proposes recommendations for achieving an acceptable system usability level. The discipline of software engineering has established that some of these recommendations affect not only the user interface but also the core system functionality. This type of usability recommendation must be taken into account from the early activities and throughout the software development process, as in the case of attributes like security, ease of maintenance or performance. Software engineering has conducted studies and put forward proposals for tackling usability in the early development activities, particularly requirements elicitation and architecture design. These proposals have a high level of abstraction. This research addresses usability in later activities of the development process: detailed design and programming. The goal of this research is to discover, specify and validate reusable usability solutions for detailed design and programming. Abort Operation, Progress Feedback and Preferences, three usability functionalities identified as having a high impact on design, are selected for the study. An inductive method, whereby a general solution is induced from particular web applications built for the purpose, is used to discover reusable elements. During the construction of the applications, the traceability of the elements related to each usability functionality is maintained. At the end of the process, the common and possibly reusable elements are analysed. The findings are specified as implementation-oriented design patterns and programming patterns for each of the languages used: PHP, VB .NET and Java. The solutions specified as patterns are validated using the case study methodology. Independent developers use the patterns in order to build the three usability functionalities into two new web applications. As a result, the developers successfully use the proposed solutions for two of the functionalities: Abort Operation and Preferences. The Progress Feedback functionality cannot be fully implemented. We conclude that it is possible to discover reusable elements for implementing each usability functionality. These elements include: application scenarios, which are combinations of cases that generate usability functionalities; common responsibilities to cover the scenarios; common components to fulfil the responsibilities; design elements associated with the components; and code implementing the design. It is useful to specify solutions as patterns in order to communicate findings to other developers, and patterns improve through further use in other development projects. Reusability depends on the features of the usability functionality implementation, particularly the level of coupling of the usability functionality with the application functionalities and the internal complexity of the solution.
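To illustrate the kind of reusable responsibilities behind the Abort Operation functionality (accept a cancel request, stop the ongoing work at a safe point, restore a consistent state), here is a small sketch. It is written in Python purely for illustration, not the PHP/VB .NET/Java patterns produced in the thesis, and all class and method names are made up.

# Small illustration of the responsibilities behind an Abort Operation
# usability functionality: accept a cancel request, stop the ongoing work at a
# safe point, and leave the application in a consistent state.
import threading

class AbortableOperation:
    def __init__(self):
        self._cancel = threading.Event()

    def abort(self):
        """Called from the user interface when the user aborts the operation."""
        self._cancel.set()

    def run(self, items, process, rollback):
        done = []
        for item in items:
            if self._cancel.is_set():
                for finished in reversed(done):   # restore a consistent state
                    rollback(finished)
                return "aborted"
            process(item)
            done.append(item)
        return "completed"

op = AbortableOperation()
op.abort()   # the user aborts before any item is processed
print(op.run(range(3), process=print, rollback=print))  # -> aborted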