798 results for Multi agent systems
Abstract:
Thesis digitized by the Division de la gestion de documents et des archives of the Université de Montréal
Abstract:
Recent trends envisage multi-standard architectures as a promising solution for future wireless transceivers. The computationally intensive decimation filter plays an important role in channel selection for multi-mode systems, and an efficient reconfigurable implementation is key to achieving low power consumption. To this end, this paper presents a dual-mode Residue Number System (RNS) based decimation filter which can be programmed for the WCDMA and 802.11a standards. Decimation is done using multistage, multirate finite impulse response (FIR) filters. These FIR filters, implemented in the RNS domain, offer high speed because of their carry-free operation on smaller residues in parallel channels. The FIR filters are also programmable to a selected standard by reconfiguring the hardware architecture. The total area is increased by only 33% to include WLANa compared to a single-mode WCDMA transceiver. In each mode, the unused parts of the overall architecture are powered down and bypassed to save power. The performance of the proposed decimation filter in terms of critical path delay and area is tabulated.
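The carry-free property that gives the RNS-domain FIR filters their speed can be sketched numerically. This is a minimal illustration, assuming a hypothetical moduli set {7, 11, 13}; the paper's actual moduli set and filter coefficients are not given in the abstract.

```python
# Sketch of RNS arithmetic for a FIR multiply-accumulate, with a
# hypothetical moduli set (not the paper's actual hardware parameters).
MODULI = (7, 11, 13)  # pairwise coprime; dynamic range = 7*11*13 = 1001

def to_rns(x):
    """Split an integer into independent residues (one per parallel channel)."""
    return tuple(x % m for m in MODULI)

def rns_mac(acc, coeff, sample):
    """Multiply-accumulate per channel: no carries propagate between channels."""
    return tuple((a + c * s) % m
                 for a, c, s, m in zip(acc, to_rns(coeff), to_rns(sample), MODULI))

def from_rns(r):
    """Chinese Remainder Theorem reconstruction back to binary."""
    M = 1
    for m in MODULI:
        M *= m
    x = 0
    for ri, mi in zip(r, MODULI):
        Mi = M // mi
        x += ri * Mi * pow(Mi, -1, mi)  # modular inverse (Python 3.8+)
    return x % M

# A 3-tap FIR dot product computed entirely in the RNS domain:
coeffs, samples = [2, 3, 1], [5, 4, 6]
acc = (0, 0, 0)
for c, s in zip(coeffs, samples):
    acc = rns_mac(acc, c, s)
assert from_rns(acc) == sum(c * s for c, s in zip(coeffs, samples))  # 28
```

Each residue channel works on a small modulus independently, which is why the hardware channels can run in parallel without carry chains between them.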
Abstract:
This report describes a working autonomous mobile robot whose only goal is to collect and return empty soda cans. It operates in an unmodified office environment occupied by moving people. The robot is controlled by a collection of over 40 independent "behaviors" distributed over a loosely coupled network of 24 processors. Together this ensemble helps the robot locate cans with its laser rangefinder, collect them with its on-board manipulator, and bring them home using a compass and an array of proximity sensors. We discuss the advantages of using such a multi-agent control system and show how to decompose the required tasks into component activities. We also examine the benefits and limitations of spatially local, stateless, and independent computation by the agents.
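The decomposition into independent, stateless behaviors can be sketched as a priority-ordered arbitration loop. The behavior names and the sensor dictionary below are hypothetical illustrations, not taken from the robot's actual control code.

```python
# Minimal sketch of priority-based arbitration over stateless behaviours,
# in the spirit of the paper; names and sensor keys are hypothetical.

def avoid(sensors):
    # Highest priority: veer away when a proximity sensor fires.
    if sensors.get("obstacle_close"):
        return "turn_away"
    return None  # no opinion; defer to lower-priority behaviours

def grab_can(sensors):
    if sensors.get("can_in_gripper_range"):
        return "close_gripper"
    return None

def wander(sensors):
    return "drive_forward"  # default activity, always has an opinion

BEHAVIOURS = [avoid, grab_can, wander]  # ordered by priority

def arbitrate(sensors):
    """Return the command of the highest-priority behaviour that votes."""
    for b in BEHAVIOURS:
        cmd = b(sensors)
        if cmd is not None:
            return cmd

assert arbitrate({"obstacle_close": True}) == "turn_away"
assert arbitrate({"can_in_gripper_range": True}) == "close_gripper"
assert arbitrate({}) == "drive_forward"
```

Because each behaviour reads only the current sensor values and keeps no state, behaviours can run on separate processors and fail independently, which is the property the abstract highlights.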
Abstract:
In this thesis I present a language for instructing a sheet of identically-programmed, flexible, autonomous agents ("cells") to assemble themselves into a predetermined global shape, using local interactions. The global shape is described as a folding construction on a continuous sheet, using a set of axioms from paper-folding (origami). I provide a means of automatically deriving the cell program, executed by all cells, from the global shape description. With this language, a wide variety of global shapes and patterns can be synthesized, using only local interactions between identically-programmed cells. Examples include flat layered shapes, all plane Euclidean constructions, and a variety of tessellation patterns. In contrast to approaches based on cellular automata or evolution, the cell program is directly derived from the global shape description and is composed from a small number of biologically-inspired primitives: gradients, neighborhood query, polarity inversion, cell-to-cell contact and flexible folding. The cell programs are robust, without relying on regular cell placement, global coordinates, or synchronous operation and can tolerate a small amount of random cell death. I show that an average cell neighborhood of 15 is sufficient to reliably self-assemble complex shapes and geometric patterns on randomly distributed cells. The language provides many insights into the relationship between local and global descriptions of behavior, such as the advantage of constructive languages, mechanisms for achieving global robustness, and mechanisms for achieving scale-independent shapes from a single cell program. The language suggests a mechanism by which many related shapes can be created by the same cell program, in the manner of D'Arcy Thompson's famous coordinate transformations. The thesis illuminates how complex morphology and pattern can emerge from local interactions, and how one can engineer robust self-assembly.
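The gradient primitive named above can be sketched as a hop-count spreading from a source cell over randomly placed cells. This is an illustrative sketch, not the thesis's actual cell program; the cell count and communication radius are assumed values. Coordinates are used only to build the neighbour graph, and the cells themselves never see them (no global coordinates).

```python
# Sketch of a hop-count "gradient" primitive on randomly distributed cells,
# using only local neighbour communication; parameters are illustrative.
import math
import random

random.seed(1)
cells = [(random.random(), random.random()) for _ in range(200)]
RADIUS = 0.12  # communication radius, chosen to give a modest neighbourhood size

def neighbours(i):
    """Indices of cells within communication range of cell i."""
    xi, yi = cells[i]
    return [j for j, (xj, yj) in enumerate(cells)
            if j != i and math.hypot(xi - xj, yi - yj) <= RADIUS]

def gradient(source):
    """Repeated local relaxation: each cell takes min(neighbour) + 1."""
    value = {source: 0}
    changed = True
    while changed:
        changed = False
        for i in range(len(cells)):
            best = min((value[j] + 1 for j in neighbours(i) if j in value),
                       default=None)
            if best is not None and best < value.get(i, float("inf")):
                value[i] = best
                changed = True
    return value

g = gradient(0)
assert g[0] == 0 and all(v >= 0 for v in g.values())
```

No regular placement or synchrony is assumed; the relaxation converges to shortest hop counts regardless of update order, which mirrors the robustness properties the abstract describes.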
Abstract:
The explosive growth of the Internet in recent years has been reflected in the ever-increasing diversity and heterogeneity of user preferences and of the types and features of devices and access networks. The heterogeneity of the context in which users request Web content is usually not taken into account by the servers that deliver it, meaning that the content will not always suit the users' needs. In the particular case of e-learning platforms this issue is especially critical, because it puts at stake the knowledge their users acquire. In this paper we present a system that aims to give the dotLRN e-learning platform the capability to adapt to its users' context. By integrating dotLRN with a multi-agent hypermedia system, the online courses students are taking, as well as their learning environment, are adapted in real time.
Abstract:
Although the concept of wisdom has been widely studied by experts in fields such as philosophy, religion, and psychology, it still faces limitations as regards its definition and assessment. The aim of this work is therefore to formulate a definition of the concept of wisdom that supports a proposal for assessing it as a competence in managers. To this end, a qualitative documentary analysis was carried out. Various texts on the history, definitions, and methodologies for assessing both wisdom and competences were analysed, differentiating wisdom from other constructs and examining the difference between general and managerial competences, in order then to define wisdom as a managerial competence. As a result of this analysis, a test prototype called SAPIENS-O was produced, intended to assess wisdom as a managerial competence. The scope of the instrument includes the possibility of measuring wisdom as a competence in managers, of shedding new light on the theoretical and empirical difficulties surrounding wisdom, and of facilitating the study of wisdom in real settings, more specifically in organisational settings.
Abstract:
This project, entitled "Design of optimal controllers for the Pioneer robot", aims to contribute to the ongoing research on the control of the Pioneer 2DX robot with a new version of the go-to agents that drive the robot. The main difficulty concerns the first controller. Until now, the multi-agent system used a go-to agent that generated the trajectory to follow and controlled it by means of a PID. Introducing a geometric method such as pure pursuit complicates matters, since tuning its behaviour is more complex. For the second controller, by contrast, the problem is simpler, since it can be tuned empirically and its behaviour in a given situation is easier to improve. For this reason, above all because of the first controller, some changes to the project plan had to be made along the way. Initially the controller was to be built in Matlab® using the Simulink® tool, but software problems forced us at one point to redirect the project to C++, the base language of the multi-agent framework. For the same reason, the implementation of these controllers in LabVIEW® also had to be abandoned.
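The pure-pursuit method mentioned above can be sketched with its classical steering law. This is a minimal sketch assuming a differential-drive robot like the Pioneer 2DX; the goal point, lookahead distance, speed, and track width below are illustrative values, not tuned parameters from the project.

```python
# Sketch of the classical pure-pursuit steering law for a differential-drive
# robot; all numeric values are illustrative assumptions.
import math

def pure_pursuit_curvature(goal_x, goal_y, lookahead):
    """Curvature of the arc from the robot origin to a goal point given in the
    robot frame (x forward, y left) at the given lookahead distance."""
    # gamma = 2*y / L^2 is the classical pure-pursuit curvature formula
    return 2.0 * goal_y / (lookahead ** 2)

def wheel_speeds(v, curvature, track_width):
    """Convert linear speed and path curvature into left/right wheel speeds."""
    omega = v * curvature  # angular rate for the commanded arc
    v_left = v - omega * track_width / 2.0
    v_right = v + omega * track_width / 2.0
    return v_left, v_right

# Goal point 1 m ahead and 0.2 m to the left:
L = math.hypot(1.0, 0.2)
k = pure_pursuit_curvature(1.0, 0.2, L)
vl, vr = wheel_speeds(0.4, k, 0.33)
assert vr > vl  # turning left, so the right wheel runs faster
```

The tuning difficulty the project describes lies mainly in choosing the lookahead distance: a short lookahead tracks the path tightly but oscillates, a long one smooths the motion but cuts corners.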
Abstract:
This thesis proposes the use of intelligent agents in online learning environments to improve student assistance and motivation through personalised content that takes into account the student's learning style and level of knowledge. The proposed agents act as personal assistants that help the student carry out learning activities while measuring his or her progress and motivation. The agent environment is built on a multi-agent architecture called MASPLANG, designed to provide adaptive support (adaptive presentation and navigation) to an educational hypermedia system developed at the Universitat de Girona to deliver virtual education over the web. An important aspect of this proposal is the ability to build a hybrid student model that starts from a stereotypical model based on learning styles and is gradually refined as the student interacts with the system (subjective preferences). Within the context of this thesis, learning is defined as the internal process that, under factors of change, results in the acquisition of an internal representation of a piece of knowledge or of an attitude. This internal process cannot be measured directly, only through externally observable demonstrations that constitute the behaviour related to the object of knowledge. Finally, this change is the result of experience or training and has a durability that depends on factors such as motivation and commitment. MASPLANG is composed of two levels of agents: the intermediaries, called IA (information agents), at the lower level, and the interface agents, called PDA (assistant agents), at the upper level. The assistant agents attend to students as they work with the didactic material of a course or a lesson.
This assistance consists of collecting and analysing the students' actions in order to offer personalised content, and of motivating the student during learning by providing feedback content, exercises adapted to his or her level of knowledge, and messages, through animated and attractive user interfaces. The information agents maintain the pedagogical and domain models and interact directly with the system's databases (the record of student activities and the domain model). The operating scenario of MASPLANG is defined by the types of users and content it serves. Since its environment is an educational hypermedia system, users are classified as teachers, who define and prepare the content for adaptive learning, and students, who carry out the learning activities in a personalised way. The student's initial learning profile is captured through the ILS questionnaire (the diagnostic tool of the FSLSM learning-style model adopted for this study), administered on the student's first interaction with the system. This questionnaire consists of a set of questions of a psychological nature whose aim is to determine the preferences, habits, and reactions of the student that will guide the personalisation of the content and of the learning environment. The student model is then built from this learning profile and from the level of knowledge obtained by analysing the student's actions in the environment.
Abstract:
The discovery of new molecular targets and the subsequent development of novel anticancer agents are opening new possibilities for drug combination therapy as an anticancer treatment. Polymer-drug conjugates are well established for the delivery of a single therapeutic agent, but only in very recent years has their use been extended to the delivery of multi-agent therapy. These early studies revealed the therapeutic potential of this application but raised new challenges (namely, drug loading and drug ratios, characterisation, and the development of suitable carriers) that need to be addressed for a successful optimisation of the system towards clinical applications.
Abstract:
In multi-tasking systems when it is not possible to guarantee completion of all activities by specified times, the scheduling problem is not straightforward. Examples of this situation in real-time programming include the occurrence of alarm conditions and the buffering of output to peripherals in on-line facilities. The latter case is studied here with the hope of indicating one solution to the general problem.
Abstract:
This paper focuses on improving computer network management through the adoption of artificial intelligence techniques. A logical inference system has been devised to enable automated isolation, diagnosis, and even repair of network problems, thus enhancing the reliability, performance, and security of networks. We propose a distributed multi-agent architecture for network management, in which a logical reasoner acts as an external managing entity capable of directing, coordinating, and stimulating actions in an active management architecture. Active networks technology provides the lower layer, making possible the deployment of code implementing teleo-reactive agents distributed across the whole network. We adopt the Situation Calculus to define a network model and the Reactive Golog language to implement the logical reasoner. An active network management architecture is used by the reasoner to inject and execute operational tasks in the network. The integrated system combines the advantages of logical reasoning and network programmability, providing a powerful system capable of performing high-level management tasks to deal with network faults.
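A teleo-reactive agent of the kind deployed by the architecture above is, at its core, an ordered list of condition-action rules scanned top to bottom, where the first rule whose condition holds fires. The sketch below illustrates that control structure in Python; the fault conditions and repair actions are hypothetical examples, not the paper's actual Reactive Golog program.

```python
# Minimal sketch of a teleo-reactive (T-R) program for a link fault:
# an ordered rule list where the first satisfied condition fires.
# Conditions and actions are hypothetical illustrations.

TR_PROGRAM = [
    (lambda s: s["link_up"],           "monitor"),          # goal achieved
    (lambda s: s["backup_configured"], "switch_to_backup"),
    (lambda s: s["fault_diagnosed"],   "configure_backup"),
    (lambda s: True,                   "run_diagnosis"),    # catch-all
]

def tr_step(state):
    """Fire the first production whose condition is satisfied."""
    for condition, action in TR_PROGRAM:
        if condition(state):
            return action

assert tr_step({"link_up": True, "backup_configured": False,
                "fault_diagnosed": False}) == "monitor"
assert tr_step({"link_up": False, "backup_configured": False,
                "fault_diagnosed": False}) == "run_diagnosis"
```

Re-evaluating the whole rule list on every step gives the agent its reactive character: each action moves the network state closer to satisfying a higher rule, until the goal condition at the top holds.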
Abstract:
As the ideal method of assessing the nutritive value of a feedstuff, namely offering it to the appropriate class of animal and recording the production response obtained, is neither practical nor cost-effective, a range of feed evaluation techniques has been developed. Each of these balances some degree of compromise with the practical situation against data generation. However, because animal-feed interactions act over and above feed composition, the target animal remains the ultimate arbiter of nutritional value. In this review, current in vitro feed evaluation techniques are examined according to their degree of animal-feed interaction. Chemical analysis provides absolute values and therefore differs from the majority of in vitro methods, which simply rank feeds. With no host-animal involvement, however, estimates of nutritional value are inferred by statistical association; in addition, given the costs involved, the practical value of many of the analyses conducted should be reviewed. The in sacco technique has made a substantial contribution both to the understanding of rumen microbial degradative processes and to the rapid evaluation of feeds, especially in developing countries. However, the numerous shortfalls of the technique, common to many in vitro methods, the desire to eliminate the use of surgically modified animals for routine feed evaluation, and parallel improvements in in vitro techniques mean that it will increasingly be replaced. The majority of in vitro systems use substrate disappearance to assess degradation; this, however, provides no information on the quantity of derived end-products available to the host animal. As measurement of volatile fatty acids or microbial biomass production greatly increases analytical costs, fermentation gas release, a simple and non-destructive measurement, has been used as an alternative.
However, as gas release alone is of little use, gas-based systems, in which degradation and fermentation gas release are measured simultaneously, are attracting considerable interest. Alternative microbial inocula are being considered, as is the potential of using multi-enzyme systems to examine degradation dynamics. It is concluded that while chemical analysis will continue to form an indispensable part of feed evaluation, enhanced use will be made of increasingly complex in vitro systems. It is vital, however, that the function and limitations of each methodology are fully understood, and that the temptation to over-interpret the data is avoided, so that the appropriate conclusions are drawn. With careful selection and correct application, in vitro systems offer powerful research tools with which to evaluate feedstuffs.
Abstract:
A new identification algorithm is introduced for the Hammerstein model consisting of a nonlinear static function followed by a linear dynamical model. The nonlinear static function is characterised by using the Bezier-Bernstein approximation. The identification method is based on a hybrid scheme including the applications of the inverse of de Casteljau's algorithm, the least squares algorithm and the Gauss-Newton algorithm subject to constraints. The related work and the extension of the proposed algorithm to multi-input multi-output systems are discussed. Numerical examples including systems with some hard nonlinearities are used to illustrate the efficacy of the proposed approach through comparisons with other approaches.
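De Casteljau's algorithm, whose inverse the identification scheme above applies, evaluates a Bézier curve by repeated linear interpolation of its control points. The sketch below shows the forward construction on scalar control points; the control values are illustrative, not taken from the paper's examples.

```python
# Sketch of de Casteljau's algorithm: evaluate a Bezier curve at parameter t
# by repeatedly interpolating adjacent control points until one point remains.
# Control point values are illustrative.

def de_casteljau(control_points, t):
    """Evaluate the Bezier curve defined by control_points at t in [0, 1]."""
    pts = list(control_points)
    while len(pts) > 1:
        # Each pass replaces n points with n-1 interpolated points.
        pts = [(1 - t) * a + t * b for a, b in zip(pts, pts[1:])]
    return pts[0]

# A quadratic Bezier with control values 0, 1, 0 peaks at t = 0.5:
assert de_casteljau([0.0, 1.0, 0.0], 0.5) == 0.5
# The linear case reduces to plain interpolation:
assert de_casteljau([0.0, 1.0], 0.25) == 0.25
```

Because each evaluation is a chain of convex combinations, the construction is numerically stable, which is one reason the Bezier-Bernstein form is attractive for characterising the static nonlinearity.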
Abstract:
Quantitative analysis by mass spectrometry (MS) is a major challenge in proteomics as the correlation between analyte concentration and signal intensity is often poor due to varying ionisation efficiencies in the presence of molecular competitors. However, relative quantitation methods that utilise differential stable isotope labelling and mass spectrometric detection are available. Many drawbacks inherent to chemical labelling methods (ICAT, iTRAQ) can be overcome by metabolic labelling with amino acids containing stable isotopes (e.g. 13C and/or 15N) in methods such as Stable Isotope Labelling with Amino acids in Cell culture (SILAC). SILAC has also been used for labelling of proteins in plant cell cultures (1) but is not suitable for whole plant labelling. Plants are usually autotrophic (fixing carbon from atmospheric CO2) and, thus, labelling with carbon isotopes becomes impractical. In addition, SILAC is expensive. Recently, Arabidopsis cell cultures were labelled with 15N in a medium containing nitrate as sole nitrogen source. This was shown to be suitable for quantifying proteins and nitrogen-containing metabolites from this cell culture (2,3). Labelling whole plants, however, offers the advantage of studying quantitatively the response to stimulation or disease of a whole multicellular organism or multi-organism systems at the molecular level. Furthermore, plant metabolism enables the use of inexpensive labelling media without introducing additional stress to the organism. And finally, hydroponics is ideal to undertake metabolic labelling under extremely well-controlled conditions. We demonstrate the suitability of metabolic 15N hydroponic isotope labelling of entire plants (HILEP) for relative quantitative proteomic analysis by mass spectrometry. To evaluate this methodology, Arabidopsis plants were grown hydroponically in 14N and 15N media and subjected to oxidative stress.
Abstract:
Recently, major processor manufacturers have announced a dramatic shift in their paradigm for increasing computing power over the coming years. Instead of focusing on faster clock speeds and more powerful single-core CPUs, the trend clearly goes towards multi-core systems. This will also result in a paradigm shift in the development of algorithms for computationally expensive tasks, such as data mining applications. Work on parallel algorithms is of course not new per se, but concentrated efforts in the many application domains are still missing. Multi-core systems, but also clusters of workstations and even large-scale distributed computing infrastructures, provide new opportunities and pose new challenges for the design of parallel and distributed algorithms. Since data mining and machine learning systems rely on high-performance computing, research on the corresponding algorithms must be at the forefront of parallel algorithm research in order to keep pushing data mining and machine learning applications to be more powerful and, especially for the former, interactive. To bring together researchers and practitioners working in this exciting field, a workshop on parallel data mining was organized as part of PKDD/ECML 2006 (Berlin, Germany). The six contributions selected for the program describe various aspects of data mining and machine learning approaches featuring low to high degrees of parallelism. The first contribution addresses the classic problem of distributed association rule mining and focuses on communication efficiency to improve the state of the art. Next, a parallelization technique for speeding up decision tree construction by means of thread-level parallelism for shared-memory systems is presented. The following paper discusses the design of a parallel approach to the frequent subgraph mining problem for distributed-memory systems.
This approach is based on a hierarchical communication topology to solve issues related to multi-domain computational environments. The fourth paper describes the combined use and customization of software packages to facilitate top-down parallelism in the tuning of Support Vector Machines (SVMs), and the next contribution presents an interesting idea concerning parallel training of Conditional Random Fields (CRFs) and motivates their use in labeling sequential data. The last contribution focuses on very efficient feature selection, describing a parallel algorithm for feature selection from random subsets. Selecting the papers included in this volume would not have been possible without the help of an international Program Committee that provided detailed reviews for each paper. We would also like to thank Matthew Otey, who helped with publicity for the workshop.