960 results for Système Multi-agents
Abstract:
The MTDL (multi-target-directed ligand) design strategy is used to develop single chemical entities that are able to simultaneously modulate multiple targets. The development of such compounds might disclose new avenues for the treatment of a variety of pathologies (e.g. cancer, AIDS, neurodegenerative diseases) for which an effective cure is urgently needed. This strategy has been successfully applied to Alzheimer’s disease (AD) due to its multifactorial nature, involving cholinergic dysfunction, amyloid aggregation, and oxidative stress. Although many biological entities have been recognized as possibly AD-relevant, only four acetylcholinesterase inhibitors (AChEIs) and one NMDA receptor antagonist are used in therapy. Unfortunately, such compounds are not disease-modifying agents, behaving only as cognition enhancers. Therefore, the MTDL strategy is emerging as a powerful drug design paradigm: pharmacophores of different drugs are combined in the same structure to afford hybrid molecules. In principle, each pharmacophore of these new drugs should retain the ability to interact with its specific site(s) on the target and, consequently, to produce specific pharmacological responses that, taken together, should slow or block the neurodegenerative process. To this end, the design and synthesis of several examples of MTDLs for combating neurodegenerative diseases have been published. This seems to be the most appropriate approach for addressing the complexity of AD and may provide new drugs for tackling its multifactorial nature and, hopefully, stopping its progression. Following this emerging strategy, in this thesis different classes of new molecular structures, based on the MTDL approach, have been developed. Moreover, curcumin and its constrained analogs have recently received remarkable interest because they have a unique conjugated structure showing a pleiotropic profile, which we considered a suitable framework for developing MTDLs. In fact, besides its well-known direct antioxidant activity, curcumin displays a wide range of biological properties, including anti-inflammatory and anti-amyloidogenic activities and an indirect antioxidant action through activation of the cytoprotective enzyme heme oxygenase-1 (HO-1). Thus, since many lines of evidence suggest that oxidative stress and mitochondrial impairment have a central role in age-related neurodegenerative diseases such as AD, we designed mitochondria-targeted antioxidants by connecting curcumin analogs to different polyamine chains that, with the aid of electrostatic forces, might drive the selected antioxidant moiety into mitochondria.
Abstract:
While the use of distributed intelligence has been incrementally spreading in the design of a great number of intelligent systems, the field of Artificial Intelligence in Real Time Strategy (RTS) games has remained mostly a centralized environment. Although turn-based games have attained AIs of world-class level, the fast-paced nature of RTS games has proven to be a significant obstacle to the quality of their AIs. Chapter 1 introduces RTS games, describing their characteristics, mechanics and elements. Chapter 2 introduces Multi-Agent Systems and the use of the Beliefs-Desires-Intentions abstraction, analysing the possibilities given by self-computing properties. In Chapter 3 the current state of AI development in RTS games is analyzed, highlighting the struggles of the gaming industry to produce valuable AIs. The focus on improving the multiplayer experience has gravely impacted the quality of the AIs, leaving them with serious flaws that impair their ability to challenge and entertain players. Chapter 4 explores different aspects of AI development for RTS games, evaluating the potential strengths and weaknesses of an agent-based approach and analysing which aspects can benefit the most compared with centralized AIs. Chapter 5 describes a generic agent-based framework for RTS games where every game entity becomes an agent, each with its own knowledge and set of goals. Different aspects of the game, such as economy, exploration and warfare, are also analysed, and some agent-based solutions are outlined. The possible exploitation of self-computing properties to efficiently organize the agents' activity is then inspected. Chapter 6 presents the design and implementation of an AI for an existing Open Source game in beta development stage: 0 A.D., a historical RTS game on ancient warfare which features a modern graphical engine and evolved mechanics. The entities in the conceptual framework are implemented in a new agent-based platform, called ABot, seamlessly nested inside the existing game engine and described extensively in Chapters 7, 8 and 9. Chapters 10 and 11 cover the design and realization of a new agent-based language useful for defining behavioural modules for the agents in ABot, paving the way for a wider spectrum of contributors. Chapter 12 analyses the outcome of tests meant to evaluate strategies, realism and pure performance, and conclusions and future work are drawn in Chapter 13.
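As a minimal illustration of the agent abstraction summarized in this abstract (every game entity as an agent with its own knowledge and goals, following the Beliefs-Desires-Intentions idea), here is a hedged sketch of a BDI-style game-entity agent. The names (GameEntityAgent, Goal, perceive, deliberate, act) are illustrative assumptions and do not reflect the actual ABot API.

```python
# Minimal sketch of a BDI-style agent for an RTS game entity.
# Names (GameEntityAgent, Goal, etc.) are illustrative, not the ABot API.

from dataclasses import dataclass, field


@dataclass
class Goal:
    name: str
    priority: int          # higher value = more urgent
    done: bool = False


@dataclass
class GameEntityAgent:
    entity_id: int
    beliefs: dict = field(default_factory=dict)   # local knowledge of the game state
    goals: list = field(default_factory=list)     # desires, ordered by deliberation

    def perceive(self, observation: dict) -> None:
        """Update beliefs from the entity's local view of the game."""
        self.beliefs.update(observation)

    def deliberate(self) -> Goal | None:
        """Pick the most urgent unfinished goal (the current intention)."""
        pending = [g for g in self.goals if not g.done]
        return max(pending, key=lambda g: g.priority) if pending else None

    def act(self) -> str:
        """Produce a game command for the current intention."""
        intention = self.deliberate()
        if intention is None:
            return "idle"
        return f"entity {self.entity_id}: pursue '{intention.name}'"


# Example: a worker agent that believes a resource is nearby and gathers it.
worker = GameEntityAgent(entity_id=7, goals=[Goal("gather food", 2), Goal("explore", 1)])
worker.perceive({"nearby_resource": "berries"})
print(worker.act())
```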
Abstract:
This thesis deals with distributed control strategies for cooperative control of multi-robot systems. Specifically, distributed coordination strategies are presented for groups of mobile robots. The formation control problem is initially solved exploiting artificial potential fields. The purpose of the presented formation control algorithm is to drive a group of mobile robots to create a completely arbitrarily shaped formation. Robots are initially controlled to create a regular polygon formation. A bijective coordinate transformation is then exploited to extend the scope of this strategy and obtain arbitrarily shaped formations. For this purpose, artificial potential fields are specifically designed, and robots are driven to follow their negative gradient. Artificial potential fields are subsequently exploited to solve the coordinated path tracking problem, thus making the robots autonomously spread along predefined paths and move along them in a coordinated way. The formation control problem is then solved exploiting a consensus-based approach. Specifically, weighted graphs are used both to define the desired formation and to implement collision avoidance. As expected for consensus-based algorithms, this control strategy is experimentally shown to be robust to the presence of communication delays. The global connectivity maintenance issue is then considered. Specifically, an estimation procedure is introduced to allow each agent to compute its own estimate of the algebraic connectivity of the communication graph in a distributed manner. This estimate is then exploited to develop a gradient-based control strategy that ensures that the communication graph remains connected as the system evolves. The proposed control strategy is developed initially for single-integrator kinematic agents and is then extended to Lagrangian dynamical systems.
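To illustrate the potential-field idea described above (robots following the negative gradient of an artificial potential toward a formation), here is a minimal sketch using single-integrator kinematics and a quadratic attractive potential toward assigned formation slots. The potential shape, gain, and slot assignment are illustrative assumptions, not the fields designed in the thesis.

```python
# Sketch: robots follow the negative gradient of a quadratic attractive
# potential U_i(p_i) = 0.5 * ||p_i - s_i||^2 pulling each robot to its
# formation slot s_i. The potential and gain are illustrative choices.

import numpy as np

def potential_gradient(positions: np.ndarray, slots: np.ndarray) -> np.ndarray:
    """Gradient of the quadratic attractive potential for each robot."""
    return positions - slots

def step(positions, slots, gain=0.5, dt=0.1):
    """Single-integrator robots: p_dot = -gain * grad U."""
    return positions + dt * (-gain * potential_gradient(positions, slots))

# Four robots driven toward the vertices of a unit square (a regular polygon).
slots = np.array([[0., 0.], [1., 0.], [1., 1.], [0., 1.]])
positions = np.random.rand(4, 2) * 5.0
for _ in range(200):
    positions = step(positions, slots)
print(np.round(positions, 3))   # positions converge near the assigned slots
```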
Abstract:
Cancer is a multifactorial disease characterized by a very complex etiology. Given its complex nature, a promising therapeutic strategy could rely on the “Multi-Target-Directed Ligand” (MTDL) approach, based on the assumption that a single molecule could hit several targets responsible for the pathology. Several agents acting on DNA are clinically used, but the severe side effects they cause limit their therapeutic application. G-quadruplex structures are DNA secondary structures located in key zones of the human genome; targeting quadruplex structures could yield an anticancer therapy with fewer side effects. In recent years it has been proved that epigenetic modulation can control the expression of human genes, playing a crucial role in carcinogenesis; in particular, abnormal expression of histone deacetylase (HDAC) enzymes is related to tumor onset and progression. This thesis deals with the design and synthesis of new naphthalene diimide (NDI) derivatives endowed with anticancer activity, interacting with DNA together with other targets implicated in cancer development, such as HDACs. NDI-polyamine and NDI-polyamine-hydroxamic acid conjugates have been designed with the aim of providing potential MTDLs, in order to create molecules able to interact simultaneously with different targets involved in this pathology, specifically the G-quadruplex structures and HDACs, and to exploit the polyamine transport system to get selectively into cancer cells. Macrocyclic NDIs have been designed with the aim of improving the quadruplex-targeting profile of the disubstituted NDIs. These compounds proved able to induce a high and selective stabilization of the quadruplex structures, together with cytotoxic activities in the micromolar range. Finally, trisubstituted NDIs have been developed as G-quadruplex binders, potentially effective against pancreatic adenocarcinoma. In conclusion, all these studies may represent a promising starting point for the development of new interesting molecules useful for the treatment of cancer, underlining the versatility of the NDI scaffold.
Abstract:
Systems Biology is an innovative way of doing biology that has recently arisen in bioinformatics contexts, characterised by the study of biological systems as complex systems, with a strong focus on the system level and on the interaction dimension. In other words, the objective is to understand biological systems as a whole, putting in the foreground not only the study of the individual parts as standalone parts, but also their interaction and the global properties that emerge at the system level by means of the interaction among the parts. This thesis focuses on the adoption of multi-agent systems (MAS) as a suitable paradigm for Systems Biology, for developing models and simulations of complex biological systems. Multi-agent systems have recently been introduced in computer science as a suitable paradigm for modelling and engineering complex systems. Roughly speaking, a MAS can be conceived as a set of autonomous and interacting entities, called agents, situated in some kind of environment, where they fruitfully interact and coordinate so as to obtain a coherent global system behaviour. The claim of this work is that the general properties of MAS make them an effective approach for modelling and building simulations of complex biological systems, following the methodological principles identified by Systems Biology. In particular, the thesis focuses on cell populations as biological systems. In order to support the claim, the thesis introduces and describes (i) a MAS-based model conceived for modelling the dynamics of systems of cells interacting inside cell environments called niches, and (ii) a computational tool developed for implementing the models and executing the simulations. The tool is meant to work as a kind of virtual laboratory, on top of which various kinds of virtual experiments can be performed, characterised by the definition and execution of specific models implemented as MASs, so as to support the validation, falsification and improvement of the models through the observation and analysis of the simulations. A hematopoietic stem cell system is taken as the reference case study for formulating a specific model and executing virtual experiments.
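As a toy illustration of the agents-in-a-niche idea described above, here is a sketch of cell agents that divide or die stochastically inside a capacity-limited niche. The rules, rates and capacity are illustrative assumptions, not the thesis's actual hematopoietic model or tool.

```python
# Sketch: cell agents living in a capacity-limited niche, dividing or dying
# with simple probabilities. Rates and capacity are illustrative only.

import random

class CellAgent:
    def __init__(self, p_divide=0.10, p_die=0.05):
        self.p_divide = p_divide
        self.p_die = p_die

    def step(self, niche_has_room: bool):
        """Return this cell's contribution to the next simulation round."""
        r = random.random()
        if r < self.p_die:
            return []                       # cell dies
        if niche_has_room and r < self.p_die + self.p_divide:
            return [self, CellAgent()]      # cell divides
        return [self]                       # cell survives unchanged

def simulate(n_cells=10, capacity=100, rounds=50):
    cells = [CellAgent() for _ in range(n_cells)]
    for _ in range(rounds):
        next_gen = []
        for cell in cells:
            next_gen.extend(cell.step(niche_has_room=len(next_gen) < capacity))
        cells = next_gen
    return len(cells)

print("population after 50 rounds:", simulate())
```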
Abstract:
Next to leisure, sport, and household activities, the most common activity resulting in medically consulted injuries and poisonings in the United States is work, with an estimated 4 million workplace-related episodes reported in 2008 (U.S. Department of Health and Human Services, 2009). To address the risks inherent to various occupations, risk management programs are typically put in place that include worker training, engineering controls, and personal protective equipment. Recent studies have shown that such interventions alone are insufficient to adequately manage workplace risks, and that the climate in which the workers and safety program exist (known as the "safety climate") is an equally important consideration. The organizational safety climate is so important that many studies have focused on developing means of measuring it in various work settings. While safety climate studies have been reported for several industrial settings, published studies on assessing safety climate in the university work setting are largely absent. Universities are particularly unique workplaces because of the potential exposure to a diversity of agents representing both acute and chronic risks. Universities are also unique because readily detectable health and safety outcomes are relatively rare. The ability to measure safety climate in a work setting with rarely observed systemic outcome measures could serve as a powerful measure for the evaluation of safety risk management programs. The goal of this research study was the development of a survey tool to measure safety climate specifically in the university work setting. The use of a standardized tool also allows for comparisons among universities throughout the United States. A specific study objective, a quantitative assessment of safety climate at five universities across the United States, was accomplished: 971 participants at the five universities completed an online questionnaire to measure the safety climate. The average safety climate score across the five universities was 3.92 on a scale of 1 to 5, with 5 indicating very high perceptions of safety at these universities. The two lowest overall dimensions of university safety climate were "acknowledgement of safety performance" and "department and supervisor's safety commitment". The results underscore how the perception of safety climate is significantly influenced at the local level. A second study objective, evaluating the reliability and validity of the safety climate questionnaire, was also accomplished. A third objective fulfilled was to provide executive summaries of the questionnaire results to the participating universities' health & safety professionals and to collect feedback on usefulness, relevance and perceived accuracy. Overall, the professionals found the survey and results to be very useful, relevant and accurate. Finally, the safety climate questionnaire will be offered to other universities for benchmarking purposes at the annual meeting of a nationally recognized university health and safety organization. The ultimate goal of the project, the creation of a standardized tool that can be used for measuring safety climate in the university work setting and can facilitate meaningful comparisons amongst institutions, was accomplished.
Abstract:
High-frequency data collected continuously over a multiyear time frame are required for investigating the various agents that drive ecological and hydrodynamic processes in estuaries. Here, we present water quality and current in-situ observations from a fixed monitoring station operating from 2008 to 2014 in the lower Guadiana Estuary, southern Portugal (37°11.30' N, 7°24.67' W). The data were recorded by a multi-parametric probe providing hourly records (temperature, salinity, chlorophyll, dissolved oxygen, turbidity, and pH) at a water depth of ~1 m, and by a bottom-mounted acoustic Doppler current profiler measuring the pressure, near-bottom temperature, and flow velocity through the water column every 15 min. The time-series data, in particular the probe records, present substantial gaps arising from equipment failure and maintenance, which are unavoidable with this type of observation in harsh environments. However, prolonged (months-long) periods of multi-parametric observations under contrasting external forcing conditions are available. The raw data are reported together with flags indicating the quality status of each record. River discharge data from two hydrographic stations located near the estuary head are also provided to support data analysis and interpretation.
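For readers combining the two instrument records, here is a sketch of how the quality flags might be used to mask suspect probe records before resampling. The file name, column names, and the "good" flag value used here are assumptions for illustration only and do not reflect the published data format; consult the dataset documentation for the real conventions.

```python
# Sketch: mask flagged probe records and resample to a common hourly axis.
# File name, column names, and the "good" flag value are assumed for
# illustration; they are not the dataset's actual conventions.

import pandas as pd

probe = pd.read_csv("guadiana_probe.csv", parse_dates=["time"], index_col="time")

GOOD_FLAG = 1                               # assumed value for "good" records
for var in ["temperature", "salinity", "turbidity"]:
    probe.loc[probe[f"{var}_flag"] != GOOD_FLAG, var] = pd.NA

hourly = probe[["temperature", "salinity", "turbidity"]].resample("1h").mean()
print(hourly.head())
```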
Abstract:
This article presents a multi-agent expert system (SMAF) that allows the input of incidents which occur in different elements of the telecommunications area. SMAF interacts with experts and general users, and each agent interacts with the whole agent community, recording the incidents and their solutions in a knowledge base, without the analysis of their causes. The incidents are expressed using keywords taken from natural language (originally Spanish), and their main concepts are recorded with their severities as the users express them. Then, a search is made for the best solution to each incident, aided by a human operator and using a notion of distance between incidents.
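The abstract mentions matching incidents to recorded solutions through a distance notion over keyword descriptions. Below is a sketch using Jaccard distance over keyword sets; this particular metric and the toy knowledge base are assumptions for illustration, not the actual SMAF measure.

```python
# Sketch: rank stored incidents by keyword distance to a new incident.
# Jaccard distance is an illustrative choice, not necessarily SMAF's metric.

def jaccard_distance(a: set, b: set) -> float:
    """1 - |A ∩ B| / |A ∪ B|; 0 means identical keyword sets."""
    if not a and not b:
        return 0.0
    return 1.0 - len(a & b) / len(a | b)

knowledge_base = {
    "router reboot loop": {"router", "reboot", "loop"},
    "fiber cut on trunk": {"fiber", "cut", "trunk", "outage"},
    "DNS resolution failure": {"dns", "resolution", "failure"},
}

new_incident = {"router", "reboot", "failure"}
ranked = sorted(knowledge_base.items(),
                key=lambda kv: jaccard_distance(new_incident, kv[1]))
print("closest recorded incident:", ranked[0][0])
```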
Abstract:
Managing large medical image collections is an increasingly demanding issue in many hospitals and other medical settings. A huge amount of this information is generated daily, which requires robust and agile systems. In this paper we present a distributed multi-agent system capable of managing very large medical image datasets. In this approach, agents extract low-level information from images and store it in a data structure implemented in a relational database. The data structure can also store semantic information related to images and particular regions. A distinctive aspect of our work is that a single image can be divided so that the resulting sub-images can be stored and managed separately by different agents, improving performance in data accessing and processing. The system also offers the possibility of applying region-based operations and filters on images, facilitating image classification. These operations can be performed directly on the data structures in the database.
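To illustrate the sub-image division described above, here is a sketch that splits an image into tiles and lets each tile be handled by a different agent that extracts a simple low-level feature. The 2x2 tiling scheme and the mean-intensity feature are illustrative assumptions, not the paper's actual feature extraction or storage scheme.

```python
# Sketch: split an image into tiles so different agents can store and
# process sub-images independently. The 2x2 grid and the mean-intensity
# "low-level feature" are illustrative choices only.

import numpy as np

def split_into_tiles(image: np.ndarray, rows: int, cols: int):
    """Yield (row, col, sub_image) for a rows x cols tiling of the image."""
    h, w = image.shape[:2]
    for r in range(rows):
        for c in range(cols):
            tile = image[r * h // rows:(r + 1) * h // rows,
                         c * w // cols:(c + 1) * w // cols]
            yield r, c, tile

class TileAgent:
    """One agent per sub-image: extracts and stores low-level information."""
    def __init__(self, row, col, tile):
        self.row, self.col = row, col
        self.features = {"mean_intensity": float(tile.mean())}

image = np.random.randint(0, 256, size=(512, 512), dtype=np.uint8)
agents = [TileAgent(r, c, t) for r, c, t in split_into_tiles(image, 2, 2)]
for a in agents:
    print(f"tile ({a.row},{a.col}) ->", a.features)
```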
Abstract:
This article proposes a MAS architecture for network diagnosis under uncertainty. Network diagnosis is divided into two inference processes: hypothesis generation and hypothesis confirmation. The first process is distributed among several agents based on an MSBN, while the second is carried out by agents using semantic reasoning. A diagnosis ontology has been defined in order to combine both inference processes. To drive the deliberation process, dynamic data about the influence of observations are gathered during the diagnosis process. In order to achieve quick and reliable diagnoses, this influence is used to choose the best action to perform. This approach has been evaluated in a P2P video streaming scenario. Computational and time improvements are highlighted in the conclusions.
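The abstract describes using the influence of observations to choose the best action during diagnosis. One common way to formalize such a choice, sketched below purely as an assumption about the general idea rather than the article's method, is to rank candidate observations by the expected reduction of entropy over the hypothesis distribution; the hypotheses, likelihoods, and prior are illustrative.

```python
# Sketch: pick the next observation that is expected to reduce the most
# uncertainty over diagnosis hypotheses. The hypotheses, likelihoods, and
# prior below are illustrative, not taken from the article.

import math

def entropy(dist):
    return -sum(p * math.log2(p) for p in dist.values() if p > 0)

def posterior(prior, likelihood, outcome):
    """Bayes update: P(h | outcome) ∝ P(outcome | h) * P(h)."""
    unnorm = {h: likelihood[h][outcome] * prior[h] for h in prior}
    z = sum(unnorm.values())
    return {h: p / z for h, p in unnorm.items()}

def expected_entropy(prior, likelihood):
    """Average posterior entropy over the outcomes of one observation."""
    outcomes = next(iter(likelihood.values())).keys()
    total = 0.0
    for o in outcomes:
        p_o = sum(likelihood[h][o] * prior[h] for h in prior)
        if p_o > 0:
            total += p_o * entropy(posterior(prior, likelihood, o))
    return total

prior = {"congested_link": 0.5, "faulty_peer": 0.3, "server_overload": 0.2}
observations = {
    "ping_gateway": {h: {"ok": p, "fail": 1 - p}
                     for h, p in [("congested_link", 0.3),
                                  ("faulty_peer", 0.9),
                                  ("server_overload", 0.9)]},
    "check_peer":   {h: {"ok": p, "fail": 1 - p}
                     for h, p in [("congested_link", 0.8),
                                  ("faulty_peer", 0.2),
                                  ("server_overload", 0.8)]},
}

best = min(observations, key=lambda o: expected_entropy(prior, observations[o]))
print("most informative observation:", best)
```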
Abstract:
In recent decades, there has been an increasing interest in systems comprised of several autonomous mobile robots, and as a result, there has been a substantial amount of development in the field of Artificial Intelligence, especially in Robotics. There are several studies in the literature by researchers from the scientific community that focus on the creation of intelligent machines and devices capable of imitating the functions and movements of living beings. Multi-Robot Systems (MRS) can often deal with tasks that are difficult, if not impossible, to be accomplished by a single robot. In the context of MRS, one of the main challenges is the need to control, coordinate and synchronize the operation of multiple robots to perform a specific task. This requires the development of new strategies and methods which allow us to obtain the desired system behavior in a formal and concise way. This PhD thesis aims to study the coordination of multi-robot systems and, in particular, addresses the problem of the distribution of heterogeneous multi-tasks. The main interest in these systems is to understand how, from simple rules inspired by the division of labor in social insects, a group of robots can perform tasks in an organized and coordinated way. We are mainly interested in truly distributed or decentralized solutions in which the robots themselves, autonomously and in an individual manner, select a particular task so that all tasks are optimally distributed. In general, to perform the multi-task distribution among a team of robots, they have to synchronize their actions and exchange information. Under this approach we can speak of multi-task selection instead of multi-task assignment, which means that the agents or robots select the tasks instead of being assigned a task by a central controller. The key element in these algorithms is the estimation of the stimuli and the adaptive update of the thresholds. This means that each robot performs this estimate locally depending on the load or the number of pending tasks to be performed. In addition, the evaluation of the results for each approach is of particular interest, comparing the results obtained by introducing noise in the number of pending loads with the purpose of simulating the robot's error in estimating the real number of pending tasks. The main contribution of this thesis can be found in the approach based on self-organization and division of labor in social insects. An experimental scenario for the coordination problem among multiple robots, the robustness of the approaches and the generation of dynamic tasks have been presented and discussed. The particular issues studied are the following (a sketch of the response threshold selection rule is given after this list).
Threshold models: experiments conducted to test the response threshold model, with the objective of analyzing the system performance index for the problem of the distribution of heterogeneous multi-tasks in multi-robot systems; additive noise has also been introduced in the number of pending loads, and dynamic tasks have been generated over time.
Learning automata methods: experiments to test the learning automata-based probabilistic algorithms. The approach was tested to evaluate the system performance index with additive noise and with dynamic task generation for the same problem of the distribution of heterogeneous multi-tasks in multi-robot systems.
Ant colony optimization: the goal of the experiments presented is to test the ant colony optimization-based deterministic algorithms to achieve the distribution of heterogeneous multi-tasks in multi-robot systems. In the experiments performed, the system performance index is evaluated by introducing additive noise and dynamic task generation over time.
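As referenced above, here is a minimal sketch of a response-threshold selection rule: each robot engages in a task with a probability that grows with the task's stimulus (e.g. its pending load) and shrinks with the robot's threshold for that task. The classical quadratic response function is used here as an assumption about the model's general form; the thresholds, stimuli, and selection loop are illustrative, not the thesis's exact algorithm.

```python
# Sketch of response-threshold task selection: the probability that a robot
# engages in a task grows with the task stimulus s and decreases with its
# own threshold theta, following the classical quadratic response function
#   P(engage) = s^2 / (s^2 + theta^2)
# Thresholds, stimuli, and the selection loop below are illustrative.

import random

def engagement_probability(stimulus: float, threshold: float) -> float:
    return stimulus ** 2 / (stimulus ** 2 + threshold ** 2)

def select_task(thresholds: dict, stimuli: dict) -> str | None:
    """The robot samples tasks (highest stimulus first) and engages in the first that fires."""
    for task in sorted(stimuli, key=stimuli.get, reverse=True):
        if random.random() < engagement_probability(stimuli[task], thresholds[task]):
            return task
    return None

# One robot specialised in transport (low threshold) facing two pending loads.
thresholds = {"transport": 2.0, "cleaning": 8.0}
stimuli = {"transport": 5.0, "cleaning": 5.0}     # e.g. number of pending tasks
print("selected task:", select_task(thresholds, stimuli))
```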
Abstract:
This article presents the design, kinematic model and communication architecture for the multi-agent robotic system called SMART. The philosophy behind this kind of system requires the communication architecture to accommodate the concurrency of the whole system. The proposed architecture combines different communication technologies (TCP/IP and Bluetooth) under one protocol designed for cooperation among agents and other elements of the system, such as IP cameras, the image processing library, the path planner, the user interface, the control block and the data block. The high-level control is modeled with Work-Flow Petri nets and implemented in C++ and C#. Experimental results show the performance of the designed architecture.
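Since the high-level control is described as modeled with Work-Flow Petri nets, here is a minimal sketch of Petri-net firing semantics (places, transitions, tokens). The toy net shown is an assumption for illustration only, not the SMART control model, and it is written in Python rather than the C++/C# of the actual system.

```python
# Minimal Petri-net firing sketch (places hold tokens; a transition fires
# when all its input places have a token). The toy net below is illustrative
# and does not reproduce the SMART work-flow model.

class PetriNet:
    def __init__(self, marking, transitions):
        self.marking = dict(marking)            # place -> token count
        self.transitions = transitions          # name -> (inputs, outputs)

    def enabled(self, name):
        inputs, _ = self.transitions[name]
        return all(self.marking.get(p, 0) >= 1 for p in inputs)

    def fire(self, name):
        if not self.enabled(name):
            raise ValueError(f"transition '{name}' is not enabled")
        inputs, outputs = self.transitions[name]
        for p in inputs:
            self.marking[p] -= 1
        for p in outputs:
            self.marking[p] = self.marking.get(p, 0) + 1

# Toy work flow: plan a path, then move, then report.
net = PetriNet(
    marking={"goal_received": 1},
    transitions={
        "plan_path": (["goal_received"], ["path_ready"]),
        "move":      (["path_ready"], ["at_goal"]),
        "report":    (["at_goal"], ["done"]),
    },
)
for t in ["plan_path", "move", "report"]:
    net.fire(t)
print(net.marking)   # the single token ends up in the 'done' place
```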
Abstract:
In this paper, an innovative approach to performing distributed Bayesian inference using a multi-agent architecture is presented. The final goal is dealing with uncertainty in network diagnosis, but the solution can be applied in other fields. The validation testbed has been a P2P streaming video service. An assessment of the work is presented in order to show its advantages when compared with traditional manual processes and previous systems.
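As a schematic of the distributed-inference idea only (not the paper's actual architecture), here is a sketch in which each agent contributes the likelihood of its own local observation and the fused posterior is obtained by multiplying those likelihoods with a shared prior, assuming conditional independence of the observations. The hypotheses and numbers are illustrative.

```python
# Sketch: each agent reports the likelihood of its local observation under
# every fault hypothesis; a coordinator fuses them with a shared prior.
# Assumes observations are conditionally independent given the hypothesis;
# hypotheses and numbers are illustrative, not from the paper.

def fuse(prior: dict, agent_likelihoods: list) -> dict:
    """Posterior ∝ prior * product of per-agent likelihoods."""
    posterior = dict(prior)
    for likelihood in agent_likelihoods:
        for h in posterior:
            posterior[h] *= likelihood[h]
    z = sum(posterior.values())
    return {h: p / z for h, p in posterior.items()}

prior = {"congested_link": 0.4, "faulty_peer": 0.4, "server_overload": 0.2}
reports = [
    {"congested_link": 0.7, "faulty_peer": 0.2, "server_overload": 0.4},  # agent A
    {"congested_link": 0.6, "faulty_peer": 0.1, "server_overload": 0.5},  # agent B
]
print(fuse(prior, reports))
```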
Abstract:
This paper presents a testing methodology for applying Behaviour Driven Development (BDD) techniques while developing Multi-Agent Systems (MAS), the so-called BEhavioural Agent Simple Testing (BEAST) methodology. It is supported by an open source framework (the BEAST Tool) which automatically generates test case skeletons from BDD scenario specifications. The framework allows testing MASs based on the JADE or JADEX platforms and offers a set of configurable Mock Agents which allow the execution of tests while the system is under development. The BEAST Tool has been validated in the development of a MAS for fault diagnosis in FTTH (Fiber To The Home) networks.
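To illustrate the kind of Given/When/Then skeleton a BDD approach produces around a mock agent, here is a schematic test in Python. The real BEAST Tool targets JADE/JADEX-based systems, so the class names, the mock agent, and the test structure below are assumptions for illustration only and do not reflect the generated code or the framework's API.

```python
# Schematic Given/When/Then test skeleton with a hypothetical mock agent.
# The real BEAST Tool targets JADE/JADEX MASs; this Python sketch only
# illustrates the shape of a BDD-style test, not the actual framework.

import unittest

class MockDiagnosisAgent:
    """Stand-in for a configurable Mock Agent: replies with a canned message."""
    def __init__(self, canned_reply):
        self.canned_reply = canned_reply
        self.received = []

    def send(self, message):
        self.received.append(message)
        return self.canned_reply

class TestFaultDiagnosisScenario(unittest.TestCase):
    def test_link_failure_is_reported(self):
        # Given a network agent observing a broken FTTH link
        mock = MockDiagnosisAgent(canned_reply="FAULT: fiber link down")
        # When the monitoring agent asks for a diagnosis
        reply = mock.send("diagnose node 42")
        # Then a fault report is returned
        self.assertIn("FAULT", reply)
        self.assertEqual(mock.received, ["diagnose node 42"])

if __name__ == "__main__":
    unittest.main()
```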
Abstract:
This paper focuses on the general problem of coordinating multi-robot systems and, more specifically, addresses the self-election of heterogeneous and specialized tasks by autonomous robots. In this regard, experiments are proposed with two different biologically inspired techniques based chiefly on self-organization and emergence: response threshold models and ant colony optimization. Under this approach one can speak of multi-task selection instead of multi-task allocation, meaning that the agents or robots select the tasks instead of being assigned a task by a central controller. The key element in these algorithms is the estimation of the stimuli and the adaptive update of the thresholds. This means that each robot performs this estimate locally depending on the load or the number of pending tasks to be performed. The robustness of the algorithms has been evaluated by perturbing the number of pending loads, to simulate the robot's error in estimating the real number of pending tasks, and by generating loads dynamically over time. The paper ends with a critical discussion of the experimental results.
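As background for the ant colony optimization variant mentioned above, here is a minimal sketch of a pheromone-driven deterministic task selection rule: each robot picks the task with the highest pheromone-weighted pending load, pheromone evaporates over time and is reinforced on completed tasks. The selection and update rules, parameters, and task names are assumptions for illustration and are not taken from the paper.

```python
# Sketch of an ACO-flavoured deterministic task selection rule: pick the task
# with the highest pheromone * pending-load product; pheromone evaporates and
# is reinforced on the task just served. Parameters and rules are illustrative.

def select_task(pheromone: dict, pending_loads: dict) -> str:
    """Deterministic choice: argmax of pheromone * pending load."""
    return max(pending_loads, key=lambda t: pheromone[t] * pending_loads[t])

def update_pheromone(pheromone: dict, completed_task: str,
                     evaporation=0.1, deposit=1.0) -> dict:
    """Evaporate everywhere, reinforce the task that was just served."""
    updated = {t: (1 - evaporation) * p for t, p in pheromone.items()}
    updated[completed_task] += deposit
    return updated

pheromone = {"transport": 1.0, "cleaning": 1.0}
pending_loads = {"transport": 4, "cleaning": 7}

task = select_task(pheromone, pending_loads)
pheromone = update_pheromone(pheromone, task)
print("selected:", task, "| pheromone:", pheromone)
```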