817 results for multi-agent learning
Abstract:
Background: The emergence of multidrug-resistant bacteria has become a major threat and calls for an urgent search for new, effective and safe antibacterial agents. Objectives: This study aims to evaluate the anticancer and antibacterial activities of secondary metabolites from Penicillium sp., an endophytic fungus associated with the leaves of Garcinia nobilis. Methods: The culture filtrate from the fermentation of Penicillium sp. was extracted and analyzed by liquid chromatography–mass spectrometry, and the major metabolites were isolated and identified by spectroscopic analyses and by comparison with published data. The antibacterial activity of the compounds was assessed by the broth microdilution method, while the anticancer activity was determined by the 3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyltetrazolium bromide assay. Results: The fractionation of the crude extract afforded penialidins A–C (1–3), citromycetin (4), p-hydroxyphenylglyoxalaldoxime (5) and brefelfin A (6). All of the compounds tested showed antibacterial activity (MIC = 0.50–128 μg/mL) against the Gram-negative multidrug-resistant bacteria Vibrio cholerae (the causative agent of cholera) and Shigella flexneri (the causative agent of shigellosis), as well as significant anticancer activity (LC50 = 0.88–9.21 μg/mL) against HeLa cells. Conclusion: The results indicate that compounds 1–6 show good antibacterial and anticancer activities with no toxicity to human red blood cells or normal Vero cells.
Abstract:
Intelligent agents offer a new and exciting way of understanding the world of work. We apply agent-based simulation to investigate a set of problems in a retail context. Specifically, we are working to understand the relationship between human resource management practices and retail productivity. Our multi-disciplinary research team draws upon expertise from work psychologists and computer scientists. Our research so far has led us to conduct case study work with a top ten UK retailer. Based on our case study experience and data we are developing a simulator that can be used to investigate the impact of management practices (e.g. training, empowerment, teamwork) on customer satisfaction and retail productivity.
Abstract:
In our research we investigate the output accuracy of discrete event simulation models and agent-based simulation models when studying human-centric complex systems. In this paper we focus on human reactive behaviour, since it can be implemented in both modelling approaches using standard methods. As a case study we have chosen the retail sector, in particular the operations of the fitting room in the womenswear department of a large UK department store. In our case study we looked at ways of determining the efficiency of implementing new management policies for the fitting room operation by modelling the reactive behaviour of staff and customers of the department. First, we carried out a validation experiment in which we compared the results from our models to the performance of the real system. This experiment also allowed us to establish differences in output accuracy between the two modelling methods. In a second step, a multi-scenario experiment was carried out to study the behaviour of the models when they are used for the purpose of operational improvement. Overall we found that, for our case study example, discrete event simulation and agent-based simulation have the same potential to support the investigation into the efficiency of implementing new management policies.
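As a rough illustration of the reactive behaviour modelled in this kind of study, the Python sketch below (all parameters invented, not taken from the department-store data) simulates a fitting room where a staff agent opens an extra cubicle whenever the queue grows, and compares mean customer waiting time against a fixed-staffing baseline.

import random

random.seed(1)

# Illustrative parameters only (not calibrated to the real department store).
ARRIVAL_PROB = 0.3                 # chance a customer arrives in a given minute
SERVICE_MIN, SERVICE_MAX = 4, 10   # minutes a customer occupies a cubicle
BASE_CUBICLES = 2
QUEUE_TRIGGER = 4                  # reactive rule: open an extra cubicle if queue >= 4

def simulate(minutes=480, reactive_staff=True):
    waiting = []            # arrival times of customers queueing for a cubicle
    busy = []               # finish times of occupied cubicles
    waits = []              # realised waiting times
    cubicles = BASE_CUBICLES
    for t in range(minutes):
        if random.random() < ARRIVAL_PROB:
            waiting.append(t)
        if reactive_staff:  # staff agent reacts to congestion
            cubicles = BASE_CUBICLES + (1 if len(waiting) >= QUEUE_TRIGGER else 0)
        busy = [done for done in busy if done > t]    # release finished cubicles
        while waiting and len(busy) < cubicles:       # admit queueing customers
            waits.append(t - waiting.pop(0))
            busy.append(t + random.randint(SERVICE_MIN, SERVICE_MAX))
    return sum(waits) / len(waits) if waits else 0.0

print("mean wait, reactive staff:", round(simulate(reactive_staff=True), 2))
print("mean wait, fixed staffing:", round(simulate(reactive_staff=False), 2))

The same reactive rule can be expressed with standard methods in either a discrete event simulation or an agent-based model, which is what makes the two approaches comparable for this case study.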
Abstract:
Energy Conservation Measure (ECM) project selection is made difficult given real-world constraints, limited resources to implement savings retrofits, various suppliers in the market and project financing alternatives. Many of these energy-efficient retrofit projects should be viewed as a series of investments with annual returns for these traditionally risk-averse agencies. Given a list of available ECMs, federal, state and local agencies must determine how to implement projects at the lowest cost. The most common methods of implementation planning are suboptimal relative to cost. Federal, state and local agencies can obtain greater returns on their energy conservation investment than with traditional methods, regardless of the implementing organization. This dissertation outlines several approaches to improve the traditional energy conservation models. Public buildings in regions with similar energy conservation goals in the United States or internationally can also benefit greatly from this research. Additionally, many private owners of buildings are under mandates to conserve energy; e.g., Local Law 85 of the New York City Energy Conservation Code requires any building, public or private, to meet the most current energy code for any alteration or renovation. Thus, both public and private stakeholders can benefit from this research. The research in this dissertation advances and presents models that decision-makers can use to optimize the selection of ECM projects with respect to the total cost of implementation. A practical application of a two-level mathematical program with equilibrium constraints (MPEC) improves the current best practice for agencies concerned with making the most cost-effective selection leveraging energy services companies or utilities. The two-level model maximizes savings to the agency and profit to the energy services companies (Chapter 2). An additional model leverages a single congressional appropriation to implement ECM projects (Chapter 3). Returns from implemented ECM projects are used to fund additional ECM projects. In these cases, fluctuations in energy costs and uncertainty in the estimated savings severely influence ECM project selection and the amount of the appropriation requested. A proposed risk-aversion method imposes a minimum on the number of projects completed in each stage. A comparative method using Conditional Value at Risk is analyzed, and time consistency is also addressed in this chapter. This work demonstrates how a risk-based, stochastic, multi-stage model with binary decision variables at each stage provides a much more accurate estimate for planning than the agency's traditional approach and deterministic models. Finally, in Chapter 4, a rolling-horizon model allows for subadditivity and superadditivity of the energy savings to simulate interactive effects between ECM projects. The approach makes use of inequalities (McCormick, 1976) to re-express constraints that involve the product of binary variables with an exact linearization (related to the convex hull of those constraints). This model additionally shows the benefits of learning between stages while remaining consistent with the single congressional appropriations framework.
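For context on the exact linearization used in Chapter 4, the standard McCormick inequalities replace a product of binary decision variables with a continuous variable and a handful of linear constraints; a minimal sketch for two binaries (notation mine, not the dissertation's) is:

% Exact linearization of the product z = x*y of two binary variables (McCormick, 1976):
% introduce z and impose linear inequalities; for x, y in {0,1} they force z = x*y.
\begin{align*}
  z &\le x, \qquad z \le y, \\
  z &\ge x + y - 1, \qquad z \ge 0, \qquad x, y \in \{0,1\}.
\end{align*}

A product of n binaries linearizes analogously, with z <= x_i for each i and z >= sum_i x_i - (n - 1), which is how interactive (sub- or superadditive) savings terms between ECM projects can be kept linear.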
Abstract:
The digital divide is an issue that concerns our technology-dominated society. The pioneers of ubiquitous computing dreamt of a total proliferation of information technology, but the reality we live in is not yet prepared for this future. There is currently a need to develop programs in order to diminish the difference between the digitally included and the digitally excluded. PROEJA-Transiarte is a project run by Universidade de Brasília in the city of Ceilândia, Federal District of Brazil. It proposes a different approach to the issue of the digital divide by introducing the cooperative creation of cyberart, based on the life stories of each participant, into the regular curriculum of EJA (Educação de Jovens e Adultos) classes, thus implementing the concept of solidary education. This research project investigated the role that the cooperative learning the students put into practice during the project workshops plays in reducing the digital exclusion that a great part of the students experience. It looked into their activities, analyzing the development of their cooperation and then placing it in the context of digital and social inclusion. After multi-dimensional research on the theme, in the context of PROEJA-Transiarte, the conclusion shows the impact cooperative learning has on reducing the digital divide, drawing on the perceptions of the currently involved students, the researchers active in the project, and former students whose lives improved because of the workshops they participated in.
Abstract:
Technologies for Big Data and Data Science are receiving increasing research interest nowadays. This paper introduces the prototype architecture of a tool aimed at solving Big Data optimization problems. Our tool combines the jMetal framework for multi-objective optimization with Apache Spark, a technology that is gaining momentum. In particular, we make use of the streaming facilities of Spark to feed an optimization problem with data from different sources. We demonstrate the use of our tool by solving a dynamic bi-objective instance of the Traveling Salesman Problem (TSP) based on near real-time traffic data from New York City, which is updated several times per minute. Our experiment shows that jMetal and Spark can be integrated to provide a software platform for dealing with dynamic multi-objective optimization problems.
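jMetal is a Java framework and the streaming side runs on Spark, but the structure of a dynamic bi-objective TSP can be conveyed with a small language-agnostic sketch (Python here, with a made-up 8-city instance rather than the New York data): the distance objective is static, while the travel-time objective is patched by each incoming micro-batch, so the same tour is re-evaluated against fresh data.

import random

random.seed(0)
N = 8                                                        # illustrative city count
dist = [[abs(i - j) for j in range(N)] for i in range(N)]    # static distance matrix
time_ = [[dist[i][j] for j in range(N)] for i in range(N)]   # travel times, updated by the stream

def evaluate(tour):
    """Bi-objective evaluation: (total distance, total travel time)."""
    legs = list(zip(tour, tour[1:] + tour[:1]))              # closed tour
    return (sum(dist[a][b] for a, b in legs),
            sum(time_[a][b] for a, b in legs))

def apply_traffic_update(updates):
    """Stand-in for the streaming receiver: patch travel times in place."""
    for a, b, minutes in updates:
        time_[a][b] = time_[b][a] = minutes

tour = random.sample(range(N), N)
print("before update:", evaluate(tour))
apply_traffic_update([(0, 1, 9), (2, 3, 14)])                # simulated micro-batch
print("after update: ", evaluate(tour))

In the actual tool, the role played here by apply_traffic_update is filled by Spark's streaming facilities feeding the jMetal problem with near real-time traffic data.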
Abstract:
To analyze the characteristics and predict the dynamic behaviors of complex systems over time, comprehensive research is needed to enable the development of systems that can intelligently adapt to evolving conditions and infer new knowledge with algorithms that are not predesigned. This dissertation research studies the integration of techniques and methodologies resulting from the fields of pattern recognition, intelligent agents, artificial immune systems, and distributed computing platforms to create technologies that can more accurately describe and control the dynamics of real-world complex systems. The need for such technologies is emerging in manufacturing, transportation, hazard mitigation, weather and climate prediction, homeland security, and emergency response. Motivated by the ability of mobile agents to dynamically incorporate additional computational and control algorithms into executing applications, mobile agent technology is employed in this research for adaptive sensing and monitoring in a wireless sensor network. Mobile agents are software components that can travel from one computing platform to another in a network and carry the programs and data states needed for performing the assigned tasks. To support the generation, migration, communication, and management of mobile monitoring agents, an embeddable mobile agent system (Mobile-C) is integrated with sensor nodes. Mobile monitoring agents visit distributed sensor nodes, read real-time sensor data, and perform anomaly detection using the equipped pattern recognition algorithms. The optimal control of agents is achieved by mimicking the adaptive immune response and applying multi-objective optimization algorithms. The mobile agent approach has the potential to reduce the communication load and energy consumption in monitoring networks. The major research work of this dissertation project includes: (1) studying effective feature extraction methods for time series measurement data; (2) investigating the impact of the feature extraction methods and dissimilarity measures on the performance of pattern recognition; (3) researching the effects of environmental factors on the performance of pattern recognition; (4) integrating an embeddable mobile agent system with wireless sensor nodes; (5) optimizing agent generation and distribution using artificial immune system concepts and multi-objective algorithms; (6) applying mobile agent technology and pattern recognition algorithms to adaptive structural health monitoring and driving cycle pattern recognition; (7) developing a web-based monitoring network to enable the visualization and analysis of real-time sensor data remotely. Techniques and algorithms developed in this dissertation project will contribute to research advances in networked distributed systems operating under changing environments.
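As a toy illustration of items (1) and (2) above, the Python sketch below (features, threshold and reference data all invented) extracts a few statistical features from a time-series window and flags an anomaly when the dissimilarity to known healthy patterns exceeds a threshold, roughly the kind of check a mobile monitoring agent could carry to a sensor node.

import math

def features(window):
    """Simple statistical features of a time-series window (one choice among many)."""
    n = len(window)
    mean = sum(window) / n
    std = math.sqrt(sum((x - mean) ** 2 for x in window) / n)
    return (mean, std, max(window) - min(window))

def euclidean(u, v):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

# Reference patterns from healthy-state data (made-up numbers for illustration).
reference = [features([0.0, 0.1, -0.1, 0.05, -0.05] * 4)]
THRESHOLD = 1.0                     # illustrative dissimilarity threshold

def is_anomalous(window):
    """What a monitoring agent might run on a sensor node's latest readings."""
    return min(euclidean(features(window), r) for r in reference) > THRESHOLD

print(is_anomalous([0.02, -0.03, 0.05, 0.0, -0.04] * 4))   # close to reference: False
print(is_anomalous([1.5, 1.7, 1.4, 1.9, 1.6] * 4))         # drifted signal: True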
Abstract:
The population of English Language Learners (ELLs) globally has been increasing substantially every year. In the United States alone, adult ELLs are the fastest growing portion of learners in adult education programs (Yang, 2005). There is a significant need to improve the teaching of English to ELLs in the United States and other English-speaking dominant countries. However, for many ELLs, speaking, especially to Native English Speakers (NESs), causes considerable language anxiety, which in turn plays a vital role in hindering their language development and academic progress (Pichette, 2009; Woodrow, 2006). Task-based Language Teaching (TBLT), such as simulation activities, has long been viewed as an effective approach for second-language development. Current advances in technology and the rapid emergence of Multi-User Virtual Environments (MUVEs) have provided an opportunity for educators to consider conducting simulations online for ELLs to practice speaking English with NESs. Yet to date, empirical research on the effects of MUVEs on ELLs' language development and speaking is limited (Garcia-Ruiz, Edwards, & Aquino-Santos, 2007). This study used a true experimental treatment-control group repeated measures design to compare the perceived speaking anxiety levels (as measured by an anxiety scale administered per simulation activity) of 11 ELLs (5 in the control group, 6 in the experimental group) when speaking to NESs during 10 simulation activities. Simulations in the control group were done face-to-face, while those in the experimental group were done in the MUVE of Second Life. The results of the repeated measures ANOVA, after the Huynh-Feldt epsilon correction, demonstrated for both groups a significant decrease in anxiety levels over time from the first simulation to the tenth and final simulation. When comparing the two groups, the results revealed a statistically significant difference, with the experimental group demonstrating a greater anxiety reduction. These results suggest that language instructors should consider including face-to-face and MUVE simulations with ELLs paired with NESs as part of their language instruction. Future investigations should examine the use of other multi-user virtual environments and/or measure other dimensions of the ELL/NES interactions.
Abstract:
Integration, inclusion, and equity constitute fundamental dimensions of democracy in post-World War II societies and their institutions. The study presented here reports upon the ways in which individuals and institutions both use and account for the roles that technologies, including ICT, play in disabling and enabling access for learning in higher education for all. Technological innovations during the 20th and 21st centuries, including ICT, have been heralded as holding significant promise for revolutionizing issues of access in societal institutions like schools, healthcare services, etc. (at least in the global North). Taking a socially oriented perspective, the study presented in this paper focuses on an ethnographically framed analysis of two datasets that critically explores the role that technologies, including ICT, play in higher education for individuals who are “differently abled” and who constitute a variation on a continuum of capabilities. Functionality as a dimension of everyday life in higher education in the 21st century is explored through the analysis of (i) case studies of two “differently abled” students in Sweden and (ii) current support services at universities in Sweden. The findings make visible the work that institutions and their members do through analyses of the organization of time and space and the use of technologies in institutional settings against the backdrop of individuals’ accountings and life trajectories. This study also highlights the relevance of multi-scale data analyses for revisiting the ways in which identity positions become framed or understood within higher education.
Abstract:
In a globalized economy, the use of natural resources is determined by the demand of modern production and consumption systems, and by infrastructure development. Sustainable natural resource use will require good governance and management based on sound scientific information, data and indicators. There is a rich literature on natural resource management, yet the national and global scales and macro-economic policy making have been underrepresented. We provide an overview of the scholarly literature on multi-scale governance of natural resources, focusing on the information required by relevant actors from the local to the global scale. Global natural resource use is largely determined by national, regional, and local policies. We observe that in recent decades the development of public policies on natural resource use has been driven by an “inspiration cycle” between the research, policy and statistics communities, fostering social learning. Effective natural resource policies require adequate monitoring tools, in particular indicators for the use of materials, energy, land, and water as well as waste and GHG emissions of national economies. We summarize the state of the art in the application of accounting methods and data sources for national material flow accounts and indicators, including territorial and product-life-cycle-based approaches. We show how accounts of natural resource use can inform the Sustainable Development Goals (SDGs) and argue that information on natural resource use, and in particular footprint indicators, will be indispensable for a consistent implementation of the SDGs. We recognize that improving the knowledge base for global natural resource use will require further institutional development, including at national and international levels, for which we outline options.
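For readers less familiar with the accounting indicators referred to here, one standard territorial indicator is Domestic Material Consumption, defined from domestic extraction and physical trade (standard economy-wide material flow accounting notation, not taken from this article):

DMC = DE + IM - EX

where DE is the domestic extraction of materials, IM the physical imports and EX the physical exports of a national economy in a given year. Footprint (consumption-based) indicators instead reallocate extraction along product life cycles to the countries of final demand.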
Abstract:
Effective and efficient implementation of intelligent and/or recently emerged networked manufacturing systems requires enterprise-level integration. Networked manufacturing offers several advantages in the current competitive atmosphere by shortening manufacturing cycle times and maintaining production flexibility, thereby yielding several feasible process plans. The first step in this direction is to integrate manufacturing functions such as process planning and scheduling for multiple jobs in a network-based manufacturing system. It is difficult to determine a proper plan that meets conflicting objectives simultaneously. This paper describes a mobile-agent-based negotiation approach to integrate manufacturing functions in a distributed manner; its fundamental framework and functions are presented. Moreover, an ontology has been constructed using the Protégé software, which can convert knowledge into Extensible Markup Language (XML) schemas of Web Ontology Language (OWL) documents. The generated XML schemas have been used to transfer information throughout the manufacturing network for the intelligent, interoperable integration of product data models and manufacturing resources. To validate the feasibility of the proposed approach, an illustrative example with varied production environments, including production demand fluctuations, is presented, and the performance and effectiveness of the proposed approach are compared with an evolutionary-algorithm-based Hybrid Dynamic-DNA (HD-DNA) algorithm. The results show that the proposed scheme is very effective and reasonably acceptable for the integration of manufacturing functions.
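The negotiation step can be pictured as a contract-net-style exchange; the Python sketch below uses invented jobs and processing times (the paper itself exchanges OWL/XML messages between mobile agents rather than making in-process calls): each job calls for bids, the machine agents bid their expected completion time, and the job is awarded to the best bid.

# Hypothetical jobs and machine agents; all processing times are invented.
jobs = ["J1", "J2", "J3"]
machines = {
    "M1": {"J1": 5, "J2": 9, "J3": 4},   # processing time each machine offers per job
    "M2": {"J1": 6, "J2": 7, "J3": 8},
}
load = {m: 0 for m in machines}          # accumulated workload per machine

def call_for_bids(job):
    """Each machine agent bids its completion time (current load + processing time)."""
    return {m: load[m] + times[job] for m, times in machines.items()}

for job in jobs:
    bids = call_for_bids(job)
    winner = min(bids, key=bids.get)     # the job agent awards the task to the best bid
    load[winner] += machines[winner][job]
    print(f"{job} -> {winner} (bid {bids[winner]})")

print("final machine loads:", load)

In the paper's setting, an exchange of this kind is what ties process planning (which machine can perform an operation) to scheduling (when it can do so) across the manufacturing network.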
Abstract:
Management theories have been based, almost without exception, on the foundations and models of classical science (particularly the models of Newtonian physics). However, organizations today face a globalized world that is flooded with information (and not necessarily knowledge), hyperconnected, dynamic and loaded with uncertainty, so many of these theories may show limitations for organizations. And perhaps not because of their structure, logic or scope, but because of the lack of criteria justifying their application. In many cases, organizations still rely on intuition, assumptions and half-truths when making decisions. This picture highlights two facts: on the one hand, the need to search for a method that makes it possible to understand the situation of each organization in order to support decision-making; on the other, the need to strengthen intuition with non-traditional models and techniques (usually originating from, or inspired by, engineering). This work seeks to anticipate the pillars of a possible method to support decision-making through the simulation of computational models, drawing on the possible interactions between model-based management, computational organization science and emergent engineering.
Abstract:
Several decision and control tasks in cyber-physical networks can be formulated as large-scale optimization problems with coupling constraints. In these "constraint-coupled" problems, each agent is associated with a local decision variable subject to individual constraints. This thesis explores the use of primal decomposition techniques to develop tailored distributed algorithms for this challenging set-up over graphs. We first develop a distributed scheme for convex problems over random time-varying graphs with non-uniform edge probabilities. The approach is then extended to unknown cost functions estimated online. Subsequently, we consider Mixed-Integer Linear Programs (MILPs), which are of great interest in smart grid control and cooperative robotics. We propose a distributed methodological framework to compute a feasible solution to the original MILP, with guaranteed suboptimality bounds, and extend it to general nonconvex problems. Monte Carlo simulations highlight that the approach represents a substantial breakthrough with respect to the state of the art, thus providing a valuable solution for new toolboxes addressing large-scale MILPs. We then propose a distributed Benders decomposition algorithm for asynchronous unreliable networks. This framework has then been used as a starting point to develop distributed methodologies for a microgrid optimal control scenario. We develop an ad-hoc distributed strategy for a stochastic set-up with renewable energy sources, and show a case study with samples generated using Generative Adversarial Networks (GANs). We then introduce a software toolbox named ChoiRbot, based on the novel Robot Operating System 2, and show how it facilitates simulations and experiments in distributed multi-robot scenarios. Finally, we consider a Pickup-and-Delivery Vehicle Routing Problem for which we design a distributed method inspired by the approach for general MILPs, and show its efficacy through simulations and experiments in ChoiRbot with ground and aerial robots.
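To give a flavour of primal decomposition for constraint-coupled problems (a toy centralized sketch in Python, not the thesis' distributed algorithms; all problem data invented), the snippet below splits a shared budget among agents: each agent solves its local subproblem for its current allocation and reports the sensitivity of its optimal value, and the allocations are updated by a projected (sub)gradient step that keeps the coupling constraint satisfied.

# Toy primal decomposition: minimize sum_i (x_i - a_i)^2  s.t.  x_i <= y_i,  sum_i y_i = b.
# Each agent i solves its subproblem for a given allocation y_i and reports the derivative
# of its optimal value phi_i(y_i); the master then takes a projected gradient step.
a = [4.0, 1.0, 2.0]          # illustrative local targets
b = 5.0                      # shared budget (coupling constraint)
n = len(a)
y = [b / n] * n              # initial allocation

def local_solve(ai, yi):
    """phi_i(y_i) = min_x (x - a_i)^2 s.t. x <= y_i, together with d phi_i / d y_i."""
    if yi >= ai:
        return 0.0, 0.0                        # local constraint inactive
    return (yi - ai) ** 2, 2.0 * (yi - ai)     # local constraint active

step = 0.1
for _ in range(200):
    grads = [local_solve(a[i], y[i])[1] for i in range(n)]
    y = [y[i] - step * grads[i] for i in range(n)]
    shift = (sum(y) - b) / n                   # project back onto sum(y) = b
    y = [yi - shift for yi in y]

print("allocation:", [round(v, 3) for v in y])
print("total cost:", round(sum(local_solve(a[i], y[i])[0] for i in range(n)), 4))

In the distributed schemes studied in the thesis, an update of this kind is carried out cooperatively by the agents over a communication graph rather than by a central coordinator.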
Abstract:
Reinforcement learning is a particular paradigm of machine learning that has recently proved time and time again to be a very effective and powerful approach. Cryptography, on the other hand, usually takes the opposite direction: while machine learning aims at analyzing data, cryptography aims at maintaining its privacy by hiding such data. However, the two techniques can be jointly used to create privacy-preserving models, able to make inferences on the data without leaking sensitive information. Despite the numerous studies on machine learning and cryptography, reinforcement learning in particular has never been applied to such cases before. Being able to successfully use reinforcement learning in an encrypted scenario would allow us to create an agent that efficiently controls a system without providing it with full knowledge of the environment it is operating in, leading the way to many possible use cases. Therefore, we have decided to apply the reinforcement learning paradigm to encrypted data. In this project we have applied one of the most well-known reinforcement learning algorithms, Deep Q-Learning, to simple simulated environments and studied how the encryption affects the training performance of the agent, in order to see whether it is still able to learn how to behave even when the input data is no longer readable by humans. The results of this work highlight that the agent is still able to learn with no issues whatsoever in small state spaces with non-secure encryptions, like AES in ECB mode. For fixed environments, it is also able to reach a suboptimal solution even in the presence of secure modes, like AES in CBC mode, showing a significant improvement with respect to a random agent; however, its ability to generalize in stochastic environments or large state spaces suffers greatly.
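A much-simplified way to see why deterministic encryption leaves learning intact in small state spaces is sketched below: tabular Q-learning (substituted here for the thesis' Deep Q-Learning) on a five-cell corridor whose states are only ever shown to the agent as AES-ECB ciphertexts. Because ECB maps equal states to equal ciphertexts, a Q-table keyed by ciphertext learns exactly as it would on plaintext; the environment, key and hyperparameters are all invented, and the sketch assumes the pycryptodome package for AES.

import random
from collections import defaultdict
from Crypto.Cipher import AES

random.seed(0)
cipher = AES.new(b"0123456789abcdef", AES.MODE_ECB)   # illustrative hard-coded key

def encrypt_state(s):
    """Deterministic 16-byte ciphertext of the (tiny) integer state id."""
    return cipher.encrypt(s.to_bytes(1, "big") * 16)

N_STATES, GOAL = 5, 4
Q = defaultdict(lambda: [0.0, 0.0])    # ciphertext -> Q-values for actions (left, right)
alpha, gamma, eps = 0.5, 0.9, 0.1

for episode in range(300):
    s = 0
    for _ in range(100):                              # step cap per episode
        obs = encrypt_state(s)
        if random.random() < eps:
            a = random.randrange(2)                   # explore
        else:
            best = max(Q[obs])
            a = random.choice([i for i in (0, 1) if Q[obs][i] == best])
        s_next = max(0, s - 1) if a == 0 else min(N_STATES - 1, s + 1)
        r = 1.0 if s_next == GOAL else 0.0
        Q[obs][a] += alpha * (r + gamma * max(Q[encrypt_state(s_next)]) - Q[obs][a])
        s = s_next
        if s == GOAL:
            break

greedy = [max((0, 1), key=lambda i: Q[encrypt_state(s)][i]) for s in range(GOAL)]
print("greedy actions from states 0..3 (1 = move right):", greedy)

With a randomized mode such as CBC under fresh initialization vectors, equal states need no longer yield equal observations, which is one way to see why generalization becomes the hard part in that setting.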
Abstract:
In this thesis we discuss in what ways computational logic (CL) and data science (DS) can jointly contribute to the management of knowledge within the scope of modern and future artificial intelligence (AI), and how technically sound software technologies can be realised along the path. An agent-oriented mindset permeates the whole discussion, stressing the pivotal role of autonomous agents in exploiting both means to reach higher degrees of intelligence. Accordingly, the goals of this thesis are manifold. First, we elicit the analogies and differences between CL and DS, looking for possible synergies and complementarities along four major knowledge-related dimensions, namely representation, acquisition (a.k.a. learning), inference (a.k.a. reasoning), and explanation. In this regard, we propose a conceptual framework through which bridges between these disciplines can be described and designed. We then survey the current state of the art of AI technologies with respect to their capability to support bridging CL and DS in practice. After detecting gaps and opportunities, we propose the notion of a logic ecosystem as a new conceptual, architectural, and technological solution supporting the incremental integration of symbolic and sub-symbolic AI. Finally, we discuss how our notion of logic ecosystem can be reified into actual software technology and extended in many DS-related directions.