16 results for Agent System

in Aston University Research Archive


Relevance:

100.00%

Publisher:

Abstract:

The global market has become increasingly dynamic, unpredictable and customer-driven. This has led to rising rates of new product introduction and turbulent demand patterns across product mixes. As a result, manufacturing enterprises face mounting challenges in remaining agile and responsive to market changes, so as to stay competitive by producing and delivering products to the market in a timely and cost-effective manner. This paper introduces a currency-based iterative agent bidding mechanism to integrate, effectively and cost-efficiently, the activities associated with production planning and control, so as to achieve an optimised process plan and schedule. The aim is to enhance the agility of manufacturing systems to accommodate dynamic changes in the market and in production. The iterative bidding mechanism is executed using currency-like metrics: each operation to be performed is assigned a virtual currency value, and agents bid for the operation if they can make a virtual profit based on this value. These currency values are optimised iteratively, and the bidding process is repeated with each new set of values, with the aim of obtaining progressively better production plans and thus approaching near-optimality. A genetic algorithm is proposed to optimise the currency values at each iteration. The implementation of the mechanism and test-case simulation results are also discussed. © 2012 Elsevier Ltd. All rights reserved.
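The bidding-plus-GA loop described in this abstract can be sketched roughly as follows. This is a toy illustration, not the paper's implementation: the agent cost table, the unassigned-operation penalty and all GA settings are invented for the example.

```python
import random

random.seed(0)  # deterministic for the example

def simulate_bidding(currencies, agent_costs):
    """One bidding round: each operation goes to the agent that makes the
    largest positive virtual profit (currency value minus its own cost)."""
    total_cost = 0.0
    for op, value in enumerate(currencies):
        profits = [value - costs[op] for costs in agent_costs]
        best = max(profits)
        if best > 0:                      # an agent only bids if it profits
            total_cost += value - best    # i.e. the winning agent's real cost
        else:
            total_cost += 10.0            # invented penalty: op unassigned
    return total_cost

def evolve_currencies(agent_costs, n_ops, pop=20, gens=50):
    """Toy GA: evolve the per-operation currency values so that the
    resulting bid allocation minimises total production cost."""
    population = [[random.uniform(0, 10) for _ in range(n_ops)]
                  for _ in range(pop)]
    for _ in range(gens):
        population.sort(key=lambda c: simulate_bidding(c, agent_costs))
        survivors = population[: pop // 2]          # elitist selection
        children = []
        for _ in range(pop - len(survivors)):
            a, b = random.sample(survivors, 2)
            child = [random.choice(pair) for pair in zip(a, b)]  # crossover
            child[random.randrange(n_ops)] += random.gauss(0, 0.5)  # mutate
            children.append(child)
        population = survivors + children
    best = min(population, key=lambda c: simulate_bidding(c, agent_costs))
    return best, simulate_bidding(best, agent_costs)

agent_costs = [[2.0, 5.0, 1.0], [3.0, 2.0, 4.0]]  # 3 ops, 2 agents (invented)
currencies, cost = evolve_currencies(agent_costs, n_ops=3)
print(cost)  # converges to 5.0, the sum of the cheapest agent's cost per op
```

The currency values only need to be high enough to make the cheapest agent's bid profitable for every operation; the GA then drives the allocation towards the minimum-cost plan.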

Relevance:

80.00%

Publisher:

Abstract:

From a manufacturing perspective, the efficiency of manufacturing operations (such as process planning and production scheduling) is a key element in enhancing manufacturing competence. Process planning and production scheduling have traditionally been treated as two separate activities, which has resulted in a range of inefficiencies, including infeasible process plans, unavailable or overloaded resources, high production costs and long production lead times. Above all, dynamic changes are unlikely to be dealt with efficiently. Although much research has been conducted on integrating process planning and production scheduling to generate optimised solutions that improve manufacturing efficiency, a gap remains in achieving the competence required for the current globally competitive market. In this research, the concept of the multi-agent system (MAS) is adopted as a means to address this gap. A MAS consists of a collection of intelligent autonomous agents able to solve complex problems; these agents possess individual objectives and interact with each other to fulfil a global goal. This paper describes a novel use of an autonomous agent system to facilitate the integration of process planning and production scheduling functions in order to cope with unpredictable demands, in terms of uncertainties in product mix and demand pattern. The novelty lies in the currency-based iterative agent bidding mechanism, which allows process planning and production scheduling options to be evaluated simultaneously so as to search for an optimised, cost-effective solution. This agent-based system aims to achieve manufacturing competence by enhancing the flexibility and agility of manufacturing enterprises.

Relevance:

70.00%

Publisher:

Abstract:

Agent-based technology is playing an increasingly important role in today's economy. A multi-agent system is usually needed to model an economic system such as a market, in which heterogeneous trading agents interact with each other autonomously. Two questions often need to be answered regarding such systems: 1) how to design an interaction mechanism that facilitates efficient resource allocation among usually self-interested trading agents; and 2) how to design an effective strategy for an agent to maximise its economic returns under a specific market mechanism. For automated market systems, the auction is the most popular mechanism for solving resource allocation problems among participants. However, auctions come in hundreds of different formats, some better than others in terms not only of allocative efficiency but also of other properties, e.g. whether they generate high revenue for the auctioneer or induce stable bidder behaviour. In addition, different strategies yield very different performance under the same auction rules. Against this background, we investigate auction mechanism and strategy design for agent-based economics. The international Trading Agent Competition (TAC) Ad Auction (AA) competition provides a very useful platform for developing and testing agent strategies in the Generalised Second Price (GSP) auction. AstonTAC, the runner-up of TAC AA 2009, is a successful advertiser agent designed for GSP-based keyword auctions. In particular, AstonTAC generates adaptive bid prices according to the Market-based Value Per Click and, to maximise its expected profit within its conversion-capacity limit, selects the set of keyword queries with the highest expected profit to bid on. Evaluation experiments show that AstonTAC performs well and stably, not only in the competition but also across a broad range of environments.
The TAC CAT tournament provides an environment for investigating the optimal design of mechanisms for double auction markets. AstonCAT-Plus is the post-tournament version of the specialist developed for CAT 2010. In our experiments, AstonCAT-Plus not only outperforms most specialist agents designed by other institutions but also achieves high allocative efficiency, transaction success rates and average trader profits. Moreover, we reveal some insights into CAT: 1) successful markets should maintain a stable and high market share of intra-marginal traders; and 2) a specialist's performance depends on the distribution of trading strategies. However, typical double auction models assume that trading agents have a fixed trading direction, either buy or sell. With this limitation they cannot directly reflect the fact that traders in financial markets (the most popular application of the double auction) decide their trading directions dynamically. To address this issue, we introduce the Bi-directional Double Auction (BDA) market, which is populated by two-way traders. Experiments are conducted under both dynamic and static settings of the continuous BDA market. We find that the allocative efficiency of a continuous BDA market mainly comes from the rational selection of trading directions. Furthermore, we introduce a high-performance Kernel trading strategy for the BDA market, which uses a kernel probability density estimator built on historical transaction data to decide optimal order prices. The Kernel strategy outperforms popular intelligent double auction trading strategies, including ZIP, GD and RE, in the continuous BDA market, making the highest profit in static games and accumulating the greatest wealth in dynamic games.
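The kernel idea behind the trading strategy can be sketched in miniature: build a density estimate from past transaction prices and place an order at the price where trades are most probable. The prices, bandwidth and grid search below are invented for illustration; the actual Kernel strategy in the BDA market is more elaborate.

```python
import math

def gaussian_kde(samples, bandwidth):
    """Return a Gaussian kernel density estimate f(x) over the samples."""
    n = len(samples)
    norm = 1.0 / (n * bandwidth * math.sqrt(2 * math.pi))
    def f(x):
        return norm * sum(math.exp(-0.5 * ((x - s) / bandwidth) ** 2)
                          for s in samples)
    return f

def kernel_order_price(history, lo, hi, steps=200, bandwidth=1.0):
    """Pick the candidate price where estimated transaction density is
    highest, i.e. the price at which an order is most likely to trade."""
    f = gaussian_kde(history, bandwidth)
    grid = [lo + (hi - lo) * i / steps for i in range(steps + 1)]
    return max(grid, key=f)

history = [99.0, 100.0, 100.5, 101.0, 100.2, 99.8, 100.1]  # past trades
price = kernel_order_price(history, lo=95.0, hi=105.0)
print(price)  # density peaks close to 100
```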

Relevance:

60.00%

Publisher:

Abstract:

Radio frequency identification (RFID) technology has gained increasing popularity in business as a means to improve operational efficiency and maximise cost savings. However, there is a gap in the literature exploring the enhanced use of RFID to add substantial value to supply chain operations, especially beyond what RFID vendors offer. This paper presents a multi-agent system, incorporating RFID technology, aimed at filling this gap. The system models supply chain activities (in particular, logistics operations) and comprises autonomous, intelligent agents representing the key entities in the supply chain. With the advanced characteristics of RFID incorporated, the agent system examines ways in which logistics operations (in particular, the distribution network) can be efficiently reconfigured and optimised in response to dynamic changes in the market, in production, and at any stage in the supply chain. © 2012 IEEE.

Relevance:

40.00%

Publisher:

Abstract:

The purpose of this research is to propose a procurement system that shares retrieved information with the relevant parties across disciplines, so as to achieve better co-ordination between the supply and demand sides. This paper demonstrates how an agent-based procurement system (APS) can be used to analyse data and thereby re-engineer and improve the existing procurement process. The intelligent agents take responsibility for searching for potential suppliers, negotiating with the short-listed suppliers, and evaluating supplier performance against the selection criteria using a mathematical model. Manufacturing firms and trading companies spend more than half of their sales revenue on the purchase of raw materials and components. Efficient, accurate data collection is one of the key success factors for quality procurement, that is, purchasing the right material, at the right quality, from the right suppliers. In general, enterprises spend a significant amount of resources on data collection and storage, but too little on facilitating data analysis and sharing. To validate the feasibility of the approach, a case study on a manufacturing small and medium-sized enterprise (SME) has been conducted. The APS supports data and information analysis to facilitate decision-making, such that the agents can make negotiation and supplier evaluation more efficient by saving time and cost.
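The supplier-evaluation step can be illustrated with a simple weighted-sum model. The criteria, weights and ratings below are assumptions for the example, not taken from the paper.

```python
def evaluate_suppliers(suppliers, weights):
    """Weighted-sum supplier selection: score each supplier on the
    criteria (higher is better) and rank by the weighted total."""
    scores = {
        name: sum(weights[c] * ratings[c] for c in weights)
        for name, ratings in suppliers.items()
    }
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

# Illustrative criteria, weights and normalised ratings (all invented).
weights = {"price": 0.4, "quality": 0.4, "delivery": 0.2}
suppliers = {
    "A": {"price": 0.9, "quality": 0.6, "delivery": 0.8},
    "B": {"price": 0.7, "quality": 0.9, "delivery": 0.9},
}
ranking = evaluate_suppliers(suppliers, weights)
print(ranking[0][0])  # "B": 0.82 beats A's 0.76
```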

Relevance:

40.00%

Publisher:

Abstract:

To meet the changing needs of customers and to survive in an increasingly globalised and competitive environment, companies need to equip themselves with intelligent tools that enable managers to make better tactical decisions. However, implementing an intelligent system is always a challenge for Small- and Medium-sized Enterprises (SMEs). Therefore, a new and simple approach with a 'process rethinking' capability is proposed to generate ongoing process improvements over time. In this paper, a roadmap for the development of an agent-based information system is described. A case example is also provided to show how the system can assist non-specialists, such as managers and engineers, in making the right decisions for continual process improvement. Copyright © 2006 Inderscience Enterprises Ltd.

Relevance:

30.00%

Publisher:

Abstract:

The aim of this study was to prepare gas-filled, lipid-coated microbubbles as potential MRI contrast agents for imaging fluid pressure. Air-filled microbubbles were produced with the phospholipid 1,2-distearoyl-sn-glycero-3-phosphocholine (DSPC) in the presence or absence of cholesterol and/or polyethylene-glycol distearate (PEG-distearate). Microbubbles were also prepared with shells of a fluorinated phospholipid, perfluoroalkylated glycerol-phosphatidylcholine (F-GPC), encompassing perfluorohexane-saturated nitrogen gas. These microbubbles were evaluated in terms of physico-chemical characteristics such as size and stability. In parallel, DSPC microbubbles were also formulated containing nitrogen (N2) gas and compared to air-filled microbubbles. With advection prevented, signal drifts were used to assess stability. DSPC microbubbles were found to have a drift of 20% signal change per bar of applied pressure, in contrast to the F-GPC microbubbles, which were considerably more stable, with a lower drift of 5% signal change per bar. By increasing the pressure of the system and monitoring the MR signal intensity, the point at which the majority of the microbubbles had been destroyed was determined: for the DSPC microbubbles this occurs at 1.3 bar, whilst the F-GPC microbubbles withstand pressures up to 2.6 bar. For the comparison between air-filled and N2-filled microbubbles, MRI sensitivity was assessed by cycling the pressure of the system and monitoring the MR signal intensity. The sensitivity of the N2-filled microbubbles remained constant, whilst the air-filled microbubbles showed a continuous drop in sensitivity due to ongoing bubble damage.

Relevance:

30.00%

Publisher:

Abstract:

The combination of dimethyl dioctadecyl ammonium bromide (DDA) and the synthetic cord factor trehalose dibehenate (TDB) with Ag85B-ESAT-6 (the H1 fusion protein) has been found to promote strong protective immune responses against Mycobacterium tuberculosis. Developing a vaccine formulation that satisfies the requirements of sterility and stability, and yields a product with an acceptable composition, shelf-life and safety profile, may necessitate selected alterations to the formulation. This study describes the implementation of a sterilisation protocol and the use of selected lyoprotective agents to fulfil these requirements. Concomitantly, any alterations in physico-chemical characteristics and in parameters of immunogenicity were closely examined for this promising DDA liposome-based tuberculosis vaccine. The study addresses the extensive guidelines on parameters for non-clinical assessment of liposomal vaccines and other vaccine delivery systems issued by the World Health Organisation (WHO) and the European Medicines Agency (EMEA). Physical and chemical stability was observed following alteration of the formulations to include novel cryoprotectants and radiation sterilisation. Immunogenicity was maintained following these alterations, and was even improved by using lysine as the cryoprotective agent for sterilised formulations. Taken together, these results outline the successful alteration of a liposomal vaccine, representing formulations improved by rational modification whilst maintaining biological activity.

Relevance:

30.00%

Publisher:

Abstract:

Automated negotiation is widely applied in various domains, but the development of such systems is a complex knowledge- and software-engineering task, so a methodology would be helpful. Unfortunately, none of the existing methodologies offers sufficiently detailed support for such system development. To remove this limitation, this paper develops a new methodology made up of (1) a generic framework (architectural pattern) for the main task, and (2) a library of modular, reusable design patterns (templates) for the subtasks. It thus becomes much easier to build a negotiating agent by assembling these standardised components than by reinventing the wheel each time. Moreover, since these patterns are identified from a wide variety of existing negotiating agents (especially high-impact ones), they can also improve the quality of the final systems developed. In addition, our methodology reveals what types of domain knowledge need to be supplied to the negotiating agents. This in turn provides a basis for developing techniques to acquire that domain knowledge from human users, which is important because negotiating agents act faithfully on behalf of their human users, and the relevant domain knowledge must therefore be acquired from them. Finally, our methodology is validated on one high-impact system.
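The kind of assembly the methodology advocates, a generic agent frame with pluggable strategy templates, might look like this in miniature. The linear-concession tactic, the accept/counter rule and all numbers are illustrative assumptions, not components from the paper's library.

```python
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class Offer:
    price: float

def linear_concession(start: float, reserve: float,
                      deadline: int) -> Callable[[int], float]:
    """A reusable strategy template: concede linearly from the starting
    price towards the reservation price over the deadline."""
    return lambda t: start + (reserve - start) * min(t, deadline) / deadline

class NegotiatingAgent:
    """Generic architectural pattern: accept the opponent's offer if it
    beats our next planned concession, otherwise counter-offer."""
    def __init__(self, concession: Callable[[int], float], buyer: bool):
        self.concession, self.buyer, self.t = concession, buyer, 0

    def respond(self, offer: Offer) -> Optional[Offer]:
        self.t += 1
        target = self.concession(self.t)
        ok = offer.price <= target if self.buyer else offer.price >= target
        if ok:
            return None               # None signals acceptance
        return Offer(price=target)    # counter with our current target

buyer = NegotiatingAgent(linear_concession(50, 100, 10), buyer=True)
seller = NegotiatingAgent(linear_concession(120, 80, 10), buyer=False)

offer = Offer(price=120.0)
deal = None
while deal is None:
    reply = buyer.respond(offer)
    if reply is None:
        deal = offer.price
        break
    offer = reply
    reply = seller.respond(offer)
    if reply is None:
        deal = offer.price
        break
    offer = reply
print("deal at", deal)  # the concession curves cross at 90.0
```

Swapping in a different concession template changes the agent's behaviour without touching the frame, which is the point of the pattern library.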

Relevance:

30.00%

Publisher:

Abstract:

Returnable transport equipment (RTE) such as pallets forms an integral part of the supply chain, and poor management leads to costly losses. Companies often address this by outsourcing the management of RTE to logistics service providers (LSPs). LSPs are faced with the task of providing logistical expertise to reduce RTE-related waste whilst differentiating their own services to remain competitive. In the current challenging economic climate, the role of the LSP in delivering innovative ways to achieve competitive advantage has never been so important. It is reported that applying radio frequency identification (RFID) to RTE enables LSPs such as DHL to gain competitive advantage and offer clients improvements such as loss reduction, process-efficiency improvement and effective security. However, the increased visibility and functionality of RFID-enabled RTE requires further investigation with regard to decision-making. The distributed nature of the RTE network favours a decentralised decision-making format. Agents are an effective way to represent objects from the bottom up, capturing their behaviour and enabling localised decision-making. Therefore, an agent-based system is proposed to represent the RTE network and utilise the visibility and data gathered from RFID tags. Two types of agents are developed to represent the trucks and the RTE, with bespoke rules and algorithms to facilitate negotiations. The aim is to create schedules that integrate RTE pick-ups as the trucks return to the depot. The findings assert that:
- agent-based modelling provides an autonomous tool that is effective in modelling RFID-enabled RTE in a decentralised manner, utilising the real-time data facility;
- the RFID-enabled RTE model developed enables autonomous agent interaction, which leads to a feasible schedule integrating both forward and reverse flows for each RTE batch;
- the RTE agent scheduling algorithm developed promotes the utilisation of RTE by including an automatic return flow for each batch of RTE, whilst considering fleet costs and utilisation rates;
- the research conducted contributes an agent-based platform, which LSPs can use to assess the most appropriate strategies to implement for RTE network improvement for each of their clients.
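The truck/RTE negotiation can be sketched as a simple bidding round in which each RTE batch is awarded to the truck with the cheapest detour on its return to the depot. The positions, the Manhattan-distance metric and the one-batch-per-truck rule are assumptions for illustration, not the thesis's algorithm.

```python
def detour_cost(truck_pos, rte_pos, depot=(0.0, 0.0)):
    """Extra distance the truck travels if it collects the RTE batch on
    the way back to the depot (Manhattan distance for simplicity)."""
    def dist(a, b):
        return abs(a[0] - b[0]) + abs(a[1] - b[1])
    direct = dist(truck_pos, depot)
    via = dist(truck_pos, rte_pos) + dist(rte_pos, depot)
    return via - direct

def schedule_pickups(trucks, rte_batches):
    """Each RTE-batch agent collects bids (detour costs) from the truck
    agents and awards the pickup to the cheapest bidder; each truck
    takes at most one batch in this toy version."""
    schedule, free = {}, dict(trucks)
    for rte_id, rte_pos in rte_batches.items():
        if not free:
            break
        bids = {t: detour_cost(pos, rte_pos) for t, pos in free.items()}
        winner = min(bids, key=bids.get)
        schedule[rte_id] = (winner, round(bids[winner], 2))
        del free[winner]
    return schedule

trucks = {"T1": (10.0, 0.0), "T2": (0.0, 8.0)}          # invented positions
rte_batches = {"R1": (9.0, 1.0), "R2": (1.0, 7.0)}
schedule = schedule_pickups(trucks, rte_batches)
print(schedule)  # each batch goes to the truck with the cheapest detour
```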

Relevance:

30.00%

Publisher:

Abstract:

In-Motes is a mobile agent middleware that provides an intelligent framework for deploying applications in Wireless Sensor Networks (WSNs). In-Motes is based on the injection into the network of mobile agents that can migrate or clone following specific rules and performing application-specific tasks. In doing so, each mote is given a certain degree of perception, cognition and control, forming the basis for its intelligence. The middleware incorporates technologies such as Linda-like tuplespaces and a federated system architecture in order to obtain a high degree of collaboration and coordination in the agent society. A set of behavioural rules inspired by a community of bacterial strains is also generated as a means of making the WSN robust. In this paper, we present In-Motes and provide a detailed evaluation of its implementation for MICA2 motes.
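A Linda-like tuplespace of the kind In-Motes builds on can be sketched in a few lines. This is a generic illustration of the coordination model (out/rd/in primitives with wildcard matching), not In-Motes code.

```python
import threading

class TupleSpace:
    """Minimal Linda-like tuplespace: out() writes a tuple, rd() reads a
    matching tuple without removing it, in_() removes and returns one.
    None in a template acts as a wildcard field."""
    def __init__(self):
        self._tuples = []
        self._lock = threading.Lock()

    @staticmethod
    def _matches(template, tup):
        return len(template) == len(tup) and all(
            t is None or t == v for t, v in zip(template, tup)
        )

    def out(self, tup):
        with self._lock:
            self._tuples.append(tup)

    def rd(self, template):
        with self._lock:
            return next((t for t in self._tuples
                         if self._matches(template, t)), None)

    def in_(self, template):
        with self._lock:
            for i, t in enumerate(self._tuples):
                if self._matches(template, t):
                    return self._tuples.pop(i)
            return None

space = TupleSpace()
space.out(("temp", "mote3", 21.5))           # one agent publishes a reading
reading = space.rd(("temp", "mote3", None))  # another reads it, wildcard value
print(reading)  # ('temp', 'mote3', 21.5)
```

Agents never address each other directly; they coordinate only through the shared space, which is what makes the model a good fit for mobile agents migrating between motes.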

Relevance:

30.00%

Publisher:

Abstract:

Ethylene-propylene diene terpolymer (EPDM) was functionalised with glycidyl methacrylate (GMA) during melt processing by peroxide-initiated free radical grafting, in the presence and absence of a reactive comonomer, trimethylolpropane triacrylate (Tris). Increasing the peroxide concentration increased the GMA grafting yield, albeit the overall grafting level was low, and was accompanied by a higher degree of crosslinking of the EPDM, which was found to be the major competing reaction. The presence of Tris in the grafting system gave a higher grafting yield at a much lower peroxide concentration; the crosslinking reactions remained significant, but no GMA homopolymer was formed in either system. The use of these functionalised EPDM (f-EPDM) samples as compatibilisers with PET in binary and ternary blends of PET/EPDM/f-EPDM was evaluated. The influence of the different functionalisation routes for the rubber phase (in the presence and absence of Tris), and the effects of the level of functionality and the microstructure of the resultant f-EPDM on the extent of the interfacial reaction, the morphology and the mechanical properties, were also investigated. It is suggested that the mechanical properties of the blends are strongly influenced by the performance of the graft copolymer, which is in turn determined by the level of functionality, the molecular structure of the functionalised rubber and the concentration of the graft copolymer at the interface. The cumulative evidence obtained from torque rheometry, scanning electron microscopy (SEM), dynamic mechanical analysis (DMA), tensile mechanical tests and Fourier transform infrared (FTIR) spectroscopy supports this. Binary and ternary blends prepared with f-EPDM made in the absence of Tris and containing lower levels of grafted GMA (g-GMA) showed a significant improvement in mechanical properties.
This increase, particularly in elongation at break, can be accounted for by a reaction between the epoxy groups of GMA and the hydroxyl/carboxyl end groups of PET, resulting in a graft copolymer which, most probably, locates preferentially at the interface, thereby acting as an 'emulsifier' that decreases the interfacial tension between the two otherwise immiscible phases. This is supported by FTIR analysis of the fractionated PET phase of these blends, which confirms the interfacial reaction; by DMA results, which show a clear shift in the Tgs of the blend components; and by SEM results, which reveal a very fine morphology, suggesting effective compatibilisation concomitant with the improvement observed in their tensile properties. Although Tris gave rise to the highest amount of g-GMA, it resulted in poorer mechanical properties than the optimised blends produced in the absence of Tris. This was attributed to differences in the microstructure of the graft and the level of functionality in these samples, resulting in a less favourable structure responsible for the coarser dispersion of the rubber phase observed by SEM, the smaller Tg shift of the PET phase (DMA), the lower height of the torque curve during reactive blending, and the FTIR analysis of the separated PET phase, which indicated a lower extent of interfacial chemical reaction between the phases in this Tris-containing blend. © 2005 Elsevier Ltd. All rights reserved.

Relevance:

30.00%

Publisher:

Abstract:

This work attempts to shed light on the fundamental concepts behind the stability of multi-agent systems. We view the system as a discrete-time Markov chain with a potentially unknown transition probability distribution. The system is considered stable when its state has converged to an equilibrium distribution. Faced with the non-trivial task of establishing convergence to such a distribution, we propose a hypothesis-testing approach in which we test whether a particular system metric has converged. We describe several artificial multi-agent ecosystems that were developed, and we present results based on these systems confirming that the approach agrees qualitatively with our intuition.
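The hypothesis-testing idea can be illustrated with a deliberately crude check: compare a system metric over the last two observation windows and accept convergence when their means are statistically indistinguishable. This is a sketch only; among other simplifications it ignores the autocorrelation a real MAS metric would have, and the example series are invented.

```python
import statistics

def converged(series, window=100, z_crit=1.96):
    """Two-sample z-test on the last two windows of a system metric: if
    their means are statistically indistinguishable, accept the
    hypothesis that the chain has settled into equilibrium."""
    a, b = series[-2 * window:-window], series[-window:]
    se = (statistics.variance(a) / window
          + statistics.variance(b) / window) ** 0.5
    if se == 0:
        return statistics.fmean(a) == statistics.fmean(b)
    return abs(statistics.fmean(a) - statistics.fmean(b)) / se < z_crit

# Metric of a two-state chain that has settled into stable alternation:
settled = [i % 2 for i in range(400)]
# Metric of a system still drifting towards equilibrium:
drifting = [i / 400 for i in range(400)]

print(converged(settled), converged(drifting))  # True False
```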

Relevance:

30.00%

Publisher:

Abstract:

To solve multi-objective problems, multiple reward signals are often scalarized into a single value and then processed using established single-objective problem-solving techniques. While the field of multi-objective optimization has made many advances in applying scalarization techniques to obtain good solution trade-offs, the utility of these techniques in the multi-objective multi-agent learning domain has not yet been thoroughly investigated. Agents learn the value of their decisions by linearly scalarizing their reward signals at the local level, and acceptable system-wide behaviour results. However, the non-linear relationship between the weighting parameters of the scalarization function and the learned policy makes the discovery of system-wide trade-offs time-consuming. Our first contribution is a thorough analysis of well-known scalarization schemes within the multi-objective multi-agent reinforcement learning setting; the analysed approaches intelligently explore the weight-space in order to find a wider range of system trade-offs. In our second contribution, we propose a novel adaptive-weight algorithm that interacts with the underlying local multi-objective solvers and allows better coverage of the Pareto front. Our third contribution is the experimental validation of our approach by learning bi-objective policies in self-organising smart camera networks. Our algorithm (i) explores the objective space faster on many problem instances and (ii) obtains solutions with a larger hypervolume, while (iii) achieving a greater spread in the objective space.
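Linear scalarization and the weight sweep can be illustrated in a few lines. Note that a linear scalarization can only ever recover policies on the convex hull of the Pareto front, which is one reason adaptive weight schemes matter; the candidate policies below are invented for the example.

```python
def scalarize(rewards, w):
    """Linear scalarization: collapse a bi-objective reward vector into a
    single signal with weight w on the first objective."""
    return w * rewards[0] + (1 - w) * rewards[1]

def best_policy(policies, w):
    """Pick the policy (here: a fixed reward vector per candidate) that
    maximises the scalarized value for a given weight."""
    return max(policies, key=lambda r: scalarize(r, w))

# Candidate policies with (objective-1, objective-2) returns; the middle
# trade-off is only discovered with intermediate weights.
policies = [(10.0, 0.0), (6.0, 6.0), (0.0, 10.0)]

# Sweep the weight-space to uncover the range of system trade-offs.
found = {best_policy(policies, w / 10) for w in range(11)}
print(sorted(found))  # all three convex-hull trade-offs are recovered
```

A dominated or non-convex point such as (5.9, 5.9) would never win for any weight, which is exactly the blind spot the paper's adaptive-weight algorithm works around.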