37 results for Heterogeneous interacting-agent model
Abstract:
A nature-inspired decentralised multi-agent algorithm is proposed to solve a problem of distributed task allocation in which cities produce and store batches of different mail types. Agents must collect and process the mail batches without global knowledge of their environment or communication between agents. The problem is constrained so that agents are penalised for switching mail types: when an agent processes a mail batch of a different type to the previous one, it must undergo a change-over, with repeated change-overs rendering the agent inactive. The efficiency (average amount of mail retrieved) and the flexibility (ability of the agents to react to changes in the environment) are investigated in both static and dynamic environments and with respect to sudden changes. New rules for mail selection and specialisation are proposed and shown to exhibit improved efficiency and flexibility compared to existing ones. We employ an evolutionary algorithm which allows the various rules to evolve and compete. Apart from obtaining optimised parameters for the various rules for any environment, we also observe extinction and speciation.
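The rule competition described above can be sketched as a minimal generational evolutionary algorithm. Everything below is illustrative: the single rule parameter, the fitness peak at 0.7 and the mutation scale are stand-ins for the paper's actual selection rules and mail-efficiency measure.

```python
import random

def evolve(population, fitness, generations=60, sigma=0.05, rng=None):
    """Minimal generational EA: fitness-proportional selection plus
    Gaussian mutation of each rule parameter."""
    rng = rng or random.Random(0)
    for _ in range(generations):
        weights = [fitness(ind) for ind in population]
        parents = rng.choices(population, weights=weights, k=len(population))
        population = [tuple(g + rng.gauss(0.0, sigma) for g in p) for p in parents]
    return population

# Illustrative fitness: efficiency peaks when the rule parameter equals 0.7.
def fitness(ind):
    return max(1e-9, 1.0 - abs(ind[0] - 0.7))

rng = random.Random(1)
population = [(rng.uniform(0.0, 1.0),) for _ in range(40)]
final = evolve(population, fitness, rng=rng)
mean_param = sum(ind[0] for ind in final) / len(final)
```

Under selection, rule variants far from the fitness peak die out (a toy analogue of the extinction the abstract mentions), and the surviving parameters cluster near the optimum.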
Abstract:
In studies of complex heterogeneous networks, particularly of the Internet, significant attention has been paid to analyzing network failures caused by hardware faults or overload, where the network reaction was modeled as rerouting of traffic away from failed or congested elements. Here we model another type of network reaction to congestion: a sharp reduction of the input traffic rate through congested routes, which occurs on much shorter time scales. We consider the onset of congestion in the Internet, where local mismatch between demand and capacity results in traffic losses, and show that it can be described as a phase transition characterized by strong non-Gaussian loss fluctuations at a mesoscopic time scale. The fluctuations, caused by noise in input traffic, are exacerbated by the heterogeneous nature of the network, manifested in a scale-free load distribution. They result in the network strongly overreacting to the first signs of congestion by significantly reducing input traffic along communication paths where congestion is utterly negligible. © Copyright EPLA, 2012.
Abstract:
A nature-inspired decentralised multi-agent algorithm is proposed to solve a problem of distributed task selection in which cities produce and store batches of different mail types. Agents must collect and process the mail batches without a priori knowledge of the available mail at the cities or inter-agent communication. In order to process a different mail type than the previous one, an agent must undergo a change-over, during which it remains inactive. We propose a threshold-based algorithm in order to maximise the overall efficiency (the average amount of mail collected). We show that memory, i.e. the possibility for agents to develop preferences for certain cities, not only leads to emergent cooperation between agents but also to a significant increase in efficiency (above the theoretical upper limit for any memoryless algorithm), and we systematically investigate the influence of the various model parameters. Finally, we demonstrate the flexibility of the algorithm to changes in circumstances, and its excellent scalability.
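A common shape for such a threshold-based selection rule, in the style of response-threshold models from swarm intelligence, can be sketched as follows. The functional form and the parameter names `xi` and `phi` are illustrative assumptions, not taken from the paper:

```python
def accept_probability(stimulus, threshold):
    """Response-threshold rule: an agent accepts a task of a given type with
    probability s^2 / (s^2 + theta^2), so low thresholds mean specialists."""
    return stimulus**2 / (stimulus**2 + threshold**2)

def update_threshold(threshold, processed_same_type, xi=0.1, phi=0.05,
                     lo=0.01, hi=1.0):
    """Specialisation dynamics (illustrative): processing a mail type lowers
    its threshold by xi; not processing it raises the threshold by phi.
    The threshold is clamped to [lo, hi]."""
    if processed_same_type:
        threshold -= xi
    else:
        threshold += phi
    return min(hi, max(lo, threshold))
```

Repeated processing of one type drives its threshold down, so the agent keeps choosing that type and avoids costly change-overs; idle types drift back up, preserving flexibility when the environment changes.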
Abstract:
Constructing and executing distributed systems that can adapt to their operating context in order to sustain provided services and service qualities are complex tasks. Managing the adaptation of multiple, interacting services is particularly difficult, since these services tend to be distributed across the system, interdependent and sometimes tangled with other services. Furthermore, the exponential growth of the number of potential system configurations, derived from the variabilities of each service, needs to be handled. Current practices of writing low-level reconfiguration scripts as part of the system code to handle run-time adaptation are both error-prone and time-consuming, and make adaptive systems difficult to validate and evolve. In this paper, we propose to combine model-driven and aspect-oriented techniques to better cope with the complexities of constructing and executing adaptive systems, and to handle the problem of the exponential growth of the number of possible configurations. Combining these techniques allows us to use high-level domain abstractions, simplify the representation of variants and limit the combinatorial explosion of possible configurations. In our approach we also use models at runtime to generate the adaptation logic, by comparing the current configuration of the system to a composed model representing the configuration we want to reach. © 2008 Springer-Verlag Berlin Heidelberg.
Abstract:
We investigate a simplified model of two fully connected magnetic systems maintained at different temperatures by virtue of being connected to two independent thermal baths while simultaneously being interconnected with each other. Using generating functional analysis, commonly used in statistical mechanics, we find exactly soluble expressions for their individual magnetization that define a two-dimensional nonlinear map, the equations of which have the same form as those obtained for densely connected equilibrium systems. Steady states correspond to the fixed points of this map, separating the parameter space into a rich set of nonequilibrium phases that we analyze in asymptotically high and low (nonequilibrium) temperature limits. The theoretical formalism is shown to revert to the classical nonequilibrium steady state problem for two interacting systems with a nonzero heat transfer between them that catalyzes a phase transition between ambient nonequilibrium states. © 2013 American Physical Society.
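A two-dimensional magnetisation map of the kind described can be illustrated with a toy mean-field form; the tanh update, the couplings `J` and the inverse temperatures `beta` below are generic stand-ins rather than the paper's exact equations. Steady states are then fixed points of the iterated map:

```python
import math

def step(m, beta, J):
    """One update of a toy two-dimensional magnetisation map: each system
    feels its own mean field plus a cross-coupling to the other system,
    at its own (inverse) bath temperature."""
    m1, m2 = m
    return (math.tanh(beta[0] * (J[0][0] * m1 + J[0][1] * m2)),
            math.tanh(beta[1] * (J[1][0] * m1 + J[1][1] * m2)))

def fixed_point(m, beta, J, tol=1e-12, max_iter=10000):
    """Iterate the map until it stops moving; the limit is a steady state."""
    for _ in range(max_iter):
        m_next = step(m, beta, J)
        if max(abs(a - b) for a, b in zip(m, m_next)) < tol:
            return m_next
        m = m_next
    return m

# Two baths at different inverse temperatures with weak cross-coupling
# (toy values): system 1 is cold (ordered), system 2 is hot (disordered).
m_star = fixed_point((0.5, 0.5), beta=(2.0, 0.5), J=[[1.0, 0.2], [0.2, 1.0]])
```

With these toy parameters the cold system settles into a strongly magnetised state while the hot one acquires only a small induced magnetisation through the coupling, mirroring how the parameter space splits into distinct nonequilibrium phases.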
Abstract:
Lock-in is observed in real-world markets of experience goods; experience goods are goods whose characteristics are difficult to determine in advance but are ascertained upon consumption. We create an agent-based simulation of consumers choosing between two experience goods available in a virtual market. We model consumers in a grid representing the spatial network of the consumers. Utilising simple assumptions, including identical distributions of product experience and consumers having a degree of follower tendency, we explore the dynamics of the model through simulations. We conduct simulations to create a lock-in before testing several hypotheses on how to break an existing lock-in; these include the effect of advertising and free give-aways. Our experiments show that the key to successfully breaking a lock-in is the creation of regions in a consumer population. Regions arise due to the degree of local conformity between agents within them, and spread throughout the population when a mildly superior competitor is available. These regions may be likened to a niche in a market, which gains in popularity and transitions into the mainstream.
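A minimal sketch of such a grid-based lock-in model, under assumed update rules (von Neumann neighbourhood on a torus, a single `follower` probability, and random re-sampling as a crude stand-in for private product experience); none of the parameter values are from the paper:

```python
import random

def simulate(n=20, follower=0.8, steps=200, bias=0.7, seed=0):
    """Toy experience-goods market on an n x n grid. Each cell holds
    product 0 or 1. On each update a random consumer either copies its
    local majority (with probability `follower`) or re-samples a product
    at random (its private 'experience'). Returns product 1's share."""
    rng = random.Random(seed)
    # Initial bias towards product 0 creates the lock-in to be studied.
    grid = [[0 if rng.random() < bias else 1 for _ in range(n)] for _ in range(n)]
    for _ in range(steps):
        i, j = rng.randrange(n), rng.randrange(n)
        if rng.random() < follower:
            neigh = (grid[(i - 1) % n][j] + grid[(i + 1) % n][j]
                     + grid[i][(j - 1) % n] + grid[i][(j + 1) % n])
            if neigh > 2:          # local majority holds product 1
                grid[i][j] = 1
            elif neigh < 2:        # local majority holds product 0
                grid[i][j] = 0
            # ties leave the choice unchanged
        else:
            grid[i][j] = rng.randrange(2)
    return sum(map(sum, grid)) / (n * n)

share = simulate()
```

Because conformity reinforces the initially dominant product, the minority product's share stays suppressed, which is the lock-in the abstract's interventions (advertising, give-aways) then try to break.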
Abstract:
Computational Fluid Dynamics (CFD) has found great acceptance among the engineering community as a tool for research and design of processes that are practically difficult or expensive to study experimentally. One of these processes is biomass gasification in a Circulating Fluidized Bed (CFB). Biomass gasification is the thermo-chemical conversion of biomass at a high temperature and a controlled oxygen amount into fuel gas, also sometimes referred to as syngas. A circulating fluidized bed is a type of reactor in which it is possible to maintain a stable and continuous circulation of solids in a gas-solid system. The main objectives of this thesis are fourfold: (i) develop a three-dimensional predictive model of biomass gasification in a CFB riser using advanced Computational Fluid Dynamics (CFD); (ii) experimentally validate the developed hydrodynamic model using conventional and advanced measuring techniques; (iii) study the complex hydrodynamics, heat transfer and reaction kinetics through modelling and simulation; (iv) study the CFB gasifier performance through parametric analysis and identify the optimum operating conditions to maximise the product gas quality. Two different and complementary experimental techniques were used to validate the hydrodynamic model, namely pressure measurement and particle tracking. Pressure measurement is a very common and widely used technique in fluidized bed studies, while particle tracking using PEPT, which was originally developed for medical imaging, is a relatively new technique in the engineering field; it is relatively expensive and only available at a few research centres around the world. This study started with a simple poly-dispersed single solid phase and then moved to binary solid phases. The single solid phase was used for primary validations and for eliminating unnecessary options and steps in building the hydrodynamic model.
The outcomes from the primary validations were then applied to the secondary validations of the binary mixture to avoid time-consuming computations. Studies on binary solid mixture hydrodynamics are rarely reported in the literature. In this study the binary solid mixture was modelled and validated using experimental data from both techniques mentioned above, and good agreement was achieved with both. Following the general gasification steps, the developed model has been separated into three main gasification stages: drying; devolatilization and tar cracking; and partial combustion and gasification. Drying was modelled as a mass transfer from the solid phase to the gas phase. The devolatilization and tar cracking model consists of two steps: the devolatilization of the biomass, treated as a single reaction generating the biomass gases from the volatile materials, and tar cracking, also modelled as one reaction generating gases with fixed mass fractions. The first reaction was classified as a heterogeneous reaction, while the second was classified as a homogeneous reaction. The partial combustion and gasification model consisted of carbon combustion reactions and carbon and gas-phase reactions. The partial combustion considered was for C, CO, H2 and CH4. The carbon gasification reactions used in this study are the Boudouard reaction with CO2, the reaction with H2O, and the methanation (methane-forming) reaction to generate methane. The other gas-phase reactions considered in this study are the water-gas shift reaction, which is modelled as a reversible reaction, and the methane steam reforming reaction. The developed gasification model was validated using different experimental data from the literature and for a wide range of operating conditions. Good agreement was observed, thus confirming the capability of the model in predicting biomass gasification in a CFB to a great accuracy.
The developed model has been successfully used to carry out sensitivity and parametric analyses. The sensitivity analysis included the effect of including various combustion reactions and the effect of radiation on the gasification reactions. The developed model was also used to carry out parametric analysis by changing the following gasifier operating conditions: fuel/air ratio; biomass flow rate; sand (heat carrier) temperature; sand flow rate; sand and biomass particle sizes; gasifying agent (pure air or pure steam); pyrolysis model used; steam/biomass ratio. Finally, based on these parametric and sensitivity analyses, a final model was recommended for the simulation of biomass gasification in a CFB riser.
An agent approach to improving radio frequency identification enabled Returnable Transport Equipment
Abstract:
Returnable transport equipment (RTE) such as pallets form an integral part of the supply chain, and poor management leads to costly losses. Companies often address this matter by outsourcing the management of RTE to logistics service providers (LSPs). LSPs are faced with the task of providing logistical expertise to reduce RTE-related waste whilst differentiating their own services to remain competitive. In the current challenging economic climate, the role of the LSP in delivering innovative ways to achieve competitive advantage has never been so important. It is reported that radio frequency identification (RFID) application to RTE enables LSPs such as DHL to gain competitive advantage and offer clients improvements such as loss reduction, process efficiency improvement and effective security. However, the increased visibility and functionality of RFID-enabled RTE requires further investigation with regard to decision-making. The distributed nature of the RTE network favours a decentralised decision-making format. Agents are an effective way to represent objects from the bottom up, capturing their behaviour and enabling localised decision-making. Therefore, an agent-based system is proposed to represent the RTE network and utilise the visibility and data gathered from RFID tags. Two types of agents are developed to represent the trucks and the RTE, with bespoke rules and algorithms to facilitate negotiations. The aim is to create schedules which integrate RTE pick-ups as the trucks go back to the depot. The findings assert that:
- agent-based modelling provides an autonomous tool which is effective in modelling RFID-enabled RTE in a decentralised manner, utilising the real-time data facility;
- the RFID-enabled RTE model developed enables autonomous agent interaction, which leads to a feasible schedule integrating both forward and reverse flows for each RTE batch;
- the RTE agent scheduling algorithm developed promotes the utilisation of RTE by including an automatic return flow for each batch of RTE, whilst considering the fleet costs and utilisation rates;
- the research conducted contributes an agent-based platform which LSPs can use to assess the most appropriate strategies to implement for RTE network improvement for each of their clients.
Abstract:
Enantioselective catalysis is an increasingly important method of providing enantiomeric compounds for the pharmaceutical and agrochemical industries. To date, heterogeneous catalysts have failed to match the industrial impact achieved by homogeneous systems. One successful approach to the creation of heterogeneous enantioselective catalysts has involved the modification of conventional metal particle catalysts by the adsorption of chiral molecules. This article examines the contribution of effects such as chiral recognition and amplification to these types of system and how insight provided by surface science model studies may be exploited in the design of more effective catalysts.
Abstract:
Biodiesel production is a very promising area because biodiesel is an environmentally friendly alternative to fossil-fuel-derived diesel. Nowadays, most industrial biodiesel production is performed by the transesterification of renewable biological sources over homogeneous acid catalysts, which requires downstream neutralization and separation, leading to a series of technical and environmental problems. Heterogeneous catalysts can solve these issues and be used as a better alternative for biodiesel production. Thus, a heuristic diffusion-reaction kinetic model has been established to simulate the transesterification of alkyl esters with methanol over a series of heterogeneous Cs-doped heteropolyacid catalysts. The novelty of this framework lies in detailed modeling of surface reaction kinetics and its integration with particle-level transport phenomena all the way through to process design and optimisation, which has been done for a biodiesel production process for the first time. This multi-disciplinary research, combining chemistry, chemical engineering and process integration, offers better insights into catalyst design and process intensification for the industrial application of Cs-doped heteropolyacid catalysts for biodiesel production. A case study of the transesterification of tributyrin with methanol demonstrates the effectiveness of this methodology.
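One standard ingredient of a diffusion-reaction pellet model of this kind is the internal effectiveness factor, which couples intrinsic surface kinetics to particle-level diffusion. The first-order spherical-pellet expressions below are a textbook sketch of that coupling, not the paper's full heuristic model:

```python
import math

def thiele_modulus(radius, k, D_eff):
    """Thiele modulus for first-order kinetics in a spherical pellet:
    phi = R * sqrt(k / D_eff), comparing reaction rate to diffusion rate."""
    return radius * math.sqrt(k / D_eff)

def effectiveness_factor(phi):
    """Internal effectiveness factor for a first-order reaction in a
    spherical pellet: eta = (3 / phi^2) * (phi * coth(phi) - 1).
    eta -> 1 for small phi (kinetic control) and ~ 3/phi for large phi
    (strong diffusion limitation)."""
    if phi < 1e-6:
        return 1.0  # series limit, avoids 0/0
    coth = math.cosh(phi) / math.sinh(phi)
    return (3.0 / phi**2) * (phi * coth - 1.0)
```

In a framework like the one described, the observed pellet-scale rate would be the intrinsic surface rate scaled by `eta`, making explicit when smaller particles or higher effective diffusivity pay off in process optimisation.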
Abstract:
Heterogeneous and incomplete datasets are common in many real-world visualisation applications. The probabilistic nature of the Generative Topographic Mapping (GTM), which was originally developed for complete continuous data, can be extended to model heterogeneous (i.e. containing both continuous and discrete values) and missing data. This paper describes and assesses the resulting model on both synthetic and real-world heterogeneous data with missing values.
Abstract:
Transmembrane proteins play crucial roles in many important physiological processes. The intracellular domain of membrane proteins is key to their function, interacting with a wide variety of cytosolic proteins, so it is important to examine these interactions. A recently developed method to study them, based on the use of liposomes as a model membrane, involves the covalent coupling of the cytoplasmic domains of membrane proteins to the liposome membrane. This allows for the analysis of interaction partners requiring both protein and membrane lipid binding. This thesis further establishes the liposome recruitment system and utilises it to examine the intracellular interactome of the amyloid precursor protein (APP), best known for its proteolytic cleavage, which results in the production and accumulation of amyloid beta fragments, the main constituent of amyloid plaques in Alzheimer's disease pathology. Despite this, the physiological function of APP remains largely unclear. Through the use of the proteo-liposome recruitment system, two novel interactions of APP's intracellular domain (AICD) are examined with a view to gaining greater insight into APP's physiological function. One of these novel interactions is between AICD and the mTOR complex, a serine/threonine protein kinase that integrates signals from nutrients and growth factors. The kinase domain of mTOR directly binds to AICD, and the N-terminal amino acids of AICD are crucial for this interaction. The second novel interaction is between AICD and the endosomal PIKfyve complex, a lipid kinase involved in the production of phosphatidylinositol-3,5-bisphosphate (PI(3,5)P2) from phosphatidylinositol-3-phosphate, which has a role in controlling endosome dynamics. The scaffold protein Vac14 of the PIKfyve complex binds directly to AICD, and the C-terminus of AICD is important for its interaction with the PIKfyve complex.
Using a recently developed intracellular PI(3,5)P2 probe, it is shown that APP controls the formation of PI(3,5)P2-positive vesicular structures and that the PIKfyve complex is involved in the trafficking and degradation of APP. Both of these novel APP interactors have important implications for both APP function and Alzheimer's disease. The proteo-liposome recruitment method is further validated through its use to examine the recruitment and assembly of the AP-2/clathrin coat from purified components onto two membrane proteins containing different sorting motifs. Taken together, this thesis highlights the proteo-liposome recruitment system as a valuable tool for the study of the intracellular interactome of membrane proteins. It allows the protein to be mimicked in its native configuration, thereby identifying weaker interactions that are not detected by more conventional methods, and also detecting interactions that are mediated by membrane phospholipids.
Abstract:
This paper details the development and evaluation of AstonTAC, an energy broker that successfully participated in the 2012 Power Trading Agent Competition (Power TAC). AstonTAC buys electrical energy from the wholesale market and sells it in the retail market. The main focus of the paper is on the broker's bidding strategy in the wholesale market. In particular, it employs Markov Decision Processes (MDPs) to purchase energy at low prices in a day-ahead power wholesale market and to keep energy supply and demand balanced. Moreover, we explain how the agent uses a Non-Homogeneous Hidden Markov Model (NHHMM) to forecast energy demand and price. An evaluation and analysis of the 2012 Power TAC finals show that AstonTAC is the only agent that can buy energy at low prices in the wholesale market and keep the energy imbalance low.
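The wholesale purchasing decision can be framed as an MDP and solved, for example, by value iteration. The two-state price process and rewards below are toy assumptions for illustration, not AstonTAC's actual state space or bidding strategy:

```python
def value_iteration(states, actions, transition, reward, gamma=0.95, tol=1e-9):
    """Generic value iteration. `transition(s, a)` returns a list of
    (probability, next_state) pairs; `reward(s, a, s2)` is the payoff."""
    V = {s: 0.0 for s in states}
    while True:
        delta = 0.0
        for s in states:
            best = max(
                sum(p * (reward(s, a, s2) + gamma * V[s2])
                    for p, s2 in transition(s, a))
                for a in actions)
            delta = max(delta, abs(best - V[s]))
            V[s] = best
        if delta < tol:
            return V

# Toy price process: the wholesale price is 'low' or 'high' and flips with
# probability 0.5 regardless of the broker's action (an assumption).
def transition(s, a):
    return [(0.5, 'low'), (0.5, 'high')]

# Buying cheap pays +1, buying dear costs -1, waiting is free (toy rewards).
def reward(s, a, s2):
    if a == 'wait':
        return 0.0
    return 1.0 if s == 'low' else -1.0

V = value_iteration(['low', 'high'], ['buy', 'wait'], transition, reward)
```

The resulting values encode the intuitively optimal policy: buy when the price is low, wait when it is high, which is the qualitative behaviour the abstract attributes to the broker.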
Abstract:
Most machine-learning algorithms are designed for datasets with features of a single type, whereas very little attention has been given to datasets with mixed-type features. We recently proposed a model to handle mixed types with a probabilistic latent-variable formalism. This model, called the generalised generative topographic mapping (GGTM), describes the data by type-specific distributions that are conditionally independent given the latent space. It has often been observed that visualisations of high-dimensional datasets can be poor in the presence of noisy features. In this paper we therefore propose to extend the GGTM to estimate feature saliency values (GGTMFS) as an integrated part of the parameter learning process with an expectation-maximisation (EM) algorithm. The efficacy of the proposed GGTMFS model is demonstrated on both synthetic and real datasets.