58 results for Self-organisation, Nature-inspired coordination, Bio pattern, Biochemical tuple spaces
in Aston University Research Archive
Abstract:
We report the synthesis, characterisation and catalytic performance of two nature-inspired biomass-derived electro-catalysts for the oxygen reduction reaction in fuel cells. The catalysts were prepared via pyrolysis of a real food waste (lobster shells) or by mimicking the composition of lobster shells using chitin and CaCO3 particles, followed by acid washing. The simplified artificial-lobster model was prepared for better reproducibility. The calcium carbonate in both samples acts as a pore agent, increasing surface area and pore volume, considerably more so in the artificial lobster samples owing to the better homogeneity of the components. Various characterisation techniques revealed a considerable amount of hydroxyapatite left in the real lobster samples after acid washing and a low content of carbon (23%), nitrogen and sulphur (<1%), limiting the surface area to 23 m2/g and consequently resulting in rather poor catalytic activity. However, artificial lobster samples, with a surface area of ≈200 m2/g and a nitrogen doping of 2%, showed a promising onset potential, very similar to that of a commercially available platinum catalyst, with better methanol tolerance, though with lower stability in long-term testing over 10,000 s.
Abstract:
Objective: Much recent research has applied nature-inspired algorithms to complex machine learning tasks. Ant colony optimization (ACO) is one such algorithm, based on swarm intelligence and derived from a model of the collective foraging behavior of ants. Taking advantage of ACO traits such as self-organization and robustness, this paper investigates ant-based algorithms for gene expression data clustering and associative classification. Methods and material: An ant-based clustering algorithm (Ant-C) and an ant-based association rule mining algorithm (Ant-ARM) are proposed for gene expression data analysis. The proposed algorithms make use of natural ant behaviors such as cooperation and adaptation to allow a flexible, robust search for a good candidate solution. Results: Ant-C has been tested on three datasets selected from the Stanford Genomic Resource Database and achieved relatively high accuracy compared to other classical clustering methods. Ant-ARM has been tested on the acute lymphoblastic leukemia (ALL)/acute myeloid leukemia (AML) dataset and generated about 30 classification rules with high accuracy. Conclusions: Ant-C can generate an optimal number of clusters without incorporating any other algorithm such as K-means or agglomerative hierarchical clustering. For associative classification, while well-known algorithms such as Apriori, FP-growth and Magnum Opus are unable to mine any association rules from the ALL/AML dataset within a reasonable period of time, Ant-ARM is able to extract associative classification rules.
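The core ACO mechanics this abstract relies on, pheromone deposit weighted by solution quality plus evaporation, can be sketched as follows. This is a generic illustration with conventional parameter names (rho, Q), not the published Ant-C/Ant-ARM implementation:

```python
# Generic ACO pheromone update: all trails evaporate, then each ant's
# path is reinforced in proportion to solution quality (1/cost).
# Parameters rho (evaporation rate) and Q (deposit constant) are the
# usual ACO conventions, not values from the paper.

def update_pheromone(tau, ant_paths, costs, rho=0.1, Q=1.0):
    """tau: dict mapping edge -> pheromone level."""
    # Evaporation: less useful paths fade away over time.
    for edge in tau:
        tau[edge] *= (1.0 - rho)
    # Deposit: cheaper paths attract more pheromone.
    for path, cost in zip(ant_paths, costs):
        for edge in path:
            tau[edge] = tau.get(edge, 0.0) + Q / cost
    return tau

# One ant traversed edge ("a","b") at cost 2.0:
tau = update_pheromone({("a", "b"): 1.0, ("b", "c"): 1.0}, [[("a", "b")]], [2.0])
# ("a","b"): 1.0*0.9 + 1.0/2.0 = 1.4; unused ("b","c"): 0.9
```

Repeated application of this rule is what makes optimal paths dominate while unused trails decay, the behaviour the abstract exploits for clustering and rule mining.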
Abstract:
A nature-inspired decentralised multi-agent algorithm is proposed to solve a problem of distributed task selection in which cities produce and store batches of different mail types. Agents must collect and process the mail batches, without a priori knowledge of the available mail at the cities or inter-agent communication. To process a mail type different from the previous one, an agent must undergo a change-over, during which it remains inactive. We propose a threshold-based algorithm to maximise the overall efficiency (the average amount of mail collected). We show that memory, i.e. the possibility for agents to develop preferences for certain cities, not only leads to emergent cooperation between agents but also to a significant increase in efficiency (above the theoretical upper limit for any memoryless algorithm), and we systematically investigate the influence of the various model parameters. Finally, we demonstrate the flexibility of the algorithm to changes in circumstances, and its excellent scalability.
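A threshold-based selection rule of the kind the abstract describes can be sketched with the classic quadratic response function from the division-of-labour literature: an agent accepts a task type with probability rising in the stimulus (waiting mail) and falling in its threshold. The function form and parameter names are illustrative, not the paper's exact rule:

```python
import random

# Response-threshold task selection (Bonabeau-style quadratic response).
# A low threshold for a mail type makes an agent a "specialist" that
# reacts even to weak stimuli; this sketch does not model change-overs.

def accept_probability(stimulus, threshold):
    """Probability of accepting a task given stimulus and threshold."""
    if stimulus == 0:
        return 0.0
    return stimulus**2 / (stimulus**2 + threshold**2)

def choose_task(stimuli, thresholds, rng=random.random):
    """Return the index of the first accepted task type, or None (idle)."""
    for j, (s, theta) in enumerate(zip(stimuli, thresholds)):
        if rng() < accept_probability(s, theta):
            return j
    return None
```

Agent "memory" in the abstract's sense would amount to lowering the effective threshold for cities an agent has visited successfully, biasing it back to them.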
Abstract:
Purpose: The purpose of this paper is to explore attitudes of consumers who engage with brands through Facebook "likes". It explores the extent to which these brands are self-expressive and examines the relationship between brand "liking" and brand outcomes. Brand outcomes include brand love and advocacy, where advocacy incorporates WOM and brand acceptance. Design/methodology/approach: Findings are presented from a survey of Facebook users who engage with a brand by "liking" it. Findings: Brands "liked" are expressive of the inner or social self. The study identifies a positive relationship between the self-expressive nature of brands "liked" and brand love. Consumers who engage with inner self-expressive brands are more likely to offer WOM for that brand. By contrast, consumers who engage with socially self-expressive brands are more likely to accept wrongdoing from a brand. Research limitations/implications: The research is exploratory and is limited to consumers who are engaged with a brand through "liking" it on the Facebook social network. Practical implications: The study offers suggestions for managers seeking to enhance brand engagement through Facebook "liking", and to encourage positive brand outcomes (such as WOM) among consumers already engaged with a brand on Facebook. Originality/value: This paper provides new insights into consumer brand engagement evidenced through Facebook "liking". It charts the relationship between "liked" self-expressive brands and brand love. Distinctions are drawn between brand outcomes among consumers who "like" for socially self-expressive reasons, and consumers who are brand engaged by "liking" to express their inner selves. © Emerald Group Publishing Limited.
Abstract:
When visual sensor networks are composed of cameras that can adjust the zoom factor of their own lens, one must determine the optimal zoom levels for the cameras for a given task. This gives rise to an important trade-off between the overlap of the different cameras' fields of view, which provides redundancy, and image quality. In an object tracking task, having multiple cameras observe the same area allows for quicker recovery when a camera fails. In contrast, narrow zooms allow for a higher pixel count on regions of interest, leading to increased tracking confidence. In this paper we propose an approach for the self-organisation of redundancy in a distributed visual sensor network, based on decentralised multi-objective online learning using only local information to approximate the global state. We explore the impact of different zoom levels on these trade-offs when omnidirectional cameras, with a perfect 360-degree view, are tasked with keeping track of a varying number of moving objects. We further show how employing decentralised reinforcement learning enables zoom configurations to be achieved dynamically at runtime according to an operator's preference for maximising either the proportion of objects tracked, the confidence associated with tracking, or redundancy in expectation of camera failure. We show that explicitly taking account of the level of overlap, even based only on local knowledge, improves resilience when cameras fail. Our results illustrate the trade-off between maintaining high confidence and object coverage, and maintaining redundancy in anticipation of future failure. Our approach provides a fully tunable decentralised method for the self-organisation of redundancy in a changing environment, according to an operator's preferences.
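The operator-tunable trade-off the abstract describes is, in essence, a scalarised multi-objective choice: each camera scores candidate zoom levels against weighted objectives (coverage, confidence, redundancy) and picks the best. The weights, objective names and estimates below are hypothetical placeholders, not the paper's reward design:

```python
# Scalarised multi-objective zoom selection (illustrative sketch).
# Each zoom level has locally estimated (coverage, confidence, redundancy)
# values in [0, 1]; the operator's preference is a weight triple.

def scalarised_reward(coverage, confidence, redundancy, weights):
    w_cov, w_conf, w_red = weights
    return w_cov * coverage + w_conf * confidence + w_red * redundancy

def best_zoom(zoom_estimates, weights):
    """zoom_estimates: dict zoom_level -> (coverage, confidence, redundancy)."""
    return max(zoom_estimates,
               key=lambda z: scalarised_reward(*zoom_estimates[z], weights))

estimates = {"wide": (0.9, 0.4, 0.8),     # sees more, overlaps more
             "narrow": (0.5, 0.9, 0.2)}   # fewer objects, higher confidence
# Weighting redundancy highly favours "wide"; weighting confidence
# highly favours "narrow".
```

Changing the weights at runtime is what lets an operator steer the network between coverage, confidence and failure resilience without any central controller.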
Abstract:
The deposition and properties of electroless nickel composite coatings containing graphite, PTFE and chromium were investigated. Solutions were developed for the codeposition of graphite and chromium with electroless nickel. Solutions for the deposition of graphite contained heavy metal ions for stability, with non-ionic and anionic surfactants to provide wetting and dispersion of the particles. Stability for the codeposition of chromium particles was achieved by oxidation of the chromium. Thin oxide layers, 200 nm thick, prevented initiation of the electroless reaction onto the chromium. A mechanism for the formation of electroless composite coatings was considered, based on the physical adsorption of particles and as a function of the adsorption of charged surfactants and metal cations from solution. The influence of variables such as particle concentration in solution, particle size, temperature, pH, and agitation on the volume percentage of particles codeposited was studied. The volume percentage of graphite codeposited was found to increase with concentration in solution and plating rate. An increase in particle size and agitation reduced the volume percentage codeposited. The hardness of nickel-graphite deposits was found to decrease with graphite content in both the as-deposited and heat-treated conditions. The frictional and wear properties of electroless nickel-graphite were studied and compared to those of electroless nickel-PTFE. The self-lubricating nature of both coatings was found to depend on the ratio of coated to uncoated area, the size and content of lubricating material in the deposit, and the load between contacting surfaces. The mechanism of self-lubrication was considered; it was concluded that graphite produced only an initial lubricating surface, due to the orientation of flakes, unlike PTFE, which produced true self-lubrication throughout the coating life.
Heat treatment of electroless nickel chromium deposits at 850 °C for 8 and 16 hours produced nickel-iron-chromium alloy deposits with a phosphorus-rich surface of high hardness. Coefficients of friction and wear rates were initially moderate for the phosphorus-rich layer but increased for the nickel-iron-chromium region of the coating.
Abstract:
This study is concerned with quality and productivity aspects of traditional house building. The research focuses on these issues by concentrating on the services and finishing stages of the building process. These are work stages which have not been fully investigated in previous productivity-related studies. The primary objective of the research is to promote an integrated design and construction led approach to traditional house building, based on an original concept of 'development cycles'. This process involves the following: site monitoring; the analysis of work operations; implementing design and construction changes founded on unique information collected during site monitoring; and subsequent re-monitoring to measure and assess the effect of change. A volume house building firm has been involved in this applied research and has allowed access to its sites for production monitoring purposes. The firm also assisted in design detailing for a small group of 'experimental' production houses where various design and construction changes were implemented. Results from the collaborative research have shown certain quality and productivity improvements to be possible using this approach, albeit on a limited scale at this early experimental stage. The improvements have been possible because an improved activity sampling technique, developed for and employed by the study, has been able to describe why many quality and productivity related problems occur during site building work. Experience derived from the research has shown the following attributes to be important: positive attitudes towards innovation; effective communication; careful planning and organisation; and good coordination and control at site level. These are all essential aspects of quality led management and determine to a large extent the overall success of this approach.
Future work recommendations must include a more widespread use of innovative practices so that further design and construction modifications can be made. By doing this, productivity can be improved, cost savings made and better quality afforded.
Abstract:
Swarm intelligence is a popular paradigm for algorithm design. Frequently drawing inspiration from natural systems, it assigns simple rules to a set of agents with the aim that, through local interactions, they collectively solve some global problem. Current variants of a popular swarm-based optimization algorithm, particle swarm optimization (PSO), are investigated with a focus on premature convergence. A novel variant, dispersive PSO, is proposed to address this problem and is shown to lead to increased robustness and performance compared to current PSO algorithms. A nature-inspired decentralised multi-agent algorithm is proposed to solve a constrained problem of distributed task allocation. Agents must collect and process mail batches, without global knowledge of their environment or communication between agents. New rules for specialisation are proposed and are shown to exhibit improved efficiency and flexibility compared to existing ones. These new rules are compared with a market-based approach to agent control. The efficiency (average number of tasks performed), the flexibility (ability to react to changes in the environment), and the sensitivity to load (ability to cope with differing demands) are investigated in both static and dynamic environments. A hybrid algorithm combining both approaches is shown to exhibit improved efficiency and robustness. Evolutionary algorithms are employed, both to optimize parameters and to allow the various rules to evolve and compete. We also observe extinction and speciation. In order to interpret algorithm performance we analyse the causes of efficiency loss, derive theoretical upper bounds for the efficiency as well as a complete theoretical description of a non-trivial case, and compare these with the experimental results. 
Motivated by this work we introduce agent "memory" (the possibility for agents to develop preferences for certain cities) and show that it leads not only to emergent cooperation between agents, but also to a significant increase in efficiency.
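The baseline particle swarm update that dispersive PSO modifies can be sketched as follows. The inertia and acceleration coefficients (w, c1, c2) are conventional values from the PSO literature, not the tuned parameters reported in this work:

```python
import random

# Standard PSO velocity/position update for one particle: inertia on the
# current velocity, plus stochastic attraction towards the particle's own
# best position (pbest) and the swarm's global best (gbest). Premature
# convergence occurs when all particles collapse onto gbest; dispersive
# variants add a mechanism to push particles apart again.

def pso_step(x, v, pbest, gbest, w=0.7, c1=1.5, c2=1.5, rng=random.random):
    new_v = [w * vi + c1 * rng() * (pb - xi) + c2 * rng() * (gb - xi)
             for xi, vi, pb, gb in zip(x, v, pbest, gbest)]
    new_x = [xi + vi for xi, vi in zip(x, new_v)]
    return new_x, new_v
```

With both random draws fixed at 1, a particle at 0 with pbest 1 and gbest 2 moves to 1.5·1 + 1.5·2 = 4.5, illustrating how the attraction terms dominate once velocity has decayed.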
Abstract:
A nature-inspired decentralised multi-agent algorithm is proposed to solve a problem of distributed task allocation in which cities produce and store batches of different mail types. Agents must collect and process the mail batches, without global knowledge of their environment or communication between agents. The problem is constrained so that agents are penalised for switching mail types. When an agent processes a mail batch of a different type from the previous one, it must undergo a change-over, with repeated change-overs rendering the agent inactive. The efficiency (average amount of mail retrieved) and the flexibility (ability of the agents to react to changes in the environment) are investigated both in static and dynamic environments and with respect to sudden changes. New rules for mail selection and specialisation are proposed and are shown to exhibit improved efficiency and flexibility compared to existing ones. We employ an evolutionary algorithm which allows the various rules to evolve and compete. Apart from obtaining optimised parameters for the various rules for any environment, we also observe extinction and speciation.
Abstract:
Multi-agent algorithms inspired by the division of labour in social insects and by markets are applied to a constrained problem of distributed task allocation. The efficiency (average number of tasks performed), the flexibility (ability to react to changes in the environment), and the sensitivity to load (ability to cope with differing demands) are investigated in both static and dynamic environments. A hybrid algorithm combining both approaches is shown to exhibit improved efficiency and robustness. We employ nature-inspired particle swarm optimisation to obtain optimised parameters for all algorithms in a range of representative environments. Although results are obtained for large population sizes to avoid finite size effects, the influence of population size on the performance is also analysed. From a theoretical point of view, we analyse the causes of efficiency loss, derive theoretical upper bounds for the efficiency, and compare these with the experimental results.
Abstract:
Ant colony optimisation algorithms model the way ants use pheromones for marking paths to important locations in their environment. Pheromone traces are picked up, followed, and reinforced by other ants but also evaporate over time. Optimal paths attract more pheromone and less useful paths fade away. The main innovation of the proposed Multiple Pheromone Ant Clustering Algorithm (MPACA) is to mark objects using many pheromones, one for each value of each attribute describing the objects in multidimensional space. Every object has one or more ants assigned to each attribute value and the ants then try to find other objects with matching values, depositing pheromone traces that link them. Encounters between ants are used to determine when ants should combine their features to look for conjunctions and whether they should belong to the same colony. This paper explains the algorithm and explores its potential effectiveness for cluster analysis. © 2014 Springer International Publishing Switzerland.
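The distinctive MPACA idea of one pheromone per attribute value can be illustrated as links between objects reinforced once per shared attribute value, with all links evaporating each iteration. This is a hypothetical reading of the abstract for illustration, not the published algorithm:

```python
from itertools import combinations

# Sketch of attribute-value pheromones: each pair of objects gets a link
# whose strength grows with the number of attribute values they share
# and decays by evaporation. Clusters would then be read off from the
# strong links. Function and parameter names are illustrative.

def reinforce_links(objects, pheromone, deposit=1.0, rho=0.1):
    """objects: dict name -> dict of attribute -> value."""
    for link in pheromone:
        pheromone[link] *= (1.0 - rho)          # evaporation
    for a, b in combinations(sorted(objects), 2):
        shared = sum(objects[a][k] == objects[b][k]
                     for k in objects[a] if k in objects[b])
        if shared:                               # deposit per matching value
            link = (a, b)
            pheromone[link] = pheromone.get(link, 0.0) + deposit * shared
    return pheromone

objs = {"x": {"colour": "red", "size": 1},
        "y": {"colour": "red", "size": 2},
        "z": {"colour": "blue", "size": 2}}
ph = reinforce_links(objs, {})
# x-y linked via colour, y-z via size; x-z share nothing, so no link.
```

Iterating this while letting frequently co-reinforced ants merge their feature sets would correspond to the conjunction-finding and colony-membership behaviour the abstract describes.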
Abstract:
This paper examines two concepts, social vulnerability and social resilience, often used to describe people and their relationship to a disaster. Social vulnerability is the exposure to harm resulting from demographic and socioeconomic factors that heighten exposure to disaster. Social resilience is the ability to avoid disaster, cope with change and recover from disaster. Vulnerability to a space and social resilience through society are explored through a focus on the elderly, a group sometimes regarded as having low resilience while being particularly vulnerable. Our findings explore the degree to which an elderly group exposed to coastal flood risk exhibits social resilience through cognitive strategies, such as risk perception and self-perception, as well as through coping mechanisms, such as accepting change and self-organisation. These attenuate and accentuate the resilience of individuals through their own preparations as well as their communities' preparations. They also contribute to a delusion of resilience, which leads individuals to act as if they were more resilient than they are in reality; we call this negative resilience. Thus, we draw attention to three main areas: the degree to which social vulnerability can disguise social resilience; the role played by cognitive strategies and coping mechanisms in an individual's social resilience; and the high-risk aspects of social resilience. © 2014 Elsevier Ltd. All rights reserved.
Abstract:
Over the past two years there have been several large-scale disasters (the Haitian earthquake, Australian floods, UK riots, and the Japanese earthquake) that have seen wide use of social media for disaster response, often in innovative ways. This paper provides an analysis of the ways in which social media has been used in public-to-public communication and public-to-government organisation communication. It discusses four ways in which disaster response has been changed by social media:
1. Social media appears to be displacing the traditional media as a means of communication with the public during a crisis. In particular, social media influences the way traditional media communication is received and distributed.
2. We propose that user-generated content may provide a new source of information for emergency management agencies during a disaster, but there is uncertainty with regard to the reliability and usefulness of this information.
3. There are also indications that social media provides a means for the public to self-organise in ways that were not previously possible. However, the type and usefulness of self-organisation sometimes works against efforts to mitigate the outcome of the disaster.
4. Social media seems to influence information flow during a disaster. In the past most information flowed in a single direction, from government organisation to public, but social media negates this model. The public can diffuse information with ease, but also expect interaction with government organisations rather than a simple one-way information flow.
These changes have implications for the way government organisations communicate with the public during a disaster. The predominant model for explaining this form of communication, the Crisis and Emergency Risk Communication (CERC) model, was developed in 2005, before social media achieved widespread popularity. We will present a modified form of the CERC model that integrates social media into the disaster communication cycle, and address the ways in which social media has changed communication between the public and government organisations during disasters.
Abstract:
Smart cameras allow pre-processing of video data on the camera instead of sending it to a remote server for further analysis. A network of smart cameras allows various vision tasks to be processed in a distributed fashion. While cameras may have different tasks, we concentrate on distributed tracking in smart camera networks. This application introduces various highly interesting problems. Firstly, how can conflicting goals be satisfied, such as cameras in the network trying to track objects while also trying to keep communication overhead low? Secondly, how can cameras in the network self-adapt in response to the behavior of objects and changes in scenarios, to ensure continued efficient performance? Thirdly, how can cameras organise themselves to improve the overall network's performance and efficiency? This paper presents a simulation environment, called CamSim, that allows distributed self-adaptation and self-organisation algorithms to be tested without setting up a physical smart camera network. The simulation tool is written in Java and hence is highly portable between operating systems. Relaxing various problems of computer vision and network communication enables a focus on implementing and testing new self-adaptation and self-organisation algorithms for cameras.