20 results for Natural Systems

in Aston University Research Archive



Relevance: 60.00%

Abstract:

Swarm intelligence is a popular paradigm for algorithm design. Frequently drawing inspiration from natural systems, it assigns simple rules to a set of agents with the aim that, through local interactions, they collectively solve some global problem. Current variants of a popular swarm-based optimization algorithm, particle swarm optimization (PSO), are investigated with a focus on premature convergence. A novel variant, dispersive PSO, is proposed to address this problem and is shown to lead to increased robustness and performance compared to current PSO algorithms. A nature-inspired decentralised multi-agent algorithm is proposed to solve a constrained problem of distributed task allocation. Agents must collect and process mail batches without global knowledge of their environment or communication between agents. New rules for specialisation are proposed and are shown to exhibit improved efficiency and flexibility compared to existing ones. These new rules are compared with a market-based approach to agent control. The efficiency (average number of tasks performed), the flexibility (ability to react to changes in the environment), and the sensitivity to load (ability to cope with differing demands) are investigated in both static and dynamic environments. A hybrid algorithm combining both approaches is shown to exhibit improved efficiency and robustness. Evolutionary algorithms are employed, both to optimize parameters and to allow the various rules to evolve and compete. We also observe extinction and speciation. In order to interpret algorithm performance we analyse the causes of efficiency loss, derive theoretical upper bounds for the efficiency, as well as a complete theoretical description of a non-trivial case, and compare these with the experimental results. Motivated by this work we introduce agent "memory" (the possibility for agents to develop preferences for certain cities) and show that not only does it lead to emergent cooperation between agents, but also to a significant increase in efficiency.
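
Since the abstract does not give the update equations, the following is a minimal sketch of a standard global-best PSO loop in Python, with a naive re-scattering step standing in for the dispersion idea. The function name, parameter values and the dispersion rule itself are illustrative assumptions, not the thesis's actual dispersive PSO.

```python
import numpy as np

def pso(f, dim, n_particles=30, iters=200, w=0.72, c1=1.49, c2=1.49,
        bounds=(-5.0, 5.0), disperse_threshold=1e-3, rng=None):
    """Global-best PSO with a crude dispersion step when the swarm collapses."""
    rng = np.random.default_rng(rng)
    lo, hi = bounds
    x = rng.uniform(lo, hi, (n_particles, dim))   # positions
    v = np.zeros_like(x)                           # velocities
    pbest = x.copy()
    pbest_f = np.apply_along_axis(f, 1, x)
    g = pbest[np.argmin(pbest_f)].copy()           # global best
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        fx = np.apply_along_axis(f, 1, x)
        improved = fx < pbest_f
        pbest[improved], pbest_f[improved] = x[improved], fx[improved]
        g = pbest[np.argmin(pbest_f)].copy()
        # Hypothetical dispersion rule: if the swarm has collapsed onto one
        # point, re-scatter half the particles to fight premature convergence.
        if x.std(axis=0).mean() < disperse_threshold:
            idx = rng.choice(n_particles, n_particles // 2, replace=False)
            x[idx] = rng.uniform(lo, hi, (len(idx), dim))
            v[idx] = 0.0
    return g, pbest_f.min()

# Example: minimise the Rastrigin function, a common premature-convergence test.
rastrigin = lambda z: 10 * len(z) + np.sum(z**2 - 10 * np.cos(2 * np.pi * z))
best_x, best_f = pso(rastrigin, dim=5)
```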

Relevance: 30.00%

Abstract:

The Thouless-Anderson-Palmer (TAP) approach was originally developed for analysing the Sherrington-Kirkpatrick model in the study of spin glasses and has since been employed mainly in the context of extensively connected systems, whereby each dynamical variable interacts weakly with the others. Recently, we extended this method to handle general intensively connected systems, where each variable has only O(1) connections characterised by strong couplings. However, the new formulation looks quite different from existing analyses, and it is only natural to question whether it actually reproduces known results for systems of extensive connectivity. In this chapter, we apply our formulation of the TAP approach to an extensively connected system, the Hopfield associative memory model, showing that it produces results identical to those obtained by the conventional formulation.
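
For reference, the fixed-point equations that the conventional formulation yields for the Hopfield model take the form below, as commonly quoted in the literature; this is given as background context, not as a quotation from the chapter.

```latex
% Conventional TAP mean-field equations for the Hopfield model with Hebb
% couplings J_{ij} = \frac{1}{N}\sum_{\mu=1}^{P}\xi_i^\mu \xi_j^\mu,
% load \alpha = P/N and overlap q = \frac{1}{N}\sum_i m_i^2:
m_i = \tanh\beta\left[\sum_{j \ne i} J_{ij} m_j
      \;-\; \frac{\alpha\beta(1-q)}{1-\beta(1-q)}\, m_i\right],
% where the second term is the Onsager reaction correction that
% distinguishes TAP from naive mean-field theory.
```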

Relevance: 30.00%

Abstract:

This thesis is concerned with approximate inference in dynamical systems, from a variational Bayesian perspective. When modelling real-world dynamical systems, stochastic differential equations appear as a natural choice, mainly because of their ability to model the noise of the system by adding a variant of some stochastic process to the deterministic dynamics. Hence, inference in such processes has drawn much attention. Here two new extended frameworks are derived and presented that are based on basis function expansions and local polynomial approximations of a recently proposed variational Bayesian algorithm. It is shown that the new extensions converge to the original variational algorithm and can be used for state estimation (smoothing). However, the main focus is on estimating the (hyper-)parameters of these systems (i.e. drift parameters and diffusion coefficients). The new methods are numerically validated on a range of systems which vary in dimensionality and non-linearity: the Ornstein-Uhlenbeck process, for which the exact likelihood can be computed analytically; the univariate, highly non-linear stochastic double well; and the multivariate chaotic stochastic Lorenz '63 (3-dimensional) model. The algorithms are also applied to the 40-dimensional stochastic Lorenz '96 system. In this investigation these new approaches are compared with a variety of other well-known methods, such as the ensemble Kalman filter/smoother, a hybrid Monte Carlo sampler, the dual unscented Kalman filter (for jointly estimating the system states and model parameters) and full weak-constraint 4D-Var. An empirical analysis of their asymptotic behaviour as the observation density or the length of the time window increases is also provided.
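
As background on why the Ornstein-Uhlenbeck process serves as the exactly tractable benchmark, a standard parameterisation and its closed-form Gaussian transition density (a textbook result, not quoted from the thesis) are:

```latex
% Ornstein-Uhlenbeck SDE with drift parameter \theta > 0 and diffusion \sigma:
dX_t = -\theta X_t \, dt + \sigma \, dW_t,
% whose transition density is Gaussian, so the likelihood of discretely
% observed data is available in closed form:
X_{t+\Delta} \mid X_t = x \;\sim\;
\mathcal{N}\!\left(x\, e^{-\theta\Delta},\;
\frac{\sigma^2}{2\theta}\left(1 - e^{-2\theta\Delta}\right)\right).
```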

Relevance: 30.00%

Abstract:

The structural characteristics of liposomes have been widely investigated and there is certainly a strong understanding of their morphological characteristics. Imaging of these systems, using techniques such as freeze-fracture methods, transmission electron microscopy, and cryo-electron imaging, has allowed us to appreciate their bilayer structures and the factors that influence them. However, few methods study these systems in their natural hydrated state; commonly, the liposomes are visualized after drying, staining and/or fixation of the vesicles. Environmental scanning electron microscopy (ESEM) offers the ability to image a liposome in its hydrated state without the need for prior sample preparation. We were the first to use ESEM to study liposomes and niosomes, and have been able to dynamically follow the hydration of lipid films and changes in liposome suspensions as water condenses onto, or evaporates from, the sample in real time. This provides an insight into the resistance of liposomes to coalescence during dehydration, thereby providing an alternative assay for liposome formulation and stability.

Relevance: 30.00%

Abstract:

Biocomposite films comprising a non-crosslinked natural polymer (collagen) and a synthetic polymer, poly(ε-caprolactone) (PCL), have been produced by impregnation of lyophilised collagen mats with a solution of PCL in dichloromethane followed by solvent evaporation. This approach avoids the toxicity problems associated with chemical crosslinking. Distinct changes in film morphology, from continuous surface coating to open porous format, were achieved by variation of processing parameters such as the collagen:PCL ratio and the weight of the starting lyophilised collagen mat. Collagenase digestion indicated that the collagen content of 1:4 and 1:8 collagen:PCL biocomposites was almost totally accessible for enzymatic digestion, indicating a high degree of collagen exposure for interaction with other ECM proteins or cells contacting the biomaterial surface. Much reduced collagen exposure (around 50%) was measured for the 1:20 collagen:PCL materials. These findings were consistent with SEM examination of the collagen:PCL biocomposites, which revealed a highly porous morphology for the 1:4 and 1:8 blends but virtually complete coverage of the collagen component by PCL in the 1:20 samples. Investigations of the attachment and spreading characteristics of human osteoblast (HOB) cells on PCL films and collagen:PCL materials indicated that HOB cells poorly recognised PCL, but attachment and spreading were much improved on the biocomposites. The non-chemically-crosslinked collagen:PCL biocomposites described are expected to provide a useful addition to the range of biomaterials and matrix systems for tissue engineering.

Relevance: 30.00%

Abstract:

Distributed digital control systems provide alternatives to conventional, centralised digital control systems. Typically, a modern distributed control system will comprise a multi-processor or network of processors, a communications network, an associated set of sensors and actuators, and the systems and applications software. This thesis addresses the problem of how to design robust decentralised control systems, such as those used to control event-driven, real-time processes in time-critical environments. Emphasis is placed on studying the dynamical behaviour of a system and identifying ways of partitioning the system so that it may be controlled in a distributed manner. A structural partitioning technique is adopted which makes use of natural physical sub-processes in the system, which are then mapped into the software processes to control the system. However, communications are required between the processes because of the disjoint nature of the distributed (i.e. partitioned) state of the physical system. The structural partitioning technique, and recent developments in the theory of potential controllability and observability of a system, are the basis for the design of controllers. In particular, the method is used to derive a decentralised estimate of the state vector for a continuous-time system. The work is also extended to derive a distributed estimate for a discrete-time system. Emphasis is also given to the role of communications in the distributed control of processes and to the partitioning technique necessary to design distributed and decentralised systems with resilient structures. A method is presented for the systematic identification of the communications necessary for distributed control. It is also shown that the structural partitions can be used directly in the design of software fault-tolerant concurrent controllers. In particular, the structural partition can be used to identify the boundary of the conversation which can be used to protect a specific part of the system. In addition, for certain classes of system, the partitions can be used to identify processes which may be dynamically reconfigured in the event of a fault. These methods should be of use in the design of robust distributed systems.
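
To make the flavour of a decentralised state estimate concrete, here is a minimal sketch (not the thesis's derivation) of two local observers exchanging estimates across a structural partition of a linear discrete-time plant; all matrices and gains are hypothetical illustrative values.

```python
import numpy as np

# Hypothetical two-subsystem plant, partitioned along its natural structure:
# x = [x1, x2], with weak interconnection blocks A12 and A21.
A11, A12 = np.array([[0.9, 0.1], [0.0, 0.8]]), np.array([[0.05, 0.0], [0.0, 0.05]])
A21, A22 = np.array([[0.0, 0.02], [0.03, 0.0]]), np.array([[0.85, 0.1], [0.0, 0.9]])
C1 = np.array([[1.0, 0.0]])    # each subsystem measures only its own output
C2 = np.array([[1.0, 0.0]])
L1 = np.array([[0.5], [0.2]])  # local observer gains, chosen ad hoc here;
L2 = np.array([[0.5], [0.2]])  # in practice placed from local observability

rng = np.random.default_rng(0)
x1, x2 = rng.normal(size=(2, 1)), rng.normal(size=(2, 1))
xh1, xh2 = np.zeros((2, 1)), np.zeros((2, 1))

for k in range(50):
    y1, y2 = C1 @ x1, C2 @ x2
    # Each local estimator uses its own measurement plus the *communicated*
    # estimate of the neighbouring subsystem; the disjoint partition is what
    # makes this exchange necessary.
    xh1_next = A11 @ xh1 + A12 @ xh2 + L1 @ (y1 - C1 @ xh1)
    xh2_next = A22 @ xh2 + A21 @ xh1 + L2 @ (y2 - C2 @ xh2)
    x1, x2 = A11 @ x1 + A12 @ x2, A21 @ x1 + A22 @ x2
    xh1, xh2 = xh1_next, xh2_next

print("estimation errors:", np.linalg.norm(x1 - xh1), np.linalg.norm(x2 - xh2))
```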

Relevance: 30.00%

Abstract:

Liposome systems are well reported for their activity as vaccine adjuvants; however, novel lipid-based microbubbles have also been reported to enhance the targeting of antigens into dendritic cells (DCs) in cancer immunotherapy (Suzuki et al., 2009). This research initially focused on the formulation of gas-filled, lipid-coated microbubbles and their potential activation of macrophages using in vitro models. Further studies in the thesis concentrated on aqueous-filled liposomes as vaccine delivery systems. Initial work involved formulating lipid-coated microbubbles (sometimes referred to as gas-filled liposomes) by four different methods, namely homogenisation, sonication, a gas-releasing chemical reaction, and agitation/pressurisation, and characterising the products in terms of stability and physico-chemical characteristics. Two of the preparations were tested as pressure probes in MRI studies. The first preparation was composed of a standard phospholipid (DSPC) filled with air or nitrogen (N2), whilst in the second the microbubbles were composed of a fluorinated phospholipid (F-GPC) filled with a fluorocarbon-saturated gas. The studies showed that, whilst maintaining high sensitivity, a novel contrast agent allowing stable MRI measurements of fluid pressure over time could be produced using lipid-coated microbubbles. The F-GPC microbubbles were found to withstand pressures up to 2.6 bar with minimal damage, whereas the DSPC microbubbles were damaged above 1.3 bar. However, it was also found that N2-filled DSPC microbubbles were extremely robust to pressure, with performance similar to that of the F-GPC-based microbubbles. Following on from the MRI studies, the air- and N2-filled DSPC microbubbles were assessed for their potential activation of macrophages using in vitro models and compared to equivalent aqueous-filled liposomes. The microbubble formulations did not stimulate macrophage uptake, so studies thereafter focused on aqueous-filled liposomes. Further studies concentrated on formulating and characterising, both physico-chemically and immunologically, cationic liposomes based on the potent adjuvant dimethyldioctadecylammonium (DDA) and the immunomodulatory trehalose dibehenate (TDB), with the addition of polyethylene glycol (PEG). One proposed hypothesis for the mechanism behind the immunostimulatory effect obtained with DDA:TDB is the ‘depot effect’, in which the liposomal carrier helps to retain the antigen at the injection site, thereby increasing the time of vaccine exposure to the immune cells. The depot effect has been suggested to be primarily due to the liposomes' cationic nature. Results reported within this thesis demonstrate that higher levels of PEG (25%) were able to significantly inhibit the formation of a liposome depot at the injection site and severely limit the retention of antigen at the site, resulting in faster drainage of the liposomes from the site of injection. The versatility of cationic liposomes based on DDA:TDB in combination with different immunostimulatory ligands, including polyinosinic-polycytidylic acid (poly(I:C), a TLR3 ligand) and CpG (a TLR9 ligand), either entrapped within the vesicles or adsorbed onto the liposome surface, was investigated for immunogenic capacity as vaccine adjuvants.
Small unilamellar vesicle (SUV) DDA:TDB formulations (20-100 nm native size) with protein antigen adsorbed to the vesicle surface were the most potent in inducing both T cell (7-fold increase) and antibody (up to 2 log increase) antigen-specific responses. The addition of the TLR agonists poly(I:C) and CpG to SUV liposomes had little or no effect on their adjuvanticity. Finally, threitol ceramide (ThrCer), a new immunostimulatory agent, was incorporated into the bilayers of liposomes composed of DDA or DSPC to investigate the uptake of ThrCer by dendritic cells (DCs) and its presentation on CD1d molecules to invariant natural killer T cells. These systems were prepared both as multilamellar vesicles (MLVs) and as SUVs. IFN-γ secretion was higher for the DDA SUV liposome formulation (p<0.05), suggesting that ThrCer encapsulation in this formulation resulted in higher uptake by DCs.

Relevance: 30.00%

Abstract:

This work is concerned with approximate inference in dynamical systems, from a variational Bayesian perspective. When modelling real-world dynamical systems, stochastic differential equations appear as a natural choice, mainly because of their ability to model the noise of the system by adding a variation of some stochastic process to the deterministic dynamics. Hence, inference in such processes has drawn much attention. Here a new extended framework is derived that is based on a local polynomial approximation of a recently proposed variational Bayesian algorithm. The paper begins by showing that this extension of the variational algorithm can be used for state estimation (smoothing) and converges to the original algorithm. However, the main focus is on estimating the (hyper-)parameters of these systems (i.e. drift parameters and diffusion coefficients). The new approach is validated on a range of systems which vary in dimensionality and non-linearity: the Ornstein-Uhlenbeck process, the exact likelihood of which can be computed analytically; the univariate, highly non-linear stochastic double well; and the multivariate chaotic stochastic Lorenz '63 (3D) model. As a special case the algorithm is also applied to the 40-dimensional stochastic Lorenz '96 system. In our investigation we compare this new approach with a variety of other well-known methods, such as hybrid Monte Carlo, the dual unscented Kalman filter and the full weak-constraint 4D-Var algorithm, and analyse empirically their asymptotic behaviour as the observation density or the length of the time window increases. In particular we show that we are able to estimate parameters in both the drift (deterministic) and the diffusion (stochastic) parts of the model evolution equations using our new methods.
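
The double-well benchmark is easy to reproduce; below is a minimal Euler-Maruyama sketch in Python under an assumed common parameterisation of the drift, 4x(θ − x²), which may differ from the exact setup used in the paper.

```python
import numpy as np

def euler_maruyama(drift, sigma, x0, dt, n_steps, rng=None):
    """Simulate dX = drift(X) dt + sigma dW with the Euler-Maruyama scheme."""
    rng = np.random.default_rng(rng)
    x = np.empty(n_steps + 1)
    x[0] = x0
    for k in range(n_steps):
        x[k + 1] = x[k] + drift(x[k]) * dt + sigma * np.sqrt(dt) * rng.normal()
    return x

# Assumed parameterisation of the double well: bistable drift 4x(theta - x^2);
# the process hops between the wells at +/- sqrt(theta) at a rate controlled
# by the diffusion coefficient sigma, which makes parameter estimation hard.
theta, sigma = 1.0, 0.7
path = euler_maruyama(lambda x: 4.0 * x * (theta - x**2), sigma,
                      x0=-1.0, dt=0.01, n_steps=20_000)
```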

Relevance: 30.00%

Abstract:

The aim of this work was to develop a generic methodology for evaluating and selecting, at the conceptual design phase of a project, the best process technology for natural gas conditioning. A generic approach is simple, requires less time, and gives a better understanding of why one process is to be preferred over another. Such a methodology would be useful in evaluating existing, novel and hybrid technologies. However, to date no information is available in the published literature on such a generic approach to gas processing. It is believed that the generic methodology presented here is the first available for choosing the best or cheapest method of separation for natural gas dew-point control. Process cost data are derived from evaluations carried out by vendors. These evaluations are then modelled using a steady-state simulation package. From the results of the modelling, the cost data received are correlated and defined with respect to the design or sizing parameters. This allows comparisons between different process systems to be made in terms of the overall process. The generic methodology is based on the concept of a Comparative Separation Cost, which takes into account the efficiency of each process, the value of its products, and the associated costs. To illustrate the general applicability of the methodology, three different cases suggested by BP Exploration are evaluated. This work has shown that it is possible to identify the most competitive process operations at the conceptual design phase and to illustrate why one process has an advantage over another. Furthermore, the same methodology has been used to identify and evaluate hybrid processes. It is determined here that in some cases they offer substantial advantages over the separate process techniques.
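
The abstract names the Comparative Separation Cost but does not spell out its formula, so the sketch below is a hypothetical stand-in: an annualised cost per unit of product value actually delivered, scaled by separation efficiency, used only to show how such a single-number ranking could drive process selection. All process names and figures are invented for illustration.

```python
def comparative_separation_cost(capex, opex_per_year, years,
                                product_value_per_year, efficiency):
    """Lower is better: total cost per unit of value actually delivered.

    Hypothetical metric, not the thesis's definition: it folds together the
    three ingredients the abstract lists (efficiency, product value, costs).
    """
    total_cost = capex + opex_per_year * years
    delivered_value = product_value_per_year * years * efficiency
    return total_cost / delivered_value

# Illustrative dew-point control options with made-up cost data:
processes = {
    "Joule-Thomson valve": comparative_separation_cost(8e6, 1.2e6, 10, 5e6, 0.90),
    "turbo-expander": comparative_separation_cost(12e6, 0.9e6, 10, 5e6, 0.97),
    "membrane hybrid": comparative_separation_cost(9e6, 1.0e6, 10, 5e6, 0.94),
}
best = min(processes, key=processes.get)  # cheapest per unit of value delivered
```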

Relevance: 30.00%

Abstract:

An investigation was carried out into the different approaches used by Expert Systems researchers to solve problems in the domain of mechanical design. The techniques used for conventional formal logic programming were compared with those used when applying Expert Systems concepts. A literature survey of design processes was also conducted with a view to adopting a suitable model of the design process. A model, comprising a variation on two established ones, was developed and applied to a problem within what are described as class 3 design tasks. The research explored the application of these concepts to mechanical engineering design problems and their implementation on a microcomputer using an Expert System building tool. It was necessary to explore the use of Expert Systems in this manner so as to bridge the gap between their use as a control structure and their use for detailed analytical design; the former application is well researched, and this thesis addresses the latter. Some Expert System building tools available to the author at the beginning of his work were evaluated specifically for their suitability for mechanical engineering design problems. Microsynics was found to be the most suitable on which to implement a design problem, because of its simple but powerful semantic net knowledge representation structure and its ability to use other types of representation scheme. Two major implementations were carried out: the first involved a design program for a helical compression spring, and the second a gear-pair system design. Two concepts are proposed in the thesis for the modelling and implementation of design systems involving many equations. The proposed method enables equation manipulation and analysis using a combination of frames, semantic nets and production rules. The use of semantic nets for purposes other than psychology and natural language interpretation is quite new and represents one of the author's major contributions to knowledge. The development of a purpose-built shell program for this type of design problem is recommended as an extension of the research; Microsynics may usefully be used as a platform for this development.
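
As a flavour of how design equations can live in a knowledge structure and be driven by production rules, here is a toy forward-chaining sketch in Python. It is not Microsynics, and the equation store is a deliberately simplified stand-in for the frames/semantic-net combination the thesis describes; the spring-rate formula k = Gd^4/(8D^3 n) is the standard one for a helical compression spring, and the numeric values are illustrative.

```python
# Each entry relates a derived quantity to its input quantities and the
# equation that computes it, a flattened stand-in for net nodes and frames.
equations = {
    "spring_rate": (["G", "d", "D", "n"],
                    lambda G, d, D, n: G * d**4 / (8 * D**3 * n)),
}

def propagate(known, equations):
    """Forward-chain: fire any equation whose inputs are all known."""
    changed = True
    while changed:
        changed = False
        for target, (inputs, fn) in equations.items():
            if target not in known and all(i in known for i in inputs):
                known[target] = fn(*(known[i] for i in inputs))
                changed = True
    return known

# Illustrative spring design facts: shear modulus G (Pa), wire diameter d (m),
# coil diameter D (m), number of active coils n.
facts = {"G": 79.3e9, "d": 2e-3, "D": 20e-3, "n": 8}
print(propagate(facts, equations)["spring_rate"])  # spring rate in N/m
```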

Relevance: 30.00%

Abstract:

This paper aims to identify the communication goal(s) of a user's information-seeking query, out of a finite set of within-domain goals, in natural language queries. It proposes using Tree-Augmented Naive Bayes networks (TANs) for goal detection. The problem is formulated as N binary decisions, each performed by a TAN. A comparative study was carried out against Naive Bayes, fully-connected TANs, and multi-layer neural networks. Experimental results show that TANs consistently give better results when tested on the ATIS and DARPA Communicator corpora.
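
A minimal sketch of the N-binary-decisions formulation, using scikit-learn's Bernoulli Naive Bayes as a stand-in classifier (standard libraries do not ship a TAN; a true TAN would additionally link each feature to one other feature via a learned tree). The example queries and goal set are invented for illustration, loosely in the style of ATIS.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import BernoulliNB

queries = ["show me flights from boston to denver",
           "what is the cheapest fare to atlanta",
           "which airlines fly to dallas"]
goals = ["flight", "fare", "airline"]       # finite set of within-domain goals
labels = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]  # per-goal binary targets

# Binary word-presence features; one detector per goal, each making the
# binary decision "goal g present" vs "goal g absent" for a query.
vectorizer = CountVectorizer(binary=True)
X = vectorizer.fit_transform(queries)
detectors = [BernoulliNB().fit(X, [row[g] for row in labels])
             for g in range(len(goals))]

test = vectorizer.transform(["cheapest flights to denver"])
scores = {goals[g]: det.predict_proba(test)[0, 1]
          for g, det in enumerate(detectors)}  # P(goal_g | query) per decision
```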

Relevance: 30.00%

Abstract:

Many natural, technological and social systems are inherently not in equilibrium. We show, by detailed analysis of exemplar models, the emergence of equilibriumlike behavior in localized or nonlocalized domains within nonequilibrium Ising spin systems. Equilibrium domains are shown to emerge either abruptly or gradually depending on the system parameters and disappear, becoming indistinguishable from the remainder of the system for other parameter values. © 2013 American Physical Society.
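
The exemplar models are not specified in this abstract; as a generic illustration of a nonequilibrium Ising system of the kind studied, here is a sketch of Glauber dynamics on a ring whose two halves couple to baths at different temperatures. All parameter choices are hypothetical.

```python
import numpy as np

# A 1D Ising ring whose two halves are updated by Glauber (heat-bath)
# dynamics at different temperatures. Coupling two baths drives the chain
# out of equilibrium; one can then ask whether either half, on its own,
# behaves like an equilibrium system at some effective temperature.
rng = np.random.default_rng(1)
N, J, steps = 100, 1.0, 200_000
beta = np.where(np.arange(N) < N // 2, 1.0, 0.25)  # cold half, hot half
s = rng.choice([-1, 1], size=N)

for _ in range(steps):
    i = rng.integers(N)
    h = J * (s[(i - 1) % N] + s[(i + 1) % N])        # local field
    p_up = 1.0 / (1.0 + np.exp(-2.0 * beta[i] * h))  # Glauber flip rule
    s[i] = 1 if rng.random() < p_up else -1

# Compare nearest-neighbour correlations in each half with the equilibrium
# values for a single chain at the corresponding temperature.
nn = s * np.roll(s, -1)
corr_cold, corr_hot = nn[:N // 2].mean(), nn[N // 2:].mean()
```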

Relevance: 30.00%

Abstract:

Cognitive systems research involves the synthesis of ideas from natural and artificial systems in the analysis, understanding, and design of all intelligent systems. This chapter discusses the cognitive systems associated with the hippocampus (HC) of the human brain and their possible role in behaviour and neurodegenerative disease. The HC is concerned with the analysis of highly abstract data derived from all sensory systems, but its specific role remains controversial. Hence, there have been three major theories concerning its function, viz. the memory theory, the spatial theory, and the behavioural inhibition theory. The memory theory has its origin in the surgical destruction of the HC, which results in severe anterograde and partial retrograde amnesia. The spatial theory has its origin in the observation that neurons in the HC of animals show activity related to their location within the environment. By contrast, the behavioural inhibition theory suggests that the HC acts as a ‘comparator’, i.e., it compares current sensory events with expected or predicted events. If a set of expectations continues to be verified, then no alteration of behaviour occurs. If, however, a ‘mismatch’ is detected, then the HC intervenes by initiating appropriate action through active inhibition of current motor programs and initiation of new data gathering. Understanding the cognitive systems of the hippocampus in humans may aid in the design of intelligent systems involved in spatial mapping, memory, and decision making. In addition, this information may lead to a greater understanding of the course of clinical dementia in the various neurodegenerative diseases in which there is significant damage to the HC.