38 results for Computations Driven Systems
in Aston University Research Archive
Abstract:
Based on recent advances in autonomic computing, we propose a methodology for the cost-effective development of self-managing systems starting from a model of the resources to be managed and using a general-purpose autonomic architecture.
Abstract:
Heat pumps are becoming increasingly popular, but poor electricity generating efficiency limits the potential energy savings of electrically powered units. Thus the work reported in this thesis concerns the development of a range of gas engine driven heat pumps for industrial and commercial heating applications, which recover heat from the prime mover that would normally be rejected to waste. Despite the convenience of using proprietary engine heat recovery packages, investigations have highlighted the necessity of ensuring that the engine and the heat recovery equipment are compatible. A problem common to all air source heat pumps is the formation of frost on the evaporator, which must be removed periodically, with the expenditure of energy, to ensure the continued operation of the plant. An original fluidised bed defrosting mechanism is proposed, which prevents the build-up of this frost and also improves system performance. Criticisms have been levelled against the rotary sliding vane compressor, in particular concerning the effects of lubrication, which is essential. This thesis compares the rotary sliding vane compressor with other machines and concludes that many of these criticisms are unfounded. A confidential market survey indicates an increasing demand for heat pumps up to and including 1990, and the technical support needed to penetrate this market is presented. Such support includes the development of a range of modular gas engine driven heat pumps, and a computer-aided design procedure for the selection of the optimum units. A case study of a gas engine driven heat pump for a swimming pool application, which provided valuable experience, is included.
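As a rough orientation (a generic energy balance, not a result from the thesis; the symbols and the numbers below are assumptions), the attraction of recovering the engine's rejected heat can be seen from the approximate primary energy ratio of a gas engine driven heat pump:

```latex
% Generic primary-energy balance for a gas engine driven heat pump (illustrative).
% \eta_s   : engine shaft efficiency (fuel energy -> shaft work)
% \epsilon : fraction of the rejected engine heat recovered to the heating circuit
% COP      : coefficient of performance of the vapour-compression cycle
\[
  \mathrm{PER} \;=\; \frac{Q_{\text{useful}}}{Q_{\text{fuel}}}
  \;\approx\; \eta_s\,\mathrm{COP} \;+\; \epsilon\,(1-\eta_s)
\]
% e.g. \eta_s = 0.3, \epsilon = 0.7, COP = 3 gives PER \approx 0.9 + 0.49 \approx 1.4,
% i.e. more useful heat than burning the same fuel directly in a boiler.
```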
Abstract:
Software development methodologies are becoming increasingly abstract, progressing from low level assembly and implementation languages such as C and Ada, to component based approaches that can be used to assemble applications using technologies such as JavaBeans and the .NET framework. Meanwhile, model driven approaches emphasise the role of higher level models and notations, and embody a process of automatically deriving lower level representations and concrete software implementations. The relationship between data and software is also evolving. Modern data formats are becoming increasingly standardised, open and empowered in order to support a growing need to share data in both academia and industry. Many contemporary data formats, most notably those based on XML, are self-describing, able to specify valid data structure and content, and can also describe data manipulations and transformations. Furthermore, while applications of the past have made extensive use of data, the runtime behaviour of future applications may be driven by data, as demonstrated by the field of dynamic data driven application systems. The combination of empowered data formats and high level software development methodologies forms the basis of modern game development technologies, which drive software capabilities and runtime behaviour using empowered data formats describing game content. While low level libraries provide optimised runtime execution, content data is used to drive a wide variety of interactive and immersive experiences. This thesis describes the Fluid project, which combines component based software development and game development technologies in order to define novel component technologies for the description of data driven component based applications. The thesis makes explicit contributions to the fields of component based software development and visualisation of spatiotemporal scenes, and also describes potential implications for game development technologies. The thesis also proposes a number of developments in dynamic data driven application systems in order to further empower the role of data in this field.
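The idea of an application whose structure and runtime behaviour are driven by self-describing content data can be illustrated with a deliberately small, hypothetical sketch (none of the element names, component types or file names below come from the Fluid project):

```python
import xml.etree.ElementTree as ET

# Hypothetical illustration: the application's components are named and
# parameterised in a data document; the code merely interprets that data.
document = """
<application>
  <component type="terrain" heightmap="hills.raw" scale="2.0"/>
  <component type="camera"  fov="60"/>
</application>
"""

# A registry maps component types named in the data to constructors.
registry = {
    "terrain": lambda **kw: f"Terrain({kw})",
    "camera":  lambda **kw: f"Camera({kw})",
}

components = [
    registry[node.get("type")](**{k: v for k, v in node.attrib.items() if k != "type"})
    for node in ET.fromstring(document)
]
print(components)   # assembled purely from the content description
```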
Abstract:
Engineering adaptive software is an increasingly complex task. Here, we demonstrate Genie, a tool that supports the modelling, generation, and operation of highly reconfigurable, component-based systems. We showcase how Genie is used in two case studies: i) the development and operation of an adaptive flood warning system, and ii) a service discovery application. In this context, adaptation is enabled by the Gridkit reflective middleware platform.
Abstract:
Links the concept of market-driven business strategies with the design of production systems. It draws upon the case of a firm which, during the last decade, changed its strategy from being “technology led” to “market driven”. The research, based on interdisciplinary fieldwork involving long-term participant observation, investigated the factors which contribute to the successful design and implementation of flexible production systems in electronics assembly. These investigations were conducted in collaboration with a major computer manufacturer, with other electronics firms being studied for comparison. The research identified a number of strategies and actions seen as crucial to the development of efficient flexible production systems, namely: effective integration of subsystems, development of appropriate controls and performance measures, compatibility between production system design and organization structure, and the development of a climate conducive to organizational change. Overall, the analysis suggests that in the electronics industry there exists an extremely high degree of environmental complexity and turbulence. This serves to shape the strategic, technical and social structures that are developed to match this complexity, examples of which are niche marketing, flexible manufacturing and employee harmonization.
Abstract:
We present exact analytical results for the statistics of nonlinear coupled oscillators under the influence of additive white noise. We suggest a perturbative approach for analysing the statistics of such systems under the action of a deterministic perturbation, based on the exact expressions for probability density functions for noise-driven oscillators. Using our perturbation technique we show that our results can be applied to studying the optical signal propagation in noisy fibres at (nearly) zero dispersion as well as to weakly nonlinear lattice models with additive noise. The approach proposed can account for a wide spectrum of physically meaningful perturbations and is applicable to the case of large noise strength.
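For orientation, a textbook instance of the kind of exact noise-driven statistics such a perturbative approach builds on (a generic example, not the paper's specific expressions) is the stationary density of an overdamped oscillator in a potential V(x) under additive white noise of intensity D:

```latex
\[
  \dot{x} = -V'(x) + \sqrt{2D}\,\xi(t), \qquad
  \langle \xi(t)\,\xi(t')\rangle = \delta(t-t'),
\]
\[
  p_{\mathrm{st}}(x) \;\propto\; \exp\!\left(-\frac{V(x)}{D}\right),
\]
% the stationary solution of the associated Fokker-Planck equation,
% exact for any noise strength D.
```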
Abstract:
The performance of feed-forward neural networks in real applications can often be improved significantly if use is made of a priori information. For interpolation problems this prior knowledge frequently includes smoothness requirements on the network mapping, and can be imposed by adding suitable regularization terms to the error function. The new error function, however, now depends on the derivatives of the network mapping, and so the standard back-propagation algorithm cannot be applied. In this paper, we derive a computationally efficient learning algorithm, for a feed-forward network of arbitrary topology, which can be used to minimize the new error function. Networks having a single hidden layer, for which the learning algorithm simplifies, are treated as a special case.
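A minimal sketch of the kind of derivative-dependent error function involved (the network, penalty weight and the use of a generic automatic-differentiation tool are assumptions for illustration; the paper derives its own efficient learning algorithm rather than relying on autodiff):

```python
import jax
import jax.numpy as jnp

# Single-hidden-layer network, scalar input and output.
def net(params, x):
    W1, b1, W2, b2 = params
    return jnp.tanh(x * W1 + b1) @ W2 + b2

# Error = sum-of-squares term + curvature (smoothness) penalty on d^2y/dx^2.
# Because the penalty depends on derivatives of the mapping, plain
# back-propagation of a sum-of-squares error no longer applies directly.
def loss(params, xs, ts, nu=1e-3):
    y = jax.vmap(lambda x: net(params, x))(xs)
    err = jnp.mean((y - ts) ** 2)
    d2y = jax.vmap(jax.grad(jax.grad(lambda x: net(params, x))))(xs)
    return err + nu * jnp.mean(d2y ** 2)

k1, k2, k3 = jax.random.split(jax.random.PRNGKey(0), 3)
params = (jax.random.normal(k1, (8,)), jnp.zeros(8),
          jax.random.normal(k2, (8,)), jnp.zeros(()))

xs = jnp.linspace(-1.0, 1.0, 50)
ts = jnp.sin(3 * xs) + 0.1 * jax.random.normal(k3, (50,))

grads = jax.grad(loss)(params, xs, ts)   # gradients of the extended error
```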
Abstract:
This thesis presents an investigation of synchronisation and causality, motivated by problems in computational neuroscience. The thesis addresses both theoretical and practical signal processing issues regarding the estimation of interdependence from a set of multivariate data generated by a complex underlying dynamical system. This topic is driven by a series of problems in neuroscience, which represent the principal background motive behind the material in this work. The underlying system is the human brain and the generative process of the data is based on modern electromagnetic neuroimaging methods. In this thesis, the underlying functional mechanisms of the brain are described using the recent mathematical formalism of dynamical systems on complex networks. This is justified principally on the grounds of the complex hierarchical and multiscale nature of the brain, and it offers new methods of analysis for modelling its emergent phenomena. A fundamental approach to studying neural activity is to investigate the connectivity pattern developed by the brain’s complex network. Three types of connectivity are important to study: 1) anatomical connectivity, referring to the physical links forming the topology of the brain network; 2) effective connectivity, concerned with the way the neural elements communicate with each other using the brain’s anatomical structure, through phenomena of synchronisation and information transfer; 3) functional connectivity, an epistemic concept which refers to the interdependence between data measured from the brain network. The main contribution of this thesis is to present, apply and discuss novel algorithms for functional connectivity, designed to extract different specific aspects of interaction between the underlying generators of the data. Firstly, a univariate statistic is developed to allow indirect assessment of synchronisation in the local network from a single time series. This approach is useful in inferring the coupling in a local cortical area as observed by a single measurement electrode. Secondly, different existing methods of phase synchronisation are considered from the perspective of experimental data analysis and inference of coupling from observed data. These methods are designed to address the estimation of medium- to long-range connectivity, and their differences are particularly relevant in the context of volume conduction, which is known to produce spurious detections of connectivity. Finally, an asymmetric temporal metric is introduced in order to detect the direction of the coupling between different regions of the brain. The method developed in this thesis is based on a machine learning extension of the well-known concept of Granger causality. The thesis discussion is developed alongside examples of synthetic and experimental real data. The synthetic data are simulations of complex dynamical systems intended to mimic the behaviour of simple cortical neural assemblies; they are helpful for testing the techniques developed in this thesis. The real datasets are provided to illustrate the problem of brain connectivity in the case of important neurological disorders such as epilepsy and Parkinson’s disease. The methods of functional connectivity in this thesis are applied to intracranial EEG recordings in order to extract features which characterise underlying spatiotemporal dynamics before, during and after an epileptic seizure, and to predict seizure location and onset prior to conventional electrographic signs. The methodology is also applied to an MEG dataset containing healthy, Parkinson’s and dementia subjects, with the aim of distinguishing pathological from physiological patterns of connectivity.
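A small illustrative example of one widely used phase-synchronisation measure of this kind, the phase-locking value estimated via the Hilbert transform (the signals and parameters below are synthetic stand-ins, not the thesis data or its exact algorithms):

```python
import numpy as np
from scipy.signal import hilbert

fs = 250                                    # sampling rate in Hz (assumed)
t = np.arange(0, 10, 1 / fs)
common = np.sin(2 * np.pi * 10 * t)         # shared 10 Hz rhythm
x = common + 0.5 * np.random.randn(t.size)  # two noisy observations of it
y = common + 0.5 * np.random.randn(t.size)

phase_x = np.angle(hilbert(x))              # instantaneous phases via Hilbert transform
phase_y = np.angle(hilbert(y))
plv = np.abs(np.mean(np.exp(1j * (phase_x - phase_y))))
print(f"PLV = {plv:.2f}")                   # near 1 for locked phases, near 0 if independent
```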
Abstract:
Most current 3D landscape visualisation systems either use bespoke hardware solutions, or offer a limited amount of interaction and detail when used in real-time mode. We are developing a modular, data driven 3D visualisation system that can be readily customised to specific requirements. By utilising the latest software engineering methods and bringing a dynamic data driven approach to geo-spatial data visualisation we will deliver an unparalleled level of customisation in near-photorealistic, real-time 3D landscape visualisation. In this paper we show the system framework and describe how this employs data driven techniques. In particular we discuss how data driven approaches are applied to the spatiotemporal management aspect of the application framework, and describe the advantages these convey.
Abstract:
Vaccination remains a key tool in the protection against and eradication of diseases. However, the development of new safe and effective vaccines is not easy. Various live-organism-based vaccines currently licensed exhibit high efficacy; however, this benefit is associated with risk, due to the adverse reactions found with these vaccines. Therefore, in the development of vaccines, the associated risk-benefit issues need to be addressed. Sub-unit proteins offer a much safer alternative; however, their efficacy is low. The use of adjuvanted systems has proven to enhance the immunogenicity of these sub-unit vaccines through protection (i.e. preventing degradation of the antigen in vivo) and enhanced targeting of these antigens to professional antigen-presenting cells. An understanding of the immunological implications of the related disease will enable validation of the design and development of potential adjuvant systems. Novel adjuvant research combines pharmaceutical analysis with detailed immunological investigations, whereby pharmaceutically designed adjuvants are driven by an increased understanding of mechanisms of adjuvant activity, largely facilitated by the description of highly specific innate immune recognition of components usually associated with the presence of invading bacteria or viruses. The majority of pharmaceutically based adjuvants currently being investigated are particulate delivery systems, such as liposome formulations. As an adjuvant, liposomes have been shown to enhance immunity against the associated disease, particularly when a cationic lipid is used within the formulation. In addition, the inclusion of components such as immunomodulators further enhances immunity. Within this review, the use and application of effective adjuvants are investigated, with particular emphasis on liposomal-based systems. The mechanisms of adjuvant activity, the analysis of complex immunological characteristics, and the formulation and delivery of these vaccines are considered.
Abstract:
Queueing theory is an effective tool in the analysis of computer communication systems. Many results in queueing analysis have been derived in the form of Laplace and z-transform expressions. Accurate inversion of these transforms is very important in the study of computer systems, but the inversion is very often difficult. In this thesis, methods for solving some of these queueing problems by use of digital signal processing techniques are presented. The z-transform of the queue length distribution for the M/G^Y/1 system is derived. Two numerical methods for the inversion of the transform, together with the standard numerical technique for solving transforms with multiple queue-state dependence, are presented. Bilinear and Poisson transform sequences are presented as useful ways of representing continuous-time functions in numerical computations.
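A brief sketch of the kind of numerical transform inversion involved (illustrative only; the unit-circle sampling approach and the M/M/1 check below are assumptions, not the thesis's two methods):

```python
import numpy as np

# Recover a queue-length distribution p_n from its probability generating
# function P(z) = sum_n p_n z^n by sampling P on the unit circle and applying
# an inverse FFT.  The M/M/1 queue is used purely as a check, since its
# distribution p_n = (1 - rho) * rho**n is known in closed form.

rho = 0.7                                     # traffic intensity (assumed value)
P = lambda z: (1 - rho) / (1 - rho * z)       # PGF of the M/M/1 queue length

N = 256                                       # number of sample points
z = np.exp(-2j * np.pi * np.arange(N) / N)    # samples matching NumPy's DFT convention
p = np.fft.ifft(P(z)).real                    # recovers p_0 .. p_{N-1}

exact = (1 - rho) * rho ** np.arange(N)
print(np.max(np.abs(p - exact)))              # aliasing error, negligible here
```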
Abstract:
Distributed digital control systems provide alternatives to conventional, centralised digital control systems. Typically, a modern distributed control system will comprise a multi-processor or network of processors, a communications network, an associated set of sensors and actuators, and the systems and applications software. This thesis addresses the problem of how to design robust decentralised control systems, such as those used to control event-driven, real-time processes in time-critical environments. Emphasis is placed on studying the dynamical behaviour of a system and identifying ways of partitioning the system so that it may be controlled in a distributed manner. A structural partitioning technique is adopted which makes use of natural physical sub-processes in the system, which are then mapped into the software processes to control the system. However, communications are required between the processes because of the disjoint nature of the distributed (i.e. partitioned) state of the physical system. The structural partitioning technique, and recent developments in the theory of potential controllability and observability of a system, are the basis for the design of controllers. In particular, the method is used to derive a decentralised estimate of the state vector for a continuous-time system. The work is also extended to derive a distributed estimate for a discrete-time system. Emphasis is also given to the role of communications in the distributed control of processes and to the partitioning technique necessary to design distributed and decentralised systems with resilient structures. A method is presented for the systematic identification of necessary communications for distributed control. It is also shown that the structural partitions can be used directly in the design of software fault tolerant concurrent controllers. In particular, the structural partition can be used to identify the boundary of the conversation which can be used to protect a specific part of the system. In addition, for certain classes of system, the partitions can be used to identify processes which may be dynamically reconfigured in the event of a fault. These methods should be of use in the design of robust distributed systems.
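A minimal sketch of a decentralised estimator of this general sort, for an assumed block-partitioned discrete-time plant (the matrices, observer gains and measurement structure are illustrative, not taken from the thesis):

```python
import numpy as np

# Two structurally partitioned subsystems of a linear plant x[k+1] = A x[k];
# each runs a local observer on its own block and receives the neighbour's
# estimate over a communication link to account for the interconnection term.
A = np.array([[0.90, 0.10, 0.05, 0.00],
              [0.00, 0.80, 0.00, 0.10],
              [0.10, 0.00, 0.85, 0.00],
              [0.00, 0.05, 0.10, 0.90]])
A11, A12 = A[:2, :2], A[:2, 2:]
A21, A22 = A[2:, :2], A[2:, 2:]
C1 = np.array([[1.0, 0.0]])          # subsystem 1 measures its first state
C2 = np.array([[1.0, 0.0]])          # subsystem 2 measures its first state
L1 = np.array([[0.5], [0.2]])        # illustrative observer gains
L2 = np.array([[0.5], [0.2]])

x = np.array([1.0, -1.0, 0.5, 2.0])  # true plant state
xh1, xh2 = np.zeros(2), np.zeros(2)  # local estimates

for _ in range(50):
    y1, y2 = C1 @ x[:2], C2 @ x[2:]
    # each local observer: own dynamics + communicated estimate + output injection
    xh1_next = A11 @ xh1 + A12 @ xh2 + L1 @ (y1 - C1 @ xh1)
    xh2_next = A22 @ xh2 + A21 @ xh1 + L2 @ (y2 - C2 @ xh2)
    xh1, xh2 = xh1_next, xh2_next
    x = A @ x                         # plant evolves in parallel

print(np.linalg.norm(x - np.concatenate([xh1, xh2])))   # estimation error shrinks towards zero
```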
Abstract:
A major application of computers has been to control physical processes in which the computer is embedded within some large physical process and is required to control concurrent physical processes. The main difficulty with these systems is their event-driven characteristics, which complicate their modelling and analysis. Although a number of researchers in the process system community have approached the problems of modelling and analysis of such systems, there is still a lack of standardised software development formalisms for the system (controller) development, particularly at the early stages of the system design cycle. This research forms part of a larger research programme which is concerned with the development of real-time process-control systems in which software is used to control concurrent physical processes. The general objective of the research in this thesis is to investigate the use of formal techniques in the analysis of such systems at their early stages of development, with a particular bias towards an application to high-speed machinery. Specifically, the research aims to generate a standardised software development formalism for real-time process-control systems, particularly for software controller synthesis. In this research, a graphical modelling formalism called Sequential Function Chart (SFC), a variant of Grafcet, is examined. SFC, which is defined in the international standard IEC1131 as a graphical description language, has been used widely in industry and has achieved an acceptable level of maturity and acceptance. A comparative study between SFC and Petri nets is presented in this thesis. To overcome identified inaccuracies in the SFC, a formal definition of the firing rules for SFC is given. To provide a framework in which SFC models can be analysed formally, an extended time-related Petri net model for SFC is proposed and the transformation method is defined. The SFC notation lacks a systematic way of synthesising system models from real world systems. Thus a standardised approach to the development of real-time process-control systems is required such that the system (software) functional requirements can be identified, captured and analysed. A rule-based approach and a method called the system behaviour driven method (SBDM) are proposed as a development formalism for real-time process-control systems.
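The flavour of Petri-net style firing rules used to give SFC steps and transitions a formal semantics can be shown with a small illustrative sketch (the step and transition names are hypothetical, not taken from the thesis):

```python
# A transition is enabled when every input place holds a token;
# firing removes those tokens and marks the output places.
marking = {"step_idle": 1, "step_heat": 0, "step_cool": 0}

transitions = {
    "start_heating": {"in": ["step_idle"], "out": ["step_heat"]},
    "start_cooling": {"in": ["step_heat"], "out": ["step_cool"]},
}

def enabled(name):
    return all(marking[p] > 0 for p in transitions[name]["in"])

def fire(name):
    assert enabled(name), f"{name} is not enabled"
    for p in transitions[name]["in"]:
        marking[p] -= 1
    for p in transitions[name]["out"]:
        marking[p] += 1

fire("start_heating")
print(marking)   # {'step_idle': 0, 'step_heat': 1, 'step_cool': 0}
```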
Abstract:
We present the prototype tool CADS* for the computer-aided development of an important class of self-* systems, namely systems whose components can be modelled as Markov chains. Given a Markov chain representation of the IT components to be included into a self-* system, CADS* automates or aids (a) the development of the artifacts necessary to build the self-* system; and (b) their integration into a fully-operational self-* solution. This is achieved through a combination of formal software development techniques including model transformation, model-driven code generation and dynamic software reconfiguration.
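A tiny example of the kind of Markov chain component model such a tool takes as input (the states and transition probabilities below are assumed purely for illustration), here solved for its steady-state distribution:

```python
import numpy as np

# Three-state discrete-time Markov chain for an IT component,
# e.g. states OK / degraded / failed (illustrative values).
P = np.array([[0.90, 0.08, 0.02],
              [0.20, 0.70, 0.10],
              [0.50, 0.00, 0.50]])

# Steady state pi solves pi P = pi with entries summing to one:
# take the eigenvector of P^T for its largest (unit) eigenvalue.
w, v = np.linalg.eig(P.T)
pi = np.real(v[:, np.argmax(np.real(w))])
pi /= pi.sum()
print(pi)   # long-run fraction of time spent in each state
```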
Abstract:
The further development of the use of NMR relaxation times in chemical, biological and medical research has perhaps been curtailed by the length of time these measurements often take. The DESPOT (Driven Equilibrium Single Pulse Observation of T1) method has been developed, which reduces the time required to make a T1 measurement by a factor of up to 100. The technique has been studied extensively herein and the thesis contains recommendations for its successful experimental application. Modified DESPOT type equations for use when T2 relaxation is incomplete or where off-resonance effects are thought to be significant are also presented. A recently reported application of the DESPOT technique to MR imaging gave good initial results but suffered from the fact that the images were derived from spin systems that were not driven to equilibrium. An approach which allows equilibrium to be obtained with only one non-acquisition sequence is presented herein and should prove invaluable in variable contrast imaging. A DESPOT type approach has also been successfully applied to the measurement of T1. T1's can be measured using this approach significantly faster than by the classical method. The new method also provides a value for T1 simultaneously and therefore the technique should prove valuable in intermediate energy barrier chemical exchange studies. The method also gives rise to the possibility of obtaining simultaneous T1 and T1 MR images. The DESPOT technique depends on rapid multipulsing at nutation angles, normally less than 90°. Work in this area has highlighted the possible time saving for spectral acquisition over the classical technique (90°-5T1)n. A new method based on these principles has been developed which permits the rapid multipulsing of samples to give T1 and M0 ratio information. The time needed, however, is only slightly longer than would be required to determine the M0 ratio alone using the classical technique. In ¹H-decoupled ¹³C spectroscopy the method also gives nOe ratio information for the individual absorptions in the spectrum.
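For reference, the DESPOT relation in its commonly quoted form (standard literature notation and symbols, not copied from the thesis): the steady-state signal after rapid pulsing at flip angle α with repetition time TR is

```latex
\[
  S(\alpha) \;=\; \frac{M_0\,(1 - E_1)\,\sin\alpha}{1 - E_1\cos\alpha},
  \qquad E_1 = e^{-T_R/T_1},
\]
% which rearranges into the linear form used to extract T1 from a few flip angles:
\[
  \frac{S}{\sin\alpha} \;=\; E_1\,\frac{S}{\tan\alpha} \;+\; M_0\,(1 - E_1),
\]
% so E_1 (and hence T1) is obtained from the slope, and M_0 from the intercept.
```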