26 results for Essential-state models

in Aston University Research Archive


Relevance: 100.00%

Abstract:

Amongst all the objectives in the study of time series, uncovering the dynamic law of its generation is probably the most important. When the underlying dynamics are not available, time series modelling consists of developing a model which best explains a sequence of observations. In this thesis, we consider hidden state models for analysing and describing time series. We first provide an introduction to the principal concepts of hidden state models and draw an analogy between hidden Markov models and state space models. Central ideas such as hidden state inference and parameter estimation are reviewed in detail. A key part of multivariate time series analysis is identifying the delay between different variables. We present a novel approach to time delay estimation in a non-stationary environment. The technique makes use of hidden Markov models and we demonstrate its application to estimating a crucial parameter in the oil industry. We then focus on hybrid models that we call dynamical local models. These models combine and generalise hidden Markov models and state space models. Probabilistic inference is unfortunately computationally intractable, and we show how to make use of variational techniques to approximate the posterior distribution over the hidden state variables. Experimental simulations on synthetic and real-world data demonstrate the application of dynamical local models to segmenting a time series into regimes and providing predictive distributions.
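
To make the hidden-state inference mentioned above concrete, here is a minimal sketch of the forward algorithm for a discrete hidden Markov model. The transition matrix A, emission matrix B and initial distribution pi are illustrative placeholders, not parameters from the thesis.

```python
import numpy as np

def forward(obs, A, B, pi):
    """Forward algorithm: alpha[t, s] = p(o_1..o_t, s_t = s)."""
    alpha = np.zeros((len(obs), A.shape[0]))
    alpha[0] = pi * B[:, obs[0]]
    for t in range(1, len(obs)):
        alpha[t] = (alpha[t - 1] @ A) * B[:, obs[t]]
    return alpha

A = np.array([[0.9, 0.1], [0.2, 0.8]])   # hidden-state transition matrix
B = np.array([[0.7, 0.3], [0.1, 0.9]])   # emission probabilities (2 symbols)
pi = np.array([0.5, 0.5])                # initial state distribution
obs = [0, 0, 1, 1, 1]                    # an observed symbol sequence
alpha = forward(obs, A, B, pi)
print(alpha[-1].sum())                   # likelihood of the whole sequence
```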

Relevance: 90.00%

Abstract:

Common approaches to IP-traffic modelling have featured the use of stochastic models, based on the Markov property, which can be classified into black box and white box models according to the approach used for modelling traffic. White box models are simple to understand, transparent, and have a physical meaning attributed to each of the associated parameters. To exploit this key advantage, this thesis explores the use of simple classic continuous-time Markov models based on a white box approach to model not only the network traffic statistics but also the source behaviour with respect to the network and application. The thesis is divided into two parts. The first part focuses on the use of simple Markov and semi-Markov traffic models, starting from the simplest two-state model and moving up to n-state models with Poisson and non-Poisson statistics. The thesis then introduces the convenient-to-use, mathematically derived Gaussian Markov models, which are used to model the measured network IP traffic statistics. As one of its most significant contributions, the thesis establishes the significance of second-order density statistics, revealing that, in contrast to first-order density, they carry much more unique information on traffic sources and behaviour. The thesis then exploits Gaussian Markov models to capture these unique features and finally shows how simple classic Markov models, coupled with second-order density statistics, provide an excellent tool for capturing maximum traffic detail, which in itself is the essence of good traffic modelling. The second part of the thesis studies the ON-OFF characteristics of VoIP traffic with reference to accurate measurements of the ON and OFF periods, made from a large multi-lingual database of over 100 hours' worth of VoIP call recordings. The impact of the speaker's language, prosodic structure and speech rate on the statistics of the ON-OFF periods is analysed and relevant conclusions are presented. Finally, an ON-OFF VoIP source model with log-normal transitions is contributed as an ideal candidate for modelling VoIP traffic, and its results are compared with those of previously published work.
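
As a sketch of the kind of ON-OFF source model the thesis contributes, the following simulates a two-state source whose ON and OFF sojourn times are log-normally distributed. The distribution parameters are illustrative placeholders, not the values fitted to the VoIP recordings.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_on_off(total_time, mu_on=0.0, sig_on=0.6, mu_off=0.3, sig_off=0.8):
    """Return (state, duration) pairs of alternating ON/OFF periods."""
    t, state, periods = 0.0, "ON", []
    while t < total_time:
        mu, sig = (mu_on, sig_on) if state == "ON" else (mu_off, sig_off)
        d = rng.lognormal(mu, sig)          # log-normal sojourn time
        periods.append((state, d))
        t += d
        state = "OFF" if state == "ON" else "ON"
    return periods

periods = simulate_on_off(600.0)
on_time = sum(d for s, d in periods if s == "ON")
print(f"activity factor ~ {on_time / sum(d for _, d in periods):.2f}")
```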

Relevance: 40.00%

Abstract:

The deficiencies of stationary models applied to financial time series are well documented. A special form of non-stationarity, where the underlying generator switches between (approximately) stationary regimes, seems particularly appropriate for financial markets. We use dynamic switching (modelled by a hidden Markov model) combined with a linear dynamical system in a hybrid switching state space model (SSSM) and discuss the practical details of training such models with a variational EM algorithm due to [Ghahramani and Hinton, 1998]. The performance of the SSSM is evaluated on several financial data sets and is shown to improve on a number of existing benchmark methods.
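
A minimal generative sketch of a switching state space model, assuming two linear regimes selected by a hidden Markov chain; all parameter values are illustrative, not those learned in the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
A_regime = [0.98, 0.60]              # per-regime linear dynamics coefficient
Q, R = 0.05, 0.10                    # process / observation noise variances
P = np.array([[0.95, 0.05],          # hidden Markov regime transitions
              [0.05, 0.95]])

def sample_sssm(T):
    """Generate T observations from the two-regime switching model."""
    s, x, ys = 0, 0.0, []
    for _ in range(T):
        s = rng.choice(2, p=P[s])                        # switch regime
        x = A_regime[s] * x + rng.normal(0.0, Q ** 0.5)  # linear state update
        ys.append(x + rng.normal(0.0, R ** 0.5))         # noisy observation
    return np.array(ys)

print(sample_sssm(10))
```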

Relevance: 30.00%

Abstract:

Increasing evidence suggests that tissue transglutaminase (tTGase; type II) is externalized from cells, where it may play a key role in cell attachment and spreading and in the stabilization of the extracellular matrix (ECM) through protein cross-linking. However, the relationship between these different functions and the enzyme's mechanism of secretion is not fully understood. We have investigated the role of tTGase in cell migration using two stably transfected fibroblast cell lines in which expression of tTGase in its active and inactive (C277S mutant) states is inducible through the tetracycline-regulated system. Cells overexpressing both forms of tTGase showed increased cell attachment and decreased cell migration on fibronectin. Both forms of the enzyme could be detected on the cell surface, but only the clone overexpressing catalytically active tTGase deposited the enzyme into the ECM and cell growth medium. Cells overexpressing the inactive form of tTGase did not deposit the enzyme into the ECM or secrete it into the cell culture medium. Similar results were obtained when cells were transfected with tTGase mutated at Tyr(274) (Y274A), the proposed site for the cis,trans peptide bond, suggesting that tTGase activity and/or its tertiary conformation dependent on this bond may be essential for its externalization mechanism. These results indicate that tTGase regulates cell motility as a novel cell-surface adhesion protein rather than as a matrix-cross-linking enzyme. They also provide further important insights into the mechanism of externalization of the enzyme into the extracellular matrix.

Relevance: 30.00%

Abstract:

This thesis is concerned with Maine de Biran’s and Samuel Taylor Coleridge’s conceptions of will, and the way in which both thinkers’ posterities have been affected by the central role of these very conceptions in their respective bodies of thought. The research question that animates this work can therefore be divided into two main parts, one of which deals with will, while the other deals with its effects on posterity. In the first pages of the Introduction, I make the case for a comparison between two philosophers, and show how this comparison can bring one closer to truth, understood not in objective, but in subjective terms. I then justify my choice by underlining that, in spite of their many differences, Maine de Biran and Samuel Taylor Coleridge followed comparable paths, intellectually and spiritually, and came to similar conclusions concerning the essential activity of the human mind. Finally, I ask whether it is possible that this very focus on the human will may have contributed to the state of both thinkers’ works and of the reception of those works. This prologue is followed by five parts. In the first part, the similarities and differences between the two thinkers are explored further. In the second part, the connections between philosophy and singularity are examined, in order to show the ambivalence of the will as a foundation for truth. The third part is dedicated to the traditional division between subject and object in psychology, and its relevance in history and in moral philosophy. The fourth part tackles the complexity of the question of influence, with respect to both Maine de Biran’s and Coleridge’s cases, both thinkers being indebted to many philosophers of all times and places, and having to rely heavily on others for the publication or the interpretation of their own works. The fifth part is concerned with the different aspects of the faculty of will, and primarily its relationship with interiority, as incommensurability, and actual, conditioned existence in a certain historical and spatial context. It ends with a return to the question of will and posterity and an announcement of what will be covered in the main body of the thesis. The main body is divided into three parts: ‘L’émancipation’, ‘L’affirmation’ and ‘La projection’. The first part is devoted to the way Maine de Biran and Samuel Taylor Coleridge extricated themselves from one epistemological paradigm to contribute to the foundation of another. It is divided into four chapters. The first chapter deals with the aforementioned change of paradigm, as corresponding to the emergence of two separate but associated movements, Romanticism and what the French philosopher refers to as ‘The Age of History’. The second chapter concerns the movement that preceded them, i.e. the Enlightenment, its main features according to both of our thinkers, and the two epistemological models that prevailed under it and influenced them heavily in their early years: Sensationism (Maine de Biran) and Associationism (Coleridge). The third chapter is about the probable influence of Immanuel Kant and his followers on Maine de Biran and Coleridge, and the various facts that allow us to claim originality for both thinkers’ works. In the fourth chapter, I contrast Maine de Biran and Coleridge with other movements and thinkers of their time, showing that, contrary to their respective thoughts, Maine de Biran and Coleridge could not but break free from the then prevailing systematic approach to truth.
The second part of the thesis is concerned with the first part of its research question, namely, Maine de Biran’s and Coleridge’s conceptions of the will. It is divided into four chapters. The first chapter is a reflection on the will as a paradox: on the one hand, the will cannot be caused by any other phenomenon, or it is no longer a will; but it cannot be left purely undetermined either, for if it is, it is no different from chance. It thus needs, in order to be, to be contradictorily already moral. The second chapter is a comparison between Maine de Biran’s and Coleridge’s accounts of the origin of the will, where it is found that the French philosopher only observes that he has a will, whereas the English philosopher postulates the existence of this will. The comparison between Maine de Biran’s and Coleridge’s conceptions of the will is pursued in the third chapter, which tackles the question of the coincidence between the will and the self in both thinkers’ works. It ends with the fourth chapter, which deals with the question of the relationship between the will and what is other to it, i.e. bodily sensations, passions and desires. The third part of the thesis focuses on the second part of its research question, namely the posterity of Maine de Biran’s and Coleridge’s works. It is divided into four chapters. The first chapter constitutes a continuation of the last chapter of the preceding part, in that it deals with Maine de Biran’s and Coleridge’s relations to the ‘other’, particularly their potential and actual audience, and with the way these relations may have affected their writing and publishing practices. The second chapter is a survey of both thinkers’ general reception, where it is found that, while Maine de Biran has been claimed by two important movements of thought as their initiator, Coleridge has been neglected by the only real movement he could have, or may indeed have, pioneered. The third chapter is more directly concerned with the posterities of Maine de Biran’s and Coleridge’s conceptions of will, and attempts to show that the approach to, and the meaning of, the will evolved throughout the nineteenth century, in the French Spiritualist and the British Idealist movements, from an essentially personal one to a more impersonal one. The fourth chapter is a partial conclusion, whose aim is to give a precise idea of where Maine de Biran and Coleridge stand in relation to their century and to the philosophical movements and matters we are concerned with. The conclusion is a recapitulation of what has been found, with a particular emphasis on the dialogue initiated between Maine de Biran and Coleridge on the will, and on the relation between will and posterity. It suggests that both thinkers have to pay the price of a problematic reception for the individuality that pervades their respective works, and goes further in suggesting that s/he who chooses to found his/her individuality on the will is bound to feel this incompleteness in his/her own personal life more acutely than s/he who does not. It ends with a reflection on fixedness and movement, as the two antagonistic states that the theoretician of the will paradoxically aspires to.

Relevance: 30.00%

Abstract:

This research is concerned with the development of distributed real-time systems, in which software is used for the control of concurrent physical processes. These distributed control systems are required to periodically coordinate the operation of several autonomous physical processes, with the property of an atomic action. The implementation of this coordination must be fault-tolerant if the integrity of the system is to be maintained in the presence of processor or communication failures. Commit protocols have been widely used to provide this type of atomicity and ensure consistency in distributed computer systems. The objective of this research is the development of a class of robust commit protocols, applicable to the coordination of distributed real-time control systems. Extended forms of the standard two-phase commit protocol, which provide fault-tolerant and real-time behaviour, were developed. Petri nets are used for the design of the distributed controllers, and to embed the commit protocol models within these controller designs. This composition of controller and protocol model allows the analysis of the complete system in a unified manner. A common problem for Petri net based techniques is that of state space explosion; a modular approach to both design and analysis helps cope with this problem. Although extensions to Petri nets that allow module construction exist, the modularisation is generally restricted to the specification, and analysis must be performed on the (flat) detailed net. The Petri net designs for the type of distributed systems considered in this research are both large and complex. The top-down, bottom-up and hybrid synthesis techniques that are used to model large systems in Petri nets are considered, and a hybrid approach to Petri net design for a restricted class of communicating processes is developed. Designs produced using this hybrid approach are modular and allow re-use of verified modules. In order to use this form of modular analysis, it is necessary to project an equivalent but reduced behaviour onto the modules used. These projections conceal events local to modules that are not essential for the purpose of analysis. To generate the external behaviour, each firing sequence of the subnet is replaced by an atomic transition internal to the module, and the firing of these transitions transforms the input and output markings of the module. Thus local events are concealed through the projection of the external behaviour of modules. This hybrid design approach preserves properties of interest, such as boundedness and liveness, while the systematic concealment of local events keeps the state space manageable. The approach presented in this research is particularly suited to distributed systems, as the underlying communication model is used as the basis for the interconnection of modules in the design procedure. This hybrid approach is applied to the Petri net based design and analysis of distributed controllers for two industrial applications that incorporate the robust, real-time commit protocols developed. Temporal Petri nets, which combine Petri nets and temporal logic, are used to capture and verify causal and temporal aspects of the designs in a unified manner.
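
A minimal sketch of the Petri net semantics underlying the controller designs: places hold tokens, and a transition fires when its input places hold enough tokens. The toy two-step "prepare then commit" net is illustrative, not one of the thesis's protocol models.

```python
from typing import Dict

Marking = Dict[str, int]

def enabled(pre: Marking, marking: Marking) -> bool:
    """A transition is enabled if every input place holds enough tokens."""
    return all(marking.get(p, 0) >= n for p, n in pre.items())

def fire(pre: Marking, post: Marking, marking: Marking) -> Marking:
    """Consume tokens from input places and produce tokens in output places."""
    m = dict(marking)
    for p, n in pre.items():
        m[p] -= n
    for p, n in post.items():
        m[p] = m.get(p, 0) + n
    return m

# Toy skeleton of a two-step protocol: "prepare" then "commit"
m = {"ready": 1}
prepare = ({"ready": 1}, {"prepared": 1})
commit = ({"prepared": 1}, {"committed": 1})
for pre, post in (prepare, commit):
    assert enabled(pre, m)
    m = fire(pre, post, m)
print(m)  # {'ready': 0, 'prepared': 0, 'committed': 1}
```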

Relevance: 30.00%

Abstract:

This preliminary report describes work carried out as part of work package 1.2 of the MUCM research project. The report is split into two parts: the first part (Sections 1 and 2) summarises the state of the art in emulation of computer models, while the second presents some initial work on the emulation of dynamic models. In the first part, we describe the basics of emulation, introduce the notation and put together the key results for the emulation of models with single and multiple outputs, with or without the use of a mean function. In the second part, we present preliminary results on the chaotic Lorenz 63 model. We look at emulation of a single time step, and repeated application of the emulator for sequential prediction. After some design considerations, the emulator is compared with the exact simulator on a number of runs to assess its performance. Several general issues related to emulating dynamic models are raised and discussed. Current work on the larger Lorenz 96 model (40 variables) is presented in the context of dimension reduction, with results to be provided in a follow-up report. The notation used in this report is summarised in the appendix.
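
A minimal sketch of single-time-step emulation of Lorenz 63, assuming a zero-mean Gaussian process with a squared-exponential kernel and fixed, illustrative hyperparameters; the report's actual emulator choices (mean function, design, kernel) may differ.

```python
import numpy as np

def lorenz63_step(state, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """One forward-Euler step of the Lorenz 63 simulator."""
    x, y, z = state
    return state + dt * np.array([sigma * (y - x), x * (rho - z), x * y - beta * z])

def rbf(X1, X2, ell=5.0):
    """Squared-exponential kernel matrix between two sets of points."""
    d = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d / ell ** 2)

rng = np.random.default_rng(2)
X = rng.uniform(-20, 20, size=(200, 3))          # design points in state space
Y = np.array([lorenz63_step(x) for x in X])      # simulator runs at the design
K = rbf(X, X) + 1e-6 * np.eye(len(X))            # kernel matrix with jitter
alpha = np.linalg.solve(K, Y)

def emulate_step(state):
    """GP posterior-mean prediction of the next state (zero prior mean)."""
    return rbf(state[None, :], X) @ alpha

s = np.array([1.0, 1.0, 1.0])
print(lorenz63_step(s), emulate_step(s).ravel())  # simulator vs emulator
```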

Relevance: 30.00%

Abstract:

The equation of state for dense fluids has been derived within the framework of the Sutherland and Katz potential models. The equation quantitatively agrees with experimental data on the isothermal compression of water under extrapolation into the high pressure region. It establishes an explicit relationship between the thermodynamic experimental data and the effective parameters of the molecular potential.
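
For context, the Sutherland model referred to above combines a hard core with an attractive inverse-power tail; a textbook statement of the potential is (here σ, ε and the exponent n, often taken as 6, stand in for the effective parameters fitted in the paper):

```latex
U(r) =
\begin{cases}
\infty, & r < \sigma, \\[4pt]
-\varepsilon \left(\dfrac{\sigma}{r}\right)^{\!n}, & r \ge \sigma.
\end{cases}
```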

Relevance: 30.00%

Abstract:

A recent method for phase equilibria, the AGAPE method, has been used to predict activity coefficients and excess Gibbs energy for binary mixtures with good accuracy. The theory, based on a generalised London potential (GLP), accounts for intermolecular attractive forces. Unlike existing prediction methods, for example UNIFAC, the AGAPE method uses only information derived from accessible experimental data and molecular information for pure components. Presently, the AGAPE method has some limitations, namely that the mixtures must consist of small, non-polar compounds with no hydrogen bonding, at low to moderate pressures and at conditions below the critical conditions of the components. The distinction between vapour-liquid equilibria and gas-liquid solubility is rather arbitrary, and it seems reasonable to extend these ideas to solubility. The AGAPE model uses a molecular lattice-based mixing rule. By judicious use of computer programs, a methodology was created to examine a body of experimental gas-liquid solubility data for gases such as carbon dioxide, propane, n-butane and sulphur hexafluoride, which all have critical temperatures a little above 298 K, dissolved in benzene, cyclohexane and methanol. Within this methodology, the value of the GLP as an ab initio combining rule for such solutes in very dilute solutions in a variety of liquids has been tested. Using the GLP as a mixing rule involves the computation of rotationally averaged interactions between the constituent atoms, and new calculations had to be made to discover the magnitude of the unlike-pair interactions. These numbers are significant in their own right in the context of the behaviour of infinitely dilute solutions. A method for extending this treatment to "permanent" gases has also been developed. The findings from the GLP method and from the more general AGAPE approach have been examined in the context of other models for gas-liquid solubility, both "classical" and contemporary, in particular those derived from equation-of-state methods and from reference-solvent methods.
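
As background, a textbook form of the London dispersion energy between unlike molecules 1 and 2, which the generalised London potential builds on (the α_i are polarisabilities and the I_i ionisation energies; this is not the AGAPE expression itself):

```latex
U_{12}(r) = -\,\frac{3}{2}\,\frac{I_1 I_2}{I_1 + I_2}\,\frac{\alpha_1 \alpha_2}{r^{6}}
```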

Relevance: 30.00%

Abstract:

The research described here concerns the development of metrics and models to support the development of hybrid (conventional/knowledge based) integrated systems. The thesis argues from the point that, although it is well known that estimating the cost, duration and quality of information systems is a difficult task, it is far from clear what sorts of tools and techniques would adequately support a project manager in the estimation of these properties. A literature review shows that metrics (measurements) and estimating tools have been developed for conventional systems since the 1960s, while there has been very little research on metrics for knowledge based systems (KBSs). Furthermore, although there are a number of theoretical problems with many of the 'classic' metrics developed for conventional systems, it also appears that the tools which such metrics can be used to develop are not widely used by project managers. A survey of large UK companies confirmed this continuing state of affairs. Before any useful tools could be developed, therefore, it was important to find out why project managers were not already using these tools. By characterising those companies that use software cost estimating (SCE) tools against those which could but do not, it was possible to recognise the involvement of the client/customer in the process of estimation. Pursuing this point, a model of the early estimating and planning stages (the EEPS model) was developed to test exactly where estimating takes place. The EEPS model suggests that estimating could take place either before a fully-developed plan has been produced, or while this plan is being produced. If it were the former, then SCE tools would be particularly useful, since there is very little other data available from which to produce an estimate. A second survey, however, indicated that project managers see estimating as being essentially the latter, at which point project management tools are available to support the process. It would seem, therefore, that SCE tools are not being used because project management tools are being used instead. The issue here is not with the method of developing an estimating model or tool, but with the way in which "an estimate" is intimately tied to an understanding of what tasks are being planned. Current SCE tools are perceived by project managers as targeting the wrong point of estimation. A model (called TABATHA) is then presented which describes how an estimating tool based on an analysis of tasks would fit into the planning stage. The issue of whether metrics can be usefully developed for hybrid systems (which also contain KBS components) is tested by extending a number of 'classic' program size and structure metrics to a KBS language, Prolog. Measurements of lines of code, Halstead's operators/operands, McCabe's cyclomatic complexity, Henry & Kafura's data-flow fan-in/out and post-release reported errors were taken for a set of 80 commercially-developed LPA Prolog programs. By redefining the metric counts for Prolog, it was found that estimates of program size and error-proneness comparable to those of the best conventional studies are possible. This suggests that metrics can be usefully applied to KBS languages such as Prolog, and thus that the development of metrics and models to support the development of hybrid information systems is both feasible and useful.
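
For reference, textbook statements of two of the 'classic' metrics extended to Prolog above (standard definitions, not the thesis's Prolog-specific counting rules):

```latex
% McCabe's cyclomatic complexity of a control-flow graph
% (E edges, N nodes, P connected components)
V(G) = E - N + 2P
% Halstead's length and volume, from distinct (\eta_1, \eta_2) and
% total (N_1, N_2) operators and operands
N = N_1 + N_2, \qquad \eta = \eta_1 + \eta_2, \qquad V = N \log_2 \eta
```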

Relevance: 30.00%

Abstract:

As a source or sink of reactive power, compensators can be made from a voltage-sourced inverter circuit with the a.c. terminals of the inverter connected to the system through an inductive link and with a capacitor connected across the d.c. terminals. Theoretical calculations on linearised models of the compensators have shown that the parameters characterising the performance are the reduced firing angle and the resonance ratio. The resonance ratio is the ratio of the natural frequency of oscillation of the energy storage components in the circuit to the system frequency. The reduced firing angle is the firing angle of the inverter divided by the damping coefficient, β, where β is half the R-to-X ratio of the link between the inverter and the system. The theoretical results have been verified by computer simulation and experiment. There is a narrow range of values for the resonance ratio, below which there is no appreciable improvement in performance despite an increase in the cost of the energy storage components, and above which the performance of the equipment is poor, with the current being dominated by harmonics. The harmonic performance of the equipment is improved by using multiple inverters and phase-shifting transformers to increase the pulse number. The optimum value of the resonance ratio increases with pulse number, indicating a reduction in the energy storage components needed at high pulse numbers. The reactive power output from the compensator varies linearly with the reduced firing angle, while the losses vary as its square.
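
Collecting the definitions given above in symbols, and assuming the energy storage components are the link inductance L and the d.c. capacitance C (with α the inverter firing angle), the two characterising parameters can be written as:

```latex
\beta = \frac{1}{2}\,\frac{R}{X}, \qquad
\alpha_{\mathrm{red}} = \frac{\alpha}{\beta}, \qquad
n = \frac{f_{\mathrm{nat}}}{f_{\mathrm{sys}}} = \frac{1}{2\pi f_{\mathrm{sys}}\sqrt{LC}}
```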

Relevance: 30.00%

Abstract:

Mineral wool insulation material applied to the primary cooling circuit of a nuclear reactor may be damaged in the course of a loss of coolant accident (LOCA). The insulation material released by the leak may compromise the operation of the emergency core cooling system (ECCS), as it may be transported, together with the coolant, in the form of mineral wool fiber agglomerate (MWFA) suspensions to the containment sump strainers, which are mounted at the inlet of the ECCS to keep any debris away from the emergency cooling pumps. In the further course of the LOCA, the MWFA may block or penetrate the strainers. In addition to the impact of MWFA on the pressure drop across the strainers, corrosion products formed over time may also accumulate in the fiber cakes on the strainers, which can lead to a significant increase in the strainer pressure drop and result in cavitation in the ECCS. It is therefore essential to understand the transport characteristics of the insulation materials in order to determine the long-term operability of nuclear reactors that undergo a LOCA. An experimental and theoretical study performed by the Helmholtz-Zentrum Dresden-Rossendorf and the Hochschule Zittau/Görlitz is investigating the phenomena that may be observed in the containment vessel during a primary circuit coolant leak. The study entails the generation of fiber agglomerates, the determination of their transport properties in single- and multi-effect experiments, and the long-term effects that particles formed by corrosion of metallic containment internals in the coolant medium have on the strainer pressure drop. The focus of this presentation is on the numerical models used to predict the transport of MWFA in CFD simulations. The MWFA can be represented by a number of pseudo-continuous dispersed phases of spherical wetted agglomerates. The size, the density, the relative viscosity of the fluid-fiber agglomerate mixture and the turbulent dispersion all affect how the fiber agglomerates are transported. In the cases described here, the size is kept constant while the density is modified; this definition affects both the terminal velocity and the volume fraction of the dispersed phases. Only one of the single-effect experimental scenarios used to validate the numerical models is described here: it examines the suspension and horizontal transport of the fiber agglomerates in a racetrack-type channel. The corresponding experiments will be described in an accompanying presentation (see the abstract of Seeliger et al.).
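
Since the agglomerates are modelled as spherical wetted particles whose density determines their terminal velocity, the standard Stokes settling law indicates how the modified agglomerate density ρ_p enters (a textbook relation for small spheres, not necessarily the drag model used in the CFD simulations):

```latex
v_t = \frac{(\rho_p - \rho_f)\, g\, d^2}{18\,\mu}
```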

Relevance: 30.00%

Abstract:

Cystic fibrosis (CF) is the most common lethal inherited disease among Caucasians and arises from mutations in a chloride channel called the cystic fibrosis transmembrane conductance regulator. A hallmark of this disease is chronic bacterial infection of the airways, usually associated with pathogens such as Pseudomonas aeruginosa and S. aureus, and increasingly with B. cepacia. The excessive inflammatory response, which leads to irreversible lung damage, will in the long term cause the death of the patient at around 40 years of age. Understanding the pathogenesis of CF currently relies on animal models, such as those employing genetically-modified mice, and on single-cell culture models, which are grown either as polarised or non-polarised epithelium in vitro. Whilst these approaches partially enable the study of disease progression in CF, both types of models have inherent limitations. The overall aim of this thesis was to establish a multicellular co-culture model of normal and CF human airways in vitro, which helps to partially overcome these limitations and permits analysis of cell-to-cell communication in the airways. These models could then be used to examine the co-ordinated response of the airways to infection with relevant pathogens, in order to validate this approach over animal/single-cell models. Epithelial cell lines of non-CF and CF background were therefore employed in a co-culture model together with human pulmonary fibroblasts. Co-cultures were grown on collagen-coated permeable supports at an air-liquid interface to promote epithelial cell differentiation. The models were characterised, and features essential for investigating CF infections and inflammatory responses were analysed. A pseudostratified-like epithelial cell layer was established at the air-liquid interface (ALI) of mono- and co-cultures, and cell layer integrity was verified by tight junction (TJ) staining and transepithelial resistance (TER) measurements. Mono- and co-cultures were also found to secrete the airway mucin MUC5AC. Investigating the influence of bacterial infection proved most challenging when intact S. aureus, B. cepacia and P. aeruginosa were used. CF mono- and co-cultures were found to mimic the hyperinflammatory state seen in CF, as confirmed by analysing IL-8 secretion in these models. These co-culture models will help to elucidate the role fibroblasts play in the inflammatory response to bacteria and will provide a useful testing platform to further investigate the dysregulated airway responses seen in CF.

Relevance: 30.00%

Abstract:

To ensure state synchronization of signaling operations, many signaling protocol designs choose to establish “soft” state that expires if it is not refreshed. The approaches to refreshing state in multi-hop signaling systems can be classified as either end-to-end (E2E) or hop-by-hop (HbH). Although both state refresh approaches have been widely used in practical signaling protocols, the design tradeoffs between state synchronization and signaling cost have not yet been fully investigated. In this paper, we investigate this issue from the perspectives of state refresh and state removal. We propose simple but effective Markov chain models for both approaches and obtain closed-form solutions which characterise the state refresh performance, in terms of state consistency and refresh message rate, as well as the state removal performance, in terms of state removal delay. Simulations verify the analytical models. It is observed that the HbH approach yields much better state synchronization than the E2E approach, at the cost of higher signaling overhead. While the state refresh performance can be improved by increasing the values of the state refresh and timeout timers, the state removal delay then increases considerably for both the E2E and HbH approaches. The analysis sheds light on the design of signaling protocols and the configuration of their timers to adapt to changing network conditions.
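
A minimal Monte Carlo sketch of the refresh/timeout trade-off analysed in the paper: refreshes sent every T_r seconds are lost independently with probability p_loss, and the receiver discards the state if no refresh arrives within timeout T_o. The parameter values are illustrative, not those of the paper's Markov chain models.

```python
import random

def consistency(T_r=5.0, T_o=15.0, p_loss=0.2, n_intervals=200_000, seed=0):
    """Fraction of refresh intervals in which the receiver still holds state."""
    rng = random.Random(seed)
    since_last = 0.0   # time since the last successfully received refresh
    held = 0
    for _ in range(n_intervals):
        if rng.random() >= p_loss:
            since_last = 0.0          # refresh arrived
        else:
            since_last += T_r         # refresh lost
        if since_last <= T_o:
            held += 1                 # state still consistent at the receiver
    return held / n_intervals

# Larger timeout timers improve consistency but slow state removal, as noted above.
print(f"state consistency ~ {consistency():.3f}")
```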

Relevance: 30.00%

Abstract:

Investment in capacity expansion remains one of the most critical decisions for a manufacturing organisation with global production facilities. Multiple factors need to be considered, making the decision process very complex. The purpose of this paper is to establish the state of the art in multi-factor models for capacity expansion of manufacturing plants within a corporation. The research programme, consisting of an extensive literature review and a structured assessment of the strengths and weaknesses of the current research, is presented. The study found that there is a wealth of mathematical multi-factor models for evaluating capacity expansion decisions; however, no single contribution captures all the different facets of the problem.