931 results for Essential-state models


Relevance:

30.00%

Publisher:

Abstract:

Objective: To explore the use of epidemiological modelling for the estimation of health effects of behaviour change interventions, using the example of computer-tailored nutrition education aimed at fruit and vegetable consumption in The Netherlands. Design: The effects of the intervention on changes in consumption were obtained from an earlier evaluation study. The effect on health outcomes was estimated using an epidemiological multi-state life table model. Input data for the model consisted of relative risk estimates for cardiovascular disease and cancers, data on disease occurrence and mortality, and survey data on the consumption of fruits and vegetables. Results: If the computer-tailored nutrition education reached the entire adult population and the effects were sustained, it could result in a mortality decrease of 0.4 to 0.7% and save 72 to 115 life-years per 100,000 persons aged 25 years or older. Healthy life expectancy is estimated to increase by 32.7 days for men and 25.3 days for women. The true effect is likely to lie between this theoretical maximum and zero, depending mostly on the durability of the behaviour change and the reach of the intervention. Conclusion: Epidemiological models can be used to estimate the health impact of health promotion interventions.
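To illustrate the life-table mechanics behind estimates of this kind, here is a minimal sketch comparing life expectancy under a baseline and an intervention mortality schedule. All rates, the Gompertz-like shape, and the 0.5% relative reduction are hypothetical placeholders, not the study's inputs.

```python
# Minimal life table sketch (hypothetical rates, not the study's data).
# A cohort moves from "alive" to "dead"; lowering age-specific mortality
# by a small relative amount yields a gain in life expectancy.

def life_expectancy(mortality, start_age=25):
    """Period life expectancy at start_age from a dict of age -> annual death rate."""
    survivors, person_years = 1.0, 0.0
    for age in sorted(a for a in mortality if a >= start_age):
        q = mortality[age]                       # annual probability of dying
        person_years += survivors * (1 - q / 2)  # assume deaths occur mid-year
        survivors *= (1 - q)
    return person_years

# Hypothetical baseline mortality rising roughly exponentially with age.
baseline = {age: 0.0005 * 1.09 ** (age - 25) for age in range(25, 101)}

# Intervention scenario: a sustained 0.5% relative mortality reduction.
intervention = {age: q * 0.995 for age, q in baseline.items()}

gain_years = life_expectancy(intervention) - life_expectancy(baseline)
print(f"Life expectancy gain: {gain_years * 365:.1f} days")
```

A full multi-state model would track disease states (e.g. cardiovascular disease, cancer) with their own incidence and case-fatality rates rather than a single mortality schedule; the structure of the calculation is the same.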

Relevance:

30.00%

Publisher:

Abstract:

We review the role of strong electronic correlations in quasi-two-dimensional organic charge transfer salts such as (BEDT-TTF)₂X, (BETS)₂Y, and β′-[Pd(dmit)₂]₂Z. We begin by defining minimal models for these materials. It is necessary to identify two classes of material: the first class is strongly dimerized and is described by a half-filled Hubbard model; the second class is not strongly dimerized and is described by a quarter-filled extended Hubbard model. We argue that these models capture the essential physics of these materials. We explore the phase diagram of the half-filled quasi-two-dimensional organic charge transfer salts, focusing on the metallic and superconducting phases. We review work showing that the metallic phase, which has both Fermi liquid and 'bad metal' regimes, is described both quantitatively and qualitatively by dynamical mean field theory (DMFT). The phenomenology of the superconducting state is still a matter of contention. We critically review the experimental situation, focusing on the key experimental results that may distinguish between rival theories of superconductivity, particularly probes of the pairing symmetry and measurements of the superfluid stiffness. We then discuss some strongly correlated theories of superconductivity, in particular the resonating valence bond (RVB) theory of superconductivity. We conclude by discussing some of the major challenges currently facing the field. These include parameterizing minimal models, the evidence for a pseudogap from nuclear magnetic resonance (NMR) experiments, superconductors with low critical temperatures and extremely small superfluid stiffnesses, the possible spin-liquid states in κ-(ET)₂Cu₂(CN)₃ and β′-[Pd(dmit)₂]₂Z, and the need for high-quality large single crystals.
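For reference, the two minimal models named above have their standard textbook forms; writing t for the hopping amplitude, U for the effective on-site Coulomb repulsion, and V for a nearest-neighbour repulsion (the notation is the conventional one, not taken from this abstract):

```latex
H_{\text{Hubbard}} = -t \sum_{\langle i,j\rangle,\sigma}
  \left( c^{\dagger}_{i\sigma} c_{j\sigma} + \text{h.c.} \right)
  + U \sum_{i} n_{i\uparrow} n_{i\downarrow}
```

The quarter-filled extended Hubbard model for the non-dimerized class adds a term $V \sum_{\langle i,j\rangle} n_i n_j$ to this Hamiltonian.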

Relevance:

30.00%

Publisher:

Abstract:

We describe methods for estimating the parameters of Markovian population processes in continuous time, thus increasing their utility in modelling real biological systems. A general approach, applicable to any finite-state continuous-time Markovian model, is presented, and this is specialised to a computationally more efficient method applicable to a class of models called density-dependent Markov population processes. We illustrate the versatility of both approaches by estimating the parameters of the stochastic SIS logistic model from simulated data. This model is also fitted to data from a population of Bay checkerspot butterfly (Euphydryas editha bayensis), allowing us to assess the viability of this population.
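A minimal sketch of what simulating the stochastic SIS logistic model looks like, using the standard Gillespie algorithm; the parameter values and function name are illustrative choices, not the paper's fitted estimates.

```python
import random

def gillespie_sis(beta, mu, N, n0, t_max, seed=1):
    """Simulate the stochastic SIS logistic model: n -> n+1 at rate beta*n*(1-n/N),
    n -> n-1 at rate mu*n. Returns the list of (time, n) jump points."""
    rng = random.Random(seed)
    t, n, path = 0.0, n0, [(0.0, n0)]
    while t < t_max and n > 0:
        up = beta * n * (1 - n / N)   # colonisation / infection
        down = mu * n                 # extinction / recovery
        total = up + down
        t += rng.expovariate(total)   # exponential waiting time to the next event
        n += 1 if rng.random() < up / total else -1
        path.append((t, n))
    return path

# Illustrative parameters (not the paper's estimates).
trajectory = gillespie_sis(beta=0.8, mu=0.5, N=200, n0=20, t_max=50)
print(f"{len(trajectory)} events; final population n = {trajectory[-1][1]}")
```

Parameter estimation then amounts to maximising the likelihood of an observed path over (beta, mu), which is tractable precisely because the model is a finite-state continuous-time Markov chain.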

Relevance:

30.00%

Publisher:

Abstract:

Deformable models are a highly accurate and flexible approach to segmenting structures in medical images. The primary drawback of deformable models is that they are sensitive to initialisation, with accurate and robust results often requiring initialisation close to the true object in the image. Automatically obtaining a good initialisation is problematic for many structures in the body. The cartilages of the knee are thin layers of elastic material that cover the ends of the bones, absorbing shock and allowing smooth movement. The degeneration of these cartilages characterizes the progression of osteoarthritis. The state of the art in cartilage segmentation consists of 2D semi-automated algorithms. These algorithms require significant time and supervision by a clinical expert, so the development of an automatic segmentation algorithm for the cartilages is an important clinical goal. In this paper we present an approach towards this goal that automatically provides a good initialisation for deformable models of the patella cartilage, by utilising the strong spatial relationship of the cartilage to the underlying bone.

Relevance:

30.00%

Publisher:

Abstract:

Chambers and Quiggin (2000) use state-contingent representations of risky production technologies to establish important theoretical results concerning producer behavior under uncertainty. Unfortunately, perceived problems in the estimation of state-contingent models have limited the usefulness of the approach in policy formulation. We show that fixed and random effects state-contingent production frontiers can be conveniently estimated in a finite mixtures framework. An empirical example is provided. Compared to conventional estimation approaches, we find that estimating production frontiers in a state-contingent framework produces significantly different estimates of elasticities, firm technical efficiencies and other quantities of economic interest.

Relevance:

30.00%

Publisher:

Abstract:

A significant problem with currently suggested approaches for transforming between models in different languages is that the transformation is often described imprecisely, with the result that the overall transformation task may be imprecise, incomplete and inconsistent. This paper presents a formal metamodeling approach for transforming between UML and Object-Z. In the paper, the two languages are defined in terms of their formal metamodels, and a systematic transformation between the models is provided at the meta-level in terms of formal mapping functions. As a consequence, we can provide a precise, consistent and complete transformation between them.
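As a toy sketch of the meta-level idea only (the paper defines the mapping formally over the full UML and Object-Z metamodels; the class and field names below are invented for illustration), a transformation can be expressed as a total function from instances of one metamodel to instances of the other:

```python
from dataclasses import dataclass, field

# Toy fragments of the two metamodels (hypothetical; the paper's are complete).

@dataclass
class UMLClass:                  # source metamodel element
    name: str
    attributes: dict[str, str] = field(default_factory=dict)  # name -> type

@dataclass
class OZSchema:                  # target metamodel element (Object-Z class schema)
    name: str
    state: dict[str, str] = field(default_factory=dict)       # declaration part

def map_class(c: UMLClass) -> OZSchema:
    """Meta-level mapping function: every UML class maps to an Object-Z class
    schema whose state schema declares the class's attributes."""
    return OZSchema(name=c.name, state=dict(c.attributes))

print(map_class(UMLClass("Account", {"balance": "Int"})))
```

Because the mapping is a total function on metamodel instances, precision, consistency and completeness of the transformation can be argued at the meta-level once, rather than per model.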

Relevance:

30.00%

Publisher:

Abstract:

Today, portable devices have become the driving force of the consumer market, and new challenges are emerging to increase their performance while maintaining a reasonable battery life. The digital domain is the best solution for implementing signal-processing functions, thanks to the scalability of CMOS technology, which pushes towards sub-micrometre integration. Indeed, the reduction of the supply voltage introduces severe limitations on achieving an acceptable dynamic range in the analogue domain. Lower cost, lower power consumption, higher yield and greater reconfigurability are the main advantages of signal processing in the digital domain. For more than a decade, several purely analogue functions have been moved into the digital domain. This means that analogue-to-digital converters (ADCs) are becoming the key components in many electronic systems. They are, in fact, the bridge between the analogue and digital worlds, and consequently their efficiency and accuracy often determine the overall performance of the system. Sigma-Delta converters are the key building block as the interface in high-resolution, low-power mixed-signal circuits. Modelling and simulation tools are effective and essential instruments in the design flow. Although transistor-level simulations give more precise and accurate results, this method is extremely time-consuming because of the oversampling nature of this type of converter. For this reason, high-level behavioural models of the modulator are essential for the designer to run fast simulations that identify the specifications the converter must meet to achieve the required performance. The objective of this thesis is the behavioural modelling of the Sigma-Delta modulator, taking into account several non-idealities such as the integrator dynamics and its thermal noise. Transistor-level simulation results and experimental data show that the proposed model is precise and accurate with respect to behavioural simulations.
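A minimal behavioural sketch in the spirit of such models: a first-order Sigma-Delta modulator whose integrator has a leak (modelling finite DC gain) and additive thermal noise. The structure and parameter values are illustrative assumptions, not the thesis's model.

```python
import random

def sigma_delta(u, leak=0.999, noise_rms=1e-4, seed=0):
    """First-order Sigma-Delta modulator: integrate the error between the input
    and the fed-back 1-bit output. `leak` models finite integrator DC gain and
    `noise_rms` its input-referred thermal noise (both hypothetical values)."""
    rng = random.Random(seed)
    state, bits = 0.0, []
    for x in u:
        fb = 1.0 if state >= 0 else -1.0              # 1-bit DAC feedback
        state = leak * state + (x - fb) + rng.gauss(0, noise_rms)
        bits.append(1 if state >= 0 else -1)
    return bits

# DC input: the average of the oversampled 1-bit stream tracks the input level.
bits = sigma_delta([0.3] * 20_000)
print(f"bitstream mean = {sum(bits) / len(bits):+.4f}")  # close to +0.3000
```

Running many such fast simulations while sweeping `leak` and `noise_rms` is how a behavioural model lets the designer back out the circuit specifications needed for a target resolution.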

Relevance:

30.00%

Publisher:

Abstract:

Increasing evidence suggests that tissue transglutaminase (tTGase; type II) is externalized from cells, where it may play a key role in cell attachment and spreading and in the stabilization of the extracellular matrix (ECM) through protein cross-linking. However, the relationship between these different functions and the enzyme's mechanism of secretion is not fully understood. We have investigated the role of tTGase in cell migration using two stably transfected fibroblast cell lines in which expression of tTGase in its active and inactive (C277S mutant) states is inducible through the tetracycline-regulated system. Cells overexpressing both forms of tTGase showed increased cell attachment and decreased cell migration on fibronectin. Both forms of the enzyme could be detected on the cell surface, but only the clone overexpressing catalytically active tTGase deposited the enzyme into the ECM and cell growth medium. Cells overexpressing the inactive form of tTGase did not deposit the enzyme into the ECM or secrete it into the cell culture medium. Similar results were obtained when cells were transfected with tTGase mutated at Tyr274 (Y274A), the proposed site for the cis,trans peptide bond, suggesting that tTGase activity and/or its tertiary conformation dependent on this bond may be essential for its externalization mechanism. These results indicate that tTGase regulates cell motility as a novel cell-surface adhesion protein rather than as a matrix-cross-linking enzyme. They also provide further important insights into the mechanism of externalization of the enzyme into the extracellular matrix.

Relevance:

30.00%

Publisher:

Abstract:

This thesis is concerned with Maine de Biran's and Samuel Taylor Coleridge's conceptions of will, and the way in which both thinkers' posterities have been affected by the central role of these very conceptions in their respective bodies of thought. The research question that animates this work can therefore be divided into two main parts, one of which deals with will, while the other deals with its effects on posterity. In the first pages of the Introduction, I make the case for a comparison between two philosophers, and show how this comparison can bring one closer to truth, understood not in objective, but in subjective terms. I then justify my choice by underlining that, in spite of their many differences, Maine de Biran and Samuel Taylor Coleridge followed comparable paths, intellectually and spiritually, and came to similar conclusions concerning the essential activity of the human mind. Finally, I ask whether it is possible that this very focus on the human will may have contributed to the state of both thinkers' works and of the reception of those works. This prologue is followed by five parts. In the first part, the similarities and differences between the two thinkers are explored further. In the second part, the connections between philosophy and singularity are examined, in order to show the ambivalence of the will as a foundation for truth. The third part is dedicated to the traditional division between subject and object in psychology, and its relevance in history and in moral philosophy. The fourth part tackles the complexity of the question of influence, with respect to both Maine de Biran's and Coleridge's cases, both thinkers being indebted to many philosophers of all times and places, and having to rely heavily on others for the publication, or the interpretation, of their own works. The fifth part is concerned with the different aspects of the faculty of will, and primarily its relationship with interiority, as incommensurability, and actual, conditioned existence in a certain historical and spatial context. It ends with a return to the question of will and posterity and an announcement of what will be covered in the main body of the thesis. The main body is divided into three parts: 'L'émancipation', 'L'affirmation', and 'La projection'. The first part is devoted to the way Maine de Biran and Samuel Taylor Coleridge extricated themselves from one epistemological paradigm to contribute to the foundation of another. It is divided into four chapters. The first chapter deals with the aforementioned change of paradigm, as corresponding to the emergence of two separate but associated movements, Romanticism and what the French philosopher refers to as 'The Age of History'. The second chapter concerns the movement that preceded them, i.e. the Enlightenment, its main features according to both of our thinkers, and the two epistemological models that prevailed under it and influenced them heavily in their early years: Sensationism (Maine de Biran) and Associationism (Coleridge). The third chapter is about the probable influence of Immanuel Kant and his followers on Maine de Biran and Coleridge, and the various facts that allow us to claim originality for both thinkers' works. In the fourth chapter, I contrast Maine de Biran and Coleridge with other movements and thinkers of their time, showing that, contrary to their respective thoughts, Maine de Biran and Coleridge could not but break free from the then prevailing systematic approach to truth.
The second part of the thesis is concerned with the first part of its research question, namely, Maine de Biran's and Coleridge's conceptions of the will. It is divided into four chapters. The first chapter is a reflection on the will as a paradox: on the one hand, the will cannot be caused by any other phenomenon, or it is no longer a will; but it cannot be left purely undetermined, as if it is, it is then no different from chance. It thus needs, in order to be, to be contradictorily already moral. The second chapter is a comparison between Maine de Biran's and Coleridge's accounts of the origin of the will, where it is found that the French philosopher only observes that he has a will, whereas the English philosopher postulates the existence of this will. The comparison between Maine de Biran's and Coleridge's conceptions of the will is pursued in the third chapter, which tackles the question of the coincidence between the will and the self in both thinkers' works. It ends with the fourth chapter, which deals with the question of the relationship between the will and what is other to it, i.e. bodily sensations, passions and desires. The third part of the thesis focuses on the second part of its research question, namely the posterity of Maine de Biran's and Coleridge's works. It is divided into four chapters. The first chapter constitutes a continuation of the last chapter of the preceding part, in that it deals with Maine de Biran's and Coleridge's relations to the 'other', and particularly their potential and actual audience, and with the way these relations may have affected their writing and publishing practices. The second chapter is a survey of both thinkers' general reception, where it is found that, while Maine de Biran has been claimed by two important movements of thought as their initiator, Coleridge has been neglected by the only real movement he could have, or may indeed have, pioneered. The third chapter is more directly concerned with the posterities of Maine de Biran's and Coleridge's conceptions of will, and attempts to show that the approach to, and the meaning of, the will evolved throughout the nineteenth century, in the French Spiritualist and the British Idealist movements, from an essentially personal one to a more impersonal one. The fourth chapter is a partial conclusion, whose aim is to give a precise idea of where Maine de Biran and Coleridge stand in relation to their century and to the philosophical movements and matters we are concerned with. The conclusion is a recapitulation of what has been found, with a particular emphasis on the dialogue initiated between Maine de Biran and Coleridge on the will, and the relation between will and posterity. It suggests that both thinkers have had to pay the price of a problematic reception for the individuality that pervades their respective works, and goes further in suggesting that s/he who chooses to found his/her individuality on the will is bound to feel this incompleteness in his/her own personal life more acutely than s/he who does not. It ends with a reflection on fixedness and movement, as the two antagonistic states that the theoretician of the will paradoxically aspires to.

Relevance:

30.00%

Publisher:

Abstract:

This research is concerned with the development of distributed real-time systems, in which software is used for the control of concurrent physical processes. These distributed control systems are required to periodically coordinate the operation of several autonomous physical processes, with the property of an atomic action. The implementation of this coordination must be fault-tolerant if the integrity of the system is to be maintained in the presence of processor or communication failures. Commit protocols have been widely used to provide this type of atomicity and ensure consistency in distributed computer systems. The objective of this research is the development of a class of robust commit protocols applicable to the coordination of distributed real-time control systems. Extended forms of the standard two-phase commit protocol, providing fault-tolerant and real-time behaviour, were developed. Petri nets are used for the design of the distributed controllers, and to embed the commit protocol models within these controller designs. This composition of controller and protocol model allows the analysis of the complete system in a unified manner. A common problem for Petri net based techniques is that of state space explosion; a modular approach to both design and analysis helps to cope with this problem. Although extensions to Petri nets that allow module construction exist, the modularisation is generally restricted to the specification, and analysis must be performed on the (flat) detailed net. The Petri net designs for the type of distributed systems considered in this research are both large and complex. The top-down, bottom-up and hybrid synthesis techniques that are used to model large systems in Petri nets are considered. A hybrid approach to Petri net design for a restricted class of communicating processes is developed. Designs produced using this hybrid approach are modular and allow re-use of verified modules. In order to use this form of modular analysis, it is necessary to project an equivalent but reduced behaviour onto the modules used. These projections conceal events local to modules that are not essential for the purpose of analysis. To generate the external behaviour, each firing sequence of the subnet is replaced by an atomic transition internal to the module, and the firing of these transitions transforms the input and output markings of the module. Thus local events are concealed through the projection of the external behaviour of modules. This hybrid design approach preserves properties of interest, such as boundedness and liveness, while the systematic concealment of local events allows the management of the state space. The approach presented in this research is particularly suited to distributed systems, as the underlying communication model is used as the basis for the interconnection of modules in the design procedure. This hybrid approach is applied to the Petri net based design and analysis of distributed controllers for two industrial applications that incorporate the robust, real-time commit protocols developed. Temporal Petri nets, which combine Petri nets and temporal logic, are used to capture and verify causal and temporal aspects of the designs in a unified manner.
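For orientation, a minimal sketch of the decision logic of the standard two-phase commit protocol that these extended forms build on (plain classes rather than the thesis's Petri net models; the names are invented, and the timeout handling and real-time bounds the thesis adds are omitted):

```python
from enum import Enum

class Vote(Enum):
    YES = "yes"
    NO = "no"

class Participant:
    """A process controlling one autonomous physical subsystem."""
    def __init__(self, name, can_commit):
        self.name, self.can_commit = name, can_commit

    def prepare(self) -> Vote:                # phase 1: vote on the action
        return Vote.YES if self.can_commit else Vote.NO

def two_phase_commit(participants) -> bool:
    """Phase 1 collects votes; phase 2 commits only on unanimity, so the
    coordinated action is atomic: either all commit or all abort."""
    votes = [p.prepare() for p in participants]
    decision = all(v is Vote.YES for v in votes)
    for p in participants:                    # phase 2: broadcast the decision
        print(f"{p.name}: {'COMMIT' if decision else 'ABORT'}")
    return decision

two_phase_commit([Participant("valve", True), Participant("pump", False)])
```

The robustness questions the thesis addresses arise precisely where this sketch is silent: what each party does when a vote or decision message never arrives within its real-time deadline.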

Relevance:

30.00%

Publisher:

Abstract:

This preliminary report describes work carried out as part of work package 1.2 of the MUCM research project. The report is split into two parts: the first part (Sections 1 and 2) summarises the state of the art in emulation of computer models, while the second presents some initial work on the emulation of dynamic models. In the first part, we describe the basics of emulation, introduce the notation and put together the key results for the emulation of models with single and multiple outputs, with or without the use of a mean function. In the second part, we present preliminary results on the chaotic Lorenz 63 model. We look at emulation of a single time step, and repeated application of the emulator for sequential prediction. After some design considerations, the emulator is compared with the exact simulator on a number of runs to assess its performance. Several general issues related to emulating dynamic models are raised and discussed. Current work on the larger Lorenz 96 model (40 variables) is presented in the context of dimension reduction, with results to be provided in a follow-up report. The notation used in this report is summarised in the appendix.
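A minimal sketch of the single-step-plus-iteration idea on Lorenz 63: fit a zero-mean Gaussian-process regression to one-step simulator runs, then apply the fitted emulator repeatedly. The Euler discretisation, squared-exponential kernel, trajectory design and hand-picked hyperparameters are simplifying assumptions for illustration, not the report's setup.

```python
import numpy as np

def lorenz63_step(s, dt=0.01, sigma=10.0, rho=28.0, beta=8/3):
    """One explicit-Euler step of Lorenz 63: the 'simulator' being emulated."""
    x, y, z = s
    return s + dt * np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

def k(A, B, length=2.0):
    """Squared-exponential covariance with a hand-picked length-scale."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / length ** 2)

# Design: states along a simulator run are the training inputs; the outputs
# are the corresponding next states (one GP per output, shared kernel).
X = np.empty((400, 3))
s = np.array([1.0, 1.0, 20.0])
for i in range(400):
    X[i] = s
    s = lorenz63_step(s)
Y = np.array([lorenz63_step(x) for x in X])

alpha = np.linalg.solve(k(X, X) + 1e-6 * np.eye(len(X)), Y)

def emulate_step(s):
    """Posterior-mean one-step prediction."""
    return (k(s[None, :], X) @ alpha)[0]

# Sequential prediction: iterate the emulator and compare with the simulator.
s_em = s_sim = X[0]
for _ in range(30):
    s_em, s_sim = emulate_step(s_em), lorenz63_step(s_sim)
print("emulator: ", s_em.round(3))
print("simulator:", s_sim.round(3))
```

The interesting questions the report raises live beyond this sketch: how emulator uncertainty should be propagated through the iteration, and how quickly small one-step errors grow on a chaotic system.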

Relevance:

30.00%

Publisher:

Abstract:

The equation of state for dense fluids has been derived within the framework of the Sutherland and Katz potential models. The equation quantitatively agrees with experimental data on the isothermal compression of water when extrapolated into the high-pressure region. It establishes an explicit relationship between the thermodynamic experimental data and the effective parameters of the molecular potential.
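For reference, the Sutherland model combines a hard core of diameter σ with an attractive inverse-power tail of depth ε; it is written here in its common form with exponent 6, though the exponent is sometimes left general (the abstract does not specify the variant used):

```latex
U(r) =
\begin{cases}
\infty, & r < \sigma, \\[4pt]
-\varepsilon \left( \dfrac{\sigma}{r} \right)^{6}, & r \ge \sigma.
\end{cases}
```

Fitting an equation of state built on this potential to compression data is what ties the measured thermodynamics to the effective parameters σ and ε.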

Relevance:

30.00%

Publisher:

Abstract:

A recent method for phase equilibria, the AGAPE method, has been used to predict activity coefficients and excess Gibbs energy for binary mixtures with good accuracy. The theory, based on a generalised London potential (GLP), accounts for intermolecular attractive forces. Unlike existing prediction methods, for example UNIFAC, the AGAPE method uses only information derived from accessible experimental data and molecular information for pure components. Presently, the AGAPE method has some limitations, namely that the mixtures must consist of small, non-polar compounds with no hydrogen bonding, at low to moderate pressures and at conditions below the critical conditions of the components. The distinction between vapour-liquid equilibria and gas-liquid solubility is rather arbitrary, and it seems reasonable to extend these ideas to solubility. The AGAPE model uses a molecular lattice-based mixing rule. By judicious use of computer programs, a methodology was created to examine a body of experimental gas-liquid solubility data for gases such as carbon dioxide, propane, n-butane or sulphur hexafluoride, which all have critical temperatures a little above 298 K, dissolved in benzene, cyclohexane and methanol. Within this methodology the value of the GLP as an ab initio combining rule for such solutes in very dilute solutions in a variety of liquids has been tested. Using the GLP as a mixing rule involves the computation of rotationally averaged interactions between the constituent atoms, and new calculations have had to be made to discover the magnitude of the unlike-pair interactions. These numbers have been seen as significant in their own right in the context of the behaviour of infinitely dilute solutions. A method for extending this treatment to "permanent" gases has also been developed. The findings from the GLP method and from the more general AGAPE approach have been examined in the context of other models for gas-liquid solubility, both "classical" and contemporary, in particular those derived from equations-of-state methods and from reference-solvent methods.
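For orientation, the classic London expression that a generalised London potential builds on gives the attractive dispersion energy between two species A and B in terms of their polarisabilities α and ionisation energies I (this is the standard textbook form, not AGAPE's generalisation):

```latex
U_{AB}(r) = -\frac{3}{2}\,\frac{I_A I_B}{I_A + I_B}\,\frac{\alpha_A \alpha_B}{r^{6}}
```

Because it needs only pure-component quantities for A and B, an expression of this form can serve as an ab initio combining rule for the unlike-pair interactions discussed above.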

Relevance:

30.00%

Publisher:

Abstract:

The research described here concerns the development of metrics and models to support the development of hybrid (conventional/knowledge based) integrated systems. The thesis starts from the observation that, although it is well known that estimating the cost, duration and quality of information systems is a difficult task, it is far from clear what sorts of tools and techniques would adequately support a project manager in the estimation of these properties. A literature review shows that metrics (measurements) and estimating tools have been developed for conventional systems since the 1960s, while there has been very little research on metrics for knowledge based systems (KBSs). Furthermore, although there are a number of theoretical problems with many of the `classic' metrics developed for conventional systems, it also appears that the tools which such metrics can be used to develop are not widely used by project managers. A survey of large UK companies was carried out, which confirmed this continuing state of affairs. Before any useful tools could be developed, therefore, it was important to find out why project managers were not using the existing tools. By characterising those companies that use software cost estimating (SCE) tools against those which could but do not, it was possible to recognise the involvement of the client/customer in the process of estimation. Pursuing this point, a model of the early estimating and planning stages (the EEPS model) was developed to test exactly where estimating takes place. The EEPS model suggests that estimating could take place either before a fully-developed plan has been produced, or while this plan is being produced. If it were the former, then SCE tools would be particularly useful, since there is very little other data available from which to produce an estimate. A second survey, however, indicated that project managers see estimating as being essentially the latter, at which point project management tools are available to support the process. It would seem, therefore, that SCE tools are not being used because project management tools are being used instead. The issue here is not with the method of developing an estimating model or tool, but with the way in which "an estimate" is intimately tied to an understanding of what tasks are being planned; current SCE tools are perceived by project managers as targeting the wrong point of estimation. A model (called TABATHA) is then presented which describes how an estimating tool based on an analysis of tasks would fit into the planning stage. The issue of whether metrics can be usefully developed for hybrid systems (which also contain KBS components) is tested by extending a number of `classic' program size and structure metrics to a KBS language, Prolog. Measurements of lines of code, Halstead's operators/operands, McCabe's cyclomatic complexity, Henry & Kafura's data flow fan-in/out and post-release reported errors were taken for a set of 80 commercially-developed LPA Prolog programs. By redefining the metric counts for Prolog, it was found that estimates of program size and error-proneness comparable to the best conventional studies are possible. This suggests that metrics can be usefully applied to KBS languages such as Prolog, and thus that the development of metrics and models to support the development of hybrid information systems is both feasible and useful.
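As a toy illustration of what redefining metric counts for Prolog involves, the sketch below computes naive size counts for a Prolog source string. The counting rules are simplistic placeholders in the spirit of lines-of-code and Halstead's counts (the thesis's redefinitions are more careful), and the function name is invented.

```python
import re

def prolog_size_metrics(source: str) -> dict:
    """Naive size counts for Prolog source: non-comment lines of code,
    clauses (crudely, one '.' per clause), and a rough operator/operand
    split in the spirit of Halstead's counts."""
    code = re.sub(r"%.*", "", source)                  # strip line comments
    lines = [l for l in code.splitlines() if l.strip()]
    clauses = code.count(".")
    operators = re.findall(r":-|;|,|->|=|\\\+|\(|\)", code)
    operands = re.findall(r"[a-z]\w*|[A-Z_]\w*|\d+", code)
    return {"LOC": len(lines), "clauses": clauses,
            "operators": len(operators), "unique_operands": len(set(operands))}

program = """
ancestor(X, Y) :- parent(X, Y).          % base case
ancestor(X, Y) :- parent(X, Z), ancestor(Z, Y).
"""
print(prolog_size_metrics(program))
```

The substantive work lies in the definitions themselves, e.g. deciding that a Prolog "operator" includes `:-` and `;` while variables and atoms count as operands, and then validating the resulting counts against reported errors.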

Relevance:

30.00%

Publisher:

Abstract:

As a source or sink of reactive power, compensators can be made from a voltage-sourced inverter circuit with the a.c. terminals of the inverter connected to the system through an inductive link and with a capacitor connected across the d.c. terminals. Theoretical calculations on linearised models of the compensators have shown that the parameters characterising the performance are the reduced firing angle and the resonance ratio. The resonance ratio is the ratio of the natural frequency of oscillation of the energy storage components in the circuit to the system frequency. The reduced firing angle is the firing angle of the inverter divided by the damping coefficient, β, where β is half the R-to-X ratio of the link between the inverter and the system. The theoretical results have been verified by computer simulation and experiment. There is a narrow range of values for the resonance ratio below which there is no appreciable improvement in performance, despite an increase in the cost of the energy storage components, and above which the performance of the equipment is poor, with the current dominated by harmonics. The harmonic performance of the equipment is improved by using multiple inverters and phase-shifting transformers to increase the pulse number. The optimum value of the resonance ratio increases with pulse number, indicating a reduction in the energy storage components needed at high pulse numbers. The reactive power output from the compensator varies linearly with the reduced firing angle, while the losses vary as its square.
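Collecting the two characterising parameters as formulas (the symbols are chosen here for illustration; the abstract does not fix a notation):

```latex
n = \frac{f_{\text{res}}}{f_{\text{sys}}}, \qquad
\alpha_{r} = \frac{\alpha}{\beta}, \qquad
\beta = \frac{1}{2}\,\frac{R}{X},
```

where f_res is the natural frequency of the energy-storage components, f_sys the system frequency, α the inverter firing angle, and R/X the resistance-to-reactance ratio of the inductive link.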