835 results for Computer networks -- Simulation methods
Abstract:
Real-time control programs are often used in contexts where (conceptually) they run forever. Repetitions within such programs (or their specifications) may (i) be guaranteed to terminate, (ii) be guaranteed never to terminate (loop forever), or (iii) possibly terminate. In dealing with real-time programs and their specifications, we need to be able to represent these possibilities and define suitable refinement orderings. A refinement ordering based on Dijkstra's weakest precondition copes only with the first alternative. Weakest liberal preconditions allow one to constrain behaviour provided the program terminates, which copes with the third alternative to some extent. However, neither of these handles the case when a program does not terminate. To handle this case a refinement ordering based on relational semantics can be used. In this paper we explore these issues and the definition of loops for real-time programs, as well as corresponding refinement laws.
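For orientation, the classical (non-real-time) refinement ordering induced by Dijkstra's weakest precondition, and its relationship to the weakest liberal precondition, are usually stated as below; this is the textbook formulation rather than the paper's real-time calculus.

```latex
% T refines S under the wp ordering iff T establishes every
% postcondition that S is guaranteed to establish:
S \sqsubseteq T \;\Longleftrightarrow\; \forall Q.\, \big( wp(S,Q) \Rightarrow wp(T,Q) \big)

% The weakest liberal precondition constrains behaviour only on
% termination; the two are related by the standard pairing identity:
wp(S,Q) \;\equiv\; wlp(S,Q) \wedge wp(S,\mathit{true})
```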
Abstract:
Object-Z allows coupling constraints between classes which, on the one hand, facilitate specification at a high level of abstraction, but, on the other hand, make class refinement non-compositional. The consequence of this is that refinement is not practical for large systems. This paper overcomes this limitation by introducing a methodology for compositional class refinement in Object-Z. The key step is an equivalence transformation of an arbitrary Object-Z specification to one in which introduced constraints prohibit non-compositional refinements. The methodology also allows the constraints which couple classes to be refined, yielding an unrestricted approach to compositional class refinement.
Abstract:
In this tutorial paper we summarise the key features of the multi-threaded Qu-Prolog language for implementing multi-threaded communicating agent applications. Internal threads of an agent communicate using the shared dynamic database, which acts as a generalisation of a Linda tuple store. Threads in different agents, perhaps on different hosts, communicate using either a thread-to-thread store-and-forward communication system, or a publish-and-subscribe mechanism in which messages are routed to their destinations based on content-test subscriptions. We illustrate the features using an auction house application. This is fully distributed, with multiple auctioneers and bidders participating in simultaneous auctions. The application makes essential use of the three forms of inter-thread communication of Qu-Prolog. The agent bidding behaviour is specified graphically as a finite state automaton, and its implementation is essentially the execution of its state transition function. The paper assumes familiarity with Prolog and the basic concepts of multi-agent systems.
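The bidding behaviour is described above as the execution of a finite state automaton's transition function; a minimal, language-neutral sketch of that idea is given below (illustrative only, not Qu-Prolog code; the states, messages and transition table are invented).

```python
# Hypothetical bidding automaton: states, messages and transitions are
# illustrative, not taken from the Qu-Prolog auction-house application.
TRANSITIONS = {
    ("idle", "auction_open"): "bidding",
    ("bidding", "outbid"): "bidding",        # agent would respond with a higher bid
    ("bidding", "won"): "idle",
    ("bidding", "auction_closed"): "idle",
}

def run_agent(messages, state="idle"):
    """Drive the agent by repeatedly applying the state transition function."""
    for msg in messages:
        state = TRANSITIONS.get((state, msg), state)  # ignore irrelevant messages
    return state

print(run_agent(["auction_open", "outbid", "won"]))  # -> idle
```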
Abstract:
For determining functionality dependencies between two proteins, both represented as 3D structures, it is an essential condition that they have one or more matching structural regions called patches. As 3D structures for proteins are large, complex and constantly evolving, it is computationally expensive and very time-consuming to identify possible locations and sizes of patches for a given protein against a large protein database. In this paper, we address a vector space based representation for protein structures, where a patch is formed by the vectors within the region. Based on our previous work, a compact representation of the patch, named the patch signature, is applied here. A similarity measure of two patches is then derived based on their signatures. To achieve fast patch matching in large protein databases, a match-and-expand strategy is proposed. Given a query patch, a set of small k-sized matching patches, called candidate patches, is generated in the match stage. The candidate patches are further filtered by enlarging k in the expand stage. Our extensive experimental results demonstrate encouraging performance on this biologically critical but previously computationally prohibitive problem.
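The match-and-expand strategy can be pictured with the following minimal sketch; the patch representation and the `similar` test are placeholders for illustration, not the paper's patch-signature definition.

```python
# Sketch of the two-stage search: patches are modelled crudely as lists of
# vector magnitudes; the similarity test is an assumption for illustration.

def similar(query, patch, k, tol=1.0):
    """Placeholder test: compare the first k entries of each patch."""
    qs, ps = query[:k], patch[:k]
    return len(qs) == len(ps) == k and all(abs(a - b) <= tol for a, b in zip(qs, ps))

def match_and_expand(query, database, k0=3, k_max=8):
    # Match stage: small k0-sized matches give the candidate patches.
    candidates = [p for p in database if similar(query, p, k0)]
    # Expand stage: enlarge k step by step, filtering candidates that stop matching.
    for k in range(k0 + 1, k_max + 1):
        candidates = [p for p in candidates if similar(query, p, k)]
    return candidates

db = [[1.0, 2.1, 3.0, 4.2, 5.1, 6.0, 7.2, 8.1],
      [9.0, 9.5, 9.9, 9.1, 8.8, 8.2, 7.7, 7.0]]
print(match_and_expand([1.1, 2.0, 3.2, 4.0, 5.0, 6.1, 7.0, 8.0], db))
```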
Abstract:
This paper presents a framework for compositional verification of Object-Z specifications. Its key feature is a proof rule based on decomposition of hierarchical Object-Z models. For each component in the hierarchy, local properties are proven in a single proof step. However, we do not consider components in isolation. Instead, components are envisaged in the context of the referencing super-component, and proof steps involve assumptions on properties of the sub-components. The framework is defined for Linear Temporal Logic (LTL).
Abstract:
In this paper we describe an approach to interface Abstract State Machines (ASM) with Multiway Decision Graphs (MDG) to enable tool support for the formal verification of ASM descriptions. ASM is a specification method for software and hardware providing a powerful means of modeling various kinds of systems. MDGs are decision diagrams based on abstract representation of data and are used primarily for modeling hardware systems. The notions of ASM and MDG are hence closely related to each other, making it appealing to link these two concepts. The proposed interface between ASM and MDG uses two steps: first, the ASM model is transformed into a flat, simple transition system as an intermediate model. Second, this intermediate model is transformed into the syntax of the input language of the MDG tool, MDG-HDL. We have successfully applied this transformation scheme to a case study, the Island Tunnel Controller, where we automatically generated the corresponding MDG-HDL models from ASM specifications.
Abstract:
In this paper we review recent theoretical approaches for analysing the dynamics of on-line learning in multilayer neural networks using methods adopted from statistical physics. The analysis is based on monitoring a set of macroscopic variables from which the generalisation error can be calculated. A closed set of dynamical equations for the macroscopic variables is derived analytically and solved numerically. The theoretical framework is then employed for defining optimal learning parameters and for analysing the incorporation of second order information into the learning process using natural gradient descent and matrix-momentum based methods. We will also briefly explain an extension of the original framework for analysing the case where training examples are sampled with repetition.
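For reference, the natural gradient descent update referred to above has the standard generic form below, where F is the Fisher information matrix, \eta the learning rate and E the training error; the paper's specific parameterisation may differ.

```latex
\mathbf{w}_{t+1} \;=\; \mathbf{w}_t \;-\; \eta\, F(\mathbf{w}_t)^{-1}\, \nabla_{\mathbf{w}} E(\mathbf{w}_t)
```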
Abstract:
The thesis presents a two-dimensional Risk Assessment Method (RAM) where the assessment of risk to the groundwater resources incorporates both the quantification of the probability of occurrence of contaminant source terms and the assessment of the resultant impacts. The approach emphasizes a greater dependency on the potential pollution sources, rather than the traditional approach where assessment is based mainly on intrinsic geo-hydrologic parameters. The risk is calculated using Monte Carlo simulation methods whereby random pollution events are generated according to the distribution of historically occurring events or an a priori probability distribution. Integrated mathematical models then simulate contaminant concentrations at predefined monitoring points within the aquifer. The spatial and temporal distributions of the concentrations are calculated from repeated realisations, and the number of times a user-defined concentration magnitude is exceeded is quantified as a risk. The method was set up by integrating MODFLOW-2000, MT3DMS and a FORTRAN-coded risk model, and automated using a DOS batch processing file. GIS software was employed in producing the input files and for the presentation of the results. The functionalities of the method, as well as its sensitivities to the model grid sizes, contaminant loading rates, length of stress periods, and the historical frequencies of occurrence of pollution events, were evaluated using hypothetical scenarios and a case study. Chloride-related pollution sources were compiled and used as indicative potential contaminant sources for the case study. At any active model cell, if a randomly generated number is less than the probability of pollution occurrence, the risk model generates a synthetic contaminant source term as an input into the transport model. The results of the applications of the method are presented in the form of tables, graphs and spatial maps. Varying the model grid sizes indicates no significant effects on the simulated groundwater head. The simulated frequency of daily occurrence of pollution incidents is also independent of the model dimensions. However, the simulated total contaminant mass generated within the aquifer, and the associated volumetric numerical error, appear to increase with increasing grid sizes. Also, the migration of the contaminant plume advances faster with coarse grid sizes than with finer grid sizes. The number of daily contaminant source terms generated, and consequently the total mass of contaminant within the aquifer, increases in non-linear proportion to the increasing frequency of occurrence of pollution events. The risk of pollution from a number of sources all occurring by chance together was evaluated and presented quantitatively as risk maps. This capability to combine the risk to a groundwater feature from numerous potential sources of pollution proved to be a great asset of the method, and a significant advantage over contemporary risk and vulnerability methods.
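The Monte Carlo step described above (draw a random number per active cell and generate a synthetic source term whenever it falls below the occurrence probability) can be sketched as follows. This is a toy illustration: the cell list, probabilities, loading values and exceedance threshold are invented, and the crude accumulation stands in for the MODFLOW-2000/MT3DMS flow and transport simulation.

```python
import random

def simulate_risk(cells, p_event, threshold, n_realisations=500, n_days=365):
    """Fraction of realisations in which each cell's accumulated load exceeds
    a user-defined threshold (the accumulation is a stand-in for the
    flow/transport models used in the actual method)."""
    exceedances = {c: 0 for c in cells}
    for _ in range(n_realisations):
        load = {c: 0.0 for c in cells}
        for _ in range(n_days):
            for c in cells:
                # Generate a synthetic source term when a random number falls
                # below the cell's probability of pollution occurrence.
                if random.random() < p_event[c]:
                    load[c] += random.uniform(0.5, 2.0)  # illustrative loading rate
        for c in cells:
            if load[c] > threshold:
                exceedances[c] += 1
    return {c: exceedances[c] / n_realisations for c in cells}

print(simulate_risk(["cell_1", "cell_2"], {"cell_1": 0.01, "cell_2": 0.05}, threshold=10.0))
```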
Abstract:
In multilevel analyses, problems may arise when using Likert-type scales at the lowest level of analysis. Specifically, increases in variance should lead to greater censoring for the groups whose true scores fall at either end of the distribution. The current study used simulation methods to examine the influence of single-item Likert-type scale usage on ICC(1), ICC(2), and group-level correlations. Results revealed substantial underestimation of ICC(1) when using Likert-type scales with common response formats (e.g., 5 points). ICC(2) and group-level correlations were also underestimated, but to a lesser extent. Finally, the magnitude of underestimation was driven in large part by an interaction between Likert-type scale usage and the amounts of within- and between-group variance. © Sage Publications.
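A sketch of the kind of simulation described above is shown below: continuous group-nested scores are generated, coarsened onto a 5-point response format, and ICC(1) is computed from a one-way ANOVA both before and after coarsening. The variance components and sample sizes are arbitrary choices for illustration.

```python
import numpy as np

def icc1(scores):
    """ICC(1) from a one-way ANOVA on a (groups x members) array."""
    n_groups, k = scores.shape
    group_means = scores.mean(axis=1)
    grand_mean = scores.mean()
    msb = k * ((group_means - grand_mean) ** 2).sum() / (n_groups - 1)
    msw = ((scores - group_means[:, None]) ** 2).sum() / (scores.size - n_groups)
    return (msb - msw) / (msb + (k - 1) * msw)

rng = np.random.default_rng(0)
groups, members = 200, 10
group_effect = rng.normal(0.0, 1.0, size=(groups, 1))                 # between-group variance
latent = group_effect + rng.normal(0.0, 2.0, size=(groups, members))  # within-group noise
likert = np.clip(np.round(latent) + 3, 1, 5)                           # censor onto a 1-5 scale

print("ICC(1) on latent scores:", round(icc1(latent), 3))
print("ICC(1) on 5-point scores:", round(icc1(likert), 3))
```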
Abstract:
The Fibre Distributed Data Interface (FDDI) represents the new generation of local area networks (LANs). These high speed LANs are capable of supporting up to 500 users over a 100 km distance. User traffic is expected to be as diverse as file transfers, packet voice and video. As the proliferation of FDDI LANs continues, the need to interconnect these LANs arises. FDDI LAN interconnection can be achieved in a variety of different ways. Some of the most commonly used today are public data networks, dial-up lines and private circuits. For applications that can potentially generate large quantities of traffic, such as an FDDI LAN, it is cost effective to use a private circuit leased from the public carrier. In order to send traffic from one LAN to another across the leased line, a routing algorithm is required. Much research has been done on the Bellman-Ford algorithm and many implementations of it exist in computer networks. However, due to its instability and problems with routing table loops, it is an unsatisfactory algorithm for interconnected FDDI LANs. A new algorithm, termed ISIS, which is being standardized by the ISO, provides a far better solution. ISIS will be implemented in many manufacturers' routing devices. In order to make the work as practical as possible, this algorithm will be used as the basis for all the new algorithms presented. The ISIS algorithm can be improved by exploiting information that is dropped by that algorithm during the calculation process. A new algorithm, called Down Stream Path Splits (DSPS), uses this information and requires only minor modification to some of the ISIS routing procedures. DSPS provides higher network performance with very little additional processing and storage requirements. A second algorithm, also based on the ISIS algorithm, generates a massive increase in network performance. This is achieved by selecting alternative paths through the network in times of heavy congestion. This algorithm may select the alternative path at either the originating node or any node along the path. It requires more processing and memory storage than DSPS, but generates a higher network power. The final algorithm combines the DSPS algorithm with the alternative path algorithm. This is the most flexible and powerful of the algorithms developed. However, it is somewhat complex and requires a fairly large storage area at each node. The performance of the new routing algorithms is tested in a comprehensive model of interconnected LANs. This model incorporates the transport through physical layers and generates random topologies for routing algorithm performance comparisons. Using this model it is possible to determine which algorithm provides the best performance without introducing significant complexity and storage requirements.
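ISIS, like other link-state schemes, is built on a shortest-path-first calculation over the network topology. The Dijkstra sketch below shows that underlying calculation only; the DSPS and alternative-path extensions developed in the thesis are not reproduced here, and the example topology is invented.

```python
import heapq

def spf(graph, source):
    """Shortest-path-first over a link-state graph: {node: {neighbour: cost}}."""
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue  # stale queue entry
        for v, cost in graph[u].items():
            nd = d + cost
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

# Three interconnected LANs joined by leased lines of differing cost (illustrative).
topology = {"A": {"B": 1, "C": 4}, "B": {"A": 1, "C": 2}, "C": {"A": 4, "B": 2}}
print(spf(topology, "A"))  # -> {'A': 0, 'B': 1, 'C': 3}
```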
Abstract:
Experimental and theoretical methods have been used to study zeolite structures, properties and applications as membranes for separation purposes. Thin layers of silicalite-1 and Na-LTA zeolites have been synthesised onto carbon-graphite supports using a hydrothermal synthesis procedure. The separation behaviour of the composite membranes was characterized by gas permeation studies of pure, binary and ternary mixtures of methane, ethane and propane. The influence of temperature and feed gas mixture composition on the separation and selectivity performance of the membranes was also investigated. It was found that the silicalite-1 composite membranes synthesised onto the 4-hour oxidized carbon-graphite supports showed the most promising separation behaviour of all the composite membranes investigated. Molecular simulation methods were used to gain an understanding of how hydrocarbon molecules behave both within the pores and on the surfaces of silicalite-1, mordenite and LTA zeolites. Molecular dynamics simulations were used to investigate the influence of temperature and molecular loadings on the diffusional behaviour of hydrocarbons in zeolites. Both hydroxylated (surface termination with hydroxyl groups) and non-hydroxylated silicalite-1 and Na-mordenite surfaces were generated. For both zeolites the most stable surface corresponds to the {010} surface. For the silicalite-1 {010} surface, the adsorption of hydrocarbons and molecular water onto the hydroxylated surface showed a favourable exothermic adsorption process compared to adsorption on the non-hydroxylated surface. With the Na-mordenite {010} surface, the adsorption of hydrocarbons onto both the hydroxylated and non-hydroxylated surfaces gave a combination of favourable and unfavourable adsorption energies, while the adsorption of molecular water onto both types of surface was found to be a favourable adsorption process.
Abstract:
Fault tree analysis is used as a tool within hazard and operability (Hazop) studies. The present study proposes a new methodology for obtaining the exact TOP event probability of coherent fault trees. The technique uses a top-down approach similar to that of FATRAM. This new Fault Tree Disjoint Reduction Algorithm resolves all the intermediate events in the tree except OR gates with basic event inputs, so that a near-minimal cut sets expression is obtained. Then Bennetts' disjoint technique is applied and the remaining OR gates are resolved. The technique has been found to be appropriate as an alternative to Monte Carlo simulation methods when rare events are encountered and exact results are needed. The algorithm has been developed in FORTRAN 77 on the Perq workstation as an addition to the Aston Hazop package. The Perq graphical environment enabled a friendly user interface to be created. The total package takes as its input cause and symptom equations using Lihou's form of coding, and produces both drawings of fault trees and the Boolean sum-of-products expression into which reliability data can be substituted directly.
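The point of reducing the tree to a disjoint (mutually exclusive) sum-of-products form is that reliability data can then be substituted directly: the TOP event probability is simply the sum of the product-term probabilities. A minimal sketch, with invented event names and probabilities, is:

```python
def top_event_probability(disjoint_terms, p):
    """disjoint_terms: list of product terms, each a list of (event, negated)
    literals; p: basic-event probabilities. Terms must be mutually exclusive."""
    total = 0.0
    for term in disjoint_terms:
        term_prob = 1.0
        for event, negated in term:
            term_prob *= (1.0 - p[event]) if negated else p[event]
        total += term_prob
    return total

# TOP = A OR B, written disjointly as A + (not A)*B:
terms = [[("A", False)], [("A", True), ("B", False)]]
print(top_event_probability(terms, {"A": 0.01, "B": 0.02}))  # 0.01 + 0.99*0.02 = 0.0298
```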
Abstract:
A literature review of work carried out on batch and continuous chromatographic biochemical reactor-separators has been made. The major part of this work has involved the development of a batch chromatographic reactor-separator for the production of dextran and fructose by the action of the enzyme dextransucrase on sucrose. In this reactor, simultaneous reaction and separation occur, thus reducing downstream processing and isolation of products as compared to the existing industrial process. The chromatographic reactor consisted of a glass column packed with a stationary phase consisting of cross-linked polystyrene resin in the calcium form. The mobile phase consisted of diluted dextransucrase in deionised water. Initial experiments were carried out on a reactor-separator which had an internal diameter of 0.97 cm and a length of 1.5 m. To study the effect of scale-up, the reactor diameter was doubled to 1.94 cm and the length increased to 1.75 m. The results have shown that the chromatographic reactor uses more enzyme than a conventional batch reactor for a given conversion of sucrose, and that an increase in void volume results in higher conversions of sucrose. A comparison of the molecular weight distribution of dextran produced by the chromatographic reactor was made with that from a conventional batch reactor. The results have shown that the chromatographic reactor produces 30% more dextran of molecular weight greater than 150,000 daltons at 20% w/v sucrose concentration than conventional reactors. This is because some of the fructose molecules are prevented from acting as acceptors in the chromatographic reactor due to their removal from the reaction zone. In the conventional reactor this is not possible, and therefore a greater proportion of low molecular weight dextran is produced, which does not have much clinical use. A theoretical model was developed to describe the behaviour of the reactor-separator, and this model was simulated using a computer. The simulation predictions showed good agreement with experimental results at high eluent flowrates and low conversions.
Abstract:
Computer-based simulation is frequently used to evaluate the capabilities of proposed manufacturing system designs. Unfortunately, the real systems are often found to perform quite differently from simulation predictions, and one possible reason for this is an over-simplistic representation of workers' behaviour within current simulation techniques. The accuracy of design predictions could be improved through a modelling tool that integrates with computer-based simulation and incorporates the factors and relationships that determine workers' performance. This paper explores the viability of developing such a tool based on our previously published theoretical modelling framework. It focuses on evolving this purely theoretical framework towards a practical modelling tool that can actually be used to expand the capabilities of current simulation techniques. Based on an industrial study, the paper investigates how the theoretical framework works in practice, analyses strengths and weaknesses in its formulation, and proposes developments that can contribute towards enabling human performance modelling in a practical way.