27 results for Time-sharing computer systems
Abstract:
A major application of computers has been to control physical processes in which the computer is embedded within some large physical process and is required to control concurrent physical processes. The main difficulty with these systems is their event-driven characteristics, which complicate their modelling and analysis. Although a number of researchers in the process system community have approached the problems of modelling and analysis of such systems, there is still a lack of standardised software development formalisms for the system (controller) development, particularly at the early stages of the system design cycle. This research forms part of a larger research programme concerned with the development of real-time process-control systems in which software is used to control concurrent physical processes. The general objective of the research in this thesis is to investigate the use of formal techniques in the analysis of such systems at the early stages of their development, with a particular bias towards application to high-speed machinery. Specifically, the research aims to generate a standardised software development formalism for real-time process-control systems, particularly for software controller synthesis. In this research, a graphical modelling formalism called Sequential Function Chart (SFC), a variant of Grafcet, is examined. SFC, which is defined in the international standard IEC 1131 as a graphical description language, has been used widely in industry and has achieved an acceptable level of maturity and acceptance. A comparative study between SFC and Petri nets is presented in this thesis. To overcome identified inaccuracies in SFC, a formal definition of the firing rules for SFC is given. To provide a framework in which SFC models can be analysed formally, an extended time-related Petri net model for SFC is proposed and the transformation method is defined. The SFC notation lacks a systematic way of synthesising system models from real-world systems. Thus a standardised approach to the development of real-time process-control systems is required such that the system (software) functional requirements can be identified, captured and analysed. A rule-based approach and a method called the system behaviour driven method (SBDM) are proposed as a development formalism for real-time process-control systems.
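The firing-rule formalisation and the SFC-to-Petri-net mapping mentioned above are easier to picture with a toy place/transition net. The sketch below gives a minimal, illustrative firing rule in Python under the usual token-game semantics; it is not the thesis's formal SFC definition, and all names are invented for the example.

```python
# Minimal place/transition-net firing rule, illustrating the kind of formal
# semantics onto which an SFC step/transition structure can be mapped.
# The step names and weights are illustrative, not the thesis's notation.

def enabled(marking, pre):
    """A transition is enabled when every input place holds enough tokens."""
    return all(marking.get(p, 0) >= w for p, w in pre.items())

def fire(marking, pre, post):
    """Fire an enabled transition: consume input tokens, produce output tokens."""
    if not enabled(marking, pre):
        raise ValueError("transition not enabled")
    m = dict(marking)
    for p, w in pre.items():
        m[p] -= w
    for p, w in post.items():
        m[p] = m.get(p, 0) + w
    return m

# Example: a two-step sequence (step1) --t--> (step2)
marking = {"step1": 1, "step2": 0}
t_pre, t_post = {"step1": 1}, {"step2": 1}
print(fire(marking, t_pre, t_post))   # {'step1': 0, 'step2': 1}
```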
Abstract:
Queueing theory is an effective tool in the analysis of computer communication systems. Many results in queueing analysis have been derived in the form of Laplace and z-transform expressions. Accurate inversion of these transforms is very important in the study of computer systems, but the inversion is very often difficult. In this thesis, methods for solving some of these queueing problems by use of digital signal processing techniques are presented. The z-transform of the queue length distribution for the M/G^Y/1 system is derived. Two numerical methods for the inversion of the transform, together with the standard numerical technique for solving transforms with multiple queue-state dependence, are presented. Bilinear and Poisson transform sequences are presented as useful ways of representing continuous-time functions in numerical computations.
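As an illustration of the signal-processing flavour of such inversions, the sketch below numerically inverts a probability generating function by sampling it on a circle and applying a discrete Fourier transform. This is a generic, well-known technique shown only as a sketch; it is not necessarily either of the methods developed in the thesis, and the M/M/1 test function is used purely because its coefficients are known in closed form.

```python
# Generic numerical inversion of a probability generating function (z-transform
# of a discrete distribution) via sampling on |z| = r and a DFT.
import numpy as np

def invert_pgf(P, N=256, r=0.9):
    """Recover p_0 .. p_{N-1} from P(z) = sum_n p_n z**n."""
    k = np.arange(N)
    z = r * np.exp(2j * np.pi * k / N)      # sample points on the circle |z| = r
    samples = np.array([P(zk) for zk in z])
    coeffs = np.fft.fft(samples) / N        # DFT recovers p_n * r**n (up to aliasing)
    return (coeffs / r ** np.arange(N)).real

rho = 0.6
P = lambda z: (1 - rho) / (1 - rho * z)     # M/M/1 queue-length PGF as a test case
p = invert_pgf(P)
print(p[:4])    # approx [0.4, 0.24, 0.144, 0.0864] = (1 - rho) * rho**n
```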
Abstract:
This research is concerned with the development of distributed real-time systems in which software is used for the control of concurrent physical processes. These distributed control systems are required to periodically coordinate the operation of several autonomous physical processes, with the property of an atomic action. The implementation of this coordination must be fault-tolerant if the integrity of the system is to be maintained in the presence of processor or communication failures. Commit protocols have been widely used to provide this type of atomicity and ensure consistency in distributed computer systems. The objective of this research is the development of a class of robust commit protocols applicable to the coordination of distributed real-time control systems. Extended forms of the standard two-phase commit protocol, which provide fault-tolerant and real-time behaviour, were developed. Petri nets are used for the design of the distributed controllers, and to embed the commit protocol models within these controller designs. This composition of controller and protocol model allows the analysis of the complete system in a unified manner. A common problem for Petri net based techniques is that of state space explosion; a modular approach to both design and analysis helps to cope with this problem. Although extensions to Petri nets that allow module construction exist, the modularisation is generally restricted to the specification, and analysis must be performed on the (flat) detailed net. The Petri net designs for the type of distributed systems considered in this research are both large and complex. The top-down, bottom-up and hybrid synthesis techniques that are used to model large systems in Petri nets are considered. A hybrid approach to Petri net design for a restricted class of communicating processes is developed. Designs produced using this hybrid approach are modular and allow re-use of verified modules. In order to use this form of modular analysis, it is necessary to project an equivalent but reduced behaviour onto the modules used. These projections conceal events local to modules that are not essential for the purpose of analysis. To generate the external behaviour, each firing sequence of the subnet is replaced by an atomic transition internal to the module, and the firing of these transitions transforms the input and output markings of the module. Thus local events are concealed through the projection of the external behaviour of modules. This hybrid design approach preserves properties of interest, such as boundedness and liveness, while the systematic concealment of local events allows the management of state space. The approach presented in this research is particularly suited to distributed systems, as the underlying communication model is used as the basis for the interconnection of modules in the design procedure. The hybrid approach is applied to the Petri net based design and analysis of distributed controllers for two industrial applications that incorporate the robust, real-time commit protocols developed. Temporal Petri nets, which combine Petri nets and temporal logic, are used to capture and verify causal and temporal aspects of the designs in a unified manner.
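For readers unfamiliar with commit protocols, the following Python sketch shows the baseline (non-fault-tolerant, non-real-time) two-phase commit that the extended protocols build on; the participant class and its voting interface are illustrative assumptions, not the protocol models developed in the thesis, which also handle timeouts and failures.

```python
# Baseline two-phase commit sketch: phase 1 collects votes, phase 2 broadcasts
# the global decision. Illustrative only; no failure or timeout handling.

class Participant:
    def __init__(self, ready=True):
        self.ready, self.outcome = ready, None
    def prepare(self):
        # Phase 1: vote on whether this site can commit its local work.
        return "COMMIT" if self.ready else "ABORT"
    def decide(self, decision):
        # Phase 2: adopt the coordinator's global decision.
        self.outcome = decision

def two_phase_commit(participants):
    votes = [p.prepare() for p in participants]           # phase 1: collect votes
    decision = "COMMIT" if all(v == "COMMIT" for v in votes) else "ABORT"
    for p in participants:
        p.decide(decision)                                 # phase 2: broadcast decision
    return decision

print(two_phase_commit([Participant(), Participant()]))             # COMMIT
print(two_phase_commit([Participant(), Participant(ready=False)]))  # ABORT
```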
Abstract:
This thesis is a study of performance management of Complex Event Processing (CEP) systems. CEP systems have characteristics distinct from other well-studied computer systems, such as batch and online transaction processing systems and database-centric applications, and these characteristics introduce new challenges and opportunities for the performance management of CEP systems. The methodologies used to benchmark CEP systems in many performance studies focus on scaling the load injection but do not consider the impact of the functional capabilities of CEP systems. This thesis proposes an approach that evaluates the performance of CEP engines' functional behaviours on events and develops a benchmark platform for CEP systems: CEPBen. The CEPBen benchmark platform is developed to explore the fundamental functional performance of event processing systems: filtering, transformation and event pattern detection. It is also designed to provide a flexible environment for exploring new metrics and influential factors for CEP systems and for evaluating the performance of CEP systems. Studies on factors and new metrics are carried out using the CEPBen benchmark platform on Esper. Different measurement points of response time in the performance management of CEP systems are discussed, and the response time of targeted events is proposed as a quality-of-service metric, used alongside the traditional response time in CEP systems. Maximum query load is proposed as a capacity indicator with respect to the complexity of queries, and the number of live objects in memory as a performance indicator with respect to memory management. Query depth is studied as a performance factor that influences CEP system performance.
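The "response time of targeted events" idea can be pictured with a small calculation: latency is aggregated only over events that actually lead to a detection, rather than over every injected event. The field names and the percentile choice below are assumptions made for this sketch, not part of CEPBen.

```python
# Latency is measured only for "targeted" events (those producing a detection).
from statistics import mean

def targeted_response_times(events):
    """events: dicts with 'injected_at' and 'detected_at' (None if the event
    was filtered out and never produced a detection)."""
    latencies = sorted(e["detected_at"] - e["injected_at"]
                       for e in events if e["detected_at"] is not None)
    p95 = latencies[round(0.95 * (len(latencies) - 1))]   # nearest-rank 95th percentile
    return {"count": len(latencies), "mean": mean(latencies), "p95": p95}

sample = [{"injected_at": 0.0, "detected_at": 4.2},
          {"injected_at": 1.0, "detected_at": None},       # filtered, not targeted
          {"injected_at": 2.0, "detected_at": 7.5},
          {"injected_at": 3.0, "detected_at": 5.1}]
print(targeted_response_times(sample))
```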
Abstract:
The CONNECT European project that started in February 2009 aims at dropping the interoperability barrier faced by today’s distributed systems. It does so by adopting a revolutionary approach to the seamless networking of digital systems, that is, synthesizing on the fly the connectors via which networked systems communicate.
Abstract:
With the advent of distributed computer systems with a largely transparent user interface, new questions have arisen regarding the management of such an environment by an operating system. One fertile area of research is that of load balancing, which attempts to improve system performance by redistributing the workload submitted to the system by the users. Early work in this field concentrated on static placement of computational objects to improve performance, given prior knowledge of process behaviour. More recently this has evolved into studying dynamic load balancing with process migration, thus allowing the system to adapt to varying loads. In this thesis, we describe a simulated system which facilitates experimentation with various load balancing algorithms. The system runs under UNIX and provides functions for user processes to communicate through software ports; processes reside on simulated homogeneous processors, connected by a user-specified topology, and a mechanism is included to allow migration of a process from one processor to another. We present the results of a study of adaptive load balancing algorithms, conducted using the aforementioned simulated system, under varying conditions; these results show the relative merits of different approaches to the load balancing problem, and we analyse the trade-offs between them. Following from this study, we present further novel modifications to suggested algorithms, and show their effects on system performance.
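As a flavour of the kind of policy such a simulated system can exercise, the sketch below implements a toy threshold-based, sender-initiated migration step; processor counts, thresholds and probe limits are illustrative only and do not reproduce the thesis's algorithms.

```python
# Toy sender-initiated load balancing step: overloaded processors probe a few
# peers and migrate one job to the least-loaded peer found below the threshold.
import random

def sender_initiated_step(loads, threshold=4, probes=2):
    for src, load in enumerate(loads):
        if load <= threshold:
            continue                                   # not overloaded, do nothing
        peers = [i for i in range(len(loads)) if i != src]
        candidates = random.sample(peers, probes)      # limited probing, not global state
        target = min(candidates, key=lambda i: loads[i])
        if loads[target] < threshold:
            loads[src] -= 1                            # migrate one job
            loads[target] += 1
    return loads

random.seed(1)
print(sender_initiated_step([7, 1, 5, 0]))
```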
Abstract:
The computer systems of today are characterised by data and program control that are distributed functionally and geographically across a network. A major issue of concern in this environment is the operating system activity of resource management for the different processors in the network. To ensure equity in load distribution and improved system performance, load balancing is often undertaken. The research conducted in this field so far has been primarily concerned with a small set of algorithms operating on tightly-coupled distributed systems. More recent studies have investigated the performance of such algorithms in loosely-coupled architectures, but using a small set of processors. This thesis describes a simulation model developed to study the behaviour and general performance characteristics of a range of dynamic load balancing algorithms. Further, the scalability of these algorithms is discussed and a range of regionalised load balancing algorithms developed. In particular, we examine the impact of network diameter and delay on the performance of such algorithms across a range of system workloads. The results produced suggest that the performance of simple dynamic policies is scalable, but that they lack the load stability of more complex global average algorithms.
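The contrast between global-average and regionalised policies can be sketched as follows: the regional variant computes its reference load only over a bounded neighbourhood, which is what makes it scalable at the cost of a less accurate global view. The region layout and loads below are invented for the example and are not taken from the thesis.

```python
# Flag processors whose load deviates from a reference average, using either a
# global average or per-region averages as the reference.

def imbalance_against_average(loads, avg, tolerance=1.0):
    """Indices whose load deviates from the reference average by more than the
    tolerance; these are candidates for migration decisions."""
    return [i for i, l in enumerate(loads) if abs(l - avg) > tolerance]

loads = [8, 2, 5, 1, 9, 3]
global_avg = sum(loads) / len(loads)
regions = [(0, 3), (3, 6)]                    # two regions of three processors each
for lo, hi in regions:
    region = loads[lo:hi]
    regional_avg = sum(region) / len(region)
    flagged = [lo + i for i in imbalance_against_average(region, regional_avg)]
    print(f"region {lo}-{hi - 1}: avg={regional_avg:.1f}, flagged={flagged}")
print("global:", imbalance_against_average(loads, global_avg))
```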
River basin surveillance using remotely sensed data: a water resources information management system
Abstract:
This thesis describes the development of an operational river basin water resources information management system. The river or drainage basin is the fundamental unit of the system, both in the modelling and prediction of hydrological processes and in the monitoring of the effect of catchment management policies. A primary concern of the study is the collection of sufficient, and sufficiently accurate, information to model hydrological processes. Remote sensing, in combination with conventional point source measurement, can be a valuable source of information, but is often overlooked by hydrologists due to the cost of acquisition and processing. This thesis describes a number of cost-effective methods of acquiring remotely sensed imagery, from airborne video survey to real-time ingestion of meteorological satellite data. Inexpensive micro-computer systems and peripherals are used throughout to process and manipulate the data. Spatial information systems provide a means of integrating these data with topographic and thematic cartographic data, and with historical records. For the system to have any real potential, the data must be stored in a readily accessible format and be easily manipulated within the database. The design of efficient man-machine interfaces and the use of software engineering methodologies are therefore included in this thesis as a major part of the design of the system. The use of low-cost technologies, from micro-computers to video cameras, enables the introduction of water resources information management systems into developing countries, where the potential benefits are greatest.
Abstract:
Just-in-time (JIT) production systems are increasingly being seen as a vital way for manufacturing organizations to enhance their competitiveness. A number of commentators have suggested that this will simplify jobs and reduce employee well-being. This paper presents a conceptual framework for interpreting the effects of JIT and reports findings from a study of the impact of JIT on the content of workers' jobs and on job satisfaction and psychological strain. The introduction of JIT led to a reduction in control over work timing, an increase in production pressure, and a drop in job satisfaction. Contrary to claims in the literature, no changes were found in control over work methods, in other aspects of cognitive demands and skill use, or in psychological strain. The study shows that JIT can be implemented without radical changes in job content or adverse impact in terms of employee strain, and the implications of these findings are discussed.
Abstract:
Requirements are sensitive to the context in which the system-to-be must operate. Where such context is well-understood and is static or evolves slowly, existing RE techniques can be made to work well. Increasingly, however, development projects are being challenged to build systems to operate in contexts that are volatile over short periods in ways that are imperfectly understood. Such systems need to be able to adapt to new environmental contexts dynamically, but the contextual uncertainty that demands this self-adaptive ability makes it hard to formulate, validate and manage their requirements. Different contexts may demand different requirements trade-offs. Unanticipated contexts may even lead to entirely new requirements. To help counter this uncertainty, we argue that requirements for self-adaptive systems should be run-time entities that can be reasoned over in order to understand the extent to which they are being satisfied and to support adaptation decisions that can take advantage of the systems' self-adaptive machinery. We take our inspiration from the fact that explicit, abstract representations of software architectures used to be considered design-time-only entities but computational reflection showed that architectural concerns could be represented at run-time too, helping systems to dynamically reconfigure themselves according to changing context. We propose to use analogous mechanisms to achieve requirements reflection. In this paper we discuss the ideas that support requirements reflection as a means to articulate some of the outstanding research challenges.
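A minimal sketch of what "requirements as run-time entities" might look like is given below: a requirement object carries a satisfaction criterion that is re-evaluated against fresh monitoring data and can be queried by the adaptation logic. The class, predicate and thresholds are assumptions for illustration, not an API proposed in the paper.

```python
# A requirement reified as a run-time entity that tracks its own satisfaction.

class RuntimeRequirement:
    def __init__(self, name, predicate):
        self.name = name
        self.predicate = predicate          # maps monitored context -> bool
        self.history = []

    def evaluate(self, context):
        satisfied = self.predicate(context)
        self.history.append(satisfied)
        return satisfied

    def satisfaction_rate(self, window=10):
        recent = self.history[-window:]
        return sum(recent) / len(recent) if recent else None

# Example: a response-time requirement monitored as the context changes.
req = RuntimeRequirement("respond_within_2s", lambda ctx: ctx["latency"] <= 2.0)
for latency in (1.2, 1.8, 2.6, 1.1):
    req.evaluate({"latency": latency})
print(req.satisfaction_rate())              # 0.75 -> may trigger an adaptation decision
```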
Abstract:
The behaviour of self-adaptive systems can be emergent. The difficulty in predicting the system's behaviour means that there is scope for the system to surprise its customers and its developers. Because its behaviour is emergent, a self-adaptive system needs to garner confidence in its customers and it needs to resolve any surprise on the part of the developer during testing and maintenance. We believe that these two functions can only be achieved if a self-adaptive system is also capable of self-explanation. We argue that a self-adaptive system's behaviour needs to be explained in terms of satisfaction of its requirements. Since self-adaptive system requirements may themselves be emergent, a means needs to be found to explain the current behaviour of the system and the reasons that brought that behaviour about. We propose the use of goal-based models during runtime to offer self-explanation of how a system is meeting its requirements, and why the means of meeting these were chosen. We discuss the results of early experiments in self-explanation, and set out future work.
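One way to picture goal-based self-explanation is an OR-refined goal that records which alternative was selected and why, so the running system can answer both "how is this requirement being met?" and "why this way?". The sketch below is illustrative only; the goal names, utilities and wording are invented rather than taken from the paper.

```python
# An OR-refined goal that remembers its chosen means and the reason for the
# choice, so it can produce a self-explanation string at runtime.

class Goal:
    def __init__(self, name, alternatives):
        self.name = name
        self.alternatives = alternatives       # {means: estimated utility}
        self.chosen, self.reason = None, None

    def choose(self, context_note):
        self.chosen = max(self.alternatives, key=self.alternatives.get)
        self.reason = context_note
        return self.chosen

    def explain(self):
        return (f"Goal '{self.name}' is met by '{self.chosen}' "
                f"(chosen because {self.reason}).")

g = Goal("maintain_availability",
         {"restart_service": 0.6, "fail_over_to_replica": 0.9})
g.choose("the primary node reported repeated timeouts")
print(g.explain())
```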
Abstract:
Reading scientific articles is more time-consuming than reading news because readers need to search for and read many citations. This paper proposes a citation-guided method for summarizing multiple scientific papers. A phenomenon we can observe is that citation sentences in one paragraph or section usually talk about a common fact, which is usually represented as a set of noun phrases co-occurring in citation texts and is usually discussed from different aspects. We design a multi-document summarization system based on common fact detection. One challenge is that citations may not use the same terms to refer to a common fact. We thus use a term association discovery algorithm to expand terms based on a large set of scientific article abstracts. Citations can then be clustered based on common facts. The common fact is used as a salient term set to retrieve relevant sentences from the corresponding cited articles to form a summary. Experiments show that our method outperforms three baseline methods under the ROUGE metric.
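A highly simplified sketch of the common-fact idea: each citation's terms are expanded through a term-association table, and citations are greedily grouped when their expanded term sets overlap sufficiently. The association table, threshold and greedy grouping below are stand-ins for the paper's method, which learns associations from a large collection of abstracts.

```python
# Group citation sentences by a shared "common fact" (overlapping term sets),
# after expanding each citation's terms with associated terms.

ASSOCIATED = {"lda": {"topic model"}, "topic model": {"lda"}}   # assumed, toy table

def expanded_terms(terms):
    out = set(terms)
    for t in terms:
        out |= ASSOCIATED.get(t, set())
    return out

def cluster_by_common_fact(citations, min_overlap=2):
    """Greedily group citations whose expanded term sets share enough terms."""
    clusters = []
    for cid, terms in citations.items():
        terms = expanded_terms(terms)
        for cluster in clusters:
            if len(cluster["fact"] & terms) >= min_overlap:
                cluster["members"].append(cid)
                cluster["fact"] &= terms          # keep only the shared fact
                break
        else:
            clusters.append({"fact": set(terms), "members": [cid]})
    return clusters

citations = {"c1": {"lda", "gibbs sampling"},
             "c2": {"topic model", "gibbs sampling"},
             "c3": {"dependency parsing", "treebank"}}
print(cluster_by_common_fact(citations))   # c1 and c2 share a fact; c3 stands alone
```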
Abstract:
We investigate the use of Gallager's low-density parity-check (LDPC) codes in a degraded broadcast channel, one of the fundamental models in network information theory. Combining linear codes is a standard technique in practical network communication schemes and is known to provide better performance than simple time sharing methods when algebraic codes are used. The statistical physics based analysis shows that the practical performance of the suggested method, achieved by employing the belief propagation algorithm, is superior to that of LDPC based time sharing codes while the best performance, when received transmissions are optimally decoded, is bounded by the time sharing limit.
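The belief propagation component common to both the proposed scheme and LDPC-based time sharing can be sketched as a standard sum-product decoder. The example below decodes a single bit flip on a tiny (7,4) parity-check code over a binary symmetric channel; it does not reproduce the paper's broadcast-channel construction or its statistical-physics analysis.

```python
# Compact sum-product (belief propagation) decoder for a small binary code.
import numpy as np

def bp_decode(H, llr, iters=20):
    """H: parity-check matrix (m x n); llr: channel log-likelihood ratios."""
    m, n = H.shape
    msg_vc = np.tile(llr, (m, 1)) * H                      # variable-to-check messages
    for _ in range(iters):
        # Check-node update (tanh rule), leaving out each edge's own message.
        t = np.tanh(np.clip(msg_vc, -20, 20) / 2)
        t[H == 0] = 1.0
        prod = np.prod(t, axis=1, keepdims=True)
        ratio = np.clip(prod / np.where(t == 0, 1e-12, t), -0.999999, 0.999999)
        msg_cv = 2 * np.arctanh(ratio) * H
        # Variable-node update and tentative hard decision.
        total = llr + msg_cv.sum(axis=0)
        msg_vc = (total - msg_cv) * H
        hard = (total < 0).astype(int)
        if not np.any(H @ hard % 2):                       # syndrome zero: valid codeword
            break
    return hard

# (7,4) Hamming-style parity checks; codeword 1011010 with its first bit flipped.
H = np.array([[1, 1, 0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0, 1, 0],
              [0, 1, 1, 1, 0, 0, 1]])
p = 0.1                                                    # BSC crossover probability
received = np.array([0, 0, 1, 1, 0, 1, 0])
llr = np.log((1 - p) / p) * (1 - 2 * received)
print(bp_decode(H, llr))                                   # recovers [1 0 1 1 0 1 0]
```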
Abstract:
Most current 3D landscape visualisation systems either use bespoke hardware solutions or offer a limited amount of interaction and detail when used in real-time mode. We are developing a modular, data-driven 3D visualisation system that can be readily customised to specific requirements. By utilising the latest software engineering methods and bringing a dynamic data-driven approach to geo-spatial data visualisation, we will deliver an unparalleled level of customisation in near-photorealistic, real-time 3D landscape visualisation. In this paper we show the system framework and describe how it employs data-driven techniques. In particular, we discuss how data-driven approaches are applied to the spatio-temporal management aspect of the application framework, and describe the advantages these convey.