12 results for distributed work
in Aston University Research Archive
Abstract:
This research describes the development of a groupware system which adds security services to a Computer Supported Cooperative Work system operating over the Internet. The security services use cryptographic techniques to provide a secure access control service and an information protection service. These security services are implemented as protection layers for the groupware system, called the External Security Layer (ESL) and the Internal Security Layer (ISL) respectively. The security services are sufficiently flexible to allow the groupware system to operate in both synchronous and asynchronous modes. The groupware system developed, known as Secure Software Inspection Groupware (SecureSIG), provides security for a distributed group performing software inspection. SecureSIG extends previous work on developing flexible software inspection groupware (FlexSIG) (Sahibuddin, 1999). The SecureSIG model extends the FlexSIG model, and the prototype system was added to the FlexSIG prototype. The prototype was built by integrating existing software, communication and cryptography tools and technology. The Java Cryptography Extension (JCE) and Internet technology were used to build the prototype. To test the suitability and transparency of the system, an evaluation was conducted, and a questionnaire was used to assess user acceptability.
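As a rough illustration of the two-layer idea described above, the sketch below separates an outer access-control check from an inner document-encryption step. The class names, the use of HMAC-signed tokens, and the Fernet cipher are assumptions made for the example; SecureSIG itself was built in Java using the Java Cryptography Extension (JCE).

```python
# Illustrative sketch only: an outer access-control layer and an inner
# information-protection layer, loosely analogous to ESL and ISL.
import hmac
import hashlib
from cryptography.fernet import Fernet  # assumed available: pip install cryptography

class ExternalSecurityLayer:
    """Outer layer: admits only group members holding a valid HMAC-signed token."""
    def __init__(self, secret: bytes):
        self._secret = secret

    def issue_token(self, user: str) -> str:
        return hmac.new(self._secret, user.encode(), hashlib.sha256).hexdigest()

    def verify(self, user: str, token: str) -> bool:
        return hmac.compare_digest(self.issue_token(user), token)

class InternalSecurityLayer:
    """Inner layer: encrypts the inspection documents exchanged by the group."""
    def __init__(self):
        self._cipher = Fernet(Fernet.generate_key())

    def protect(self, document: bytes) -> bytes:
        return self._cipher.encrypt(document)

    def reveal(self, blob: bytes) -> bytes:
        return self._cipher.decrypt(blob)

# Usage: a member must pass the outer check before the inner layer releases plaintext.
esl = ExternalSecurityLayer(secret=b"shared-group-secret")
isl = InternalSecurityLayer()
token = esl.issue_token("alice")
ciphertext = isl.protect(b"inspection comments on module X")
if esl.verify("alice", token):
    print(isl.reveal(ciphertext))
```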
Abstract:
Distributed digital control systems provide alternatives to conventional, centralised digital control systems. Typically, a modern distributed control system will comprise a multi-processor or network of processors, a communications network, an associated set of sensors and actuators, and the systems and applications software. This thesis addresses the problem of how to design robust decentralised control systems, such as those used to control event-driven, real-time processes in time-critical environments. Emphasis is placed on studying the dynamical behaviour of a system and identifying ways of partitioning the system so that it may be controlled in a distributed manner. A structural partitioning technique is adopted which makes use of natural physical sub-processes in the system, which are then mapped into the software processes that control the system. However, communications are required between the processes because of the disjoint nature of the distributed (i.e. partitioned) state of the physical system. The structural partitioning technique, and recent developments in the theory of potential controllability and observability of a system, are the basis for the design of controllers. In particular, the method is used to derive a decentralised estimate of the state vector for a continuous-time system. The work is also extended to derive a distributed estimate for a discrete-time system. Emphasis is also given to the role of communications in the distributed control of processes and to the partitioning technique necessary to design distributed and decentralised systems with resilient structures. A method is presented for the systematic identification of the communications necessary for distributed control. It is also shown that the structural partitions can be used directly in the design of software fault-tolerant concurrent controllers. In particular, the structural partition can be used to identify the boundary of the conversation which can be used to protect a specific part of the system. In addition, for certain classes of system, the partitions can be used to identify processes which may be dynamically reconfigured in the event of a fault. These methods should be of use in the design of robust distributed systems.
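As a hedged illustration of the kind of decentralised state estimate referred to above, a local observer for the i-th structural partition of a continuous-time system might take the generic Luenberger-like form below; the partitioned matrices and coupling term are assumptions for the example, not the thesis's exact construction.

```latex
% Generic decentralised observer for subsystem i (illustrative form only)
\dot{\hat{x}}_i = A_{ii}\hat{x}_i + B_i u_i
                + \sum_{j \neq i} A_{ij}\hat{x}_j
                + L_i\left(y_i - C_i\hat{x}_i\right),
\qquad y_i = C_i x_i .
```

The interconnection terms $A_{ij}\hat{x}_j$ are the kind of quantities that must be exchanged between software processes, which is why identifying the necessary communications is central to the design.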
Abstract:
Modern distributed control systems comprise a set of processors which are interconnected using a suitable communication network. For use in real-time control environments, such systems must be deterministic and generate specified responses within critical timing constraints. They should also be sufficiently robust to survive predictable events such as communication or processor faults. This thesis considers the problem of coordinating and synchronizing a distributed real-time control system under normal and abnormal conditions. Distributed control systems need to periodically coordinate the actions of several autonomous sites. Often the type of coordination required is the all-or-nothing property of an atomic action. Atomic commit protocols have been used to achieve this atomicity in distributed database systems, which are not subject to deadlines. This thesis addresses the problem of applying time constraints to atomic commit protocols so that decisions can be made within a deadline. A modified protocol is proposed which is suitable for real-time applications. The thesis also addresses the problem of ensuring that atomicity is provided even if processor or communication failures occur. Previous work has considered the design of atomic commit protocols for use in non-time-critical distributed database systems. However, in a distributed real-time control system a fault must not allow stringent timing constraints to be violated. This thesis proposes commit protocols using synchronous communications which can be made resilient to a single processor or communication failure and still satisfy deadlines. Previous formal models used to design commit protocols have had adequate state coverability but have omitted timing properties. They also assumed that sites communicated asynchronously and omitted the communications from the model. Timed Petri nets are used in this thesis to specify and design the proposed protocols, which are analysed for consistency and timeliness. The communication system is also modelled within the Petri net specifications so that communication failures can be included in the analysis. Analysis of the Timed Petri net and the associated reachability tree is used to show that the proposed protocols always terminate consistently and satisfy timing constraints. Finally, the applications of this work are described. Two different types of application are considered: real-time databases and real-time control systems. It is shown that it may be advantageous to use synchronous communications in distributed database systems, especially if predictable response times are required. Emphasis is given to the application of the developed commit protocols to real-time control systems. Using the same analysis techniques as those used for the design of the protocols, it can be shown that the overall system performs as expected both functionally and temporally.
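To make the deadline aspect concrete, here is a minimal sketch of a two-phase-commit coordinator that aborts when participant votes do not all arrive before a deadline. It is a generic schematic under assumed names and a simple threading model, not the synchronous-communication protocol specified with Timed Petri nets in the thesis.

```python
# Sketch: a commit coordinator that enforces a decision deadline (generic, illustrative).
import queue
import threading
import time

def coordinate(participants, votes: "queue.Queue", deadline_s: float) -> str:
    """Phase 1: collect votes until the deadline; Phase 2: commit only if every site voted yes."""
    start = time.monotonic()
    received = {}
    while len(received) < len(participants):
        remaining = deadline_s - (time.monotonic() - start)
        if remaining <= 0:
            return "ABORT"                      # deadline expired: abort to preserve atomicity
        try:
            site, vote = votes.get(timeout=remaining)
            received[site] = vote
        except queue.Empty:
            return "ABORT"
    return "COMMIT" if all(received.values()) else "ABORT"

# Usage: both sites vote yes within the deadline, so the atomic action commits.
q = queue.Queue()
for site in ("sensor_site", "actuator_site"):
    threading.Thread(target=lambda s=site: q.put((s, True))).start()
print(coordinate(["sensor_site", "actuator_site"], q, deadline_s=0.5))
```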
Abstract:
The Fibre Distributed Data Interface (FDDI) represents the new generation of local area networks (LANs). These high-speed LANs are capable of supporting up to 500 users over a 100 km distance. User traffic is expected to be as diverse as file transfers, packet voice and video. As the proliferation of FDDI LANs continues, the need to interconnect these LANs arises. FDDI LAN interconnection can be achieved in a variety of different ways. Some of the most commonly used today are public data networks, dial-up lines and private circuits. For applications that can potentially generate large quantities of traffic, such as an FDDI LAN, it is cost-effective to use a private circuit leased from the public carrier. In order to send traffic from one LAN to another across the leased line, a routing algorithm is required. Much research has been done on the Bellman-Ford algorithm and many implementations of it exist in computer networks. However, due to its instability and problems with routing table loops, it is an unsatisfactory algorithm for interconnected FDDI LANs. A newer algorithm, termed ISIS, which is being standardized by the ISO, provides a far better solution. ISIS will be implemented in many manufacturers' routing devices. In order to make the work as practical as possible, this algorithm is used as the basis for all the new algorithms presented. The ISIS algorithm can be improved by exploiting information that is discarded by that algorithm during the calculation process. A new algorithm, called Down Stream Path Splits (DSPS), uses this information and requires only minor modification to some of the ISIS routing procedures. DSPS provides higher network performance with very little additional processing and storage requirements. A second algorithm, also based on the ISIS algorithm, generates a massive increase in network performance. This is achieved by selecting alternative paths through the network in times of heavy congestion. This algorithm may select the alternative path at either the originating node or any node along the path. It requires more processing and memory storage than DSPS, but generates a higher network power. The final algorithm combines the DSPS algorithm with the alternative-path algorithm. This is the most flexible and powerful of the algorithms developed. However, it is somewhat complex and requires a fairly large storage area at each node. The performance of the new routing algorithms is tested in a comprehensive model of interconnected LANs. This model incorporates the protocol layers from transport down to physical and generates random topologies for routing algorithm performance comparisons. Using this model it is possible to determine which algorithm provides the best performance without introducing significant complexity and storage requirements.
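For context, the sketch below shows the Dijkstra shortest-path-first computation that underlies ISIS-style link-state routing; the topology and costs are invented for the example, and the DSPS and alternative-path extensions described in the abstract are not reproduced here.

```python
# Sketch: the shortest-path-first (Dijkstra) computation at the heart of link-state routing.
import heapq

def shortest_paths(topology: dict, source: str) -> dict:
    """topology maps node -> {neighbour: link_cost}; returns the cost from source to each node."""
    dist = {source: 0}
    frontier = [(0, source)]
    while frontier:
        cost, node = heapq.heappop(frontier)
        if cost > dist.get(node, float("inf")):
            continue                              # stale queue entry, already improved
        for neighbour, link_cost in topology[node].items():
            candidate = cost + link_cost
            if candidate < dist.get(neighbour, float("inf")):
                dist[neighbour] = candidate
                heapq.heappush(frontier, (candidate, neighbour))
    return dist

# Usage: three FDDI LANs joined by leased lines of differing cost.
lans = {
    "LAN_A": {"LAN_B": 4, "LAN_C": 1},
    "LAN_B": {"LAN_A": 4, "LAN_C": 2},
    "LAN_C": {"LAN_A": 1, "LAN_B": 2},
}
print(shortest_paths(lans, "LAN_A"))   # {'LAN_A': 0, 'LAN_B': 3, 'LAN_C': 1}
```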
Abstract:
With the advent of distributed computer systems with a largely transparent user interface, new questions have arisen regarding the management of such an environment by an operating system. One fertile area of research is that of load balancing, which attempts to improve system performance by redistributing the workload submitted to the system by the users. Early work in this field concentrated on static placement of computational objects to improve performance, given prior knowledge of process behaviour. More recently this has evolved into studying dynamic load balancing with process migration, thus allowing the system to adapt to varying loads. In this thesis, we describe a simulated system which facilitates experimentation with various load balancing algorithms. The system runs under UNIX and provides functions for user processes to communicate through software ports; processes reside on simulated homogeneous processors, connected by a user-specified topology, and a mechanism is included to allow migration of a process from one processor to another. We present the results of a study of adaptive load balancing algorithms, conducted using the aforementioned simulated system, under varying conditions; these results show the relative merits of different approaches to the load balancing problem, and we analyse the trade-offs between them. Following from this study, we present further novel modifications to suggested algorithms, and show their effects on system performance.
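As a small illustration of the kind of adaptive decision such a simulator is used to study, the sketch below implements a simple threshold policy that pairs overloaded processors with lightly loaded ones; the policy and the numbers are assumptions for the example, not the algorithms evaluated in the thesis.

```python
# Sketch: a threshold-based load-balancing decision (illustrative policy, not from the thesis).
def plan_migrations(loads: dict, threshold: float = 1.5) -> list:
    """loads maps processor -> current queue length; returns (source, target) migration pairs."""
    average = sum(loads.values()) / len(loads)
    overloaded = sorted((p for p, load in loads.items() if load > threshold * average),
                        key=loads.get, reverse=True)
    underloaded = sorted((p for p, load in loads.items() if load < average), key=loads.get)
    return list(zip(overloaded, underloaded))    # migrate one process per pairing

# Usage: processor P1 is heavily loaded, so one of its processes migrates to P3.
print(plan_migrations({"P1": 9, "P2": 3, "P3": 1}))   # [('P1', 'P3')]
```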
Abstract:
Adaptability is a critical requirement for distributed object-oriented enterprise frameworks if they are to support system evolution. Today, building adaptive services is a complex task due to the lack of adequate framework support in the distributed computing environment. In this thesis, we propose a Meta Level Component-Based Framework (MELC) which uses distributed computing design patterns as components to develop an adaptable pattern-oriented framework for distributed computing applications. We describe our novel approach of combining a meta architecture with a pattern-oriented framework, resulting in an adaptable framework which provides a mechanism to facilitate system evolution. The critical nature of distributed technologies requires frameworks to be adaptable. Our framework employs a meta architecture. It supports dynamic adaptation of feasible design decisions in the framework design space by specifying and coordinating meta-objects that represent various aspects of the distributed environment. The meta architecture in the MELC framework provides the adaptability needed for system evolution. This approach resolves the problem of dynamic adaptation in the framework, which is encountered in most distributed applications. The concept of using a meta architecture to produce an adaptable pattern-oriented framework for distributed computing applications is new and has not previously been explored in research. Because the framework is adaptable, the proposed architecture of the pattern-oriented framework is able to adapt new design patterns dynamically to address technical system issues in the domain of distributed computing, and these patterns can be woven together to shape the framework in the future. We show how MELC can be used effectively to enable dynamic component integration and to separate system functionality from business functionality. We demonstrate how MELC provides an adaptable and dynamic run-time environment using our system configuration and management utility. We also highlight how MELC provides significant adaptability during system evolution through a prototype E-Bookshop application that assembles its business functions with distributed computing components at the meta level in the MELC architecture. Our performance tests show that MELC does not entail prohibitive performance trade-offs. The work to develop the MELC framework for distributed computing applications has emerged as a promising way to meet current and future challenges in the distributed environment.
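The following sketch conveys the flavour of a meta-level registry that binds and rebinds pattern components at run time; the class and method names are invented for illustration and are not the MELC API.

```python
# Sketch: a meta-level registry whose component bindings can change while the system runs.
class MetaRegistry:
    """Holds meta-objects (component factories) that can be replaced during evolution."""
    def __init__(self):
        self._components = {}

    def bind(self, role: str, factory):
        self._components[role] = factory          # e.g. swap in a different pattern component

    def resolve(self, role: str):
        return self._components[role]()

# Usage: the business layer asks for a "persistence" role without knowing which
# concrete component currently fulfils it; the binding can be changed at run time.
registry = MetaRegistry()
registry.bind("persistence", lambda: {"backend": "in-memory store"})
print(registry.resolve("persistence"))
registry.bind("persistence", lambda: {"backend": "remote database proxy"})
print(registry.resolve("persistence"))
```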
Abstract:
This paper explores the role of transactive memory in enabling knowledge transfer between globally distributed teams. While the information systems literature has recently acknowledged the role transactive memory plays in improving knowledge processes and performance in colocated teams, little is known about its contribution to distributed teams. To contribute to filling this gap, knowledge-transfer challenges and processes between onsite and offshore teams were studied at TATA Consultancy Services. In particular, the paper describes the transfer of knowledge between onsite and offshore teams through encoding, storing and retrieving processes. An in-depth case study of globally distributed software development projects was carried out, and a qualitative, interpretive approach was adopted. The analysis of the case suggests that in order to overcome differences derived from the local contexts of the onsite and offshore teams (e.g. different work routines, methodologies and skills), some specific mechanisms supporting the development of codified and personalized ‘directories’ were introduced. These include the standardization of templates and methodologies across the remote sites as well as frequent teleconferencing sessions and occasional short visits. These mechanisms contributed to the development of the notion of ‘who knows what’ across onsite and offshore teams despite the challenges associated with globally distributed teams, and supported the transfer of knowledge between onsite and offshore teams. The paper concludes by offering theoretical and practical implications.
Abstract:
We report a novel tunable dispersion compensator (TDC) based on all-fiber distributed Gires-Tournois etalons (DGTE), which are formed by overlapped chirped fiber gratings. Two DGTEs of opposite dispersion slope work together to generate a tunable periodical dispersion profile. The demonstrated TDCs have the advantages of multichannel operation, extremely low group-delay ripple, low loss, and low cost.
Abstract:
In this work we propose an NLSE-based model of the power and spectral properties of the random distributed feedback (DFB) fiber laser. The model is based on a coupled set of nonlinear Schrödinger equations for the pump and Stokes waves with distributed feedback due to Rayleigh scattering. The model treats the random backscattering via its average strength, i.e. we assume that the feedback is incoherent. This also allows us to speed up simulations substantially (by up to several orders of magnitude). We find that the model of incoherent feedback predicts a smooth generation spectrum that is narrow compared with the gain spectral profile in the random DFB fiber laser. The model allows one to optimize the width of the random laser generation spectrum by varying the dispersion and nonlinearity values: we find that high dispersion and low nonlinearity result in a narrower spectrum, which could be interpreted as an indication that four-wave mixing between different spectral components in the quasi-mode-less spectrum of the random laser under study plays an important role in spectrum formation. Note that the physical mechanism of spectrum formation and broadening in the random DFB fiber laser has not yet been identified. We investigate the temporal and statistical properties of the random DFB fiber laser dynamics. Interestingly, we find that the intensity statistics are not Gaussian. The intensity auto-correlation function also reveals that correlations do exist. The possibility of optimizing the system parameters to enhance the observed intrinsic spectral correlations, and thus potentially achieve pulsed (mode-locked) operation of the mode-less random distributed feedback fiber laser, is discussed.
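For orientation, a schematic form of such a coupled pump/Stokes NLSE system is given below; the signs, coefficients and the way the averaged Rayleigh feedback enters are generic textbook-style assumptions and may differ from the authors' exact equations.

```latex
% Schematic coupled NLSE for pump (A_p) and Stokes (A_s) envelopes (illustrative form only)
\frac{\partial A_p}{\partial z} = -\frac{\alpha_p}{2}A_p
  - i\frac{\beta_{2p}}{2}\frac{\partial^2 A_p}{\partial t^2}
  + i\gamma_p|A_p|^2A_p - \frac{g_R}{2}|A_s|^2A_p ,
\qquad
\frac{\partial A_s}{\partial z} = -\frac{\alpha_s}{2}A_s
  - i\frac{\beta_{2s}}{2}\frac{\partial^2 A_s}{\partial t^2}
  + i\gamma_s|A_s|^2A_s + \frac{g_R}{2}|A_p|^2A_s ,
```

with the distributed feedback represented only through an average Rayleigh backscattering coefficient $\varepsilon$ coupling the counter-propagating Stokes waves, consistent with the incoherent-feedback assumption above.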
Modeling of the spectrum in a random distributed feedback fiber laser within the power balance model
Abstract:
The simplest model for describing the random distributed feedback (RDFB) Raman fiber laser is a power balance model describing the evolution of the wave intensities along the fiber length. The model predicts well the power performance of the RDFB fiber laser, including the generation threshold, the output power, and the longitudinal distributions of the pump and generation wave intensities along the fiber. In the present work, we extend the power balance model and modify the equations so that they describe the frequency-dependent spectral power density instead of intensities integrated over the spectrum. We calculate the generation spectrum by using the depleted pump wave longitudinal distribution derived from the conventional power balance model. We find the spectral balance model to be sufficient to account for the spectral narrowing in the RDFB laser above the generation threshold. © 2014 SPIE.
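For reference, a generic form of the conventional power balance equations for such a laser is sketched below; the spectrally resolved extension described in the abstract replaces the Stokes powers by spectral densities $P_s^{\pm}(z,\nu)$. The coefficients and any noise terms used by the authors may differ.

```latex
% Generic power balance model for an RDFB Raman fiber laser (illustrative form only)
\frac{dP_p}{dz} = -\alpha_p P_p - \frac{\nu_p}{\nu_s}\, g_R\, P_p\,(P_s^{+} + P_s^{-}) ,
\qquad
\pm\frac{dP_s^{\pm}}{dz} = -\alpha_s P_s^{\pm} + g_R\, P_p\, P_s^{\pm} + \varepsilon\, P_s^{\mp} ,
```

where $g_R$ is the Raman gain coefficient, $\alpha_{p,s}$ are the fiber losses and $\varepsilon$ is the Rayleigh backscattering coefficient providing the random distributed feedback.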
Abstract:
In this letter, the polarization properties of a random fiber laser operating via Raman gain and random distributed feedback due to Rayleigh scattering are investigated for the first time. With a polarized pump, partially polarized generation is obtained, with a generation spectrum exhibiting discrete narrow spectral features, in contrast to the smooth spectrum observed for a depolarized pump. The threshold, output power, degree of polarization and state of polarization (SOP) of the lasing can be significantly influenced by the SOP of the pump. The fine narrow spectral components are also sensitive to the SOP of the pump wave. Furthermore, we found that the longitudinal power distributions of the random lasing differ between polarized and depolarized pumping, which results in a considerable reduction of the generation slope efficiency for the polarized radiation. Our results indicate that polarization effects play an important role in the performance of the random fiber laser. This work improves the understanding of the physics of random lasing in fibers and makes a step towards the establishment of a vector model of random fiber lasers.
Abstract:
This paper reports the work of an MEng final-year student project, which looks in detail at the impacts that distributed generation can have on existing low-voltage distribution network protection systems. After a review of current protection issues, the paper investigates several key issues that face distributed generation connections when it comes to network protection systems. These issues include the blinding of protection systems, failure to automatically reclose, unintentional islanding, loss of mains power, and the false tripping of feeders. Each of these problems impacts protection systems in its own way. This study aims to review and investigate these problems via simulation demonstrations on one representative network and to recommend practical solutions.
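As a toy numerical illustration of the first issue listed (protection blinding), the sketch below shows how an embedded generator's in-feed can leave the upstream feeder relay seeing a current below its pickup setting; the simple current-split view and all values are invented for illustration and do not come from the paper's simulations.

```python
# Toy illustration of protection "blinding": with a distributed generator also feeding the
# fault, the upstream relay sees only the grid's share of the fault current and may not trip.
def relay_trips(total_fault_current_a: float, dg_contribution_a: float,
                relay_pickup_a: float) -> bool:
    grid_contribution = total_fault_current_a - dg_contribution_a
    return grid_contribution >= relay_pickup_a

# Without DG the relay trips; with a large DG in-feed it is blinded.
print(relay_trips(total_fault_current_a=2000, dg_contribution_a=0,   relay_pickup_a=1200))  # True
print(relay_trips(total_fault_current_a=2000, dg_contribution_a=900, relay_pickup_a=1200))  # False
```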