70 results for Software systems


Relevance: 40.00%

Abstract:

The requirement for systems to continue to operate satisfactorily in the presence of faults has led to the development of techniques for the construction of fault tolerant software. This thesis addresses the problem of error detection and recovery in distributed systems which consist of a set of communicating sequential processes. A method is presented for the 'a priori' design of conversations for this class of distributed system. Petri nets are used to represent the state and to solve state reachability problems for concurrent systems. The dynamic behaviour of the system can be characterised by a state-change table derived from the state reachability tree. Systematic conversation generation is possible by defining a closed boundary on any branch of the state-change table. Relating the state-change table to process attributes ensures that all necessary processes are included in the conversation. The method also ensures properly nested conversations. An implementation of the conversation scheme using the concurrent language occam is proposed. The structure of the conversation is defined using the special features of occam. The proposed implementation gives a structure which is independent of the application and of the number of processes involved. Finally, the integrity of inter-process communications is investigated. The basic communication primitives used in message-passing systems are seen to have deficiencies when applied to systems with safety implications. Using a Petri net model, a boundary for a time-out mechanism is proposed which will increase the integrity of a system that involves inter-process communications.
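
The reachability analysis that underpins the state-change table can be illustrated with a small sketch. This is a minimal example assuming a toy three-place net of my own invention, not one of the nets from the thesis; a marking is a tuple of token counts, and the edges collected by the breadth-first search correspond to rows of a state-change table.

```python
# Minimal sketch of Petri net reachability analysis (assumed encoding, not
# the nets from the thesis). Each transition has 'pre' (tokens consumed)
# and 'post' (tokens produced) over three places.
from collections import deque

transitions = {
    "t1": {"pre": (1, 0, 0), "post": (0, 1, 0)},   # process enters conversation
    "t2": {"pre": (0, 1, 0), "post": (0, 0, 1)},   # process runs acceptance test
    "t3": {"pre": (0, 0, 1), "post": (1, 0, 0)},   # process leaves conversation
}

def enabled(marking, t):
    return all(m >= p for m, p in zip(marking, t["pre"]))

def fire(marking, t):
    return tuple(m - p + q for m, p, q in zip(marking, t["pre"], t["post"]))

def reachability(initial):
    """Breadth-first enumeration of reachable markings; each recorded
    edge corresponds to a row of a state-change table."""
    seen, edges = {initial}, []
    frontier = deque([initial])
    while frontier:
        m = frontier.popleft()
        for name, t in transitions.items():
            if enabled(m, t):
                m2 = fire(m, t)
                edges.append((m, name, m2))
                if m2 not in seen:
                    seen.add(m2)
                    frontier.append(m2)
    return seen, edges

states, table = reachability((1, 0, 0))
for src, t, dst in table:
    print(src, "--" + t + "->", dst)
```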

Relevance: 40.00%

Abstract:

Most parametric software cost estimation models used today evolved in the late 1970s and early 1980s, when the dominant software development techniques were the early 'structured methods'. Since then, several new systems development paradigms and methods have emerged, one being Jackson System Development (JSD). Current cost estimating methods do not take account of these developments and therefore cannot provide adequate estimates of effort and hence cost. In order to address these shortcomings, two new estimation methods have been developed for JSD projects. One of these methods, JSD-FPA, is a top-down estimating method based on the existing MKII function point method. The other, JSD-COCOMO, is a sizing technique which sizes a project in terms of lines of code from the process structure diagrams, and thus provides an input to the traditional COCOMO method. The JSD-FPA method allows JSD projects in both the real-time and scientific application areas to be costed, as well as the commercial information systems applications to which FPA is usually applied. The method is based upon a three-dimensional view of a system specification, as opposed to the largely data-oriented view traditionally used by FPA. The method uses counts of various attributes of a JSD specification to develop a metric which provides an indication of the size of the system to be developed. This size metric is then transformed into an estimate of effort by calculating past project productivity and utilising this figure to predict the effort, and hence cost, of a future project. The effort estimates produced were validated by comparing them against the effort figures for six actual projects. The JSD-COCOMO method uses counts of the levels in a process structure chart as the input to an empirically derived model which transforms them into an estimate of delivered source code instructions.
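
As a rough illustration of the two estimation routes described, the sketch below converts a size metric to effort using past productivity (the JSD-FPA route) and applies the published basic-COCOMO organic-mode equation to a size in KDSI (the JSD-COCOMO route). The productivity figure and sizes are invented examples; only the COCOMO coefficients (a = 2.4, b = 1.05) are Boehm's published values.

```python
# Sketch of the two estimation styles described above (illustrative only).

def effort_from_size_metric(size_fp, productivity_fp_per_pm):
    """JSD-FPA style: past-project productivity (function points per
    person-month, an assumed calibration figure) turns size into effort."""
    return size_fp / productivity_fp_per_pm

def cocomo_basic_effort(kdsi, a=2.4, b=1.05):
    """JSD-COCOMO style: basic COCOMO with organic-mode coefficients
    (published values) gives person-months from thousands of delivered
    source instructions."""
    return a * kdsi ** b

print(effort_from_size_metric(size_fp=350, productivity_fp_per_pm=12.5))  # 28 pm
print(cocomo_basic_effort(kdsi=32))                                       # ~91 pm
```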

Relevance: 40.00%

Abstract:

Software development methodologies are becoming increasingly abstract, progressing from low level assembly and implementation languages such as C and Ada, to component based approaches that can be used to assemble applications using technologies such as JavaBeans and the .NET framework. Meanwhile, model driven approaches emphasise the role of higher level models and notations, and embody a process of automatically deriving lower level representations and concrete software implementations. The relationship between data and software is also evolving. Modern data formats are becoming increasingly standardised, open and empowered in order to support a growing need to share data in both academia and industry. Many contemporary data formats, most notably those based on XML, are self-describing, able to specify valid data structure and content, and can also describe data manipulations and transformations. Furthermore, while applications of the past have made extensive use of data, the runtime behaviour of future applications may be driven by data, as demonstrated by the field of dynamic data driven application systems. The combination of empowered data formats and high level software development methodologies forms the basis of modern game development technologies, which drive software capabilities and runtime behaviour using empowered data formats describing game content. While low level libraries provide optimised runtime execution, content data is used to drive a wide variety of interactive and immersive experiences. This thesis describes the Fluid project, which combines component based software development and game development technologies in order to define novel component technologies for the description of data driven component based applications. The thesis makes explicit contributions to the fields of component based software development and visualisation of spatiotemporal scenes, and also describes potential implications for game development technologies. The thesis also proposes a number of developments in dynamic data driven application systems in order to further empower the role of data in this field.
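
A minimal sketch of the data-driven style described: a single XML document both declares which components to instantiate and supplies their content, so runtime behaviour is driven by data rather than code. The component names and XML layout are invented for illustration and are not the Fluid project's formats.

```python
# Minimal sketch of data-driven component assembly: an XML document (a
# stand-in for an empowered content format, not Fluid's actual schema)
# names the components to create and carries their configuration.
import xml.etree.ElementTree as ET

class Renderer:
    def __init__(self, colour): self.colour = colour
    def run(self): print(f"rendering scene in {self.colour}")

class Physics:
    def __init__(self, gravity): self.gravity = float(gravity)
    def run(self): print(f"simulating physics, g={self.gravity}")

REGISTRY = {"renderer": Renderer, "physics": Physics}  # component catalogue

scene = ET.fromstring("""
<application>
  <component type="renderer" colour="grey"/>
  <component type="physics" gravity="9.81"/>
</application>
""")

# Runtime behaviour is driven by the data: each element selects and
# configures a component from the registry.
components = [REGISTRY[e.get("type")](**{k: v for k, v in e.attrib.items()
                                         if k != "type"}) for e in scene]
for c in components:
    c.run()
```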

Relevance: 40.00%

Abstract:

The work described was carried out as part of a collaborative Alvey software engineering project (project number SE057). The project collaborators were the Inter-Disciplinary Higher Degrees Scheme of the University of Aston in Birmingham, BIS Applied Systems Ltd. (BIS) and the British Steel Corporation. The aim of the project was to investigate the potential application of knowledge-based systems (KBSs) to the design of commercial data processing (DP) systems. The work was primarily concerned with BIS's Structured Systems Design (SSD) methodology for DP systems development and how users of this methodology could be supported using KBS tools. The problems encountered by users of SSD are discussed and potential forms of computer-based support for inexpert designers are identified. An architecture for a support environment for SSD, the Intellipse system, is proposed, based on the integration of KBS and non-KBS tools for individual design tasks within SSD. The Intellipse system has two modes of operation: Advisor and Designer. The design, implementation and user-evaluation of Advisor are discussed. The results of a Designer feasibility study, the aim of which was to analyse major design tasks in SSD to assess their suitability for KBS support, are reported. The potential role of KBS tools in the domain of database design is discussed. The project involved extensive knowledge engineering sessions with expert DP systems designers. Some practical lessons in relation to KBS development are derived from this experience. The nature of the expertise possessed by expert designers is discussed. The need for operational KBSs to be built to the same standards as other commercial and industrial software is identified. A comparison between current KBS and conventional DP systems development is made. On the basis of this analysis, a structured development method for KBSs is proposed: the POLITE model. Some initial results of applying this method to KBS development are discussed. Several areas for further research and development are identified.
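
The Advisor mode is knowledge-based, and the flavour of such a tool can be sketched with a toy forward-chaining rule engine: rules fire when their conditions are satisfied and their conclusions are added to the fact base until nothing new fires. The rules below are invented illustrations, not the Intellipse knowledge base.

```python
# Toy forward-chaining rule engine of the kind a KBS design advisor might
# use (the rules are invented examples, not the Intellipse knowledge base).
rules = [
    ({"entity has many-to-many relation"}, "introduce link entity"),
    ({"introduce link entity"}, "add composite key to link entity"),
    ({"high transaction volume", "frequent joins"}, "consider denormalisation"),
]

def forward_chain(facts):
    """Repeatedly fire any rule whose conditions are all satisfied, adding
    its conclusion to the fact base, until nothing new fires."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

advice = forward_chain({"entity has many-to-many relation",
                        "high transaction volume", "frequent joins"})
print(advice)
```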

Relevance: 40.00%

Abstract:

The goal of this roadmap paper is to summarize the state-of-the-art and to identify critical challenges for the systematic software engineering of self-adaptive systems. The paper is partitioned into four parts, one for each of the identified essential views of self-adaptation: modelling dimensions, requirements, engineering, and assurances. For each view, we present the state-of-the-art and the challenges that our community must address. This roadmap paper is a result of the Dagstuhl Seminar 08031 on "Software Engineering for Self-Adaptive Systems," which took place in January 2008. © 2009 Springer Berlin Heidelberg.

Relevance: 40.00%

Abstract:

As machine tools continue to become increasingly repeatable and accurate, high-precision manufacturers may be tempted to consider how they might utilise machine tools as measurement systems. In this paper, we have explored this paradigm by attempting to repurpose state-of-the-art coordinate measuring machine Uncertainty Evaluating Software (UES) for a machine tool application. We performed live measurements on all the systems in question. Our findings have highlighted some gaps with UES when applied to machine tools, and we have attempted to identify the sources of variation which have led to discrepancies. Implications of this research include requirements to evolve the algorithms within the UES if it is to be adapted for on-machine measurement, improve the robustness of the input parameters, and most importantly, clarify expectations.
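
The core calculation in any uncertainty-evaluating software is the combination of individual contributions into a single statement of measurement uncertainty. The sketch below shows the standard GUM-style quadrature sum with a coverage factor; it illustrates the general principle only, not the algorithm inside the UES studied here, and the contributor values are invented.

```python
# GUM-style combination of uncorrelated uncertainty contributions in
# quadrature, expanded with coverage factor k=2 (~95% confidence).
# The contributor values are invented and do not come from the study.
import math

contributions_um = {           # standard uncertainties, micrometres
    "scale error": 0.8,
    "probing": 0.5,
    "thermal drift": 1.2,      # machine tools see larger thermal effects
    "geometry (squareness)": 0.6,
}

u_combined = math.sqrt(sum(u ** 2 for u in contributions_um.values()))
U_expanded = 2.0 * u_combined  # coverage factor k = 2

print(f"combined standard uncertainty: {u_combined:.2f} um")
print(f"expanded uncertainty (k=2):    {U_expanded:.2f} um")
```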

Relevance: 30.00%

Abstract:

The amplification of demand variation up a supply chain, widely termed 'the Bullwhip Effect', is disruptive, costly and something that supply chain management generally seeks to minimise. It was originally attributed to poor system design: deficiencies in policies, organisational structure and delays in material and information flow all lead to sub-optimal reorder point calculation. It has since been attributed to exogenous random factors such as uncertainties in demand, supply and distribution lead time, but these causes are not exclusive, as academic and operational studies have since shown that orders and/or inventories can exhibit significant variability even if customer demand and lead time are deterministic. This increase in the range of possible causes of dynamic behaviour indicates that our understanding of the phenomenon is far from complete. One possible, yet previously unexplored, factor that may influence dynamic behaviour in supply chains is the application and operation of supply chain performance measures. Organisations monitoring and responding to their adopted key performance metrics will make operational changes, and this action may influence the level of dynamics within the supply chain, possibly degrading the performance of the very system they were intended to measure. In order to explore this, a plausible abstraction of the operational responses to the Supply Chain Council's SCOR® (Supply Chain Operations Reference) model was incorporated into a classic Beer Game distribution representation, using the dynamic discrete event simulation software Simul8. During the simulation, the five SCOR Supply Chain Performance Attributes - Reliability, Responsiveness, Flexibility, Cost and Utilisation - were continuously monitored and compared to established targets. Operational adjustments to the reorder point, transportation modes and production capacity (where appropriate) for three independent supply chain roles were made, and the degree of dynamic behaviour in the supply chain was measured using the ratio of the standard deviation of upstream demand relative to the standard deviation of the downstream demand. Factors employed to build the detailed model include: variable retail demand, order transmission, transportation delays, production delays, capacity constraints, demand multipliers and demand averaging periods. Five dimensions of supply chain performance were monitored independently in three autonomous supply chain roles and operational settings adjusted accordingly. The uniqueness of this research stems from the application of the five SCOR performance attributes with modelled operational responses in a dynamic discrete event simulation model. This project makes its primary contribution to knowledge by measuring the impact, on supply chain dynamics, of applying a representative performance measurement system.
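
The dynamic measure named above can be shown in miniature: a single echelon running an order-up-to policy with a moving-average demand forecast amplifies demand variation, and the ratio of order to demand standard deviations quantifies it. This is a simplified stand-in for the Beer Game/SCOR model, not the Simul8 model itself; all parameters are invented.

```python
# Bullwhip measurement sketch: ratio of the standard deviation of upstream
# orders to that of downstream demand (>1 indicates amplification).
import random, statistics

random.seed(1)
demand = [random.gauss(100, 10) for _ in range(300)]

lead_time, window = 4, 5
inventory = 500.0
pipeline = [100.0] * lead_time        # orders in transit
history = [100.0] * window            # recent demand observations
orders = []
for d in demand:
    inventory += pipeline.pop(0)      # delivery arrives
    inventory -= d                    # demand is met
    history = history[1:] + [d]
    forecast = sum(history) / window  # moving-average forecast
    target = forecast * (lead_time + 1)               # order-up-to level
    q = max(0.0, target - inventory - sum(pipeline))  # reorder decision
    pipeline.append(q)
    orders.append(q)

bullwhip = statistics.stdev(orders) / statistics.stdev(demand)
print(f"bullwhip ratio: {bullwhip:.2f}")   # > 1: amplification up the chain
```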

Relevance: 30.00%

Abstract:

Purpose - The main objective of the paper is to develop a risk management framework for software development projects from the developers' perspective. Design/methodology/approach - This study uses a combined qualitative and quantitative technique, with the active involvement of stakeholders, in order to identify, analyze and respond to risks. The entire methodology has been explained using a case study of a software development project in a public sector organization in Barbados. Findings - An analytical approach to managing risk in software development ensures effective delivery of projects to clients. Research limitations/implications - The proposed risk management framework has been applied to a single case. Practical implications - Software development projects are characterized by technical complexity, market and financial uncertainties, and competent manpower availability. Therefore, successful project accomplishment depends on addressing those issues throughout the project phases. Effective risk management ensures the success of projects. Originality/value - There are several studies on managing risks in software development and information technology (IT) projects. Most of the studies identify and prioritize risks through empirical research in order to suggest mitigating measures. Although they are important to clients for future projects, these studies fail to provide any framework for risk management from the software developers' perspective. Although a few studies have introduced frameworks for risk management in software development, most of them are presented from the clients' perspective and very little effort has been made to integrate this with the software development cycle. As software developers absorb a considerable amount of risk, an integrated framework for managing risks in software development from the developers' perspective is needed. © Emerald Group Publishing Limited.

Relevance: 30.00%

Abstract:

Expert systems, and artificial intelligence more generally, can provide a useful means for representing decision-making processes. By linking expert systems software to simulation software an effective means of including these decision-making processes in a simulation model can be achieved. This paper demonstrates how a commercial-off-the-shelf simulation package (Witness) can be linked to an expert systems package (XpertRule) through a Visual Basic interface. The methodology adopted could be used for models, and possibly software, other than those presented here.
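
The division of labour described - simulation engine for material flow, expert system for decisions - can be shown in miniature. Below, a plain Python function stands in for the XpertRule knowledge base and a simple event loop for Witness; all names and rules are illustrative, and the Visual Basic interface layer is omitted.

```python
# Miniature version of the linkage described above: an event loop (standing
# in for Witness) defers each routing decision to a rule set (standing in
# for XpertRule). All names and rules are illustrative.
import heapq

def routing_rules(job):
    """Stand-in expert system: decide which machine processes the job."""
    if job["priority"] == "high":
        return "fast_machine"
    return "standard_machine" if job["size"] <= 10 else "heavy_machine"

events = []   # (time, id, job) tuples, processed in time order
for t, job in [(1, {"id": 1, "priority": "high", "size": 5}),
               (2, {"id": 2, "priority": "low",  "size": 20}),
               (3, {"id": 3, "priority": "low",  "size": 8})]:
    heapq.heappush(events, (t, job["id"], job))

while events:
    t, _, job = heapq.heappop(events)
    machine = routing_rules(job)      # decision delegated to the rules
    print(f"t={t}: job {job['id']} routed to {machine}")
```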

Relevance: 30.00%

Abstract:

Purpose - To consider the role of technology in knowledge management in organizations, both actual and desired. Design/methodology/approach - Facilitated, computer-supported group workshops were conducted with 78 people from ten different organizations. The objective of each workshop was to review the current state of knowledge management in that organization and develop an action plan for the future. Findings - Only three organizations had adopted a strongly technology-based "solution" to knowledge management problems, and these followed three substantially different routes. There was a clear emphasis on the use of general information technology tools to support knowledge management activities, rather than the use of tools specific to knowledge management. Research limitations/implications - Further research is needed to help organizations make best use of generally available software such as intranets and e-mail for knowledge management. Many issues, especially human, relate to the implementation of any technology. Participation was restricted to organizations that wished to produce an action plan for knowledge management. The findings may therefore represent only "average" organizations, not the very best practice. Practical implications - Each organization must resolve four tensions: between the quantity and quality of information/knowledge, between centralized and decentralized organization, between head office and organizational knowledge, and between "push" and "pull" processes. Originality/value - Although it is the group rather than an individual that determines what counts as knowledge, hardly any previous studies of knowledge management have collected data in a group context.

Relevance: 30.00%

Abstract:

The social processes involved in engaging small groups of 3-15 managers in their sharing, organising, acquiring, creating and using knowledge can be supported with software and facilitator assistance. This paper introduces three such systems that we have used as facilitators to support groups of managers in their social process of decision-making by managing knowledge during face-to-face meetings. The systems include Compendium, Group Explorer (with Decision Explorer) and V*I*S*A. We review these systems for group knowledge management where the aim is for better decision-making, and discuss the principles of deploying each in a group meeting. © 2006 Operational Research Society Ltd. All rights reserved.

Relevance: 30.00%

Abstract:

Research on production systems design has in recent years tended to concentrate on 'software' factors such as organisational aspects, work design, and the planning of the production operations. In contrast, relatively little attention has been paid to maximising the contributions made by fixed assets, particularly machines and equipment. However, as the cost of unproductive machine time has increased, reliability, particularly of machine tools, has become ever more important. Reliability theory and research has traditionally been based mainly on electrical and electronic equipment, whereas mechanical devices, especially machine tools, have not received sufficiently objective treatment. A recently completed research project has considered the reliability of machine tools by taking sample surveys of purchasers, maintainers and manufacturers. Breakdown data were also collected from a number of engineering companies and analysed using both manual and computer techniques. The results obtained provide an indication of the factors most likely to influence reliability, which in turn could lead to improved design and selection of machine tool systems. Statistical analysis of long-term field data has revealed patterns and trends of failure which could help in the design of more meaningful maintenance schemes.
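
One common form of the statistical failure analysis mentioned is fitting a Weibull distribution to times between breakdowns: the shape parameter indicates whether failures are dominated by infant mortality, random events or wear-out, each pointing to a different maintenance scheme. A minimal sketch follows, with invented figures and assuming scipy is available; it is not the analysis performed in the project.

```python
# Sketch of failure-pattern analysis on breakdown data (invented figures):
# Weibull shape beta < 1 suggests infant mortality, beta ~ 1 random
# failures, beta > 1 wear-out. Assumes scipy is available.
from scipy import stats

hours_between_failures = [120, 340, 95, 410, 280, 150, 520, 230, 310, 180]

beta, loc, eta = stats.weibull_min.fit(hours_between_failures, floc=0)
print(f"shape beta = {beta:.2f}, scale eta = {eta:.0f} h")
if beta < 1:
    print("decreasing failure rate: burn-in / quality problems")
elif beta > 1:
    print("increasing failure rate: wear-out, schedule preventive maintenance")
else:
    print("roughly constant failure rate: corrective maintenance")
```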

Relevance: 30.00%

Abstract:

The rapid developments in computer technology have resulted in a widespread use of discrete event dynamic systems (DEDSs). This type of system is complex because it exhibits properties such as concurrency, conflict and non-determinism. It is therefore important to model and analyse such systems before implementation to ensure safe, deadlock-free and optimal operation. This thesis investigates current modelling techniques and describes Petri net theory in more detail. It reviews top-down, bottom-up and hybrid Petri net synthesis techniques that are used to model large systems, and introduces an object-oriented methodology to enable modelling of larger and more complex systems. Designs obtained by this methodology are modular, easy to understand and allow re-use of designs. Control is the next logical step in the design process. This thesis reviews recent developments in the control of DEDSs and investigates the use of Petri nets in the design of supervisory controllers. The scheduling of exclusive use of resources is investigated, an efficient Petri net based scheduling algorithm is designed, and a re-configurable controller is proposed. To enable the analysis and control of large and complex DEDSs, an object-oriented C++ software tool kit was developed and used to implement a Petri net analysis tool and Petri net scheduling and control algorithms. Finally, the methodology was applied to two industrial DEDSs: a prototype can-sorting machine developed by Eurotherm Controls Ltd., and a semiconductor testing plant belonging to SGS Thomson Microelectronics Ltd.
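
The object-oriented modelling style and the exclusive-resource scheduling concern can be illustrated together: in the toy net below, a single-token 'resource' place makes the two jobs' start transitions conflict, so only one job can hold the machine at a time. This is a Python sketch of the idea only; the toolkit itself was written in C++.

```python
# Toy object-oriented Petri net in the spirit described (the actual tool
# kit was C++). A single-token 'resource' place models exclusive use.
class PetriNet:
    def __init__(self, marking):
        self.marking = dict(marking)
        self.transitions = {}

    def add_transition(self, name, pre, post):
        self.transitions[name] = (pre, post)

    def enabled(self, name):
        pre, _ = self.transitions[name]
        return all(self.marking[p] >= n for p, n in pre.items())

    def fire(self, name):
        pre, post = self.transitions[name]
        assert self.enabled(name), f"{name} not enabled"
        for p, n in pre.items():  self.marking[p] -= n
        for p, n in post.items(): self.marking[p] = self.marking.get(p, 0) + n

net = PetriNet({"job1_wait": 1, "job2_wait": 1, "resource": 1,
                "job1_busy": 0, "job2_busy": 0})
net.add_transition("start1", {"job1_wait": 1, "resource": 1}, {"job1_busy": 1})
net.add_transition("start2", {"job2_wait": 1, "resource": 1}, {"job2_busy": 1})
net.add_transition("end1",   {"job1_busy": 1}, {"resource": 1})
net.add_transition("end2",   {"job2_busy": 1}, {"resource": 1})

net.fire("start1")
print(net.enabled("start2"))  # False: the resource token is taken
net.fire("end1")
print(net.enabled("start2"))  # True: the resource is free again
```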

Relevance: 30.00%

Abstract:

Distributed digital control systems provide alternatives to conventional, centralised digital control systems. Typically, a modern distributed control system will comprise a multi-processor or network of processors, a communications network, an associated set of sensors and actuators, and the systems and applications software. This thesis addresses the problem of how to design robust decentralised control systems, such as those used to control event-driven, real-time processes in time-critical environments. Emphasis is placed on studying the dynamical behaviour of a system and identifying ways of partitioning the system so that it may be controlled in a distributed manner. A structural partitioning technique is adopted which makes use of natural physical sub-processes in the system, which are then mapped into the software processes that control the system. However, communications are required between the processes because of the disjoint nature of the distributed (i.e. partitioned) state of the physical system. The structural partitioning technique, and recent developments in the theory of potential controllability and observability of a system, are the basis for the design of controllers. In particular, the method is used to derive a decentralised estimate of the state vector for a continuous-time system. The work is also extended to derive a distributed estimate for a discrete-time system. Emphasis is also given to the role of communications in the distributed control of processes and to the partitioning technique necessary to design distributed and decentralised systems with resilient structures. A method is presented for the systematic identification of the communications necessary for distributed control. It is also shown that the structural partitions can be used directly in the design of software fault-tolerant concurrent controllers. In particular, the structural partition can be used to identify the boundary of the conversation which can be used to protect a specific part of the system. In addition, for certain classes of system, the partitions can be used to identify processes which may be dynamically reconfigured in the event of a fault. These methods should be of use in the design of robust distributed systems.
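
The decentralised state estimation described can be sketched for a discrete-time system partitioned into two coupled sub-processes: each local observer corrects its own estimate from a local measurement and uses the neighbour's communicated estimate for the interconnection term. The matrices and gains below are invented purely for illustration, not taken from the thesis.

```python
# Sketch of a decentralised (partitioned) discrete-time state estimator:
# each subsystem runs a local Luenberger-style observer and communicates
# its estimate to the other. System matrices and gains are invented.
import numpy as np

# Two coupled scalar subsystems: x[k+1] = A x[k], local outputs y_i = x_i.
A = np.array([[0.90, 0.05],
              [0.04, 0.90]])
L1, L2 = 0.5, 0.5          # local observer gains

x = np.array([1.0, -1.0])  # true state
xh = np.array([0.0, 0.0])  # decentralised estimates

for k in range(30):
    y1, y2 = x[0], x[1]    # local measurements
    # Each observer predicts with its own row of A, using the *communicated*
    # estimate of the neighbour's state, then corrects with its local output.
    xh1 = A[0, 0] * xh[0] + A[0, 1] * xh[1] + L1 * (y1 - xh[0])
    xh2 = A[1, 0] * xh[0] + A[1, 1] * xh[1] + L2 * (y2 - xh[1])
    xh = np.array([xh1, xh2])
    x = A @ x              # plant evolves

print("estimation error:", np.abs(x - xh))  # converges toward zero
```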

Relevance: 30.00%

Abstract:

A major application of computers has been to control physical processes, in which the computer is embedded within some large physical process and is required to control concurrent physical processes. The main difficulty with these systems is their event-driven characteristics, which complicate their modelling and analysis. Although a number of researchers in the process system community have approached the problems of modelling and analysis of such systems, there is still a lack of standardised software development formalisms for the system (controller) development, particularly at the early stages of the system design cycle. This research forms part of a larger research programme which is concerned with the development of real-time process-control systems in which software is used to control concurrent physical processes. The general objective of the research in this thesis is to investigate the use of formal techniques in the analysis of such systems at their early stages of development, with a particular bias towards an application to high speed machinery. Specifically, the research aims to generate a standardised software development formalism for real-time process-control systems, particularly for software controller synthesis. In this research, a graphical modelling formalism called Sequential Function Chart (SFC), a variant of Grafcet, is examined. SFC, which is defined in the international standard IEC1131 as a graphical description language, has been used widely in industry and has achieved an acceptable level of maturity and acceptance. A comparative study between SFC and Petri nets is presented in this thesis. To overcome identified inaccuracies in the SFC, a formal definition of the firing rules for SFC is given. To provide a framework in which SFC models can be analysed formally, an extended time-related Petri net model for SFC is proposed and the transformation method is defined. The SFC notation lacks a systematic way of synthesising system models from real-world systems. Thus a standardised approach to the development of real-time process-control systems is required, such that the system (software) functional requirements can be identified, captured and analysed. A rule-based approach and a method called the system behaviour driven method (SBDM) are proposed as a development formalism for real-time process-control systems.
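
The firing rule given a formal definition in the thesis can be illustrated with a minimal SFC interpreter: a transition is cleared when every preceding step is active and its condition is true, whereupon the preceding steps are deactivated and the succeeding steps activated (covering parallel divergence and synchronisation). The chart below is an invented example, not taken from IEC1131 or the thesis.

```python
# Minimal interpreter for the SFC firing rule discussed above: a transition
# is cleared when all preceding steps are active AND its condition holds;
# clearing deactivates the preceding steps and activates the succeeding ones.
active = {"idle"}   # set of currently active steps

transitions = [
    # (preceding steps, condition name, succeeding steps)
    ({"idle"},                "start_pressed", {"filling"}),
    ({"filling"},             "tank_full",     {"heating", "stirring"}),  # parallel branch
    ({"heating", "stirring"}, "temp_ok",       {"idle"}),                 # synchronisation
]

def scan(inputs):
    """One scan cycle: evaluate and clear every enabled transition."""
    global active
    for pre, cond, post in transitions:
        if pre <= active and inputs.get(cond, False):
            active = (active - pre) | post

scan({"start_pressed": True}); print(active)   # {'filling'}
scan({"tank_full": True});     print(active)   # {'heating', 'stirring'}
scan({"temp_ok": True});       print(active)   # {'idle'}
```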