27 results for SOFTWARE APPLICATIONS
Towards an understanding of the causes and effects of software requirements change: two case studies
Abstract:
Changes to software requirements not only pose a risk to the successful delivery of software applications but also provide an opportunity for improved usability and value. An increased understanding of the causes and consequences of change can support requirements management and also make progress towards the goal of change anticipation. This paper presents the results of two case studies that address objectives arising from that ultimate goal. The first case study evaluated the potential of a change source taxonomy containing the elements ‘market’, ‘organisation’, ‘vision’, ‘specification’ and ‘solution’ to provide a meaningful basis for change classification and measurement. The second case study investigated whether the requirements attributes of novelty, complexity and dependency correlated with requirements volatility. While insufficiency of data in the first case study precluded an investigation of changes arising from the ‘market’ source, for the remaining change sources the results indicate significant differences in cost, value to the customer and management considerations. Findings show that higher-cost and higher-value changes arose more often from the ‘organisation’ and ‘vision’ sources; these changes also generally involved the co-operation of more stakeholder groups and were considered to be less controllable than changes arising from the ‘specification’ or ‘solution’ sources. Results from the second case study indicate that only requirements dependency is consistently correlated with volatility and that changes coming from each change source affect different groups of requirements. We conclude that the taxonomy can provide a meaningful means of change classification, but that a single requirement attribute is insufficient for change prediction. A theoretical causal account of requirements change is drawn from the implications of the combined results of the two case studies.
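The correlation question in the second case study lends itself to a small illustration. The Python sketch below checks whether requirement attributes such as novelty, complexity and dependency correlate with volatility, here proxied by a per-requirement change count; the data, attribute scales and correlation measure are invented for demonstration and are not the authors' instrument or results.

```python
# Illustrative sketch only: rank-correlating requirement attributes with volatility.
from scipy.stats import spearmanr

# Hypothetical per-requirement records: (novelty, complexity, dependency, change_count)
requirements = [
    (3, 4, 7, 9),
    (1, 2, 2, 1),
    (4, 5, 6, 7),
    (2, 3, 5, 4),
    (5, 4, 8, 10),
    (1, 1, 1, 0),
]

novelty, complexity, dependency, volatility = zip(*requirements)

for name, attribute in [("novelty", novelty),
                        ("complexity", complexity),
                        ("dependency", dependency)]:
    rho, p_value = spearmanr(attribute, volatility)   # Spearman rank correlation
    print(f"{name:>10}: rho={rho:+.2f}, p={p_value:.3f}")
```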
Abstract:
Models and software products have been developed for modelling, simulation and prediction of different correlations in materials science, including: 1. the correlation between processing parameters and properties in titanium alloys and γ-titanium aluminides; 2. time–temperature–transformation (TTT) diagrams for titanium alloys; 3. corrosion resistance of titanium alloys; 4. surface hardness and microhardness profiles of nitrocarburised layers; 5. fatigue stress–life (S–N) diagrams for Ti–6Al–4V alloys. The programs are based on trained artificial neural networks. For each particular case an appropriate combination of inputs and outputs is chosen, and very good model performance is achieved. Graphical user interfaces (GUIs) are created for easy use of the models; in addition, interactive text versions are developed. The models are combined and integrated into a software package built in a modular fashion. The software products are available in versions for different platforms, including Windows 95/98/2000/NT, UNIX and Apple Macintosh. A description of the software products is given to demonstrate that they are convenient and powerful tools for practical applications in solving various problems in materials science. Examples of optimisation of alloy compositions, processing parameters and working conditions are illustrated. An option for use of the software in a materials selection procedure is described.
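As a generic illustration of the approach described above (a trained neural network mapping processing parameters to a property), the following Python sketch fits a small feed-forward regressor on invented data; the features, units, values and network size are placeholders, not the datasets or network topologies used in the software package.

```python
# Minimal sketch, assuming hypothetical processing-parameter data.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler

# Hypothetical samples: [annealing temp (C), time (h), cooling rate (C/s)] -> hardness (HV)
X = np.array([[700, 1.0, 0.5],
              [750, 2.0, 1.0],
              [800, 1.5, 2.0],
              [850, 0.5, 5.0],
              [900, 1.0, 10.0]])
y = np.array([310.0, 325.0, 345.0, 370.0, 400.0])

scaler = StandardScaler().fit(X)
model = MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0)
model.fit(scaler.transform(X), y)

# Predict the property for a new combination of processing parameters.
print(model.predict(scaler.transform([[820, 1.0, 3.0]])))
```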
Abstract:
Changes to software requirements occur during initial development and subsequent to delivery, posing a risk to cost and quality while at the same time providing an opportunity to add value. Provision of a generic change source taxonomy will support requirements change risk visibility, and also facilitate richer recording of both pre- and post-delivery change data. In this paper we present a collaborative study to investigate and classify sources of requirements change, drawing comparison between those pertaining to software development and maintenance. We begin by combining evolution, maintenance and software lifecycle research to derive a definition of software maintenance, which provides the foundation for empirical context and comparison. Previously published change ‘causes’ pertaining to development are elicited from the literature, consolidated using expert knowledge and classified using card sorting. A second study incorporating causes of requirements change during software maintenance results in a taxonomy which accounts for the entire evolutionary progress of applications software. We conclude that the distinction between the terms maintenance and development is imprecise, and that changes to requirements in both scenarios arise due to a combination of factors contributing to requirements uncertainty and events that trigger change. The change trigger taxonomy constructs were initially validated using a small set of requirements change data, and deemed sufficient and practical as a means to collect common requirements change statistics across multiple projects.
Abstract:
Polypropylene (PP), a semi-crystalline material, is typically solid-phase thermoformed at temperatures associated with crystalline melting, generally in the 150 °C to 160 °C range. In this very narrow thermoforming window the mechanical properties of the material decline rapidly with increasing temperature, and these large changes in properties make polypropylene one of the more difficult materials to process by thermoforming. Measurement of the deformation behaviour of a material under processing conditions is particularly important for accurate numerical modelling of thermoforming processes. This paper presents the findings of a study into the physical behaviour of industrial thermoforming grades of polypropylene. Practical tests were performed using custom-built materials testing machines and thermoforming equipment at Queen's University Belfast. Numerical simulations of these processes were constructed to replicate thermoforming conditions using industry-standard Finite Element Analysis software, namely ABAQUS, together with custom-built user material model subroutines. Several variant constitutive models, spanning phenomenological, rheological and blended formulations, were used to represent the behaviour of the polypropylene materials during processing. The paper discusses approaches to modelling industrial plug-assisted thermoforming operations using Finite Element Analysis techniques and the range of material models constructed and investigated, and directly compares practical results to numerical predictions. It culminates in a discussion of the learning points from using Finite Element Methods to simulate the plug-assisted thermoforming of polypropylene, which presents complex contact, thermal, friction and material modelling challenges. The paper makes recommendations as to the relative importance of these inputs in general terms with regard to correlation with experimentally gathered data, and as to the approaches to be taken to secure simulation predictions of improved accuracy.
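For illustration only, the snippet below evaluates a simple power-law flow-stress model with exponential temperature softening, a common phenomenological form for polymers near the processing window; the functional form and all constants are assumptions for demonstration and are not the constitutive models calibrated in the paper.

```python
# Generic illustrative flow-stress model; constants are invented, not fitted values.
import math

def flow_stress(strain, strain_rate, temp_c,
                k0=30.0, n=0.35, m=0.10, t_ref_c=150.0, beta=0.08):
    """Return an illustrative flow stress in MPa for the given state."""
    softening = math.exp(-beta * (temp_c - t_ref_c))   # stress drops rapidly with temperature
    return k0 * softening * (strain ** n) * (strain_rate ** m)

# Stress falls sharply across the narrow 150-160 C thermoforming window.
for temp in (150, 155, 160):
    print(temp, round(flow_stress(strain=0.5, strain_rate=1.0, temp_c=temp), 2))
```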
Abstract:
A Web-service based approach is presented which enables geographically dispersed users to share software resources over the Internet. A service-oriented software sharing system has been developed, which consists of shared applications, client applications and three types of services: an application proxy service, a proxy implementation service and an application manager service. With the aid of these services, the client applications interact with the shared applications to carry out a software sharing task. The approach satisfies the requirements of copyright protection and reuse of legacy codes. In this paper, the role of the Web services and the architecture of the system are presented first, followed by a case study to illustrate the approach developed.
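The three service roles can be pictured with a minimal in-process sketch. The Python classes below mirror the service names from the abstract, but the method names and the direct calls standing in for Web-service messaging are assumptions made purely for illustration, not the interfaces defined in the paper.

```python
# Minimal sketch of the service roles; direct calls stand in for Web-service messaging.
class SharedApplication:
    def run(self, job: str) -> str:
        return f"result of {job}"           # legacy code stays on the provider's host

class ProxyImplementationService:
    def __init__(self, app: SharedApplication):
        self._app = app                     # wraps the shared application

    def invoke(self, job: str) -> str:
        return self._app.run(job)

class ApplicationProxyService:
    def __init__(self, impl: ProxyImplementationService):
        self._impl = impl                   # endpoint the client application talks to

    def submit(self, job: str) -> str:
        return self._impl.invoke(job)

class ApplicationManagerService:
    def __init__(self):
        self._registry = {}                 # name -> ApplicationProxyService

    def register(self, name: str, proxy: ApplicationProxyService) -> None:
        self._registry[name] = proxy        # manager tracks available shared applications

    def lookup(self, name: str) -> ApplicationProxyService:
        return self._registry[name]

# A client application locates a shared application via the manager and uses its proxy,
# so the shared application itself is never copied to the client.
manager = ApplicationManagerService()
manager.register("mesher", ApplicationProxyService(ProxyImplementationService(SharedApplication())))
print(manager.lookup("mesher").submit("mesh part-42"))
```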
Abstract:
Transcript of a Panel Discussion at the Dartmouth Symposium, chaired by Eric Lyon.
Abstract:
With the rapid expansion of the Internet and the increasing demand on Web servers, many techniques have been developed to overcome the performance limitations of server hardware. Mirrored Web servers are one such technique, in which a number of servers carrying the same "mirrored" set of services are deployed and client access requests are distributed over the set of mirrored servers to even out the load. In this paper we present a generic reference software architecture for load balancing over mirrored Web servers. The architecture was designed adopting the latest NaSr architectural style [1] and described using the ADLARS [2] architecture description language. With minimal effort, different tailored product architectures can be generated from the reference architecture to serve different network protocols and server operating systems. An example product system is described and a sample Java implementation is presented.
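The core dispatching idea can be sketched in a few lines. The example below (in Python rather than the paper's Java, with a simple round-robin policy assumed) shows a dispatcher spreading requests across mirrored servers; it is not derived from the reference architecture itself.

```python
# Minimal sketch of load balancing over mirrored servers, assuming round-robin dispatch.
from itertools import cycle

class MirroredDispatcher:
    def __init__(self, servers):
        self._servers = cycle(servers)      # each server hosts the same mirrored services

    def dispatch(self, request: str) -> str:
        server = next(self._servers)        # pick the next mirror in rotation
        return f"{request} -> {server}"

dispatcher = MirroredDispatcher(["mirror-a", "mirror-b", "mirror-c"])
for req in ("GET /index", "GET /docs", "GET /search", "GET /about"):
    print(dispatcher.dispatch(req))
```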
Abstract:
The scheduling problem in distributed data-intensive computing environments has become an active research topic due to the tremendous growth in grid and cloud computing. As an innovative distributed intelligent paradigm, swarm intelligence provides a novel approach to solving these potentially intractable problems. In this paper, we formulate the scheduling problem for workflow applications with security constraints in distributed data-intensive computing environments and present a novel security constraint model. Several meta-heuristic adaptations of the particle swarm optimization algorithm are introduced to construct efficient schedules. A variable neighborhood particle swarm optimization algorithm is compared with a multi-start particle swarm optimization and a multi-start genetic algorithm. Experimental results illustrate that population-based meta-heuristic approaches usually provide a good balance between global exploration and local exploitation, and demonstrate their feasibility and effectiveness for scheduling workflow applications.
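To make the scheduling formulation concrete, the following Python sketch applies a heavily simplified discrete particle-swarm-style search to a task-to-node assignment problem, with a penalty term standing in for security constraints; the workload, trust values, penalty and search constants are all invented, and the paper's security model and variable neighborhood variant are not reproduced.

```python
# Simplified discrete PSO-style search for task-to-node scheduling with a security penalty.
import random

TASKS = [4.0, 2.0, 6.0, 3.0, 5.0]            # task processing demands (hypothetical)
NODE_SPEED = [1.0, 2.0, 1.5]                 # relative speed of each node
NODE_TRUST = [0.9, 0.5, 0.7]                 # hypothetical security level of each node
TASK_MIN_TRUST = [0.6, 0.4, 0.8, 0.4, 0.6]   # minimum trust each task requires

def cost(assignment):
    """Makespan plus a penalty for each violated security constraint."""
    load = [0.0] * len(NODE_SPEED)
    penalty = 0.0
    for task, node in enumerate(assignment):
        load[node] += TASKS[task] / NODE_SPEED[node]
        if NODE_TRUST[node] < TASK_MIN_TRUST[task]:
            penalty += 10.0
    return max(load) + penalty

def pso(particles=20, iters=100):
    random.seed(0)
    n_tasks, n_nodes = len(TASKS), len(NODE_SPEED)
    swarm = [[random.randrange(n_nodes) for _ in range(n_tasks)] for _ in range(particles)]
    best = list(min(swarm, key=cost))
    for _ in range(iters):
        for p in swarm:
            for t in range(n_tasks):
                # Pull each dimension towards the global best, with random exploration.
                if random.random() < 0.5:
                    p[t] = best[t]
                elif random.random() < 0.2:
                    p[t] = random.randrange(n_nodes)
            if cost(p) < cost(best):
                best = list(p)
    return best, cost(best)

print(pso())
```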
Abstract:
Cloud services are expanding rapidly, and organizations are consolidating their data centers in order to take advantage of the predictability, continuity, and quality of service delivered by virtualization technologies. In parallel, energy-efficient and highly secure networking is of increasing importance. Network operators, service providers, and product providers require a new network solution to efficiently tackle the increasing demands of this changing network landscape. Software-defined networking has emerged as an efficient network technology capable of supporting the dynamic nature of future network functions and intelligent applications while lowering operating costs through simplified hardware, software, and management. In this article, the question of how to achieve a successful carrier-grade network with software-defined networking is raised. Specific focus is placed on the challenges of network performance, scalability, security, and interoperability, together with proposals for potential solution directions.