879 results for software engineering: metrics
Abstract:
For many organizations, maintaining and upgrading enterprise resource planning (ERP) systems (large packaged application software) is often far more costly than the initial implementation. Systematic planning and knowledge of the fundamental maintenance processes and maintenance-related management data are required in order to effectively and efficiently administer maintenance activities. This paper reports a revelatory case study of Government Services Provider (GSP), a high-performing ERP service provider to government agencies in Australia. GSP ERP maintenance-process and maintenance-data standards are compared with the IEEE/EIA 12207 software engineering standard for custom software, also drawing upon published research, to identify how practices in the ERP context diverge from the IEEE standard. While the results show that many best practices reflected in the IEEE standard have broad relevance to software generally, divergent practices in the ERP context necessitate a shift in management focus, additional responsibilities, and different maintenance decision criteria. Study findings may provide useful guidance to practitioners, as well as input to the IEEE and other related standards.
Abstract:
Refactoring focuses on improving the reusability, maintainability and performance of programs. However, the impact of refactoring on the security of a given program has received little attention. In this work, we focus on the design of object-oriented applications and use metrics to assess the impact of a number of standard refactoring rules on their security by evaluating the metrics before and after refactoring. This assessment tells us which refactoring steps can increase the security level of a given program from the point of view of potential information flow, allowing application designers to improve their system’s security at an early stage.
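The before-and-after comparison described above can be sketched as follows; the metric used here (the fraction of classified attributes that are publicly accessible) is a hypothetical stand-in for illustration, not one of the paper's metrics.

```python
# Minimal sketch: compare a hypothetical design-level security metric
# before and after a refactoring step. The metric definition below is an
# illustrative stand-in, not the metric set used in the paper.

from dataclasses import dataclass

@dataclass
class ClassDesign:
    name: str
    classified_attributes: int         # attributes holding sensitive data
    public_classified_attributes: int  # of those, how many are public

def accessibility_metric(classes: list[ClassDesign]) -> float:
    """Lower is better: fraction of classified attributes exposed publicly."""
    total = sum(c.classified_attributes for c in classes)
    exposed = sum(c.public_classified_attributes for c in classes)
    return exposed / total if total else 0.0

before = [ClassDesign("Account", 4, 3), ClassDesign("Audit", 2, 1)]
# After an "encapsulate field" refactoring, public exposure drops.
after = [ClassDesign("Account", 4, 0), ClassDesign("Audit", 2, 1)]

delta = accessibility_metric(after) - accessibility_metric(before)
print(f"metric before={accessibility_metric(before):.2f}, "
      f"after={accessibility_metric(after):.2f}, delta={delta:+.2f}")
```

A negative delta would indicate that the refactoring reduced the exposure of classified data under this illustrative measure.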
Abstract:
We present a hierarchical model for assessing an object-oriented program's security. Security is quantified using structural properties of the program code to identify the ways in which `classified' data values may be transferred between objects. The model begins with a set of low-level security metrics based on traditional design characteristics of object-oriented classes, such as data encapsulation, cohesion and coupling. These metrics are then used to characterise higher-level properties concerning the overall readability and writability of classified data throughout the program. In turn, these metrics are then mapped to well-known security design principles such as `assigning the least privilege' and `reducing the size of the attack surface'. Finally, the entire program's security is summarised as a single security index value. These metrics allow different versions of the same program, or different programs intended to perform the same task, to be compared for their relative security at a number of different abstraction levels. The model is validated via an experiment involving five open source Java programs, using a static analysis tool we have developed to automatically extract the security metrics from compiled Java bytecode.
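To make the hierarchical roll-up concrete, the sketch below aggregates a few class-level counts into program-level readability/writability figures and then into a single index. The metric names, normalisation and weights are illustrative assumptions only, not the model's actual definitions.

```python
# Hypothetical illustration of a hierarchical roll-up: class-level
# measurements -> program-level readability/writability of classified
# data -> a single security index. Names and weights are placeholders.

classes = {
    "Account": {"classified_read_paths": 5, "classified_write_paths": 2, "attributes": 8},
    "Session": {"classified_read_paths": 1, "classified_write_paths": 1, "attributes": 4},
}

def program_level(classes):
    reads = sum(c["classified_read_paths"] for c in classes.values())
    writes = sum(c["classified_write_paths"] for c in classes.values())
    size = sum(c["attributes"] for c in classes.values())
    # Normalise by design size so programs of different sizes are comparable.
    return {"readability": reads / size, "writability": writes / size}

def security_index(level, w_read=0.5, w_write=0.5):
    # Lower read/write exposure of classified data -> higher index.
    exposure = w_read * level["readability"] + w_write * level["writability"]
    return 1.0 / (1.0 + exposure)

level = program_level(classes)
print(level, f"security index = {security_index(level):.3f}")
```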
Abstract:
This paper considers the problem of building a software architecture for a human-robot team. The objective of the team is to build a multi-attribute map of the world by performing information fusion. A decentralized approach to information fusion is adopted to achieve the system properties of scalability and survivability. Decentralization imposes constraints on the design of the architecture and its implementation. We show how a Component-Based Software Engineering approach can address these constraints. The architecture is implemented using Orca – a component-based software framework for robotic systems. Experimental results from a deployed system comprising an unmanned air vehicle, a ground vehicle, and two human operators are presented. A section on lessons learned, which may be applicable to other distributed systems with complex algorithms, is included. We also compare Orca to the Player software framework in the context of distributed systems.
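As a generic illustration of decentralised information fusion (not specific to Orca or the deployed system), two platforms holding independent Gaussian estimates of the same map attribute can fuse them in information form without a central node; the 1-D sketch below assumes independent errors.

```python
# Minimal 1-D sketch of decentralised data fusion in information form.
# Each platform holds an estimate (mean, variance) of the same map cell
# attribute; assuming independent errors, fusion just sums information.
# This is a generic illustration, not the Orca/Player implementation.

def to_information(mean, variance):
    y = 1.0 / variance          # information (inverse covariance)
    return y, y * mean          # information value, information-weighted mean

def fuse(estimates):
    Y = sum(to_information(m, v)[0] for m, v in estimates)
    yv = sum(to_information(m, v)[1] for m, v in estimates)
    return yv / Y, 1.0 / Y      # fused mean, fused variance

uav_estimate = (2.1, 0.5)       # (mean, variance) from the air vehicle
ugv_estimate = (1.8, 0.2)       # from the ground vehicle
mean, var = fuse([uav_estimate, ugv_estimate])
print(f"fused mean={mean:.2f}, variance={var:.2f}")
```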
Abstract:
The quality of conceptual business process models is highly relevant for the design of corresponding information systems. In particular, a precise measurement of model characteristics can be beneficial from a business perspective, helping to save costs thanks to early error detection. This is just as true from a software engineering point of view. In this latter case, models facilitate stakeholder communication and software system design. Research has investigated several proposals regarding measures for business process models, from a rather correlational perspective. This is helpful for understanding, for example, size and complexity as general driving forces of error probability. Yet, design decisions usually have to build on thresholds, which can reliably indicate that a certain counter-action has to be taken. This cannot be achieved only by providing measures; it requires a systematic identification of effective and meaningful thresholds. In this paper, we derive thresholds for a set of structural measures for predicting errors in conceptual process models. To this end, we use a collection of 2,000 business process models from practice as a means of determining thresholds, applying an adaptation of the ROC curves method. Furthermore, an extensive validation of the derived thresholds was conducted using 429 EPC models from an Australian financial institution. Finally, significant thresholds were adapted to refine existing modeling guidelines in a quantitative way.
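The threshold-derivation step can be sketched generically: for one structural measure, sweep candidate cut-offs over labelled models and keep the one maximising Youden's J (sensitivity + specificity − 1). This is a standard ROC-style procedure with made-up data, not the exact adaptation or thresholds reported in the paper.

```python
# Sketch of deriving a threshold for one structural measure (e.g. size)
# from labelled process models, by maximising Youden's J over candidate
# cut-offs. Generic ROC-style procedure; data values are made up.

def youden_threshold(samples):
    """samples: list of (measure_value, has_error) pairs."""
    best_t, best_j = None, -1.0
    for t, _ in samples:
        tp = sum(1 for v, e in samples if v >= t and e)
        fn = sum(1 for v, e in samples if v < t and e)
        fp = sum(1 for v, e in samples if v >= t and not e)
        tn = sum(1 for v, e in samples if v < t and not e)
        sens = tp / (tp + fn) if tp + fn else 0.0
        spec = tn / (tn + fp) if tn + fp else 0.0
        j = sens + spec - 1.0
        if j > best_j:
            best_t, best_j = t, j
    return best_t, best_j

models = [(12, False), (18, False), (25, True), (31, True), (22, False), (40, True)]
threshold, j = youden_threshold(models)
print(f"suggested threshold: measure >= {threshold} (J={j:.2f})")
```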
Abstract:
This work covers projects funded by the Australian National Data Service (ANDS). The specific projects that were funded included: a) Greenhouse Gas Emissions Project (N2O) with Prof. Peter Grace from QUT’s Institute of Sustainable Resources. b) Q150 Project for the management of multimedia data collected at Festival events with Prof. Phil Graham from QUT’s Institute of Creative Industries. c) Bio-diversity environmental sensing with Prof. Paul Roe from the QUT Microsoft eResearch Centre. For the purposes of these projects, the Eclipse Rich Client Platform (Eclipse RCP) was chosen as an appropriate software development framework within which to develop the respective software. This poster will present a brief overview of the requirements of the projects and of the project team’s experiences in using Eclipse RCP, report on the advantages and disadvantages of using Eclipse, and give the team’s perspective on Eclipse as an integrated tool for supporting future data management requirements.
Abstract:
Post-deployment maintenance and evolution can account for up to 75% of the cost of developing a software system. Software refactoring can reduce the costs associated with evolution by improving system quality. Although refactoring can yield benefits, the process includes potentially complex, error-prone, tedious and time-consuming tasks. It is these tasks that automated refactoring tools seek to address. However, although the refactoring process is well-defined, current refactoring tools do not support the full process. To develop better automated refactoring support, we have completed a usability study of software refactoring tools. In the study, we analysed the task of software refactoring using the ISO 9241-11 usability standard and Fitts' List of task allocation. Expanding on this analysis, we reviewed 11 collections of usability guidelines and combined these into a single list of 38 guidelines. From this list, we developed 81 usability requirements for refactoring tools. Using these requirements, the software refactoring tools Eclipse 3.2, Condenser 1.05, RefactorIT 2.5.1, and Eclipse 3.2 with the Simian UI 2.2.12 plugin were studied. Based on the analysis, we have selected a subset of the requirements that can be incorporated into a prototype refactoring tool intended to address the full refactoring process.
Abstract:
With the wide diffusion of Business Process Management (BPM) automation suites, the possibility of managing process-related risks arises. This paper introduces an innovative framework for process-related risk management and describes a working implementation realized by extending the YAWL system. The framework covers three aspects of risk management: risk monitoring, risk prevention, and risk mitigation. Risk monitoring functionality is provided using a sensor-based architecture, where sensors are defined at design time and used at run-time for monitoring purposes. Risk prevention functionality is provided in the form of suggestions about what should be executed, by whom, and how, through the use of decision trees. Finally, risk mitigation functionality is provided as a sequence of remedial actions (e.g. reallocating, skipping, or rolling back a work item) that should be executed to restore the process to a normal situation.
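The sensor-based monitoring idea can be sketched in general terms: a sensor pairs a condition over run-time process data with a suggested remedial action and is evaluated periodically. The classes, conditions and thresholds below are illustrative assumptions, not the API of the YAWL extension.

```python
# Illustrative sketch of sensor-based risk monitoring: a sensor pairs a
# condition over run-time process data with a risk name and a suggested
# remedial action. Not the actual YAWL extension API.

from dataclasses import dataclass
from typing import Callable

@dataclass
class RiskSensor:
    name: str
    condition: Callable[[dict], bool]  # evaluated against process state
    mitigation: str                    # suggested remedial action

sensors = [
    RiskSensor("overdue work item",
               lambda s: s["elapsed_minutes"] > s["deadline_minutes"],
               "reallocate work item"),
    RiskSensor("budget overrun",
               lambda s: s["cost_so_far"] > 1.2 * s["budget"],
               "skip optional work item"),
]

state = {"elapsed_minutes": 95, "deadline_minutes": 60,
         "cost_so_far": 900.0, "budget": 1000.0}

for sensor in sensors:
    if sensor.condition(state):
        print(f"risk detected by '{sensor.name}': suggest {sensor.mitigation}")
```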
Abstract:
Existing secure software development principles tend to focus on coding vulnerabilities, such as buffer or integer overflows, that apply to individual program statements, or issues associated with the run-time environment, such as component isolation. Here we instead consider software security from the perspective of potential information flow through a program’s object-oriented module structure. In particular, we define a set of quantifiable "security metrics" which allow programmers to quickly and easily assess the overall security of a given source code program or object-oriented design. Although measuring quality attributes of object-oriented programs for properties such as maintainability and performance has been well-covered in the literature, metrics which measure the quality of information security have received little attention. Moreover, existing security-relevant metrics assess a system either at a very high level, i.e., the whole system, or at a fine level of granularity, i.e., with respect to individual statements. These approaches make it hard and expensive to recognise a secure system from an early stage of development. Instead, our security metrics are based on well-established compositional properties of object-oriented programs (i.e., data encapsulation, cohesion, coupling, composition, extensibility, inheritance and design size), combined with data flow analysis principles that trace potential information flow between high- and low-security system variables. We first define a set of metrics to assess the security quality of a given object-oriented system based on its design artifacts, allowing defects to be detected at an early stage of development. We then extend these metrics to produce a second set applicable to object-oriented program source code. The resulting metrics make it easy to compare the relative security of functionally-equivalent system designs or source code programs so that, for instance, the security of two different revisions of the same system can be compared directly. This capability is further used to study the impact of specific refactoring rules on system security more generally, at both the design and code levels. By measuring the relative security of various programs refactored using different rules, we thus provide guidelines for the safe application of refactoring steps to security-critical programs. Finally, to make it easy and efficient to measure a system design or program’s security, we have also developed a stand-alone software tool which automatically analyses and measures the security of UML designs and Java program code. The tool’s capabilities are demonstrated by applying it to a number of security-critical system designs and Java programs. Notably, the validity of the metrics is demonstrated empirically through measurements that confirm our expectation that program security typically improves as bugs are fixed, but worsens as new functionality is added.
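The data-flow side of the approach can be illustrated with a toy taint-propagation pass over a set of assignments: starting from high-security variables, propagate along assignment edges and flag any low-security sink that becomes reachable. Variable names and classifications are invented for illustration; the thesis metrics are considerably richer.

```python
# Toy sketch of tracing potential information flow from high-security
# sources to low-security sinks over a graph of assignments. Variable
# names and classifications are invented for illustration.

assignments = {              # target <- sources
    "tmp":      {"password"},
    "digest":   {"tmp", "salt"},
    "log_line": {"digest"},
}
high = {"password"}          # classified sources
low_sinks = {"log_line"}     # publicly observable variables

tainted = set(high)
changed = True
while changed:               # propagate until a fixed point is reached
    changed = False
    for target, sources in assignments.items():
        if target not in tainted and sources & tainted:
            tainted.add(target)
            changed = True

leaks = tainted & low_sinks
print("potential classified flow into:", leaks or "none")
```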
Abstract:
IEEE 802.11p is the new standard for intervehicular communications (IVC) using the 5.9 GHz frequency band; it is planned to be widely deployed to enable cooperative systems. The use and performance of 802.11p have been studied theoretically and in simulations over the past years. Unfortunately, many of these results have not been confirmed by on-track experimentation. In this paper, we describe field trials of 802.11p technology with our test vehicles; metrics such as maximum range, latency and frame loss are examined. We then propose a detailed model of 802.11p that can be used to accurately simulate its performance within Cooperative Systems (CS) applications.
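Post-processing the frame loss and latency metrics from a trial log is straightforward; the sketch below assumes a hypothetical log of (sequence number, send time, receive time or None) records rather than any actual trial data.

```python
# Sketch: compute frame loss ratio and latency statistics from a
# hypothetical field-trial log of (seq, t_sent, t_received_or_None).
# The log format and values are illustrative assumptions.

from statistics import mean

log = [
    (1, 0.000, 0.0021),
    (2, 0.100, 0.1019),
    (3, 0.200, None),          # frame lost
    (4, 0.300, 0.3025),
]

latencies = [t_rx - t_tx for _, t_tx, t_rx in log if t_rx is not None]
loss_ratio = 1.0 - len(latencies) / len(log)

print(f"frame loss: {loss_ratio:.0%}")
print(f"mean latency: {mean(latencies) * 1000:.2f} ms, "
      f"max latency: {max(latencies) * 1000:.2f} ms")
```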
Abstract:
Social media tools are starting to become mainstream and those working in the software development industry are often ahead of the game in terms of using current technological innovations to improve their work. With the advent of outsourcing and distributed teams the software industry is ideally placed to take advantage of social media technologies, tools and environments. This paper looks at how social media is being used by early adopters within the software development industry. Current tools and trends in social media tool use are described and critiqued: what works and what doesn't. We use industrial case studies from platform development, commercial application development and government contexts which provide a clear picture of the emergent state of the art. These real world experiences are then used to show how working collaboratively in geographically dispersed teams, enabled by social media, can enhance and improve the development experience.
Abstract:
A customer-reported problem (or Trouble Ticket) in software maintenance is typically solved by one or more maintenance engineers. The decision of allocating the ticket to one or more engineers is generally taken by the lead, based on customer delivery deadlines and a guided complexity assessment from each maintenance engineer. The key challenge in such a scenario is twofold: untruthful (inflated) elicitation of ticket complexity by each engineer to the lead, and the decision of allocating the ticket to a group of engineers who will solve the ticket within the customer deadline. The decision of allocation should ensure Individual and Coalitional Rationality along with Coalitional Stability. In this paper we use game theory to examine the issue of truthful elicitation of ticket complexities by engineers for solving a ticket as a group given a specific customer delivery deadline. We formulate this problem as a strategic-form game and propose two mechanisms: (1) Division of Labor (DOL) and (2) Extended Second Price (ESP). In the proposed mechanisms, we show that truth-telling by each engineer constitutes a Dominant Strategy Nash Equilibrium of the underlying game. We also analyze the existence of the Individual Rationality (IR) and Coalitional Rationality (CR) properties to motivate voluntary and group participation. We use the Core, a solution concept from cooperative game theory, to analyze the stability of the proposed group based on the allocation and payments.
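The truthfulness result is in the spirit of Vickrey-style second-price mechanisms. As a generic illustration only (not the paper's DOL or ESP mechanism), allocating the ticket to the engineer with the lowest reported effort while paying the second-lowest report removes the incentive to inflate one's own estimate.

```python
# Generic second-price (Vickrey-style) illustration of why truthful
# reporting can be a dominant strategy: the engineer with the lowest
# reported effort wins the ticket but is paid the second-lowest report,
# so inflating one's own report cannot raise one's payment. This is a
# textbook sketch, not the paper's DOL or ESP mechanism.

def allocate_second_price(reports):
    """reports: dict engineer -> reported effort (e.g. person-hours)."""
    ranked = sorted(reports.items(), key=lambda kv: kv[1])
    winner, _ = ranked[0]
    payment = ranked[1][1]      # second-lowest reported effort
    return winner, payment

reports = {"alice": 6.0, "bob": 9.0, "carol": 7.5}
winner, payment = allocate_second_price(reports)
print(f"{winner} is allocated the ticket and paid for {payment} hours")
```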
Abstract:
Reuse is at the heart of major improvements in productivity and quality in Software Engineering. Both Model Driven Engineering (MDE) and Software Product Line Engineering (SPLE) are software development paradigms that promote reuse. Specifically, they promote systematic reuse and a departure from craftsmanship towards an industrialization of the software development process. MDE and SPLE have established their benefits separately. Their combination, here called Model Driven Product Line Engineering (MDPLE), gathers together the advantages of both. Nevertheless, this blending requires MDE to be recast in SPLE terms. This has implications on both the core assets and the software development process. The challenges are twofold: (i) models become central core assets from which products are obtained, and (ii) the software development process needs to cater for the changes that SPLE and MDE introduce. This dissertation proposes a solution to the first challenge following a feature-oriented approach, with an emphasis on reuse and early detection of inconsistencies. The second part is dedicated to assembly processes, a clear example of the complexity MDPLE introduces in software development processes. This work advocates for a new discipline inside the general software development process, i.e., Assembly Plan Management, which raises the abstraction level and increases reuse in such processes. Different case studies illustrate the presented ideas.