71 results for Computer software--Development


Relevance: 90.00%

Abstract:

A framework is a reusable design that requires software components to function. To instantiate a framework, a software engineer must provide the software components required by the framework. To do this effectively, the framework-component interfaces must be specified so the software engineer knows what assumptions the framework makes about the components, and so the components can be verified against these assumptions. This paper presents an approach to specifying software frameworks. The approach involves the specification of the framework’s syntax, semantics, and the interfaces between the framework and its components. The approach is demonstrated with a simple case study.
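
No code accompanies the abstract; as a rough illustration of the idea (not the paper's specification notation), the Python sketch below states a framework's assumptions about a component as an explicit interface plus a check against those assumptions. The names StorageComponent, check_component, and DictStorage are invented for this example.

```python
from abc import ABC, abstractmethod

class StorageComponent(ABC):
    """Hypothetical component interface a framework might require.

    Framework assumptions: put() accepts any key, and get() returns
    exactly the last value stored under that key (last write wins).
    """

    @abstractmethod
    def put(self, key: str, value: str) -> None: ...

    @abstractmethod
    def get(self, key: str) -> str: ...

def check_component(component: StorageComponent) -> bool:
    """Verify a supplied component against the framework's assumptions."""
    component.put("k", "v1")
    component.put("k", "v2")
    return component.get("k") == "v2"   # last-write-wins assumption

class DictStorage(StorageComponent):
    """A component the engineer supplies when instantiating the framework."""
    def __init__(self) -> None:
        self._data: dict[str, str] = {}
    def put(self, key: str, value: str) -> None:
        self._data[key] = value
    def get(self, key: str) -> str:
        return self._data[key]

print(check_component(DictStorage()))   # True: component meets the assumptions
```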

Relevance: 90.00%

Abstract:

Proof reuse, or analogical reasoning, involves reusing the proof of a source theorem in the proof of a target conjecture. We have developed a method for proof reuse that is based on the generalisation replay paradigm described in the literature, in which a generalisation of the source proof is replayed to construct the target proof. In this paper, we describe the novel aspects of our method, which include a technique for producing more accurate source proof generalisations (using knowledge of the target goal), as well as a flexible replay strategy that allows the user to set various parameters to control the size and the shape of the search space. Finally, we report on the results of applying this method to a case study from the realm of software verification.
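
As a toy illustration of the generalisation-replay idea (not the authors' method or prover), the sketch below records a source proof as a list of rule applications, abstracts its source-specific constants, and re-instantiates them for a target goal. The names Step, generalise, and replay are invented for this example.

```python
from dataclasses import dataclass

@dataclass
class Step:
    rule: str          # name of the inference rule that was applied
    argument: object   # rule argument; may be generalised to a placeholder

def generalise(source_proof, source_constants):
    """Abstract source-specific constants so steps can be re-instantiated."""
    return [Step(s.rule, "?" if s.argument in source_constants else s.argument)
            for s in source_proof]

def replay(general_proof, target_constants):
    """Re-instantiate placeholders with target constants, one candidate each.

    In a real prover each placeholder is a search point; user-set parameters
    would bound how many candidates are tried (the size of the search space).
    """
    target_proof = []
    for step in general_proof:
        if step.argument == "?":
            target_proof.append(Step(step.rule, target_constants[0]))
        else:
            target_proof.append(step)
    return target_proof

source = [Step("induction", "n"), Step("simplify", None)]
print(replay(generalise(source, {"n"}), ["m"]))
```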

Relevance: 90.00%

Abstract:

It is not surprising that students are unconvinced about the benefits of formal methods if we do not show them how these methods can be integrated with other activities in the software lifecycle. In this paper, we describe an approach to integrating formal specification with more traditional verification and validation techniques in a course that teaches formal specification and specification-based testing. This is accomplished through a series of assignments on a single software component that involves specifying the component in Object-Z, validating that specification using inspection and a specification animation tool, and then testing an implementation of the specification using test cases derived from the formal specification.
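
As a minimal sketch of specification-based testing (in plain Python rather than Object-Z), the example below writes the specification of a bounded buffer as pre- and postcondition predicates and runs a test case derived from that specification against an implementation. All names here are illustrative.

```python
MAX = 3

def pre_add(state, item):            # precondition of the add operation
    return len(state) < MAX

def post_add(before, after, item):   # postcondition of the add operation
    return after == before + [item]

class Buffer:                        # implementation under test
    def __init__(self):
        self.items = []
    def add(self, item):
        self.items.append(item)

def run_test(sequence):
    buf = Buffer()
    for item in sequence:
        before = list(buf.items)
        assert pre_add(before, item), "test data violates the precondition"
        buf.add(item)
        assert post_add(before, buf.items, item), "implementation violates the spec"

run_test(["a", "b", "c"])            # a test case derived from the specification
print("all checks passed")
```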

Relevance: 90.00%

Abstract:

Achieving consistency between a specification and its implementation is an important part of software development. In previous work, we have presented a method and tool support for testing a formal specification using animation and then verifying an implementation of that specification. The method is based on a testgraph, which provides a partial model of the application under test. The testgraph is used in combination with an animator to generate test sequences for testing the formal specification. The same testgraph is used during testing to execute those same sequences on the implementation and to ensure that the implementation conforms to the specification. So far, the method and its tool support have been applied to software components that can be accessed through an application programming interface (API). In this paper, we use an industrially-based case study to discuss the problems associated with applying the method to a software system with a graphical user interface (GUI). In particular, the lack of a standardised interface, as well as controllability and observability problems, makes it difficult to automate the testing of the implementation. The method can still be applied, but the amount of testing that can be carried out on the implementation is limited by the manual effort involved.
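
The following is a simplified sketch of the testgraph idea described above (illustrative names, not the authors' tool): a testgraph is a directed graph whose arcs are labelled with operations, paths through the graph give test sequences, and each sequence is executed against both a stand-in for the animated specification and the implementation.

```python
testgraph = {                      # node -> list of (operation, next node)
    "empty": [("push", "one")],
    "one":   [("push", "two"), ("pop", "empty")],
    "two":   [("pop", "one")],
}

def sequences(node, depth):
    """Enumerate operation sequences as bounded paths through the testgraph."""
    if depth == 0:
        yield []
        return
    for op, succ in testgraph.get(node, []):
        for rest in sequences(succ, depth - 1):
            yield [op] + rest

def spec_oracle(seq):              # stands in for the animated specification
    stack = []
    for op in seq:
        stack.append(1) if op == "push" else stack.pop()
    return len(stack)

def implementation(seq):           # implementation under test
    count = 0
    for op in seq:
        count = count + 1 if op == "push" else count - 1
    return count

for seq in sequences("empty", 3):
    assert implementation(seq) == spec_oracle(seq), seq
print("implementation agrees with the specification on all generated sequences")
```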

Relevance: 90.00%

Abstract:

There is growing interest in the use of context-awareness as a technique for developing pervasive computing applications that are flexible, adaptable, and capable of acting autonomously on behalf of users. However, context-awareness introduces various software engineering challenges, as well as privacy and usability concerns. In this paper, we present a conceptual framework and software infrastructure that together address known software engineering challenges, and enable further practical exploration of social and usability issues by facilitating the prototyping and fine-tuning of context-aware applications.
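
The abstract does not show the infrastructure itself; as a generic illustration of the programming style such an infrastructure supports, the sketch below lets an application subscribe to context changes instead of polling sensors directly. The names ContextStore, on_change, and update are invented for this example.

```python
class ContextStore:
    """Minimal context repository fed by sensors and queried by applications."""

    def __init__(self):
        self._values, self._listeners = {}, []

    def on_change(self, callback):
        """Applications register to be notified when context changes."""
        self._listeners.append(callback)

    def update(self, attribute, value):
        """Called by sensor drivers (or by users) to publish new context."""
        self._values[attribute] = value
        for listener in self._listeners:
            listener(attribute, value)

store = ContextStore()
store.on_change(lambda attr, val: print(f"adapting: {attr} is now {val}"))
store.update("location", "meeting room")   # simulated sensor input
```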

Relevance: 90.00%

Abstract:

A major challenge in teaching software engineering to undergraduates is that most students have limited industry experience, so the problems addressed are unknown and hence unappreciated. Issues of scope prevent a realistic software engineering experience, and students often graduate with a simplistic view of software engineering’s challenges. Problems and Programmers (PnP) is a competitive, physical card game that simulates the software engineering process from requirements specification to product delivery. Deliverables are abstracted, allowing a focus on process issues and for lessons to be learned in a relatively short time. The rules are easy to understand and the game’s physical nature allows for face-to-face interaction between players. The game’s developers have described PnP in previous publications, but this paper reports the game’s use within a larger educational scheme. Students learn and play PnP, and then are required to create a software requirements specification based on the game. Finally, students reflect on the game’s strengths and weaknesses and their experiences in an individual essay. The paper discusses this approach, students’ experiences and overall outcomes, and offers an independent, critical look at the game, its use, and potential improvements.

Relevance: 90.00%

Abstract:

This paper shows how formal and informal modeling languages can be used in combination within the MDA framework, and how transformations between models in these languages can be achieved using an MDA development environment. The integrated approach also provides an effective verification and validation (V&V) technique for the MDA.
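
As a toy model-to-model transformation in the spirit of the approach described above (the model structure and names are invented for illustration, and are not the paper's transformation language), the sketch below maps an informal UML-like class model to a formal schema-like model.

```python
uml_model = {
    "Account":  {"attributes": {"balance": "Integer"}},
    "Customer": {"attributes": {"name": "String"}},
}

def to_formal(model):
    """Transform each class into a schema with typed state variables."""
    return {name: {"state": dict(cls["attributes"]), "invariant": "true"}
            for name, cls in model.items()}

print(to_formal(uml_model))
```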

Relevance: 90.00%

Abstract:

Experiments with simulators allow psychologists to better understand the causes of human errors and build models of cognitive processes to be used in human reliability assessment (HRA). This paper investigates an approach to task failure analysis based on patterns of behaviour, in contrast to more traditional event-based approaches. It considers, as a case study, a formal model of an air traffic control (ATC) system that incorporates controller behaviour. The cognitive model is formalised in the CSP process algebra. Patterns of behaviour are expressed as temporal logic properties. Then a model-checking technique is used to verify whether the decomposition of the operator's behaviour into patterns is sound and complete with respect to the cognitive model. The decomposition is shown to be incomplete and a new behavioural pattern is identified, which appears to have been overlooked in the analysis of the data provided by the experiments with the simulator. This illustrates how formal analysis of operator models can yield fresh insights into how failures may arise in interactive systems.
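
The paper expresses behaviour patterns as temporal logic properties and model checks them against a CSP model; as a much simpler illustration, the sketch below checks one such pattern ("every observed conflict is eventually followed by a resolution action") over finite recorded traces. The event names are invented for this example.

```python
def eventually_follows(trace, trigger, response):
    """Finite-trace check of the pattern G(trigger -> F response)."""
    for i, event in enumerate(trace):
        if event == trigger and response not in trace[i + 1:]:
            return False
    return True

trace_ok  = ["scan", "conflict_detected", "issue_clearance", "scan"]
trace_bad = ["scan", "conflict_detected", "scan"]

print(eventually_follows(trace_ok,  "conflict_detected", "issue_clearance"))  # True
print(eventually_follows(trace_bad, "conflict_detected", "issue_clearance"))  # False
```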

Relevance: 90.00%

Abstract:

Formal methods have significant benefits for developing safety critical systems, in that they allow for correctness proofs, model checking of safety and liveness properties, deadlock checking, etc. However, when developing real-world systems, formal methods do not scale well and demand specialist skills. For these reasons, development and analysis of large-scale safety critical systems will require effective integration of formal and informal methods. In this paper, we use such an integrative approach to automate Failure Modes and Effects Analysis (FMEA), a widely used system safety analysis technique, using a high-level graphical modelling notation (Behavior Trees) and model checking. We inject component failure modes into the Behavior Trees and translate the resulting Behavior Trees to SAL code. This enables us to model check whether, in the presence of these faults, the system satisfies its safety properties, specified as temporal logic formulas. The benefit of this process is tool support that automates the tedious and error-prone aspects of FMEA.
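
The paper injects failure modes into Behavior Trees and model checks the result with SAL; the toy sketch below captures only the overall FMEA loop: for every component failure mode, the fault is injected into a simple system model and a safety property is evaluated. All model details here are invented for illustration.

```python
def system(valve_ok, sensor_ok, demand):
    """Toy model: pressure is relieved when the sensor sees demand and the valve works."""
    relieved = demand and sensor_ok and valve_ok
    return {"overpressure": demand and not relieved}

def safety_property(state):
    return not state["overpressure"]

failure_modes = {                      # component -> failure mode -> injected fault
    "valve":  {"stuck_closed": {"valve_ok": False}},
    "sensor": {"no_reading":   {"sensor_ok": False}},
}

nominal = {"valve_ok": True, "sensor_ok": True, "demand": True}
for component, modes in failure_modes.items():
    for mode, fault in modes.items():
        state = system(**{**nominal, **fault})        # inject the failure mode
        effect = "no effect" if safety_property(state) else "safety property violated"
        print(f"{component}/{mode}: {effect}")
```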

Relevance: 90.00%

Abstract:

Current initiatives in the field of Business Process Management (BPM) strive for the development of a BPM standard notation by promoting the Business Process Modeling Notation (BPMN). However, such a proposed standard notation needs to be carefully examined. Ontological analysis is an established theoretical approach to evaluating modelling techniques. This paper reports on the outcomes of an ontological analysis of BPMN and explores identified issues by reporting on interviews conducted with BPMN users in Australia. Complementing this analysis, we consolidate our findings with previous ontological analyses of process modelling notations to deliver a comprehensive assessment of BPMN.

Relevance: 90.00%

Abstract:

Context-aware applications rely on implicit forms of input, such as sensor-derived data, in order to reduce the need for explicit input from users. They are especially relevant for mobile and pervasive computing environments, in which user attention is at a premium. To support the development of context-aware applications, techniques for modelling context information are required. These must address a unique combination of requirements, including the ability to model information supplied by both sensors and people, to represent imperfect information, and to capture context histories. As the field of context-aware computing is relatively new, mature solutions for context modelling do not exist, and researchers rely on information modelling solutions developed for other purposes. In our research, we have been using a variant of Object-Role Modeling (ORM) to model context. In this paper, we reflect on our experiences and outline some research challenges in this area.
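
The paper models context with a variant of ORM; as a plain data-structure illustration of the requirements it lists (facts supplied by both sensors and people, imperfect information, and context histories), the sketch below tags each fact with its source and a confidence value and keeps every fact as history. Field and class names are invented for this example.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class ContextFact:
    subject: str        # e.g. a user
    attribute: str      # e.g. "location"
    value: str
    source: str         # "sensor" or "user"
    confidence: float   # imperfect information: 0.0 - 1.0
    observed_at: datetime

history: list[ContextFact] = []   # context history: facts are appended, never overwritten
history.append(ContextFact("alice", "location", "office",
                           source="sensor", confidence=0.7,
                           observed_at=datetime(2024, 1, 1, 9, 0)))
history.append(ContextFact("alice", "location", "meeting room",
                           source="user", confidence=1.0,
                           observed_at=datetime(2024, 1, 1, 10, 0)))

# current belief: the most recent, sufficiently confident fact
current = max((f for f in history if f.confidence >= 0.5),
              key=lambda f: f.observed_at)
print(current.value)   # "meeting room"
```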

Relevance: 90.00%

Abstract:

The design, development, and use of complex systems models raise a unique class of challenges and potential pitfalls, many of which are commonly recurring problems. Over time, researchers gain experience in this form of modelling, choosing algorithms, techniques, and frameworks that improve the quality, confidence level, and speed of development of their models. This increasing collective experience of complex systems modellers is a resource that should be captured. Fields such as software engineering and architecture have benefited from the development of generic solutions to recurring problems, called patterns. Using pattern development techniques from these fields, insights from communities such as learning and information processing, data mining, bioinformatics, and agent-based modelling can be identified and captured. Collections of such 'pattern languages' would allow knowledge gained through experience to be readily accessible to less-experienced practitioners and to other domains. This paper proposes a methodology for capturing the wisdom of computational modellers by introducing example visualization patterns, and a pattern classification system for analyzing the relationship between micro and macro behaviour in complex systems models. We anticipate that a new field of complex systems patterns will provide an invaluable resource for both practising and future generations of modellers.

Relevance: 90.00%

Abstract:

Model transformations are an integral part of model-driven development. Incremental updates are a key execution scenario for transformations in model-based systems, and are especially important for the evolution of such systems. This paper presents a strategy for the incremental maintenance of declarative, rule-based transformation executions. The strategy involves recording dependencies of the transformation execution on information from source models and from the transformation definition. Changes to the source models or the transformation itself can then be directly mapped to their effects on transformation execution, allowing changes to target models to be computed efficiently. This particular approach has many benefits. It supports changes to both source models and transformation definitions, it can be applied to incomplete transformation executions, and a priori knowledge of volatility can be used to further increase the efficiency of change propagation.
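
The sketch below illustrates the dependency-recording idea described above (it is not the paper's transformation engine): while each rule runs, the source elements it reads are recorded, so a later source change re-executes only the rules whose recorded dependencies include the changed element. The rule and model names are invented for this example.

```python
source = {"A": {"name": "Account"}, "B": {"name": "Customer"}}
target = {}
dependencies = {}                 # target element -> source elements it read

def rule_class_to_table(elem_id):
    """A declarative-style rule: one source class maps to one target table."""
    dependencies[elem_id] = {elem_id}            # record what this rule read
    target[elem_id] = {"table": source[elem_id]["name"].lower()}

def transform_all():
    """Initial, full transformation execution."""
    for elem_id in source:
        rule_class_to_table(elem_id)

def propagate_change(changed_id):
    """Incremental update: re-run only rules that depend on the changed element."""
    for tgt_id, deps in dependencies.items():
        if changed_id in deps:
            rule_class_to_table(tgt_id)

transform_all()
source["A"]["name"] = "SavingsAccount"
propagate_change("A")             # only the target element for "A" is recomputed
print(target)
```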