891 results for Computer aided software engineering


Relevance:

100.00%

Publisher:

Abstract:

Formal specifications can precisely and unambiguously define the required behavior of a software system or component. However, formal specifications are complex artifacts that need to be verified to ensure that they are consistent, complete, and validated against the requirements. Specification testing or animation tools exist to assist with this by allowing the specifier to interpret or execute the specification. However, currently little is known about how to do this effectively. This article presents a framework and tool support for the systematic testing of formal, model-based specifications. Several important generic properties that should be satisfied by model-based specifications are first identified. Following the idea of mutation analysis, we then use variants or mutants of the specification to check that these properties are satisfied. The framework also allows the specifier to test application-specific properties. All properties are tested for a range of states that are defined by the tester in the form of a testgraph, which is a directed graph that partially models the states and transitions of the specification being tested. Tool support is provided for the generation of the mutants, for automatically traversing the testgraph and executing the test cases, and for reporting any errors. The framework is demonstrated on a small specification and its application to three larger specifications is discussed. Experience indicates that the framework can be used effectively to test small to medium-sized specifications and that it can reveal a significant number of problems in these specifications.
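The testgraph idea can be illustrated with a small sketch. The graph, the execute hook, and the property checks below are hypothetical stand-ins, not the article's actual tool or specification notation; a minimal sketch, assuming the specification and its mutants are exposed as executable interpreters:

```python
from collections import deque

# Hypothetical testgraph: nodes are abstract states of the specification,
# edges are operations that drive it from one state to another.
TESTGRAPH = {
    "empty":     [("add_item", "one_item")],
    "one_item":  [("add_item", "two_items"), ("remove_item", "empty")],
    "two_items": [],
}

def traverse_and_test(spec, mutants, properties, start="empty"):
    """Walk every edge of the testgraph once; after each operation, check
    every property on the original spec and on each surviving mutant.
    spec.execute(state, op) is assumed to drive the interpreter to `state`
    and then apply `op`."""
    surviving = set(mutants)            # mutants not yet distinguished ("killed")
    seen, queue = {start}, deque([start])
    while queue:
        state = queue.popleft()
        for operation, target in TESTGRAPH[state]:
            spec.execute(state, operation)           # hypothetical interpreter hook
            for mutant in surviving:
                mutant.execute(state, operation)
            for prop in properties:
                assert prop(spec), f"{prop.__name__} violated after {operation} in {state}"
                surviving = {m for m in surviving if prop(m)}
            if target not in seen:
                seen.add(target)
                queue.append(target)
    return surviving   # mutants never distinguished suggest the tests are too weak
```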

Relevance:

100.00%

Publisher:

Abstract:

For many years in the area of business systems analysis and design, practitioners and researchers alike have been searching for some comprehensive basis on which to evaluate, compare, and engineer techniques that are promoted for use in the modelling of systems' requirements. To date, while many frameworks, factors, and facets have been forthcoming, none appear to be based on a sound theory. In light of this dilemma, over the last 10 years, attention has been devoted by researchers to the use of ontology to provide some theoretical basis for the advancement of the business systems modelling discipline. This paper outlines how we have used a particular ontology for this purpose over the last five years. In particular, we have learned that the understandability and the applicability of the selected ontology must be clear to IS professionals, that the results of any ontological evaluation must be tempered by the economic efficiency considerations of the stakeholders involved, and that ontologies may have to be focused on the business purpose and type of user involved in the modelling situation.

Relevance:

100.00%

Publisher:

Abstract:

The data structure of an information system can significantly impact the ability of end users to efficiently and effectively retrieve the information they need. This research develops a methodology for evaluating, ex ante, the relative desirability of alternative data structures for end user queries. This research theorizes that the data structure that yields the lowest weighted average complexity for a representative sample of information requests is the most desirable data structure for end user queries. The theory was tested in an experiment that compared queries from two different relational database schemas. As theorized, end users querying the data structure associated with the less complex queries performed better. Complexity was measured using three different Halstead metrics. Each of the three metrics provided excellent predictions of end user performance. This research supplies strong evidence that organizations can use complexity metrics to evaluate, ex ante, the desirability of alternate data structures. Organizations can use these evaluations to enhance the efficient and effective retrieval of information by creating data structures that minimize end user query complexity.
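As a rough illustration of the measurement step (not the authors' instrument), Halstead's volume, difficulty, and effort can be computed from operator and operand counts of a query; the tokenizer and the operator set below are simplifying assumptions:

```python
import math
import re

# Crude SQL tokenizer for illustration only; a real study would use a proper parser.
OPERATORS = {"select", "from", "where", "join", "on", "and", "or",
             "=", "<", ">", "<=", ">=", ",", "(", ")", "*"}

def halstead(query: str):
    tokens = re.findall(r"[A-Za-z_]\w*|\d+|<=|>=|[=<>,()*]", query.lower())
    operators = [t for t in tokens if t in OPERATORS]
    operands  = [t for t in tokens if t not in OPERATORS]
    n1, n2 = len(set(operators)), len(set(operands))   # distinct operators / operands
    N1, N2 = len(operators), len(operands)             # total occurrences
    vocabulary, length = n1 + n2, N1 + N2
    volume = length * math.log2(vocabulary)            # Halstead volume
    difficulty = (n1 / 2) * (N2 / max(n2, 1))          # Halstead difficulty
    effort = volume * difficulty                       # Halstead effort
    return {"volume": volume, "difficulty": difficulty, "effort": effort}

print(halstead("SELECT name FROM employee WHERE dept = 10 AND salary > 50000"))
```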

Relevance:

100.00%

Publisher:

Abstract:

In this paper we construct implicit stochastic Runge-Kutta (SRK) methods for solving stochastic differential equations of Stratonovich type. Instead of using the increment of a Wiener process, modified random variables are used. We give convergence conditions of the SRK methods with these modified random variables. In particular, the truncated random variable is used. We present a two-stage stiffly accurate diagonal implicit SRK (SADISRK2) method with strong order 1.0 which has better numerical behaviour than extant methods. We also construct a five-stage diagonal implicit SRK method and a six-stage stiffly accurate diagonal implicit SRK method with strong order 1.5. The mean-square and asymptotic stability properties of the trapezoidal method and the SADISRK2 method are analysed and compared with an explicit method and a semi-implicit method. Numerical results are reported for confirming convergence properties and for comparing the numerical behaviour of these methods.
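For orientation, the generic shape of an s-stage stochastic Runge-Kutta step for a Stratonovich SDE, with the Wiener increment replaced by a modified random variable, can be written as follows; the coefficients are left symbolic and the truncation shown is one common choice, not necessarily the paper's SADISRK2 scheme:

```latex
% Stratonovich SDE and a generic s-stage SRK step with a modified increment
\begin{align*}
  dy(t) &= f\bigl(y(t)\bigr)\,dt + g\bigl(y(t)\bigr)\circ dW(t),\\
  Y_i &= y_n + h\sum_{j=1}^{s} a_{ij}\, f(Y_j)
            + \widehat{\Delta W}_n \sum_{j=1}^{s} b_{ij}\, g(Y_j),
        \qquad i = 1,\dots,s,\\
  y_{n+1} &= y_n + h\sum_{i=1}^{s} \alpha_i\, f(Y_i)
            + \widehat{\Delta W}_n \sum_{i=1}^{s} \gamma_i\, g(Y_i).
\end{align*}
% One common modified (truncated) random variable replacing \Delta W_n:
\[
  \widehat{\Delta W}_n =
  \begin{cases}
    \Delta W_n, & |\Delta W_n| \le A_h,\\
    A_h \operatorname{sign}(\Delta W_n), & \text{otherwise,}
  \end{cases}
  \qquad A_h = \sqrt{2k\,|\ln h|}\ \text{for some } k \ge 1 .
\]
```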

Relevance:

100.00%

Publisher:

Abstract:

In this work we discuss the effects of white and coloured noise perturbations on the parameters of a mathematical model of bacteriophage infection introduced by Beretta and Kuang in [Math. Biosc. 149 (1998) 57]. We numerically simulate the strong solutions of the resulting systems of stochastic ordinary differential equations (SDEs), with respect to the global error, by means of numerical methods of both Euler-Taylor expansion and stochastic Runge-Kutta type. (C) 2003 IMACS. Published by Elsevier B.V. All rights reserved.
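As a minimal illustration of the simulation approach (not the Beretta-Kuang model or the paper's higher-order schemes), an Euler-Maruyama discretisation of a placeholder logistic model with a white-noise-perturbed growth rate might look like this; all equations and parameter values are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Placeholder model: logistic growth dS = r*S*(1 - S/K) dt, with the rate r
# perturbed by white noise of intensity sigma. Parameter values are illustrative.
r, K, sigma = 0.5, 10.0, 0.1
T, N = 20.0, 2000
dt = T / N

S = np.empty(N + 1)
S[0] = 1.0
for n in range(N):
    dW = rng.normal(0.0, np.sqrt(dt))                    # Wiener increment
    drift = r * S[n] * (1.0 - S[n] / K) * dt
    diffusion = sigma * S[n] * (1.0 - S[n] / K) * dW     # noise enters through r
    S[n + 1] = S[n] + drift + diffusion

print(f"S(T) ~ {S[-1]:.3f}")
```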

Relevance:

100.00%

Publisher:

Abstract:

A parallel computing environment to support optimization of large-scale engineering systems is designed and implemented on Windows-based personal computer networks, using the master-worker model and the Parallel Virtual Machine (PVM). It decomposes a large engineering system into a number of smaller subsystems that are optimized in parallel on worker nodes, and coordinates the subsystem optimization results on the master node. The environment consists of six functional modules: the master control, the optimization model generator, the optimizer, the data manager, the monitor, and the post processor. An object-oriented design of these modules is presented. The environment supports all steps from the generation of optimization models to their solution and visualization on networks of computers. User-friendly graphical interfaces make it easy to define the problem and to monitor and steer the optimization process. The environment has been verified on an example of large space truss optimization. (C) 2004 Elsevier Ltd. All rights reserved.
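The master-worker decomposition can be sketched in a few lines. The sketch below uses Python's multiprocessing purely to illustrate the pattern (the environment described above is Windows/PVM-based), and the subsystem optimisation and coordination steps are placeholders:

```python
from multiprocessing import Pool

def optimise_subsystem(task):
    """Placeholder subsystem optimisation: each worker minimises its own
    quadratic, standing in for a decomposed structural subproblem."""
    name, coeff = task
    x_opt = -coeff / 2.0                       # argmin of x**2 + coeff*x
    return name, x_opt, x_opt**2 + coeff * x_opt

if __name__ == "__main__":
    subsystems = [("truss_group_%d" % i, c) for i, c in enumerate([1.0, -3.0, 0.5, 2.0])]
    with Pool(processes=4) as pool:            # master farms subproblems out to workers
        results = pool.map(optimise_subsystem, subsystems)
    # Master-side coordination: here simply collect and report; a real
    # environment would update shared variables and iterate until convergence.
    for name, x_opt, obj in results:
        print(f"{name}: x* = {x_opt:.2f}, objective = {obj:.2f}")
```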

Relevance:

100.00%

Publisher:

Abstract:

Refinement in software engineering allows a specification to be developed in stages, with design decisions taken at earlier stages constraining the design at later stages. Refinement in complex data models is difficult due to the lack of a way of defining constraints that can be progressively maintained over increasingly detailed refinements. Category theory provides a way of stating wide-scale constraints. These constraints lead to a set of design guidelines that maintain the wide-scale constraints under increasing detail. Previous methods of refinement are essentially local, and the proposed method does not interfere very much with these local methods. The result is particularly applicable to semantic web applications, where ontologies provide systems of more or less abstract constraints that must be implemented, and therefore refined, by participating systems. With the approach of this paper, the concept of committing to an ontology carries much more force. (c) 2005 Elsevier B.V. All rights reserved.

Relevance:

100.00%

Publisher:

Abstract:

While object-oriented programming offers great solutions for today's software developers, this success has created difficult problems in class documentation and testing. In Java, two tools provide assistance: Javadoc allows class interface documentation to be embedded as code comments and JUnit supports unit testing by providing assert constructs and a test framework. This paper describes JUnitDoc, an integration of Javadoc and JUnit, which provides better support for class documentation and testing. With JUnitDoc, test cases are embedded in Javadoc comments and used as both examples for documentation and test cases for quality assurance. JUnitDoc extracts the test cases for use in HTML files serving as class documentation and in JUnit drivers for class testing. To address the difficult problem of testing inheritance hierarchies, JUnitDoc provides a novel solution in the form of a parallel test hierarchy. A small controlled experiment compares the readability of JUnitDoc documentation to formal documentation written in Object-Z. Copyright (c) 2005 John Wiley & Sons, Ltd.
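JUnitDoc's own comment syntax is not reproduced here; the underlying idea of embedding executable examples that serve as both documentation and test cases is loosely analogous to Python's doctest, shown below only to illustrate the concept:

```python
def stack_push(stack, item):
    """Push an item and return the stack.

    The example below is both documentation for readers and a test case:

    >>> stack_push([], 1)
    [1]
    >>> stack_push([1], 2)
    [1, 2]
    """
    stack.append(item)
    return stack

if __name__ == "__main__":
    import doctest
    doctest.testmod(verbose=False)   # runs the embedded examples as tests
```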

Relevance:

100.00%

Publisher:

Abstract:

This paper explores the potential for the RAMpage memory hierarchy to use a microkernel with a small memory footprint, placed in a specialized cache-speed static RAM (tightly-coupled memory, TCM). Dreamy memory is DRAM kept in low-power mode unless referenced. Simulations show that a small microkernel suits RAMpage well, in that it achieves significantly better speed and energy gains from adding TCM than a standard hierarchy does. RAMpage, in its best 128KB L2 case, gained 11% speed using TCM and reduced energy by 14%. Equivalent conventional hierarchy gains were under 1%. While a 1MB L2 was significantly faster than the lower-energy configurations with the smaller L2, the larger SRAM's energy use does not justify the speed gain. Using a 128KB L2 cache in a conventional architecture resulted in a best-case overall run time of 2.58s, compared with the best dreamy-mode run time (RAMpage without context switches on misses) of 3.34s, a speed penalty of 29%. Energy in the fastest 128KB L2 case was 2.18J vs. 1.50J, a reduction of 31%. The same RAMpage configuration without dreamy mode took 2.83s as simulated and used 2.39J, an acceptable trade-off (penalty under 10%) for being able to switch easily to a lower-energy mode.

Relevance:

100.00%

Publisher:

Abstract:

Information security devices must preserve security properties even in the presence of faults. This in turn requires a rigorous evaluation of the system behaviours resulting from component failures, especially how such failures affect information flow. We introduce a compositional method of static analysis for fail-secure behaviour. Our method uses reachability matrices to identify potentially undesirable information flows based on the fault modes of the system's components.
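A reachability-matrix check of this kind can be sketched as follows; the component connection matrix, the fault mode, and the notion of an undesirable flow are all invented for illustration and are not the paper's case study:

```python
import numpy as np

def reachability(adjacency: np.ndarray) -> np.ndarray:
    """Boolean transitive closure (Warshall) of a component connection matrix."""
    reach = adjacency.astype(bool)
    n = reach.shape[0]
    for k in range(n):
        for i in range(n):
            if reach[i, k]:
                reach[i] |= reach[k]
    return reach

def flow_bypassing(adjacency, src, dst, guard):
    """Is there a flow from src to dst that does not pass through the guard component?"""
    cut = adjacency.copy()
    cut[guard, :] = 0          # remove the guard's outgoing connections
    cut[:, guard] = 0          # and its incoming connections
    return bool(reachability(cut)[src, dst])

# Components: 0 = red (plaintext) input, 1 = crypto unit, 2 = black output, 3 = bypass.
normal = np.array([[0, 1, 0, 0],
                   [0, 0, 1, 0],
                   [0, 0, 0, 0],
                   [0, 0, 1, 0]])

# Hypothetical fault mode: a component failure closes a path from the red input to the bypass.
faulty = normal.copy()
faulty[0, 3] = 1

for label, m in [("normal", normal), ("fault mode", faulty)]:
    print(label, "- unencrypted flow to black output:", flow_bypassing(m, 0, 2, guard=1))
```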

Relevance:

100.00%

Publisher:

Abstract:

Electronic communications devices intended for government or military applications must be rigorously evaluated to ensure that they maintain data confidentiality. High-grade information security evaluations require a detailed analysis of the device's design, to determine how it achieves necessary security functions. In practice, such evaluations are labour-intensive and costly, so there is a strong incentive to find ways to make the process more efficient. In this paper we show how well-known concepts from graph theory can be applied to a device's design to optimise information security evaluations. In particular, we use end-to-end graph traversals to eliminate components that do not need to be evaluated at all, and minimal cutsets to identify the smallest group of components that needs to be evaluated in depth.
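Both steps can be sketched with elementary graph code; the device graph below is invented, the elimination step is a forward/backward reachability intersection, and the minimal cutset is found by brute force rather than by a proper min-cut algorithm:

```python
from collections import deque
from itertools import combinations

# Hypothetical device connectivity graph: external input -> ... -> external output.
GRAPH = {
    "input":  ["filter", "logger"],
    "filter": ["crypto"],
    "crypto": ["driver"],
    "driver": ["output"],
    "logger": [],            # feeds nothing on the input-to-output path
    "output": [],
}

def reachable(graph, start):
    seen, queue = {start}, deque([start])
    while queue:
        for nxt in graph[queue.popleft()]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen

def on_some_path(graph, src, dst):
    """Components lying on at least one src-to-dst path; the rest need no evaluation."""
    forward = reachable(graph, src)
    reverse = {u: [] for u in graph}
    for u, vs in graph.items():
        for v in vs:
            reverse[v].append(u)
    backward = reachable(reverse, dst)
    return forward & backward

def minimal_cutset(graph, src, dst):
    """Smallest set of internal components whose removal disconnects src from dst
    (brute force; acceptable for the small graphs typical of a single device)."""
    internal = [u for u in graph if u not in (src, dst)]
    for size in range(1, len(internal) + 1):
        for cut in combinations(internal, size):
            pruned = {u: [v for v in vs if v not in cut]
                      for u, vs in graph.items() if u not in cut}
            if dst not in reachable(pruned, src):
                return set(cut)
    return set()

print("evaluate at all:", on_some_path(GRAPH, "input", "output"))
print("evaluate in depth:", minimal_cutset(GRAPH, "input", "output"))
```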

Relevance:

100.00%

Publisher:

Abstract:

The real-time refinement calculus is a formal method for the systematic derivation of real-time programs from real-time specifications in a style similar to the non-real-time refinement calculi of Back and Morgan. In this paper we extend the real-time refinement calculus with procedures and provide refinement rules for refining real-time specifications to procedure calls. A real-time specification can include constraints not only on what outputs are produced, but also on when they are produced. The derived programs can also include time constraints on when certain points in the program must be reached; these are expressed in the form of deadline commands. Such programs are machine independent. An important consequence of the approach taken is that not only are the specifications machine independent, but the whole refinement process is machine independent. To implement the machine-independent code on a target machine, one has the separate task of showing that the compiled machine code will reach all its deadlines before they expire. For real-time programs, externally observable input and output variables are essential. These differ from local variables in that their values are observable over the duration of the execution of the program. Hence procedures require input and output parameter mechanisms that are references to the actual parameters, so that changes to external inputs are observable within the procedure and changes to output parameters are externally observable. In addition, we allow value and result parameters. These may be auxiliary parameters, which are used for reasoning about the correctness of real-time programs as well as in the expression of timing deadlines, but do not lead to any code being generated for them by a compiler. (c) 2006 Elsevier B.V. All rights reserved.

Relevance:

100.00%

Publisher:

Abstract:

Communications devices for government or military applications must keep data secure, even when their electronic components fail. Combining information flow and risk analyses could make fault-mode evaluations for such devices more efficient and cost-effective.

Relevance:

100.00%

Publisher:

Abstract:

In this paper, we present a novel indexing technique called Multi-scale Similarity Indexing (MSI) to index an image's multiple features in a single one-dimensional structure. For both text and visual feature spaces, the similarity between a point and a local partition's center in the individual space is used as the indexing key, where similarity values in different features are distinguished by different scales. A single indexing tree can then be built on these keys. Based on the property that relevant images have similar similarity values to the center of the same local partition in any feature space, a certain number of irrelevant images can be quickly pruned using the triangle inequality on indexing keys. To remove the dimensionality curse that exists in high-dimensional structures, we propose a new technique called Local Bit Stream (LBS). LBS transforms an image's text and visual feature representations into simple, uniform, and effective bit stream (BS) representations based on the local partition's center. Such BS representations are small in size and fast to compare, since only bit operations are involved. By comparing the common bits of two BSs, most irrelevant images can be immediately filtered. To effectively integrate multiple features, we also investigated the following evidence combination techniques: Certainty Factor, Dempster-Shafer theory, Compound Probability, and Linear Combination. Our extensive experiments showed that a single one-dimensional index on multiple features greatly improves on multiple indices over multiple features. Our LBS method outperforms sequential scan in high-dimensional spaces by an order of magnitude, and Certainty Factor and Dempster-Shafer theory perform best in combining multiple similarities from the corresponding multiple features.
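The pruning step follows directly from the triangle inequality: if an image's stored distance to its partition centre differs from the query's distance to that centre by more than the search radius, the image cannot lie within the radius. The partitioning, feature vectors, and radius below are invented for illustration and do not reproduce MSI's actual key scaling or LBS encoding:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy collection: 1000 "images" as 32-dimensional visual feature vectors,
# assigned to the nearest of a few partition centres (stand-ins for MSI's local partitions).
features = rng.normal(size=(1000, 32))
centres = rng.normal(size=(4, 32))
assignment = np.argmin(np.linalg.norm(features[:, None] - centres[None], axis=2), axis=1)
key = np.linalg.norm(features - centres[assignment], axis=1)   # stored indexing key

def range_search(query, radius):
    checked, hits = 0, []
    q_to_centre = np.linalg.norm(centres - query, axis=1)
    for i, vec in enumerate(features):
        c = assignment[i]
        # Triangle inequality: |d(q, centre) - d(x, centre)| <= d(q, x),
        # so if the left side exceeds the radius, x is pruned without computing d(q, x).
        if abs(q_to_centre[c] - key[i]) > radius:
            continue
        checked += 1
        if np.linalg.norm(vec - query) <= radius:
            hits.append(i)
    return hits, checked

hits, checked = range_search(features[0], radius=6.0)
print(f"{len(hits)} matches, full distance computed for only {checked} of {len(features)} images")
```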