34 results for software, translation, validation tool, VMNET, Wikipedia, XML
in University of Queensland eSpace - Australia
Abstract:
Our research described in this paper identifies a three-part premise relating to the spyware paradigm. Firstly, the data suggest that spyware is proliferating at an exponential rate. Secondly, ongoing research confirms that spyware produces many security risks, including privacy/confidentiality breaches via illicit data collection and reporting. Thirdly, anti-spyware controls are improving but are still considered problematic for several reasons. We therefore conclude that control measures to counter this very significant challenge merit compliance auditing, and that such auditing may effectively target the vital message passing performed by all illicit data-collection spyware. Our research then evolves into an experiment involving the design and implementation of a software audit tool to conduct the desired compliance auditing. The audit tool is positioned at the protected network's gateway and uses 'phone-home' IP addresses as spyware signatures to detect the presence of the offending software. It can also differentiate legitimate message-passing software from spyware, and 'learn' both new spyware signatures and new legitimate message-passing profiles. The testing stage of the software has proven successful, albeit with very limited levels of network message-passing variety and frequency.
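A minimal sketch of the gateway detection idea, with hypothetical addresses and names (the paper's actual tool is not reproduced here): outbound destinations are matched against spyware signatures, checked against legitimate message-passing profiles, and otherwise flagged so an operator can 'learn' them into either set.

```python
# Minimal sketch of gateway-side 'phone-home' detection (hypothetical
# addresses and API; not the audit tool described in the paper).
spyware_signatures = {"203.0.113.7", "198.51.100.23"}   # known phone-home IPs
legitimate_profiles = {"192.0.2.10"}                    # e.g. OS update servers

def audit_outbound(src_host: str, dst_ip: str) -> str:
    """Classify one outbound connection observed at the gateway."""
    if dst_ip in spyware_signatures:
        return f"ALERT: {src_host} phoning home to known spyware host {dst_ip}"
    if dst_ip in legitimate_profiles:
        return "ok: matches a legitimate message-passing profile"
    # Unknown destination: flag for review; an operator decision 'learns'
    # it into one of the two sets above.
    return f"REVIEW: {src_host} -> {dst_ip} (unclassified destination)"

print(audit_outbound("lab-pc-12", "203.0.113.7"))
```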
Abstract:
Software Configuration Management is the discipline of managing large collections of software development artefacts from which software products are built. Software configuration management tools typically deal with artefacts at fine levels of granularity, such as individual source code files, and assist with coordination of changes to such artefacts. This paper describes a lightweight tool, designed to be used on top of a traditional file-based configuration management system. The add-on tool support enables users to flexibly define new hierarchical views of product structure, independent of the underlying artefact-repository structure. The tool extracts configuration and change data with respect to the user-defined hierarchy, leading to improved visibility of how individual subsystems have changed. The approach yields a range of new capabilities for build managers, and verification and validation teams. The paper includes a description of our experience using the tool in an organization that builds large embedded software systems.
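The view-definition idea can be sketched as follows; the subsystem names, paths, and change records below are invented for illustration and are not the paper's tool.

```python
# Sketch of a user-defined hierarchical view over a file-based repository
# (structure and names are illustrative assumptions).
view = {
    "PowertrainSubsystem": ["src/engine/", "src/transmission/"],
    "DiagnosticsSubsystem": ["src/diag/", "tools/diag_scripts/"],
}

# Change log as (file_path, change_id) pairs, e.g. mined from the CM system.
changes = [("src/engine/ctrl.c", "C101"), ("src/diag/log.c", "C102"),
           ("src/engine/map.c", "C101")]

def changes_per_node(view, changes):
    """Aggregate repository-level changes up to each user-defined view node."""
    report = {node: set() for node in view}
    for path, change_id in changes:
        for node, prefixes in view.items():
            if any(path.startswith(p) for p in prefixes):
                report[node].add(change_id)
    return {node: sorted(ids) for node, ids in report.items()}

print(changes_per_node(view, changes))
# {'PowertrainSubsystem': ['C101'], 'DiagnosticsSubsystem': ['C102']}
```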
Abstract:
It is not surprising that students are unconvinced about the benefits of formal methods if we do not show them how these methods can be integrated with other activities in the software lifecycle. In this paper, we describe an approach to integrating formal specification with more traditional verification and validation techniques in a course that teaches formal specification and specification-based testing. This is accomplished through a series of assignments on a single software component that involves specifying the component in Object-Z, validating that specification using inspection and a specification animation tool, and then testing an implementation of the specification using test cases derived from the formal specification.
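As a rough illustration of the assignment flow (not Object-Z, and not the course's actual component), a specification can be written as pre- and postconditions and then used to judge an implementation:

```python
# Sketch of specification-based testing (the course uses Object-Z; this
# Python stand-in only illustrates deriving a test from a specification).
def spec_withdraw(balance_before, amount, balance_after):
    """Pre/postcondition pair for a bank-account withdraw operation."""
    precondition = 0 < amount <= balance_before
    postcondition = balance_after == balance_before - amount
    return (not precondition) or postcondition   # spec holds vacuously if pre fails

class Account:
    def __init__(self, balance): self.balance = balance
    def withdraw(self, amount):
        if 0 < amount <= self.balance:
            self.balance -= amount

# Test case derived from the specification: run the operation, then check
# that the postcondition holds for the observed state change.
acct = Account(100)
acct.withdraw(30)
assert spec_withdraw(100, 30, acct.balance)
```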
Abstract:
Formal specifications can precisely and unambiguously define the required behavior of a software system or component. However, formal specifications are complex artifacts that need to be verified to ensure that they are consistent, complete, and validated against the requirements. Specification testing or animation tools exist to assist with this by allowing the specifier to interpret or execute the specification. However, currently little is known about how to do this effectively. This article presents a framework and tool support for the systematic testing of formal, model-based specifications. Several important generic properties that should be satisfied by model-based specifications are first identified. Following the idea of mutation analysis, we then use variants or mutants of the specification to check that these properties are satisfied. The framework also allows the specifier to test application-specific properties. All properties are tested for a range of states that are defined by the tester in the form of a testgraph, which is a directed graph that partially models the states and transitions of the specification being tested. Tool support is provided for the generation of the mutants, for automatically traversing the testgraph and executing the test cases, and for reporting any errors. The framework is demonstrated on a small specification and its application to three larger specifications is discussed. Experience indicates that the framework can be used effectively to test small to medium-sized specifications and that it can reveal a significant number of problems in these specifications.
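A toy sketch of the approach, using a Python class as a stand-in for an executable model-based specification (the bounded stack, the mutant, and the testgraph below are all invented for illustration):

```python
# Sketch of testgraph-driven property checking with a mutant (illustrative;
# the paper's tool animates model-based specifications, not Python code).
class BoundedStack:
    """Executable model of a bounded-stack specification."""
    def __init__(self, cap): self.cap, self.items = cap, []
    def push(self, x):
        if len(self.items) < self.cap: self.items.append(x)
    def invariant(self): return len(self.items) <= self.cap

class MutantStack(BoundedStack):
    """Mutant: the guard is weakened, so the bound can be exceeded."""
    def push(self, x):
        if len(self.items) <= self.cap:  # mutated '<' into '<='
            self.items.append(x)

# Testgraph: named states and the operations (transitions) between them.
testgraph = [("empty", "partial", lambda s: s.push(1)),
             ("partial", "full", lambda s: [s.push(i) for i in range(9)])]

for model in (BoundedStack, MutantStack):
    s = model(cap=3)
    for src, dst, op in testgraph:       # traverse the testgraph
        op(s)
        print(model.__name__, f"{src}->{dst}", "invariant:", s.invariant())
```

Running this shows the original model satisfying the invariant on every transition while the mutant violates it, which is exactly how a mutant exposes whether a generic property is actually being checked.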
Abstract:
Objectives: To validate the WOMAC 3.1 in a touch screen computer format, which presents each question as a cartoon, in writing, and in speech (QUALITOUCH method), and to assess patient acceptance of the computer touch screen version. Methods: The paper and computer formats of the WOMAC 3.1 were applied in random order to 53 subjects with hip or knee osteoarthritis. The mean age of the subjects was 64 years (range 45 to 83); 60% were male, 53% were 65 years or older, and 53% used computers at home or at work. Agreement between formats was assessed by intraclass correlation coefficients (ICCs). Preferences were assessed with a supplementary questionnaire. Results: ICCs between formats were 0.92 (95% confidence interval, 0.87 to 0.96) for pain, 0.94 (0.90 to 0.97) for stiffness, and 0.96 (0.94 to 0.98) for function. ICCs were similar in men and women, in subjects with or without previous computer experience, and in subjects below or above age 65. The computer format was found easier to use by 26% of the subjects, the paper format by 8%, and 66% were undecided. Overall, 53% of subjects preferred the computer format, 9% preferred the paper format, and 38% were undecided. Conclusion: The computer format of the WOMAC 3.1 is a reliable assessment tool. Agreement between the computer and paper formats was independent of computer experience, age, and sex. Thus the computer format may help improve patient follow-up by meeting patients' preferences and providing immediate results.
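For readers unfamiliar with the statistic, a minimal sketch of computing ICC(2,1) (two-way random effects, absolute agreement, single measure) on made-up paired scores:

```python
# Minimal sketch of an agreement check between two formats using ICC(2,1);
# the scores below are invented, not the study's data.
import numpy as np

x = np.array([[4, 5], [7, 7], [2, 3], [6, 6], [5, 4]], float)  # rows: subjects
n, k = x.shape                                                  # cols: formats
grand = x.mean()
msr = k * ((x.mean(axis=1) - grand) ** 2).sum() / (n - 1)      # subject MS
msc = n * ((x.mean(axis=0) - grand) ** 2).sum() / (k - 1)      # format MS
sse = ((x - x.mean(axis=1, keepdims=True)
          - x.mean(axis=0, keepdims=True) + grand) ** 2).sum()
mse = sse / ((n - 1) * (k - 1))                                 # residual MS
icc21 = (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)
print(f"ICC(2,1) = {icc21:.2f}")
```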
Abstract:
Background: The multitude of motif detection algorithms developed to date have largely focused on the detection of patterns in primary sequence. Since sequence-dependent DNA structure and flexibility may also play a role in protein-DNA interactions, the simultaneous exploration of sequence- and structure-based hypotheses about the composition of binding sites and the ordering of features in a regulatory region should be considered as well. The consideration of structural features requires the development of new detection tools that can deal with data types other than primary sequence. Results: GANN (available at http://bioinformatics.org.au/gann) is a machine learning tool for the detection of conserved features in DNA. The software suite contains programs to extract different regions of genomic DNA from flat files and convert these sequences to indices that reflect sequence and structural composition or the presence of specific protein binding sites. The machine learning component allows the classification of different types of sequences based on subsamples of these indices, and can identify the best combinations of indices and machine learning architecture for sequence discrimination. Another key feature of GANN is the replicated splitting of data into training and test sets, and the implementation of negative controls. In validation experiments, GANN successfully merged important sequence and structural features to yield good predictive models for synthetic and real regulatory regions. Conclusion: GANN is a flexible tool that can search through large sets of sequence and structural feature combinations to identify those that best characterize a set of sequences.
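A toy sketch of the underlying idea, with invented indices and data (GANN's real feature extraction and architecture search are far richer than this):

```python
# Sketch: encode sequences as sequence/structure indices, then train a small
# neural network to discriminate classes (toy data and invented weights).
from sklearn.neural_network import MLPClassifier

# Two simple indices per sequence: GC content, plus a toy "flexibility"
# score over dinucleotides as a stand-in for a structural index.
FLEX = {"TA": 1.0, "AT": 0.9, "CG": 0.3, "GC": 0.2}   # hypothetical weights
def indices(seq):
    gc = (seq.count("G") + seq.count("C")) / len(seq)
    flex = sum(FLEX.get(seq[i:i+2], 0.5) for i in range(len(seq) - 1))
    return [gc, flex / (len(seq) - 1)]

positives = ["TATAAT", "TATATT", "TTTAAT"]   # toy binding-site-like class
negatives = ["GCGCGC", "CCGGCG", "GCCGGC"]   # toy negative controls
X = [indices(s) for s in positives + negatives]
y = [1] * len(positives) + [0] * len(negatives)

clf = MLPClassifier(hidden_layer_sizes=(4,), solver="lbfgs",
                    max_iter=2000, random_state=0)
clf.fit(X, y)
print(clf.predict([indices("TATAAA"), indices("GGCGCC")]))  # expect [1, 0]
```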
Abstract:
The schema of an information system can significantly impact the ability of end users to efficiently and effectively retrieve the information they need. Obtaining quickly the appropriate data increases the likelihood that an organization will make good decisions and respond adeptly to challenges. This research presents and validates a methodology for evaluating, ex ante, the relative desirability of alternative instantiations of a model of data. In contrast to prior research, each instantiation is based on a different formal theory. This research theorizes that the instantiation that yields the lowest weighted average query complexity for a representative sample of information requests is the most desirable instantiation for end-user queries. The theory was validated by an experiment that compared end-user performance using an instantiation of a data structure based on the relational model of data with performance using the corresponding instantiation of the data structure based on the object-relational model of data. Complexity was measured using three different Halstead metrics: program length, difficulty, and effort. For a representative sample of queries, the average complexity using each instantiation was calculated. As theorized, end users querying the instantiation with the lower average complexity made fewer semantic errors, i.e., were more effective at composing queries. (c) 2005 Elsevier B.V. All rights reserved.
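The three Halstead metrics used in the study can be computed from operator and operand counts; a minimal sketch with a hand-tokenized query (the tokenization rules here are simplified assumptions, not the study's procedure):

```python
# Sketch of the three Halstead metrics used in the study, applied to a
# hand-tokenized query: SELECT name FROM employee WHERE salary > 50000
import math

operators = ["SELECT", "FROM", "WHERE", ">"]          # operator tokens
operands  = ["name", "employee", "salary", "50000"]   # operand tokens

n1, n2 = len(set(operators)), len(set(operands))      # distinct counts
N1, N2 = len(operators), len(operands)                # total counts

length     = N1 + N2                                  # program length N
volume     = length * math.log2(n1 + n2)              # volume V = N log2(n)
difficulty = (n1 / 2) * (N2 / n2)                     # difficulty D
effort     = difficulty * volume                      # effort E = D * V

print(f"length={length}, difficulty={difficulty:.2f}, effort={effort:.2f}")
```

Averaging such scores over a representative sample of queries, per instantiation, yields the weighted average complexity that the theory says predicts end-user effectiveness.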
Abstract:
While object-oriented programming offers great solutions for today's software developers, this success has created difficult problems in class documentation and testing. In Java, two tools provide assistance: Javadoc allows class interface documentation to be embedded as code comments and JUnit supports unit testing by providing assert constructs and a test framework. This paper describes JUnitDoc, an integration of Javadoc and JUnit, which provides better support for class documentation and testing. With JUnitDoc, test cases are embedded in Javadoc comments and used as both examples for documentation and test cases for quality assurance. JUnitDoc extracts the test cases for use in HTML files serving as class documentation and in JUnit drivers for class testing. To address the difficult problem of testing inheritance hierarchies, JUnitDoc provides a novel solution in the form of a parallel test hierarchy. A small controlled experiment compares the readability of JUnitDoc documentation to formal documentation written in Object-Z. Copyright (c) 2005 John Wiley & Sons, Ltd.
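Python's doctest embodies the same documentation-embedded-test idea that JUnitDoc brings to Java, and gives a compact feel for it: the examples in the comment serve readers as documentation and the test runner as executable checks.

```python
# Documentation-embedded tests, doctest-style: the examples below are both
# usage documentation and executable test cases (analogous in spirit to
# JUnitDoc's test cases inside Javadoc comments).
def clamp(x, lo, hi):
    """Clamp x into the closed interval [lo, hi].

    >>> clamp(5, 0, 10)
    5
    >>> clamp(-3, 0, 10)
    0
    >>> clamp(42, 0, 10)
    10
    """
    return max(lo, min(x, hi))

if __name__ == "__main__":
    import doctest
    doctest.testmod()   # runs the embedded examples as tests
```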
Abstract:
Dynamic binary translation is the process of translating, modifying and rewriting executable (binary) code from one machine to another at run-time. This process of low-level re-engineering consists of a reverse engineering phase followed by a forward engineering phase. UQDBT, the University of Queensland Dynamic Binary Translator, is a machine-adaptable translator. Adaptability is provided through the specification of properties of machines and their instruction sets, allowing the support of different pairs of source and target machines. Most binary translators are closely bound to a pair of machines, making analyses and code hard to reuse. Like most virtual machines, UQDBT performs generic optimizations that apply to a variety of machines. Frequently executed code is translated to native code by the use of edge weight instrumentation, which makes UQDBT converge more quickly than systems based on instruction speculation. In this paper, we describe the architecture and run-time feedback optimizations performed by the UQDBT system, and provide results obtained on the x86 and SPARC® platforms.
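A toy sketch of edge-weight-based hot-path detection (illustrative only; UQDBT instruments translated binary code, not a Python interpreter, and its threshold and promotion policy are its own):

```python
# Sketch: count executions of control-flow edges and promote a block to
# native translation once an incoming edge crosses a hotness threshold.
from collections import Counter

HOT_THRESHOLD = 3          # hypothetical threshold
edge_counts = Counter()
natively_translated = set()

def take_edge(src_block, dst_block):
    """Record one control-flow edge; promote the target when it gets hot."""
    edge_counts[(src_block, dst_block)] += 1
    if (edge_counts[(src_block, dst_block)] >= HOT_THRESHOLD
            and dst_block not in natively_translated):
        natively_translated.add(dst_block)
        print(f"translating block {dst_block} to native code")

# A loop repeatedly takes the back edge B2 -> B1, so both blocks get hot.
for _ in range(4):
    take_edge("B1", "B2")
    take_edge("B2", "B1")
```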
Abstract:
Two-dimensional (2-D) strain (epsilon(2-D)) based on speckle tracking is a new technique for strain measurement. This study sought to validate epsilon(2-D) and tissue velocity imaging (TVI)-based strain (epsilon(TVI)) against tagged harmonic-phase (HARP) magnetic resonance imaging (MRI). Thirty patients (mean age 62 +/- 11 years) with known or suspected ischemic heart disease were evaluated. Wall motion (wall motion score index 1.55 +/- 0.46) was assessed by an expert observer. Three apical images were obtained for longitudinal strain (16 segments) and 3 short-axis images for radial and circumferential strain (18 segments). Radial epsilon(TVI) was obtained in the posterior wall. HARP MRI was used to measure principal strain, expressed as maximal length change in each direction. Values for epsilon(2-D), epsilon(TVI), and HARP MRI were comparable for all 3 strain directions and were reduced in dysfunctional segments. The mean difference and correlation between longitudinal epsilon(2-D) and HARP MRI (2.1 +/- 5.5%, r = 0.51, p < 0.001) were similar to those between longitudinal epsilon(TVI) and HARP MRI (1.1 +/- 6.7%, r = 0.40, p < 0.001). The mean difference and correlation were more favorable between radial epsilon(2-D) and HARP MRI (0.4 +/- 10.2%, r = 0.60, p < 0.001) than between radial epsilon(TVI) and HARP MRI (3.4 +/- 10.5%, r = 0.47, p < 0.001). For circumferential strain, the mean difference and correlation between epsilon(2-D) and HARP MRI were 0.7 +/- 5.4% and r = 0.51 (p < 0.001), respectively. In conclusion, the modest correlations between echocardiographic and HARP MRI strain reflect the technical challenges of the 2 techniques. Nonetheless, epsilon(2-D) provides a reliable tool to quantify regional function, with radial measurements being more accurate and feasible than with TVI. Unlike epsilon(TVI), epsilon(2-D) provides circumferential measurements. (c) 2006 Elsevier Inc. All rights reserved.
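A minimal sketch of the agreement statistics reported above (bias and Pearson correlation), computed on made-up strain values rather than the study's data:

```python
# Sketch: mean difference (bias +/- SD) and Pearson correlation between two
# strain measurement techniques; the values below are invented.
import numpy as np

echo = np.array([18.2, 15.1, 9.8, 20.4, 12.7])   # e.g. epsilon(2-D), %
mri  = np.array([16.9, 14.3, 11.2, 18.8, 13.5])  # e.g. HARP MRI, %

diff = echo - mri
bias, sd = diff.mean(), diff.std(ddof=1)
r = np.corrcoef(echo, mri)[0, 1]
print(f"mean difference {bias:+.1f} +/- {sd:.1f}%, r = {r:.2f}")
```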
Abstract:
Processor emulators are a software tool for allowing legacy computer programs to be executed on a modern processor. In the past, emulators have been used in trivial applications such as the maintenance of video games. Now, however, processor emulation is being applied to safety-critical control systems, including military avionics. These applications demand the utmost guarantees of correctness, but no verification techniques exist for proving that an emulated system preserves the original system's functional and timing properties. Here we show how this can be done by combining concepts previously used for reasoning about real-time program compilation with an understanding of the new and old software architectures. In particular, we show how both the old and new systems can be given a common semantics, allowing their behaviours to be compared directly.
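The 'common semantics' idea can be sketched by giving both systems one semantic domain (state-to-state functions) and comparing them on the same programs; the two-instruction machine below is a toy stand-in, not a real avionics instruction set:

```python
# Sketch: original machine and emulator share one semantic domain, so their
# observable behaviours can be compared directly on the same programs.
def reference_semantics(program, state):
    """Original machine: each instruction is a state transformer."""
    for op, arg in program:
        if op == "INC": state = state + arg
        elif op == "DBL": state = state * 2
    return state

def emulated_semantics(program, state):
    """Emulator: different internal structure, same intended meaning."""
    table = {"INC": lambda s, a: s + a, "DBL": lambda s, a: s * 2}
    for op, arg in program:
        state = table[op](state, arg)
    return state

prog = [("INC", 3), ("DBL", 0), ("INC", 1)]
assert reference_semantics(prog, 0) == emulated_semantics(prog, 0)  # both 7
print("behaviours agree on this program")
```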
Abstract:
To foster ongoing international cooperation beyond ACES (APEC Cooperation for Earthquake Simulation) on the simulation of solid earth phenomena, agreement was reached to work towards establishment of a frontier international research institute for simulating the solid earth: iSERVO = International Solid Earth Research Virtual Observatory institute (http://www.iservo.edu.au). This paper outlines a key Australian contribution towards the iSERVO institute seed project: the construction of (1) a typical intraplate fault system model using practical fault-system data from South Australia (i.e., the SA interacting fault model), which includes data management and editing, geometrical modeling, and mesh generation; and (2) a finite-element-based software tool, built on our long-term and ongoing effort to develop the R-minimum-strategy-based finite-element computational algorithm and software tool for modelling three-dimensional nonlinear frictional contact behavior between multiple deformable bodies with the arbitrarily-shaped contact element strategy. A numerical simulation of the SA fault system is carried out using this software tool to demonstrate its capability and our efforts towards seeding the iSERVO Institute.
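As a small, heavily simplified taste of the contact mechanics involved (toy parameters; the paper's R-minimum, arbitrarily-shaped contact element method is far more elaborate), a penalty-based Coulomb friction update for a single contact node might look like:

```python
# Sketch: penalty-based normal contact with a Coulomb friction return
# mapping for one node against a surface (toy numbers, not the paper's code).
def coulomb_contact(gap, slip, k_n=1e6, k_t=1e5, mu=0.6):
    """Return (normal, tangential) tractions for one node-surface contact."""
    if gap >= 0.0:                 # node not penetrating: no contact forces
        return 0.0, 0.0
    t_n = -k_n * gap               # normal traction from penetration depth
    t_t = k_t * slip               # trial (stick) tangential traction
    if abs(t_t) > mu * t_n:        # Coulomb limit exceeded: node slides
        t_t = mu * t_n * (1 if t_t > 0 else -1)
    return t_n, t_t

print(coulomb_contact(gap=-1e-4, slip=2e-3))  # sliding case: (100.0, 60.0)
```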