46 results for software, translation, validation tool, VMNET, Wikipedia, XML
Abstract:
An attempt was made to quantify the boundaries and validate the granule growth regime map for liquid-bound granules recently proposed by Iveson and Litster (AIChE J. 44 (1998) 1510). This regime map postulates that the type of granule growth behaviour is a function of only two dimensionless groups: the amount of granule deformation during collision (characterised by a Stokes deformation number, St(def)) and the maximum granule pore saturation, s(max). The results of experiments performed with a range of materials (glass ballotini, iron ore fines, copper chalcopyrite powder and a sodium sulphate and cellulose mixture) using both drum and high shear mixer granulators were examined. The drum granulation results gave good agreement with the proposed regime map. The boundary between crumb and steady growth occurs at St(def) of order 0.1 and the boundary between steady and induction growth occurs at St(def) of order 0.001. The nucleation-only boundary occurs at pore saturations that increase from 70% to 80% with decreasing St(def). However, the high shear mixer results all had St(def) numbers which were too large. This is most likely because the chopper tip speed is an over-estimate of the average impact velocity granules experience, and possibly also because the dynamic yield strength of the materials is significantly greater than the yield strengths measured at low strain rates. Hence, the map is only a useful tool for comparing the granulation behaviour of different materials in the same device. Until we have a better understanding of the flow patterns and impact velocities in granulators, it cannot be used to compare different types of equipment. Theoretical considerations also revealed that several of the regime boundaries are functions of additional parameters not explicitly contained on the map, such as binder viscosity. (C) 2001 Elsevier Science B.V. All rights reserved.
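For orientation, the two dimensionless groups are usually written as follows; this is a sketch of the standard definitions and symbol choices from the granulation literature, not text quoted from the abstract above.

```latex
% Common forms of the two regime-map groups (usual literature symbols, assumed here):
\mathrm{St}_{\mathrm{def}} = \frac{\rho_g U_c^2}{2 Y_g},
\qquad
s_{\max} = \frac{w\, \rho_s (1 - \varepsilon_{\min})}{\rho_l\, \varepsilon_{\min}}
```

Here rho_g is the granule density, U_c a characteristic collision velocity, Y_g the granule dynamic yield stress, w the liquid-to-solid mass ratio, rho_s and rho_l the solid and liquid densities, and epsilon_min the minimum granule porosity.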
Abstract:
Seasonal climate forecasting offers potential for improving management of crop production risks in the cropping systems of NE Australia. But how is this capability best connected to management practice? Over the past decade, we have pursued participative systems approaches involving simulation-aided discussion with advisers and decision-makers. This has led to the development of discussion support software as a key vehicle for facilitating infusion of forecasting capability into practice. In this paper, we set out the basis of our approach, its implementation and preliminary evaluation. We outline the development of the discussion support software Whopper Cropper, which was designed for, and in close consultation with, public and private advisers. Whopper Cropper consists of a database of simulation output and a graphical user interface to generate analyses of risks associated with crop management options. The charts produced provide conversation pieces for advisers to use with their farmer clients in relation to the significant decisions they face. An example application, detail of the software development process and an initial survey of user needs are presented. We suggest that discussion support software is about moving beyond traditional notions of supply-driven decision support systems. Discussion support software is largely demand-driven and can complement participatory action research programs by providing cost-effective general delivery of simulation-aided discussions about relevant management actions. The critical role of farm management advisers and dialogue among key players is highlighted. We argue that the discussion support concept, as exemplified by the software tool Whopper Cropper and the group processes surrounding it, provides an effective means to infuse innovations, like seasonal climate forecasting, into farming practice. Crown Copyright (C) 2002 Published by Elsevier Science Ltd. All rights reserved.
Abstract:
Concurrent programs are hard to test due to the inherent nondeterminism. This paper presents a method and tool support for testing concurrent Java components. Tool support is offered through ConAn (Concurrency Analyser), a tool for generating drivers for unit testing Java classes that are used in a multithreaded context. To obtain adequate controllability over the interactions between Java threads, the generated driver contains threads that are synchronized by a clock. The driver automatically executes the calls in the test sequence in the prescribed order and compares the outputs against the expected outputs specified in the test sequence. The method and tool are illustrated in detail on an asymmetric producer-consumer monitor. Their application to testing over 20 concurrent components, a number of which were sourced from industry and found to contain faults, is presented and discussed.
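As a rough illustration of the clock idea only (hypothetical classes, not ConAn's generated drivers or API), a test driver can hold each thread at a numbered tick so that calls on the component under test occur in the prescribed order:

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

// Minimal sketch of a clock-synchronised test driver (illustrative, not ConAn's code).
public class ClockDriverSketch {

    // Logical clock: each test thread blocks until the clock reaches its assigned tick.
    static class TestClock {
        private int tick = 0;
        synchronized void awaitTick(int t) throws InterruptedException {
            while (tick < t) wait();
        }
        synchronized void advance() {
            tick++;
            notifyAll();
        }
    }

    // Stand-in component under test: a trivial producer-consumer monitor.
    static class Buffer {
        private final List<Integer> items = new ArrayList<>();
        synchronized void put(int x) { items.add(x); notifyAll(); }
        synchronized int take() throws InterruptedException {
            while (items.isEmpty()) wait();
            return items.remove(0);
        }
    }

    public static void main(String[] args) throws Exception {
        TestClock clock = new TestClock();
        Buffer buffer = new Buffer();
        List<String> log = Collections.synchronizedList(new ArrayList<>());

        Thread producer = new Thread(() -> {
            try {
                clock.awaitTick(1);            // scheduled at tick 1
                buffer.put(42);
                log.add("put=42");
            } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        });
        Thread consumer = new Thread(() -> {
            try {
                clock.awaitTick(2);            // scheduled at tick 2
                log.add("take=" + buffer.take());
            } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        });

        producer.start();
        consumer.start();
        clock.advance();                       // tick 1: release the producer
        Thread.sleep(50);                      // crude settling delay, good enough for a sketch
        clock.advance();                       // tick 2: release the consumer
        producer.join();
        consumer.join();

        // The driver would compare this observed log against the expected test sequence.
        System.out.println(log + "  expected: [put=42, take=42]");
    }
}
```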
Abstract:
Formal specifications can precisely and unambiguously define the required behavior of a software system or component. However, formal specifications are complex artifacts that need to be verified to ensure that they are consistent, complete, and validated against the requirements. Specification testing or animation tools exist to assist with this by allowing the specifier to interpret or execute the specification. However, currently little is known about how to do this effectively. This article presents a framework and tool support for the systematic testing of formal, model-based specifications. Several important generic properties that should be satisfied by model-based specifications are first identified. Following the idea of mutation analysis, we then use variants or mutants of the specification to check that these properties are satisfied. The framework also allows the specifier to test application-specific properties. All properties are tested for a range of states that are defined by the tester in the form of a testgraph, which is a directed graph that partially models the states and transitions of the specification being tested. Tool support is provided for the generation of the mutants, for automatically traversing the testgraph and executing the test cases, and for reporting any errors. The framework is demonstrated on a small specification and its application to three larger specifications is discussed. Experience indicates that the framework can be used effectively to test small to medium-sized specifications and that it can reveal a significant number of problems in these specifications.
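A minimal sketch of the testgraph idea under stated assumptions (the model, operations and property below are invented stand-ins, not the paper's framework): each path through the graph is executed against the specification and a generic property is checked in every visited state; re-running the same traversal against a mutant should cause at least one check to fail.

```java
import java.util.List;
import java.util.function.Consumer;
import java.util.function.Predicate;

// Illustrative sketch of testgraph-style specification testing (not the paper's tool).
public class TestgraphSketch {

    // Stand-in executable model: a bounded counter "specification".
    static class CounterSpec {
        static final int MAX = 3;
        int value = 0;
        void inc()   { if (value < MAX) value++; }  // a mutant might drop this guard
        void reset() { value = 0; }
    }

    // One edge of the testgraph: an operation moving the model between abstract states.
    static class Edge {
        final String name;
        final Consumer<CounterSpec> op;
        Edge(String name, Consumer<CounterSpec> op) { this.name = name; this.op = op; }
    }

    // Walk one path of the testgraph, checking a generic property after every transition.
    static boolean runPath(List<Edge> path, Predicate<CounterSpec> property) {
        CounterSpec spec = new CounterSpec();
        for (Edge e : path) {
            e.op.accept(spec);
            if (!property.test(spec)) {
                System.out.println("property violated after " + e.name + ": value=" + spec.value);
                return false;
            }
        }
        return true;
    }

    public static void main(String[] args) {
        // Generic property: the counter never leaves its declared bounds.
        Predicate<CounterSpec> inBounds = s -> s.value >= 0 && s.value <= CounterSpec.MAX;

        // One path through the testgraph, deliberately probing the upper boundary.
        List<Edge> path = List.of(
            new Edge("inc", CounterSpec::inc),
            new Edge("inc", CounterSpec::inc),
            new Edge("inc", CounterSpec::inc),
            new Edge("inc", CounterSpec::inc),
            new Edge("reset", CounterSpec::reset));

        System.out.println("original satisfies property: " + runPath(path, inBounds));
        // Running the same path on a mutant (e.g. the guard removed from inc())
        // should make the check fail, showing that the tests can detect that mutation.
    }
}
```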
Abstract:
Objectives: To validate the WOMAC 3.1 in a touch screen computer format, which applies each question as a cartoon in writing and in speech (QUALITOUCH method), and to assess patient acceptance of the computer touch screen version. Methods: The paper and computer formats of WOMAC 3.1 were applied in random order to 53 subjects with hip or knee osteoarthritis. The mean age of the subjects was 64 years (range 45 to 83), 60% were male, 53% were 65 years or older, and 53% used computers at home or at work. Agreement between formats was assessed by intraclass correlation coefficients (ICCs). Preferences were assessed with a supplementary questionnaire. Results: ICCs between formats were 0.92 (95% confidence interval, 0.87 to 0.96) for pain; 0.94 (0.90 to 0.97) for stiffness; and 0.96 (0.94 to 0.98) for function. ICCs were similar in men and women, in subjects with or without previous computer experience, and in subjects below or above age 65. The computer format was found easier to use by 26% of the subjects, the paper format by 8%, and 66% were undecided. Overall, 53% of subjects preferred the computer format, while 9% preferred the paper format, and 38% were undecided. Conclusion: The computer format of the WOMAC 3.1 is a reliable assessment tool. Agreement between computer and paper formats was independent of computer experience, age, or sex. Thus the computer format may help improve patient follow up by meeting patients' preferences and providing immediate results.
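For reference, format agreement in studies of this kind is typically quantified with a two-way ICC of the Shrout-Fleiss family; the abstract does not state which variant was used, so the single-measure, absolute-agreement form below is only the usual choice.

```latex
% Single-measure, absolute-agreement two-way ICC (Shrout-Fleiss ICC(2,1)); assumed variant.
\mathrm{ICC}(2,1) = \frac{MS_R - MS_E}{\,MS_R + (k-1)\,MS_E + \frac{k}{n}\left(MS_C - MS_E\right)\,}
```

Here MS_R is the between-subjects mean square, MS_C the between-formats mean square, MS_E the residual mean square, k the number of formats (two here) and n the number of subjects.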
Abstract:
Background: The multitude of motif detection algorithms developed to date have largely focused on the detection of patterns in primary sequence. Since sequence-dependent DNA structure and flexibility may also play a role in protein-DNA interactions, the simultaneous exploration of sequence- and structure-based hypotheses about the composition of binding sites and the ordering of features in a regulatory region should be considered as well. The consideration of structural features requires the development of new detection tools that can deal with data types other than primary sequence. Results: GANN (available at http://bioinformatics.org.au/gann) is a machine learning tool for the detection of conserved features in DNA. The software suite contains programs to extract different regions of genomic DNA from flat files and convert these sequences to indices that reflect sequence and structural composition or the presence of specific protein binding sites. The machine learning component allows the classification of different types of sequences based on subsamples of these indices, and can identify the best combinations of indices and machine learning architecture for sequence discrimination. Another key feature of GANN is the replicated splitting of data into training and test sets, and the implementation of negative controls. In validation experiments, GANN successfully merged important sequence and structural features to yield good predictive models for synthetic and real regulatory regions. Conclusion: GANN is a flexible tool that can search through large sets of sequence and structural feature combinations to identify those that best characterize a set of sequences.
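As a toy illustration of turning raw sequence into numeric indices that a classifier can consume (generic features with placeholder property values, not GANN's actual index definitions):

```java
import java.util.HashMap;
import java.util.Map;

// Toy feature extraction for DNA sequences (generic indices, not GANN's own feature set).
public class SequenceIndices {

    // Placeholder dinucleotide property values (illustrative numbers only,
    // not any published structural parameter set).
    private static final Map<String, Double> DINUC_PROPERTY = new HashMap<>();
    static {
        String bases = "ACGT";
        double v = 0.0;
        for (char a : bases.toCharArray())
            for (char b : bases.toCharArray())
                DINUC_PROPERTY.put("" + a + b, (v += 0.1));
    }

    // Sequence-composition index: fraction of G/C bases.
    static double gcContent(String seq) {
        long gc = seq.chars().filter(c -> c == 'G' || c == 'C').count();
        return (double) gc / seq.length();
    }

    // Structure-flavoured index: mean dinucleotide property along the sequence.
    static double meanDinucProperty(String seq) {
        double sum = 0;
        for (int i = 0; i + 1 < seq.length(); i++)
            sum += DINUC_PROPERTY.getOrDefault(seq.substring(i, i + 2), 0.0);
        return sum / (seq.length() - 1);
    }

    public static void main(String[] args) {
        String seq = "ACGTGCGTATAGCGC";
        // A vector such as [gcContent, meanDinucProperty, ...] would be fed to a
        // classifier, with replicated train/test splits and shuffled-sequence
        // negative controls along the lines described in the abstract.
        System.out.printf("gc=%.3f dinuc=%.3f%n", gcContent(seq), meanDinucProperty(seq));
    }
}
```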
Abstract:
The schema of an information system can significantly impact the ability of end users to efficiently and effectively retrieve the information they need. Obtaining the appropriate data quickly increases the likelihood that an organization will make good decisions and respond adeptly to challenges. This research presents and validates a methodology for evaluating, ex ante, the relative desirability of alternative instantiations of a model of data. In contrast to prior research, each instantiation is based on a different formal theory. This research theorizes that the instantiation that yields the lowest weighted average query complexity for a representative sample of information requests is the most desirable instantiation for end-user queries. The theory was validated by an experiment that compared end-user performance using an instantiation of a data structure based on the relational model of data with performance using the corresponding instantiation of the data structure based on the object-relational model of data. Complexity was measured using three different Halstead metrics: program length, difficulty, and effort. For a representative sample of queries, the average complexity using each instantiation was calculated. As theorized, end users querying the instantiation with the lower average complexity made fewer semantic errors, i.e., were more effective at composing queries. (c) 2005 Elsevier B.V. All rights reserved.
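The three Halstead measures named above have standard definitions; with n1 and n2 the numbers of distinct operators and operands in a query and N1 and N2 their total occurrences:

```latex
% Standard Halstead metrics (length, volume, difficulty, effort).
N = N_1 + N_2, \qquad
V = N \log_2 (n_1 + n_2), \qquad
D = \frac{n_1}{2} \cdot \frac{N_2}{n_2}, \qquad
E = D \cdot V
```

N is program length, D difficulty and E effort; the volume V appears only as an intermediate quantity.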
Abstract:
While object-oriented programming offers great solutions for today's software developers, this success has created difficult problems in class documentation and testing. In Java, two tools provide assistance: Javadoc allows class interface documentation to be embedded as code comments and JUnit supports unit testing by providing assert constructs and a test framework. This paper describes JUnitDoc, an integration of Javadoc and JUnit, which provides better support for class documentation and testing. With JUnitDoc, test cases are embedded in Javadoc comments and used as both examples for documentation and test cases for quality assurance. JUnitDoc extracts the test cases for use in HTML files serving as class documentation and in JUnit drivers for class testing. To address the difficult problem of testing inheritance hierarchies, JUnitDoc provides a novel solution in the form of a parallel test hierarchy. A small controlled experiment compares the readability of JUnitDoc documentation to formal documentation written in Object-Z. Copyright (c) 2005 John Wiley & Sons, Ltd.
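The general shape of the idea, sketched with a plain Javadoc comment (the exact JUnitDoc tag syntax is deliberately not shown; this only illustrates embedding an executable example in class documentation so a tool can extract it as both HTML and a test case):

```java
/**
 * A bounded counter.
 *
 * <p>Example usage, which a JUnitDoc-style tool could extract both as documentation
 * and as a JUnit test case (tag syntax omitted; illustrative only):
 *
 * <pre>
 *   Counter c = new Counter(3);
 *   c.increment();
 *   assert c.value() == 1;
 * </pre>
 */
public class Counter {
    private final int max;
    private int value;

    /** Creates a counter that counts from 0 up to {@code max}. */
    public Counter(int max) { this.max = max; }

    /** Increments the counter, saturating at the maximum. */
    public void increment() { if (value < max) value++; }

    /** Returns the current count. */
    public int value() { return value; }
}
```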
Abstract:
Dynamic binary translation is the process of translating, modifying and rewriting executable (binary) code from one machine to another at run-time. This process of low-level re-engineering consists of a reverse engineering phase followed by a forward engineering phase. UQDBT, the University of Queensland Dynamic Binary Translator, is a machine-adaptable translator. Adaptability is provided through the specification of properties of machines and their instruction sets, allowing the support of different pairs of source and target machines. Most binary translators are closely bound to a pair of machines, making analyses and code hard to reuse. Like most virtual machines, UQDBT performs generic optimizations that apply to a variety of machines. Frequently executed code is translated to native code by the use of edge weight instrumentation, which makes UQDBT converge more quickly than systems based on instruction speculation. In this paper, we describe the architecture and run-time feedback optimizations performed by the UQDBT system, and provide results obtained on the x86 and SPARC® platforms.
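A conceptual sketch of edge-weight instrumentation (hypothetical structure and threshold, not UQDBT internals): counters on control-flow edges in the generically translated code trigger promotion of a block to optimised native code once a threshold is crossed.

```java
import java.util.HashMap;
import java.util.Map;

// Conceptual sketch of hot-path detection via edge counters (not UQDBT code).
public class EdgeWeightSketch {

    private static final int HOT_THRESHOLD = 50;          // assumed tuning value
    private final Map<String, Integer> edgeCount = new HashMap<>();
    private final Map<String, Boolean> nativeBlock = new HashMap<>();

    // Called each time control flows from block 'from' to block 'to'
    // in the generically translated (slow) code.
    void onEdge(String from, String to) {
        String edge = from + "->" + to;
        int count = edgeCount.merge(edge, 1, Integer::sum);
        if (count == HOT_THRESHOLD && !nativeBlock.getOrDefault(to, false)) {
            // A real translator would re-translate 'to' (and the hot path leading
            // into it) directly to optimised native code at this point.
            nativeBlock.put(to, true);
            System.out.println("promoting block " + to + " to native code");
        }
    }

    public static void main(String[] args) {
        EdgeWeightSketch t = new EdgeWeightSketch();
        for (int i = 0; i < 200; i++) {        // simulate a loop between two blocks
            t.onEdge("B1", "B2");
            t.onEdge("B2", "B1");
        }
    }
}
```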
Abstract:
Two-dimensional (2-D) strain (epsilon(2-D)) on the basis of speckle tracking is a new technique for strain measurement. This study sought to validate epsilon(2-D) and tissue velocity imaging (TVI)-based strain (epsilon(TVI)) with tagged harmonic-phase (HARP) magnetic resonance imaging (MRI). Thirty patients (mean age 62 +/- 11 years) with known or suspected ischemic heart disease were evaluated. Wall motion (wall motion score index 1.55 +/- 0.46) was assessed by an expert observer. Three apical images were obtained for longitudinal strain (16 segments) and 3 short-axis images for radial and circumferential strain (18 segments). Radial epsilon(TVI) was obtained in the posterior wall. HARP MRI was used to measure principal strain, expressed as maximal length change in each direction. Values for epsilon(2-D), epsilon(TVI), and HARP MRI were comparable for all 3 strain directions and were reduced in dysfunctional segments. The mean difference and correlation between longitudinal epsilon(2-D) and HARP MRI (2.1 +/- 5.5%, r = 0.51, p < 0.001) were similar to those between longitudinal epsilon(TVI) and HARP MRI (1.1 +/- 6.7%, r = 0.40, p < 0.001). The mean difference and correlation were more favorable between radial epsilon(2-D) and HARP MRI (0.4 +/- 10.2%, r = 0.60, p < 0.001) than between radial epsilon(TVI) and HARP MRI (3.4 +/- 10.5%, r = 0.47, p < 0.001). For circumferential strain, the mean difference and correlation between epsilon(2-D) and HARP MRI were 0.7 +/- 5.4% and r = 0.51 (p < 0.001), respectively. In conclusion, the modest correlations of echocardiographic and HARP MRI strain reflect the technical challenges of the 2 techniques. Nonetheless, epsilon(2-D) provides a reliable tool to quantify regional function, with radial measurements being more accurate and feasible than with TVI. Unlike epsilon(TVI), epsilon(2-D) provides circumferential measurements. (c) 2006 Elsevier Inc. All rights reserved.
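For clarity on how the "a +/- b%" agreement figures above are conventionally read (this is the standard Bland-Altman convention, not additional data from the study): for n paired segments,

```latex
% Bias and SD of paired differences between echo-derived and HARP MRI strain,
% reported alongside Pearson's r.
\mathrm{bias} = \frac{1}{n}\sum_{i=1}^{n}\left(\varepsilon^{\mathrm{echo}}_i - \varepsilon^{\mathrm{MRI}}_i\right),
\qquad
s_d = \sqrt{\frac{1}{n-1}\sum_{i=1}^{n}\left(\varepsilon^{\mathrm{echo}}_i - \varepsilon^{\mathrm{MRI}}_i - \mathrm{bias}\right)^2}
```

so that each quoted value is bias +/- s_d, with r the Pearson correlation between the two measurements.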
Abstract:
Processor emulators are a software tool for allowing legacy computer programs to be executed on a modern processor. In the past emulators have been used in trivial applications such as maintenance of video games. Now, however, processor emulation is being applied to safety-critical control systems, including military avionics. These applications demand utmost guarantees of correctness, but no verification techniques exist for proving that an emulated system preserves the original system’s functional and timing properties. Here we show how this can be done by combining concepts previously used for reasoning about real-time program compilation, coupled with an understanding of the new and old software architectures. In particular, we show how both the old and new systems can be given a common semantics, thus allowing their behaviours to be compared directly.
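One way to read "compared directly" is as a refinement or trace-inclusion obligation under the common semantics; this is an interpretation of the abstract, not a formula taken from the paper.

```latex
% Under the common semantics beh(.), every (timed) behaviour of the emulated
% system must also be a behaviour of the original system.
\mathrm{beh}(\textit{emulated system}) \;\subseteq\; \mathrm{beh}(\textit{original system})
```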
Abstract:
To foster ongoing international cooperation beyond ACES (APEC Cooperation for Earthquake Simulation) on the simulation of solid earth phenomena, agreement was reached to work towards establishment of a frontier international research institute for simulating the solid earth: iSERVO = International Solid Earth Research Virtual Observatory institute (http://www.iservo.edu.au). This paper outlines a key Australian contribution towards the iSERVO institute seed project: the construction of (1) a typical intraplate fault system model using practical fault system data of South Australia (i.e., the SA interacting fault model), which includes data management and editing, geometrical modeling and mesh generation; and (2) a finite-element based software tool, built on our long-term and ongoing effort to develop the R-minimum strategy based finite-element computational algorithm and software tool for modelling three-dimensional nonlinear frictional contact behavior between multiple deformable bodies with the arbitrarily-shaped contact element strategy. A numerical simulation of the SA fault system is carried out using this software tool to demonstrate its capability and our efforts towards seeding the iSERVO Institute.
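For context, the frictional contact behaviour referred to is usually governed by the standard unilateral-contact and Coulomb friction conditions; the R-minimum strategy and contact-element specifics are not reproduced here.

```latex
% Standard contact complementarity and Coulomb friction conditions (background only).
g_n \ge 0, \qquad p_n \ge 0, \qquad g_n\, p_n = 0, \qquad \lVert \mathbf{t}_T \rVert \le \mu\, p_n
```

Here g_n is the normal gap, p_n the contact pressure, t_T the tangential traction and mu the friction coefficient; sliding is admitted only when the friction inequality holds with equality.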
Abstract:
It is a paradox that in a country with one of the most variable climates in the world, cropping decisions are sometimes made with limited consideration of production and resource management risks. There are significant opportunities for improved performance based on targeted information regarding the risks resulting from decision options. WhopperCropper is a tool to help agricultural advisors and farmers capture these benefits and add value to their intuition and experience. WhopperCropper allows probability analysis of the effects of a range of selectable crop inputs and existing resources on yield and economic outcomes. Inputs can include agronomic inputs (e.g. crop type, N fertiliser rate), resources (e.g. soil water at sowing), and seasonal climate forecast (SOI phase). WhopperCropper has been successfully developed and refined as a discussion-support process for decision makers and their advisers in the northern grains region of Australia. The next phase of the project will build on the current project by extending its application nationally and enhancing the resource management aspects. A commercial partner, with over 800 advisor clients nationally, will participate in the project.
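A toy sketch of the kind of probability-of-exceedance calculation that sits behind such charts (hypothetical yield numbers; the real tool draws on a database of pre-run crop simulations):

```java
import java.util.Arrays;

// Toy probability-of-exceedance calculation over simulated yields (illustrative only).
public class YieldRiskSketch {

    // Probability that simulated yield meets or exceeds a target.
    static double probExceed(double[] yields, double target) {
        long hits = Arrays.stream(yields).filter(y -> y >= target).count();
        return (double) hits / yields.length;
    }

    public static void main(String[] args) {
        // Hypothetical simulated yields (t/ha) for two N fertiliser rates under one
        // SOI phase; real values would come from crop model runs.
        double[] lowN  = {1.8, 2.1, 2.4, 1.5, 2.0, 2.2, 1.9, 2.3};
        double[] highN = {2.0, 2.9, 3.4, 1.4, 2.8, 3.1, 2.2, 3.3};

        double target = 2.5;
        System.out.printf("P(yield >= %.1f t/ha): low N = %.0f%%, high N = %.0f%%%n",
                target, 100 * probExceed(lowN, target), 100 * probExceed(highN, target));
    }
}
```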
Abstract:
The Symbolic Analysis Laboratory (SAL) is a suite of tools for analysis of state transition systems. Tools supported include a simulator and four temporal logic model checkers. The common input language to these tools was originally developed with translation from other languages, both programming and specification languages, in mind. It is, therefore, a rich language supporting a range of type definitions and expressions. In this paper, we investigate the translation of Z specifications into the SAL language as a means of providing model checking support for Z. This is facilitated by a library of SAL definitions encoding the Z mathematical toolkit.