959 results for automated software testing
Abstract:
Maintaining and evolving software systems has become a highly critical task over recent years due to the diversity and high demand of features, devices, and users. Understanding and analyzing how new changes impact the quality attributes of the architecture of such systems is an essential prerequisite to avoid quality deterioration during their evolution. This thesis proposes an automated approach for analyzing variation of the performance quality attribute in terms of execution time (response time). It is implemented by a framework that adopts dynamic analysis and software repository mining techniques to provide an automated way of revealing potential sources – commits and issues – of performance variation in scenarios during the evolution of software systems. The approach defines four phases: (i) preparation – selecting the scenarios and preparing the target releases; (ii) dynamic analysis – determining the performance of scenarios and methods by computing their execution times; (iii) variation analysis – processing and comparing the dynamic analysis results across different releases; and (iv) repository mining – identifying issues and commits associated with the detected performance variation. Empirical studies were conducted to evaluate the approach from different perspectives. An exploratory study analyzed the feasibility of applying the approach to systems from different domains to automatically identify source code elements with performance variation and the changes that affected those elements during an evolution. This study analyzed three systems: (i) SIGAA – a web system for academic management; (ii) ArgoUML – a UML modeling tool; and (iii) Netty – a framework for network applications. Another study performed an evolutionary analysis by applying the approach to multiple releases of Netty and of the web frameworks Wicket and Jetty. In that study, 21 releases were analyzed (seven from each system), totaling 57 scenarios. In summary, 14 scenarios with significant performance variation were found for Netty, 13 for Wicket, and 9 for Jetty. Additionally, feedback was obtained from eight developers of these systems through an online form. Finally, in the last study, a performance regression model was developed to indicate which commit properties are most likely to cause performance degradation. Overall, 997 commits were mined, of which 103 were retrieved from degraded source code elements and 19 from optimized ones, while 875 had no impact on execution time. The number of days before the release was published and the day of the week proved to be the most relevant variables of performance-degrading commits in our model. The area under the Receiver Operating Characteristic (ROC) curve of the regression model is 60%, which means that using the model to decide whether a commit will cause degradation is 10% better than a random decision.
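As an illustration of the variation-analysis phase described above, the following is a minimal Java sketch that compares the execution times of one scenario across two releases and flags it as degraded or optimized. The class name, the sample data, and the 10% threshold are assumptions made for illustration; the thesis's framework may well rely on statistical tests rather than a fixed threshold.

import java.util.Arrays;

/** Minimal sketch: flag a scenario whose mean execution time changed
 *  between two releases by more than a chosen threshold.
 *  Names and the 10% threshold are illustrative assumptions. */
public class VariationAnalysisSketch {

    static double mean(double[] samples) {
        return Arrays.stream(samples).average().orElse(0.0);
    }

    /** Returns "DEGRADED", "OPTIMIZED" or "UNCHANGED" for one scenario. */
    static String classify(double[] oldReleaseMillis, double[] newReleaseMillis) {
        double oldMean = mean(oldReleaseMillis);
        double newMean = mean(newReleaseMillis);
        double relativeChange = (newMean - oldMean) / oldMean;
        if (relativeChange > 0.10) return "DEGRADED";    // slower in the new release
        if (relativeChange < -0.10) return "OPTIMIZED";  // faster in the new release
        return "UNCHANGED";
    }

    public static void main(String[] args) {
        double[] release1 = {120.0, 118.5, 121.2};  // execution times in ms
        double[] release2 = {140.3, 139.8, 141.0};
        System.out.println(classify(release1, release2)); // prints DEGRADED
    }
}

Scenarios classified as DEGRADED would then feed the repository-mining phase, which looks for the commits and issues touching the affected code elements.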
Abstract:
Electric vehicles and the electronic components inside the vehicle are becoming increasingly important. Software, too, is starting to have a significant impact on modern high-end cars; therefore, a careful validation process needs to be implemented with the aim of having a bug-free product when it is released. As software complexity increases, the testing phases also become more demanding. Testing can be troublesome and, in some cases, tedious and simple. The intelligence can be shifted to test definition and writing rather than to test execution. The aim of this document is to start the definition of an automatic modular testing system capable of executing test cycles on systems that interact with CAN networks and with a DUT (device under test) that can be touched by a robotic arm. The document defines a first version of the system, in particular the hardware interface part, with the aim of capturing logs and executing tests in an automated fashion so that the test engineer can focus on test definition and analysis rather than on execution.
Abstract:
Support for interoperability and interchangeability of software components that are part of a fieldbus automation system relies on the definition of open architectures, most of them involving proprietary technologies. Concurrently, standard, open and non-proprietary technologies, such as XML, SOAP, Web Services and the like, have greatly evolved and been diffused in the computing area. This article presents a FOUNDATION fieldbus (TM) device description technology named Open-EDD, based on XML and other related technologies (XSLT, DOM using the Xerces implementation, OO, XML Schema), proposing an open and non-proprietary alternative to the EDD (Electronic Device Description). This initial proposal includes defining Open-EDDML as the programming language of the technology in the FOUNDATION fieldbus (TM) protocol, implementing a compiler and a parser, and finally, integrating and testing the new technology using field devices and a commercial fieldbus configurator. This study attests that this new technology is feasible and can be applied to other configurators or HMI applications used in fieldbus automation systems. (c) 2008 Elsevier B.V. All rights reserved.
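As a rough illustration of how an XML-based device description can be consumed with the standard DOM API (for which Xerces is the usual implementation), here is a minimal Java sketch. The file name and the "parameter" element and attribute names are invented for the example; the real Open-EDDML schema is the one defined by the paper.

import javax.xml.parsers.DocumentBuilder;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import org.w3c.dom.Element;
import org.w3c.dom.NodeList;
import java.io.File;

/** Minimal sketch: read an XML device description with the standard DOM API.
 *  Element and attribute names below are hypothetical, not the Open-EDDML schema. */
public class DeviceDescriptionReader {
    public static void main(String[] args) throws Exception {
        DocumentBuilder builder = DocumentBuilderFactory.newInstance().newDocumentBuilder();
        Document doc = builder.parse(new File("device-description.xml"));

        NodeList params = doc.getElementsByTagName("parameter"); // hypothetical element name
        for (int i = 0; i < params.getLength(); i++) {
            Element p = (Element) params.item(i);
            System.out.println(p.getAttribute("name") + " : " + p.getAttribute("type"));
        }
    }
}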
Abstract:
Previous work on generating state machines for the purpose of class testing has not been formally based. There has also been work on deriving state machines from formal specifications for testing non-object-oriented software. We build on this work by presenting a method for deriving a state machine for testing purposes from a formal specification of the class under test. We also show how the resulting state machine can be used as the basis for a test suite developed and executed using an existing framework for class testing. To derive the state machine, we identify the states and possible interactions of the operations of the class under test. The Test Template Framework is used to formally derive the states from the Object-Z specification of the class under test. The transitions of the finite state machine are calculated from the derived states and the class's operations. The formally derived finite state machine is transformed to a ClassBench testgraph, which is used as input to the ClassBench framework to test a C++ implementation of the class. The method is illustrated using a simple bounded queue example.
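To make the bounded-queue example concrete, the following hand-written Java sketch shows the kind of finite state machine such a method yields, with states Empty, Partial, and Full and enqueue/dequeue transitions. The class and state names are illustrative only and are not taken from the ClassBench or Test Template Framework APIs.

/** Illustrative finite state machine for a bounded queue of capacity N.
 *  States and transitions mirror the usual empty/partial/full partition. */
enum QueueState { EMPTY, PARTIAL, FULL }

class BoundedQueueFsm {
    private final int capacity;
    private int size = 0;

    BoundedQueueFsm(int capacity) { this.capacity = capacity; }

    QueueState state() {
        if (size == 0) return QueueState.EMPTY;
        if (size == capacity) return QueueState.FULL;
        return QueueState.PARTIAL;
    }

    /** enqueue transition: EMPTY/PARTIAL -> PARTIAL or FULL; disallowed on FULL. */
    void enqueue() {
        if (state() == QueueState.FULL) throw new IllegalStateException("queue full");
        size++;
    }

    /** dequeue transition: PARTIAL/FULL -> PARTIAL or EMPTY; disallowed on EMPTY. */
    void dequeue() {
        if (state() == QueueState.EMPTY) throw new IllegalStateException("queue empty");
        size--;
    }
}

A testgraph built from this machine would visit each state and exercise each transition against the C++ implementation under test.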
Abstract:
Test templates and a test template framework are introduced as useful concepts in specification-based testing. The framework can be defined using any model-based specification notation and used to derive tests from model-based specifications; in this paper, it is demonstrated using the Z notation. The framework formally defines test data sets and their relation to the operations in a specification and to other test data sets, providing structure to the testing process. Flexibility is preserved, so that many testing strategies can be used. Important application areas of the framework are discussed, including refinement of test data, regression testing, and test oracles.
Abstract:
Dherte PM, Negrao MPG, Mori Neto S, Holzhacker R, Shimada V, Taberner P, Carmona MJC - Smart Alerts: Development of a Software to Optimize Data Monitoring. Background and objectives: Monitoring is useful for vital follow-ups and prevention, diagnosis, and treatment of several events in anesthesia. Although alarms can be useful in monitoring, they can cause dangerous user desensitization. The objective of this study was to describe the development of specific software to integrate intraoperative monitoring parameters, generating "smart alerts" that can support decision making as well as indicate possible diagnoses and treatments. Methods: A system that allowed flexibility in the definition of alerts, combining individual alarms of the monitored parameters to generate a more elaborate alert system, was designed. After investigating a set of smart alerts considered relevant in the surgical environment, a prototype was designed and evaluated, and additional suggestions were implemented in the final product. To verify the occurrence of smart alerts, the system underwent testing with data previously obtained during intraoperative monitoring of 64 patients. The system allows continuous analysis of monitored parameters, verifying the occurrence of smart alerts defined in the user interface. Results: With this system a potential 92% reduction in alarms was observed. We observed that most situations that did not generate alerts involved individual alarms that did not represent risk to the patient. Conclusions: Implementation of software can allow integration of the monitored data and generate information, such as possible diagnoses or interventions. An expressive potential reduction in the amount of alarms during surgery was observed. Information displayed by the system can often be more useful than analysis of isolated parameters.
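As a hypothetical illustration of how individual alarms can be combined into a single "smart alert", the sketch below fires one alert only when two alarm conditions occur together. The parameter names and thresholds are assumptions made for the example, not the rules used in the study.

/** Illustrative composite alert: combines two individual alarm conditions
 *  into one "smart alert". Thresholds are invented for the example. */
class SmartAlertSketch {
    static boolean hypotensionAlarm(double systolicMmHg) { return systolicMmHg < 90.0; }
    static boolean tachycardiaAlarm(double heartRateBpm) { return heartRateBpm > 120.0; }

    /** Fires only when both conditions hold at the same time,
     *  instead of raising two separate low-level alarms. */
    static boolean possibleShockAlert(double systolicMmHg, double heartRateBpm) {
        return hypotensionAlarm(systolicMmHg) && tachycardiaAlarm(heartRateBpm);
    }

    public static void main(String[] args) {
        System.out.println(possibleShockAlert(85.0, 130.0)); // true: combined alert fires
        System.out.println(possibleShockAlert(85.0, 80.0));  // false: isolated alarm only
    }
}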
Abstract:
Fuzzy Bayesian tests were performed to evaluate whether the mother's seroprevalence and children's seroconversion to measles vaccine could be considered as "high" or "low". The results of the tests were aggregated into a fuzzy rule-based model structure, which would allow an expert to influence the model results. The linguistic model was developed considering four input variables. As the model output, we obtain the recommended age-specific vaccine coverage. The inputs of the fuzzy rules are fuzzy sets and the outputs are constant functions, performing the simplest Takagi-Sugeno-Kang model. This fuzzy approach is compared to a classical one, where the classical Bayes test was performed. Although the fuzzy and classical performances were similar, the fuzzy approach was more detailed and revealed important differences. In addition to taking into account subjective information in the form of fuzzy hypotheses, it can be intuitively grasped by the decision maker. Finally, we show that the Bayesian test of fuzzy hypotheses is an interesting approach from the theoretical point of view, in the sense that it combines two complementary areas of investigation, normally seen as competitive. (C) 2007 IMACS. Published by Elsevier B.V. All rights reserved.
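For reference, the simplest (zero-order) Takagi-Sugeno-Kang model with constant rule outputs, as described above, computes the crisp result as the firing-strength-weighted average of those constants; the symbols below are chosen for illustration:

y(\mathbf{x}) = \frac{\sum_{i=1}^{R} w_i(\mathbf{x})\, c_i}{\sum_{i=1}^{R} w_i(\mathbf{x})}

where R is the number of rules, w_i(x) is the firing strength of rule i for input x, and c_i is the constant consequent of rule i (here, a recommended vaccine coverage value).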
Abstract:
Purpose: To evaluate the ability of the GDx Variable Corneal Compensation (VCC) Guided Progression Analysis (GPA) software for detecting glaucomatous progression. Design: Observational cohort study. Participants: The study included 453 eyes from 252 individuals followed for an average of 46 +/- 14 months as part of the Diagnostic Innovations in Glaucoma Study. At baseline, 29% of the eyes were classified as glaucomatous, 67% of the eyes were classified as suspects, and 5% of the eyes were classified as healthy. Methods: Images were obtained annually with the GDx VCC and analyzed for progression using the Fast Mode of the GDx GPA software. Progression using conventional methods was determined by the GPA software for standard automated achromatic perimetry (SAP) and by masked assessment of optic disc stereophotographs by expert graders. Main Outcome Measures: Sensitivity, specificity, and likelihood ratios (LRs) for detection of glaucoma progression using the GDx GPA were calculated with SAP and optic disc stereophotographs used as reference standards. Agreement among the different methods was reported using the AC(1) coefficient. Results: Thirty-four of the 431 glaucoma and glaucoma suspect eyes (8%) showed progression by SAP or optic disc stereophotographs. The GDx GPA detected 17 of these eyes for a sensitivity of 50%. Fourteen eyes showed progression only by the GDx GPA with a specificity of 96%. Positive and negative LRs were 12.5 and 0.5, respectively. None of the healthy eyes showed progression by the GDx GPA, with a specificity of 100% in this group. Inter-method agreement (AC1 coefficient and 95% confidence intervals) for non-progressing and progressing eyes was 0.96 (0.94-0.97) and 0.44 (0.28-0.61), respectively. Conclusions: The GDx GPA detected glaucoma progression in a significant number of cases showing progression by conventional methods, with high specificity and high positive LRs. Estimates of the accuracy for detecting progression suggest that the GDx GPA could be used to complement clinical evaluation in the detection of longitudinal change in glaucoma.
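The reported likelihood ratios follow directly from the sensitivity and specificity given above:

LR^{+} = \frac{\text{sensitivity}}{1 - \text{specificity}} = \frac{0.50}{0.04} = 12.5, \qquad LR^{-} = \frac{1 - \text{sensitivity}}{\text{specificity}} = \frac{0.50}{0.96} \approx 0.5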
Abstract:
This study evaluated the stress levels at the core layer and the veneer layer of zirconia crowns (comprising an alternative core design vs. a standard core design) under mechanical/thermal simulation, and subjected simulated models to laboratory mouth-motion fatigue. The dimensions of a mandibular first molar were imported into computer-aided design (CAD) software and a tooth preparation was modeled. A crown was designed using the space between the original tooth and the prepared tooth. The alternative core presented an additional lingual shoulder that lowered the veneer bulk of the cusps. Finite element analyses evaluated the residual maximum principal stress fields at the core and veneer of both designs under loading and when cooled from 900 °C to 25 °C. Crowns were fabricated and mouth-motion fatigued, generating master Weibull curves and reliability data. Thermal modeling showed low residual stress fields throughout the bulk of the cusps for both groups. Mechanical simulation depicted a shift in stress levels to the core of the alternative design compared with the standard design. Significantly higher reliability was found for the alternative core. Regarding the alternative configuration, thermal and mechanical computer simulations showed stress comparable to and higher than that of the standard configuration, respectively. Such a mechanical scenario probably led to the higher reliability of the alternative design under fatigue.
Abstract:
Concerns have been raised about the reproducibility of brachial artery reactivity (BAR), because subjective decisions regarding the location of interfaces may influence the measurement of very small changes in lumen diameter. We studied 120 consecutive patients with BAR to address whether an automated technique could be applied, and whether experience influenced reproducibility between two observers, one experienced and one inexperienced. Digital cineloops were measured automatically, using software that detects the leading edge of the endothelium and tracks it across sequential frames, and also manually, where a set of three point-to-point measurements was averaged. There was a high correlation between automated and manual techniques for both observers, although less variability was present with expert readers. The limits of agreement overall for interobserver concordance were 0.13 +/- 0.65 mm for the manual and 0.03 +/- 0.74 mm for the automated measurement. For intraobserver concordance, the limits of agreement were -0.07 +/- 0.38 mm for observer 1 and -0.16 +/- 0.55 mm for observer 2. We concluded that BAR measurements were highly concordant between observers, although more concordant using the automated method, and that experience does affect concordance. Care must be taken to ensure that the same segments are measured between observers and serially.
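If the limits of agreement above follow the usual Bland-Altman convention (an assumption; the abstract does not state the formula), the quoted half-width corresponds to 1.96 standard deviations of the paired differences:

\text{limits of agreement} = \bar{d} \pm 1.96\, s_d

where \bar{d} is the mean difference between the two measurements and s_d is the standard deviation of those differences.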
Abstract:
A system has been developed for studying the biodegradation of natural and synthetic polymeric material. The system is based on standard methods developed by the European Committee for Standardisation (CEN TC 261) (ISO/DIS 14855) and the American Society for Testing and Materials, 'Standard Test Method for Determining Aerobic Biodegradation of Plastic Materials under Controlled Composting Conditions' (ASTM D 5338-92). A new low-cost compost facility has been used that satisfies the requirements of these standards. The system has been automated for data collection and has been run under the conditions specified by the standards. In the system, cellulose, newspaper and two starch-based polymers were treated with compost in a series of 3 dm³ vessels at 52 °C and under conditions of optimum moisture and pH. The degradation was followed over time by measuring the amount of carbon released as carbon dioxide. (C) 2001 Society of Chemical Industry.
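In controlled-composting tests of this kind (ISO 14855 / ASTM D 5338), the degree of biodegradation is conventionally expressed as the fraction of the material's theoretical CO2 that is actually evolved; as a hedged reference formula (not quoted from the abstract):

D_t = \frac{(\mathrm{CO_2})_T - (\mathrm{CO_2})_B}{\mathrm{ThCO_2}} \times 100\%

where (CO2)_T is the cumulative CO2 evolved in the test vessel, (CO2)_B that evolved in the blank (compost-only) vessel, and ThCO2 the theoretical CO2 computed from the carbon content of the sample.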
Abstract:
The refinement calculus is a well-established theory for deriving program code from specifications. Recent research has extended the theory to handle timing requirements, as well as functional ones, and we have developed an interactive programming tool based on these extensions. Through a number of case studies completed using the tool, this paper explains how the tool helps the programmer by supporting the many forms of variables needed in the theory. These include simple state variables as in the untimed calculus, trace variables that model the evolution of properties over time, auxiliary variables that exist only to support formal reasoning, subroutine parameters, and variables shared between parallel processes.
Abstract:
Concurrent programs are hard to test due to the inherent nondeterminism. This paper presents a method and tool support for testing concurrent Java components. Tool support is offered through ConAn (Concurrency Analyser), a tool for generating drivers for unit testing Java classes that are used in a multithreaded context. To obtain adequate controllability over the interactions between Java threads, the generated driver contains threads that are synchronized by a clock. The driver automatically executes the calls in the test sequence in the prescribed order and compares the outputs against the expected outputs specified in the test sequence. The method and tool are illustrated in detail on an asymmetric producer-consumer monitor. Their application to testing over 20 concurrent components, a number of which are sourced from industry and were found to contain faults, is presented and discussed.
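As a rough illustration of the clock-based synchronization idea (not ConAn's actual generated code), the sketch below uses a small logical clock so that two driver threads invoke calls on a component in a prescribed order. The Clock class, the use of a BlockingQueue as the "monitor under test", and the tick numbers are assumptions made for the example.

import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

/** Rough sketch of clock-synchronized driver threads for a concurrent component.
 *  The "clock" orders the calls made by the two test threads; the queue stands in
 *  for the monitor under test. Not ConAn's generated code. */
public class ClockDriverSketch {
    /** Logical clock: a thread blocks until the clock reaches its tick. */
    static class Clock {
        private int tick = 0;
        synchronized void awaitTick(int t) throws InterruptedException {
            while (tick < t) wait();
        }
        synchronized void advance() { tick++; notifyAll(); }
    }

    public static void main(String[] args) throws InterruptedException {
        BlockingQueue<String> monitor = new ArrayBlockingQueue<>(1); // component under test
        Clock clock = new Clock();

        Thread consumer = new Thread(() -> {
            try {
                clock.awaitTick(2);                       // tick 2: consume after the put
                String item = monitor.take();
                System.out.println("expected 'x', got '" + item + "'");
            } catch (InterruptedException ignored) { }
        });

        Thread producer = new Thread(() -> {
            try {
                clock.awaitTick(1);                       // tick 1: produce first
                monitor.put("x");
            } catch (InterruptedException ignored) { }
        });

        consumer.start();
        producer.start();
        clock.advance();   // tick 1: release the producer
        Thread.sleep(100); // crude wait so the put happens before tick 2
        clock.advance();   // tick 2: release the consumer
        producer.join();
        consumer.join();
    }
}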
Abstract:
Graphical user interfaces (GUIs) make software easy to use by providing the user with visual controls. Therefore, correctness of GUI code is essential to the correct execution of the overall software. Models can help in the evaluation of interactive applications by allowing designers to concentrate on their more important aspects. This paper presents a generic model for language-independent reverse engineering of graphical user interface based applications, and we explore the integration of model-based testing techniques in our approach, thus allowing us to perform fault detection. A prototype tool has been constructed, which is already capable of deriving and testing a user interface behavioral model of applications written in Java/Swing.