11 results for Software Testing (Teste de Software)

at Universidade Federal do Rio Grande do Norte (UFRN)


Relevance:

70.00%

Publisher:

Abstract:

The main goal of Regression Testing (RT) is to reuse the test suite of the latest version of a software system in its current version, in order to maximize the value of the tests already developed and to ensure that old features continue to work after new changes. Even with reuse, it is common that not all tests need to be executed again. For this reason, Regression Test Selection (RTS) techniques are encouraged: they aim to select, from the full suite, only the tests that reveal faults, which reduces costs and makes RTS an attractive practice for testing teams. Several recent research works evaluate the quality of the selections performed by RTS techniques, identifying which one presents the best results as measured by metrics such as inclusion and precision. Ideally, RTS techniques would search the System Under Test (SUT) for tests that reveal faults. However, because this problem has no viable general solution, they instead search for tests that reveal changes, where faults may occur. Nevertheless, these changes may modify the execution flow of the program itself, so that some tests no longer exercise the same stretch of code. In this context, this dissertation investigates whether changes performed on a SUT affect the quality of the test selection performed by an RTS technique and, if so, which characteristics of those changes cause errors that lead the technique to wrongly include or exclude tests. For this purpose, a tool was developed in Java to automate the measurement of the inclusion and precision averages achieved by a regression test selection technique for a particular characteristic of change. To validate this tool, an empirical study was conducted evaluating the RTS technique Pythia, based on textual differencing, on a large web information system, analyzing the types of tasks performed to evolve the SUT.
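A minimal Java sketch of how inclusion and precision could be computed for a single selection, assuming the classic definitions (inclusion as the fraction of modification-revealing tests that were selected; precision as the fraction of non-revealing tests that were omitted); the class and method names are hypothetical, not the dissertation's tool:

    import java.util.HashSet;
    import java.util.Set;

    public final class RtsMetrics {

        /** Inclusion: fraction of modification-revealing tests that the technique selected. */
        static double inclusion(Set<String> selected, Set<String> modificationRevealing) {
            if (modificationRevealing.isEmpty()) return 1.0; // nothing to include, by convention
            Set<String> hit = new HashSet<>(modificationRevealing);
            hit.retainAll(selected);
            return (double) hit.size() / modificationRevealing.size();
        }

        /** Precision: fraction of non-modification-revealing tests that the technique omitted. */
        static double precision(Set<String> selected, Set<String> allTests,
                                Set<String> modificationRevealing) {
            Set<String> nonRevealing = new HashSet<>(allTests);
            nonRevealing.removeAll(modificationRevealing);
            if (nonRevealing.isEmpty()) return 1.0; // nothing to omit, by convention
            Set<String> omitted = new HashSet<>(nonRevealing);
            omitted.removeAll(selected);
            return (double) omitted.size() / nonRevealing.size();
        }
    }

Averaging these two values over many changes, grouped by the characteristic of each change, would yield the per-characteristic averages the abstract describes.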

Relevance:

60.00%

Publisher:

Abstract:

Formal methods and software testing are tools to obtain and control software quality. When used together, they provide mechanisms for software specification, verification and error detection. Even though formal methods allow software to be mathematically verified, they are not enough to assure that a system is free of faults; thus, software testing techniques are necessary to complement the verification and validation process of a system. Model-Based Testing techniques allow tests to be generated from other software artifacts such as specifications and abstract models. Using formal specifications as the basis for test creation, we can generate better-quality tests, because these specifications are usually precise and free of ambiguity. Fernanda Souza (2009) proposed a method to define test cases from B Method specifications. This method used information from the machine's invariant and the operation's precondition to define positive and negative test cases for an operation, using techniques based on equivalence class partitioning and boundary value analysis. However, the method proposed in 2009 was not automated and had conceptual deficiencies; for instance, it did not fit into a well-defined coverage criteria classification. We started our work with a case study that applied the method to an example of a B specification from industry. Based on this case study, we obtained insights to improve the method. In our work we evolved the proposed method, rewriting it and adding characteristics to make it compatible with a test classification used by the community. We also improved the method to support specifications structured in different components, to use information from the operation's behavior in the test case generation process, and to use new coverage criteria. Besides, we implemented a tool to automate the method and submitted it to more complex case studies.
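As an illustration of the equivalence class partitioning and boundary value analysis techniques the method builds on, the Java sketch below derives candidate inputs for a numeric precondition of the form lo <= x <= hi; the class name and the example bounds are assumptions for illustration, not taken from the B specifications used in the work:

    import java.util.List;

    public final class BoundaryValues {

        /** Boundary value analysis inside the valid equivalence class lo <= x <= hi. */
        static List<Integer> positiveCases(int lo, int hi) {
            // boundaries, their neighbors, and one interior representative
            return List.of(lo, lo + 1, (lo + hi) / 2, hi - 1, hi);
        }

        /** Representatives of the invalid classes just outside the precondition. */
        static List<Integer> negativeCases(int lo, int hi) {
            return List.of(lo - 1, hi + 1);
        }

        public static void main(String[] args) {
            // e.g. an operation whose precondition is 0 <= x <= 100
            System.out.println(positiveCases(0, 100)); // [0, 1, 50, 99, 100]
            System.out.println(negativeCases(0, 100)); // [-1, 101]
        }
    }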

Relevance:

60.00%

Publisher:

Abstract:

Automation has become increasingly necessary during the software testing process due to the high cost and time associated with this activity. Some tools have been proposed to automate the execution of Acceptance Tests in Web applications. However, many of them have important limitations, such as a strong dependence on the structure of the HTML pages and the need to manually assign values to the test cases. In this work, we present IFL4TCG, a language for specifying acceptance test scenarios for Web applications, and a tool that generates test cases from these scenarios. The proposed language supports the Equivalence Class Partitioning criterion, and the tool can generate test cases that meet different combination strategies (i.e., Each-Choice, Base-Choice and All Combinations). In order to evaluate the effectiveness of the proposed solution, we used the language and the associated tool for designing and executing Acceptance Tests on a module of the Sistema Unificado de Administração Pública (SUAP) of the Instituto Federal Rio Grande do Norte (IFRN). Four systems analysts and one computer technician, who work as developers of that system, participated in the evaluation. Preliminary results showed that IFL4TCG can actually help to detect defects in Web applications.
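The Each-Choice strategy mentioned above only requires that every value of every parameter appears in at least one test case. Below is a minimal Java sketch of one way to satisfy it; the cycling scheme and the example parameter domains are illustrative assumptions, not IFL4TCG's actual algorithm:

    import java.util.ArrayList;
    import java.util.List;

    public final class EachChoice {

        /** Builds max(|domain|) test cases, cycling each domain so every value appears at least once. */
        static List<List<String>> eachChoice(List<List<String>> domains) {
            int rows = domains.stream().mapToInt(List::size).max().orElse(0);
            List<List<String>> tests = new ArrayList<>();
            for (int i = 0; i < rows; i++) {
                List<String> test = new ArrayList<>();
                for (List<String> domain : domains) {
                    test.add(domain.get(i % domain.size()));
                }
                tests.add(test);
            }
            return tests;
        }

        public static void main(String[] args) {
            // hypothetical parameters of a form under test
            var domains = List.of(
                List.of("admin", "guest"),
                List.of("GET", "POST", "PUT"));
            eachChoice(domains).forEach(System.out::println);
            // [admin, GET] / [guest, POST] / [admin, PUT]
        }
    }

Base-Choice and All Combinations differ only in how rows are built: the former varies one parameter at a time around a base test, and the latter takes the full Cartesian product of the domains.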

Relevance:

60.00%

Publisher:

Abstract:

There is growing interest in the Computer Science education community in including testing concepts in introductory programming courses. Aiming to contribute to this issue, we introduce POPT, a Problem-Oriented Programming and Testing approach for introductory programming courses. POPT's main goal is to improve on the traditional method of teaching introductory programming, which concentrates mainly on implementation and neglects testing. POPT extends the POP (Problem-Oriented Programming) methodology proposed in the PhD thesis of Andrea Mendonça (UFCG). In both methodologies, POPT and POP, students' skills in dealing with ill-defined problems must be developed from the first programming courses. In POPT, however, students are stimulated to clarify ill-defined problem specifications guided by the definition of test cases (in a table-like manner). This paper presents POPT and TestBoot, a tool developed to support the methodology. In order to evaluate the approach, a case study and a controlled experiment (which adopted a Latin Square design) were performed in an introductory programming course of the Computer Science and Software Engineering programs at the Federal University of Rio Grande do Norte, Brazil. The study results have shown that, when compared to a Blind Testing approach, POPT stimulates the implementation of programs of better external quality: the first program version submitted by POPT students passed twice as many (professor-defined) test cases as those of non-POPT students. Moreover, POPT students submitted fewer program versions and spent more time before submitting the first version to the automatic evaluation system, which leads us to think that POPT students are stimulated to think more carefully about the solution they are implementing. The controlled experiment confirmed the influence of the proposed methodology on the quality of the code developed by POPT students.
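One way to picture the table-like definition of test cases that guides POPT students is as a JUnit 5 parameterized test, where each table row pins down one behavior of an initially ill-defined problem; the grading problem and all names below are hypothetical, and TestBoot itself may encode tables differently:

    import org.junit.jupiter.params.ParameterizedTest;
    import org.junit.jupiter.params.provider.CsvSource;
    import static org.junit.jupiter.api.Assertions.assertEquals;

    class GradeTest {

        // hypothetical function whose vague specification the table clarifies
        static String grade(int score) {
            if (score < 0 || score > 100) throw new IllegalArgumentException("score out of range");
            return score >= 90 ? "A" : score >= 70 ? "B" : "C";
        }

        @ParameterizedTest
        @CsvSource({
            // score, expected grade -- each row is one line of the test-case table
            "100, A",
            "90,  A",
            "89,  B",
            "70,  B",
            "0,   C"
        })
        void gradeMatchesTable(int score, String expected) {
            assertEquals(expected, grade(score));
        }
    }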

Relevance:

60.00%

Publisher:

Abstract:

The work proposed by Cleverton Hentz (2010) presented an approach to define tests from the formal description of a program's input. Since some programs, such as compilers, may have their inputs formalized through grammars, it is common to use context-free grammars to specify the set of their valid inputs. In the original work, the author developed a tool, LGen, that automatically generates tests for compilers. In the present work, we identify classes of problems in various areas that are described by grammars, for example, the specification of software configurations, which are potential situations for using LGen. In addition, we conducted case studies with grammars from different domains, and from these studies it was possible to evaluate the behavior and performance of LGen during sentence generation, evaluating aspects such as execution time, number of generated sentences and satisfaction of the coverage criteria available in LGen.
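As a sketch of what sentence generation from a context-free grammar involves, the Java code below randomly derives strings from a toy grammar for signed integer literals. It illustrates the general idea only; it is not LGen's actual algorithm, which additionally drives generation by coverage criteria:

    import java.util.List;
    import java.util.Map;
    import java.util.Random;

    public final class SentenceGen {
        static final Random RNG = new Random(42); // seeded for reproducible output

        /** Symbols that are keys of the grammar map are nonterminals; anything else is a terminal. */
        static String generate(Map<String, List<List<String>>> grammar, String symbol) {
            List<List<String>> alts = grammar.get(symbol);
            if (alts == null) return symbol; // terminal: emit as-is
            List<String> alt = alts.get(RNG.nextInt(alts.size())); // pick a random production
            StringBuilder sb = new StringBuilder();
            for (String s : alt) sb.append(generate(grammar, s));
            return sb.toString();
        }

        public static void main(String[] args) {
            // toy grammar: Int -> Sign Digits; digit alternatives truncated for brevity
            Map<String, List<List<String>>> g = Map.of(
                "Int",    List.of(List.of("Sign", "Digits")),
                "Sign",   List.of(List.of(""), List.of("-")),
                "Digits", List.of(List.of("Digit"), List.of("Digit", "Digits")),
                "Digit",  List.of(List.of("0"), List.of("1"), List.of("7")));
            for (int i = 0; i < 5; i++) System.out.println(generate(g, "Int"));
        }
    }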

Relevance:

30.00%

Publisher:

Abstract:

Alterations in the neuropsychomotor development of children are not rare and can manifest with varying intensity at different stages of development. In this context, maternal risk factors may contribute to the appearance of these alterations. A number of studies have reported that diagnosing neuropsychomotor development delay is not an easy task, especially in the basic public health network; diagnosis requires effective, low-cost, easy-to-apply procedures. The Denver Developmental Screening Test, first published in 1967, is currently used in several countries; it has been revised and renamed the Denver II Test, and it meets the aforementioned criteria. Accordingly, the aim of this study was to apply the Denver II Test to verify the prevalence of suspected neuropsychomotor development delay in children between 0 and 12 months of age and correlate it with the following maternal risk factors: family income, schooling, age at pregnancy, drug use during pregnancy, gestational age, gestational problems, type of delivery and the desire to have children. For data collection, performed during the first 6 months of 2004, a clinical assessment was made of 398 children selected by pediatricians and the nursing team of each public health unit. Later, the parents or guardians were asked to complete a structured questionnaire to determine possible risk indicators of neuropsychomotor development delay. Finally, the Denver II Developmental Screening Test (DDST) was applied. The data were analyzed using the Statistical Package for the Social Sciences (SPSS) software, version 6.1, with the confidence interval set at 95%. The Denver II Test yielded both normal and questionable results; the latter suggest compromised neuropsychomotor development in the children examined and deserve further investigation. The correlation of the results with the preestablished maternal risk variables (family income, mother's schooling, age at pregnancy, drug use during pregnancy and gestational age) was strongly significant. The other maternal risk variables (gestational problems, type of delivery and desire to have children) were not significant. Using an adjusted logistic regression model, we estimated that a child is more likely to have suspected neuropsychomotor development delay when the mother has ≤ 4 years of schooling, is less than 20 years old, and used drugs during pregnancy. This study produced two manuscripts: one published in Acta Cirúrgica Brasileira, which analyzed children with suspected neuropsychomotor development delay in the city of Natal, Brazil; the other (to be published) analyzed the magnitude of the independent variable maternal schooling associated with neuropsychomotor development delay, every 3 months during the first twelve months of life of the selected children. The results of the present study reinforce the multifactorial nature of development and the cumulative effect of maternal risk factors, and show the need for a regional policy that promotes low-cost programs for the community, involving children at risk of neuropsychomotor development delay. Moreover, they suggest the need for health professionals better qualified in monitoring child development.
This was an inter- and multidisciplinary study, with the integrated participation of doctors, nurses, nursing assistants and professionals from other areas, such as statisticians and information technology professionals, and it met all the requirements of the Postgraduate Program in Health Sciences of the Federal University of Rio Grande do Norte.

Relevance:

30.00%

Publisher:

Abstract:

Considering the transition from an industrial society to an information society, we realize that the digital training currently offered is insufficient for navigating a digitized reality. To help minimize this problem, this work develops, assesses and validates the RoboEduc software for working with educational robotics, whose main differential is the programming of robotic devices at different levels, considering the specifics of the trainees' reality. One of the emphases of this work is the presentation of the materials and procedures involved in the development, analysis and evolution of this software. For validation, usability tests were performed; based on the analysis of these tests, version 4.0 of RoboEduc was developed.

Relevance:

30.00%

Publisher:

Abstract:

Introduction: The ability to walk is impaired in the obese by anthropometric factors (BMI and height), musculoskeletal pain and level of inactivity. Little is known about the influence of body adiposity on, and the acute response of the cardiovascular system during, the whole 6-minute walk test (6mWT). Objective: To evaluate the effect of anthropometric measures (BMI and waist-to-hip ratio, WHR), cardiac effort and inactivity on the walking ability of the morbidly obese. Materials and Methods: A total of 36 morbidly obese patients (36.23 ± 11.82 years old, BMI 49.16 kg/m²) were recruited from the outpatient department for the treatment of obesity and bariatric surgery at the Onofre Lopes University Hospital and underwent anthropometric measurements of obesity (BMI and WHR), pulmonary function testing, and assessment of habitual physical activity (Baecke Questionnaire) and walking capacity (6mWT). The following measures were monitored: heart rate (HR), breathing frequency (BF), peripheral oxygen saturation, level of perceived exertion, systemic arterial pressure and double product (DP), as well as the average speed developed and the total distance walked. The data were analyzed by gender and pattern of body adiposity, measuring the minute-by-minute behavior during walking. Pearson and Spearman correlation coefficients were calculated, and stepwise multiple regression examined the predictors of walking capacity. All analyses were performed in Statistica 6.0 software. Results: 20 obese patients had abdominal adiposity (WHR = 1.01); waist circumference was 135.8 cm in women (25) and 139.8 cm in men (10). Patients walked 412.43 m by the end of the 6mWT, with no differences between gender and adiposity groups. The total distance walked was explained by BMI (45%), HR in the sixth minute (43%), the Baecke score (24%) and fatigue (-23%). 88.6% of the obese patients (31) performed the test above 60% of maximal HR, and peak HR was reached at the 5th minute of the 6mWT. Systemic arterial pressure and DP rose after walking, but with no differences between gender and adiposity groups. Conclusion: Walking in the obese was not influenced by gender or by the pattern of body adiposity. The final distance walked is attributed to excess body weight, cardiac stress, the perceived effort required by physical activity and the obese patients' level of sedentarism. Within one minute of walking, the obese patients reached an intensity range suitable for cardiovascular training.

Relevance:

30.00%

Publisher:

Abstract:

With the increasing complexity of software systems, there is also increased concern about their faults, which can cause financial losses and even loss of life. Therefore, we propose in this work to minimize faults in software by using formally specified tests. The combination of testing and formal specifications has been gaining strength in research, mainly through MBT (Model-Based Testing). Developing software from formal specifications, when the whole refinement process is done rigorously, ensures that what is specified will be implemented; an implementation generated from these specifications accurately reflects what was specified. However, the specification is not always refined to the level of implementation and code generation, and in these cases the tests generated from the specification tend to find faults. Additionally, the generation of so-called "invalid tests", i.e., tests that exercise application scenarios that were not addressed in the specification, complements the formal development process more significantly. Therefore, this work proposes a method, structured in pseudo-code, for generating tests from B formal specifications. The method is based on the systematization of the black-box testing techniques of boundary value analysis and equivalence partitioning, as well as the orthogonal pairs technique. The method was applied to a B specification, and B test machines that generate implementation-language-independent test cases were produced. To validate the method, the test cases were manually transformed into JUnit test cases, and the application, created from the B specification and developed in Java, was tested. Faults were found by executing the JUnit test cases.
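A sketch of what the manual translation to JUnit could look like, for a hypothetical B machine whose invariant keeps an account balance non-negative; the Account class and both cases are illustrative assumptions, not the specification used in the dissertation:

    import org.junit.jupiter.api.Test;
    import static org.junit.jupiter.api.Assertions.assertEquals;
    import static org.junit.jupiter.api.Assertions.assertThrows;

    class AccountWithdrawTest {

        // hypothetical implementation derived from a B machine with invariant balance >= 0
        static final class Account {
            private int balance;
            Account(int balance) { this.balance = balance; }
            void withdraw(int amount) {
                if (amount <= 0 || amount > balance) throw new IllegalArgumentException();
                balance -= amount;
            }
            int balance() { return balance; }
        }

        @Test // valid case: boundary of the precondition amount <= balance
        void withdrawWholeBalance() {
            Account a = new Account(100);
            a.withdraw(100);
            assertEquals(0, a.balance());
        }

        @Test // "invalid test": a scenario outside the precondition must be rejected
        void withdrawMoreThanBalanceIsRejected() {
            Account a = new Account(100);
            assertThrows(IllegalArgumentException.class, () -> a.withdraw(101));
        }
    }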

Relevance:

30.00%

Publisher:

Abstract:

Through the adoption of the software product line (SPL) approach, several benefits are achieved when compared to conventional development processes, which are based on creating a single software system at a time. The process of developing an SPL differs from traditional software construction in that it has two essential phases: domain engineering, when the common and variable elements of the SPL are defined and implemented; and application engineering, when one or more applications (specific products) are derived by reusing the artifacts created in domain engineering. The testing activity is also fundamental and aims to detect defects in the artifacts produced during SPL development. However, the characteristics of an SPL bring new challenges to this activity that must be considered. Several approaches have recently been proposed for the product line testing process, but they have proven limited, providing only general guidelines. In addition, there is a lack of tools to support variability management and the customization of automated test cases for SPLs. In this context, this dissertation proposes a systematic approach to software product line testing. The approach offers: (i) automated SPL test strategies to be applied in domain and application engineering; (ii) explicit guidelines to support the implementation and reuse of automated test cases at the unit, integration and system levels in domain and application engineering; and (iii) tool support for automating variability management and the customization of test cases. The approach is evaluated through its application to a software product line for web systems. The results of this work show that the proposed approach can help developers deal with the challenges imposed by the characteristics of SPLs during the testing process.
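One simple way to customize automated test cases per product, in the spirit of the tool support described above, is to guard each reusable domain-engineering test with the features selected for the concrete product; the JUnit 5 sketch below is an illustrative assumption, not the dissertation's actual mechanism:

    import java.util.Set;
    import org.junit.jupiter.api.Assumptions;
    import org.junit.jupiter.api.Test;
    import static org.junit.jupiter.api.Assertions.assertEquals;

    class CheckoutTest {

        // features selected for this product; in a real SPL this would come from its feature model
        static final Set<String> FEATURES = Set.of("CART", "DISCOUNT");

        // hypothetical product code under test: a flat 10% discount
        static int applyDiscount(int price) { return price - price / 10; }

        @Test
        void discountIsApplied() {
            // this reusable test runs only in products that selected the DISCOUNT feature
            Assumptions.assumeTrue(FEATURES.contains("DISCOUNT"));
            assertEquals(90, applyDiscount(100));
        }
    }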

Relevance:

30.00%

Publisher:

Abstract:

This work presents a design method to develop and build software components rigorously, from the software functional model down to the assembly code level. The method is based on the B method and was developed with the support and interest of British Petroleum (BP). One goal of this methodology is to contribute to solving an important problem known as The Verifying Compiler. Besides, this work describes a formal model of the Z80 microcontroller and of a real system from the petroleum area. To achieve this goal, the formal model of the Z80 was developed and documented, as it is one key component for verification up to the assembly level. In order to improve the mentioned methodology, it was applied to a petroleum production test system, which is presented in this work. Part of the technique is performed manually; however, most of these activities can be automated by a specific compiler. To build such a compiler, the formal modelling of the microcontroller and of the production test system should provide relevant knowledge and experience. In summary, this work should improve the viability of one of the most stringent criteria for formal verification: speeding up the verification process, reducing design time and increasing the quality and reliability of the final software product. All these qualities are very important for systems that involve serious risks or need high confidence, which is very common in the petroleum industry.
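To give a flavor of what a formal model of a processor must pin down, the Java sketch below models two Z80 instructions as state transitions over an 8-bit accumulator and a carry flag, with the kind of invariant a B machine would state; it is illustrative only, not the dissertation's B model:

    public final class Z80Sketch {
        private int a;      // 8-bit accumulator
        private boolean cf; // carry flag

        // LD A, n : load an immediate value into the accumulator
        void ldA(int n) { a = n & 0xFF; }

        // ADD A, n : add an immediate value, setting the carry flag on 8-bit overflow
        void addA(int n) {
            int r = a + (n & 0xFF);
            cf = r > 0xFF;
            a = r & 0xFF;
            assert a >= 0 && a <= 0xFF; // the invariant a B machine would state: A stays 8-bit
        }

        public static void main(String[] args) {
            Z80Sketch cpu = new Z80Sketch();
            cpu.ldA(0xF0);
            cpu.addA(0x20);
            System.out.printf("A=%02X carry=%b%n", cpu.a, cpu.cf); // A=10 carry=true
        }
    }

Verified refinement from such a model down to real assembly is what makes "verification up to the assembly level" meaningful: every machine instruction must behave exactly as its formal counterpart.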