894 results for Automated Software Testing


Relevance:

90.00%

Publisher:

Abstract:

Some verification and validation techniques have been evaluated both theoretically and empirically. Most empirical studies have been conducted without subjects, passing over any effect testers have when they apply the techniques. We have run an experiment with students to evaluate the effectiveness of three verification and validation techniques (equivalence partitioning, branch testing and code reading by stepwise abstraction). We have studied how well the techniques are able to reveal defects in three programs. We have replicated the experiment eight times at different sites. Our results show that equivalence partitioning and branch testing are equally effective and better than code reading by stepwise abstraction. The effectiveness of code reading by stepwise abstraction varies significantly from program to program. Finally, we have identified project contextual variables that should be considered when applying any verification and validation technique or choosing one particular technique.
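
The first two techniques compared above can be made concrete with a small sketch. The code below illustrates equivalence partitioning on a hypothetical grading function; the function, its partitions and the assertion helpers are invented for illustration and are not the programs used in the experiment. The same four cases also happen to achieve full branch coverage of this tiny function.

```java
// Hypothetical example: equivalence partitioning for a grading function.
// The function and its partitions are illustrative, not taken from the study.
public class EquivalencePartitioningSketch {

    // Function under test: maps a score in [0, 100] to a pass/fail verdict.
    static String grade(int score) {
        if (score < 0 || score > 100) {
            throw new IllegalArgumentException("score out of range");
        }
        return score >= 50 ? "pass" : "fail";
    }

    public static void main(String[] args) {
        // One representative test per equivalence class:
        // invalid low (< 0), valid fail [0, 49], valid pass [50, 100], invalid high (> 100).
        expectThrows(-5);
        expectEquals("fail", grade(30));
        expectEquals("pass", grade(75));
        expectThrows(120);
        // These four cases also exercise every branch of grade(),
        // so branch coverage is satisfied for this small example.
        System.out.println("All equivalence-class tests passed.");
    }

    static void expectEquals(String expected, String actual) {
        if (!expected.equals(actual)) {
            throw new AssertionError("expected " + expected + " but got " + actual);
        }
    }

    static void expectThrows(int score) {
        try {
            grade(score);
            throw new AssertionError("expected exception for score " + score);
        } catch (IllegalArgumentException expected) {
            // expected: input outside the valid partitions
        }
    }
}
```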

Relevance:

90.00%

Publisher:

Abstract:

Background: This research addresses first and foremost the replication, and secondarily the synthesis, of software engineering (SE) experiments. Replication is impossible without access to all the details of the original experiment. However, the description of experiments is usually incomplete because knowledge is tacit and because of other problems: there is no standard reporting format, there are hardly any tools to support the generation of experimental reports, etc. This means that the original experiment cannot be reproduced exactly. These issues place considerable constraints on experimenters' options for carrying out replications and, ultimately, synthesizing experiments. Aim: The aim of the research is to formalize the SE experimental process in order to facilitate the communication of information among experimenters. Context: This PhD research was developed within the Empirical Software Engineering Research Group (GrISE) at the Universidad Politécnica de Madrid (UPM)'s School of Computer Engineering (ETSIINF), as part of project TIN2011-23216, "Technologies for Software Engineering Experiment Replication and Synthesis", funded by the Spanish Government. The GrISE research group fulfils all the necessary requirements (an established family of experiments with at least three experimental lines and lengthy replication experience, with 16 replications prior to 2011 in the software testing techniques line) and provides favourable conditions for the research to be conducted in the best possible way, such as full access to its information. Research Method: We opted for action research (AR) as the research method best suited to the characteristics of the investigation; results were generated through successive AR cycles addressing specific communication problems among experimenters. Results: The conceptual model of the experimental cycle was formalized from the viewpoint of the three key roles that experimenters play in the experimental process: research manager, experiment manager and senior experimenter. The model of the experimental cycle was formalized by means of a workflow and a process diagram. In tandem with the formalization of the SE experimental process, ISRE (Infrastructure for Sharing and Replicating Experiments) was developed as a proof of concept of an SE experimentation support environment. Finally, guidelines for developing SE experimentation support environments were designed based on a study of the key features that the models of experimentation support tools for different experimental disciplines have in common. Conclusions: The key contribution of this research is the formalization of the SE experimental process. GrISE experimenters were satisfied with both the models representing the formalization of the experimental cycle and the ISRE tool built to evaluate the models. To further validate the formalization, this study should be replicated at other research groups representative of the experimental SE community. Future Research Lines: The achievement of the aims and the resulting findings have opened up new research lines: (1) assess the feasibility of building a mechanism to help experimenters collaboratively make their own tacit knowledge explicit through debate and consensus, (2) continue empirical research at the same research group until the experimental cycle is fully covered (for example, new experiments, results synthesis, etc.), (3) replicate the research process at other empirical SE research groups, and (4) update the technology of the proof of concept so that it meets the constraints and needs of a real research environment.

Relevance:

90.00%

Publisher:

Abstract:

This project studies the fundamentals and techniques of software testing. We will see how important testing can be by reviewing several disasters caused by software faults. We will also study the different tools used to manage, administer and execute these tests. Finally, we will apply the concepts studied in a practical case: we will create functional test cases based on the specifications of the MDB/ICP protocol, and we will install one of the tools studied in the theoretical part and learn how to create these cases with it.

Relevance:

90.00%

Publisher:

Abstract:

This research has explored the relationship between system test complexity and tacit knowledge. This thesis proposes that the process of system testing (comprising test planning, test development, test execution, test fault analysis, test measurement, and case management) is directly affected both by complexity associated with the system under test and by other sources of complexity that are independent of the system under test but related to the wider process of system testing. While a certain amount of knowledge related to the system under test is inherent, tacit in nature, and therefore difficult to make explicit, it has been found that a significant amount of knowledge relating to these other sources of complexity can indeed be made explicit. While the importance of explicit knowledge has been reinforced by this research, there has been a lack of evidence to suggest that the availability of tacit knowledge to a test team is of any less importance to the process of system testing when operating in a traditional software development environment. Participants commonly expressed the sentiment that, even though a considerable amount of explicit knowledge relating to the system is freely available, a good deal of the knowledge demanded for effective system testing is actually tacit in nature (approximately 60% of participants in traditional development environments and 60% of participants in agile development environments expressed similar sentiments). To cater for the availability of tacit knowledge relating to the system under test, and indeed both the explicit and tacit knowledge required by system testing in general, an appropriate knowledge management structure needs to be in place. This appears to be required irrespective of the employed development methodology.

Relevance:

90.00%

Publisher:

Abstract:

Software testing is carried out to check whether the system fulfils the specified requirements and to find defects. It is an important part of system development and includes, among other things, regression testing. Regression tests are performed to ensure that a change in the system does not negatively affect other parts of the system. Document management systems often handle sensitive organizational data, which places high demands on security. Permissions in such systems must therefore be tested thoroughly to ensure that data does not end up in the wrong hands. Document management systems make it possible for several organizations to pool their resources and knowledge in order to reach common goals. Shared work processes are supported by workflows that contain a number of different states, and different permissions apply in each state. When a permission is changed, regression tests are required to make sure that the change has not affected other permissions. This study was conducted as a qualitative case study whose purpose was to describe the challenges of regression testing roles and permissions in document workflows in document management systems. Interviews and an observation revealed that a major challenge of these tests is that workflow states follow a predetermined sequence; completing this sequence involves an enormous number of permissions that must be tested, which makes the testing work very extensive in terms of, among other things, time and cost. The study focused on the document management system ProjectWise, which is managed by the Swedish Transport Administration (Trafikverket). A decision basis was produced for a technical solution for automated regression testing of roles and permissions in ProjectWise workflows. Based on a requirements elicitation, the decision basis proposed Team Foundation Server (TFS), Coded UI and a keyword-driven test method as the technical solution. Finally, the differences that the technical solution could make compared with manual testing were examined. Based on literature, a document study and first-hand experience, test automation was found to make a difference in a number of identified problem areas, including time and cost.
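
The keyword-driven test method mentioned in the proposed solution can be illustrated independently of ProjectWise or Coded UI. The sketch below, with entirely hypothetical keywords, roles and document identifiers, shows the core idea: test steps are data, and a generic runner interprets them, so many role/permission cases can share one script.

```java
import java.util.List;
import java.util.Map;

// Minimal sketch of the keyword-driven idea: test steps are expressed as data
// (keyword + argument) and interpreted by a generic runner. All keywords and
// step data are hypothetical, not taken from ProjectWise or Coded UI.
public class KeywordDrivenSketch {

    // One regression case: log in as a role, move a document through a workflow
    // state, and check that the expected permission is enforced.
    static final List<Map<String, String>> STEPS = List.of(
            Map.of("keyword", "loginAs",      "arg", "Reviewer"),
            Map.of("keyword", "openDocument", "arg", "DOC-001"),
            Map.of("keyword", "setState",     "arg", "UnderReview"),
            Map.of("keyword", "assertDenied", "arg", "Delete"));

    public static void main(String[] args) {
        for (Map<String, String> step : STEPS) {
            execute(step.get("keyword"), step.get("arg"));
        }
        System.out.println("Keyword-driven case finished.");
    }

    // In a real tool each keyword would call the UI-automation or API layer;
    // here they only log, to keep the sketch self-contained.
    static void execute(String keyword, String arg) {
        switch (keyword) {
            case "loginAs"      -> System.out.println("log in as role " + arg);
            case "openDocument" -> System.out.println("open document " + arg);
            case "setState"     -> System.out.println("move workflow to state " + arg);
            case "assertDenied" -> System.out.println("check that action '" + arg + "' is denied");
            default -> throw new IllegalArgumentException("unknown keyword: " + keyword);
        }
    }
}
```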

Relevance:

90.00%

Publisher:

Abstract:

Abstract – Background – The software effort estimation research area aims to improve the accuracy of effort estimates in software projects and activities. Aims – This study describes the development and usage of a web application to collect data generated from the Planning Poker estimation process, and the analysis of the collected data to investigate the impact of revising previous estimates when conducting similar estimates in a Planning Poker context. Method – Software activities were estimated by Universidade Tecnológica Federal do Paraná (UTFPR) computer students using Planning Poker, with and without revising previous similar activities, storing data about the decision-making process. The collected data was then used to investigate the impact that revising similar executed activities has on the accuracy of software effort estimates. Obtained Results – The UTFPR computer students were divided into 14 groups. Eight of them showed an accuracy increase in more than half of their estimates, three had roughly the same accuracy in more than half of their estimates, and only three lost accuracy in more than half of their estimates. Conclusion – Reviewing similar executed software activities when using Planning Poker led to more accurate software estimates in most cases and can therefore improve the software development process.
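
As a rough illustration of what estimate accuracy means here, the sketch below computes the magnitude of relative error (MRE), a common accuracy measure, for one hypothetical activity estimated with and without revising similar past activities. The metric choice and all numbers are assumptions for illustration, not data or methods taken from the study.

```java
// Illustrative sketch of comparing estimate accuracy before and after revising
// similar past activities. MRE (magnitude of relative error) is used here as a
// common accuracy measure; the numbers and the metric choice are assumptions.
public class EstimationAccuracySketch {

    // |actual - estimate| / actual : lower is more accurate.
    static double mre(double actual, double estimate) {
        return Math.abs(actual - estimate) / actual;
    }

    public static void main(String[] args) {
        double actualHours = 10.0;              // hypothetical actual effort
        double estimateWithoutRevision = 16.0;  // Planning Poker, no history consulted
        double estimateWithRevision = 11.5;     // after reviewing similar past activities

        double before = mre(actualHours, estimateWithoutRevision); // 0.60
        double after  = mre(actualHours, estimateWithRevision);    // 0.15

        System.out.printf("MRE without revision: %.2f%n", before);
        System.out.printf("MRE with revision:    %.2f%n", after);
        System.out.println(after < before
                ? "Revising similar activities improved accuracy in this example."
                : "No improvement in this example.");
    }
}
```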

Relevance:

80.00%

Publisher:

Abstract:

Designing and implementing thread-safe multithreaded libraries can be a daunting task, as developers of these libraries need to ensure that their implementations are free from concurrency bugs, including deadlocks. The usual practice involves employing software testing and/or dynamic analysis to detect deadlocks, and their effectiveness depends on well-designed multithreaded test cases. Unsurprisingly, developing multithreaded tests is significantly harder than developing sequential tests. In this paper, we address the problem of automatically synthesizing multithreaded tests that can induce deadlocks. The key insight of our approach is that a subset of the properties observed when a deadlock manifests in a concurrent execution can also be observed in a single-threaded execution. We design a novel, automatic, scalable and directed approach that identifies these properties and synthesizes a deadlock-revealing multithreaded test. The input to our approach is the library implementation under consideration and the output is a set of deadlock-revealing multithreaded tests. We have implemented our approach as part of a tool named OMEN. OMEN is able to synthesize multithreaded tests for many multithreaded Java libraries. Applying a dynamic deadlock detector to the execution of the synthesized tests results in the detection of a number of deadlocks, including 35 real deadlocks in classes documented as thread-safe. Moreover, our experimental results show that dynamic analysis on multithreaded tests that are either synthesized randomly or developed by third-party programmers is ineffective in detecting the deadlocks.
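
For readers unfamiliar with the kind of defect being targeted, the sketch below shows a minimal lock-order deadlock that a synthesized two-threaded test can trigger. The class is a stand-in for a library type and is not taken from the paper, and OMEN's actual synthesis procedure is not reproduced here.

```java
// Sketch of the kind of lock-order deadlock a synthesized multithreaded test can
// expose. The class is a stand-in for a "thread-safe" library type.
public class DeadlockSketch {

    static final Object lockA = new Object();
    static final Object lockB = new Object();

    // Library-style operations that each take two monitors, but in opposite order.
    static void transferAtoB() {
        synchronized (lockA) {
            sleep(50);                       // widen the window so the schedules interleave
            synchronized (lockB) { /* work */ }
        }
    }

    static void transferBtoA() {
        synchronized (lockB) {
            sleep(50);
            synchronized (lockA) { /* work */ }
        }
    }

    // A two-thread "test" exercising both operations concurrently: with the
    // opposite acquisition orders above, the program can deadlock and never exit.
    public static void main(String[] args) {
        new Thread(DeadlockSketch::transferAtoB).start();
        new Thread(DeadlockSketch::transferBtoA).start();
    }

    static void sleep(long millis) {
        try { Thread.sleep(millis); } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
    }
}
```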

Relevance:

80.00%

Publisher:

Abstract:

Hypertensive patients have capillary rarefaction and microcirculatory endothelial dysfunction, which makes them more vulnerable to target-organ damage. The study sought to evaluate the effect of six months of pharmacological treatment on capillary density and on microvascular reactivity to physiological and pharmacological stimuli in hypertensive patients at low cardiovascular risk. As a secondary aim, it tested whether responses differed between antihypertensive strategies. Forty-four patients aged 46.7±1.3 years and 20 normotensive controls aged 48.0±1.6 years were recruited. Anthropometric and laboratory data were assessed, and serum vascular endothelial growth factor (VEGF), the VEGF receptor Flt-1 and nitric oxide (NO) were measured. Capillary counting was performed by intravital microscopy, capturing images of the microcirculation on the dorsum of the middle finger phalanx and counting the capillaries with dedicated software. The procedure was repeated after post-occlusive reactive hyperemia (PORH) to assess capillary recruitment. Vascular reactivity was tested by laser Doppler flowmetry, acetylcholine (ACh) iontophoresis, PORH and local thermal hyperemia (LTH). Patients were randomly assigned to two treatment groups: metoprolol succinate titrated to 100 mg daily or olmesartan medoxomil titrated to 40 mg daily, adding hydrochlorothiazide when necessary. Controls followed the same initial protocol, and after six months all tests were repeated in the hypertensive patients. Baseline clinical and laboratory variables were similar compared with controls and between the two treatment groups. After six months, there were small differences between the groups in waist-hip ratio and HDL. Capillary density before treatment was significantly lower than in the control group (71.3±1.5 vs 80.6±1.8 cap/mm², p<0.001; during PORH 71.7±1.5 vs 79.5±2.6 cap/mm², p<0.05) and increased with treatment to 75.4±1.1 cap/mm² (p<0.01) at baseline and to 76.8±1.1 cap/mm² during PORH (p<0.05). In the vascular reactivity tests, cutaneous vascular conductance (CVC), in perfusion units (PU)/mmHg, during LTH was similar in controls and hypertensive patients and increased with treatment in both subgroups (metoprolol: 1.73±0.2 to 1.90±0.2, p<0.001; olmesartan: 1.49±0.1 to 1.87±0.1, p<0.001). Maximum CVC during PORH was lower in hypertensive patients, 0.30 (0.22-0.39), than in controls, 0.39 (0.31-0.49), p<0.001; after treatment it increased to 0.41 (0.29-0.51), p<0.001, and the increase was significant only in the olmesartan group (0.29±0.02 to 0.42±0.04, p<0.001). The difference in time to reach peak flow during PORH increased after treatment in the metoprolol group, 3.0 (-0.3 to 8.8) seconds, versus olmesartan, 0.4 (-2.1 to 2.4) seconds, p<0.001. In iontophoresis, the area under the flow curve (AUC) was similar between groups and increased with treatment, from 6087 (3857-9137) to 7296 (5577-10921) PU/s, p=0.04. VEGF and its receptor did not differ from controls and did not change. NO concentration was higher in hypertensive patients than in controls, 64.9 (46.8-117.6) vs 50.7 (42-57.5) M/dl, p=0.02, and did not change with treatment. In conclusion, low-risk hypertensive patients have lower capillary density and lower capillary recruitment, and both increase with treatment. They also show microcirculatory endothelial dysfunction, which improves with therapy.

Relevance:

80.00%

Publisher:

Abstract:

Testing is a crucial activity in systems development, since well-executed tests can expose software anomalies that can then be corrected while still in the development process, reducing costs. This dissertation presents a testing tool called SIT (Sistema de Testes) that supports the testing of Geographic Information Systems (GIS). GIS are characterized by the use of georeferenced spatial information, which can generate a large number of complex test cases. Traditional testing techniques are divided into functional and structural. In this work, SIT addresses functional testing, focusing on classical techniques such as equivalence partitioning and boundary value analysis. SIT also proposes the use of fuzzy logic as a tool to suggest a minimal set of tests to run on a GIS, illustrating the benefits of the tool.
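
The boundary value analysis technique mentioned above can be illustrated with a short sketch: test inputs are chosen at and just around each boundary of a hypothetical georeferenced-input check (a latitude range). This is illustrative code only and is not part of the SIT tool.

```java
// Minimal sketch of boundary value analysis applied to a hypothetical
// georeferenced-input check: a latitude must lie in [-90, 90].
public class BoundaryValueSketch {

    static boolean isValidLatitude(double lat) {
        return lat >= -90.0 && lat <= 90.0;
    }

    public static void main(String[] args) {
        // Boundary value analysis picks values at and just around each boundary.
        double[] inputs    = { -90.1, -90.0, -89.9, 89.9, 90.0, 90.1 };
        boolean[] expected = { false, true,  true,  true, true, false };

        for (int i = 0; i < inputs.length; i++) {
            boolean actual = isValidLatitude(inputs[i]);
            if (actual != expected[i]) {
                throw new AssertionError("latitude " + inputs[i] + ": expected "
                        + expected[i] + " but got " + actual);
            }
        }
        System.out.println("All boundary value tests passed.");
    }
}
```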

Relevance:

80.00%

Publisher:

Abstract:

Combinatorial testing is an important testing method. It requires the test cases to cover various combinations of parameters of the system under test. The test generation problem for combinatorial testing can be modeled as constructing a matrix which has certain properties. This paper first discusses two combinatorial testing criteria: covering array and orthogonal array, and then proposes a backtracking search algorithm to construct matrices satisfying them. Several search heuristics and symmetry breaking techniques are used to reduce the search time. This paper also introduces some techniques to generate large covering array instances from smaller ones. All the techniques have been implemented in a tool called EXACT (EXhaustive seArch of Combinatorial Test suites). A new optimal covering array is found by this tool.
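
The covering array criterion can be stated concretely: in a strength-2 (pairwise) covering array, every pair of columns must contain every combination of parameter values at least once. The sketch below checks this property for a small binary example matrix; the matrix and the checker are illustrative and are not output of, or code from, the EXACT tool.

```java
import java.util.HashSet;
import java.util.Set;

// Sketch of the pairwise (strength-2) covering array property: every pair of
// columns must cover all value combinations. The 4x3 binary matrix below is a
// small illustrative example.
public class CoveringArraySketch {

    static final int[][] ARRAY = {
            {0, 0, 0},
            {0, 1, 1},
            {1, 0, 1},
            {1, 1, 0},
    };
    static final int VALUES_PER_PARAM = 2;

    // Returns true if every pair of columns covers all v*v value combinations.
    static boolean isPairwiseCoveringArray(int[][] a, int v) {
        int cols = a[0].length;
        for (int c1 = 0; c1 < cols; c1++) {
            for (int c2 = c1 + 1; c2 < cols; c2++) {
                Set<Integer> seen = new HashSet<>();
                for (int[] row : a) {
                    seen.add(row[c1] * v + row[c2]);   // encode the pair of values
                }
                if (seen.size() < v * v) {
                    return false;                      // some combination is missing
                }
            }
        }
        return true;
    }

    public static void main(String[] args) {
        System.out.println("strength-2 covering array: "
                + isPairwiseCoveringArray(ARRAY, VALUES_PER_PARAM));  // prints true
    }
}
```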

Relevance:

80.00%

Publisher:

Abstract:

This paper presents MTest, an integrated testing environment based on metamorphic testing. To examine the capability and efficiency of metamorphic testing, a set of experiments was designed using a sparse matrix multiplication program as an example. Based on mutation analysis, and using mutation score and fault detection rate as metrics, the experiments quantitatively analyze and compare the testing capability and efficiency of three approaches: special-case testing, metamorphic testing with special test cases as source test cases, and metamorphic testing with random test cases as source test cases. The experiments can be run automatically in the MTest environment. The results show that metamorphic testing and special-case testing are complementary, and that, with respect to the source test cases used for metamorphic testing, random test cases outperform special test cases in both testing capability and efficiency.
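
A concrete metamorphic relation for matrix multiplication may help: scaling one input by a constant k should scale the product by k, so the source and follow-up outputs can be compared without an exact oracle. The sketch below uses a plain dense multiply as a stand-in subject program; it is not the MTest subject program, and the relation shown is only one plausible example.

```java
import java.util.Arrays;

// Sketch of a metamorphic test for matrix multiplication using the relation
// (k*A)*B == k*(A*B): no exact oracle is needed, only the source and follow-up
// outputs are compared. The multiply routine is a dense stand-in.
public class MetamorphicSketch {

    static double[][] multiply(double[][] a, double[][] b) {
        int n = a.length, m = b[0].length, k = b.length;
        double[][] c = new double[n][m];
        for (int i = 0; i < n; i++)
            for (int j = 0; j < m; j++)
                for (int t = 0; t < k; t++)
                    c[i][j] += a[i][t] * b[t][j];
        return c;
    }

    static double[][] scale(double[][] a, double factor) {
        double[][] r = new double[a.length][a[0].length];
        for (int i = 0; i < a.length; i++)
            for (int j = 0; j < a[0].length; j++)
                r[i][j] = factor * a[i][j];
        return r;
    }

    public static void main(String[] args) {
        double[][] a = {{1, 0}, {2, 3}};     // source test case (values arbitrary)
        double[][] b = {{4, 5}, {0, 6}};
        double k = 2.0;

        double[][] followUp = multiply(scale(a, k), b);  // output for the follow-up input
        double[][] expected = scale(multiply(a, b), k);  // source output transformed by the relation

        boolean holds = Arrays.deepEquals(followUp, expected);
        System.out.println("metamorphic relation (k*A)*B == k*(A*B) holds: " + holds);
    }
}
```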

Relevance:

80.00%

Publisher:

Abstract:

This paper presents two techniques for increasing the degree of automation in regression testing. The first technique adopts a data-driven approach, turning the test script into a generic script that can drive any group of similar test cases; at the same time, it separates test execution from test logic, making test cases easier to modify and maintain. The second technique uses an additional dynamic-link library to restore the GUI state of the software under test, so that automated GUI testing is less affected by state changes in the software under test, improving the robustness of the whole automated testing system.
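
The first, data-driven technique can be sketched briefly: the test logic is written once and the concrete regression cases live in a data table, so adding or changing a case only touches the data. The function under test and the rows below are hypothetical; in practice the rows would typically be loaded from a file or spreadsheet.

```java
import java.util.List;

// Sketch of a data-driven test: one generic script drives every row of a data
// table. The function under test and the rows are hypothetical.
public class DataDrivenSketch {

    // Stand-in for the operation being regression-tested.
    static int add(int x, int y) {
        return x + y;
    }

    // Each row is one test case: input1, input2, expected result.
    static final List<int[]> TEST_DATA = List.of(
            new int[]{1, 2, 3},
            new int[]{0, 0, 0},
            new int[]{-5, 5, 0},
            new int[]{100, 23, 123});

    public static void main(String[] args) {
        // The same logic runs for every row; maintenance happens in the data.
        for (int[] row : TEST_DATA) {
            int actual = add(row[0], row[1]);
            if (actual != row[2]) {
                throw new AssertionError("add(" + row[0] + ", " + row[1] + ") = "
                        + actual + ", expected " + row[2]);
            }
        }
        System.out.println(TEST_DATA.size() + " data-driven cases passed.");
    }
}
```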

Relevance:

80.00%

Publisher:

Abstract:

This paper describes JUTA, an automated unit testing tool for Java. JUTA first calls the tool Soot to parse the source code of a single Java method and translate it into a control flow graph. On this basis, symbolic execution is used to analyze the paths of the control flow graph. The tool can automatically generate test cases that satisfy a coverage criterion. All test cases generated by this approach are executable, and the number of test cases is generally small. If the user can provide suitable assertions describing program errors, the JUTA framework can automatically check for certain types of errors in the source code. Experimental results show that the tool can deliver effective results for both dynamic and static testing of Java units within acceptable time.
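
To make the path-based generation idea concrete, the sketch below lists, for a tiny method with three feasible paths, one test input per path together with the path condition it satisfies. The inputs are written by hand here; a tool like JUTA would derive such inputs automatically by solving the path conditions collected during symbolic execution over the control flow graph.

```java
// Sketch of what path-oriented test generation produces for a small method:
// each input below satisfies the path condition of one feasible path through
// classify(). The method and inputs are illustrative, not from the paper.
public class PathCoverageSketch {

    // Method under test, with two branch points and three feasible paths.
    static String classify(int x) {
        if (x < 0) {
            return "negative";           // path 1: x < 0
        }
        if (x % 2 == 0) {
            return "even";               // path 2: x >= 0 && x % 2 == 0
        }
        return "odd";                    // path 3: x >= 0 && x % 2 != 0
    }

    public static void main(String[] args) {
        check("negative", classify(-3)); // solves path condition: x < 0
        check("even",     classify(4));  // solves: x >= 0 && x % 2 == 0
        check("odd",      classify(7));  // solves: x >= 0 && x % 2 != 0
        System.out.println("All three paths covered.");
    }

    static void check(String expected, String actual) {
        if (!expected.equals(actual)) {
            throw new AssertionError("expected " + expected + " but got " + actual);
        }
    }
}
```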

Relevance:

80.00%

Publisher:

Abstract:

Dascalu, M., Stavarache, L.L., Dessus, P., Trausan-Matu, S., McNamara, D.S., & Bianco, M. (2015). ReaderBench: An Integrated Cohesion-Centered Framework. In G. Conole, T. Klobucar, C. Rensing, J. Konert & É. Lavoué (Eds.), 10th European Conf. on Technology Enhanced Learning (pp. 505–508). Toledo, Spain: Springer.