970 results for "Testes de software" (software testing)


Relevance:

100.00%

Publisher:

Abstract:

Spacecraft move at high speeds and undergo abrupt changes in acceleration. An onboard GPS receiver can therefore only compute navigation solutions if the Doppler effect is taken into account during the acquisition and tracking of the satellite signals. For a receiver subject to such dynamics to cope with the frequency shifts caused by this effect, it is essential to adjust its acquisition bandwidth and increase its tracking loop to a higher order. This paper presents the changes made to the software of the GPS Orion, an open-architecture receiver produced by GEC Plessey Semiconductors (now Zarlink), in order to make it able to generate navigation fixes for vehicles under high dynamics, especially Low Earth Orbit satellites. The GPS Architect development system, sold by the same company, supported the modifications. It also presents the characteristics of GPS Monitor Aerospace, a computational tool developed to monitor, through graphics, the navigation fixes calculated by the GPS receiver. Although it was not possible to simulate the software modifications implemented in the receiver under high dynamics, the receiver worked in stationary tests, which was also verified through the new interface. This work also presents the results of the GPS Receiver for Aerospace Applications experiment, obtained with the receiver's participation in a suborbital mission, Operation Maracati 2, in December 2010, using a digital second-order carrier tracking loop. Although an incident moments before the launch hindered the effective navigation of the receiver, the experiment worked properly, acquiring new satellites and tracking them during the VSB-30 rocket flight.
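To make the acquisition-bandwidth concern concrete, the short sketch below (not part of the original work) estimates the carrier Doppler shift on the GPS L1 frequency for a given line-of-sight relative velocity; the velocity values are assumptions chosen to contrast a quasi-static user with a LEO vehicle.

```python
# Rough estimate of the carrier Doppler shift seen by a GPS receiver.
# Illustrative only; the velocity values are assumptions, not data from the paper.

L1_FREQUENCY_HZ = 1575.42e6       # GPS L1 carrier frequency
SPEED_OF_LIGHT_M_S = 299_792_458.0

def doppler_shift_hz(radial_velocity_m_s: float) -> float:
    """Classical (non-relativistic) Doppler shift for the given
    line-of-sight velocity between receiver and satellite."""
    return L1_FREQUENCY_HZ * radial_velocity_m_s / SPEED_OF_LIGHT_M_S

if __name__ == "__main__":
    # Assumed line-of-sight velocities: ~0.8 km/s for a static user
    # (satellite motion only) and ~8 km/s worst case for a LEO vehicle.
    for label, v in [("static user", 800.0), ("LEO vehicle", 8000.0)]:
        print(f"{label}: ~{doppler_shift_hz(v) / 1e3:.1f} kHz Doppler shift")
```

The order-of-magnitude difference between the two results is what forces a wider acquisition search window and a higher-order tracking loop for high-dynamics receivers.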

Relevance:

100.00%

Publisher:

Abstract:

Testing software is one of the activities that make up software development, and its goal is to provide evidence of reliability, contributing to higher software quality. This activity consumes a significant part of the effort of a software development project, always with the objective of finding errors before the maintenance phase, since the cost of correction in that phase can be up to 100 times higher than the cost of correction in the design phase. To give the software more quality, it can be certified against a quality standard. Standards provide consistent, rigorous and uniform processes for software development, always with the goal of ensuring software quality. Standards play an important role in defining the test requirements, test cases and test reports that make up the testing activity, allowing a more rigorous test plan to be drawn up. Since the testing process is complex in software development, software test automation tools make it possible to reduce time, resources and, consequently, costs for the organization. The automation should be able to produce the same results obtained through a manual testing process, always showing the test result. It should also allow systematic and parallel tests to be run in different test environments without increasing time and human resources. This dissertation aims to develop an automated approach with the Sikuli software for running tests following the ISO/IEC 25051 standard for software certification. After creating the approach and the corresponding tests, it is necessary to validate the capability of this approach in comparison with a manual testing approach.
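As a point of reference, a minimal SikuliX-style (Jython) test script looks like the sketch below; it is not taken from the dissertation, and the image file names are hypothetical screenshots of the application under test.

```python
# Minimal SikuliX-style (Jython) GUI test sketch.
# Image file names are hypothetical placeholders for reference screenshots.

def test_login_dialog():
    click("login_button.png")             # locate the button on screen and click it
    wait("username_field.png", 10)        # wait up to 10 s for the login form
    type("username_field.png", "tester")  # click the field, then type a value
    type("password_field.png", "secret")
    click("submit_button.png")
    # The test passes only if the welcome screen appears within 15 s.
    assert exists("welcome_screen.png", 15), "login did not reach the welcome screen"

test_login_dialog()
```

Because the script drives the application purely through on-screen images, the same test can be compared directly against the equivalent manual procedure, which is the validation step the abstract describes.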

Relevance:

80.00%

Publisher:

Abstract:

Through the adoption of the software product line (SPL) approach, several benefits are achieved when compared to conventional development processes that are based on creating a single software system at a time. The process of developing an SPL differs from traditional software construction, since it has two essential phases: domain engineering, when the common and variable elements of the SPL are defined and implemented; and application engineering, when one or more applications (specific products) are derived by reusing the artifacts created in domain engineering. The test activity is also fundamental and aims to detect defects in the artifacts produced during SPL development. However, the characteristics of an SPL bring new challenges to this activity that must be considered. Several approaches have recently been proposed for the testing process of product lines, but they have been shown to be limited and to provide only general guidelines. In addition, there is also a lack of tools to support variability management and the customization of automated test cases for SPLs. In this context, this dissertation has the goal of proposing a systematic approach to software product line testing. The approach offers: (i) automated SPL test strategies to be applied in domain and application engineering; (ii) explicit guidelines to support the implementation and reuse of automated test cases at the unit, integration and system levels in domain and application engineering; and (iii) tooling support for automating variability management and the customization of test cases. The approach is evaluated through its application to a software product line for web systems. The results of this work have shown that the proposed approach can help developers deal with the challenges imposed by the characteristics of SPLs during the testing process.
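The notion of customizing automated test cases per product can be illustrated with the small sketch below; it is not the dissertation's tooling, and the products and feature names are invented for illustration.

```python
# Illustrative sketch (not the dissertation's tool): reusing one automated test
# suite across SPL products by deriving, for each product, only the tests that
# its feature selection supports. Products and features are hypothetical.

PRODUCTS = {
    "basic_store":   {"catalog": True, "payment": False},
    "premium_store": {"catalog": True, "payment": True},
}

def test_catalog_listing(features):
    # Domain-engineering test: valid for every product that includes 'catalog'.
    assert features["catalog"], "catalog feature must be present"

def test_payment_checkout(features):
    # Application-engineering test: only derived for products with 'payment'.
    assert features["payment"], "payment feature must be present"

def derive_test_suite(product):
    """Customize the automated test suite for a given product configuration."""
    features = PRODUCTS[product]
    suite = [test_catalog_listing]
    if features["payment"]:
        suite.append(test_payment_checkout)
    return [(test.__name__, test, features) for test in suite]

for product in PRODUCTS:
    for name, test, features in derive_test_suite(product):
        test(features)
        print(f"{product}: {name} passed")
```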

Relevance:

70.00%

Publisher:

Abstract:

Formal methods and software testing are tools to obtain and control software quality. When used together, they provide mechanisms for software specification, verification and error detection. Even though formal methods allow software to be mathematically verified, they are not enough to ensure that a system is free of faults; thus, software testing techniques are necessary to complement the verification and validation process of a system. Model-Based Testing techniques allow tests to be generated from other software artifacts, such as specifications and abstract models. Using formal specifications as the basis for test creation, we can generate better-quality tests, because these specifications are usually precise and free of ambiguity. Fernanda Souza (2009) proposed a method to define test cases from B Method specifications. This method used information from the machine's invariant and the operation's precondition to define positive and negative test cases for an operation, using techniques based on equivalence class partitioning and boundary value analysis. However, the method proposed in 2009 was not automated and had conceptual deficiencies; for instance, it did not fit into a well-defined coverage criteria classification. We started our work with a case study that applied the method to an example of a B specification from industry. Based on this case study we obtained insights to improve it. In our work we evolved the proposed method, rewriting it and adding characteristics to make it compatible with a test classification used by the community. We also improved the method to support specifications structured in different components, to use information from the operation's behavior in the test case generation process, and to use new coverage criteria. Besides, we implemented a tool to automate the method and submitted it to more complex case studies.
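For readers unfamiliar with the two techniques named above, the following sketch shows equivalence class partitioning combined with boundary value analysis on a hypothetical precondition (0 <= value <= 100); it is a generic illustration, not the method or tool described in the abstract.

```python
# Generic illustration of equivalence class partitioning and boundary value
# analysis. The precondition 0 <= value <= 100 is a hypothetical example.

def partition_tests(lower: int, upper: int):
    """Return (positive, negative) test values for the precondition
    lower <= value <= upper."""
    positive = [
        lower,                  # lower boundary
        lower + 1,              # just above the lower boundary
        (lower + upper) // 2,   # representative of the valid class
        upper - 1,              # just below the upper boundary
        upper,                  # upper boundary
    ]
    negative = [
        lower - 1,              # just outside the valid class (below)
        upper + 1,              # just outside the valid class (above)
    ]
    return positive, negative

if __name__ == "__main__":
    pos, neg = partition_tests(0, 100)
    print("positive test inputs:", pos)   # expected to satisfy the precondition
    print("negative test inputs:", neg)   # expected to violate the precondition
```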

Relevance:

70.00%

Publisher:

Abstract:

Checking the conformity between implementation and design rules in a system is an important activity to try to ensure that no degradation occurs between the architectural patterns defined for the system and what is actually implemented in the source code. Especially in systems which require a high level of reliability, it is important to define specific design rules for exceptional behavior. Such rules describe how exceptions should flow through the system by defining which elements are responsible for catching exceptions thrown by other system elements. However, current approaches to automatically checking design rules do not provide suitable mechanisms to define and verify design rules related to the exception handling policy of applications. This work proposes a practical approach to preserve the exceptional behavior of an application or family of applications, based on the definition and automatic runtime checking of design rules for exception handling in systems developed in Java or AspectJ. To support this approach, a tool called VITTAE (Verification and Information Tool to Analyze Exceptions) was developed in the context of this work; it extends the JUnit framework and allows automating test activities for exceptional design rules. We conducted a case study with the primary objective of evaluating the effectiveness of the proposed approach on a software product line. In addition, an experiment was conducted to perform a comparative analysis between the proposed approach and an approach based on a tool called JUnitE, which also proposes to test exception handling code using JUnit tests. The results showed how the exception handling design rules evolve along different versions of a system and that VITTAE can aid in the detection of defects in exception handling code.
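The idea of a runtime-checked design rule for exception handling can be sketched as follows. The original approach targets Java/AspectJ with JUnit; this is only a rough Python analogue, and the component, rule and exception names are hypothetical.

```python
# Rough Python analogue of runtime-checked exception handling design rules.
# Hypothetical rule: the 'persistence' component may only let RepositoryError
# escape; any other escaping exception violates the rule.

import functools

class RepositoryError(Exception):
    pass

def allowed_exceptions(*allowed):
    """Decorator that fails loudly when a non-allowed exception escapes."""
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            try:
                return func(*args, **kwargs)
            except allowed:
                raise                      # permitted by the design rule
            except Exception as exc:       # rule violation detected at runtime
                raise AssertionError(
                    f"{func.__name__} leaked {type(exc).__name__}, "
                    f"violating its exception handling design rule") from exc
        return wrapper
    return decorator

@allowed_exceptions(RepositoryError)
def load_record(record_id):
    # A bug: raises KeyError instead of wrapping it in RepositoryError.
    return {}[record_id]

try:
    load_record(42)
except AssertionError as violation:
    print(violation)
```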

Relevance:

70.00%

Publisher:

Abstract:

The work proposed by Cleverton Hentz (2010) presented an approach to define tests from the formal description of a program's input. Since some programs, such as compilers, may have their inputs formalized through grammars, it is common to use context-free grammars to specify the set of their valid inputs. In the original work the author developed a tool that automatically generates tests for compilers. In the present work we identify types of problems in various areas where grammars are used to describe them, for example, to specify software configurations, which are potential situations for using LGen. In addition, we conducted case studies with grammars from different domains, and from these studies it was possible to evaluate the behavior and performance of LGen during sentence generation, considering aspects such as execution time, number of generated sentences and satisfaction of the coverage criteria available in LGen.
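To give a feel for grammar-based sentence generation, the sketch below derives sentences from a tiny context-free grammar by bounded random derivation; it is not LGen itself, and the grammar (a small configuration-file language, echoing the abstract's example) is invented.

```python
# Illustrative sketch (not LGen): generating sentences from a small
# context-free grammar by bounded random derivation.

import random

GRAMMAR = {
    "<config>": [["<entry>"], ["<entry>", "\n", "<config>"]],
    "<entry>":  [["<key>", "=", "<value>"]],
    "<key>":    [["timeout"], ["retries"], ["log_level"]],
    "<value>":  [["10"], ["on"], ["off"]],
}

def derive(symbol: str, depth: int = 0, max_depth: int = 6) -> str:
    """Expand a nonterminal into a sentence, forcing the shortest production
    once the depth limit is reached so the derivation terminates."""
    if symbol not in GRAMMAR:
        return symbol                      # terminal symbol
    productions = GRAMMAR[symbol]
    if depth >= max_depth:
        productions = [min(productions, key=len)]
    production = random.choice(productions)
    return "".join(derive(s, depth + 1, max_depth) for s in production)

if __name__ == "__main__":
    random.seed(0)
    for _ in range(3):
        print(derive("<config>"))
        print("---")
```

A coverage criterion, such as requiring every production to appear in at least one generated sentence, can then be layered on top of a generator like this.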

Relevance:

70.00%

Publisher:

Abstract:

Abstract – Background – The software effort estimation research area aims to improve the accuracy of this estimation in software projects and activities. Aims – This study describes the development and usage of a web application to collect data generated by the Planning Poker estimation process, and the analysis of the collected data to investigate the impact of revising previous estimates when conducting similar estimates in a Planning Poker context. Method – Software activities were estimated by Universidade Tecnológica Federal do Paraná (UTFPR) computer students, using Planning Poker, with and without revising previous similar activities, storing data regarding the decision-making process. The collected data was used to investigate the impact that revising similar executed activities has on the accuracy of software effort estimates. Obtained Results – The UTFPR computer students were divided into 14 groups. Eight of them showed an accuracy increase in more than half of their estimates, three had roughly the same accuracy in more than half of their estimates, and only three lost accuracy in more than half of their estimates. Conclusion – Reviewing similar executed software activities when using Planning Poker led to more accurate software estimates in most cases and, because of that, can improve the software development process.
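One common way to quantify estimation accuracy is the magnitude of relative error (MRE); the abstract does not state which metric the study used, so the sketch below is only a plausible illustration with invented numbers.

```python
# Sketch of a common estimation-accuracy metric (magnitude of relative error).
# The metric choice and the sample numbers are assumptions for illustration.

def mre(actual_hours: float, estimated_hours: float) -> float:
    """Magnitude of relative error: |actual - estimate| / actual."""
    return abs(actual_hours - estimated_hours) / actual_hours

# Hypothetical activity: 10 h of real effort, estimated twice.
actual = 10.0
estimate_without_review = 16.0   # first round, no revision of past activities
estimate_with_review = 12.0      # second round, after reviewing similar work

print(f"MRE without review: {mre(actual, estimate_without_review):.2f}")  # 0.60
print(f"MRE with review:    {mre(actual, estimate_with_review):.2f}")     # 0.20
```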

Relevance:

60.00%

Publisher:

Abstract:

The component-based development of systems revolutionized the software development process, facilitating maintenance and providing more reliability and reuse. Nevertheless, even with all the advantages of developing with components, their composition remains an important concern. Verification through informal tests is not enough to achieve a safe composition, because such tests are not based on formal semantic models with which we can precisely describe a system's behaviour. In this context, formal methods provide ways to accurately specify systems through mathematical notations, providing, among other benefits, more safety. The formal method CSP enables the specification of concurrent systems and the verification of properties intrinsic to them, as well as refinement between different models. Some approaches apply constraints using CSP to check the behavior of the composition of components, assisting in the verification of those components in advance. Hence, aiming to assist this process, and considering that the software market increasingly requires more automation, reducing work and providing agility in business, this work presents a tool that automates the verification of composition among components, in which all the complexity of the formal language is kept hidden from users. Thus, through a simple interface, the tool BST (BRIC-Tool-Suport) helps to create and compose components, predicting, in advance, undesirable behaviors in the system, such as deadlocks.
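The kind of deadlock prediction mentioned above can be sketched, in much simplified form, as reachability analysis over the parallel composition of two components modelled as labelled transition systems that synchronise on shared actions. The sketch below is not the BST tool, and the two components (which expect the shared actions in opposite orders) are hypothetical.

```python
# Illustrative sketch (not the BST tool): finding deadlocked states in the
# synchronised composition of two tiny components.

# Each component: {state: {action: next_state}}
P = {"p0": {"get_a": "p1"}, "p1": {"get_b": "p2"}, "p2": {}}
Q = {"q0": {"get_b": "q1"}, "q1": {"get_a": "q2"}, "q2": {}}
SHARED = {"get_a", "get_b"}   # both components must agree on these actions

def enabled(sp: str, sq: str) -> set:
    """Actions the composition can perform from the joint state (sp, sq)."""
    acts_p, acts_q = set(P[sp]), set(Q[sq])
    # Shared actions need both sides ready; the others interleave freely.
    return (acts_p & acts_q & SHARED) | (acts_p - SHARED) | (acts_q - SHARED)

def find_deadlocks():
    """Search the reachable joint states, reporting states with no enabled
    action in which at least one component still has work to do."""
    seen, frontier, deadlocks = set(), [("p0", "q0")], []
    while frontier:
        sp, sq = frontier.pop()
        if (sp, sq) in seen:
            continue
        seen.add((sp, sq))
        acts = enabled(sp, sq)
        if not acts and (P[sp] or Q[sq]):
            deadlocks.append((sp, sq))
        for a in acts:
            # A component not taking part in the action keeps its state.
            frontier.append((P[sp].get(a, sp), Q[sq].get(a, sq)))
    return deadlocks

print("deadlocked joint states:", find_deadlocks())   # [('p0', 'q0')]
```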

Relevance:

60.00%

Publisher:

Abstract:

Task automation technologies are valuable tools for guaranteeing the correct execution of processes as well as the attainment of the expected results. This project aims to develop a solution for task automation based on capturing screen images and on building a user-defined sequence of actions/tasks, in order to detect failures and ensure that the expected results are obtained at the end of the task's execution. The project is intended to support software testing, serving as an aid to the analysis of the quality and reliability level of software, and allowing a reduction of the time spent by users/developers on running tests or on any other tasks they wish to automate.
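A minimal sketch of this style of screen-capture-driven automation is given below using the pyautogui library; the library choice is an assumption (the abstract does not name the project's implementation) and the image file names are hypothetical reference screenshots.

```python
# Minimal sketch of screen-capture-driven task automation with pyautogui.
# Library choice and image names are assumptions, not from the original work.

import pyautogui

STEPS = [
    ("open_report_button.png", "open the report dialog"),
    ("export_pdf_button.png",  "export the report as PDF"),
    ("confirm_button.png",     "confirm the export"),
]

def run_task() -> bool:
    """Execute each step by locating its reference image on screen; a step
    whose image cannot be found is reported as a failure of the task."""
    for image, description in STEPS:
        try:
            location = pyautogui.locateCenterOnScreen(image)
        except Exception:   # newer pyautogui versions raise instead of returning None
            location = None
        if location is None:
            print(f"FAILED: could not {description} ({image} not found)")
            return False
        pyautogui.click(location)
        print(f"ok: {description}")
    return True

if __name__ == "__main__":
    print("task result:", "success" if run_task() else "failure")
```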

Relevance:

40.00%

Publisher:

Abstract:

The goal of this work is the development of frameworks for automated software testing. This type of testing is normally associated with the evolutionary model and with agile software development methodologies, whereas manual tests are related to the waterfall model and traditional methodologies. A comparative study of the existing types of methodologies and tests was therefore carried out, in order to decide which ones best suited the project and to answer the question "Is it really worth running (automated) tests?". Once the study was completed, two frameworks were developed: the first for the implementation of functional and unit tests without dependencies, to be used by LabOrders' curricular interns, and the second for the implementation of unit tests with external dependencies on databases and services, to be used by the company's employees. Over the last two decades, agile software development methodologies have not stopped evolving, yet automation tools have not been able to keep up with this progress. Many areas are not covered by the tests and therefore some have to be performed manually. For this reason, several innovative features were created to increase test coverage and make the frameworks as intuitive as possible, namely:

1. Automatic file download through Internet Explorer 9 (and later versions).
2. Analysis of the content of .pdf files (from within the tests).
3. Retrieval of web elements and their attributes through jQuery code, using the WebDriver API with PHP bindings.
4. Display of custom error messages when a given element cannot be found.

The implemented frameworks are also prepared for the creation of other tests (load, integration, regression) that may become necessary in the future. They were tested in a work context by the employees and clients of the company where the master's project was carried out, and the results led to the conclusion that adopting a software development methodology with automated tests can increase productivity, reduce failures and help organizations meet project budgets and deadlines.
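Item 3 above can be illustrated with a rough Python/Selenium analogue (the original frameworks use the WebDriver API with PHP bindings); it assumes the page under test already loads jQuery, and the URL and selector are hypothetical.

```python
# Rough Python/Selenium analogue of jQuery-based element retrieval.
# Assumes jQuery is loaded by the page; URL and selector are hypothetical.

from selenium import webdriver

driver = webdriver.Firefox()
try:
    driver.get("https://example.com/login")
    # Run jQuery in the page to fetch an element's attribute, instead of
    # using WebDriver locators directly.
    placeholder = driver.execute_script(
        "return jQuery('#username').attr('placeholder');")
    if placeholder is None:
        # Custom error message when the element (or attribute) is missing,
        # in the spirit of item 4 above.
        print("custom error: element '#username' or its attribute not found")
    else:
        print("placeholder attribute:", placeholder)
finally:
    driver.quit()
```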

Relevance:

40.00%

Publisher:

Abstract:

The main goal of Regression Testing (RT) is to reuse the test suite of the latest version of a software system in its current version, in order to maximize the value of the tests already developed and to ensure that old features continue to work after the new changes. Even with reuse, it is common that not all tests need to be executed again. Because of that, Regression Test Selection (RTS) techniques are encouraged; they aim to select, from all tests, only those that reveal faults, which reduces costs and makes this an interesting practice for testing teams. Several recent research works evaluate the quality of the selections performed by RTS techniques, identifying which one presents the best results, measured by metrics such as inclusion and precision. RTS techniques should search the System Under Test (SUT) for tests that reveal faults. However, because this is a problem without a viable solution, they alternatively search for tests that reveal changes, where faults may occur. Nevertheless, these changes may modify the execution flow of the algorithm itself, leading some tests to no longer exercise the same code. In this context, this dissertation investigates whether changes performed in a SUT affect the quality of the test selection performed by an RTS technique and, if so, which characteristics of those changes cause errors, leading the RTS to wrongly include or exclude tests. For this purpose, a tool was developed in Java to automate the measurement of the inclusion and precision averages achieved by a regression test selection technique for a particular change characteristic. In order to validate this tool, an empirical study was conducted to evaluate the RTS technique Pythia, based on textual differencing, on a large web information system, analyzing the types of tasks performed to evolve the SUT as the change characteristic.
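The two metrics named above are usually defined as follows in the RTS literature: inclusion is the share of change/fault-revealing tests that were selected, and precision is the share of selected tests that are change/fault-revealing. The sketch below computes both; the test names are invented.

```python
# Sketch of the inclusion and precision metrics for regression test selection,
# following their usual definitions in the RTS literature. Test names invented.

def inclusion(selected: set, revealing: set) -> float:
    """Fraction of revealing tests that the technique managed to select."""
    return len(selected & revealing) / len(revealing) if revealing else 1.0

def precision(selected: set, revealing: set) -> float:
    """Fraction of selected tests that actually reveal the change/fault."""
    return len(selected & revealing) / len(selected) if selected else 1.0

selected_tests  = {"t1", "t2", "t5"}   # chosen by the RTS technique
revealing_tests = {"t2", "t5", "t7"}   # known to reveal the change

print(f"inclusion: {inclusion(selected_tests, revealing_tests):.2f}")  # 0.67
print(f"precision: {precision(selected_tests, revealing_tests):.2f}")  # 0.67
```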

Relevance:

30.00%

Publisher:

Abstract:

The use of web-based geographic information systems (SigWeb) has grown in recent years due to the ease with which information from different places can be manipulated and visualized over the Internet. The objective of this study is to test geographic information systems on the Internet, with an emphasis on the functional technique, in order to evaluate the functionality, usability and navigability of ready-to-use programs known as SigWeb. To do so, it was necessary to identify the use cases of the programs proposed for the study. As a result, the behavior of the systems during the tests could be verified, and the characteristics of each SigWeb, as well as the difficulties encountered, could be distinguished.

Relevance:

30.00%

Publisher:

Abstract:

Testing is a crucial activity in systems development, since a good test execution can expose software anomalies that can still be corrected during the development process, reducing costs. This dissertation presents a testing tool called SIT (Sistema de Testes, "Testing System") that assists in the testing of Geographic Information Systems (GIS). GIS are characterized by the use of georeferenced spatial information, which can generate a large number of complex test cases. Traditional testing techniques are divided into functional and structural. In this work, SIT addresses functional testing, focused on classical techniques such as equivalence partitioning and boundary value analysis. SIT also proposes the use of fuzzy logic as a tool to suggest a minimal set of tests to run on a GIS, illustrating the benefits of the tool.
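The abstract does not detail how the fuzzy logic is applied, so the sketch below is only one plausible reading: each candidate test case receives fuzzy membership scores for hypothetical criteria, and only tests whose combined score passes a threshold are suggested for execution. All names and values are invented.

```python
# Highly simplified sketch of fuzzy-logic-based test suggestion (not the SIT
# tool). Criteria, test cases and scores are hypothetical.

def fuzzy_and(a: float, b: float) -> float:
    """Common min-based fuzzy AND of two membership degrees in [0, 1]."""
    return min(a, b)

# (test name, membership in "complex geometry", membership in "near boundary")
CANDIDATES = [
    ("polygon_overlay_dense",   0.9, 0.8),
    ("point_query_center",      0.2, 0.1),
    ("line_clip_at_map_border", 0.6, 0.9),
    ("empty_layer_render",      0.1, 0.3),
]

THRESHOLD = 0.5   # combined membership above which a test is suggested

suggested = [(name, fuzzy_and(geo, bnd))
             for name, geo, bnd in CANDIDATES
             if fuzzy_and(geo, bnd) >= THRESHOLD]

for name, score in sorted(suggested, key=lambda t: -t[1]):
    print(f"suggest: {name} (score {score:.1f})")
```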

Relevance:

30.00%

Publisher:

Abstract:

Dissertation of a scientific nature for obtaining the Master's degree in Informatics and Computer Engineering.