993 results for Statistical hypothesis tests
Abstract:
OBJECTIVE: to characterize and compare the performance on temporal pattern tests of children diagnosed with stuttering against children without complaints and/or signs of psychiatric or neurological disorders, or of speech, hearing, language and/or learning difficulties. METHOD: the participants were 30 children between 9 and 12 years of age, of both genders, divided into two groups: GI - 15 children with persistent developmental stuttering; GII - 15 children without complaints and/or signs of psychiatric or neurological disorders, or of speech, hearing, language and/or learning difficulties. To assess temporal auditory processing, the Pitch Pattern Sequence Test (PPS) and the Duration Pattern Sequence Test (DPS) were administered. RESULTS: group II outperformed group I on both the frequency pattern test and the duration pattern test. The results indicated a statistically significant difference between the groups studied. CONCLUSION: the participants of group I showed impaired performance on the temporal pattern tests, which indicates a relationship between stuttering and auditory processing disorder.
Abstract:
Some programs may have their input data specified by formalized context-free grammars. This formalization facilitates the use of tools to systematize and raise the quality of their testing process. Among programs in this category, compilers were the first to use this kind of tool to automate their tests. In this work we present an approach for defining tests from the formal description of a program's inputs. Sentence generation takes into account the syntactic aspects defined by the specification of the inputs, the grammar. For optimization, coverage criteria are used to limit the number of tests without diminishing their quality. Our approach uses these criteria to drive generation toward sentences that satisfy a specific coverage criterion. The approach presented is based on the Lua language, relying heavily on its coroutines and dynamic construction of functions. With these resources, we propose a simple and compact implementation that can be optimized and controlled in different ways in order to satisfy the different implemented coverage criteria. To make our tool simpler to use, the EBNF notation was adopted for specifying the inputs; its parser was specified in the Meta-Environment tool for rapid prototyping.
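To make the generation idea concrete, here is a minimal sketch under the assumption of a toy grammar; the dissertation's implementation uses Lua coroutines, for which Python generators serve as an analogue, and all names, the grammar and the depth bound are illustrative, not taken from the work:

```python
# Grammar-driven sentence generation, sketched with Python generators in
# place of the Lua coroutines used by the actual tool.
from itertools import product

# Context-free grammar: nonterminal -> list of alternatives (tuples of symbols).
GRAMMAR = {
    "expr": [("term", "+", "expr"), ("term",)],
    "term": [("num",), ("(", "expr", ")")],
    "num":  [("0",), ("1",)],
}

def sentences(symbol, depth=0, max_depth=4):
    """Lazily yield sentences derivable from `symbol`, bounding recursion
    depth so enumeration terminates (a crude stand-in for the coverage
    criteria that drive generation in the actual tool)."""
    if symbol not in GRAMMAR:              # terminal symbol
        yield symbol
        return
    if depth >= max_depth:                 # prune unproductive deep expansions
        return
    for alternative in GRAMMAR[symbol]:
        expansions = [list(sentences(s, depth + 1, max_depth))
                      for s in alternative]
        if all(expansions):                # every symbol produced something
            for combo in product(*expansions):
                yield "".join(combo)

for s in sentences("expr"):
    print(s)                               # e.g. 0+1, (0), 1+0, ...
```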
Abstract:
Through the adoption of the software product line (SPL) approach, several benefits are achieved when compared with conventional development processes, which are based on creating a single software system at a time. The process of developing an SPL differs from traditional software construction in that it has two essential phases: domain engineering, when the common and variable elements of the SPL are defined and implemented; and application engineering, when one or more applications (specific products) are derived by reusing the artifacts created in domain engineering. The testing activity is also fundamental and aims to detect defects in the artifacts produced during SPL development. However, the characteristics of an SPL bring new challenges to this activity that must be considered. Several approaches have recently been proposed for the product line testing process, but they have proven limited and provide only general guidelines. In addition, there is a lack of tools to support variability management and the customization of automated test cases for SPLs. In this context, this dissertation proposes a systematic approach to software product line testing. The approach offers: (i) automated SPL test strategies to be applied in domain and application engineering; (ii) explicit guidelines to support the implementation and reuse of automated test cases at the unit, integration and system levels in domain and application engineering; and (iii) tool support for automating variability management and the customization of test cases. The approach is evaluated through its application to a software product line for web systems. The results of this work show that the proposed approach can help developers deal with the challenges imposed by the characteristics of SPLs during the testing process.
Abstract:
Formal methods and software testing are tools for obtaining and controlling software quality. When used together, they provide mechanisms for software specification, verification and error detection. Even though formal methods allow software to be mathematically verified, they are not enough to assure that a system is free of faults; thus, software testing techniques are necessary to complement the verification and validation process of a system. Model-Based Testing techniques allow tests to be generated from other software artifacts, such as specifications and abstract models. Using formal specifications as the basis for test creation, we can generate better-quality tests, because these specifications are usually precise and free of ambiguity. Fernanda Souza (2009) proposed a method to define test cases from B Method specifications. This method used information from the machine's invariant and the operation's precondition to define positive and negative test cases for an operation, using techniques based on equivalence class partitioning and boundary value analysis. However, the method proposed in 2009 was not automated and had conceptual deficiencies; for instance, it did not fit into a well-defined coverage criteria classification. We started our work with a case study that applied the method to an example of a B specification from industry. Based on this case study, we obtained the insights needed to improve it. In our work we evolved the proposed method, rewriting it and adding characteristics to make it compatible with a test classification used by the community. We also improved the method to support specifications structured in different components, to use information from the operation's behavior in the test case generation process, and to use new coverage criteria. In addition, we implemented a tool to automate the method and applied it to more complex case studies.
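A minimal sketch of the equivalence class partitioning and boundary value analysis techniques the method is based on, for a hypothetical numeric precondition lo <= x <= hi (the actual method derives its classes from B machine invariants and operation preconditions, which this toy interval does not attempt to model):

```python
# Equivalence class partitioning + boundary value analysis for the
# precondition lo <= x <= hi (interval and names are hypothetical).

def partition_tests(lo, hi):
    """Derive positive and negative test inputs for lo <= x <= hi."""
    valid = [(lo + hi) // 2]                # representative of the valid class
    boundaries = [lo, lo + 1, hi - 1, hi]   # boundary values inside the class
    invalid = [lo - 1, hi + 1]              # representatives of invalid classes
    return {"positive": sorted(set(valid + boundaries)), "negative": invalid}

print(partition_tests(0, 100))
# {'positive': [0, 1, 50, 99, 100], 'negative': [-1, 101]}
```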
Abstract:
Automation is an important activity in the testing process and can significantly reduce development time and cost. Some tools have been proposed to automate acceptance testing in Web applications. However, most of them have important limitations, such as the need to value test cases manually, refactoring of the generated code, and strong dependence on the structure of the HTML pages. In this work, we present a test specification language and a tool designed to minimize the impact of these limitations. The proposed language supports equivalence class criteria, and the tool, developed as a plug-in for the Eclipse platform, allows test cases to be generated through different combination strategies. To evaluate the approach, we used one of the modules of the Sistema Unificado de Administração Pública (SUAP) of the Instituto Federal do Rio Grande do Norte (IFRN). Systems analysts and a computer technician who work as developers of the system participated in the evaluation.
Abstract:
Automation has become increasingly necessary during the software testing process due to the high cost and time associated with this activity. Some tools have been proposed to automate the execution of acceptance tests in Web applications. However, many of them have important limitations, such as a strong dependence on the structure of the HTML pages and the need to value test cases manually. In this work, we present IFL4TCG, a language for specifying acceptance test scenarios for Web applications, and a tool that allows the generation of test cases from these scenarios. The proposed language supports the Equivalence Class Partitioning criterion, and the tool allows the generation of test cases that follow different combination strategies (i.e., Each-Choice, Base-Choice and All Combinations). In order to evaluate the effectiveness of the proposed solution, we used the language and the associated tool for designing and executing acceptance tests on a module of the Sistema Unificado de Administração Pública (SUAP) of the Instituto Federal do Rio Grande do Norte (IFRN). Four systems analysts and one computer technician, who work as developers of that system, participated in the evaluation. Preliminary results showed that IFL4TCG can actually help to detect defects in Web applications.
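A sketch of two of the combination strategies named above, over hypothetical parameter/equivalence-class choices; these are the standard strategies from combinatorial testing, not code from the IFL4TCG tool itself:

```python
# All Combinations and Each-Choice over parameter equivalence classes.
from itertools import product

choices = {
    "username": ["valid", "empty", "too_long"],
    "password": ["valid", "wrong"],
    "remember": ["on", "off"],
}

def all_combinations(choices):
    """Every combination of one value per parameter (cross product)."""
    keys = list(choices)
    return [dict(zip(keys, combo)) for combo in product(*choices.values())]

def each_choice(choices):
    """Each value of each parameter appears in at least one test case."""
    keys = list(choices)
    width = max(len(v) for v in choices.values())
    return [{k: choices[k][i % len(choices[k])] for k in keys}
            for i in range(width)]

print(len(all_combinations(choices)))  # 3 * 2 * 2 = 12 test cases
print(each_choice(choices))            # 3 test cases cover every value once
```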
An approach for verifying exceptional behavior based on design rules and tests
Abstract:
Checking conformance between implementation and design rules in a system is an important activity for ensuring that no degradation occurs between the architectural patterns defined for the system and what is actually implemented in the source code. Especially for systems that require a high level of reliability, it is important to define specific design rules for exceptional behavior. Such rules describe how exceptions should flow through the system by defining which elements are responsible for catching exceptions thrown by other system elements. However, current approaches for automatically checking design rules do not provide suitable mechanisms to define and verify design rules related to the exception handling policy of applications. This work proposes a practical approach to preserve the exceptional behavior of an application or family of applications, based on the definition and automatic runtime checking of design rules for exception handling in systems developed in Java or AspectJ. To support this approach, a tool called VITTAE (Verification and Information Tool to Analyze Exceptions) was developed in the context of this work; it extends the JUnit framework and allows the automation of test activities for exceptional design rules. We conducted a case study with the primary objective of evaluating the effectiveness of the proposed approach on a software product line. In addition, an experiment was conducted to perform a comparative analysis between the proposed approach and an approach based on a tool called JUnitE, which also proposes testing exception handling code using JUnit tests. The results showed how exception handling design rules evolve across different versions of a system and that VITTAE can aid in the detection of defects in exception handling code.
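To make the idea concrete, here is a minimal, language-agnostic sketch of checking an exception-handling design rule; the actual VITTAE tool works on Java/AspectJ code and extends JUnit, and the rule format, element names and observed flows below are entirely hypothetical:

```python
# Check that exceptions flow only to their designated handlers.
# Rule: exceptions of a given type raised in `source` may only be caught by
# one of the listed `allowed_handlers`.
RULES = [
    {"exception": "PersistenceError", "source": "repository",
     "allowed_handlers": {"service"}},
]

# Observed exception flows, e.g. collected from instrumented test runs:
# (exception type, raising element, catching element)
observed_flows = [
    ("PersistenceError", "repository", "service"),
    ("PersistenceError", "repository", "view"),   # violates the rule above
]

def check(rules, flows):
    """Return every observed flow that breaks one of the design rules."""
    violations = []
    for exc, source, handler in flows:
        for rule in rules:
            if (exc == rule["exception"] and source == rule["source"]
                    and handler not in rule["allowed_handlers"]):
                violations.append((exc, source, handler))
    return violations

print(check(RULES, observed_flows))
# [('PersistenceError', 'repository', 'view')]
```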
Abstract:
There is growing interest in the Computer Science education community in including testing concepts in introductory programming courses. Aiming to contribute to this issue, we introduce POPT, a Problem-Oriented Programming and Testing approach for introductory programming courses. POPT's main goal is to improve the traditional method of teaching introductory programming, which concentrates mainly on implementation and neglects testing. POPT extends the POP (Problem-Oriented Programming) methodology proposed in the PhD thesis of Andrea Mendonça (UFCG). In both methodologies, POPT and POP, students' skills in dealing with ill-defined problems must be developed from the first programming courses. In POPT, however, students are stimulated to clarify ill-defined problem specifications, guided by the definition of test cases (in a table-like manner). This work presents POPT and TestBoot, a tool developed to support the methodology. In order to evaluate the approach, a case study and a controlled experiment (which adopted a Latin Square design) were performed in an introductory programming course of the Computer Science and Software Engineering programs at the Federal University of Rio Grande do Norte, Brazil. The study results have shown that, when compared to a blind-testing approach, POPT stimulates the implementation of programs of better external quality: the first program version submitted by POPT students passed twice as many (professor-defined) test cases as those of non-POPT students. Moreover, POPT students submitted fewer program versions and spent more time before submitting the first version to the automatic evaluation system, which leads us to think that POPT students are stimulated to think more carefully about the solution they are implementing. The controlled experiment confirmed the influence of the proposed methodology on the quality of the code developed by POPT students.
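For illustration, a minimal sketch of the table-driven, test-first style the methodology encourages; the problem, inputs and expected outputs are hypothetical, not taken from POPT's materials:

```python
# Hypothetical ill-defined problem: "round a grade to the nearest integer".
# Writing the test table first forces the specification to be clarified
# (e.g. what happens exactly at .5 boundaries?).
test_table = [
    # (input, expected)  -- clarified: .5 rounds up
    (7.4, 7),
    (7.5, 8),
    (9.99, 10),
    (0.0, 0),
]

def round_grade(x):
    return int(x + 0.5)

for given, expected in test_table:
    assert round_grade(given) == expected, (given, expected)
print("all table cases pass")
```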
Abstract:
The work proposed by Cleverton Hentz (2010) presented an approach to define tests from the formal description of a program's input. Since some programs, such as compilers, may have their inputs formalized through grammars, it is common to use context-free grammars to specify the set of their valid inputs. In the original work, the author developed a tool that automatically generates tests for compilers. In the present work, we identify types of problems in various areas that are described by grammars, for example the specification of software configurations, which are potential situations for using LGen. In addition, we conducted case studies with grammars from different domains, and from these studies it was possible to evaluate the behavior and performance of LGen during sentence generation, evaluating aspects such as execution time, number of generated sentences, and satisfaction of the coverage criteria available in LGen.
Abstract:
The main goal of Regression Testing (RT) is to reuse the test suite of the latest version of a software system in its current version, in order to maximize the value of the tests already developed and to ensure that old features continue working after the new changes. Even with reuse, it is common that not all tests need to be executed again. Because of that, the use of Regression Test Selection (RTS) techniques is encouraged; they aim to select, from all the tests, only those that reveal faults, which reduces costs and makes this an interesting practice for testing teams. Several recent research works evaluate the quality of the selections performed by RTS techniques, identifying which one presents the best results, as measured by metrics such as inclusion and precision. RTS techniques should search the System Under Test (SUT) for tests that reveal faults. However, because this problem has no viable solution, they alternatively search for tests that reveal changes, where faults may occur. Nevertheless, these changes may modify the execution flow of the algorithm itself, so that some tests no longer exercise the same code. In this context, this dissertation investigates whether changes performed in a SUT affect the quality of the test selection performed by an RTS technique and, if so, which characteristics of the changes cause errors, leading the RTS technique to include or exclude tests wrongly. For this purpose, a tool was developed in Java to automate the measurement of the average inclusion and precision achieved by a regression test selection technique for a particular kind of change. In order to validate this tool, an empirical study was conducted to evaluate the RTS technique Pythia, based on textual differencing, on a large web information system, analyzing the types of tasks performed to evolve the SUT.
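A sketch of the inclusion and precision metrics mentioned above, following their usual definitions in the RTS literature (Rothermel and Harrold); the test identifiers and sets below are hypothetical:

```python
# Inclusion: fraction of revealing tests that the technique selected.
# Precision: fraction of non-revealing tests that it correctly omitted.

def inclusion(selected, revealing):
    if not revealing:
        return 1.0
    return len(selected & revealing) / len(revealing)

def precision(selected, revealing, all_tests):
    non_revealing = all_tests - revealing
    if not non_revealing:
        return 1.0
    return len(non_revealing - selected) / len(non_revealing)

all_tests = {"t1", "t2", "t3", "t4", "t5"}
selected = {"t1", "t3", "t4"}       # tests chosen by the RTS technique
revealing = {"t1", "t2"}            # tests that actually reveal the change

print(inclusion(selected, revealing))             # 0.5   (t2 wrongly excluded)
print(precision(selected, revealing, all_tests))  # ~0.33 (t3, t4 wrongly included)
```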
Abstract:
This study aims to analyze the tourist information provided by the official websites of the 2014 FIFA World Cup host cities. The framework developed by Díaz (2005) was applied to analyze different aspects, such as: local tourist information, distribution of tourist services, communication and interaction between website and users, and foreign-language versions of the websites. This dissertation describes how society and tourism are related by analyzing the consequences of technological evolution in the travel and tourism sector, showing the importance of using information and communication technology to provide accurate, up-to-date and low-cost information on tourist destinations. Because of the nature of the study, the research subjects are the 12 Brazilian host cities represented by their respective official webpages (cities, states and convention bureaus), plus Brazil's official website, totaling 36 elements to be analyzed. The methodology is characterized as descriptive and exploratory with quantitative analysis, also drawing on desk research and a survey of the literature. In order to analyze the data collected, parametric and nonparametric statistical tests were used, such as: analysis of variance (ANOVA and Kruskal-Wallis) to measure variance of means between groups, combined with multiple comparison tests (Tukey and Games-Howell); nonparametric correlation tests (Kendall's tau-b); and cluster analysis. Finally, Microsoft Excel was used to collect the data and SPSS to manage it through quantitative analysis tests. Overall, the websites of the southern region showed better results than those of the other Brazilian regions. Despite this result, the data analysis demonstrated that the available tourist information is incomplete, as it was verified that the host cities' tourist websites are unable to provide all the information web visitors need to organize and plan their journey. This means that visitors have to look for more information in other sources.
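As an illustration of the tests named above, a small SciPy sketch on made-up scores; the dissertation itself used SPSS, and the regions and numbers below are hypothetical website-quality scores:

```python
# One-way ANOVA, Kruskal-Wallis and Kendall's tau-b with SciPy.
from scipy import stats

south = [8.1, 7.9, 8.4, 8.0]
northeast = [6.2, 6.8, 6.5, 6.9]
southeast = [7.0, 7.3, 6.9, 7.5]

# One-way ANOVA: do the group means differ?
f_stat, p_anova = stats.f_oneway(south, northeast, southeast)

# Kruskal-Wallis: nonparametric alternative when normality is doubtful.
h_stat, p_kw = stats.kruskal(south, northeast, southeast)

# Kendall's tau-b: rank correlation between two ordinal measures.
info_score = [3, 1, 4, 2, 5]
interaction_score = [2, 1, 5, 3, 4]
tau, p_tau = stats.kendalltau(info_score, interaction_score)

print(p_anova, p_kw, tau, p_tau)
```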
Abstract:
Brazil has about 8,500 km of coastline, and on this scale fishing is a historically important source of animal protein for human consumption. The national fishing record shows growth of marine fishery production until 1985, followed by a steady decline. From 2003 onward, fishing statistics point to some "recovery" of total fisheries production, which is probably related to a change in industry practice: the target of commercial fishing became smaller species with low commercial value but great abundance. The coney, Cephalopholis fulva (Serranidae), is one of the species that have been suffering greater fishing pressure in recent years. In order to provide data on the current state of the genetic diversity of these populations, several molecular markers have been used. Prior knowledge of genetic variability is crucial for management and biodiversity conservation. To this end, control region (d-loop) sequences of mtDNA from Cephalopholis fulva (Serranidae) from five geographical points along the coast of Brazil (Ceará, Rio Grande do Norte, Bahia and Espírito Santo) and the Archipelago of Fernando de Noronha (FN) were sequenced and their genetic diversity analyzed. The FST values were very low (0.000 to 0.0246), indicating high gene flow between the sampled sites. The h and π indices indicate either secondary contact between previously differentiated allopatric lineages or large, stable populations with a long evolutionary history. Tajima's and Fu's tests showed expansion for all populations. In contrast, the mismatch distribution and SSD indicated expansion only for the coastal populations. Unlike other Atlantic species, which were deeply affected by late Pleistocene events, the population-genetic patterns of C. fulva may be related to recent events, approximately 130,000 years ago. Moreover, the data presented by the geographical samples of C. fulva showed high genetic diversity, indicating the absence of deleterious effects of over-exploitation on this species, as well as evidence of complete panmixia among all sampled populations.
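For reference, a sketch of the standard form of Tajima's statistic cited above, taken from the general population-genetics literature rather than from the dissertation: it compares the mean number of pairwise differences with the number of segregating sites S scaled by a harmonic sum, and markedly negative values suggest recent population expansion.

```latex
D \;=\; \frac{\bar{k} \;-\; S/a_1}
             {\sqrt{\widehat{\operatorname{Var}}\!\left(\bar{k} - S/a_1\right)}},
\qquad
a_1 \;=\; \sum_{i=1}^{n-1} \frac{1}{i}
```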
Abstract:
Parameters related to the nutritional status of 151 healthy adults, belonging to the middle class and living in Botucatu, SP, Brazil, were studied. Anthropometric values were higher in men, except for triceps skinfold and arm fat area. Increasing age was associated with increases in muscle mass (men and women) and in body weight, triceps skinfold and arm fat area (women). The anthropometric results approached international reference values, but were not entirely concordant with them, being lower for body weight and for arm muscle circumference and area. In individuals under 50 years of age, energy intake values were slightly below recommended levels. Protein intake was adequate. Mean serum protein and lipid values were similar to the reference values. Skin hypersensitivity tests are presented as a functional test for assessing nutritional status.
Abstract:
Considering a quantum gas, the foundations of standard thermostatistics are investigated in the context of the non-Gaussian statistical mechanics introduced by Tsallis and Kaniadakis. The new formalism is based on the following generalizations: i) the Maxwell-Boltzmann-Gibbs entropy and ii) the deduction of the H-theorem. Based on this investigation, we calculate a new entropy using a generalization of combinatorial analysis based on two different counting methods. The basic ingredients used in the H-theorem were a generalized quantum entropy and a generalization of the collisional term of the Boltzmann equation. The power-law distributions are parameterized by the parameters q and κ, which measure the degree of non-Gaussianity of the quantum gas. In the limit q → 1 (κ → 0), the standard results are recovered.
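For context, the standard Tsallis generalization of the Boltzmann-Gibbs entropy that this line of work builds on (a textbook formula, not one quoted from the dissertation; the Kaniadakis κ-entropy plays the analogous role for the κ parameter):

```latex
S_q \;=\; k_B\,\frac{1 - \sum_i p_i^{\,q}}{q - 1},
\qquad
\lim_{q \to 1} S_q \;=\; -\,k_B \sum_i p_i \ln p_i
```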
Abstract:
The search for sustainable solutions through appropriate environmental management of the available natural resources, meeting the aspirations of preserving both the environment and human beings, so as to diagnose and solve environmental and social problems with the smallest possible impact on nature and on man, is the great challenge, both for this generation and for future generations. The study of the environmental problems of water, and of the participation and environmental understanding of the social actors as a whole, falls within the field of international environmental themes, contemplating the strategic need for appropriate management of this natural resource through a program aimed at diagnosing the problems and searching for compatible, sustainable solutions within a social and environmental policy of planning and environmental education, centered above all on the voice of the citizen who uses the system. The present thesis studies the problem of the sustainable management of water, focusing on the participation and environmental understanding of citizens in the use of this natural resource for urban residential activities. It approaches and analyzes variables that measure general and applied knowledge, sense of community, access to means of information, and environmental attitudes and behaviors, in addition to socio-demographic variables for the personal identification of the interviewees, in an exploratory survey-type study carried out through stratified random sampling, the strata being each of the 4 (four) political-administrative districts of the city of Natal; data collection took place from February to April 2002. The methodology used in this work consists of the application of questionnaires with Likert-type scales to measure the variables of the study, in addition to a socio-demographic scale to characterize the sample studied. For the analysis of the results, an exploratory descriptive study was first carried out, followed by the use of multivariate statistical techniques, such as factor analysis through principal components, and multiple linear regression studies. To complement this study, Pearson's chi-square tests of independence were performed, in order to verify the dependence of the associations between the socio-demographic variables and the principal variables selected and present in the factors resulting from the factor analysis. The results point to a low level of environmental knowledge, of access to information, and of sense of community, and show that the main resulting factors call for greater emphasis on management programs and actions addressing environmental understanding, and on behaviors and attitudes involving information and environmental education, in addition to the reuse of water.
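A brief sketch of a Pearson chi-square test of independence like those described above, on a hypothetical contingency table (e.g. education level versus reported water-reuse behavior); the thesis used its own survey data:

```python
# Chi-square test of independence on a 3x2 contingency table.
from scipy.stats import chi2_contingency

#                 reuses water   does not reuse
table = [
    [30, 20],   # primary education
    [45, 15],   # secondary education
    [50, 10],   # higher education
]

chi2, p_value, dof, expected = chi2_contingency(table)
print(f"chi2={chi2:.2f}, dof={dof}, p={p_value:.4f}")
# A small p-value suggests the two variables are not independent.
```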