73 results for Statistical hypothesis tests
Abstract:
Obesity markedly impairs functional capacity by diminishing cardiovascular efficiency and oxygen uptake (VO2). Field tests such as the Incremental Shuttle Walking Test (ISWT) and the Six-Minute Walk Test (6MWT) have been employed as alternatives to the Cardiopulmonary Exercise Test (CPX) for functional assessment in conditions where oxygen transport to the periphery is diminished. Nevertheless, knowledge about the real-time response of metabolic variables, and its comparison across different maximal and submaximal tests in obese subjects, is lacking. Aim: to compare the cardiopulmonary and metabolic responses during CPX, ISWT and 6MWT, and to analyse the influence of adiposity markers on these responses in obese subjects. Material and methods: cross-sectional, prospective study. Obese subjects (BMI > 30 kg/m2; FVC > 80%) were assessed for clinical, anthropometric (BMI; body adiposity index, BAI; waist, WC; hip, HC; and neck, NC, circumferences) and spirometric (forced vital capacity, FVC; forced expiratory volume in the first second, FEV1; maximal voluntary ventilation, MVV) variables. Each subject performed the tests in sequence: CPX, ISWT and 6MWT. Throughout the tests, variables were assessed breath-by-breath by a telemetry system (Cortex-Biophysik-Metamax3B): oxygen uptake at peak activity (VO2peak); carbon dioxide production (VCO2); expiratory volume (VE); ventilatory equivalents for VO2 (VE/VO2) and CO2 (VE/VCO2); respiratory exchange ratio (RER); and perceived effort (Borg 6-20). Results: 15 obese subjects (10 women), aged 39.4±10.1 years, with normal spirometry (%FVC = 93.7±9.7), finished all tests. They had similar BMI (43.5±6.6 kg/m2) but differed in % adiposity (BAI = 50.0±10.5% and 48.8±16.9% for women and men, respectively). Differences in VO2 (ml/kg/min) and %VO2 were found between CPX (18.6±4.0) and 6MWT (13.2±2.5), but not between CPX and ISWT (15.4±2.9). Agreement between ISWT and CPX was found for VO2peak (3.2 ml/kg/min; 95% CI -3.0 to 9.4) and %VO2 (16.4%). VCO2 (l/min) confirmed similar production for CPX (2.3±1.0) and ISWT (1.7±0.7) and a difference for 6MWT (1.4±0.6). WC explained the responses to CPX and ISWT better than the other adiposity markers. Adiposity reduced the duration of CPX by 3.2%. Conclusion: the ISWT elicits metabolic and cardiovascular responses similar to those of CPX in obese subjects, suggesting that the ISWT could be a useful and reliable tool to assess oxygen uptake and functional capacity in this population.
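Agreement between two measurement methods, as reported above for ISWT versus CPX (a mean bias with a 95% interval), is typically computed with Bland-Altman statistics. A minimal Python sketch under that assumption, using hypothetical VO2peak values rather than the study's data:

```python
import numpy as np

def bland_altman(a, b):
    """Bland-Altman agreement: mean bias and 95% limits of agreement."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    diff = a - b                  # paired differences (e.g., CPX - ISWT)
    bias = diff.mean()            # mean bias between the two tests
    sd = diff.std(ddof=1)         # SD of the paired differences
    return bias, bias - 1.96 * sd, bias + 1.96 * sd

# Hypothetical VO2peak values (ml/kg/min), for illustration only
cpx  = [18.2, 21.0, 15.9, 17.4, 23.1]
iswt = [15.6, 17.8, 14.2, 16.0, 19.5]
bias, lo, hi = bland_altman(cpx, iswt)
print(f"bias = {bias:.1f} ml/kg/min, 95% limits of agreement [{lo:.1f}, {hi:.1f}]")
```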
Abstract:
The germination of cotton seeds and the emergence of seedlings are generally delayed and reduced by salinity. Although cotton is considered a tolerant crop, it can suffer substantial reductions in growth and production when exposed to saline conditions. The aim of this study was to evaluate the effect of saline stress during the germination phase of four cotton genotypes (BRS Rubi, BRS Safira, BRS 201 and CNPA 187 8H), using different osmotic potentials generated by increasing concentrations of sodium chloride (NaCl). Saline stress was simulated using aqueous NaCl solutions at the potentials 0.0 (control), -0.2, -0.4, -0.6, -0.8 and -1.0 MPa. The treatments were monitored by means of seed-analysis tests: germination, first count, germination speed index, shoot length, radicle length, dry weight of the embryonic axis, and shoot/radicle ratio. The germination, first-count and germination-speed-index tests used 50 seeds per replicate; the shoot length, radicle length, dry weight of the embryonic axis and shoot/radicle ratio studies used 20 seeds per replicate. In both sets of tests, four replicates were carried out per genotype for each potential. The seeds of each replicate were wrapped in Germitest paper moistened with the NaCl solution corresponding to the potential, and the replicates were kept in a germinator at saturated humidity. The analyses began four days after the induction of saline stress. The first three variables were evaluated daily; seeds were removed and counted as they germinated. For the length tests, only the replicates at the 0.0 MPa potential were analysed 4 days after the beginning of saline stress induction; the replicates at potentials -0.2 and -0.4 MPa and at potentials -0.6, -0.8 and -1.0 MPa were analysed at 12 and 20 days, respectively. For this analysis, the shoots of the 20 seedlings of each replicate were separated from the radicles and both parts were measured. The statistical analyses were performed using the GENMOD and GLM procedures of SAS. For germination, the cultivars CNPA 187 8H and BRS Safira stood out at the potential -0.8 MPa, with averages of 89% and 81%, respectively. In the germination speed index test, the cultivar BRS Safira presented the largest averages at the two highest saline potentials. Increasing the saline potential was observed to reduce both the germination percentage and the germination speed index. For each day of evaluation, increasing the saline potential reduced both shoot and radicle length; the radicle tends to grow more than the shoot up to the potential -0.4 MPa.
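The germination speed index referenced above is commonly computed as Maguire's index, the sum of newly germinated seeds per day divided by the day number; the abstract does not state its exact formula, so this is a sketch under that assumption, with made-up counts:

```python
def germination_speed_index(daily_counts):
    """Maguire's germination speed index: sum over days of
    (seeds newly germinated on day i) / (day number i)."""
    return sum(n / day for day, n in enumerate(daily_counts, start=1))

# Hypothetical daily germination counts for one 50-seed replicate,
# days 1..8 after saline stress induction (illustration only)
counts = [0, 0, 0, 12, 18, 9, 4, 2]
print(f"GSI = {germination_speed_index(counts):.2f}")
```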
Abstract:
In recent years, much scientific research in implantology has focused on alternatives that could increase the speed and quality of the osseointegration process. Different treatment methods can be used to modify the topographic and chemical properties of titanium surfaces in order to optimize tissue-implant reactions through a positive tissue response. This study aimed to evaluate the adhesion and proliferation of mesenchymal cells from the human periodontal ligament on two different titanium surfaces, using cell culture techniques. Grade II titanium discs received different surface treatments, forming two distinct groups: polished and cathodic cage plasma nitriding. Human periodontal ligament mesenchymal cells were cultured on the titanium discs in 24-well cell culture plates at a density of 2 x 10^4 cells per well, with disc-free wells as a positive control. Cell adhesion and proliferation were analysed, and growth curves were obtained for each group, from counts of the cells that adhered to the titanium surfaces (polished and cathodic cage groups) and to the plastic surface (control group) at 24, 48 and 72 hours after plating. The data were submitted to nonparametric analysis, and the differences between groups were compared by the Kruskal-Wallis and Friedman statistical tests. No statistically significant differences in cell counts were found between the groups (p > 0.05). It was concluded that both treatments produced surfaces compatible with the adhesion and proliferation of human periodontal ligament mesenchymal cells.
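The two nonparametric tests named above are available in SciPy; a minimal sketch with hypothetical cell counts (not the study's data) showing how each would be applied:

```python
from scipy.stats import kruskal, friedmanchisquare

# Hypothetical adherent-cell counts per group (illustration only)
polished      = [41, 38, 45, 40]
cathodic_cage = [43, 39, 47, 42]
control       = [44, 41, 46, 43]

# Kruskal-Wallis: compares the three independent groups
h, p_kw = kruskal(polished, cathodic_cage, control)

# Friedman: repeated measures, e.g. the same wells at 24/48/72 h
counts_24h = [40, 38, 42, 41]
counts_48h = [55, 52, 58, 54]
counts_72h = [70, 66, 73, 69]
chi2, p_fr = friedmanchisquare(counts_24h, counts_48h, counts_72h)

print(f"Kruskal-Wallis p = {p_kw:.3f}, Friedman p = {p_fr:.3f}")  # p > 0.05 -> no significant difference
```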
Abstract:
Highly emotional items are better remembered in emotional memory tasks than neutral items. An example of emotional items that benefit declarative memory processes are taboo words. These words are subject to a conventional prohibition imposed by tradition or custom. The literature suggests that the stronger recollection of these words is due to emotional arousal, as well as to the fact that they form a cohesive semantic group, which is a positive additive effect. However, studies with semantic lists show that cohesion can have a negative interference effect, impairing memory. In two experiments, we analysed the effect of the arousal and semantic cohesion of taboo words on recognition tests, comparing them with two other word categories: words that are semantically related but carry no emotional arousal (semantic category), and neutral words with low semantic relatedness (objects). Our results indicate that cohesion interfered with test performance by increasing the number of false alarms. This effect was observed most strongly for the semantic category words in both experiments, but it also appeared for the neutral and taboo words when both were explicitly treated as semantic categories through the test instructions in Experiment 2. Despite the impairment induced by semantic cohesion in both experiments, taboo words were better discriminated than the others, a result consistent with emotional arousal being the main factor behind the better recollection of emotional items in memory tests.
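Discrimination in recognition tests of this kind is often quantified with the signal-detection index d', computed from hit and false-alarm rates; the abstract does not state which measure was used, so the sketch below is a generic illustration with made-up counts:

```python
from statistics import NormalDist

def d_prime(hits, misses, false_alarms, correct_rejections):
    """Signal-detection d': z(hit rate) - z(false-alarm rate), with a
    standard 0.5-count correction to avoid infinite z-scores at 0 or 1."""
    z = NormalDist().inv_cdf
    hit_rate = (hits + 0.5) / (hits + misses + 1)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    return z(hit_rate) - z(fa_rate)

# Hypothetical recognition counts for one participant (illustration only)
print(f"taboo    d' = {d_prime(38, 2, 5, 35):.2f}")
print(f"semantic d' = {d_prime(30, 10, 14, 26):.2f}")  # more false alarms -> lower discrimination
```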
Abstract:
The objective of this study was to determine whether involvement in traffic infractions can be predicted from the results of the psychological tests administered by specialized psychologists during the driver licensing process in the state of Rio Grande do Norte (RN), Brazil. The proposal consisted of identifying the penalty points recorded on national driving licenses (CNH) and the corresponding tests and scores, verifying whether the average test scores of drivers with and without an infraction record differed significantly, and checking for any relation between test scores and the frequency of infractions. The results of the psychological instruments were collected at two moments: at the acquisition of the CNH, and at license renewal at the only certified clinic and at DETRAN-RN. A population of 839 drivers from 14 municipalities was identified. 127 psychological test protocols were found in the records of DETRAN-RN (2002) and 76 at the clinic (2007), pointing to failures in the safekeeping of the psychological material as well as in its retrieval from the record files. The sample was thus reduced to 68 drivers, all male, aged between 18 and 41 years, mean 21.72 years (SD = 5.24). 54 drivers had no record of infractions and 14 had a record; the latter committed 29 infractions. The penalty points recorded on their CNH ranged from 0 to 35, and the typical value (median) was zero. In the group with a record of infractions, the points ranged between 3 and 35, mean 10.79 (SD = 7.73). Differences were observed in the composition of the battery of tests between the two moments for the same subjects; the use of different tests to assess the same construct at the first and second assessments prevented some of the statistically stronger analyses. Five tests had not been administered and 118 had not been scored/analysed. No significant differences between the groups were identified with the psychological instruments used. In another attempt to establish differences between the means, an independent-samples t-test showed a significant difference in the scores of the concentrated attention instruments in 2002 (t = 2.21, df = 25, p = 0.037) and of the diffuse attention instruments in 2002 (t = 2.37, df = 24, p = 0.026). The results also showed no significant correlation between test scores and infraction penalty points. Based on this study, it cannot be concluded that high or low scores are good criteria for predicting whether a driver will commit more or fewer traffic infractions, nor that drivers with higher test scores commit fewer infractions and vice versa. Furthermore, the difficulty of locating the instruments and the most basic data calls for stronger monitoring on the part of the certified clinic and of DETRAN-RN.
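The group comparison reported above uses an independent-samples t-test; a minimal SciPy sketch with hypothetical attention-test scores (not the study's data):

```python
from scipy.stats import ttest_ind

# Hypothetical attention-test scores for the two driver groups (illustration only)
with_infractions    = [78, 65, 70, 82, 60, 74, 68]
without_infractions = [85, 90, 72, 88, 79, 83, 91]

# Independent-samples t-test comparing the group means
t, p = ttest_ind(with_infractions, without_infractions)
print(f"t = {t:.2f}, p = {p:.3f}")  # p < 0.05 -> significant difference between means
```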
Abstract:
Camera motion estimation is one of the fundamental problems in Computer Vision, and it may be solved by several methods. Preemptive RANSAC is one of them; in spite of its robustness and speed, it lacks flexibility with respect to the requirements of the applications and hardware platforms that use it. In this work, we propose an improvement to the structure of Preemptive RANSAC in order to overcome such limitations and make it feasible to execute on devices with heterogeneous resources (especially low-budget systems) under tighter time and accuracy constraints. From Preemptive RANSAC we derived a function called BRUMA, which is able to generalize several preemption schemes, allowing previously fixed parameters (block size and elimination factor) to be changed according to the application's constraints. We also propose the Generalized Preemptive RANSAC method, which makes it possible to determine the maximum number of hypotheses an algorithm may generate. The experiments performed show the superiority of our method in the expected scenarios. Moreover, additional experiments show that multimethod hypothesis generation achieved more robust results with respect to the variability of the set of evaluated motion directions.
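BRUMA and Generalized Preemptive RANSAC themselves are not specified in this abstract; as background, a minimal Python sketch of the standard breadth-first preemption scheme they generalize, in which block size and the elimination factor (here, halving) are the fixed parameters the work makes configurable. All names are illustrative:

```python
import random

def preemptive_ransac(hypotheses, observations, score, block_size=100):
    """Breadth-first preemptive RANSAC: score every surviving hypothesis
    on one block of observations, then keep the best half and repeat.
    `score(h, o)` returns the (robust) support of observation o for h."""
    random.shuffle(observations)
    alive = list(range(len(hypotheses)))      # indices of surviving hypotheses
    totals = [0.0] * len(hypotheses)
    for start in range(0, len(observations), block_size):
        block = observations[start:start + block_size]
        for i in alive:
            totals[i] += sum(score(hypotheses[i], o) for o in block)
        alive.sort(key=lambda i: totals[i], reverse=True)
        alive = alive[:max(1, len(alive) // 2)]   # fixed elimination factor of 2
        if len(alive) == 1:
            break
    return hypotheses[alive[0]]
```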
Abstract:
Some programs may have their input data specified by formalized context-free grammars. This formalization facilitates the use of tools to systematize and raise the quality of the testing process. Among programs of this category, compilers were the first to use this kind of tool to automate their tests. In this work we present an approach for defining tests from the formal description of a program's inputs. Sentence generation takes into account the syntactic aspects defined by the specification of the inputs, that is, the grammar. For optimization, coverage criteria are used to limit the quantity of tests without diminishing their quality; our approach uses these criteria to drive generation towards sentences that satisfy a specific coverage criterion. The presented approach is based on the Lua language, relying heavily on its coroutines and on the dynamic construction of functions. With these resources, we propose a simple and compact implementation that can be optimized and controlled in different ways in order to satisfy the different implemented coverage criteria. To make the tool simpler to use, the EBNF notation was adopted for the specification of the inputs; its parser was specified in the Meta-Environment tool for rapid prototyping.
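The implementation described uses Lua coroutines, which are not reproduced here; as a rough analogue, a Python sketch that uses generators (playing a similar lazy-enumeration role) to produce sentences from a toy context-free grammar, with a depth bound standing in for a real coverage criterion:

```python
def sentences(symbol, grammar, depth=4):
    """Lazily enumerate sentences derivable from `symbol`, bounding the
    derivation depth so recursive rules terminate."""
    if symbol not in grammar:        # terminal symbol: yield it as-is
        yield symbol
        return
    if depth == 0:
        return
    for production in grammar[symbol]:
        # expand one alternative, combining the sub-derivations left to right
        def expand(syms):
            if not syms:
                yield ""
                return
            for head in sentences(syms[0], grammar, depth - 1):
                for tail in expand(syms[1:]):
                    yield head + tail
        yield from expand(production)

# Toy expression grammar: E -> E "+" n | n
toy = {"E": [["E", "+", "n"], ["n"]]}
print(sorted(set(sentences("E", toy))))   # ['n', 'n+n', 'n+n+n', 'n+n+n+n']
```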
Abstract:
Through the adoption of the software product line (SPL) approach, several benefits are achieved in comparison with conventional development processes based on creating a single software system at a time. The process of developing an SPL differs from traditional software construction in that it has two essential phases: domain engineering, when the common and variable elements of the SPL are defined and implemented; and application engineering, when one or more applications (specific products) are derived by reusing artifacts created in domain engineering. The testing activity is also fundamental and aims to detect defects in the artifacts produced during SPL development; however, the characteristics of an SPL bring new challenges to this activity that must be considered. Several approaches have recently been proposed for the product line testing process, but they have proved limited and provide only general guidelines. In addition, tools to support variability management and the customization of automated test cases for SPLs are also lacking. In this context, this dissertation proposes a systematic approach to software product line testing. The approach offers: (i) automated SPL test strategies to be applied in domain and application engineering; (ii) explicit guidelines to support the implementation and reuse of automated test cases at the unit, integration and system levels in domain and application engineering; and (iii) tool support for automating variability management and the customization of test cases. The approach is evaluated through its application to a software product line for web systems. The results of this work show that the proposed approach can help developers deal with the challenges imposed by the characteristics of SPLs during the testing process.
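The dissertation's own tooling is not shown in this abstract; purely as a generic illustration of customizing a reusable automated test case over SPL variability, a pytest sketch that parameterizes one domain-level test over hypothetical product configurations (all names are made up):

```python
import pytest

# Hypothetical product configurations derived in application engineering
PRODUCTS = {
    "basic":   {"features": {"checkout"}},
    "premium": {"features": {"checkout", "discount"}},
}

def checkout_total(items, discount=0.0):
    """Domain-engineering asset under test (toy implementation)."""
    return sum(items) * (1.0 - discount)

@pytest.mark.parametrize("product", PRODUCTS)
def test_checkout(product):
    config = PRODUCTS[product]
    # Reusable test core, customized by the product's selected features
    if "discount" in config["features"]:
        assert checkout_total([10.0, 20.0], discount=0.1) == pytest.approx(27.0)
    else:
        assert checkout_total([10.0, 20.0]) == pytest.approx(30.0)
```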
Abstract:
Formal methods and software testing are tools for obtaining and controlling software quality. When used together, they provide mechanisms for software specification, verification and error detection. Even though formal methods allow software to be mathematically verified, they are not enough to assure that a system is free of faults; software testing techniques are thus necessary to complement the verification and validation of a system. Model-based testing techniques allow tests to be generated from other software artifacts, such as specifications and abstract models. Using formal specifications as the basis for test creation, we can generate better-quality tests, because these specifications are usually precise and free of ambiguity. Fernanda Souza (2009) proposed a method to define test cases from B Method specifications. This method used information from the machine's invariant and the operation's precondition to define positive and negative test cases for an operation, using techniques based on equivalence class partitioning and boundary value analysis. However, the method proposed in 2009 was not automated and had conceptual deficiencies; for instance, it did not fit into a well-defined coverage criteria classification. We started our work with a case study that applied the method to an example B specification from industry, and this study provided the input needed to improve the method. In our work we evolved the proposed method, rewriting it and adding characteristics to make it compatible with a test classification used by the community. We also improved the method to support specifications structured in different components, to use information from the operation's behaviour in the test case generation process, and to use new coverage criteria. In addition, we implemented a tool to automate the method and submitted it to more complex case studies.
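As a generic illustration of the equivalence-partitioning and boundary-value techniques the method builds on (not the tool itself), a sketch deriving positive and negative test inputs from a hypothetical precondition of the form low <= x <= high:

```python
def boundary_values(low, high):
    """Classic boundary-value analysis for an integer precondition
    low <= x <= high: values at, just inside and just outside each bound."""
    positive = [low, low + 1, (low + high) // 2, high - 1, high]  # should be accepted
    negative = [low - 1, high + 1]                                # should be rejected
    return positive, negative

# Precondition of a hypothetical B operation: 0 <= x <= 100
valid, invalid = boundary_values(0, 100)
print("positive test inputs:", valid)    # [0, 1, 50, 99, 100]
print("negative test inputs:", invalid)  # [-1, 101]
```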
Abstract:
Automation is an important activity of the testing process and can significantly reduce development time and cost. Some tools have been proposed to automate the execution of acceptance tests in Web applications; however, most of them have important limitations, such as the need for manual valuing of test cases, refactoring of the generated code, and strong dependence on the structure of the HTML pages. In this work, we present a test specification language and a tool designed to minimize the impact of these limitations. The proposed language supports equivalence class criteria, and the tool, developed as a plug-in for the Eclipse platform, allows test cases to be generated through different combination strategies. To evaluate the approach, we used one of the modules of the Sistema Unificado de Administração Pública (SUAP) of the Instituto Federal do Rio Grande do Norte (IFRN). Systems analysts and a computer technician who work as developers of the system participated in the evaluation.
Abstract:
Automation has become increasingly necessary in the software testing process due to the high cost and time associated with this activity. Some tools have been proposed to automate the execution of acceptance tests in Web applications; however, many of them have important limitations, such as a strong dependence on the structure of the HTML pages and the need for manual valuing of the test cases. In this work, we present a language for specifying acceptance test scenarios for Web applications, called IFL4TCG, and a tool that generates test cases from these scenarios. The proposed language supports the equivalence class partitioning criterion, and the tool generates test cases that meet different combination strategies (i.e., Each-Choice, Base-Choice and All Combinations). In order to evaluate the effectiveness of the proposed solution, we used the language and the associated tool to design and execute acceptance tests on a module of the Sistema Unificado de Administração Pública (SUAP) of the Instituto Federal do Rio Grande do Norte (IFRN). Four systems analysts and one computer technician, who work as developers of that system, participated in the evaluation. Preliminary results showed that IFL4TCG can indeed help to detect defects in Web applications.
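IFL4TCG itself is not reproduced here; as a generic sketch of two of the combination strategies named above, applied to hypothetical equivalence classes for two input fields:

```python
from itertools import product

# Hypothetical equivalence classes for two form fields (illustration only)
classes = {
    "age":     ["minor", "adult", "senior"],
    "payment": ["card", "cash"],
}

# All Combinations: the full cross product of the classes of every field
all_combinations = [dict(zip(classes, combo)) for combo in product(*classes.values())]

# Each-Choice: every class of every field appears in at least one test case
each_choice = [
    {field: values[i % len(values)] for field, values in classes.items()}
    for i in range(max(map(len, classes.values())))
]

print(len(all_combinations), "tests for All Combinations")  # 6
print(len(each_choice), "tests for Each-Choice")            # 3
```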
An approach for verifying exceptional behavior based on design rules and tests
Abstract:
Checking conformance between implementation and design rules is an important activity for ensuring that no degradation occurs between the architectural patterns defined for a system and what is actually implemented in its source code. Especially for systems that require a high level of reliability, it is important to define specific design rules for exceptional behavior: such rules describe how exceptions should flow through the system, defining which elements are responsible for catching the exceptions thrown by other system elements. However, current approaches to automatically checking design rules do not provide suitable mechanisms to define and verify rules related to the exception handling policy of applications. This work proposes a practical approach to preserving the exceptional behavior of an application or family of applications, based on the definition and automatic runtime checking of exception handling design rules for systems developed in Java or AspectJ. To support this approach, a tool called VITTAE (Verification and Information Tool to Analyze Exceptions) was developed in the context of this work; it extends the JUnit framework and automates the testing of exceptional design rules. We conducted a case study with the primary objective of evaluating the effectiveness of the proposed approach on a software product line. In addition, an experiment was conducted to compare the proposed approach with one based on a tool called JUnitE, which also proposes testing exception handling code using JUnit tests. The results showed how exception handling design rules evolve across different versions of a system and that VITTAE can aid in the detection of defects in exception handling code.
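VITTAE extends JUnit and is not reproduced here; purely as an illustration of the kind of exception-flow design rule being tested, a pytest-style sketch in Python with hypothetical names. The rule states that a low-level data-access exception must reach callers only as a domain-level exception:

```python
import pytest

class RepositoryError(Exception): ...   # low-level exception (hypothetical)
class ServiceError(Exception): ...      # domain-level exception callers may see

def find_user(user_id):
    """Service-layer element: by the design rule, it must catch
    RepositoryError and re-signal it as ServiceError."""
    try:
        raise RepositoryError("connection refused")   # simulated data-access failure
    except RepositoryError as e:
        raise ServiceError("user lookup failed") from e

def test_exception_flow_rule():
    # Design rule under test: RepositoryError never escapes the service layer
    with pytest.raises(ServiceError):
        find_user(42)
```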
Abstract:
There is growing interest in the Computer Science education community in including testing concepts in introductory programming courses. Aiming to contribute to this issue, we introduce POPT, a Problem-Oriented Programming and Testing approach for introductory programming courses. POPT's main goal is to improve the traditional method of teaching introductory programming, which concentrates mainly on implementation and neglects testing. POPT extends the POP (Problem-Oriented Programming) methodology proposed in the PhD thesis of Andrea Mendonça (UFCG). In both methodologies, students' skills in dealing with ill-defined problems must be developed from the first programming courses onwards; in POPT, however, students are stimulated to clarify ill-defined problem specifications guided by the definition of test cases (in a table-like manner). This work presents POPT and TestBoot, a tool developed to support the methodology. In order to evaluate the approach, a case study and a controlled experiment (which adopted a Latin square design) were performed in an introductory programming course of the Computer Science and Software Engineering programs at the Federal University of Rio Grande do Norte, Brazil. The results show that, when compared to a blind-testing approach, POPT stimulates the implementation of programs of better external quality: the first program version submitted by POPT students passed twice as many (professor-defined) test cases as those of non-POPT students. Moreover, POPT students submitted fewer program versions and spent more time before submitting the first version to the automatic evaluation system, which leads us to think that POPT students are stimulated to think more carefully about the solution they are implementing. The controlled experiment confirmed the influence of the proposed methodology on the quality of the code developed by POPT students.
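The concrete table format used in the course is not shown in this abstract; as a toy illustration of the idea, test cases written as a table to pin down an ill-defined specification before any implementation, then run against it (all names and values hypothetical):

```python
# Hypothetical problem: compute the fare of a ride, spec initially ill-defined.
# Students first fix the spec as a table of test cases.
TEST_TABLE = [
    # (description,             km,   expected_fare)
    ("regular ride",            10.0, 25.0),
    ("short ride hits minimum",  0.5,  5.0),   # clarifies: minimum fare is 5.0
    ("zero distance",            0.0,  5.0),   # clarifies: minimum still applies
]

def fare(km, rate=2.5, minimum=5.0):
    """Implementation written only after the table fixed the spec."""
    return max(km * rate, minimum)

for description, km, expected in TEST_TABLE:
    assert fare(km) == expected, description
print("all table-defined test cases passed")
```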
Abstract:
The work proposed by Cleverton Hentz (2010) presented an approach to define tests from the formal description of a program's input. Since some programs, such as compilers, may have their inputs formalized through grammars, it is common to use context-free grammars to specify the set of valid inputs. In the original work, the author developed a tool, LGen, that automatically generates tests for compilers. In the present work, we identify types of problems in various areas that are described using grammars, for example the specification of software configurations, which are potential scenarios for using LGen. In addition, we conducted case studies with grammars from different domains; from these studies it was possible to evaluate the behaviour and performance of LGen during sentence generation, considering aspects such as execution time, number of generated sentences, and satisfaction of the coverage criteria available in LGen.