Abstract:
Automation is an important activity of the testing process and can significantly reduce development time and cost. Several tools have been proposed to automate acceptance testing in Web applications. However, most of them have important limitations, such as the need for manual assignment of test case values, refactoring of the generated code, and a strong dependence on the structure of the HTML pages. In this work, we present a test specification language and a tool designed to minimize the impact of these limitations. The proposed language supports equivalence class criteria, and the tool, developed as a plug-in for the Eclipse platform, generates test cases through different combination strategies. To evaluate the approach, we used one of the modules of the Sistema Unificado de Administração Pública (SUAP) of the Instituto Federal do Rio Grande do Norte (IFRN). Systems analysts and a computer technician who work as developers of that system participated in the evaluation.
Abstract:
Automation has become increasingly necessary during the software testing process due to the high cost and time associated with this activity. Some tools have been proposed to automate the execution of acceptance tests in Web applications. However, many of them have important limitations, such as a strong dependence on the structure of the HTML pages and the need to assign test case values manually. In this work, we present a language for specifying acceptance test scenarios for Web applications, called IFL4TCG, and a tool that generates test cases from these scenarios. The proposed language supports the Equivalence Class Partitioning criterion, and the tool generates test cases according to different combination strategies (i.e., Each-Choice, Base-Choice, and All Combinations). To evaluate the effectiveness of the proposed solution, we used the language and the associated tool to design and execute acceptance tests on a module of the Sistema Unificado de Administração Pública (SUAP) of the Instituto Federal do Rio Grande do Norte (IFRN). Four systems analysts and one computer technician, who work as developers of that system, participated in the evaluation. Preliminary results showed that IFL4TCG can actually help to detect defects in Web applications.
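The combination strategies named in the abstract can be sketched in a few lines. The equivalence classes below are purely illustrative (the field names are hypothetical and this is not IFL4TCG syntax); the sketch shows how All Combinations enumerates the full Cartesian product while Each-Choice covers every class of every parameter at least once with far fewer cases.

```python
from itertools import product

# Hypothetical equivalence classes for two input fields of a Web form.
classes = {
    "age": ["minor", "adult", "senior"],
    "status": ["active", "inactive"],
}

def all_combinations(classes):
    """All Combinations: the Cartesian product of every equivalence class."""
    keys = list(classes)
    return [dict(zip(keys, values))
            for values in product(*(classes[k] for k in keys))]

def each_choice(classes):
    """Each-Choice: every class of every parameter appears in at least one test."""
    keys = list(classes)
    n = max(len(v) for v in classes.values())
    return [{k: classes[k][i % len(classes[k])] for k in keys} for i in range(n)]
```

For the two fields above, All Combinations yields 3 × 2 = 6 test cases, while Each-Choice needs only 3, which is why the choice of strategy matters for test suite size.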
Abstract:
Committees of classifiers may be used to improve the accuracy of classification systems; in other words, different classifiers applied to the same problem can be combined into a system of greater accuracy, called a committee of classifiers. For this to succeed, the classifiers must make mistakes on different objects of the problem, so that the errors of one classifier are compensated by the correct outputs of the others when the committee's combination method is applied. This property of classifiers erring on different objects is called diversity. However, most diversity measures fail to capture this notion adequately. Recently, two diversity measures (good and bad diversity) were proposed with the aim of helping to generate more accurate committees. This work performs an experimental analysis of these measures applied directly to the construction of committees of classifiers. The construction method adopted is modeled as a search, over the feature sets of the problem's databases and the possible sets of committee members, for the committee of classifiers that produces the most accurate classification. This problem is solved by metaheuristic optimization techniques, in their mono- and multi-objective versions. Analyses are performed to verify whether using or adding the measures of good diversity and bad diversity as optimization objectives creates more accurate committees. Thus, the contribution of this study is to determine whether good diversity and bad diversity can be used in mono-objective and multi-objective optimization techniques as objectives for building committees of classifiers more accurate than those built by the same process using only classification accuracy as the optimization objective.
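The good- and bad-diversity measures referred to above decompose the majority-vote error: member mistakes on instances the committee gets right are "good" dissent (they cost nothing), while correct member votes on instances the committee gets wrong are "bad" (they are wasted). A minimal sketch for binary 0/1 labels, assuming an odd number of members so there are no ties (a plain reading of the decomposition, not the original authors' implementation):

```python
def good_bad_diversity(votes, y_true):
    """votes: list of per-member prediction lists, one 0/1 label per instance.
    Returns (good, bad) diversity under majority voting."""
    n = len(y_true)           # number of instances
    L = len(votes)            # number of committee members (assumed odd)
    good = bad = 0.0
    for i, truth in enumerate(y_true):
        frac_correct = sum(1 for member in votes if member[i] == truth) / L
        if frac_correct > 0.5:
            good += (1 - frac_correct) / n   # committee right: member errors are harmless dissent
        else:
            bad += frac_correct / n          # committee wrong: correct votes are wasted
    return good, bad
```

With these definitions the majority-vote error equals the average member error minus good diversity plus bad diversity, which is why maximizing the former and minimizing the latter is a sensible optimization objective.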
An approach to verifying exceptional behavior based on design rules and tests
Abstract:
Checking the conformity between the implementation and the design rules of a system is an important activity for ensuring that no degradation occurs between the architectural patterns defined for the system and what is actually implemented in the source code. Especially for systems that require a high level of reliability, it is important to define specific design rules for exceptional behavior. Such rules describe how exceptions should flow through the system, defining which elements are responsible for catching exceptions thrown by other system elements. However, current approaches to automatically checking design rules do not provide suitable mechanisms to define and verify rules related to the exception handling policy of applications. This work proposes a practical approach to preserving the exceptional behavior of an application or family of applications, based on the definition and automatic runtime checking of design rules for exception handling in systems developed in Java or AspectJ. To support this approach, a tool called VITTAE (Verification and Information Tool to Analyze Exceptions) was developed in the context of this work; it extends the JUnit framework and automates the testing of exceptional design rules. We conducted a case study whose primary objective was to evaluate the effectiveness of the proposed approach on a software product line. In addition, an experiment was conducted to compare the proposed approach with one based on a tool called JUnitE, which also proposes testing exception handling code with JUnit tests. The results showed how exception handling design rules evolve across different versions of a system and that VITTAE can aid in the detection of defects in exception handling code.
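The kind of design rule discussed above can be made concrete with a toy checker. The rule format, exception names, and module names below are illustrative only (VITTAE's actual notation is Java/AspectJ-based and is not reproduced here): each rule states which element must catch a given exception raised by another element, and exception flows observed at runtime are checked against the rules.

```python
# Illustrative rules: (exception type, raising module, module that must catch it).
RULES = [
    ("PersistenceError", "repository", "service"),
]

def check_flow(observed_flows, rules):
    """observed_flows: list of (exception, raised_in, caught_in) tuples collected
    at runtime. Returns every flow that violates some rule, i.e. a matching
    exception caught somewhere other than the required handler."""
    violations = []
    for exc, source, required_handler in rules:
        for o_exc, o_src, o_handler in observed_flows:
            if o_exc == exc and o_src == source and o_handler != required_handler:
                violations.append((o_exc, o_src, o_handler))
    return violations
```

A test harness in the spirit of the approach would fail whenever `check_flow` returns a non-empty list, turning the design rule into an executable check.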
Abstract:
There is growing interest in the Computer Science education community in including testing concepts in introductory programming courses. Aiming to contribute to this effort, we introduce POPT, a Problem-Oriented Programming and Testing approach for introductory programming courses. POPT's main goal is to improve the traditional method of teaching introductory programming, which concentrates mainly on implementation and neglects testing. POPT extends the POP (Problem-Oriented Programming) methodology proposed in the PhD thesis of Andrea Mendonça (UFCG). In both methodologies, POPT and POP, students' skills in dealing with ill-defined problems must be developed from the first programming courses. In POPT, however, students are stimulated to clarify ill-defined problem specifications, guided by the definition of test cases (in a table-like manner). This work presents POPT and TestBoot, a tool developed to support the methodology. To evaluate the approach, a case study and a controlled experiment (which adopted the Latin Square design) were performed in an introductory programming course of the Computer Science and Software Engineering programs at the Federal University of Rio Grande do Norte, Brazil. The results have shown that, when compared to a blind testing approach, POPT stimulates the implementation of programs of better external quality: the first program version submitted by POPT students passed twice as many (professor-defined) test cases as those of non-POPT students. Moreover, POPT students submitted fewer program versions and spent more time before submitting the first version to the automatic evaluation system, which leads us to think that POPT students are stimulated to think more carefully about the solution they are implementing. The controlled experiment confirmed the influence of the proposed methodology on the quality of the code developed by POPT students.
Abstract:
The work proposed by Cleverton Hentz (2010) presented an approach to deriving tests from the formal description of a program's input. Since some programs, such as compilers, may have their inputs formalized through grammars, it is common to use context-free grammars to specify the set of valid inputs. In the original work, the author developed a tool, LGen, that automatically generates tests for compilers. In the present work, we identify types of problems in various areas that are described by grammars, for example the specification of software configurations, and that are therefore potential applications for LGen. In addition, we conducted case studies with grammars from different domains; these studies made it possible to evaluate the behavior and performance of LGen during sentence generation, considering aspects such as execution time, number of generated sentences, and satisfaction of the coverage criteria available in LGen.
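The idea of generating test sentences from a context-free grammar can be sketched with a bounded enumerator. The grammar below is a toy example (it is not one of the case-study grammars, and LGen's coverage criteria are richer than a plain depth bound); nonterminals map to lists of productions, and anything not in the grammar is a terminal.

```python
from itertools import product

# Toy grammar for simple additive expressions over 0 and 1.
GRAMMAR = {
    "expr": [["term"], ["term", "+", "expr"]],
    "term": [["num"], ["(", "expr", ")"]],
    "num": [["0"], ["1"]],
}

def sentences(symbol, depth):
    """Enumerate terminal strings derivable from `symbol` in at most
    `depth` nested expansions (a crude bound to keep the output finite)."""
    if symbol not in GRAMMAR:      # terminal: yield it as-is
        yield symbol
        return
    if depth == 0:                 # expansion budget exhausted
        return
    for production in GRAMMAR[symbol]:
        for combo in product(*(list(sentences(s, depth - 1)) for s in production)):
            yield " ".join(combo)
```

Growing the depth bound grows the sentence set, which is the basic lever a grammar-based generator exposes when trading generation time against coverage.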
Abstract:
The main goal of Regression Testing (RT) is to reuse the test suite of the latest version of a software system in its current version, in order to maximize the value of the tests already developed and to ensure that old features continue to work after new changes. Even with reuse, it is common that not all tests need to be executed again. For this reason, Regression Test Selection (RTS) techniques are encouraged: they aim to select, from the whole suite, only those tests that reveal faults, reducing costs and making this an interesting practice for testing teams. Several recent research works evaluate the quality of the selections performed by RTS techniques, identifying which one presents the best results as measured by metrics such as inclusion and precision. RTS techniques should search the System Under Test (SUT) for tests that reveal faults. However, because this is a problem without a viable solution, they alternatively search for tests that reveal changes, where faults may occur. Nevertheless, these changes may modify the execution flow of the program itself, so that some tests no longer exercise the same code. In this context, this dissertation investigates whether changes performed in a SUT affect the quality of the test selection performed by an RTS technique and, if so, which characteristics of the changes cause errors, leading the technique to include or exclude tests wrongly. For this purpose, a tool was developed in Java to automate the measurement of the inclusion and precision averages achieved by a regression test selection technique for a particular type of change. To validate this tool, an empirical study was conducted to evaluate the RTS technique Pythia, based on textual differencing, on a large Web information system, analyzing the types of tasks performed to evolve the SUT.
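The inclusion and precision metrics mentioned above have standard set-based definitions in the RTS literature: inclusion is the percentage of modification-revealing tests the technique selects, and precision is the percentage of non-revealing tests it omits. A minimal sketch (the test identifiers are hypothetical):

```python
def inclusion(selected, revealing):
    """Percentage of modification-revealing tests that were selected."""
    revealing = set(revealing)
    if not revealing:
        return 100.0
    return 100.0 * len(revealing & set(selected)) / len(revealing)

def precision(selected, all_tests, revealing):
    """Percentage of non-revealing tests that were (correctly) omitted."""
    non_revealing = set(all_tests) - set(revealing)
    if not non_revealing:
        return 100.0
    return 100.0 * len(non_revealing - set(selected)) / len(non_revealing)
```

A technique with 100% inclusion never misses a revealing test; a technique with 100% precision never wastes effort re-running a test that cannot reveal the change.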
Abstract:
Parameters related to the nutritional status of 151 healthy adults, belonging to the middle class and living in Botucatu, SP, Brazil, were studied. Anthropometric values were higher in men, except for the triceps skinfold and the arm fat area. Increasing age was associated with increased muscle mass values (men and women) and with increased body weight, triceps skinfold, and arm fat area (women). The anthropometric results approached international reference values but did not fully agree with them, being lower for body weight and for arm muscle circumference and area. In individuals under 50 years of age, energy intake values were slightly below recommended levels. Protein intake was adequate. Mean serum protein and lipid values were similar to reference values. Skin hypersensitivity tests are presented as a functional test for assessing nutritional status.
Abstract:
In survival analysis, the response is usually the time until the occurrence of an event of interest, called the failure time. The main characteristic of survival data is the presence of censoring, which is a partial observation of the response. Among the models that properly fit many practical situations, the Weibull model occupies an important position. Marshall-Olkin extended-form distributions offer a generalization of a baseline distribution that enables greater flexibility in fitting lifetime data. This work presents a simulation study that compares the gradient test and the likelihood ratio test using the Marshall-Olkin extended-form Weibull distribution. The results show only a small advantage for the likelihood ratio test.
Abstract:
Survival models deal with the modeling of time-to-event data. However, in some situations part of the population may no longer be subject to the event. Models that take this fact into account are called cure rate models. There are few studies about hypothesis tests in cure rate models. Recently a new test statistic, the gradient statistic, has been proposed. It shares the same asymptotic properties with the classic large-sample tests: the likelihood ratio, score, and Wald tests. Some simulation studies have been carried out to explore the behavior of the gradient statistic in finite samples and to compare it with the classic statistics in different models. The main objective of this work is to study and compare the performance of the gradient test and the likelihood ratio test in cure rate models. We first describe the models and present the main asymptotic properties of the tests. We then perform a simulation study based on the promotion time model with Weibull distribution to assess the performance of the tests in finite samples. An application is presented to illustrate the studied concepts.
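The two statistics compared above have simple closed forms in a one-parameter model. A promotion time cure model is too long for a short sketch, so the example below uses the rate λ of an exponential lifetime as a didactic stand-in (an assumption, not the model studied in the work): the likelihood ratio statistic is 2(ℓ(λ̂) − ℓ(λ₀)) and the gradient statistic is U(λ₀)(λ̂ − λ₀), both asymptotically chi-squared with one degree of freedom under H₀. The data are made up for illustration.

```python
import math

def lr_and_gradient(x, lam0):
    """Likelihood ratio and gradient statistics for H0: lambda = lam0
    in an exponential model with no censoring."""
    n, s = len(x), sum(x)
    lam_hat = n / s                                  # MLE of the rate
    loglik = lambda lam: n * math.log(lam) - lam * s # exponential log-likelihood
    lr = 2.0 * (loglik(lam_hat) - loglik(lam0))      # likelihood ratio statistic
    score0 = n / lam0 - s                            # score function at lam0
    grad = score0 * (lam_hat - lam0)                 # gradient statistic
    return lr, grad
```

Note that the gradient statistic needs only the score at λ₀ and the unrestricted estimate, which is the computational appeal highlighted in comparisons with the likelihood ratio test.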
Abstract:
OBJECTIVE: Among women, sexual intercourse is currently the form of transmission that has contributed most to the feminization of the HIV/Aids epidemic. In the ongoing effort to establish more adequate standards of health education, we investigated the use of contraceptive measures that also protect against HIV transmission among women living with HIV/Aids. METHODS: Exploratory study carried out in a public outpatient service of a university hospital, a referral center for people with HIV/Aids in the south-central region of the State of São Paulo, over a five-month period (2000 to 2001). Seventy-three women infected with HIV, or with Aids, were studied. Data were collected through a form covering sociodemographic characteristics, the contraceptive methods used, and the serological status of the sexual partner. The data were analyzed descriptively, and the content of open-ended answers was grouped into themes. Fisher's exact test was applied to some variables at the 5% level. Content analysis followed Bardin's approach. RESULTS: Most of the women were of reproductive age, were married, and had been infected almost exclusively through heterosexual intercourse. Among them, 35.4% reported a sexual partner with discordant anti-HIV serology, and 13.7% used inadequate contraceptive methods that also did not protect against HIV transmission. CONCLUSIONS: The results highlight the need for continued educational actions regarding safer sexual practices among women living with HIV/Aids, so that they can discuss with their partners other ways of exercising their sexuality and make a more conscious contraceptive choice, caring for their own health and that of their partner and even of the unborn child.
Abstract:
Because the penetration depth of Ground Penetrating Radar (GPR) signals is very limited in highly conductive soils, the usefulness of this method in tropical regions is not yet completely known. The main objective of this research is to test the usefulness of the method in Brazil. Two typical problems for which GPR has been used in Europe and North America were chosen for this test: the first is to characterize the internal structures of a sand body, and the second is the localization of old buried pipelines. The first test was done near the city of São Bento do Norte, on the northern coast of Rio Grande do Norte state, NE Brazil. In this region there is a sand dune that is migrating very fast toward adjacent settled areas. To characterize the internal structure of the dune and its relationship to the prevailing wind direction, as a preliminary step toward understanding the dune migration, GPR profiles at the 400 MHz frequency were acquired in the E-W, N-S, NE-SW, and SE-NW directions over the sand dune, intersecting at its top. The practical resolution of the GPR data is around 30 cm, which was sufficient to distinguish individual foresets inside the dune. After applying the elevation correction to the data, we identified that the dips of the bedding structures are smallest in the N-S profile, which is perpendicular to the dominant wind direction, largest in the E-W profile, and intermediate in the SW-NE and SE-NW profiles. Foresets in the E-W profile dip at angles varying from 2 to 6 degrees. In the E-W profile, the water table and a horizontal truncation interface separating two generations of dunes were identified, as well as an abrupt directional change in the foreset patterns associated with a lateral contact between two dune generations, the older one extending to the west. The high frequency used (400 MHz) does not allow penetration deep enough to map these internal contacts completely.
The second test was done near Estreito, a small town near the city of Carnaúbais, also in Rio Grande do Norte state. In this locality, several old pipelines are buried in an area covered by plantations, where digging should be minimized. Several GPR profiles at the 400 and 200 MHz frequencies were acquired in an attempt to intercept the possible pipelines perpendicularly. Because of the high conductivity of the soil, the raw data can hardly be used to identify the pipelines. However, after adequate processing of the 200 MHz profiles, six pipelines were identified. As an overall result of the tests, GPR can be very useful if the conductivity of the ground is low or, in the case of soils of medium conductivity, if adequate processing is performed.
Abstract:
We present an integrated geophysical investigation of the spatial distribution of faults and deformation bands (DBs) in a faulted siliciclastic reservoir analogue located in the Tucano Basin, Bahia State, northeastern Brazil. Ground Penetrating Radar (GPR) and permeability measurements allowed the analysis of the influence of DBs on rock permeability and porosity. GPR data were processed using a suitable flow parametrization in order to highlight discontinuities in the sedimentary layers. The resulting images allowed the subsurface detection of DBs presenting displacements greater than 10 cm. A good correlation was verified between the DBs detected by GPR and those observed at the surface, the latter identified using conventional structural methods. After some adaptations of the minipermeameter to increase measurement precision, two approaches to measuring permeability were tested: in situ and on collected cores. The former approach, which consisted of scraping the outcrop surface and then measuring directly on the outcrop rocks, provided better results than the latter. The measured permeability profiles allowed us to characterize the spatial transition from DBs to undeformed rock; variations of up to three orders of magnitude were detected. The permeability profiles also presented quasi-periodic patterns, associated with textural and granulometric changes, possibly related to depositional cycles. Integrated interpretation of the geological, geophysical, and core data provided the subsurface identification of an increase in the number of DBs associated with a sedimentary layer presenting a granulometric decrease at depths greater than 8 m. An associated sharp decrease in permeability was also measured in cores from boreholes. The results reveal that radargrams, besides providing high-resolution images that allow the detection of small structures (> 10 cm), also correlate with the permeability data.
In this way, GPR data may be used to build upscaling laws, bridging the gap between outcrop and seismic data sets, which may result in better models for faulted reservoirs.
Abstract:
This work describes the experience of an undergraduate student caring for a hospitalized child in a pediatric unit. Using therapeutic communication techniques and therapeutic nursing measures, the student developed a helping relationship with the child, which allowed her to provide comprehensive nursing care, becoming involved with the child and sharing experiences beneficial to both.