29 results for "Testes Substantivos"

at the Universidade Federal do Rio Grande do Norte (UFRN)


Relevance:

20.00%

Publisher:

Abstract:

Mimosa caesalpiniaefolia Benth. is a forest species of the family Mimosaceae recommended for the recovery of degraded areas. The evaluation of vigor by biochemical tests has become an important tool in seed quality control programs, with electrical conductivity and potassium leaching among the most efficient tests for assessing physiological potential. The objective of this work was therefore to adjust the methodology of the electrical conductivity test for seeds of M. caesalpiniaefolia and then to compare its efficiency with that of the potassium leaching test in evaluating the vigor of different seed lots of the species. To adjust the electrical conductivity test, different combinations of temperature (25 °C and 30 °C), number of seeds (25 and 50), imbibition period (4, 8, 12, 16 and 24 hours) and volume of deionized water (50 mL and 75 mL) were tested. The potassium leaching test was conducted using the methodology established in the electrical conductivity adjustment, so that the efficiency of both tests in classifying seed lots at different vigor levels could be compared; a 4-hour period was also evaluated, since the potassium leaching test can be more efficient at shorter times. The best combination obtained for the electrical conductivity test was 25 seeds soaked in 50 mL of deionized or distilled water for 8 hours at 30 °C. Data were subjected to analysis of variance, means were compared by F and Tukey tests at 5% probability, and polynomial regression analysis was performed when necessary. The electrical conductivity test at the 8-hour period proved more efficient than the potassium leaching test in separating seed lots of M. caesalpiniaefolia into different vigor levels

Relevance:

20.00%

Publisher:

Abstract:

The advent of the Internet stimulated the appearance of several services, among them the communication services present in users' day-to-day lives. Services such as chat and e-mail reach an increasing number of users, turning the Net into a powerful communication medium. This work explores the use of conventional communication services on the Net infrastructure. We introduce the concept of communication social protocols applied to a shared virtual environment and argue that communication tools have to be adapted to the potentialities of the Internet. To do so, we draw on theories from the Communication field and examine their applicability in a virtual environment context. We define a multi-agent architecture to support the offer of these services, as well as a software and hardware platform to support experiments using Mixed Reality. Finally, we present the experiments, products and results obtained

Relevance:

20.00%

Publisher:

Abstract:

Spacecraft move at high speeds and undergo abrupt changes in acceleration, so an onboard GPS receiver can only calculate navigation solutions if the Doppler effect is taken into consideration during the acquisition and tracking of the satellite signals. For a receiver subject to such dynamics to cope with the frequency shifts resulting from this effect, it is imperative to adjust its acquisition bandwidth and increase its tracking loop to a higher order. This work presents the changes made to the software of the GPS Orion, an open-architecture receiver produced by GEC Plessey Semiconductors (nowadays Zarlink), in order to make it able to generate navigation fixes for vehicles under high dynamics, especially Low Earth Orbit satellites. The GPS Architect development system, sold by the same company, supported the modifications. The work also presents the GPS Monitor Aerospace, a computational tool developed to monitor, through graphics, the navigation fixes calculated by the receiver. Although it was not possible to simulate the software modifications under high dynamics, the receiver worked correctly in stationary tests, which was also verified in the new interface. Finally, the work presents the results of the GPS Receiver for Aerospace Applications experiment, obtained with the receiver's participation in a suborbital mission, Operation Maracati 2, in December 2010, using a digital second-order carrier tracking loop. Although an incident moments before the launch hindered the effective navigation of the receiver, the experiment worked properly, acquiring new satellites and tracking them during the VSB-30 rocket flight.
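The abstract mentions a digital second-order carrier tracking loop. The sketch below is a generic illustration of such a loop (an arctangent phase discriminator driving a proportional-plus-integrator filter, with gains derived from the usual noise-bandwidth parameterization); it is not the Orion receiver's actual code, and all names and numeric values are illustrative.

```python
import math

def make_second_order_pll(bn=15.0, zeta=0.707, dt=0.001):
    """Digital second-order carrier tracking loop (proportional + integrator).

    bn: loop noise bandwidth in Hz, zeta: damping ratio, dt: update period in s.
    The integrator lets the loop track a constant Doppler offset with zero
    steady-state phase error, which a first-order loop cannot do.
    """
    wn = 8.0 * zeta * bn / (4.0 * zeta ** 2 + 1.0)  # natural frequency, rad/s
    kp = 2.0 * zeta * wn                            # proportional gain
    ki = wn * wn                                    # integrator gain
    state = {"freq": 0.0, "phase": 0.0}             # rad/s, rad

    def step(i, q):
        """Advance the loop by one interval given the prompt I/Q samples."""
        err = math.atan2(q, i)                      # phase discriminator, rad
        state["freq"] += ki * err * dt              # integrator absorbs Doppler
        state["phase"] += (state["freq"] + kp * err) * dt
        return state["phase"], state["freq"]

    return step
```

Feeding the loop I/Q samples of a carrier with a constant frequency offset shows the integrator converging to that offset, which is the property that makes the higher loop order necessary under dynamics.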

Relevance:

20.00%

Publisher:

Abstract:

Recent astronomical observations (involving type Ia supernovae, cosmic microwave background anisotropies and galaxy cluster probes) have provided strong evidence that the observed universe is described by an accelerating, flat model whose space-time properties can be represented by the Friedmann-Robertson-Walker (FRW) metric. However, the nature of the substance or mechanism behind the current cosmic acceleration remains unknown, and its determination constitutes a challenging problem for modern cosmology. In the general relativistic description, an accelerating regime is usually obtained by assuming the existence of an exotic energy component endowed with negative pressure, called dark energy, which is usually represented by a cosmological constant Λ associated with the vacuum energy density. All observational data available so far are in good agreement with the concordance ΛCDM cosmology. Nevertheless, such models are plagued with several problems, thereby inspiring many authors to propose alternative candidates in the relativistic context. In this thesis, a new kind of accelerating flat model with no dark energy and fully dominated by cold dark matter (CDM) is proposed. The number of CDM particles is not conserved, and the present accelerating stage is a consequence of the negative pressure describing the irreversible process of gravitational particle creation. In order to have a transition from a decelerating to an accelerating regime at low redshifts, the matter creation rate proposed here depends on two parameters (γ and β): the first identifies a constant term of the order of H0 and the second describes a time variation proportional to the Hubble parameter H(t). In this scenario, H0 does not need to be small in order to solve the age problem, and the transition happens even if there is no matter creation during the radiation era and part of the matter-dominated phase (when the β term is negligible).
As in flat ΛCDM scenarios, the dimming of distant type Ia supernovae can be fitted with just one free parameter, and the coincidence problem that plagues models driven by the cosmological constant is absent. The limits associated with the existence of the quasar APM 08279+5255, located at z = 3.91 and with an estimated age between 2 and 3 Gyr, are also investigated. In the simplest case (β = 0), the model is compatible with the existence of the quasar for γ > 0.56 if the age of the quasar is 2.0 Gyr; for 3 Gyr the derived limit is γ > 0.72. New limits for the formation redshift of the quasar are also established
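The two-parameter creation rate described in the abstract can be written schematically as follows. This is a hedged reconstruction from the abstract's own wording (a constant term of order H0 plus a term proportional to H), not necessarily the exact expression used in the thesis; the creation-pressure formula is the standard form for particle production in a pressureless (CDM) fluid:

```latex
\Gamma = 3\gamma H_0 + 3\beta H(t),
\qquad
p_c = -\frac{\rho\,\Gamma}{3H}
```

Here Γ is the particle creation rate, p_c the associated creation pressure, and ρ the CDM energy density; acceleration sets in once the negative p_c dominates the deceleration caused by ρ.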

Relevance:

20.00%

Publisher:

Abstract:

Coordenação de Aperfeiçoamento de Pessoal de Nível Superior

Relevance:

20.00%

Publisher:

Abstract:

Coordenação de Aperfeiçoamento de Pessoal de Nível Superior

Relevance:

20.00%

Publisher:

Abstract:

The stimulation of motor learning is an important component of rehabilitation, and the type of practice used is of basic importance to physiotherapy. Motor skills are among the most basic types of behavior that subjects must acquire throughout their lives, and observational learning is one way of acquiring them. Objective: This study aimed to compare the performance of post-stroke patients on a test of recognition of activities of daily living using self-controlled and externally determined practice frequencies. Intervention: Forty subjects were evaluated: 20 stroke patients (mean age 57.9 ± 6.7 years, schooling 6.7 ± 3.09 years and time since injury 23.4 ± 17.2 months) and 20 healthy subjects (mean age 55.4 ± 5.9 years and schooling 8 ± 3.7 years). All were evaluated for functional independence (FIM) and cognitive state (MMSE), and the patients were also evaluated for neurological state (NIHSS). All subjects then performed a test of recognition of activities of daily living (drinking water and speaking on the telephone) under self-controlled (PAUTO and CAUTO) and externally determined (P20 and C20) frequencies. The stroke subjects were also examined with a three-dimensional kinematic analysis system while drinking water. The statistical analysis used chi-square and Student's t tests. Results: There was no difference in the number of correct responses between the self-controlled and externally determined practice groups (p > 0.005), nor between the patient and control groups (p > 0.005). Patients' mean velocity (PAUTO: 141.1 mm/s; P20: 141.6 mm/s) and peak velocity (PAUTO: 652.1 mm/s; P20: 598.6 mm/s) were reduced, as were the elbow angles reached (PAUTO: 66.6° and 124.4°; P20: 66.3° and 128.5°, extension and flexion respectively), compared with the literature.
Conclusions: Performance on the test of recognition of activities of daily living was similar under self-controlled and externally determined frequencies, showing that both techniques may be used to stimulate motor learning in chronic patients after stroke

Relevance:

20.00%

Publisher:

Abstract:

Obesity directly affects functional capacity, diminishing the efficiency of the cardiovascular system and oxygen uptake (VO2). Field tests such as the Incremental Shuttle Walking Test (ISWT) and the Six-Minute Walk Test (6MWT) have been employed as alternatives to the Cardiopulmonary Exercise Test (CPX) for the functional assessment of conditions in which oxygen transport to the periphery is diminished. Nevertheless, data on the real-time metabolic response, and comparisons of that response among different maximal and submaximal tests, are lacking for obese subjects. Aim: to compare the cardiopulmonary and metabolic responses during the CPX, ISWT and 6MWT and to analyse the influence of adiposity markers on them in obese subjects. Material and methods: cross-sectional, prospective study. Obese subjects (BMI > 30 kg/m2; FVC > 80%) were assessed for clinical, anthropometric (BMI, body adiposity index (BAI), waist (WC), hip (HC) and neck (NC) circumferences) and spirometric (forced vital capacity (FVC), forced expiratory volume in the first second (FEV1), maximal voluntary ventilation (MVV)) variables. The subjects performed the tests in the sequence CPX, ISWT, 6MWT. Throughout the tests, the following variables were assessed breath by breath with a telemetry system (Cortex-Biophysik-Metamax3B): oxygen uptake at the peak of activity (VO2peak); carbon dioxide production (VCO2); expiratory volume (VE); ventilatory equivalents for VO2 (VE/VO2) and CO2 (VE/VCO2); respiratory exchange ratio (RER); and perceived effort (Borg 6-20). Results: 15 obese subjects (10 women), aged 39.4 ± 10.1 years and with normal spirometry (%FVC = 93.7 ± 9.7), finished all tests. Their BMI was 43.5 ± 6.6 kg/m2, with adiposity differing by sex (BAI = 50.0 ± 10.5% for women and 48.8 ± 16.9% for men). Differences in VO2 (mL/kg/min) and %VO2 were found between the CPX (18.6 ± 4.0) and the 6MWT (13.2 ± 2.5), but not between the CPX and the ISWT (15.4 ± 2.9). Agreement was found between the ISWT and the CPX for VO2peak (3.2 mL/kg/min; 95% CI -3.0 to 9.4) and %VO2 (16.4%). VCO2 (L/min) confirmed the similar production in the CPX (2.3 ± 1.0) and ISWT (1.7 ± 0.7) and the difference in the 6MWT (1.4 ± 0.6).
WC explained the response to the CPX and ISWT better than the other adiposity markers. Adiposity diminished the duration of the CPX by 3.2%. Conclusion: the ISWT promotes metabolic and cardiovascular responses similar to those of the CPX in obese subjects, suggesting that the ISWT could be a useful and reliable way to assess oxygen uptake and functional capacity in this population

Relevance:

20.00%

Publisher:

Abstract:

Highly emotional items are better remembered in emotional memory tasks than neutral items. An example of emotional items that benefit declarative memory processes are taboo words, which are subject to a conventional prohibition imposed by tradition or custom. The literature suggests that the stronger recollection of these words is due to emotional arousal, as well as to the fact that they form a cohesive semantic group, a positive additive effect. However, studies with semantic lists show that cohesion can have a negative interference effect, impairing memory. In two experiments, we analyzed the effects of arousal and semantic cohesion of taboo words on recognition tests, comparing them with two other word categories: words that are semantically related but carry no emotional arousal (semantic category) and neutral words with low semantic relatedness (objects). Our results indicate that cohesion interfered with test performance by increasing the number of false alarms. This effect was strongest for the semantic category words in both experiments, but it also appeared for the neutral and taboo words when both were explicitly treated as semantic categories through the test instructions in Experiment 2. Despite the impairment induced by semantic cohesion in both experiments, the taboo words were better discriminated than the others, a result that supports emotional arousal as the main factor behind the better recollection of emotional items in memory tests

Relevance:

20.00%

Publisher:

Abstract:

Some programs may have their input data specified by formalized context-free grammars. This formalization facilitates the use of tools that systematize and raise the quality of their test process. Among programs in this category, compilers were the first to use this kind of tool to automate their tests. In this work we present an approach for deriving tests from the formal description of a program's inputs. Sentences are generated taking into account the syntactic aspects defined by the input specification, i.e., the grammar. For optimization, coverage criteria are used to limit the quantity of tests without diminishing their quality; our approach uses these criteria to drive generation towards sentences that satisfy a specific coverage criterion. The approach is based on the Lua language, relying heavily on its coroutines and on dynamic construction of functions. With these resources, we propose a simple and compact implementation that can be optimized and controlled in different ways in order to satisfy the different implemented coverage criteria. To make the tool simpler to use, the EBNF notation was adopted for the specification of the inputs; its parser was specified in the Meta-Environment tool for rapid prototyping
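The core idea of generating sentences from a context-free grammar can be sketched as follows. The original work uses Lua coroutines; this Python sketch uses generators as the analogous mechanism, with a hypothetical toy grammar and a crude depth bound standing in for the coverage criteria described above.

```python
import itertools

# Hypothetical toy grammar: each nonterminal maps to a list of alternatives,
# and each alternative is a sequence of symbols (unknown names are terminals).
GRAMMAR = {
    "expr": [["term"], ["term", "+", "expr"]],
    "term": [["num"], ["(", "expr", ")"]],
    "num":  [["0"], ["1"]],
}

def sentences(symbol, depth):
    """Yield every sentence derivable from `symbol` within `depth` expansions."""
    if symbol not in GRAMMAR:           # terminal symbol: yield it as-is
        yield symbol
        return
    if depth == 0:                      # expansion budget exhausted
        return
    for alternative in GRAMMAR[symbol]:
        # expand each symbol of the alternative, then combine the results
        parts = [list(sentences(s, depth - 1)) for s in alternative]
        for combo in itertools.product(*parts):
            yield " ".join(combo)
```

A real implementation would replace the depth bound with the production- or derivation-coverage criteria the abstract mentions, pruning generation once a criterion is satisfied rather than enumerating everything.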

Relevance:

20.00%

Publisher:

Abstract:

Through the adoption of the software product line (SPL) approach, several benefits are achieved when compared to conventional development processes based on creating a single software system at a time. The process of developing an SPL differs from traditional software construction in that it has two essential phases: domain engineering, when the common and variable elements of the SPL are defined and implemented; and application engineering, when one or more applications (specific products) are derived by reusing the artifacts created in domain engineering. The test activity is also fundamental and aims to detect defects in the artifacts produced during SPL development. However, the characteristics of an SPL bring new challenges to this activity that must be considered. Several approaches have recently been proposed for the product line testing process, but they have proven limited, providing only general guidelines. In addition, there is a lack of tools to support variability management and the customization of automated test cases for SPLs. In this context, this dissertation proposes a systematic approach to software product line testing. The approach offers: (i) automated SPL test strategies to be applied in domain and application engineering; (ii) explicit guidelines to support the implementation and reuse of automated test cases at the unit, integration and system levels in domain and application engineering; and (iii) tool support for automating variability management and the customization of test cases. The approach is evaluated through its application to a software product line for web systems. The results show that the proposed approach can help developers deal with the challenges imposed by the characteristics of SPLs during the testing process
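The reuse of automated test cases across domain and application engineering can be illustrated with a small sketch. Everything here (the `ProductConfig` class, the "discount" feature, the checkout asset) is hypothetical and invented for illustration; the point is only that one domain-level test asset is customized per derived product by binding a variation point.

```python
import unittest

class ProductConfig:
    """Hypothetical product configuration: the features bound for one product."""
    def __init__(self, name, features):
        self.name = name
        self.features = set(features)

def make_checkout_total(config):
    """Domain-engineering asset with a 'discount' variation point."""
    def total(prices):
        amount = sum(prices)
        if "discount" in config.features:
            amount *= 0.9   # variant behavior, active only in some products
        return round(amount, 2)
    return total

def make_product_tests(config):
    """Domain-level test case, reused and customized for each derived product."""
    class CheckoutTest(unittest.TestCase):
        def test_total(self):
            total = make_checkout_total(config)
            expected = 9.0 if "discount" in config.features else 10.0
            self.assertEqual(total([4.0, 6.0]), expected)
    CheckoutTest.__name__ = f"CheckoutTest_{config.name}"
    return CheckoutTest

# Application engineering: derive two products, reusing the same test asset.
BasicTest = make_product_tests(ProductConfig("basic", []))
PremiumTest = make_product_tests(ProductConfig("premium", ["discount"]))
```

In a real SPL the configuration would come from a feature model rather than a hand-built object, but the reuse pattern (one test asset, many bound products) is the same.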

Relevance:

20.00%

Publisher:

Abstract:

Formal methods and software testing are tools to obtain and control software quality. When used together, they provide mechanisms for software specification, verification and error detection. Even though formal methods allow software to be mathematically verified, they are not enough to assure that a system is free of faults; thus, software testing techniques are necessary to complement the process of verification and validation of a system. Model-based testing techniques allow tests to be generated from other software artifacts such as specifications and abstract models. Using formal specifications as the basis for test creation, we can generate better-quality tests, because these specifications are usually precise and free of ambiguity. Fernanda Souza (2009) proposed a method to define test cases from B Method specifications. This method used information from the machine's invariant and the operation's precondition to define positive and negative test cases for an operation, using techniques based on equivalence class partitioning and boundary value analysis. However, the method proposed in 2009 was not automated and had conceptual deficiencies; for instance, it did not fit into a well-defined coverage criteria classification. We started our work with a case study that applied the method to an example of a B specification from industry, and from this case study we obtained insights to improve the method. In our work we evolved it, rewriting it and adding characteristics to make it compatible with a test classification used by the community. We also improved the method to support specifications structured in different components, to use information from the operation's behavior in the test case generation process, and to use new coverage criteria. In addition, we implemented a tool to automate the method and submitted it to more complex case studies
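The two techniques named above, equivalence class partitioning and boundary value analysis, can be sketched together for the common case of a precondition that bounds a numeric input. The function and the example precondition below are hypothetical, not taken from the method or from any B specification:

```python
def partition_tests(lower, upper):
    """Derive test inputs from a precondition of the form lower <= x <= upper.

    Positive tests come from the valid equivalence class, including both
    boundaries; negative tests sit just outside each boundary, i.e. in the
    invalid classes, in the spirit of boundary value analysis.
    """
    positive = [lower, (lower + upper) // 2, upper]   # valid class + boundaries
    negative = [lower - 1, upper + 1]                 # one value per invalid class
    return positive, negative

# e.g. a hypothetical operation whose precondition is 0 <= seats <= 100
pos, neg = partition_tests(0, 100)
```

The positive values exercise the operation where its precondition holds; the negative values are the ones a robustness-oriented test suite feeds in expecting rejection.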

Relevance:

20.00%

Publisher:

Abstract:

Automation is an important activity of the testing process and can significantly reduce development time and cost. Some tools have been proposed to automate the execution of acceptance tests in Web applications. However, most of them have important limitations, such as the need for manual valuation of test cases, refactoring of the generated code, and strong dependence on the structure of the HTML pages. In this work, we present a test specification language and a tool conceived to minimize the impact of these limitations. The proposed language supports equivalence class criteria, and the tool, developed as a plug-in for the Eclipse platform, allows the generation of test cases through different combination strategies. To evaluate the approach, we used one of the modules of the Sistema Unificado de Administração Pública (SUAP) of the Instituto Federal do Rio Grande do Norte (IFRN). Systems analysts and a computer technician who work as developers of that system participated in the evaluation.

Relevance:

20.00%

Publisher:

Abstract:

Automation has become increasingly necessary during the software testing process due to the high cost and time associated with this activity. Some tools have been proposed to automate the execution of acceptance tests in Web applications. However, many of them have important limitations, such as strong dependence on the structure of the HTML pages and the need for manual valuation of the test cases. In this work, we present a language for specifying acceptance test scenarios for Web applications, called IFL4TCG, and a tool that allows the generation of test cases from these scenarios. The proposed language supports the Equivalence Class Partitioning criterion, and the tool allows the generation of test cases that follow different combination strategies (i.e., Each-Choice, Base-Choice and All Combinations). In order to evaluate the effectiveness of the proposed solution, we used the language and the associated tool for designing and executing acceptance tests on a module of the Sistema Unificado de Administração Pública (SUAP) of the Instituto Federal do Rio Grande do Norte (IFRN). Four systems analysts and one computer technician, who work as developers of that system, participated in the evaluation. Preliminary results showed that IFL4TCG can actually help to detect defects in Web applications
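Two of the combination strategies named in the abstract, Each-Choice and All Combinations, can be sketched over a parameter model. The parameter names and values below are hypothetical; the strategies themselves are the standard ones from combinatorial testing (Base-Choice, which needs a designated base value per parameter, is omitted for brevity).

```python
import itertools

def all_combinations(parameters):
    """All Combinations: one test case per element of the full cross product."""
    names = list(parameters)
    return [dict(zip(names, combo))
            for combo in itertools.product(*(parameters[n] for n in names))]

def each_choice(parameters):
    """Each-Choice: every value of every parameter appears in at least one
    test case; shorter value lists are cycled so each row is complete."""
    names = list(parameters)
    width = max(len(values) for values in parameters.values())
    return [{n: parameters[n][i % len(parameters[n])] for n in names}
            for i in range(width)]

# Hypothetical parameter model for a web acceptance test
params = {"browser": ["firefox", "chrome"], "user": ["admin", "guest", "anon"]}
```

For this model, All Combinations yields 2 x 3 = 6 test cases while Each-Choice needs only 3, which is exactly the size/coverage trade-off a tester chooses between.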

Relevance:

20.00%

Publisher:

Abstract:

Checking the conformity between implementation and design rules in a system is an important activity for ensuring that no degradation occurs between the architectural patterns defined for the system and what is actually implemented in the source code. Especially for systems that require a high level of reliability, it is important to define specific design rules for exceptional behavior. Such rules describe how exceptions should flow through the system by defining which elements are responsible for catching exceptions thrown by other system elements. However, current approaches to automatically checking design rules do not provide suitable mechanisms to define and verify rules related to the exception-handling policy of applications. This work proposes a practical approach to preserving the exceptional behavior of an application or family of applications, based on the definition and automatic runtime checking of design rules for the exception handling of systems developed in Java or AspectJ. To support this approach, a tool called VITTAE (Verification and Information Tool to Analyze Exceptions) was developed in the context of this work; it extends the JUnit framework and automates test activities for exceptional design rules. We conducted a case study with the primary objective of evaluating the effectiveness of the proposed approach on a software product line. In addition, an experiment was conducted to carry out a comparative analysis between the proposed approach and an approach based on a tool called JUnitE, which also proposes testing exception-handling code using JUnit tests. The results showed how exception-handling design rules evolve along different versions of a system and that VITTAE can aid in the detection of defects in exception-handling code
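The kind of rule described above ("only element X may catch exceptions of type T") can be checked at runtime, which is the general idea behind tools like the one described; the approach itself targets Java/AspectJ and JUnit, so the Python sketch below is only an analogy, and every name in it (`DataError`, `service_layer`, the rule table) is hypothetical.

```python
# Hypothetical design rule: only the handler named 'service_layer' may catch
# DataError; catching it anywhere else is a design-rule violation.
class DataError(Exception):
    pass

DESIGN_RULES = {DataError: "service_layer"}  # exception type -> allowed handler

def checked_handler(name):
    """Decorator marking a function as an exception-handling site and
    verifying at runtime that it may catch the exceptions it intercepts."""
    def wrap(fn):
        def inner(*args, **kwargs):
            try:
                return fn(*args, **kwargs)
            except tuple(DESIGN_RULES) as exc:
                allowed = DESIGN_RULES[type(exc)]
                if name != allowed:
                    raise AssertionError(
                        f"design-rule violation: {type(exc).__name__} "
                        f"caught in {name!r}, only {allowed!r} may handle it")
                return f"handled by {name}"
        return inner
    return wrap

@checked_handler("service_layer")
def service_call():
    raise DataError("connection lost")   # caught by its designated handler

@checked_handler("ui_layer")
def ui_call():
    raise DataError("connection lost")   # caught where the rule forbids it
```

Calling `service_call()` succeeds, while `ui_call()` turns the forbidden catch into a test failure, mirroring how a rule violation surfaces as a failing JUnit test.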