992 results for Testing programs
Abstract:
This study compared the relative effectiveness of two computerized remedial reading programs in improving the reading word recognition, rate, and comprehension of adolescent readers demonstrating significant and longstanding reading difficulties. One of the programs was the Autoskill Component Reading Subskills Program, which provides instruction in isolated letters, syllables, and words, to a point of rapid automatic responding. This program also incorporates reading disability subtypes in its approach. The second program, Read It Again, Sam, delivers a repeated reading strategy. The study also examined the feasibility of using peer tutors in association with these two programs. Grade 9 students at a secondary vocational school who satisfied specific criteria with respect to cognitive and reading ability participated. Eighteen students were randomly assigned to three matched groups, based on prior screening on a battery of reading achievement tests. Two of the groups received training with one of the computer programs; the third group acted as a control and received the remedial reading program offered within the regular classroom. The groups met daily with a trained tutor for approximately 35 minutes, and were required to accumulate twenty hours of instruction. At the conclusion of the program, the pretest battery was repeated. No significant differences were found in the treatment effects of the two computer groups. Each of the two treatment groups was able to effect significantly improved reading word recognition and rate, relative to the control group. Comprehension gains were modest. The treatment groups demonstrated a significant gain, relative to the control group, on one of the three comprehension measures; only trends toward a gain were noted on the remaining two measures. The tutoring partnership appeared to be a viable alternative for the teacher seeking to provide individualized computerized remedial programs for adolescent unskilled readers. Both programs took advantage of computer technology in providing individualized drill and practice, instant feedback, and ongoing recordkeeping. With a few cautions, each of these programs was considered effective and practical for use with adolescent unskilled readers.
Abstract:
Case study
Abstract:
In this session we look at the sorts of errors that occur in programs, and how we can use different testing and debugging strategies (such as unit testing and inspection) to track them down. We also look at error handling within the program and at how we can use exceptions to manage errors in a more sophisticated way. These slides are based on Chapter 6 of the book 'Objects First with BlueJ'.
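As a minimal illustration of the two ideas together (an invented example, not one from the book): a method that signals a bad request by throwing an exception, and a unit test — JUnit 5 here, assumed available; BlueJ's built-in test support is similar — that pins down that error-handling contract.

```java
import static org.junit.jupiter.api.Assertions.assertThrows;

import org.junit.jupiter.api.Test;

// Invented example class: withdraw() reports a bad request by throwing
// an exception instead of silently corrupting the balance.
class Account {
    private int balance = 50;

    void withdraw(int amount) {
        if (amount < 0 || amount > balance) {
            throw new IllegalArgumentException("invalid withdrawal: " + amount);
        }
        balance -= amount;
    }
}

// A unit test pinning down that error-handling contract.
class AccountTest {
    @Test
    void withdrawRejectsOverdraft() {
        assertThrows(IllegalArgumentException.class,
                     () -> new Account().withdraw(100));
    }
}
```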
Abstract:
Aspect-oriented programming (AOP) is a promising technology that supports separation of crosscutting concerns (i.e., functionality that tends to be tangled with, and scattered through, the rest of the system). In AOP, a method-like construct named advice is applied to join points in the system through a special construct named pointcut. This mechanism supports the modularization of crosscutting behavior; however, since the added interactions are not explicit in the source code, it is hard to ensure their correctness. To tackle this problem, this paper presents a rigorous coverage analysis approach to ensure exercising the logic of each advice - statements, branches, and def-use pairs - at each affected join point. To make this analysis possible, a structural model based on Java bytecode - called Pointcut-based Def-Use Graph (PCDU) - is proposed, along with three integration testing criteria. Theoretical, empirical, and exploratory studies involving 12 aspect-oriented programs and several fault examples present evidence of the feasibility and effectiveness of the proposed approach.
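To make the terminology concrete, here is a minimal annotation-style AspectJ sketch (illustrative only; the aspect, package, and method names are invented): a pointcut selecting join points, and a before-advice whose statements, branches, and def-use pairs are what the proposed criteria require to be exercised at each affected join point.

```java
import org.aspectj.lang.JoinPoint;
import org.aspectj.lang.annotation.Aspect;
import org.aspectj.lang.annotation.Before;
import org.aspectj.lang.annotation.Pointcut;

@Aspect
public class TracingAspect {
    // Pointcut: selects the join points -- here, executions of any public
    // method in a hypothetical service package.
    @Pointcut("execution(public * com.example.service.*.*(..))")
    void serviceCalls() {}

    // Advice: runs before every join point the pointcut selects. Because this
    // interaction is not visible in the service source code, the coverage
    // criteria ask whether its logic is exercised at each affected join point.
    @Before("serviceCalls()")
    public void trace(JoinPoint jp) {
        System.out.println("entering " + jp.getSignature());
    }
}
```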
Abstract:
Nursing school graduates are under pressure to pass the NCLEX-RN exam on the first attempt, since New York State monitors the results and uses them to evaluate the school's nursing programs. Since the NCLEX-RN is a standardized test, we sought a method to make our students better test takers. The use of online computer adaptive testing has raised our students' standardized test scores at the end of the nursing course.
Abstract:
While the simulation of flood risks originating from the overtopping of river banks is well covered within continuously evaluated programs to improve flood protection measures, flash flooding is not. Flash floods are triggered by short, local thunderstorm cells with high precipitation intensities. Small catchments have short response times and flow paths, and convective thunder cells may result in potential flooding of endangered settlements. Assessing local flooding and flood pathways requires a detailed hydraulic simulation of the surface runoff. Hydrological models usually do not represent surface runoff at this level of detail; instead, empirical equations are applied for runoff detention. Conversely, 2D hydrodynamic models usually neither accept distributed rainfall as input nor implement the kinds of soil/surface interaction found in hydrological models. In light of several cases of local flash flooding during recent years, the issue emerged both for practical reasons and as a research topic: closing the model gap between distributed rainfall and distributed runoff formation. Therefore, a 2D hydrodynamic model based on the depth-averaged flow equations with a finite-volume discretization was extended to accept direct rainfall, enabling simulation of the associated runoff formation. The model itself is used as the numerical engine; rainfall is introduced via the modification of water levels at fixed time intervals. The paper not only deals with the general application of the software but also tests the numerical stability and reliability of the simulation results. The tests use different artificial as well as measured rainfall series as input. Key parameters of the simulation, such as losses, roughness, or the time interval for water level manipulation, are tested regarding their impact on stability.
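As a schematic sketch of the coupling idea — not the authors' code; all names and the lumped loss model are invented for illustration — rainfall can be injected into a 2D grid by raising each cell's water level at a fixed interval, with losses subtracted first:

```java
// Schematic sketch only -- all names invented. Rainfall is injected into a
// 2D grid by raising each cell's water level at a fixed interval, after
// subtracting losses (infiltration, interception) from the intensity.
public class RainfallStep {
    /**
     * @param waterLevel    water surface elevation per cell [m]
     * @param rainIntensity rainfall intensity per cell [m/s]
     * @param lossRate      lumped loss rate [m/s]
     * @param dtSeconds     interval between water level manipulations [s]
     */
    static void applyRainfall(double[][] waterLevel, double[][] rainIntensity,
                              double lossRate, double dtSeconds) {
        for (int i = 0; i < waterLevel.length; i++) {
            for (int j = 0; j < waterLevel[i].length; j++) {
                // Net depth added this interval; losses cannot exceed rainfall.
                double net = Math.max(0.0, rainIntensity[i][j] - lossRate);
                waterLevel[i][j] += net * dtSeconds;
            }
        }
        // The hydrodynamic engine then routes the added volume as runoff.
    }
}
```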
Abstract:
PLCs (Programmable Logic Controllers) perform control operations: they receive information from the environment, process it, and modify that same environment according to the results produced. They are commonly used in industry in several applications, from mass transport to the petroleum industry. As the complexity of these applications increases, and since many of them are safety-critical, the need arises to ensure that they are reliable. Testing and simulation are the de facto methods used in industry to do so, but they can leave flaws undiscovered. Formal methods can provide more confidence in an application's safety, since they permit its mathematical verification. We make use of the B Method, which has been successfully applied in the formal verification of industrial systems, is supported by several tools, and can handle decomposition, refinement, and verification of correctness according to the specification. The method we developed and present in this work automatically generates B models from PLC programs and verifies them against safety constraints manually derived from the system requirements. The scope of our method is the PLC programming languages defined in the IEC 61131-3 standard, although we are also able to verify programs not fully compliant with the standard.
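As a toy illustration of the kind of property involved — not the paper's B-based toolchain, and the interlock is invented — consider a one-scan interlock and an exhaustive check of the safety constraint "the press never closes while the guard is open", feasible here because the inputs are finite:

```java
// Toy illustration, not the paper's B-based toolchain: a one-scan interlock
// and an exhaustive check of the safety constraint "the press never closes
// while the guard is open" -- feasible here because the inputs are finite.
public class InterlockCheck {
    // One PLC-style scan: compute the output from the current inputs.
    static boolean pressClosed(boolean startButton, boolean guardClosed) {
        return startButton && guardClosed;
    }

    public static void main(String[] args) {
        for (boolean start : new boolean[] {false, true}) {
            for (boolean guard : new boolean[] {false, true}) {
                if (pressClosed(start, guard) && !guard) {
                    throw new AssertionError("safety constraint violated");
                }
            }
        }
        System.out.println("safety constraint holds for all inputs");
    }
}
```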
Abstract:
We sought to evaluate the performance of diagnostic tools to establish an affordable setting for early detection of cervical cancer in developing countries. We compared the performance and feasibility of different screening tests in a cohort of over 12,000 women: conventional Pap smear, liquid-based cytology, visual inspection with acetic acid (VIA), visual inspection with iodine solution (VILI), cervicography, screening colposcopy, and high-risk human papillomavirus (HR-HPV) testing collected by a physician and by self-sampling. The HR-HPV assay collected by the physician had the highest sensitivity (80%) but a high rate of unnecessary referrals to colposcopy (15.1%). The HR-HPV test with self-sampling had a markedly lower sensitivity (57.1%). VIA, VILI, and cervicography had poor sensitivity (47.4%, 55%, and 28.6%, respectively). Colposcopy presented a sensitivity of 100% in detecting CIN2+, but the lowest specificity (66.9%). Co-testing the Pap test with VIA and with VILI increased the sensitivity of the stand-alone Pap test from 71.6% to 87.1% and to 95%, respectively, but with a high number of unnecessary colposcopies. Co-testing with HR-HPV markedly increased the sensitivity of the Pap test (to 86%), but also with a high number of unnecessary colposcopies (17.5%). Molecular tests adjunct to the Pap test seem a realistic option to improve the detection of high-grade lesions in population-based screening programs.
Abstract:
Public health strategies to reduce cardiovascular morbidity and mortality should focus on global cardiometabolic risk reduction. The efficacy of lifestyle changes to prevent type 2 diabetes has been demonstrated, but low-cost interventions to reduce cardiometabolic risk in Latin America have rarely been reported. Our group developed two programs to promote the health of high-risk individuals attending a primary care center in Brazil. This study compared the effects of two 9-month lifestyle interventions on cardiometabolic parameters: one based on medical consultations (traditional) and another with 13 multi-professional group sessions in addition to the medical consultations (intensive). Adults were eligible if they had pre-diabetes (according to the American Diabetes Association) and/or metabolic syndrome (International Diabetes Federation criteria for Latin America). Data were expressed as means and standard deviations or percentages and compared between groups or testing visits. A p-value < 0.05 was considered significant. Results: 180 individuals agreed to participate (35.0% men, mean age 54.7 ± 12.3 years, 86.1% overweight or obese); 83 were allocated to the traditional and 97 to the intensive program. Both interventions reduced body mass index, waist circumference, and tumor necrosis factor-α. Only the intensive program reduced 2-hour plasma glucose and blood pressure and increased adiponectin values, but HDL-cholesterol increased only in the traditional program. Responses were also better in the intensive than in the traditional program in terms of blood pressure and adiponectin improvements. No new case of diabetes was detected in the intensive program, but 3 cases of diabetes and one myocardial infarction occurred in the traditional program. Both programs induced metabolic improvement in the short term, but whether the better results in the intensive program are due to greater risk awareness and self-motivation deserves further investigation. In conclusion, these low-cost interventions are able to minimize cardiometabolic risk factors involved in the progression to type 2 diabetes and/or cardiovascular disease.
Abstract:
The breeding program for beef cattle in Japan has changed dramatically over 4 decades. Visual judging was done initially, but progeny testing in test stations began in 1968. In the 1980s, the genetic evaluation program using field records, so-called on-farm progeny testing, was first adopted in Oita, Hyogo, and Kumamoto prefectures. In this study, genetic trends for carcass traits in these 3 Wagyu populations were estimated, and genetic gains per year were compared among the 3 different beef cattle breeding programs. The field carcass records used were collected between 1988 and 2003. The traits analyzed were carcass weight, LM area, rib thickness, s.c. fat thickness, and beef marbling standard number. The average breeding values of reproducing dams born the same year were used to estimate the genetic trends for the carcass traits. For comparison of the 3 breeding programs, birth years of the dams were divided into 3 periods reflecting each program. Positive genetic trends for beef marbling standard number were clearly shown in all populations. The genetic gains per year for all carcass traits were significantly enhanced by adopting the on-farm progeny testing program. These results indicate that the on-farm progeny testing program with BLUP is a very powerful approach for genetic improvement of carcass traits in Japanese Wagyu beef cattle.
Abstract:
A tandem mass spectral database system consists of a library of reference spectra and a search program. State-of-the-art search programs show a high tolerance for variability in compound-specific fragmentation patterns produced by collision-induced decomposition and enable sensitive and specific 'identity search'. In this communication, the performance characteristics of two search algorithms combined with the 'Wiley Registry of Tandem Mass Spectral Data, MSforID' (Wiley Registry MSMS; John Wiley and Sons, Hoboken, NJ, USA) were evaluated. The search algorithms tested were the MSMS search algorithm implemented in the NIST MS Search program 2.0g (NIST, Gaithersburg, MD, USA) and the MSforID algorithm (John Wiley and Sons, Hoboken, NJ, USA). Sample spectra were acquired on different instruments, thus covering a broad range of possible experimental conditions, or were generated in silico. For each algorithm, more than 30,000 matches were performed. Statistical evaluation of the library search results revealed that, in principle, both search algorithms can be combined with the Wiley Registry MSMS to create a reliable identification tool. It appears, however, that a higher degree of spectral similarity is necessary to obtain a correct match with the NIST MS Search program. This characteristic of the NIST MS Search program has a positive effect on specificity, as it helps to avoid false positive matches (type I errors), but reduces sensitivity. Thus, particularly with sample spectra acquired on instruments whose setup differs from tandem-in-space-type fragmentation, a comparatively higher number of false negative matches (type II errors) was observed when searching the Wiley Registry MSMS.
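For orientation only — this is neither the NIST nor the MSforID algorithm, and the peak binning is an assumption — a basic identity search can score a sample spectrum against a library spectrum with a normalized dot product, the match threshold then setting the sensitivity/specificity trade-off described above:

```java
import java.util.Map;

// Illustration only -- neither the NIST nor the MSforID algorithm. A basic
// identity search scores a sample spectrum against a library spectrum with
// a normalized dot product over peaks binned to unit m/z (bin -> intensity).
public class SpectralMatch {
    static double similarity(Map<Integer, Double> sample,
                             Map<Integer, Double> library) {
        double dot = 0, normSample = 0, normLibrary = 0;
        for (Map.Entry<Integer, Double> peak : sample.entrySet()) {
            Double lib = library.get(peak.getKey());
            if (lib != null) {
                dot += peak.getValue() * lib; // shared m/z bins contribute
            }
            normSample += peak.getValue() * peak.getValue();
        }
        for (double v : library.values()) {
            normLibrary += v * v;
        }
        return (normSample == 0 || normLibrary == 0)
                ? 0.0
                : dot / Math.sqrt(normSample * normLibrary);
    }
}
```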
Abstract:
We ran experimental interventions to promote HIV tests in a large firm in South Africa, combining HIV tests with existing medical check-up programs to increase uptake. In a survey we undertook previously, fears and stigma around HIV/AIDS were suggested as the primary reasons employees gave for not taking the test. To counter these, we implemented randomized interventions. We find substantial heterogeneity in responses by ethnicity: African and Colored employees rejected the tests most often. Supportive information increased uptake by 6 to 16 percentage points. A trade-off in targeting, between stigmatizing the targeted group and reducing exclusion error, is discussed.
Abstract:
We propose an analysis for detecting procedures and goals that are deterministic (i.e., that produce at most one solution at most once), or predicates whose clause tests are mutually exclusive (which implies that at most one of their clauses will succeed) even if they are not deterministic. The analysis takes advantage of the pruning operator in order to improve the detection of mutual exclusion and determinacy. It also supports arithmetic equations and disequations, as well as equations and disequations on terms, for which we give a complete satisfiability testing algorithm, w.r.t. available type information. Information about determinacy can be used for program debugging and optimization, resource consumption and granularity control, abstraction carrying code, etc. We have implemented the analysis and integrated it in the CiaoPP system, which also infers automatically the mode and type information that our analysis takes as input. Experiments performed on this implementation show that the analysis is fairly accurate and efficient.
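As a toy sketch of the mutual-exclusion part (in Java rather than a logic language; not the CiaoPP implementation): two clause guards over one integer variable, each an interval constraint, exclude each other exactly when the intervals are disjoint, so at most one of the corresponding clauses can succeed.

```java
// Toy sketch in Java (not the CiaoPP implementation): clause guards over one
// integer variable represented as interval constraints; the guards are
// mutually exclusive exactly when their intervals are disjoint.
public class MutualExclusion {
    record Interval(long lo, long hi) {
        boolean intersects(Interval other) {
            return Math.max(lo, other.lo) <= Math.min(hi, other.hi);
        }
    }

    public static void main(String[] args) {
        Interval positive = new Interval(1, Long.MAX_VALUE);     // guard: X > 0
        Interval nonPositive = new Interval(Long.MIN_VALUE, 0);  // guard: X =< 0
        System.out.println("mutually exclusive: "
                + !positive.intersects(nonPositive));            // prints true
    }
}
```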
Abstract:
Some verification and validation techniques have been evaluated both theoretically and empirically. Most empirical studies have been conducted without subjects, passing over any effect testers have when they apply the techniques. We have run an experiment with students to evaluate the effectiveness of three verification and validation techniques (equivalence partitioning, branch testing, and code reading by stepwise abstraction). We have studied how well the techniques are able to reveal defects in three programs. We have replicated the experiment eight times at different sites. Our results show that equivalence partitioning and branch testing are equally effective and better than code reading by stepwise abstraction. The effectiveness of code reading by stepwise abstraction varies significantly from program to program. Finally, we have identified project contextual variables that should be considered when applying any verification and validation technique or choosing one particular technique.
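To make the first two techniques concrete, here is a small invented example (JUnit 5 assumed; the method under test is not from the experiment): equivalence partitioning derives one test per input class, and here those three tests also exercise every branch outcome, which is what branch testing demands.

```java
import static org.junit.jupiter.api.Assertions.assertEquals;

import org.junit.jupiter.api.Test;

// Invented method under test plus JUnit 5 tests. Each test represents one
// equivalence class of the input; together they also cover every branch.
class GradeTest {
    static String grade(int score) {
        if (score < 0 || score > 100) return "invalid"; // out-of-range class
        if (score >= 60) return "pass";                 // passing class
        return "fail";                                  // failing class
    }

    @Test void invalidPartition() { assertEquals("invalid", grade(-5)); }
    @Test void passPartition()    { assertEquals("pass", grade(75)); }
    @Test void failPartition()    { assertEquals("fail", grade(30)); }
}
```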