914 results for Testing and Debugging
Abstract:
In this session we look at the sorts of errors that occur in programs, and at how different testing and debugging strategies (such as unit testing and inspection) can be used to track them down. We also look at error handling within the program, and at how exceptions can be used to manage errors in a more sophisticated way. These slides are based on Chapter 6 of the book 'Objects First with BlueJ'.
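The error-handling idea in this abstract can be sketched in a few lines. The book's own examples use Java and BlueJ; the sketch below uses Python's `unittest` instead, with a hypothetical `Account` class, to show how a unit test can document both the normal behaviour of a method and its exception-based error handling.

```python
import unittest

class InsufficientFundsError(Exception):
    """Raised when a withdrawal exceeds the available balance."""

class Account:
    def __init__(self, balance=0):
        self.balance = balance

    def withdraw(self, amount):
        # Raise an exception instead of silently returning an error code.
        if amount > self.balance:
            raise InsufficientFundsError(f"cannot withdraw {amount}")
        self.balance -= amount
        return self.balance

class AccountTest(unittest.TestCase):
    def test_withdraw_reduces_balance(self):
        self.assertEqual(Account(100).withdraw(30), 70)

    def test_overdraw_raises(self):
        # The unit test documents the error-handling contract.
        with self.assertRaises(InsufficientFundsError):
            Account(10).withdraw(50)
```

The second test makes the failure mode part of the tested specification: a change that silently allows overdrawing would break the test suite, not just the program.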
Abstract:
In this work, a WSN support tool for developing, testing, monitoring and debugging new application prototypes in a reliable and robust way is proposed. It combines a hardware-software integration platform with a parallel communication channel that lets users interact with the experiments at runtime without interfering with the operation of the wireless network. As a pre-deployment tool, it allows prototypes to be validated in a real environment before they are implemented in the final application, aiming to increase the effectiveness and efficiency of the technology. This infrastructure supports CookieLab: a WSN testbed based on the Cookie Nodes Platform.
Abstract:
For dynamic simulations to be credible, verification of the computer code must be an integral part of the modelling process. This two-part paper describes a novel approach to verification through program testing and debugging. In Part 1, a methodology is presented for detecting and isolating coding errors using back-to-back testing. Residuals are generated by comparing the output of two independent implementations, in response to identical inputs. The key feature of the methodology is that a specially modified observer is created using one of the implementations, so as to impose an error-dependent structure on these residuals. Each error can be associated with a fixed and known subspace, permitting errors to be isolated to specific equations in the code. It is shown that the geometric properties extend to multiple errors in either one of the two implementations. Copyright (C) 2003 John Wiley & Sons, Ltd.
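The back-to-back idea (identical inputs driven through two independent implementations, residuals formed from their outputs) can be illustrated with a minimal sketch. The two toy models below are hypothetical stand-ins for the independently coded simulations, not the paper's observer construction; a coding error in either one would show up as nonzero residuals.

```python
def model_a(x):
    # Reference implementation of the simulated dynamics (hypothetical).
    return 3 * x * x + 2 * x + 1

def model_b(x):
    # Independent re-implementation of the same equations; a coding
    # error here would produce nonzero residuals below.
    return 3 * x ** 2 + 2 * x + 1

def residuals(inputs):
    # Back-to-back test: drive both implementations with identical
    # inputs and compare their outputs pointwise.
    return [model_a(x) - model_b(x) for x in inputs]

print(residuals([0, 1, 2, 3]))  # all zeros when the implementations agree
```

In the paper, the residuals are additionally shaped by a modified observer so that each error type leaves a recognizable geometric signature; the sketch only shows the raw comparison step.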
Abstract:
In Part 1 of this paper a methodology for back-to-back testing of simulation software was described. Residuals with error-dependent geometric properties were generated. A set of potential coding errors was enumerated, along with a corresponding set of feature matrices, which describe the geometric properties imposed on the residuals by each of the errors. In this part of the paper, an algorithm is developed to isolate the coding errors present by analysing the residuals. A set of errors is isolated when the subspace spanned by their combined feature matrices corresponds to that of the residuals. Individual feature matrices are compared to the residuals and classified as 'definite', 'possible' or 'impossible'. The status of 'possible' errors is resolved using a dynamic subset testing algorithm. To demonstrate and validate the testing methodology presented in Part 1 and the isolation algorithm presented in Part 2, a case study is presented using a model for biological wastewater treatment. Both single and simultaneous errors that are deliberately introduced into the simulation code are correctly detected and isolated. Copyright (C) 2003 John Wiley & Sons, Ltd.
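The isolation step matches the residuals against the known subspace associated with each candidate error. As a toy illustration (one-dimensional feature subspaces and invented vectors, not the paper's actual feature matrices or its 'definite'/subset-testing machinery), a residual can be classified as 'possible' for an error when it lies in that error's subspace, and 'impossible' otherwise:

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def in_span(r, f, tol=1e-9):
    # r lies in the 1-D subspace spanned by f iff r and f are collinear,
    # i.e. (r.f)^2 == (r.r)(f.f) (equality case of Cauchy-Schwarz).
    return abs(dot(r, f) ** 2 - dot(r, r) * dot(f, f)) < tol

# Hypothetical residual and per-error feature vectors for illustration.
residual = [2.0, 4.0, 6.0]
features = {
    "error_in_eq1": [1.0, 2.0, 3.0],
    "error_in_eq2": [1.0, 0.0, 0.0],
}

verdict = {name: ("possible" if in_span(residual, f) else "impossible")
           for name, f in features.items()}
print(verdict)
```

The real algorithm works with multi-column feature matrices and resolves remaining 'possible' candidates by dynamic subset testing; the collinearity check above only conveys the geometric idea.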
Abstract:
We have designed and implemented a framework that unifies unit testing and run-time verification (as well as static verification and static debugging). A key contribution of our approach is that a unified assertion language is used for all of these tasks. We first propose methods for compiling runtime checks for (parts of) assertions which cannot be verified at compile-time via program transformation. This transformation allows checking preconditions and postconditions, including conditional postconditions, properties at arbitrary program points, and certain computational properties. The implemented transformation includes several optimizations to reduce run-time overhead. We also propose a minimal addition to the assertion language which allows defining unit tests to be run in order to detect possible violations of the (partial) specifications expressed by the assertions. This language can express for example the input data for performing the unit tests or the number of times that the unit tests should be repeated. We have implemented the framework within the Ciao/CiaoPP system and effectively applied it to the verification of ISO-prolog compliance and to the detection of different types of bugs in the Ciao system source code. Several experimental results are presented that illustrate different trade-offs among program size, running time, or levels of verbosity of the messages shown to the user.
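The run-time checking of assertions that cannot be verified at compile time can be sketched outside of Ciao. The decorator below is an illustrative Python analogue, not CiaoPP's assertion language or its program transformation: preconditions and postconditions are compiled into wrapper code around the function, so violations of the partial specification surface as run-time errors.

```python
import functools

def contract(pre=None, post=None):
    # Turn declarative pre/postconditions into run-time checks by
    # wrapping the function (a sketch of the transformation idea).
    def wrap(fn):
        @functools.wraps(fn)
        def checked(*args, **kwargs):
            if pre is not None:
                assert pre(*args, **kwargs), f"precondition of {fn.__name__} failed"
            result = fn(*args, **kwargs)
            if post is not None:
                assert post(result), f"postcondition of {fn.__name__} failed"
            return result
        return checked
    return wrap

@contract(pre=lambda x: x >= 0, post=lambda r: r >= 0)
def isqrt(x):
    # Integer square root by simple search; the contract guards it.
    r = 0
    while (r + 1) * (r + 1) <= x:
        r += 1
    return r
```

Calling `isqrt(-1)` now fails at the call boundary with a named precondition violation, rather than looping forever or returning a meaningless result, which is the debugging benefit the framework's run-time checks provide.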
Abstract:
Human immunodeficiency virus (HIV) infection poses one of the greatest challenges to tuberculosis (TB) control, with TB killing more people with HIV infection than any other condition. The standards in this chapter cover provider-initiated HIV counselling and testing and the care of HIV-infected patients with TB. All TB patients who have not previously been diagnosed with HIV infection should be encouraged to have an HIV test. Failing to do so is to deny people access to the care and treatment they might need, especially in the context of the wider availability of treatments that prevent infections associated with HIV. A clearly defined plan of care for those found to be co-infected with TB and HIV should be in place, with procedures to ensure that the patient has access to this care, before routine testing for HIV is offered to persons with TB. It is acknowledged that people caring for TB patients should ensure that those who are HIV positive are transferred for appropriate ongoing care once their TB treatment has been completed. In some cases, referral for specialised HIV-related treatment and care may be necessary during treatment for TB. The aim of these standards is to enable patients to remain as healthy as possible, whatever their HIV status.
Abstract:
Computational models complement laboratory experimentation for efficient identification of MHC-binding peptides and T-cell epitopes. Methods for prediction of MHC-binding peptides include binding motifs, quantitative matrices, artificial neural networks, hidden Markov models, and molecular modelling. Models derived by these methods have been successfully used for prediction of T-cell epitopes in cancer, autoimmunity, infectious disease, and allergy. For maximum benefit, the use of computer models must be treated as experiments analogous to standard laboratory procedures and performed according to strict standards. This requires careful selection of data for model building, and adequate testing and validation. A range of web-based databases and MHC-binding prediction programs are available. Although some available prediction programs for particular MHC alleles have reasonable accuracy, there is no guarantee that all models produce good quality predictions. In this article, we present and discuss a framework for modelling, testing, and applications of computational methods used in predictions of T-cell epitopes. (C) 2004 Elsevier Inc. All rights reserved.
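Among the prediction methods listed, the quantitative-matrix approach is simple enough to sketch. The position weights below are invented for illustration only (no real MHC allele is modelled): the binding score of a fixed-length peptide is the sum of per-position residue weights, and candidate binders are those scoring above a chosen threshold.

```python
# Toy quantitative matrix for a 3-residue peptide: one weight
# column per position, keyed by amino-acid letter. All values
# are hypothetical, chosen only to illustrate the scoring scheme.
matrix = [
    {"L": 2.0, "A": 0.5, "G": -1.0},   # position 1 (anchor)
    {"K": 1.5, "A": 0.2, "G": -0.5},   # position 2
    {"V": 2.5, "A": 0.1, "G": -2.0},   # position 3 (anchor)
]

def score(peptide, default=-3.0):
    # Residues absent from a column receive a penalty weight.
    return sum(col.get(aa, default) for col, aa in zip(matrix, peptide))

print(score("LKV"))  # scores high: favoured residues at every position
print(score("GGG"))  # scores low: disfavoured residues throughout
```

Real matrices are derived from binding data over many peptides, and the article's point stands here too: a matrix is only as good as the data used to build it and the testing applied to validate it.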
Abstract:
The aim of this study was to assess the variation between neuropathologists in the diagnosis of common dementia syndromes when multiple published protocols are applied. Fourteen of 18 Australian neuropathologists participated in diagnosing 20 cases (16 cases of dementia, 4 age-matched controls) using consensus diagnostic methods. Diagnostic criteria, clinical synopses and slides from multiple brain regions were sent to participants, who were asked for case diagnoses. Diagnostic sensitivity, specificity, predictive value, accuracy and variability were determined using percentage agreement and kappa statistics. Using CERAD criteria, there was high inter-rater agreement for cases with probable and definite Alzheimer's disease but low agreement for cases with possible Alzheimer's disease. Braak staging and the application of criteria for dementia with Lewy bodies also resulted in high inter-rater agreement. There was poor agreement for the diagnosis of frontotemporal dementia and for identifying small vessel disease. Participants rarely diagnosed more than one disease in any case. To improve efficiency when applying multiple diagnostic criteria, several simplifications were proposed and tested on 5 of the original 20 cases. Inter-rater reliability for the diagnosis of Alzheimer's disease and dementia with Lewy bodies significantly improved. Further development of simple and accurate methods to identify small vessel lesions and diagnose frontotemporal dementia is warranted.
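The kappa statistics used above correct raw percentage agreement for the agreement expected by chance. A minimal sketch of Cohen's kappa for two raters, with hypothetical diagnosis labels (the study itself involved fourteen raters and multiple criteria):

```python
def cohen_kappa(rater_a, rater_b):
    # kappa = (p_o - p_e) / (1 - p_e), where p_o is observed agreement
    # and p_e is the agreement expected by chance from each rater's
    # marginal label frequencies.
    n = len(rater_a)
    labels = set(rater_a) | set(rater_b)
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    p_e = sum((rater_a.count(l) / n) * (rater_b.count(l) / n) for l in labels)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical per-case diagnoses from two neuropathologists.
a = ["AD", "AD", "DLB", "AD", "FTD", "AD"]
b = ["AD", "AD", "DLB", "DLB", "AD", "AD"]
print(round(cohen_kappa(a, b), 3))
```

Here the raters agree on 4 of 6 cases (67%), but because both diagnose "AD" most of the time, much of that agreement is expected by chance, and kappa drops to about 0.33, which is why kappa rather than percentage agreement is the preferred measure of inter-rater reliability.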
Abstract:
OBJECTIVE: To analyze the clinical and laboratory characteristics of HIV-infected individuals upon admission to a reference health care center. METHODS: This cross-sectional study was conducted between 1999 and 2010 on 527 individuals with a confirmed serological diagnosis of HIV infection who were enrolled in an outpatient health care service in Santarém, PA, Northern Brazil. Data were collected from medical records and included the reason for HIV testing, clinical status, and peripheral CD4+ T lymphocyte count upon enrollment. The data were divided into three groups according to the patient's year of admission – P1 (1999-2002), P2 (2003-2006), and P3 (2007-2010) – for comparative analysis of the variables of interest. RESULTS: In the study group, 62.0% of the patients were assigned to the P3 group. The reason for undergoing HIV testing differed between genders. In the male population, most tests were conducted because of the presence of symptoms suggesting infection. Among women in groups P1 and P2, tests resulted from knowledge of the partner's seropositive status. A higher proportion of women undergoing testing because of symptoms of HIV/AIDS eliminated the difference between genders in the most recent period. A higher percentage of patients enrolling at a more advanced stage of the disease was observed in P3. CONCLUSIONS: Despite increased awareness of HIV/AIDS, these patients identified their serological status late and were admitted to health care units with active disease. The HIV/AIDS epidemic in Pará shows specificities in its progression that indicate the complex characteristics of the epidemic in the Northern region of Brazil and across the country.
Abstract:
Monitoring systems have traditionally been developed with rigid objectives and functionalities, tied to specific languages, libraries and run-time environments. There is a need for more flexible monitoring systems that can be easily adapted to distinct requirements. On-line monitoring is increasingly important for the observation and control of a distributed application. In this paper we discuss monitoring interfaces and architectures that support more extensible monitoring and control services. We describe our work on the development of a distributed monitoring infrastructure and illustrate how it eases the implementation of a complex distributed debugging architecture. We also discuss several issues concerning support for tool interoperability and illustrate how cooperation among multiple concurrent tools can ease the task of distributed debugging.
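An extensible monitoring interface of the kind described can be sketched as an event-subscription core. This is an illustrative design, not the authors' actual infrastructure: multiple concurrent tools (a debugger, a visualizer, and so on) register for the same application events, which is what allows them to cooperate on one run.

```python
class Monitor:
    # Minimal extensible monitoring core: tools subscribe callbacks
    # to named events, and the monitored application emits events
    # that are fanned out to every registered tool.
    def __init__(self):
        self.tools = {}

    def subscribe(self, event, callback):
        self.tools.setdefault(event, []).append(callback)

    def emit(self, event, payload):
        for cb in self.tools.get(event, []):
            cb(payload)

mon = Monitor()
log = []
# Two concurrent tools observing the same event stream.
mon.subscribe("breakpoint", lambda p: log.append(("debugger", p)))
mon.subscribe("breakpoint", lambda p: log.append(("visualizer", p)))
mon.emit("breakpoint", {"process": 3, "line": 42})
```

Because tools are added by subscription rather than compiled into the monitor, the core stays independent of any particular language binding or tool, which is the flexibility the paper argues for.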
Abstract:
Three different treatments were applied to several specimens of dolomitic and calcitic marble, deliberately stained with rust to mimic real situations (the stone specimens were exposed to the natural environment for about six months in contact with rusted iron). Thirty-six marble specimens, eighteen calcitic and eighteen dolomitic, were characterized before and after treatment and monitored throughout the cleaning tests. The specimens were characterized by SEM-EDS (Scanning Electron Microscopy coupled with Energy Dispersive Spectroscopy), XRD (X-Ray Diffraction), XRF (X-Ray Fluorescence), FTIR (Fourier Transform Infrared Spectroscopy) and color measurements. Microscopic and macroscopic analyses of the stone surface were also carried out, along with short- and long-term capillary absorption tests. A series of trials was conducted to determine which concentrations and contact times best suit this purpose and to confirm what had been reported in the literature to date. We sought to develop new methods of treatment application, moving beyond the usual practice of applying chemical treatments to stone substrates with a cellulose poultice, by resorting instead to agar, a gel already used in many other fields but new to this one, which has great applicability in the conservation of stone materials. After applying the best cleaning methodology, the specimens were characterized again in order to determine which treatment was more effective and less harmful, both for the operator and for the stone material. Briefly, the conclusions were that for very intense rust with deep penetration into the stone, a 3.5% SDT solution buffered with ammonium carbonate to a pH of around 7, applied with an agar support, is indicated. For rust stains in their initial state, ammonium citrate at a concentration of 5%, buffered with ammonium to pH 7, can be applied more than once until satisfactory results appear.
Abstract:
Contains abstract
Abstract:
This work presents a newly developed test setup for dynamic out-of-plane loading using underwater blast wave generators (WBWG) as the loading source. Over the last decades, underwater blasting operations have been the subject of research and development in maritime blasting (including torpedo studies), aquarium tests for measuring the blasting energy of industrial explosives, and confined underwater blast wave generators. WBWGs allow a wide range of produced blast impulse and surface-area distribution, avoid the generation of high-velocity fragments, and reduce the atmospheric sound wave. A first objective of this work is to study the behavior of masonry infill walls subjected to blast loading. Three different masonry walls are studied: unreinforced infill walls and two different reinforcement solutions that had previously been studied for seismic-action mitigation. Subsequently, the walls will be simulated using an explicit finite element code for validation and parametric studies. Finally, a tool to help designers make informed decisions on the use of infills under blast loading will be presented.