12 results for testing tools

in Deakin Research Online - Australia


Relevance: 60.00%

Abstract:

Adverse drug reactions (ADRs) are a major public health concern and cause significant patient morbidity and mortality. Pharmacogenomics is the study of how genetic polymorphisms affect an individual's response to pharmacotherapy at the level of the whole genome. Based on an extensive literature search, this article updates our knowledge of how genetic polymorphisms of important genes alter the risk of ADR occurrence. To date, at least 244 identified pharmacogenes have been associated with ADRs of 176 clinically used drugs according to PharmGKB, and at least 28 genes associated with the risk of ADRs have been listed by the Food and Drug Administration as pharmacogenomic biomarkers. With the availability of affordable and reliable testing tools, pharmacogenomics looks promising for predicting and minimizing ADRs in selected populations.

Relevance: 60.00%

Abstract:

Most software testing research has focused on the development of systematic, standardised, and automated testing methodologies and tools. The abilities and expertise needed to apply such techniques and tools - such as personality traits, education, and experience - have attracted comparatively little research attention. However, the limited research in the area to date indicates that the human traits of software testers are important for effective testing. This paper presents the opinions of software testers themselves, collected through an online survey, on the importance of a variety of factors that influence effective testing, including testing-specific training, experience, skills, and human qualities such as dedication and general intelligence. The responses strongly suggest that while testing tools and training are important, human factors are considered similarly important: domain knowledge, experience, intelligence, and dedication, amongst other traits, were rated as crucial for a software tester to be effective. As such, while systematic methodologies matter, the individual clearly matters in software testing as well. These results have implications for the education, recruitment, training and management of software testers.

Relevance: 40.00%

Abstract:

The increasing complexity and number of digital forensic tasks required in criminal investigations demand an effective and efficient testing methodology, one that enables tools of similar functionality to be compared on their performance. Assuming that the tool tester is familiar with the underlying testing platform and is able to use the tools correctly, we provide a numerical solution for the lower bound on the number of test cases needed to determine the comparative capabilities of any set of digital forensic tools. We also present a case study on the performance testing of password cracking tools, which confirms that the lower bound on the number of testing runs needed is closely related to the row size of certain orthogonal arrays. We show how to reduce the number of test runs by using knowledge of the underlying system.
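
The paper's exact construction is not reproduced in this abstract, but the link to orthogonal arrays can be illustrated with a short sketch. Assuming four tool parameters with three levels each (the parameter names below are made up), the standard L9(3^4) orthogonal array covers every pair of parameter levels in 9 runs, whereas exhaustive testing would need 3^4 = 81 runs:

```python
from itertools import combinations, product

# Standard L9(3^4) orthogonal array: 9 runs, 4 factors, 3 levels, strength 2.
# Each row is one test run; each column is one (hypothetical) tool parameter.
L9 = [
    (0, 0, 0, 0), (0, 1, 1, 1), (0, 2, 2, 2),
    (1, 0, 1, 2), (1, 1, 2, 0), (1, 2, 0, 1),
    (2, 0, 2, 1), (2, 1, 0, 2), (2, 2, 1, 0),
]
factors = ["hash_type", "charset", "password_length", "attack_mode"]  # hypothetical names

# Strength 2: every pair of columns exhibits all 3 x 3 level combinations.
for c1, c2 in combinations(range(4), 2):
    pairs = {(row[c1], row[c2]) for row in L9}
    assert pairs == set(product(range(3), repeat=2)), (c1, c2)

print(f"Pairwise coverage of {len(factors)} factors in {len(L9)} runs "
      f"(exhaustive testing would need {3 ** len(factors)} runs).")
```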

Relevance: 40.00%

Abstract:

We develop an effective and efficient methodology for correctness testing of file recovery tools across different file systems. We assume that the tool tester is familiar with the formats of common file types and is able to use the tools correctly. Our methodology first derives a testing plan that minimizes the number of runs required to identify differences in correctness between tools. We also present a case study on correctness testing of file carving tools, which confirms that the number of necessary testing runs is bounded and that our results are statistically sound.
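
Correctness testing of a recovery tool ultimately comes down to comparing what the tool recovers against known ground truth. The sketch below is not the methodology described above, just a minimal harness (with hypothetical paths and digests) that scores one carving tool's output by checking recovered files against reference SHA-256 hashes:

```python
import hashlib
from pathlib import Path

def sha256(path: Path) -> str:
    """Hash a file in chunks so large carved files do not exhaust memory."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

# Ground truth: hashes of the files planted on the test image (hypothetical values).
reference = {
    "report.pdf":  "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
    "photo.jpg":   "60303ae22b998861bce3b28f33eec1be758a213c86c93c076dbe9f558c11c752",
    "budget.xlsx": "fd61a03af4f77d870fc21e05e7e80678095c92d808cfb3b5c279ee04c74aca13",
}

recovered_dir = Path("recovered")  # hypothetical output directory of the carving tool
recovered = {p.name: sha256(p) for p in recovered_dir.iterdir() if p.is_file()}

exact   = sum(1 for name, digest in reference.items() if recovered.get(name) == digest)
corrupt = sum(1 for name in reference if name in recovered and recovered[name] != reference[name])
missed  = len(reference) - exact - corrupt

print(f"exactly recovered: {exact}, corrupted: {corrupt}, missed: {missed} "
      f"(correctness rate {exact / len(reference):.0%})")
```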

Relevance: 40.00%

Abstract:

This thesis surveys the latest developments in digital forensic tools designed for anti-cybercrime purposes. It discusses the necessity of testing digital forensic tools and presents a novel testing framework. The new framework takes the viewpoint of software vendors rather than that of traditional software engineering approaches.

Relevance: 40.00%

Abstract:

In previous work, we presented a theoretical lower bound on the number of testing runs required for performance testing of digital forensic tools, and demonstrated a practical testing method that tolerates both measurement and random errors in order to achieve results close to this bound. In this paper, we extend that work to correctness testing. The methodology enables the tester to obtain high-quality correctness testing results from a manageable number of observations, in a dynamic but controllable way. This is of particular interest to forensic testers who do not have access to sophisticated equipment and can allocate only a small amount of time to testing.

Relevance: 40.00%

Abstract:

In previous work, we presented a theoretical lower bound on the number of testing runs required for performance testing of digital forensic tools. However, experimental errors are inevitable in laboratory settings, appearing as measurement errors or random errors, and in practice they can push the required number of testing runs far from the theoretical bound. This paper adapts our earlier work to tolerate such errors in the testing results. The new methodology enables the tester to obtain high-quality performance testing results from a manageable number of observations, in a dynamic but controllable way. This is of particular interest to forensic testers who do not have access to sophisticated equipment and can allocate only a small amount of time to testing.
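
The error-tolerant procedure itself is not given in the abstract; as a rough illustration of the underlying idea, the sketch below (using made-up pilot timings) applies the standard normal-approximation sample-size formula n ≈ (z·s/E)² to estimate how many repeated runs are needed before the mean runtime of a tool can be trusted to within a chosen tolerance:

```python
import math
import statistics

def runs_needed(pilot_times, tolerance, z: float = 1.96) -> int:
    """Estimate how many runs keep the mean runtime within +/- tolerance
    (seconds) at ~95% confidence, using the approximation n = (z*s/E)^2."""
    s = statistics.stdev(pilot_times)  # sample standard deviation of the pilot runs
    return max(len(pilot_times), math.ceil((z * s / tolerance) ** 2))

# Hypothetical pilot timings (seconds) of one forensic-tool run, repeated 5 times.
pilot = [41.2, 39.8, 43.5, 40.9, 42.1]

print("mean of pilot runs:", round(statistics.mean(pilot), 2), "s")
print("runs needed for +/-0.5 s:", runs_needed(pilot, tolerance=0.5))
print("runs needed for +/-1.0 s:", runs_needed(pilot, tolerance=1.0))
```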

Relevance: 30.00%

Abstract:

Recent developments in ecological statistics have reached behavioral ecology, and an increasing number of studies now apply analytical tools that incorporate alternatives to conventional null hypothesis testing based on significance levels. However, these approaches continue to receive mixed support in our field. Because our statistical choices can influence research design and the interpretation of data, there is a compelling case for reaching consensus on statistical philosophy and practice. Here, we provide a brief overview of the recently proposed approaches and open an online forum for future discussion (https://bestat.ecoinformatics.org/). From the perspective of practicing behavioral ecologists relying on either correlative or experimental data, we review the most relevant features of information-theoretic approaches, Bayesian inference, and effect size statistics. We also discuss concerns about data quality, missing data, and repeatability. We emphasize the need to move away from a heavy reliance on statistical significance and to focus attention on biological relevance and effect sizes, recognizing that uncertainty is an inherent feature of biological data. Furthermore, we point to the importance of integrating previous knowledge into the current analysis, for which the novel approaches offer a variety of tools. We note, however, that the drawbacks and benefits of these approaches have yet to be carefully examined in association with behavioral data. We therefore encourage a philosophical change in the interpretation of statistical outcomes, while retaining a pluralistic perspective for making objective statistical choices given the uncertainties around the different approaches in behavioral ecology. We provide recommendations on how these concepts could be made apparent in the presentation of statistical outputs in scientific papers.
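
None of the reviewed approaches is spelled out in this abstract; as one small, generic example of reporting an effect size with its uncertainty rather than a significance level alone, the sketch below (made-up data) computes Cohen's d with the usual large-sample approximate 95% confidence interval:

```python
import math
import statistics

def cohens_d_with_ci(group1, group2, z: float = 1.96):
    """Cohen's d for two independent groups, with an approximate 95% CI
    based on the standard large-sample variance formula for d."""
    n1, n2 = len(group1), len(group2)
    s_pooled = math.sqrt(((n1 - 1) * statistics.variance(group1) +
                          (n2 - 1) * statistics.variance(group2)) / (n1 + n2 - 2))
    d = (statistics.mean(group1) - statistics.mean(group2)) / s_pooled
    se = math.sqrt((n1 + n2) / (n1 * n2) + d ** 2 / (2 * (n1 + n2)))
    return d, (d - z * se, d + z * se)

# Hypothetical behavioural measurements (e.g. response latency in seconds)
# for a control group and a treatment group.
control   = [12.1, 10.4, 13.2, 11.8, 12.6, 10.9, 11.5, 12.3]
treatment = [ 9.7, 10.1,  8.9, 11.0,  9.4, 10.6,  9.1,  9.9]

d, (lo, hi) = cohens_d_with_ci(control, treatment)
print(f"Cohen's d = {d:.2f}, approx. 95% CI [{lo:.2f}, {hi:.2f}]")
```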

Relevance: 30.00%

Abstract:

Despite increasing sequencing capacity, genetic disease investigation still frequently results in the identification of loci containing multiple candidate disease genes that need to be tested for involvement in the disease. This process can be expedited by prioritizing the candidates prior to testing. Over the last decade, a large number of computational methods and tools have been developed to assist the clinical geneticist in prioritizing candidate disease genes. In this chapter, we give an overview of computational tools that can be used for this purpose, all of which are freely available over the web.

Relevance: 30.00%

Abstract:

In an effort to engage children in mathematics learning, many primary teachers use mathematical games and activities. Games have been employed for drill and practice, warm-up activities and rewards. The effectiveness of games as a pedagogical tool requires further examination if games are to be employed for the teaching of mathematical concepts. This paper reports research that compared the effectiveness of non-digital games with non-game but engaging activities as pedagogical tools for promoting mathematical learning. In the classrooms that played games, the effects of adding teacher-led whole-class discussion were also explored. The research was conducted with 10–12-year-old children in eight classrooms in three Australian primary schools, using differing instructional approaches to teach multiplication and division of decimals. A quasi-experimental design with pre-test, post-test and delayed post-test was employed, and the effects of the interventions were measured by the children's written test performance. Test results indicated smaller learning gains in the game-playing situations than with the non-game activities, and that teacher-led discussion during and following the game playing did not improve children's learning. The finding that these games did not help children demonstrate mathematical understanding of the concepts under test conditions suggests that educators should carefully consider the application and appropriateness of games before employing them as a vehicle for introducing mathematical concepts.

Relevance: 30.00%

Abstract:

Aim. This paper is a report of a study to investigate whether the Australian National Competency Standards for Registered Nurses demonstrate correlations with the Finnish Nurse Competency Scale. Background. Competency assessment has become popular as a key regulatory requirement and performance indicator. The term competency, however, does not have a globally accepted definition, and this has the potential to create controversy, ambiguity and confusion. Variations in the meanings and definitions adopted in workplaces and educational settings will affect the interpretation of research findings and have implications for the nursing profession. Method. A non-experimental cross-sectional survey design was used with a convenience sample of 116 new graduate nurses in 2005. The second version of the Australian National Competency Standards and the Nurse Competency Scale were used to elicit responses on self-assessed competency in the transitional year (the first year as a Registered Nurse). Findings. Correlational analysis of self-assessed levels of competence revealed a relationship between the Australian National Competency Standards (ANCI) and the Nurse Competency Scale (NCS). The correlation between ANCI domains and NCS factors suggests that these scales do measure related dimensions, and a statistically significant relationship (r = 0·75) was found between the two competency measures. Conclusion. Although the finding of convergent validity is insufficient to establish construct validity for competency as used in both measures in this study, it is an important step towards this goal. Future studies on relationships between competencies must take into account the validity and reliability of the tools.
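
The study's data are not available here; purely to illustrate the kind of correlational analysis reported (r = 0·75 between the two self-assessment measures), the sketch below computes a Pearson correlation between two made-up sets of competency scores:

```python
import math

def pearson_r(x, y):
    """Pearson product-moment correlation between two equal-length score lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical self-assessed scores for six graduates on the two instruments
# (e.g. a mean ANCI domain rating and a mean NCS factor rating per respondent).
anci_scores = [3.2, 3.8, 2.9, 4.1, 3.5, 3.0]
ncs_scores  = [55,  68,  52,  74,  63,  58]

print(f"Pearson r = {pearson_r(anci_scores, ncs_scores):.2f}")
```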

Relevance: 30.00%

Abstract:

BACKGROUND: Patients are a valuable source of information about ways to prevent harm in primary care and are in a unique position to provide feedback about the factors that contribute to safety incidents. Unlike in the hospital setting, there are currently no tools that allow the systematic capture of this information from patients. The aim of this study was to develop a quantitative primary care patient measure of safety (PC PMOS). METHODS: A two-stage approach was undertaken to develop questionnaire domains and items. Stage 1 involved a modified Delphi process. An expert panel reached consensus on domains and items based on three sources of information (validated hospital PMOS, previous research conducted by our study team and literature on threats to patient safety). Stage 2 involved testing the face validity of the questionnaire developed during stage 1 with patients and primary care staff using the 'think aloud' method. Following this process, the questionnaire was revised accordingly. RESULTS: The PC PMOS was received positively by both patients and staff during face validity testing. Barriers to completion included the length, relevance and clarity of questions. The final PC PMOS consisted of 50 items across 15 domains. The contributory factors to safety incidents centred on communication, access to care, patient-related factors, organisation and care planning, task performance and information flow. DISCUSSION: This is the first tool specifically designed for primary care settings, which allows patients to provide feedback about factors contributing to potential safety incidents. The PC PMOS provides a way for primary care organisations to learn about safety from the patient perspective and make service improvements with the aim of reducing harm in this setting. Future research will explore the reliability and construct validity of the PC PMOS.