932 results for Exception Handling. Exceptional Behavior. Exception Policy. Software Testing. Design Rules


Relevance:

100.00%

Publisher:

Abstract:

Modern software application testing, such as the testing of software driven by graphical user interfaces (GUIs) or leveraging event-driven architectures in general, requires paying careful attention to context. Model-based testing (MBT) approaches first acquire a model of an application, then use the model to construct test cases covering relevant contexts. A major shortcoming of state-of-the-art automated model-based testing is that many test cases proposed by the model are not actually executable. These infeasible test cases threaten the integrity of the entire model-based suite, and any coverage of contexts the suite aims to provide. In this research, I develop and evaluate a novel approach for classifying the feasibility of test cases. I identify a set of pertinent features for the classifier, and develop novel methods for extracting these features from the outputs of MBT tools. I use a supervised logistic regression approach to obtain a model of test case feasibility from a randomly selected training suite of test cases. I evaluate this approach with a set of experiments. The outcomes of this investigation are as follows: I confirm that infeasibility is prevalent in MBT, even for test suites designed to cover a relatively small number of unique contexts. I confirm that the frequency of infeasibility varies widely across applications. I develop and train a binary classifier for feasibility with average overall error, false positive, and false negative rates under 5%. I find that unique event IDs are key features of the feasibility classifier, while model-specific event types are not. I construct three types of features from the event IDs associated with test cases, and evaluate the relative effectiveness of each within the classifier. To support this study, I also develop a number of tools and infrastructure components for scalable execution of automated jobs, which use state-of-the-art container and continuous integration technologies to enable parallel test execution and the persistence of all experimental artifacts.
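For illustration, a minimal sketch of the kind of feasibility classifier described above: a logistic regression over binary event-ID features. The feature encoding, toy event sequences, and labels below are assumptions for demonstration, not the thesis's actual features, data, or tooling.

```python
# Minimal sketch: classify test-case feasibility from event-ID features.
# The binary event-ID encoding and the toy data are illustrative assumptions.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# Each test case is a sequence of GUI event IDs; label 1 = feasible, 0 = infeasible.
test_cases = [
    "e12 e7 e45 e3",   # hypothetical event-ID sequences
    "e12 e7 e99 e3",
    "e5 e45 e3",
    "e5 e99 e12",
]
labels = [1, 0, 1, 0]

# Encode the presence of unique event IDs (reported above as key features).
vectorizer = CountVectorizer(binary=True, token_pattern=r"\S+")
X = vectorizer.fit_transform(test_cases)

clf = LogisticRegression().fit(X, labels)

# Predict feasibility of an unseen test case before attempting to execute it.
candidate = vectorizer.transform(["e12 e45 e3"])
print(clf.predict(candidate), clf.predict_proba(candidate))
```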

Relevance:

100.00%

Publisher:

Abstract:

With the increasing complexity of today's software, the software development process is becoming highly time- and resource-consuming. The increasing number of software configurations, input parameters, usage scenarios, supporting platforms, external dependencies, and versions plays an important role in expanding the costs of maintaining and repairing unforeseeable software faults. To repair software faults, developers spend considerable time identifying the scenarios that lead to those faults and root-causing the problems. While software testing and verification have become increasingly automated, software debugging remains largely manual. The goal of this research is to improve the software development process in general, and the software debugging process in particular, by devising techniques and methods for automated software debugging that leverage the advances in automatic test case generation and replay. In this research, novel algorithms are devised to discover faulty execution paths in programs by utilizing already existing software test cases, which can be either automatically or manually generated. The execution traces, or alternatively, the sequence covers of the failing test cases are extracted. Afterwards, commonalities between these test case sequence covers are extracted, processed, analyzed, and then presented to the developers in the form of subsequences that may be causing the fault. The hypothesis is that code sequences shared among a number of test cases that fail for the same reason resemble the faulty execution path, and hence the search space for the faulty execution path can be narrowed down by using a large number of test cases. To achieve this goal, an efficient algorithm is implemented for finding common subsequences among a set of code sequence covers. Optimization techniques are devised to generate shorter and more logical sequence covers, and to select subsequences with a high likelihood of containing the root cause among the set of all possible common subsequences. A hybrid static/dynamic analysis approach is designed to trace back the common subsequences from the end to the root cause. A debugging tool is created to enable developers to use the approach, and it is integrated with an existing Integrated Development Environment. The tool is also integrated with the environment's program editors so that developers can benefit from both the tool's suggestions and their source code counterparts. Finally, a comparison between the developed approach and state-of-the-art techniques shows that developers need to inspect only a small number of lines to find the root cause of the fault. Furthermore, experimental evaluation shows that the algorithm optimizations lead to better results in terms of both the algorithm running time and the output subsequence length.
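As a rough illustration of the core idea — intersecting the sequence covers of failing test cases to narrow the search space for the faulty path — the sketch below reduces a set of hypothetical traces with a textbook longest-common-subsequence routine. The dissertation's algorithm and its optimizations are more efficient and more selective; the traces here are invented.

```python
# Naive illustration only: intersect failing-test execution traces to obtain a
# candidate faulty subsequence via pairwise LCS reduction.
from functools import reduce

def lcs(a, b):
    """Longest common subsequence of two traces (lists of statement IDs)."""
    m, n = len(a), len(b)
    dp = [[[] for _ in range(n + 1)] for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if a[i - 1] == b[j - 1]:
                dp[i][j] = dp[i - 1][j - 1] + [a[i - 1]]
            else:
                dp[i][j] = max(dp[i - 1][j], dp[i][j - 1], key=len)
    return dp[m][n]

# Hypothetical sequence covers of three failing test cases (statement IDs).
failing_traces = [
    ["s1", "s4", "s7", "s9", "s12"],
    ["s2", "s4", "s7", "s12"],
    ["s1", "s4", "s5", "s7", "s12"],
]

# The shared subsequence is the narrowed search space for the faulty path.
print(reduce(lcs, failing_traces))   # ['s4', 's7', 's12']
```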

Relevance:

100.00%

Publisher:

Abstract:

Background – The software effort estimation research area aims to improve the accuracy of effort estimates for software projects and activities. Aims – This study describes the development and usage of a web application to collect data generated by the Planning Poker estimation process, and the analysis of the collected data to investigate the impact of revising previous estimates when conducting similar estimates in a Planning Poker context. Method – Software activities were estimated by Universidade Tecnológica Federal do Paraná (UTFPR) computing students, using Planning Poker, with and without revising previous similar activities, storing data regarding the decision-making process. The collected data was then used to investigate the impact that revising similar, previously executed activities has on the accuracy of software effort estimates. Obtained Results – The UTFPR computing students were divided into 14 groups. Eight of them showed an accuracy increase in more than half of their estimates, three had roughly the same accuracy in more than half of their estimates, and only three had a loss of accuracy in more than half of their estimates. Conclusion – Reviewing similar, previously executed software activities when using Planning Poker led to more accurate software estimates in most cases and can therefore improve the software development process.

Relevance:

100.00%

Publisher:

Abstract:

Recent empirical studies in the area of mobile application testing indicate the need for specific testing techniques and methods for mobile applications. This is because mobile applications are significantly different from traditional web and desktop applications, particularly in terms of the physical constraints of mobile devices and the very different features of their operating systems. In this paper, we present a multiple-case study involving four software development companies in the area of mobile and smartphone applications. We aimed to identify the testing techniques currently applied by developers and the challenges they face. Our principal results are that many industrial teams seem to lack sufficient knowledge on how to test mobile applications, particularly in the areas of mobile application life-cycle conformance, context-awareness, and integration testing. We also found that there is no formal testing approach or methodology that can help a development team systematically test a critical mobile application.

Relevance:

100.00%

Publisher:

Abstract:

The importance of mobile-application-specific testing techniques and methods has attracted much attention from software engineers over the past few years. This is because mobile applications differ from traditional web and desktop applications and are increasingly being used in critical domains. Mobile applications require a different approach to application quality and dependability, and an effective testing approach to build high-quality, more reliable software. We performed a systematic mapping study to categorize and structure the research evidence published on mobile application testing techniques and the challenges they report. Seventy-nine (79) empirical studies are mapped to a classification schema. Several research gaps and key testing issues for practitioners are identified: the need to elicit testing requirements early in the development process; the need to conduct research in real-world development environments; specific testing techniques targeting application life-cycle conformance and mobile services testing; and comparative studies of security and usability testing.

Relevance:

100.00%

Publisher:

Abstract:

A discussion of 2008/2009 developments in Australian educational policy, with specific reference to the adoption of US and UK trends in accountability, testing and school reform.

Relevance:

100.00%

Publisher:

Abstract:

This paper presents the details of experimental studies on the shear behaviour of a recently developed, cold-formed steel beam known as LiteSteel Beam (LSB). The LSB section has a unique shape of a channel beam with two rectangular hollow flanges and is produced by a patented manufacturing process involving simultaneous cold-forming and dual electric resistance welding. To date, no research has been undertaken on the shear behaviour of LiteSteel beams with torsionally rigid, rectangular hollow flanges. In the present investigation, experimental studies involving more than 30 shear tests were carried out to investigate the shear behaviour of 13 different LSB sections. It was found that the current design rules in cold-formed steel structures design codes are very conservative for the shear design of LiteSteel beams. Significant improvements to web shear buckling occurred due to the presence of rectangular hollow flanges while considerable post-buckling strength was also observed. Experimental results are presented and compared with corresponding predictions from the current design codes in this paper. Appropriate improvements have been proposed for the shear strength of LSBs based on AS/NZS 4600 design equations.

Relevance:

100.00%

Publisher:

Abstract:

Scalable high-resolution tiled display walls are becoming increasingly important to decision makers and researchers because high pixel counts in combination with large screen areas facilitate content rich, simultaneous display of computer-generated visualization information and high-definition video data from multiple sources. This tutorial is designed to cater for new users as well as researchers who are currently operating tiled display walls or 'OptiPortals'. We will discuss the current and future applications of display wall technology and explore opportunities for participants to collaborate and contribute in a growing community. Multiple tutorial streams will cover both hands-on practical development, as well as policy and method design for embedding these technologies into the research process. Attendees will be able to gain an understanding of how to get started with developing similar systems themselves, in addition to becoming familiar with typical applications and large-scale visualisation techniques. Presentations in this tutorial will describe current implementations of tiled display walls that highlight the effective usage of screen real-estate with various visualization datasets, including collaborative applications such as visualcasting, classroom learning and video conferencing. A feature presentation for this tutorial will be given by Jurgen Schulze from Calit2 at the University of California, San Diego. Jurgen is an expert in scientific visualization in virtual environments, human-computer interaction, real-time volume rendering, and graphics algorithms on programmable graphics hardware.

Relevance:

100.00%

Publisher:

Abstract:

The reporting and auditing of patient dose is an important component of radiotherapy quality assurance. The manual extraction of dose-volume metrics is time consuming and undesirable when auditing the dosimetric quality of a large cohort of patient plans. A dose assessment application was written to overcome this, allowing the calculation of various dose-volume metrics for large numbers of plans exported from treatment planning systems. This application expanded on the DICOM-handling functionality of the MCDTK software suite. The software extracts dose values in the volume of interest by using a ray casting point-in-polygon algorithm, where the polygons have been defined by the contours in the RTSTRUCT file...
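The ray-casting point-in-polygon test mentioned above can be sketched as follows. This is the textbook even-odd crossing rule, not MCDTK's actual code, and the contour coordinates are invented for demonstration.

```python
# Generic ray-casting point-in-polygon test, used to decide whether a dose-grid
# point lies inside a structure contour (e.g. from an RTSTRUCT file).
def point_in_polygon(x, y, polygon):
    """Return True if (x, y) is inside the closed polygon given as [(x0, y0), ...]."""
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        # Count crossings of a horizontal ray cast toward +x from (x, y).
        if (y1 > y) != (y2 > y):
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

contour = [(0.0, 0.0), (40.0, 0.0), (40.0, 30.0), (0.0, 30.0)]  # mm, illustrative
print(point_in_polygon(20.0, 15.0, contour))  # True: point contributes to the metric
print(point_in_polygon(50.0, 15.0, contour))  # False: outside the structure
```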

Relevance:

100.00%

Publisher:

Abstract:

Extant models of decision making in social neurobiological systems have typically explained task dynamics as characterized by transitions between two attractors. In this paper, we model a three-attractor task exemplified in a team sport context. The model showed that an attacker–defender dyadic system can be described by the angle x between a vector connecting the participants and the try line. This variable was proposed as an order parameter of the system and could be dynamically expressed by integrating a potential function. Empirical evidence has revealed that this kind of system has three stable attractors, with a potential function of the form V(x) = −k1x + k2ax²/2 − bx⁴/4 + x⁶/6, where k1 and k2 are two control parameters. Random fluctuations were also observed in system behavior, modeled as white noise ε_t, leading to the motion equation dx/dt = −dV/dx + Q^0.5 ε_t, where Q is the noise variance. The model successfully mirrored the behavioral dynamics of agents in a social neurobiological system, exemplified by interactions of players in a team sport.
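A minimal numerical sketch of the motion equation above, using Euler–Maruyama integration. The parameter values (k1, k2, a, b, Q) are illustrative choices picked so the potential has three stable attractors; they are not fitted values from the study.

```python
# Euler–Maruyama integration of dx/dt = −dV/dx + Q^0.5·ε_t with
# V(x) = −k1·x + k2·a·x²/2 − b·x⁴/4 + x⁶/6. All parameter values are assumed.
import numpy as np

k1, k2, a, b = 0.0, 1.0, 1.0, 3.0   # control/shape parameters (illustrative)
Q = 0.05                            # noise variance (illustrative)
dt, steps = 0.01, 20000

def dV_dx(x):
    # Derivative of the sextic potential that yields up to three stable attractors.
    return -k1 + k2 * a * x - b * x**3 + x**5

rng = np.random.default_rng(0)
x = np.empty(steps)
x[0] = 0.1
for t in range(1, steps):
    noise = rng.normal(0.0, 1.0)
    x[t] = x[t - 1] - dV_dx(x[t - 1]) * dt + np.sqrt(Q * dt) * noise

# The trajectory dwells near the minima of V and occasionally switches between
# them under noise, mirroring transitions between dyadic system states.
print(x.min(), x.max())
```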

Relevance:

100.00%

Publisher:

Abstract:

Bug fixing is a highly cooperative work activity where developers, testers, product managers and other stakeholders collaborate using a bug tracking system. In the context of Global Software Development (GSD), where software development is distributed across different geographical locations, we focus on understanding the role of bug trackers in supporting software bug fixing activities. We carried out a small-scale ethnographic fieldwork in a software product team distributed between Finland and India at a multinational engineering company. Using semi-structured interviews and in-situ observations of 16 bug cases, we show that the bug tracker 1) supported the information needs of different stakeholders, 2) established common ground, and 3) reinforced issues related to ownership, performance and power. Consequently, we provide implications for design around these findings.

Relevance:

100.00%

Publisher:

Abstract:

The optical transport behavior of organic photovoltaic devices with nano-pillar transparent electrodes is investigated in this paper in order to understand possible enhancements of their charge-collection efficiency. Modeling and simulations of optical transport in this architecture show an interesting regime of length-scale-dependent optical characteristics. An electromagnetic wave propagation model is employed to understand the mechanisms of optical scattering and waveguide effects due to the nano-pillars and the effective transmission through the active layer. Partial filling of the gaps between the nano-pillars due to the nano-fabrication process is taken into consideration. Observations made in this paper will facilitate appropriate design rules for nano-pillar electrodes.

Relevance:

100.00%

Publisher:

Abstract:

Designing and implementing thread-safe multithreaded libraries can be a daunting task, as developers of these libraries need to ensure that their implementations are free from concurrency bugs, including deadlocks. The usual practice involves employing software testing and/or dynamic analysis to detect deadlocks. Their effectiveness is dependent on well-designed multithreaded test cases. Unsurprisingly, developing multithreaded tests is significantly harder than developing sequential tests. In this paper, we address the problem of automatically synthesizing multithreaded tests that can induce deadlocks. The key insight behind our approach is that a subset of the properties observed when a deadlock manifests in a concurrent execution can also be observed in a single-threaded execution. We design a novel, automatic, scalable and directed approach that identifies these properties and synthesizes a deadlock-revealing multithreaded test. The input to our approach is the library implementation under consideration and the output is a set of deadlock-revealing multithreaded tests. We have implemented our approach as part of a tool named OMEN. OMEN is able to synthesize multithreaded tests on many multithreaded Java libraries. Applying a dynamic deadlock detector on the execution of the synthesized tests results in the detection of a number of deadlocks, including 35 real deadlocks in classes documented as thread-safe. Moreover, our experimental results show that dynamic analysis on multithreaded tests that are either synthesized randomly or developed by third-party programmers is ineffective in detecting these deadlocks.
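To make the target behavior concrete, the sketch below shows the classic lock-order inversion that a deadlock-revealing multithreaded test drives two threads into. It is written in Python rather than Java, is not produced by OMEN, and uses acquisition timeouts so the script reports the deadlock instead of hanging.

```python
# Illustration only: two threads acquire two locks in opposite orders, the
# schedule a synthesized deadlock-revealing test would force.
import threading

lock_a, lock_b = threading.Lock(), threading.Lock()
barrier = threading.Barrier(2)          # forces the problematic interleaving
deadlocked = []

def worker(first, second, name):
    with first:
        barrier.wait()                  # both threads now hold their first lock
        if not second.acquire(timeout=1.0):
            deadlocked.append(name)     # could not get the second lock: deadlock
        else:
            second.release()

# Thread 1 takes A then B; thread 2 takes B then A -- opposite acquisition orders.
t1 = threading.Thread(target=worker, args=(lock_a, lock_b, "t1"))
t2 = threading.Thread(target=worker, args=(lock_b, lock_a, "t2"))
t1.start(); t2.start(); t1.join(); t2.join()

print("deadlock detected in:", deadlocked)   # both threads time out waiting
```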

Relevance:

100.00%

Publisher:

Abstract:

Testing is a crucial activity in systems development, since well-executed tests can expose software anomalies that can then be corrected while still in the development process, reducing costs. This dissertation presents a testing tool called SIT (Sistema de Testes, Test System) that supports the testing of Geographic Information Systems (GIS). GIS are characterized by the use of georeferenced spatial information, which can generate a large number of complex test cases. Traditional testing techniques are divided into functional and structural. In this work, SIT addresses functional testing, focusing on classic techniques such as equivalence partitioning and boundary value analysis. SIT also proposes the use of Fuzzy Logic as a tool to suggest a minimal set of tests to execute on the GIS, illustrating the benefits of the tool.
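As a small illustration of the classic functional techniques the tool focuses on, the sketch below applies boundary value analysis, together with valid/invalid equivalence classes, to a georeferenced coordinate input. The domain bounds and helper function are assumptions for demonstration, not part of SIT.

```python
# Boundary value analysis for a numeric input range, applied here to a
# geographic coordinate. The ranges and helper are illustrative, not SIT code.
def boundary_values(lo, hi, eps=0.000001):
    """Return boundary-value test inputs for the range [lo, hi]."""
    return [lo - eps, lo, lo + eps, (lo + hi) / 2, hi - eps, hi, hi + eps]

# Valid equivalence classes for geographic coordinates (degrees).
latitude_cases = boundary_values(-90.0, 90.0)
longitude_cases = boundary_values(-180.0, 180.0)

for lat in latitude_cases:
    # Values just outside the range fall into the invalid equivalence class
    # and should be rejected by the GIS under test.
    expected = -90.0 <= lat <= 90.0
    print(f"lat={lat:+.6f} expected_valid={expected}")
```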