901 results for Exception Handling. Exceptional Behavior. Exception Policy. Software Testing. Design Rules
Abstract:
Commercial off-the-shelf microprocessors are the core of low-cost embedded systems due to their programmability and cost-effectiveness. Recent advances in electronic technologies have allowed remarkable improvements in their performance. However, they have also made microprocessors more susceptible to transient faults induced by radiation. These non-destructive events (soft errors) may cause a microprocessor to produce a wrong computation result or lose control of a system, with catastrophic consequences. Therefore, soft error mitigation has become a compulsory requirement for an increasing number of applications, which operate from space down to ground level. In this context, this paper uses the concept of selective hardening, which aims at designing reduced-overhead and flexible mitigation techniques. Following this concept, a novel flexible version of the software-based fault recovery technique known as SWIFT-R is proposed. Our approach makes it possible to select different register subsets from the microprocessor register file to be protected in software. Thus, the design space is enriched with a wide spectrum of new partially protected versions, which offer more flexibility to designers. This permits finding the best trade-offs between performance, code size, and fault coverage. Three case studies have been developed to show the applicability and flexibility of the proposal.
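As a rough, hypothetical illustration of the selective-protection idea (the actual SWIFT-R transformation operates at the assembly level; the register names, data structures, and helpers below are invented for this sketch):

# Selective SWIFT-R-style protection sketch: registers chosen for protection are
# kept in three copies, and a majority vote repairs a single corrupted copy
# before the value is used. All names here are illustrative.

PROTECTED = {"r1", "r2"}              # register subset selected for software protection

def write_reg(regs, name, value):
    # Store the value and, for protected registers, refresh the two extra copies.
    regs[name] = value
    if name in PROTECTED:
        regs[name + "_c1"] = value
        regs[name + "_c2"] = value

def read_reg(regs, name):
    # Majority-vote the three copies of a protected register and repair them.
    if name not in PROTECTED:
        return regs[name]
    a, b, c = regs[name], regs[name + "_c1"], regs[name + "_c2"]
    value = a if a in (b, c) else b   # at most one copy is assumed corrupted
    write_reg(regs, name, value)      # restore agreement among the three copies
    return value

Leaving some registers out of PROTECTED trades fault coverage for lower code-size and performance overhead, which is the trade-off space the paper explores.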
Abstract:
Urban Mass Transportation Administration, Washington, D.C.
Abstract:
Transportation Department, Office of University Research, Washington, D.C.
Abstract:
Distributed applications are exposed as reusable components that are dynamically discovered and integrated to create new applications. These new applications, in the form of aggregate services, are vulnerable to failure due to the autonomous and distributed nature of their integrated components. This vulnerability creates the need for adaptability in aggregate services. The need for adaptation is accentuated for complex long-running applications such as those found in scientific Grid computing, where distributed computing nodes may participate to solve computation and data-intensive problems. Such applications integrate services for coordinated problem solving in areas such as Bioinformatics. For such applications, when a constituent service fails, the application fails, even though there are other nodes that could substitute for the failed service. This concern is not addressed in the specification of high-level composition languages such as the Business Process Execution Language (BPEL). We propose an approach to transparently autonomizing existing BPEL processes in order to make them modifiable at runtime and more resilient to failures in their execution environment. By introducing adaptive behavior transparently, adaptation preserves the original business logic of the aggregate service and does not tangle the code for adaptive behavior with that of the aggregate service. The major contributions of this dissertation are as follows. First, we assessed the effectiveness of BPEL language support in developing adaptive mechanisms; as a result, we identified the strengths and limitations of BPEL and devised strategies to address those limitations. Second, we developed a technique to transparently enhance existing BPEL processes so that they support dynamic adaptation, proposing a framework that uses transparent shaping and generative programming to make BPEL processes adaptive. Third, we developed a technique to dynamically discover and bind to substitute services; our evaluation showed that dynamic utilization of components improves the flexibility of adaptive BPEL processes. Fourth, we developed an extensible policy-based technique to specify how to handle exceptional behavior, along with a generic component that introduces adaptive behavior for multiple BPEL processes. Fifth, we identified ways to apply our work to facilitate adaptability in composite Grid services.
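A minimal sketch of the policy-based idea, assuming a simple mapping from fault types to recovery actions; the fault names, policy table, and callables are illustrative and are not the dissertation's actual framework or API:

# A policy table maps fault types raised by partner services to recovery actions,
# so one generic handler can serve multiple BPEL processes. Illustrative only.
RECOVERY_POLICY = {
    "ServiceUnavailable": "substitute",   # discover and bind to an equivalent service
    "Timeout": "retry",                   # re-invoke the same partner service
}

def handle_fault(fault_type, invoke, discover_substitute):
    action = RECOVERY_POLICY.get(fault_type, "abort")
    if action == "retry":
        return invoke()
    if action == "substitute":
        substitute = discover_substitute()   # runtime discovery of a replacement service
        return substitute()
    raise RuntimeError("unrecoverable fault: " + fault_type)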
Abstract:
This research has explored the relationship between system test complexity and tacit knowledge. It is proposed as part of this thesis that the process of system testing (comprising test planning, test development, test execution, test fault analysis, test measurement, and test case management) is directly affected both by complexity associated with the system under test and by other sources of complexity that are independent of the system under test but related to the wider process of system testing. While a certain amount of knowledge related to the system under test is inherently tacit in nature, and therefore difficult to make explicit, it has been found that a significant amount of knowledge relating to these other sources of complexity can indeed be made explicit. While the importance of explicit knowledge has been reinforced by this research, there has been a lack of evidence to suggest that the availability of tacit knowledge to a test team is of any less importance to the process of system testing when operating in a traditional software development environment. Participants commonly expressed the sentiment that, even though a considerable amount of explicit knowledge relating to the system is freely available, a good deal of the knowledge about the system under test that is demanded for effective system testing is actually tacit in nature (approximately 60% of participants operating in a traditional development environment, and 60% of participants operating in an agile development environment, expressed similar sentiments). To cater for the availability of tacit knowledge relating to the system under test, and indeed for both the explicit and tacit knowledge required by system testing in general, an appropriate knowledge management structure needs to be in place. This would appear to be required irrespective of the employed development methodology.
Abstract:
Modern software application testing, such as the testing of software driven by graphical user interfaces (GUIs) or leveraging event-driven architectures in general, requires paying careful attention to context. Model-based testing (MBT) approaches first acquire a model of an application, then use the model to construct test cases covering relevant contexts. A major shortcoming of state-of-the-art automated model-based testing is that many test cases proposed by the model are not actually executable. These infeasible test cases threaten the integrity of the entire model-based suite, and any coverage of contexts the suite aims to provide. In this research, I develop and evaluate a novel approach for classifying the feasibility of test cases. I identify a set of pertinent features for the classifier, and develop novel methods for extracting these features from the outputs of MBT tools. I use a supervised logistic regression approach to obtain a model of test case feasibility from a randomly selected training suite of test cases. I evaluate this approach with a set of experiments. The outcomes of this investigation are as follows: I confirm that infeasibility is prevalent in MBT, even for test suites designed to cover a relatively small number of unique contexts. I confirm that the frequency of infeasibility varies widely across applications. I develop and train a binary classifier for feasibility with average overall error, false positive, and false negative rates under 5%. I find that unique event IDs are key features of the feasibility classifier, while model-specific event types are not. I construct three types of features from the event IDs associated with test cases, and evaluate the relative effectiveness of each within the classifier. To support this study, I also develop a number of tools and infrastructure components for scalable execution of automated jobs, which use state-of-the-art container and continuous integration technologies to enable parallel test execution and the persistence of all experimental artifacts.
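A hedged sketch of the kind of classifier described above, assuming each test case is represented by its sequence of event IDs; the data, labels, and bag-of-event-ID encoding are illustrative only:

from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# Each training test case is a sequence of event IDs with a known feasibility label.
train_cases = ["e1 e7 e3", "e2 e2 e9", "e1 e4 e4", "e5 e9 e2"]
train_labels = [1, 0, 1, 0]                        # 1 = feasible, 0 = infeasible

vectorizer = CountVectorizer()                     # one feature per unique event ID
X = vectorizer.fit_transform(train_cases)
classifier = LogisticRegression().fit(X, train_labels)

new_case = vectorizer.transform(["e1 e3 e9"])
print(classifier.predict(new_case))                # predicted feasibility of a new test case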
Abstract:
With the increasing complexity of today's software, the software development process is becoming highly time and resource consuming. The increasing number of software configurations, input parameters, usage scenarios, supporting platforms, external dependencies, and versions plays an important role in expanding the costs of maintaining and repairing unforeseeable software faults. To repair software faults, developers spend considerable time in identifying the scenarios leading to those faults and root-causing the problems. While software debugging remains largely manual, this is not the case for software testing and verification. The goal of this research is to improve the software development process in general, and the software debugging process in particular, by devising techniques and methods for automated software debugging, which leverage the advances in automatic test case generation and replay. In this research, novel algorithms are devised to discover faulty execution paths in programs by utilizing already existing software test cases, which can be either automatically or manually generated. The execution traces (or, alternatively, the sequence covers) of the failing test cases are extracted. Afterwards, commonalities between these test case sequence covers are extracted, processed, analyzed, and then presented to the developers in the form of subsequences that may be causing the fault. The hypothesis is that code sequences shared by a number of test cases that fail for the same reason resemble the faulty execution path, and hence the search space for the faulty execution path can be narrowed down by using a large number of test cases. To achieve this goal, an efficient algorithm is implemented for finding common subsequences among a set of code sequence covers. Optimization techniques are devised to generate shorter and more logical sequence covers, and to select subsequences with a high likelihood of containing the root cause among the set of all possible common subsequences. A hybrid static/dynamic analysis approach is designed to trace back the common subsequences from the end to the root cause. A debugging tool is created to enable developers to use the approach, and to integrate it with an existing Integrated Development Environment. The tool is also integrated with the environment's program editors so that developers can benefit from both the tool's suggestions and their source code counterparts. Finally, a comparison between the developed approach and the state-of-the-art techniques shows that developers need only to inspect a small number of lines in order to find the root cause of the fault. Furthermore, experimental evaluation shows that the algorithm optimizations lead to better results in terms of both the algorithm running time and the output subsequence length.
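A simplified sketch of the core idea, assuming the execution traces of failing test cases are available as lists of executed code elements; the pairwise reduction below is illustrative and is not the dissertation's optimized algorithm:

from difflib import SequenceMatcher
from functools import reduce

def common_subsequence(a, b):
    # Collect, in order, the elements that appear in matching blocks of both traces.
    shared = []
    for block in SequenceMatcher(None, a, b).get_matching_blocks():
        shared.extend(a[block.a:block.a + block.size])
    return shared

failing_traces = [
    ["init", "parse", "lookup", "format", "write", "crash"],
    ["init", "lookup", "format", "log", "write", "crash"],
    ["init", "validate", "lookup", "format", "write", "crash"],
]
# The surviving subsequence is a candidate for the shared faulty execution path.
print(reduce(common_subsequence, failing_traces))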
Abstract:
Abstract – Background – The software effort estimation research area aims to improve the accuracy of this estimation in software projects and activities. Aims – This study describes the development and usage of a web application to collect data generated from the Planning Poker estimation process, and the analysis of the collected data to investigate the impact of revising previous estimates when conducting similar estimates in a Planning Poker context. Method – Software activities were estimated by Universidade Tecnológica Federal do Paraná (UTFPR) computer students using Planning Poker, with and without revising previous similar activities, storing data regarding the decision-making process. The collected data was then used to investigate the impact that revising similar, previously executed activities has on the accuracy of software effort estimates. Obtained Results – The UTFPR computer students were divided into 14 groups. Eight of them showed an accuracy increase in more than half of their estimates, three had roughly the same accuracy in more than half of their estimates, and only three showed a loss of accuracy in more than half of their estimates. Conclusion – Reviewing similar, previously executed software activities when using Planning Poker led to more accurate software estimates in most cases and can therefore improve the software development process.
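For illustration only, since the abstract does not name the accuracy measure used, the magnitude of relative error (MRE) is one common way to score a single estimate against the actual effort:

def mre(estimated_effort, actual_effort):
    # Relative size of the estimation error; lower values mean a more accurate estimate.
    return abs(estimated_effort - actual_effort) / actual_effort

print(mre(8.0, 10.0))   # 0.2 -> the estimate was off by 20% of the actual effort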
Abstract:
The main theme covered by this dissertation is safety, set in the context of automatic machinery for secondary woodworking. The thesis describes in detail the design of a software module for CNC machining centers that protects the operator against hazards and reports errors in the machine's safety management. The module was designed during an internship at the SCM Group technical department. Its development is addressed step by step in a detailed way: first the company and the reference framework are introduced, and then all the design choices are explained and justified. The discussion begins with a detailed analysis of the standards concerning woodworking machines and safety-related software. In this way, a clear and linear procedure can be established to develop and implement the internal structure of the module, its interface, and its application to specific safety-critical conditions. Afterwards, particular attention is paid to software testing, with the development of a comprehensive test procedure for the module, and to diagnostics, especially oriented towards signal management in IoT mode. Finally, the safety module is used as an anti-regression tool to initiate a design improvement of the machine control program. The refactoring steps performed in the process are explained in detail, and the SCENT approach is introduced to test the result.
Abstract:
Objective: To outline the major methodological issues appropriate to the use of the population impact number (PIN) and the disease impact number (DIN) in health policy decision making. Design: Review of literature and calculation of PIN and DIN statistics in different settings. Setting: Previously proposed extensions to the number needed to treat (NNT), the DIN and the PIN, which give a population perspective to this measure. Main results: The PIN and DIN allow us to compare the population impact of different interventions either within the same disease or across different diseases or conditions. The primary studies used for relative risk estimates should have outcomes, time periods, and comparison groups that are congruent and relevant to the local setting. These need to be combined with local data on disease rates and population size. Depending on the particular problem, the target may be disease incidence or prevalence, and the effects of interest may be either the incremental impact or the total impact of each intervention. For practical application, it will be important to use sensitivity analyses to determine plausible intervals for the impact numbers. Conclusions: Attention to various methodological issues will permit the DIN and PIN to be used to assist health policy makers in assigning a population perspective to measures of risk.
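Hedged reconstruction of the usual formulas (the abstract itself gives none): with $\mathrm{ARR}$ the absolute risk reduction taken from the primary studies, $P_e$ the proportion of people with the disease who are exposed to the intervention, and $P_d$ the prevalence of the disease in the local population,

$$\mathrm{NNT} = \frac{1}{\mathrm{ARR}}, \qquad \mathrm{DIN} \approx \frac{1}{\mathrm{ARR}\,P_e}, \qquad \mathrm{PIN} \approx \frac{1}{\mathrm{ARR}\,P_e\,P_d},$$

so the impact numbers grow as intervention uptake or disease prevalence falls, which is why congruent local data on disease rates and population size are needed alongside the relative risk estimates.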
Abstract:
Scientific dissertation submitted to obtain the degree of Master in Informatics and Computer Engineering
Abstract:
Sheet-metal handling robots are very useful for metalworking companies. In fact, there are more and more cutting machines based on water jet, laser, or other processes, in which robots play an important role in loading and unloading the material. The work carried out presents new solutions to the handling systems available on the market and makes it possible to reduce material handling costs. This design is essentially intended for sheets moving along a straight-line path, lifting them from the machine and depositing them on a support structure (or vice versa). The advantage to bear in mind is the reduction of material handling costs. This work presents the design methodology for an automated robot that transports sheets weighing up to 3500 kg, based on the EC3-P1.8 standards and the Finite Element Method (FEM). The following topics were addressed during the project: initial approach to the geometry in Solidworks; structural design using finite element software (Solidworks); design of the chains, sprockets/discs or crowns, and bearings; sizing and selection of the geared motors, vacuum pump, and suction cups; calculation of the loads on each structural member using the structural analysis software Multiframe3D, and the corresponding design of the bolted and welded connections; preparation of the final design drawings, manufacturing processes, and costs; sizing of the drive system, MG, and layout of the devices in the electrical cabinet. In conclusion, the project was successfully carried out and an optimized final solution was obtained with the help of important tools such as FEM, resulting in equipment whose structural and handling-system loads were optimized, yielding an efficient, robust, safe, and low-cost machine.
Abstract:
Objectives: This study analyzed the moderating role of partners' support and satisfaction with healthcare services in the relationship between psychological morbidity and adherence to diet in patients with type 2 diabetes (T2DM). Methods: Participants were 387 recently diagnosed T2DM patients who answered the following instruments: Revised Summary of Diabetes Self-Care Activities Measure, Hospital Anxiety and Depression Scales, Multidimensional Diabetes Questionnaire, and Patient Satisfaction Questionnaire. Results: Partners' positive and negative support moderated the relationship between psychological morbidity and adherence to diet. Satisfaction with healthcare services also moderated the relationship between psychological morbidity and adherence to diet. Conclusions: Intervention programs to promote adherence to diet in patients with type 2 diabetes should focus on partners' support and patient satisfaction with healthcare services.
Abstract:
Master's dissertation in Systems Engineering