993 results for code analysis
Abstract:
The new reactor concepts proposed in the Generation IV International Forum (GIF) are conceived to improve the use of natural resources, reduce the amount of high-level radioactive waste and excel in reliability and safe operation. Among these novel designs, sodium fast reactors (SFRs) stand out due to their technological feasibility, as demonstrated in several countries during the last decades. As part of the contribution of EURATOM to GIF, the CP-ESFR is a collaborative project with the objective, among others, of performing extensive analysis of safety issues involving renewed SFR demonstrator designs. The verification, by code-to-code comparison, of computational tools able to simulate the plant behaviour under postulated accident conditions was identified as a key point in ensuring reactor safety. In this line, several organisations employed coupled neutronic and thermal-hydraulic system codes able to simulate the complex and specific multi-physics phenomena of this particular fast reactor technology. The introduction of this paper discusses the framework of the study; the second section describes the envisaged plant design and the commonly agreed modelling guidelines. The third section presents a comparative analysis of the calculations performed by each organisation applying its models and codes to a commonly agreed transient, with the objective of harmonising the models as well as validating the implementation of all relevant physical phenomena in the different system codes.
Abstract:
The new reactor concepts proposed in the Generation IV International Forum require the development and validation of computational tools able to assess their safety performance. In the first part of this paper, the models of the ESFR design developed by several organisations in the framework of the CP-ESFR project were presented and their reliability was validated via a benchmarking exercise. This second part of the paper applies those tools to the analysis of design basis condition (DBC) scenarios of the reference design. Further, the paper also introduces the main features of the core optimisation process carried out within the project, with the objective of enhancing core safety performance by reducing the positive coolant density reactivity effect. The influence of this optimised core design on the reactor safety performance during the previously analysed transients is also discussed. The conclusion provides an overview of the work performed by the project partners towards the development and enhancement of computational tools specifically tailored to evaluating the safety performance of Generation IV innovative nuclear reactor designs.
Abstract:
The simulation of design basis accidents in a containment building is usually conducted with a lumped parameter model. The codes normally used by Westinghouse Electric Company (WEC) for that licensing analysis are WGOTHIC or COCO, which provide an adequate estimate of the overall peak temperature and pressure of the containment. However, for a detailed study of the thermal-hydraulic behavior in every room and compartment of the containment building, it can be more convenient to model the containment with a detailed 3D representation of the geometry of the whole building. The main objective of this project is to obtain both a standard Westinghouse PWR and an AP1000® containment model for a CFD code, in order to analyze the detailed thermal-hydraulic behavior during a design basis accident. This paper presents the development and testing of both containment models.
Abstract:
The results of empirical studies are limited to particular contexts, difficult to generalise and the studies themselves are expensive to perform. Despite these problems, empirical studies in software engineering can be made effective and they are important to both researchers and practitioners. The key to their effectiveness lies in the maximisation of the information that can be gained by examining existing studies, conducting power analyses for an accurate minimum sample size and benefiting from previous studies through replication. This approach was applied in a controlled experiment examining the combination of automated static analysis tools and code inspection in the context of verification and validation (V&V) of concurrent Java components. The combination of these V&V technologies was shown to be cost-effective despite the size of the study, which thus contributes to research in V&V technology evaluation.
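The power analysis step mentioned above can be made concrete with a short sketch. Below is a minimal, illustrative calculation of the per-group sample size for a two-sample comparison using the standard normal-approximation formula; the effect size, significance level and power are assumed example values, not figures taken from the study.

from math import ceil
from statistics import NormalDist

def min_sample_size(effect_size: float, alpha: float = 0.05, power: float = 0.80) -> int:
    """Approximate per-group sample size for a two-sample comparison.

    Normal-approximation formula: n = 2 * ((z_{1-alpha/2} + z_{power}) / d) ** 2,
    where d is Cohen's d, the standardised difference between the two groups.
    """
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # two-sided test
    z_power = NormalDist().inv_cdf(power)
    return ceil(2 * ((z_alpha + z_power) / effect_size) ** 2)

# Example: detecting a medium effect (d = 0.5) at alpha = 0.05 with 80% power
print(min_sample_size(0.5))  # 63 participants per group under this approximation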
Abstract:
Timing analysis of assembler code is essential to achieve the strongest possible guarantee of correctness for safety-critical, real-time software. Previous work has shown how timing constraints on control-flow paths through high-level language programs can be formalised using the semantics of the statements comprising the path. We extend these results to assembler-level code, where it becomes possible to not only determine timing constraints, but also to verify them against the known execution times for each instruction. A minimal formal model is developed with both a weakest liberal precondition and a strongest postcondition semantics. However, despite the formalism’s simplicity, it is shown that complex timing behaviour associated with instruction pipelining and iterative code can be modelled accurately.
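As a rough illustration of the path-level check described above (summing known per-instruction execution times and comparing the total against a constraint), the sketch below uses hypothetical instruction names, cycle counts and a loop bound; it does not reproduce the paper's wlp/sp formalism and deliberately ignores pipelining effects.

# Hypothetical per-instruction execution times in cycles (not taken from the paper).
CYCLES = {"mov": 1, "add": 1, "mul": 3, "ld": 2, "st": 2, "bne": 2}

def path_time(path, loop_bound=1):
    """Worst-case time of a straight-line instruction path, in cycles.

    A loop body is accounted for by multiplying its cost by a known iteration
    bound; pipelining and cache effects are ignored in this simplified model.
    """
    return loop_bound * sum(CYCLES[op] for op in path)

def verify(path, deadline_cycles, loop_bound=1):
    """Check a timing constraint: does the path complete within the deadline?"""
    total = path_time(path, loop_bound)
    return total <= deadline_cycles, total

# Example: a loop body executed at most 10 times must finish within 100 cycles.
print(verify(["ld", "mul", "add", "st", "bne"], deadline_cycles=100, loop_bound=10))
# (True, 100)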
Abstract:
Krasimir Manev, Neli Maneva, Haralambi Haralambiev - The business rules (BR) approach was introduced at the end of the last century to make the specification of enterprise software easier and to let it better satisfy the needs of the respective business. Today most of the goals of the approach have been achieved, but research and practical efforts towards a "formal basis for reverse extraction of BRs from existing systems" continue. The article presents an approach for extracting BRs from program code, based on static code analysis methods. Some advantages and disadvantages of such an approach are pointed out.
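As a small, hedged illustration of extracting rule candidates from code by static analysis (the article does not target Python specifically, and its method is not reproduced here), the sketch below walks a Python abstract syntax tree and collects the conditions of if-statements as candidate business rules; the sample source is invented for the example.

import ast  # ast.unparse below requires Python 3.9 or later

def candidate_rules(source: str):
    """Collect if-conditions from source code as candidate business rules."""
    rules = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.If):
            rules.append(ast.unparse(node.test))  # textual form of the condition
    return rules

sample = """
def approve_order(order):
    if order.total > 1000 and not order.customer.is_verified:
        return "manual_review"
    if order.items_count == 0:
        return "reject"
    return "approve"
"""

for rule in candidate_rules(sample):
    print(rule)
# order.total > 1000 and not order.customer.is_verified
# order.items_count == 0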
Abstract:
Scientists planning to use underwater stereoscopic image technologies are often faced with numerous problems during the methodological implementation: commercial equipment is too expensive; the setup or calibration is too complex; or the image processing (i.e. measuring objects in the stereo-images) is too complicated to be performed without a time-consuming phase of training and evaluation. The present paper addresses some of these problems and describes a workflow for stereoscopic measurements for marine biologists. It also provides instructions on how to assemble an underwater stereo-photographic system with two digital consumer cameras and gives step-by-step guidelines for setting up the hardware. The second part details a software procedure to correct stereo-image pairs for lens distortions, which is especially important when using cameras with non-calibrated optical units. The final part presents a guide to the process of measuring the lengths (or distances) of objects in stereoscopic image pairs. To reveal the applicability and the restrictions of the described systems, and to test the effects of different types of camera (a compact camera and a single lens reflex (SLR) type), experiments were performed to determine the precision and accuracy of two generic stereo-imaging units: a diver-operated system based on two Olympus Mju 1030SW compact cameras and a cable-connected observatory system based on two Canon 1100D SLR cameras. In the simplest setup, without any correction for lens distortion, the low-budget Olympus Mju 1030SW system achieved mean accuracy errors (percentage deviation of a measurement from the object's real size) between 10.2% and -7.6% (overall mean value: -0.6%), depending on the size, orientation and distance of the measured object from the camera. With the SLR system, very similar values between 10.1% and -3.4% (overall mean value: -1.2%) were observed. Correction of the lens distortion significantly improved the mean accuracy errors of both systems. Moreover, system precision (the spread of the accuracy) improved significantly in both systems. Neither the use of a wide-angle converter nor multiple reassembly of the system had a significant negative effect on the results. The study shows that underwater stereo-photography, independent of the system, has a high potential for robust and non-destructive in situ sampling and can be used without prior specialist training.
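To make the measurement step more tangible, the following sketch shows textbook triangulation for turning matched points from a rectified, calibrated stereo pair into a metric length; the focal length, baseline and pixel coordinates are invented example values, not calibration data from the paper, and pixel coordinates are assumed to be given relative to the principal point.

from math import dist  # Euclidean distance between two points (Python 3.8+)

def triangulate(xl, yl, xr, focal_px, baseline_m):
    """3D position (in metres) of a point seen at column xl (left) and xr (right).

    Assumes a rectified, calibrated pair: pinhole model, same image row in both
    views, left camera at the origin, coordinates relative to the principal point.
    """
    disparity = xl - xr                      # pixels
    z = focal_px * baseline_m / disparity    # depth
    return (xl * z / focal_px, yl * z / focal_px, z)

def object_length(end_a, end_b, focal_px=1400.0, baseline_m=0.20):
    """Length between two endpoints, each given as (x_left, y_left, x_right)."""
    return dist(triangulate(*end_a, focal_px, baseline_m),
                triangulate(*end_b, focal_px, baseline_m))

# Example: two endpoints of a fish clicked in both images (pixel coordinates).
print(round(object_length((620, 300, 540), (980, 310, 896)), 3))  # 0.801 metres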
Abstract:
Abstract not available
Abstract:
The purpose of this paper is twofold. Firstly, it presents a preliminary and ethnomethodologically-informed analysis of the way in which the growing structure of a particular program's code was ongoingly derived from its earliest stages. This was motivated by an interest in how the detailed structure of a completed program 'emerged from nothing' as a product of the concrete practices of the programmer within the framework afforded by the language. The analysis is broken down into three sections that discuss: the beginnings of the program's structure; the incremental development of structure; and finally the code productions that constitute the structure and the importance of the programmer's stock of knowledge. The discussion attempts to understand and describe the emerging structure of code rather than focusing on generating 'requirements' for supporting the production of that structure. Due to time and space constraints, however, only a relatively cursory examination of these features was possible. Secondly, the paper presents some thoughts on the difficulties associated with the analytic (in particular, ethnographic) study of code, drawing on general problems as well as issues arising from the difficulties and failings encountered as part of the analysis presented in the first section.
Abstract:
The present article reflects the progress of an ongoing master's dissertation on language engineering. The main goal of the work described here is to infer a programmer's profile through the analysis of their source code. After such analysis, the programmer is placed on a scale that characterizes their language abilities. There are several potential applications for such profiling, namely the evaluation of a programmer's skills and proficiency in a given language, or the continuous evaluation of a student's progress in a programming course. Throughout the course of this project, and as a proof of concept, a tool that allows the automatic profiling of a Java programmer is under development. This tool is also introduced in the paper and its preliminary outcomes are discussed.
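Purely as a hedged sketch of the kind of signal such a profiler might compute (the dissertation's actual criteria and scale are not reproduced here), the snippet below scans a Java source file for a few hand-picked language features and reports how often each is used; the feature list, regular expressions and file name are assumptions made for illustration.

import re

# Hypothetical proficiency signals: Java constructs and crude regexes to spot them.
FEATURES = {
    "generics": r"<[A-Z][A-Za-z0-9_]*(?:\s*,\s*[A-Z][A-Za-z0-9_]*)*>",
    "lambdas": r"->",
    "streams": r"\.stream\(\)",
    "try_with_resources": r"try\s*\(",
}

def feature_profile(java_source: str) -> dict:
    """Count occurrences of each feature in the given Java source text."""
    return {name: len(re.findall(pattern, java_source))
            for name, pattern in FEATURES.items()}

with open("Example.java", encoding="utf-8") as f:  # hypothetical input file
    print(feature_profile(f.read()))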
Abstract:
Code patterns, including programming patterns and design patterns, are good references for programming language feature improvement and software re-engineering. However, to our knowledge, no existing research has attempted to detect code patterns based on code clone detection technology. In this study, we build upon previous work and propose to detect and analyze code patterns from a collection of open source projects using NiPAT technology. Because design patterns are most closely associated with object-oriented languages, we choose Java and Python projects to conduct our study. The tool we use for detecting patterns is NiPAT, a pattern-detection tool originally developed for the TXL programming language, based on the NiCad clone detector. We extend NiPAT to the Java and Python programming languages. Then, we try to identify all the patterns from the pattern report and classify them into several categories. At the end of the study, we analyze all the patterns and compare the differences between Java and Python patterns.
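NiPAT and NiCad are not reproduced here, but the core idea of grouping near-identical fragments can be sketched briefly: the snippet below normalises Python code fragments by blanking identifiers and literals, then groups fragments whose normalised token streams coincide as rough pattern candidates; the fragment list is invented for illustration.

import io
import keyword
import token
import tokenize
from collections import defaultdict

def normalise(fragment: str) -> tuple:
    """Token stream with identifiers and literals blanked, as in clone detection."""
    out = []
    for tok in tokenize.generate_tokens(io.StringIO(fragment).readline):
        if tok.type == token.NAME and not keyword.iskeyword(tok.string):
            out.append("ID")
        elif tok.type in (token.NUMBER, token.STRING):
            out.append("LIT")
        elif tok.type in (token.NEWLINE, token.NL, token.INDENT,
                          token.DEDENT, token.ENDMARKER, token.COMMENT):
            continue
        else:
            out.append(tok.string)
    return tuple(out)

def group_fragments(fragments):
    """Group fragments whose normalised form is identical (pattern candidates)."""
    groups = defaultdict(list)
    for frag in fragments:
        groups[normalise(frag)].append(frag)
    return [g for g in groups.values() if len(g) > 1]

# Two fragments differing only in names and constants form one candidate pattern.
snippets = ["total = price * 3 + tax", "amount = cost * 5 + fee", "print(done)"]
print(group_fragments(snippets))
# [['total = price * 3 + tax', 'amount = cost * 5 + fee']]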
Abstract:
Universidade Estadual de Campinas. Faculdade de Educação Física