979 results for program code generation
Abstract:
Modern software applications are becoming more dependent on database management systems (DBMSs). DBMSs are usually used as black boxes by software developers. For example, Object-Relational Mapping (ORM) is one of the most popular database abstraction approaches that developers use nowadays. Using ORM, objects in Object-Oriented languages are mapped to records in the database, and object manipulations are automatically translated to SQL queries. As a result of such conceptual abstraction, developers do not need deep knowledge of databases; however, all too often this abstraction leads to inefficient and incorrect database access code. Thus, this thesis proposes a series of approaches to improve the performance of database-centric software applications that are implemented using ORM. Our approaches focus on detecting and troubleshooting inefficient database accesses (i.e., performance problems) in the source code, and we rank the detected problems based on their severity. We first conduct an empirical study on the maintenance of ORM code in both open source and industrial applications. We find that ORM performance-related configurations are rarely tuned in practice, and there is a need for tools that can help improve/tune the performance of ORM-based applications. Thus, we propose approaches along two dimensions to help developers improve the performance of ORM-based applications: 1) helping developers write more performant ORM code; and 2) helping developers tune ORM configurations. To provide tooling support to developers, we first propose static analysis approaches to detect performance anti-patterns in the source code. We automatically rank the detected anti-pattern instances according to their performance impact. Our study finds that by resolving the detected anti-patterns, application performance can be improved by 34% on average. We then discuss our experience and lessons learned when integrating our anti-pattern detection tool into industrial practice. We hope our experience can help improve the industrial adoption of future research tools. However, as static analysis approaches are prone to false positives and lack runtime information, we also propose dynamic analysis approaches to further help developers improve the performance of their database access code. We propose automated approaches to detect redundant data access anti-patterns in the database access code, and our study finds that resolving such redundant data access anti-patterns can improve application performance by an average of 17%. Finally, we propose an automated approach to tune performance-related ORM configurations using both static and dynamic analysis. Our study shows that our approach can help improve application throughput by 27--138%. Through our case studies on real-world applications, we show that all of our proposed approaches can provide valuable support to developers and help improve application performance significantly.
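To make this kind of anti-pattern concrete, the sketch below (not taken from the thesis, which targets Java ORM frameworks) uses SQLAlchemy to illustrate the classic N+1 selects problem and an eager-loading fix; the Author/Book schema is an illustrative assumption.

```python
# Minimal sketch of a common ORM performance anti-pattern and its fix,
# using SQLAlchemy as a stand-in for the ORM frameworks studied in the thesis.
from sqlalchemy import Column, ForeignKey, Integer, String, create_engine
from sqlalchemy.orm import Session, declarative_base, relationship, selectinload

Base = declarative_base()

class Author(Base):
    __tablename__ = "authors"
    id = Column(Integer, primary_key=True)
    name = Column(String)
    books = relationship("Book", back_populates="author")

class Book(Base):
    __tablename__ = "books"
    id = Column(Integer, primary_key=True)
    title = Column(String)
    author_id = Column(Integer, ForeignKey("authors.id"))
    author = relationship("Author", back_populates="books")

engine = create_engine("sqlite://")     # in-memory database for illustration
Base.metadata.create_all(engine)

with Session(engine) as session:
    # Anti-pattern: each access to `author.books` inside the loop triggers a
    # separate lazy-loading SELECT, so listing N authors issues N+1 queries.
    for author in session.query(Author).all():
        print(author.name, [b.title for b in author.books])

    # Resolution: eager-load the association so the data is fetched in a
    # constant number of queries, independent of the number of authors.
    for author in session.query(Author).options(selectinload(Author.books)).all():
        print(author.name, [b.title for b in author.books])
```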
Abstract:
In Germany, the upscaling algorithm is currently the standard approach for evaluating the PV power produced in a region. This method involves spatially interpolating the normalized power of a set of reference PV plants to estimate the power production of another set of unknown plants. As little information on the performance of this method could be found in the literature, the first goal of this thesis is to conduct an analysis of the uncertainty associated with this method. It was found that the method can lead to large errors when the set of reference plants has different characteristics or weather conditions than the set of unknown plants, and when the set of reference plants is small. Based on these preliminary findings, an alternative method is proposed for calculating the aggregate power production of a set of PV plants. A probabilistic approach has been chosen, by which a power production is calculated at each PV plant from corresponding weather data. The probabilistic approach consists of evaluating the power for each frequently occurring value of the parameters and estimating the most probable value by averaging these power values weighted by their frequency of occurrence. The most frequent parameter sets (e.g. module azimuth and tilt angle) and their frequency of occurrence have been assessed on the basis of a statistical analysis of the parameters of approximately 35,000 PV plants. It has been found that the plant parameters are statistically dependent on the size and location of the PV plants. Accordingly, separate statistical values have been assessed for 14 classes of nominal capacity and 95 regions in Germany (two-digit zip-code areas). The performance of the upscaling and probabilistic approaches has been compared on the basis of 15-min power measurements from 715 PV plants provided by the German distribution system operator LEW Verteilnetz. It was found that the error of the probabilistic method is smaller than that of the upscaling method when the number of reference plants is sufficiently large (>100 reference plants in the case study considered in this chapter). When the number of reference plants is limited (<50 reference plants for the considered case study), it was found that the proposed approach provides a noticeable gain in accuracy with respect to the upscaling method.
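A minimal numerical sketch of the frequency-weighted averaging described above follows; the parameter-set frequencies and the power model are placeholder assumptions, not values from the thesis.

```python
# Probabilistic aggregation sketch: the expected power of a plant is the
# average of the power computed for each frequently occurring parameter set,
# weighted by the frequency of that parameter set.
import numpy as np

# Hypothetical frequency table for one capacity class / zip-code region:
# (module tilt in degrees, azimuth in degrees) -> relative frequency.
parameter_sets = [(30, 180), (20, 180), (30, 150), (45, 210)]
frequencies = np.array([0.45, 0.25, 0.20, 0.10])  # sums to 1

def plant_power(tilt_deg, azimuth_deg, irradiance_w_m2, nominal_kw):
    """Toy power model: scale nominal power by irradiance and a crude
    orientation factor (placeholder for a real PV performance model)."""
    orientation = np.cos(np.radians(tilt_deg - 30)) * \
                  np.cos(np.radians((azimuth_deg - 180) / 2))
    return nominal_kw * (irradiance_w_m2 / 1000.0) * max(float(orientation), 0.0)

def probabilistic_power(irradiance_w_m2, nominal_kw):
    """Frequency-weighted average over the most frequent parameter sets."""
    powers = np.array([plant_power(t, a, irradiance_w_m2, nominal_kw)
                       for t, a in parameter_sets])
    return float(np.dot(frequencies, powers))

print(probabilistic_power(irradiance_w_m2=800.0, nominal_kw=10.0))
```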
Abstract:
Effective natural resource policy depends on knowing what is needed to sustain a resource and building the capacity to identify, develop, and implement flexible policies. This retrospective case study applies resilience concepts to a 16-year citizen science program and vernal pool regulatory development process in Maine, USA. We describe how citizen science improved adaptive capacities for innovative and effective policies to regulate vernal pools. We identified two core program elements that allowed people to act within narrow windows of opportunity for policy transformation: (1) the simultaneous generation of useful, credible scientific knowledge and the construction of networks among diverse institutions, and (2) the formation of diverse leadership that promoted individual and collective abilities to identify problems and propose policy solutions. If citizen science program leaders want to promote social-ecological systems resilience and natural resource policies as outcomes, we recommend they create a system for internal project evaluation, publish scientific studies using citizen science data, pursue resources for program sustainability, and plan for leadership diversity and informal networks to foster adaptive governance.
Abstract:
Since at least the 1980s, a growing number of companies have set up an ethics or a compliance program within their organization. However, in the field of business management, there is a paucity of research studies concerning these management systems. This observation warranted the present investigation of one company's compliance program. Compliance programs are set up so that individuals working within an organization observe the laws and regulations which pertain to their work. This study used a constructivist grounded theory methodology to examine the process by which a specific compliance program, that of Siemens Canada Limited, was implemented throughout its organization. In conformity with this methodology, instead of proceeding with the investigation in accordance with a particular theoretical framework, the study established a number of theoretical constructs used strictly as reference points. The study's research question was stated as: what are the characteristics of the process by which Siemens' compliance program integrated itself into the existing organizational structure and gained employee acceptance? Data consisted of documents produced by the company and of interviews done with twenty-four managers working for Siemens Canada Limited. The researcher used QSR-Nvivo computer-assisted software to code transcripts and to help with analyzing interviews and documents. Triangulation was done by using a number of analysis techniques and by constantly comparing findings with extant theory. A descriptive model of the implementation process grounded in the experience of participants and in the contents of the documents emerged from the data. The process was called "Remolding," remolding being the core category that emerged. This main process consisted of two sub-processes identified as "embedding" and "appraising." The investigation was able to provide a detailed account of the appraising process. It identified that employees appraised the compliance program according to three facets: the impact of the program on employees' daily activities, the relationship employees have with the local compliance organization, and the relationship employees have with the corporate ethics identity. The study suggests that a company that is considering implementing a compliance program should consider all three facets. In particular, it suggests that any company interested in designing and implementing a compliance program should pay particular attention to its corporate ethics identity. This is because employees' acceptance of the program is influenced by their comparison of the company's ethics identity to their local ethics identity. Implications of the study suggest that personnel responsible for the development and organizational support of a compliance program should understand the appraisal process by which employees build their relationship with the program. The originality of this study is that it points out emphatically that companies must pay special attention to developing a corporate ethics identity which is coherent, well documented, and well explained.
Abstract:
With the increasing complexity of today's software, the software development process is becoming highly time- and resource-consuming. The increasing number of software configurations, input parameters, usage scenarios, supporting platforms, external dependencies, and versions plays an important role in expanding the costs of maintaining and repairing unforeseeable software faults. To repair software faults, developers spend considerable time identifying the scenarios leading to those faults and root-causing the problems. While software debugging remains largely manual, this is not the case with software testing and verification. The goal of this research is to improve the software development process in general, and the software debugging process in particular, by devising techniques and methods for automated software debugging that leverage the advances in automatic test case generation and replay. In this research, novel algorithms are devised to discover faulty execution paths in programs by utilizing already existing software test cases, which can be either automatically or manually generated. The execution traces, or alternatively, the sequence covers of the failing test cases are extracted. Afterwards, commonalities between these test case sequence covers are extracted, processed, analyzed, and then presented to the developers in the form of subsequences that may be causing the fault. The hypothesis is that code sequences shared among a number of test cases that fail for the same reason resemble the faulty execution path; hence, the search space for the faulty execution path can be narrowed down by using a large number of test cases. To achieve this goal, an efficient algorithm is implemented for finding common subsequences among a set of code sequence covers. Optimization techniques are devised to generate shorter and more logical sequence covers, and to select subsequences with a high likelihood of containing the root cause among the set of all possible common subsequences. A hybrid static/dynamic analysis approach is designed to trace back the common subsequences from the end to the root cause. A debugging tool is created to enable developers to use the approach and integrate it with an existing Integrated Development Environment. The tool is also integrated with the environment's program editors so that developers can benefit from both the tool's suggestions and their source code counterparts. Finally, a comparison between the developed approach and state-of-the-art techniques shows that developers need to inspect only a small number of lines in order to find the root cause of the fault. Furthermore, experimental evaluation shows that the algorithm optimizations lead to better results in terms of both the algorithm running time and the output subsequence length.
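As a rough illustration of the core idea (not the thesis algorithm, which is more efficient and adds sequence-cover optimizations and root-cause trace-back), the sketch below intersects hypothetical execution traces of failing test cases with a naive longest-common-subsequence reduction.

```python
# Naive sketch: intersect the traces of several failing test cases to obtain
# a shared subsequence that is a candidate faulty execution path.
from functools import reduce

def lcs(a, b):
    """Classic dynamic-programming longest common subsequence of two traces."""
    m, n = len(a), len(b)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m):
        for j in range(n):
            dp[i + 1][j + 1] = dp[i][j] + 1 if a[i] == b[j] \
                else max(dp[i][j + 1], dp[i + 1][j])
    out, i, j = [], m, n            # backtrack to recover one LCS
    while i and j:
        if a[i - 1] == b[j - 1]:
            out.append(a[i - 1])
            i, j = i - 1, j - 1
        elif dp[i - 1][j] >= dp[i][j - 1]:
            i -= 1
        else:
            j -= 1
    return out[::-1]

failing_traces = [  # hypothetical sequence covers of three failing test cases
    ["init", "parse", "lookup", "cache_put", "format", "emit"],
    ["init", "lookup", "cache_put", "emit"],
    ["init", "parse", "lookup", "cache_put", "emit"],
]
print(reduce(lcs, failing_traces))  # ['init', 'lookup', 'cache_put', 'emit']
```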
Abstract:
The generation of neurons from neural stem cells requires large-scale changes in gene expression that are controlled to a large extent by proneural transcription factors, such as Ascl1. While recent studies have characterized the differentiation genes activated by proneural factors, less is known about the mechanisms that suppress progenitor cell identity. Here, we show that Ascl1 induces the transcription factor MyT1 while promoting neuronal differentiation. We combined functional studies of MyT1 during neurogenesis with the characterization of its transcriptional program. MyT1 binding is associated with repression of gene transcription in neural progenitor cells. It promotes neuronal differentiation by counteracting the inhibitory activity of Notch signaling at multiple levels, targeting the Notch1 receptor and many of its downstream targets. These include regulators of the neural progenitor program, such as Hes1, Sox2, Id3, and Olig1. Thus, Ascl1 suppresses Notch signaling cell-autonomously via MyT1, coupling neuronal differentiation with repression of the progenitor fate.
Abstract:
A correct understanding of how computers run code is essential in order to effectively learn to program. Lectures have historically been used in programming courses to teach how computers execute code, and students are assessed through traditional evaluation methods, such as exams. Constructivist learning theory objects to students' passiveness during lessons, and to traditional quantitative methods for evaluating a complex cognitive process such as understanding. Constructivism proposes complementary techniques, such as conceptual contraposition and colloquies. We enriched the lectures of a Programming II (CS2) course by combining conceptual contraposition with program memory tracing, and then evaluated students' understanding of programming concepts through colloquies. Results revealed that these techniques, applied to the lecture, are insufficient to help students develop satisfactory mental models of the C++ notional machine, and that colloquies behaved like the most comprehensive traditional evaluations conducted in the course.
A new age of fuel performance code criteria studied through advanced atomistic simulation techniques
Abstract:
A fundamental step in understanding the effects of irradiation on metallic uranium and uranium dioxide ceramic fuels, or any material, must start with the nature of radiation damage at the atomic level. The atomic displacement damage results in a multitude of defects that influence fuel performance. Nuclear reactions are coupled, in that changing one variable will alter others through feedback. In the field of fuel performance modeling, these difficulties are addressed through the use of empirical models rather than models based on first principles. Empirical models can be used as a predictive code through the careful manipulation of input variables, for the limited circumstances that are closely tied to the data used to create the model. While empirical models are efficient and give acceptable results, these results are only applicable within the range of the existing data. This narrow window prevents modeling changes in operating conditions, since the new operating conditions would not be within the calibration data set and would thus invalidate the model. This work is part of a larger effort to correct for this modeling deficiency. Uranium dioxide and metallic uranium fuels are analyzed through a kinetic Monte Carlo (kMC) code as part of an overall effort to generate a stochastic and predictive fuel code. The kMC investigations include sensitivity analysis of point defect concentrations, thermal gradients implemented through a temperature-variation mesh grid, and migration energy values. In this work, fission damage is primarily represented through defects on the oxygen anion sublattice. Results were also compared between the various models. Past studies of kMC point defect migration have not adequately addressed non-standard migration events such as clustering and dissociation of vacancies. As such, the General Utility Lattice Program (GULP) code was utilized to generate new migration energies so that additional non-migration events could be included in the kMC code in the future for more comprehensive studies. Defect energies were calculated to generate barrier heights for single vacancy migration, clustering and dissociation of two vacancies, and vacancy migration under the influence of both an additional oxygen and an additional uranium vacancy.
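For readers unfamiliar with the technique, the following is a generic residence-time (BKL) kinetic Monte Carlo step, not the code used in this work; the attempt frequency and the barrier values for the listed vacancy events are placeholder assumptions.

```python
# Generic kMC step: choose the next defect event with probability proportional
# to its Arrhenius rate and advance the clock by an exponential waiting time.
import math
import random

K_B = 8.617e-5          # Boltzmann constant, eV/K
ATTEMPT_FREQ = 1.0e13   # assumed attempt frequency, 1/s

def rate(barrier_ev, temperature_k):
    """Arrhenius rate of a thermally activated hop."""
    return ATTEMPT_FREQ * math.exp(-barrier_ev / (K_B * temperature_k))

def kmc_step(events, temperature_k):
    """events: list of (label, migration_barrier_eV). Returns (label, dt)."""
    rates = [rate(b, temperature_k) for _, b in events]
    total = sum(rates)
    r = random.random() * total
    chosen, cumulative = events[-1][0], 0.0
    for (label, _), k in zip(events, rates):
        cumulative += k
        if r < cumulative:
            chosen = label
            break
    dt = -math.log(1.0 - random.random()) / total   # exponential waiting time
    return chosen, dt

# Hypothetical oxygen-sublattice events with illustrative barriers (eV).
events = [("O-vacancy hop <100>", 0.55),
          ("O-vacancy hop <110>", 0.70),
          ("vacancy-pair dissociation", 1.10)]
time_s = 0.0
for _ in range(5):
    label, dt = kmc_step(events, temperature_k=1200.0)
    time_s += dt
    print(f"t = {time_s:.3e} s : {label}")
```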
Abstract:
The purpose of this paper is twofold. Firstly, it presents a preliminary and ethnomethodologically-informed analysis of the way in which the growing structure of a particular program's code was ongoingly derived from its earliest stages. This was motivated by an interest in how the detailed structure of a completed program 'emerged from nothing' as a product of the concrete practices of the programmer within the framework afforded by the language. The analysis is broken down into three sections that discuss: the beginnings of the program's structure; the incremental development of structure; and finally the code productions that constitute the structure and the importance of the programmer's stock of knowledge. The discussion attempts to understand and describe the emerging structure of code rather than focus on generating 'requirements' for supporting the production of that structure. Due to time and space constraints, however, only a relatively cursory examination of these features was possible. Secondly, the paper presents some thoughts on the difficulties associated with the analytic (in particular ethnographic) study of code, drawing on general problems as well as issues arising from the difficulties and failings encountered as part of the analysis presented in the first section.
Abstract:
Verifying the resistance of embedded implementations of Java Card bytecode verifiers to attacks is a complex task. Since current methods are not sufficiently effective, only manual test generation is possible. To automate this process, we propose a method called VTG (Vulnerability Test Generation). Based on a formal representation of the functional behaviors of the system under test, a set of intrusion tests is generated. The method draws on mutation testing and model-based testing techniques. First, the model is mutated according to rules we have defined in order to represent potential attacks. Tests are then extracted from the mutant models. Two Event-B models have been proposed. The first represents the structural constraints of Java Card application files; from it, the VTG generates hundreds of abstract tests in a few seconds. The second model consists of 66 events representing 61 Java Card instructions. Mutation is performed in a few seconds, and test extraction produces 223 tests in 45 minutes. Each test checks one precondition, or a combination of preconditions, of an instruction. This method allowed us to test different implementation mechanisms of Java Card bytecode verifiers. Although developed for our case study, the proposed method is generic and has been applied to other case studies.
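A schematic sketch of the mutation idea follows; the actual method operates on Event-B models, whereas the instruction, preconditions, and abstract states below are simplified assumptions for illustration only.

```python
# Mutation-style vulnerability test generation, schematically: each instruction
# is modeled with named preconditions; a mutant drops one precondition, and a
# vulnerability test is an abstract state the mutant accepts but the original
# specification rejects.
from itertools import product

# Simplified functional model: instruction -> list of (name, predicate).
SPEC = {
    "sadd": [
        ("two operands on stack", lambda st: st["stack_depth"] >= 2),
        ("operands are shorts",   lambda st: st["operand_type"] == "short"),
    ],
}

def generate_vulnerability_tests(spec, candidate_states):
    """For each instruction and each dropped precondition, keep the states that
    satisfy every remaining precondition but violate the dropped one."""
    tests = []
    for instr, preconds in spec.items():
        for i, (name, dropped) in enumerate(preconds):
            kept = [pred for j, (_, pred) in enumerate(preconds) if j != i]
            for state in candidate_states:
                if all(pred(state) for pred in kept) and not dropped(state):
                    tests.append((instr, f"violates: {name}", state))
    return tests

# Abstract candidate states enumerated from the model's variable domains.
candidate_states = [{"stack_depth": d, "operand_type": t}
                    for d, t in product([0, 1, 2], ["short", "ref"])]
for test in generate_vulnerability_tests(SPEC, candidate_states):
    print(test)
```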
Abstract:
In the context of computer numerical control (CNC) and computer-aided manufacturing (CAM), the capabilities of programming languages such as symbolic and intuitive programming, program portability, and geometrical portfolio have special importance. They allow users to save time and avoid errors during part programming and permit code re-use. Our updated literature review indicates that the current state of the art presents gaps in parametric programming, program portability, and programming flexibility. In response to this situation, this article presents a compiler implementation for EGCL (Extended G-code Language), a new, enriched CNC programming language which allows the use of descriptive variable names, geometrical functions, and flow-control statements (if-then-else, while). Our compiler produces low-level, generic, elementary ISO-compliant G-code, thus allowing for flexibility in the choice of the executing CNC machine and for portability. Our results show that readable variable names and flow-control statements allow simplified and intuitive part programming and permit re-use of programs. Future work includes allowing the programmer to define their own functions in EGCL, in contrast to the current status of having them as built-in library functions.
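As a toy illustration of the kind of lowering the article describes (the input below is a hypothetical Python stand-in, not actual EGCL syntax), a short parametric routine with named variables and a loop can be expanded into elementary, machine-neutral ISO G-code words such as G00/G01/M30.

```python
# Expand a parametric drilling routine into elementary ISO-compliant G-code.
def drill_row(hole_count, spacing_mm, depth_mm, feed_mm_min):
    """Emit elementary G-code for a row of drilled holes along the X axis."""
    lines = ["G21 ; millimetre units", "G90 ; absolute positioning"]
    x = 0.0
    i = 0
    while i < hole_count:               # flow control resolved during lowering
        lines.append(f"G00 X{x:.3f} Y0.000 Z5.000 ; rapid to hole {i + 1}")
        lines.append(f"G01 Z-{depth_mm:.3f} F{feed_mm_min:.0f} ; drill")
        lines.append("G00 Z5.000 ; retract")
        x += spacing_mm
        i += 1
    lines.append("M30 ; end of program")
    return "\n".join(lines)

print(drill_row(hole_count=3, spacing_mm=12.5, depth_mm=8.0, feed_mm_min=150))
```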
Abstract:
Thesis (Ph.D., Computing) -- Queen's University, 2016.
Abstract:
The availability of a huge amount of source code from code archives and open-source projects opens up the possibility of merging the machine learning, programming languages, and software engineering research fields. This area is often referred to as Big Code, where programming languages are treated in place of natural languages, and different features and patterns of code can be exploited to perform many useful tasks and build supportive tools. Among all the possible applications which can be developed within the area of Big Code, the work presented in this research thesis mainly focuses on two particular tasks: Programming Language Identification (PLI) and Software Defect Prediction (SDP) for source code. Programming language identification is commonly needed in program comprehension and is usually performed directly by developers. However, at large scale, such as in widely used archives (GitHub, Software Heritage), automation of this task is desirable. To accomplish this aim, the problem is analyzed from different points of view (text- and image-based learning approaches) and different models are created, paying particular attention to their scalability. Software defect prediction is a fundamental step in software development for improving quality and assuring the reliability of software products. In the past, defects were searched for by manual inspection or using automatic static and dynamic analyzers. Now, the automation of this task can be tackled using learning approaches that can speed up and improve the related procedures. Here, two models have been built and analyzed to detect some of the most common bugs and errors at different code granularity levels (file and method levels). The data exploited and the models' architectures are analyzed and described in detail. Quantitative and qualitative results are reported for both the PLI and SDP tasks, while differences and similarities with respect to other related works are discussed.
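A minimal text-based PLI sketch (not one of the thesis models) can be built from character n-grams; the training snippets, labels, and the scikit-learn pipeline below are illustrative assumptions.

```python
# Classify a code snippet's programming language from character n-gram counts.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

train_snippets = [
    "def main():\n    print('hello')",            # Python
    "import numpy as np\nx = np.zeros(3)",        # Python
    "public static void main(String[] args) {",   # Java
    "System.out.println(\"hi\");",                # Java
    "#include <stdio.h>\nint main(void) {",       # C
    "printf(\"%d\\n\", x);",                       # C
]
train_labels = ["Python", "Python", "Java", "Java", "C", "C"]

model = make_pipeline(
    CountVectorizer(analyzer="char_wb", ngram_range=(2, 4)),  # character n-grams
    MultinomialNB(),
)
model.fit(train_snippets, train_labels)

print(model.predict(["for i in range(10):\n    total += i"]))  # likely 'Python'
```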
Abstract:
Protocols for the generation of dendritic cells (DCs) that use serum to supplement culture media lead to reactions caused by animal proteins and to disease transmission. Several types of serum-free media (SFM), based on good manufacturing practices (GMP), have recently been used and seem to be a viable option. The aim of this study was to evaluate the results of the differentiation, maturation, and function of DCs from Acute Myeloid Leukemia (AML) patients, generated in SFM and in medium supplemented with autologous serum (AS). DCs were analyzed by phenotype characteristics, viability, and functionality. The results showed the possibility of generating viable DCs under all the conditions tested. In patients, the X-VIVO 15 medium was more efficient than the other media tested in the generation of DCs producing IL-12p70 (p=0.05). Moreover, the presence of AS led to a significant increase in IL-10 production by DCs as compared with the CellGro (p=0.05) and X-VIVO 15 (p=0.05) media, both in patients and donors. We concluded that SFM was efficient in the production of DCs for immunotherapy in AML patients. However, the use of AS appears to interfere with the functional capacity of the generated DCs.
Abstract:
We report the observation of multiple harmonic generation in electric dipole spin resonance in an InAs nanowire double quantum dot. The harmonics display a remarkable detuning dependence: near the interdot charge transition as many as eight harmonics are observed, while at large detunings we only observe the fundamental spin resonance condition. The detuning dependence indicates that the observed harmonics may be due to Landau-Zener transition dynamics at anticrossings in the energy level spectrum.